# Leader-Follower Neural Networks with Local Error Signals Inspired by Complex Collectives

Chenzhong Yin, Mingxi Cheng, Xiongye Xiao, Xinghe Chen, Shahin Nazarian, Andrei Irimia, Paul Bogdan

arXiv:2310.07885v1 | 2023-10-11 | http://arxiv.org/abs/2310.07885v1
###### Abstract
The collective behavior of a network with heterogeneous, resource-limited information processing units (e.g., group of fish, flock of birds, or network of neurons) demonstrates high self-organization and complexity. These emergent properties arise from simple interaction rules where certain individuals can exhibit leadership-like behavior and influence the collective activity of the group. Motivated by the intricacy of these collectives, we propose a neural network (NN) architecture inspired by the rules observed in nature's collective ensembles. This NN structure contains _workers_ that encompass one or more information processing units (e.g., neurons, filters, layers, or blocks of layers). Workers are either leaders or followers, and we train a leader-follower neural network (LFNN) by leveraging local error signals and optionally incorporating backpropagation (BP) and global loss. We investigate worker behavior and evaluate LFNNs through extensive experimentation. Our LFNNs trained with local error signals achieve significantly lower error rates than previous BP-free algorithms on MNIST and CIFAR-10 and even surpass BP-enabled baselines. In the case of ImageNet, our LFNN-\(\ell\) demonstrates superior scalability and outperforms previous BP-free algorithms by a significant margin.
## 1 Introduction
Artificial neural networks (ANNs) typically employ global error signals for learning [1]. While ANNs draw inspiration from biological neural networks (BNNs), they are not exact replicas of their biological counterparts. ANNs consist of artificial neurons organized in a structured layered architecture [2]. Learning in such architectures commonly involves gradient descent algorithms [3] combined with backpropagation (BP) [4]. Conversely, BNNs exhibit more intricate self-organizing connections, relying on specific local connectivity [5] to enable emergent learning and generalization capabilities even with limited and noisy input data. Simplistically, we can conceptualize a group of
neurons as a collection of _workers_ wherein each worker receives partial information and generates an output, transmitting it to others so as to achieve a specific collective objective. This behavior can be observed in various biological systems, such as decision-making among a group of individuals [6], flocking behavior in birds to avoid predators and maintain flock health [7], or collective behavior in cells fighting infections or sustaining biological functions [8].
The study of collective behavior in networks of heterogeneous agents, ranging from neurons and cells to animals, has been a subject of research for several decades. In physical systems, interactions among numerous particles give rise to emergent and collective phenomena, such as stable magnetic orientations [9]. A system of highly interconnected McCulloch-Pitts neurons [10] has collective computational properties [9]. Networks of neurons with graded response (or sigmoid input-output relation) exhibit collective computational properties similar to those of networks with two-state neurons [11]. Recent studies focus on exploring collective behaviors in biological networks. This includes the examination of large sensory neuronal networks [12], the analysis of large-scale small-world neuronal networks [13], the investigation of heterogeneous NNs [14], and the study of hippocampal networks [15]. These studies aim to uncover the collective dynamics and computational abilities exhibited by such biological networks.
In biological networks such as the human brain, synaptic weight updates can occur through local learning, independent of the activities of neurons in other brain regions [16; 17]. Partly for this reason, local learning has been identified as effective means to reduce memory usage during training and to facilitate parallelism in deep learning architectures, thereby enabling faster training [18; 19].
Acknowledging the extensive research on collective behavior and local learning in biological networks, and the computational disparities that remain between ANNs and BNNs, we draw inspiration from complex collective systems and propose a NN architecture that captures some of the more complex capabilities observed in biological counterparts. We propose to divide a NN into layers of elementary _leader_ workers and _follower_ workers, and to follow the characteristics of collective motion to select the _leadership_. As in the flock of birds shown in Figure 1, leaders are informed and control the motion of the whole flock. In our _leader-follower neural network_ (LFNN) architecture, the leaders and followers differ in their access to information and learn through distinct error signals. Hence, LFNNs offer a biologically-plausible alternative to BP and facilitate training using local error signals.
We evaluated our LFNN and its BP-free version trained with local loss (LFNN-\(\ell\)) on MNIST, CIFAR-10, and ImageNet datasets. Our LFNN and LFNN-\(\ell\) outperformed other biologically plausible BP-free algorithms and achieved comparable results to BP-enabled baselines. Notably, our algorithm demonstrated superior performance on ImageNet compared to all other BP-free baselines. This study, which introduces complex collectives to deep learning, provides valuable insights into biologically plausible NN research and opens up avenues for future work.
**Related work.** Efforts have been made to bridge the gaps in computational efficiency that continue to exist between ANNs and BNNs [20]. One popular approach is the replacement of global loss with local error signals [21]. Researchers have proposed to remove BP to address backward locking problems [22], mimic the local connection properties of neuronal networks [23] and incorporate local plasticity rules to enhance ANN's biological plausibility [24]. A research topic closely related to our work is supervised deep learning with local loss. It has been noticed that training NNs with BP is biologically implausible because BNNs in the human brain do not transmit error signals at a global scale [25; 26; 27]. Several studies have proposed training NNs with local error signals, such as layer-wise learning [21; 28], block-wise learning [23; 29], gated linear network family [30], etc.
Figure 1: **a-b.** A flock of birds where leaders are informed and lead the flock. **c.** An abstracted network from the flock. **d.** A leader-follower neural network architecture.
Mostafa et al. generate local error signals in each NN layer using fixed, random auxiliary classifiers [21]: each hidden layer is trained with local errors produced by a random, fixed classifier. This is similar to feedback alignment training, where random fixed weights are used to back-propagate the error layer by layer [31]. In [29], the authors split a NN into a stack of gradient-isolated modules, and each module is trained to maximally preserve the information of its inputs. More recently, Ren et al. [32] proposed a local greedy forward gradient algorithm that enables forward gradient learning in supervised deep learning tasks. Their biologically plausible BP-free algorithm significantly outperforms the forward gradient and feedback alignment families of algorithms. Our LFNN-\(\ell\) shares some similarities with the above work in the sense that it is trained with loss signals generated locally, without BP. In contradistinction to the state-of-the-art, we do not require extra memory blocks to generate error signals; hence, the number of trainable parameters can be kept identical to that of NNs without an LF hierarchy.
## 2 LFNNs Inspired by Complex Collectives
Collective motion refers to ordered movement in systems consisting of self-propelled particles, such as flocking [7] or swarming behavior [33]. The main feature of such behavior is that an individual particle is dominated by the influence of others and thus behaves entirely differently from how it might behave on its own [34]. A classic collective motion model, the Vicsek model [35], describes the trajectory of an individual using its velocity and location, and uses stochastic differential/ difference equations to update this agent's location and velocity as a function of its interaction strength with its neighbors. Inspired by collective motion seen in nature, we explore whether these minimal mathematical relations can be exploited in deep learning.
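To make the update rule concrete, the following is a minimal NumPy sketch of one Vicsek step; the specific parameter values (speed `v0`, interaction radius `r`, noise amplitude `eta`, box size `L`) are illustrative choices, not values taken from the original model.

```python
import numpy as np

def vicsek_step(pos, theta, v0=0.03, r=1.0, eta=0.1, L=10.0, rng=None):
    """One 2-D Vicsek update: each particle adopts the mean heading of its
    neighbors within radius r, perturbed by uniform angular noise."""
    rng = rng or np.random.default_rng()
    # pairwise displacements with periodic (wrap-around) boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) <= r ** 2          # boolean (N, N), includes self
    # circular mean of neighbor headings
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1) / neighbors.sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1) / neighbors.sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    return pos, theta
```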
**LF hierarchy in fully connected layers.** In a fully-connected (FC) layer containing multiple neurons, we define _workers_ as structures containing one or more neurons grouped together. In contradistinction to classic NNs where the neuron is the basic computational unit, LFNN workers serve as basic units. By adapting the Vicsek model terms to deep learning, a worker's behavior is dominated by that of neighbors in the same layer. In addition, we consider _leadership_ relations inside the group. According to collective motion, "leadership" involves "the initiation of new directions of locomotion by one or more individuals, which are then readily followed by other group members" [36]. Thus, in FC layers, one or more workers are selected as leaders, and the rest are "followers" as shown in Figure 2b.
**LF hierarchy extended in convolutional layers.** Given a convolutional layer with multiple filters (or kernels), workers can be defined as one or more filters grouped together to form _filter-wise workers_. For a more coarsely-grained formulation, given a NN with multiple convolutional layers, a set of convolutional layers can be grouped naturally as a block (such as in VGG [37], ResNet [38], Inception [39] architectures). Our definition of the worker can be easily adapted to encompass _block-wise workers_ to reflect this architecture where a block of convolutional layers work together as a single, block-wise worker. Similarly, if a block contains one layer, it becomes a _layer-wise worker_.
More formally, we consider a NN with \(\mathcal{M}\) hidden layers, and a hidden layer contains \(\mathcal{N}\) workers. A worker can contain one or more individual working components, which can be neurons, filters in convolutional layers, or blocks of NN layers, and each individual working component is parametrized by a set of trainable parameters \(\mathcal{W}\). During training, at each time step \(t\), leader workers \(\mathcal{N}_{\delta}\) are dynamically selected, and the remaining workers are labeled as followers (denoted as \(\mathcal{N}_{\bar{\delta}}\)). Following the same notation, leader and follower workers are parameterized by matrices \(\widetilde{\mathcal{W}}_{\delta}\) and \(\widetilde{\mathcal{W}}_{\bar{\delta}}\), respectively. The output of leader and follower workers in a hidden layer reads \(f(\vec{x};[\widetilde{\mathcal{W}}_{\delta},\widetilde{\mathcal{W}}_{\bar{\delta}}])\), where \(\vec{x}\) is the input to the current hidden layer and \(f(\cdot)\) is a mapping function.

Figure 2: **Weight updates of LFNN.** **a.** BP in classic deep neural network (DNN) training. Global prediction loss is back-propagated through layers. **b.** An LF hierarchy in a DNN. Within a layer, neurons are grouped as (leader and follower) workers. **c.** Weight update of follower workers. **d.** Weight update of leader workers with BP. **e.** BP-free weight update of leader workers.
**Error signals in LFNN.** In human groups, one key difference between leaders and followers is that leaders are _informed_ individuals who can guide the whole group, while followers are uninformed and receive instructions that differ from treatment to treatment [40]. Adapting this concept to deep learning, LFNN leaders are informed in the sense that they receive error signals generated from the global or local prediction loss functions, whereas followers do not. Specifically, assume that we train an LFNN with BP and a global prediction loss function \(\mathcal{L}_{g}\). Only leaders \(\mathcal{N}_{\delta}\) and output neurons receive gradient information as error signals to update their weights. This is similar to classic NN training, so we denote these pieces of information as _global error signals_. In addition, a local prediction error \(\mathcal{L}_{l}^{\delta}\) is optionally provided to leaders to encourage them to make meaningful predictions independently.
By contrast to leaders, followers \(\mathcal{N}_{\bar{\delta}}\) do not receive error signals generated in BP. Instead, they align with their neighboring leaders. Inspired by collective biological systems, we propose an "alignment" algorithm for followers and demonstrate its application in an FC layer as follows: Consider an FC layer where the input to a worker is represented by \(\vec{x}\), and the worker is parameterized by \(\widetilde{\mathcal{W}}\) (i.e., the parameters of all neurons in this worker). The output of a worker is given by \(\vec{y}=f(\widetilde{\mathcal{W}}\cdot\vec{x})\). In this context, we denote the outputs of a leader and a follower as \(\vec{y}_{\delta}\) and \(\vec{y}_{\bar{\delta}}\), respectively. To bring the followers closer to the leaders, a local error signal is applied to the followers, denoted as \(\mathcal{L}_{l}^{\bar{\delta}}=\mathcal{D}(\vec{y}_{\delta},\vec{y}_{\bar{\delta}})\), where \(\mathcal{D}(a,b)\)\({}^{1}\) measures the distance between \(a\) and \(b\). In summary, the loss function of our LFNN is defined as follows:
Footnote 1: In our experimentation, we utilize mean squared error loss.
\[\mathcal{L}=\mathcal{L}_{g}+\lambda_{1}\mathcal{L}_{l}^{\delta}+\lambda_{2}\mathcal{L}_{l}^{\bar{\delta}}, \tag{1}\]
where the first term of the loss function applies to the output neurons and leader workers. The second and third terms apply to the leader and follower workers, respectively, as illustrated in Figure 2c and d. The hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) are used to balance the contributions of the global and local loss components. It is important to note that the local losses \(\mathcal{L}_{l}^{\delta}\) and \(\mathcal{L}_{l}^{\bar{\delta}}\) are specific to each layer, filter, or block and do not propagate gradients through all hidden layers.
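As a rough illustration, Eq. 1 can be assembled as below in PyTorch; the tensor names (`leader_logits`, `follower_out`, `best_leader_out`) are hypothetical placeholders for quantities an LFNN layer would expose, and the choice of cross-entropy for the prediction losses mirrors a generic classification setting rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def lfnn_loss(global_logits, leader_logits, follower_out, best_leader_out,
              target, lam1=1.0, lam2=1.0):
    """Sketch of Eq. (1): global loss plus local leader and follower losses."""
    L_g = F.cross_entropy(global_logits, target)          # global error signal
    # each leader worker makes its own prediction -> local leader loss
    L_leader = sum(F.cross_entropy(z, target) for z in leader_logits) / len(leader_logits)
    # followers align with the best leader; detach() keeps this term local,
    # so no gradient flows back into the leader through the alignment loss
    L_follower = F.mse_loss(follower_out, best_leader_out.detach())
    return L_g + lam1 * L_leader + lam2 * L_follower
```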
**BP-free version (LFNN-\(\ell\)).** To address the limitations of BP such as backward locking, we propose a BP-free version of LFNN. The approach is as follows: In Eq. 1, it can be observed that the weight updates for followers are already local and do not propagate through layers. Based on this observation, we modify LFNN to train in a BP-free manner by removing the BP for global prediction loss. Instead, we calculate leader-specific local prediction loss (\(\mathcal{L}_{l}^{\delta}\)) for all leaders. This modification means that the global prediction loss calculated at the output layer, denoted as \(\mathcal{L}_{g}^{o}\) (where \(o\) stands for output), is only used to update the weights of the output layer. In other words, this prediction loss serves as a local loss for the weight update of the output layer only. The total loss function of the BP-free LFNN-\(\ell\) is given as follows:
\[\mathcal{L}=\mathcal{L}_{g}^{o}+\mathcal{L}_{l}^{\delta}+\lambda\mathcal{L}_{l}^{\bar{\delta}}. \tag{2}\]
By eliminating the backpropagation of the global prediction loss to hidden layers, the weight update of leader workers in LFNN-\(\ell\) is solely driven by the local prediction loss, as depicted in Figure 2e. It is important to note that the weight update of follower workers remains unchanged regardless of whether backpropagation is employed, as shown in Figure 2c.
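A sketch of how this removes backward locking is given below: detaching each layer's input stops gradients at the layer boundary, so every hidden layer (and the output layer) is trained by its own local loss only. The `worker_predictions()` helper is a hypothetical accessor, and `F` is `torch.nn.functional` as in the previous sketch.

```python
def lfnn_l_step(hidden_layers, output_layer, x, target, optimizer):
    """One BP-free training step (sketch of Eq. 2): all losses are local."""
    total = 0.0
    for layer in hidden_layers:
        x = layer(x.detach())            # detach: no gradient crosses layers
        leader_logits, follower_out, best_leader_out = layer.worker_predictions(x)
        local = (sum(F.cross_entropy(z, target) for z in leader_logits)
                 / len(leader_logits)                          # leader loss
                 + F.mse_loss(follower_out, best_leader_out.detach()))
        total = total + local
    out = output_layer(x.detach())
    total = total + F.cross_entropy(out, target)   # L_g^o: output layer only
    total.backward()                               # gradients stay layer-local
    optimizer.step()
    optimizer.zero_grad()
    return out
```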
_Dynamic leadership selection._ In our LF hierarchy, the selection of leadership is dynamic and occurs in each training epoch based on the local prediction loss. In a layer with \(\mathcal{N}\) workers, each worker can contain one or more neurons, enabling it to handle binary or multi-class classification or regression problems on a case-by-case basis. This unique characteristic allows a worker, even if it is located in hidden layers, to make predictions \(\vec{y}\). This represents a significant design distinction between our LFNN and a traditional neural network. Consequently, all workers in a hidden layer receive their respective prediction error signal, denoted as \(\mathcal{L}_{l}^{\delta}(\vec{y},\hat{y})\). Here, \(\mathcal{L}_{l}(\cdot,\cdot)\) represents the prediction error function, the superscript \(\delta\) indicates that it is calculated over the leaders, \(\hat{y}\) denotes the true label, and the top \(\delta\) (\(0\leq\delta\leq 100\%\)) workers with the lowest prediction error are selected as leaders.
_Implementation details._ To enable workers in hidden layers to generate valid predictions, we apply the same activation function used in the output layer to each worker. For instance, in the case of a neural network designed for \(K\)-class classification, we typically include \(K\) output neurons in the output layer and apply the softmax function. In our LFNN, each worker is composed of \(K\) neurons, and the softmax function is applied accordingly. In order to align the followers with the leaders, we adopt a simplified approach by selecting the best-performing leader as the reference for computing \(\mathcal{L}_{l}^{\bar{\delta}}\). While other strategies such as random selection from the \(\delta\) leaders were also tested, they did not yield satisfactory performance. Therefore, for the sake of simplicity and better performance, we choose the best-performing leader as the reference for the followers' loss computation.
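Under these conventions, the per-epoch leader selection reduces to a top-k over per-worker losses, as in this sketch (the tensor `worker_losses`, holding one local prediction loss per worker, is an assumed input):

```python
import torch

def select_leaders(worker_losses, delta=0.3):
    """Pick the delta fraction of workers with the LOWEST local prediction
    loss as leaders; the first returned index is the best-performing leader,
    which serves as the followers' alignment reference."""
    n_leaders = max(1, int(delta * worker_losses.numel()))
    # largest=False returns the smallest losses, sorted in ascending order
    leader_idx = torch.topk(worker_losses, n_leaders, largest=False).indices
    return leader_idx, leader_idx[0]
```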
_Practical benefits and overheads._ In contrast to conventional neural networks trained with BP and a global loss, our LFNN-\(\ell\) computes worker-wise loss and gradients locally. This approach effectively eliminates backward locking issues, albeit with a slight overhead in local loss calculation. One significant advantage of the BP-free version is that local error signals can be computed in parallel, enabling potential speed-up in the weight update process through parallel implementation.
## 3 Experiments
In Section 3.1, we focus on studying the leadership size, conducting an ablation study of loss terms in Eq. 1, and analyzing the worker's activity. To facilitate demonstration and visualization, we utilize DNNs in this subsection. In Section 3.2, we present our main experimental results, where we evaluate LFNNs and LFNN-\(\ell\)s using CNNs on three datasets (i.e., MNIST, CIFAR-10, and ImageNet) and compare with a set of baseline algorithms.
### Leader-Follower Neural Networks (LFNNs)
**Experimental setup.** To assess the performance of LFNN for online classification, we conduct experiments on the pixel-permuted MNIST dataset [41]. Following the approach in [30], we construct a one-vs-all classifier using a simple neural network architecture consisting of one hidden FC layer. In our experiments, we vary the network architecture to examine the relationship between network performance and leadership size. We consider network configurations with 32, 64, 128, 256, and 512 workers, where each worker corresponds to a single neuron. We systematically vary the percentage of workers assigned as leaders from 10% to 100%. For each network configuration, we utilize the sigmoid activation function for each worker and train the model using the Adam optimizer with a learning rate of 5e-3. The objective is to investigate how different leadership sizes impact the classification performance in the online setting. In our experiments, we employ the binary cross-entropy loss for both the global prediction loss (\(\mathcal{L}_{g}\)) and the local prediction loss for leaders (\(\mathcal{L}_{l}^{\delta}\)). For the local error signal of followers (\(\mathcal{L}_{l}^{\bar{\delta}}\)), we use the mean squared error loss. The hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\) are both set to 1 in this section to balance the global and local loss terms. In the ablation study of loss terms and the worker activity study, we focus on a 32-worker LFNN with 30% leadership.
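For reference, the setup described above corresponds roughly to the following configuration; this is a reconstruction from the stated hyper-parameters (single-neuron sigmoid workers, Adam at 5e-3, BCE losses), not released code.

```python
import torch
import torch.nn as nn

n_workers = 32                      # varied over {32, 64, 128, 256, 512}
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, n_workers),  # one hidden FC layer
    nn.Sigmoid(),                   # each hidden unit acts as a single-neuron worker
    nn.Linear(n_workers, 1),        # one-vs-all output neuron
    nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
bce = nn.BCELoss()                  # used for L_g and the local leader loss
```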
Figure 3: **a.** Network performance results when varying leadership size from 10% to 100%. **b.** Ablation study results for four different loss functions.
**Leadership size and performance.** In a study on the collective motion of inanimate objects, such as radio-controlled boats, it was observed that to effectively control the direction of the entire group, only a small percentage (5%-10%) of the boats needed to act as leaders [42]. This finding aligns with similar studies conducted on living collectives, such as fish schools and bird flocks, where a small subset of leaders were found to have a substantial impact on the behavior of the larger group. In our experiment, we investigate the relationship between network performance and the size of the leadership group. The results shown in Figure 3a indicate that our LFNN achieves high performance on the permuted MNIST classification task after just one pass of training data. When using a higher percentage of leadership, such as 90% or 100%, the LFNN achieves comparable performance to a DNN trained with BP. Even with a lower percentage of leadership, ranging from 10% to 30%, the LFNN still achieves decent performance on this task. It is worth noting that for more challenging datasets like ImageNet, higher percentages of leadership are preferred. These findings highlight both the similarities and differences between natural collectives and LFNNs in the field of deep learning.
**Ablation study of loss terms.** In our investigation of LFNN training using Eq. 1, we aim to evaluate the effectiveness of the local loss terms and examine the following aspects in this section: (a) whether global loss alone with BP is adequate for training LFNNs, and (b) how the inclusion of local losses contributes to training and network performance in terms of accuracy. To address these questions, we consider four variations of the loss function, as depicted in Figure 4: (i) \(\mathcal{L}_{1}=\mathcal{L}_{g}+\mathcal{L}_{l}^{\delta}+\mathcal{L}_{l}^{\bar{\delta}}\): This variant includes the global loss as well as all local losses. (ii) \(\mathcal{L}_{2}=\mathcal{L}_{g}+\mathcal{L}_{l}^{\delta}\): Here, the global loss is combined with the local leader loss. (iii) \(\mathcal{L}_{3}=\mathcal{L}_{g}+\mathcal{L}_{l}^{\bar{\delta}}\): This variant utilizes the global loss along with the local follower loss. (iv) \(\mathcal{L}_{4}=\mathcal{L}_{g}\): In this case, only the global loss is employed.
After training LFNNs with the four different loss functions mentioned earlier, we observe the one-pass results in Figure 3b. It is evident that using only the global prediction loss (\(\mathcal{L}_{4}\)) with backpropagation leads to the worst performance. The network's accuracy does not improve significantly when adding the local follower loss (\(\mathcal{L}_{3}\)) because the leader workers, which the followers rely on for weight updates, do not perform well. As a result, the overall network accuracy remains low. However, when we incorporate the local leader loss (\(\mathcal{L}_{2}\)), we notice a significant improvement in the network's performance after 100 training steps. The local leader loss plays a crucial role in this improvement. Despite updating only 30% of the workers at each step, it is sufficient to guide the entire network towards effective learning. Moreover, when we further include the local follower loss (\(\mathcal{L}_{1}\)) to update the weights of followers based on strong leaders, the overall network performance improves even further. As a result, the network achieves high accuracy with just one pass of training data. These results highlight the importance of incorporating both local leader and local follower losses in LFNN training. The presence of strong leaders positively influences the performance of followers, leading to improved network accuracy.
**Worker activity in an LFNN.** Collective motion in a group of particles is easily identifiable through visualization. Since our LFNN's weight update rules are inspired by a collective motion model, we visualize the worker activities and explore the existence of collective motion patterns in the network during training. Following our weight update rule, we select 30% of the leaders from the 32 workers in each training step and update their weight dynamics based on global and local prediction loss. Consequently, the leader workers receive individual error signals and update their activity accordingly. Conversely, the remaining 70% of workers act as followers and update their weight dynamics by mimicking the best-performing leader through local error signals. In essence, all followers align themselves with a single leader, resulting in similar and patterned activity in each training step.
Figure 4: **Loss variation demonstration.** **a.** Global prediction loss and both local losses, \(\mathcal{L}_{1}\). **b.** Without local follower loss, \(\mathcal{L}_{2}\). **c.** Without local leader loss, \(\mathcal{L}_{3}\). **d.** Global prediction loss alone, \(\mathcal{L}_{4}\).
To visualize the activities of all workers, we utilize the neuron output \(\vec{y}\) before and after the weight update at each time step, and the difference between them represents the worker activity. The results in Figure 5a demonstrate that in each time step, the follower workers (represented by blue lines) move in unison to align themselves with the leaders. During the initial training period (steps 0 to 1000), both leaders and followers exhibit significant movement and rapid learning, resulting in relatively larger step sizes. As the learning process stabilizes and approaches saturation, the workers' movement becomes less pronounced as the weights undergo less drastic changes in the well-learned network. Overall, we observe a patterned movement in worker activity in LFNNs, akin to the collective motion observed in the classic Vicsek model [35].
**Leadership development.** In order to investigate how leadership is developed during training, we conduct a study using batch training, where leaders are re-selected in each batch. To provide a clearer demonstration, we focus solely on local losses in this study, thereby eliminating the effect of the global error signal and BP. We utilize an LFNN-\(\ell\) with two hidden FC layers, each containing 32 workers. The leadership rate is fixed at 20%, resulting in approximately 6 leaders being selected in each layer at every training step. The neural network is trained for 300 steps in each epoch, and Figure 6 visualizes the leadership dynamics during the first 5 epochs; each dot's color and size indicate the number of times a worker is selected as a leader. In the initial epoch (Epoch 0), we observe that several workers in each layer have already emerged as leaders, being selected most of the time. As training progresses, exactly six workers in each layer are consistently developed as leaders, while the remaining workers are no longer selected. By the fifth epoch, the leadership structure becomes nearly fixed, remaining relatively unchanged throughout the training process.
From the results obtained, leadership in LFNN-\(\ell\) is developed in the early stages of training and becomes fixed thereafter. The performance of the entire network relies on these leaders. Although this aspect is not the primary focus of the current work, one promising future direction involves the development of an intelligent dynamic leader selection algorithm. Additionally, we also investigated the performance of the best-performing leaders in each layer and compared the performance between leaders and followers in the supplementary materials.
### BP-free Leader-Follower Neural Networks (LFNN-\(\ell\)s)
In this section, we conduct a comparative analysis between LFNN-\(\ell\)s and several alternative approaches, with the option of engaging BP. We evaluate their performance on the MNIST, CIFAR-10, and ImageNet datasets to showcase the capabilities of LFNN-\(\ell\)s and further study the impact of leadership size. All LFNN-\(\ell\)s and LFNNs in this section consist of FC and convolutional layers. LFNNs are trained using a combination of BP, global loss, and local losses, while BP-free LFNN-\(\ell\)s are trained solely with local losses.
Figure 5: **a.** Worker activity visualization in an LFNN. At each time step, the followers (blue lines) align themselves with leaders (red lines). **b.** Patterned collective motion produced by the classic Vicsek model [35].
Figure 6: Leadership in workers during training. The color and size of the dots represent the number of times a worker is selected as a leader. A worker can be selected as a leader up to 300 times in each epoch.
**Datasets.** Both MNIST and CIFAR-10 are obtained from the TensorFlow datasets [45]. MNIST [41] contains 70,000 images, each of size \(28\times 28\). CIFAR-10 [46] consists of 60,000 images, each of size \(32\times 32\). Tiny ImageNet [47] consists of \(100,000\) images distributed across 200 classes, with 500 images per class for training, and an additional set of \(10,000\) images for testing. All images in the dataset are resized to \(64\times 64\) pixels. The ImageNet subset (1pct) [48, 49] is a subset of ImageNet [50]. It shares the same validation set as ImageNet and includes a total of 12,811 images sampled from the ImageNet dataset. These images are resized to \(224\times 224\) pixels for training.
**MNIST and CIFAR-10.** We compare our LFNNs and LFNN-\(\ell\)s with BP, local greedy backprop (LG-BP) [43], feedback alignment (FA) [31], weight-perturbed forward gradient (FG-W) [44], activity-perturbed forward gradient (FG-A) [32], and local greedy forward gradient weight / activity-perturbed (LG-FG-W and LG-FG-A) [32] on the MNIST, CIFAR-10, and ImageNet datasets. To ensure a fair comparison, we make slight modifications to our model architectures to match the number of parameters of the models presented in [32].
Table 1 presents the image classification results for the MNIST and CIFAR-10 datasets using various BP and BP-free algorithms. The table displays the test and train errors as percentages for each dataset and network size. When comparing to BP-enabled algorithms, LFNN shows similar performance to standard BP algorithms and outperforms the LG-BP algorithm on both the MNIST and CIFAR-10 datasets. In the case of BP-free algorithms, LFNN-\(\ell\) achieves lower test errors for both MNIST and CIFAR-10 datasets. Specifically, in MNIST, our LFNN-\(\ell\) achieves test error rates of 2.04% and 1.20%, whereas the best-performing baseline models achieve 2.82% and 2.55%, respectively. For the CIFAR-10 dataset, LFNN-\(\ell\) outperforms all other BP-free algorithms with a test error rate of 20.85%, representing a significant improvement compared to the best-performing LG-FG-A algorithm, which achieves a test error rate of 30.68%.
In previous sections, we observed that both larger and smaller leadership sizes deliver good performance on simple tasks. This observation holds true for MNIST and CIFAR-10 datasets as shown in Table 2. In MNIST, LFNN and LFNN-\(\ell\) with different leadership sizes achieve similar test error rates.
| | Method | MNIST | MNIST | CIFAR-10 | ImageNet |
|---|---|---|---|---|---|
| **BP-enabled** | BP | 2.01 / 0.00 | 1.88 / 0.00 | 20.90 / 0.00 | 35.24 / 19.14 |
| | LG-BP [43] | 2.43 / 0.00 | 2.81 / 0.00 | 33.84 / 0.05 | 54.37 / 39.66 |
| | **LFNN** | 1.18 / 1.15 | 2.14 / 1.49 | 19.21 / 3.57 | 57.75 / 20.94 |
| **BP-free** | FA [31] | 2.82 / 0.00 | 2.90 / 0.00 | 39.94 / 28.44 | 94.55 / 94.13 |
| | FG-W [44] | 9.25 / 8.93 | 8.56 / 8.64 | 55.95 / 54.28 | 97.71 / 97.58 |
| | FG-A [32] | 3.24 / 1.53 | 3.76 / 1.75 | 59.72 / 41.27 | 98.83 / 98.80 |
| | LG-FG-W [32] | 9.25 / 8.93 | 5.66 / 4.59 | 52.70 / 51.71 | 97.39 / 97.29 |
| | LG-FG-A [32] | 3.24 / 1.53 | 2.55 / 0.00 | 30.68 / 19.39 | 58.37 / 47.86 |
| | **LFNN-\(\ell\)** | **1.49** / 0.04 | **1.20** / 1.15 | **20.85** / 4.69 | **55.88** / 36.13 |
| | Number of Parameters | 272K~275K | 429K~438K | 876K~919K | 17.3M~36.8M |

Table 1: Comparison between the proposed model and a set of BP-enabled and BP-free algorithms on MNIST, CIFAR-10, and ImageNet. Entries are Test / Train error rates (%, \(\downarrow\)); the best test errors (%) are highlighted in **bold**. Leadership size is set to \(70\%\) for all the LFNNs and LFNN-\(\ell\)s.
Table 2: Test error rates of LFNNs and LFNN-\(\ell\)s with different leadership percentages on MNIST and CIFAR-10.
Further details on the relationship between leadership size and model performance will be discussed in the next subsection.
**Scaling up to ImageNet.** Traditional BP-free algorithms have shown limited scalability when applied to larger datasets such as ImageNet [20]. To assess the scalability of LFNN and LFNN-\(\ell\), we conduct experiments on the ImageNet subset and Tiny ImageNet\({}^{2}\). The results in Table 1 compare the test / train error rates of LFNN and LFNN-\(\ell\) with other baseline models using BP and BP-free algorithms on the ImageNet dataset. In the ImageNet experiments, LFNN achieves competitive test errors compared to BP and LG-BP, achieving a test error rate of 57.75% compared to 35.24% and 54.37%, respectively. Notably, when compared to BP-free algorithms, LFNN-\(\ell\) outperforms all baseline models and achieves a test error rate 2.49% lower than the best-performing LG-FG-A. Furthermore, LFNN-\(\ell\) demonstrates an improvement over LFNN on ImageNet. These results suggest that the use of local loss in LFNN-\(\ell\) yields better performance compared to global loss, particularly when dealing with challenging tasks such as ImageNet.
Footnote 2: More ImageNet results can be found in the supplementary materials.
To further investigate the generalizability of LFNN and LFNN-\(\ell\), we conduct experiments on ImageNet variants and increase the model size by doubling the number of parameters to approximately 37M. Additionally, we explore the impact of leadership size on model performance. The results of the error rates for Tiny ImageNet and ImageNet subset with varying leadership percentages are presented in Table 3. For Tiny ImageNet, we observe that using a leadership percentage of 90% yields the lowest test error rates, with LFNN achieving 35.21% and LFNN-\(\ell\) achieving 36.06%. These results are surprisingly comparable to other BP-enabled deep learning models tested on Tiny ImageNet, such as UPANets (test error rate \(=32.33\%\)) [51], PreActRest (test error rate \(=36.52\%\)) [52], DLME (test error rate \(=55.10\%\)) [53], and MMA (test error rate \(=35.59\%\)) [54].
In the ImageNet subset experiments, we follow the methodology of [48] and leverage the ResNet-50 architecture as the base encoder, combining it with LFNN and LFNN-\(\ell\). LFNN and LFNN-\(\ell\) with 90% leadership achieve the lowest test error rates of 57.37% and 53.82%, respectively. These results surpass all baseline models in Table 1 and are even comparable to the test error rate of the BP-enabled algorithm reported in [48] (50.6%). This observation further demonstrates the effectiveness of our proposed algorithm in transfer learning scenarios. It is worth mentioning that we observed even better results than those in Table 3 when further increasing the number of parameters. From Figure 3a and Table 2, we recall that for simple tasks like MNIST or CIFAR-10 classification, small leadership sizes can achieve decent results. In Table 3, we observe a clearer trend that for difficult datasets like ImageNet, a higher leadership percentage is required to achieve better results. This presents an interesting avenue for future exploration, particularly in understanding the relationship between network / leadership size and dataset complexity.
## 4 Conclusion
In this work, we have presented a novel learning algorithm, LFNN, inspired by collective behavior observed in nature. By introducing a leader-follower hierarchy within neural networks, we have demonstrated its effectiveness across various network architectures. Our comprehensive study of LFNN aligns with observations and theoretical foundations in both the biological and deep learning domains. In addition, we have proposed LFNN-\(\ell\), a BP-free variant that utilizes local error signals instead of traditional backpropagation. We have shown that LFNN-\(\ell\), trained without a global loss, achieves superior performance compared to a set of BP-free algorithms. Through extensive experiments on MNIST, CIFAR-10, and ImageNet datasets, we have validated the efficacy of LFNN with and without BP. LFNN-\(\ell\) not only outperforms other state-of-the-art BP-free algorithms on all tested datasets but also achieves competitive results when compared to BP-enabled baselines in certain cases. Our work is unique as it is the first to introduce collective motion-inspired models for deep learning architectures, opening up new directions for the development of local error signals and alternatives to BP. The proposed algorithm is straightforward yet highly effective, holding potential for practical applications across various domains. We believe that this early study provides valuable insights into fundamental challenges in deep learning, including neural network architecture design and the development of biologically plausible decentralized learning algorithms.

| Dataset | Model | Err. | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tiny ImageNet | LFNN-\(\ell\) | Test | 73.98 | 63.09 | 54.24 | 49.63 | 44.87 | 40.96 | 37.17 | 38.05 | **36.06** | 39.56 |
| | | Train | 71.47 | 57.29 | 43.69 | 38.57 | 30.53 | 22.04 | 19.50 | 19.38 | 16.00 | 32.33 |
| | LFNN | Test | 39.85 | 40.12 | 39.34 | 39.18 | 39.33 | 39.41 | 39.42 | 38.63 | **35.21** | 39.56 |
| | | Train | 36.50 | 35.76 | 32.71 | 32.16 | 32.02 | 32.36 | 32.70 | 31.91 | 32.59 | 32.33 |
| ImageNet Subset | LFNN-\(\ell\) | Test | 90.57 | 84.83 | 78.75 | 73.65 | 68.61 | 64.25 | 59.53 | 56.54 | **53.82** | 54.44 |
| | | Train | 68.96 | 51.89 | 39.49 | 27.78 | 22.68 | 13.37 | 9.23 | 5.41 | 5.58 | 6.40 |
| | LFNN | Test | 79.37 | 78.83 | 69.87 | 61.80 | 60.05 | 59.10 | 57.46 | 58.01 | **57.37** | 57.75 |
| | | Train | 53.13 | 52.18 | 38.38 | 26.26 | 25.21 | 20.35 | 18.42 | 18.40 | 16.70 | 17.94 |

Table 3: Error rate (%, \(\downarrow\)) results of LFNNs and LFNN-\(\ell\)s (with different leadership percentages) on Tiny ImageNet and ImageNet subset. We also trained CNN counterparts (without LF hierarchy) with BP and global loss for reference. The test error rates of BP-enabled CNNs under Tiny ImageNet and ImageNet subset are 35.76% and 51.62%, respectively.
# Workload-Balanced Pruning for Sparse Spiking Neural Networks

Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

arXiv:2302.06746v2 | 2023-02-13 | http://arxiv.org/abs/2302.06746v2
###### Abstract
Pruning for Spiking Neural Networks (SNNs) has emerged as a fundamental methodology for deploying deep SNNs on resource-constrained edge devices. Though the existing pruning methods can provide extremely high weight sparsity for deep SNNs, the high weight sparsity brings a workload imbalance problem. Specifically, the workload imbalance happens when a different number of non-zero weights are assigned to hardware units running in parallel, which results in low hardware utilization and thus imposes longer latency and higher energy costs. In preliminary experiments, we show that sparse SNNs (\(\sim\)98% weight sparsity) can suffer as low as \(\sim\)59% utilization. To alleviate the workload imbalance problem, we propose u-Ticket, where we monitor and adjust the weight connections of the SNN during Lottery Ticket Hypothesis (LTH) based pruning, thus guaranteeing the final ticket gets optimal utilization when deployed onto the hardware. Experiments indicate that our u-Ticket can guarantee up to 100% hardware utilization, thus reducing up to 76.9% latency and 63.8% energy cost compared to the non-utilization-aware LTH method.
_Index Terms_: Spiking Neural Networks, Pruning, Neuromorphic Computing, Sparse Neural Networks
## I Introduction
Spiking Neural Networks (SNNs) have gained tremendous attention towards ultra-low-power machine learning [1]. SNNs leverage the spatio-temporal information of unary spike data to achieve energy-efficient processing in resource-constrained edge devices [2, 3]. However, in the case of large-scale tasks such as image classification, the model size of SNNs significantly increases. Unfortunately, edge devices typically have limited on-chip memory, rendering large-scale SNN deployment impractical. To this end, recent works have proposed various unstructured SNN pruning techniques to achieve high weight sparsity in SNNs [4, 5].
Although unstructured pruning manages to compress SNN models into the available memory resources, sparse SNNs encounter a **workload-imbalance problem** [6]. The workload imbalance comes from the conventional weight stationary dataflow [7] adopted in sparse accelerators [8, 9, 10]. In weight stationary dataflow, filters are divided into several groups and kept stationary inside processing elements (PEs) for filter reuse. However, different filter groups inevitably have different densities of non-zero weights, due to the random weight connections produced by unstructured pruning. As a result, different PEs end up with unbalanced workloads. Since all PEs run in parallel, PEs with smaller workloads must wait for the PE with the largest workload. This results in low utilization and imposes idle cycles, which increases latency and wastes leakage energy.
To address the workload-imbalance problem, various methods have been proposed in prior sparse accelerator designs. However, they cannot be efficiently applied to SNNs for the following reasons. **(1) Requiring extra hardware:** The prior methods require extra hardware (_e.g._, deep FIFOs or permuting units) [8, 9, 11, 12, 13] to balance the workloads. For instance, applying the hardware-based (FIFOs [8] and permuting networks [9]) workload balancing methods to SNNs requires approximately 18% and 13% extra chip area, respectively (see Fig. 1). Consequently, the improvements in PE utilization come at the cost of additional hardware resources, which should be avoided for SNNs, whose running environments are typically resource-constrained edge devices. **(2) Limited to low sparsity:** As shown in Fig. 1, the solutions from prior sparse accelerators [8, 9] only work at low sparsity (roughly 60% and 35% on VGG-16), which is not sufficient for SNNs' extremely low-power edge deployment. Moreover, the workload-imbalance problem naturally becomes harder to solve in the high weight sparsity regime. Hence, the exploration of workload balancing for extremely sparse networks (\(>95\%\) weight sparsity) is missing in prior works. Considering the above-mentioned problems, we need an SNN-friendly solution to address the workload imbalance.

Fig. 1: Comparison between u-Ticket and state-of-the-art workload balance methods. Overall, u-Ticket recovers the PE utilization up to \(100\%\) for extremely sparse networks with 98% weight sparsity (here, we consider VGG-16). Please note that u-Ticket does not introduce any hardware area overhead, and thus is the best fit for SNNs (\(\uparrow\): the higher is the better, \(\downarrow\): the lower is the better).
To this end, we propose u-Ticket, an iterative workload-balanced pruning method for SNNs that can effectively achieve high weight sparsity and minimize the workload imbalance problem simultaneously. Our method is based on the Lottery Ticket Hypothesis (LTH) [14], which states that sub-networks with accuracy similar to that of the original over-parameterized network can be found by repeating _training-pruning-initialization_ stages. Different from the standard LTH method [4], where the pruned networks are naively used for the next round, we either remove or recover weight connections to balance workloads across all PEs before sending the networks to re-initialization (see Fig. 2).
Compared to prior workload-balancing methods (see Fig. 1), the u-Ticket approach improves PE utilization by up to 100% (70% for [8] and 92% for [9]) while maintaining filter sparsity of 98% (60% for [8] and 35% for [9]), at iso-accuracy with the standard LTH-based pruning baseline [4]. Furthermore, since our method balances the workload during the pruning process, u-Ticket does not incur any additional hardware overhead for deployment.
We summarize the key contributions as follows:
1. We propose u-Ticket, which discovers highly sparse SNNs with optimal PE utilization. The discovered sparse SNN model achieves a level of accuracy, weight sparsity, and spike sparsity similar to the standard LTH baseline [4] while improving the utilization up to \(100\%\).
2. By balancing the workload, u-Ticket reduces the running latency and energy cost by up to \(76.9\%\) and \(63.8\%\), respectively, compared to the standard LTH method.
3. We extend the prior sparse accelerator [8] and propose an energy estimation model for sparse SNNs.
4. To validate the proposed u-Ticket, we conduct experiments on two representative deep architectures (i.e., VGG-16 [15] and ResNet-19 [16]) across three public datasets including CIFAR10 [17], Fashion-MNIST [18] and SVHN [19].
## II Background
### _Spiking Neural Networks_
Spiking Neural Networks (SNNs) process unary temporal signals through multi-layer weight connections. Instead of the ReLU neuron used for non-linear activation in ANNs, recent SNN works use a Leaky-Integrate-and-Fire (LIF) neuron, which contains a memory called the membrane potential. The membrane potential captures temporal spike information by integrating incoming spikes and generating output spikes accordingly. Suppose a LIF neuron \(i\) has a membrane potential \(u_{i}^{t}\) at timestep \(t\). We can formulate the discrete neuronal dynamics [20, 21] by:
\[u_{i}^{t}=\lambda u_{i}^{t-1}+\sum_{j}w_{ij}s_{j}^{t}. \tag{1}\]
Here, \(\lambda\) is the leaky factor for decaying the membrane potential through time. The \(s_{j}^{t}\) stands for the output spike from a neuron \(j\) at timestep \(t\). The \(w_{ij}\) denotes a weight connection between neuron \(j\) in the previous layer and neuron \(i\) in the current layer. If the membrane potential reaches a firing threshold, the neuron generates an output spike, and the membrane potential is reset to zero. Similar to ANNs, we train the weight connection \(w_{ij}\) in all layers. Our weight optimization is based on the recently proposed surrogate gradient learning, which assumes approximated gradient function for the non-differentiable LIF neuron [22]. We use \(tanh(\cdot)\) approximation following the previous work [21].
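A minimal sketch of this dynamic (with the threshold-and-reset step made explicit, since Eq. 1 covers only the integration) is shown below; the leak factor and threshold values are illustrative, not the paper's.

```python
import numpy as np

def lif_step(u_prev, w, s_in, lam=0.9, v_th=1.0):
    """One LIF timestep: u_i^t = lam * u_i^{t-1} + sum_j w_ij * s_j^t,
    followed by firing and a hard reset to zero."""
    u = lam * u_prev + w @ s_in                  # leaky integration (Eq. 1)
    s_out = (u >= v_th).astype(np.float32)      # spike where threshold is reached
    u = u * (1.0 - s_out)                        # reset membrane of fired neurons
    return u, s_out
```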
### _Lottery Ticket Hypothesis_
The Lottery Ticket Hypothesis (LTH) [14] states that a dense neural network contains sparse sub-networks (_i.e._, winning tickets) that achieve accuracy similar to the original dense network. The winning tickets are found by multiple rounds of magnitude pruning. Specifically, suppose we have a dense network \(f(x;\theta)\) with randomly-initialized parameter weights \(\theta\in\mathbb{R}^{n}\). In the first round, the dense network \(f(x;\theta)\) is trained to convergence (**step1** in Fig. 2). Based on the trained weights, we prune the \(p\%\) of weight connections with the lowest absolute values (**step2** in Fig. 2). We represent this pruning operation as a binary mask \(m\in\{0,1\}^{n}\). In the next round, we reinitialize the pruned network with the original initialization parameters \(f(x;\theta\odot m)\) (**step4** in Fig. 2), where \(\odot\) represents the element-wise product. The _training-pruning-initialization_ stages are repeated for multiple rounds. In the SNN domain, Kim _et al._ [4] recently applied LTH to deep SNNs, resulting in high weight sparsity (\(\sim\)98%) for VGG and ResNet architectures. However, they do not consider the workload imbalance problem in sparse SNNs. Different from the previous work, we adjust weight connections to improve utilization at each pruning round (**step3** in Fig. 2), which reduces up to \(77\%\) latency and \(64\%\) energy cost compared to the standard LTH [4] while maintaining both sparsity and accuracy.

Fig. 3: Example utilization and latency resulting from imbalanced and balanced workloads under the same model sparsity. With unstructured pruning, non-zero weights have a random distribution across the four groups, leading to unbalanced workloads across PEs as shown on the left side (PE0 has four weights assigned, while PE1 and PE2 only have one).

Fig. 2: Illustration of the concept of the proposed u-Ticket. Our u-Ticket consists of training (**step1**), pruning (**step2**), adjusting weight connections based on workload (**step3**), and re-initialization (**step4**). We repeat these steps for multiple rounds. Note, the standard LTH method consists of training (**step1**), pruning (**step2**), and re-initialization (**step4**), which does not consider the utilization of the pruned SNNs.
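The training-pruning-initialization loop can be sketched as follows; `train_fn` is a hypothetical routine that trains the masked network to convergence (re-applying the masks after each optimizer step), and the per-round prune fraction `p` is illustrative.

```python
import copy
import torch

def lth_search(model, train_fn, rounds=13, p=0.2):
    """Iterative magnitude pruning: train, prune the lowest-|w| fraction p of
    surviving weights, rewind to the original initialization, repeat."""
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(w) for n, w in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                          # step1: train to convergence
        for name, param in model.named_parameters():    # step2: magnitude pruning
            alive = param[masks[name].bool()].abs()
            thr = alive.quantile(p)                     # threshold on survivors only
            masks[name] *= (param.abs() > thr).float()
        model.load_state_dict(init_state)               # step4: rewind to theta_0
        with torch.no_grad():
            for name, param in model.named_parameters():
                param *= masks[name]                    # keep only the winning ticket
    return model, masks
```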
### _Workload Imbalance Problem_
In the context of neural network accelerators, dataflow refers to the input and weight mapping strategy on the hardware. To this effect, recent works [8, 9, 10, 23, 24] have demonstrated the efficacy of the weight stationary dataflow towards efficient deployment of sparse networks and SNNs. For weight-stationary dataflow, different weights are cast to different PEs and stay inside the PE until they are maximally reused across all the relevant computations. More specifically, during the running time, depending on the memory capacity of the hardware, each layer's filter kernels will be grouped in a chosen pattern and sent to each PE. As shown in Fig. 3, due to the randomness in unstructured pruning, the number of non-zero elements (or workload) allocated to each PE varies significantly. Moreover, the workload imbalance is persistent irrespective of the grouping method chosen. Note, here we define the number of non-zero weights assigned to a PE as the workload.
In this case, the wasted resources in PEs are based on the difference between the largest workload and the average of all other workloads. To quantitatively measure the portion of non-wasted resources, we use the utilization metric [6], given by
\[\mu=1-\frac{T_{max}-T_{avg}}{T_{max}}\cdot\frac{n}{n-1}, \tag{2}\]
\(T_{max}\) and \(T_{avg}\) are the slowest and the average processing time among the PEs. \(n\) is the number of PEs. The metric quantifies the percentage of processing time that the rest of the PEs, excluding the slowest one, is engaging in useful work.
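Reading Eq. (2) directly into code, utilization can be computed from the per-PE workloads alone; the example values in the trailing comment are hypothetical.

```python
def utilization(workloads):
    """Eq. (2): fraction of non-wasted PE time, given per-PE workloads
    (e.g., counts of non-zero weights assigned to each PE)."""
    n = len(workloads)
    t_max = max(workloads)
    t_avg = sum(workloads) / n
    return 1.0 - (t_max - t_avg) / t_max * n / (n - 1)

# utilization([4, 1, 1, 2]) -> ~0.33 (imbalanced); utilization([2, 2, 2, 2]) -> 1.0
```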
In Fig. 4, we show how the utilization degrades as the weight sparsity of the SNN increases under the standard LTH method [4]. The preliminary result shows that in the final round, the utilization can be as low as \(59\%\) for VGG-16 on CIFAR10. Here, we assume that the total number of PEs is 16, and the utilization is averaged across all layers (weighted by parameter count).
## III u-Ticket
To resolve the workload imbalance problem, we propose u-Ticket where we achieve high utilization in sparse SNNs during iterative pruning. In this section, we first present the algorithm to train sparse SNNs while maintaining high utilization. We then provide details of the proposed PE design and the energy model to map the u-Ticket on the hardware.
### _Algorithmic Approach_
Our u-Ticket pruning consists of multiple rounds similar to LTH [14]. For each round, we train the networks till convergence, prune the low-magnitude weight connections, balance the workload of PEs by recovering or removing the weight connections, and finally re-initialize the weights. The main idea is to ensure a balanced workload between PEs after unstructured pruning in each round.
The overall u-Ticket process is described in Algorithm 1. For each round, the pruned SNN from the previous round is re-initialized. After that, the model is trained and pruned, where we obtain a connectivity mask \(\hat{m}_{i}\) with imbalanced PE workloads. To increase the utilization, we first compute the workload for each PE, constructing the PE workload list \(W^{l}\) for each layer. Based on \(W^{l}\), we calculate the average workload \(w^{l}_{avg}\) for layer \(l\). Then, we go through each workload \(w\) in \(W^{l}\): if a PE's workload \(w\) is smaller than the average workload \(w^{l}_{avg}\), we randomly recover \((w^{l}_{avg}-w)\) weight connections; otherwise, \((w-w^{l}_{avg})\) weight connections are pruned. After the workload adjustment, every workload \(w\) has the same magnitude, ensuring optimal utilization \(\mu\). We repeat the above-mentioned stages for \(N\) rounds.
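The workload-adjustment step (**step3**) can be sketched as below; the choice of connections to recover or remove within a PE group is assumed to be uniformly random here, and the filter-to-PE grouping is simplified to equal chunks along the output dimension.

```python
import torch

def balance_pe_workloads(mask, n_pes):
    """Grow or prune connections in each PE's filter group until every PE
    holds the layer-average number of non-zero weights (sketch of step3)."""
    groups = mask.chunk(n_pes, dim=0)               # one filter group per PE
    w_avg = int(sum(g.sum() for g in groups)) // n_pes
    for g in groups:
        flat = g.view(-1)                           # view: edits hit `mask` in place
        n = int(flat.sum())
        if n < w_avg:                               # recover (w_avg - n) connections
            zeros = (flat == 0).nonzero().view(-1)
            flat[zeros[torch.randperm(len(zeros))[: w_avg - n]]] = 1
        elif n > w_avg:                             # remove (n - w_avg) connections
            ones = flat.nonzero().view(-1)
            flat[ones[torch.randperm(len(ones))[: n - w_avg]]] = 0
    return mask
```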
In our method, we use the average workload \(w^{l}_{avg}\) across all PEs at layer \(l\) as the reference for recovering/removing weight connections. The reasons behind this design choice are as follows: (1) if we look at only partial PE workloads to decide on a reference workload, it yields a sub-optimal solution; (2) the cost of checking all PE workloads is negligible compared to the overall iterative training-pruning-initialization process. We find that on an RTX 2080Ti GPU, the time cost of our workload-balancing method is only 0.11% of one complete LTH searching round.

Fig. 4: Sparsity and utilization across pruning rounds for the standard LTH method without utilization awareness. The pruning is done for 13 rounds on VGG-16 trained for image classification on CIFAR10 with 16 PEs.
### _Hardware Mapping_
#### Iii-B1 Processing Elements (PEs)
To get an accurate energy estimation, we need to map the sparse SNN to a proper hardware design. We develop our PE design based on [8], one of the state-of-the-art sparse accelerators, to support the running of sparse SNNs. Please note that our method of balancing the workloads works on any sparse accelerator design as long as it utilizes the weight stationary dataflow.
First, the non-zero weights, input spikes, and their corresponding metadata (indices) are read from DRAM. The weights are represented in the weight sparsity pattern (WSP) format [8], while the spike activations are represented in the standard compressed sparse row (CSR) format. We use four timesteps for the SNN in our experiments; thus, we can group every two activations into one byte (each activation has four unary spikes).
Then, an activation processing unit (APU, outside PEs) filters out the zero activation (0-spikes across four timesteps) and sends the non-zero activation together with their position indices (decoded from CSR) to the PE arrays. The position indices help to match the non-zero weights and activation in 2-D convolution.
At the PE level, each PE contains four 16-bit AND gates, 256 24-bit accumulators, and one 1024 \(\times\) 16 bits SRAM-based scratch-pad. We further extend the 256 accumulators with 256 LIF units for generating the output spikes. Each LIF unit is equipped with four 24-bit registers for storing the membrane potential across four timesteps.
Fig. 5 illustrates the overall architecture and the computation flow inside the PE. We process the network in a tick-batched manner [23]. At step 1, the non-zero weights together with their WSPs are mapped to each PE. At step 2, the spike activations \(S_{in}\) together with their position indices are sent to the PE. Based on the weight's WSP and the activation's position index, the selector unit outputs the matched non-zero weight. At steps 3 and 4, the dot-product operations between the input spikes and the matched weights are carried out, and the partial sums are stored according to their position indices. At step 5, the partial sums for each timestep are sequentially sent to the LIF units to generate the output spikes. Note that steps 2-5 are repeated four times to cover the four timesteps used in our SNN model (only 1 bit of \(S_{in}\) is cast to the PE at a time in step 2).
#### Iii-B2 Energy Modeling
We do the simulation for the full architecture. Since u-Ticket balances the workloads between PEs, the majority of the improvements can be found at the PE level. Thus, we focus on energy estimation at the PE level in this work. We extend the energy model from [24] to estimate the total energy:
\[E_{total}=N_{work}\cdot(E_{PE}^{d}\cdot(1-S_{in}^{spa})+E_{PE}^{l})+N_{idle} \cdot E_{PE}^{l}, \tag{3}\]
where \(E_{PE}^{d}\) and \(E_{PE}^{l}\) are the dynamic and leakage energy of a single PE processing one input spike. As shown in [24], there is no extra cost for skipping the zero-spike computation in SNNs. Thus, we directly apply the term of spike sparsity, \(S_{in}^{spa}\), in Eqn. 3 to consider the dynamic energy saving by skipping the zero spikes. Here \(N_{work}\) is defined as the total work cycles in which PEs are doing useful work and \(N_{idle}\) denotes the total cycles in which PEs are waiting in an idle state.
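Eqn. (3) translates directly into code; the parameter names below are illustrative:

```python
def total_pe_energy(n_work, n_idle, e_dyn, e_leak, spike_sparsity):
    """Eqn. (3): PE-level energy with zero-spike skipping.

    n_work / n_idle : total working / idle PE cycles
    e_dyn, e_leak   : dynamic and leakage energy of one PE per input spike
    spike_sparsity  : fraction of zero spikes in the input (S_in^spa)
    """
    return n_work * (e_dyn * (1.0 - spike_sparsity) + e_leak) + n_idle * e_leak
```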
## IV Experiment
### _Experimental Settings_
#### Iv-A1 Software Configuration
First, to validate the u-Ticket pruning method, we evaluate it on three public datasets: CIFAR10 [17], Fashion-MNIST [18], and SVHN [19]. We choose two representative deep network architectures: VGG-16 [15] and ResNet-19 [16]. We implement the networks in PyTorch and set the number of timesteps \(T\) to 4 for all experiments. We use a state-of-the-art direct encoding technique that has been shown to train SNNs on image classification datasets with very few timesteps. We use the same training configurations as in [4].
#### Iv-A2 Hardware Configuration
We report the utilization, latency, work cycles, and idle cycles based on our PyTorch-based simulator which simulates the running-time distribution of the weights to PEs. We use the weights grouping method as in [8, 24] with 16 PEs. The PE level energy is estimated with the model in Section III-B2 with all computing units synthesized in Synopsys Design Compiler at 400MHz using 32nm CMOS technology and the memory units simulated in CACTI. We set the standard LTH method [4] without utilization-awareness as our baseline and use the same estimation model to get the speed-up and energy results.
### _Experimental Results_
#### Iv-B1 Validation Result
We summarize the validation results in Table I. The results confirm that our method works well for deep SNNs (less than \(\sim\)1% accuracy drop). We also compare the sparsity of filters and spikes between the two methods. u-Ticket has a slightly higher filter sparsity, due to the extra reduction in weight connections needed to ensure balanced workloads for each PE. At the same time, u-Ticket keeps a similar level of spike sparsity on VGG-16 and has better spike sparsity on ResNet-19. While higher spike sparsity brings better energy efficiency, a spike sparsity that is too high causes an accuracy drop in deep SNNs [25]. This explains the accuracy-sparsity tradeoff on ResNet-19 (on average a 0.76% accuracy drop with a 3.5% sparsity gain).

Fig. 5: Overall architecture and the detailed inner architecture of the PE. Here APU denotes the activation processing unit.
#### Iv-B2 Hardware Performance
We consider four metrics in this section (_i.e._, work cycles, idle cycles, latency, and utilization).
* **Work cycles** (\(N_{work}\) in Eqn. 3): Sum of total work cycles for every PE across all the layers in the network.
* **Idle cycles** (\(N_{idle}\) in Eqn. 3): Sum of total idle cycles for every PE across all the layers in the network.
* **Latency**: Time required by PEs to process all the layers in the network. The latency is normalized with respect to the time required for a PE to process one input spike.
* **Utilization**: We use Eqn. 2 to compute the utilization for each layer. To compute the utilization of the network, we calculate the weighted average utilization.
The hardware improvement results are summarized in Table II. By iteratively applying the utilization recovery during pruning, u-Ticket can recover the utilization to up to \(100\%\) in the final pruning round, thus eliminating almost all the idle cycles for the PEs. Because the workloads are rebalanced among the PEs, the network can leverage more parallelism from the PE array, significantly reducing the running latency. The number of work cycles stays similar for both networks. We further visualize the layerwise speedup results for VGG-16 on CIFAR10 in Fig. 6. Overall, the layerwise work cycles and latency share similar trends between the two methods. Furthermore, u-Ticket achieves a larger reduction in idle cycles in earlier layers due to the larger feature map sizes.
#### Iv-B3 Energy Performance
In this section, we further show the energy efficiency improvements of u-Ticket over the standard LTH baseline. The energy differences are visualized in Fig. 7 (a), from which we observe that the energy benefits of balancing the workloads are huge. For CIFAR10, FMNIST, and SVHN, we manage to reduce the energy cost by \(41.8\%\), \(35.4\%\), and \(37.2\%\) on VGG-16, and \(55.5\%\), \(63.8\%\), and \(56.1\%\) on ResNet-19.
The main source of energy cost reduction comes from the elimination of idle cycles and the reduction of latency, which ultimately reduces the leakage energy of the hardware. ResNet-19, which is deeper, suffers more from the workload imbalance problem and thus has more idle cycles and longer latency compared to VGG-16. By eliminating almost all the idle cycles, u-Ticket brings a larger energy cost reduction to ResNet-19 than to VGG-16.
#### Iv-B4 Analysis of Sparsity
We study the effects of the u-Ticket method under different weight sparsity levels. We measure the energy difference between u-Ticket and the LTH baseline at different pruning rounds for both ResNet-19 and VGG-16 on the CIFAR10 dataset. The result is visualized in Fig. 7 (b). As observed, with an increase in weight sparsity, the benefits of using u-Ticket grow larger. This is due to the degradation of utilization in LTH, as shown earlier in Fig. 4.

Fig. 6: The layerwise performance comparison between LTH and u-Ticket on four metrics, _i.e._, (a) work cycles, (b) idle cycles, (c) latency, (d) utilization. We conduct experiments with the VGG-16 architecture on CIFAR10.
#### Iv-B5 Analysis of #PEs
We further study the effect of changing the number of PEs. We run u-Ticket for VGG-16 on CIFAR10 with 2, 4, 8, 16, 32, and 64 PEs and illustrate the results in Table III. While the energy cost changes only slightly with an increasing number of PEs, the latency decreases linearly. Considering that the area of the PE array also increases linearly with the number of PEs, we conduct most of our experiments with 16 PEs, which is a suitable trade-off point.
#### Iv-B6 Energy Breakdown
In Fig. 8, we show the energy breakdown comparison between u-Ticket and the LTH baseline on ResNet-19 for the CIFAR10 dataset. The energy components are the dynamic and leakage energy of MAC operation, LIF operation, and MEM operation (reading of SRAM-based scratchpad). We observe that the leakage energy for both MAC and LIF operation is significantly reduced in u-Ticket due to the elimination of the idle cycles. Expectedly, the portion of the dynamic energy of MAC and LIF operation increases.
#### Iv-B7 System Level Study
Finally, we study the behavior of the overall system of sparse SNNs. In Fig. 9 (a), we show how the total DRAM and SRAM access (normalized with respect to dense SNN) decrease with increasing weight sparsity. Furthermore, we find that in the extremely high weight sparsity regime, the PE level energy starts to take a significant portion of the total energy (\(\sim\) 45% on VGG-16 with CIFAR10). As a result, after applying u-Ticket to balance the PE workloads, we manage to reduce approximately \(19\%\) of the total energy at the system level as shown in Fig. 9 (b).
## V Conclusion
In this work, we propose u-Ticket, a utilization-aware LTH-based pruning method that solves the workload imbalance problem in SNNs. Unlike prior works, u-Ticket recovers the utilization during pruning, thus avoiding additional hardware to balance the workloads during deployment. Additionally, at iso-accuracy, u-Ticket improves PE utilization by up to 100% compared to the standard LTH-based pruning method while maintaining filter sparsity of 98%. Moreover, u-Ticket reduces the running latency by up to 77% and energy cost by up to 64% compared to standard LTH baseline.
|
2310.18769 | Linear Mode Connectivity in Sparse Neural Networks | With the rise in interest of sparse neural networks, we study how neural
network pruning with synthetic data leads to sparse networks with unique
training properties. We find that distilled data, a synthetic summarization of
the real data, paired with Iterative Magnitude Pruning (IMP) unveils a new
class of sparse networks that are more stable to SGD noise on the real data,
than either the dense model, or subnetworks found with real data in IMP. That
is, synthetically chosen subnetworks often train to the same minima, or exhibit
linear mode connectivity. We study this through linear interpolation, loss
landscape visualizations, and measuring the diagonal of the hessian. While
dataset distillation as a field is still young, we find that these properties
lead to synthetic subnetworks matching the performance of traditional IMP with
up to 150x less training points in settings where distilled data applies. | Luke McDermott, Daniel Cummings | 2023-10-28T17:51:39Z | http://arxiv.org/abs/2310.18769v1 | # Linear Mode Connectivity in Sparse Neural Networks
###### Abstract
With the rising interest in sparse neural networks, we study how neural network pruning with synthetic data leads to sparse networks with unique training properties. We find that distilled data, a synthetic summarization of the real data, paired with Iterative Magnitude Pruning (IMP) unveils a new class of sparse networks that are more stable to SGD noise on the real data than either the dense model or subnetworks found with real data in IMP. That is, synthetically chosen subnetworks often train to the same minima, or exhibit linear mode connectivity. We study this through linear interpolation, loss landscape visualizations, and measuring the diagonal of the Hessian. While dataset distillation as a field is still young, we find that these properties lead to synthetic subnetworks matching the performance of traditional IMP with up to 150x fewer training points in settings where distilled data applies.
## 1 Introduction & Background
Sparse neural networks are increasingly important in deep learning to enhance hardware performance (e.g., memory footprint, inference time) and reduce environmental impacts (e.g., energy consumption), especially as state-of-the-art foundational models continue to grow significantly in parameter count. The most common form of sparsity can be found in the neural network pruning literature [9]. In this field, researchers exploit sparsity for computational savings, usually at inference, by removing parameters after training. In order to reduce the cost of training as well, other works explore how to prune at initialization [12; 18], the end goal for almost any pruning research. Despite these great ambitions, pruning at initialization does not perform as well as hoped [7]. To further understand why this is the case, Frankle et al. [5] propose the Lottery Ticket Hypothesis: _for a sufficiently over-parameterized dense network, there exists a non-trivial sparse subnetwork that can train in isolation to the full performance of the dense model_. This is empirically validated for small settings with Iterative Magnitude Pruning with weight rewinding back to initialization. In parallel, researchers have been exploring how synthetic data representations such as those generated by dataset distillation methods can be leveraged to efficiently accelerate deep learning model training. With this in mind, we explore the training dynamics and stability of sparse neural networks in the context of synthetic data to better understand how we should be efficiently creating sparsity masks at initialization.
Research on the training dynamics of dense models has led researchers to find that dense models are connected in the loss landscape through nonlinear paths [8; 4; 10]. Linear paths, or Linear Mode Connectivity (LMC), are an uncommon phenomenon that occurs only in rare cases, such as MLPs on subsets of MNIST in [14]. For large networks, Frankle et al. [6] found that pretrained dense models, when fine-tuned across different shufflings of data, are linear mode connected. While these models are "stable" to noise generated through stochastic gradient descent (SGD), only the smallest dense models are stable at initialization. As for its relationship with sparse neural networks, it was empirically found that the Lottery Ticket Hypothesis only holds for stable dense models, those that are linear mode connected across data shufflings [6]. They found that these large dense models only become stable early in training, leading to the conclusion that Iterative Magnitude Pruning (IMP)
with weight rewinding, the method used to find such lottery tickets, should instead rewind a model to an early point in training rather than to initialization, revising the hypothesis to fit larger settings. Our work aims to study the properties of sparse neural networks at initialization, in contrast to more recent lottery ticket literature [15], which utilizes some pretraining to find a "good" initialization.
We find that another class of sparse subnetworks exists that is more stable at initialization: _synthetic subnetworks_. We define synthetic subnetworks as those produced by "distilled pruning" [13]. These are found by replacing the traditional data in IMP with distilled data, essentially a summarized version of the training data consisting of only 1-50 synthetic images per class (see [17] for a survey). In general, dataset distillation optimizes a synthetic dataset to match the performance of a model trained on real data. This bi-level optimization problem can be defined as minimizing the difference of the average loss over all validation points:
\[\operatorname*{arg\,min}_{\mathcal{D}_{\text{syn}}}|L(\Phi(\mathcal{D}_{\text {real}});\mathcal{D}_{\text{val}})-L(\Phi(\mathcal{D}_{\text{syn}});\mathcal{D }_{\text{val}})| \tag{1}\]
In distilled pruning, we perform the same training, pruning, and rewinding to initialization in order to produce the sparsity mask. This mask, as with those produced by IMP, can be applied to the dense model at initialization to create a high-performing sparse neural network after training on real data. The significance is that synthetic images can be used to pick an appropriate sparsity mask for a downstream task. Recent work shows that despite synthetic subnetworks having lower performance as a trade-off for pruning efficiency, these subnetworks have less need for rewinding to an early point in training due to their inherent stability [13]. We find that, with better dataset distillation methods such as Information-intensive Dataset Condensation (IDC) [11] rather than Matching Training Trajectories (MTT) [3], which was used previously, we match the performance of IMP when rewinding to initialization. We achieve this with 5x less data than previous distilled pruning work, which is approximately _150x fewer_ training points than traditional IMP needs to find a sparsity mask. While we do use a current state-of-the-art distillation method, such methods are still limited to models up to ResNet-18 and small datasets like CIFAR-10, CIFAR-100, and subsets of ImageNet.
## 2 Dataset distillation for neural network pruning.
To find a suitable sparsity mask for a randomly initialized model, we first train the network to convergence on distilled data1, prune the lowest-magnitude weights, then rewind the non-pruned weights back to their initialized values, and loop until the desired sparsity is reached. The final model retains its randomly initialized weights together with a sparsity mask. We can train the sparse synthetic subnetwork on real data to achieve sufficient performance at high sparsities. Using distilled data only to choose our sparsity mask allows us to better understand the architectural relationship of this data. We refer to subnetworks found with synthetic or distilled data as _synthetic subnetworks_ and those found with real data as _IMP subnetworks_. The only difference between IMP and distilled pruning lies in the sparsity mask they choose. Since each method uses a different dataset for training, their final converged weights will differ. What is deemed "important" for real data might not be important for distilled data; therefore, distilled pruning may attempt to remove these weights. The performance of the sparsity masks produced by distilled pruning directly relates to how relevant the distilled data is to the real data. We find that distilled pruning can match the performance of IMP, when rewinding to initialization, in settings where dataset distillation applies2.
Footnote 1: We use dataset distillation methods that match training trajectories to ensure that training on synthetic data yields similar converged results to training on real data [11]
Footnote 2: In the appendix, figure 5 showcases this performance, comparing traditional IMP to distilled pruning on CIFAR-10 with ResNet-18.
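The loop described above can be sketched as follows, where `train`, `magnitude_prune`, `apply_mask`, and `ones_like_mask` are hypothetical helpers standing in for the usual IMP components; only the training set differs from standard IMP:

```python
def distilled_pruning(model, init_state, distilled_data, rounds, prune_frac=0.2):
    """Sketch of distilled pruning: IMP with synthetic data.

    The four helpers above are placeholders, not a real API; `init_state`
    is the state dict saved at random initialization.
    """
    mask = ones_like_mask(model)                 # start fully dense
    for _ in range(rounds):
        model.load_state_dict(init_state)        # rewind weights to initialization
        apply_mask(model, mask)
        train(model, distilled_data)             # train to convergence on synthetic data
        mask = magnitude_prune(model, mask, prune_frac)  # drop lowest-magnitude weights
    model.load_state_dict(init_state)            # final subnetwork: init weights + mask
    apply_mask(model, mask)
    return model, mask
```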
## 3 Stability of subnetworks.
To understand the training dynamics of sparsity masks chosen via distilled pruning vs IMP, we conduct an instability analysis. We take a randomly initialized model, generate a sparsity mask through pruning, and train it across two different orderings of the real training data. We save these two models and interpolate all the weights between them, measuring the training loss at each point in the interpolation as shown in Figures 1 and 2. We assess the linear mode connectivity of these
subnetworks to determine if the model is stable to SGD noise. If the loss increases as you interpolate between two trained versions, then there is a barrier in the loss landscape, implying the trained models found different minima. In these cases, the ordering of the training data directly impacts what minima its choosing. If the loss does not increase during interpolation, then this implies they exist in the same minima or at least the same flat basin.
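The interpolation measurement behind Figures 1 and 2 is straightforward to implement; the following is one reasonable PyTorch sketch (not necessarily the authors' exact code), where `state_a` and `state_b` are the state dicts of the two trained runs:

```python
import torch

@torch.no_grad()
def interpolation_losses(model, state_a, state_b, loss_fn, loader, steps=21):
    """Training loss along the segment between two trained weight vectors."""
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Blend floating-point parameters; keep integer buffers (e.g. BN counters).
        blended = {
            k: (1 - alpha) * v + alpha * state_b[k] if v.is_floating_point() else v
            for k, v in state_a.items()
        }
        model.load_state_dict(blended)
        total, count = 0.0, 0
        for x, y in loader:
            total += loss_fn(model(x), y).item() * y.size(0)
            count += y.size(0)
        losses.append(total / count)
    return losses  # a bump in the middle indicates a loss barrier (instability)
```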
We see that in simpler scenarios with ConvNet-3 on CIFAR-10 and ResNet-10 on ImageNet-10, we exhibit full linear mode connectivity. We even see slightly better performance during interpolation in Figure 2. As stated before, it was shown that lottery tickets can be found with IMP only when the dense model is stable [6]. However, we find that in some cases of unstable dense models there exists a sparse subnetwork that is stable at initialization. More importantly, traditional IMP is not able to produce stable subnetworks in these settings. Sparsity is not necessarily the answer for smoother landscapes; _where_ you induce sparsity is the main factor. As pruning continues, the results exhibit more stability despite lower trainability, as seen in higher training losses. We postulate that the parameters pruned on distilled data, yet still present in the IMP subnetwork, capture the intricacies of the real data which contribute to a sharper, but more trainable, landscape. Since IMP subnetworks are not stable, the intricacies they learn are order dependent.

Figure 1: Comparison of the stability of synthetic vs. IMP subnetworks at initialization on CIFAR-10. We show how the loss increases as you interpolate the weights between two trained models. We measure this for subnetworks of different sparsities. The left column is reserved for subnetworks found via distilled data, and the middle column is for subnetworks found with real data. The dark lines in the 3D plots represent the pruning iteration we used for the combined plot; the dense model is iteration 0.

Figure 2: Comparison of the stability of synthetic vs. IMP subnetworks at initialization on ImageNet-10 and ResNet-10. An increased loss across interpolation implies instability, i.e., trained networks landing in different minima.
## 4 Loss Landscape Visualization
While linear mode connectivity is useful for studying the loss landscape, this lightweight method can only show us a one-dimensional slice of the bigger picture. We further examine the landscapes across two dimensions of parameters, as shown in Figure 3.
We created two orthogonal vectors from trained reference models in order to map the high-dimensional parameter space down to two dimensions. For each of the 10,000 points, we take the linear combination of the two vectors and measure loss on real training data. Since this visualization is created after the reference models are trained, reference models that are closer together will result in "zooming in" on their minima; spatial distance is not preserved by this method. This is useful in determining the local area to which these models are training. With post-hoc analysis, we find that spatial distance in our plot is largely maintained, with slightly smaller distances as pruning proceeds. From these visualizations, IMP chooses subnetworks that exhibit a similar landscape to the dense model. We see the trained models fall into two separate minima in both the IMP and dense cases, explaining the loss barrier in Figure 1. Subnetworks chosen with distilled data fall into the same, flat basin.
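A sketch of how such a grid can be computed is given below; the paper evaluates 10,000 points (a 100 x 100 grid) and orthogonalizes its two direction vectors, while this simplified version defaults to a coarser grid and skips the orthogonalization:

```python
import torch

@torch.no_grad()
def landscape_grid(model, s0, s1, s2, loss_fn, batch, res=25):
    """Loss surface over the plane through three trained models' weights.

    Axes are the directions (s1 - s0) and (s2 - s0); s0, s1, s2 are state
    dicts of trained reference models.
    """
    grid = torch.zeros(res, res)
    coords = torch.linspace(-0.5, 1.5, res)
    x, y = batch                              # one representative batch of real data
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            blended = {
                k: v + a * (s1[k] - v) + b * (s2[k] - v) if v.is_floating_point() else v
                for k, v in s0.items()
            }
            model.load_state_dict(blended)
            grid[i, j] = loss_fn(model(x), y).item()
    return grid                               # plot with e.g. contourf
```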
Across almost all experiments, we see a general trend: subnetworks chosen via distilled pruning result in a smooth & generalizing loss landscape. As compression ratio increases, we see more stability than IMP; however, the performance trend largely depends on the distilled accuracy, in this case by using the IDC method [11]. Most notably, we achieve full linear mode connectivity for ConvNet-3 on CIFAR-10 and ResNet-10 on Imagenet-10. While there are numerous factors at play, IDC [11] optimized the synthetic data specifically for these models on each dataset, hinting that stability is a result of high performing synthetic data.
## 5 Conclusion
This work is an initial step into exploring the impact of using synthetic data, specifically distilled data, on pruning. We thoroughly assess the linear mode connectivity of these subnetworks to determine if the model is stable to SGD noise, even finding stable subnetworks from unstable dense models. We believe the inherent compression of dataset distillation is a driving factor in synthetic subnetworks' stability. Lastly, we believe this hints at the possibility of finding lottery tickets at initialization by first searching for stable subnetworks. In turn, we invite researchers to find new ways to search for stable subnetworks, especially on the real data.
Figure 3: Loss Landscape visualization around the neighborhood defined by trained models on different seeds for ConvNet-3 and CIFAR-10. |
2301.12505 | Implementing a Hybrid Quantum-Classical Neural Network by Utilizing a
Variational Quantum Circuit for Detection of Dementia | Magnetic resonance imaging (MRI) is a common technique to scan brains for
strokes, tumors, and other abnormalities that cause forms of dementia. However,
correctly diagnosing forms of dementia from MRIs is difficult, as nearly 1 in 3
patients with Alzheimer's were misdiagnosed in 2019, an issue neural networks
can rectify, and quantum computing applications promise further improvement. The proposed novel neural network architecture implements a variational quantum circuit (VQC) that reduces the number of features to obtain expectation values, followed by a fully-connected (FC) layer. The VQC created in this study utilizes a layer of Hadamard
gates, Rotation-Y gates that are parameterized by tanh(intensity) * (pi/2) of a
pixel, controlled-not (CNOT) gates, and measurement operators to obtain the
expected values. This study found that the proposed hybrid quantum-classical
convolutional neural network (QCCNN) provided 97.5% and 95.1% testing and
validation accuracies, respectively, which was considerably higher than the
classical neural network (CNN) testing and validation accuracies of 91.5% and
89.2%. Additionally, using a testing set of 100 normal and 100 dementia MRI
images, the QCCNN detected normal and demented images correctly 95% and 98% of
the time, compared to the CNN accuracies of 89% and 91%. With hospitals like
Massachusetts General Hospital beginning to adopt machine learning applications
for biomedical image detection, this proposed architecture would improve
accuracies and potentially save more lives. Furthermore, the proposed
architecture is generally flexible, and can be used for transfer-learning
tasks, saving time and resources. | Ryan Kim | 2023-01-29T18:05:42Z | http://arxiv.org/abs/2301.12505v2 | Implementing a Hybrid Quantum-Classical Neural Network by Utilizing a Variational Quantum Circuit for Detection of Dementia
###### Abstract
Magnetic resonance imaging (MRI) is a common technique to scan brains for strokes, tumors, and other abnormalities that cause forms of dementia. However, correctly diagnosing forms of dementia from MRIs is difficult, as nearly 1 in 3 patients with Alzheimer's were misdiagnosed in 2019, an issue neural networks can rectify, and quantum computing applications promise further improvement. The proposed novel neural network architecture implements a variational quantum circuit (VQC) that reduces the number of features to obtain expectation values, followed by a fully-connected (FC) layer. The VQC created in this study utilizes a layer of Hadamard gates, Rotation-Y gates that are parameterized by tanh(intensity) \(*\) (\(\pi\)/2) of a pixel, controlled-not (CNOT) gates, and measurement operators to obtain the expected values. This study found that the proposed hybrid quantum-classical convolutional neural network (QCCNN) provided 97.5% and 95.1% testing and validation accuracies, respectively, which was considerably higher than the classical neural network (CNN) testing and validation accuracies of 91.5% and 89.2%. Additionally, using a testing set of 100 normal and 100 demented MRI images, the QCCNN detected normal and demented images correctly 95% and 98% of the time, compared to the CNN accuracies of 89% and 91%. With hospitals like Massachusetts General Hospital beginning to adopt machine learning applications for biomedical image detection, this proposed architecture would improve accuracies and potentially save more lives. Furthermore, the proposed architecture is generally flexible and can be used for transfer-learning tasks, saving time and resources.
quantum, machine learning, dementia, variational quantum circuit
## I Introduction
With the field of quantum physics growing rapidly, more and more applications of quantum computing are entering the computer science field [1]. Quantum computing offers several benefits over classical computing, which include faster computations, a broader variety of problems to solve, and better performance for machine learning tasks [2]. The properties of encoding information through superposition and quantum entanglement are not present in classical computing and could provide new ways to solve specific tasks if harnessed correctly. My research attempts to enhance existing machine learning methods by applying a variational quantum circuit layer to a pre-existing network, in hopes of improving the accuracy of the network. This novel hybrid quantum-classical neural network was tested utilizing an MRI dataset of brains with and without dementia, which holds relevance to this day. Dementia is a growing issue, as over 55 million people in the world suffer from it [3]. Dementia is the deterioration of cognitive function and can be caused by a variety of factors, such as head trauma, alcohol, Parkinson's, Alzheimer's, and more. Magnetic resonance imaging (MRI) is one of the most effective ways of diagnosing dementia, but identifying whether a patient has a certain condition is difficult because the images are complex and require subjective deduction [4][5]. One of the new ways to detect a certain condition through MRI images is to use neural networks, which can be much more accurate and objective than humans [6].
## II Materials and Methods
### _Description of Dataset_
The image dataset used in this study was produced and published publicly by Sarvesh Dubey on Kaggle [7]. The dataset consists of both demented and non-demented MRI images of brains separated into four different classes representing different severities of dementia: not demented, mildly demented, moderately demented, and very demented. Overall, the total number of images was approximately 6400.
Since this study focuses on the binary classification between demented and non-demented images, I chose to use the 3199 non-demented and 2239 very demented images for my dataset.
### _Preprocessing of Dataset_
The two pools of images were separated into different folders, and labeled 1 through the total number of images in the folder. By keeping the labels consistent within the folder, it becomes easy to iterate over the dataset to use for the model.
Additionally, images were resized to 250 x 250 pixels and the dataset array was converted into a PyTorch tensor.
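A minimal PyTorch sketch of this preprocessing follows; the directory name is illustrative:

```python
import torch
from torchvision import transforms
from torchvision.datasets import ImageFolder

# Folder layout assumed: one subfolder per class (names here are illustrative).
preprocess = transforms.Compose([
    transforms.Resize((250, 250)),   # resize to 250 x 250 pixels
    transforms.ToTensor(),           # convert PIL images to PyTorch tensors
])
dataset = ImageFolder("mri_dataset/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```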
### _Classical Neural Network_
A classical neural network consists of many different layers, used to extract features with mathematical functions. In deep-learning, feed-forward neural networks tend to be one of the most widely used models for object detection. A generalized layer of one feed-forward neural network can be represented by [8]:
\[L_{i}=x_{i}\to y_{i}=\psi(Wx_{i}+b) \tag{1}\]
where \(L_{i}\) is the specified layer, \(W\) is the weights, \(x_{i}\) is the input feature vector, \(y_{i}\) is the output feature vector, b is the bias, and \(\psi\) is a nonlinear activation function. The biases and weights are the values that will be optimized during the training process.
The selected model used for feature extraction was the ResNet18, which is a sequential convolutional neural network, that includes 18 layers in total. ResNet18 consists of convolution, batch normalization, ReLU, and max pooling layers.
### _Variational Quantum Circuit_
A variational quantum circuit takes advantage of certain quantum properties such as superposition, entanglement, and quantum gates to encode and manipulate data. Superposition is fundamental in quantum mechanics: a system exists in a combination of all possible states, each with a corresponding probability amplitude, until a measurement occurs. Superposition of a general spin-1/2 state vector in the z-basis can be represented by:
\[\left|\psi\right\rangle=a\left|+\right\rangle+b\left|-\right\rangle \tag{2}\]
where \(\psi\) is the given state, + is the spin-up z-basis vector, - is the spin-down z-basis vector, a is the corresponding probability amplitude of +, and b is the corresponding probability amplitude of -.
Entanglement is another foundational principle of quantum mechanics, which states that once measuring a state of one particle of an entangled pair, the state of the other particle is known, no matter how far the particles are from each other. A simple representation of an entangled system in a Stern-Gerlach experiment is:
\[\left|\psi\right\rangle=\frac{1}{\sqrt{2}}(\left|+\right\rangle_{1}\left|- \right\rangle_{2}+\left|-\right\rangle_{1}\left|+\right\rangle_{2}) \tag{3}\]
where \(+_{1}\) and \(-_{1}\) represent basis state vectors of the first particle and \(+_{2}\) and \(-_{2}\) represent basis state vectors of the second particle.
In quantum computing, there are gates that perform operations on qubits, which store information. The most fundamental gate is the Hadamard (H) gate, which brings a single qubit into a superposition state. Another important gate is the controlled-not (CNOT) gate, which checks if the control qubit has the state \(\left|1\right\rangle\) and then flips the target qubit from \(\left|0\right\rangle\) to \(\left|1\right\rangle\) and vice versa. The final gate used in this study is the rotation-y (RY) gate, which rotates a given qubit state a given number of radians around the complex y-axis.
In general, variational quantum circuits consist of three major layers: embedding, variational, and measurement layer. A visualization of a variational quantum circuit is represented in Figure 1.
The embedding layer \(E\) will map a vector from the classical space to the quantum Hilbert space, and is parameterized by \(\theta\), which can both be defined as
\[E:x\rightarrow\left|x\right\rangle=E(x)\left|0\right\rangle \tag{4}\]
\[\theta_{i}=\tanh(I_{x,y})\cdot\frac{\pi}{2} \tag{5}\]
where \(I\) is the intensity of a pixel specified at (\(x\), \(y\)) of a given convoluted input image. A variational layer, \(L\), can be defined as [8]:
\[L:\left|x\right\rangle\rightarrow\left|y\right\rangle=U(w)\left|x\right\rangle \tag{6}\]
where \(\left|x\right\rangle\) is the input state, \(\left|y\right\rangle\) is the output state, \(w\) is a vector of classical variational parameters, and U is a unitary gate matrix that represents the matrix product of the H, CNOT, and RY gates.
The concatenation of these variational layers L of depth n can be expressed as:
\[Q=L_{n}\circ\cdots L_{2}\circ L_{1} \tag{7}\]
Fig. 1: Variational quantum circuit with embedding layer, variational layer, and measurement layer reducing 512 features to 4 features
The measurement layer takes information from the four qubits and computes the expectation value, producing a classical output vector of dimension 4. It can be expressed as:
\[M:\left|x\right\rangle\to y=\left\langle x\right|\hat{y}\left|x\right\rangle \tag{8}\]
Overall, the variational quantum circuit can be expressed as:
\[V=M\circ Q\circ E \tag{9}\]
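As a concrete illustration of \(V=M\circ Q\circ E\), the following is a minimal 4-qubit sketch in PennyLane (the paper does not name its framework, so this choice is an assumption). It encodes only four input values per pass; the study's layer maps 512 features to 4 expectation values, which would require repeated encoding or an upstream reduction not specified in the text:

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(pixels, weights):
    # Embedding layer E: Hadamard then RY(tanh(I) * pi/2) per qubit (Eqns. 4-5)
    for i in range(n_qubits):
        qml.Hadamard(wires=i)
        qml.RY(np.tanh(pixels[i]) * np.pi / 2, wires=i)
    # Variational layer L: trainable rotations followed by CNOT entanglement
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Measurement layer M: z-basis expectation value of each qubit (Eqn. 8)
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]
```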
### _Creating a Hybrid Quantum-Classical Neural Network_
The ResNet-18 neural network model was used to extract features from the input images, and the variational quantum circuit significantly reduces the number of features, which catalyzes the network towards classification. According to Figure 2, 512 features are extracted from the initial ResNet-18 average pooling layer and then fed into the variational quantum circuit.
After the variational quantum circuit applies the given quantum logic gates and a z-measurement on each qubit, the four-dimensional vector is sent to the classical fully-connected (FC) layer, where it is converted into two values: an expectation value for a demented brain and an expectation value for a non-demented brain.
Representing the quantum-classical neural network mathematically as \(QCNN\):
\[QCNN=L_{4\to 2}\circ Q_{512\to 4}\circ L_{512} \tag{10}\]
where \(L_{512}\) represents the ResNet-18 layers that extract 512 features, \(Q_{512\to 4}\) is the variational quantum circuit that reduces the 512 features to 4, and \(L_{4\to 2}\) is the fully-connected layer.
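Eqn. (10) suggests a model of the following shape; this is a hypothetical PyTorch composition, where `quantum_layer` could be, e.g., a PennyLane `qml.qnn.TorchLayer` wrapping the circuit sketched earlier:

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridQCNN(nn.Module):
    """Sketch of Eqn. (10): ResNet-18 features -> VQC -> FC(4, 2)."""
    def __init__(self, quantum_layer):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()         # expose the 512-feature vector (L_512)
        self.backbone = backbone
        self.quantum_layer = quantum_layer  # Q_{512->4}: returns 4 expectation values
        self.classifier = nn.Linear(4, 2)   # L_{4->2}: demented vs. non-demented logits

    def forward(self, x):                   # x assumed (batch, 3, 250, 250)
        features = self.backbone(x)         # (batch, 512)
        expectations = self.quantum_layer(features)  # (batch, 4)
        return self.classifier(expectations)
```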
### _Defining Hyperparameters_
In machine learning, hyperparameters define the learning process before training the model. Hyperparameters cannot be changed during the training process of the neural network model [9].
The first hyperparameter to be defined is the number of epochs. An epoch is one pass of the whole dataset through the model. In this study, the number of epochs is set to 20 to ensure that the accuracies and losses are able to converge.
The next hyperparameter to be defined is the learning rate. In machine learning, the learning rate defines how much the model changes its weights with respect to the loss gradient. Generally, the higher the learning rate, the faster the loss function converges; the lower the learning rate, the more intricately and accurately the loss function converges [10].
Another hyperparameter to be defined is the batch size, which is the number of images to be sent through the model in one iteration. The more images used in a batch, the slower the model is trained [11]. However, like the learning rate, time is exchanged for potentially higher accuracies and lower losses. In this study, the chosen batch size was 32 images.
The layers of a neural network are made up of neurons, which can be activated, from which information can be sent to the next layer. The activation function, which determines whether or not a neuron activates, is another important hyperparameter. In this study, the chosen nonlinear activation function is the rectified linear unit (ReLU), which can be defined as [8]:
\[ReLU(x)=max(0,x) \tag{11}\]
The loss function is another important hyperparameter that quantifies how far the model prediction is from the actual value. The chosen loss function for this study is the cross-entropy loss function for binary classification, which can be expressed as [8]:
\[CL=-\sum_{n=1}^{2}y_{n}log(P_{n}) \tag{12}\]
where \(y_{n}\) is the output value computed by the model and \(P\) is the overall probability of the \(n\)th class.
The last hyperparameter is the optimizer, which is an algorithm to minimize the loss function. The selected optimizer for this study is the Adam optimizer [12], which can compute large amounts of data effectively.
All of the hyperparameters are represented in Table I.
\begin{table}
\begin{tabular}{c c c c c c} \multicolumn{6}{c}{Hyperparameters for the hybrid and classical models} \\ \hline Epochs & Learning Rate & Loss Function & Optimizer & Batch Size & Activation Function \\
**20** & \(10^{-4}\) & **Cross-Entropy** & **Adam** & **32** & **ReLU** \\ \hline \end{tabular}
\end{table} TABLE I: Hyperparameters for the hybrid and classical models
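With the settings of Table I, the training setup might look as follows; this is a sketch reusing the hypothetical `HybridQCNN`, `quantum_layer`, and `loader` from the earlier snippets:

```python
import torch

EPOCHS, LEARNING_RATE, BATCH_SIZE = 20, 1e-4, 32     # values from Table I
model = HybridQCNN(quantum_layer)                    # hypothetical, defined above
criterion = torch.nn.CrossEntropyLoss()              # cross-entropy loss (Eqn. 12)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

for epoch in range(EPOCHS):
    for images, labels in loader:                    # batches of size 32
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```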
Fig. 2: Architecture of the Hybrid Quantum-Classical Neural Network
### _Performance Metrics_
To compare the ResNet-18 and the proposed model, the accuracy, recall, precision, and F1-score metrics are used:
\[accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{13}\]
\[recall=\frac{TP}{TP+FN} \tag{14}\]
\[precision=\frac{TP}{TP+FP} \tag{15}\]
\[F_{1}=\frac{2*recall*precision}{recall+precision} \tag{16}\]
where \(TP\) represents the true positives, which are data values correctly given a positive label. \(TN\) represents the true negatives, which are data values correctly given a negative label. \(FP\) represents the false positives, which are data values incorrectly given a positive label. \(FN\) represents the false negatives, which are data values incorrectly given a negative label.
For calculating the testing metrics, the same 100 unused non-demented and 100 unused demented brain MRIs were used. Conditions were kept the same to prevent confounding variables from influencing the comparison of performance metrics between the two models.
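Eqns. (13)-(16) applied to the hybrid model's test counts reported in the Results section, taking demented as the positive class, give:

```python
def classification_metrics(tp, tn, fp, fn):
    """Eqns. (13)-(16) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, recall, precision, f1

# Hybrid model's test counts: TP = 98, TN = 95, FP = 5, FN = 2
print(classification_metrics(98, 95, 5, 2))
# -> accuracy 0.965, recall 0.98, precision ~0.951, F1 ~0.965
```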
## III Results And Discussion
### _Training_
During the training phase, the accuracy and loss graphs for the classical and hybrid neural networks are shown in Figures 3 and 4.
In Figure 3, the hybrid model consistently has a higher accuracy than the classical model, with both models improving most rapidly during epochs 1-5.
After both models reached convergence, the training accuracy was approximately 0.976 for the hybrid model and 0.915 for the classical model. The corresponding convergence training losses were 0.148 and 0.316 for the hybrid and classical models, respectively.
### _Validation_
In Figure 5, the hybrid model once again consistently has a higher accuracy than the classical model, with the difference of accuracy during convergence being approximately 0.06. Both models drastically improve in accuracy between epochs 1-4. In Figure 6, the hybrid model consistently has less loss than the classical model, with the difference of loss during convergence being approximately 0.18. Both models initially start with large values of loss and then drastically decrease in loss between epochs 1-5, but less drastically than the training losses.

After both models reached convergence, the validation accuracy was approximately 0.951 for the hybrid model and 0.892 for the classical model.

Fig. 3: Training accuracy with respect to epoch number for both hybrid and classical models

Fig. 4: Training loss with respect to epoch number for both hybrid and classical models

Fig. 5: Validation accuracy with respect to epoch number for both hybrid and classical models

Fig. 6: Validation loss with respect to epoch number for both hybrid and classical models
### _Testing_
For the hybrid model testing results shown in Figure 7, out of the 100 normal images tested, 95 were correctly classified as normal and 5 were incorrectly classified as demented. Additionally, out of the 100 demented images tested, 98 were correctly classified as demented and 2 were incorrectly classified as normal. In Figure 8, which shows the classical model testing results, 89 normal images were correctly classified as normal and 11 normal images were incorrectly classified as demented. The classical model also correctly classified 91 demented images and incorrectly classified 9 demented images as normal.
As shown in Table II, the hybrid model performed better in every metric calculated: accuracy, loss, recall, precision and F1-score. Since it has a higher accuracy and F1-score, it is better at correctly predicting the right label for an image across both classes, demented and non-demented. Since the hybrid model has a better recall and precision than the classical model, it is both better at predicting correct labels out of the true positives and has a better accuracy for predicting positive labels out of the dataset.
## IV Conclusion
This study found that by adding a variational quantum circuit layer to the ResNet-18 neural network, the training accuracy, validation accuracy, training loss, validation loss, testing accuracy, recall, precision, and F1-score are all improved significantly. The proposed hybrid model provides better predictions than the original ResNet-18 model due to its inherent quantum nature, which displays another example of quantum supremacy, where a quantum application provides significant speedups or benefits over a classical one [13]. This framework of a hybrid model can be applied to other binary classification systems, as ResNet-18 is generally flexible and the variational quantum circuit can reduce a large number of parameters. This hybrid model has the potential to provide more accurate dementia diagnoses from MRIs and improve patient outcomes. In the future, more layers of the proposed model can be parameterized by a quantum circuit to improve accuracy, and the model can be trained on other datasets for binary classification tasks.
## V Acknowledgment
This research was done independently by the author, who is affiliated with the Thomas Jefferson High School for Science and Technology. No funding or mentoring was associated with this research.
Fig. 7: Confusion matrix for demented and non-demented labels for the hybrid model

Fig. 8: Confusion matrix for demented and non-demented labels for the classical model |
2308.09374 | Noise Sensitivity and Stability of Deep Neural Networks for Binary
Classification | A first step is taken towards understanding often observed non-robustness
phenomena of deep neural net (DNN) classifiers. This is done from the
perspective of Boolean functions by asking if certain sequences of Boolean
functions represented by common DNN models are noise sensitive or noise stable,
concepts defined in the Boolean function literature. Due to the natural
randomness in DNN models, these concepts are extended to annealed and quenched
versions. Here we sort out the relation between these definitions and
investigate the properties of two standard DNN architectures, the fully
connected and convolutional models, when initiated with Gaussian weights. | Johan Jonasson, Jeffrey E. Steif, Olof Zetterqvist | 2023-08-18T08:09:31Z | http://arxiv.org/abs/2308.09374v1 | # Noise Sensitivity and Stability of Deep Neural Networks for Binary Classification
###### Abstract
A first step is taken towards understanding often observed non-robustness phenomena of deep neural net (DNN) classifiers. This is done from the perspective of Boolean functions by asking if certain sequences of Boolean functions represented by common DNN models are noise sensitive or noise stable, concepts defined in the Boolean function literature. Due to the natural randomness in DNN models, these concepts are extended to annealed and quenched versions. Here we sort out the relation between these definitions and investigate the properties of two standard DNN architectures, the fully connected and convolutional models, when initiated with Gaussian weights.
**Keywords:** Boolean functions, Noise stability, Noise sensitivity, Deep neural networks, Feed forward neural networks
## 1 Introduction
The driving question of this paper is how robust a typical binary neural net classifier is to input noise, i.e. for a typical neural net classifier and a typical input, will tiny changes to that input make the classifier change its mind? When asking this, we take inspiration from phenomena observed for deep neural networks (DNN) used in practice and use that inspiration to give mathematically rigorous answers for some simple DNN models under one (of several possible) reasonable interpretations of the question. It is not a prerequisite for the reader to be familiar with DNNs to find the topic interesting and any Machine Learning lingo will be explained shortly.
DNNs have shown results that range from good to staggering in many different data-driven areas, e.g. for prediction and classification. One of many reasons for this is that with sufficiently large models, neural networks can approximate any function [5]. However, there is much to be discovered about these black box models, two concerns being about robustness and optimal model design. Studies have shown that DNNs are vulnerable to various attacks, where adding small noise to inputs can lead to significant differences in the output [7, 9]. For example, an image that is clearly of a fish which a DNN classifier also strongly believes is a fish can be such that only changing it by a tiny amount of random noise suddenly makes the classifier assign high probability to that it is now a dog. This raises the question of how stable DNN models tend to be under small perturbations such as these. Obviously, any non-trivial classification function must have the property that for some input, only a tiny amount of change leads to a different output. However, how typical is an input \(\omega\) such that the output \(f(\omega)\) changes from tiny amounts of change in \(\omega\)? In this formulation, one can clearly interpret the word "typical" in many different ways and also consider many different ways of defining what a tiny change is.
To take some small but rigorous steps towards answers, we will in this paper focus on the setting where the input into the DNN is a vector \(\omega\) of binary bits: \(\omega\in\{-1,1\}^{n}\), and the output is binary classification, \(f(\omega)\in\{-1,1\}\).
This point of view is not new and can be seen in Boolean networks. Here some research has been done for different noise settings, but to the best of our knowledge, it seems that none of it is close to what we propose, and little is strictly rigorous. To mention a few, [11] and [14] work in a Boolean network setting. In terms of DNNs, one is then considering a recurrent neural network model with the exact same weight matrix at each layer. The first of these papers considers small changes in the input, and the second paper considers small changes in the network structure. Both conclude that the final output (i.e. the fixed point) is robust to these changes.
The perhaps most natural way to talk about a "typical input", and the one that we are going to adopt, is to consider an input generated at random from a given probability distribution on \(\{-1,1\}^{n}\) (e.g. if we are considering one of the standard benchmark problems of classifying handwritten digits, we would e.g. consider inputting handwritten digits drawn from a probability distribution reflecting how people actually write digits). Then one asks what the probability is that the input is such that the DNN model changes its classification by changing the input in a tiny way.
We prove results for a fully connected DNN architecture with input noise which are valid for arbitrary probability distributions over the input and the noise (as long as the noise with high probability actually produces a change of at least one input bit). However, since the concepts studied are usually understood to assume uniform input distribution and pure noise, i.e. each input bit changes with some tiny predetermined probability independently over the bits, the presentation will be made under these conditions. However, in Sections 3.1 and 3.2, it becomes apparent that the sensitivity properties for fully connected DNN models hold under the most general conditions possible on the input distribution and the noise as will be commented on there. The later section's results rely however on uniform input and pure noise.
In summary, we intend to analyse robustness of DNNs to noise from the perspective of Boolean functions, i.e. to consider those feed-forward DNNs that represent functions with input in \(\{-1,1\}^{n}\) and output in \(\{-1,1\}\) and analyse how sensitive these are to small random noise to random input. Doing this, we find ourselves in, or at least very close to, the setting of the research field of noise sensitivity and noise stability of Boolean functions, concepts introduced in [4]. The standard references nowadays to these concepts are the textbooks [6, 10]. We will return with exact definitions and extensions of the concepts shortly, but in short, noise sensitivity means what we already said: a Boolean function \(f\) is noise sensitive if for large \(n\), \(\omega\in\{-1,1\}^{n}\) uniformly random and \(\eta\) that differs from \(\omega\) by changing a tiny random amount of randomly chosen bits, then \(f(\omega)\) and \(f(\eta)\) are virtually uncorrelated. One says that \(f\) is noise stable if such tiny changes are very unlikely to change the output of \(f\). (Clearly, a rigorous definition must be in terms of asymptotics as \(n\to\infty\)). The question in focus now becomes
Is a Boolean function represented by a given DNN noise sensitive or noise stable?
To further restrict the setting, the activation function will at all layers be assumed to be the sign function, and the linear transformation at a given node will always be without a bias term. Neither of these restrictions is common in practice, of course. Still, one can at least arguably claim that since the idea of feeding the output of a neuron into an activation function is to decide if that neuron fires or not, the sign function is "the ideal" activation (but, of course, not used in practice because of the difficulty in training).
All in all, precisely and in a way that explains the DNN lingo, each model we consider will be such that there is a given Boolean function \(h\), a given so called depth \(T\) and given so called layer sizes \(n_{1},n_{2},\ldots,n_{T}\). With those given, we are considering Boolean functions \(f\) on Boolean input strings \(\omega\in\{-1,1\}^{n}=\{-1,1\}^{n_{0}}\) that can be expressed by a choice of matrices \(W_{1},\ldots,W_{T}\), \(W_{t}\in\mathbf{R}^{n_{t}\times n_{t-1}}\) for \(t=1,\ldots,T\) and for \(\omega=\omega_{0}\in\{-1,1\}^{n}\) taking
\[\omega_{t}=\text{sign}(W_{t}\omega_{t-1}),\,t=1,\ldots,T \tag{1}\]
\[f(\omega)=h(\omega_{T}) \tag{2}\]
where the sign is taken point-wise. The most common \(h\) is of the form \(\mathrm{sign}(\mathbf{w}\,\omega_{T})\) for some row vector \(\mathbf{w}\), i.e. a prediction made from standard logistic regression on \(\omega_{T}\).
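A minimal NumPy sketch of this model class with i.i.d. standard normal weights, together with a Monte Carlo estimate of the disagreement probability under \(\epsilon\)-noise, is given below. It assumes equal layer widths, uniform input, i.i.d. bit-flip noise, and \(h\) equal to the sign of a random linear read-out, matching the common choice named in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sign_dnn(n, T):
    """Random fully connected sign-activation network of Eqns. (1)-(2),
    with i.i.d. standard normal weights and h(x) = sign(w . x)."""
    Ws = [rng.standard_normal((n, n)) for _ in range(T)]
    w_out = rng.standard_normal(n)
    def f(omega):
        x = omega.astype(float)
        for W in Ws:
            x = np.sign(W @ x)      # sign taken point-wise; a zero a.s. never occurs
        return np.sign(w_out @ x)
    return f

def disagreement_rate(f, n, eps=0.01, trials=5000):
    """Monte Carlo estimate of P(f(omega) != f(omega^eps)) for uniform omega
    and i.i.d. bit-flip noise; values near 1/2 for tiny eps suggest sensitivity."""
    flips = 0
    for _ in range(trials):
        omega = rng.choice([-1, 1], size=n)
        noise = np.where(rng.random(n) < eps, -1, 1)
        flips += f(omega) != f(omega * noise)
    return flips / trials
```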
As already stated and from what is apparent from the restrictions made, i.e. Boolean input, sign activation functions, no bias terms and, in particular, uniform probability measure over inputs and noise, makes the setting fairly far removed from practical settings. Moreover, we will not consider models that have been fit to data in any way other than a loose motivation for one of our model choices when it comes to modelling randomness in the coefficients of the \(W_{t}\)-matrices.
To consider the matrices as random is natural when taking inspiration from DNNs in practice, since when one trains the model to fit with data, i.e. minimise the loss function at hand, one usually starts the optimisation algorithm by taking the initial coefficients to be random, often i.i.d. normal. Also during training, further randomness is often brought into the picture by the use of stochastic gradient descent. In the end of course, the training algorithm converges, but since there are usually many local minima for the loss function, the randomness in the coefficients at the start makes it random which local minimum one converges to. Furthermore, convergence is almost never reached and this is on purpose, since it is common practice to use early stopping, i.e. stop training well before convergence, to avoid overfitting. As already declared though, we will not consider any data and model fitting, only observe that any training of course produces correlation between the random weight matrix components and suggest and analyse a tractable model of such correlation.
Hence, in summary, we regard this paper as mostly a contribution to the field of noise sensitivity/stability of Boolean functions inspired by an interesting and important phenomenon of DNN prediction models, rather than to applied machine learning. Nevertheless, it is a first step towards an understanding of the non-robustness phenomena of DNNs and, to our knowledge, the first strictly rigorous contribution.
Observe that when considering the \(W_{t}\)'s as random, the precise predictor function \(f\) that comes out of it is in itself a random object. This is not the case in the field of noise sensitivity/stability, and one can now ask two different things: (i) will the predictor be noise sensitive when taking both the randomness in the predictor itself and the randomness in the input and the noise into account? (ii) will the random predictor, after the weights have been drawn, with high probability be noise sensitive in the usual sense? This leads us to extend the standard definitions of noise sensitivity and noise stability to also encompass these aspects.
The paper is structured as follows. Section 2 focuses on the relevant concepts and states the relations between them. The remaining sections each focus on selected examples of models of the family of DNN architectures given by (1) and (2) with natural assumptions on the randomness of the weights.
**Section 3.1:** Fully connected DNN with \(T_{n}\) layers of equal width \(n\). All weights are assumed to be standard normal and mutually independent. This is a standard configuration of the DNN at the start of training. We prove that as soon as \(\lim_{n\to\infty}T_{n}=\infty\), the weights will with very high probability be such that the resulting classifier is very strongly sensitive to perturbations of the input no matter the input distribution and noise distribution. If \(T_{n}\) is bounded, the resulting DNN will produce a noise stable classifier.
**Section 3.2:** All weights are once again standard normal, but some of them are now correlated: the columns of each \(W_{t}\) are multivariate normal with all correlation being \(\rho_{n}\). The columns are mutually independent within and across the \(W_{t}\)'s. We show that with \(\rho_{n}\) converging to 1 sufficiently fast, the resulting DNN becomes noise stable with high probability. If \(\rho_{n}\) converges to 1 slowly enough, the resulting classifier is with high probability strongly sensitive to perturbations.
**Section 4:** \((2k+1)\)-majority on "\((2k+1)\)-trees with overlaps", i.e. the graph where the vertices/neurons in each generation share some children, see Figures 2a, 2c. It is proved that if the number
of children shared by two parents next to each other is \(2k\) (corresponding to stride \(s=1\)), then the resulting Boolean function is noise stable, whereas if the number of children shared is less than \(2k\) (stride \(s\geq 2\)), then we get a noise sensitive Boolean function. These models are convolutional neural nets where, in machine learning language, each filter represents a regular majority of the input bits. It will be observed that the results easily extend to the case where the weights of the filters are random under the only condition that there is at least some chance that a filter represents regular majority.
In the sections on the fully connected models, analysis of a certain Markov chain on \(\{0,1,\ldots,n-1,n\}\), which is symmetric around \(n/2\) and absorbs in \(0\) and \(n\), plays a central role. We believe that the structure of this Markov chain makes it interesting in its own right.
## 2 Different notions of noise sensitivity and noise stability
Let \(\omega\in\{\pm 1\}^{n}\) be an i.i.d. (1/2,1/2) Boolean row vector and let \(\{f_{n}\}\) be a sequence of Boolean functions from \(\{\pm 1\}^{n}\) to \(\{\pm 1\}\). Additionally let \(\omega^{\epsilon}\) be a Boolean row vector such that \(\omega^{\epsilon}(i)=\omega(i)\) with probability \(1-\epsilon\) and \(\omega^{\epsilon}(i)=-\omega(i)\) with probability \(\epsilon\) independently for different \(i\). View \(\omega^{\epsilon}\) as a small perturbation of \(\omega\). In this context, we can now define noise sensitivity and noise stability of a sequence of Boolean functions \(f_{n}\). In [4] these are defined as
**Definition 2.1**.: The sequence \(\{f_{n}\}\) is **noise sensitive** if for every \(0<\epsilon\leq 1/2\),
\[\lim_{n\rightarrow\infty}\operatorname{Cov}\left(f_{n}(\omega),f_{n}(\omega^ {\epsilon})\right)=0\]
**Definition 2.2**.: The sequence \(\{f_{n}\}\) is **noise stable** if
\[\lim_{\epsilon\to 0}\sup_{n}\operatorname{P}(f_{n}(\omega)\neq f_{n}( \omega^{\epsilon}))=0.\]
The definition of noise stability is easily seen to be equivalent to the condition \(\lim_{\epsilon\to 0}\limsup_{n}\operatorname{P}(f_{n}(\omega)\neq f_{n}(\omega^{\epsilon}))=0\). An example of a noise stable sequence is the weighted majority functions
\[\operatorname{maj}_{\theta^{(n)}}(\omega)=\operatorname{sign}\left(\sum_{j=1 }^{n}\theta_{j}^{(n)}\omega(j)\right),\]
where \(\theta_{1}^{(n)},\ldots,\theta_{n}^{(n)}\) are arbitrary given constants [12]. An example of a noise sensitive sequence is the parity functions [6],
\[\operatorname{par}(\omega)=\prod_{j=1}^{n}\omega(j).\]
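These two examples are easy to probe numerically. The following Monte Carlo sketch (ours, purely illustrative) estimates \(\operatorname{Cov}(f(\omega),f(\omega^{\epsilon}))\) for majority and for parity; for majority the estimate stays bounded away from \(0\), while for parity it is of order \((1-2\epsilon)^{n}\) and hence essentially \(0\).

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(omega, eps):
    """omega^eps: flip each bit independently with probability eps."""
    return np.where(rng.random(omega.shape) < eps, -omega, omega)

def estimate_cov(f, n, eps, trials=20000):
    a = np.empty(trials)
    b = np.empty(trials)
    for i in range(trials):
        omega = rng.choice([-1, 1], size=n)
        a[i] = f(omega)
        b[i] = f(perturb(omega, eps))
    return (a * b).mean() - a.mean() * b.mean()

majority = lambda om: np.sign(om.sum())   # n odd, so the sum is never 0
parity = lambda om: np.prod(om)

n, eps = 101, 0.1
print("majority:", estimate_cov(majority, n, eps))  # bounded away from 0
print("parity:  ", estimate_cov(parity, n, eps))    # ~ (1 - 2*eps)^n ~ 0
```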
For noise sensitivity, there is a more general and often much stronger concept, which will be of interest here since most fully connected and sufficiently deep neural nets will turn out to be noise sensitive in a very strong way. If a sequence of functions \(\{f_{n}\}\) is noise sensitive as defined above, then one can always find a sequence \(\epsilon_{n}\leq 1/2\) tending to \(0\) with \(n\) slowly enough such that
\[\lim_{n\rightarrow\infty}\operatorname{Cov}(f_{n}(\omega),f_{n}(\omega^{ \epsilon_{n}}))=0.\]
Since \(\operatorname{Cov}(f_{n}(\omega),f_{n}(\omega^{\epsilon}))\) is well known to be decreasing in \(\epsilon\) on \([0,1/2]\), the faster \(\epsilon_{n}\) can be taken to decrease, the stronger the statement. This leads to the following definition,
**Definition 2.3**.: Let \(\epsilon_{n}\leq 1/2\) be non-increasing in \(n\) with \(\epsilon_{n}\to 0\). The sequence \(\{f_{n}\}\) is **quantitatively noise sensitive (QNS) at level \(\{\epsilon_{n}\}\)** if,
\[\lim_{n\rightarrow\infty}\operatorname{Cov}\left(f_{n}(\omega),f_{n}(\omega^{ \epsilon_{n}})\right)=0.\]
The definitions of noise sensitivity and stability are by now standard when describing properties of deterministic sequences of Boolean functions. However, when dealing with randomness within the functions themselves we need a more general definition. This more general setup occurs in Boolean neural networks, where the network parameters \(\Theta\) can be seen as random elements (usually depending on randomness in what training data is presented to the network and in the initial values of the parameters before training). Let \(\mathcal{F}_{n}\) be the set of all Boolean functions from \(\{\pm 1\}^{n}\) to \(\{\pm 1\}\) and let \(\pi_{n}\) be an arbitrary probability measure on \(\mathcal{F}_{n}\). Recall that for \(0\leq\epsilon\leq 1/2\) and each function \(f\), we have \(\operatorname{Cov}_{\omega,\omega^{\epsilon}}(f(\omega),f(\omega^{\epsilon}))\geq 0\), and hence \(\operatorname{Cov}_{f,\omega,\omega^{\epsilon}}(f(\omega),f(\omega^{\epsilon}))\geq 0\). We can then define both quenched and annealed versions of noise sensitivity and noise stability as follows.
**Definition 2.4**.: \(\pi_{n}\) **is quenched QNS at level \(\{\epsilon_{n}\}\)** if for every \(\delta>0\) and \(0<\epsilon_{n}\leq 1/2\), there is an \(N\) such that for all \(n\geq N\)
\[\pi_{n}\{f_{n}\,:\,\operatorname{Cov}_{\omega,\omega^{\epsilon_{n}}}(f_{n}(\omega),f_{n}(\omega^{\epsilon_{n}}))\leq\delta\}\geq 1-\delta.\]
**Definition 2.5**.: \(\pi_{n}\) **is annealed QNS at level \(\{\epsilon_{n}\}\)** if for every \(0<\epsilon_{n}\leq 1/2\)
\[\lim_{n\to\infty}\operatorname{Cov}_{f_{n},\omega,\omega^{\epsilon_{n}}}(f_{n }(\omega),f_{n}(\omega^{\epsilon_{n}}))=0.\]
**Definition 2.6**.: \(\pi_{n}\) **is quenched noise stable** if for every \(\delta\) there is an \(\epsilon>0\) such that for all \(n\),
\[\pi_{n}\{f_{n}:\operatorname{P}_{\omega,\omega^{\epsilon}}(f_{n}(\omega)\neq f _{n}(\omega^{\epsilon}))<\delta\}\geq 1-\delta\]
**Definition 2.7**.: \(\pi_{n}\) **is annealed noise stable** if
\[\lim_{\epsilon\to 0}\sup_{n}\operatorname{P}_{f_{n},\omega,\omega^{\epsilon}}(f_{ n}(\omega)\neq f_{n}(\omega^{\epsilon}))=0\]
This notion of quenched noise sensitivity has arisen elsewhere; see [1, 3, 13, 2]. Notice that if \(\pi_{n}\) has support on only one Boolean function, then these definitions are equivalent to the usual ones in Definitions 2.1 and 2.2. In Theorem 2.1 we establish the relations between these definitions.
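To make the quenched/annealed distinction concrete, here is a sketch (ours, purely illustrative) of estimating the quantities of Definitions 2.4–2.5 by Monte Carlo, with \(\pi_{n}\) taken to be the law of a Gaussian weighted majority \(f(\omega)=\operatorname{sign}(\theta\cdot\omega)\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, n_funcs, n_inputs = 51, 0.1, 200, 2000

def cov_given_theta(theta):
    """Cov over (omega, omega^eps) for the fixed function f = sign(theta . )."""
    om = rng.choice([-1, 1], size=(n_inputs, n))
    om_eps = np.where(rng.random(om.shape) < eps, -om, om)
    a, b = np.sign(om @ theta), np.sign(om_eps @ theta)
    return (a * b).mean() - a.mean() * b.mean()

covs = np.array([cov_given_theta(rng.standard_normal(n)) for _ in range(n_funcs)])
# Quenched: what pi_n-fraction of functions has small covariance? (Near 0 here:
# weighted majorities are noise stable, so the quenched QNS criterion fails.)
print("fraction of f with Cov <= 0.1:", (covs <= 0.1).mean())
# Annealed: E_omega[f] = 0 by oddness, so the variance term in the
# decomposition used in Theorem 2.1 vanishes and the annealed covariance
# is simply the average of the quenched ones over f.
print("annealed covariance ~", covs.mean())
```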
**Theorem 2.1**.: _Let \(\mathcal{F}_{n}\) be the set of all Boolean functions on \(\{-1,1\}^{n}\to\{-1,1\}\) and let \(\pi_{n}\) be a probability measure on \(\mathcal{F}_{n}\). Then the following are true_
1. \(\{\pi_{n}\}\) _is annealed QNS at level_ \(\{\epsilon_{n}\}\) _iff_ \(\{\pi_{n}\}\) _is quenched QNS at level_ \(\{\epsilon_{n}\}\) _and_ \(\operatorname{Var}_{f_{n}}(\operatorname{E}_{\omega}[f_{n}(\omega)])\to 0\) _as_ \(n\to\infty\)_._
2. \(\{\pi_{n}\}\) _is annealed noise stable iff_ \(\{\pi_{n}\}\) _is quenched noise stable._
Proof.: To prove the first statement we use the conditional covariance formula that for any random variables \(X,Y\) and \(Z\)
\[\operatorname{Cov}(X,Y)=\operatorname{E}[\operatorname{Cov}(X,Y|Z)]+ \operatorname{Cov}(\operatorname{E}[X|Z],\operatorname{E}[Y|Z])\]
to observe that
\[\operatorname{Cov}_{f,\omega,\omega^{\epsilon}}(f(\omega),f(\omega^{\epsilon_{ n}}))=\operatorname{E}_{f}\left[\operatorname{Cov}_{\omega,\omega^{\epsilon_{n}}}(f( \omega),f(\omega^{\epsilon_{n}}))\right]+\operatorname{Cov}_{f}(\operatorname {E}_{\omega}[f(\omega)],\operatorname{E}_{\omega^{\epsilon_{n}}}[f(\omega^{ \epsilon_{n}})]). \tag{3}\]
Now observe the following. First, the first term on the right is always non-negative. Secondly, since \(\omega\) and \(\omega^{\epsilon_{n}}\) are equal in distribution, the second term on the right is equal to \(\operatorname{Var}_{f}(\operatorname{E}_{\omega}[f(\omega)])\).
We now prove (i), starting with showing that quenched QNS at level \(\{\epsilon_{n}\}\) together with \(\mathrm{Var}_{f}(\mathrm{E}_{\omega}[f_{n}(\omega)])\to 0\) as \(n\to\infty\) implies annealed QNS at level \(\{\epsilon_{n}\}\). Fix \(\delta\in(0,1]\). We know that there exists an \(N\) such that for all \(n>N\)
\[\pi_{n}\{f:\operatorname{Cov}_{\omega,\omega^{\epsilon_{n}}}(f(\omega),f( \omega^{\epsilon_{n}}))\leq\frac{\delta}{4}\}\geq 1-\frac{\delta}{4}\]
\[\mathrm{Var}_{f}(\mathrm{E}_{\omega}[f(\omega)])<\frac{\delta}{2}.\]
Using (3) this leads to
\[\mathrm{Cov}_{f,\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{\epsilon_{n}}) )<\mathrm{E}_{f}\left[\frac{\delta}{4}\mathrm{I}_{\mathrm{Cov}_{\omega,\omega^{ \epsilon_{n}}}(f(\omega),f(\omega^{\epsilon_{n}}))\leq\frac{\delta}{4}}+ \mathrm{I}_{\mathrm{Cov}_{\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{ \epsilon_{n}}))>\frac{\delta}{4}}\right]+\frac{\delta}{2}<\delta\]
This proves the first direction of (i). For the other direction, fix \(\delta\in(0,1]\). Since \(\pi_{n}\) is annealed QNS at level \(\{\epsilon_{n}\}\) there exists an \(N\) such that \(\forall n>N\)
\[\mathrm{Cov}_{f,\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{\epsilon_{n }}))<\delta^{2}.\]
Now, using (3) and the fact that \(\mathrm{Cov}_{\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{\epsilon_{n}} ))\geq 0\) for all \(f\) it must be that
\[\mathrm{Var}_{f}(\mathrm{E}_{\omega}[f(\omega)])\leq\mathrm{Cov}_{f,\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{\epsilon_{n}}))<\delta^{2}\leq\delta.\]
Hence the variances converge to zero. Additionally, for such \(n\) we have that \(0\leq\mathrm{Cov}_{f,\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{ \epsilon_{n}}))\)\(\leq\delta^{2}\). Now using Markov's inequality and (3) we get
\[\pi_{n}\{f:\mathrm{Cov}_{\omega,\omega^{\epsilon_{n}}}(f(\omega),f(\omega^{ \epsilon_{n}}))\geq\delta\}\leq\delta\]
which concludes (i).
Next we prove (ii). We start by showing that quenched noise stability implies annealed noise stability. Fix \(\delta>0\). \(\pi_{n}\) being quenched noise stable means that there exists an \(\epsilon>0\) such that for all \(n\)
\[\pi_{n}\{f:\mathrm{P}(f(\omega)\neq f(\omega^{\epsilon}))<\frac{\delta}{2}\} \geq 1-\frac{\delta}{2}.\]
Hence
\[\mathrm{P}_{f,\omega,\omega^{\epsilon}}\left(f(\omega)\neq f( \omega^{\epsilon})\right)=\mathrm{E}_{f}\left[\mathrm{E}_{\omega,\omega^{ \epsilon}}[\mathrm{I}_{f(\omega)\neq f(\omega^{\epsilon})}]\right]\leq\] \[\mathrm{E}_{f}\left[\frac{\delta}{2}\mathrm{I}_{\mathrm{E}_{ \omega,\omega^{\epsilon}}[I_{f(\omega)\neq f(\omega^{\epsilon})}]\leq\frac{ \delta}{2}}+\mathrm{I}_{\mathrm{E}_{\omega,\omega^{\epsilon}}[I_{f(\omega) \neq f(\omega^{\epsilon})}]>\frac{\delta}{2}}\right]<\delta.\]
This proves the first part. Now we prove that annealed noise stable implies quenched noise stable.
Fix \(\delta>0\) and pick an \(\epsilon>0\) sufficiently small such that \(\mathrm{P}_{f,\omega,\omega^{\epsilon}}(f(\omega)\neq f(\omega^{\epsilon}))<\delta^{2}\). Such an \(\epsilon\) is guaranteed to exist since \(\pi_{n}\) is annealed noise stable. Then due to Markov's inequality
\[\pi_{n}(f:\mathrm{P}_{\omega,\omega^{\epsilon}}(f(\omega)\neq f(\omega^{ \epsilon}))>\delta)<\delta\]
which gives us quenched noise stable. This proves (ii).
As seen from statement (ii), \(\pi_{n}\) being annealed noise stable is equivalent to \(\pi_{n}\) being quenched noise stable. Therefore we will from now on only refer to it as \(\pi_{n}\) being noise stable.
## 3 Random Boolean feed forward neural networks
In this section, we investigate a Boolean function structure \(f(\omega,\Theta)\) inspired by feed-forward neural networks. Let \(\omega_{0}\in\{-1,1\}^{n}\) be the input bits as a column vector. Then we can recursively define \(\omega_{t}=\mathrm{sign}(\theta_{t}\omega_{t-1})\) where \(\theta_{t}\in\mathbf{R}^{n\times n}\) and sign acts pointwise. In the deep learning literature, the \(\theta\)'s and sign would be referred to as the weights of \(f\) and the activation function respectively. Each \(t\) corresponds to a layer, where \(\omega_{t}\) are seen as the bits, or nodes, at layer \(t\). The iteration is done for \(t=1,\ldots,T\) for some predetermined number \(T=T_{n}\) giving us the
final output \(h(\omega_{T})\) where \(h=h_{n}\) is some Boolean function \(\{-1,1\}^{n}\to\{-1,1\}\). A known fact is that a typical neural network can approximate any function to arbitrary accuracy as long as the number of tunable parameters (a.k.a. weights), \(\Theta=\{\theta_{1},\ldots,\theta_{T}\}\), is large enough. In the Boolean setting, these models can still represent a huge number of Boolean functions. However, there are some limitations, since no bias term is present in our Boolean network, whereas one typically is in neural networks in practice. Typically \(\Theta\) is determined by some training algorithm based on the observed data, which maximises the likelihood of the model. This means that \(\Theta\) is not deterministic, since there is randomness both in the observed data and in the optimisation algorithm. Therefore we can consider a probability measure \(\pi_{n}\) on all \(f\) where the randomness comes from \(\Theta\).
Here we consider cases where the \(\theta_{t}\)'s are independent and, for each \(t\), the columns of \(\theta_{t}\) are independent and jointly normal \(N(0,\Sigma)\) with two different versions of \(\Sigma\). These cases, which thus induce their respective measures \(\{\pi_{n}\}\) on the set of Boolean function, are
1. \(\Sigma=\mathbf{I}_{n\times n}\).
2. \(\Sigma_{i,i}=1\) for all \(i\) and \(\Sigma_{i,i^{\prime}}=\rho\) for all \(i\neq i^{\prime}\). A useful way to construct such \(\theta_{t}\)'s is to define \(\theta_{t}(i,j)=\sqrt{\rho}\nu_{t}(j)+\sqrt{1-\rho}\psi_{t}(i,j)\) where the \(\nu_{t}(j)\)'s and \(\psi_{t}(i,j)\)'s are \(N(0,1)\) and all independent.
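The construction in case 2 is easy to check numerically; the following snippet (ours, illustrative) samples \(\theta_{t}\) this way and verifies that the empirical covariance matrix of a column is close to the prescribed \(\Sigma\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, samples = 5, 0.4, 100000

nu = rng.standard_normal((samples, 1, n))      # one nu(j) per column j
psi = rng.standard_normal((samples, n, n))     # independent psi(i,j)
theta = np.sqrt(rho) * nu + np.sqrt(1 - rho) * psi

col = theta[:, :, 0]                           # samples of the first column
print(np.round(np.cov(col.T), 2))              # ~1 on the diagonal, ~rho off it
```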
The case \(\rho=0\) represents a typical starting state for the network before training. After some training, we expect the parameters of the network to be dependent. The cases \(\rho>0\) are examples of such dependence. Of course, we do not expect that the true dependence after training is represented in this way. Our assumptions should therefore be viewed as a simplifying mathematical framework under which our theorems can be proved. Somewhat related to this question of the behaviour of the parameters, in [8], a particular statistical structure for the values of intermediate layers in some convolution codes was discovered.
The following lemma is crucial.
**Lemma 3.1**.: Let \(x,y\in\{-1,1\}^{n}\) be column vectors such that \(|\{i:x(i)\neq y(i)\}|=nv\) for some \(v\in[0,1]\) and let \(\theta\) be a random row vector such that \(\theta\sim N(0,I_{n\times n})\). Then
\[\mathrm{P}\left(\mathrm{sign}(\theta x)\neq\mathrm{sign}(\theta y)\right)= \frac{2}{\pi}\arctan\left(\sqrt{\frac{v}{1-v}}\right)\]
Proof.: Let \(C\) be the set of indices where \(x\) and \(y\) differ. Notice that \(|C|=nv\). Consider the line segment between \(x\) and \(y\) defined as \(\frac{y+x}{2}+\tau\frac{y-x}{2}\), \(\tau\in[-1,1]\). Then \(\mathrm{sign}(\theta x)\neq\mathrm{sign}(\theta y)\) if there is a solution to
\[\theta\left(\frac{y+x}{2}+\tau\frac{y-x}{2}\right)=0 \tag{4}\]
for some \(\tau\in[-1,1]\).
Now, let \(A_{\Lambda,k}=\{j\::\:j\in\Lambda,\:x(j)=k\}\) for some set \(\Lambda\). Since both \(x\) and \(y\) are in the hypercube \(\{-1,1\}^{n}\), the condition can be rewritten as
\[\left|\sum_{j\in A_{C^{c},1}}\theta(j)-\sum_{j\in A_{C^{c},-1}}\theta(j) \right|\leq\left|\sum_{j\in A_{C,1}}\theta(j)-\sum_{j\in A_{C,-1}}\theta(j) \right|. \tag{5}\]
Define \(X\) and \(Y\) from the following equations
\[\sqrt{nv}X=\sum_{j\in A_{C,1}}\theta(j)-\sum_{j\in A_{C,-1}}\theta(j)\]
and
\[\sqrt{n-nv}Y=\sum_{j\in A_{C^{c},1}}\theta(j)-\sum_{j\in A_{C^{c},-1}}\theta(j).\]
Then it is easy to check that \(X\) and \(Y\) are independent standard normal and that the event in (5) can be rewritten as
\[-\sqrt{\frac{v}{1-v}}|X|\leq Y\leq\sqrt{\frac{v}{1-v}}|X|.\]
Due to symmetry around \(Y=0\), this results in
\[\mathrm{P}\left(-\sqrt{\frac{v}{1-v}}|X|\leq Y\leq\sqrt{\frac{v}{1-v}}|X| \right)=\int_{-\infty}^{\infty}\int_{-\sqrt{\frac{v}{1-v}}|x|}^{\sqrt{\frac{v} {1-v}}|x|}\frac{1}{2\pi}e^{-(x^{2}+y^{2})/2}dydx\]
\[=2\int_{-\arctan\left(\sqrt{\frac{v}{1-v}}\right)}^{\arctan\left(\sqrt{\frac{v}{1-v}}\right)}\int_{0}^{\infty}\frac{r}{2\pi}e^{-r^{2}/2}\,dr\,d\varphi=\int_{-\arctan\left(\sqrt{\frac{v}{1-v}}\right)}^{\arctan\left(\sqrt{\frac{v}{1-v}}\right)}\frac{1}{\pi}\,d\varphi=\frac{2}{\pi}\arctan\left(\sqrt{\frac{v}{1-v}}\right)\]
where the second equality is due to the substitution \(x=r\cos(\varphi)\) and \(y=r\sin(\varphi)\). This concludes the proof.
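The formula of Lemma 3.1 can be checked by direct simulation; the following sketch (ours) compares the empirical disagreement probability with \(\frac{2}{\pi}\arctan(\sqrt{v/(1-v)})\).

```python
import numpy as np

rng = np.random.default_rng(4)
n, v, trials = 200, 0.3, 20000

x = rng.choice([-1, 1], size=n)
y = x.copy()
y[: int(n * v)] *= -1                          # flip the first n*v coordinates

theta = rng.standard_normal((trials, n))       # each row is one draw of theta
disagree = np.sign(theta @ x) != np.sign(theta @ y)
print("empirical :", disagree.mean())
print("Lemma 3.1 :", 2 / np.pi * np.arctan(np.sqrt(v / (1 - v))))
```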
In both parameter settings 1 and 2, we have independence between the weights leading into a node. This means, according to Lemma 3.1, that given that \(\omega_{t-1}\) and \((\omega^{\epsilon})_{t-1}\) differ at \(nv\) bits, the probability that \(\omega_{t}(i)\) and \((\omega^{\epsilon})_{t}(i)\) differ is \(\frac{2}{\pi}\arctan(\sqrt{\frac{v}{1-v}})\) for all \(i\). The difference between the two parameter settings is that in 2 the output bits of a layer are usually correlated. Also, due to symmetry, the probability of a disagreement at a fixed point, i.e. \(\omega_{t}(i)\neq(\omega^{\epsilon})_{t}(i)\), at layer \(t\) depends only on the number of disagreements between \(\omega_{t-1}\) and \((\omega^{\epsilon})_{t-1}\) and not on where they disagree. This means that the number of disagreements at layer \(t\), which we denote by \(D_{t}=D_{t}^{\epsilon}=D_{t}^{\epsilon_{n}}\), can in both cases be seen as a Markov chain with \(n+1\) states, where \(D_{t}=0\) and \(D_{t}=n\) are absorbing states. Notice that \(D_{t}\) depends on the initial noise \(\epsilon\) since \(D_{0}\) corresponds to the number of bit disagreements created by the initial noise. However, for the sake of lighter notation we will not have a specific suffix showing this dependence.
### Uncorrelated networks
In the uncorrelated case, \(D_{t}\) is binomially distributed according to \((D_{t}|D_{t-1}=nv)\sim\mathrm{Bin}(n,g(v))\) where \(g(v):=\frac{2}{\pi}\arctan\left(\sqrt{\frac{v}{1-v}}\right)\). Note that if \(\epsilon_{n}\) is of order \(1/n\) or lower, then \(\mathrm{P}(D_{1}=0)\) stays bounded away from \(0\) while \(\mathrm{P}(D_{1}>n/2)\to 0\), which by symmetry implies that there cannot be QNS at that level. Hence one must have at least \(n\epsilon_{n}\to\infty\) for QNS to be possible.
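The binomial transition mechanism is easy to simulate; the sketch below (ours, illustrative) runs the chain \(D_{t}\) from a small initial disagreement and exhibits the drift towards \(n/2\) that drives the sensitivity results below.

```python
import numpy as np

rng = np.random.default_rng(5)

def g(v):
    return 2 / np.pi * np.arctan(np.sqrt(v / (1 - v)))

n, T, eps = 1000, 15, 0.01
D = rng.binomial(n, eps)                        # D_0 ~ Bin(n, eps)
traj = [D]
for _ in range(T):
    if 0 < D < n:                               # 0 and n are absorbing
        D = rng.binomial(n, g(D / n))           # (D_t | D_{t-1} = nv) ~ Bin(n, g(v))
    traj.append(D)
print(traj)   # climbs from ~10 towards ~n/2 within a handful of layers
```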
The following theorem almost entirely considers the sensitivity properties of \(f_{n,T_{n}}\) and shows that, under very mild assumptions, for \(T_{n}\) larger than a specified function of \(n\), \(\{f_{n,T_{n}}\}\) is QNS in the strongest possible sense (in particular, \(f_{n,T_{n}}\) is noise sensitive in the original sense as soon as \(T_{n}\to\infty\)). Indeed, since the distribution of \(\theta_{t}\) is such that even if we know \(\omega\) and \(\omega^{\epsilon}\), any trace of that knowledge is forgotten after the first layer, \(f_{n,T_{n}}\) has very strong sensitivity properties even for fixed input and fixed noise. What we mean precisely with this is formulated separately in Theorem 3.2. Note in particular that part (ii) implies that the input and noise distribution do not need to be uniform but can indeed be taken to reflect what is realistic in the application at hand, e.g. a distribution over images of real handwritten digits.
Parts of Theorem 3.2 are clearly stronger than their counterparts in Theorem 3.1, but we find it natural to state and prove the weaker statements first and then extend them by pointing out the fairly minor extra observations that need to be made in the proof.
**Theorem 3.1**.: _Consider the above fully connected network with i.i.d. normal entries in each \(\theta_{t}\) and i.i.d. \(\theta_{t}\)'s. Let \(1/2\geq\epsilon_{n}\downarrow 0\) be such that \(n\epsilon_{n}\to\infty\) and let \(\lim_{n\to\infty}K_{n}/\log(1/\epsilon_{n})=\infty\). Then_
* _if_ \(\lim_{n\to\infty}b_{n}=0\)_,_ \(\lim_{n\to\infty}nb_{n}=\infty\) _and_ \(T_{n}\in[K_{n},e^{b_{n}n}]\)_, then for any Boolean functions_ \(\{h_{n}\}\)_, the resulting_ \(\{f_{n,T_{n}}\}\) _is annealed QNS at level_ \(\{\epsilon_{n}\}\) _with respect to_ \(\{\pi_{n}\}\)_,_
* _if the_ \(h_{n}\)_'s are odd and_ \(T_{n}\geq K_{n}\)_, then_ \(\{f_{n,T_{n}}\}\) _is annealed QNS at level_ \(\{\epsilon_{n}\}\)_,_
* _if_ \(T_{n}\geq K_{n}\) _then for any_ \(\{h_{n}\}\)_,_ \(\{f_{n,T_{n}}\}\) _is quenched QNS at level_ \(\{\epsilon_{n}\}\)_,_
* _there are Boolean functions_ \(\{h_{n}\}\) _such that for_ \(T_{n}\) _growing sufficiently fast with_ \(n\)_,_ \(\{f_{n,T_{n}}\}\) _is not annealed noise sensitive._
_In addition,_
* _if_ \(h_{n}\) _is noise stable and_ \(T_{n}\) _is bounded, then_ \(\{f_{n,T_{n}}\}\) _is annealed (and hence quenched) noise stable._
**Remark.** If one randomly chooses a sequence of Boolean functions uniformly among _all_ Boolean functions, it is known that the sequence will asymptotically almost surely be noise sensitive; see Exercise 1.14 in [6]. While the Boolean functions arising in Theorem 3.1 here are also random, they have a very specific form.
**Theorem 3.2**.: _Consider the above fully connected network with i.i.d. normal entries in each \(\theta_{t}\) and i.i.d. \(\theta_{t}\)'s. Let \(1/2\geq\epsilon_{n}\downarrow 0\) be such that \(n\epsilon_{n}\to\infty\) and let \(\lim_{n\to\infty}K_{n}/\log(1/\epsilon_{n})=\infty\). Then_
* _under the assumptions of either (i) or (ii) in Theorem_ 3.1_, for any fixed_ \(\omega,\eta\in\{-1,1\}^{n}\) _with_ \(\omega\not\in\{\eta,-\eta\}\)_,_ \[\lim_{n\to\infty}\operatorname{Cov}_{\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{n}}( \eta))=0.\]
* _under the assumptions of either (i) or (ii) in Theorem_ 3.1_, for any probability measure_ \(\mathbf{Q}_{n}\) _on_ \(\{-1,1\}^{n}\times\{-1,1\}^{n}\) _such that_ \(\lim_{n\to\infty}\mathbf{Q}_{n}(\{(\omega,\eta):\eta\in\{\omega,-\omega\}\})=0\)_._ \[\lim_{n\to\infty}\operatorname{Cov}_{\mathbf{Q}_{n},\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{n}}(\eta))=0.\]
* _Assume that_ \(h_{n}\) _is odd and fix any_ \(k\in\{1,2,\ldots,n-1\}\) _and_ \(\delta>0\)_. Fix also_ \(\omega\in\{-1,1\}^{n}\) _and let_ \(M_{k}=M_{k}^{(n)}(\omega)\) _be the number of_ \(\eta\) _with_ \(\eta(i)\neq\omega(i)\) _for exactly_ \(k\) _indexes_ \(i\)_, such that_ \(f_{n,T_{n}}(\eta)\neq f_{n,T_{n}}(\omega)\)_. Then for_ \(T_{n}\geq K_{n}\)_,_ \[\lim_{n\to\infty}\mathrm{P}\left(\frac{M_{k}}{\binom{n}{k}}\not\in\left(\frac {1-\delta}{2},\frac{1+\delta}{2}\right)\right)=0.\]
Proof of Theorem 3.1.: Recall \(D_{t}=D_{t}^{\epsilon_{n}}\) from above. In the sequel in (i)-(iii), to not burden the notation, write just \(\epsilon\) for \(\epsilon_{n}\) with the understanding that \(\epsilon=\epsilon_{n}\).
The conditional distribution of \((\omega_{T},(\omega^{\epsilon})_{T})\) given \(\mathcal{F}_{T-1}:=\sigma(\omega_{0},(\omega^{\epsilon})_{0},\theta_{1}, \ldots,\theta_{T-1})\) equals that of \((\omega_{0},(\omega^{g(D_{T-1}/n)})_{0})\). In other words; to determine the distribution of \((\omega_{T},\omega_{T}^{\epsilon})\) given \(\mathcal{F}_{T-1}\), we only need to know \(D_{T-1}\). Consequently, \(\mathrm{E}[f(\omega_{0})f((\omega^{\epsilon})_{0})|D_{T-1}=d]=\mathrm{E}[h( \omega_{T})h((\omega^{\epsilon})_{T})|D_{T-1}=d]=\mathrm{E}[h(\omega_{0})h(( \omega^{g(d/n)})_{0})]\). Let us study how \(D_{t}\) behaves, started from \(D_{0}\sim\mathrm{Bin}(n,\epsilon)\). First observe that \(g(v)/v\) is decreasing on \((0,1/2)\). This holds as \(g(0)=0\) and an easy computation shows \(g^{\prime\prime}(v)<0\) for \(v\in[0,1/2]\). Since \(g^{\prime}(1/2)=2/\pi\), it follows from Taylor's formula that
\[g\left(\frac{1}{2}-\delta\right)>\frac{1}{2}-\frac{2}{3}\delta\]
for sufficiently small \(\delta>0\). Fixing such a \(\delta\), it then follows that \(g(d/n)>(1+2\delta/3)d/n\) whenever \(1\leq d<n/2-\delta n\). This means by Chernoff bounds and the fact that \(g\) is increasing that there is \(\kappa=\kappa(\delta,\epsilon)>0\) such that
* for \(\epsilon n/2<d<(1/2-\delta)n\), \(\mathrm{P}(D_{t+1}<(1+2\delta/3)d|D_{t}=d)<e^{-\kappa\sqrt{n}}\),
* for \(d>(1/2-\delta)n\), \(\mathrm{P}(D_{t+1}<(1/2-\delta)n|D_{t}=d)<e^{-\kappa n}\).
(Here the \(\sqrt{n}\) in the exponent in the first point follows from the fact that \(g(1/n)\) is of order \(1/\sqrt{n}\) and \(\epsilon_{n}>1/n\) for \(n\) large.) Combining these two points and the symmetry of \(g\) around \(1/2\),
\[\mathrm{P}\left(\exists t\in\left[\frac{\log\frac{1-2\delta}{\epsilon}}{\log \left(1+\frac{2\delta}{3}\right)},T\right]:D_{t}\not\in\left[\left(\frac{1}{2} -\delta\right)n,\left(\frac{1}{2}+\delta\right)n\right]\right)<Te^{-\kappa n}.\]
To prove (i), let \(L_{n}=e^{b_{n}n}\). Take now \(T\in\left[1+\frac{\log\frac{1-2\delta}{\epsilon_{n}}}{\log\left(1+\frac{2 \delta}{3}\right)},L_{n}\right]\). Conditionally on \(D_{T-1}=d\in[(1/2-\delta)n,(1/2+\delta)n]\), we have that \((\omega_{T},(\omega^{\epsilon})_{T})\) equals in distribution \((\omega_{0},(\omega^{\alpha})_{0})\) for \(\alpha=g(d/n)\in[1/2-\delta,1/2+\delta]\). Equation (4.2) in Section 4.3 in [6] implies that for \(\rho>0\), there exists \(\delta>0\) so that for all Boolean functions \(h\) and for all \(\alpha\in[1/2-\delta,1/2+\delta]\),
\[\mathrm{Cov}(h(\omega),h(\omega^{\alpha}))<\rho.\]
Hence for \(\rho>0\) and \(\delta>0\) sufficiently small it follows from the above that \(\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0})|D_{T-1}\)\(\in[(1/2-\delta)n,(1/2+\delta)n]])<\rho/2\). Since \(\mathrm{P}(D_{T-1}\in[(1/2-\delta)n,(1/2+\delta)n])>1-L_{n}e^{-\kappa n}>1- \rho/8\) for large \(n\), we get
\[\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0}))<\rho.\]
To see this, let \(B\) be the event \(\{D_{T-1}\in[(1/2-\delta)n,(1/2+\delta)n]\}\) and recall that \(\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0}))\)\(=\mathrm{E}[\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0})|I_{B})]+ \mathrm{Var}(\mathrm{E}[f(\omega_{0})|I_{B}])\). Since \(\mathrm{P}(B^{c})<\rho/8\), it is easy to see that the first term is bounded by \(\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0})|B)+\rho/4\) and the second term is bounded by \(\rho/2\).
This proves that for some constant \(K=K(\rho,\epsilon)\), \(T\in[K,L_{n}]\) and \(n\) sufficiently large, \(\mathrm{Cov}(f(\omega),f(\omega^{\epsilon}))\)\(<\rho\). In particular if \(K_{n}\rightarrow\infty\) and \(T\in[K_{n},L_{n}]\), then \(f\) is annealed QNS. This proves (i).
Next consider (ii) and (iii). This amounts to considering what happens to \(D_{t}\) in the long run. We have argued that with overwhelming probability, \(D_{t}\) will quickly approach \(n/2\) and stay there for a very long time. However, after an even longer time, \(D_{t}\) will end up in one of the absorbing states \(0\) or \(n\). Then the above argument for annealed QNS does not hold (since it is no longer true that \(D_{t}\in[(1/2-\delta)n,(1/2+\delta)n]\) with high probability).
Let \(\{D_{t}^{\prime}\}\) be a copy of \(\{D_{t}\}\) but started with \(D_{0}^{\prime}\sim\mathrm{Bin}(n,1/2)\). We can couple \((D_{0},D_{0}^{\prime})\) so that \(D_{0}\leq D_{0}^{\prime}\), so do that.
Since \(g\) is increasing, the distribution of \(D_{t}\) given \(D_{t-1}=d\) is increasing in \(d\). Hence, since we have coupled so that \(D_{0}\leq D_{0}^{\prime}\), there is a further coupling \((D,D^{\prime})\) such that \(D_{t}^{\prime}\geq D_{t}\) for every \(t\). Use such a coupling from now on and note that a consequence is that if at some point \(D_{t}^{\prime}=D_{t}\), then also \(D_{s}^{\prime}=D_{s}\) for all \(s\geq t\). By the above with \(t_{0}=1+\log(4\epsilon)/\log(6/7)\), \(P(\forall t\in[t_{0},L_{n}]:D_{t},D_{t}^{\prime}\in[n/4,3n/4])>1-L_{n}e^{- \kappa n}>1-e^{-\kappa n/2}\) for large \(n\). For all \(d,d^{\prime}\in[n/4,3n/4]\) and \(d\leq d^{\prime}\),
\[\mathrm{E}[D_{t}^{\prime}-D_{t}|D_{t-1}=d,D_{t-1}^{\prime}=d^{ \prime}] =n\left(g\left(\frac{d^{\prime}}{n}\right)-g\left(\frac{d}{n}\right)\right)\] \[<\frac{3}{4}(d^{\prime}-d),\]
where the last inequality follows on observing that \(g^{\prime}(v)<3/4\) for \(v\in[1/4,3/4]\). This gives for \(t\in[t_{0},L_{n}]\),
\[\mathrm{E}[D_{t}^{\prime}-D_{t}] \leq ne^{-\kappa n/2}+\mathrm{E}\left[D_{t}^{\prime}-D_{t}|D_{t-1 },D_{t-1}^{\prime}\in\left[\frac{n}{4},\frac{3n}{4}\right]\right]\] \[\leq 2ne^{-\kappa n/2}+\frac{3}{4}\mathrm{E}[D_{t-1}^{\prime}-D_{t-1 }].\]
For such \(t\), \(\mathrm{E}[D_{t-1}^{\prime}-D_{t-1}]\geq 1/n\) and the right hand side is smaller than \((4/5)\mathrm{E}[D_{t-1}^{\prime}-D_{t-1}]\) for large \(n\). By induction we thus get \(\mathrm{E}[D_{t_{0}+t}^{\prime}-D_{t_{0}+t}]\leq\max(1/n,(4/5)^{t}n)\) for \(t_{0}+t\leq L_{n}\). This gives \(\mathrm{E}[D_{t_{0}+t}^{\prime}-D_{t_{0}+t}]\leq 1/n\) whenever \(L_{n}\geq t\geq 9\log n\) and \(n\) large. Hence \(\mathrm{E}[D_{t}^{\prime}-D_{t}]\leq 1/n\) for all
\(L_{n}\geq t\geq 10\log n\) and \(n\) large. By Markov's inequality, \(\mathrm{P}(D_{t}^{\prime}\neq D_{t})\leq 1/n\) for all \(L_{n}\geq t\geq 10\log n\) and \(n\) large. Since if \(D_{t}^{\prime}=D_{t}\) for some \(t\), then \(D_{s}^{\prime}=D_{s}\) for all \(s\geq t\), we get \(\mathrm{P}(D_{t}^{\prime}\neq D_{t})\leq 1/n\) for all \(t\geq 10\log n\).
Thus, taking \(T\geq 10\log n\), \(\mathrm{P}(D_{T}\neq D_{T}^{\prime})<1/n\). This means that the total variation distance between the distribution of \(D_{T}\) and \(D_{T}^{\prime}\) is less than \(1/n\), i.e. \(\sum_{d=0}^{n}|\mathrm{P}(D_{T}^{\prime}=d)-\mathrm{P}(D_{T}=d)|<2/n\). This gives for \(T\geq 10\log n+1\)
\[\mathrm{E}[f(\omega_{0})f((\omega^{1/2})_{0})] =\mathrm{E}[h(\omega_{T})h((\omega^{1/2})_{T})]\] \[=\sum_{d}\mathrm{E}[h(\omega_{T})h((\omega^{1/2})_{T})|D_{T-1}^{\prime}=d]\,\mathrm{P}(D_{T-1}^{\prime}=d)\] \[\geq\sum_{d}\mathrm{E}[h(\omega_{T})h((\omega^{\epsilon})_{T})|D_{T-1}=d]\,\mathrm{P}(D_{T-1}=d)-\sum_{d}|\mathrm{P}(D_{T-1}^{\prime}=d)-\mathrm{P}(D_{T-1}=d)|\] \[\geq\mathrm{E}[h(\omega_{T})h((\omega^{\epsilon})_{T})]-\frac{2}{n}\] \[=\mathrm{E}[f(\omega_{0})f((\omega^{\epsilon})_{0})]-\frac{2}{n},\]
where the first inequality follows from the fact that \(|h|\leq 1\). To now prove quenched QNS for all \(h\) and \(T\geq 10\log n\), observe that we now have
\[\mathrm{E}[\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0})| \Theta)] =\mathrm{E}[\mathrm{Cov}(f(\omega_{0}),f((\omega^{\epsilon})_{0})| \Theta)-\mathrm{Cov}(f(\omega_{0}),f((\omega^{1/2})_{0})|\Theta)]\] \[=\mathrm{E}[\mathrm{E}[f(\omega_{0})f((\omega^{\epsilon})_{0})| \Theta]-\mathrm{E}[f(\omega_{0})f((\omega^{1/2})_{0})|\Theta]]\] \[=\mathrm{E}[f(\omega_{0})f((\omega^{\epsilon})_{0})]-\mathrm{E} [f(\omega_{0})f((\omega^{1/2})_{0})]\] \[<\frac{2}{n}.\]
Since for any fixed \(f\), \(\mathrm{Cov}(f(\omega),f(\omega^{\epsilon}))\geq 0\), this implies quenched QNS for \(T\geq 10\log n\). Combining with (i), this gives (iii).
If \(h\) is also odd and \(T\geq 10\log n\), we have \(h(-(\omega^{1/2})_{T})=h((-\omega^{1/2})_{T})=-h((\omega^{1/2})_{T})\) and since \((\omega,\omega^{1/2})=_{d}(\omega,-\omega^{1/2})\), we have \(\mathrm{E}[h(\omega_{T})h((\omega^{1/2})_{T})]=0\). Thus
\[\mathrm{E}[f(\omega_{0})f((\omega^{\epsilon})_{0})]\leq\frac{2}{n}.\]
Since \(h\) odd implies \(\mathrm{E}[f(\omega_{0})]=0\), this proves annealed QNS for all odd \(h\) and \(T\geq 10\log n\) and combining with (i) we get (ii).
To prove (iv), we need an example of an \(h\) that is not odd and where annealed noise sensitivity does not hold for large \(T\). To achieve this, first observe that at each \(t\), \(\mathrm{P}(D_{t}\in\{0,n\}|D_{t-1})\geq 2^{-n+1}\). Hence, as \(D_{t}\in\{0,n\}\) implies that \(D_{t^{\prime}}=D_{t}\) for all \(t^{\prime}\geq t\), for \(T_{n}\) such that \(T_{n}2^{-n}\to\infty\), \(\mathrm{P}(D_{T_{n}}\not\in\{0,n\})\to 0\). This entails \(\mathrm{P}((\omega^{\epsilon})_{T_{n}}=\pm\omega_{T_{n}})\to 1\). Now let \(h_{n}\) be any even function. Then \(\mathrm{P}(h_{n}(\omega_{T_{n}})\neq h_{n}((\omega^{\epsilon})_{T_{n}}))\to 0\) and so in fact \(f_{n}\) is annealed noise stable. If also \(-1<\liminf_{n}\mathrm{E}[h_{n}(\omega)]\leq\limsup_{n}\mathrm{E}[h_{n}( \omega)]<1\), then \(\{f_{n}\}\) is not annealed noise sensitive.
For (v), assume for simplicity that \(T_{n}\) equals the constant \(T\); going from this to general bounded \(T_{n}\) is easy and left to the reader. Fix a small \(\delta>0\). Since \(\{h_{n}\}\) is noise stable one can pick \(\rho>0\) small enough that \(\sup_{n}\mathrm{P}(h_{n}(\omega)\neq h_{n}(\omega^{\xi}))<\delta\) whenever \(\xi<\rho\). Pick such a \(\rho\) and let \(\epsilon=\rho^{2^{T}}\). We have \(g(v)=(2/\pi)\arctan\sqrt{v/(1-v)}<(2/3)\sqrt{v}\) for \(v<\rho\) and \(\rho\) small enough. By Chernoff bounds we then have that there is a \(\kappa>0\) independent of \(n\) such that
\[\mathrm{P}(D_{T-1}^{\epsilon}\geq n\rho^{2})<e^{-\kappa n}.\]
Given \(D_{T-1}=nv\) for \(v<\rho^{2}\), the conditional distribution of \((\omega_{T},(\omega^{\epsilon})_{T})\) is that of \((\omega,\omega^{g(v)})\) and since \(g(v)<\rho\), we get by noise stability of \(h_{n}\) that
\[\mathrm{P}(f_{n,T_{n}}(\omega)\neq f_{n,T_{n}}(\omega^{\epsilon})|D_{T-1}^{\epsilon}<n\rho^{2})<\delta.\]
Summing up, this now gives
\[\mathrm{P}(f_{n,T_{n}}(\omega)\neq f_{n,T_{n}}(\omega^{\epsilon}))<e^{-\kappa n}+\delta\]
This easily implies noise stability.
Proof of Theorem 3.2.: Starting with (i), this follows from the simple observation that if \(d\) is the number of disagreements between \(\omega\) and \(\eta\), then \(\omega_{1}\) is a vector of i.i.d. fair coin flips and, given \(\omega_{1}\), \(\eta_{1}\) differs from \(\omega_{1}\) in uniformly random positions whose number is binomial with parameters \(n\) and \(g(d/n)\). Hence all arguments from (i) or (ii) in Theorem 3.1 can be copied from above from \(t=1\).
Part (ii) is an immediate corollary of (i) on observing that
\[\mathrm{Cov}_{\mathbf{Q}_{n},\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{n}}(\eta))= \mathrm{E}_{\mathbf{Q}_{n}}[\mathrm{Cov}_{\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{ n}}(\eta))]+\mathrm{Cov}_{\mathbf{Q}_{n}}(\mathrm{E}_{\Theta}[f_{n,T_{n}}( \omega)],\mathrm{E}_{\Theta}[f_{n,T_{n}}(\eta)])\]
and by the invariance and symmetry properties of \(\Theta\), \(\mathrm{E}_{\Theta}[f_{n,T_{n}}(\omega)]\) is independent of \(\omega\) and thus a constant and hence \(\mathrm{Cov}_{\mathbf{Q}_{n}}(\mathrm{E}_{\Theta}[f_{n,T_{n}}(\omega)],\mathrm{ E}_{\Theta}[f_{n,T_{n}}(\eta)])=0\).
Now for (iii), fix \(\omega\), let \(A_{k}\) be the set of \(\eta\) that differ from \(\omega\) in exactly \(k\) positions and take \(\eta\in A_{k}\). We have by (i), since \(h_{n}\) odd implies \(\mathrm{E}[f_{n,T_{n}}(\omega)]=\mathrm{E}[f_{n,T_{n}}(\eta)]=0\), that \(|\mathrm{E}[f_{n,T_{n}}(\omega)f_{n,T_{n}}(\eta)]|<\delta^{4}\), i.e. \(\mathrm{P}(f_{n,T_{n}}(\omega)\neq f_{n,T_{n}}(\eta))\in(1/2-\delta^{4}/2,1/2+\delta^{4}/2)\), for \(n\) large.
Now let \(\bar{M}_{k}=\bar{M}_{k}^{(n)}(\omega)=|\{\{\eta,\xi\}\subseteq A_{k}:\eta\neq\xi,\,f_{n,T_{n}}(\eta)\neq f_{n,T_{n}}(\xi)\}|\). It follows that
\[\mathrm{E}[\bar{M}_{k}]\in\left((1-\delta^{4})\frac{1}{2}\binom{\binom{n}{k}} {2},(1+\delta^{4})\frac{1}{2}\binom{\binom{n}{k}}{2}\right)\subseteq\left((1- \delta^{4})\frac{1}{4}\binom{n}{k}^{2},(1+\delta^{4})\frac{1}{4}\binom{n}{k}^ {2}\right).\]
Let \(M\) be the maximum value that \(\bar{M}_{k}\) can take on, so that
\[M\leq\left(\frac{\binom{n}{k}}{2}\right)^{2}=\frac{1}{4}\binom{n}{k}^{2}.\]
Since \(M-\bar{M}_{k}\) is nonnegative, it follows from Markov's inequality that
\[\mathrm{P}\left(\bar{M}_{k}\leq(1-\delta^{2})\frac{1}{4}\binom{n}{k}^{2} \right)<\delta^{2}.\]
Also if \(\bar{M}_{k}>(1-\delta^{2})\frac{1}{4}\binom{n}{k}^{2}\), we have \(X_{k}^{+}\in((1-\delta)\frac{1}{2}\binom{n}{k},(1+\delta)\frac{1}{2}\binom{n} {k})\), where \(X_{k}^{+}\) is the number of \(\eta\in A_{k}\) with \(f_{n,T_{n}}(\eta)=1\). This gives \(M_{k}\in((1-\delta)\frac{1}{2}\binom{n}{k},(1+\delta)\frac{1}{2}\binom{n}{k})\) and hence for \(n\) sufficiently large,
\[\mathrm{P}\left(\frac{M_{k}}{\binom{n}{k}}\not\in\left(\frac{1-\delta}{2}, \frac{1+\delta}{2}\right)\right)<\delta^{2}.\]
This proves (iii).
(The property described in statement (ii) of Theorem 3.2 could be referred to as \(f_{n,T_{n}}\) being annealed noise sensitive with respect to \(\mathbf{Q}_{n}\).)
### Correlated networks
In this section we will investigate the noise sensitivity of a deep network where the network weights \(\theta\) are sampled from a correlated normal distribution, i.e. case 2 above.
More precisely, the model is that the random matrices \(\theta_{1},\theta_{2},\ldots,\theta_{T}\) are independent and for given \(t\), the columns \(\theta_{t}(\cdot,1),\ldots,\theta_{t}(\cdot,n)\) are independent. However, each column \(\theta_{t}(\cdot,j)\) of each \(\theta_{t}\) is now
assumed to be \(n\)-dimensional Gaussian with expectation \(0\) and covariance matrix \(\Sigma\), where \(\Sigma_{i,i}=1\), \(i=1,\ldots,n\) and \(\Sigma_{i,i^{\prime}}=\rho\), \(1\leq i<i^{\prime}\leq n\). Here \(\rho=\rho_{n}\) is a positive given correlation and we are interested in providing conditions on \(\rho\) that ensure noise stability or noise sensitivity of \(f=f_{n}=f_{n,T_{n}}\) defined as in the previous section.
To model the vectors \(\theta_{t}(\cdot,j)\), we let \(\theta_{t}(i,j)=\sqrt{\rho}\nu_{t}(j)+\sqrt{1-\rho}\psi_{t}(i,j)\), where the \(\nu_{t}(j)\)'s and \(\psi_{t}(i,j)\)'s are all independent standard Gaussian.
Since the entries in any given row of \(\theta_{t}\) are i.i.d. standard normals, Lemma 3.1 still says that \(\mathrm{P}(F_{t}(i)|D_{t-1}=d)=g(d/n)\), where \(F_{t}(i)=\{\omega_{t}(i)\neq(\omega^{\epsilon})_{t}(i)\}\). However the events \(F_{t}(i)\) and \(F_{t}(i^{\prime})\) are not, as in Section 3.1, conditionally independent given \(D_{t-1}\).
Recall the proof of Lemma 3.1, where the following observation was made. Here \(C=\{j:\omega_{t-1}(j)\neq(\omega^{\epsilon})_{t-1}(j)\}\), \(A_{\Lambda,k}=\{j:j\in\Lambda,\,\omega_{t-1}(j)=k\}\) and
\[F_{t}(i)=\left\{\left|\sum_{j\in A_{C^{c},1}}\theta(i,j)-\sum_{j\in A_{C^{c},- 1}}\theta(i,j)\right|\leq\left|\sum_{j\in A_{C,1}}\theta(i,j)-\sum_{j\in A_{C,-1}}\theta(i,j)\right|\right\}.\]
Using the above representations of \(\theta_{t}(i,j)\) this becomes
\[F_{t}(i) =\left\{\left|\sum_{j\in A_{C^{c},1}}\left(\sqrt{\rho}\nu_{t}(j) +\sqrt{1-\rho}\psi_{t}(i,j)\right)-\sum_{j\in A_{C^{c},-1}}\left(\sqrt{\rho} \nu_{t}(j)+\sqrt{1-\rho}\psi_{t}(i,j)\right)\right|\right.\] \[\leq\left.\left|\sum_{j\in A_{C,1}}\left(\sqrt{\rho}\nu_{t}(j)+ \sqrt{1-\rho}\psi_{t}(i,j)\right)-\sum_{j\in A_{C,-1}}\left(\sqrt{\rho}\nu_{t }(j)+\sqrt{1-\rho}\psi_{t}(i,j)\right)\right|\right\}.\]
Make the following substitutions
\[\sqrt{d}U_{t}^{C} =\sum_{j\in A_{C,1}}\nu_{t}(j)-\sum_{j\in A_{C,-1}}\nu_{t}(j)\] \[\sqrt{n-d}U_{t}^{C^{c}} =\sum_{j\in A_{C^{c},1}}\nu_{t}(j)-\sum_{j\in A_{C^{c},-1}}\nu_{t }(j)\] \[\sqrt{d}V_{t}^{C}(i) =\sum_{j\in A_{C,1}}\psi_{t}(i,j)-\sum_{j\in A_{C,-1}}\psi_{t}(i,j)\] \[\sqrt{n-d}V_{t}^{C^{c}}(i) =\sum_{j\in A_{C^{c},1}}\psi_{t}(i,j)-\sum_{j\in A_{C^{c},-1}}\psi _{t}(i,j)\]
and notice that \(U_{t}^{C}\), \(U_{t}^{C^{c}}\), \(V_{t}^{C}(i)\) and \(V_{t}^{C^{c}}(i)\) are all independent standard Gaussians for all \(t\) and \(i\). This gives, with \(v=d/n\),
\[F_{t}(i)=\left\{\left|\sqrt{\frac{\rho}{1-\rho}}U_{t}^{C^{c}}+V_{t}^{C^{c}}(i) \right|\leq\sqrt{\frac{v}{1-v}}\left|\sqrt{\frac{\rho}{1-\rho}}U_{t}^{C}+V_{t} ^{C}(i)\right|\right\}\]
and hence also
\[\mathrm{P}(F_{t}(i)|\omega_{t-1},(\omega^{\epsilon})_{t-1})=\mathrm{P}\left( \left|\sqrt{\frac{\rho}{1-\rho}}U_{t}^{C^{c}}+V_{t}^{C^{c}}(i)\right|\leq\sqrt{ \frac{v}{1-v}}\left|\sqrt{\frac{\rho}{1-\rho}}U_{t}^{C}+V_{t}^{C}(i)\right| \right).\]
The dependence between different \(i\)'s is captured by the common variables \(U_{t}^{C}\) and \(U_{t}^{C^{c}}\). Conditioning on \(W_{t}=\sqrt{\rho/(1-\rho)}(U_{t}^{C},U_{t}^{C^{c}})\) in addition to the condition \(D_{t-1}=d\), one gets that \(F_{t}(1),\ldots,F_{t}(n)\) are conditionally independent and, writing \(w=(w^{C},w^{C^{c}})\),
\[\mathrm{P}(F_{t}(i)|D_{t-1}=d,W_{t}=w)=\mathrm{P}\left(|Y|\leq\sqrt{\frac{v}{1 -v}}|X|\right), \tag{6}\]
where \(X\) and \(Y\) are independent normals with unit variance and means \(w^{C}\) and \(w^{C^{c}}\) respectively. Write \(g_{w}(v)\) for the right hand side of (6). Summing up, by letting \(\tilde{D}_{t}=D_{t}/n\), we have shown that the conditional distribution \((D_{t}|\tilde{D}_{t-1}=v,W=w)\) is binomial with parameters \(n\) and \(g_{w}(v)\).
Indeed given \(\tilde{D}_{t-1}=v\), \(D_{t}\) can be determined in two steps. First sample \(W\) from the two-dimensional Gaussian with independent coordinates and covariance matrix \(\frac{\rho}{1-\rho}I_{2}\). Then given \(W=w\), \(D_{t}\) is determined as the number of independent two-dimensional Gaussians \((X_{i},Y_{i})\), with mean \(w\) and with \(X_{i}\) and \(Y_{i}\) independent and of unit variance, that end up in the region \(A_{v}=\{(x,y)\;:\;|y|\leq\sqrt{\frac{v}{1-v}}|x|\}\). The probability of a given sample in the second step ending up in \(A_{v}\) is \(g_{w}(v)\). From this it is easy to see that \(g_{w}(v)\) is increasing in \(v\) for all values of \(w\). An illustration of \(A_{v}\) and \(W\) is seen in Figure 1.
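This two-step sampling procedure translates directly into a simulation; the following sketch (ours, illustrative) implements one transition of the chain \(D_{t}\) in the correlated case.

```python
import numpy as np

rng = np.random.default_rng(6)

def step(d, n, rho):
    """One transition of D_t in the correlated case, via the two-step sampling."""
    if d in (0, n):
        return d                                             # absorbing states
    v = d / n
    w = np.sqrt(rho / (1 - rho)) * rng.standard_normal(2)    # W = (W^C, W^{C^c})
    x = w[0] + rng.standard_normal(n)                        # X_i ~ N(w^C, 1)
    y = w[1] + rng.standard_normal(n)                        # Y_i ~ N(w^{C^c}, 1)
    slope = np.sqrt(v / (1 - v))
    return int(np.sum(np.abs(y) <= slope * np.abs(x)))       # count hits in A_v

n, rho, D = 1000, 0.5, 10
traj = [D]
for _ in range(15):
    D = step(D, n, rho)
    traj.append(D)
print(traj)
```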
We will need the following definition.
**Definition 3.1**.: A sequence of Boolean functions \(\{h_{n}\}\) has a **sharp threshold at 1/2** if the following holds: for \(\omega\) with i.i.d. Bernoulli\((p_{n})\) bits and \(\{p_{n}-1/2\}\) bounded away from 0, \(\lim_{n\to\infty}\mathrm{P}(h_{n}(\omega)=\mathrm{sign}(p_{n}-1/2))=1\).
One among many examples of Boolean functions that has a sharp threshold at 1/2 is the majority function. It can be shown that this also is true for the weighted majority function when the weights \(\theta_{i}>0\) are such that \(\lim_{n\to\infty}\frac{\max_{i}\theta_{i}}{\sqrt{n}\min_{i}\theta_{i}}=0\).
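For illustration, the sharp threshold of majority is visible already at moderate \(n\), as the following small simulation (ours) shows.

```python
import numpy as np

rng = np.random.default_rng(7)

def maj_agrees(n, p, trials=5000):
    """Fraction of samples with maj(omega) = sign(p - 1/2), omega ~ Bernoulli(p)."""
    om = np.where(rng.random((trials, n)) < p, 1, -1)
    return np.mean(np.sign(om.sum(axis=1)) == np.sign(p - 0.5))

for n in (11, 101, 1001):
    print(n, maj_agrees(n, 0.55))   # increases towards 1 as n grows
```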
Let \(\tau_{c}^{\varepsilon}=\tau_{c}^{\epsilon_{n}}=\min\{t:g_{W_{t}}(\tilde{D}_{t-1})\geq c\}\) with \(\tau_{c}^{\varepsilon}=\infty\) if \(g_{W_{t}}(\tilde{D}_{t-1})<c\) for all \(t\). The following lemma is crucial. It shows that noise sensitivity or noise stability almost entirely comes down to whether \(\tilde{D}\), started from arbitrarily small \(\epsilon\), hits \(1/2\) before \(0\) (which gives rise to sensitivity results) or vice versa (giving rise to stability results). In both of these cases, the rest of the arguments come down to adopting suitable conditions on \(h_{n}\) to go along with that. Therefore, and since we are also of the opinion that \(\tilde{D}\) is a very natural Markov process that is interesting in its own right, we will later on state the results for \(\tilde{D}\) explicitly along with the sensitivity/stability results.
**Lemma 3.2**.: Let \(1/2\geq\epsilon_{n}\downarrow 0\). Assume \(\{h_{n}\}\) has a sharp threshold at 1/2, \(\rho_{n}>\delta\) for some \(\delta>0\). Then the following statements hold.
Figure 1: An illustration of how \(D_{t}\) is sampled given \(\tilde{D}_{t-1}=v\). First sample \(W\) from a two-dimensional Gaussian with independent coordinates and covariance matrix \(\frac{\rho}{1-\rho}I_{2}\). Then given \(W=w\), \(D_{t}\) is determined as the number of independent Gaussians with mean \(w\) and unit variance that end up in the region \(A_{v}\).
1. If for all \(\epsilon_{n}\), \(\lim_{n\to\infty}\mathrm{P}(\tau_{1/2}^{\epsilon_{n}}\geq T_{n})=0\) and \(\{h_{n}\}\) is odd, then \(\{f_{n,T_{n}}\}\) is annealed, and hence quenched, QNS at level \(\{\epsilon_{n}\}\).
2. If for all \(\epsilon>0\) sufficiently small, \(\lim_{n\to\infty}\mathrm{P}(\tau_{2\epsilon}^{\epsilon}\leq T_{n})=0\), then \(\{f_{n,T_{n}}\}\) is annealed, and hence quenched, noise stable.
Proof.: Let \(T=T_{n}\) and \(f=f_{n,T}\). As before, write \(\theta_{t}(i,j)=\sqrt{\rho}\nu_{t}(j)+\sqrt{1-\rho}\psi_{t}(i,j)\), which in row vector form becomes \(\theta_{t}(i,\cdot)=\sqrt{\rho}\nu_{t}+\sqrt{1-\rho}\psi_{t}(i,\cdot)\), so that
\[\omega_{T}(i)=\mathrm{sign}\left((\sqrt{\rho}\nu_{T}+\sqrt{1-\rho}\psi_{T}(i, \cdot))\cdot\omega_{T-1}\right).\]
This means that conditionally on \(\omega_{T-1}\) and \(\nu_{T}\), \(\omega_{T}(i)\) are i.i.d. Bernoulli with probability \(p(\nu_{T},\omega_{T-1})\)\(:=\mathrm{P}(\omega_{T}(i)=1|\nu_{T},\omega_{T-1})=\mathrm{P}((\sqrt{\rho}\nu_{T}+ \sqrt{1-\rho}\psi_{T}(i,\cdot))\cdot\omega_{T-1}>0)\), which equals
\[\mathrm{P}\left(X>-\sqrt{\frac{\rho}{1-\rho}}\frac{1}{\sqrt{n}}\nu_{T}\cdot \omega_{T-1}\right)\]
where \(X\) is a standard normal random variable. Notice that \(p(\nu_{T},\omega_{T-1})>\frac{1}{2}\) if and only if \(\nu_{T}\cdot\omega_{T-1}>0\). Fix \(\gamma_{1}>0\). Since \(\rho_{n}>\delta\) for some \(\delta\), there is a \(\gamma_{2}>0\), depending on \(\gamma_{1}\) but independent of \(\rho\) and \(\omega_{T-1}\), such that for \(n\) large
\[\mathrm{P}\left(p(\nu_{T},\omega_{T-1})\in\left[\frac{1}{2}-\gamma_{2},\frac{ 1}{2}+\gamma_{2}\right]\Big{|}\,\omega_{T-1}\right)<\gamma_{1}. \tag{7}\]
Let us now also include the process \((\omega^{\epsilon})_{t}=(\omega^{\epsilon_{n}})_{t}\). By (6), \(\mathrm{P}(F_{t}(i)|\tilde{D}_{t-1}=v)\) is increasing in \(v\), which means that the conditional distribution \((\tilde{D}_{t}|\tilde{D}_{t-1}=v)\) is stochastically increasing in \(v\). Hence inductively for \(t\leq T-1\), \((\tilde{D}_{T-1}|\tilde{D}_{t-1}=v)\) is stochastically increasing in \(v\). Additionally, as in the proof of Lemma 3.1, writing \(B_{T}\) for the event \(\{\mathrm{sign}(\nu_{T}\cdot\omega_{T-1})\neq\mathrm{sign}(\nu_{T}\cdot(\omega^{\epsilon})_{T-1})\}\),
\[\mathrm{P}\left(B_{T}|\omega_{T-1},(\omega^{\epsilon})_{T-1}\right)=g(\tilde{ D}_{T-1}) \tag{8}\]
from which it follows by symmetry that for all \(\eta\in[0,1/2]\)
\[\mathrm{P}\left(B_{T}|\tilde{D}_{t}=\frac{1}{2}-\eta\right)+\mathrm{P}\left(B _{T}|\tilde{D}_{t}=\frac{1}{2}+\eta\right)=1.\]
It follows directly that if \(n\) is even, \(\mathrm{P}(B_{T}|\tilde{D}_{t}=1/2)=1/2\), and if \(n\) is odd, since \(\mathrm{P}(B_{T}|\tilde{D}_{t}=1/2-\eta)<\mathrm{P}(B_{T}|\tilde{D}_{t}=1/2+\eta)\), that \(\mathrm{P}(B_{T}|\tilde{D}_{t}=1/2+1/(2n))\geq 1/2\). Assume for simplicity for the rest of the proof of (i) that \(n\) is even; for the case with \(n\) odd just replace any conditioning on \(\tilde{D}_{t}=1/2\) with conditioning on \(\tilde{D}_{t}=1/2+1/(2n)\).
Since \((\tilde{D}_{T-1}|\tilde{D}_{t-1}=v)\) is increasing in \(v\), we now get
\[\mathrm{P}(B_{T}|\tau_{1/2}^{\epsilon}=t)\geq\mathrm{P}\left(B_{T}|\tilde{D}_{ t}=\frac{1}{2}\right)=\frac{1}{2}.\]
Now let \(A_{T}=B_{T}\setminus C_{T}\), where
\[C_{T}=\{p(\nu_{T},\omega_{T-1})\in[1/2-\gamma_{2},1/2+\gamma_{2}]\}\cup\{p( \nu_{T},(\omega^{\epsilon})_{T-1})\in[1/2-\gamma_{2},1/2+\gamma_{2}]\}.\]
By (7) and the fact that \(A_{T}\subset B_{T}\), it now follows for \(t<T\)
\[\mathrm{P}\left(A_{T}|\tau_{1/2}^{\epsilon}=t\right)\geq\frac{1}{2}-\mathrm{P} (C_{T}|\tau_{1/2}^{\epsilon}=t)>\frac{1}{2}-2\gamma_{1}.\]
By the assumptions on \(\{h_{n}\}\), for sufficiently large \(n\) and \(t<T\), \(\mathrm{E}[f(\omega)f(\omega^{\epsilon})|A_{T},\tau_{1/2}^{\epsilon}=t]<-1+\gamma_{1}\). We then get
\[\mathrm{E}\left[f(\omega)f(\omega^{\epsilon})|\tau^{\epsilon}_{1/2}=t\right] =\mathrm{E}\left[f(\omega)f(\omega^{\epsilon})|A_{T},\tau^{ \epsilon}_{1/2}=t\right]\mathrm{P}\left(A_{T}|\tau^{\epsilon}_{1/2}=t\right)\] \[+\mathrm{E}\left[f(\omega)f(\omega^{\epsilon})|A^{c}_{T},\tau^{ \epsilon}_{1/2}=t\right]\mathrm{P}\left(A^{c}_{T}|\tau^{\epsilon}_{1/2}=t\right)\] \[<(-1+\gamma_{1})\left(\frac{1}{2}-2\gamma_{1}\right)+\frac{1}{2 }+2\gamma_{1}<5\gamma_{1}.\]
Finally this means that for \(n\) sufficiently large,
\[0 \leq\mathrm{E}\left[f(\omega)f(\omega^{\epsilon})\right]\] \[=\sum_{t=0}^{T-1}\mathrm{E}\left[f(\omega)f(\omega^{\epsilon})| \tau^{\epsilon}_{1/2}=t\right]\mathrm{P}\left(\tau^{\epsilon}_{1/2}=t\right)+ \mathrm{E}\left[f(\omega)f(\omega^{\epsilon})|\tau^{\epsilon}_{1/2}\geq T \right]\mathrm{P}\left(\tau^{\epsilon}_{1/2}\geq T\right)\] \[<5\gamma_{1}+\mathrm{P}\left(\tau^{\epsilon}_{1/2}\geq T\right).\]
Since \(h\) is assumed odd, so is \(f\) and hence \(\mathrm{Cov}(f(\omega),f(\omega^{\epsilon}))=\mathrm{E}[f(\omega)f(\omega^{ \epsilon})]\). Since \(\gamma_{1}\) is arbitrary, (i) follows.
To prove (ii), fix \(\gamma_{1}\), pick \(\epsilon\) such that \(g(2\epsilon)<\gamma_{1}\) and pick \(\gamma_{2}\) such that (7) holds. Notice that due to (8) and the fact that \(g\) is increasing, we have
\[\mathrm{P}\left(B_{T}|\tilde{D}_{T-1}\leq 2\epsilon\right)\leq g(2\epsilon)< \gamma_{1}.\]
Hence by (7)
\[\mathrm{P}(B_{T}\cup C_{T}|\tilde{D}_{T-1}\leq 2\epsilon)<3\gamma_{1}.\]
Additionally for \(n\) large and \(\epsilon\) sufficiently small, due to \(h_{n}\) having a sharp threshold at \(1/2\),
\[\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})|B^{c}_{T}\cap C^{c}_{T}, \tilde{D}_{T-1}\leq 2\epsilon\right)<\gamma_{1}.\]
This results in
\[\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})|\tilde{D}_{T- 1}\leq 2\epsilon\right)\] \[=\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})|B_{T}\cup C _{T},\tilde{D}_{T-1}\leq 2\epsilon\right)\mathrm{P}\left(B_{T}\cup C_{T}| \tilde{D}_{T-1}\leq 2\epsilon\right)\] \[+\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})|B^{c}_{T} \cap C^{c}_{T},\tilde{D}_{T-1}\leq 2\epsilon\right)\mathrm{P}\left(B^{c}_{T} \cap C^{c}_{T}|\tilde{D}_{T-1}\leq 2\epsilon\right)\] \[<3\gamma_{1}+\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})| B^{c}_{T}\cap C^{c}_{T},\tilde{D}_{T-1}\leq 2\epsilon\right)<4\gamma_{1}.\]
Since \(\lim_{n\to\infty}\mathrm{P}(\tau^{\epsilon}_{2\epsilon}\leq T)=0\), we have for \(n\) sufficiently large that \(\mathrm{P}(\tilde{D}_{T-1}>2\epsilon)\leq\mathrm{P}(\tau^{\epsilon}_{2\epsilon }\leq T)<\gamma_{1}\).
It now follows that for \(n\) large,
\[\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})\right)\leq\mathrm{P}\left(f(\omega)\neq f(\omega^{\epsilon})|\tilde{D}_{T-1}\leq 2\epsilon\right)+\mathrm{P}\left(\tilde{D}_{T-1}>2\epsilon\right)<4\gamma_{1}+\gamma_{1}=5\gamma_{1}.\]
Since \(\gamma_{1}\) is arbitrary, this concludes (ii).
We are now ready to start the proof that \(\tilde{D}\) hits \(1/2\) before \(0\) and that \(f=f_{n,T_{n}}\) is annealed (and thus quenched) QNS for a large range of \(\rho\) if \(T_{n}\) is suitably large. The results are stated in Theorem 3.3.
We are going to make several observations concerning the behaviour of \(g_{W}(v)\). Since we will for the most part only be considering a single \(t\), we will in such cases drop \(t\) from the notation.
Before going on, recall that the sum of the squares of two independent standard normal random variables is exponential with mean \(2\). This also means that \(r^{2}:=||W||_{2}^{2}\) is exponential with mean \(2\rho/(1-\rho)\).
**Lemma 3.3**.: Let \(r=||w||_{2}\). Then
\[g_{w}(v)\geq\frac{2}{\pi}\arctan\left(\sqrt{\frac{v}{1-v}}\right)e^{-r^{2}/2}= g(v)e^{-r^{2}/2}.\]
Proof.: Let \(w=(a,b)\). Then
\[g_{w}(v)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\sqrt{\frac{v}{1-v}}|x|}^{\sqrt{\frac{v}{1-v}}|x|}e^{-((x-a)^{2}+(y-b)^{2})/2}dydx=\frac{e^{-\frac{a^{2}+b^{2}}{2}}}{2\pi}\int_{-\infty}^{\infty}\int_{-\sqrt{\frac{v}{1-v}}|x|}^{\sqrt{\frac{v}{1-v}}|x|}e^{-(x^{2}+y^{2})/2}e^{xa+yb}dydx\]
\[\geq\frac{e^{-\frac{r^{2}}{2}}}{2\pi}\int_{-\infty}^{\infty}\int_{-\sqrt{ \frac{v}{1-v}}|x|}^{\sqrt{\frac{v}{1-v}}|x|}e^{-(x^{2}+y^{2})/2}dydx=\frac{2} {\pi}\arctan\left(\sqrt{\frac{v}{1-v}}\right)e^{-\frac{r^{2}}{2}}\]
where the inequality follows from symmetry of the integrated function around the origin and that \(e^{x}+e^{-x}\geq 2\). The last equality follows from the computation in the proof of Lemma 3.1.
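The bound of Lemma 3.3 can again be checked by simulation; the following sketch (ours) compares a Monte Carlo estimate of \(g_{w}(v)\) with \(g(v)e^{-r^{2}/2}\) for one fixed shift \(w\).

```python
import numpy as np

rng = np.random.default_rng(8)
v, w, trials = 0.2, np.array([0.7, -1.1]), 200000

x = w[0] + rng.standard_normal(trials)          # X ~ N(w^C, 1)
y = w[1] + rng.standard_normal(trials)          # Y ~ N(w^{C^c}, 1)
g_w = np.mean(np.abs(y) <= np.sqrt(v / (1 - v)) * np.abs(x))

g = 2 / np.pi * np.arctan(np.sqrt(v / (1 - v)))
bound = g * np.exp(-np.dot(w, w) / 2)           # g(v) * exp(-r^2/2)
print(f"g_w(v) ~ {g_w:.4f} >= bound {bound:.4f}")
```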
**Lemma 3.4**.: Assume that \(v\leq 1/2\) and that \(w=(w^{C},w^{C^{c}})\) satisfies \(|w^{C^{c}}|\leq\sqrt{v/(1-v)}|w^{C}|-2\). Then \(g_{w}(v)>1/2\).
Proof.: Let \(X\sim N(w^{C},1)\) and \(Y\sim N(w^{C^{c}},1)\) be independent and write \(X=w^{C}+\xi\) and \(Y=w^{C^{c}}+\eta\) for independent standard normal \(\xi\) and \(\eta\). We have
\[g_{w}(v)=\mathrm{P}\left(|Y|\leq\sqrt{\frac{v}{1-v}}|X|\right)=\mathrm{P}((X,Y )\in A_{v})\]
where
\[A_{v}=\left\{(x,y):|y|\leq\sqrt{\frac{v}{1-v}}|x|\right\}.\]
Since \(v\leq 1/2\), it is easily seen that the \(L_{2}\) distance between \((w^{C},w^{C^{c}})\) and \(A_{v}^{c}\) is at least \(\sqrt{2}\) whenever \(|w^{C^{c}}|\leq\sqrt{v/(1-v)}|w^{C}|-2\). Hence
\[g_{w}(v)\geq\mathrm{P}(\xi^{2}+\eta^{2}\leq 2)=1-e^{-1}>\frac{1}{2}.\]
**Lemma 3.5**.: Let as above \(W=W_{t}=(W^{C},W^{C^{c}})\) for \(t<T\), where \(W^{C}\) and \(W^{C^{c}}\) are independent normals with means \(0\) and variance \(\rho/(1-\rho)\). Then for \(v\leq 1/2\),
\[\mathrm{P}\left(g_{W}(v)>\frac{1}{2}\right)>\frac{1}{2\pi}\sqrt{\frac{v}{1-v} }e^{-\frac{8(1-\rho)}{\rho}\frac{1-v}{v}}.\]
Proof.: By Lemma 3.4, \(\mathrm{P}(g_{W}(v)>1/2)\geq\mathrm{P}(|W^{C^{e}}|\leq\sqrt{v/(1-v)}|W^{C}|-2)\). Writing \((r,\varphi)\) for the polar coordinates of \(W\), it is straightforward by back substitution to see that it is sufficient for \(|W^{C^{e}}|\leq\sqrt{v/(1-v)}|W^{C}|-2\) that
\[r^{2}>16(1-v)/v\]
and
\[|\varphi|<\arctan\left(\frac{1}{2}\sqrt{\frac{v}{1-v}}\right)\]
and the latter in turn occurs whenever \(|\varphi|<\frac{1}{4}\sqrt{v/(1-v)}\). Since \(r^{2}\) and \(|\varphi|\) are independent and exponential\(((1-\rho)/2\rho)\) and uniform on \([0,\pi/2]\) respectively, we thus get
\[\mathrm{P}\left(g_{W}(v)>\frac{1}{2}\right)>e^{-\frac{8(1-\rho)}{\rho}\frac{1- v}{v}}\frac{2}{\pi}\frac{1}{4}\sqrt{\frac{v}{1-v}}\]
which is the desired bound.
We are now ready to state the first main results of this section.
**Theorem 3.3**.: _Let \(\{\epsilon_{n}\}\) be such that \(1/2\geq\epsilon_{n}\downarrow 0\) and \(n\epsilon_{n}\to\infty\). Assume that for some \(\delta>0\) independent of \(n\), \(\delta<\rho<1-\frac{4(\log\log n)^{3}}{\log n}\) and \(T_{n}\geq e^{4(\log\log n)^{2}}\). Then the following statements hold._
1. \(\lim_{n\to\infty}\mathrm{P}(\tau_{1/2}^{\epsilon_{n}}>T_{n})=0\)_,_
2. _If_ \(h_{n}\) _is odd and has a sharp threshold at 1/2, then_ \(\{f_{n,T_{n}}\}\) _is annealed QNS and hence also quenched QNS at level_ \(\{\epsilon_{n}\}\)_._
3. _If_ \(h_{n}\) _is odd and_ \(T_{n}\) _grows sufficiently large with_ \(n\)_, then_ \(\{f_{n,T_{n}}\}\) _is annealed QNS and hence also quenched QNS at level_ \(\{\epsilon_{n}\}\)_._
As in the uncorrelated case, we can prove even stronger versions.
**Theorem 3.4**.: _Let \(\{\epsilon_{n}\}\) be such that \(1/2\geq\epsilon_{n}\downarrow 0\) and \(n\epsilon_{n}\to\infty\). Assume that for some \(\delta>0\) independent of \(n\), \(\delta<\rho<1-\frac{4(\log\log n)^{3}}{\log n}\) and \(T_{n}\geq e^{4(\log\log n)^{2}}\). Then the following statements hold._
1. _Fix_ \(\omega,\eta\in\{-1,1\}^{n}\) _arbitrarily such that_ \(\eta\not\in\{\omega,-\omega\}\)_. Then if_ \(h_{n}\) _is odd and either has a sharp threshold at 1/2 or_ \(T_{n}\) _is sufficiently large, then_ \[\lim_{n\to\infty}\mathrm{Cov}_{\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{n}}(\eta))=0.\]
2. _Let_ \(\mathbf{Q}_{n}\) _be any probability measure on_ \(\{-1,1\}^{n}\times\{-1,1\}^{n}\) _such that_ \(\lim_{n\to\infty}\mathbf{Q}_{n}(\eta\in\{\omega,-\omega\})=0\)_, Then if_ \(h_{n}\) _is odd and either has a sharp threshold at 1/2 or_ \(T_{n}\) _is sufficiently large, then_ \[\lim_{n\to\infty}\mathrm{Cov}_{\mathbf{Q}_{n},\Theta}(f_{n,T_{n}}(\omega),f_{n,T_{n}}(\eta))=0.\]
3. _Assume that_ \(h_{n}\) _is odd and either has a sharp threshold at 1/2 or_ \(T_{n}\) _is sufficiently large. Fix any_ \(k\in\{1,2,\ldots,n-1\}\) _and_ \(\delta>0\)_. Fix also_ \(\omega\in\{-1,1\}^{n}\) _and let_ \(M_{k}=M_{k}^{(n)}(\omega)\) _be the number of_ \(\eta\) _with_ \(\eta(i)\not=\omega(i)\) _for exactly_ \(k\) _indexes_ \(i\)_, such that_ \(f_{n,T_{n}}(\eta)\not=f_{n,T_{n}}(\omega)\)_. Then for_ \(T_{n}\geq K_{n}\)_,_ \[\lim_{n\to\infty}\mathrm{P}\left(\frac{M_{k}}{\binom{n}{k}}\not\in\left(\frac{ 1-\delta}{2},\frac{1+\delta}{2}\right)\right)=0.\]
Proof of Theorem 3.3.: Let us first outline the strategy of the proof of (i). We will run the \(\tilde{D}\)-process for a predetermined time \(s\). In doing so, we will prove that (a) for every \(\epsilon\in(0,1/2]\), the probability that \(\tilde{D}\) hits \(0\) during time \(s\) is very small, whatever the value of \(\tilde{D}_{0}\), as long as it is nonzero, and (b) the probability that \(\tilde{D}\) hits \(1/2\) during time \(s\) is of much higher order. Having done that allows us to repeatedly run the process for \(s\) units of time and use the Markov property to draw the conclusion that with very high probability, \(\tilde{D}\) hits \(1/2\) before \(0\) for sufficiently fast growing \(T_{n}\). The final part (c) upper bounds the time it takes to hit either \(1/2\) or \(0\).
To avoid confusion we point out that it will be easy for the reader to see that many of the bounds given are far from optimal and thus to some extent arbitrary; they are simply good enough for their purpose.
Let \(s=s_{n}=\log_{2}\log n\) and fix \(\epsilon>0\). We start by giving an upper bound on \(\mathrm{P}(\tilde{D}_{s}=0)\). We have for large \(n\) and all \(t<T_{n}\),
\[\begin{split}\mathrm{P}(\tilde{D}_{t+1}=0|\tilde{D}_{t}>0)&\leq\mathrm{P}\left(\tilde{D}_{t+1}=0\Big{|}\tilde{D}_{t}=\frac{1}{n}\right)\\ &\leq\mathrm{P}\left(\frac{||W||_{2}^{2}}{2}>\frac{2\log n}{\log\log n}\right)+\left(1-g\left(\frac{1}{n}\right)e^{-\frac{2\log n}{\log\log n}}\right)^{n}\\ &<e^{-\frac{4(\log\log n)^{3}}{\log n}\cdot\frac{2\log n}{\log\log n}}+\left(1-\frac{1}{n^{2/3}}\right)^{n}\\ &<e^{-8(\log\log n)^{2}}+e^{-n^{1/3}}\\ &<e^{-7(\log\log n)^{2}},\end{split}\]
where the second inequality uses Lemma 3.3 for the second term. By the Markov property of \(\tilde{D}\) and Bonferroni, we get
\[\mathrm{P}(\tilde{D}_{s}=0)<(\log_{2}\log n)e^{-7(\log\log n)^{2}}<e^{-6(\log \log n)^{2}}. \tag{9}\]
This concludes part (a) in the sketch.
Next, we lower bound \(\mathrm{P}(g_{W_{s}}(\tilde{D}_{s-1})>1/2)\). There is \(\kappa>0\) such that for any \(v\in[1/n,1/2)\),
\[\mathrm{P}\left(\tilde{D}_{t+1}\geq\frac{\sqrt{v}}{20}\Big{|}\tilde{D}_{t}=v\right) \geq\mathrm{P}\left(g_{W}(v)>\frac{\sqrt{v}}{10}\right)\,\mathrm{P}\left(\tilde{D}_{t+1}\geq\frac{\sqrt{v}}{20}\Big{|}g_{W}(v)>\frac{\sqrt{v}}{10}\right)\] \[>\mathrm{P}\left(\frac{||W||_{2}^{2}}{2}\leq 1\right)\left(1-e^{-\kappa\sqrt{n}}\right)\] \[=\left(1-e^{-\frac{1-\rho}{\rho}}\right)\left(1-e^{-\kappa\sqrt{n}}\right)\] \[>1-e^{-\frac{1}{2}\frac{4(\log\log n)^{3}}{\log n}}\] \[>\frac{(\log\log n)^{3}}{\log n}.\]
Here the second inequality is Lemma 3.3 and Chernoff bounds. This gives, provided that \(\tilde{D}_{0}\geq 1/n\),
\[\mathrm{P}\left(\forall t=0,\ldots,s-2:\tilde{D}_{t+1}\geq\min\left(\frac{1}{ 2},\frac{\sqrt{\tilde{D}_{t}}}{20}\right)\right)>\left(\frac{(\log\log n)^{3} }{\log n}\right)^{\log_{2}\log n}>e^{-2(\log\log n)^{2}}. \tag{10}\]
If the event in the left hand side occurs, then
\[\tilde{D}_{s-1}\geq\frac{1}{400}\left(\frac{1}{n}\right)^{2^{-\log_{2}\log n }}=\frac{1}{400}e^{-1}>\frac{1}{1200}.\]
By Lemma 3.5,
\[\mathrm{P}\left(g_{W_{s}}(\tilde{D}_{s-1})>\frac{1}{2}\Big{|}\tilde{D}_{s-1}>\frac{1}{1200}\right)>\frac{1}{40\pi}e^{-\frac{9600(1-\rho)}{\rho}}\geq\frac{1}{40\pi}e^{-\frac{9600(1-\delta)}{\delta}}=:a.\]
Combining with (10), it follows that
\[\mathrm{P}\left(g_{W_{s}}(\tilde{D}_{s-1})>\frac{1}{2}\Big{|}\tilde{D}_{0}>0 \right)>a\,e^{-2(\log\log n)^{2}}>e^{-3(\log\log n)^{2}}. \tag{11}\]
Comparing (9) and (11) we see that the conditional probability that \(g_{W_{s}}(\tilde{D}_{s-1})\) exceeds \(1/2\) before \(\tilde{D}\) absorbs at zero, given \(\tilde{D}_{0}>0\), is at least \(1-e^{-3(\log\log n)^{2}}\), finishing part (b) of the sketch. Finally, the time to either hitting \(1/2\) or absorbing at \(0\) is dominated by a geometric random variable with parameter \(e^{-3(\log\log n)^{2}}\), so with probability going to \(1\) this will happen before time \(T_{n}\) whenever \(T_{n}>e^{4(\log\log n)^{2}}\). Adding the fact that, since \(n\epsilon_{n}\to\infty\), \(\mathrm{P}(\tilde{D}_{0}=0)=(1-\epsilon_{n})^{n}\leq e^{-n\epsilon_{n}}\to 0\), this concludes the proof of (i). Now (ii) immediately follows from part (i) of Lemma 3.2.
For (iii), observe that by (i) and since \(\tilde{D}\) absorbs when it hits \(0\) or \(1\), we get by symmetry of \(\tilde{D}\) that \(\lim_{n\to\infty}\mathrm{P}(\tilde{D}_{T_{n}}=0)=\lim_{n\to\infty}\mathrm{P}(\tilde{D}_{T_{n}}=1)=1/2\) for \(\{T_{n}\}\) growing sufficiently large. Since \(h_{n}\) is odd, (iii) follows for such \(\{T_{n}\}\).
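The hitting behaviour in part (i) can be observed empirically. The sketch below is illustrative only: it assumes the Markov structure used in the proofs, namely that given \(\tilde{D}_{t}=v\) and fresh weights \(W_{t+1}\), the next disagreement fraction is \(\mathrm{Binomial}(n,g_{W_{t+1}}(v))/n\); the parameter values, the Monte Carlo evaluation of \(g_{w}\), and the helper names are our own ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_w(wC, wCc, v, m=20_000):
    # Monte Carlo estimate of g_w(v), as in the lemmas above.
    x = wC + rng.standard_normal(m)
    y = wCc + rng.standard_normal(m)
    return float(np.mean(np.abs(y) <= np.sqrt(v / (1 - v)) * np.abs(x)))

def run_chain(n, rho, eps, T):
    # ASSUMED Markov structure (taken from the proofs): given D_t = v and fresh
    # weights W_{t+1}, the next disagreement is Binomial(n, g_{W_{t+1}}(v)) / n.
    sigma = np.sqrt(rho / (1 - rho))
    d = rng.binomial(n, eps) / n  # D_0: fraction of flipped input bits
    for _ in range(T):
        if d == 0.0 or d == 1.0:  # absorbing states
            break
        p = g_w(sigma * rng.standard_normal(), sigma * rng.standard_normal(), d)
        d = rng.binomial(n, p) / n
    return d

# Moderate rho: the chain tends to wander and absorb at 0 or 1 (cf. Theorem 3.3).
print([run_chain(n=2000, rho=0.7, eps=0.05, T=100) for _ in range(5)])
# rho extremely close to 1 and small eps: it typically dies out quickly (cf. Theorem 3.5).
print([run_chain(n=2000, rho=0.999, eps=0.002, T=100) for _ in range(5)])
```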
Proof of Theorem 3.4.: The proof is very similar to the proof of Theorem 3.2. A quick glance back at the proof of (i) in Theorem 3.3 shows that \(\tilde{D}\) with high probability hits \(1/2\) before \(0\) also if the input \((\omega,\omega^{\epsilon_{n}})\) is replaced with \((\omega,\eta)\). Now (i) follows from exactly the same proof as that of part (i) of Lemma 3.2. Finally, both (ii) and (iii) follow from (i) in the same way as (ii) and (iii) follow from (i) in Theorem 3.2.
Next we move to proving that for sufficiently large \(\rho\), \(\{f_{n,T_{n}}\}\) is noise stable. First we need an upper bound on \(g_{w}(v)\):
**Lemma 3.6**.: Let \(w\in\mathbb{R}^{2}\), let \((r,\theta)\) be its polar coordinates and let \(\varphi=\arctan\sqrt{v/(1-v)}\). Assume that \(|\theta|\geq\varphi\) and \(|\theta-\pi|\geq\varphi\). Then
\[g_{w}(v)\leq e^{-\frac{1}{2}r^{2}\sin^{2}(|\theta|-\varphi)}.\]
Proof.: As in the proof of Lemma 3.4, let
\[A_{v}=\left\{(x,y):|y|\leq\sqrt{\frac{v}{1-v}}|x|\right\},\]
so that
\[g_{w}(v)=\mathrm{P}((w^{C}+\xi,w^{C^{c}}+\eta)\in A_{v}),\]
where \(\xi\) and \(\eta\) are independent standard normal. Without loss of generality, we assume that \(\theta\in[0,\pi/2]\). By assumption, \(w\not\in A_{v}\), and the Euclidean distance from \(w\) to \(A_{v}\) is \(r\,\sin(\theta-\varphi)\), so for \(w+(\xi,\eta)\) to be in \(A_{v}\), it is necessary that \(\xi^{2}+\eta^{2}\geq r^{2}\sin^{2}(\theta-\varphi)\). The left hand side is exponential with mean \(2\), so
\[g_{w}(v)\leq e^{-\frac{1}{2}r^{2}\sin^{2}(\theta-\varphi)}.\]
**Theorem 3.5**.: _Assume that \(\rho>1-\log\log n/18\log n\). Then the following holds._
* _For_ \(T=T_{n}\geq(\log n)^{1/4}\) _and for all_ \(\delta>0\)_, it holds for all sufficiently small_ \(\epsilon>0\) _that if_ \(\tilde{D}_{0}=v<\epsilon^{2}/2\)_, then_ \[\limsup_{n}\mathrm{P}\left(\{\tilde{D}_{T}>0\}\cup\left\{\exists t\in\{1, \ldots,T\}:\tilde{D}_{t}>\frac{\epsilon^{2}}{2}\right\}\right)<\delta.\] _In particular,_ \[\limsup_{n}\mathrm{P}\left(\exists t\geq 1:\tilde{D}_{t}>\frac{\epsilon^{2}}{2 }\right)<\delta.\]
2. \(f_{n,T_{n}}\) _is annealed and quenched noise stable if either_ \(h_{n}\) _has a sharp threshold at 1/2 or_ \(T_{n}\geq(\log n)^{1/4}\)_._
Proof.: This proof will start with showing that with high probability, \(\tilde{D}\) immediately drops to less than \(1/(\log n)^{1/2}\) and then stays there for a time of order at least \((\log n)^{1/16}\). Having done that, it will be shown that during that time, \(\tilde{D}\) will in fact have hit 0. As in previous proofs, many inequalities used are obviously far from optimal, but simply good enough for their purpose.
Consider now the first step of \(\tilde{D}\). Write \((r,\theta)\) for the polar coordinates of \(W=W_{1}\). We have by Lemma 3.6, since \(\arctan\sqrt{(\epsilon^{2}/2)/(1-\epsilon^{2}/2)}<\epsilon\),
\[\begin{split}\mathrm{P}\left(g_{W}(v)<e^{-\frac{18\epsilon^{3}\log n}{\log\log n}}\right)&>\mathrm{P}\left(g_{W}\left(\frac{\epsilon^{2}}{2}\right)<e^{-\frac{18\epsilon^{3}\log n}{\log\log n}}\right)\\ &>\mathrm{P}\left(e^{-\frac{1}{2}r^{2}\sin^{2}(|\theta|-\epsilon)}<e^{-\frac{18\epsilon^{3}\log n}{\log\log n}}\right)\\ &>\mathrm{P}\left(|\theta|>3\epsilon,|\theta-\pi|>3\epsilon,\frac{1}{2}r^{2}>\frac{18\epsilon\log n}{\log\log n}\right)\\ &>\left(1-\frac{6\epsilon}{\pi}\right)e^{-\epsilon}\\ &>1-4\epsilon\end{split}\]
and given that \(g_{W}(v)<e^{-\frac{18\epsilon^{3}\log n}{\log\log n}}\), we get for large \(n\),
\[\tilde{D}_{1}<2e^{-\frac{18\epsilon^{3}\log n}{\log\log n}}<\frac{1}{(\log n) ^{1/2}}\]
with conditional probability at least \(1-\epsilon\). Taken together, the last two inequalities give,
\[\mathrm{P}\left(\tilde{D}_{1}\geq\frac{1}{(\log n)^{1/2}}\right)<5\epsilon. \tag{12}\]
Let \(E_{t}=\{\tilde{D}_{t}<1/(\log n)^{1/2}\}\); this notation will be convenient as we are going to do much conditioning on this event from here on.
Next consider the distribution of \(\tilde{D}_{t+1}\) given \(\tilde{D}_{t}=\alpha^{2}/2<1/(\log n)^{1/2}\). Let as before \((r,\theta)=(r_{t},\theta_{t})\) be the polar coordinates of \(W_{t}\), \(t=1,2,\ldots\).
Let
\[B=\left\{e^{-\frac{1}{2}r^{2}\sin^{2}(|\theta|-\alpha)}<\frac{1}{2(\log n)^{1 /2}}\right\}.\]
It is sufficient for \(B\) to occur that \(|\theta|\geq 3/(\log n)^{1/4}\) and \(r^{2}\geq(\log n)^{1/2}\log\log n\). Since \(1-\rho<\log\log n/18\log n\), this gives for large \(n\),
\[\mathrm{P}\left(B\,\Big{|}\,\tilde{D}_{t}=\frac{\alpha^{2}}{2}\right)= \mathrm{P}(B)>\left(1-\frac{6}{\pi(\log n)^{1/4}}\right)e^{-\frac{\log\log n }{36\log n}(\log n)^{1/2}\log\log n}>1-\frac{3}{(\log n)^{1/4}}. \tag{13}\]
When \(B\) occurs,
\[g_{W}\left(\frac{\alpha^{2}}{2}\right)<\frac{1}{2(\log n)^{1/2}}\]
according to Lemma 3.6. Hence for \(n\) large by Chernoff bounds,
\[\mathrm{P}\left(\tilde{D}_{t+1}<\frac{1}{(\log n)^{1/2}}\Big{|}\tilde{D}_{t}= \frac{\alpha^{2}}{2},B\right)>1-e^{-\frac{n}{\log n}}>1-\frac{1}{(\log n)^{1/4}}.\]
Since \(\alpha^{2}/2\) is an arbitrary number in \([0,1/(\log n)^{1/2})\), we get
\[\mathrm{P}\left(\tilde{D}_{t+1}<\frac{1}{(\log n)^{1/2}}\Big{|}\tilde{D}_{t}< \frac{1}{(\log n)^{1/2}},B\right)>1-\frac{1}{(\log n)^{1/4}}\]
and hence on combining with (13),
\[\mathrm{P}\left(\tilde{D}_{t+1}<\frac{1}{(\log n)^{1/2}}\Big{|}E_{t}\right)>1- \frac{4}{(\log n)^{1/4}}. \tag{14}\]
Next let \(A=A_{t+1}=\{|\theta_{t+1}|-\epsilon>\pi/4,r_{t+1}^{2}>6\log n\}\). Then since \(1-\rho<\log\log n/18\log n\), provided that \(\epsilon<\pi/12\),
\[\mathrm{P}(A)>\frac{1}{3}e^{-\frac{\log\log n}{6}}>\frac{2}{(\log n)^{1/5}}.\]
If \(A\) occurs, Lemma 3.6 implies for \(v<\epsilon^{2}/2\)
\[g_{W}(v)<e^{-\frac{1}{4}r^{2}}<e^{-\frac{3}{2}\log n}=\frac{1}{n^{3/2}}.\]
That entails in turn that
\[\mathrm{P}(\tilde{D}_{t+1}=0|A)>\left(1-\frac{1}{n^{3/2}}\right)^{n}>1-\frac{ 1}{\sqrt{n}}.\]
Hence
\[\mathrm{P}\left(\tilde{D}_{t+1}=0|E_{t}\right) >\mathrm{P}\left(\tilde{D}_{t+1}=0|A\cap E_{t}\right)\mathrm{P}\left(A|E_{t}\right)=\mathrm{P}\left(\tilde{D}_{t+1}=0|A\right)\mathrm{P}\left(A\right)\] \[>\frac{2}{(\log n)^{1/5}}\left(1-\frac{1}{\sqrt{n}}\right)>\frac{1}{(\log n)^{1/5}}. \tag{15}\]
By taking (14) and (15) together, it now easily follows that the conditional probability given \(E_{1}\) that \(\tilde{D}\) for \(t\geq 1\) hits \(0\) before \((\log n)^{1/2}\) exceeds \(1-1/(\log n)^{1/4-1/5}=1-1/(\log n)^{1/20}\) and with probability exceeding \(1-e^{-(\log n)^{1/20}}\) it happens before time \((\log n)^{1/4}\).
Combining this with (12) and taking \(\epsilon<\delta/6\) we get
\[\limsup_{n}\mathrm{P}\left(\{\tilde{D}_{T_{n}}>0\}\cup\{\exists t\in\{1,\ldots,T_{n}\}:\tilde{D}_{t}>\frac{1}{(\log n)^{1/2}}\}\right)<\delta.\]
This obviously implies
\[\limsup_{n}\mathrm{P}\left(\{\tilde{D}_{T_{n}}>0\}\cup\{\exists t\in\{1,\ldots,T_{n}\}:\tilde{D}_{t}>\frac{\epsilon^{2}}{2}\}\right)<\delta.\]
For (ii) assume first that \(h_{n}\) has a sharp threshold at \(1/2\) and \(T_{n}\) is arbitrary. Fix \(\kappa>0\). Then (i) tells us that for sufficiently small \(\epsilon>0\) and sufficiently large \(n\), \(P(\forall t:\tilde{D}_{t}<2\epsilon)>1-\kappa/2\). In particular
\[\limsup_{n}\mathrm{P}(\tau_{2\epsilon}^{\epsilon}\leq T_{n})<\frac{\kappa}{2}.\]
Since the final output is \(f_{n,T_{n}}(\omega)=h_{n}(\omega_{T})\) and \(h_{n}\) has a sharp threshold at \(1/2\), the result follows from Lemma 3.2.
Finally, if \(T_{n}\geq(\log n)^{1/4}\), then by (i), \(\lim_{\epsilon\to 0}\liminf_{n}\mathrm{P}(\tilde{D}_{T_{n}}=0)=1\), i.e. \(\lim_{\epsilon\to 0}\liminf_{n}\mathrm{P}(\omega_{T_{n}}=(\omega^{\epsilon})_{T_{n}})=1\), which implies noise stability.
## 4 Convolutional treelike networks
Above we have studied a Boolean representation of randomised feed forward neural networks. Another common architecture of neural networks is convolutional neural networks (ConvNets), where the output of each layer is the convolution between the input and some filter of size \(d\), passed through some activation function. The learnable parameters in the model will be those in the filter. If we study a one dimensional input and no padding is used, the output of each convolution is of length \((n-d)/s+1\), where \(s\) is the stride and \(n\) the number of inputs. As previously, we consider the activation function to be the sign function. We also limit ourselves to a filter size of length \(d=3\) (but the technique used will, with a few observations, easily generalise to some other settings, which we point out and state at the end of this section). Stacking many of these convolutional layers, the network can be seen as a graph \(G_{n}=(V_{n},E_{n})\). From this graph we induce a Boolean function \(f_{n}\) that goes from \(\{-1,1\}^{n}\) to \(\{-1,1\}\). The graphs that we consider here, and soon give proper definitions of, can be seen in Figure 2.
With filter size 3, the value of each node at layer \(t\) in \(G_{n}\) is the weighted 3-majority function, where the weights are the parameters in the filter \(\theta_{t}\). These weights are the same for all triples of nodes next to each other on the same layer on which the filter is applied. A weighted 3-majority function can only express either a regular 3-majority, a negative 3-majority, a dictator function or a negative dictator function. Clearly, reversing the signs of all values in a layer does not affect stability questions, so we may assume that the filter at a given layer either expresses a dictator or a regular majority. In each setting analysed here, we will at first assume that each layer expresses a regular majority, i.e. \(\theta_{t}=(1,1,1)\) for all \(t\). It will then be easy to see that the arguments used extend to the setting where some layers are dictator layers, under the very mild assumption that the distribution of the \(\theta_{t}\)'s is such that there is a probability bounded away from 0 that \(\theta_{t}\) expresses a regular majority. The \(\theta_{t}\) are here assumed to be independent across \(t\). First, notice that if the stride \(s>2\), \(G_{n}\) would be the iterated 3-majority function with no overlap, which is known to be sensitive, so we will only consider \(s=1\) and \(s=2\).
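For concreteness, one such layer is only a few lines of code; the sketch below is illustrative only, with `conv_sign_layer` an ad hoc helper name. It applies a filter \(\theta\) of size \(d=3\) with the sign activation and illustrates the output length \((n-d)/s+1\) for strides \(s=1\) and \(s=2\).

```python
import numpy as np

def conv_sign_layer(x, theta, stride):
    """One +-1 convolutional layer: sign of the filter applied to each window;
    the output has length (len(x) - len(theta)) // stride + 1."""
    d = len(theta)
    return np.array([int(np.sign(np.dot(theta, x[i:i + d])))
                     for i in range(0, len(x) - d + 1, stride)])

# theta = (1, 1, 1) expresses the 3-majority of each window (the sum of three
# +-1 values is never 0, so the sign is always well defined):
x = np.array([1, 1, -1, 1, -1, -1, 1])
print(conv_sign_layer(x, [1, 1, 1], stride=1))  # length (7-3)/1+1 = 5
print(conv_sign_layer(x, [1, 1, 1], stride=2))  # length (7-3)/2+1 = 3
```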
This leaves us with four different structures of interest. Their corresponding graphs \(G_{n}\) are illustrated in Figure 2 and formally defined as follows:
1. Convolutional iterated 3-majority with stride 1, \(G_{n}^{(1)}=(V,E)\), where \(V=\{v_{n,0},v_{n-1,-1},v_{n-1,0},v_{n-1,1},v_{n-2,-2},v_{n-2,-1},v_{n-2,0},v_{ n-2,1},v_{n-2,2},\ldots,v_{0,-n},\ldots,v_{0,n}\}\) and \(E=\{(v_{k,i},v_{k-1,j}):k=n,\ldots,1\), \(j=i-1,i,i+1\}\).
2. Convolutional iterated 3-majority with stride 1 on an n-cycle, \(G_{n}^{(1^{\prime})}=(V,E)\), where \(V=\{v_{K,1}\}\cup\{v_{k,i}:k=0,\ldots,K-1,\ i=1,\ldots,n\}\) and \(E=\{(v_{K,1},v_{K-1,i}):i=1,\ldots,n\}\cup\{(v_{k,i},v_{k-1,j}):k=K-1,\ldots,1,\ |i-j|\leq 1\bmod n\}\).
3. Convolutional iterated 3-majority with stride 2, \(G_{n}^{(2)}=(V,E)\), where \(V=\{v_{n,1},v_{n-1,1},v_{n-1,2},v_{n-1,3},\ldots,v_{0,1},\ldots,v_{0,2^{n+1}- 1}\}\), \(E=\{(v_{n-k,i},v_{n-k-1,j}):k=0,\ldots,n-1\), \(|j-2i|\leq 1\}\).
The graph in (d) will not be treated further, other than noting that in the end it will be obvious that the results for (c) are valid there too, so its formal definition is skipped.
Given one of these \(G_{n}\), let \(f_{n}\) be the Boolean function induced by \(G_{n}\) where the value of each node \(v\), \(f(v)\), is evaluated as the majority of the evaluation of the three connected nodes below it (where "below" refers to the standard tree convention of thinking of the nodes \(v_{k-1,i}\) as sitting immediately below the nodes \(v_{k,i}\)). The nodes of layer 0 are considered to output the input \(\omega\) into the ConvNet, where \(\omega(j)\), \(j\in L_{0}\), corresponds to bit \(j\) in the Boolean input vector. For example, for a node \(v_{k,i}\in G_{n}^{(1)}\), \(f(v_{k,i})\) can be evaluated recursively using \(f(v_{k,i})=\operatorname{sign}\left(f(v_{k-1,i-1})+f(v_{k-1,i})+f(v_{k-1,i+1})\right)\) and \(f(v_{0,j})=\omega(j)\), \(\forall k,j:\,1\leq k\leq n\), \(-n\leq j\leq n\). We will denote the set of nodes at layer \(k\) as \(L_{k}\), \(k=0,\ldots,n\). In terms of the formal definition, \(L_{k}=\{v_{k,i}:v_{k,i}\in V\}\). Note that the layers/generations are numbered bottom up, which is unconventional in the graph sense, but is the "right thing" to do in the neural net sense. For any \(i\) such that \(v_{k,i}\) and \(v_{k,i+1}\) are both in \(V\), we say that those two vertices
are next to each other or a closest pair or, sometimes, neighbours in \(L_{k}\) even though they are strictly speaking not neighbours in \(G_{n}\). For \(G_{n}^{(1^{\prime})}\) and \(G_{n}^{(2^{\prime})}\), the nodes \(v_{k,1}\) and \(v_{k,n}\) are also said to be next to each other.
For each node \(u\in G_{n}\), in layer \(k\), let \(D_{u}\) denote its set of descendants, i.e. the set of nodes in layers \(k-1,\ldots,0\) defined recursively: the descendants in \(L_{k-1}\) are \(u\)'s three neighbours (in the graph sense) in \(L_{k-1}\), and the descendants in \(L_{k-j-1}\) are the nodes there that have an edge to at least one descendant in \(L_{k-j}\). Observe that to determine \(f(u)\) it is sufficient to study the subgraph \(D_{u}\). When \(v\) is a descendant of \(u\), we equivalently say that \(u\) is an ancestor of \(v\). Let \(A_{u}\) be the set of ancestors of \(u\), i.e. \(A_{u}\) is the set of nodes \(v\) such that \(u\in D_{v}\). Write \(D_{u,k}=D_{u}\cap L_{k}\), and \(A_{u,k}=A_{u}\cap L_{k}\), and write \(A_{u,k}^{+}\) for the union of \(A_{u,k}\) and the set of nodes in \(L_{k}\) that are next to a node in \(A_{u,k}\). If \(v\in A_{u}\), we say that \(v\) is a parent of \(u\), or that \(u\) is a child of \(v\), if the graphical distance between \(u\) and \(v\) is \(1\). Also define \(D_{\Lambda}=\cup_{u\in\Lambda}D_{u}\) for a set of nodes \(\Lambda\).
The following subsections prove the different noise properties of \(f_{n}\) induced by the different convolutional graphs.
### 4.1 Convolutional iterated 3-majority with stride \(1\)
Let \(G_{n}=G_{n}^{(1)}=(V_{n},E_{n})\), i.e. the 3-iterated majority network with stride \(1\), Figure 2a. Then the induced Boolean functions \(\{f_{n}\}\) require \(N=2n+1\) input bits which we label in accordance with how the network is defined: \(\omega(-n),\ldots,\omega(n)\).
Write \(f=f_{n}\). The key observation to make is that if \(f(v_{0,i})=\omega(i)\) and \(f(v_{0,i+1})=\omega(i+1)\) are equal, then \(f(v_{t,i})=f(v_{t,i+1})\) for all \(t\) such that \(v_{t,i}\) and \(v_{t,i+1}\) exist. If only one of them, say \(v_{t,i}\), exists, then \(f(v_{t,i})=f(v_{t-1,i})=f(v_{t-1,i+1})\). In particular, if \(\omega(-1)=\omega(0)\) or \(\omega(0)=\omega(1)\), then \(f(\omega)=f(v_{n,0})=\omega(0)\). Obviously this generalises to the statement that if \(f(v_{s,0})\) and \(f(v_{s,1})\) are equal, then \(f(v_{t,0})=f(v_{t,1})\) for all \(t\geq s\).
If we instead have \(\omega(-1)\neq\omega(0)\neq\omega(1)\) and \(\omega(2)=\omega(1)\), then \(f(v_{1,0})=f(v_{1,1})=\omega(1)=\omega(2)\)
and it then follows in the same way that \(f(\omega)=\omega(1)\).
An inductive structure suggests itself. Let \(K\) be the smallest positive integer such that either \(\omega(-K)=\omega(-(K-1))\neq\omega(-(K-2))\neq\ldots\neq\omega(K-1)\) or \(\omega(-(K-1))\neq\omega(-(K-2))\neq\ldots\neq\omega(K-1)=\omega(K)\), if such a \(K\) exists. In other words, \(K\) is the distance from \(0\) to a closest pair of input bits with the same value. (There may be two such pairs. If so, by parity \(\omega(-K)=\omega(-(K-1))=\omega(K-1)=\omega(K)\), with the input alternating between \(-(K-1)\) and \(K-1\).) If no such \(K\) exists, set \(K=n+1\).
Assume without loss of generality that, unless \(K=n+1\), \(\omega(K-1)=\omega(K)\). It follows from the definition of \(K\) that \(\omega(-(K-1))\neq\omega(-(K-2))\neq\ldots\neq\omega(K-2)\neq\omega(K-1)\). This clearly holds also for \(K=n+1\). If \(K\leq n\), we then get \(f(v_{1,-(K-2)})\neq f(v_{1,-(K-3)})\neq\ldots\neq f(v_{1,K-3})\neq f(v_{1,K-2})=f(v_{1,K-1})\). This means that in layer \(1\), the closest pair of bits with the same value is one step closer to \(0\) than in layer \(0\). It now follows from induction that \(f_{n}(\omega)=f(v_{n,0})=\omega(K)\). Of particular importance here is to observe that \(f_{n}(\omega)\) does not depend on \(\omega(j)\), \(j\not\in[-K,K]\).
It remains to understand what happens if \(K=n+1\), i.e. when the whole input is alternating: \(\omega(-n)\neq\omega(-(n-1))\neq\ldots\neq\omega(n-1)\neq\omega(n)\). However, then the values at every layer will alternate as well, and \(f(\omega)=\omega(-n)=\omega(n)\).
We have established the following lemma.
**Lemma 4.1**.: (Closest pair lemma)
The induced Boolean function \(f=\) "iterated 3-majority with stride \(1\)" on \(G_{n}^{(1)}\) is the same as the function \(g\) with the following description:
Given the input bit vector \(\omega\) of length \(2n+1\), let \(K\) be the smallest positive integer \(i\) such that \(\omega(-i)=\omega(-(i-1))\) or \(\omega(i-1)=\omega(i)\) and set \(g(\omega)=\omega(-K)\) or \(g(\omega)=\omega(K)\) in the respective cases. If no such pair exists, set \(g(\omega)=\omega(n)\).
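The lemma can be verified exhaustively for small \(n\); the sketch below (illustrative only, with ad hoc helper names) compares the network evaluation of \(f\) with the closest pair description of \(g\) over all \(2^{2n+1}\) inputs.

```python
import itertools
import numpy as np

def iterated_majority_stride1(bits):
    # bits = (omega(-n), ..., omega(n)); take 3-majorities of neighbouring
    # triples layer by layer until the single value f(v_{n,0}) remains.
    x = list(bits)
    while len(x) > 1:
        x = [int(np.sign(x[i] + x[i + 1] + x[i + 2])) for i in range(len(x) - 2)]
    return x[0]

def closest_pair(bits):
    n = (len(bits) - 1) // 2
    w = {i - n: b for i, b in enumerate(bits)}  # index the bits by -n, ..., n
    for k in range(1, n + 1):
        if w[-k] == w[-(k - 1)]:
            return w[-k]
        if w[k - 1] == w[k]:
            return w[k]
    return w[n]  # fully alternating input

n = 3  # exhaustive check over all 2^(2n+1) = 128 inputs
for bits in itertools.product([-1, 1], repeat=2 * n + 1):
    assert iterated_majority_stride1(bits) == closest_pair(bits)
print("Lemma 4.1 verified for n =", n)
```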
With this lemma we can state the following theorem.
**Theorem 4.1**.: _The iterated 3-majority function with stride \(1\) is noise stable._
Proof.: According to Lemma 4.1 we can translate the iterated 3-majority with stride \(1\) function to the closest pair function.
Let \(K\) be the distance between the closest pair and the mid input vertex \(0\). Since the bits are i.i.d., \(K\) is geometric with parameter \(3/4\) truncated at \(n+1\):
\[\operatorname{P}(K=k)=\frac{3}{4}\left(\frac{1}{4}\right)^{k-1}=3\left(\frac{ 1}{4}\right)^{k},\]
\(k=1,\ldots,n\) and \(\operatorname{P}(K=n+1)=(1/4)^{n}\).
Since \(f_{n}\) does not depend on \(\omega\) outside \([-K,K]\), for \(f_{n}(\omega^{\epsilon})\neq f_{n}(\omega)\) to hold there must at least be a disagreement in the interval \([-K,K]\), i.e. a \(j\in[-K,K]\) with \(\omega(j)\neq\omega^{\epsilon}(j)\). This gives
\[\operatorname{P}(f_{n}(\omega)\neq f_{n}(\omega^{\epsilon})|K=k)\leq 1-(1- \epsilon)^{2k+1},\]
for \(k=1,\ldots,n\), and \(\operatorname{P}(f_{n}(\omega)\neq f_{n}(\omega^{\epsilon})\,|\,K=n+1)\leq 1-(1-\epsilon)^{2n+1}\).
Putting this into Definition (2.2) we see that
\[\lim_{\epsilon\to 0}\limsup_{n}\operatorname{P}(f_{n}(\omega^{\epsilon})\neq f_{n}(\omega)) \leq\lim_{\epsilon\to 0}\limsup_{n}\left(\sum_{k=1}^{n}3\left(\frac{1}{4}\right)^{k}\left(1-(1-\epsilon)^{2k+1}\right)+\left(\frac{1}{4}\right)^{n}\right)\] \[\leq\lim_{\epsilon\to 0}\sum_{k=1}^{\infty}3\left(\frac{1}{4}\right)^{k}\left(1-(1-\epsilon)^{2k+1}\right)=\lim_{\epsilon\to 0}\left[1-\frac{3(1-\epsilon)^{3}}{4-(1-\epsilon)^{2}}\right]=0.\]
This concludes the proof.
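The series evaluation in the last display is easily checked numerically (illustrative only):

```python
# Compare a truncated partial sum with the closed form; both tend to 0
# as eps -> 0, as claimed in the proof of Theorem 4.1.
for eps in [0.2, 0.05, 0.01, 0.001]:
    partial = sum(3 * 0.25 ** k * (1 - (1 - eps) ** (2 * k + 1))
                  for k in range(1, 200))
    closed = 1 - 3 * (1 - eps) ** 3 / (4 - (1 - eps) ** 2)
    print(f"eps={eps}: partial sum={partial:.6f}, closed form={closed:.6f}")
```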
Let us now consider when the filter weights \(\theta_{t}\) are random. Then \(\theta_{t}\) represents either a regular majority or a dictator function. If \(\theta_{t}\) represents a dictator function, each node \(u\) at layer \(t\) only depends on one node \(v\) at layer \(t-1\) and the values \(f(u)\) and \(f(v)\) are the same. This means we can effectively skip that layer, on noticing that either the leftmost or the rightmost node of layers \(t-1,t-2,\ldots,0\) can no longer affect anything and can be removed. Removing all dictator layers results in an ordinary stride 1 iterated 3-majority with fewer layers. Hence Theorem 4.1 applies and we can conclude the following.
**Corollary 4.1**.: _The iterated 3-majority function with stride 1 is annealed and quenched noise stable under any probability distribution on \(\theta_{t}\)._
### 4.2 Convolutional iterated 3-majority with stride \(1\) on the \(n\)-cycle
Consider now \(G_{n}^{(1^{\prime})}\), Figure 2b, the network where each layer has \(n\) nodes with circular convention so that the first node at each layer is considered next to the last node in that layer. We assume that \(n\) is odd. Each output from node \(i\) in layer \(t\) is the majority of nodes \(i-1\), \(i\), \(i+1\) in layer \(t-1\). The final output of the network is the majority of \(\omega_{T}\) for some \(T=T_{n}\). The starting configuration is \(\omega_{0}\in\{-1,1\}^{n}\).
**Theorem 4.2**.: _For any \(T=T_{n}\), the convolutional iterated 3-majority function with stride 1 on an n-cycle is noise stable._
Proof.: Let \(\ldots,X(-1),X(0),X(1),X(2),\ldots\) be independent Bernoulli(1/2) random variables and model \((\omega,\omega^{\epsilon})\) in terms of these by taking \(\omega=\omega_{0}=(X(1),\ldots,X(n))\) and \(\omega^{\epsilon}=(X^{\epsilon}(1),\ldots,X^{\epsilon}(n))\). Let \(\tau_{0}=\min\{i>1:X(i-1)=X(i)=X^{\epsilon}(i-1)=X^{\epsilon}(i)\}\). Then recursively, let
\[\tau_{j}=\min\{i>\tau_{j-1}+1:X(i)=X(i-1)=X^{\epsilon}(i-1)=X^{\epsilon}(i)=-X (\tau_{j-1})\},\,j=1,2,\ldots.\]
Observe that if one writes \(X_{0}=X\), one can then define \(X_{1},X_{2},\ldots\) in complete analogy with how \(\omega_{1},\omega_{2},\ldots\) are defined, i.e. \(X_{t+1}(i)=\operatorname{sign}(X_{t}(i-1)+X_{t}(i)+X_{t}(i+1))\), \(t=0,1,2,\ldots\).
Let \(V_{0}=[1,\tau_{0}]\) and then \(V_{j}=[\tau_{j-1}+1,\tau_{j}]\). Let \(S=\max\{j\geq 0:\tau_{j}\leq n\}\) (with \(S=-1\) if \(\tau_{0}>n\)). Let also \(\tilde{V}=[\tau_{S}+1,n]\) (with \(\tau_{-1}\) taken to be 0). Note that \(\tilde{V}\subseteq V_{S+1}\) (and \(\tilde{V}\) may also be empty). In short, this divides \([1,n]\) into chunks, where each chunk ends with two consecutive indexes \(i-1\) and \(i\) where \(X_{t}(i-1)=X_{t}(i)=X_{t}^{\epsilon}(i-1)=X_{t}^{\epsilon}(i)\) for all \(t\) plus a chunk \(\tilde{V}\) at the end, which is empty precisely if \(\tau_{S}=n\). On the circle, i.e. where \(\omega\) and \(\omega^{\epsilon}\) are defined, \(V_{0}\) and \(\tilde{V}\) are next to each other in a natural way.
Note that \(X_{T}\) and \(\omega_{T}\) are equal on each \(V_{j}\), \(j=1,\ldots,S\). They may differ on \(V_{0}\) and \(\tilde{V}\), but this difference will not need to be controlled in any other way than the simple observation that their cardinalities are bounded.
Stability will be proven by proving that, for sufficiently small \(\epsilon>0\), the number of bits where \(\omega_{T}\) and \((\omega^{\epsilon})_{T}\) differ is with high probability smaller than the difference between the number of 1's in \(\omega_{T}\) and \(n/2\).
For subsets \(I\) of \([n]\) and \(y\in\{-1,1\}^{n}\), let \(y(I)=(y(i))_{i\in I}\). Let \(s(y(I))=\sum_{i\in I}y(i)\) and \(s(y)=s(y([n]))\). Let \(C_{j}=s((\omega^{\epsilon})_{T}(V_{j}))-s(\omega_{T}(V_{j}))\), \(j=0,1,2,\ldots,S\), and \(C_{\tilde{V}}=s((\omega^{\epsilon})_{T}(\tilde{V}))-s(\omega_{T}(\tilde{V}))\), so that \(C_{\tilde{V}}+\sum_{j=0}^{S}C_{j}=s((\omega^{\epsilon})_{T})-s(\omega_{T})\). Let also \(C_{j}^{\prime}=s((X^{\epsilon})_{T}(V_{j}))-s(X_{T}(V_{j}))\), \(j=1,2,\ldots\), and note that the \(C_{j}^{\prime}\)'s are i.i.d. and that \(C_{j}=C_{j}^{\prime}\) for \(j=1,\ldots,S\), so that \(s((\omega^{\epsilon})_{T})-s(\omega_{T})\) also equals \(C_{0}+C_{\tilde{V}}+\sum_{j=1}^{S}C_{j}^{\prime}\).
Clearly all moments of \(|V_{j}|\) are uniformly bounded. Let \(\nu=\operatorname{E}[|V_{1}|]\) and let \(m=\lfloor n/\nu\rfloor\). Fix arbitrarily small \(\rho>0\) and \(\delta>0\). By the Central Limit Theorem, there is a constant \(K_{1}<\infty\)
independent of \(n\) such that with \(K^{-}=m-K_{1}\sqrt{n}\) and \(K^{+}=m+K_{1}\sqrt{n}\), for large \(n\)
\[\mathrm{P}\left(\sum_{j=0}^{K^{-}}|V_{j}|\geq n\right)<\frac{1}{8}\delta,\ \mathrm{P}\left(\sum_{j=0}^{K^{+}}|V_{j}|\leq n\right)<\frac{1}{8}\delta\]
so that
\[\mathrm{P}(K^{-}\leq S\leq K^{+})>1-\frac{1}{4}\delta. \tag{16}\]
The \(C^{\prime}_{j}\)'s by symmetry have mean \(0\). Take some \(j\geq 1\) and fix it until (17). Let \(F=F_{j}\) be the event that there exists an index \(i\in V_{j}\) with \(X(i)\neq X^{\epsilon}(i)\). On \(F^{c}_{j}\), \(X_{T}(i)=(X^{\epsilon})_{T}(i)\) for all \(T\) and \(i\in V_{j}\). Thus \(C^{\prime}_{j}=0\) on \(F^{c}\). Let
\[\chi_{\epsilon}=\chi^{\epsilon}_{j}=\min\{r>0:X(\tau_{j-1}+2r)\neq X^{ \epsilon}(\tau_{j-1}+2r)\ \mathrm{or}\ X(\tau_{j-1}+2r+1)\neq X^{\epsilon}(\tau_{j-1}+2r+1)\}\]
\[\chi=\chi_{j}=\min\{r>0:X(\tau_{j-1}+2r)=X^{\epsilon}(\tau_{j-1}+2r)=X(\tau_{ j-1}+2r+1)=X^{\epsilon}(\tau_{j-1}+2r+1)\neq X(\tau_{j-1})\}.\]
Then \(\chi\) and \(\chi_{\epsilon}\) are geometric random variables that cannot take on the same value and \(\chi\geq|V_{j}|/2\). Also, \(F\subset G:=\{\chi_{\epsilon}<\chi\}\) and it is standard that \(G\) is independent of \(\min(\chi,\chi_{\epsilon})\) and
\[\mathrm{P}(G)=\frac{2\epsilon-\epsilon^{2}}{2\epsilon-\epsilon^{2}+\frac{1}{ 4}(1-\epsilon)^{2}}<10\epsilon.\]
Hence
\[\begin{split}\mathrm{Var}(C^{\prime}_{j})&\leq\mathrm{E}[C^{\prime\,2}_{j}]\leq 4\mathrm{E}[|V_{j}|^{2}\mathbf{1}_{F}]\leq 4\mathrm{E}[\chi^{2}\mathbf{1}_{G}]\\ &=4\mathrm{E}[\mathrm{E}[\chi^{2}\mathbf{1}_{G}|\min(\chi,\chi^{\epsilon})]]\\ &=4\mathrm{E}[\mathrm{P}(G|\min(\chi,\chi^{\epsilon}))\mathrm{E}[\chi^{2}|\min(\chi,\chi^{\epsilon}),G]]\\ &=4\mathrm{P}(G)\mathrm{E}[(\min(\chi,\chi^{\epsilon})+Y)^{2}]\\ &<40\epsilon\mathrm{E}[(\min(\chi,\chi^{\epsilon})+Y)^{2}],\end{split}\]
where \(Y\) is a copy of \(\chi\) that is independent of \((\chi,\chi_{\epsilon})\). This gives
\[\mathrm{Var}(C^{\prime}_{j})<160\epsilon\mathrm{E}[\chi^{2}]<16000\epsilon. \tag{17}\]
It follows that
\[\mathrm{Var}\left[\sum_{j=1}^{K^{-}}C^{\prime}_{j}\right]<16000\epsilon m.\]
By the Central Limit Theorem, for sufficiently large constant \(M_{5}\)
\[\mathrm{P}\left(\left|\sum_{j=1}^{K^{-}}C^{\prime}_{j}\right|>M_{5}\epsilon^{1 /2}\sqrt{n}\right)<\frac{\delta}{4}.\]
Taking \(\epsilon\) sufficiently small, we get
\[\mathrm{P}\left(\left|\sum_{j=1}^{K^{-}}C^{\prime}_{j}\right|<\frac{1}{2} \rho\sqrt{n}\right)>1-\frac{1}{4}\delta. \tag{18}\]
Kolmogorov's inequality gives
\[\mathrm{P}\left(\max_{K}\Big{|}\sum_{j=K^{-}}^{K}C^{\prime}_{j}\Big{|}>n^{1/3}\right)<\frac{1}{4}\delta \tag{19}\]
for \(n\) large. Adding that \(\mathrm{P}(|C_{0}+C_{\tilde{V}}|>a_{n})\leq\mathrm{P}(2(|V_{0}|+|\tilde{V}|)>a_{n})\to 0\) for any \(a_{n}\to\infty\) to (16), (18) and (19) and summarising and recalling \(C_{j}=C_{j}^{\prime}\) for \(1\leq j\leq S\), we get
\[\mathrm{P}\left(\left|C_{0}+C_{\tilde{V}}+\sum_{j=1}^{S}C_{j}^{\prime}\right|<\rho\sqrt{n}\right)=\mathrm{P}\left(|s((\omega^{\epsilon})_{T})-s(\omega_{T})|<\rho\sqrt{n}\right)>1-\delta \tag{20}\]
for \(\epsilon\) sufficiently small and \(n\) sufficiently large.
The second part is very similar, but slightly easier. Let \(\xi_{-1}^{+}=0\). Define for \(j=0,1,2,\ldots\) recursively
\[\xi_{j}^{-}=\min\{i>\xi_{j-1}^{+}+1:X_{0}(i-1)=X_{0}(i)=-1\},\,\xi_{j}^{+}=\min \{i>\xi_{j}^{-}+1:X_{0}(i-1)=X_{0}(i)=1\}.\]
Let \(U_{j}=[\xi_{j-1}^{+}+1,\xi_{j}^{+}]\), \(j=0,1,2,\ldots\), and let \(\mu=\mathrm{E}[|U_{j}|]\). Let \(R=\max\{j:\xi_{j}^{+}\leq n\}\). Let also \(\tilde{U}=[\xi_{R}^{+}+1,n]\). Since \(|U_{j}|\) is clearly bounded stochastically by two times a sum of two independent geometric(1/4) random variables, all moments of \(|U_{j}|\) are finite. Let \(m=\lfloor n/\mu\rfloor\) and fix an arbitrarily small \(\delta>0\). The Central Limit Theorem gives that for a sufficiently large constant \(K_{2}\),
\[\mathrm{P}(L^{-}<R<L^{+})>1-\frac{1}{4}\delta, \tag{21}\]
where \(L^{-}=m-K_{2}\sqrt{n}\) and \(L^{+}=m+K_{2}\sqrt{n}\). Let \(D_{j}=s(\omega_{T}(U_{j}))=\sum_{i=\xi_{j-1}^{+}+1}^{\xi_{j}^{+}}\omega_{T}(i)\), \(j=0,1,2,\ldots,R\), \(D_{\tilde{U}}=s(\omega_{T}(\tilde{U}))\) and \(D_{j}^{\prime}=s(X_{T}(U_{j}))\), \(j=1,2,\ldots\). The \(D_{j}^{\prime}\) are i.i.d. with mean \(0\) (by symmetry) and, since \(|D_{j}^{\prime}|\leq|U_{j}|\), finite and clearly nonzero variance. Since \(X_{T}=\omega_{T}\) on \(U_{1},\ldots,U_{R}\), it also holds that \(D_{j}^{\prime}=D_{j}\) for \(j=1,\ldots,R\) and \(s(\omega_{T})=D_{0}+D_{\tilde{U}}+\sum_{j=1}^{R}D_{j}^{\prime}\). By the Central Limit Theorem and sufficiently small \(\rho>0\),
\[\mathrm{P}\left(\left|\sum_{j=1}^{L^{-}}D_{j}^{\prime}\right|>3\rho\sqrt{n} \right)>1-\frac{1}{4}\delta. \tag{22}\]
Also, for large \(n\) by Kolmogorov's inequality
\[\mathrm{P}\left(\max_{L}\Big{|}\sum_{j=L^{-}}^{L}D_{j}^{\prime}\Big{|}<n^{1/ 3}\right)>1-\frac{1}{4}\delta. \tag{23}\]
Summing up (21), (22) and (23), we get for large \(n\) and sufficiently small \(\rho>0\),
\[\mathrm{P}\left(\Big{|}\sum_{j=1}^{R}D_{j}\Big{|}>2\rho\sqrt{n}\right)>1- \frac{3}{4}\delta. \tag{24}\]
Also, for any \(a_{n}\to\infty\), \(P(|U_{1}|+|U_{R+1}|<a_{n})\to 1\). This gives in conjunction with (24) for sufficiently large \(n\),
\[\mathrm{P}\left(|s(\omega_{T})|>2\rho\sqrt{n}\right)>1-\delta.\]
Taking \(\rho\) also small enough to satisfy (20), we get for sufficiently large \(n\)
\[\mathrm{P}(\mathrm{maj}(\omega_{T})\neq\mathrm{maj}((\omega^{\epsilon})_{T})) <2\delta.\]
This proves noise stability.
**Remark.** With similar arguments as for the non-cycle case, it is easy to see that the outputs from each layer will soon freeze in a configuration that is easily determined by the following observation. Suppose \(\omega_{0}(k-1)=\omega_{0}(k)=1\) and let \(l=\min\{j\geq k+2\;:\;\omega_{0}(j-1)=\omega_{0}(j)=-1\}\). Let also \(m=\max\{k+2\leq j\leq l-2\;:\;\omega_{0}(j-1)=\omega_{0}(j)=1\}\). Then for \(t\geq d\) (note that \(l-m\) is even), we have \(\omega_{t}(k+1)=\ldots=\omega_{t}((l+m-2)/2)=1\), \(\omega_{t}((l+m)/2)=\ldots=\omega_{t}(l)=-1\), where \(d=(1/2)\max\{j-i:k+2\leq i\leq j\leq\ell:\omega_{0}(i)=-1,\omega_{0}(i+1)=1, \ldots,\omega_{0}(j-1)=-1,\omega_{0}(j)=1\}\). Of course the analogous thing with signs reversed holds. Taking \(\bar{d}\) as the maximum of all such \(d\)'s over the whole input, \(\omega_{t}=\omega_{\bar{d}}\) for all \(t\geq\bar{d}\). It is easy to see that \(\bar{d}\) is of order \(\log n\).
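The freezing described in the remark is easy to observe by simulation; the sketch below (illustrative only, with ad hoc helper names) iterates the cycle dynamics from a uniform random configuration until it is fixed, and the average freezing time indeed grows roughly like \(\log n\).

```python
import numpy as np

rng = np.random.default_rng(2)

def step(w):
    # One layer of 3-majority with stride 1 on the n-cycle.
    return np.sign(np.roll(w, 1) + w + np.roll(w, -1)).astype(int)

def freeze_time(n):
    # Iterate from a uniform random configuration until it is fixed; every
    # configuration on an odd cycle has two adjacent equal bits, so the
    # dynamics freezes as described in the remark.
    w, t = rng.choice([-1, 1], size=n), 0
    while True:
        nxt = step(w)
        if np.array_equal(nxt, w):
            return t
        w, t = nxt, t + 1

for n in [101, 1001, 10001]:  # the freezing time should grow like log n
    print(n, np.mean([freeze_time(n) for _ in range(50)]))
```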
As in Section 4.1, the result can be generalised to random \(\theta_{t}\). This is easily done by noticing that if \(\theta_{t}\) represents a dictator function, each node at layer \(t\) only has one input from layer \(t-1\) with no overlap between nodes. This means that we can effectively skip that layer and indeed the whole graph can be collapsed into a less deep graph with the same width where each layer expresses regular 3-majority. The result is summarised in the following corollary.
**Corollary 4.2**.: _The iterated 3-majority with stride 1 and with random \(\theta_{t}\) chosen according to any probability distribution on the \(n\)-cycle is quenched and annealed noise stable._
### 4.3 Convolutional iterated 3-majority with stride \(2\)
Here \(G_{n}=G_{n}^{(2)}\), i.e. the graph in Figure 2c, and the function \(f_{n}\) induced by \(G_{n}\) are considered, but as we will see in the end, the results are easily extended to \(G_{n}^{(2^{\prime})}\). There are a few crucial observations to make about \(G_{n}\). First, at each layer \(k\), \(|A_{j,k}|\leq 2\) for every input node \(j\in L_{0}\). This follows, by the definition of \(G_{n}\), from the easily checked fact that the union of the sets of parents of two neighbouring nodes is either two neighbouring nodes or a single node. Secondly, for two nodes \(u,u^{\prime}\in L_{k}\), \(D_{u}\cap D_{u^{\prime}}=\emptyset\) if \(u\) and \(u^{\prime}\) are not next to each other, since the sets of children of \(u\) and \(u^{\prime}\) are disjoint and no child of \(u\) is next to any child of \(u^{\prime}\). This means \(f(u)\) and \(f(u^{\prime})\) are measurable with respect to two disjoint subsets of \(\omega\) and are hence independent. We can now formulate the following lemma.
**Lemma 4.2**.: Fix an arbitrary layer \(L_{k}\) and let \(u_{j}=v_{k,j}\) be the nodes in \(L_{k}\) enumerated from left to right. Also, let \(S\subseteq\{1,\ldots,|L_{k}|\}\). Then
\[\mathrm{P}(\forall j\in S:f(u_{j})=1)\geq\frac{1}{2^{|S|}}.\]
Proof.: Since each \(f(u_{j})\) is an increasing function of \(\omega\), this follows from Harris' inequality.
Now, let \(\mathcal{A}\) be some randomised algorithm that determines the value of \(f_{n}(\omega)\) by querying values of \(\omega\) one by one. Let \(J_{\mathcal{A}}\) be the set of bits \(\omega_{i}\) that are queried to determine \(f(\omega)\). As in [6], p. 90, we make the following definition.
**Definition 4.1**.: The revealment of a randomised algorithm \(\mathcal{A}\) for a Boolean function \(f\), denoted \(\delta_{\mathcal{A}}\), is defined by
\[\delta_{\mathcal{A}}=\max_{j\in\{1,\ldots,2^{n+1}-1\}}\mathrm{P}(j\in J_{\mathcal{A}})\]
and the revealment of a Boolean function \(f\) is defined as
\[\delta_{f}=\inf_{\mathcal{A}}\delta_{\mathcal{A}}.\]
The following crucial fact holds ([6], p. 93).
**Theorem 4.3**.: _If the revealments satisfy_
\[\lim_{n\to\infty}\delta_{f_{n}}=0\]
_then \(\{f_{n}\}\) is noise sensitive._
We can now state the following theorem.
**Theorem 4.4**.: _The sequence of convolutional iterated \(3\)-majority function on \(G_{n}^{(2)}\) with stride \(2\) is noise sensitive. This also holds on \(G_{n}^{(2^{\prime})}\)._
Proof.: Let \(n\) be fixed, and for now, a multiple of three. We recursively define an algorithm \(\mathcal{A}(m,W)\) that for all \(m=0,\ldots,n/3\) and all \(W\subseteq L_{3m}\), finds \(f(w)\), \(w\in W\) in random order. The algorithm goes as follows.
**Starting step:**\(\mathcal{A}(0,W)\): Query all bits in \(W\subseteq L_{0}\) in a random uniform order.
**Inductive step:**\(\mathcal{A}(m+1,W)\): Let \(o\) be a uniform random permutation of \(W\). Now recursively find \(f(o(\ell))\), \(\ell=1,\ldots,|W|\) by querying nodes in \(D_{o(\ell),3m}\) with \(\mathcal{A}(m,D_{o(\ell),3m})\) with the modification that when and if, in the process of doing so, encountering a node \(v\in D_{o(\ell),3m}\) such that \(f(v)\) has no information to give on \(f(o(\ell))\), then skip the query of \(v\). When querying each \(f(o(\ell))\), do not use any information gained when querying \(f(o(1)),\ldots,f(o(\ell-1))\).
In other words, by this definition a node \(v\in D_{W,3m}\) that has two ancestors, \(a_{1}\) and \(a_{2}\), in \(W\) may have been queried to find \(f(a_{1})\) previously, but the knowledge of \(f(v)\) is not used if one later needs to query \(f(a_{2})\) until the turn comes to \(v\) in \(\mathcal{A}(m,D_{a_{2},3m})\) (if \(f(v)\) can influence \(f(a_{2})\) at that point). We refer to this as the algorithm being _forgetful_; when querying each \(f(o(\ell))\), no input is known at the start of doing that. (Of course, when the turn comes to \(v\), do not query \(f(v)\) again, but simply do not use the knowledge of \(f(v)\) until this precise moment.)
Note that by forgetfulness, the permutation \(o\) is independent of which input bits are queried in the end. Refer to this as the algorithm being _permutation independent_.
Let \(R_{m,W,j}\) be the event that \(\mathcal{A}(m,W)\) queries \(j\) and let \(q_{m}=\max_{W}\max_{j}\mathrm{P}(R_{m,W,j})\).
Fix an arbitrary leaf \(j\) and assume that \(W\subseteq L_{3(m+1)}\) and \(W\cap A_{j,3(m+1)}\neq\emptyset\). We claim that \(\mathrm{P}(R_{m+1,W,j})\leq\mathrm{P}(R_{m,D_{W,3m},j})\). This follows directly from the forgetfulness of \(\mathcal{A}(m+1,W)\): the full algorithm \(\mathcal{A}(m,D_{W,3m})\) queries every node in \(D_{W,3m}\), whereas its modification, as applied in \(\mathcal{A}(m+1,W)\), generally does not, and forgetfulness implies that deterministically, for every possible input \(\omega\), the modification only queries a subset of the input nodes queried by the full algorithm.
In order to ever query \(j\) by \(\mathcal{A}(m+1,W)\) it must be the case that
1. At least one \(a\in A_{j,3m}\) is queried when querying \(w\) for some \(w\in A_{a,3(m+1)}\) and
2. for at least one such \(a\), \(j\) is queried for finding \(f(a)\).
For an ancestor \(a\in A_{j,3(m+1)}\), let \(o^{\prime}_{a}\) be the permutation of \(D_{a,3m}\) that is included by recursion in Algorithm \(\mathcal{A}(m,D_{a,3m})\). Let \(E_{a}\) be the event that \(o^{\prime}_{a}(u)<o^{\prime}_{a}(h)<o^{\prime}_{a}(v)\) for all \(u\in D_{a,3m}\setminus A^{+}_{j,3m}\), \(h\in A_{j,3m}\cap D_{a,3m}\) and \(v\in A^{+}_{j,3m}\cap D_{a,3m}\setminus A_{j,3m}\). Let \(E=\bigcap_{a\in A_{j,3(m+1)}}E_{a}\).
Let us lower bound \(\mathrm{P}(E_{a})\). Now \(D_{a,3m}\) can contain either one or two ancestors of \(j\), and \(A^{+}_{j,3m}\) could be included in \(D_{a,3m}\), or one node in \(A^{+}_{j,3m}\setminus A_{j,3m}\) could be outside \(D_{a,3m}\). The "worst case" for the present purpose is when \(j\) has two ancestors, \(h_{1},h_{2}\in A_{j,3m}\), and \(A^{+}_{j,3m}\subset D_{a,3m}\). Since \(D_{a,3m}\) contains \(15\) nodes we get in this case
\[\mathrm{P}(E_{a})=\frac{1}{6{15\choose 4}}.\]
It is easy to see that in the other cases, \(\mathrm{P}(E_{a})\) is larger than the right hand side. Since \(W\) may contain two ancestors of \(j\), we get
\[\mathrm{P}(E)\geq\frac{1}{\big{(}6{15\choose 4}\big{)}^{2}}.\]
Since the probability of \(\mathcal{A}(m,D_{W,3m})\) querying \(j\) is independent of the order in which the nodes in \(D_{W,3m}\) are queried (i.e. the permutation independence), the above claim gives
\[\mathrm{P}(R_{m+1,W,j}|E^{c})\leq\mathrm{P}(R_{m,D_{W,3m},j})\leq q_{m}.\]
Let \(F\) be the event that \(f(u)=f(v)\) for all \(u,v\in D_{A_{j,3(m+1)},3m}\setminus A^{+}_{j,3m}\). Then, according to Lemma 4.2, \(\mathrm{P}(F)\geq 1/2^{18}\). On \(E\), for \(j\) to be queried, \(F^{c}\cap R_{m,D_{W,3m},j}\) must necessarily occur. Also, \(F\) and \(R_{m,D_{W,3m},j}\) are independent. This holds since \(R_{m,D_{W,3m},j}\) by forgetfulness depends only on the leaves in \(D_{A_{j,3m}}\) and the randomness in the implicit random permutations when \(\mathcal{A}(m,a)\), \(a\in A_{j,3m}\), are carried out, whereas \(F\) depends on none of that. Clearly the two events are also independent of \(E\). Hence we get
\[\mathrm{P}(R_{m+1,W,j})\leq\mathrm{P}(E^{c})q_{m}+\mathrm{P}(E)\mathrm{P}(F^{ c})\mathrm{P}(R_{m,D_{W,3m},j}|E)\leq(\mathrm{P}(E^{c})+\mathrm{P}(E) \mathrm{P}(F^{c}))q_{m}.\]
Since this bound is independent of \(W\) and \(j\),
\[q_{m+1}\leq(\mathrm{P}(E^{c})+\mathrm{P}(E)\mathrm{P}(F^{c}))q_{m}\leq\left(1 -\frac{1}{36{15\choose 4}^{2}2^{18}}\right)q_{m}.\]
Hence
\[q_{n/3}\leq\left(1-\frac{1}{36{15\choose 4}^{2}2^{18}}\right)^{n/3}\to 0\]
as desired. Cases when \(n\) is not a multiple of \(3\) can be completed by e.g. querying everything in layers \(0,\ldots,3(n/3-\lfloor n/3\rfloor)\) and then using the above algorithm from there. This gives
\[q_{n/3}\leq\left(1-\frac{1}{36{15\choose 4}^{2}2^{18}}\right)^{\lfloor n/3 \rfloor}\to 0\]
which in combination with Theorem 4.3 proves the theorem.
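The contraction factor appearing in the last bounds is astronomically close to, but strictly below, \(1\); the following sketch (illustrative only) computes it and the number of three-layer blocks needed before the bound drops below a given level, illustrating how slowly, but surely, the revealment decays.

```python
from math import comb, log

# Per-three-layer contraction factor from the proof of Theorem 4.4:
c = 1 - 1 / (36 * comb(15, 4) ** 2 * 2 ** 18)
print(c)  # strictly below 1, so q_{n/3} -> 0 as n grows

# Number of three-layer blocks m needed before the bound c**m drops
# below delta:
delta = 1e-3
print(log(delta) / log(c))
```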
The arguments in the proof are easily adapted to when \(\theta_{t}\) are random and we get the following corollary.
**Corollary 4.3**.: _The iterated 3-majority function with stride \(2\) is annealed and quenched noise sensitive if the distribution of \(\theta_{t}\) is such that the probability of representing a majority is bounded away from zero._
Proof.: We show that with a very high probability, \(\{f_{n}\}\) will be such that the revealment converges to zero as \(n\) grows. As previously discussed, \(\theta_{t}\) represents either a majority or dictator function. Using the same algorithm as in Theorem 4.4, \(q_{m}\) is now random depending on the filter structure, and we get that for a fixed iteration \(m\)
\[q_{m+1}\leq\left(1-\frac{1}{36{15\choose 4}^{2}2^{18}}\right)q_{m}\]
if the layers \(3m\), \(3m+1\) and \(3m+2\) all represent majority layers. If this is not the case, i.e. if at least one of the layers represents a dictator, we instead just use that \(q_{m+1}\leq q_{m}\). Letting
\[C_{n}=|\{m\in\{1,\ldots,\lfloor n/3\rfloor\}:\theta_{3m},\theta_{3m+1}\text{ and }\theta_{3m+2}\text{ represent majority functions}\}|\]
we get
\[q_{n/3}\leq\left(1-\frac{1}{36{15\choose 4}^{2}2^{18}}\right)^{C_{n}}.\]
Since the probability of a given \(\theta_{t}\) representing a majority is bounded away from zero, \(C_{n}\to\infty\) in probability as \(n\) grows and hence the right hand side converges to \(0\). This proves quenched noise sensitivity. Finally, using Theorem 2.1 and the fact that \(E_{\omega}[f_{n}(\omega)]=0\) for all \(\theta_{t}\), annealed noise sensitivity follows.
### 4.4 Extensions to convolutional iterated \(2k+1\) majority with overlap
Obviously the problems just treated for convolutional iterated \(3\)-majority are equally interesting with \(3\) replaced with \(2k+1\) for some integer \(k\geq 2\). In this case the sensitivity is trivial if the stride \(s\geq 2k+1\), since this would correspond to the regular \(2k+1\) iterated majority with no shared nodes. For smaller \(s\) the corresponding graph \(G_{n,k,s}\) is defined as \(G_{n,k,s}=(V_{n,k,s},E_{n,k,s})\), where
\[V_{n,k,s} =\{v_{n,0},v_{n-1,-k},v_{n-1,-(k-1)},\ldots,v_{n-1,k},v_{n-2,-2k},\ldots,v_{n-2,2k},\ldots,v_{0,-kn},\ldots,v_{0,kn}\}\text{ and }\] \[E_{n,k,s} =\{(v_{t,i},v_{t-1,j}):t=n,\ldots,1,|i-j|\leq k,v_{t,i}\text{ and }v_{t-1,j}\text{ exist}\}\]
if \(s=1\) and
\[V_{n,k,s} =\{v_{n,1},v_{n-1,1},v_{n-1,2},\ldots,v_{n-1,2k+1},v_{n-2,1},\ldots,v_{n-2,\frac{2ks^{2}-2k+s-1}{s-1}},\ldots,v_{0,1},\ldots,v_{0,\frac{2ks^{n}-2k+s-1}{s-1}}\}\text{ and }\] \[E_{n,k,s} =\{(v_{t,i},v_{t-1,j}):t=1,\ldots,n,\,j-s(i-1)\in[1,2k+1],\,v_{t,i}\text{ and }v_{t-1,j}\text{ exist}\}\]
if \(1<s<2k+1\).
When \(s>1\), the crucial condition for the algorithm in Theorem 4.4 for proving noise sensitivity is that each input node has a bounded number of ancestors at any generation (at most two in the case \(k=1\)). This can in fact be generalised to the \(2k+1\) iterated majority with stride \(s\geq 2\). This is shown in Lemma 4.3.
**Lemma 4.3**.: Let \(G_{n,k,s}\) be the corresponding graph to the convolutional iterated \(2k+1\)-majority with stride \(s\geq 2\). Then \(|A_{t,S}|\leq 2k\), for every \(t\) and every \(S\) of \(m\leq 2k\) nodes next to each other.
Proof.: Notice that if \(s>2k\), the corresponding graph \(G_{n,k,s}\) would be such that \(|A_{t,j}|=1\) for each \(t\) and \(j\), making the result trivial. Therefore, assume \(2\leq s\leq 2k\). Now observe that a node \(v_{t,i}\) at layer \(t>0\) has \(2k+1\) children, namely \(\{v_{t-1,s(i-1)+j}:j\in\{1,\ldots,2k+1\}\}\). So, for a node \(v_{t+1,i}\) to be a parent of \(v_{t,j}\) it must be that \(j=s(i-1)+i^{\prime}\) for some \(i^{\prime}\in\{1,\ldots,2k+1\}\). Consequently, the parents of \(v_{t,j}\) are the nodes \(v_{t+1,i}\) with
\[\left\lceil\frac{j-2k-1}{s}\right\rceil+1\leq i\leq\left\lfloor\frac{j-1}{s}\right\rfloor+1\]
such that \(v_{t+1,i}\) exists. Let \(S=\{v_{t,u},v_{t,u+1},\ldots,v_{t,u+m}\}\) where \(u>0\) and \(0\leq m\leq 2k-1\) such that \(v_{t,u}\) and \(v_{t,u+m}\) exist. Then
\[|A_{t+1,S}|=\left\lfloor\frac{u+m-1}{s}\right\rfloor-\left\lceil\frac{u-2k-1} {s}\right\rceil+1\leq\left\lfloor\frac{u+2k-2}{2}\right\rfloor-\left\lceil \frac{u-2k-1}{2}\right\rceil+1=2k\]
The lemma now follows by using this argument recursively over \(t\).
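The bound is also easy to check directly on small instances; the sketch below (illustrative only, with `max_single_node_ancestors` an ad hoc helper) builds the parent relation of \(G_{n,k,s}\) and computes the largest value of \(|A_{t,j}|\) over all input nodes \(j\) and layers \(t\).

```python
def max_single_node_ancestors(n_layers, k, s):
    """Build the parent relation of G_{n,k,s} (filter size 2k+1, stride s)
    and return the largest |A_{t,j}| over all input nodes j and layers t."""
    widths = [1]                       # widths from the root downwards
    for _ in range(n_layers):
        widths.append(s * (widths[-1] - 1) + 2 * k + 1)
    widths.reverse()                   # widths[0] = number of input nodes

    def parents(t, j):                 # parents of v_{t,j} at layer t+1
        lo = max(1, -(-(j - 2 * k - 1) // s) + 1)  # ceil((j-2k-1)/s) + 1
        hi = min(widths[t + 1], (j - 1) // s + 1)  # floor((j-1)/s) + 1
        return range(lo, hi + 1)

    worst = 0
    for j0 in range(1, widths[0] + 1):
        level = {j0}
        for t in range(n_layers):
            level = {i for j in level for i in parents(t, j)}
            worst = max(worst, len(level))
    return worst

for k, s in [(1, 2), (2, 2), (2, 3), (3, 2)]:
    print(k, s, max_single_node_ancestors(4, k, s), "<=", 2 * k)
```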
From Lemma 4.3 it follows that \(|A_{t,j}|\leq 2k\) for all \(t\) and \(j\). With that condition satisfied, the proof of Theorem 4.4 easily generalises. Without supplying further details, we state the following theorem.
**Theorem 4.5**.: _The convolutional iterated \(2k+1\)-majority function with stride \(s\) is noise sensitive if \(s\geq 2\)._
The results for noise stability also go through very easily if the stride is \(1\) and the \(\theta_{t}\) are non-random. This follows by noticing that if there are \(k+1\) sequentially equal input bits with the same sign, no information can be transferred from one side to the other. In the non-cyclic case, input bits "outside" such a sequence have no influence on the final result. So with high probability, the function is determined by the central bits, which with high probability do not have a disagreement between \(\omega\) and \(\omega^{\epsilon}\).
A similar result can be stated for the cyclic case where the proof unfolds in a similar fashion but with a slightly different definition on \(\tau_{j}\). All in all, we can state the following theorem.
**Theorem 4.6**.: _The convolutional iterated \(2k+1\)-majority function with stride \(1\) is noise stable both with and without cyclical convention._
When considering random weights for \(s\geq 2\), noise sensitivity still holds as long as the distribution of \(\theta_{t}\) is such that, with probability bounded away from \(0\), \(\theta_{t}\) represents an ordinary majority. The arguments are identical to those for \(k=1\). Thus, we can state the following corollary.
**Corollary 4.4**.: _If \(\theta_{t}\) is such that the probability of representing a majority is bounded away from zero, then the convolutional iterated \(2k+1\)-majority function with stride \(s\geq 2\) and random weights \(\theta_{t}\) is annealed and hence also quenched noise sensitive._
**Remark.** For stride one, the arguments with random weights do not generalise well when \(k>1\).
## 5 Open problems and research directions
We have taken inspiration from observed non-robustness phenomena for DNN classifiers and have determined when a few common DNN architectures give rise to noise sensitive or noise stable classifiers. Continuing to focus on the non-robustness phenomena, there are numerous further avenues to be explored. First of all, of course, there are many other DNN architectures that can be considered. Can noise sensitivity/stability results be achieved for (some of) them? If so, will that help us to design powerful DNN architectures that are robust to noise?
Besides that, here are a first few questions concerning the fully connected DNN models.
* In this paper, the activation function is the sign function for all layers. Will replacing it with activation functions used in practice, such as the arctan function, the sigmoid function, the ReLU function, etc., cause different properties with respect to noise sensitivity/stability?
* Above, all layers are of equal width. Does it make a difference if the layers are allowed to be of different widths? Does it then matter how unequal the widths are, e.g. of vastly different orders as functions of \(n\)? Here we believe that having varying widths does not matter, no matter how much they vary, and that this would be a fairly easy task to prove.
* In Section 3.2 it is shown that under weak assumptions, if \(1-\rho_{n}\) shrinks as \(\log n/n\) or faster, then the resulting Boolean function will almost certainly be noise stable. On the other hand, if \(1-\rho_{n}\) shrinks as \((\log n)^{3}/n\), then we typically get a noise sensitive sequence of functions. Is there a cutoff between the two cases and if so, where between \(\log n/n\) and \((\log n)^{3}/n\) is it?
* What happens if one has access to data and train the network to fit with that? This could be interpreted in many different ways. For example the setting could be the following. Suppose that data are generated by a particular fully connected DNN of the kind in Section 3.1 and suppose that this particular DNN expresses a noise stable function. Assume further that we get data-points one by one, where each data-point is a uniform random input together with its output. According to Theorem 3.1, the network will almost certainly be noise sensitive at the start of training and in the end, after having seen all possible inputs, the DNN has learnt to express the true noise stable function behind the data. Where along this process does the DNN turn from sensitive to stable? Can it also flip back and forth between noise sensitive and noise stable along the way?
Concerning the convolutional models, there is also more to be done for \(k\geq 2\), i.e. for filters of size at least \(5\). We have already observed that when the filter size is at least \(5\), a filter can express many things besides (anti)majority and (anti)dictator function. For example with five input bits, a filter could express "the first input bit unless all the other bits agree on the opposite". For stride \(s\geq 2\), we showed that if the filter with at least some probability expresses a majority, then the resulting network is noise sensitive. However if regular majority is never expressed, then what?
Also, for \(k\geq 2\) and stride 1, it remains open if the resulting network is noise stable as we believe it is.
The noise sensitivity properties in Sections 3.1 and 3.2 are very strong, and one can even input two strings from any joint distribution over \(\{-1,1\}^{n}\times\{-1,1\}^{n}\) such that, with probability tending to 1 with \(n\), the two strings are neither equal nor completely opposite. With probability \(1/2\), the function described by the resulting DNN will produce different results for the two strings. However, this is in a nonadversarial setting, i.e. the joint distribution of the input strings is independent of the network weights. Suppose an adversary gets information about the network weights and one of the input strings. Can he produce the second string by flipping bits of his own choice in the first string, with no other restriction than that the expected number of flips has to be very small, so that the output of the second string differs from that of the first string? None of the results in this paper are concerned with questions such as these, and it would certainly be very interesting to look at that.
## Acknowledgement
The first and third authors were supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The second author acknowledges the support of the Swedish Research Council, grant no. 2020-03763.
|
2308.02221 | Likelihood-ratio-based confidence intervals for neural networks | This paper introduces a first implementation of a novel
likelihood-ratio-based approach for constructing confidence intervals for
neural networks. Our method, called DeepLR, offers several qualitative
advantages: most notably, the ability to construct asymmetric intervals that
expand in regions with a limited amount of data, and the inherent incorporation
of factors such as the amount of training time, network architecture, and
regularization techniques. While acknowledging that the current implementation
of the method is prohibitively expensive for many deep-learning applications,
the high cost may already be justified in specific fields like medical
predictions or astrophysics, where a reliable uncertainty estimate for a single
prediction is essential. This work highlights the significant potential of a
likelihood-ratio-based uncertainty estimate and establishes a promising avenue
for future research. | Laurens Sluijterman, Eric Cator, Tom Heskes | 2023-08-04T09:34:48Z | http://arxiv.org/abs/2308.02221v1 | # Likelihood-ratio-based confidence intervals
###### Abstract
This paper introduces a first implementation of a novel likelihood-ratio-based approach for constructing confidence intervals for neural networks. Our method, called DeepLR, offers several qualitative advantages: most notably, the ability to construct asymmetric intervals that expand in regions with a limited amount of data, and the inherent incorporation of factors such as the amount of training time, network architecture, and regularization techniques. While acknowledging that the current implementation of the method is prohibitively expensive for many deep-learning applications, the high cost may already be justified in specific fields like medical predictions or astrophysics, where a reliable uncertainty estimate for a single prediction is essential. This work highlights the significant potential of a likelihood-ratio-based uncertainty estimate and establishes a promising avenue for future research.
**Keywords:** Likelihood ratio, Uncertainty estimation, Neural network,
Regression, Classification
## 1 Introduction
Over the past two decades, neural networks have seen an enormous rise in popularity and are currently being used in almost every area of science and
industry. In light of this widespread usage, it has become increasingly clear that trustworthy uncertainty estimates are essential (Gal, 2016).
Many uncertainty estimation methods have been developed using Bayesian techniques (Neal et al., 2011; MacKay, 1992; Gal, 2016), ensembling techniques (Heskes, 1997; Lakshminarayanan et al., 2017), or applications of frequentist techniques such as the delta method (Kallus and McInerney, 2022).
Many of the resulting confidence intervals (for the frequentist methods) and credible regions (for the Bayesian methods) have two common issues. Firstly, most methods result in symmetric intervals around the prediction which can be overly restrictive and can lead to very low coverage in biased regions (Sluijterman et al., 2022). Secondly, most methods rely heavily on asymptotic theorems (such as the central limit theorem or the Bernstein-von-Mises theorem) and can therefore only be trusted in the asymptotic regime where we have many more data points than model parameters, the exact opposite scenario of where we typically find ourselves within machine learning.
#### Contribution
In this paper, we demonstrate how the likelihood-ratio test can be leveraged to combat the two previously mentioned issues. We provide a first implementation of a likelihood-ratio-based approach, called DeepLR, that has the ability to produce asymmetric intervals that are more appropriately justified in the scenarios where we have more parameters than data points. Furthermore, these intervals exhibit desirable behavior in regions far removed from the data, as evidenced in Figure 1.
Figure 1: A comparison of the confidence intervals of our likelihood-ratio approach (DeepLR), an ensembling approach, and MC-Dropout on the two-moon data set. The colorbar represents the width of 95% confidence intervals, where yellow indicates greater uncertainty. The orange and the blue circles indicate the location of the data points of the two different classes. Crucially, DeepLR presents high levels of uncertainty in regions far away from the data, generating confidence intervals of [0.00, 1.00], unlike the other methods that display extreme certainty in those regions.
#### Organisation
This paper is structured into four sections, with this introduction being the first. Section 2 explains our method in detail and also contains the related work section which is simultaneously used to highlight the advantages and disadvantages of our method. Section 3 presents experimental results that illustrate the desirable properties of a likelihood-ratio-based approach. Finally, Section 4 summarizes and discusses the results and outlines possible directions for future work.
## 2 DeepLR: Deep Likelihood-Ratio-based confidence intervals
In this section, we present our method, named DeepLR, for constructing confidence intervals for neural networks using the likelihood-ratio test. We first formalize the problem that we are considering in Subsection 2.1. Subsection 2.2 explains the general idea behind constructing a confidence interval via the likelihood-ratio test. Subsequently, in Subsection 2.3, we outline the high level idea for translating this general procedure to neural networks. The details regarding the distribution and the calculation of the test statistic are provided in Subsections 2.4 and 2.5. Finally, in Subsection 2.6, we compare our method to related work while simultaneously highlighting its strengths and limitations.
### Problem formulation
We consider a data set \(\mathcal{D}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\), consisting of \(n\) independent observations of the random variable pair \((X,Y)\). We consider networks that provide an estimate for the conditional density of \(Y\mid X\). This is achieved by assuming a distribution and having the network output the parameter(s) of that distribution.
Three well-known types of networks that fall in this class are: (1) A regression setting where the network outputs a mean estimate and is trained using a mean-squared error loss. This is equivalent to assuming a normal distribution with homoscedastic noise. (2) Alternatively, the network could output both a mean and a variance estimate and be optimized by minimizing the negative loglikelihood assuming a normal distribution. (3) In a classification setting, the network could output logits that are transformed to class probabilities while assuming a categorical distribution.
The network is parametrized by \(\theta\in\mathbb{R}^{p}\), where \(p\) is typically much larger than \(n\). With \(\Theta\), we denote the set containing all the \(\theta\) that are reachable for a network with a specific training process. This includes choices such as training time, batch size, optimizer, and regularization techniques. With \(p_{\theta}\), we denote the predicted conditional density. Additionally, we assume that the true conditional density is given by \(p_{\theta_{0}}\) for some \(\theta_{0}\) in \(\Theta\). In other words, we assume that our model is well specified.
The objective of our method is to construct a confidence interval for one of the output nodes of the network for a specific input of interest \(X_{0}\). We denote this output of interest with \(f_{\theta_{0}}(X_{0})\) for the remainder of the paper. In the context of a regression setting, this output of interest is the true regression function value at \(X_{0}\) and in a classification setting it is the true class probability for input \(X_{0}\).
We define a \((1-\alpha)\cdot 100\%\) confidence interval for \(f_{\theta_{0}}(X_{0})\) as an interval, \(\mathrm{CI}(f_{\theta_{0}}(X_{0}))\), which is random since it depends on the random realization of the data, such that the probability (taken with respect to the random data set) that \(\mathrm{CI}(f_{\theta_{0}}(X_{0}))\) contains the true value \(f_{\theta_{0}}(X_{0})\) is \((1-\alpha)\cdot 100\%\).
### Confidence interval based on the likelihood ratio
We explain the general idea behind constructing a confidence interval with the likelihood-ratio by working through a well-known example. We consider \(n\) observations \(Y_{i}\) that are assumed to be normally distributed with unknown mean \(\mu\) and unknown variance \(\sigma^{2}\). Our goal is to create a confidence interval for \(\mu\) by using the likelihood-ratio test.
The duality between a confidence interval and hypothesis testing states that we can create a \((1-\alpha)\cdot 100\%\) confidence interval for \(\mu\) by including all the values \(c\) for which the hypothesis \(\mu=c\) cannot be rejected at a \((1-\alpha)\cdot 100\%\) confidence level. We must therefore test for what values \(c\) we can accept the hypothesis \(\mu=c\).
The general approach to test a hypothesis is to create a test statistic of which we know the distribution under the null hypothesis and to reject this hypothesis if the probability of finding the observed test statistic or an extremer value is smaller than \(\alpha\).
As our test statistic, we take two times the log of the likelihood ratio:
\[T(c):=2\left(\sup_{\Theta}\left(\sum_{i=1}^{n}\log(L(Y_{i};\theta))\right)- \sup_{\Theta_{0}}\left(\sum_{i=1}^{n}\log(L(Y_{i};\theta))\right)\right),\]
where \(L\) denotes the likelihood function \(\theta\mapsto L(Y_{i};\theta)\), \(\Theta\) is the full parameter space and \(\Theta_{0}\) the restricted parameter space. In our example, we have
\[\Theta=\{(\mu,\sigma^{2})\mid\mu\in\mathbb{R},\sigma^{2}\in\mathbb{R}_{>0}\},\]
and
\[\Theta_{0}=\{(c,\sigma^{2})\mid\sigma^{2}\in\mathbb{R}_{>0}\}.\]
Wilks (1938) proved that \(T(c)\) weakly converges to a \(\chi^{2}(1)\) distribution under the null hypothesis that \(\mu=c\). We therefore reject if \(T(c)>\chi^{2}_{1-\alpha}(1)\) and our confidence interval for \(\mu\) becomes the set \(\{c\mid T(c)\leq\chi^{2}_{1-\alpha}(1)\}\), where \(\chi^{2}_{1-\alpha}(1)\) is the \((1-\alpha)\)-quantile of a \(\chi^{2}(1)\) distribution.
In our example, this results in the well-known interval
\[\bar{Y}\pm z_{1-\alpha/2}\sqrt{\frac{1}{n}\frac{1}{n-1}\sum_{i=1}^{n}(Y_{i}-\bar{Y} )^{2}},\]
where \(z_{1-\alpha/2}\) is the \((1-\alpha/2)\)-quantile of a standard-normal distribution.
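For concreteness, the following is a minimal NumPy/SciPy sketch of this classical example (our illustration, not from the paper). For the normal model, plugging the MLEs into the loglikelihood simplifies the test statistic to \(T(c)=n\log(\hat{\sigma}_{0}^{2}/\hat{\sigma}^{2})\), where \(\hat{\sigma}_{0}^{2}\) is the variance MLE under \(\mu=c\); the interval is then obtained by scanning a grid of candidate values.

```python
import numpy as np
from scipy.stats import chi2

def lr_stat(y, c):
    """T(c) for H0: mu = c in the normal model with unknown variance."""
    s2_full = np.mean((y - y.mean()) ** 2)  # variance MLE, full model
    s2_null = np.mean((y - c) ** 2)         # variance MLE under mu = c
    return len(y) * np.log(s2_null / s2_full)

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=50)

# Keep every candidate c for which H0: mu = c is not rejected at level 0.05.
grid = np.linspace(y.mean() - 3.0, y.mean() + 3.0, 2001)
accepted = [c for c in grid if lr_stat(y, c) <= chi2.ppf(0.95, df=1)]
print(f"95% LR confidence interval for mu: [{min(accepted):.3f}, {max(accepted):.3f}]")
```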
### High-level idea of DeepLR
Our goal is to apply the likelihood-ratio testing procedure outlined in the previous subsection to construct a confidence interval for \(f_{\theta_{0}}(X_{0})\), the value of one of the output nodes given input \(X_{0}\). We create this confidence interval by including all the values \(c\) for which the hypothesis \(f_{\theta_{0}}(X_{0})=c\) cannot be rejected. The testing of the hypothesis is done with the likelihood-ratio test. Specifically, we use two times the log likelihood ratio as our test statistic:
\[T(c) :=2\bigg{(}\sup_{\Theta}\big{(}\sum_{i=1}^{n}\log(L(X_{i},Y_{i}; \theta))\big{)}-\sup_{\Theta_{0}(c)}\big{(}\sum_{i=1}^{n}\log(L(X_{i},Y_{i}; \theta))\big{)}\bigg{)}\] \[=2\bigg{(}\sup_{\Theta}\big{(}\sum_{i=1}^{n}\log(p_{\theta}(Y_{i} \mid X_{i}))\big{)}-\sup_{\Theta_{0}(c)}\big{(}\sum_{i=1}^{n}\log(p_{\theta}(Y _{i}\mid X_{i}))\big{)}\bigg{)}, \tag{1}\]
and we construct a confidence interval for \(f_{\theta_{0}}(X_{0})\) by including all values \(c\) for which the test statistic is not larger than \(\chi^{2}_{1-\alpha}(1)\):
\[\text{CI}(f_{\theta_{0}}(X_{0}))=\{c\mid T(c)\leq\chi^{2}_{1-\alpha}(1)\}. \tag{2}\]
Here, we consider \(\Theta\subset\mathbb{R}^{p}\) to be the set containing all reachable parameters, and \(\Theta_{0}(c)=\{\theta\in\Theta\mid f_{\theta}(X_{0})=c\}\). The set \(\Theta\) is explicitly not equal to all parameter combinations. Due to explicit (e.g., early stopping) and implicit (e.g., lazy training (Chizat et al., 2019)) regularization, not all parameter combinations can be reached. The set \(\Theta\) should therefore be seen as the set containing the parameters of all neural networks that can be found given the optimizer, training time, regularization techniques, and network architecture.
Intuitively, this approach answers the question: _What values could the network have reached at location \(X_{0}\) while still explaining the data well?_ This is a sensible question for a highly flexible and typically overparameterized machine-learning approach. After training, the model ends up with a certain prediction at location \(X_{0}\). However, since the model is typically very complex, it is likely that the model could just as well have made other predictions at that location while still explaining the data well. Therefore, all those other function values should also be considered as possibilities. Inherently, all modeling choices are taken into account by asking this question. A more flexible model, for instance, is likely able to reach more values without affecting the likelihood of the training data, leading to a larger confidence interval.
The construction of the confidence interval in Equation (2) assumes that the test statistic, \(T(c)\), has a \(\chi^{2}(1)\) distribution. We discuss this assumption in
the following subsection. The subsection thereafter describes how to calculate the test statistic.
### Distribution of the test statistic
In the classical setting, Wilks (1938) proved that the likelihood-ratio test statistic asymptotically has a \(\chi^{2}(1)\) distribution when the submodel has one degree of freedom less than the full model. We are, however, not in this classical regime. We have many more parameters than data points and therefore need a similar result for this setting.
It has been shown that the likelihood-ratio test statistic converges to a \(\chi^{2}\) distribution for a wide range of settings, which is referred to as the Wilks-phenomenon by later authors (Fan et al., 2001; Boucheron and Massart, 2011). For a semi-parametric model, which more closely resembles our situation, it has been shown that the test statistic also converges in distribution to a \(\chi^{2}\) distribution under appropriate regularity conditions (Murphy and van der Vaart, 1997).
We prove a similar result for our setting in the appendix. The theorem states that, under suitable assumptions, our test statistic has a \(\chi^{2}(1)\) distribution. Intuitively, this results from the fact that we added a single constraint, namely that \(f_{\theta}(X_{0})=c\). We emphasize that even if the test statistic does not exactly follow a \(\chi^{2}(1)\) distribution, the qualitative characteristics of the confidence intervals will remain evident, albeit with inaccurate coverage levels.
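As an illustrative sanity check (our code, not part of the paper), the Wilks-type behavior can be simulated in the classical normal example of Section 2.2: under the null hypothesis the statistic should be approximately \(\chi^{2}(1)\)-distributed for moderate \(n\).

```python
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(1)

def lr_stat(y, c):
    # Simplified statistic for the normal-mean example: n * log(s2_null / s2_full).
    return len(y) * np.log(np.mean((y - c) ** 2) / np.mean((y - y.mean()) ** 2))

# 5000 replications under H0: mu = 0.
stats = [lr_stat(rng.normal(size=100), 0.0) for _ in range(5000)]
print(kstest(stats, chi2(df=1).cdf))  # a large p-value is consistent with chi^2(1)
```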
### Calculating the test statistic
Calculating the two terms in equation (1) presents certain challenges. The first term, \(\sup_{\Theta}\big{(}\sum_{i=1}^{n}\log(p_{\theta}(Y_{i}\mid X_{i}))\big{)}\), is relatively straightforward. We train a network that maximizes the likelihood, which gives the conditional densities \(p_{\hat{\theta}}(Y_{i}\mid X_{i})\).
The second term is substantially more complex. Ideally, we would optimize over the set \(\Theta_{0}(c)=\{\theta\in\Theta\mid f_{\theta}(X_{0})=c\}\). This is problematic for two reasons. Firstly, it is unclear how we can add this constraint to the network, and secondly, this would necessitate training our network for every distinct value \(c\) that we wish to test. Even when employing an efficient bisection algorithm, this could easily result in needing to train upwards of 10 additional networks.
We address this problem as follows. We first create a network that is perturbed in the direction of a relatively large value (\(c_{\max}\)) at \(X_{0}\) and a network that is perturbed in the direction of a relatively small value (\(c_{\min}\)) at \(X_{0}\), while maximizing the likelihood of the data. We denote the resulting network parameters with \(\hat{\theta}_{\pm}\):
\[\hat{\theta}_{+}=\underset{\begin{subarray}{c}\theta\in\Theta,\\ f_{\theta}(X_{0})\approx c_{\max}\end{subarray}}{\arg\max}L(\mathcal{D};\theta),\quad\text{and}\quad\hat{\theta}_{-}=\underset{\begin{subarray}{c}\theta\in\Theta,\\ f_{\theta}(X_{0})\approx c_{\min}\end{subarray}}{\arg\max}L(\mathcal{D};\theta).\]
Subsequently, the network that maximizes the likelihood under the constraint \(f_{\theta}(X_{0})=c\) is approximated using a linear combination. Specifically, suppose we want the network that maximizes the likelihood of the training data while passing through \(c\) at \(X_{0}\). In the case that \(c>f_{\hat{\theta}}(X_{0})\), we approximate this network by taking a linear combination of the outputs such that
\[(1-\lambda)f_{\hat{\theta}}(X_{0})+\lambda f_{\hat{\theta}_{+}}(X_{0})=c,\]
and we define \(p_{c}\) as the density that we get by using the same linear combinations for the distributional parameters that are predicted by the networks parametrized by \(\hat{\theta}\) and \(\hat{\theta}_{+}\). We then approximate the second term in equation (1) as follows:
\[\sup_{\Theta_{0}(c)}\big{(}\sum_{i=1}^{n}\log(p_{\theta}(Y_{i}\mid X_{i})) \big{)}\approx\sum_{i=1}^{n}\log(p_{c}(Y_{i}\mid X_{i})). \tag{3}\]
This procedure is visualized in Figure 2 for a regression setting. Details on the second step, finding the perturbed networks, are provided below both for a regression and binary-classification setting.
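Before turning to those details, here is a minimal sketch of the linear-combination approximation in Equation (3) for a Gaussian mean-variance network (our illustration; the function names are ours, and mixing the standard deviations linearly is one possible choice).

```python
import numpy as np
from scipy.stats import norm

def mixing_weight(c, f0, f_plus):
    # Solve (1 - lam) * f0 + lam * f_plus = c for lam.
    return (c - f0) / (f_plus - f0)

def loglik_pc(lam, mu_hat, sigma_hat, mu_plus, sigma_plus, y):
    """Approximate the restricted supremum in Eq. (3) for regression.

    mu_hat / sigma_hat: predictions of the original network on the training
    inputs; mu_plus / sigma_plus: those of the positively perturbed network.
    The same lambda that matches the prediction at X_0 to c is used to mix
    the distributional parameters at every training input."""
    mu_c = (1.0 - lam) * mu_hat + lam * mu_plus
    sigma_c = (1.0 - lam) * sigma_hat + lam * sigma_plus
    return np.sum(norm.logpdf(y, loc=mu_c, scale=sigma_c))
```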
#### Regression
Suppose we completed step 1 and we have a network that maximizes the data's likelihood. For step 2, our objective is to adjust this network such that it passes through a relatively large value at \(X_{0}\) while continuing to maximize the data's likelihood. Moreover, we aim to achieve this in as stable a manner as possible, given that minor differences can significantly influence the test statistic, particularly for large data sets.
We accomplish this by copying the original network and training it on the objective to maintain the original predictions - as those maximized the likelihood - while predicting \(f_{\hat{\theta}}(X_{0})+1\) for input \(X_{0}\). For the perturbation in the negative direction we use \(-1\). These values \(\pm 1\) are chosen assuming that the targets are normalized prior to training.
We obtain this training objective by using a modified training set, \(\tilde{\mathcal{D}}\), that is constructed by replacing the targets \(Y_{i}\) in the original training set with the predictions \(f_{\hat{\theta}}(X_{i})\) of the original network and adding the data point \((X_{0},f_{\hat{\theta}}(X_{0})+1)\).
The resulting problem is very imbalanced: we want the network to change the prediction at location \(X_{0}\), which is only present in the data once. This makes the training very unstable since, especially when training for a small number of epochs, it matters greatly which specific batch contains the new point. To remedy this, we use a combination of upsampling and downweighting of the new data point \((X_{0},f_{\hat{\theta}}(X_{0})+1)\). Merely upsampling the new data point - i.e., adding it multiple times - is undesirable as this can introduce significant biases (Van den Goorbergh et al., 2022). Hence, we add many copies of \((X_{0},f_{\hat{\theta}}(X_{0})+1)\) but reweigh the loss contributions of these added data points such that their total contribution to the loss is equivalent to that of a single data point.
We propose to add \(2n/\text{batch size}\) extra data points such that each batch is expected to have 2 new data points. The same training procedure is used as for the original network. We found slightly larger or smaller numbers of added data points to perform very similarly. This setting worked for a wide variety of data sets and architectures.

Figure 2: Illustration of the steps of our method for the positive direction in a regression setting. Steps 2, 3, and 4 are also carried out in the negative direction.
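A sketch of this construction for the regression case is given below (our code; the function and variable names are illustrative). The added copies each receive weight \(1/n_{\text{extra}}\) so that they jointly contribute to the loss like a single data point.

```python
import numpy as np

def augmented_dataset(X, y_pred, x0, c_target, batch_size=32):
    """Build the modified training set: the original inputs with the
    network's own predictions as targets, plus n_extra downweighted
    copies of (x0, c_target), so each batch is expected to hold ~2 copies."""
    n = len(X)
    n_extra = int(np.ceil(2 * n / batch_size))
    X_aug = np.concatenate([X, np.repeat(x0[None, :], n_extra, axis=0)])
    y_aug = np.concatenate([y_pred, np.full(n_extra, c_target)])
    # Reweigh so the copies together count as one ordinary data point.
    w_aug = np.concatenate([np.ones(n), np.full(n_extra, 1.0 / n_extra)])
    return X_aug, y_aug, w_aug
```

In Keras, for instance, these weights can be passed via `model.fit(X_aug, y_aug, sample_weight=w_aug, ...)`.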
#### Binary classification
Consider a data set where the targets are either \(1\) or \(0\). Our network outputs logits, denoted with \(f_{\hat{\theta}}(X)\), that are transformed to probabilities via a sigmoid function. The procedure is nearly identical for this binary classification setting: we create a positively perturbed network, parametrized by \(\hat{\theta}_{+}\), and a negatively perturbed network, parametrized by \(\hat{\theta}_{-}\).
The only difference is in the construction of the augmented data sets. We again replace the targets \(Y_{i}\) by the predictions of the original network, \(f_{\hat{\theta}}(X_{i})\), but now we add multiple copies of the data point \((X_{0},1)\) for the positive direction, and multiple copies of the data point \((X_{0},0)\) for the negative direction.
The entire method is summarized in Algorithm 1. In summary, we want to test what values the network can reach while still explaining the data well. We do this by perturbing the network in a positive direction and a negative direction and subsequently testing which linear combinations would still explain the data reasonably well, i.e., linear combinations with a test statistic smaller than \(\chi^{2}_{1-\alpha}(1)\).
### Related work
In this subsection, we place our work in context of existing work while simultaneously highlighting the strengths and limitations of our approach. Our aim is not to give a complete overview of all uncertainty quantification methods, for which we refer to the various reviews and surveys on the subject (Abdar et al., 2021; Gawlikowski et al., 2022; He and Jiang, 2023). Instead, we discuss several broad groups in which most methods can be categorized: Ensembling methods, Bayesian methods, frequentist methods, and distance-aware methods.
_Ensembling methods_ train multiple models and use the variance of the predictions as an estimate for model uncertainty (Heskes, 1997; Lakshminarayanan et al., 2017; Zhang et al., 2017; Wenzel et al., 2020; Jain et al., 2020; Dwaracherla et al., 2022). While being extremely easy to implement, they can be computationally expensive due to the need to train multiple networks. Moreover, the resulting confidence intervals can behave poorly in regions with a limited amount of data, where the predictor is likely biased. Additionally, ensemble members may interpolate in a very similar manner, potentially leading to unreasonably narrow confidence intervals.
_Bayesian approaches_ place a prior distribution on the model parameters and aim to simulate from the resulting posterior distribution given the observed data (MacKay, 1992; Neal, 2012; Hernandez-Lobato and Adams, 2015). Since this posterior is generally intractable, it is often approximated, with MC-Dropout being a notable example (Gal and Ghahramani, 2016; Gal et al., 2017).
```
0:\(\mathcal{D}=\{(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\}\), \(\hat{\theta},X_{0}\), \(\alpha\);
1:\(n_{\text{extra}}=2n/\text{(batch size)}\);
2: For binary classification \(c_{\text{max}}=1\) and \(c_{\text{min}}=0\), and for regression \(c_{\text{max}}=f_{\hat{\theta}}(X_{0})+\delta\) and \(c_{\text{min}}=f_{\hat{\theta}}(X_{0})-\delta\);
3:\(\tilde{\mathcal{D}}_{+}=\{(X_{1},f_{\hat{\theta}}(X_{1})),\ldots,(X_{n},f_{ \hat{\theta}}(X_{n})),\overbrace{(X_{0},c_{\text{max}}),\ldots,(X_{0},c_{ \text{max}})}^{n_{\text{extra}}\text{times}}\}\);
4:\(\tilde{\mathcal{D}}_{-}=\{(X_{1},f_{\hat{\theta}}(X_{1})),\ldots,(X_{n},f_{ \hat{\theta}}(X_{n})),\overbrace{(X_{0},c_{\text{min}}),\ldots,(X_{0},c_{ \text{min}})}^{n_{\text{extra}}\text{times}}\}\);
5: Make two copies of network parametrized by \(\hat{\theta}\) and train them on \(\tilde{\mathcal{D}}_{+}\) and \(\tilde{\mathcal{D}}_{-}\) using the original training procedure. During the training, the loss contribution of the added data points is divided by \(n_{\text{extra}}\). Denote the parameters of the resulting networks with \(\hat{\theta}_{+}\) and \(\hat{\theta}_{-}\);
6:for\(c\in\mathbb{R}\)do\(\triangleright\) Since testing all possible \(c\) is impossible, we propose to use some variation of a bisection search algorithm.
7:if\(c>f_{\hat{\theta}}(X_{0})\)then
8: Pick \(\lambda\) such that \((1-\lambda)f_{\hat{\theta}}(X_{0})+\lambda f_{\hat{\theta}_{+}}(X_{0})=c\) with corresponding density \(p_{c}\); \(\triangleright\) The density \(p_{c}\) is obtained by taking the same linear combination of the predicted distribution parameters outputted by the networks parametrized by \(\hat{\theta}\) and \(\hat{\theta}_{+}\).
9:endif
10:if\(c<f_{\hat{\theta}}(X_{0})\)then
11: Pick \(\lambda\) such that \((1-\lambda)f_{\hat{\theta}}(X_{0})+\lambda f_{\hat{\theta}_{-}}(X_{0})=c\) with corresponding density \(p_{c}\);
12:endif
13:if\(2\bigg{(}\sum_{i=1}^{n}\log(p_{\hat{\theta}}(Y_{i}\mid X_{i}))-\sum_{i=1}^{ n}\log(p_{c}(Y_{i}\mid X_{i}))\bigg{)}\leq\chi_{1-\alpha}^{2}(1)\)then
14: Include \(c\) in \(\text{CI}(f_{\theta_{0}}(X_{0}))\);
15:endif
16:endfor
17:return\(\text{CI}(f_{\theta_{0}}(X_{0}))\)
```
**Algorithm 1** Pseudocode for the construction of \(\text{CI}(f_{\theta_{0}}(X_{0}))\) using DeepLR
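Line 6 of the pseudocode suggests some variation of a bisection search over \(c\). One possible realization per endpoint is sketched below (our code, under the assumption that \(T(c)\) increases monotonically as \(c\) moves away from \(f_{\hat{\theta}}(X_{0})\), which need not hold in general).

```python
from scipy.stats import chi2

def upper_endpoint(test_stat, f0, c_max, alpha=0.05, tol=1e-4):
    """Bisection for the upper CI endpoint.

    test_stat(c) evaluates T(c) via the linear-combination approximation;
    f0 is the original prediction at X_0 (accepted by construction) and
    c_max the positively perturbed prediction, assumed to be rejected."""
    threshold = chi2.ppf(1.0 - alpha, df=1)
    lo, hi = f0, c_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if test_stat(mid) <= threshold:
            lo = mid  # mid is still accepted: the endpoint lies above mid
        else:
            hi = mid  # mid is rejected: the endpoint lies below mid
    return lo
```

The lower endpoint is found analogously by bisecting between \(c_{\min}\) and \(f_{\hat{\theta}}(X_{0})\).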
A downside is that these methods can be challenging to train, potentially resulting in a lower accuracy. Our proposed approach does not change the optimization procedure and therefore incurs no accuracy loss. Another downside is that, while Bayesian credible sets asymptotically become confidence sets by the Bernstein-von Mises theorem (see Van der Vaart (2000, Chapter 10)), this theorem generally does not apply for a neural network, where the dimension of the parameter space typically exceeds the number of data points. Moreover, the prior distribution is often chosen out of computational convenience instead of being motivated by domain knowledge.
_Distance-aware methods_ have a more pragmatic nature. They use the dissimilarity of a new input compared to the training data as a metric for model uncertainty (Lee et al., 2018; Van Amersfoort et al., 2020; Ren et al., 2021).
As we will see in the next section, our method also exhibits this distance-aware property, albeit for a different reason: The further away from the training data, the easier it becomes for the network to change the predictions without negatively affecting the likelihood of the training data.
_Frequentist methods_ use classical parametric statistics to obtain model uncertainty estimates. The typical approach - more elaborately explained in textbooks on parametric statistics, e.g. Seber and Wild (2003) - involves obtaining (an estimate of) the variance of the model parameters using asymptotic theory and then converting this variance to the variance of the model predictions using the delta method. This approach has been used by various authors to create confidence intervals for neural networks (Kallus and McInerney, 2022; Nilsen et al., 2022; Deng et al., 2023; Khosravi et al., 2011).
Confidence intervals of this type are often referred to as Wald-type intervals. These intervals are necessarily symmetric. Various authors have noted that, for classical models, Wald-type intervals often behave worse than likelihood-ratio type intervals in the low-data regime (Hall and La Scala, 1990; Andersen et al., 2012; Murphy, 1995; Murphy and van der Vaart, 1997). Specifically, when the loglikelihood cannot be effectively approximated with a quadratic function, Wald-type intervals may behave very poorly (Pawitan, 2001, Chapter 2). The significant advantage of Wald-type intervals in the classical setting is the easier computation. However, while only a single model needs to be fitted, the necessary inversion of a high-dimensional \(p\times p\) matrix and the quadratic approximation of the likelihood strongly rely on being in the asymptotic regime.
Conversely, the construction of the DeepLR confidence interval does not rely on a quadratic approximation of the likelihood, which is in general only valid asymptotically. While we still utilize asymptotic theory to determine the distribution of our test statistic (see our proof of Theorem 1 in Appendix A), we do not impose the extremely strong requirement that the second derivative of the loglikelihood converges and is invertible. We only require this second derivative to behave nicely in a single direction, which is a much weaker requirement.
Another benefit of likelihood-ratio-based confidence intervals is that they are transformation invariant (Pawitan, 2000). In other words, a different parametrization of the distribution does not alter the resulting confidence intervals. This is not the case for most Wald-type intervals or Bayesian approaches.
The main limitation of DeepLR in its current form is the computational cost. We compare the computational costs of the various approaches in Table 1. Our method comes at no extra training cost but requires two additional networks to be trained for every confidence interval. Ensembling methods also need to train multiple networks, typically from five to ten, but these networks can be reused for different confidence intervals. Frequentist methods typically come at no additional training cost but require the inversion of a \(p\times p\) matrix
to construct the confidence interval. The cost of Bayesian methods varies drastically from method to method. The training process can be substantially more involved and the construction of a credible set requires multiple samples from the (approximate) posterior.
In summary, a likelihood-ratio-based method is distance aware, transformation invariant, has no accuracy loss, and is capable of creating asymmetric confidence intervals. However, it comes with the downside of being computationally expensive. Producing a confidence interval for a single input requires the training of two additional networks.
## 3 Experimental results
In this section, we present the results of various experiments that showcase the desirable properties of DeepLR, such as its distance-aware nature and capability to create asymmetric confidence intervals. The high computational cost of our method prohibits any large-scale experiments. Nevertheless, the following experiments demonstrate the effectiveness of a likelihood-ratio-based approach.
All code used in the following experiments can be found at [https://github.com/LaurensSluyterman/Likelihood_ratio_intervals](https://github.com/LaurensSluyterman/Likelihood_ratio_intervals)
### Toy examples
To start, we present two one-dimensional toy examples that effectively illustrate the behavior of our method. Specifically, these examples illustrate the capability of our method to produce asymmetric confidence intervals that expand in regions with a limited amount of data points, both during interpolation and extrapolation.
| Approach | Additional training cost | Additional inference cost |
| --- | --- | --- |
| DeepLR | No additional costs | Two additional networks per new input and multiple additional forward passes. |
| Ensemble | Training of multiple networks, typically five to ten. | A forward pass through each of the ensemble members. |
| Frequentist | None | Inversion of a \(p\times p\) matrix. |
| Bayesian | Varies from method to method. | Typically, a large number of forward passes have to be made for every new input. |

Table 1: A comparison of the computational costs of different types of methods that create confidence intervals or credible regions.

#### Regression
The data set consists of 80 realisations of the random variable pair \((X,Y)\). Half of the \(x\)-values are sampled uniformly from the interval \([-1,-0.2]\), while the remaining half are sampled uniformly from the interval \([0.2,1]\). The \(y\)-values are subsequently sampled using
\[Y\mid X=x\sim\mathcal{N}\left(2x^{2},0.1^{2}\right).\]
On this training set, we train a mean-variance estimation network (MVE) (Nix and Weigend, 1994). This particular type of network provides both mean and variance estimates and is trained by minimizing the negative loglikelihood under the assumption of a normal distribution. The network is trained for 400 epochs, using a default Adam optimizer (Kingma and Ba, 2014) and a batch size of 32. The network consists of 3 hidden layers for the mean estimation network, with 40, 30, and 20 hidden units respectively, and 2 hidden layers for the variance estimation network, with 5 and 2 hidden units respectively. All layers have elu activation functions (Clevert et al., 2015), with \(l_{2}\)-regularization applied in each dense layer with a constant value of 1e-4.
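A Keras sketch of such an MVE network is given below (our reconstruction from the description above; the log-variance parametrization and the two-column output layout are our assumptions, not necessarily the authors' implementation).

```python
import tensorflow as tf

def make_mve_network(l2=1e-4):
    """Mean sub-network (40, 30, 20 units) and variance sub-network
    (5, 2 units) with elu activations, as described in the text."""
    reg = tf.keras.regularizers.l2(l2)
    inp = tf.keras.Input(shape=(1,))
    h = inp
    for units in (40, 30, 20):
        h = tf.keras.layers.Dense(units, activation="elu", kernel_regularizer=reg)(h)
    mu = tf.keras.layers.Dense(1, kernel_regularizer=reg)(h)
    g = inp
    for units in (5, 2):
        g = tf.keras.layers.Dense(units, activation="elu", kernel_regularizer=reg)(g)
    log_var = tf.keras.layers.Dense(1, kernel_regularizer=reg)(g)
    return tf.keras.Model(inp, tf.keras.layers.Concatenate()([mu, log_var]))

def gaussian_nll(y_true, y_pred):
    # Negative loglikelihood of N(mu, exp(log_var)), up to an additive constant.
    y_true = tf.reshape(y_true, (-1, 1))
    mu, log_var = y_pred[:, :1], y_pred[:, 1:]
    return 0.5 * (log_var + tf.square(y_true - mu) / tf.exp(log_var))

model = make_mve_network()
model.compile(optimizer="adam", loss=gaussian_nll)
```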
Figure 3 gives the 95% confidence intervals for the mean predictions. Notably, in the biased region around 0, the intervals become highly asymmetric. In contrast, most other methods produce symmetric intervals around the prediction of the original network. In a region with a bias, this can easily lead to intervals with very poor coverage. This is exemplified by the biased symmetric intervals generated by an ensemble consisting of 10 ensemble members (we used the ensembling strategy employed by Lakshminarayanan et al. (2017)).
Figure 3: This figure illustrates DeepLR for a regression problem. The blue dots indicate the locations of the training points, the dotted orange line represents the true function, and the solid green line represents the predicted regression function of the network. The shaded blue region gives the 95% CI of the regression function. DeepLR exhibits two desirable properties when compared to an ensemble approach. Firstly, the intervals expand in regions where data is sparse. Secondly, the intervals can be asymmetric, allowing for the compensation of potential bias.
#### Binary classification
The data set consists of 60 realisations of the random variable pair \((X,Y)\), where half of the \(x\)-values are sampled uniformly from the interval \([0,0.2]\) and the other half are sampled uniformly from the interval \([0.8,1]\). The \(y\)-values are subsequently simulated using
\[Y\mid X=x\sim\text{Ber}(p(x)),\quad\text{with }p(x)=0.5+0.4\cos(6x).\]
On this training set, we train a fully connected network with three hidden layers consisting of 30 hidden units with elu activation functions. The final layer outputs a logit that is transformed using a sigmoid to yield a class probability. The network is trained for 300 epochs using a binary cross-entropy loss function and the Adam optimizer with a batch size of 32.
The resulting 95% confidence intervals of the predicted probability of class 1 are given in Figure 4. We carried out the experiment with two amounts of regularization to illustrate how this affects the result. For comparison, we also implemented an ensembling approach and MC-Dropout (see Lakshminarayanan et al. (2017) and Gal and Ghahramani (2016) for details). We used ten ensemble members and a standard dropout rate of 0.2. All networks were trained using the same training procedure.
We observe the same desirable properties as in the regression example. The intervals get much larger in regions with a limited amount of data, also when interpolating, and can become asymmetric. Additionally, the intervals get smaller when we increase the amount of regularization. The model class becomes smaller (fewer parameters can be reached), making it more difficult for the model to change the predictions without affecting the likelihood. This, in turn, leads to smaller confidence intervals. If the model is overly regularized, it will become misspecified (\(\theta_{0}\notin\Theta\)) and the resulting confidence intervals will not be correct.
The other approaches do not share the same qualitative properties. The ensembling approach results in confidence intervals that are far too narrow. All ensemble members behave more or less the same, especially when interpolating. The MC-Dropout credible regions do not expand in the regions between the data and only moderately expand when extrapolating.
### Two-moon example
The data set consists of 80 data points, generated using the make_moons function from the scikit-learn package, which creates a binary classification problem with two interleaving half circles (Pedregosa et al., 2011).
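The data can be generated as follows (the noise level and random seed are not stated in the paper; the values below are illustrative choices).

```python
from sklearn.datasets import make_moons

# 80 points on two interleaving half circles; noise=0.1 is an assumed value.
X, y = make_moons(n_samples=80, noise=0.1, random_state=0)
```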
We utilize the same network architecture as in the toy classification example. The network is trained for 500 epochs using the Adam optimizer with default learning rate and batch size of 32, while also applying \(l_{2}\)-regularization in each layer with a constant value of 1e-3.
Figure 1 presents the 95% confidence intervals for the predicted class probabilities. The results illustrate that DeepLR becomes extremely uncertain in regions farther from the data (i.e., the confidence interval for the class probability spans the full range of \([0,1]\)).
In contrast, both an ensembling approach and MC-Dropout report excessively high certainty in the upper left and lower right regions. The ensemble's behavior can be attributed to all ensemble members extrapolating in the same direction, causing them to report more or less the same class probability. For MC-Dropout, a saturated sigmoid causes the narrow credible intervals.

Figure 4: This figure illustrates DeepLR for a binary classification problem. The blue dots represent the training data, the dotted orange line represents the true probability of class 1, and the solid green line represents the predicted probability of class 1. The shaded blue region provides the 95% CI for the predicted probability of class 1. The top two figures (a) and (b) demonstrate the behavior of DeepLR for varying amounts of regularization. The more regularization, the smaller the class of admissible functions becomes, which naturally results in smaller intervals. Additionally, the top left figure demonstrates that the intervals expand when interpolating, a feature not shared by the dropout and ensembling approaches.
This comparison underscores the unique capability of DeepLR to provide more accurate uncertainty estimates in regions less well represented by data - a crucial capability in practical applications.
### MNIST binary example
For a more difficult task, we train a small convolutional network on the first two classes of the MNIST data set, consisting of handwritten digits. In this binary classification task, the 0's are labeled as class 0 and the 1's as class 1.
The CNN architecture consists of two pairs of convolutional layers (with 28 filters and 3x3 kernels) and max-pooling layers (2x2 kernel), followed by a densely connected network with two hidden layers with 30 hidden units each and elu activation functions.
The network is trained for 10 epochs, using the SGD optimizer with a batch size of 32, default learning rate, \(l_{2}\)-regularization with a constant value of 1e-5, and a binary cross-entropy loss function. The amounts of training time and regularization were determined from a manual grid search using an 80/20 split of the training data. For the actual experiment, the entire training set was utilized.
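A Keras sketch of this CNN is given below (reconstructed from the description above; the activations of the convolutional layers and other unstated details are assumptions).

```python
import tensorflow as tf

def make_binary_cnn(l2=1e-5):
    reg = tf.keras.regularizers.l2(l2)
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(28, 3, activation="elu", kernel_regularizer=reg),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(28, 3, activation="elu", kernel_regularizer=reg),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(30, activation="elu", kernel_regularizer=reg),
        tf.keras.layers.Dense(30, activation="elu", kernel_regularizer=reg),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(class 1)
    ])

model = make_binary_cnn()
model.compile(optimizer="sgd", loss="binary_crossentropy")
```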
Figure 5 presents 95% CIs for a number of different training points, test points, and OoD points. As shown, the OoD points have wider confidence intervals than the training and test points, reflecting greater uncertainty.
In an additional experiment, we rotated one of the test points and created confidence intervals for the rotated images. As Figure 6 illustrates, increasing the rotation angle results in larger confidence intervals. This behavior is also seen with ensembling and MC-Dropout, but to a significantly lesser extent.
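The rotated inputs can be produced, for example, with scipy (our choice of implementation; the paper does not specify one, and `image` denotes a single 28x28 test image).

```python
from scipy.ndimage import rotate

# Rotate the image around its center, keep the 28x28 shape, pad with black.
rotated = rotate(image, angle=60, reshape=False, mode="constant", cval=0.0)
```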
These larger intervals can be explained as follows. Our approach essentially asks the intuitive question of how much the network can change the prediction for this new input without overly affecting the likelihood of the training data. A heavily rotated image of a number 1 deviates greatly from the typical input, utilizing pixels that are almost never used by the training inputs. It is therefore relatively straightforward for the network to change the prediction of this rotated input without dramatically lowering the likelihood of the training data. This, in turn, results in very large confidence intervals, accurately indicating that it is a very unfamiliar input.
Figure 5: 95% CIs for different training points (first column), test points (second column), and OoD points (third column). Each subcaption displays the 95% CI for the probability of class 1 made with DeepLR (LR), MC-Dropout (DR), and ensembling (EN). For in-distribution points (zeros and ones), all three methods provide extremely narrow intervals. For the OoD points, our method provides much larger CIs, a property also present in the other methods, albeit to a lesser extent.
Figure 6: This figure provides 95% CIs for different amounts of rotation. For rotations of 0 and 30 degrees, all methods produce very narrow confidence intervals, implying high certainty. At 60 degrees of rotation, DeepLR outputs high uncertainty whereas the ensembling approach and MC-dropout remain fairly certain. At 90 degrees of rotation, DeepLR outputs very high uncertainty, a confidence interval of [0.05, 1.00], a behavior also observed to a lesser extent in both ensembling and MC-dropout.
### CIFAR binary example
We extend the previous experiment to the CIFAR10 data set, using the first two classes - planes and cars - as a binary classification problem.
We use a CNN consisting of two pairs of convolutional layers (with 32 filters and 3x3 kernels) and max-pooling layers (2x2 kernel), followed by a densely connected network with three hidden layers with 30 hidden units each and elu activation functions.
The CNN is trained for 15 epochs using the SGD optimizer with default learning rate, a batch size of 32, and \(l_{2}\)-regularization with a constant value of 1e-5. The training time and regularization are determined in the same way as for the MNIST experiment, using an 80/20 split of the training data.
Figure 7 presents 95% confidence intervals for several training, test, and OoD points. Our method is uncertain for out-of-distribution inputs. However, contrary to the MNIST example, we also see that the model is uncertain for various in-distribution points. Figures 7(e) and 7(g) provide examples of such uncertain predictions. The exact reason why the model is uncertain for those inputs remains speculative. Possible explanations might be the open hood of the car in (e) or the large amount of blue sky in (g). An interesting avenue for future work would be to investigate which specific features cause DeepLR to output greater uncertainty.
### Adversarial example
In addition to the previous experiments, we briefly tested how the method deals with adversarial examples (Goodfellow et al., 2014). Adversarial examples are modified inputs that are specifically designed to mislead the model, typically by adding small perturbations to the input. While virtually imperceptible to humans, these perturbations can dramatically alter the model's prediction.
We constructed adversarial versions of the two most confident test-inputs in Figure 7 using the FGSM method (Goodfellow et al., 2014), which works by using the gradients of the networks' loss function with respect to the input to create an input that maximizes the loss.
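A minimal FGSM sketch in TensorFlow is shown below (our illustration; the step size and the \([0,1]\) clipping range are assumptions).

```python
import tensorflow as tf

def fgsm(model, x, y_true, epsilon=0.05):
    """Perturb x one step in the direction of the sign of the loss gradient."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    y_true = tf.reshape(tf.cast(y_true, tf.float32), (1, 1))
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.binary_crossentropy(y_true, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)[0]  # assumes inputs scaled to [0, 1]
```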
As illustrated in Figure 8, DeepLR demonstrates higher uncertainty for the two adversarial inputs. This effect can be explained as follows. Although an adversarial example appears very similar to the original to a human observer, it is significantly different to a neural network. The difference was so large that both adversarial examples were wrongly classified by the network. Since these adversarial examples differ significantly from the training data, the network can more easily change the prediction at this location without changing the other predictions, and thus the likelihood of the training data, too much. This results in much larger confidence intervals.
These results provide an encouraging sign that our method could also be able to handle adversarial examples, offering the unique capability to create confidence intervals, detect OoD examples, and offer robustness against
adversarial examples, all with a single method. Unfortunately, the high computational cost of the method prevents more large-scale comparisons with other methods to more firmly establish this potential.
Figure 7: This figure gives 95% CIs for various training points (first column), test points (second column), and OoD points (third column). Each sub-caption provides the CIs for the probability of class 1 (cars) made with DeepLR (LR), MC-Dropout (DR), and ensembling (EN). All methods demonstrate greater uncertainty than in the MNIST example, which is sensible as the CIFAR data set is significantly harder. Our method is very uncertain for all OoD points, which is not always the case for MC-Dropout (c and i) and ensembling (i). We also observe relatively uncertain predictions by all methods for some in-distribution points, notably (e) and (g).
## 4 Discussion and conclusion
In this paper, we demonstrated the potential of a likelihood-ratio-based uncertainty estimate for neural networks. This approach is capable of producing asymmetric confidence intervals that are better motivated in cases where we have fewer data points than parameters, i.e. most deep learning applications.
The experimental results verify the theoretical advantages of a likelihood-ratio-based approach. The intervals are larger in regions with fewer data points, become asymmetric in biased regions, and grow for OoD and adversarial inputs. However, since the method is not specifically designed for OoD detection or robustness against adversarial attacks, we do not claim it to be competitive in this regard against tailor-made alternatives.
While we made an effort to reduce it, our method still has some variance and can produce slightly different intervals upon repetition due to the randomness of the optimization procedure. This effect is greater for larger data sets where small differences can have a large effect on the test statistic.
Figure 8: Adversarial inputs generated by the FGSM method result in more uncertain predictions. The same network was used as for the CIFAR example illustrated in Figure 7. The intuition behind the larger CIs is as follows. While appearing very similar to a human observer, the adversarial examples are significantly different to a neural network. This allows the network to change the prediction for the adversarial example without changing the predictions of the training data, resulting in larger confidence intervals.
Furthermore, it is essential that the model is well specified. This requirement holds for any model and is not specific to our method. If the true density \(p_{\theta_{0}}\) cannot be reached, the resulting confidence intervals will surely be wrong. A model can be misspecified if it is overly regularized or if incorrect distributional assumptions are made (e.g., incorrectly assuming Gaussian noise).
In its current form, the high computational cost makes DeepLR unsuitable for many deep learning applications. A self-driving car that is approaching a crossing cannot stop and wait for an hour until it has an uncertainty estimate. Nevertheless, a trustworthy uncertainty estimate may be critical in certain situations, or only a limited number of confidence intervals may be required. For instance, for medical applications, the extra computational time may be worthwhile. Alternatively, for some applications within astrophysics, only very few confidence intervals may be needed. If only a single interval is needed, for instance, our method is cheaper than an ensemble.
### Future work
Overall, our findings highlight the potential of a likelihood-ratio-based approach as a new branch of uncertainty estimation methods. We hope that our work will inspire further research in this direction. Several areas for improvement include:
* Reducing the computational cost: The current implementation is prohibitively expensive for many - although not all - applications, and we hope that the proof of concept in this paper motivates further research in this direction that may result in reduced computational cost.
* Improved approximation of the test statistic: The calculation of the test statistic uses an approximation for the second term in equation (1). Further research could focus on finding better and possibly cheaper approximations. It may also be worthwhile to investigate the use of a Bartlett correction.
* Application to other machine-learning models: We applied this approach to neural networks. However, the methodology should also be applicable to other models, for example random forests. Especially for models that are relatively cheap to train, this approach could be very promising.
* Development of the theory on the distribution of the test statistic: It would be interesting to develop the theory surrounding the distribution of the test statistic in greater generality, possibly also when explicitly considering a regularization term. We constructed a reparametrization that showed that, under some assumptions, the test statistic has a \(\chi^{2}(1)\) distribution. It would be interesting to study these assumptions further.
* A better understanding of what causes DeepLR to become uncertain: We saw, for example, that various planes and cars had rather large accompanying confidence intervals. It would be interesting to study what causes certain inputs to be more uncertain than others. |
2310.00697 | Learning How to Propagate Messages in Graph Neural Networks | This paper studies the problem of learning message propagation strategies for
graph neural networks (GNNs). One of the challenges for graph neural networks
is that of defining the propagation strategy. For instance, the choices of
propagation steps are often specialized to a single graph and are not
personalized to different nodes. To compensate for this, in this paper, we
present learning to propagate, a general learning framework that not only
learns the GNN parameters for prediction but more importantly, can explicitly
learn the interpretable and personalized propagate strategies for different
nodes and various types of graphs. We introduce the optimal propagation steps
as latent variables to help find the maximum-likelihood estimation of the GNN
parameters in a variational Expectation-Maximization (VEM) framework. Extensive
experiments on various types of graph benchmarks demonstrate that our proposed
framework can significantly achieve better performance compared with the
state-of-the-art methods, and can effectively learn personalized and
interpretable propagate strategies of messages in GNNs. | Teng Xiao, Zhengyu Chen, Donglin Wang, Suhang Wang | 2023-10-01T15:09:59Z | http://arxiv.org/abs/2310.00697v1 | # Learning How to Propagate Messages in Graph Neural Networks
###### Abstract.
This paper studies the problem of learning message propagation strategies for graph neural networks (GNNs). One of the challenges for graph neural networks is that of defining the propagation strategy. For instance, the choices of propagation steps are often specialized to a single graph and are not personalized to different nodes. To compensate for this, in this paper, we present learning to propagate, a general learning framework that not only learns the GNN parameters for prediction but more importantly, can explicitly learn the interpretable and personalized propagate strategies for different nodes and various types of graphs. We introduce the optimal propagation steps as latent variables to help find the maximum-likelihood estimation of the GNN parameters in a variational Expectation-Maximization (VEM) framework. Extensive experiments on various types of graph benchmarks demonstrate that our proposed framework can significantly achieve better performance compared with the state-of-the-art methods, and can effectively learn personalized and interpretable propagate strategies of messages in GNNs. Code is available at [https://github.com/tengxiao1/L2P](https://github.com/tengxiao1/L2P).
Graph Neural Networks; Graph Representation Learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
Footnote †: journal: Computing methodologies Machine learning
+
For those heterophily graphs (Srivastava et al., 2017) (where connected nodes may have different class labels and dissimilar features), message propagation steps may hurt node similarity, and stacking deep layers cannot achieve better performance than on homophily graphs (Kumar et al., 2017; Li et al., 2018; Li et al., 2019). Since there is no strategy for learning how to propagate, existing GNNs need a hand-crafted number of layers for different types of nodes and graphs. This requires expert domain knowledge and careful parameter tuning, and tends to be sub-optimal. However, whether it is possible to learn personalized strategies while optimizing GNNs remains an open problem.
Motivated by the discussion above, in this paper we investigate whether one can automatically learn personalized propagation strategies that help GNNs make interpretable predictions and improve generalization. In essence, we face several challenges: (i) Graph data is very complex, so building hand-crafted, heuristic rules for the propagation steps of each node tends to be infeasible when we have little knowledge of the underlying graph or when the node distributions are too complicated. (ii) In practice, there is no way to directly access the optimal propagation strategy. The lack of supervision about how to propagate prevents models from modeling the distribution of propagation steps for each node. (iii) GNNs are also prone to over-fitting (Srivastava et al., 2017), where they fit the training data very well but generalize poorly to the testing data. The over-fitting issue becomes more severe when we utilize an additional parameterized model for each node to learn how to propagate, given the limited labeled data in the real world.
To address the challenges mentioned above, we propose a simple yet effective framework called learning to propagate (L2P) to simultaneously learn the optimal propagation strategy and the GNN parameters, achieving personalized and adaptive propagation. Our framework requires no heuristics and is generalizable to various types of nodes and graphs. Since there is no supervision, we adopt the principle of the probabilistic generative model and introduce the optimal propagation steps as latent variables that help find the maximum-likelihood estimate of the GNN parameters in a variational Expectation-Maximization (VEM) framework. To further alleviate over-fitting, we introduce an efficient bi-level optimization algorithm. The bi-level optimization closely matches the definition of generalization, since validation data provides an accurate estimate of generalization performance. The main contributions of this work are:
\(\bullet\) We study a new problem of learning propagation strategies for GNNs. To address this problem, we propose a general L2P framework which can learn personalized and interpretable propagation strategies, and achieve better performance simultaneously.
\(\bullet\) We propose an effective stochastic algorithm based on variational inference and bi-level optimization for the L2P framework, which enables simultaneous learning of the optimal propagation strategies and GNN parameters while avoiding the over-fitting issue.
\(\bullet\) We conduct experiments on homophily and heterophily graphs and the results demonstrate the effectiveness of our framework.
## 2. Related Work
### Graph Neural Networks
GNNs have achieved great success in modeling graph-structured data. Generally, GNNs can be categorized into two families, i.e., spectral-based and spatial-based. Spectral-based GNNs define graph convolution based on spectral graph theory (Golovolov et al., 2010; Kipf and Welling, 2014; Kipf and Welling, 2014). GCN (Gori et al., 2015) further simplifies graph convolutions by stacking layers of first-order Chebyshev polynomial filters together with some approximations. Spatial-based methods directly define updating rules in the spatial domain. For instance, GAT (Golovolov et al., 2010) introduces the self-attention strategy into aggregation to assign different importance scores to neighbors. We refer interested readers to the recent survey (Kipf and Welling, 2014) for more variants of GNN architectures. Despite the success of these GNN variants, the majority of existing GNNs aggregate neighbors' information for representation learning and are shown to suffer from the over-smoothing issue (Kipf and Welling, 2014; Li et al., 2019): when many propagation layers are stacked, the representations of all nodes become the same.
To tackle the over-smoothing issue, some works (Gori et al., 2015; Li et al., 2019) add residual or dense connections (Kipf and Welling, 2014) in the propagation steps to preserve the locality of node representations. Other works (Kipf and Welling, 2014; Srivastava et al., 2017) augment the graph by randomly removing a certain number of edges or nodes to prevent over-smoothing. Recently, GCNII (Kipf and Welling, 2014) introduced initial residual and identity mapping techniques for GCN and achieves promising performance. Since the feature propagation and transformation steps are commonly coupled in standard GNNs, several works (Li et al., 2019; Li et al., 2019) separate them into two steps to reduce the risk of over-smoothing. We differ from these methods in that (1) instead of focusing on alleviating over-smoothing, we argue that different nodes and graphs may need different numbers of propagation layers, and propose a framework for learning propagation strategies that is generalizable to various types of graphs and backbones, and (2) we propose a bi-level optimization that utilizes the validation error to guide the learning of propagation strategies, improving the generalization ability of graph neural networks.
### The Bi-level Optimization
Bi-level optimization (Kipf and Welling, 2014), which performs upper-level learning subject to the optimality of lower-level learning, has been applied to different tasks such as few-shot learning (Kipf and Welling, 2014; Kipf and Welling, 2014; Kipf and Welling, 2014), architecture search (Li et al., 2019), and reinforcement learning (Li et al., 2019). For the graph domain, Franceschi et al. propose a bi-level optimization objective to learn the structures of graphs. Some works (Kipf and Welling, 2014; Kipf and Welling, 2014) optimize a bi-level objective via reinforcement learning to search the architectures of GNNs. Moreover, Meta-attack (Zhang et al., 2017) adopts the principle of meta-learning to conduct poisoning attacks on graphs by optimizing a bi-level objective. Recently, Hwang et al. proposed SELAR (Huang et al., 2019), which learns a weighting function for self-supervised tasks to help the primary task on the graph with a bi-level objective. To conduct few-shot learning on graphs, the work (Huang et al., 2019), inspired by MAML (Mamlak et al., 2017), tries to obtain a parameter initialization that can adapt to unseen tasks quickly, using gradient information from the bi-level optimization. By contrast, in this paper our main concern is generalization, and we propose a bi-level program with variational inference to develop a framework for learning propagation strategies while avoiding over-fitting.
## 3. Preliminaries
### Notations and Problem Definition
Let \(G=(\mathcal{V},\mathcal{E})\) denote a graph, where \(\mathcal{V}\) is a set of \(|\mathcal{V}|=N\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of \(|\mathcal{E}|\) edges between nodes. \(\mathbf{A}\in\{0,1\}^{N\times N}\) is the adjacency matrix of \(G\): the \((i,j)\)-th element \(\mathbf{A}_{ij}=1\) if there exists an edge between nodes \(v_{i}\) and \(v_{j}\), otherwise \(\mathbf{A}_{ij}=0\). Furthermore, we use \(\mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}\right]\in\mathbb{R}^{N\times d}\) to denote the features of nodes, where \(\mathbf{x}_{n}\) is the \(d\)-dimensional feature vector of node \(v_{n}\). Following the common semi-supervised node classification setting (Wang et al., 2017; Wang et al., 2018), only a small portion of nodes \(\mathcal{V}_{o}=\left\{v_{1},v_{2},\ldots,v_{o}\right\}\) are associated with observed labels \(\mathcal{Y}^{o}=\left\{y_{1},y_{2},\ldots,y_{o}\right\}\), where \(y_{n}\) denotes the label of \(v_{n}\). \(\mathcal{V}_{u}=\mathcal{V}\backslash\mathcal{V}_{o}\) is the set of unlabeled nodes. Given the adjacency matrix \(\mathbf{A}\), features \(\mathbf{X}\), and the observed labels \(\mathcal{Y}^{o}\), the task of node classification is to learn a function \(f_{\theta}\) that can accurately predict the labels \(\mathcal{Y}^{u}\) of the unlabeled nodes \(\mathcal{V}_{u}\).
### Message Propagation
Generally, GNNs adopt a message propagation process that iteratively aggregates neighborhood information. Formally, the propagation process of the \(k\)-th layer in a GNN consists of two steps:

\[\mathbf{m}_{k,n}=\textsc{AGGREGATE}\left(\left\{\mathbf{h}_{k-1,u}:u\in\mathcal{N}(n)\right\}\right) \tag{1}\]

\[\mathbf{h}_{k,n}=\textsc{UPDATE}\left(\mathbf{h}_{k-1,n},\mathbf{m}_{k,n}\right) \tag{2}\]
where \(\mathcal{N}(n)\) is the set of neighbors of node \(v_{n}\) and AGGREGATE is a permutation-invariant function. After \(K\) message-passing layers, the final node embeddings \(\mathbf{H}_{K}\) are used to perform a given task. In general, most state-of-the-art GNN backbones (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) follow this message propagation form with different AGGREGATE functions, UPDATE functions, or initial features \(\mathbf{h}_{0,n}\). For instance, APPNP (Wang et al., 2018) and GCNII (Wang et al., 2018) add the initial feature \(\mathbf{h}_{0,n}=\textsc{MLP}(\mathbf{x}_{n};\theta)\) to each layer in the UPDATE function. In general, a GNN consists of several message propagation layers. We abstract the message propagation with \(K\) layers as one parameterized function \(GNN(\mathbf{X},\mathbf{A},K)\).
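As a minimal PyTorch-style sketch of this abstraction (the normalized-adjacency aggregator and the APPNP-style update with teleport weight `alpha` are illustrative choices we assume here, and the function name `propagate` is ours):

```python
import torch

def propagate(H0, A_hat, K, alpha=0.1):
    """Generic K-step message propagation, i.e., GNN(X, A, K).

    H0:    [N, D] initial node representations, e.g., MLP(X).
    A_hat: [N, N] normalized adjacency acting as the AGGREGATE operator.
    Returns the list [H_0, H_1, ..., H_K] of per-layer representations.
    """
    layers = [H0]
    H = H0
    for _ in range(K):
        m = A_hat @ H                     # AGGREGATE over neighbors (Eq. 1)
        H = (1 - alpha) * m + alpha * H0  # UPDATE, here APPNP-style (Eq. 2)
        layers.append(H)
    return layers
```

Keeping the full list of per-layer representations, rather than only \(\mathbf{H}_{K}\), is what later lets each node select its own propagation step.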
## 4. Learning to Propagate
In this section, we introduce the Learning to Propagate (L2P) framework, which can perform personalized message propagation for better node representation learning and a more interpretable prediction process. The key idea is to introduce a discrete latent variable \(t_{n}\) for each node \(v_{n}\), which denotes the personalized optimal propagation step of \(v_{n}\). How to learn \(t_{n}\) is challenging given no explicit supervision on the optimal propagation step of \(v_{n}\). To address the challenge, we propose a generative model for modeling the joint distribution of node labels and propagation steps conditioned on node attributes and graphs, i.e., \(p(y_{n},t_{n}|\mathbf{X},\mathbf{A})\), and formulate the Learning to Propagate framework as a variational objective, where the goal is to find the parameters of GNNs and the optimal propagation distribution, by iteratively approximating and maximizing the log-likelihood function. To alleviate the over-fitting issue, we further frame the variational process as a bi-level optimization problem, and optimize the variational parameters of learning the propagation strategies in an outer loop to maximize generalization performance of GNNs trained based on the learned strategies.
### The Generative Process
Generally, we can consider the design process of graph neural networks as follows: we first choose the number of propagation layers \(K\) for all nodes and the type of aggregation function parameterized by \(\theta\). Then, for each training label \(y_{n}\) of node \(v_{n}\), we typically conduct Maximum Likelihood Estimation (MLE) of the marginal log-likelihood over the observed labels as:
\[\max_{\theta}\mathcal{L}\left(\theta;\mathbf{A},\mathbf{X},\mathcal{Y}^{o} \right)=\sum\nolimits_{y_{n}\in\mathcal{Y}^{o}}\log p_{\theta}\left(y_{n}|GNN \left(\mathbf{X},\mathbf{A},K\right)\right), \tag{3}\]
where \(p_{\theta}\left(y_{n}|GNN(\mathbf{X},\mathbf{A},K)\right)=p\left(y_{n}| \mathbf{H}_{K}\right)\) is the predicted probability of node \(v_{n}\) having label \(y_{n}\) using \(\mathbf{H}_{K,n}\). \(\mathbf{H}_{K,n}\) is the node representation of \(v_{n}\) after stacking \(K\) propagation steps (see SS 3.2). Generally, a softmax is applied on \(\mathbf{H}_{K,n}\) for predicting label \(y_{n}\).
Although the message propagation strategy above has achieved promising performance, it has two drawbacks: (i) It treats each node equally, i.e., each node stacks \(K\) layers, while in practice different nodes may need different propagation steps/layers. Simply using a one-for-all strategy could potentially lead to sub-optimal decision boundaries and is less interpretable. (ii) Different datasets/graphs may also have different optimal propagation steps. Existing GNNs require a hand-crafted number of propagation steps, which demands expert domain knowledge, requires careful parameter tuning, and is time-consuming. Thus, it would be desirable to learn a personalized and adaptive propagation strategy applicable to various types of graphs and GNN backbones.
Based on the motivation above, we propose to learn a personalized propagation distribution from the given labeled nodes and utilize the learned distribution at test time, such that each test node automatically finds its optimal propagation step, explicitly improving performance. A natural idea for learning the optimal propagation distribution is supervised learning. However, there is no direct supervision of the optimal propagation strategy for each node. To solve this challenge, we treat the optimal propagation layer of each node as a discrete latent variable and adopt the principle of the probabilistic generative model, which has been shown to be effective in estimating the underlying data distribution (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018).
Specifically, for each node \(v_{n}\), we introduce a latent discrete variable \(t_{n}\in\left\{0,1,2,\cdots,K\right\}\) to denote its optimal propagation step, where \(K\) is the predefined maximum step. Note that \(t_{n}\) can be 0, which corresponds to using the non-aggregated features for prediction. We allow \(t_{n}\) to be 0 because, for some nodes in heterophily graphs, the neighborhood information is noisy, and aggregating it may result in worse performance (Wang et al., 2018; Wang et al., 2018). \(t_{n}\) is node-wise because the optimal propagation step may vary considerably from one node to another. With the latent variables \(\left\{t_{n}\right\}_{n=1}^{|\mathcal{V}|}\), we propose the following generative model for the joint distribution of each observed label \(y_{n}\) and latent \(t_{n}\):
\[p_{\theta}(y_{n},t_{n}|\mathbf{X},\mathbf{A})=p_{\theta}(y_{n}|GNN(\mathbf{X}, \mathbf{A},t_{n}))p(t_{n}), \tag{4}\]
where \(p(t_{n})\) is the prior of the propagation variable and \(\theta\) is the parameter shared by all nodes. \(p_{\theta}(y_{n}|GNN(\mathbf{X},\mathbf{A},t_{n}))\) represents the label prediction probability using \(v_{n}\)'s representation from the \(t_{n}\)-th layer, i.e., \(\mathbf{H}_{t_{n},n}\). Since we do not have any prior on how to propagate, \(p(t_{n})=\frac{1}{K+1}\) is defined as a uniform distribution over all layers for all nodes in this paper. We could also use an alternative prior with lower probability on the deeper layers if we wanted to encourage shallower GNNs. Given the generative model in Eq. (4) and from the Bayesian perspective, we are interested in two things:
(1) Learning the parameter \(\theta\) of the GNN by maximizing the following likelihood, which enables label prediction in the testing phase:

\[\log p_{\theta}\left(y_{n}|\mathbf{X},\mathbf{A}\right)=\log\sum\nolimits_{t_{n}=0}^{K}p_{\theta}(y_{n}|GNN(\mathbf{X},\mathbf{A},t_{n}))p(t_{n}). \tag{5}\]
(2) Inferring the posterior \(p(t_{n}|\mathbf{X},\mathbf{A},y_{n})\) of the latent variable \(t_{n}\), which gives the optimal propagation distribution:
\[p(t_{n}=k|\mathbf{X},\mathbf{A},y_{n})=\frac{p_{\theta}(y_{n}|GNN(\mathbf{X}, \mathbf{A},k))}{\sum_{k^{\prime}=0}^{K}p_{\theta}(y_{n}|GNN(\mathbf{X},\mathbf{ A},k^{\prime}))}. \tag{6}\]
Intuitively, this posterior can be understood as choosing the propagation step \(t_{n}\) of node \(v_{n}\) according to the largest likelihood (i.e., the smallest loss) among the defined propagation steps.
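A minimal sketch of this computation, assuming per-layer class logits have already been obtained from \(\mathbf{H}_{0},\ldots,\mathbf{H}_{K}\) (all variable and function names here are ours):

```python
import torch

def posterior_over_steps(layer_logits, y):
    """Non-parametric posterior p(t_n = k | X, A, y_n) of Eq. (6).

    layer_logits: [K+1, N, C] class logits computed from H_0 ... H_K.
    y:            [N] observed labels (training nodes only).
    Returns [N, K+1]: for each node, a distribution over the steps.
    """
    log_p = torch.log_softmax(layer_logits, dim=-1)   # [K+1, N, C]
    idx = y.view(1, -1, 1).expand(log_p.size(0), -1, 1)
    ll = log_p.gather(-1, idx).squeeze(-1).t()        # [N, K+1] log-likelihoods
    # normalize the per-layer likelihoods; the uniform prior cancels out
    return torch.softmax(ll, dim=-1)
```

Note the sketch requires the label \(y_{n}\), which is exactly why this non-parametric posterior cannot be used as-is at test time.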
However, there are several challenges in solving these two problems. For learning, we cannot directly learn the parameter \(\theta\), since it involves marginalizing the latent variable, which is generally time-consuming and intractable (Kang and Hinton, 2015). In terms of inference, since we do not have labels for test nodes, i.e., \(y_{n}\) for \(v_{n}\in\mathcal{V}_{u}\), the non-parametric true posterior in Eq. (6), which involves evaluating the likelihood \(p_{\theta}(y_{n}|GNN(\mathbf{X},\mathbf{A},k))\) of test nodes, is not applicable. To solve the challenges in learning and inference, we adopt the variational inference principle (Kang and Hinton, 2015; Hinton et al., 2015) and instead consider the following lower bound of the marginal log-likelihood in Eq. (5), which gives rise to our formal variational objective:
\[\mathcal{L}(\theta,q)=\mathbb{E}_{q(t_{n})}\left[\log p_{\theta} \left(y_{n}|\mathbf{X},\mathbf{A},t_{n}\right)\right]-\text{KL}(q(t_{n})||p(t _{n})), \tag{7}\]
where the derivations are given in Appendix A.1 and \(q(t_{n})\) is the introduced variational distribution. Maximizing the ELBO \(\mathcal{L}(\theta,q)\) is equivalent to (i) maximizing Eq. (5) and (ii) making the variational distribution \(q(t_{n})\) of each node close to its intractable true posterior \(p(t_{n}|\mathbf{X},\mathbf{A},y_{n})\). Note that the ELBO holds for any type of variational distribution \(q(t_{n})\). We defer discussion of the learning and inference process to the next section. Here, we first introduce two ways to parameterize the variational distribution \(q(t_{n})\), resulting in two instances of our L2P framework.
### Learning to Select
Following the variational inference principle, we can introduce a variational distribution \(q_{\phi}(t_{n}|\mathbf{v}_{n})\) parameterized by \(\mathbf{v}_{n}\in\mathbb{R}^{K}\). However, we cannot fit each \(q_{\phi}(t_{n}|\mathbf{v}_{n})\) individually by solving for \(N\cdot K\) parameters, which would increase the over-fitting risk given the limited labels in the graphs. Thus, we consider amortized inference (Kang and Hinton, 2015), which avoids optimizing the parameter \(\mathbf{v}_{n}\) of each local variational distribution \(q_{\phi}(t_{n}|\mathbf{v}_{n})\); instead, it fits a shared neural network to calculate each local parameter \(\mathbf{v}_{n}\). Since the latent variable \(t_{n}\) is a discrete multinomial variable, the simplest way to represent a categorical variable is the softmax function. Thus, we pass the features of nodes through a softmax function to parameterize the categorical propagation distribution as:
\[q_{\phi}(t_{n}=k|\mathbf{X},\mathbf{A})=\frac{\exp(\mathbf{w}_{k}^{ \top}\mathbf{H}_{k,n})}{\sum_{k^{\prime}=0}^{K}\exp(\mathbf{w}_{k^{\prime}}^{ \top}\mathbf{H}_{k^{\prime},n})}, \tag{8}\]
where \(\mathbf{w}_{k}\) represents the trainable linear transformation for the \(k\)-th layer, \(\mathbf{H}_{k,n}\) is the representation of node \(v_{n}\) at the \(k\)-th layer, and \(\phi\) represents the set of parameters. The main insight behind this amortization is to reuse the propagation representation of each layer, leveraging the accumulated knowledge in the representations to quickly infer the propagation distribution. With amortization, we reduce the number of parameters to \((K+1)\cdot D\), where \(K\) is the predefined maximum propagation step and \(D\) is the dimension of the node representations. Since this formulation directly models the selection probability over all propagation steps of each node, we refer to this method as _Learning to Select_ (L2S). Figure 2(b) gives an illustration of L2S. We use the node representation of \(v_{n}\) in each layer to calculate \(q_{\phi}(t_{n}=k|\mathbf{X},\mathbf{A})\), which lets the model adaptively decide which propagation layer is best for each node. It also allows each graph to learn its own form of propagation decay from the validation signal (see §5.1 for details).
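A minimal PyTorch sketch of this parameterization (the Xavier initialization and the stacked-tensor interface are our assumptions, not details fixed by the text):

```python
import torch
import torch.nn as nn

class L2S(nn.Module):
    """Amortized variational distribution q_phi(t_n = k | X, A) of Eq. (8)."""

    def __init__(self, K, D):
        super().__init__()
        # one trainable scorer w_k per propagation step k = 0, ..., K
        self.w = nn.Parameter(nn.init.xavier_uniform_(torch.empty(K + 1, D)))

    def forward(self, layer_reprs):
        # layer_reprs: [K+1, N, D] stacked representations H_0 ... H_K
        scores = torch.einsum('knd,kd->nk', layer_reprs, self.w)
        return torch.softmax(scores, dim=-1)          # [N, K+1] per-node q
```

The module adds only \((K+1)\cdot D\) parameters regardless of the number of nodes, reflecting the amortization described above.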
### Learning to Quit
Instead of directly modeling the selection probability over every propagation step, we can model the probability of exiting the propagation process and transform the modeling of multinomial probability parameters into the modeling of the logits of binomial probability parameters. More specifically, we consider modeling the quit probability at each propagation layer for each node \(n\) as follows:
\[\alpha_{k,n}=\frac{1}{1+\exp(-\mathbf{w}_{k}^{\top}\mathbf{H}_{k,n})}, \tag{9}\]
where \(\alpha_{k,n}\) denotes the probability that node \(v_{n}\) quits propagating at the \(k\)-th layer. The challenge is how to transform the logits \(\alpha_{k,n}\) into the multinomial variational distribution \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\). In this paper, we use the stick-breaking (non-parametric Bayesian) process (Kang and Hinton, 2015) to address this challenge. Specifically, the probability of the first step (no propagation), i.e., \(q(t_{n}=0)\), is modeled as a break of proportion \(\alpha_{0,n}\) (quitting at the first step), while the length of the remainder of the propagation is left for the next break. Each propagation step's probability can be deterministically computed from the quit probabilities as \(q(t_{n}=k)=\alpha_{k,n}\prod_{k^{\prime}=0}^{k-1}\left(1-\alpha_{k^{\prime},n}\right)\) up to \(K-1\), and the probability of the last propagation step is \(q(t_{n}=K)=\prod_{k^{\prime}=0}^{K-1}(1-\alpha_{k^{\prime},n})\). Assume the maximum propagation step is \(K=2\); then the propagation probability is generated by 2 breaks, where \(q(t_{n}=0)=\alpha_{0,n}\), \(q(t_{n}=1)=\alpha_{1,n}\left(1-\alpha_{0,n}\right)\), and the last propagation step has \(q(t_{n}=2)=(1-\alpha_{1,n})\left(1-\alpha_{0,n}\right)\) (not quitting until the end). Hence, for any value of \(K\), this non-parametric breaking process always satisfies \(\sum_{k=0}^{K}q(t_{n}=k)=1\). We call this method _Learning to Quit_ (L2Q). Compared with L2S, L2Q models the quit probability of each node at each propagation step via the stick-breaking process, which naturally induces sparsity in the modeled propagation steps for each node: the deeper layers are less likely to be sampled. Figure 2(c) shows the architecture of L2Q.
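A sketch of the stick-breaking transform, assuming the quit probabilities of Eq. (9) are given as a tensor (names are ours):

```python
import torch

def stick_breaking(alpha):
    """Turn per-layer quit probabilities into q_phi(t_n | X, A).

    alpha: [N, K] quit probabilities alpha_{0..K-1,n} from Eq. (9).
    Returns [N, K+1] with q(t=k) = alpha_k * prod_{k'<k} (1 - alpha_k')
    and q(t=K) = prod_{k'<K} (1 - alpha_k'); each row sums to one.
    """
    survive = torch.cumprod(1.0 - alpha, dim=-1)            # [N, K]
    ones = torch.ones_like(alpha[:, :1])
    q = alpha * torch.cat([ones, survive[:, :-1]], dim=-1)  # q(t=0..K-1)
    return torch.cat([q, survive[:, -1:]], dim=-1)          # append q(t=K)
```

The telescoping product guarantees the normalization \(\sum_{k=0}^{K}q(t_{n}=k)=1\) without any explicit softmax.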
### Learning and Inference
Maximizing the ELBO in Eq. (7) is challenging, and the lack of labels for the test data further exacerbates the difficulty. Thus, in this paper, we propose two algorithms, alternate expectation maximization and iterative variational inference, to maximize it.

Figure 2. Illustrations of our L2P framework. (a) The vanilla GNN architecture. (b) L2S predicts the selection probability over all propagation steps for each node. (c) L2Q forces each node to personally quit its propagation process.
**Alternate expectation maximization.** Minimization of the negative ELBO in Eq. (7) can be performed with the expectation maximization (EM) algorithm, which iteratively infers \(q(t_{n})\) in the E-step and learns \(\theta\) in the M-step. More specifically, at each iteration \(i\), given the current parameters \(\theta^{(i)}\), the E-step that maximizes \(\mathcal{L}(\theta^{(i)},q)\) w.r.t. \(q\) has a closed-form solution of the same form as Eq. (6):

\[q^{(i+1)}(t_{n})=q(t_{n}|\mathbf{X},\mathbf{A},y_{n})=\frac{p_{\theta^{(i)}}(y_{n}|\mathbf{X},\mathbf{A},t_{n})}{\sum_{t_{n}=0}^{K}p_{\theta^{(i)}}(y_{n}|\mathbf{X},\mathbf{A},t_{n})}. \tag{10}\]
However, we cannot utilize this non-parametric posterior directly, since the label \(y_{n}\) is not available for the test nodes and the training and testing pipelines need to be consistent. Thus, we consider projecting the non-parametric posterior onto a parametric posterior \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\) (i.e., L2S or L2Q). We adopt an approximation, also used in the classical wake-sleep algorithm (Srivastava et al., 2015), that minimizes the forward KL divergence \(KL(q^{(i+1)}(t_{n})||q_{\phi}(t_{n}|\mathbf{X},\mathbf{A}))\). Then we obtain the following pseudo maximum-likelihood objective:
\[\phi^{(i+1)}=\arg\max_{\phi}\mathbb{E}_{q^{(i+1)}(t_{n})}\left[\log q_{\phi}(t _{n}|\mathbf{X},\mathbf{A})\right]. \tag{11}\]
Given the parametric posterior \(q_{\phi^{(i+1)}}(t_{n}|\mathbf{X},\mathbf{A})\), the M-step optimizes \(\mathcal{L}(\theta,\,q_{\phi^{(i+1)}}(t_{n}|\mathbf{X},\mathbf{A}))\) w.r.t \(\theta\). Since there is no analytical solution for deep neural networks, we update the model parameters \(\theta\) with respect to the ELBO by one step of gradient descent.
**Iterative variational inference.** Although the alternate expectation maximization algorithm is effective for inferring the optimal propagation variable, the alternating EM steps are time-consuming: we need to calculate the loss at every layer for each training node, i.e., \(O(N\cdot(K+1))\) complexity. Thus, we propose an end-to-end iterative algorithm to minimize the negative ELBO. Specifically, we introduce the parameterized posterior \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\) (i.e., L2S or L2Q) into Eq. (7) and directly optimize the ELBO using the reparameterization trick (Kang et al., 2015). We infer the optimal propagation distribution \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\) and learn the GNN weights \(\theta\) jointly through standard back-propagation from the ELBO in Eq. (7). However, the optimal propagation step \(t\) is discrete and non-differentiable, which makes direct optimization difficult. Therefore, we adopt Gumbel-Softmax sampling (Gumbel and Softmax, 1995; Gumbel and Softmax, 1995), a simple yet effective way to substitute the original non-differentiable sample from a discrete distribution with a differentiable sample from a corresponding Gumbel-Softmax distribution. Specifically, we minimize the following negative ELBO in Eq. (7) with the reparameterization trick (Kang et al., 2015):
\[\mathcal{L}(\theta,\phi)=-\log p_{\theta}(\mathbf{y}|GNN(\mathbf{X},\mathbf{A},\hat{t}))+\mathrm{KL}(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})||p(t_{n})), \tag{12}\]
where \(\hat{t}\) is drawn from a categorical distribution with the discrete variational distribution \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\) parameterized by \(\phi\):
\[\hat{t}_{k}=\frac{\exp((\log(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\left[a_{k} \right])+g_{k})/y_{g})}{\sum_{k^{\prime}=0}^{K}\exp((\log(q_{\phi}(t_{n}| \mathbf{X},\mathbf{A})\left[a_{k^{\prime}}\right])+g_{k^{\prime}})/y_{g})}, \tag{13}\]
where \((g_{k^{\prime}})_{k=0}^{K}\) are i.i.d. samples drawn from the Gumbel (0, 1) distribution, \(y_{g}\) is the softmax temperature, \(\hat{t}_{k}\) is the \(k\)-th value of sample \(\hat{t}\) and \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})[a_{k}]\) indicates the \(a_{k}\)-th index of \(q_{\phi}(t_{n}|\mathbf{X},\mathbf{A})\), i.e., the logit corresponding the \((a_{k}-1)\)-th layer. Clearly, when \(\tau>0\), the Gumbel-Softmax distribution is smooth so \(\phi\) can be optimized by standard back-propagation. The KL term in Eq. (12) is respect to two categorical distributions, thus it has a closed form.
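A minimal sketch combining Eqs. (12) and (13); softly mixing pre-computed per-layer logits with the relaxed sample \(\hat{t}\) is a simplification we assume here rather than a detail specified in the text:

```python
import math
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(q, tau=1.0):
    """Differentiable sample t_hat from the categorical q(t_n | X, A), Eq. (13)."""
    u = torch.rand_like(q).clamp_min(1e-10)
    g = -torch.log(-torch.log(u))                    # Gumbel(0, 1) noise
    return torch.softmax((torch.log(q + 1e-10) + g) / tau, dim=-1)

def negative_elbo(layer_logits, q, y, tau=1.0):
    """Negative ELBO of Eq. (12) on labeled nodes.

    layer_logits: [K+1, N, C] per-layer class logits; q: [N, K+1]; y: [N].
    """
    t_hat = gumbel_softmax_sample(q, tau)            # [N, K+1]
    # soft selection of one layer's prediction per node
    logits = torch.einsum('nk,knc->nc', t_hat, layer_logits)
    nll = F.cross_entropy(logits, y)
    # closed-form KL between categorical q and the uniform prior 1/(K+1)
    kl = (q * torch.log(q + 1e-10)).sum(-1).mean() + math.log(q.size(-1))
    return nll + kl
```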
## 5. Bi-level Variational Inference
So far, we have proposed the L2P framework and shown how to solve it via variational inference. However, as suggested by previous work (Kang et al., 2015; Wang et al., 2015), GNNs suffer from over-fitting due to the scarce label information in the graph domain. In this section, we propose the bilevel variational inference to alleviate the over-fitting issue.
### The Bi-level Objective
For our L2P framework, the inference network introduced for jointly learning the optimal propagation steps in L2S and L2Q also increases the risk of over-fitting, as shown in the experiments (§6.4). To solve the over-fitting issue, we draw inspiration from gradient-based meta-learning (learning to learn) (Kang et al., 2015; Wang et al., 2015). Briefly, the objective for \(\phi\) is to maximize the ultimate measure of performance of the GNN model \(p_{\theta}(y|GNN(\mathbf{X},\mathbf{A},t))\), namely the model's performance on a held-out validation set. Formally, this goal can be formulated as the following bi-level optimization problem:
\[\min_{\phi}\mathcal{L}_{\text{val}}\left(\theta^{*}(\phi),\phi\right)\text{ s.t. }\theta^{*}(\phi)=\arg\min_{\theta}\mathcal{L}_{\text{train}}(\theta,\phi), \tag{14}\]
where \(\mathcal{L}_{\text{val}}\left(\theta^{*}(\phi),\phi\right)\) and \(\mathcal{L}_{\text{train}}(\theta,\phi)\) are the upper-level and lower-level objectives on the validation and training sets, respectively. For our L2P framework, the objective is the negative ELBO \(\mathcal{L}(\theta,\phi)\) in Eq. (7). This bi-level update optimizes the propagation strategy of each node so that the GNN model performs best on the validation set. Instead of using fixed propagation steps, it learns to assign adaptive steps while regularizing the training of the GNN model to improve generalization. Generally, a bi-level optimization problem requires solving each level to reach a local minimum. However, calculating the optimal \(\phi\) requires two nested optimization loops, i.e., we would need to compute the optimal parameters \(\theta^{*}(\phi)\) for each \(\phi\). Thus, to control the computational complexity, we propose an approximate alternating optimization method that updates \(\theta\) and \(\phi\) iteratively in the next section.
### Bi-level Training Algorithm
In general, there is no closed-form expression for \(\theta^{*}(\phi)\), so it is not possible to directly optimize the upper-level objective in Eq. (14). To tackle this challenge, in this section we propose an alternating approximation algorithm to speed up computation.
**Updating the lower level \(\theta\).** Instead of solving the lower-level problem completely per outer iteration, we fix \(\phi\) and only take the following gradient step over the model parameters \(\theta\) at the \(i\)-th iteration:
\[\theta^{(i)}=\theta^{(i-1)}-\eta_{\theta}\nabla_{\theta}\mathcal{L}_{\text{ train}}(\theta^{(i-1)},\phi^{(i-1)}), \tag{15}\]
where \(\eta_{\theta}\) is the learning rate for \(\theta\).
**Updating the upper level \(\phi\).** After receiving the parameters \(\theta^{(i)}\) (a reasonable approximation of \(\theta^{*}(\phi)\)), we can calculate the upper-level objective and update \(\phi\) through:
\[\phi^{(i)}=\phi^{(i-1)}-\eta_{\phi}\nabla_{\phi}\mathcal{L}_{\text{ val}}(\theta^{(i)},\phi^{(i-1)}). \tag{16}\]
Note that \(\theta^{(i)}\) is a function of \(\phi\) due to Eq. (15), so we can directly back-propagate the gradient through \(\theta^{(i)}\) to \(\phi\). The gradient \(\nabla_{\phi}\mathcal{L}_{\text{val}}(\theta^{(i)},\phi^{(i-1)})\) can be approximated as (see Appendix A.2 for detailed derivations):

\[\nabla_{\phi}\mathcal{L}_{\text{val}}(\theta^{(i)},\phi^{(i-1)})=\nabla_{\phi}\mathcal{L}_{\text{val}}(\bar{\theta}^{(i)},\phi^{(i-1)})-\eta_{\theta}\frac{1}{\epsilon}\left(\nabla_{\phi}\mathcal{L}_{\text{train}}(\theta^{(i-1)}+\epsilon v,\phi^{(i-1)})-\nabla_{\phi}\mathcal{L}_{\text{train}}(\theta^{(i-1)},\phi^{(i-1)})\right), \tag{17}\]
where \(v=\nabla_{\theta}\mathcal{L}_{\text{val}}(\theta^{(i)},\hat{\phi}^{(i-1)})\), and \(\bar{\theta}^{(i)}\) and \(\hat{\phi}^{(i-1)}\) denote stopping the gradient. This can easily be implemented by maintaining a shadow version of \(\theta^{(i-1)}\) from the last step, caching the training loss \(\mathcal{L}_{\text{train}}(\theta^{(i-1)},\hat{\phi}^{(i-1)})\), and computing the new loss \(\mathcal{L}_{\text{train}}(\theta^{(i-1)}+\epsilon v,\phi^{(i-1)})\). When \(\eta_{\theta}\) is set to 0 in Eq. (17), the second-order derivative disappears, resulting in a first-order approximation. In the experiments in §6.4, we study the effect of the bi-level optimization and of the first- and second-order approximations.
Given the above gradient derivations, we obtain the complete L2P algorithm by alternating the update rules in Eqs. (15) and (16). The time complexity mainly depends on the bi-level optimization. For the first-order approximation, the complexity is the same as for vanilla GNN methods. L2P needs approximately 3× the training time for the second-order approximation, since it requires extra forward and backward passes over the weights to compute the bi-level gradient. However, as the experiments in §6.4 show, the first-order approximation is sufficient to achieve the best performance.
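A sketch of one alternating step under the first-order approximation (\(\eta_{\theta}=0\) in Eq. (17)); maintaining two separate optimizers for \(\theta\) and \(\phi\) is our assumption:

```python
import torch

def bilevel_step(theta_opt, phi_opt, neg_elbo, train_batch, val_batch):
    """One alternating update of Eqs. (15) and (16), first-order variant.

    theta_opt / phi_opt: optimizers over the GNN weights theta and the
    variational weights phi; neg_elbo(batch) evaluates Eq. (12) with the
    current theta and phi.
    """
    # lower level (Eq. 15): update theta on the training nodes
    theta_opt.zero_grad()
    phi_opt.zero_grad()
    neg_elbo(train_batch).backward()
    theta_opt.step()

    # upper level (Eq. 16): update phi on the validation nodes; with
    # eta_theta = 0 in Eq. (17) the second-order term vanishes, so a
    # plain backward pass on the validation loss suffices
    phi_opt.zero_grad()
    neg_elbo(val_batch).backward()
    phi_opt.step()
```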
## 6. Experiment
In this section, we conduct experiments to evaluate the effectiveness of the proposed frameworks with comparison to state-of-the-art GNNs. Specifically, we aim to answer the following questions:
* How effective is the proposed L2P framework for the node classification task on both heterophily and homophily graphs?
* Could the proposed L2P alleviate over-smoothing?
* How do the proposed learning algorithms work? Could the bi-level optimization alleviate the over-fitting issue?
* Could the proposed framework adaptively learn propagation strategies for better understanding the graph structure?
* Could the proposed L2P framework effectively learn personalized and interpretable propagation strategies for each node?
### Experimental Settings
**Datasets.** We conduct experiments on both homophily and heterophily graphs. For homophily graphs, we adopt three standard citation networks for semi-supervised node classification, i.e., Cora, CiteSeer, and PubMed (Yang et al., 2017). Recent studies (Zhou et al., 2018; Liu et al., 2019; Liu et al., 2019) show that the performance of GNNs can drop significantly on heterophily graphs, so we also include heterophily benchmarks in our experiments: Actor, Cornell, Texas, and Wisconsin (Zhou et al., 2018; Liu et al., 2019). The descriptions and statistics of these datasets are provided in Appendix A.3.
**Baselines.** To evaluate the effectiveness of the proposed framework, we consider the following representative and state-of-the-art GNN models on the semi-supervised node classification task. GCN (Liu et al., 2019), GAT (Liu et al., 2019), JK-Net (Liu et al., 2019), APPNP (Liu et al., 2019), DAGNN (Liu et al., 2019), IncepGCN (Liu et al., 2019), and GCNII* (Zhou et al., 2019). We also compare our proposed methods with GCN(DropEdge), ResGCN(DropEdge), JKNet(DropEdge) and IncepGCN(DropEdge) by utilizing the drop-edge trick (Liu et al., 2019). The details and implementations of baselines are given in Appendix A.4.
**Setup.** For our L2P framework, we use APPNP as our backbone unless otherwise stated, but note that our framework is broadly applicable to more complex GNN backbones (Zhou et al., 2018; Liu et al., 2019; Liu et al., 2019). We randomly initialize the model parameters. We utilize the first-order approximation for our methods due to its efficiency and study the effect of the second-order approximation separately in §6.4. For each hyper-parameter configuration in the search, we run the experiments with 20 random seeds and select the best configuration based on the average accuracy on the validation set. Hyper-parameter settings and dataset splits are given in Appendix A.5.
### RQ1. Performance Comparison
To answer RQ1, we conduct experiments on both homophily and heterophily graphs with comparison to state-of-the-art methods.
**Performance on homophily graphs.** Table 1 reports the mean classification accuracy with the standard deviation on the test nodes after 10 runs. From Table 1, we have the following findings: (1) Our L2S and L2Q improve the performance of the APPAP backbone consistently and significantly in all settings. This is because that our framework has the advantage of adaptively learning personalized strategies via bi-level training. This observation demonstrates our motivation and the effectiveness of our framework. (2) Our L2S and L2Q can achieve comparable performance with state-of-the-art methods such as DAGNN and GCNII* on Cora and PubMed, and outperform them on CiteSeer. This once again demonstrates the effectiveness of our L2P framework on the node classification task. (3) In terms of our methods, the L2Q performs better than L2S, indicting that the simple softmax is not the best parameterization for the variational distribution of the latent propagation variable.
| **Method** | **Cora** | **CiteSeer** | **PubMed** |
| --- | --- | --- | --- |
| GCN | 81.3 ± 0.8 | 71.1 ± 0.7 | 78.8 ± 0.6 |
| GAT | 83.0 ± 0.7 | 72.5 ± 0.7 | 79.0 ± 0.3 |
| APPNP | 83.3 ± 0.5 | 71.8 ± 0.5 | 79.7 ± 0.3 |
| JKNet | 80.6 ± 0.5 | 69.6 ± 0.2 | 77.8 ± 0.3 |
| JKNet(Drop) | 83.0 ± 0.3 | 72.2 ± 0.7 | 78.9 ± 0.4 |
| Incep(Drop) | 83.0 ± 0.5 | 72.3 ± 0.4 | 79.3 ± 0.3 |
| DAGNN | 84.2 ± 0.5 | 73.3 ± 0.6 | 80.3 ± 0.4 |
| GCNII* | **85.3 ± 0.2** | 73.2 ± 0.8 | 80.3 ± 0.4 |
| L2S | 84.9 ± 0.3 | 74.2 ± 0.5 | 80.2 ± 0.5 |
| L2Q | 85.2 ± 0.5 | **74.6 ± 0.4** | **80.4 ± 0.4** |

Table 1. Summary of results on homophily graphs. Note our results can be easily improved by using a more complex backbone. For example, by using GCNII* as our backbone, L2S can achieve 85.6 ± 0.2 on Cora and 80.9 ± 0.3 on PubMed.

**Performance on heterophily graphs.** Besides the previously mentioned baselines, we also compare our methods with three variants of Geom-GCN (Wang et al., 2019): Geom-GCN-I, Geom-GCN-P, and Geom-GCN-S. Table 2 reports the results. (1) We observe that L2S and L2Q outperform the APPNP backbone on all four heterophily graphs, which indicates our framework still works well on heterophily graphs. (2) L2S and L2Q consistently improve over GCNII* by a large margin and achieve new state-of-the-art results on the four heterophily graphs. (3) The improvement on heterophily graphs is usually larger than that on homophily graphs (Table 1). This is because the neighborhood information is noisy, and aggregating it may hurt performance for GCNII*. In contrast, our L2S and L2Q adaptively learn the propagation process and can avoid utilizing structure information that may not be helpful for heterophily graphs.

| **Method** | **Actor** | **Cornell** | **Texas** | **Wisconsin** |
| --- | --- | --- | --- | --- |
| GCN | 26.86 | 52.71 | 52.16 | 45.88 |
| GAT | 28.45 | 54.32 | 58.38 | 49.41 |
| Geom-GCN-I | 29.09 | 56.76 | 57.58 | 58.24 |
| Geom-GCN-P | 31.63 | 60.81 | 67.57 | 64.12 |
| Geom-GCN-S | 30.30 | 55.68 | 59.73 | 56.67 |
| APPNP | 32.41 | 73.51 | 65.41 | 69.02 |
| JKNet | 27.41 | 57.30 | 56.49 | 48.82 |
| JKNet(Drop) | 29.21 | 61.08 | 57.30 | 50.59 |
| Incep(Drop) | 30.13 | 61.62 | 57.84 | 50.20 |
| GCNII* | 35.18 | 76.49 | 77.84 | 81.57 |
| L2S | 36.58 | 80.54 | 84.12 | 84.31 |
| L2Q | **36.97** | **81.08** | **84.56** | **84.70** |

Table 2. Node classification accuracy on heterophily graphs.
### RQ2. Alleviating Over-smoothing

To answer RQ2, we compare the classification accuracy of different methods at increasing network depths. Our methods achieve the best accuracy at depths beyond 2, which again verifies the impact of L2P on formulating graph neural networks. Notably, our methods achieve the best performance as we increase the network depth to 64, and their results remain stable when stacking many layers. On the other hand, the performance of GCN with DropEdge and JKNet drops rapidly as the number of layers exceeds 32, which indicates that they suffer from over-smoothing. This phenomenon suggests that, with an adaptive and personalized message propagation strategy, L2P can effectively resolve the over-smoothing problem and achieve better performance.
### RQ3. The Effect of Learning Algorithms
To answer RQ3, we first compare the performance of alternate expectation maximization (EM) and iterative variational inference (VI). From Figure 4, we find that our methods with either learning algorithm achieve better performance than the best results of APPNP, which verifies the effectiveness of our learning algorithms. In general, iterative VI achieves better performance than the EM algorithm. We then analyze the model loss of stochastic bi-level variational inference with the _training_ (we optimize \(\phi\) simultaneously with \(\theta\) on training data without validation), _first-order_, and _second-order_ variants. Figure 3 shows the learning curves of training loss and validation loss of L2Q on the Texas and PubMed datasets. We observe that the _training_ variant gets stuck in over-fitting, attaining low training loss but high validation loss. The gap between training and validation losses is much smaller for the first-order and second-order variants. This demonstrates that the bi-level optimization can significantly improve generalization and that the first-order approximation is sufficient to prevent over-fitting.
### RQ4. Adaptive Propagation Strategies.
One of the most important properties of our framework is that the learned propagation strategy is interpretable and differs across types of graphs and nodes. Thus, in this subsection, we investigate whether the proposed framework can learn adaptive propagation strategies, which aims to answer RQ4. We visualize the average propagation distribution (by averaging the propagation distributions of all nodes) for seven graphs learned by L2S and L2Q with K=16 in Figure 5. The darkness of a step represents the probability that the step is selected for propagation. From Figure 5, we find that (1) different types of graphs exhibit different propagation distributions, although the pre-defined step is 16 for all of them. For instance, the 0-th step probability in heterophily graphs is much larger than that in homophily graphs. This is because the feature information in those heterophily graphs is much more important than the structure information. (2) The propagation distribution learned by L2Q is much sparser, and the layers on the tail are less likely to be sampled. In Figure 6, we also provide the correlation, i.e., the cosine similarity, of the learned propagation distributions of different graphs. We clearly observe that the correlations between graphs of the same type are large while the correlation between homophily and heterophily graphs is small, which meets our expectation that similar types of graphs should generally have similar propagation strategies.
### RQ5. Personalized Propagation Strategies
To evaluate whether L2P can learn good personalized and interpretable propagation for RQ5, we study the propagation strategy of individual nodes. Figures 7 and 8 show case studies of personalized propagation on homophily and heterophily graphs. In these figures, we plot the 3-hop neighborhood of each test node and use different colors to indicate different labels. We find that a test node with more same-class neighbors tends to propagate fewer steps. In contrast, a test node with fewer same-class neighbors will probably need more propagation steps to correctly predict its label. This observation matches our intuition that different nodes need different propagation strategies: the prediction for a node becomes confused if it takes too many propagation steps, so it cannot benefit much from further message propagation. Additionally, our framework successfully identifies the propagation steps that are important for predicting the class of nodes on both homophily and heterophily graphs, yielding a more interpretable prediction process.
Figure 7. Case studies of the personalized propagation on two homophily datasets. The bigger node in each sub-graph is the test node. The propagation distributions learned by L2Q for the test nodes are visualized with heatmaps (bottom).

Figure 8. Case studies of the personalized propagation on two heterophily datasets. The bigger node in each sub-graph is the test node. The propagation distributions learned by L2Q for the test nodes are visualized with heatmaps (bottom).
## 7. Conclusion
In this paper, we study the problem of learning the propagation strategy in GNNs. We propose learning to propagate (L2P), a general framework to address this problem. Specifically, we introduce the optimal propagation steps as latent variables to help find the maximum-likelihood estimation of the GNN parameters and infer the optimal propagation step for each node via the VEM. Furthermore, we propose L2S and L2Q, two instances to parameterize the variational propagation distribution and frame the variational inference process as a bi-level optimization problem to alleviate the over-fitting problem. Extensive experiments demonstrate that our L2P can achieve state-of-the-art performance on seven benchmark datasets and adaptively capture the personalized and interpretable propagation strategies of different nodes and various graphs.
###### Acknowledgements.
The authors would like to thank the Westlake University and Bright Dream Robotics Joint Institute for the funding support. Suhang Wang is supported by the National Science Foundation under grant number IIS-1909702, IIS1955851, and Army Research Office (ARO) under grant number W911NF-21-1-0198.
|
2305.19170 | Forward-Forward Training of an Optical Neural Network | Neural networks (NN) have demonstrated remarkable capabilities in various
tasks, but their computation-intensive nature demands faster and more
energy-efficient hardware implementations. Optics-based platforms, using
technologies such as silicon photonics and spatial light modulators, offer
promising avenues for achieving this goal. However, training multiple trainable
layers in tandem with these physical systems poses challenges, as they are
difficult to fully characterize and describe with differentiable functions,
hindering the use of error backpropagation algorithm. The recently introduced
Forward-Forward Algorithm (FFA) eliminates the need for perfect
characterization of the learning system and shows promise for efficient
training with large numbers of programmable parameters. The FFA does not
require backpropagating an error signal to update the weights, rather the
weights are updated by only sending information in one direction. The local
loss function for each set of trainable weights enables low-power analog
hardware implementations without resorting to metaheuristic algorithms or
reinforcement learning. In this paper, we present an experiment utilizing
multimode nonlinear wave propagation in an optical fiber demonstrating the
feasibility of the FFA approach using an optical system. The results show that
incorporating optical transforms in multilayer NN architectures trained with
the FFA, can lead to performance improvements, even with a relatively small
number of trainable weights. The proposed method offers a new path to the
challenge of training optical NNs and provides insights into leveraging
physical transformations for enhancing NN performance. | Ilker Oguz, Junjie Ke, Qifei Wang, Feng Yang, Mustafa Yildirim, Niyazi Ulas Dinc, Jih-Liang Hsieh, Christophe Moser, Demetri Psaltis | 2023-05-30T16:15:57Z | http://arxiv.org/abs/2305.19170v2 | # Forward-Forward Training of an Optical Neural Network
###### Abstract
Neural networks (NN) have demonstrated remarkable capabilities in various tasks, but their computation-intensive nature demands faster and more energy-efficient hardware implementations. Optics-based platforms, using technologies such as silicon photonics and spatial light modulators, offer promising avenues for achieving this goal. However, training multiple trainable layers in tandem with these physical systems poses challenges, as they are difficult to fully characterize and describe with differentiable functions, hindering the use of error backpropagation algorithm. The recently introduced Forward-Forward Algorithm (FFA) eliminates the need for perfect characterization of the learning system and shows promise for efficient training with large numbers of programmable parameters. The FFA does not require backpropagating an error signal to update the weights, rather the weights are updated by only sending information in one direction. The local loss function for each set of trainable weights enables low-power analog hardware implementations without resorting to metaheuristic algorithms or reinforcement learning. In this paper, we present an experiment utilizing multimode nonlinear wave propagation in an optical fiber demonstrating the feasibility of the FFA approach using an optical system. The results show that incorporating optical transforms in multilayer NN architectures trained with the FFA, can lead to performance improvements, even with a relatively small number of trainable weights. The proposed method offers a new path to the challenge of training optical NNs and provides insights into leveraging physical transformations for enhancing NN performance.
## Main Text
Neural networks (NN) are among the most powerful algorithms today. By learning from immense databases, these computational architectures can accomplish a wide variety of sophisticated tasks [1]. These tasks include understanding languages, translating between them [2], creating realistic images from verbal prompts [3], and estimating protein structures from genetic code [4]. Given the significant potential impact of NNs on various areas, their computation-intensive nature necessitates faster and more energy-efficient hardware implementations for NNs to become ubiquitous.
With its intrinsic parallelism, high number of degrees of freedom and low-loss information transfer capability, optics offer different approaches for the realization of a new generation of NN hardware. Silicon-photonics-based modulator meshes have been demonstrated to be capable of performing tasks that form the building blocks of NNs, such as linear matrix operations [5] or pointwise nonlinear functions [6]. In addition to the chip-based platforms, two-dimensional spatial light modulators can fully exploit optics' 3D scalability, as free-space propagation provides connectivity between each location on the modulator [7, 8]. Another approach, reservoir computing, capitalizes on the complex
interactions of various optical phenomena to make inferences by training a single readout layer to map the state of the physical system [9, 10, 11]. However, in order to achieve the state-of-the-art performance in sophisticated tasks, training of multiple trainable layers is generally required.
Within the framework of NNs, training parameters before a physical layer constitutes a challenge because complex physical systems are difficult to characterize or to describe analytically. Without a fully known and differentiable function to represent the optical system, the error backpropagation (EBP) algorithm, which trains most conventional NNs, cannot be used. One solution to this problem utilizes a different NN in the digital domain (a digital twin) to model the optical system. Error gradients of layers preceding the physical system are approximated with the digital twin during training [12]. However, this method requires a separate experimental characterization phase before training the main NN, which introduces a computational overhead that may be substantial depending on the complexity of the physical system. Another approach resorts to metaheuristic methods by only observing the dependency of the training performance on the values of programmable weights, without modeling the input-output relation of the physical system [13]. The computational complexity of the training is much smaller in this case, but the method does not scale well to a large number of parameters.
The Forward-Forward Algorithm (FFA) defines a local loss function for each set of trainable weights, thereby eliminating the need for EBP and perfect characterization of the learning system while scaling efficiently to large numbers of programmable parameters [14]. With this approach the error at the output of the network does not need to be backpropagated to every layer. The local loss function is defined as the goodness metric \(L_{goodness}(y)=\sigma(\sum_{j}\,y_{j}^{2}-\theta)\), where \(\sigma(x)\) is the sigmoid nonlinearity function, \(y_{j}\) is the activation of the \(j\)-th neuron for a given sample and \(\theta\) is the threshold level of the metric. The goal for each trainable layer is to increase \(L_{goodness}\) for positive samples and decrease it for negative samples. For a classification problem, such as the MNIST-digits dataset, the positive samples are created by designating an area within the input pattern for encoding the class of the image. Similarly, a negative sample is created by marking the designated area with a pattern corresponding to a different (incorrect) class. The difference between the squared sums of activations of positive and negative samples is balanced between trainable layers with a normalization step, to prevent each layer from learning only one of the two representations.
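To make the layer-local update concrete, the following is a minimal sketch of one FFA-trained layer in TensorFlow. The dense layer, the threshold value \(\theta=2.0\), and the softplus form of the loss (equivalent to \(-\log\sigma(\cdot)\)) are our illustrative choices, not the exact configuration used in this work.

```python
import tensorflow as tf

class FFLayer(tf.keras.layers.Layer):
    """One layer trained locally with the Forward-Forward goodness objective."""
    def __init__(self, units, theta=2.0, lr=1e-3):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation="relu")
        self.theta = theta  # goodness threshold (illustrative value)
        self.opt = tf.keras.optimizers.Adam(lr)

    def call(self, x):
        # Normalize so the goodness of the previous layer cannot leak through
        x = x / (tf.norm(x, axis=1, keepdims=True) + 1e-8)
        return self.dense(x)

    def train_step(self, x_pos, x_neg):
        with tf.GradientTape() as tape:
            g_pos = tf.reduce_sum(self.call(x_pos) ** 2, axis=1)
            g_neg = tf.reduce_sum(self.call(x_neg) ** 2, axis=1)
            # -log sigmoid(g_pos - theta): raise goodness of positive samples;
            # -log sigmoid(theta - g_neg): lower goodness of negative samples.
            loss = tf.reduce_mean(tf.math.softplus(-(g_pos - self.theta))) + \
                   tf.reduce_mean(tf.math.softplus(g_neg - self.theta))
        grads = tape.gradient(loss, self.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.trainable_variables))
        return float(loss)  # no gradient flows to other layers: training is local
```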
The local FFA training scheme enables the use of low-power analog hardware for NNs because, unlike EBP, FFA does not require direct access to or modeling of the weights of each layer in the NN. In our study, we explore this potential of the FFA and experimentally demonstrate that a complex nonlinear optical transform such as nonlinear propagation in a MMF can be incorporated into a NN to improve its performance.
The optical apparatus we used to implement the network which we trained with the FFA is shown in Figure 1. We use multimode nonlinear wave propagation of spatially modulated laser pulses in an optical fiber. Even though the proposed training method is suitable for virtually any system capable of high-dimensional nonlinear interactions, this experiment is selected as the demonstration setup due to its remarkable ability to provide these effects with very low power consumption (6.3 mW average power, 50 nJ per pulse). The propagation of 10 ps long mode-locked laser (Amplitude Laser, Satsuma) pulses with 1030 nm wavelength in a confined area (diameter of 50 \(\mu\)m) for a long distance (5 m) provides nonlinear interactions between 240 spatial eigenchannels of the MMF (OFS, bend-insensitive OM2, 0.20 NA) using only 50 nJ of pulse energy.
Before coupling light pulses to the MMF, their spatial phase is modulated with input data by a phase-only two-dimensional spatial light modulator (Meadowlark HSP1920-600-1300). The input laser
beam, approximated as a Gaussian profile \(E_{input}(x,y)=E_{0}\exp\left(-\frac{(x^{2}+y^{2})}{w_{0}^{2}}\right)\), is phase modulated by the SLM. The modulated light beam can be written as \(E_{modulated}(x,y)=E_{0}\exp\left(-\frac{\left(x^{2}+y^{2}\right)}{w_{0}^{2}}\right)\exp\left(\mathrm{i}\;\mathrm{D}(\mathrm{x},\mathrm{y})\right)\), where \(\mathrm{D}(\mathrm{x},\mathrm{y})\) is the data transferred from the digital domain to the optical system, mapped to the range \(0\) to \(2\pi\), \(w_{0}\) is the beam waist size, and \(E_{0}\) is the input field. The modulated beam is coupled to the MMF with a plano-convex lens. The output of the MMF is collimated with a lens and its diffraction off a dispersion grating (Thorlabs GR25-0610) is recorded with the camera (FLIR BFS-U3-3154M-C). As the diffraction angle depends on the wavelength, the dispersion grating enables the camera to capture information about the spectral changes in addition to the spatial changes due to the nonlinearities inside the MMF.
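For illustration, the modulated field above can be evaluated numerically as follows; the grid, the beam waist \(w_{0}\), and the random phase pattern standing in for the SLM data \(\mathrm{D}(x,y)\) are arbitrary choices for this sketch, not the experimental values.

```python
import numpy as np

# Illustrative evaluation of E_modulated(x, y) on a square grid.
N, w0, E0 = 256, 1.0, 1.0
x = np.linspace(-2.0, 2.0, N)
X, Y = np.meshgrid(x, x)
# Random phase pattern in [0, 2*pi) standing in for the data map D(x, y)
D = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=(N, N))
E_mod = E0 * np.exp(-(X**2 + Y**2) / w0**2) * np.exp(1j * D)
```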
The linear and nonlinear optical interactions in the MMF can be simplified as follows by the multimode nonlinear Schrödinger equation, expressed in terms of the coefficients of the propagation modes (\(A_{p}\)) of the MMF:
\[\frac{\partial A_{p}}{\partial z}=\underbrace{i\delta\beta_{0}^{p}A_{p}-\delta \beta_{1}^{p}\frac{\partial A_{p}}{\partial t}-i\frac{\beta_{2}^{p}}{2}\frac{ \partial^{2}A_{p}}{\partial t^{2}}}_{\text{Dispersion}}+\underbrace{i\sum_{n} C_{p,n}A_{n}}_{\text{Linear mode coupling}}+i\underbrace{\frac{n_{2}\omega_{0}}{A}\sum_{l,m,n}\eta_{p,l,m,n}A_{l}A_{m}A_{n}^{*}}_{ \text{Nonlinear mode coupling}}\]
where \(\beta_{n}\) is the n-th order propagation constant, C is the linear coupling matrix, \(n_{2}\) is the nonlinearity coefficient of the core material, \(\omega_{0}\) is the center angular frequency, A is the core area and \(\eta\) is the nonlinear coupling tensor.
Figure 1: The schematic of the experimental setup used for obtaining nonlinear optical information transform.
This equation delineates the nature of interactions obtained with the proposed experiment. In addition to linear coupling, the nonlinear coupling is provided with the multiplication of three different mode coefficients, demonstrating the high-dimensional complexity of the optical interactions.
We evaluated the effectiveness of the proposed approach by constructing a network to implement the MNIST handwritten digits classification task [16]. Due to speed and memory limitations, we randomly selected 4000 samples from the dataset for training, while the validation and test sets were allocated 1000 samples each. The architecture of our neural network is shown in Fig. 2. Figure 2a shows a fully digital implementation of a multi-layer network trained with EBP, while Figure 2b shows a fully digital implementation trained with the FFA. Finally, Figure 2c includes the optical layers, which are trained with the FFA. In all three cases, each layer has a similar number of trainable parameters and is trained on the same subset of samples with 32x32 resolution. In our implementation, all three NNs start with convolutional layers (2 or 3) followed by a fully connected (FC) output layer of 10 neurons. In the networks shown in Fig. 2b and 2c, the output layer is trained with the Ridge classifier algorithm from the scikit-learn library in Python, since this algorithm allows for faster training with a single step of singular value decomposition. The first two trainable layers of the NN trained by EBP (Fig. 2a) and all trainable layers of the FFA-trained NNs (Fig. 2b and 2c) except their output layers are convolutional layers, each with one trainable kernel sized 16x16 with ReLU nonlinearity. Layer normalization operations are used as part of the FFA; they scale the activations of each sample so that the vector of activations has an L2 norm equal to 1. In the optical transform steps in Fig. 2c, these vectors of activations individually modulate the beam phase distribution as 2-dimensional arrays, and the corresponding beam output patterns are recorded by the camera and transferred to the next trainable layer.

Figure 2: Different NN architectures compared in the study. (a) A conventional NN trained with error backpropagation. Blue arrows show the information flow in the forward (inference) mode and green lines indicate the training. (b) Diagram of a fully digital NN trained with the FF algorithm. Layers are trained locally with the goodness function. The activations of the trainable layers, except the first one, are used by a separate output layer. (c) Our proposed method also includes optical information transformations between each trainable block. The activations reach the output layer after the optical transformations.
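As a sketch of this readout stage, the output layer can be fit with scikit-learn as below; the stand-in feature arrays are random placeholders for the concatenated activations of the FFA-trained layers, and `alpha` is the regularization strength scanned in Fig. 3.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

# Random placeholders for the real FFA-layer activations and labels.
rng = np.random.default_rng(0)
features_train, labels_train = rng.normal(size=(4000, 1024)), rng.integers(0, 10, 4000)
features_test, labels_test = rng.normal(size=(1000, 1024)), rng.integers(0, 10, 1000)

clf = RidgeClassifier(alpha=1.0)  # alpha: regularization strength of Fig. 3
clf.fit(features_train, labels_train)
print("test accuracy:", clf.score(features_test, labels_test))
```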
The performances obtained on the test set are shown in Table 1 and confirm the findings in [14] that the performance of a NN trained with the FFA decreases compared to that of an EBP-trained network. For the given dataset, the digitally EBP-trained NN (Fig. 2a), consisting of 2 convolutional and 1 FC layer, achieves 92.4% test accuracy while its FFA-trained counterpart reaches 90.4% test accuracy. On the other hand, the addition of the high-dimensional nonlinear mapping improves the performance of the 2-convolutional-layer NN to 92.7% without any increase in trainable weights. Also, the fact that LeNet-5 [16] achieves only 95.0% test accuracy when trained with EBP on the same subset of MNIST digits (instead of the ~1% error rate on the full dataset of 60,000 samples) shows the potential of our proposed approach to reach higher accuracies with larger datasets.

Figure 3: Comparison between classification performances of NNs with and without optical transform on a subset of the MNIST-digits dataset (a) The dependence of training and test accuracies on the Ridge classifier regularization strength without the optical transform. (b) Confusion matrix of the test set when the optimum regularization parameter in (a) is used. (c) The dependence of training and test accuracies on the Ridge classifier regularization strength with the optical transform. (d) Confusion matrix of the test set when the optimum regularization parameter in (c) is used.
The improvement in the performance of the NN with the addition of optical transforms is shown in more detail in Figure 3. The accuracy obtained is plotted as a function of the strength of the regularization term used in the training. The optical NN performs better with a stronger regularization, indicating a higher number of effective features provided by the optical transform. When the optical nonlinear connectivity is combined with the relatively small number of trainable weights in the convolutional layers, both the training and test accuracy on the subset of the MNIST digits improve (Fig. 3c). This improvement is also observed in the class-wise accuracies in the confusion matrix: the correct inference ratio increases for nearly all classes with the optical transform.
Even though the FFA simplifies NN training and decreases memory usage by decoupling the weight updates of different layers, benchmarks show that task performance tends to decrease compared to training the same architecture with EBP. With this study, we demonstrate that by adding non-trainable nonlinear mappings to the architecture, this decrease can be reversed and performance can even be improved.
To make use of optical or any other type of physical transform with a gradient-based or metaheuristic training algorithm, the transform must be applied to each sample multiple times through the epochs or iterations of the algorithm. The FFA makes physical systems more accessible to NNs by removing this requirement. With the implementation presented in this letter, the physical transform is applied to the data representation only once after training each layer, and the next layer is trained with the transformed representation. This advancement could address one of the biggest bottlenecks in training optical NNs, given the limited modulation speed of electro-optic conversion devices.
|
2307.09268 | Neural network time-series classifiers for gravitational-wave searches
in single-detector periods | The search for gravitational-wave signals is limited by non-Gaussian
transient noises that mimic astrophysical signals. Temporal coincidence between
two or more detectors is used to mitigate contamination by these instrumental
glitches. However, when a single detector is in operation, coincidence is
impossible, and other strategies have to be used. We explore the possibility of
using neural network classifiers and present the results obtained with three
types of architectures: convolutional neural network, temporal convolutional
network, and inception time. The last two architectures are specifically
designed to process time-series data. The classifiers are trained on a month of
data from the LIGO Livingston detector during the first observing run (O1) to
identify data segments that include the signature of a binary black hole
merger. Their performances are assessed and compared. We then apply trained
classifiers to the remaining three months of O1 data, focusing specifically on
single-detector times. The most promising candidate from our search is
2016-01-04 12:24:17 UTC. Although we are not able to constrain the significance
of this event to the level conventionally followed in gravitational-wave
searches, we show that the signal is compatible with the merger of two black
holes with masses $m_1 = 50.7^{+10.4}_{-8.9}\,M_{\odot}$ and $m_2 =
24.4^{+20.2}_{-9.3}\,M_{\odot}$ at the luminosity distance of $d_L =
564^{+812}_{-338}\,\mathrm{Mpc}$. | A. Trovato, É. Chassande-Mottin, M. Bejger, R. Flamary, N. Courty | 2023-07-18T13:50:23Z | http://arxiv.org/abs/2307.09268v1 | # Neural network time-series classifiers for gravitational-wave searches in single-detector periods
###### Abstract
The search for gravitational-wave signals is limited by non-Gaussian transient noises that mimic astrophysical signals. Temporal coincidence between two or more detectors is used to mitigate contamination by these instrumental glitches. However, when a single detector is in operation, coincidence is impossible, and other strategies have to be used. We explore the possibility of using neural network classifiers and present the results obtained with three types of architectures: convolutional neural network, temporal convolutional network, and inception time. The last two architectures are specifically designed to process time-series data. The classifiers are trained on a month of data from the LIGO Livingston detector during the first observing run (O1) to identify data segments that include the signature of a binary black hole merger. Their performances are assessed and compared. We then apply trained classifiers to the remaining three months of O1 data, focusing specifically on single-detector times. The most promising candidate from our search is 2016-01-04 12:24:17 UTC. Although we are not able to constrain the significance of this event to the level conventionally followed in gravitational-wave searches, we show that the signal is compatible with the merger of two black holes with masses \(m_{1}=50.7^{+10.4}_{-8.9}\,M_{\odot}\) and \(m_{2}=24.4^{+20.2}_{-9.3}\,M_{\odot}\) at the luminosity distance of \(d_{L}=564^{+812}_{-338}\,\)Mpc.
## 1 Introduction
The breakthrough discovery of gravitational waves (GW) on September 14, 2015 [1], announced by the LIGO Scientific Collaboration [2] and the Virgo Collaboration [3], opened the era of GW astronomy. The detection happened during the first observing run (O1) of the LIGO detectors. With the subsequent observing runs, O2 and O3, performed jointly with Virgo, the list of detected GW signals has grown to 90 events. While the detected sources are mainly associated with the merger of binary black holes (BBH), they also include binary systems with neutron stars [4, 5, 6, 7]. These detections are collected and characterized in the GW transient catalogs GWTC [8, 9, 10, 11]. In May 2023 the fourth observing run (O4) started with increased detector sensitivity and, consequently, an enhanced expected rate of detections.
GW transient signals are detected in the data by a variety of data analysis pipelines, see e.g. [11] for a recent review. In particular, matched filtering [12] is a prominent technique to search for signals when an accurate waveform model is available, as in the case of compact star binary mergers. Algorithmically, this consists of correlating the data with a large set of template waveform models (the "template bank") that are representative of all the morphologies the expected signal can possibly take.
To make robust detection statements, those pipelines have to address a major difficulty: the presence in the data of short-duration noise artefacts, often called "instrumental glitches" [13, 14], that can mimic the GW signal [15, 16]. A very powerful tool to discriminate the signal from noise glitches is time coincidence across two or more separate detectors (see [17] for a discussion on multi-detector noise rejection techniques).
Obviously, coincidence cannot be used during periods when only one detector operates. During the O1 and O2 observing runs, single-detector periods amounted to about 30% of the observation time [18, 19]. During O3, thanks to a more stable and reliable operation, this fraction was reduced to about 15% in O3a [20] and 11% in O3b [21] (the first and second six-month parts of O3). In total, more than five months of observing time fall into this category so far. The recently initiated O4 science run may also include long periods of single-detector time.
The lack of coincidence makes it difficult to disentangle the signal from glitches and to measure the statistical significance of a trigger to high confidence levels. Several studies investigate ways to resolve these difficulties. Two methods [22, 23] that allow the identification of gravitational-wave candidates in single-detector data have been employed in production in the context of low-latency gravitational-wave searches [24], enabling the initial identification of GW170817 and GW190425. Similarly, Ref. [25] introduces a framework for assigning significance to single-detector gravitational-wave events by leveraging the measured rate of binary black hole mergers. More recently, Ref. [26] studies the possibility of extending the multi-variate likelihood-ratio statistics used by the GstLAL pipeline to generate single-detector events. The likelihood estimation has recently been updated in view of the O4 run [27], and one of the improvements is the addition of a tuneable penalty in the case of single-detector candidates to down-weight their significance [28]. To extrapolate the significance measure of single-detector triggers produced by the PyCBC pipeline [29], a method proposed in [30] makes it possible to recover loud signals in single-detector data. In both cases, it is shown that the search sensitivity is significantly reduced compared to multi-detector searches.
Despite those developments, single-detector periods have received less attention than the rest of the observations and are covered in a few studies. Following a "multi-messenger" approach, several works looked for coincidences between data from a solitary gravitational-wave detector with gamma-ray observations from the Fermi Gamma-ray Burst Monitor [31, 32, 33]. Three searches for binary mergers in single-detector periods relied on gravitational-wave data only. Ref. [34] presents a search which specifically targets a narrow range of low masses motivated by the population of known double neutron-star binaries. Two contributions present the results of searches for binary mergers over the entire range from 1 to 100 \(M_{\odot}\) for the component masses, for the observing runs O1 and O2 [35] and for O3 [30]. The former finds two candidate events observed in single detector periods: 2015-12-25 04:11:44 UTC with the LIGO Hanford detector and 2016-01-04 12:24:17 UTC with the LIGO Livingston detector. The first candidate event has a low significance with a probability of astrophysical origin [36]\(p_{\rm astro}=0.12\), while the second has a larger significance \(p_{\rm astro}=0.47\). However, for this event, an excess power observed in the residual after subtraction of the best-fit waveform from the data suggests this event may not be of astrophysical origin, and is thus discarded.
Glitches of different types vary widely in duration, frequency range and morphology. It is difficult to construct a statistical model able to capture the overall complexity of the glitch populations. Their complex and time-evolving nature makes glitch identification and rejection a natural use case for machine learning (ML). In principle, this approach makes it possible to train a classifier able to distinguish between different types of input (glitches versus real GW signals in our case), and thus to learn a possibly very complex and high-dimensional statistical model from a set of examples.
As in many scientific fields, the use of ML has recently gained in popularity in the context of GW astronomy. There is a fairly large body of works pertaining to various aspects ranging from denoising, glitch classification and cancellation, waveform modelling, searches for GW signals, astrophysical parameter estimation, population studies (see e.g. [37, 38] for recent reviews).
In the context of GW signal searches, convolutional neural networks (CNN) [39] have been investigated to detect BBH signals for both single- and multi-detector cases [40, 41, 42, 43, 44, 45]. The primary motivation put forward in those contributions is the computational gains expected from the use of CNNs compared to matched filtering techniques.
So far a large fraction of those investigations use simulated Gaussian noise [40, 42, 43, 45]. In this case, it is not possible to learn the non-Gaussian component of the instrumental noise. Few studies use real GW data including glitches [41, 44]. The classifiers obtained in those contributions are limited to false positive probability (i.e., noise or glitches classified as signal) of about 1%. This corresponds to a false alarm
rate of once every 40 minutes, which is not sufficient in practice. A recent review [46] compares different approaches on a mock data challenge.
The purpose of this study is to enhance the ability of neural network based searches to reject noise artifacts and improve their sensitivity, with a particular focus on analyzing data from a single detector. The goal is to achieve a false alarm rate similar to that of current online searches performed by the LIGO-Virgo-KAGRA collaboration (LVK), i.e. two false alarms per day [47]. We explore various network architectures, particularly those designed for time series classification [48, 49].
We trained and tested neural network classifiers using a dataset produced from one month of O1 data collected by the LIGO Livingston detector, during which no GW signals were detected using the matched filtering based searches.
Section 2 provides details on how the training and testing sets are generated, while Sec. 3 describes the structure of the various neural network classifiers being considered. The performance and efficiency of the classifiers are assessed using testing data, and the results are presented in Sec. 4. We applied these classifiers to the remaining three months of O1 data, including segments associated with the three GW events detected during O1. Sec. 5 summarizes the results of this analysis. We checked the classifiers' response on the known events detected during O1. A particular focus is then given to the single-detector times. Interestingly, we found that only one data segment was classified as "signal" by all three classifiers we considered. This event coincides with the single-detector event found by [35] in the LIGO Livingston data, as mentioned above, which was downgraded by the same study as a noise artifact. The additional checks we conducted on this event led us to a different conclusion, as they confirmed its compatibility with an astrophysical origin. Finally, Sec. 6 concludes on the applicability of the proposed methodology.
## 2 Generation of datasets for training and testing
The typical approach for applying ML methods to GW detection is to treat it as a classification problem, see e.g., [40, 41, 42, 43, 45]. In this approach, we aim to determine whether a given segment of GW strain data of fixed duration contains an astrophysical signal or not. This problem can be solved by developing an ML-based classifier that is trained using example data. We produce training data labeled as follows:
* noise: the data are compatible with stationary background noise, i.e., are free of transient instrumental artifacts (glitches) or known GW events,
* glitch: the data include one or several transient instrumental artifacts (glitches),
* signal: the data include a (simulated) astrophysical signal, added to the stationary background noise.
This three-class approach differs from other contributions in the literature, which consider only two classes. The presence of glitches is known to significantly alter the statistical distribution of the data. The idea is that assigning a specific label to data segments containing glitches may help the classifier achieve improved performance. Furthermore, the relative significance assigned to each class could offer valuable information when evaluating the contents of a given segment.
Training and testing data are extracted from the dataset of the observing run O1, which was publicly released via the Gravitational Wave Open Science Center (GWOSC) [50]. Specifically, we utilize the data from the LIGO Livingston detector spanning one month between November 25, 2015 (GPS time 1132444817) and December 25, 2015 (GPS time 1135036817). Throughout this duration, no GW signals were detected by the standard search pipelines.
In this period the available L1 data amounts in total to about 13.3 days (1,147,457 s), of which 3.6 days (312,284 s) were in single-detector time, i.e. 27% of the time.
The raw data are sampled at 16 kHz. We have downsampled the data to 2048 Hz,2 bandpass-filtered between 20 Hz and 1 kHz and whitened by applying the inverse amplitude spectral density (ASD) in the frequency domain.3 The ASD is estimated over stretches of variable length, depending on the duration of uninterrupted data-taking periods (minimum duration is 37 s and maximum is 100,573 s). The data are divided into one-second non-overlapping segments.
Footnote 2: The method signal.decimate of the software package Scipy[51] is used to downsample.
Footnote 3: For the preparation of the training and testing data, we acknowledge the use of the following software packages: GWpy[52], PyCBC[29] and LALSuite[53].
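A minimal sketch of this preprocessing chain is given below, assuming raw input sampled at 16384 Hz; the band-pass filter order and the Welch segment length are illustrative choices, and the paper estimates the ASD over much longer, uninterrupted stretches than this sketch does.

```python
import numpy as np
from scipy import signal

def preprocess(strain, fs_in=16384, fs_out=2048, f_lo=20.0, f_hi=1000.0):
    """Sketch of the chain: downsample, band-pass, whiten."""
    # Downsample with scipy.signal.decimate, as in footnote 2
    x = signal.decimate(strain, fs_in // fs_out, ftype="fir")
    # Band-pass between 20 Hz and 1 kHz
    sos = signal.butter(8, [f_lo, f_hi], btype="bandpass", fs=fs_out,
                        output="sos")
    x = signal.sosfiltfilt(sos, x)
    # Whiten by dividing out the ASD in the frequency domain
    # (up to an overall normalization)
    freqs, psd = signal.welch(x, fs=fs_out, nperseg=4 * fs_out)
    asd = np.sqrt(np.interp(np.fft.rfftfreq(x.size, 1.0 / fs_out),
                            freqs, psd))
    return np.fft.irfft(np.fft.rfft(x) / asd, n=x.size)
```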
The data are distributed into the three classes introduced above as explained in the next sections. Representative instances of the three classes are shown in Fig. 1.
### The noise class
The noise class corresponds to segments that are free of known GW signals, glitches (see next section) or hardware injections.4 All segments in the dataset passed the first criterion, as no GW signals were confidently detected by standard pipelines over the selected period.5 Overall, there is a total of 750,000 noise samples in the one-month O1 dataset.
Footnote 4: During O1, hardware injections, which are simulated signals created by manipulating mirrors in the arms of the interferometers, were added to the LIGO detectors for testing and calibration. See [https://www.gw-openscience.org/o1_inj](https://www.gw-openscience.org/o1_inj)
Footnote 5: This implies that the noise label is essentially determined by the sensitivity limit of the matched-filtering based searches.
### The glitch class
A database of glitches is created using two different sources: the unmodeled transient search _coherent WaveBurst_ (cWB) [54, 55] and the citizen science project Gravity Spy[56].
The cWB pipeline is an open-source software package designed to search for a wide range of GW transients without prior knowledge of the signal waveform. To evaluate the analysis background, cWB uses a resampling technique [54] that involves applying
non-physical time shifts to the data before analysis. Loud, i.e., high signal-to-noise ratio (SNR), background triggers resulting from this procedure are good candidates for glitches. The loudest triggers in LIGO Livingston with an SNR higher than 5.8 were selected (258,480 glitches). This list was complemented with the Gravity Spy database (13,144 glitches). The timestamps and duration of the identified glitches from these two sources are collected in a single list, which is then used to label the one-second data segments from the O1 observing run. If the glitch duration is shorter than 1 second, the associated segment is labeled as a glitch. Note that the glitch has a random position within the one-second window. If the glitch duration is longer than 1 second, all segments that overlap with that glitch duration are labeled as glitch. Only the glitches whose time belongs to the data segments available on GWOSC are considered. In many cases, the glitches are closer in time than one second, so multiple glitches can fall in the same one-second segment.
From the one-month O1 data, a total of \(150,000\) segments receive the glitch label.
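The overlap rule described above can be sketched as follows, assuming a list of (start time, duration) pairs expressed in seconds relative to the beginning of the data stretch; the interface is hypothetical.

```python
def label_glitch_segments(n_segments, glitches):
    """Flag the one-second segments that overlap a known glitch.

    `glitches` holds (start_time, duration) pairs in seconds, measured from
    the beginning of the data stretch (an assumed interface for this sketch).
    """
    is_glitch = [False] * n_segments
    for start, duration in glitches:
        first = max(int(start), 0)                          # first overlapping segment
        last = min(int(start + duration), n_segments - 1)   # last overlapping segment
        for k in range(first, last + 1):
            is_glitch[k] = True
    return is_glitch
```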
### The signal class
The samples from the signal class are produced by adding simulated GW signals from BBH systems to the one-month O1 data in periods without known GW signals or hardware injections. For the training set, the data segments used to generate samples of the signal class are used for neither the noise class nor the glitch class, while for the testing set, the same data segments are used for both the noise and signal classes. To generate the astrophysical signals, the waveform model SEOBNRv4 [57] is employed, with a lower frequency cutoff of 30 Hz. The simulated signals are sampled, whitened, and band-pass filtered in the same manner as the data segments.

Figure 1: Instances of the classes noise (blue), signal (black) and glitch (green). Top (noise): one-second data segment recorded by LIGO Livingston at the GPS time 1132550972.487. Middle (signal): a simulated BBH waveform with SNR of 20 (dashed red line) is injected in the previous timeseries. Bottom (glitch): Data recorded at the GPS time 1132580628.41 which contains a low-frequency transient instrumental artifact.
The masses of the binary BH used for generating the simulated signals in the class signal are chosen to ensure that they fall within the mass range observed by the LVK and that the signals are short enough to be contained within the one-second data segments. Specifically, the component masses \(m_{1}\) and \(m_{2}\) are chosen randomly, with the constraint that \(m_{1}>m_{2}\geq 10M_{\odot}\) and the total mass \(M=m_{1}+m_{2}\) is uniformly distributed in \(33M_{\odot}\leq M\leq 60M_{\odot}\). We consider non-spinning BH, so the dimensionless spin magnitudes \(\chi_{1}\) and \(\chi_{2}\) are set to 0. The phase at coalescence and the polarization angle are drawn uniformly in \((0,2\pi)\), and the inclination angle in \((0,\pi)\). Since the focus is on a single detector, the right ascension and declination are not particularly important and are thus fixed to zero.
The amplitude of the added signals is computed such that the corresponding _optimal_ SNR \(\rho_{opt}\) is uniformly distributed between 8 and 20. Following [58], it is defined as
\[\rho_{opt}^{2}=4\int_{0}^{\infty}\frac{|\tilde{h}(f)|^{2}}{S_{n}(f)}\mathrm{d }f, \tag{1}\]
where \(\tilde{h}(f)\) denotes the Fourier transform of the template \(h(t)\) and \(S_{n}(f)\) is the power spectral density of the detector noise. To generate the signals, a fiducial luminosity distance \(d_{L}\) of 100 Mpc is initially chosen, and then scaled to obtain the desired \(\rho_{opt}\). The final values of \(d_{L}\) range from 1 to 1300 Mpc approximately.
The simulated signals are added at a random position within the segment while ensuring the chirping part of the signal is completely contained in the segment. The final part of the signal is randomly shifted between -0.25 s and 0.3 s with respect to the center of the one-second segment. A total of 750,000 signal samples are generated.
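The sampling and SNR-scaling steps can be sketched as follows; the mass sampler is one possible scheme satisfying the stated constraints (the paper does not specify the conditional distribution of \(m_1\)), and the discrete sum stands in for the integral of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_masses():
    """One possible sampler for the stated constraints:
    M = m1 + m2 uniform in [33, 60] M_sun, with m1 > m2 >= 10 M_sun."""
    M = rng.uniform(33.0, 60.0)
    m1 = rng.uniform(M / 2.0, M - 10.0)   # ensures m1 > m2 and m2 >= 10
    return m1, M - m1

def optimal_snr(h_tilde, psd, df):
    """Discrete version of Eq. (1): rho_opt^2 = 4 df sum |h(f)|^2 / S_n(f)."""
    return np.sqrt(4.0 * df * np.sum(np.abs(h_tilde) ** 2 / psd))

def rescale_distance(rho_fiducial, d_fiducial=100.0):
    """Scale the fiducial 100 Mpc injection to a target SNR drawn in [8, 20];
    the waveform amplitude (and hence rho) scales as 1/d_L."""
    rho_target = rng.uniform(8.0, 20.0)
    return d_fiducial * rho_fiducial / rho_target, rho_target
```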
Overall, the training set consists of \(250,000\) segments for the noise class, the same number for the signal class, and \(70,000\) for the glitch class. A 20% fraction of the training set is allocated for validation. The testing set, used to evaluate the classifier, comprises \(500,000\) samples for both the noise and signal classes, and \(80,000\) for the glitch class. This ensures sufficient statistical data for characterizing the classifier's performance. In total, the training and testing datasets comprise 1,650,000 one-second segments, with 45% for the noise class, 45% for the signal class, and 10% for the glitch class. This amounts to a storage space of 26 gigabytes. Out of the total number of segments, 28% is utilized for training, 7% for validation, and 65% for testing.
## 3 Classifier architectures
This section discusses the type of neural network architectures considered in this study. Similarly to other works [40, 41, 42, 43, 44, 45], the classifier is directly fed by the one-second segment
of strain time series, i.e., an input vector of size 2048. We experiment6 with three different network architectures, namely the CNN, as well as two other architectures specialised for time-series classification: Temporal Convolutional Network (TCN) [48] and Inception Time (IT) [49]. The last two, to our knowledge, have never been tested on this type of problem. The architectures are described in more detail in the following subsections. The model hyperparameters provided below have been tuned after a coarse exploration of the parameter space.
Footnote 6: Implementations are based on the TensorFlow library [59] with the Keras API [60].
### Convolutional Neural Network (CNN)
CNNs were first introduced for image classification [39]. They are now used for a wide variety of tasks, including the detection of GW signals [40, 41, 42, 43, 44, 45]. In this study, we tested a range of CNNs similar to those considered in previous works.
We limited ourselves to shallow networks with five layers: four convolutional layers and one final fully connected embedding layer. For simplicity, we only report here on the best-performing CNN, whose structure is detailed in Table 1.
The convolutional layers are defined by the number of output filters, the length of the 1D convolution window (kernel size), the stride length of the convolution, and the activation function. The dense layer only requires the definition of the activation function. The input of inner convolutional layers is downsampled with a max pooling operation over a window size indicated in the table. The output of convolutional layers is processed by a dropout layer that randomly sets the input units to 0 with the frequency rate specified in the table. A global average pooling, followed by a dropout with a rate of 10%, is applied to the output of the last convolutional layer.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Layer number** & 1 & 2 & 3 & 4 & 5 \\ \hline
**Type** & Conv & Conv & Conv & Conv & Dense \\ \hline
**Number of filters** & 256 & 128 & 64 & 64 & - \\ \hline
**Kernel size** & 16 & 8 & 8 & 4 & - \\ \hline
**Stride length** & 4 & 2 & 2 & 1 & - \\ \hline
**Activation function** & relu & relu & relu & relu & softmax \\ \hline
**Dropout rate** & 0.5 & 0.5 & 0.25 & 0.25 & - \\ \hline
**Max pooling** & 4 & 4 & 2 & 2 & - \\ \hline \end{tabular}
\end{table}
Table 1: Structure of the CNN considered in this study. The type of the layer is either convolutional (Conv) or fully connected (Dense). The activation function is either the rectified linear unit (relu) or the softmax function [39].
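A plausible Keras reading of Table 1 is sketched below; the exact ordering of dropout and max pooling within each block, and the use of `padding="same"` (needed to keep the feature maps from shrinking below the kernel sizes), are our assumptions, as the paper specifies the hyperparameters but not the full graph. The loss is computed from the logits, in line with Sec. 3.5.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(input_len=2048, n_classes=3):
    inp = layers.Input(shape=(input_len, 1))
    x = inp
    # (filters, kernel, stride, dropout rate, max-pool window) per Table 1
    for filters, kernel, stride, drop, pool in [(256, 16, 4, 0.5, 4),
                                                (128, 8, 2, 0.5, 4),
                                                (64, 8, 2, 0.25, 2),
                                                (64, 4, 1, 0.25, 2)]:
        x = layers.Conv1D(filters, kernel, strides=stride,
                          padding="same", activation="relu")(x)
        x = layers.Dropout(drop)(x)
        x = layers.MaxPooling1D(pool)(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dropout(0.1)(x)
    out = layers.Dense(n_classes)(x)   # raw logits; softmax applied separately
    return tf.keras.Model(inp, out)

model = build_cnn()
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
```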
### Temporal Convolutional Network (TCN)
TCN [48, 61] is a neural network architecture specifically developed for sequence modeling problems. TCN has been shown to outperform generic state-of-the-art architectures over a diverse range of tasks and datasets. The TCN architecture is based on causal convolutions, where an output at time \(t\) is only convolved with past inputs from the previous layer. This allows the network to collect information from further in the past, using a combination of deeper networks (augmented with residual layers) and dilated convolutions.
In this study, we have tuned the hyperparameters of the TCN model to find a compromise between the best performance and a reasonable training time. We ended up using a network with a TCN layer consisting of \(N=6\) dilated convolutional layers with 32 filters, a kernel size of \(k=16\), default values of dilation factors \(d_{k=1\dots 6}=(1,2,4,8,16,32)\) for the 6 convolutional layers, and a dropout rate of 0.1. The output of the TCN layer goes into a final dropout layer with a rate of 0.5, and a dense embedding layer closes the model.
A key parameter that governs the training efficiency is the receptive field, which is the size of the region in the input data that produces a given feature in the output. The receptive field of the TCN can be expressed as \(R=1\,+\,2\,(k-1)\,d_{\mathrm{tot}}\) where \(d_{\mathrm{tot}}=\sum d_{k}\)[48]. With the above configuration, we have \(R\approx 1900\). The data used in this work have a sampling rate of 2048 Hz so each segment of data has 2048 points. The training with TCN is effective when \(R\) is much larger than the length of the input sequence [48]. To satisfy this constraint, only for this model, it is necessary to downsample the input data to 1024 Hz, therefore producing an input vector of size 1024.
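As a quick check, the receptive field of the quoted configuration can be computed directly:

```python
# Receptive field of the TCN configuration quoted above
k = 16
dilations = [1, 2, 4, 8, 16, 32]
R = 1 + 2 * (k - 1) * sum(dilations)
print(R)  # 1891, i.e. R ~ 1900, larger than the 1024 input samples at 1024 Hz
```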
### Inception Time (IT)
IT [49] is a deep network ensemble designed specifically for time series classification. It leverages the concept of residual networks and incorporates Inception modules [62]. In a nutshell, the Inception module first produces a one-dimensional summary of the input multivariate time series (this is the "bottleneck" layer), and then convolves this summary through multiple filters of different lengths, leading to a multivariate output that provides inherently multi-resolution features. The module output is finally reduced by max pooling (pool size of 4) before passing to the next module.
The IT architecture is composed of five ResNet networks with a sequence of depth \(d\) Inception modules, with two residual blocks. The outputs of the five models are combined through a global average pooling and a final softmax layer, used to produce the classification probabilities for the different classes. In this study we have used the standard implementation of IT provided by the authors [49] with networks of depth \(d=10\), each with a bottleneck size of 32 processed through 32 filters with kernel sizes 20, 40 and 80.
### Training process
The three classifiers are optimized using the training set described in Sec. 2 to minimize the categorical cross-entropy loss function. The default implementations of the Adam optimizer are utilized, with a batch size of 24 [59]. The training procedure is repeated 10 times with different (random) initializations of the model weights and dropouts, and the instance exhibiting the best Receiver Operating Characteristic (ROC) curve on the testing dataset (as explained in Sec. 4) is chosen. Note that this evaluation cannot be done with the validation dataset, as it does not provide enough statistics to compute the ROC in the relevant regime of low false alarm rates.
Throughout the training process, the model's area under the ROC curve [63] is evaluated on the validation data, and the model with the highest value is ultimately selected. The CNN, TCN, and IT models are trained for 50, 150, and 20 epochs, respectively. The best models are obtained at the 24th epoch for CNN, the 34th epoch for TCN, and the 5th epoch for IT7. On the Tesla K40d GPU we used, the training times per epoch were 220 seconds for CNN, 1000 seconds for TCN, and 3320 seconds for IT.
Footnote 7: After the 5th epoch, the IT model displayed signs of overfitting.
### Decision statistic
The final objective is to detect with high confidence the segments with a true astrophysical signal, i.e., to classify them as signal and to reject the other segments as noise or glitch.8 We aim to constrain false alarms to a rate of two per day (similar to the current online search pipelines). This implies that we should reject all but one noise or glitch segment from the testing set in \(1.7\times 10^{5}\) trials.
Footnote 8: The two classes noise and glitch will be later merged _a posteriori_ into a single class associated with the absence of an astrophysical signal.
The classifiers output the probability of class membership for each of the classes, that is, three numbers between 0 and 1, summing to 1. The final detection is performed by applying a threshold to the membership probability \(P_{s}\) assigned to the signal class, which thus defines our _decision statistic_. The class membership probability is computed by the softmax activation function applied to the raw output (the "logits tensor") of the fully connected embedding layer which concludes the classifiers. Because of the high confidence level required, this threshold is very close to 1, thus requiring attention to the numerical precision of the evaluation of the membership probability (this issue, related to the precision of floating-point arithmetic, was already noted in [45]). This has consequences for the way the classification loss is computed from the membership probability at the training stage. We found that the categorical cross-entropy loss should be computed directly from the logits tensor rather than from the class membership probability after the softmax transformation.
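In Keras, computing the loss from the logits corresponds to `tf.keras.losses.CategoricalCrossentropy(from_logits=True)`. The same idea applies at evaluation time: the statistic \(\lambda=-\log_{10}(1-P_{s})\) used in Sec. 4 can be obtained from the logits without ever forming \(1-P_{s}\) explicitly. A minimal sketch follows (the index of the signal class is an assumed convention):

```python
import numpy as np
from scipy.special import logsumexp

def lambda_from_logits(logits, signal_idx):
    """Compute lambda = -log10(1 - P_s) directly from the logits tensor,
    avoiding the catastrophic cancellation of evaluating 1 - softmax.
    Uses 1 - P_s = sum_{j != s} exp(z_j) / sum_j exp(z_j)."""
    z = np.asarray(logits, dtype=np.float64)
    log_one_minus_ps = logsumexp(np.delete(z, signal_idx)) - logsumexp(z)
    return -log_one_minus_ps / np.log(10.0)
```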
This numerical precision issue has an impact on the performance of both the TCN and IT classifiers. The right panel of Fig. 4 provides an illustration for IT. This plot
compares the ROC curves based on the detection statistic \(P_{s}\) (see Sec. 4 for the details on how the curves are computed) obtained with IT when the categorical cross-entropy loss is calculated from the logits tensor (green) and when it is calculated from the class membership probability after the softmax transformation (red). The shaded area represents the range between the best and worst models among the 10 instances computed at training. When softmax is used, the uncertainty in the performance is larger (the shaded area is wider) and the classification efficiency reached at small false alarm rates is lower.
## 4 Classifier evaluation with the testing data
This section describes the results obtained with the three classifiers presented above applied to the testing set.
The classifiers all exhibit poor separation power between the noise and glitch classes. This can be attributed to several factors, including the absence of a distinct boundary between the two classes (potentially due to contamination and mislabeling), the considerable variation in glitch morphology, and the relative class imbalance, with the glitch class being underrepresented by a factor of 3.5 compared to the noise class in the training set. The initial assumption that a three-class division would enhance classification performance turned out to be incorrect, at least with this dataset. Consequently, we proceed by combining the noise and glitch classes into a single class representing the absence of an astrophysical signal.
Figure 2: Distributions of \(P_{s}\) (the class membership probability assigned to the signal class), conditioned on the class of the input segment from the testing set: signal (blue), noise (dot-dashed red), or glitch (black). These distributions were computed for the CNN (left), TCN (middle), and IT (right) architectures. The histograms are normalized to have a unit sum. The classifiers do not distinguish between samples from the noise and glitch classes, thus resulting in practically identical probability distributions (see Sec. 4 for a discussion on this point).
### Noise rejection
We first assess the noise rejection capabilities of the classifiers. Fig. 2 compares the distributions of the decision statistic \(P_{s}\) (the membership probability assigned to the signal class) when the input segment belongs to each of the three classes. The \(P_{s}\) distributions obtained with samples from the noise or glitch classes have very similar shapes, reflecting the intrinsic similarity of those two classes (see above). The best classifier is the one that provides the greatest contrast between the distributions of the \(P_{s}\) statistic obtained in the presence of a signal (blue) versus noise or glitch (red dot-dashed and black).
The distributions obtained for the noise or glitch classes exhibit maxima at zero for the TCN and IT classifiers, while the maximum is shifted to around 0.1 for CNN. Moving from the peak to higher values, the distribution shows a monotonic decay for CNN and TCN. However, for IT, the distribution initially decreases and then slightly increases near \(P_{s}=1\). The TCN classifier appears to reach the lowest background \(\lesssim 10^{-2}\) in normalized count units.
Since our objective is to achieve high-confidence classification, we are primarily interested in the immediate vicinity of \(P_{s}=1\). This motivates to reparameterize the \(P_{s}\) statistic as \(\lambda:=-\log_{10}(1-P_{s})\). While \(P_{s}\) ranges from 0 to 1, \(\lambda\) can theoretically take values across the entire real line. However, our main focus lies in the range \(\lambda\gtrsim 7\). The most stringent criterion is to require \(P_{s}=1\) at machine precision, which corresponds to \(\lambda=\infty\). The number of noise and glitch samples in the testing set that satisfy this selection criterion is 0, 1, and 2 for the CNN, TCN, and IT classifiers, respectively. Such rejection power (between 0 and 2 false alarms in \(5.8\times 10^{5}\) trials) is in agreement with the false-alarm rate targeted initially.
### Signal extraction
We proceed to assess the classifiers' ability to extract signals. Fig. 2 illustrates the distributions of the decision statistic \(P_{s}\) when the input segment belongs to the signal class, represented in blue. As anticipated, all distributions exhibit a peak at \(P_{s}=1\). However, the peak appears narrower for the IT classifier. To focus on the region of interest near \(P_{s}=1\), we employ the \(\lambda\) reparametrization, as depicted in Fig. 3. This figure also incorporates the dependencies on the signal-to-noise ratio (SNR) of the injected GW signal and the chirp mass \(\mathcal{M}\) of the source binary. The distributions are computed separately for three ranges of chirp mass \(\mathcal{M}\): low, mid, and high, corresponding to \(\mathcal{M}\) values between 13 and 17 \(M_{\odot}\), 17 and 21 \(M_{\odot}\), and 21 and 26 \(M_{\odot}\), respectively. The histograms on the right-hand side are computed with the samples of the signal class that are classified with \(P_{s}=1\), showing their distribution in terms of SNR for the three chirp mass ranges.
It is worth noting that we have either \(\lambda\lesssim 7.5\) or \(\lambda=+\infty\) (i.e., \(P_{s}=1\)). 9 In
a sense, the latter case seems to accumulate all samples with \(\lambda\gtrsim 7.5\). From the left column of the figure, it is apparent that the IT classifier assigns larger \(\lambda\) (or \(P_{s}\)) values more uniformly over the full range of chirp mass and to lower SNR. In contrast, the CNN fails to do so for the lower chirp mass interval shown in blue. This is confirmed by the histograms in the right column, which indicate that the TCN and IT classifiers have a higher overall count (approximately 76%, compared to 65% for CNN) and extend to lower SNR values.
### Global assessment with Receiver Operating Characteristics
To fully characterize the performance of the classifier, the noise rejection and signal extraction capabilities have to be evaluated jointly. This can be done by computing the ROC curves [63]. The classification efficiency \(S_{th}/S_{tot}\) and false alarm rate \(N_{th}/N_{tot}\) are evaluated from the testing set, with \(S_{th}\) and \(N_{th}\), the number of signal samples and noise and glitch samples with a \(P_{s}\) value above some threshold, and \(S_{tot}\) and \(N_{tot}\) the total number of samples for each category. Note that since each sample has a duration of 1 second, \(N_{tot}\) is intended here as the total duration in seconds of noise and glitch samples, so \(N_{th}/N_{tot}\) is measured in s\({}^{-1}\). By varying the threshold, one obtains the ROC curves in Fig. 4 which displays the classification efficiency versus the false alarm rate. The TCN and IT classifiers appear to have similar ROC curves and show a clear improvement with respect to CNN. Fig. 4 also shows the ROC computed for two instances of the IT architecture, with and without the softmax activation during training (see Sec. 3.5 for a discussion).
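A minimal sketch of this computation, with `ps_signal` and `ps_background` denoting the \(P_{s}\) values of the signal samples and of the combined noise and glitch samples (assumed names):

```python
import numpy as np

def roc_curve_points(ps_signal, ps_background, thresholds):
    """Efficiency S_th/S_tot vs false alarm rate N_th/N_tot; since every
    sample lasts one second, the false alarm rate is directly in s^-1."""
    efficiency = np.array([(ps_signal >= t).mean() for t in thresholds])
    far = np.array([(ps_background >= t).mean() for t in thresholds])
    return far, efficiency
```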
Fig. 5 shows the classification efficiency for a given false alarm rate set to \(10^{-5}\) s\({}^{-1}\), as a function of the injected SNR as defined in Eq. (1). The classifiers TCN and IT give similar efficiencies and uniformly surpass CNN. Note that the efficiency shown in this figure is averaged over the full chirp mass range and thus does not show the differences evidenced in Fig. 3. Overall, signals with SNR=10 can be detected at the considered significance level with a good probability, larger than 50%.
## 5 Application to the remaining O1 single-detector data
This section presents the results of applying the different classifiers to the remaining O1 data from the Livingston detector. Our primary focus is on the IT classifier, while the results for the other models can be found in A.
### Analysis of known O1 GW events
We first investigate how the three events detected in the O1 data [8] by matched filtering searches are classified by the considered models. The statistic \(P_{s}\) is evaluated for different positions of the one-second window, that is for different time delays \(\Delta t\) between the start of the analysis window and the merger time. This definition implies that, for \(\Delta t=-1\) s, the analysis window only includes the initial part of the signal (inspiral), whereas,
Figure 3: Distribution of the statistic \(\lambda:=-\log_{10}(1-P_{s})\) obtained with testing samples from the signal class and computed for the three considered classifiers: CNN (top), TCN (center) and IT (bottom). The column on the left shows a kernel density estimate of the \(\lambda\) distribution for the samples with \(P_{s}<1\), thus leading to a finite value for \(\lambda\). The shaded area is the 50% containment region, and the line is the 90% containment region. Those distributions are shown versus the SNR of the injected GW signal and computed separately for three ranges of chirp mass. The column on the right shows a histogram for the samples with \(P_{s}=1\). The signal samples detectable with high confidence fall in the range of large \(\lambda\gtrsim 7\) (i.e, \(P_{s}\) values very close or equal to 1).
Figure 4: ROC curves for the three considered classifiers, CNN, IT, and TCN, illustrating the classification efficiency versus the false positive rate. Each classifier has been trained 10 times, and the continuous line represents the result obtained for the best model, while the shaded area covers the range from the best to the worst model. The left panel displays the TCN (orange) and CNN (blue) ROC curves. In the right panel, the ROC curves are shown for two instances of the IT architecture: one trained with softmax activation (red) and another without softmax activation (green) (refer to Sec. 3.5). The TCN ROC curve is reproduced in this panel as a dashed orange line to facilitate comparison.
Figure 5: Classification efficiency versus SNR for a false alarm rate of \(10^{-5}\) s\({}^{-1}\). The classifiers TCN and IT give similar efficiencies and uniformly surpass CNN. Overall, signals with SNR=10 can be detected at the considered significance level with a good probability, larger than 50%.
for \(\Delta t=0\) s, the analysis window starts at the merger time and thus only includes the final part (merger and ringdown). Fig. 6 shows the evaluation of \(P_{s}\) between those two extreme cases for the IT model for GW150914, GW151012 and GW151226 (see also A).
As expected, when the chirp signal is not included in the analysis window, the classifier is not able to detect the presence of the signal. GW150914 appears to be loud enough to be always identified, regardless of its position in the time window, even if it is only partially visible. GW151012 is only detected when the chirp is at the center of the analysis window. GW151226 is not detected. This is expected as the binary component masses are outside the range used to generate the astrophysical signals in the signal class of the training data. Moreover, both events have Livingston single-detector optimal SNRs, as inferred from parameter-estimation analyses, below the minimum value of 8 used to train the network (namely, \(5.8^{+1.2}_{-1.2}\) for GW151012 and \(6.9^{+1.2}_{-1.1}\) for GW151226, according to Table V of [8]).
### Analysis of the remaining O1 data
We analysed all the remaining L1 data in O1, excluding the month we used to train and test the classifiers (see Sec. 2). This corresponds to the period between GPS=1126051217 (2015-09-12 00:00:00 UTC) and GPS=1132444817 (2015-11-25 00:00:00 UTC) and between GPS=1135036817 (2015-12-25 00:00:00 UTC) and GPS=1137254417 (2016-01-19 16:00:00 UTC). In this period we excluded the intervals of \(\pm\) 1 second around the chirp time of the three known events (see previous section). This amounts to a total of 4,216,489 s (about 49 days), of which 1,054,564 s (about 12 days) are single-detector times, corresponding to 25% of the total. This data set is whitened following the same procedure used to produce the training set (the ASD was calculated from periods of non-interrupted data taking with 26 s minimum and 146,978 s maximum). The data are then divided into non-overlapping one-second segments that are processed through the three classifiers. For each, we used the best-performing model on the testing data. The processing time for the full data set is about 4 hours per model on NVIDIA Tesla V100S GPUs, but most of this time is spent loading the data; the extraction of the model predictions takes about 8 min for CNN, 18 min for TCN and 52 min for IT. No data quality information was used, so this analysis is solely based on the gravitational-wave strain data.

Figure 6: Evolution of the statistic \(P_{s}\) produced with IT classifier versus the relative delay \(\Delta t\) of the analysis window to the O1 event merger time (GW150914, GW151012 and GW151226). For \(\Delta t=-1\) s, the analysis window only includes the initial part of the signal (inspiral), whereas, for \(\Delta t=0\) s, the analysis window starts at the merger time and thus only includes the final part (merger and ringdown).
Fig. 7 shows the distribution of the \(\lambda=-\log_{10}(1-P_{s})\) statistic obtained with the IT classifier (similar plots can be found in A for the other models). We apply the most restrictive selection cut by requiring \(P_{s}=1\) (at machine precision). We recall that this selection cut corresponds to a false-alarm rate of \(\lesssim 4\times 10^{-6}\) s\({}^{-1}\) (that is, one false alarm per 3 days) and a classification efficiency of 76% when estimated on the testing set, see Secs. 4.1 and 4.2. Based on these results, we estimate from basic counting statistics that the maximum number of false alarms expected for this analysis should be 29, 43 and 55 for CNN, TCN and IT respectively at the 95% level for the full data set, and 9, 13 and 16 when restricting to the single-detector part.
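As a minimal sketch of this type of counting estimate, the snippet below computes the detection statistic \(\lambda\) and a Poisson 95% upper bound on the number of false alarms. The false-alarm probability used here is illustrative; the exact figures quoted above additionally fold in each classifier's own false-alarm estimate and its uncertainty measured on the testing set.

```python
import numpy as np
from scipy.stats import poisson

def lambda_statistic(p_s):
    """Detection statistic lambda = -log10(1 - P_s)."""
    return -np.log10(1.0 - p_s)

def max_false_alarms(fap_per_second, n_seconds, cl=0.95):
    """Upper bound (at confidence level `cl`) on the number of false
    alarms, assuming independent one-second segments so that the
    false-alarm count is Poisson with mean fap * N."""
    mu = fap_per_second * n_seconds
    return int(poisson.ppf(cl, mu))

n_full = 4_216_489    # seconds in the full remaining O1 data set
n_single = 1_054_564  # seconds of single-detector time
print(max_false_alarms(4e-6, n_full), max_false_alarms(4e-6, n_single))
```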
For the IT classifier, a total of nine segments pass the selection cut, with two occurring in single-detector time at GPS=1131289775 (2015-11-11 15:09:18 UTC) and GPS=1135945474 (2016-01-04 12:24:17 UTC). For the CNN and TCN classifiers, we obtain 4 and 105 segments passing the cut, with 2 and 14 falling in single-detector periods. The results are thus consistent with the expectations for CNN and IT, while there is a clear excess with TCN. We have observed that a significant fraction of the triggers comes from two time intervals around 2015-10-20. Our interpretation is that the data from those periods could differ in nature from those of the training set, and TCN may be sensitive to this difference.
Interestingly, there is only one segment passing the selection cut for all three classifiers: GPS=1135945474 (2016-01-04 12:24:17 UTC), which we investigate further in the next section. As single-detector searches cannot employ statistical resampling techniques with time shifts [64], we can only provide an upper limit on the false alarm rate for this detection. The upper limit is estimated to be 1 event every 49 days, based on the available data from the three-month analysis period. This segment on 2016-01-04 corresponds to the event identified in the Livingston detector data during the O1 single-detector periods using a standard matched-filtering-based search, as reported in [35]. However, this candidate was subsequently eliminated by the authors of Ref. [35] after examining the residual obtained by subtracting the best-fit waveform from the data, since excess power is observed in the residual at frequencies below 80 Hz.
### Detailed analysis of the 2016-01-04 event
We have performed a number of detailed checks of the 2016-01-04 event. First, we performed a "visual" inspection with the time-frequency Q-transform [66]. Fig. 8 provides a time-frequency representation of the entire segment with a Q-scan [66]. A transient is visible \(\sim 0.35\) seconds after the start of the segment, at a frequency of about 150 Hz. In the magnified view, the shape of the transient is clearly indicative of a frequency-modulated chirp-like transient.
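A minimal sketch of such an inspection using the public gwpy library and open LIGO data is shown below; the time span and Q-transform settings are illustrative choices, not necessarily those used to produce Fig. 8.

```python
from gwpy.timeseries import TimeSeries

# Fetch ~20 s of open LIGO Livingston strain around the candidate
# segment (GPS 1135945474; the transient sits ~0.35 s into it).
gps = 1135945474
data = TimeSeries.fetch_open_data("L1", gps - 10, gps + 10)

# Q-transform restricted to the one-second analysis segment;
# the frequency range is an illustrative choice.
qspec = data.q_transform(outseg=(gps, gps + 1), frange=(20, 500))
plot = qspec.plot(figsize=(8, 4))
plot.gca().set_yscale("log")
plot.savefig("qscan_20160104.png")
```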
The Gravity Spy database [65] has classified this specific GPS time as an instrumental artefact of the "Blip" type. The term refers to a well-identified family of instrument glitches whose origin is still largely unknown (see, e.g., [67, 68] for more details). Generally, "Blip" glitches do not exhibit a chirping frequency (see Fig. 1 of [69] for a typical example). To complement this initial inspection, Fig. 7 shows in pink the statistic \(\lambda\) (or equivalently \(P_{s}\)) of the 600 blip glitches listed in Gravity Spy overlapping with the part of the O1 dataset being analyzed. The resulting distribution is compatible with the overall background distribution. The Jan 4 segment appears to be an outlier with respect to the blip glitches identified in the data.
Further, we checked whether the transient signal can be fitted by a GW waveform model associated with a compact binary merger. To do so, we ran the Bayesian inference library Bilby [70] and used the IMRPhenomXPHM waveform model [71]. It is assumed that the component spins are co-aligned with the orbital momentum. For the rest of the source
Figure 7: Distribution of the \(\lambda=-\log_{10}(1-P_{s})\) statistic (shown in blue) obtained using the IT classifier on the remaining O1 dataset (refer to Sec. 5.2 for details). The segments with \(P_{s}=1\) have been assigned a value of \(\lambda=8\) for plotting purposes. The pink histogram corresponds to a subset labeled as “Blip” glitches by Gravity Spy[65]. The markers at the top indicate the highest values for the three O1 events displayed in Fig. 6. Please note that the vertical position of these markers is arbitrary.
parameters, generic and agnostic priors are assumed, along with a standard \(\Lambda\)-CDM cosmology model with \(H_{0}=67.9\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\)[72]. The analysis did not include a marginalization over calibration uncertainties. The analysis results in a signal-versus-noise log Bayes factor of 47. The estimated time of arrival of the merger at the detector is GPS=1135945474.373\({}^{+0.076}_{-0.07}\) and the measured optimal SNR is \(11.34^{+1.8}_{-1.6}\).
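A minimal sketch of such a single-detector Bilby analysis is given below; the duration, sampler settings and data-loading step are illustrative assumptions and not the exact configuration used here.

```python
import bilby

duration, fs = 4.0, 2048.0

# Aligned-spin BBH priors (spins along the orbital angular momentum);
# prior ranges would be broadened or adjusted as needed for an
# agnostic analysis.
priors = bilby.gw.prior.BBHPriorDict(aligned_spin=True)

waveform_generator = bilby.gw.WaveformGenerator(
    duration=duration, sampling_frequency=fs,
    frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
    waveform_arguments=dict(waveform_approximant="IMRPhenomXPHM",
                            reference_frequency=20.0))

# Single Livingston interferometer; in a real run the strain around
# GPS 1135945474 would be loaded into it (e.g. from open data).
ifos = bilby.gw.detector.InterferometerList(["L1"])

likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
    interferometers=ifos, waveform_generator=waveform_generator)

result = bilby.run_sampler(likelihood=likelihood, priors=priors,
                           sampler="dynesty", nlive=1000,
                           outdir="out_20160104", label="jan4")
print(result.log_bayes_factor)  # signal-versus-noise log Bayes factor
```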
Fig. 9 shows the result of the fit in the time domain by comparing the whitened data (orange) to the inferred waveform (blue) with a 90% credible belt. We report that
Figure 8: Time-frequency representation of the segment at 2016-01-04 12:24:17 UTC (GPS=1135945474 s) recorded by the LIGO Livingston detector. The top panel shows the entire segment. The bottom panel is a detailed view that focuses on the transient signal at \(t\sim 0.35\) and \(f\sim 150\) Hz. The frequency of the signal is distinctly increasing in a chirping pattern.
there is no significant residual after subtraction of the inferred waveform, as shown in Fig. 10. As an independent check of the nature of the signal, the figure also includes the waveform estimate produced by the denoising convolutional autoencoder described in [73] (dashed red). The two reconstructed waveforms are in good agreement, following a similar phase evolution, except for the initial and final parts of the signal, where the denoiser's reconstruction is not optimal because of its low-frequency cut-off and the rather low SNR of the signal. In addition, we note that the denoising autoencoder was trained on the waveform family SEOBNRv4, which is different from the one used for Bilby (IMRPhenomXPHM); this may also contribute to the differences.
The above checks are all compatible with the event being of astrophysical origin. The corner plot in Fig. 11 displays the posterior distribution of the source parameters, including the binary component masses, spins and source distance. Since only one detector is available, the source direction is not localized in the sky. The 90% credible intervals for those parameters are: the measured (redshifted) chirp mass \(\mathcal{M}=30.18^{+12.3}_{-7.3}M_{\odot}\), the (redshifted) component masses \(m_{1}=50.7^{+10.4}_{-8.9}\,M_{\odot}\) and \(m_{2}=24.4^{+20.2}_{-9.3}\,M_{\odot}\), the binary effective spin \(\chi_{\rm eff}=0.06^{+0.4}_{-0.5}\) and the luminosity distance \(d_{L}=564^{+812}_{-338}\) Mpc; see [74] for a definition of those physical parameters. Overall, these values are consistent with the observed population of BBHs to date.
## 6 Conclusions
This contribution demonstrates the viability of training neural network classifiers on real detectors' data for analyzing single-detector observing periods of ground-based GW detectors. We show that architectures specifically designed for time-series classification, such as IT or TCN, outperform the standard CNN typically used so far. Their detectability limit in terms of signal-to-noise ratio is lower by a few percent to 15%
Figure 9: Comparison of the whitened L1 data (orange line) with the reconstructed waveform from the Bilby posteriors (blue) and the ML denoising convolutional autoencoder neural network described in [73] (dashed red line).
for 50% and 90% classification efficiencies, respectively. The models were trained with one month of the observing run O1 data from the LIGO Livingston detector. When applied to the remaining three months of O1 data, the classifiers independently detect a plausible GW signal of astrophysical origin on January 4, 2016. This candidate signal was also identified by [35] using standard matched filtering techniques. While [35] downgraded the event as a noise artifact, the various diagnostics we performed substantiate the possibility of its astrophysical origin.
Operationally, we propose an approach where the multi-detector data from the first month of an observing run, labeled by standard matched-filtering-based pipelines, are used to train the neural network models. The resulting classifiers can then be applied to the remaining data collected during single-detector periods. Once trained, the computational cost is such that the classifiers can produce low-latency triggers. However, the poor sky localization obtained with only one detector limits the relevance of this approach.
The current approach faces two limitations: (i) using real data for training and testing inherently limits the statistical characterization of these algorithms and their noise rejection capabilities, as already highlighted in [46] and observed with the excess of triggers produced by the TCN classifier; (ii) there is a technical issue arising from the use of bounded selection statistics (i.e., class membership probabilities in our case) that leads to numerical intricacies. More generally, due to the absence of a mathematical theory for neural networks, their precise statistical characterization on noisy data remains an open question. Consequently, research in this field is limited to a trial-and-error heuristic
Figure 10: Time-frequency representation of the residual after the subtraction of the reconstructed waveform from Bilby posteriors from the data segment at 2016-01-04 12:24:17 UTC (GPS=1135945474 s). The dynamic range and color code are the same as in Fig. 8. No excess power is visible in this plot.
approach.
This contribution opens up new possibilities for analyzing the fairly large single-detector data set. Applying the proposed classifiers to other LIGO-Virgo observing runs and broadening the parameter space to include lower masses and effects such as higher-order modes or precession would be interesting directions for future work.
Figure 11: Posterior distribution of the chirp mass \(\mathcal{M}\), luminosity distance, component masses \(m_{1}\) and \(m_{2}\), and effective spin \(\chi_{\rm eff}\) for the 2016-01-04 event (see Sect. 5.3 for details).
## 7 Acknowledgements
This work was partially supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 653477, diiP (data intelligence institute of Paris), IdEx Universite de Paris, ANR-18-IDEX-0001, and the COST action G2net (CA 17137). MB gratefully acknowledges partial support from the Polish National Science Centre grants no. 2016/22/E/ST9/00037 and 2021/43/B/ST9/01714, and Poland's high-performance Infrastructure PLGrid (ACK Cyfronet AGH). The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This work was granted access to the HPC resources of IDRIS under the allocation 2021-A0111012956 and 2021-AD011012279 made by GENCI. Some of the numerical computations were performed on the DANTE platform, APC, France.
The authors would like to thank Charlotte Pelletier for suggesting the investigation of the TCN and Inception Time classifiers. The authors also thank Liliia Sinitsyna, Anirudh Kalla, Hugo Marchand and Felix Bretaudeau, who contributed to this project during their internships. ECM would like to thank Cecilio Garcia-Quiros for useful advice on the usage of Bilby, and Konstantin Leyde for stimulating discussions about this work. We thank Sophie Bini and Thomas Dent for their comments during the LVK internal review.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
|
2304.07073 | Uncertainty-Aware Vehicle Energy Efficiency Prediction using an Ensemble
of Neural Networks | The transportation sector accounts for about 25% of global greenhouse gas
emissions. Therefore, an improvement of energy efficiency in the traffic sector
is crucial to reducing the carbon footprint. Efficiency is typically measured
in terms of energy use per traveled distance, e.g. liters of fuel per
kilometer. Leading factors that impact the energy efficiency are the type of
vehicle, environment, driver behavior, and weather conditions. These varying
factors introduce uncertainty in estimating the vehicles' energy efficiency. We
propose in this paper an ensemble learning approach based on deep neural
networks (ENN) that is designed to reduce the predictive uncertainty and to
output measures of such uncertainty. We evaluated it using the publicly
available Vehicle Energy Dataset (VED) and compared it with several baselines
per vehicle and energy type. The results showed a high predictive performance
and they allowed to output a measure of predictive uncertainty. | Jihed Khiari, Cristina Olaverri-Monreal | 2023-04-14T11:51:26Z | http://arxiv.org/abs/2304.07073v2 | # Uncertainty-Aware Vehicle Energy Efficiency Prediction using an Ensemble of Neural Networks
###### Abstract
The transportation sector accounts for about 25% of global greenhouse gas emissions. Therefore, an improvement of energy efficiency in the traffic sector is crucial to reducing the carbon footprint. Efficiency is typically measured in terms of energy use per traveled distance, e.g. liters of fuel per kilometer. Leading factors that impact energy efficiency are the type of vehicle, environment, driver behavior, and weather conditions. These varying factors introduce uncertainty in estimating the vehicles' energy efficiency. We propose in this paper an ensemble learning approach based on deep neural networks (ENN) that is designed to reduce the predictive uncertainty and to output measures of such uncertainty. We evaluated it using the publicly available Vehicle Energy Dataset (VED) and compared it with several baselines per vehicle and energy type. The results showed high predictive performance and allowed us to output a measure of predictive uncertainty.
Vehicle energy efficiency, ensemble learning, uncertainty estimation
## I Introduction
The growth of e-commerce has contributed to an increase in the transportation systems' dependency on burning fossil fuels such as gasoline and diesel [1]. In general, transportation is estimated to generate a quarter of the total greenhouse gas emissions [2]. In this context, vehicles based on key alternative energy sources, such as battery electric vehicles (BEV), can improve energy efficiency and reduce CO2 emissions in the traffic sector [3][4]. As a consequence, close real-world evaluation and research towards a sustainable transport strategy are crucial to reducing energy consumption [5]. While energy consumption is a useful metric for comparing the total energy used by different vehicles or modes of transportation, it does not provide a complete picture of a vehicle's energy performance. Energy efficiency is a measure of how effectively a vehicle uses energy to perform a specific task, such as moving a certain distance or carrying a certain load. It takes into account not only the amount of energy consumed, but also the work achieved by the vehicle in relation to that energy consumption. For example, two vehicles may consume the same amount of energy, but one may be more efficient at converting that energy into motion, resulting in better performance and lower energy costs.
Different factors, such as vehicle type, size, type of fuel, use of electric energy, outside temperature, and type of road (highway vs. urban area), can affect the use of energy [6][7]. By estimating the energy efficiency under given circumstances, individual and commercial vehicle users can make more informed decisions about their trip planning and better understand their impact. For instance, given a fleet of vehicles that can be used for deliveries, an operator can plan the trips according to the estimated energy efficiency. Furthermore, private car users can benefit from historical information and predictions of the energy efficiency for future trips. For instance, predictions of energy efficiency can inform the choice of route or timing of a planned trip. In both cases, the higher the energy efficiency, the lower the costs and the environmental impact.
On the other hand, traditional prediction methods can be highly sensitive and inaccurate, as they depend on the many factors that influence the energy consumption of a vehicle. Furthermore, prior knowledge about energy efficiency can diverge from reality, as the vehicles can be utilized in various contexts other than the ones tested by the manufacturer. In [8], the authors report a gap of about 10% between the official fuel consumption rate of new cars and the on-road fuel consumption rate, which generally corroborates the findings of earlier studies such as [9] and [10]. It is therefore useful to generate estimations based on commonly available ground-truth data that reflect specificities of the vehicles' usage: environment, weather conditions, driver behavior, etc.
To tackle this challenge, we propose in this paper a machine learning approach consisting in an ensemble of neural networks (ENN) that predicts energy efficiency and estimates the uncertainty in the prediction. By using a diverse set of base learners, ensembles are able to deliver results that are less sensitive to variations in the data. Furthermore, we propose an ensemble that outputs predictions, as well as a measure of confidence.
We investigate in this paper the use of an ensemble, its base learners, and scoring functions. Then, we validate our model on the Vehicle Energy Dataset (VED) [11], an openly available dataset spanning over a year and including trips undertaken by several types of vehicles.
Unlike other works in the literature, our approach does not rely on complex kinetic, chemical, or physical models, nor does it require prior knowledge about the energy efficiency measures provided by the vehicle manufacturer. Furthermore, it is not aimed at a specific type of vehicle. This results in a practical approach, which can be adapted to the dataset at hand and then be used to improve energy efficiency predictions.
Our main contributions are:
1. Proposing an ensemble of deep neural networks (ENN) that can output energy efficiency predictions for different
types of vehicles that utilize fuel and/or an electric battery.
2. Proposing an ensemble that reduces predictive uncertainty and outputs predictions as well as a measure of uncertainty.
3. Presenting an extensive evaluation of the approach per type of vehicle and type of energy, as well as a comparison with several baselines.
The remainder of this paper is structured as follows. The next section considers related work in the field of predictive accuracy and uncertainty estimation for vehicle energy efficiency. Section III describes the steps that were followed to implement our approach. Section IV presents the results regarding the predicted energy values. Finally, Sections V and VI discuss the findings and conclude the paper.
## II Related Work
Most available works in the literature about energy efficiency rely either on manufacturer data (which announce an estimated measure of efficiency for a given vehicle model), or complex kinetic, physical, or chemical models that estimate the energy consumption of a given vehicle based on detailed data. Furthermore, such methods typically focus on only one type of vehicle. On the other hand, there are many studies that tackle how vehicle design and manufacturing choices can enable a higher energy efficiency such as [12][13][14] which respectively focus on the advanced driver assistance system, the torque vectoring control, and the adaptive cruise control. Other studies such as [15] investigate an energy-efficient driving strategy in the context of connected vehicles.
We propose to predict the energy efficiency for different types of vehicles based solely on floating car data that includes measures of instantaneous energy consumption and/or measures that can allow to estimate the energy consumption.
Focusing on trip efficiency estimation, there are mainly three types of approaches that have been followed in the literature:
1. Approaches that rely on mechanical, dynamic, and kinetic car data to derive models for energy consumption estimation, such as [16], [17], and [18].
2. Approaches that use a data-driven approach, where energy data and/or a measure of energy efficiency is available, such as [19], [20], [21], and [22].
3. Approaches that combine the first two types of approaches. For instance, [23] combines statistical modeling with load-based models which simulate the physical phenomena that generate emissions.
In [16], the authors used vehicle-specific power to estimate fuel efficiency per time and per distance. Vehicle-specific power (VSP) is a formalism used in the evaluation of vehicle emissions. It was first proposed in [24]. Informally, it is the sum of the loads resulting from aerodynamic drag, acceleration, rolling resistance, and hill climbing, all divided by the mass of the vehicle. Conventionally, it is reported in kilowatts per tonne, the instantaneous power demand of the vehicle divided by its mass. VSP, combined with dynamometer and remote-sensing measurements, can be used to determine vehicle emissions and, by extension, fuel consumption. The authors of [25] proposed a backpropagation network to predict fuel consumption. They relied on a dataset from an auto energy website in Taiwan which included fuel data, and further important factors for fuel consumption such as: car manufacturer, engine styles, vehicle's weight, vehicle type, and transmission types. Other common approaches comprise analyzing datasets that include fuel efficiency information expressed in terms of mileage per gallon or equivalently fuel per unit of traveled distance, such as in [19] and [20]. Given the fuel efficiency data, the authors of [19] used several supervised learning methods to predict the fuel efficiency. A fuzzy inference method to predict mileage per gallon was detailed in [20].
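As a minimal sketch, the commonly quoted light-duty approximation of the VSP formalism described above (following Jimenez-Palacios [24]) can be written as a one-line function; the coefficients below are typical published values and not necessarily those used in [16].

```python
def vsp_kw_per_tonne(v, a, grade=0.0):
    """Vehicle-specific power in kW/tonne for a light-duty vehicle.

    v: speed (m/s), a: acceleration (m/s^2), grade: road grade (fraction).
    The terms model acceleration load, hill climbing, rolling resistance
    and aerodynamic drag; coefficients are typical light-duty values.
    """
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# Example: 20 m/s (72 km/h), mild acceleration, flat road.
print(vsp_kw_per_tonne(20.0, 0.5))  # ~16 kW/tonne
```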
Energy efficiency prediction can be considered as one of the measures that can help with delivery trip optimization, alongside other considerations, such as: travel time prediction [26][27], route optimization [28][29], and order assignment [30][31].
Among the data-driven approaches for estimating efficiency, we can consider deep neural networks. Deep neural networks (DNN) are prominent machine learning methods that have achieved considerable successes in many fields such as computer vision, natural language processing, and robotics [32][33][34]. However, they can be prone to outputting wrong results with high confidence. Such high confidence can present security risks [35][36]. Adversarial training [37] is a method that can help tackle this issue by increasing robustness. As for quantifying uncertainty, the most common approach is to use Bayesian neural networks [38], which are, however, complex to implement and computationally expensive. Therefore, simpler methods such as the DNN ensemble described in [39] can be useful and easily deployed.
The use of ensembles to improve predictive performance is common [40][41]. In [39], the authors described how to use an ensemble to quantify predictive uncertainty. To the authors' knowledge, a similar method has not been used in the context of energy efficiency prediction for different types of vehicles. We therefore study in this work the usability of such an ensemble for providing uncertainty-aware predictions.
## III Methodology
### _Problem Formulation_
As input, we used vehicle data including energy information. We then divided the data per vehicle type. After processing, we generated relevant features and computed the energy efficiency per trip for each type of vehicle. We proceeded to train an ensemble of neural networks. The output consisted of energy efficiency predictions for each trip in the testing set. Figure 2(b) summarizes the process.
### _Data Description_
We relied in this study on the Vehicle Energy Dataset (VED), which is openly available and described in [11]. The dataset is divided into dynamic data and static data. The dynamic data capture GPS trajectories of vehicles along with their time series data of fuel, energy, speed, and auxiliary power usage. The data were collected with a granularity of one second through onboard OBD-II loggers for the period of
November 2017 to November 2018. The data does not include information identifying the driver. Tables I and II show a brief summary of the dataset key aspects and attributes.
The static data captured the constant information about the vehicles, such as type, class, and weight. The list of static attributes is shown in Table III along with example values. By combining the static and dynamic data, we were able to uniquely identify the trips and the vehicles in the dataset, obtaining a fleet that consisted of a total of 383 personal cars in Ann Arbor, Michigan, USA. We further classified the cars into 264 gasoline vehicles (ICE), 92 hybrid electric vehicles (HEV), and 27 plug-in hybrid electric (PHEV) or electric vehicles (EV) according to their energy source:
1. **ICE:** Internal combustion engine
2. **PHEV:** Plug-in hybrid electric vehicles
3. **HEV:** Hybrid electric vehicles
4. **EV:** Electric vehicles
The distribution of vehicles per engine type is depicted in Figure 1. The trips show spatio-temporal diversity: they take place at different times of the day throughout the year, and they are distributed across different types of roads in highways and urban areas. The type of road is, however, not encoded in the dataset as an attribute. Figure 3 depicts the distribution of trip durations for each vehicle type. We note that for all vehicle types, most trip durations are under 10 minutes. For ICE vehicles, the histogram shows considerably higher trip counts than for the other vehicle types.
### _Data Preprocessing_
#### III-C1 Energy efficiency estimation
The four different vehicle types: ICE, HEV, PHEV, and EV utilize different types of energy: fuel-based energy, electric energy, or a combination of both. A common measure of energy efficiency for ICE vehicles is mileage per gallon (MPG). The higher the mileage per gallon, the more efficient a vehicle is under given circumstances. For vehicles that utilize electric energy, an equivalent MPGe measure can be computed. Similarly, other measures such as L/100km can be utilized to estimate fuel energy efficiency.
For all trips, we first estimated the energy consumption. To compute the fuel consumption, we implemented the procedure described in Algorithm 1, which is based on attributes available in the dataset. As for the battery energy, we computed the power in watts based on the instantaneous values of current and voltage. Then, we integrated it over time to obtain the electric energy measure in kilowatt-hours (kWh). We then calculated the average distance traveled per unit of energy consumed as a measure of energy efficiency, to be able to compare the performance of different vehicles under different driving conditions. For the fuel energy, we considered as a measure of energy efficiency the kilometers per liter (km/L), and for the electric battery energy, we considered the kilometers per kilowatt-hour (km/kWh).
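A minimal sketch of this per-trip computation is shown below; the column names are assumptions based on the dynamic attributes in Table II, not the exact identifiers of the VED files.

```python
import numpy as np
import pandas as pd

def trip_efficiency(trip: pd.DataFrame) -> dict:
    """Per-trip energy efficiency from 1 Hz OBD-II samples.

    Column names ('VehicleSpeed_kmh', 'FuelRate_Lph', ...) are
    illustrative placeholders for the VED attributes.
    """
    t = trip["Timestamp_s"].to_numpy(dtype=float)

    # Distance (km): integrate speed (km/h) over time (s), convert hours.
    dist_km = np.trapz(trip["VehicleSpeed_kmh"].to_numpy(), t) / 3600.0

    # Fuel (L): integrate the fuel rate (L/h) over time (s).
    fuel_L = np.trapz(trip["FuelRate_Lph"].to_numpy(), t) / 3600.0

    # Battery energy (kWh): power V*I in watts, integrated to joules.
    power_W = trip["HV_Voltage_V"].to_numpy() * trip["HV_Current_A"].to_numpy()
    battery_kWh = np.trapz(power_W, t) / 3.6e6

    return {
        "km_per_L": dist_km / fuel_L if fuel_L > 0 else np.nan,
        "km_per_kWh": dist_km / battery_kWh if battery_kWh > 0 else np.nan,
    }
```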
#### III-C2 Feature Engineering
In order to augment the dataset, we extracted relevant features from the existing ones. For instance, we generated time-based features from the existing timestamp: hour, minute, month, day of the week. Similarly, we added descriptive statistics about speed for each trip. Furthermore, we clustered the trips based on origins and destinations to group trips that took place in a similar geographical area or share a similar pattern. The cluster number was added as a feature of our dataset. Figure 4 depicts the results of the trip clustering. For instance, we note that Cluster 2 illustrates trips that started and ended in a similar concentrated area,
TABLE I: Datasets summary.

| Features | VED |
| --- | --- |
| Number of Vehicles | 264 |
| Number of Trips | 18963 |
| Traveled Distance (km) | 320792 |
| Average Number of Trips per Day | 53 |
| Average Trip Duration (minutes) | 15 |
| Location | Ann Arbor, USA |
| Time Period | 1 Year |
TABLE II: Dynamic Data Attributes per Dataset

| Category | Attributes |
| --- | --- |
| Time | Timestamp |
| GPS | Latitude/Longitude (deg) |
| Engine Info | Vehicle Speed (km/h), Engine RPM (rev/min), Mass Air Flow (g/s) |
| Fuel Info | Fuel Rate (L/h), Absolute Load (%), Short Term Fuel Trim B1 (%), Short Term Fuel Trim B2 (%), Long Term Fuel Trim B1 (%), Long Term Fuel Trim B2 (%) |
| Weather | Outside Air Temperature (°C) |
Fig. 1: Vehicles distribution per engine type
TABLE III: Static Data Attributes for VED

| Attributes | Example Values |
| --- | --- |
| Vehicle Type | ICE Vehicle, HEV, PHEV, EV |
| Vehicle Class | Passenger Car, SUV, Light Truck |
| Engine Configuration | I4, V4, V4 Flex, V6 PZEV |
| Engine Displacement | 1.0L, 2.0L, 3.6L |
| Transmission | 5-SP Automatic, 4-SP Manual, CVT |
| Drive Wheels | FWD, AWD |
| Vehicle Weights | 3,000 lb, 5,000 lb |
while Clusters 6 and 7 illustrate trips that span over a bigger geographical area.
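A minimal sketch of such an origin-destination clustering, assuming per-trip origin/destination coordinates have been extracted from the GPS traces (the column names and the number of clusters are illustrative choices):

```python
from sklearn.cluster import KMeans

def cluster_trips(trips, n_clusters=8, seed=0):
    """Assign each trip an origin-destination cluster id.

    `trips` is a DataFrame with one row per trip; the coordinate
    column names are illustrative placeholders.
    """
    X = trips[["origin_lat", "origin_lon", "dest_lat", "dest_lon"]].to_numpy()
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    trips["od_cluster"] = km.fit_predict(X)  # added as a categorical feature
    return trips
```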
### _Ensemble Approach_
Our ensemble approach is based on deep neural networks that are trained on stratified batches of the data, thus considering all months of the year.
As shown in [39][42], we considered a dataset \(D=\{x_{n},y_{n}\}_{n=1}^{N}\), where \(x\in\mathbb{R}^{d}\) stands for the d-dimensional features, and \(y\in\mathbb{R}\) is the prediction target. The base model, i.e., the neural network, models the probabilistic predictive distribution \(p_{\theta}(y|x)\), where \(\theta\) are the parameters of the neural network. The method is based on three key aspects:
1. Using a scoring rule that reduces predictive uncertainty
2. Using adversarial training to smooth the predictive distributions
3. Training an ensemble of \(M\) neural networks
As in [39], we used a network whose final layer delivers two values, the predicted mean and variance, treating the observed value as a sample from a Gaussian distribution. With the predicted mean and variance, we minimized the negative log-likelihood criterion expressed in Equation 1.
Fig. 3: Histograms of trip durations in minutes per vehicle type
Fig. 2: Ensemble and neural network representations
\[-\log p_{\theta}(y_{n}|x_{n})=\frac{\log\sigma_{\theta}^{2}(x_{n})}{2}+\frac{(y_{n}-\mu_{\theta}(x_{n}))^{2}}{2\sigma_{\theta}^{2}(x_{n})}+\text{constant} \tag{1}\]
We trained an ensemble of neural networks independently in parallel. When training each network, we used random batches (subsamples) of the data at each iteration, as well as a random initialization of the parameters.
While the batches were drawn at random from the whole training set, the chronological order was later restored to take into account the original succession of the values. The resulting ensemble was a uniformly-weighted mixture model and combined the predictions as expressed in Equation 2:
\[p(y|x)=M^{-1}\sum_{m=1}^{M}p_{\theta_{m}}(y|x,\theta_{m}) \tag{2}\]
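A minimal TensorFlow sketch of this deep ensemble is given below; the hidden width, the softplus variance head, and the preloaded arrays `X_train`, `y_train`, and `X_test` are assumptions, while the loss and the mixture aggregation follow Equations 1 and 2 (the mixture variance uses the law of total variance).

```python
import numpy as np
import tensorflow as tf

def make_model(n_features, hidden=64, depth=5):
    """Fully connected net with a Gaussian head (mean and variance)."""
    inp = tf.keras.Input(shape=(n_features,))
    x = inp
    for _ in range(depth):
        x = tf.keras.layers.Dense(hidden, activation="relu")(x)
    mu = tf.keras.layers.Dense(1)(x)
    var = tf.keras.layers.Dense(1, activation="softplus")(x)  # variance > 0
    return tf.keras.Model(inp, tf.keras.layers.Concatenate()([mu, var]))

def gaussian_nll(y_true, y_pred):
    """Negative log-likelihood of Eq. (1), up to the additive constant."""
    y_true = tf.reshape(y_true, [-1, 1])
    mu, var = y_pred[:, :1], y_pred[:, 1:] + 1e-6
    return 0.5 * tf.math.log(var) + 0.5 * tf.square(y_true - mu) / var

M = 10
ensemble = []
for _ in range(M):  # independent training: random init + random batches
    model = make_model(n_features=X_train.shape[1])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss=gaussian_nll)
    model.fit(X_train, y_train, batch_size=500, epochs=10, verbose=0)
    ensemble.append(model)

# Eq. (2): uniformly weighted Gaussian mixture -> mean and total variance.
preds = np.stack([m.predict(X_test) for m in ensemble])  # (M, N, 2)
mus, vars_ = preds[..., 0], preds[..., 1]
mu_star = mus.mean(axis=0)
var_star = (vars_ + mus**2).mean(axis=0) - mu_star**2
```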
## IV Experiments and Results
### _Experimental Setup_
As a framework, we used Tensorflow [43]. To build the ensemble, we used \(M=10\) deep neural networks, a batch size of 500, and the Adam optimizer [44] with a fixed learning rate of 0.1 in our experiments. The individual neural networks are fully connected and 5 layers deep. We trained each neural network for 10 epochs. The results of the constituent networks are averaged in order to generate the ensemble's final prediction. A single neural network is represented in Figure 2(a), while Figure 2(b) depicts the ensemble structure. To tune the learning rate, we used a grid search on a log scale from 0.1 to \(10^{-5}\). We performed a 70%-30% split between training and testing datasets in a stratified manner with respect to the month of the year, thus resulting in balanced training and test sets with respect to the chronological aspect of the data. Such a split allows us to have training and testing sets that both include samples from all months of the year in a proportionate way, as opposed to a sequential split that would result in a testing set with samples from months that were not represented in the training set and vice versa.
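A minimal sketch of this month-stratified split with scikit-learn, assuming the month feature from the feature-engineering step is available as a column of `X`:

```python
from sklearn.model_selection import train_test_split

# Stratify on the month so that the training and testing sets cover
# all months of the year proportionately.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=X["month"], random_state=42)
```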
All experiments were performed on a computer with the following specifications:
1. **Processor**: Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz (2.30 GHz)
2. **RAM**: 16 GB
3. **GPU**: GeForce MX250
We opted to compare our findings to these baselines:
1. **Linear regression (LR)**[45]
2. **Random Forest (RF)**[46]
3. **xgboost (XGB)**[47]
4. **A single neural network (NN)**[48]
The choice of these baselines is mainly motivated by their common use in regression tasks. In particular, linear regression is a widely used simple, fast, and easy-to-interpret method for regression tasks. As we are proposing an ensemble of deep neural networks, we deemed it relevant to compare our findings to commonly used ensembles such as random forest and xgboost. Both are based on decision trees, and they leverage the diversity of their base models. To underline the added value of an ensemble, we also used the single neural network as a baseline, whereby it is trained following the same settings and parameters as the individual models of our ensemble.
Fig. 4: Origin-Destination Clustering for Trips
As an evaluation metric, we used the root mean square error (RMSE), which is computed over the same test set for our model and the baselines. This choice is justified by the prevalence of RMSE for evaluating regression tasks. For all models, we also considered the R2 score, which is a measure of the goodness of fit of a given model to a regression task. It is obtained by computing the proportion of the variance in the dependent variable that is predictable from the independent variables.
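Both metrics can be computed directly with scikit-learn; `mu_star` here stands for the ensemble's mean predictions from the previous section.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rmse = np.sqrt(mean_squared_error(y_test, mu_star))
r2 = r2_score(y_test, mu_star)
print(f"RMSE={rmse:.3f}, R2={r2:.3f}")
```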
### _Statistical Analysis_
To determine if a statistically significant difference exists between the ensemble and the four baseline models, the Wilcoxon signed-rank test [49] was conducted. It does not assume normality of the distribution and is therefore the non-parametric counterpart of the paired t-test [50]. It allows comparing the errors generated by different predictive models and works by computing the differences between two paired samples and ranking the absolute differences in magnitude. The null hypothesis for the Wilcoxon signed-rank test is that the median difference between the two paired samples is zero. The test statistic is calculated as the sum of the ranks of the positive differences or the sum of the ranks of the negative differences, whichever is smaller.
More specifically, to determine whether one model performs significantly better than the others, we performed a one-tailed hypothesis test. The null hypothesis (\(H_{0}\)) is that "the two models have the same predictive error". The alternative hypothesis (\(H_{a}\)) is that "the first model has a smaller error than the second model". We chose a significance level of \(0.05\) and ran the test using the dedicated function of the Python package scipy. The test was conducted to compare the ensemble to each baseline.
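A minimal sketch with scipy, assuming `y_pred_enn` and `y_pred_base` are predictions from the ensemble and one baseline on the same test samples:

```python
import numpy as np
from scipy.stats import wilcoxon

err_enn = np.abs(y_test - y_pred_enn)
err_base = np.abs(y_test - y_pred_base)

# One-tailed test: H_a is that the ensemble's errors are smaller.
stat, p_value = wilcoxon(err_enn, err_base, alternative="less")
print(p_value < 0.05)  # True -> reject H_0 at the 5% level
```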
### _Results_
Figure 5 illustrates the prediction results of our ensemble as well as the measure of the predictive uncertainty. For each plot, the line represents the mean values of predictions, while the shadow in a lighter color represents the uncertainty measure. The higher the span of the variance, i.e. the shadow in the figure, the higher the uncertainty. On the contrary, when the variance span is small, it indicates a lower uncertainty and hence a higher confidence in the prediction.
As the available data span over a period of a year, we report the results over the testing set per month, to highlight the temporal distribution of the results. This is enabled by our stratified approach in the training and testing split, thus allowing us to show test results for all months of the year.
In blue, Figures 5(a), 5(b), and 5(c) show the fuel efficiency prediction results for ICE, PHEV, and HEV vehicles, respectively. The fuel efficiency predictions are reported in terms of km/L, i.e., traveled distance per unit of fuel consumption. As for the figures in red, Figures 5(d), 5(e), and 5(f) show the prediction results for the electric battery efficiency for PHEV, HEV, and EV vehicles, respectively. The battery efficiency predictions are reported in terms of km/kWh, i.e., traveled distance per unit of electric energy consumption. Therefore, for both types of energy, the higher the predicted values, the higher the efficiency. Due to the lack of data for HEV vehicles, Figure 5(e) shows prediction results only for four months: July to October. As for Figures 5(d) and 5(f), the available data cover the whole year.
As detailed in Section IV, we compared the performance of our ensemble ENN to that of several baselines: LR, RF, XGB, and NN. The computed results are reported per vehicle and energy type in Table IV. The values in the table are in terms of root mean square error (RMSE) calculated over the same test set.
Additionally, we report in Table V the R2 values for our ensemble as well as the baselines. We note that the R2 scores are consistent with the RMSE values in IV. Therefore, the models with the highest R2 scores are also the models that perform better in the regression task. This is expected as the R2 score reflects the goodness of fit of a given model to a regression task. The closer the R2 score is to 1, the better the goodness of fit. Our ensemble has R2 scores between 0.74 and 0.92 for the six different prediction tasks per vehicle type and per energy type. Similarly to the RMSE values, we note a higher R2 score for xgboost for PHEV and EV battery energy efficiency.
As for the statistical test, the results of the Wilcoxon signed-rank test are summarized in Table VI. Each row details the results for the ensemble compared to one baseline. For the fuel efficiency prediction tasks, we note that the ensemble performs significantly better than other baselines for all vehicle types (ICE, PHEV, and HEV), since the p-value is lower than 0.05. As for the battery efficiency, the gain in performance is mostly not as significant. For instance, the ensemble does not perform significantly better than any of the baselines in the case of EV and HEV battery efficiency prediction. This effect can be attributed to the lack of data available for EV and HEV vehicles, resulting in a gain in performance that is not significant. In the case of battery efficiency prediction for PHEV, the ensemble performs significantly better compared to LR, RF, and XGB, but not NN. Overall, considering the distribution of trips per vehicle type, we note that the ensemble provides a considerable gain in performance in most cases.
## V Discussion

We note that the data span over a year and stem from 383 vehicles. Therefore, they include data from different drivers. Consequently, the data we used implicitly include variance in driver behavior and driving conditions. Furthermore, we note that for all three vehicle types that are equipped with batteries, the uncertainty range increases between July and August. This effect possibly reflects the use of air conditioning, which increases the use of energy and introduces more variance in the data.
Table IV shows the prediction error results in terms of RMSE for the various considered baselines. The linear regression consistently has the highest error values, likely due to the low complexity of the model compared to the complexity of the data. The ensemble results for the different use cases outperform the single neural net's. Additionally, the standard deviations of the RMSE errors of our neural network ensemble are consistently smaller than those of the baselines, thus indicating a more consistent performance across the test samples. As for XGB, we note that it only outperforms the ensemble in the case of PHEV and EV battery efficiency prediction. This is likely due to the reduced amount of data available for training in these particular cases compared to the remaining ones. As noted in the data description, there are only 27 PHEV and EV vehicles compared to 264 ICE and 92 HEV vehicles represented in the data. In all other cases, ENN has the lowest error values, thus demonstrating a high predictive accuracy compared to the baselines. Consequently, we note that DNNs in general, and the DNN ensemble in particular, require a high amount of data for sufficient training. On the other hand, tree-based ensembles such as XGB can be less sensitive to a lack of data, and this gives them an advantage compared to more complex models.
Our proposed approach is capable of producing accurate predictions of the energy efficiency as shown in terms of RMSE and R2 score. Furthermore, the statistical test showed a significantly higher performance than baselines in most cases. The consistent performance across different types of vehicles and types of used energy indicates the versatility of the ensemble approach. In addition, the output of a measure of uncertainty as shown in Figure 5 provides further information about the model's confidence in the predictions, which is beneficial due to the variance present in the data and the multiple sources of uncertainty: weather conditions, driving behavior, type and state of vehicle, etc.
A possible limitation of using a neural network ensemble is the increased initial training time compared to other ensembles, such as decision-tree-based ensembles. However, fast predictions can be provided at test time, thus facilitating deployment. On the other hand, the consistent performance across the training and testing sets indicates that the ensemble has not overfit the training data. The considered data, which span over a year, are therefore deemed sufficient.
All in all, this study demonstrates an approach for predicting energy efficiency by relying only on floating car and sensor data. Our findings are in agreement with works in the literature that highlight how energy efficiency varies according to multiple factors such as the type of vehicle, driving conditions, weather, etc. On the other hand, by relying only on floating car and sensor data, we are able to propose an approach that can be deployed in different applications such as fleet management and trip planning, where providing the uncertainty of the estimations can be essential for decision-making, user trust, and energy saving policies.
## VI Conclusion
The high use of energy in the field of transportation can be considerably reduced by increasing the energy efficiency. In fact, predicting energy efficiency is relevant because it provides a comprehensive assessment, which is useful for evaluating the effectiveness of energy-saving technologies and policies, as well as for making informed decisions about vehicle purchases and usage. To do so, we proposed an approach based on an ensemble of neural networks (ENN). While most available works in the literature focus on one type of vehicle, we relied on publicly available energy data (VED) which include different vehicle types over a period of a year. Our approach ensured a high predictive accuracy, but also provided information about the model's confidence in its
TABLE V: R2 Scores; the highest values are in bold.

| Model | ICE (Fuel) | PHEV (Fuel) | HEV (Fuel) | HEV (Battery) | PHEV (Battery) | EV (Battery) |
| --- | --- | --- | --- | --- | --- | --- |
| LR | 0.68 | 0.72 | 0.73 | 0.65 | 0.62 | 0.7 |
| RF | 0.83 | 0.77 | 0.89 | 0.76 | 0.77 | 0.65 |
| XGB | 0.8 | 0.82 | 0.87 | 0.73 | **0.82** | **0.78** |
| NN | 0.85 | 0.79 | 0.9 | 0.83 | 0.74 | 0.72 |
| ENN | **0.92** | **0.86** | **0.94** | **0.87** | 0.78 | 0.74 |
TABLE IV: Prediction Results for Ensemble and Baselines. The values are expressed as mean ± standard deviation, and reported in terms of root mean square error (RMSE). The lowest errors are in bold.

| Model | ICE (Fuel) | PHEV (Fuel) | HEV (Fuel) | HEV (Battery) | PHEV (Battery) | EV (Battery) |
| --- | --- | --- | --- | --- | --- | --- |
| LR | 6.54 ± 1.95 | 17.39 ± 4.14 | 5.5 ± 1.75 | 0.025 ± 0.01 | 0.031 ± 0.015 | 0.039 ± 0.011 |
| RF | 5.22 ± 0.87 | 15.72 ± 3.28 | 4.01 ± 2.1 | 0.014 ± 0.006 | 0.018 ± 0.003 | 0.019 ± 0.002 |
| XGB | 2.92 ± 0.93 | 12.28 ± 3.36 | 4.9 ± 2.35 | 0.015 ± 0.005 | **0.008 ± 0.004** | **0.015 ± 0.009** |
| NN | 3.01 ± 1.01 | 9.45 ± 4.12 | 3.45 ± 1.02 | 0.007 ± 0.002 | 0.013 ± 0.003 | 0.024 ± 0.007 |
| ENN | **2.45 ± 0.85** | **6.79 ± 2.14** | **2.87 ± 0.52** | **0.005 ± 0.002** | 0.01 ± 0.002 | 0.021 ± 0.003 |
output, thus facilitating decision making and increasing trust in the model's output. Future work will focus on investigating various measures of uncertainty and uncoupling aleatoric and epistemic uncertainty in this particular context.
## Acknowledgment
This work was supported by the Austrian Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK) Endowed Professorship for Sustainable Transport Logistics 4.0., IAV France S.A.S.U., IAV GmbH, Austrian Post AG and the UAS Technikum Wien.
|
2304.03215 | Hierarchical Graph Neural Network with Cross-Attention for Cross-Device
User Matching | Cross-device user matching is a critical problem in numerous domains,
including advertising, recommender systems, and cybersecurity. It involves
identifying and linking different devices belonging to the same person,
utilizing sequence logs. Previous data mining techniques have struggled to
address the long-range dependencies and higher-order connections between the
logs. Recently, researchers have modeled this problem as a graph problem and
proposed a two-tier graph contextual embedding (TGCE) neural network
architecture, which outperforms previous methods. In this paper, we propose a
novel hierarchical graph neural network architecture (HGNN), which has a more
computationally efficient second level design than TGCE. Furthermore, we
introduce a cross-attention (Cross-Att) mechanism in our model, which improves
performance by 5% compared to the state-of-the-art TGCE method. | Ali Taghibakhshi, Mingyuan Ma, Ashwath Aithal, Onur Yilmaz, Haggai Maron, Matthew West | 2023-04-06T16:48:34Z | http://arxiv.org/abs/2304.03215v2 | # Hierarchical Graph Neural Network with Cross-Attention for Cross-Device User Matching
###### Abstract
Cross-device user matching is a critical problem in numerous domains, including advertising, recommender systems, and cybersecurity. It involves identifying and linking different devices belonging to the same person, utilizing sequence logs. Previous data mining techniques have struggled to address the long-range dependencies and higher-order connections between the logs. Recently, researchers have modeled this problem as a graph problem and proposed a two-tier graph contextual embedding (TGCE) neural network architecture, which outperforms previous methods. In this paper, we propose a novel hierarchical graph neural network architecture (HGNN), which has a more computationally efficient second level design than TGCE. Furthermore, we introduce a cross-attention (Cross-Att) mechanism in our model, which improves performance by 5% compared to the state-of-the-art TGCE method.
Keywords:Graph neural network User matching Cross-attention.
## 1 Introduction
Ensuring system security and effective data management are critical challenges in the modern day [3, 4]. In this regard, data integration plays a vital role in facilitating data management, as it enables the integration of data from diverse sources to generate a unified view of the underlying domain. One of the primary challenges in data integration is the problem of entity resolution, which involves identifying and linking multiple data records that correspond to the same real-world entity. The problem of entity resolution arises in a wide range of domains, including healthcare, finance, social media, and e-commerce. Entity resolution is a challenging problem due to various factors, including the presence of noisy and ambiguous data, the lack of unique identifiers for entities, and the complexity of the relationships between different entities.
Among entity resolution tasks, cross-device user matching is of significant importance. This task involves determining whether two separate devices belong to the same real-world person based on their sequential logs. The device
sequential logs are time-stamped actions taken by the user over a relatively long period of time, say a few months. These actions are often in the form of browsing a Uniform Resource Locator (URL), and almost always, user identifications are not available due to privacy reasons. Refer to Figure 1 for an illustration of the cross-device user matching task.
It is a common occurrence for users to engage in online activities across multiple devices. However, businesses and brands often struggle with insufficient user identity information, since users are perceived as different individuals across different devices due to their distinct activities. The ability to automatically identify the same user across multiple devices is essential for gaining insights into human behavior patterns, which can aid in applications such as user profiling, online advertising, and improving system security. Therefore, in recent years, there has been a flourishing number of studies focusing on cross-device user matching [9].
In recent years, with the advent of machine learning-based methods for entity resolution, several studies have focused on learning distributed embeddings for the devices based on their URL logs [6, 11, 12]. The earlier studies focused on utilizing unsupervised feature learning techniques [7], developing handcrafted features for the device logs, or relying on the co-occurrence of key attributes of URL logs in pairwise classification [12].
Methods that utilize deep learning have a greater ability to convey dense connections among the sequential device logs. For instance, researchers have utilized a 2D convolutional neural network (CNN) framework to encode sequential log representations to understand the relationship between two devices [16]. However, this model primarily captures local interactions within user sequence logs, limiting its ability to learn the entire sequence or a higher-level pattern. Recently, there has been further emphasis on the effectiveness of sequential models like recurrent neural networks (RNNs) and attention-based techniques in modeling sequence patterns and achieving promising results in numerous sequence modeling tasks [5, 13, 15]. Although these methods work well for sequence modeling, they are not specifically designed for user-matching tasks and may not be optimal for learning sequential log embeddings.
Figure 1: Cross device user matching problem: only based the URL visit logs of two different devices, determine whether or not they belong to the same real-world person.
Recently, researchers proposed a two-tier graph contextual embedding (TGCE) network [6] for the cross-device user matching task. While previous methods for the task often failed at long-range information passing along the sequence logs, TGCE leverages a two-level structure that can facilitate information passing beyond the immediate neighborhood of a device log. This was specifically achieved by considering a random walk starting from every node in a device log, connecting all of the visited nodes to the original node, and performing a round of message passing using the newly generated shortcut edges.
Although the two-tier structure seems to enable long-range information sharing, we note two major limitations of the existing method. First, in the device graph, the random walk on the URL nodes may randomly connect two URLs that have been visited at two far-apart time stamps. Intuitively, two different URLs browsed by a device with weeks of gap in between share less information than two URLs visited within a shorter time frame. Second, at the end of the TGCE architecture, for the pairwise classification task, the generated graph embeddings for two devices are entry-wise multiplied and sent through a fully connected network to determine if they belong to the same person. However, significant key features shared between the two learned embeddings can get lost if the architecture does not explicitly compare them across one another.
To address the above two issues, we propose a new hierarchical graph neural network (HGNN) inspired by the star graph architecture [10]. In the terminology of HGNN, we refer to the URL nodes as _fine_ nodes, and in an unraveled sequence of URL logs, HGNN assigns a _coarse_ node to every \(K\) consecutive fine nodes. The message passing between the coarse and fine nodes enables effective long-range message passing without the need to excessively add edges, as in the random walk method. Moreover, for the pairwise classification task, we utilize a cross-attention mechanism inspired by Li _et al._ [8], which enables entry-wise cross-encoding of the learned embeddings. The main contributions of this paper are summarized as follows:
* We model a given device log as a hierarchical heterogeneous graph, which is 6x faster than the previous state-of-the-art while keeping a competitive level of accuracy and performance.
* We employ a cross-attention mechanism for pairwise matching of the graphs associated with device logs, which improves the F1 score of the overall method by about 5%.
## 2 Related work
The cross-device user matching task was first introduced in the CIKM Cup 2016, and the first methods proposed for the task mainly considered hand-crafted features. For instance, the runner-up solution [9] produces sub-categories based on the most significant URLs to generate detailed features. Furthermore, the
competition winner solution proposed by Tay _et al._[14] utilizes "term frequency-inverse document frequency" (TF-IDF) features of URLs and other related URL visit time features. However, these manually designed features did not fully capture more intricate semantic details, such as the order of behavior sequences, which restricted their effectiveness. Aside from hand-crafted features, features derived from the structural information of the device URL visit data are also crucial for the user-matching task. To further process sequential log information, studies have applied LSTM, 2D-CNN, and Doc2vec to generate semantic features for the sequence visited by a device [11, 12, 16].
Sequence-based machine learning models have also been employed for other entity resolution tasks; for instance, recurrent neural networks (RNNs) have been utilized to encode sequential behavior information [5]. Nevertheless, long-range dependencies and more advanced sequence features are not well captured by sequence models [6]. With the advent of graph neural networks (GNNs), researchers have modeled device logs as individual graphs whose nodes and edges represent visited URLs and transitions between URLs, as in SR-GNN [17]. Each node and/or edge has an initial feature vector obtained from the underlying problem, and the layers of GNNs update these features via message passing in the local neighborhood of every node. Another example is LESSR [1], a recommendation method capable of capturing long-range information using an edge-order-preserving architecture. However, these methods are specifically designed for the recommendation task and do not necessarily achieve desirable results on the cross-device user matching task.
Recently, researchers have proposed TGCE [6], a two-tier GNN for the cross-device user matching task. In the first tier, for every device log, each URL is considered as a node, and directional edges denote transitions between URLs. In the second tier, shortcut edges are formed by starting a random walk from every node and connecting all of the visited nodes to it. After a round of message passing in the first tier, the second tier is supposed to facilitate long-range information sharing in the device log. After the second tier, a position-aware graph attention layer is applied, followed by an attention pooling, which outputs the learned embedding for the whole graph. For the final pairwise classification, these learned embeddings for each of the devices are multiplied in an entry-wise manner and are sent to a fully connected deep neural network to determine whether they belong to the same user.
## 3 Hierarchical graph neural network
In this section, we discuss how we employ a two-level heterogeneous graph neural network for the cross-device user matching problem.
### Problem definition
The aim of the cross-device user matching problem is to determine whether two devices belong to the same user, given only the URL visits of each device. Denote a sequence of visited URLs by a device \(v\) by \(\mathcal{S}_{v}=\{s_{1},s_{2},...,s_{n}\}\), where \(s_{i}\) denotes the \(i\)-th URL visit by the device (note that the \(s_{i}\)'s are not necessarily distinct). We build a hierarchical heterogeneous graph, \(G_{v}\), based on the sequence \(\mathcal{S}_{v}\) as follows: for a visited URL, \(s_{i}\), consider a _fine_ node in \(G_{v}\) and denote it by \(f_{i}\). Note that if multiple \(s_{i}\)'s correspond to the same URL, we only consider one node for it in \(G_{v}\). Then, we connect nodes corresponding to consecutively visited URLs by directed edges in the graph; we connect \(f_{i}\) and \(f_{i+1}\) by a directional edge (if \(f_{i}\) and \(f_{i+1}\) correspond to the same URL, the edge becomes a self-loop). Up to this point, we have defined the fine-level graph, and we are ready to construct the second level, which we call the _coarse_ level.
To construct the second level, we partition the sequence \(\mathcal{S}_{v}\) into non-overlapping subgroups of \(K\) URLs, where each subgroup consists of consecutively visited URLs (the last subgroup may have less than \(K\) URLs). For every subgroup \(j\), we consider a coarse node, \(c_{j}\), and connect it to all of the fine nodes corresponding to the URLs in subgroup \(j\) via undirected edges.
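For illustration, this two-level construction can be sketched in a few lines of plain Python. This is a minimal sketch, not the authors' implementation; the function name and the representation of edges as index pairs are assumptions made for demonstration.

```python
from collections import OrderedDict

def build_hierarchical_graph(urls, K=6):
    """Build the fine/coarse two-level graph of a device log.

    urls: list of visited URLs in temporal order (repeats allowed).
    Returns the fine-node index, the directed fine edges (self-loops
    for repeated URLs), and the coarse-to-fine undirected edges.
    """
    # One fine node per distinct URL, indexed by order of first visit.
    fine_index = OrderedDict()
    for s in urls:
        fine_index.setdefault(s, len(fine_index))

    # Directed edge between consecutively visited URLs, Section 3.1.
    fine_edges = [(fine_index[urls[i]], fine_index[urls[i + 1]])
                  for i in range(len(urls) - 1)]

    # Coarse node j covers visits jK..(j+1)K-1 of the sequence S_v.
    num_coarse = (len(urls) + K - 1) // K
    coarse_edges = []
    for j in range(num_coarse):
        for s in set(urls[j * K:(j + 1) * K]):
            coarse_edges.append((j, fine_index[s]))

    return fine_index, fine_edges, num_coarse, coarse_edges
```

For the sequence [a, b, a, c] with \(K=2\), this yields three fine nodes, the directed edges a→b, b→a, and a→c, and two coarse nodes attached to {a, b} and {a, c}, respectively.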
### Fine level
In the fine level of the graph \(G_{v}\), for every node \(f_{i}\), we order the nodes corresponding to the URLs that have an incoming edge to \(f_{i}\) according to their position in \(\mathcal{S}_{v}\). We denote this ordered sequence of nodes by \(N_{i}=\{f_{j_{1}},f_{j_{2}},...,f_{j_{\kappa}}\}\). Also, we denote the feature vector of the fine node \(f_{i}\) by \(x_{i}\). The \(l\)-th round of message passing in the fine-level graph updates the node features according to the following update methods:
\[M_{i}^{(l)}=\Phi^{(l)}([x_{j_{1}}^{(l)},x_{j_{2}}^{(l)},\ldots,x_{j_{\kappa}}^{(l)},x_{i}^{(l)}]), \tag{1}\] \[x_{i}^{(l+1)}=\Psi^{(l)}(x_{i}^{(l)},M_{i}^{(l)}), \tag{2}\]
where \(\Phi^{(l)}\) is a sequence aggregation function (such as sum, max, GRU, LSTM, etc.), for which we use GRU [2], and \(\Psi^{(l)}\) is a function for updating the feature vector (e.g., a neural network), for which we use a simple mean.
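A minimal PyTorch sketch of one fine-level round (Eqs. (1)-(2)) follows, with a GRU playing the role of \(\Phi^{(l)}\) and a simple mean playing the role of \(\Psi^{(l)}\). The class name and the dense per-node loop are illustrative; a practical implementation would batch the variable-length neighbor sequences.

```python
import torch
import torch.nn as nn

class FineLevelLayer(nn.Module):
    """One fine-level round: Phi = GRU over the ordered in-neighbors
    plus the node itself (Eq. 1), Psi = mean (Eq. 2)."""

    def __init__(self, d):
        super().__init__()
        self.gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)

    def forward(self, x, ordered_in_neighbors):
        # x: (n, d) fine-node features; ordered_in_neighbors[i] lists the
        # in-neighbors of node i, ordered by their position in S_v.
        new_x = torch.empty_like(x)
        for i, nbrs in enumerate(ordered_in_neighbors):
            seq = torch.cat([x[nbrs], x[i:i + 1]], dim=0)  # [x_j1,...,x_jk, x_i]
            _, h = self.gru(seq.unsqueeze(0))              # M_i = Phi(...)
            new_x[i] = 0.5 * (x[i] + h.reshape(-1))        # Psi = mean
        return new_x
```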
### Coarse level
In every round of heterogeneous message passing between fine and coarse level nodes, we update both the fine and coarse node features. Consider the coarse node \(c_{j}\), and denote its feature by \(\tilde{x}_{j}\). Also, denote the fine neighbor nodes of \(c_{j}\) by \(\tilde{\mathcal{N}}(c_{j})\). In the \(l\)-th layer of heterogeneous message passing, the coarse node feature update is as follows:
\[\tilde{x}_{j}^{(l+1)}=\mathop{\square}\limits_{i\in\tilde{\mathcal{N}}(c_{j})}\big(W_{1}^{(l)}x_{i}^{(l)}\big), \tag{3}\]
where \(W_{1}^{(l)}\) is a learnable matrix and \(\square\) is an aggregation function (such as mean, max, sum, etc.), for which we use mean. Denote by \(\mathcal{N}(f_{i})\) the set of coarse nodes connected to the fine node \(f_{i}\). We first learn attention weights for the heterogeneous edges, and then we update fine nodes accordingly. In the \(l\)-th round of heterogeneous message passing, the fine node features are updated as follows:
\[e_{i,j}^{(l)}=\phi(W_{2}^{(l)}x_{i}^{(l)},W_{3}^{(l)}\tilde{x}_{j}^{(l)}), \tag{4}\] \[\alpha_{i,j}^{(l)}=\frac{\exp(e_{i,j}^{(l)})}{\sum_{j^{\prime}\in\mathcal{N}(f_{i})}\exp(e_{i,j^{\prime}}^{(l)})},\] (5) \[x_{i}^{(l+1)}=\xi\Big(x_{i}^{(l)},\sum_{j\in\mathcal{N}(f_{i})}\alpha_{i,j}^{(l)}\tilde{x}_{j}^{(l)}\Big), \tag{6}\]
where \(W_{2}^{(l)}\) and \(W_{3}^{(l)}\) are learnable matrices, and \(\xi\) and \(\phi\) are update functions (such as a fully connected network). Figure 2 shows the overall architecture of fine and coarse level message passing.
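The following sketch implements one heterogeneous round (Eqs. (3)-(6)) in PyTorch. Since the text leaves \(\phi\) and \(\xi\) open, a dot-product score and a residual sum are used here as stand-ins; these, along with the class name, are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseFineLayer(nn.Module):
    """Coarse update by mean aggregation (Eq. 3), then attention-weighted
    coarse-to-fine updates (Eqs. 4-6)."""

    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.W3 = nn.Linear(d, d, bias=False)

    def forward(self, x, coarse_to_fine):
        # coarse_to_fine[j]: indices of fine nodes linked to coarse node j.
        # Eq. (3) with the mean as the aggregation operator.
        x_tilde = torch.stack([self.W1(x[idx]).mean(dim=0)
                               for idx in coarse_to_fine])

        # Invert the incidence lists: coarse neighbors of every fine node.
        fine_to_coarse = [[] for _ in range(x.size(0))]
        for j, idx in enumerate(coarse_to_fine):
            for i in idx:
                fine_to_coarse[i].append(j)

        # Eqs. (4)-(6): each fine node attends over its coarse neighbors.
        q, k = self.W2(x), self.W3(x_tilde)
        new_x = x.clone()
        for i, js in enumerate(fine_to_coarse):
            e = (q[i] * k[js]).sum(dim=-1)            # phi as dot product
            alpha = F.softmax(e, dim=0)               # Eq. (5)
            new_x[i] = x[i] + alpha @ x_tilde[js]     # xi as residual sum
        return new_x, x_tilde
```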
### Cross attention
After the message passing rounds in the fine level and long-range information sharing between fine and coarse nodes, we extract the learned fine node embeddings and proceed to cross encoding and feature filtering, inspired by the
Figure 2: From left to right: heterogeneous (fine and coarse) graph modeling from a given URL sequence. The hierarchical message passing blocks consist of message passing on the fine nodes with a GRU aggregation function. Next, the coarse node features are updated using a mean aggregation function. Finally, the fine node features are updated using their previous feature vector as well as an aggregated message from their associated coarse nodes obtained via an attention mechanism between coarse and fine level nodes.
GraphER architecture [8]. We consider two different device logs \(v\) and \(w\) and treat their learned fine node embeddings as sequences, ignoring the underlying graph structure. We denote the learned fine node embeddings for device logs \(v\) and \(w\) as \(X_{v}\in\mathbb{R}^{m_{v}\times d}\) and \(X_{w}\in\mathbb{R}^{m_{w}\times d}\), where \(m_{v}\) and \(m_{w}\) are the numbers of nodes in the fine levels of \(G_{v}\) and \(G_{w}\), respectively. We learn two matrices for cross-encoding \(X_{v}\) into \(X_{w}\) and vice versa. Consider the \(i\)-th and \(j\)-th rows of \(X_{v}\) and \(X_{w}\), respectively, and denote them by \(x_{v,i}\) and \(x_{w,j}\). The entries \(\hat{\alpha}_{i,j}\) of the matrix \(A_{v,w}\) for cross-encoding \(X_{v}\) into \(X_{w}\) are obtained using an attention mechanism (and similarly for \(A_{w,v}\)):
\[\hat{e}_{i,j}=\zeta(W_{3}x_{v,i},W_{3}x_{w,j}), \tag{7}\] \[\hat{\alpha}_{i,j}=\frac{\exp(\hat{e}_{i,j})}{\sum_{k=1}^{m_{w}}\exp(\hat{e}_{i,k})}, \tag{8}\]
where \(\zeta\) is a scoring function (e.g., a neural network), for which we use a simple mean. After obtaining the cross-encoding weights, we apply feature filtering, a self-attention mechanism that selects important features. The filtering vector is obtained as \(\beta_{v}=\text{sigmoid}(W_{4}\text{tanh}(W_{5}X_{v}^{T}))\), where \(W_{4}\) and \(W_{5}\) are learnable weights (\(\beta_{w}\) is obtained similarly). We apply the feature-filtering vector to the cross-encoding matrix as follows:
\[L_{v,w}=[\text{diag}(\beta_{v})(A_{v,w}X_{w}-X_{v})]\odot[\text{diag}(\beta_{v})(A_{v,w}X_{w}-X_{v})], \tag{9}\]
where \(\odot\) denotes the Hadamard product (\(L_{w,v}\) is obtained similarly). The matrices \(L_{v,w}\in\mathbb{R}^{m_{v}\times d}\) and \(L_{w,v}\in\mathbb{R}^{m_{w}\times d}\) capture the entry-wise squared distance between the cross-encoding \(A_{v,w}X_{w}\) and \(X_{v}\) (and vice versa), and are therefore a measure of the closeness of the original sequence logs of \(v\) and \(w\).
To obtain a size-independent comparison metric, we apply a multi-layer perceptron (MLP) along the feature dimension of \(L\) matrices (the second dimension, \(d\)), followed by a max-pooling operation along the first dimension. Finally, we apply a dropout and a ReLU nonlinearity. This yields vectors \(r_{v,w}\) and \(r_{w,v}\) that have a fixed size for any pair of \(v\) and \(w\). For the final pairwise classification task, we concatenate \(r_{v,w}\) and \(r_{w,v}\) and pass it through an MLP followed by a sigmoid activation to determine if the two devices belong to the same user or not:
\[\hat{y}=\text{sigmoid}(\text{MLP}(r_{v,w}||r_{w,v})). \tag{10}\]
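Putting Eqs. (7)-(10) together, a hedged PyTorch sketch of the matching head follows. The hidden width, the dropout rate, and the dot-product score used in place of \(\zeta\) are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class CrossAttentionHead(nn.Module):
    """Cross-encoding (Eqs. 7-8), feature filtering (Eq. 9), and the
    final pairwise classifier (Eq. 10)."""

    def __init__(self, d, hidden=128, p_drop=0.5):
        super().__init__()
        self.W3 = nn.Linear(d, d, bias=False)       # shared projection, Eq. (7)
        self.W5 = nn.Linear(d, hidden, bias=False)  # feature filtering:
        self.W4 = nn.Linear(hidden, 1, bias=False)  # beta = sig(W4 tanh(W5 X^T))
        self.row_mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        self.drop = nn.Dropout(p_drop)
        self.out = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def encode(self, Xa, Xb):
        # Rows of Xa attend over rows of Xb; rows of A sum to one (Eq. 8).
        A = torch.softmax(self.W3(Xa) @ self.W3(Xb).T, dim=1)
        beta = torch.sigmoid(self.W4(torch.tanh(self.W5(Xa))))   # (m_a, 1)
        L = (beta * (A @ Xb - Xa)) ** 2                          # Eq. (9)
        # MLP along features, max-pool over nodes (size-independent vector),
        # then dropout and a ReLU as described in the text.
        return torch.relu(self.drop(self.row_mlp(L).max(dim=0).values))

    def forward(self, Xv, Xw):
        r_vw, r_wv = self.encode(Xv, Xw), self.encode(Xw, Xv)
        return torch.sigmoid(self.out(torch.cat([r_vw, r_wv])))  # Eq. (10)
```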
## 4 Experiment
In this section, we describe the dataset and training details and discuss how our method outperforms all other baselines, including TGCE [6], the previous state-of-the-art.
### Training details
We studied the cross-device user matching dataset made publicly available by the Data Centric Alliance§ for the CIKM Cup 2016 competition. The dataset consists of 14,148,535 anonymized URL logs of different devices, with an average of 197 logs per device. The dataset is split into 50,146 training and 48,122 test device logs. To obtain the initial embeddings of each URL, we applied the same data preprocessing methods as in [6, 11]. We used a coarse-to-fine node ratio of \(K=6\), a batch size of 800 pairs of device logs, and a learning rate of \(10^{-3}\), and trained the model for 20 epochs with the binary cross-entropy (BCE) loss. Training, evaluation, and testing were all executed on an NVIDIA A100 GPU. The BCE loss during training as well as the validation F1 score are shown in Figures 4 and 5, respectively.
Footnote §: [https://competitions.codalab.org/competitions/11171](https://competitions.codalab.org/competitions/11171)
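For concreteness, the training configuration above (BCE loss, learning rate \(10^{-3}\), 20 epochs, batches of 800 pairs) maps onto a loop like the one below. The stand-in model and random tensors are placeholders only, and the use of Adam is an assumption, since the text does not name the optimizer.

```python
import torch
import torch.nn as nn

# Placeholder for the full HGNN+Cross-Att model: any module mapping a
# pair representation to a match probability fits this loop.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

pair_features = torch.randn(800, 64)            # one batch of 800 device pairs
labels = torch.randint(0, 2, (800, 1)).float()  # 1 = same user, 0 = different

for epoch in range(20):
    y_hat = model(pair_features)
    loss = loss_fn(y_hat, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```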
### Results
In this section, we evaluate the precision, recall, and F1 score of our method on the test set and compare it to available baselines. All of the baselines were obtained as described in [6]. We present two variants of our method; the first, which we label "HGNN", differs from TGCE only in the design of the second tier, i.e., we use the hierarchical structure presented in subsections 3.2 and 3.3, followed by the rest of the TGCE architecture. The second variant, which we label "HGNN+Cross-Att", uses the hierarchical structure in
Figure 3: Pairwise device graph matching: After the message passing, the two device graphs are cross-encoded via an attention mechanism followed by an attention-based feature filtering. The resulting matrix for each graph is then passed through an MLP layer, acting along the feature, followed by a maxpool operator along the nodes. Next, the obtained vectors pass through a dropout layer followed by an activation function. Finally, the resulting vectors of the two graphs are concatenated and passed through an MLP to obtain the final output.
Figure 4: Binary cross-entropy loss of our proposed method against that of TGCE. During training, our method obtains strictly better loss values.
Figure 5: Validation F1 score during training. Throughout the training, our method achieves strictly better F1 scores for the validation set compared to that of TGCE.
subsections 3.2 and 3.3 and also utilizes the cross-attention mechanism presented in subsection 3.4 after the hierarchical structure. As shown in Table 1, the "HGNN+Cross-Att" variant outperforms all of the baselines on the F1 score metric, beating the second-best method (TGCE) by 5% on the test data.
We also compare the training time of the two variants of our method with that of TGCE. As shown in Table 2, our hierarchical structure is significantly more efficient than that of TGCE while keeping a competitive F1 score. Table 2 indicates that by simply replacing the second-tier design of TGCE with our hierarchical structure (presented in subsections 3.2 and 3.3), the method becomes 6x faster with almost the same performance. The speedup comes from avoiding the large number of artificial edges generated by the random walks used to create the second tier of TGCE. Moreover, although including cross-attention slows down the model, we can still match the training time of TGCE and achieve a 5% better overall F1 score.
Figure 6 compares the precision-recall curve of our method (the HGNN+Cross-Att variant, trained for 6 epochs) with that of TGCE (trained for 20 epochs). As shown in the figure, the precision-recall curve of our method is strictly better than that of TGCE; in other words, for every recall score, our method has better precision. Additionally, we further trained the HGNN+Cross-Att variant for 20 epochs (the same number of epochs TGCE was trained for) to study whether any further improvement is achieved on the test set. We also plot the F1 score at different thresholds (from 0 to 1 in increments of 0.01) for our model trained for 6 and 20 epochs and compare it to that of TGCE. As shown in Figure 7, our model trained for 20 epochs strictly outperforms TGCE (also trained for 20 epochs) at every threshold for obtaining the F1 score.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Precision at Best F1 Score** & **Recall at Best F1 Score** & **Best F1 Score** \\ \hline TF-IDF & 0.33 & 0.27 & 0.26 \\ \hline Doc2vec & 0.29 & 0.21 & 0.24 \\ \hline SCEmNet & 0.38 & 0.44 & 0.41 \\ \hline GRU & 0.37 & **0.49** & 0.42 \\ \hline Transformer & 0.39 & 0.47 & 0.43 \\ \hline SR-GNN & 0.35 & 0.34 & 0.34 \\ \hline LESSR & 0.41 & 0.48 & 0.44 \\ \hline TGCE & 0.49 & 0.44 & 0.46 \\ \hline HGNN (ours) & 0.48 & 0.43 & 0.45 \\ \hline HGNN+Cross-Att (ours) & **0.57** & 0.48 & **0.51** \\ \hline \end{tabular}
\end{table}
Table 1: Precision, recall, and F1 score of different methods for cross-device user matching on the DCA dataset. Bold marks the best value in each column.
However, our model trained for 6 epochs achieves the best overall F1 score, which is 5% higher than that of TGCE. This is significant since, as shown in Table 2, the model takes the same time as TGCE to train.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Best F1 Score** & **End-to-end Training Time** & **Number of Epochs** \\ \hline TGCE & 0.46 & 60h & 20 \\ \hline HGNN (ours) & 0.45 & **10h** & 20 \\ \hline HGNN+Cross-Att (ours) & **0.51** & 60h & **6** \\ \hline \end{tabular}
\end{table}
Table 2: Best F1 score and end-to-end training time of HGNN (without Cross-Att), HGNN+Cross-Att, and TGCE. The HGNN model is 6x faster than TGCE with a slight trade-off (about 1%) on the accuracy side. The HGNN+Cross-Att model has the same training time as TGCE while achieving a 5% better F1 score.
Figure 6: Precision-Recall curve of the proposed method and that of TGCE on the test data.
## 5 Conclusions
In this paper, we present a novel graph neural network (GNN) architecture for a demanding entity resolution task: cross-device user matching, which determines whether two devices belong to the same user based only on their anonymized internet logs. Our method consists of an effective hierarchical structure for long-range message passing in the graph obtained from device URL logs. After passing device logs through this hierarchical GNN, we employ a cross-attention mechanism to effectively compare device logs against each other and determine whether they belong to the same user. We demonstrate that our method outperforms available baselines by at least 5% while having the same training time as the previous state-of-the-art method, establishing the effectiveness of our proposed approach.
#### Acknowledgements
This research was supported by NVIDIA Corporation.
|
2304.10527 | Multidimensional Uncertainty Quantification for Deep Neural Networks | Deep neural networks (DNNs) have received tremendous attention and achieved
great success in various applications, such as image and video analysis,
natural language processing, recommendation systems, and drug discovery.
However, inherent uncertainties derived from different root causes have been
realized as serious hurdles for DNNs to find robust and trustworthy solutions
for real-world problems. A lack of consideration of such uncertainties may lead
to unnecessary risk. For example, a self-driving autonomous car can misdetect a
human on the road. A deep learning-based medical assistant may misdiagnose
cancer as a benign tumor.
In this work, we study how to measure different uncertainty causes for DNNs
and use them to solve diverse decision-making problems more effectively. In the
first part of this thesis, we develop a general learning framework to quantify
multiple types of uncertainties caused by different root causes, such as
vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e.,
uncertainty due to conflicting evidence), for graph neural networks. We provide
a theoretical analysis of the relationships between different uncertainty
types. We further demonstrate that dissonance is most effective for
misclassification detection and vacuity is most effective for
Out-of-Distribution (OOD) detection. In the second part of the thesis, we study
the significant impact of OOD objects on semi-supervised learning (SSL) for
DNNs and develop a novel framework to improve the robustness of existing SSL
algorithms against OODs. In the last part of the thesis, we create a general
learning framework to quantify multiple uncertainty types for multi-label
temporal neural networks. We further develop novel uncertainty fusion operators
to quantify the fused uncertainty of a subsequence for early event detection. | Xujiang Zhao | 2023-04-20T17:54:34Z | http://arxiv.org/abs/2304.10527v1 | # Multidimensional Uncertainty Quantification for Deep Neural Networks
MULTIDIMENSIONAL UNCERTAINTY QUANTIFICATION FOR DEEP NEURAL NETWORKS
by
XUJIANG ZHAO
DISSERTATION
Presented to the Faculty of
The University of Texas at Dallas
in Partial Fulfillment
of the Requirements
for the Degree of
DOCTOR OF PHILOSOPHY IN COMPUTER SCIENCE
THE UNIVERSITY OF TEXAS AT DALLAS
August 2022
###### Acknowledgements.
I am tremendously grateful to my advisor, Professor Feng Chen, for his excellent advice, encouragement, and support over the years. He brought my attention to machine learning, data mining, and other exciting research topics at the early stage of my PhD program. During my studies at UAlbany and UT Dallas, he was always willing to help me with suggestions for my research. His excellent research personality profoundly influences me.
I want to thank my committee members, Professor Gopal Gupta, Professor Rishabh Iyer, and Professor Chung Hwan Kim, for their willingness to serve as my committee members. I would also like to thank my Examining Committee Chair, Professor Justin Ruths, for his valuable time. I got invaluable help and great feedback from them. Their many critically important comments helped me to improve this dissertation significantly.
I am grateful to Professor Jin-Hee Cho, Professor Rishabh Iyer, and Professor Qi Yu for their care and generous help, especially in the early stage of my PhD career. They gave me detailed instructions and support.
I received tremendous help from friends and colleagues: Professor Zhiqiang Tao, Dr. Chen Zhao, Dr. Baojian Zhou, Dr. Fei Jie, Dr. Adil Alim, Chunpai Wang, Changbin Li, Haoliang Wang, Yuzhe Ou, Linlin Yu, Yibo Hu, Zhuoyi Wang, and Junfeng Guo. I also had two fantastic summer internships at _Alibaba Damo Academy_ and _NEC Laboratories America_ in the summers of 2019 and 2021, advised by great team members Dr. Hongxia Yang and Dr. Xuchao Zhang.
I would also like to express my sincere gratitude to my parents, sister, and brother for their unconditional love.
Finally, this dissertation is dedicated to my wife, Chongchong, and my son, Shuhuan. I want to thank my wife and my son for their love and support.
June 2022
###### Abstract
Deep neural networks (DNNs) have received tremendous attention and achieved great success in various applications, such as image and video analysis, natural language processing, recommendation systems, and drug discovery. However, inherent uncertainties derived from different root causes have been realized as serious hurdles for DNNs to find robust and trustworthy solutions for real-world problems. A lack of consideration of such uncertainties may lead to unnecessary risk. For example, a self-driving autonomous car can misdetect a human on the road. A deep learning-based medical assistant may misdiagnose cancer as a benign tumor.
In this work, we study how to measure different uncertainty causes for DNNs and use them to solve diverse decision-making problems more effectively. In the first part of this thesis, we develop a general learning framework to quantify multiple types of uncertainties caused by different root causes, such as vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence), for graph neural networks. We provide a theoretical analysis of the relationships between different uncertainty types. We further demonstrate that dissonance is most effective for misclassification detection and vacuity is most effective for Out-of-Distribution (OOD) detection. In the second part of the thesis, we study the significant impact of OOD objects on semi-supervised learning (SSL) for
DNNs and develop a novel framework to improve the robustness of existing SSL algorithms against OODs. In the last part of the thesis, we create a general learning framework to quantify multiple uncertainty types for multi-label temporal neural networks. We further develop novel uncertainty fusion operators to quantify the fused uncertainty of a subsequence for early event detection.
TABLE OF CONTENTS
ACKNOWLEDGMENTS
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
1.1 Motivations
1.2 Summary of Main Contributions
1.3 Outline
CHAPTER 2 UNCERTAINTY AWARE SEMI-SUPERVISED LEARNING ON GRAPH DATA
2.1 Introduction
2.2 Related Work
2.3 Multidimensional Uncertainty and Subjective Logic
2.3.1 Notations
2.3.2 Subjective Logic
2.3.3 Evidential Uncertainty
2.3.4 Probabilistic Uncertainty
2.4 Relationships Between Multiple Uncertainties
2.5 Uncertainty-Aware Semi-Supervised Learning
2.5.1 Problem Definition
2.5.2 Proposed Uncertainty Framework
2.5.3 Graph-based Kernel Dirichlet distribution Estimation (GKDE)
2.6 Experiments
2.6.1 Experiment Setup
2.6.2 Results
2.6.3 Why is Epistemic Uncertainty Less Effective than Vacuity?
2.6.4 Graph Embedding Representations of Different Uncertainty Types
2.7 Conclusion
* 3 UNCERTAINTY-AWARE ROBUST SEMI-SUPERVISED LEARNING WITH OUT OF DISTRIBUTION DATA
* 3.1 Introduction
* 3.2 Semi-Supervised Learning (SSL)
* 3.2.1 Notations
* 3.3 Impact of OOD on SSL Performance
* 3.4 Methodology
* 3.4.1 Uncertainty-Aware Robust SSL Framework
* 3.4.2 Bi-level Optimization Approximation
* 3.4.3 Weighted Batch Normalization
* 3.4.4 Additional Implementation Details:
* 3.5 Experiment
* 3.5.1 Evaluation on Synthetic Dataset
* 3.5.2 Real-world Dataset Details
* 3.5.3 Performance with different real-world OOD datasets
* 3.5.4 Efficiency Analysis
* 3.5.5 Additional Analysis
* 3.6 Conclusion
* 4 MULTI-LABEL TEMPORAL EVIDENTIAL NEURAL NETWORKS FOR EARLY EVENT DETECTION
* 4.1 Introduction
* 4.2 Related Work
* 4.3 Preliminaries
* 4.3.1 Notations
* 4.3.2 Evidential neural networks
* 4.3.3 Multi-label classification
* 4.4 Problem Formulation
* 4.5 Multi-Label Temporal Evidential Neural Networks
* 4.5.1 MTENN Framework
* 4.5.2 Loss
#### 4.5.3 Theoretical Analysis
* 4.6 Multi-label Sequential Uncertainty Quantification
* 4.6.1 Weighted Binomial Comultiplication
* 4.6.2 Uncertainty mean scan statistics.
* 4.7 Experiments
* 4.7.1 Experiment Details
* 4.7.2 Results and Analysis
* 4.8 Conclusion
* 5 CONCLUSION AND FUTURE WORK
* 5.1 Conclusion of Completed Work
* 5.2 Future Work
* 5.2.1 Quantification of multidimensional uncertainty
* 5.2.2 Interpretation of Multidimensional Uncertainty
* 6.1 PROOFS OF THE PROPOSED THEOREMS
* 6.1 Proof of Theorem 1
* 6.2 Derivations for Joint Probability and KL Divergence
* 6.2.1 Joint Probability
* 6.2.2 KL-Divergence
* 6.3 Proof of Theorem 2
* 6.4 Proof of Proposition 2
* 6.5 Proof of Proposition 3
* BIOGRAPHICAL SKETCH
* CURRICULUM VITAE
List of Figures
* 2.1 Multiple uncertainties of different predictions. Let \(\mathbf{u}=[u_{v},u_{diss},u_{alea},u_{epis},u_{en}]\).
* 2.2 Uncertainty Framework Overview. Subjective Bayesian GNN (a) is designed for estimating the different types of uncertainties. The loss function includes a square error (d) to reduce bias, GKDE (b) to reduce errors in uncertainty estimation, and teacher network (c) to refine class probability.
* 2.3 Illustration of GKDE. Estimate prior Dirichlet distribution \(\text{Dir}(\hat{\alpha})\) for node \(j\) (red) based on training nodes (blue) and graph structure information.
* 2.4 Graph embedding representations of the Cora dataset for classes and the extent of uncertainty: (a) shows the representation of seven different classes; (b) shows our model prediction; and (c)-(f) present the extent of uncertainty for respective uncertainty types, including vacuity, dissonance, aleatoric, epistemic.
* 2.5 Graph embedding representations of the Citeseer dataset for classes and the extent of uncertainty: (a) shows the representation of seven different classes, (b) shows our model prediction, and (c)-(f) present the extent of uncertainty for respective uncertainty types, including vacuity, dissonance, aleatoric, and epistemic.
* 2.6 Graph embedding representations of the Amazon Photo dataset for the extent of vacuity uncertainty based on OOD detection experiment.
* 3.1 (a) Traditional SSL. (b) SSL with OODs.
* 3.2 SSL performance with different type OODs in synthetic datasets.
* 3.3 SSL performance with different type OODs in real world datasets.
* 3.4 Main flowchart of the proposed Weighted Robust SSL algorithm.
* 3.5 Additional experiments on the synthetic dataset.
* 3.6 Classification accuracy with varying OOD ratio on MNIST. (a)-(b) consider faraway OODs with batch normalization; (c)-(d) consider boundary OODs without batch normalization. Shaded regions indicate standard deviation.
* 3.7 Classification accuracy with varying OOD ratio on CIFAR10. We use _WRN-28-2_ (contains BN module) as the backbone. Shaded regions indicate standard deviation.
* 3.8 Running time results. (a)-(b) show our proposed approaches are only \(1.7\times\) to \(1.8\times\) slower compared to base SSL algorithms, while other robust SSL methods are \(3\times\) slower. (c) shows that the running time of our method would increase with \(J\) (inner loop gradients steps) and \(P\) (inverse Hessian approximation) increase. (d)-(e) show the running time of our strategies with different combinations of tricks viz; last layer updates and updating weights every \(L\) iterations. Note that by using only last layer updates, our strategies are around \(2\times\) slower. With \(L=5\) and last layer updates, we are around \(1.7\times\) to \(1.8\times\) slower with comparable test accuracy.
* 3.9 (a) shows that our method learns optimal weights for ID and OOD samples; (b) shows that our method is stable even for a small validation set containing 25 images.
* 3.10 (a) shows that WBN and CRW (or L1 regularization) are critical in retaining the performance gains of reweighting; (b)-(c) demonstrate that the performance of our approach would increase as \(J\) (inner loop gradient steps) and \(P\) (inverse Hessian approximation) increase, due to the high-order approximation.
* 4.1 How many frames do we need to detect smoke and watch actions reliably? Can we even detect these actions before they finish? Existing event detectors are trained to recognize complete events only; they require seeing the entire event for a reliable decision, preventing early detection. We propose a learning formulation to recognize partial events, enabling early detection.
* 4.2 Illustration of overconfidence prediction. (Left) The occurrence of the event is falsely detected at the pre-event stage prior to its starting. This indicates that predicted probabilities are not reliable due to insufficient evidence. (Right) Instead of probabilities, subjective opinions (_e.g.,_ belief, disbelief, uncertainty) are used in the proposed method for early event detection.
* 4.3 **Framework Overview.** Given the streaming data, (b) MTENN is able to quantify predictive uncertainty due to a lack of evidence for multi-label classifications at each time stamp based on belief/evidence theory. Specifically, (a) at each time step with data segment \(x^{t}\), MTENN is able to predict a Beta distribution for each class, which can be equivalently transferred to a subjective opinion \(\omega_{t}\); (c) based on a sliding window, two novel fusion operators (weighted binomial comultiplication and uncertainty mean scan statistics) are introduced to quantify the fused uncertainty of a sub-sequence for an early event.
* 4.4 Uncertainty distribution change at the early stage of an ongoing event
* 4.5 Sensitivity analysis of the uncertainty threshold. There is a tradeoff between detection delay and detection accuracy: the higher the uncertainty threshold, the more overconfident predictions.
* 4.6 Sensitivity analysis of the sliding window size. As the sliding window size increases, the detection delay continuously decreases, and the detection F1 increases until the sliding window size is large enough.
* 4.7 Per-Class Evaluation on Audio (Engine) dataset.
List of Tables
* 2.1 Important notations and corresponding descriptions.
* 2.2 Description of datasets and their experimental setup for the node classification prediction.
* 2.3 Hyperparameter configurations of S-BGCN-T-K model
* 2.4 Description of datasets and their experimental setup for the OOD detection.
* 2.5 AUROC and AUPR for the Misclassification Detection.
* 2.6 AUROC and AUPR for the OOD Detection.
* 2.7 Ablation experiment on AUROC and AUPR for the Misclassification Detection.
* 2.8 Big-O time complexity of our method and baseline GCN.
* 2.9 Ablation experiment on AUROC and AUPR for the OOD Detection.
* 2.10 Compare with DropEdge on Misclassification Detection
* 2.11 Epistemic uncertainty for semi-supervised image classification.
* 3.1 Important notations and corresponding descriptions.
* 3.2 Uncertainty for different type OODs based on MNIST.
* 3.3 Uncertainty for different type OODs based on CIFAR10.
* 3.4 Test accuracies for the two moons dataset (\(P=10,J=3\) for R-SSL-IFT and \(J=1\) for R-SSL-Meta).
* 3.5 Test accuracies for different \(P\) at OOD ratio = 50% on the synthetic dataset.
* 3.6 Hyperparameter settings used in MNIST experiments for four representative SSL. All robust SSL methods (e.g., ours (WR-SSL), DS3L, and UASD) are developed based on these representative SSL methods.
* 3.7 SVHN-Extra (VAT) with different OOD ratios.
* 3.8 CIFAR100 (MT) with different OOD ratios.
* 3.9 Test accuracies for different numbers of clusters \(K\) on the MNIST dataset with 50% Mean MNIST as OODs
* 4.1 Important notations and corresponding descriptions.
* 4.2 Description of datasets and their experimental setup for the early event detection.
* 4.3 Early sound event detection performance on Audio datasets.
* 4.4 Early human action detection performance on AVA datasets with different segment lengths (ST).
* 4.5 Ablation study. MTENN-BC: a variant of MTENN-WBC that uses binomial comultiplication instead of weighted binomial comultiplication; MTENN (Phase I): only considers Phase I to predict events without any sequential uncertainty head; MTENN w/o MTENN loss: a variant of MTENN (Phase I) that considers BCE loss.
* 4.6 Compare inference time with different methods.
## Chapter 1 Introduction
### 1 Motivations
Deep neural networks have reached almost every field of science in the last decade and have become an essential component of a wide range of real-world applications (Szegedy et al., 2015; Graves, 2013; Wang et al., 2021; Dong et al., 2022). Because of this increasing spread, confidence in neural network predictions has become increasingly important. However, basic neural networks do not provide certainty estimates and suffer from over-confidence or under-confidence, indicating poor calibration. For instance, given images of several cat breeds as training data for a neural network, the model should return a prediction with relatively high confidence when a test sample is an image of a similar cat breed. However, when the test sample is a dog image, is the neural network able to recognize that the test sample comes from a different data distribution (_out-of-distribution_ test data) (Gal and Ghahramani, 2016; Wang et al., 2022)? Ideally, we hope that neural networks can say "I do not know" when a test sample is an out-of-distribution sample. Unfortunately, the answer is "no": the model was trained with images of different kinds of cats and has hopefully learned to recognize them, but it has never seen a dog before, and an image of a dog would be outside of the data distribution the model was trained on. This is even more critical in high-risk fields, such as medical image analysis with a diagnostics system facing conditions it has never observed, or scenes that a self-driving system has never been trained to handle (Kendall and Gal, 2017).
A possible desired behavior of a model in such cases would be to return a prediction with the additional information that the test sample lies outside the data distribution. In other words, we hope our model can quantify a high level of uncertainty (alternatively, low confidence) for such out-of-distribution inputs. In addition to the out-of-distribution
situation, other scenarios may lead to uncertainty. One scenario is noisy data: for example, label noise due to measurement imprecision causes aleatoric uncertainty (Kendall and Gal, 2017), and conflicting evidence causes dissonance uncertainty (Josang et al., 2018). Another scenario is the uncertainty in the model parameters that best explain the observed (training) data (Gal and Ghahramani, 2015).
Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction to overcome over- or under-confident predictions. As a result, different types and sources of uncertainty have been identified, and various approaches to measure and quantify uncertainty in neural networks have been proposed (Guo et al., 2022). Predictive uncertainty quantification in the deep learning research domain has mainly been explored based on two uncertainty types: aleatoric and epistemic uncertainty. Aleatoric uncertainty refers to inherent ambiguity in outputs for a given input and cannot be reduced due to randomness in the data. Aleatoric uncertainty can be estimated by probabilistic neural networks, such as mixture density networks (MacKay and Gibbs, 1999). Epistemic uncertainty indicates uncertainty about the model parameters estimated based on the training data. This uncertainty measures how well the model is matched to the data and can be reduced through the collection of additional data. Epistemic uncertainty can be estimated based on Bayesian neural networks (BNNs) that learn a posterior distribution over parameters. The accuracy of uncertainty estimation depends on the choice of the prior distribution and the accuracy of the approximate posterior distribution, as the exact posterior is often infeasible to compute. Recent developments in approximate Bayesian approaches include the Laplace approximation (MacKay, 1992), variational inference (Graves, 2011), dropout-based variational inference (Gal and Ghahramani, 2016), expectation propagation (Hernandez-Lobato and Adams, 2015), and stochastic gradient Markov chain Monte Carlo (MCMC) (Welling and Teh, 2011).
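As a concrete illustration of the dropout-based route, the sketch below keeps dropout active at test time and decomposes the predictive entropy over stochastic forward passes into an aleatoric part (expected entropy) and an epistemic part (mutual information). The toy network and the number of passes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, T=50):
    """Dropout-based variational inference (Gal and Ghahramani, 2016):
    average T stochastic forward passes with dropout left on."""
    model.train()  # keeps nn.Dropout stochastic during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    model.eval()

    entropy = lambda p: -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
    mean_p = probs.mean(dim=0)
    total = entropy(mean_p)                  # total predictive uncertainty
    aleatoric = entropy(probs).mean(dim=0)   # expected entropy over passes
    epistemic = total - aleatoric            # mutual information
    return mean_p, aleatoric, epistemic

# Toy classifier with dropout, evaluated on random inputs:
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 3))
mean_p, alea, epi = mc_dropout_predict(net, torch.randn(5, 10))
```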
In the belief (or evidence) theory domain, uncertainty reasoning has been substantially explored in frameworks such as Fuzzy Logic (De Silva, 2018), Dempster-Shafer Theory (DST) (Sentz et al.,
2002), and Subjective Logic (SL) (Josang, 2016). Belief theory focuses on reasoning about inherent uncertainty in information caused by unreliable, incomplete, deceptive, or conflicting evidence. SL considers predictive uncertainty in subjective opinions in terms of _vacuity_ (i.e., a lack of evidence) (Zhao et al., 2018; Xu et al., 2021; Alim et al., 2019) and _vagueness_ (i.e., failing to discriminate a belief state) (Josang, 2016). Recently, other uncertainty types have been studied, such as _dissonance_ caused by conflicting evidence (Josang et al., 2018; Zhao et al., 2019; Shi et al., 2020).
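To make vacuity and dissonance concrete, the sketch below computes both from the per-class evidence of a multinomial opinion: vacuity is the number of classes divided by the Dirichlet strength, and dissonance aggregates the relative mass balance between pairs of belief masses (Josang et al., 2018). The function name and the example evidence vectors are illustrative.

```python
import numpy as np

def vacuity_dissonance(evidence):
    """Vacuity and dissonance of a multinomial opinion given
    nonnegative per-class evidence (array of size K)."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K        # Dirichlet strength with a uniform prior
    b = evidence / S              # per-class belief masses
    vacuity = K / S               # uncertainty from a lack of evidence

    def balance(bj, bk):          # relative mass balance of two beliefs
        return 1.0 - abs(bj - bk) / (bj + bk) if bj + bk > 0 else 0.0

    dissonance = 0.0
    for k in range(K):
        others = [j for j in range(K) if j != k]
        denom = sum(b[j] for j in others)
        if denom > 0:
            dissonance += b[k] * sum(b[j] * balance(b[j], b[k])
                                     for j in others) / denom
    return vacuity, dissonance

print(vacuity_dissonance([10, 10, 0]))      # conflicting: low vacuity, high dissonance
print(vacuity_dissonance([0.1, 0.2, 0.1]))  # scarce: high vacuity, low dissonance
```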
However, inherent uncertainties derived from different root causes have been realized as serious hurdles for DNNs to find robust and trustworthy solutions for real-world problems. A lack of consideration of such uncertainties may lead to unnecessary risk. For example, a self-driving autonomous car can misdetect a human on the road. A deep learning-based medical assistant may misdiagnose cancer as a benign tumor. In this work, we study how to measure different uncertainty causes for DNNs and use them to solve diverse decision-making problems more effectively.
(1) _Uncertainty-aware semi-supervised learning on graph data_. We study multidimensional uncertainty quantification for graph neural networks (GNNs) (Kipf and Welling, 2017; Velickovic et al., 2018), which have received tremendous attention in the data science community. Despite their superior performance in semi-supervised node classification and regression, they do not consider various uncertainties in their decision process. Although many methods (Zhao et al., 2018; Liu et al., 2020; Zhang et al., 2018; Zhao et al., 2018) have been proposed to estimate uncertainty for GNNs, no prior work has considered uncertainty decomposition in GNNs. To address the challenge of uncertainty decomposition in GNNs, we pose the research question: _can we quantify multidimensional uncertainty types in both the deep learning and the belief and evidence theory domains for node-level classification, misclassification detection, and out-of-distribution detection tasks on graph data?_
(2) _Uncertainty-aware robust semi-supervised learning_. Recent semi-supervised learning (SSL) works show significant improvement in SSL algorithms' performance
using better representations of unlabeled data. However, recent work (Oliver et al., 2018) shows that an SSL algorithm's performance could degrade when the unlabeled set contains out-of-distribution examples. To address the challenge of out-of-distribution data in semi-supervised learning, we pose the research questions: _How does out-of-distribution data hurt semi-supervised learning performance?_ and _Can we develop an efficient and effective uncertainty-based approach for robust semi-supervised learning with out-of-distribution data?_
(3) _Uncertainty-aware early event detection with multiple labels_. Early event detection aims to detect an event even before it is complete. To achieve early detection, existing approaches can be broadly divided into several major categories. Prefix-based techniques (Gupta et al., 2020, 2020, 2019, 2019) aim to learn a minimum prefix length of the time series from the training instances and utilize it to classify a testing time series. Shapelet-based approaches (Yan et al., 2020; Zhao et al., 2019; Yao et al., 2019) focus on obtaining a set of key shapelets from the training dataset and utilizing them as class-discriminatory features. Model-based methods for early event detection (Mori et al., 2019; Lv et al., 2019) obtain conditional probabilities by either fitting a discriminative classifier or using generative classifiers on the training data. Although these approaches address the importance of early detection, they primarily focus on events with a single label and cannot be applied to cases with multiple labels. Another non-negligible issue for early event detection is overconfident prediction (Zhao et al., 2020; Sensoy et al., 2018) due to the high vacuity uncertainty (i.e., a lack of evidence) present early in a time series, which results in overconfident estimation and hence unreliable predictions. To address the aforementioned issues, we pose the research questions: _How can we quantify the uncertainty for multi-label time series classification?_ and _How can we make a reliable prediction for early event detection?_
In this dissertation, we aim to solve the challenges mentioned above. Therefore, a general question we want to ask is: How can we design effective and efficient uncertainty methods for deep neural networks under different problem settings?
### Summary of Main Contributions
To answer the above question, we propose a novel approach for each challenge by studying an effective and efficient algorithm followed by a detailed theoretical analysis. The main contributions are listed as follows.
(1) _Semi-supervised learning on graph data_. We proposed a multi-source uncertainty framework for GNNs. The proposed framework first provides the estimation of various types of uncertainty from both deep learning and evidence/belief theory domains, such as dissonance (derived from conflicting evidence) and vacuity (derived from lack of evidence). In addition, we designed a Graph-based Kernel Dirichlet distribution Estimation (GKDE) method to reduce errors in quantifying predictive uncertainties. Furthermore, we first provided a theoretical analysis of the relationships between different types of uncertainties considered in this work. We demonstrate via a theoretical analysis that an OOD node may have a high predictive uncertainty under GKDE. Based on the six real graph datasets, we compared the performance of our proposed framework with that of other competitive counterparts. We found that dissonance-based detection yielded the best results in misclassification detection, while vacuity-based detection performed best in OOD detection. The main results are from (Zhao et al., 2020)
(2) _Semi-supervised learning with OODs setting_. To answer the question of "_How out-of-distribution data hurt semi-supervised learning performance?_", we first study the critical causes of OOD's negative impact on SSL algorithms. In particular, we found that 1) certain kinds of OOD data instances close to the decision boundary have a more significant impact on performance than those far away, and 2) Batch Normalization (BN), a popular module, could degrade the performance instead of improving the performance when the unlabeled set contains OODs. To address the above causes, we proposed a novel unified weighted robust SSL framework that can be easily extended to many existing SSL algorithms and improve their robustness against OODs. To address the limitation of low-order approximations in
bi-level optimization, we developed an efficient hyper-parameter optimization algorithm that considers high-order approximations of the objective and is scalable to a higher number of inner optimization steps to learn a massive amount of weight parameters. In addition, we conduct a theoretical analysis of the impact of faraway OODs in the BN step and propose weighted batch normalization (WBN) to carry the weights over in the BN step. We also discuss the connection between our approach and low-order approximation approaches. Finally, we address a critical issue of the existing bi-level optimization-based reweighting schemes, which is that they are much slower (close to 3\(\times\)) compared to the original learning (SSL) algorithms - we show that several simple tricks like just considering the last layer in the inner loop and doing the weight updates every few epochs enable us to have a run-time comparable to the base SSL while maintaining the accuracy gains of reweighting. Extensive experiments on synthetic and real-world datasets prove that our proposed approach significantly improves the robustness of four representative SSL algorithms against OODs compared with four state-of-the-art robust SSL approaches. The main content is from (Zhao et al., 2020).
(3) _Early event detection with multi-label setting_. We first introduce a novel problem, namely _early event detection with multiple labels_. This problem setting considers a temporal event with multiple labels that occurs sequentially along the timeline. This work aims to accurately detect all classes at the ongoing stage of an event within the least amount of time. To this end, we propose a novel framework, Multi-Label Temporal Evidential Neural Network (MTENN), for early event detection in temporal data. MTENN is able to quantify predictive uncertainty due to the lack of evidence for multi-label classifications at each time stamp based on belief/evidence theory. In addition, we introduce two novel uncertainty estimation heads (weighted binomial comultiplication (WBC) and uncertainty mean scan statistics (UMSS)) to quantify the fused uncertainty of a sub-sequence for early event detection. We demonstrate that WBC is effective for detection accuracy and
UMSS is effective for detection delay. We validate the performance of our approach with state-of-the-art techniques on real-world audio and video datasets. Theoretic analysis and empirical studies demonstrate the effectiveness and efficiency of the proposed framework in both detection delay and accuracy. The results have been accepted in (Zhao et al., 2022).
### Outline
The rest of the dissertation is organized as follows. In Chapter 2, we present our research on multidimensional uncertainty quantification for GNNs. In Chapter 3, we study an uncertainty-based robust semi-supervised learning framework via bi-level optimization. In Chapter 4, we further quantify multi-label uncertainty and sequential uncertainty for early event detection. We conclude the dissertation in Chapter 5.
## Chapter 2 Uncertainty Aware Semi-Supervised Learning on Graph Data
### 2.1 Introduction
Inherent uncertainties derived from different root causes have emerged as serious hurdles to finding effective solutions for real-world problems. Critical safety concerns arise when diverse causes of uncertainty are not considered, resulting in high risk due to the misinterpretation of uncertainties (e.g., misdetection or misclassification of an object by an autonomous vehicle). Graph neural networks (GNNs) (Kipf and Welling, 2017; Velickovic et al., 2018) have received tremendous attention in the data science community. Despite their superior performance in semi-supervised node classification and regression, they do not consider the various types of uncertainty in their decision process. Predictive uncertainty estimation (Kendall and Gal, 2017) using Bayesian NNs (BNNs) has been explored for classification prediction and regression in computer vision applications, based on aleatoric uncertainty (AU) and epistemic uncertainty (EU). AU refers to data uncertainty from statistical randomness (e.g., inherent noise in observations), while EU indicates model uncertainty due to limited knowledge (e.g., ignorance) of the collected data. In the belief or evidence theory domain, Subjective Logic (SL) (Josang et al., 2018) considered vacuity (i.e., a lack of evidence or ignorance) as uncertainty in a subjective opinion. Recently, other uncertainty types, such as dissonance, consonance, vagueness, and monosonance (Josang et al., 2018), have been discussed based on SL and measured according to their different root causes.
We first considered multidimensional uncertainty types in both deep learning (DL) and belief and evidence theory domains for node-level classification, misclassification detection,
and out-of-distribution (OOD) detection tasks. By leveraging the learning capability of GNNs and considering multidimensional uncertainties, we propose an uncertainty-aware estimation framework by quantifying different uncertainty types associated with the predicted class probabilities. In this work, we made the following **key contributions**:
* **A multi-source uncertainty framework for GNNs**. Our proposed framework first provides the estimation of various types of uncertainty from both DL and evidence/belief theory domains, such as dissonance (derived from conflicting evidence) and vacuity (derived from lack of evidence). In addition, we designed a Graph-based Kernel Dirichlet distribution Estimation (GKDE) method to reduce errors in quantifying predictive uncertainties.
* **Theoretical analysis**: Our work is the first that provides a theoretical analysis about the relationships between different types of uncertainties considered in this work. We demonstrate via a theoretical analysis that an OOD node may have a high predictive uncertainty under GKDE.
* **Comprehensive experiments for validating the performance of our proposed framework**: Based on the six real graph datasets, we compared the performance of our proposed framework with that of other competitive counterparts. We found that dissonance-based detection yielded the best results in misclassification detection, while vacuity-based detection performed best in OOD detection.
Note that we use the term 'predictive uncertainty' to refer to uncertainty estimated for solving prediction problems.
### Related Work
DL research has mainly considered _aleatoric_ uncertainty (AU) and _epistemic_ uncertainty (EU) using BNNs for computer vision applications. AU consists of homoscedastic uncertainty (i.e., constant errors for different inputs) and heteroscedastic uncertainty (i.e., different errors for different inputs) (Gal, 2016). A Bayesian DL framework was presented to simultaneously estimate both AU and EU in regression (e.g., depth regression) and classification (e.g., semantic segmentation) tasks (Kendall and Gal, 2017). Later, _distributional uncertainty_ was defined based on a distributional mismatch between the testing and training data distributions (Malinin and Gales, 2018). _Dropout variational inference_ (Gal and Ghahramani, 2016) was used for approximate inference in BNNs using epistemic uncertainty, similar to _DropEdge_ (Rong et al., 2019). Other algorithms have considered overall uncertainty in node classification (Eswaran et al., 2017; Liu et al., 2020; Zhang et al., 2019). However, no prior work has considered uncertainty decomposition in GNNs.
In the belief (or evidence) theory domain, uncertainty reasoning has been substantially explored, such as in Fuzzy Logic (De Silva, 2018), Dempster-Shafer Theory (DST) (Sentz et al., 2002), and Subjective Logic (SL) (Josang, 2016). Belief theory focuses on reasoning about the inherent uncertainty in information caused by unreliable, incomplete, deceptive, or conflicting evidence. SL considered predictive uncertainty in subjective opinions in terms of _vacuity_ (i.e., a lack of evidence) and _vagueness_ (i.e., failing to discriminate a belief state) (Josang, 2016). Recently, other uncertainty types have been studied, such as _dissonance_ caused by conflicting evidence (Josang et al., 2018). For deep NNs, (Sensoy et al., 2018) proposed the evidential deep learning (EDL) model, using SL to train a deterministic NN for supervised classification in computer vision based on the sum-of-squares loss. However, EDL did not consider a general method of estimating multidimensional uncertainty or graph structure.
### Multidimensional Uncertainty and Subjective Logic
This section provides an overview of SL and discusses multiple types of uncertainty estimated based on SL, called _evidential uncertainty_, with the measures of _vacuity_ and _dissonance_. In addition, we give a brief overview of _probabilistic uncertainty_, discussing the measures of _aleatoric_ uncertainty and _epistemic_ uncertainty.
#### Notations
Vectors are denoted by lower-case boldface letters, _e.g._, the belief vector \(\mathbf{b}\in[0,1]^{K}\) and the class probability \(\mathbf{p}\in[0,1]^{K}\), whose \(i\)-th entries are \(b_{i}\) and \(p_{i}\). Scalars are denoted by lowercase italic letters, _e.g._, \(u\in[0,1]\). Matrices are denoted by capital italic letters. \(\omega\) denotes a subjective opinion. The important notations are listed in Table 2.1.

| **Notation** | **Description** |
| --- | --- |
| \(\mathcal{G}\) | Graph dataset |
| \(\mathbb{V}\) | A ground set of nodes |
| \(\mathbb{L}\) | Training nodes |
| \(\mathbb{E}\) | A ground set of edges |
| \(\mathbf{R}\) | Node-level feature matrix |
| \(y_{i}\) | Class label of node \(i\) |
| \(\mathbf{p}_{i}\) | Class probability of node \(i\) |
| \(\boldsymbol{\theta}\) | Model parameters |
| \(\omega\) | Subjective opinion |
| \(\mathbf{b}\) | Belief mass distribution |
| \(\boldsymbol{\alpha}\) | Dirichlet distribution parameters |
| \(u\) | Vacuity uncertainty |
| \(K\) | Number of classes |
| \(S\) | Dirichlet strength (sum of \(\boldsymbol{\alpha}\)) |
| \(\mathbf{e}\) | Evidence vector |
| \(diss(\omega)\) | Dissonance uncertainty based on opinion \(\omega\) |
| \(I(\cdot)\) | Mutual information |
| \(H(\cdot)\) | Entropy function |
| \(f(\cdot)\) | GNN model function |

Table 2.1: Important notations and corresponding descriptions.
#### 2.3.2 Subjective Logic
A multinomial opinion of a random variable \(y\) is represented by \(\omega=(\mathbf{b},u,\mathbf{a})\) where a domain is \(\mathbb{Y}\equiv\{1,\cdots,K\}\) and the additivity requirement of \(\omega\) is given as \(\sum_{k\in\mathbb{Y}}b_{k}+u=1\). To be specific, each parameter indicates,
* \(\mathbf{b}\): _belief mass distribution_ over \(\mathbb{Y}\) and \(\mathbf{b}=[b_{1},\ldots,b_{K}]^{T}\);
* \(u\): _uncertainty mass_ representing _vacuity of evidence_;
* \(\mathbf{a}\): _base rate distribution_ over \(\mathbb{Y}\) and \(\mathbf{a}=[a_{1},\ldots,a_{K}]^{T}\).
The projected probability distribution of a multinomial opinion can be calculated as:
\[P(y=k)=b_{k}+a_{k}u,\;\;\;\forall k\in\mathbb{Y}. \tag{2.1}\]
A multinomial opinion \(\omega\) defined above can be equivalently represented by a \(K\)-dimensional Dirichlet probability density function (PDF), where the special case with \(K=2\) is the Beta PDF as a binomial opinion. Let \(\boldsymbol{\alpha}\) be a strength vector over the singletons (or classes) in \(\mathbb{Y}\) and \(\mathbf{p}=[p_{1},\cdots,p_{K}]^{T}\) be a probability distribution over \(\mathbb{Y}\). The Dirichlet PDF, with \(\mathbf{p}\) as a \(K\)-dimensional random vector, is defined by:
\[\text{Dir}(\mathbf{p}|\mathbf{\alpha})=\frac{1}{B(\mathbf{\alpha})}\prod\nolimits_{k\in \mathbb{Y}}p_{k}^{(\alpha_{k}-1)}, \tag{2.2}\]
where \(\frac{1}{B(\boldsymbol{\alpha})}=\frac{\Gamma(\sum_{k\in\mathbb{Y}}\alpha_{k})}{\prod_{k\in\mathbb{Y}}\Gamma(\alpha_{k})}\), \(\alpha_{k}\geq 0\), and \(p_{k}\neq 0\) if \(\alpha_{k}<1\).
The term _evidence_ is introduced as a measure of the amount of supporting observations collected from data that a sample should be classified into a certain class. Let \(e_{k}\) be the evidence derived for the class \(k\in\mathbb{Y}\). The total strength \(\alpha_{k}\) for the belief of each class \(k\in\mathbb{Y}\) can be calculated as: \(\alpha_{k}=e_{k}+a_{k}W\), where \(e_{k}\geq 0,\forall k\in\mathbb{Y}\), and \(W\) refers to a non-informative weight representing the amount of uncertain evidence. Given the Dirichlet PDF as defined above, the expected probability distribution over \(\mathbb{Y}\) can be calculated as:
\[\mathbb{E}[p_{k}]=\frac{\alpha_{k}}{\sum_{k=1}^{K}\alpha_{k}}=\frac{e_{k}+a_{k }W}{W+\sum_{k=1}^{K}e_{k}}. \tag{2.3}\]
The observed evidence in a Dirichlet PDF can be mapped to a multinomial opinion as follows:
\[b_{k}=\frac{e_{k}}{S},\ u=\frac{W}{S}, \tag{2.4}\]
where \(S=\sum_{k=1}^{K}\alpha_{k}\) refers to the Dirichlet strength. Without loss of generality, we set \(a_{k}=\frac{1}{K}\) and the non-informative prior weight (i.e., \(W=K\)), which indicates that \(a_{k}\cdot W=1\) for each \(k\in\mathbb{Y}\).
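To make the mapping concrete, the following is a minimal NumPy sketch (the function name is ours) of Eqs. (2.3)-(2.4), mapping an evidence vector to belief masses, vacuity, and expected class probabilities under the default base rate \(a_{k}=1/K\) and prior weight \(W=K\):

```python
import numpy as np

def opinion_from_evidence(e, W=None):
    """Map an evidence vector e (length K) to a multinomial opinion and the
    expected class probabilities, per Eqs. (2.3)-(2.4), with base rates
    a_k = 1/K and non-informative prior weight W = K."""
    e = np.asarray(e, dtype=float)
    K = len(e)
    W = K if W is None else W
    alpha = e + W / K       # alpha_k = e_k + a_k * W, with a_k * W = 1
    S = alpha.sum()         # Dirichlet strength
    b = e / S               # belief masses, Eq. (2.4)
    u = W / S               # vacuity, Eq. (2.4)
    p = alpha / S           # expected probabilities, Eq. (2.3)
    return b, u, p

print(opinion_from_evidence([30, 1, 1]))  # strong evidence for class 0 -> low vacuity
print(opinion_from_evidence([0, 0, 0]))   # no evidence -> vacuity u = 1
```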
#### 2.3.3 Evidential Uncertainty
In (Josang et al., 2018), we discussed a number of multidimensional uncertainty dimensions of a subjective opinion based on the formalism of SL, such as singularity, vagueness, vacuity, dissonance, consonance, and monosonance. These uncertainty dimensions can be observed from binomial, multinomial, or hyper opinions depending on their characteristics (e.g., the vagueness uncertainty is only observed in hyper opinions to deal with composite beliefs). In this work, we discuss two main uncertainty types that can be estimated in a multinomial opinion, which are _vacuity_ and _dissonance_.
The main cause of vacuity is a lack of evidence or knowledge, which corresponds to the uncertainty mass \(u\) of a multinomial opinion in SL: \(vac(\omega)\equiv u=K/S\), as estimated in Eq. (2.4). This uncertainty exists because the analyst may have insufficient information or knowledge to analyze the situation. The _dissonance_ of a multinomial opinion is derived from equal amounts of conflicting evidence and can be estimated from the differences between singleton belief masses (e.g., class labels), which lead to 'inconclusiveness' in decision-making applications. For example, consider a four-state multinomial opinion \((b_{1},b_{2},b_{3},b_{4},u,a)=(0.25,0.25,0.25,0.25,0.0,a)\) based on Eq. (2.4): although the vacuity \(u\) is zero, no decision can be made because equal amounts of belief support each class. Given a multinomial opinion with non-zero belief masses, the measure of
dissonance can be calculated as:
\[diss(\omega)=\sum_{i=1}^{K}\Big{(}\frac{b_{i}\sum_{j\neq i}b_{j}\text{ Bal}(b_{j},b_{i})}{\sum_{j\neq i}b_{j}}\Big{)}, \tag{2.5}\]
where the relative mass balance between a pair of belief masses \(b_{j}\) and \(b_{i}\) is defined as \(\text{Bal}(b_{j},b_{i})=1-|b_{j}-b_{i}|/(b_{j}+b_{i})\). Note that the dissonance is measured only when the belief masses are non-zero. If all belief masses equal zero and the vacuity is 1 (i.e., \(u=1\)), the dissonance is set to zero.
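A small sketch of Eq. (2.5) follows (the helper name is ours); it reproduces the example above, where equal non-zero beliefs yield maximal dissonance:

```python
import numpy as np

def dissonance(b):
    """Dissonance of a multinomial opinion from its belief masses b, Eq. (2.5).
    Returns 0 when all belief masses are zero (pure vacuity, u = 1)."""
    b = np.asarray(b, dtype=float)
    if b.sum() == 0.0:
        return 0.0
    diss = 0.0
    for i in range(len(b)):
        rest = np.delete(b, i)
        if rest.sum() == 0.0:
            continue
        bal = 1.0 - np.abs(rest - b[i]) / (rest + b[i] + 1e-12)  # Bal(b_j, b_i)
        diss += b[i] * (rest * bal).sum() / rest.sum()
    return diss

print(dissonance([0.25, 0.25, 0.25, 0.25]))  # conflicting evidence -> 1.0
print(dissonance([0.97, 0.01, 0.01, 0.01]))  # confident prediction -> ~0.02
```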
#### 2.3.4 Probabilistic Uncertainty
For classification, the estimation of the probabilistic uncertainty relies on the design of an appropriate Bayesian DL model with parameters \(\mathbf{\theta}\). Given input \(x\) and dataset \(\mathcal{G}\), we estimate a class probability by \(P(y|x)=\int P(y|x;\mathbf{\theta})P(\mathbf{\theta}|\mathcal{G})d\mathbf{\theta}\), and obtain _epistemic uncertainty_ estimated by mutual information (Depeweg et al., 2018; Malinin and Gales, 2018):
\[\underbrace{I(y,\mathbf{\theta}|x,\mathcal{G})}_{\mathbf{Epistemic}}=\underbrace{ \mathcal{H}\big{[}\mathbb{E}_{P(\mathbf{\theta}|\mathcal{G})}[P(y|x;\mathbf{\theta}) ]\big{]}}_{\mathbf{Entropy}}-\underbrace{\mathbb{E}_{P(\mathbf{\theta}|\mathcal{G})} \big{[}\mathcal{H}[P(y|x;\mathbf{\theta})]\big{]}}_{\mathbf{Aleatoric}}, \tag{2.6}\]
where \(\mathcal{H}(\cdot)\) is Shannon's entropy of a probability distribution. The first term is the _entropy_, which represents the total uncertainty, while the second term is the _aleatoric_ uncertainty, which reflects data uncertainty. By computing the difference between the entropy and the aleatoric uncertainty, we obtain the epistemic uncertainty, which refers to uncertainty in the model parameters.
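As an illustration, here is a minimal sketch of the decomposition in Eq. (2.6) from Monte-Carlo samples (e.g., stochastic forward passes with dropout); the function name and array layout are our assumptions:

```python
import numpy as np

def uncertainty_decomposition(prob_samples):
    """Decompose predictive uncertainty per Eq. (2.6) from M stochastic
    forward passes (e.g., Monte-Carlo dropout samples of P(y|x; theta)).
    prob_samples: (M, K) array, each row a predicted class distribution."""
    p = np.asarray(prob_samples, dtype=float)
    eps = 1e-12
    p_mean = p.mean(axis=0)
    entropy = -(p_mean * np.log(p_mean + eps)).sum()       # H[E_theta[P(y|x; theta)]]
    aleatoric = -(p * np.log(p + eps)).sum(axis=1).mean()  # E_theta[H[P(y|x; theta)]]
    epistemic = entropy - aleatoric                        # mutual information I(y, theta)
    return entropy, aleatoric, epistemic
```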
### Relationships Between Multiple Uncertainties
We use the shorthand notations \(u_{v}\), \(u_{diss}\), \(u_{alea}\), \(u_{epis}\), and \(u_{en}\) to denote vacuity, dissonance, aleatoric uncertainty, epistemic uncertainty, and entropy, respectively.
To interpret multiple types of uncertainty, we show three prediction scenarios of 3-class classification in Figure 2.1, in each of which the strength parameters \(\alpha=[\alpha_{1},\alpha_{2},\alpha_{3}]\) are known. To make a prediction with high confidence, the subjective multinomial opinion,
following a Dirichlet distribution, will yield a sharp distribution on one corner of the simplex (see Figure 2.1 (a)). For a prediction with conflicting evidence, called a conflicting prediction (CP), the multinomial opinion should yield a central distribution, representing confidence to predict a flat categorical distribution over class labels (see Figure 2.1 (b)). For an OOD scenario with \(\alpha=[1,1,1]\), the multinomial opinion would yield a flat distribution over the simplex (Figure 2.1 (c)), indicating high uncertainty due to the lack of evidence. The first technical contribution of this work is as follows.
**Theorem 1**.: _We consider a simplified scenario, where a multinomial random variable \(y\) follows a \(K\)-class categorical distribution: \(y\sim\text{Cat}(\textbf{p})\), the class probabilities **p** follow a Dirichlet distribution: \(\textbf{p}\sim\text{Dir}(\boldsymbol{\alpha})\), and \(\boldsymbol{\alpha}\) refers to the Dirichlet parameters. Given a total Dirichlet strength \(S=\sum_{i=1}^{K}\alpha_{i}\), for any opinion \(\omega\) on a multinomial random variable \(y\), we have_
1. _General relations in all prediction scenarios: (a)_ \(u_{v}+u_{diss}\leq 1\)_; (b)_ \(u_{v}>u_{epis}\)_._
2. _Special relations for the OOD and the CP cases._ 1. _For an OOD sample with a uniform prediction (i.e.,_ \(\boldsymbol{\alpha}=[1,\ldots,1]\)_), we have_ \[1=u_{v}=u_{en}>u_{alea}>u_{epis}>u_{diss}=0\] 2. _For an in-distribution sample with a conflicting prediction (i.e.,_ \(\boldsymbol{\alpha}=[\alpha_{1},\ldots,\alpha_{K}]\) _with_ \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{K}\)_, as_ \(S\rightarrow\infty\)_), we have_ \[u_{en}=1,\quad\lim_{S\rightarrow\infty}u_{diss}=\lim_{S\rightarrow\infty}u_{alea}=1,\quad\lim_{S\rightarrow\infty}u_{v}=\lim_{S\rightarrow\infty}u_{epis}=0,\] _with_ \(u_{en}>u_{alea}>u_{diss}>u_{v}>u_{epis}\)_._

Figure 2.1: Multiple uncertainties of different predictions. Let \(\textbf{u}=[u_{v},u_{diss},u_{alea},u_{epis},u_{en}]\).
The proof of Theorem 1 can be found in Appendix A.1. As demonstrated in Theorem 1 and Figure 2.1, entropy cannot distinguish OOD (see Figure 2.1 (c)) and conflicting predictions (see Figure 2.1 (b)) because entropy is high for both cases. Similarly, neither aleatoric uncertainty nor epistemic uncertainty can distinguish OOD from conflicting predictions. In both cases, aleatoric uncertainty is high while epistemic uncertainty is low. On the other hand, vacuity and dissonance can clearly distinguish OOD from a conflicting prediction. For example, OOD objects typically show high vacuity with low dissonance while conflicting predictions exhibit low vacuity with high dissonance. This observation is confirmed through the empirical validation via our extensive experiments in terms of misclassification and OOD detection tasks.
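The two special cases of Theorem 1 can also be checked numerically. The sketch below (reusing the `dissonance` helper from the earlier sketch in Section 2.3.3) computes all five measures directly from Dirichlet parameters, using the closed-form expected entropy of a categorical distribution under a Dirichlet and normalizing entropies by \(\log K\) so that the theorem's \(u_{en}=1\) statement holds on the same scale:

```python
import numpy as np
from scipy.special import digamma

def uncertainties_from_alpha(alpha):
    """All five measures from Dirichlet parameters; entropies normalized by
    log K. Uses dissonance() from the earlier sketch in Section 2.3.3."""
    alpha = np.asarray(alpha, dtype=float)
    K, S = len(alpha), alpha.sum()
    p = alpha / S
    vacuity = K / S                               # u_v = K / S
    diss = dissonance((alpha - 1.0) / S)          # beliefs b_k = e_k / S, e_k = alpha_k - 1
    entropy = -(p * np.log(p)).sum() / np.log(K)  # total uncertainty
    # Closed-form E_{p ~ Dir(alpha)}[H(Cat(p))]:
    aleatoric = (digamma(S + 1.0) - (p * digamma(alpha + 1.0)).sum()) / np.log(K)
    epistemic = entropy - aleatoric               # mutual information
    return dict(vacuity=vacuity, dissonance=diss, aleatoric=aleatoric,
                epistemic=epistemic, entropy=entropy)

print(uncertainties_from_alpha([1, 1, 1]))        # OOD: vacuity = entropy = 1, dissonance = 0
print(uncertainties_from_alpha([100, 100, 100]))  # CP: dissonance ~ 1, vacuity ~ 0
```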
### Uncertainty-Aware Semi-Supervised Learning
In this section, we describe our proposed uncertainty framework for the semi-supervised node classification problem. It is designed to predict subjective opinions about the classification of testing nodes, such that a variety of uncertainty types, such as vacuity,
dissonance, aleatoric uncertainty, and epistemic uncertainty, can be quantified based on the estimated subjective opinions and posterior of model parameters. As a subjective opinion can be equivalently represented by a Dirichlet distribution about the class probabilities, we proposed a way to predict the node-level subjective opinions in the form of node-level Dirichlet distributions. The overall description of the framework is shown in Figure 2.2.
#### Problem Definition
Given an input graph \(\mathcal{G}=(\mathbb{V},\mathbb{E},\mathbf{r},\mathbf{y}_{\mathbb{L}})\), where \(\mathbb{V}=\{1,\ldots,N\}\) is a ground set of nodes, \(\mathbb{E}\subseteq\mathbb{V}\times\mathbb{V}\) is a ground set of edges, \(\mathbf{r}=[\mathbf{r}_{1},\cdots,\mathbf{r}_{N}]^{T}\in\mathbb{R}^{N\times d}\) is a node-level feature matrix, \(\mathbf{r}_{i}\in\mathbb{R}^{d}\) is the feature vector of node \(i\), \(\mathbf{y}_{\mathbb{L}}=\{y_{i}\mid i\in\mathbb{L}\}\) are the labels of the training nodes \(\mathbb{L}\subset\mathbb{V}\), and \(y_{i}\in\{1,\ldots,K\}\) is the class label of node \(i\). **We aim to predict**: (1) the **class probabilities** of the testing nodes: \(\mathbf{p}_{\mathbb{V}\setminus\mathbb{L}}=\{\mathbf{p}_{i}\in[0,1]^{K}\mid i \in\mathbb{V}\setminus\mathbb{L}\}\); and (2) the **associated multidimensional uncertainty estimates** introduced by different root causes: \(\mathbf{u}_{\mathbb{V}\setminus\mathbb{L}}=\{\mathbf{u}_{i}\in[0,1]^{m}\mid i \in\mathbb{V}\setminus\mathbb{L}\}\), where \(p_{i,k}\) is the probability that the class label \(y_{i}=k\) and \(m\) is the total number of uncertainty types.
Figure 2.2: Uncertainty Framework Overview. Subjective Bayesian GNN (a) is designed for estimating the different types of uncertainties. The loss function includes a square error (d) to reduce bias, GKDE (b) to reduce errors in uncertainty estimation, and teacher network (c) to refine class probability.
#### 2.5.2 Proposed Uncertainty Framework
**Learning evidential uncertainty.** As discussed in Section 2.3.2, evidential uncertainty can be derived from multinomial opinions or equivalently Dirichlet distributions to model a probability distribution for the class probabilities. Therefore, we design a Subjective GNN (S-GNN) \(f\) to form their multinomial opinions for the node-level Dirichlet distribution \(\text{Dir}(\mathbf{p}_{i}|\boldsymbol{\alpha}_{i})\) of a given node \(i\). Then, the conditional probability \(P(\mathbf{p}|A,\mathbf{r};\boldsymbol{\theta})\) can be obtained by:
\[P(\mathbf{p}|A,\mathbf{r};\boldsymbol{\theta})=\prod\nolimits_{i=1}^{N}\text{ Dir}(\mathbf{p}_{i}|\boldsymbol{\alpha}_{i}),\ \boldsymbol{\alpha}_{i}=f_{i}(A,\mathbf{r};\boldsymbol{\theta}), \tag{2.7}\]
where \(f_{i}\) is the output of S-GNN for node \(i\), \(\boldsymbol{\theta}\) is the model parameters, and \(A\) is an adjacency matrix. The Dirichlet probability function \(\text{Dir}(\mathbf{p}_{i}|\boldsymbol{\alpha}_{i})\) is defined by Eq. (2.2).
Note that S-GNN is similar to a classical GNN, except that we use an activation layer (e.g., \(ReLU\)) instead of the _softmax_ layer (which outputs only class probabilities). This ensures that S-GNN outputs non-negative values, which are taken as the parameters of the predicted Dirichlet distribution.
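A minimal PyTorch sketch of this output layer follows; the `backbone` module (any GNN mapping node features and an adjacency matrix to per-node logits) is an assumption, not part of this chapter's notation:

```python
import torch
import torch.nn.functional as F
from torch import nn

class SGNNHead(nn.Module):
    """A sketch of the S-GNN output described above: a GNN backbone followed
    by a non-negative activation, so the outputs serve as evidence e and the
    Dirichlet parameters are alpha = e + 1 (since a_k * W = 1)."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        evidence = F.relu(self.backbone(x, adj))  # ReLU instead of softmax
        return evidence + 1.0                     # node-level Dirichlet parameters
```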
**Learning probabilistic uncertainty.** Since probabilistic uncertainty relies on a Bayesian framework, we proposed a Subjective Bayesian GNN (S-BGNN) that adapts S-GNN to a Bayesian framework, with the model parameters \(\boldsymbol{\theta}\) following a prior distribution. The joint class probability of \(\mathbf{y}\) can be estimated by:
\[P(\mathbf{y}|A,\mathbf{r};\mathcal{G}) = \int\int P(\mathbf{y}|\mathbf{p})P(\mathbf{p}|A,\mathbf{r}; \boldsymbol{\theta})P(\boldsymbol{\theta}|\mathcal{G})d\mathbf{p}d\boldsymbol{\theta} \tag{2.8}\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\sum_{i=1}^{N}\int P(\mathbf{y}_{i}| \mathbf{p}_{i})P(\mathbf{p}_{i}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d \mathbf{p}_{i},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\]
where \(P(\boldsymbol{\theta}|\mathcal{G})\) is the posterior, estimated via dropout inference, which provides an approximate posterior \(q(\boldsymbol{\theta})\) from which model parameters are sampled (Gal and Ghahramani, 2016). Thanks to dropout inference, training a DL model directly by minimizing the cross-entropy (or squared-error) loss function effectively minimizes the KL-divergence between the approximate distribution and the full posterior (i.e., \(\text{KL}[q(\boldsymbol{\theta})\|P(\boldsymbol{\theta}|\mathcal{G})]\)) in variational inference (Gal and Ghahramani, 2016; Kendall et al., 2015). Interested readers may refer to Appendix B.8 for more detail.
Therefore, training S-GNN with stochastic gradient descent enables learning of an approximated distribution of weights, which can provide good explainability of data and prevent overfitting. We use a _loss function_ to compute its Bayes risk with respect to the sum of squares loss \(\|\mathbf{y}-\mathbf{p}\|_{2}^{2}\) by:
\[\mathcal{L}(\mathbf{\theta}) = \sum\nolimits_{i\in\mathbb{L}}\int\|\mathbf{y}_{i}-\mathbf{p}_{i }\|_{2}^{2}\cdot P(\mathbf{p}_{i}|A,\mathbf{r};\mathbf{\theta})d\mathbf{p}_{i} \tag{2.9}\] \[= \sum\nolimits_{i\in\mathbb{L}}\sum\nolimits_{k=1}^{K}\big{(}y_{ ik}-\mathbb{E}[p_{ik}]\big{)}^{2}+\text{Var}(p_{ik}),\]
where \(\mathbf{y}_{i}\) is a one-hot vector encoding the ground-truth class of node \(i\), with \(y_{ij}=1\) and \(y_{ik}=0\) for all \(k\neq j\), where \(j\) is the ground-truth class label. Eq. (2.9) aims to minimize both the prediction error and the variance, maximizing the classification accuracy on each training node by removing excessive misleading evidence.
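Using the standard Dirichlet moments \(\mathbb{E}[p_{k}]=\alpha_{k}/S\) and \(\text{Var}(p_{k})=\alpha_{k}(S-\alpha_{k})/(S^{2}(S+1))\), Eq. (2.9) admits the closed form sketched below (PyTorch; the function name is ours):

```python
import torch

def dirichlet_square_loss(alpha: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
    """Bayes risk of the sum-of-squares loss under Dir(p | alpha), Eq. (2.9).

    alpha:    (N, K) predicted Dirichlet parameters of the labeled nodes.
    y_onehot: (N, K) one-hot ground-truth labels.
    """
    S = alpha.sum(dim=1, keepdim=True)                  # Dirichlet strength
    p_mean = alpha / S                                  # E[p_k] = alpha_k / S
    p_var = alpha * (S - alpha) / (S.pow(2) * (S + 1))  # Var(p_k)
    return ((y_onehot - p_mean).pow(2) + p_var).sum(dim=1).mean()
```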
#### Graph-based Kernel Dirichlet distribution Estimation (GKDE)
The loss function in Eq. (2.9) is designed to measure the sum of squared loss based on class labels of training nodes. However, it does not directly measure the quality of the predicted node-level Dirichlet distributions (Zhao et al., 2019; 1). To address this limitation, we proposed _Graph-based Kernel Dirichlet distribution Estimation_ (GKDE) to better estimate node-level Dirichlet distributions by using graph structure information. The key idea of the GKDE is to estimate prior Dirichlet distribution parameters for each node based on the class labels of training nodes (see Figure 2.3). Then, we use the estimated prior Dirichlet distribution in the training process to learn the following patterns: (i) nodes with a high
vacuity will be located far from the training nodes; and (ii) nodes with a high dissonance will be located near the boundaries between classes.
Based on SL, let each training node represent one evidence for its class label. Denote the contribution of evidence estimation for node \(j\) from training node \(i\) by \(\mathbf{h}(y_{i},d_{ij})=[h_{1},\ldots,h_{k},\ldots,h_{K}]\in[0,1]^{K}\), where \(h_{k}(y_{i},d_{ij})\) is obtained by:
\[h_{k}(y_{i},d_{ij})=\begin{cases}0&y_{i}\neq k\\ g(d_{ij})&y_{i}=k,\end{cases} \tag{2.10}\]
\(g(d_{ij})=\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{ij}^{2}}{2\sigma^{2}})\) is the Gaussian kernel function used to estimate the distribution effect between nodes \(i\) and \(j\), \(d_{ij}\) is the **node-level distance** (**the shortest path between nodes \(i\) and \(j\)**), and \(\sigma\) is the bandwidth parameter. The prior evidence is estimated based on GKDE: \(\hat{\mathbf{e}}_{j}=\sum_{i\in\mathbb{L}}\mathbf{h}(y_{i},d_{ij})\), where \(\mathbb{L}\) is the set of training nodes, and the prior Dirichlet distribution is \(\hat{\boldsymbol{\alpha}}_{j}=\hat{\mathbf{e}}_{j}+\mathbf{1}\). During training, we minimize the KL-divergence between the model's predicted Dirichlet distribution and the prior distribution: \(\min\text{KL}[\text{Dir}(\boldsymbol{\alpha})\|\text{Dir}(\hat{\boldsymbol{\alpha}})]\). This process prioritizes data relevance based on the estimated evidential uncertainty, which is proven effective by the proposition below.
Figure 2.3: Illustration of GKDE. Estimate prior Dirichlet distribution \(\text{Dir}(\hat{\alpha})\) for node \(j\) (red) based on training nodes (blue) and graph structure information.
**Proposition 1**.: _Given \(L\) training nodes, for any testing nodes \(i\) and \(j\), let \(\mathbf{d}_{i}=[d_{i1},\ldots,d_{iL}]\) be the vector of graph distances from nodes \(i\) to training nodes and \(\mathbf{d}_{j}=[d_{j1},\ldots,d_{jL}]\) be the graph distances from nodes \(j\) to training nodes, where \(d_{il}\) is the node-level distance between nodes \(i\) and \(l\). If for all \(l\in\{1,\ldots,L\}\), \(d_{il}\geq d_{jl}\), then we have_
\[\hat{u}_{v_{i}}\geq\hat{u}_{v_{j}},\]
_where \(\hat{u}_{v_{i}}\) and \(\hat{u}_{v_{j}}\) refer to vacuity uncertainties of nodes \(i\) and \(j\) estimated based on GKDE._
Proof.: Let \(\mathbf{y}=[y_{1},\ldots,y_{L}]\) be the label vector of the training nodes. Based on GKDE, the evidence contribution for node \(i\) from a training node \(l\in\{1,\ldots,L\}\) is \(\mathbf{h}(y_{l},d_{il})=[h_{1}(y_{l},d_{il}),\ldots,h_{K}(y_{l},d_{il})]\), where
\[h_{k}(y_{l},d_{il})=\begin{cases}0&y_{l}\neq k\\ g(d_{il})=\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{il}{}^{2}}{2\sigma^{2}})&y_ {l}=k\end{cases}, \tag{2.11}\]
and the prior evidence can be estimated based on GKDE:

\[\hat{\mathbf{e}}_{i}=\sum_{l=1}^{L}\mathbf{h}(y_{l},d_{il}), \tag{2.12}\]

where \(\hat{\mathbf{e}}_{i}=[e_{i1},\ldots,e_{iK}]\). Since each training node contributes the same amount of evidence, determined by its label as in Eq. (2.11), the total evidence is the sum of all contributions:

\[\sum_{k=1}^{K}e_{ik}=\sum_{l=1}^{L}\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{il}^{2}}{2\sigma^{2}}),\quad\sum_{k=1}^{K}e_{jk}=\sum_{l=1}^{L}\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{jl}^{2}}{2\sigma^{2}}), \tag{2.13}\]
where the vacuity values for node \(i\) and node \(j\) based on GKDE are,
\[\hat{u}_{v_{i}}=\frac{K}{\sum_{k=1}^{K}e_{ik}+K},\quad\hat{u}_{v_{j}}=\frac{K} {\sum_{k=1}^{K}e_{jk}+K}. \tag{2.14}\]
Now, if \(d_{il}\geq d_{jl}\) for all \(l\in\{1,\ldots,L\}\), we have
\[\sum_{k=1}^{K}e_{ik} = \sum_{l=1}^{L}\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{il}^{2}}{2\sigma^{2}})\leq \sum_{l=1}^{L}\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{d_{jl}^{2}}{2\sigma^{2}})= \sum_{k=1}^{K}e_{jk},\]
such that
\[\hat{u}_{v_{i}}=\frac{K}{\sum_{k=1}^{K}e_{ik}+K}\geq\frac{K}{\sum_{k=1}^{K}e_{ jk}+K}=\hat{u}_{v_{j}}. \tag{2.16}\]
The above proposition shows that if a testing node is far away from the training nodes, its vacuity increases, implying that an OOD node is expected to have high vacuity.
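A sketch of the GKDE prior computation using NetworkX shortest-path distances follows; the function signature is our assumption, and nodes unreachable from a training node simply receive no evidence from it:

```python
import numpy as np
import networkx as nx

def gkde_prior_alpha(graph: nx.Graph, train_nodes, train_labels, K, sigma=1.0):
    """GKDE prior per Eq. (2.10): each training node spreads Gaussian-kernel
    evidence for its own class over the graph, weighted by the shortest-path
    distance; the prior Dirichlet parameters are alpha_hat = e_hat + 1.
    Assumes nodes are labeled 0..n-1."""
    n = graph.number_of_nodes()
    e_hat = np.zeros((n, K))
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for i, y in zip(train_nodes, train_labels):
        # Shortest-path distances from training node i to all reachable nodes.
        for j, d in nx.single_source_shortest_path_length(graph, i).items():
            e_hat[j, y] += norm * np.exp(-d * d / (2.0 * sigma ** 2))
    return e_hat + 1.0  # prior Dirichlet parameters alpha_hat
```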
In addition, we designed a simple iterative knowledge distillation method (Hinton et al., 2015) (i.e., a Teacher Network) to refine the node-level classification probabilities. The key idea is to train our proposed model (Student) to imitate the outputs of a pre-trained vanilla GNN (Teacher) by adding a KL-divergence regularization term. This leads to solving the following optimization problem:
\[\min_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta})+\lambda_{1}\text{ KL}[\text{Dir}(\boldsymbol{\alpha})\|\text{Dir}(\hat{\boldsymbol{\alpha}})]+ \lambda_{2}\text{KL}[P(\mathbf{y}\mid A,\mathbf{r};\mathcal{G})\ \|\ P(\mathbf{y}|\hat{\mathbf{p}})], \tag{2.17}\]
where \(\hat{\mathbf{p}}\) is the vanilla GNN's (Teacher) output and \(\lambda_{1}\) and \(\lambda_{2}\) are trade-off parameters.
```
Input:  \(\mathcal{G}=(\mathbb{V},\mathbb{E},\mathbf{r})\) and \(\mathbf{y}_{\mathbb{L}}\)
Output: \(\mathbf{p}_{\mathbb{V}\setminus\mathbb{L}}\), \(\mathbf{u}_{\mathbb{V}\setminus\mathbb{L}}\)
 1: \(\ell=0\);
 2: Set hyperparameters \(\eta,\lambda_{1},\lambda_{2}\);
 3: Initialize the parameters \(\gamma,\beta\);
 4: Calculate the prior Dirichlet distribution \(\text{Dir}(\hat{\boldsymbol{\alpha}})\);
 5: Pretrain the teacher network to get \(P(\mathbf{y}|\hat{\mathbf{p}})\);
 6: repeat
 7:     Forward pass to compute \(\boldsymbol{\alpha}\) and \(P(\mathbf{p}_{i}|A,\mathbf{r};\mathcal{G})\) for \(i\in\mathbb{V}\);
 8:     Compute the joint probability \(P(\mathbf{y}|A,\mathbf{r};\mathcal{G})\);
 9:     Backward pass via the chain rule to calculate the gradient \(g^{(\ell)}=\nabla_{\Theta}\mathcal{L}(\Theta)\);
10:     Update parameters with step size \(\eta\): \(\Theta^{(\ell+1)}=\Theta^{(\ell)}-\eta\cdot g^{(\ell)}\);
11:     \(\ell=\ell+1\);
12: until convergence
13: Calculate \(\mathbf{p}_{\mathbb{V}\setminus\mathbb{L}}\), \(\mathbf{u}_{\mathbb{V}\setminus\mathbb{L}}\);
14: return \(\mathbf{p}_{\mathbb{V}\setminus\mathbb{L}}\), \(\mathbf{u}_{\mathbb{V}\setminus\mathbb{L}}\)
```
**Algorithm 1** S-BGCN-T-K
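For concreteness, here is a sketch of the full objective in Eq. (2.17), reusing `dirichlet_square_loss` from the earlier sketch; `torch.distributions` provides the closed-form Dirichlet-Dirichlet KL, and the teacher term is the per-node KL from the student's to the teacher's class probabilities. The signature and masking convention are our assumptions:

```python
import torch
from torch.distributions import Dirichlet
from torch.distributions.kl import kl_divergence

def joint_objective(alpha, alpha_prior, y_onehot, labeled_mask,
                    student_probs, teacher_probs, lam1, lam2, eps=1e-8):
    """Sketch of Eq. (2.17): squared-error Bayes risk on labeled nodes,
    a KL term pulling Dir(alpha) toward the GKDE prior Dir(alpha_hat), and
    a distillation KL from the student's to the teacher's class probabilities."""
    risk = dirichlet_square_loss(alpha[labeled_mask], y_onehot)
    kl_prior = kl_divergence(Dirichlet(alpha), Dirichlet(alpha_prior)).mean()
    kl_teacher = (student_probs
                  * (torch.log(student_probs + eps)
                     - torch.log(teacher_probs + eps))).sum(dim=1).mean()
    return risk + lam1 * kl_prior + lam2 * kl_teacher
```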
### Experiments
In this section, we conduct experiments on the tasks of misclassification and OOD detections to answer the following questions for semi-supervised node classification:
**Q1. Misclassification Detection:** What type of uncertainty is the most promising indicator of high confidence in node classification predictions?
**Q2. OOD Detection:** What type of uncertainty is a key indicator of accurate detection of OOD nodes?
**Q3. GKDE with Uncertainty Estimates:** How does GKDE help enhance prediction tasks, and with which types of uncertainty estimates?
Through extensive experiments, we found the following answers to the above questions:
**A1.** Dissonance (i.e., uncertainty due to conflicting evidence) is more effective than other uncertainty estimates in misclassification detection.
**A2.** Vacuity (i.e., uncertainty due to a lack of evidence) is more effective than other uncertainty estimates in OOD detection.
**A3.** GKDE can indeed help improve the estimation quality of node-level Dirichlet distributions, resulting in better OOD detection performance.
#### Experiment Setup
**Datasets**: We used six datasets, including three citation network datasets (Sen et al., 2008) (i.e., Cora, Citeseer, Pubmed) and three new datasets (Shchur et al., 2018) (i.e., Coauthor Physics, Amazon Computer, and Amazon Photo). We summarized the description and experimental setup of the used datasets as follows.
**Cora, Citeseer, and Pubmed** (Sen et al., 2008): These are citation network datasets, where each network is a directed graph in which a node represents a document (the datasets are summarized in Table 2.2). An edge
| | **Cora** | **Citeseer** | **Pubmed** | **Co.Physics** | **Ama.Computer** | **Ama.Photo** |
| --- | --- | --- | --- | --- | --- | --- |
| **#Nodes** | 2,708 | 3,327 | 19,717 | 34,493 | 13,381 | 7,487 |
| **#Edges** | 5,429 | 4,732 | 44,338 | 282,455 | 259,159 | 126,530 |
| **#Classes** | 7 | 6 | 3 | 5 | 10 | 8 |
| **#Features** | 1,433 | 3,703 | 500 | 8,415 | 767 | 745 |
| **#Training nodes** | 140 | 120 | 60 | 100 | 200 | 160 |
| **#Validation nodes** | 500 | 500 | 500 | 500 | 500 | 500 |
| **#Test nodes** | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 |

Table 2.2: Description of datasets and their experimental setup for the node classification prediction.
is a citation link, meaning that an edge exists when document \(A\) cites document \(B\), or vice versa. Each node's feature vector contains a bag-of-words representation of the document. For simplicity, we do not discriminate between link directions: we treat citation links as undirected edges and construct a binary, symmetric adjacency matrix \(\mathbf{A}\). Each node is labeled with the class to which it belongs.
**Coauthor Physics, Amazon Computers, and Amazon Photo** (Shchur et al., 2018): Coauthor Physics is the dataset for co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 Challenge1. In the graphs, a node is an author and an edge exists when two authors co-author a paper. A node's features represent the keywords of its papers and the node's class label indicates its most active field of study. Amazon Computers and Amazon Photo are the segments of an Amazon co-purchase graph (McAuley et al., 2015), where a node is a good (i.e., product), and an edge exists when two goods are frequently bought together. A node's features are the bag-of-words representation of product reviews and the node's class label is the product category.
Footnote 1: KDD Cup 2016 Dataset: Online Available at [https://kddcup2016.azurewebsites.net/](https://kddcup2016.azurewebsites.net/)
For all the used datasets, we deal with undirected graphs and 20 training nodes per category. For the citation datasets, we chose the same dataset splits as in (Yang et al., 2016), with an additional validation set of 500 labeled nodes for hyperparameter tuning, and we followed the same dataset splits as in (Shchur et al., 2018) for the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, for a fair comparison2.
Footnote 2: The source code and datasets are accessible at [https://github.com/zxj32/uncertainty-GNN](https://github.com/zxj32/uncertainty-GNN)
**Metric**: We used the following metrics for our experiments:
* _Area Under the Receiver Operating Characteristic curve (AUROC)_: AUROC is the area under the curve plotting the TPR (true positive rate) on the \(y\)-axis against the FPR (false positive rate) on the \(x\)-axis. It can be interpreted as the probability that a positive example is assigned a higher detection score than a negative example (Fawcett, 2006). A perfect detector corresponds to an AUROC score of 100%.
* _Area Under the Precision-Recall curve (AUPR)_: The PR curve plots precision = TP/(TP+FP) against recall = TP/(TP+FN), and AUPR denotes the area under this curve. The ideal case is a precision of 1 at a recall of 1. A sketch of computing both metrics appears below.
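Both metrics can be computed with scikit-learn, treating the uncertainty score as the detector's confidence that a node is positive (misclassified or OOD); a minimal sketch:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def detection_scores(uncertainty, is_positive):
    """AUROC / AUPR for uncertainty-based detection. A node is 'positive'
    (misclassified or OOD) and should receive a higher uncertainty score.
    uncertainty: (N,) scores; is_positive: (N,) binary labels."""
    auroc = roc_auc_score(is_positive, uncertainty)
    # average_precision_score is the standard step-wise estimate of AUPR.
    aupr = average_precision_score(is_positive, uncertainty)
    return auroc, aupr
```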
**Comparing Schemes**: We conducted an extensive comparative performance analysis based on our proposed models and several state-of-the-art competitive counterparts. We implemented all models based on the most popular GNN model, GCN (Kipf and Welling, 2017). We compared our model (S-BGCN-T-K) against (1) Softmax-based GCN (Kipf and Welling, 2017) with uncertainty measured based on entropy; and (2) Drop-GCN that adapts the Monte-Carlo Dropout (Gal and Ghahramani, 2016; Ryu et al., 2019) into the GCN model to learn probabilistic uncertainty; (3) EDL-GCN that adapts the EDL model (Sensoy et al., 2018) with GCN to estimate evidential uncertainty; (4) DPN-GCN that adapts the DPN (Malinin and Gales, 2018) method with GCN to estimate probabilistic uncertainty. We evaluated the performance of all models considered using the area under the ROC (AUROC) curve and area under the Precision-Recall (AUPR) curve in both experiments (Hendrycks and Gimpel, 2017).
**Model Setups for semi-supervised node classification**. Our models were initialized using Glorot initialization (Glorot and Bengio, 2010) and trained to minimize loss using the Adam SGD optimizer (Kingma and Ba, 2014). For the S-BGCN-T-K model, we used the _early stopping strategy_(Shchur et al., 2018) on Coauthor Physics, Amazon Computer, and Amazon Photo datasets while _non-early stopping strategy_ was used in citation datasets (i.e., Cora, Citeseer and Pubmed). We set bandwidth \(\sigma=1\) for all datasets in GKDE, and set trade-off parameters \(\lambda_{1}=0.001\) for misclassification detection, \(\lambda_{1}=0.1\) for OOD detection
and \(\lambda_{2}=\min(1,t/200)\) (where \(t\) is the index of the current training epoch) for both tasks; the other hyperparameter configurations are summarized in Table 2.3.
For semi-supervised node classification, we used 50 random weight initializations for our models on the citation network datasets. For the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, we report results based on 10 random train/validation/test splits. For both the misclassification and the OOD detection experiments, we report AUPR and AUROC (in percent) averaged over 50 runs with 1,000 randomly chosen test nodes (excluding the training and validation sets) for all models on the citation datasets. For the S-BGCN-T-K model in these tasks, we used the same hyperparameter configurations as in Table 2.3, except that S-BGCN-T-K Epistemic used 10,000 epochs to obtain the best result.
**Baseline Setting**. In the experiments, we considered four baselines. For GCN, we used the same hyperparameters as (Kipf and Welling, 2016). For EDL-GCN, we used the same hyperparameters as GCN and replaced the softmax layer with an activation layer (ReLU) trained with the squared loss (Sensoy et al., 2018). For DPN-GCN, we used the same hyperparameters as GCN and changed the softmax layer to an activation layer (exponential). Note that, as we cannot generate OOD nodes, we only used the in-distribution loss (see Eq. 12 in (Malinin and Gales, 2018)) and ignored the OOD part of the loss. For Drop-GCN, we used the same hyperparameters as GCN and set the number of Monte Carlo samples to \(M=100\) and the dropout rate to 0.5.
**Experimental Setup for Out-of-Distribution (OOD) Detection**. For OOD detection on semi-supervised node classification, we randomly selected 1-4 categories as OOD categories.
| | **Cora** | **Citeseer** | **Pubmed** | **Co.Physics** | **Ama.Computer** | **Ama.Photo** |
| --- | --- | --- | --- | --- | --- | --- |
| **Hidden units** | 16 | 16 | 16 | 64 | 64 | 64 |
| **Learning rate** | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| **Dropout** | 0.5 | 0.5 | 0.5 | 0.1 | 0.2 | 0.2 |
| **\(L_{2}\) reg. strength** | 0.0005 | 0.0005 | 0.0005 | 0.001 | 0.0001 | 0.0001 |
| **Monte-Carlo samples** | 100 | 100 | 100 | 100 | 100 | 100 |
| **Max epoch** | 200 | 200 | 200 | 100,000 | 100,000 | 100,000 |

Table 2.3: Hyperparameter configurations of the S-BGCN-T-K model.
We trained the models only on the training nodes of the remaining categories. In this setting, we still trained a model for the semi-supervised node classification task, but some node categories were not used for training. Hence, the model only outputs the partial set of categories (as the OOD categories are unknown); see Table 2.4. For example, for the Cora dataset, we trained the model with 80 nodes (20 nodes per category) and predictions over 4 categories. The positive ratio is the ratio of out-of-distribution nodes among all test nodes.
#### Results
**Misclassification Detection.** The misclassification detection experiment involves detecting whether a given prediction is incorrect using an uncertainty estimate. Table 2.5 shows that S-BGCN-T-K outperforms all baseline models under the AUROC and AUPR for misclassification detection. The outperformance of dissonance-based detection is fairly impressive. This confirms that low dissonance (a small amount of conflicting evidence) is the key to maximizing the accuracy of node classification prediction. We observe the following performance order: \(\texttt{Dissonance}>\texttt{Entropy}\approx\texttt{Aleatoric}>\texttt{Vacuity} \approx\texttt{Epistemic}\), which is aligned with our conjecture: higher dissonance with conflicting prediction leads to higher misclassification detection. We also conducted experiments on additional three datasets and observed similar trends of the results, as demonstrated in Appendix C.
**OOD Detection.** This experiment involves detecting whether an input example is out-of-distribution (OOD) given an estimate of uncertainty. For semi-supervised node classification, we randomly selected one to four categories as OOD categories and trained the models based
| Dataset | **Cora** | **Citeseer** | **Pubmed** | **Co.Physics** | **Ama.Computer** | **Ama.Photo** |
| --- | --- | --- | --- | --- | --- | --- |
| **Number of training categories** | 4 | 3 | 2 | 3 | 5 | 4 |
| **Training nodes** | 80 | 60 | 40 | 60 | 100 | 80 |
| **Test nodes** | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 |
| **Positive ratio** | 38% | 55% | 40.4% | 45.1% | 48.1% | 51.1% |

Table 2.4: Description of datasets and their experimental setup for the OOD detection.
on training nodes of the other categories. Due to the space constraint, the experimental setup for the OOD detection is detailed in Appendix B.3.
In Table 2.6, across the six network datasets, our vacuity-based detection significantly outperformed the other competitive methods, exceeding the performance of epistemic uncertainty and the other types of uncertainty. This demonstrates that the vacuity-based model is more effective than the other uncertainty-based counterparts for OOD detection.
| Data | Model | AUROC Va. | AUROC Dis. | AUROC Al. | AUROC Ep. | AUROC En. | AUPR Va. | AUPR Dis. | AUPR Al. | AUPR Ep. | AUPR En. | Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cora | S-BGCN-T-K | 70.6 | **82.4** | 75.3 | 68.8 | 77.7 | 90.3 | **95.4** | 92.4 | 87.8 | 93.4 | **82.0** |
| Cora | EDL-GCN | 70.2 | 81.5 | - | - | 76.9 | 90.0 | 94.6 | - | - | 93.6 | 81.5 |
| Cora | DPN-GCN | - | - | 78.3 | 75.5 | 77.3 | - | - | 92.4 | 92.0 | 92.4 | 80.8 |
| Cora | Drop-GCN | - | - | 73.9 | 66.7 | 76.9 | - | - | 92.7 | 90.0 | 93.6 | 81.3 |
| Cora | GCN | - | - | - | - | 79.6 | - | - | - | - | 94.1 | 81.5 |
| Citeseer | S-BGCN-T-K | 65.4 | **74.0** | 67.2 | 60.7 | 70.0 | 79.8 | **85.6** | 82.2 | 75.2 | 83.5 | **71.0** |
| Citeseer | EDL-GCN | 64.9 | 73.6 | - | - | 69.6 | 79.2 | 84.6 | - | - | 82.9 | 70.2 |
| Citeseer | DPN-GCN | - | - | 66.0 | 64.9 | 65.5 | - | - | 78.7 | 77.6 | 78.1 | 68.1 |
| Citeseer | Drop-GCN | - | - | 66.4 | 60.8 | 69.8 | - | - | 82.3 | 77.8 | 83.7 | 70.9 |
| Citeseer | GCN | - | - | - | - | 71.4 | - | - | - | - | 83.2 | 70.3 |
| Pubmed | S-BGCN-T-K | 64.1 | **73.3** | 69.3 | 64.2 | 70.7 | 85.6 | **90.8** | 88.8 | 86.1 | 89.2 | **79.3** |
| Pubmed | EDL-GCN | 62.6 | 69.0 | - | - | 67.2 | 84.6 | 88.9 | - | - | 81.7 | 79.0 |
| Pubmed | DPN-GCN | - | - | 72.7 | 69.2 | 72.5 | - | - | 87.8 | 86.8 | 87.7 | 77.1 |
| Pubmed | Drop-GCN | - | - | 67.3 | 66.1 | 67.2 | - | - | 88.6 | 85.6 | 89.0 | 79.0 |
| Pubmed | GCN | - | - | - | - | 68.5 | - | - | - | - | 89.2 | 79.0 |
| Amazon Photo | S-BGCN-T-K | 66.0 | **89.3** | 83.0 | 83.4 | 83.2 | 95.4 | **98.9** | 98.4 | 98.1 | 98.4 | **92.0** |
| Amazon Photo | EDL-GCN | 65.1 | 88.5 | - | - | 82.2 | 94.6 | 98.1 | - | - | 98.0 | 91.2 |
| Amazon Photo | DPN-GCN | - | - | 84.6 | 84.2 | 84.3 | - | - | 98.4 | 98.3 | 98.3 | 92.0 |
| Amazon Photo | Drop-GCN | - | - | 84.5 | 84.4 | 84.6 | - | - | 98.2 | 98.1 | 98.2 | 91.3 |
| Amazon Photo | GCN | - | - | - | - | 86.8 | - | - | - | - | 98.5 | 91.2 |
| Amazon Computer | S-BGCN-T-K | 65.0 | **87.8** | 83.3 | 79.6 | 83.6 | 89.4 | **96.3** | 95.0 | 94.2 | 95.0 | **84.0** |
| Amazon Computer | EDL-GCN | 64.1 | 86.5 | - | - | 82.2 | 93.6 | 97.1 | - | - | 97.0 | 79.7 |
| Amazon Computer | DPN-GCN | - | - | 82.0 | 81.7 | 81.8 | - | - | 95.8 | 95.7 | 95.8 | 85.2 |
| Amazon Computer | Drop-GCN | - | - | 79.1 | 75.9 | 79.2 | - | - | 95.1 | 94.5 | 95.1 | 79.6 |
| Amazon Computer | GCN | - | - | - | - | 81.7 | - | - | - | - | 95.4 | 79.7 |
| Coauthor Physics | S-BGCN-T-K | 80.2 | **91.4** | 87.5 | 81.7 | 87.6 | 98.3 | **99.4** | 99.0 | 98.4 | 98.9 | **93.0** |
| Coauthor Physics | EDL-GCN | 78.8 | 89.5 | - | - | 86.2 | 96.6 | 97.2 | - | - | 97.0 | 92.7 |
| Coauthor Physics | DPN-GCN | - | - | 90.0 | 89.9 | 90.0 | - | - | 99.3 | 99.3 | 99.3 | 92.5 |
| Coauthor Physics | Drop-GCN | - | - | 87.6 | 84.1 | 87.7 | - | - | 98.9 | 98.6 | 98.9 | 93.0 |
| Coauthor Physics | GCN | - | - | - | - | 88.7 | - | - | - | - | 99.0 | 92.8 |

Va.: Vacuity, Dis.: Dissonance, Al.: Aleatoric, Ep.: Epistemic, En.: Entropy

Table 2.5: AUROC and AUPR for the Misclassification Detection.
We observed the following performance order: \(\texttt{Vacuity}>\texttt{Entropy}\approx\texttt{Aleatoric}>\texttt{Epistemic}\approx\texttt{Dissonance}\), which is consistent with the theoretical results shown in Theorem 1.
**Ablation Study.** We conducted additional experiments (see Table 2.7 and Table 2.9) in order to demonstrate the contributions of the key technical components, including GKDE, Teacher Network, and subjective Bayesian framework. The key findings obtained from this experiment are: (1) GKDE can enhance the OOD detection (i.e., 30% increase with vacuity), which is consistent with our theoretical proof about the outperformance of GKDE in
| Data | Model | AUROC Va. | AUROC Dis. | AUROC Al. | AUROC Ep. | AUROC En. | AUPR Va. | AUPR Dis. | AUPR Al. | AUPR Ep. | AUPR En. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cora | S-BGCN-T-K | **87.6** | 75.5 | 85.5 | 70.8 | 84.8 | **78.4** | 49.0 | 75.3 | 44.5 | 73.1 |
| Cora | EDL-GCN | 84.5 | 81.0 | - | - | 83.3 | 74.2 | 53.2 | - | - | 71.4 |
| Cora | DPN-GCN | - | - | 77.3 | 78.9 | 78.3 | - | - | 58.5 | 62.8 | 63.0 |
| Cora | Drop-GCN | - | - | 81.9 | 70.5 | 80.9 | - | - | 69.7 | 44.2 | 67.2 |
| Cora | GCN | - | - | - | - | 80.7 | - | - | - | - | 66.9 |
| Citeseer | S-BGCN-T-K | **84.8** | 55.2 | 78.4 | 55.1 | 74.0 | **86.8** | 54.1 | 80.8 | 55.8 | 74.0 |
| Citeseer | EDL-GCN | 78.4 | 59.4 | - | - | 69.1 | 79.8 | 57.3 | - | - | 69.0 |
| Citeseer | DPN-GCN | - | - | 68.3 | 72.2 | 69.5 | - | - | 68.5 | 72.1 | 70.3 |
| Citeseer | Drop-GCN | - | - | 72.3 | 61.4 | 70.6 | - | - | 73.5 | 60.8 | 70.0 |
| Citeseer | GCN | - | - | - | - | 70.8 | - | - | - | - | 70.2 |
| Pubmed | S-BGCN-T-K | **74.6** | 67.9 | 71.8 | 59.2 | 72.2 | **69.6** | 52.9 | 63.6 | 44.0 | 56.5 |
| Pubmed | EDL-GCN | 71.5 | 68.2 | - | - | 70.5 | 65.3 | 53.1 | - | - | 55.0 |
| Pubmed | DPN-GCN | - | - | 63.5 | 63.7 | 63.5 | - | - | 50.7 | 53.9 | 51.1 |
| Pubmed | Drop-GCN | - | - | 68.7 | 60.8 | 66.7 | - | - | 59.7 | 46.7 | 54.8 |
| Pubmed | GCN | - | - | - | - | 68.3 | - | - | - | - | 55.3 |
| Amazon Photo | S-BGCN-T-K | **93.4** | 76.4 | 91.4 | 32.2 | 91.4 | **94.8** | 68.0 | 92.3 | 42.3 | 92.5 |
| Amazon Photo | EDL-GCN | 63.4 | 78.1 | - | - | 79.2 | 66.2 | 74.8 | - | - | 81.2 |
| Amazon Photo | DPN-GCN | - | - | 83.6 | 83.6 | 83.6 | - | - | 82.6 | 82.4 | 82.5 |
| Amazon Photo | Drop-GCN | - | - | 84.5 | 58.7 | 84.3 | - | - | 87.0 | 57.7 | 86.9 |
| Amazon Photo | GCN | - | - | - | - | 84.4 | - | - | - | - | 87.0 |
| Amazon Computer | S-BGCN-T-K | **82.3** | 76.6 | 80.9 | 55.4 | 80.9 | **70.5** | 52.8 | 60.9 | 35.9 | 60.6 |
| Amazon Computer | EDL-GCN | 53.2 | 70.1 | - | - | 70.0 | 33.2 | 43.9 | - | - | 45.7 |
| Amazon Computer | DPN-GCN | - | - | 77.6 | 77.7 | 77.7 | - | - | 50.8 | 51.2 | 51.0 |
| Amazon Computer | Drop-GCN | - | - | 74.4 | 70.5 | 74.3 | - | - | 50.0 | 46.7 | 49.8 |
| Amazon Computer | GCN | - | - | - | - | 74.0 | - | - | - | - | 48.7 |
| Coauthor Physics | S-BGCN-T-K | **91.3** | 87.6 | 89.7 | 61.8 | 89.8 | **72.2** | 56.6 | 68.1 | 25.9 | 67.9 |
| Coauthor Physics | EDL-GCN | 88.2 | 85.8 | - | - | 87.6 | 67.1 | 51.2 | - | - | 62.1 |
| Coauthor Physics | DPN-GCN | - | - | 85.5 | 85.6 | 85.5 | - | - | 59.8 | 60.2 | 59.8 |
| Coauthor Physics | Drop-GCN | - | - | 89.2 | 78.4 | 89.3 | - | - | 66.6 | 37.1 | 66.5 |
| Coauthor Physics | GCN | - | - | - | - | 89.1 | - | - | - | - | 64.0 |

Va.: Vacuity, Dis.: Dissonance, Al.: Aleatoric, Ep.: Epistemic, En.: Entropy

Table 2.6: AUROC and AUPR for the OOD Detection.
uncertainty estimation, i.e., OOD nodes have a higher vacuity than other nodes; and (2) the Teacher Network can further improve the node classification accuracy.
**Time Complexity Analysis**. S-BGCN has a time complexity similar to GCN, while S-BGCN-T has double the complexity of GCN. Let \(|\mathbb{V}|\) be the number of nodes, \(|\mathbb{E}|\) the number of edges, \(C\) the dimensionality of the input feature vector of each node, \(F\) the number of features of the output layer, and \(M\) the number of Monte Carlo samples; the Big-O complexities are summarized in Table 2.8.
**Comparison with a Bayesian GCN baseline**. We compared with a (Bayesian) GCN baseline, Dropout+DropEdge (Rong et al., 2019). As shown in Table 2.10 below, our proposed
| Data | Model | AUROC Va. | AUROC Dis. | AUROC Al. | AUROC Ep. | AUROC En. | AUPR Va. | AUPR Dis. | AUPR Al. | AUPR Ep. | AUPR En. | Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cora | S-BGCN-T-K | 70.6 | 82.4 | 75.3 | 68.8 | 77.7 | 90.3 | **95.4** | 92.4 | 87.8 | 93.4 | 82.0 |
| Cora | S-BGCN-T | 70.8 | **82.5** | 75.3 | 68.9 | 77.8 | 90.4 | **95.4** | 92.6 | 88.0 | 93.4 | **82.2** |
| Cora | S-BGCN | 69.8 | 81.4 | 73.9 | 66.7 | 76.9 | 89.4 | 94.3 | 92.3 | 88.0 | 93.1 | 81.2 |
| Cora | S-GCN | 70.2 | 81.5 | - | - | 76.9 | 90.0 | 94.6 | - | - | 93.6 | 81.5 |
| Citeseer | S-BGCN-T-K | 65.4 | **74.0** | 67.2 | 60.7 | 70.0 | 79.8 | **85.6** | 82.2 | 75.2 | 83.5 | 71.0 |
| Citeseer | S-BGCN-T | 65.4 | 73.9 | 67.1 | 60.7 | 70.1 | 79.6 | 85.5 | 82.1 | 75.2 | 83.5 | **71.3** |
| Citeseer | S-BGCN | 63.9 | 72.1 | 66.1 | 58.9 | 69.2 | 78.4 | 83.8 | 80.6 | 75.6 | 82.3 | 70.6 |
| Citeseer | S-GCN | 64.9 | 71.9 | - | - | 69.4 | 79.5 | 84.2 | - | - | 82.5 | 71.0 |
| Pubmed | S-BGCN-T-K | 63.1 | **69.9** | 66.5 | 65.3 | 68.1 | 85.6 | 90.8 | 88.8 | 86.1 | 89.2 | **79.3** |
| Pubmed | S-BGCN-T | 63.2 | **69.9** | 66.6 | 65.3 | 64.8 | 85.6 | **90.9** | 88.9 | 86.0 | 89.3 | 79.2 |
| Pubmed | S-BGCN | 62.7 | 68.1 | 66.1 | 64.4 | 68.0 | 85.4 | 90.5 | 88.6 | 85.6 | 89.2 | 78.8 |
| Pubmed | S-GCN | 62.9 | 69.5 | - | - | 68.0 | 85.3 | 90.4 | - | - | 89.2 | 79.1 |
| Amazon Photo | S-BGCN-T-K | 66.0 | 89.3 | 83.0 | 83.4 | 83.2 | 95.4 | **98.9** | 98.4 | 98.1 | 98.4 | 92.0 |
| Amazon Photo | S-BGCN-T | 66.1 | 89.3 | 83.1 | 83.5 | 83.3 | 95.6 | 99.0 | 98.4 | 98.2 | 98.4 | **92.3** |
| Amazon Photo | S-BGCN | 68.6 | **93.6** | 90.6 | 83.6 | 90.6 | 90.4 | 98.1 | 97.3 | 95.8 | 97.3 | 81.0 |
| Amazon Photo | S-GCN | - | - | - | - | 86.7 | - | - | - | - | 98.4 | - |
| Amazon Computer | S-BGCN-T-K | 65.0 | 87.8 | 83.3 | 79.6 | 83.6 | 89.4 | 96.3 | 95.0 | 94.2 | 95.0 | 84.0 |
| Amazon Computer | S-BGCN-T | 65.2 | 88.0 | 83.4 | 79.7 | 83.6 | 89.4 | **96.5** | 95.0 | 94.5 | 95.1 | **84.1** |
| Amazon Computer | S-BGCN | 63.7 | **89.1** | 84.3 | 76.1 | 84.4 | 84.9 | 95.7 | 93.9 | 91.4 | 93.9 | 76.1 |
| Amazon Computer | S-GCN | - | - | - | - | 81.5 | - | - | - | - | 95.2 | - |
| Coauthor Physics | S-BGCN-T-K | 80.2 | 91.4 | 87.5 | 81.7 | 87.6 | 98.3 | **99.4** | 99.0 | 98.4 | 98.9 | 93.0 |
| Coauthor Physics | S-BGCN-T | 80.4 | **91.5** | 87.6 | 81.7 | 87.6 | 98.3 | **99.4** | 99.0 | 98.6 | 99.0 | **93.2** |
| Coauthor Physics | S-BGCN | 79.6 | 90.5 | 86.3 | 81.2 | 86.4 | 98.0 | 99.2 | 98.8 | 98.3 | 98.8 | 92.9 |
| Coauthor Physics | S-GCN | 89.1 | 89.0 | - | - | 89.2 | 99.0 | 99.0 | - | - | 99.0 | 92.9 |

Va.: Vacuity, Dis.: Dissonance, Al.: Aleatoric, Ep.: Epistemic, En.: Entropy

Table 2.7: Ablation experiment on AUROC and AUPR for the Misclassification Detection.
method performed better than Dropout+DropEdge on the Cora and Citeseer datasets for misclassification detection. A similar trend was observed for OOD detection.
#### Why is Epistemic Uncertainty Less Effective than Vacuity?
Although epistemic uncertainty is known to be effective to improve OOD detection (Gal and Ghahramani, 2016; Kendall and Gal, 2017) in computer vision applications, our results
| | GCN | S-GCN | S-BGCN | S-BGCN-T | S-BGCN-T-K |
| --- | --- | --- | --- | --- | --- |
| Time complexity (train) | \(O(\lvert\mathbb{E}\rvert CF)\) | \(O(\lvert\mathbb{E}\rvert CF)\) | \(O(2\lvert\mathbb{E}\rvert CF)\) | \(O(2\lvert\mathbb{E}\rvert CF)\) | \(O(2\lvert\mathbb{E}\rvert CF)\) |
| Time complexity (test) | \(O(\lvert\mathbb{E}\rvert CF)\) | \(O(\lvert\mathbb{E}\rvert CF)\) | \(O(M\lvert\mathbb{E}\rvert CF)\) | \(O(M\lvert\mathbb{E}\rvert CF)\) | \(O(M\lvert\mathbb{E}\rvert CF)\) |

Table 2.8: Big-O time complexity of our method and baseline GCN.

| Data | Model | AUROC Va. | AUROC Dis. | AUROC Al. | AUROC Ep. | AUROC En. | AUPR Va. | AUPR Dis. | AUPR Al. | AUPR Ep. | AUPR En. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cora | S-BGCN-T-K | **87.6** | 75.5 | 85.5 | 70.8 | 84.8 | **78.4** | 49.0 | 75.3 | 44.5 | 73.1 |
| Cora | S-BGCN-T | 84.5 | 81.2 | 83.5 | 71.8 | 83.5 | 74.4 | 53.4 | 75.8 | 46.8 | 71.7 |
| Cora | S-BGCN | 76.3 | 79.3 | 81.5 | 70.5 | 80.6 | 61.3 | 55.8 | 68.9 | 44.2 | 65.3 |
| Cora | S-GCN | 75.0 | 78.2 | - | - | 79.4 | 60.1 | 54.5 | - | - | 65.3 |
| Citeseer | S-BGCN-T-K | **84.8** | 55.2 | 78.4 | 55.1 | 74.0 | **86.8** | 54.1 | 80.8 | 55.8 | 74.0 |
| Citeseer | S-BGCN-T | 78.6 | 59.6 | 73.9 | 56.1 | 69.3 | 79.8 | 57.4 | 76.4 | 57.8 | 69.3 |
| Citeseer | S-BGCN | 72.7 | 63.9 | 72.4 | 61.4 | 70.5 | 73.0 | 62.7 | 74.5 | 60.8 | 71.6 |
| Citeseer | S-GCN | 72.0 | 62.8 | - | - | 70.0 | 71.4 | 61.3 | - | - | 70.5 |
| Pubmed | S-BGCN-T-K | **74.6** | 67.9 | 71.8 | 59.2 | 72.2 | **69.6** | 52.9 | 63.6 | 44.0 | 56.5 |
| Pubmed | S-BGCN-T | 71.8 | 68.6 | 70.0 | 60.1 | 70.8 | 65.7 | 53.9 | 61.8 | 46.0 | 55.1 |
| Pubmed | S-BGCN | 70.8 | 68.2 | 70.3 | 60.8 | 68.0 | 65.4 | 53.2 | 62.8 | 46.7 | 55.4 |
| Pubmed | S-GCN | 71.4 | 68.8 | - | - | 69.7 | 66.3 | 54.9 | - | - | 57.5 |
| Amazon Photo | S-BGCN-T-K | **93.4** | 76.4 | 91.4 | 32.2 | 91.4 | **94.8** | 68.0 | 92.3 | 42.3 | 92.5 |
| Amazon Photo | S-BGCN-T | 64.0 | 77.5 | 79.9 | 52.6 | 79.8 | 67.0 | 75.3 | 82.0 | 53.7 | 81.9 |
| Amazon Photo | S-BGCN | 63.0 | 76.6 | 79.8 | 52.7 | 79.7 | 66.5 | 75.1 | 82.1 | 53.9 | 81.7 |
| Amazon Photo | S-GCN | 64.0 | 77.1 | - | - | 79.6 | 67.0 | 74.9 | - | - | 81.6 |
| Amazon Computer | S-BGCN-T-K | **82.3** | 76.6 | 80.9 | 55.4 | 80.9 | **70.5** | 52.8 | 60.9 | 35.9 | 60.6 |
| Amazon Computer | S-BGCN-T | 53.7 | 70.5 | 70.4 | 69.9 | 70.1 | 33.6 | 43.9 | 46.0 | 46.8 | 45.9 |
| Amazon Computer | S-BGCN | 56.9 | 75.3 | 74.1 | 73.7 | 74.1 | 33.7 | 46.2 | 48.3 | 45.6 | 48.3 |
| Amazon Computer | S-GCN | 56.9 | 75.3 | - | - | 74.2 | 33.7 | 46.2 | - | - | 48.3 |
| Coauthor Physics | S-BGCN-T-K | **91.3** | 87.6 | 89.7 | 61.8 | 89.8 | **72.2** | 56.6 | 68.1 | 25.9 | 67.9 |
| Coauthor Physics | S-BGCN-T | 88.7 | 86.0 | 87.9 | 70.2 | 87.8 | 67.4 | 51.9 | 64.6 | 29.4 | 62.4 |
| Coauthor Physics | S-BGCN | 89.1 | 87.1 | 89.5 | 78.3 | 89.5 | 66.1 | 49.2 | 64.6 | 35.6 | 64.3 |
| Coauthor Physics | S-GCN | 89.1 | 87.0 | - | - | 89.4 | 66.2 | 49.2 | - | - | 64.3 |

Va.: Vacuity, Dis.: Dissonance, Al.: Aleatoric, Ep.: Epistemic, En.: Entropy

Table 2.9: Ablation experiment on AUROC and AUPR for the OOD Detection.
demonstrate it is less effective than our vacuity-based approach. The first potential reason is that epistemic uncertainty is always smaller than vacuity (from Theorem 1), which potentially indicates that epistemic uncertainty may capture less information related to OODs. Another potential reason is that the previous success of epistemic uncertainty for OOD detection is limited to supervised learning in computer vision applications, and its effectiveness for OOD detection has not been sufficiently validated in semi-supervised learning tasks. Recall that epistemic uncertainty (i.e., model uncertainty) is calculated based on mutual information (see Eq. (2.6)). In a semi-supervised setting, the features of unlabeled nodes are also fed to the model during training, giving the model high confidence in its output on them. For example, the model output \(P(\mathbf{y}|A,\mathbf{r};\boldsymbol{\theta})\) would not change much even with differently sampled parameters \(\boldsymbol{\theta}\), i.e., \(P(\mathbf{y}|A,\mathbf{r};\boldsymbol{\theta}^{(i)})\approx P(\mathbf{y}|A,\mathbf{r};\boldsymbol{\theta}^{(j)})\), which results in low epistemic uncertainty.
To back up our conclusion, we design an image classification experiment based on the MCDrop (Gal and Ghahramani, 2016) method: 1) supervised learning on the MNIST dataset with 50 labeled images; 2) semi-supervised learning (SSL) on the MNIST dataset with 50 labeled images and 49,950 unlabeled images, where 50% of the unlabeled set are OOD images (24,975 FashionMNIST images). For both experiments, we test the epistemic uncertainty on the 49,950-image unlabeled set (50% in-distribution (ID) images and 50% OOD images). We conduct the experiment based on three popular SSL methods, VAT (Miyato et al., 2018), Mean Teacher (Tarvainen and Valpola, 2017), and Pseudo-Label (Lee, 2013). Table 2.11 shows the average epistemic uncertainty value for in-distribution
\begin{table}
\begin{tabular}{c||c|c c c c c|c c c c c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{4}{c|}{AUROC} & \multicolumn{4}{c}{AUPR} \\ & & Va. & Dis. & Al. & Ep. & En. & Va. & Dis. & Al. & Ep. & En. \\ \hline \multirow{2}{*}{Cora} & S-BGCN-T-K & 70.6 & **82.4** & 75.3 & 68.8 & 77.7 & 90.3 & **95.4** & 92.4 & 87.8 & 93.4 \\ & DropEdge & - & - & 76.6 & 56.1 & 76.6 & - & - & 93.2 & 85.4 & 93.2 \\ \hline \multirow{2}{*}{Citeseer} & S-BGCN-T-K & 65.4 & **74.0** & 67.2 & 60.7 & 70.0 & 79.8 & **85.6** & 82.2 & 75.2 & 83.5 \\ & DropEdge & - & - & 71.1 & 51.2 & 71.1 & - & - & 84.0 & 70.3 & 84.0 \\ \hline \end{tabular} Va.: Vacuity, Dis.: Dissonance, Al.: Aleatoric, Ep.: Epistemic, En.: Entropy
\end{table}
Table 2.10: Comparison with DropEdge on Misclassification Detection.
samples and OOD samples. The results show the same pattern as (Kendall and Gal, 2017; Kendall et al., 2015) in the supervised setting, but the opposite pattern in the semi-supervised setting: OOD samples receive low epistemic uncertainty, which makes epistemic uncertainty less effective for detecting OODs. Note that the SSL setting is similar to our semi-supervised node classification setting, in which unlabeled samples are fed into the model during training.
#### Graph Embedding Representations of Different Uncertainty Types
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Epistemic & **Supervised** & **VAT** & **Mean Teacher** & **Pseudo Label** \\ \hline
**In-Distribution** & 0.140 & **0.116** & **0.105** & **0.041** \\
**Out-of-Distribution** & **0.249** & 0.049 & 0.076 & 0.020 \\ \hline \hline \end{tabular}
\end{table}
Table 2.11: Epistemic uncertainty for semi-supervised image classification.
Figure 2.4: Graph embedding representations of the Cora dataset for classes and the extent of uncertainty: (a) shows the representation of seven different classes; (b) shows our model prediction; and (c)-(f) present the extent of uncertainty for respective uncertainty types, including vacuity, dissonance, aleatoric, and epistemic.
To better understand different uncertainty types, we used \(t\)-SNE (\(t\)-Distributed Stochastic Neighbor Embedding (Maaten and Hinton, 2008)) to represent the computed feature representations of a pre-trained BGCN-T model's first hidden layer on the Cora dataset and the Citeseer dataset.
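As a pointer for reproducing such plots, the following is a minimal sketch (assuming the hidden-layer activations have already been extracted from the pre-trained model) of projecting features with t-SNE; the function and file names are illustrative.

```python
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

@torch.no_grad()
def plot_tsne(hidden, labels, out_path="tsne.png"):
    """Project hidden-layer features to 2-D with t-SNE, colored by class."""
    emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(
        hidden.cpu().numpy())
    plt.scatter(emb[:, 0], emb[:, 1], c=labels.cpu().numpy(), s=5, cmap="tab10")
    plt.savefig(out_path)
```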
**Seven Classes on Cora Dataset**: In Figure 2.4, (a) shows the representation of seven different classes, (b) shows our model prediction, and (c)-(f) present the extent of uncertainty for the respective uncertainty types: vacuity, dissonance, aleatoric, and epistemic uncertainty.
Figure 2.5: Graph embedding representations of the Citeseer dataset for classes and the extent of uncertainty: (a) shows the representation of six different classes, (b) shows our model prediction, and (c)-(f) present the extent of uncertainty for respective uncertainty types, including vacuity, dissonance, aleatoric, and epistemic uncertainty.
**Six Classes on Citeseer Dataset**: In Figure 2.5 (a), a node's color denotes a class on the Citeseer dataset, where six different classes are shown in different colors. Figure 2.5 (b) is our prediction result.
**Eight Classes on Amazon Photo Dataset**: In Figure 2.6, a node's color denotes its vacuity uncertainty value, and larger nodes represent training nodes. These results are based on the OOD detection experiment. Comparing Figures 2.6 (a) and (b), we find that GKDE can indeed improve OOD detection.
For Figures 2.5 (c)-(f), the extent of uncertainty is presented, where a blue color refers to the lowest uncertainty (i.e., minimum uncertainty) while a red color indicates the highest uncertainty (i.e., maximum uncertainty), based on the presented color bar. To examine the trends of the extent of uncertainty depending on either training nodes or test nodes, we draw training nodes as bigger circles than test nodes. Overall, we notice that most training nodes (shown as bigger circles) have low uncertainty (i.e., blue), which is reasonable because
Figure 2.6: Graph embedding representations of the Amazon Photo dataset for the extent of vacuity uncertainty based on OOD detection experiment.
the training nodes are the ones that are already observed. Now we discuss the extent of uncertainty under each uncertainty type.
**Vacuity**: In Figure 2.6 (b), most training nodes show low uncertainty, and the majority of OOD nodes in the bottom cluster show high uncertainty, appearing in red.
**Dissonance**: In Figure 2.5 (d), similar to vacuity, training nodes have low uncertainty. But unlike vacuity, test nodes are much less uncertain. Recall that dissonance represents the degree of conflicting evidence (i.e., a discrepancy between class probabilities). In this dataset, we observe a fairly low level of dissonance, consistent with the clear outperformance of dissonance in node classification prediction.
**Aleatoric uncertainty**: In Figure 2.5 (e), many nodes show high uncertainty (larger than 0.5), except for a small number of training nodes with low uncertainty.
**Epistemic uncertainty**: In Figure 2.5 (f), most nodes show very low epistemic uncertainty values because the uncertainty derived from the model parameters diminishes as the model becomes well trained.
### 2.7 Conclusion
In this work, we proposed a multi-source uncertainty framework of GNNs for semi-supervised node classification. Our proposed framework provides an effective way of predicting node classification and out-of-distribution detection considering multiple types of uncertainty. We leveraged various types of uncertainty estimates from both DL and evidence/belief theory domains. Through our extensive experiments, we found that dissonance-based detection yielded the best performance on misclassification detection while vacuity-based detection performed the best for OOD detection, compared to other competitive counterparts. In particular, it was noticeable that applying GKDE and the Teacher network further enhanced the accuracy of node classification and uncertainty estimates.
Although the method introduced in this chapter achieves good performance in semi-supervised node classification, especially in an out-of-distribution setting where part of the unlabeled nodes (_e.g._, OODs) does not belong to the training (in-distribution) classes, we found that the OOD nodes may hurt the semi-supervised node classification performance for in-distribution nodes. A similar issue also occurs in traditional semi-supervised learning (SSL) problems: an SSL algorithm's performance can degrade when the unlabeled set contains out-of-distribution examples. To this end, in the following chapter, we first study the impact of OOD data on SSL algorithms and then propose a novel unified uncertainty-aware robust SSL framework to solve this issue.
## Chapter 3 Uncertainty-Aware Robust Semi-Supervised Learning With Out of Distribution Data
### 3.1 Introduction
Deep learning approaches have been shown to be successful on several supervised learning tasks, such as computer vision (Szegedy et al., 2015), natural language processing (Graves, 2013), and speech recognition (Graves et al., 2013). However, these deep learning models are data-hungry and often require massive amounts of labeled examples to obtain good performance. Obtaining high-quality labeled examples can be very time-consuming and expensive, particularly where specialized skills are required in labeling (for example, in cancer detection on X-ray or CT-scan images). As a result, semi-supervised learning (SSL) (Zhu, 2005) has emerged as a very promising direction, where the learning algorithms try to effectively utilize the large unlabeled set (in conjunction) with a relatively small labeled set. Several recent SSL algorithms have been proposed for deep learning and have shown great promise empirically. These include Entropy Minimization (Grandvalet and Bengio, 2005), pseudo-label based methods (Lee, 2013; Arazo et al., 2019; Berthelot et al., 2019) and consistency based methods (Sajjadi et al., 2016; Laine and Aila, 2016; Tarvainen and Valpola, 2017; Miyato et al., 2018) to name a few.
Despite the positive results of the above SSL methods, they are designed with the assumption that both labeled and unlabeled sets have the same distribution. Fig 3.1 (a) shows an example of this. However, this assumption may not hold in many real-world applications, such as web classification (Yang et al., 2011) and medical diagnosis (Yang et al., 2015),
where some unlabeled examples are from novel classes unseen in the labeled data. For example, Fig 3.1 (b) illustrates an image classification scenario with out-of-distribution data, where the unlabeled dataset contains two novel classes (bicycle and clock) compared to the in-distribution classes (flower and beetle) in the labeled dataset. When the unlabeled set contains OOD examples (OODs), deep SSL performance can degrade substantially and is sometimes even worse than simple supervised learning (SL) approaches (Oliver et al., 2018). Moreover, it is unreasonable to expect a human to go through and clean a large and massive unlabeled set in such cases.
A general approach to robust SSL against OODs is to assign a weight to each unlabeled example based on some criteria and minimize a weighted training or validation loss. In particular, (Yan et al., 2016) applied a set of weak annotators to approximate the ground-truth labels as pseudo-labels to learn a robust SSL model. (Chen et al., 2019) proposed a distributionally robust model that estimates a parametric weight function based on both the discrepancy and the consistency between the labeled data and the unlabeled data. (Chen et al., 2020) (UASD) proposed to weigh the unlabeled examples based on an estimation of predictive uncertainty for each unlabeled example. The goal of UASD is to discard the potentially irrelevant samples having low confidence scores and estimate the parameters by optimizing a regularized training loss. A state-of-the-art method (Guo et al., 2020), called DS3L, considers a shallow neural network to predict the weights of unlabeled examples and estimate the parameters of the neural network based on a clean labeled set via bi-level optimization. It is common to obtain a dataset composed of two parts, including a relatively
Figure 3.1: (a) Traditional SSL. (b) SSL with OODs.
small but accurately labeled set and a large but coarsely labeled set from inexpensive crowd-sourcing services or other noisy sources.
There are three **main limitations** of DS3L and the other methods reviewed above. First, they lack a study of the potential causes of the impact of OODs on SSL; as a result, interpreting robust SSL methods becomes difficult. Second, existing robust SSL methods did not consider the negative impact of OODs on the utilization of BN in neural networks, and as a result, their robustness against OODs degrades significantly when a neural network includes BN layers. The utilization of BN for deep SSL has an implicit assumption that the labeled and unlabeled examples follow a single or similar distribution, which is problematic when the unlabeled examples include OODs (Ioffe and Szegedy, 2015). Third, the bi-level learning algorithm developed in DS3L relies on low-order approximations of the objective in the inner loop due to vanishing gradients or memory constraints. As a result of not using the high-order loss information, the learning performance of DS3L could be significantly degraded in some applications, as demonstrated in our experiments. Our main technical contributions over existing methods are summarized as follows:
**The effect of OOD data points.** The first critical contribution of our work (Sec. 3.3) is to analyze what kind of OOD unlabeled data points affect the performance of SSL algorithms. In particular, we observe that OOD samples lying close to the decision boundary have more influence on SSL performance than those far from the boundary. Furthermore, we observe that the OOD instances far from the decision-boundary (faraway OODs) can degrade SSL performance substantially if the model contains a batch normalization (BN) layer. The last observation makes sense logically as well since the batch normalization heavily depends on the mean and variance of each batch's data points, which can be significantly different for OOD points that came from very different distributions. We find these observations about OOD points consistent across experiments on several synthetic and real-world datasets.
**Uncertainty-Aware Robust SSL Framework.** To address the above causes, we first proposed a simple modification to BN, called weighted batch normalization (WBN), to improve
BN's robustness against OODs. We demonstrate via a theoretical analysis that WBN can learn a better representation with good statistics estimation. Finally, we proposed a unified, uncertainty-aware robust SSL approach to improve many existing SSL algorithms' robustness by learning to assign weights to unlabeled examples based on their meta-gradient directions derived from a loss function on a clean validation set. Different from safe SSL (Guo et al., 2020), we directly treat the weights as hyperparameters instead of the output of a parametric function. The effectiveness of this strategy has been well demonstrated in robust supervised learning problems against noisy labels (Ren et al., 2018). Furthermore, treating the weights as hyperparameters enables the use of an implicit-differentiation-based approach in addition to the meta-approximation algorithm considered in (Guo et al., 2020). In addition, we proposed two efficient bi-level algorithms for our robust SSL approach, a meta-approximation-based one and an implicit-differentiation-based one, which have different trade-offs between computational efficiency and accuracy. We designed the first algorithm based on lower-order approximations of the objective in the meta optimization's inner loop. The second algorithm is based on higher-order approximations of the objective and is scalable to a higher number of inner optimization steps to learn a massive amount of weight parameters, but is less efficient than the first algorithm.
**Speeding up the re-weighting algorithms.** A critical issue with the current approaches (Ren et al., 2018; Shu et al., 2019; Guo et al., 2020) for bi-level optimization is that they are around 3x slower (even after using a one-step gradient approximation) compared to the original training (in this case, the base SSL algorithm), whereas fancier higher-order approaches like implicit differentiation (Lorraine et al., 2020) are even slower. We show that by adopting some simple tricks, like considering only the last layer of the network in the inner one-step optimization (while updating the hyper-parameters) and performing the weight update step only every \(L\) epochs, we can significantly speed up the reweighting while not significantly losing its gains. In particular, we see that _we can bring down the run
time from \(3\times\) to \(1.2\times\), thereby significantly improving the experimental turn around times and reducing cost, energy, and time requirements of the reweighting algorithms._
**Comprehensive experiments.** We conduct extensive experiments on synthetic and real-world datasets. The results demonstrate that our weighted robust SSL approach significantly outperforms existing robust approaches (L2RW, MWN, Safe-SSL, and UASD) on four representative SSL algorithms. We also perform an ablation study to demonstrate which components of our approach are most important for its success. Finally, we show the effect of using last layer gradients and infrequent weight updates on both accuracy and time speedups.
### Semi-Supervised Learning (SSL)
#### 3.2.1 Notations
We are given a training set with a labeled set of examples \(\mathcal{D}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) and an unlabeled set of examples \(\mathcal{U}=\{\mathbf{x}_{j}\}_{j=1}^{m}\). Vectors are denoted by lower-case boldface letters, _e.g._, the weight vector \(\mathbf{w}\in[0,1]^{|\mathcal{U}|}\) and the uncertainty vector \(\mathbf{u}\in[0,1]^{|\mathcal{U}|}\), where their _i_-th entries are \(w_{i},u_{i}\). Scalars are denoted by lowercase italic letters, _e.g._, the trade-off parameter \(\lambda\in\mathbb{R}\). Matrices are denoted by capital italic letters. Some important notations are listed in Table 3.1.
For any classifier model \(f(\mathbf{x},\theta)\) used in SSL, \(\mathbf{x}\in\mathbb{R}^{C}\) is the input data and \(\theta\) refers to the parameters of the classifier model. The loss functions of many existing methods can be formulated in the following general form:
\[\sum\nolimits_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}l(f(\mathbf{x}_{i},\theta),y_{i})+\sum\nolimits_{x_{j}\in\mathcal{U}}r(f(\mathbf{x}_{j},\theta)), \tag{3.1}\]
where \(l(\cdot)\) is the loss function for labeled data (such as cross-entropy), and \(r(\cdot)\) is the loss function (regularization function) on the unlabeled set. The goal of SSL methods is to design an efficient regularization function to leverage the model performance information
on the unlabeled dataset for effective training. Pseudo-labeling (Lee, 2013) uses a standard supervised loss function on an unlabeled dataset using "pseudo-labels" as a target label as a regularizer. \(\Pi\)-Model (Laine and Aila, 2017; Sajjadi et al., 2016) designed a consistency-based regularization function that pushes the distance between the prediction for an unlabeled sample and its stochastic perturbation (e.g., data augmentation or dropout (Srivastava et al., 2014)) to a small value. Mean Teacher (Tarvainen and Valpola, 2017) proposed to obtain a more stable target output \(f(x,\theta)\) for the unlabeled set by setting the target via an exponential moving average of parameters from previous training steps. Instead of designing a stochastic \(f(x,\theta)\), Virtual Adversarial Training (VAT) (Miyato et al., 2018) proposed to approximate a tiny perturbation to unlabeled samples that affects the output of the prediction function most. MixMatch (Berthelot et al., 2019), UDA (Xie et al., 2019), and FixMatch (Sohn et al., 2020) choose the pseudo-labels based on predictions of augmented samples, such as shifts, cropping, image flipping, weak and strong augmentation, and mix-up (Zhang et al., 2017), to design the regularization functions. However, the performance of most existing SSL
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Notations** & **Descriptions** \\ \hline \(\mathcal{D}\) & Labeled set \\ \(\mathcal{U}\) & Unlabeled set \\ \(\mathcal{V}\) & Validation set (labeled) \\ \(\mathbf{x}_{i}\) & Feature vector of sample \(i\) \\ \(y_{i}\) & Class label of sample \(i\) \\ \(\boldsymbol{\theta}\) & model parameters \\ \(f(\cdot)\) & semi-supervised learning model function \\ \(\mathcal{L}_{V}(\cdot)\) & Validation loss \\ \(\mathcal{L}_{T}(\cdot)\) & Training loss \\ \(l(\cdot)\) & Loss function for labeled data \\ \(r(\cdot)\) & Loss function for unlabeled data \\ \(\text{Un}(\cdot)\) & Uncertainty regularization term \\ \(J\) & Inner loop gradient steps \\ \(P\) & Neumann series approximation parameters \\ \hline \hline \end{tabular}
\end{table}
Table 3.1: Important notations and corresponding descriptions.
methods can degrade substantially when the unlabeled dataset contains OOD examples (Oliver et al., 2018).
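To illustrate the general form of Eq. (3.1), below is a minimal sketch of the labeled loss plus a \(\Pi\)-Model-style consistency regularizer \(r(\cdot)\); the `augment` function is a placeholder for any stochastic perturbation, and the coefficient is illustrative.

```python
import torch
import torch.nn.functional as F

def ssl_loss(model, x_lab, y_lab, x_unlab, augment, consistency_coef=1.0):
    """General SSL objective of Eq. (3.1): supervised loss l(.) on labeled
    data plus a consistency regularizer r(.) on unlabeled data."""
    sup = F.cross_entropy(model(x_lab), y_lab)
    # Pi-Model-style r(.): distance between two stochastic predictions
    p1 = F.softmax(model(augment(x_unlab)), dim=-1)
    p2 = F.softmax(model(augment(x_unlab)), dim=-1)
    unsup = F.mse_loss(p1, p2.detach())
    return sup + consistency_coef * unsup
```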
### Impact of OOD on SSL Performance
In this section, we provide a systematic analysis of the impact of OODs for many popular SSL algorithms, such as Pseudo-Label(PL) (Lee, 2013), \(\Pi\)-Model (Laine and Aila, 2016), Mean Teacher(MT) (Tarvainen and Valpola, 2017), and Virtual Adversarial Training (VAT) (Miyato et al., 2018). We illustrate the discoveries using the following synthetic and real-world datasets. While we mainly focus on VAT as the choice of the SSL algorithm, the observations extend to other SSL algorithms as well.
**Synthetic dataset.** We considered the two moons dataset (red points are labeled data, gray circle points are in-distribution (ID) unlabeled data) with OOD points (yellow triangles) in three different scenarios that can exist in the real world: 1) the Faraway OOD scenario, where the OOD points exist far from the decision boundary; 2) the Boundary OOD scenario, where the OOD points occur close to the decision boundary; and 3) the Mixed OOD scenario, where OOD points exist both far from and close to the decision boundary, as shown in Fig 3.2.
**Real-world dataset.** We consider MNIST as ID data with three types of OODs to account for plausible real-world scenarios. 1) Faraway OOD: We used the Fashion MNIST (F-MNIST)
Figure 3.2: SSL performance with different type OODs in synthetic datasets.
dataset, which contains fashion images, as the Faraway OOD dataset, as it inherently has different patterns compared to the MNIST dataset; 2) Boundary OOD: We used the EMNIST dataset, which contains handwritten characters, as the Boundary OOD dataset, as it has similar patterns compared to the MNIST dataset. In addition to EMNIST, we also considered Mean MNIST (M-MNIST) as a boundary OOD dataset, which was generated by averaging MNIST images from two different classes (the usage of M-MNIST as boundary OOD is also considered in (Guo et al., 2019)); 3) Mixed OOD: For the Mixed OOD dataset, we combined Fashion MNIST and EMNIST together. To justify the usage of the above-mentioned datasets as Faraway, Boundary, and Mixed datasets, we use the vacuity uncertainty (Zhao et al., 2020) values as an estimate of their distance to the decision boundary. Table 3.2 shows the average vacuity uncertainty for each type of OOD; the results indicate that Fashion MNIST has larger vacuity uncertainty compared to MNIST, whereas the vacuity uncertainties of EMNIST and M-MNIST are closer to MNIST. For more details, refer to the supplementary material. We also conducted a similar experiment based on the CIFAR10 dataset. We adapt CIFAR10 to a 6-class classification task, using 400 labels per class (from the 6 classes); the ID classes are "bird", "cat", "deer", "dog", "frog", and "horse", and the OOD data are from the classes "airplane", "automobile", "ship", and "truck". We regard this type of OOD as boundary OODs, named CIFAR-4. Besides, we use samples from the SVHN dataset as faraway OODs and consider mixed OODs that combine both SVHN and CIFAR-4. Table 3.3 shows the vacuity uncertainty for each dataset, with a similar pattern. Note that we use WRN-28-2 as the backbone for the CIFAR10 experiment, so we cannot remove the BN layer (the SSL performance would significantly decrease when removing BN from WRN-28-2); hence we only show "SSL-BN" and "SSL-FBN" in this case. The results are shown in Fig 3.3 (b), which exhibits a pattern similar to that observed on the MNIST dataset.
For all experiments in this section, we used a multilayer perceptron neural network (MLP) with three layers as a backbone architecture for the synthetic dataset and LeNet (LeCun
et al., 1989) as a backbone for the real-world datasets. We consider the following models in the experiments: 1) _SSL-NBN_: MLP or LeNet model without Batch Normalization; 2) _SSL-BN_: MLP or LeNet model with Batch Normalization; 3) _SSL-FBN_: MLP or LeNet model where we freeze the batch normalization layers for the unlabeled instances. Freezing BN (FBN) (Oliver et al., 2018) is a common trick to improve the SSL model robustness where we freeze batch normalization layers by not updating _running_mean_ and _running_variance_ in the training phase.
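For reference, below is a minimal sketch (a hypothetical helper, not the exact code used in our experiments) of the freezing trick behind SSL-FBN in PyTorch: BN layers keep normalizing with their current running statistics but stop updating them.

```python
import torch.nn as nn

def set_bn_frozen(model, frozen=True):
    """SSL-FBN trick: put every BN layer in eval mode so running_mean and
    running_var are neither updated nor recomputed from the current batch."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            if frozen:
                m.eval()
            else:
                m.train()

# Usage sketch: freeze the statistics for the unlabeled forward pass.
# set_bn_frozen(model, True); out_u = model(x_unlab); set_bn_frozen(model, False)
```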
The following are the main observations. First, from Fig 3.2, we see that with BN (i.e., SSL-BN), there is a significant impact on model performance and learned decision boundaries in the presence of OOD. This performance degradation is even more pronounced
\begin{table}
\begin{tabular}{c c c c} \hline _SVHN_ & _CIFAR-4_ & _Mixed_ & _CIFAR(ID)_ \\ \hline
0.27 & 0.18 & 0.22 & 0.15 \\ \hline \end{tabular}
\end{table}
Table 3.3: Uncertainty for different types of OODs based on CIFAR10.
Figure 3.3: SSL performance with different type OODs in real world datasets.
in Faraway OOD, since the BN statistics, like the running mean/variance, can be significantly changed by faraway OOD points. Second, when we do not use BN (i.e., SSL-NBN), the impact of the Faraway OOD and Mixed OOD data is reduced. However, in the case of Boundary OOD (Fig 3.2 (b) and the EMNIST/M-MNIST cases of Fig 3.3), we still see significant performance degradation compared to the skyline. Moreover, BN is a crucial component in more complicated models (e.g., the ResNet family), and we expect OOD instances to play a significant role there. Third, when freezing the BN layers for the unlabeled data (i.e., SSL-FBN), we see that the effect of Faraway and Mixed OODs is alleviated, but SSL-FBN still performs worse than SSL-NBN on Faraway and Mixed OODs (and there is a big scope of improvement w.r.t. the skyline). Finally, both SSL-NBN and SSL-FBN fail to efficiently mitigate the performance degradation caused by boundary OOD data points. In the supplementary material, we also show similar observations made on the CIFAR-10 dataset.
The main takeaways of the synthetic and real data experiments are as follows: 1) OOD instances close to the decision boundary (boundary OODs) hurt SSL performance irrespective of the use of batch normalization; 2) OOD instances far from the decision boundary (faraway OODs) hurt SSL performance if the model involves BN; freezing BN can reduce the impact of OODs to some extent but not entirely; 3) OOD instances far from the decision boundary do not hurt SSL performance if there is no BN in the model. In the next section, we propose a robust SSL reweighting framework to address the above-mentioned issues caused by OOD data points.
### Methodology
In this section, we first propose the uncertainty-aware robust SSL framework and then introduce two efficient bi-level algorithms to train it. More importantly, we propose Weighted Batch Normalization (WBN) to improve the robustness of our robust SSL framework against OODs.
#### Uncertainty-Aware Robust SSL Framework
**Reweighting the unlabeled data.** Consider the semi-supervised classification problem with training data (labeled \(\mathcal{D}\) and unlabeled \(\mathcal{U}\)) and classifier \(f(x;\theta)\). Generally, the optimal classifier parameters \(\theta\) can be extracted by minimizing the SSL loss (Eq. (3.1)) calculated on the training set. In the presence of unlabeled OOD data, sample reweighting methods enhance the robustness of training by imposing a weight \(w_{j}\) on the \(j\)-th unlabeled sample loss,
\[\mathcal{L}_{T}(\theta,\mathbf{w})=\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D }}l(f(\mathbf{x}_{i};\theta),y_{i})+\sum_{x_{j}\in\mathcal{U}}w_{j}r(f( \mathbf{x}_{j};\theta)),\]
where we denote the weighted unlabeled loss by \(\mathcal{L}_{U}\) and treat the weights \(\mathbf{w}\) as hyperparameters. Our goal is to learn a sample weight vector \(\mathbf{w}\) such that \(w_{j}=0\) for OOD samples and \(w_{j}=1\) for in-distribution (ID) samples.
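Concretely, the weighted training loss can be sketched as follows (a minimal sketch; `r` stands for any per-sample SSL regularizer, such as the \(\Pi\)-Model consistency term above, and the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def weighted_training_loss(model, x_lab, y_lab, x_unlab, w, r):
    """Weighted training loss L_T(theta, w): the supervised term is
    unchanged, while each unlabeled sample's regularization loss is
    scaled by its learnable weight w_j (ideally 0 for OOD, 1 for ID)."""
    sup = F.cross_entropy(model(x_lab), y_lab)
    per_sample_r = r(model, x_unlab)   # shape: (num_unlabeled_in_batch,)
    unsup = (w * per_sample_r).mean()
    return sup + unsup
```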
Denote \(\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})=\mathcal{L}_{V}(\theta^{*} (\mathbf{w}))+\lambda\cdot\text{Reg}(\mathbf{w})\) as the validation loss with a regularization term over the validation dataset, where \(\text{Reg}(\mathbf{w})\) is the regularization term, \(\lambda\) is the regularization coefficient, and \(\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))\triangleq\sum_{(\mathbf{x}_{i},y_{i} )\in\mathcal{V}}l(f(\mathbf{x}_{i},\theta^{*}(\mathbf{w})),y_{i})\). This labeled set could either be a held-out validation set, or the original labeled set \(\mathcal{D}\). Intuitively, the problem given in Eq. (3.2) aims to choose weights of unlabeled samples \(\mathbf{w}\) that minimize the super
Figure 3.4: Main flowchart of the proposed Weighted Robust SSL algorithm.
vised loss evaluated on the validation set when the model parameters \(\theta^{*}(\mathbf{w})\) are optimized by minimizing the weighted SSL loss \(\mathcal{L}_{T}(\theta,\mathbf{w})\).
**Uncertainty-aware bi-level optimization objective.** Since manual tuning and grid-search for each \(w_{i}\) is intractable, we pose the weights optimization problem described above as a _bi-level_ optimization problem.
\[\min_{\mathbf{w}} \mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w}),\] \[\text{s.t.} \theta^{*}(\mathbf{w})=\operatorname*{arg\,min}_{\theta} \mathcal{L}_{T}(\theta,\mathbf{w}). \tag{3.2}\]
where \(\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})\) is the objective used to optimize \(\mathbf{w}\). Based on the observations from Sec 3.3, we find that OOD data usually hold high vacuity uncertainty. Therefore, we design the outer loop objective function with an uncertainty regularization term,
\[\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})=\mathcal{L}_{V}(\theta^{*} (\mathbf{w}))+\lambda\cdot\text{Un}(\mathbf{w},\mathbf{u}) \tag{3.3}\]
where \(\text{Un}(\mathbf{w},\mathbf{u})=\mathbf{u}(1-\mathbf{w})^{2}\) is the uncertainty regularization term, and \(\lambda\) is the regularization coefficient. Based on this design, the objective function pushes OOD samples (which carry high vacuity uncertainty) toward small weights and in-distribution samples toward large weights.
Calculating the optimal \(\theta^{*}\) and \(\mathbf{w}\) requires two nested loops of optimization, making the exact solution expensive and intractable to obtain (Franceschi et al., 2018), especially when the optimization involves a deep learning model and large datasets. Since gradient-based methods like Stochastic Gradient Descent (SGD) have been shown to be very effective for machine learning and deep learning problems (Bengio, 2000), we adopt both high-order approximation and meta approximation strategies, as described in Sec 3.4.2.
#### 3.4.2 Bi-level Optimization Approximation
In this section, we develop two efficient bi-level algorithms that offer different trade-offs between computational efficiency and accuracy.
**Implicit Differentiation.** We directly calculate the weight gradient \(\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})}{\partial\mathbf{w}}\) by the chain rule:
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})}{\partial \mathbf{w}}=\underbrace{\frac{\partial\mathcal{L}_{V}}{\partial\mathbf{w}}}_{(a )}+\underbrace{\frac{\partial\mathcal{L}_{V}}{\partial\theta^{*}(\mathbf{w})}}_ {(b)}\times\underbrace{\frac{\partial\theta^{*}(\mathbf{w})}{\partial \mathbf{w}}}_{(c)} \tag{3.4}\]
where (a) is the weight direct gradient (e.g., gradient from regularization term, \(\text{Reg}(\mathbf{w})\)), (b) is the parameter direct gradient, which is easy to compute. The difficult part is the term (c) (best-response Jacobian). We approximate (c) by using the Implicit function theorem (Lorraine et al., 2020),
\[\frac{\partial\theta^{*}(\mathbf{w})}{\partial\mathbf{w}}=-\underbrace{\Big{[}\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\Big{]}^{-1}}_{(d)}\times\underbrace{\frac{\partial^{2}\mathcal{L}_{T}}{\partial\mathbf{w}\partial\theta^{T}}}_{(e)} \tag{3.5}\]
However, computing Eq. (3.5) is challenging when using deep nets because it requires inverting a high-dimensional Hessian (term (d)), which often requires \(\mathcal{O}(m^{3})\) operations. Therefore, we use the Neumann series approximation (Lorraine et al., 2020) of the term (d), which we empirically found to be effective for SSL,
\[\Big{[}\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\Big{]}^{-1}\approx\sum_{p=0}^{P}\Big{[}I-\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\Big{]}^{p} \tag{3.6}\]
where \(I\) is the identity matrix.
Since the algorithm mentioned in (Lorraine et al., 2020) utilizes the Neumann series approximation and an efficient Hessian-vector product to compute the Hessian-inverse product, it can efficiently compute the Hessian-inverse product even when a large number of weight hyperparameters are present. We should also note that the implicit function theorem's assumption \(\frac{\partial\mathcal{L}_{T}}{\partial\theta}=0\) needs to be satisfied to accurately calculate the Hessian-inverse product. However, in practice, we only approximate \(\theta^{*}\), and simultaneously train both \(\mathbf{w}\) and \(\theta\) by alternately optimizing \(\theta\) using \(\mathcal{L}_{T}\) and \(\mathbf{w}\) using \(\mathcal{L}_{V}\).
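The following is a minimal sketch (assuming `v` is a flattened vector matching the concatenated parameter dimensions and the scaling factor is chosen so the series converges; names are illustrative) of the truncated Neumann series of Eq. (3.6), computed with Hessian-vector products rather than an explicit Hessian:

```python
import torch

def neumann_ihvp(train_loss, params, v, P=5, alpha=1.0):
    """Approximate H^{-1} v with the truncated Neumann series
    alpha * sum_{p=0}^{P} (I - alpha*H)^p v, where H is the Hessian of
    the training loss w.r.t. the model parameters (Eq. (3.6))."""
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    p_term = v.detach().clone()   # current term (I - alpha*H)^p v
    ihvp = p_term.clone()         # running sum of the series
    for _ in range(P):
        hvp = torch.autograd.grad(flat_grad @ p_term, params,
                                  retain_graph=True)
        hvp = torch.cat([h.reshape(-1) for h in hvp])
        p_term = p_term - alpha * hvp
        ihvp = ihvp + p_term
    return alpha * ihvp
```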
**Meta approximation.** Here we propose the meta-approximation method to jointly update both the network parameters \(\theta\) and the hyperparameters \(\mathbf{w}\) in an iterative manner. At iteration step \(t\), we approximate \(\theta_{t}^{*}\approx\theta_{t}^{J}\) on the training set via a low-order approximation, where \(J\) is the number of inner-loop gradient steps; Eq. (3.7) shows each gradient step update,
\[\theta_{t}^{j}(\mathbf{w}_{t})=\theta_{t}^{j-1}-\alpha\nabla_{\theta}\mathcal{L }_{T}(\theta_{t}^{j-1},\mathbf{w}_{t}) \tag{3.7}\]
We then update the hyperparameters \(\mathbf{w}_{t+1}\) on the basis of the network parameters \(\theta_{t}^{*}\) and the weights \(\mathbf{w}_{t}\) obtained in the last iteration. To guarantee efficiency and general feasibility, the outer-loop optimization updates the weights by one gradient step on the validation set \(\mathcal{V}\),
\[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\beta\nabla_{\mathbf{w}}\mathcal{L}_{V}( \theta_{t}^{*},\mathbf{w}_{t}) \tag{3.8}\]
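Putting Eqs. (3.7)-(3.8) together with \(J=1\), a minimal PyTorch sketch of one joint update looks as follows (assuming PyTorch 2.x for `torch.func.functional_call`; the uncertainty regularization term of Eq. (3.3) is omitted for brevity, and `r` is a placeholder per-sample regularizer):

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def meta_weight_step(model, w, x_l, y_l, x_u, r, x_v, y_v, alpha, beta):
    """One meta-approximation update (Eqs. (3.7)-(3.8) with J = 1).
    w holds one weight per unlabeled sample and has requires_grad=True."""
    params = dict(model.named_parameters())

    # Eq. (3.7): one differentiable gradient step on the weighted loss L_T
    train_loss = (F.cross_entropy(functional_call(model, params, (x_l,)), y_l)
                  + (w * r(functional_call(model, params, (x_u,)))).mean())
    grads = torch.autograd.grad(train_loss, tuple(params.values()),
                                create_graph=True)
    theta_1 = {n: p - alpha * g for (n, p), g in zip(params.items(), grads)}

    # Eq. (3.8): one gradient step on the validation loss through theta_1(w)
    val_loss = F.cross_entropy(functional_call(model, theta_1, (x_v,)), y_v)
    w_grad, = torch.autograd.grad(val_loss, w)
    with torch.no_grad():
        w -= beta * w_grad
        w.clamp_(0.0, 1.0)  # keep the weights in [0, 1]
    return w
```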
We analyze the convergence of the meta-approximation method and derive the following theorem.
**Theorem 2**.: _Suppose the validation function is Lipschitz smooth with constant \(L\), and the supervised loss and unsupervised loss have \(\gamma\)-bounded gradients. Let the step size \(\alpha\) for \(\theta\) satisfy \(\alpha=\min\{1,\frac{k}{T}\}\) for some constant \(k>0\) such that \(\frac{k}{T}<1\), and \(\beta=\min\{\frac{1}{L},\frac{C}{\sqrt{T}}\}\) for some constant \(C>0\) such that \(\frac{\sqrt{T}}{C}\leq L\). Then, the meta approximation algorithm can achieve \(\mathbb{E}[\|\nabla_{\mathbf{w}}\mathcal{L}_{V}(\theta_{t})\|_{2}^{2}]\leq\epsilon\) in \(\mathcal{O}(1/\epsilon^{2})\) steps. More specifically,_
\[\min_{0\leq t\leq T}\mathbb{E}[\|\nabla_{\mathbf{w}}\mathcal{L}_{V}(\theta_{t} )\|_{2}^{2}]\leq\mathcal{O}(\frac{C}{\sqrt{T}}) \tag{3.9}\]
_where \(C\) is some constant independent to the convergence process._
Proof.: See Appendix A.3.
**Complexity.** Compared with regular optimization of a single-level problem, our robust SSL requires \(J\) extra forward and backward passes of the classifier network. To compute the weight gradient via bi-level optimization, the meta approximation requires extra forward and backward passes. Therefore, compared with the regular training procedures of SSL, our robust SSL with meta approximation needs approximately \((2+J)\times\) training time. For implicit differentiation, the training time is complex to estimate; we show the running time in Fig 3.8 (c) in the experiment section.
**Connections between implicit-differentiation and meta-approximation.**
**Proposition 2**.: _Suppose that the Hessian inverse of the training loss \(\mathcal{L}_{T}\) with respect to the model parameters \(\theta\) is equal to the identity matrix, \(\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}=\mathbf{I}\) (i.e., \(P=0\) for the implicit differentiation approach). Suppose the model parameters are optimized using single-step gradient descent (i.e., \(J=1\) for the low-order approximation approach), and the model learning rate is equal to one. Then, the weight update step in the implicit differentiation and low-order approximation approaches is identical._
Proof.: See Appendix A.4.
Proposition 2 shows that the weight gradient from implicit differentiation is the same as the weight gradient from the meta approximation method when \(P=0\). This also signifies that using larger \(P\) values for the inverse Hessian estimation uses higher-order information and is more accurate. We designed these two efficient hyper-parameter optimization algorithms to have different trade-offs between computational efficiency and accuracy. Meta approximation was designed based on lower-order approximations of the objective in the inner loop of the meta optimization, due to vanishing gradients or memory constraints. Implicit differentiation was designed based on higher-order approximations of the objective and is more accurate than meta approximation, while at the same time being computationally more expensive.
#### 3.4.3 Weighted Batch Normalization
In practice, most deep SSL models use deep CNNs, and BN usually serves as an essential component of many deep CNN models (He et al., 2016; Huang et al., 2017). Specifically, BN normalizes input features by the mean and variance computed within each mini-batch. At the same time, OODs can indeed affect SSL performance through BN (we discussed this issue in Sec. 3.3). To address this issue, we propose a _Weighted Batch Normalization_ (WBN) that performs the normalization for each training mini-batch with sample weights \(\mathbf{w}\). We present WBN in Algorithm 2, where \(\epsilon\) is a constant added to the mini-batch variance for numerical stability.
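For intuition, the core of the weighted batch normalizing transform can be sketched as follows (a minimal sketch for an (N, C) fully connected activation; Algorithm 2 additionally maintains weighted running statistics, which we omit here):

```python
import torch

def weighted_batch_norm(x, w, gamma, beta, eps=1e-5):
    """Weighted batch normalizing transform WBN(x, w): the mini-batch mean
    and variance are weighted averages, so samples with w_j = 0 (suspected
    OODs) contribute nothing to the statistics. x: (N, C), w: (N,)."""
    wn = w / (w.sum() + eps)                         # normalized weights
    mu = (wn[:, None] * x).sum(dim=0)                # weighted batch mean
    var = (wn[:, None] * (x - mu) ** 2).sum(dim=0)   # weighted batch variance
    return gamma * (x - mu) / torch.sqrt(var + eps) + beta
```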
**Proposition 3**.: _Given an in-distribution mini-batch \(\mathcal{I}=\{\mathbf{x}_{i}\}_{i=1}^{m}\), an OOD mini-batch \(\mathcal{O}=\{\hat{\mathbf{x}}_{i}\}_{i=1}^{m}\), and the mixed mini-batch \(\mathcal{IO}=\mathcal{I}\cup\mathcal{O}\), denote by \(\mu_{\mathcal{M}}\) the mini-batch mean of \(\mathcal{M}\) (either \(\mathcal{O},\mathcal{I}\) or \(\mathcal{IO}\)). With faraway OOD, \(\|\mu_{\mathcal{O}}-\mu_{\mathcal{I}}\|_{2}>L\), where \(L\) is large (\(L\gg 0\)), we have:_
1. \(\|\mu_{\mathcal{IO}}-\mu_{\mathcal{I}}\|_{2}>\frac{L}{2}\) _and_ \(BN_{\mathcal{I}}(\mathbf{x}_{i})\neq BN_{\mathcal{IO}}(\mathbf{x}_{i})\)_;_
2. _Given perfect weights_ \(\mathbf{w}=\mathbf{w}_{\mathcal{I}}\cup\mathbf{w}_{\mathcal{O}}\)_, where_ \(\mathbf{w}_{\mathcal{I}}=\mathbf{1}\) _for mini-batch_ \(\mathcal{I}\) _and_ \(\mathbf{w}_{\mathcal{O}}=\mathbf{0}\) _for mini-batch_ \(\mathcal{O}\)_, then_ \(\mu_{\mathcal{I}}=\mu_{\mathcal{IO}}^{\mathbf{w}}\) _and_ \(BN_{\mathcal{I}}(\mathbf{x}_{i})=WBN_{\mathcal{IO}}(\mathbf{x}_{i},\mathbf{w})\)__
_where \(\mu_{\mathcal{IO}}^{\mathbf{w}}\) is the weighted mini-batch mean of \(\mathcal{IO}\), \(BN_{\mathcal{M}}(\mathbf{x}_{i})\) is traditional batch normalizing transform based on mini-batch from \(\mathcal{M}\) (either \(\mathcal{I}\) or \(\mathcal{IO}\)) and \(WBN_{\mathcal{IO}}(\mathbf{x}_{i},\mathbf{w})\) is weighted batch normalizing transform based on the set \(\mathcal{IO}\) with weights \(\mathbf{w}\)._
Proof.: See Appendix A.5.
Proposition 3 shows that when an unlabeled set contains OODs, the traditional BN behavior could be problematic, resulting in incorrect statistics estimation; e.g., the output of BN for the mixed mini-batch is \(BN_{\mathcal{IO}}(\mathbf{x}_{i})\approx\gamma\frac{\mathbf{x}_{i}-\mu_{\mathcal{O}}}{\|\mu_{\mathcal{O}}-\mu_{\mathcal{I}}\|_{2}}+\beta\), which is not our expected result (\(BN_{\mathcal{I}}(\mathbf{x}_{i})=\gamma\frac{\mathbf{x}_{i}-\mu_{\mathcal{I}}}{\sqrt{\sigma_{\mathcal{I}}^{2}+\epsilon}}+\beta\)). Our proposed weighted batch normalization (WBN), in contrast, can reduce the OOD effect and recover the expected result. Therefore, our uncertainty-aware robust SSL framework uses WBN instead of BN if the model includes a BN layer. Ablation studies in Sec. 3.5 demonstrate that our approach with WBN can improve performance further. Finally, our uncertainty-aware robust SSL framework is detailed in Algorithm 3.
#### 3.4.4 Additional Implementation Details
In this subsection, we discuss additional implementation details and practical tricks that make our weighted robust SSL scalable and efficient.
**Last-layer gradients.** Computing the gradients over deep models is time-consuming due to the enormous number of parameters in the model. To address this issue, we adopt a last-layer gradient approximation similar to (Ash et al., 2019; Killamsetty et al., 2021) by only considering the last classification layer's gradients of the classifier model in the inner-loop optimization (step 10 in Algorithm 3). By simply using the last-layer gradients, we achieve significant speedups in weighted robust SSL.
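A minimal sketch of this trick is shown below (a hypothetical helper; `per_sample_loss` is a placeholder for the per-sample unlabeled loss): the penultimate features are detached, so the differentiable inner step only touches the small classifier head.

```python
import torch

def last_layer_inner_step(feats, head_w, head_b, per_sample_loss, w, alpha):
    """Inner-loop step restricted to the last linear layer: with the
    features detached, create_graph backpropagation covers only the
    classifier head instead of the whole network."""
    logits = feats.detach() @ head_w.t() + head_b
    loss = (w * per_sample_loss(logits)).mean()
    g_w, g_b = torch.autograd.grad(loss, (head_w, head_b), create_graph=True)
    return head_w - alpha * g_w, head_b - alpha * g_b
```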
**Infrequent updates of w.** We update the weight parameters every \(L\) iterations (\(L>2\)). In our experiments, we see that we can set \(L=5\) without a significant loss in accuracy. For MNIST experiments, we can be even more aggressive and set \(L=20\).
**Weight Sharing and Regularization.** Considering the entire weight vector \(\mathbf{w}\) (over all unlabeled points) is not practical for large datasets and easily overfits (see the ablation study experiments), so we propose two ways to fix this. The first is weight sharing via clustering, which we call the Cluster Re-weight (CRW) method. Specifically, we use an unsupervised clustering algorithm (e.g., the K-means algorithm) to embed unlabeled samples into \(K\) clusters and assign a weight to each cluster, such that we can reduce the dimensionality of \(\mathbf{w}\) from \(|M|\) to \(|K|\), where \(|K|\ll|M|\). In practice, for high-dimensional data, we may use a pre-trained model to calculate an embedding for each point before applying the clustering method. In cases where we do not have an effective pre-trained model for embedding, we consider another variant that applies weights to every unlabeled point but adds an L1 regularization in Eq (3.2) for sparsity in \(\mathbf{w}\). We show that both these tricks effectively improve the performance of reweighting and prevent overfitting on the validation set.
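A minimal sketch of CRW (assuming embeddings have already been computed, e.g., from a pre-trained model; names are illustrative):

```python
import torch
from sklearn.cluster import KMeans

def crw_cluster_ids(embeddings, K=20):
    """Cluster Re-weight (CRW): cluster unlabeled-sample embeddings into K
    groups so that one learnable weight is shared per cluster, shrinking
    the weight vector from |M| entries to K."""
    return KMeans(n_clusters=K, n_init=10).fit_predict(embeddings)

# Usage sketch: expand the K cluster weights to per-sample weights.
# cluster_ids = crw_cluster_ids(feats.cpu().numpy(), K=20)
# w_samples = w_clusters[torch.as_tensor(cluster_ids)]
```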
### Experiment
To corroborate our algorithm, we conduct extensive experiments comparing our approaches (R-Meta and R-IFT) with some popular baseline methods. We aim to answer the following questions:
**Question 1:** Can our approach achieve better performance on both different types of OODs with varying OOD ratios compared with baseline methods?
**Question 2:** How does our approach compare in terms of running times compared with baseline methods?
**Question 3:** What is the effect of each of the components of our approach (e.g., WBN, clustering/regularization, inverse Hessian approximation, inner loop gradient steps)?
#### 3.5.1 Evaluation on Synthetic Dataset
We designed a synthetic experiment to illustrate how OODs affect SSL performance. The experimental setting is the same as the setting used in the section "Impact of OOD on SSL Performance". We used the Two Moons dataset with six labeled points and 2000 unlabeled (in-distribution) points and considered two OOD types, including faraway OODs and boundary OODs. We conducted the experiments with OOD ratio = \(\{25\%,50\%,75\%\}\) and reported the averaged accuracy rate with mean and standard deviation over ten runs. Table 3.4 shows that our robust SSL (named R-SSL-IFT (Implicit Differentiation) and R-SSL-Meta (Meta approximation)) is more effective than the four baselines on test accuracy. We also conducted experiments on additional synthetic datasets and observed similar trends of the results, as shown in Fig 3.5.
#### 3.5.2 Real-world Dataset Details
**Datasets.** We consider four image classification benchmark datasets. (1) **MNIST**: a handwritten digit classification dataset, with 50,000/ 10,000/ 10,000 training/validation/test sam
\begin{table}
\begin{tabular}{c|c c c} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{_Faraway OODs_} \\ & 25\% & 50\% & 75\% \\ \hline Supervised & 84.4\(\pm\) 0.3 & 84.4\(\pm\) 0.3 & 84.4\(\pm\) 0.3 \\ SSL-NBN & 100.0\(\pm\) 0.0 & 100.0\(\pm\) 0.0 & 100.0\(\pm\) 0.0 \\ SSL-BN & 60.7\(\pm\) 1.5 & 50.0\(\pm\) 0.0 & 50.0\(\pm\) 0.0 \\ SSL-FBN & 89.7\(\pm\) 0.7 & 87.0\(\pm\) 1.1 & 81.0\(\pm\) 1.7 \\
**R-SSL-IFT** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** \\
**R-SSL-Meta** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** \\ \hline \multicolumn{4}{c}{_Boundary OODs_} \\ & 25\% & 50\% & 75\% \\ \hline Supervised & 84.4\(\pm\) 0.3 & 84.4\(\pm\) 0.3 & 84.4\(\pm\) 0.3 \\ SSL-NBN & 87.3\(\pm\) 0.3 & 83.4\(\pm\) 0.4 & 82.4\(\pm\) 0.5 \\ SSL-BN & 84.7\(\pm\) 0.4 & 82.3\(\pm\) 0.3 & 80.4\(\pm\) 0.5 \\ SSL-FBN & 89.7\(\pm\) 0.7 & 87.0\(\pm\) 1.1 & 81.0\(\pm\) 1.7 \\
**R-SSL-IFT** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** \\
**R-SSL-Meta** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** & **100.0\(\pm\) 0.0** \\ \hline \end{tabular}
\end{table}
Table 3.4: Test accuracies for the two moons dataset (\(P=10,J=3\) for R-SSL-IFT and \(J=1\) for R-SSL-Meta).
Figure 3.5: Additional experiments on the synthetic dataset.
ples, with the training data split into two groups - labeled and unlabeled in-distribution (ID) images (the labeled data has ten images per class) - and with two types of OODs: a) Fashion MNIST, b) Mean MNIST (where the OOD instances are mean images of two classes; see Appendix D for more details); (2) **CIFAR10**: a natural image dataset with 45,000/5,000/10,000 training/validation/test samples from 10 object classes; following (Oliver et al., 2018), we adapt CIFAR10 to a 6-class classification task, using 400 labels per class (from the 6 classes) with the rest of the classes as OOD (the ID classes are "bird", "cat", "deer", "dog", "frog", and "horse", and the OOD data are from the classes "airplane", "automobile", "ship", and "truck"); (3) **CIFAR100**: another natural image dataset with 45,000/5,000/10,000 training/validation/test images; similar to CIFAR10, we adapt CIFAR100 to a 50-class classification task with 40 labels per class - the ID classes are the first 50 classes, and the OOD data corresponds to the last 50 classes; (4) **SVHN-extra**: this is the SVHN dataset with 531,131 additional digit images (Tarvainen and Valpola, 2017); we adapt SVHN-extra to a 5-class classification task, using 400 labels per class. The ID classes are the first five classes, and the OOD data corresponds to the last five classes.
**Comparing Methods.** To evaluate the effectiveness of our proposed weighted robust SSL approaches, we compare with four state-of-the-art robust SSL approaches: UASD (Chen et al., 2020), DS3L (Guo et al., 2020), L2RW (Ren et al., 2018), and MWN (Shu et al., 2019). The last two approaches, L2RW and MWN, were originally designed for robust supervised learning (SL), and we adapted them to robust SSL by replacing the supervised learning loss function with an SSL loss function. We compare these robust approaches on four representative SSL methods, including Pseudo-Label (PL) (Lee, 2013),
\begin{table}
\begin{tabular}{c|c c} \hline Model & _Faraway OODs_ & _Boundary OODs_ \\ \hline R-SSL-IFT P=1 & 55.0\(\pm\) 2.1 & 83.9\(\pm\) 0.7 \\ R-SSL-IFT P=5 & 91.0\(\pm\) 3.1 & 93.9\(\pm\) 0.9 \\ R-SSL-IFT P=10 & 100.0\(\pm\) 0.0 & **100.0\(\pm\) 0.0** \\ \hline \end{tabular}
\end{table}
Table 3.5: Test accuracies for different \(P\) at OOD ratio = 50% on the synthetic dataset.
\(\Pi\)-Model (PI) (Laine and Aila, 2016; Sajjadi et al., 2016), Mean Teacher (MT) (Tarvainen and Valpola, 2017), and Virtual Adversarial Training (VAT) (Miyato et al., 2018). One additional baseline is the supervised learning method, named "Sup," which ignores all the unlabeled examples during training. All the compared methods (except UASD (Chen et al., 2020)) were built upon the open-source Pytorch implementation1 by (Oliver et al., 2018). As UASD has not released its implementation, we implemented UASD ourselves. For DS3L, we implemented it based on the released code 2. For L2RW (Ren et al., 2018), we used the open-source Pytorch implementation3 and adapted it to the SSL settings. For MWN (Shu et al., 2019), we used the authors' implementation4 and adapted it to SSL.
Footnote 1: [https://github.com/perrying/realistic-ssl-evaluation-pytorch](https://github.com/perrying/realistic-ssl-evaluation-pytorch)
Footnote 2: [https://github.com/guolz-ml/DS3L](https://github.com/guolz-ml/DS3L)
Footnote 3: [https://github.com/danieltan07/learning-to-reweight-examples](https://github.com/danieltan07/learning-to-reweight-examples)
Footnote 4: [https://github.com/xjtushujun/meta-weight-net](https://github.com/xjtushujun/meta-weight-net)
**Setup.** In our experiments, we implement our approaches (Ours-SSL) for four representative SSL methods, including Pseudo-Label (PL), \(\Pi\)-Model (PI), Mean Teacher (MT), and Virtual Adversarial Training (VAT). The term "SSL" in Ours-SSL represents the SSL method (e.g., Ours-VAT denotes our weighted robust SSL algorithm implemented based on VAT). We used the standard LeNet model as the backbone for the MNIST experiment and used _WRN-28-2_ (Zagoruyko and Komodakis, 2016) as the backbone for the CIFAR10, CIFAR100, and SVHN experiments. For a comprehensive and fair comparison on the CIFAR10 experiment, we followed the same experiment setting as (Oliver et al., 2018). All the compared methods were built upon the open-source Pytorch implementation by (Oliver et al., 2018). The code and datasets are temporarily available for reviewing purposes here5. Our code and datasets are also submitted as supplementary material.
Footnote 5: [https://anonymous.4open.science/r/WR-SSL-406F/README.md](https://anonymous.4open.science/r/WR-SSL-406F/README.md)
**Hyperparameter setting.** For our WR-SSL approach, we update the weights only using last layer for the inner optimization, we set \(J=3\) (for inner loop gradient steps), \(P=5\) (for inverse Hessian approximation), \(K=20\) (for CRW), \(\lambda=10^{-7}\) (for L1), and \(L=5\) (for infrequent update) for all experiments. We trained all the networks for 2,000 updates with a batch size of 100 for MNIST experiments, and 500,000 updates with a batch size of 100 for CIFAR10, CIFAR100, and SVHN experiments. We did not use any form of early stopping but instead continuously monitored the validation set performance and reported test errors at the point of the lowest validation error. We show the specific hyperparameters used with four representative SSL methods on MNIST experiments in Table 3.6. For CIFAR10, we
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{**Shared**} \\ \hline Learning rate decayed by a factor of & 0.2 \\ at training iteration & 1,000 \\ Warmup coefficient = 1 (do not use warmup) & \\ \hline \multicolumn{1}{c}{**Supervised**} \\ \hline Initial learning rate & 0.003 \\ \hline \multicolumn{1}{c}{\(\Pi\)**-Model**} \\ \hline Initial learning rate & 0.003 \\ Max consistency coefficient & 20 \\ \hline \multicolumn{1}{c}{**Mean Teacher**} \\ \hline Initial learning rate & 0.0004 \\ Max consistency coefficient & 8 \\ Exponential moving average decay & 0.95 \\ \hline \multicolumn{1}{c}{**VAT**} \\ \hline Initial learning rate & 0.003 \\ Max consistency coefficient & 0.3 \\ VAT \(\epsilon\) & 3.0 \\ VAT \(\xi\) & \(10^{-6}\) \\ \hline \multicolumn{1}{c}{**Pseudo-Label**} \\ \hline Initial learning rate & 0.0003 \\ Max consistency coefficient & 1.0 \\ Pseudo-label threshold & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 3.6: Hyperparameter settings used in the MNIST experiments for four representative SSL methods. All robust SSL methods (e.g., ours (WR-SSL), DS3L, and UASD) are developed based on these representative SSL methods.
used the same hyperparameters as (Oliver et al., 2018). For CIFAR100 and SVHN datasets, we used the same hyperparameters as CIFAR10.
#### 3.5.3 Performance with different real-world OOD datasets
In all experiments, we report the performance over five runs. Denote the OOD ratio \(=\mathcal{U}_{ood}/(\mathcal{U}_{ood}+\mathcal{U}_{in})\), where \(\mathcal{U}_{in}\) is the ID unlabeled set, \(\mathcal{U}_{ood}\) is the OOD unlabeled set, and \(\mathcal{U}=\mathcal{U}_{in}+\mathcal{U}_{ood}\).
**Impact of faraway OODs on SSL performance (with batch normalization).** We used the FashionMNIST (Xiao et al., 2017) dataset to construct a set of OODs \(\mathcal{U}_{ood}\) far away from the decision boundary of the MNIST dataset, as FashionMNIST and MNIST have been shown to be very different and are considered cross-domain benchmark datasets (Meinke and Hein, 2019). As shown in Fig 3.6 (a)-(b), as the OOD ratio increases, the performance of existing SSL methods decreases rapidly, whereas our approach still maintains a clear performance improvement (e.g., a 10% increase with R-IFT-PI over the PI model at an OOD ratio of 75%). Compared with other robust SSL methods, ours improves the accuracy and suffers much less degradation under a high OOD ratio.
**Impact of boundary OODs on SSL performance (without batch normalization).** We generate boundary OODs by mixing up (fusing) existing ID unlabeled samples, i.e., \(\hat{\mathbf{x}}_{ood}=0.5(\mathbf{x}_{i}+\mathbf{x}_{j})\), where \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are ID images from different classes, and \(\hat{\mathbf{x}}_{ood}\) can be regarded as
Figure 3.6: Classification accuracy with varying OOD ratio on MNIST. (a)-(b) consider faraway OODs with batch normalization; (c)-(d) consider boundary OODs without batch normalization. Shaded regions indicate standard deviation.
a boundary OOD (Guo et al., 2019). As shown in Fig 3.6 (c)-(d), we observe a similar pattern: the accuracy of existing SSL methods decreases as the OOD ratio increases. Across different OOD ratios, our method significantly outperforms all the others (e.g., a 10% increase with Meta-VAT over VAT at an OOD ratio of 75%). We also conduct experiments based on other SSL algorithms; the results are shown in the supplementary material.
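For completeness, a minimal sketch of this boundary-OOD generation (hypothetical helper name):

```python
import torch

def make_boundary_oods(x, y, n_ood):
    """Generate boundary OODs by averaging pairs of ID images drawn from
    different classes: x_ood = 0.5 * (x_i + x_j)."""
    ood = []
    while len(ood) < n_ood:
        i, j = torch.randint(0, len(x), (2,)).tolist()
        if y[i] != y[j]:                 # require two different classes
            ood.append(0.5 * (x[i] + x[j]))
    return torch.stack(ood)
```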
**Impact of mixed OODs on SSL performance (with batch normalization).** We followed (Oliver et al., 2018) to adapt CIFAR10 to a 6-class classification task, using 400 labels per class. The ID classes are "bird", "cat", "deer", "dog", "frog", and "horse", and the OOD classes are "airplane", "automobile", "ship", and "truck". As our implementation follows (Oliver et al., 2018), which freezes BN layers for the WRN model, we freeze BN layers for all the methods on CIFAR10 for fair comparisons. In this dataset, the examples of the OOD classes were considered OODs. As these OODs are from the same dataset, they may include OODs close to or far away from the decision boundary of the ID classes; we hence call these mixed-type OODs. The average accuracy of all compared methods vs. the OOD ratio is plotted in Fig 3.7. Across different OOD ratios, our method significantly outperforms all the others, with a striking margin when the OOD ratio is large (e.g., a 4.5% increase with Ours-Meta over L2RW at an OOD ratio of 75%).
Figure 3.7: Classification accuracy with varying OOD ratio on CIFAR10. We use _WRN-28-2_ (contains BN module) as the backbone. Shaded regions indicate standard deviation.
Unlike most SSL methods, which degrade drastically as the OOD ratio increases, ours achieves stable performance even at a 75% OOD ratio.
#### Efficiency Analysis
To evaluate the efficiency of our proposed approach, we first compare the running time among all methods. Fig 3.8 (a)-(b) shows the running time (relative to the original SSL algorithm) for MNIST and CIFAR-10. We see that our proposed approaches are only \(1.7\times\) to \(1.8\times\) slower than the original SSL algorithm, while other robust SSL methods (L2RW and DS3L) are almost \(3\times\) slower. We note that our implementation tricks can also be applied to these other techniques (DS3L, L2RW, MWN), but this would possibly degrade performance, since these approaches' performance is worse than ours even without these tricks. To further analyze our proposed speedup strategies, we plot the running time vs. accuracy in Fig 3.8 (d)-(e) for different settings (with/without last-layer updates and with/without infrequent updates). As expected, the results show that, without the last-layer updates and the infrequent updates (i.e., if \(L=1\)), Algorithm 3 is \(3\times\) slower than the SSL baseline, whereas with last-
Figure 3.8: Running time results. (a)-(b) show that our proposed approaches are only \(1.7\times\) to \(1.8\times\) slower compared to base SSL algorithms, while other robust SSL methods are \(3\times\) slower. (c) shows that the running time of our method increases as \(J\) (inner-loop gradient steps) and \(P\) (inverse Hessian approximation) increase. (d)-(e) show the running time of our strategies with different combinations of tricks, viz., last-layer updates and updating weights every \(L\) iterations. Note that by using only last-layer updates, our strategies are around \(2\times\) slower. With \(L=5\) and last-layer updates, we are around \(1.7\times\) to \(1.8\times\) slower with comparable test accuracy.
layer updates alone, it is around \(2\times\) slower. The best trade-off between speed and accuracy is obtained with both \(L=5\) and the last-layer updates. In addition, we analyze the efficiency of our approach with varying inner-loop gradient steps (\(J\)) and inverse-Hessian approximation terms (\(P\)). The results show that the running time increases as \(J\) and \(P\) increase; we choose \(J=3\) and \(P=5\) as the best trade-off between speed and accuracy.
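To make the speedup concrete, below is a minimal PyTorch-style sketch of the two tricks on a toy model: the expensive reweighting step runs only every \(L\) iterations, and the bi-level (one-step meta) differentiation touches only the last layer. All names and the toy data are hypothetical, and in practice the weights are maintained per sample rather than per batch position.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
feat = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # feature extractor (toy)
head = nn.Linear(64, 2)                             # last layer: the only meta-updated part
opt = torch.optim.SGD(list(feat.parameters()) + list(head.parameters()), lr=0.1)

L = 5  # recompute per-sample weights every L iterations (best speed/accuracy trade-off)

def recompute_weights(xu, yu, xv, yv, inner_lr=0.1):
    """One-step meta-approximation restricted to the last layer."""
    w = torch.zeros(xu.size(0), requires_grad=True)
    with torch.no_grad():                      # features are frozen for this step
        hu, hv = feat(xu), feat(xv)
    losses = F.cross_entropy(head(hu), yu, reduction="none")
    inner = (w.sigmoid() * losses).mean()
    gW, gb = torch.autograd.grad(inner, [head.weight, head.bias], create_graph=True)
    # virtual one-step update of the last layer only, then the validation loss
    val_logits = F.linear(hv, head.weight - inner_lr * gW, head.bias - inner_lr * gb)
    val_loss = F.cross_entropy(val_logits, yv)
    return (w - torch.autograd.grad(val_loss, w)[0]).sigmoid().detach()

weights = None
for it in range(100):
    xu, yu = torch.randn(32, 20), torch.randint(0, 2, (32,))  # unlabeled batch + pseudo-labels
    xv, yv = torch.randn(16, 20), torch.randint(0, 2, (16,))  # clean validation batch
    if it % L == 0:                            # infrequent (amortized) reweighting
        weights = recompute_weights(xu, yu, xv, yv)
    opt.zero_grad()
    loss = (weights * F.cross_entropy(head(feat(xu)), yu, reduction="none")).mean()
    loss.backward()
    opt.step()
```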
Figure 3.10: (a) shows that WBN and CRW (or L1 regularization) are critical in retaining the performance gains of reweighting; (b)-(c) demonstrate that the performance of our approach increases as \(J\) (inner-loop gradient steps) and \(P\) (inverse-Hessian approximation terms) increase, owing to the higher-order approximation.
Figure 3.9: (a) shows that our method learns optimal weights for ID and OOD samples; (b) shows that our method is stable even for a small validation set containing 25 images.
#### 3.5.5 Additional Analysis
**Analysis of weight variation.** Fig 3.9 (a) shows the weight learning curves of our proposed approach and DS3L on Fashion-MNIST OOD. The results show that our method learns better weights for unlabeled samples than DS3L. The weight distributions learned in the other cases are similar.
**Size of the clean validation set.** We explore the sensitivity of robust SSL approaches to the size of the clean validation set on Mean MNIST OOD. Fig 3.9 (b) plots classification performance as the size of the clean validation set varies. Surprisingly, our methods are stable even when using only 25 validation images, and the overall classification performance does not improve further beyond 1,000 validation images.
**Ablation Studies.** We conducted additional experiments on Fashion-MNIST OOD (see Fig 3.10 (a)-(c)) in order to demonstrate the contributions of the key technical components, including Cluster Re-weight (CRW) and weighted Batch Normalization (WBN). The key
\begin{table}
\begin{tabular}{c|c c c}
**OOD ratio** & _25\%_ & _50\%_ & _75\%_ \\ \hline \hline VAT & 94.1\(\pm\) 0.5 & 93.6\(\pm\) 0.7 & 92.8\(\pm\) 0.9 \\ L2RW-VAT & 96.0\(\pm\) 0.6 & 93.5\(\pm\) 0.8 & 92.7\(\pm\) 0.8 \\ MWN-VAT & 96.2\(\pm\) 0.5 & 93.8\(\pm\) 1.1 & 93.0\(\pm\) 1.3 \\ DS3L-VAT & 96.4\(\pm\) 0.7 & 93.9\(\pm\) 1.0 & 92.9\(\pm\) 1.2 \\ Ours-VAT+L1 & 96.6\(\pm\) 0.6 & 94.4\(\pm\) 0.8 & 93.2\(\pm\) 1.1 \\ Ours-VAT+CRW & **96.8\(\pm\) 0.7** & **95.2\(\pm\) 0.9** & **94.9\(\pm\) 1.3** \\ \end{tabular}
\end{table}
Table 3.7: SVHN-Extra (VAT) with different OOD ratios.
\begin{table}
\begin{tabular}{c|c c c}
**OOD ratio** & _25\%_ & _50\%_ & _75\%_ \\ \hline \hline DS3L-MT & 60.8\(\pm\) 0.5 & 60.1\(\pm\) 1.1 & 57.2\(\pm\) 1.2 \\ Ours-MT+L1 & 61.5\(\pm\) 0.4 & 60.7\(\pm\) 0.6 & 59.0\(\pm\) 0.8 \\ Ours-MT+CRW & **62.1\(\pm\) 0.5** & **61.0\(\pm\) 0.5** & **59.7\(\pm\) 0.9** \\ \hline DS3L-PI & 60.5\(\pm\) 0.6 & 60.1\(\pm\) 1.0 & 57.4\(\pm\) 1.3 \\ Ours-PI+L1 & 61.2\(\pm\) 0.4 & 60.4\(\pm\) 0.4 & 58.9\(\pm\) 0.6 \\ Ours-PI+CRW & **61.6\(\pm\) 0.4** & **60.7\(\pm\) 0.5** & **59.5\(\pm\) 0.7** \\ \end{tabular}
\end{table}
Table 3.8: CIFAR100 (MT) with different OOD ratios.
findings obtained from this experiment are: _1)_ WBN plays a vital role in our uncertainty-aware robust SSL framework in improving the robustness of BN against OOD data; _2)_ removing CRW (or L1 regularization) results in a performance decrease, especially for the VAT-based approach, which demonstrates that CRW (and L1 regularization) can further improve the performance of our robust SSL approach; _3)_ Fig 3.10 (b)-(c) demonstrate that the performance of our approach increases as \(J\) (inner-loop gradient steps) and \(P\) (inverse-Hessian approximation terms) increase, owing to the higher-order approximation. Based on this, we choose \(J=3\) and \(P=5\) as the best trade-off between speed and accuracy in the running time analysis in Fig 3.8 (c).
**L1 vs CRW tricks.** Next, we discuss the trade-offs between L1 and CRW. We first analyze the sensitivity of our proposed CRW method to the number of clusters used. Table 3.9 reports the test accuracies of our approach with varying numbers of clusters. The results indicate low sensitivity to the number of clusters. We also find that CRW can generally further improve performance; part of this success can be attributed to the good pretrained (ImageNet) features. In addition, we compare our approach with DS3L (safe SSL, the SOTA robust SSL method) on the CIFAR100 and SVHN datasets and observe a similar pattern. The results are shown in Tables 3.7 and 3.8.
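As an illustration of how CRW shrinks the hyper-parameter space, here is a minimal sketch, assuming `features` holds pretrained (e.g., ImageNet) embeddings of the unlabeled pool: samples are clustered once with K-means and a single learned weight is shared within each cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 512))        # placeholder pretrained embeddings

K = 20                                         # Table 3.9: K = 20 gave the best accuracy
cluster_ids = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(features)

# One learned weight per cluster instead of one per sample (5000 -> 20 hyper-parameters);
# cluster_weights would be optimized by the bi-level procedure described above.
cluster_weights = np.ones(K)
sample_weights = cluster_weights[cluster_ids]  # broadcast back to all unlabeled samples
```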
### Conclusion
In this work, we first pose the research question: _How does out-of-distribution data hurt semi-supervised learning performance?_ To answer it, we study the impact of OOD data
\begin{table}
\begin{tabular}{c|c c c c} \# Clusters & K=5 & K=10 & K=20 & K =30 \\ \hline Ours-VAT & 94.7\(\pm\) 0.7 & 95.3\(\pm\) 0.4 & 96.3\(\pm\) 0.5 & 95.6\(\pm\) 0.5 \\ \hline Ours-PL & 95.2\(\pm\) 0.5 & 95.3\(\pm\) 0.5 & 96.2\(\pm\) 0.4 & 95.9\(\pm\) 0.5 \\ \end{tabular}
\end{table}
Table 3.9: Test accuracies for different numbers of clusters \(K\) on the MNIST dataset with 50% Mean MNIST as OODs.
on SSL algorithms and demonstrate empirically on synthetic data that the performance of SSL algorithms depends on how close the OOD instances are to the decision boundary (and the ID data instances). To address these causes, we propose a novel unified uncertainty-aware robust SSL framework that treats the weights directly as hyper-parameters, in conjunction with weighted batch normalization, which is designed to improve the robustness of BN against OODs. To address the limitation of low-order approximations in bi-level optimization (DS3L), we design an implicit-differentiation-based algorithm that considers high-order approximations of the objective and is scalable to a larger number of inner optimization steps, enabling it to learn a massive number of weight parameters. Next, we make our reweighting algorithms significantly more efficient by considering only last-layer updates and infrequent weight updates, achieving nearly the same run time as simple SSL (naive reweighting algorithms are generally \(3\times\) more expensive). In addition, we conduct a theoretical analysis of the impact of faraway OODs in the BN step and discuss the connection between our approach (high-order approximation based on implicit differentiation) and low-order approximation approaches. We show that our weighted robust SSL approach significantly outperforms existing robust approaches (L2RW, MWN, Safe-SSL, and UASD) on several real-world datasets.
In Chapter 2 and Chapter 3, we studied uncertainty quantification on graph and image data in a semi-supervised learning setting. However, the uncertainty framework used in Chapters 2 and 3 focuses only on multi-class classification and static problems; it therefore cannot be extended to multi-label classification or time series problems. To this end, in the following chapter, we study a time series multi-label classification setting for early event detection. Specifically, we first propose a novel framework, the Multi-Label Temporal Evidential Neural Network, to estimate the evidential uncertainty of multi-label time series classification. Then, we propose two novel uncertainty estimation heads to quantify the fused uncertainty of a sub-sequence for early event detection.
## Chapter 4 Multi-label temporal evidential neural networks for early event detection
### 4.1 Introduction
In recent decades, early detection of temporal events has attracted considerable attention, with applications in a variety of industries, including security (Sai and Reddy, 2017), quality monitoring (He et al., 2020), medical diagnosis (Zhao et al., 2019), and transportation (Gupta et al., 2020). Along the timeline, an event can be viewed as having three components: pre-event, ongoing event, and post-event. Early event detection in machine learning identifies an event during its initial ongoing phase, after it has begun but before it concludes (Hoai and De la Torre, 2014; Phan et al., 2018). As illustrated in Figure 4.1, given a video clip with multiple frames, the goal is to accurately and rapidly detect the human action(s) in the box (_i.e._, smoke and watch a person) from incomplete video segments so that timely responses can be provided. This demands the detection of events prior to their completion.
To achieve earliness of event detection, existing approaches can be broadly divided into several major categories. Prefix-based techniques (Gupta et al., 2020) aim to learn a minimum prefix length of the time series from the training instances and utilize it to classify a testing time series. Shapelet-based approaches (Yan et al., 2020; Zhao et al., 2019) focus on obtaining a set of key shapelets from the training dataset and utilizing them as class-discriminatory features. Model-based methods for early event detection (Mori et al., 2019; Lv et al., 2019) obtain conditional probabilities by either fitting a discriminative
classifier or using generative classifiers on the training data. Although these approaches recognize the importance of early detection, they primarily focus on events with a single label and cannot be applied to situations with multiple labels.
Another non-negligible issue for early event detection is prediction with overconfidence (Zhao et al., 2020; Sensoy et al., 2018). In general, the occurrence of an event is determined by its predicted probability: an event with a high probability is considered an occurrence. This, however, may not be reliable. Figure 4.2 shows an example in which the prediction of the occurrence of an event (_i.e._, an action) in a video clip with a binary class (occurs or not) based on its predicted probability is overconfident at the pre-event stage. In this case, the ground truth (red line) shows that the ongoing stage starts at the 20th frame. Nevertheless, the event is falsely detected prior to its actual occurrence (Figure 4.2, left), because a greater probability (_i.e._, 0.9, indicated on the green line) is assigned based on positive evidence. Here, evidence refers to data samples (_i.e._, actions) that are closest to the predicted one in the feature space and are used to support the decision-making. Positive (negative) evidence comprises the observed samples that have the same (opposite) class labels. The event prediction with overconfidence at its early stage is due to high vacuity uncertainty (Josang,
Figure 4.1: How many frames do we need to detect smoke and watch actions reliably? Can we even detect these actions before they finish? Existing event detectors are trained to recognize complete events only; they require seeing the entire event for a reliable decision, preventing early detection. We propose a learning formulation to recognize partial events, enabling early detection.
2016), a term denoting a lack of evidence. This makes probability-based event detection unreliable. To overcome this flaw, methods that estimate uncertainty from evidence are desirable for early event detection.
In this work, to address the aforementioned issues, we first introduce a novel problem, namely _early event detection with multiple labels_: in temporal data, multiple events occur sequentially over time, and the goal is to accurately detect the occurrence of all events within the least amount of time. Inspired by (Sensoy et al., 2018) and subjective logic (SL) (Yager and Liu, 2008), we propose a framework composed of two phases. In phase one, a time series is viewed as a sequence of segments with equal temporal length, where each segment arrives one after another. Instead of predicting occurrence probabilities for all events, their positive and negative evidence is estimated through the proposed Multi-Label Temporal Evidential Neural Network (MTENN). The positive and negative evidence are seen as parameters of a Beta distribution, which is a conjugate prior to the Binomial likelihood. In the second phase, a sliding window spanning the most recently collected segments is designed to validate whether an event is successfully detected, through two novel uncertainty estimation heads: (1) Weighted Binomial Comultiplication (WBC), where the belief in the occurrence of an event is successively updated through the binomial comultiplication operation (Josang, 2016) from SL, and (2) Uncertainty Mean Scan Statistics (UMSS), which detects the distribution change of the vacuity uncertainty through hypothesis testing from statistics. The **key contributions** of this work are summarized as follows:
* We introduce a novel framework consisting of two phases for early event detection with multiple labels. At each timestamp, the framework estimates positive and negative evidence through the proposed Multi-Label Temporal Evidential Neural Network (MTENN). Inspired by subjective logic and belief theory, the occurrence uncertainty of an event is sequentially estimated over a subset of temporal segments.
* We introduce two novel uncertainty fusion operators (weighted binomial comultiplication (WBC) and uncertainty mean scan statistics (UMSS)) based on MTENN to quantify the fused uncertainty of a sub-sequence for early event detection. We demonstrate the effectiveness of WBC and UMSS on detection accuracy and detection delay, respectively.
* We validate the performance of our approach against state-of-the-art techniques on real-world audio and video datasets. Theoretical analysis and empirical studies demonstrate the effectiveness and efficiency of the proposed framework.
### 4.2 Related Work
**Early Event Detection** has been studied extensively in the time series literature. Its primary task is to classify an incomplete time series event as soon as
Figure 4.2: Illustration of overconfidence prediction. (Left) The occurrence of the event is falsely detected at the pre-event stage prior to its starting. This indicates that predicted probabilities are not reliable due to insufficient evidence. (Right) Instead of probabilities, subjective opinions (_e.g.,_ belief, disbelief, uncertainty) are used in the proposed method for early event detection.
possible with some desired level of accuracy. (Gupta et al., 2020c) attempts to classify various complex human activities, such as sitting on a sofa, sitting on the floor, standing while talking, walking upstairs, and eating, using only partial time series. A maximum-margin framework is proposed in (Hoai and De la Torre, 2014) for training temporal event detectors to recognize partial events, enabling early detection. The generative adversarial network introduced in (Wang et al., 2019) improves the early recognition accuracy of partially observed videos by narrowing the feature difference between partially observed videos and complete ones. Dual-DNN (Phan et al., 2018) is proposed for sound event early detection via a monotonic function design. (McLoughlin et al., 2018) identifies seed regions from spectrogram features to detect events at an early stage. Other algorithms have considered epistemic uncertainty for reliable event prediction (Soleimani et al., 2017). However, little attention has been paid to early event detection in multi-label settings, where multiple events may occur at the same time. Although several existing works are designed for multi-label event detection (Pan et al., 2021; Tang et al., 2020), most of them cannot be applied to early event detection problems.
**Uncertainty Estimation.** In machine learning and data mining, researchers have mainly focused on aleatoric and epistemic uncertainty, using Bayesian Neural Networks for computer vision applications. Bayesian Neural Network frameworks have been presented to simultaneously estimate both aleatoric and epistemic uncertainty in regression (Gal and Ghahramani, 2016) and classification tasks (Kendall and Gal, 2017). However, aleatoric and epistemic uncertainty cannot capture evidential uncertainty, which is essential for early event detection. Evidential uncertainty originates from the belief (or evidence) theory domain, such as Dempster-Shafer Theory (DST) (Sentz et al., 2002) or Subjective Logic (SL) (Josang, 2016). SL considers predictive uncertainty in subjective opinions in terms of _vacuity_ (Josang, 2016). Evidential neural networks (ENNs) were proposed in (Sensoy et al., 2018) to estimate evidential uncertainty for multi-class classification problems in the deep learning domain. However, ENNs are designed for single-label classification due to the subjective logic assumption.
To address the limitations of ENNs and existing early event detection methods, we propose a novel framework, namely the Multi-Label Temporal Evidential Neural Network (MTENN), for early event detection in temporal data. MTENN is able to quantify predictive uncertainty due to lack of evidence for multi-label classification at each time stamp based on belief/evidence theory.
### Preliminaries
Subjective logic is introduced in Sec 2.3.2. In this section, we start with evidential neural networks (ENNs) (Sensoy et al., 2018), a hybrid model of subjective logic and neural networks. We then recap multi-label classification.
#### 4.3.1 Notations
A time series \(\{(\mathbf{x}^{t},\mathbf{y}^{t})\}_{t=1}^{T}\in(\mathcal{X}\times\mathcal{Y})\) consists of \(T\) segments, where each \((\mathbf{x}^{t},\mathbf{y}^{t})\) is collected one after another over time. \(\mathbf{x}^{t}\) represents the feature vector, and \(\mathbf{y}^{t}=[y_{1}^{t},\dots,y_{K}^{t}]^{T}\) denotes the multi-label vector, with \(y_{k}^{t}\in\{0,1\},\forall k\in\{1,\cdots,K\}\), indicating whether an event occurs, where \(K\) is the number of classes. Vectors are denoted by lowercase boldface letters, _e.g._, the class probability \(\mathbf{p}\in[0,1]^{K}\), whose \(i\)-th entry is \(p_{i}\). Scalars are denoted by lowercase italic letters, _e.g._, \(u\in[0,1]\). Matrices are denoted by capital italic letters. \(\omega\) denotes a subjective opinion. We use subscripts to denote the class index and superscripts to denote the time stamp index. Important notations are listed in Table 4.1.
#### 4.3.2 Evidential neural networks
Evidential neural networks (ENNs) (Sensoy et al., 2018) are a hybrid framework combining subjective belief models (Josang, 2016) with neural networks. ENNs are designed to estimate the evidential uncertainty of classification problems. They are similar to classic neural networks for classification; the main difference is that the softmax layer is replaced with an activation function, _e.g.,_ ReLU, to ensure non-negative outputs (range \([0,+\infty)\)), which are taken as the evidence vector for the predicted Dirichlet (or Beta) distribution or, equivalently, the multinomial (binomial) opinion.
Given an input sample \(\mathbf{x}\), let \(f(\mathbf{x}|\boldsymbol{\theta})\) represent the evidence vector predicted by the network for the classification, where \(\boldsymbol{\theta}\) denotes the network parameters. The corresponding Dirichlet distribution has parameters \(\boldsymbol{\alpha}=f(\mathbf{x}|\boldsymbol{\theta})\), where \(\boldsymbol{\alpha}=[\alpha_{1},\ldots,\alpha_{K}]\) and \(K\) is the number of classes. Let \(\mathbf{p}=(p_{1},\ldots,p_{K})^{T}\) be the class probabilities, which can be sampled from the
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Notations** & **Descriptions** \\ \hline \(\mathcal{B}\) & Segment buffer \\ \(\mathbf{x}^{t}\) & feature vector for segment \(t\) \\ \(\mathbf{y}^{t}\) & multi-label for segment \(t\) \\ \(\mathbf{p}^{t}\) & Class probability for segment \(t\) \\ \(\mathbf{\theta}\) & model parameters \\ \(\omega\) & Subjective opinion \\ \(b\) & Belief mass \\ \(d\) & Disbelief mass \\ \(u\) & Vacuity uncertainty \\ \(a\) & Base rate \\ \(\alpha_{k}^{t}\) & Positive evidence at segment \(t\) for class \(k\) \\ \(\beta_{k}^{t}\) & Negative evidence at segment \(t\) for class \(k\) \\ \(K\) & Number of classes \\ \(\mathbf{c}\) & Opinion weight in WBC \\ \(f(\cdot)\) & MTENN model function \\ \(\psi(\cdot)\) & Digamma function \\ \(\mathbf{Beta}(p|\alpha,\beta)\) & PDF of Beta distribution \\ \(\mathbf{BCE}(\cdot)\) & binary cross-entropy loss \\ \hline \hline \end{tabular}
\end{table}
Table 4.1: Important notations and corresponding descriptions.
Dirichlet distribution. Therefore, the output of an ENN can be used to measure the evidential uncertainty of the predictive class variable \(\mathbf{y}\), such as vacuity. We typically consider the following loss function (Sensoy et al., 2018) to train an ENN model:
\[\mathcal{L}_{ENN}=\int\Big{[}\sum_{i=1}^{K}-y_{i}\log(p_{i})\Big{]}\mathbf{Dir} (p|\mathbf{\alpha})dp \tag{4.1}\]
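For reference, Eq. (4.1) has the closed form \(\sum_{i}y_{i}\big(\psi(\sum_{k}\alpha_{k})-\psi(\alpha_{i})\big)\), which the following PyTorch sketch implements; the toy data and the convention \(\boldsymbol{\alpha}=\text{evidence}+1\) (common in the ENN literature) are assumptions.

```python
import torch

def enn_loss(y_onehot, alpha):
    """Expected cross-entropy under Dir(p | alpha):
    sum_i y_i * (digamma(sum_k alpha_k) - digamma(alpha_i)), averaged over samples."""
    S = alpha.sum(dim=-1, keepdim=True)
    return (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=-1).mean()

evidence = torch.relu(torch.randn(4, 3))      # non-negative evidence: 4 samples, 3 classes
alpha = evidence + 1.0                        # assumed convention: alpha = evidence + 1
y = torch.eye(3)[torch.tensor([0, 2, 1, 0])]  # one-hot labels
print(enn_loss(y, alpha))
```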
#### 4.3.3 Multi-label classification
In machine learning and data mining, multi-class classification is a learning problem in which each data sample is associated with a unique label from a set of disjoint classes. The single-label classification problem reduces to binary or multi-class classification depending on the number of classes. Unlike multi-class classification, multi-label classification allows each data sample to belong to more than one class. Deep learning models for multi-label classification typically use a sigmoid layer on top of deep neural networks for each class; for time series data, a temporal multi-label classifier additionally handles the temporal dependency between segments collected at each timestamp. Both traditional binary and multi-class problems can be posed as special cases of multi-label problems. Most single-label classification methods use cross-entropy as the loss function; in contrast, traditional multi-label temporal neural networks use binary cross-entropy as the objective function,
\[\mathcal{L}_{ML} = \sum_{t=1}^{T}\sum_{k=1}^{K}\mathbf{BCE}(y_{k}^{t},p_{k}^{t}) \tag{4.2}\] \[= \sum_{t=1}^{T}\sum_{k=1}^{K}-y_{k}^{t}\log(p_{k}^{t})-(1-y_{k}^{t })\log(1-p_{k}^{t})\]
where \(\mathbf{BCE}(\cdot)\) is the binary cross-entropy loss.
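In code, Eq. (4.2) is simply binary cross-entropy summed over all \(T\times K\) segment-class pairs; a minimal PyTorch sketch with toy shapes:

```python
import torch
import torch.nn as nn

T, K = 8, 5                                   # timestamps and classes (toy sizes)
logits = torch.randn(T, K)                    # temporal multi-label classifier outputs
labels = torch.randint(0, 2, (T, K)).float()  # one binary label per (segment, class)

# Eq. (4.2): binary cross-entropy summed over all segments and classes
loss = nn.BCEWithLogitsLoss(reduction="sum")(logits, labels)
```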
### Problem Formulation
Given time series data with multiple labels (_e.g._, a person in a video clip performs multiple actions), where each class label is viewed as an event, let \(\mathcal{X}\times\mathcal{Y}\) be the data space, where \(\mathcal{X}\) is an input space and \(\mathcal{Y}=\{0,1\}^{K}\) is an output space. A time series \(\{(\mathbf{x}^{t},\mathbf{y}^{t})\}_{t=1}^{T}\in(\mathcal{X}\times\mathcal{Y})\) consists of \(T\) segments, where each \((\mathbf{x}^{t},\mathbf{y}^{t})\) is collected one after another over time. \(\mathbf{x}^{t}\) represents the feature vector, and \(\mathbf{y}^{t}=[y_{1}^{t},\ldots,y_{K}^{t}]^{T}\) denotes the multi-label vector, with \(y_{k}^{t}\in\{0,1\},\forall k\in\{1,\cdots,K\}\), indicating whether an event occurs, where \(K\) is the number of classes. A segment buffer \(\mathcal{B}\) is initialized as empty and maintained by adding one segment at a time; that is, at timestamp \(t\), the buffer includes all previous segments, \(\mathcal{B}=\{(\mathbf{x}^{i},\mathbf{y}^{i})\}_{i=1}^{t}\), with \(|\mathcal{B}|=t\). At each time, a predictive model \(f:\mathcal{X}\rightarrow\mathcal{Y}\) parameterized by \(\boldsymbol{\theta}\) takes the segments in \(\mathcal{B}\) as input and outputs an event prediction vector \(\hat{\mathbf{y}}^{t}=[\hat{y}_{1}^{t},\ldots,\hat{y}_{K}^{t}]^{T}\), where \(\hat{y}_{k}^{t}\in\{0,1\}\) represents the predicted result for the \(k\)-th event at time \(t\) (1 represents occurrence, 0 otherwise). Events predicted as occurrences are therefore considered detected at time \(t\).
In the following sections, as illustrated in Figure 4.3, we propose a novel framework for early event detection: Multi-Label Temporal Evidential Neural Networks (MTENN), followed by sequential uncertainty estimation. This framework is composed of two phases. In the first phase, the data is viewed as a sequence of segments with equal temporal length, where each segment arrives one after another over time. At time \(t\), instead of the prediction result \(\hat{\mathbf{y}}^{t}\), a pair of vectors consisting of positive evidence \(\boldsymbol{\alpha}^{t}\) and negative evidence \(\boldsymbol{\beta}^{t}\) is estimated; these can be seen as parameters of a Beta distribution, which is a conjugate prior to the Binomial likelihood. In the second phase, a sliding window including the \(m\) most recently collected segments is used to validate whether an event has been successfully detected through an early detection function. The function maps the sequence \(\{\boldsymbol{\alpha}^{i},\boldsymbol{\beta}^{i}\}_{i=t-m}^{t}\) in the window to a subjective opinion for all events by recursively combining the per-segment opinions. The integrated opinion is used to determine the occurrence of each event.
### Multi-Label Temporal Evidential Neural Networks
In this section, we introduce the proposed Multi-Label Temporal Evidential Neural Networks (MTENN) Framework for reliable early event detection. The overall description of the framework is shown in Figure 4.3.
#### MTENN Framework
For multi-label early event detection, most existing methods consider a binary classification for each class, e.g., via sigmoid outputs (Turpault et al., 2019; Hershey et al., 2021). As discussed in Section 4.3, evidential uncertainty can be derived from binomial opinions or, equivalently, Beta distributions that model an event distribution for each class. We therefore propose the Multi-Label Temporal Evidential Neural Network (MTENN) \(f(\cdot)\) to form binomial opinions for the class-level Beta distributions of a given sequence of time series segments \([\mathbf{x}^{1},\ldots,\mathbf{x}^{t}]\). The conditional probability \(P(p_{k}^{t}|\mathbf{x}^{1},\ldots,\mathbf{x}^{t};\mathbf{\theta})\) of class \(k\) at timestamp \(t\) can then be ob
Figure 4.3: **Framework Overview.** Given the streaming data, (b) MTENN quantifies predictive uncertainty due to a lack of evidence for multi-label classification at each time stamp based on belief/evidence theory. Specifically, (a) at each time step with data segment \(x^{t}\), MTENN predicts a Beta distribution for each class, which can be equivalently transformed into a subjective opinion \(\omega_{t}\); (c) based on a sliding window, two novel fusion operators (weighted binomial comultiplication and uncertainty mean scan statistics) are introduced to quantify the fused uncertainty of a sub-sequence for an early event.
tained by:
\[[f^{1},\ldots,f^{t}] \leftarrow f(\mathbf{x}^{1},\ldots,\mathbf{x}^{t};\boldsymbol{\theta}),\] \[(\alpha_{1}^{t},\beta_{1}^{t}),\ldots,(\alpha_{K}^{t},\beta_{K}^{t}) \leftarrow f^{t}(\mathbf{x}^{1},\ldots,\mathbf{x}^{t};\boldsymbol{\theta}),\] \[p_{k}^{t} \sim \mathbf{Beta}(p_{k}^{t}|\alpha_{k}^{t},\beta_{k}^{t}),\] \[y_{k}^{t} \sim \mathbf{Bernoulli}(p_{k}^{t}), \tag{4.3}\]
where \(k\in\{1,\cdots,K\}\), \(t\in\{1,\cdots,T\}\), \(f^{t}\) is the output of MTENN at timestamp \(t\), and \(\boldsymbol{\theta}\) refers to the model parameters. \(\mathbf{Beta}(p_{k}^{t}|\alpha_{k}^{t},\beta_{k}^{t})\) is the Beta probability density function. Note that MTENN is similar to a classical multi-label temporal classification model (e.g., CRNN (Turpault et al., 2019)), except that we use an activation layer (e.g., ReLU) instead of the sigmoid layer (which only outputs class probabilities). This ensures that MTENN outputs non-negative values taken as the evidence for the predicted Beta distribution. MTENN can therefore quantify predictive uncertainty (vacuity) due to a lack of evidence for multi-label classification at each time stamp based on belief/evidence theory, and vacuity can be calculated from the estimated Beta distribution.
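A minimal PyTorch sketch of such a network is given below. It stands in for the CRNN backbone with a plain GRU and assumes the common conventions \((\alpha,\beta)=\text{evidence}+1\) and vacuity \(u=W/(\alpha+\beta)\) with prior weight \(W=2\); the architecture details are illustrative, not the exact model used in the experiments.

```python
import torch
import torch.nn as nn

class MTENNHead(nn.Module):
    """Recurrent evidential head: the final sigmoid layer of a temporal
    multi-label classifier is replaced by a ReLU evidence layer (Eq. 4.3)."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.evidence = nn.Linear(hidden, 2 * num_classes)  # (positive, negative) per class

    def forward(self, x):                       # x: (B, T, in_dim) segment features
        h, _ = self.rnn(x)
        e = torch.relu(self.evidence(h))        # non-negative evidence
        alpha, beta = e.chunk(2, dim=-1)        # each of shape (B, T, K)
        alpha, beta = alpha + 1.0, beta + 1.0   # Beta parameters (assumed convention)
        vacuity = 2.0 / (alpha + beta)          # u = W / (alpha + beta), prior weight W = 2
        return alpha, beta, vacuity

model = MTENNHead(in_dim=128, hidden=64, num_classes=10)
alpha, beta, u = model(torch.randn(2, 6, 128))  # 2 clips, 6 segments, 128 mel bands
```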
#### 4.5.2 Loss
In this work, we design and train MTENN to form binomial opinions for the classification of a given streaming segment as a Beta distribution. For the binary cross-entropy loss, we obtain the MTENN loss by computing its Bayes risk with respect to the class predictor,
\[\mathcal{L}_{MTENN} = \sum_{t=1}^{T}\sum_{k=1}^{K}\int\Big{[}\mathbf{BCE}(y_{k}^{t},p_{ k}^{t})\Big{]}\mathbf{Beta}(p_{k}^{t};\alpha_{k}^{t},\beta_{k}^{t})dp_{k}^{t} \tag{4.4}\] \[= \sum_{t=1}^{T}\sum_{k=1}^{K}\Big{[}y_{k}^{t}\Big{(}\psi(\alpha_{k }^{t}+\beta_{k}^{t})-\psi(\alpha_{k}^{t})\Big{)}\] \[+(1-y_{k}^{t})\Big{(}\psi(\alpha_{k}^{t}+\beta_{k}^{t})-\psi( \beta_{k}^{t})\Big{)}\Big{]},\]
where \(\mathbf{BCE}(y_{k}^{t},p_{k}^{t})=-y_{k}^{t}\log(p_{k}^{t})-(1-y_{k}^{t})\log(1-p_{k}^{t})\) is the binary cross-entropy loss and \(\psi(\cdot)\) is the _digamma_ function. The second equality follows from the log expectation of the Beta distribution. The details of the first phase of our proposed framework are shown in Algorithm 4.
```
Input: A time series \(\{(\mathbf{x}^{t},\mathbf{y}^{t})\}_{t=1}^{T}\); learning rate \(\alpha\)
Output: Model parameters \(\boldsymbol{\theta}\)
1  Set \(t=0\); initialize model parameters \(\boldsymbol{\theta}\);
2  repeat
3      Estimate the Beta distributions via Eq. (4.3);
4      Calculate the gradient \(\nabla_{\boldsymbol{\theta}}\mathcal{L}_{MTENN}\) via Eq. (4.4);
5      Update parameters: \(\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_{t}-\alpha\nabla_{\boldsymbol{\theta}}\mathcal{L}_{MTENN}(\boldsymbol{\theta}_{t})\);
6      \(t=t+1\);
7  until convergence;
8  return \(\boldsymbol{\theta}_{t+1}\)
```
**Algorithm 4**: MTENN-Phase I
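A sketch of one optimization step of Algorithm 4, implementing the closed form of Eq. (4.4) with `torch.digamma` and reusing the `MTENNHead` sketch from above (toy shapes assumed):

```python
import torch

def mtenn_loss(y, alpha, beta):
    """Closed form of Eq. (4.4): expected BCE under Beta(p | alpha, beta).
    All tensors have shape (B, T, K); sum over t and k, average over the batch."""
    psi = torch.digamma
    pos = y * (psi(alpha + beta) - psi(alpha))
    neg = (1.0 - y) * (psi(alpha + beta) - psi(beta))
    return (pos + neg).sum(dim=(1, 2)).mean()

model = MTENNHead(in_dim=128, hidden=64, num_classes=10)   # from the sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 6, 128)                                 # (B, T, features)
y = torch.randint(0, 2, (2, 6, 10)).float()                # multi-hot labels
alpha, beta, _ = model(x)
loss = mtenn_loss(y, alpha, beta)
opt.zero_grad(); loss.backward(); opt.step()
```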
#### Theoretical Analysis
In this section, we present two main theoretical results. First, traditional evidential neural networks (ENNs) can be posed as a special case of multi-label temporal evidential neural networks. Second, traditional multi-label temporal neural networks (MTNNs) can be posed as a special case of multi-label temporal evidential neural networks (MTENN) when the training data provide large evidence for each class. Theorem 3 shows that the loss function of MTENN is equivalent to that of MTNN when sufficient evidence is observed. Furthermore, given a feature vector, MTNN predicts probabilities for each class, while MTENN outputs positive and negative evidence estimates (\(\alpha\) and \(\beta\)) that can be seen as parameters of a Beta distribution. Statistically, the predicted class probabilities are sampled from a Beta probability density function parameterized by the estimated \(\alpha\) and \(\beta\). When the uncertainty mass approaches zero, the variance of the sampled class probabilities is small and close to its expectation \(\frac{\alpha}{\alpha+\beta}\). As a consequence, one may infer the probability of each class using the predicted evidence from MTENN, which is equivalent to the output of MTNN. The results
indicate that our proposed MTENN is a generalization of ENN and MTNN and has thus inherited the merits of both models for the task of early event detection.
**Theorem 3**.: _Let \(\mathcal{L}_{ML}\) be the loss function of traditional multi-label temporal neural networks, \(\mathcal{L}_{MTENN}\) the loss function of multi-label temporal evidential neural networks, and \(u\) the vacuity uncertainty estimated from MTENN. We have \(\lim_{\alpha\rightarrow\infty,\beta\rightarrow\infty}|\mathcal{L}_{ML}-\mathcal{L}_{MTENN}|=0\)._
Proof.: When \(\alpha\rightarrow\infty\), we have
\[\psi(\alpha_{k}^{t}+\beta_{k}^{t})-\psi(\alpha_{k}^{t})\] \[\xrightarrow{\alpha\rightarrow\infty}\ln(\alpha_{k}^{t}+\beta_ {k}^{t})-\frac{1}{2(\alpha_{k}^{t}+\beta_{k}^{t})}-\ln(\alpha_{k}^{t})+\frac{ 1}{2\alpha_{k}^{t}}\] \[=-\ln(\frac{\alpha_{k}^{t}}{\alpha_{k}^{t}+\beta_{k}^{t}})+\frac {1}{2\alpha_{k}^{t}}-\frac{1}{2(\alpha_{k}^{t}+\beta_{k}^{t})}\] \[\xrightarrow{\alpha\rightarrow\infty}-\ln(\frac{\alpha_{k}^{t}} {\alpha_{k}^{t}+\beta_{k}^{t}})=-\ln p_{k}^{t}\]
and when \(\beta\rightarrow\infty\), we have
\[\psi(\alpha_{k}^{t}+\beta_{k}^{t})-\psi(\beta_{k}^{t})\] \[\xrightarrow{\beta\rightarrow\infty}\ln(\alpha_{k}^{t}+\beta_ {k}^{t})-\frac{1}{2(\alpha_{k}^{t}+\beta_{k}^{t})}-\ln(\beta_{k}^{t})+\frac{ 1}{2\beta_{k}^{t}}\] \[=-\ln(\frac{\beta_{k}^{t}}{\alpha_{k}^{t}+\beta_{k}^{t}})+\frac{ 1}{2\beta_{k}^{t}}-\frac{1}{2(\alpha_{k}^{t}+\beta_{k}^{t})}\] \[\xrightarrow{\beta\rightarrow\infty}-\ln(\frac{\beta_{k}^{t}} {\alpha_{k}^{t}+\beta_{k}^{t}})=-\ln(1-p_{k}^{t})\]
where the expected probability is \(p_{k}^{t}=\frac{\alpha_{k}^{t}}{\alpha_{k}^{t}+\beta_{k}^{t}}\). Hence \(|\mathcal{L}_{ML}-\mathcal{L}_{MTENN}|\to 0\).
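The asymptotic expansion \(\psi(x)\approx\ln x-\frac{1}{2x}\) used above can be checked numerically; the following short script verifies that \(\psi(\alpha+\beta)-\psi(\alpha)\) approaches \(-\ln\frac{\alpha}{\alpha+\beta}\) as \(\alpha\) and \(\beta\) grow:

```python
import numpy as np
from scipy.special import digamma

# psi(a + b) - psi(a) -> -ln(a / (a + b)) as a and b grow, as used in the proof.
for a, b in [(2.0, 3.0), (20.0, 30.0), (200.0, 300.0)]:
    lhs = digamma(a + b) - digamma(a)
    rhs = -np.log(a / (a + b))
    print(f"a={a:5.0f}, b={b:5.0f}: |lhs - rhs| = {abs(lhs - rhs):.2e}")
# the gap shrinks roughly like 1/a, matching the 1/(2x) correction terms
```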
**Theorem 4**.: _Let \(\mathcal{L}_{ENN}\) be the loss function of traditional evidential neural networks and \(\mathcal{L}_{MTENN}\) the loss function of multi-label temporal evidential neural networks, and let \(K\) be the total number of classes. We have \(\mathcal{L}_{ENN}=\mathcal{L}_{MTENN}\) when \(K=1,T=1\)._
Proof.: When \(K=1,T=1\), we have
\[\mathcal{L}_{MTENN}=\sum_{t=1}^{1}\sum_{k=1}^{1}\int\Big{[}\mathbf{ BCE}(y_{k}^{t},p_{k}^{t})\Big{]}\mathbf{Beta}(p_{k}^{t};\alpha_{k}^{t},\beta_{k}^{ t})dp_{k}^{t}\] \[=\int\Big{[}\mathbf{BCE}(y,p)\Big{]}\mathbf{Beta}(p;\alpha,\beta )dp\] \[=y\Big{(}\psi(\alpha+\beta)-\psi(\alpha)\Big{)}+(1-y)\Big{(}\psi( \alpha+\beta)-\psi(\beta)\Big{)}\]
For the ENN model, \(K=1\) corresponds to binary classification, and we denote the positive evidence \(\alpha=\alpha_{1}\), the negative evidence \(\beta=\alpha_{2}\), the positive label \(y=y_{1}\), and the negative label \(1-y=y_{2}\):
\[\mathcal{L}_{ENN}=\int\Big{[}\mathbf{CrossEntropy}(y,p)\Big{]} \mathbf{Dir}(p;\alpha)dp\] \[=\sum_{j=1}^{2}y_{j}\Big{(}\psi(\sum_{i=1}^{2}\alpha_{i})-\psi( \alpha_{j})\Big{)}\] \[=y\Big{(}\psi(\alpha+\beta)-\psi(\alpha)\Big{)}+(1-y)\Big{(}\psi (\alpha+\beta)-\psi(\beta)\Big{)}\]
This proves that \(\mathcal{L}_{ENN}=\mathcal{L}_{MTENN}\).
### Multi-label Sequential Uncertainty Quantification
In the second phase, for early event detection at time \(t\), a subset comprising the \(m\) most recently collected segments is used to validate whether an event has been successfully detected, as shown in Fig 4.3 (c). We call this subset a sliding window, as it dynamically restructures a small sequence of segments from \(t-m\) to \(t\) and performs validation through an early detection function at each time step. Based on the sliding window, we introduce two novel uncertainty fusion operators built on MTENN to quantify the fused uncertainty of a sub-sequence for early event detection.
#### 4.6.1 Weighted Binomial Comultiplication
**Binomial Comultiplication Operator.** Given the sequential Beta distribution outputs, a sequential fused opinion can be estimated via a subjective-logic operator (e.g., a union operator). As shown in Fig 4.3 (b), we use the operator \(\oplus\) to fuse opinions. Here we adopt the comultiplication operator (Josang, 2006) to fuse two opinions \(\omega_{i}\) and \(\omega_{j}\) via Eq. (4.5),
\[\begin{split}& b_{i\oplus j}=b_{i}+b_{j}-b_{i}b_{j}\\ & d_{i\oplus j}=d_{i}d_{j}+\frac{a_{i}\left(1-a_{j}\right)d_{i}u_ {j}+\left(1-a_{i}\right)a_{j}u_{i}d_{j}}{a_{i}+a_{j}-a_{i}a_{j}}\\ & u_{i\oplus j}=u_{i}u_{j}+\frac{a_{j}d_{i}u_{j}+a_{i}u_{i}d_{j}}{ a_{i}+a_{j}-a_{i}a_{j}}\\ & a_{i\oplus j}=a_{i}+a_{j}-a_{i}a_{j}\end{split} \tag{4.5}\]
Based on a sliding window of size \(m\), the sequential fused opinion can be calculated by
\[\omega_{t-m,\ldots,t}=\omega_{t-m}\oplus\omega_{t-m+1}\oplus\ldots\oplus \omega_{t} \tag{4.6}\]
The above operator ignores order information: \(\omega_{x}\oplus\omega_{y}\) has the same effect as \(\omega_{y}\oplus\omega_{x}\). To incorporate order information and emphasize the importance of the current time step \(t\), we propose a weighted comultiplication operator that assigns a weight \(c_{i}\) to each opinion when executing the operator; the weighted sequential opinion is then obtained as
\[\hat{\omega}^{t}=c_{t-m}\cdot\omega^{t-m}\oplus c_{t-m+1}\cdot\omega^{t-m+1} \oplus\ldots\oplus c_{t}\cdot\omega^{t} \tag{4.7}\]
We take the vacuity of \(\hat{\omega}^{t}\) as the sequential uncertainty of the sub-sequence.
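A minimal sketch of the (unweighted) fusion in Eqs. (4.5)-(4.6); the toy opinion values are placeholders, and the weighted variant of Eq. (4.7) would additionally discount each opinion by \(c_{i}\) before fusing:

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # vacuity (b + d + u = 1)
    a: float  # base rate

def comultiply(w1: Opinion, w2: Opinion) -> Opinion:
    """Binomial comultiplication of two opinions, Eq. (4.5)."""
    a = w1.a + w2.a - w1.a * w2.a
    b = w1.b + w2.b - w1.b * w2.b
    d = w1.d * w2.d + (w1.a * (1 - w2.a) * w1.d * w2.u
                       + (1 - w1.a) * w2.a * w1.u * w2.d) / a
    u = w1.u * w2.u + (w2.a * w1.d * w2.u + w1.a * w1.u * w2.d) / a
    return Opinion(b, d, u, a)

# Sequential fusion over a sliding window of per-segment opinions (Eq. 4.6)
window = [Opinion(0.2, 0.5, 0.3, 0.5),
          Opinion(0.4, 0.4, 0.2, 0.5),
          Opinion(0.6, 0.2, 0.2, 0.5)]
fused = reduce(comultiply, window)
print(fused)  # fused.u is the sequential (vacuity) uncertainty of the sub-sequence
```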
**Uncertainty-based Inference.** At the test stage, we use a simple strategy to make reliable predictions based on the sequential uncertainty estimated by WBC. For each class, we predict that an event has occurred only when its belief exceeds its disbelief and its vacuity is below a threshold:
\[\hat{y}_{k}^{t}=\begin{cases}1,&\text{if $b_{k}^{t}>d_{k}^{t}$ and $u_{k}^{t}<V$}\\ 0,&\text{otherwise}\end{cases} \tag{4.8}\]
where \(\hat{y}_{k}^{t}\in\{0,1\}\) is the model prediction for class \(k\) in segment \(t\), and \(V\) is the vacuity threshold.
#### 4.6.2 Uncertainty Mean Scan Statistics
The intuition is that the uncertainty distribution changes at the early ongoing stage of an event; one example is shown in Fig 4.4. We therefore propose a simple uncertainty fusion operator based on the mean scan statistics method for early event detection. Let \(u_{t}\) be the vacuity uncertainty at time \(t\) and \(T\) be the sliding window size, and let \(S_{t}=\{u_{t-T+1},u_{t-T+2},\ldots,u_{t}\}\) be the sliding window at time \(t\). We consider the following hypothesis test for the detection of an event at time \(t\).
* **Null hypothesis \(H_{0}\):** the uncertainties in \(S_{t}\) follow the distribution \(u\sim\mathcal{N}(0,1),\forall u\in S_{t}\);
* **Alternative hypothesis \(H_{1}\):** the uncertainties in \(S_{t}\) follow the distribution \(u\sim\mathcal{N}(\mu,1),\forall u\in S_{t}\), with \(\mu>0\).
Figure 4.4: The uncertainty distribution changes at the early ongoing stage of an event.
Based on the above hypothesis test, we can derive the log-likelihood ratio, called the positive elevated mean scan statistic (EMS), which is used as the test statistic for event detection. The larger the test statistic, the higher the chance that there is an event in this time window:
\[\hat{u}_{t}=\frac{1}{T}\sum_{u_{i}\in S_{t}}u_{i} \tag{4.9}\]
Given the scan statistic \(\hat{u}_{t}\), we need a threshold \(V\) such that we reject \(H_{0}\) and accept \(H_{1}\) if \(\hat{u}_{t}>V\), at a given confidence level, such as 0.05. To decide the threshold \(V\), we use Monte Carlo sampling as follows: we calculate the EMS statistics for all historical windows that contain no event (historical windows under \(H_{0}\)) and take the upper 5% quantile, i.e., the value exceeded by only 5% of the historical windows under \(H_{0}\). The details of the second phase of our framework for early event detection are shown in Algorithm 5.
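A minimal sketch of the UMSS test (Eq. (4.9) plus the Monte Carlo threshold); the synthetic vacuity streams below stand in for historical event-free windows and a test stream:

```python
import numpy as np

def ems_statistic(vacuity, T):
    """Mean vacuity over each length-T sliding window (Eq. 4.9)."""
    v = np.asarray(vacuity, dtype=float)
    return np.convolve(v, np.ones(T) / T, mode="valid")

rng = np.random.default_rng(0)
T = 8

# Monte Carlo threshold from (synthetic) historical event-free H0 windows:
# reject H0 at level 0.05 when the statistic exceeds the upper 5% quantile.
null_scores = ems_statistic(rng.normal(0.2, 0.05, size=10_000), T)
V = np.quantile(null_scores, 0.95)

# Online detection on a new vacuity stream whose level shifts at segment 50
stream = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.5, 0.05, 20)])
scores = ems_statistic(stream, T)
first = int(np.argmax(scores > V)) + T - 1     # end of the first window exceeding V
print(f"threshold V = {V:.3f}, event flagged at segment {first}")
```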
### Experiments
Our experiments aim to verify the effectiveness of the proposed method (MTENN) by evaluating it in both _early sound event detection_ and _early human action detection_ scenarios on real-world datasets. These scenarios are highly relevant both for research and for real-world applications. We implemented the MTENN framework in PyTorch. We repeat each experiment for three runs with different initializations and report the mean detection delay and detection F1 score (see the Evaluation Metrics section).
#### 4.7.1 Experiment Details
**Dataset.** _a) Early sound event detection task_: we conduct experiments on the DESED2021 dataset (Turpault et al., 2019) and the AudioSet-Strong-Labeled dataset (Hershey et al., 2021).
**Algorithm 5**: MTENN-Phase II
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline & **DESED2021** & **Explosion** & **Alarm** & **Liquid** & **Engine** & **AVA** \\ \hline \# Classes & 10 & 5 & 7 & 9 & 8 & 60 \\ \hline \# Training samples & 10,000 & 5,518 & 8,085 & 6,517 & 18,741 & 43,232 \\ \# Validation samples & 1,168 & 788 & 1,355 & 931 & 2,677 & 20,000 \\ \# Test samples & 1,016 & 1,577 & 2,311 & 1,862 & 5,354 & 23,498 \\ \hline \end{tabular}
\end{table}
Table 4.2: Description of datasets and their experimental setup for early event detection.

The DESED2021 dataset is composed of 10-second audio clips recorded in domestic environments or synthesized using Scaper to simulate a domestic environment. The original AudioSet-Strong-Labeled dataset consists of an expanding ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos. To simulate the application of early event detection in industry, we select subsets of the AudioSet-Strong-Labeled dataset to form early event detection datasets.
Specifically, we select four subsets: the explosion, alarm, liquid, and engine subclasses. _b) Early human action detection task_: we consider the AVA dataset (Gu et al., 2018). AVA is a video dataset for spatio-temporal localization of atomic visual actions, with 60 classes. Box annotations and their corresponding action labels are provided on key frames of 430 15-minute videos with a temporal stride of 1 second. We use version 2.2 of the AVA dataset by default. The details of each dataset are shown in Table 4.2.
**Features.** For the early sound event detection task, the input features are log-mel spectrograms extracted from the audio signal resampled to 16,000 Hz. The log-mel spectrogram uses 2048-sample STFT windows with a hop size of 256 and 128 Mel-scale filters. At the training stage, the input is the fully observed 10-second sound clip; each clip is thus transformed into a 2D time-frequency representation of size (626\(\times\)128). At the test stage, we collect an audio segment at each timestamp, which is transformed into a 2D time-frequency representation of size (4\(\times\)128). For the early human action detection task, each input video contains 91 image frames. To simulate streaming data at the inference stage, we cut each video into \(T\) segments. In our experiments, we consider \(T=\{4,6,8,10,12,15\}\), with corresponding segment lengths \(ST=\{0.75s,0.5s,0.375s,0.3s,0.25s,0.2s\}\).
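As an illustration, the audio features above can be reproduced with librosa using the stated parameters (a real clip would be loaded with `librosa.load(path, sr=16000)`; here random noise stands in for a 10-second clip):

```python
import numpy as np
import librosa

sr = 16000
y = np.random.default_rng(0).normal(size=10 * sr)  # noise standing in for a 10 s clip

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=256, n_mels=128)
logmel = librosa.power_to_db(mel).T                # (frames, mels)
print(logmel.shape)                                # -> (626, 128), matching the text
```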
**Comparing Methods.** We evaluate the effectiveness of our proposed approach with its two variants, MTENN-WBC and MTENN-UMSS. _a) Early sound event detection task_: we compare against two state-of-the-art early sound event detection methods, Dual DNN (Phan et al., 2018) and SEED (Zhao et al., 2022), and two sound event detection methods, CRNN (Turpault et al., 2019) and Conformer (Miyazaki et al., 2020). _b) Early human action detection task_: we compare against three human action detection methods, ACAR (Pan et al., 2021), AIA (Tang et al., 2020), and SlowFast (Feichtenhofer et al., 2019). For both early detection tasks, we consider three uncertainty-based baselines: _Entropy_, _Epistemic_ uncertainty (Gal and Ghahramani, 2016) (representing the uncertainty of model parameters), and _Aleatoric_ uncertainty (Depeweg et al., 2018) (representing the uncertainty of data noise). For the three uncertainty-based methods, we use the uncertainty to filter out high-uncertainty predictions. We use MC-dropout (Gal and Ghahramani, 2016) to estimate epistemic and aleatoric uncertainties in the experiments.
**Evaluation Metrics.** We use both the early detection F1 score and the detection delay to evaluate early detection performance. We first define a true positive prediction for event \(k\), which occurs only when the first prediction timestamp \(d_{p}\) lies in the ongoing event region.
\[\text{TP}_{k}=\begin{cases}1,&\text{if }y_{k}^{d_{p}}=1\text{ and }d_{p}-d_{t}\geq L\\ 0,&\text{otherwise}\end{cases} \tag{4.10}\]
where \(d_{t}\) is the onset timestamp of the predicted event. In contrast, a false positive prediction occurs when the first prediction timestamp \(d_{p}\) does not lie in the ongoing event region. We can then calculate precision, recall, and F1 score from the true positive and false positive predictions for each event. Detection delay is measured only for true positive predictions and is defined as follows,
\[\text{delay}=\begin{cases}d_{p}-d_{t},&\text{if }d_{p}\geq d_{t}\\ 0,&\text{if }d_{p}<d_{t}\end{cases} \tag{4.11}\]
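In code, the two metrics reduce to a simple check on the first positive timestamp; the sketch below uses the ongoing region \([d_{t}, d_{\text{offset}}]\) as the true positive condition (the offset bound standing in for the \(L\) term of Eq. (4.10)):

```python
def early_detection_outcome(pred_time, onset, offset):
    """A prediction is a true positive only if the first positive timestamp
    falls inside the ongoing event region; delay is measured only for TPs
    (Eqs. 4.10-4.11, with the offset bound as a stand-in for the L term)."""
    is_tp = onset <= pred_time <= offset
    delay = max(pred_time - onset, 0) if is_tp else None
    return is_tp, delay

print(early_detection_outcome(pred_time=23, onset=20, offset=60))  # (True, 3)
print(early_detection_outcome(pred_time=10, onset=20, offset=60))  # (False, None)
```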
**Settings.**_a) Early sound event detection task_, we use CRNN (Turpault and Serizel, 2020) as the backbone except for Conformer. We use the Adam optimizer for all methods and follow the same training setting as (Turpault and Serizel, 2020). _b) Early human action detection task_, we use ACAR as the backbone except for AIA and SlowFast. We use the SGD optimizer for all methods and follow the same training setting as (Pan et al., 2021). For the uncertainty threshold, we set 0.5 for epistemic uncertainty and 0.9 for other uncertainties (entropy, vacuity, aleatoric).
#### 4.7.2 Results and Analysis
**Early Sound Event Detection Performance.** Table 4.3 shows that our proposed methods (MTENN-WBC and MTENN-UMSS) outperform all baseline models in both detection delay and early detection F1 score for early sound event detection. The gains of MTENN-WBC are particularly impressive, confirming that the belief comultiplication operator is key to improving sequential uncertainty estimation, which allows MTENN-WBC to significantly improve early detection accuracy. In addition, MTENN-UMSS is a more aggressive method for early event detection that significantly reduces detection delay, demonstrating that the vacuity uncertainty distribution changes at the early ongoing stage of an event. Furthermore, the test inference time of our approach is around 5 ms, less than the streaming segment duration (60 ms), which indicates that our method satisfies the real-time requirement.
**Early human action detection performance.** Table 4.4 shows the results of early human action detection, which exhibit a pattern similar to early sound event detection. Note that MTENN-WBC achieves the best detection accuracy (e.g., a 50% increase over the ACAR baseline) at the cost of a large detection delay, while MTENN-UMSS outperforms all baseline models in both detection delay and early detection F1 score. In addition, as
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Datasets** & **DESED2021** & **Explosion** & **Alarm** & **Liquid** & **Engine** \\ & & **Detection Delay \(\downarrow\) / Detection F1 Score \(\uparrow\) & & \\ \hline Dual DNN & 0.386 / 0.682 & 0.325 / 0.295 & 0.257 / 0.221 & 0.467 / 0.162 & 0.324 / 0.323 \\ SEED & 0.252 / 0.691 & 0.339 / 0.288 & 0.293 / 0.407 & 0.334 / 0.172 & 0.428 / 0.342 \\ Conformer & 0.372 / 0.639 & 0.444 / 0.268 & 0.292 / 0.429 & 0.463 / 0.166 & 0.427 / 0.323 \\ CRNN & 0.284 / 0.677 & 0.415 / 0.278 & 0.273 / 0.408 & 0.451 / 0.144 & 0.404 / 0.301 \\ CRNN + entropy & 0.312 / 0.669 & 0.422 / 0.272 & 0.282 / 0.406 & 0.465 / 0.142 & 0.423 / 0.313 \\ CRNN + epistemic & 0.278 / 0.647 & 0.401 / 0.28 & 0.244 / 0.413 & 0.411 / 0.152 & 0.356 / 0.31 \\ CRNN + aleatoric & 0.281 / 0.643 & 0.404 / 0.288 & 0.252 / 0.419 & 0.421 / 0.157 & 0.377 / 0.312 \\ \hline MTENN-WBC & **0.206 / 0.727** & **0.119** / 0.314 & 0.217 / 0.470 & 0.059 / **0.200** & 0.294 / **0.391** \\ MTENN-UMSS & 0.267 / 0.575 & 0.120 / **0.345** & **0.191** / **0.473** & **0.026** / 0.188 & **0.237** / 0.349 \\ \hline \hline \end{tabular}
\end{table}
Table 4.3: Early sound event detection performance on Audio datasets.
segment length decreases (making events more challenging to detect), the baselines' performance drops significantly, whereas our proposed methods (MTENN-WBC and MTENN-UMSS) maintain robust performance.
**Sensitivity Analysis.** (1) Uncertainty threshold. We explore the sensitivity of the MTENN-WBC model to the vacuity threshold. Figure 4.5 (a) shows the detection delay and early detection F1 score for varying vacuity threshold values. As the vacuity threshold increases, the detection delay decreases continuously, but the early detection accuracy (F1 score) decreases as well: there is a trade-off between detection delay and detection accuracy. A higher threshold admits more high-uncertainty (overconfident) predictions, resulting in aggressive early predictions that may flag events earlier but cause more false positives.
(2) Effect of sliding window size. We analyze the sensitivity of our proposed sequential fused opinion to the sliding window size. Fig 4.6 (b) shows detection delay and F1 score for varying window sizes. As the sliding window size increases, the detection delay continuously decreases and the detection F1 increases, until the window is sufficiently large. These results demonstrate that sequential uncertainty estimation is critical to improving early event detection performance.
**Per-class performance.** In addition to the overall comparison, we plot the per-class performance on the Audio (Engine) dataset, comparing our methods with the SEED baseline.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Segment Length** & **0.75s** & **0.5s** & **0.375s** & **0.3s** & **0.25s** & **0.2s** \\ \multicolumn{7}{c}{**Detection Delay \(\downarrow\) / Detection F1 Score \(\uparrow\)**} \\ \hline SlowFast & 0.156 / 0.402 & 0.175 / 0.376 & 0.183 / 0.372 & 0.208 / 0.367 & 0.220 / 0.360 & 0.229 / 0.357 \\ AIA & 0.214 / 0.410 & 0.231 / 0.387 & 0.230 / 0.382 & 0.239 / 0.379 & 0.250 / 0.368 & 0.264 / 0.360 \\ ACAR & 0.227 / 0.446 & 0.245 / 0.430 & 0.245 / 0.424 & 0.240 / 0.406 & 0.235 / 0.399 & 0.253 / 0.389 \\ ACAR + entropy & 0.234 / 0.464 & 0.268 / 0.462 & 0.267 / 0.457 & 0.284 / 0.456 & 0.276 / 0.456 & 0.307 / 0.447 \\ ACAR + epistemic & 0.232 / 0.451 & 0.265 / 0.448 & 0.264 / 0.442 & 0.441 / 0.413 & 0.274 / 0.438 & 0.301 / 0.430 \\ ACAR + aleatoric & 0.213 / 0.434 & 0.244 / 0.428 & 0.243 / 0.423 & 0.259 / 0.421 & 0.254 / 0.416 & 0.275 / 0.407 \\ \hline MTENN-WBC & 0.308 / **0.736** & 0.343 / **0.734** & 0.341 / **0.706** & 0.361 / **0.708** & 0.354 / **0.706** & 0.377 / **0.698** \\ MTENN-UMSS & **0.089** / 0.533 & **0.090** / 0.535 & **0.090** / 0.527 & **0.093** / 0.532 & **0.093** / 0.525 & **0.081** / 0.524 \\ \hline \hline \end{tabular}
\end{table}
Table 4.4: Early human action detection performance on AVA datasets with different segment lengths (ST).
As shown in Figure 4.7, MTENN-WBC outperforms the others in detection accuracy on most classes, and MTENN-UMSS outperforms the others in detection delay on most classes. Note that both SEED and MTENN-WBC fail to detect difficult events ('Heavy engine' or 'Idling') due to class imbalance in the training set, but MTENN-UMSS can detect these complex events with a small detection delay.
**Ablation study.** We conducted additional experiments (see Table 4.5) to demonstrate the contributions of the key technical components, including the MTENN loss, WBC, and UMSS. Specifically, we consider three ablated models: (a) MTENN-BC, a variant of MTENN-WBC that uses binomial comultiplication instead of weighted binomial comultiplication; (b) MTENN (Phase I), which uses only Phase I to predict events, without any sequential uncertainty head; and (c) MTENN w/o MTENN loss, a variant of MTENN (Phase I) trained with the BCE loss.
Figure 4.5: Sensitivity analysis of the uncertainty threshold. There is a trade-off between detection delay and detection accuracy: the higher the uncertainty threshold, the more overconfident predictions are admitted.
Figure 4.6: Sensitivity analysis of the sliding window size. As the window size increases, the detection delay continuously decreases and the detection F1 increases, until the window is sufficiently large.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Datasets** & **AudioSet(Engine)** & **AudioSet(Liquid)** \\ & **Detection Delay \(\downarrow\) / Detection F1 Score \(\uparrow\)** \\ \hline MTENN w/o MTENN loss & 0.463 / 0.307 & 0.326 / 0.083 \\ MTENN (Phase I) & 0.448 / 0.313 & 0.329 / 0.142 \\ MTENN-BC & 0.312 /0.383 & 0.074 / 0.207 \\ \hline MTENN-WBC & 0.294 / **0.391** & 0.057 / **0.196** \\ MTENN-UMSS & **0.237** / 0.349 & **0.026** / 0.193 \\ \hline \hline \end{tabular}
\end{table}
Table 4.5: Ablation study. MTENN-BC: a variant of MTENN-WBC that uses binomial comultiplication instead of weighted binomial comultiplication; MTENN (Phase I): only consider phase I to predict event without any sequential uncertainty head; MTENN w/o MTENN loss: a variant of MTENN (Phase I) that consider BCE loss.
Figure 4.7: Per-Class Evaluation on Audio (Engine) dataset.
**Inference time.** Table 4.6 shows the inference time of all methods used in our experiments. Note that the audio streaming segment duration is 60 milliseconds (ms) and the video streaming segment duration is 300 ms when \(ST=0.3s\), while our approaches take only around 5 ms and 190 ms per audio and video streaming segment, respectively. This indicates that our proposed framework satisfies the real-time requirement for early event detection.
### Conclusion
In this work, we propose a novel framework, the Multi-Label Temporal Evidential Neural Network (MTENN), for early event detection in temporal data. MTENN is able to quantify predictive uncertainty due to the lack of evidence for multi-label classification at each time stamp based on belief/evidence theory. In addition, we introduce two novel uncertainty fusion operators (weighted binomial comultiplication (WBC) and uncertainty mean scan statistics (UMSS)) based on MTENN to quantify the fused uncertainty of a sub-sequence for early event detection. We validate the performance of our approach against state-of-the-art techniques on real-world audio and video datasets. Theoretical analysis and empirical studies
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline \multicolumn{2}{c|}{DESED2021} & \multicolumn{2}{c}{AVA (ST=0.3s)} \\ \hline Dual DNN & 5.1ms & & \\ SEED & 5.0ms & SlowFast & 175ms \\ Conformer & 6.6ms & AIA & 181ms \\ CRNN & 5.0ms & ACAR & 187ms \\ CRNN + entropy & 5.0ms & ACAR + entropy & 188ms \\ CRNN + epistemic & 27.0ms & ACAR + epistemic & 564ms \\ CRNN + aleatoric & 27.0ms & ACAR + aleatoric & 564ms \\ \hline MTENN-WBC & 5.5ms & MTENN-WBC & 192ms \\ MTENN-UMSS & 5.3ms & MTENN-UMSS & 190ms \\ \hline \hline \end{tabular}
\end{table}
Table 4.6: Compare inference time with different methods.
demonstrate the effectiveness and efficiency of the proposed framework in both detection delay and accuracy.
## Chapter 5 Conclusion and Future Work
### 5.1 Conclusion of Completed Work
In this dissertation, the proposed research aims to design a general multi-source uncertainty framework to quantify the inherent uncertainties of deep neural networks. We focus on three major directions: uncertainty-aware semi-supervised learning on graph data (Chapter 2), uncertainty-aware robust semi-supervised learning (Chapter 3), and uncertainty-aware early event detection with multi-labels (Chapter 4).
In Chapter 2, we studied the uncertainty decomposition problem for graph neural networks. We first provided a theoretical analysis of the relationships between different types of uncertainty. Then, we proposed a multi-source uncertainty framework of GNNs for semi-supervised node classification. Our proposed framework provides an effective way of predicting node classification and performing out-of-distribution detection while considering multiple types of uncertainty. We leveraged various types of uncertainty estimates from both the deep learning and evidence/belief theory domains. Through our extensive experiments, we found that dissonance-based detection yielded the best performance on misclassification detection, while vacuity-based detection performed best for OOD detection, compared to other competitive counterparts. In particular, it was noticeable that applying GKDE and the Teacher network further enhanced the accuracy of node classification and uncertainty estimates.
We further studied uncertainty in the robust semi-supervised learning setting in Chapter 3. In this setting, traditional semi-supervised learning (SSL) performance can degrade substantially, and is sometimes even worse than that of simple supervised learning approaches, due to the OODs involved in the unlabeled pool. To solve this problem, we first studied the impact of OOD data on SSL algorithms and demonstrated empirically on synthetic data that the SSL algorithms' performance depends on how close the OOD instances are to the decision boundary
(and the ID data instances), which can also be measured via the vacuity uncertainty introduced in Chapter 2. Based on this observation, we proposed a novel unified uncertainty-aware robust SSL framework that treats the weights directly as hyper-parameters in conjunction with weighted batch normalization, which is designed to improve the robustness of BN against OODs. In addition, we proposed two efficient bi-level algorithms for our robust SSL approach, one based on meta-approximation and one on implicit differentiation, which have different tradeoffs between computational efficiency and accuracy. We also conducted a theoretical analysis of the impact of faraway OODs in the BN step and discussed the connection between our approach (a high-order approximation based on implicit differentiation) and low-order approximation approaches.
Finally, in Chapter 4, we considered a time series setting for early event detection. In this setting, a temporal event with multiple labels occurs sequentially along the timeline. The goal of this work is to accurately detect all classes at the ongoing stage of an event within the least amount of time. The problem is formulated as an online multi-label time series classification problem. To this end, we proposed a novel framework, Multi-Label Temporal Evidential Neural Network, together with two novel uncertainty estimation heads (weighted binomial comultiplication (WBC) and uncertainty mean scan statistics (UMSS)) to quantify the fused uncertainty of a sub-sequence for early event detection. We empirically showed that the proposed approach outperforms state-of-the-art techniques on real-world datasets.
In light of these discoveries, the following are some interesting future avenues to investigate.
### 5.2 Future Work
#### 5.2.1 Quantification of multidimensional uncertainty
Different forms of distribution have respective limitations in quantifying uncertainty. For example, a Dirichlet distribution can estimate vacuity and dissonance, but not vagueness (e.g., non-distinctive class labels, such as 'A or B'). Vagueness can emerge only in a hyper-Dirichlet distribution; however, both Dirichlet and hyper-Dirichlet distributions are unimodal and cannot estimate data-source-dependent uncertainty, i.e., uncertainty arising when the training data are fused from multiple sources. Even subjective logic does not provide ways of measuring these uncertainty types based on multi-modal or heterogeneous distributions over a simplex. Therefore, future researchers can explore different forms of distributions to measure different types of uncertainty, such as Dirichlet, hyper-Dirichlet, mixtures of Dirichlet, Logistic Normal distributions, or implicit generative models that are free of parametric assumptions about the distribution.
#### 5.2.2 Interpretation of Multidimensional Uncertainty
Based on our in-depth literature review of different uncertainty types, we observed that some uncertainty types were studied in different domains, such as deep learning and belief theory, using distinct terminologies while referring to the same underlying concepts. For example, distributional uncertainty in deep learning and vacuity in subjective logic are both designed to measure uncertainty caused by a lack of information and knowledge. Based on my prior work (Zhao et al., 2020), we found that vacuity is more effective than distributional uncertainty for node-level OOD detection in graph data. This finding highlights the difference between distributional uncertainty and vacuity, even though they were formulated with the same intent. Hence, in this plan, I will further delve into how and why these two similar types of uncertainty perform differently, through empirical experiments and theoretical proof. The research question for future investigation is: what type of uncertainty is more critical than others for maximizing decision effectiveness, under what problem contexts (e.g., images, graph data) and task types (e.g., classification prediction or OOD detection)?
**APPENDIX**
**PROOFS OF THE PROPOSED THEOREMS**
### Proof of Theorem 1
**Interpretation**. **Theorem 1.1 (a)** implies that increases in both uncertainty types may not happen at the same time. A higher vacuity leads to a lower dissonance, and vice versa (a higher dissonance leads to a lower vacuity). This indicates that a high dissonance occurs only when a large amount of evidence is available and the vacuity is low. **Theorem 1.1 (b)** shows the relationship between vacuity and epistemic uncertainty, in which vacuity is an upper bound of epistemic uncertainty. Although some existing approaches (Josang, 2016; Sensoy et al., 2018) treat epistemic uncertainty the same as vacuity, this is not necessarily true except in an extreme case where a sufficiently large amount of evidence is available, making vacuity close to zero. **Theorem 1.2 (a) and (b)** explain how entropy differs from vacuity and/or dissonance. We observe that entropy is 1 when either vacuity or dissonance is 0. This implies that entropy cannot distinguish between different types of uncertainty arising from different root causes. For example, a high entropy is observed when an example is either an OOD or a misclassified example. Similarly, a high aleatoric uncertainty value and a low epistemic uncertainty value are observed in both cases. However, vacuity and dissonance can capture different causes of uncertainty, due to a lack of information and knowledge and due to conflicting evidence, respectively. For example, an OOD object typically shows a high vacuity value and a low dissonance value, while a conflicting prediction exhibits a low vacuity and a high dissonance.
Proof.: 1. (a) Let the opinion \(\omega=[b_{1},\ldots,b_{K},u_{v}]\), where \(K\) is the number of classes, \(b_{i}\) is the belief for class \(i\), \(u_{v}\) is the uncertainty mass (vacuity), and \(\sum_{i=1}^{K}b_{i}+u_{v}=1\). Dissonance has
an upper bound with
\[u_{diss} = \sum_{i=1}^{K}\Big{(}\frac{b_{i}\sum_{j=1,j\neq i}^{K}b_{j}\text{Bal} (b_{i},b_{j})}{\sum_{j=1,j\neq i}^{K}b_{j}}\Big{)}\] \[\leq \sum_{i=1}^{K}\Big{(}\frac{b_{i}\sum_{j=1,j\neq i}^{K}b_{j}}{\sum_{ j=1,j\neq i}^{K}b_{j}}\Big{)},\quad\text{(since $0\leq\text{Bal}(b_{i},b_{j})\leq 1$)}\] \[= \sum_{i=1}^{K}b_{i},\]
where \(\text{Bal}(b_{i},b_{j})\) is the relative mass balance, then we have
\[u_{v}+u_{diss}\leq\sum_{i=1}^{K}b_{i}+u_{v}=1.\] (A.2)
1. (b) For the multinomial random variable \(y\), we have
\[y\sim\text{Cat}(\mathbf{p}),\quad\mathbf{p}\sim\text{Dir}(\boldsymbol{\alpha}),\] (A.3)
where \(\text{Cat}(\mathbf{p})\) is the categorical distribution and \(\text{Dir}(\boldsymbol{\alpha})\) is the Dirichlet distribution. Then we have
\[\text{Prob}(y|\boldsymbol{\alpha})=\int\text{Prob}(y|\mathbf{ p})\text{Prob}(\mathbf{p}|\boldsymbol{\alpha})d\mathbf{p},\] (A.4)
and the epistemic uncertainty is estimated by mutual information,
\[\mathcal{I}[y,\mathbf{p}|\boldsymbol{\alpha}]=\mathcal{H}\Big{[} \mathbb{E}_{\text{Prob}(\mathbf{p}|\boldsymbol{\alpha})}[P(y|\mathbf{p})] \Big{]}-\mathbb{E}_{\text{Prob}(\mathbf{p}|\boldsymbol{\alpha})}\Big{[} \mathcal{H}[P(y|\mathbf{p})]\Big{]}.\] (A.5)
Now we consider another measure of ensemble diversity: _Expected Pairwise KL-Divergence_ between each model in the ensemble. Here the expected pairwise KL-Divergence between two independent distributions, including \(P(y|\mathbf{p}_{1})\) and \(P(y|\mathbf{p}_{2})\), where \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) are two independent samples from \(\text{Prob}(\mathbf{p}|\boldsymbol{\alpha})\), can be computed,
\[\mathcal{K}[y,\mathbf{p}|\boldsymbol{\alpha}] = \mathbb{E}_{\text{Prob}(\mathbf{p}_{1}|\boldsymbol{\alpha})\text{Prob}(\mathbf{p}_{2}|\boldsymbol{\alpha})}\Big{[}KL[P(y|\mathbf{p}_{1})\|P(y|\mathbf{p}_{2})]\Big{]}\] \[= -\sum_{i=1}^{K}\mathbb{E}_{\text{Prob}(\mathbf{p}_{1}|\boldsymbol{\alpha})}[P(y|\mathbf{p}_{1})]\mathbb{E}_{\text{Prob}(\mathbf{p}_{2}|\boldsymbol{\alpha})}[\ln P(y|\mathbf{p}_{2})]-\mathbb{E}_{\text{Prob}(\mathbf{p}|\boldsymbol{\alpha})}\Big{[}\mathcal{H}[P(y|\mathbf{p})]\Big{]}\] \[\geq \mathcal{I}[y,\mathbf{p}|\boldsymbol{\alpha}],\]
where \({\cal I}[y,{\bf p}_{1}|\mathbf{\alpha}]={\cal I}[y,{\bf p}_{2}|\mathbf{\alpha}]\). For the Dirichlet ensemble, the _Expected Pairwise KL-Divergence_ is
\[{\cal K}[y,{\bf p}|\mathbf{\alpha}] = -\sum_{i=1}^{K}\frac{\alpha_{i}}{S}\Big{(}\psi(\alpha_{i})-\psi(S) \Big{)}-\sum_{i=1}^{K}-\frac{\alpha_{i}}{S}\Big{(}\psi(\alpha_{i}+1)-\psi(S+1) \Big{)}\] (A.7) \[= \frac{K-1}{S},\]
where \(S=\sum_{i=1}^{K}\alpha_{i}\) and \(\psi(\cdot)\) is the _digamma function_, the derivative of the natural logarithm of the gamma function. We now obtain the relation between vacuity and epistemic uncertainty:
\[\underbrace{\frac{K}{S}}_{\rm Vacuity}>{\cal K}[y,{\bf p}|\mathbf{\alpha}]=\frac{K-1}{S}\geq\underbrace{{\cal I}[y,{\bf p}|\mathbf{\alpha}]}_{\rm Epistemic}.\] (A.8)
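Before turning to the extreme cases, the bound in Eq. (A.8) can be checked numerically. The sketch below is illustrative only, assuming NumPy is available; the Dirichlet parameter is an arbitrary choice. It estimates the mutual information by Monte Carlo and compares it against the vacuity \(K/S\) and the expected pairwise KL divergence \((K-1)/S\):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([3.0, 1.0, 5.0, 2.0])   # an arbitrary illustrative Dirichlet parameter
K, S = len(alpha), alpha.sum()

p = rng.dirichlet(alpha, size=200_000)    # samples p ~ Dir(alpha)
entropy = lambda q: -np.sum(q * np.log(q + 1e-12), axis=-1)

# I[y, p | alpha] = H[E_p P(y|p)] - E_p H[P(y|p)]  (Eq. (A.5), in nats)
mutual_info = entropy(p.mean(axis=0)) - entropy(p).mean()

print(f"vacuity        K/S     = {K / S:.4f}")
print(f"pairwise KL    (K-1)/S = {(K - 1) / S:.4f}")
print(f"mutual info    I       = {mutual_info:.4f}")
assert K / S > (K - 1) / S >= mutual_info - 1e-3   # Eq. (A.8)
```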
2. (a) For an out-of-distribution sample, \(\alpha=[1,\ldots,1]\), the vacuity can be calculated as
\[u_{v} = \frac{K}{\sum_{i=1}^{K}\alpha_{i}}=\frac{K}{K}=1,\] (A.9)
and the belief mass \(b_{i}=(\alpha_{i}-1)/\sum_{i=1}^{K}\alpha_{i}=0\), we estimate dissonance,
\[u_{diss} = \sum_{i=1}^{K}\Big{(}\frac{b_{i}\sum_{j=1,j\neq i}^{K}b_{j}{\rm Bal }(b_{i},b_{j})}{\sum_{j=1,j\neq i}^{K}b_{j}}\Big{)}=0.\] (A.10)
Given the expected probability \(\hat{p}=[1/K,\ldots,1/K]^{\top}\), the entropy is calculated based on \(\log_{K}\),
\[u_{en}={\cal H}[\hat{p}]=-\sum_{i=1}^{K}\hat{p}_{i}\log_{K}\hat{p}_{i}=-\sum_{ i=1}^{K}\frac{1}{K}\log_{K}\frac{1}{K}=\log_{K}\frac{1}{K}^{-1}=\log_{K}K=1,\] (A.11)
where \(\mathcal{H}(\cdot)\) is the entropy. Based on Dirichlet distribution, the aleatoric uncertainty refers to the expected entropy,
\[u_{alea} = \mathbb{E}_{p\sim\text{Dir}(\alpha)}[\mathcal{H}[p]]\] \[= -\sum_{i=1}^{K}\frac{\Gamma(S)}{\prod_{i=1}^{K}\Gamma(\alpha_{i})} \int_{S_{K}}p_{i}\log_{K}p_{i}\prod_{i=1}^{K}p_{i}^{\alpha_{i}-1}d\mathbf{p}\] \[= -\frac{1}{\ln K}\sum_{i=1}^{K}\frac{\Gamma(S)}{\prod_{i=1}^{K} \Gamma(\alpha_{i})}\int_{S_{K}}p_{i}\ln p_{i}\prod_{i=1}^{K}p_{i}^{\alpha_{i}-1 }d\mathbf{p}\] \[= -\frac{1}{\ln K}\sum_{i=1}^{K}\frac{\alpha_{i}}{S}\frac{\Gamma(S+ 1)}{\Gamma(\alpha_{i}+1)\prod_{i^{\prime}=1,\neq i}^{K}\Gamma(\alpha_{i^{ \prime}})}\int_{S_{K}}p_{i}^{\alpha_{i}}\ln p_{i}\prod_{i^{\prime}=1,\neq i}^{ K}p_{i^{\prime}}^{\alpha_{i^{\prime}}-1}d\mathbf{p}\] \[= \frac{1}{\ln K}\sum_{i=1}^{K}\frac{\alpha_{i}}{S}\big{(}\psi(S+ 1)-\psi(\alpha_{i}+1)\big{)}\] \[= \frac{1}{\ln K}\sum_{i=1}^{K}\frac{1}{K}(\psi(K+1)-\psi(2))\] \[= \frac{1}{\ln K}(\psi(K+1)-\psi(2))\] \[= \frac{1}{\ln K}(\psi(2)+\sum_{k=2}^{K}\frac{1}{k}-\psi(2))\] \[= \frac{1}{\ln K}\sum_{k=2}^{K}\frac{1}{k}<\frac{1}{\ln K}\ln K=1,\]
where \(S=\sum_{i=1}^{K}\alpha_{i}\), \(\mathbf{p}=[p_{1},\ldots,p_{K}]^{\top}\), and \(K\geq 2\) is the number of categories. The epistemic uncertainty can be calculated via the mutual information,
\[u_{epis} = \mathcal{H}[\mathbb{E}_{p\sim\text{Dir}(\alpha)}[p]]-\mathbb{E}_ {p\sim\text{Dir}(\alpha)}[\mathcal{H}[p]]\] \[= \mathcal{H}[\hat{p}]-u_{alea}\] \[= 1-\frac{1}{\ln K}\sum_{k=2}^{K}\frac{1}{k}<1.\]
To compare aleatoric uncertainty with epistemic uncertainty, we first prove that aleatoric uncertainty (Eq. (A.13)) is monotonically increasing in \(K\) and converges to 1 as \(K\) increases.
Based on _Lemma 1_, we have
\[\Big{(}\ln(K+1)-\ln K\Big{)}\sum_{k=2}^{K}\frac{1}{k}<\frac{\ln K}{K +1}\] \[\Rightarrow\ln(K+1)\sum_{k=2}^{K}\frac{1}{k}<\ln K\Big{(}\sum_{k=2 }^{K}\frac{1}{k}+\frac{1}{K+1}\Big{)}=\ln K\sum_{k=2}^{K+1}\frac{1}{k}\] \[\Rightarrow\frac{1}{\ln K}\sum_{k=2}^{K}\frac{1}{k}<\frac{1}{\ln( K+1)}\sum_{k=2}^{K+1}\frac{1}{k}.\] (A.14)
Based on Eq. (A.14) and Eq. (A.13), we prove that aleatoric uncertainty is monotonically increasing with respect to \(K\). So the minimum aleatoric can be shown to be \(\frac{1}{\ln 2}\frac{1}{2}\), when \(K=2\).
Similarly, for epistemic uncertainty, which is monotonically decreasing as \(K\) increases based on _Lemma 1_, the maximum epistemic can be shown to be \(1-\frac{1}{\ln 2}\frac{1}{2}\) when \(K=2\). Then we have,
\[u_{alea}\geq\frac{1}{\ln 2}\frac{1}{2}>1-\frac{1}{2\ln 2}\geq u_{epis}\] (A.15)
Therefore, we prove that \(1=u_{v}=u_{en}>u_{alea}>u_{epis}>u_{diss}=0\).
2. (b) For a conflicting prediction, i.e., \(\alpha=[\alpha_{1},\ldots,\alpha_{K}]\), with \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{K}=C\), and \(S=\sum_{i=1}^{K}\alpha_{i}=CK\), the expected probability \(\hat{p}=[1/K,\ldots,1/K]^{\top}\), the belief mass \(b_{i}=(\alpha_{i}-1)/S\), and the vacuity can be calculated as
\[u_{v} = \frac{K}{S}\xrightarrow{S\rightarrow\infty}0,\] (A.16)
and the dissonance can be calculated as
\[u_{diss} = \sum_{i=1}^{K}\Big{(}\frac{b_{i}\sum_{j=1,j\neq i}^{K}b_{j}\text{Bal}(b_{i},b_{j})}{\sum_{j=1,j\neq i}^{K}b_{j}}\Big{)}=\sum_{i=1}^{K}b_{i}\quad\text{(since all belief masses are equal, $\text{Bal}(b_{i},b_{j})=1$)}\] \[= \sum_{i=1}^{K}\Bigg{(}\frac{\alpha_{i}-1}{\sum_{i=1}^{K}\alpha_{i}}\Bigg{)}\] \[= \frac{\sum_{i=1}^{K}\alpha_{i}-K}{\sum_{i=1}^{K}\alpha_{i}}\] \[= 1-\frac{K}{S}\xrightarrow{S\rightarrow\infty}1.\]
Given the expected probability \(\hat{p}=[1/K,\ldots,1/K]^{\top}\), the entropy can be calculated based on Dirichlet distribution,
\[u_{en} = \mathcal{H}[\hat{p}]=-\sum_{i=1}^{K}\hat{p}_{i}\log_{K}\hat{p}_{i}=1,\] (A.18)
and the aleatoric uncertainty is estimated as the expected entropy,
\[u_{alea} = \mathbb{E}_{p\sim\mathrm{Dir}(\alpha)}[\mathcal{H}[p]]\] \[= -\sum_{i=1}^{K}\frac{\Gamma(S)}{\prod_{i=1}^{K}\Gamma(\alpha_{i})}\int_{S_{K}}p_{i}\log_{K}p_{i}\prod_{i=1}^{K}p_{i}^{\alpha_{i}-1}d\mathbf{p}\] \[= -\frac{1}{\ln K}\sum_{i=1}^{K}\frac{\Gamma(S)}{\prod_{i=1}^{K}\Gamma(\alpha_{i})}\int_{S_{K}}p_{i}\ln p_{i}\prod_{i=1}^{K}p_{i}^{\alpha_{i}-1}d\mathbf{p}\] \[= -\frac{1}{\ln K}\sum_{i=1}^{K}\frac{\alpha_{i}}{S}\frac{\Gamma(S+1)}{\Gamma(\alpha_{i}+1)\prod_{i^{\prime}=1,\neq i}^{K}\Gamma(\alpha_{i^{\prime}})}\int_{S_{K}}p_{i}^{\alpha_{i}}\ln p_{i}\prod_{i^{\prime}=1,\neq i}^{K}p_{i^{\prime}}^{\alpha_{i^{\prime}}-1}d\mathbf{p}\] \[= \frac{1}{\ln K}\sum_{i=1}^{K}\frac{\alpha_{i}}{S}\big{(}\psi(S+1)-\psi(\alpha_{i}+1)\big{)}\] \[= \frac{1}{\ln K}\sum_{i=1}^{K}\frac{1}{K}(\psi(S+1)-\psi(C+1))\] \[= \frac{1}{\ln K}(\psi(S+1)-\psi(C+1))\] \[= \frac{1}{\ln K}(\psi(C+1)+\sum_{k=C+1}^{S}\frac{1}{k}-\psi(C+1))\] \[= \frac{1}{\ln K}\sum_{k=C+1}^{S}\frac{1}{k}\xrightarrow{S\to\infty}1.\]
The epistemic uncertainty can be calculated via mutual information,
\[u_{epis} = \mathcal{H}[\mathbb{E}_{p\sim\mathrm{Dir}(\alpha)}[p]]-\mathbb{E }_{p\sim\mathrm{Dir}(\alpha)}[\mathcal{H}[p]]\] \[= \mathcal{H}[\hat{p}]-u_{alea}\] \[= 1-\frac{1}{\ln K}\sum_{k=C+1}^{S}\frac{1}{k}\xrightarrow{S\to \infty}0.\]
Now we compare aleatoric uncertainty with vacuity,
\[u_{alea} = \frac{1}{\ln K}\sum_{k=C+1}^{S}\frac{1}{k}\] \[= \frac{1}{\ln K}\sum_{k=C+1}^{CK}\frac{1}{k}\] \[\geq \frac{\ln(CK+1)-\ln(C+1)}{\ln K},\quad\text{(since $\frac{1}{k}\geq\int_{k}^{k+1}\frac{dx}{x}$)}\] \[= \frac{\ln\big{(}K-\frac{K-1}{C+1}\big{)}}{\ln K}\] \[> \frac{\ln\big{(}K-\frac{K-1}{2}\big{)}}{\ln K}\] \[= \frac{\ln\big{(}\frac{K}{4}+\frac{K}{4}+\frac{1}{2}\big{)}}{\ln K}\] \[\geq \frac{\ln\big{[}3\big{(}\frac{K}{4}\cdot\frac{K}{4}\cdot\frac{1}{2}\big{)}^{\frac{1}{3}}\big{]}}{\ln K},\quad\text{(AM--GM)}\] \[= \frac{\ln 3+\frac{1}{3}\ln(\frac{K^{2}}{32})}{\ln K}\] \[= \frac{\ln 3+\frac{2}{3}\ln K-\frac{1}{3}\ln 32}{\ln K}>\frac{2}{3}.\]
Based on the inequality above, when \(C>\frac{3}{2}\), we have
\[u_{alea}>\frac{2}{3}>\frac{1}{C}=u_{v}\] (A.22)
We have already proved that \(u_{v}>u_{epis}\); and since \(u_{en}=1\), we have \(u_{alea}>u_{diss}\). Therefore, we prove that \(u_{en}>u_{alea}>u_{diss}>u_{v}>u_{epis}\), with \(u_{en}=1\), \(u_{diss}\to 1\), \(u_{alea}\to 1\), \(u_{v}\to 0\), and \(u_{epis}\to 0\) as \(S\to\infty\).
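As a sanity check on Theorem 1.2, the closed forms used in the proof can be evaluated directly. The following is a minimal sketch, assuming SciPy for the digamma function; the dissonance reported is the proof's upper bound \(\sum_{i}b_{i}\), which is exact in both cases here because equal belief masses give \(\text{Bal}(b_{i},b_{j})=1\):

```python
import numpy as np
from scipy.special import digamma

def uncertainties(alpha):
    alpha = np.asarray(alpha, dtype=float)
    K, S = len(alpha), alpha.sum()
    b = (alpha - 1.0) / S                                  # belief masses
    vacuity = K / S                                        # u_v
    dissonance = b.sum()                                   # exact when all b_i are equal
    p_hat = alpha / S
    entropy = -np.sum(p_hat * np.log(p_hat)) / np.log(K)   # u_en, log base K
    aleatoric = np.sum(p_hat * (digamma(S + 1) - digamma(alpha + 1))) / np.log(K)
    epistemic = entropy - aleatoric                        # mutual information
    return vacuity, dissonance, entropy, aleatoric, epistemic

# OOD case, alpha = [1,...,1]:  u_v = u_en = 1 > u_alea > u_epis > u_diss = 0
print(uncertainties(np.ones(10)))
# Conflicting case, alpha = [C,...,C] with C = 100:
#                               u_en > u_alea > u_diss > u_v > u_epis
print(uncertainties(100 * np.ones(10)))
```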
**Lemma 1**.: _For all integer \(N\geq 2\), we have \(\sum_{n=2}^{N}\frac{1}{n}<\frac{\ln N}{(N+1)\ln(\frac{N+1}{N})}\)._
Proof.: We will prove by induction that, for all integer \(N\geq 2\),
\[\sum_{n=2}^{N}\frac{1}{n}<\frac{\ln N}{(N+1)\ln(\frac{N+1}{N})}.\] (A.23)
_Base case_: When \(N=2\), we have \(\frac{1}{2}<\frac{\ln 2}{3\ln\frac{3}{2}}\) and Eq. (A.23) is true for \(N=2\).
_Induction step_: Let an integer \(K\geq 2\) be given and suppose Eq. (A.23) is true for \(N=K\). Then
\[\sum_{k=2}^{K+1}\frac{1}{k}=\frac{1}{K+1}+\sum_{k=2}^{K}\frac{1}{k}< \frac{1}{K+1}+\frac{\ln K}{(K+1)\ln(\frac{K+1}{K})}=\frac{\ln(K+1)}{(K+1)\ln( \frac{K+1}{K})}.\] (A.24)
Define \(g(x)=(x+1)\ln(\frac{x+1}{x})\) for \(x\geq 2\). Its derivative is \(g^{\prime}(x)=\ln(1+\frac{1}{x})-\frac{1}{x}<0\), so \(g(x)\) is monotonically decreasing, which yields \(g(K)>g(K+1)\). Based on Eq. (A.24), we have
\[\sum_{k=2}^{K+1}\frac{1}{k}<\frac{\ln(K+1)}{g(K)}<\frac{\ln(K+1)} {g(K+1)}=\frac{\ln(K+1)}{(K+2)\ln(\frac{K+2}{K+1})}.\] (A.25)
Thus, Eq. (A.23) holds for \(N=K+1\), and the proof of the induction step is complete.
_Conclusion_: By the principle of induction, Eq. (A.23) is true for all integers \(N\geq 2\).
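Although the induction above settles Lemma 1 for all \(N\), a quick numerical check over a finite range is reassuring. The loop below is a minimal sketch using only the Python standard library:

```python
import math

harmonic = 0.0                       # running sum_{n=2}^{N} 1/n
for N in range(2, 10_001):
    harmonic += 1.0 / N
    rhs = math.log(N) / ((N + 1) * math.log((N + 1) / N))
    assert harmonic < rhs, f"Lemma 1 fails at N={N}"
print("Lemma 1 verified for all integers N in [2, 10000].")
```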
### Derivations for Joint Probability and KL Divergence
#### a.2.1 Joint Probability
We infer the joint probability (Eq. (2.8)) by:
\[p(\mathbf{y}|A,\mathbf{r};\mathcal{G})=\int\int\mathrm{Prob}(\mathbf{y}|\mathbf{p})\mathrm{Prob}(\mathbf{p}|A,\mathbf{r};\boldsymbol{\theta})\mathrm{Prob}(\boldsymbol{\theta}|\mathcal{G})d\mathbf{p}d\boldsymbol{\theta}\] \[\approx \int\int\mathrm{Prob}(\mathbf{y}|\mathbf{p})\mathrm{Prob}(\mathbf{p}|A,\mathbf{r};\boldsymbol{\theta})q(\boldsymbol{\theta})d\mathbf{p}d\boldsymbol{\theta}\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\int\mathrm{Prob}(\mathbf{y}|\mathbf{p})\mathrm{Prob}(\mathbf{p}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d\mathbf{p},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\int\prod_{i=1}^{N}\mathrm{Prob}(\mathbf{y}_{i}|\mathbf{p}_{i})\mathrm{Prob}(\mathbf{p}_{i}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d\mathbf{p},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\prod_{i=1}^{N}\int\mathrm{Prob}(\mathbf{y}_{i}|\mathbf{p}_{i})\mathrm{Dir}(\mathbf{p}_{i}|\boldsymbol{\alpha}_{i}^{(m)})d\mathbf{p}_{i},\quad\boldsymbol{\alpha}^{(m)}=f(A,\mathbf{r},\boldsymbol{\theta}^{(m)}),\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta}),\]
where the posterior over class label \(p\) will be given by the mean of the Dirichlet:
\[\mathrm{Prob}(y_{i}=p|\boldsymbol{\theta}^{(m)})=\int\mathrm{Prob}(y_{i}=p| \mathbf{p}_{i})\mathrm{Prob}(\mathbf{p}_{i}|A,\mathbf{r};\boldsymbol{\theta}^{(m )})d\mathbf{p}_{i}=\frac{\alpha_{ip}^{(m)}}{\sum_{k=1}^{K}\alpha_{ik}^{(m)}}.\]
The probabilistic form for a specific node \(i\) by using marginal probability,
\[\mathrm{Prob}(\mathbf{y}_{i}|A,\mathbf{r};\mathcal{G}) = \sum_{y\backslash y_{i}}\mathrm{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})\] \[= \sum_{y\backslash y_{i}}\int\int\prod_{j=1}^{N}\mathrm{Prob}(\mathbf{y}_{j}|\mathbf{p}_{j})\mathrm{Prob}(\mathbf{p}_{j}|A,\mathbf{r};\boldsymbol{\theta})\mathrm{Prob}(\boldsymbol{\theta}|\mathcal{G})d\mathbf{p}d\boldsymbol{\theta}\] \[\approx \sum_{y\backslash y_{i}}\int\int\prod_{j=1}^{N}\mathrm{Prob}(\mathbf{y}_{j}|\mathbf{p}_{j})\mathrm{Prob}(\mathbf{p}_{j}|A,\mathbf{r};\boldsymbol{\theta})q(\boldsymbol{\theta})d\mathbf{p}d\boldsymbol{\theta}\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\sum_{y\backslash y_{i}}\int\prod_{j=1}^{N}\mathrm{Prob}(\mathbf{y}_{j}|\mathbf{p}_{j})\mathrm{Prob}(\mathbf{p}_{j}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d\mathbf{p},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\Big{[}\sum_{y\backslash y_{i}}\int\prod_{j=1}^{N}\mathrm{Prob}(\mathbf{y}_{j}|\mathbf{p}_{j})\mathrm{Prob}(\mathbf{p}_{j}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d\mathbf{p}_{j}\Big{]},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\Big{[}\sum_{y\backslash y_{i}}\prod_{j=1,j\neq i}^{N}\mathrm{Prob}(\mathbf{y}_{j}|A,\mathbf{r}_{j};\boldsymbol{\theta}^{(m)})\Big{]}\mathrm{Prob}(\mathbf{y}_{i}|A,\mathbf{r};\boldsymbol{\theta}^{(m)}),\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta})\] \[\approx \frac{1}{M}\sum_{m=1}^{M}\int\mathrm{Prob}(\mathbf{y}_{i}|\mathbf{p}_{i})\mathrm{Prob}(\mathbf{p}_{i}|A,\mathbf{r};\boldsymbol{\theta}^{(m)})d\mathbf{p}_{i},\quad\boldsymbol{\theta}^{(m)}\sim q(\boldsymbol{\theta}).\]
To be specific, the probability of label \(p\) is,
\[\mathrm{Prob}(y_{i}=p|A,\mathbf{r};\mathcal{G})\approx\frac{1}{M}\sum_{m=1}^{M} \frac{\alpha_{ip}^{(m)}}{\sum_{k=1}^{K}\alpha_{ik}^{(m)}},\quad\boldsymbol{ \alpha}^{(m)}=f(A,\mathbf{r},\boldsymbol{\theta}^{(m)}),\quad\boldsymbol{ \theta}^{(m)}\sim q(\boldsymbol{\theta}).\]
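In code, this Monte Carlo predictive distribution reduces to averaging normalized Dirichlet parameters over posterior samples. The sketch below is illustrative only: `f` and `sample_theta` are hypothetical placeholders for a model that outputs Dirichlet parameters and a sampler for \(q(\boldsymbol{\theta})\), respectively:

```python
import numpy as np

def predictive_prob(A, r, f, sample_theta, M=50):
    """Monte Carlo estimate of Prob(y_i = p | A, r; G) by averaging Dirichlet means."""
    probs = []
    for _ in range(M):
        theta = sample_theta()                  # theta^(m) ~ q(theta)
        alpha = f(A, r, theta)                  # Dirichlet parameters, shape (N, K)
        probs.append(alpha / alpha.sum(axis=-1, keepdims=True))
    return np.mean(probs, axis=0)               # shape (N, K)
```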
#### a.2.2 KL-Divergence
The KL divergence between \(\text{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})\) and \(\text{Prob}(\mathbf{y}|\hat{\mathbf{p}})\) is given by
\[\text{KL}[\text{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})\|\text{Prob}(\mathbf{y}|\hat{\mathbf{p}})] = \mathbb{E}_{\text{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})}\Big{[}\log\frac{\text{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})}{\text{Prob}(\mathbf{y}|\hat{\mathbf{p}})}\Big{]}\] \[\approx \mathbb{E}_{\text{Prob}(\mathbf{y}|A,\mathbf{r};\mathcal{G})}\Big{[}\log\frac{\prod_{i=1}^{N}\text{Prob}(\mathbf{y}_{i}|A,\mathbf{r};\mathcal{G})}{\prod_{i=1}^{N}\text{Prob}(\mathbf{y}_{i}|\hat{\mathbf{p}})}\Big{]}\] \[\approx \sum_{i=1}^{N}\mathbb{E}_{\text{Prob}(\mathbf{y}_{i}|A,\mathbf{r};\mathcal{G})}\Big{[}\log\frac{\text{Prob}(\mathbf{y}_{i}|A,\mathbf{r};\mathcal{G})}{\text{Prob}(\mathbf{y}_{i}|\hat{\mathbf{p}})}\Big{]}\] \[\approx \sum_{i=1}^{N}\sum_{j=1}^{K}\text{Prob}(y_{i}=j|A,\mathbf{r};\mathcal{G})\Big{(}\log\frac{\text{Prob}(y_{i}=j|A,\mathbf{r};\mathcal{G})}{\text{Prob}(y_{i}=j|\hat{\mathbf{p}})}\Big{)}\]
The KL divergence between two Dirichlet distributions \(\text{Dir}(\alpha)\) and \(\text{Dir}(\hat{\alpha})\) can be obtained in closed form as,
\[\text{KL}[\text{Dir}(\alpha)\|\text{Dir}(\hat{\alpha})]\] \[=\ln\Gamma(S)-\ln\Gamma(\hat{S})+\sum_{c=1}^{K}\big{(}\ln\Gamma( \hat{\alpha}_{c})-\ln\Gamma(\alpha_{c})\big{)}+\sum_{c=1}^{K}(\alpha_{c}-\hat {\alpha}_{c})(\psi(\alpha_{c})-\psi(S)),\]
where \(S=\sum_{c=1}^{K}\alpha_{c}\) and \(\hat{S}=\sum_{c=1}^{K}\hat{\alpha}_{c}\).
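This closed form is straightforward to implement. A minimal sketch, assuming SciPy's `gammaln` and `digamma`, is:

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(alpha, alpha_hat):
    """Closed-form KL[Dir(alpha) || Dir(alpha_hat)] from the equation above."""
    S, S_hat = alpha.sum(), alpha_hat.sum()
    return (gammaln(S) - gammaln(S_hat)
            + np.sum(gammaln(alpha_hat) - gammaln(alpha))
            + np.sum((alpha - alpha_hat) * (digamma(alpha) - digamma(S))))

print(kl_dirichlet(np.array([2., 3., 4.]), np.array([1., 1., 1.])))  # positive
print(kl_dirichlet(np.array([2., 3., 4.]), np.array([2., 3., 4.])))  # 0.0
```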
### Proof of Theorem 2
For our robust SSL, the training loss \(\mathcal{L}_{T}\) can be decomposed into a supervised loss \(\mathcal{L}_{L}\) and an unsupervised loss \(\mathcal{L}_{U}\), i.e., \(\mathcal{L}_{T}(\theta)=\mathcal{L}_{L}(\theta)+\mathbf{w}\mathcal{L}_{U}(\theta)\), where \(\mathcal{L}_{L}(\theta)=\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}l(f(\mathbf{x}_{i},\theta),y_{i})\) and \(\mathcal{L}_{U}(\theta)=\sum_{x_{j}\in\mathcal{U}}r(f(\mathbf{x}_{j},\theta))\).
**Lemma 2**.: _Suppose the validation loss function \(\mathcal{L}_{V}\) is Lipschitz smooth with constant \(L\), and the unsupervised loss function \(\mathcal{L}_{U}\) has \(\gamma\)-bounded gradients. Then the gradient of the validation loss function with respect to \(\mathbf{w}\) is Lipschitz continuous._
Proof.: For the meta approximation method, the gradient of the validation loss function with respect to \(\mathbf{w}\) can be written as:
\[\nabla_{\mathbf{w}}\mathcal{L}_{V}(\theta^{*}(\mathbf{w})) = \frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\theta^{*}(\mathbf{w})}{\partial\mathbf{w}}\] (A.26) \[= \frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\big{(}\theta-\alpha\frac{\partial\mathcal{L}_{T}(\theta)}{\partial\theta}\big{)}}{\partial\mathbf{w}}\] \[= \frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\big{(}\theta-\alpha\frac{\partial(\mathcal{L}_{L}(\theta)+\mathbf{w}\mathcal{L}_{U}(\theta))}{\partial\theta}\big{)}}{\partial\mathbf{w}}\] \[= -\alpha\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}.\]
Taking gradients on both sides of Eq. (A.26) with respect to \(\mathbf{w}\), we have
\[\|\nabla_{\mathbf{w}}^{2}\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))\| = \alpha\Big{\|}\frac{\partial}{\partial\mathbf{w}}\Big{(}\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\Big{)}\Big{\|}\] (A.27) \[= \alpha\Big{\|}\frac{\partial}{\partial\theta}\Big{(}\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\mathbf{w}}\Big{)}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\Big{\|}\] \[= \alpha\Big{\|}\frac{\partial}{\partial\theta}\Big{(}-\alpha\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\Big{)}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\Big{\|}\] \[= \alpha^{2}\Big{\|}\frac{\partial^{2}\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}\partial\theta^{*}}\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\cdot\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\Big{\|}\] \[\leq \alpha^{2}\gamma^{2}L,\]
since \(\|\frac{\partial^{2}\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}\partial\theta^{*}}\|\leq L\) and \(\|\frac{\partial\mathcal{L}_{U}(\theta)}{\partial\theta}\|\leq\gamma\). Let \(\tilde{L}=\alpha^{2}\gamma^{2}L\). Based on the Lagrange mean value theorem, we have,
\[\|\nabla\mathcal{L}_{V}(\theta^{*}(\mathbf{w}_{i}))-\nabla\mathcal{L}_{V}( \theta^{*}(\mathbf{w}_{j}))\|\leq\tilde{L}\|\mathbf{w}_{i}-\mathbf{w}_{j}\|, \quad\text{for all }\mathbf{w}_{i},\mathbf{w}_{j},\] (A.28)
where \(\nabla\mathcal{L}_{V}(\theta^{*}(\mathbf{w}_{i}))=\nabla\mathcal{L}_{V}( \theta^{*}(\mathbf{w}))|_{\mathbf{w}_{i}}\)
Based on Lemma 2, we now proceed to the proof of Theorem 2.
Proof.: First, according to the updating rule, we have:
\[\mathcal{L}_{V}(\theta_{t+1})-\mathcal{L}_{V}(\theta_{t}) = \mathcal{L}_{V}(\theta_{t}-\alpha\nabla_{\theta}\mathcal{L}_{T}( \theta_{t},\mathbf{w}_{t}))-\mathcal{L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta }\mathcal{L}_{T}(\theta_{t-1},\mathbf{w}_{t-1}))\] (A.29) \[= \underbrace{\mathcal{L}_{V}(\theta_{t}-\alpha\nabla_{\theta} \mathcal{L}_{T}(\theta_{t},\mathbf{w}_{t}))-\mathcal{L}_{V}(\theta_{t-1}- \alpha\nabla_{\theta}\mathcal{L}_{T}(\theta_{t},\mathbf{w}_{t}))}_{(a)}+\] \[\underbrace{\mathcal{L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta} \mathcal{L}_{T}(\theta_{t},\mathbf{w}_{t}))-\mathcal{L}_{V}(\theta_{t-1}- \alpha\nabla_{\theta}\mathcal{L}_{T}(\theta_{t-1},\mathbf{w}_{t-1}))}_{(b)}.\]
and for term (a), we have
\[{\cal L}_{V}(\theta_{t}-\alpha\nabla_{\theta}{\cal L}_{T}(\theta_{t},{\bf w}_{t}))-{\cal L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta}{\cal L}_{T}(\theta_{t},{\bf w}_{t}))\] (A.30) \[\leq (\theta_{t}-\theta_{t-1})\cdot\nabla_{\theta}{\cal L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta}{\cal L}_{T}(\theta_{t},{\bf w}_{t}))+\frac{L}{2}\|\theta_{t}-\theta_{t-1}\|_{2}^{2}\quad\mbox{(Lipschitz smoothness)}\] \[\leq \alpha\gamma^{2}+\frac{L}{2}\alpha^{2}\gamma^{2}\] \[= \alpha\gamma^{2}(\frac{\alpha L}{2}+1).\]
For term (b), \(\mathcal{L}_{V}\) is Lipschitz-smooth with respect to \(\mathbf{w}\) by Lemma 2. Then we have,
\[{\cal L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta}{\cal L}_{T}( \theta_{t},{\bf w}_{t}))-{\cal L}_{V}(\theta_{t-1}-\alpha\nabla_{\theta}{\cal L }_{T}(\theta_{t-1},{\bf w}_{t-1}))\] (A.31) \[= {\cal L}_{V}(\theta^{*}({\bf w}_{t}))-{\cal L}_{V}(\theta^{*}({ \bf w}_{t-1}))\] \[\leq ({\bf w}_{t}-{\bf w}_{t-1})\cdot\nabla_{\bf w}{\cal L}_{V}( \theta_{t})+\frac{\tilde{L}}{2}\|{\bf w}_{t}-{\bf w}_{t-1}\|_{2}^{2}\qquad \mbox{(From Lemma \ref{lem:L1})}\] \[= -\beta\nabla_{\bf w}{\cal L}_{V}(\theta_{t})\cdot\nabla_{\bf w}{ \cal L}_{V}(\theta_{t})+\frac{\tilde{L}}{2}\|-\beta\nabla_{\bf w}{\cal L}_{V}( \theta_{t})\|_{2}^{2}\] \[= (\frac{\tilde{L}}{2}\beta^{2}-\beta)\|\nabla_{\bf w}{\cal L}_{V}( \theta_{t})\|_{2}^{2}.\]
Then we have,
\[{\cal L}_{V}(\theta_{t+1})-{\cal L}_{V}(\theta_{t})\leq\alpha \gamma^{2}(\frac{\alpha L}{2}+1)+(\frac{\tilde{L}}{2}\beta^{2}-\beta)\|\nabla _{\bf w}{\cal L}_{V}(\theta_{t})\|_{2}^{2}\] (A.32)
Summing up the above inequalities and rearranging the terms, we can obtain
\[\sum_{t=1}^{T}(\beta-\frac{\tilde{L}}{2}\beta^{2})\|\nabla_{\bf w }{\cal L}_{V}(\theta_{t})\|_{2}^{2} \leq {\cal L}_{V}(\theta_{1})-{\cal L}_{V}(\theta_{T+1})+\alpha\gamma^ {2}(\frac{\alpha LT}{2}-T)\] (A.33) \[\leq {\cal L}_{V}(\theta_{1})+\alpha\gamma^{2}(\frac{\alpha LT}{2}+T)\]
Furthermore, we can deduce that,
\[\min_{t}\mathbb{E}\big{[}\|\nabla_{\textbf{w}}\mathcal{L}_{V}(\theta_{t})\|_{2}^{2}\big{]} \leq \frac{\sum_{t=1}^{T}(\beta-\frac{\tilde{L}}{2}\beta^{2})\|\nabla_{\textbf{w}}\mathcal{L}_{V}(\theta_{t})\|_{2}^{2}}{\sum_{t=1}^{T}(\beta-\frac{\tilde{L}}{2}\beta^{2})}\] (A.34) \[\leq \frac{1}{\sum_{t=1}^{T}(2\beta-\tilde{L}\beta^{2})}\Big{[}2\mathcal{L}_{V}(\theta_{1})+\alpha\gamma^{2}(\alpha LT+2T)\Big{]}\] \[\leq \frac{1}{\sum_{t=1}^{T}\beta}\Big{[}2\mathcal{L}_{V}(\theta_{1})+\alpha\gamma^{2}(LT+2T)\Big{]}\] \[= \frac{2\mathcal{L}_{V}(\theta_{1})}{T}\frac{1}{\beta}+\frac{\alpha\gamma^{2}(L+2)}{\beta}\] \[= \frac{2\mathcal{L}_{V}(\theta_{1})}{T}\max\{L,\frac{\sqrt{T}}{C}\}+\min\{1,\frac{k}{T}\}\max\{L,\frac{\sqrt{T}}{C}\}\gamma^{2}(L+2)\] \[= \frac{2\mathcal{L}_{V}(\theta_{1})}{C\sqrt{T}}+\frac{k\gamma^{2}(L+2)}{C\sqrt{T}}=\mathcal{O}(\frac{1}{\sqrt{T}})\]
The third inequality holds for \(\sum_{t=1}^{T}\beta\leq\sum_{t=1}^{T}(2\beta-\tilde{L}\beta^{2})\). Therefore, we can conclude that our algorithm can always achieve \(\min_{0\leq t\leq T}\mathbb{E}[\|\nabla_{\textbf{w}}\mathcal{L}_{V}(\theta_{t} )\|_{2}^{2}]\leq\mathcal{O}(\frac{1}{\sqrt{T}})\) in T steps.
### Proof of Proposition 2
Proof.: The bi-level optimization problem for weight hyperparameters **w** using a model characterized by parameters \(\theta\) is as follows:
\[\textbf{w}^{*}=\operatorname*{argmin}_{\textbf{w}}\mathcal{L}_{V} (\textbf{w},\theta^{*}(\textbf{w}))\;\text{where}\] (A.35) \[\theta^{*}(\textbf{w})=\operatorname*{argmin}_{\theta}\mathcal{L}_ {T}(\textbf{w},\theta)\] (A.36)
**Case 1**.: _IFT Approach:_ _In order to optimize **w** using gradient descent, we need to calculate the weight gradient \(\frac{\partial\mathcal{L}_{V}(\theta^{*}(\textbf{w}),\textbf{w})}{\partial \textbf{w}}\). Using chain rule in Eq. (A.35), we have:_
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\textbf{w}),\textbf{w})}{\partial \textbf{w}}=\underbrace{\frac{\partial\mathcal{L}_{V}}{\partial\textbf{w}}}_{ (a)}+\underbrace{\frac{\partial\mathcal{L}_{V}}{\partial\theta^{*}(\textbf{w} )}}_{(b)}\times\underbrace{\frac{\partial\theta^{*}(\textbf{w})}{\partial \textbf{w}}}_{(c)}\] (A.37)
_where (a) is the direct weight gradient and (b) is the direct parameter gradient, which is easy to compute. The tricky part is the term (c) (best-response Jacobian)._
_In the IFT approach, we approximate (c) using the Implicit function theorem,_
\[\frac{\partial\theta^{*}(\mathbf{w})}{\partial\mathbf{w}}=-\underbrace{\left[\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\right]^{-1}}_{(d)}\times\underbrace{\frac{\partial^{2}\mathcal{L}_{T}}{\partial\mathbf{w}\partial\theta^{T}}}_{(e)}\] (A.38)
_However, computing Eq. (A.38) is challenging when using deep nets because it requires inverting a high dimensional Hessian (term (d)), which often requires \(\mathcal{O}(m^{3})\) operations. Therefore, the IFT approach (Lorraine et al., 2020) uses the Neumann series approximation to effectively compute the Hessian inverse term(d), which is as follows,_
\[\left[\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\right]^{-1}=\lim_{P\rightarrow\infty}\sum_{p=0}^{P}\left[I-\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\right]^{p}\] (A.39)
_where \(I\) is the identity matrix._
_Assuming \(P=0\) in Eq. (A.39), we have \(\left[\frac{\partial^{2}\mathcal{L}_{T}}{\partial\theta\partial\theta^{T}}\right]^{-1}=\mathbf{I}\) and substituting it in Eq. (A.38), we have:_
\[\frac{\partial\theta^{*}(\mathbf{w})}{\partial\mathbf{w}}=-\frac{\partial^{2}\mathcal{L}_{T}}{\partial\mathbf{w}\partial\theta^{T}}\] (A.40)
_Substituting the above equation in Eq. (A.37), we have:_
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})}{\partial\mathbf{w}}= \frac{\partial\mathcal{L}_{V}}{\partial\mathbf{w}}-\frac{\partial\mathcal{L}_{V} (\theta^{*}(\mathbf{w}))}{\partial\theta^{*}(\mathbf{w})}\times\frac{\partial^{2} \mathcal{L}_{T}}{\partial\mathbf{w}\partial\theta^{T}}\] (A.41)
_Since we are using unweighted validation loss \(\mathcal{L}_{V}\), there is no dependence of validation loss on weights directly, i.e., \(\frac{\partial\mathcal{L}_{V}}{\partial\mathbf{w}}=0\). Hence, the weight gradient is as follows:_
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}),\mathbf{w})}{\partial\mathbf{w}}=-\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}(\mathbf{w})}\times\frac{\partial^{2}\mathcal{L}_{T}}{\partial\mathbf{w}\partial\theta^{T}}\] (A.42)
_Since we are using one-step gradient approximation, we have \(\theta^{*}(\mathbf{w})=\theta-\alpha\frac{\partial\mathcal{L}_{T}(\mathbf{w},\theta)} {\partial\theta}\) where \(\alpha\) is the model parameters learning rate._
_The weight update step is as follows:_
\[\mathbf{w}^{*}=\mathbf{w}+\beta\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{ \partial\theta^{*}(\mathbf{w})}\times\frac{\partial^{2}\mathcal{L}_{T}}{\partial \mathbf{w}\partial\theta^{T}}\] (A.43)
_where \(\beta\) is the weight learning rate._
**Case 2**.: _Meta-approximation Approach In Meta-approximation approach, we have \(\theta^{*}(\mathbf{w})=\theta-\alpha\frac{\partial\mathcal{L}_{T}(\mathbf{w},\theta)} {\partial\theta}\) where \(\alpha\) is the model parameters learning rate._
_Using the value of \(\theta^{*}\), the gradient of validation loss with weight hyperparameters is as follows:_
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\mathbf{w}} =\frac{\partial\mathcal{L}_{V}(\theta-\alpha\frac{\partial\mathcal{L}_{T}(\mathbf{w},\theta)}{\partial\theta})}{\partial\mathbf{w}}\] (A.44) \[=-\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}(\mathbf{w})}\times\alpha\frac{\partial^{2}\mathcal{L}_{T}(\mathbf{w},\theta)}{\partial\theta\partial\mathbf{w}^{T}}\] (A.45)
_Assuming \(\alpha=1\), we have:_
\[\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\mathbf{w}}=-\frac{ \partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{\partial\theta^{*}(\mathbf{w})} \times\frac{\partial^{2}\mathcal{L}_{T}(\mathbf{w},\theta)}{\partial\theta \partial\mathbf{w}^{T}}\] (A.46)
_Hence, the weight update step is as follows:_
\[\mathbf{w}^{*}=\mathbf{w}+\beta\frac{\partial\mathcal{L}_{V}(\theta^{*}(\mathbf{w}))}{ \partial\theta^{*}(\mathbf{w})}\times\frac{\partial^{2}\mathcal{L}_{T}}{\partial \mathbf{w}\partial\theta^{T}}\] (A.47)
_where \(\beta\) is the weight learning rate_
As shown in the two cases above, we arrive at the same weight update. This means that using the meta-approximation with \(J=1\) inner step is equivalent to using an identity matrix as the Hessian inverse of the training loss with respect to \(\theta\) (i.e., \(P=0\) for the implicit differentiation approach).
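A minimal PyTorch sketch of the shared one-step update (Eqs. (A.43)/(A.47)) is given below. It assumes PyTorch 2.0 or later for `torch.func.functional_call`; `model`, `unsup_loss` (the per-example unlabeled loss \(r(f(\mathbf{x}_{j},\theta))\)), and the three mini-batches are placeholders, so this is illustrative rather than the exact implementation used in Chapter 3:

```python
import torch
import torch.nn.functional as F

def meta_weight_step(model, w, labeled, unlabeled, val, unsup_loss,
                     alpha=0.1, beta=0.01):
    (x_l, y_l), x_u, (x_v, y_v) = labeled, unlabeled, val
    w = w.detach().requires_grad_(True)        # one weight per unlabeled example
    params = list(model.parameters())
    names = [n for n, _ in model.named_parameters()]

    # Inner step: theta*(w) = theta - alpha * d/dtheta [ L_L + sum_j w_j r_j ]
    train_loss = F.cross_entropy(model(x_l), y_l) + (w * unsup_loss(model, x_u)).sum()
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    theta_star = dict(zip(names, [p - alpha * g for p, g in zip(params, grads)]))

    # Outer step: differentiate the validation loss through theta*(w)
    val_loss = F.cross_entropy(
        torch.func.functional_call(model, theta_star, (x_v,)), y_v)
    w_grad, = torch.autograd.grad(val_loss, w)
    return (w - beta * w_grad).detach()        # the update in Eq. (A.47)
```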
### Proof of Proposition 3
Proof.: (1). The mini-batch mean of \(\mathcal{I}\),
\[\mu_{\mathcal{I}}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}_{i}=\mu_{I}.\] (A.48)
The mixed mini-batch mean of \({\cal IO}\),
\[\mu_{\cal IO} = \frac{1}{2m}(\sum_{i=1}^{m}{\bf x}_{i}+\sum_{i=1}^{m}\hat{\bf x}_{i})\] (A.49) \[= \frac{1}{2}\mu_{\cal I}+\frac{1}{2}\mu_{\cal O}.\]
Then we have,
\[\|\mu_{\cal IO}-\mu_{\cal I}\|_{2} = \|\frac{1}{2}\mu_{\cal I}+\frac{1}{2}\mu_{\cal O}-\mu_{\cal I}\|_{2}\] (A.50) \[= \frac{1}{2}\|\mu_{\cal O}-\mu_{\cal I}\|_{2}>\frac{L}{2}\gg 0.\]
The mini-batch variance of \({\cal I}\),
\[\sigma_{\cal I}^{2}=\frac{1}{m}\sum_{i=1}^{m}({\bf x}_{i}-\mu_{\cal I})^{2}.\] (A.51)
The traditional batch normalizing transform based on mini-batch \({\cal I}\) for \({\bf x}_{i}\)
\[BN_{\cal I}({\bf x}_{i})=\gamma\frac{{\bf x}_{i}-\mu_{\cal I}}{\sqrt{\sigma_{ \cal I}^{2}+\epsilon}}+\beta.\] (A.52)
The mini-batch variance of \({\cal IO}\),
\[\sigma_{\cal IO}^{2} = \frac{1}{2m}(\sum_{i=1}^{m}{\bf x}_{i}^{2}+\sum_{i=1}^{m}\hat{\bf x }_{i}^{2})-\mu_{\cal IO}^{2}\] (A.53) \[= \frac{1}{2}\sigma_{\cal I}^{2}+\frac{1}{2}\sigma_{\cal O}^{2}+ \frac{1}{4}(\mu_{\cal O}-\mu_{\cal I})^{2}\] \[\approx \frac{1}{4}(\mu_{\cal O}-\mu_{\cal I})^{2},\]
and the traditional batch normalizing transform based on mini-batch \({\cal IO}\) for \({\bf x}_{i}\),
\[BN_{\cal IO}({\bf x}_{i}) = \gamma\frac{{\bf x}_{i}-\mu_{\cal IO}}{\sqrt{\sigma_{\cal IO}^{2}+\epsilon}}+\beta\] (A.54) \[= \gamma\Big{(}\frac{{\bf x}_{i}-\mu_{\cal O}}{2\sqrt{\sigma_{\cal IO}^{2}+\epsilon}}+\frac{{\bf x}_{i}-\mu_{\cal I}}{2\sqrt{\sigma_{\cal IO}^{2}+\epsilon}}\Big{)}+\beta\] \[\approx \gamma\Big{(}\frac{{\bf x}_{i}-\mu_{\cal O}}{\|\mu_{\cal O}-\mu_{\cal I}\|_{2}}+\frac{{\bf x}_{i}-\mu_{\cal I}}{\|\mu_{\cal O}-\mu_{\cal I}\|_{2}}\Big{)}+\beta\] \[\approx \gamma\frac{{\bf x}_{i}-\mu_{\cal O}}{\|\mu_{\cal O}-\mu_{\cal I}\|_{2}}+\beta.\]
The approximations in Eq. (A.53) and Eq. (A.54) hold when \(\sigma_{\mathcal{I}}^{2}\) and \(\sigma_{\mathcal{O}}^{2}\) have the same magnitude levels as \(\mu_{I}\), i.e., \(\|\mu_{\mathcal{O}}-\sigma_{\mathcal{I}}^{2}\|_{2}>L\) and \(\|\mu_{\mathcal{O}}-\sigma_{\mathcal{O}}^{2}\|_{2}>L\). Comparing Eq. (A.52) with Eq. (A.54), we prove that \(BN_{\mathcal{I}}(\mathbf{x}_{i})\neq BN_{\mathcal{IO}}(\mathbf{x}_{i})\).
(2) The weighted mini-batch mean of \(\mathcal{IO}\),
\[\mu_{\mathcal{IO}}^{\mathbf{w}} = \frac{\sum_{i=1}^{m}w_{\mathcal{I}}^{i}\mathbf{x}_{i}+\sum_{i=1}^ {m}w_{\mathcal{O}}^{i}\hat{\mathbf{x}}_{i}}{\sum_{i=1}^{m}w_{\mathcal{I}}^{i}+ \sum_{i=1}^{m}w_{\mathcal{O}}^{i}}\] \[= \frac{\sum_{i=1}^{m}1\cdot\mathbf{x}_{i}+\sum_{i=1}^{m}0\cdot \hat{\mathbf{x}}_{i}}{\sum_{i=1}^{m}1+\sum_{i=1}^{m}0}\] \[= \mu_{I},\]
which proves that \(\mu_{\mathcal{I}}=\mu_{\mathcal{IO}}^{\mathbf{w}}\). The weighted mini-batch variance of \(\mathcal{IO}\),
\[\sigma_{\mathcal{IO}}^{\mathbf{w}}\ {}^{2} = \frac{\sum_{i=1}^{m}w_{\mathcal{I}}^{i}(\mathbf{x}_{i}-\mu_{ \mathcal{IO}}^{\mathbf{w}})^{2}}{\sum_{i=1}^{m}w_{\mathcal{I}}^{i}+\sum_{i=1} ^{m}w_{\mathcal{O}}^{i}}+\frac{\sum_{i=1}^{m}w_{\mathcal{O}}^{i}(\hat{ \mathbf{x}}_{i}-\mu_{\mathcal{IO}}^{\mathbf{w}})^{2}}{\sum_{i=1}^{m}w_{ \mathcal{I}}^{i}+\sum_{i=1}^{m}w_{\mathcal{O}}^{i}}\] \[= \frac{\sum_{i=1}^{m}1\cdot(\mathbf{x}_{i}-\mu_{\mathcal{IO}})^{2 }+\mathbf{0}}{\sum_{i=1}^{m}1+\sum_{i=1}^{m}0}\] \[= \frac{\sum_{i=1}^{m}1\cdot(\mathbf{x}_{i}-\mu_{\mathcal{IO}})^{2 }}{m}\] \[= \sigma_{\mathcal{I}}^{2}.\]
The weighted batch normalizing transform based on mini-batch \(\mathcal{IO}\) for \(\mathbf{x}_{i}\),
\[WBN_{\mathcal{IO}}(\mathbf{x}_{i},\mathbf{w})=\gamma\frac{\mathbf{x}_{i}-\mu _{\mathcal{IO}}^{\mathbf{w}}}{\sqrt{\sigma_{\mathcal{IO}}^{\mathbf{w}}\ {}^{2}+\epsilon}}+\beta=\gamma\frac{\mathbf{x}_{i}-\mu_{\mathcal{I}}}{\sqrt{ \sigma_{\mathcal{I}}^{2}+\epsilon}}+\beta,\] (A.55)
which proves that \(BN_{\mathcal{I}}(\mathbf{x}_{i})=WBN_{\mathcal{IO}}(\mathbf{x}_{i},\mathbf{w})\).
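Proposition 3 is easy to visualize numerically. The NumPy sketch below is a 1-D toy with \(\gamma=1\), \(\beta=0\) (all values illustrative): standard BN statistics are corrupted by faraway OODs in the mixed batch, while the weighted statistics with \(w=1\) on ID and \(w=0\) on OOD recover the ID-only transform:

```python
import numpy as np

rng = np.random.default_rng(0)
x_id  = rng.normal(0.0, 1.0, size=128)     # in-distribution mini-batch I
x_ood = rng.normal(50.0, 1.0, size=128)    # faraway OODs: ||mu_O - mu_I|| >> 0

def bn(x, mu, var):
    return (x - mu) / np.sqrt(var + 1e-5)  # gamma = 1, beta = 0

def weighted_stats(x, w):
    mu = np.sum(w * x) / np.sum(w)
    return mu, np.sum(w * (x - mu) ** 2) / np.sum(w)

mixed = np.concatenate([x_id, x_ood])
w = np.concatenate([np.ones(128), np.zeros(128)])

bn_id     = bn(x_id, x_id.mean(), x_id.var())       # BN_I(x_i)
bn_mixed  = bn(x_id, mixed.mean(), mixed.var())     # BN_IO(x_i): badly scaled
wbn_mixed = bn(x_id, *weighted_stats(mixed, w))     # WBN_IO(x_i, w)

print(np.abs(bn_id - bn_mixed).max())    # large: BN_I != BN_IO
print(np.abs(bn_id - wbn_mixed).max())   # ~0:    BN_I == WBN_IO
```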
2310.00873 | Deep Neural Networks Tend To Extrapolate Predictably | Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs. Our work reassesses this assumption for neural networks with high-dimensional inputs. Rather than extrapolating in arbitrary ways, we observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD. Moreover, we find that this value often closely approximates the optimal constant solution (OCS), i.e., the prediction that minimizes the average loss over the training data without observing the input. We present results showing this phenomenon across 8 datasets with different distributional shifts (including CIFAR10-C and ImageNet-R, S), different loss functions (cross entropy, MSE, and Gaussian NLL), and different architectures (CNNs and transformers). Furthermore, we present an explanation for this behavior, which we first validate empirically and then study theoretically in a simplified setting involving deep homogeneous networks with ReLU activations. Finally, we show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs. | Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine | 2023-10-02T03:25:32Z | http://arxiv.org/abs/2310.00873v2

# Deep Neural Networks Tend To Extrapolate Predictably
###### Abstract
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs. Our work reassesses this assumption for neural networks with high-dimensional inputs. Rather than extrapolating in arbitrary ways, we observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD. Moreover, we find that this value often closely approximates the optimal constant solution (OCS), i.e., the prediction that minimizes the average loss over the training data without observing the input. We present results showing this phenomenon across 8 datasets with different distributional shifts (including CIFAR10-C and ImageNet-R, S), different loss functions (cross entropy, MSE, and Gaussian NLL), and different architectures (CNNs and transformers). Furthermore, we present an explanation for this behavior, which we first validate empirically and then study theoretically in a simplified setting involving deep homogeneous networks with ReLU activations. Finally, we show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
+
Footnote †: Code: [https://github.com/katiekang1998/cautious_extrapolation](https://github.com/katiekang1998/cautious_extrapolation)
## 1 Introduction
The prevailing belief in machine learning posits that deep neural networks behave erratically when presented with out-of-distribution (OOD) inputs, often yielding predictions that are not only incorrect, but incorrect with _high confidence_[19, 37]. However, there is some evidence which seemingly contradicts this conventional wisdom - for example, Hendrycks and Gimpel [24] show that the softmax probabilities outputted by neural network classifiers actually tend to be _less confident_ on OOD inputs, making them surprisingly effective OOD detectors. In our work, we find that this softmax behavior may be reflective of a more general pattern in the way neural networks extrapolate: as inputs diverge further from the training distribution, a neural network's predictions often converge towards a _fixed_ constant value. Moreover, this constant value often approximates the best prediction the network can produce without observing any inputs, which we refer to as the optimal constant solution (OCS). We call this the "reversion to the OCS" hypothesis:
_Neural network predictions on high-dimensional OOD inputs tend to revert towards the optimal constant solution_.
In classification, the OCS corresponds to the marginal distribution of the training labels, typically a high-entropy distribution. Therefore, our hypothesis posits that classifier outputs should become higher-entropy as the input distribution becomes more OOD, which is consistent with the findings in Hendrycks and Gimpel [24]. Beyond classification, to the best of our knowledge, we are the first to present and provide evidence for the "reversion to the OCS" hypothesis in its full generality. Our experiments show that the amount of distributional shift correlates strongly with the distance between model outputs and the OCS across 8 datasets, including both vision and NLP domains, 3 loss functions, and both CNNs and transformers.
Having made this observation, we set out to understand why neural networks have a tendency to behave this way. Our empirical analysis reveals that the feature representations corresponding to OOD inputs tend to have smaller norms than those of in-distribution inputs, leading to less signal being propagated from the input. As a result, neural network outputs from OOD inputs tend to be dominated by the input-independent parts of the network (e.g., bias vectors at each layer), which we observe to often map closely to the OCS. We also theoretically analyze the extrapolation behavior of deep homogeneous networks with ReLU activations, and derive evidence which supports this mechanism in the simplified setting.
Lastly, we leverage our observations to propose a simple strategy to enable risk-sensitive decision-making in the face of OOD inputs. The OCS can be viewed as a "backup default output" to which the neural network reverts when it encounters novel inputs. If we design the loss function such that the OCS aligns with the desirable cautious behavior as dictated by the decision-making problem, then the neural network model will automatically produce cautious decisions when its inputs are OOD. We describe a way to enable this alignment, and empirically demonstrate that this simple strategy can yield surprisingly good results in OOD selective classification.
In summary, our key contributions are as follows. First, we present the observation that neural networks often exhibit a predictable pattern of extrapolation towards the OCS, and empirically illustrate this phenomenon for 8 datasets with different distribution shifts, 3 loss functions, and both CNNs and transformers. Second, we provide both empirical and theoretical analyses to better understand the mechanisms that lead to this phenomenon. Finally, we make use of these insights to propose a simple strategy for enabling cautious decision-making in face of OOD inputs. Although we do not yet have a complete characterization of precisely when, and to what extent, we can rely on "reversion to the OCS" to occur, we hope our observations will prompt further investigation into this phenomenon.
## 2 Related Work
A large body of prior work has studied various properties of neural network extrapolation. One line of work focuses on the failure modes of neural networks when presented with OOD inputs, such as poor generalization and overconfidence [50, 18, 42, 5, 30]. Other works have noted that neural networks are ineffective in capturing epistemic uncertainty in their predictions [39, 32, 35, 19, 13], and that a number of techniques can manipulate neural networks to produce incorrect predictions with high confidence [48, 37, 40, 22]. However, Hendrycks et al. [25] observed that neural networks assign lower maximum softmax probabilities to OOD inputs than to in-distribution points, meaning neural networks
Figure 1: A summary of our observations. On in-distribution samples (top), neural network outputs tend to vary significantly based on input labels. In contrast, on OOD samples (bottom), we observe that model predictions tend to not only be more similar to one another, but also gravitate towards the optimal constant solution (OCS). We also observe that OOD inputs tend to map to representations with smaller magnitudes, leading to predictions largely dominated by the (constant) network biases, which may shed light on why neural networks have this tendency.
may actually exhibit less confidence on OOD inputs. Our work supports this observation, while further generalizing it to arbitrary loss functions. Other lines of research have explored the influence of architectural decisions on generalization [55, 56, 7, 53], the relationship between in-distribution and OOD performance [34, 2, 3], and the behavior of neural network representations under OOD conditions [52, 28, 41]. While our work also analyzes representations in the context of extrapolation, our focus is on understanding the mechanism behind "reversion to the OCS", which differs from the aforementioned works.
Our work also explores risk-sensitive decision-making using selective classification as a testbed. Selective classification is a well-studied problem, and various methods have been proposed to enhance selective classification performance [15, 12, 6, 38, 8, 54]. In contrast, our aim is not to develop the best possible selective classification approach, but rather to provide insights into the effective utilization of neural network predictions in OOD decision-making.
## 3 Reversion to the Optimal Constant Solution
In this work, we will focus on the widely studied covariate shift setting [17, 47]. Formally, let the training data \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\) be generated by sampling \(x_{i}\sim P_{\text{train}}(x)\) and \(y_{i}\sim P(y|x_{i})\). At test time, we query the model with inputs generated from \(P_{\text{OOD}}(x)\neq P_{\text{train}}(x)\), whose ground truth labels are generated from the same conditional distribution, \(P(y|x)\), as that in training. We will denote a neural network model as \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), where \(d\) and \(m\) are the dimensionalities of the input and output, and \(\theta\in\Theta\) represents the network weights. We will focus on settings where the input dimension \(d\) is large. The neural network weights are optimized by minimizing a loss function \(\mathcal{L}\) using gradient descent, \(\hat{\theta}=\arg\min_{\theta\in\Theta}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(f_{\theta}(x_{i}),y_{i})\).
Figure 2: Neural network predictions from training with cross entropy and Gaussian NLL on MNIST (top 3 rows) and CIFAR10 (bottom 3 rows). The models were trained with 0 rotation/noise, and evaluated on increasingly OOD inputs consisting of the digit 6 for MNIST, and of automobiles for CIFAR10. The blue plots represent the average model prediction over the evaluation dataset. The orange plots show the OCS associated with each model. We can see that as the distribution shift increases (going left to right), the network predictions tend towards the OCS (rightmost column).
### Main Hypothesis
In our experiments, we observed that as inputs become more OOD, neural network predictions tend to revert towards a constant prediction. This means that, assuming there is little label shift, model predictions will tend to be more similar to one another for OOD inputs than for the training distribution. Furthermore, we find that this constant prediction is often similar to the optimal constant solution (OCS), which minimizes the training loss if the network is constrained to ignore the input. The OCS can be interpreted as being the maximally cautious prediction, producing the class marginal in the case of the cross-entropy loss, and a high-variance Gaussian in the case of the Gaussian NLL. More precisely, we define the OCS as
\[f^{*}_{\text{constant}}=\operatorname*{arg\,min}_{f\in\mathbb{R}^{m}}\frac{1 }{N}\sum_{1\leq i\leq N}\mathcal{L}(f,y_{i}).\]
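For concreteness, the OCS has a simple closed form for the losses we consider: the label marginal (in softmax space) for cross entropy, and the label mean for MSE. The sketch below, assuming integer class labels and NumPy, is illustrative:

```python
import numpy as np

def ocs_cross_entropy(y, num_classes):
    # The constant distribution minimizing CE is the label marginal; any
    # logits whose softmax equals the marginal realize it.
    marginal = np.bincount(y, minlength=num_classes) / len(y)
    return np.log(marginal + 1e-12)          # one valid constant logit vector

def ocs_mse(targets):
    # The constant vector minimizing MSE is the mean of the targets.
    return np.mean(targets, axis=0)

y = np.array([0, 0, 1, 2, 2, 2])
print(np.exp(ocs_cross_entropy(y, 3)))             # [0.333 0.167 0.5]
print(ocs_mse(np.array([[1., -4.], [-4., 1.]])))   # [-1.5 -1.5]
```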
Based on our observations, we hypothesize that **as the likelihood of samples from \(P_{\text{OOD}}(x)\) under \(P_{\text{train}}(x)\) decreases, \(f_{\theta}(x)\) for \(x\sim P_{\text{OOD}}(x)\) tends to approach \(f^{*}_{\text{constant}}\).**
As an illustrative example, we trained models using either cross-entropy or (continuous-valued) Gaussian negative log-likelihood (NLL) on the MNIST and CIFAR10 datasets. The blue plots in Fig. 2 show the models' predictions as its inputs become increasingly OOD, and the orange plots visualize the OCS associated with each model. We can see that even though we trained on _different_ datasets and evaluated on _different_ kinds of distribution shifts, the neural network predictions exhibit the _same_ pattern of extrapolation: as the distribution shift increases, the network predictions move closer to the OCS. Note that while the behavior of the cross-entropy models can likewise be explained by the network simply producing lower magnitude outputs, the Gaussian NLL models' predicted variance actually increases with distribution shift, which contradicts this alternative explanation.
### Experiments
We will now provide empirical evidence for the "reversion to the OCS" hypothesis. Our experiments aim to answer the question: **As the test-time inputs become more OOD, do neural network predictions move closer to the optimal constant solution?**
Experimental setup.We trained our models on 8 different datasets, and evaluated them on both natural and synthetic distribution shifts. See Table 1 for a summary, and Appendix B.1 for a more detailed description of each dataset. Models with image inputs use ResNet [21] or VGG [45] style architectures, and models with text inputs use DistilBERT [43], a distilled version of BERT [10].
We focus on three tasks, each using a different loss functions: classification with cross entropy (CE), selective classification with mean squared error (MSE), and regression with Gaussian NLL. Datasets with discrete labels are used for classification and selective classification, and datasets with continuous labels are used regression. The cross entropy models are trained to predict the likelihood that the input belongs to each class, as is typical in classification. The MSE models are trained to predict rewards for a selective classification task. More specifically, the models output a value for each class as well as an abstain option, where the value represents the reward of selecting that option given the input. The
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Dataset** & **Label Type** & **Input Modality** & **Distribution Shift Type** \\ \hline CIFAR10 [31] / CIFAR10-C [23] & Discrete & Image & Synthetic \\ \hline ImageNet [9] / ImageNet-R(endition) [26] & Discrete & Image & Natural \\ \hline ImageNet [9] / ImageNet-Sketch [51] & Discrete & Image & Natural \\ \hline DomainBed OfficeHome [18] & Discrete & Image & Natural \\ \hline SkinLesionPixels [20] & Continuous & Image & Natural \\ \hline UTKFace [57] & Continuous & Image & Synthetic \\ \hline BREEDS living-17 [44] & Discrete & Image & Natural \\ \hline BREEDS non-living-26 [44] & Discrete & Image & Natural \\ \hline WILDS Amazon [30] & Discrete & Text & Natural \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the datasets that we train/evaluate on in our experiments.
ground truth reward is +1 for the correct class, -4 for the incorrect classes, and +0 for abstaining. We train these models by minimizing the MSE loss between the predicted and ground truth rewards. We will later use these models for decision-making in Section 5. The Gaussian NLL models predict a mean and a standard deviation, parameterizing a Gaussian distribution. They are trained to minimize the negative log likelihood of the labels under its predicted distributions.
Evaluation protocol. To answer our question, we need to quantify (1) the dissimilarity between the training data and the evaluation data, and (2) the proximity of network predictions to the OCS. To estimate the former, we trained a low-capacity model to discriminate between the training and evaluation datasets and measured the average predicted likelihood that the evaluation dataset is generated from the evaluation distribution, which we refer to as the OOD score. This score is 0.5 for indistinguishable train and evaluation data, and 1 for a perfect discriminator. To estimate the distance between the model's prediction and the OCS, we compute the KL divergence between the model's predicted distribution and the distribution parameterized by the OCS, \(\frac{1}{N}\sum_{i=1}^{N}D_{\text{KL}}(P_{\theta}(y|x_{i})||P_{f^{*}_{\text{constant}}}(y))\), for models trained with cross-entropy and Gaussian NLL. For MSE models, the distance is measured using the mean squared error, \(\frac{1}{N}\sum_{i=1}^{N}||f_{\theta}(x_{i})-f^{*}_{\text{constant}}||^{2}\). See Appendix B.3 for more details on our evaluation protocol, and Appendix B.4 for the closed form solution for the OCS for each loss.
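A minimal sketch of the OOD score computation, assuming scikit-learn's logistic regression as the low-capacity discriminator (the exact discriminator used in our experiments may differ; see Appendix B.3):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ood_score(train_feats, eval_feats):
    X = np.concatenate([train_feats, eval_feats])
    y = np.concatenate([np.zeros(len(train_feats)), np.ones(len(eval_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Average predicted likelihood that eval points come from the eval
    # distribution: 0.5 = indistinguishable, 1.0 = perfectly separable.
    return clf.predict_proba(eval_feats)[:, 1].mean()
```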
Results.In Fig. 3, we plot the OOD score (x-axis) against the distance between the network predictions and the OCS (y-axis) for both the training and OOD datasets. Our results indicate a clear trend: as the OOD score of the evaluation dataset increases, neural network predictions move closer to the OCS. Moreover, our results show that this trend holds relatively consistently across different loss functions, input modalities, network architectures, and types of distribution shifts. We also found instances where this phenomenon did not hold, such as adversarial inputs, which we discuss in greater detail in Appendix A. However, the overall prevalence of "reversion to the OCS" across different settings suggests that it may capture a general pattern in the way neural networks extrapolate.
Figure 3: Evaluating the distance between network predictions and the OCS as the input distribution becomes more OOD. Each point represents a different evaluation dataset, with the red star representing the (holdout) training distribution, and circles representing OOD datasets. The vertical line associated with each point represents the standard deviation over 5 training runs. As the OOD score of the evaluation dataset increases, there is a clear trend of the neural network predictions approaching the OCS.
## 4 Why do OOD Predictions Revert to the OCS?
In this section, we aim to provide insights into why neural networks have a tendency to revert to the OCS. We will begin with an intuitive explanation, and provide empirical and theoretical evidence in Sections 4.1 and 4.2. In our analysis, we observe that weight matrices and network representations associated with training inputs often occupy low-dimensional subspaces with high overlap. However, when the network encounters OOD inputs, we observe that their associated representations tend to have less overlap with the weight matrices compared to those from the training distribution, particularly in the later layers. As a result, OOD representations tend to diminish in magnitude as they pass through the layers of the network, causing the network's output to be primarily influenced by the accumulation of model constants (e.g. bias terms). Furthermore, both empirically and theoretically, we find that this accumulation of model constants tend to closely approximate the OCS. We posit that reversion to the OCS for OOD inputs occurs due to the combination of these two factors: that accumulated model constants in a trained network tend towards the OCS, and that OOD points yield smaller-magnitude representations in the network that become dominated by model constants.
### Empirical Analysis
We will now provide empirical evidence for the mechanism we describe above using deep neural network models trained on MNIST and CIFAR10. MNIST models use a small 4 layer network, and CIFAR10 models use a ResNet20 [21]. To more precisely describe the quantities we will be illustrating, let us rewrite the neural network as \(f(x)=g_{i+1}(\sigma(W_{i}\phi_{i}(x)+b_{i}))\), where \(\phi_{i}(x)\) is an intermediate representation at layer \(i\), \(W_{i}\) and \(b_{i}\) are the corresponding weight matrix and bias, \(\sigma\) is a nonlinearity, and \(g_{i+1}\) denotes the remaining layers of the network. Because we use a different network architecture for each domain, we will use variables to denote different intermediate layers of the network, and defer details about the specific choice of layers to Appendix C.
First, we will show that \(W_{i}\phi_{i}(x)\) tends to diminish for OOD inputs. The first column of plots in Fig. 4 shows \(\mathbb{E}_{x\sim P_{\text{OOD}}(x)}[||W_{i}\phi_{i}(x)||^{2}]/\mathbb{E}_{x \sim P_{\text{train}}(x)}[||W_{i}\phi_{i}(x)||^{2}]\) for \(P_{\text{OOD}}\) with different levels of rotation or noise. The x-axis represents different layers in the network, with the leftmost being the input and the rightmost being the output. We can see that in the later layers of the network, \(||W_{i}\phi_{i}(x)||^{2}\) consistently became smaller as inputs became more OOD (greater rotation/noise). Furthermore, the diminishing effect becomes more pronounced as the representations pass through more layers.
Figure 4: Analysis of the interaction between representations and weights as distribution shift increases. Plots in first column visualize the norm of network features for different levels of distribution shift at different layers of the network. In later layer of the network, the norm of features tends to decrease as distribution shift increases. Plots in second column show the proportion of network features which lie within the span of the following linear layer. This tends to decrease as distributional shift increases. Error bars represent the standard deviation taken over the test distribution. Plots in the third and fourth column show the accumulation of model constants as compared to the OCS for a cross entropy and a MSE model; the two closely mirror one another.
Next, we will present evidence that this decrease in representation magnitude occurs because \(\phi_{j}(x)\) for \(x\sim P_{\text{train}}(x)\) tend to lie more within the low-dimensional subspace spanned by the rows of \(W_{j}\) than \(\phi_{j}(x)\) for \(x\sim P_{\text{OOD}}(x)\). Let \(V_{\text{top}}\) denote the top (right) singular vectors of \(W_{j}\). The middle plots of Fig. 4 show the ratio of the representation's norm at layer \(j\) that is captured by projecting the representation onto \(V_{\text{top}}\) i.e., \(||\phi_{j}(x)^{\top}V_{\text{top}}V_{\text{top}}^{\top}||^{2}/||\phi_{j}(x)|| ^{2}\), as distribution shift increases. We can see that as the inputs become more OOD, the ratio goes down, suggesting that the representations lie increasingly outside the subspace spanned by the weight matrix.
Finally, we will provide evidence for the part of the mechanism which accounts for the optimality of the OCS. Previously, we established that OOD representations tend to diminish in magnitude in later layers of the network. This raises the question: what would the output of the network be if the input representation at an intermediate layer had a magnitude of 0? We call this the accumulation of model constants, i.e. \(g_{k+1}(\sigma(b_{k}))\). In the third and fourth columns of Fig. 4, we visualize the accumulation of model constants at one of the final layers \(k\) of the networks for both a cross entropy and a MSE model (details in Sec. 3.2), along with the OCS for each model. We can see that the accumulation of model constants closely approximates the OCS in each case.
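The three diagnostics in Fig. 4 can be computed directly from a trained network's intermediate quantities. The sketch below assumes `W` (a layer's weight matrix), `phi` (a batch of representations at that layer), and `g_rest`/`b` (the remaining layers and the bias at the chosen layer) have already been extracted; all names are our placeholders:

```python
import torch

@torch.no_grad()
def norm_ratio(W, phi_ood, phi_train):
    # E_OOD[||W phi(x)||^2] / E_train[||W phi(x)||^2]  (Fig. 4, first column)
    num = (phi_ood @ W.T).pow(2).sum(dim=-1).mean()
    den = (phi_train @ W.T).pow(2).sum(dim=-1).mean()
    return (num / den).item()

@torch.no_grad()
def subspace_overlap(W, phi, k):
    # ||phi^T V_top V_top^T||^2 / ||phi||^2 with V_top the top-k right
    # singular vectors of W  (Fig. 4, second column).
    _, _, Vh = torch.linalg.svd(W)
    V_top = Vh[:k].T                                  # (d, k), orthonormal
    proj = phi @ V_top                                # (B, k)
    return (proj.pow(2).sum(-1) / phi.pow(2).sum(-1)).mean().item()

@torch.no_grad()
def accumulated_constants(g_rest, b, act=torch.relu):
    # Network output when the layer-k input representation is zeroed:
    # g_{k+1}(sigma(b_k))  (Fig. 4, third and fourth columns).
    return g_rest(act(b).unsqueeze(0))
```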
### Theoretical Analysis
We will now explicate our empirical findings more formally by analyzing solutions of gradient flow (gradient descent with infinitesimally small step size) on deep homogeneous neural networks with ReLU activations. We adopt this setting due to its theoretical convenience in reasoning about solutions at convergence of gradient descent [33, 14, 27], and its relative similarity to deep neural networks used in practice [36, 11].
**Setup:** We consider a class of homogeneous neural networks \(\mathcal{F}:=\{f(W;x):W\in\mathcal{W}\}\), with \(L\) layers and ReLU activation, taking the functional form \(f(W;x)=W_{L}\sigma(W_{L-1}\ldots\sigma(W_{2}\sigma(W_{1}x))\ldots)\), where \(W_{i}\in\mathbb{R}^{m\times m},\forall i\in\{2,\ldots,L-1\}\), \(W_{1}\in\mathbb{R}^{m\times d}\) and \(W_{L}\in\mathbb{R}^{1\times m}\). Our focus is on a binary classification problem where we consider two joint distributions \(P_{\text{train}},P_{\text{OOD}}\) over inputs and labels: \(\mathcal{X}\times\mathcal{Y}\), where inputs are from \(\mathcal{X}:=\{x\in\mathbb{R}^{d}:\|x\|_{2}\leq 1\}\), and labels are in \(\mathcal{Y}:=\{-1,+1\}\). We consider gradient descent with a small learning rate on the objective: \(L(W;\mathcal{D}):=\sum_{(x,y)\in\mathcal{D}}\ell(f(W;x),y)\) where \(\ell(f(W;x),y)=\exp\left(-yf(W;x)\right)\) is the exponential loss and \(\mathcal{D}\) is an IID sampled dataset of size \(N\) from \(P_{\text{train}}\). For more details on the setup, background on homogeneous networks, and full proofs for all results in this section, please see Appendix D.
We will begin by providing a lower bound on the expected magnitude of intermediate layer features corresponding to inputs from the training distribution:
**Proposition 4.1** (\(P_{\text{train}}\) **observes high norm features)**: _When \(f(\hat{W};x)\) fits \(\mathcal{D}\), i.e., \(y_{i}f(\hat{W};x_{i})\geq\gamma\), \(\forall i\in[N]\), then w.h.p \(1-\delta\) over \(\mathcal{D}\), layer \(j\) representations \(f_{j}(\hat{W};x)\) satisfy \(\mathbb{E}_{P_{\text{train}}}[\|f_{j}(\hat{W};x)\|_{2}]\geq(1/C_{0})(\gamma- \tilde{\mathcal{O}}(\sqrt{\log(1/\delta)/N}+C_{1}\log m/N\gamma))\), if \(\exists\) constants \(C_{0},C_{1}\) s.t. \(\|\hat{W}_{j}\|_{2}\leq C_{0}^{1/L}\), \(C_{1}\geq C_{0}^{3L/2}\)._
Here, we can see that if the trained network perfectly fits the training data (\(y_{i}f(\hat{W};x_{i})\)\(\geq\)\(\gamma\), \(\forall i\in[N]\)), and the training data size \(N\) is sufficiently large, then the expected \(\ell_{2}\) norm of layer \(j\) activations \(f_{j}(\hat{W};x)\) on \(P_{\text{train}}\) is large and scales at least linearly with \(\gamma\).
Next, we will analyze the size of network outputs corresponding to points which lie outside of the training distribution. Our analysis builds on prior results for gradient flow on deep homogeneous nets with ReLU activations which show that the gradient flow is biased towards the solution (KKT point) of a specific optimization problem: minimizing the weight norms while achieving sufficiently large margin on each training point [49, 1, 33]. Based on this, it is easy to show that the solution for this constrained optimization problem is given by a neural network with low rank matrices in each layer for sufficiently deep and wide networks. Furthermore, the low rank nature of these solutions is exacerbated by increasing depth and width, where the network approaches an almost rank one solution for each layer. If test samples deviate from this low rank space of weights in any layer, the dot products of the weights and features will collapse in the subsequent layer, and this effect carries over to the final layer,
which will output features with very small magnitude. Using this insight, we present an upper bound on the magnitude of the final layer features corresponding to OOD inputs:
**Theorem 4.1** (Feature norms can drop easily on \(P_{\textsc{OOD}}\)): _If \(\exists\) a shallow network \(f^{\prime}(W;x)\) with \(L^{\prime}\) layers and \(m^{\prime}\) neurons satisfying conditions in Proposition 4.1 (\(\gamma{=}1\)), then optimizing the training objective with gradient flow over a class of deeper and wider homogeneous network \(\mathcal{F}\) with \(L>L^{\prime},m>m^{\prime}\) would converge directionally to a solution \(f(\hat{W};x)\), for which the following is true: \(\exists\) a set of rank \(1\) projection matrices \(\{A_{i}\}_{i=1}^{L}\), such that if representations for any layer \(j\) satisfy \(\mathbb{E}_{P_{\textsc{OOD}}}\|A_{j}f_{j}(\hat{W};x)\|_{2}\leq\epsilon\), then \(\exists C_{2}\) for which \(\mathbb{E}_{P_{\textsc{OOD}}}[\|f(\hat{W};x)\|]\lesssim C_{0}(\epsilon+C_{2}^{ -1/L}\sqrt{L+1/L})\)._
This theorem tells us that for any layer \(j\), there exists only a narrow rank one space \(A_{j}\) in which OOD representations may lie, in order for their corresponding final layer outputs to remain significant in norm. Because neural networks are not optimized on OOD inputs, we hypothesize that the features corresponding to OOD inputs tend to lie outside this narrow space, leading to a collapse in last layer magnitudes for OOD inputs in deep networks. Indeed, this result is consistent with our empirical findings in the first and second columns of Fig. 4, where we observed that OOD features tend to align less with weight matrices, resulting in a drop in OOD feature norms.
To study the accumulation of model constants, we now analyze a slightly modified class of functions \(\tilde{\mathcal{F}}=\{f(W;\cdot)+b:b\in\mathbb{R},f(W;\cdot)\in\mathcal{F}\}\), which consists of deep homogeneous networks with a bias term in the final layer. In Proposition 4.2, we show that there exists a set of margin points (analogous to support vectors in the linear setting) which solely determines the model's bias \(\hat{b}\).
**Proposition 4.2** (Analyzing network bias): _If gradient flow on \(\tilde{\mathcal{F}}\) converges directionally to \(\hat{W},\hat{b}\), then \(\hat{b}\propto\sum_{k}y_{k}\) for margin points \(\{(x_{k},y_{k}):y_{k}\cdot f(\hat{W};x_{k})=\arg\min_{j\in[N]}y_{j}\cdot f( \hat{W};x_{j})\}\)._
If the label marginal of these margin points mimics that of the overall training distribution, then the learnt bias will approximate the OCS for the exponential loss. This result is consistent with our empirical findings in the third and fourth columns of Fig. 4, where we found that the accumulation of bias terms tends to approximate the OCS.
## 5 Risk-Sensitive Decision-Making
Lastly, we will explore an application of our observations to decision-making problems. In many decision-making scenarios, certain actions offer a high potential for reward when the agent chooses them correctly, but also higher penalties when chosen incorrectly, while other more cautious actions consistently provide a moderate level of reward. When utilizing a learned model for decision-making, it is desirable for the agent to select the high-risk high-reward actions when the model is likely to be accurate, while opting for more cautious actions when the model is prone to errors, such as when the inputs are OOD. It turns out that if we leverage "reversion to the OCS" appropriately, such risk-sensitive behavior can emerge automatically. If the OCS of the agent's learned model corresponds to cautious actions, then "reversion to the OCS" posits that the agent will take increasingly cautious actions as its inputs become more OOD. However, not all decision-making algorithms leverage "reversion to the OCS" by default. **Depending on the choice of loss function (and consequently the OCS), different algorithms which have similar in-distribution performance can have different OOD behavior.** In the following sections, we will use selective classification as an example of a decision-making problem to more concretely illustrate this idea.
### Example Application: Selective Classification
In selective classification, the agent can choose to classify the input or abstain from making a decision. As an example, we will consider a selective classification task using CIFAR10, where the agent receives
a reward of +1 for correctly selecting a class, a reward of -4 for an incorrect classification, and a reward of 0 for choosing to abstain.
Let us consider one approach that leverages "reversion to the OCS" and one that does not, and discuss their respective OOD behavior. An example of the former involves learning a model to predict the reward associated with taking each action, and selecting the action with the highest predicted reward. This reward model, \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{|\mathcal{A}|}\), takes as input an image, and outputs a vector the size of the action space. We train \(f_{\theta}\) using a dataset of images, actions and rewards, \(\mathcal{D}=\{(x_{i},a_{i},r_{i})\}_{i=1}^{N}\), by minimizing the MSE loss, \(\frac{1}{N}\sum_{1\leq i\leq N}(f_{\theta}(x_{i})_{a_{i}}-r_{i})^{2}\), and select actions using the policy \(\pi(x)=\arg\max_{a\in\mathcal{A}}f_{\theta}(x)_{a}\). The OCS of \(f_{\theta}\) is the average reward for each action over the training points, i.e., \((f^{*}_{\text{constant}})_{a}=\frac{\sum_{1\leq i\leq N}r_{i}\cdot\mathbb{1}[a_{i}=a]}{\sum_{1\leq j\leq N}\mathbb{1}[a_{j}=a]}\). In our example, the OCS is -3.5 for selecting each class and 0 for abstaining, so the policy corresponding to the OCS will choose to abstain. Thus, according to "reversion to the OCS", this agent should choose to abstain more and more frequently as its input becomes more OOD. We illustrate this behavior in Figure 5. In the first row, we depict the average predictions of a reward model when presented with test images of a specific class with increasing levels of noise (visualized in Figure 1). In the second row, we plot a histogram of the agent's selected actions for each input distribution. We can see that as the inputs become more OOD, the model's predictions converged towards the OCS, and consequently, the agent automatically transitioned from making high-risk, high-reward decisions of class prediction to the more cautious decision of abstaining.
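The reward-model pieces of this policy are small enough to sketch. In the snippet below, `reward_model` outputs one value per class plus a final abstain entry as described above; the helper names are ours, not part of any released code:

```python
import torch

def reward_mse_loss(reward_model, x, actions, rewards):
    # Minimize (f_theta(x_i)_{a_i} - r_i)^2 over the logged actions.
    pred = reward_model(x).gather(1, actions.unsqueeze(1)).squeeze(1)
    return (pred - rewards).pow(2).mean()

def ocs_rewards(actions, rewards, num_actions):
    # OCS for this loss: the mean training reward of each action
    # (roughly -3.5 per class and 0 for abstain in the CIFAR10 example,
    # assuming every action appears at least once in the training data).
    return torch.stack([rewards[actions == a].mean()
                        for a in range(num_actions)])

@torch.no_grad()
def policy(reward_model, x):
    # Greedy action under predicted rewards; the last index is abstain.
    return reward_model(x).argmax(dim=-1)
```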
One example of an approach which does not leverage "reversion to the OCS" is standard classification via cross entropy. The classification model takes as input an image and directly predicts a probability distribution over whether each action is the optimal action. In this case, the optimal action given an input is always its ground truth class. Because the OCS for cross entropy is the marginal distribution of labels in the training data, and the optimal action is never to abstain, the OCS for this approach corresponds to a policy that _never_ chooses to abstain. In this case, "reversion to the OCS" posits that the agent will continue to make high-risk high-reward decisions even as its inputs become more OOD. As a result, while this approach can yield high rewards on the training distribution, it is likely to yield very low rewards on OOD inputs, where the model's predictions are likely to be incorrect.
### Experiments
We will now more thoroughly compare the behavior of a reward prediction agent with a standard classification agent for selective classification on a variety of different datasets. Our experiments aim to answer the question: **How does the performance of a decision-making approach which leverages "reversion to the OCS" compare to that of an approach which does not?**
**Experimental Setup.** Using the same problem setting as the previous section, we consider a selective classification task in which the agent receives a reward of +1 for selecting the correct class, -4 for selecting an incorrect class, and 0 for abstaining from classifying.
Figure 5: Selective classification via reward prediction on CIFAR10. We evaluate on holdout datasets consisting of automobiles (class 1) with increasing levels of noise. X-axis represents the agent’s actions, where classes are indexed by numbers and abstain is represented by ”A”. We plot the average reward predicted by the model for each class (top), and the distribution of actions selected by the policy (bottom). The rightmost plots represent the OCS (top), and the actions selected by an OCS policy (bottom). As distribution shift increased, the model predictions approached the OCS, and the policy automatically selected the abstain action more frequently.
We experiment with 4 datasets: CIFAR10, DomainBed OfficeHome, BREEDS living-17 and non-living-26. We compare the performance of the reward prediction and standard classification approaches described in the previous section, as well as a third oracle approach that is optimally risk-sensitive, thereby providing an upper bound on the agent's achievable reward. To obtain the oracle policy, we train a classifier on the training dataset to predict the likelihood of each class, and then calibrate the predictions with temperature scaling on the OOD evaluation dataset. We then use the reward function to calculate the theoretically optimal threshold on the classifier's maximum predicted likelihood, below which the abstain option is selected. Note that the oracle policy has access to the reward function and the OOD evaluation dataset for calibration, which the other two approaches do not have access to.
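For this reward structure, the optimal threshold has a simple closed form, which we spell out for concreteness: if the calibrated classifier assigns probability \(p\) to its most likely class, then the expected reward of classifying is

\[p\cdot(+1)+(1-p)\cdot(-4)=5p-4,\]

which exceeds the abstain reward of 0 only when \(p>4/5\); the oracle therefore abstains whenever the maximum calibrated likelihood falls below 0.8.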
**Results.** In Fig. 6, we plot the frequency with which the abstain action is selected for each approach. As distribution shift increased, both the reward prediction and oracle approaches selected the abstain action more frequently, whereas the standard classification approach never selected this option. This discrepancy arises because the OCS of the reward prediction approach aligns with the abstain action, whereas the OCS of standard classification does not. In Fig. 7, we plot the average reward received by each approach. Although the performance of all three approaches is relatively similar on the training distribution, the reward prediction policy increasingly outperformed the classification policy as distribution shift increased. Furthermore, the gaps between the rewards yielded by the reward prediction and classification policies are substantial compared to the gaps between the reward prediction and the oracle policies, suggesting that the former difference in performance is nontrivial. Note that the goal of our experiments was not to demonstrate that our approach is the best possible method for selective classification (in fact, our method is likely not better than SOTA approaches), but rather to highlight how the OCS associated with an agent's learned model can influence its OOD decision-making behavior. To this end, this result shows that appropriately leveraging "reversion to the OCS" can substantially improve an agent's performance on OOD inputs.
Figure 6: Ratio of abstain action to total actions; error bars represent standard deviation over 5 random seeds; (t) denotes the training distribution. While the oracle and reward prediction approaches selected the abstain action more frequently as inputs became more OOD, the standard classification approach almost never selected abstain.
Figure 7: Reward obtained by each approach. While all three approaches performed similarly on the training distribution, reward prediction increasingly outperformed standard classification as inputs became more OOD.
## 6 Conclusion
We presented the observation that neural network predictions for OOD inputs tend to converge towards a specific constant, which often corresponds to the optimal input-independent prediction based on the model's loss function. We proposed a mechanism to explain this phenomenon and a simple strategy that leverages this phenomenon to enable risk-sensitive decision-making. Finally, we demonstrated the prevalence of this phenomenon and the effectiveness of our decision-making strategy across diverse datasets and different types of distributional shifts.
Our understanding of this phenomenon is not complete. Further research is needed to discern the properties of an OOD distribution which govern when, and to what extent, we can rely on "reversion to the OCS" to occur. Another exciting direction would be to extend our investigation of the effect of the OCS on decision-making to more complex multistep problems, and to study the OOD behavior of common algorithms such as imitation learning, Q-learning, and policy gradient.
As neural network models become more broadly deployed to make decisions in the "wild", we believe it is increasingly essential to ensure neural networks behave safely and robustly in the presence of OOD inputs. While our understanding of "reversion to the OCS" is still rudimentary, we believe it offers a new perspective on how we may predict and even potentially steer the behavior of neural networks on OOD inputs. We hope our observations will prompt further investigations on how we should prepare models to tackle the diversity of in-the-wild inputs they must inevitably encounter.
## 7 Acknowledgements
This work was supported by DARPA ANSR, DARPA Assured Autonomy, and the Office of Naval Research under N00014-21-1-2838. Katie Kang was supported by the NSF GRFP. We would like to thank Dibya Ghosh, Ilya Kostrikov, Karl Pertsch, Aviral Kumar, Eric Wallace, Kevin Black, Ben Eysenbach, Michael Janner, Young Geng, Colin Li, Manan Tomar, and Simon Zhai for insightful feedback and discussions.
---

# Auxiliary-Tasks Learning for Physics-Informed Neural Network-Based Partial Differential Equations Solving

*Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhou, Jie Liu. 2023-07-12. http://arxiv.org/abs/2307.06167v1*
###### Abstract
Physics-informed neural networks (PINNs) have emerged as promising surrogate models for solving partial differential equations (PDEs). Their effectiveness lies in the ability to capture solution-related features through neural networks. However, original PINNs often suffer from bottlenecks, such as low accuracy and non-convergence, limiting their applicability in complex physical contexts. To alleviate these issues, we propose auxiliary-task learning-based physics-informed neural networks (ATL-PINNs), which provide four different auxiliary-task learning modes, and investigate their performance compared with original PINNs. We also employ the gradient cosine similarity algorithm to integrate auxiliary problem loss with the primary problem loss in ATL-PINNs, which aims to enhance the effectiveness of the auxiliary-task learning modes. To the best of our knowledge, this is the first study to introduce auxiliary-task learning modes in the context of physics-informed learning. We conduct experiments on three PDE problems across different fields and scenarios. Our findings demonstrate that the proposed auxiliary-task learning modes can significantly improve solution accuracy, achieving a maximum performance boost of 96.62% (averaging 28.23%) compared to the original single-task PINNs. The code and dataset are open source at [https://github.com/junjun-yan/ATL-PINN](https://github.com/junjun-yan/ATL-PINN).
**Keywords:** Partial differential equations; Physics-informed neural networks; Surrogate model; Auxiliary-task learning
## 1 Introduction
Partial differential equations (PDEs) play a crucial role in describing numerous essential problems in physics and engineering. Although numerical solvers provide accurate approximations, iterative numerical methods can be computationally expensive when solving inverse problems, high-dimensional problems, and problems involving complex geometries [1, 2, 3, 4]. The development of artificial intelligence methods, notably deep neural networks (DNNs), has facilitated the successful application of data-driven models in such fields. DNN models effectively reduce computational time and enhance efficiency. Physics-informed neural networks (PINNs), a class of DNNs that incorporate PDEs into the loss function, transform the numerical problem into an optimization problem. In the early 1990s, research efforts were already devoted to solving PDEs using neural networks [5]. However, due to limitations in hardware and algorithms, this method did not receive significant attention. In recent years, the introduction of the PINN training architecture by Raissi et al. [6] sparked interest in physics-informed learning within this interdisciplinary field.
Compared to traditional numerical solvers, PINNs offer several advantages [7, 8, 9]. Firstly, PINNs can handle inverse problems as straightforwardly as forward problems by treating the unknown parameters as trainable variables and learning them through back-propagation during network training. Secondly, PINNs alleviate the issue of exponential growth in network weight size with increasing dimensions, effectively bypassing the curse of dimensionality commonly encountered in traditional methods. Another significant advantage of PINNs is their mesh-free nature: they can directly sample points in complex geometric domains for training without computationally expensive mesh generation. Therefore, PINNs possess the potential to overcome the challenges faced by traditional methods. Despite some studies demonstrating the convergence of PINNs in several scenarios, their accuracy remains inadequate [10; 11; 12]. PINNs are hard to train due to the statistical inefficiency of building useful representations from physics information, leaving room to enhance performance by improving the representations in the latent solution space [13].
Auxiliary-task learning, a prevalent technique in deep learning applications such as computer vision, natural language processing, and recommender systems, falls under the umbrella of multi-task learning [14; 15; 16; 17]. In this framework, the primary problem, referred to as the main task, is accompanied by similar problems known as auxiliary tasks, which can potentially enhance the performance of the main task. The fundamental concept behind auxiliary-task learning is to leverage a shared representation to learn from both tasks. When the auxiliary tasks are closely related to the main task, this approach can improve prediction accuracy and reduce data requirements [18]. In solving PDEs, tasks involving the same governing equation but different boundary or initial conditions can be treated as correlated tasks. Despite the potential heterogeneity of the final physics fields, there is still shared physical information among them. Consequently, by fully capitalizing on auxiliary tasks through a shared representation, we can potentially enhance the accuracy of the main tasks.
This paper presents the pioneering research that integrates an auxiliary-task learning mechanism into physics-informed learning. Specifically, we implement four distinct network structures for auxiliary-task learning within a physics-informed framework. Furthermore, we introduce the gradient cosine similarity algorithm to all auxiliary-task networks, ensuring that the main task consistently benefits from the auxiliary task in terms of gradients [19]. To validate our approach, we conduct a comprehensive set of experiments using three different PDEs from the PDEBench dataset [20]. The experimental results demonstrate that auxiliary-task learning can effectively enhance the DNN-based physics-informed surrogate models. Overall, by actively constructing auxiliary tasks through variations in the initial conditions of the same PDEs, the prediction accuracy is improved by an average of 28.23% and a maximum of 96.62% across the experimental datasets. These findings highlight the general utility of auxiliary-task learning as a technique for enhancing the performance of surrogate models.
The remainder of this paper is organized as follows: Section 2 provides an overview of the related work and background on PINNs, multi-task learning, and auxiliary-task learning. Section 3 presents detailed descriptions of the four auxiliary-task learning network structures, along with an introduction to the gradient cosine similarity algorithm. Section 4 presents the numerical experiments conducted to evaluate the performance of different auxiliary-task networks across various PDEs. Finally, Section 5 concludes the paper and discusses future directions for research.
## 2 Background
We first define the initial-boundary value problem of general PDEs as follows:
\[N\left[u(x,t);\lambda\right]=f(x,t),\quad x\in\Omega,\;t\in(0,T] \tag{1}\]
\[B\left[u(x,t)\right]=g(x,t),\quad x\in\partial\Omega,\;t\in(0,T] \tag{2}\]
\[u\left(x,0\right)=h(x),\quad x\in\bar{\Omega} \tag{3}\]
Here, \(\lambda\) denotes the unknown equation parameters; \(u(x,t)\) denotes the latent (hidden) solution at time \(t\) and location \(x\); \(f(x,t)\) is the equation forcing function; \(g(x,t)\) and \(h(x)\) are the boundary condition and initial condition functions, respectively; \(N[\cdot]\) and \(B[\cdot]\) are nonlinear differential operators, where \(B[\cdot]\) can encode Dirichlet, Neumann, or mixed boundary conditions. We note that \(\Omega\subset\mathbb{R}^{d}\) is an open set and \(\bar{\Omega}\) is its closure.
### Physics-informed neural networks
PINN is a surrogate model based on deep neural networks that takes temporal-spatial coordinates \((x,t)\) as input and predicts the physics field \(u\) as output. In contrast to conventional fully connected neural networks, PINNs incorporate governing equations, boundary conditions, and initial conditions into the loss function, which ensures that the network output adheres to these constraints. Consequently, PINNs not only learn from the data distribution but also conform to the laws of physics. The loss function comprises several components: (4) represents the loss function of the governing equations, (5) denotes the loss function of the boundary conditions, and (6) indicates the loss function for the initial conditions. While PINNs can train the network without supervised data, practical applications often include limited data points within the domain (sensor data) to expedite model convergence. The loss function for these data points is
represented by (7). Finally, all the loss components are weighted and summed together as (8), allowing for training through backpropagation and gradient descent.
\[L_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left|N\left[\hat{u}(x_{f}^{i},t_{f}^{i}); \lambda\right]-f(x_{f}^{i},t_{f}^{i})\right|^{2} \tag{4}\]
\[L_{b}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|B\left[\hat{u}(x_{b}^{i},t_{b}^{i}) \right]-g(x_{b}^{i},t_{b}^{i})\right|^{2} \tag{5}\]
\[L_{0}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\left|\hat{u}(x_{0}^{i},\;0)-h(x_{0}^{i })\right|^{2} \tag{6}\]
\[L_{d}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\left|\hat{u}(x_{d}^{i},\;t_{d}^{i})-u_{d}^{i}\right|^{2} \tag{7}\]
\[L=W_{f}*L_{f}+W_{b}*L_{b}+W_{0}*L_{0}+W_{d}*L_{d} \tag{8}\]
where \((x_{f}^{i},t_{f}^{i})\), \((x_{b}^{i},t_{b}^{i})\), \((x_{0}^{i},0)\), and \((x_{d}^{i},t_{d}^{i})\) represent the collocation points, boundary points, initial points, and data points (if available), respectively. The predicted solution by the network, used to approximate \(u\), is denoted as \(\hat{u}\), and \(u_{d}^{i}\) represents the supervised real solution (if available). The weights assigned to the loss components are \(W_{f}\), \(W_{b}\), \(W_{0}\), and \(W_{d}\), respectively. Figure 1 illustrates the architecture of a PINN, and deep learning frameworks (e.g., TensorFlow, PyTorch) enable convenient computation of the high-order gradients in the loss through their automatic differentiation (AD) mechanisms.
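To make the composite objective (8) concrete, the following is a minimal PyTorch sketch of a PINN training loss; the Burgers' residual from Section 4.2 stands in for the generic operator \(N[\cdot]\), and the `batch` keys are placeholder names rather than our exact implementation:

```python
import math
import torch

def pde_residual(model, x, t, nu=0.01):
    # Residual of 1D Burgers: u_t + u*u_x - (nu/pi)*u_xx, via autograd.
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - (nu / math.pi) * u_xx

def pinn_loss(model, batch, weights=(1.0, 1.0, 1.0, 1.0)):
    w_f, w_b, w_0, w_d = weights
    L_f = pde_residual(model, batch["x_f"], batch["t_f"]).pow(2).mean()  # (4)
    L_b = (model(batch["xt_b"]) - batch["g_b"]).pow(2).mean()            # (5)
    L_0 = (model(batch["xt_0"]) - batch["h_0"]).pow(2).mean()            # (6)
    L_d = (model(batch["xt_d"]) - batch["u_d"]).pow(2).mean()            # (7)
    return w_f * L_f + w_b * L_b + w_0 * L_0 + w_d * L_d                 # (8)
```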
The baseline PINN method proposed by Raissi et al. [6] achieved tremendous success in solving PDE problems in many physics and engineering fields, such as fluid mechanics, quantum mechanics, and electromagnetism [21; 22; 23]. Several articles have analyzed when PINNs can converge to the solution. For example, Shin et al. [11] first demonstrated that, under Hölder continuity assumptions, the upper bound on the generalization error is controlled by the training error and the number of training data. Meanwhile, Mishra et al. [12] established a theory for using PINNs to solve forward PDE problems. They obtained a similar error estimation result under a weaker assumption and generalized the result to the inverse problem [10]. However, in practical cases, PINNs sometimes fail to train. Wang et al. [13] analyzed this phenomenon from a neural tangent kernel (NTK) perspective. They found that the convergence rates of different parts of the loss function have a remarkable discrepancy, and they suggested utilizing the eigenvalues of the NTK matrix to adaptively set the loss weights to address this problem.
A substantial amount of prior research has been dedicated to enhancing the prediction accuracy and training effectiveness of the baseline PINNs by introducing novel network architectures and training methods. For instance, Wu et al. [24] proposed the residual-based adaptive refinement method, which strategically samples additional points in high-loss regions. Yu et al. [25] introduced g-PINN, which incorporates higher-order gradient information into the loss function. Recognizing the significance of weights in PINN training, McClenny et al. [26] developed a self-adaptive PINN
Figure 1: An example of a physics-informed neural network.
(SA-PINN) that adapts training weights for each data point using a soft multiplicative attention mask mechanism similar to those used in computer vision. Differing from SA-PINN, competitive physics-informed networks, introduced by Zeng et al. [27], employ a discriminator that is rewarded for predicting the errors made by the PINN. Several papers have explored the domain decomposition approach [28; 29]. As PINN training aligns with semi-supervised learning scenarios, Yan et al. [30] incorporated a self-training mechanism into PINN training, utilizing the residual of the physics equation as an index for generating pseudo-labels. In addition, PINNs have been successfully applied to many practical problems, including fluid mechanics, mechanics of materials, power systems, and biomedicine [31; 32; 33]. While the above studies cover a broad spectrum of research on physics-informed learning, few papers have explored the application of auxiliary-task learning methods to PINNs.
### Auxiliary-tasks learning
Multi-task learning (MTL) is a machine learning approach that can enhance learning performance and efficacy by utilizing a shared representation for multiple tasks. Fine-tuning is a specific instance of multi-task learning, as it leverages different tasks sequentially. In contrast to learning each task individually, multi-task learning can improve overall performance. In recent years, multi-task learning has emerged as a powerful approach for enhancing model training performance and effectiveness, finding successful applications in various domains of deep learning, including computer vision, natural language processing, and recommendation systems [34; 14; 35; 36].
Baxter et al. [37] suggest that MTL improves generalization performance by incorporating domain knowledge learned from the supervised signal of relevant tasks. When a hypothesis space performs well on a sufficient number of training tasks, it is more likely to perform well on new tasks within the same environment, thus facilitating generalization. Furthermore, MTL enables the model to leverage knowledge from other tasks, allowing for learning features that a single task alone cannot capture [38]. Ruder et al. [39] provides a biological interpretation of this phenomenon, drawing parallels to how infants learn to recognize faces and utilize this knowledge to recognize other objects later. The authors argue that MTL better captures the learning process observed in human intelligence, as integrating knowledge across domains is a fundamental aspect of human cognition.
Multi-task learning approaches can be divided into two classes based on the sharing mode of hidden-layer parameters: hard sharing and soft sharing. The hard sharing mechanism is the most commonly used approach and dates back to [40]. Generally, it shares all hidden layers across all tasks while leaving the output layers task-specific. This mechanism reduces the risk of overfitting. However, hard sharing is sensitive to the similarity between tasks, potentially leading to interference among dissimilar tasks. In soft sharing, each task has its own parameters, and the similarity of parameters is encouraged through regularization techniques. For instance, [41] uses L2 distance regularization, while [42] employs trace norm regularization. The soft sharing mechanisms in deep neural networks draw inspiration from regularization techniques in traditional multi-task learning and can mitigate the challenges faced by hard sharing. Consequently, soft sharing has emerged as a focal point of modern research.
Auxiliary-task learning is a specific form of multi-task learning that shares a common theoretical foundation but diverges in its implementation. While traditional multi-task learning aims to enhance the performance of all tasks within a single framework, auxiliary-task learning focuses on utilizing additional tasks to improve the performance of the primary task alone. Recent studies have underscored the importance of fully leveraging auxiliary tasks to enhance the main task's performance, leading to consistent benefits [15; 16; 17; 18; 19]. However, limited research has been conducted in the context of physics-informed learning problems. In solving PDEs, one governing equation with two different initial conditions can be considered two distinct tasks. They may address two different hypothesis spaces while being connected through the shared governing equation. Consequently, it is promising for PINNs to achieve more accurate predictions by leveraging knowledge from other tasks. In this study, we explore four distinct network structures for auxiliary-task learning and evaluate their performance on three partial differential equation problems, which we describe in the subsequent sections.
## 3 Auxiliary-task learning-based physics-informed neural networks
To investigate the potential improvement of PINNs through auxiliary-task learning, we conduct experiments involving different PDEs and network architectures. In practical problems, tasks can be created by randomly altering the initial condition; in our experiments, we instead directly select diverse tasks from the PDEBench dataset, which is more convenient. The PDEBench dataset [20] is a novel resource for scientific machine learning that provides a diverse range of PDE problems with varying initial conditions, which makes it well suited to our experiments. In Section 3.1, we present the network architectures for auxiliary-task learning as implemented in this paper. In Section 3.2, we introduce the gradient cosine similarity algorithm, which optimizes the above models from a gradient perspective in physics-informed learning.
### The proposed ATL-PINN modes
We provide four auxiliary-task learning modes below, which will be compared with the original PINNs to demonstrate the potential impact of auxiliary-task learning in physics-informed learning.
**Hard-shared auxiliary-task learning-based physics-informed neural network (Hard-ATL-PINN)**. Figure 2 (a) depicts the architecture of the hard-shared network. In this approach, the main and auxiliary tasks share an expert network for feature extraction, after which each task has its own task-specific tower network that performs the final processing. The hard sharing mechanism is a widely employed strategy in multi-task learning of neural networks, known for its effectiveness in various problems.
**Soft-shared auxiliary-task learning-based physics-informed neural network (Soft-ATL-PINN)**. The network architecture depicted in Figure 2 (b) represents a customized version of a soft-shared network. It comprises shared expert networks and task-specific expert networks for each task. The underlying concept of this network design is to separate shared and specific features by training experts on different data. The network structure is flexible, which allows us to custom-define the number of shared / task-specific experts.
**Multi-gate mixture-of-experts auxiliary-task learning-based physics-informed neural network (MMoE-ATL-PINN)**. The feature extraction layer consists of multiple expert networks shared by the tasks and a single gate network unique to each task, known as the Multi-gate Mixture-of-Experts (MMoE) layer (shown in Figure 2 (c); a minimal sketch is given after these descriptions). The MMoE layer is designed to enable the model to automatically learn how to allocate experts based on the relationships among the underlying tasks. It introduces the gate network as an attention mechanism, which has been successfully applied in various backbones. In scenarios where the underlying task relationship is weak, the model can learn to activate only one expert per task, effectively assigning different experts to different tasks.
**Progressive layered extraction auxiliary-task learning-based physics-informed neural network (PLE-ATL-PINN)**. Compared to the MMoE mode, which employs only shared expert networks, PLE incorporates both shared and task-specific expert networks (shown in Figure 2 (d)). This design guides the model to learn common features through the shared experts and private features through the task-specific experts, thereby alleviating the seesaw phenomenon when the relationships between tasks are weak. The PLE method divides the model's parameter representation into private and public parts for each task, enhancing the robustness of auxiliary-task learning and mitigating negative interactions between task-specific pieces of knowledge.
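As a concrete reference for the MMoE mode above, the following is a minimal PyTorch sketch of an MMoE feature layer; the layer sizes, activation, and class name are illustrative assumptions rather than the exact configuration used in our experiments:

```python
import torch
import torch.nn as nn

class MMoELayer(nn.Module):
    # Shared experts plus one softmax gate per task (mode (c) in Figure 2).
    def __init__(self, in_dim, expert_dim, n_experts, n_tasks):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.Tanh())
             for _ in range(n_experts)])
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])

    def forward(self, x):
        # Stack expert outputs: (batch, n_experts, expert_dim).
        e = torch.stack([expert(x) for expert in self.experts], dim=1)
        outs = []
        for gate in self.gates:
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)  # (B, E, 1)
            outs.append((w * e).sum(dim=1))                   # (B, H)
        return outs  # one mixed representation per task, fed to its tower
```

Each task's gate produces a softmax mixture over the shared experts, so the layer can learn a per-task expert allocation automatically, including the degenerate one-expert-per-task assignment mentioned above.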
### Gradient cosine similarity algorithm for auxiliary-task learning
We employ gradient cosine similarity to leverage the auxiliary loss in conjunction with the main loss. The gradient cosine similarity is defined as follows:
\[\mathrm{Cosine\ Similarity}\left(\theta\right)=\frac{\nabla_{\theta}\mathcal{ L}_{main}\left(\theta\right)\cdot\nabla_{\theta}\mathcal{L}_{aux}\left( \theta\right)}{\left|\nabla_{\theta}\mathcal{L}_{main}\left(\theta\right) \right|\left|\nabla_{\theta}\mathcal{L}_{aux}\left(\theta\right)\right|} \tag{9}\]
where \(\theta\) represents the shared network parameters, \(\mathcal{L}_{main}\left(\theta\right)\) denotes the loss function of the main task, and \(\mathcal{L}_{aux}\left(\theta\right)\) is the loss function associated with the auxiliary task. The gradient cosine similarity measures the degree of correlation between the tasks, thereby approximating the extent to which the gradient descent directions align between the main task and the auxiliary tasks.
If the gradient cosine similarity between the main task and the auxiliary task is positive, the network updates the shared parameters with both gradients (they share a similar gradient direction). Conversely, if the gradient cosine similarity is negative, the network updates only with the main-task gradient and ignores the auxiliary task (adverse gradient direction). This update ensures the appropriate adjustment of the network's shared parameters, following Algorithm 1. The symbol \(\theta\) denotes the shared parameters, \(\phi_{\mathrm{main}}\) and \(\phi_{\mathrm{aux}}\) correspond to the private parameters of the main and auxiliary tasks, respectively, and \(\alpha\) represents the learning rate used in the training process. \(\mathcal{L}_{main}\) and \(\mathcal{L}_{aux}\) denote the losses of the main task and the auxiliary task, respectively. In this paper, we assess whether the cosine similarity between task gradients can serve as a reliable signal for identifying when the incorporation of an auxiliary loss benefits the main loss in physics-informed auxiliary-task learning. To this end, we compare the non-cosine version with the cosine version of each approach.
## 4 Numerical experiments
We conduct a series of experiments on three PDEs from various fields and levels of complexity. These include the one-dimensional time-space Diffusion Reaction equation, the one-dimensional time-space Burger's equation, and the two-dimensional time-space Shallow Water equations. The training data used for these experiments are downloaded
from the PDEBench dataset [20], and the equations and parameters are set to their default values as specified in that paper. In each case study, the PDEBench dataset provides 10,000 tasks, each with a different initial condition. We randomly select 100 tasks to create a subset for auxiliary-task learning, and for each task we randomly choose another task as its auxiliary task. Our deep learning framework of choice is PyTorch 1.12, and the experiments are performed on an NVIDIA Tesla P100 accelerator.
Table 1 displays the overall performance analysis. The modes are referred to as _Org_ for the original version and _Cos_ for the gradient cosine similarity version. _L2 error_ represents the average relative L2 error across the 100 tasks under study; _Boost_ indicates the average percentage improvement in prediction accuracy achieved by each mode; _Number_ quantifies the number of tasks in which the corresponding mode outperforms the PINN baseline. It is evident from the table that ATL-PINNs consistently outperform PINNs in terms of average error. Notably, over 60% of tasks across all modes and PDE problems benefit from auxiliary-task learning. The Diffusion Reaction equation exhibits the most significant improvement among the ATL-PINN modes, achieving an average performance boost of 62.58%. The Burgers' equation and the Shallow Water equations can be improved by approximately 14.30% and 17.19%, respectively. Despite showing the lowest average improvement, the Burgers' equation benefits over the widest range of tasks, with nearly 90% of problems in the training dataset exhibiting improvement. These results attest to the beneficial impact of learning in conjunction with auxiliary tasks in physics-informed learning.
Figure 2: The architecture sketch of four different auxiliary-task learning modes.
```
Require: \(\theta^{(t)},\phi_{main}^{(t)},\phi_{aux}^{(t)},\alpha^{(t)}\)
Ensure: \(\theta^{(t+1)},\phi_{main}^{(t+1)},\phi_{aux}^{(t+1)}\)
1: Compute \(\mathcal{L}_{main}\), \(\mathcal{L}_{aux}\)
2: Compute \(\nabla_{\phi_{main}}\mathcal{L}_{main}\), \(\nabla_{\theta}\mathcal{L}_{main}\), \(\nabla_{\phi_{aux}}\mathcal{L}_{aux}\), \(\nabla_{\theta}\mathcal{L}_{aux}\)
3: \(\phi_{main}^{(t+1)}\leftarrow\phi_{main}^{(t)}-\alpha^{(t)}\nabla_{\phi_{main}}\mathcal{L}_{main}\left(\theta,\phi_{main}\right)\)
4: \(\phi_{aux}^{(t+1)}\leftarrow\phi_{aux}^{(t)}-\alpha^{(t)}\nabla_{\phi_{aux}}\mathcal{L}_{aux}\left(\theta,\phi_{aux}\right)\)
5: if Cosine Similarity\(\left(\nabla_{\theta}\mathcal{L}_{main}(\theta),\nabla_{\theta}\mathcal{L}_{aux}(\theta)\right)>0\) then
6:   \(\theta^{(t+1)}\leftarrow\theta^{(t)}-\alpha^{(t)}(\nabla_{\theta}\mathcal{L}_{main}\left(\theta\right)+\nabla_{\theta}\mathcal{L}_{aux}(\theta))\)
7: else
8:   \(\theta^{(t+1)}\leftarrow\theta^{(t)}-\alpha^{(t)}\nabla_{\theta}\mathcal{L}_{main}\left(\theta\right)\)
9: end if
```
**Algorithm 1** Gradient descent based on gradient cosine similarity
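A compact PyTorch rendering of the shared-parameter update in Algorithm 1 is sketched below; `shared_params` and the optimizer wiring are placeholders, `optimizer.zero_grad()` is assumed to have been called beforehand, and the private parameters \(\phi_{main}\), \(\phi_{aux}\) are assumed to receive their own gradients as usual:

```python
import torch
import torch.nn.functional as F

def cosine_gated_step(L_main, L_aux, shared_params, optimizer):
    # Shared parameters receive the auxiliary gradient only when it is
    # positively aligned with the main-task gradient (Algorithm 1, lines 5-9).
    g_main = torch.autograd.grad(L_main, shared_params, retain_graph=True)
    g_aux = torch.autograd.grad(L_aux, shared_params, retain_graph=True)
    flat = lambda gs: torch.cat([g.reshape(-1) for g in gs])
    cos = F.cosine_similarity(flat(g_main), flat(g_aux), dim=0)
    for p, gm, ga in zip(shared_params, g_main, g_aux):
        p.grad = gm + ga if cos > 0 else gm
    # Private parameters phi_main / phi_aux are updated with their own
    # task losses as usual before this step is applied.
    optimizer.step()
```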
Regarding the gradient cosine similarity algorithm, notable improvements are observed for the Hard and Soft modes across all three case studies compared to the original versions; the cosine similarity algorithm leads to an average enhancement of nearly 5%. However, the benefits for the MMoE and PLE modes are negligible. This discrepancy may be due to the attention mechanism introduced by the gate network, which allows the models to autonomously separate task-specific information into their respective experts; the gate network therefore performs efficiently regardless of whether the cosine method is used. Notably, the Hard mode with the cosine method surpasses more complex modes such as MMoE and PLE, achieving the best results on the Shallow Water equations. In the subsequent sections, we discuss the detailed experimental results for the three equations.
### Diffusion Reaction equation
The one-dimensional time-space Diffusion Reaction equation and its corresponding initial condition are given by the following equations:
\[\partial_{t}u\left(t,x\right)-\nu\partial_{xx}u\left(t,x\right)-\rho u\left(1-u\right)=0,\;x\in\left(0,1\right),\;t\in\left(0,1\right] \tag{10}\]
\[u\left(0,x\right)=u_{0}\left(x\right),\;x\in\left(0,1\right) \tag{11}\]
This equation presents a challenging problem that combines a rapid evolution from a source term with a diffusion process, thus testing the network's capability to capture swift dynamics accurately. Here, the diffusion coefficient is represented by \(\nu=0.5\), and the mass density is denoted by \(\rho=1\). The boundary condition is periodic, and the initial condition, described as (12), consists of a superposition of sinusoidal waves:
\[u_{0}\left(x\right)=\sum_{i=1}^{N}A_{i}\sin\left(2\pi n_{i}x/L_{x}+\varphi_{i}\right) \tag{12}\]
| PDE | Metric | PINN | Hard _Org_ | Hard _Cos_ | Soft _Org_ | Soft _Cos_ | MMoE _Org_ | MMoE _Cos_ | PLE _Org_ | PLE _Cos_ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Diff-React | _L2 error_ | 9.66E-2 | 3.87E-2 | 3.72E-2 | 4.01E-2 | 3.86E-2 | 3.62E-2 | **3.62E-2** | 3.91E-2 | 3.91E-2 |
| Diff-React | _Boost_ | - | 59.93% | 61.54% | 58.46% | 60.07% | 62.52% | **62.58%** | 59.53% | 59.54% |
| Diff-React | _Number_ | - | 63/100 | 63/100 | 59/100 | 67/100 | 68/100 | **70/100** | 63/100 | 60/100 |
| Burgers' | _L2 error_ | 1.49E-1 | 1.37E-1 | 1.32E-1 | 1.33E-1 | 1.33E-1 | 1.35E-1 | 1.35E-1 | 1.28E-1 | **1.27E-1** |
| Burgers' | _Boost_ | - | 7.92% | 11.38% | 7.00% | 10.31% | 9.37% | 9.18% | 13.60% | **14.30%** |
| Burgers' | _Number_ | - | 84/100 | **90/100** | 76/100 | 86/100 | 82/100 | 81/100 | 85/100 | |
| Sha-Water | _L2 error_ | 2.73E-2 | 2.33E-2 | **2.24E-2** | 2.35E-2 | 2.28E-2 | 2.37E-2 | 2.37E-2 | 2.44E-2 | 2.45E-2 |
| Sha-Water | _Boost_ | - | 14.67% | **17.91%** | 14.62% | 16.17% | 13.13% | 13.18% | 10.39% | 10.24% |
| Sha-Water | _Number_ | - | 61/100 | 70/100 | 73/100 | **77/100** | 68/100 | 69/100 | 64/100 | 61/100 |

* _Org_ denotes the original version and _Cos_ the gradient cosine similarity version.
* _L2 error_ represents the average relative L2 error across the 100 tasks under study.
* _Boost_ indicates the average percentage improvement in prediction accuracy achieved by each mode.
* _Number_ quantifies the number of tasks in which the corresponding mode outperforms the PINN baseline.

Table 1: The comprehensive performance evaluation of PINN and ATL-PINNs.
where \(L_{x}\) represents the size of the calculation domain; \(n_{i}\), \(A_{i}\), and \(\varphi_{i}\) denote randomly sampled values. The value of \(n_{i}\) is a random integer within the range \([1,8]\), \(A_{i}\) is a uniformly chosen random float between 0 and 1, and \(\varphi_{i}\) is a randomly selected phase within the interval \((0,2\pi)\). We note that \(N=2\) in this equation. The spatiotemporal domain used for training ranges from 0 to 1 in both the spatial and temporal dimensions. We discretize it into \(N_{x}\times N_{t}=1024\times 256\) points. To enforce that the prediction satisfies the initial and boundary conditions, we randomly choose 100 initial points and 100 points on each boundary. The network learns the governing equation using a total of 10,000 sample points.
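For concreteness, the random superposition in (12) can be sampled in a few lines of NumPy; this is a minimal sketch following the sampling ranges stated above (the function name and the uniform amplitude range are our assumptions, not PDEBench's exact generator):

```python
import numpy as np

def sample_initial_condition(x, N=2, Lx=1.0, seed=None):
    """Draw u_0(x) as a superposition of N sinusoidal waves, eq. (12)."""
    rng = np.random.default_rng(seed)
    u0 = np.zeros_like(x)
    for _ in range(N):
        n = rng.integers(1, 9)               # wavenumber n_i in [1, 8]
        A = rng.uniform(0.0, 1.0)            # amplitude A_i in (0, 1)
        phi = rng.uniform(0.0, 2 * np.pi)    # phase varphi_i in (0, 2*pi)
        u0 += A * np.sin(2 * np.pi * n * x / Lx + phi)
    return u0

# Two different draws give two correlated tasks (main and auxiliary).
x = np.linspace(0.0, 1.0, 1024)
u0_main = sample_initial_condition(x, seed=0)
u0_aux = sample_initial_condition(x, seed=1)
```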
The network structure for single-task learning consists of four layers, each with 100 cells. In the auxiliary-task learning networks, we use three layers of 100 cells each for the expert networks and two layers of 100 cells each for the tower networks. All modes are trained for 30,000 iterations using the Adam optimizer with a learning rate of \(10^{-3}\) and the \(tanh\) activation function. Additionally, we implement a learning rate decay schedule for the optimizer, halving the learning rate every 10,000 iterations. It is important to note that, unless explicitly stated, the training environment and network configuration remain consistent across all modes to ensure a fair comparison of their solution prediction accuracy.
Table 2 displays the ten tasks exhibiting the most significant performance improvement on the Diffusion Reaction equation in the experimental dataset. Nearly all cases in the table achieve a performance enhancement of one order of magnitude, with a maximum boost of approximately 96.62%. Notably, the MMoE mode delivers outstanding results, accounting for half of the best-performing tasks. In Table 2, seven of the best results come from the _Cos_ version, demonstrating that the gradient cosine similarity algorithm can benefit auxiliary-task learning. Figure 3 illustrates the convergence of the loss function for different cases and the corresponding modes. We apply Gaussian smoothing to the loss curves, which enhances clarity and distinguishes between methods in a single figure. The blue lines represent the loss variation of PINN, and the other colored lines denote the loss variation of the different ATL-PINN modes. The blue lines are nearly always higher than the others, demonstrating that ATL-PINNs converge better than PINNs. In Figure 3, the MMoE and PLE modes exhibit less pronounced fluctuations than the Hard and Soft modes, owing to the attention mechanism introduced by the gate networks. However, the final performance does not follow a consistent pattern, indicating that the network architecture can influence predictions across different tasks. Overall, the auxiliary-task approach yields an average improvement of 62.58% (maximum improvement of 96.62%) on the dataset comprising various diffusion equations, showing its immense potential.
### Burgers' equation
The one-dimensional time-space Burgers' equation, along with its corresponding initial conditions, represents a mathematical model for capturing the nonlinear behavior and diffusion process in fluid dynamics. Specifically, the equation and initial conditions are defined as follows:
\[\partial_{t}u\left(t,x\right)+\partial_{x}\left(u^{2}(t,x)/2\right)=\nu/\pi\,\partial_{xx}u\left(t,x\right),\;x\in(0,1),\;t\in(0,2] \tag{13}\] \[u\left(0,x\right)=u_{0}\left(x\right),\;x\in(0,1) \tag{14}\]
where \(\nu=0.01\) represents the diffusion coefficient. The boundary condition is periodic and the initial condition is described in (12).
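The corresponding PINN residual of Eq. (13) can be computed with automatic differentiation; the sketch below assumes a network `model` that maps \((t,x)\) to \(u\), and uses the identity \(\partial_{x}(u^{2}/2)=u\,u_{x}\):

```python
import torch

def burgers_residual(model, t, x, nu=0.01):
    """Residual of Eq. (13): u_t + u*u_x - (nu/pi)*u_xx."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = model(torch.stack([t, x], dim=-1)).squeeze(-1)
    u_t, u_x = torch.autograd.grad(u.sum(), (t, x), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - (nu / torch.pi) * u_xx
```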
In this case study, the networks are trained on a spatiotemporal domain of \([0,1]\times[0,2]\), discretized into \(N_{x}\times N_{t}=1024\times 256\) points. The network structure for single-task learning consists of five layers, each with 50 cells. For the auxiliary-task learning networks, we utilize four layers with 50 cells each for the shared experts and two layers with 50 cells each for the tower networks. All other settings remain the same as in the Diffusion Reaction equation (e.g., number of points, optimizer configuration).
\begin{table}
\begin{tabular}{c|c|c c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Subtask} & \multicolumn{2}{c}{PINN} & \multicolumn{2}{c}{Hard} & \multicolumn{2}{c}{Soft} & \multicolumn{2}{c}{MMoE} & \multicolumn{2}{c}{PLE} & \multicolumn{1}{c}{Max Boost} \\ \cline{2-10} & - & _Org_ & _Cos_ & _Org_ & _Cos_ & _Org_ & _Cos_ & _Org_ & _Cos_ & \\ \hline
0 & 7.19E-1 & 2.90E-2 & 2.97E-2 & 2.45E-2 & 2.51E-2 & 2.53E-2 & **2.43E-2** & 2.90E-2 & 2.81E-2 & 96.62\% \\
1 & 6.93E-1 & 3.12E-2 & **2.80E-2** & 3.41E-2 & 3.05E-2 & 2.98E-2 & 2.98E-2 & 3.05E-2 & 2.96E-2 & 95.96\% \\
2 & 5.24E-1 & 2.50E-2 & 2.78E-2 & 2.61E-2 & 2.90E-2 & **2.31E-2** & 2.43E-2 & 3.42E-2 & 3.52E-2 & 95.59\% \\
3 & 7.45E-1 & 3.85E-2 & 4.41E-2 & 3.41E-2 & 3.91E-2 & 3.41E-2 & **3.31E-2** & 4.79E-2 & 4.70E-2 & 95.56\% \\
4 & 6.92E-1 & 5.99E-2 & 5.15E-2 & 3.22E-2 & **2.76E-2** & 4.45E-2 & 4.27E-2 & 3.22E-2 & 3.28E-2 & 95.35\% \\
5 & 6.47E-1 & 3.53E-2 & 3.12E-2 & 5.50E-2 & 4.86E-2 & **3.53E-2** & 3.57E-2 & 6.02E-2 & 6.32E-2 & 95.17\% \\
6 & 6.67E-1 & 5.99E-2 & 5.62E-2 & 4.47E-2 & **4.19E-2** & 4.47E-2 & 4.64E-2 & 6.80E-2 & 7.07E-2 & 93.72\% \\
7 & 1.67E-1 & **2.02E-2** & 2.23E-2 & 2.97E-2 & 3.28E-2 & 3.11E-2 & 3.02E-2 & 3.18E-2 & 3.06E-2 & 87.92\% \\
8 & 2.21E-1 & 3.65E-2 & **3.12E-2** & 3.81E-2 & 3.25E-2 & 3.37E-2 & 3.20E-2 & 3.94E-2 & 3.90E-2 & 85.91\% \\
9 & 1.84E-1 & 5.59E-2 & 3.82E-2 & 4.84E-2 & 3.31E-2 & 2.85E-2 & **2.70E-2** & 2.81E-2 & 2.81E-2 & 85.34\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: The ten case studies with the largest boost for the Diffusion Reaction equation.
Figure 3: The convergence of PINN and ATL-PINNs (on the log scale) on the Diffusion Reaction equation in some example case studies.
Table 3 shows the notable improvements achieved through auxiliary-task learning. While the maximum boost is not as high as that observed for the Diffusion Reaction equation, the enhancement range is significantly broader, with improvements seen in nearly ninety percent of the cases. Figure 4 presents the predictions of the various modes at three different time points. The gray lines represent the reference solution obtained from the solver, the blue lines depict the predictions of the PINN baseline, and the other lines correspond to the auxiliary-task modes. In Figure 4, both the PINN and the ATL-PINNs initially fit the solution well. However, as time progresses and the equation evolves, the predictions of the PINN gradually deviate from the reference. At \(t=0.6\), the PINN starts to depart from the reference solution, while the ATL-PINNs can still predict the result with little error at the peaks and troughs. At \(t=0.9\), the PINN fails to capture the waveform accurately, whereas the ATL-PINNs perform significantly better: although deviations are present at their peaks and troughs, the waveforms predicted by the ATL-PINNs are generally correct. In Figure 4, the PLE modes achieve the best average result, shown as purple lines, which fit the reference most precisely; Table 1 confirms this result. Overall, the ATL-PINNs yield an average improvement of 14.30% (maximum improvement of 36.24%) over ninety percent of the cases in the Burgers' equation dataset, revealing the universality of the auxiliary-task learning method for improving PINNs in PDE solving.
### Shallow Water equations
The two-dimensional time-space Shallow Water equations, denoted as equations (15) to (17), capture the dynamics of free-surface flow problems:
\[\partial_{t}h+\partial_{x}(hu)+\partial_{y}(hv) =0 \tag{15}\] \[\partial_{t}(hu)+\partial_{x}\left(u^{2}h+\frac{1}{2}g_{r}h^{2}\right) =-g_{r}h\partial_{x}b\] (16) \[\partial_{t}(hv)+\partial_{y}\left(v^{2}h+\frac{1}{2}g_{r}h^{2}\right) =-g_{r}h\partial_{y}b \tag{17}\]
where the variables \(u\) and \(v\) represent the velocities in the horizontal and vertical directions, respectively. \(h\) corresponds to the water depth, which serves as the primary prediction target in this problem. Additionally, \(g_{r}=1.0\) represents the gravitational acceleration. Derived from the general Navier-Stokes (N-S) equations, these equations find broad application in modeling various free-surface flow phenomena.
The dataset provided by PDEBench describes a 2D radial dam break scenario that captures the evolution of a circular bump within a square domain. The scenario is initialized with an elevated water height at the center. The initial condition for \(h\) is defined as follows:
\[h=\begin{cases}2.0,&\text{for}\ \sqrt{x^{2}+y^{2}}<r\\ 1.0,&\text{for}\ \sqrt{x^{2}+y^{2}}\geq r\end{cases} \tag{18}\]
where the radius \(r\) is randomly sampled from \(U(0.3,0.7)\). The spatial domain for training is \(\Omega=[-2.5,2.5]\times[-2.5,2.5]\), and the temporal domain spans \(T=[0,1]\). To discretize the dataset, we employ a resolution of \(N_{x}\times N_{y}\times N_{t}=128\times 128\times 101\).
In contrast to the one-dimensional case studies, the two-dimensional problem presents greater complexity. As a result, we employ a deeper network with an increased number of neural cells to address these equations. The network structure for single-task learning consists of six layers, each comprising 100 cells. In the auxiliary-task learning networks, we utilize five layers with 100 cells each for the expert networks and two layers with 100 cells each for the tower networks.
\begin{table}
\begin{tabular}{c|c|c c c|c c|c c|c} \hline \hline \multirow{2}{*}{Subtask} & PINN & \multicolumn{2}{c}{Hard} & \multicolumn{2}{c}{Soft} & \multicolumn{2}{c}{MMoE} & \multicolumn{2}{c|}{PLE} & \multirow{2}{*}{Max Boost} \\ \cline{2-2} \cline{5-10} & - & \(Org\) & \(Cos\) & \(Org\) & \(Cos\) & & \(Org\) & \(Cos\) & \(Org\) & \(Cos\) \\ \hline
0 & 2.04E-1 & 1.33E-1 & 1.33E-1 & 1.33E-1 & 1.33E-1 & **1.30E-1** & 1.30E-1 & 1.69E-1 & 1.69E-1 & 36.24\% \\
1 & 1.61E-1 & 1.41E-1 & 1.24E-1 & 1.18E-1 & **1.04E-1** & 1.20E-1 & 1.20E-1 & 1.35E-1 & 1.28E-1 & 35.44\% \\
2 & 2.07E-1 & 1.66E-1 & 1.53E-1 & 1.66E-1 & 1.53E-1 & 1.53E-1 & 1.45E-1 & 1.47E-1 & **1.40E-1** & 32.39\% \\
3 & 1.06E-1 & 8.90E-2 & 8.84E-2 & 7.72E-2 & 7.67E-2 & **7.25E-2** & 7.61E-2 & 8.72E-2 & 8.72E-2 & 31.60\% \\
4 & 1.51E-1 & **1.05E-1** & 1.06E-1 & 1.27E-1 & 1.29E-1 & 1.12E-1 & 1.18E-1 & 1.25E-1 & 1.31E-1 & 30.61\% \\
5 & 1.42E-1 & 1.04E-1 & **9.92E-2** & 1.26E-1 & 1.20E-1 & 1.08E-1 & 1.03E-1 & 1.03E-1 & 1.03E-1 & 30.22\% \\
6 & 1.79E-1 & 1.55E-1 & 1.48E-1 & 1.31E-1 & **1.26E-1** & 1.52E-1 & 1.60E-1 & 1.48E-1 & 1.41E-1 & 29.77\% \\
7 & 1.98E-1 & 1.55E-1 & 1.59E-1 & **1.42E-1** & 1.46E-1 & 1.50E-1 & 1.50E-1 & 2.03E-1 & 2.03E-1 & 28.12\% \\
8 & 1.88E-1 & 1.41E-1 & **1.38E-1** & 1.48E-1 & 1.44E-1 & 1.42E-1 & 1.49E-1 & 1.65E-1 & 1.73E-1 & 26.98\% \\
9 & 1.36E-1 & 1.08E-1 & 1.07E-1 & 1.17E-1 & 1.16E-1 & **1.03E-1** & 1.03E-1 & 1.16E-1 & 1.16E-1 & 24.04\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: The ten case studies with the largest boost for the Burgers’ equation.
Figure 4: The prediction of PINN and ATL-PINNs at three different times (\(t=0.3\), \(t=0.6\), and \(t=1\)) on the Burgers’ equation in some case studies.
To accommodate the increased complexity, the number of points is also increased. Specifically, we employ 1000 boundary points for each boundary, 1000 initial points, and 100,000 intra-domain sample points to train both networks. Because the number of sample points is too large to feed into the network at once, we employ a mini-batch approach, randomly selecting 20,000 sample points for each iteration. It is important to note that all boundary and initial points are sent to the network in every iteration. As for the other network configurations, such as the optimizer, we keep them the same as those employed in the Burgers' equation experiment.
Table 4 presents the top ten cases in which auxiliary-task learning yields the largest benefits for the Shallow Water equations, where the ATL-PINNs achieve a maximum boost of approximately 72%. In Table 4, the Hard mode consistently exhibits the most substantial enhancement among the different modes. Unexpectedly, the most straightforward mode achieves the best results on the most complex problem. One possible reason for this phenomenon is the strong relevance between the different tasks (only the initial water height is changed), so the Hard mode can learn the shared representation more directly, thus leading to better results. Figure 5 shows the predictions of the water height (\(h\)) in two example cases at three time points (\(t=0\), \(t=0.5\), and \(t=1\)). The Shallow Water equations pose a challenging problem for physics-informed learning due to their 2D complexity and large solution domain; hence, the PINN baseline struggles to capture the circular form of the water height in the center dam. The ATL-PINNs deliver superior results: although the water height cannot be predicted with perfect accuracy, the ATL-PINNs can portray the circular form of the water height in the center dam. All the ATL-PINN modes better capture the underlying physical processes and improve prediction accuracy by incorporating knowledge from auxiliary tasks. Overall, the auxiliary-task approach yields an average improvement of 17.91% (maximum improvement of 72.38%) on the Shallow Water equation dataset, revealing its potential for handling complex scenarios in physics problems.
## 5 Conclusion
Motivated by the remarkable success of shared common representations in auxiliary-task learning, our study incorporates these modes into neural network-based surrogate models to investigate the potential benefits that PINNs can obtain through correlated auxiliary-task learning. Specifically, we randomly select auxiliary tasks with different initial conditions in homogeneous PDEs. In this paper, we propose ATL-PINNs with four auxiliary-task learning modes and test them on three PDE datasets, each involving 100 tasks, to evaluate the performance of the auxiliary-task learning modes. We also introduce the gradient cosine similarity approach to ensure that the updates from the auxiliary task consistently benefit the main problem. The experimental results demonstrate that the auxiliary-task learning modes enhance the performance of network-based surrogate models in physics-informed learning, and that the gradient cosine similarity approach further improves their performance. However, selecting suitable auxiliary tasks and determining their optimal number for the main problem remain unexplored. In future work, we will focus on developing efficient algorithms for auxiliary-task construction and selection. We are also interested in exploring the application of auxiliary-task learning modes in more complex real-world scenarios. Although this paper represents a preliminary attempt to combine auxiliary-task learning and PINNs, we hope our research will contribute to the broader application of auxiliary-task learning-based modes in the physics-informed learning setting.
## Acknowledgments
This research work was supported by the National Key Research and Development Program of China (2021YFB0300101). The datasets and code used during the current study can be accessed on GitHub: [https://github.com/junjun-yan/ATL-PINN](https://github.com/junjun-yan/ATL-PINN). The authors declare no conflict of interest.
\begin{table}
\begin{tabular}{c|c|c c c c|c c|c c} \hline \hline \multirow{2}{*}{Subtask} & PINN & \multicolumn{2}{c}{Hard} & \multicolumn{2}{c}{Soft} & \multicolumn{2}{c}{MMoE} & \multicolumn{2}{c|}{PLE} & \multirow{2}{*}{Max Boost} \\ \cline{2-2} \cline{5-10} & - & _Org_ & _Cos_ & _Org_ & _Cos_ & _Org_ & _Cos_ & _Org_ & _Cos_ \\ \hline
0 & 2.98E-2 & 1.88E-2 & 2.40E-2 & 2.20E-2 & 2.81E-2 & **1.71E-2** & 1.80E-2 & 1.74E-2 & 1.74E-2 & 72.38\% \\
1 & 3.26E-2 & **1.66E-2** & 2.31E-2 & 2.05E-2 & 2.86E-2 & 2.14E-2 & 2.14E-2 & 2.12E-2 & 2.23E-2 & 60.77\% \\
2 & 4.82E-2 & **1.96E-2** & 2.46E-2 & 2.14E-2 & 2.69E-2 & 3.18E-2 & 3.18E-2 & 4.12E-2 & 3.91E-2 & 59.99\% \\
3 & 5.20E-2 & 3.35E-2 & 2.08E-2 & 1.52E-2 & **9.47E-2** & 3.54E-2 & 3.71E-2 & 4.18E-2 & 4.18E-2 & 59.35\% \\
4 & 3.58E-2 & 2.46E-2 & **1.72E-2** & 3.15E-2 & 2.21E-2 & 2.30E-2 & 2.18E-2 & 2.43E-2 & 2.43E-2 & 58.70\% \\
5 & 4.48E-2 & **2.33E-2** & 2.43E-2 & 2.50E-2 & 2.60E-2 & 3.28E-2 & 3.28E-2 & 3.28E-2 & 3.12E-2 & 58.08\% \\
6 & 3.45E-2 & **1.43E-2** & 1.76E-2 & 1.98E-2 & 2.44E-2 & 2.36E-2 & 2.47E-2 & 1.89E-2 & 1.98E-2 & 51.79\% \\
7 & 5.01E-2 & 3.63E-2 & **1.97E-2** & 4.61E-2 & 2.50E-2 & 3.33E-2 & 3.49E-2 & 2.13E-2 & 2.03E-2 & 49.22\% \\
8 & 3.35E-2 & **1.41E-2** & 1.57E-2 & 2.27E-2 & 2.54E-2 & 2.36E-2 & 2.36E-2 & 1.81E-2 & 1.81E-2 & 48.05\% \\
9 & 5.91E-2 & **1.63E-2** & 2.80E-2 & 1.67E-2 & 2.86E-2 & 2.63E-2 & 2.76E-2 & 2.29E-2 & 2.29E-2 & 42.52\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Details of the ten case studies with the largest boost for the Shallow Water equations.
Figure 5: The reference solution and prediction of PINN and ATL-PINNs at three different times (\(t=0.0\), \(t=0.5\), and \(t=1\)) on the Shallow Water equation in some case studies. |
2306.05021 | Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific
Tensor Decomposition | Neural Network designs are quite diverse, from VGG-style to ResNet-style, and
from Convolutional Neural Networks to Transformers. Towards the design of
efficient accelerators, many works have adopted a dataflow-based, inter-layer
pipelined architecture, with a customised hardware towards each layer,
achieving ultra high throughput and low latency. The deployment of neural
networks to such dataflow architecture accelerators is usually hindered by the
available on-chip memory as it is desirable to preload the weights of neural
networks on-chip to maximise the system performance. To address this, networks
are usually compressed before the deployment through methods such as pruning,
quantization and tensor decomposition. In this paper, a framework for mapping
CNNs onto FPGAs based on a novel tensor decomposition method called Mixed-TD is
proposed. The proposed method applies layer-specific Singular Value
Decomposition (SVD) and Canonical Polyadic Decomposition (CPD) in a mixed
manner, achieving 1.73x to 10.29x throughput per DSP to state-of-the-art CNNs.
Our work is open-sourced: https://github.com/Yu-Zhewen/Mixed-TD | Zhewen Yu, Christos-Savvas Bouganis | 2023-06-08T08:16:38Z | http://arxiv.org/abs/2306.05021v2 | # Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition
###### Abstract
Neural Network designs are quite diverse, from VGG-style to ResNet-style, and from Convolutional Neural Networks to Transformers. Towards the design of efficient accelerators, many works have adopted a dataflow-based, inter-layer pipelined architecture, with a customized hardware towards each layer, achieving ultra high throughput and low latency. The deployment of neural networks to such dataflow architecture accelerators is usually hindered by the available on-chip memory as it is desirable to preload the weights of neural networks on-chip to maximise the system performance. To address this, networks are usually compressed before the deployment through methods such as pruning, quantization and tensor decomposition. In this paper, a framework for mapping CNNs onto FPGAs based on a novel tensor decomposition method called Mixed-TD is proposed. The proposed method applies layer-specific Singular Value Decomposition (SVD) and Canonical Polyadic Decomposition (CPD) in a mixed manner, achieving 1.73\(\times\) to 10.29\(\times\) throughput per DSP to state-of-the-art CNNs. Our work is open-sourced: [https://github.com/Yu-Zhewen/Mixed-TD](https://github.com/Yu-Zhewen/Mixed-TD).
## I Introduction
The recent advances in Machine Learning (ML) research have led to the design of a large and diverse set of neural network designs. At a macroscopic level, popular designs include Convolutional Neural Networks (CNNs) and Transformers, where at a microscopic level the above structures contain layers with different properties, including number of channels, kernel size, residual connection, etc. Towards the design of neural network accelerators, many works adopted a dataflow architecture [1], customizing the computational pipeline to each layer to maximise efficiency and achieve high throughput.
A challenge for the dataflow-based accelerators is the storing of the parameters of the neural network to on-chip memory. Let's consider the AMD Alveo U250, a data-center acceleration card. Despite its high performance, this card has a limited internal SRAM capacity of only 54MB. A ResNet-50, a popular neural network architecture with 23 million parameters, requires a storage capacity of approximately 92MB in floating-point format, which exceeds the SRAM capacity. Apart from the memory capacity, the on-chip memory bandwidth also becomes a limitation when the architecture requires access to a large number of the parameters concurrently to support parallel computation for enhanced performance. Storing the parameters to off-chip memory addresses the capacity problem but penalises the performance of the system.
To reduce the memory footprint of the model, existing approaches compress the weights of a pre-trained neural network and fine-tune the compressed weights before the network is mapped to a dataflow-based accelerator. In the case of a pre-trained deep neural network, the data distribution and error tolerance vary across different parts of the network [2]. As such, it is necessary to make the compression method fine-grained and layer-specific to avoid significant accuracy degradation. In previous work on weight pruning, unstructured pruning brings a larger compression ratio with negligible accuracy degradation compared with channel-wise structured pruning [3, 4]. Similarly, in weight quantization, mixed-precision and block floating point methods are favoured over a uniform wordlength and range [5, 6, 7].
In this paper, we explore a different dimension for performing fine-grained, layer-specific, and hardware-friendly weights compression, by using Tensor Decomposition techniques. Furthermore, we propose the use of an ML-based proxy for predicting the performance of the compressed model during its mapping to an FPGA, making it possible to explore the large design space defined by the introduced Tensor decomposition schemes. Our performance evaluation shows that the proposed approach can achieve significant weight compression with negligible accuracy penalty, as well as result in designs with competitive latency and throughput to state-of-the-art approaches.
The key contributions of this paper are as follows:
* A novel weight compression method, Mixed-TD, that is based on tensor decomposition techniques. It extends current approaches by introducing for the first time a layer-specific mixture of Singular Value Decomposition (SVD) and Canonical Polyadic Decomposition (CPD).
* An efficient method for fast design space exploration through the evolutionary search and a random-forest-based throughput predictor.
* A dataflow-based accelerator design achieving low latency, high throughput, and negligible accuracy loss at the same time. In terms of the _Throughput per DSP_, we are achieving gains of 1.73\(\times\) to 10.29\(\times\) compared to existing work.

Fig. 1: An example of a dataflow architecture, where each layer of the network has its own customized hardware. Inter-layer and intra-layer pipelines are usually applied.
## II Background
### _Tensor Decomposition_
Tensor decomposition expresses one tensor as a set of elementary and simpler tensors which act on each other [8, 9]. The decomposition of a tensor \(\mathcal{A}\) can be derived through the following optimization problem:
\[\begin{split}\min_{a_{1},a_{2},\ldots,a_{M}}&\;r_{1},r_{2},\ldots,r_{M}\\ s.t.\;&||\mathcal{A}-f(a_{1},a_{2},\ldots,a_{M})||\leq\epsilon,\end{split} \tag{1}\]
where \(a_{i},i\in[1,M]\) is the set of elementary tensors and \(f\) is the approximation function such that the given \(M_{th}\)-order tensor \(\mathcal{A}\) is approximated within the desired error bound \(\epsilon\) while the ranks \(r_{i}\) of these elementary tensors are also minimised. Depending on the operations inside the function \(f\), different formats of tensor decomposition are available. SVD and CPD are two of them that have been well studied for compressing the weights of neural networks [10].
SVD targets the compression of a given \(2_{nd}\)-order tensor, which is \(M=2\) and \(\mathcal{A}\in\mathbb{R}^{d_{1}\times d_{2}}\). \(\mathcal{A}\) is decomposed into the form of
\[\mathcal{A}\approx U_{r}\Sigma_{r}V_{r}^{T}, \tag{2}\]
where the columns of \(U_{r}\) and \(V_{r}\) form orthonormal bases, and \(\Sigma_{r}\) is a diagonal matrix containing the top-\(r\) singular values in descending order. After absorbing the diagonal matrix \(\Sigma_{r}\) into \(U_{r}\) and \(V_{r}\), the final format becomes the product of two tensors, and the number of parameters remaining is \((d_{1}+d_{2})r\). SVD can also be used to compress higher-order tensors (\(M>2\)), which requires reducing the order of the target tensor with slicing and reshaping operations beforehand [11].
CPD can be applied directly to a high-order tensor without the need of reducing its order to \(M=2\) like SVD. Given the \(M_{th}\)-order tensor, \(\mathcal{A}\in\mathbb{R}^{d_{1}\times d_{2}\times\ldots d_{M}}\), its CPD format can be represented as the sum of the outer product of \(1_{st}\)-order tensors.
\[\mathcal{A}\approx\sum_{i=1}^{r}a_{1,i}\otimes a_{2,i}\otimes\ldots a_{M,i} \tag{3}\]
This set of \(1_{st}\)-order tensors can be computed via the Alternating Least Squares (ALS) method [9], and the number of parameters remaining is \((d_{1}+d_{2}+\ldots d_{M})r\). The difference between SVD and CPD is visualised in Fig. 2.
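The parameter-count arithmetic for SVD can be made concrete with a short NumPy sketch (the shapes below are illustrative, e.g. a convolution kernel reshaped to 2-D):

```python
import numpy as np

d1, d2, r = 256, 576, 32
A = np.random.randn(d1, d2)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_r = U[:, :r] * s[:r]        # absorb the singular values into U_r
V_r = Vt[:r, :]
A_hat = U_r @ V_r             # rank-r approximation of A

params_full = d1 * d2         # original parameter count
params_svd = (d1 + d2) * r    # parameter count after truncated SVD
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```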
### _Related Work_
Many state-of-the-art approaches that target the acceleration of CNNs are based on dataflow architectures, as they reduce the off-chip memory accesses, and customize the hardware pipeline to the properties of the targeted CNN load.
fpgaConvNet [13] generates a highly optimized architecture based on the target CNN and FPGA board. They considered splitting the whole dataflow into multiple partitions, where each partition contains only a subgraph of the neural network, allowing the tool to map large CNN models to small devices through reconfiguration. The tool performs a faithful mapping of the CNN to the FPGA, assuming an already optimized CNN model as its input.
FINN [14] implemented the dataflow architecture through extreme network quantization. The authors fit a ResNet-50 network on a U250 device, after quantizing the weights and the activations of the network to 1 bit and 2 bits respectively, except the first and last layers which are quantized to 8 bits. Their mixed precision implementation achieves an impressive 2703 Frames Per Second (FPS) but they also reported 9.8 percentage points (pp) top-1 accuracy degradation on the ImageNet dataset compared to the floating point version of the network.
HPIPE [15] explored weights compression through unstructured sparsity. The authors eliminated \(85\%\) of the weights from ResNet-50 and encoded the remaining parameters in a compressed format to save memory storage. The authors reported a 5.2pp accuracy degradation and an achieved throughput of 4550 FPS on a Stratix 10 2800 FPGA.
Our approach takes a step in a different direction from prior work, as it explores a different dimension for performing fine-grained, layer-specific weights compression, achieving competitive performance compared with existing methods, including unstructured sparsity and mixed-precision quantization.
## III Mixed Tensor Decomposition
This section introduces our proposed fine-grained compression method, termed Mixed-TD, which opens the space for applying layer-specific SVD and CPD decompositions in a mixed manner.
### _Compute Decomposed Convolution_
Fig. 2: Represent SVD and CPD in the Tensor Diagram Notation [12]. Each solid node denotes a \(M_{th}\)-order tensor with \(M\) edges. Edges connected together represent the tensor contraction operation between two tensors. In CPD, the hollow node with a cross inside represents the sum of the outer product.

Let us consider an \(N\)-layer CNN whose weights are represented as \(4_{th}\)-order tensors \(\mathcal{W}_{j}\in\mathbb{R}^{c_{j+1}\times c_{j}\times k_{j}\times k_{j}},j\in[1,N]\), where \(c_{j+1}\), \(c_{j}\) and \(k_{j}\times k_{j}\) denote the number of output channels, input channels and the kernel size of the \(j_{th}\) convolutional layer respectively. Its input and output are denoted as \(\mathcal{X}_{j}\in\mathbb{R}^{b\times m_{j}\times n_{j}\times c_{j}}\) and \(\mathcal{X}_{j+1}\in\mathbb{R}^{b\times m_{j+1}\times n_{j+1}\times c_{j+1}}\), where \(b\) denotes the batch size, and \(m_{j}\) and \(n_{j}\) denote the spatial size of the feature map.
The convolution operation can be represented as the tensor contraction along \(c_{j}\) and \(k_{j}\) dimensions,
\[\mathcal{X}_{j+1}=\sum_{c_{j},k_{j}}(\mathcal{W}_{j}\cdot S(\mathcal{X}_{j})), \tag{4}\]
where the sliding window function \(S\) performs the padding and striding, converting \(\mathcal{X}_{j}\) into the shape of \(b\times m_{j+1}\times n_{j+1}\times c_{j}\times k_{j}\times k_{j}\).
Our proposed approach performs tensor decomposition on the weights tensor at design time. After decomposition, the tensor \(\mathcal{W}_{j}\) is substituted by the form of (2) or (3), transforming as such the computation from one tensor contraction operation into multiple consecutive ones, but with a reduced total number of elements.
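As a software illustration of the SVD case, a \(c_{j+1}\times c_{j}\times k_{j}\times k_{j}\) convolution can be replaced by a rank-\(r\) \(k_{j}\times k_{j}\) convolution followed by a \(1\times 1\) convolution. The PyTorch sketch below is our own framing of these two tensor contractions; in the accelerator they are realised as hardware pipeline stages:

```python
import torch
import torch.nn as nn

def svd_decompose_conv(conv: nn.Conv2d, r: int) -> nn.Sequential:
    """Replace a c_out x c_in x k x k convolution with two consecutive stages."""
    c_out, c_in, k, _ = conv.weight.shape
    W = conv.weight.detach().reshape(c_out, c_in * k * k)   # 4-D -> 2-D
    U, s, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Conv2d(c_in, r, k, stride=conv.stride,
                      padding=conv.padding, bias=False)
    second = nn.Conv2d(r, c_out, 1, bias=conv.bias is not None)
    first.weight.data = Vh[:r].reshape(r, c_in, k, k)
    second.weight.data = (U[:, :r] * s[:r]).reshape(c_out, r, 1, 1)
    if conv.bias is not None:
        second.bias.data = conv.bias.data.clone()
    return nn.Sequential(first, second)
```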
### _Layer-Specific Design Choices_
In the proposed Mixed-TD method, there are three types of layer-specific design choices.
**SVD or CPD:** For each convolutional layer, its 4-d weight tensor can be decomposed either in SVD or CPD format. For SVD, the 4-d tensor is reshaped into 2-d and then decomposed into the product of two 2-d tensors, as (2) shows. Otherwise, for CPD, the 4-d tensor is decomposed into the sum of the outer product of four 1-d tensors, as (3) shows. The main difference between these two formats is the number of tensors left after decomposition, as well as their representation abilities. In Sections IV and VI, we will see that the number of tensors left affects the accelerator performance in a dataflow architecture design. A rule of thumb is that the CPD format has more overhead than SVD in hardware design, but in terms of the impact on network accuracy, whether to use SVD or CPD is really layer-specific. We use the enumeration variable \(t_{j}=\{0:SVD,1:CPD\}\) to represent this decision.
**Channels grouping:** Instead of directly applying the tensor decomposition to the entire \(\mathcal{W}_{j}\), we can choose to slice and split \(\mathcal{W}_{j}\) into multiple chunks, where each chunk can then be decomposed individually. As in CNN designs \(c_{j+1}\) and \(c_{j}\) are usually much larger than \(k_{j}\), our method provides the options of slicing the first (\(c_{j+1}\)) and the second dimension (\(c_{j}\)) of \(\mathcal{W}_{j}\) into \(g_{1,j}\) and \(g_{2,j}\) groups respectively. As such, each \(\mathcal{W}_{j}\) can be split into \(g_{1,j}g_{2,j}\) chunks in total, and each chunk has the shape of \(\mathbb{R}^{\frac{c_{j+1}}{g_{1,j}}\times\frac{c_{j}}{g_{2,j}}\times k_{j}\times k_{j}}\). The intuition behind offering this type of design choice is to leverage the diversity and similarity of feature maps [16].
**Rank selection:** In both SVD and CPD methods, a key design choice is the rank selection, as it controls the compression ratio of the decomposition. Within a network, the redundancy of each layer varies considerably and setting a uniform compression ratio dramatically hurts the accuracy [17]. Therefore, it is critical to have the layer-specific choice of the rank \(r_{j}\) after searching for the optimal combination of them.
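The three per-layer decisions can be bundled into one record; the sketch below is an illustrative encoding of \(\tau_{j}\), not code from the released tool:

```python
from dataclasses import dataclass
from enum import Enum

class Format(Enum):
    SVD = 0
    CPD = 1

@dataclass
class LayerChoice:
    """Per-layer decomposition decision tau_j = (t_j, g1_j, g2_j, r_j)."""
    fmt: Format   # SVD or CPD
    g1: int       # groups along the output-channel dimension c_{j+1}
    g2: int       # groups along the input-channel dimension c_j
    rank: int     # rank r_j used for each chunk
```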
## IV Accelerator Design
This section introduces the accelerator designs for deploying networks compressed by our Mixed-TD method. A high level description of the dataflow architecture is given in section IV-A, the internal structures of compute engines are discussed in section IV-B, and the data connections between engines are elaborated in section IV-C.
### _Dataflow Architecture_
We extend an open-source dataflow architecture accelerator, fpgaConvNet [18, 13]. As we target both high throughput and low latency, we do not use the device reconfiguration technique in our work. Each layer, including convolution, ReLU, pooling, elementwise addition, fully connected, etc., is mapped to a dedicated computation engine. These engines are connected in a pipelined manner to maximise overall throughput. During inference, the input data is sent from DDR to the FPGA using the AXI-Stream interface, propagates through all computation engines, and the final prediction results are sent back to DDR using AXI-Stream. The system processes input data in batches, and the pipeline is emptied between different batches.
### _SVD v.s. CPD Engine_
Each compressed convolutional layer of the network is mapped to either the SVD Engine or the CPD Engine with all the weights stored at the on-chip memories and preloaded before the inference starts. Both the SVD Engine and CPD Engine adopt the structure of input buffer, MAC units and accumulation module to compute the tensor contractions (Fig. 3).
* Input Buffer: It applies the sliding window function to input feature maps using a set of \(p_{in}\) line buffers. At every clock cycle, each line buffer fetches one word of data from the previous layer and dispatches a window with the size of \(k_{j}\times k_{j}\).
* MAC Units: \(p_{out}\) MAC units operate in parallel. Each MAC unit, which contains a multiplier array followed by an adder tree, is responsible for a vector-vector multiplication. The MAC units are fed by the input buffer as well as the preloaded weights.
* Accumulation Module: It gathers and accumulates the partial sums produced by the MAC units before dispatching the data into the next stage of computation.
Now, we highlight the differences between the proposed SVD Engine and CPD Engine.
* In the case of SVD decomposition, the convolution kernel is decomposed into **two** stages of tensor contraction, as (2) shows. In the case of CPD decomposition, the computation is decomposed to **four** stages instead, as (3) shows. In each stage, \(p_{in}\) and \(p_{out}\) can be tuned individually to trade resources for throughput.
* In the SVD Engine, the buffer **broadcasts** all the inputs to the \(p_{out}\) MAC units concurrently to compute the inner products. On the contrary, in the CPD Engine, at the second and the third stages of computation (involving \(a_{3,r}\) and \(a_{4,r}\)), the data coming out of the buffer is **scattered** to the \(p_{out}\) MAC units instead to compute the outer products.
### _Cross-layer Data Flow_
In the case of decomposed convolutions without channel grouping (as explained in Section III), each layer will require its own engine, and data will flow between layers in the form of a flattened 4-dimensional tensor, in the form of \(b\times m_{j}\times n_{j}\times c_{j}\), where \(j\in[1,N]\). In this notation, \(b\) represents the batch size, \(m_{j}\) and \(n_{j}\) represent the spatial size of the feature map, and \(c_{j}\) represents the number of feature maps in the \(j\)-th layer.
In the case where channel grouping is applied, i.e. either \(g_{1,j}\) or \(g_{2,j}\) is greater than 1, there will be \(g_{1,j}g_{2,j}\) engines per layer. In this case, data flows into the engines in a flattened form, as \(b\times m_{j}\times n_{j}\times\frac{c_{j}}{g_{2,j}}\times g_{2,j}\). After the data is split into \(g_{2,j}\) groups, each engine fetches data in the form of \(b\times m_{j}\times n_{j}\times\frac{c_{j}}{g_{2,j}}\). Similarly, at the layer's output, data is in the form of \(b\times m_{j+1}\times n_{j+1}\times\frac{c_{j+1}}{g_{1,j}}\times g_{1,j}\).
In addition, if \(g_{1,j}\neq g_{2,j+1}\), the data must be rearranged from \(\frac{c_{j+1}}{g_{1,j}}\times g_{1,j}\) to \(\frac{c_{j+1}}{g_{2,j+1}}\times g_{2,j+1}\) before being fed into the engines of the next layer. This is achieved using an array of FIFOs with a width set to the Least Common Multiple (LCM) of \(g_{1,j}\) and \(g_{2,j+1}\). The data streams are written and read from this FIFO array in a round-robin fashion to effectively rearrange the data.
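A software analogue of this rearrangement is sketched below; buffering \(\mathrm{lcm}(g_{1,j},g_{2,j+1})\) words guarantees that complete \(g_{2,j+1}\)-wide reads are always available:

```python
from math import lcm

def regroup(words, g1, g2):
    """Words arrive g1 per cycle and must leave g2 per cycle."""
    width = lcm(g1, g2)
    buf = []
    for w in words:
        buf.append(w)
        if len(buf) == width:            # one round-robin write pass done
            for i in range(0, width, g2):
                yield buf[i:i + g2]      # one g2-wide round-robin read
            buf = []
```

For example, `list(regroup(range(12), g1=4, g2=3))` buffers twelve words and re-emits them as four 3-wide reads.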
## V Design Space Exploration
Having introduced both the compression algorithm and accelerator architecture, this section focuses on their integration, and how we can efficiently identify the optimal design point.
### _Problem Formulation_
We define the optimal design point as the one that maximizes the prediction accuracy of the network, while satisfying the given resource budget and the target throughput in Frames Per Second (FPS). Several variables affect the design point, including the applied decomposition parameters \(\tau=\{g_{1,j},g_{2,j},t_{j},r_{j}\},j\in[1,N]\) (defined in section III-B); as well as the accelerator's unrolling factors \(\psi=\{p_{in,j},p_{out,j}\},j\in[1,N]\) (defined in section IV-B).
\[\max_{\tau,\psi}ACC\quad s.t.\ FPS\geq FPS_{target},\ \ RSC\leq RSC_{budget} \tag{5}\]
Fig. 3: Internal architecture of the proposed SVD Engine and CPD Engine. Both architectures implement the convolution on the decomposed weight tensors, but they differ in the number of stages of computation as well as the dataflow between stages.

The accuracy of the network depends on the choice of \(\tau\) only. The FPS depends on both \(\tau\) and \(\psi\), as \(\tau\) controls the per-layer workload and \(\psi\) determines the initiation interval of the computation loop. The total resource is represented by the sum of per-layer resources, depending on \(\psi\) only. To efficiently solve the constrained optimization problem, we decouple the searches of decomposition parameters \(\tau\) and unrolling factors \(\psi\), which are elaborated in the following two sections respectively.
### _Evolutionary Search_
Due to the fine-grained and layer-specific decisions made by the proposed Mixed-TD method, the design space defined by \(\tau\) is extremely large. To illustrate, ResNet-18 alone consists of \(5.7\times 10^{29}\) possible candidate designs.
As such, for the efficient search of \(\tau\), we adopt the evolutionary searching algorithm proposed in [19]. Initially, we randomly sample from the design space and keep only the valid design points that satisfy the throughput and resource constraints until we obtain \(|\textbf{P}|\) valid design points. These \(|\textbf{P}|\) design points form the first generation of the "population", referred to as "parents". Using mutation and crossover, we generate \(|\textbf{C}|\) new valid samples, referred to as "children". The parents and children are then ranked together based on their prediction accuracy, and only the top-\(|\textbf{P}|\) samples are retained to become the parents of the next generation. This ensures the continuation of high-performing design points throughout the evolution process.
```
1: procedure Validate(sample \(i\))
2:     query accuracy of \(i\)
3:     query throughput of \(i\)
4:     return \(FPS\geq FPS_{target}\)
5: procedure Searching
6:     randomly sample \(|\textbf{P}|\) valid designs
7:     while step \(\leq\) max_steps do
8:         mutate **P**, obtain \(|\textbf{C}|/2\) new valid samples
9:         crossover **P**, obtain \(|\textbf{C}|/2\) new valid samples
10:        sort \(\textbf{P}\cup\textbf{C}\) by accuracy
11:        keep top-\(|\textbf{P}|\) samples
```
**Algorithm 1** Constrained Evolutionary Search
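A minimal Python sketch of Algorithm 1 is given below; the callbacks and population sizes are illustrative placeholders:

```python
import random

def evolutionary_search(sample_random, mutate, crossover, accuracy, is_valid,
                        pop_size=50, n_children=50, max_steps=100):
    population = []
    while len(population) < pop_size:        # initial valid population
        cand = sample_random()
        if is_valid(cand):                   # FPS and resource constraints
            population.append(cand)
    for _ in range(max_steps):
        children = []
        while len(children) < n_children:
            if len(children) < n_children // 2:
                cand = mutate(random.choice(population))
            else:
                cand = crossover(*random.sample(population, 2))
            if is_valid(cand):
                children.append(cand)
        # rank parents and children together, keep the top pop_size
        population = sorted(population + children, key=accuracy,
                            reverse=True)[:pop_size]
    return max(population, key=accuracy)
```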
### _Performance Predictor_
To further speed up the search of the evolutionary algorithm, it is crucial to minimize the time spent on evaluating the accuracy and throughput of each design point.
For accuracy estimation, we use the decomposed network without fine-tuning and evaluate it on a single batch of data, rather than the entire validation dataset. This significantly reduces the time required for accuracy queries, reducing the duration from minutes to just seconds on a desktop GPU.
For throughput estimation, it is necessary to solve the following resource allocation problem that identifies the optimal configuration of unrolling factors \(\psi\).
\[\max_{\psi}FPS\quad s.t.\ RSC\leq RSC_{budget} \tag{6}\]
The unrolling factors impact both the resource utilization and the system throughput. The optimal configuration is the one that maximizes the system throughput by balancing the delays of the pipeline stages under a given resource constraint.
Existing optimizers are based on heuristics and take minutes or hours to run for networks with about 10 to 50 layers [20]. Moreover, such an optimization process has to be repeated whenever the compression decisions change, limiting the search speed of Algorithm 1 and prohibiting its application to our case.
To address this challenge, the paper proposes building a proxy that predicts the achievable throughput for a specific configuration based on network characteristics and the optimizer used. For this purpose, a random forest was selected as the proxy model.
Our approach is as follows: In the first few generations of the evolutionary search, we use the solver from [20] to obtain the throughput of each design, which we save and use to build a dataset. At a predetermined point in the search, we use this dataset to train a random forest regressor, which is then used to predict the throughput for subsequent designs during the evolutionary algorithm. Training the regressor takes only a few minutes on a desktop CPU, and its inference time is less than a second.
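In scikit-learn terms, the proxy amounts to a few lines; `X_history` and `y_history` stand for the design features and the optimizer-reported throughputs collected in the early generations (the names are ours):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

proxy = RandomForestRegressor(n_estimators=100)
proxy.fit(np.asarray(X_history), np.asarray(y_history))

# later generations: replace the slow optimizer call with the proxy
fps_estimate = proxy.predict(np.asarray(design_features).reshape(1, -1))[0]
```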
Our work differs from a previous study [21], which aimed to build performance predictors for general-purpose computing architectures, such as CPUs, GPUs, or systolic array accelerators, where the architecture is fixed and only the workload changes during the search. In contrast, our work targets dataflow architectures, where the hardware is changed and customized for each design point.
Fig. 4: Design flow of our system. The flow includes the _search_ and the _deployment_ stages. During the _search_ stage, we query the accuracy and the throughput of each design point and perform the design space exploration to identify the optimal design. The optimal design is then fine-tuned to improve its accuracy before being synthesised and deployed on the FPGA device.
## VI Experiments
Our experiment is carried out on a server using an Nvidia GTX 1080 Ti GPU for accuracy queries and final model fine-tuning. The accelerator is evaluated using Vivado 2020.1, targeting the AMD Alveo U250 device.
### _Benchmarks_
As a case study, we focused on the ImageNet dataset and evaluated two state-of-the-art models, ResNet-18 and RepVGG-A0 [26]. ResNet-18 features representative residual block designs, while RepVGG-A0 is the latest model from the VGG family. Both models were quantized to 8-bit Block Floating Point (BFP) format [27]. Table II summarizes the results of our investigation. Our proposed method successfully produced a model with a significantly reduced number of parameters while keeping the accuracy loss below 0.4pp compared to a model that uses 8-bit BFP for both activations and weights.
### _Performance Results_
We used the proposed approach to generate designs and compared their performance against state-of-the-art work targeting the same task, i.e. ImageNet classification using similar types of models. Even though a direct comparison with those approaches is not possible, as each one utilises a different model and device, it is useful to position the work against the state-of-the-art on the task of ImageNet classification within the space of achieved throughput and accuracy. The results are shown in Table I. The designs produced by the proposed approach significantly outperform non-dataflow designs [3, 24, 25] with respect to both peak FPS and latency (the inverse of FPS for batch size 1). In terms of FPS/DSP, our designs outperform non-dataflow designs by between 1.73\(\times\) and 10.29\(\times\).
With respect to dataflow architectures, StreamSVD [22] utilises tensor decomposition based on SVD only to compress the weights of the model, which is the approach closest to our work. Theirs is a partial dataflow design, as device reconfiguration is required to overcome the resource constraint. Compared with them, our work achieves 3.02\(\times\) higher peak FPS/DSP. HPIPE [15] utilises weight sparsity; we outperform them at batch size 1 but not at peak performance, because their design is clocked at a much higher frequency than ours, 580 MHz versus 200 MHz. FCMP is based on FINN [14] and exploits binary quantization. Because of the binarization, their design utilizes far fewer DSPs than ours, but their classification accuracy is lower than ours by 1.8pp on ResNet. Overall, the results show that our Mixed-TD approach can lead to designs with competitive performance at similar task accuracy on ImageNet.
### _Ablation Studies_
To better understand the individual contributions of the two main components of the work, mixed tensor decomposition and random-forest-based predictor, ablation studies were conducted. These studies allowed us to analyze the impact of each component on the overall performance of the system.
The top-1 accuracy achieved over time when SVD-only, CPD-only, and the proposed Mixed-TD approaches are used for the decomposition of the weight tensors is illustrated in Fig. 5. As the Mixed-TD approach explores a larger design space, the benefits in accuracy are only observed after the 90-hour mark.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Precision & Float32 & BFP-W8A8 & BFP-W8A8 \\ & Decomposed & ✗ & ✗ & ✓ \\ \hline \multirow{4}{*}{ResNet-18} & Top-1 Accuracy (\%) & 69.7 & 69.3 & 69.1 \\ & Memory Size (Mb) & 374 & 94 & 35 \\ & BitOPs (G) & 3715 & 232 & 97 \\ & Throughput (FPS) & N/A & 702 & 1041 \\ \hline \multirow{4}{*}{RepVGG-A0} & Top-1 Accuracy (\%) & 72.4 & 71.9 & 71.5 \\ & Memory Size (Mb) & 266 & 67 & 35 \\ & BitOPs (G) & 2789 & 174 & 87 \\ \cline{1-1} & Throughput (FPS) & N/A & 864 & 1162 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Compression results on two CNN benchmarks trained on ImageNet dataset. BitOPs are counted as \(2\times W\times A\) MACs. Models have been fine-tuned after the compression. Throughput is on Alveo U250 with a batch size of 1.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & Dataflow & Network & Method & Acc. & Device & \begin{tabular}{c} Freq. \\ (MHz) \\ \end{tabular} & URAM & BRAM & kLUT & DSP & \begin{tabular}{c} FPS \\ _batch 1_ \\ \end{tabular} &
\begin{tabular}{c} FPS \\ _peak_ \\ \end{tabular} \\ \hline N3H-Core [24] & ✗ & ResNet-18 & Q & 70.4 & XCZZ045 & 100 & - & 541 & 153 & 900 & 31 & 123 \\ FILM-QNN [25] & ✗ & ResNet-18 & Q & 70.5 & ZCU102 & 150 & - & 881 & 180 & 2092 & - & 215 \\ MCBBS [3] & ✗ & VGG16 & Q, S & 64.8 & Arria GX1150 & 242 & - & 1785 & 605 & 2704 & 54 & - \\ \hline StreamSVD [22] & ✓(partial) & ResNet-18 & Q, TD & 68.4 & XCZ7045 & 125 & - & 752 & - & 576 & - & 34 \\ HPIPE [15] & ✓ & ResNet-50 & Q, S & 71.9 & S10 2800 & 580 & - & 11278 & 1064 & 10044 & 909 & 4550 \\ FCMP [14] & ✓ & ResNet-50 & Q & 67.3 & Alveo U250 & 195 & 109 & 3870 & 1027 & 1611 & 526 & 2703 \\ \hline Ours & ✓ & ResNet-18 & Q, TD & 69.1 & Alveo U250 & 200 & 0 & 3564 & 1550 & 6394 & 1041 & 1138 \\ Ours & ✓ & RepVGG-A0 & Q, TD & 71.5 & Alveo U250 & 200 & 0 & 3550 & 1555 & 5652 & 1162 & 1288 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Performance and resource comparison with existing work on CNN-FPGA accelerators. In terms of the compression method, “Q” stands for quantization, “S” stands for sparsity (weights pruning) and “TD” stands for tensor decomposition. In terms of performance metrics, both FPS for batch size 1, and peak FPS for a large batch size (we and [22] use 256, [14] uses 1024, and others did not provide the details) are reported. As the pipeline is emptied between different batches, the pipeline depth impacts differently those two FPS metrics. The resources of Intel devices have already been converted to the equivalent resources on AMD devices, where 1 Intel ALM = 1.8 AMD LUT [23], and 1 Intel DSP = 2 AMD DSP [15].
We investigated integrating other decomposition methods, such as Tensor Train [28] and Tensor Ring [29], into our Mixed-TD algorithm, but we did not observe any further gains on the benchmarks that we investigated.
We have evaluated the performance of our proposed performance predictor for accelerating the design space exploration and compared it to a baseline approach that uses the number of Multiply-Accumulate (MAC) operations to guide the exploration. As shown in Fig. 6, while the MAC-based approach is \(10\times\) faster, it leads to designs with significantly lower throughput than the target. On the other hand, our performance predictor converges to a design point that better meets the target throughput, resulting in significant time savings compared to a full optimization process for mapping the model to an FPGA. Note that the full optimization process can only explore 22 designs/hour and takes more than 150 hours to converge, whereas with the help of the predictor we are able to explore 4969 designs/hour and the search converges in less than 20 hours.
Furthermore, in our experiment, the target throughput is 1079 FPS, and our predictor identifies the design point with 1041 FPS, which is only \(3.5\%\) lower. To build the predictor, we chose a random forest due to its fast training time, typically taking only a few minutes, as opposed to other options such as Graph Convolutional Network [21] which can be relatively slow to train.
## VII Conclusion
The paper presents a novel method, called Mixed-TD, for fine-grained and layer-specific model compression. This approach addresses the on-chip memory limitations of dataflow architecture accelerators. Mixed-TD achieves substantial weight compression while preserving high accuracy and considering the target hardware. To efficiently navigate the extended design space, we introduced an evolutionary search with a throughput predictor based on a random forest. The paper demonstrates the benefits of tensor decomposition methods in the space of mapping CNN models onto FPGAs, as well as the need for proxies in order to navigate quickly the large design space.
## Acknowledgement
For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) license to any Accepted Manuscript version arising.
|
2305.09373 | Multi-task convolutional neural network for image aesthetic assessment | As people's aesthetic preferences for images are far from understood, image
aesthetic assessment is a challenging artificial intelligence task. The range
of factors underlying this task is almost unlimited, but we know that some
aesthetic attributes affect those preferences. In this study, we present a
multi-task convolutional neural network that takes into account these
attributes. The proposed neural network jointly learns the attributes along
with the overall aesthetic scores of images. This multi-task learning framework
allows for effective generalization through the utilization of shared
representations. Our experiments demonstrate that the proposed method
outperforms the state-of-the-art approaches in predicting overall aesthetic
scores for images in one benchmark of image aesthetics. We achieve near-human
performance in terms of overall aesthetic scores when considering the
Spearman's rank correlations. Moreover, our model pioneers the application of
multi-tasking in another benchmark, serving as a new baseline for future
research. Notably, our approach achieves this performance while using fewer
parameters compared to existing multi-task neural networks in the literature,
and consequently makes our method more efficient in terms of computational
complexity. | Derya Soydaner, Johan Wagemans | 2023-05-16T11:56:02Z | http://arxiv.org/abs/2305.09373v2 | # Multi-task convolutional neural network for image aesthetic assessment
###### Abstract
As people's aesthetic preferences for images are far from understood, image aesthetic assessment is a challenging artificial intelligence task. The range of factors underlying this task is almost unlimited, but we know that some aesthetic attributes affect those preferences. In this study, we present a multi-task convolutional neural network that takes into account these attributes. The proposed neural network jointly learns the attributes along with the overall aesthetic scores of images. This multi-task learning framework allows for effective generalization through the utilization of shared representations. Our experiments demonstrate that the proposed method outperforms the state-of-the-art approaches in predicting overall aesthetic scores for images in one benchmark of image aesthetics. We achieve near-human performance in terms of overall aesthetic scores when considering the Spearman's rank correlations. Moreover, our model pioneers the application of multi-tasking in another benchmark, serving as a new baseline for future research. Notably, our approach achieves this performance while using fewer parameters compared to existing multi-task neural networks in the literature, and consequently makes our method more efficient in terms of computational complexity.
Image aesthetics, convolutional neural network, deep learning, multi-task learning, regression, image aesthetic assessment.
## 1 Introduction
Image aesthetic assessment is a challenging task due to its subjective nature. Some people may find an image aesthetically pleasing, while others may disagree. Aesthetic preferences of individuals are diverse and they can depend on many factors. Because of the importance and complexity of the problem, the literature on automated image aesthetic assessment is extensive [4, 40]. In recent years, deep learning has become an important part of this literature based on its substantial impact in many areas. Given that deep neural networks can already perform tasks that were previously thought to be exclusive to humans, such as playing games [32], it is not unreasonable to expect them to be able to assess the aesthetic value of images as well. Currently, image aesthetic assessment has a significant impact on many application areas such as automatic photo editing and image retrieval.
In this context, neural networks have become a powerful tool in _computational aesthetics_. This interdisciplinary field of research is of great importance for the automatic assessment of image aesthetics, and has led to the development of several state-of-the-art models for aesthetics research. In this study, we aim to evaluate a computational approach for image aesthetics which considers the overall aesthetic score as well as individual attributes that can impact aesthetic preferences. Therefore, we focus on the multi-task setting to assess the model's performance across multiple tasks. We handle the image aesthetic assessment task as a regression problem, i.e., our aim is predicting the aesthetic ratings for images. However, predicting overall aesthetic scores using regression-based approaches is complicated because aesthetic liking is influenced by a multitude of interacting factors. Many of these factors are subjective, and their combined effect is notoriously difficult to predict. This difficulty is compounded further in a multi-task setting, making the task even more challenging.
To this end, we propose a multi-task convolutional neural network (CNN) that predicts an overall aesthetic score for a given image while also learning important attributes related to aesthetics. To demonstrate the effectiveness of our approach, we evaluate our multi-task CNN on two benchmark datasets in aesthetics research, namely the Aesthetics with Attributes Database (AADB) [17] and The Explainable Visual Aesthetics (EVA) dataset [13]. These datasets are unique in that they provide both overall aesthetic scores and attribute scores, making them valuable resources for evaluating image aesthetic assessment models.
Our proposed multi-task CNN performs well for image aesthetic assessment, while being efficient in terms of computational complexity. Our multi-task CNN is the first of its kind applied to the EVA dataset, making it a new baseline for the multi-task setting on this dataset. Moreover, it achieves near-human performance on the overall aesthetic scores of the AADB dataset while having fewer parameters than the previous studies in the literature, demonstrating the principle of Occam's razor in machine learning.
The rest of the paper is organized as follows. In Section 2, related work is presented. We introduce our multi-task CNN in Section 3. We describe our experimental setup in Section 4 and we discuss our results in Section 5. Finally, the conclusions and outlook are given in Section 6.
### _Contributions_
Our main contributions are summarized as follows.
* We propose an end-to-end multi-task CNN for image aesthetic assessment and conduct systematic evaluation of our model on two image aesthetic benchmarks.
* In the multi-task setting, our model achieves the state-of-the-art result on the overall aesthetic scores of the AADB dataset, while requiring fewer parameters than previous approaches.
* On the more recent EVA dataset, we conduct performance analysis and our model is the first multi-task CNN for this dataset, serving as the new baseline.
* Our evaluation shows that the multi-task setting consistently outperforms the single-task setting for the same neural network architecture across both datasets.
* As a result, we present a simple yet effective multi-task neural network architecture for image aesthetic assessment and provide a detailed evaluation of it on both image aesthetic datasets.
### _Problem formulation_
In this study, our aim is to develop a model that predicts aesthetic-related scores of images. We use aesthetic benchmarks that include images with overall aesthetic scores and scores for \(K\) aesthetic attributes. Our model learns from the training set \(\left\{x^{(i)}\right\}_{i=1}^{N}\) with corresponding targets \(y^{(i)}\). Here, each training sample consists of an RGB image \(x^{(i)}\in\mathbb{R}^{d}\). Correspondingly, \(y^{(i)}\in\mathbb{R}^{K+1}\) is a concatenated vector of the overall aesthetic score \(y^{(i)}_{o}\in\mathbb{R}\) and the scores for \(K\) aesthetic attributes \(y^{(i)}_{a}\in\mathbb{R}^{K}\). Our model learns from this training data to accurately predict the aesthetic-related scores of images.
Such problems, where the output is a numerical value, are known as _regression_ problems. Here, the task is to learn the mapping from the input to the output. To this end, we assume a machine learning model of the form
\[y=f(x|\theta), \tag{1}\]
where \(f(.)\) denotes the model and \(\theta\) represents its parameters. Since we have images as input data, we choose the convolutional neural network (CNN) as the model \(f(.)\). We use a CNN in a _multi-task_ setting, as the target vector \(y^{(i)}\) includes overall aesthetic score and scores for \(K\) aesthetic attributes. Given the training set of \(N\) samples \(D=\left\{(x^{(i)},y^{(i)})\right\}_{i=1}^{N}\), our goal is to obtain a network \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K+1}\), where \(f\) can simultaneously predict the overall aesthetic scores and attribute scores from input images.
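A minimal sketch of such a network is shown below, assuming a ResNet-18 backbone with a single shared feature vector and a \((K+1)\)-way linear regression head; the architecture actually proposed in this paper is detailed in Section 3 and may differ:

```python
import torch.nn as nn
import torchvision.models as models

class MultiTaskAesthetics(nn.Module):
    """Shared backbone with K+1 regression outputs:
    one overall aesthetic score plus K attribute scores."""
    def __init__(self, k_attributes=11):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d features
        self.backbone = backbone
        self.head = nn.Linear(512, k_attributes + 1)

    def forward(self, x):
        out = self.head(self.backbone(x))
        return out[:, 0], out[:, 1:]         # overall score, attribute scores
```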
## 2 Related Work
The task of image aesthetic assessment is typically approached as either a binary classification problem, where the aim is to classify an image as low or high aesthetics, or as a regression problem, where a model predicts an aesthetic score for a given image. Prior studies have investigated both classification and regression-based approaches to image aesthetic assessment. Deep learning techniques have achieved remarkable success in various fields, and image aesthetic assessment is no exception, as evidenced by the increasing number of studies exploring the use of deep neural networks in this area. It is clear that these techniques have played a critical role in contributing to notable advances in image aesthetics research. For example, Kang _et al._ (2014) [12] presented a CNN that predicts image quality, while Lu _et al._ (2014) [25] proposed an aesthetics classification network that learns several style attributes. Lee _et al._ (2019) [19] utilized a Siamese network-based approach in this area. In another study, Lu _et al._ (2015) [26] developed a deep neural network for image style recognition, aesthetic quality categorization, and image quality estimation. In some studies, models based on CNNs were used to predict a single aesthetic score for an image [14, 15]. Inspired by a visual neuroscience model, Wang _et al._ (2017) [38] introduced a model for image aesthetics assessment that predicts the distribution of human ratings.
Assessing the aesthetic quality of images involves numerous factors that contribute to preferences, and while many of these factors are difficult to quantify, there are some known aesthetic attributes that influence preferences. Previous studies have investigated the aesthetic value of images in conjunction with these attributes. Recently, deep neural networks based on multi-task learning have been employed to tackle this task, treating it as a multi-task problem that simultaneously predicts an overall aesthetic score and multiple attribute scores. For instance, Kong _et al._ (2016) [17] introduced the AADB dataset, which includes overall aesthetic scores and scores for eleven attributes of photos. In their study, Kong _et al._ (2016) [17] developed a multi-task neural network by fine-tuning AlexNet [18], and training a Siamese network [2] to predict aesthetic ratings. Subsequent studies have also utilized the AADB dataset for further research in this field. For example, Hou _et al._ (2017) [10] applied the squared earth mover's distance-based loss for training, and compared different deep networks including AlexNet, VGG16 [35], and a wide residual network [39], and found that fine-tuning a VGG-based model achieved the best performance.
Pan _et al._ (2019) [30] proposed a neural network architecture based on adversarial learning, inspired by generative adversarial networks [8]. This is a multi-task deep CNN, termed the "rating network," which learns the aesthetic score and attributes simultaneously. While the rating network plays the role of the "generator," a "discriminator" tries to distinguish the predictions of the multi-task network from the real values. This model outperforms previous approaches and is currently considered the state-of-the-art method for predicting overall aesthetic scores on the AADB dataset in multi-task aesthetic prediction.
Since most of the images are rated null (neutral) for three attributes (symmetry, repetition, and motion blur) in the AADB dataset (Fig. 3), some studies [1, 21, 27, 31] have chosen to exclude these attributes from their multi-task models. For instance, Malu _et al._ (2017) [27] developed a multi-task CNN based on ResNet-50 [9] that simultaneously learns the eight remaining aesthetic attributes along with the overall aesthetic score. They also examined the salient regions for the corresponding attributes and applied a gradient-based visualization technique [41]. Abdenebaoui _et al._ (2018) [1] used a deep CNN that predicts technical quality, high-level semantic quality, and a detailed description of photographic rules. Reddy _et al._ (2020) [31] proposed a multi-task network based on EfficientNet [37] for the same purpose, along with activation maps generated using Gradient-weighted Class Activation Mapping (Grad-CAM) [33] as a visualization technique. Recently, Li _et al._ (2022) [21] presented a hierarchical image aesthetic attribute prediction model.
In addition, Li _et al._ (2020) [20] proposed a multi-task deep learning framework that takes into account an individual's personality in modeling their subjective preferences. Liu _et al._ (2020) [24] developed an aesthetics-based saliency network in a multi-task setting. The aesthetic evaluation system proposed by Jiang _et al._ (2021) [11] outputs an image style label and three forms of aesthetic evaluation results for an image.
More recently, another image dataset, The Explainable Visual Aesthetics (EVA) dataset [13], has been released, which includes overall aesthetic scores and attribute scores. Although a few studies have used this dataset for aesthetics research, their models only predict overall aesthetic scores [22, 23, 34]. Therefore, ours is the first multi-task neural network evaluated on the EVA dataset.
Current multi-task learning approaches have demonstrated the feasibility of predicting ratings in the AADB dataset by utilizing all the available attributes instead of excluding some. In line with this, we employ all the attributes in the AADB dataset and develop a neural network architecture that is both efficient and effective in predicting the overall aesthetic scores of images. Moreover, we evaluate our multi-task CNN on the EVA dataset to further assess its performance. Our multi-task CNN provides predictions for the EVA dataset, serving as a baseline for future research in this area.
## 3 Proposed Multi-Task Convolutional Neural Network
We propose a deep multi-task CNN that jointly learns the overall aesthetic score and the aesthetic-related attributes of images during training. This allows the resulting neural network to simultaneously predict multiple scores for an image. We train our deep neural network directly from RGB images, and it is based on the pretrained VGG16 network for feature extraction. The prior multi-task approaches mentioned in Section 2 have already shown that using a pretrained network is a better option than training a neural network from scratch, since neither the AADB nor the EVA dataset contains a large number of images. We conducted experiments on several candidate pretrained CNNs to determine the optimal architecture for our task and selected the model with the highest performance.
The proposed neural network takes images as input and uses VGG16 to extract feature representations, as shown in Fig. 1. We removed the fully-connected layers of VGG16 and used its five blocks of convolutional layers. We added a global average pooling layer to the output of the last convolutional block of VGG16. The resulting feature maps are fed into two fully-connected layers with ReLU activation functions [7, 29], consisting of 128 and 64 hidden units, respectively. To prevent overfitting, we applied dropout [36] with a rate of 0.35 to the second fully-connected layer (64 hidden units), which precedes the output layer. The output layer of our neural network consists of multiple units: one for predicting the overall aesthetic score and additional units for predicting the attribute scores. For the AADB dataset, which has 11 attributes, there are 12 output units in total. On the other hand, the EVA dataset has 4 attributes, so our model includes 5 output units. For more information about the datasets, please refer to Section 4.1. The output layer applies a sigmoid activation function, and all the output units share the same hidden representation. Notably, we also designed a variant of our multi-task CNN with separate output layers: one for the overall aesthetic score and one for each attribute. Interestingly, both architectures perform similarly when predicting the overall aesthetic score. However, we found that the architecture with a single output layer outperforms the one with separate output layers when it comes to predicting the attribute scores.

Fig. 1: The general architecture of our multi-task convolutional neural network.
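For concreteness, the following is a minimal sketch of this architecture, assuming a Keras/TensorFlow implementation; the function and variable names are illustrative and not drawn from our actual codebase:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_multitask_cnn(input_shape=(256, 256, 3), num_attributes=11):
    # Five convolutional blocks of VGG16 without its fully-connected head.
    backbone = VGG16(include_top=False, weights="imagenet",
                     input_shape=input_shape)
    backbone.trainable = False                  # stage 1: all blocks frozen
    inputs = layers.Input(shape=input_shape)
    x = layers.GlobalAveragePooling2D()(backbone(inputs))
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.35)(x)
    # One sigmoid unit for the overall score plus one unit per attribute;
    # all output units share the same hidden representation.
    outputs = layers.Dense(num_attributes + 1, activation="sigmoid")(x)
    return Model(inputs, outputs), backbone

model, backbone = build_multitask_cnn()         # 12 output units for AADB
```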
## 4 Experimental Setup
### _Datasets_
We evaluate the performance of our proposed multi-task CNN on two publicly available datasets for image aesthetic assessment, namely the AADB and EVA. Both datasets provide aesthetic attribute scores that are suitable for regression modeling, in addition to the overall aesthetic scores. The Aesthetic Visual Analysis (AVA) dataset [28] is another widely used benchmark in aesthetics research. However, the AVA dataset only provides binary labels for attributes, which is not suitable for our proposed framework as it requires rating scores. Additionally, many images in the AVA dataset are either heavily edited or synthetic, which limits its applicability. In contrast, the AADB dataset provides a more balanced distribution of professional and consumer photos, as well as a more diverse range of photo qualities [17]. Consequently, we utilize the AADB and EVA datasets for training and evaluating our multi-task CNN, as they are the most appropriate datasets for this study. These datasets are described below.
**AADB.** We utilize the Aesthetics with Attributes Database (AADB) [17], an image aesthetic benchmark containing 10,000 RGB images of size 256 \(\times\) 256 collected from the Flickr website. Each image in the AADB dataset has overall aesthetic scores provided by 5 different raters. The scores are on a scale of 1 to 5, with 5 being the most aesthetically pleasing score. Additionally, there are eleven attributes that, according to professional photographers, are known to impact aesthetic judgments; every image also has a score for each of these attributes. These attributes are balancing element, interesting content, color harmony, shallow depth of field, good lighting, motion blur, object emphasis, rule of thirds, vivid color, repetition, and symmetry. The raters indicated whether each attribute has a positive, negative, or null (zero) effect on the aesthetics of an image, except for repetition and symmetry, where only the presence or absence of the attribute is rated.
To obtain the ground-truth scores for each image in the AADB dataset, Kong _et al._ (2016) [17] calculated the average aesthetic scores provided by five different raters. Since only the average scores are reported, the individual rater scores are not available in the dataset. Then, the average scores are normalized to the range of [0,1], while all the attributes except for repetition and symmetry are normalized to the range of [-1,1]. Repetition and symmetry are normalized to the range of [0,1]. Two sample images from the AADB dataset, showcasing examples of both low and high aesthetics, are shown in Figure 2.
The AADB dataset includes eleven attributes in total, and the distribution of these attributes is presented in Fig. 3. Among them, the motion blur, repetition, and symmetry attributes are mostly rated neutral. Therefore, as mentioned in Section 2, some researchers excluded these three attributes from their multi-task neural networks. However, the motion blur attribute has both negative (700) and positive (397) scores, which may still provide useful information. Similarly, the repetition attribute has 1683 positive scores, while the symmetry attribute has 771 positive scores. It is worth noting that raters were not allowed to give negative scores for repetition and symmetry.
The AADB dataset has been split into three subsets: 500 images for validation, 1000 images for testing, and the remaining images for training, following the official partition [17]. For our experiments, we use this partition to train and test our multi-task CNN, allowing for direct comparison with other approaches.
**EVA.** The Explainable Visual Aesthetics (EVA) dataset [13] contains 4,070 images, each rated by at least 30 participants. The EVA dataset overcomes the limitations of previous datasets by collecting 30 to 40 votes per image using a disciplined approach that avoids noisy labels due to misinterpretations of the tasks or a limited number of votes per image [13]. Each image has an aesthetic quality rating on an 11-point discrete scale, whose extremes are labelled "least beautiful" (corresponding to 0) and "most beautiful" (corresponding to 10). The EVA dataset contains four attributes: light and color, composition and depth, quality, and semantics of the image. For each attribute, the images were rated on a four-level Likert scale (very bad, bad, good, and very good). Two sample images from the EVA dataset, showcasing examples of both low and high aesthetics, are shown in Figure 4.

Fig. 2: Example images from the training set of the AADB dataset. Each image has an overall aesthetic score and scores for 11 attributes. _(Left)_ High aesthetic: an image rated high on overall aesthetic score. _(Right)_ Low aesthetic: an image rated low on overall aesthetic score.

Fig. 3: Visualization of image attribute data in the training set of the AADB dataset, illustrating the distribution of negative, null, and positive levels for each attribute [17].
In contrast to the AADB dataset, Kang _et al._ (2020) [13] reported all ratings from the participants, so we calculated the average scores for each image. Unlike the AADB dataset, which has a predetermined train-validation-test split, there is no official split for the EVA dataset, since Kang _et al._ (2020) did not train any neural network. However, studies that predict only the overall aesthetic scores on the EVA dataset (see Section 2) have utilized different training and testing splits. For example, Duan _et al._ (2022) [5] and Li _et al._ (2023a) [23] employed a split of 3,500 training images and 570 testing images, while Li _et al._ (2023) [22] used 4,500 training images and 601 testing images. Similarly, Shaham _et al._ (2021) [34] utilized a split of 2,940 training images and 611 testing images.
### _Implementation Details_
We initialize the fully-connected layer weights in our multi-task CNN with the Glorot uniform initializer [6]. We use _mean squared error_ as the loss function on the training set \(X\) to minimize the error between the predictions and the ground-truth values:
\[E(W|X)=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2} \tag{2}\]
where \(n\) is the number of samples in the training set, \(y_{i}\) are the ground-truth scores, and \(\hat{y}_{i}\) are the predictions generated by Eq. 1.
Since neither dataset used in this study contains a large number of images, we apply horizontal flipping as data augmentation. We train our multi-task CNN in two stages. In the first stage, we apply the Adam algorithm [16] with an initial learning rate of 0.001 and decay rates of 0.9 and 0.999, respectively. The pretrained VGG16 network is composed of five blocks, each of which includes convolutional and pooling layers. During the first stage, we freeze the weights of all five blocks and train the multi-task CNN for 5 epochs with a minibatch size of 64. We closely monitor the training and validation losses during this stage and observe that the model is prone to overfitting if trained for longer, without any notable improvement in performance.
In the second stage, we fine-tune the multi-task CNN by unfreezing the last two convolutional layers in the fourth block of VGG16. We apply the Adam algorithm again, but adjust its learning rate using an exponential decay schedule. The initial learning rate is 0.0001, and it decays every 125 steps with a base of 0.50. We fine-tune the model with this setting for 3 epochs with a minibatch size of 64.
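A sketch of this two-stage schedule, continuing the architecture sketch from Section 3 and assuming preloaded arrays `x_train`, `y_train`, `x_val`, and `y_val` (placeholders, not our actual data pipeline), might look as follows:

```python
import tensorflow as tf

# Stage 1: all five VGG16 blocks remain frozen.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3,
                                                 beta_1=0.9, beta_2=0.999),
              loss="mse")
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=5, batch_size=64)

# Stage 2: unfreeze only block4_conv2 and block4_conv3, then fine-tune
# with an exponentially decaying learning rate (every 125 steps, base 0.50).
backbone.trainable = True
for layer in backbone.layers:
    layer.trainable = layer.name in ("block4_conv2", "block4_conv3")
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4, decay_steps=125, decay_rate=0.50)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=3, batch_size=64)
```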
## 5 Results and Discussion
In this section, we provide a comprehensive evaluation of our proposed multi-task CNN on both datasets, with an emphasis on its efficiency in assessing image aesthetics. We explore the impact of fine-tuning and analyze the results for both overall and attribute scores predicted by our model. Additionally, we compare the performance of our multi-task approach with a single-task setting for predicting the overall aesthetic scores of images using the same neural network architecture.
### _Performance analysis on the AADB dataset_
#### 5.1.1 Model evaluation and comparison with the state-of-the-art
Table I provides an overview of the performances achieved by the studies in the literature which use the AADB dataset to develop multi-task deep neural networks. These neural networks learn the eleven attributes of the AADB dataset along with the overall aesthetic score of images. The reported results show the progress made by previous studies, with each subsequently becoming the state-of-the-art in this field. Since our aim is similar to these previous studies, we evaluate how well our multi-task CNN performs in comparison to these neural networks.
To compare our multi-task CNN with previous studies, we use Spearman's rank correlation coefficient (\(\rho\)), a commonly used metric in this field. Table I summarizes the \(\rho\) values reported in each study, which represent the correlation between the overall aesthetic scores estimated by each multi-task neural network and the corresponding ground-truth scores in the test set. We calculate the same correlation for the overall aesthetic scores predicted by our multi-task CNN and find it to be significant at p \(<\) 0.01, which allows us to compare the performance of our model to those in the literature.

| **Method** | \(\rho\) |
| --- | --- |
| Kong _et al._ (2016) | 0.6782 |
| Hou _et al._ (2017) | 0.6889 |
| Pan _et al._ (2019)\({}^{a}\) | 0.6927 |
| Pan _et al._ (2019) | 0.7041 |
| Ours | **0.7067** |

TABLE I: Comparison of performances achieved by previous multi-task neural networks and our proposed multi-task CNN on the test set of the AADB dataset.

Fig. 4: Example images from the training set of the EVA dataset. Each image has an overall aesthetic score and scores for 4 attributes. _(Left)_ High aesthetic: an image rated high on overall aesthetic score. _(Right)_ Low aesthetic: an image rated low on overall aesthetic score.
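As a minimal illustration, \(\rho\) can be computed with SciPy as follows; the arrays below are random placeholders standing in for the ground-truth and predicted overall scores:

```python
import numpy as np
from scipy.stats import spearmanr

y_true = np.random.rand(1000)   # placeholder ground-truth overall scores
y_pred = np.random.rand(1000)   # placeholder model predictions
rho, p_value = spearmanr(y_true, y_pred)
print(f"Spearman's rho = {rho:.4f} (p = {p_value:.3g})")
```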
As shown in Table I, there has been a gradual improvement in the correlation between predicted overall scores and ground-truth scores over the years. The approach proposed by Pan _et al._ (2019) [30] has produced the highest correlation thus far. Their study includes two methods, the first of which is a multi-task deep neural network that achieves a Spearman's rank correlation of 0.6927. The second extends the first with an adversarial setting, as described in Section 2. Our multi-task CNN outperforms the first method (0.7067 \(>\) 0.6927). When we compare our model with the adversarial learning setting proposed by Pan _et al._ (2019) [30], we find that our neural network outperforms theirs again (0.7067 \(>\) 0.7041). Even though the differences among current approaches are small in terms of Spearman's rank correlation, our multi-task CNN performs slightly better than previous models.
In addition to achieving the highest Spearman's rank correlation, our multi-task CNN has other advantages over the state-of-the-art approach proposed by Pan _et al._ (2019) [30]. To compare the complexity of the two approaches, we examine the number of parameters in each model. Table II compares the neural network architectures of the current state-of-the-art [30] and our multi-task CNN in terms of the number of parameters. While Pan _et al._ (2019) [30] use ResNet-50 for feature extraction, we utilize VGG16, which has fewer parameters for this particular problem. Furthermore, the fully-connected layers of our model have roughly 15 times fewer parameters than those of their model, and the output layer of our multi-task CNN has fewer parameters as well. Moreover, the adversarial setting used by Pan _et al._ (2019) [30] makes the training of their neural network more complex.
We also evaluate the predictions made by our multi-task CNN and investigate the issue of overfitting. Fig. 5 shows that our model can predict overall aesthetic scores across a wide range. While the ground-truth overall aesthetic scores in the test data range from 0.05 to 1.0, our model's predictions range from 0.26 to 0.90. We also report the frequencies and percentages of ground-truth overall aesthetic scores for different intervals of the test data in Table III. Based on these data, our model's predictions do not reach the 39 samples falling in the intervals [0.05-0.10] and [0.10-0.20], nor the 15 samples falling in the interval [0.90-1.00]. In other words, our multi-task CNN makes successful predictions for approximately 95\(\%\) of the test data, with only a small percentage of samples falling outside its prediction range. These results also indicate that there is no issue of overfitting.
To further evaluate the performance of our multi-task CNN, we visually examine its predictions and present the most successful predictions in Fig. 6 and the least successful ones in Fig. 7, respectively. As shown in Fig. 6, our multi-task CNN makes accurate predictions for most images, with the exception of one low-aesthetic image with a ground-truth score of 0.15. On the other hand, in the least successful predictions shown in Fig. 7, our model tends to predict high scores for low-aesthetic images, while giving lower scores to high-aesthetic images.
Overall, our multi-task CNN achieves the highest Spearman's rank correlation for overall aesthetic scores while simultaneously predicting scores for 11 attributes. Notably, our approach accomplishes this with fewer parameters, making it more computationally efficient than the state-of-the-art method proposed by Pan _et al._ (2019) [30]. By combining a simplified neural network architecture with superior predictive performance, our approach represents a significant advancement in the field of image aesthetic assessment.

Fig. 6: Comparison of ground-truth overall aesthetic scores and corresponding predictions by our multi-task CNN on the test data of the AADB dataset. This figure shows the most successful predictions, ranging from low aesthetic images to high aesthetic images.

Fig. 7: Comparison of ground-truth overall aesthetic scores and corresponding predictions by our multi-task CNN on the test data of the AADB dataset. This figure shows the least successful predictions, ranging from low aesthetic images to high aesthetic images.
#### 5.1.2 Comparison with human performance
In addition to evaluating the performance of our multi-task CNN in rating image aesthetics, we compare its results with human performance on the AADB dataset. Kong _et al._ (2016) [17] previously reported the Spearman's rank correlation between each individual's ratings and the ground-truth average score on this dataset. A subset of raters was selected based on the number of images they had rated. In their study, they found that the more images an individual rated, the more stable their aesthetic score rankings became. We utilize these data and compare them to the performance of our model, as shown in Table IV.
Based on these correlations, we see that human performance improves as the number of images rated by the same observer increases. It is also clear that our multi-task CNN performs above the level of human consistency averaged across all raters. Only when compared to the most experienced raters (i.e., the 42 raters who rated \(>\)200 images) does our model perform slightly worse. Our experiments demonstrate that our multi-task CNN achieves near-human performance in predicting the overall aesthetic scores on the AADB dataset, narrowing the performance gap between machines and humans in this domain.
#### 5.1.3 Attribute predictions and the fine-tuning effect
To further evaluate the performance of our multi-task CNN, we analyze its ability to predict image attributes in the AADB dataset and report the results in Table V. The table displays the Spearman's rank correlations between the ground-truth scores and the corresponding predictions made by our multi-task CNN for each attribute. Moreover, we investigate the effect of fine-tuning and include those results in Table V. As described in Section 4.2, we train our multi-task CNN and then fine-tune it by unfreezing the last two convolutional layers in the fourth block of VGG16 (\(block4\_conv2\) and \(block4\_conv3\)). After fine-tuning, we observe an increase in the correlations for all attributes except symmetry, as well as an increase in the correlation for the overall aesthetic score. To gain insight into the two fine-tuned convolutional layers, we illustrate the activation maps generated using Grad-CAM [33] in Fig. 8 for two images from the test set of the AADB dataset, one with a low aesthetic score and the other with a high aesthetic score.
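For illustration, a minimal Grad-CAM sketch is given below; it assumes a flat Keras model in which the VGG16 layers are reachable by name (with a nested backbone one would index into the sub-model instead), and the layer and output-unit choices are placeholders:

```python
import tensorflow as tf

def grad_cam(model, image, layer_name="block4_conv3", unit=0):
    # Expose the chosen conv layer's activations alongside the outputs.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, unit]                    # e.g., overall-score unit
    grads = tape.gradient(score, conv_out)        # d(score)/d(activations)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # pool gradients per channel
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)
    return tf.nn.relu(cam)[0].numpy()             # non-negative heatmap
```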
We report the Spearman's rank correlations on the AADB dataset in Fig. 9. This figure shows the correlations for the ground-truth scores over the whole dataset on the left side, whereas on the right side we present the correlations for the predictions made by our multi-task CNN on the test data. The Spearman's rank correlation measures the strength and direction of the relationship between two variables and ranges from -1 to 1: a correlation of 1 indicates a perfect positive correlation, -1 indicates a perfect negative correlation, and 0 indicates no correlation. We examine the correlations between the overall aesthetic score and the 11 attributes. Our model's highest correlations among all attributes are for the light (\(\rho\)=0.96) and content (\(\rho\)=0.95) attributes. This finding is consistent with the AADB dataset, where content has the highest correlation with the overall aesthetic scores (\(\rho\)=0.70) and light follows in second place (\(\rho\)=0.58). Furthermore, when we compare the top-five correlations for our multi-task CNN (the light, content, color harmony, vivid color, and rule of thirds attributes), we see similar results in the AADB dataset, indicating that our model can capture the relationships between the overall aesthetic scores and the attributes. On the other hand, when we examine the lowest correlations in Fig. 9, we find that our model also exhibits lower correlations for the motion blur, symmetry, and repetition attributes, consistent with the human data. Accordingly, we can conclude that the predictions made by our multi-task CNN closely match human interpretation.
#### 5.1.4 Single-task versus multi-task setting
We also examined what happens when our proposed model is trained as a single-task neural network instead of a multi-task one, i.e., when the network learns only the overall aesthetic score and not the attribute scores. We report our result in Table VI and compare it to those of Pan _et al._ (2019) [30]. In terms of single-task networks, the Spearman's rank correlation of our method is slightly higher than that of Pan _et al._ on the test set of the AADB dataset. This indicates that our neural network performs slightly better in the single-task setting while utilizing fewer parameters, highlighting the effectiveness of our approach. Furthermore, we also report the multi-task results for both models to enable a comparison with the single-task setting. Both models show that multi-task learning improves neural network performance, as the Spearman's rank correlations between the predicted and ground-truth overall aesthetic scores are consistently higher for the multi-task neural networks than for the single-task ones.

| **# images rated** | **# raters** | \(\rho\) |
| --- | --- | --- |
| (0, 100) | 190 | 0.6738 |
| [100, 200) | 65 | 0.7013 |
| [200, \(\infty\)) | **42** | 0.7112 |
| Our method | – | **0.7067** |

TABLE IV: Comparison between human performance and our multi-task CNN on the AADB dataset.

| **Attributes** | **Training** | **Fine-tuning** |
| --- | --- | --- |
| Content | 0.546 | 0.593 |
| Vivid Color | 0.617 | 0.669 |
| Object Emphasis | 0.600 | 0.639 |
| Color Harmony | 0.437 | 0.484 |
| Depth of Field | 0.466 | 0.497 |
| Lighting | 0.396 | 0.445 |
| Balancing Elements | 0.264 | 0.267 |
| Rule of Thirds | 0.216 | 0.235 |
| Motion Blur | 0.098 | 0.109 |
| Symmetry | 0.194 | 0.177 |
| Repetition | 0.322 | 0.355 |
| **Overall** | **0.650** | **0.707** |

TABLE V: Spearman's rank correlations between the ground-truth scores for each attribute and the predictions by our multi-task CNN on the test data of the AADB dataset. The table shows the correlations after training and after fine-tuning separately, in addition to the correlations for the overall aesthetic score.
### _Performance analysis on the EVA dataset_
#### 5.2.1 Model evaluation
The second benchmark we use in this study, the EVA dataset, provides access to all participants' ratings. To investigate the performance of our multi-task CNN on this dataset, we first calculated the average score for each image with respect to each attribute and the overall aesthetic score. Table VII reports the minimum and maximum averages for each attribute and the overall aesthetic score in the EVA dataset.

| **Method** | **Single-task** | **Multi-task** |
| --- | --- | --- |
| Pan _et al._ (2019) | 0.6833 | 0.6927 |
| Ours | **0.6890** | **0.7067** |

TABLE VI: Performance comparison between single-task and multi-task neural networks in terms of Spearman's rank correlations.

Fig. 8: The activation maps for two input images from the test set of the AADB dataset. These maps highlight the regions of the input image that contributed the most to the neural network's prediction. The heatmap is overlaid on top of the input image to provide a visualization of which areas of the image are most relevant for the task.

Fig. 9: Spearman's rank correlations between the overall aesthetic scores and the attribute scores on the AADB dataset. _(a)_: the ground-truth scores over the whole dataset; _(b)_: the predictions by our multi-task CNN on the test data.
Since there are four attributes in the EVA dataset (light and color, composition and depth, quality, semantics), we modify the output layer of our multi-task CNN to include five units (one for the overall aesthetic score and one for each attribute). Consequently, the output layer of our multi-task CNN consists of 325 parameters for the EVA dataset. We applied dropout [36] with a rate of 0.25 to the second fully-connected layer with 64 hidden units, which precedes the output layer. Also, since there is no official train-test split for the EVA dataset, we follow the two studies [23, 5] that use 3,500 images for training and 570 for testing. Table VIII presents the performance of our multi-task CNN on this dataset, also highlighting the effect of fine-tuning. This table summarizes the Spearman's rank correlations between the overall aesthetic scores estimated by our multi-task CNN and the corresponding ground-truth scores in the test set. Here, we again examined what happens when our proposed model is trained as a single-task neural network instead of a multi-task one, and we observed that the multi-task setting outperforms it. Consistent with the findings in Section 5.1.4, we note that predicting the attributes along with the overall aesthetic score has a positive effect on the overall score for the same neural network architecture.
Similar to the evaluation of the AADB dataset in the previous section, we evaluate the predictions made by our multi-task CNN and investigate the issue of overfitting. We compare the actual overall aesthetic scores of test images in the EVA dataset to the predicted scores generated by our model in Fig. 10. While the ground-truth overall aesthetic scores in the test data range from 2.46 to 9.0, our model's predictions range from 5.09 to 8.13. We also report the frequencies and percentages of ground-truth overall aesthetic scores for different intervals of the test data in Table IX. Based on these data, our model's predictions do not reach the 83 samples falling in the interval [1.70-5.00], and it misses some samples falling in the highest intervals. In other words, our multi-task CNN makes successful predictions for approximately 85\(\%\) of the test data, with only a small percentage of samples falling outside its prediction range. These results also indicate that there is no issue of overfitting.
In order to further evaluate the performance of our multi-task CNN, we visually examine its predictions and present the most successful predictions in Fig. 11 and the least successful ones in Fig. 12, respectively. As shown in Fig. 11, our multi-task CNN makes accurate predictions for most images in the test set of EVA dataset. However, it has more difficulty in predicting scores for the low aesthetic images compared to the AADB dataset. On the other hand, in the least successful predictions shown in Fig. 12, our model tends to predict high scores for the low aesthetic images, while giving lower scores to the high aesthetic ones. This behavior of the model is consistent with the results obtained from the AADB dataset.
#### 5.2.2 Attribute predictions and the fine-tuning effect

To further evaluate the performance of our multi-task CNN, we analyze its ability to predict image attributes in the EVA dataset and report the results in Table X. The table displays the Spearman's rank correlations between the ground-truth scores and the corresponding predictions made by our multi-task CNN for each attribute. Moreover, we investigate the effect of fine-tuning and include those results in Table X. This time, the correlations increase after fine-tuning for all the attributes. Similar to the evaluation on the AADB dataset, we illustrate the activation maps generated using Grad-CAM [33] in Fig. 13 for two images from the test set of the EVA dataset, one with a low aesthetic score and the other with a high aesthetic score.
Lastly, we report the Spearman's rank correlations on the EVA dataset in Fig. 14. This figure shows the correlations for the ground-truth scores over the whole dataset on the left side, whereas on the right side we present the correlations for the predictions made by our multi-task CNN on the test data. Our model's highest correlations among all attributes are for the composition and depth (\(\rho\)=0.97) and semantics (\(\rho\)=0.95) attributes. This finding is consistent with the EVA dataset, where composition and depth has the highest correlation with the overall aesthetic scores (\(\rho\)=0.89) and semantics follows in second place (\(\rho\)=0.87). The lowest correlation belongs to the quality attribute, which is also consistent with the human data. Based on our evaluation, we can conclude that the predictions made by our multi-task CNN closely align with human interpretation on the EVA dataset as well.

Fig. 11: Comparison of ground-truth overall aesthetic scores and corresponding predictions by our multi-task CNN on the test data of the EVA dataset. This figure shows the most successful predictions, ranging from low aesthetic images to high aesthetic images.

Fig. 12: Comparison of ground-truth overall aesthetic scores and corresponding predictions by our multi-task CNN on the test data of the EVA dataset. This figure shows the least successful predictions, ranging from low aesthetic images to high aesthetic images.
### _Cross-dataset evaluation_
In the final part of our analysis, we investigate the generalization capability of our multi-task CNN by conducting a cross-dataset evaluation. Firstly, we examine the performance of our model trained on the AADB dataset when tested on the test set of the EVA dataset. Subsequently, we reverse the process and evaluate the performance of our model trained on the EVA dataset when tested on the test set of the AADB dataset. The results of these cross-dataset evaluations are summarized in Table XI. These results show the Spearman's rank correlations between the ground-truth overall aesthetic scores and the predictions made by our multi-task CNN.
Fig. 14: Spearman's rank correlations between the overall aesthetic scores and the attribute scores on the EVA dataset. _(a)_: the ground-truth scores over the whole dataset; _(b)_: the predictions by our multi-task CNN on the test data.
| **Attributes** | **Training** | **Fine-tuning** |
| --- | --- | --- |
| Light and color | 0.610 | 0.709 |
| Composition and depth | 0.571 | 0.655 |
| Quality | 0.463 | 0.548 |
| Semantics | 0.586 | 0.659 |

TABLE X: Spearman's rank correlations between the ground-truth scores for each attribute and the predictions by our multi-task neural network on the test set of the EVA dataset. The table presents the results obtained after training and after fine-tuning.
Fig. 13: The activation maps for two input images from the test set of the EVA dataset. These maps highlight the regions of the input image that contributed the most to the neural network's prediction. The heatmap is overlaid on top of the input image to provide a visualization of which areas of the image are most relevant for the task.
| **Train dataset** | **Test: AADB** [17] | **Test: EVA** [13] |
| --- | --- | --- |
| AADB | 0.707 | 0.321 |
| EVA | 0.441 | 0.695 |

TABLE XI: Spearman's rank correlations between the ground-truth overall aesthetic scores and the predictions by our multi-task CNN for the cross-dataset evaluation. |
2306.09623 | From Hypergraph Energy Functions to Hypergraph Neural Networks | Hypergraphs are a powerful abstraction for representing higher-order
interactions between entities of interest. To exploit these relationships in
making downstream predictions, a variety of hypergraph neural network
architectures have recently been proposed, in large part building upon
precursors from the more traditional graph neural network (GNN) literature.
Somewhat differently, in this paper we begin by presenting an expressive family
of parameterized, hypergraph-regularized energy functions. We then demonstrate
how minimizers of these energies effectively serve as node embeddings that,
when paired with a parameterized classifier, can be trained end-to-end via a
supervised bilevel optimization process. Later, we draw parallels between the
implicit architecture of the predictive models emerging from the proposed
bilevel hypergraph optimization, and existing GNN architectures in common use.
Empirically, we demonstrate state-of-the-art results on various hypergraph node
classification benchmarks. Code is available at
https://github.com/yxzwang/PhenomNN. | Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Wipf | 2023-06-16T04:40:59Z | http://arxiv.org/abs/2306.09623v2 | # From Hypergraph Energy Functions to Hypergraph Neural Networks
###### Abstract
Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest. To exploit these relationships in making downstream predictions, a variety of hypergraph neural network architectures have recently been proposed, in large part building upon precursors from the more traditional graph neural network (GNN) literature. Somewhat differently, in this paper we begin by presenting an expressive family of parameterized, hypergraph-regularized energy functions. We then demonstrate how minimizers of these energies effectively serve as node embeddings that, when paired with a parameterized classifier, can be trained end-to-end via a supervised bilevel optimization process. Later, we draw parallels between the implicit architecture of the predictive models emerging from the proposed bilevel hypergraph optimization, and existing GNN architectures in common use. Empirically, we demonstrate state-of-the-art results on various hypergraph node classification benchmarks. Code is available at [https://github.com/yxzwang/PhenomNN](https://github.com/yxzwang/PhenomNN).
Machine Learning, Hypergraph Energy Functions, Hypergraph Neural Networks
## 1 Introduction
Hypergraphs represent a natural extension of graphs, whereby each hyperedge can link an arbitrary number of hypernodes (or nodes for short). This flexibility more directly facilitates the modeling of higher-order relationships between entities (Chien et al., 2022; Benson et al., 2016, 2017), leading to strong performance in diverse real-world situations (Agarwal et al., 2005; Li and Milenkovic, 2017; Feng et al., 2019; Huang and Yang, 2021). Currently, hypergraph-based modeling techniques frequently rely, either implicitly or explicitly, on some type of expansion (e.g., clique, star), which effectively converts the hypergraph into a regular graph with a new edge set and possibly additional nodes as well. For example, one approach is to first extract a particular expansion graph and then build a graph neural network (GNN) model on top of it (Zhang et al., 2022).
We instead adopt a different starting point that allows us both to incorporate multiple expansions if needed, and to transparently explore the integrated role of each expansion within a unified framework. To accomplish this, our high-level strategy is to first define a family of parameterized hypergraph energy functions, with regularization factors that we later show closely align with popular existing expansions. We then demonstrate how the minimizers of such energy functions can be treated as learnable node embeddings and trained end-to-end via a bilevel optimization process. Namely, the lower-level minimization process produces optimal features contingent on a given set of parameters, while the higher-level process trains these parameters (and hence the features they influence) w.r.t. downstream node classification tasks.
To actualize this goal, after presenting related work in Section 2, we provide relevant background and notation w.r.t. hypergraphs in Section 3. The remainder of the paper then presents our primary contributions, which can be summarized as follows:
* We present a general class of hypergraph-regularized energy functions in Section 4 and elucidate their relationship with traditional hypergraph expansions that have been previously derived from spectral graph theory.
* We demonstrate how minimizers of these energy functions can serve as principled, trainable features for hypergraph prediction tasks in Sections 5 and 6. And by approximating the energy minimizers using provably-convergence proximal gradient steps, the resulting architecture borrows the same basic structure as certain graph neural network layers that: (i) have been fine-tuned to accommodate hypergraphs, and (ii) maintain
the inductive bias infused by the original energy function.
* The resulting framework, which we name **PhenomNN** for _Purposeful Hyper-Edges iN Optimization Motivated Neural Networks_, is applied to a multitude of hypergraph node classification benchmarks in Section 7, achieving competitive or SOTA performance in each case.
## 2 Related Work
**Hypergraph Expansions/Neural Networks**. Hypergraphs are frequently transformed into graphs by expansion methods including the clique and star expansions. An extensive spectral analysis study of different hypergraph expansions is provided in (Agarwal et al., 2006), but not from the vantage point of energy functions as is our focus. An alternative line expansion (Yang et al., 2020) has also been proposed that can be viewed in some sense as a hybrid combination of clique and star expansions, although this involves the creation of additional nodes, and there may be scalability issues. In terms of predictive models, previous spectral-based hypergraph neural networks are analogous to applying GNNs on clique expansions, including HGNN (Feng et al., 2019), HCHA (Bai et al., 2021), and H-GNNs (Zhang et al., 2022). Meanwhile, FastHyperGCN (Yadati et al., 2019) and HyperGCN (Yadati et al., 2019) reduce a hyperedge into a subgraph using Laplacian operators (Chan and Liang, 2020), which can be viewed as a modified form of clique expansion. HGAT (Ding et al., 2020), HNHN (Dong et al., 2020), HyperSAGE (Arya et al., 2020), UniGNN (Huang and Yang, 2021), (Srinivasan et al., 2021), set-based models (Chien et al., 2022), (Heydari and Livi, 2022), Aponte et al. (2022), and HEAT (Georgiev et al., 2022) take into account hyperedge features and use a message-passing framework, which can be interpreted as GNNs applied to the star expansion graph. And finally, (Wang et al., 2023) use gradient diffusion processes to motivate a broad class of hypergraph neural networks, although in the end there is not actually any specific energy function that is being minimized by the proposed model layers.
**Graph Neural Networks from Unfolded Optimization**. A variety of recent work has demonstrated that robust GNN architectures can be formed via graph propagation layers that mirror the unfolded descent iterations of a graph-regularized energy function (Chen and Eldar, 2021; Liu et al., 2021; Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021; Ahn et al., 2022). In doing so, the node embeddings at each layer can be viewed as increasingly refined approximations of an interpretable energy minimizer, that may be designed, for example, to mitigate GNN oversmoothing or perhaps inject robustness to spurious edges. Furthermore, these learnable embeddings can be integrated within a bilevel optimization framework (Wang et al., 2016) for supervised training. While at a high level we adopt a similar conceptual starting point, we nonetheless introduce non-trivial adaptations that are particular to the hypergraph domain, where this framework has not yet been extensively explored, and provide hypergraph-specific insights along the way.
## 3 Hypergraph Background and Notation
A hypergraph can be viewed as a higher-order form of graph whereby edges can encompass more than two nodes. Specifically, let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) denote a hypergraph, where \(\mathcal{V}\) is a set of \(n=|\mathcal{V}|\) vertices and \(\mathcal{E}\) is a set of \(m=|\mathcal{E}|\) hyperedges. In contrast to a traditional graph, each hyperedge \(e_{k}\in\mathcal{E}\), can link an arbitrary number of nodes. The corresponding hypergraph connectivity structure is conveniently represented in a binary incidence matrix \(B\in\mathbb{R}^{n\times m}\), where \(B_{ik}=1\) if node \(v_{i}\in e_{k}\), otherwise \(B_{ik}=0\). We also use \(D_{H}\in\mathbb{R}^{m\times m}\) to denote the degree matrix of the hypergraph, where \(m_{e_{k}}\triangleq D_{H}[k,k]=\sum_{i}B_{ik}\).
And finally, we define input features and embeddings for both nodes and hyperedges. In this regard, \(X\in\mathbb{R}^{n\times d_{x}}\) represents a matrix of \(d_{x}\)-dimensional initial/given node features, while \(Y\in\mathbb{R}^{n\times d_{y}}\) refers to the corresponding node embeddings of size \(d_{y}\) we seek to learn. Analogously, \(U\in\mathbb{R}^{m\times d_{u}}\) and \(Z\in\mathbb{R}^{m\times d_{z}}\) are the initial edge features and learnable edge embeddings respectively. While here we have presented the most general form, we henceforth just assume \(d=d_{x}=d_{y}=d_{z}=d_{u}\) for simplicity.
## 4 A Family of Hypergraph Energy Functions
Our goal is to pursue hypergraph-based energy functions whose minima produce embeddings that will ultimately be useful for downstream predictive tasks. In this section, we first present an initial design of these functions followed by adaptations for handling the situation where no edge features \(U\) are available. We then show how in certain circumstances the proposed energy functions reduce to special cases that align with hypergraph star and clique expansions, before concluding with revised, simplified energy expressions informed by these considerations.
### Initial Energy Function Design and Motivation
We begin with the general form
\[\ell(Y,Z;\psi)=g_{1}(Y,X;\psi)+g_{2}(Z,U;\psi)+g_{3}(Y,Z,\mathcal{G};\psi) \tag{1}\]
where \(g_{1}(Y,X;\psi)\) and \(g_{2}(Z,U;\psi)\) are non-structural regularization factors over node and edge representations respectively, while \(g_{3}(Y,Z,\mathcal{G};\psi)\) explicitly incorporates hypergraph structure. In all cases \(\psi\) represents parameters that
control the shape of the energy, with particular choices that should be clear from the context (note that these parameters need not all be shared across terms; however, we nonetheless lump them together for notational convenience).
For the non-structural terms in (1), a plausible design criteria is to adopt functions that favor embeddings (either node or edge) that are similar to the corresponding input features or some transformation thereof. Hence we select
\[g_{1}(Y,X;\psi) = \sum_{i=1}^{n}\|y_{i}-f(x_{i};W_{x})\|_{2}^{2}\] \[g_{2}(Z,U;\psi) = \sum_{k=1}^{m}\|z_{k}-f(u_{k};W_{u})\|_{2}^{2}, \tag{2}\]
noting that both cases favor embeddings with minimal \(\ell_{2}\) distance from the trainable base predictor, and by extension, the initial features \(\{X,U\}\). In practice, the function \(f\) can be implemented as an MLP with node/edge weights \(W_{x}\) and \(W_{u}\) respectively.
Turning to \(g_{3}(Y,Z,\mathcal{G};\psi)\), our design is guided by the notion that:
1. Both node and edge embeddings should be individually constrained to a shared subset of \(\mathbb{R}^{d}\), e.g., consistent with most GNN architectures we may enforce non-negative embeddings;
2. Nodes sharing an edge should be similar when projected into an appropriate space, and;
3. Nodes within an edge set should have similar embeddings to the edge embedding, again, when suitably projected.
With these desiderata in mind, we adopt
\[g_{3}(Y,Z,\mathcal{G};\psi)=\sum_{i=1}^{n}\phi(y_{i})+\sum_{k=1}^{m}\phi(z_{k})+\lambda_{0}\overbrace{\sum_{e_{k}\in\mathcal{E}}\sum_{i\in e_{k}}\sum_{j\in e_{k}}\|y_{i}H_{0}-y_{j}\|_{2}^{2}}^{(a)}+\lambda_{1}\overbrace{\sum_{e_{k}\in\mathcal{E}}\sum_{i\in e_{k}}\|y_{i}H_{1}-z_{k}\|_{2}^{2}}^{(b)} \tag{3}\]
For the first two terms we choose \(\phi:\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}\) defined as \(\phi(p)\triangleq\sum_{j=1}^{d}\mathcal{I}_{\infty}[p_{j}<0]\), where \(\mathcal{I}_{\infty}\) is an indicator function that assigns an infinite penalty to any \(p_{j}<0\). This ensures that all node and edge embeddings must be non-negative to achieve finite energy. Next, the term labeled (\(a\)) in (3) directly addresses criterion (ii). We note that the summation is over both indices \(i\) and \(j\) so that the symmetric counterpart, where the roles of nodes \(v_{i}\) and \(v_{j}\) are switched, is effectively included in the summation. And finally, criterion (iii) is handled by the last term, labeled (\(b\)). Here the node and edge embeddings play different roles and exhibit a natural asymmetry.1 Incidentally, the projections \(H_{0}\) and \(H_{1}\) can be viewed as compatibility matrices, initially introduced for label or belief propagation (Eswaran et al., 2017; Yamaguchi et al., 2016; Zhou et al., 2003) to provide additional flexibility to the metric in which entities are compared; for term (\(a\)) \(H_{0}\) facilitates the handling of nodes with potentially heterophilous relationships, while for term (\(b\)) \(H_{1}\) accommodates the comparison of fundamentally different embedding types.
Footnote 1: While we could consider adding an additional factor \(||y_{i}-z_{k}H_{2}||_{2}^{2}\) to this term, we found that in practice it was not necessary.
### Handling a Lack of Edge Features
In some practical situations there may not be any initial hyperedge features \(U\). In such cases we could potentially modify \(\ell(Y,Z;\psi)\) accordingly in multiple different ways. First, and perhaps simplest, we can simply remove \(g_{2}(Z,U;\psi)\) from (1). We will explore the consequences of this option further in Section 4.3. But for tasks more related to hyperedge classification, it may be desirable to maintain this term for additional flexibility. Hence as a second option, we could instead create pseudo features \(\widetilde{U}\) with \(\widetilde{u}_{k}=\text{AGG}\left[\{x_{i}|i\in e_{k}\}\right]\) for all \(e_{k}\in\mathcal{E}\) for some aggregation function AGG. Or in a similar spirit, we could adopt \(f(u_{k};W_{u})\equiv\text{AGG}\left[\{f(x_{i};W_{x})|i\in e_{k}\}\right]\) such that aggregation now takes place after the initial feature transformations.
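As a small sketch of this second option, with AGG chosen as the mean (an illustrative choice) and \(B\), \(X\) as defined in Section 3:

```python
import numpy as np

B = np.array([[1, 0], [1, 1], [1, 1], [0, 1]], dtype=float)  # toy: n=4, m=2
X = np.random.rand(4, 8)                                     # node features
# Pseudo edge features via mean aggregation: u_k = mean of x_i over i in e_k.
U_tilde = (B.T @ X) / B.sum(axis=0)[:, None]                 # shape (m, d)
```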
### Analysis of Simplified Special Cases
Because most hypergraph benchmarks for node classification, and many real-world use cases, involve data devoid of hyperedge features, in this section we more closely examine simplifications of (1) that arise when \(g_{2}(Z,U;\psi)\) is removed. For analysis purposes, it is useful to first introduce two representative hypergraph expansions, both of which can be viewed as converting the original hypergraph to a regular graph, which is tantamount to the assumption that edges in these expanded graphs involve only pairs of nodes.
**Clique Expansion.** For the _clique expansion_(Zien et al., 1999), we form the regular graph \(\mathcal{G}_{C}(\mathcal{V},\mathcal{E}_{C})\), where the node set \(\mathcal{V}\) remains unchanged while the edge set \(\mathcal{E}_{C}\) is such that, for all \(e_{k}\in\mathcal{E}\), we have that \(\{v_{i}|i\in e_{k}\}\) forms a complete subgraph of \(\mathcal{G}_{C}\). We define \(L_{C}\), \(A_{C}\), and \(D_{C}\) as the corresponding Laplacian, adjacency matrix, and degree matrix of \(\mathcal{G}_{C}\) respectively.
**Star Expansion.** In contrast, the _star expansion_(Zien et al., 1999) involves creating the bipartite graph \(\mathcal{G}_{S}(\mathcal{V}_{S},\mathcal{E}_{S})\), with revised node set \(\mathcal{V}_{S}=\{v_{1},\ldots,v_{n+m}\}\) and edge set \(\mathcal{E}_{S}\) defined such that \(\{v_{i},v_{n+k}\}\in\mathcal{E}_{S}\) iff
\(B_{ik}=1\). Conceptually, the resulting graph is formed with a new node associated with each hyperedge (from the original hypergraph), and an edge connecting every such new node to the original nodes within the corresponding hyperedges. Additionally, \(L_{S}=D_{S}-A_{S}\) is the revised Laplacian matrix, with \(D_{S}\) and \(A_{S}\) the degree and adjacent matrices of the star expansion graph.
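As a minimal illustration of these constructions (separate from the model itself), the following numpy sketch builds the clique- and star-expansion operators, along with the reduced star operator \(\bar{L}_{S}\) that appears in Proposition 4.1 below, from a toy incidence matrix:

```python
import numpy as np

B = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)   # toy incidence: n=4 nodes, m=2 hyperedges
n, m = B.shape
D_H = np.diag(B.sum(axis=0))          # hyperedge degrees m_{e_k}

# Clique expansion: entries of B B^T count shared hyperedges; remove the
# diagonal to avoid self-loops.
A_C = B @ B.T
np.fill_diagonal(A_C, 0.0)
L_C = np.diag(A_C.sum(axis=1)) - A_C

# Star expansion: bipartite graph over the n + m nodes.
A_S = np.block([[np.zeros((n, n)), B],
                [B.T, np.zeros((m, m))]])
L_S = np.diag(A_S.sum(axis=1)) - A_S

# Reduced star operator from Proposition 4.1: bar{A}_S = B D_H^{-1} B^T.
A_S_bar = B @ np.linalg.inv(D_H) @ B.T
L_S_bar = np.diag(A_S_bar.sum(axis=1)) - A_S_bar
```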
**Unification.** We now introduce simplifying assumptions to link the proposed energy with the Laplacians of clique and star expansions as follows:
**Proposition 4.1**.: _Suppose \(g_{2}(Z,U;\psi)\) is removed from (1), \(H_{0}=H_{1}=I\), and define \(Z^{*}\triangleq D_{H}^{-1}B^{T}Y\). It then follows that_
\[\begin{split}\min_{Z}\ \ell(Y,Z;\psi)&=g_{1}(Y,X;\psi)+\sum_{i=1}^{n}\phi(y_{i})+2\lambda_{0}\text{tr}[Y^{T}L_{C}Y]+\lambda_{1}\text{tr}\left(\begin{bmatrix}Y\\ Z^{*}\end{bmatrix}^{T}L_{S}\begin{bmatrix}Y\\ Z^{*}\end{bmatrix}\right)\\ &=g_{1}(Y,X;\psi)+\sum_{i=1}^{n}\phi(y_{i})+2\lambda_{0}\text{tr}[Y^{T}L_{C}Y]+\lambda_{1}\text{tr}[Y^{T}\bar{L}_{S}Y],\end{split} \tag{4}\]
_where \(\bar{L}_{S}\triangleq\bar{D}_{S}-\bar{A}_{S}\), with \(\bar{A}_{S}\triangleq BD_{H}^{-1}B^{T}\) and \(\bar{D}_{S}\) a diagonal matrix with nonzero elements formed as the corresponding row-sums of \(\bar{A}_{S}\). Moreover, if \(\mathcal{G}\) is \(m_{e}\)-uniform,2 then under the same assumptions_
Footnote 2: An \(m_{e}\)-uniform hypergraph is such that every hyperedge joins exactly \(m_{e}\) nodes. Hence a regular graph is by default a 2-uniform hypergraph.
\[\min_{Z}\ \ell(Y,Z;\psi)=g_{1}(Y,X;\psi)+\sum_{i=1}^{n}\phi(y_{i})+\beta \text{tr}[Y^{T}L_{C}Y], \tag{5}\]
_where \(\beta\triangleq 2\lambda_{0}+\frac{\lambda_{1}}{m_{e}}\)._
All proofs are deferred to Appendix C. This last result demonstrates that, under the stated assumptions, the graph-dependent portion of the original hypergraph energy, after optimizing away the influence of \(Z\), can be reduced to a weighted quadratic penalty involving the graph Laplacian of the clique expansion. Moreover, this factor further resolves as
\[\text{tr}[Y^{T}L_{C}Y]=\frac{1}{2}\sum_{e_{k}\in\mathcal{E}}\sum_{i\in e_{k}} \sum_{j\in e_{k}}||y_{i}-y_{j}||_{2}^{2}. \tag{6}\]
Of course in more general settings, for example when \(H_{0}\neq H_{1}\neq I\), or when \(\phi(p)\neq\sum_{j=1}^{d}\mathcal{I}_{\infty}[p_{i}<0]\), this equivalence will _not_ generally hold.
### Revised Hypergraph Energy Functions
The analysis from the previous sections motivates two practical, revised forms of our original energy from (1), which we will later use for all of our empirical evaluations. For convenience, we define
\[\ell(Y;\psi)\triangleq\ell(Y,Z=Z^{*};\psi). \tag{7}\]
Then the first, more general variant, we adopt is
\[\ell(Y;\psi=\{W,H_{0},H_{1}\})=\|Y-f(X;W)\|_{\mathcal{F}}^{2}+\sum_{i}\phi(y_{i})+\lambda_{0}\,\text{tr}\left[(YH_{0})^{T}D_{C}YH_{0}-2(YH_{0})^{T}A_{C}Y+Y^{T}D_{C}Y\right]+\lambda_{1}\,\text{tr}\left[(YH_{1})^{T}\bar{D}_{S}YH_{1}-2(YH_{1})^{T}BZ^{*}+Z^{*T}D_{H}Z^{*}\right], \tag{8}\]
where \(\bar{D}_{S}\) is defined as in Proposition 4.1. Moreover, to ease later exposition, we have overloaded the definition of \(f\) such that \(\|Y-f(X;W)\|_{\mathcal{F}}^{2}\equiv\sum_{i=1}^{n}\|y_{i}-f(x_{i};W)\|_{2}^{2}\). And secondly, as a less complex alternative we have
\[\ell(Y;\psi=\{W,I,I\})=\|Y-f(X;W)\|_{\mathcal{F}}^{2}+\sum_{i}\phi(y_{i})+\text{tr}[Y^{T}(\lambda_{0}L_{C}+\lambda_{1}\bar{L}_{S})Y]. \tag{9}\]
## 5 Hypergraph Node Classification via Bilevel Optimization
We now demonstrate how the optimal embeddings obtained by minimizing the energy functions from the previous section can be applied to our ultimate goal of hypergraph node classification. For this purpose, define
\[Y^{*}(\psi)=\arg\min_{Y}\ell(Y;\psi), \tag{10}\]
noting that the solution depends explicitly on the parameters \(\psi\) governing the shape of the energy. We may then consider treating \(Y^{*}(\psi)\), which is obtainable from the above optimization process, as features to be applied to a discriminative node classification loss \(\mathcal{D}\) that can be subsequently minimized via a second, meta-level optimization step.3 In aggregate we arrive at the _bilevel_ optimization problem
Footnote 3: Because our emphasis is hypergraph node classification, we will not explicitly use any analogous hyperedge embeddings for the meta-level optimization; however, they nonetheless still play a vital role given that they are co-adapted with the node embeddings during the lower-level optimization per the discussion from the previous section.
\[\ell(\theta,\psi)\triangleq\sum_{i=1}^{n^{\prime}}\mathcal{D}(h[y_{i}^{*}( \psi);\theta],\tau_{i}), \tag{11}\]
where \(\mathcal{D}\) is chosen as a classification-friendly cross-entropy function, \(y_{i}^{*}(\psi)\) is the \(i\)-th row of \(Y^{*}(\psi)\), and \(\tau_{i}\in\mathbb{R}^{c}\) is the ground-truth label of node \(i\) to be approximated by some differentiable node-wise function \(h:\mathbb{R}^{d}\rightarrow\mathbb{R}^{c}\) with trainable parameters \(\theta\). We have also implicitly assumed that the first \(n^{\prime}\) nodes of \(\mathcal{G}\) are labeled. Intuitively, (11) involves training a classifier \(h\), with input features \(y_{i}^{*}(\psi)\), to predict labels \(\tau_{i}\).
At this point, assuming \(\partial Y^{*}(\psi)/\partial\psi\) is somehow computable, then \(\ell(\psi,\theta)\) can be efficiently trained over _all_ parameters, including \(\psi\) from the lower level optimization. However, directly computing \(\partial Y^{*}(\psi)/\partial\psi\) is not generally feasible. Instead, in the remainder of this section we will derive approximate embeddings \(\hat{Y}(\psi)\approx Y^{*}(\psi)\) whereby \(\partial\hat{Y}(\psi)/\partial\psi\) can be computed efficiently. And as will be assessed in greater detail later, the computational steps we derive to produce \(\hat{Y}(\psi)\) will mirror the layers of canonical graph neural network architectures. It is because of this association that we refer to our overall model as **PhenomNN**, for _Purposeful Hyper-Edges iN Optimization Motivated Neural Networks_ as mentioned in the introduction.
### Deriving Proximal Gradient Descent Steps
To efficiently deploy proximal gradient descent (PGD) (Parikh et al., 2014), we first must split our loss into a smooth, differentiable part, and a non-smooth but separable part. Hence we adopt the decomposition
\[\ell(Y;\psi)=\bar{\ell}(Y;\psi)+\sum_{i}\phi(y_{i}), \tag{12}\]
where \(\bar{\ell}(Y;\psi)\) is defined by exclusion upon examining the original form of \(\ell(Y;\psi)\). The relevant proximal operator is
\[\textbf{prox}_{\phi}(V)\triangleq\arg\min_{Y}\frac{1}{2}||V-Y||_{\mathcal{F}}^{2}+\sum_{i}\phi(y_{i})=\max(0,V), \tag{13}\]
where the max operator is assumed to apply elementwise. Subsequent PGD iterations for minimizing (12) are then computed as
\[\tilde{Y}^{(t+1)} =Y^{(t)}-\alpha\Omega\nabla_{Y^{(t)}}\bar{\ell}(Y^{(t)};\psi) \tag{14}\] \[Y^{(t+1)} =\text{max}(0,\tilde{Y}^{(t+1)}), \tag{15}\]
where \(\alpha\) is a step-size parameter and \(\Omega\) is a positive-definite pre-conditioner to be defined later. Incidentally, as will become apparent shortly, (14) will occupy the role of a pre-activation hypergraph neural network layer, while (15) provides a ReLU nonlinearity. A related association was previously noted within the context of traditional GNNs (Yang et al., 2021). We now examine two different choices for \(\Omega\) and \(\psi\) that correspond with the general form from (8) and the simplified alternative from (9).
**General Form.** To compute (14), we consider terms (\(a\)) and (\(b\)) from (8) separately. Beginning with (\(a\)), the corresponding gradient is
\[2D_{C}Y-2\tilde{Y}_{C}, \tag{16}\]
where \(\tilde{Y}_{C}\triangleq A_{C}Y(H_{0}+H_{0}^{T})-D_{C}YH_{0}H_{0}^{T}\). Similarly, for (\(b\)) the gradient is given by
\[2BD_{H}^{-1}D_{H}(BD_{H}^{-1})^{T}Y-2\tilde{Y}_{S}, \tag{17}\]
where \(\tilde{Y}_{S}\triangleq(B(BD_{H}^{-1})^{T}YH_{1}^{T}+BD_{H}^{-1}B^{T}YH_{1})- \bar{D}_{S}YH_{1}H_{1}^{T}\). Additionally, given that \(BD_{H}^{-1}D_{H}(BD_{H}^{-1})^{T}=B(BD_{H}^{-1})^{T}=BD_{H}^{-1}B^{T}=\bar{A} _{S}\), we can reduce (17) to
\[2\bar{A}_{S}Y-2\tilde{Y}_{S}, \tag{18}\]
since now \(\tilde{Y}_{S}=\bar{A}_{S}Y(H_{1}+H_{1}^{T})-\bar{D}_{S}YH_{1}H_{1}^{T}\). Combining terms, the gradient for \(\bar{\ell}(Y;\psi)\) is
\[\frac{\partial\bar{\ell}(Y;\psi)}{\partial Y}=2\lambda_{0}(D_{C}Y-\tilde{Y}_{C})+2\lambda_{1}(\bar{A}_{S}Y-\tilde{Y}_{S})+2Y-2f\left(X;W\right), \tag{19}\]
and (14) becomes
\[\tilde{Y}^{(t+1)}=Y^{(t)}-\alpha\Big{[}\lambda_{0}(D_{C}Y^{(t)}-\tilde{Y}_{C}^{(t)})+\lambda_{1}(\bar{A}_{S}Y^{(t)}-\tilde{Y}_{S}^{(t)})+Y^{(t)}-f\left(X;W\right)\Big{]}, \tag{20}\]
where \(\alpha/2\) is the step size. The coefficient \(\bar{\Omega}\) before \(Y^{(t)}\) is
\[\bar{\Omega}\triangleq\lambda_{0}D_{C}+\lambda_{1}\bar{A}_{S}+I. \tag{21}\]
Applying Jacobi preconditioning (Axelsson, 1996) often aids convergence by helping to normalize the scales across different dimensions. One natural candidate for the preconditioner is \(\left(\text{diag}[\bar{\Omega}]\right)^{-1}\); however, we use the more spartan \(\Omega=\tilde{D}^{-1}\) where \(\tilde{D}\triangleq\lambda_{0}D_{C}+\lambda_{1}\bar{D}_{S}+I\). After rescaling and applying (15), the composite PhenomNN update is given by
\[Y^{(t+1)} =\text{ReLU}\Bigg{(}(1-\alpha)Y^{(t)}+\alpha\tilde{D}^{-1}\Big{[} f\left(X;W\right)\] \[+\lambda_{0}\tilde{Y}_{C}^{(t)}+\lambda_{1}(\bar{L}_{S}Y^{(t)}+ \tilde{Y}_{S}^{(t)})\Big{]}\Bigg{)}, \tag{22}\]
where \(\bar{L}_{S}=\bar{D}_{S}-\bar{A}_{S}\) as in Proposition 4.1. This represents the general form of PhenomNN.
**Simplified Alternative.** Regarding the simplified energy from (9), the relevant gradient is
\[\frac{\partial\bar{\ell}(Y;\psi=\{W,I,I\})}{\partial Y}=2(\lambda_{0}L_{C}+\lambda_{1}\bar{L}_{S})Y+2Y-2f\left(X;W\right), \tag{23}\]
leading to the revised update
\[\tilde{Y}^{(t+1)}=Y^{(t)}-\alpha\left[\tilde{\Omega}Y^{(t)}-f\left(X;W\right)\right],\quad\text{with}\ \ \tilde{\Omega}\triangleq\lambda_{0}L_{C}+\lambda_{1}\bar{L}_{S}+I \tag{24}\]
and step size \(\alpha/2\) as before. And again, we can apply preconditioning, in this case rescaling each gradient step by \(\Omega=\left(\text{diag}[\tilde{\Omega}]\right)^{-1}=\left(\lambda_{0}D_{C}+ \lambda_{1}\bar{D}_{S}+I\right)^{-1}=\tilde{D}^{-1}\). So the final/composite update formula, including (15), becomes
\[Y^{(t+1)}=\text{ReLU}\Big{(}(1-\alpha)Y^{(t)}+\] \[\alpha\tilde{D}^{-1}\left[(\lambda_{0}A_{C}+\lambda_{1}\bar{A}_{S })Y^{(t)}+f\left(X;W\right)\right]\Big{)}. \tag{25}\]
We henceforth refer to this variant as **PhenomNN\({}_{\text{simple}}\)**.
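To make the correspondence with a neural network layer explicit, one PhenomNN\({}_{\text{simple}}\) propagation step from (25) can be sketched as follows (dense numpy for illustration only; \(FX=f(X;W)\) is computed once per forward pass):

```python
import numpy as np

def phenomnn_simple_layer(Y, FX, A_C, A_S, lam0, lam1, alpha):
    """One update of Eq. (25): a preconditioned proximal gradient step on the
    simplified energy (9). Y is the n x d node-embedding matrix."""
    d_tilde = lam0 * A_C.sum(axis=1) + lam1 * A_S.sum(axis=1) + 1.0  # diag(\tilde{D})
    prop = (lam0 * A_C + lam1 * A_S) @ Y + FX                        # propagation + input skip
    Y_new = (1.0 - alpha) * Y + alpha * prop / d_tilde[:, None]
    return np.maximum(Y_new, 0.0)                                    # ReLU = prox of phi
```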
### Overall Algorithm
The overall algorithm for PhenomNN is demonstrated in Algorithm 1.
```
Input: Hypergraph incidence matrix \(B\), node features \(X\), number of layers \(T\), training epochs \(E\), and node labels \(\tau=\{\tau_{i}\}\).
for \(e=0\) to \(E-1\) do
    Set initial projection \(Y^{(0)}=f(X;W)\), where \(f\) is the trainable base model.
    for \(t=0\) to \(T-1\) do
        \(Y^{(t+1)}=Update(Y^{(t)})\), where \(Update\) is computed via (22) for PhenomNN or (25) for PhenomNN\({}_{\text{simple}}\).
    end for
    Compute loss \(\ell(\theta,\psi)=\sum_{i}\mathcal{D}(h[y_{i}^{(T)};\theta],\tau_{i})\) from (11), where \(\psi=\{W,H_{0},H_{1}\}\) for PhenomNN and \(\psi=\{W,I,I\}\) for PhenomNN\({}_{\text{simple}}\), noting that each \(y_{i}^{(T)}\) is a trainable function of \(\psi\) by design.
    Backpropagate over all parameters \(\psi,\theta\) using an optimizer (Adam, SGD, etc.).
end for
```
**Algorithm 1** PhenomNN Algorithm for Hypergraph Node Classification.
### Convergence Analysis
We now consider the convergence of the iterations (22) and (25) introduced in the previous section. First, for the more general form we have the following:
**Proposition 5.1**.: _The PhenomNN updates from (22) are guaranteed to monotonically converge to the unique global minimum of \(\ell(Y;\psi)\) on the condition that_
\[\alpha<\frac{1+\lambda_{0}d_{\text{Cmin}}+\lambda_{1}d_{\text{Smin}}}{1+ \lambda_{0}d_{\text{Cmin}}+\sigma_{\text{max}}}, \tag{26}\]
_where \(d_{\text{Cmin}}\) is the minimum diagonal element of \(I\otimes D_{C}\), \(d_{\text{Smin}}\) is the minimum diagonal element of \(I\otimes\bar{D}_{S}\) and \(\sigma_{\text{max}}\) is the max eigenvalue of \((Q-P+\lambda_{1}I\otimes\bar{A}_{S})\) with_
\[Q \triangleq\lambda_{0}H_{0}^{T}H_{0}\otimes D_{C}+\lambda_{1}H_{1}^ {T}H_{1}\otimes\bar{D}_{S}, \tag{27}\] \[P \triangleq\lambda_{0}(H_{0}+H_{0}^{T})\otimes A_{C}+\lambda_{1}(H _{1}+H_{1}^{T})\otimes\bar{A}_{S}. \tag{28}\]
And for the restricted case where \(\psi=\{W,I,I\}\), the convergence conditions simplify as follows:
**Corollary 5.2**.: _The PhenomNN\({}_{\text{simple}}\) updates from (25) are guaranteed to monotonically converge to the unique global minimum of \(\ell(Y;\psi=\{W,I,I\})\) on the condition that_
\[\alpha<\frac{1+\lambda_{0}d_{\text{Cmin}}+\lambda_{1}d_{\text{Smin}}}{1+ \lambda_{0}d_{\text{Cmin}}+\lambda_{1}d_{\text{Smin}}-\sigma_{\text{min}}}, \tag{29}\]
_where \(\sigma_{min}\) is the min eigenvalue of \((\lambda_{0}A_{C}+\lambda_{1}\bar{A}_{S})\)._
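For PhenomNN\({}_{\text{simple}}\), this bound is easy to evaluate numerically; a sketch using a dense eigensolve (practical only for small graphs):

```python
import numpy as np

def max_alpha_simple(A_C, A_S, lam0, lam1):
    """Step-size bound of Corollary 5.2 for PhenomNN_simple."""
    d_Cmin = A_C.sum(axis=1).min()          # min diagonal element of D_C
    d_Smin = A_S.sum(axis=1).min()          # min diagonal element of \bar{D}_S
    sigma_min = np.linalg.eigvalsh(lam0 * A_C + lam1 * A_S).min()
    num = 1.0 + lam0 * d_Cmin + lam1 * d_Smin
    return num / (num - sigma_min)          # alpha must stay below this value
```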
### Complexity Analysis
Analytically, PhenomNN\({}_{\text{simple}}\) has a time complexity given by \(O(|\mathcal{E}|Td+|\mathcal{V}|Pd^{2})\), where \(|\mathcal{E}|\) is the number of hyperedges, \(|\mathcal{V}|\) is the number of nodes, \(T\) is the number of layers/iterations, \(d\) is the hidden size, and \(P\) is the number of MLP layers in \(f(\cdot;W)\). In contrast, for PhenomNN this complexity increases to \(O(|\mathcal{E}|Td+|\mathcal{V}|(T+P)d^{2})\), which is roughly the same as a standard GCN model. In fact, the widely-used graph convolution networks (GCN) (Kipf and Welling, 2016) have equivalent complexity to PhenomNN up to the factor of \(P\), which is generally small (e.g., \(P=1\) for PhenomNN in our experiments, while for a GCN \(P=0\)). In this way then, PhenomNN\({}_{\text{simple}}\) is actually somewhat cheaper than a GCN when \(T>P\). Additionally, we include complementary empirical results related to time and space complexity in Section 7.
## 6 Connections with Existing GNN Layers
As mentioned in Section 4.3, the clique and star expansions can be invoked to transform hypergraphs into homogeneous and bipartite graphs respectively (where the latter is a special case of a heterogeneous graph). In this section we examine how the layer-wise structure of two of the most popular GNN models, namely GCN (Kipf and Welling, 2016) mentioned previously, and relational graph convolution networks (RGCN) (Schlichtkrull et al., 2018), relate to PhenomNN and simplifications thereof.
### Homogeneous Graphs and GCN
Using the so-called message-passing form of expression, the embedding update for the \(i\)-th node at the \(t\)-th GCN layer can be written as
\[y_{i}^{(t+1)}=\sigma\left(\sum_{j\in\mathcal{N}_{i}}\frac{1}{c_{ij}}W^{(t)}y_{j}^{ (t)}\right) \tag{30}\]
where \(\sigma\) is an activation function like ReLU, \(W^{(t)}\) are weights, \(c_{ij}\triangleq\sqrt{|\mathcal{N}_{i}||\mathcal{N}_{j}|}\) and \(\mathcal{N}_{i}\) refers to the set of neighboring nodes in some input graph (note also that the graph could have self-loops in which case \(i\in\mathcal{N}_{i}\)). Interestingly, follow-up work (Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021) has demonstrated that this same basic layer-wise structure can be closely linked to iterative steps designed to minimize the energy
\[\ell(Y)=||Y-f(X;W)||_{\mathcal{F}}^{2}+\lambda\text{tr}[Y^{T}LY], \tag{31}\]
where \(f\) is defined as before and \(L\) is the assumed graph Laplacian matrix. One way to see this is to examine a preconditioned gradient step along (31), which can be expressed as
\[Y^{(t+1)}=(1-\alpha)Y^{(t)}+\alpha\widetilde{D}_{0}^{-1}\left[ \lambda AY^{(t)}+f\left(X;W\right)\right], \tag{32}\]
with preconditioner \(\widetilde{D}_{0}^{-1}=(\lambda D+I)^{-1}\), step-size parameter \(\alpha\), graph adjacency matrix \(A\), and corresponding degree matrix \(D\). Moreover, for a single node \(i\), (32) can be reduced to
\[y_{i}^{(t+1)}=\left(\sum_{j\in\mathcal{N}_{i}}\frac{1}{\tilde{c}_{i}}y_{j}^{( t)}\right)+\tilde{f}_{i}(x_{i};W), \tag{33}\]
where \(\tilde{c}_{i}\) is a scaling constant dependent on \(\lambda\), the gradient step-size, and the preconditioner, while \(\tilde{f}_{i}\) is merely \(f\) similarly rescaled. If we add an additional penalty \(\phi\) and subsequent proximal operator step to introduce a non-linearity, then this result is very similar to (30), although without the weight matrix directly on each \(y_{j}^{(t)}\) but with an added skip connection to the input layer.
Importantly for our purposes though, if the input graph is chosen to be a hypergraph clique expansion, and we set \(D=D_{C}\), \(A=A_{C}\), \(\lambda=\lambda_{0}\), and \(\lambda_{1}=0\), then we arrive at a special case of PhenomNN\({}_{\text{simple}}\) from (25). Of course one might not naturally conceive of the more generalized form that leads to PhenomNN\({}_{\text{simple}}\), and by extension PhenomNN, without the interpretable grounding of the underlying hypergraph energy functions involved.
### Heterogeneous Graphs and RGCN
For heterogeneous graphs applied to RGCN, the analogous message-passing update for the \(i\)-th node in the \(t\)-th layer is given by
\[y_{i}^{(t+1)}=\sigma\left(\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{i}^{r} }\frac{1}{c_{i,r}}y_{j}^{(t)}W_{r}^{(t)}+y_{i}^{(t)}W_{0}^{(t)}\right), \tag{34}\]
where \(\mathcal{R}\) is the set of edge types in a heterogeneous input graph, \(\mathcal{N}_{i}^{r}\) is the set of neighbors with edge type \(r\), \(c_{i,r}\triangleq|\mathcal{N}_{i}^{r}|\), and \(W_{r}^{(t)}\) and \(W_{0}^{(t)}\) are weight/projection matrices. In this context, the RGCN input could conceivably be chosen as the bipartite graph produced by a given star expansion (e.g., such a graph could be assigned the edge types "hypergraph node belongs to hyperedge" and "hyper-edge belongs to hypergraph node").
For comparison purposes, we can also re-express our general PhenomNN model from (22), in the node-wise message-passing form
\[y_{i}^{(t+1)}=\sigma\left(\sum_{j\in\mathcal{N}_{i}^{C}}y_{j}^{( t)}W_{ij}^{(t)}+y_{i}^{(t)}W_{i}^{(t)}+\alpha\tilde{D}_{ii}^{-1}f\left(x_{i};W \right)\right), \tag{35}\]
where \(\mathcal{N}_{i}^{C}\) are neighbors in the clique (not star) expansion graph (more on this below) and the weight matrices are characterized by the special energy-function-dependent forms
\[W_{ij}^{(t)}\triangleq\alpha\tilde{D}_{ii}^{-1}\Big{[}\lambda_{0}A_{C}[i,j](H_{0}+H_{0}^{T})+\lambda_{1}\bar{A}_{S}[i,j](H_{1}+H_{1}^{T}-I)\Big{]}, \tag{36}\]
\[W_{i}^{(t)}\triangleq(1-\alpha)I-\alpha\tilde{D}_{ii}^{-1}\Big{[}\lambda_{0}D_{C}[i,i]H_{0}H_{0}^{T}+\lambda_{1}\bar{D}_{S}[i,i](H_{1}H_{1}^{T}-I)\Big{]}. \tag{37}\]
While the basic structures of (34) and (35) are similar, there are several key differences:
* When RGCN is applied to the star expansion, neighbors are defined by the resulting bipartite graph, and nodes in the original hypergraph do not directly pass messages to each other. In contrast, because within PhenomNN we have optimized away the hyperedge embeddings, the implicit graph that dictates neighborhood structure is actually the _clique expansion graph_ as reflected in (35).
* The PhenomNN projection matrices have special structure infused from the energy function and optimization over the edge embeddings. As such, unlike RGCN, node \(i\) receives messages from its connected neighbors and itself, with projection matrices \(W_{ij}^{(t)}\) and \(W_{i}^{(t)}\) that can vary from node to node and edge to edge. In contrast, RGCN has layer-wise (or analogously iteration-wise) dependent weights.
* PhenomNN has an additional weighted skip connection from the input base model \(f\left(x_{i};W\right)\). While of course RGCN could also be equipped with a similar term, this would be accomplished in a post-hoc fashion, and not tethered to an underlying energy function.
## 7 Hypernode Classification Experiments
In this section we evaluate PhenomNN\({}_{\text{simple}}\) and PhenomNN on various hypergraph benchmarks focusing on hypernode classification and compare against previous SOTA approaches.
**Datasets.** Existing hypergraph benchmarks mainly focus on hypernode classification. We adopt five public citation network datasets from (Zhang et al., 2022): Co-authorship/Cora, Co-authorship/DBLP, Co-citation/Cora, Co-citation/Pubmed, Co-citation/Citeseer. These datasets and splits are constructed by (Yadati et al., 2019) ([https://github.com/malllabiisc/HyperGCN](https://github.com/malllabiisc/HyperGCN)). We also adopt two other public visual object classification datasets: Princeton ModelNet40 (Wu et al., 2015) and the National Taiwan University (NTU) 3D model dataset (Chen et al., 2003). We follow HGNN (Feng et al., 2019) to preprocess the data by MVCNN (Su et al., 2015) and GVCNN (Feng et al., 2018) and obtain the hypergraphs. Additionally, we use the datasets provided by the public code ([https://github.com/iMoonLab/HGNN](https://github.com/iMoonLab/HGNN)) associated with (Feng et al., 2019). Finally, (Chien et al., 2022) construct a public hypergraph benchmark for hypernode classification which includes ModelNet40\({}^{*}\), NTU2012\({}^{*}\), Yelp (Yelp), House (Chodrow et al., 2021), Walmart (Amburg et al., 2020), and 20News (Dua and Graff, 2017). ModelNet40\({}^{*}\) and NTU2012\({}^{*}\) have the same raw data as ModelNet40 and NTU2012 mentioned before in (Zhang et al., 2022) but different splits. All datasets from (Chien et al., 2022) are downloaded from their code site ([https://github.com/jianhao2016/AllSet](https://github.com/jianhao2016/AllSet)).4
Footnote 4: Note that we excluded a few datasets for the following reasons: The Zoo dataset is very small; the Mushroom dataset is too easy; the Citation datasets are similar to (Zhang et al., 2022), and since we have ModelNet40* and NTU2012\({}^{*}\) for comparison of different baselines from both papers, we did not select them.
**Baselines.** For datasets from (Zhang et al., 2022), we adopt the baselines from their paper, which include a multi-layer perceptron with explicit hypergraph Laplacian regularization (MLP+HLR), FastHyperGCN (Yadati et al., 2019), HyperGCN (Yadati et al., 2019), HGNN (Feng et al., 2019), HNHN (Dong et al., 2020), HGAT (Ding et al., 2020), HyperSAGE (Arya et al., 2020), UniGNN (Huang and Yang, 2021), and the various hypergraph GNNs (H-GNNs) (Zhang et al., 2022) proposed therein. For datasets from (Chien et al., 2022), we also select baselines from their paper, including an MLP, CEGCN (clique expansion \(+\) GCN), CEGAT (clique expansion \(+\) GAT), HNHN, HGNN, HCHA (Bai et al., 2021), HyperGCN, UniGCNII (Huang and Yang, 2021), HAN (Wang et al., 2019) with full-batch and mini-batch settings, and AllSetTransformer and AllDeepSets (Chien et al., 2022).
**Implementations.** We use a one-layer MLP for \(f(X;W)\). Also, in practice we found that only using ReLU at the end of the propagation steps works well. Detailed hyperparameter settings are deferred to Appendix D. We choose the hidden dimension of our models to be the same or less than the baselines in previous work. For the results in Table 1, we conduct experiments on 10 different train-test splits and report the average accuracy of test samples following (Zhang et al., 2022). For the results in Table 2, we randomly split the data into training/validation/test samples using (50%/25%/25%) splitting percentages as in (Chien et al., 2022) and report
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline & Cora & DBLP & Cora & Pubmed & Citeseer & NTU2012 & ModelNet40 & Avg Ranking \\ & (co-authorship) & (co-authorship) & (co-citation) & (co-citation) & (co-citation) & (both features) & (both features) & Avg Ranking \\ \hline MLP+HLR & 59.8 \(\pm\) 4.7 & 63.6 \(\pm\) 7.4 & 61.0 \(\pm\) 4.1 & 64.7 \(\pm\) 3.1 & 56.4 \(\pm\) 2.6 & - & - & 13.6 \\ FastHyperGCN & 61.1 \(\pm\) 8.2 & 68.1 \(\pm\) 9.6 & 61.3 \(\pm\) 10.3 & 65.7 \(\pm\) 11.1 & 56.2 \(\pm\) 8.1 & - & - & 12.4 \\ HyperGCN & 63.9 \(\pm\) 7.3 & 70.9 \(\pm\) 8.3 & 62.5 \(\pm\) 9.7 & 67.8 \(\pm\) 9.5 & 57.3 \(\pm\) 7.3 & - & - & 10.8 \\ HGNN & 63.2 \(\pm\) 3.1 & 68.1 \(\pm\) 9.6 & 70.9 \(\pm\) 2.9 & 68.7 \(\pm\) 8.7 & 56.7 \(\pm\) 3.8 & 83.54 \(\pm\) 0.50 & 97.15 \(\pm\) 0.14 & 9.4 \\ HBNN & 64.0 \(\pm\) 2.4 & 84.4 \(\pm\) 3.0 & 41.6 \(\pm\) 3.1 & 41.9 \(\pm\) 4.7 & 33.6 \(\pm\) 2.1 & - & - & 13.0 \\ HGAT & 65.4 \(\pm\) 1.5 & OOM & 52.2 \(\pm\) 3.5 & 46.3 \(\pm\) 0.5 & 38.3 \(\pm\) 1.5 & 84.05 \(\pm\) 0.36 & 96.44 \(\pm\) 0.15 & 12.0 \\ HyperSAGE & 72.4 \(\pm\) 1.6 & 77.4 \(\pm\) 3.8 & 69.3 \(\pm\) 2.7 & 72.9 \(\pm\) 1.3 & 61.8 \(\pm\) 2.3 & - & - & 8.6 \\ UniGNN & 75.3 \(\pm\) 1.2 & 88.8 \(\pm\) 0.2 & 70.1 \(\pm\) 1.4 & 74.4 \(\pm\) 1.0 & 63.6 \(\pm\) 1.3 & 84.55 \(\pm\) 0.40 & 96.69 \(\pm\) 0.07 & 6.0 \\ \hline H-ChebNet & 70.6 \(\pm\) 2.1 & 87.9 \(\pm\) 0.24 & 69.7 \(\pm\) 2.0 & 74.3 \(\pm\) 1.5 & 63.5 \(\pm\) 1.3 & 83.16 \(\pm\) 0.46 & 96.95 \(\pm\) 0.09 & 8.0 \\ H-APPNP & 76.4 \(\pm\) 0.8 & 89.4 \(\pm\) 0.18 & 70.9 \(\pm\) 0.7 & 75.3 \(\pm\) 1.1 & 64.5 \(\pm\) 1.4 & 83.57 \(\pm\) 0.42 & 97.20 \(\pm\) 0.14 & 4.6 \\ H-SSFC & 72.0 \(\pm\) 1.2 & 88.6 \(\pm\) 0.16 & 68.8 \(\pm\) 2.1 & 74.5 \(\pm\) 1.3 & 60.5 \(\pm\) 1.7 & 84.13 \(\pm\) 0.34 & 97.07 \(\pm\) 0.07 & 7.6 \\ H-GCN & 74.8 \(\pm\) 0.9 & 89.0 \(\pm\) 0.19 & 69.5 \(\pm\) 2.0 & 75.4 \(\pm\) 1.2 & 62.7 \(\pm\) 1.2 & 84.5 \(\pm\) 0.40 & 97.28 \(\pm\) 0.15 & 5.4 \\ H-GCNII & 76.2 \(\pm\) 1.0 & 89.8 \(\pm\) 0.20 & 72.5 \(\pm\) 1.2 & 75.8 \(\pm\) 1.1 & 64.5 \(\pm\) 1.0 & 85.17 \(\pm\) 0.36 & 97.5 \(\pm\) 0.07 & 3.0 \\ \hline PhenomNNsimple & **77.62 \(\pm\) 1.30** & 89.74 \(\pm\) 0.16 & 72.81 \(\pm\) 1.67 & 76.20 \(\pm\) 1.41 & 65.07 \(\pm\) 1.08 & 85.39 \(\pm\) 0.40 & **97.83 \(\pm\) 0.09** & 1.9 \\ PhenomNN & **77.11 \(\pm\) 0.45** & **98.31 \(\pm\) 0.05** & **73.09 \(\pm\) 0.05** & **78.12 \(\pm\) 0.24** & **65.77 \(\pm\) 0.45** & **85.40 \(\pm\) 0.42** & 97.77 \(\pm\) 0.11 & **13** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on datasets from (Zhang et al., 2022): Mean accuracy (%) ± standard deviation results over 10 train-test splits. Boldface letters are used to indicate the best mean accuracy and underline for the second. “-” means not reported in their paper, so in the average ranking we just average over the ones that are available. OOM indicates out-of-memory.
the average accuracy over ten random splits. All experiments are implemented on RTX 3090 with Pytorch and DGL (Wang et al., 2019).
**Results.** As shown in Table 1, our models achieve the best performance and top ranking on all datasets from (Zhang et al., 2022) compared to previous baselines. And in Table 2, our models achieve the first (PhenomNN\({}_{\text{simple}}\)) and tied-for-second (PhenomNN) overall performance ranking on the benchmarks from (Chien et al., 2022).
**Empirical evaluation of time and space complexity.** In practice, we find that PhenomNN is roughly \(2\times\) to \(3\times\) slower than a GCN given the integration of two expansions based on \(H_{0}\) and \(H_{1}\), which implies that the constant multiplying the theoretical complexity from above is at least doubled as expected. Of course timing results will still vary based on hardware and implementation details. As an example, we measure the training time of GCN and our models on the same hardware on Coauthorship-DBLP data with hidden size 64 and 8 layers. We observe 0.047s/epoch for GCN and 0.045s/epoch for PhenomNN\({}_{\text{simple}}\) and 0.143s/epoch for PhenomNN under these conditions. In terms of the space efficiency, our models are also analytically similar to common GNNs. And under the same settings as above, the memory consumption is 1665MB for GCN, 1895MB for PhenomNN\({}_{\text{simple}}\), and 2424MB for PhenomNN.
**Ablations.** For space considerations, we defer ablations to Appendix B; however, we nonetheless highlight some of our findings here. For example, in Table 5 (Appendix B) we demonstrate the effect of different hypergraph energy function terms, which are associated with different hypergraph expansions per Proposition 4.1. In brief here, we explore different selections of \(\{\lambda_{0},\lambda_{1}\}\in\{\{0,1\},\{1,0\},\{1,1\}\}\) which in effect modulate the inclusion of clique- and star-like expansion factors. Results demonstrate that on most datasets, the combination of both expansions, with their complementary roles, is beneficial.
We also explore the tolerance of our model to different hidden dimensions in Table 6. In brief, we fix other hyperparameters and obtain results across different hidden dimensions with PhenomNN\({}_{\text{simple}}\) for simplicity; results for PhenomNN are similar. Overall, this ablation demonstrates the stability of our approach across hidden dimension.
**Additional comparisons and discussion.** As suggested by reviewers, we include additional discussion and comparison with existing work in Appendix A due to the page limit. This includes side-by-side evaluations with RGCN and the model from (Wang et al., 2023) which was not yet published at the time of our original submission.
## 8 Conclusion
While hypergraphs introduce compelling modeling flexibility, they still remain relatively under-explored in the GNN literature. With the potential to better understand hypergraph properties and expand their utility, we have introduced an expressive family of hypergraph energy functions and fleshed out their connection with previous hypergraph expansions. We then leverage this perspective to design what can be interpreted as hypergraph neural network layers that are in one-to-one correspondence with proximal gradient steps descending these energies. We also characterize the similarities and differences of these layers w.r.t. popular existing GNN architectures. In the end, the proposed framework achieves competitive or SOTA performance on key hypergraph node classification benchmarks.
## 9 Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 62236004 and No. 62022027)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & NTU2012\({}^{*}\) & ModelNet40\({}^{*}\) & Yelp & House(1) & Walmart(1) & House(0.6) & Walmart(0.6) & 20Newsgroups & Avg Ranking \\ \hline MLP & \(85.52\pm 1.49\) & \(96.14\pm 0.36\) & \(31.96\pm 0.44\) & \(67.93\pm 2.33\) & \(45.51\pm 0.24\) & \(81.53\pm 2.26\) & \(63.28\pm 0.37\) & \(81.42\pm 0.49\) & 6.9 \\ CEGCN & \(81.52\pm 1.43\) & \(89.92\pm 0.46\) & OOM & \(62.80\pm 2.61\) & \(54.44\pm 0.24\) & \(64.36\pm 2.41\) & \(59.78\pm 0.32\) & OOM & 11.5 \\ CEGAT & \(82.21\pm 1.23\) & \(92.52\pm 0.39\) & OOM & \(69.09\pm 0.39\) & \(51.14\pm 0.56\) & \(77.18\pm 2.53\) & \(59.47\pm 1.05\) & OOM & 10.4 \\ HNHN & \(89.11\pm 1.44\) & \(97.84\pm 0.25\) & \(31.65\pm 0.44\) & \(67.80\pm 2.59\) & \(47.18\pm 0.35\) & \(78.78\pm 1.88\) & \(65.80\pm 0.39\) & \(31.35\pm 0.61\) & 7.1 \\ HGNN & \(87.72\pm 1.35\) & \(95.44\pm 0.33\) & \(33.04\pm 0.62\) & \(61.39\pm 0.29\) & \(62.00\pm 0.24\) & \(66.16\pm 1.80\) & \(77.72\pm 0.21\) & \(80.33\pm 0.42\) & 7.8 \\ HCHA & \(87.48\pm 1.87\) & \(94.48\pm 0.28\) & \(30.99\pm 0.72\) & \(61.36\pm 2.53\) & \(62.45\pm 0.26\) & \(67.91\pm 2.26\) & \(77.12\pm 0.26\) & \(80.33\pm 0.80\) & 8.8 \\ HyperCCN & \(56.36\pm 4.86\) & \(75.58\pm 2.56\) & \(29.42\pm 1.54\) & \(48.31\pm 2.39\) & \(44.74\pm 2.81\) & \(78.22\pm 2.46\) & \(55.31\pm 0.30\) & \(81.05\pm 0.59\) & 12 \\ UniGCN & \(89.30\pm 1.33\) & \(89.07\pm 0.23\) & \(31.70\pm 0.52\) & \(67.25\pm 2.57\) & \(54.45\pm 0.37\) & \(68.05\pm 1.96\) & \(72.08\pm 0.28\) & \(81.12\pm 0.67\) & 6.2 \\ HAN (full batch)\({}^{*}\) & \(83.58\pm 1.46\) & \(94.04\pm 0.41\) & OOM & \(71.05\pm 2.26\) & OOM & \(83.27\pm 1.62\) & OOM & OOM & 9.6 \\ HAN (min batch) & \(80.77\pm 2.36\) & \(59.12\pm 0.52\) & \(26.05\pm 1.37\) & \(62.00\pm 0.46\) & \(85.71\pm 0.80\) & \(82.40\pm 2.68\) & \(63.0\pm 0.96\) & \(79.72\pm 0.62\) & 10.4 \\ AllDeepgsets & \(88.09\pm 1.52\) & \(59.68\pm 0.26\) & \(30.36\pm 1.67\) & \(67.82\pm 2.40\) & \(64.53\pm 0.33\) & \(80.70\pm 1.78\) & \(**78.46\pm 0.26**\) & \(81.06\pm 0.54\) & 5.6 \\ AllSetTransformer & \(88.69\pm 1.24\) & \(98.20\pm 0.20\) & \(\mathbf{36.89\pm 0.51}\) & \(69.33\pm 2.20\) & \(\mathbf{65.46\pm 0.25}\) & \(83.14\pm 1.92\) & \(\mathbf{78.46\pm 0.40}\) & \(81.38\pm 0.58\) & 3.1 \\ \hline PhenomNN\({}_{\text{simple}}\) & \(\mathbf{91.03\pm 1.04}\) & \(\mathbf{98.66\pm 0.20}\) & \(32.62\pm 0.40\) & \(\mathbf{71.77\pm 1.68}\) & \(\mathbf{64.41\pm 0.49}\) & \(\mathbf{86.96\pm 1.33}\) & \(\mathbf{78.46\pm 0.32}\) & \(\mathbf{81.74\pm 0.52}\) & **1.6** \\ PhenomNN & \(90.62\pm 1.88\) & \(98.61\pm 0.17\) & \(31.92\pm 0.36\) & \(70.71\pm 2.35\) & \(62.98\pm 1.36\) & \(85.28\pm 2.30\) & \(78.26\pm 0.26\) & \(81.41\pm 0.49\) & 3.1 \\ \hline \end{tabular}
\end{table}
Table 2: Results using the benchmarks from (Chien et al., 2022): Mean accuracy (%) \(\pm\) standard deviation. The number behind Walmart and House is the feature noise standard deviation for each dataset, and for HAN\({}^{*}\), additional preprocessing of each dataset is required (see (Chien et al., 2022) for more details). Boldfaced letters are used to indicate the best mean accuracy and underline is for the second. OOM indicates out-of-memory.
and the Major Key Project of PCL (PCL2021A12).
|
2301.05900 | Static, dynamic and stability analysis of multi-dimensional functional
graded plate with variable thickness using deep neural network | The goal of this paper is to analyze and predict the central deflection,
natural frequency, and critical buckling load of the multi-directional
functionally graded (FG) plate with variable thickness resting on an elastic
Winkler foundation. First, the mathematical models of the static and
eigenproblems are formulated in great detail. The FG material properties are
assumed to vary smoothly and continuously throughout three directions of the
plate according to a Mori-Tanaka micromechanics model distribution of volume
fraction of constituents. Then, finite element analysis (FEA) with mixed
interpolation of tensorial components of 4-nodes (MITC4) is implemented in
order to eliminate theoretically a shear locking phenomenon existing. Next,
influences of the variable thickness functions (uniform, non-uniform linear,
and non-uniform non-linear), material properties, length-to-thickness ratio,
boundary conditions, and elastic parameters on the plate response are
investigated and discussed in detail through several numerical examples.
Finally, a deep neural network (DNN) technique using batch normalization (BN)
is learned to predict the non-dimensional values of multi-directional FG
plates. The DNN model also shows that it is a powerful technique capable of
handling an extensive database and different vital parameters in engineering
applications. | Nam G. Luu, Thanh T. Banh | 2023-01-14T11:37:01Z | http://arxiv.org/abs/2301.05900v1 | Static, dynamic and stability analysis of multi-dimensional functional graded plate with variable thickness using deep neural network
###### Abstract
The goal of this paper is to analyze and predict the central deflection, natural frequency, and critical buckling load of the multi-directional functionally graded (FG) plate with variable thickness resting on an elastic Winkler foundation. First, the mathematical models of the static and eigenproblems are formulated in great detail. The FG material properties are assumed to vary smoothly and continuously throughout three directions of the plate according to a Mori-Tanaka micromechanics model distribution of volume fraction of constituents. Then, finite element analysis (FEA) with mixed interpolation of tensorial components of 4-nodes (MITC4) is implemented in order to eliminate theoretically a shear locking phenomenon existing. Next, influences of the variable thickness functions (uniform, non-uniform linear, and non-uniform non-linear), material properties, length-to-thickness ratio, boundary conditions, and elastic parameters on the plate response are investigated and discussed in detail through several numerical examples. Finally, a deep neural network (DNN) technique using batch normalization (BN) is learned to predict the non-dimensional values of multi-directional FG plates. The DNN model also shows that it is a powerful technique capable of handling an extensive database and different vital parameters in engineering applications.
Keywords: static, dynamic and stability analysis; multi-directional functionally graded plates; variable thickness; elastic Winkler foundation; deep neural network; batch normalization +
Footnote †: Corresponding author, E-mail: [email protected]
## 1 Introduction
In recent years, functionally graded (FG) materials have made great strides as one of the advanced and intelligent non-homogeneous composites since they were first developed by Japanese scientists in the mid-1980s (Koizumi 1997). FG materials are usually made from two material phases, ceramic and metal, whose properties vary smoothly and continuously along particular directions. Metal, with good ductility and durability, can resist mechanical loads, while ceramic, with low thermal conductivity, is strongly suitable for withstanding high-temperature environments (Nguyen-Xuan _et al._ 2012). Moreover, the undesired stress discontinuities appearing between layers of laminated composites can be eliminated entirely due to a predefined mathematical function. With these outstanding characteristics, FG materials have application potential in engineering fields such as aerospace, automobile, electronics,
chemistry, and biomedical engineering (Ilschner 1996, Taheri _et al._ 2014, Radhika _et al._ 2018, Arslan _et al._ 2018, Smith _et al._ 2019).
In the numerical analysis field, plate theories for both uniform and variable thickness plates have been widely developed to predict structural responses in the past decades, such as classical plate theory (CPT), refined plate theory (RPT, Nguyen-Xuan _et al._ 2014), first-order shear deformation plate theory (FSDT, Nguyen-Xuan _et al._ 2012), generalized shear deformation plate theory (GSDT, Thai _et al._ 2014), and high-order shear deformation plate theory (HSDT, Reddy 2000). Furthermore, due to their good and applicable design, variable thickness plates can be used flexibly in industrial applications, such as maritime, civil, and aviation. For the uni-directional FG (UD-FG) plate graded in the z-direction, extensive studies have been introduced in recent years. For instance, Zhang
some applications of buckling constraints in an elastic medium and topology optimization are considered by Banh et al. (2017), Hoan and Lee (2017), and Hoan et al. (2020). A Winkler-Pasternak foundation is investigated for the non-linear thermal free vibration of pre-buckled and post-buckled FG plates (Taczala et al. 2016). Gunda (2013) presents simple closed-form solutions for thermal post-buckling paths of homogeneous, isotropic, square plate configurations resting on an elastic Winkler foundation. Hoang _et al._ (2020) present the effects of a non-uniform Pasternak elastic foundation on the non-linear thermal dynamics of a simply supported plate reinforced by functionally graded (FG) graphene nanoplatelets.
In the recent industrial revolution, data science and artificial intelligence have become more popular and have many applications in numerous fields, including mechanical engineering. Furthermore, owing to the huge computing cost of constructing the FEM model and solving linear equations (bending problems) and eigenvalue problems (free-vibration and buckling problems), a deep neural network (DNN) is proposed for directly predicting the numerical outputs (such as non-dimensional values) of multi-directional FG plates without any numerical process. The advantage of a DNN is that it can easily predict complex nonlinear problems by using training data of simple problems with small errors. Several papers have also used neural networks as an optimization or prediction tool in numerical analysis, especially plate theory, in combination with techniques such as genetic algorithms (Ye _et al._ 2001), optimization (Wang and Xie 2005), and fuzzy logic (Shen _et al._ 2013). Abambres et al. (2013) use an NN-based formula for the buckling load prediction of I-section cellular steel beams under uniformly distributed vertical loads. Do et al. (2018) reduce the computational cost of finding the optimal material distribution of FG plates by combining a deep neural network (DNN) with a modified symbiotic organisms search (mSOS) algorithm; each dataset is randomly created from analysis, and solutions can be directly predicted by the DNN together with the robust metaheuristic mSOS algorithm. With the same optimization method, a TD-FG plate is considered using a non-uniform rational B-spline (NURBS) basis function for describing the material distribution varying through all three directions.
In this article, a finite element analysis with mixed interpolation of tensorial components of 4 nodes (MITC4, Bathe and Dvorkin 1985) is applied to Mindlin-Reissner plate theory for plates resting on an elastic Winkler foundation with material distribution varying through three directions. The material properties of TD-FG materials are described by the Mori-Tanaka micro-mechanical scheme. The influences of boundary conditions, length-to-thickness ratios, types of the plate (uniform or variable thickness), and power indexes on the behavior of the bending, free-vibration, and buckling problems of FG plates are considered for various elastic Winkler parameters. After verifying the results against reference results, all numerical results are collected as training data, and the DNN is considered as an advanced technique to directly predict behaviors (non-dimensional deflection, natural frequency, and critical buckling loads) of the multi-directional FG plates without solving linear equations or eigenvalue problems. Batch normalization accelerates deep network training by allowing higher learning rates and less careful parameter initialization. The FSDT-based FEM with MITC4 randomly creates these datasets through iterations. The optimal results attained by the DNN are compared with those gained by the traditional method to verify the proposed method's effectiveness in both accuracy and time consumption.
The rest of this study is organized as follows. Section 2 exhibits a theoretical formulation of modeling TD-FG plates with the Mori-Tanaka scheme embedded on the elastic Winkler foundation. A construction of the FEM with MITC4 for FSDT and the total energy equation are elaborately derived in Sec. 3. Section 4 proposes the deep neural network for predicting the behaviors of TD-FG plates under various parameters. Section 5 demonstrates the efficiency and reliability of the present method through several numerical examples of bending, free-vibration, and buckling problems. Finally, Section 6 ends this paper with concluding remarks.
## 2 Multi-directional FG variable thickness plates
In this work, the ceramic volume fraction distribution is first assumed to vary according to a power-law form, represented by the following model.
\[V_{c}(x,y,z)=\left(\frac{x}{a}\right)^{k_{x}}\left(\frac{y}{a}\right)^{k_{y}}\left(\frac{1}{2}+\frac{z}{h(x,y)}\right)^{k_{z}} \tag{1}\]
where \(a\) is the size of the square plate and \(h(x,y)=h_{0}\lambda(x,y)\) is the variable thickness of the plate; \(k_{x}\), \(k_{y}\) and \(k_{z}\) represent the power indexes in the x-, y- and z-axes, respectively. The function \(\lambda(x,y)\) is defined as (Banh and Lee 2019, Banh et al. 2020):
\[\lambda(x,y)=\left\{\begin{array}{cc}1&\mbox{type 1: uniform thickness}\\ 1+x&\mbox{type 2: non-uniform linear thickness}\\ 1+\left(x-\frac{a}{2}\right)^{2}+\left(y-\frac{a}{2}\right)^{2}&\mbox{type 3: non-uniform non-linear thickness}\end{array}\right. \tag{2}\]
Fig. 1 shows the geometrical configurations of uniform, non-uniform linear, and non-uniform non-linear thickness plates.
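For reference, the three thickness profiles of Eq. (2) transcribe directly to code; a minimal sketch (function name is ours):

```python
def thickness_factor(x, y, a, plate_type):
    """lambda(x, y) of Eq. (2) for the three plate types of Fig. 1."""
    if plate_type == 1:                      # uniform
        return 1.0
    if plate_type == 2:                      # non-uniform linear
        return 1.0 + x
    # non-uniform non-linear
    return 1.0 + (x - a / 2.0) ** 2 + (y - a / 2.0) ** 2
```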
The metal volume fraction distribution is then computed as follows.
\[V_{m}(x,y,z)=1-V_{c}(x,y,z) \tag{3}\]
The material properties of the FG plate, consisting of Young's modulus \(E\), Poisson's ratio \(\nu\), and density \(\rho\), can be exhibited by the rule of mixture (Reddy 2000).
\[P(x,y,z)=P_{c}.V_{c}(x,y,z)+P_{m}.V_{m}(x,y,z) \tag{4}\]
However, the interactions between the two constituents are not taken into account by the rule of mixture (Vel and Batra 2002, Qian _et al._ 2004). As a result, the Mori-Tanaka scheme is used to capture these interactions, wherein the effective bulk and shear moduli can be expressed as
\[K_{f}=\frac{V_{c}\left(K_{c}-K_{m}\right)}{1+V_{m}\,\frac{K_{c}-K_{m}}{K_{m}+4 /3\mu_{m}}}+K_{m},\mu_{f}=\frac{V_{c}\left(\mu_{c}-\mu_{m}\right)}{1+V_{m}\, \frac{\mu_{c}-\mu_{m}}{\mu_{m}+f_{1}}}+\mu_{m} \tag{5}\]
where \(f_{1}=\frac{\mu_{m}\left(9K_{m}+8\mu_{m}\right)}{6\left(K_{m}+2\mu_{m}\right)}\), and \(K_{c,m}\) and \(\mu_{c,m}\) are bulk and shear moduli of the two phases, respectively, which are defined as
\[K_{c,m}=\frac{E_{c,m}}{3\left(1-2\nu_{c,m}\right)},\quad\mu_{c,m}=\frac{E_{c,m}}{2\left(1+\nu_{c,m}\right)} \tag{6}\]
Then, the effective material properties of the FG plate are now estimated by
\[E=\frac{9K_{f}\mu_{f}}{3K_{f}+\mu_{f}},\nu=\frac{3K_{f}-2\mu_{f}}{2\left(3K_{f }+\mu_{f}\right)} \tag{7}\]
and \(\rho(x,y,z)\) is still computed by Eq. (4).
Fig. 1: Uniform (type 1), non-uniform linear (type 2) and non-uniform non-linear (type 3) thickness plates
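Putting Eqs. (1)-(7) together, the effective properties at a material point can be computed as in the following sketch (function and argument names are ours; material inputs are \((E,\nu,\rho)\) tuples and \(\lambda(x,y)\) is one of the thickness profiles above):

```python
def effective_properties(x, y, z, a, h0, kx, ky, kz, ceramic, metal, lam):
    """Mori-Tanaka effective E, nu, rho at a point of the TD-FG plate,
    following Eqs. (1)-(7)."""
    E_c, nu_c, rho_c = ceramic
    E_m, nu_m, rho_m = metal
    h = h0 * lam(x, y)
    Vc = (x / a) ** kx * (y / a) ** ky * (0.5 + z / h) ** kz        # Eq. (1)
    Vm = 1.0 - Vc                                                    # Eq. (3)
    K_c, mu_c = E_c / (3 * (1 - 2 * nu_c)), E_c / (2 * (1 + nu_c))   # Eq. (6)
    K_m, mu_m = E_m / (3 * (1 - 2 * nu_m)), E_m / (2 * (1 + nu_m))
    f1 = mu_m * (9 * K_m + 8 * mu_m) / (6 * (K_m + 2 * mu_m))
    K_f = K_m + Vc * (K_c - K_m) / (1 + Vm * (K_c - K_m) / (K_m + 4 * mu_m / 3))
    mu_f = mu_m + Vc * (mu_c - mu_m) / (1 + Vm * (mu_c - mu_m) / (mu_m + f1))  # Eq. (5)
    E = 9 * K_f * mu_f / (3 * K_f + mu_f)                            # Eq. (7)
    nu = (3 * K_f - 2 * mu_f) / (2 * (3 * K_f + mu_f))
    rho = rho_c * Vc + rho_m * Vm                                    # Eq. (4)
    return E, nu, rho
```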
An FG plate consisting of ceramic and metal phases is depicted in Fig. 2(a), while the Young modulus distribution of UD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) is presented in Fig. 2(b).
## 3 Basic formulations for FSDT using FEA and MITC4
### Formulation of FSDT finite element model
According to FSDT, the displacement field at an arbitrary point (x, y, z) of a plate can be approximated as
\[\begin{split} u(x,y,z,t)&=u_{0}(x,y,t)+z\phi_{x}(x,y,t)\\ v(x,y,z,t)&=v_{0}(x,y,t)+z\phi_{y}(x,y,t)\\ w(x,y,t)&=w_{0}(x,y,t)\end{split} \tag{8}\]
where \(u_{0}\), \(v_{0}\), \(w_{0}\), \(\phi_{x}\) and \(\phi_{y}\) are functions of \(x\), \(y\) and \(t\) (time); \(u_{0}\), \(v_{0}\), \(w_{0}\) denote the displacements of a point on the middle surface; \(\phi_{x}\) and \(\phi_{y}\) are the rotations of a transverse normal about the y- and x-axes. Note that a subscript comma denotes differentiation with respect to the spatial coordinates. Fig. 3 shows the deformed plate cross-section view.
Fig. 3: A cross section view of a deformed plate
Fig. 2: Examples of multi-directional FG plates with Mori-Tanaka micro-mechanical model
The generalized displacements and rotations of the plate can be approximated in the FEA with the bilinear quadrilateral shape functions \(N_{i}\). Then, the strains are expressed based on the FEA as follows:
\[\mathbf{u}=\sum_{i=\overline{1,4}}N_{i}\mathbf{q}_{i},\ \text{where}\ \mathbf{u}=\left\{u_{0},v_{0},w_{0},\phi_{x},\phi_{y}\right\}^{\text{T}},\ \mathbf{q}_{i}=\left\{u_{i},v_{i},w_{i},\phi_{xi},\phi_{yi}\right\}^{\text{T}} \tag{9}\]
\[\begin{split}&\mathbf{\varepsilon}_{0}=\left\{u_{0,x}\quad v_{0,y}\quad u_{0,y}+v_{0,x}\right\}^{\text{T}},\qquad\mathbf{\varepsilon}_{1}=\left\{\phi_{x,x}\quad\phi_{y,y}\quad\phi_{x,y}+\phi_{y,x}\right\}^{\text{T}}\\ &\left\{\mathbf{\varepsilon}_{0}\quad\mathbf{\varepsilon}_{1}\right\}^{\text{T}}=\sum_{i=\overline{1,4}}\mathbf{B}_{i}^{\text{mb}}\mathbf{q}_{i}=\mathbf{B}^{\text{mb}}\mathbf{q},\qquad\mathbf{\varepsilon}_{2}=\left\{\phi_{x}+w_{0,x}\quad\phi_{y}+w_{0,y}\right\}^{\text{T}}=\mathbf{B}^{s}\mathbf{q}\\ &\mathbf{B}^{k}=\left\{\mathbf{B}_{1}^{k}\quad\mathbf{B}_{2}^{k}\quad\mathbf{B}_{3}^{k}\quad\mathbf{B}_{4}^{k}\right\}^{\text{T}},\ k\in\{\text{mb},\text{s}\},\qquad\mathbf{q}=\left\{\mathbf{q}_{1}\quad\mathbf{q}_{2}\quad\mathbf{q}_{3}\quad\mathbf{q}_{4}\right\}^{\text{T}}\\ &\mathbf{B}_{i}^{\text{mb}}=\begin{bmatrix}N_{i,x}&0&0&0&0\\ 0&N_{i,y}&0&0&0\\ N_{i,y}&N_{i,x}&0&0&0\\ 0&0&0&N_{i,x}&0\\ 0&0&0&0&N_{i,y}\\ 0&0&0&N_{i,y}&N_{i,x}\end{bmatrix},\qquad\mathbf{B}_{i}^{s}=\begin{bmatrix}0&0&N_{i,x}&N_{i}&0\\ 0&0&N_{i,y}&0&N_{i}\end{bmatrix}\end{split}\]
Based on the generalized Hooke's law, the stress is determined by the constitutive relations as
\[\left\{\sigma_{xx}\quad\sigma_{yy}\quad\sigma_{xy}\right\}^{\text{T}}=\mathbf{ Q}^{\text{mb}}\left\{\mathcal{E}_{xx}\quad\mathcal{E}_{yy}\quad\mathcal{E}_{ xy}\right\}^{\text{T}},\left\{\tau_{xz}\quad\tau_{yz}\right\}^{\text{T}}= \mathbf{Q}^{s}\left\{\gamma_{xz}\quad\gamma_{yz}\right\}^{\text{T}} \tag{10}\]
where \(\left\{\mathcal{E}_{xx}\quad\mathcal{E}_{yy}\quad\mathcal{E}_{xy}\right\}^{ \text{T}}=\mathbf{\varepsilon}_{0}+\mathbf{z}\,\mathbf{\varepsilon}_{1},\left\{ \gamma_{xz}\quad\gamma_{yz}\right\}^{\text{T}}=\mathbf{\varepsilon}_{2}\) and
\[\mathbf{Q}^{\text{mb}}= \left\{\begin{array}{ccccc}Q_{11}&Q_{12}&0\\ Q_{21}&Q_{22}&0\\ 0&0&Q_{44}\end{array}\right\},\mathbf{Q}^{s}= \left\{\begin{array}{ccccc}Q_{55}&0\\ 0&Q_{66}\end{array}\right\} \tag{11}\] \[Q_{11}=Q_{22}=\frac{E}{1-\nu^{2}},Q_{12}=Q_{21}=\frac{E\nu}{1- \nu^{2}},Q_{44}=Q_{55}=Q_{66}=\frac{E}{2\left(1+\nu\right)}\]
The stress resultants are related to the strains by the following relationships
\[\left\{N_{\alpha\beta},M_{\alpha\beta}\right\}=\int\limits_{-h/2}^{h/2}\left\{1,z\right\}\sigma_{\alpha\beta}dz,\quad S_{\alpha}=\int\limits_{-h/2}^{h/2}\tau_{\alpha z}dz \tag{12}\]
where \(\left\{\alpha,\beta\right\}=\left\{x,y\right\}.\) From Eqs. (10) and (12) it can be computed that
\[\left\{\begin{matrix}\mathbf{N}\\ \mathbf{M}\\ \mathbf{S}\end{matrix}\right\}=\begin{bmatrix}\mathbf{D}^{\text{mb}}&\mathbf{0}\\ \mathbf{0}&\mathbf{D}^{\text{s}}\end{bmatrix}\left\{\begin{matrix}\mathbf{\varepsilon}_{0}\\ \mathbf{\varepsilon}_{1}\\ \mathbf{\varepsilon}_{2}\end{matrix}\right\},\quad\text{i.e.,}\ \left\{\begin{matrix}\mathbf{N}\\ \mathbf{M}\end{matrix}\right\}=\mathbf{D}^{\text{mb}}\mathbf{B}^{\text{mb}}\mathbf{q},\quad\mathbf{S}=\mathbf{D}^{\text{s}}\mathbf{B}^{\text{s}}\mathbf{q} \tag{13}\]
where
\[\mathbf{D}^{\text{mb}}=\begin{bmatrix}\mathbf{A}^{1}&\mathbf{A}^{2}\\ \mathbf{A}^{2}&\mathbf{A}^{3}\end{bmatrix},\quad\mathbf{D}^{\text{s}}=\int_{-h/2}^{h/2}\mathbf{Q}^{\text{s}}dz \tag{14}\]
\[\mathbf{A}^{1}=\int_{-h/2}^{h/2}\mathbf{Q}^{\text{mb}}dz,\quad\mathbf{A}^{2}=\int_{-h/2}^{h/2}z\,\mathbf{Q}^{\text{mb}}dz,\quad\mathbf{A}^{3}=\int_{-h/2}^{h/2}z^{2}\,\mathbf{Q}^{\text{mb}}dz \tag{15}\]
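Because the graded properties vary through the thickness, the integrals in Eq. (15) are conveniently evaluated by Gauss-Legendre quadrature at each in-plane point; a sketch (function names are ours):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def plane_stress_Q(E, nu):
    """Plane-stress constitutive matrix Q^mb of Eq. (11)."""
    c = E / (1.0 - nu ** 2)
    return np.array([[c, c * nu, 0.0],
                     [c * nu, c, 0.0],
                     [0.0, 0.0, E / (2.0 * (1.0 + nu))]])

def thickness_integrals(E_fn, nu_fn, h, n_gauss=8):
    """A^1 = int Q dz, A^2 = int z Q dz, A^3 = int z^2 Q dz over
    [-h/2, h/2] as in Eq. (15); E_fn(z) and nu_fn(z) return the graded
    properties at height z for a fixed in-plane point."""
    xi, w = leggauss(n_gauss)
    A1, A2, A3 = (np.zeros((3, 3)) for _ in range(3))
    for xk, wk in zip(xi, w):
        z = 0.5 * h * xk                      # map [-1, 1] -> [-h/2, h/2]
        Q = 0.5 * h * wk * plane_stress_Q(E_fn(z), nu_fn(z))
        A1 += Q
        A2 += z * Q
        A3 += z ** 2 * Q
    return A1, A2, A3
```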
### Formulation of MITC4 scheme
Constructing thick plate elements based on the Mindlin-Reissner plate theory, in which the deflection and the rotations are independently defined, is indeed more straightforward than constructing thin plate elements (Zhang and Kuang 2007). However, in the thin-plate limit, low-order standard isoparametric displacement-based plate elements without special treatment suffer from the shear locking phenomenon induced by Kirchhoff-type constraints (Arnold 1981). Therefore, the shear energy is defined in terms of the assumed covariant transverse shear strain field of MITC4 (Thompson and Thangavelu 2002) to eliminate shear locking in the FSDT plate theory, while the membrane-bending part keeps the standard displacement-based approximation. The shear strain part is re-defined by linear interpolation between the mid-points of the element edges; that is, the transverse shear interpolation in local convective coordinates is assumed to be linear, as shown in Fig. 4.
\[\gamma_{\xi\nu}=\begin{Bmatrix}\gamma_{\xi}\\ \gamma_{\nu}\end{Bmatrix}=\frac{1}{2}\begin{Bmatrix}(1-\nu)\gamma_{\xi}^{C}+(1+\nu)\gamma_{\xi}^{A}\\ (1-\xi)\gamma_{\nu}^{B}+(1+\xi)\gamma_{\nu}^{D}\end{Bmatrix} \tag{16}\]
\[\gamma_{xy}=\begin{Bmatrix}\gamma_{x}\\ \gamma_{y}\end{Bmatrix}=\begin{bmatrix}x_{,\xi}&y_{,\xi}\\ x_{,\nu}&y_{,\nu}\end{bmatrix}^{-1}\begin{Bmatrix}\gamma_{\xi}\\ \gamma_{\nu}\end{Bmatrix}=\mathbf{J}^{-1}\begin{Bmatrix}\gamma_{\xi}\\ \gamma_{\nu}\end{Bmatrix} \tag{17}\]
where the strain components at the tying points A, B, C and D are obtained as:
\[\gamma_{\hat{\xi}}^{\psi}=x_{,\hat{\xi}}^{\psi}\,\frac{\phi_{x}^{a}+\phi_{x}^{b}}{2}+y_{,\hat{\xi}}^{\psi}\,\frac{\phi_{y}^{a}+\phi_{y}^{b}}{2}+\frac{w_{a}-w_{b}}{2} \tag{18}\]
in which \(\left(\hat{\xi},\psi,a,b\right)\in\left\{\left(\xi,C,2,1\right),\left(\xi,A,3,4\right),\left(\nu,B,4,1\right),\left(\nu,D,3,2\right)\right\}.\)
By using the MITC4 technique, the approximation of modified shear strain components may be expressed as
\[\gamma^{m}=\mathbf{J}^{-1}\sum_{i=\overline{1,4}}\gamma_{\xi\nu}^{i}\left\{w_{i}\quad\phi_{x}^{i}\quad\phi_{y}^{i}\right\}^{\text{T}}=\mathbf{B}^{m}\mathbf{q} \tag{19}\]
where
\[\gamma^{i}_{\xi\nu}=\begin{bmatrix}N^{(i)}_{,\xi}&\xi^{(i)}x^{\chi}_{,\xi}N^{(i)}_{,\xi}&\xi^{(i)}y^{\chi}_{,\xi}N^{(i)}_{,\xi}\\ N^{(i)}_{,\nu}&\eta^{(i)}x^{\upsilon}_{,\nu}N^{(i)}_{,\nu}&\eta^{(i)}y^{\upsilon}_{,\nu}N^{(i)}_{,\nu}\end{bmatrix} \tag{20}\]
in which \(\left(i,\chi,\upsilon\right)\in\left\{\left(1,C,B\right),\left(2,C,D\right),\left(3,A,D\right),\left(4,A,B\right)\right\}\) and \(\mathbf{J}\) is the Jacobian transformation matrix of the mapping \(\mathbf{x}:\left[-1,1\right]^{2}\rightarrow\Omega^{e}\).
### Energy and frequency equation
The total strain energy of the initially stressed plate is given by the strain energy due to vibratory stresses as follows.
\[U=\frac{1}{2}\!\int\limits_{V_{e}}\!\left(\sigma_{xx}\varepsilon_{xx}+\sigma_ {yy}\varepsilon_{yy}+\sigma_{xy}\varepsilon_{xy}+\tau_{xz}\gamma_{xz}+\tau_{ yz}\gamma_{yz}\right)\!dV_{e} \tag{21}\]
Applying the finite element model:
\[\delta U=\!\int\limits_{\Omega_{e}}\!\left[\delta\mathbf{q}^{\mathrm{T}}\left( \mathbf{B}^{\mathrm{mb}}\right)^{\mathrm{T}}\mathbf{D}^{\mathrm{mb}}\mathbf{B} ^{\mathrm{mb}}\mathbf{q}+\delta\mathbf{q}^{\mathrm{T}}\left(\mathbf{B}^{\mathrm{ s}}\right)^{\mathrm{T}}\mathbf{D}^{\mathrm{s}}\mathbf{B}^{\mathrm{s}}\mathbf{q} \right]\!d\Omega_{e} \tag{22}\]
where \(\mathbf{N}^{\mathrm{Th}}=\!\left[\mathbf{N}^{\mathrm{Th}}_{1}\quad\mathbf{N}^ {\mathrm{Th}}_{2}\quad\mathbf{N}^{\mathrm{Th}}_{3}\quad\mathbf{N}^{\mathrm{Th}}_ {4}\right]\) in which
Fig. 4: Geometry of quadrilateral plate element MITC4
\[\mathbf{N}_{i}^{\text{Th}}=\begin{bmatrix}N_{i,x}&0&0&zN_{i,x}&0\\ 0&N_{i,x}&0&0&zN_{i,x}\\ 0&0&N_{i,x}&0&0\\ N_{i,y}&0&0&zN_{i,y}&0\\ 0&N_{i,y}&0&0&zN_{i,y}\\ 0&0&N_{i,y}&0&0\end{bmatrix}\]
The virtual work done by the applied transverse load \(q_{0}\), the in-plane buckling forces, and the foundation reaction for a typical finite element \(\Omega_{e}\), together with the virtual kinetic energy, are denoted \(\delta V\), \(\delta V_{B}\), \(\delta V_{S}\) and \(\delta K\), respectively, and are given in detail below.
\[\begin{split}\delta V&=-\int_{\Omega_{e}}\delta w\,q_{0}\,d\Omega_{e},\qquad\delta V_{S}=\int_{\Omega_{e}}\delta w\,k_{W}\,w\,d\Omega_{e},\\ \delta V_{B}&=\delta\mathbf{q}^{\text{T}}\int_{\Omega_{e}}\mathbf{B}_{g}^{\text{T}}\mathbf{N}^{0}\mathbf{B}_{g}\,d\Omega_{e}\,\mathbf{q},\qquad\delta K=\int_{V_{e}}\rho\,\delta\dot{\mathbf{u}}^{\text{T}}\dot{\mathbf{u}}\,dV_{e}\end{split}\]
\[\mathbf{m}=\begin{bmatrix}\mathbf{m}_{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{m}_{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{m}_{0}\end{bmatrix},\mathbf{m}_{0}=\begin{bmatrix} I_{0}&I_{1}\\ I_{1}&I_{2}\end{bmatrix} \tag{28f}\] \[I_{0}=\int\limits_{-h/2}^{h/2}\rho(x,y,z)dz,I_{1}=\int\limits_{-h/2}^{h/2}z \rho(x,y,z)dz,I_{2}=\int\limits_{-h/2}^{h/2}z^{2}\rho(x,y,z)dz \tag{28g}\]
Fig. 5: A non-uniform thickness plate model embedded in elastic foundation presented by a Winkler model
Note that the superposed dot on a variable denotes the time derivative, and the above equation is derived based on an assumption of harmonic motion of the plate, i.e. \(\mathbf{\ddot{u}}_{p}=-\omega^{2}\mathbf{u}_{p}\), where \(\omega\) symbolizes the frequency of natural vibration; \(\rho(x,y,z)\) is the mass density per unit volume; \(\mathbf{N}^{0}\) is a matrix which includes the in-plane applied forces.
Consequently, the finite element model of static bending, free vibration, and buckling problems of TD-FG plate resting on Winkler foundation is respectively expressed in the form of the following linear algebraic equations
\[\mathbf{K}\mathbf{q}=\mathbf{F},(\mathbf{K}-\omega^{2}\mathbf{M})\mathbf{q}= \mathbf{0},(\mathbf{K}-\lambda_{\mathrm{cr}}\mathbf{K}_{{}_{g}})\mathbf{q}= \mathbf{0} \tag{29}\]
where \(\mathbf{q}\) is the DOF vector of the FE analysis; \(\mathbf{K},\mathbf{M},\mathbf{K}_{g}\), and \(\mathbf{F}\) are the stiffness matrix, mass matrix, geometric stiffness matrix, and applied load vector, whose element contributions are defined as
\[\mathbf{K}=\int\limits_{\Omega}\left(\mathbf{B}^{\mathrm{mb}} \right)^{\mathrm{T}}\mathbf{D}^{\mathrm{mb}}\mathbf{B}^{\mathrm{mb}}+\left( \mathbf{B}^{\mathrm{s}}\right)^{\mathrm{T}}\mathbf{D}^{\mathrm{s}}\mathbf{B}^ {\mathrm{s}}+\mathbf{N}_{w}^{\mathrm{T}}k_{W}\mathbf{N}_{{}_{w}}d\Omega_{e} \tag{30}\] \[\mathbf{M}=\int\limits_{\Omega_{e}}\mathbf{N}_{p}^{\mathrm{T}} \mathbf{m}\mathbf{N}_{p}d\Omega_{e},\mathbf{K}_{{}_{g}}=\int\limits_{\Omega_ {e}}\mathbf{B}_{{}_{g}}^{\mathrm{T}}\mathbf{N}^{0}\mathbf{B}_{{}_{g}}d\Omega _{e},\mathbf{F}=\int\limits_{\Omega_{e}}\mathbf{N}_{w}^{\mathrm{T}}q_{0}d \Omega_{e}\]
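Once the global matrices are assembled and the boundary conditions imposed, the three problems in Eq. (29) reduce to a standard linear solve and two generalized eigenvalue problems; a dense scipy sketch (it assumes \(\mathbf{M}\) and \(\mathbf{K}_{g}\) are symmetric positive definite on the retained DOFs; sparse eigensolvers would be used in practice):

```python
import numpy as np
from scipy.linalg import solve, eigh

def solve_plate(K, M, Kg, F):
    """Static bending, free vibration, and buckling per Eq. (29)."""
    q = solve(K, F, assume_a="sym")            # bending: K q = F
    omega2 = eigh(K, M, eigvals_only=True)     # vibration: (K - w^2 M) q = 0
    lam = eigh(K, Kg, eigvals_only=True)       # buckling: (K - lam Kg) q = 0
    omega1 = np.sqrt(omega2[0])                # fundamental natural frequency
    lam_cr = lam[lam > 0].min()                # smallest positive buckling load
    return q, omega1, lam_cr
```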
## 4 Creating prediction of deep neural network (DNN)
### Creating a DNN model
To predict the non-dimensional responses of the plate, we first have to train the DNN model with several sets of material parameters, stiffness of the elastic foundation, and the corresponding non-dimensional results. The DNN model is set up with the following input data: volume fraction indexes (k\({}_{x}\), k\({}_{y}\), k\({}_{z}\)), stiffness of the foundation \(k_{w}\), length-to-thickness ratio (a/h\({}_{0}\)), and type of problem; the output is the non-dimensional value corresponding to the type of problem: central deflection (\(\overline{w}\)) for bending, fundamental frequency (\(\overline{\omega}\)) for free vibration, and critical buckling load (\(\overline{\lambda}_{\mathrm{cr}}\)) for the uni- and bi-axial buckling problems.
### Collecting data for training
After implementing the FEM with MITC4 for TD-FG variable thickness plates, the present approach is verified against reference papers with high accuracy. Then, all results are collected as samples for the DNN model with input variables: 'problem' (bending, free-vibration, uni- and bi-axial buckling), 'boundary condition' (CCCC, SSSS, CFCF, SFSF, CSCS, where 'C', 'S', and 'F' stand for clamped, simply supported, and free conditions, respectively), k\({}_{x}\), k\({}_{y}\), k\({}_{z}\) and \(k_{w}\), type of plate (as shown in Fig. 1), and a/h\({}_{0}\); the outputs are the corresponding non-dimensional data. A TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) plate is used in all training and test problems. For the training process of the deep neural network, dataset training patterns are randomly created through iterations from a general code for the TD-FG variable thickness plate on an elastic foundation. There are 20,000 training patterns designed for the training set of the TD-FG problem with random inputs. Note that 90% of the data patterns are randomly chosen for training, and the remaining 10% are used for the validation process. Using this dataset, the DNN performs the training process through 5,000 epochs to obtain the optimal mapping rules expressed by the weights.
### Theoretical DNN model
A multi-layer perceptron (MLP) model architecture is utilized to predict the non-dimensional values. In this article, the model consists of three hidden layers and takes the inputs mentioned earlier. As the model is quite shallow (only three hidden layers), a Sigmoid function is selected as the activation function instead of ReLU or its variants, despite their popularity. The Adam optimizer (Kingma and Ba 2014) is applied as an adaptive learning-rate optimization algorithm designed specifically for training DNNs. The loss function is defined as:
\[L=MSE=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\hat{Y}_{i}\right)^{2} \tag{31}\]
where MSE is the mean square error, n is the batch size, \(Y_{i}\) is the ground truth, and \(\hat{Y}_{i}\) is the prediction.
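A minimal PyTorch sketch of this model and its training step is shown below; the hidden width, learning rate, and input size are illustrative assumptions, since the exact layer sizes are not restated here:

```python
import torch
import torch.nn as nn

class PlateDNN(nn.Module):
    """Three-hidden-layer MLP with batch normalization and Sigmoid activations."""
    def __init__(self, n_in, n_hidden=64, n_out=1):
        super().__init__()
        layers, d = [], n_in
        for _ in range(3):                        # three hidden layers
            layers += [nn.Linear(d, n_hidden),
                       nn.BatchNorm1d(n_hidden),  # BN right before the non-linearity
                       nn.Sigmoid()]
            d = n_hidden
        layers.append(nn.Linear(d, n_out))        # one non-dimensional output value
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = PlateDNN(n_in=10)                         # input size depends on the encoding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                            # the MSE loss of Eq. (31)

def train_step(xb, yb):
    """One Adam step; BN requires batch size > 1 in training mode."""
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)                 # prediction vs ground truth
    loss.backward()
    optimizer.step()
    return loss.item()
```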
In order to make the training of the NN faster and more stable, an algorithmic method named Batch Normalization (BN, Ioffe and Szegedy 2015) is applied. BN normalizes the activation vectors of the hidden layers using the current batch's first and second statistical moments (mean and variance). This normalization step is applied right before (or right after) the non-linear function, and BN is computed differently during the training and the testing phases. At each hidden layer, BN transforms the signal as follows:
\[\mu=\frac{1}{n}\sum_{i}Z^{(i)},\quad\sigma^{2}=\frac{1}{n}\sum_{i}\left(Z^{(i)}-\mu\right)^{2},\quad Z^{(i)}_{\text{norm}}=\frac{Z^{(i)}-\mu}{\sqrt{\sigma^{2}+\varepsilon}},\quad\hat{Z}^{(i)}=\gamma Z^{(i)}_{\text{norm}}+\beta \tag{32}\]
The BN layer first determines the mean \(\mu\) and the variance \(\sigma^{2}\) of the activation values across the batch. It then normalizes the activation vector \(Z^{(i)}\), so that each neuron's output follows a standard normal distribution across the batch. It finally calculates the layer's output \(\hat{Z}\) by applying a linear transformation with \(\gamma\) and \(\beta\), two trainable parameters: \(\hat{Z}=\gamma Z^{(i)}_{\text{norm}}+\beta\). In each hidden layer, the optimum distribution is thus learned by the DNN model itself. At each iteration, the network computes the mean \(\mu\) and the standard deviation \(\sigma\) of the current batch and updates \(\gamma\) and \(\beta\). The input data are pre-processed before being fed to the neural network model: non-scalar values are one-hot encoded, while scalar values are normalized with Z-score normalization, new value \(=\left(Z^{(i)}-\mu_{0}\right)/\sigma_{0}\), where \(Z^{(i)}\) is the original value, \(\mu_{0}\) is the mean of the data, and \(\sigma_{0}\) is the standard deviation of the data.
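The training-time BN transform of Eq. (32) and the input pre-processing can be restated in a few lines of NumPy; the following is an illustrative sketch only:

```python
import numpy as np

def batchnorm_forward(Z, gamma, beta, eps=1e-5):
    """Training-time batch normalization, following Eq. (32)."""
    mu = Z.mean(axis=0)                      # batch mean, per neuron
    var = Z.var(axis=0)                      # batch variance, per neuron
    Z_norm = (Z - mu) / np.sqrt(var + eps)   # standardized activations
    return gamma * Z_norm + beta             # learned affine transform

def zscore(col):
    """Z-score normalization of scalar inputs: (x - mu0) / sigma0."""
    return (col - col.mean()) / col.std()

def one_hot(labels, vocab):
    """One-hot encoding of categorical inputs such as 'problem' or 'bc'."""
    index = {v: i for i, v in enumerate(vocab)}
    return np.eye(len(vocab))[[index[l] for l in labels]]
```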
Table 1 summarizes the advantages of BN in the multilayer perceptron (MLP). Table 2 compares the Adam optimizer with different activation functions (Sigmoid, Softplus, ReLU, and Tanh). As can be seen, the best option for both training error and validation error is the Sigmoid function with BN.
The DNN model is illustrated in Fig. 6, while Fig. 7 shows the convergence history of the training and validation loss functions achieved by the above-mentioned DNN model. A flowchart of the DNN approach for predicting the non-dimensional values of variable thickness TD-FG plate problems resting on an elastic Winkler foundation is depicted in Fig. 8.
\begin{table}
\begin{tabular}{|l|l|} \hline Without batch normalization & With batch normalization \\ \hline \(\bullet\) Raw signal & \(\bullet\) Normalized signal \\ \hline \(\bullet\) High interdependency between distributions & \(\bullet\) Mitigated interdependency between distributions \\ \hline \(\bullet\) Slow and unstable training & \(\bullet\) Fast and stable training \\ \hline \end{tabular}
\end{table}
Table 1: Comparisons of Multilayer Perceptron with and without batch normalization.
Fig. 6 The DNN model with input data and output data.
Fig. 7 The convergence history of the training and validation loss functions with the random dataset
Fig. 8 A computational flowchart of DNN for predicting behavior of static, dynamic, and stability problems
## 5 Numerical examples
In this study, the bending, free vibration, and buckling behaviours of several types of multi-directional FG plates are investigated, both to verify the proposed method against previous papers and to present new results with DNN prediction. First, the material properties of these FG plates are presented in Tab. 3.
The properties of the FG materials are estimated with the Mori-Tanaka scheme, and uni-, bi-, and tri-directional FG plates are surveyed in both the linear algebraic and eigenvalue problems. Moreover, the effects of gradient indexes, boundary conditions, length-to-thickness ratios, temperature in the ceramic phase (thermal case only), and the elastic Winkler parameter on multi-directional FG plates are also considered. In all problems in this section, the shear correction factor is fixed at SCF = \(\pi^{2}\)/12. After comparison with the reference results, the non-dimensional values for the bending problem (central deflection \(\overline{w}\)), the free-vibration problem (natural frequency \(\overline{\omega}\)), and the uni- and bi-axial buckling problems (critical buckling load \(\overline{P}_{\mathrm{cr}}\)) are calculated for the present study and for data collection with the FG material SUS304/Si\({}_{3}\)N\({}_{4}\) as follows:
\[\overline{w}\Bigg{(}\frac{a}{2},\frac{a}{2},0\Bigg{)}=w\Bigg{(}\frac{a}{2}, \frac{a}{2},0\Bigg{)}\frac{E_{c}h_{0}^{2}}{a^{3}q_{0}} \tag{33a}\]
\[\overline{\omega}=\omega\left(a\ /\ \pi\right)^{2}\sqrt{\frac{\rho_{c}h_{0}}{D _{c}}} \tag{33b}\]
\[\overline{P}_{\mathrm{cr}}=\frac{P_{\mathrm{cr}}a^{2}}{\pi^{2}D_{c}} \tag{33c}\]
where \(D_{c}=E_{c}h_{0}^{3}/12\left(1-\nu_{c}^{2}\right)\).
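These normalizations translate directly into small helper functions. The sketch below is illustrative only; note that the moduli in Table 3 are given in GPa and must be used in consistent units:

```python
import numpy as np

def D_c(E_c, h0, nu_c):
    """Flexural rigidity of the ceramic phase: D_c = E_c h0^3 / (12 (1 - nu_c^2))."""
    return E_c * h0**3 / (12.0 * (1.0 - nu_c**2))

def w_bar(w_center, E_c, h0, a, q0):
    """Eq. (33a): non-dimensional central deflection."""
    return w_center * E_c * h0**2 / (a**3 * q0)

def omega_bar(omega, a, rho_c, h0, E_c, nu_c):
    """Eq. (33b): non-dimensional fundamental frequency."""
    return omega * (a / np.pi)**2 * np.sqrt(rho_c * h0 / D_c(E_c, h0, nu_c))

def P_bar(P_cr, a, E_c, h0, nu_c):
    """Eq. (33c): non-dimensional critical buckling load."""
    return P_cr * a**2 / (np.pi**2 * D_c(E_c, h0, nu_c))
```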
### 5.1 Bending problems
**Example 1:** In this example, uniform thickness UD-FG (Al/ZrO\({}_{2}\)) thick plates (k\({}_{\mathrm{x}}\)=k\({}_{\mathrm{y}}\)=0) with a length-to-thickness ratio a/h\({}_{0}\) = 5 are investigated using three different boundary conditions: CCCC, SSSS, and SFSF. The load in this example is a uniform load with magnitude q=q\({}_{0}\)=-1 (see Fig. 9a). The material properties are assumed to vary according to the Mori-Tanaka distribution of the volume fractions of the constituents. The non-dimensional transverse displacement is compared with reference solutions in Fig. 10.
\begin{table}
\begin{tabular}{|c|l|c|c|c|} \hline & Material & E (GPa) & \(\nu\) & \(\rho\left(\mathrm{kg/m}^{3}\right)\) \\ \hline \multirow{4}{*}{Ceramics} & Silicon nitride (Si\({}_{3}\)N\({}_{4}\)) & 348.43 & 0.24 & 2370 \\ \cline{2-5} & Alumina (Al\({}_{2}\)O\({}_{3}\)) & 380 & 0.3 & 3800 \\ \cline{2-5} & Zirconia (ZrO\({}_{2}\)) & 151 & 0.3 & 3000 \\ \hline \multirow{2}{*}{Metals} & Stainless steel (SUS304) & 201.04 & 0.3262 & 8166 \\ \cline{2-5} & Aluminum (Al) & 70 & 0.32 & 2702 \\ \hline \end{tabular}
\end{table}
Table 3: Material properties of the FG constituents.
Fig. 10 shows the non-dimensional transverse displacement compared with those of GSDT (Nguyen-Xuan _et al._ 2014) and TSDT (Tran _et al._ 2013) for the UD-FG plate bending problem with SSSS and CCCC boundary conditions. From the figure, it can be seen that the obtained transverse displacement is in good agreement with the existing results, with a small relative error.
**Example 2:** An in-plane BD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) (k\({}_{z}\)=0) square plate subjected to a sinusoidal load of \(q=q_{0}\sin\left(\pi x\,/\,a\right)\sin\left(\pi y\,/\,a\right),q_{0}=1\) (as depicted in Fig. 9b) is considered in this
Fig. 10: The comparison of the non-dimensional transverse displacement of the UD-FG Al/ZrO\({}_{2}\) plate under uniform load in the bending problem
Fig. 9: Geometry of FG plate under uniformly distributed and sinusoidal loads
example. The non-dimensional central deflection (\(\overline{w}\) in Eq. 33a) of the plate is examined with CCCC and SSSS boundary conditions and compared with Lieu et al. (2018). Fig. 11 illustrates the comparison for this problem in the cases of a thick plate (a/h\({}_{0}\)=5) and a thin plate (a/h\({}_{0}\)=100). From the figure, it can be observed that the results obtained by the present method are in excellent agreement with the reference results.
From the above examples, it can be concluded that the present approach is effective and reliable for analysing the bending behaviour of FG plates. Thus, the present method can reliably generate the dataset for training the DNN on the bending problem in the subsequent example.
**Example 3:** A TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square plate subjected to sinusoidal loads, as depicted in Fig. 9b, is considered in this example. The non-dimensional central deflection of Eq. (33a) is examined with CCCC and SSSS boundary conditions. Table 4 reports the present study for this problem together with the DNN prediction in the cases of a thick plate (a/h\({}_{0}\)=20) and a thin plate (a/h\({}_{0}\)=50). It can be seen that the DNN prediction is close to the present study using finite element
analysis with MITC4, which verifies the accuracy of the DNN model. The maximum relative error in the TD-FG bending problem is 2.5492%, for the thick plate a/h\({}_{0}\)=20 with CCCC and k\({}_{x}\)=8, k\({}_{y}\)=4, k\({}_{z}\)=2 (the FEM result is 0.3550, the DNN prediction is 0.3529). The average relative error in this case is 1.3065%. Fig. 12 shows the deflection of TD-FG thick and thin plates with CCCC and SSSS and k\({}_{x}\)=4, k\({}_{y}\)=4, k\({}_{z}\)=2.
### 5.2 Free-vibration problems
In this part, free vibration examples are investigated in terms of both analysis and prediction.
**Example 4:** The Al/Al\({}_{2}\)O\({}_{3}\) UD-FG plate is investigated with both CFCF and CFFF boundary conditions. In this example, the non-dimensional frequency \(\varpi=\omega h_{0}\sqrt{\rho_{{}_{m}}\left/\right.E_{{}_{m}}}\) is adopted. The distribution of the FG material properties is estimated by the rule of mixtures. Fig. 13 shows the comparison with the three-dimensional quadrature element method of Wang et al. (2019). From the results, it can be seen that the presented frequencies are in good agreement with the previous results, with very small errors.
**Example 5:** This example examines the free vibration responses of in-plane BD-FG plates (k\({}_{z}\)=0) with uniform thickness, compared against the reference Lieu et al. (2018). The IBFG square plate of thickness h\({}_{0}\) is investigated for both thick (a/h\({}_{0}\)=5) and thin (a/h\({}_{0}\)=100) cases, where a is the length of the model. A SUS304/Si\({}_{3}\)N\({}_{4}\) IBFG square plate is considered with the non-dimensional frequency defined in Eq. (33b). Fig. 14 presents the first non-dimensional frequency for this problem with CCCC and SSSS boundary conditions for varying indexes k\({}_{x}\) and k\({}_{y}\). As observed, the present method is effective and reliable for BD-FG plates under free vibration. Furthermore, the accuracy of the current approach ensures a sound generated dataset for the subsequent DNN training process.
Fig. 13: Comparisons of the first six non-dimensional frequencies for the CFCF and CFFF boundary conditions of the UD-FG Al/Al\({}_{2}\)O\({}_{3}\) square plate.
**Example 6:** In this example, the free vibration responses of both TD-FG thick and thin plates using three kinds of thickness models, uniform (type 1), non-uniform linear (type 2), and non-uniform non-linear (type 3), are considered. A SUS304/Si\({}_{3}\)N\({}_{4}\) TD-FG square plate is analysed with the non-dimensional frequency defined in Eq. (33b).
Tables 5, 6, 7 and 8 show the non-dimensional first natural frequencies of TD-FG plates with the three thickness types for the CCCC and SSSS boundary conditions, respectively. As can be seen, the relative errors in all cases are smaller than 2.5%, and the average relative errors for the CCCC and SSSS boundary conditions in free vibration of the TD-FG plate are 1.7551% and 1.0176%, respectively. On the other hand, the relative errors are 1.1611% (type 1), 1.6332% (type 2), and 1.3647% (type 3) for the different thickness types. In particular, it can be seen from the tables that when the gradient indexes k\({}_{\text{x}}\), k\({}_{\text{y}}\), and k\({}_{\text{z}}\) increase, the contribution of the ceramic phase decreases, which reduces the stiffness of the plate. Since the vibration frequency is directly proportional to the plate stiffness, the attained frequencies decrease as the gradient indexes increase. Besides that, as can be seen from the table pairs 5-7 and 6-8, the non-dimensional frequency increases when the length-to-thickness ratio increases. Figs. 15, 16, and 17 show the first three free vibration mode shapes for the SSSS boundary condition of the TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square plate with a/h\({}_{0}\)=100 and k\({}_{\text{x}}\)=5, k\({}_{\text{y}}\)=5, k\({}_{\text{z}}\)=2 for the three thickness types.
Tables 5-7: Comparison studies of the first non-dimensional frequency \(\overline{\omega}=\omega\left(a\,/\,\pi\right)^{2}\sqrt{\rho_{c}h_{0}\,/\,D_{c}}\), \(D_{c}=E_{c}h_{0}^{3}\,/\,12\left(1-\nu_{c}^{2}\right)\), of TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square plates with three thickness types for the CCCC and SSSS boundary conditions (companion tables to Table 8; data tables not reproduced here).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multirow{2}{*}{\(k_{z}\)} & \multirow{2}{*}{\(\overline{\omega}=\omega\big{(}a\,/\,\pi\big{)}^{2}\,\sqrt{\rho_{c}h_{0}\,/\,D_{c}}\,,D_{c}=E_{c}h_{0}^{3}\,/\,12 \Big{(}1-\nu_{c}^{2}\Big{)}\) of SSSS boundary conditions for TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thick square plate a/h\({}_{0}\)=100.} & \multicolumn{1}{c}{\(k_{x}\)} \\ \hline \multirow{4}{*}{\(k_{z}\)} & \multirow{2}{*}{\(k_{z}\)} & \multicolumn{4}{c|}{\(1\)} & \multicolumn{2}{c|}{\(2\)} & \multicolumn{2}{c|}{\(5\)} & \multicolumn{2}{c|}{\(10\)} \\ \cline{3-11} & & \(k_{y}\) & Present & Predict & Present & Predict & Present & Predict & Present & Predict \\ \hline \multirow{4}{*}{\(1\)} & \multirow{4}{*}{\(1\)} & 1 & 0.9083 & 0.9294 & 0.8802 & 0.8989 & 0.8545 & 0.8642 & 0.8463 & 0.8519 \\ \cline{3-11} & & 2 & 0.8802 & 0.8914 & 0.8645 & 0.8737 & 0.8498 & 0.8540 & 0.8446 & 0.8474 \\ \cline{3-11} & & 5 & 0.8547 & 0.8641 & 0.8498 & 0.8573 & 0.8448 & 0.8492 & 0.8427 & 0.8456 \\ \cline{3-11} & & 10 & 0.8463 & 0.8575 & 0.8446 & 0.8542 & 0.8428 & 0.8491 & 0.8418 & 0.8459 \\ \cline{3-11} & & 1 & 0.8888 & 0.9025 & 0.8692 & 0.8822 & 0.8511 & 0.8588 & 0.8450 & 0.8488 \\ \cline{3-11} & & 2 & 0.8692 & 0.8763 & 0.8581 & 0.8651 & 0.8475 & 0.8518 & 0.8437 & 0.8453 \\ \cline{3-11} & & 5 & 0.8511 & 0.8582 & 0.8475 & 0.8543 & 0.8439 & 0.8485 & 0.8423 & 0.8435 \\ \cline{3-11} & & 10 & 0.8450 & 0.8544 & 0.8437 & 0.8525 & 0.8423 & 0.8483 & 0.8416 & 0.8443 \\ \hline \multirow{4}{*}{\(1\)} & \multirow{4}{*}{\(1\)} & 1 & 1.3394 & 1.3438 & 1.2983 & 1.3096 & 1.2629 & 1.2672 & 1.2514 & 1.2540 \\ \cline{3-11} & & 2 & 1.3002 & 1.3007 & 1.2772 & 1.2811 & 1.2562 & 1.2574 & 1.2488 & 1.2502 \\ \cline{3-11} & & 5 & 1.2644 & 1.2674 & 1.2570 & 1.2596 & 1.2494 & 1.2492 & 1.2461 & 1.2464 \\ \cline{3-11} & & 10 & 1.2519 & 1.2580 & 1.2492 & 1.2550 & 1.2462 & 1.2513 & 1.2445 & 1.2423 \\ \cline{3-11} \cline{2-11} & & 1 & 1.3059 & 1.2989 & 1.2793 & 1.2787 & 1.2559 & 1.2579 & 1.2482 & 1.2505 \\ \cline{3-11} & & 2 & 1.2806 & 1.2761 & 1.2654 & 1.2661 & 1.2515 & 1.2521 & 1.2465 & 1.2473 \\ \cline{3-11} & & 5 & 1.2569 & 1.2590 & 1.2520 & 1.2538 & 1.2469 & 1.2468 & 1.2447 & 1.2442 \\ \cline{3-11} & & 10 & 1.2486 & 1.2552 & 1.2468 & 1.2531 & 1.2448 & 1.2501 & 1.2437 & 1.2501 \\ \hline \multirow{4}{*}{\(2\)} & \multirow{4}{*}{\(1\)} & 1 & 1.0706 & 1.0850 & 1.0379 & 1.0478 & 1.0083 & 1.0143 & 0.9981 & 1.0070 \\ \cline{3-11} & & 2 & 1.0379 & 1.0385 & 1.0196 & 1.0206 & 1.0022 & 1.0073 & 0.9958 & 1.0045 \\ \cline{3-11} & & 5 & 1.0083 & 1.0108 & 1.0022 & 1.0059 & 0.9961 & 1.0032 & 0.9933 & 1.0000 \\ \cline{3-11} & & 10 & 0.9981 & 1.0019 & 0.9958 & 0.9998 & 0.9933 & 0.9974 & 0.9920 & 0.9933 \\ \cline{3-11} \cline{2-11} & & 1 & 1.0428 & 1.0451 & 1.0217 & 1.0253 & 1.0021 & 1.0091 & 0.9953 & 1.0043 \\ \cline{3-11} \cline{2-11} & & 2 & 1.0217 & 1.0214 & 1.0096 & 1.0125 & 0.9981 & 1.0055 & 0.9938 & 1.0024 \\ \cline{3-11} \cline{2-11} & & 5 & 1.0021 & 1.0073 & 0.9981 & 1.0040 & 0.9940 & 1.0018 & 0.9921 & 0.9979 \\ \cline{3-11} \cline{2-11} & & 10 & 0.9953 & 0.9999 & 0.9938 & 0.9983 & 0.9921 & 0.9962 & 0.9912 & 0.9926 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison studies of the first non-dimensional frequency \(\overline{\omega}=\omega\big{(}a\,/\,\pi\big{)}^{2}\,\sqrt{\rho_{c}h_{0}\,/\,D_{c}}\,,D_{c}=E_{c}h_{0}^{3}\,/\,12\Big{(}1-\nu_{c}^{2}\,\Big{)}\) of SSSS boundary conditions for TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thin square plate a/h\({}_{0}\)=100.
### 5.3 Buckling problems
The buckling behaviour of the present model is investigated in this part, in terms of both analysis and prediction.
**Example 7:** The buckling analysis of UD-, BD-, and TD-FG square plates is compared to the reference Do et al. (2020). The FG material SUS304/Si\({}_{3}\)N\({}_{4}\) is utilized for these problems, with material properties estimated by the Mori-Tanaka scheme. The non-dimensional uni-axial (\(N^{0}_{xx}\) = 1) and bi-axial (\(N^{0}_{xx}\) = \(N^{0}_{yy}\) = 1) critical buckling loads are defined in Eq. (33c).
For the UD-FG material, Table 9 shows the comparisons of the present study and the DNN prediction with the reference results of Do et al. (2020) for the uni-axial and bi-axial buckling problems with the SSSS boundary condition. For the CCCC boundary condition, the present results are shown together with the predictions of the DNN model. It can be seen that the present study is close to the reference results and that the DNN model predicts quite accurately. The relative errors of the present results and the DNN
Fig. 16: The first three free vibration mode shapes of the SSSS TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thin square non-uniform linear (type 2) plate, a/h\({}_{0}\)=100 with k\({}_{x}\)=k\({}_{y}\)=5, k\({}_{z}\)=2.
Fig. 17: The first three free vibration mode shapes of the SSSS TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thin square non-uniform non-linear (type 3) plate, a/h\({}_{0}\)=100 with k\({}_{x}\)=k\({}_{y}\)=5, k\({}_{z}\)=2.
Fig. 15: The first three free vibration mode shapes of the SSSS TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thin square uniform (type 1) plate, a/h\({}_{0}\)=100 with k\({}_{x}\)=k\({}_{y}\)=5, k\({}_{z}\)=2.
predictions are always smaller than 2% for both CCCC and SSSS boundary conditions and for both the uni-axial and bi-axial buckling problems. These results confirm that the finite element model implemented for the UD-FG plate is verified and that the DNN model, trained on data created from the finite element analysis, predicts with high accuracy.
Fig. 18 compares the present non-dimensional critical buckling loads \(\overline{P}_{\rm cr}\) with the reference results of Do et al. (2020) for the SSSS boundary condition, for both thick and thin plates. For the CCCC boundary condition, the predictions of the non-dimensional critical buckling load \(\overline{P}_{\rm cr}\) are presented in Tab. 10 and compared with the present results for the BD-FG plate.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & & a/h\({}_{0}\) & & \multicolumn{4}{c|}{k\({}_{\rm z}\)} \\ \cline{4-7} & & & 0 & 1 & 2 & 5 \\ \hline \multirow{6}{*}{SSSS} & \multirow{4}{*}{10} & Ref. Do et al. (2020) & 3.8026 & 2.9522 & 2.7995 & 2.6511 \\ \cline{3-7} & & & Present & 3.8020 & 2.9917 & 2.8400 & 2.6722 \\ \cline{3-7} & & & Predict & 3.7939 & 3.0698 & 2.8977 & 2.6757 \\ \cline{3-7} & & & Ref. Do et al. (2020) & 3.9979 & 3.1121 & 2.9627 & 2.8168 \\ \cline{3-7} & & 100 & Present & 4.0001 & 3.1570 & 3.0052 & 2.8354 \\ \cline{3-7} & & & Predict & 3.9999 & 3.1782 & 3.0217 & 2.8712 \\ \cline{3-7} & & & Present & 8.4122 & 6.4972 & 6.1262 & 5.7586 \\ \cline{3-7} & & & Predict & 8.4135 & 6.4381 & 6.1750 & 5.7676 \\ \cline{3-7} & & & Present & 10.0786 & 7.8449 & 7.4636 & 7.0869 \\ \cline{3-7} & & & Predict & 10.0679 & 7.9904 & 7.5220 & 7.1285 \\ \hline \multirow{6}{*}{SSSS} & \multirow{4}{*}{10} & Ref. Do et al. (2020) & 1.9013 & 1.4761 & 1.3998 & 1.3255 \\ \cline{3-7} & & & Present & 1.9010 & 1.4958 & 1.4200 & 1.3361 \\ \cline{3-7} & & & Predict & 1.9126 & 1.5174 & 1.4391 & 1.3385 \\ \cline{3-7} & & & Ref. Do et al. (2020) & 1.9990 & 1.5560 & 1.4814 & 1.4084 \\ \cline{3-7} & & & Present & 2.0001 & 1.5785 & 1.5026 & 1.4177 \\ \cline{3-7} & & & Predict & 2.0000 & 1.5633 & 1.5233 & 1.4149 \\ \cline{3-7} & & & Present & 4.5997 & 3.5586 & 3.3621 & 3.1675 \\ \cline{3-7} & & & Predict & 4.6113 & 3.7506 & 3.3186 & 3.2094 \\ \cline{3-7} & & & Present & 5.3059 & 4.1301 & 3.9294 & 3.7312 \\ \cline{3-7} & & & Predict & 5.3257 & 4.2168 & 3.8595 & 3.7588 \\ \hline \end{tabular}
\end{table}
Table 9: Comparisons and present study of the first non-dimensional critical buckling loads \(\ \ \overline{P}_{\rm cr}=\dfrac{P_{\rm cr}a^{2}}{\pi^{2}D_{c}}\) for UD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square plate.
Fig. 18: Comparisons of the first non-dimensional critical uni- and bi-axial buckling loads for BD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square thick and thin plates with the SSSS boundary condition.
Similarly, for the TD-FG material, the behaviours from the present study are compared with reference results for the SSSS boundary condition, while those for CCCC are predicted and verified against the present study in Tab. 11. Although the DNN predictions lie farther from the reference results than the present results do, the predictions are acceptable given the quite small relative errors and the very low inference time of the DNN model. Finally, Figs. 19-20 show the first three buckling mode shapes of TD-FG plates with uniform thickness for the CCCC boundary condition.
Table 10: Predicted first non-dimensional critical buckling loads \(\overline{P}_{\mathrm{cr}}\) for the CCCC boundary condition with gradient indexes k\({}_{x}\), k\({}_{y}\) \(\in\) \{1, 5, 10\} (data table not reproduced here).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & a/h\({}_{0}\) & Results & \multicolumn{2}{c|}{k\({}_{x}\)} & & \\ \cline{4-7} & & & 0 & 1 & 2 & 5 \\ \hline \multirow{6}{*}{**CCCC**} & \multirow{3}{*}{10} & Ref. Do et al. (2020) & 2.6003 & 2.4470 & 2.4155 & 2.3819 \\ \cline{3-7} & & Present & 2.5997 & 2.4503 & 2.4192 & 2.3835 \\ \cline{3-7} & & Predict & 2.6325 & 2.4999 & 2.4402 & 2.3791 \\ \cline{2-7} & & Ref. Do et al. (2020) & 2.7509 & 2.5877 & 2.5564 & 2.5228 \\ \cline{3-7} & & Present & 2.7524 & 2.5934 & 2.5618 & 2.5253 \\ \cline{3-7} & & Predict & 2.8029 & 2.6506 & 2.5808 & 2.5423 \\ \cline{3-7} & & Present & 5.6584 & 5.3354 & 5.2615 & 5.1814 \\ \cline{3-7} & & Predict & 5.5605 & 5.3801 & 5.2531 & 5.1680 \\ \cline{3-7} & & Present & 6.9119 & 6.5170 & 6.4390 & 6.3532 \\ \cline{3-7} & & Predict & 6.8754 & 6.6038 & 6.4659 & 6.3806 \\ \hline \multirow{6}{*}{**CCCC**} & \multirow{3}{*}{10} & Ref. Do et al. (2020) & 1.3027 & 1.2241 & 1.2081 & 1.1911 \\ \cline{3-7} & & Present & 1.3024 & 1.2258 & 1.2100 & 1.1920 \\ \cline{3-7} & & Predict & 1.3279 & 1.2750 & 1.2418 & 1.2177 \\ \cline{2-7} & & Ref. Do et al. (2020) & 1.3772 & 1.2943 & 1.2785 & 1.2615 \\ \cline{3-7} & & Present & 1.3780 & 1.2971 & 1.2812 & 1.2628 \\ \cline{3-7} & & Predict & 1.3910 & 1.3374 & 1.3031 & 1.2735 \\ \cline{3-7} & & Present & 3.1155 & 2.9318 & 2.8916 & 2.8483 \\ \cline{3-7} & & Predict & 3.0695 & 2.9872 & 2.9247 & 2.8761 \\ \cline{3-7} & & Present & 3.6455 & 3.4326 & 3.3910 & 3.3454 \\ \cline{3-7} & & Predict & 3.5483 & 3.4661 & 3.4023 & 3.3529 \\ \hline \end{tabular}
\end{table}
Table 11: Present study of the first non-dimensional critical buckling loads \(\overline{P}_{\mathrm{cr}}=\dfrac{P_{\mathrm{cr}}a^{2}}{\pi^{2}D_{c}}\) for BD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) square plate with CCCC boundary condition.
Figure 19: The first three uni-axial buckling mode shapes of the CCCC TD-FG SUS304/Si\({}_{3}\)N\({}_{4}\) thin uniform square plate, a/h\({}_{0}\)=100 with k\({}_{x}\)=k\({}_{y}\)=1, k\({}_{z}\)=5.
### 5.4 TD-FG plates embedded in an elastic foundation
In this part, the influence of the elastic foundation on the TD-FG plate is investigated in terms of both the analytical and predictive approaches.
**Example 8:** A free-vibration problem of a UD-FG Al/Al\({}_{2}\)O\({}_{3}\) plate resting on an elastic Winkler foundation is considered. In this example, the non-dimensional natural frequency and Winkler parameter are defined as follows:
\[\overline{\omega}=\omega h_{0}\sqrt{\rho_{{}_{m}}\ /\ E_{{}_{m}}}\ \ \text{and}\ \ \overline{k}_{W}=k_{W}a^{4}\ /\ D_{11}\ \ \text{where}\ D_{11}=\int_{-h_{0}/2}^{h_{0}/2}z^{2}Q_{11}dz\]
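As a minimal sketch (our own, not the authors' code), these two definitions can be evaluated as follows, with the thickness integral for \(D_{11}\) approximated by trapezoidal quadrature:

```python
import numpy as np

def omega_bar_w(omega, h0, rho_m, E_m):
    """Non-dimensional natural frequency: omega_bar = omega * h0 * sqrt(rho_m / E_m)."""
    return omega * h0 * np.sqrt(rho_m / E_m)

def k_w_bar(k_w, a, z, Q11):
    """Non-dimensional Winkler parameter: k_bar_W = k_W a^4 / D11, where
    D11 = integral of z^2 Q11(z) over the thickness z in [-h0/2, h0/2]."""
    f = z**2 * Q11
    D11 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoidal rule
    return k_w * a**4 / D11
```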
Fig. 21 shows the comparison of the present results with the reference results of TSDT (Baferani _et al._ 2011) and HSDT (Li _et al._ 2021). As expected, a close agreement between the obtained outcomes and the reference solutions is found, which once again demonstrates the reliability of the present approach.
**Example 9:** A TD-FG (k\({}_{x}\)=k\({}_{z}\)=1) SUS304/Si\({}_{3}\)N\({}_{4}\) square plate with the SSSS boundary condition resting on an elastic Winkler foundation is considered. The first non-dimensional central deflections and critical buckling loads obtained by the present method and predicted by the DNN are reported in Table 12.
## 6 Conclusion and remarks
A multi-directional FG square plate model with variable thickness resting on an elastic Winkler foundation was described in mathematical detail for bending, free vibration, and buckling problems. The Mori-Tanaka micro-mechanical technique is applied to describe the material properties that vary continuously along one, two, and three directions of the plates. Numerical studies are conducted to verify the effectiveness and reliability of the present FG plate model for static, dynamic, and stability problems. Then, a DNN model using batch normalization and the Adam optimizer is created to predict the non-dimensional values, such as central deflection, natural frequency, and critical buckling load, based on the dataset collected from the analytical solutions. Several conclusions can be drawn from the presented formulations and examined examples, as follows:
* The finite element analysis with MITC4 yields excellent results in comparison with previous ones in the open literature for analyzing the bending, free vibration, and buckling behavior of TD-FG plates.
* Several new results are presented for TD-FG plates, especially for TD-FG plates resting on an elastic Winkler foundation with various thickness types (types 1, 2 and 3).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & a/h\({}_{0}\) & & Bending & & Uni-axial buckling & & Bi-axial buckling & \\ \cline{3-8} & \multirow{2}{*}{a/h\({}_{0}\)} & \multirow{2}{*}{k\({}_{\mathrm{z}}\)} & \multirow{2}{*}{1} & 5 & 1 & 5 & 1 & 5 \\ \cline{3-8} & & & & & & & & \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{10} & Present & 0.4733 & 0.4871 & 2.4503 & 2.3835 & 1.2258 & 1.1920 \\ \cline{3-8} & & Predict & 0.4630 & 0.5053 & 2.5027 & 2.3881 & 1.2714 & 1.2182 \\ \cline{3-8} & \multirow{2}{*}{20} & Present & 0.9073 & 0.9327 & 2.5572 & 2.4894 & 1.2791 & 1.2449 \\ \cline{3-8} & & Predict & 0.8689 & 0.9344 & 2.5520 & 2.4703 & 1.2951 & 1.2484 \\ \cline{3-8} & \multirow{2}{*}{50} & Present & 2.2406 & 2.3029 & 2.5888 & 2.5207 & 1.2948 & 1.2605 \\ \cline{3-8} & & Predict & 2.2291 & 2.3309 & 2.6187 & 2.5162 & 1.3371 & 1.2753 \\ \cline{3-8} & \multirow{2}{*}{100} & Present & 4.4734 & 4.5974 & 2.5934 & 2.5253 & 1.2971 & 1.2628 \\ \cline{3-8} & \multirow{2}{*}{100} & Predict & 4.3694 & 4.6058 & 2.6506 & 2.5439 & 1.3325 & 1.2714 \\ \hline \multirow{8}{*}{
\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{10} & Present & 0.1891 & 0.1948 & 5.3354 & 5.1814 & 2.9318 & 2.8483 \\ \cline{3-8} & & Predict & 0.1875 & 0.1924 & 5.3793 & 5.1663 & 2.9823 & 2.8707 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{20} & Present & 0.3341 & 0.3434 & 6.1846 & 6.0226 & 3.2926 & 3.2061 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{20} & Predict & 0.3355 & 0.3553 & 6.2407 & 6.0088 & 3.3181 & 3.2187 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{50} & Present & 0.8029 & 0.8245 & 6.4732 & 6.3096 & 3.4142 & 3.3271 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{50} & Predict & 0.8167 & 0.8305 & 6.5743 & 6.3188 & 3.4667 & 3.3358 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{100} & Present & 1.5965 & 1.6392 & 6.5170 & 6.3532 & 3.4326 & 3.3454 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{200} & Predict & 1.5764 & 1.6275 & 6.5820 & 6.3713 & 3.4643 & 3.3506 \\ \hline \end{tabular}
\end{table}
Table 12: Present study of the first non-dimensional central deflection and critical buckling load for Winkler elastic TD-FG (k\({}_{\mathrm{x}}\) = k\({}_{\mathrm{z}}\) = 1) SUS304/Si\({}_{3}\)N\({}_{4}\) plate with SSSS boundary condition and k\({}_{\mathrm{W}}\) = 3\({}^{4}\).
* The training dataset is collected from the finite element model. This dataset is essential for building the DNN model, which finds the optimal mapping rule by learning the relation between the input data and the output data.
* The non-dimensional central deflection (bending problem), natural frequency (free-vibration problem), and critical buckling load (uni-axial and bi-axial buckling problems) can be directly predicted with high accuracy without solving systems of linear equations or eigenvalue problems. The predictions are compared with the present study and with references, confirming the high accuracy of the DNN model.
* The present method can be extended to complex engineering problems such as shells or cracked (discontinuous) plates. Furthermore, topology optimization also awaits further attention.
## Acknowledgments
None
|
2305.01713 | Learning Disentangled Semantic Spaces of Explanations via Invertible
Neural Networks | Disentangled latent spaces usually have better semantic separability and
geometrical properties, which leads to better interpretability and more
controllable data generation. While this has been well investigated in Computer
Vision, in tasks such as image disentanglement, in the NLP domain sentence
disentanglement is still comparatively under-investigated. Most previous work
have concentrated on disentangling task-specific generative factors, such as
sentiment, within the context of style transfer. In this work, we focus on a
more general form of sentence disentanglement, targeting the localised
modification and control of more general sentence semantic features. To achieve
this, we contribute to a novel notion of sentence semantic disentanglement and
introduce a flow-based invertible neural network (INN) mechanism integrated
with a transformer-based language Autoencoder (AE) in order to deliver latent
spaces with better separability properties. Experimental results demonstrate
that the model can conform the distributed latent space into a better
semantically disentangled sentence space, leading to improved language
interpretability and controlled generation when compared to the recent
state-of-the-art language VAE models. | Yingji Zhang, Danilo S. Carvalho, André Freitas | 2023-05-02T18:27:13Z | http://arxiv.org/abs/2305.01713v3 | # Learning Disentangled Semantic Spaces of Explanations
###### Abstract
Disentangling sentence representations over continuous spaces can be a critical process in improving interpretability and semantic control by localising explicit generative factors. Such process confers to neural-based language models some of the advantages that are characteristic of symbolic models, while keeping their flexibility. This work presents a methodology for disentangling the hidden space of a BERT-GPT2 autoencoder by transforming it into a more separable semantic space with the support of a flow-based invertible neural network (INN). Experimental results indicate that the INN can transform the distributed hidden space into a better semantically disentangled latent space, resulting in better interpretability and controllability, when compared to recent state-of-the-art models.
## 1 Introduction
Disentangled representations, in which each learned feature of the data refers to a semantically meaningful and independent concept (Bengio et al., 2012), are widely investigated and explored in the field of Computer Vision because of their interpretability and controllability (Higgins et al., 2017; Kim and Mnih, 2018). These works reveal that the semantic features of imaging data, such as the mouth and eyes in face images, can be mapped to specific latent dimensions. However, the use of disentanglement for the representation of textual data is comparatively less-explored.
Recent work has started articulating how the disentangled factors of generative models can be used to support the representation of natural language definitions (Carvalho et al., 2022), and science explanations (Zhang et al., 2022), i.e. sentences which relate and compose scientific concepts. These works have the motivation of understanding whether the properties introduced by disentangled generative models can support a consistent organisation of the latent space, where syntactic and semantic transformations can be localised, interpolated and controlled. This category of models has fundamental practical implications. Firstly, similarly to the computer vision models, where image objects can be meaningfully interpolated and transformed, it can provide a framework for transforming and combining sentences which communicate complex concepts, definitions and explanations. Secondly, the localisation of latent factors allows sentence representation models to be more consistent and interpretable. For example, one recent work (Zhang et al., 2022) illustrates that the predicate-argument semantic structure of explanatory sentences from the WorldTree corpus (Jansen et al., 2018) could be partially disentangled through a Variational AutoEncoder based model (Optimus) (Li et al., 2020). A simple explanatory fact such as "_animals require oxygen for survival_" can be projected into a latent space where each role-content pair, such as ARG0-animal or VERB-require, is described by a hypersolid over the latent space. In this case, the generation of explanations can be semantically controlled/manipulated in a coherent manner. For instance, we can control the generation of sentences by manipulating the movement of latent vectors between different role-content regions (e.g., moving the representation of the sentence "_animals require oxygen for survival_" to an ARG1-warmth region would produce the sentence "_animals require warmth for survival_"). Such ability is inherently valuable to downstream tasks such as natural language inference (for argument alignment, substitution or abstraction on inference chains) or improving the consistency of neural search (by representing more consistent queries and sentences).

Figure 1: Use of an invertible neural network to support better semantic disentanglement and separation of explanatory sentences.
In this work, we build upon the seminal work of [22], proposing a better way to separate the predicate-argument structure of explanatory sentences in the latent space. Inspired by the work of [10], we apply an invertible neural network (INN) as a control component to learn the bijective transformation between the hidden space of the autoencoder and the smooth latent space of the INN, chosen for its low computational overhead and theoretically low information loss under bijective mapping.
More importantly, the transformation modelled by the proposed approach approximates the VAE-defined latent space, both being multivariate Gaussian. Thus, there is potential to learn a latent space with improved geometric properties. That is to say, the same semantic roles and associated content clusters can be better separated over the latent space modelled by the INN (an illustration can be found in Figure 1). In this case, we can improve control over the decoding process due to the reduction of overlapping (ambiguous) regions.
In summary, this work explores the utilisation of flow-based INNs as a control component for the generation of a typical neural language modelling autoencoder setting (e.g. BERT-GPT2). This addition can transform the latent space of an autoencoder (BERT-GPT2) into a constrained multivariate Gaussian space via the INN in a supervised approach. In this latent space, different role-content regions can be better separated. This smoother and better separated space can later be operated over in order to improve the control of the generation of the autoencoder using geometric operators, such as traversal [17], interpolation [14], and vector arithmetic [16]. The following are our contributions:
**1.** We find that adding a flow-based INN is an effective mechanism for transforming the hidden space of the autoencoder into a smooth multivariate Gaussian latent space for representing sentences. It can be applied to arbitrary existing large-scale autoencoders without any further training. **2.** We put forward a supervised training strategy for INNs to learn a controllable semantic space with higher disentanglement than previous work. **3.** We introduce the use of this representation to support semantically coherent data augmentation (generating sentences). Our algorithm can increase the diversity of the data while keeping the distribution of the original data unchanged.
## 2 Related work
**Sentence Disentanglement.** Mercatali and Freitas (2021) pioneered the work on the use of disentanglement to control syntactic-level generative factors in sentence representations. More specialised architectures such as the Attention-Driven Variational Autoencoder (ADVAE) [12] were later introduced for learning disentangled syntactic latent spaces. Recent contributions moved in the direction of exploring disentanglement for encoding sentence semantics, where Carvalho et al. (2022) proposed a supervised training strategy for learning a disentangled representation of definitions by injecting semantic role labelling inductive biases into the latent space, with the support of a conditional VAE. Comparatively, this work builds upon the recent advances in encoding semantic generative factors in latent spaces, focusing on the representation of explanatory statements, and proposing flow-based INN autoencoders as a mechanism to achieve improved separation and control.
**INN in NLP.** The properties of INN-based representations have recently started being investigated in language. [20] concentrate on modelling morphological inflection and lemmatization tasks, utilizing INN to learn a bijective transformation between the word surface and its morphemes. [11] focused on sentence-level representation learning, transforming sentences from a BERT sentence embedding space to a standard Gaussian space, which improves sentence embeddings on a variety of semantic textual similarity tasks. Comparatively, this work is the first to explore the bijective mapping between the distributed sentence space of an autoencoder and a multivariate Gaussian space to improve the semantic
separability and control over the distributed representation of sentences. Moreover, this is the first to explore this mechanism to support semantically coherent data augmentation.
## 3 Background
**Disentangled semantic spaces.** Zhang et al. (2022) demonstrate that semantic role supervision of explanations can induce disentanglement of semantic factors in a latent space of sentences modeled using the Optimus autoencoder configuration (Li et al., 2020). Each _semantic role - content concept_ combination is described by a hypersolid (region) in the latent space. More information about the semantic roles can be found in Appendix B. Figure 1 illustrates examples of role-content/concept semantic clusters, such as _ARG1-shelter_ and _ARG1-warmth_ ("shelter" or "warmth" as direct objects) around the cluster _ARG0-animal_ ("animal" as the subject). However, the qualitative and quantitative analysis of Zhang et al. (2022) shows that role-content clusters are still substantially entangled. In order to support highly controlled operations over the latent space, this work addresses this limitation, demonstrating that the bijective mapping induced by INNs can provide a measurably better disentanglement and cluster separation.
**Invertible Neural Networks.** Flow-based INNs (Dinh et al., 2014, 2016; Kingma and Dhariwal, 2018) are a class of neural networks that model the bijective mapping between an observation distribution \(p(x)\) and a latent distribution \(p(z)\). In this case, we use \(T\) and \(T^{\prime}\) to represent the forward mapping (from \(p(x)\) to \(p(z)\)) and the backward mapping (from \(p(z)\) to \(p(x)\)), respectively. Unlike VAEs, which approximate the posterior distribution with a multivariate Gaussian, INNs use the multivariate Gaussian directly. The forward mapping can be learned by the following objective function:
\[\mathcal{L}=-\mathbb{E}_{x\sim p(x)}\Big{[}T(x)\Big{]}^{2}-\log\big{|}T^{ \prime}(x)\big{|} \tag{1}\]
where \(T(x)\) learns the transformation from \(x\) to \(z\sim N(0,1)\). \(|T^{\prime}(x)|\) is the determinant of the Jacobian, which indicates how much the transformation locally expands or contracts the space to ensure the integration of the probability density function is one.
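For concreteness, a RealNVP-style affine coupling block (Dinh et al., 2016) is one standard way to realize such a \(T\) with a tractable Jacobian determinant. The sketch below is our own minimal version, not the paper's exact architecture, and writes the training objective in the standard form \(\frac{1}{2}\lVert T(x)\rVert^{2}-\log\lvert\det J_{T}(x)\rvert\), which matches Eq. (1) up to sign conventions and additive constants:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling block: an invertible transform
    whose log |det Jacobian| is cheap to evaluate."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                    # bound the log-scales for stability
        z2 = x2 * torch.exp(s) + t           # affine transform of the second half
        log_det = s.sum(dim=-1)              # log |det J| of this block
        return torch.cat([x1, z2], dim=-1), log_det

def flow_nll(z, log_det):
    """Forward-mapping objective in standard form:
    0.5 * ||z||^2 - log|det J|, averaged over the batch (constants dropped)."""
    return (0.5 * z.pow(2).sum(dim=-1) - log_det).mean()
```

In practice several such blocks are stacked, with permutations of the dimensions in between, so that every dimension is eventually transformed.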
## 4 Proposed Approach
Starting from the Optimus-based conditional VAE architecture proposed by Zhang et al. (2022), we encode each sentence \(x\) with an autoencoder and consider its sentence-level latent representation, denoted \(E(x)\), as the input of the INN. Next, we put forward two training strategies to map the hidden representations into a better semantically disentangled space.
### Training Strategy
**Unsupervised INNs.** Firstly, we train the INN in an unsupervised fashion so that it minimizes the negative log-likelihood of the marginal distribution of the latent representation \(z=E(x)\):
\[\begin{split}\mathcal{L}_{\text{unsup}}=&-\mathbb{ E}_{x\sim p(x)}\Big{[}T(E(x))\Big{]}^{2}\\ &-\log\big{|}T^{\prime}(E(x))\big{|}\end{split} \tag{2}\]
As this leads to a bijective mapping between the distributed representation and the disentangled latent representation (multivariate Gaussian space), it allows us to explore the geometric clustering properties of the latent space by traversal, interpolation, and latent space arithmetic.
**Cluster-supervised INN.** According to the findings of Zhang et al. (2022), the content of semantic roles can be disentangled over a latent space approximated to a multivariate Gaussian, learned using the Optimus autoencoder setting expanded with a conditional VAE term. We therefore next train the INN by minimizing the (cosine) distance between points in the same role-content region and maximizing the distance between points in different regions, based on the explanation embeddings and their corresponding central points from the Optimus model. For example, given the sentence "_animals require food for survival_" and the central vector of the _ARG0-animal_ cluster, training moves the sentence representation closer to the _ARG0-animal_ region center in the latent space of the INN.
More specifically, during the calculation of the posterior, we replace the mean and variance of the standard Gaussian distribution with the center point of the corresponding cluster and a hyper-parameter that should be less than one, respectively. In this case, each role-content cluster in the latent space will be mapped to a space where each cluster has its embeddings more densely and regularly distributed around its
center. The objective function can be described as follows:
\[\begin{split}\mathcal{L}_{\text{sup}}=&-\mathbb{E}_{x \sim p_{cluster}(x)}\frac{\left[T(E(x))-\mu_{cluster}\right]^{2}}{1-\sigma^{2} }\\ &-\log\left|T^{\prime}(E(x))\right|\end{split} \tag{3}\]
where \(T(E(x))\) learns the transformation from \(x\) to \(z\sim N(\mu_{cluster},1-\sigma^{2})\). More training and architecture details are provided in Appendix A.
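The cluster-supervised objective of Eq. (3) then only changes the Gaussian that the codes are matched to. A sketch (reusing `ToyINN`, `emb`, and the imports from the previous snippet, with hypothetical `centers` and `labels` arrays standing in for the Optimus-derived cluster centers and cluster assignments) might look as follows.

```python
def cluster_sup_inn_loss(inn, emb, centers, labels, sigma2=0.5):
    """L_sup of Eq. (3): match each code to N(mu_cluster, 1 - sigma^2).
    `centers[k]` is the (Optimus-derived) center of role-content cluster k,
    `labels[i]` gives the cluster index of emb[i]; sigma2 < 1 is the
    variance-shrinking hyper-parameter."""
    z, log_det = inn(emb)
    mu = centers[labels]                              # (batch, dim)
    sq = (z - mu).pow(2).sum(dim=1) / (1.0 - sigma2)
    return (0.5 * sq - log_det).mean()

centers = torch.randn(4, 8)                           # 4 toy clusters
labels = torch.randint(0, 4, (16,))
loss = cluster_sup_inn_loss(inn, emb, centers, labels)
```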
### Data Augmentation
To better capture the distinguishing features of distinct role-content clusters, more training sentences are needed in those clusters when training the INN. We therefore use vector arithmetic and traversal as a systematic mechanism for data augmentation, as described in Equation 4; more details are provided in Appendix A.
\[\begin{split} vec&=average(E(s_{i}),E(s_{j}))\\ vec_{k}&\sim N(0,1)\quad\forall k\in\{0,\ldots,size(vec)\}\\ s&=D(vec)\end{split} \tag{4}\]
where \(s_{i},s_{j}\in S\) (the explanation sentence corpus), \(E:S\rightarrow\mathbb{R}^{n}\) is the encoder (embedding) function, and \(D:\mathbb{R}^{n}\to S\) is the decoder function. Each dimension \(vec_{k}\) is resampled from \(N(0,1)\), and the last step decodes the vector into a new sentence. Table 1 lists some randomly selected examples of augmented explanations.
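Below is a sketch of the augmentation step in the spirit of Eq. (4). The encoder and decoder are toy stand-ins for the frozen Optimus \(E\) and \(D\); note also that we resample only a random subset of coordinates, which is one reading of Eq. (4): resampling every coordinate as literally written would discard the averaged vector entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(s_i, s_j, encode, decode, resample_prob=0.2):
    """One augmentation step: average the two embeddings, resample some
    coordinates from N(0, 1), decode the result. `encode`/`decode` stand
    for the frozen Optimus encoder E and decoder D."""
    vec = 0.5 * (encode(s_i) + encode(s_j))        # average(E(s_i), E(s_j))
    mask = rng.random(vec.shape) < resample_prob   # coordinates to resample
    vec[mask] = rng.standard_normal(mask.sum())    # vec_k ~ N(0, 1)
    return decode(vec)

dim = 8
encode = lambda s: rng.standard_normal(dim)        # toy stand-in for E
decode = lambda v: "<decoded sentence>"            # toy stand-in for D
print(augment("animals require food", "animals require warmth", encode, decode))
```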
## 5 Experiments
We consider both WorldTree (Jansen et al., 2018) and EntailmentBank (Dalvi et al., 2021) as datasets. Statistics for both datasets can be found in Appendix A.
### Disentanglement Analysis
In this section, we analyze the disentanglement of the latent space of the INN model independently of the cluster-based supervision. We empirically evaluate the properties of the latent space from two perspectives: (i) quantitative, with disentanglement metrics, and (ii) qualitative, with the support of geometric operations over the space (traversal, interpolation, and vector arithmetic) that elicit its semantic-geometric behaviour. We also qualitatively evaluate its reconstruction performance; reconstruction examples are included in Appendix D.
**Disentanglement metrics.** First, we probe the ability of the model to disentangle the predicate-argument structure, such as ARG0 and PRED. We then compare its performance with the reference model (Optimus) under two training frameworks (Zhang et al., 2022) using six quantitative disentanglement metrics, described in Appendix C.
Table 2 summarises the results. The INN-based autoencoder outperforms the baseline model (Optimus) under three metrics: by 5.1% in z-min-var, 42.3% in MIG, and 8.6% in Modularity. These three metrics are purely statistical and do not depend on a trainable classifier, avoiding classifier-fitting biases. We refer to Carbonneau et al. (2022) for an in-depth critical analysis of the strengths and limitations of
| model | z-min-var ↓ | MIG | Modularity |
| --- | --- | --- | --- |
| O(U) | .451 | .027 | .758 |
| O(S) | .453 | .067 | .753 |
| O(C) | .401 | .039 | .751 |
| INN | **.350** | **.491** | **.844** |

| model | Disentanglement | Completeness | Informativeness ↓ |
| --- | --- | --- | --- |
| O(U) | **.307** | .493 | **.451** |
| O(S) | .302 | .491 | .466 |
| O(C) | .306 | **.493** | .474 |
| INN | .186 | .270 | .503 |

Table 2: Disentanglement metrics. O(U), O(S), O(C) are three different training strategies for Optimus.
| Role-content | Augmented sentences |
| --- | --- |
| ARG0-animal | an animal requires energy to move; animals produce offspring; some adult animals lay eggs; an animal requires shelter; an animal can use its body to breathe |
| ARG0-human | humans travel sometimes; humans usually use gasoline; humans sometimes endanger themselves; humans use coal to make food; humans depend on pollinators for survival |
| PRED-are | wheels are a part of a car; lenses are a part of eyeglasses; toxic chemicals are poisonous; green plants are a source of food for animals; copper and zinc are two metals |
| PRED-mean | summit mean the top of the mountain; colder mean a decrease in heat energy; helping mean something can be done better; cleaner mean (less; lower) in pollutants; friction mean the product of a physical change |

Table 1: Examples of augmented explanations.
disentanglement metrics. Following current methodological norms, we include the most commonly used metrics. After this quantitative evaluation, we next assess disentanglement qualitatively.
**Traversal.** The traversal of a latent factor is obtained by decoding the vectors corresponding to the latent variables, where the evaluated factor is varied within a fixed interval while all others are kept fixed. In a disentangled representation, the decoded sentences should change only with respect to the single factor being traversed. In this experiment, the traversal starts from a point given by a "seed" sentence. As illustrated in Table 3, the generated sentences can hold concepts at different argument positions unchanged, localizing that specific semantic component of the sentences at a locus of the latent space. For example, sentences traversed in the lower dimensions hold the same semantic role-concept pairing _ARG0-animals_. During the traversal, the sentences present close variations (realizations, in this case) of the semantic concept given by _animal_, such as _mammal_ and _predator_, tied to the same semantic role (ARG0).
**Interpolation.** Next, we demonstrate the ability of INNs to provide smooth transitions between latent space representations of sentences [20]. In practice, the interpolation mechanism encodes two sentences \(x_{1}\) and \(x_{2}\) as \(z_{1}\) and \(z_{2}\), respectively, and interpolates a path \(z_{t}=z_{1}\cdot(1-t)+z_{2}\cdot t\) with \(t\) increased from \(0\) to \(1\) in steps of \(0.1\), generating \(9\) intermediate sentences per path. If the latent space is semantically disentangled, the intermediate sentences should present discrete changes, with semantic roles changing between the endpoints \(x_{1}\) and \(x_{2}\) at each step. Table 4 provides qualitative results of latent space interpolation on explanation sentences. We observe that the intermediate explanations transition smoothly (i.e., no unrelated content between steps) from source to target: e.g., the predicate changes from _eat_ to _must eat_ to _must hunt_, and ARG0 changes from _humans_ to _some animals_.
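A minimal sketch of the interpolation path follows; the decoding of each \(z_{t}\) back to a sentence (via the inverse mapping \(T^{\prime}\) and the decoder \(D\)) is omitted.

```python
import numpy as np

def interpolate(z1, z2, steps=9):
    """Codes along the straight line z_t = (1 - t) * z1 + t * z2,
    t = 0.1, 0.2, ..., 0.9 (9 intermediate points)."""
    return [(1 - t) * z1 + t * z2 for t in np.linspace(0.1, 0.9, steps)]

z1, z2 = np.zeros(8), np.ones(8)        # stand-ins for T(E(x1)), T(E(x2))
path = interpolate(z1, z2)              # each z_t would then be inverted
                                        # with T' and decoded with D
```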
**Latent space arithmetic.** Here, we analyse whether averaging two input vectors with the same role-content preserves that property. If the averaged vector holds the same semantic concept as the input sentences, the latent space is better disentangled with regard to the induced semantic representation [21]. Table 5 shows examples of output sentences after vector averaging. We observe that the lower dimensions hold the same semantic information as the input. However, this information is lost when traversing higher dimensions, which indicates that the latent space of the INN stores explanatory information differently from the Optimus baseline model. We therefore examine next whether our supervision method can better enforce separation and disentanglement.
Table 4: Interpolation examples where top and bottom sentences are source and target, respectively.
### Cluster-supervised INN model
Having analyzed the disentanglement of the latent space of the unsupervised INN, we next examine whether cluster-supervised training leads to a more separable latent space than Optimus. Reconstructed examples are provided in Appendix E.
#### Disentanglement between _ARG0_ clusters

In this case, we consider four ARG0 clusters (_human_, _animal_, _plant_, and _something_) and evaluate model performance from two sides: forward mapping and backward mapping. For forward mapping, we assess the disentanglement of the INN latent space via visualization and classification metrics. Figure 2 displays the distributions of the four role-content clusters over the latent space. After cluster-supervised training, the embeddings are more concentrated around their cluster centers, with clear boundaries between clusters, indicating better disentanglement. This visualization indicates that our supervised approach helps the INN-based architecture learn a better-separated semantic space than the baseline models (Optimus, unsupervised INN). Additionally, the unsupervised INN latent space (middle) does not display good separation compared with Optimus (left), which supports the latent-space-arithmetic result in Section 5.1 that the unsupervised INN stores explanations differently (not separated by role-content).
It is also observable that there are low-density embeddings at the transitions between clusters, which connect neighbouring clusters. We decode the middle datapoints between the _animal_ and _human_ clusters and list them in Table 6. These explanations relate to both _animal_ and _human_ (e.g., _animals eat humans_). This result implies that the explanations may be geometrically represented in a way similar to how they were originally designed in the WorldTree corpus (maximising lexical overlap via pred-arg alignments within an explanation chain), in which each explanation is typically linked with another through a subject or object abstraction/realisation, in support of multi-hop inference tasks.
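The cluster visualizations of Figure 2 can be reproduced along the following lines, assuming a matrix `Z` of INN codes \(T(E(x))\) and integer role-content labels; scikit-learn, matplotlib, and all array shapes here are our assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
Z = rng.standard_normal((400, 32))          # stand-in for INN codes T(E(x))
labels = rng.integers(0, 4, 400)            # role-content cluster ids

pts = TSNE(n_components=2, random_state=0).fit_transform(Z)
colors = ["tab:blue", "tab:green", "tab:red", "tab:purple"]
for k, c in enumerate(colors):              # animal / human / plant / something
    m = labels == k
    plt.scatter(pts[m, 0], pts[m, 1], s=8, c=c, label=f"cluster {k}")
plt.legend()
plt.show()
```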
Next, we quantitatively evaluate the disentanglement of ARG0-content clusters (unlike Table 2, which only evaluates disentanglement by semantic role). We consider classification metrics (_accuracy_, _precision_, _recall_, _f1_) as proxies for region separability, effectively testing cluster membership across different clusters; classifier performance then serves as a measure of disentanglement. As shown in Table 7, all classifiers trained on the supervised latent representation outperform both the unsupervised INN and Optimus, indicating that the cluster-supervised approach leads to better disentanglement.
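The proxy evaluation can be sketched as below (reusing `Z` and `labels` from the previous snippet). The identical accuracy/precision/recall/f1 columns in Table 7 are consistent with micro-averaged scores, which we use here; this is our reading, not a detail stated by the authors.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

Xtr, Xte, ytr, yte = train_test_split(Z, labels, test_size=0.2, random_state=0)
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("NB", GaussianNB()),
                  ("SVM", SVC())]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(name,
          accuracy_score(yte, pred),
          f1_score(yte, pred, average="micro"))   # micro f1 == accuracy here
```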
To evaluate the backward mapping, we calculate the ratio of generated sentences that hold the same role-content as the inputs (henceforth called the invertibility ratio). We randomly select 100 embeddings as inputs and report the corresponding ratios in Table 8.
Table 6: Middle explanations between _ARG0-animal_ and _ARG0-human_.
ARG0: disentanglement proxy metrics

| classifier | train | accuracy | precision | recall | f1 score |
| --- | --- | --- | --- | --- | --- |
| KNN | O | 0.983 | 0.983 | 0.983 | 0.983 |
| KNN | U | 0.972 | 0.972 | 0.972 | 0.972 |
| KNN | C | **0.986** | **0.986** | **0.986** | **0.986** |
| NB | O | 0.936 | 0.936 | 0.936 | 0.936 |
| NB | U | 0.961 | 0.961 | 0.961 | 0.961 |
| NB | C | **0.979** | **0.979** | **0.979** | **0.979** |
| SVM | O | 0.979 | 0.979 | 0.979 | 0.979 |
| SVM | U | 0.975 | 0.975 | 0.975 | 0.975 |
| SVM | C | **0.981** | **0.981** | **0.981** | **0.981** |

Table 7: Disentanglement of ARG0 between Optimus (O), unsupervised INN (U), and cluster-supervised INN (C), where KNN: k-nearest neighbours, NB: naive Bayes, SVM: support vector machine. The abbreviations are the same for the remaining tables.
Figure 2: ARG0: t-SNE plot; each color represents a different content region (blue: animal, green: human, red: plant, purple: something) (left: Optimus, middle: unsupervised INN, right: cluster-supervised INN).
We can observe that both the unsupervised and supervised cases achieve high invertibility ratios, such as 0.99 for ARG0-plant in both, indicating that the INN performs its inverse mapping well; the INN thus lets us control the sentence decoding step more precisely by manipulating vectors in its transformed latent space. Comparing the unsupervised and supervised ratios, however, shows no significant difference, confirming the low information loss of the transformation.
Finally, we follow [22] in using decision trees to guide the movement of latent vectors across clusters in order to control the explanation generation of the autoencoder. Table 9 shows the generation steps following the path from _animal_ to _something_. Under the guidance of the decision tree, the ARG0 content of the generated explanations gradually changes from _animals_ to _something_, while the generated explanations maintain semantics related to _animals_ even though the target content, _something_, is not related to _animal_. This indicates that sentence generation can be locally controlled via the supervised INN.
#### Disentanglement between _ARG1_ clusters
Next, we consider four ARG1 clusters (_ARG1-food_, _ARG1-oxygen_, _ARG1-sun_, _ARG1-water_) and evaluate model performance following the same procedure. Figure 3 displays the distributions of the four role-content clusters over the latent space. As before, the cluster-supervised training strategy learns better disentanglement between ARG1 clusters. Additionally, compared with the ARG0 clusters, the Optimus model shows no observable disentanglement here.
Table 10 shows the disentanglement metrics (top) and the invertibility ratio (bottom). As in the previous experiment, all classifiers trained on the supervised latent representation outperform both the unsupervised INN model and Optimus, and both the unsupervised and supervised cases achieve a high invertibility ratio (at least 0.95). We also evaluate _PRED_ clusters, with the same observations as for _ARG0_ and _ARG1_; more information can be found in Appendix F due to the page limit.
#### Disentanglement between _Animal_ clusters
So far, we have investigated the separation between clusters with the same semantic role but different content. Next, we explore separating different semantic
ARG1: disentanglement proxy metrics (forward: \(T\))

| classifier | train | accuracy | precision | recall | f1 score |
| --- | --- | --- | --- | --- | --- |
| KNN | O | 0.958 | 0.958 | 0.958 | 0.958 |
| KNN | U | 0.951 | 0.951 | 0.951 | 0.951 |
| KNN | C | **0.969** | **0.969** | **0.969** | **0.969** |
| NB | O | 0.907 | 0.907 | 0.907 | 0.907 |
| NB | U | 0.926 | 0.926 | 0.926 | 0.926 |
| NB | C | **0.956** | **0.956** | **0.956** | **0.956** |
| SVM | O | 0.956 | 0.956 | 0.956 | 0.956 |
| SVM | U | 0.953 | 0.953 | 0.953 | 0.953 |
| SVM | C | **0.958** | **0.958** | **0.958** | **0.958** |

ARG1: invertibility ratio (backward: \(T^{\prime}\))

| train | food | oxygen | sun | water |
| --- | --- | --- | --- | --- |
| U | **0.99** | **0.98** | 0.95 | 1.00 |
| C | 0.96 | 0.95 | **0.96** | **1.00** |

Table 10: Forward and backward evaluation for ARG1.
Figure 3: ARG1: t-SNE plot (blue: _food_, green: _oxygen_, red: _sun_, purple: _water_) (left: Optimus, middle: unsupervised INN, right: cluster supervised INN).
ARG0: invertibility ratio (backward: \(T^{\prime}\))

| train | human | animal | plant | something |
| --- | --- | --- | --- | --- |
| U | 0.98 | **0.89** | 0.99 | **1.00** |
| C | **1.00** | 0.86 | **0.99** | 0.95 |

Table 8: Invertibility test for ARG0.
roles with the same content. We thus focus on the _animal_ cluster and investigate the disentanglement between _ARG0-animal_, _ARG1-animal_, and _ARG2-animal_. As illustrated in Figure 4, the animal clusters with different semantic roles can be separated after cluster-supervised training, which indicates that the INN model can capture the difference between the same content under different semantic roles within a similar topic. That is, the INN-based approach can jointly learn embeddings that are separable with respect to both role-content and content alone.
Table 11 shows the disentanglement metrics and the invertibility ratio. As in the previous experiments, the supervised case outperforms both the unsupervised and Optimus models, and both the unsupervised and supervised cases achieve good invertibility (at least 90%).
Table 12 shows the decoded explanations traversed around the central point of each cluster in the latent space of the cluster-supervised INN. The INN-based model generates explanations that hold the same role as their cluster, indicating that the INN can separate the information of different semantic roles within similar contextual information.
## 6 Conclusions
In this work, we first analyze the disentanglement of the latent space of INNs. The experimental results indicate that an INN can transform the distributed hidden space of a BERT-GPT2 autoencoder into a smooth latent space where syntactic and semantic transformations can be localised, interpolated, and controlled. Secondly, we propose a supervised training strategy for INNs, which leads to an improved disentangled and separated space; this property lets us control the autoencoder's generation by manipulating the movement of latent vectors. Thirdly, we utilize these geometric properties and semantic controls to support a semantically coherent and controlled data augmentation strategy.
## 7 Limitations
This work explores how flow-based INN autoencoders can support better semantic disentanglement and separation for sentence representations over continuous sentence spaces. While this work is motivated by providing more localised distributed representations, which can impact the safety and coherence of generative models, the specific safety guarantees of these models are not fully established.
|
2305.13471 | Fast Convergence in Learning Two-Layer Neural Networks with Separable
Data | Normalized gradient descent has shown substantial success in speeding up the
convergence of exponentially-tailed loss functions (which includes exponential
and logistic losses) on linear classifiers with separable data. In this paper,
we go beyond linear models by studying normalized GD on two-layer neural nets.
We prove for exponentially-tailed losses that using normalized GD leads to
linear rate of convergence of the training loss to the global optimum if the
iterates find an interpolating model. This is made possible by showing certain
gradient self-boundedness conditions and a log-Lipschitzness property. We also
study generalization of normalized GD for convex objectives via an
algorithmic-stability analysis. In particular, we show that normalized GD does
not overfit during training by establishing finite-time generalization bounds. | Hossein Taheri, Christos Thrampoulidis | 2023-05-22T20:30:10Z | http://arxiv.org/abs/2305.13471v2 | # Fast Convergence in Learning Two-Layer Neural Networks
###### Abstract
Normalized gradient descent has shown substantial success in speeding up the convergence of exponentially-tailed loss functions (which includes exponential and logistic losses) on linear classifiers with separable data. In this paper, we go beyond linear models by studying normalized GD on two-layer neural nets. We prove for exponentially-tailed losses that using normalized GD leads to linear rate of convergence of the training loss to the global optimum if the iterates find an interpolating model. This is made possible by showing certain gradient self-boundedness conditions and a log-Lipschitzness property. We also study generalization of normalized GD for convex objectives via an algorithmic-stability analysis. In particular, we show that normalized GD does not overfit during training by establishing finite-time generalization bounds.
## 1 Introduction
### Motivation
A wide variety of machine learning algorithms for classification tasks rely on learning a model using monotonically decreasing loss functions such as the logistic loss or the exponential loss. In modern practice, these tasks are often accomplished using over-parameterized models, such as large neural networks, which can interpolate the training data, i.e., achieve perfect classification accuracy on the samples. In particular, training is often continued until the model achieves approximately zero training loss [14].
Over the last decade there has been remarkable progress in understanding and improving the convergence and generalization properties of over-parameterized models trained with various loss functions, chiefly the logistic and quadratic losses. For the quadratic loss, it has been shown that over-parameterization can significantly improve the training convergence rate of (stochastic) gradient descent on empirical risk minimization. Notably, the quadratic loss on two-layer ReLU neural networks is shown to satisfy the Polyak-Łojasiewicz (PL) condition [1, 1, 13, 14]. In fact, the PL property is a consequence of the observation that the tangent kernel associated with the model is non-singular; in this case the PL parameter, which specifies the rate of convergence, is the smallest eigenvalue of the tangent kernel [15]. The fact that over-parameterized neural networks trained with the quadratic loss satisfy the PL condition guarantees that the loss converges exponentially fast to a global optimum. The global optimum in this case is a model that "perfectly" interpolates the data, where perfect interpolation requires that the model output for every training input is precisely equal to the corresponding label.
On the other hand, gradient descent with un-regularized logistic regression on linear models and separable data is biased toward the max-margin solution. In particular, in this case the parameter converges in direction to the solution of the hard-margin SVM problem at rate \(O(1/\log(t))\), while the training loss converges to zero at rate \(\tilde{O}(1/t)\) [15, 16]. More recently, normalized gradient descent has been proposed as a promising approach for fast convergence with exponentially tailed losses. In this method, at each iteration the step-size is chosen proportional to the inverse of the current training loss value [13], resulting in unboundedly increasing step-sizes along the gradient descent iterates. This choice of step-size leads to significantly faster rates of directional parameter convergence. In particular, for linear models with separable data, normalized GD with decaying step-size enjoys a rate of \(O(1/\sqrt{t})\) in directional convergence to the max-margin separator [13], which has been improved to \(O(1/t)\) for normalized GD with a fixed step-size [16].
Despite remarkable progress in understanding the behavior of normalized GD with separable data, these results apply only to the implicit bias behavior of linear models. In this paper, we characterize, for the first time, the dynamics of learning a two-layer neural network with normalized GD trained on separable data, together with the iterate-wise test error performance of this procedure. We show that normalized GD on an exponentially-tailed loss with a two-layer neural network leads to exponentially fast convergence of the loss to the global optimum. This contrasts with the \(O(1/t)\) rate for the global convergence of neural networks trained with exponentially-tailed losses and standard GD. Compared to the convergence analysis of standard GD, which is usually carried out using smoothness of the
loss function, for normalized GD we use a Taylor expansion of the loss together with the fact that the operator norm of the Hessian is bounded by the loss. We then apply a lemma showing that exponentially-tailed losses on a two-layer neural network satisfy a log-Lipschitzness condition throughout the iterates of normalized GD. Moreover, crucial to our analysis is showing that the \(\ell_{2}\) norm of the gradient at every point is upper- and lower-bounded by constant factors of the loss, under the given assumptions on the activation function and the training data. The log-Lipschitzness property, together with these bounds on the gradient and Hessian of the loss, ensures that normalized GD is indeed a descent algorithm and that the loss decreases by a constant factor after each step, yielding the promised geometric rate of decay for the loss.
### Contributions
In Section 2.1 we introduce conditions (namely, log-Lipschitzness and self-boundedness assumptions on the gradient and the Hessian) under which the training loss of the normalized GD algorithm converges exponentially fast to the global optimum. More importantly, in Section 2.2 we prove that these conditions are indeed satisfied by two-layer neural networks trained with an exponentially-tailed loss function if the iterates reach an interpolating solution. This yields the first theoretical guarantee on the convergence of normalized GD for non-linear models. We also study a stochastic variant of normalized GD and investigate its training loss convergence in Section 2.4.
In Section 2.3 we study, for the first time, the finite-time test loss and test error performance of normalized GD for convex objectives. In particular, we provide sufficient conditions for the generalization of normalized GD and derive bounds of order \(O(1/n)\) on the expected generalization error, where \(n\) is the training-set size.
### Prior Works
The theoretical study of the optimization landscape of over-parameterized models trained by GD or SGD has been the subject of several recent works. The majority of these works study over-parameterized models with specific choices of loss functions, mainly the quadratic or logistic loss. For the quadratic loss, the exponential convergence rate of over-parameterized neural networks is proved in several recent works, e.g., [1, 1, 2, 3, 1, 1, 1, 2]. These results naturally relate to the Neural Tangent Kernel (NTK) regime of infinitely wide or sufficiently large initialized neural networks [1], in which the iterates of gradient descent stay close to the initialization. The NTK approach cannot be applied to our setting, since the parameter norm grows as \(\Theta(t)\) under the NGD updates.
The majority of the prior results apply to the quadratic loss. However, the state-of-the-art architectures for classification tasks use unregularized ERM with logistic/exponential loss functions; notably, for these losses over-parameterization leads to infinite-norm optimizers. As a result, the objective in this case does not satisfy strong convexity or the PL condition, even for linear models. The analysis of loss and parameter convergence of logistic regression on separable data has attracted significant attention in the last five years. Notably, a line of influential works has shown that gradient descent provably converges in direction to the max-margin solution for linear models and two-layer homogeneous neural networks. In particular, the study of training loss and implicit bias behavior of GD on logistic/exponential loss was first initiated in the setting of linear classifiers [1, 1, 2, 1, 1]. The implicit bias behavior of GD with logistic loss in two-layer neural networks was later studied by [1, 2, 1]. The loss landscape of the logistic loss for over-parameterized neural networks and structured data is analyzed in [1, 2], where it is proved that GD converges to a global optimum at rate \(O(1/t)\). The majority of these results hold for standard GD, while we focus on normalized GD.
The generalization properties of GD/SGD with binary and multi-class logistic regression is studied in [1, 1, 2] for linear models and in [1, 2] for neural networks. Recently, [1] studied the generalization error of decentralized logistic regression through a stability analysis. For our generalization analysis we use an algorithmic stability analysis [1, 1, 2]. However, unlike these prior works we consider normalized GD and derive the first generalization analysis for this algorithm.
The benefits of normalized GD for speeding up the directional convergence of GD for linear models were suggested by [1, 1]. Our paper contributes to this line of work: compared to prior works, which focus on the implicit bias of linear models, we study non-linear models and derive training loss convergence rates. We also study the generalization performance of normalized GD for convex objectives.
### Notation
We use \(\|\cdot\|\) to denote the operator norm of a matrix and also to denote the \(\ell_{2}\)-norm of a vector. The Frobenius norm of a matrix \(W\) is shown by \(\|W\|_{F}\). The Gradient and the Hessian of a function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}\) are denoted by \(\nabla F\) and \(\nabla^{2}F\). Similarly, for a function \(F:\mathbb{R}^{d}\times\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}\) that takes two input variables, the Gradient and the Hessian with respect to the \(i\)th variable (where \(i=1,2\)) are denoted by \(\nabla_{i}F\) and \(\nabla_{i}^{2}F\), respectively. For functions \(F,G:\mathbb{R}\rightarrow\mathbb{R}\), we write \(F(t)=O(G(t))\) when \(|F(t)|\leq m\,G(t)\) after \(t\geq t_{0}\) for positive constants \(m,t_{0}\). We write \(F(t)=\tilde{O}(G(t))\) when \(F(t)=O(G(t)H(t))\) for a polylogarithmic function \(H\). Finally, we denote \(F(t)=\Theta(G(t))\) if \(|F(t)|\leq m_{1}G(t)\) and \(|F(t)|\geq m_{2}G(t)\) for all \(t\geq t_{0}\) for some positive constants \(m_{1},m_{2},t_{0}\).
### Problem Setup
We consider unconstrained and unregularized empirical risk minimization (ERM) on \(n\) samples,
\[\min_{w\in\mathbb{R}^{\bar{d}}}F(w):=\frac{1}{n}\sum_{i=1}^{n}f\left(y_{i}\Phi(w, x_{i})\right). \tag{1}\]
The \(i\)th sample \(z_{i}:=(x_{i},y_{i})\) consists of a data point \(x_{i}\in\mathbb{R}^{d}\) and its associated label \(y_{i}\in\{\pm 1\}\). The function \(\Phi:\mathbb{R}^{\bar{d}}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) represents the model taking the weights vector \(w\) and data point \(x\) to approximate the label. In this section, we take \(\Phi\) as a neural network with one hidden layer and \(m\) neurons,
\[\Phi(w,x):=\sum_{j=1}^{m}a_{j}\sigma(\langle w_{j},x\rangle).\]
Here \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function and \(w_{j}\in\mathbb{R}^{d}\) denotes the input weight vector of the \(j\)th hidden neuron. \(w\in\mathbb{R}^{\bar{d}}\) represents the concatenation of these weights, i.e., \(w=[w_{1};w_{2};\ldots;w_{m}]\). In our setting the total number of parameters, and hence the dimension of \(w\), is \(\widetilde{d}=md\). We assume that only the first-layer weights \(w_{j}\) are updated during training; the second-layer weights \(a_{j}\in\mathbb{R}\) are initialized randomly and kept fixed. The function \(f:\mathbb{R}\rightarrow\mathbb{R}\) is non-negative and monotonically decreasing with \(\lim_{t\rightarrow+\infty}f(t)=0\). In this section, we focus on the exponential loss \(f(t)=\exp(-t)\), but we expect our results to extend to the broader class of loss functions that behave similarly to the exponential loss for large \(t\), such as the logistic loss \(f(t)=\log(1+\exp(-t))\).
We consider activation functions with bounded absolute value for the first and second derivatives.
**Assumption 1** (Activation function).: _The activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is smooth and for all \(t\in\mathbb{R}\)_
\[|\sigma^{\prime\prime}(t)|\leq L.\]
_Moreover, there are positive constants \(\alpha,\ell\) such that \(\sigma\) satisfies for all \(t\in\mathbb{R}\),_
\[\alpha\leq\sigma^{\prime}(t)\leq\ell.\]
An example satisfying the above condition is the smoothed-leaky-ReLU, a smoothed variant of the leaky-ReLU activation \(\sigma(t)=\ell t\,\mathbb{I}(t\geq 0)+\alpha t\,\mathbb{I}(t<0)\), where \(\mathbb{I}(\cdot)\) denotes the 0-1 indicator function.
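Assumption 1 does not commit to a particular activation. One concrete smoothing that satisfies it (our choice for illustration, not necessarily the authors') is \(\sigma(t)=\alpha t+(\ell-\alpha)\,\mathrm{softplus}(t)\), for which \(\sigma^{\prime}(t)=\alpha+(\ell-\alpha)\,\mathrm{sigmoid}(t)\in(\alpha,\ell)\) and \(|\sigma^{\prime\prime}(t)|\leq(\ell-\alpha)/4\):

```python
import numpy as np

def smoothed_leaky_relu(t, alpha=0.2, ell=1.0):
    """sigma(t) = alpha * t + (ell - alpha) * softplus(t); its derivative
    alpha + (ell - alpha) * sigmoid(t) lies in (alpha, ell) and its second
    derivative is bounded by (ell - alpha) / 4, as Assumption 1 requires."""
    return alpha * t + (ell - alpha) * np.logaddexp(0.0, t)

# numerical sanity check of the first-derivative bounds
t = np.linspace(-20, 20, 10001)
d = np.gradient(smoothed_leaky_relu(t), t)
assert d.min() > 0.19 and d.max() < 1.01
```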
Throughout the paper we let \(R\) and \(a\) denote the maximum norm of data points and second layer weights, respectively, i.e.,
\[R:=\max_{i\in[n]}\ \left\|x_{i}\right\|\,,\ \ \ \ \ a:=\max_{j\in[m]}\ \left|a_{j}\right|\,.\]
Throughout the paper we assume \(R=\Theta(1)\) w.r.t. problem parameters and \(a=\frac{1}{m}\).
We denote the _training loss_ of the model by \(F\), defined in (1), and define the _train error_ as the misclassification error over the training data, formally \(F_{0-1}(w):=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(\textsc{sign}(\Phi(w,x_{i}))\neq y_{i})\).
**Normalized GD.** We consider the iterates of normalized GD as follows,
\[w_{t+1}=w_{t}-\eta_{t}\nabla F(w_{t}). \tag{2}\]
The step-size is chosen inversely proportional to the loss value, i.e., \(\eta_{t}=\eta/F(w_{t})\), implying that the step-size grows unboundedly as the algorithm approaches the optimum. Since the gradient norm decays proportionally to the loss, one can equivalently choose \(\eta_{t}=\eta/\|\nabla F(w_{t})\|\).
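Below is a toy sketch of the update rule (2) on a two-layer network with exponential loss. The data, sizes, and the leaky-ReLU slope are illustrative assumptions, and autograd replaces the closed-form gradient:

```python
import torch

torch.manual_seed(0)
n, d, m = 200, 10, 50
X = torch.randn(n, d)
y = torch.sign(X[:, 0] + 0.1)                     # linearly separable toy labels
a = (torch.randint(0, 2, (m,)).float() * 2 - 1) / m   # fixed second layer, +-1/m
W = torch.randn(m, d, requires_grad=True)         # trainable first layer

def loss_fn(W):
    """F(w) = (1/n) * sum_i exp(-y_i * Phi(w, x_i)) with leaky-ReLU units."""
    pre = X @ W.t()                               # (n, m) pre-activations
    act = torch.where(pre >= 0, pre, 0.2 * pre)   # leaky-ReLU, alpha = 0.2
    return torch.exp(-y * (act @ a)).mean()

eta = 0.5
for _ in range(200):
    F = loss_fn(W)
    F.backward()
    with torch.no_grad():
        W -= (eta / F.item()) * W.grad            # eta_t = eta / F(w_t)
        W.grad.zero_()
```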
## 2 Main Results
For convergence analysis in our case study, we introduce a few definitions.
**Definition 1** (log-Lipschitz Objective).: _The training loss \(F:\mathbb{R}^{\bar{d}}\rightarrow\mathbb{R}\) satisfies the log-Lipschitzness property if for all \(w,w^{\prime}\in\mathbb{R}^{\bar{d}}\),_
\[\max_{v\in[w,w^{\prime}]}F(v)\leq F(w)\cdot\tilde{c}_{w,w^{\prime}},\]
_where \([w,w^{\prime}]\) denotes the line between \(w\) and \(w^{\prime}\) and we define \(\tilde{c}_{w,w^{\prime}}:=\exp\left(c(\|w-w^{\prime}\|+\|w-w^{\prime}\|^{2})\right)\) where the positive constant \(c\) is independent of \(w,w^{\prime}\)._
As we will see in the following sections, log-Lipschitzness is a property of neural networks trained with exponentially tailed losses, with \(c=\Theta(\frac{1}{\sqrt{m}})\). We also say that \(F\) satisfies "log-Lipschitzness in the gradient path" if for all iterates \(w_{t},w_{t+1}\) of Eq. (2) there exists a constant \(C\) such that,
\[\max_{v\in[w_{t},w_{t+1}]}F(v)\leq C\,F(w_{t}).\]
**Definition 2** (Self lower-bounded gradient).: _The loss function \(F:\mathbb{R}^{\bar{d}}\rightarrow\mathbb{R}\) satisfies the self-lower-bounded gradient condition if there exists a constant \(\mu\) such that for all \(w\),_
\[\|\nabla F(w)\|\geq\mu\,F(w).\]
**Definition 3** (Self-boundedness of the gradient).: _The loss function \(F:\mathbb{R}^{\bar{d}}\rightarrow\mathbb{R}\) satisfies the self-boundedness of the gradient condition for a constant \(h\), if for all \(w\)_
\[\|\nabla F(w)\|\leq h\,F(w).\]
The above two conditions, upper- and lower-bounding the gradient norm by the loss, can be thought of as counterparts of smoothness and the PL condition for our studied case of exponential loss. To see this, note that smoothness and the PL condition provide upper and lower bounds on the squared norm of the gradient: by \(L\)-smoothness one can deduce that \(\|\nabla F(w)\|^{2}\leq 2L(F(w)-F^{\star})\) (e.g., [11]), and by the definition of the \(\mu\)-PL condition, \(\|\nabla F(w)\|^{2}\geq 2\mu(F(w)-F^{\star})\) [12, 12].
The next necessary condition is an upper-bound on the operator norm of the Hessian of loss.
**Definition 4** (Self-boundedness of the Hessian).: _The loss function \(F:\mathbb{R}^{\bar{d}}\rightarrow\mathbb{R}\) satisfies the self-boundedness of the Hessian property for a constant \(H\), if for all \(w\),_
\[\|\nabla^{2}F(w)\|\leq H\,F(w),\]
_where \(\|\cdot\|\) denotes the operator norm._
It is worthwhile to mention that in the following sections we prove that the self lower- and upper-bound properties in Definitions 2-4 are all satisfied by two-layer neural networks under mild regularity conditions.
### Convergence Analysis of Training Loss
The following theorem states that under the conditions above, the training loss converges to zero at an exponentially fast rate.
**Theorem 1** (Convergence of Training Loss).: _Consider normalized gradient descent update rule with loss \(F\) and step-size \(\eta_{t}\). Assume \(F\) and the normalized GD algorithm satisfy log-Lipschitzness in the gradient path with parameter \(C\), as well as self-boundedness of the Gradient and the Hessian and the self-lower bounded Gradient properties with parameters \(h,H\) and \(\mu\), respectively. Let \(\eta_{t}=\frac{\eta}{F(w_{t})}\) for all \(t\in[T]\) and for any positive constant \(\eta\) satisfying \(\eta\leq\frac{\mu^{2}}{HCh^{2}}\). Then for the training loss at iteration \(T\) the following bound holds:_
\[F(w_{{}_{T}})\leq(1-\frac{\eta\mu^{2}}{2})^{T}F(w_{0}). \tag{3}\]
_Remark 1_.: The proof of Theorem 1 is provided in Appendix A, where we use a Taylor expansion of the loss and apply the conditions of the theorem. It is worth noting that the rate obtained for normalized GD in Theorem 1 is significantly faster than the rate of \(\widetilde{O}(\frac{1}{T})\) for standard GD with logistic or exponential loss in neural networks (e.g., (Zou et al. 2020, Thm 4.4), and (Taheri and Thrampoulidis 2023a, Thm 2)). Additionally, for a continuous-time perspective on the training convergence of normalized GD, we refer to Proposition 10 in the appendix, which presents a convergence analysis based on _normalized Gradient Flow_. The advantage of this approach is that it does not require the self-bounded Hessian property and can be used to show exponential convergence of normalized Gradient Flow for leaky-ReLU activation.
### Two-Layer Neural Networks
In this section, we prove that the conditions that led to Theorem 1 are in fact satisfied by two-layer neural networks; consequently, the training loss bound in Eq. (3) is valid for this class of functions. We choose \(f(t)=\exp(-t)\) for simplicity of proofs; however, an analogous result holds for the broader class of exponentially tailed loss functions.
First, we verify the log-Lipschitzness condition (Definition 1). In particular, we prove a variation of this property for the iterates of normalized GD, i.e., with \(w,w^{\prime}\) chosen as \(w_{t},w_{t+1}\). The proof is included in Appendix B.1.
**Lemma 2** (log-Lipschitzness in the gradient path).: _Let \(F\) be as in (1) for the exponential loss \(f\) and let \(\Phi\) be a two-layer neural network with the activation function satisfying Assumption 1. Consider the iterates of normalized GD with the step-size \(\eta_{t}=\frac{\eta}{F(w_{t})}\). Then for any \(\lambda\in[0,1]\) the following inequality holds:_
\[F(w_{t}+\lambda(w_{t+1}-w_{t}))\leq\exp(\lambda c)\,F(w_{t}), \tag{4}\]
_for a positive constant \(c\) independent of \(\lambda,w_{t}\) and \(w_{t+1}\). As a direct consequence, it follows that,_
\[\max_{v\in[w_{t},w_{t+1}]}F(v)\leq C\,F(w_{t}), \tag{5}\]
_for a numerical constant \(C\)._
The next two lemmas state sufficient conditions for \(F\) to satisfy the self-lower boundedness for its gradient (Definition 2). The proofs are deferred to Appendices B.2-B.3.
**Lemma 3** (Self lower-boundedness of gradient).: _Let \(F\) be as in (1) for the exponential loss \(f\) and let \(\Phi\) be a two-layer neural network with the activation function satisfying Assumption 1. Assume the training data is linearly separable with margin \(\gamma\). Then \(F\) satisfies the self-lower boundedness of gradient with the constant \(\mu=\frac{\alpha\gamma}{\sqrt{m}}\) for all \(w\), i.e., \(\|\nabla F(w)\|\geq\mu F(w)\)._
Next, we aim to show that the condition \(\|\nabla F(w)\|\geq\mu F(w)\) holds during gradient descent updates for training data separable by a two-layer neural network. In particular, we assume the leaky-ReLU activation function, taking the following form,
\[\sigma(t)=\begin{cases}\ell\,t&t\geq 0,\\ \alpha\,t&t<0.\end{cases} \tag{6}\]
for arbitrary non-negative constants \(\alpha,\ell\). This includes the widely-used ReLU activation as a special case. The next lemma shows that when the weights are such that the neural network separates the training data, the self-lower-boundedness condition holds.
**Lemma 4**.: _Let \(F\) be as in (1) for the exponential loss \(f\) and let \(\Phi\) be a two-layer neural network with the activation function in Eq. (6). Assume the first-layer weights \(w\in\mathbb{R}^{\overline{d}}\) are such that the neural network separates the training data with margin \(\gamma\). Then \(F\) satisfies the self-lower boundedness of the gradient, i.e., \(\|\nabla F(w)\|\geq\mu F(w)\), where \(\mu=\gamma\)._
A few remarks are in order. The result of Lemma 4 is relevant for \(w\) that can separate the training data; in particular, it implies the self-lower-boundedness property once the GD iterates succeed in finding an interpolator. We point out, however, that the non-smoothness of leaky-ReLU activations precludes the self-bounded Hessian property, and it remains an interesting future direction to prove the self-lower-boundedness property for general smooth activations. On the other hand, the convergence of normalized "Gradient Flow" does not require the self-bounded Hessian property, as demonstrated in Proposition 10, which suggests that Lemma 4 can be applied to prove the convergence of normalized Gradient Flow with leaky-ReLU activations. It is worth highlighting that we have not imposed any specific initialization conditions in our analysis, as the self-lower-bounded property is essentially sufficient to ensure global convergence.
The next lemma derives the self-boundedness of the gradient and the Hessian (cf. Definitions 3-4) for our studied case. The proof of Lemma 5 (in Appendix B.4) follows rather directly from the closed-form expressions of the gradient and Hessian together with the properties of the activation function.
**Lemma 5** (Self-boundedness of the gradient and Hessian).: _Let \(F\) be in (1) for the exponential loss \(f\) and let \(\Phi\) be a two-layer neural network with the activation function satisfying Assumption 1. Then \(F\) satisfies the self-boundedness of gradient and Hessian with constants \(h=\frac{\ell R}{\sqrt{m}},H:=\frac{LR^{2}}{m^{2}}+\frac{\ell^{2}R^{2}}{m}\) i.e.,_
\[\|\nabla F(w)\|\leq hF(w),\quad\|\nabla^{2}F(w)\|\leq HF(w).\]
We conclude this section with a few remarks on our training convergence results. Combining Theorem 1 and Lemmas 2-5 establishes the convergence of the training loss of normalized gradient descent for two-layer networks. Moreover, Proposition 10 in Appendix D presents a continuous-time convergence analysis of normalized GD based on Gradient Flow; this result is especially relevant for leaky-ReLU activations, where Proposition 10 together with Lemma 4 shows exponential convergence of normalized Gradient Flow. Experiments on the training performance of normalized GD are deferred to Section 3.
### Generalization Error
In this section, we study the generalization performance of normalized GD algorithm. Formally, the _test loss_ for the data distribution \(\mathcal{D}\) is defined as follows,
\[\widetilde{F}(w):=\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big{[}f(y\Phi(w,x))\Big{]}.\]
Depending on the choice of loss \(f\), the test loss might not always represent correctly the classification performance of a model. For this, a more reliable standard is the _test error_ which is based on the \(0-1\) loss,
\[\widetilde{F}_{0-1}(w):=\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big{[}\mathbb{I}(y \neq\textsc{sign}(\Phi(w,x)))\Big{]}.\]
We also define the _generalization loss_ as the gap between training loss and test loss. Likewise, we define the _generalization error_ based on the train and test errors.
With these definitions in place, we are ready to state our results. In this section we prove that under the normalized GD update rule, the generalization loss at step \(T\) is bounded by \(O(\frac{T}{n})\), where \(n\) is the training sample size. While the dependence of the generalization loss on \(T\) may seem unappealing, we show that it is entirely due to the fact that a convex relaxation of the \(0-1\) loss, i.e., the loss function \(f\), is used for evaluating the generalization loss. In particular, we deduce that under appropriate conditions on the loss function and the data (cf. Corollary 7.1), the test error is related to the test loss through,
\[\widetilde{F}_{0-1}(w_{{}_{T}})=O(\frac{\widetilde{F}(w_{{}_{T}})}{\|w_{{}_{ T}}\|}).\]
As we will see in the proof of Corollary 7.1, for normalized GD with exponentially tailed losses the weight norm \(\|w_{{}_{T}}\|\) grows linearly with \(T\). Thus, this relation implies that the test error satisfies \(\widetilde{F}_{0-1}(w_{{}_{T}})=O(\frac{1}{n})\). This bound on the misclassification error signifies the fast convergence of normalized GD in test error and, moreover, shows that normalized GD never overfits during its iterations.
It is worthwhile to mention that our generalization analysis is valid for any model \(\Phi\) such that \(f(y\Phi(\cdot,x))\) is convex for any \((x,y)\sim\mathcal{D}\). This includes linear models, i.e., \(\Phi(w,x)=\langle w,x\rangle\), and the Random Features model (Rahimi and Recht 2007), i.e., \(\Phi(w,x)=\langle w,\sigma(Ax)\rangle\), where \(\sigma(\cdot)\) is applied element-wise and the matrix \(A\in\mathbb{R}^{m\times d}\) is initialized randomly and kept fixed during train and test time. Our results also apply to neural networks in the NTK regime due to the convex-like behavior of the optimization landscape in the infinite-width limit.
We study the generalization performance of normalized GD through a stability analysis (Bousquet and Elisseeff 2002). Existing stability analyses for \(\tilde{L}\)-smooth losses rely on the step-size satisfying \(\eta_{t}=O(1/\tilde{L})\), and hence cannot handle increasingly large step-sizes; in our case \(\eta_{t}\) grows unboundedly. In particular, the common approach in stability analysis (Hardt, Recht, and Singer 2016; Lei and Ying 2020) uses the "non-expansiveness" property of standard GD with smooth convex losses: for \(\eta\leq 2/\tilde{L}\) and any two points \(w,v\in\mathbb{R}^{d}\), it holds that \(\|w-\eta\nabla F(w)-(v-\eta\nabla F(v))\|\leq\|w-v\|\). Central to our stability analysis is showing that under the self-boundedness assumptions on the gradient and Hessian, the normalized GD update rule satisfies non-expansiveness for any step-size satisfying both \(\eta\lesssim\frac{1}{F(w)}\) and \(\eta\lesssim\frac{1}{F(v)}\). The proof is included in Appendix C.1.
**Lemma 6** (Non-expansiveness of normalized GD).: _Assume the loss \(F\) to satisfy convexity and self-boundedness for the gradient and the Hessian with parameter \(h\leq 1\) (Definitions 3-4). Let \(v,w\in\mathbb{R}^{d}\). If \(\eta\leq\frac{1}{h\cdot\max(F(v),F(w))}\), then_
\[\|w-\eta\nabla F(w)-(v-\eta\nabla F(v))\|\leq\|w-v\|.\]
The next theorem characterizes the test loss for both Lipschitz and smooth objectives. Before stating the theorem, we need to define \(\delta\). For the leave-one-out parameter \(w_{t}^{\neg i}\) and loss \(F^{\neg i}(\cdot)\) defined as
\[w_{t+1}^{\neg i}=w_{t}^{\neg i}-\eta_{t}\nabla F^{\neg i}(w_{t}^{\neg i}),\]
and
\[F^{\neg i}(w):=\frac{1}{n}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}f(w,z_{j}),\]
we define \(\delta\geq 1\) to be any constant which satisfies for all \(t\in[T],i\in[n]\), the following
\[F^{\neg i}(w_{t}^{\neg i})\leq\delta\,F^{\neg i}(w_{t}).\]
While this condition seems rather restrictive, we prove in Lemma 9 in Appendix C.3 that the condition on \(\delta\) is satisfied by two-layer neural networks with sufficient over-parameterization. With these definitions in place, we are ready to state the main theorem of this section.
**Theorem 7** (Test loss).: _Consider normalized GD update rule with \(\eta_{t}=\frac{\eta}{F(w_{t})}\) where \(\eta\leq\frac{1}{h\delta}\). Assume the loss \(F\) to be convex and to satisfy the self-bounded gradient and
_Hessian property with a parameter \(h\) (Definitions 3-4). Then the following statements hold for the test loss:_
_(i) if the loss_ \(F\) _is_ \(G\)_-Lipschitz, then the generalization loss at step_ \(T\) _satisfies_
\[\mathbb{E}[\widetilde{F}(w_{{}_{T}})-F(w_{{}_{T}})]\leq\frac{2GT}{n}.\]
_(ii) if the loss_ \(F\) _is_ \(\tilde{L}\)_-smooth, then the test loss at step_ \(T\) _satisfies,_
\[\mathbb{E}[\widetilde{F}(w_{{}_{T}})]\leq 4\mathbb{E}[F(w_{{}_{T}})]+\frac{3 \tilde{L}^{2}T}{n},\]
_where all expectations are over training sets._
The proof of Theorem 7 is deferred to Appendix C.2. As discussed earlier in this section, the test loss dependence on \(T\) is due to the rapid growth of the \(\ell_{2}\) norm of \(w_{t}\). As a corollary, we show that the generalization error is bounded by \(O(\frac{1}{n})\). For this, we assume the next condition.
**Assumption 2** (Margin).: _There exists a constant \(\tilde{\gamma}\) such that after sufficient iterations the model satisfies \(|\Phi(w_{t},x)|\geq\tilde{\gamma}\|w_{t}\|\) almost surely over the data distribution \((x,y)\sim\mathcal{D}\)._
Assumption 2 implies that the absolute value of the normalized margin is at least \(\tilde{\gamma}\), i.e., \(\frac{|\Phi(w_{t},x)|}{\|w_{t}\|}\geq\tilde{\gamma}\) for almost every \(x\) after sufficiently many iterations. This assumption is rather mild: intuitively, it requires that the data distribution does not concentrate around the decision boundary.
For the loss function, we consider the special case of the logistic loss \(f(t)=\log(1+\exp(-t))\), for simplicity of exposition and, more importantly, for its Lipschitz property, which is essential in view of Theorem 7.
**Corollary 7.1** (Test error).: _Suppose the assumptions of Theorem 7 hold. Consider the neural network setup under Assumptions 1 and 2 and let the loss function \(f\) be the logistic loss. Then the test error at step \(T\) of normalized GD satisfies the following:_
\[\mathbb{E}[\widetilde{F}_{0-1}(w_{{}_{T}})]=O(\frac{1}{T}\mathbb{E}[F(w_{{}_{ T}})]+\frac{1}{n})\]
The proof of Corollary 7.1 is provided in Appendix C.4. In the proof, we use the fact that \(\|w_{t}\|\) grows linearly with \(t\), together with Assumption 2, to deduce \(\widetilde{F}_{0-1}(w_{{}_{T}})=O(\frac{\widetilde{F}(w_{{}_{T}})}{T})\). Hence, the statement of the corollary follows from Theorem 7(i). While we stated the corollary for the neural network setup, the result remains valid for any model \(\Phi\) that is Lipschitz in \(w\). We also note that the above result shows a \(\frac{1}{n}\)-rate for the expected test error, which is known to be optimal in the realizable setting we consider throughout the paper.
### Stochastic Normalized GD
In this section we consider a stochastic variant of the normalized GD algorithm. Let \(z_{t}\) be the batch selected randomly from the dataset at iteration \(t\). Stochastic normalized GD takes the form,
\[w_{t+1}=w_{t}-\eta_{t}\nabla F_{z_{t}}(w_{t}), \tag{7}\]
where \(\nabla F_{z_{t}}(w_{t})\) is the gradient of the loss at \(w_{t}\) computed on the batch of training points \(z_{t}\) at iteration \(t\), and \(\eta_{t}\) is proportional to \(1/F(w_{t})\). Our result in this section states that under the following strong growth condition (Schmidt and Roux 2013; Vaswani, Bach, and Schmidt 2019), the training loss converges at an exponential rate to the global optimum.
**Assumption 3** (Strong Growth Condition).: _The training loss \(F:\mathbb{R}^{\tilde{d}}\rightarrow\mathbb{R}\) satisfies the strong growth condition with a parameter \(\rho\),_
\[\mathbb{E}_{z}[\|\nabla F_{z}(w)\|^{2}]\leq\rho\|\nabla F(w)\|^{2}.\]
Notably, we show in Appendix E.1 that the strong growth condition holds in our studied case under the self-bounded and self-lower-bounded gradient properties.
The next theorem characterizes the rate of decay for the training loss. The proof and numerical experiments are deferred to Appendices E.2 and F, respectively.
**Theorem 8** (Convergence of Training Loss).: _Consider the stochastic normalized GD update rule in Eq. (7). Assume \(F\) satisfies Assumption 3 as well as log-Lipschitzness in the gradient path, self-boundedness of the gradient and the Hessian, and the self-lower-bounded gradient property (Definitions 1-4). Let \(\eta_{t}=\eta/F(w_{t})\) for all \(t\in[T]\) and for any positive constant \(\eta\) satisfying \(\eta\leq\frac{\mu^{2}}{HC\rho h^{2}}\). Then for the training loss at iteration \(T\) the following bound holds:_
\[F(w_{{}_{T}})\leq(1-\frac{\eta\mu^{2}}{2})^{T}F(w_{0}).\]
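A sketch of the minibatch variant in Eq. (7), reusing `X`, `y`, `a`, `W`, `eta`, and `loss_fn` from the earlier normalized GD snippet; note the step-size still normalizes by the full-batch loss \(F(w_{t})\), which we compute exactly here for simplicity (in practice one might track an estimate of it):

```python
batch = 32
for _ in range(200):
    idx = torch.randperm(n)[:batch]               # random batch z_t
    pre = X[idx] @ W.t()
    act = torch.where(pre >= 0, pre, 0.2 * pre)
    F_batch = torch.exp(-y[idx] * (act @ a)).mean()   # F_{z_t}(w_t)
    F_batch.backward()
    with torch.no_grad():
        F_full = loss_fn(W).item()                # eta_t = eta / F(w_t)
        W -= (eta / F_full) * W.grad
        W.grad.zero_()
```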
## 3 Numerical Experiments
In this section, we demonstrate the empirical performance of normalized GD. The advantages of normalized GD over standard GD are most pronounced for well-separated data, such as high-dimensional datasets; in scenarios where the margin is small, the benefits of normalized GD may be negligible. Figure 1 illustrates the training loss (left), the test error % (middle), and the weight norm (right) of GD and normalized GD. The experiments are conducted on a two-layer neural network with \(m=50\) hidden neurons and the leaky-ReLU activation in (6) with \(\alpha=0.2\) and \(\ell=1\). The second-layer weights are chosen randomly from \(a_{j}\in\{\pm\frac{1}{m}\}\) and kept fixed during training and test time. The first-layer weights are initialized from a standard Gaussian distribution and then normalized to unit norm. We consider binary classification with the exponential loss using digits "\(0\)" and "\(1\)" from the MNIST dataset (\(d=784\)) and set the sample size to \(n=1000\). The step-sizes are fine-tuned to \(\eta=30\) and \(5\) for GD and normalized GD, respectively, so that each line represents the best of each algorithm. We highlight the significant speed-up in the convergence of normalized GD compared to standard GD: the training loss of normalized GD decays exponentially fast to zero, while GD converges at a remarkably slower rate. We also highlight that \(\|w_{t}\|\) for
normalized GD grows at a rate \(\Theta(t)\) while it remains almost constant for GD; this was predicted by Corollary 7.1, in whose proof we showed that the weight norm grows linearly with the iteration number. In Figure 2, we generate two synthetic datasets: a realization of a zero-mean Gaussian-mixture model with \(n=40\) and \(d=2\) where the two classes have different covariance matrices (top), and a zero-mean Gaussian-mixture model with \(n=40\), \(d=5\) (only the first two entries are depicted in the figure) where \(\Sigma_{1}=\mathbf{I}\), \(\Sigma_{2}=\frac{1}{4}\mathbf{I}\) (bottom). Note that neither dataset is linearly separable. We consider the same settings as in Figure 1 and compare the performance of GD and normalized GD in the right plots. The step-sizes are fine-tuned to \(\eta=80,350\) and \(30,20\) for GD and normalized GD, respectively. Here again the normalized GD algorithm demonstrates a superior rate of convergence to the final solution.
## 4 Conclusions
We presented the first theoretical evidence for the convergence of normalized gradient methods in non-linear models. While previous results on standard GD for two-layer neural networks trained with logistic/exponential loss proved a rate of \(\widetilde{O}(1/t)\) for the training loss, we showed that normalized GD enjoys an exponential rate. We also studied, for the first time, the stability of normalized GD and derived bounds on its generalization performance for convex objectives, and we briefly discussed the stochastic normalized GD algorithm. As future directions, we believe extensions of our results to deep neural networks are interesting; notably, we expect several of our results to remain true for deep networks. Extending the self-lower-boundedness property in Lemma 4 to smooth activation functions is another important direction. Another promising avenue for future research is the derivation of generalization bounds for non-convex objectives by extending the approach used for GD (in [11]) to normalized GD.
Figure 1: Comparison of the training loss, test error (in percentage), and weight norm (i.e., \(\|w_{t}\|\)) between gradient descent and normalized gradient descent algorithms. The experiments were conducted on two classes of the MNIST dataset using exponential loss and a two-layer neural network with \(m=50\) hidden neurons. The results demonstrate the performance advantages of normalized gradient descent over traditional gradient descent in terms of both the training loss and test error.
Figure 2: The left plot depicts two synthetic datasets, each consisting of \(n=40\) data points. On the right, we present the training loss results of gradient descent and normalized gradient descent algorithms applied to a two-layer neural network with \(m=50\) (top) and \(100\) (bottom) hidden neurons.
## Acknowledgements
This work was partially supported by NSF under Grant CCF-2009030.
|
2307.15679 | Dynamic Analysis and an Eigen Initializer for Recurrent Neural Networks | In recurrent neural networks, learning long-term dependency is the main
difficulty due to the vanishing and exploding gradient problem. Many
researchers are dedicated to solving this issue and have proposed many
algorithms. Although these algorithms have achieved great success,
understanding how the information decays remains an open problem. In this
paper, we study the dynamics of the hidden state in recurrent neural networks.
We propose a new perspective to analyze the hidden state space based on an
eigen decomposition of the weight matrix. We start the analysis by linear state
space model and explain the function of preserving information in activation
functions. We provide an explanation for long-term dependency based on the
eigen analysis. We also point out the different behavior of eigenvalues for
regression tasks and classification tasks. From the observations on
well-trained recurrent neural networks, we propose a new initialization method
for recurrent neural networks, which consistently improves performance. It can
be applied to vanilla-RNN, LSTM, and GRU. We test on many datasets, such as
Tomita Grammars, pixel-by-pixel MNIST datasets, and machine translation
datasets (Multi30k). It outperforms the Xavier initializer and kaiming
initializer as well as other RNN-only initializers like IRNN and sp-RNN in
several tasks. | Ran Dou, Jose Principe | 2023-07-28T17:14:58Z | http://arxiv.org/abs/2307.15679v1 | # Dynamic Analysis and an Eigen Initializer for Recurrent Neural Networks
###### Abstract
In recurrent neural networks, learning long-term dependency is the main difficulty due to the vanishing and exploding gradient problem. Many researchers are dedicated to solving this issue and have proposed many algorithms. Although these algorithms have achieved great success, understanding how the information decays remains an open problem. In this paper, we study the dynamics of the hidden state in recurrent neural networks. We propose a new perspective to analyze the hidden state space based on an eigen decomposition of the weight matrix. We start the analysis with a linear state space model and explain the function of preserving information in activation functions. We provide an explanation for long-term dependency based on the eigen analysis. We also point out the different behavior of eigenvalues for regression tasks and classification tasks. From the observations on well-trained recurrent neural networks, we propose a new initialization method for recurrent neural networks, which consistently improves performance. It can be applied to vanilla-RNN, LSTM, and GRU. We test it on many datasets, such as the Tomita Grammars, the pixel-by-pixel MNIST dataset, and a machine translation dataset (Multi30k). It outperforms the Xavier and Kaiming initializers, as well as other RNN-only initializers like IRNN and sp-RNN, in several tasks.
recurrent neural networks, initializer, eigendecomposition, state space model
## I Introduction
In a recurrent system, the hidden state is considered representative of the unknown state of the system that created the data. In general, a state space model approximates the unknown state \(h_{t}\) using the previous hidden state \(h_{t-1}\) and the current sample \(x_{t}\), and the prediction \(y_{t}\) is obtained from the hidden state.
\[h_{t}=H(x_{t},h_{t-1}) \tag{1}\]
\[y_{t}=G(h_{t}) \tag{2}\]
where \(H\) is the state transition function and \(G\) is the observation function. Extensive research has studied the design of state transition functions, such as the Kalman Filter family [27], echo state networks [13], and recurrent neural networks. The hidden states represent the memory of past information; however, the information stored in them fades away through time, and models have difficulty learning long-term dependencies.
Many mechanisms have been proposed to improve long-term memory. One is the gating mechanism [4, 10, 12], and these gated networks have been widely used in time series, reinforcement learning, and natural language processing. Generally, the output of a gated unit is limited to the range 0 to 1 by a sigmoid activation function. The forgetting gate is used to reset the hidden state memory, and the writing gate decides what information should be stored in the hidden states. In LSTM [12], the forgetting and writing gates operate on a cell state vector as the memory, while the GRU operates directly on the hidden state [4]. In order to learn long-term dependencies, a bias of 1 can be added to the forgetting gate at initialization [14].
Another mechanism for improving the long-term dependency is using external memory, such as the Neural Turing Machine (NTM [7]), Differentiable Neural Computer [8], End-to-end Network [23], Neural Stack and Neural Queue [9], RNN-EM [21], and Dynamic Memory Networks [15]. In these architectures, a recurrent network is used as the controller to interact with the external memory. At each time, the controller reads and writes from the external memory based on a rule such as the attention mechanism. Unlike overwriting new information into hidden states, the external memory stores all past information and provides a better long-term dependency.
Besides the two mechanisms mentioned above, there are other studies trying to improve the performance of RNNs, most of which focus on manipulating the hidden state, such as AntisymmetricRNN [3], Noisy RNN [17], PF-RNN [19], SRNN [22], TP-RNN [18], uRNN [2] and SR-RNN [26]. These methods show that well-designed hidden states are capable of handling longer memory. Other methods focus more on the learning algorithms for RNNs. For example, [1] shows that with a sufficiently large architecture, stochastic gradient descent can provide a linear convergence rate; [28] proposed a stochastic bilevel optimization for RNNs to prevent vanishing and exploding gradients; and [20] proposed to use stochastic Riemannian coordinate descent for RNN training.
There is also research studying the effect of different initialization methods for RNNs. In gradient descent methods, a proper initialization of the weights is of vital importance: if the weights are initialized with small values, the gradients will vanish, and if the weights are initialized with large values, the gradients will explode. The rules of thumb for weight initialization are to set the mean of activations close to zero and keep the variance of activations consistent across layers. Following this idea, the Xavier initializer [6] and Kaiming initializer [11] were proposed and are frequently used in current research. Unlike feedforward neural networks, recurrent neural networks share weights recurrently, which gives the hidden states a different dynamic. IRNN [16] shows that RNNs can simply be initialized by identity matrices, and [24] proposed the np-RNN afterward. However, both methods are designed for vanilla RNNs and cannot be applied in general architectures such as LSTM and GRU.
In this paper, we start by analyzing a linear state space model in terms of eigendecomposition and extrapolate it to nonlinear cases. We conclude that in RNNs, the long-term dependency is improved by increasing eigenvalues. Based on our observation, we propose a new initializer for recurrent layers. Unlike IRNN and np-RNN, which are only designed for vanilla-RNN, our initializer can be widely used for different types of RNNs, like LSTM and GRU.
## II Eigen Decomposition for State Transition Function
Given a linear state space model, the state transition function and observation function can be written as,
\[h_{t}=W_{h}h_{t-1}+W_{x}x_{t} \tag{3}\]
\[y_{t}=W_{y}h_{t} \tag{4}\]
where \(h_{t}\) is the state vector of size \(n\). For simplicity, denote,
\[x^{\prime}_{t}=W_{x}x_{t} \tag{5}\]
For the state transition matrix \(W_{h}\), let \(\Lambda=[\lambda_{1},\lambda_{2},...,\lambda_{n}]\) and \(U=[u_{1},u_{2},...,u_{n}]\) be the eigenvalues and eigenvectors,
\[W_{h}=U\Lambda\bar{U}^{T} \tag{6}\]
Note that \([u_{1},u_{2},...,u_{n}]\) form an orthonormal basis, and \(\Lambda\) and \(U\) may be complex-valued. When the eigenvalues are real, each update scales the hidden state along the eigenvector directions by the corresponding eigenvalues. When the eigenvalues are complex, they appear in conjugate pairs, and the hidden state additionally rotates in the plane spanned by the corresponding conjugate eigenvectors.
Considering only the hidden state with zero inputs, we can decompose \(h_{t-1}\) in terms of the eigenvectors,
\[h_{t-1}=\sum_{i=1}^{n}\alpha^{i}_{t-1}u_{i} \tag{7}\]
where \([\alpha^{1}_{t-1},\alpha^{2}_{t-1},...,\alpha^{n}_{t-1}]\) are the coefficients for the corresponding eigenvectors. Then,
\[h_{t}=\sum_{i=1}^{n}\alpha^{i}_{t}u_{i}=\sum_{i=1}^{n}\alpha^{i}_{t-1}\lambda _{i}u_{i} \tag{8}\]
After \(t^{\prime}\) steps,
\[h_{t+t^{\prime}}=\sum_{i=1}^{n}\alpha^{i}_{t-1}\lambda_{i}^{t^{\prime}+1}u_{i} \tag{9}\]
It is obvious that the system is stable only when \(|\lambda_{i}|\leq 1\), and as time goes on, the information in the hidden states decays at different speeds along different eigenvectors.
Now we take the input \(x^{\prime}_{t}\) into consideration. As with the hidden state, we can decompose \(x^{\prime}_{t}\) in terms of the eigenvectors \(u_{i}\) with coefficients \(a^{i}_{t}\).
\[x^{\prime}_{t}=\sum_{i=1}^{n}a^{i}_{t}u_{i} \tag{10}\]
Setting the initial hidden state to the zero vector, we have,
\[h_{1}=x^{\prime}_{1}=\sum_{i=1}^{n}a^{i}_{1}u_{i} \tag{11}\]
Computing \(h_{t}\) recursively,
\[h_{t}=\sum_{i=1}^{n}\sum_{j=1}^{t}a^{i}_{j}\lambda_{i}^{t-j}u_{i} \tag{12}\]
Recalling (8), we then have,
\[\alpha^{i}_{t}=\sum_{j=1}^{t}a^{i}_{j}\lambda_{i}^{t-j} \tag{13}\]
\[\alpha^{i}_{t}=\lambda_{i}\alpha^{i}_{t-1}+a^{i}_{t} \tag{14}\]
Therefore, a linear state space model is equivalent to a set of first-order IIR filters in complex space along the directions of the eigenvectors.
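This equivalence can be verified numerically. The following NumPy sketch (an arbitrary matrix scaled to spectral radius 0.9; since a random \(W_{h}\) need not be normal, we use \(U^{-1}\) in place of \(\bar{U}^{T}\)) confirms that iterating the recursion matches scaling the eigen-coefficients by powers of the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
W_h = 0.9 * A / np.abs(np.linalg.eigvals(A)).max()  # spectral radius 0.9

lam, U = np.linalg.eig(W_h)       # eigenvalues/eigenvectors, possibly complex
h0 = rng.standard_normal(n)
alpha = np.linalg.solve(U, h0)    # coefficients of h0 in the eigenbasis

t = 20
h_iter = np.linalg.matrix_power(W_h, t) @ h0  # apply the recursion t times
h_eig = (U @ (alpha * lam**t)).real           # scale each mode by lambda_i^t
print(np.allclose(h_iter, h_eig))             # True: modes evolve independently
print(np.sort(np.abs(lam))[::-1])             # per-mode decay rates
```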
In nonlinear recurrent systems, we can still apply the same eigendecomposition analysis to the weight matrix of the hidden state. However, equation (12) no longer holds due to the nonlinear activation function, which makes nonlinear cases difficult to analyze.
## III Conjectures for Nonlinear Case
Activation functions are used to introduce non-linearity in recurrent neural networks. Here, we can rethink their role from the viewpoint of the eigenvalues and eigenvectors of the weight matrix in equation (9). The issue is that the information decay of a hidden state is determined by the eigenvalues: a system with small eigenvalues loses information very fast, and in a stable linear system the eigenvalue norms are strictly limited by 1, which provides no long-term dependency. Nonlinear activation functions, however, restrict the outputs either from 0 to 1 (\(sigmoid\)) or from -1 to 1 (\(tanh\)), which prevents the hidden states from exploding when the eigenvalues are greater than 1. As for the \(ReLU\) activation, there are cases where eigenvalues are negative, meaning the hidden states flip in the opposite direction, and \(ReLU\) prevents explosion by directly cutting off the negative parts. Hereby, we make the following conjecture:
**Conjecture 1**: _A nonlinear recurrent system improves the long-term dependency by increasing the eigenvalues of the weighting matrix (\(>1\)). The larger the eigenvalues are, the better the long-term dependency that is provided._
To test this conjecture, we perform two experiments on linear-RNN, tanh-RNN, and relu-RNN and compare the eigenvalue norms. We first train the three models on a regression task using the Mackey-Glass dataset. We use the MSE as the loss function and the Adam optimizer with a learning rate of 0.01, and set the size of the hidden state to 8. Since the new hidden state is given by both the previous hidden state and the input sample, the eigenvalue norms should be much smaller than 1. From Table I, we can see that both tanh-RNN and relu-RNN have larger eigenvalue norms than linear-RNN.
We also train the three networks on the sequential MNIST classification task, which places a higher demand on long-term dependency. We use the softmax output and cross-entropy as the loss function. Since the models are only trained with fixed-length sequences and there is no limitation on the values of the outputs (before the softmax), the restriction for stability in the networks is looser, and the eigenvalue norms can be slightly larger than 1 even in the linear RNN. Besides, in most classification cases the models extract features from the input and retain information, so larger eigenvalue norms are expected. We set the size of the hidden state to 150 and use Adam with a learning rate of 0.0001. We show only the 10 largest unique eigenvalue norms (keeping one of each conjugate pair) in Table II.
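The eigenvalue norms reported in Tables I and II can be computed directly from the trained hidden-to-hidden weights. A minimal sketch, assuming a single-layer PyTorch RNN and its `weight_hh_l0` attribute:

```python
import numpy as np
import torch

def recurrent_eig_norms(rnn, k=10):
    """Largest k unique eigenvalue norms of the hidden-to-hidden matrix,
    keeping one representative per conjugate pair."""
    W_hh = rnn.weight_hh_l0.detach().cpu().numpy()
    norms = np.abs(np.linalg.eigvals(W_hh))
    unique = np.unique(np.round(norms, 6))[::-1]  # unique drops conjugate twins
    return unique[:k]

rnn = torch.nn.RNN(input_size=28, hidden_size=150, nonlinearity="tanh")
print(recurrent_eig_norms(rnn))  # at initialization; rerun after training
```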
In a stable linear system, the eigenvalue norms cannot be larger than 1, meaning that information can hardly be preserved in the hidden states. The hidden states therefore tend to shrink toward zero as they propagate forward through time, and their distribution has a higher density around the origin, which makes discriminating information much more difficult. With nonlinear activation functions, however, the eigenvalue norms can be greater than 1. The space of hidden states is then a hypercube bounded by the activation functions, and these eigenvalues keep pushing the hidden states toward its surface. Therefore, the hidden states can explore more of the hypercube and attain a distribution with more discriminative features. In conclusion, we make the following conjecture:
**Conjecture 2**: _For a recurrent system with long-term dependency, the hidden states are capable of exploring the space away from the origin and avoiding collapse._
From the tables above, we notice that the classification task exhibits much better long-term dependency than regression. Therefore, we validate this conjecture using the MNIST dataset and compare the hidden states of linear-RNN, tanh-RNN, LSTM, and GRU. We first train the four networks until the best performance is achieved and then collect all the hidden states along the trajectories. For visualization, we use principal component analysis (PCA) and plot the hidden states projected onto the first two principal components (Figure 1). The plots are consistent with our conjecture. Besides, LSTM and GRU show special structures in the distribution that require future explanation.
## IV Eigen Initializer
In gradient descent methods, a proper initialization of the weights not only helps with convergence but also provides better local optimal solutions. To study the relation between eigenvalues and different initialization methods, we calculate the eigenvalue norms for different initializers (Table III). We create a weighting matrix of size \(8\times 8\) and compare the default uniform initializer (in PyTorch) with sp-RNN, the Xavier initializer, and the Kaiming initializer. We generate each initializer 500 times and calculate the ordered mean values. The default initializer of PyTorch draws from a uniform distribution with a standard deviation given by,
\[\mathrm{stdv}=1/\sqrt{n} \tag{15}\]
where \(n\) is the size of input features.
Fig. 1: Distribution of Hidden States
From Table III, we notice that the default initializer yields small eigenvalue norms compared with the other initializers. At the beginning of training, if the eigenvalue norms are small, the information from input samples decays very quickly, and the model can hardly learn from previous samples. The other initializers provide larger eigenvalues, helping the model learn from more samples. From the perspective of eigenvalues, this also explains why these initializers help with the convergence rate.
Therefore, we can help models learn better long-term dependency with a better initialization of the eigenvalues. First, we set all eigenvalue norms to a common value \(\lambda\in(0,1)\). We want the model to learn more from long-term dependencies and the hidden states to store more information at the beginning of training; therefore, we set \(\lambda\) to 0.95 in all experiments.
\[W_{0}=\mathrm{diag}(\lambda) \tag{16}\]
where \(W_{0}\) is a diagonal matrix of size \(n\times n\), and \(n\) is the size of the hidden state.
In hidden state space, the transition function should not only scale the hidden states but also rotate them by certain angles along several directions. Therefore, we mimic this rotation by decomposing the process into \(n-1\) steps. At step \(i\), we randomly sample an angle \(\theta_{i}\) from a uniform distribution over \([0,2\pi]\) and perform a rotation in the plane spanned by the \(i\)th and \((i+1)\)th dimensions.
\[W_{i}=\left[\begin{array}{ccccccc}1&&\cdots&&&&0\\ &\ddots&&&&&\\ \vdots&&\cos\theta_{i}&-\sin\theta_{i}&&&\vdots\\ &&\sin\theta_{i}&\cos\theta_{i}&&&\\ &&&&\ddots&&\\ 0&&\cdots&&&&1\end{array}\right] \tag{17}\]
Then, the final initializer is,
\[W=\prod_{i=0}^{n-1}W_{i} \tag{18}\]
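A minimal sketch of Eqs. (16)-(18) follows; the helper name and its application to a PyTorch RNN are ours for illustration. Because the product of plane rotations is orthogonal, every eigenvalue of the resulting \(W\) has norm exactly \(\lambda\); for LSTM or GRU, the same routine would be applied to each gate's square hidden-to-hidden block.

```python
import numpy as np
import torch

def eigen_init_(weight, lam=0.95, seed=None):
    """Eigen initializer: start from lam * I (Eq. 16) and left-multiply
    n-1 random plane rotations (Eq. 17); all eigenvalue norms equal lam."""
    n = weight.shape[0]
    rng = np.random.default_rng(seed)
    W = lam * np.eye(n)
    for i in range(n - 1):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        R = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        R[i, i], R[i, i + 1] = c, -s
        R[i + 1, i], R[i + 1, i + 1] = s, c
        W = R @ W  # Eq. (18): accumulate the product of rotations
    with torch.no_grad():
        weight.copy_(torch.as_tensor(W, dtype=weight.dtype))

rnn = torch.nn.RNN(input_size=28, hidden_size=150)
eigen_init_(rnn.weight_hh_l0)
norms = np.abs(np.linalg.eigvals(rnn.weight_hh_l0.detach().numpy()))
print(norms.round(3))  # all eigenvalue norms equal 0.95
```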
## V Experiments and Results
### _Tomita Grammar_
The Tomita Grammar [25] is a set of strings with alphabet \(\{0,1\}\) following predefined rules. It contains 7 different languages and has been widely studied for deterministic finite automata (DFA) extraction. Here, we use the Tomita Grammar to test our initialization method. We compare the learning curves of tanh-RNN, LSTM, and GRU under uniform initialization (PyTorch default), the Xavier initializer, and the Kaiming initializer. For tanh-RNN, we also compare IRNN and sp-RNN, which are both RNN-only initializers. We show the comparison of learning curves on grammar 4. For stability, we train each method 20 times and plot the average learning curve. We use Adam as the optimizer and uniformly set the learning rate to 0.001 with no weight decay.
Figure 2 shows that in tanh-RNN, our initialization method provides a smoother learning curve. This is because our method provides better long-term dependency at the beginning of training, from which the network benefits. In LSTM and GRU, our method shows a slight improvement compared with the other methods. Note that the Kaiming method takes advantage of the nonsaturating nonlinearity (ReLU) and is therefore not directly comparable to the others.
Fig. 2: Average Learning Curves on Tomita Grammar 4
### _MNIST Classification_
We also test our initialization method on MNIST classification. In the experiment, we use sequential MNIST by scanlines: at each time step, the 28 pixels from one row are presented as the input, for 28 time steps in total. This is a simple task, and we only care about the convergence of the learning curves. We use the same comparison setup as in Section V-A, set the hidden state size for all networks to 150, use cross-entropy as the loss function, and use Adam as the optimizer with a learning rate of 0.0001. The result is shown in Figure 3. Our method provides both a good convergence rate and better local optima for all networks. The improvement in convergence rate is only significant for tanh-RNN, but LSTM and GRU also benefit in learning the long-term dependency.
### _Multi30k Machine Translation_
The Multi30k [5] is a multi-modal dataset that has been widely used in machine translation and image description. Each sample contains an image and a pair of descriptions in German and English. In this paper, we only take the sentences and perform a machine translation task from German to English. In the experiment, we choose the encoder-decoder architecture for sequence-to-sequence prediction. Each encoder and decoder contains an embedding layer and a recurrent layer (either tanh-RNN, LSTM, or GRU); for simplicity, we use only one layer of each recurrent type. For data preprocessing, we use spaCy, an industrial-strength package for natural language processing, to extract tokens from the sentences. We then replace all tokens that appear only once with the unknown token [UNK] and use the [PAD] token for padding the sequences. We uniformly set the embedding size to 512 and the hidden size to 256. We compare our method with other initialization methods on tanh-RNN, LSTM, and GRU. We use cross-entropy as the loss function with label smoothing of 0.1 and Adam as the optimizer with a learning rate of 0.1. The learning curves are shown in Figure 4. We overtrain the LSTM and GRU cases to display the full convergence of the other methods.
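As a hedged sketch of this preprocessing (the spaCy pipeline name and the frequency threshold are our assumptions), the vocabulary with [UNK] and [PAD] tokens can be built as follows:

```python
import collections
import spacy

nlp_de = spacy.load("de_core_news_sm")  # assumed German pipeline name

def build_vocab(sentences, min_freq=2):
    """Tokenize with spaCy; tokens seen fewer than min_freq times map
    to [UNK], and [PAD] is reserved for sequence padding."""
    counts = collections.Counter(
        tok.text for s in sentences for tok in nlp_de(s))
    vocab = {"[PAD]": 0, "[UNK]": 1}
    for tok, c in counts.items():
        if c >= min_freq:
            vocab[tok] = len(vocab)
    return vocab
```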
In the tanh-RNN, our method outperforms the default uniform initializer, IRNN, and sp-RNN. It also provides a better and more stable solution than all other initialization methods. In LSTM, our method provides the best convergence rate, on par with the default uniform initialization, and in GRU our performance is close to all other methods, except that the Xavier initializer fails to converge well.
## VI Conclusion
In this paper, we analyze the dynamics of hidden states in recurrent systems. We study the relationship between eigenvalues and long-term dependency in linear recurrent models and test the corresponding conjectures for nonlinear systems. We also propose a new eigen initializer for all kinds of recurrent neural networks and test it on multiple datasets. Our results show that our initialization method provides not only a good convergence rate but also a better local optimal solution. In some experiments, we outperform the Xavier and Kaiming initializers. We provide a new alternative initializer for training recurrent neural networks and an intuitive explanation of why it works.
|
2303.14090 | Physics-informed neural networks in the recreation of hydrodynamic
simulations from dark matter | Physics-informed neural networks have emerged as a coherent framework for
building predictive models that combine statistical patterns with domain
knowledge. The underlying notion is to enrich the optimization loss function
with known relationships to constrain the space of possible solutions.
Hydrodynamic simulations are a core constituent of modern cosmology, while the
required computations are both expensive and time-consuming. At the same time,
the comparatively fast simulation of dark matter requires fewer resources,
which has led to the emergence of machine learning algorithms for baryon
inpainting as an active area of research; here, recreating the scatter found in
hydrodynamic simulations is an ongoing challenge. This paper presents the first
application of physics-informed neural networks to baryon inpainting by
combining advances in neural network architectures with physical constraints,
injecting theory on baryon conversion efficiency into the model loss function.
We also introduce a punitive prediction comparison based on the
Kullback-Leibler divergence, which enforces scatter reproduction. By
simultaneously extracting the complete set of baryonic properties for the Simba
suite of cosmological simulations, our results demonstrate improved accuracy of
baryonic predictions based on dark matter halo properties, successful recovery
of the fundamental metallicity relation, and retrieve scatter that traces the
target simulation's distribution. | Zhenyu Dai, Ben Moews, Ricardo Vilalta, Romeel Dave | 2023-03-24T15:55:09Z | http://arxiv.org/abs/2303.14090v2 | # Physics-informed neural networks in the recreation of hydrodynamic simulations from dark matter
###### Abstract
Physics-informed neural networks have emerged as a coherent framework for building predictive models that combine statistical patterns with domain knowledge. The underlying notion is to enrich the optimization loss function with known relationships to constrain the space of possible solutions. Hydrodynamic simulations are a core constituent of modern cosmology, while the required computations are both expensive and time-consuming. At the same time, the comparatively fast simulation of dark matter requires fewer resources, which has led to the emergence of machine learning algorithms for baryon inpainting as an active area of research; here, recreating the scatter found in hydrodynamic simulations is an ongoing challenge. This paper presents the first application of physics-informed neural networks to baryon inpainting by combining advances in neural network architectures with physical constraints, injecting theory on baryon conversion efficiency into the model loss function. We also introduce a punitive prediction comparison based on the Kullback-Leibler divergence, which enforces scatter reproduction. By simultaneously extracting the complete set of baryonic properties for the Simba suite of cosmological simulations, our results demonstrate improved accuracy of baryonic predictions based on dark matter halo properties, successful recovery of the fundamental metallicity relation, and retrieve scatter that traces the target simulation's distribution.
keywords: galaxies: evolution - galaxies: haloes - methods: analytical - methods: statistical
## 1 Introduction
The \(\Lambda\)CDM model, coined the standard model of cosmology due to its widespread adoption and explanatory power, plays a crucial role in modern cosmology and astrophysics. Galaxy formation and evolution occur within virialized structures resulting from density perturbations in the early Universe, subjected to gravitational collapse (Frenk & White, 2012). This leads to large-scale structure in the form of a cosmic web evolving from a somewhat smooth starting point, with dark matter halos as gravitationally bound overdensities of the postulated main contributor to the matter content of the Universe.
As galaxies form through the condensation of cooling gas within these halos, their resulting baryonic properties share a natural relationship with the dark matter accumulations they live in (see, for example, Rees & Ostriker, 1977; Blumenthal et al., 1984). As dark matter distributions are often sufficient to extract cosmological parameters of interest, \(N\)-body simulations restricted to simulating dark matter particles or grids through gravitational interactions are common in cosmology (Springel et al., 2005; Boylan-Kolchin et al., 2009; Klypin et al., 2011; Riebe et al., 2013; Potter et al., 2017). Options in this regard include direct numerical integration and the inclusion of a scale factor to model the expansion of the Universe via general-relativistic effects. These simulations are computationally cheap and fast to run compared to more complex alternatives (Efstathiou et al., 1985).
In contrast to the success for large-scale structure, modeling galaxies through \(N\)-body simulations has historically been more difficult, as baryonic physics plays a vital role through nonlinear and dissipative processes at this level of granularity (Somerville & Dave, 2015). At the same time, the relevance of the baryonic properties of galaxies is evident by the reason dark matter bears its name; direct observations in the electromagnetic spectrum rely on baryonic processes emitting such radiation. Methods to include the luminous baryonic matter in our simulations are thus required to enable comparisons to these observations in the first place.
Multiple avenues to circumvent these limitations have been developed. These include abundance matching, which connects the halo mass to a range of baryonic properties through the stellar mass. However, this requires the assumption of the preservation of rank
ordering in mass (Reddick et al., 2013). As an alternative to the rank ordering of all galaxies without separating central and satellite galaxies, halo occupation distribution modeling makes assumptions about the satellite distributions and uses halo mass functions as an input (Berlind & Weinberg, 2002).
The penultimate step in this evolution of complexity is semi-analytic models, which incur a heavier computational burden as a trade-off for using a complete physical framework. The issue these models face is the number of free parameters for which constraints have to be found (see, for example, Somerville & Primack, 1999; Lu et al., 2014). These include, but are not limited to, GALFORM as presented in Cole et al. (2000) and Baugh et al. (2018), GalICS and GalICS 2.0 (see Hatton et al., 2003; Cattaneo et al., 2017), and SAGE (Croton et al., 2016). Simpler analytic formalisms for the evolution of baryonic galaxy properties exist, for example, the bathtub model by Bouche et al. (2010) and the reservoir model by Krumholz & Dekel (2012), as well as the equilibrium model (Dave et al., 2012; Saintonge et al., 2013; Mitra et al., 2015; Mitra et al., 2017).
Hydrodynamic simulations, which require the most computational resources and time, are generally considered the gold standard of an ab initio approach to galaxy formation and evolution (Vogelsberger et al., 2020). Particle-based and grid-based methods exist based on the foundation that baryonic matter can be modeled by treating gas as an ideal fluid. The former rely on computations of discrete masses, or particles, while the latter splits the simulation volume into discrete spaces (Dolag et al., 2008; Somerville & Dave, 2015). Influential recent hydrodynamic simulations include, for example, Illustris and IllustrisTNG (see Genel et al., 2014; Pillepich et al., 2018), EAGLE as introduced in Schaye et al. (2015), and HorizonAGN by Dubois et al. (2016), as well as Mufasa and its successor Simba, the latter of which is used for the lion's share of our experiments and is described in more detail in Section 2.1 (Dave et al., 2016, 2019).
In recent years, the physical sciences have experienced an exponentially rising interest in applying machine learning algorithms (Carleo et al., 2019). These developments include the acceleration of hydrodynamic simulations of galaxy formation and evolution, with Kamdar et al. (2016) providing one of the first examples by predicting baryonic properties in Illustris using extremely randomized trees, commonly abbreviated as 'extra trees', an ensemble model based on decision trees. This heralded the dominance of tree-based ensembles in related research, including Agarwal et al. (2018) for Mufasa and Lovell et al. (2021) for EAGLE, as well as Jo & Kim (2019), McGibbon & Khochfar (2022), and de Santi et al. (2022), for IllustrisTNG.
While these works apply tree-based ensembles, de Santi et al. (2022) also compare other algorithms such as \(k\)-nearest neighbors, light gradient-boosting machines, and feed-forward neural networks, and combine them with a linear regressor for improved predictions. Similarly, de Andres et al. (2022) use random forests, feed-forward neural networks, and natural and extreme gradient boosting on The Three Hundred data described by Cui et al. (2018), and provide an example of favoring boosting over tree-based methods. In this case, extreme gradient boosting is reported to reach the best accuracy, while natural boosting retrieves the most scatter as a probabilistic regressor.
Jespersen et al. (2022) apply graph neural networks to IllustrisTNG. However, they only use the dark matter version and compute baryonic properties with a semi-analytic model, effectively making the predictor emulate such a model instead of a hydrodynamic simulation. Other works exist that focus on alternative algorithms or hybrid approaches. Moster et al. (2021), for example, apply wide and deep neural networks (WDNN) and reinforcement learning. However, the baryonic properties are calculated not through a hydrodynamic simulation but with emerge, an empirical model that statistically links properties from surveys instead of simulating baryonic physics (Moster et al., 2018). The prediction problem is flipped by von Marttens et al. (2022), who retrieve dark matter halos from their baryonic properties in IllustrisTNG instead.
Moews et al. (2021), on the other hand, extend the equilibrium model by incorporating largest-progenitor merger trees1, and combine it with extra trees into a hybrid approach. One disadvantage of such analytic models is that they predict a limited set of properties. They can, however, be looped into both the training and prediction steps of machine learning algorithms by first predicting a limited set of properties with the analytic model and then using these predictions together with halo properties to predict the complete set of baryonic properties. Related work on hybrid models is done by Hearin et al. (2020), who generate mock catalogs by combining empirical and semi-analytic models. This leads to the weighted Monte Carlo sampling of baseline catalogs to improve statistical realism.
Footnote 1: This might be a constant source of frustration for some readers of papers on these topics, as decision trees and merger trees are, apart from technically both having a tree-like structure, two entirely different concepts.
Stiskalek et al. (2022), similarly to Moster et al. (2021), also use a WDNN approach for IllustrisTNG and HorizonAGN, and include a comparison with extra trees, but with a particular focus on reproducing the intrinsic scatter of the galaxy-halo connection. The reason is a well-known failure of commonly used machine learning algorithms when applied to hydrodynamic simulations to retrieve the expected scatter. The prediction of a Gaussian probability distribution for estimating targets does, of course, impose a constraint on the scatter. Still, probabilistic approaches to scatter reproduction are commonly encountered features in the literature (see, for example, Lehmann et al., 2016; Desmond et al., 2017; Mitra et al., 2017; Cao et al., 2020).
Machine learning methods that rely on standard metrics like the mean squared error (MSE) face certain limitations. The dominant drawback for this application area is that these models aim to recreate the provided data without knowledge of the underlying physical theory. Some prior work tries to solve this issue; for example, the mentioned reports on weighted sampling through baseline libraries by Hearin et al. (2020) and pre-prediction of baryonic subsets to aid the machine learning model (Moews et al., 2021). Another more direct way is injecting theory directly into the learning process of neural network architectures, with such models coined physics-informed neural networks (PINN) (Raissi et al., 2019).
Originally developed for the finding of solutions for partial differential equations, PINNs are rooted in older research on neural networks for ordinary and partial differential equations (see, for example, Dissanayake & Phan-Thien, 1994; Lagaris et al., 1998). In such models, knowledge represented as suitable equations offers domain constraints. Various approaches exist that include physical equations into the loss function, while replacing a lengthy simulation with a machine-learning model has become common practice in many scientific endeavors (Deiana et al., 2022). The line of thinking behind this approach is simple; instead of evaluating the model solely on prediction accuracy through metrics such as the mean squared error, additional loss components can enforce compliance with additional parameter relationships. For a more in-depth overview of this rapidly expanding area of physics-driven deep learning, we refer interested
readers to the reviews by Karniadakis et al. (2021) and Cuomo et al. (2022).
PINNs are not entirely alien to the field of astrophysics in general. Recent work by Mishra and Molinaro (2021) targets the simulation of radiative transfer by minimizing the residual of the underlying transfer equations, while Martin and Schaub (2022) bypasses inefficiencies of gravity models to learn representations of small-body gravity fields directly. Other recent examples include Branca and Pallottini (2022) on solutions for interstellar medium chemistry and the finding of quasinormal modes of nonrotating black holes (Cornell et al., 2022). The development and application of these architectures offer a powerful way to add knowledge to observational evidence to enforce adherence to theoretical models in training the machine learning algorithm.
In this paper, we present the first way to incorporate the concept of PINNs into the active research field of the completion of dark matter-only information in cosmological simulations with baryonic properties. To achieve this goal, we modify the training process of a deep learning model through two different novel extensions of the loss function. The first part is based on the standard approach of injecting physical theory into these models and adds the stellar-to-halo mass relation (SHMR) as described by Moster et al. (2010) into the training process. This double power law is subsequently used by Moster et al. (2018) to parameterize the instantaneous baryon conversion efficiency, meaning the efficiency with which gas is transformed into stars, and provides a theory-oriented constraint.
The second part, which presents another novel addition, forces the model to recreate the scatter of the underlying hydrodynamic simulation by including the Kullback-Leibler divergence (KLD) as an asymmetric distance measure between approximations of the probability distributions for training targets and associated model predictions (Kullback and Leibler, 1951; Ferdosi et al., 2011). In doing so, we show that our extensions of existing machine learning approaches in baryon inpainting from dark matter halo properties are a powerful tool for modern cosmological simulations.
Compared to the baseline model, our results demonstrate improvements in both predictive accuracy and the reproduction of scatter, and show suitable correlations between target variables and model predictions. The contribution pertains to the broader field of machine learning in astrophysics, including adding physical theory into predictive models and using distributional loss components.
The remainder of this paper is structured as follows. In Section 2, we provide an overview of our machine learning approach and data. Section 2.1 describes the Simba suite of cosmological simulations and our dataset. Section 2.2 covers the functionality and justification of our baseline PINN model; Sections 2.3 and 2.4 introduce extensions of the loss function for the SHMR and a distribution comparison between predictions and target values, respectively. Section 3 presents our experiments and their results. Specifically, Section 3.1 explains the fitting of our theory constraint to Simba and the weighting of loss function components. Section 3.2 provides the results for our predictions, and Section 3.3 shows correlations for individual data point accuracy. Section 4 discusses our findings, limitations of our approach, and follow-ups. Lastly, Section 5 provides our conclusions.
## 2 Data and Methodology
### Simulation data from the Simba suite
The Simba simulation models the co-evolution of gas and dark matter within an expanding metric using the Gizmo code (see Hopkins, 2015), which itself is based on Gadget-2 (Springel et al., 2005). It employs the Meshless Finite Mass (MFM) hydrodynamics method, which marries the convenience of a mass-conserving particle-based code with the shock- and instability-capturing advantages of a Riemann solver-based scheme.
Many so-called sub-grid processes have been added to Gizmo to model the formation and evolution of galaxies. These include radiative cooling and photoionization heating, chemical enrichment from stellar evolution, the formation of stars and supermassive black holes, the energy release ('feedback') from supernovae and black hole accretion discs, and the growth and destruction of dust. The complete model detailing these sub-grid prescriptions is described in Dave et al. (2016, 2019).
Simba simulations begin in the linear regime at redshift \(z=249\) and are evolved to \(z=0\), meaning today; 151 snapshot outputs are stored at various redshifts along the way. The main Simba run models a random cube of 147 Mpc (comoving) on a side, represented by \(1024^{3}\) gas elements and \(1024^{3}\) dark matter particles. The minimum (adaptive) spatial resolution is 0.7 kpc. The simulation assumes a _Planck_-concordant cosmology (see Planck Collaboration et al., 2016) of \(\Omega_{m}=0.3\), \(\Lambda=0.7\), \(H_{0}=68\) km/s/Mpc, \(\sigma_{8}=0.82\), and \(n_{s}=0.97\). This results in a mass resolution of \(1.8\times 10^{7}\,M_{\odot}\) per gas element and \(9.5\times 10^{7}\,M_{\odot}\) per dark matter particle.
Each snapshot is analyzed using the Caesar galaxy/halo catalog package. For each halo identified within Simba using Gizmo's native 3D Friends-of-Friends (FoF) finder, Caesar identifies galaxies as collections of stars and dense gas via a 6D FoF algorithm. The most massive galaxy within a halo is defined as the central, and the others are satellites. A large range of physical and photometric properties are computed for each halo and galaxy. For this work, the key galaxy quantities are the stellar mass (\(M_{*}\)), star formation rate (SFR), SFR-weighted gas-phase metallicity (\(Z\)), neutral hydrogen mass (\(M_{\rm HI}\)), molecular hydrogen mass (\(M_{\rm H2}\)), and central supermassive black hole mass (\(M_{\rm BH}\)).
These properties are obtained by summing the relevant particles in each galaxy. For halos, the relevant quantities are the total mass (\(M_{h}\)), the dark matter half-mass radius (\(r_{h}\)), and the dark matter velocity dispersion (\(\sigma_{h}\)). The catalogs are stored as HDF5 files, and Caesar provides a simple yet powerful Python-based access interface. The Simba snapshots and catalogs are all publicly available online for use by the scientific community4.
Footnote 4: [https://simba.roe.ac.uk](https://simba.roe.ac.uk)
Our work uses central galaxies from the m100n1024_151 version of the Simba main runs. Entries for which \(M_{\rm BH}=0\) are dropped to avoid zero-mass black holes distorting the distributions of predictions in our experiments. Similarly, we restrict the range of included halo masses to \(11\leq\log_{10}(M_{h})\leq 14\), as in related research mentioned in Section 1. Apart from these preprocessing steps, we refrain from any further alterations. While suitable data selections could lead to improved predictions, the goal of this paper is to maintain generalizability. For the model training and prediction processes, we apply min-max normalization to input and target variables to scale values within the same interval \([0,1]\) and then revert predictions to their proper scales.
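As an illustrative sketch of these cuts and the min-max scaling (function and variable names are ours, not from the Simba tooling):

```python
import numpy as np

def preprocess(X, y, M_h, M_bh):
    """Drop zero-mass black holes, keep 11 <= log10(M_h) <= 14, then
    min-max scale inputs and targets to [0, 1]."""
    mask = (M_bh > 0) & (np.log10(M_h) >= 11.0) & (np.log10(M_h) <= 14.0)
    X, y = X[mask], y[mask]
    x_lo, x_hi = X.min(axis=0), X.max(axis=0)
    y_lo, y_hi = y.min(axis=0), y.max(axis=0)
    Xn = (X - x_lo) / (x_hi - x_lo)
    yn = (y - y_lo) / (y_hi - y_lo)
    return Xn, yn, (y_lo, y_hi)

def invert_predictions(yn_hat, y_stats):
    """Map normalized model outputs back to physical scales."""
    y_lo, y_hi = y_stats
    return yn_hat * (y_hi - y_lo) + y_lo
```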
### Physics-informed neural network model
The concept of PINNs rests on the assumption that data-driven statistical learning can be enhanced with domain knowledge. This integration can be implemented using different frameworks, as covered in the overview of Section 1 and related reviews (Karniadakis et al., 2021; Cuomo et al., 2022). One approach is to generate more data specifically crafted to enforce domain knowledge into the model. While simple, this approach requires large amounts of additional data to cover a broad region of the input-variable space, effectively introducing observational biases into the inductive analysis (Kashefi et al., 2021; Yang and Perdikaris, 2019).
Another approach involves designing specific learning algorithms that embed knowledge into the learning architecture, for example, convolutional neural networks for images and speech (see LeCun and Bengio, 1995), graph neural networks (see Zhou et al., 2020; Wu et al., 2021), and networks for Hamiltonian systems, among others (Jin et al., 2020). Implementing this approach is difficult because embedding physical laws within a neural network architecture is limited to simple processes. Complex processes require architectural designs that cannot be easily realized with current learning frameworks; even relatively simple processes need complex and elaborate designs. Regarding practical applications, a second complication of these purpose-designed models is creating a network specific to a given problem, making the transfer to new domain applications time-consuming.
The third approach we adopt here enriches the loss function to explicitly incorporate constraints, usually as partial differential equations (Raissi et al., 2019; Lagaris et al., 1998). The benefit is a decoupling between the machine learning strategy and the embedded knowledge; the corresponding framework has broader applicability since the learning architecture is built independently of the underlying physical laws. Normally, the loss function is extended to include a measure of the accuracy of each prediction and the degree of alignment with a physical constraint.
The ability of PINNs to generalize well goes beyond approximation theorems. The representational power of neural networks is well known; under limited assumptions, any continuous function can be approximated to an arbitrarily close fit using a neural network with a finite number of hidden nodes and one hidden layer (Cybenko, 1989; Yarotsky, 2017). It should be noted that representational power is not equivalent to generalization power. Adding domain knowledge to the loss function improves the generalizability of PINNs by reducing the bias component of error while keeping the variance component under control (Hastie et al., 2009).
In traditional PINNs, the final loss, \(\mathcal{L}_{f}\), is a weighted combination of two losses; one is the data-driven empirical loss, \(\mathcal{L}_{s}\), and the other is the domain-knowledge constraint, \(\mathcal{L}_{k}\), with respective weights \(w_{s}\) and \(w_{k}\), so that
\[\mathcal{L}_{f}=w_{s}\mathcal{L}_{s}+w_{k}\mathcal{L}_{k}. \tag{1}\]
The first term is the conventional loss obtained in traditional neural networks through methods such as the squared difference between predictions and target values. Training a network NN\({}_{\theta}\) that aims to find a near-optimal parameter vector \(\theta\), for example, through gradient descent, yields a surrogate for a solution to processes such as complex simulations. The corresponding output \(\hat{f}(x)\), where \(x\) is a feature vector, serves to adjust the weights during training by looking at a loss function \(\mathcal{L}_{s}(\hat{f}(x),y)\). Here, \(y\) is the true response variable, or the target value from the underlying hydrodynamic simulation in our case, and \(\hat{f}(x)\) is our model-provided estimate.
The additional term, \(\mathcal{L}_{k}\), adds domain constraints, usually as differential equations. Specifically, the term captures the partial differential residuals as covered in Section 2.3. In our study, the inputs to the neural network are \(x:=[M_{h},r_{h},\sigma_{h}]\) corresponding to dark matter halo mass at present, dark matter half-mass radius, and dark matter halo velocity dispersion, respectively.
The outputs are \(y:=[M_{s},\text{SFR},Z,M_{\text{HI}},M_{\text{H2}},M_{\text{BH}}]\) corresponding to stellar mass, star formation rate, metallicity, neutral and molecular hydrogen masses, and black hole mass, respectively. During training, we aim to minimize the residual sum of squares,
\[\mathcal{L}_{s}=\sum_{i=1}^{N}(y_{i}-\hat{f}(\mathbf{x}_{i}))^{2}. \tag{2}\]
This data loss assumes points \(\{x_{i},y_{i}\}\) sampled at the initial-boundary locations. Domain knowledge minimizes a different loss function, \(\mathcal{L}_{k}\), forcing the final model to obey the constraint from physical knowledge. Here, we assume points \(\{x_{j}\}\) sampled across the entire input space. In the next section, we explain the domain loss in detail. With the defined loss functions, the neural network NN\({}_{\theta}\) is trained to obtain parameters \(\theta\) using efficient optimization methods, such as gradient descent. The weights \(w_{s}\) and \(w_{k}\) enable different contributions to the final loss function and can be tuned automatically as part of the optimization process.
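In code, the weighted combination of Eqs. 1 and 2 amounts to a few lines; the sketch below is a minimal PyTorch version in which the domain-knowledge residual is supplied by the caller (Section 2.3 provides the SHMR instance used in this work) and the weight values are placeholders:

```python
import torch

def pinn_loss(y_hat, y, knowledge_residual, w_s=1.0, w_k=1.0):
    """Eq. 1: weighted sum of the data loss (Eq. 2) and a squared
    domain-knowledge residual."""
    L_s = torch.sum((y - y_hat) ** 2)
    L_k = torch.sum(knowledge_residual ** 2)
    return w_s * L_s + w_k * L_k
```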
### Inclusion of baryon conversion efficiency
As the name suggests, the stellar-to-halo mass relation links the stellar mass of a given galaxy to the dark matter halo mass. Suitable parameterizations have been shown to reflect the galaxy mass function observed in the third data release of the Sloan Digital Sky Survey (SDSS DR3; see Panter et al., 2007) more closely, as they do not assume a constant SHMR. The instantaneous baryon conversion efficiency \(\epsilon\) is the rate at which gas is transformed into stars. This efficiency can be described by an SHMR parameterized through a double power law model as shown by Moster et al. (2010).
The latter introduces a parameterization that follows observations by avoiding a surplus of galaxies at low and high masses. For two slopes \(\beta\) and \(\gamma\) that are used to determine the decrease in efficiency at lower and higher masses, respectively, the parameterization takes the form
\[\epsilon(M,z=0)=2\epsilon_{N}\left(\left(\frac{M_{h}}{M_{1}}\right)^{-\beta}+ \left(\frac{M_{h}}{M_{1}}\right)^{\gamma}\right)^{-1}. \tag{3}\]
Here, \(\epsilon_{N}\) is the normalization, while \(M_{1}\) denotes the characteristic mass at which the respective efficiency is the same as its normalization. Moster et al. (2018) use this to parameterize the instantaneous baryon conversion efficiency, showing that peak conversion efficiency takes place at halo masses similar to the characteristic mass,
\[M_{\text{max}}=M_{1}\left(\frac{\beta}{\gamma}\right)^{(\beta+\gamma)^{-1}}, \tag{4}\]
with the general assumption of \(\beta,\gamma>0\). The integrated baryon conversion efficiency is dependent on redshift (see Moster et al., 2013), and the mentioned work allows for parameters of the instantaneous efficiency to vary, with
\[\begin{split}\log_{10}M_{1}(z)&=M_{0}+M_{z}(1-(z+ 1)^{-1})\\ &=M_{0}+M_{z}\left(\frac{z}{z+1}\right),\end{split} \tag{5}\]
and with the normalization and slopes given by
\[\begin{split}\epsilon_{N}\left(z\right)&=\epsilon_{0}+ \epsilon_{z}(1-(z+1)^{-1})=\epsilon_{0}+\epsilon_{z}\left(\frac{z}{z+1}\right), \\ \beta(z)&=\beta_{0}+\beta_{z}(1-(z+1)^{-1})=\beta_{0} +\beta_{z}\left(\frac{z}{z+1}\right),\\ \gamma(z)&=\gamma_{0}.\end{split} \tag{6}\]
As we operate at \(z=0\), these considerations are simplified and need only be optimized for single values. In Section 3.1, we will cover this optimization for Simba data using maximum likelihood estimation, with ranges provided by prior research on this parameterization. This view on the SHMR can then be included, for a given dataset size of \(N\), into the loss function of Eq. 1 as
\[\mathcal{L}_{k}=\sum_{i=1}^{N}\left(\frac{\hat{M}_{*}}{M_{h}}-2\epsilon_{N} \left(\left(\frac{M_{h}}{M_{1}}\right)^{-\beta}+\left(\frac{M_{h}}{M_{1}} \right)^{\gamma}\right)^{-1}\right)^{2}. \tag{7}\]
This injection of domain knowledge provides an additional constraint for the model, which subsequent experiments show helps the model recover the mean relation better.
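A sketch of this loss component in PyTorch follows; the parameter values are placeholders to be replaced by the fit described in Section 3.1, and the masses are assumed to be in linear rather than logarithmic units:

```python
import torch

def shmr_efficiency(M_h, eps_N, M_1, beta, gamma):
    """Double power law of Eq. 3 at z = 0."""
    r = M_h / M_1
    return 2.0 * eps_N / (r ** (-beta) + r ** gamma)

def shmr_loss(M_star_hat, M_h, eps_N, M_1, beta, gamma):
    """Eq. 7: squared residual between the predicted stellar-to-halo
    mass ratio and the parameterized SHMR."""
    resid = M_star_hat / M_h - shmr_efficiency(M_h, eps_N, M_1, beta, gamma)
    return torch.sum(resid ** 2)
```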
### Constraints from predictive distributions
Reproducing the scatter found in hydrodynamic simulations when predicting baryonic properties based on dark matter halos is a known challenge in the literature (Cui et al., 2018; Stiskalek et al., 2022). As described in Section 1, probabilistic approaches relying on parametrized distribution families are common. In this work, we target the reproduction based on the training data distribution directly by introducing a second extension to the standard loss function in Eq. 2 (see Section 2.2).
The Kullback-Leibler divergence (KLD) is a statistical distance measure to assess the difference between two distributions. Introduced by Kullback and Leibler (1951), it has found various applications in astrophysics in recent years (see, for example, Ben-David et al., 2015; Hee et al., 2016; Moews et al., 2019; Nicola et al., 2019). For a given reference distribution \(P\) and proposal distribution \(Q\), the KLD can be written as
\[D_{\mathrm{KL}}(P||Q)=\sum_{x\in\mathcal{X}}P(x)\mathrm{log}\frac{P(x)}{Q(x)}, \tag{8}\]
or, correspondingly, with an integral for absolutely continuous probability distributions. One important point is that the KLD is not a distance metric due to its status as an asymmetric difference measure. This means that
\[D_{\mathrm{KL}}(P||Q)\neq D_{\mathrm{KL}}(Q||P), \tag{9}\]
as it calculates a directional information loss when approximating \(P\) via \(Q\). It also does not satisfy the triangle inequality,
\[d(a,c)\leq d(a,b)+d(b,c), \tag{10}\]
with points \(\{a,b,c\}\in M\) for a given metric space \(M\). Given the above, the KLD is applicable only when a 'true' reference distribution is used. Fortunately, this is the case here, as we want to calculate the difference between the distributions of model predictions and their respective targets.
Following Fussell and Moews (2019), who propose the incorporation of the KLD into the loss function in the context of generative modeling - although without implementing the proposal due to conflicting success metrics - we extend Eq. 1 to
\[\mathcal{L}_{f}=w_{s}\mathcal{L}_{s}+w_{k}\mathcal{L}_{k}+w_{\mathrm{KL}} \mathcal{L}_{\mathrm{KL}}. \tag{11}\]
Here, the KLD is part of the overall loss function through a normal assumption placed on the target and prediction distributions, as the loss needs to remain easily differentiable for error backpropagation during training. This means that
\[\mathcal{L}_{\mathrm{KL}}=D_{\mathrm{KL}}\big{(}N(\mu(y),\sigma(y))||\mathcal{ N}(\mu(\hat{y}),\sigma(\hat{y}))\big{)}, \tag{12}\]
for target values \(y\) and corresponding predictions \(\hat{y}\). In doing so, and as described in the overview in Section 1, we impose an additional constraint on the learning process by forcing the model to recreate the approximate distribution of the underlying training data. In later sections, we will see how this avoids an underprediction of tails in the scatter of the baryonic properties of interest.
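Under the normal assumption, Eq. 12 has a closed form that remains differentiable for backpropagation. The sketch below treats each of the six targets with an independent univariate Gaussian, a diagonal simplification of the full-covariance construction described in Section 3.1:

```python
import torch

def kld_loss(y, y_hat, eps=1e-8):
    """KL(N(mu_y, sd_y) || N(mu_yhat, sd_yhat)) per target, summed.
    Targets form the reference P; predictions form the proposal Q."""
    mu_p, sd_p = y.mean(dim=0), y.std(dim=0) + eps
    mu_q, sd_q = y_hat.mean(dim=0), y_hat.std(dim=0) + eps
    kl = (torch.log(sd_q / sd_p)
          + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sd_q ** 2)
          - 0.5)
    return kl.sum()
```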
Figure 1 provides a schematic for the complete model incorporating both the SHMR from Section 2.3 and the KLD, which we dub the 'Hybrid' model going forward. The lower part of the figure shows the neural network model, going from input data containing information from a set of dark matter halo properties to baryonic predictions, \(\hat{y}\), as the model output. The components of the loss function are encased by a dashed line, listing the mean squared error, the KLD introduced in this section, and the SHMR. The respective weights of these components can be selected based on the predictive performance of the neural network, which we cover as part of our experiments in Section 3.1.
The normal assumption made for the respective distributions is, of course, not without fault. However, it provides an approximation that is differentiable for the purpose of backpropagation during training and is computationally cheap enough to result in reasonable training times on small numbers of GPUs without requiring access to stacks in large-scale supercomputing solutions. We reserve part of the discussion in Section 4 to list possible alternatives in related follow-up research.
Figure 1: Illustration of the hybrid model. The loss function used in this framework is a combination of the mean squared error, the Kullback-Leibler divergence, and the stellar-to-halo mass relationship.
## 3 Experiments and Results
### The stellar-to-halo mass relation in Simba
We first train a simple feed-forward neural network, a multilayer perceptron (MLP), using only the MSE in the loss function. We then use this baseline model to predict all six parameters. Our MLP model has five hidden layers with 50 artificial neurons each. As the top panels of Figure 2 show, this baseline can recover the overall pattern of the SHMR represented in Eq. 3. However, the predicted pattern exhibits a diminished variance at lower and higher halo masses, and is expectedly flat around the characteristic \(M_{\text{max}}\) value.
As a result, the predictions show a flatter relation than 1:1 relative to the true distribution from Simba, indicating a bias. We note that Simba's galaxy sample is limited in stellar mass, which results in a diagonal completeness limit in the SHMR. This leads to galaxies with a high \(M_{s}/M_{h}\) ratio for their halo mass being preferentially included in the sample; such asymmetric completeness (known as Malmquist bias) is common in astrophysics, but can be difficult for machine learning algorithms to recover.
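For concreteness, a minimal sketch of this baseline architecture could look as follows; the choice of PyTorch and of ReLU activations in the hidden layers is our assumption, as the text does not specify the implementation.

```python
import torch.nn as nn

def make_mlp(n_in=3, n_out=6, width=50, depth=5):
    # Five hidden layers of 50 neurons, mapping the dark matter halo
    # inputs {M_h, r_h, sigma_h} to the six baryonic target properties.
    layers, d = [], n_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)
```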
Our PINN, on the other hand, is constructed by encoding Eq. 3 into the loss function as shown in Section 2.3. The parameters of the resulting loss component shown in Eq. 7 are adopted from Behroozi et al. (2019). With this extra constraint, the predicted SHMR pattern better recovers the curvature around \(M_{\text{max}}\) in Figure 2. Eq. 3, however, as a double power-law function, cannot introduce the scatter found in the cosmological simulation, and still displays a biased recovery of the true SHMR relation.
To retrieve the scatter, we need a way to provide information as part of the loss function, effectively forcing the model to recreate it. For this, we extend the loss function with an additional component using the KLD, which we introduced in Section 2.4. This distributional divergence measurement has a history of being used in various cosmological applications, as discussed in Section 1, including the study of properties and auxiliary observational data on baryonic physics (Yasin et al., 2022). The resulting predictions from this hybrid scheme track the respective target values more faithfully and with less bias.
We calculate the mean and covariance matrix of the training data and construct a multivariate Gaussian distribution in six dimensions, one for each prediction target property. By including the KLD-based loss of Eq. 12 in the overall loss function, we create a hybrid model combining the PINN with a KLD measurement as written in Eq. 11. Table 1, listing the MSE as well as the coefficient of determination (\(R^{2}\)) and the Pearson correlation coefficient (\(\rho\)), demonstrates the benefit to the predictive power of these models.
We plot the SHMR relative to the halo mass in Figure 2 to provide a visual overview. This confirms that the hybrid approach between extraneous physical knowledge and distributional adherence outperforms both the baseline model and the PINN alone. The latter provides information on the mean trends of the SHMR, while the KLD helps the neural network to mimic the substantial scatter, which an exact equation does not provide. Compared to the baseline MLP, the figure shows that the distributional component helps predict the SHMR more accurately at lower halo masses while tracing the downward scatter at higher halo masses.
To better understand the correlation between targets and predictions, the bottom panels of Figure 2 show the SHMR values from Simba on the horizontal axis, with SHMR values from model predictions on the vertical axis. For a perfect model, the results would follow the plot's diagonal line. The visuals, which indicate a better correlation for the PINN compared to the MLP, and for the hybrid model compared to both, are confirmed by the corresponding correlation measurements in Table 1. Apart from the SHMR, the KLD also improves the model's prediction ability on other baryonic properties, which we will cover in Section 3.2.
Lastly, we perform a maximum likelihood estimation (MLE) to optimize the parameters of Eq. 3 and, accordingly, of the PINN part of the loss function in Eq. 7. Here, the likelihood function is the density function regarded as a function of given parameters \(\theta\),
\[\mathcal{L}(\theta)=\prod_{i=1}^{n}f(x_{i}|\theta),\text{ with }\theta\in\Theta, \tag{13}\]
where \(\Theta\) is the corresponding parameter space and \(f(\cdot)\) is the probability density function. We can estimate the optimal combination of parameter values of interest, \(\hat{\theta}\), as
\[\hat{\theta}=\arg\max_{\theta\in\Theta}\mathcal{L}(\theta|x), \tag{14}\]
which results in a choice of parameter values that make the observed data the most probable. A wide range of values has been reported for the parameters of Eq. 3 (Behroozi et al., 2019; Kravtsov et al., 2018; Shankar et al., 2017). Instead of directly applying these numbers, they serve as the initial guess of our estimator, and the reported range of values is treated as a constraint. The role of the MLE can thus be framed as molding the known physical equation to best fit the underlying data from Simba within accepted ranges.
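A minimal sketch of this estimation step with SciPy is given below; `shmr`, `theta_init`, and `param_bounds` are hypothetical placeholders for the double power law of Eq. 3, the literature-based initial guess, and the accepted parameter ranges, and the Gaussian residual model is our simplification.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, log_mh, log_ms, shmr):
    # Gaussian residual model around the SHMR; the scatter sigma is
    # fit jointly with the parameters of the relation.
    *params, sigma = theta
    resid = log_ms - shmr(log_mh, *params)
    return 0.5 * np.sum(resid ** 2 / sigma ** 2
                        + np.log(2.0 * np.pi * sigma ** 2))

# result = minimize(neg_log_likelihood, x0=theta_init,
#                   args=(log_mh, log_ms, shmr), bounds=param_bounds)
```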
In addition, we split off 10% of the dataset as a validation set to perform a grid search on the weights for the loss function components, \(w=[w_{s},w_{k},w_{\text{KL}}]\). While an MLE approach is infeasible here, as it would require retraining the model for each evaluation, this grid search allows us to gauge a suitable combination. Each combination is tested for 20 cross-validations using the MSE and the mean absolute percentage error (MAPE), with the results listed in Table 2.
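A sketch of this search could look as follows; `build_and_train` and the arrays `X` and `y` are hypothetical stand-ins for the training pipeline and validation data described above.

```python
import itertools
import numpy as np
from sklearn.model_selection import KFold

results = {}
for w_k, w_kl in itertools.product([0.1, 1.0, 10.0], repeat=2):
    scores = []
    # 20 cross-validations per weight combination, with w_s fixed to 1
    for train_idx, val_idx in KFold(n_splits=20, shuffle=True).split(X):
        model = build_and_train(X[train_idx], y[train_idx],
                                w_s=1.0, w_k=w_k, w_kl=w_kl)
        scores.append(np.mean((model.predict(X[val_idx]) - y[val_idx]) ** 2))
    results[(1.0, w_k, w_kl)] = (np.mean(scores), np.std(scores))
```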
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & MSE & \(R^{2}\) & \(\rho\) \\ \hline MLP & 0.024 & 0.821 & 0.906 \\ PINN & 0.023 & 0.827 & 0.910 \\ Hybrid (PINN+KLD) & 0.020 & 0.847 & 0.920 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison between different models. The table lists the mean squared error, the coefficient of determination (\(R^{2}\)), and Pearson’s correlation coefficient (\(\rho\)) for the different neural network models corresponding to Figure 2. The hybrid model is a combination of the PINN with the KLD in the loss.
\begin{table}
\begin{tabular}{l c c} \hline \hline \((w_{s},w_{k},w_{\text{KL}})\) & MSE & MAPE \\ \hline \((1,0.1,0.1)\) & 0.02623 (0.0013) & 1.126 (0.0527) \\ \((1,0.1,1)\) & 0.02583 (0.0023) & 1.123 (0.0476) \\ \((1,0.1,10)\) & 0.03021 (0.0031) & 1.523 (0.0529) \\ \((1,1,0.1)\) & 0.02671 (0.0020) & 1.234 (0.0588) \\ \((1,1,1)\) & 0.02629 (0.0021) & 1.212 (0.0526) \\ \((1,1,10)\) & 0.03365 (0.0028) & 1.387 (0.0483) \\ \((1,10,0.1)\) & 0.03711 (0.0011) & 1.527 (0.0339) \\ \((1,10,1)\) & 0.03521 (0.0015) & 1.434 (0.0415) \\ \((1,10,10)\) & 0.03840 (0.0019) & 1.714 (0.0374) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Grid-based optimization of loss function component weights. The table lists the mean squared error and mean absolute percentage error for various loss function weight combinations in the hybrid model. The values in brackets list the associated standard deviations.
### Predicting baryonic properties from dark matter halos
After establishing the effect on SHMR prediction, we expand our analysis to the full set of six baryonic properties, \(\{M_{\ast},\mathrm{SFR},Z,M_{\rm HI},M_{\rm H2},M_{\rm BH}\}\). To better understand the cause of these results, we calculate \(\rho\) values between the entire set of available parameters in Figure 3.
Figure 4, where these variables are plotted against the halo mass, shows that each parameter is reasonably well-predicted upon visual inspection. In particular, the model excels for \(M_{\ast}\) and \(M_{\rm BH}\), while the performance on SFR and \(M_{\rm H2}\) is subject to a scatter taper at higher halo masses. In the values of Figure 3, we can see that the correlations between the dark matter halo properties used as inputs, meaning \(\{M_{h},r_{h},\sigma_{h}\}\), and these variables are considerably lower compared to the rest of the investigated properties. At the same time, good results are equally reflected in strong correlations for stellar and black hole masses. As covered in Section 3.1, this is aided by the additional SHMR constraint in the loss function.
Following this, we explore secondary correlations between different variables, analogous to a similar analysis performed by Agarwal et al. (2018). Figure 3 indicates that galaxies with higher SFR exhibit positive correlations with \(M_{\rm HI}\) and \(M_{\rm H2}\), and a negative correlation with \(Z\). To test whether our model correctly learns the split in the specific star formation rate,
\[{\rm sSFR}=\frac{{\rm SFR}}{M_{\ast}}, \tag{15}\]
we plot these properties against the stellar mass and color data points using the distance in sSFR values from the mean \(M_{\ast}-{\rm sSFR}\) relation, \(\Delta\log_{10}{\rm sSFR}\), in Figure 5. The resulting mean scaling relations are drawn for model predictions and the corresponding data from the underlying Simba simulation, with the former tracing the results of Agarwal et al. (2018) and Dave et al. (2019).
Both mean scaling relations are obtained by fourth-order polynomial fitting, and we provide the 6\({}^{\rm th}\) to 93\({}^{\rm rd}\) percentile range for Simba data, indicated by grey shading. For the latter, we use a bin size of 0.2 for the \(\log_{10}M_{\ast}\) values in solar masses along the horizontal axis. The divergence between confidence intervals and the mean scaling relation for metallicity in Simba at lower stellar masses is an artifact of the small number of data points available in this range, but we include this left-hand interval to exemplify potential peculiarities encountered in such analyses.
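A sketch of this procedure with NumPy, using illustrative names, could read:

```python
import numpy as np

def mean_scaling_relation(log_ms, values, order=4, bin_width=0.2):
    # Fourth-order polynomial fit of a baryonic property against
    # log10 stellar mass, plus a 6th-93rd percentile band per bin.
    fit = np.poly1d(np.polyfit(log_ms, values, order))
    edges = np.arange(log_ms.min(), log_ms.max() + bin_width, bin_width)
    idx = np.digitize(log_ms, edges)
    bands = [(np.percentile(values[idx == i], 6),
              np.percentile(values[idx == i], 93))
             for i in np.unique(idx)]  # only occupied bins are evaluated
    return fit, bands
```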
As our approach is a prediction problem using approximations, it is not perfectly consistent, just like other works on machine learning for baryonic inpainting into dark matter halos. Here, the underprediction of scatter at higher stellar masses is notable for \(M_{\rm H2}\) compared to Simba. At the same time, the relationships of \(Z\), \(M_{\rm HI}\), and \(M_{\rm H2}\) to the sSFR are preserved around the mean scaling relations, although in a cleaner split than is the case in the target values provided by the Simba cosmological simulation.
Figure 3: Correlation matrix of input and output parameters. The color bar indicates the Pearson correlation coefficient values, calculated for variables in the Simba dataset used in the presented work.
Figure 2: Top panels: Hexagonal joint histograms of the predicted SHMR from different models and the true SHMR calculated from the Simba test set. From left to right, the panels show the results from the baseline MLP (yellow), the PINN (red), and the hybrid PINN + KLD model (purple). The Simba data is shown in green. Bottom panels: Hexagonal joint histograms of the predicted versus target SHMR for different models and the test set, with color coding as described above. The diagonal line indicates \(x=y\) for the resulting correlation plots.
The recovery of secondary correlations between SFR (or gas content), metallicity, and stellar mass, known as the fundamental metallicity relation, is an important success of this machine learning framework.
### Correlations for separate prediction targets
In this section, we further analyze the quality of model outputs. Similar to our visualizations in Section 3.2, we wish to compare the model predictions to the underlying target data from the Simba suite of cosmological simulations, but separately for different baryonic properties. Figure 6 shows kernel density estimates for all six target properties. For the bandwidth optimization, we make use of Scott's rule, the default heuristic in a variety of statistical software packages, for a dataset of length \(|X|\) with a given dimensionality,
\[\beta=|X|^{-1/(\mathrm{dim}(X)+4)}. \tag{16}\]
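In practice, this corresponds to the default bandwidth of common KDE implementations; a minimal example with SciPy:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_curve(samples, grid):
    # gaussian_kde applies Scott's rule, |X|**(-1/(dim + 4)), by default
    kde = gaussian_kde(samples, bw_method="scott")
    return kde(grid)
```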
In addition, we also plot the same for the baseline multilayer perceptron used in previous comparisons. These density plots reveal a reasonably close agreement regarding scatter between Simba and our model's predictions for all baryonic properties. While the baseline model already performs well on the stellar mass and, to a degree, on the black hole mass, it visibly struggles with the scatter of the remainder of the target properties.
Our hybrid model shows major improvements in the distributions of SFR, \(Z\), \(M_{\mathrm{HI}}\), and \(M_{\mathrm{H2}}\). For the multilayer perceptron, the predicted distributions form a sharp peak around the mean, demonstrating especially difficult-to-retrieve scatter for the star formation rate and the molecular hydrogen mass. While the injected physical knowledge on the SHMR does not aid with scatter predictions but instead with the accuracy of associated properties, the KLD loss component contains extra information on distributions that allows for the production of the necessary scatter.
To quantify these improvements, we compare the baseline model and our hybrid approach in Table 3, providing MSE, \(R^{2}\), and \(\rho\) values as for previous comparisons. Here, we can see the expectedly high correlations for \(M_{*}\), \(M_{\mathrm{BH}}\), and, to a lesser degree, \(Z\), comparable to prior research on different hybrid approaches in this area (Moews et al., 2021).
While the latter does not use neural network architectures or the direct injection of information into a loss function, it builds an additional analytic model into the prediction pipeline and uses full largest-progenitor merger trees; reaching comparable correlations without these ingredients demonstrates the capability of our loss function extensions.
Despite the correct prediction of mean relations, results for SFR, \(M_{\rm HI}\), and \(M_{\rm H2}\) are not quite as strong when viewed next to the remaining baryonic properties. One reason is that we use a
Figure 4: Hexagonal joint histograms of target values and model predictions. The panels show, for the set of six baryonic properties of interest, plots against the dark matter halo mass. Model predictions are in purple, while the Simba data is in green.
Figure 5: Secondary correlations at \(z=0\). The panels show, from top to bottom, neutral hydrogen, molecular hydrogen, and metallicity as a function of the stellar mass. Mean scaling relations are drawn with blue dot-dashed lines for Simba and red dashed lines for our hybrid model. Galaxies are colored by the distance from the mean \(M_{*}-\mathrm{sSFR}\) relation.
single neural network model to predict all variables simultaneously instead of using separate models. The downside is that the model cannot focus solely on a one-dimensional prediction; using separate models, however, is not desirable either, as it bars the model from learning the connections between different target variables. The result is that properties with a smaller correlation coefficient with respect to the input variables, as shown in Figure 3, are more difficult to recover.
For the star formation rate and both hydrogen masses, we can see in Figure 4 that there is more scatter at higher dark matter halo masses for these targets in particular, which usually exhibit higher scatter compared to \(M_{*}\), \(Z\), and \(M_{\rm BH}\) (Shankar et al., 2017). Another reason for prediction errors likely lies in the normal assumption. To calculate the KLD loss and incorporate it into the backpropagation, we assume a 6D Gaussian distribution, which does not hold exactly for our data. More complex approximations, however, are beyond the scope of this initial study due to often prohibitive computational costs and the challenge of maintaining differentiability, but are discussed in Section 4.
There are potential solutions, which include the injection of more information, for example on scatter relations, into the loss function. Another pathway is adding more data, such as merger tree information, to the input features. The main purpose of this paper is to show that PINNs, which make use of extraneous physical knowledge, together with a second distributional loss component, can improve model performance in this challenging area, and to demonstrate the utility that current developments in deep learning approaches can provide for the simulation of baryonic properties.
## 4 Discussion and Limitations
The application of modern machine learning methods to the completion of \(N\)-body information has emerged as a growing area of interest in recent years. Here, the sometimes-mentioned unreasonable effectiveness of tree-based ensembles, most commonly random forests and extra trees, lies in the fact that these models are comparatively simple in their functionality and yet, as covered in Section 1, are frequently found to outperform neural network architectures in this area. For standard feed-forward frameworks, the universal approximation theorem even guarantees the ability to represent arbitrary functions under very limited assumptions (Cybenko, 1989;
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{MLP} & \multicolumn{3}{c}{Hybrid} \\ \cline{2-7} Variable & MSE & \(R^{2}\) & \(\rho\) & MSE & \(R^{2}\) & \(\rho\) \\ \hline \(M_{*}\) & 0.023 & 0.827 & 0.901 & 0.020 & 0.847 & 0.920 \\ SFR & 0.638 & 0.050 & 0.267 & 0.423 & 0.080 & 0.284 \\ \(Z\) & 0.052 & 0.277 & 0.527 & 0.044 & 0.352 & 0.593 \\ \(M_{\rm HI}\) & 0.221 & 0.215 & 0.450 & 0.162 & 0.235 & 0.480 \\ \(M_{\rm H2}\) & 0.308 & 0.087 & 0.284 & 0.208 & 0.097 & 0.313 \\ \(M_{\rm BH}\) & 0.276 & 0.466 & 0.682 & 0.214 & 0.502 & 0.709 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistical validation for the full experimental run of the hybrid model. The table lists the MSE, the coefficient of determination (\(R^{2}\)), and Pearson’s correlation coefficient (\(\rho\)) for each output variable.
Figure 6: Density plots for predictions on all six baryonic target properties. The panels show separate kernel density estimates for stellar mass (\(M_{*}\)), star formation rate (SFR), metallicity (\(Z\)), neutral hydrogen (\(M_{\rm HI}\)), molecular hydrogen (\(M_{\rm H2}\)), and black hole mass (\(M_{\rm BH}\)). The corresponding values from the underlying Simba test dataset are plotted in yellow, baseline multilayer perceptron predictions in green, and results for the hybrid model in red.
Hornik et al., 1989; Maiorov and Pinkus, 1999). These theoretical capabilities do not, however, touch upon the 'learnability' of these functions, as there are a wide variety of hyperparameters to consider.
While tree-based ensembles are simple, they too are not immune to such limitations, and more recent research has discussed expected error rates as non-monotonous functions of the number of ensemble constituents to be set by the user (Probst and Boulesteix, 2018). With the first application of machine learning to this specific area having arguably started with Kamdar et al. (2016), recent works such as Moster et al. (2021), Jespersen et al. (2022), and Stiskalek et al. (2022) have demonstrated the performance potential of suitably chosen neural network architectures.
One issue that remains is the limitations set by the data used for training and prediction in supervised machine learning models. A popular adage from the early days of computer science still widely used is 'garbage in, garbage out', expressing the constraints on arbitrarily powerful models by insufficient datasets. While the results of cosmological simulations are, of course, by no means garbage, they are also limited by the physics they implement and the chosen assumptions and simplifications. PINNs provide a way to go beyond these ingredients, forcing the model to consider further explicitly specified physical relationships.
Our additional implementation of distributional compliance as part of the loss function puts a further constraint on the model, which targets scatter fidelity. The direct incorporation of this additional knowledge into the architecture also goes beyond prior work on the inclusions of analytic models, which relies on physical computations outside of and before the application of machine learning models, as previously implemented by Moews et al. (2021).
At the same time, the predictive power of our architecture relies on the suitability of the included loss function components in terms of both the physical domain knowledge provided and the choice of a density approximation method. The natural extension of our work, although beyond the scope of this paper, is the identification and analysis of additional domain knowledge to be injected into the loss function, thus allowing the model to root its learning in a more complete set of physics. Another pathway is replacing our normal assumption with more complex approximations that allow for non-Gaussian features and multimodal distributions.
In Table 4, we list the dataset sizes and different dark matter properties used as inputs in similar works providing the same performance metrics (Kamdar et al., 2016; Agarwal et al., 2018; Moews et al., 2021). In contrast to this paper, all three comparable studies employ tree-based models, with the largest dataset used by Kamdar et al. (2016). The latter also provide their model, in addition to \(M_{h}\) and \(\sigma_{h}\), with the three spin components, number of dark matter particles bound to the subhalo, and maximum circular velocity in the subhalo, but omit \(r_{h}\) as used in our work. This supplies additional information on the dark matter halo to their model.
Similarly, Agarwal et al. (2018) do not include \(r_{h}\), but add the halo local density, with \(\{M_{h}\}_{i}\) denoting not only the current halo mass at \(z=0\) but also the five preceding snapshots at higher redshifts, which provides information on the recent merger history. Conversely, \(\{\rho_{h}\}_{j}\) indicates the set of nearby halo mass densities within radii \(r\in\{200,500,1000\}\) (in kpc) centered on the halo's center of mass, thus providing additional information in a similar way. That being said, their dataset is the smallest in this line-up.
The work most closely aligned with ours in terms of inputs is Moews et al. (2021), although largest-progenitor merger trees over the entirety of a halo's evolution are provided instead of only the current halo mass. More importantly, their hybrid approach uses an analytic formalism to pre-compute a subset of baryonic properties with a physical model, feeding the merger trees both into the latter and the subsequent machine learning model.
In Table 5, we list the performance metrics, \(R^{2}\) and \(\rho\), for these related works to enable a discussion of differences in data and model approach. With regard to Kamdar et al. (2016), our model yields a close performance on \(M_{*}\) and \(M_{\text{BH}}\), but with a less robust prediction of SFR and \(Z\). This could be due to the size of the Illustris dataset used by the authors, which is about eight times larger than our available Simba data, as more suitable data usually leads to a better performance in machine learning models (Zhou et al., 2014).
Agarwal et al. (2018) use Mufasa, the predecessor of Simba, and benefit from their inclusion of prior halo masses and halo mass densities at different distance radii. Our model's results are comparable for \(M_{*}\) and \(M_{\text{HI}}\), with a slight underperformance for \(Z\), but there is a notable degradation in \(M_{\rm H2}\) and SFR predictions. Based on the correlation matrix in Figure 3, these two properties are highly correlated, meaning that an improvement in one in future research should impact the other. One potential reason for this degradation is the larger scatter in Simba at higher halo masses, while Agarwal et al. (2018) pre-select their galaxies to be star-forming, which strongly reduces the scatter at high masses.
Lastly, Moews et al. (2021) develop a hybrid approach, combining an extra trees ensemble with the equilibrium model, and incorporating merger trees into the latter. The same data and inputs are used in both works, save for the largest-progenitor merger trees that are fed into both the physical and machine learning models. While our model achieves close results on \(M_{*}\), \(Z\), \(M_{\text{HI}}\), and, to a lesser degree, \(M_{\text{BH}}\), metrics for SFR and \(M_{\rm H2}\) are lower. Here, the difficulty of predicting the star formation rate due to the large scatter could be data-driven, while Moews et al. (2021) utilize a physical model for this property. At the same time, our model outperforms
\begin{table}
\begin{tabular}{c c c} \hline \hline & size & input \\ \hline Kamdar et al. (2016) & 249370 & \(M_{h}\), \(S_{x}\), \(S_{y}\), \(S_{z}\), \(\sigma_{h}\), \(N_{h}\), \(V_{\text{c}}\) \\ Agarwal et al. (2018) & 3400 & \(\{M_{h}\}_{i}\), \(\{\rho_{h}\}_{j}\), \(\lambda_{h}\), \(\sigma_{h}\) \\ Moews et al. (2021) & 13132 & \(\{M_{h}\}_{i}\), \(r_{h}\), \(\sigma_{h}\) \\ Dai et al. (this work) & 14247 & \(M_{h}\), \(r_{h}\), \(\sigma_{h}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Dataset size and input variables for related works and this paper. Here, \(S_{x,y,z}\) denotes the different components of the spin, \(V_{\text{c}}\) the maximum circular velocity in the subhalo, \(N_{h}\) the number of dark matter particles bound to the subhalo, \(\lambda_{h}\) the halo spin, and \(\rho_{h}\) the halo local density. Sets for additional information at higher redshifts are indicated using braces.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Kamdar et al. & & Agarwal et al. & Moews et al. \\ \cline{2-6} Variable & \(R^{2}\) & \(\rho\) & \(R^{2}\) & \(\rho\) & \(R^{2}\) & \(\rho\) \\ \hline \(M_{*}\) & 0.91 & 0.95 & 0.90 & 0.95 & 0.82 & 0.94 \\ SFR & 0.63 & 0.79 & 0.55 & 0.74 & 0.73 & 0.87 \\ \(Z\) & 0.93 & 0.96 & 0.73 & 0.86 & 0.21 & 0.66 \\ \(M_{\text{gas}}\) & 0.67 & 0.85 & - & - & - & - \\ \(M_{\text{HI}}\) & - & - & 0.35 & 0.59 & 0.36 & 0.65 \\ \(M_{\text{H2}}\) & - & - & 0.51 & 0.71 & 0.54 & 0.75 \\ \(M_{\text{BH}}\) & 0.72 & 0.85 & - & - & 0.71 & 0.88 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance metrics for related works. The table lists the coefficient of determination (\(R^{2}\)) and Pearson’s correlation coefficient (\(\rho\)) for each output variable. Here, in addition to the variables described throughout this paper, \(M_{\text{gas}}\) denotes the total gas mass of a subhalo in the indicated paper.
these results on \(Z\), which reflects the equilibrium model's difficulty with this quantity.
Overall, unlike our method focusing on the retrieval of accurate scatter, the above models are optimized for accuracy and make use of more information on the dark matter halo. The results of both Agarwal et al. (2018) and Moews et al. (2021) show a considerably less accurate scatter retrieval than can be seen in our experiments, with noticeable underprediction of the tails. The goal of this work is not to replace existing research but to demonstrate modifications that can increase the predictive ability of deep learning models in this area. Since the dataset, model, and purpose are different, this comparison provides a direction for future research aimed at combining these strengths.
For follow-up research not using PINNs, or machine learning models that do not require the loss function to be differentiable for backpropagation as described in Section 2.4, we recommend extensions to the comparison of target and prediction distributions. This work uses the KLD to calculate the difference between those distributions under the normal assumption. While this is a reasonably close approximation for the data used in our experiments, it limits our model's effectiveness when directly transferred to markedly non-Gaussian datasets. In such cases, the spread of predictions through the variance would still be enforced, but in the case of, for example, a starkly multimodal distribution, the recreation of these distributional features would not be a major component of the optimization.
As the comparison needs to be reasonably fast, approaches such as Bayesian mixture models as well as associated methods that are more complex (see, for example, Moews & Zuntz 2020) are likely to slow the training process down too much. As a compromise, a kernel density estimate (KDE), also known as the Parzen-Rosenblatt window after Rosenblatt (1956) and Parzen (1962), can be used, although this is limited to lower dimensionalities. We propose two ways to circumvent the latter limitation. The first is applying one-dimensional KDEs on a variable-by-variable basis and then averaging the Kullback-Leibler divergences between these estimates. While the advantage is the good fit in \(\mathbb{R}^{1}\), this can lead to a subset of variables not being forced to follow the target distribution as long as the KLD average remains small.
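A sketch of this first proposal, with grid resolution and regularization constant as our own choices, could read:

```python
import numpy as np
from scipy.stats import gaussian_kde

def avg_marginal_kld(y_true, y_pred, n_grid=256, eps=1e-12):
    # One-dimensional KDE per variable, evaluated on a shared grid,
    # then averaged over the resulting Kullback-Leibler divergences.
    klds = []
    for j in range(y_true.shape[1]):
        lo = min(y_true[:, j].min(), y_pred[:, j].min())
        hi = max(y_true[:, j].max(), y_pred[:, j].max())
        grid = np.linspace(lo, hi, n_grid)
        p = gaussian_kde(y_true[:, j])(grid) + eps
        q = gaussian_kde(y_pred[:, j])(grid) + eps
        p, q = p / p.sum(), q / q.sum()  # discretized distributions
        klds.append(np.sum(p * np.log(p / q)))
    return float(np.mean(klds))
```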
The alternative is to make use of dimensionality reduction methods such as principal component analysis, which collapses the \(n\)-dimensional coordinate space, for \(n\) variables, into orthogonal vectors ranked by their ability to explain the variance (for a recent overview, see Jolliffe & Cadima 2016). Reducing the coordinate space to a subspace in \(\mathbb{R}^{2}\) would, for example, still allow for reasonably good KDE approximations while retaining each variable's contribution to a combined KLD. The same line of thought does, of course, also apply to other density approximation and dimensionality reduction methods.
## 5 Conclusion
In this paper, we transfer the paradigm of physics-informed neural networks to predicting baryonic properties for associated dark matter halo variables. We adapt this approach in two different ways. The first includes the stellar-to-halo mass relation as a double power law previously used to parameterize the instantaneous baryon conversion efficiency. While this is a more established way to include physical theory into PINNs, our second extension is the enforcement of baryonic scatter in simulations under a normal assumption using the Kullback-Leibler divergence between the underlying cosmological simulation and predictions. In doing so, we solve the common problem of scatter reproduction in this area, which is merged directly into the machine learning model.
We first test the improvement for the more traditional approach to PINNs, meaning the injection of physical domain knowledge into the loss function, and demonstrate a positive effect on the model's performance. The hybrid approach combines these strengths by including the measurement of distributional differences and outperforms the standard PINN model. Subsequent tests of the scatter retrieval show more faithful reproductions for baryonic properties. These improvements in scatter are especially notable for molecular hydrogen masses and star formation rates, but can also be seen for neutral hydrogen masses and metallicities. In particular, our model successfully recovers the fundamental metallicity relation.
Our experiments demonstrate that PINNs, a rapidly expanding area of research across various subfields of physics, offer a way to directly bake theoretical constraints and distributional adherence into neural network architectures when painting baryonic properties into galactic dark matter halos. As such, they can be used to complete cosmological \(N\)-body simulations based on full hydrodynamic simulation suites, although this comes with the same caveats as other research in this area.
The inference of physics from simulations operates under the assumption that such simulations are a sufficiently close approximation of the real world. Any machine learning models learning from those simulations are subject to the same assumption. That said, including physical models in the learning process enables these algorithms to include domain information beyond the underlying cosmological simulations.
Potential follow-ups include additional physical models specific to galaxy formation and evolution into the loss function, as well as further constraints based on observational data or other simulations to diversify the data sources. Our presented framework is widely applicable to large-scale cosmological simulations and the study of the utility and effect of physical domain knowledge on galaxy evolution emulators. It provides a further piece in the puzzle of fully using modern machine learning in astrophysics.
|
2306.04449 | Neural Networks from Biological to Artificial and Vice Versa | In this paper, we examine how deep learning can be utilized to investigate
neural health and the difficulties in interpreting neurological analyses within
algorithmic models. The key contribution of this paper is the investigation of
the impact of a dead neuron on the performance of artificial neural networks
(ANNs). Therefore, we conduct several tests using different training algorithms
and activation functions to identify the precise influence of the training
process on neighboring neurons and the overall performance of the ANN in such
cases. The aim is to assess the potential application of the findings in the
biological domain, the expected results may have significant implications for
the development of effective treatment strategies for neurological disorders.
Successive training phases that incorporate visual and acoustic data derived
from past social and familial experiences could be suggested to achieve this
goal. Finally, we explore the conceptual analogy between the Adam optimizer and
the learning process of the brain by delving into the specifics of both systems
while acknowledging their fundamental differences. | Abdullatif Baba | 2023-06-05T17:30:07Z | http://arxiv.org/abs/2306.04449v1 | # Neural Networks from Biological to Artificial and Vice Versa
###### Abstract
In this paper, we examine how deep learning can be utilized to investigate neural health and the difficulties in interpreting neurological analyses within algorithmic models. The key contribution of this paper is the investigation of the impact of a dead neuron on the performance of artificial neural networks (ANNs). Therefore, we conduct several tests using different training algorithms and activation functions to identify the precise influence of the training process on neighboring neurons and the overall performance of the ANN in such cases. The aim is to assess the potential application of the findings in the biological domain, the expected results may have significant implications for the development of effective treatment strategies for neurological disorders. Successive training phases that incorporate visual and acoustic data derived from past social and familial experiences could be suggested to achieve this goal. Finally, we explore the conceptual analogy between the Adam optimizer and the learning process of the brain by delving into the specifics of both systems while acknowledging their fundamental differences.
## 2 ANN Vs SNN (The problem statement)
Artificial Neural Networks (ANNs) are a type of machine learning algorithm inspired by the neural structure of the human brain. ANNs are particularly useful for tasks that involve large amounts of data, such as image recognition, natural language processing, and speech recognition. They are also capable of learning complex relationships between inputs and outputs, making them well-suited for tasks that involve non-linear mappings. Training an ANN involves adjusting the network's weights and biases in response to a set of training examples. This process is typically done using an optimization algorithm to minimize the difference between the network's output and the desired output for each given input.
### ANN training issues
Backpropagation is frequently utilized as an algorithm to train ANNs, where the neurons can be fired by different types of activation functions, mainly Sigmoid, ReLU, or LeakyReLU (Figure 1). The backpropagation algorithm computes the error between the predicted output and the actual output for each training vector and then uses this error to adjust the weights in the network. Specifically, the algorithm calculates the gradient of the error with respect to each weight in the network and then updates each weight by moving it in the direction that reduces the error. As the training process continues, the weights are expected to converge gradually toward values that minimize the error on the training data. In other words, the weights should become more and more optimized for the specific task the network is being trained to perform.
It is worth noting that the evolution of the weights during training can be influenced by various factors such as the learning rate, the number of training iterations, the structure of the network, and the complexity of the task. Additionally, it is possible for the weights to become stuck in a local minimum, which can prevent them from reaching the global minimum and therefore limit the network's performance.
During the training process, two issues could be encountered:
The first one is called the vanishing gradient problem: in some cases, the back-propagated gradient errors become vanishingly small, preventing the weights of a given neuron from changing even by small amounts. This means the neuron itself does not profit from the successive training vectors to extract the hidden features from the given data, and consequently its influence on the output becomes negligible. In any case, we have to notice that the weights of links inside an ANN serve as long-term memory compared to the biological neuron. One possible solution is to use the ReLU activation function instead of the Sigmoid, as it is faster and does not activate all neurons simultaneously. Another possible solution is to use an optimization algorithm such as the Adam optimizer Kingma and Ba (2015), which has been shown to perform better than plain backpropagation-based gradient descent in some cases. Additionally, other activation functions such as the hyperbolic tangent (tanh) Glorot, Bordes and Bengio (2011) or the exponential linear unit (ELU) Clevert, Unterthiner and Hochreiter (2015) have also been suggested to address this issue. However, using the ReLU activation function may result in dead neurons that produce zero output in the negative input region: since the gradient for any negative value applied to it is zero, the corresponding weights will not be updated during backpropagation. This issue can be addressed by using LeakyReLU instead, which has a small slope for negative values rather than a flat one, allowing negative inputs to produce some corresponding values on the output.
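A small PyTorch experiment, added here purely for illustration, makes the difference visible for a negative input:

```python
import torch

x = torch.tensor([-2.0], requires_grad=True)
for act in (torch.nn.ReLU(), torch.nn.LeakyReLU(0.01)):
    if x.grad is not None:
        x.grad.zero_()
    act(x).backward()
    print(type(act).__name__, "gradient at x = -2:", x.grad.item())
# ReLU      -> 0.00  (a dead unit passes no gradient, so no weight update)
# LeakyReLU -> 0.01  (the small negative slope keeps the unit trainable)
```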
Interestingly, behavioral disorders in humans may be caused by inhibited, extra-excited, or damaged neurons, which could correspond to the vanishing gradient problem and dead artificial neurons in ANNs. While replacing or modifying the activation function is a feasible solution for ANNs, it is more challenging for biological neurons, as their activation functions are chemical in nature. Thus, the question of how to replace or modify them remains a significant challenge.
From a technical point of view, a reconfigurable FPGA-based implementation could be suggested to model the biological activation function, providing sufficient flexibility to track the neuron's evolutionary state during its training process. An alternative approach is to use spiking neural networks (SNNs) Maass (1997a); Wang, Li, Chen and Xu (2020) that are more biologically realistic than traditional ANNs, as they incorporate the concept of spikes or action potentials that occur in biological neurons Srinivasa, Cruz-Albrecht, Chakradhar and Cauwenberghs (2016); Moradi, Qiao and Stefanini (2017). They can also simulate the behavior of inhibitory and excitatory neurons, which is important in modeling neural disorders. There are existing tools and frameworks for implementing SNNs on FPGAs, such as SpiNNaker and BrainScaleS Davies et al. (2018).
### Spiking Neural Networks (SNNs)
Spiking Neural Networks (SNNs) are a type of artificial neural network that is inspired by the way that biological neurons communicate through the use of electrical spikes or "action potentials" Maass (1997b). In contrast to traditional neural networks, which typically process continuous-valued inputs and outputs, SNNs operate on time-varying signals, where each neuron is activated by a series of discrete spikes representing the timing and strength of incoming signals Gerstner, Kistler, Naud and Paninski (2014). In SNNs, the information is processed in a more biologically plausible way, where the output of a neuron is determined not only by the magnitude of the input but also by the timing and frequency of the incoming spikes. This allows SNNs to more accurately model the spatiotemporal dynamics of biological neural networks and to handle problems that are difficult for
traditional neural networks, such as those involving temporal patterns or event-based data. One important feature of SNNs is their ability to implement a form of time-based computation, where the timing of input signals can be used to perform complex computations Maass (2002). For example, SNNs have been used to perform tasks such as speech recognition, image recognition, and control of robotic systems, where the timing of events is critical to the task at hand Pfeiffer and Brette (2018). SNNs can be trained using a variety of methods, including supervised learning, unsupervised learning, and reinforcement learning, and there are many different architectures and variants of SNNs that have been proposed in the literature Bengio, Courville and Vincent (2015).
There are several algorithms that can be used to train SNNs, but one of the most common is the SpikeProp algorithm, which is an extension of the backpropagation algorithm used in ANNs. It propagates spikes through the network backward from the output to the input layer, similar to how backpropagation propagates error signals in ANNs. Another popular algorithm for training SNNs is the SUR (Synaptic Update Rule) algorithm, a simple and biologically plausible learning rule that modifies synaptic weights based on the spike timing between the pre- and post-synaptic neurons; i.e., it increases the weight of a connection if the pre-synaptic neuron fires just before the post-synaptic neuron, and decreases it if the pre-synaptic neuron fires just after the post-synaptic neuron. This algorithm aims to strengthen connections that are active at the same time and weaken connections that are not. In addition to these algorithms, there are also unsupervised learning algorithms for training SNNs, such as STDP (Spike-Timing-Dependent Plasticity), which also adjusts the synaptic weights based on the timing of pre-synaptic and post-synaptic spikes, but does so by using slightly different rules for updating the weights compared to the SUR algorithm.
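A pair-based form of such a timing-dependent update can be sketched as follows; the amplitudes, time constant, and weight bounds are illustrative values rather than ones prescribed by the algorithms above.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Potentiate when the pre-synaptic spike precedes the post-synaptic
    # spike (dt > 0), depress otherwise; both windows decay exponentially
    # with the time difference between the two spikes.
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))  # keep the weight in a bounded range
```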
Regarding the activation function, the preferred choice in an SNN depends on the specific task and network architecture. One common activation function used in SNNs is the Sigmoid function, which is also commonly used in ANNs. However, other activation functions, such as the ReLU (Rectified Linear Unit) and its variants, can also be used in SNNs. There are also specialized activation functions designed for SNNs, such as the Spiking Rectified Linear Unit (SReLU), which is a variant of the ReLU that considers the spiking nature of SNNs. SReLU is shown in Figure 1 and given as Python code in Table 1.
## 3 Experiments and analysis
This section assumes training an artificial neural network (ANN) using various algorithms and activation functions to stimulate the neurons in the network. While the ANN is being trained, it is possible for a neuron to lose all relevant weights without any apparent reason. If this happens, we conduct a thorough analysis to understand the expected behavior of the ANN, the affected neuron, and its neighboring neurons. To perform this experiment, we constructed a customized ANN consisting of an input layer with 5 neurons, 3 hidden layers with 10 neurons each, and an output layer with a single neuron (Figure 2). The dataset used for training this network provides information on the power consumption in a specific area over 5 consecutive years. By detecting the hidden patterns within this data, the ANN is capable of predicting the power consumption for the same day of the next year. The dataset was separated into training and testing sets with different, shuffled split ratios applied in successive cycles, employing the "KFold" class and the "cross_val_score" function from the "sklearn" Python module. In this study, we are not focusing on the accuracy of the prediction process or adjusting the parameters of the architecture, as these were already discussed in a previous study Baba (2022, 2021). Instead, we use the previously established and fine-tuned architecture to examine the internal behavior of the ANN when one of its neurons suddenly malfunctions.
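A sketch of this evaluation loop is shown below; scikit-learn's MLPRegressor serves as one possible stand-in for the custom network, and `X` and `y` are hypothetical arrays holding the power consumption features and targets.

```python
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

# 5 inputs -> three hidden layers of 10 neurons each -> 1 output
model = MLPRegressor(hidden_layer_sizes=(10, 10, 10), max_iter=2000)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
# X, y: hypothetical feature matrix and yearly consumption targets
scores = cross_val_score(model, X, y, cv=cv,
                         scoring="neg_mean_squared_error")
```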
Now, let's examine each of these scenarios one by one to deduce the corresponding results:
The first case: (The ANN is trained using the Backpropagation training algorithm. Neurons are fired using the Sigmoid activation function)
When a neuron in an ANN loses its relevant weights, it effectively becomes "dead" and stops contributing to the
\begin{table}
\begin{tabular}{l} def SRELU(x, a, b): \\ \quad if x > b: \\ \quad\quad return x \\ \quad elif x <= -a: \\ \quad\quad return 0 \\ \quad else: \\ \quad\quad return (x + a) * (b - x) / (b + a) \\ \end{tabular}
\end{table}
Table 1: A Python code clarifying the SpikeRectified Linear Unit
Figure 1: The activation functions are illustrated: Sigmoid, ReLU, LeakyReLU, and SReLU
output of the network. However, the other neurons in the same layer as the lost neuron (the neighbors) compensate for this loss by increasing their own weights. The compensation mechanism occurs during the training process, as the weights of the remaining neurons in the same layer are adjusted to minimize the overall loss of the network. Specifically, the weights of the remaining neurons are adjusted to produce the desired output of the network in response to the input data. In doing so, the network redistributes the contribution of the lost neuron to its neighboring neurons, allowing the network to continue learning to make accurate predictions. The behavior of the neuron with the lost weights itself would depend on the specific implementation of the backpropagation algorithm. If the implementation uses regularization or dropout techniques, then the neuron may continue to contribute to the output of the ANN. Regularization and dropout techniques are methods used in machine learning to prevent the overfitting of a model which usually occurs when a model becomes too complex, resulting in it memorizing the training data rather than learning the underlying patterns that generalize to new data. Regularization involves adding a penalty term to the loss function that the model minimizes during training. This penalty term is a function of the model's parameters and helps to prevent the model from becoming too complex. Dropout is another technique where, during training, a random subset of the model's neurons is deactivated or "dropped out" with a certain probability. This encourages the remaining neurons to learn more robust features and reduces the model's reliance on any particular subset of neurons. Dropout has been shown to be effective in preventing overfitting and improving the generalization performance of deep neural networks.
The second case: (The ANN is trained using the Backpropagation training algorithm. Neurons are fired using the ReLU activation function)
In such a scenario, the behavior of the affected neuron and its neighboring neurons may differ from what is expected with the Sigmoid activation function. With ReLU, if a neuron loses its relevant weights, its output will be zero for all inputs. Hence, the lost neuron will not contribute to the ANN's output, and its neighboring neurons will be unable to compensate for the lost contribution, since the output of the lost neuron is already zero. Furthermore, with ReLU and depending on the input value, the activation function's gradients are either 0 or 1. Since a neuron with zero output also has a zero gradient, the backpropagation algorithm will be unable to update the weights of the previous layer's neurons through it, resulting in the vanishing gradient problem. This can make training the ANN difficult or impossible, as the gradients will become extremely small, and the weights will not be updated significantly. Therefore, the loss of relevant weights of a neuron in the ReLU activation function can have a more severe impact on the neuron's behavior and its neighbors in the ANN than with the Sigmoid activation function.
The third case: (The ANN is trained using the Backpropagation training algorithm. Neurons are fired using the LeakyReLU activation function)
In this case, if a neuron loses its relevant weights, the output of that neuron will not become zero for all inputs; instead, it will be a small non-zero value. Therefore, the behavior of the ANN will change, but the impact will not be as severe as in the case of the ReLU activation function. The neighbors of the lost neuron in the LeakyReLU activation function compensate for the lost contribution to some extent, as the output of the lost neuron will still be non-zero. However, the degree of compensation will depend on the amount of weight loss and the specific topology of the ANN. Moreover, the LeakyReLU gradients are either the leak's slope or 1, depending on the input value. If a neuron with a small non-zero output has a non-zero gradient, then the backpropagation algorithm can still update the neurons' weights in the previous layer. Therefore, the vanishing gradient problem is not as severe as in the case of the ReLU activation function.
The fourth case: (The ANN is trained using the Adam optimizer. Neurons are fired using the Sigmoid activation function)
The Adam optimizer is an optimization algorithm that combines the benefits of both gradient descent and momentum techniques. The algorithm uses adaptive learning rates for each weight parameter to improve convergence speed and stability. Therefore, in the case of weight loss of a neuron, the Adam optimizer will adapt to this change and try to update the affected weights in subsequent iterations to compensate for the lost contribution. If the weight loss is minimal, the impact may not be significant, and the ANN can continue learning without any significant change in behavior.
Figure 2: The customized ANN consists of an input layer with 5 neurons, 3 hidden layers with 10 neurons each, and an output layer with a single neuron. The yellow neuron is supposed to lose its relevant weights
Interestingly, there are some similarities between the Adam optimizer and the way the human brain learns. One of the key features of the Adam optimizer is the use of adaptive learning rates. In the human brain, a similar process occurs through synaptic plasticity. Synaptic plasticity refers to the ability of the connections between neurons, known as synapses, to change their strength based on the activity of the neurons. This process enables the brain to adapt and learn from new experiences. Moreover, the Adam optimizer uses momentum to help the optimization process. Momentum refers to the tendency of a moving object to continue moving in the same direction. In the human brain, a similar process occurs through the formation of new neural pathways. As a person learns a new skill or task, their brain forms new neural connections that strengthen the existing pathways, making it easier for the person to perform the skill in the future.
The fifth case: (SNN is trained using the backpropagation algorithm. Neurons are fired using the Sigmoid activation function)
If a Spiking Neural Network is trained using backpropagation and one of its neurons loses its relevant weights, the behavior of the SNN, the affected neuron, and its neighbors would be different from that in the case of an ANN. In an SNN, neurons communicate by sending spikes (discrete events) to their neighbors. Therefore, the behavior of a neuron in an SNN depends on the timing and frequency of the spikes it receives from its neighbors. If a neuron loses its relevant weights, it will receive fewer or no spikes from its neighbors, which will reduce its output firing rate. This reduction in firing rate may affect the behavior of the SNN, especially if the lost neuron plays a significant role in the network's computation. Moreover, in an SNN, the output of a neuron is typically binary (spike or no spike), which is different from the continuous output of neurons in an ANN. Therefore, the behaviors of the affected neuron and its neighbors are more complex than in the case of an ANN. The loss of a neuron's relevant weights causes it to stop firing altogether, which affects the firing patterns of its neighbors and the overall computation of the SNN.
The sixth case: (SNN is trained using the SUR (Synaptic Update Rule) algorithm. Neurons are fired using the SReLU activation function)
As mentioned in the previous case, the weights of the synapses between neurons are adjusted based on the timing of the pre- and post-synaptic spikes. Therefore, the behavior of a neuron in an SNN depends on the timing and frequency of the spikes it receives from its neighbors, and the loss of its relevant weights could cause it to receive fewer or no spikes, which would reduce its output firing rate. The SReLU activation function is a variant of the ReLU activation function designed for use in SNNs. Like the ReLU, the SReLU function is linear for positive inputs, but it produces a spike (a single output event) instead of continuous output. Therefore, the behavior of an SReLU neuron when it loses its relevant weights could be similar to that of a ReLU neuron in an ANN. The neuron's output firing rate would be reduced, and its neighbors would compensate for the lost input by increasing their firing rates.
From the scenarios above, we come to a fundamental conclusion: if a neuron in an artificial neural network loses its relevant weights during backpropagation training, the anticipated behavior of the ANN will be determined by the extent of the weight loss and the stage of training at which it occurs. If the lost weights are minor and the training is still in its initial phases, the effect of the loss may not be substantial, and the ANN may continue to learn with minimal impact on its behavior. When the lost weights are significant or the training is already in later stages, however, the behavior of the ANN may change significantly.
However, as observed in all experiments that are outlined in Table 2, when a neuron suddenly dies, its neighbors on the same layer are activated to make up for the loss. Notably, the closest neurons to the dead neuron bear the greatest responsibility in this process, with their involvement ranging from 60% to 70%, while this proportion decreases significantly with neurons that are farther away.
## 4 From artificial to biological
Biological neural networks are the foundation of the human nervous system and are composed of interconnected neurons that transmit and process information. These networks are responsible for various cognitive functions, including learning, memory, perception, and decision-making. The architecture of biological neural networks consists of individual neurons that are connected through synapses. Neurons receive inputs through dendrites, which are branching extensions that receive signals from other neurons. The signals are then processed in the neuron's cell body or soma. One of the primary activation functions in biological neurons is the action potential, also known as a "spike"; when the accumulated input reaches a certain threshold, the neuron generates an output signal through its axon, which is a long projection that transmits signals to other neurons and the process continues throughout the network.
In this context, the training process refers to the strengthening or weakening of synaptic connections between neurons. This process is influenced by external stimuli and experiences. The most well-known mechanism for synaptic plasticity is long-term potentiation (LTP) which is a process where the strength of a synapse is enhanced following repeated activation of the presynaptic neuron.
However, the human brain is significantly more complex than existing artificial and spiking neural networks, so directly reprogramming the weights of human neurons is not currently feasible or practical. Nevertheless, the principles of neural network training and the use of artificial and spiking neural networks can provide insights into potential approaches for addressing neurological disorders such as schizophrenia, autism, or Alzheimer's disease. One approach that has shown promise is deep brain stimulation (DBS), which involves implanting electrodes in specific areas of the brain and delivering electrical impulses to regulate neural activity. DBS has been used successfully to treat a variety of neurological and psychiatric disorders, including Parkinson's disease, depression, and obsessive-compulsive disorder Fins, Mayberg, Nuttin et al. (2011). Another approach is to develop drugs or therapies that target specific neurotransmitters or receptors in the brain. For example, medications that block the activity of the neurotransmitter dopamine have been used to treat symptoms of schizophrenia, while drugs that increase the availability of acetylcholine have been used to improve memory and cognitive function in Alzheimer's patients Kapur and Mamo (2003); Jefferson (2003). In addition, recent research has shown that deep learning algorithms, which are used to train artificial neural networks, can be used to analyze brain imaging data and identify patterns associated with neurological disorders. This approach could potentially lead to new diagnostic tools and personalized treatments for individuals with neurological disorders Orru, Pettersson-Yeo, Marquand, Sartori and Mechelli (2012); Arbabshirani, Plis, Sui and Calhoun (2017).
### Adam optimizer
As noted in the previous section, while there are some parallels between the Adam optimizer and the way the human brain learns, it is important to note that the two systems differ greatly in their complexity and in the way they process information. Therefore, directly using the Adam optimizer to re-train human neurons for treating specific disorders is impractical. However, creating a new Adam-based reinforcement learning algorithm designed to mimic the way the brain responds to positive and negative feedback could help identify new patterns or relationships in complex data.
The Adam optimizer could be mathematically described by the following equations:
The moving average of the gradient:
\[m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t} \tag{1}\]
The moving average of the squared gradient:
\[v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2} \tag{2}\]
The bias correction for the moving average of the gradient:
\[\hat{m}_{t}=\frac{m_{t}}{1-\beta_{1}^{t}} \tag{3}\]
The bias correction for the moving average of the squared gradient:
\[\hat{v}_{t}=\frac{v_{t}}{1-\beta_{2}^{t}} \tag{4}\]
The update rule for the weights:
\[w_{t+1}=w_{t}-\frac{\alpha\hat{m}_{t}}{\sqrt{\hat{v}_{t}}+\epsilon} \tag{5}\]
where \(g_{t}\) is the gradient at time step \(t\); \(m_{t}\) and \(v_{t}\) are the moving averages of the gradient and the squared gradient, respectively; \(\beta_{1}\) and \(\beta_{2}\) are the decay rates for the moving averages, typically set to 0.9 and 0.999, respectively; \(\alpha\) is the learning rate; \(\epsilon\) is a small constant to avoid division by zero; and \(w_{t}\) denotes the weights at time step \(t\), with \(w_{t+1}\) the updated weights.
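To make the update concrete, the following minimal NumPy sketch applies Equations 1-5 to a weight vector; the gradient `g` is assumed to be supplied by some external loss computation, and the default hyperparameter values follow the ones listed above.

```python
import numpy as np

def adam_step(w, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-indexed time step."""
    m = beta1 * m + (1 - beta1) * g                 # Eq. (1): moving average of the gradient
    v = beta2 * v + (1 - beta2) * g ** 2            # Eq. (2): moving average of the squared gradient
    m_hat = m / (1 - beta1 ** t)                    # Eq. (3): bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                    # Eq. (4): bias-corrected second moment
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)  # Eq. (5): weight update
    return w, m, v
```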
\begin{table}
\begin{tabular}{|c|l|l|p{7cm}|} \hline Scenario & Training Algorithm & Activation Function & Impact of weight loss \\ \hline
1 & ANN with Backpropagation & Sigmoid & Minor impact on behavior if the lost weights are minor and training is still in its initial phases; significant impact if the lost weights are significant or training is in later stages. \\ \hline
2 & ANN with Backpropagation & ReLU & Severe impact on behavior and its neighbors; may result in vanishing gradients and make training difficult or impossible. \\ \hline
3 & ANN with Backpropagation & LeakyReLU & Moderate impact on behavior and its neighbors; compensation by neighboring neurons to some extent, but the degree of compensation will depend on the specific implementation. \\ \hline
4 & ANN with Adam optimizer & Sigmoid & Minor impact on behavior and its neighbors. Adapts to the change and updates weights. The output of a neuron is continuous. Similarities to the human brain: adaptive learning rates, momentum. \\ \hline
5 & SNN with Backpropagation & Sigmoid & Complex impact on behavior and its neighbors. Reduction in output firing rate. The output of a neuron is binary (spike or no spike). \\ \hline
6 & SNN with SUR algorithm & SReLU & Complex impact on behavior and its neighbors. Reduction in output firing rate. The output of a neuron is binary (spike or no spike). Similarities to the human brain: synaptic plasticity. \\ \hline \end{tabular}
\end{table}
Table 2: A table summarizing all conducted experiments.
Even though the Adam optimizer is not a comprehensive model of the biological processes that occur in the brain, the above equations can be conceptually analogized with the way the human brain learns, as follows (Kingma and Ba, 2014; Lillicrap, Cownden, Tweed and Akerman, 2016):
* The moving average of the gradient (\(m_{t}\)) can be seen as analogous to the synaptic strength or connection weight between neurons in the brain. Just as the synaptic strength changes based on feedback from the environment, the moving average of the gradient changes based on feedback from the loss function.
* The moving average of the squared gradient (\(v_{t}\)) can be seen as analogous to the square of the synaptic strength or connection weight. It provides a measure of the variability or uncertainty in the synaptic strength.
* The bias correction terms (\(\hat{m}_{t}\) and \(\hat{v}_{t}\)) adjust the moving averages to account for the initial bias at the beginning of training. Similarly, the brain also adjusts the synaptic strengths to account for any initial biases.
* The update rule for the weights (\(w_{t}\)) adjusts the weights based on the moving averages of the gradient and the squared gradient. This is similar to how the synaptic strengths are adjusted based on feedback from the environment in the brain.
Here is an example that illustrates the analogy between the Adam optimizer and the way the human brain learns. Suppose we have a neural network with two inputs, one hidden layer with two units, and one output (Figure 3), and we want to train it to predict the output given the inputs. Let us assume that the inputs are scalar values \(x_{1}\) and \(x_{2}\), the output is a scalar value \(y\), and the network has two hidden units with ReLU activation functions. We can write the network as:
\[y=f(w_{3}\cdot\max(0,w_{1}x_{1}+w_{2}x_{2})+b) \tag{6}\]
where \(f\) is the output activation function, \(w_{1}\) and \(w_{2}\) are the weights connecting the input to the hidden units, \(w_{3}\) is the weight connecting the hidden units to the output, and \(b\) is the bias term. The \(\max(0,x)\) function applies the ReLU activation function to the sum of the weighted inputs to the hidden units.
To train this network, we need to define a loss function that measures the difference between the predicted output and the true output for a given input. Let's use the mean squared error (MSE) as the loss function here:
\[L=\frac{1}{2}(y_{true}-y_{pred})^{2} \tag{7}\]
where \(y_{true}\) is the true output and \(y_{pred}\) is the predicted output.
To compute the moving average of the gradient, the squared gradient, the bias-corrected estimates of the moving averages, and the update to the weights according to Equations 1-5, respectively, we first need to calculate the gradient of the loss with respect to the weights:
\[g_{t}=\nabla_{w}L(w_{t}) \tag{8}\]
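Putting Equations 6-8 together, the sketch below trains the toy network with the `adam_step` function from the earlier sketch. For brevity it takes \(f\) to be the identity and approximates the gradient by central finite differences; both are simplifying assumptions for illustration rather than choices made in the text.

```python
import numpy as np

def predict(w, x1, x2):
    # Eq. (6) with f taken as the identity: y = w3 * max(0, w1*x1 + w2*x2) + b
    w1, w2, w3, b = w
    return w3 * max(0.0, w1 * x1 + w2 * x2) + b

def loss(w, x1, x2, y_true):
    # Eq. (7): squared error for a single sample
    return 0.5 * (y_true - predict(w, x1, x2)) ** 2

def grad(w, x1, x2, y_true, h=1e-6):
    # Eq. (8), approximated by central finite differences
    g = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += h
        wm[i] -= h
        g[i] = (loss(wp, x1, x2, y_true) - loss(wm, x1, x2, y_true)) / (2 * h)
    return g

w = np.array([0.5, -0.3, 0.8, 0.1])   # w1, w2, w3, b (illustrative starting values)
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 101):               # 100 Adam steps on a single sample
    g = grad(w, x1=1.0, x2=2.0, y_true=1.5)
    w, m, v = adam_step(w, g, m, v, t)
```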
The weight update rule has some similarities with the way the human brain adjusts the strength of connections between neurons during learning. In the brain, synaptic plasticity, which is the ability of synapses to change in strength, is thought to be a key mechanism underlying learning and memory. One form of synaptic plasticity is long-term potentiation (LTP), a process by which the strength of a synapse is increased following repeated activation of the presynaptic neuron. LTP is thought to involve the activation of certain signaling pathways, such as the NMDA receptor pathway, and the subsequent strengthening of the synapse by the insertion of more receptors or the growth of new dendritic spines.
The NMDA receptor pathway refers to a specific signaling pathway in the brain that involves the N-methyl-D-aspartate (NMDA) receptor, a type of glutamate receptor that is involved in synaptic plasticity and learning and memory processes. When glutamate, a neurotransmitter, binds to the NMDA receptor, it triggers a cascade of events that lead to changes in the strength and structure of the synapse, the junction between two neurons. This process is known as synaptic plasticity and is thought to underlie learning and memory formation. The NMDA receptor pathway involves several key signaling molecules, including calcium ions (Ca2+), protein kinases, and transcription factors, which work together to initiate and maintain changes in synaptic strength. Dysregulation of the NMDA receptor pathway has been implicated in several neurological and psychiatric disorders, including Alzheimer's disease, schizophrenia, and depression. In this context, the Adam optimizer can be seen as a computational model of this process, where the weights of the neural network are the synapses, the gradient of the loss function is the signal that activates the presynaptic neuron, and the update rule is the process by which the strength of the synapse is increased or decreased. Just as in the brain, the Adam optimizer uses a form of "memory" to keep track of the history of the gradients and adjust the update rule accordingly. This allows the optimizer to adapt to the structure of the loss function and converge to a good solution quickly.
Figure 3: A neural network with two inputs, one hidden layer with two units, and one output. The output of the yellow neuron is determined by equation 6.
It is important to reiterate here that the conceptual analogy drawn between the Adam optimizer and the learning process of the human brain should not be interpreted as a direct comparison between the mathematical equations used in the optimizer and the biological mechanisms that take place in the brain. Rather, this analogy serves as a helpful tool to comprehend the underlying principles of the optimization algorithm. By emphasizing the significance of balancing exploration and exploitation during the learning process, this analogy sheds light on how to achieve efficient and effective learning.
### Human brain training; practical and ethical considerations
There is growing evidence that using technology-based interventions, such as virtual reality and computerized cognitive training, can improve outcomes for individuals with neurological disorders. For example, studies have shown that virtual reality-based interventions can improve motor function and reduce pain in individuals with Parkinson's disease and stroke Liao, Wu and Hsieh (2017); Pompeu, Arduini, Botelho, Fonseca and Pompeu (2014). Additionally, computerized cognitive training has been found to improve cognitive function in individuals with traumatic brain injury and multiple sclerosis Goverover, Chiaravalloti, O'Brien, DeLuca and Ehrlich-Jones (2018); Akerlund, Esbjornsson and Sunnerhagen (2013). Furthermore, there is evidence that social support and social interactions can improve outcomes for individuals with neurological disorders. For example, studies have found that social support can improve quality of life and reduce depression in individuals with multiple sclerosis Levin, Hadgkiss, Weiland and Jelinek (2017); Siebert, Siebert and Rees (2014). In this context, incorporating visual and acoustic data from past social and familial experiences into training programs that utilize technology-based interventions (such as reinforcement learning or unsupervised learning techniques) and social support could have potential benefits for individuals with neurological disorders.
However, it is important to note that implementing such an approach would require careful practical and ethical consideration. One potential issue with using past experiences to train machine learning models is that it may be difficult to ensure that the data is truly representative of a person's experiences. Memories can be unreliable and subjective, and people may have different interpretations of events that occurred in the past. Therefore, it may be challenging to collect and interpret data in a way that accurately reflects a person's experiences. Additionally, using such data could raise concerns about privacy and informed consent. Furthermore, even if the data is accurately collected and the person's privacy and autonomy are respected, there is still the question of whether machine learning models can truly capture the complexity of human mental disorders. While machine learning models can be powerful tools for analyzing complex data, they are not capable of replicating the full complexity of the human brain and its functions.
## 5 Conclusion
This article emphasizes the significance of bridging the gap between biological neuroscience and artificial neural networks in order to enhance our comprehension and treatment of neurological disorders. It begins by investigating the effect of a non-functioning neuron on the performance of artificial neural networks (ANNs), conducting multiple tests using various training algorithms and activation functions to determine the specific impact of the training process on neighboring neurons and the overall performance of the ANN in such scenarios. The study's results have the potential to improve the functionality of inhibited or damaged neurons in individuals with behavioral disorders in the biological field by suggesting the implementation of multiple training phases that incorporate data from past social and familial experiences, both visual and acoustic, to attain this goal. The article also investigates the conceptual analogy between the Adam optimizer and the learning process of the brain, in spite of the significant difference in terms of complexity and information processing. The development and implementation of these approaches must take into account practical and ethical considerations such as treatment safety and efficacy, access to care, and their impact on individuals' autonomy and privacy.
## Acknowledgement
In order to enhance the depth of this research and advance its findings, the author of this paper is actively seeking a collaborative partner in the field of neuroscience who is interested in working to develop an algorithmic or mathematical model that effectively describes the chemical activation function observed in biological neurons.
|
2307.05318 | Predicting small molecules solubilities on endpoint devices using deep
ensemble neural networks | Aqueous solubility is a valuable yet challenging property to predict.
Computing solubility using first-principles methods requires accounting for the
competing effects of entropy and enthalpy, resulting in long computations for
relatively poor accuracy. Data-driven approaches, such as deep learning, offer
improved accuracy and computational efficiency but typically lack uncertainty
quantification. Additionally, ease of use remains a concern for any
computational technique, resulting in the sustained popularity of group-based
contribution methods. In this work, we addressed these problems with a deep
learning model with predictive uncertainty that runs on a static website
(without a server). This approach moves computing needs onto the website
visitor without requiring installation, removing the need to pay for and
maintain servers. Our model achieves satisfactory results in solubility
prediction. Furthermore, we demonstrate how to create molecular property
prediction models that balance uncertainty and ease of use. The code is
available at https://github.com/ur-whitelab/mol.dev, and the model is usable at
https://mol.dev. | Mayk Caldas Ramos, Andrew D. White | 2023-07-11T15:01:48Z | http://arxiv.org/abs/2307.05318v4 | # Predicting small molecules solubilities on endpoint devices using deep ensemble neural networks
###### Abstract
Aqueous solubility is a valuable yet challenging property to predict. Computing solubility using first-principles methods requires accounting for the competing effects of entropy and enthalpy, resulting in long computations for relatively poor accuracy. Data-driven approaches, such as deep learning, offer improved accuracy and computational efficiency but typically lack uncertainty quantification. Additionally, ease of use remains a concern for any computational technique, resulting in the sustained popularity of group-based contribution methods. In this work, we addressed these problems with a deep learning model with predictive uncertainty that runs on a static website (without a server). This approach moves computing needs onto the website visitor without requiring installation, removing the need to pay for and maintain servers. Our model achieves satisfactory results in solubility prediction. Furthermore, we demonstrate how to create molecular property prediction models that balance uncertainty and ease of use. The code is available at [https://github.com/ur-whitelab/mol.dev](https://github.com/ur-whitelab/mol.dev), and the model is usable at [https://mol.dev](https://mol.dev).
Solubility, Small Molecule, Deep Ensemble, Recurrent Neural Network
## 1 Introduction
Aqueous solubility measures the maximum quantity of matter that can be dissolved in a given volume of water. It depends on several conditions, such as temperature, pressure, pH, and the physicochemical properties of the compound being solvated.[1] The solubility of molecules is essential in many chemistry-related fields, including drug development[2, 3, 4, 5], protein design[6], chemical[7, 8] and separation[9] processes. In drug development, for instance, compounds with biological activity may not have enough bioavailability due to inadequate aqueous solubility.
Solubility prediction is critical and has driven the development of several methods, including first principles[10, 11], semi-empirical equations[12, 13, 14], molecular dynamics (MD) methods[15, 16, 17, 18], quantum computations[19], and quantitative structure-property relationship (QSPR)[20, 21, 22, 23] methods. Despite significant progress, the development of accurate and reliable models for solubility remains a major concern.[24]
To address the persistent issues of systematic bias and non-reproducibility in aqueous solubility datasets, Llinas et al.[25, 26] introduced two solubility challenges featuring consistent data. The first challenge evaluated participants based on the root mean square error (RMSE) obtained and the percentage of correct values within a \(\pm 0.5\) logS error range. Unfortunately, the authors did not report the methods used by the participants.[27] In contrast, the second challenge showed that, although participants were free to choose any approach, all submitted responses used an implementation of QSPR or machine learning (ML).[28] Neural networks (NN), multiple linear regression (MLR), and decision trees were the most commonly applied methods in these challenges. Tree-based and MLR models presented the best results. Surprisingly, new state-of-the-art methods did not yield a significant improvement in predictions compared to the results of the first challenge.[27] The challenges' findings showed that data quality is more critical for
accurate predictions than model selection.[28] The results from the solubility challenges will be discussed in detail in Section 5.
Ideally, solubility models should be accurate and accessible, having clear or minimal instructions on how to use a model. Thus, a common idea is to use web servers to provide easier public access. However, maintaining a web server requires an ongoing investment of time and money. There are examples of servers that eventually disappear, even with institutional or government support[29]. For example, eight of the 89 web server tools from the 2020 _Nucleic Acids Research_ special web server issue are already offline[30]1 after just a few years. Additionally, some tasks may require a long computation time[31]. For instance, tools like RoseTTAFold[32] and ATB[33] can take hours to days to complete a job, resulting in long queues and waiting times.
Footnote 1: Tested December 30, 2022
An alternative approach is to perform the computation directly on the user's device, removing the need for the server's maintenance and cost. In this approach, the website is simply a static file that can be hosted on sites like GitHub and be completely archived in the Internet Archive2. We explored this approach in Ansari and White [34] for bioinformatics. The main drawback is that the application runs directly from the browser on a user's device (a personal computer or even a cellphone). This would be infeasible for first-principle methods, like those that rely on molecular dynamics. Nevertheless, it is feasible for deep learning models, especially with the increasing integration of deep learning chips and compiler optimizations.
Footnote 2: [https://archive.org/](https://archive.org/)
In this work, we developed a front-end application using a JavaScript (JS) implementation of the TensorFlow framework[35]. Our application can be used to predict the solubility of small molecules with uncertainty. To calibrate the confidence of the predictions, our model implements a deep ensemble approach[36], which allows model uncertainty to be reported alongside each prediction. Our model runs locally on the user's device and can be accessed at [https://mol.dev/](https://mol.dev/).
## 2 Related works
Physics-based models have been developed in the past for aqueous solubility prediction. Those models may become complex, limiting their use to advanced users only.[24] Despite being derived from first principles, physics-based models are no more accurate than empirical methods.[37] Data-driven models can outperform physics-based models with the benefit of being less time-consuming. Historically, common approaches computed aqueous solubilities based on QSPR[21, 22, 23] and MLR[38, 39] methods.[24]
Huuskonen [38] used a dataset consisting of 1297 organic molecules to develop two models based on MLR and NN. The author reported a good correlation between predicted properties and labels for training (\(r^{2}=0.94\)) and test (\(r^{2}=0.92\)) data. Delaney [39] used another MLR-based approach called Estimated SOLubility (ESOL), fitted on a dataset of 2874 small organic molecules. The final model presented an \(r^{2}=0.55\) and an average absolute error (AAE) of 0.83. GPSol[40], a Gaussian Process-based model, was trained to predict the aqueous solubilities of electrolytes in addition to non-electrolyte molecules. It used 1664 descriptors computed by the Dragon software as input features to train the model on a dataset of \(\sim\)4000 molecules. Depending on the dataset, it presented an RMSE of \(0.77\) or \(0.61\). Lusci _et al.[41]_ trained several undirected graph recurrent neural network (UG-RNN) architectures using different sets of node feature vectors. The authors report RMSEs from 0.90 to 1.41 for different models on the first Solubility Challenge dataset[25]. McDonagh _et al.[37]_ calculated solubilization free energies using first-principles theoretical calculations and cheminformatic methods. Their results showed that cheminformatic methods achieve better accuracy than theoretical methods. The authors also point to the promising results of using Random Forest models (RMSE of 0.93 on the first Llinas dataset[25]).
Those models use descriptors to represent molecules. Descriptors are a straightforward way to convey physical-chemical information in the input. However, descriptor selection is not an easy task. It requires a good understanding of the problem setting, usually held only by specialists. Some automated methods have been proposed to select descriptors[42]. Nevertheless, computing several descriptors can increase the time needed for inference. Additionally, these descriptors can be valid only for a specific region of the chemical space[43].
More recently, transformers models have been used to compute the solubility of small molecules. Francoeur _et al.[44]_ developed the SolTranNet, a transformers model trained on AqSolDB[1] solubility data. Notably, this architecture results in an RMSE of only \(0.278\) when trained and evaluated on the original ESOL[39] dataset using random split. Nevertheless, it shows an RMSE of \(2.99\) when trained using the AqSolDB[1] and evaluated using ESOL. It suggests that the molecules present in ESOL may have low variability, meaning that samples in the test set are similar to samples in the training set. Hence, models trained on the ESOL training set performed excellently when evaluated on the ESOL test set. Regression Transformer (RT)[45] is a multipurpose transformer model trained using an infilling mask
approach[46]. Their results are comparable to those achieved by the SMILES-BERT[47] and Mol-BERT[48] models: the RMSE values for SMILES-BERT and Mol-BERT are \(0.47\) and \(0.53\), respectively, whereas RT presented an RMSE of \(0.73\). All three models were fine-tuned using ESOL. MolFormer[49] is an encoder-only transformer model with a modified embedding. It was pre-trained on a large corpus and fine-tuned for numerous downstream tasks. Specifically for the solubility regression fine-tuning, the authors reported an RMSE of \(0.278\) on the ESOL dataset. Noticeably, the same value was reported by Francoeur _et al.[44]_ when they trained their model on ESOL.
Comparing the performance of different models is a complex task, as performance metrics cannot be directly compared across models evaluated on distinct datasets. To address this issue, Panapitiya et al. [50] curated a large and diverse dataset to train models with various architectures and molecular representations. They also compared the performance of these models on datasets from the literature[25, 26, 38, 39, 51, 52, 53, 54, 55, 56, 57, 58]. Although their models achieved an RMSE of \(\sim 1.1\) on their test set, using descriptors as molecular representations resulted in RMSE values ranging from \(0.55\) to \(\sim 1.35\) when applied to other datasets from the literature. These findings suggest that some datasets used to train models in the literature may be inherently easier to predict, leading to smaller RMSE values. According to their study, the Solubility Challenge datasets by Llinas _et al.[25, 26]_ were found to be particularly challenging due to their more significant reproducibility error.
## 3 Methods
### Dataset
The data used for training the models were obtained from AqSolDB[1]. This database combined and curated data from 9 different aqueous solubility datasets. The main reason for using a large, curated database is to avoid problems with the generalizability of the model[59] and with the fidelity of the data[60]. AqSolDB consists of aqueous solubility (LogS) values for 9982 unique molecules, extended with 17 topological and physicochemical 2D descriptors calculated by RDKit[61].
We augmented AqSolDB to 96,625 molecules. Each entry of AqSolDB was used to generate at most ten new unique randomized SMILES strings. Training the model on multiple representations of the same molecule improves its ability to learn the chemical space constraints of the training set, as demonstrated in previous studies [62, 63]. Duplicates were removed.
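A minimal sketch of this kind of augmentation using RDKit is shown below; the exact procedure used by the authors is not specified beyond the description above, so the oversampling loop and the `dataset` list of (SMILES, LogS) pairs are illustrative assumptions.

```python
from rdkit import Chem

def randomized_smiles(smiles, n_max=10):
    """Return the original SMILES plus up to n_max unique randomized variants."""
    mol = Chem.MolFromSmiles(smiles)
    variants = {smiles}                 # keep the original representation
    for _ in range(10 * n_max):         # oversample; the set discards duplicates
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) > n_max:       # original + n_max new strings
            break
    return list(variants)

# dataset is assumed to be a list of (smiles, logS) pairs
augmented = [(s, logs) for smi, logs in dataset for s in randomized_smiles(smi)]
```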
After shuffling, the augmented dataset was split 80%/20% into training and test datasets, respectively. The curated datasets for the solubility challenges[25, 28] were used as withheld validation data to evaluate the model's ability to predict solubility for unseen compounds. To refer to the validation datasets, we label the first solubility challenge dataset "solubility challenge 1" and the two sets from the second solubility challenge "solubility challenge 2_1" and "solubility challenge 2_2", respectively. Molecules in these three datasets were not found in the training and test datasets.
### Model architecture
Our model uses a deep ensemble approach as described by Lakshminarayanan et al. [36]. This technique was selected due to its ability to estimate prediction uncertainty, thus enhancing the predictive capability of our model. The uncertainty of a model can be divided into two sources: aleatoric uncertainty (AU) and epistemic uncertainty (EU).[64, 65] These uncertainties quantify the intrinsic uncertainty inherent in data observations and the disagreement among model estimations, respectively.[66]
Given a model that outputs two values - \(\hat{\mu}_{m}\) and \(\hat{\sigma}_{m}\) - that characterize a normal distribution \(\mathcal{N}(\hat{\mu}_{m},\hat{\sigma}_{m})\), a deep ensemble creates an ensemble of \(m\) models that can estimate prediction uncertainty. For a given data point \(\vec{x}\), the estimates for the ensemble predictions are computed as follows:
\[\hat{\mu}(\vec{x})=\frac{1}{N}\sum_{m}\hat{\mu}_{m}(\vec{x}) \tag{1}\]
\[\hat{\sigma}_{ale}^{2}(\vec{x})=\frac{1}{N}\sum_{m}\hat{\sigma}_{m}^{2}(\vec{ x})\,\ \ \hat{\sigma}_{epi}^{2}(\vec{x})=\frac{1}{N}\sum_{m}\left(\hat{\mu}(\vec{x})-\hat{ \mu}_{m}(\vec{x})\right)^{2} \tag{2}\]
where \(\hat{\sigma}_{ale}^{2}\) is AU, \(\hat{\sigma}_{epi}^{2}\) is EU, N is the ensemble size, and m indexes the models in the ensemble.
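The aggregation in Equations 1 and 2 reduces to a few array operations. The sketch below assumes each ensemble member exposes the pair \((\hat{\mu}_{m},\hat{\sigma}_{m})\) through a `predict` method returning both arrays; that interface is an illustrative assumption rather than the authors' actual API.

```python
import numpy as np

def ensemble_predict(members, x):
    """Aggregate per-member (mu_m, sigma_m) into the ensemble mean, AU, and EU."""
    mus = np.stack([m.predict(x)[0] for m in members])     # shape (N, ...)
    sigmas = np.stack([m.predict(x)[1] for m in members])  # shape (N, ...)
    mu = mus.mean(axis=0)                        # Eq. (1): ensemble mean
    aleatoric = (sigmas ** 2).mean(axis=0)       # Eq. (2), left: mean predicted variance
    epistemic = ((mu - mus) ** 2).mean(axis=0)   # Eq. (2), right: member disagreement
    return mu, aleatoric, epistemic
```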
We used a deep neural network (DNN) implemented using Keras[67] and TensorFlow[68] to build the deep ensemble. Our DNN model uses Self-referencing embedded strings (SELFIES)[69] tokens as input. Simplified molecular-input
line-entry system (SMILES)[70] or SELFIES[69] molecule representations are converted to tokens based on a pre-defined vocabulary generated from our training data, resulting in 273 available tokens. Figure 1 illustrates the model architecture. The network can be divided into three sections: (\(i\)) Embedding, (\(ii\)) bi-RNN, and (\(iii\)) fully connected NN.
The embedding layer converts a list of discrete tokens into a fixed-length vector space. Working on a continuous vector space has two main advantages: it uses a more compact representation, and semantically similar symbols can be described closely in vector space. Our embedding layer has an input dimension of 273 (vocabulary size) and an output dimension of 64.
Following the embedding layer, the data are fed into the bidirectional Recurrent Neural Network (RNN) layers. We used two RNN layers, each containing 64 units. The effects of using Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM)[71] layers as the RNN layers were investigated (refer to Section 4.1). The use of bi-RNNs was motivated by our previous work[34], in which LSTMs improved the model's performance when predicting peptide properties from their sequences. More details regarding RNN, LSTM, and GRU layers can be found in Ref. 72.
The output from the bi-LSTM stack undergoes normalization via Layer Normalization[73]. There is no agreement on why Layer Normalization improves the model's performance.[74, 75, 76, 77] The absence of a comprehensive theoretical understanding of normalization effects hinders the evolution of novel regularization schemes.[78] Despite the limited understanding, Layer Normalization is employed due to its demonstrated effectiveness.[77]
After normalization, data is processed through dense layers containing 32 and 16 units, respectively. The 16-unit layer's output then feeds two parallel 1-unit layers: one uses a linear activation and the other a softplus activation, producing \(\hat{\mu}_{m}\) and \(\hat{\sigma}_{m}\), respectively.
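A Keras sketch of one ensemble member, following the description above, might look as follows; the ReLU activations on the intermediate dense layers and the exact dropout placement are assumptions, since the text specifies only the layer sizes, the dropout rate, and the two output heads.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_member(vocab_size=273, embed_dim=64, rnn_units=64, rate=0.35):
    tokens = layers.Input(shape=(None,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(tokens)
    x = layers.Dropout(rate)(x)
    # Two bidirectional LSTM layers; the first returns the full sequence
    x = layers.Bidirectional(layers.LSTM(rnn_units, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(rnn_units))(x)
    x = layers.LayerNormalization()(x)
    x = layers.Dropout(rate)(layers.Dense(32, activation="relu")(x))
    h = layers.Dropout(rate)(layers.Dense(16, activation="relu")(x))
    mu = layers.Dense(1, activation="linear")(h)       # mean head
    sigma = layers.Dense(1, activation="softplus")(h)  # positive std-dev head
    return tf.keras.Model(tokens, [mu, sigma])
```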
Negative log-likelihood loss \(l\) was used to train the model. It is derived (up to an additive constant) from the negative log-probability of observing the label \(y\) given the input \(\vec{x}\):
\[l(\vec{x},y)=\frac{log(\hat{\sigma}_{m}^{2}(\vec{x}))}{2}+\frac{ \left(y-\hat{\mu}_{m}(\vec{x})\right)^{2}}{2\hat{\sigma}_{m}^{2}(\vec{x})} \tag{3}\]
Figure 1: Scheme of the deep learning DNN. The molecule is input using the SMILES or SELFIES representation. This representation is converted to a tokenized input based on a vocabulary obtained using the training dataset. A set of models represents the deep ensemble model. Each model consists of an embed layer, two bidirectional RNN (bi-RNN) layers, a normalization layer, and three fully connected layers being down-sized in three steps. Dropout layers are present after the embed and after each fully connected layer during training, but they were not represented in this scheme. Predictions of the models in the ensemble are then aggregated.
During the training phase, dropout layers with a 0.35 dropout rate were incorporated after the embedding and each dense layer to mitigate over-fitting.[79] Models were trained using the Adam[80] optimizer with a fixed learning rate of 0.0001 and default values for \(\beta_{1}\) and \(\beta_{2}\) (0.9 and 0.999, respectively).
Our model employs adversarial training, following the approach proposed by Lakshminarayanan et al. [36] to improve the robustness of our model predictions. Because the input for our model is a discrete sequence, we generate adversarial examples by modifying the embedded representation of the input data. Each iteration in the training phase consists of first computing the loss using Equation 3 and a second step with a new input \(\vec{x}^{\prime}\) to smooth the model's prediction:
\[\vec{x}^{\prime}=\vec{x}+\epsilon\text{sign}(\nabla_{x}l(\vec{x},y)) \tag{4}\]
where \(\epsilon\) is the strength of the adversarial perturbation.
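In code, one training iteration of this kind can be sketched as below in TensorFlow. Here `model` maps an embedded input to \((\hat{\mu}_{m},\hat{\sigma}_{m})\); the embedding is assumed to be applied beforehand so that the perturbation of Equation 4 can act on a continuous representation, and the equal weighting of the clean and adversarial losses is an assumption.

```python
import tensorflow as tf

def nll(y, mu, sigma):
    # Equation (3): negative log-likelihood of y under N(mu, sigma^2)
    var = tf.square(sigma)
    return tf.reduce_mean(0.5 * tf.math.log(var) + tf.square(y - mu) / (2.0 * var))

def train_step(model, optimizer, x, y, eps=0.01):
    # Step 1: loss on the clean embedded input and its gradient w.r.t. x
    with tf.GradientTape() as tape:
        tape.watch(x)
        mu, sigma = model(x)
        clean_loss = nll(y, mu, sigma)
    x_adv = x + eps * tf.sign(tape.gradient(clean_loss, x))  # Equation (4)
    # Step 2: minimize the loss on both the clean and the adversarial input
    with tf.GradientTape() as tape:
        mu, sigma = model(x)
        mu_a, sigma_a = model(x_adv)
        total = nll(y, mu, sigma) + nll(y, mu_a, sigma_a)
    grads = tape.gradient(total, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return total
```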
Details of the model performance, limitations, training data, ethical considerations, and caveats are available as model cards[81] at [http://mol.dev/](http://mol.dev/).
## 4 Results
In order to evaluate the performance of our model using deep ensembles, two baseline models were created: (\(i\)) an XGBoost Random Forest (RF) model using the 17 descriptors available on AqSolDB plus 1809 molecular descriptors calculated by PaDELPy, a Python wrapper for the PaDEL-Descriptor[82] software, and (\(ii\)) a model with the same architecture used in our deep ensemble, trained with RMSE as the loss function and no ensemble (referred to as DNN). In addition, we evaluate the effects of (\(i\)) the bi-RNN layer (either GRU or LSTM), (\(ii\)) using an augmented dataset to train, (\(iii\)) the adversarial training, and (\(iv\)) the ensemble size on the model's performance. Table 1 shows the performance of each one of our trained models.
### Gated layer
The most common RNN layers are the GRU and the LSTM. GRU layers use two gates, reset and update, to control the cell's internal state. On the other hand, LSTM layers use three gates: forget, input, and output, with the same objective. Available studies compare GRU and LSTM performances in RNNs for different applications, for instance: forecasting[83], cryptocurrency[84, 85], wind speed[86, 87], condition of a paper press[88], motive classification in
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} & \multicolumn{3}{c|}{Solubility Challenge 1} & \multicolumn{3}{c|}{Solubility Challenge 2\_1} & \multicolumn{3}{c}{Solubility Challenge 2\_2} \\ Model & RMSE & MAE & r & RMSE & MAE & r & RMSE & MAE & r \\ \hline RF & 1.121 & 0.914 & 0.547 & **0.950** & **0.727** & **0.725** & **1.205** & **1.002** & **0.840** \\ DNN & 1.540 & 1.214 & 0.433 & 1.315 & 1.035 & 0.651 & 1.879 & 1.381 & 0.736 \\ DNN\({}_{Aug}\) & 1.261 & 1.007 & 0.453 & 1.371 & 1.085 & 0.453 & 2.189 & 1.710 & 0.386 \\ kde4\({}^{GRU}\) & 1.610 & 1.145 & 0.462 & 1.413 & 1.114 & 0.604 & 1.488 & 1.220 & 0.704 \\ kde4\({}^{LSTM}\) & 1.554 & 1.191 & 0.507 & 1.469 & 1.188 & 0.650 & 1.523 & 1.161 & 0.706 \\ kde4\({}^{GRU}\)-NoAdv & 1.729 & 1.348 & 0.525 & 1.483 & 1.235 & 0.622 & 1.954 & 1.599 & 0.517 \\ kde4\({}^{LSTM}\)-NoAdv & 1.425 & 1.114 & 0.505 & 1.258 & 0.972 & 0.610 & 1.719 & 1.439 & 0.609 \\ kde4\({}^{GRU}_{Aug}\) & 1.329 & 1.148 & 0.426 & 1.354 & 1.157 & 0.674 & 1.626 & 1.340 & 0.623 \\ kde4\({}^{LSTM}_{Aug}\) & 1.273 & 0.984 & 0.473 & 1.137 & 0.932 & 0.639 & 1.511 & 1.128 & 0.717 \\ kde8\({}^{LSTM}_{Aug}\) & 1.247 & 0.984 & 0.542 & 1.044 & 0.846 & 0.701 & 1.418 & 1.118 & 0.729 \\ kde10\({}^{LSTM}_{Aug}\)-NoAdv & 1.689 & 1.437 & 0.471 & 1.451 & 1.238 & 0.676 & 1.599 & 1.405 & 0.699 \\ kde10\({}^{LSTM}_{Aug}\) & **1.095** & **0.843** & **0.559** & 0.983 & 0.793 & 0.724 & 1.263 & 1.051 & 0.792 \\ \end{tabular}
\end{table}
Table 1: Summary of the metrics for each trained model. We used the Root Mean Squared Error (RMSE(\(\downarrow\))), Mean Absolute Error (MAE(\(\downarrow\))), and Pearson correlation coefficient (r(\(\uparrow\))) to evaluate our models. The arrows indicate the direction of improvement. Deep ensemble models are referred to as “kde\(N\)”, where \(N\) is the ensemble size. Baseline models using random forest (RF) and the DNN model employed for the deep ensemble (DNN) are also displayed. The DNN model was trained as described in Section 3. Models trained with data augmentation carry the subscript \(Aug\). A superscript indicates whether the bidirectional layer implements a \(GRU\) or an \(LSTM\) layer. In addition, models trained without adversarial perturbation are flagged with “-NoAdv”. The columns show the results of each model evaluated on each solubility challenge dataset. 2\_1 represents the tight dataset (set-1), while 2\_2 represents the loose dataset (set-2), as described in the original paper (See Ref. 26). \(r\) stands for the Pearson correlation coefficient. The best-performing model in each dataset is displayed in bold.
thematic apperception tests[89] and music and raw speech[90]. Nevertheless, it is not clear which of those layers would perform better at a given task.
We trained models with four elements in the deep ensemble using GRU or LSTM. Metrics can be found in Table 1; for an explanation of the naming syntax used in this work, refer to Table 1 caption. Using LSTM resulted in a decrease in RMSE and MAE and an increase in the correlation coefficient, indicating better performance. For Solubility Challenges 1, 2_1, and 2_2, the kde4\({}^{GRU}_{Aug}\) model yielded RMSE values of 1.329, 1.354, and 1.626, respectively, while the kde4\({}^{LSTM}_{Aug}\) model achieved 1.049, 1.054, and 1.340, respectively. This trend was also observed for the models trained without data augmentation, but in a smaller proportion (See Table 1). Considering that LSTM performs better regarding this model and data, we will consider only bi-LSTM layers for further discussion. Those results are in accordance with our previous work[34] in which using LSTM helped improve the model's performance.
### Data augmentation
Our model is not intrinsically invariant with respect to the SELFIES representation input. For instance, both "C(C(C1C(=C(C(=O)O1)O)O)O)O" and "O=C1OC(C(O)CO)C(O)=C1O" are valid SMILES representations of ascorbic acid (See Figure 1) that will be encoded into different SELFIES tokens. Hence, the model should learn to be invariant to changes in the string representation during training. This can be achieved by augmenting the dataset with SMILES randomization and training the model using different representations with the same label. The model can then learn relations in the chemical space instead of correlating the label with a specific representation.[62] With this aim, we evaluated the effects of augmenting the dataset by generating new randomized SMILES representations for each sample.
Augmenting the dataset had a significant impact on the metrics. Improvements of \(\sim 0.5\) in RMSE were observed when evaluating on challenge datasets 1 and 2_1, and a gain of \(\sim 0.2\) on 2_2 (See Table 1). For the first two datasets, data augmentation improved every model used in this study. Surprisingly, however, it led to a degradation of the DNN model on the solubility challenge 2_2 dataset. This behavior was not further investigated.
### Adversarial training
Using adversarial training improved performance in Lakshminarayanan _et al.[36]_ studies. Hence, they suggested that it should be used in future applications of their deep learning algorithm. Thus, we tested the effects of adversarial perturbation on training models with ensemble sizes of 4 and 10.
Comparing kde4\({}^{LSTM}\)-NoAdv and kde4\({}^{LSTM}\), using adversarial training decreases model performance. It can be seen in Table 1 that using adversarial perturbation increased the RMSE from \(1.425\) to \(1.554\) and \(1.258\) to \(1.469\) in solubility challenges dataset 1 and 2_1, respectively. However, the RMSE decreased from \(1.719\) to \(1.523\) in dataset 2_2. Using adversarial perturbation affected our kde4\({}^{LSTM}\)'s performance by a change in RMSE of \(\pm 0.2\).
The inconsistent performance improvement observed when using adversarial training was further investigated with models in which the dataset was augmented. Due to the lack of multiple string representations in the training dataset, it is known that kde4\({}^{LSTM}\) may have generalization problems. A generalization issue could direct the adversarial perturbation in a non-physical direction because the model does not have complete knowledge about the chemical representation space. This hypothesis is reinforced when we compare kde10\({}^{LSTM}_{Aug}\)-NoAdv and kde10\({}^{LSTM}_{Aug}\). When using adversarial training on a model trained with an augmented dataset, the performance improvement is more evident (\(\sim 0.5\)) and consistent for all the test datasets.
### Deep ensemble size
To investigate the effects of increasing the ensemble size, we trained models with an ensemble of 4, 8, and 10 models. Given the previous results, these models used LSTM as the bi-RNN layer and were trained on the augmented dataset. Specifically for the solubility challenge 2_2, the most complex set to predict, these models presented an RMSE of \(1.511\), \(1.418\), and \(1.263\), respectively. Therefore, increasing the ensemble size consistently improved performance. We also observed this improvement on the other datasets (See Table 1).
Besides the immediate improvement in RMSE, increasing the ensemble size also improves the uncertainty estimates of the model. Figure 2 shows the density distribution of the aleatoric variance and the epistemic variance (related to AU and EU, respectively) for kde4\({}^{LSTM}_{Aug}\) (top six panels) and kde10\({}^{LSTM}_{Aug}\) (bottom six panels).
The increase in ensemble size led to a decrease in both uncertainties. AU distributions for the kde4\({}^{LSTM}_{Aug}\) are centered around 4 logS\({}^{2}\), displaying a long tail that extends to values as high as 20 logS\({}^{2}\) in the worst case (solubility challenge
2_2). A similar trend is observed in EU distributions. On the other hand, the kde10\({}^{LSTM}_{Aug}\) model results in narrower distributions. The mean of these distributions remains relatively unchanged, but a noticeable reduction in the extent of their tails can be observed. AU distribution ends in values around 10 logS\({}^{2}\).
## 5 Discussion
After extensively investigating the hyperparameter selection, we compared our model with available state-of-art models from the literature. Performance metrics on the solubility challenge datasets can be found in Table 2. Parity plots for our chosen models are presented in Figure 3.
Focusing on the solubility challenge 1 dataset[25], kde10\({}^{LSTM}_{Aug}\) is only \(\sim 0.2\) RMSE units worse than the best model available in the literature[41]. The RMSE of the participants of the challenge was not reported.[27] The primary metric used to evaluate models was the percentage of predictions within an error of 0.5 LogS units (called \(\pm 0.5\)log\(\%\)). Computing the same metric, kde10\({}^{LSTM}_{Aug}\) achieves a percentage of correct predictions of 44.4%. This result would place our model among the top 35% of participants. The participant with the best performance presented a \(\pm 0.5\)log\(\%\) of 60.7%.
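For reference, both evaluation metrics are simple to compute from paired predictions and labels; the following NumPy snippet is a plain restatement of their definitions.

```python
import numpy as np

def challenge_metrics(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    # +-0.5log%: share of predictions within 0.5 LogS units of the label
    within_half = 100.0 * np.mean(np.abs(y_true - y_pred) <= 0.5)
    return rmse, within_half
```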
The architecture of the models was not published in the findings of the first challenge.[27] Nevertheless, the findings for the second challenge[28] investigated the participants more thoroughly. Participants were asked to identify their models' architecture and the descriptors used. The challenge is divided into two datasets. Set-1 contains LogS values with an average interlaboratory reproducibility of 0.17 LogS. Our kde10\({}^{LSTM}_{Aug}\) achieves an RMSE of 0.983 and a \(\pm 0.5\)log\(\%\) of 40.0% in this dataset. Therefore, our model performs better than 26% of the published RMSE values and 50% of the \(\pm 0.5\)log\(\%\). In addition, the model with the best performance is an artificial neural network (ANN) that correctly predicted 61% (\(\pm 0.5\)log\(\%\)) of the molecules' LogS using a combination of molecule descriptors and fingerprints. The
Figure 2: Density distribution of the aleatoric (AU) and epistemic variances (EU) for the: (\(i\)) kde4\({}^{LSTM}_{Aug}\) (top six panels) and (\(ii\)) kde10\({}^{LSTM}_{Aug}\) (bottom six panels). Increasing ensemble size reduces the extent of the distribution’s tail, decreasing uncertainty about predictions. However, the ensemble size does not noticeably affect the distribution center.
second dataset (set-2) contains molecules whose solubility measurements are more challenging, with a reported average reproducibility error of 0.62 LogS. The kde10\({}^{LSTM}_{Aug}\) achieves an RMSE of 1.263 and a \(\pm 0.5\)log\(\%\) of 23.3%. It performs better than 82% of the candidates when considering the RMSE. Surprisingly, the \(\pm 0.5\)log\(\%\) does not follow this outstanding performance, exceeding only 32% of the participants. Regarding the literature, kde10\({}^{LSTM}_{Aug}\) has an RMSE only \(\sim 0.1\) higher than a GNN that used an extensive set of numeric and one-hot descriptors in its feature vector.[50] Our model performs better than a transformer model that uses SMILES strings and an adjacency matrix as inputs.[44] The performance of those models is available in Table 2.
Notably, all participants in the solubility challenge 2 submitted a kind of QSPR or descriptor-based ML model. Using descriptors provides an easy way to ensure model invariance concerning molecule representation and is more informative since they can be physical quantities. However, selecting appropriate descriptors is crucial for developing descriptor-based ML models. It often requires specialists with a strong intuition about the relevant physical and chemical properties for predicting the target quantity. Our approach, on the other hand, is based on extracting information from simple string representations, a more straightforward raw data. Furthermore, we could achieve state-of-the-art performance while balancing the model size and complexity and using a raw input (a simple string). This simplified usage enables running the model on devices with limited computing power.
Lastly, transformer models have been used to address the issue of accurately predicting the solubility of small compounds. The typical workflow for transformers involves pre-training the model using a large dataset and subsequently fine-tuning it for a specific downstream task using a smaller dataset. Most existing models were either pre-trained on the ESOL[39] dataset or pre-trained on a larger dataset and fine-tuned using ESOL. Hence, the generalizability of those models cannot be verified. Francoeur and Koes [44] considered two versions of their model, SolTransNet. The first version of SolTransNet was trained with the ESOL dataset using random splits. This approach achieved an RMSE of 0.278. Subsequently, the deployed version of SolTransNet was trained with AqSolDB[1]. When ESOL was used to evaluate their deployed version, the model presented an RMSE of 2.99. While our model achieved an RMSE of 1.316 on ESOL, outperforming the SolTransNet deployed version, it cannot be compared with other models trained on ESOL.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} & \multicolumn{2}{c|}{SolChal1} & \multicolumn{2}{c|}{SolChal2\_1} & \multicolumn{2}{c|}{SolChal2\_2} & \multicolumn{2}{c}{ESOL} \\ Model & RMSE & MAE & RMSE & MAE & RMSE & MAE & RMSE & MAE \\ \hline RF & 1.121 & 0.914 & **0.950** & **0.727** & **1.205** & **1.002** & & \\ DNN & 1.540 & 1.214 & 1.315 & 1.035 & 1.879 & 1.381 & & \\ DNN\({}_{Aug}\) & 1.261 & 1.007 & 1.371 & 1.085 & 2.189 & 1.710 & & \\ kde4\({}^{LSTM}_{Aug}\) & 1.273 & 0.984 & 1.137 & 0.932 & 1.511 & 1.128 & 1.397 & 1.131 \\ kde8\({}^{LSTM}_{Aug}\) & 1.247 & 0.984 & 1.044 & 0.846 & 1.418 & 1.118 & 1.676 & 1.339 \\ kde10\({}^{LSTM}_{Aug}\) & **1.095** & **0.843** & 0.983 & 0.793 & 1.263 & 1.051 & **1.316** & **1.089** \\ \hline Linear regression[39] & & & & & & & **0.75** & \\ UG-RNN[41] & 0.90 & **0.74** & & & & & & \\ RF[37] & 0.93 & & & & & & & \\ Consensus[55] & **0.91** & & & & & & & \\ GNN[50] & \(\sim 1.10\) & **0.91** & & & **1.17** & & & \\ SolyBert[91] & 0.925 & & & & & & & \\ SolTransNet\({}^{a}\)[44] & & & 1.004 & & 1.295 & & 2.99 & \\ SMILES-BERT\({}^{b}\)[92] & & & & & & & 0.47 & \\ MolBERT\({}^{b}\)[48] & & & & & & & 0.531 & \\ RT\({}^{b}\)[45] & & & & & & & 0.73 & \\ MolFormer\({}^{b}\)[49] & & & & & & & **0.278** & \\ \end{tabular}
\end{table}
Table 2: Metrics for the best models found in the current study (upper section) and for other state-of-the-art models available in the literature (lower section). Values were taken from the cited references. Missing values stand for entries that the cited authors did not study. SolChal columns stand for the Solubility Challenges; 2\_1 represents the tight dataset (set-1), while 2\_2 represents the loose dataset (set-2), as described in the original paper (See Ref. 26). The best-performing model in each dataset has its RMSE value in bold. \({}^{a}\) Has overlap between training and test sets. \({}^{b}\) Pre-trained model was fine-tuned on ESOL.
## 6 Conclusions
Our model was able to predict LogS values directly from SMILES or SELFIES string representations. Hence, there is no need for descriptor selection and construction. Using only raw data, our model could match state-of-the-art performance on datasets that are challenging to predict accurately.
In addition, by carefully compromising between performance and complexity, we implemented a web application using TensorFlow JS. This application can run satisfactorily on devices with limited computational resources, such as laptops and smartphones. This removes the need to rely on a server to run the application, improving usability and flexibility and decreasing implementation costs.
## 7 Data and code availability
All code needed to reproduce these results is publicly available in the following GitHub repository: [https://github.com/ur-whitelab/mol.dev](https://github.com/ur-whitelab/mol.dev). The model is also publicly accessible at the following address: [https://mol.dev/](https://mol.dev/).
Figure 3: Parity plots for two selected models being evaluated on the solubility challenge datasets: (\(i\)) kde4\({}^{LSTM}_{Aug}\) (top row), and (\(ii\)) kde10\({}^{LSTM}_{Aug}\) (bottom row). The left, middle, and right columns show the parity plots for solubility challenge 1[25], 2-set1, and 2-set2[26], respectively. Pearson correlation coefficient is displayed together with RMSE and MAE. “acc-0.5” stands for the \(\pm 0.5\mathrm{log}\%\) metric. Red dashed lines show the limits for molecules considered a correct prediction when computing the \(\pm 0.5\mathrm{log}\%\). The correlation between predicted values and labels increases when more models are added to the ensemble. RMSE and MAE also follow this pattern. However, the \(\pm 0.5\mathrm{log}\%\) decreases in set-2 of the second solubility challenge dataset (SolChal2-set2). While kde10\({}^{LSTM}_{Aug}\) improved the prediction of molecules that were being poorly predicted by kde4\({}^{LSTM}_{Aug}\), the prediction of molecules with smaller errors was not greatly improved. |
2306.03440 | Quantifying the Variability Collapse of Neural Networks | Recent studies empirically demonstrate the positive relationship between the
transferability of neural networks and the within-class variation of the last
layer features. The recently discovered Neural Collapse (NC) phenomenon
provides a new perspective of understanding such last layer geometry of neural
networks. In this paper, we propose a novel metric, named Variability Collapse
Index (VCI), to quantify the variability collapse phenomenon in the NC
paradigm. The VCI metric is well-motivated and intrinsically related to the
linear probing loss on the last layer features. Moreover, it enjoys desired
theoretical and empirical properties, including invariance under invertible
linear transformations and numerical stability, that distinguishes it from
previous metrics. Our experiments verify that VCI is indicative of the
variability collapse and the transferability of pretrained neural networks. | Jing Xu, Haoxiong Liu | 2023-06-06T06:37:07Z | http://arxiv.org/abs/2306.03440v1 | # Quantifying the Variability Collapse of Neural Networks
###### Abstract
Recent studies empirically demonstrate the positive relationship between the transferability of neural networks and the within-class variation of the last layer features. The recently discovered Neural Collapse (NC) phenomenon provides a new perspective of understanding such last layer geometry of neural networks. In this paper, we propose a novel metric, named Variability Collapse Index (VCI), to quantify the variability collapse phenomenon in the NC paradigm. The VCI metric is well-motivated and intrinsically related to the linear probing loss on the last layer features. Moreover, it enjoys desired theoretical and empirical properties, including invariance under invertible linear transformations and numerical stability, that distinguishes it from previous metrics. Our experiments verify that VCI is indicative of the variability collapse and the transferability of pretrained neural networks.
Machine Learning, Variability Collapse, Neural Networks
## 1 Introduction
The pursuit of powerful models capable of extracting features from raw data and performing well on downstream tasks has been a constant endeavor in the machine learning community (Bommasani et al., 2021). In the past few years, researchers have developed various pretraining methods (Chen et al., 2020; Khosla et al., 2020; Grill et al., 2020; He et al., 2022; Baevski et al., 2022) that enable models to learn from massive real world datasets. However, there is still a lack of systematic understanding regarding the transferability of deep neural networks, _i.e_., whether they can leverage the information in the pretraining datasets to achieve high performance in downstream tasks (Abnar et al., 2021; Fang et al., 2023).
The performance of a pretrained model is closely related to the quality of the features it produces. The recently proposed concept of _neural collapse (NC)_(Papyan et al., 2020) provides a paradigmatic way to study the representation of neural networks. According to neural collapse, the last layer features of neural networks adhere to the following rule of _variability collapse (NC1)_: As the training proceeds, the representation of a data point converges to its corresponding class mean. Consequently, the within-class variation of the features converges to zero.
The deep connection between transferability and neural collapse is rooted in the variability collapse criterion. Previous works (Feng et al., 2021; Kornblith et al., 2021; Sariyildiz et al., 2022) empirically find that although models with collapsed last-layer feature representations exhibit better pretraining accuracy, they tend to yield worse performance for downstream tasks. These works give an intuitive explanation that pushing the feature to their class means results in the loss of the diverse structures useful for downstream tasks. Building upon this understanding, researchers design various algorithms (Jing et al., 2021; Kini et al., 2021; Chen et al., 2022; Dubois et al., 2022; Sariyildiz et al., 2023) that either explicitly or implicitly leverage the variability collapse criterion to retain the feature diversity in the pretraining phase, and thereby improve the transferability of the models.
Straightforward as it is stated, the variability collapse criterion is still not thoroughly understood. One fundamental question is how to mathematically quantify variability collapse. Previous works propose variability collapse metrics that are meaningful in specific settings (Papyan et al., 2020; Zhu et al., 2021; Kornblith et al., 2021; Hui et al., 2022). However, a more principled characterization is required when we want to use variability collapse to analyze transferability. For example, in the linear probing setting, the loss function is invariant to invertible linear transformations on the last layer features. Consequently, it is reasonable to expect that the collapse metric of the features would also be invariant under such transformations, in order to properly reflect the model's performance on downstream tasks. However, as we will point out in Section 4.2, no previous metric can achieve this high level of invariance, to the best of the authors' knowledge.
To obtain a well-motivated and well-defined variability collapse metric, we tackle the problem from a loss minimization perspective. Our analysis reveals that the minimum mean squared error (MSE) loss in linear probing on a set of pretrained features can be expressed concisely, with a major component being \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\). Here, \(\Sigma_{B}\) is the between-class feature covariance matrix and \(\Sigma_{T}\) is the overall feature covariance matrix, as defined in Section 3.1. This term serves as an indicator of variability collapse, since it achieves its maximum \(\operatorname{rank}(\Sigma_{B})\) for fully collapse configurations where the feature of each data point coincides with the feature class mean. Furthermore, an important implication of its connection with MSE loss is that the invertible linear transformation invariance of the loss function directly transfers to the quantity \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\).
Motivated by the above investigations, we propose the following collapse metric, which we name **Variability Collapse Index (VCI)**:
\[\text{VCI}=1-\frac{\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]}{ \operatorname{rank}(\Sigma_{B})}.\]
The VCI metric possesses the desirable property of invariance under invertible linear transformation, making it a proper indicator of last layer representation collapse. Furthermore, VCI enjoys a higher level of numerical stability compared with previous collapse metrics. We conduct extensive experiments to validate the effectiveness of the proposed VCI metric. The results show that VCI is a valid index for variability collapse across different architectures. We also show that VCI has a strong correlation with accuracy of various downstream tasks, and serves as a better index for transferability compared with existing metrics.
## 2 Related Works
Neural Collapse.The seminal paper Papyan et al. (2020) proposes the concept of neural collapse, which consists of four paradigmatic criteria that govern the terminal phase of training of neural networks.
One research direction regarding neural collapse focuses on rigorously proving neural collapse for specific learning models. A large portion of them adopt the layer peeled model (Mixon et al., 2020; Fang et al., 2021), which treats the last layer feature vector as unconstrained optimization variables. In this setting, both cross entropy loss (Lu and Steinerberger, 2020; Zhu et al., 2021; Ji et al., 2021) and mean square loss (Tirer and Bruna, 2022; Zhou et al., 2022) exhibit neural collapse configurations as the only global minimizers and have benign optimization landscapes. Additionally, other theoretical investigations explore neural collapse from the perspective of optimization dynamics (Han et al., 2021), max margin (Zhou et al., 2022), and more generalized settings (Nguyen et al., 2022; Tirer et al., 2022; Zhou et al., 2022; Yaras et al., 2022).
Another research direction draws inspiration from the neural collapse phenomenon to devise training algorithms. For instance, some studies empirically demonstrate that fixing the last-layer weights of neural networks to an Equiangular Tight Frame (ETF) reduces memory usage (Zhu et al., 2021), and improves the performance on imbalanced dataset (Yang et al., 2022; Thrampoulidis et al., 2022; Zhu et al., 2022) and few shot learning tasks (Yang et al., 2023).
Representation Collapse and Transferability.Understanding and improving the transferability of neural networks to unknown tasks have attracted significant attention in recent years (Tan et al., 2018; Ruder et al., 2019; Zhuang et al., 2020). Previous works (Feng et al., 2021; Sariyildiz et al., 2022; Cui et al., 2022) empirically demonstrate that the diversity of last layer features is positively correlated with the transferability of neural networks, highlighting a tradeoff between pretraining accuracy and transfer accuracy. To address this challenge, various methods (Schilling et al., 2021; Touvron et al., 2021; Xie et al., 2022) have been proposed to quantify and mitigate representation collapse. For example, Kornblith et al. (2021) show that using a low temperature for softmax activation in training reduces class separation and improves transferability. Neural collapse provides a novel perspective for understanding this fundamental tradeoff (Galanti et al., 2021; Li et al.). Notably, Hui et al. (2022) reveal that neural collapse can be at odds with transferability by causing a loss of crucial information necessary for downstream tasks.
## 3 Preliminaries
### Notations and Problem Setup
Throughout this paper, we adopt the following notation conventions. We use \(\|v\|\) to denote Euclidean norm of vector \(v\in\mathbb{R}^{d}\). We use \(\|A\|_{F}\) to denote Frobenious norm and \(A^{\dagger}\) to denote the pseudo-inverse of matrix \(A\in\mathbb{R}^{d\times d}\), \(d\in\mathbb{N}_{+}\). We use \([n]\) as a short hand for \(\{1,\cdots n\}\). We use \(e_{k}\in\mathbb{R}^{K}\) to denote the vector whose \(k\)-th entry is \(1\) and the other entries are \(0\). We use \(\mathbf{1}_{d}\) and \(\mathbf{0}_{d}\) to denote the all-one and the zero vector in \(\mathbb{R}^{d}\), and use \(\mathbf{I}_{d\times d}\) and \(\mathbf{0}_{d\times d}\) to denote the identity matrix and the zero matrix in \(\mathbb{R}^{d\times d}\). We omit the subscripts of dimension when the context is clear.
Consider a \(K\)-class classification problem on a balanced dataset \(\mathcal{D}=\{(x_{k,i},e_{k})\}_{k\in[K],i\in[N]}\), where \(N\) is the number of samples from each class. It is worth noting that the results presented in this paper can be readily extended to imbalanced datasets. Each sample consists of a data point \(x_{k,i}\in\mathbb{R}^{d}\) and an one-hot label \(e_{k}\in\mathbb{R}^{K}\). The classifier \(W\phi(\cdot)+b\) is composed of a feature extractor \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{p}\) and a linear layer with \(W\in\mathbb{R}^{K\times p}\) and
\(b\in\mathbb{R}^{K}\). Let \(h_{k,i}=\phi(x_{k,i})\) denote the feature vector of \(x_{k,i}\), and \(H=(h_{k,i})_{k\in[K],i\in[N]}\in\mathbb{R}^{p\times KN}\) denote the feature matrix. The feature extractor can be any pretrained neural network, up to its penultimate layer.
For a given feature matrix, we denote \(\mu_{k}(H)=(1/N)\sum_{i\in[N]}h_{k,i}\) as the \(k\)-th class mean, and \(\mu_{G}(H)=(1/KN)\sum_{k\in[K],i\in[N]}h_{k,i}\) as the global mean. Throughout this paper, we will frequently refer to the following notions of feature covariance. Specifically, we denote the within-class covariance matrix by
\[\Sigma_{W}(H)=\frac{1}{KN}\sum_{k\in[K]}\sum_{i\in[N]}(h_{k,i}-\mu_{k})(h_{k, i}-\mu_{k})^{\top}, \tag{1}\]
and the between-class covariance matrix by
\[\Sigma_{B}(H)=\frac{1}{K}\sum_{k\in[K]}(\mu_{k}-\mu_{G})(\mu_{k}-\mu_{G})^{ \top}. \tag{2}\]
The overall covariance matrix is defined as
\[\Sigma_{T}(H)=\frac{1}{KN}\sum_{k\in[K]}\sum_{i\in[N]}(h_{k,i}-\mu_{G})(h_{k, i}-\mu_{G})^{\top}. \tag{3}\]
A bias-variance decomposition argument gives \(\Sigma_{T}(H)=\Sigma_{B}(H)+\Sigma_{W}(H)\), whose proof is provided in Equation 7 for completeness. We omit the feature matrix \(H\) in the above notations, when the context is clear.
We define \(V_{B}=\text{span}\{\mu_{1}-\mu_{G},\cdots\mu_{k}-\mu_{G}\}\) as the column space of \(\Sigma_{B}\). In the same way, we can define \(V_{W},V_{T}\) as the column space of \(\Sigma_{W}\) and \(\Sigma_{T}\), respectively.
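For concreteness, the covariance matrices in Equations 1-3 can be computed directly from a feature tensor. The following PyTorch sketch is ours (not code from an official implementation) and assumes a balanced feature tensor `H` of shape `(K, N, p)`:

```python
import torch

def covariances(H):
    """Sigma_W, Sigma_B, Sigma_T (Equations 1-3) from features of shape (K, N, p)."""
    K, N, p = H.shape
    mu_k = H.mean(dim=1)                            # class means, (K, p)
    mu_G = H.reshape(K * N, p).mean(dim=0)          # global mean, (p,)
    dev_W = H - mu_k[:, None, :]                    # within-class deviations
    Sigma_W = torch.einsum('kni,knj->ij', dev_W, dev_W) / (K * N)
    dev_B = mu_k - mu_G                             # between-class deviations
    Sigma_B = dev_B.T @ dev_B / K
    dev_T = H.reshape(K * N, p) - mu_G
    Sigma_T = dev_T.T @ dev_T / (K * N)
    return Sigma_W, Sigma_B, Sigma_T

# Sanity check of the bias-variance decomposition Sigma_T = Sigma_B + Sigma_W:
SW, SB, ST = covariances(torch.randn(10, 50, 32))
assert torch.allclose(ST, SB + SW, atol=1e-4)
```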
### Previous Collapse Metrics
The first item in the neural collapse paradigm is referred to as the variability collapse criterion (NC1), which states that as the training proceeds, the within-class variation of the last layer features will diminish and the features will concentrate around the corresponding class means. Using the quantities defined above, NC1 occurs when \(\Sigma_{W}\rightarrow\mathbf{0}\). In the related literature, researchers propose various ways to non-asymptotically characterize NC1.
Fuzziness.One of the commonly adopted metrics for NC1 is the normalized within-class covariance \(\operatorname{Tr}[\Sigma_{B}^{\dagger}\Sigma_{W}]\)(Papyan et al., 2020; Zhu et al., 2021; Tirer and Bruna, 2022). The term is commonly referred to as _Separation Fuzziness_ or simply _Fuzziness_ in the related literature (He and Su, 2022), and is inherently related to the Fisher discriminant ratio (Zarka et al., 2020).
Squared Distance.Hui et al. (2022) uses the quantity
\[\frac{\sum_{k\in[K]}\sum_{i\in[N]}\|h_{k,i}-\mu_{k}\|^{2}}{N\sum_{k\in[K]}\| \mu_{k}-\mu_{G}\|^{2}} \tag{4}\]
to characterize NC1. In this paper, we refer to it as _Squared Distance_ for convenience. Unlike Fuzziness, Squared Distance disregards the structure of the covariance matrices and instead uses the ratio of the squared norms of the within-class variation and the between-class variation as a collapse measure.
Cosine Similarity.Kornblith et al. (2021) uses the ratio of the average within-class cosine similarity to the overall cosine similarity to measure the dispersion of feature vectors. Define \(\mathrm{sim}(x,y)=x^{\top}y/\left(\|x\|\|y\|\right)\) as the cosine similarity between vectors. Denote the within-class cosine distance and overall cosine distance as
\[\bar{d}_{\text{within}} =\sum_{k=1}^{K}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{1-\mathrm{sim} \left(h_{k,i},h_{k,j}\right)}{KN^{2}},\] \[\bar{d}_{\text{total}} =\sum_{k=1}^{K}\sum_{l=1}^{K}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{1 -\mathrm{sim}\left(h_{k,i},h_{l,j}\right)}{K^{2}N^{2}}.\]
They refer to the term \(1-\bar{d}_{\text{within}}/\bar{d}_{\text{total}}\) as _class separation_. They also propose a simplified quantity \(1-\bar{d}_{\text{within}}\), and empirically show that both of them have a negative correlation with linear probing transfer performance across different settings. In this paper, we adopt \(\bar{d}_{\text{within}}\) as the baseline metric in Kornblith et al. (2021), and call it _Cosine Similarity_ for brevity.
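The three baseline metrics above can be sketched in a few lines (our code, building on the `covariances` helper from Section 3.1; the pseudoinverse in Fuzziness is subject to the thresholding issues discussed in Section 4.3):

```python
import torch
import torch.nn.functional as F

def fuzziness(Sigma_B, Sigma_W):
    # Tr[Sigma_B^dagger Sigma_W]; sensitive to the pinv threshold (Section 4.3)
    return torch.trace(torch.linalg.pinv(Sigma_B) @ Sigma_W).item()

def squared_distance(H):
    # Equation 4
    K, N, p = H.shape
    mu_k = H.mean(dim=1)
    mu_G = H.reshape(K * N, p).mean(dim=0)
    within = ((H - mu_k[:, None, :]) ** 2).sum()
    between = N * ((mu_k - mu_G) ** 2).sum()
    return (within / between).item()

def cosine_similarity_metric(H):
    # \bar{d}_within of Kornblith et al. (2021)
    Hn = F.normalize(H, dim=-1)
    sims = torch.einsum('kip,kjp->kij', Hn, Hn)   # per-class pairwise similarities
    return (1.0 - sims).mean().item()
```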
## 4 What is an Appropriate Variability Collapse Metric?
In this section, we explore the essential properties that a valid variability collapse metric should and should not have.
### Do Last Layer Features Fully Collapse?
The original NC1 argument states that the within-class covariance converges to zero, _i.e._, \(\Sigma_{W}\to 0\), as the training proceeds. This implies that a collapse metric should achieve its minimum or maximum at these _fully collapsed_ configurations with \(\Sigma_{W}=0\).
However, the following proposition shows that the opposite is not true, _i.e._, full collapse is not necessary for loss minimization.
**Proposition 4.1**.: _Consider a loss function \(\ell:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow\mathbb{R}\). Define the training loss as_
\[L(W,b,H) =\frac{1}{KN}\sum_{k\in[K]}\sum_{i\in[N]}\ell(Wh_{k,i}+b,e_{k})\] \[\quad+\frac{\lambda_{W}}{2}\|W\|_{F}^{2}+\frac{\lambda_{b}}{2}\| b\|^{2}, \tag{5}\]
_where \(\lambda_{W},\lambda_{b}\geq 0\) are regularization parameters. Suppose that \(p>K\), \(N\geq 2\). Then for any constant \(C>0\),
there exists an \(H^{\prime}\), such that \(L(W,b,H^{\prime})=L(W,b,H)\), \(\Sigma_{B}(H^{\prime})=\Sigma_{B}(H)\), but \(\|\Sigma_{W}(H^{\prime})\|_{F}>C\)._
The proof of the proposition is provided in Appendix A.1. It is worth noting that the above proposition does not contradict previous conclusions that ETF configurations are the only minimizers (Zhu et al., 2021; Tirer and Bruna, 2022), since they require feature regularization \((\lambda_{H}/2)\|H\|_{F}^{2}\) in the loss function.
Our experiments show that Proposition 4.1 truly reflects the trend of neural network training. We train a ResNet50 model on the ImageNet-1K dataset, and decompose \(\Sigma_{W}\) into the \(V_{B}\) part and the \(V_{B}^{\perp}\) part by computing \((1/KN)\sum_{k\in[K],i\in[N]}\|\text{Proj}_{V_{B}}(h_{k,i}-\mu_{k})\|^{2}\) and \((1/KN)\sum_{k\in[K],i\in[N]}\|\text{Proj}_{V_{B}^{\perp}}(h_{k,i}-\mu_{k})\|^{2}\), as sketched below. The results are shown in Figure 1. We observe that although the \(V_{B}\) part steadily decreases, the \(V_{B}^{\perp}\) part keeps increasing in the training process. Therefore, \(\Sigma_{W}\rightarrow\mathbf{0}\) may not occur in real-world neural network training.
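A minimal sketch of this decomposition (ours; the tolerance for dropping near-null singular directions is an implementation choice):

```python
def within_split(H, tol=1e-6):
    """Split the mean squared within-class deviation into its V_B and V_B-perp parts."""
    K, N, p = H.shape
    mu_k = H.mean(dim=1)
    mu_G = H.reshape(K * N, p).mean(dim=0)
    # Orthonormal basis of V_B = span{mu_k - mu_G}, dropping near-null directions
    U, S, _ = torch.linalg.svd((mu_k - mu_G).T, full_matrices=False)
    Q = U[:, S > tol * S.max()]
    dev = (H - mu_k[:, None, :]).reshape(K * N, p)
    proj = dev @ Q @ Q.T                          # projection onto V_B
    in_VB = (proj ** 2).sum() / (K * N)
    in_VB_perp = ((dev - proj) ** 2).sum() / (K * N)
    return in_VB.item(), in_VB_perp.item()
```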
Proposition 4.1 and Figure 1 show that the last layer of neural networks exhibits high flexibility due to overparameterization. Consequently, it is unrealistic to expect standard empirical risk minimization training to achieve fully collapsed last layer representation, unless additional inductive bias is introduced. Therefore, requiring that a collapse metric reach its minimum _only_ at fully collapsed configurations, as Squared Distance does, is too stringent for practical use.
### Invariance to Invertible Linear Transformations Matters
Symmetry and invariance are core concepts in deep learning (Gens and Domingos, 2014; Tan et al., 2018; Chen et al., 2019). The collapse metrics discussed in Section 3.2 enjoy certain levels of invariance.
**Observation 4.2**.: _The Fuzziness metric \(\operatorname{Tr}[\Sigma_{B}^{\dagger}\Sigma_{W}]\) is invariant to invertible linear transformation \(U\in\mathbb{R}^{p\times p}\) that can be decomposed into two separate transformations in \(V_{B}\) and \(V_{B}^{\perp}\). The claim comes from the fact that_
\[\operatorname{Tr}\left[\left(U\Sigma_{B}U^{\top}\right)^{\dagger} U\Sigma_{W}U^{\top}\right]\] \[=\operatorname{Tr}\left[U^{-1,\top}\Sigma_{B}^{\dagger}U^{-1}U \Sigma_{W}U^{\top}\right]\] \[=\operatorname{Tr}\left[\Sigma_{B}^{\dagger}\Sigma_{W}\right].\]
_However, Fuzziness is not invariant to all invertible linear transformations in \(\mathbb{R}^{p}\). A simple counter example is \(\Sigma_{B}=\left[\begin{array}{cc}1&0\\ 0&0\end{array}\right]\), \(\Sigma_{W}=\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]\), and the linear transformation \(U=\left[\begin{array}{cc}1&1\\ 0&1\end{array}\right]\). It can be calculated that \(\operatorname{Tr}\left[\left(U\Sigma_{B}U^{\top}\right)^{\dagger}U\Sigma_{W}U^ {\top}\right]=2\neq 1=\operatorname{Tr}\left[\Sigma_{B}^{\dagger}\Sigma_{W}\right]\)._
**Observation 4.3**.: _The Squared Distance metric in Equation 4 is invariant to isotropic scaling and orthogonal transformation on the feature vectors, since such transformations preserve the pairwise distances between the feature vectors. However, it is not invariant to invertible linear transformations in \(\mathbb{R}^{p}\)._
**Observation 4.4**.: _The Cosine Similarity metric is invariant to independent scaling of each \(h_{k,i}\). It is also invariant to orthogonal transformation in \(\mathbb{R}^{p}\), as such transformations preserves the cosine similarity between feature vectors. But it is easy to see that Cosine Similarity is not invariant to invertible linear transformation in \(\mathbb{R}^{p}\)._
However, the next observation shows that the linear probing loss of the last layer features is invariant under a much more general class of transformations.
**Observation 4.5**.: _The minimum value of loss function in Equation 5 is invariant to invertible linear transformations on the feature vector, i.e._
\[\min_{W,b}L(W,b,H)=\min_{W,b}L(W,b,VH),\]
_for any invertible \(V\in\mathbb{R}^{p\times p}\)._
In other words, if we have two pretrained models \(\phi_{1}(\cdot)\) and \(\phi_{2}(\cdot)\), and there exists an invertible linear transformation \(V\in\mathbb{R}^{p\times p}\) such that \(\phi_{1}(x)=V\phi_{2}(x)\) for any \(x\in\mathbb{R}^{d}\), then \(\phi_{1}(\cdot)\) and \(\phi_{2}(\cdot)\) will have exactly the same linear probing loss on any downstream data distribution. Therefore, when considering a collapse metric that may serve as an indicator of transfer accuracy, it is desirable for the metric to exhibit invariance to invertible linear transformations. However, as discussed previously, the metrics listed in Section 3.2 do not possess this level of invariance.
Figure 1: **Projections of Squared Distance onto \(V_{B}\) and \(V_{B}^{\perp}\) show opposite trends as the training proceeds. The model is a ResNet50 trained on ImageNet-1K, using the setting specified in Section 6.1.**
### Numerical Stability Issues
Numerical stability is an essential property for the collapse metric to ensure its practical usability. Unfortunately, the Fuzziness metric is prone to numerical instability, primarily due to the pseudoinverse operation applied to \(\Sigma_{B}\).
Firstly, the between-class covariance matrix \(\Sigma_{B}\) is singular when \(K\leq p\), and its rank is unknown. Due to computational imprecision, its zero eigenvalues are always contaminated by small nonzero values. In the default PyTorch (Paszke et al., 2019) implementation, the pseudoinverse operation includes a thresholding step to eliminate the spurious nonzero eigenvalues. However, selecting the appropriate threshold is a manual task, as it may vary depending on the architecture, dataset, or training algorithm.
To tackle this issue, one possible solution is to retain only the top \(\min\{p,K-1\}\) eigenvalues, which is the maximum rank of \(\Sigma_{B}\). Nevertheless, \(\Sigma_{B}\) can still possess small trailing nonzero eigenvalues. For example, in the experiments illustrated in Figure 2, the \(999\)-th eigenvalue is about \(2\times 10^{-3}\), significantly smaller than the typical scale of nonzero eigenvalues. Including such small eigenvalues in the computation would yield a substantially large fuzziness value.
To address the numerical stability issue, an alternative approach is to discard the \(\Sigma_{B}\) and instead employ the more well-behaved overall covariance matrix \(\Sigma_{T}\). As shown in Figure 2, the eigenvalues of \(\Sigma_{T}\) exhibit a larger scale and a more uniform distribution compared with eigenvalues of \(\Sigma_{B}\), making it a numerically stable choice for pseudoinverse operation. Interestingly, the quantity \(\Sigma_{T}^{\dagger}\) naturally emerges in the solution of a loss minimization problem, which we will explore in the next section.
## 5 The Proposed Metric
As we have discussed, the existing collapse metrics of Section 3.2 do not have the desired properties to fully measure the quality of the representation in downstream tasks. In this section, we introduce a novel and well-motivated collapse metric, which we call the Variability Collapse Index (VCI), that satisfies all the aforementioned properties.
Previous studies (Zhu et al., 2021; Tirer and Bruna, 2022) indicate that fully collapsed last layer features minimize the linear probing loss. Therefore, it is natural to explore the inverse direction, namely, using the linear probing loss to quantify the collapse level of last layer features.
Suppose we have a labeled dataset with corresponding last layer feature \(H=(h_{k,i})_{k\in[K],i\in[N]}\). We perform linear regression on the last layer to find the optimal parameter \(W\) that minimizes the following MSE loss:
\[L(W,b,H)=\frac{1}{2KN}\sum_{k\in[K],i\in[N]}\|Wh_{k,i}+b-e_{k}\|^{2}.\]
The following theorem gives the optimal linear probing loss.
**Theorem 5.1**.: _The optimal linear probing loss has the following form._
\[\min_{W,b}L(W,b,H)=-\frac{1}{2K}\operatorname{Tr}\left[\Sigma_{T}^{\dagger} \Sigma_{B}\right]+\frac{1}{2}-\frac{1}{2K},\]
_where \(\Sigma_{B}\) and \(\Sigma_{T}\) are the between-class and overall covariance matrix defined in Equation 2 and 3._
Theorem 5.1 shows that the information of the minimum MSE loss can be fully captured by the simple quantity \(\operatorname{Tr}\left[\Sigma_{T}^{\dagger}\Sigma_{B}\right]\). It is easy to see that the minimum of \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\) is \(0\). The following theorem gives an upper bound of \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\).
**Theorem 5.2**.: \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\leq\operatorname{rank}( \Sigma_{B})\)_. The equality holds for fully collapsed configuration \(\Sigma_{W}=\mathbf{0}\)._
The term \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\) has a positive correlation with the level of collapse in the representation. Theorem 5.1 implies that for MSE loss, a more collapsed representation leads to a smaller loss. Therefore, this term is a natural candidate for a collapse metric.
**Definition 5.3**.: Define the **Variability Collapse Index (VCI)** of a set of features \(H=(h_{k,i})_{k\in[K],i\in[N]}\) as
\[\text{VCI}=1-\frac{\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]}{ \operatorname{rank}(\Sigma_{B})},\]
where \(\Sigma_{B}\) and \(\Sigma_{T}\) are the between-class and overall covariance matrix defined in Equation 2 and 3.
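In code, VCI reduces to a few lines (a sketch of ours on top of the `covariances` helper from Section 3.1, taking \(\operatorname{rank}(\Sigma_{B})=\min\{p,K-1\}\) following the convention stated below):

```python
def vci(H):
    """Variability Collapse Index (Definition 5.3) for features of shape (K, N, p)."""
    K, N, p = H.shape
    _, Sigma_B, Sigma_T = covariances(H)
    collapse = torch.trace(torch.linalg.pinv(Sigma_T) @ Sigma_B).item()
    return 1.0 - collapse / min(p, K - 1)   # rank(Sigma_B) taken as min(p, K-1)
```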
One of the advantages of VCI is its invariance to invertible linear transformations, which is inherited from the invariance of the MSE loss.
Figure 2: **The Eigenvalue spectra of \(\Sigma_{B}\) and \(\Sigma_{T}\).** The spectrum of \(\Sigma_{T}\) has a substantially larger scale. The model is a ResNet50 trained on ImageNet-1K, using the setting specified in Section 6.1.
**Corollary 5.4**.: _VCI is invariant to invertible linear transformation of the feature vector, i.e., multiplying each \(h_{k,i}\) with an invertible matrix \(U\in\mathbb{R}^{p\times p}\)._
Proof.: From Observation 4.5, we know that the minimum of the loss function \(L(W,b,H)\) is invariant to invertible linear transformations on \(H\). This implies the same invariance property of the term \(\operatorname{Tr}[\Sigma_{T}^{\dagger}\Sigma_{B}]\). The proof is complete by noting that invertible linear transformations also preserve the rank of \(\Sigma_{B}\).
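Corollary 5.4 is also easy to check numerically; in this quick sketch, the clustered toy features and the diagonally dominant (hence invertible) transform are our own choices:

```python
torch.manual_seed(0)
K, N, p = 10, 50, 32
H = torch.randn(K, N, p) + 3.0 * torch.randn(K, 1, p)   # toy features with class offsets
U = torch.randn(p, p) + p * torch.eye(p)                 # invertible linear transformation
H_U = torch.einsum('ij,knj->kni', U, H)                  # apply U to every feature vector
print(vci(H), vci(H_U))   # the two values agree up to numerical precision
```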
Another advantage of VCI lies in its numerical stability. This advantage primarily stems from the well-behaved nature of the spectrum of \(\Sigma_{T}\) compared to that of \(\Sigma_{B}\), as discussed in Section 4.3. Therefore, the pseudo-inverse operation does not lead to an explosive increase in VCI. Furthermore, one can safely take \(\operatorname{rank}(\Sigma_{B})=\min\{p,K-1\}\), since the unknown rank is not a cause of numerical instability as it is in Fuzziness.
## 6 Experiment Results
In this section, we present experiments that reflect the differences between the previous variability collapse metrics and our proposed VCI metric.
### Setups
We conduct experiments to analyze the behavior of four variability collapse metrics, namely Fuzziness, Squared Distance, Cosine Similarity, and our proposed VCI. We evaluate the metrics on the feature layer of ResNet18 (He et al., 2016) trained on CIFAR10 (Krizhevsky et al., 2009) and ResNet50 / variants of ViT (Dosovitskiy et al., 2020) trained on ImageNet-1K with AutoAugment (Cubuk et al., 2018) for 300 epochs. ResNet18s are trained on one NVIDIA GeForce RTX 3090 GPU, ResNet50s and ViT variants are trained on four GPUs. The batch size for each GPU is set to 256. The metric values are recorded every 20 epochs, where \(\operatorname{rank}(\Sigma_{B})\) in the expression of VCI is taken to be \(\min\{p,K-1\}\) as stated in the previous section.
For all experiments on ResNet models, we use the implementation of ResNet from the torchvision library, called 'ResNet v1.5'. We use SGD with Nesterov Momentum as the optimizer. The maximum learning rate is set to \(0.1\times\text{batch size}/256\). We try both the cosine annealing and the step-wise learning rate decay schedulers. When using a step-wise learning rate decay schedule, the learning rate is decayed by a factor of 0.975 every epoch. We also use a linear warm-up procedure of 10 epochs, starting from an initial \(10^{-5}\) learning rate. The weight-decay factor is set to \(8\times 10^{-5}\). For training on CIFAR10, we replace the random resized crop with random crop after padding 4 pixels on
Figure 4: **Variability Collapse metrics of training ResNet50 on ImageNet-1k dataset. From left to right: Fuzziness, Squared Distance, Cosine Similarity and our proposed VCI. The three curves are obtained with different training settings, all achieving \(\geq\) 77.8\(\%\) test accuracy. green: CE loss. orange: MSE loss. blue: MSE loss + cosine annealing schedule.**
Figure 3: **Variability Collapse metrics of training ResNet18 on CIFAR-10 dataset. From left to right: Fuzziness, Squared Distance, Cosine Similarity and our proposed VCI. The three curves are obtained with different training settings specified below, all achieving \(\geq\) 92.1\(\%\) test accuracy. Green: step-wise learning rate decay schedule. Orange: cosine annealing schedule. Blue: cosine annealing schedule without weight decay and warmup.**
each side as in He et al. (2016). Cross-Entropy loss is used if not specified otherwise.
For DeiT-T and DeiT-S (Touvron et al., 2021), the two ViT variants used in our experiments, we use AdamW (Loshchilov and Hutter, 2017) with a cosine annealing scheduler as the optimizer. We incorporate a linear warm-up phase of 5 epochs, starting from a learning rate of \(10^{-6}\) and gradually increasing to the maximum learning rate of \(10^{-3}\). For other modules of training, such as weight initialization, mixup/cutmix, stochastic depth and random erasing, we keep the same settings as those of Touvron et al. (2021).
At test time for ImageNet-1K, we resize the short side of image to a length of 256 pixels and perform a center crop. When evaluating the variability collapse metrics, we use the same data transformation as at test time. All transformed images are finally normalized with ImageNet mean and standard deviation during training, testing, and metric evaluation.
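For reference, the test-time transform described above can be sketched with torchvision; the 224-pixel crop size is our assumption, as the text specifies only the 256-pixel short side and a center crop:

```python
from torchvision import transforms

eval_transform = transforms.Compose([
    transforms.Resize(256),            # short side resized to 256 pixels
    transforms.CenterCrop(224),        # crop size assumed, not stated in the text
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```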
### How do Variability Collapse Metrics Evolve as the Training Proceeds?
Figure 3 demonstrates the trend of four different variability collapse metrics when training ResNet18 on CIFAR-10. It is observed that Squared Distance and Cosine Similarity fail to exhibit a consistent trend of collapse, as explained in Section 4.1. On the other hand, Fuzziness and VCI show a decreasing trend across these settings.
The results for ResNet50s trained on ImageNet are provided in Figure 4. In contrast to the case of ResNet18 on CIFAR10, all evaluated metrics consistently demonstrate a decreasing curve since the ratio of the \(V_{B}^{\perp}\) part becomes smaller with a smaller \(p/K\) value, as shown in Figure 1. Additionally, it is observed that neural networks trained with MSE loss exhibit a higher level of collapse compared to those trained with CE loss, which aligns with the findings of Kornblith et al. (2021).
The results for ViT variants trained on ImageNet are given in Figure 5. For DeiT-T and DeiT-S with embedding dimensions of 192 and 384, \(V_{B}\) becomes the whole feature space due to \(p<K\), leading to a clearer trend of variability collapse since \(V_{B}^{\perp}\) becomes \(0\).
Finally, we show that test collapse also happens for VCI in Figure 6. This indicates that variability collapse is a phenomenon that reflects the properties of underlying data distributions, rather than being solely caused by overfitting the training datasets. We refer to Appendix B for comparisons between train collapse and test collapse for other variability metrics.
Figure 5: **Variability Collapse metrics of training ViT on ImageNet-1k dataset. From left to right: Fuzziness, Squared Distance, Cosine Similarity and our proposed VCI. Blue: DeiT-S. Orange: DeiT-T. All of the four metrics indicate variability collapse happens for this setting.**
Figure 6: **Train Collapse and Test Collapse both happen for VCI. Train collapse is evaluated on a 50000 subset of ImageNet-1K training dataset. Test collapse is evaluated on the full ImageNet-1K test dataset. Left: ResNet18 on CIFAR-10. Middle: ResNet50 on ImageNet-1K. Right: DeiT-S on ImageNet-1K.**
### Only VCI Consistently Indicates Transferability
In this section, we investigate the correlation between variability metrics and transferability through two sets of experiments. We pretrain ResNet50 on ImageNet-1K with a single varying hyperparameter specified within each group. We evaluate the pretrained neural representations using linear probing (Kornblith et al., 2019; Chen et al., 2020) on 10 downstream datasets, including Oxford-IIIT Pets (Parkhi et al., 2012), Oxford 102 Flowers (Nilsback and Zisserman, 2008), FGVC Aircraft (Maji et al., 2013), Stanford Cars (Krause et al., 2013), the Describable Textures Dataset (DTD) (Cimpoi et al., 2014), Food-101 dataset (Bossard et al., 2014), CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), Caltech-101 (L. Fei-Fei et al., 2004), and the SUN397 scene dataset (Xiao et al., 2010). We use L-BFGS to train the linear classifier, with the optimal \(L_{2}\)-penalty strength determined by searching through 97 logarithmically spaced values between \(10^{-6}\) and \(10^{6}\) on a validation set. We provide the raw experiment results in Appendix C.
We use the following **mean log odds gain**
\[\mathrm{MLOG}=\frac{1}{10}\sum_{i=1}^{10}\log\frac{p_{i}}{1-p_{i}}-\log\frac{ p_{\text{pretrain}}}{1-p_{\text{pretrain}}} \tag{6}\]
to measure the transferability of a neural representation, where \(p_{\text{pretrain}}\) is the final test accuracy in pretraining. Compared with Kornblith et al. (2019), we subtract the log odds of the pretrain accuracy from the mean log odds of linear classification accuracy over the downstream tasks, to isolate the impact of variability collapse on transfer performance.
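A direct sketch of Equation (6), with accuracies given as probabilities in \((0,1)\):

```python
import math

def mean_log_odds_gain(downstream_accs, pretrain_acc):
    """Mean log odds gain (Equation 6) over the downstream task accuracies."""
    log_odds = lambda q: math.log(q / (1.0 - q))
    mean_lo = sum(log_odds(q) for q in downstream_accs) / len(downstream_accs)
    return mean_lo - log_odds(pretrain_acc)
```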
In the first group, we change the temperature \(\tau\) in the softmax function
\[\mathrm{Softmax}_{\tau}(z)=\left(\frac{\exp(\frac{1}{\tau}z_{1})}{\sum_{k=1}^ {K}\exp(\frac{1}{\tau}z_{k})},\cdots,\frac{\exp(\frac{1}{\tau}z_{K})}{\sum_{k= 1}^{K}\exp(\frac{1}{\tau}z_{k})}\right).\]
The results of the first group of experiments are shown in the top row of Figure 7. They are consistent with the findings in (Kornblith et al., 2021), as all considered metrics show a negative relation between variability collapse and transfer performance.
In the second group of experiments, we introduce regularization to control the collapse behavior of neural networks (Kornblith et al., 2021). The regularization term we use is the average within-class cosine similarity divided by the number of data points of each class in the batch. By varying the value of \(\lambda\) multiplying the regularization term, we investigate whether the observed correlation in the first group still holds true. The bottom row of Figure 7 shows that for the three previous metrics, the correlation changes from positive to negative, or vice versa. However, a strong positive correlation consistently holds between VCI and transferability. Therefore, VCI serves as an effective indicator of transfer performance, compared to other variability collapse metrics.
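A sketch of the regularization term described above (our reading of the description; the exact batch-level bookkeeping used in the original experiments may differ in detail):

```python
def within_class_cos_reg(features, labels):
    """Average within-class cosine similarity, divided by the per-class batch count."""
    f = torch.nn.functional.normalize(features, dim=-1)
    terms = []
    for c in labels.unique():
        fc = f[labels == c]
        n_c = fc.shape[0]
        if n_c < 2:
            continue
        sims = fc @ fc.T
        mean_sim = (sims.sum() - n_c) / (n_c * (n_c - 1))   # mean off-diagonal similarity
        terms.append(mean_sim / n_c)
    return torch.stack(terms).mean()

# Training objective: loss = cross_entropy(logits, labels) + lam * within_class_cos_reg(...)
```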
## 7 Conclusions and Future Directions
In this paper, we study the variability collapse phenomenon of neural networks, and propose the VCI metric as a quan
Figure 7: **Only VCI consistently indicates transferability in both groups of our experiments:** In each graph, x-axis represents the metric value evaluated on a 50000 subset of ImageNet train set, y-axis shows the mean log odds gain defined as in Equation (6), and the Pearson correlation coefficient is shown in the legend. **Top Row**: A negative relation between all variability metrics and transferability can be observed when changing the temperature \(\tau\) of softmax in pretraining. **Bottom Row**: Nearly opposite trends emerge on previous variability metrics when we adjust the coefficient \(\lambda\) of the Cosine Similarity regularization term. In contrast, VCI maintains a positive correlation with the mean log odds gain.
quantitative characterization. We demonstrate that VCI enjoys many desired properties, including invariance and numerical stability, and verify its usefulness via extensive experiments.
Moving forward, there are several promising directions for future research. Firstly, it would be beneficial to explore the applicability of VCI to a broader range of training recipes and architectures, by analyzing its performance using alternative network architectures, training methodologies, and datasets. Secondly, it would be valuable to conduct theoretical investigations into the relationship between variability collapse and transfer accuracy. Understanding the mechanisms and principles behind this could provide insights into designing better transfer learning algorithms.
## Acknowledgements
The authors would like to acknowledge the support from the 2030 Innovation Megaprojects of China (Programme on New Generation Artificial Intelligence) under Grant No. 2021AAA0150000.
|
2308.11503 | Multi-level Neural Networks for Accurate Solutions of Boundary-Value
Problems | The solution to partial differential equations using deep learning approaches
has shown promising results for several classes of initial and boundary-value
problems. However, their ability to surpass, particularly in terms of accuracy,
classical discretization methods such as the finite element methods, remains a
significant challenge. Deep learning methods usually struggle to reliably
decrease the error in their approximate solution. A new methodology to better
control the error for deep learning methods is presented here. The main idea
consists in computing an initial approximation to the problem using a simple
neural network and in estimating, in an iterative manner, a correction by
solving the problem for the residual error with a new network of increasing
complexity. This sequential reduction of the residual of the partial
differential equation allows one to decrease the solution error, which, in some
cases, can be reduced to machine precision. The underlying explanation is that
the method is able to capture at each level smaller scales of the solution
using a new network. Numerical examples in 1D and 2D are presented to
demonstrate the effectiveness of the proposed approach. This approach applies
not only to physics informed neural networks but to other neural network
solvers based on weak or strong formulations of the residual. | Ziad Aldirany, Régis Cottereau, Marc Laforest, Serge Prudhomme | 2023-08-22T15:24:29Z | http://arxiv.org/abs/2308.11503v1 | # Multi-level Neural Networks for Accurate Solutions of Boundary-Value Problems
###### Abstract
The solution to partial differential equations using deep learning approaches has shown promising results for several classes of initial and boundary-value problems. However, their ability to surpass, particularly in terms of accuracy, classical discretization methods such as the finite element methods, remains a significant challenge. Deep learning methods usually struggle to reliably decrease the error in their approximate solution. A new methodology to better control the error for deep learning methods is presented here. The main idea consists in computing an initial approximation to the problem using a simple neural network and in estimating, in an iterative manner, a correction by solving the problem for the residual error with a new network of increasing complexity. This sequential reduction of the residual of the partial differential equation allows one to decrease the solution error, which, in some cases, can be reduced to machine precision. The underlying explanation is that the method is able to capture at each level smaller scales of the solution using a new network. Numerical examples in 1D and 2D are presented to demonstrate the effectiveness of the proposed approach. This approach applies not only to physics informed neural networks but to other neural network solvers based on weak or strong formulations of the residual.
**Keywords:** Neural networks, Partial differential equations, Physics-informed neural networks, Numerical error, Convergence, Frequency analysis
## 1 Introduction
In recent years, the solution of partial differential equations using deep learning [34, 6, 14] has gained popularity and is emerging as an alternative to classical discretization methods, such as the finite element or the finite volume methods. Deep learning techniques can be used to either solve a single initial boundary-value problem [31, 38, 40] or approximate the operator associated with a partial differential equation [22, 20, 3, 28]. The primary advantages of deep learning approaches lie in their ability to provide meshless methods, and hence address the curse of dimensionality, and in
the universality of their implementation for various initial and boundary-value problems. However, one of the main obstacles remains their inability to consistently reduce the relative error in the computed solution. Although the universal approximation theorem [7, 12] guarantees that a single hidden layer network with a sufficient width should be able to approximate smooth functions to a specified precision, one often observes in practice that the convergence with respect to the number of iterations reaches a plateau, even if the size of the network is increased. This is primarily due to the use of gradient-based optimization methods, e.g. Adam [17], for which the solution may get trapped in local minima. These optimization methods applied to classical neural network architectures, e.g. feedforward neural networks [19], do indeed experience difficulties in controlling the large range of scales inherent to a solution, even with some fine-tuning of the hyper-parameters, such as the learning rate or the size of the network. In contrast, this is one of the main advantages of classical methods over deep learning methods, in the sense that they feature well-defined techniques to consistently reduce the error, using for instance mesh refinement [4, 33] or multigrid structures [11].
We introduce in this work a novel approach based on the notion of multi-level neural networks, which are designed to consistently reduce the residual associated with a partial differential equation, and hence, the errors in the numerical solution. The approach is versatile and can be applied to various neural network methods that have been developed for the solution of boundary-value problems [38, 40], but we have chosen, for the sake of simplicity, to describe the method on the particular case of physics-informed neural networks (PINNs) [31]. Once an approximate solution to a linear boundary-value problem has been computed with the classical PINNs, the method then consists in finding a correction, namely, estimating the solution error, by minimizing the residual using a new network of increasing complexity. The process can subsequently be repeated using additional networks to minimize the resulting residuals, hence allowing one to reduce the error to a desired precision. A similar idea has been proposed in [1] to control the error in the case of symmetric and positive-definite variational equations. Using Galerkin neural networks, the authors construct basis functions calculated from a sequence of neural networks to generate a finite-dimensional subspace, in which the solution to the variational problem is then approximated. Our approach is more general as the problems do not need to be symmetric.
The development of the proposed method is based on two key observations. First, each level of the correction process introduces higher frequencies in the solution error, as already discussed in [1] and highlighted again in the numerical examples. This is the reason why the sequence of neural networks should be of increasing complexity. Moreover, a key ingredient will be to use
the Fourier feature mapping approach [36] to accurately approximate the functions featuring high frequencies. Second, the size of the error, equivalently of the residual, becomes at each level increasingly smaller. Unfortunately, feedforward neural networks employing standard parameter initialization, e.g. Xavier initialization [9] in our case, are tailored to approximate functions whose magnitudes are close to unity. We thus introduce a normalization of the solution error at each level based on the Extreme Learning Method [13], which also contributes to the success of the multi-level neural networks.
After finalizing the writing of the manuscript, the recent preprint [37] on multi-stage neural networks was brought to our attention. Although the conceptual approach presented in that preprint features many similarities with our method, namely the use of a sequence of networks for the reduction of the numerical errors, the methods developed in our independent work to address the two aforementioned issues are original and sensibly differ from those introduced in [37].
The paper is organized as follows. We briefly describe in Section 2 neural networks and their application with PINNs, the deep learning approach that will be used to solve the boundary-value problems at each level of the training. We describe in Section 3 the two issues that may affect the accuracy of the solutions obtained by PINNs. We motivate in Section 3.1 the importance of normalization of the problem data and show that it can greatly improve the convergence of the solution. We continue in Section 3.2 with the choice of the network architecture and the importance of using the Fourier feature mapping algorithm to approximate high-frequency functions. We then present in Section 4 our approach, the multi-level neural network method, and demonstrate numerically with a simple 1D Poisson problem that the method greatly improves the accuracy of the solution, up to machine precision, with respect to the \(L^{2}\) and the \(H^{1}\) norms, as in classical discretization methods. We demonstrate further in Section 5 the efficiency of the proposed method on several numerical examples based on the Poisson equation, the convective-diffusion equation, and the Helmholtz equation, in one dimension or two dimensions. We were able to consistently reduce the solution error in these problems using the multi-level neural network method. Finally, we compile concluding remarks about the present work and put forward new directions for research in Section 6.
## 2 Preliminaries
### Neural networks
Neural networks have been extensively studied in recent years for solving partial differential equations [34, 31]. A neural network can be viewed as a mapping between an input and an output by means of a composition of linear and nonlinear functions with adjustable weights and biases. Training a neural network consists in optimizing the weights and biases by minimizing some measure of the error between the output of the network and corresponding target values obtained from a given training dataset. As a predictive model, the trained network is then expected to provide accurate approximations of the output when considering a wider set of inputs. Several neural network architectures, e.g. convolutional neural networks (CNNs) [18] or feedforward neural networks (FNNs) [19], are adapted to specific classes of problems.
We shall consider here FNNs featuring \(n\) hidden layers, each layer having a width \(N_{i}\), \(i=1,\ldots,n\), an input layer of width \(N_{0}\), and an output layer of width \(N_{n+1}\); see Figure 1. Denoting the activation function by \(\sigma\), the neural network with input \(\mathbf{z}_{0}\in\mathbb{R}^{N_{0}}\) and output \(\mathbf{z}_{n+1}\in\mathbb{R}^{N_{n+1}}\) is defined as
\[\begin{array}{ll}\text{Input layer:}&\mathbf{z}_{0},\\ \text{Hidden layers:}&\mathbf{z}_{i}=\sigma(W_{i}\mathbf{z}_{i-1}+\mathbf{b}_{i}),\quad i =1,\cdots,n,\\ \text{Output layer:}&\mathbf{z}_{n+1}=W_{n+1}\mathbf{z}_{n}+\mathbf{b}_{n+1},\end{array} \tag{1}\]
where \(W_{i}\) is the _weights_ matrix of size \(N_{i}\times N_{i-1}\) and \(\mathbf{b}_{i}\) is the _biases_ vector of size \(N_{i}\). To simplify the notation, we combine the weights and biases of the neural network into a single parameter denoted by \(\theta\). The neural network (1) generates a finite-dimensional space of dimension \(N_{\theta}=\sum_{i=1}^{n+1}N_{i}(N_{i-1}+1)\). To keep things simple, throughout this work we shall use the \(\tanh\) activation function and the associated Xavier initialization scheme [9] to initialize the weights and biases.
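A minimal PyTorch sketch of the network (1) with tanh activations and Xavier initialization (ours; the widths in the usage line are illustrative):

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    """Feedforward network (1): hidden layers of widths N_1..N_n, linear output layer."""
    def __init__(self, widths):                     # widths = [N_0, N_1, ..., N_{n+1}]
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(widths[i], widths[i + 1]) for i in range(len(widths) - 1))
        for layer in self.layers:                   # Xavier initialization [9]
            nn.init.xavier_normal_(layer.weight)
            nn.init.zeros_(layer.bias)

    def forward(self, z):
        for layer in self.layers[:-1]:
            z = torch.tanh(layer(z))                # hidden layers with tanh activation
        return self.layers[-1](z)                   # linear output layer

net = FNN([1, 20, 1])                               # e.g., one hidden layer of width 20
```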
### Physics-informed neural networks
We briefly review the PINNs approach to solving partial differential equations, as described in [31]. Let \(\varOmega\) be an open bounded domain in \(\mathbb{R}^{d}\), \(d=1,2\), or \(3\), with boundary \(\partial\varOmega\). For two Banach spaces \(U\) and \(V\) of functions over \(\varOmega\), we assume a linear differential operator \(A:U\to V\). Our goal is to find the solution \(u\in U\) that satisfies, for a given \(f\in V\), the partial differential equation cast here in its residual form:
\[R\big{(}\mathbf{x},u(\mathbf{x})\big{)}:=f(\mathbf{x})-Au(\mathbf{x})=0,\quad\forall\mathbf{x}\in \varOmega, \tag{2}\]
and the following boundary conditions:
\[B\big{(}\mathbf{x},u(\mathbf{x})\big{)}=0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{3}\]
For the sake of simplicity in the presentation, but without loss of generality, we consider here only the case of homogeneous Dirichlet boundary conditions, such that the residual \(B\) is given by
\[B\big{(}\mathbf{x},u(\mathbf{x})\big{)}:=u(\mathbf{x}),\quad\forall\mathbf{x}\in\partial\Omega. \tag{4}\]
The primary objective in PINNs is to use a neural network with parameters \(\theta\) to find an approximation \(\tilde{u}_{\theta}(\mathbf{x})\) of the solution \(u(\mathbf{x})\) to problem (2)-(3). For the sake of simplicity in the notation, we shall omit in the rest of the paper the subscript \(\theta\) when referring to the approximate solutions \(\tilde{u}_{\theta}\), and thus simply write \(\tilde{u}(\mathbf{x})\). The training, i.e. the identification of the parameters \(\theta\) of the neural network, is performed by minimizing a loss function, defined here as a combination of the residual associated with the partial differential equation and that associated with the boundary condition in terms of the \(L^{2}\) norm:
\[\mathcal{L}(\theta):=w_{r}\int_{\Omega}R\big{(}\mathbf{x},\tilde{u}(\mathbf{x})\big{)} ^{2}dx+w_{bc}\int_{\partial\Omega}B\big{(}\mathbf{x},\tilde{u}(\mathbf{x})\big{)}^{2}dx, \tag{5}\]
where \(w_{r}\) and \(w_{bc}\) are penalty parameters. In other words, by minimizing the loss function (5) one obtains a weak solution \(\tilde{u}\) that weakly satisfies the boundary condition.
Figure 1: Sketch of a feedforward neural network with \(n\) hidden layers of width \(N_{i}\), \(i=1,\dots,n\), an input layer of size \(N_{0}\), and an output layer of size \(N_{n+1}\).
Alternatively, the homogeneous Dirichlet boundary condition could be strongly imposed, as done in [24], by multiplying the output of the neural network by a function \(g(\mathbf{x})\) that vanishes on the boundary. For instance, if \(\varOmega=(0,\ell)\in\mathbb{R}\), one could choose \(g(x)=x(\ell-x)\). The trial functions \(\tilde{u}\) would then be constructed, using the feedforward neural network (1), as follows:
\[\begin{split}\text{Input layer:}&\quad\mathbf{z}_{0}= \mathbf{x},\\ \text{Hidden layers:}&\quad\mathbf{z}_{i}=\sigma(W_{i} \mathbf{z}_{i-1}+\mathbf{b}_{i}),\quad i=1,\dots,n,\\ \text{Output layer:}&\quad z_{n+1}=W_{n+1}\mathbf{z}_{ n}+\mathbf{b}_{n+1},\\ \text{Trial function:}&\quad\tilde{u}=g(\mathbf{x})z_{n+1}. \end{split} \tag{6}\]
where the input and output layers have a width \(N_{0}=d\) and \(N_{n+1}=1\), respectively. The dimension of the finite-dimensional space of functions generated by the neural network (6) is now given by \(N_{\theta}=\sum_{i=1}^{n+1}N_{i}(N_{i-1}+1)=N_{1}(d+1)+\sum_{i=2}^{n}N_{i}(N_{ i-1}+1)+(N_{n}+1)\).
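For \(\varOmega=(0,1)\), the trial function (6) with \(g(x)=x(1-x)\) can be sketched on top of the `FNN` class above:

```python
class TrialFunction(nn.Module):
    """Trial function (6) on (0, 1): u_tilde(x) = x (1 - x) FNN(x)."""
    def __init__(self, widths=(1, 20, 1)):
        super().__init__()
        self.net = FNN(list(widths))

    def forward(self, x):
        # g(x) = x (1 - x) vanishes at x = 0 and x = 1, imposing the BCs strongly
        return x * (1.0 - x) * self.net(x)
```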
For the rest of this work, the boundary conditions will be strongly imposed, so that the loss function will henceforth be
\[\mathcal{L}(\theta)=\int_{\varOmega}R\big{(}\mathbf{x},\tilde{u}(\mathbf{x})\big{)}^{ 2}dx. \tag{7}\]
The problem that one solves by PINNs can thus be formulated as:
\[\min_{\theta\in\mathbb{R}^{N_{\theta}}}\mathcal{L}(\theta)=\min_{\theta\in \mathbb{R}^{N_{\theta}}}\int_{\varOmega}R\big{(}\mathbf{x},\tilde{u}(\mathbf{x})\big{)} ^{2}dx. \tag{8}\]
One advantage of PINNs is that they do not necessarily need the construction of a mesh, which is often a time-consuming process. Instead, the integral in the loss function can be approximated using Monte Carlo integration from randomly generated points in \(\varOmega\). Another advantage is the ease of implementation of the boundary and initial conditions. On the other hand, one major issue that one faces when using PINNs is that it is very difficult, even impossible, to effectively reduce the \(L^{2}\) or \(H^{1}\) error in the solutions to machine precision. The main reason, from our own experience, is that the solution process may get trapped in some local minima, without being able to converge to the global minimum, when using non-convex optimization algorithms. We briefly review some commonly used optimizers and study their performance in the next section.
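To make the mesh-free loss concrete, here is a minimal sketch (ours, assuming the `TrialFunction` class above) of the Monte Carlo estimate of (7) for the one-dimensional Poisson problem \(-\partial_{xx}u=f\) on \((0,1)\), with derivatives computed by automatic differentiation; each call draws fresh collocation points, which amounts to re-sampling the integral at every iteration:

```python
def pinn_loss(u_tilde, f, n_points=1000):
    """Monte Carlo estimate of the loss (7) for -u'' = f on (0, 1)."""
    x = torch.rand(n_points, 1, requires_grad=True)       # random collocation points
    u = u_tilde(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = f(x) + u_xx                                # R(x, u) = f - (-u'')
    return residual.pow(2).mean()
```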
### Choice of the optimization algorithm
The objective functions in PINNs are by nature non-convex, which makes the minimization problems difficult to solve and their solutions highly dependent on the choice of the solver. For these reasons, it is common practice to employ gradient-based methods, such as the Adam optimizer [17]
or the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [8]. BFGS is a second-order optimizer but, if used alone, has the tendency to converge to a local minimum in the early stages of the training. A widely used strategy to overcome this deficiency is to begin the optimization process using the Adam optimizer and subsequently switch to the BFGS optimizer [23]. In this work, we will actually utilize the so-called L-BFGS optimizer, the limited-memory version of BFGS provided in PyTorch [27]. Although L-BFGS is a higher-order method than Adam, the computational cost for each iteration is also much higher than the cost for one iteration of Adam. We adopt here the following definition of an iteration: in both algorithms, it corresponds to a single update of the neural network parameters.
In the following example, we study the performance of the aforementioned strategy, when applied to a simple one-dimensional Poisson problem, and compare the resulting solution with that obtained when using the Adam optimizer only. This numerical example will also serve later as a model problem for further verifications of the underlying principles in our approach.
**Example 1**.: _Given a function \(f(x)\), the problem consists in finding \(u=u(x)\), for all \(x\in[0,1]\), that satisfies_
\[-\partial_{xx}u(x) =f(x),\qquad\forall x\in(0,1),\] \[u(0) =0, \tag{9}\] \[u(1) =0.\]
_For the purpose of the study, the source term \(f\) is chosen such that the exact solution to the problem is given as_
\[u(x)=e^{\sin(k\pi x)}+x^{3}-x-1, \tag{10}\]
_where \(k\) is a given integer. We take \(k=2\) in this example._
_We consider here a network made of only one hidden layer of a width of \(20\), i.e. \(n=1\) and \(N_{1}=20\). Moreover, \(N_{0}=N_{2}=1\). The learning rates for the Adam optimizer and L-BFGS are set to \(10^{-2}\) and unity, respectively. In the first experiment, the network is trained for 10,000 iterations using Adam. In the second experiment, it is trained with Adam for 4,000 iterations followed by 100 iterations of L-BFGS. Figure 2 compares the evolution of the loss function with respect to the number of iterations for these two scenarios. In the first case, we observe that the loss function laboriously reaches a value around \(10^{-2}\) after 10,000 iterations. The loss function further decreases in the second case but still plateaus around \(5\times 10^{-5}\) after about 30 iterations of L-BFGS. Note that the scale along the \(x\)-axis in the figure on the right has been adjusted in order to account for the large discrepancy in the number of iterations used with Adam and L-BFGS._
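The Adam-then-L-BFGS strategy of this example can be sketched as follows (our code, not the authors'); the source term below is obtained by differentiating the manufactured solution (10) with \(k=2\) by hand, and re-sampling collocation points inside the L-BFGS closure is a simplification, since a fixed point set may interact better with its line search:

```python
import math

k = 2
def f(x):   # f = -u'' for the manufactured solution (10), computed by hand
    s, c = torch.sin(k * math.pi * x), torch.cos(k * math.pi * x)
    return -(k * math.pi) ** 2 * torch.exp(s) * (c ** 2 - s) - 6.0 * x

model = TrialFunction(widths=(1, 20, 1))
adam = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(4000):                                     # Adam phase
    adam.zero_grad()
    pinn_loss(model, f).backward()
    adam.step()

lbfgs = torch.optim.LBFGS(model.parameters(), lr=1.0, max_iter=100)
def closure():                                            # L-BFGS re-evaluates the loss
    lbfgs.zero_grad()
    loss = pinn_loss(model, f)
    loss.backward()
    return loss
lbfgs.step(closure)
```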
## 3 Error analysis in PINNs
In this section, we further study the numerical errors, and a fortiori, the sources of error, in the solutions obtained with PINNs. We have mentioned in the introduction that two issues may actually affect the quality of the solutions. Indeed, it is well known that the training of the neural networks may perform poorly if the data, in our case the source term in the differential equations, are not properly normalized [10]. Moreover, the accuracy may deteriorate when the solutions to the problem exhibit high frequencies. We briefly review here the state-of-the-art in dealing with those two issues as they will be of paramount importance in the development of the multi-level neural network approach. More specifically, we illustrate on simple numerical examples how these issues can be somewhat mitigated.
### Data normalization
A major issue when solving a boundary-value problem such as (2)-(3) with PINNs is the amplitude of the problem data, in particular, the size of the source term \(f(x)\). In other words, a small source term naturally implies that the target solution will also be small, making it harder for the training to find an accurate approximation of the solution to the boundary-value problem. This issue will become very relevant and crucial when we design the multi-level neural network approach in Section 4. Our goal here is to illustrate through a numerical example that the accuracy of the solution clearly depends on the amplitude of the data, and hence of the solution itself, and that it may therefore be necessary to scale the solution before minimizing the cost functional. We hence revisit Example 1 of Section 2.3.
Figure 2: Results from Example 1 in Section 2.3. (Left) Exact solution with \(k=2\). (Middle) Evolution of the loss function using the Adam optimizer only. (Right) Evolution of the loss function using the Adam optimizer and L-BFGS.
**Example 2**.: _We solve again the Poisson problem (9) in \(\Omega=(0,1)\) with \(k=2\). However, we deliberately divide the source term \(f(x)\) by a factor \(\mu\) such that the exact solution is changed to_
\[u(x)=\frac{1}{\mu}\big{(}e^{\sin(k\pi x)}+x^{3}-x-1\big{)}.\]
_A large value of \(\mu\) implies a small \(f\), and hence, a small \(u\). We now compare the solutions of the problem for several values of \(\mu\) with different orders of magnitude, namely \(\mu=\{10^{-3},1,10^{3}\}\). For the training, we consider the Adam optimizer followed by L-BFGS using the same network architecture and hyper-parameters as in Section 2.3._
_We show in Figure 3 (left) the solution for \(k=2\) and \(\mu=1\). We want to draw attention to the fact that the maximal amplitude of this solution is roughly unity._
_The evolution of the loss function during training is shown in Figure 3 (middle) for the three values of \(\mu\). We actually plot each loss function as computed but divided by \(\mu^{2}\) for a clearer
Figure 4: Results from Example 2 in Section 3.1: Pointwise error \(e(x)=u(x)-\tilde{u}(x)\) for \(\mu=10^{-3}\), \(1\), and \(10^{3}\).
Figure 3: Results from Example 2 in Section 3.1: (Left) Exact solution with \(k=2\) and \(\mu=1\). (Middle) Evolution of the loss function for \(\mu=10^{-3}\), \(1\), and \(10^{3}\). (Right) Distribution of the absolute value of the weights in the last layer before and after training.
comparison. For \(\mu=1\), we observe that the loss function converges much faster and achieves a much smaller residual at the end of the training, than for \(\mu=10^{3}\) and \(\mu=10^{-3}\). We show in Figure 4 the errors in the three solutions obtained after training. The error in the solution computed with \(\mu=1\) is indeed several orders of magnitude smaller than the error in the other two solutions. An important observation from these plots is that smaller errors induce higher frequencies. Hence, if one wants to reduce the error even further, it would become necessary to have an algorithm that allows one to capture those higher frequencies. This issue is addressed in the next section._
_We remark that the solution obtained with \(\mu=1\) would actually provide a more accurate approximation to the problem with \(\mu=10^{3}\) (resp. \(\mu=10^{-3}\)) after simply rescaling it by \(\mu^{-1}=10^{-3}\) (resp. \(10^{3}\)). In other words, it illustrates the fact that, when using PINNs, the process of multiplying the source term by \(\mu\), solving, and then re-scaling the solution by \(\mu^{-1}\) is not equivalent to directly solving the problem._
_The distribution of the weights in the output layer \(|\mathbf{W}_{n+1}|\) obtained after initialization and training is shown in Figure 3 (right) for each \(\mu\). First, we observe that the final weights for \(\mu=10^{-3}\) are very different from their initial values and sometimes exceed \(10^{3}\). Second, we would expect the trained parameters for \(\mu=10^{3}\) to be three orders of magnitude smaller than those for \(\mu=1\). However, it seems that the network has difficulty decreasing the values of these weights. As a consequence, the training fails to properly converge in the two cases \(\mu=10^{3}\) and \(\mu=10^{-3}\). A reasonable explanation is that an accurate solution cannot be obtained if the optimal weights lie far from their initialized values, since in this case the training of the network is more demanding. This implies that, for a very small or very large value of \(\mu\), an efficient initialization will not suffice to improve the training. One could perhaps adjust the learning rate for the last layer, but finding the proper value of the learning rate is far from a straightforward task. Therefore, a simpler approach would be to normalize the solution being sought so that the output of the neural network is largely of the order of unity. We will propose such an approach in Section 4._
### Solutions with high frequencies
A deep neural network usually adheres to the F-principle [30, 32, 39], which states that the neural network tends to approximate the low-frequency components of a function before its high-frequency components. This property explains why networks approximate well functions featuring a low-frequency spectrum while avoiding aliasing, leading to reasonable generalization errors. The F-principle also serves as a filter for noisy data and provides an early stopping criterion to avoid overfitting. When it comes to handling higher frequencies, one is generally exposed to the risks of overfitting and of the lack of convexity of the loss function. Unfortunately, there exist few guidelines, to the best of our knowledge, to ensure that the training yields accurate solutions in those cases. As is often the case with PINNs, the quality of the obtained solutions depends on the experience of the user with the initialization of the hyper-parameters.
Several studies, see e.g. [35, 21, 25], have put forward some techniques to improve neural networks in approximating high-frequency functions. We start by providing a concise overview of the Fourier feature mapping presented in [25], which we shall use in this work, and proceed with an illustration of its performance on a simple one-dimensional example.
In order to simultaneously approximate the low and high frequencies, the main idea behind the method is to explicitly introduce high-frequency modes within the network using the so-called Fourier feature mapping. Let \(\mathbf{\omega}_{M}\) denote the vector of \(M\) given wave numbers \(\omega_{m}\), \(m=1,\ldots,M\), that is \(\mathbf{\omega}_{M}=[\omega_{1},\ldots,\omega_{M}]\). The mapping \(\gamma\) for each spatial component \(x_{j}\) is provided by the row vector of size \(2M\) defined as:
\[\gamma(x_{j})=[\cos(\mathbf{\omega}_{M}x_{j}),\sin(\mathbf{\omega}_{M}x_{j})],\qquad j =1,\ldots,d, \tag{11}\]
where we have used the shorthand:
\[\cos(\mathbf{\omega}_{M}x_{j}) =[\cos(\omega_{1}x_{j}),\cos(\omega_{2}x_{j}),\ldots,\cos(\omega_ {M}x_{j})],\] \[\sin(\mathbf{\omega}_{M}x_{j}) =[\sin(\omega_{1}x_{j}),\sin(\omega_{2}x_{j}),\ldots,\sin(\omega_ {M}x_{j})].\]
As shown with the Neural Tangent Kernel theory in [36], the Fourier feature mapping helps the network learn the high and low frequencies simultaneously. The structure of the feedforward neural network (6) is now modified as follows. Considering a network with an input layer of width \(N_{0}=2M\times d\) and an output layer of width \(N_{n+1}=1\), the trial functions \(\tilde{u}\) are taken in the form:
\[\begin{split}\text{Input layer:}&\quad\mathbf{z}_{0}=[ \gamma(x_{1}),\ldots,\gamma(x_{d})]^{T},\\ \text{Hidden layers:}&\quad\mathbf{z}_{i}=\sigma(W_{i }\mathbf{z}_{i-1}+\mathbf{b}_{i}),\quad i=1,\ldots,n,\\ \text{Output layer:}&\quad z_{n+1}=W_{n+1}\mathbf{z}_{n }+\mathbf{b}_{n+1},\\ \text{Trial function:}&\quad\tilde{u}=g(\mathbf{x})z_{n+1}. \end{split} \tag{12}\]
The dimension of the finite-dimensional space of trial functions is given in this case by \(N_{\theta}=N_{1}(2Md+1)+\sum_{i=2}^{n}N_{i}(N_{i-1}+1)+(N_{n}+1)\).
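To make the construction concrete, here is a minimal PyTorch sketch of the mapping (11) and of the trial functions (12) in one dimension; the class name, the width, and the wave numbers are illustrative choices and not the exact implementation used in this work.

```python
import torch

def fourier_features(x, omegas):
    """Mapping (11): x has shape (batch, d), omegas has shape (M,).
    Returns the concatenated cos/sin features of shape (batch, 2*M*d)."""
    proj = x.unsqueeze(-1) * omegas                       # (batch, d, M)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1).flatten(1)

class FourierPINN(torch.nn.Module):
    """Architecture (12) in 1D, with g(x) = x(1-x) imposing u(0) = u(1) = 0."""
    def __init__(self, omegas, width=10):
        super().__init__()
        self.omegas = omegas
        self.hidden = torch.nn.Linear(2 * len(omegas), width)
        self.out = torch.nn.Linear(width, 1)

    def forward(self, x):                                 # x: (batch, 1)
        z = torch.tanh(self.hidden(fourier_features(x, self.omegas)))
        return x * (1.0 - x) * self.out(z)                # g(x) * z_{n+1}

# Geometric wave numbers omega_m = 2^{m-1} pi on (0, 1), as suggested in [25].
omegas = torch.pi * 2.0 ** torch.arange(4.0)              # [pi, 2pi, 4pi, 8pi]
u_tilde = FourierPINN(omegas)(torch.rand(8, 1))
```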
In a similar manner, we will show on a numerical example that using a function \(g(\mathbf{x})\) whose spectrum contains both low and high frequencies also improves the convergence of the solutions. In the present work, we only consider one-dimensional problems or two-dimensional problems on
rectangular domains so that one can introduce a new mapping \(\gamma_{g}\) in terms of only the sine functions and thus strongly impose the boundary conditions by:
\[\gamma_{g}(x_{j})=[\sin(\boldsymbol{\omega}_{M}x_{j})],\qquad j=1,\ldots,d. \tag{13}\]
We note here that the wave number vector \(\boldsymbol{\omega}_{M}\) is the same as in \(\gamma\) and should be chosen such that all sine functions vanish on the boundary \(\partial\varOmega\). In that case, we consider a feedforward neural network with an input layer of width \(N_{0}=2M\times d\) and an output layer of width \(N_{n+1}=M\), so that the trial functions \(\tilde{u}\) are given by:
\[\begin{split}\text{Input layer:}&\quad\boldsymbol{z }_{0}=[\gamma(x_{1}),\ldots,\gamma(x_{d})]^{T},\\ \text{Hidden layers:}&\quad\boldsymbol{z}_{i}= \sigma(W_{i}\boldsymbol{z}_{i-1}+\boldsymbol{b}_{i}),\quad i=1,\ldots,n,\\ \text{Output layer:}&\quad\boldsymbol{z}_{n+1}=W_{n+ 1}\boldsymbol{z}_{n}+\boldsymbol{b}_{n+1},\\ \text{Trial function:}&\quad\tilde{u}=M^{-1}\big{(} \varPi_{j=1}^{d}\gamma_{g}(x_{j})\big{)}\cdot\boldsymbol{z}_{n+1},\end{split} \tag{14}\]
where the trial function is divided by \(M\) in order to normalize the output. The dimension of the finite-dimensional space of trial functions generated by the neural network (6) is now given by \(N_{\theta}=N_{1}(2Md+1)+\sum_{i=2}^{n}N_{i}(N_{i-1}+1)+M(N_{n}+1)\). We reiterate here that the output \(\boldsymbol{z}_{n+1}\) needs to be multiplied by sine functions that vanish on the boundary in order to strongly impose the boundary condition. When \(\varOmega=(0,\ell)^{d}\), an appropriate choice for the parameters \(\omega_{m}\) is given by the geometric series \(\omega_{m}=2^{m-1}\pi/\ell\), with \(m=1,\ldots,M\), as suggested in [25].
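Similarly, a sketch of the trial functions (14), in which the boundary conditions are imposed through the sine-only mapping (13), could read as follows, reusing the `fourier_features` helper from the previous sketch; again, all names and sizes are illustrative.

```python
def gamma_g(x, omegas):
    """Mapping (13): sine-only features, assuming each sin(omega_m * x_j)
    vanishes on the boundary of the domain."""
    return torch.sin(x.unsqueeze(-1) * omegas)            # (batch, d, M)

class FourierPINNStrongBC(torch.nn.Module):
    """Architecture (14): output layer of width M, contracted against the
    product of the gamma_g mappings over the spatial dimensions, then
    divided by M to normalize the output."""
    def __init__(self, omegas, width=10):
        super().__init__()
        self.omegas = omegas
        self.hidden = torch.nn.Linear(2 * len(omegas), width)
        self.out = torch.nn.Linear(width, len(omegas))

    def forward(self, x):                                 # x: (batch, d)
        z = torch.tanh(self.hidden(fourier_features(x, self.omegas)))
        z = self.out(z)                                   # (batch, M)
        g = gamma_g(x, self.omegas).prod(dim=1)           # Pi_j gamma_g(x_j)
        return (g * z).sum(dim=-1, keepdim=True) / len(self.omegas)
```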
We now compare the performance of the three approaches using Example 1, in order to show the importance of introducing high frequencies in the input layer and in the function used to enforce the boundary conditions. The three methods can be summarized as follows:
* **Method 1:** Classical PINNs with input \(x\) and trial functions provided by the neural network (6) with \(g(x)=x(1-x)\).
* **Method 2:** PINNs using the Fourier feature mapping for the input and trial functions provided by the neural network (12) with \(g(x)=x(1-x)\).
* **Method 3:** PINNs using the Fourier feature mapping for the input and trial functions provided by the neural network (14).
**Example 3**.: _We solve the Poisson problem (9) in \(\varOmega=(0,1)\) with \(k=10\). The exact solution is given in (10) and shown in Figure 5 (left). The networks all have a single hidden layer of width \(N_{1}=10\). As before, the learning rates for Adam and L-BFGS are chosen as \(10^{-2}\) and unity, respectively. The training is performed for 4,000 iterations with Adam and 100 iterations with L-BFGS. The vector \(\mathbf{\omega}_{M}\) of wave numbers \(\omega_{m}\), \(m=1,\ldots,M\), is constructed from a geometric series with \(M=4\), i.e. \(\mathbf{\omega}_{M}=[\pi,2\pi,4\pi,8\pi]\)._
_We first observe in Figure 5 (middle) that Method 1 fails to converge. On the other hand, Method 2 allows one to decrease the loss function by six orders of magnitude, and Method 3 further reduces the loss function by almost two orders of magnitude. This example indicates that it is best to use the architecture given in (14) when dealing with solutions with high frequencies._
_We show in Figure 5 (right) the absolute value of the initialized and trained parameters for each method. We notice that the trained parameters are very large in the case of the first method, with some of them reaching values as large as \(10^{4}\). In contrast, the values remain much smaller in the case of Methods 2 and 3, with the parameters of Method 3 staying closer to the initialized parameters when compared to those obtained by Method 2. For Method 1, the weights in the hidden layers need to be large so as to be able to capture the high frequencies, as seen in Figure 5 (right). If one uses Method 2 to obtain an approximation of the exact solution (10), the function computed by the output layer in (12) should converge to the function:_
\[\frac{u(x)}{x(1-x)}=\frac{e^{\sin(k\pi x)}+x^{3}-x-1}{x(1-x)}.\]
_However, this function takes on large values near the boundary. Indeed, when \(x\) tends to \(0\), the limit is equal to \(k\pi-1\), which becomes large for large values of \(k\). Hence, the parameters of the network after training will tend to take large values in order to approximate well the solution, as explained in Section 3.1. In order to avoid these issues, we have thus introduced the architecture (14), such that the functions used to enforce the boundary conditions contain a mix of low and high frequencies. Method 3 thus allows one to get a solution whose trained parameters remain of the same order as the initial ones, as observed in Figure 5 (right)._

Figure 5: Results from Example 3 in Section 3.2: (Left) Exact solution with \(k=10\). (Middle) Evolution of the loss function for the three methods. (Right) Distribution of the absolute value of the initialized and trained parameters for the three methods.
Figure 6: Results from Example 3 in Section 3.2: Pointwise error \(e(x)=u(x)-\tilde{u}(x)\) for Methods 1, 2, and 3.

Figure 7: Results from Example 3 in Section 3.2: Residual \(R(x)\) associated with the partial differential equation at the end of the training using Methods 1, 2, and 3. Note that the scale along the \(y\)-axis is different from one plot to the other.

_Finally, we show in Figure 6 the pointwise error \(e(x)=u(x)-\tilde{u}(x)\) obtained at the end of the training for Methods 1, 2, and 3. Note that the scale along the \(y\)-axis is different on the graphs. As expected, the pointwise error obtained by Method 1 is of the same order as the solution itself. Moreover, we observe that the maximum value of \(|e(x)|\) using Method 3 is smaller than that obtained with Method 2. Hence, the architecture presented in Method 3 yields a better solution when compared to the other two methods. We observed in Example 2 that smaller approximation errors contained higher frequencies. The picture is slightly different here. If we closely examine the pointwise error obtained by Methods 2 or 3, we observe that the error contains both a low-frequency component of large amplitude and a high-frequency component of small amplitude. In order to explain this phenomenon, we plot in Figure 7 the residual \(R(x)\) associated with the partial differential equation for the three methods. For Method 1, we observe that the residual is still very large by the end of the training since the method did not converge. For Methods 2 and 3, the residual is a high-frequency function as the second-order derivatives of the solution tend to amplify its high-frequency components, as confirmed by the trivial calculation:_
\[\frac{d^{2}}{dx^{2}}\sin(\omega x)=-\omega^{2}\sin(\omega x).\]
_It follows that the high-frequency components of the solution will be reduced first since the training is based on minimizing the residual of the partial differential equation. On the other hand, the error, see e.g. Figure 6 (middle) or (right), includes some low-frequency contributions, which are imperceptible in the plot of the residual. To further reduce the pointwise error, the objective should then be to reduce the low-frequency modes alone, without the need to reduce the high frequencies whose amplitudes are smaller._
In summary, we have seen through numerical experiments that the accuracy of the solutions may be affected by the scale of the problem data and the range of frequencies inherent to the solutions. The methodology that we describe below allows one to address these issues, namely, to reduce the error in neural network solutions obtained with the PINNs approach down to machine precision.
## 4 Multi-level neural networks
In this section, we describe the multi-level neural networks, whose main objective is to improve the accuracy of the solutions obtained by PINNs. Supposing that an approximation \(\tilde{u}\) of the solution \(u\) to Problem (2)-(3) has been computed, the error in \(\tilde{u}\) is defined as \(e(\mathbf{x})=u(\mathbf{x})-\tilde{u}(\mathbf{x})\) and satisfies:
\[R(\mathbf{x},u(\mathbf{x}))=f(\mathbf{x})-Au(\mathbf{x})=f(\mathbf{x})-A\tilde{u}( \mathbf{x})-Ae(\mathbf{x})=R(\mathbf{x},\tilde{u}(\mathbf{x}))-Ae(\mathbf{x})=0,\quad\forall\mathbf{x} \in\Omega,\] \[B(\mathbf{x},u(\mathbf{x}))=B(\mathbf{x},\tilde{u}(\mathbf{x}))+B(\mathbf{x},e(\mathbf{ x}))=B(\mathbf{x},e(\mathbf{x}))=0,\quad\forall\mathbf{x}\in\partial\Omega,\]
where we have used the fact that \(A\) and \(B\) are linear operators and \(\tilde{u}\) strongly verifies the boundary condition. In other words, the error function \(e(x)\) satisfies the new problem in the residual form:
\[\tilde{R}(\mathbf{x},e(\mathbf{x}))=R(\mathbf{x},\tilde{u}(\mathbf{x}))-Ae(\mathbf{x })=0,\quad\forall\mathbf{x}\in\Omega, \tag{15}\] \[B(\mathbf{x},e(\mathbf{x}))=0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{16}\]
We notice that the above problem for the error has exactly the same structure as the original problem, with two notable differences: 1) the source term \(R(\mathbf{x},\tilde{u}(\mathbf{x}))\) in the error equation may be small, 2) the error \(e(\mathbf{x})\) may be prone to higher frequency components than \(\tilde{u}\). Our earlier observations suggest that we find an approximation \(\tilde{e}\) of the error using the PINNs approach after normalizing the source term by a scaling parameter \(\mu\), chosen in such a way that the error is scaled to a unit maximum amplitude. The new problem becomes:
\[\tilde{R}(\mathbf{x},e(\mathbf{x}))=\mu R(\mathbf{x},\tilde{u}(\mathbf{x}))-Ae(\bm {x}) =0,\quad\forall\mathbf{x}\in\Omega, \tag{17}\] \[B(\mathbf{x},e(\mathbf{x})) =0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{18}\]
The dimension of the new neural network to approximate \(e\) should be larger than that used to find \(\tilde{u}\), due to the presence of higher frequency modes in \(e\). In particular, the number of wave numbers \(M\) in the Fourier feature mapping should be increased. The idea is to some extent akin to a posteriori error estimation techniques developed for Finite Element methods, see e.g. [5, 29, 2, 26, 4]. Finally, one should expect that the optimization algorithm should once again reach a plateau after a certain number of iterations and that the process should be repeated to estimate a new correction to the error \(e\).
We thus propose an iterative procedure, referred here to as the "multi-level neural network method", in order to improve the accuracy of the solutions when using PINNs (or any other neural network procedure based on residual reduction). We start by modifying the notation due to the iterative nature of the process. As mentioned in the previous section, the source term \(f\) may need to be normalized by a scaling parameter \(\mu_{0}\), so that we reconsider the initial solution \(u_{0}\) satisfying a problem in the form:
\[R_{0}(\mathbf{x},u_{0}(\mathbf{x}))=\mu_{0}f(\mathbf{x})-Au_{0}(\mathbf{x}) =0,\quad\forall\mathbf{x}\in\Omega, \tag{19}\] \[B(\mathbf{x},u_{0}(\mathbf{x})) =0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{20}\]
The above problem can then be approximated using a neural network to obtain an approximation \(\tilde{u}_{0}\) of \(u_{0}\). Hence, the first approximation \(\tilde{u}\) to \(u\) reads after scaling \(\tilde{u}_{0}\) with \(\mu_{0}\):
\[\tilde{u}(\mathbf{x})=\frac{1}{\mu_{0}}\tilde{u}_{0}(\mathbf{x}). \tag{21}\]
We would like now to estimate the error in \(\tilde{u}\). However, we find it easier to work in terms of \(\tilde{u}_{0}\). Therefore, we look for a new correction \(u_{1}\) that solves the problem:
\[R_{1}(\mathbf{x},u_{1}(\mathbf{x}))=\mu_{1}R_{0}(\mathbf{x},\tilde{u}_{0}( \mathbf{x}))-Au_{1}(\mathbf{x}) =0,\quad\forall\mathbf{x}\in\Omega, \tag{22}\] \[B(\mathbf{x},u_{1}(\mathbf{x})) =0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{23}\]
Once again, one can compute an approximation \(\tilde{u}_{1}\) of \(u_{1}\) using PINNs. Since \(\tilde{u}_{1}\) can be viewed as the normalized correction to the error in \(\tilde{u}_{0}(\mathbf{x})\), the new approximation to \(u\) is now given by:
\[\tilde{u}(\mathbf{x})=\frac{1}{\mu_{0}}\tilde{u}_{0}(\mathbf{x})+\frac{1}{\mu_{0}\mu_ {1}}\tilde{u}_{1}(\mathbf{x}). \tag{24}\]
The process can be repeated \(L\) times to find corrections \(u_{i}\) at each level \(i=1,\ldots,L\) given the prior approximations \(\tilde{u}_{0}\), \(\tilde{u}_{1}\),..., \(\tilde{u}_{i-1}\). Each new correction \(u_{i}\) then satisfies the boundary-value problem:
\[R_{i}(\mathbf{x},u_{i}(\mathbf{x}))=\mu_{i}R_{i-1}(\mathbf{x},\tilde{u}_{i-1} (\mathbf{x}))-Au_{i}(\mathbf{x}) =0,\quad\forall\mathbf{x}\in\Omega, \tag{25}\] \[B(\mathbf{x},u_{i}(\mathbf{x})) =0,\quad\forall\mathbf{x}\in\partial\Omega. \tag{26}\]
After finding an approximation \(\tilde{u}_{i}\) to each of those problems up to level \(i\), one can obtain a new approximation \(\tilde{u}\) of \(u\) such that:
\[\tilde{u}(\mathbf{x})=\frac{1}{\mu_{0}}\tilde{u}_{0}(\mathbf{x})+\frac{1}{\mu_{0}\mu_ {1}}\tilde{u}_{1}(\mathbf{x})+\ldots+\frac{1}{\mu_{0}\mu_{1}\ldots\mu_{i}}\tilde{u }_{i}(\mathbf{x}). \tag{27}\]
Once the approximations \(\tilde{u}_{i}\) have been found at all levels \(i=0,\ldots,L\), the final approximation at the end of the process would then be given by:
\[\tilde{u}(\mathbf{x})=\sum_{i=0}^{L}\frac{1}{\Pi_{j=0}^{i}\mu_{j}}\tilde{u}_{i}( \mathbf{x}). \tag{28}\]
Using PINNs, the neural network approximation \(\tilde{u}_{i}\) (which implicitly depends on the network parameters \(\theta\)) for each error correction will be obtained by solving the following minimization problem:
\[\min_{\theta\in\mathbb{R}^{N_{\theta,i}}}\mathcal{L}_{i}(\theta)=\min_{\theta \in\mathbb{R}^{N_{\theta,i}}}\int_{\Omega}R_{i}\big{(}\mathbf{x},\tilde{u}_{i}( \mathbf{x})\big{)}^{2}dx, \tag{29}\]
where \(N_{\theta,i}\) denotes the dimension of the function space generated by the neural network used at level \(i\). We recall that the boundary conditions are strongly imposed and, hence, do not appear in the loss functions \(\mathcal{L}_{i}(\theta)\). Since each correction \(\tilde{u}_{i}\) is expected to have higher frequency contents, the size \(N_{\theta,i}\) of the networks should be increased at each level. Moreover, the number of iterations used in the optimization algorithms Adam and L-BFGS will be increased as well, since more iterations are usually needed to approximate higher frequency functions. For illustration purposes, we consider a simple one-dimensional numerical example and use once again the setting of Example 1 in Section 3.1.
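For concreteness, the following self-contained PyTorch sketch implements the loop (25)-(28) for the 1D Poisson problem \(-u''=f\) with homogeneous Dirichlet conditions; it replaces the Fourier feature architecture by a plain \(x(1-x)\) factor, uses Adam only, and takes a priori scaling parameters, so all sizes and values of \(\mu_{i}\) are illustrative assumptions rather than the settings used in the experiments.

```python
import torch

torch.manual_seed(0)

def laplacian(u, x):
    """Second derivative d2u/dx2 computed with automatic differentiation."""
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return torch.autograd.grad(du.sum(), x, create_graph=True)[0]

def make_trial(width):
    """Single-hidden-layer network; the factor x(1-x) imposes u(0)=u(1)=0."""
    net = torch.nn.Sequential(torch.nn.Linear(1, width), torch.nn.Tanh(),
                              torch.nn.Linear(width, 1))
    return net, lambda x: x * (1.0 - x) * net(x)

def train(u_trial, params, rhs, iters, lr=1e-2):
    """Minimize the mean squared residual of -u'' = rhs, cf. (29)."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(iters):
        x = torch.rand(256, 1, requires_grad=True)
        res = laplacian(u_trial(x), x) + rhs(x).detach()
        loss = (res ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

# Multi-level loop, cf. (25)-(28), for -u'' = f on (0,1), u(0) = u(1) = 0.
k = 2.0
f = lambda x: (k * torch.pi) ** 2 * torch.sin(k * torch.pi * x)  # u = sin(k pi x)
mus, widths, iters = [1.0, 1e3, 1e3], [10, 20, 40], [2000, 2000, 2000]

levels, rhs = [], f
for mu, width, it in zip(mus, widths, iters):
    net, u_i = make_trial(width)
    train(u_i, net.parameters(), lambda x, r=rhs, m=mu: m * r(x), it)
    levels.append(u_i)
    # residual left after this level: R_i(x) = mu_i * R_{i-1}(x) + u_i''(x)
    rhs = lambda x, r=rhs, m=mu, u=u_i: m * r(x) + laplacian(u(x), x)

def u_approx(x):
    """Combine the corrections, cf. (28): u ~ sum_i u_i / (mu_0 ... mu_i)."""
    out, scale = torch.zeros_like(x), 1.0
    for mu, u_i in zip(mus, levels):
        scale *= mu
        out = out + u_i(x) / scale
    return out
```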
**Example 4**.: _We solve Problem (9) with \(k=2\) whose exact solution is given by (10). We consider three levels of the multi-level neural networks, i.e. \(L=3\), in addition to the initial solve, so that the approximation \(\tilde{u}\) will be obtained using four sequential neural networks. We choose networks with a single hidden layer of width \(N_{1}\) given by \(\{10,20,40,20\}\). The networks are first trained with 4,000 iterations of Adam followed by \(\{200,400,600,0\}\) iterations of L-BFGS. The mappings of the input and boundary conditions are chosen with \(M=\{1,3,5,1\}\) wave numbers. In this example,
the scaling parameter \(\mu_{i}\), \(i=0,\ldots,3\), for \(\tilde{u}_{0}\) and the three corrections \(\tilde{u}_{i}\), are chosen here as \(\mu_{i}=\{1,10^{3},10^{3},10^{2}\}\). In the next section, we will present a simple approach to evaluate these normalization factors. We note that the last network has been designed to approximate functions with a low-frequency content. This choice will be motivated below._
_We present in Figure 8 the evolution of the loss function and of the errors in the \(L^{2}\) and \(H^{1}\) norms with respect to the number of optimization iterations, along with the pointwise error at the end of the training. We first observe that each error correction allows one to converge closer to the exact solution. More precisely, we gain almost seven orders of magnitude in the \(L^{2}\) error thanks to the introduced corrections. Indeed, after three corrections, the maximum pointwise error is around \(6\times 10^{-12}\), which is much smaller than the error we obtained with \(\tilde{u}_{0}\) alone. To better explain our choice of the number \(M\) of wave numbers at each level, we show in Figure 9 the computed corrections \(\tilde{u}_{i}\). We observe that each correction approximates higher frequency functions than the previous one, except \(\tilde{u}_{3}\). In fact, once we start approximating the high-frequency errors, it becomes harder to capture the low-frequencies with larger amplitudes. This phenomenon was actually observed and described in Example 3. Here, we see that the loss function eventually decreases during the training of \(\tilde{u}_{2}\), but that the \(L^{2}\) error has the tendency to oscillate while slightly decreasing. It turns out that this behavior can be attributed to the choice of the loss function \(\mathcal{L}_{2}\), in which the higher frequencies are penalized more than the lower ones. In other words, we have specifically designed the last network to approximate only low-frequency functions and be trained using Adam only. Thanks to this architecture, the \(L^{2}\) error significantly decreases during the training of \(\tilde{u}_{3}\), without a noticeable decrease in the loss function. As a remark, longer training for \(\tilde{u}_{2}\) would correct the lower frequencies while correcting high frequencies with smaller amplitudes, but it is in our opinion more efficient, with respect to the number of iterations, to simply introduce a new network targeting the low frequencies._

Figure 8: Results from Example 4 in Section 4: (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections. The regions shown by \(L_{i}\), \(i=0,\ldots,3\), in the first two graphs indicate the region in which the neural network at level \(i\) is trained.
In the last example, we observe that the maximal values in the three corrections \(\tilde{u}_{i}\), \(i=1,2,3\), range in absolute value from \(0.002\) to \(0.015\), see Figure 9, whereas their values should ideally be of order one if proper normalization were used. The reason is that we provided a priori values for the scaling parameters \(\mu_{i}\). It appears that those values were not optimal, i.e. too small, yielding solutions whose amplitudes were two to three orders of magnitude smaller than the ones we should expect. The multi-level neural network approach was still able to improve the accuracy of the solution despite sub-optimal values of the scaling parameters.
Figure 9: Results from Example 4 in Section 4: Approximation \(\tilde{u}_{0}(x)\) and corrections \(\tilde{u}_{i}(x)\), \(i=1,2,3\).

Ideally, one would like to have a method to uncover appropriate values of the scaling parameters. Unfortunately, predicting the amplitude of the remaining error, in order to correctly normalize the residual term in the partial differential equation, is not a straightforward task. We propose here a simple approach based on the Extreme Learning Method [13]. The main idea of the Extreme Learning Method is to use a neural network with a single hidden layer, to fix the weight and bias parameters of the hidden layer, and to minimize the loss function with respect to the parameters of only the output layer by a least squares method. We propose here to utilize the Extreme Learning Method to obtain a coarse prediction for each correction \(\tilde{u}_{i}\). The solution might not be very accurate, but it should provide a reasonable estimate of the amplitude of the correction function, which can be employed to adjust the normalization parameter \(\mu_{i}\). Moreover, the approach has the merit of being very fast and scale-independent. We assess its performance in the next numerical example and show that it allows one to further improve the accuracy of the multi-level neural network solution.
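To make this concrete, a minimal sketch of such an amplitude estimator for the 1D model problem \(-e''(x)=R(x)\) on \((0,1)\) with homogeneous Dirichlet conditions is given below; the hidden-layer size, the random-weight scaling, and the collocation grid are illustrative assumptions, not the values used in the experiments.

```python
import torch

def elm_mu(rhs, n_hidden=50, n_pts=200):
    """Coarse amplitude estimate for -e'' = rhs on (0,1), e(0) = e(1) = 0.
    Random fixed hidden layer; only the output weights are fit, by least
    squares on the collocated PDE residual. Returns mu ~ 1/max|e|."""
    x = torch.linspace(0.0, 1.0, n_pts).reshape(-1, 1).requires_grad_(True)
    W = 5.0 * torch.randn(1, n_hidden)           # fixed random hidden weights
    b = torch.randn(n_hidden)                    # fixed random biases
    phi = x * (1.0 - x) * torch.tanh(x @ W + b)  # basis with strong BCs
    cols = []
    for j in range(n_hidden):                    # columns of -phi_j''
        d = torch.autograd.grad(phi[:, j].sum(), x, create_graph=True)[0]
        d2 = torch.autograd.grad(d.sum(), x, create_graph=True)[0]
        cols.append(-d2)
    A = torch.cat(cols, dim=1).detach()          # (n_pts, n_hidden)
    c = torch.linalg.lstsq(A, rhs(x).detach()).solution
    amplitude = (phi.detach() @ c).abs().max().item()
    return 1.0 / max(amplitude, 1e-30)           # guard against zero amplitude
```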
**Example 5**.: _We use the exact same setting as described in Example 4 but we now employ the Extreme Learning Method to normalize the residual terms, as explained above. We observe in Figure 10 that the proposed normalization technique leads at the end of the training to errors \(e(x)=u(x)-\tilde{u}(x)\) within machine precision. We actually gain about two orders of magnitude in the error with respect to both norms over the results obtained in Example 4. Even more strikingly, the approximations of the corrections \(\tilde{u}_{i}\) all have amplitudes very close to unity, see Figure 11, which confirms the efficiency of the proposed approach._

Figure 10: Results from Example 5 in Section 4: (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections.

Figure 11: Results from Example 5 in Section 4: Approximation \(\tilde{u}_{0}(x)\) and corrections \(\tilde{u}_{i}(x)\), \(i=1,2,3\).
## 5 Numerical results
In this section, we present a series of numerical examples to illustrate the whole potential of the multi-level neural networks to reduce errors in neural network approximations of boundary-value problems in one and two dimensions. The computational domain in each of the examples is defined as \(\Omega=[0,1]^{d}\), with \(d=1\) or \(2\). The solutions to the problems will all be submitted to homogeneous Dirichlet boundary conditions, unless explicitly stated otherwise. The solutions and error corrections computed with the multi-level neural network approach shall be consistently approximated by the neural network architecture provided in (14), for which the vector of wave numbers \(\mathbf{\omega}_{M}\) is constructed from a geometric series, as described in Section 3.2. The normalization of the source terms is implemented through the use of the Extreme Learning Method, as described in Section 4. We again emphasize that the scaling along the horizontal axis in the convergence plots is actually different for the Adam iterations and the L-BFGS iterations. The reason is simply to provide a clearer visualization of the L-BFGS training phase, given the notable difference in the number of iterations used in the Adam and L-BFGS algorithms. Finally, for each example, the number of levels is set to \(L=3\) and the values of the hyper-parameters for each network (number of hidden layers \(n\) and widths \(N_{i}\), number of Adam and L-BFGS iterations, number of wave numbers \(M\)) will be collected in a table.
### Poisson problem in 1D
We revisit once again Problem (9) described in Example 1, this time with \(k=10\). Our objective here is to demonstrate the performance of the multi-level neural networks even in the case of solutions with high-frequency components. The solution is approximated using four sequential networks whose hyper-parameters are reported in Table 1. As in Example 4, the last network is chosen in such a way that only the low-frequency modes are approximated, using the Adam optimizer only.

Table 1: Hyper-parameters used in the example of Section 5.1.

| Hyper-parameters | \(\tilde{u}_{0}\) | \(\tilde{u}_{1}\) | \(\tilde{u}_{2}\) | \(\tilde{u}_{3}\) |
| --- | --- | --- | --- | --- |
| # Hidden layers \(n\) | 1 | 1 | 1 | 1 |
| Width \(N_{1}\) | 10 | 20 | 40 | 40 |
| # Adam iterations | 4,000 | 4,000 | 4,000 | 10,000 |
| # L-BFGS iterations | 500 | 1,000 | 1,500 | 0 |
| # wave numbers \(M\) | 4 | 6 | 8 | 2 |
We plot in Figure 12 the evolution, during training, of the loss function (left) and the errors in the \(L^{2}\) and \(H^{1}\) norms (middle), along with the pointwise error at the end of the training (right). We observe that the loss function and the errors in both norms are reduced when using two corrections. During the second error correction, we notice that the reduction in the loss function did not yield a significant decrease in the \(L^{2}\) and \(H^{1}\) errors. As described in Example 4, this is a consequence of our choice of the loss function, where higher frequencies are penalized more than the lower ones, which yields large low-frequency errors. This issue is addressed in the third correction, which helps decrease the \(L^{2}\) and the \(H^{1}\) errors without significantly decreasing the loss function, since the role of the last network is mainly to capture the low frequencies. This behavior, also observed in Example 4, follows from the specific choice of the hyper-parameters for the last network. In this example, we are able to attain a maximum pointwise error of around \(10^{-11}\) with four successive networks.
### Boundary-layer problem
In this section, we consider the convection-diffusion problem given by
\[\begin{split}-\varepsilon\partial_{xx}u(x)+\partial_{x}u(x)& =1,\qquad\forall x\in(0,1),\\ u(0)&=0,\\ u(1)&=0,\end{split} \tag{30}\]
where \(\varepsilon\) denotes a viscosity coefficient. We show in Figure 13 the exact solutions to the problem when \(\varepsilon=1\) and \(\varepsilon=0.01\). As \(\varepsilon\) gets smaller, a sharp boundary layer is formed in the vicinity of \(x=1\), which makes the problem more challenging to approximate. Finite element approximations of the same problem without using any stabilization technique actually exhibit large oscillations whenever the mesh size is not fine enough to capture the boundary layer. We apply the multi-level neural network method to both cases using the hyper-parameters given in Table 2 for \(\varepsilon=1\) and Table 3 for \(\varepsilon=0.01\).
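Although not written out above, one can check by direct substitution that problem (30) admits the closed-form solution

\[u(x)=x-\frac{e^{x/\varepsilon}-1}{e^{1/\varepsilon}-1},\]

whose derivative at \(x=1\) behaves like \(1/\varepsilon\), so that the boundary layer indeed sharpens as \(\varepsilon\) decreases.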
We show in Figure 14 the convergence results and the pointwise error for \(\varepsilon=1\). In this case, the multi-level neural network method reduces the loss function and the errors in both norms down to the level of the machine precision.

Figure 14: Example of Section 5.2 with \(\varepsilon=1\): (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections.

Table 2: Hyper-parameters used in the example of Section 5.2 for \(\varepsilon=1\).

| Hyper-parameters | \(\tilde{u}_{0}\) | \(\tilde{u}_{1}\) | \(\tilde{u}_{2}\) | \(\tilde{u}_{3}\) |
| --- | --- | --- | --- | --- |
| # Hidden layers \(n\) | 1 | 1 | 1 | 1 |
| Width \(N_{1}\) | 5 | 10 | 20 | 40 |
| # Adam iterations | 4,000 | 4,000 | 4,000 | 10,000 |
| # L-BFGS iterations | 200 | 500 | 800 | 0 |
| # wave numbers \(M\) | 1 | 3 | 5 | 2 |
In Figure 15, we show the convergence results and the pointwise error for \(\varepsilon=0.01\). As expected, the convergence with four networks is slower than in the case with \(\varepsilon=1\) and plateaus at larger values of the loss function and the errors. As a matter of fact, the loss function after the training of \(\tilde{u}_{0}\) stagnates around a value of \(10^{-4}\). But using the multi-level neural network method, we are able to decrease the loss function down to \(10^{-14}\), which is also accompanied by a reduction of the \(L^{2}\) and \(H^{1}\) errors.

Figure 15: Example of Section 5.2 with \(\varepsilon=0.01\): (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections.

Table 3: Hyper-parameters used in the example of Section 5.2 for \(\varepsilon=0.01\).

| Hyper-parameters | \(\tilde{u}_{0}\) | \(\tilde{u}_{1}\) | \(\tilde{u}_{2}\) | \(\tilde{u}_{3}\) |
| --- | --- | --- | --- | --- |
| # Hidden layers \(n\) | 1 | 1 | 1 | 1 |
| Width \(N_{1}\) | 10 | 10 | 20 | 20 |
| # Adam iterations | 4,000 | 4,000 | 4,000 | 10,000 |
| # L-BFGS iterations | 500 | 1,000 | 2,000 | 0 |
| # wave numbers \(M\) | 3 | 5 | 7 | 3 |
### Helmholtz Equation
We are now looking for the field \(u=u(x)\) governed by the one-dimensional Helmholtz equation:
\[-\partial_{xx}u(x)-\kappa^{2}u(x)=0,\qquad\forall x\in(0,1), \tag{31}\]
where the value of the wave number is chosen here equal to \(\kappa=\sqrt{9200}\approx 95.91\), and subject to the Dirichlet boundary conditions:
\[\begin{array}{l}u(0)=0,\\ u(1)=1.\end{array} \tag{32}\]
The Dirichlet boundary condition is non-homogeneous at \(x=1\). We thus introduce the lift function \(\bar{u}(x)=x\) to account for the boundary condition, so that we consider trial functions for the initial solution \(\tilde{u}_{0}\) in the form:
\[\tilde{u}_{0}(x)=x+\frac{\left(\Pi_{j=1}^{d}\gamma_{g}(x_{j})\right)\cdot \boldsymbol{z}_{n+1}}{M}\]
where \(\gamma_{g}\) is defined in (13). Since \(\tilde{u}_{0}\) strongly verifies the two boundary conditions, the corrections \(\tilde{u}_{i}\), for \(i\geq 1\), will therefore be subjected to homogeneous Dirichlet boundary conditions, i.e. \(\tilde{u}_{i}(0)=\tilde{u}_{i}(1)=0\). The main objective of this example is to show that the multi-level neural network method can actually recover a high-frequency solution resulting from the large value of the wave number \(\kappa\). The hyper-parameters of the multi-level neural networks are provided in Table 4.

Table 4: Hyper-parameters used in the example of Section 5.3.

| Hyper-parameters | \(\tilde{u}_{0}\) | \(\tilde{u}_{1}\) | \(\tilde{u}_{2}\) | \(\tilde{u}_{3}\) |
| --- | --- | --- | --- | --- |
| # Hidden layers \(n\) | 1 | 1 | 1 | 1 |
| Width \(N_{1}\) | 10 | 20 | 40 | 10 |
| # Adam iterations | 10,000 | 10,000 | 10,000 | 30,000 |
| # L-BFGS iterations | 400 | 800 | 1,600 | 0 |
| # wave numbers \(M\) | 5 | 7 | 9 | 5 |
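A brief PyTorch sketch of this lifted trial function, reusing the `fourier_features` helper from the sketch in Section 3.2 and with illustrative names and sizes, could be:

```python
class LiftedTrial(torch.nn.Module):
    """Trial function u0(x) = x + (gamma_g(x) . z_{n+1}) / M in 1D: the lift
    x matches u(1) = 1, while the sine factors vanish at x = 0 and x = 1."""
    def __init__(self, omegas, width=10):
        super().__init__()
        self.omegas = omegas
        self.hidden = torch.nn.Linear(2 * len(omegas), width)
        self.out = torch.nn.Linear(width, len(omegas))

    def forward(self, x):                                 # x: (batch, 1)
        z = torch.tanh(self.hidden(fourier_features(x, self.omegas)))
        g = torch.sin(x * self.omegas)                    # gamma_g: (batch, M)
        return x + (g * self.out(z)).sum(-1, keepdim=True) / len(self.omegas)
```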
As before, we plot in Figure 16 the convergence of the loss function and the errors along with the pointwise error. We observe that the use of the multi-level neural networks leads to a significant reduction of the error as the absolute pointwise error in the final approximation \(\tilde{u}\) never exceeds \(3\times 10^{-10}\).

Figure 16: Example of Section 5.3: (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections.
In this example, the last correction is constructed using the Fourier feature mapping with \(M=4\) wave numbers. This is in contrast to the previous examples where we chose lower frequencies for the last correction. The reason is that, as observed in Figure 17, the dominant frequency in \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\) is comparable to that of the solution. Therefore, in order to reduce this frequency without reducing the larger frequencies whose amplitudes are smaller, we actually select an architecture for \(\tilde{u}_{3}\) similar to that of \(\tilde{u}_{0}\). Using this architecture, we see that the errors in the solution significantly decrease even if the loss function remains virtually unchanged. We show in Figure 18 the residual \(R_{1}(x,\tilde{u}_{1}(x))\) associated with the approximation \(\tilde{u}_{1}\). We have already mentioned that the error corrections \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\) have a dominant frequency similar to that of \(\tilde{u}_{0}\). However, we observe that the residual clearly features higher-frequency modes, whose amplitudes, although small in the approximation \(\tilde{u}_{1}\), are in fact amplified due to the second-order derivatives. For that reason, it is desirable to consider larger networks with a larger number of wave numbers in the Fourier feature mapping for the approximations \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\). We have thus used in this experiment \(N_{1}=20\) and \(M=7\) for \(\tilde{u}_{1}\) and \(N_{1}=40\) and \(M=9\) for \(\tilde{u}_{2}\).

Figure 17: Example of Section 5.3: Approximation \(\tilde{u}_{0}(x)\) and corrections \(\tilde{u}_{i}(x)\), \(i=1,2\).

Figure 18: Example of Section 5.3: Residual \(R_{1}(x,\tilde{u}_{1}(x))\) associated with the Helmholtz equation obtained after the training of \(\tilde{u}_{1}(x)\).
### Poisson problem in 2D
In this final example, we consider the two-dimensional Poisson equation in \(\Omega=(0,1)^{2}\), with homogeneous Dirichlet boundary conditions prescribed on the boundary \(\partial\Omega\) of the domain. The boundary-value problem consists then in solving for \(u=u(x,y)\) satisfying:
\[\begin{array}{cc}-\nabla^{2}u(x,y)=f(x,y),&\quad\forall x\in\Omega,\\ u(x,y)=0,&\quad\forall x\in\partial\Omega,\end{array} \tag{33}\]
where the source term \(f(x,y)\) is chosen such that the exact solution is given by:
\[u(x,y)=\sin(\pi x)\sin(\pi y).\]
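Although the source term is not written out above, substituting this solution into (33) shows that it is given by

\[f(x,y)=2\pi^{2}\sin(\pi x)\sin(\pi y).\]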
The problem is solved using four networks whose hyper-parameters are given in Table 5. We note that, for this two-dimensional problem, we increase the depth of the networks at levels 0, 1, and 2, to two hidden layers, both having the same width \(N_{1}=N_{2}\) at each level.
We show in Figure 19 the evolution of the loss function and of the errors with respect to the number of Adam and L-BFGS iterations. As in the one-dimensional examples, the multi-level neural network approach allows one to reduce the loss function and the errors in the \(L^{2}\) and \(H^{1}\) norms down to values around \(10^{-15}\), \(10^{-11}\), and \(10^{-9}\), respectively. The results are in our opinion remarkable since we attain in this 2D example an accuracy comparable to that obtained with classical discretization methods. As indicated in Table 5, the hyper-parameters for the last correction are chosen so that they can capture the low-frequency functions. Figure 19 (right) actually shows that, by the end of the process, we are thereby able to decrease the maximum pointwise error to within \(6\times 10^{-10}\). Finally, we plot in Figure 20 the approximation \(\tilde{u}_{0}\) along with the three corrections \(\tilde{u}_{i}\), \(i=1,2,3\), computed after each level of the multi-level neural networks. One easily observes that all solutions are properly normalized and that, as expected, each corrective function exhibits higher frequencies than the previous one, except for the approximation \(\tilde{u}_{3}\), by design of the last neural network.

Figure 19: Example of Section 5.4: (Left) Evolution of the loss function. (Middle) Evolution of the error \(e(x)=u(x)-\tilde{u}(x)\) measured in the \(L^{2}\) and \(H^{1}\) norms. (Right) Pointwise error after three error corrections.

Figure 20: Example of Section 5.4: approximation \(\tilde{u}_{0}(x)\) and corrections \(\tilde{u}_{i}(x)\), \(i=1,2,3\).
## 6 Conclusions

We have presented a multi-level neural network method to improve the accuracy of the approximate solutions of boundary-value problems obtained with PINNs. The efficiency of the approach relies nonetheless on two key ingredients. Indeed, we have observed that the remaining error at each subsequent level, and equivalently, the resulting residual, have smaller amplitudes and contain higher frequency modes, two circumstances for which we have highlighted the fact that the training of the neural networks usually performs poorly. We have addressed the first issue by normalizing the residual before computing a new correction. To do so, we have developed a normalization approach based on the Extreme Learning Method that allows one to estimate appropriate scaling parameters. The second issue is taken care of by applying a Fourier feature mapping to the input data and to the functions used to strongly impose the Dirichlet boundary conditions. We believe that the multi-level neural network method is a versatile approach and can be applied to many deep learning techniques designed to solve boundary-value problems. In this work, we have chosen to present the method in the special case of physics-informed neural networks, which have recently been used for the solution of several classes of initial and boundary-value problems. The efficiency of the multi-level neural network method was demonstrated here on several 1D or 2D numerical examples based on the Poisson equation, the convection-diffusion equation, and the Helmholtz equation. More specifically, the numerical results successfully illustrate the fact that the method can provide highly accurate approximations to the solution of the problems and, in some cases, allows one to reduce the numerical errors in the \(L^{2}\) and \(H^{1}\) norms down to the machine precision.
Even if the preliminary results are very encouraging, additional investigations should be considered to further assess and improve the efficiency of the proposed multi-level neural network method. More specifically, one would like to apply the method to other deep learning approaches, such as the Deep Ritz method [38] or the weak adversarial networks method [40] to name a few, to time-dependent problems, and to the learning of partial differential operators, e.g. DeepONets [22] or GreenONets [3], for reduced-order modeling. One could also imagine estimating the correction at each level of the algorithm to control the error in specific quantities of interest following ideas from [26, 16, 15]. In this work, we have chosen to strongly enforce boundary conditions in order to neglect errors arising from the introduction of penalty parameters in the loss function (5). One should thus assess the efficiency of the method when initial and boundary conditions are weakly enforced. Finally, the multi-level neural network method introduces a sequence of several neural networks whose hyper-parameters are chosen a priori and often need to be adjusted by trial and error. It would hence be very useful to devise a methodology that determines optimal values of the hyper-parameters independently of the user.
Acknowledgements. SP and ML are grateful for the support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants [grant numbers RGPIN-2019-7154, PGPIN-2018-06592]. This research was also partially supported by an NSERC Collaborative Research and Development Grant [grant number RDCPJ 522310-17] with the Institut de Recherche en Électricité du Québec and Prompt. SP acknowledges the support of the Basque Center for Computational Mathematics to host him in May and June 2023. He also thanks David Pardo for many fruitful discussions on this subject. ZA and SP are thankful to the Laboratoire de Mécanique et d'Acoustique UMR 7031, in Marseille, France, for hosting them. This work received support from the French Government under the France 2030 investment plan, as part of the Initiative d'Excellence d'Aix-Marseille Université - A*MIDEX - AMX-19-IET-010.
|
2310.02053 | Controlling Topic-Focus Articulation in Meaning-to-Text Generation using
Graph Neural Networks | A bare meaning representation can be expressed in various ways using natural
language, depending on how the information is structured on the surface level.
We are interested in finding ways to control topic-focus articulation when
generating text from meaning. We focus on distinguishing active and passive
voice for sentences with transitive verbs. The idea is to add pragmatic
information such as topic to the meaning representation, thereby forcing either
active or passive voice when given to a natural language generation system. We
use graph neural models because there is no explicit information about word
order in a meaning represented by a graph. We try three different methods for
topic-focus articulation (TFA) employing graph neural models for a
meaning-to-text generation task. We propose a novel encoding strategy about
node aggregation in graph neural models, which instead of traditional encoding
by aggregating adjacent node information, learns node representations by using
depth-first search. The results show our approach can get competitive
performance with state-of-art graph models on general text generation, and lead
to significant improvements on the task of active-passive conversion compared
to traditional adjacency-based aggregation strategies. Different types of TFA
can have a huge impact on the performance of the graph models. | Chunliu Wang, Rik van Noord, Johan Bos | 2023-10-03T13:51:01Z | http://arxiv.org/abs/2310.02053v1 | # Controlling Topic-Focus Articulation in Meaning-to-Text Generation
###### Abstract
A bare meaning representation can be expressed in various ways using natural language, depending on how the information is structured on the surface level. We are interested in finding ways to control topic-focus articulation when generating text from meaning. We focus on distinguishing active and passive voice for sentences with transitive verbs. The idea is to add pragmatic information such as topic to the meaning representation, thereby forcing either active or passive voice when given to a natural language generation system. We use graph neural models because there is no explicit information about word order in a meaning represented by a graph. We try three different methods for topic-focus articulation employing graph neural models for a meaning-to-text generation task. We propose a novel encoding strategy for node aggregation in graph neural models, which, instead of traditional encoding by aggregating adjacent node information, learns node representations by using depth-first search. The results show that our approach achieves competitive performance with state-of-the-art graph models on general text generation, and leads to significant improvements on the task of active-passive conversion compared to traditional adjacency-based aggregation strategies. Different types of TFA can have a huge impact on the performance of the graph models.
## 1 Introduction
Topic-Focus Articulation (TFA) refers to the way information is packaged within a sentence, in particular the division of the topic (the given information) and the focus (the new information). There are various linguistic devices that determine how topic and focus can be articulated in a sentence: dislocation, word order, active/passive voice, intonation, particles, and more [1, 1, 2, 1, 14]. TFA plays a crucial role in natural language generation when the input is an abstract representation of meaning and the output is a text. The same information can be realised in various ways, depending on the perspective one takes. Take for example the information provided by the situation where a wolf killed two sheep. We could ask several questions about this situation, viewing it from different points of view:
Q\({}_{1}\): What about the sheep?
A\({}_{1}\): _The two sheep were killed by a wolf._
Q\({}_{2}\): What about the wolf?
A\({}_{2}\): _The wolf killed two sheep._
A flexible natural language generation system, one that takes formal meaning representations as input and outputs texts, should ideally be able to package the information according to the perspective being taken. In this paper we investigate ways of adding TFA to a formal meaning representation in order to get different texts where information is packaged in different ways. We focus on active-passive alternation in English, as exemplified by the examples above, as in English usually the subject of a sentence is the topic [1]. The challenge is to find a simple, intuitive way to add TFA to an abstract meaning representation that gives the desired effects when given to a meaning-to-text NLG component. For this purpose we employ graphical representations of meaning that abstract away from any surface order of the message whose meaning they convey. The NLG modules in our experiments are implemented by neural models. To the best of our knowledge we are the first to experiment with controlling TFA in the context of text generation from formal meaning representations.
The paper is organised as follows. In Section 2 we introduce the semantic formalism of our choice, Discourse Representation Theory [1], and in particular the TFA-neutral graph representation of meaning that we use in our experiments. Here we also review previous meaning-to-text approaches for Discourse Representation Structures and graph-based methods for the graph-to-text generation task. In Section 3 we describe the process of acquiring graph-structured data for graph neural networks (GNNs), and then we focus on presenting the three different types of TFA we add to the meaning representation. The basic idea is to use simple markers to mark active-voice and passive-voice data respectively, helping the neural models distinguish between the different types of input graphs. In Section 4 we introduce the implementation settings and the evaluation metrics. We present a comparison of local encoders and deep encoders based on three types of GNNs, trained on different graph representations with the three types of TFA.
## 2 Background
### Discourse Representation Structures
Discourse Representation Theory (DRT) is a well-studied semantic formalism covering many linguistic phenomena, including scope of negation and quantifiers, interpretation of pronouns, presupposition, and temporal and discourse relations (Kamp and Reyle, 1993). The meaning representation proposed by DRT is the Discourse Representation Structure (DRS), a recursive first-order logic representation comprising discourse referents (the entities introduced in the discourse) and relations between them. We work with a particular variant of DRS, namely the one proposed in the Parallel Meaning Bank (Abzianidze et al., 2017).
There are five types of semantic information that can be found in the PMB-style DRSs: entities, constants, roles, comparison operators, and discourse relations (including negation). The entities are represented by concepts denoted by WordNet synsets (Fellbaum, 1998) (for nouns, verbs, adjectives and adverbs), indicating the lemma, part-of-speech and sense number (e.g., book.n.02 encodes the entity with the second sense of the noun "book"). The constants are used to represent names, numbers, dates, and deixis. The roles (_Theme_, _Agent_, and so on) are represented by the thematic relations proposed in VerbNet (Kipper et al., 2008) and semantically connect two entities. Comparison operators relate and compare entities or constants. Discourse relations convey the rhetorical function between different discourse units.
As Figure 1 shows, there are various ways of representing a DRS. Besides the classic box notation, DRSs can also be displayed in variable-free sequential notation or as directed acyclic graphs, following a recent proposal by Bos (2021). For use in a neural graph model, the DRS data can be converted into a set of triples, using the variable-free sequential box notation. Doing so, the roles, comparison operators and discourse relations are regarded as edge labels, while entities, constants and discourse units ("boxes") form the nodes in the Discourse Representation Graph (DRG).
### Generating Text from DRSs
The purpose of DRS-to-text generation is to produce a text from an input DRS. Basile (2015) proposed a pipeline consisting of three components: a surface order module, a lexicalization module that serves to construct an alignment between the abstract structure and the text, and a surface realization module to construct the final output. Wang et al. (2021) use a standard LSTM model on clause-format DRSs to generate text and show that a character-level encoder achieves significant performance, while Liu et al. (2021) encode a DRS as a tree and use a sibling treeLSTM model to produce text, focusing on the problems of condition ordering and variable naming in the DRS representation.

Figure 1: (a) Box format DRS for the sentence _Two sheep were not killed by the wolf_, (b) DRS in sequential box notation, (c) corresponding DRG.
In this paper, however, as our motivation is to freely convert between active and passive voice for DRS data, we consider the graphical representation as input and treat the meaning-to-text generation task as a Graph2Seq learning task. A DRG is neutral with respect to word order in the surface realisation of text, and therefore serves our purpose of topic-focus articulation better than a sequential representation of meaning in which order is explicitly encoded.1
Footnote 1: A completely different architecture, based on sequential meaning representations, would also be possible, where TFA would take place by operating on the sequential meaning, before given to a Seq2Seq module.
### Neural Networks for Graph
How to encode the input graph is the key issue for the graph-to-text generation task. A GNN layer computes every node representation by aggregating its neighbors' representations; the design of this aggregation is what mostly distinguishes the various types of GNNs. Graph Convolutional Networks (GCN) learn representations of nodes by summing over the representations of the immediate neighborhood of each node (Kipf and Welling, 2017), and have been applied to various text generation tasks (Damonte and Cohen, 2019; Guo et al., 2019; Song et al., 2020) with remarkable performance. Some variants use mean or max pooling as aggregation, weighting all neighbors with equal importance, with the same core computation as GCN. Velickovic et al. (2018) consider it unreasonable to assign equal importance to all adjacent nodes of a node and proposed Graph Attention Networks (GAT), which update each node representation by incorporating an attention mechanism to calculate the importance of adjacent information. Li et al. (2016) proposed Gated Graph Neural Networks (GGNN) because GCN models have difficulty learning information from deep layers; GGNNs use a Gated Recurrent Unit (GRU) to facilitate information propagation between local layers. This model is used in AMR-to-text generation tasks (Ribeiro et al., 2019) and syntax-based machine translation (Beck et al., 2018).
All the above methods belong to local graph encoding strategies, as they are based on _local node aggregation_. The opposite approach is that of global encoding strategies, which are typically based on the Transformer and leverage global node aggregation. We propose deep traversal graph encoders to replace global graph encoders and local graph encoders. Different from a global graph encoder, which computes a node representation based on all nodes in the graph, the deep traversal graph encoder focuses on capturing rich information from the depth-first search of nodes, while avoiding the introduction of noise from all nodes.
Figure 2: (a) Levi graph for a DRG. (b) Two possible realisations (active and passive voice). (c) Local graph encoder learning information from adjacent nodes, implicitly capturing 2-hop information by using 2-layers model. (d) Deep traversal encoder capturing non-local nodes information by using depth-first search for a node.
## 3 Method
### Graph Preparation
A DRG is a directed labeled graph defined as \(G=(V,E)\), where \(V\) is a set of nodes and \(E\) is a set of edges. Each edge in \(E\) can be represented as \((v_{i},l,v_{j})\), where \(v_{i}\) and \(v_{j}\) are the indices of the incoming and outgoing nodes, respectively, and \(l\) is the edge label. We use the extended Levi graph method (Beck et al., 2018) that changes edge labels to additional nodes, so that the graph becomes a directed unlabeled graph in which each labeled edge \(e=(v_{i},l,v_{j})\) is transformed into two unlabeled edges \(e_{1}=(v_{i}\), \(l)\) and \(e_{2}=(l\), \(v_{j})\), as shown in Figure 2. This approach is also convenient for representing named entities with more than one token, since the direction of the edges can encode the order between the tokens of named entities without adding additional edge labels.
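As an illustration, a minimal Python sketch of this transformation could be written as follows; the function name is ours, and the node labels are illustrative.

```python
def to_levi(nodes, edges):
    """nodes: list of node labels; edges: list of (i, label, j) triples,
    where i and j index into nodes. Edge labels become new relay nodes."""
    levi_nodes = list(nodes)
    levi_edges = []
    for i, label, j in edges:
        k = len(levi_nodes)
        levi_nodes.append(label)   # promote the edge label l to a node
        levi_edges.append((i, k))  # unlabeled edge e1 = (v_i, l)
        levi_edges.append((k, j))  # unlabeled edge e2 = (l, v_j)
    return levi_nodes, levi_edges

# Example with a fragment of a DRG for "A wolf killed two sheep":
nodes = ["kill.v.01", "wolf.n.01", "sheep.n.01"]
edges = [(0, "Agent", 1), (0, "Patient", 2)]
print(to_levi(nodes, edges))
```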
### Adding Topic-Focus Articulation
A DRG can be textually paraphrased in various ways. For instance, for sentences with a transitive verb, active as well as passive voice could be generated if nothing is known about the information structure. Here we introduce three ways of adding information to the graph about the topic of the sentence (and assume that the rest of the meaning is considered focus). The idea is that the topic controls whether an English sentence with a transitive verb is generated in active or passive voice. Our aim is to explore which type of TFA can simply and effectively distinguish active and passive voice, and whether it is easy for the model to learn this information. We propose three ways of augmenting the DRG with TFA:
1. Concept \(\rightarrow\) TOPIC \(\rightarrow\) Concept (CTC)
2. Box \(\rightarrow\) TOPIC \(\rightarrow\) Concept (BTC)
3. Role \(\rightarrow\) Role (RTR)
**CTC** Here we specify the subject by adding a _TOPIC_ marker as an edge label, yielding a self-loop representation. As shown in Figure 3 (a), we add the _TOPIC_ marker as an edge label to the concept _wolf.n.01_, so that _wolf_ becomes the subject and the reference text is active voice; conversely, if _sheep_ is the subject, the _TOPIC_ marker is added as an edge label to the concept _sheep.n.01_ and the reference text is passive voice. This method does not depend on other content in the text, but only on which concept is the subject; when this type of TFA is added and the graph is converted to a Levi graph, the directed acyclic graph becomes a directed cyclic graph.
**BTC** Here we connect more information by adding _TOPIC_ as an edge label between the discourse unit node and the subject concept node. As shown in Figure 3 (b), when _sheep_ is the subject of the reference text, we add the _TOPIC_ marker as an edge label between the concept _sheep.n.01_ and its discourse unit node _Box_; when _wolf_ is the subject, we instead add the _TOPIC_ marker between the concept _wolf.n.01_ and its discourse node _Box_. This method considers the global discourse information, and is more intuitive in terms of graph format.

Figure 3: Three types of TFA applied to a DRG for _A wolf killed two sheep_ (in blue) or _The wolf killed two sheep_ (in red).
**RTR** When we do not know the concept information contained in the data, TFA can be added using VerbNet role information. As shown in Figure 3 (c), _Agent_ is the VerbNet role that connects a concept to the verb node, and the text is active voice if that concept is the subject. _Patient_ is one of the VerbNet roles connecting the verb node to the concept on which the action or event is imposed by the agent; the other types of VerbNet roles are introduced in Section 4.1. This method does not consider any content of the original DRS data, but decides between active and passive voice by adding an edge such as _Agent_\(\rightarrow\)_Patient_ or _Patient_\(\rightarrow\)_Agent_ to the graph. This representation is based only on the transformed extended Levi graph, and essentially changes the flow of information into and out of the graph nodes.
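A hedged sketch of the three TFA variants as simple edge edits on a Levi graph edge list follows; the helper and its argument names are illustrative, not the paper's code, and for RTR the role nodes are assumed to already exist in the Levi graph.

```python
def add_tfa(levi_edges, variant, subject=None, box=None, agent=None, patient=None):
    edges = list(levi_edges)
    if variant == "CTC":    # Concept -> TOPIC -> Concept (self-loop via a TOPIC node)
        edges += [(subject, "TOPIC"), ("TOPIC", subject)]
    elif variant == "BTC":  # Box -> TOPIC -> Concept (discourse unit to subject)
        edges += [(box, "TOPIC"), ("TOPIC", subject)]
    elif variant == "RTR":  # Role -> Role; reversing the pair flips the voice
        edges.append((agent, patient))
    return edges

active_rtr = add_tfa([], "RTR", agent="Agent", patient="Patient")  # Agent -> Patient
```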
### Graph Neural Networks
Previous research shows that it is difficult for local graph encoders to capture information from non-local nodes, and that deeper models perform worse because useful information in earlier layers may get lost [19, 10]. Ribeiro et al. (2020) propose to integrate GAT and Transformer architectures into a unified global-local graph encoder to learn deep-layer node information, and apply it to knowledge-graph-to-text generation. Inspired by their work, in this paper we propose a deep traversal encoder that aggregates node representations using all nodes from each node's depth-first search.
Figure 2 (a) presents a DRG; the corresponding text is shown in Figure 2 (b). For the node _kill.v.01_, it is difficult to access information from _intruder.n.01_, which is 4 hops away, especially with a two-layer model (Figure 2 (c)); even with four layers, the model can hardly capture it. When using deep traversal for the node _kill.v.01_, the node _intruder.n.01_ becomes directly reachable, like an adjacent node (Figure 2 (d)). Compared with a local graph encoder, the deep traversal encoder essentially changes the graph structure; compared with a global graph encoder, it preserves more information about related nodes.
#### 3.3.1 Deep Traversal Encoder
Compared with a local graph encoder, a deep traversal encoder updates each node's information based on depth-first search, aggregating all nodes in the deep traversal instead of only adjacent nodes. In this paper, we compute a layer of depth-first traversal convolution for a node \(i\in V\) based on GGNN, which showed significant performance on the meaning-to-text generation task in previous research [10].
**Gated Graph Neural Networks** The main difference between GCN and GGNN is analogous to the difference between convolutional and recurrent networks [1]. GGNN can be seen as a multi-layer GCN where layer-wise parameters are tied and gating mechanisms are added. With this, the model can propagate node information between long-distance nodes in the graph [19, 1]. In particular, the \(l\)-th layer of a GGNN is calculated as:
\[h_{i}^{(l+1)}=GRU\Big(h_{i}^{(l)},\;\rho\Big(\sum_{j\in deep(N_{i})}\frac{W_{dir(j,i)}^{(l)}h_{j}^{(l)}}{|deep(N_{i})|}\Big)\Big) \tag{1}\]
where \(W_{dir(j,i)}^{(l)}\) denotes the direction-specific weight matrices of the \(l\)-th layer, with \(dir(j,i)\in\{default,reverse,self\}\) referring to the original edges, the edges reversed with respect to the original ones, and the self-loop edges [17]. \(deep(N_{i})\) is the set of nodes obtained by depth-first search from node \(i\), the core component of the proposed approach; standard GNNs instead use \(N_{i}\), the set of immediate neighbors of node \(i\). \(\rho\) is the activation function; we use \(ReLU\) in the experiments. \(h_{j}^{(l)}\) is the embedding representation of node \(j\in V\) at layer \(l\), and \(GRU\) is a gated recurrent unit used as the combination function.
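A minimal PyTorch sketch of one such layer is given below, as an illustration of Eq. (1) rather than the authors' implementation; it assumes the depth-first traversal sets are precomputed and non-empty, with each entry pairing a reachable node with its edge direction.

```python
import torch
import torch.nn as nn

class DeepTraversalGGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w = nn.ModuleDict({d: nn.Linear(dim, dim, bias=False)
                                for d in ("default", "reverse", "self")})
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, deep_sets):
        # h: [num_nodes, dim]; deep_sets[i]: (node index, direction) pairs
        # for every node in the depth-first traversal of node i.
        msgs = []
        for neigh in deep_sets:
            # Mean of direction-specific messages over the traversal set.
            m = torch.stack([self.w[d](h[j]) for j, d in neigh]).mean(dim=0)
            msgs.append(torch.relu(m))  # rho = ReLU
        return self.gru(torch.stack(msgs), h)  # GRU combines with previous state

layer = DeepTraversalGGNNLayer(8)
h = torch.randn(3, 8)
deep_sets = [[(0, "self"), (1, "default"), (2, "default")],
             [(1, "self"), (0, "reverse")],
             [(2, "self"), (0, "reverse")]]
out = layer(h, deep_sets)  # [3, 8]
```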
#### 3.3.2 Decoder
We adopt a standard fully batched attention-based LSTM decoder [1], where the attention memory is the concatenation of the attention vectors over all input words. The initial state of the decoder is the output representation of the encoder. To alleviate data sparsity, we add a copy mechanism [10, 11] on top of the decoder, which favors generating words such as dates, numbers, and named entities that appear in DRGs.
## 4 Experiments
### Implementation Details
**Data** We use release 4.0.0 of the Parallel Meaning Bank (Abzianidze et al., 2017), which contains 10,711 gold instances and 127,302 silver instances. Each instance contains a text and the corresponding DRS in various formats. We use the sequential box notation to produce the discourse representation graphs (DRGs). We randomly split the gold data into 1,000 instances for development and 1,000 instances for test, and use the remaining gold and silver data as our training data (Table 1).
Table 2 shows the distribution of active- and passive-voice data in the PMB dataset. For convenience, we use VerbNet roles to distinguish the types of active and passive voice for sentences with transitive verbs in the PMB data. There are five types of active-/passive-voice DRG representations, divided by VerbNet roles: the direction of the VerbNet role nodes (ingoing or outgoing) determines whether active or passive voice is used. For example, when the Agent role is an incoming node, the DRG corresponds to passive voice. For the challenge set of active and passive data, we use all passive-voice sentences in the gold data and the same number of active-voice sentences for the corresponding types (after removing all interrogative sentences, which we disregard in our experiments). There are fewer passive (106) than active sentences (3140) in the PMB data; to get a balanced evaluation set, we draw 106 instances randomly from the active ones.
**Setting** All models are implemented based on OpenNMT (Klein et al., 2017). We construct vocabularies from all words; the vocabulary sizes are shown in Table 1. The hyperparameters are set based on performance on the development set. We use the SGD optimizer with the initial learning rate set to 1 and decay 0.8. In addition, we set the dropout at the decoder layer to 0.5 to avoid overfitting, with batch size 32. For the deep traversal encoder, we use one layer for all graph models. For the local graph encoder, we use two layers, _ReLU_ activation and _tanh_ highway for all graph models. To mitigate the effects of random seeds, we report the average over 3 training runs of each model.
**Evaluation Metrics** For automatic evaluation, we use two standard metrics measuring word overlap between system output and references: BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007), which are standard in machine translation evaluation and very common in NLG. To better evaluate the results of different models, we employ the ROSE manual evaluation metric proposed by Wang et al. (2021), because this measure is simple to define and easy to reproduce. It is carried out by creating a semantic challenge set and assigning three binary dimensions (either 0 or 1) to each generated text: (1) semantics, (2) grammaticality, and (3) phenomenon. A result counts as correct only when the ROSE score is 1, and we use the ratio of correct outputs to the total number of instances as the final manual evaluation accuracy.
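A small sketch of this accuracy computation, assuming a text's ROSE score is 1 exactly when all three binary dimensions are 1:

```python
def rose_accuracy(judgements):
    """judgements: list of (semantics, grammaticality, phenomenon) in {0, 1}."""
    correct = sum(1 for s, g, p in judgements if s and g and p)
    return correct / len(judgements)

print(rose_accuracy([(1, 1, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1)]))  # 0.5
```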
### Automatic Metrics Results
We explore which TFA best represents active/passive voice and has a positive impact on the models. We use automatic metrics on the normal test data to measure performance on the general generation task, and also evaluate on the active-passive dataset to obtain a rough measure of performance on this specific task. Table 3 reports the automatic metric results on the normal test set and the active-passive dataset, training on different types of representations with both the local graph encoder and the deep traversal encoder (see also Appendix A).
Our results show that different types of TFA in the graph-structured data have remarkably different influence on model performance, both on the specific task (active-passive dataset) and on general text generation (normal test set). With the local graph encoder, GGNN performs best on both datasets when we use the _RTR_ edge as TFA in the graph. With the deep traversal encoder, however, _CTC_ becomes the best TFA, and the normal-test performance of the model using _RTR_ as TFA degrades compared to the local graph encoder; on the active-passive dataset, the deep traversal encoders trained with the different TFA all obtain better results than the models based on the local graph encoder.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{**Document-level**} & \multicolumn{2}{c}{**Word-level**} \\ \hline
**Data** & **Train** & **dev** & **test** & **src** & **tgt** \\ \hline _gold_ & 8,711 & 1,000 & 1,000 & 7,856 & 7,699 \\ _gold + silver_ & 136,013 & 1,000 & 1,000 & 46,620 & 44,849 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Document statistics and vocabulary sizes.
### Manual Metrics Results
For manual evaluation, in addition to assessing whether the positions of subject and object are swapped, the transformation between active and passive must take into account the grammatical inflection of the verb. Different kinds of erroneous verb inflection generated by the models lead to different error types, which affect all three dimensions of the manual evaluation at once. In this paper, the _phenomenon_ metric focuses only on the active or passive voice of verbs in sentences, regardless of semantics and grammaticality. Because of the limited amount of passive data, we create a challenge set using the active and passive data in the gold training data: all passive and active DRG data can be transformed by changing the TFA, which gives us a challenge set different from the data in the training set. We report accuracy on the active- and passive-voice challenge set in Table 4.
Our results show that for the local graph encoder, the representation with _RTR_ achieves the best performance, especially for converting passive data to active data to generate active-voice text, where its _phenomenon_ and _ROSE_ scores are the highest. When we change active DRG data to passive DRG data and expect the models to generate passive text, with _CTC_ or _BTC_ as TFA the models find it difficult to generate passive voice: they tend to generate active-voice text and almost never produce the intended output, even though semantics and grammaticality are fine. Compared with the local graph encoder, the deep traversal encoder performs better with _CTC_ and _BTC_ as TFA in the DRG; with _CTC_ in particular, performance on both the passive-to-active and active-to-passive tasks is better than the best results obtained with the local graph encoder.
## 5 Discussion
From the experimental results, for all models based on the different TFA and encoders, the difficulty lies in whether correct passive-voice text can be generated. Figure 4 shows significant score gaps between _phenomenon_ and the other two dimensions on the task of generating passive voice. The models tend to generate text resembling the training data, with correct semantics and grammar but the wrong phenomenon, which means they do not learn the TFA very well. We further subdivide and analyze the error types along the _semantics_ and _grammaticality_ dimensions (see Appendix B).
In a DRG, the edge labels between the verb and its subject and object differ; in the Levi graph these become the types of the intermediate VerbNet-role nodes. We find that the local encoder can achieve good results using _RTR_ as TFA. This method is somewhat similar to adding sequential information to a graph, as done by Beck et al. (2018) and Guo et al. (2019) to improve performance, but it is equivalent to adding only a very small part, so it is not difficult to understand why it achieves better performance.
\begin{table}
\begin{tabular}{l|l|r|r|r|r} \hline \hline & & \multicolumn{2}{c|}{**gold data**} & \multicolumn{2}{c}{**silver data**} \\ \hline \hline
**Type** & **Example of passive** & **Active** & **Passive** & **Active** & **Passive** \\ \hline Patient \(\rightarrow\) Agent & Bill was killed by an intruder. & 786 & 43 & 5,285 & 381 \\ Theme \(\rightarrow\) Agent & The college was founded by Mr Smith. & 1,913 & 42 & 28,120 & 1,125 \\ Experiencer \(\rightarrow\) Agent & I got stung by a bee. & 41 & 5 & 311 & 40 \\ Result \(\rightarrow\) Agent & That was written by Taro Akagawa. & 211 & 13 & 1,241 & 65 \\ Source \(\rightarrow\) Agent & He was deserted by his friends. & 189 & 3 & 85 & 32 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Distribution of active/passive voice types in gold and silver data in PMB release 4.0.0.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
& \multicolumn{2}{c}{**Normal Test**} & \multicolumn{2}{c}{**A \& P Test**} \\ \hline
**TFA** & **BLEU** & **METEOR** & **BLEU** & **METEOR** \\ \hline \multicolumn{5}{l}{**Local Graph Encoder**} \\ RTR & **55.4** & **44.0** & **74.4** & **54.1** \\ BTC & 53.8 & 43.3 & 66.9 & 50.4 \\ CTC & 55.0 & 43.6 & 71.8 & 52.6 \\ \hline \multicolumn{5}{l}{**Deep Traversal Encoder**} \\ RTR & 54.7 & 43.4 & 76.1 & 54.3 \\ BTC & 54.5 & 43.7 & 73.3 & 53.9 \\ CTC & **55.1** & **43.8** & **75.3** & **55.7** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Normal test results and active \& passive (A\&P) test results of GGNN models trained on DRG representations with different types of TFA.
The deep traversal encoder achieves good results with the self-loop type of TFA (_CTC_). Through depth-first search, the deep traversal encoder essentially obtains all outgoing and incoming node information of a node at once. When we add _RTR_ to the graph, it essentially introduces noise between nodes that originally have no direct information flow, such as the nodes _Agent_ and _sheep.n.01_ in Figure 3 (c).
Using _CTC_ in a graph does not have this problem: through the deep traversal encoder, the other nodes in the graph also receive the _TOPIC_ node's information, which effectively strengthens the model's memory of the _TOPIC_ node, so that the models can learn the difference between active and passive in the graph-structured data. Intuitively, adding _TOPIC_ markers between discourse unit nodes and concepts (_BTC_) should work well, but in practice it does not perform well with either the local graph encoder or the deep traversal encoder. We believe this is strongly correlated with the discourse unit node _Box_, which is very similar to the artificial global node used by Guo et al. (2019) and Cai and Lam (2020): especially when there is only one discourse in the DRG, this node has little influence on the models, and the added edge label _TOPIC_ is also difficult for the models to learn. A better way to verify this further remains to be found.
## 6 Conclusion
In this paper we propose the use of GNNs to control the generation of active- and passive-voice text from formal meaning representations. We use discourse representation structures to represent sentence meaning, and present the process of converting DRS data into graph-structured data for graph models. On the graph level, we introduce three types of TFA to distinguish active and passive DRS data. On the model level, we propose the deep traversal encoder to capture more information for this task. We apply one of the most popular graph models, GGNN, to both the deep traversal encoder and the local graph encoder to compare the performance of the three TFA. Our experimental results show that the self-loop TFA (CTC) achieves optimal results with the deep traversal encoder, while the edge-oriented TFA (RTR) achieves optimal results with the local graph encoder. Our main contributions can be summarized as follows:
1. To the best of our knowledge, we are the first to propose to control TFA in the context of text generation from formal meaning representations;
2. We present the process of converting formal meaning representations into graph-structured data for graph models;
3. We compare the performance of three graph neural networks, which have not been studied so far for meaning-to-text generation;
4. We introduce the deep traversal encoder as a replacement for the local graph encoder, capturing more information;
5. We compare three types of TFA in the graph notation of active- and passive-voice data for controlling the models to generate active- or passive-voice text.
\begin{table}
\begin{tabular}{l|l|c c c c|c c c c|c} \hline \hline & & \multicolumn{4}{c|}{**Passive \(\rightarrow\) Active**} & \multicolumn{4}{c|}{**Active \(\rightarrow\) Passive**} & **ALL** \\ \hline
**Model** & **TFA** & **Sem.** & **Gram.** & **Phen.** & **ROSE** & **Sem.** & **Gram.** & **Phen.** & **ROSE** & **ROSE** \\ \hline \multirow{3}{*}{**Local Graph Encoder**} & RTR & 64.2 & 92.5 & 80.2 & **59.4** & 64.2 & 80.2 & 50.0 & **34.9** & **47.2** \\ & BTC & 61.3 & 85.8 & 60.4 & 45.3 & 50.0 & 88.7 & 16.0 & 11.3 & 28.3 \\ & CTC & 65.1 & 89.6 & 69.8 & 50.9 & 62.3 & 68.9 & 20.8 & 10.4 & 30.7 \\ \hline \multirow{3}{*}{**Deep Traversal Encoder**} & RTR & 70.8 & 84.9 & 68.9 & 62.3 & 53.8 & 86.8 & 38.7 & 24.5 & 43.4 \\ & BTC & 65.1 & 88.7 & 72.6 & 60.4 & 60.4 & 88.7 & 24.5 & 14.2 & 37.3 \\ \cline{1-1} & CTC & 73.6 & 93.4 & 87.7 & **67.0** & 61.3 & 86.8 & 56.6 & **38.7** & **52.8** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Manual metric performance of GGNN models on active and passive challenge set, training on different types of graph data with local graph encoder and deep traversal encoder. |
2308.13300 | Learning Compact Neural Networks with Deep Overparameterised Multitask
Learning | Compact neural network offers many benefits for real-world applications.
However, it is usually challenging to train the compact neural networks with
small parameter sizes and low computational costs to achieve the same or better
model performance compared to more complex and powerful architecture. This is
particularly true for multitask learning, with different tasks competing for
resources. We present a simple, efficient and effective multitask learning
overparameterisation neural network design by overparameterising the model
architecture in training and sharing the overparameterised model parameters
more effectively across tasks, for better optimisation and generalisation.
Experiments on two challenging multitask datasets (NYUv2 and COCO) demonstrate
the effectiveness of the proposed method across various convolutional networks
and parameter sizes. | Shen Ren, Haosen Shi | 2023-08-25T10:51:02Z | http://arxiv.org/abs/2308.13300v1 | # Learning Compact Neural Networks with Deep Overparameterised Multitask Learning
###### Abstract
Compact neural networks offer many benefits for real-world applications. However, it is usually challenging to train compact neural networks with small parameter sizes and low computational costs to achieve the same or better model performance compared to more complex and powerful architectures. This is particularly true for multitask learning, with different tasks competing for resources. We present a simple, efficient and effective multitask learning overparameterisation neural network design that overparameterises the model architecture during training and shares the overparameterised model parameters more effectively across tasks, for better optimisation and generalisation. Experiments on two challenging multitask datasets (NYUv2 and COCO) demonstrate the effectiveness of the proposed method across various convolutional networks and parameter sizes.
## 1 Introduction
Deep Multi-task Learning (MTL) techniques are widely applied to real-world embedded computer vision applications. A deep MTL model explores and exploits the synergies among multiple tasks learned simultaneously to improve joint performance and to reduce inference time and computational costs. However, designing efficient MTL models for budgeted devices poses two major challenges. First, the model design needs to be efficient, staying compact to meet the computational budget constraints. Second, the model needs to share resources effectively among the multiple tasks learned simultaneously to avoid resource competition.
Motivated by previous research on deep linear networks [14], which shows that even though the input-output map can be rewritten as a shallow network, such networks nevertheless exhibit highly nonlinear training dynamics and can help to accelerate optimisation [1] and improve generalisation [1], we tackle the aforementioned challenges with an overparameterised MTL method that initialises the parameters of each shared neural network layer as the product of multiple matrices following the spatial Singular Value Decomposition (SVD) [1]. The left and right singular vectors are trained with all task losses, and the diagonal matrices are trained using task-specific losses. Our design is mainly inspired by analytical studies of overparameterised networks for MTL [1], showing that the training/test error dynamics depend on the time-evolving alignment of the network parameters to the singular vectors of the training data, and that a quantifiable task alignment describing the transfer benefits among multiple tasks depends on the singular values and the input-feature subspace similarity matrix of the training data.
In this work, we follow the definition of overparameterisation in [1], referring to the replacement of neural network layers by compositions of multiple layers with more learnable parameters, but without adding expressiveness to the network. The contributions of this work can be summarised as follows.
* We propose an MTL neural network design with overparameterised training components and a compact inference architecture, applicable for embedded applications with limited computational budgets.
* We implement an iterative training strategy for the proposed design that is effective and efficient for multitask computer vision dense prediction tasks, compared to the state-of-the-art.
## 2 Methodology
We replace the fully-connected and/or convolutional layers of modern neural networks with overparameterised counterparts, and share the overparameterised parameters among different tasks, to achieve higher performance at reduced inference parameter sizes and computational costs.
Specifically, tensor decomposition is used for model expansion instead of model compression during training. The full-rank diagonal tensor is further expanded to be trained separately for each task, while the other tensors are shared among all tasks. During inference, the decomposed tensors are contracted back into a compact MTL architecture.
### Overparameterisation Mechanism
#### 2.1.1 Fully-connected Layers
For any shared layer of a deep MTL model, given a weight matrix \(W\) that is shared among \(t\) tasks, we directly factorise the weight matrix \(W\) of the size \(m\times n\) using SVD, similar to [13] and [1], so that
\[W:=UMV \tag{1}\]
where \(U\) is of size \(m\times r\), \(V\) is of size \(r\times n\), and \(M\) is a full-rank diagonal matrix of size \(r\times r\) with \(r\geq\min(m,n)\).
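A minimal PyTorch sketch of this factorisation (assumed shapes and initialisation, not the paper's code), taking the smallest admissible rank \(r=\min(m,n)\) and storing the diagonal \(M\) as a vector:

```python
import torch
import torch.nn as nn

class FactorisedLinear(nn.Module):
    def __init__(self, m, n):
        super().__init__()
        r = min(m, n)
        self.U = nn.Parameter(torch.empty(m, r))
        self.M = nn.Parameter(torch.ones(r))   # identity-initialised diagonal
        self.V = nn.Parameter(torch.empty(r, n))
        nn.init.kaiming_uniform_(self.U)
        nn.init.kaiming_uniform_(self.V)

    def contract(self):
        # Fold U M V back into a single m x n weight for inference (Eq. 1).
        return self.U @ torch.diag(self.M) @ self.V

    def forward(self, x):  # x: [batch, n] -> [batch, m]
        return x @ self.contract().t()
```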
#### 2.1.2 Convolutional Layers
A shared convolutional layer of a deep MTL model is parameterised as \(W\in\mathbb{R}^{c_{o}\times c_{i}\times k\times k}\) to mathematically denote a 2-dimensional convolutional layer of kernel size \(k\times k\) with \(c_{o}\) output channels and \(c_{i}\) input channels.
Using tensor factorisation on the convolutional layer following the spatial SVD format, we first permute the weight tensor to \(c_{o}\times k\times k\times c_{i}\), and replace it with three tensors \(U\), \(M\) and \(V\) via Eq. (1), of sizes \(c_{o}\times k\times r\), \(r\times r\) and \(r\times k\times c_{i}\) respectively, where \(r\geq\min(c_{o}\times k,c_{i}\times k)\).
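The corresponding shape bookkeeping for the convolutional case can be sketched as follows; the contraction back to the usual conv layout is illustrative, using the smallest admissible rank.

```python
import torch

c_o, c_i, k = 64, 32, 3
r = min(c_o * k, c_i * k)  # smallest full-rank choice
U = torch.randn(c_o, k, r)
M = torch.diag(torch.ones(r))
V = torch.randn(r, k, c_i)

# Contract U M V back to [c_o, k, k, c_i], then permute to [c_o, c_i, k, k].
W = torch.einsum("oar,rs,sbi->oabi", U, M, V).permute(0, 3, 1, 2)
print(W.shape)  # torch.Size([64, 32, 3, 3])
```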
### Parameter Sharing Mechanism
Consider each shared fully-connected or convolutional layer of an MTL model learning tasks \(a\) and \(b\) together. After overparameterisation, the parameters \(U\) and \(V\) are shared across all tasks, while \(M^{(a)}\) and \(M^{(b)}\) are assigned as task-specific parameters for the corresponding tasks. The task-specific parameters \(M^{(a)}\) and \(M^{(b)}\) act as scaling factors that rescale the shared parameters \(U\) and \(V\) for each individual task, to better align with the singular vectors of the training data. The product is commutative, associative and distributive, so the order of the tasks does not affect the final product.
The designed sharing mechanism extends naturally to multitask learning with more than two tasks by adding task-specific diagonal matrices \(M^{(1)},\cdots,M^{(t)}\). For \(t\) tasks, the diagonal matrix \(M\) shared across all tasks is expanded as
\[M:=M^{(1)}\circ\cdots\circ M^{(t)} \tag{2}\]
where \(M\) is computed from the matrices \(M^{(1)},\cdots,M^{(t)}\) and \(\circ\) is the Hadamard product or standard matrix product, which are the same for the diagonal matrices.
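As a small illustration of Eq. (2), assuming the diagonals are stored as vectors so that the matrix product reduces to an elementwise product:

```python
import torch

r, num_tasks = 16, 3
task_M = [torch.ones(r) for _ in range(num_tasks)]  # identity initialisation
task_M[0][3] = 0.5  # e.g. task 1 down-weights one shared direction

M = torch.ones(r)
for m_t in task_M:
    M = M * m_t  # combined diagonal used at contraction time
```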
The overparameterisation and the parameter sharing mechanisms are shown in Figure 1.
### Iterative Training Strategy
During the training phase of the overparameterised MTL model, instead of training the weight matrix \(W\), we train the factorised matrices \(U\), \(M^{(1)}\) to \(M^{(t)}\), and \(V\). The \(U\) and \(V\) matrices are initialised with the same initialisation method as the original parameter matrix, and \(M^{(1)}\) to \(M^{(t)}\) are initialised as identity matrices. The trained matrices are contracted back to \(W\) according to Eq. (1) for inference, with lower parameter counts and computational costs.
To train shared and task-specific parameters separately, we propose an iterative training strategy consisting of two training processes per epoch, as shown in Algorithm 1:
1. Choose a small subset of the training data (\(\sim 3\%\) in our experiments); for each task \(j\) of the \(t\) tasks learned together, train only its task-specific factor \(M^{(j)}\) with the task loss \(L^{(j)}\), where \(1\leq j\leq t\).
2. Freeze all task-specific factors \(M^{(1)}\) to \(M^{(t)}\). The remaining factors \(U\), \(V\) and the parameters of other unfactorised layers are trained with the multi-task loss \(L=\sum_{i=1}^{t}\alpha_{i}L^{(i)}\), where \(\alpha\) represents fixed or adaptive loss weights.
```
Require: initialised parameters \(\theta\), loss weights \(\alpha\)
while not converged do
  sample a mini-batch of data
  for \(i \in \{1, \dots, t\}\) do
    update \(M^{(i)}\) by \(L^{(i)}\)
  end for
  update \(\theta \setminus \{M^{(1)}, \dots, M^{(t)}\}\) by \(L = \sum_{i=1}^{t} \alpha_{i} L^{(i)}\)
end while
```
**Algorithm 1** The Iterative Training Process
During each training process, Frobenius decay is applied as a penalty on factorised matrices \(U\), \(V\) and \(M\) to regularise the models for better generalisation, implemented as in [11]. The Frobenius decay rate is set to \(1\)e-4.
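A hedged Python sketch of one epoch of this strategy, combining Algorithm 1 with the Frobenius decay, is shown below; `model`, its accessor methods (`task_diagonals`, `shared_factors`), the loaders, and the loss functions are hypothetical placeholders rather than the paper's API.

```python
def frobenius_penalty(factors, rate=1e-4):
    # Squared-Frobenius-norm penalty on the factorised matrices.
    return rate * sum((f ** 2).sum() for f in factors)

def train_epoch(model, loader, small_loader, task_losses, alphas,
                opt_task, opt_shared):
    # Step 1: on a small subset (~3%), update each task-specific diagonal
    # M^(j) using only its own task loss.
    for x, ys in small_loader:
        for j, loss_fn in enumerate(task_losses):
            loss = loss_fn(model(x, task=j), ys[j])
            loss = loss + frobenius_penalty(model.task_diagonals(j))
            opt_task[j].zero_grad(); loss.backward(); opt_task[j].step()
    # Step 2: freeze the diagonals; update U, V and the unfactorised
    # parameters with the weighted multi-task loss.
    for x, ys in loader:
        loss = sum(a * f(model(x, task=j), ys[j])
                   for j, (a, f) in enumerate(zip(alphas, task_losses)))
        loss = loss + frobenius_penalty(model.shared_factors())
        opt_shared.zero_grad(); loss.backward(); opt_shared.step()
```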
## 3 Experiments
Experiments are conducted on the public datasets NYUv2 [21] and COCO [15], including a series of ablation studies and comparisons. Note that because the proposed overparameterised model is contracted back to its original size for inference, the FLOPS, latency, and parameter size during inference remain identical to the original model without overparameterisation.
### Semantic Segmentation
To first demonstrate the effectiveness of matrix factorisation as an overparameterisation method, we compare it with other state-of-the-art overparameterisation methods, including RepVGG [4] and ExpandNet [19], on a single semantic segmentation task. All experiments are conducted on the NYUv2 dataset using the SegNet model [1] and a smaller SegNet model (tagged with -S in the result tables) whose filter sizes in all convolutional layers (except the final layer) are cut in half. Overparameterisation is applied to all convolutional layers except the final layer. The implementation of SegNet and all experiments on NYUv2 follow the same training hyperparameters and configurations as [15]. The batch size is 8 and the total number of training epochs is 200. A learning rate scheduler reduces the learning rate to half at the 150th epoch.
Figure 1: Overparameterisation and Parameter Sharing Mechanisms
Note that we use the multi-branch architecture, including the additional \(1\times 1\) convolutional and identity branches proposed in RepVGG, to overparameterise the SegNet model, though it is not intended as a drop-in replacement. Results in Table 1 show that the matrix factorisation method in this paper outperforms all other methods.
### Semantic Segmentation, Depth Estimation and Surface Normal Estimation
The proposed parameter-sharing mechanism and iterative training strategy for MTL are evaluated on the NYUv2 dataset using SegNet, small SegNet and DeepLabv3 [3] with a ResNet50 backbone (tagged with -ResNet). For some experiments (tagged with Pretrain), the ResNet50 backbone is pre-trained on the ImageNet-1k dataset. All convolutional layers in the shared backbone are overparameterised for the SegNet models. For the DeepLabv3 model, the first convolutional layer in each ResNet bottleneck module is overparameterised, following a similar design in YOLOv7 [22]. The initialisation of the overparameterised parameters follows the spectral initialisation introduced in [13]. Performance is compared with multiple MTL methods, including an MTL baseline with a shared backbone and task-specific heads, cross-stitch [17] and MTAN [14]. The batch size for MTAN and the proposed method applied to MTAN is set to 2, and the learning rate is 5e-5. Note that the task-specific diagonal matrices are trained without weight decay in the optimiser.
Ablation studies are also conducted to compare the proposed method (**Fac**) with a series of variants:
* The shared backbone is overparameterised according to Figure 1, but all factors are trained with the combined task losses.
* Split the channels factorised in factors \(U\) and \(V\) evenly across tasks, then stack the split matrices back to recover \(U\) and \(V\).
* Split the core diagonal matrix \(M\) evenly across tasks, then stack the split matrices back to recover \(M\).
Experiment results are shown in Table 3. The proposed method outperforms all baselines except on some tasks under the cross-stitch model, which gains expressiveness by enlarging the model size (\(\sim 3\) times larger inference parameter size). The proposed method shows comparable or superior performance to a backbone pretrained on the ImageNet dataset, even though overparameterisation changes the backbone structure and the pretrained weights are lost.
### Instance and Semantic Segmentation
We also evaluate the proposed method using the Panoptic Feature Pyramid Network (PanopticFPN) [15] on instance segmentation and semantic segmentation tasks simultaneously, following the experiment configurations described in [15] on the COCO dataset. By overparameterising PanopticFPN and following the iterative training strategy, our model also performs better than the baseline model without pretraining on other datasets, as shown in Table 2.
## 4 Related Works
Following the investigation of the optimisation advantages of overparameterised linear and nonlinear networks in [1], several studies have proposed overparameterised deep learning models based on fully-connected or convolutional neural networks. However, none of these studies designed their overparameterisation in the context of MTL, accounting for effective parameter sharing among multiple tasks. Among these studies, ExpandNet [1], DO-Conv [1] and Overcomplete Knowledge Distillation [13] designed different methods to overparameterise the kernels of convolutional layers, over either their channel axes [1] or spatial axes [1]. A more recent design, RepVGG [16], is a special case of overparameterisation: instead of providing a drop-in replacement for the convolutional layers of arbitrary architectures as in previous studies, it presents an effective convolutional neural network with an overparameterised training network and a VGG-like inference network.
## 5 Conclusion
In this paper, we propose a parameter-sharing scheme and an iterative training strategy for deep multitask learning that share parameters effectively through overparameterised models during training, while the model architecture stays slim and compact during inference. Compared to the state-of-the-art, the scheme has demonstrated its potential across various datasets and model architectures. The design is particularly well-suited for embedded computer vision applications with tight memory and computational budgets.
## Acknowledgements
This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**SemSeg**} \\ \cline{2-3} & mIoU \(\uparrow\) & Pix Acc \(\uparrow\) \\ \hline SegNet Baseline & 17.85 & 55.36 \\
**Matrix Factorisation** & **19.74** & **56.42** \\ RepVGG & 19.26 & 56.30 \\ ExpandNet & 16.27 & 51.31 \\ SegNet-S Baseline & 17.22 & 54.29 \\
**SegNet-S Matrix Factorisation** & **19.58** & **56.28** \\ SegNet-S RepVGG & 17.25 & 54.26 \\ SegNet-S ExpandNet & 16.59 & 51.44 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Matrix Factorisation and SOTA Comparison on NYUv2
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**SemSeg**} & \multicolumn{2}{c}{**Bias**} & \multicolumn{2}{c}{**Instance Seg**} & \multicolumn{3}{c}{**PanopticSeg**} \\ \cline{2-10} & mIoU \(\uparrow\) & Pix Acc \(\uparrow\) & – & – & – & – & – & – & – \\ \hline Baseline & 21.34 & 60.15 & 11.30 & 11.18 & 10.89 & 9.88 & 16.59 & 61.69 & 21.35 \\
**Fac** & **23.44** & **70.22** & **15.62** & **15.11** & **15.11** & **15.97** & **20.94** & **64.40** & **25.64** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Instance and Semantic Segmentation using PanopticFPN |
2302.01987 | Hierarchical Graph Neural Networks for Causal Discovery and Root Cause
Localization | In this paper, we propose REASON, a novel framework that enables the
automatic discovery of both intra-level (i.e., within-network) and inter-level
(i.e., across-network) causal relationships for root cause localization. REASON
consists of Topological Causal Discovery and Individual Causal Discovery. The
Topological Causal Discovery component aims to model the fault propagation in
order to trace back to the root causes. To achieve this, we propose novel
hierarchical graph neural networks to construct interdependent causal networks
by modeling both intra-level and inter-level non-linear causal relations. Based
on the learned interdependent causal networks, we then leverage random walks
with restarts to model the network propagation of a system fault. The
Individual Causal Discovery component focuses on capturing abrupt change
patterns of a single system entity. This component examines the temporal
patterns of each entity's metric data (i.e., time series), and estimates its
likelihood of being a root cause based on the Extreme Value theory. Combining
the topological and individual causal scores, the top K system entities are
identified as root causes. Extensive experiments on three real-world datasets
with case studies demonstrate the effectiveness and superiority of the proposed
framework. | Dongjie Wang, Zhengzhang Chen, Jingchao Ni, Liang Tong, Zheng Wang, Yanjie Fu, Haifeng Chen | 2023-02-03T20:17:45Z | http://arxiv.org/abs/2302.01987v1 | # Hierarchical Graph Neural Networks for
###### Abstract.
The goal of root cause analysis is to identify the underlying causes of system problems by discovering and analyzing the causal structure from system monitoring data. It is indispensable for maintaining the stability and robustness of large-scale complex systems. Existing methods mainly focus on the construction of a single effective isolated causal network, whereas many real-world systems are complex and exhibit interdependent structures (_i.e._, multiple networks of a system are interconnected by cross-network links). In interdependent networks, the malfunctioning effects of problematic system entities can propagate to other networks or different levels of system entities. Consequently, ignoring the interdependency results in suboptimal root cause analysis outcomes.
In this paper, we propose REASON, a novel framework that enables the automatic discovery of both intra-level (_i.e._, within-network) and inter-level (_i.e._, across-network) causal relationships for root cause localization. REASON consists of Topological Causal Discovery and Individual Causal Discovery. The Topological Causal Discovery component aims to model the fault propagation in order to trace back to the root causes. To achieve this, we propose novel hierarchical graph neural networks to construct interdependent causal networks by modeling both intra-level and inter-level nonlinear causal relations. Based on the learned interdependent causal networks, we then leverage random walk with restarts to model the network propagation of a system fault. The Individual Causal Discovery component focuses on capturing abrupt change patterns of a single system entity. This component examines the temporal patterns of each entity's metric data (_i.e._, time series), and estimates its likelihood of being a root cause based on the Extreme Value theory. Combining the topological and individual causal scores, the top \(K\) system entities are identified as root causes. Extensive experiments on three real-world datasets with case studies demonstrate the effectiveness and superiority of the proposed framework.
To diagnose system failures, Chen _et al._ (Chen et al., 2018) constructed a directed acyclic graph that depicts the invoking relations among microservice applications. These methods have been applied to relatively simple systems with isolated network structures.
However, a real-world complex system usually consists of multiple networks that coordinate in a highly complex manner (Beng et al., 2015; Chen et al., 2017). These networks are interconnected, and if a system entity in one network fails, the failure may spread to its dependent entities in other networks, causing cascading damage or failures (Chen et al., 2017; Chen et al., 2018) that can circulate throughout the interconnected levels with catastrophic consequences. For instance, Figure 1 shows that the malfunction of the pod _Django-search_ first spreads to the server network and causes the fault of the server _Compute-1_; the malfunctioning effects then spread to the pod network of _Compute-2_ and cause the faults of the pods _Mongodb_ and _Mysql_; finally, the pod _Sdn_ in _Infra-1_ is also affected, resulting in the system failure. In this failure case, it is quite difficult to pinpoint the root cause _Django-search_ if we only model the server network (_i.e._, the three servers) or one of the three pod networks of the microservice system. Thus, modeling the interconnected multi-network structure is vital for a comprehensive understanding of the complex system and for effective root cause localization.
Recently, a promising approach for modeling such interconnected structures in complex systems has emerged through the concept of interdependent networks (or networks of networks) (Beng et al., 2015; Chen et al., 2017; Chen et al., 2018; Chen et al., 2018). In interdependent networks, each node of the main network can be represented as a domain-specific network. Let us elaborate using the example in Figure 1 again. Here, the dashed network is a server/machine network (the main network), where the nodes are three different servers and the edges/links indicate the causal relations among them. Each node of this main network is further represented as a pod network (the domain-specific network), where nodes are pods and edges denote their causal relations. Collectively, we call this structure a (server-pod) interdependent network. And since all the edges in these interdependent networks indicate causal dependencies, we further call it an _interdependent causal network_. Interdependent networks have been widely used in the study of various topics, including the academic influence of scholars (Shen et al., 2016) and the spreading patterns of rumors in complex social networks (Shen et al., 2016). However, existing methods only consider physical or statistical correlations, not causation, and thus cannot be directly applied to locating root causes.
Inspired by interdependent networks, this paper aims to learn interdependent causal relationships from monitoring metrics in multi-network systems to accurately identify root causes when a system failure/fault occurs. Formally, given the system KPI data, the multi-level interconnected system entities, and their metrics data (_i.e._, time series), our goal is to learn interdependent causal structures for discovering the root causes of system failures. There are two major challenges in this task:
* **Challenge 1: Learning interdependent causal networks and modeling fault propagation within them.** As mentioned above, in real-world systems with interdependent network structures, the malfunctioning effects of root causes can propagate to other nodes at the same level or at different levels (_i.e._, the main-network level and the domain-specific-network level), resulting in catastrophic failure of the entire system. To capture such propagation patterns for root cause localization, we need to learn causal relationships not only within the same level but also across levels. After modeling the interdependent causal relationships, we still need to model the propagation of malfunctioning effects in the learned interdependent causal networks.
* **Challenge 2: Identifying abrupt change patterns from the metrics data of an individual system entity.** In addition to the topological patterns, metrics data associated with the system entities can exhibit abrupt change patterns during the incidence of system faults, particularly those that are short-lived (_e.g._, fail-stop failures). The malfunctioning effects of the root cause may end quickly before they can spread. Thus, the temporal patterns from the metrics data can provide individual causal insights for locating root causes. The challenge is how to capture abrupt change patterns and determine the individual causal effect associated with the system failure.
To address these challenges, in this paper we propose REASON, a generic framework based on interdependent causal networks for root cause localization in complex systems with interdependent network structures. REASON consists of Topological Causal Discovery (TCD) and Individual Causal Discovery (ICD). The TCD component assumes that the malfunctioning effects of root causes can propagate over time to other system entities at the same or different levels (Chen et al., 2017; Chen et al., 2018). To capture such propagation patterns, we propose a hierarchical graph neural network based causal discovery method to discover both intra-level (_i.e._, within-network) and inter-level (_i.e._, across-network) non-linear causal relationships. We then leverage random walks with restarts to model the network propagation of a system fault. The ICD component, on the other hand, focuses on individual causal effects by analyzing the metrics data (_i.e._, time series) of each system entity; especially in short-lived failure cases (_e.g._, fail-stop failures), there may be no propagation patterns at all. We design an Extreme Value theory based method to capture abrupt fluctuation patterns and estimate the likelihood of each entity being a root cause. Finally, we integrate the findings of TCD and ICD, and output the system entities with the top-\(K\) greatest causal scores as the root causes. Extensive experiments and case studies on three real-world datasets validate the efficacy of our work.
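As an illustration of the propagation-modeling step, the following sketch scores nodes of a learned causal graph by random walk with restarts; the transition construction and the restart node are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def rwr_scores(A, restart, c=0.15, iters=100):
    """A: [n, n] weighted adjacency of the learned causal graph; restart:
    index of the node (e.g. the anomalous KPI) the walker jumps back to."""
    P = A / np.clip(A.sum(axis=1, keepdims=True), 1e-12, None)  # row-stochastic
    e = np.zeros(A.shape[0]); e[restart] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1 - c) * P.T @ r + c * e  # propagate, then restart
    return r  # higher score: more likely on the fault-propagation path

A = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
print(rwr_scores(A, restart=2))
```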
## 2. Preliminaries
**System Key Performance Indicator (KPI).** A KPI is a monitoring time series that indicates the system status. For instance, in a microservice system, the latency or connection time can be used to assess the system status. The smaller the latency, the higher the system's performance quality. If the connection time is too long, it is likely that the system has failed.
**Entity Metrics.** Entity metrics data can be collected by monitoring different levels of system entities. It usually contains a number of metrics, which indicate the status of a system's underlying entity. For example, in a microservice system, the underlying entity can be a physical machine, container, virtual machine, pod, etc. And the
corresponding metrics can be CPU utilization, memory utilization, disk IO utilization, etc. The data for all these metrics are essentially time series. An anomalous metric of a microservice's underlying entity can be the potential root cause of an anomalous system latency/connection time, which indicates a microservice failure.
**Interdependent Networks (INs).** Interdependent networks model the interconnections of multiplex networks (Srivastava et al., 2017; Wang et al., 2018). Given a main network \(\mathbf{G}\) with \(g\) high-level nodes (represented by a \(g\times g\) weighted adjacency matrix), a set of domain-specific networks \(\mathcal{A}=\{\mathbf{A}_{1},\cdots,\mathbf{A}_{g}\}\), and an edge set \(\mathbf{E}\) that represents the edges between the nodes in \(\mathcal{A}\) and the nodes in \(\mathbf{G}\), INs are defined as a triplet \(\mathcal{R}=<\mathbf{G},\mathcal{A},\mathbf{E}>\). The node set in \(\mathbf{G}\), _a.k.a._ high-level nodes, is denoted by \(\mathbf{V}^{G}\), and the node set in \(\mathcal{A}\), _a.k.a._ low-level nodes, is denoted by \(\mathbf{V}^{\mathcal{A}}=(\mathbf{V}^{A_{1}},\cdots,\mathbf{V}^{A_{g}})\). As a special type of INs, interdependent causal networks are INs whose edges indicate causal relations.
Take Figure 1 as an example: the dashed network is the main network \(\mathbf{G}\), which has three server nodes, including _Compute-1_, _Compute-2_, and _Infra-1_. Each of these main nodes contains a domain-specific network that is made up of several applications/pods. For instance, the main node _Compute-2_ is further represented as a domain-specific network with three pod nodes: _Mysql_, _Mongodb_, and _Dispatch_. The solid edges indicate the causal relationships between different pods, while the dashed edges indicate the causal relationships between different servers.
Without loss of generality, we focus on two levels of system entities. Our goal is to identify root causes by automatically learning interdependent causal relations between different levels of system entities and the system KPI. The identified root causes are low-level system entities to reflect fine-grained root cause detection.
**Problem Statement**. Given the metrics/sensor data of multi-level system entities corresponding to the high-level and low-level nodes in the main and domain-specific networks, \(\{\mathbf{X}^{G},\mathbf{X}^{\mathcal{A}}\}\), and the system key performance indicator \(\mathbf{y}\), the problem is to construct an interdependent causal network \(\mathcal{R}=<\mathbf{G},\mathcal{A},\mathbf{E}>\) and identify the top-\(K\) low-level nodes in \(\mathbf{V}^{\mathcal{A}}\) that are most relevant to \(\mathbf{y}\).
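To make the triplet \(\mathcal{R}=<\mathbf{G},\mathcal{A},\mathbf{E}>\) concrete, a minimal sketch of the data structure is shown below, assuming NumPy; all field names are illustrative and not from the paper's implementation.

```python
# A minimal sketch of the triplet R = <G, A, E> as a data structure, assuming
# NumPy; field names are illustrative, not from the paper's implementation.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class InterdependentNetwork:
    G: np.ndarray         # (g, g) adjacency of the main (high-level) network
    A: List[np.ndarray]   # g domain-specific adjacencies, A[i]: (d_i, d_i)
    E: np.ndarray         # cross-level edges between nodes in A and nodes in G
```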
## 3. Methodology
We present REASON, an interdependent causal network based framework for root cause localization. As illustrated in Figure 2, REASON includes three major steps: 1) topological causal discovery; 2) individual causal discovery; and 3) causal integration.
### Topological Causal Discovery
Root causes (_i.e._, the system entities that cause the system failures or faults) could propagate malfunctioning or fault effects to other system entities of the same network or across different networks over time (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018), which makes real root causes hard to locate. To address this challenge, we propose a hierarchical graph neural network based causal discovery method to construct interdependent causal graphs among low-level and high-level system entities. Failure propagation is modeled on the learned causal structures to provide topological guidance for locating root causes by simulating the malfunctioning effects of root causes.
#### 3.1.1. Hierarchical Graph Neural Network based Interdependent Causal Discovery
There can be more than one entity metric (_i.e._, multi-variate time series) per system entity (refer to Section 2). For each individual metric, we learn interdependent causal graphs among different system entities using the same learning strategy. To ease the description, we take one metric of system entities as an example to illustrate the interdependent causal discovery process.
The metric of system entities (_i.e._, high-level or low-level) is a multivariate time series \(\{\mathbf{x}_{0},\cdots,\mathbf{x}_{T}\}\). The metric value at the \(t\)-th time step is \(\mathbf{x}_{t}\in\mathbb{R}^{d}\), where \(d\) is the number of entities. The data can be modeled using the VAR model (Wang et al., 2018; Wang et al., 2018), whose formulation is given by:
\[\mathbf{x}_{t}^{\top}=\mathbf{x}_{t-1}^{\top}\mathbf{B}_{1}+\cdots+\mathbf{x} _{t-p}^{\top}\mathbf{B}_{p}+\mathbf{\epsilon}_{t}^{\top},\quad t=\{p,\cdots,T\} \tag{1}\]
where \(p\) is the time-lagged order, \(\mathbf{\epsilon}_{t}\) is the vector of error variables that are expected to be non-Gaussian and independent in the temporal dimension, and \(\{\mathbf{B}_{1},\cdots,\mathbf{B}_{p}\}\) are the weight matrices of the time-lagged data. In the VAR model, the time series at \(t\), \(\mathbf{x}_{t}\), is assumed to be a linear combination of the past \(p\) lags of the series.

Figure 2. The overview of the proposed framework REASON. It consists of three major steps: topological causal discovery, individual causal discovery, and causal integration.
Assuming that \(\{\mathbf{B}_{1},\cdots,\mathbf{B}_{p}\}\) are constant across time, Equation (1) can be extended into the matrix form:
\[\mathbf{X}=\tilde{\mathbf{X}}_{1}\mathbf{B}_{1}+\cdots+\tilde{\mathbf{X}}_{p} \mathbf{B}_{p}+\epsilon \tag{2}\]
where \(\mathbf{X}\in\mathbb{R}^{m\times d}\) is a matrix whose rows are \(\mathbf{x}_{t}^{\top}\); \(\{\tilde{\mathbf{X}}_{1},\cdots,\tilde{\mathbf{X}}_{p}\}\) are the time-lagged data.

To simplify Equation 2, let \(\tilde{\mathbf{X}}=[\tilde{\mathbf{X}}_{1}|\cdots|\tilde{\mathbf{X}}_{p}]\in\mathbb{R}^{m\times pd}\) and \(\mathbf{B}=[\mathbf{B}_{1};\cdots;\mathbf{B}_{p}]\in\mathbb{R}^{pd\times d}\). Here, \(m=T+1-p\) is the effective sample size, because the first \(p\) elements in the metric data have no sufficient time-lagged data to fit Equation 2. After that, we apply the QR decomposition to the weight matrix \(\mathbf{B}\) to transform Equation 2 as follows:

\[\mathbf{X}=\tilde{\mathbf{X}}\hat{\mathbf{B}}\mathbf{W}+\epsilon \tag{3}\]

where \(\hat{\mathbf{B}}\in\mathbb{R}^{pd\times d}\) is the weight matrix of the time-lagged data in the temporal dimension; \(\mathbf{W}\in\mathbb{R}^{d\times d}\) is the weighted adjacency matrix, which reflects the relations among system entities.
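As a minimal illustration of Equations (1)-(3), the sketch below builds the time-lagged design matrix \(\tilde{\mathbf{X}}\) from a toy metric series and fits the stacked VAR weights by ordinary least squares, assuming NumPy. In REASON itself, \(\hat{\mathbf{B}}\) and \(\mathbf{W}\) are learned jointly with the neural model introduced next, so this closed-form solve is only for intuition.

```python
# A toy illustration of Equations (1)-(3), assuming NumPy: build the lagged
# design matrix X_tilde and fit the stacked VAR weights B = [B_1; ...; B_p]
# by ordinary least squares.
import numpy as np

def build_lagged(x, p):
    """x: (T+1, d) metric series -> targets X: (m, d), lags X_tilde: (m, p*d)."""
    T1, d = x.shape
    m = T1 - p                                   # effective sample size m = T+1-p
    X = x[p:]                                    # rows x_p, ..., x_T
    X_tilde = np.hstack([x[p - k : T1 - k] for k in range(1, p + 1)])
    return X, X_tilde

rng = np.random.default_rng(0)
series = rng.normal(size=(101, 5))               # toy data: 5 entities, T+1 = 101 steps
X, X_tilde = build_lagged(series, p=3)
B, *_ = np.linalg.lstsq(X_tilde, X, rcond=None)  # B has shape (p*d, d)
```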
A nonlinear autoregressive model allows \(\mathbf{x}_{t}\) to evolve according to more general nonlinear dynamics [9]. In a forecasting setting, one promising way is to jointly model the nonlinear functions using neural networks [9, 24]. By applying neural networks \(f\) to Equation 3, we have
\[\mathbf{X}=f(\tilde{\mathbf{X}}\hat{\mathbf{B}}\mathbf{W};\mathbf{\Theta})+\epsilon \tag{4}\]
where \(\mathbf{\Theta}\) is the set of parameters of \(f\).
Given the data \(\mathbf{X}\) and \(\tilde{\mathbf{X}}\), our goal here is to estimate a weighted adjacency matrix \(\mathbf{W}\) that corresponds to a directed acyclic graph (DAG). The time-lagged causal edges go only forward in time and thus do not create cycles, so to ensure that the whole network is acyclic it suffices to require that \(\mathbf{W}\) is acyclic. Minimizing the least-squares loss with the acyclicity constraint gives the following optimization problem:
\[\min\frac{1}{m}\left\|\mathbf{X}-f(\tilde{\mathbf{X}}\hat{\mathbf{B}} \mathbf{W};\mathbf{\Theta})\right\|^{2}\quad s.t.\ \mathbf{W}\ \text{is acyclic}, \tag{5}\]
To learn \(\mathbf{W}\) in an adaptive manner, we adopt the following layer:
\[\mathbf{W}=\text{RELU}(\tanh(\mathbf{W}_{+}\mathbf{W}_{-}^{\top}-\mathbf{W}_ {-}\mathbf{W}_{+}^{\top})), \tag{6}\]
where \(\mathbf{W}_{+}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{-}\in\mathbb{R}^{d\times d}\) are two parameter matrices. This learning layer aims to enforce the asymmetry of \(\mathbf{W}\), because the propagation of malfunctioning effects is unidirectional and acyclic, from root causes to subsequent entities. In the following sections, \(\mathbf{W}^{G}\) denotes the causal relations between high-level nodes and \(\mathbf{W}^{\mathcal{A}}\) denotes the causal relations between low-level nodes.
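A sketch of this adaptive layer, assuming PyTorch; the class and parameter names are illustrative. Because the argument of the tanh is antisymmetric, \(\mathbf{W}(i,j)>0\) forces \(\mathbf{W}(j,i)=0\) after the ReLU, which is exactly the asymmetry the text motivates.

```python
# A sketch of the adaptive adjacency layer of Equation (6), assuming PyTorch;
# class and parameter names are illustrative.
import torch
import torch.nn as nn

class AdaptiveDAGWeights(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_plus = nn.Parameter(0.1 * torch.randn(d, d))
        self.w_minus = nn.Parameter(0.1 * torch.randn(d, d))

    def forward(self):
        # The argument of tanh is antisymmetric (S^T = -S), so after the ReLU
        # at most one of W(i, j) and W(j, i) is nonzero: edges are one-way.
        s = self.w_plus @ self.w_minus.T - self.w_minus @ self.w_plus.T
        return torch.relu(torch.tanh(s))
```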
Then, the causal structure learning for the interdependent networks can be divided into intra-level learning and inter-level learning. Intra-level learning is to learn the causation among the same level of nodes, while inter-level learning is to learn the cross-level causation. To model the influence of low-level nodes on high-level nodes, we aggregate low-level information into high-level nodes in inter-level learning. Figure 3 shows the learning process.
For **intra-level learning**, we adopt the same learning strategy to learn causal relations among both high-level nodes and low-level nodes. Specifically, we first apply \(L\) GNN layers to the time-lagged data \(\{\mathbf{x}_{t-1},\cdots,\mathbf{x}_{t-p}\}\in\mathbb{R}^{d\times p}\) to obtain its embedding. In the \(l\)-th layer, the embedding \(\mathbf{z}^{(l)}\) is obtained by aggregating the nodes' embeddings and their neighbors' information from the \((l-1)\)-th layer. Then, the embedding at the last layer, \(\mathbf{z}^{(L)}\), is used to predict the metric value at time step \(t\) through an MLP layer. This process can be represented as
\[\left\{\begin{array}{l}\mathbf{z}^{(0)}=[\mathbf{x}_{t-1},\cdots,\mathbf{x }_{t-p}],\\ \mathbf{z}^{(l)}=\text{GNN}(\text{Cat}(\mathbf{z}^{(l-1)},\mathbf{W}\cdot \mathbf{z}^{(l-1)})\cdot\mathbf{B}^{(l)}),\\ \tilde{\mathbf{x}}_{t}=\text{MLP}(\mathbf{z}^{(L)};\mathbf{\Theta}),\end{array}\right. \tag{7}\]
where Cat is the concatenation operation; \(\mathbf{B}^{(l)}\) is the weight matrix of the \(l\)-th layer; GNN is activated by the RELU function to capture non-linear correlations in the time-lagged data. Our goal is to minimize the difference between the actual value \(\mathbf{x}_{t}\) and the predicted value \(\tilde{\mathbf{x}}_{t}\). Thus, the optimization objective is defined as follows
\[\mathcal{L}=\frac{1}{m}\sum_{t}(\mathbf{x}_{t}-\tilde{\mathbf{x}}_{t})^{2} \tag{8}\]
As shown in Figure 3, we conduct intra-level learning for the low-level and high-level system entities to construct \(\mathbf{W}^{\mathcal{A}}\) and \(\mathbf{W}^{G}\), respectively. The optimization objectives for the low-level and high-level causal relations, in the same format as Equation 8, are denoted by \(\mathcal{L}_{\mathcal{A}}\) and \(\mathcal{L}_{G}\), respectively.
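Equation (7) is written schematically, so the following is one plausible reading of the intra-level forward pass, assuming PyTorch and reusing the adaptive layer from the sketch above; the hidden sizes and final MLP width are our assumptions.

```python
# One plausible reading of the intra-level pass in Equations (7)-(8),
# assuming PyTorch; hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn

class IntraLevelGNN(nn.Module):
    def __init__(self, d, p, hidden=32, L=2):
        super().__init__()
        self.adj = AdaptiveDAGWeights(d)         # from the sketch above
        dims = [p] + [hidden] * L
        self.B = nn.ModuleList(nn.Linear(2 * dims[l], dims[l + 1]) for l in range(L))
        self.mlp = nn.Linear(hidden, 1)          # predicts x_t for each entity

    def forward(self, z0):                       # z0: (d, p) time-lagged data
        W = self.adj()
        z = z0
        for layer in self.B:
            # Cat(z, W·z) mixes each node's embedding with neighbor messages.
            z = torch.relu(layer(torch.cat([z, W @ z], dim=-1)))
        return self.mlp(z).squeeze(-1)           # (d,) prediction of x_t

# Equation (8): loss = ((x_t - model(z0)) ** 2).mean()
```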
For **inter-level learning**, we aggregate the information of low-level nodes to the high-level nodes for constructing the cross-level causation. So, the initial embedding of high-level nodes \(\mathbf{z}^{(0)}\) is the concatenation of their time-lagged data \(\{\mathbf{x}_{t-1},\cdots,\mathbf{x}_{t-p}\}\) and aggregated low-level embeddings, which can be formulated as follows
\[\mathbf{z}^{(0)}=\text{Cat}([\mathbf{x}_{t-1},\cdots,\mathbf{x}_{t-p}],\mathbf{ W}\cdot\mathbf{z}^{(L)}) \tag{9}\]
where \(\mathbf{W}\) is a weight matrix that controls the contributions of low-level embeddings to high-level embeddings. As shown in Figure 3, there are two inter-level learning parts. The first one learns the cross-level causal relations between low-level and high-level nodes, denoted by \(\mathbf{W}^{\mathcal{A}G}\). The second one constructs the causal linkages between high-level nodes and the system KPI, denoted by \(\mathbf{W}^{GS}\). During this process, we predict the value of the system KPI at time step \(t\) and aim to make the predicted values close to the actual ones. Hence, we formulate the optimization objective \(\mathcal{L}_{S}\), whose format is the same as Equation 8.
Figure 3: The learning process of hierarchical GNNs. Intra-level learning captures causation within the same-level system entities. Inter-level learning aggregates low-level information to high-level nodes for constructing cross-level causation.

In addition, the learned interdependent causal graphs must meet the acyclicity requirement. Since the cross-level causal relations \(\mathbf{W}^{\mathcal{A}G}\) and \(\mathbf{W}^{GS}\) are unidirectional, only \(\mathbf{W}^{\mathcal{A}}\) and \(\mathbf{W}^{G}\) need to be acyclic. To achieve this goal, inspired by the work (Wang et al., 2017), we use the trace exponential function \(h(\mathbf{W})=tr(e^{\mathbf{W}\circ\mathbf{W}})-d=0\), which satisfies \(h(\mathbf{W})=0\) if and only if \(\mathbf{W}\) is acyclic. Here, \(\circ\) is the Hadamard product of two matrices. Meanwhile, to enforce the sparsity of \(\mathbf{W}^{\mathcal{A}}\), \(\mathbf{W}^{G}\), \(\mathbf{W}^{\mathcal{A}G}\), and \(\mathbf{W}^{GS}\) for producing robust causation, we use the \(L1\)-norm to regularize them. So, the final optimization objective is
\[\mathcal{L}_{final}=(\mathcal{L}_{\mathcal{A}}+\mathcal{L}_{G}+ \mathcal{L}_{S})\] \[+\lambda_{1}(\left\|\mathbf{W}^{\mathcal{A}}\right\|_{1}+\left\| \mathbf{W}^{G}\right\|_{1}+\left\|\mathbf{W}^{\mathcal{A}G}\right\|_{1}+ \left\|\mathbf{W}^{GS}\right\|_{1})\] \[+\lambda_{2}(h(\mathbf{W}^{\mathcal{A}})+h(\mathbf{W}^{G})) \tag{10}\]
where \(\left\|\cdot\right\|_{1}\) is the element-wise \(L1\)-norm; \(\lambda_{1}\) and \(\lambda_{2}\) are two parameters that control the contribution of regularization items. We aim to minimize \(\mathcal{L}_{final}\) through the L-BFGS-B solver. When the model converges, we construct interdependent causal networks through \(\mathbf{W}^{\mathcal{A}}\), \(\mathbf{W}^{G}\), \(\mathbf{W}^{\mathcal{A}G}\), and \(\mathbf{W}^{GS}\).
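A sketch of the acyclicity penalty \(h(\mathbf{W})=tr(e^{\mathbf{W}\circ\mathbf{W}})-d\) and the regularization terms of Equation (10), assuming PyTorch, whose `torch.matrix_exp` computes the matrix exponential; the function names are illustrative.

```python
# A sketch of the acyclicity penalty h(W) = tr(exp(W∘W)) - d and the L1 terms
# of Equation (10), assuming PyTorch.
import torch

def acyclicity(W):
    d = W.shape[0]
    return torch.trace(torch.matrix_exp(W * W)) - d   # equals 0 iff W is acyclic

def penalty(W_A, W_G, W_AG, W_GS, lam1, lam2):
    l1 = sum(w.abs().sum() for w in (W_A, W_G, W_AG, W_GS))
    return lam1 * l1 + lam2 * (acyclicity(W_A) + acyclicity(W_G))

# L_final = L_A + L_G + L_S + penalty(...), minimized with an L-BFGS-type solver.
```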
#### 3.1.2. Network Propagation on Interdependent Causal Graphs
As aforementioned, starting from the root cause entity, malfunctioning effects will propagate to neighboring entities (Han et al., 2017), and different types of system faults can trigger diverse propagation patterns. This observation motivates us to apply network propagation to the learned causal structure to mine the hidden actual root causes.
The learned interdependent causal structure is a directed acyclic graph, which reflects the causal relations from the low level to the high level to the system level. In order to trace back the root causes, we need to conduct a reverse analysis. Thus, we transpose the learned causal structure to get \(<<\mathbf{G}^{\top},\mathcal{A}^{\top},\mathbf{E}>,\text{KPI}>\), then apply a random walk with restart on the interdependent causal networks to estimate the topological causal score of each entity.
Specifically, the transition probabilities of a particle on the transposed structure can be denoted by
\[H=\begin{bmatrix}H_{GG}&H_{G\mathcal{A}}\\ H_{\mathcal{A}G}&H_{\mathcal{A}\mathcal{A}}\end{bmatrix} \tag{11}\]

where \(H_{GG}\) and \(H_{\mathcal{A}\mathcal{A}}\) depict the walks within the same-level network, while \(H_{G\mathcal{A}}\) and \(H_{\mathcal{A}G}\) describe the walks across different-level networks. Imagine that, starting from the KPI node, a particle begins to visit the networks. The particle randomly selects a high-level or low-level node to visit; it then either jumps across levels or walks within the current graph, jumping with probability \(\Phi\in[0,1]\). The higher the value of \(\Phi\), the more likely the jumping behavior occurs. In detail, if a particle is located at a high-level node \(i\) in \(\mathbf{G}\), the probability of the particle moving to the high-level node \(j\) is
\[H_{\mathbf{G}\mathbf{G}}(i,j)=(1-\Phi)\mathbf{G}^{\top}(i,j)/\sum_{k=1}^{g} \mathbf{G}^{\top}(i,k) \tag{12}\]
or jumping to the low-level node \(b\) with a probability
\[H_{\mathbf{G}\mathcal{A}}(i,b)=\Phi\mathbf{W}(i,b)/\sum_{k=1}^{gd}\mathbf{W}( i,k) \tag{13}\]
We apply the same strategy when the particle is located at a low-level node. The particle walking between different low-level nodes has a visiting probability of \(H_{\mathcal{A}\mathcal{A}}\), whose calculation is similar to that of \(H_{GG}\). Moreover, the visiting probability from a low-level node to a high-level node is \(H_{\mathcal{A}G}\), whose calculation is similar to that of \(H_{G\mathcal{A}}\). The probability transition evolving equation of the random walk with restart can be formulated as
\[\tilde{\mathbf{p}}_{t+1}^{\top}=(1-\varphi)\,\tilde{\mathbf{p}}_{t}^{\top}H+\varphi\,\tilde{\mathbf{p}}_{rs}^{\top} \tag{14}\]

where \(\tilde{\mathbf{p}}_{t+1}^{\top}\in\mathbb{R}^{g+gd}\) and \(\tilde{\mathbf{p}}_{t}^{\top}\in\mathbb{R}^{g+gd}\) are the visiting probability distributions at consecutive time steps; \(\tilde{\mathbf{p}}_{rs}^{\top}\in\mathbb{R}^{g+gd}\) is the initial visiting probability distribution, which depicts the visiting probability of high-level or low-level nodes at the initialization step; and \(\varphi\in[0,1]\) is the restart probability. When the visiting probability distribution converges, we regard the probability scores of the low-level nodes as the associated topological causal scores.
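A minimal sketch of this random walk with restart, assuming NumPy and a row-stochastic transition matrix \(H\) assembled from the four blocks of Equation (11); function and variable names are illustrative.

```python
# A minimal sketch of the random walk with restart of Equation (14), assuming
# NumPy; H spans the g high-level plus n_low low-level nodes.
import numpy as np

def rwr_scores(H, p_rs, restart=0.3, tol=1e-8, max_iter=1000):
    p = p_rs.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * (p @ H) + restart * p_rs
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p          # converged visiting probabilities

# Topological causal scores of the low-level entities are the last n_low
# entries of the converged distribution, e.g. scores = p[g:].
```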
### Individual Causal Discovery
In addition to the topological causal effects, the entity metrics of root causes themselves can fluctuate more strongly than those of other system entities during the incidence of some system faults. For some short-lived failure cases (_e.g._, fail-stop failures), there may even be no propagation patterns. Thus, we propose to individually analyze such temporal patterns in order to provide individual causal guidance for locating root causes.
Compared with the values of entity metrics in normal times, the fluctuating values are extreme and infrequent. Inspired by (Zhou et al., 2017), we assume such extreme values follow the extreme value distribution, which is defined as:
\[U_{\zeta}:x\to exp(-(1+\zeta x)^{-\frac{1}{\zeta}}),\quad\zeta\in\mathbb{R}, \quad 1+\zeta x>0. \tag{15}\]
where \(x\) is the original value and \(\zeta\) is the extreme value index, which depends on the distribution of \(x\). Let the probability of potential extreme values in \(x\) be \(q\); the boundary\({}^{2}\) \(\varrho\) of normal values can then be calculated through \(\mathbb{P}(X>\varrho)=q\) based on \(U_{\zeta}\). However, since the distribution of \(x\) is unknown, \(\zeta\) must be estimated. The Pickands-Balkema-de Haan theorem (Pickands and Blekema, 1992) provides an approach to estimate \(\zeta\) and is stated as follows:
Footnote 2: The boundary can be upper bound or lower bound of normal values.
Theorem 3.1: _The extrema of a cumulative distribution \(F\) converge to the distribution of \(U_{\zeta}\), denoted as \(F\in D_{\zeta}\), if and only if a function \(\delta\) exists, for all \(x\in\mathbb{R}\) s.t. \(1+\zeta x>0\):_

\[\frac{\overline{F}(\eta+\delta(\eta)x)}{\overline{F}(\eta)}\;\xrightarrow[\eta\to\tau]{}\;(1+\zeta x)^{-\frac{1}{\zeta}}, \tag{16}\]
where \(U_{\zeta}\) refers to the extreme value distribution; \(D_{\zeta}\) refers to a Generalized Pareto Distribution; \(\eta\) is a threshold for peak normal values; and \(\tau\) is the boundary of the initial distribution. Given such a threshold \(\eta\), the excess \(X-\eta\) follows a Generalized Pareto Distribution (GPD) with parameters \(\zeta\) and \(\delta\) according to the theorem, which is defined as:
\[\mathbb{P}(X-\eta>x|X>\eta)\sim(1+\frac{\zeta x}{\delta(\eta)})^{-\frac{1}{ \zeta}}. \tag{17}\]
We can utilize the maximum likelihood estimation method (Duffuffas and Blekema, 1992) to estimate \(\zeta\) and \(\delta\). Then, the boundary value \(\varrho\) can be calculated by
\[\varrho\simeq\eta+\frac{\delta}{\zeta}((\frac{qn}{N_{\eta}})^{-\zeta}-1). \tag{18}\]
where \(\eta,q\) can be provided by domain knowledge, \(n\) is the total number of observations, and \(N_{\eta}\) is the number of peak values (_i.e.,_ the number of \(X>\eta\)).
Individual causal discovery is devised based on Equation (18). Specifically, we divide the metric data of one system entity into two segments: the first is used for initialization, and the second for detection. For initialization, we first set the probability of extreme values \(q\) and the threshold of peak values \(\eta\) using a mean-excess-plot-based method (Bahdan et al., 2017). Then, we use the first time segment to estimate the boundary \(\varrho\) of normal values according to Equation (18). Here, \(\eta\) should be lower than \(\varrho\). For detection, we compare each value in the second time segment with \(\varrho\) and \(\eta\). If the value is larger than \(\varrho\), it is abnormal, so we store it. If the value is less than \(\varrho\) but larger than \(\eta\), the boundary \(\varrho\) may have changed; hence, we add the value to the first segment and re-estimate the parameters \(\zeta\) and \(\delta\) to obtain a new boundary. If the value is less than \(\eta\), it is normal, so we ignore it. Finally, we collect all abnormal values and normalize them using the Sigmoid function. The mean of the normalized values is regarded as the individual causal score of the associated system entity.
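A sketch of the boundary estimate in Equation (18) via a peaks-over-threshold fit, assuming SciPy's `genpareto` for the GPD parameters; this illustrates the formula rather than the paper's exact estimator, and it assumes \(\zeta\neq 0\).

```python
# A sketch of Equation (18) via a peaks-over-threshold fit, assuming SciPy;
# the zeta -> 0 limit would need a separate branch.
import numpy as np
from scipy.stats import genpareto

def evt_boundary(values, eta, q):
    peaks = values[values > eta] - eta               # exceedances over eta
    zeta, _, delta = genpareto.fit(peaks, floc=0)    # GPD shape and scale
    n, n_eta = len(values), len(peaks)
    return eta + (delta / zeta) * ((q * n / n_eta) ** (-zeta) - 1.0)
```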
### Causal Integration
Finally, we integrate the individual and topological causal scores of low-level system entities through the integration parameter \(0\leq\gamma\leq 1\): \(\mathbf{q}_{final}=\gamma\,\mathbf{q}_{indiv}+(1-\gamma)\,\mathbf{q}_{topo}\). After that, we rank the low-level nodes by \(\mathbf{q}_{final}\) and select the top-\(K\) results as the final root causes.
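The integration step itself reduces to a convex combination followed by a top-\(K\) selection; a minimal sketch assuming NumPy arrays of scores aligned over low-level entities:

```python
# Causal integration as a convex combination plus top-K selection.
import numpy as np

def top_k_root_causes(q_indiv, q_topo, gamma=0.1, K=5):
    q_final = gamma * q_indiv + (1 - gamma) * q_topo
    return np.argsort(q_final)[::-1][:K]   # indices of the top-K candidates
```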
## 4. Experiments
### Experimental Setup
#### 4.1.1. Datasets
We evaluated REASON on the following three real-world datasets for the task of root cause localization. **1) AIOps**: This dataset was collected from a real microservice system. The system has 234 microservice pods/applications (low-level system entities) deployed on 5 cloud servers (high-level system entities). The operators collected metrics data (_e.g._, CPU Usage, Memory Usage) of high-level and low-level system entities from May 2021 to December 2021. There are 5 system faults during this time period. **2) WADI (Bahdan et al., 2021)**: This dataset was collected from a water distribution testbed, which has 3 stages (high-level entities) and 123 sensors (low-level entities). It has 15 system faults collected over 16 days. **3) Swat (Sandel, 2021)**: This dataset was collected from a water treatment testbed, which consists of 6 stages (high-level entities) with 51 sensors (low-level entities). It has 16 system faults collected over 11 days. In these datasets, low-level entities affiliate with high-level entities, and same-level entities invoke each other.
#### 4.1.2. Evaluation Metrics
We evaluated the model performance with the following three widely-used metrics (Zhu et al., 2017; Wang et al., 2018):
**Precision@K (PR@K)**. It denotes the probability that the top-\(K\) predicted root causes are real, defined as
\[\text{PR@K}=\frac{1}{|\mathbb{A}|}\sum_{a\in\mathbb{A}}\frac{\sum_{i\leq K}\mathbb{1}\left(R_{a}(i)\in V_{a}\right)}{\min(K,|V_{a}|)}, \tag{19}\]

where \(\mathbb{A}\) is the set of system faults; \(a\) is one fault in \(\mathbb{A}\); \(V_{a}\) is the set of real root causes of \(a\); \(R_{a}\) is the ranked list of predicted root causes of \(a\); \(i\) refers to the \(i\)-th prediction in \(R_{a}\); and \(\mathbb{1}(\cdot)\) is the indicator function.
**Mean Average Precision@K (MAP@K)**. It assesses the model performance in the top-\(K\) predicted causes from the overall perspective, defined as
\[\text{MAP@K}=\frac{1}{K|\mathbb{A}|}\sum_{a\in\mathbb{A}}\sum_{1\leq j\leq K}\text{PR@}j, \tag{20}\]
where a higher value indicates better performance.
**Mean Reciprocal Rank (MRR)**. This metric measures the ranking capability of models. The larger the MRR value is, the further ahead the predicted positions of the root causes are; thus, operators can find the real root causes more easily. MRR is defined as
\[\text{MRR}=\frac{1}{|\mathbb{A}|}\sum_{a\in\mathbb{A}}\frac{1}{rank_{R_{a}}}, \tag{21}\]
where \(rank_{R_{a}}\) is the rank number of the first correctly predicted root cause for system fault \(a\).
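For reference, the three metrics can be computed directly from a ranked prediction list \(R_a\) and a ground-truth set \(V_a\) per fault; the sketch below assumes each ranking contains at least one true root cause (otherwise the MRR generator would need a guard).

```python
# A sketch of Equations (19)-(21), where `faults` is a list of pairs (R, V):
# R is the ranked list of predicted root causes and V the set of real ones.
def pr_at_k(faults, K):
    return sum(sum(r in V for r in R[:K]) / min(K, len(V))
               for R, V in faults) / len(faults)

def map_at_k(faults, K):
    return sum(pr_at_k(faults, j) for j in range(1, K + 1)) / K

def mrr(faults):
    # Assumes every ranking R contains at least one true root cause.
    return sum(1.0 / next(i for i, r in enumerate(R, 1) if r in V)
               for R, V in faults) / len(faults)
```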
#### 4.1.3. Baselines
We compared REASON with the following five causal discovery models: **1) PC (Rang et al., 2018)** is a classic constraint-based method. It first identifies the skeleton of the causal graph with independence tests, then orients edges using the v-structure and acyclicity constraints. **2) C-LSTM (Wang et al., 2018)** captures the nonlinear Granger causality in multivariate time series by using LSTM neural networks. **3) Dynotears (Rang et al., 2018)** is a score-based method that uses the structural vector autoregression model to construct dynamic Bayesian networks. **4) GOLEM (Wang et al., 2018)** employs a likelihood-based score function to relax the hard DAG constraint in NOTEARS. **5) GNN** is a simplified version of our causal discovery method. It only uses a GNN to learn causal structures among low-level system entities.
Since none of the above baselines can be directly applied to learn the hierarchical interdependent causation, we only utilized the entity metrics to construct causation between low-level system entities and the system KPI. We then selected the top-\(K\) entities with the highest causal scores as the root causes. To verify the effectiveness of the network propagation module (see Section 3.1.2), we applied it to the causal structures learned by these baselines and analyzed model performance changes.
In addition, to study the impact of each technical component of REASON, we developed the following model variants: (1) To assess the benefits of inter-level learning (see Section 3.1), we implemented REASON-N by removing the inter-level learning in topological causal discovery while keeping the intra-level learning of low-level system entities, network propagation and individual causal discovery. (2) To evaluate the necessity and effectiveness of integrating the individual and topological causal discovery, we developed two variants: REASON-I, which only keeps the individual causal discovery, and REASON-T, which solely keeps the topological causal discovery. (3) To verify the efficacy of hierarchical GNN-based causal discovery, we replaced the causal discovery component of REASON with PC, C-LSTM, Dynotears, and GOLEM, respectively, to implement model variants denoted as REASON-P, REASON-C, REASON-D, and REASON-G.
All experiments were conducted on a server running Ubuntu 18.04.5 with Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz, 4-way GeForce RTX 2080 Ti GPUs, and 192 GB memory. In addition, all methods were implemented using Python 3.8.12 and PyTorch 1.7.1.
### Performance Evaluation
#### 4.2.1. Overall Performance
Table 1, Table 2, and Table 3 present the overall performance of all models, where a larger value indicates better performance. We have two key observations. First, REASON significantly outperforms all the baselines on all three datasets. For example, compared to the second-best method, REASON improves PR@10, MAP@10, and MRR by at least 21.9%, 15.7%, and 6.29%, respectively. The underlying driver is that REASON can capture more complex malfunctioning effects by integrating individual and topological analyses and by learning interdependent causal networks. Second, GNN is the best baseline model, outperforming the others on most datasets. A possible explanation is that graph neural networks can facilitate the learning of non-linear causal relations among system entities via message passing. Thus, the experimental results on three datasets demonstrate the superiority of REASON in locating root causes over other baselines.
Table 1. Overall performance _w.r.t._ Swat dataset.

| | PR@1 | PR@3 | PR@5 | PR@7 | PR@10 | MAP@3 | MAP@5 | MAP@7 | MAP@10 | MRR |
|---|---|---|---|---|---|---|---|---|---|---|
| REASON | **25.0%** | **28.13%** | **66.67%** | **76.04%** | **84.38%** | **23.96%** | **35.0%** | **46.73%** | **57.60%** | **40.99%** |
| GNN | 18.75% | 19.79% | 43.75% | 52.08% | 62.50% | 18.06% | 27.92% | 33.63% | 41.88% | 34.77% |
| PC | 12.5% | 13.54% | 34.38% | 47.92% | 58.33% | 12.85% | 20.42% | 26.64% | 35.0% | 26.16% |
| C-LSTM | 12.5% | 13.54% | 28.13% | 40.63% | 52.08% | 13.89% | 17.71% | 23.81% | 31.88% | 29.35% |
| Dynotears | 12.5% | 29.17% | 32.29% | 34.38% | 42.71% | 20.14% | 24.38% | 26.93% | 30.83% | 27.85% |
| GOLEM | 6.25% | 7.29% | 12.5% | 39.58% | 47.92% | 7.64% | 9.58% | 16.96% | 25.0% | 22.36% |

Table 2. Overall performance _w.r.t._ WADI dataset.

| | PR@1 | PR@3 | PR@5 | PR@7 | PR@10 | MAP@3 | MAP@5 | MAP@7 | MAP@10 | MRR |
|---|---|---|---|---|---|---|---|---|---|---|
| REASON | **28.57%** | **59.52%** | **65.0%** | **76.19%** | **79.76%** | **42.46%** | **50.62%** | **57.41%** | **63.76%** | **53.35%** |
| GNN | 14.28% | 26.19% | 34.28% | 42.86% | 54.76% | 21.83% | 25.31% | 30.15% | 37.54% | 32.71% |
| PC | 7.14% | 27.38% | 35.0% | 44.05% | 50.0% | 16.27% | 23.90% | 28.47% | 34.57% | 27.74% |
| C-LSTM | 0% | 20.24% | 35.0% | 47.62% | 51.19% | 11.51% | 18.55% | 25.83% | 32.73% | 24.40% |
| Dynotears | 7.14% | 14.29% | 30.00% | 29.76% | 47.62% | 10.71% | 17.43% | 20.95% | 26.81% | 22.23% |
| GOLEM | 0% | 19.05% | 40.0% | 46.43% | 53.57% | 9.92% | 20.38% | 27.82% | 34.83% | 23.48% |

Table 4. The influence of network propagation in terms of MAP@10 (Original → Propagate).

| Dataset | PC | GOLEM | Dynotears | C-LSTM | GNN |
|---|---|---|---|---|---|
| Swat | 35.0% → **37.39%** | 25.0% → **33.44%** | 30.83% → **37.08%** | 31.87% → **34.16%** | 41.87% → **49.16%** |
| WADI | 34.57% → **35.71%** | 34.83% → **38.05%** | 26.81% → **33.76%** | 32.72% → **42.61%** | 37.53% → **45.98%** |
| AIOps | 28.0% → **30.0%** | 38.0% → **54.0%** | 38.0% → **58.0%** | 18.0% → **48.0%** | 38.0% → **60.0%** |

Table 5. The influence of network propagation in terms of MRR (Original → Propagate).

| Dataset | PC | GOLEM | Dynotears | C-LSTM | GNN |
|---|---|---|---|---|---|
| Swat | 26.16% → **32.27%** | 22.36% → **30.42%** | 27.85% → **33.98%** | 29.35% → **32.85%** | 34.77% → **40.43%** |
| WADI | 27.74% → **30.74%** | 23.48% → **25.89%** | 22.22% → **34.28%** | 24.39% → **33.27%** | 32.71% → **36.40%** |
| AIOps | 14.0% → **25.35%** | 31.22% → **37.74%** | 30.79% → **50.77%** | 10.82% → **24.73%** | 30.65% → **62.48%** |
#### 4.2.2. Influence of Network Propagation
Here, we applied our network propagation mechanism (see Section 3.1.2) to the causal structures learned by each baseline model to evaluate its effect on performance. The results are shown in Table 4 and Table 5. Our first finding is that network propagation can always improve the model performance for all models on all datasets. This observation strongly supports our assumption that network propagation is beneficial for capturing the propagation patterns of malfunctioning effects, resulting in a superior root cause localization performance. Moreover, we observe that across all models, network propagation yields greater performance enhancement on AIOps than the other two datasets. A possible reason is that AIOps contains explicit invoking relations among different pods, resulting in learning stronger causation compared with Swat and WADI.
#### 4.2.3. Ablation studies of REASON
Figure 4 shows ablation studies of REASON that examine the necessity of each technical component using PR@K and MAP@K. We find that REASON significantly outperforms REASON-N on Swat and WADI. The underlying driver is that REASON-N models causation among low-level entities only; using such causal structures, it is unable to capture cross-network propagation patterns of malfunctioning effects, leading to worse model performance. The second finding is that REASON is superior to both REASON-T and REASON-I in most cases. This observation indicates that integrating individual and topological causal discovery results can sufficiently capture the fluctuation and propagation patterns of malfunctioning effects for precisely locating root causes. Thus, each technical component of REASON is indispensable for maintaining excellent root cause localization performance.
#### 4.2.4. Impact of Hierarchical GNN-based Causal Discovery
Figure 5 evaluates the effectiveness of the proposed hierarchical GNN-based causal discovery method. Our key observations are two-fold. First, we find that REASON significantly outperforms all model variants. The underlying driver is that the message-passing mechanism of the GNN can learn more robust non-linear causal relations through sharing neighborhood information. Moreover, REASON-P outperforms REASON-C across all datasets in terms of MAP@10, while the result is the opposite in terms of MRR. A possible explanation is that PC learns more causal relations between system entities than C-LSTM. As a result, REASON-P is able to identify more actual root causes by propagating on the learned causal structures, but their ranks are not at the top owing to more root cause candidates.

Figure 4. Ablation studies of REASON.

Figure 5. The impact of the hierarchical GNN-based causal learning process.
#### 4.2.5. Parameter Analysis
We investigated the integration parameter \(\gamma\) and the number of layers \(L\) in the GNN. \(\gamma\) controls the contributions of the individual and topological causal discovery to root cause localization. The number of layers \(L\) in the GNN impacts the learning of interdependent causal structures. Figure 6 presents our parameter analysis results. It can be seen that although the optimal \(\gamma\) values for different datasets vary, REASON can achieve optimal or near-optimal results on all three datasets using a similarly small \(\gamma\) value. For instance, the best value of \(\gamma\) for Swat, WADI, and AIOps is 0.2, 0.1, and 0.8, respectively. But when we use \(\gamma=0.1\), the best value for WADI, the overall performance of REASON drops only slightly in terms of both MAP@10 and MRR. For instance, compared with the optimal results, the MRR value decreased by only 0.01 on Swat and 0.04 on AIOps, respectively. This indicates that although the propagation of malfunctioning effects varies amongst datasets, the topological component contributes more to the model performance than the individual component, which further supports our findings in Section 4.2.3. Thus, in most cases, a small \(\gamma\) value (_e.g._, \(\gamma=0.1\)) is a good choice. Second, we did not observe improved model performance as the number of GNN layers rose. This is because a large number of GNN layers may cause the representations of different nodes to become highly similar, hindering the learning of robust causal relationships.
#### 4.2.6. A Case Study
Finally, we conducted a case study to further illustrate the learned interdependent causal networks, utilizing the system failure of AIOps on September 1, 2021. Operators built a microservice system and simulated system faults to collect metrics data for analysis. The detailed collection procedure is as follows: First, the operators deployed the system on three servers: _compute-2_, _infra-1_, and _control-plane-1_. Then, they sent requests periodically to the pod _sdn-c7kq_ to observe the system's latency. Next, to simulate the malfunctioning effects of the root cause, the operators used an _openssl_ command to give the pod _catalogue-xfjp_ an extremely high CPU load, which affected some other pods on different servers and eventually caused the system fault. Finally, the operators collected all entity metrics (_e.g._, CPU Usage, Memory Usage) of all system entities (_e.g._, servers, pods).
Based on the collected metrics data, we applied REASON to learn the interdependent causation between system entities and the system KPI for locating root causes, which reflects real operating circumstances. Figure 7 shows the learned interdependent causal structures based on the CPU Usage metric. According to it, the _infra-1_ server is the one most likely to have increased the system latency. In this server, _catalogue-xfjp_ is the root cause, whose negative effects propagate to _sdn-c7kq_, resulting in the malfunction of _infra-1_. This observation illustrates that REASON can precisely locate root causes and provide an explanation for the located outcomes.
## 5. Related Work
**Root Cause Analysis (RCA)**, also known as fault localization, focuses on identifying the root causes of system failures/faults from symptom observations (Zhou et al., 2017). In recent years, many domain-specific RCA approaches (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018) have been proposed for maintaining the robustness of complex systems in various domains. For instance, in the energy management domain, Capozzoli _et al._ utilized statistical techniques and DNNs to identify the reason for abnormal energy consumption in smart buildings (Brandon et al., 2016). In the web development domain, Brandon _et al._ proposed a graph representation framework to localize root causes in microservice systems by comparing anomalous situations and graphs (Brandon et al., 2016). Different from the existing works, the proposed REASON framework is a generic RCA approach that analyzes the surveillance multi-variate time series data from both individual and topological perspectives. Moreover, REASON captures the interdependent network properties present in many real-world systems to enhance RCA performance.
**Causal Discovery in Time Series** aims to learn causal relationships from observational time series data (Brandon et al., 2016). Existing methods can be broadly classified into four categories: (i) Granger causality approaches (Zhou et al., 2017; Wang et al., 2018), in which causation is assessed based on whether one time series is helpful in predicting another; (ii) constraint-based approaches (Liu et al., 2017; Wang et al., 2018; Wang et al., 2018), in which a causal structure is learned based on conditional independence tests and v-structure rules; (iii) noise-based approaches (Wang et al., 2018; Wang et al., 2018), in which causation is depicted by equations that reflect the relations between different variables and noises; (iv) score-based approaches (Wang et al., 2018; Wang et al., 2018), in which a causal structure's quality is assessed by a scoring function. REASON belongs to the score-based causal discovery category. Existing causal discovery methods can only handle time series or isolated graphs, ignoring the structural and dynamical features one may want to model explicitly. In this paper, we propose a hierarchical graph neural network based method that models interdependent causal structures from multivariate time series.

Figure 6. Parameter analysis of REASON.
**Interdependent Networks** are often referred to as network of networks (NoN), in which complex networks interact and influence one another (Wang et al., 2018; Wang et al., 2018). Numerous real-world systems exhibit such structural and dynamical features that differ from those observed in isolated networks. To overcome the limitation of prior efforts on isolated graph analysis, in recent years, increasing research efforts have been focused on interdependent networks and their applications. For example, Ni _et al._ employed interdependent networks to illustrate the academic influence of scholars based on their research area and publications (Liard et al., 2018). Laird _et al._ studied the interdependent relationship between cancer pain and depression (Laird et al., 2018). These examples demonstrate the efficiency of modeling complicated systems via interdependent networks. In recent years, several studies (Liu et al., 2017; Wang et al., 2018; Wang et al., 2018) have begun to explore how the interdependent networks model can be applied to root cause analysis. However, there are two key differences between REASON and other previous works: 1) Existing works only consider physical or statistical correlations, but not causation. 2) Existing interdependent networks are constructed using domain knowledge or system rules. REASON can automatically discover the interdependent causal graphs from monitoring metrics data for root cause analysis.
## 6. Conclusion
In this paper, we investigated the challenging problem of root cause localization in complex systems with interdependent network structures. We proposed REASON, a generic framework for root cause localization that mines interdependent causation and the propagation patterns of malfunctioning effects. Hierarchical graph neural networks were used to represent non-linear intra-level and inter-level causation and to improve causal discovery among system entities via message passing. We conducted comprehensive experiments on three real-world datasets to evaluate the proposed framework. The experimental results validate the effectiveness of our work. Additionally, through ablation studies, parameter analysis, and case studies, the importance of capturing interdependent structures for root cause localization has been well verified. An interesting direction for further exploration would be incorporating other sources of data, such as system logs, with the time series data for root cause analysis in complex systems.
Figure 7. Interdependent causal structures learned from the AIOps dataset. Applications/pods are denoted by solid nodes, in which the red solid node is the root cause. The black numbers on each causal-relation edge indicate the causal score between two connected nodes. The red dashed line reflects the backtracing process of the root cause. |
2310.08224 | Emergence of Latent Binary Encoding in Deep Neural Network Classifiers | We investigate the emergence of binary encoding within the latent space of
deep-neural-network classifiers. Such binary encoding is induced by the
introduction of a linear penultimate layer, which employs during training a
loss function specifically designed to compress the latent representations. As
a result of a trade-off between compression and information retention, the
network learns to assume only one of two possible values for each dimension in
the latent space. The binary encoding is provoked by the collapse of all
representations of the same class to the same point, which corresponds to the
vertex of a hypercube. By analyzing several datasets of increasing complexity,
we provide empirical evidence that the emergence of binary encoding
dramatically enhances robustness while also significantly improving the
reliability and generalization of the network. | Luigi Sbailò, Luca Ghiringhelli | 2023-10-12T11:16:57Z | http://arxiv.org/abs/2310.08224v4 | # Emergence of Latent Binary Encoding in Deep Neural Network Classifiers
###### Abstract
We observe the emergence of binary encoding within the latent space of deep-neural-network classifiers. Such binary encoding is induced by introducing a linear penultimate layer, which is equipped during training with a loss function that grows as \(\exp(\vec{x}^{2})\), where \(\vec{x}\) are the coordinates in the latent space. The phenomenon we describe represents a specific instance of a well-documented occurrence known as _neural collapse_, which arises in the terminal phase of training and entails the collapse of latent class means to the vertices of a simplex equiangular tight frame (ETF). We show that binary encoding accelerates convergence toward the simplex ETF and enhances classification accuracy.
## I Introduction
In tasks like images classification, deep neural networks have achieved levels of performance that surpass human capabilities. Nevertheless, there is a general lack of understanding regarding the theoretical mechanisms behind these outstanding results.
In the last years, there has been a growing interest in studying the geometrical structures that emerge in the latent space of deep neural networks. Notably, it has been noticed that the class means in the penultimate layers collapse to the vertices of a simplex equiangular tight frame (ETF) in the terminal phase of training [1]. This phenomenon known as _neural collapse_ has been linked to transfer learning [2] and incremental learning [3].
In this work, we develop a method for generating a binary encoding for the latent representations found in the penultimate layer of deep neural networks. Within each dimension of the penultimate layer, latent representations are in practice trained to assume one of two possible values. In the tests that we perform, all data points belonging to the same class adopt an identical binary encoding. This implies that data points within the same class ultimately converge to the vertices of a simplex constructed on the vertices of a binary hypercube. Notably, our work represents a specific instance of _neural collapse_. In our case, the phenomenon extends beyond just the class means collapsing to simplex vertices; rather, it encompasses all data points within the same class. Furthermore, our findings demonstrate that this method enhances the accuracy of neural-network predictions.
## II Binary Encoding
Given a labeled dataset \(\left\{\vec{x},\vec{\overline{y}}\right\}\), we deal with the problem of predicting the labels with a deep neural network. For an input point \(\vec{x}\), we can break down the process of producing the neural network's output \(\vec{f}(\vec{x})\) into two distinct steps. Firstly, the non-linear component of the neural network generates a latent representation \(\vec{h}(\vec{x})\). Following this, a linear classifier, characterized by weights \(\vec{W}\) and biases \(\vec{b}\) operates on this latent representation to compute \(\vec{f}(\vec{x})=\vec{W}\vec{h}(\vec{x})+\vec{b}\). The predicted label \(\vec{y}\) is finally determined by applying a softmax function to the network's output. The neural network is trained through the minimization of the cross-entropy loss function \(\mathcal{L}_{\text{CE}}\left(\vec{f}(\vec{x}),\vec{\overline{y}}\right)\), which measures the disparity between the network's predictions and the ground truth labels.
Here, we propose the introduction of an additional linear layer prior to the classifier, defined as \(\vec{x}_{bin}=\vec{W}_{bin}\vec{h}(\vec{x})+\vec{b}_{bin}\). This layer serves as the penultimate step in the network architecture, with classification being subsequently determined through another linear operation, \(\vec{f}(\vec{x})=\vec{W}\vec{x}_{bin}+\vec{b}\). In addition to the cross-entropy loss applied to the network's output, we incorporate a loss function into the penultimate layer defined as: \(\mathcal{L}_{\text{Bin}}(\vec{x}_{\text{bin}})=e^{\vec{x}_{\text{bin}}^{2}}\). The resulting loss function is thus composed of two terms:
\[\mathcal{L}=\mathcal{L}_{\text{CE}}+\gamma\,\mathcal{L}_{\text{Bin}}, \tag{1}\]
where \(\gamma\) is a hyperparameter.
The latent binary encoding emerges as a result of balancing two conflicting tendencies coming from the two components of the loss function. The exponential loss function pushes towards having all latent representations closer to zero, while the minimization of the cross-entropy induces differentiation among the different latent representations. This dynamics naturally leads to a configuration where, for each latent dimension, most of the latent values cluster around two opposing peaks in relation to the origin. The presence of these two peaks facilitates the necessary differentiation for distinguishing various representations, and they tend to be in close proximity to zero due to the influence of the exponential loss. Assuming
the concentrated distribution of points only around the two peaks, it becomes evident that the latent representation effectively approximates only two distinct values: either positive or negative.
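As a concrete illustration, the combined objective of Equation (1) can be written in a few lines; the sketch below assumes PyTorch, and the mean reduction over batch and latent dimensions is our assumption, since the text does not specify it (the default \(\gamma=10\) follows Appendix A).

```python
# A minimal sketch of Equation (1), assuming PyTorch; the mean reduction is
# our assumption, and gamma = 10 follows Appendix A.
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, x_bin, gamma=10.0):
    ce = F.cross_entropy(logits, targets)
    bin_loss = torch.exp(x_bin ** 2).mean()   # grows as exp(x^2) away from 0
    return ce + gamma * bin_loss
```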
## III Experiments
To evaluate the impact of incorporating a binary encoding layer, we conducted experiments with four distinct neural network architectures, all of which share a common base network responsible for generating the latent representation \(\vec{h}(\vec{x})\). Nevertheless, these networks diverge in their subsequent steps for classification.
One network architecture, referred as _Binary encoding_, implements a linear penultimate layer with loss function as in Eq (1). The _Linear penultimate_ architecture features a linear penultimate layer as well, but is trained using only the cross-entropy loss function. The _Non-linear penultimate_ architecture implements a non-linear layer that acts on \(\vec{h}(\vec{x})\) before linear classification. The fourth _No penultimate_ architecture performs linear classification directly on the \(\vec{h}(\vec{x})\) latent representation.
We note that the _Binary encoding_, _Linear penultimate_ and _Non-linear penultimate_ architectures have the same number of layers and parameters but differ for activation and loss functions, while the _No penultimate_ architecture has one layer less with respect to the others. These 4 different architectures are tested on MNIST and FashionMNIST. Details about training and architecture of the network used to generate the latent representation \(\vec{h}(\vec{x})\) are given in Appendix A.
In order to test the binarity hypothesis, which we define as the assumption that each dimension in the latent representation can assume approximately only one of two values, we fit a Gaussian mixture model with 2 modes on the _Binary encoding_ latent representation \(\mathbf{x}_{\text{Bin}}\). A separate fit is performed on each dimension over all values of the training set. In each dimension, the average log-likelihood score of the training set is computed and then averaged over all dimensions. The standard deviations of the two posterior distributions are also collected and averaged over all dimensions. These values are plotted in Fig. 1, where we can see that during training the score increases while the standard deviation decreases. This observation supports our binarity hypothesis, as it aligns with the notion that a Gaussian distribution with a standard deviation approaching zero implies that all data points converge to a single position. The same analysis is performed for the _Linear penultimate_ architecture, as it also features a linear layer before classification. However, we can see that in this architecture the binarity hypothesis does not hold.
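A minimal sketch of this per-dimension binarity test, assuming scikit-learn; the function name and array layout are illustrative.

```python
# A sketch of the per-dimension binarity test, assuming scikit-learn: fit a
# two-component Gaussian mixture on each latent dimension and average the
# log-likelihood scores and component standard deviations.
import numpy as np
from sklearn.mixture import GaussianMixture

def binarity_stats(x_bin):                    # x_bin: (n_samples, n_dims)
    scores, stds = [], []
    for j in range(x_bin.shape[1]):
        col = x_bin[:, j].reshape(-1, 1)
        gmm = GaussianMixture(n_components=2).fit(col)
        scores.append(gmm.score(col))         # mean log-likelihood on this dim
        stds.append(np.sqrt(gmm.covariances_).mean())
    return float(np.mean(scores)), float(np.mean(stds))
```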
As we assume that each latent representation \(\vec{x}_{\text{Bin}}\) is encoded into a binary representation, we expect points with the same label to present the same encoding. We then generate a binary encoding of the penultimate layer in each of the network architectures we study, assigning 1 to all positive values and 0 to values equal to or lower than 0. In Fig. 2, we show the fraction of points that share the same encoding and belong to the same class. Notably, we observe that this assumption holds true exclusively for the _Binary encoding_ architecture. All points belonging to the same class are in fact placed on a vertex of the simplex designed on the vertices of a hypercube. Binary encoding also accelerates _neural collapse_, as discussed in Appendix B.
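One way to compute the reported fraction is sketched below, assuming NumPy; using the per-class majority code as the reference encoding is our interpretation, not a detail stated in the text.

```python
# A sketch of the class-consistency check: binarize the activations
# (1 if positive, else 0) and count points whose code equals the majority
# code of their class.
import numpy as np

def same_encoding_fraction(x_bin, labels):
    codes = (x_bin > 0).astype(np.int8)
    matches = 0
    for c in np.unique(labels):
        class_codes = codes[labels == c]
        majority = (class_codes.mean(axis=0) > 0.5).astype(np.int8)
        matches += int((class_codes == majority).all(axis=1).sum())
    return matches / len(labels)
```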
Finally, we can see in Fig. 3 that implementing a _binary encoding layer_ improves the accuracy of neural network classification on both the train and test sets, while the other three architectures show comparable performance.
Figure 1: Average log-likelihood scores and standard deviations for Gaussian mixture models with two modes on each dimension of the penultimate layer. Averages of different training outcomes are shown, and line shadows represent the standard deviations. If not visible standard deviations are small for the image resolution.
Figure 2: Fraction of points belonging to the same class which share the same binary encoding. Averages of different training outcomes are shown, and line shadows represent the standard deviations. If not visible standard deviations are small.
Conclusion and limitations
We have discussed a method to generate a binary encoding in the latent space, accomplished by adding a penultimate _binary encoding layer_, i.e., a linear layer that incorporates an exponentially growing loss function. The emergence of this phenomenon is shown to accelerate convergence toward the vertices of a simplex equiangular tight frame and to enhance network accuracy. Although the results seem promising enough to suggest that binary encoding could be used to enhance network performance, more comprehensive tests on more complex datasets and with more expressive deep neural networks remain to be done.
## Appendix A Training and architecture details
To generate the latent representation \(\vec{h}(\vec{x})\), two distinct neural network architectures were employed for the two different datasets. For MNIST classification, a fully connected neural network with three layers, each consisting of 2048 nodes, along with two dropout layers, was utilized. In the case of FashionMNIST, a convolutional neural network was employed, featuring five convolutional layers with specified input and output channel configurations ([1,64],[64,128],[128,256],[256,256], [256,512]), a kernel size of 3, and padding of 1. Additionally, three max pool layers with a kernel size of 2 and a stride of 2 were applied in the following sequence: Conv2D, MaxPool, Conv2D, MaxPool, Conv2D, MaxPool, Conv2D. Furthermore, two fully connected layers, each comprising 1024 nodes, were employed after the convolutional layers, and two dropout layers were included. A nonlinear activation function was consistently applied following each convolutional layer.
Both architectures incorporated a dropout rate of 0.5 and utilized ReLU activation functions. In the context of the CNN network applied to FashionMNIST, the _Binary encoding_, _Linear penultimate_, and _Non-linear penultimate_ architectures included an additional fully connected layer consisting of 128 nodes. Conversely, for the fully connected network applied to MNIST, this penultimate layer contained 64 nodes. The _Non-linear penultimate_ architecture incorporated a ReLU activation function within this penultimate layer.
The training process utilized the Adam optimizer with default settings as specified in the PyTorch implementation, employing a learning rate of \(10^{-4}\). The learning rate was reduced by half after every 20 epochs. For the MNIST dataset, a batch size of 64 was employed, whereas for the FashionMNIST dataset, a batch size of 128 was utilized. The loss function employed for the 'Binary encoding' architecture was as in Eq. (1) with \(\gamma=10\). Each network architecture has undergone training three times with distinct initial conditions, and the quantities displayed in plots represent the averages and standard deviations of these outcomes.
## Appendix B Convergence to simplex equiangular tight frame
In the terminal phase of training the vectors of the class means are known to converge to a simplex equiangular tight frame (ETF) as manifestation of a phenomenon known as _neural collapse_. The class mean vectors converge to having equal lengths, resulting in uniform angles between any given pair of vectors. The ETF configuration represents the maximum pairwise distance while adhering to the aforementioned properties.
In Figure 2, we illustrate this convergence of class means towards the vertexes of the simplex ETF. Notably, we observe that this convergence occurs more rapidly when employing a _binary encoding layer_. In this figure the within class variation is computed using the within class covariance compared with the between class covariance. The collapse of variability becomes apparent when contrasting with the between-class covariance. However, in the lower plot, which displays the average within-class covariance matrix, we observe an interesting trend. The average within-class covariance matrix itself, without comparison to the between-class covariance matrix, shows an increase during training, unless the _binary encoding layer_ is integrated into the network. This implies that, when using the _binary encoding layer_, not only the
Figure 4: Plots demonstrating convergence to the vertices of a simplex equiangular tight frame. From top to bottom: ’Equinorm’ as variation of the mean classes norms; ’Equian-gularity’ as variation of the angle between all class means pairs; ’Max Angle’ as distance from the max angle class means can have; ’Tr(Sw/Sb)/C’ as weighted within-class variance; ’Within class covariance’ is the average of the within-class covariance matrix. More details about the quantities plotted can be found in Ref. [1].
class means but also all data points within the dataset converge towards the vertexes of a simplex ETF.
|
2306.13690 | Prediction of Deep Ice Layer Thickness Using Adaptive Recurrent Graph Neural Networks | As we deal with the effects of climate change and the increase of global atmospheric temperatures, the accurate tracking and prediction of ice layers within polar ice sheets grows in importance. Studying these ice layers reveals climate trends, how snowfall has changed over time, and the trajectory of future climate and precipitation. In this paper, we propose a machine learning model that uses adaptive, recurrent graph convolutional networks to, when given the amount of snow accumulation in recent years gathered through airborne radar data, predict historic snow accumulation by way of the thickness of deep ice layers. We found that our model performs better and with greater consistency than our previous model as well as equivalent non-temporal, non-geometric, and non-adaptive models. | Benjamin Zalatan, Maryam Rahnemoonfar | 2023-06-22T19:59:54Z | http://arxiv.org/abs/2306.13690v1 | # Prediction of Deep Ice Layer Thickness Using Adaptive Recurrent Graph Neural Networks
###### Abstract
As we deal with the effects of climate change and the increase of global atmospheric temperatures, the accurate tracking and prediction of ice layers within polar ice sheets grows in importance. Studying these ice layers reveals climate trends, how snowfall has changed over time, and the trajectory of future climate and precipitation. In this paper, we propose a machine learning model that uses adaptive, recurrent graph convolutional networks to, when given the amount of snow accumulation in recent years gathered through airborne radar data, predict historic snow accumulation by way of the thickness of deep ice layers. We found that our model performs better and with greater consistency than our previous model as well as equivalent non-temporal, non-geometric, and non-adaptive models.
Benjamin Zalatan\({}^{1}\), Maryam Rahnemoonfar\({}^{1,2,\dagger}\)
\({}^{1}\)Department of Computer Science and Engineering, Lehigh University, PA, USA
\({}^{2}\)Department of Civil and Environmental Engineering, Lehigh University, PA, USA
_Keywords:_ Deep learning, graph neural networks, recurrent neural networks, airborne radar, ice thickness
Footnote \(\dagger\): Corresponding author ([email protected]).
## 1 Introduction
As global atmospheric temperatures rise and climate trends shift, there has been a growing importance placed upon accurately tracking and predicting polar snowfall over time. A precise understanding of the spatiotemporal variability in polar snow accumulation is important for reducing the uncertainties in climate model predictions, such as prospective sea level rise. These snowfall trends are revealed through the internal ice layers of polar ice sheets, which often represent annual isochrones and relay information about the climate at that location during the corresponding year, similar to rings on a tree. The tracking and forecasting of these internal ice layers is also important for calculating snow mass balance, extrapolating ice age, and inferring otherwise difficult-to-observe processes.
Measurements of ice layer mass balance are traditionally collected by drilling ice cores and shallow pits. However, capturing catchment-wide accumulation rates using these methods is exceedingly difficult due to their inherent sparsity, access difficulty, high cost, and depth limitations. Attempts to interpolate these in-situ measurements introduce further uncertainties to climate models, especially considering the high variability in local accumulation rate.
Airborne measurements using nadir-looking radar sensors have quickly become a popular complementary method of mapping ice sheet topography and monitoring accumulation rates, with broad spatial coverage and the ability to penetrate deep ice layers. The Center for Remote Sensing of Ice Sheets (CReSIS), as part of NASA's Operation Ice Bridge, operates the Snow Radar [1], an airborne radar sensor that takes high-resolution echograms of polar ice sheets.
Recent studies involving graph convolutional networks (GCNs) [2] have shown promise in spatiotemporal tasks such as traffic forecasting [3, 4, 5], wind speed forecasting [6], and power outage prediction [7]. In this paper, we propose a geometric deep learning model that uses a supervised, multi-target, adaptive long short-term memory graph convolutional network (AGCN-LSTM) [8, 9] to predict the thicknesses of multiple deep ice layers at specific coordinates in an ice sheet given the thicknesses of few shallow ice layers.
In our experiments, we use a sample of Snow Radar flights over Greenland in the year 2012. We convert this internal ice layer data into sequences of temporal graphs to be used as input to our model. More specifically, we convert the five shallow ice layers beneath the surface into five spatiotemporal graphs. Our model then performs multi-target regression to predict the thicknesses of the fifteen deep ice layers beneath them. Our model was shown to perform significantly better than previous models in predicting ice layer thickness, as well as better than equivalent non-geometric, non-adaptive, and non-temporal models.
## 2 Related Work
### Automated Ice Layer Segmentation
In recent years, automated techniques have been developed to track the surface and bottom layers of an ice sheet using radar depth sounder sensors. Tracking the internal layers, however, is more difficult due to the close proximity of adjacent layers, as well as the high amount of noise present in the echogram images. Due to its exceptional performance in automatic feature extraction and image segmentation tasks, deep learning has been applied extensively on ice sheet echograms in order to track their internal layers [10, 11, 12, 13]. [12] used a multi-scale contour-detection convolutional neural network (CNN) to segment the different internal ice layers within Snow Radar echogram images. In [10], the authors trained a multi-scale neural network on synthetic Snow Radar images for more robust training. A multi-scale network was also used in [13], where the authors trained a model on echograms taken in the year 2012 and then fine-tuned it by training on a small number of echograms taken in other years. [11] found that using pyramid pooling modules, a type of multi-scale architecture, helps in learning the spatio-contextual distribution of pixels for a certain ice layer. The authors also found that denoising the input images improved both the model's accuracy and F-score. While these models have attempted to segment Snow Radar echogram images, none have yet attempted to predict deep ice layer thicknesses with only information about shallow ice layers.
### Graph Convolutional Networks
Graph convolutional networks have had a number of applications in a vast array of different fields. In the field of computer vision, recurrent GCNs have been used to generate and refine "scene graphs", in which each node corresponds to the bounding box of an object in an image and the edges between nodes are weighted by a learned "relatedness" factor [14, 15]. GCNs have also been used to segment and classify point clouds generated from LiDAR scans [16, 17]. Recurrent GCNs have been used in traffic forecasting, such as in [3], where graph nodes represented traffic sensors, edges were weighted by the physical distance between sensors, and node features consisted of the average detected traffic speed over some period of time.
Some existing graph-based weather prediction models, such as [6] and [18], have tested models in which edge weights are defined as learnable parameters rather than static values. This strategy allowed the models to learn relationships between nodes more complex than simple geographic distance, and was shown to improve performance at the expense of increased computational complexity.
In our previous study published at the 2023 IEEE Radar Conference [19], we used a GCN-LSTM to predict the thicknesses of shallow ice layers using the thicknesses of deep ice layers. Our results were reasonable, usually lying within 5 pixels of the ground-truth, and we found that GCN-LSTM performed better and with more consistency than equivalent non-temporal and non-geometric models. While this previous model had a similar objective to the model described in this paper, it was far less complex, did not include learned adjacency, and attempted to predict the thicknesses of shallow ice layers rather than deep ice layers.
## 3 Dataset
In this study, we use the Snow Radar dataset made public by CReSIS as part of NASA's Operation Ice Bridge. The Snow Radar operates from 2-8 GHz and is able to track deep ice layers with a high resolution over wide areas of an ice sheet. The sensor produces a two-dimensional grayscale profile of historic snow accumulation over consecutive years, where the horizontal axis represents the along-track direction, and the vertical axis represents layer depth. Pixel brightness is directly proportional to the strength of the returning signal. Each of these grayscale echogram profiles has a width of 256 pixels and a height ranging between 1200 and 1700 pixels. Each pixel in a column corresponds to approximately 4cm of ice, and each echogram image has an along-track footprint of 14.5m. Accompanying each image are vectors that provide positional data (including geographic latitude and longitude) of the sensor for each column. In order to gather ground-truth thickness data, the images were manually labelled in a binary format where white pixels represented the tops of each firn layer, and all other pixels were black. Thickness data was extracted by finding the distance (in pixels) between each white pixel in a vertical column.
We focus on radar data captured over Greenland during the year 2012. Since each ice layer often represents an annual isochrone, we may refer to specific layers by their corresponding year (in this case, the surface layer corresponds with the year 2012, the layer below it 2011, and so on). In order to capture a sufficient amount of data, only echogram images containing a minimum of 20 ice layers were used (five feature layers and fifteen predicted layers). Five and fifteen feature and predicted layers, respectively, were chosen in order to maximize the number of usable images while maintaining a sufficient number of experimental layers. This restriction reduced the total number of usable images down to 703. Five different training and testing sets were generated by taking five random permutations of all usable images and splitting them at a ratio of 4:1. Each training set contained 562 images, and each testing set contained 141 images.
## 4 Methods
### Graph Convolutional Networks
Traditional convolutional neural networks use a matrix of learnable weights, often referred to as a kernel or filter, as a sliding window across pixels in an input image. The result is a higher-dimensional representation of the image that automatically extracts image features that would otherwise need to be identified and inputted manually. Graph convolutional networks apply similar logic to graphs, but rather than using a sliding window of learned weights across a matrix of pixels, a GCN performs weighted-average convolution on each node's neighborhood to automatically extract features that reflect the structure of a graph. The size of the neighborhood on which convolution takes place is dictated by the number of sequential GCN layers present in the model (i.e., \(K\) GCN layers results in \(K\)-hop convolution). In a sense, GCNs are a generalized form of CNNs that operate on neighborhoods of variable degree.
A special form of GCN, known as adaptive GCN (AGCN), defines edge weights within an input graph as learnable parameters rather than predefined constants. In certain cases, this may increase model performance if the relationships between nodes are more advanced than those specified by the input. In the case of our model, we route the graphs through an EvolveGCNH layer [9] prior to entering the GCN-LSTM layer.
EvolveGCNH is a version of EvolveGCN that behaves similarly to a traditional GCN, but treats its learned weight matrix as a temporal hidden state that, through use of a gated recurrent unit (GRU), implicitly adjusts the structure of input graphs by modifying node embeddings. The adjustment of the weight matrix at each forward pass is influenced by the previous hidden weight state as well as the node embeddings of the current input graph.
### Recurrent Neural Networks
Recurrent neural networks (RNNs) are able to process a sequence of data points as input, rather than a single static data point, and learn the long-term relationships between them. Many traditional RNN structures have had issues with vanishing and exploding gradients on long input sequences. Long short-term memory (LSTM) [20] attempts to mitigate those issues by implementing gated memory cells that guarantee constant error flow. Applying LSTM to GCN using GCN-LSTM allows for a model to learn not only the relationships between nodes in a graph, but also how those relationships change (or persist) over time.
### Model Architecture
Our model (see Figure 1) uses an EvolveGCNH layer to introduce adaptivity to input adjacency matrices. The resulting node matrix is used as the feature matrix for a GCN-LSTM layer with 256 output channels. This leads into three fully-connected layers: the first with 128 output channels, the second with 64 output channels, and the third with 15 output channels, each corresponding to one of the 15 predicted ice layer thicknesses. Between each layer is the Hardswish activation function [21], an optimized approximation of the Swish function that has been shown to perform better than ReLU and its derivatives in deep networks [22]. Between the fully-connected layers is Dropout [23] with p=0.2. We use the Adam optimizer [24] over 300 epochs with mean-squared error loss. We use a dynamic learning rate that halves every 75 epochs beginning at 0.01.
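To make the stack concrete, the following is a minimal dense PyTorch sketch of the head described above; it is not the authors' implementation. The EvolveGCNH and GCN-LSTM layers are replaced by a plain one-hop graph convolution (\(\hat{A}XW\)) feeding a standard LSTM over the five input years, so the adaptive weight evolution is omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class DenseGCNLSTM(nn.Module):
    def __init__(self, in_feats=3, hidden=256, out_layers=15):
        super().__init__()
        self.gcn_w = nn.Linear(in_feats, hidden, bias=False)  # one-hop GCN weights
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 128), nn.Hardswish(), nn.Dropout(0.2),
            nn.Linear(128, 64), nn.Hardswish(), nn.Dropout(0.2),
            nn.Linear(64, out_layers),
        )

    def forward(self, x_seq, a_hat):
        # x_seq: (T=5 years, N=256 nodes, F=3 features); a_hat: (N, N) adjacency
        h_seq = torch.stack([a_hat @ self.gcn_w(x) for x in x_seq], dim=1)  # (N, T, H)
        h, _ = self.lstm(h_seq)
        return self.head(h[:, -1])  # per-node thickness predictions: (N, 15)
```

A faithful implementation would instead use recurrent graph layers such as those provided by graph deep-learning libraries.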
### Graph Generation
Each ground-truth echogram image is converted into five graphs, each consisting of \(256\) nodes. Each graph corresponds to a single ice layer for each year from 2007 to 2011. Each node represents a vertical column of pixels in the ground-truth echogram image and has three features: two for the latitude and longitude at that point, and one for the thickness of the corresponding year's ice layer at that point.
Figure 1: Architecture of the proposed model.

All graphs are fully connected and undirected. All edges are inversely weighted by the geographic distance between node locations using the haversine formula. For a weighted adjacency matrix \(A\):
\[A_{i,j}=\frac{1}{2\arcsin\left(\sqrt{\text{hav}(\phi_{j}-\phi_{i})+\cos(\phi_{i})\cos(\phi_{j})\,\text{hav}(\lambda_{j}-\lambda_{i})}\right)}\]

where

\[\text{hav}(\theta)=\sin^{2}\left(\frac{\theta}{2}\right)\]
\(A_{i,j}\) represents the weight of the edge between nodes \(i\) and \(j\). \(\phi\) and \(\lambda\) represent the latitude and longitude features of a node, respectively. Node features of all graphs are collectively normalized using z-score normalization. Weights in the adjacency matrices of all graphs are collectively normalized using min-max normalization with a slight offset to prevent zero- and one-weight edges. Self-loops are added with a weight of two. While we use an EvolveGCNH layer to introduce learned adjacency, this predefined spatial adjacency matrix serves as the initial state of the learned adjacency matrix, and is also passed residually to the GCN-LSTM layer.
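A small NumPy sketch of this adjacency construction follows; the exact value of the "slight offset" and the pre-normalization handling of the diagonal are assumptions, since the text specifies only the offset's purpose and the self-loop weight of two.

```python
import numpy as np

def build_adjacency(lat, lon):
    """lat, lon: (N,) arrays in radians; returns an (N, N) weighted adjacency."""
    phi_i, phi_j = lat[:, None], lat[None, :]
    lam_i, lam_j = lon[:, None], lon[None, :]
    hav = lambda t: np.sin(t / 2.0) ** 2
    # central angle from the haversine formula
    angle = 2.0 * np.arcsin(np.sqrt(
        hav(phi_j - phi_i) + np.cos(phi_i) * np.cos(phi_j) * hav(lam_j - lam_i)))
    with np.errstate(divide="ignore"):
        a = 1.0 / angle                       # inverse-distance edge weights
    off_diag = a[~np.eye(len(lat), dtype=bool)]
    # min-max normalize with a slight offset to avoid zero- and one-weight edges
    a = 0.01 + 0.98 * (a - off_diag.min()) / (off_diag.max() - off_diag.min())
    np.fill_diagonal(a, 2.0)                  # self-loops with weight two
    return a
```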
## 5 Results
In order to verify that the temporal and adaptive aspects of the model serve to its benefit, we compared its performance with equivalent non-geometric, non-temporal, and non-adaptive models.
For the non-geometric model, the EvolveGCNH and GCN-LSTM layers are replaced by a single LSTM layer, and all node feature data is concatenated into a single, stacked feature vector. Since this model is non-geometric, no adjacency data is supplied. All other hyperparameters remain the same.
For the non-temporal model, the EvolveGCNH and GCN-LSTM layers are replaced by a single GCN layer. Rather than generating five independent graphs for each of the five shallow "feature" ice layers, we generate a single graph and concatenate the thickness features from all five graphs together. The rest of the model, including the adjacency matrix generation, is identical to the proposed model.
For the non-adaptive model, all hyperparameters remain the same, but the adaptive EvolveGCNH layer is removed. The rest of the model remains identical to the proposed model.
Over each trial, the root mean squared error (RMSE) was taken between the predicted and ground truth thickness values for each of the fifteen ice layers from 1992 to 2006 over all images in its corresponding testing set. The mean and standard deviation RMSE over all five trials are displayed in Table 1. The proposed AGCN-LSTM model consistently performed better than the baseline models in terms of mean RMSE.
## 6 Conclusion
In this work, we proposed a temporal, geometric, adaptive multi-target machine learning model that predicts the thicknesses of deep ice layers within the Greenland ice sheet (corresponding to the annual snow accumulation from 1992 to 2006), given the thicknesses of shallow ice layers (corresponding to the annual snow accumulation from 2007 to 2011). Our proposed model was shown to perform better and with more consistency than equivalent non-geometric, non-temporal, and non-adaptive models.
### Improvements and Generalizations
While our model succeeds at predicting deep layer thicknesses with reasonable accuracy, there are opportunities for improvement and further generalization. For example, it may be possible to use radar data from multiple different years in order to adjust the model to predict future snow accumulation, rather than historic. The dataset used in these experiments was limited to Greenland, and only measured twenty ice layers. It is likely possible to generalize this model onto other polar regions, such as Antarctica, or use data with a much larger depth and thus number of ice layers. The inclusion of physical ice properties, more advanced machine learning techniques, and a deeper hyperparameter search may also serve to produce even better results.
## 7 Acknowledgements
This work is supported by NSF BIGDATA awards (IIS-1838230, IIS-1838024), IBM, and Amazon. We acknowledge data and data products from CReSIS generated with support from the University of Kansas and NASA Operation IceBridge.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & LSTM & GCN & GCN-LSTM & AGCN-LSTM \\ \hline Total RMSE & \(5.817\pm 1.349\) & \(3.496\pm 0.509\) & \(2.766\pm 0.312\) & \(\mathbf{2.712\pm 0.179}\) \\ \hline \end{tabular}
\end{table}
Table 1: Results from the non-temporal, non-geometric, non-adaptive, and proposed models on the fifteen predicted annual ice layer thicknesses from 1992 to 2006. Results are shown as the mean \(\pm\) standard deviation of the RMSE over five trials (in pixels). |
2301.10869 | A Deep Neural Network Algorithm for Linear-Quadratic Portfolio Optimization with MGARCH and Small Transaction Costs | We analyze a fixed-point algorithm for reinforcement learning (RL) of optimal portfolio mean-variance preferences in the setting of multivariate generalized autoregressive conditional-heteroskedasticity (MGARCH) with a small penalty on trading. A numerical solution is obtained using a neural network (NN) architecture within a recursive RL loop. A fixed-point theorem proves that NN approximation error has a big-oh bound that we can reduce by increasing the number of NN parameters. The functional form of the trading penalty has a parameter $\epsilon>0$ that controls the magnitude of transaction costs. When $\epsilon$ is small, we can implement an NN algorithm based on the expansion of the solution in powers of $\epsilon$. This expansion has a base term equal to a myopic solution with an explicit form, and a first-order correction term that we compute in the RL loop. Our expansion-based algorithm is stable, allows for fast computation, and outputs a solution that shows positive testing performance. | Andrew Papanicolaou, Hao Fu, Prashanth Krishnamurthy, Farshad Khorrami | 2023-01-25T23:30:52Z | http://arxiv.org/abs/2301.10869v2 | # A Deep Neural Network Algorithm for Linear-Quadratic Portfolio Optimization with MGARCH and Small Transaction Costs
###### Abstract
We analyze a fixed-point algorithm for reinforcement learning (RL) of optimal portfolio mean-variance preferences in the setting of multivariate generalized autoregressive conditional-heteroskedasticity (MGARCH) with a small penalty on trading. A numerical solution is obtained using a neural network (NN) architecture within a recursive RL loop. A fixed-point theorem proves that NN approximation error has a big-oh bound that we can reduce by increasing the number of NN parameters. The functional form of the trading penalty has a parameter \(\epsilon>0\) that controls the magnitude of transaction costs. When \(\epsilon\) is small, we can implement an NN algorithm based on the expansion of the solution in powers of \(\epsilon\). This expansion has a base term equal to a myopic solution with an explicit form, and a first-order correction term that we compute in the RL loop. Our expansion-based algorithm is stable, allows for fast computation, and outputs a solution that shows positive testing performance.
_Keywords--_ Heteroskedasticity, MGARCH, Fixed-point algorithms, Reinforcement learning, Deep neural networks
###### Contents
* 1 Introduction
* 1.1 Background and Review
* 1.2 Results in this Paper
* 2 Model and Optimization Problem
* 2.1 Two-Step Iteration Scheme
* 2.2 Convergence to a Fixed-Point
* 2.3 Neural Network Policy Approximation
* 3 Small-\(\epsilon\) Asymptotic Analysis & Implementation
* 3.1 Expansion of \(\lambda_{t}\)
* 3.2 Neural Network Algorithm
## 1 Introduction
### Background and Review
Other machine learning applications in finance include [15], where stochastic gradient descent (SGD) with a deep NN architecture is used for computing prices of American options on large baskets of stocks, and [16], where an RL approach is used to numerically solve high-dimensional backward stochastic differential equations related to finance. In [17], the authors utilized an LSTM network for predicting price movements with daily S&P500 data\({}^{1}\); the performance of the LSTM is mixed across different periods. Short-term trend prediction of NASDAQ price movements with a deep network was studied in [18], where the authors utilized features from both fundamental and technical analysis as the network input. [19] used a graph network to predict stock price movements on S&P500 data. In [20], the authors demonstrate how adversarial learning methods can be used to automate trading in stocks. General methods from control theory have been applied for optimal trading decisions in [21, 22]. The effects of transaction costs and liquidity are well-studied ([1, 23, 24]). In particular, the "aim portfolio" description given in [2] has been a key result for the management of large funds. The works discussed above are based on supervised learning; approaches beyond supervised learning should also be studied, as they may address more complicated problems.
Footnote 1: [https://en.wikipedia.org/wiki/S%26P_500#cite_note-history-17](https://en.wikipedia.org/wiki/S%26P_500#cite_note-history-17)
It was originally thought that combining NNs with RL recursions would lead to instability ([11]) due to the approximation errors caused by the neural networks. However, hypothetically, the accuracy of NN approximations can be improved to within arbitrarily close bounds (see [25, 26]). Additionally, the actor-critic approach has been shown in some settings to have a stabilizing effect ([27, 28]). The actor-critic approach utilizes two independent deep networks: a policy function (the actor) that provides a group of possible actions for a given state, and an evaluation function (the critic) that evaluates the action taken by the actor based on the current policy function. By alternately updating the actor and critic with the given objective function, the two networks converge. The DDPG algorithm ([9]) takes the actor-critic approach for solving RL problems with continuous control spaces and is based on the DPG algorithm in [29]. The DQN method in [8] is an RL algorithm that trains a deep network to represent the Q-value function; at each instant, the DQN takes the action indicated by the Q-value network. The DQN does not use an actor-critic approach and can only address discrete-action problems, whereas the DDPG, which extends DQN, can also address continuous-action problems. However, the DDPG is more difficult to train and sometimes unstable because it uses an actor-critic approach. Reinforcement learning has been utilized in many areas, such as wireless networks ([30]) and mobile edge computing networks ([31, 32]).
In portfolio management, [33] utilized the DDPG to optimize the cryptocurrency portfolio. [20] utilized both DDPG and proximal policy optimization (PPO) for portfolio management. [34] considers portfolio optimization for a transaction cost. However, the authors only considered a linear cost. In this work, we considered a quadratic transaction cost that is more practical. Similar to the problem we consider in this paper is the linear-quadratic regulator with uncertainty in the (constant) coefficients. There are provable bounds for identifying the minimum run-time required from an on-policy approximation, after which the observed data is used to refine the policy to within a given tolerance of the optimal ([35, 36]). This approach to uncertainty in on-policy learning is analyzed as an actor-critic approach in [11].
### Results in this Paper
The analyses in this paper show the effectiveness of RL and NN-based policy approximations for solving linear-quadratic programs with non-constant coefficients and small penalization on control. We consider an iterative scheme for the optimal controls, for which we can prove convergence to a fixed point under some reasonable conditions. We extend this fixed point argument to show that NN approximation errors will compound over time, but that their total will be of order big-oh in the magnitude of an error that is reduced as the number of NN parameters is increased. The problem is of practical interest because the MGARCH covariance process does not allow for an explicit solution, such as those seen in other linear-quadratic optimizations ([37, 38]).
For a faster implementation, we propose an algorithm that exploits the smallness of the transaction cost parameter. In particular, we write the solution as a series expansion in powers of the transaction cost parameter, which is small. Using only a few terms from this expansion, we form a sub-optimal solution that tends toward the optimum as the parameter decreases toward zero. This approach is similar to a standard NN-based policy gradient but is more stable (i.e., the difference between our RL output and the ground-truth optimal output is bounded) because the series terms are fast and easy to compute, and because the NN approximation error is present only in the higher-order terms and therefore is an order of magnitude smaller than it would have been without the expansion. In our experiments, we take the first two terms in this series: the base term, which is equal to an explicitly computable myopic solution, and a first-order correction term that we compute using an RL loop and NN functional approximation. We implement this approximation using a single network that contains only fully-connected layers, without requiring dueling networks or special machinery, and in our studies on market data we can see that the correction term provides some improvement compared to using only the myopic control, which is defined in (22). The key difference between our RL strategy and the myopic strategy is that the myopic strategy makes decisions without forecasting the future, whereas the RL strategy forecasts the future to make decisions. Overall, the contributions of our paper include the following:
* We show the effectiveness of RL and NN-based policy approximations for solving linear-quadratic programs with non-constant coefficients and small penalization on control.
* We prove the convergence of an iterative scheme to a fixed point for the optimal controls under some reasonable conditions.
* We show that NN approximation errors obey a big-oh bound that can be reduced by increasing the number of NN parameters.
* We propose an algorithm based on NN approximation that exploits the smallness of the transaction cost parameter for faster implementation.
* We evaluate our algorithm on both synthetic and historical market data, which shows positive testing results.
The remaining part of this paper is organized as follows: First, the problem is mathematically formulated. A solution with a two-step iteration scheme is proposed, and its convergence is analyzed. A practical method for implementing the solution using neural networks is proposed. A small-\(\epsilon\) analysis regarding the neural network solution is shown. Lastly, the method is evaluated on synthetic market data and historical market data, and the results are discussed.
## 2 Model and Optimization Problem
Let \(R_{t}\in\mathbb{R}^{n}\) denote the vector of \(n\)-many assets' returns realized at time \(t\), and let \(\Sigma_{t-1}\) be the covariance of \(R_{t}\) given the information immediately prior at time \(t-1\). An MGARCH model ([39, 40]) is the following:
\[R_{t+1} =\mu+Z_{t+1} \tag{1}\] \[\Sigma_{t+1} =CC^{\top}+A\Sigma_{t}A^{\top}+BZ_{t+1}Z_{t+1}^{\top}B^{\top} \tag{2}\]
where \(\mu\) is the conditional mean vector, \(Z_{t+1}\sim\text{iid}(0,\Sigma_{t})\), and where \(A\), \(B\) and \(C\) are \(n\times n\) matrices. The following condition ensures that \(\Sigma_{t}\) is always invertible:
**Condition 1**.: _The matrix \(C\) given in (2) is full rank so that \(CC^{\top}\) is invertible, that is, there is a positive lower bound \(c=\inf_{\|v\|=1}v^{\top}CC^{\top}v>0\)._
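As a quick illustration of these dynamics, a minimal NumPy simulation sketch follows; the function and variable names are ours and not from the paper.

```python
import numpy as np

def simulate_mgarch(mu, A, B, C, sigma0, T, seed=0):
    """Simulate returns R_1..R_T from Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    n, sig = len(mu), sigma0.copy()
    R = np.empty((T, n))
    for t in range(T):
        z = rng.multivariate_normal(np.zeros(n), sig)  # Z_{t+1} with covariance Sigma_t
        R[t] = mu + z                                   # Eq. (1)
        sig = C @ C.T + A @ sig @ A.T + B @ np.outer(z, z) @ B.T  # Eq. (2)
    return R
```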
Asset prices are calculated by compounding the returns (1). For \(1\leq i\leq n\), the \(i^{th}\) asset's price is
\[S_{t+1}^{i}=h(S_{t}^{i},R_{t+1}^{i}) \tag{3}\]
where \(h\) is some known function. A typical choice of \(h\) is \(h(s,r)=s(1+r)\) as used in [39, 40], but for technical reasons, we will need to impose the following condition:
**Condition 2**.: _The function \(h\) in (3) is bounded and bounded away from zero. That is, \(\|h\|_{\infty}=\sup_{s,r}|h(s,r)|<\infty\) and there exists a constant \(\underline{s}>0\) such that \(\inf_{s,r}h(s,r)\geq\underline{s}\)._
Denote the \(\mathbb{R}^{n}\) vector of these prices as
\[\vec{S}_{t}=\begin{pmatrix}S_{t}^{1}\\ S_{t}^{2}\\ \vdots\\ S_{t}^{n}\end{pmatrix}\,.\]
Next, define the covariance matrix of the dollar returns,
\[P_{t}=\Psi_{t}\Sigma_{t}\Psi_{t}\]
where
\[\Psi_{t}=\text{diag}(\vec{S}_{t})\.\]
Let \(X_{t}\in\mathbb{R}^{n}\) denote a manager's holdings in assets (in contract units). The returns (in dollar units) on this portfolio are \(\sum_{i}(S_{t+1}^{i}-S_{t}^{i})X_{t}^{i}\), the expected value of these returns is \(\mu^{\top}S_{t}X_{t}\), and their variance is \(X_{t}^{\top}P_{t}X_{t}\). The portfolio manager has a control \(\{a_{t},t=1,2,3,\dots\}\) that she selects at time \(t\) to change \(X_{t}\). The manager's control should be optimal with respect to her mean-variance preferences,
\[V(x,s,p)=\sup_{a}\sum_{t=1}^{\infty}\delta^{t}\mathbb{E}\left[f(a_{t},X_{t},\vec{S}_{t},P_{t})\,\Big{|}\,X_{0}=x,\vec{S}_{1}=s,P_{1}=p\right]\] (4) s.t. \[f(a_{t},X_{t},\vec{S}_{t},P_{t})=\underbrace{\mu^{\top}\Psi_{t}X_{t}-\frac{\epsilon q(\vec{S}_{t},P_{t})}{2}a_{t}^{\top}\Psi_{t}a_{t}}_{\text{expected return}}-\underbrace{\frac{\gamma}{2}X_{t}^{\top}P_{t}X_{t}}_{\text{risk penalty}}\] \[X_{t}=X_{t-1}+a_{t}\]
where \(0\leq\delta<1\) is a discount factor, \(\epsilon>0\) is a (small) parameter, and the function \(q:\mathbb{R}^{n}\times\mathbb{R}^{n\times n}\rightarrow\mathbb{R}^{+}\) is a transaction cost (or liquidity penalty) that is higher at times when it is harder to trade and lower at times when there is plenty of liquidity. This form of transaction cost penalty was introduced in [1] and the relationship with volatility was shown in [3].
**Condition 3**.: _We assume \(q\) is bounded away from zero,_
\[1\leq q(s,p)\leq\chi<\infty\qquad\forall(s,p)\in\mathbb{R}^{n}\times\mathbb{R}^{n\times n} \tag{5}\]

_where \(\chi\) is a known constant._
As mentioned earlier, the penalty on trading should depend on the instantaneous value of the covariance matrix. Many works use principal component analysis (PCA) to study the relationship between degrees of freedom in the stock returns' covariance matrix and overall market volatility. For example, [6] observes that relatively few eigenvectors are needed to capture the majority of market variance during times of high market stress, thus resulting in wider bid/ask spreads and higher transaction costs; the relationship is reversed during times of low market stress. Based on this dynamic, we use the condition number to excite the transaction cost function when the market has reduced degrees of freedom. In the examples, we take \(q(s,p)=\text{cond}(\text{diag}^{-1}(s)p\text{diag}^{-1}(s))\), which is based on the empirical observation that losses in the S&P500 market index occur when there is a large spike in the condition number of the covariance matrix, as shown in Fig. 1. Additionally, we take \(\gamma=\frac{1}{W_{0}}\sum_{i}(\overline{\Sigma}^{-1}\mu)^{i}\) where \(\overline{\Sigma}\) is the covariance matrix estimated from market data and \(W_{0}\) is the value of the new capital, i.e., we set risk aversion so that the goal is for \(W_{0}\) to be invested in the market.
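In code, this transaction-cost excitation is a one-liner (a sketch; names are ours):

```python
import numpy as np

def q_cost(s, P):
    """q(s, p) = cond(diag(s)^{-1} P diag(s)^{-1}), the choice used in the examples."""
    d = np.diag(1.0 / np.asarray(s))
    return np.linalg.cond(d @ P @ d)
```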
The problem formulated in (4) is similar to the mean-variance preferences problem in [2] but with the added non-constant cost from \(q(\sigma)\), where \(\sigma^{2}=\Sigma\) and \(\Sigma\) is the estimated covariance matrix. A similar type of financial control problem was considered in [41]. An effective way to analyze this system is to take a Hamiltonian approach and write it using a vector of Lagrange multipliers,
\[\sum_{t=1}^{\infty}\delta^{t}\mathbb{E}\Big{[}f(a_{t},X_{t},\vec{ S}_{t},P_{t})+\lambda_{t}^{\top}(X_{t}-X_{t-1}-a_{t})\Big{]}\] \[=\sum_{t=1}^{\infty}\delta^{t}\mathbb{E}\Big{[}f(a_{t},X_{t}, \vec{S}_{t},P_{t})-(\delta\lambda_{t+1}-\lambda_{t})^{\top}X_{t}-\lambda_{t}^ {\top}a_{t}\Big{]}+\lim_{t\rightarrow\infty}\mathbb{E}[\delta^{t+1}\lambda_{t +1}^{\top}X_{t}]-\lambda_{1}^{\top}X_{0}\]
where we have used the transversality condition ([42])
\[\lim_{t\rightarrow\infty}\mathbb{E}(\delta^{t+1}\lambda_{t+1}^{\top}X_{t})=0.\]
First-order conditions in \(a_{t}\) and in \(X_{t}\) yield a forward-backward system
\[X_{t} =X_{t-1}-\frac{1}{\epsilon q(\vec{S_{t}},P_{t})}\Psi_{t}^{-1} \lambda_{t} \tag{6}\] \[\lambda_{t} =\delta\mathbb{E}_{t}\lambda_{t+1}-\Psi_{t}\mu+\gamma P_{t}X_{t}\ ; \tag{7}\]
where \(\mathbb{E}_{t}\) denotes expectation conditional on the information observed up to time \(t\). However, in the real world, (6) and (7) cannot be directly solved.

Figure 1: X-axis: the date (month - day). Left Y-axis: condition number. Right Y-axis: log(SPY), where SPY denotes the S&P500 index ETF. Blue plot: condition number of the covariance matrix over time. Red plot: log of SPY over time.

Therefore, in this paper, we propose an iteration scheme for (6) and (7):
\[X_{t}^{(k+1)} =X_{t-1}^{(k+1)}-\frac{1}{\epsilon q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\lambda_{t}^{(k)}\] \[\lambda_{t}^{(k+1)} =\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\left(\delta\mathbb{E}_{t}\lambda_{t+1}^{(k+1)}-\Psi_{t}\mu+\gamma P_{t}X_{t-1}^{(k+1)}\right) \tag{8}\]
where \(\widetilde{P}_{t}=\frac{\gamma}{\epsilon q(\vec{S}_{t},P_{t})}P_{t}\) and \(X_{0}^{(k)}=x_{0}\) for all rounds of iteration \(k\). Then, we implement RL using neural networks (NNs) to estimate the limiting fixed point from (8). The convergence of iterations in (8) depends on if the following condition holds:
**Condition 4**.: _There is a constant \(\Delta(\epsilon)<1\) such that_
\[\delta\left\|\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\right\|\leq \Delta(\epsilon)\;,\]
_for all t._
If Condition 4 holds, then we can prove that (8) converges to a unique fixed point. Condition 4 would always hold if \(P_{t}\) and \(\Psi_{t}\) commute, but this is an unrealistic condition. For the data used in this paper, we check empirically that in fact Condition 4 holds for \(\epsilon<1\). For theoretical purposes, because we are considering the small-\(\epsilon\) parameterization, the following proposition is useful for confirming Condition 4:
**Proposition 1**.: _Assume Condition 1, Condition 2 and Condition 3. If \(\epsilon\chi\|h\|_{\infty}<\gamma\underline{s}^{2}c\), then denoting \(\kappa=\chi\|h\|_{\infty}/(\gamma\underline{s}^{2}c)\), we have_
\[\sup_{t}\left\|\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\right\|\leq \frac{\epsilon\kappa}{1-\epsilon^{2}\kappa^{2}}\;, \tag{9}\]
_from which it follows that Condition 4 will hold for \(\epsilon\) small enough._
Proof.: (see Appendix).
### Two-Step Iteration Scheme
The scheme in (8) is the basis for an iterative algorithm with two steps. The policy is given by \(\lambda^{(k)}\) and is used to generate \(X^{(k+1)}\). Then, upon observation of \(X^{(k+1)}\), the updated Lagrange multiplier \(\lambda^{(k+1)}\) is obtained via a fixed-point iteration,
\[\lambda_{t}^{(k^{\prime}+1)}=\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{- 1}\left(\delta\mathbb{E}_{t}\lambda_{t+1}^{(k^{\prime})}-\Psi_{t}\mu+\gamma P _{t}X_{t-1}^{(k+1)}\right)\;, \tag{10}\]
for fixed \(k\) and for \(k^{\prime}\rightarrow\infty\). Algorithmically, this can be described as a 2-step iterative procedure for finding a fixed point:
1. \(X_{t}^{(k+1)}=X_{t-1}^{(k+1)}-\Psi_{t}^{-1}\lambda_{t}^{(k)}/(\epsilon q( \vec{S}_{t},P_{t}))\),
2. \(\lambda_{t}^{(k+1)}=\lim_{k^{\prime}\rightarrow\infty}\lambda_{t}^{(k^{\prime})}\) where \(\lambda_{t}^{(k^{\prime})}\) is given by (10).
If we consider a finite-time version of (6) and (7) with terminal condition \(\lambda_{T+1}^{(k)}\equiv 0\) for all \(k\) (i.e., \(X_{t}^{(k)}=X_{T}^{(k)}\) for all \(t>T\)), and given Condition 4, then we have a contraction mapping with \(\lambda_{t}^{(k^{\prime})}\) converging to a unique fixed point as \(k^{\prime}\rightarrow\infty\), namely, \(\lambda_{t}^{(k+1)}\) for all \(t\leq T\).
### Convergence to a Fixed-Point
The two-step iteration described in Section 2.1 looks for a fixed point of \(\lambda^{(k^{\prime})}\) for a given \(X^{(k+1)}\). In this section we present a theorem stating that the pair \((X^{(k)},\lambda^{(k)})\) given by (8) converges to a fixed point, thus confirming that the forward-backward system of (6) and (7) has a unique solution. We prove these results for the finite-time version of (6) and (7) with terminal condition \(\lambda_{T+1}^{(k)}\equiv 0\) for all \(k\).
**Theorem 1**.: _Consider the finite-time problem with terminal condition \(\lambda_{T+1}^{(k)}\equiv 0\) for all \(k\). If we assume Condition 1, Condition 2, Condition 3 and Condition 4, then the iterations of (8) will converge to a unique fixed point._
Proof.: (see Appendix).
**Remark 1**.: _Accuracy of the approximation of the infinite-time problem by a finite-time problem can be proven with optimality bounds and a squeeze lemma as \(T\to\infty\)._
The main idea in the proof of Theorem 1 is to show a contraction in \(\mathbb{E}\|\lambda_{t}^{(k+1)}-\lambda_{t}^{(k)}\|\), thus confirming the existence of a unique fixed point. The initial step for setting up the proof is to write the following forward equation for the iteration difference,
\[\lambda_{t}^{(k+1)}-\lambda_{t}^{(k)}=\delta(I+\widetilde{P}_{t}\Psi_{t}^{-1})^{-1}\mathbb{E}_{t}\left[\lambda_{t+1}^{(k+1)}-\lambda_{t+1}^{(k)}\right]-\gamma(I+\widetilde{P}_{t}\Psi_{t}^{-1})^{-1}P_{t}\left(\sum_{t^{\prime}=1}^{t-1}\frac{1}{\epsilon q(\vec{S}_{t^{\prime}},P_{t^{\prime}})}\Psi_{t^{\prime}}^{-1}(\lambda_{t^{\prime}}^{(k)}-\lambda_{t^{\prime}}^{(k-1)})\right) \tag{11}\]
which is derived from (8) by differencing between iteration \(k+1\) and \(k\). The following quantity is important for convergence,
\[\Omega=\frac{\chi\|h\|_{\infty}}{\underline{s}}. \tag{12}\]
The quantity in (12) is important because from (11) we obtain the following inequality,
\[\underset{t\leq T}{\sup}\mathbb{E}\left\|\lambda_{t}^{(k+1)}- \lambda_{t}^{(k)}\right\|\leq\frac{\Omega}{1-\Delta(\epsilon)}\sup_{t\leq T} \sum_{t^{\prime}=1}^{t-1}\mathbb{E}\left\|\lambda_{t^{\prime}}^{(k)}-\lambda_ {t^{\prime}}^{(k-1)}\right\|\,\]
An equivalent form of this inequality appears in Theorem 1 and is used to show that \(\mathbb{E}\left\|\lambda_{t}^{(k+1)}-\lambda_{t}^{(k)}\right\|\to 0\) as \(k\to\infty\) for \(t\leq T\), thus proving the existence of a unique fixed point of (8) for finite \(T\).
### Neural Network Policy Approximation
The machine learning approach to solve (8) is to implement RL using neural networks (NNs). It amounts to estimating the limiting fixed point \(\lambda^{*}\) from (8) as
\[\lambda_{t}^{*}\approx\lambda(t,X_{t-1},\vec{S}_{t},P_{t};\theta)\]
where \(\lambda(\cdot,\cdot,\cdot,\cdot;\theta)\) is a policy approximation function with a feed-forward NN, and \(\theta\) denotes the NN parameters to be estimated. The NN takes into account time \(t\) so that the solution can be adapted to the time remaining until terminal time \(T\). An example of an NN that can accommodate time dependence is the Deep BSDE architecture in [16].
Given an initial estimate \(\theta^{(0)}\), we proceed to iteratively look for \(\theta\) that is close to a fixed point. The following iterative estimation scheme is the basis for the algorithm we'll implement:
\[\theta^{(k+1)} =\underset{\theta}{\arg\min}\ \ \sum_{t=1}^{T}\mathbb{E}\Big{\|}\lambda(t,X_{t-1}^{(k+1)},\vec{S}_{t},P_{t};\theta)-Y_{t}^{(k)}(\theta)\Big{\|}^{2}\] (13) s.t. \[X_{t}^{(k+1)} =X_{t-1}^{(k+1)}-\frac{1}{\epsilon q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\lambda(t,X_{t-1}^{(k+1)},\vec{S}_{t},P_{t};\theta^{(k)})\] \[Y_{t}^{(k)}(\theta) =\delta\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\mathbb{E}_{t}\Big{[}\lambda(t+1,X_{t}^{(k+1)},\vec{S}_{t+1},P_{t+1};\theta)\Big{]}\mathbb{1}_{t<T}-\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\left(\Psi_{t}\mu-\gamma P_{t}X_{t-1}^{(k+1)}\right)\]
where the indicator \(\mathbb{1}_{t<T}\) is used to enforce the finite-time problem's terminal condition of \(\lambda_{T+1}=0\). The following theorem proves the NN scheme in (13) will in fact be a decent approximation for the fixed point \(\lambda_{t}\).
**Theorem 2**.: _Assume Condition 1, Condition 2 and Condition 3. Furthermore, assume the family_
\[(\lambda(t,X_{t-1}^{(k)},\vec{S}_{t},P_{t};\theta^{(k)}))_{k=1,2,\ldots}\]
_are continuous and uniformly integrable, i.e., for any \(\eta>0\) there is compact set \(\mathbf{K}\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{d\times d}\) on which we have_
\[\sup_{k}\mathbb{E}\Big{[}\|\lambda(t,X_{t-1}^{(k)},\vec{S}_{t},P_{t};\theta^{( k)})\|\,\mathbb{1}_{\{(X_{t-1}^{(k)},\vec{S}_{t},P_{t})\notin\mathbf{K}\}}\Big{]} \leq\eta\;. \tag{14}\]
_Assume also that_
\[\mathbb{E}\Big{[}\|\lambda^{*}(t,X_{t-1}^{*},\vec{S}_{t},P_{t})\|\,\mathbb{1} _{\{(X_{t-1}^{*},\vec{S}_{t},P_{t})\notin\mathbf{K}\}}\Big{]}\leq\eta.\]
_For each \(k\) let \(\varepsilon^{(k)}\) denote the error from the NN approximation,_
\[\sup_{t\leq T}\!\!\mathbb{E}\Big{[}\,\Big{\|}\lambda(t,X_{t-1}^{(k)},\vec{S}_ {t},P_{t};\theta^{(k)})-Y_{t}^{(k-1)}(\theta^{(k)})\Big{\|}\,\mathbb{1}_{\{(X _{t-1}^{(k)},\vec{S}_{t},P_{t})\in\mathbf{K}\}}\Big{]}\leq\varepsilon^{(k)}\;.\]
_Then, the error of the iteration scheme in (13) is_
\[\sup_{t\leq T}\!\!\mathbb{E}\,\Big{\|}\lambda(t,X_{t-1}^{(k+1)},\vec{S}_{t},P_ {t};\theta^{(k+1)})-\lambda_{t}^{*}\Big{\|}=\mathcal{O}\left(\left(\sup_{\ell }\varepsilon^{(\ell)}+2\eta\right)\exp\Big{(}\frac{\Omega T}{1-\Delta(\epsilon )}\Big{)}\right)\;, \tag{15}\]
_for \(k\) large, where \(\lambda^{*}\) is the fixed point of (8)._
Proof.: (see Appendix).
The bound in (15) is similar to the bounds derived in [43], wherein the approximation error was computed to be the sum of three terms: a sampling error term, an NN parameter error, and a value function estimation error. An additional similarity is an exponential growth in the bounding constants as \(T\) increases. In theory, the exponential growth in (15) is contained by continually increasing the hyperparameters so that \(\sup_{\ell}\varepsilon^{(\ell)}+2\eta\) tends to zero, but in practice increasing hyperparameters requires a growing number of training samples, leading to an infeasibly long computation time. However, the smallness of \(\epsilon\) can be exploited for a faster algorithm, which does not eliminate exponential growth in \(T\), but we are at least able to reduce bounding constants with lowered computational cost.
## 3 Small-\(\epsilon\) Asymptotic Analysis & Implementation
Let's consider the parameterization with \(\epsilon\) being small enough that effects of order \(\epsilon^{2}\) can be dropped or grouped in with round-off error. In this setting, we can construct a solution in a power series form and then truncate terms order \(\epsilon^{2}\) and higher. In other words, our expansion has a base term equal to the explicitly computable myopic solution (obtained for the case \(\delta=0\)), and an order-\(\epsilon\) correction term. Computation of the order-\(\epsilon\) correction is more involved, but computational costs and runtime are minimal.
### Expansion of \(\lambda_{t}\)
We write the formal expression for \(\lambda_{t}\),
\[\lambda_{t}=\epsilon\widetilde{\lambda}_{t}\;,\]
which we insert into (6) and (7) to obtain the following equations,
\[X_{t} =X_{t-1}-\frac{1}{q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\widetilde{ \lambda}_{t} \tag{16}\] \[\widetilde{\lambda}_{t} =\delta\mathbb{E}_{t}\widetilde{\lambda}_{t+1}-\frac{\gamma}{ \epsilon}P_{t}\left(\frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu-X_{t}\right)\] (17) \[=\delta\mathbb{E}_{t}\widetilde{\lambda}_{t+1}-\frac{\gamma}{ \epsilon}P_{t}\left(\frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu-X_{t-1}\right)- \frac{\gamma}{\epsilon q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\widetilde{ \lambda}_{t}\;.\]
Rearranging (17) we have the following stabilized equation,
\[\widetilde{\lambda}_{t}=\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t} \Psi_{t}^{-1}\right)^{-1}\left(\delta\epsilon\mathbb{E}_{t}\widetilde{\lambda} _{t+1}-\gamma P_{t}\left(\frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu-X_{t-1}\right) \right)\, \tag{18}\]
for which it is straightforward to check that the convergence proof of Theorem 1 still applies. We write the following formal expansion,
\[\widetilde{\lambda}_{t}=\widetilde{\lambda}_{t}^{[0]}+\epsilon\widetilde{ \lambda}_{t}^{[1]}+\epsilon^{2}\widetilde{\lambda}_{t}^{[2]}+\dots\,\]
which we insert into (18) to obtain the following recursive expressions for the expansion's terms,
\[\widetilde{\lambda}_{t}^{[0]} =-\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-1}\left(\Psi_{t}\mu-\gamma P_{t}X_{t-1}\right) \tag{19}\] \[\widetilde{\lambda}_{t}^{[i]} =\delta\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-1}\mathbb{E}_{t}\widetilde{\lambda}_{t+1}^{[i-1]}\]
for \(i=1,2,3,\dots\). This expansion is in powers of \(\epsilon\) and can be truncated for a good approximation of the solution to (16).
**Remark 2**.: _When we approximate \(\widetilde{\lambda}_{t}\) with lower-order terms \(\widetilde{\lambda}_{t}^{[0]}\) and \(\widetilde{\lambda}_{t}^{[1]}\), we can simplify the expressions in (19) so that they do not depend on \(\epsilon\)_
\[\widetilde{\lambda}_{t}^{[0]} =-\left(\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1} \right)^{-1}\left(\Psi_{t}\mu-\gamma P_{t}X_{t-1}\right) \tag{20}\] \[\widetilde{\lambda}_{t}^{[1]} =\delta\left(\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{- 1}\right)^{-1}\mathbb{E}_{t}\widetilde{\lambda}_{t+1}^{[0]}+\left(\frac{ \gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-2}\left(\Psi_{t}\mu- \gamma P_{t}X_{t-1}\right)\,\]
_so that \(\widetilde{\lambda}_{t}=\widetilde{\lambda}_{t}^{[0]}+\epsilon\widetilde{\lambda}_{t}^{[1]}+\mathcal{O}(\epsilon^{2})\). The expansion in (20) is different from (19) because it has reduced the base term to the naive policy that goes straight to the aim portfolio, thus leaving it to the correction terms to compensate for transaction costs._
Inserting the \(\widetilde{\lambda}_{t}\) expansion into (16), we expect the following lower-order expansion of the state process to carry an \(\mathcal{O}(\epsilon^{2})\) error:
\[X_{t}=X_{t-1}-\frac{1}{q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\left(\widetilde{ \lambda}_{t}^{[0]}+\epsilon\widetilde{\lambda}_{t}^{[1]}\right)+\mathcal{O}( \epsilon^{2}). \tag{21}\]
These \(\mathcal{O}(\epsilon^{2})\) errors are indeed valid, as established by the following proposition:
**Proposition 2**.: _Assume Condition 1, Condition 2 and Condition 3. The order-\(\epsilon\) approximation \(\widetilde{\lambda}_{t}^{[0]}+\epsilon\widetilde{\lambda}_{t}^{[1]}\) has an error satisfying_
\[\underset{t\leq T}{\sup}\mathbb{E}\left\|\widetilde{\lambda}_{t}^{[0]}+ \epsilon\widetilde{\lambda}_{t}^{[1]}-\widetilde{\lambda}_{t}^{\star}\right\| =\mathcal{O}\left(\epsilon\Delta(\epsilon)\exp\Big{(}\frac{\Omega T}{1- \Delta(\epsilon)}\Big{)}\mathbb{E}\underset{t\leq T}{\sup}\|\widetilde{ \lambda}_{t}^{[1]}\|\right)\,\]
_where \(\widetilde{\lambda}^{\star}\) denotes the solution to (16) and (17)._
Proof.: (see Appendix.)
```
Initialize: \(\theta^{(1)}\sim\mathcal{N}(0,0.01)\).
for k = 1 to MAX_ITER do
    Initialize: \(P_{0}\), \(\vec{S}_{0}\), and \(X_{0}\). Set: \(S_{0}=\text{diag}(\vec{S}_{0})\).
    for t = 1 to T do
        ### Update the MGARCH state:
        \(Z_{t}=R_{t}-\mu\)
        \(\Sigma_{t}=CC^{\top}+A\Sigma_{t-1}A^{\top}+BZ_{t}Z_{t}^{\top}B^{\top}\)
        \(S_{t}^{i}=S_{t-1}^{i}(1+R_{t}^{i})\) for \(i=1,\dots,n\)
        \(\vec{S}_{t}=(S_{t}^{1},S_{t}^{2},\dots,S_{t}^{n})^{\top}\)
        \(\Psi_{t}=\text{diag}(\vec{S}_{t})\)
        \(P_{t}=\Psi_{t}\Sigma_{t}\Psi_{t}\)
        ### Update the portfolio:
        \(\widetilde{\lambda}_{t}^{[0]}=-\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-1}\left(\Psi_{t}\mu-\gamma P_{t}X_{t-1}\right)\)
        \(\widetilde{\lambda}_{t}^{[1]}=\delta\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-1}\varphi(X_{t-1},\vec{S}_{t},P_{t};\theta^{(k)})\)
        \(X_{t}=X_{t-1}-\frac{1}{q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\left(\widetilde{\lambda}_{t}^{[0]}+\epsilon\widetilde{\lambda}_{t}^{[1]}\right)\)
    end for
    \(\text{loss}(\theta)=\frac{1}{T-1}\sum_{t=1}^{T-1}\left\|\varphi(X_{t-1},\vec{S}_{t},P_{t};\theta)-\widetilde{\lambda}_{t+1}^{[0]}\right\|^{2}\)
    \(\theta^{*}=\arg\min_{\theta}\text{loss}(\theta)\)
    \(\theta^{(k+1)}=\alpha\theta^{*}+(1-\alpha)\theta^{(k)}\)
end for
```
**Algorithm 1** Small-\(\epsilon\) Neural Network Fixed-Point Algorithm for MGARCH, with Learning Rate \(\alpha\in(0,1]\)
### Neural Network Algorithm
Let function \(\varphi:\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{d\times d}\to \mathbb{R}^{n}\) be from a class of NN functionals with sigmoidal activation. The term \(\widetilde{\lambda}_{t}^{[1]}\) is approximated as \(\delta\left(\epsilon I+\frac{\gamma}{q(\vec{S}_{t},P_{t})}P_{t}\Psi_{t}^{-1} \right)^{-1}\varphi(X_{t-1},\vec{S}_{t},P_{t};\theta)\), where the optimal parameter \(\theta\) is found from
\[\min_{\theta}\sum_{t=1}^{T}\mathbb{E}\left\|\varphi(X_{t-1},\vec{S}_{t},P_{t}; \theta)-\mathbb{E}_{t}[\widetilde{\lambda}_{t+1}^{[0]}]\mathbb{I}_{t<T} \right\|^{2}\.\]
Note that now we are considering an NN architecture that remains constant through time, which is a considerable simplification from the NN architecture used in the proof of Theorem 2. The tradeoff in making this simplification is faster computation time. Algorithm 1 gives the implementation of the scheme in (13) using the lower-order expansion in (21) with this NN approximation of \(\delta\mathbb{E}_{t}\widetilde{\lambda}_{t+1}^{[0]}\). Fig. 2 shows the flow of Alg. 1 at each moment \(t\). We empirically observed that after training, our NN policy can run in real time (specifically, each NN inference takes around 1.5 ms on a commodity laptop).
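For readers who prefer code, here is a compact NumPy sketch of the per-step portfolio update inside Algorithm 1; the callable `phi` stands in for the trained network \(\varphi(X_{t-1},\vec{S}_{t},P_{t};\theta)\), and all names are ours:

```python
import numpy as np

def portfolio_step(X_prev, S_vec, P, mu, gamma, eps, delta, q, phi):
    """One t-step of Algorithm 1's portfolio update."""
    n = len(S_vec)
    Psi = np.diag(S_vec)
    M = eps * np.eye(n) + (gamma / q) * P @ np.linalg.inv(Psi)
    lam0 = -np.linalg.solve(M, Psi @ mu - gamma * P @ X_prev)   # order-0 term
    lam1 = delta * np.linalg.solve(M, phi(X_prev, S_vec, P))    # order-1 term
    return X_prev - (1.0 / q) * np.linalg.solve(Psi, lam0 + eps * lam1)
```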
In our analysis of Algorithm 1's portfolio, we will compare to the purely myopic strategy,
\[X_{t}^{\text{myopic}}=X_{t-1}^{\text{myopic}}+\frac{1}{\epsilon q(\vec{S}_{t},P_{t})}\Psi_{t}^{-1}\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\left(\Psi_{t}\mu-\gamma P_{t}X_{t-1}^{\text{myopic}}\right). \tag{22}\]
The myopic strategy shares the same objective as the RL strategy but for \(\delta=0\). Note that because our objective function contains a quadratic transaction cost term, the typical buy-and-hold portfolios will create a large transaction cost at the beginning, leading to a negative total wealth return. Therefore, we do not compare our method with the typical buy-and-hold portfolios.
## 4 Sector ETFs: an 11-Dimensional Example
We verify the small-\(\epsilon\) asymptotic analysis by allocating 11 sector ETFs, including iShares U.S. Real Estate ETF (IYR), Materials Select Sector SPDR Fund (XLB), Energy Select Sector SPDR Fund (XLE), Financial Select Sector SPDR Fund (XLF), Industrial Select Sector SPDR Fund (XLI), Technology Select Sector SPDR Fund (XLK), Consumer Staples Select Sector SPDR Fund (XLP), Utilities Select Sector SPDR Fund (XLU), Health Care Select Sector SPDR Fund (XLV), Consumer Discretionary Select Sector SPDR Fund (XLY), and Vanguard Communication Services ETF (VOX). The chosen ETFs are a good representation of a cross-section of the U.S. stock market returns; however, our method is applicable to any set of stocks. Training of Algorithm 1 was implemented on synthetic data sampled from an MGARCH model that was estimated on historical data of the 11 ETFs. We will show the implementation results of Algorithm 1 on both simulated testing data and on historical out-of-sample data that was not used to estimate the MGARCH model. Overall, our algorithm performs well in practice; moreover, the rigorous conditions of Sec. 2 need not all hold, as we do not enforce Condition 2's boundedness of \(h\).
### Setup
#### 4.1.1 Dataset
We downloaded the adjusted closing prices of the 11 sector ETFs from 2010 to 2019 from Yahoo Finance ([44]). The data was split into 10 folds for training, with each fold having 5 years of historical data. The starting and ending dates for the folds are shown in Table 1.
#### 4.1.2 Estimating Parameters
Given a sequence of historical prices \(\{S_{t}^{i}\}\) for the \(i^{th}\) ETF, the return rate \(R_{t}^{i}\) at each time \(t\) can be found by:
\[R_{t}^{i}=\frac{S_{t+1}^{i}-S_{t}^{i}}{S_{t}^{i}}.\]
Figure 2: Flow of Alg. 1
Defining \(R_{t}=[R_{t}^{1},R_{t}^{2},...,R_{t}^{11}]^{\top}\) and \(\bar{R}\) the mean of \(R_{t}\) over \(t\), the initial covariance matrix \(\Sigma_{0}\) is calculated as:
\[\Sigma_{0}=\frac{1}{L}\sum_{t=1}^{L}(R_{t}-\bar{R})(R_{t}-\bar{R})^{\top},\]
where \(L\) is the length of the sequential historical data.
\(A\), \(B\), and \(C\) in (2) were estimated using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm ([45]). Specifically, we first burned in the initial covariance matrix \(\Sigma_{0}\) into (2). We then used \(\sum_{t\geq 0}||\Sigma_{t+1}-\Sigma_{t}||^{2}\), with a matrix norm, as the loss function and BFGS as the minimizer to find \(A\), \(B\), \(C\), and \(\Sigma\) minimizing the loss. The following equilibrium state is achieved when the loss function is smallest:
\[\Sigma=CC^{\top}+A\Sigma A^{\top}+B\Sigma B^{\top}. \tag{23}\]
For the estimation of \(\mu\), we used the eigen-portfolio approach in [46]. The parameters (i.e., \(A\), \(B\), \(C\), \(\Sigma_{0}\), and \(\mu\)) were estimated for each fold. Fig. 3 shows an example of the historical ETF prices and the simulated ETF prices using the estimated parameters.
#### 4.1.3 Architecture of Neural Network
The utilized neural network (NN) is composed of fully connected layers. The input contains the portfolio \(X_{t-1}\), which has 11 elements; the covariance matrix of the dollar returns \(P_{t}=\Psi_{t}\Sigma_{t}\Psi_{t}\), whose dimension is \(11\times 11\); and the expected value of returns \(\mu^{\top}\Psi_{t}\), which is also 11-dimensional. Therefore, the total input dimension is 143. The hidden layer size was determined by considering both the training time and the NN performance. With more complex NN architectures, we observed no obvious improvement in performance while the training time increased considerably. On the other hand, with even simpler NN architectures, the deep RL algorithm suffered from under-fitting. Therefore, we utilized four hidden layers, each containing 400 neurons. The output of the NN corresponds to \(\varphi(\cdot,\cdot,\cdot;\theta)\) in Algorithm 1 and is 11-dimensional. The activation function is Tanh. Table 5 in the appendix shows the details of the architecture. The programming language is Python 3; TensorFlow and Keras (which is built on top of TensorFlow) were used as the deep learning libraries.
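A minimal Keras sketch of the described architecture is shown below; the linear output layer and all names are assumptions not stated in the text.

```python
import tensorflow as tf

def build_network(input_dim=143, output_dim=11):
    """Sketch of the described network: 143-dim input (portfolio,
    flattened covariance of dollar returns, expected returns), four
    hidden layers of 400 Tanh units, and an 11-dim output standing in
    for phi(., ., .; theta) of Algorithm 1."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(400, activation="tanh"),
        tf.keras.layers.Dense(400, activation="tanh"),
        tf.keras.layers.Dense(400, activation="tanh"),
        tf.keras.layers.Dense(400, activation="tanh"),
        tf.keras.layers.Dense(output_dim),
    ])
    model.compile(optimizer="adam", loss="mse")  # Adam + MSE, as in the text
    return model
```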
### Simulation on Synthetic Data
At moment \(t+1\), a noise vector \(Z_{t+1}\) was generated from a Gaussian distribution \(\mathcal{N}(0,\Sigma_{t})\), where \(\Sigma_{t}\) is the covariance matrix at the previous moment; the return rate \(R_{t+1}\) was determined by (1), and the covariance matrix \(\Sigma_{t+1}\) was found by (2). For each fold, we generated one sequence of synthetic data to train the NN and another 200 sequences to test the performance of the NN. We show how the total wealth grows with time for the RL and myopic strategies. We also show how the fund manager should invest the money into the stock market over time using the RL and myopic strategies, respectively. Specifically, wealth \(W_{t}\) at time \(t\) was calculated by
\[W_{t}=W_{t-1}+X_{t-1}^{\top}(\vec{S}_{t}-\vec{S}_{t-1})-\frac{\epsilon q(\vec {S}_{t},P_{t})}{2}a_{t}^{\top}\Psi_{t}a_{t}\,\]
\begin{table}
\begin{tabular}{c c c|c c c} \hline \hline
**Fold** & **From** & **To** & **Fold** & **From** & **To** \\ \hline
1 & Jan. 2010 & Oct. 2014 & 2 & Jul. 2010 & Apr. 2015 \\
3 & Jan. 2011 & Oct. 2015 & 4 & Jul. 2011 & Apr. 2016 \\
5 & Feb. 2012 & Nov. 2016 & 6 & Aug. 2012 & May 2017 \\
7 & Mar. 2013 & Nov. 2017 & 8 & Sep. 2013 & Jun. 2018 \\
9 & Feb. 2014 & Dec. 2018 & 10 & Aug. 2014 & Jun. 2019 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of Each Fold
with \(W_{0}=100\). We ran the trained NN and the myopic strategy (as given by (22)) on the test data for each fold and calculated the average. The training epoch was 100 for each fold. The optimizer was Adam. The loss function was mean-squared-error loss (MSE). In this case, we took \(\epsilon=0.003\), for which we were able empirically to verify Condition 4. The results are shown in Fig. 4.
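A minimal sketch of this wealth recursion follows; the array layout and the precomputed transaction-cost term are assumptions of the sketch.

```python
import numpy as np

def wealth_path(S, X, costs, W0=100.0):
    """Sketch of the wealth recursion above, with W_0 = 100.

    S: (T+1) x 11 array of price paths; X: T x 11 array of positions;
    costs[t] is the transaction-cost term (eps*q/2) a_t' Psi_t a_t,
    assumed to be precomputed by the caller.
    """
    W = [W0]
    for t in range(1, len(S)):
        W.append(W[-1] + X[t - 1] @ (S[t] - S[t - 1]) - costs[t])
    return np.asarray(W)
```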
Table 2 shows the value added by the two methods in each fold on the synthetic data. The RL strategy of Algorithm 1 consistently outperforms the myopic strategy given by (22) by showing an annual increase of 1.8% in total value. The results are scalable to any \(W_{0}\). Therefore, our RL approach will show significant outperformance in the absolute value of total assets added when the given capital (i.e., \(W_{0}\)) is large. Note in the table that we are emphasizing the $-value of returns, as this is a better measure of fund performance as per the reasoning of [47].
The amount of money invested in the stock market is calculated by
\[I_{t}=I_{t-1}+a_{t-1}^{\top}\vec{S}_{t-1}\;,\]
where \(I_{0}=0\) because all money is in cash at the beginning. Note that since the ETF prices vary with time, the total money invested in the stock market may eventually exceed \(W_{0}\). Fig. 5 shows the evolution of \(I_{t}\) for both RL and myopic strategies. The two strategies show different investment speeds: the RL strategy invests faster than the myopic strategy. Fig. 6 shows the evolution of the RL and myopic portfolio allocations with time. In the beginning, the portfolio allocations are at 0 for both RL and myopic. As time passes, the portfolios increase, but the slope becomes smaller. Eventually, the portfolios are attracted to a stationary state. It is worth mentioning that the investment portfolio charts are not for comparison purposes. Instead, they are used to illustrate how our RL approach and the myopic approach are applied and to ensure that the two approaches behave reasonably.
Figure 3: Historical ETF prices and simulated ETF prices using the estimated parameters. X-axis shows the day. Y-axis shows the ETF prices.
### Simulation on Historical Data
For historical out-of-sample data, \(Z_{t}\) was found by
\[Z_{t}=R_{t}-\mu\]
Figure 4: Average wealth (200 trajectories) of the RL strategy (solid) and the myopic strategy (dot). The X-axis shows the day. Y-axis shows the total wealth.
where \(R_{t}\) is the return rate of the real historical data at each time. For each fold, the NN was trained with the data of that fold and tested out-of-sample on the historical data of the following six months. The training epoch was 100. The optimizer was Adam. The loss function was MSE. In this case, we took \(\epsilon=0.01\), and again we were able empirically to verify Condition 4. The result is shown in Fig. 7. In most situations, the RL strategy outperforms the myopic strategy, but not as significantly as it did on synthetic data. This is because the real historical market data contains considerable uncertainty that can diminish the performance of the RL strategy in testing. Table 3 shows the annualized total asset value added by the two approaches on historical market data. Across the folds, the RL strategy shows more added value than the myopic approach. The RL method consistently outperforms the myopic by about 11bps. This over-performance is not tremendous but makes evident that RL can improve out-of-sample performance for this problem. The results are scalable: if the fund manager has 100 million dollars, using the RL strategy can help her gain an extra $120,000 compared to the gain realized from the myopic strategy.
Fig. 8 shows how the money is invested with time. Fig. 9 shows how the portfolios change. It bears repeating that the practical purpose of this paper's optimization is to optimally move a large amount of new capital into the market, i.e., bring this new capital into the fund. By showing the cumulative amount of money invested we can see how much of the investment goal has been accomplished. Figures 5 and 8 illustrate how the two strategies behave as they are moving the new capital into stocks. Overall, the portfolios show similar behavior to what we observed in the synthetic data case, namely, that the RL strategy seeks to move the new capital into the ETFs faster than the myopic.
The explanation for the weaker significance when testing out-of-sample is the following. First, Fig. 4 shows persistent over-performance by RL because it is an average of 200 trajectories. It is possible that if we had abundantly more out-of-sample data, we could see averages playing out, in which case we might see a stronger out-of-sample performance by RL. Second, there may be an out-of-sample model risk, i.e., that the model changes or is misspecified in the out-of-sample test. In this case, the training is aimed at learning an optimum that becomes sub-optimal when applied in the out-of-sample test.
Figure 5: Money invested in the stock market at different times for the RL strategy (solid) and the myopic strategy (dot). The X-axis shows the day. Y-axis shows the money invested in the stock market.
Ultimately, we cannot say much more than this, but the different folds of data that we have shown do give us some sense for out-of-sample variation for both RL and the myopic portfolios.
Figure 6: Portfolios of the RL strategy (solid) and the myopic strategy (dot) at different times. The X-axis shows the day. Y-axis shows the portfolio.
Figure 7: Total wealth of the RL strategy (solid) and the myopic strategy (dot) at different times. X-axis shows the date (month - day). Y-axis shows the wealth.
Figure 8: Money invested in the stock market at different times for the RL strategy (solid) and the myopic strategy (dot). The X-axis shows the date (month - day). Y-axis shows the money invested in the stock market.
### The Objective Value \(V\) on Historical Data
The main goal of this paper is to maximize the objective value function (4); we therefore ran the RL and myopic algorithms on the 10 folds of historical market data and calculated the relevant terms in (4). Specifically, we show the
Figure 9: Portfolio value of the RL strategy (solid) and the myopic strategy (dot) with time. The X-axis shows the date (month - day). Y-axis shows the portfolio.
average results of \(V\), total transaction cost \(\sum_{t=0}^{T}\delta^{t}\frac{\epsilon q(\vec{S}_{t},P_{t})}{2}a_{t}^{\top}\Psi_{t}a_{t}\), and risk penalty \(\sum_{t=0}^{T}\delta^{t}\frac{\gamma}{2}X_{t}^{\top}P_{t}X_{t}\) for different \(\epsilon\). We observed that when \(\epsilon\) is relatively small (e.g., \(0.00001\)), our RL algorithm adopts an aggressive investment to maximize (4) (i.e., increasing the inventory \(X_{t}\) more quickly and thus taking a relatively higher total transaction cost and risk penalty). When \(\epsilon\) is relatively large (e.g., \(0.01\)), our RL algorithm adopts a conservative investment to maximize (4) (i.e., increasing \(X_{t}\) more slowly and thus taking a relatively lower total transaction cost and risk penalty). Fig. 10 shows how the inventory of the IYR ETF changes for different \(\epsilon\) under the RL and myopic strategies. The inventories of the other ETFs show similar behavior. From the figure, when \(\epsilon=0.01\), our RL tends to increase \(X_{t}\) slowly. When \(\epsilon=0.00001\), our RL tends to increase \(X_{t}\) quickly. The explanation for this behavior is that when \(\epsilon\) is small (meaning that the transaction cost is negligible), \(f(a_{t},X_{t},\vec{S}_{t},P_{t})\) in (4) can be approximated as:
\[f(a_{t},X_{t},\vec{S}_{t},P_{t})\approx\underbrace{\mu^{\top}\Psi_{t}X_{t}}_{ \text{investment return}}-\underbrace{\frac{\gamma}{2}X_{t}^{\top}P_{t}X_{t}}_{ \text{risk penalty}} \tag{24}\]
which has an equilibrium point at \(X_{t}=\frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu\) for every \(t\). Therefore, to maximize (4) with the approximation (24), our RL algorithm needs to increase \(X_{t}\) to the equilibrium point within a short time and thus performs an aggressive investment. If \(\epsilon\) is large (meaning that the transaction cost cannot be neglected), to avoid high transaction costs, our RL can only increase \(X_{t}\) slowly. We show the average results of the final objective-function value \(V\), total transaction costs, and total risk penalty over the 10 folds of historical market data in Table 4 for different \(\epsilon\). The total transaction cost and risk penalty for \(\epsilon=0.00001\) are higher than those for \(\epsilon=0.01\), which confirms our explanation.
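For completeness, the equilibrium point follows from the first-order condition of the approximation (24) in \(X_{t}\), using the symmetry of \(P_{t}\) and \(\Psi_{t}\):

\[\nabla_{X_{t}}f\;\approx\;\Psi_{t}\mu-\gamma P_{t}X_{t}=0\qquad\Longrightarrow\qquad X_{t}=\frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu.\]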
Our RL algorithm outperforms the myopic strategy for both aggressive and conservative investment cases by showing a higher \(V\) according to Table 4. For the aggressive investment case (\(\epsilon=0.00001\)), our RL algorithm behaves less aggressively than the myopic strategy by having a lower total transaction cost and risk penalty. On the other hand, for the conservative investment case (\(\epsilon=0.01\)), our RL algorithm behaves more aggressively than the myopic strategy by taking a higher total transaction cost and risk penalty. However, for both cases, our RL algorithm shows a higher value for the objective \(V\) than the myopic strategy, as shown in Table 4.
Fig. 11 shows the average performance of our RL and the myopic strategies with time on the 10-fold historical market datasets. The average wealth returns using RL and myopic strategies are close. However, our RL approach attains a higher (i.e., better) value for the objective than the myopic strategy, meaning that our RL achieves a better trade-off
between investment return, transaction cost, and risk penalty. Therefore, our RL outperforms the myopic strategy.
### Discussion of Our RL Approach
From the experimental results, we have observed that our RL approach attains a higher objective value \(V\) given in (4) (i.e., the main goal of this paper) than the myopic strategy on the historical data. In other words, our RL approach balances the wealth return, risk penalty, and transaction cost better than the myopic strategy, with an objective value that is 3.85% higher for \(\epsilon=0.01\) and 11.64% higher for \(\epsilon=0.00001\). On synthetic data, the RL method returns an extra 1-2% additional annual percentage return over the myopic (out of a total of roughly 10% annual return). This demonstrates that our method outperforms the myopic strategy, as this extra 1-2% annually is significant when measuring the growth of assets. For real data, we are able to show consistent over-performance of the RL method by about 11bps. This over-performance makes evident RL's improved out-of-sample performance. Our RL approach also shows a faster investment speed while working in real time. Therefore, our approach is a valid and effective method.
## 5 Conclusion
We have implemented a reinforcement learning algorithm to solve the mean-variance preferences of (4) with an MGARCH model and transaction costs of order \(\epsilon\), where \(\epsilon>0\) is a small parameter. Our method addresses issues of algorithm convergence and computation time by using an \(\epsilon\) expansion to guide the network to the solution. While similar methods use double (dueling) networks to stabilize convergence, we only need to use one network with our expansion. Our method is stable and can work in real time. The resulting portfolios show good performance in simulated tests.
## Appendix A Proofs
**Proof of Proposition 1**: We apply the Sherman–Morrison–Woodbury formula,
\[\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}=\Psi_{t}\widetilde{P}_{t}^ {-1}-\Psi_{t}\widetilde{P}_{t}^{-1}\left(I+\Psi_{t}\widetilde{P}_{t}^{-1} \right)^{-1}\Psi_{t}\widetilde{P}_{t}^{-1}. \tag{25}\]
The first term in (25) is bounded as follows,
\[\|\Psi_{t}\widetilde{P}_{t}^{-1}\|=\frac{\epsilon}{\gamma}q(\widetilde{S}_{t},P_{t})\|P_{t}^{-1}\Psi_{t}\|\leq\frac{\epsilon}{\gamma}\chi\|h\|_{\infty}\|P_{t}^{-1}\|\leq\frac{\epsilon\chi\|h\|_{\infty}}{\gamma\underline{\mathrm{s}}^{2}c}\,\]
\begin{table}
\begin{tabular}{|c|c c c c|} \hline
 & \multicolumn{4}{c|}{\(V\)} \\
\(\epsilon\) & RL & Myopic & Difference & Difference / RL \\ \hline
\(0.01\) & 0.0234 & 0.0225 & 0.0009 & 3.85\% \\
\(0.00001\) & 0.0885 & 0.0782 & 0.0103 & 11.64\% \\ \hline
 & \multicolumn{4}{c|}{Total Transaction Cost} \\
\(\epsilon\) & RL & Myopic & Difference & Difference / RL \\ \hline
\(0.01\) & 0.0013 & 0.0011 & 0.0002 & 15.38\% \\
\(0.00001\) & 0.2562 & 0.2600 & -0.0038 & -1.50\% \\ \hline
 & \multicolumn{4}{c|}{Total Risk Penalty} \\
\(\epsilon\) & RL & Myopic & Difference & Difference / RL \\ \hline
\(0.01\) & 0.0012 & 0.0011 & 0.0001 & 8.33\% \\
\(0.00001\) & 0.2399 & 0.2436 & -0.0037 & -1.54\% \\ \hline
\end{tabular}
\end{table}
Table 4: Total Value \(V\), Transaction Costs, and Risk Penalties for Different \(\epsilon\).
Figure 11: Results on real data. Top: \(\epsilon=0.00001\). Bottom: \(\epsilon=0.01\).
where we have bounded \(\|P_{t}^{-1}\|\) in the following way,
\[\|P_{t}^{-1}\|\leq\|\Psi_{t}^{-1}\|^{2}\cdot\|\Sigma_{t}^{-1}\|\leq \frac{1}{\underline{\mathrm{s}}^{2}\inf_{\|v\|=1}v^{\top}CC^{\top}v}=\frac{1}{ \underline{\mathrm{s}}^{2}c}\,\]
where \(c>0\) is the lower bound defined in Condition 1. The second term in (25) is bounded as follows,
\[\left\|\Psi_{t}\widetilde{P}_{t}^{-1}\left(I+\Psi_{t}\widetilde{P}_{t}^{-1}\right)^{-1}\Psi_{t}\widetilde{P}_{t}^{-1}\right\|\leq\left\|\Psi_{t}\widetilde{P}_{t}^{-1}\right\|^{2}\left\|\left(I+\Psi_{t}\widetilde{P}_{t}^{-1}\right)^{-1}\right\|\leq\left(\frac{\epsilon\chi\|h\|_{\infty}}{\gamma\underline{\mathrm{s}}^{2}c}\right)^{2}\left\|\left(I+\Psi_{t}\widetilde{P}_{t}^{-1}\right)^{-1}\right\|\.\]
Now, taking the norm of (25), applying a triangle inequality to the right-hand side, and inserting these bounds, we have
\[\left\|\left(I+\Psi_{t}\widetilde{P}_{t}^{-1}\right)^{-1}\right\|\leq\frac{\epsilon\chi\|h\|_{\infty}}{\gamma\underline{\mathrm{s}}^{2}c}+\left(\frac{\epsilon\chi\|h\|_{\infty}}{\gamma\underline{\mathrm{s}}^{2}c}\right)^{2}\left\|\left(I+\Psi_{t}\widetilde{P}_{t}^{-1}\right)^{-1}\right\|\.\]
If \(\epsilon\chi\|h\|_{\infty}<\gamma\underline{\mathrm{s}}^{2}c\), then we can re-arrange to obtain the bound in (9).
**Proof of Theorem 1**: The following lemma is useful to have when proving convergence in Theorem 1.
**Lemma 1**.: _Let \(A\) and \(B\) be two symmetric positive definite matrices. Then \(\|(A+B)^{-1}\|\leq\min(\|A^{-1}\|,\|B^{-1}\|)\)._
Proof.: By positive definiteness of each matrix, we have
\[\inf_{\|v\|=1}v^{\top}(A+B)v\geq\inf_{\|v\|=1}v^{\top}Av+\inf_{\|v\|=1}v^{\top }Bv=\frac{1}{\|A^{-1}\|}+\frac{1}{\|B^{-1}\|}\geq\frac{1}{\min(\|A^{-1}\|,\|B^ {-1}\|)}\.\]
Thus, we have
\[\|(A+B)^{-1}\|=\frac{1}{\inf_{\|v\|=1}v^{\top}(A+B)v}\leq\min(\|A^{-1}\|,\|B^ {-1}\|)\,\]
which proves the statement.
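As a quick numerical illustration (not part of the proof), the inequality of Lemma 1 can be checked on random symmetric positive definite matrices using the spectral norm:

```python
import numpy as np

# Sanity check of Lemma 1: ||(A+B)^{-1}|| <= min(||A^{-1}||, ||B^{-1}||)
# for random SPD matrices A and B, with ||.|| the spectral norm.
rng = np.random.default_rng(0)
for _ in range(1000):
    M1 = rng.standard_normal((5, 5))
    M2 = rng.standard_normal((5, 5))
    A = M1 @ M1.T + 1e-3 * np.eye(5)  # symmetric positive definite
    B = M2 @ M2.T + 1e-3 * np.eye(5)  # symmetric positive definite
    lhs = np.linalg.norm(np.linalg.inv(A + B), 2)
    rhs = min(np.linalg.norm(np.linalg.inv(A), 2),
              np.linalg.norm(np.linalg.inv(B), 2))
    assert lhs <= rhs + 1e-9
```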
From inspection of (11), the usefulness of the following lemma should be evident:
**Lemma 2**.: _If we assume Condition 1, Condition 2 and Condition 3, then in (11) there are the following bounds in coefficients,_
\[\left\|\frac{1}{\epsilon q(\widetilde{S_{t}},P_{t})}\Psi_{t}^{-1}\right\| \leq\frac{1}{\epsilon\underline{\mathrm{s}}} \tag{26}\] \[\left\|\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}P_{t}\right\| \leq\frac{\epsilon}{\gamma}\chi\|h\|_{\infty}\,\]
_where \(\underline{\mathrm{s}}\) and \(\|h\|_{\infty}\) are constants defined in Condition 2 and \(\chi\) is the bound defined in Condition 3._
Proof.:
1. The first bound in (26) is a clear consequence of Condition 2 and Condition 3.
2. To prove the second bound in (26), we start as follows, \[\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}P_{t}=\frac{\epsilon}{\gamma }q(\widetilde{S_{t}},P_{t})\left(\frac{\epsilon}{\gamma}q(\widetilde{S_{t}},P_ {t})P_{t}^{-1}+\Psi_{t}^{-1}\right)^{-1}\,\] where \(P_{t}\) is invertible because we have assumed Condition 1 putting a lower bound on the eigenvalues. However, we don't have a lower bound on \(\|P_{t}^{-1}\|\) because the eigenvalues of \(P_{t}\) do not have a fixed upper bound, but from Condition 2 we have \(\|\Psi_{t}\|\leq\|h\|_{\infty}\), and thus we use Lemma 1 to get \[\frac{\epsilon}{\gamma}q(\widetilde{S_{t}},P_{t})\left\|\left(\frac{\epsilon}{ \gamma}q(\widetilde{S_{t}},P_{t})P_{t}^{-1}+\Psi_{t}^{-1}\right)^{-1}\right\| \leq\frac{\epsilon}{\gamma}q(\widetilde{S_{t}},P_{t})\|h\|_{\infty}\,\] which proves the second bound in (26).
We are now ready to prove Theorem 1. From (11) and using \(\Omega\) defined in (12), we obtain the following inequality,
\[\|\lambda_{t}^{(k+1)}-\lambda_{t}^{(k)}\|\leq\delta\|(I+\widetilde{P} _{t}\Psi_{t}^{-1})^{-1}\|\cdot\mathbb{E}_{t}\|\lambda_{t+1}^{(k+1)}-\lambda_{t+ 1}^{(k)}\|\] \[+\|\gamma(I+\widetilde{P}_{t}\Psi_{t}^{-1})^{-1}P_{t}\|\sum_{t^{ \prime}=1}^{t-1}\frac{\|\Psi_{t^{\prime}}^{-1}\|}{\epsilon q(\widetilde{S}_{t^ {\prime}},P_{t^{\prime}})}\|\lambda_{t^{\prime}}^{(k)}-\lambda_{t^{\prime}}^{( k-1)}\|\] \[\leq\Delta(\epsilon)\mathbb{E}_{t}\|\lambda_{t+1}^{(k+1)}-\lambda _{t+1}^{(k)}\|+\Omega\sum_{t^{\prime}=1}^{t-1}\|\lambda_{t^{\prime}}^{(k)}- \lambda_{t^{\prime}}^{(k-1)}\|\, \tag{27}\]
where from Condition 4 we have \(\left\|\delta\left(I+\widetilde{P}_{t}\Psi_{t}^{-1}\right)^{-1}\right\|\leq \Delta(\epsilon)<1\) and from Lemma 2 it follows that
\[\sup_{t,t^{\prime}}\frac{\|\Psi_{t^{\prime}}^{-1}\|}{\epsilon q(\widetilde{S} _{t^{\prime}},P_{t^{\prime}})}\left\|\left(I+\widetilde{P}_{t}\Psi_{t}^{-1} \right)^{-1}P_{t}\right\|\leq\Omega\.\]
If we consider a finite-time version of the problem where \(\lambda_{T+1}^{(k)}\equiv 0\) for all \(k\) (i.e., no control on \(X_{t}^{(k)}=X_{T}^{(k)}\) for \(t>T\)), then we can prove convergence of the iteration scheme. Denote the expected iteration error as
\[\mathcal{E}_{t}^{(k)}=\mathbb{E}\|\lambda_{t}^{(k)}-\lambda_{t}^{(k-1)}\|\.\]
Denoting the vector of errors as
\[\mathcal{E}_{1:T}^{(k)}=\left(\mathcal{E}_{T}^{(k)},\mathcal{E}_{T-1}^{(k)}, \ldots,\mathcal{E}_{1}^{(k)}\right)^{\top},\]
the inequality in (27) can be expressed with the following matrix/vector system,
\[\mathcal{E}_{1:T}^{(k+1)}\leq H_{T}\mathcal{E}_{1:T}^{(k+1)}+N_{T}\mathcal{E} _{1:T}^{(k)} \tag{28}\]
where \(H_{T}\) is the \(T\times T\) nilpotent matrix whose entry at row \(i\) column \(j\) is
\[H_{T}^{ij}=\Delta(\epsilon)\times\begin{cases}1\text{ if }i=j+1\\ 0\text{ otherwise,}\end{cases}\]
which has norm \(\|H_{T}^{k}\|=\Delta^{k}(\epsilon)\) for \(k<T\) and \(\|H_{T}^{k}\|=0\) for \(k\geq T\), and where \(N_{T}\) is the \(T\times T\) nilpotent matrix whose entry at row \(i\) column \(j\) is
\[N_{T}^{ij}=\Omega\times\begin{cases}1\text{ if }i<j\\ 0\text{ otherwise,}\end{cases}\]
for which \(N_{T}^{k}=0\) for \(k\geq T\). Therefore, by combining the error bound in (28) we have
\[\|\mathcal{E}_{1:T}^{(k+1)}\|\leq\left\|H_{T}\mathcal{E}_{1:T}^{(k+1)}\right\| +\left\|N_{T}\mathcal{E}_{1:T}^{(k)}\right\|\leq\Delta(\epsilon)\left\| \mathcal{E}_{1:T}^{(k+1)}\right\|+\left\|N_{T}\mathcal{E}_{1:T}^{(k)}\right\|\,\]
and after rearranging and taking \(k\)-many iterations, we have
\[\|\mathcal{E}_{1:T}^{(k+1)}\|\leq\left(\frac{1}{1-\Delta(\epsilon)}\right)^{k }\|N_{T}^{k}\mathcal{E}_{1:T}^{(1)}\|\leq\left(\frac{1}{1-\Delta(\epsilon)} \right)^{k}\|N_{T}^{k}\|\|\mathcal{E}_{1:T}^{(1)}\|. \tag{29}\]
Now we take a moment to derive a bound on the norm of \(N_{T}\): comparing sums with integrals, we see the following,
\[\|N_{T}^{k}e_{T}\|_{1}\leq\Omega^{k}\sum_{t_{1}=1}^{T-1}\sum_{t_{2}=1}^{t_{1}- 1}\cdots\sum_{t_{k}=1}^{t_{k-1}-1}1\leq\Omega^{k}\int_{0}^{T}dt_{1}\int_{0}^{ t_{1}}dt_{2}\cdots\int_{0}^{t_{k-1}}dt_{k}=\frac{\Omega^{k}T^{k}}{k!}\,\]
for \(k<T\) where \(e_{T}\in\mathbb{R}^{T}\) is the \(T^{th}\) canonical basis vector, and this bound gives us the general bound
\[\|N_{T}^{k}\|\leq\sqrt{T}\|N_{T}^{k}e_{T}\|_{1}\leq\frac{\Omega^{k}T^{k+1/2}}{k! }\mathbb{1}_{k<T}\qquad\forall k. \tag{30}\]
We apply the bound in (30) to the right-hand side in (29), and we find that \(\|\mathcal{E}_{1:T}^{(k+1)}\|=0\) for all \(k\geq T\) thus proving convergence to a unique fixed point, hence proving the statement of Theorem 1.
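As a numerical illustration of the nilpotency properties of \(H_{T}\) and \(N_{T}\) used above (with illustrative constants):

```python
import numpy as np

# Build H_T and N_T from their definitions and verify the nilpotency
# used in the proof, for a small T and illustrative Delta, Omega.
T, Delta, Omega = 6, 0.5, 0.3
H = Delta * np.eye(T, k=-1)                  # nonzero only where i = j + 1
N = Omega * np.triu(np.ones((T, T)), k=1)    # nonzero only where i < j
assert np.allclose(np.linalg.matrix_power(H, T), 0)  # H_T^k = 0 for k >= T
assert np.allclose(np.linalg.matrix_power(N, T), 0)  # N_T^k = 0 for k >= T
print(np.linalg.norm(np.linalg.matrix_power(H, 2), 2))  # equals Delta**2
```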
**Proof of Theorem 2**: Given the compact set \(\mathbf{K}\) from (14), by the universal approximation theorem ([25]), for each iteration in (13) there is a NN for which parameters can be chosen such that
\[\sup_{t\leq T}\mathbb{E}\Big{[}\,\Big{\|}\lambda(t,X_{t-1}^{(k)},\vec{S}_{t},P _{t};\theta^{(k)})-Y_{t}^{(k-1)}(\theta^{(k)})\Big{\|}\,\mathbb{1}_{\{(X_{t-1} ^{(k)},\vec{S}_{t},P_{t})\in\mathbf{K}\}}\Big{]}\leq\varepsilon^{(k)}\,\]
where \(\varepsilon^{(k)}\) is the arbitrarily small error that can be decreased by increasing the NN's hyperparameters. In the same manner as the process given by (8), we define the iterated process to be
\[\lambda_{t}^{(k)}=\lambda(t,X_{t-1}^{(k)},\vec{S}_{t},P_{t},\theta^{(k)})\.\]
Denote the error as
\[\mathcal{E}_{t}^{(k)}=\mathbb{E}\,\Big{\|}\lambda_{t}^{(k)}-\lambda_{t}^{*} \Big{\|}\,\]
where \(\lambda_{t}^{*}\) is the limiting fixed-point of (8). Using the pair of iterative equations in (8), we can include the NN error to obtain the following recursive bound,
\[\mathcal{E}_{t}^{(k)}\leq\frac{\mathbb{E}\,\Big{[}\|\gamma(I+\widehat{P}_{t} \Psi_{t}^{-1})^{-1}P_{t}\|\cdot\|X_{t-1}^{(k)}-X_{t-1}^{*}\|\Big{]}+(\varepsilon ^{(k)}+2\eta)}{1-\Delta(\epsilon)}\.\]
Then, proceeding similarly to the proof of Theorem 1 above, we denote the vector of all errors as \(\mathcal{E}_{1:T}^{(k)}=\left(\mathcal{E}_{T}^{(k)},\mathcal{E}_{T-1}^{(k)},\ldots,\mathcal{E}_{1}^{(k)}\right)^{\top}\), and, as in (28), a bound on these errors can be expressed with the following matrix/vector system,
\[\mathcal{E}_{1:T}^{(k+1)} \leq\frac{1}{1-\Delta(\epsilon)}\left(N_{T}\mathcal{E}_{1:T}^{(k) }+\sup_{\ell}(\varepsilon^{(\ell)}+2\eta)\mathbf{1}\right)\] \[\leq\left(\frac{1}{1-\Delta(\epsilon)}\right)^{k}N_{T}^{k} \mathcal{E}_{1:T}^{(1)}+\sup_{\ell}(\varepsilon^{(\ell)}+2\eta)\sum_{i=0}^{k- 1}\left(\frac{1}{1-\Delta(\epsilon)}\right)^{i+1}N_{T}^{i}\mathbf{1}\,\]
where \(H_{T}\) and \(N_{T}\) are the same matrices that were defined in the proof of Theorem 1, and where \(\mathbf{1}\) denotes the vector in \(\mathbb{R}^{T}\) of all 1's. Thus,
\[\|\mathcal{E}_{1:T}^{(k+1)}\|\leq\left(\frac{1}{1-\Delta(\epsilon )}\right)^{k}\|N_{T}^{k}\mathcal{E}_{1:T}^{(1)}\|+\sup_{\ell}(\varepsilon^{( \ell)}+2\eta)\sum_{i=0}^{k-1}\left(\frac{1}{1-\Delta(\epsilon)}\right)^{i+1} \|N_{T}^{i}\mathbf{1}\|\] \[\leq\left(\frac{1}{1-\Delta(\epsilon)}\right)^{k}\|N_{T}^{k}\| \|\mathcal{E}_{1:T}^{(1)}\|+\sqrt{T}\left(\sup_{\ell}\varepsilon^{(\ell)}+2 \eta\right)\sum_{i=0}^{k-1}\left(\frac{1}{1-\Delta(\epsilon)}\right)^{i+1}\|N _{T}^{i}\|\] \[=\left(\frac{1}{1-\Delta(\epsilon)}\right)^{k}\frac{\Omega^{k}T^{k +1/2}}{k!}\,\mathbb{1}_{k<T}\|\mathcal{E}_{1:T}^{(1)}\|+\sqrt{T}\left(\sup_{ \ell}\varepsilon^{(\ell)}+2\eta\right)\sum_{i=0}^{k-1}\left(\frac{1}{1-\Delta (\epsilon)}\right)^{i+1}\frac{\Omega^{i}T^{i+1/2}}{i!}\,\mathbb{1}_{i<T}\,\]
for \(k\geq T\) we have
\[\|\mathcal{E}_{1:T}^{(k+1)}\|=\mathcal{O}\left(\left(\sup_{\ell}\varepsilon^{( \ell)}+2\eta\right)\exp\left(\frac{\Omega T}{1-\Delta(\epsilon)}\right)\right)\,\]
which is the statement of Theorem 2.
**Proof of Proposition 2**: Denote the state process from the order-\(\epsilon\) approximation as
\[X_{t}^{[1,2]}=X_{t-1}^{[1,2]}-\sum_{t^{\prime}=1}^{t}\frac{1}{q(\widetilde{S}_{t^{\prime}},P_{t^{\prime}})}\Psi_{t^{\prime}}^{-1}(\widetilde{\lambda}_{t^{\prime}}^{[0]}+\epsilon\widetilde{\lambda}_{t^{\prime}}^{[1]})\,\]
for \(t=1,2,3,\ldots,T\). Inserting the expansion into (17) results in the order-\(\epsilon\) approximation error
\[\widetilde{\lambda}_{t}^{[0]}+\epsilon\widetilde{\lambda}_{t}^{[ 1]}-\left(\epsilon I+\frac{\gamma}{q(\widetilde{S_{t}},P_{t})}P_{t}\Psi_{t}^{- 1}\right)^{-1}\left(\delta\epsilon\mathbb{E}_{t}\left[\widetilde{\lambda}_{t+1 }^{[0]}+\epsilon\widetilde{\lambda}_{t+1}^{[1]}\right]-\gamma P_{t}\left( \frac{1}{\gamma}P_{t}^{-1}\Psi_{t}\mu-X_{t-1}^{[1,2]}\right)\right)\] \[=-\left(\epsilon I+\frac{\gamma}{q(\widetilde{S_{t}},P_{t})}P_{t} \Psi_{t}^{-1}\right)^{-1}\left(\delta\epsilon^{2}\mathbb{E}_{t}\widetilde{ \lambda}_{t+1}^{[1]}\right)\,\]
where the equality uses the expressions for \(\widetilde{\lambda}_{t}^{[0]}\) and \(\widetilde{\lambda}_{t}^{[1]}\) given in (19). Using Lemma 2, we have
\[\left\|\left(\epsilon I+\frac{\gamma}{q(\widetilde{S_{t}},P_{t})}P_{t}\Psi_{t}^{-1}\right)^{-1}\right\|\leq\frac{\Delta(\epsilon)}{\epsilon}\.\]
Therefore the differential error given above is of order \(\mathcal{O}\left(\epsilon\Delta(\epsilon)\mathbb{E}\sup_{t\leq T}\|\widetilde{ \lambda}_{t}^{[1]}\|\right)\). Thus, similar to Theorem 2, we have a big-oh bound on error,
\[\sup_{t\leq T}\mathbb{E}\left\|\widetilde{\lambda}_{t}^{[0]}+\epsilon \widetilde{\lambda}_{t}^{[1]}-\widetilde{\lambda}_{t}^{\star}\right\|= \mathcal{O}\left(\epsilon\Delta(\epsilon)\exp\Big{(}\frac{\Omega T}{1- \Delta(\epsilon)}\Big{)}\mathbb{E}\sup_{t\leq T}\|\widetilde{\lambda}_{t}^{[1 ]}\|\right)\,\]
where \(\widetilde{\lambda}^{\star}\) denotes the solution from (16) and (17).
|
2303.09599 | cito: An R package for training neural networks using torch | Deep Neural Networks (DNN) have become a central method in ecology. Most
current deep learning (DL) applications rely on one of the major deep learning
frameworks, in particular Torch or TensorFlow, to build and train DNN. Using
these frameworks, however, requires substantially more experience and time than
typical regression functions in the R environment. Here, we present 'cito', a
user-friendly R package for DL that allows specifying DNNs in the familiar
formula syntax used by many R packages. To fit the models, 'cito' uses 'torch',
taking advantage of the numerically optimized torch library, including the
ability to switch between training models on the CPU or the graphics processing
unit (GPU) (which allows to efficiently train large DNN). Moreover, 'cito'
includes many user-friendly functions for model plotting and analysis,
including optional confidence intervals (CIs) based on bootstraps for
predictions and explainable AI (xAI) metrics for effect sizes and variable
importance with CIs and p-values. To showcase a typical analysis pipeline using
'cito', including its built-in xAI features to explore the trained DNN, we
build a species distribution model of the African elephant. We hope that by
providing a user-friendly R framework to specify, deploy and interpret DNN,
'cito' will make this interesting model class more accessible to ecological
data analysis. A stable version of 'cito' can be installed from the
comprehensive R archive network (CRAN). | Christian Amesoeder, Florian Hartig, Maximilian Pichler | 2023-03-16T18:54:20Z | http://arxiv.org/abs/2303.09599v3 | cito: An R package for training neural networks using torch
###### Abstract
1. Deep neural networks (DNN) have become a central class of algorithms for regression and classification tasks. Although some packages exist that allow users to specify DNN in R, those are rather limited in their functionality. Most current deep learning applications therefore rely on one of the major deep learning frameworks, PyTorch or TensorFlow, to build and train DNN. However, using these frameworks requires substantially more training and time than comparable regression or machine learning packages in the R environment.
2. Here, we present 'cito', a user-friendly R package for deep learning. 'cito' allows R users to specify deep neural networks in the familiar formula syntax used by most modeling functions in R. In the background, 'cito' uses 'torch' to fit the models, taking advantage of all the numerical optimizations of the torch library, including the ability to switch between training models on CPUs or GPUs. Moreover, 'cito' includes many user-friendly functions for predictions and an explainable Artificial Intelligence (xAI) pipeline for the fitted models.
3. We showcase a typical analysis pipeline using 'cito', including its built-in xAI features to explore the trained DNN, by building a species distribution model of the African elephant.
4. In conclusion, 'cito' provides a user-friendly R framework to specify, deploy and interpret deep neural networks based on torch. The current stable CRAN version mainly supports fully connected DNNs, but it is planned that future versions will also include CNNs and RNNs.
## Introduction
Deep neural networks (DNNs) are increasingly used in ecology and evolution (Christian et al., 2019; Joseph, 2020; Strydom et al., 2021). For many researchers, however, these methods are still not easily accessible because state-of-the-art deep learning frameworks have steep learning curves, while existing user-friendly R packages lack important functionalities necessary for training modern DNN architectures, such as the ability to train the models on graphics processing units (graphics cards, GPUs) or important training techniques (e.g. learning rate schedulers).
Looking at the requirements and user expectations for working with DNNs, we first observe that modern DNNs are almost exclusively implemented and trained in specialized deep learning frameworks such as PyTorch or Tensorflow (Abadi et al., 2016; Paszke et al., 2019). These frameworks are essentially extremely flexible and performant math libraries, consisting of functions and classes to implement and train many different deep learning
architectures, such as large language models (e.g. GPT-3, RoBERTA) (Black et al., 2022; Liu et al., 2019) or complex object detection models (e.g. Mask R-CNN, DeepVit) (He et al., 2017; Zhou et al., 2021).
This flexibility is appealing to "power users", but for many standard applications, even simple fully-connected neural networks (also known as multi-layer perceptrons, MLPs) can act as useful predictive models (Strydom et al., 2021). For such applications, the high level of customization offered by the large machine learning frameworks is unnecessary and often prohibitive or at least time-consuming for less experienced users.
As a response to this problem, several simplified frontends for the major machine learning frameworks have been developed, such as Keras for TensorFlow and luz for PyTorch (Allaire and Chollet, 2022; Falbel, 2022). However, building a DNN with these frontends still typically requires a few days of training, which is a lot compared to the time it takes a user to get first results from an R package that relies on the formula syntax for regression or classification that R users are familiar with. Popular packages that follow this syntax include 'ranger' for training random forests and 'lme4' for training mixed-effect models (Bates et al., 2015; Wright and Ziegler, 2017).
Some R packages for training DNN using the standard formula syntax already exist, but they often lack crucial functionalities, and most of them do not make use of the state-of-the-art frameworks for model fitting, which limits their use for very large networks because of their numerical inefficiency or their inability to train the models on GPUs. Established R packages such as 'nnet' or 'neuralnet' do not support modern deep learning techniques, such as regularization to control the bias-variance tradeoff (Fritsch et al., 2019; Venables and Ripley, 2002), or, more importantly, modern training techniques such as early stopping or learning rate schedulers that can help with otherwise difficult training. The 'h2o' package comes with its own Java backend, and while it allows training with the standard formula syntax, its use in R is cumbersome due to its inability to work with default R objects (LeDell et al., 2022). The 'brulee' R package, which uses 'torch' to train the DNNs specified in standard R syntax, is very similar to the package presented here, but still lacks some critical features (see section "Performance comparison and validation of cito") (Kuhn and Falbel, 2022).
Here, we present 'cito', an R package for training fully-connected neural networks using the standard R formula syntax for model specification. Based on the 'torch' deep learning framework, 'cito' allows flexible specification of fully-connected neural network architectures, supports many modern deep learning techniques (e.g. dropout and elastic net regularization, learning rate schedulers), can take advantage of CPU and GPU hardware for parallelization, and yet optionally offers a high degree of customization such as user-defined loss functions. 'cito' furthermore supports many downstream functionalities, such as the possibility to continue the training of existing neural networks with modified training parameters for fine-tuning, or the application of xAI methods to interpret the trained models. As such, 'cito' provides an easily usable but nevertheless complete analysis pipeline for building neural networks in R.
In the remainder of the paper, we introduce the design principles of cito in more detail, show validation and performance analysis, and showcase the application of cito using the example of a species distribution model of the African elephant.
## Design of the cito package
### Torch backend
Cito uses 'torch', a variant of PyTorch, as its backend to represent and train the specified neural networks. Until recently, R users who wanted to use PyTorch and Tensorflow had to call their Python bindings through the 'reticulate' package. R packages that relied on this pipeline were thus dependent on appropriate Python installations, which often created dependency issues. This issue was solved with the release of 'torch', a native implementation of the torch libraries with an R frontend (Falbel and Luraschi, 2022).
### Building and training neural networks in cito
With 'torch', R users can essentially use PyTorch natively in R, which solves dependency issues, but not the problem that specifying a DNN with 'torch' is complex.
'cito' addresses this problem by providing one simple command, _dnn()_, which combines everything needed to build and train a fully-connected neural network in one line of code. The _dnn()_ function includes options to modify the network architecture, the training process and the monitoring (e.g. by visualization) of the training and validation loss (see Table 1). The function returns an S3 object that can be used, for example, with the _continue_training()_ function to continue training for additional epochs (iterations) with the same or modified training parameters or data. Moreover, many standard R functions such as _summary()_, _predict()_ or _residuals()_ are
implemented for the trained models, and additional specialized explainable Artificial Intelligence (xAI) functions are available for interpreting the fitted networks.
## Performance comparison and validation of cito
After explaining the design of cito, we briefly compare its performance and functionality with the other available packages for implementing neural networks in R. We consider in particular 'nnet' and 'neuralnet', which each have their own backend and are not based on modern DL frameworks (Fritsch et al., 2019; Venables and Ripley, 2002), 'h2o', which possesses a much broader toolkit for training neural networks than the previous two packages (LeDell et al., 2022), and 'brulee' (Kuhn and Falbel, 2022), which, similar to cito, uses the 'torch' DL framework as a backend.
Regarding implemented features, 'cito' offers the most options: flexible regularization (to control the bias-variance trade-off), GPU support, the possibility to continue training, custom loss functions, and, most importantly, tools to interpret the trained DNN models (see Table 2).
Looking at the performance of the packages, measured by the time it takes to train the networks, we find that some of the older packages, in particular 'neuralnet', perform better than the torch packages for small networks (see Figure 1). This is probably due to the smaller overhead of these more specialized packages. However, when moving to larger networks (large networks, especially wide networks, are important to achieve low generalization
\begin{table}
\begin{tabular}{l l l} \hline \hline
\multicolumn{3}{l}{**Network architecture**} \\ \hline
Name & Explanation & Default \\ \hline
hidden & Quantity and size of hidden layers & (10,10,10) \\
activation & Activation function for hidden layers & “relu” \\
bias & Should hidden nodes have bias & TRUE \\ \hline \hline
\multicolumn{3}{l}{**Training**} \\ \hline
Name & Explanation & Default \\ \hline
validation & Split data into test and validation set & 0 \\
epochs & Number of training iterations & 100 \\
device & Set to “cuda” to train on GPU & “cpu” \\
plot & Visualize loss during training & TRUE \\
batchsize & Number of samples used for each training step & 32 \\
shuffle & Shuffle batches in between epochs & TRUE \\
lr & Learning rate & 0.01 \\
early stopping & Stops training early based on validation loss & FALSE \\ \hline \hline
\multicolumn{3}{l}{**Controlling bias-variance trade-off (regularization)**} \\ \hline
Name & Explanation & Default \\ \hline
lambda & Strength of elastic net regularization & 0 \\
alpha & Split of L1 and L2 regularization & 0.5 \\
dropout & Dropout probability of a node & 0 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Changeable parameters for fully-connected neural networks and their default values in ’cito’
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & ‘cito’ & ‘brulee’ & ‘h2o’ & ‘neuralnet’ & ‘nnet’ \\ \hline Customizable network architecture & X & X & X & X \\ Fit a probability distribution & X & & X & \\ GPU support & X & & & \\ Regularization & X & X & & \\ Custom loss function & X & & & X \\ Optimization of additional user-defined parameters & X & & & \\ Continue training & X & & & \\ Class weights for imbalanced data & & X & & \\ Learning rate scheduler & X & X & X & \\ Feature importance (xAI) & X & & & \\ Partial dependency plots (xAI) & X & & & \\ Accumulated local effect plots (xAI) & X & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Feature comparison of R packages used to build fully-connected neural networks
errors (Belkin et al., 2019)), 'cito' can play out one of the main advantages of modern machine learning frameworks, which is GPU support. On the GPU, training time in 'cito' is practically independent of the size of the network, confirming the general consensus that training large networks requires GPU resources. On a CPU, 'cito' performs on par with 'brulee', the other torch-based package, but somewhat worse than 'neuralnet'. We interpret these results as showing that for a simple problem, there is still some overhead of using torch as opposed to a native C implementation. Nevertheless, we would argue that the added flexibility and functionality of 'cito' outweigh this advantage of 'neuralnet'. Moreover, our results suggest that the difference between the torch packages and 'neuralnet' lies mainly in the constant overhead needed to set up the models, and thus for very large models, their performance is roughly equal.
## Workflow and case study
After having discussed the process of model training, which is arguably the core of any machine learning project, we want to comment on the entire workflow of 'cito' in building a predictive model. In general, this workflow consists of model specification, training, and interpretation and predictions (Box 1). To make the discussion of the workflow more accessible to the reader, we show all functions using the example of building a species distribution model for the African elephant (Loxodonta Africana).
As data, we use occurrence records of the African elephant from Angelov, 2020, who compiled data from different studies available on GBIF (INaturalist Contributors, 2022a, 2022b; Jlegind, 2021; Musila et al., 2019; Navarro, 2022). Those presence-only data were supplemented with randomly sampled background points (pseudo-absences) to generate a presence-absence signal for the classifier. As predictors, we used standard bioclimatic variables from WorldClim v2 (Fourcade et al., 2018). While it is common in statistical modelling to sample more pseudo-absences than presences, such unbalanced class numbers can be harmful for machine learning algorithms. We therefore randomly undersampled pseudo-absences to match the number of observations (another option would be to oversample presences, but in our example this resulted in lower accuracy in interim results).
Figure 1: Comparison of different deep learning packages (‘brulee’, ‘h2o’, ‘neuralnet’, and ‘cito’ (CPU and GPU)) on different network sizes on an Intel Xeon 6128 and a Nvidia RTX 2080ti. The networks consist of five equally sized layers (50 to 1000 nodes with a step size of 50) and are trained on a simulated data set with 1000 observations. Panel (A) shows the runtime of the different packages and panel (B) shows the average root mean square error (RMSE) of the models on a holdout of size 1000 observations (RMSE was averaged over different network sizes). Each network was trained 20 times (the dataset was resampled each time).
The building and training can be done in one line of code (Figure 2a, code provided in Pichler and Amesoder, 2023). The code builds a species-distribution model based on a fully-connected neural network with three hidden layers of 50, 50 and 50 nodes and trains it for 50 epochs. During training, a plot (not shown here) visualizes the training and validation loss; if the loss does not decrease over time, it may be beneficial to stop and restart training with different hyper-parameters (e.g., a smaller learning rate). In the second line of code (Figure 2b), the resulting object is further trained with a smaller learning rate to achieve a better fit.
The trained models can be interpreted with a range of built-in functions. The _predict()_ function can be used to predict the occurrence probability of the elephant (Figure 3a). The _summary()_ function provides a first overview of influential variables by calculating their importance (Fisher et al., 2019). Partial dependency plot (PDP) and accumulated local effect (ALE) plot functions can be used to display the effect of specific features on the response, in this case the occurrence probability of the elephant (Figure 3b).
## Conclusion
'cito' is a powerful R package for building and training fully-connected neural networks with a formula syntax. The package seamlessly fits into the R environment and removes many hurdles for inexperienced users, but also saves programming time for experienced users who want to build simple neural networks. Unique features of 'cito', such as training on a GPU, custom loss functions, and modern DL training techniques such as continued training, learning rate schedulers, or early stopping, cannot be found in other packages. Future releases of 'cito' aim to implement additional functionalities such as internal cross-validation for hyperparameter optimization, gradient-based methods for hyperparameter tuning, and the integration of recurrent and convolutional neural networks.
Figure 3: Predictions for the African elephant from a DNN trained by cito. Panel (A) shows the predicted probability of occurrence of the African elephant. Panel (B) shows the accumulated local effect plot (ALE), i.e. the change of the predicted occurrence probability in response to the Bioclim variable 8 (mean temperature of the wettest quarter).
Figure 2: Training a deep neural network using ’cito’. A shows how ‘cito’ is used to train a model with three hidden layers and B shows how the training can be continued for a trained model.
## Data Availability
The processed datasets for the species distribution model (African elephant) are available at Angelov, 2020. The 'cito' package can be downloaded from CRAN. Scripts to reproduce the analysis and the benchmark are available at Pichler and Amesoder, 2023.
## Authors' Contributions
C.A. implemented the 'cito' software with contributions by M.P. M.P. ran the experiments and analyzed the species distribution model for the present contribution. C.A. provided the first draft of the software note, and M.P. and F.H. helped improve it.
|
2307.07725 | Improving Translation Invariance in Convolutional Neural Networks with
Peripheral Prediction Padding | Zero padding is often used in convolutional neural networks to prevent the
feature map size from decreasing with each layer. However, recent studies have
shown that zero padding promotes encoding of absolute positional information,
which may adversely affect the performance of some tasks. In this work, a novel
padding method called Peripheral Prediction Padding (PP-Pad) method is
proposed, which enables end-to-end training of padding values suitable for each
task instead of zero padding. Moreover, novel metrics to quantitatively
evaluate the translation invariance of the model are presented. By evaluating
with these metrics, it was confirmed that the proposed method achieved higher
accuracy and translation invariance than the previous methods in a semantic
segmentation task. | Kensuke Mukai, Takao Yamanaka | 2023-07-15T06:44:34Z | http://arxiv.org/abs/2307.07725v1 | Improving translation invariance in convolutional neural networks with peripheral prediction padding
###### Abstract
Zero padding is often used in convolutional neural networks to prevent the feature map size from decreasing with each layer. However, recent studies have shown that zero padding promotes encoding of absolute positional information, which may adversely affect the performance of some tasks. In this work, a novel padding method called Peripheral Prediction Padding (PP-Pad) method is proposed, which enables end-to-end training of padding values suitable for each task instead of zero padding. Moreover, novel metrics to quantitatively evaluate the translation invariance of the model are presented. By evaluating with these metrics, it was confirmed that the proposed method achieved higher accuracy and translation invariance than the previous methods in a semantic segmentation task.
Kensuke Mukai and Takao Yamanaka Department of Information and Communication Sciences, Sophia University, Japan
Translation invariance, padding, CNN, semantic segmentation, positional information
## 1 Introduction
In recent years, some studies have focused on padding techniques, one of the most fundamental components in convolutional neural networks (CNNs). Although the padding changes only a few pixels at the edges of feature maps, it sometimes affects recognition accuracy in deep neural networks. For the padding, zero padding has been generally used due to its low computational cost and simplicity. However, recent studies suggest that the use of zero padding promotes the encoding of positional information in CNNs [1, 2, 3, 4, 5] and may hinder the acquisition of translation invariance [6, 7, 8].
Encoding positional information in a network helps achieve high accuracy for the tasks where positional information is useful for identification, such as object recognition in an image [9]. On the other hand, in some tasks such as semantic segmentation, it has been confirmed that biased estimation based on absolute positional information tends to underestimate objects at the edge of an image [10, 11, 12], or that blind spots appear where objects cannot be recognized [13]. Such a phenomenon becomes a problem when learning with datasets such as satellite images where positional information (position in the cropped image coordinates) has no information about the segmentation label, or when using it for practical applications such as autonomous driving.
The previous methods proposed to deal with these problems aim to improve accuracy by padding image edges with more natural distributions [8, 14, 15] or by preventing specific patterns from appearing at image edges [9]. They are based on the idea that an unnatural distribution at image edges due to zero padding encourages the CNN to encode positional information and adversely affects the performance of some tasks. However, these methods require designing an appropriate padding for each target task, and the improvement in accuracy has been limited.
In this paper, a novel padding method is proposed, called Peripheral Prediction Padding (PP-Pad). PP-Pad estimates the optimal padding values from the values of several neighbouring pixels through end-to-end training of the model. Moreover, novel evaluation metrics were defined to evaluate the translation invariance of CNN models with each padding method. Using these metrics, the proposed padding method was evaluated on estimation accuracy and translation invariance and compared with previous padding methods. In experiments on a semantic segmentation task, the proposed method achieved better recognition accuracy than the previous methods, with higher translation invariance.
## 2 Related Work
Some recent studies have conducted experiments on the effects of padding in convolutional neural networks [7, 14, 15, 16]. For the padding, zero padding has been commonly used, simply filling the edges of the image with 0. However, recent studies suggest that this zero padding facilitates the encoding of positional information in CNNs and can reduce translation invariance, which is one of the important properties of CNNs [6]. The conventional padding methods other than zero padding include replicate and reflect, which repeat values at the edge of the image. In addition, padding such as circular may be used in panorama images such as spherical images. Nguyen et al. have proposed a padding method that uses the mean and variance of surrounding patches and fills with values following a normal distribution [8]. Liu et al. have proposed a padding method that applies the image inpainting task using partial convolution [17, 18]. More recently, Huang et al. have proposed a model that treats padding as an image extrapolation problem and uses an image generation model to generate padding values [19]. In these methods, it has been considered that the padding values should have the same distribution as the pixel values at the edges of an image. On the other hand, PP-Pad does not regularize the model to have a padded-value distribution similar to the edges of an image. It provides more flexible and more useful padding values for the models by estimating the optimal padded values for the task.
## 3 Method
### Peripheral Prediction Padding (PP-Pad)
In the proposed method, the optimal padding values are learned in an end-to-end manner from the neighbouring \(h_{p}\times w_{p}\) pixels using convolutional layers with \(1\times w_{p}\) and \(1\times 1\) kernels, as shown in Fig. 1. When the input feature map \(H\times W\times C\) with zero padding is given, the \(h_{p}\times W\) region at the top edge except for the zero-padding region (orange square in the leftmost feature map in Fig. 1) is cropped to estimate optimal padding values at the top padding region (white pixels at the top edge). The cropped tensor \(h_{p}\times W\times C\) is rotated 90 degrees to place the face of \(C\times W\) in front, as shown in the 2nd feature map in Fig. 1, and is then processed by a convolutional layer with a \(1\times w_{p}\) kernel, followed by a ReLU (rectified linear unit) activation function. After two additional \(1\times 1\) convolutional layers with ReLUs are applied, the feature map of \(C\times(W-2)\times 1\) is rotated back to the original direction and used as the padding values at the top edge of the original feature map, except for the pixels at both ends, as shown in the rightmost feature map in Fig. 1. Similar models are prepared separately for the left, bottom, and right sides of the feature map to estimate the optimal padding values. Note that the four corners of the feature map are padded with 0 in the proposed PP-Pad to avoid recursive predictions. Although the padding model could be designed with normal convolutional layers without the feature-map rotation, the model size (the number of parameters) can be reduced by the proposed architecture. For example, if the padding model is designed with a normal convolution of a \(h_{p}\times w_{p}=2\times 3\) kernel and two \(1\times 1\) convolutional layers, the number of padding-model parameters is \((6Cn+n^{2}+Cn)\), where \(n\) is the number of intermediate channels. In contrast, the proposed method is implemented with \((6n+n^{2}+n)\) parameters, so that \(7n(C-1)\) parameters are saved for each convolutional layer.
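For illustration, a hedged PyTorch sketch of the PP-Pad module for the top edge is given below; the framework choice, the class name, and the placement of the final activation are assumptions, not taken from the paper. Bias terms are ignored in the parameter count.

```python
import torch
import torch.nn as nn

class PPPadTop(nn.Module):
    """Sketch of PP-Pad for the top edge only.

    Channels are rotated onto a spatial axis so that the 1 x w_p and
    1 x 1 convolutions share their weights across all C channels,
    matching the (6n + n^2 + n) weight count for h_p x w_p = 2 x 3
    (ignoring biases). Names are illustrative.
    """
    def __init__(self, h_p=2, w_p=3, n=16):
        super().__init__()
        self.h_p = h_p
        self.conv1 = nn.Conv2d(h_p, n, kernel_size=(1, w_p))  # 1 x w_p conv
        self.conv2 = nn.Conv2d(n, n, kernel_size=1)           # 1 x 1 conv
        self.conv3 = nn.Conv2d(n, 1, kernel_size=1)           # 1 x 1 conv
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (B, C, H, W) feature map without padding
        top = x[:, :, :self.h_p, :]          # (B, C, h_p, W) top region
        t = top.permute(0, 2, 1, 3)          # rotate: (B, h_p, C, W)
        t = self.act(self.conv1(t))          # (B, n, C, W - w_p + 1)
        t = self.act(self.conv2(t))
        t = self.conv3(t)                    # (B, 1, C, W - w_p + 1)
        return t.permute(0, 2, 1, 3)         # back to (B, C, 1, W - w_p + 1)
```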
### Evaluation Metric for Translation Invariance
In this work, the property of translation invariance for the proposed model was evaluated using a semantic segmentation task. Since there is no standard metric to measure the translation invariance, novel metrics were designed. As shown in Fig. 2, a patch is cropped from an image to estimate a class for each pixel of the patch using a semantic segmentation model with a padding method. Then, the patches are cropped in a sliding-window manner to estimate the class for each pixel. Since the cropped patches are overlapping, multiple class labels are obtained for each pixel in the original image. If the semantic segmentation model has the property of the translation invariance, these class labels are same for each pixel in the original image. Although the CNN-based model is thought to have translation invariance, it has been observed that the padding in the model provides positional information to better predict a class depending on the position in an image [12]. Therefore, the cropped patches sometimes provide different class labels for each pixel due to the padding. In this work, the translation invariance is measured by calculating the degree of coincidence over the predicted classes for each pixel obtained from the overlapping cropped patches. Specifically, this degree of coincidence is measured by entropy in Eq. 1.
\[e=-\sum\nolimits_{k=0}^{K-1}p_{k}\log_{2}p_{k} \tag{1}\]
\(K\) is the number of classes, and \(p_{k}\) is the probability of classifying a pixel into the \(k\)-th class, which is obtained from the histogram of the predicted classes for each pixel estimated from the multiple patches. The entropy \(e\) takes a lower value for a higher degree of coincidence. For example, \(e\) is 0 if the predicted classes are the same for all overlapping patches. Based on the entropy, two metrics for measuring translation invariance are defined.
\[meanE=\frac{1}{N}\sum_{n=0}^{N-1}e_{n} \tag{2}\] \[disR=\frac{1}{N}\sum_{n=0}^{N-1}f_{0}(e_{n}) \tag{3}\]
\(e_{n}\) represents the entropy for the \(n\)-th pixel, and \(N\) is the total number of pixels in all images used for evaluation. Thus, _meanE_ (mean entropy) represents the average entropy over all pixels. \(f_{\theta}(x)\) is a function which takes 1 for \(x>\theta\) and 0 for \(x\leq\theta\). Thus, _disR_ (disagreement rate) represents the fraction of pixels where the predicted classes are not the same across overlapping patches. Both metrics in Eq. 2 and Eq. 3 therefore take lower values for a higher degree of coincidence, i.e., stronger translation invariance, and were used for the evaluation in the experiments.
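As a reference implementation of Eqs. (1)-(3), the following NumPy sketch computes _meanE_ and _disR_ from the class votes collected over overlapping patches; the function name and the simplifying assumption that every pixel receives the same number of votes are ours.

```python
import numpy as np

def translation_invariance_metrics(votes, num_classes, theta=0.0):
    """votes: (P, N) integer array; votes[p, n] is the class predicted for
    pixel n by the p-th overlapping patch covering it (for brevity, every
    pixel is assumed to be covered by the same number P of patches)."""
    P, N = votes.shape
    # Per-pixel histogram of predicted classes over the P overlapping patches.
    hist = np.stack([(votes == k).sum(axis=0) for k in range(num_classes)])
    p = hist / P                                          # (K, N) probabilities
    logp = np.where(p > 0, np.log2(np.clip(p, 1e-12, 1.0)), 0.0)
    e = -(p * logp).sum(axis=0)                           # Eq. (1): per-pixel entropy
    meanE = e.mean()                                      # Eq. (2)
    disR = (e > theta).mean()                             # Eq. (3), f_theta with theta = 0
    return meanE, disR
```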
## 4 Experimental Setup
### Dataset and Metrics
The dataset used in the experiments is the PASCAL VOC 2012 dataset [20]. This dataset has 21 classes covering people, animals, vehicles, and indoor objects, and contains data for classification, detection, and segmentation. In the experiments, the segmentation subset with 1464 training and 1449 validation images was used. During validation, patches were cropped from each image in a sliding-window manner. Using all the validation images would have produced too many patches to feed to the model, which was computationally prohibitive. Therefore, 100 images out of the 1449 validation images were randomly selected and used as validation data.
The performance was evaluated using three metrics: _meanE_, _disR_, and mIoU (mean Intersection over Union) [21]. _meanE_ and _disR_ evaluate the translation invariance of the model with each padding method, as previously described. mIoU was used for evaluating accuracy in the semantic segmentation task and was calculated over all the patches obtained from the sliding window.
### Training and Implementation Details
Unlike the previous methods [8, 17], the proposed PP-Pad does not require training a separate network to estimate padding values. In the experiments, Pyramid Scene Parsing Network (PSPNet) [21], an FCN model for semantic segmentation, was used as the base model. The model was initialized with weight parameters trained on the ADE20K dataset [22] and then trained on the PASCAL VOC 2012 dataset using zero padding for 160 epochs. This pre-training with the computationally efficient zero padding aimed to reduce the overall training time. After that, each padding method was applied to the model, and the model was trained again for 160 epochs. Following [21], the "poly" learning rate policy [23] was used, where the current learning rate equals the base rate multiplied by \(\left(1-\frac{epoch}{maxepoch}\right)^{power}\). The base learning rate was set to 0.01 with \(power=0.9\). The momentum and the weight decay were set to 0.9 and 0.0001, respectively. Due to the limited GPU memory size, the batch size was set to 12 during training.
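The "poly" policy above can be realized, for instance, with PyTorch's `LambdaLR`; the snippet below is a hypothetical sketch in which `model` and `train_one_epoch` are placeholders.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.0001)
max_epoch, power = 160, 0.9
# "poly" policy: lr = base_lr * (1 - epoch / max_epoch) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: (1.0 - epoch / max_epoch) ** power)

for epoch in range(max_epoch):
    train_one_epoch(model, optimizer)  # placeholder for the actual training loop
    scheduler.step()
```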
All images in the dataset were resized to 1050 pixels on the short side. During training, patches of \(475\times 475\) pixels were randomly cropped and then randomly flipped and rotated for data augmentation. When evaluating translation invariance and estimation accuracy, after resizing the image to 1050 pixels on the short side, patches of \(475\times 475\) pixels were cropped in a sliding-window manner with a stride of 47 pixels. Since the same object can appear at the edges or at the center of a patch cropped from an image due to the sliding window, positional information is not useful for predicting the class of each pixel in this semantic segmentation task.
PP-Pad was implemented with \(h_{p}=2\), 3, 5, 10, \(w_{p}=3\), and \(n=8\). The size \(h_{p}\times w_{p}\) represents the reference region used for calculating a padding value in PP-Pad. Note that the padding values were calculated in a channel-wise manner with the same convolutional filter shared across channels (Fig. 1).
## 5 Experimental Results
The model was evaluated using three metrics: mIoU, _meanE_, and _disR_. While mIoU measures recognition accuracy, _meanE_ and _disR_ are the two proposed measures of translation invariance. The results of the proposed method were compared with other padding methods, including the previous learned padding methods CAP (Context-Aware Padding) [19] and Partial (Partial Padding) [17], as shown in Table 1. Both CAP and Partial aim to pad the edges of an image with values from a natural distribution. As can be seen from the table, the proposed method achieved high recognition accuracy in mIoU with better translation invariance in _meanE_ and _disR_. This is likely because PP-Pad learns to produce optimal padding values for the trained task. On the other hand, padding methods such as reflect, replicate, and circular tended to have poor recognition accuracy and translation invariance. The recognition accuracies of the previous methods, CAP and Partial, were comparable to zero padding, but there was no improvement in translation invariance. This indicates that CAP and Partial predict different classes for pixels at the edges of a patch than for those at the center of a patch, similar to zero padding, which is known to encode positional information [12]. For the proposed method, PP-Pad with the reference region of
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & Methods & mIoU \(\uparrow\) & meanE \(\downarrow\) & disR \(\downarrow\) \\ \hline \multirow{4}{*}{Previous} & Zero & 0.3323 & 1.8811 & 0.6482 \\ & Reflect & 0.3040 & 32.8108 & 0.8374 \\ & Replicate & 0.3059 & 1.8945 & 0.6545 \\ & Circular & 0.3112 & 1.8413 & 0.6344 \\ \hline \multirow{2}{*}{Previous} & CAP [19] & 0.3380 & 1.8764 & 0.6429 \\ & Partial [17] & 0.3324 & 2.0035 & 0.6846 \\ \hline \multirow{4}{*}{Proposed} & PP-Pad (\(2\times 3\)) & 0.3352 & **1.7995** & **0.6102** \\ & PP-Pad (\(3\times 3\)) & 0.3472 & 1.8787 & 0.6465 \\ & PP-Pad (\(5\times 3\)) & 0.3380 & 1.8568 & 0.6321 \\ & PP-Pad (\(10\times 3\)) & **0.3486** & 1.8334 & 0.6322 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison in recognition accuracy (mIoU) and translation invariance (_disR_, _meanE_)
2\(\times\)3 showed the best translation invariance with recognition accuracy comparable to zero padding and the previous methods. PP-Pad with the 10\(\times\)3 reference region achieved the best recognition accuracy with slightly lower translation invariance than PP-Pad (2\(\times\)3). The reason for the high recognition accuracy of PP-Pad (10\(\times\)3) is likely that more accurate padding values can be predicted from a larger area of the image.
Since zero padding fills the edges of an image with zero values, the convolutional filters can recognize the edges by learning a "boundary detector" that detects zero patterns at the edges. In addition, the values at the edges of the feature map are affected by the zero values in every convolutional process. Therefore, the region affected by zero padding propagates toward the center of the patch as the convolutional layers become deeper. Thus, zero padding would encode positional information that promotes recognition accuracy, which in turn decreases the performance on translation invariance.
An example of the results on this semantic segmentation task is shown in Fig. 3. Although the original image includes almost the whole body of a dog, the cropped patch indicated by the red frame in the ground-truth (GT) image includes only a part of the dog. Since the shape of the dog in the cropped patch is similar to a bird, some pixels in the patches were recognized as the bird class (yellow) by CAP, Partial, and Zero, while PP-Pad accurately classified all the pixels as the dog class (purple) in this example.
Since it is important to compare computational costs during training and inference, Table 2 shows the computational costs of zero padding, the previous methods, and the proposed methods. From the table, the inference overhead of the proposed method is relatively small, although training required almost three times the training time of the other methods. Note that CAP requires additional pre-training of the padding model to obtain padding values. Since PP-Pad learns the padding model to produce optimal padding values in an end-to-end manner, it takes more time to train. However, PP-Pad can use a pre-training procedure with zero padding for computational efficiency, so the time required to obtain the final model can be reduced.
## 6 Conclusions
In this paper, a novel padding method, PP-Pad, was proposed to improve translation invariance in convolutional neural networks. In PP-Pad, the padding model is learned in end-to-end training to produce optimal padding values. To evaluate the model, two metrics measuring translation invariance were defined on a semantic segmentation task. The experiments confirmed that PP-Pad achieved high recognition accuracy with better translation invariance in the semantic segmentation task.
\begin{table}
\begin{tabular}{l l c c} \hline \hline & Methods & Training on GPU [s/10iter] & Inference on CPU [s/image] \\ \hline \multirow{3}{*}{Previous} & Zero & 2.45 & 0.606 \\ & CAP [19]\({}^{*}\) & 2.98 & 0.710 \\ & Partial [17] & 2.98 & 0.646 \\ \hline \multirow{2}{*}{Proposed} & PP-Pad (2\(\times\)3) & 7.26 & 0.755 \\ & PP-Pad (10\(\times\)3) & 7.35 & 0.802 \\ \hline \hline \end{tabular}
\({}^{*}\) CAP [19] requires additional pre-training.
\end{table}
Table 2: Comparison of computational cost during training and inference
Figure 3: Example of prediction results in semantic segmentation for sliding-window patches
| 2306.02560 | Tensorized Hypergraph Neural Networks | Hypergraph neural networks (HGNN) have recently become attractive and received significant attention due to their excellent performance in various domains. However, most existing HGNNs rely on first-order approximations of hypergraph connectivity patterns, which ignores important high-order information. To address this issue, we propose a novel adjacency-tensor-based **T**ensorized **H**ypergraph **N**eural **N**etwork (THNN). THNN is a faithful hypergraph modeling framework through high-order outer product feature message passing and is a natural tensor extension of the adjacency-matrix-based graph neural networks. The proposed THNN is equivalent to a high-order polynomial regression scheme, which enables THNN to efficiently extract high-order information from uniform hypergraphs. Moreover, in consideration of the exponential complexity of directly processing high-order outer product features, we propose using a partially symmetric CP decomposition approach to reduce model complexity to a linear degree. Additionally, we propose two simple yet effective extensions of our method for non-uniform hypergraphs commonly found in real-world applications. Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance. | Maolin Wang, Yaoming Zhen, Yu Pan, Yao Zhao, Chenyi Zhuang, Zenglin Xu, Ruocheng Guo, Xiangyu Zhao | 2023-06-05T03:26:06Z | http://arxiv.org/abs/2306.02560v2 |

# Tensorized Hypergraph Neural Networks
###### Abstract
Hypergraph neural networks (HGNN) have recently become attractive and received significant attention due to their excellent performance in various domains. However, most existing HGNNs rely on first-order approximations of hypergraph connectivity patterns, which ignores important high-order information. To address this issue, we propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN). THNN is a faithful hypergraph modeling framework through high-order outer product feature message passing and is a natural tensor extension of the adjacency-matrix-based graph neural networks. The proposed THNN is equivalent to a high-order polynomial regression scheme, which enables THNN to efficiently extract high-order information from uniform hypergraphs. Moreover, in consideration of the exponential complexity of directly processing high-order outer product features, we propose using a partially symmetric CP decomposition approach to reduce model complexity to a linear degree. Additionally, we propose two simple yet effective extensions of our method for non-uniform hypergraphs commonly found in real-world applications. Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the promising performance of the proposed THNN.
Hypergraph, graph neural networks, tensor neural networks, tensor decomposition
## I Introduction
The rapid development of graph neural networks (GNNs, [1, 2, 3]) greatly benefits various crucial research areas due to their extraordinary performance. Generally, a conventional GNN only allows objects to have pairwise interaction. However, in many real-world applications, the interactions among objects can go beyond pairwise interactions and involve higher-order relationships. For example, in brain connectivity networks [4, 5], multiple brain regions often work together in a neurological manner to accomplish certain functional tasks. To faithfully characterize such connections, pairwise modeling in graph structure is inadequate, and it is necessary to incorporate high-order interacting information across brain regions. To articulate the correlation among multiple regions, a hypergraph structure [5, 6, 7] can be created with each vertex as a brain region and each hyperedge representing the interactions among several brain regions.
As discussed in [8], there is a clear difference between the pairwise relationship and the high-order relationship of multiple objects. The capacity of graph structures is limited as they can only describe pairwise relationships. Compared to graphs, a hypergraph provides significant advantages in modeling the high-order relationships among multiple objects in real-world data [8]. For example, in the case of multi-agent trajectory prediction [9], adopting a multiscale hypergraph can extract the interactions among groups of varying sizes and performs much better than prior graph-based methods, which can only describe pairwise interactions. Hypergraphs have also recently been widely used in a variety of other data mining tasks such as node classification [10], link prediction [11], community detection [12], retrieval [13], multi-label classification [14, 15], 3D object classification [16, 17], tracking [18], point cloud matching [19], and clustering [7].
In these applications, the majority of hypergraph neural network architectures are based on the Chebyshev formula for hypergraph Laplacians proposed by HGNN [17]. These neural hypergraph operators can be seen as constructing a weighted graph and can thus utilize off-the-shelf graph learning models (e.g., GCN). **However, most of these methods are incapable of learning higher-order information since they only make use of the first-order approximation**, e.g., the clique expansion [7] of a hypergraph. In order to better characterize high-order information in hypergraphs, it is natural to model the high-order interaction information of a hypergraph through a high-order representation (such as the outer product). Some studies [12, 20, 21] have revealed the great success of tensor representations (such as the adjacency tensor [12]) in hypergraph modeling. **However, a general tensor-based hypergraph neural network, which conducts a high-order information message passing procedure, has not yet been developed.**
Motivated by the two aforementioned observations, in this
Fig. 1: An example of Graphs and Hypergraphs. A graph is typically represented by an adjacency matrix while a hypergraph is always described by an incidence matrix.
work, we propose a tensor based **T**ensorized **H**ypergraph **N**eural **N**etwork (THNN) to extend the adjacency matrix based graph neural networks into an adjacency tensor based framework. THNN has high expressiveness due to its intrinsic similarity to a high-order outer product feature aggregation scheme [22, 23, 24], which can capture intra-feature and inter-feature dynamics in multilinear interaction information modeling. In other words, the intrinsic multilinear mathematical architecture of THNN is effective and natural in modeling high-order information, resulting in a more accurate extraction of high-order interactions.
Furthermore, because adjacency tensors can only be used to represent uniform hypergraphs, the straightforward THNN is incapable of handling the widely existing non-uniform hypergraphs. Therefore, we propose two novel solutions: (1) adding a global node and (2) multi-uniform processing. To evaluate the performance of the proposed THNN framework, experiments on two 3-D visual object recognition datasets are performed. The experimental results show that the proposed THNN model achieves state-of-the-art performance. In summary, our major contributions are as follows:
* We propose, to the best of our knowledge, the first hypergraph neural network based on adjacency tensor that comes with a message passing mechanism capturing high-order interactions in hypergraphs. Previous hypergraph neural networks are mostly based on the first-order approximation in higher-order learning.
* Given the fact that the naive outer product based model of high-order information suffers from exponential time/space complexity, we propose to utilize partially symmetric CP decomposition to reduce time/space complexity from exponential to linear.
* To handle non-uniform hypergraphs, we propose two simple yet effective solutions, i.e., adding a global node and multi-uniform processing, to overcome the limitation that adjacency-tensor-based methods, including the straightforward THNN, can only model and process uniform hypergraphs.
* We compare the proposed THNN with the state-of-the-art baselines on two widely used hypergraph datasets for 3-D visual object classification under both uniform and non-uniform settings. Empirical results show that the proposed THNN achieves state-of-the-art performance. We also conducted comprehensive experiments for hyperparameter analysis to provide insights into the behavior of the proposed model.
## II Preliminaries and Background
### _Graph and Hypergraph_
A graph can be denoted by \(G=(V,E)\), where \(V\) is the set of vertices and \(E\) is a set of paired vertices, or edges. A graph can be represented by its adjacency matrix \(\mathbf{A}\in\{0,1\}^{|V|\times|V|}\), where \(|\cdot|\) denotes the set cardinality. The entries of the matrix \(\mathbf{A}\) indicate whether two vertices in the graph are adjacent. More specifically, \(\mathbf{A}_{i,j}=1\) if \(\{v_{i},v_{j}\}\in E\) and 0 otherwise.
A hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a generalization of a graph in which any number of vertices can be joined in one edge. \(\mathcal{V}\) is the set of vertices, and \(\mathcal{E}\) is a set of vertex sets, a.k.a. hyperedges. An (undirected) hypergraph is always described by an incidence matrix \(\mathbf{H}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{E}|}\), where \(|\mathcal{V}|\) is the number of vertices and \(|\mathcal{E}|\) is the number of hyperedges. Specifically, \(\mathbf{H}_{i,j}=1\) if \(v_{i}\in e_{j}\) and 0 otherwise. An illustration of graphs, hypergraphs, adjacency matrices, and incidence matrices is shown in Figure 1.
Hypergraphs can be approximated by graphs via their clique expansion [7]. As shown in Figure 3, the clique expansion approximates the original hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) via a graph \(G_{clique}=(\mathcal{V},E_{clique})\), which reduces each hyperedge \(e\in\mathcal{E}\) to a clique in \(G_{clique}\). However, the clique expansion leads to information loss [25]: the original hypergraph cannot be recovered from the adjacency matrix of the clique expansion, as hyper-dependencies and high-order relationships collapse into pairwise ones [25].
Another important way to represent hypergraphs is by the **Adjacency Tensor**[12]. As shown in Figure 2, an adjacency tensor can represent a uniform hypergraph, in which all hyperedges share the same size. An \(m\)-uniform hypergraph is one in which all hyperedges have size \(m\). For an \(m\)-uniform hypergraph, the adjacency tensor is defined as the \(m\)-order tensor \(\mathcal{A}\in\{0,1\}^{n\times\ldots\times n}\) with the entry \(\mathcal{A}_{i_{1},\ldots,i_{m}}=1\) if \(\{v_{i_{1}},\ldots,v_{i_{m}}\}\in\mathcal{E}\) and 0 otherwise.
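For illustration, the adjacency tensor of a small 3-uniform hypergraph can be built as follows (our own sketch; since hyperedges are unordered sets, all index permutations are set to 1, matching the symmetric tensor of Figure 2).

```python
import itertools
import numpy as np

def adjacency_tensor(num_nodes, hyperedges, m=3):
    """Adjacency tensor of an m-uniform hypergraph: every permutation of the
    indices of each hyperedge {v_i1, ..., v_im} is set to 1."""
    A = np.zeros((num_nodes,) * m, dtype=np.int8)
    for edge in hyperedges:                      # e.g. (0, 1, 2)
        assert len(edge) == m, "hypergraph must be m-uniform"
        for perm in itertools.permutations(edge):
            A[perm] = 1
    return A

A = adjacency_tensor(7, [(0, 1, 2), (1, 3, 4), (2, 5, 6)])
```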
### _Tensor Contraction_
Tensor contraction [26] means that two tensors are contracted into one tensor along their associated pairs of indices. Given two tensors \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{N}}\) and \(\mathcal{B}\in\mathbb{R}^{J_{1}\times J_{2}\times\cdots\times J_{M}}\) with some common modes, \(I_{n_{1}}=J_{m_{1}},\ \cdots,\ I_{n_{S}}=J_{m_{S}}\), the tensor contraction \(\mathcal{A}\times_{(m_{1},m_{2},\cdots,m_{S})}^{(n_{1},n_{2},\cdots,n_{S})}\mathcal{B}\) yields an \((N+M-2S)\)-order tensor \(\mathcal{C}\). Tensor contraction can be formulated as:
\[\mathcal{C}=\mathcal{A}\times_{(m_{1},m_{2},\cdots,m_{S})}^{(n_{1},n_{2},\cdots,n_{S})}\mathcal{B},\qquad\mathcal{C}_{\ldots}=\sum_{i_{n_{1}},\cdots,i_{n_{S}}}\mathcal{A}_{i_{1},i_{2},\cdots,i_{N}}\,\mathcal{B}_{j_{1},j_{2},\cdots,j_{M}}\Big{|}_{j_{m_{s}}=i_{n_{s}},\,s=1,\ldots,S} \tag{1}\]

where the entries of \(\mathcal{C}\) are indexed by the remaining (uncontracted) indices of \(\mathcal{A}\) and \(\mathcal{B}\).
Fig. 2: An example of adjacency tensor of a 3-uniform hypergraph. In this example, the adjacency tensor of hypergraph \(\mathcal{G}\) is defined as the \(3\)-order tensor \(\mathcal{A}\in\{0,1\}^{7\times 7\times 7}\) with the entry \(\mathcal{A}_{v_{i},v_{j},v_{k}}=1\) if \(\{v_{i},v_{j},v_{k}\}\in\mathcal{E}\) and 0 otherwise.
For example, if \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}\times I_{4}\times I_{5}}\), \(\mathcal{B}\in\mathbb{R}^{J_{1}\times J_{2}\times J_{3}\times J_{4}}\), \(N=5\), \(M=4\), \(S=2\), \(n_{1}=3\), \(n_{2}=5\), \(m_{1}=3\), and \(m_{2}=2\), then the entries of \(\mathcal{C}=\mathcal{A}\times_{(3,2)}^{(3,5)}\mathcal{B}\) are
\[\mathcal{C}_{i_{1},i_{2},i_{4},j_{1},j_{4}}=\sum_{i_{3},i_{5}} \mathcal{A}_{i_{1},i_{2},i_{3},i_{4},i_{5}}\mathcal{B}_{j_{1},i_{5},i_{3},j_{4 }}. \tag{2}\]
The well-known mode-\(n\) product is a special case of tensor contraction. Given a tensor \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{N}}\) and a matrix \(\mathbf{B}\in\mathbb{R}^{J_{1}\times J_{2}}\) with \(J_{2}=I_{n}\), we have
\[\mathcal{C}=\mathcal{A}\times_{(2)}^{(n)}\mathbf{B}=\mathcal{A} \times_{n}\mathbf{B}. \tag{3}\]
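Both Eq. (2) and the mode-\(n\) product in Eq. (3) map directly onto `einsum`; the following sketch (with arbitrarily chosen shapes) verifies the resulting dimensions.

```python
import numpy as np

A = np.random.rand(2, 3, 4, 5, 6)   # I1..I5
B = np.random.rand(7, 6, 4, 8)      # J1..J4 with J2 = I5 and J3 = I3
# Eq. (2): contract (mode 3 of A, mode 3 of B) and (mode 5 of A, mode 2 of B)
C = np.einsum("abcde,fecg->abdfg", A, B)
print(C.shape)                      # (2, 3, 5, 7, 8) = (I1, I2, I4, J1, J4)

# Mode-n product of Eq. (3) for n = 2: B has shape (J1, J2) with J2 = I2
M = np.random.rand(9, 3)
C2 = np.einsum("abcde,fb->afcde", A, M)  # replaces mode 2 (size 3) by size 9
```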
### _Graph Convolutional Neural Networks_
Pairwise message passing is the most important component of the widely used Graph Neural Networks (GNNs). It ensures that nodes in a graph are able to repeatedly improve their representations by exchanging information with their neighbors. One of the most classical designs of GNNs is the Graph Convolutional Network (GCN) [2]. Given the adjacency matrix \(\mathbf{A}\) of graph \(G\), a GCN layer can be represented as
\[\mathbf{X}^{(l+1)}=\sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2} }\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{X}^{(l)}\mathbf{ \Theta}^{(l)}\right), \tag{4}\]
with \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix and \(\tilde{\mathbf{D}}\) is the diagonal node degree matrix of \(\tilde{\mathbf{A}}\) where \(\tilde{\mathbf{D}}_{ii}=\sum_{j}\tilde{\mathbf{A}}_{ij}\). \(\mathbf{\Theta}^{(l)}\) is the learnable parameters in the \(l\)-th layer. \(\mathbf{X}^{(l)}\) is the feature matrix in the \(l\)-th layer. By analyzing the computation of Eq. (4), we can represent the embedding of node \(v_{i}\) in \(l+1\)-th layer \(x_{v_{i}}^{(l+1)}\) with an aggregation function [27] as the following:
\[x_{v_{i}}^{(l+1)}=\sigma\bigg{(}\sum_{v_{j}\in N_{i}}\frac{1}{ \sqrt{d_{i}}\sqrt{d_{j}}}x_{v_{j}}^{(l)}\mathbf{\Theta}^{(l)}\bigg{)}, \tag{5}\]
where \(N_{i}\) is the set of neighbors of node \(v_{i}\) and \(d_{i}=\mathbf{\tilde{D}}_{ii}\). We observe that pairwise message passing in GCN is achieved via a weighted sum of neighbor embeddings, where each weight is normalized by the degrees of the node pair.
### _CP Decomposition_
CANDECOMP/PARAFAC (CP) Decomposition [28] factorizes a higher-order tensor into a sum of several rank-1 tensor components. For instance, given an order-\(N\) tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\ldots I_{N}}\), each of its elements can be written in CP form as:
\[x_{i_{1},i_{2},\ldots,i_{N}}\approx\sum_{r=1}^{R}\prod_{n=1}^{N }a_{i_{n},r}^{(n)},\] \[\mathcal{X}\approx\mathcal{I}\times_{1}\mathbf{A}^{(1)}\times_{2} \mathbf{A}^{(2)}\cdots\times_{N}\mathbf{A}^{(N)}, \tag{6}\]
where \(R\) denotes the rank, \(\mathcal{I}\in\mathbb{R}^{R\times R\cdots\times R}\) is the identity tensor whose diagonal elements \(\mathcal{I}_{i,i\ldots i}=1\), and \(\mathbf{A}^{(1)},...,\mathbf{A}^{(N)}\) with \(\mathbf{A}^{(n)}\in\mathbb{R}^{I_{n}\times R}\) denote a series of factor matrices.
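For a third-order tensor, Eq. (6) amounts to a single `einsum` over the factor matrices; a compact illustrative sketch:

```python
import numpy as np

I1, I2, I3, R = 4, 5, 6, 3
A1, A2, A3 = (np.random.rand(I, R) for I in (I1, I2, I3))
# Eq. (6): X[i, j, k] ~= sum_r A1[i, r] * A2[j, r] * A3[k, r]
X = np.einsum("ir,jr,kr->ijk", A1, A2, A3)
print(X.shape)  # (4, 5, 6)
```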
## III Methods
In this section, we first analyze the widely adopted hypergraph neural network, HGNN [17], which uses first-order information for hypergraph representation learning. Next, we propose and analyze the tensorized hypergraph neural network based on adjacency tensors. Since the straightforward THNN cannot handle the more common non-uniform hypergraphs, we introduce two simple yet effective solutions: global node adding and multi-uniform processing.
### _Analysis of Hypergraph Neural Networks_
Feng et al. [17] developed the classical Hypergraph Neural Network, which uses a truncated Chebyshev formula for hypergraph Laplacians. Given the incidence matrix \(\mathbf{H}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{E}|}\) of the hypergraph \(\mathcal{G}\), its operator can be written as
\[\mathbf{X}^{(l+1)}=\sigma\left(\mathbf{D}_{(v)}^{-1/2}\mathbf{H} \mathbf{W}\mathbf{D}_{(e)}^{-1}\mathbf{H}^{\top}\mathbf{D}_{(v)}^{-1/2} \mathbf{X}^{(l)}\mathbf{\Theta}^{(l)}\right), \tag{7}\]
where \(\mathbf{W}\in\mathbb{R}^{|\mathcal{E}|\times|\mathcal{E}|}\) is a diagonal matrix to be learned and \(\mathbf{\Theta}^{(l)}\) is a learnable matrix in layer \(l\). \(\mathbf{D}_{(v)ii}=\sum_{j=1}^{|\mathcal{E}|}W_{jj}H_{ij}\) and \(\mathbf{D}_{(e)jj}=\sum_{i=1}^{|\mathcal{V}|}H_{ij}\) are diagonal degree matrices of vertices and edges, respectively. These methods can be viewed as approximating the hypergraph with a clique-expansion graph in which all the edges within the same clique share the same learnable weight. Each hyperedge of size \(s\) is approximated by a weighted \(s\)-clique. By analyzing the computation of Eq. (7), as shown in Figure 5, we denote the embedding of node \(v_{k}\) in the \((l+1)\)-th layer by \(x_{v_{k}}^{(l+1)}\), which is computed by the following aggregation function
\[x_{v_{k}}^{(l+1)}=\sigma\bigg{(}\frac{1}{\sqrt{d_{v_{k}}}}\sum_{e_{j},v_{k}\in e _{j}}\frac{1}{d_{e_{j}}}\mathbf{W}_{jj}\sum_{v_{i},v_{i}\in e_{j}}\frac{1}{ \sqrt{d_{v_{i}}}}x_{v_{i}}^{l}\mathbf{\Theta}^{(l)}\bigg{)}. \tag{8}\]
Fig. 4: Illustration of concatenating 1 [22]. Every circle corresponds to an element in a vector or tensor and \(\circ\) indicates the outer product. We can concatenate a 1 to each vector, and then the outer product of vectors will help to introduce the lower order dynamics.
Fig. 3: Illustration of Clique Expansion of a hypergraph. Clique expansion is important for the processing and utilization of hypergraphs. However, according to the adjacency matrix of clique expansion, the original hypergraph cannot be fully retrieved, showing that some information is lost during the expansion procedure.
This approach tackles information aggregation by a weighted summation of the linearly processed (via \(\mathbf{\Theta}^{(l)}\)) node embeddings of neighbors in the weighted clique expansion graph. However, it is insufficient for higher-order information extraction as only the first-order linear information is considered in Eq. (8).
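For reference, Eq. (7) can be written in a few lines of dense PyTorch; in this sketch (variable names ours) the activation \(\sigma\) is taken to be ReLU.

```python
import torch

def hgnn_layer(X, H, w_e, Theta):
    """One HGNN layer, Eq. (7), with dense tensors. X: (|V|, d_in) features,
    H: (|V|, |E|) incidence matrix, w_e: (|E|,) learnable hyperedge weights,
    Theta: (d_in, d_out) learnable projection."""
    Dv = (H * w_e).sum(dim=1)                     # vertex degrees, D_(v)ii
    De = H.sum(dim=0)                             # hyperedge degrees, D_(e)jj
    dv = Dv.clamp(min=1e-12).rsqrt()              # diagonal of D_v^{-1/2}
    S = (dv[:, None] * H) * (w_e / De)[None, :]   # D_v^{-1/2} H W D_e^{-1}
    S = S @ (H.t() * dv[None, :])                 # ... H^T D_v^{-1/2}
    return torch.relu(S @ X @ Theta)
```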
### _Tensorized Hypergraph Neural Network_
Eq. (8) reveals that classical Hypergraph Neural Networks approximate high-order information via a first-order summation. However, higher-order information better characterizes co-occurrence relationships in hypergraphs.
For a node in a hypergraph, the most intuitive way to characterize the influence of the other nodes in the same hyperedge on its high-order interaction information is to use the outer product pooling [22] of the feature vectors of its neighbors. For example, for a node \(v_{i}\) of a hyperedge \(\{v_{i},v_{j},v_{k}\}\in\mathcal{E}\) in a third-order hypergraph \(\mathcal{G}\), the message of the hyperedge \(\{v_{i},v_{j},v_{k}\}\) to node \(v_{i}\) is \(x_{v_{j}}\circ x_{v_{k}}\in\mathbb{R}^{I_{in}\times I_{in}}\). Similar to other graph neural networks, a trainable weight tensor can be used to process and align features, followed by aggregation over all hyperedges. The information aggregation for the embedding of node \(v_{i}\) can then be represented as follows:
\[x_{v_{i}}^{(l+1)}=\sum_{(j,k)\in N_{i}}\left(x_{v_{j}}^{(l)}\circ x_{v_{k}}^{(l)}\right)\times_{(1,2)}^{(1,2)}\mathcal{W}, \tag{9}\]
where \(\mathcal{W}\in\mathbb{R}^{I_{in}\times I_{in}\times I_{out}}\) is the weight tensor, \(N_{i}\) is the set of neighbor pairs of the node \(v_{i}\), and \(\circ\) indicates the outer product. Eq. (9) formulates the basic framework of a hypergraph neural network that directly utilizes high-order polynomial information. Such a formulation is also reminiscent of polynomial regression schemes [29, 30], which are well-recognized techniques for extracting high-order interaction information (e.g., in multi-modality analysis [22, 23, 24] and quantum data processing [31]).
Similar to the representation equivalence between the adjacency-matrix formulation in Eq. (4) and the aggregation scheme in Eq. (5), the adjacency tensor can also be used to describe Eq. (9). The outer product feature aggregation can be reformulated as
\[\mathbf{X}^{(l+1)}=\sigma\left((\mathcal{A}\times_{2}\mathbf{X}^{(l)}\times_{ 3}\mathbf{X}^{(l)})\times^{(2,3)}_{(1,2)}\mathcal{W}\right), \tag{10}\]
Similar to GCN, a simple hypergraph convolution layer without feature normalization can cause numerical instabilities, because directly applying the convolution changes the scale of the feature vectors. Consequently, an appropriate degree normalization is needed [2]. Similar to the degree-normalized adjacency matrix in Eq. (4), we adopt the well-known normalized adjacency tensor extension [32],
\[\tilde{\mathcal{A}}_{i_{1}\ldots i_{k}}=\begin{cases}\frac{1}{(k-1)!}\prod_{1\leqslant j\leqslant k}\frac{1}{\sqrt[k]{d_{i_{j}}}}&\text{if }\{v_{i_{1}},\ldots,v_{i_{k}}\}\in\mathcal{E}\\ 0&\text{otherwise}\end{cases}. \tag{11}\]
Thus, Eq. (10) can then be reformulated as
\[\mathbf{X}^{(l+1)}=\sigma\left((\tilde{\mathcal{A}}\times_{2}\mathbf{X}^{(l) }\times_{3}\mathbf{X}^{(l)})\times^{(2,3)}_{(1,2)}\mathcal{W}\right), \tag{12}\]
Since the size of the parameter tensor grows exponentially with the order, such extremely large storage and computational complexity is unacceptable. This phenomenon is known as the curse of dimensionality [33], and a proper tensor decomposition format can effectively alleviate it. We therefore decompose the weight tensor \(\mathcal{W}\in\mathbb{R}^{I_{in}\times I_{in}\times I_{out}}\) into
Fig. 5: Illustration of an example of Hypergraph Neural Networks in Eq. (7) and Eq. (8). Classical hypergraph neural network models can be considered as learning a mapped weighted graph. The message passing procedure is still processed via the weighted sum of neighbors’ embeddings. For instance, the weight of \(x_{v_{3}}^{(l)}\) for generating \(x_{v_{1}}^{(l+1)}\) is \(\left(\frac{1}{d_{e_{1}}}\mathbf{W}_{11}+\frac{1}{d_{e_{2}}}\mathbf{W}_{22} \right)\frac{1}{\sqrt{d_{v_{3}}}}\).
Fig. 6: Illustration of THNN. THNN tries to pass the high-order interactions of neighbors in different hyperedges. The information of interactions is computed via outer product and is processed via tensor contractions.
the following partially symmetric CP decomposition [28, 34] structure with the rank \(R\):
\[\mathcal{W}=\mathcal{I}\times_{1}\mathbf{\Theta}^{(l)}\times_{2}\mathbf{\Theta}^{(l)} \times_{3}\mathbf{Q}^{(l)}.\]
Using partially symmetric constraints is motivated by the assumption that the same combination of nodes in an **undirected** hypergraph should result in equal output features after outer product fusion. For example, for the node pair \(v_{j}\) and \(v_{k}\), the weight should satisfy \(\mathcal{W}\times_{1}x_{v_{j}}\times_{2}x_{v_{k}}=\mathcal{W}\times_{1}x_{v_{k}}\times_{2}x_{v_{j}}\). Partial symmetry constraints address this assumption and reduce the number of parameters. The final low-rank aggregation scheme can be represented as follows
\[\mathbf{X}^{(l+1)}=\] \[\sigma\left(\left(\left(\tilde{\mathcal{A}}\times_{2}(\mathbf{X} ^{(l)}\mathbf{\Theta}^{(l)})\times_{3}(\mathbf{X}^{(l)}\mathbf{\Theta}^{(l)})\right) \times^{(2,3)}_{(1,2)}\mathcal{I}\right)\mathbf{Q}^{(l)T}\right), \tag{13}\]
where \(\tilde{\mathcal{A}}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|\times|\mathcal{V}|}\), \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times I_{in}}\), \(\mathcal{I}\in\mathbb{R}^{R\times R\times R}\) is the identity tensor, \(\mathbf{\Theta}^{(l)}\in\mathbb{R}^{I_{in}\times R}\), and \(\mathbf{Q}^{(l)}\in\mathbb{R}^{I_{out}\times R}\). \(\mathbf{\Theta}^{(l)}\) and \(\mathbf{Q}^{(l)}\) are the learnable weights in the \(l\)-th layer, \(I_{in}\) is the input feature dimensionality, \(I_{out}\) is the output dimensionality, and \(R\) is the rank. We define the family of such hypergraph neural networks as **T**ensorized **H**ypergraph **N**eural **N**etworks (**THNN**).
### _Architecture Analysis and Model Details_
Traditional GCNs speed up computation via sparse matrix operations in Pytorch1 or Tensorflow2. However, sparse tensor operations are not yet well supported in common differentiable programming libraries. The above extension would therefore suffer from the high computational space cost of the huge adjacency tensor, especially when the order is high. After fully optimizing the order of computations, we can rewrite THNN in
Footnote 1: [https://pytorch.org/](https://pytorch.org/)
Footnote 2: [https://www.tensorflow.org/](https://www.tensorflow.org/)
\[{x_{v_{i}}}^{(l+1)}=\] \[\sigma\left(\sum_{(j,k)\in N_{i}}\frac{1}{2}\mathbf{Q}^{(l)}\frac{ 1}{\sqrt[3]{d_{i}}\sqrt[3]{d_{j}}\sqrt[3]{d_{k}}}\left(\mathbf{\Theta}^{(l)\top}x _{v_{j}}^{(l)}\right)\star\left(\mathbf{\Theta}^{(l)\top}x_{v_{k}}^{(l)}\right) \right), \tag{14}\]
where \(N_{i}\) is the set of neighbor pairs of the node \(v_{i}\), and \(\star\) is the element-wise product. Considering that low-order information can also be very important in some cases, we concatenate a scalar \(1\) to the feature vectors to generate lower-order dynamics. Such a strategy helps THNN model low-order information. In detail, Eq. (14) focuses on the 2nd-order interactions and ignores some 1st-order information; in the 4-uniform situation, the 3rd-order interactions would dominate. As shown in Figure 4, if we concatenate the original feature vector with a scalar \(1\), such a preference can be alleviated.
As the element-wise product of many vectors empirically leads to numerical instability, we add a new activation function \(\sigma^{\prime}\) to the original architecture. We evaluated common activation functions and used \(Tanh(\cdot)\) in the experiments. A discussion of the choice of \(\sigma^{\prime}\) is in Section IV-J. The final expression of THNN for uniform hypergraphs is represented as follows
\[x_{v_{i}}^{(l+1)}=\sigma\Bigg{(}\sum_{(j,k)\in N_{i}}\mathbf{Q}^{(l)}\,\sigma^{\prime}\bigg{(}\frac{1}{2}\frac{1}{\sqrt[3]{d_{i}}\sqrt[3]{d_{j}}\sqrt[3]{d_{k}}}\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{j}}^{(l)}\\ 1\end{array}\right]\right)\star\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{k}}^{(l)}\\ 1\end{array}\right]\right)\bigg{)}\Bigg{)}. \tag{15}\]
The whole procedure of THNN is shown in Figure 6. Generalizations to order-\(N\) THNN are obtained with the same approach, and formulas for the general case are provided in Sections III-D and III-E.
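A minimal, loop-based PyTorch sketch of the third-order layer in Eq. (15) is given below; the edge-list interface and class name are our own simplification, and the loop is written for clarity rather than efficiency.

```python
import torch
import torch.nn as nn

class THNNLayer(nn.Module):
    """One third-order THNN layer following Eq. (15).
    edges: iterable of (i, j, k) node-index triples, one per hyperedge."""

    def __init__(self, d_in, rank, d_out):
        super().__init__()
        self.Theta = nn.Linear(d_in + 1, rank, bias=False)  # Theta^(l) acting on [x; 1]
        self.Q = nn.Linear(rank, d_out, bias=False)         # Q^(l)

    def forward(self, X, edges, deg):
        # X: (|V|, d_in) node features; deg: (|V|,) vertex degrees
        ones = torch.ones(X.size(0), 1, device=X.device)
        Z = self.Theta(torch.cat([X, ones], dim=1))         # Theta^T [x; 1]
        d = deg.float().pow(-1.0 / 3.0)                     # 1 / cbrt(degree)
        out = torch.zeros(X.size(0), self.Q.out_features, device=X.device)
        for i, j, k in edges:
            # Each hyperedge sends one message to each of its three nodes.
            # The element-wise product is symmetric, so every unordered
            # neighbor pair is visited once; the 1/2 follows Eq. (15).
            for c, a, b in ((i, j, k), (j, i, k), (k, i, j)):
                m = torch.tanh(0.5 * d[c] * d[a] * d[b] * Z[a] * Z[b])
                out[c] = out[c] + self.Q(m)
        return torch.relu(out)
```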
### _Generalizing the Third-Order THNN to the \(N\)-th Order_
In Section III-C, we illustrated the third-order uniform THNN. Here we derive the more general \(N\)-th order form. The outer product information aggregation for the embedding of node \(v_{i}\) can be represented as follows:
\[x_{v_{i}}^{(l+1)}=\sum_{(j_{1},j_{2},\cdots,j_{N-1})\in N_{i}}\left(x_{v_{j_{1}}}^{(l)}\circ\cdots\circ x_{v_{j_{N-1}}}^{(l)}\right)\times_{(1,2,\cdots,N-1)}^{(1,2,\cdots,N-1)}\mathcal{W} \tag{16}\]
where \(\mathcal{W}\in\mathbb{R}^{I_{in}\times\cdots\times I_{in}\times I_{out}}\) is the weight tensor. We can reformulate Eq. (16) into the adjacency tensor contraction format,
\[\mathbf{X}^{(l+1)}=\sigma\left((\tilde{\mathcal{A}}\times_{2}\mathbf{X}^{(l) }\cdots\times_{N}\mathbf{X}^{(l)})\times^{(2,3,\cdots,N)}_{(1,2,\cdots,N-1)} \mathcal{W}\right), \tag{17}\]
Then, we decompose the weight tensor \(\mathcal{W}\in\mathbb{R}^{I_{in}\times\cdots\times I_{in}\times I_{out}}\) into the following partially symmetric CP decomposition [28, 34] structure with rank \(R\):

\[\mathcal{W}=\mathcal{I}\times_{1}\mathbf{\Theta}^{(l)}\times_{2}\mathbf{\Theta}^{(l)}\cdots\times_{N-1}\mathbf{\Theta}^{(l)}\times_{N}\mathbf{Q}^{(l)}. \tag{18}\]
We could also rewrite the THNN with the element-wise dot aggregation function form as,
\[x_{v_{i}}^{(l+1)}=\sum_{(j_{1},j_{2},\cdots,j_{N-1})\in N_{i}}\mathbf{Q}^{(l)}\bigg{(}\frac{1}{(N-1)!}\frac{1}{\sqrt[N]{d_{i}}\prod_{t=1}^{N-1}\sqrt[N]{d_{j_{t}}}}\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{j_{1}}}^{(l)}\\ 1\end{array}\right]\right)\star\cdots\star\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{j_{N-1}}}^{(l)}\\ 1\end{array}\right]\right)\bigg{)}. \tag{19}\]
Empirically, the element-wise product of many vectors leads to numerical instability; therefore, we add activation functions to the original structure. The final expression of THNN for \(N\)-uniform hypergraphs is represented as
\[x_{v_{i}}^{(l+1)}=\sigma\Bigg{(}\sum_{(j_{1},j_{2},\cdots,j_{N-1})\in N_{i}}\mathbf{Q}^{(l)}\,\textit{Tanh}\bigg{(}\frac{1}{(N-1)!}\frac{1}{\sqrt[N]{d_{i}}\prod_{t=1}^{N-1}\sqrt[N]{d_{j_{t}}}}\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{j_{1}}}^{(l)}\\ 1\end{array}\right]\right)\star\cdots\star\left(\mathbf{\Theta}^{(l)\top}\left[\begin{array}{c}x_{v_{j_{N-1}}}^{(l)}\\ 1\end{array}\right]\right)\bigg{)}\Bigg{)}. \tag{20}\]
### _Detailed Complexity Analysis of General THNN_
In this section, we formalize the time and space complexity of the general THNN. The outer product based high-order message passing scheme in Eq. (16) suffers from very high space and time complexity: the parameter space complexity is \(\mathcal{O}(I_{in}^{N-1}I_{out}L)\) and the computational time complexity is \(\mathcal{O}(|\mathcal{V}|^{N}(I_{in}^{N-1}+I_{in}^{N-1}I_{out})L)\), where \(L\) denotes the number of layers. After exploiting the symmetry property of the weight tensor via the partially symmetric CP decomposition format, as stated in Eq. (18), the parameter space complexity and computational time complexity can be reduced to \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) and \(\mathcal{O}(|\mathcal{V}|^{N}R+|\mathcal{V}|R^{N}+|\mathcal{V}|RI_{out}+I_{in}I_{out}R)\), respectively. If the computation is fully optimized in the dot product form in Eq. (20), the time complexity is reduced to \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R)N)\), where \(|\mathcal{A}|\) denotes the number of non-zeros in the adjacency tensor \(\mathcal{A}\). As for the non-uniform generalization, adding a global node does not change the degree of either the space or the time complexity. Multi-uniform processing increases the space complexity to \(\mathcal{O}((I_{in}+I_{out}+1)N_{max}RL)\) and does not change the degree of the time complexity. Table I shows a summary of the complexities of the different formulations.
### _Non-Uniform Generalization_
One of the most critical issues of using adjacency tensors in hypergraph analysis is that only uniform hypergraphs can be processed. In addition, the proposed THNN in Eq. (15) only considers uniform hypergraphs, whereas many real-world situations require non-uniform hypergraphs. Hence, we aim to extend the proposed model to non-uniform hypergraphs. Motivated by [12] and [32], we propose two methods to extend uniform hypergraph models to handle non-uniform hypergraphs.
**Global Node.** As shown in Figure 7, we can add a global node to the hypergraph. The non-informative global node is added to a hyperedge as many times as needed until the order of the hyperedge equals the maximum order. The feature vector of the global node is a trainable vector with the same size as the other node features. Since the non-informative global node has too many neighbors, the representations of the nodes linked to it tend to converge to the same value, which may exacerbate the oversmoothing issue in the message passing procedure.
**Multi-Uniform Processing.** To mitigate the oversmoothing problem of the global-node strategy, as shown in Figure 8, we also propose a multi-uniform processing scheme that decomposes a non-uniform hypergraph into several uniform sub-hypergraphs, where each sub-hypergraph contains all the hyperedges of the same order. We then process the uniform sub-hypergraphs separately. Finally, we concatenate the feature vectors of different orders and process them via a trainable weight matrix.
In this paper, we mainly focus on node classification tasks. Therefore, when we get the node representation through several layers of THNN, we directly use a fully connected layer to obtain the predicted label and use the **cross-entropy** loss to optimize parameters.
### _Complexity Analysis_
In this section, we formalize the above-mentioned kinds of time and space complexity. Through a series of analyses, we reduce the exponential complexity of Eq. (9) to the complexity of Eq. (14). The original outer product based high-order message passing scheme in Eq. (9) suffers from very high space and time complexity: the parameter space complexity is \(\mathcal{O}(I_{in}^{2}I_{out}L)\) and the computational time complexity is \(\mathcal{O}(|\mathcal{V}|^{3}(I_{in}^{2}+I_{in}^{2}I_{out})L)\), where \(L\) denotes the number of layers. After exploiting the symmetry property of the weight tensor via the partially symmetric CP decomposition format, as stated in Eq. (13), the parameter space complexity and computational time complexity can be reduced to \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) and \(\mathcal{O}(|\mathcal{V}|^{3}R+|\mathcal{V}|^{2}R^{2}+|\mathcal{V}|R^{3}+|\mathcal{V}|RI_{out}+I_{in}I_{out}R)\), respectively. If the computation is fully optimized in the dot product form in Eq. (14), the time complexity is reduced to \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R))\), where \(|\mathcal{A}|\) denotes the number of non-zeros in the adjacency tensor \(\mathcal{A}\). As for the non-uniform generalization, adding a global node does not significantly increase the space or time complexity. Multi-uniform processing increases the space complexity to \(\mathcal{O}((I_{in}+I_{out}+1)N_{max}RL)\), where \(N_{max}\) is the largest order in the non-uniform hypergraph, and does not change the degree of the time complexity. Table I shows a summary of the complexities of the different formulations. If parallelism can be fully utilized, as in current CUDA-based sparse adjacency matrix computation schemes, the time complexity of the model can be further reduced.
## IV Experiments
In this section, we conduct experiments on two different 3-D visual object classification datasets under both uniform and non-uniform hypergraph construction settings. The experimental results verify the effectiveness of the proposed model. In addition, we perform ablation analyses and hyperparameter experiments to gain a better understanding of the model and data.
### _Datasets_
In the experiments, two public benchmarks, the Princeton ModelNet40 dataset [35] and the National Taiwan University (NTU) 3D model dataset [36], are used. The ModelNet40 dataset includes 12,311 objects from 40 popular categories, while the NTU dataset has 2,012 3D items from 67 categories. We apply data split settings similar to those in [17, 37], and we extract the features of 3D objects with the Multi-view Convolutional Neural Network (MVCNN) [38] and the Group-view Convolutional Neural Network (GVCNN) [39]. 12 virtual cameras are used to collect images at a 30-degree interval angle, and MVCNN and GVCNN features are extracted accordingly.
### _Uniform Hypergraph Generation_
After obtaining the vector embedding representation of the 3D object in Euclidean space, we implement a distance-based hypergraph generation [7, 16, 17]. Such a distance-generation approach connects a group of similar vertices to
the same centroid and exploits the correlations among vertices. Distance-based hyperedges can represent node connections in the feature space [7, 16, 17].
More specifically, given the features of all data objects, the affinity matrix \(M\) can be computed as
\[M_{ij}=\exp\left(-\frac{2D_{ij}^{2}}{\Delta}\right),\]
where \(D_{ij}\) indicates the Euclidean distance between vertices \(i\) and \(j\) in the feature embedding space, and \(\Delta\) is the average pairwise distance between vertices. Each 3D object in the dataset is chosen as a centroid node, and its k-nearest neighbors in the embedding feature space are connected to the centroid by a hyperedge.
By selecting the centroid node \(j\) and the k-nearest neighbor nodes \(i\in\textit{KNN}[j]\), we can define \(\hat{H}_{ij}=M_{ij}\), which constructs a probabilistic hypergraph incidence matrix. We can then use \(H_{ij}=\mathds{1}(\hat{H}_{ij}>0)\) to construct a k-uniform hypergraph, where \(\mathds{1}(\cdot)\) denotes the indicator function.
Alternatively, we can sample \(H_{ij}=\textit{Bern}(\hat{H}_{ij})\) to construct a non-uniform hypergraph, where \(\textit{Bern}(\cdot)\) denotes a Bernoulli distribution. Once the non-uniform hypergraph structure is generated, the structure remains consistent across all models under one setting. We choose \(K=4\) as the default setting in our experiments, and an experiment on this choice is discussed in Section IV-H.
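The distance-based construction above, including the Bernoulli variant for the non-uniform case, can be sketched as follows (the function name and details such as reusing the mean pairwise distance for \(\Delta\) are our own assumptions).

```python
import numpy as np

def build_incidence(features, k=4, probabilistic=False, seed=0):
    """Distance-based construction: one hyperedge per centroid object,
    joining it with its nearest neighbours in the embedding space."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    D = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    M = np.exp(-2.0 * D**2 / D.mean())     # affinity; Delta ~ mean pairwise distance
    H_hat = np.zeros((n, n))
    for j in range(n):                     # hyperedge j centred at object j
        nearest = np.argsort(D[:, j])[:k]  # includes j itself (distance 0)
        H_hat[nearest, j] = M[nearest, j]
    if probabilistic:                      # Bernoulli sampling -> non-uniform
        return (rng.random((n, n)) < H_hat).astype(int)
    return (H_hat > 0).astype(int)         # k-uniform hypergraph
```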
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Formulation** & Space Complexity & Time Complexity \\ \hline
Outer product formulation in Eq. (9) (order \(3\)) & \(\mathcal{O}(I_{in}^{2}I_{out}L)\) & \(\mathcal{O}(|\mathcal{V}|^{3}(I_{in}^{2}+I_{in}^{2}I_{out})L)\) \\
General outer product formulation (order \(N\)) & \(\mathcal{O}(I_{in}^{N-1}I_{out}L)\) & \(\mathcal{O}(|\mathcal{V}|^{N}(I_{in}^{N-1}+I_{in}^{N-1}I_{out})L)\) \\ \hline
Tensor contraction formulation in Eq. (13) (order \(3\)) & \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) & \(\mathcal{O}(|\mathcal{V}|^{3}R+|\mathcal{V}|^{2}R^{2}+|\mathcal{V}|R^{3}+|\mathcal{V}|RI_{out}+I_{in}I_{out}R)\) \\
General tensor contraction formulation (order \(N\)) & \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) & \(\mathcal{O}(|\mathcal{V}|^{N}R+|\mathcal{V}|R^{N}+|\mathcal{V}|RI_{out}+I_{in}I_{out}R)\) \\ \hline
Dot product formulation in Eq. (15) (order \(3\)) & \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) & \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R))\) \\
General dot product formulation (order \(N\)) & \(\mathcal{O}((I_{in}+I_{out}+1)RL)\) & \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R)N)\) \\ \hline
Adding-global-node extension of non-uniform THNN (max order \(N_{max}\)) & \(\mathcal{O}((I_{in}+I_{out}+1)RL+I_{in})\) & \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R)N_{max})\) \\
Multi-uniform processing extension of non-uniform THNN (max order \(N_{max}\)) & \(\mathcal{O}((I_{in}+I_{out}+1)N_{max}RL)\) & \(\mathcal{O}(|\mathcal{A}|(I_{in}R+R+I_{out}R)N_{max})\) \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Complexity comparison. \(I_{in}\) and \(I_{out}\) are the input and output feature dimensionalities. \(L\) is the number of layers and \(R\) is the rank. \(|\mathcal{V}|\) is the number of nodes in the hypergraph, and \(|\mathcal{A}|\) is the number of non-zeros in the adjacency tensor. \(N\) is the order of the uniform hypergraph, and \(N_{max}\) is the largest order in the non-uniform hypergraph.
Fig. 8: We can also process hypergraphs in layers and utilize distinct models to process sub-hypergraphs of different orders. The resultant embedding vectors of distinct layers are therefore concatenated and integrated through a fully-connected layer.
Fig. 7: Inspired by [12], we can add a global node \(v_{g}\) to a non-uniform hypergraph in order to make it uniform. The global node can be added many times in one hyperedge. Following this procedure, the hypergraph can well be represented as an adjacency tensor, allowing it to be directly processed in uniform-hypergraph form models.
### _Baseline Models_
After constructing hypergraphs, we compare THNN with multiple graph and hypergraph neural network baselines. All baseline results are reproduced by ourselves. For GNN baselines, we feed the clique expansion of the constructed hypergraphs. The baselines are listed and described as follows:
* Graph convolutional network (GCN) [2]. A GCN layer is equivalent to a localized first-order spectral filter on graphs. GCNs can be thought of as convolutional neural networks generalized to process graph-structured data;
* Graph attention network (GAT) [1] incorporates an attention mechanism into graph convolution. It can be considered as a GNN with learnable edge weights;
* Graph Isomorphism Network (GIN) [3]. Among the GNNs performing first-order neighborhood aggregation, GIN is proved to be the one with maximal expressiveness;
* HyperGCN [40]. Each hyperedge of the hypergraph is approximated by a collection of edges connecting pairs of vertices of the hyperedge. The hypergraph learning problem then becomes a special case of graph learning.
* Hypergraph Networks with Hyperedge Neurons (HNHN) [41]. HNHN is also a first-order approximation message passing neural network with a node-specific normalization;
* Hypergraph Neural Networks (HGNN) [17]. HGNN has been discussed in Section III-A in detail.
### _Results on the Uniform Setting_
First, we evaluate THNN along with the baselines in a uniform hypergraph setting with \(k=4\). We employ a two-layer THNN model with a rank setting of 128. THNN is trained with the Adam optimizer with an initial learning rate of 0.001. Similar to the evaluation procedure in [17], we create multiple hypergraph structures for comparison using either single features or concatenated multi-features.
Detailed results on the two datasets are reported in Table II. We have the following observations. First, THNN maintains the best performance in the majority of cases because the proposed models are effective in extracting the high-order information of hypergraph structures. For example, compared with HyperGCN, THNN achieves gains of \(0.92\%\) and \(1.02\%\) on average over the 7 settings on the ModelNet40 and NTU datasets, respectively. Second, the graph-based models perform worse than the hypergraph-based ones in most cases because the hypergraph structure can convey complicated high-order correlations among data, whereas the clique expansion introduces information loss. Furthermore, as a result of this information loss, the graph-based models achieve poor results in some settings. For example, GCN achieves \(77.80\%\) and GIN achieves \(87.93\%\) in the **C: MvGv, T: Gv** setting of ModelNet40, while the accuracies of all other models under this setting are above \(90\%\). The representation information under this setting can be better characterized by a model that considers higher-order information. These cases indicate that graph models are unstable in situations where higher-order information dominates.
### _Results on the Non-Uniform Setting_
We also create non-uniform hypergraph structures to validate the efficacy of the two proposed extensions of THNN. Because each element of the probabilistic incidence matrix \(\hat{H}\) takes values in \([0,1]\), with values closer to 1 indicating greater similarity to the centroid node, Bernoulli sampling can be performed on \(\hat{H}\). We employ the two-layer THNN extension models with a global node added (THNN-AdG) and with Multi-Uniform Processing (THNN-Multi), both with a rank setting of 128. The two extensions are also trained with Adam optimizers with learning rates of 0.005.
Detailed results are reported in Table III. The results remain consistent with those in the uniform settings: the two proposed extensions of THNN perform the best in the majority of scenarios. Specifically, THNN-Multi performs better than THNN-AdG. Compared with THNN-AdG, THNN-Multi achieves gains of \(0.59\%\) and \(0.86\%\) on average over the 7 settings on the ModelNet40 and NTU datasets, respectively. The possible reason is that interactions of different orders are learned via the hypergraph separation procedure and aligned via the merge layer in THNN-Multi, while the artificially introduced external global node may disturb the interaction representations of different orders due to the oversmoothing issue in THNN-AdG.
However, THNN-Multi requires several THNN models to process the hypergraph message passing of different orders.
As a result, the number of parameters of THNN-Multi increases linearly with the order number, while the parameter number of THNN-AdG is not influenced by the order number. Thus, the well-performing THNN-Multi generally has more parameters than THNN-AdG when the order number is large. In addition, the graph-based models still perform worse than the hypergraph-based ones in most cases and suffer from large variance across hypergraph construction and feature selection settings.
In summary, hypergraph-based models can better model complex correlations among data than graph-based models. Compared with other hypergraph models that adopt first-order approximations,
the proposed adjacency-tensor-based THNN enjoys higher-order information modeling, which leads to better performance.
### _Ablation Study_
We also conducted ablation experiments to verify the effectiveness of the proposed techniques, using two uniform hypergraph generation settings on the two datasets. We use \(Tanh\) as \(\sigma^{\prime}\) to make the element-wise product in the CP decomposition stable; a discussion of other choices of \(\sigma^{\prime}\) is in Section IV-J. Second, we use the _concatenating ones_ technique to improve the expressiveness of the model. We also evaluate the normalized adjacency tensor strategy, which is inspired by the normalized Laplacian adjacency matrix and makes training stable. The results show that it is crucial to use \(Tanh\). This is because higher-order tensor operations introduce terms such as \(x^{n}\), which makes training neural networks extremely challenging: small changes in \(x\) can cause large disturbances in \(x^{n}\).
### _Hyper-parameter Analysis: Rank and Number of Layers_
As shown in Fig. 9(a), we evaluate the impact of the rank \(R\) on model performance by generating hypergraphs with varying values of \(K\) (from 2 to 6) in the NTU2012 experiment and by adjusting the rank setting in THNN. We choose the "C: Mv T: Gv" setting in NTU2012. We found that \(Rank=128\) is a proper setting, as the accuracy does not change much for \(Rank\in[75,150]\). We also examine how the number of layers in THNN affects performance. Under the "C: Mv T: Gv" setting in NTU2012, we discover that the model's expressiveness cannot be fully exploited with a single layer, yet the model's performance degrades dramatically when there are too many layers. We conjecture that this may be due to the numerical instability of higher-order operations (small perturbations of the input are amplified by repeated products, e.g., \(1.1^{10}\approx 2.6\) while \(1^{10}=1\)) or to over-smoothing problems in graph learning [42]. The experimental results in Fig. 9(b) demonstrate that stacking two layers is optimal.
### _K in Hypergraph Generation_
The numerical properties of THNN become unstable when a large number of components are involved in a single hyperedge, a condition our model does not handle well due to the numerical explosion of high-order product operations. Therefore, we did not select \(K=10\) as HGNN did [17]. We choose the "C: Mv T: Gv" setting in NTU2012. We discovered through the experiment that \(K\) does not need to be large for this task. As demonstrated in Fig. 10, \(K=4\) is an optimal choice.
### _Residual Connection_
We also attempt to add a residual connection to a THNN layer to see whether it helps. We define the new THNN layer as follows:
\[F^{\prime}(x)=F(x)+Wx,\]
where \(F(\cdot)\) is the original THNN layer, and \(W\) is a trainable weight matrix that aligns feature dimensions. We adopted the NTU2012 dataset in the uniform case, and the result is shown in Fig. 11. We found that the residual connection improves performance on the NTU2012 dataset (about 0.8% on average) and helps THNN achieve deeper layer stacking. However, to make a fair comparison in the main text, we did not add residual connections when comparing with the baselines.
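For concreteness, a minimal PyTorch sketch of this residual wrapper; `thnn_layer` stands in for the actual THNN layer implementation, and the class and dimension names are our own:

```python
import torch
import torch.nn as nn

class ResidualTHNNLayer(nn.Module):
    """F'(x) = F(x) + W x, with W aligning input and output dimensions."""

    def __init__(self, thnn_layer: nn.Module, in_dim: int, out_dim: int):
        super().__init__()
        self.thnn_layer = thnn_layer                        # the original F(.)
        self.skip = nn.Linear(in_dim, out_dim, bias=False)  # the matrix W

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.thnn_layer(x) + self.skip(x)

# Usage with a placeholder layer standing in for a real THNN layer:
layer = ResidualTHNNLayer(nn.Linear(64, 32), in_dim=64, out_dim=32)
out = layer(torch.randn(100, 64))       # shape (100, 32)
```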
### _Other Choices of Activation Function \(\sigma^{\prime}\)_
When we first encountered the instability issue during training, we found that using tanh as \(\sigma^{\prime}\) in Eq. (15) would solve the problem. However, it is not certain that other activation functions would have the same effect. We thus replace tanh with other common activation functions while keeping all other settings the same, using two uniform hypergraph generation settings for the evaluation. As shown in Table V, activation functions that restrict the output to a limited range perform well, while those that do not are unstable in some or all settings. Among the four effective activation functions, tanh and sigmoid are the most accurate, but Softsign and Hardsigmoid do not need time-consuming exp operations. Accordingly, different stable activation functions can be chosen based on specific requirements. We summarize the properties of these activation functions in Table VI.
### _Hypergraph Learning with Adjacency Tensor_
In hypergraph learning, most existing methods convert the hypergraph into a weighted graph [17, 40, 41], and then existing graph methods can be applied. For example, spectral clustering can be performed on the weighted graph Laplacian [44] for hypergraph community detection. As stated in [21], such a conversion results in information loss and sub-optimal performance in hypergraph community detection. As tensors are the most natural extension for representing polyadic relationships [32], many adjacency-tensor-based methods
TABLE V: Results for different activation functions.

| **Function Type** | **NTU2012** C: Mv, T: Gv | **NTU2012** C: MvGv, T: MvGv | **ModelNet40** C: Mv, T: Gv | **ModelNet40** C: MvGv, T: MvGv |
| --- | --- | --- | --- | --- |
| No Activation | 69.43% | 68.36% | 82.74% | 83.95% |
| LeakyReLU | 56.03% | 69.17% | 88.21% | 46.68% |
| ReLU | 69.71% | 65.42% | 89.34% | 10.94% |
| Tanhshrink | 65.41% | 41.82% | 87.40% | 69.69% |
| ELU | 75.87% | 83.65% | 90.03% | 82.25% |
| Hardsigmoid | 78.28% | 83.65% | 92.34% | 93.00% |
| Softsign | 77.75% | 83.11% | 91.86% | 91.15% |
| Sigmoid | 77.75% | 83.65% | 92.02% | 95.38% |
| Tanh | 78.55% | 83.91% | 92.38% | 94.25% |
Fig. 10: Influence of K in NTU2012 for THNN under different rank settings.
have been proposed to conduct optimal community detection [21, 45] on both uniform and non-uniform hypergraphs. To the best of our knowledge, the majority of adjacency-tensor-based research focuses on the spectral properties of hypergraphs [46, 47, 48] and on tensor decomposition modeling [12, 20, 49], and there are no adjacency-tensor-based hypergraph neural networks.
### _Tensor Fusion Models and Tensorized Neural Networks_
Zadeh et al. [22] propose to use a tensorized outer product as a deep information-fusion layer named the Tensor Fusion Layer (TFL), which learns both intra-modality and inter-modality dynamics and aggregates multimodal interactions. This type of fusion operation can be viewed as a variant of exponential machines [50] or a special case of higher-order polynomial regression [29, 30]. Multi-linear tensor models have been proven to have better expressive power in modeling various types of interactions (pairs, triplets, and even polyadic relations). These tensor fusion neural networks are also Tensorized Neural Networks (TNNs) [51, 52]. TNNs are designed to process high-order tensor inputs rather than vectors or matrices via a tensor parameterization procedure. Tensor parameterization has achieved great success in enhancing the expressive power of neural networks [53, 54, 55] while compressing the number of parameters [56, 57]. Different from existing TNNs for processing multimodal sentiment data [24], image data [52], or video data [56], the proposed THNN is the first tensorized neural network for modeling high-order information interactions in hypergraphs.
## VI Conclusion and Discussion
In this paper, we observe that existing hypergraph neural networks are mostly based on message passing over first-order approximations of the original hypergraph structure and ignore the higher-order interactions encoded in hypergraphs. To better model this higher-order information, we propose a novel hypergraph neural network, THNN, which is based on adjacency tensors of hypergraphs and is a higher-order extension of traditional graph convolutional neural networks. We also show that the proposed models are closely connected with tensor fusion, a well-recognized technique for high-order interaction modeling. Therefore, the proposed models are compelling in extracting the high-order information encoded in hypergraphs. We show that our framework achieves promising performance in 3-D visual object classification tasks under both uniform and non-uniform hypergraph settings. In the future, we plan to explore more applications of the proposed tensorized neural network models to verify their advantages.
|
2308.11098 | On the Interpretability of Quantum Neural Networks | Interpretability of artificial intelligence (AI) methods, particularly deep
neural networks, is of great interest. This heightened focus stems from the
widespread use of AI-backed systems. These systems, often relying on intricate
neural architectures, can exhibit behavior that is challenging to explain and
comprehend. The interpretability of such models is a crucial component of
building trusted systems. Many methods exist to approach this problem, but they
do not apply straightforwardly to the quantum setting. Here, we explore the
interpretability of quantum neural networks using local model-agnostic
interpretability measures commonly utilized for classical neural networks.
Following this analysis, we generalize a classical technique called LIME,
introducing Q-LIME, which produces explanations of quantum neural networks. A
feature of our explanations is the delineation of the region in which data
samples have been given a random label, likely subjects of inherently random
quantum measurements. We view this as a step toward understanding how to build
responsible and accountable quantum AI models. | Lirandë Pira, Chris Ferrie | 2023-08-22T00:43:14Z | http://arxiv.org/abs/2308.11098v2 | # Explicability and Inexplicability in the Interpretation of Quantum Neural Networks
###### Abstract
Interpretability of artificial intelligence (AI) methods, particularly deep neural networks, is of great interest due to the widespread use of AI-backed systems, which often have unexplainable behavior. The interpretability of such models is a crucial component of building trusted systems. Many methods exist to approach this problem, but they do not obviously generalize to the quantum setting. Here we explore the interpretability of quantum neural networks using local model-agnostic interpretability measures of quantum and classical neural networks. We introduce the concept of the _band of inexplicability_, representing the _interpretable_ region in which data samples have no explanation, likely victims of inherently random quantum measurements. We see this as a step toward understanding how to build responsible and accountable quantum AI models.
## I Introduction
Artificial intelligence (AI) has become ubiquitous. Often manifested in machine learning algorithms, AI systems promise to be ever more present in everyday high-stakes tasks [1; 2]. This is why building fair, responsible, and ethical systems is crucial to the design process of AI algorithms. Central to the topic of _trusting_ AI-generated results is the notion of _interpretability_, also known as _explainability_. This has given rise to research topics under the umbrella of interpretable machine learning (IML) and explainable AI (XAI), noting that the terms _interpretable_ and _explainable_ are used synonymously throughout the corresponding literature. Generically, interpretability is understood as the extent to which humans comprehend the output of an AI model that leads to decision-making [3]. Humans strive to understand the "thought process" behind the decisions of the AI model -- otherwise, the system is referred to as a "black box."
The precise definition of a model's interpretability has been the subject of much debate [4; 5]. Naturally, there exist learning models which are more interpretable than others, such as simple decision trees. On the other hand, the models we prefer best for solving complex tasks, such as deep neural networks (DNNs), happen to be highly non-interpretable, which is due to their inherent non-linear layered architecture [6]. We note that DNNs are one of the most widely used techniques in machine learning. Thus, the interpretability of neural networks is an essential topic within the IML research [7; 8]. In this work, we focus on the topic of interpretability as we consider the quantum side of neural networks.
In parallel, recent years have witnessed a surge of research efforts in _quantum_ machine learning (QML) [9; 10]. This research area sits at the intersection of machine learning and quantum computing. The development of QML has undergone different stages. Initially, the field started with the quest for speedups or quantum advantages. More recently, the target has morphed into further pursuits in expressivity and generalization power of quantum models. Nowadays, rather than "competing" with classical models, quantum models are further being enhanced on their own, which could, in turn, improve classical machine learning techniques. One of the key techniques currently used in QML research is the variational quantum algorithm, which acts as the quantum equivalent to classical neural networks [11]. To clarify the analogy, we will refer to such models as quantum neural networks (QNNs) [12].
Given the close conceptual correspondence to classical neural networks, it is natural to analyze their interpretability, which is important for several reasons. Firstly, QNNs may complement classical AI algorithm design, making their interpretability at least as important as classical DNNs. Secondly, the quantum paradigms embedded into QNNs deserve to be understood and explained in their own right. The unique
Figure 1: **Categorization of interpretability techniques as they apply to classical and quantum resources.** Here, the well-known QML diagram represents data, and an algorithm or device, which can be classical (C) or quantum (Q) in four different scenarios. We consider a reformulation of interpretable techniques to be required in the CQ scenario. In the QC and QQ quadrants, the design of explicitly quantum interpretable methods may be required. The scope of this paper covers CQ approaches.
non-intuitive characteristics of quantum nature can make QNNs more complicated to interpret from the point of view of human understandability. Finally, with the growing interest and capabilities of quantum technologies, it is crucial to identify and mitigate potential sources of errors that plague conventional AI due to a lack of transparency.
In this work, we define some notions of the interpretability of quantum neural networks. In doing so, we generalize some well-known interpretable techniques to the quantum domain. Consider the standard relationship diagram in QML between data and algorithm (or device) type, where either can be classical (C) or quantum (Q). This entails the following combinations (CC, CQ, QC, and QQ), shown in Figure 1. Classical interpretable techniques are the apparent domain of CC. We will discuss, but not dwell on, the potential need for entirely new techniques when the data is quantum (QC and QQ). In CQ, the domain that covers the so-called quantum-enhanced machine learning techniques, although the data is classical, the output of the quantum devices is irreversibly probabilistic. Generalizing classical notions of interpretability to this domain is the subject of our work.
The question of interpretability in quantum machine learning models more broadly, as well as of QNNs more specifically, has already started to receive attention [13; 14; 15], particularly involving the concept of Shapley values [16], which attempt to quantify the importance of features in making predictions. In [13] interpretability is explored using Shapley values for quantum models by quantifying the importance of each gate to a given prediction. The complexity of computing Shapley values for generalized quantum scenarios is analyzed in [17]. In [15], Shapley values are computed for a particular class of QNNs. Our work complements these efforts using an alternative notion of explainability to be discussed in detail next.
## II Interpretability in AI
### Taxonomy
There are several layers to the design of interpretability techniques. To start, they can be _model-specific_ or _model-agnostic_. As the name suggests, model-specific methods are more restrictive in terms of which models they can explain: they are designed to explain one single type of model. In contrast, model-agnostic methods allow for more flexibility, as they can be used on a wide range of model types. Broadly, model-agnostic methods can have a _global_ or _local_ interpretability dimension. Locality determines the scope of explanations with respect to the model: interpretability at a global level explains the average predictions of the model as a whole, while local interpretability gives explanations at the level of each sample. On another axis, these techniques can be _active_ (inherently interpretable) or _passive_ (post-hoc); this distinction captures how much the interpretability technique is involved in shaping the model itself. Active techniques change the structure of the model to make it more interpretable, whereas passive methods explain the model outcome once training has finished. In contrast to model-agnostic methods, which work with samples at large, there also exist example-based explanations, which explain selected data samples from a dataset. An example of this is the \(k\)-nearest-neighbours model, which averages the outcomes of the \(k\) nearest selected points.
Other than the idea of building interpretable techniques, or more precisely, techniques that interpret various models, there exist models that are inherently interpretable. Such models include linear regression, logistic regression, naive Bayes classifiers, decision trees, and more. This feature makes them good candidates as surrogate models for interpretability. Based on this paradigm, there exists the concept of surrogate models, which uses interpretable techniques as a building block for designing other interpretable methods. Such important techniques are, for example, local interpretable model-agnostic explanations (LIME) [18] and Shapley additive explanations (known as SHAP) [16].
### Interpretability of neural networks
The interpretability of neural networks remains a challenge on its own. This tends to amplify in complex models with many layers and parameters. Nevertheless, there is active research in the field and several proposed interpretable techniques [7]. Such techniques that aim to gain insights into the decision-making of a neural network include saliency maps [19], feature visualization [20; 21], perturbation or occlusion-based methods [16; 18], and layerwise relevance propagation (also known by its acronym LRP) [22].
To expand further on the abovementioned techniques, saliency maps use backpropagation and gradient information to identify the most influential regions contributing to the output result. This technique is also called pixel attribution [4]. Feature visualisation, particularly useful for convolutional neural networks, is a technique that analyses the importance of particular features in a dataset by visualising the patterns that activate the output. On the same note, in terms of network visualisation, Ref. [23] goes deeper into the layers of a convolutional neural network to gain an understanding of the features. This result, in particular, shows the intricacies and the rather intuitive process involved in the decision-making procedure of a network as it goes through deeper layers. Occlusion-based methods aim to perturb or manipulate certain parts of the data samples to observe how the explanations change. These methods are important in highlighting deeper issues in neural networks.
Similarly, layerwise relevance propagation techniques reassign importance weights to the input data by analysing the output, aiding understanding by providing a hypothesis for the output decision. Finally, the class of surrogate-based methods mentioned above is certainly applicable to neural networks as well.
The importance of these techniques is also beyond the interpretability measures for human understanding. They can also be seen as methods of debugging and thus improving the result of a neural network as in Ref. [23]. Below we take a closer look at surrogate model-agnostic local interpretable techniques, which are applicable to DNNs as well.
### Local interpretable methods
Local interpretable methods tend to focus on individual data samples of interest. One such approach explains a black-box model using inherently interpretable models, also known as surrogate methods, which act as a bridge between the two model types. The prototype of these techniques is local interpretable model-agnostic explanations (LIME), which has received much attention since its introduction in 2016 [18]. Local surrogate methods work by training an interpretable surrogate model that approximates the behaviour of the black-box model to be explained. LIME, for instance, is a perturbation-based technique that perturbs the input dataset. Locality in LIME refers to the fact that the surrogate model is trained around the data point of interest, as opposed to the whole dataset (which is the idea behind _global_ surrogate methods). Eq. (1) defines the explanation \(\xi\) of a sample \(x\) via two main terms: the loss \(L(f,g,\pi_{x})\), which measures how well the surrogate mimics the black box locally and is to be minimized, and the complexity measure \(\Omega(g)\), which encodes the degree of interpretability. Here \(f\) is the black-box model, \(g\) is the surrogate model, and \(\pi_{x}\) defines the region in data space local to \(x\). In broader terms, LIME is a trade-off between interpretability and accuracy:
\[\xi(x)=\operatorname*{argmin}_{g\in G}L(f,g,\pi_{x})+\Omega(g) \tag{1}\]
In the following, we make use of the concept of local surrogacy to understand the interpretability of quantum models using LIME as a starting point. Much like LIME, we develop a _framework_ to provide explanations of black-box models in the quantum domain. The class of surrogate models, the locality measure, and the complexity measure are free parameters that must be specified and justified in each application of the framework.
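As a reference point for the quantum generalization that follows, here is a minimal LIME-style sketch in Python; the Gaussian perturbation, the kernel width, and the function names are illustrative assumptions rather than the original LIME implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lime_explain(black_box, x, n_samples=500, scale=0.3, rng=None):
    """Fit a local logistic-regression surrogate g around the point x.

    black_box: callable mapping an (n, d) array to hard labels in {0, 1}.
    Assumes both labels occur among the perturbed samples.
    """
    rng = rng or np.random.default_rng()
    # Synthetic neighbours of x: pi_x realized as an isotropic Gaussian.
    X_local = x + scale * rng.standard_normal((n_samples, x.shape[0]))
    y_local = black_box(X_local)
    # Weight samples by proximity to x (the locality measure pi_x).
    w = np.exp(-np.sum((X_local - x) ** 2, axis=1) / (2 * scale ** 2))
    # A low-capacity surrogate keeps the complexity term Omega(g) small.
    return LogisticRegression().fit(X_local, y_local, sample_weight=w)
```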
## III The case for quantum AI interpretability
As mentioned in Section I, interpretability in the quantum literature in the context of machine learning can take different directions. We consider the case when data is classical and encoded into a quantum state, which is manipulated by a variational quantum circuit before outputting a classical decision via quantum measurement. Our focus is on interpreting the classical output of the quantum model.
A quantum machine learning model \(f\) takes as input data \(x\) and first produces quantum data \(|\psi(x)\rangle\). A trained quantum algorithm -- the QNN, say -- then processes this quantum data and outputs a classification decision based on the outcome of a quantum measurement. This is not conceptually different from a classical neural network beyond the fact that the weights and biases have been replaced by parameters of quantum processes, except for one crucial difference -- quantum measurements are unavoidably probabilistic.
Probabilities, or quantities interpreted as such, often arise in conventional neural networks. However, these numbers are encoded in bits and directly accessible, so they are typically used to generate deterministic results (through thresholding, for example). Qubits, on the other hand, are not directly accessible. While procedures exist to reconstruct qubits from repeated measurements (generally called _tomography_), these are inefficient -- defeating any purpose of encoding information into qubits in the first place. Hence, QML uniquely forces us to deal with uncertainty in interpreting its decisions.
In the case of probabilistic decisions, the notion of a _decision boundary_ is undefined. A reasonable alternative might be to define the boundary as those locations in data space where the classification is purely random (probability \(\frac{1}{2}\)). A data point here is randomly assigned a label. For such a point, any _explanation_ for its label in a particular realization of the random decision process would be arbitrary and prone to error. It would be more accurate to admit that the algorithm is _indecisive_ at such a point. This rationale is equally valid for data points near such locations. Thus, we define the _region of indecision_ as follows,
\[R=\left\{x^{\prime}\in X:\left|P(f(x^{\prime})=1)-\frac{1}{2}\right|<\epsilon \right\}, \tag{2}\]
where \(\epsilon\) is a small positive constant representing a threshold of uncertainty tolerated in the classification decision. In some sense of the word, points lying within this region have no _explanation_ for their particular label beyond "luck" -- or bad luck, depending on one's point of view!
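Because the model is probabilistic, \(P(f(x^{\prime})=1)\) is not directly accessible and must be estimated by repeated runs. A minimal sketch of testing membership in \(R\); the number of shots and the value of \(\epsilon\) are illustrative:

```python
import numpy as np

def in_region_of_indecision(model_sample, x, epsilon=0.1, shots=1000):
    """Estimate P(f(x)=1) from repeated runs and test |p - 1/2| < epsilon.

    model_sample: callable returning one random label in {0, 1} for x,
                  i.e. a single realization of the quantum measurement.
    """
    labels = [model_sample(x) for _ in range(shots)]
    p_hat = float(np.mean(labels))
    return abs(p_hat - 0.5) < epsilon
```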
Now, while some data points have randomly assigned labels, we might still ask _why_. In other words, even data points lying within the region of indecision demand an explanation. Next, we will show how the ideas of local interpretability can be extended to apply to the probabilistic setting.
### Probabilistic local interpretability
In the context of LIME, the loss function is typically chosen to compare models and their potential surrogates on a per-sample basis. However, if the model's output is random, the loss function will also be a random variable. An obvious strategy would be to define loss via expectation:
\[\xi(x)=\operatorname*{argmin}_{g\in G}\mathbb{E}[L(f,g,\pi_{x})]+\Omega(g). \tag{3}\]
However, even then, we still cannot say that \(\xi\) is an explanation, as its predictions are only capturing the average behaviour of the underlying model's randomness. In fact, the label provided by \(\xi\) may be the opposite of that assigned to \(x\) by the model in any particular instance!
To mitigate this, we call an _explanation_ the distribution \(\Xi\) of trained surrogate models \(g\). Note again that \(g\) is random, trained on synthetic local data with random labels assigned by the underlying model. Thus, the explanation inherits any randomness from the underlying model. It's not the case that the explanation provides an interpretation of the randomness _per se_ -- however, we can utilize the distribution of surrogate models to simplify the region of indecision, hence providing an interpretation of it.
### Band of inexplicability
In this section, we define the _band of inexplicability_. Loosely speaking, this is the region of indecision interpreted locally through a distribution of surrogate models. Suppose a particular data point lies within its own band of inexplicability. The explanation for its label is thus _there is no explanation_. Moreover, this is a strong statement because -- in principle -- all possible interpretable surrogate models have been considered in the optimization.
The band of inexplicability can be defined as the region of the input space where the classification decision of the quantum model is uncertain or inexplainable. More formally, we can define the band of inexplicability \(B\) for a data point \(x\) in a dataset \(X\) as,
\[B=\left\{x^{\prime}\in X:\left|P(g(x^{\prime})=1|f,\Xi,\pi_{x})-\frac{1}{2} \right|<\epsilon\right\}, \tag{4}\]
where \(\epsilon\) is again a small positive constant representing a threshold of uncertainty tolerated in the classification decision -- this time with reference to the _explanation_ rather than the underlying model. Note that the distribution in Eq. (4) is over \(\Xi\) as each \(g\) provides deterministic labels.
In much the same way that an interpretable model approximates decision boundaries locally in the classical context, a band of inexplicability approximates the region of indecision in the quantum (or probabilistic) context. The size and shape of this region will depend on several factors, such as the choice of the interpretability technique, the complexity of the surrogate model, and the number of features in the dataset. We call the region a "band" as it describes the simplest schematic presented in Fig. 2.
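Putting the pieces together, membership in \(B\) can be approximated by Monte Carlo over the surrogate distribution \(\Xi\): train many surrogates on freshly sampled labels and let them vote. A sketch follows; the `train_surrogate` callable (e.g., the `lime_explain` sketch above) is an assumed interface:

```python
import numpy as np

def in_band_of_inexplicability(train_surrogate, probe, n_surrogates=100,
                               epsilon=0.1):
    """Check whether `probe` lies in the band of inexplicability.

    train_surrogate: zero-argument callable returning a freshly trained
        local surrogate g; each call sees new random labels from the
        underlying quantum model, so the resulting ensemble approximates
        the distribution Xi of explanations.
    """
    votes = [train_surrogate().predict(probe[None, :])[0]
             for _ in range(n_surrogates)]
    p_hat = float(np.mean(votes))
    return abs(p_hat - 0.5) < epsilon
```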
## IV Numerical experiments for interpreting QNNs
We use the well-known Iris dataset [24] for our numerical experiments. For the sake of the explainability of our own method (no pun intended), we reduce it to a binary classification problem, using only two of the three classes in this dataset as well as only two of the four features. Since we don't actually care about classifying flowers here, with apologies to iris lovers, we abstract the names of these classes and labels below.
The trained quantum model to be explained is a hybrid QNN trained using simultaneous perturbation stochastic approximation (better known by its acronym SPSA) built and simulated using the Qiskit framework [25]. Each data point is encoded into a quantum state with the angle encoding [26]. The QNN model is an autoencoder with alternating layers of single qubit rotations and entangling gates [27]. Since our goal here is to illustrate the band of inexplicability, as in Eq. 4, we do not optimize over
Figure 2: **Depiction of the concept of the _band of inexplicability_.** The space within the dashed lines represents the region in the decision space where data samples exhibit ambiguous classification due to randomness. The figure showcases a two-class classification task in a two-dimensional space with two features represented along the horizontal and vertical axes. Here, \(\epsilon\) is the pre-defined threshold. Data samples inside or close to the band likely cannot be explained.
the complexity of surrogate models and instead fix our search to the class of logistic regression models with two features.
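For readers without a quantum stack at hand, the essential behaviour can be mimicked with a self-contained NumPy stand-in: a single qubit, angle encoding of the two features, and one trained rotation, with the label sampled from the Born probability. All parameter values here are illustrative, not the trained model from the paper:

```python
import numpy as np

def qnn_sample_label(x, theta=0.7, rng=np.random.default_rng()):
    """One measurement of a toy 1-qubit classifier on a 2-feature point x.

    Angle encoding: RY(x[0]) and RY(x[1]) on |0>, then a trained RY(theta).
    Rotations about the same axis add, so the state is RY(x0 + x1 + theta)|0>
    and the Born probability of measuring |1> is sin^2(angle / 2).
    """
    angle = x[0] + x[1] + theta
    p_one = np.sin(angle / 2.0) ** 2
    return int(rng.random() < p_one)   # the label is inherently random
```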
The shaded background in each plot of Figs. 3 and 4 shows the decision region of the trained QNN. Upon inspection, it is clear that these decision regions, and the implied boundary, change with each execution of the QNN. In other words, the decision boundary is ill-defined. In Fig. 3, we naively apply the LIME methodology to two data points -- one in the ambiguous region and one deep within the region corresponding to one of the labels. In the latter case, the output of the QNN is nearly deterministic in the local neighbourhood of the chosen data point, and LIME works as expected -- it produces a consistent explanation.
However, in the first example, the data point receives a random label. It is clearly within the _region of indecision_ for a reasonably small choice of \(\epsilon\). The "explanation" provided by LIME (summarized by its decision boundary shown as the solid line in Fig. 3) is random. In other words, each application of LIME will produce a different explanation. For the chosen data point, the explanation itself produces opposite interpretations roughly half the time, and the predictions it makes are counter to the actual label provided by the QNN model to be explained roughly half the time. Clearly, this is an inappropriate situation to be applying such interpretability techniques. Heuristically, if a data point lies near the decision boundary of the surrogate model for QNN, we should not expect that it provides a satisfactory explanation for its label. The _band of inexplicability_ rectifies this.
For the same sample data points, the band of inexplicability is shown in Fig. 4. Data points within their own band should be regarded as having been classified purely by chance -- and hence have no "explanation" for the label they have been assigned. Note that the band itself is unlikely to admit an analytic form, so some numerical approximation is required to calculate it in practice. Our approach, conceptually, was to repeat what was done to produce Fig. 3 many times and summarize the statistics to produce Fig. 4. A more detailed description follows, and the implementation to reproduce the results presented here can be found at [28].
## V Discussion

We have introduced the _band of inexplicability_: a simple description of a region where the QNN is interpreted to be indecisive. The data samples within their associated band should not be expected to have an "explanation" in the deterministic classical sense. While directly useful for hybrid quantum-classical models, we hope this stimulates further research on the fundamental differences between quantum and classical model interpretability. In the remainder of the paper, we discuss possible future research directions.
Our results are pointed squarely at the randomness of quantum measurements, which might suggest that they are "backwards compatible" with classical models. Indeed, randomness is present in training classical DNNs due to random weight initialization or the optimization techniques used. However, this type of randomness can be "controlled" via the concept of a _seed_. Moreover, the penultimate output of DNNs (just before label assignment) is often a probability distribution. However, these are produced through arbitrary choices of the activation function (i.e., the Softmax function), which force the output of a vector that merely _resembles_ a probability distribution. Each case of randomness in classical DNNs is very different from the innate and unavoidable randomness of quantum models. While our techniques _could_ be applied in a classical setting, the conclusions drawn from them may ironically be more complicated to justifiably action.
In this work, we have provided concrete definitions for QNNs applied to binary classification problems. Using a probability of \(\frac{1}{2}\) would not be a suitable reference point in multi-class decision problems. There are many avenues to generalize our definitions which would mirror standard generalizations in the move from binary to multi-class decision problems. One such example would be defining the region of indecision as that with nearly maximum entropy.
We took as an act of brevity the omission of the word _local_ in many places where it may play a pivotal role. For example, the strongest conclusion our technique can reach is that a QNN is merely _locally_ inexplicable. In such cases, we could concede that (for some regions of data space) the behaviour of QNN is inexplicable, full stop. Or we can use the conclusion to signal that an explanation at a higher level of abstraction is required. Classically, a data point asks, "Why give me this label?" Quantumly, our answer might be, "Sorry, quantum randomness." Yet, the data point may persist, "But what about me led to _that_?" These may be questions that a quantum generalization of _global_ interpretability techniques could answer.
Referring back to Fig. 1, we have focused here on CQ quantum machine learning models. However, the core idea behind local surrogate models remains applicable in the context of quantum data -- use interpretable _quantum_ models as surrogate models to explain black box models producing quantum data. Of course, one of our assumptions is the parallels between the classical interpretable models we mentioned above, with their quantum equivalents. This can be a line for future work. The ideas here encapsulate inherently quantum models such as matrix product states or tensor network states, which can act as surrogate models for quantum models as they may be considered more interpretable.
Furthermore, the idea behind _interpreting_ or "opening up" black-box models may be of interest in control theory [29; 30; 31]. In this scenario, the concept of "grey-box" models -- portions of which encode specific physical models -- give insights into how to engineer certain parameters in a system. These grey-box models can thus be considered _partially explainable_ models. The proposed algorithm in [32] may also be of interest in terms of creating intrinsically quantum interpretable models, which would act as surrogates for other more complex quantum models.
An obvious open question that inspires future research remains to investigate the difference in computational
Figure 4: **The approximated band of inexplicability.** (top) An example of a marked data point that lies on the band of inexplicability. (bottom) A data point that is outside of this region and hence can be assessed for interpretability as per the interpretable techniques.
tractability of interpretability methods in quantum versus classical. This will lead to understanding whether it is more difficult to interpret quantum models as opposed to classical models. We hope such results shed light on more philosophical questions as well, such as _is inexplicability, viz. complexity, necessary for learning?_.
For completeness, the case for the interpretability of machine learning models does not go without critique. Some argue that performance should not be compromised in order to gain insights into the decision-making of the model, and realistically interpretability may not always be prioritized [33, 34]. Simple models tend to be more explainable; however, it is the more complex models that require explanations, as they are more likely to be employed in critical applications.
Regardless of these two distinct camps of belief, the niche field of interpretable machine learning keeps growing in volume. One argument is that having a more complete picture of a model's behaviour can help improve its performance overall. As QML becomes more relevant to AI research, we expect the demand for quantum interpretability to grow as well.
_Acknowledgments:_ LP was supported by the Sydney Quantum Academy, Sydney, NSW, Australia.
|
2305.13115 | Causal-Based Supervision of Attention in Graph Neural Network: A Better
and Simpler Choice towards Powerful Attention | Recent years have witnessed the great potential of attention mechanism in
graph representation learning. However, while variants of attention-based GNNs
are setting new benchmarks for numerous real-world datasets, recent works have
pointed out that their induced attentions are less robust and generalizable
against noisy graphs due to lack of direct supervision. In this paper, we
present a new framework which utilizes the tool of causality to provide a
powerful supervision signal for the learning process of attention functions.
Specifically, we estimate the direct causal effect of attention to the final
prediction, and then maximize such effect to guide attention attending to more
meaningful neighbors. Our method can serve as a plug-and-play module for any
canonical attention-based GNNs in an end-to-end fashion. Extensive experiments
on a wide range of benchmark datasets illustrated that, by directly supervising
attention functions, the model is able to converge faster with a clearer
decision boundary, and thus yields better performances. | Hongjun Wang, Jiyuan Chen, Lun Du, Qiang Fu, Shi Han, Xuan Song | 2023-05-22T15:13:51Z | http://arxiv.org/abs/2305.13115v2 | # Causal-Based Supervision of Attention in Graph Neural Network: A Better and Simpler Choice towards Powerful Attention
###### Abstract
Recent years have witnessed the great potential of attention mechanism in graph representation learning. However, while variants of attention-based GNNs are setting new benchmarks for numerous real-world datasets, recent works have pointed out that their induced attentions are less robust and generalizable against noisy graphs due to lack of direct supervision. In this paper, we present a new framework which utilizes the tool of causality to provide a powerful supervision signal for the learning process of attention functions. Specifically, we estimate the direct causal effect of attention to the final prediction, and then maximize such effect to guide attention attending to more meaningful neighbors. Our method can serve as a plug-and-play module for any canonical attention-based GNNs in an end-to-end fashion. Extensive experiments on a wide range of benchmark datasets illustrated that, by directly supervising attention functions, the model is able to converge faster with a clearer decision boundary, and thus yields better performances.
## 1 Introduction
Graph-structured data is widely used in real-world domains, such as social networks [14], recommender systems [23], and biological molecules [16]. The non-Euclidean nature of graphs has inspired a new type of machine learning model, Graph Neural Networks (GNNs) [15, 16, 17]. Generally, a GNN iteratively updates the features of the center node by aggregating those of its neighbors and has achieved remarkable success across various graph analytical tasks. However, the aggregation of features between unrelated nodes has long been an obstacle keeping GNNs from further improvement.
Recently, Graph Attention Network (GAT) [21] pioneered the adoption of the attention mechanism, a well-established method with proven effectiveness in deep learning [22], into the neighborhood aggregation process of GNNs to alleviate the issue. The key concept behind GAT is to adaptively assign importance to each neighbor during the aggregation process. Its simplicity and effectiveness have made it the most widely used variant of GNN. Following this line, a myriad of attention-based GNNs have been proposed and have achieved state-of-the-art performance in various tasks [24, 14, 15, 16].
Nevertheless, despite the widespread use and satisfying results, in the past several years, researchers began to rethink if the learned attention functions are truly effective [23, 25, 26, 27, 28, 29]. As we know, most existing attention-based GNNs learn the attention function in a weakly-supervised manner, where the attention modules are simply supervised by the final loss function, without a powerful supervising signal to guide the training process. And the lack of direct supervision on attention might be a potential cause of a less robust and generalizable attention function against real-world noisy graphs [23, 24, 25, 26, 27, 28, 29, 12]. To address this problem, existing work enhances the quality of attention through auxiliary regularization terms (supervision). However, concerns have been raised that these methods often rely heavily on human-specified prior assumptions about a specific task, which limits their generalizability [26, 27]. Additionally, the auxiliary regularization is formulated independently of the primary prediction task, which may disrupt the original optimization target and cause the model to "switch" to a different objective function during training [23, 26].
Recently, causal inference [20] has attracted many researchers in the field of GNNs, who utilize structural causal models (SCMs) [28] to handle distribution shift [29] and shortcut learning [14]. In this paper, we argue that the tool of causal inference also sheds light on a promising avenue for supervising and improving the quality of GNN's attention directly, while making no assumptions about specific tasks or models; moreover, the resulting supervision signal for attention implicitly aligns well with the primary task. Before going any deeper, we first provide a general schema for the SCM of attention-based GNNs in Figure 1, which uses nodes to represent variables and edges to indicate causal relations between variables. As we can see, after a high-level abstraction, there are only three key factors in the SCM, including the node
features \(X\), attention maps \(A\), and the model's final prediction \(Y\). Note that in causal language, \(X\), \(A\), and \(Y\) also denotes the context, treatment, and outcome respectively. For edges in SCM, the link \(X\to A\) represents that the attention generation relies on the node's features (i.e., context decides treatment). And links \((X,A)\to Y\) indicate that the model's final prediction is based on both the node's features \(X\) and the attention \(A\) (i.e., the final outcome is jointly determined by both context and treatment).
In order to provide a direct supervision signal and further enhance the learning of attention functions, the first step is to find a way to measure the quality of attention (i.e., to quantify what to improve). Since there are no unified criteria for such a measurement, researchers usually propose their own solution according to the task they face, which is very likely to introduce unfavorable human-specified prior assumptions [20]. For example, CGAT [21] assumes that better attention should focus more on one-hop neighbors. While this assumption surely works on homophilous graphs, it suffers from severe performance degradation in heterophilous scenarios. Our method differs from existing work in that we introduce the SCM to decouple the direct causal effect of attention on the final prediction (i.e., the link \(A\to Y\)) and use this causal effect as a measurement of attention quality. In this way, it is the model and the data, rather than human-predefined rules, that decide whether the attention works well during training. This is non-trivial in various machine learning fields, because what seems reasonable to a human might not be considered the same way by the model [14, 21]. Another drawback of existing attention regularization methods, as previously mentioned, is the deviation from the primary task. SuperGAT [14] uses link prediction to improve attention quality for node classification, but, as its authors note, there is an obvious trade-off between the two tasks. In this paper, we alleviate this problem by directly maximizing the causal effect of attention on the primary task (i.e., strengthening the causal relation \(A\to Y\)). Under mild conditions, the overall optimization remains directed toward the primary objective, except that we additionally provide a direct and powerful signal for the learning of attention in a fully-supervised manner.
In summary, this paper presents a **C**ausal **S**upervision for **A**ttention in graph neural networks (abbreviated as **CSA** in the following paragraphs). CSA has strong applicability because no human-intervened assumptions are made on the target models or training tasks. And the supervision of CSA can be easily and smoothly integrated into optimizing the primary task to performing end-to-end training. We list the main contributions in this paper as follows:
* We explore and provide a brand-new perspective to directly boost GNN's attention with the tool of causality. To the best of our knowledge, this is a promising direction that still remains unexplored.
* We propose CSA, a novel causal-based supervision framework for attention in GNNs, which can be formulated as a simple yet effective external plug-in for a wide range of models and tasks to improve their attention quality.
* We perform extensive experiments and analysis on CSA and the universal performance gain on standard benchmark datasets validates the effectiveness of our design.
## 2 Related Work
**Attention-based Graph Neural Networks.** Modeling pairwise importance between elements in graph-structured data dates back to interaction networks [1, 17] and relational networks [16]. Recently GAT [13] rose as one of the representative work of attention-based GNNs using self-attention [16]. The remarkable success of GAT in multiple tasks has motivated many works focusing on integrating attention into GNN [15, 14, 21, 22, 23, 24]. _Lee et al._ have also conducted a comprehensive survey [11] on various types of attention used in GNNs.
**Causal Inference in Graph Neural Network.** Causality [1] provides researchers new methodologies to design robust measurements, discover hidden causal structures and confront data biases. A myriad of studies has shown that incorporating causality is beneficial to graph neural network in various tasks. [23] makes use of counterfactual links to augment data for link prediction improvement. [25] performs interventions on the representations of graph data to identify the causally attended subgraph for graph classification. [10] on the other hand, applies causality to estimate the causal effect of node's local structure to assist node classification.
**Improving Attention in GAT.** There is a great number of work dedicated to improving attention learning in GAT. [14] enhances attention by exploiting two attention forms compatible with a self-supervised task to predict edges. [1] introduces a simple fix by modifying the order of operations in GAT. [21] develops an approach using constraint on the attention weights according to the class boundary and feature aggregation pattern. In addition, causality also plays a role in boosting the attention of GATs recently. [21] estimates the causal effect of edges by intervention and regularizes edges' attention weights according to their causal effects.
## 3 Preliminaries
We start by introducing the notations and formulations of graph neural networks and their attention variant. Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) represents a graph where \(\mathcal{V}=\{v_{i}\}_{i=0}^{n}\) is the set of nodes and \(\mathcal{E}\in\mathcal{V}\times\mathcal{V}\) is the set of edges. For each node \(v\in\mathcal{V}\),
Figure 1: Structural causal model of attention-based GNNs
it has its own neighbor set \(N(v)=\{u\in\mathcal{V}\mid(v,u)\in\mathcal{E}\}\) and its initial feature vector \(x_{v}^{0}\in\mathbb{R}^{d^{0}}\), where \(d^{0}\) is the original feature dimension. Generally, GNN follows the message-passing mechanism to perform feature updating, where each node's feature representation is updated by aggregating the representations of its neighbors and then combining the aggregated messages with its ego representation [20]. Let \(m_{v}^{l}\in\mathbb{R}^{d^{l}}\) and \(x_{v}^{l}\in\mathbb{R}^{d^{l}}\) be the message vector and representation vector of node \(v\) at layer \(l\); we formally define the updating process of GNN as:
\[m_{v}^{l} =\mathrm{AGGREGATE}\left(\left\{x_{u}^{l-1},\forall u\in N(v)\right\}\right)\] \[x_{v}^{l} =\mathrm{COMBINE}\left(x_{v}^{l-1},m_{v}^{l}\right),\]
where \(\mathrm{AGGREGATE}\) is the aggregation function (e.g., mean, LSTM) and \(\mathrm{COMBINE}\) is the combination function (e.g., concatenation). The design of these two functions is what most distinguishes one type of GNN from another. GAT [21] augments the normal aggregation with self-attention. The core idea of self-attention in GAT is to learn a scoring function that computes an attention score for every node in \(N(v)\) to indicate its relational importance to node \(v\). In layer \(l\), this process is defined by the following equation:
\[e\left(x_{v_{i}}^{l},x_{v_{j}}^{l}\right)=\sigma\left((\mathbf{a}^{l})^{\top} \cdot\left[W^{l}\;x_{v_{i}}^{l}\|W^{l}\;x_{v_{j}}^{l}\right]\right),\]
where \(\mathbf{a}^{l}\) and \(W^{l}\) are learnable parameters, \(\sigma\) is an activation function (e.g., LeakyReLU), and \(\|\) denotes vector concatenation. The attention scores are then normalized across all neighbors \(v_{j}\in N(v_{i})\) using softmax, so that they form a proper distribution over \(N(v_{i})\):
\[\alpha_{ij}^{l}=\frac{\exp\left(e\left(x_{v_{i}}^{l},x_{v_{j}}^{l}\right)\right)}{\sum_{v_{k}\in N(v_{i})}\exp\left(e\left(x_{v_{i}}^{l},x_{v_{k}}^{l}\right)\right)}\]
Finally, GAT computes a weighted average of the features of the neighboring nodes as the new feature of \(v_{i}\), which is demonstrated as follows:
\[x_{v_{i}}^{l+1}=\sigma\left(\sum\nolimits_{v_{j}\in N(v_{i})}\alpha_{ij}^{l} W^{l}\;x_{v_{j}}^{l}\right).\]
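A minimal single-head PyTorch sketch of the three equations above (dense adjacency for clarity; a practical implementation would use sparse operations, and we assume the adjacency contains self-loops so every row has at least one neighbor):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)         # shared projection W^l
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)   # scoring vector a^l

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, in_dim); adj: (n, n) 0/1 adjacency with self-loops.
        h = self.W(x)                                     # (n, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a)                  # raw scores e(x_i, x_j)
        e = e.masked_fill(adj == 0, float("-inf"))        # keep only j in N(v_i)
        alpha = torch.softmax(e, dim=1)                   # normalized alpha_ij
        return F.elu(alpha @ h)                           # weighted aggregation
```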
## 4 Causal-based Supervision on GNN's Attention
In this section, we first introduce how the causal effect of attention can be derived from the structural causal model of attention-based GNNs. Specifically, this is done with the help of the widely used counterfactual analysis in causal reasoning. After that, with the obtained causal effects, we elaborate on three candidate schemes that can be incorporated into the training of attention-based GNNs to improve their attention quality.
### Causal Effects of Attention
As previously mentioned, the first step towards improving attention lies in measuring the quality of existing attention. However, since deep learning models usually exhibit as black boxes, it is generally infeasible to directly assess their attention qualities. Existing works mainly address this issue by introducing human priors to build pre-defined rules for some specific models and tasks. Yet, it has been a long debate on whether human-made rules share consensus with deep learning models during training [21, 19]. Fortunately, the recent rise of causal inference technology has offered effective tools to help us think beyond the black box and analyze causalities between model variables, which leads us to an alternative way to directly utilize the causal effect of attention to measure its quality. Since the obtained causal effects are mainly affected by the model itself, it is a more accurate and unbiased measurement of how well the attention actually learns.
We first give a brief review of the formulation of attention-based graph neural networks in causal language, as shown in Figure 2(a). The generated attention map \(A\) is directly affected by the node features \(X\), and the model prediction \(Y\) is jointly determined by both \(X\) and \(A\). We denote the inference process of the model as:
\[Y_{x,a}=Y(X=x,A=a), \tag{1}\]
which indicates that model will give value \(Y_{x,a}\) if the value of \(X\) and \(A\) are set to \(x\) and \(a\) respectively. In order to pursue the attention's causal effect, we introduce the widely-used counterfactual analysis [10] in causal reasoning.
The core idea of counterfactual causality lies in asking: given a certain data context (node features \(X\)), what would the outcome (model prediction \(Y\)) have been if the treatment (attention map \(A\)) had not taken the observed value? To answer this imaginary question, we have to manipulate the values of several variables to see the effect, which is formally termed _intervention_ in the causal inference literature and denoted as \(do(\cdot)\). In a \(do(\cdot)\) operation, we forcibly select a counterfactual value to replace the original factual value of the intervened variable. Once a variable is intervened on, all its incoming links in the SCM are cut off and its value is given independently, while other variables that are not affected retain their original values. In our case, for example, \(do(A=a^{*})\) means we demand the attention \(A\) take the non-factual value \(a^{*}\) (e.g., reverse/random attention), so that the link \(X\to A\) is cut off and \(A\) is no longer affected by its causal parent \(X\). This process is illustrated in Figure 2(b) and the mathematical formulation is given as:
\[Y_{x,a^{*}}=Y(X=x,do(A=a^{*})), \tag{2}\]
which indicates that after the \(do(\cdot)\) operation changes the value of attention to \(a^{*}\), the output of the model also changes to \(Y_{x,a^{*}}\). Finally, let us consider a case where we
Figure 2: Deriving causal effects through counterfactual
assign a _dummy_ value \(\tilde{a}\) to the attention map so that, for each ego node, all its neighbors share the same attention weights; the feature aggregation of the graph attention model then degrades to an unweighted average. In this case, according to the theory of causal inference [20], the Total Direct Effect (TDE) of attention on the model prediction can be obtained by computing the difference between the model outcomes \(Y_{x,a}\) and \(Y_{x,\tilde{a}}\), formulated as follows:
\[TDE=Y_{x,a}-Y_{x,\tilde{a}}. \tag{3}\]
It is worth noting that the derivation of attention's causal effect does not impose any assumptions or constraints, which lays a solid foundation for wide applicability to any graph attention model.
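A sketch of estimating this TDE for one attention layer; `aggregate` is an assumed callable exposing the layer's aggregation with an externally supplied attention map:

```python
import torch

def total_direct_effect(aggregate, x, attn, adj):
    """TDE = Y_{x,a} - Y_{x,a_tilde}.

    aggregate: callable (features, attention_map) -> prediction logits.
    attn: the factual attention map a produced by the model.
    adj:  0/1 adjacency (with self-loops) used to build the dummy map
          a_tilde, which averages every node's neighbors uniformly.
    """
    dummy = adj / adj.sum(dim=1, keepdim=True)   # uniform attention per row
    return aggregate(x, attn) - aggregate(x, dummy)
```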
### Supervision on Attention with Causal Effects
We have already demonstrated the derivation of attention's causal effect in the previous section. In this part, we will discuss how to utilize the obtained causal effect for attention quality improvement. Previous works that make use of an auxiliary task to regularize attention usually suffered from performance trade-off between the primary task and auxiliary task. In this work, we alleviate this problem by directly maximizing the causal effect of attention on the primary task. The overall schema of this part is shown in Figure 3.
Consider a simple case where we conduct the node classification task with a standard \(L\)-layer GAT. For each layer \(l\), we have the node representations \(X^{l-1}\in\mathbb{R}^{n\times d^{l-1}}\) from the previous layer as input. Then, we perform feature aggregation and updating with factual attention map \(A^{l}\) to obtain the factual output feature \(X^{l}=f(X^{l-1},A^{l})\). Similarly, when we intervene the attention maps of layer \(l\) (e.g., assigning _dummy_ values using \(do(\cdot)\) operation), we can get a counterfactual output feature \(\hat{X}^{l}\). We further employ a learnable matrix \(\mathcal{W}^{l}\in\mathbb{R}^{c\times d^{l}}\) (\(c\) denotes the number of classes) to get the node's factual predicted label \(Y^{l}_{pred}\) and counterfactual predicted label \(\hat{Y}^{l}_{pred}\) using the corresponding features from layer \(l\). Therefore, the causal effect of attention at layer \(l\) is obtained as: \(Y^{l}_{pred}-\hat{Y}^{l}_{pred}\). To this end, we can use the causal effect as a supervision signal to explicitly guide the attention learning process. The new objective of the CSA-assisted GAT model can be formulated as:
\[\mathcal{L}=\sum_{l}\lambda_{l}\,\mathcal{L}_{ce}(Y^{l}_{\text{effect}},y)+\mathcal{L}_{\text{others}}, \tag{4}\]
where \(Y^{l}_{\text{effect}}=Y^{l}_{pred}-\hat{Y}^{l}_{pred}\) is the causal effect of attention at layer \(l\), \(y\) is the ground-truth label, \(\mathcal{L}_{ce}\) is the cross-entropy loss, \(\lambda_{l}\) is a coefficient to balance training, and \(\mathcal{L}_{\text{others}}\) represents the original objective, such as the standard classification loss. Note that Equation (4) is a general form of CSA where we compute an additional loss for each GAT layer to supervise attention directly. In practice this is not necessary; we found that supervising one or two layers is enough for CSA to bring a satisfying performance improvement.
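A sketch of this objective for a single supervised layer; the two logit tensors are assumed to come from the same layer evaluated with the factual attention \(a\) and the counterfactual \(\hat{a}\):

```python
import torch.nn.functional as F

def csa_loss(factual_logits, counterfactual_logits, labels, lam=1.0):
    """Primary loss plus causal supervision on the attention's direct effect.

    Y_effect = Y_pred - Y_pred_hat is pushed to classify correctly, which
    maximizes the causal effect of attention on the primary task.
    """
    effect = factual_logits - counterfactual_logits
    primary = F.cross_entropy(factual_logits, labels)   # L_others
    causal = F.cross_entropy(effect, labels)            # L_ce(Y_effect, y)
    return primary + lam * causal
```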
Moreover, since our aim is to boost the quality of attention, it is not necessary to estimate the exact causal effect of attention using _dummy_ values. Instead, a strong counterfactual baseline might even be more helpful for improving attention quality. We hereby propose three heuristic counterfactual schemes and test them in our experiments; a code sketch covering all three follows the scheme descriptions. We note that the exact form of the counterfactual is not fixed, and our goal here is just to set the ball rolling.
**Scheme I:** In the first scheme, we utilize the uniform distribution to generate the counterfactual attention map. Specifically, the counterfactual attention is produced by
\[\hat{a}\sim U(e,f), \tag{5}\]
where \(e\) and \(f\) are the lower and upper bounds. In this case, the generated counterfactual can vary from very bad (i.e., attending to all unrelated neighbors) to very good (i.e., attending to all meaningful neighbors). This is similar to a _Randomized Controlled Trial_ [17], where all possible treatments are enumerated. We expect that maximizing the causal effect computed over all possible treatments leads to a robust improvement of attention.
**Scheme II:** Scheme I is easy and straightforward to apply. However, due to its randomness, a possible concern is that if most of the generated counterfactual attentions are inferior to the factual one, we will obtain only a very small gradient for attention improvement. We are therefore motivated to find "better" counterfactuals that spur the factual attention to evolve. Heuristically, given that MLP is a strong baseline on several datasets (e.g., Texas, Cornell, and Wisconsin), we employ an identity mapping to generate the counterfactual attention, which attends only to the ego node instead of its neighbors. Specifically, the counterfactual attention map is equal to the identity matrix \(I\):
\[\hat{a}\sim I \tag{6}\]
Figure 3: The schematic of CSA as a plug-in to graph attention methods. Here \(a\) and \(\hat{a}\) indicate the factual and counterfactual attention values, respectively. We subtract the counterfactual classification results from the factual ones to estimate the causal effect of the learned attention (i.e., attention quality) and directly maximize it during training on the primary task.

**Scheme III:** Our last scheme can be considered an extension of Scheme II. Since the rapid development of the GAT family has introduced variants that already outperform MLP on many datasets, using counterfactuals derived from the behavior of MLP does not seem to be a wise choice for improving the attention of these variants. Inspired by the self-boosting concept [14] widely used in machine learning, we leverage the historical attention map as the counterfactual to urge the factual one to keep refining itself. The specific formulation is written as follows:
\[\hat{a}\sim A_{hist}, \tag{7}\]
where \(A_{hist}\) denotes the historical attention map (e.g., the attention map from the last update iteration).
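For concreteness, a minimal PyTorch sketch of the three counterfactual generators is given below; the dense-matrix representation, the row normalization in Scheme I, and the function name are our assumptions for exposition.

```python
import torch

def counterfactual_attention(attn, scheme, adj=None, hist=None, e=0.0, f=1.0):
    """Generate a counterfactual attention map hat{a} (Eqs. (5)-(7))."""
    if scheme == "I":                         # uniform random "treatments"
        a = torch.empty_like(attn).uniform_(e, f)
        if adj is not None:                   # restrict to existing edges
            a = a * adj
        return a / a.sum(dim=1, keepdim=True).clamp(min=1e-9)
    if scheme == "II":                        # identity: MLP-like ego-only map
        return torch.eye(attn.size(0), device=attn.device)
    if scheme == "III":                       # historical attention map
        assert hist is not None, "Scheme III requires the stored map"
        return hist.detach()                  # do not backprop into history
    raise ValueError(f"unknown scheme: {scheme}")
```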
## 5 Experiment
In this section, we conduct extensive node classification experiments to evaluate the performance of CSA. Specifically, we 1) validate the effectiveness of CSA on three popular GAT variants using a wide range of datasets, including both homophily and heterophily scenarios; 2) compare CSA with other attention-improvement baselines for GAT to show the superiority of our method; 3) show that CSA induces better attention that improves the robustness of GAT; 4) test CSA's sensitivity to hyper-parameters; 5) analyze the influence of CSA in feature space; and 6) examine the performance of some special cases of CSA.
### Datasets
For the heterophily scenario, we select seven standard benchmark datasets: Wisconsin, Cornell, Texas, Actor, Squirrel, Chameleon, and Crocodile. Wisconsin, Cornell, and Texas, collected by Carnegie Mellon University, are published in WebKB1. Actor [14] is the actor-only subgraph sampled from a film-director-actor-writer network. Squirrel, Chameleon, and Crocodile are collected from the English Wikipedia [11]. We summarize the basic attributes of each dataset in Table 1, where \(\mathcal{H}(\mathcal{G})\) is the homophily ratio [15]; \(\mathcal{H}(\mathcal{G})\to 1\) represents extreme homophily and vice versa.
Footnote 1: http://www.cs.cmu.edu/~webkb/
For the homophily scenario, two large datasets released by the Open Graph Benchmark (OGB)2 [13], ogbn-products and ogbn-arxiv, are included in our experiments, together with two small-scale homophily graph datasets, Cora and Citeseer [12]. Similarly, the attributes of these datasets are summarized in Table 2.
Footnote 2: https://ogb.stanford.edu/
### Experimental Setup
We employ popular node classification models as baselines in our experiments: GCN [11], GAT [13], SGCN [21], FAGCN [10], GPR-GNN [20], H2GCN [15], WRGAT [23], APPNP [22] and UniMP [24]. We also present the performance of MLPs, which serve as a strong non-graph-based baseline. Due to the page limit, we select only four models, GAT, FAGCN, WRGAT, and UniMP, to examine the effectiveness of CSA. These models range from classic to recent ones and can be considered representative of state-of-the-art node classification models. Note that for all these models we implement CSA only in their first layer to avoid excessive computational cost.
In our experiments, each GNN is run with the best hyperparameters if provided. We set the same random seed for each model and dataset for reproducibility.
| | **Texas** | **Wisconsin** | **Actor** | **Squirrel** | **Chameleon** | **Cornell** | **Crocodile** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\mathcal{H}(\mathcal{G})\) | 0.11 | 0.21 | 0.22 | 0.22 | 0.23 | 0.3 | 0.26 |
| **#Nodes** | 183 | 251 | 7,600 | 5,201 | 2,277 | 183 | 11,631 |
| **#Edges** | 295 | 466 | 191,506 | 198,493 | 31,421 | 280 | 899,756 |
| **#Classes** | 5 | 5 | 5 | 5 | 5 | 5 | 6 |
| **#Features** | 1703 | 1703 | 932 | 2089 | 2325 | 1703 | 500 |
| MLP | 81.32 ± 6.22 | 84.38 ± 5.34 | 36.09 ± 1.35 | 28.98 ± 1.32 | 46.21 ± 2.89 | 83.92 ± 5.88 | 54.35 ± 1.90 |
| SGCN | 56.41 ± 4.29 | 54.82 ± 3.63 | 30.50 ± 0.94 | 52.74 ± 1.58 | 60.89 ± 2.21 | 62.52 ± 5.10 | 51.80 ± 1.53 |
| GCN | 55.59 ± 5.96 | 53.48 ± 4.75 | 28.40 ± 0.88 | 53.98 ± 1.53 | 61.54 ± 2.59 | 60.01 ± 5.67 | 52.24 ± 2.54 |
| H2GCN | 84.81 ± 6.94 | 86.64 ± 4.63 | 35.83 ± 0.96 | 37.95 ± 1.89 | 58.27 ± 2.63 | 82.08 ± 4.71 | 53.10 ± 1.23 |
| APPNP | 81.93 ± 5.77 | 85.48 ± 4.58 | 35.90 ± 0.96 | 39.08 ± 1.76 | 57.80 ± 2.47 | 81.92 ± 6.12 | 53.06 ± 1.90 |
| GPR-GNN | 79.44 ± 5.17 | 84.46 ± 6.36 | 35.11 ± 0.82 | 32.33 ± 2.42 | 46.76 ± 2.10 | 79.91 ± 6.60 | 52.74 ± 1.88 |
| GAT | 55.21 ± 5.70 | 52.80 ± 6.11 | 29.04 ± 0.66 | 40.00 ± 0.99 | 59.32 ± 1.54 | 61.89 ± 6.08 | 51.28 ± 1.79 |
| +CSA-I | 56.17 ± 5.32 | 53.23 ± 6.28 | 29.03 ± 0.79 | 40.51 ± 0.98 | 60.73 ± 1.35 | 62.75 ± 6.32 | 51.67 ± 1.62 |
| +CSA-II | **58.21 ± 4.79** | **54.35 ± 6.54** | 29.71 ± 0.74 | 41.02 ± 1.23 | **61.31 ± 1.13** | **64.26 ± 5.21** | **52.20 ± 1.74** |
| +CSA-III | 58.04 ± 5.27 | 53.98 ± 6.30 | **29.72 ± 0.86** | **41.38 ± 1.19** | 61.20 ± 1.37 | 63.58 ± 6.03 | 52.13 ± 1.83 |
| FAGCN | 82.54 ± 6.89 | 82.84 ± 7.95 | 34.85 ± 1.24 | 42.55 ± 0.86 | 61.21 ± 3.13 | 79.24 ± 9.92 | 54.35 ± 1.11 |
| +CSA-I | 82.65 ± 7.11 | 83.37 ± 7.79 | 34.77 ± 0.95 | 42.55 ± 0.74 | 61.86 ± 2.98 | 80.01 ± 9.72 | 54.44 ± 1.18 |
| +CSA-II | 83.29 ± 6.80 | 83.11 ± 8.26 | 34.88 ± 0.86 | 42.58 ± 0.93 | 61.74 ± 3.39 | **81.35 ± 9.68** | 54.45 ± 1.23 |
| +CSA-III | **84.72 ± 6.71** | **84.23 ± 7.21** | **35.12 ± 0.98** | **43.38 ± 1.02** | **62.52 ± 3.20** | 80.94 ± 9.77 | **55.16 ± 0.97** |
| WRGAT | 83.62 ± 5.50 | 86.98 ± 3.78 | 36.53 ± 0.77 | 48.85 ± 0.78 | 65.24 ± 0.87 | 81.62 ± 3.90 | 54.76 ± 1.12 |
| +CSA-I | 83.69 ± 5.63 | 87.23 ± 3.94 | 36.55 ± 0.93 | **49.46 ± 0.74** | 65.36 ± 1.05 | 81.88 ± 3.93 | 54.86 ± 1.31 |
| +CSA-II | 83.76 ± … | | | | | | |

Table 1: Classification accuracy on heterophily datasets.
The reported results are given as mean and standard deviation, calculated over 10 random node splits (train/validation/test ratio of 48%/32%/20%, following [20]). Our experiments are conducted on a GPU server with eight NVIDIA A100 GPUs, and the code is implemented using CUDA Toolkit 11.5, PyTorch 1.8.1, and torch-geometric 2.0.1.
### Performance Analysis
Table 1 and Table 2 report the test accuracy of different GNNs with different variants of CSA on the supervised node classification task. A graph's homophily level is the average of its nodes' homophily levels. CSA improves over the vanilla models across all datasets, with the largest gains on Texas, Wisconsin, and Cornell. Judging by the performance of MLPs, these three datasets have in common distinguishable node features and a large fraction of non-homophilous edges. Meanwhile, the gain brought by CSA is proportional to the base model's capacity: the mechanism behind CSA is to strengthen the causal effect between node representations and the final prediction, so CSA offers limited benefit when the node representations are chaotic. Our experiments highlight that: I) models that already beat MLP improve little under CSA-II, whereas CSA-III improves them relatively more, because on those datasets the graph structure provides meaningful information that gives CSA-III an advantage; II) datasets with distinctive features, as indicated by the performance of MLPs, are better suited to CSA-II, since in this setting the features themselves are more informative; III) the random strategy (CSA-I) is relatively inferior to the others, since its distribution is hard to control and tends to generate poor attention maps, which weakens the regularization.
### Comparison with Attention Promotion Baselines
We compare against multiple attention promotion baselines: CAL [14], CAR [23], Super [15], and Constraint interventions [23]; the results on four datasets are shown in Figure 4. Among them, CAL is a method for graph property prediction that relies on an alternative formulation of causal attention with interventions on implicit representations; we adapted CAL for node classification by removing its final pooling layer. Super refers to SuperGAT, a method that seeks to constrain node feature vectors through a semi-supervised edge prediction task. CAR aligns the attention mechanism with the causal effects of active interventions on graph connectivity in a scalable manner. The Constraint method adds two auxiliary losses: a graph structure-based constraint and a class boundary constraint. While CAL, CAR, and CSA share the goal of enhancing graph attention using concepts from causal theory, CAL performs causal interventions through abstract perturbations of the graph representation, and CAR employs an edge intervention strategy that makes causal effects computable at scale, whereas our method imposes no assumptions or constraints on GATs; CSA therefore tends to generalize well. SuperGAT and the Constraint method, in contrast, face a trade-off between node classification and regularization; for example, the SuperGAT results imply that it is hard to learn the relational importance of edges by simply optimizing graph attention for link prediction.
### CSA Provides Robustness
In this section, we systematically study the robustness of CSA against input perturbations, including feature and edge perturbations. Following [21], we conduct node-level feature perturbations by replacing node features with noise sampled from a Bernoulli distribution with \(p=0.5\), and edge perturbations by stochastically regenerating edges.
| Models | **Cora** | **CiteSeer** | **ogbn-products** | **ogbn-arxiv** |
| --- | --- | --- | --- | --- |
| \(\mathcal{H}(\mathcal{G})\) | 0.81 | 0.74 | 0.81 | 0.66 |
| **#Nodes** | 2,708 | 3,327 | 2,449,029 | 169,343 |
| **#Edges** | 5,278 | 4,467 | 61,859,140 | 1,166,243 |
| **#Classes** | 7 | 6 | 47 | 40 |
| **#Features** | 1433 | 3703 | 100 | 128 |
| GAT | 86.21 ± 0.78 | 75.73 ± 1.23 | 77.02 ± 0.63 | 70.96 ± 0.14 |
| +CSA-I | 86.16 ± 0.95 | 76.81 ± 1.29 | 77.28 ± 0.69 | 71.08 ± 0.14 |
| +CSA-II | 86.89 ± 0.64 | 76.53 ± 1.18 | 77.44 ± 0.63 | 71.05 ± 0.14 |
| +CSA-III | **87.86 ± 0.87** | **77.72 ± 1.25** | **78.36 ± 0.72** | **71.20 ± 0.16** |
| UniMP | 86.89 ± 0.90 | 75.14 ± 0.68 | 81.37 ± 0.47 | 72.92 ± 0.10 |
| +CSA-I | 87.47 ± 0.87 | 75.89 ± 0.73 | 81.55 ± 0.62 | 72.94 ± 0.10 |
| +CSA-II | 85.62 ± 0.73 | 75.87 ± 0.72 | 81.39 ± 0.47 | 72.96 ± 0.10 |
| +CSA-III | **88.64 ± 1.28** | **77.61 ± 0.82** | **82.24 ± 0.63** | **73.08 ± 0.11** |

Table 2: Classification accuracy on homophily datasets.
Figure 4: Comparison with different GAT promotion strategies.
| Models | **Texas** | **Cornell** | **ogbn-arxiv** |
| --- | --- | --- | --- |
| GAT | 55.21 ± 5.70 | 61.89 ± 6.08 | 70.96 ± 0.14 |
| +CSA (Last) | 57.83 ± 4.65 | 63.52 ± 5.34 | 71.08 ± 0.15 |
| +CSA (Pure) | 55.79 ± 5.05 | 61.97 ± 5.18 | 70.96 ± 0.14 |
| +CSA (Ours) | **58.21 ± 4.79** | **64.26 ± 5.21** | **71.20 ± 0.16** |

Table 3: Comparison with the heuristic causal strategies.
According to the results shown in Figure 5, CSA yields robust performance under input perturbations. For perturbation percentages ranging from \(0\%\) to \(40\%\), Figure 5 shows that CSA achieves more robust results in higher-noise settings than in lower-noise ones, on both node and edge perturbations.
### Hyper-parameter Analysis
We analyze the sensitivity of \(\lambda\) and plot the node classification performance in Figure 6. For every dataset there is a specific range of \(\lambda\) that maximizes test performance. On Texas the performance peaks at \(\lambda=0.4\), whereas on Cora the differences across \(\lambda\) values are relatively small. We observe that each dataset has an optimal level of causal supervision, and using too large a \(\lambda\) degrades node classification performance. Since Cora has mostly homophilous (friendly) neighborhoods, its performance is less sensitive to \(\lambda\) than that of Texas; accordingly, Texas needs relatively stronger regularization.
### Pairwise Distance among Classes
To further evaluate whether the good performance of CSA can be attributed to mitigating the lack of supervision, we visualize the pairwise distance among classes of the node representations learned by CSA and by vanilla GAT. Following [10], we calculate the Mean Average Distance (MAD) with cosine distance among the node representations of the last layer; the larger the MAD, the more distinguishable the node representations. Results are reported in Figure 7. The node representations learned by CSA maintain a large distance throughout the optimization process, indicating relief of the lacking-supervision issue. In contrast, GAT suffers from severely indistinguishable node representations.
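For reference, the MAD metric can be sketched as follows; averaging over all node pairs (rather than the per-node averaging of [10]) is a simplifying assumption on our part.

```python
import torch
import torch.nn.functional as F

def mean_average_distance(h):
    """Mean Average Distance (cosine distance) over last-layer node
    representations h of shape (n, d); larger means more distinguishable."""
    h = F.normalize(h, dim=1)          # unit-norm rows
    dist = 1.0 - h @ h.t()             # pairwise cosine distances
    n = h.size(0)
    return (dist.sum() - dist.diagonal().sum()) / (n * (n - 1))
```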
### Comparison with Heuristic Causal Strategies.
To validate the effectiveness of CSA, we compare it with two heuristic causal designs (Last and Pure): 1) Last directly estimates the total causal effect by subtracting the counterfactual output from the model's output at the final layer; 2) Pure replaces the attention with static aggregation weights (i.e., each node allocates the same weight to all neighbors). The results are shown in Table 3. Both heuristics outperform the vanilla model but remain inferior to ours. Compared with Last, the major difference is whether the causal effect is estimated explicitly per layer: in our framework, we plug MLPs into the hidden layers to precisely estimate the causal effect of each layer. Compared with Pure, our strategy provides stronger counterfactual baselines, leading to better regularization.
## 6 Conclusion
We introduced CSA, a counterfactual-based regularization scheme that can be applied to graph attention architectures. Unlike other causal approaches, we first built the causal graph of GATs in a general way, without imposing any assumptions or constraints on GATs. We then introduced an efficient scheme to directly estimate the causal effect of attention in the hidden layers. Applying CSA to both homophilic and heterophilic node classification tasks, we found accuracy improvements and increased robustness in almost all circumstances. We evaluated three variants of counterfactual attention strategies and found that each adapts to different situations.
Figure 5: Robustness performance under node and edge perturbations.
Figure 6: Hyper-parameter analysis on GAT.
Figure 7: Mean Average Distance among node representations of the last GAT layer.
## Contribution Statement
Hongjun Wang and Jiyuan Chen contributed equally to the work. Lun Du and Xuan Song are the corresponding authors. This work is done during Hongjun Wang's internship at Microsoft Research Asia.
|
2310.10133 | Empowering SMPC: Bridging the Gap Between Scalability, Memory Efficiency
and Privacy in Neural Network Inference | This paper aims to develop an efficient open-source Secure Multi-Party
Computation (SMPC) repository, that addresses the issue of practical and
scalable implementation of SMPC protocol on machines with moderate
computational resources, while aiming to reduce the execution time. We
implement the ABY2.0 protocol for SMPC, providing developers with effective
tools for building applications on the ABY 2.0 protocol. This article addresses
the limitations of the C++ based MOTION2NX framework for secure neural network
inference, including memory constraints and operation compatibility issues. Our
enhancements include optimizing the memory usage, reducing execution time using
a third-party Helper node, and enhancing efficiency while still preserving data
privacy. These optimizations enable MNIST dataset inference in just 32 seconds
with only 0.2 GB of RAM for a 5-layer neural network. In contrast, the previous
baseline implementation required 8.03 GB of RAM and 200 seconds of execution
time. | Ramya Burra, Anshoo Tandon, Srishti Mittal | 2023-10-16T07:16:09Z | http://arxiv.org/abs/2310.10133v1 | Empowering SMPC: Bridging the Gap Between Scalability, Memory Efficiency and Privacy in Neural Network Inference
###### Abstract
This paper aims to develop an efficient open-source Secure Multi-Party Computation (SMPC) repository, that addresses the issue of practical and scalable implementation of SMPC protocol on machines with moderate computational resources, while aiming to reduce the execution time. We implement the ABY2.0 protocol for SMPC, providing developers with effective tools for building applications on the ABY 2.0 protocol. This article addresses the limitations of the C++ based MOTION2NX framework for secure neural network inference, including memory constraints and operation compatibility issues. Our enhancements include optimizing the memory usage, reducing execution time using a third-party Helper node, and enhancing efficiency while still preserving data privacy. These optimizations enable MNIST dataset inference in just \(32\) seconds with only \(0.2\) GB of RAM for a 5-layer neural network. In contrast, the previous baseline implementation required \(8.03\) GB of RAM and \(200\) seconds of execution time.
SMPC, ABY2.0, MOTION2NX
## I Introduction
In today's interconnected and data-driven world, the ability to perform computations while preserving privacy and security is paramount. Privacy is considered a fundamental human right, as it balances the need for transparency and accountability with the protection of individual rights [1]. As technology advances and the digital age evolves, preserving privacy remains a pressing concern that requires ongoing attention and protection. Secure Multi-Party Computation (SMPC) serves as a fundamental tool to address these concerns and facilitates secure data sharing and decision-making across various domains. When implementing SMPC for real-life data, it is essential to consider factors such as the nature of the data, the privacy requirements, the available computational resources, and the specific tasks to be performed. In this paper, our objective is to tackle the challenge of implementing the SMPC protocol in real-world scenarios and at scale on machines with modest computational resources, all while striving to minimize the execution time. Our main goal is to optimize the code, making it more efficient and accessible to a broader audience, ultimately empowering SMPC. We offer neural network inference solutions with semi-honest security [11] executed on virtual machines with less than 1 GB of RAM; the previous baseline implementation required 8.03 GB of RAM on virtual machines to execute the same neural network model. In a practical real-world setting, this reduction in RAM usage results in a significant cost reduction (in dollar terms) [3].
To accomplish this, we modified the C++ based MOTION2NX framework [6] and included additional functionality to provide a _resource-optimized_ implementation for secure inferencing tasks. We begin by examining MOTION2NX's limitations, such as memory issues and the lack of interoperability between tensor and non-tensor operations. The proposed enhancements include leveraging efficient tensor operations, overcoming the absence of an argmax function using novel approaches, optimizing memory usage, and reducing execution time with the introduction of a third-party Helper node. These improvements aim to enhance the framework's capabilities and efficiency while maintaining data privacy and integrity (refer to Section III for details).
In our optimized implementation, it is important to highlight that the memory usage of a standard \(N\)-layer neural network is determined by its largest layer, i.e., the layer with the most parameters. This feature makes our implementation highly scalable, as the memory footprint does not grow with the number of layers of the neural network. Moreover, we present an approach to further decrease the memory footprint by splitting the computations of the largest layer without compromising privacy or accuracy.
We use the data-provider framework of SMPC: there are two compute servers (which execute the ABY2.0 SMPC protocol [9]) and two data providers (which hold the input data and the neural network model, respectively). The data providers send shares of their data to the compute servers for computation in this framework (see Section II-D for details). We assume that the reader is familiar with the ABY2.0 protocol [9].
### _Related Work_
Over the past several years, there has been an increased focus on the practical application of SMPC to real-world problems. Below, we describe two real-life applications of SMPC.
_Secure Auction:_ In Denmark, farmers sell sugar beets to Danisco. The Market Clearing Prices, which represent the price per unit of the commodity that balances total supply and demand in the auction, play a pivotal role in allocating contracts among farmers, ensuring a fair and efficient distribution of production rights. To preserve bid privacy in this process, a three-party SMPC system involving representatives from Danisco and two other organizations was employed [5].
_Secure Gender Wage Gap Study:_ Here, specialized software facilitates collaborative compensation data analysis for organizations, as in the Boston Women's Workforce Council (BWWC) study in Greater Boston [7]. The application integrates SMPC techniques so that aggregate compensation data can be computed collectively while preserving individual privacy. This approach empowers organizations to collaborate effectively while upholding data privacy.
We remark that the two SMPC applications described above are neither memory-intensive nor compute-intensive. In this paper, by contrast, we present modifications and functional additions to the MOTION2NX framework for the practical implementation of a secure neural network inference task that is _both_ memory-intensive and compute-intensive. These modifications and updates are a step towards secure disease prediction (see [2] for secure medical image analysis), where one party provides secret shares of medical images while the other party provides secret shares of a pre-trained neural network model.
### _Our Contributions_
The following is the list of our contributions, specifically to the MOTION2NX setup.
* Extending MOTION2NX to the setup where data providers and compute servers are separate entities.
* Creating an argmax function for multiple (\(>2\)) inputs.
* Enabling writing of output shares to files and preventing reconstruction of the output at the compute servers. The compute servers send their respective final shares to the output owners.
* Inter-operability of tensor and non-tensor operations.
* A new optimized Helper node algorithm, working in conjunction with the ABY2.0 protocol, for efficient matrix multiplication.
* Our optimized 5-layer neural network inference requires about 0.2 GB of RAM, whereas the original non-optimized version requires \(8.03\) GB of RAM. This implies an over \(40\times\) reduction in RAM usage.
* For obtaining realistic numbers, we opted to deploy the compute servers and the Helper node on the cloud. Setting up and operating these systems requires a considerable amount of time due to the installation of the necessary dependencies and the compilation of binaries on each machine. To simplify this process, we offer Docker images that contain all the essential dependencies and the compiled binaries of our new enhancements. The SMPC compute servers can now be easily brought up as Docker containers from these images on any machine with a Docker installation.
The source code of our optimized implementations along with Docker images is available at https://github.com/datakaveri/iudx-MOTION2NX.
## II Preliminaries
In this section, we provide preliminary details of our secure neural network inferencing implementation.
### _Framework for Implementation_
In this paper, we consider MOTION2NX, a C++ framework for generic mixed-protocol secure two-party computation. The baseline MOTION2NX has the following features:
* Assumes data providers are a part of compute servers
* No intermediate values are reconstructed
* Assumes either of the compute servers as output owners
* Output is reconstructed in clear and shared with the output owner
_Tensor and non-tensor variants:_ MOTION2NX offers non-optimized secure functions that compile descriptions of low-level circuits, referred to as non-tensor operations. It also provides optimized building blocks that directly implement common high-level operations, referred to as tensor operations. The tensor operations are more computationally efficient than the primitive operations: a specialized executor evaluates the tensor operations sequentially while parallelizing each operation internally with multi-threading and SIMD instructions.
We discuss the details of our proposed enhancements and optimizations in Section III.
### \(N\)_-layer Neural Network_
Our optimizations in MOTION2NX enable us to implement deep neural networks with a relatively large number of layers. For illustrative purposes, we present details only for 2-layer and 5-layer neural networks on the MNIST dataset. A similar procedure can be adopted for any other dataset with pretrained models having multiple layers.
The input to the neural network is a real-valued vector of size \(784\times 1\). The output is a boolean vector of size \(10\), where only one element is set to \(1\). Each element in the output vector corresponds to an index from \(\{0,1,2,\ldots,9\}\). If the \(j^{th}\) element in the vector is \(1\), it indicates that the predicted label is \(j\). The dimensions of weights and biases used in our implementation are listed below.
#### Ii-B1 2 layer neural network dimensions
* Layer 1 : Weights \((256\times 784)\), bias \((256\times 1)\)
* Layer 2 : Weights \((10\times 256)\), bias \((10\times 1)\)
#### Ii-B2 5 layer neural network dimensions
* Layer 1 : Weights \((512\times 784)\), bias \((512\times 1)\)
* Layer 2 : Weights \((256\times 512)\), bias \((256\times 1)\)
* Layer 3 : Weights \((128\times 256)\), bias \((128\times 1)\)
* Layer 4 : Weights \((64\times 128)\), bias \((64\times 1)\)
* Layer 5 : Weights \((10\times 64)\), bias \((10\times 1)\)
Algorithm 1 describes a simple two-layer secure neural network inference implementation with ReLU activation. This algorithm takes ABY2.0 shares of input data (image), neural network weights, and biases as inputs and produces ABY2.0 shares of the predicted label as output.
We recall that ABY2.0 shares consist of a pair comprising a public share and a private share [9]. For instance, the ABY2.0 shares of an input variable \(y\) associated with server \(i\) are represented as the pair \((\Delta_{y},[\delta_{y}]_{i})\), where \(\Delta\) and \(\delta\) denote the public and private share, respectively. Most of the secure functions listed in Algorithm 1 are provided by the MOTION2NX framework; the exception is SecureArgmax, which we discuss below.
```
0: Input image shares \(x^{i}\), weight shares \(w_{1}^{i},w_{2}^{i}\), bias shares \(b_{1}^{i},b_{2}^{i}\). All the above shares are vectors in the form of ABY2.0 shares
0: Shares of predicted class label \(\hat{y}^{i}\)
1: Compute first layer:
2:\(z_{1}^{i}=\)SecureAdd(SecureMul\((w_{1}^{i},x^{i}),b_{1}^{i}\))
3:\(h_{1}^{i}=\)SecureReLU\((0,z_{1}^{i})\)
4:Compute Second layer:
5:\(z_{2}^{i}=\)SecureAdd(SecureMul\((w_{2}^{i},h_{1}^{i}),b_{2}^{i}\))
6:Compute predicted class label:
7:\(\hat{y}^{i}=\)SecureArgmax(\(z_{2}\))
8:Return\(\hat{y}^{i}\)
```
**Algorithm 1** Neural Net inferencing task with ReLU activation function at compute server-\(i\), \(i\in\{0,1\}\)
### _Reconstructing the Predicted label in clear_
Algorithm 2 outlines the process of deriving the predicted label from boolean output shares. With MNIST, the label is a number from \(0\) to \(9\). The output here is a boolean vector of 10 elements indexed from \(0\) to \(9\), with only one element set to 1; a \(1\) in the \(j^{th}\) element signifies the predicted label as \(j\).
```
0: Boolean ABY2.0 shares from both the servers \(\hat{y}^{0},\hat{y}^{1}\)
0: Predicted class label \(Y\)
1:for j = 0 to 9 do
2:\(\hat{y}[j]=\Delta_{\hat{y}[j]}\oplus[\delta_{\hat{y}[j]}]_{0}\oplus[\delta_{ \hat{y}[j]}]_{1}\)
3:endfor
4: Initialize index \(k\gets 0\)
5:while\(k<10\) and \(\hat{y}[k]=0\)do
6: Increment index \(k\gets k+1\)
7:endwhile
8:Return\(k\)\(\triangleright\) The predicted label
```
**Algorithm 2** Reconstructing the Predicted label in clear
Unfortunately, Algorithm 1 could not be executed end-to-end in MOTION2NX using its built-in functions. In a single MOTION2NX instance, only one of the tensor and non-tensor variants can be utilized. Regrettably, the tensor variant lacks an argmax function, which prevented us from implementing Algorithm 1 with its standard functions. Nevertheless, we were able to address this issue by applying the optimizations outlined in Section III-B, which provides further insights into this matter.
We remark that the steps outlined in Algorithm 1 can be readily extended to provide secure inferencing results for a general fully-connected neural network with \(N\left(>2\right)\) layers.
### _Data Provider Model_
In this model, the data providers (the image provider and the model provider) create shares of their private data and communicate them to the compute servers for further computation. The compute servers perform the inferencing task and send the output shares to the image provider; the compute servers never learn the clear output result. For the secure inferencing task, we consider a neural network model that is pretrained and proprietary to the model provider; similarly, the image used for the inferencing task is private to the image provider.
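A minimal sketch of the share generation performed by each data provider is shown below, following the ABY2.0 sharing semantics \(\Delta_{y}=y+\delta_{y}\) over \(\mathbb{Z}_{2^{64}}\); the function names are our own.

```python
import secrets

MASK = (1 << 64) - 1  # arithmetic over Z_{2^64}, as in ABY2.0

def make_aby2_shares(y):
    """Split a uint64 value y into ABY2.0 shares for two compute servers.

    Server i receives (Delta_y, [delta_y]_i); neither server alone can
    recover y, since Delta_y = y + delta_y mod 2^64 and each server holds
    only one additive piece of delta_y.
    """
    d0, d1 = secrets.randbits(64), secrets.randbits(64)
    Delta = (y + d0 + d1) & MASK
    return (Delta, d0), (Delta, d1)

def reconstruct(share0, share1):
    """The output owner combines both private shares to recover y."""
    (Delta, d0), (_, d1) = share0, share1
    return (Delta - d0 - d1) & MASK
```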
## III Enhancements, Optimizations, and Feature Additions to MOTION2NX
Before we proceed with a discussion of the enhancements proposed in our paper, let's first examine the limitations of MOTION2NX.
_Limitations:_
* No provision to implement data provider model
* Output is always reconstructed at the compute server(s)
* Memory issues : A simple 2-layer neural network inference requires about 3.2 GB of RAM
* No interoperability between tensor and non-tensor operations.
### _Floating point to fixed point and back, and potential errors introduced_
MOTION2NX internally operates on unsigned integers (uint64). We therefore designed appropriate encode functions that convert real numbers to uint64 values for a chosen number of fractional bits; during reconstruction, the uint64 values are mapped back to real numbers. This process introduces an error due to fixed-point arithmetic. Interestingly, for the inferencing task on the MNIST dataset, we observe that with as few as 6 fractional bits, no considerable error is introduced: the accuracy of the secure computation with 6 fractional bits matches the accuracy of the floating-point inferencing task when tested on 1000 MNIST test images. To understand the error introduced by secure computation with uint64 operations, we compared the cross-entropy loss of the output vector of layer 2 of a 2-layer neural network, before the argmax operation. The cross-entropy loss decreases as we increase the number of fractional bits up to 24; a further increase in fractional bits increases the cross-entropy loss, and beyond this threshold the accuracy of the SMPC model decreases drastically. This observation is in line with [8, paragraph 4, Section 5.1.1], which discusses the adverse impact of increasing fractional bits due to wrap-around errors during the truncation operation.
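The fixed-point conversion can be sketched as follows; this is a stand-alone illustration of the encode/decode described above (with 13 fractional bits as used in Section IV), and the function names are our own.

```python
MOD = 1 << 64  # MOTION2NX works over uint64

def encode(x, f=13):
    """Real number -> uint64 fixed point with f fractional bits;
    negative values wrap around modulo 2^64 (two's-complement style)."""
    return int(round(x * (1 << f))) % MOD

def decode(u, f=13):
    """uint64 fixed point -> real number."""
    if u >= MOD // 2:        # interpret the upper half of the ring as negative
        u -= MOD
    return u / (1 << f)

assert abs(decode(encode(-3.14159)) + 3.14159) < 2 ** -12
```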
### _Implementing Argmax and writing output shares_
MOTION2NX can implement functions (matrix multiplication, addition, etc.) using either tensor or non-tensor operations. As discussed in Section II-A, tensor operations are optimized versions that enable faster calculations; therefore, to perform neural network inference, the MOTION2NX authors recommend using tensor operations. In practice, we verified that the inferencing task on the MNIST dataset takes around 1 minute using tensor operations, whereas a single multiplication of two numbers in the non-tensor version takes about 1.5 seconds. Since an MNIST inferencing task on a two-layer neural network involves on the order of \(\approx 10^{6}\) operations, the tensor operations are clearly much more efficient than the non-tensor operations.
To implement the neural network we need an argmax function (providing the index of the maximum value in a vector) for classifying the MNIST image. However, MOTION2NX has no provision to implement the argmax function using tensor operations. Although there are functions that compute the max and argmax of two inputs using non-tensor operations, they cannot be used in the inferencing task as _there is no provision to interlink tensor and non-tensor operations in MOTION2NX_. We modified MOTION2NX to store the output shares of the neural network just before executing the argmax function, while ensuring that the corresponding clear values of the output shares are never reconstructed in the code. This feature is an improvement over the existing framework, as it enables us to use tensor and non-tensor operations together sequentially. The output shares of the last layer of the neural network are fed to the non-tensor implementation of the argmax function to obtain the required shares of the predicted label. This argmax function operates on a vector with multiple (\(>2\)) inputs and internally uses the built-in two-input max and argmax functions recursively to generate the output shares of the predicted label.
In our implementation, we pinpointed the stage at which output shares become accessible, just before the reconstruction step. Note that in the original MOTION2NX framework, the compute servers initiate the transmission of their private share for the purpose of reconstruction. We modified the MOTION2NX framework to halt the broadcast of the private shares, and instead save the local public and private shares to respective files at the two compute servers. It is important to note that the private shares of each server remain local (not shared with the other server), ensuring no loss of privacy.
### _Optimizing Memory Requirement_
In the context of secure inferencing tasks on the MNIST dataset, we observed a substantial RAM requirement of approximately 3.2 GB (per server instance) for a two-layer neural network. In practical terms, this RAM demand poses a significant obstacle to performing inferencing tasks on a resource-constrained machine. The issue is further aggravated when working with a neural network with a relatively large number of layers. To address this challenge, our primary objective was to reduce the memory requirement, thereby facilitating the use of more complex neural networks.
Notably, for a multi-layer neural network, MOTION2NX constructs the entire end-to-end circuit in a single step and holds this memory in RAM until the entire execution has completed. Our approach, as outlined in Section III-B, writes the shares of intermediate results to files while ensuring that no reconstruction is performed. This significantly reduces the RAM requirement for multi-layer neural network inferencing tasks.
In the specific context of the 2-layer neural network inferencing model (with dimensions detailed in Section II-B1), the highest memory requirement arises from the matrix multiplication in layer 1. To mitigate this, we perform an intra-layer optimization in which the matrix multiplication is executed in smaller segments (or "splits"), resulting in a proportional decrease in the average RAM requirement. As Table I shows, the average RAM requirement scales down almost linearly with the number of splits employed.
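The intra-layer split can be pictured with the following plaintext NumPy sketch; in the secure setting, each row block would be evaluated as its own (smaller) tensor circuit, which is what caps the peak RAM. The helper name is our own.

```python
import numpy as np

def split_matmul(W, x, n_splits):
    """Evaluate W @ x in row segments so that only one segment of the
    layer-1 weight matrix is processed (and resident) at a time."""
    out = np.empty((W.shape[0], x.shape[1]))
    for idx in np.array_split(np.arange(W.shape[0]), n_splits):
        out[idx] = W[idx] @ x      # one small circuit per split
    return out

W, x = np.random.randn(256, 784), np.random.randn(784, 1)
assert np.allclose(split_matmul(W, x, 8), W @ x)
```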
### _Optimizing Execution Time using Helper Node Algorithm_
After successfully reducing the average RAM requirement, our next objective was to optimize the execution time. Here, we ran the compute servers on different machines on the same LAN to obtain realistic execution times; the numbers are detailed in Tables II and III. For secure matrix multiplication, the execution time is significantly impacted by the use of
oblivious transfers (OTs) which occur behind the scenes. To address this issue, we introduce a semi-honest third-party Helper node that eliminates the need for OTs during matrix multiplication.
As a result of implementing the Helper-node algorithm, the execution time of the inference operation is reduced from 77 seconds (for the baseline implementation with no intra-layer optimization on a 2-layer neural network) to 11 seconds. Additionally, the RAM requirement when utilizing the Helper-node algorithm is only 0.134 GB (see Table II).
Note that we refer to this algorithm as "ABY2.0 with a Helper node", drawing inspiration from the implementation of Beaver triples produced by a third-party helper in [10]. The primary objective of our Helper node is to eliminate the need for Oblivious Transfers (OTs) in both the online and setup phases of ABY2.0. In this context, we specifically delve into how the Helper node's algorithm modifies the ABY2.0 Multiplication Protocol, denoted as Protocol MULT(\(<\)\(a\)\(>\), \(<\)\(b\)\(>\)) in [9, Section 3.1.3].
In summary, Protocol MULT(\(<\)\(a\)\(>\), \(<\)\(b\)\(>\)) takes ABY2.0 shares of \(a\) and \(b\) from two parties as inputs. It performs operations to generate output shares representing the product \(a\times b\). Importantly, in this protocol, neither party possesses knowledge of the clear output unless they engage in communication to share their respective output secret shares.
It's worth noting that Protocol MULT (\(<\)\(a\)\(>\), \(<\)\(b\)\(>\)) relies on an OT-based setupMULT during the pre-processing phase. Our specific goal is to eliminate this setupMULT and replace it with a more efficient approach using helperNODE. Here, the objective is to calculate
\[\delta_{ab}=[([\delta_{a}]_{0}+[\delta_{a}]_{1})([\delta_{b}]_{0}+[\delta_{b}] _{1})], \tag{1}\]
and share the additive components of this expression with both server nodes. In this process, party 0 conveys \([\delta_{a}]_{0}\) and \([\delta_{b}]_{0}\) to the Helper node through a reliable channel, while party 1 conveys \([\delta_{a}]_{1}\) and \([\delta_{b}]_{1}\) to the Helper node via another reliable channel. Subsequently, the Helper node computes \(\delta_{ab}\) and distributes additive shares of it to both parties. It is crucial to note that the Helper node is _incapable of reconstructing the actual values of \(a\) and \(b\), as it possesses no knowledge of \(\Delta_{a}\) and \(\Delta_{b}\)_. Additionally, we assume that both compute servers and the Helper node operate in a semi-honest fashion. It is important to emphasize that if the information held by the Helper node were shared with either of the two servers, it would jeopardize the privacy of the input data. To uphold these properties, one may consider running the compute servers and the Helper node on separate secure-enclave machines (see [4] for details on secure enclaves).
Algorithm 3 provides an overview of the multiplication protocol using the Helper node; it requires the HelperNODE procedure in the setup phase. For readability, Algorithm 3 describes the multiplication of two scalar values \(a\) and \(b\); we carry this very idea over to our optimized implementation of matrix multiplication.
```
1:Setup Phase:
2:P\({}_{i}\) for \(i\in 0,1\) samples random \([\delta_{y}]_{i}\in_{R}\mathbb{Z}_{2}^{64}\).
3:Parties execute helperNODE(\([\delta_{a}],[\delta_{b}]\)) to obtain \([\delta_{ab}]_{i}\).
4:Online Phase:
5:P\({}_{i}\) for \(i\in 0,1\) locally computes \([\Delta_{y}]_{i}=i\Delta_{a}\Delta_{b}-\Delta_{a}[\delta_{b}]_{i}-\Delta_{b}[ \delta_{a}]_{i}+[\delta_{ab}]_{i}+[\delta_{y}]_{i}\) and sends to P\({}_{1-i}\).
6:P\({}_{i}\) for \(i\in 0,1\) computes \(\Delta_{y}=[\Delta_{y}]_{0}+[\Delta_{y}]_{1}\).
7:procedurehelperNODE(\([\delta_{a}],[\delta_{b}]\))
8:P\({}_{i}\) for \(i\in 0,1\) send \(([\delta_{a}]_{i},[\delta_{b}]_{i})\) to the Helper node.
9:Helper node computes equation (1).
10:Helper node creates arithmetic shares \(([\delta_{ab}]_{0},[\delta_{ab}]_{1})\), such that \(\delta_{ab}:=[\delta_{ab}]_{0}+[\delta_{ab}]_{1}\)
11:Helper node sends \([\delta_{ab}]_{i}\) to P\({}_{i}\) for \(i\in 0,1\)
12:endprocedure
```
**Algorithm 3** Protocol HELPERMULT(\(<\)\(a\)\(>\), \(<\)\(b\)\(>\))
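The arithmetic of Algorithm 3 can be checked with a few lines of plain Python (scalar case; no networking or fixed-point truncation, and the function names are our own):

```python
import secrets

MASK = (1 << 64) - 1

def helper_node(da0, db0, da1, db1):
    """Helper computes delta_ab = (da0 + da1) * (db0 + db1) mod 2^64 and
    returns fresh additive shares; it never sees Delta_a or Delta_b, so
    the actual values a and b remain hidden from it."""
    delta_ab = ((da0 + da1) * (db0 + db1)) & MASK
    s0 = secrets.randbits(64)
    return s0, (delta_ab - s0) & MASK

def local_Delta_y(i, Delta_a, Delta_b, da_i, db_i, dab_i, dy_i):
    """Server i's local value [Delta_y]_i from step 5 of Algorithm 3."""
    return (i * Delta_a * Delta_b - Delta_a * db_i - Delta_b * da_i
            + dab_i + dy_i) & MASK
```

Summing the two exchanged values yields \(\Delta_{y}=ab+\delta_{y}\bmod 2^{64}\), which matches the ABY2.0 sharing invariant.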
## IV Numerical Evaluation
Tables II and III outline the execution time for 2-layer and 5-layer neural networks, respectively, when the compute resources are run on the same LAN. In practice, the compute servers are typically hosted on the cloud (on different LANs). To determine the execution time, we deployed compute server 0 on Microsoft Azure cloud and compute server 1 (and the Helper node) on AWS cloud. The image provider and weights/model provider run on separate local machines. See Table IV for details of the cloud configurations.
The image provider's terminal accepts a handwritten digit and converts it into a \(28\times 28\) .csv file, which is then flattened to a \(784\times 1\) format. The model provider possesses the weights and biases of a pretrained model. Both the image provider and the model provider generate ABY2.0 shares of their respective private data and send these shares to the compute servers hosted on the cloud. After sharing these secret shares, the image provider awaits the output shares of the predicted image label. Once the secure inferencing task has been executed, the two compute servers send their respective output shares back to the image provider, which reconstructs the predicted image label in plain text. Tables V and VI list the details for the 2-layer and 5-layer neural networks, respectively, when the compute servers run on the cloud. As expected, the execution time with compute servers running on cloud machines (on different LANs) is higher than when the compute servers are on the same LAN (compare Tables II and III with Tables V and VI).
| Splits | RAM requirement | Execution time |
| --- | --- | --- |
| No intra-layer split | 3.2 GB | 34 seconds |
| Layer 1: 2 splits | 1.6419 GB | 38 seconds |
| Layer 1: 4 splits | 0.888 GB | 43 seconds |
| Layer 1: 8 splits | 0.4527 GB | 47 seconds |
| Layer 1: 16 splits | 0.253 GB | 50 seconds |
| Layer 1: 64 splits | 0.09888 GB | 73 seconds |

TABLE I: Compute servers run on the same machine for the 2-layer NN. Note that there is no intra-layer split for layer 2.
We remark that the accuracy of our secure neural network inferencing implementation on the MNIST dataset (using 64-bit fixed-point arithmetic with 13 bits representing the fractional part) was roughly similar to the corresponding accuracy obtained using a Python floating-point implementation.
## V Model known to compute servers
We recognized that there are situations where the neural network model parameters are not private but rather common knowledge, while the image provider still values the privacy of its data. To address such scenarios, we introduced new functionality in which common-knowledge variables are treated as known, unencrypted values. To achieve this, we added two operations, "ConstantMul" and "ConstantAdd", to the MOTION2NX framework; these operations were not previously available. We discuss them in Algorithms 4 and 5, respectively.
When the weights are considered common knowledge, we observed that the inference time and RAM requirements are significantly reduced compared with the baseline (where the model is private to one of the data providers). With the model known to both compute servers, the inference time for a two-layer neural network is approximately 13 seconds with a RAM requirement of 0.134 GB; for a five-layer neural network, the inference time is around 34 seconds with a RAM requirement of 0.2 GB. These numbers are similar to those obtained using the Helper node. This reduction in execution time is expected, since the overhead of performing OTs during the secure multiplication of two private values is eliminated.
Let \(a\) be the private data and \(b\) the publicly known data. In the following, we use the ABY2.0 shares of \(a\) and the uint64 equivalent of \(b\) with \(f\) fractional bits, and let \(y=a\times b\). It is important to highlight that during the online phase, after multiplying the shares by the constant value, we transform the ABY2.0 shares into GMW arithmetic shares, denoted \(Y_{i}\) in Algorithm 4. This transformation is needed to prevent wrap-around errors during the truncation operation performed on \(Y_{i}\) (see [8, paragraph 4, Section 5.1.1] for a discussion of wrap-around errors during the truncation of shares). The truncation operation ensures that, after the multiplication, the fractional part continues to be represented by the \(f\) least significant bits.
```
1:Setup Phase:\(P_{i}\) for \(i\in\{0,1\}\) samples \([\delta_{y}]_{i}\in_{\mathbb{R}}Z_{2^{64}}\)
2:\(P_{i}\) updates \([\delta_{a}]_{i}\) with \([\delta_{a}]_{i}\times b\)
3:Online Phase:\(P_{i}\) updates \(\Delta_{a}\) with \(\Delta_{a}\times b\)
4:\(P_{0}\) locally generates \(Y_{0}=0\times\Delta_{a}-[\delta_{a}]_{0}\)
5:\(P_{1}\) locally generates \(Y_{1}=1\times\Delta_{a}-[\delta_{a}]_{1}\)
6:\(P_{i}\) for \(i\in\{0,1\}\) performs truncation operation: \(Y_{i}=\frac{Y_{i}}{2f}\)
7:\(P_{i}\) locally computes \([\Delta_{y}]_{i}=Y_{i}+[\delta_{y}]_{i}\) and sends to \(P_{1-i}\)
8:Both \(P_{0}\) and \(P_{1}\) calculate \(\Delta_{y}=[\Delta_{y}]_{0}+[\Delta_{y}]_{1}\)
```
**Algorithm 4** Protocol ConstantMULT(\(<\)\(a\)\(>,b\))
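A plain-Python sketch of server \(i\)'s local steps in Algorithm 4 is shown below (scalar case; the function name is our own). The per-share right shift is exactly the truncation step whose wrap-around behavior motivates the conversion to GMW shares.

```python
import secrets

MASK = (1 << 64) - 1

def constant_mult_local(i, Delta_a, da_i, b_fixed, f=13):
    """Server i multiplies its shares of a by the public fixed-point
    constant b_fixed, forms the GMW-style share Y_i, truncates f
    fractional bits, and re-randomizes into an ABY2.0 output share."""
    da_i = (da_i * b_fixed) & MASK        # setup: [delta_a]_i <- [delta_a]_i * b
    Delta_a = (Delta_a * b_fixed) & MASK  # online: Delta_a <- Delta_a * b
    Y_i = (i * Delta_a - da_i) & MASK     # GMW arithmetic share of a*b
    Y_i >>= f                             # truncation (wrap-around possible)
    dy_i = secrets.randbits(64)           # fresh private output share
    return (Y_i + dy_i) & MASK, dy_i      # ([Delta_y]_i to exchange, [delta_y]_i)
```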
## VI Conclusion and Future Work
We modified and enhanced the MOTION2NX framework to bridge the gap between scalability, memory efficiency and privacy. In particular, we optimized the memory usage, reduced the execution time using a third-party Helper node, and enhanced the efficiency while still preserving data privacy.
| Which layers are split, number of splits | RAM requirement | Time of execution |
| --- | --- | --- |
| Layer 1: 16 splits; Layer 2: 8 splits; Layer 3: 4 splits; Layer 4: 2 splits; Layer 5: no split | 0.461 GB | 525 seconds |
| **Helper node** | 0.2 GB | 34 seconds |

TABLE VI: Compute servers and Helper node are run on the cloud for the 5-layer neural net.
| Splits | RAM requirement | Execution time |
| --- | --- | --- |
| No intra-layer split | 3.2 GB | 77 seconds |
| Layer 1: 8 splits | 0.4527 GB | 100.3 seconds |
| Layer 1: 16 splits | 0.253 GB | 108.45 seconds |
| Layer 1: 64 splits | 0.09888 GB | 121.53 seconds |
| **Helper node** | 0.134 GB | 11 seconds |

TABLE II: Compute servers and Helper node are run on different machines on the same LAN for the 2-layer neural net. Note that there is no intra-layer split for layer 2.
TABLE IV: Cloud configuration.

| Splits | RAM requirement | Time of execution |
| --- | --- | --- |
| Layer 1: 16 splits | 0.253 GB | 200 seconds |
| Layer 1: 64 splits | 0.0988 GB | 280 seconds |
| **Helper node** | 0.134 GB | 12 seconds |

TABLE V: Compute servers and Helper node are run on the cloud for the 2-layer neural net. Note that there is no intra-layer split for layer 2.
These optimizations enable MNIST dataset inference in just 32 seconds with only 0.2 GB of RAM for a 5-layer neural network. In contrast, the previous baseline implementation required 8.03 GB of RAM and 200 seconds of execution time.
After successfully deploying an \(N\)-layer neural network inference implementation, we proceeded to implement CNN inference using the optimized strategies outlined in Section III-B. These optimizations yielded remarkable results in terms of reduced inference time and lower average memory consumption. Our next objective is to optimize a secure neural network _training_ implementation.
## Acknowledgements
We want to thank all the interns (Udbhav, Rishab, Rashmi, and Shreyas) who extended their support to this project. We also thank Abhilash for helping us setup the cloud instances.
|
2310.06989 | TDPP: Two-Dimensional Permutation-Based Protection of Memristive Deep
Neural Networks | The execution of deep neural network (DNN) algorithms suffers from
significant bottlenecks due to the separation of the processing and memory
units in traditional computer systems. Emerging memristive computing systems
introduce an in situ approach that overcomes this bottleneck. The
non-volatility of memristive devices, however, may expose the DNN weights
stored in memristive crossbars to potential theft attacks. Therefore, this
paper proposes a two-dimensional permutation-based protection (TDPP) method
that thwarts such attacks. We first introduce the underlying concept that
motivates the TDPP method: permuting both the rows and columns of the DNN
weight matrices. This contrasts with previous methods, which focused solely on
permuting a single dimension of the weight matrices, either the rows or
columns. While it's possible for an adversary to access the matrix values, the
original arrangement of rows and columns in the matrices remains concealed. As
a result, the extracted DNN model from the accessed matrix values would fail to
operate correctly. We consider two different memristive computing systems
(designed for layer-by-layer and layer-parallel processing, respectively) and
demonstrate the design of the TDPP method that could be embedded into the two
systems. Finally, we present a security analysis. Our experiments demonstrate
that TDPP can achieve comparable effectiveness to prior approaches, with a high
level of security when appropriately parameterized. In addition, TDPP is more
scalable than previous methods and results in reduced area and power overheads.
The area and power are reduced by, respectively, 1218$\times$ and 2815$\times$
for the layer-by-layer system and by 178$\times$ and 203$\times$ for the
layer-parallel system compared to prior works. | Minhui Zou, Zhenhua Zhu, Tzofnat Greenberg-Toledo, Orian Leitersdorf, Jiang Li, Junlong Zhou, Yu Wang, Nan Du, Shahar Kvatinsky | 2023-10-10T20:22:17Z | http://arxiv.org/abs/2310.06989v1 | # TDPP: Two-Dimensional Permutation-Based Protection of Memristive Deep Neural Networks
###### Abstract
The execution of deep neural network (DNN) algorithms suffers from significant bottlenecks due to the separation of the processing and memory units in traditional computer systems. Emerging memristive computing systems introduce an in situ approach that overcomes this bottleneck. The non-volatility of memristive devices, however, may expose the DNN weights stored in memristive crossbars to potential theft attacks. Therefore, this paper proposes a two-dimensional permutation-based protection (TDPP) method that thwarts such attacks. We first introduce the underlying concept that motivates the TDPP method: permuting both the rows and columns of the DNN weight matrices. This contrasts with previous methods, which focused solely on permuting a single dimension of the weight matrices, either the rows or columns. While it's possible for an adversary to access the matrix values, the original arrangement of rows and columns in the matrices remains concealed. As a result, the extracted DNN model from the accessed matrix values would fail to operate correctly. We consider two different memristive computing systems (designed for layer-by-layer and layer-parallel processing, respectively) and demonstrate the design of the TDPP method that could be embedded into the two systems. Finally, we present a security analysis. Our experiments demonstrate that TDPP can achieve comparable effectiveness to prior approaches, with a high level of security when appropriately parameterized. In addition, TDPP is more scalable than previous methods and results in reduced area and power overheads. The area and power are reduced by, respectively, 1218\(\times\) and 2815\(\times\) for the layer-by-layer system and by 178\(\times\) and 203\(\times\) for the layer-parallel system compared to prior works.
Memristor, deep neural network, permutation-based protection, security.
## I Introduction
Artificial intelligence (AI) techniques have enabled machines to surpass human capabilities in research areas such as image recognition and have become an integral part of society. AI uses advanced deep neural network (DNN) algorithms such as convolutional neural networks to accomplish its tasks [1]. The separation of processing and memory units in modern computer architecture, however, means that a tremendous amount of energy is utilized when executing the data-intensive DNN algorithms [2]. Emerging memristive computing systems have demonstrated great potential in boosting the energy efficiency of the DNN algorithms [2, 3]. Their advantage is their ability to store the DNN weights and process them in memory, thereby avoiding the tremendous data movement between the computing and memory units [2].
Despite this appealing advantage, the security of memristive computing systems has yet to receive sufficient attention. That is, as shown in Fig. 1, DNN models stored in the memristive computing systems face theft attacks because of the non-volatility of memristive devices. While the memristive devices' non-volatility might be appealing, it facilitates data theft attacks, which are real threats [4, 5, 6, 7] in scenarios of using memristive devices as main memory. If a memristor-based Dual In-line Memory Module (DIMM) is stolen, an adversary can stream out the data stored in the memory from the DIMM. For memristive computing systems, the current commercial memristive chips are embedded in boards with M.2 [8] or PCIe [9] interfaces. Moreover, the memristive chips may also be equipped with I/O ports such as GPIOs and I\({}^{2}\)C [10]. These universal interfaces and ports allow an adversary to steal the data from the memristive chips. Thus, the adversary, having physical access to the memristive computing systems, could steal the DNN weights stored in the memristive crossbars by exploiting the data persistence of memristive devices. Once in possession of the DNN weights, the adversary may reverse-engineer the well-trained DNN models stored in the memristive computing systems. The stolen DNN models could be sold illegally to customers, resulting in copyright infringement and economic losses to the DNN model designers. Additionally, if the models are trained with proprietary datasets, the stolen models could leak private information, such as patients' information in a medical system.
Fig. 1: The DNN models loaded in memristive computing systems face potential theft attacks due to the non-volatility of memristor devices.
The existing protection methods for memristive main memory, such as counter mode encryption [4, 5, 6, 7] are based on encrypting the data with conventional cryptographic algorithms and decrypting them while they are being used. The methods, however, are not suitable for memristive computing systems because they require frequent writing operations to the memristive devices, which leads to extra high costs in both energy and latency. Given that the endurance property of the state-of-the-art memristive devices is limited [11], the extra writing operations could also shorten the lifetime of the memristive computing systems. Even worse, these methods would open an attack window for the adversary to exploit when the DNN weights on the memristive crossbars executing the DNN algorithms are decrypted. Though the time window may be narrow, the adversary could use side-channel analysis to pinpoint the exact execution time of each DNN layer and then turn off the systems to stream out the DNN weights of those layers. For instance, [12] encrypted only part of the DNN weights to reduce decryption time. Nevertheless, this partial encryption method still involves frequent writing operations to some memristive devices, and the attack windows, though minor, persist.
Another type of protection method calls for transforming the DNN weight matrices. It does not rely on encrypting the DNN weights; thus, the shortcomings of the above methods are avoided. This type of method provides round-the-clock security for the DNN weights, i.e., whenever the adversary carries out theft attacks, the DNN weights are always protected. [13] suggested selectively encoding some columns of weights as their ones' complement and leaving the others untouched. The adversary does not know which columns of weights are encoded, so the actual representation of the weights is hidden. This method, however, may increase the output value range at bitlines (BLs) and thus require a higher-precision analog-to-digital converter (ADCs) [3]. Another sort of weight matrix transforming is matrix row/column permutation. The protection proposed by [14] was to hide the row connections between crossbar pairs. Conversely, [15] suggested grouping memristive crossbars into multiple virtual operation units (VOU) and permuting the VOUs along the column dimension. Nevertheless, the existing matrix row/column permutation methods have some shortcomings and challenges that need to be countered:
(1) **Scalability**. Both methods assume the crossbar digital-to-analog converters (DACs) and ADCs are shared among wordlines (WLs) and BLs, respectively, and that they can reduce the hardware overheads of their respective protection methods by exploiting DAC/ADC multiplexing. Typically, for a 256 \(\times\) 256 crossbar, they assume that only 16 WLs and 16 BLs are enabled simultaneously1. In fact, the number of simultaneously activated WLs/BLs (\(x\)) varies, depending on the specific architecture and implementation. For example, NeuRRAM [16] suggested that it is possible to activate all the crossbar rows and columns simultaneously using voltage-mode sensing instead of current-mode sensing. The protection method of [14] is only applicable when \(x\) is 16 because, for its protection hardware, the output of each multiplexer (MUX) in the first layer needs to be connected to all the MUXes in the middle layer. As to the protection method of [15], it is not applicable when \(x\) is 1 or 256 since the crossbar row grouping mechanism is invalid, and when \(x\) is large, such as 128, the method becomes insecure because the number of VOUs is minimal.
Footnote 1: [15] considered 8 WLs and 8 BLs for a crossbar size of 128 \(\times\) 128.
(2) **Vulnerability**. Both [14] and [15] only considered the security of a single protected crossbar or one crossbar pair. In Section V, we investigate the security aspects of the proposed TDPP in terms of the entire model, going beyond the analysis of single crossbars or crossbar pairs. By adopting this broader perspective, our aim is to provide a more comprehensive understanding of the security implications associated with our approach. Additionally, permutation-based protection methods may be vulnerable to several types of attacks, especially divide-and-conquer attacks [17]. These potential attacks, however, were not considered in those works either. As mentioned in the above paragraph, when \(x\) is large, the methods of [14] and [15] become inapplicable and insecure, respectively.
(3) **Key strategy**. The protection hardware of both protection methods [14] and [15] is dispersed in the peripheral of every crossbar pair, complicating the peripheral design. Furthermore, for parallel execution of the crossbars, the protection keys also need to be near the crossbars. The keys would be stored in volatile memory, such as buffers or registers. Their papers do not clarify how the keys are generated and shared among the crossbars.
In this paper, we propose a two-dimensional permutation-based protection (TDPP) method permuting both the rows and columns of the DNN weight matrices, which also belongs to the matrix row/column permutation class. TDPP differs from previous works and is more advantageous in several ways, which are summarized below:
* We offer a new method involving the permutation of both rows and columns of the weight matrices. Conversely, previous works exclusively addressed the permutation of a single dimension within the weight matrices, specifically either the rows or the columns.
* We consider two different memristive computing systems (designed for layer-by-layer and layer-parallel processing, respectively) and present the design of the TDPP method for memristive computing systems that could be embedded in the two systems. We include the essential design parameters and key strategy.
* We discuss the security metrics of the proposed method, including its resistance to brute-force attacks, divide-and-conquer attacks, and known-plaintext attacks. Note that permutation-based protection methods do not guarantee absolute security. Nevertheless, TDPP aims to enhance security by introducing confusion and complexity to the arrangement of weight matrix rows and columns stored in memristor devices, thereby increasing the difficulty for attackers to extract correct DNN weights.
* We evaluated the maximum security provided by TDPP based on the minimal effort for divide-and-conquer attacks to succeed. We show that the TDPP method is highly effective, secure, and scalable. It delivers up to 1218\(\times\) and 2815\(\times\) lower area and power, respectively, than related works [14, 15].
## II Background
### _Preliminaries_
_Main parts of DNN algorithms._ The main parts of DNN algorithms are convolution (Conv) and fully-connected (FC) layers. These algorithms are dominated by vector-matrix multiplications (VMMs) because both Conv and FC layers can be implemented with VMM operations [18]. The weights of FC layers are in the form of matrices and the weights of Conv layers can also be transformed into matrices by reshaping each filter kernel into a column. For simplicity, we assume the weights of the Conv layers are already transformed into matrices. Thus, in this paper, **both the FC layer weights and the Conv layer weights are in the form of matrices**.
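As a quick illustration of this kernel-to-column reshaping, the following NumPy sketch flattens a set of Conv filters into a VMM weight matrix; the tensor layout and shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative shapes (assumed): 8 filters, 3 input channels, 3x3 kernels.
out_ch, in_ch, kh, kw = 8, 3, 3, 3
filters = np.random.randn(out_ch, in_ch, kh, kw)

# Reshape each filter kernel into one column of the weight matrix, so a
# VMM with an im2col-style input patch computes the convolution output.
W = filters.reshape(out_ch, in_ch * kh * kw).T
print(W.shape)  # (27, 8): one column per filter
```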
_Analogous VMMs with memristive crossbars._ In memristive computing systems, the memristive devices are organized in the form of crossbars. When applying voltages in the WLs of memristive crossbars, the BLs of the memristive crossbars output the accumulated currents, which is analogous to VMMs. The input feature maps of DNNs are transformed into voltages by using DACs so that they can be applied to the WLs, and the accumulated current outputs at the BLs are converted back to digital values using ADCs. Due to the non-negative conductance values from the memristive devices [19], a weight matrix is mapped to a pair of memristive crossbars, i.e., a positive crossbar (XB+) and a negative crossbar (XB-). Additionally, because of the limited precision of memristive devices, multiple crossbar pairs are used to represent a high-precision weight matrix [20].
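One common way to realize such a crossbar pair is a differential split of the signed weights. The sketch below assumes the split \(W = W^{+} - W^{-}\); the paper only states that a weight matrix maps to a pair XB+/XB-, so treat this as one plausible mapping rather than the system's exact scheme.

```python
import numpy as np

W = np.random.randn(4, 4)          # signed weight matrix
x = np.random.randn(4)             # input vector (WL voltages after DACs)

# Assumed differential mapping onto non-negative conductances:
XB_pos = np.maximum(W, 0.0)        # positive crossbar (XB+)
XB_neg = np.maximum(-W, 0.0)       # negative crossbar (XB-)

# Subtracting the BL outputs of the pair recovers the signed VMM.
y = x @ XB_pos - x @ XB_neg
assert np.allclose(y, x @ W)
```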
### _Threat Model_
As shown in Fig. 1, the well-trained DNN models are loaded into the memristive computing systems. Memristive computing systems are whole chips embedded in boards with a universal interface such as M.2 or PCIe. We assume the adversary has physical access to the memristive computing systems **but does not own the stored DNN models and is motivated to steal them from the systems**. The adversary can insert the memristor-based DIMM into their own host machine, gaining access to the host memory to know the input of the first DNN layer to the memristive computing system and the output of the last DNN layer. We also assume the adversary can stream out the values of the memristive devices through the board interface or the I/O ports by exploiting the non-volatility of memristive devices. This threat model is aligned with the existing works [12, 13, 14, 15]. The goal of the adversary is to read the DNN weights from the memristive devices. Once possessing the correct DNN weights, the adversary could extract the DNN models. Our motivation is to prevent the adversary from reading the DNN weights correctly.
## III The TDPP Method
Fig. 2 illustrates the basic idea of the TDPP method for protecting our weight matrix example. Fig. 2(a) shows the VMM operation between a four-element input vector and a \(4\times 4\) weight matrix, which is plainly mapped to the memristive devices. Thus, the adversary could correctly read the weight matrix values through the corresponding memristive devices. Fig. 2(b) shows the securely mapped weight matrix: the rows and columns of the original matrix have been permuted according to the vectors \(P_{r}\) and \(P_{c}\), respectively. The vectors \(P_{r}\) and \(P_{c}\) indicate the permutation patterns, which are the keys. For example, the vector \(P_{r}\) being (3,4,1,2) means the 1st, 2nd, 3rd and 4th rows of the original matrix have moved to become the 3rd, 4th, 1st, and 2nd rows, respectively. For the correctness of the VMM operation, the input vector is also permuted according to the vector \(P_{r}\). The output vector of the VMM operation between the permuted input vector and the permuted weight matrix has to be reverse-permuted to get the correct VMM result according to the vector \(P_{c}\). The reverse permutation occurs by first reversing the vector, then permuting the vector, and finally reversing the vector again. Similarly, the weight matrix of each layer of a model is permuted independently. Without knowledge of \(P_{r}\) and \(P_{c}\), the extracted weight matrices known to the adversary are very different from the original weight matrices, so the weights of the model are well protected.
Fig. 2: (a) A four-element input vector multiples an unprotected \(4\times 4\) weight matrix and outputs a four-element output vector; (b) The rows and columns of the weight matrix are permuted according to \(P_{r}\) and \(P_{c}\), respectively; the input and output vectors need to be permuted and reverse-permuted, correspondingly, to get the correct VMM results.
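The following NumPy sketch replays the idea of Fig. 2 end to end for the \(4\times 4\) example. The destination-style key convention (\(P_{r}\) = (3,4,1,2) sends row 1 to position 3, and so on) follows the text above; the sketch applies the inverse column permutation directly, rather than through the reverse-permute-reverse trick the hardware uses, since the two are mathematically equivalent.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# 0-indexed destination keys: P_r = (3,4,1,2) from the text becomes [2,3,0,1].
P_r = np.array([2, 3, 0, 1])
P_c = np.array([1, 3, 0, 2])       # an arbitrary column key for illustration

# Permute both dimensions of the weight matrix before mapping it.
W_p = np.empty_like(W)
W_p[np.ix_(P_r, P_c)] = W

# Permute the input with P_r; the raw output comes back column-permuted
# and is reverse-permuted with P_c to recover the correct VMM result.
x_p = np.empty_like(x)
x_p[P_r] = x
y = (x_p @ W_p)[P_c]

assert np.allclose(y, x @ W)
```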
## IV TDPP Design for Memristive Computing Systems
### _Two Different Memristive Computing Systems_
Fig. 3(a) shows two memristive computing systems. One comprises a global arithmetic unit (AU) and a global buffer, and the other puts a tile AU and a tile buffer in each tile. These systems are designed for layer-by-layer and layer-parallel processing, respectively. Denote them as config-1 and config-2, respectively. Except for the location of the AUs and buffers, the two systems share a similar design, such as the architecture of the tiles and processing elements (PEs). As shown in Fig. 3(b), a global or tile AU consists of several digital processing modules: the adding module, the pooling module, and the activation module. Both systems consist of many tiles, with each tile composed of multiple PEs and each PE comprising multiple crossbar pairs. The precision of both the DNN weights and the memristive devices determines the number of crossbar pairs. For example, eight crossbar pairs are needed per PE when the precision of the DNN weights and memristive devices is 8 and 1, respectively.
### _Design of the TDPP Hardware_
As shown in Fig. 3(b), the TDPP design consists of a permutation module (PM), a key storage module, and a key generator.
(1) _PM_: The PM is used for both permuting the layer's inputs and reverse-permuting layer's outputs. To minimize the hardware overhead and the system latency, we suggest implementing the PM using the Benes Network (BN) [21]. Fig. 3(c) shows the structure of a 2:2 BN, essentially a 2:2 switch. A 2:2 switch could be composed of two 2:1 MUXes. When the \(sel\) signal is 0, the inputs \(in_{1}\) and \(in_{2}\) will be connected to the outputs \(out_{1}\) and \(out_{2}\), respectively; otherwise, the inputs will be cross-connected to the outputs. Fig. 3(d) shows the structure of a \(2^{b}\):\(2^{b}\) BN, constructed by recursively connecting smaller-size BNs. Generally, a \(2^{b}\):\(2^{b}\) BN consists of \((2^{b-1}\times(2b-1))\) 2:2 BNs. Each 2:2 BN comes with a \(sel\) signal, and all the signals together determine the permutation pattern. Denote the signals as key, the size of the key \(s_{b}\) for a \(2^{b}\):\(2^{b}\) BN equals the number of 2:2 BNs it contains, described as
\[s_{b}=(2^{b-1}\times(2b-1)). \tag{1}\]
Two benefits are achieved when using a BN-based PM implementation. First, BNs are non-blocking, i.e., at any given time, all the inputs and outputs of the BNs are connected. The non-blocking feature is essential to avoid affecting the system throughput of the memristive computing system. Second, the number of 2:2 switches required by BNs is optimized, which is significant if we want to impose minimal hardware overhead on the system. Additionally, the vector reversing step can be done using the PM by setting the selection signals of its last \(b\) columns of 2:2 switches to 1 and that of the remaining switches to 0, without additional hardware.
Fig. 3: (a) Memristive computing systems config-1 and config-2; (b) An arithmetic unit (AU) with a TDPP hardware module embedded; (c) A 2:2 Benes Network (BN) consists of two MUXes; (d) A \(2^{b}\):\(2^{b}\) BN made up of two \(2^{b-1}\):\(2^{b-1}\) BNs and two columns of 2:2 switches [21]; (e) An alternative implementation of a \(2^{b}\):\(2^{b}\) PM (permutation module) with \(k\) \(2^{B}\):\(2^{B}\) BNs; (f) A PM can do partial permutation.
We can also reduce the hardware overhead of the PM by implementing it with multiple smaller BNs instead of one big BN. As shown in Fig. 3(e), a \(2^{b}\):\(2^{b}\) BN could be replaced by \(k\) \(2^{B}\):\(2^{B}\) BNs, where \(k\) is the number of \(2^{B}\):\(2^{B}\) BNs and \(2^{b}=k\times 2^{B}\). A PM consisting of \(k\) \(2^{B}\):\(2^{B}\) BNs still simultaneously connects \(2^{b}\) inputs and \(2^{b}\) outputs. This alternative design could reduce the hardware overhead of PMs substantially. For example, a 256:256 BN could be replaced by 16 16:16 BNs to reduce the hardware overhead by approximately \(53\%\). Note that the hardware-reduced PM design also decreases the permutation effectiveness and security. Section V-A analyzes the security of the hardware-reduced PM design, and Section VI shows that a hardware-reduced PM design can still provide sufficient permutation and security.
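A minimal arithmetic check of Eq. (1) and of the 53% figure quoted above:

```python
def benes_switches(b: int) -> int:
    # Number of 2:2 switches in a 2^b : 2^b Benes network, per Eq. (1).
    return 2 ** (b - 1) * (2 * b - 1)

full = benes_switches(8)           # one 256:256 BN -> 1920 switches
reduced = 16 * benes_switches(4)   # sixteen 16:16 BNs -> 896 switches
print(1 - reduced / full)          # ~0.533, i.e., roughly 53% fewer switches
```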
(2) _Key storage_: The key storage module is an on-chip buffer comprising any volatile memory technology such as eDRAM or SRAM. The volatility of the key storage module ensures the keys are not accessible to the adversary when the systems are powered off.
(3) _Key generator_: The key generator generates the PM key. We suggest the generator be a physical unclonable function (PUF) from which the adversary cannot steal the key [22]. Note that the global AU and tile AUs are near the global buffer and tile buffers, respectively, and the global or tile buffer is usually a volatile eDRAM, or SRAM memory [23]. We can use the global/tile buffer as a PUF by exploiting the startup values of its cells [24, 25]. As shown in Fig. 4, the startup values of the eDRAM/SRAM cells are randomly initialized as 0 or 1 due to process variation. Note that reading the startup values must be conducted before the system overrides them. We refer the readers to [24, 25] for detailed PUF design.
### _Embedding TDPP Hardware in Memristive Computing Systems_
In the config-1 architecture, the DNN inference follows a layer-by-layer processing approach [26, 27]. All of a layer's outputs are transferred to the global buffer to be processed by the global AU. The processed outputs are then used as inputs for the next layer and transferred to the corresponding tiles. We insert the TDPP hardware into the global AU.
In the config-2 architecture, since each tile is equipped with an AU, the output of a layer can be transferred directly to other tiles where the DNN weights of its next layer are located [3, 16, 23]. This architecture aims at layer-parallel processing to maximize the crossbar throughput. In this case, we insert a TDPP hardware module in the AU of each tile, and the key generator utilizes the cell startup values of the tile buffer. Note that the cell startup values of each tile buffer are different, and the key for the PM module in each tile is, therefore, unique. Inserting TDPP hardware in the AU of each tile will increase the hardware overhead. We can, however, use the hardware-reduced PM implementation, as explained earlier, to reduce the hardware overhead.
### _Key Strategy_
For the TDPP method described in Section III, the permutation size for a weight matrix is the same as the size of the original weight matrix. The layer size of some DNN models, however, may be enormous, and a correspondingly sized PM could be infeasible when the hardware overhead is constrained. To circumvent this problem, we could design a feasible-size PM and permute the rows and columns of the large weight matrices part by part. Note that in memristive computing systems, if the height or width of a layer's weight matrix is greater than that of the memristive crossbars, the matrix is divided into multiple submatrices to fit the size of the memristive crossbars. Each submatrix is mapped to a PE, and the PEs execute VMM operations in parallel [23]. Denote the size of the memristive crossbars as \(C\times C\). Hence, to be aligned with the crossbar parallelism, the size of the PM must be no less than that of the memristive crossbars, i.e., \(2^{b}\) is at least \(C\). For simplicity and ease of discussion, we set \(2^{b}\) equal to \(C\). Assume the size of a weight matrix is \(m\times n\), divided into multiple submatrices by the size of the crossbars. The rows and columns of each submatrix will be permuted independently before being mapped to the PE crossbars. Note that for small memristive crossbars, setting \(2^{b}\) equal to \(C\) might compromise security. For example, when \(C\) is \(16\), according to (1), the key size for a 16:16 BN is only 56, which might not provide sufficient permutation and security. To address this issue, however, we can set \(2^{b}\) as a multiple of 16, for example, 256. In this case, every 256 rows and columns will be permuted independently before being mapped to the PE crossbars.
Fig. 4: The key generator utilizes the startup values of eDRAM/SRAM cells, which are randomly initialized to 0 or 1 due to process variation [24, 25].
For a small weight matrix, when its height \(m\) or width \(n\) is less than \(C\), it is possible to pad it by programming the unused memristive cells with camouflage values to increase security [14, 15]. Usually, however, the unused cells are set into a high resistance state (HRS), and the corresponding WLs/BLs are turned off during computing to reduce the sneak paths [16]. Since padding small weight matrices could introduce sneak-path noise, we leave them in HRS in our design. For the weight matrix of a DNN layer, a high level of security can be achieved when each submatrix is permuted with a different key. The resultant key storage, however, could be overwhelming. For example, assume a weight matrix of \(4096\times 4096\) in size and \(C\) equals 256. The matrix is divided into 256 submatrices by every 256 rows and columns, and the rows (columns) of each submatrix are permuted with a different key using a 256:256 BN-based PM. According to (1), permuting 256 rows (columns) requires a 1920-bit key. The required key for the whole weight matrix would be \(1920\times 256\) bits, and that is only for permuting the weight matrix's rows or columns. We could reuse the key inside each weight matrix to compromise between maintaining a sufficiently high level of security and having reasonably sized key storage. This would mean that for a layer's weight matrix, all the submatrices share the same key and that the key for permuting the rows and the key for permuting the columns of a submatrix are the same. The keys for each layer, however, are the same or different than those of the other layers, depending on whether the architecture is config-1 or config-2. For config-1, the key is generated using the cell startup values of the global buffer, and all layers share the same key. For config-2, each tile is mapped with no more than a single DNN layer [23], and each tile has a tile buffer. Hence each layer can have a unique key.
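The key-reuse strategy can be sketched as follows; \(C=4\) keeps the example readable (the text uses \(C=256\)), the matrix dimensions are assumed to be multiples of \(C\), and one shared destination-style key permutes both the rows and the columns of every submatrix.

```python
import numpy as np

def permute_blocks(W, key, C):
    # Permute each CxC submatrix independently; all submatrices share one
    # key, and the same key is used for both the rows and the columns.
    out = np.empty_like(W)
    for r0 in range(0, W.shape[0], C):
        for c0 in range(0, W.shape[1], C):
            blk = W[r0:r0 + C, c0:c0 + C]
            p = np.empty_like(blk)
            p[np.ix_(key, key)] = blk   # destination-style permutation
            out[r0:r0 + C, c0:c0 + C] = p
    return out

rng = np.random.default_rng(1)
key = rng.permutation(4)
W_protected = permute_blocks(rng.standard_normal((8, 8)), key, C=4)
```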
### _Data Flow_
This section examines the system data flow to understand the effects of the TDPP hardware on system functionality and throughput. For both systems, only the initial input and final output of the DNN models are transferred between the host and the memristive computing system; all the intermediate layer results are stored in the on-chip global/tile buffer.
For the config-1 architecture, each layer's inputs will be copied from the global buffer to the TDPP hardware for permutation and then back to the global buffer. The layer inputs will then be transferred to the tiles through the network-on-chip (NoC). The partial outputs from the involved tiles of a DNN layer are gathered in the global buffer and then accumulated, pooled, and activated in the global AU. The aggregated output will go through the TDPP hardware for reverse permutation as an additional procedure.
Fig. 5 illustrates a simple example of the data flow. Assume the input of a Conv layer is divided into four input vectors. The size of each vector is four, equal to the number of input channels. Each input vector will be copied from the global buffer to the TDPP hardware for permutation and then copied back to the global buffer, ready to be transmitted to the tiles through the NoC (step 1). In this case, the Conv kernels are distributed in multiple tiles. Each input vector will perform VMM operations with the DNN weights loaded in the tiles, and each tile will output a vector of partial results (step 2). The partial results are transferred back to the global buffer and aggregated using the adding module to get an output vector of size equal to the number of output channels (four in this example) (step 3). There are four input vectors, resulting in four output vectors. These output vectors will be pooled to become a single vector (step 4), which then goes through the activation module (step 5). The TDPP hardware will reverse-permute the activated vector, which is finally copied back to the global buffer, ready to be used as the input for the next layer (step 6). Note that the pooling operations and activation operations are along each output channel. Therefore, the output channels are preserved. The reverse permutation recovers the correct order of the channels for the output vector. Hence, the embedded TDPP does not affect the normal functionality of memristive computing systems. The PM bandwidth should be at least that of the global AU or the NoC to maintain a similar system throughput.
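A compressed software analogue of steps 1-6 is sketched below (illustrative assumptions: a single tile, with max-pooling and ReLU standing in for the pooling and activation modules). It shows why per-channel pooling and activation commute with the column permutation, so the final reverse permutation restores the channel order.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 4
W = rng.standard_normal((C, C))    # one layer on one tile, for brevity
xs = rng.standard_normal((4, C))   # the four input vectors of the example
key = rng.permutation(C)           # shared row/column key

W_p = np.empty_like(W)
W_p[np.ix_(key, key)] = W          # weights stored permuted on the crossbars

outs = []
for x in xs:
    x_p = np.empty_like(x)
    x_p[key] = x                   # step 1: permute each input vector
    outs.append(x_p @ W_p)         # steps 2-3: VMM and aggregation
pooled = np.max(outs, axis=0)      # step 4: pooling along each channel
act = np.maximum(pooled, 0.0)      # step 5: activation (ReLU here)
y = act[key]                       # step 6: reverse permutation

assert np.allclose(y, np.maximum(np.max(xs @ W, axis=0), 0.0))
```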
For the config-2 architecture, the partial VMM operation results from the involved tiles of a DNN layer gathered in one of these tiles. The tile AU will process the aggregated outputs and send them directly to the tiles of the next layer. The next layer will start processing once it gets the necessary partial outputs rather than waiting for whole outputs from the current layer [23]. The proposed PM can process a partial permutation without waiting to complete a whole layer. As shown in Fig. 3(f), when a PM receives a partial layer output vector, the partial vector will be padded to become the same size as a full layer output vector. In this case, the outputs of the first two output channels will be transferred to the next layer first. Thus the tile (knowing the key) processes VMM operations of the corresponding first and last output channels, which have top priority. The states of the padded elements will be set as \(Z\) (high impedance state). After permutation, the padded elements will be discarded. Thus, as with the config-1 architecture, the TDPP hardware would not affect the system throughput for config-2, either.
## V Security Analysis of the TDPP Method
According to the threat model outlined in Section II-B, the adversary possesses the capability to read the values of the memristive devices, allowing them to extract the permuted weight matrices from these devices. The main objective of the adversary is to reverse the permutation process and restore the rows and columns of the extracted matrices to their original arrangement, which effectively means deciphering the permutation keys generated by the key generator. Note that the memristive computing system is integrated into a single chip, and all components, including the PM, are on-chip. Consequently, the adversary does not have control over the _sel_ signals of the PM. Even in the scenario where the adversary gains control over the _sel_ signals, without knowledge of the correct permutation keys, they would be compelled to try different _sel_ signal combinations. This process would be equivalent to attempting to restore the rows and columns of the extracted matrices to their original arrangement.
### _Brute-Force Attack_
A brute-force attack can be used in an attempt to crack any encryption method [28]. Assume a DNN model under attack has \(L\) layers, and the size of the \(i^{th}\) layer's weight matrix is \(m^{i}\times n^{i}\) (\(i\in[1,L]\)). According to the key strategy in Section IV, the number of brute-force attempts \(T^{i}_{BF}\) needed to recover the original weight matrix of the \(i^{th}\) layer can be described as
\[T^{i}_{BF}=\begin{cases}(B!)^{k}&\text{if }m^{i}\geq C\text{ or }n^{i}\geq C\\ (B!)^{\lfloor m^{i}/2^{B}\rfloor}\cdot(m^{i}\bmod 2^{B})!&\text{if }n^{i}\leq m^{i}<C\\ (B!)^{\lfloor n^{i}/2^{B}\rfloor}\cdot(n^{i}\bmod 2^{B})!&\text{if }m^{i}<n^{i}<C\end{cases}. \tag{2}\]
For the config-1 architecture, as the key for each layer is the same, the effort of brute-force attacking the whole model is equal to that of attacking its biggest layer. Thus, the number of brute-force attacks \(T_{BF}\) needed to recover all the original weight matrices of the DNN model can be described as
\[T_{BF}=\max(T_{BF}^{1},T_{BF}^{2},\ldots,T_{BF}^{L}). \tag{3}\]
For the config-2 architecture, given that the key for each layer is different, permuting the weight matrix of each of its layers could cumulatively make the permutation space even larger. The model could be reverse-engineered correctly only after the weight matrices of all the layers are recovered. The number of brute-force attempts \(T_{BF}\) needed to recover all the original weight matrices of the DNN model can be described as
\[T_{BF}=\prod_{i=1}^{L}T_{BF}^{i}. \tag{4}\]
Large DNN models generally have more layers and are therefore more resistant to brute-force attacks than small DNN models.
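The counts of Eqs. (2)-(4) can be evaluated directly, as in the sketch below. The reading of the extracted formula (with \(B\) as the exponent of the \(2^{B}\):\(2^{B}\) sub-BNs and \(k\) sub-BNs per PM) is an assumption, so treat this as an illustration rather than the paper's exact calculator.

```python
from math import factorial

def t_bf_layer(m, n, C, B, k):
    # Eq. (2): brute-force attempts to recover one layer's weight matrix.
    if m >= C or n >= C:
        return factorial(B) ** k
    d = m if n <= m else n                 # the larger in-crossbar dimension
    q, r = divmod(d, 2 ** B)
    return factorial(B) ** q * factorial(r)

def t_bf_model(layer_shapes, per_layer_keys, C=256, B=4, k=16):
    efforts = [t_bf_layer(m, n, C, B, k) for m, n in layer_shapes]
    if per_layer_keys:                     # config-2, Eq. (4): product
        total = 1
        for e in efforts:
            total *= e
        return total
    return max(efforts)                    # config-1, Eq. (3): maximum

print(t_bf_model([(4096, 4096), (512, 10)], per_layer_keys=True))
```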
### _Attacking Small Matrices_
Recall that for the config-1 architecture, the weight matrices of all DNN layers are permuted using the same key. Mapping a small matrix to a memristive crossbar, however, leaves some rows or columns in the crossbar unused, which may facilitate the adversary's brute-force attacks aiming to recover the permutation pattern for the whole DNN model. Furthermore, recall that for both the config-1 and config-2 architectures, if the width or height of a layer's weight matrix is larger than that of the memristive crossbars, it is divided into multiple submatrices, and the key used to permute each submatrix is the same. A small submatrix mapped to a memristive crossbar may also facilitate the adversary's brute-force attacks aiming to recover the permutation pattern for the whole DNN layer.
Fig. 6 shows a simple example. The weight matrix has two rows, \(w_{1}\) and \(w_{2}\), and four columns, and the crossbar size is \(4\times 4\). Assume the PM module is based on a 4:4 BN. Thus, the number of permutation patterns is \(4!\). After permuting the matrix, its rows become the second and fourth rows of the permuted matrix, respectively. If we map the permuted matrix directly to a memristive crossbar, the first and third rows of the crossbar will be left unused, from which the adversary can gain some insights into the permutation patterns. That is, for the permutation pattern used to permute the example matrix, the first two permutation inputs are connected to the second and fourth permutation outputs, respectively, and the last two permutation inputs are connected to the first and third permutation outputs, respectively. In this case, the possible permutation patterns are reduced from \(4!\) to \(2!\times 2!\), i.e., reduced by 83.33%.
To mitigate these attacks, we propose to map the rows or columns of small (sub)matrices to contiguous crossbar rows or columns, respectively, and use an **index vector** to indicate the correct location of each weight matrix row or column. As shown in Fig. 6, \(w_{1}\) and \(w_{2}\) are mapped to the first and second rows of the crossbar. The index vector \((0,1,0,1)\) means the correct locations for \(w_{1}\) and \(w_{2}\) are rows two and four, respectively. Without knowing the index vector, the unused crossbar rows or columns do not expose information about the permutation pattern used to permute the matrix. The size of the index vector is equal to the number of rows or columns of the memristive crossbars. If the matrix size is small for both the rows and columns, we need one index vector for the rows and one for the columns. For TDPP, we need, at most, two index vectors for each tile. The index vectors are stored in the key storage of the TDPP hardware. Note that those vectors are generated based on the permutation keys and must not be stored in non-volatile memory.
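A small sketch of the contiguous mapping with an index vector follows; it assumes the real rows are stored in ascending order of their permuted destinations, so a bitmask-style index vector suffices to scatter them back.

```python
import numpy as np

C = 4
w = np.arange(8, dtype=float).reshape(2, 4)  # a 2x4 matrix, smaller than C
dest = np.array([1, 3])   # permuted destinations of w1, w2 (rows 2 and 4)

index_vec = np.zeros(C, dtype=int)
index_vec[dest] = 1       # the (0,1,0,1) index vector from the example

# Secure mapping: real rows sit contiguously; unused rows stay in HRS.
crossbar = np.zeros((C, 4))
crossbar[: len(dest)] = w

# Recovery requires the (volatile) index vector.
recovered = np.zeros_like(crossbar)
recovered[np.flatnonzero(index_vec)] = crossbar[: len(dest)]
assert (recovered[dest] == w).all()
```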
### _Divide-and-Conquer Attack_
Fig. 5: Example of the system data flow.
Unlike conventional cryptographic algorithms, permutation-based protection methods may be vulnerable to divide-and-conquer attacks. For example, for a weight matrix, instead of guessing the permutation pattern for the whole matrix at once, the adversary may target only a small number of weight matrix rows or columns each time. The adversary expects higher and lower inference accuracy of the extracted DNN model using the correct and incorrect keys for the rows or columns, respectively. In this way, the original locations of the rows or columns may be recovered. Then the adversary will target the next set of rows or columns and continue until the locations of all the rows or columns are discovered. In our experiment, we use LeNet [29]. The example model consists of two Conv layers and three FC layers. We trained the example model with the CIFAR10 dataset [30]; the inference accuracy of the well-trained example model was \(76.22\%\). We then permuted 25, 50, and all 75 rows of the weight matrix of the LeNet model's first layer (only permuting the rows); the inference accuracy of the extracted model was \(49.4915\%\), \(28.0103\%\), and \(19.601\%\), respectively. That is, when only partial rows of the example model's first layer are permuted, the inference accuracy of the extracted model is higher than when all rows are permuted.
This sort of attack, however, could not work against the proposed protection technique since, with the majority of the DNN weights protected, removing the protection of a small number of rows (columns) does not affect the model performance, and the inference accuracy of the extracted model stays at approximately 10% (for the CIFAR10 dataset). For config-1, we examined the inference accuracy of the extracted model by guessing different ratios of the example model's permutation key, from 0.01 to 1 in steps of 0.01. For config-2, we examined the inference accuracy of the extracted model by guessing the permutation keys of the 1, 2, 3, 4, and 5 most significant layers. The model layer significance is measured by running the model inference with only a single layer protected. The lower the inference accuracy, the more significant the layer is. The PMs for both systems are based on a 256:256 BN. We compared the results of inputting correct key(s) and incorrect key(s), respectively, while keeping the other part of the model protected. Each experiment was carried out 40 times, and the average results were determined. The results are shown in Fig. 7 and Fig. 8. For config-1, only when 74% or more of the key is guessed correctly is the inference accuracy of the extracted model higher than when guessing an incorrect key, i.e., only then can the divide-and-conquer attack succeed. For config-2, only when the keys of all layers are guessed correctly is the inference accuracy of the extracted model higher than when guessing incorrect keys.
We further explored the minimal effort for the divide-and-conquer attacks to succeed. The minimal effort is defined as the number of brute-force trials needed to discover the minimal ratio of the key (for config-1) or the keys of the minimum number of DNN layers (for config-2) required to increase the inference accuracy of extracted DNN models. The algorithms are described as Algorithm 1 and Algorithm 2. For config-1, Algorithm 1 takes the brute-force attack effort \(T_{BF}\) as an input. The output is the minimal effort for the divide-and-conquer attacks. The algorithm initializes the ratio \(r\) as 0.01 and iterates until \(r\) reaches 1 with steps of 0.01. For each iteration, it checks the attack sensitivity of \(r\) of the permutation key. The attack sensitivity is defined as the difference between the inference accuracy of the extracted DNN model when guessing correctly and when guessing incorrectly for \(r\) of the permutation key, while keeping the remainder of the key untouched. When the correct-key results are higher than the incorrect-key results by a clear margin (at least 5%), we regard the \(r\) of the key as attack sensitive, and otherwise as attack insensitive.
Fig. 6: A matrix smaller than a memristive crossbar is permuted and mapped to **(left bottom)** discrete crossbar rows and **(right bottom)** contiguous crossbar rows with an index vector indicating the correct locations of the matrix rows.
Fig. 7: For config-1, the inference accuracy of the extracted example model with guessing the correct and random incorrect keys for different ratios of the key, while keeping the remaining of the key untouched.
Fig. 8: For config-2, the inference accuracy of the extracted example model with guessing the correct and random incorrect keys of different numbers of the most significant layers, while keeping the other layers of the model protected.
For config-2, before the algorithm, we sort the model layer significance list in descending order and store the corresponding layer indexes into a list \(list1\). Algorithm 2 takes \(list1\) and the brute-force attack effort for each layer \(T_{BF}^{i}\) as inputs. The output is the minimal effort for the divide-and-conquer attacks. Firstly, we create a new list \(list2\) to store the index of the candidate layer set that is attack sensitive. Similarly, the attack sensitivity is defined as the relativity of the inference accuracy of the extracted DNN model when guessing the respective correct and incorrect keys for the layer set while keeping the other layers protected. The algorithm keeps checking the attack sensitivity of the accumulating layer set \(list2\) until \(list2\) is attack-sensitive or all the layers are in \(list2\).
```
Inputs: the brute-force attack effort T_BF
Output: minimal effort for the divide-and-conquer attacks
for r = 0.01; r <= 1; r = r + 0.01 do
    if r of the key is attack sensitive then
        break
    end if
end for
return r * T_BF
```
**Algorithm 1** Compute the minimal effort for the divide-and-conquer attacks for config-1
```
Inputs: descending-sorted model layer significance list1;
        brute-force attack effort for each layer T_BF^i, where i in [1, L]
Output: minimal effort for the divide-and-conquer attacks
list2 = {}
while list1 != NULL do
    index = pop(list1)
    list2.append(index)
    if list2 is attack sensitive then
        break
    end if
end while
return prod_{i=1}^{|list2|} T_BF^{list2[i]}
```
**Algorithm 2** Compute the minimal effort for the divide-and-conquer attacks for config-2
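Algorithm 2 translates almost directly into Python; in the sketch below, the `is_attack_sensitive` predicate is a caller-supplied stand-in for the correct-vs-incorrect-key accuracy test described above.

```python
def min_dc_effort_config2(sig_layers, t_bf, is_attack_sensitive):
    # sig_layers: layer indexes sorted by descending significance (list1).
    # t_bf[i]: per-layer brute-force effort T_BF^i.
    chosen = []                       # list2 in Algorithm 2
    for idx in sig_layers:
        chosen.append(idx)
        if is_attack_sensitive(chosen):
            break
    effort = 1
    for idx in chosen:                # product over the chosen layers
        effort *= t_bf[idx]
    return effort
```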
### _Known-Plaintext Attacks_
If the adversary knows the inputs and outputs of the PMs, then the permutation keys can easily be discovered. For both systems, the adversary has access to the host memory to know the input of the first DNN layer to the memristive computing system and the output of the last DNN layer. The input of the first DNN layer to the memristive computing system and the output of the last DNN layer, however, are irrelevant to the permutation keys. Only the intermediate results, permuted input vectors, and VMM operation results before reversed permutation are relevant to the permutation keys. The intermediate results, importantly, are stored in on-chip buffers. For the config-1 architecture, we assume the global buffer is implemented using eDRAM or SRAM embedded on the chip. For the config-2 architecture, all the tile buffers are on-chip. Thus, the adversary cannot directly access the intermediate results. Indirectly accessing those intermediate results might be possible through side-channel analysis. Side-channel analysis against the intermediate layer results could be thwarted by countermeasures such as inserting fake cycles or adding noise [31], and those countermeasures could be combined with TDPP to counter side-channel attacks against the intermediate results.
Another potential attack involves writing specific-pattern weight matrices to the memristor crossbars. In this scenario, the adversary may offload a customized DNN model with identity matrices as weight matrices to the memristor system. Consequently, by processing these customized DNN weights on the memristor crossbars, the input of the first DNN layer, and the output of the last DNN layer, the difficulty of inferring intermediate results could be reduced. To mitigate this attack, a predefined user key can be utilized to encrypt the keys generated by the key generator within the TDPP module. Instead of directly using the keys from the generator, they are XORed with the user key to create the permutation keys. This way, without the correct user key, the permutation keys remain hidden from the adversary. This additional layer of encryption enhances the security of the system.
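A minimal sketch of the suggested key wrapping follows; the sizes are assumptions (240 bytes corresponds to the 1920-bit key of a 256:256 BN), and `secrets.token_bytes` merely stands in for the PUF startup values and the user-provisioned key.

```python
import secrets

def wrap_key(puf_bits: bytes, user_key: bytes) -> bytes:
    # XOR the PUF-generated bits with a predefined user key so that
    # neither value alone reveals the permutation key.
    assert len(puf_bits) == len(user_key)
    return bytes(a ^ b for a, b in zip(puf_bits, user_key))

puf_bits = secrets.token_bytes(240)   # stand-in for the PUF startup values
user_key = secrets.token_bytes(240)
perm_key = wrap_key(puf_bits, user_key)
```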
## VI Evaluation
In this section, we present our evaluation of the proposed TDPP method in terms of protection effectiveness, security, hardware area and power overheads. We tested the proposed method on four DNN models: AlexNet, VGG16, ResNet18, and GoogleNet. All the models were modified and trained on the CIFAR10 dataset, and all the models' weights were quantized as 8-bit. The original accuracy of the unprotected models is 86.58%, 91.21%, 93.27%, and 79.88%, respectively. We ignored the errors of mapping DNN weights to the memristive devices. For comparison, we implemented the protection methods of [14] and [15] on the same models. We assume both methods apply a different key for each PE, that all crossbar pairs share the key inside a PE, and that the keys of each layer are different from that of other layers. The evaluation configuration is listed in Table I. The choices of \(p\), \(x\), \(T\), and \(B\) are {1, 2, 4, 8}, {1, 2, 4, 8, 16, 32, 64, 128, 256}, {20, 40, 60, 80, 100}, and {2, 4, 8, 16, 32, 64, 128, 256}, respectively. The area of the memristive cells is taken from [32]. The protection modules of all the protection methods were evaluated based on 32nm CMOS technology. For simplicity, we assumed all the inputs, outputs, and intermediate results were 8-bit. Each experiment was performed 40 times, and the average results were determined.
### _Protection Effectiveness_
The protection effectiveness of a protection method is defined as the inference accuracy of the protected DNN models directly extracted by the adversary. The lower the accuracy is, the better the effectiveness of the method. The CIFAR10 dataset is 10-class; thus, when an extracted model's inference accuracy is \(10\%\), the model is effectively guessing at random, which is useless. Table II lists the effectiveness of TDPP for different values of \(B\). For config-1, when \(B\) is above 4, the extracted DNN models are nearly useless. When \(B\) is larger, the average inference accuracy shows less fluctuation, i.e., the model behaves strictly as random guessing. For config-2, the inference accuracy of the extracted DNN models is 10% for any value of \(B\) without fluctuation. Based on the results of Table II, we claim that even when \(B\) is very small, e.g., 4, TDPP remains highly effective for all models for both systems. Moreover, the protection effectiveness is unrelated to the parameters \(p\) and \(x\) because TDPP is at the layer level, and the inputs/outputs of TDPP's hardware are not affected by \(p\) or \(x\).
We also compared the protection effectiveness of [14] and [15]. The method of [14] only applies when \(x\) is 16; the method of [15] is not applicable when \(x\) is 1 or 256 because the grouping strategy is invalid for both cases. The results show that when all layers are protected, the compared works are also effective in protecting all the models.
### _Security_
The maximum security of the protection methods was estimated as the minimal effort for divide-and-conquer attacks to succeed using Algorithms 1 and 2. This evaluation considers factors such as PM size, architecture (config-1 or config-2), and the specific DNN model. Table III lists the security of TDPP for both config-1 and config-2. The results are shown as logarithms in base 2. When \(B\) is above 8 and 2 for config-1 and config-2, respectively, the minimal brute-force effort requires at least \(2^{256}\) attempts. When \(B\) increases, the minimal brute-force effort also increases significantly. The maximum security for config-2 is at least one order of magnitude higher than that for config-1, primarily because, in the former, each layer's key is different. The maximum security for config-1 could be improved to be similar to config-2 by applying a strong PUF [22] as the key generator so that each layer has a different key. From the results, we conclude that our method is highly secure when choosing a proper \(B\).
We also compared the related works with a modified Algorithm 2. The original Algorithm 2 keeps checking the attack sensitivity of increasing the number of DNN layers. In our experiment settings, the related works apply different keys for the PEs of each DNN layer. Thus, the modified Algorithm 2 checks the attack sensitivity of increasing the number of PEs instead of the DNN layers. The comparison results are listed in Table IV. The maximum security of both [14] and [15] is not affected by \(p\) because, in our experimental setting, all crossbar pairs inside a PE share the same key. For the protection method of [14], the maximum security is high since its permutation is applied to crossbar rows, and the weight matrices of some DNN layers of the tested models have a large number of rows, so those matrices are mapped to multiple PEs. Each PE that applies a different permutation key increases the maximum security significantly. This method, however, is only applicable when \(x\) is 16 due to the implementation of its protection module. For the protection method of [15], the maximum security is also high when \(x\) is up to 32. A small \(x\) means the VOUs are small and high-multiplicity MUXes/DEMUXes are used so that the permutation space is immense. Nevertheless, as \(x\) increases, the maximum security decreases significantly. For example, when \(x\) is 128, [15] randomly divides a crossbar into two VOU groups, and each group is divided into two VOUs. Thus, the possible permutation patterns for a single crossbar are only \(2!^{2}\), and for all models, the total maximum security it provides is at most \(2^{52}\), which is insufficient.
### _Hardware Overhead_
In this subsection, we evaluate the hardware overheads of TDPP and the related works in terms of area and power. The overhead is aggregated for the protection module and key storage and does not include the key generation module.
#### VI-C1 Protection Module
Table V summarizes the required hardware modules for different protection methods. The config-1 architecture only needs one TDPP hardware module, while the config-2 architecture needs one TDPP hardware module in each tile. The protection module required by the method proposed in [14] comprises \(2x\) \((256/x)\):1 MUXes and \(x\) 1:\((256/x)\) DEMUXes. In this method, each crossbar pair requires one protection module. On the other hand, the size of a VOU in the method proposed in [15] is scaled as \(x^{2}\), and the protection module includes one \((256/x)\):1 MUX and one 1:\((256/x)\) DEMUX. Again, each crossbar pair requires one protection module. However, the authors of [15] did not consider the module's bitwidth. In reality, each input/output of the MUX/DEMUX is an array of \(x\) 8-bit values. To ensure a fair comparison, we set the bitwidth of the MUXes and DEMUXes to \(8x\) using their method.
#### VI-C2 Key Storage
For the proposed method, the key storage includes both the key(s) for permutation and the index vectors. For [14], the key storage for each group of \(x\) WLs is \(x\times 3\times\log_{2}(256/x)\) bits (each MUX or DEMUX needs \(\log_{2}(256/x)\) bits), so the key storage for each protection module is \(x\times 3\times\log_{2}(256/x)\times(256/x)\) bits. For [15], the key storage for each protection module is \((256\times\log_{2}(256/x)+\log_{2}(256/x)\times 2\times(256/x))\) bits (row activation vectors and keys for the MUX/DEMUX of each group of \(x\) WLs). To reduce the key storage overhead for [14] and [15], we assume all protection modules inside each PE share the same key. For the parallel execution of PEs, each PE will have a corresponding key storage. We assume all the keys are stored in eDRAM (32nm CMOS); the area and power consumption are modeled using CACTI [33].
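A quick arithmetic check of the two storage formulas (a minimal sketch, assuming \(x\) is a power of two that divides 256):

```python
from math import log2

def key_bits_ref14(x: int) -> int:
    # Per-module key storage of [14], per the formula above.
    return int(x * 3 * log2(256 / x) * (256 // x))

def key_bits_ref15(x: int) -> int:
    # Per-module key storage of [15], per the formula above.
    return int(256 * log2(256 / x) + log2(256 / x) * 2 * (256 // x))

for x in (16, 32, 64):
    print(x, key_bits_ref14(x), key_bits_ref15(x))
```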
Figs. 9 and 10 show the total area and power overheads of the proposed method for the config-1 architecture, respectively. When \(B\) is 256, the area and power overheads are maximized, and are less than 0.16% and 0.26% of that of memristive crossbars for \(p=8\), respectively. Lower \(B\) would reduce the overhead. When \(B\) is 2, compared with when \(B\) is 256, the overhead could be reduced by up to approximately 74% and 87% for area and power, respectively. For the config-2 architecture, the relative overhead compared to crossbars remains constant regardless of \(T\) since each tile is equipped with a protection module and a key storage module. Fig. 11 shows the area and power overhead compared with that of crossbars for \(p=8\).
Note that, for brevity, we only show the results for \(p=8\). The relative overhead would decrease proportionally as \(p\) decreases. For example, when \(p\) is 1, each tile needs \(8\times\) more devices compared to \(p\) = 8, and the corresponding relative overhead is one eighth of that when \(p\) is 8.
Fig. 9: Area overhead of config-1 compared with memristive crossbars for \(p=8\).
Fig. 10: Power consumption overhead of config-1 compared with that of memristive crossbars for \(p=8\).
To compare with the related works, for config-1 and config-2, we set \(B\) as 64 and 4, respectively, to ensure our method provides sufficient maximum security (more than \(2^{986}\)) for all models. Tables VI-IX list the results for different \(T\), different \(x\), and different \(p\). For brevity, we only show the results for \(p\) equal to 1 and 8, where the gap between TDPP and the related works is the largest and smallest, respectively. TDPP for config-1 shows a significant advantage over that for config-2 and other protection methods mainly because it only requires one TDPP hardware module. The advantage increases proportionally with \(T\). TDPP for config-1 also incurs a lower hardware overhead than the methods of [14] and [15] thanks to its hardware-reduced PM implementation. For higher \(x\), the overhead advantage of TDPP versus the method of [15] declines since a larger \(x\) requires fewer MUXes and DEMUXes for the method of [15]. Overall, the proposed method incurs lower hardware overhead than the related works regardless of the memristive devices' precision, the number of simultaneously activated WLs/BLs, and the number of tiles.
It is crucial to mention that for the evaluation section, we explicitly specified the DNN weight precision as 8 bits and investigated the memristive device precision \(p\) ranging from 1 bit to 8 bits. As for higher-precision memristive devices (such as 11-bit devices [34]), they have the capacity to represent higher-precision DNN weights using single devices. Nonetheless, it is essential to reiterate that the claims of our proposed TDPP method remain valid, regardless of the memristive device precision.
Furthermore, it is important to note that our primary focus has been on the implications of weight matrix permutation on the model inference accuracy, and we did not consider the non-ideality of memristor devices or the interconnect wire resistance. While device imperfections and interconnect wire resistance could potentially impact the model performance, it is worth noting that the security of our proposed TDPP method might be higher in such scenarios. The reason for this higher security is that the minimal effort required for a divide-and-conquer attack to succeed in compromising the proposed TDPP method would likely increase rather than decrease. However, evaluating the implications of device imperfections and interconnect wire resistance on our method would necessitate non-trivial and additional work. As a result, we intend to address and quantify these effects in our future research.
Considering that (1) TDPP achieves protection effectiveness comparable with the related works, (2) TDPP is very secure when an appropriate BN size is chosen for the PM implementation, and (3) with higher security ensured, TDPP imposes significantly lower area and power overheads than the related works across different memristive device precisions, different numbers of simultaneously activated WLs/BLs, and different numbers of tiles, we assert that the proposed method outperforms the related works.
## VII Conclusion
The non-volatility of memristive devices may facilitate attempts by adversaries to steal DNN weights loaded in the memristive computing systems by exploiting the data persistence. To mitigate this vulnerability, this paper proposed the TDPP method based on permuting both the rows and columns of the weight matrices. We considered two memristive computing systems and designed TDPP hardware that can be embedded in them. Our experiments show that TDPP is very effective, secure, and scalable. Compared with similar existing works, the proposed TDPP method's area and power overhead demands are up to 1218.1\(\times\) (area) and 2815.0\(\times\) (power) lower and up to 178.1\(\times\) (area) and 203.0\(\times\) (power) lower for the two different systems, respectively. We also showed TDPP's security robustness against potential attacks. In the future, we intend to extend the proposed method to support spiking neural networks and graph neural networks.
## Acknowledgments
This paper acknowledges the funding by the German Research Foundation (DFG) Projects MemDPU (Grant Nr. DU1896/3-1), MemCrypto (Grant Nr. DU 1896/2-1), and the European Union's Horizon 2020 Research And Innovation Programme FETOpen NEU-Chip (Grant agreement No. 964877).
|
2310.01758 | Linearization of ReLU Activation Function for Neural Network-Embedded
Optimization: Optimal Day-Ahead Energy Scheduling | Neural networks have been widely applied in the power system area. They can
be used for better predicting input information and modeling system performance
with increased accuracy. In some applications such as battery degradation
neural network-based microgrid day-ahead energy scheduling, the input features
of the trained learning model are variables to be solved in optimization models
that enforce limits on the output of the same learning model. This will create
a neural network-embedded optimization problem; the use of nonlinear activation
functions in the neural network will make such problems extremely hard to solve
if not unsolvable. To address this emerging challenge, this paper investigated
different methods for linearizing the nonlinear activation functions with a
particular focus on the widely used rectified linear unit (ReLU) function. Four
linearization methods tailored for the ReLU activation function are developed,
analyzed and compared in this paper. Each method employs a set of linear
constraints to replace the ReLU function, effectively linearizing the
optimization problem, which can overcome the computational challenges
associated with the nonlinearity of the neural network model. These proposed
linearization methods provide valuable tools for effectively solving
optimization problems that integrate neural network models with ReLU activation
functions. | Cunzhi Zhao, Xingpeng Li | 2023-10-03T02:47:38Z | http://arxiv.org/abs/2310.01758v1 | Linearization of ReLU Activation Function for Neural Network-Embedded Optimization: Optimal Day-Ahead Energy Scheduling
###### Abstract
**Neural networks have been widely applied in the power system area. They can be used for better predicting input information and modeling system performance with increased accuracy. In some applications such as battery degradation neural network-based microgrid day-ahead energy scheduling, the input features of the trained learning model are variables to be solved in optimization models that enforce limits on the output of the same learning model. This will create a neural network-embedded optimization problem; the use of nonlinear activation functions in the neural network will make such problems extremely hard to solve if not unsolvable. To address this emerging challenge, this paper investigated different methods for linearizing the nonlinear activation functions with a particular focus on the widely used rectified linear unit (ReLU) function. Four linearization methods tailored for the ReLU activation function are developed, analyzed and compared in this paper. Each method employs a set of linear constraints to replace the ReLU function, effectively linearizing the optimization problem, which can overcome the computational challenges associated with the nonlinearity of the neural network model. These proposed linearization methods provide valuable tools for effectively solving optimization problems that integrate neural network models with ReLU activation functions.**
Day-ahead scheduling, Linearization, Neural network, Optimization, Rectified linear unit.
## I Introduction
With the trend of decarbonization, a large number of renewable energy source (RES)-based power plants are being constructed, and the RES share of the generation portfolio keeps increasing [1]. However, the intermittent and stochastic characteristics of RES have raised concerns about system reliability and stability in grids with high RES penetration [2]. Deep learning (DL) is playing a crucial role in enhancing the operational efficiency and reliability of power systems, particularly in the context of high RES penetration and the associated challenges of intermittency and unpredictability [3]. DL has been widely adopted to assist power system operation in tasks such as optimal power flow, day-ahead scheduling, restoration, and state estimation [4]-[6]. DL is an important technology for the transition towards more sustainable and resilient power grids.
The outstanding performance of deep learning technologies has ushered in innovative solutions for numerous challenging power system issues that traditional methods struggle to address. Today's power system confronts formidable challenges, primarily due to the extensive integration of RES into the grid, and deep learning methods have emerged as indispensable tools in this area. Depending on their attributes, DL models have proven useful in diverse applications across a range of power system problems. For example, a deep neural network (DNN) consisting of only fully connected dense layers is developed in [7] to predict active power flows, and it outperforms the widely used linearized DC power flow model. The utilization of graph neural networks (GNN) in [8] allows for efficient predictions of current and power injection, capitalizing on the GNN's topological advantages. A convolutional neural network is adopted in [9] to predict the rate of change of frequency under large disturbances to ensure system stability. Furthermore, fault detection, including fault type and location, can be performed by an artificial neural network as presented in [10]. Meanwhile, recurrent neural networks are widely used in time-sequential prediction tasks such as load forecasting, electricity price prediction, weather forecasting, and renewable generation forecasting, offering versatile solutions to contemporary power system challenges [11]-[14].
Most DL models adopt the rectified linear unit (ReLU) as the nonlinear activation function between hidden layers to enhance training efficiency and robustness [15]. However, ReLU makes the resulting DL models nonlinear [16]. While this nonlinearity poses no issue for deterministic prediction problems like fault detection and power flow prediction, it can become a significant obstacle when trained DNN models are embedded into optimization problems where the DNNs' input features are
decision variables to be solved, rendering them suddenly unsolvable due to the introduced nonlinearity. Remarkably, none of the previously mentioned studies have addressed the crucial challenge of linearizing the ReLU function in DNNs. Thus, there exists a significant research gap concerning the development of methods to linearize ReLU-based DNNs.
To address this challenge, we propose four different models to linearize the ReLU activation function: (i) Big-M based piecewise linearization (BPWL), (ii) convex triangle area relaxation (CTAR), (iii) penalized CTAR (P-CTAR), and (iv) penalized convex area relaxation (PCAR). BPWL fully linearizes the ReLU activation function without any approximation, while the other three models introduce some approximation to decrease the computational complexity.
Our previous work [17]-[18] introduced a novel neural network based battery degradation (NNBD) model aimed at accurately quantifying battery degradation per usage profile. The NNBD model predicts the battery degradation value for each cycle from the state of charge (SOC), state of health (SOH), depth of discharge (DOD), ambient temperature, and charge/discharge rate (C rate) [19]. The NNBD model enables the incorporation of battery degradation into microgrid daily operational energy scheduling. However, this integration incurs an unexpected computational burden due to the nonlinear nature of the NNBD model, which uses the ReLU activation function in its hidden layers; this poses challenges when NNBD is incorporated into optimal day-ahead generation scheduling problems. To address this issue, we evaluate the proposed four ReLU linearization models in the testbed of an NNBD-integrated microgrid day-ahead scheduling (MDS) model. It is worth mentioning that the proposed ReLU linearization methods not only fit the proposed NNBD model and optimal energy scheduling applications, but are also applicable to a broader spectrum of DL-embedded optimization models that contain neural networks with ReLU activation functions.
The main contributions of this paper are as follows:
* _Linearization Approaches_: we introduce four novel formulations for linearizing the ReLU activation function, enhancing its applicability in various contexts, particularly when the input features of DL models are variables to be solved by an optimization model while the outputs represent physical system performance subject to some requirements enforced in the same optimization model.
* _Linearized Energy Scheduling Frameworks_: building upon the linearized ReLU models, we develop four distinct day-ahead scheduling models, providing practical solutions for generation scheduling.
* _Performance Assessment_: comprehensive evaluations of the ReLU linearization models demonstrate their effectiveness in addressing nonlinearity challenges.
* _Optimal Configuration Exploration_: we conduct sensitivity tests to identify optimal configurations for the proposed linearization models, ensuring their robust performance in diverse scenarios.
The rest of the paper is organized as follows. Section II describes the proposed ReLU linearization models. Section III presents the traditional day-ahead scheduling model. Section IV presents the neural network integrated day-ahead scheduling model. Section V presents case studies and Section VI concludes the paper.
## II Proposed Linearization Models
This section presents the formulations of four models designed to linearize the ReLU activation function within a neural network model. Fully connected neural network models are characterized by a series of equations that describe the calculation and activation of neurons. Each neuron's pre-activation value \(x_{h}^{i}\) is computed by (1), combining the activations of the previous layer with the corresponding weight matrix \(W\) and bias vector \(B\). Most neural network models employ ReLU as the activation function, as shown in (2). While this activation function is prevalent for introducing the nonlinearity needed to capture intricate relationships among variables, the nonlinearity of the ReLU function poses challenges when it is embedded in an optimization problem. To address this challenge, the ReLU activation function can be linearized by applying the linearization models described in this section. Notably, the proposed linearization models can be applied to any optimization model that needs to efficiently integrate the nonlinear ReLU activation function.
\[x_{h}^{i}=\sum_{j}a_{h-1}^{j}\,W_{h}^{ji}+B_{h}^{i} \tag{1}\]
\[a_{h}^{i}=ReLU\big{(}x_{h}^{i}\big{)}=max(0,x_{h}^{i}) \tag{2}\]
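As a quick illustration of (1)-(2), the following minimal PyTorch sketch computes one hidden layer's pre-activation and its ReLU activation; the layer sizes are illustrative and the tensor names are ours, not from the paper.

```python
import torch

# Eqs. (1)-(2) for a single hidden layer with illustrative dimensions.
W = torch.randn(20, 5)           # weight matrix of layer h (20 neurons, 5 inputs)
B = torch.randn(20)              # bias vector of layer h
a_prev = torch.rand(5)           # activations a_{h-1} from the previous layer
x_h = W @ a_prev + B             # pre-activation, Eq. (1)
a_h = torch.clamp(x_h, min=0.0)  # ReLU activation, Eq. (2): max(0, x)
```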
The BPWL method reformulates the ReLU function represented by (2) into (3)-(6), where \(\delta_{h}^{i}\) is a binary variable representing the activation status of neuron \(i\) in layer \(h\), and \(BigM\) is a pre-specified numerical value that is larger than any possible value of \(|x_{h}^{i}|\).
As shown in Fig. 1, the BPWL model offers the distinct advantage of perfectly linearizing the ReLU activation function without any reformulation losses, which is illustrated by the fact that the BPWL line in Fig. 1 completely overlaps the ReLU function curve. However, this method requires one additional binary variable for each neuron that applies the ReLU function, which may significantly increase the computational complexity.
\[a_{h}^{i}\leq x_{h}^{i}+BigM*(1-\delta_{h}^{i}) \tag{3}\] \[a_{h}^{i}\geq x_{h}^{i} \tag{4}\]
Fig. 1: Illustration of the BPWL model for ReLU linearization.
\[a_{h}^{i}\leq BigM\cdot\delta_{h}^{i} \tag{5}\] \[a_{h}^{i}\geq 0 \tag{6}\]
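To make the BPWL formulation concrete, the following Pyomo sketch adds constraints (3)-(6) for one layer. It is a minimal illustration with hypothetical names (`x` and `a` are assumed to be Pyomo variables already attached to `model`), not the authors' implementation.

```python
import pyomo.environ as pyo

BIG_M = 100.0  # must exceed any possible pre-activation magnitude |x_h^i|

def add_bpwl_relu(model, neurons, x, a):
    """Big-M linearization of a = ReLU(x), Eqs. (3)-(6), for one layer."""
    model.delta = pyo.Var(neurons, domain=pyo.Binary)  # neuron activation status
    model.c3 = pyo.Constraint(neurons, rule=lambda m, i:
                              a[i] <= x[i] + BIG_M * (1 - m.delta[i]))
    model.c4 = pyo.Constraint(neurons, rule=lambda m, i: a[i] >= x[i])
    model.c5 = pyo.Constraint(neurons, rule=lambda m, i:
                              a[i] <= BIG_M * m.delta[i])
    model.c6 = pyo.Constraint(neurons, rule=lambda m, i: a[i] >= 0)
```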
The proposed CTAR approximates the ReLU function at each neuron with (4) and (6)-(7), which constrain the feasible solution set. CTAR offers a purely linear representation of ReLU, introducing minimal complexity to optimization problems while acknowledging the presence of approximation errors. As shown in Fig. 2, the blue area denotes the feasible region, delimited by the lower bound (LB) and upper bound (UB), both of which can be determined from the neural network model. Specifically, LB should be less than the minimum neuron pre-activation value, while UB should exceed the maximum neuron pre-activation value. In most cases with normalized training data, each neuron's value falls between -1 and 1. It is noteworthy that different choices for LB and UB can impact the performance of the CTAR linearization method.
\[a_{h}^{i}\leq\frac{UB}{UB-LB}x_{h}^{i}-\frac{UB\cdot LB}{UB-LB} \tag{7}\]
The proposed P-CTAR model is introduced to reduce the approximation error associated with the CTAR model, using (4) and (6)-(8). To achieve this, a mitigation strategy incorporates a penalty term \(c_{h}\) into the objective function of the optimization model. This penalty encourages the nonnegative variable \(a_{h}^{i}\) to be positioned closer to the lower two sides of the triangular representation, corresponding to the actual ReLU-activated values, as shown in Fig. 3.
\[f^{c}=\sum a_{h}^{i}c_{h} \tag{8}\]
The proposed PCAR method adds a penalty term \(c_{h}\) on the nonnegative \(a_{h}^{i}\) without LB and UB, represented by (4), (6), and (8), which forces \(a_{h}^{i}\) towards the ReLU-activated values as shown in Fig. 4. Compared to CTAR, the PCAR method offers enhanced accuracy, especially when equipped with sufficiently large penalty terms, and notable efficiency gains due to the absence of constraint (7). Thus, this method stands out for its ability to significantly reduce computational complexity compared to other available linearization approaches.
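For comparison, a hedged sketch of the three relaxation-based variants is given below: CTAR adds the triangle upper bound (7) on top of (4) and (6), while P-CTAR and PCAR add the penalty (8) to the objective (PCAR drops (7) entirely). The variable and function names are ours.

```python
import pyomo.environ as pyo

LB, UB = -1.0, 1.0  # pre-activation bounds; typical with normalized data

def add_ctar_relu(model, neurons, x, a):
    """Convex triangle relaxation of a = ReLU(x): Eqs. (4), (6), (7)."""
    model.c4 = pyo.Constraint(neurons, rule=lambda m, i: a[i] >= x[i])
    model.c6 = pyo.Constraint(neurons, rule=lambda m, i: a[i] >= 0)
    model.c7 = pyo.Constraint(neurons, rule=lambda m, i:
                              a[i] <= UB / (UB - LB) * x[i]
                                    - UB * LB / (UB - LB))

def relu_penalty(a, neurons, c_h=10.0):
    """Penalty term f^c of Eq. (8); add it to the objective for P-CTAR,
    or use it together with only (4) and (6) for PCAR."""
    return sum(c_h * a[i] for i in neurons)
```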
## III Traditional Microgrid Day-Ahead Energy Scheduling Model
This section presents the traditional microgrid day-ahead scheduling model. This model consists of (9)-(26) as described below and it does not consider battery degradation.
The objective of this traditional MDS model is to minimize the total cost of microgrid operations, as illustrated in (9). The MDS model includes a power balance equation, detailed in (10), encompassing controllable generators, RES, power exchange with the main grid, battery energy storage system (BESS) output, and the load. Constraint (11) enforces the power limits of controllable units such as diesel generators, while (12) and (13) enforce the ramping-up and ramping-down limits. Equations (14)-(16) establish the relationship between a controllable unit's start-up status and its on/off status. Equation (17) restricts the BESS to be either in charging mode, in discharging mode, or idle. Constraints (18)-(19) limit the charging/discharging power of the BESS. Equation (20) governs the power exchange status between the microgrid and the main grid, indicating whether it involves purchasing, selling, or remaining idle. Constraints (21)-(22) define the thermal limits of the tie-line. Equation (23) computes the energy stored in the BESS for each time interval. Constraint (24) ensures that the final energy of the BESS equals the initial energy value, while (25) respects the BESS capacity limit. Constraint (26) guarantees the microgrid maintains sufficient backup power to handle outage events.
Objective function:
\[f^{MG}=\sum_{t}\sum_{g}\left(P_{g,t}\,c_{g}+U_{g,t}\,c_{g}^{NL}+V_{g,t}\,c_{g}^{SU}\right)+\sum_{t}\left(P_{t}^{Buy}\,c_{t}^{Buy}-P_{t}^{Sell}\,c_{t}^{Sell}\right) \tag{9}\]
Constraints are as follows:
\[P_{t}^{Buy}+\sum_{g\in S_{G}}P_{g,t}+\sum_{w\in S_{WT}}P_{w,t}+\sum_{pv\in S_{PV}}P_{pv,t}+\sum_{s\in S_{S}}P_{s,t}^{Disc}=P_{t}^{Sell}+\sum_{l\in S_{L}}P_{l,t}+\sum_{s\in S_{S}}P_{s,t}^{Char},\ \forall t \tag{10}\] \[P_{g}^{Min}\leq P_{g,t}\leq P_{g}^{Max},\ \forall g,t \tag{11}\]
\[P_{g,t+1}-P_{g,t}\leq\Delta T\cdot P_{g}^{\text{ramp}},\forall g,t \tag{12}\]
\[P_{g,t}-P_{g,t+1}\leq\Delta T\cdot P_{g}^{\text{ramp}},\forall g,t \tag{13}\]
\[V_{g,t}\geq U_{g,t}-U_{g,t-1},\forall g,t \tag{14}\]
\[V_{g,t+1}\leq 1-U_{g,t},\forall g,t \tag{15}\]
\[V_{g,t}\leq U_{g,t},\forall g,t \tag{16}\]
\[U_{g,t}^{\text{Disc}}+U_{g,t}^{\text{Charg}}\leq 1,\forall g,t \tag{17}\]
Fig. 4: Illustration of the PCAR model for ReLU linearization.
Fig. 3: Illustration of the P-CTAR model for ReLU linearization.
Fig. 2: Illustration of the CTAR model for ReLU linearization.
\[U_{s,t}^{Char}\cdot P_{s}^{Min}\leq P_{s,t}^{Char}\leq U_{s,t}^{Char}\cdot P_{s}^{Max},\ \forall s,t \tag{18}\] \[U_{s,t}^{Disc}\cdot P_{s}^{Min}\leq P_{s,t}^{Disc}\leq U_{s,t}^{Disc}\cdot P_{s}^{Max},\ \forall s,t \tag{19}\] \[U_{t}^{Buy}+U_{t}^{Sell}\leq 1,\ \forall t \tag{20}\] \[0\leq P_{t}^{Buy}\leq U_{t}^{Buy}\cdot P_{Grid}^{Max},\ \forall t \tag{21}\] \[0\leq P_{t}^{Sell}\leq U_{t}^{Sell}\cdot P_{Grid}^{Max},\ \forall t \tag{22}\] \[E_{s,t}-E_{s,t-1}+\Delta T\big{(}P_{s,t-1}^{Disc}\eta_{s}^{Disc}-P_{s,t-1}^{Char}\eta_{s}^{Char}\big{)}=0,\ \forall s,t \tag{23}\] \[E_{s,t=24}=E_{s}^{Initial},\ \forall s \tag{24}\] \[0\leq E_{s,t}\leq E_{s}^{Max},\ \forall s,t \tag{25}\] \[P_{Grid}^{Max}-P_{t}^{Buy}+P_{t}^{Sell}+\sum_{g\in S_{G}}\big{(}P_{g}^{Max}-P_{g,t}\big{)}\geq R_{percent}\sum_{l\in S_{L}}P_{l,t},\ \forall t \tag{26}\]
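As an illustration of how a few of these constraints translate into code, the sketch below expresses the BESS mode-exclusivity constraint (17) and the energy balance (23) in Pyomo. Here `u_char`, `u_disc`, `p_char`, `p_disc`, `E`, and the efficiency parameters are hypothetical model components we introduce for illustration, not the authors' code.

```python
import pyomo.environ as pyo

def add_bess_constraints(model, S, T, dT=1.0):
    """Sketch of BESS constraints (17) and (23) for the MDS model."""
    # (17): each BESS is charging, discharging, or idle in every period
    model.bess_mode = pyo.Constraint(S, T, rule=lambda m, s, t:
        m.u_char[s, t] + m.u_disc[s, t] <= 1)

    # (23): stored energy evolves with charging/discharging between periods
    def energy_rule(m, s, t):
        if t == min(T):
            return pyo.Constraint.Skip  # initial energy fixed elsewhere
        return m.E[s, t] == m.E[s, t - 1] + dT * (
            m.p_char[s, t - 1] * m.eta_char[s]
            - m.p_disc[s, t - 1] * m.eta_disc[s])
    model.bess_energy = pyo.Constraint(S, T, rule=energy_rule)
```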
## IV Neural Network Integrated Optimal Day-Ahead Scheduling Model
### _Neural Network Based Battery Degradation Model_
We have constructed a fully connected neural network model, as mentioned in Section I, to predict battery degradation, with five key aging factors (ambient temperature, C rate, SOC, DOD, and SOH) forming a five-element input vector for the network. Each input vector corresponds to a single output value, representing the battery degradation as a percentage relative to the SOH for the corresponding cycle. The structure of the trained neural network is shown in Fig. 5 [17], plotted with NN-SVG. It has an input layer with 5 neurons corresponding to the 5 input features, a first hidden layer with 20 neurons, a second hidden layer with 10 neurons, and an output layer with 1 neuron indicating the percentage battery degradation. The activation function is ReLU for the hidden layers and linear for the output layer.
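Assuming a PyTorch implementation, the described topology can be reproduced with a few lines (a sketch of the architecture only, without the trained weights):

```python
import torch.nn as nn

# NNBD topology from Fig. 5: 5 input features -> 20 -> 10 -> 1,
# ReLU in the hidden layers and a linear output neuron.
nnbd = nn.Sequential(
    nn.Linear(5, 20), nn.ReLU(),
    nn.Linear(20, 10), nn.ReLU(),
    nn.Linear(10, 1),
)
```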
### _NNBD based Day-ahead Scheduling Model_
Based on the BESS operation profile, the SOC level is required as an input to the NNBD model. DOD is calculated as the absolute difference in SOC levels between time intervals \(t\) and \(t-1\), as shown in (27). The C rate is calculated by (28). The input vector is formed as shown in (29) and then fed into the trained NNBD model to obtain the total battery degradation over the MDS time horizon in (30). The sum of the battery degradation is used to calculate the equivalent battery degradation cost by (31). The updated objective function (32) is required for two of the proposed models: BPWL and CTAR. The NNBD-based day-ahead scheduling model thus consists of (9)-(32). However, due to the nonlinearity of the NNBD model, especially the ReLU activation function, the integration of the NNBD model into the day-ahead scheduling framework cannot be solved directly. Therefore, we employ the proposed linearization methods within this intricate optimization model to facilitate its efficient solution.
\[DOD_{t}=|SOC_{t}-SOC_{t-1}| \tag{27}\] \[C_{t}^{Rate}=DOD_{t}/\Delta T \tag{28}\] \[\overline{x}_{t}=(T,C,SOC,DOD,SOH) \tag{29}\] \[BD=\sum_{t\in S_{T}}f^{NN}(\overline{x}_{t})\,SOH \tag{30}\] \[f^{BESS}=\frac{c_{BESS}^{Capital}-c_{BESS}^{SV}}{1-SOH_{EOL}}\,BD \tag{31}\] \[f=f^{MG}+f^{BESS} \tag{32}\]
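The following Python sketch traces (27)-(31) for a given SOC profile; `nnbd` is any callable degradation predictor, and the capital, salvage, and end-of-life values are illustrative placeholders, not the paper's parameters.

```python
def degradation_cost(soc, temp, soh, nnbd, dT=1.0,
                     c_capital=200_000.0, c_salvage=20_000.0, soh_eol=0.8):
    """Eqs. (27)-(31): build NNBD inputs from a SOC profile and convert the
    predicted per-cycle degradation into an equivalent cost."""
    bd = 0.0
    for t in range(1, len(soc)):
        dod = abs(soc[t] - soc[t - 1])              # (27)
        c_rate = dod / dT                           # (28)
        x_t = (temp[t], c_rate, soc[t], dod, soh)   # (29)
        bd += nnbd(x_t) * soh                       # (30)
    return (c_capital - c_salvage) / (1.0 - soh_eol) * bd  # (31)
```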
### _Linearization for NNBD-based MDS Model_
Since the nonlinear ReLU activation function makes the optimization model extremely hard to solve, if not unsolvable, we reformulate the MDS model to make it linear and solvable with each of the proposed four ReLU linearization models. The updated objective function (33) is required for two of the proposed models: P-CTAR and PCAR. The linearized NNBD-integrated day-ahead energy scheduling models are defined in Table I.
\[f=f^{MG}+f^{BESS}+f^{c} \tag{33}\]
## V Case Studies
To evaluate the effectiveness of the linearized day-ahead scheduling model, we utilized a representative grid-connected microgrid [19] as the test case, featuring several distributed energy resources including renewables and controllable units. The microgrid setup encompasses various key components, notably a conventional diesel generator, wind turbines, residential houses with solar panels, and a lithium-ion BESS with a charging/discharging round-trip efficiency of 90%. The parameters of these main components are provided in Table II. To simulate real-world scenarios accurately, the load data for the microgrid is obtained from the electricity consumption patterns of 1000 residential households from the Pecan Street Dataport [17]. Additionally, for a comprehensive representation of environmental conditions, the ambient temperature and available solar power data are sourced over a 24-hour period from the same Pecan Street Dataport.
Fig. 5: Structure of the NNBD model [17].
Furthermore, the wholesale electricity price data used in the microgrid model is obtained from the Electric Reliability Council of Texas (ERCOT), which manages the majority of the Texas grid [18].
The MDS optimization problem in this paper was solved on a computer equipped with the following hardware: an AMD Ryzen 7 3800X processor, 32 GB of RAM, and an Nvidia Quadro RTX 2700 Super GPU with 8 GB of memory. The Pyomo [19] package, a powerful optimization modeling framework, was utilized to formulate the day-ahead optimization problem. To expedite the search for optimal solutions, we employed the high-performance mathematical programming solver Gurobi [20].
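For reference, solving a Pyomo model with Gurobi reduces to a few lines. This is a generic sketch: `model` stands for the linearized MDS model built above, and `model.obj` is an assumed objective component name.

```python
import pyomo.environ as pyo

solver = pyo.SolverFactory("gurobi")
results = solver.solve(model, tee=True)  # tee=True streams the solver log
print(pyo.value(model.obj))              # total cost after solving
```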
Table III presents the performance results obtained from the microgrid testbed for the four proposed linearization models. In Table III, the term _Degradation_ represents the degradation value calculated by the proposed linearization methods, while _Real Degradation_ represents the degradation value computed separately by the NNBD model from the BESS operation profile, used to gauge the accuracy of the linearization models. The term _Error_ is derived from the comparison between _Degradation_ and _Real Degradation_. The term _Total Cost_ represents the combined MDS operation cost and the equivalent degradation cost; note that this equivalent degradation cost is derived from _Degradation_ and may therefore include the linearization approximation error. The term _Real Total Cost_ represents the combined MDS operation cost and the real equivalent degradation cost based on _Real Degradation_. Also, the _Total Cost_ in Table III already excludes the penalty cost for the P-CTAR and PCAR models. The outcomes indicate that BPWL yields the lowest linearization error, attributed to its complete linearization of the ReLU activation function. The minimal remaining degradation prediction error results from rounding the weights of the NNBD model during the battery degradation verification process; ideally, the linearization error would be zero.
However, it is worth mentioning that the BPWL model requires a relatively long solving time, since the added binary variables enlarge the search space exponentially. Although the current solving time of 28 seconds remains acceptable for a microgrid case, it would increase significantly for larger systems. Overall, the BPWL model represents the ReLU function exactly with no reformulation losses but introduces binary variables, complicating the optimization model and leading to longer solving times. In contrast, the computation times of the other linearization methods are significantly shorter. Notably, PCAR yields the shortest solving time, as expected, given its fewer constraints compared to CTAR and P-CTAR.
The three MDS models with non-exact ReLU linearization achieve different accuracies. The proposed P-CTAR model exhibits the second-lowest error among the four proposed methods, signifying its ability to accurately linearize the ReLU activation function. A comparative analysis between CTAR and P-CTAR reveals that CTAR introduces substantial linearization errors, suggesting an ineffective linearization approach in this particular case. The significant approximation error in CTAR results from the absence of a penalty term in the model; this may change if the MDS model changes. In contrast, the P-CTAR model incorporates a penalty term in addition to the CTAR model, leading to a considerable performance enhancement relative to CTAR. The P-CTAR model outperforms the PCAR model, which implies that the chosen lower-bound and upper-bound values for CTAR are appropriate and effective in this testbed. The PCAR model, however, exhibits a linearization error of 37%, notably higher than P-CTAR. It is important to note that the linearization errors of both the PCAR and P-CTAR models are influenced by the penalty constant in the objective function, which requires additional sensitivity tests to determine the optimal setup. While BPWL outperforms the other three models in terms of linearization accuracy, it lags behind in solving efficiency, especially for larger systems where it could lead to significantly extended solving times. In contrast, P-CTAR achieves the best linearization error among the remaining models: its solving time is decreased significantly compared to the BPWL model while maintaining the linearization performance.
Fig. 6: Microgrid load profile.
Fig. 7 graphically illustrates the linearization performance of the P-CTAR model when compared to the reference NNBD degradation value, which is based on the BESS operation profile per usage cycle. The linearized degradation, produced directly from the optimization model using the P-CTAR model to linearize the internal ReLU function of the NNBD model, exhibits a trend that closely aligns with the benchmark model. While there are some deviations in specific time periods, the overall degradation value over the 24-hour look-ahead scheduling horizon demonstrates minimal prediction error, affirming the efficacy of the P-CTAR linearization model.
It's noteworthy that during the design of the proposed linearization models, the expectation was that P-CTAR might introduce additional complexity to the optimization model, potentially leading to longer solving times. The results indeed align with this expectation, as the P-CTAR model's solving time surpasses that of the CTAR model. This trade-off between model efficiency and accuracy is duly considered in our evaluation.
Fig. 8 presents the results of a sensitivity test for the P-CTAR model using different penalty cost constants \(c_{h}\). This penalty cost serves to mitigate the approximation error associated with the CTAR model. However, different values of \(c_{h}\) may lead to different outcomes, as the penalty cost is directly integrated into the objective function of the optimization problem. The objective cost reflects the expenses incurred by the objective function (33), while the penalty cost accounts for the costs introduced by the P-CTAR model. The real cost represents the MDS operating cost plus the battery degradation cost, calculated as the difference between the objective cost and the penalty cost; the battery degradation cost here may include an error due to the linearization approximation of the P-CTAR model. Upon examination of the figure, it becomes evident that there is no significant variation in the real cost as \(c_{h}\) increases. However, the lowest linearization error is observed when \(c_{h}\) is set to 10. Beyond this threshold, increasing \(c_{h}\) leads to an escalation in the battery degradation error, which is directly caused by the error of the linearization model. In summary, the selection of \(c_{h}\) demands careful consideration and thorough preliminary testing to ensure the optimal performance of the P-CTAR model. It is essential to emphasize that the optimal \(c_{h}\) value provided here pertains specifically to the NNBD-based MDS optimization problem; if the optimization model is modified, recalibration would be required to determine the ideal \(c_{h}\) value.
We also conducted a sensitivity test on the PCAR model with varying \(c_{h}\) values, and the results are depicted in Fig. 9. Notably, as we increase the \(c_{h}\) value, the linearization error does not exhibit a substantial decrease. In essence, there does not appear to be an optimal \(c_{h}\) value for the PCAR model that significantly enhances linearization accuracy. While the PCAR model demonstrates effectiveness, it does not achieve the same level of performance as the P-CTAR model. Consequently, for the NNBD-integrated MDS optimization problem, the PCAR model may not represent the most ideal solution. However, it is worth noting that the PCAR model could potentially serve as the optimal solution for other neural network-based optimization models with distinct characteristics and requirements.
## VI Conclusion
This paper investigated four innovative linearization models designed to tackle the challenges caused by the ReLU activation function in neural networks when integrated into
Fig. 8: P-CTAR sensitivity tests.
Fig. 7: P-CTAR model degradation comparison.
Fig. 9: PCAR sensitivity tests.
optimization models where a subset of decision variables serves as the input features of the learning model. The inherent nonlinearity of the ReLU function often makes such models complex and difficult to solve directly. However, by harnessing the proposed linearization models BPWL, CTAR, P-CTAR, and PCAR, it becomes feasible to effectively address the intricacies of neural network-integrated optimization problems. Our findings demonstrate that the BPWL model achieves the best accuracy by fully reformulating the ReLU activation function with auxiliary binary variables, yielding exact problem solutions. Furthermore, the proposed P-CTAR model maintains impressive linearization accuracy without significant compromise while substantially reducing the computing time.
Crucially, our results highlight that the choice of penalty terms is pivotal in obtaining optimal solutions for P-CTAR and PCAR models. While CTAR and PCAR may not deliver perfect performance in the NNBD-integrated microgrid testbed, they remain viable options for modeling and solving this class of problems. It's important to emphasize that when changes are made to the optimization model, the performance of the proposed linearization models should be re-evaluated. In summary, the linearization models introduced in this paper open up fresh avenues for efficiently addressing the challenges posed by non-linear neural network-integrated optimization problems. In future work, we intend to delve deeper into how these linearization models influence optimization models and further develop linearization models tailored to other activation functions such as the softmax function.
|
2308.07163 | HyperSparse Neural Networks: Shifting Exploration to Exploitation
through Adaptive Regularization | Sparse neural networks are a key factor in developing resource-efficient
machine learning applications. We propose the novel and powerful sparse
learning method Adaptive Regularized Training (ART) to compress dense into
sparse networks. Instead of the commonly used binary mask during training to
reduce the number of model weights, we inherently shrink weights close to zero
in an iterative manner with increasing weight regularization. Our method
compresses the pre-trained model knowledge into the weights of highest
magnitude. Therefore, we introduce a novel regularization loss named
HyperSparse that exploits the highest weights while conserving the ability of
weight exploration. Extensive experiments on CIFAR and TinyImageNet show that
our method leads to notable performance gains compared to other sparsification
methods, especially in extremely high sparsity regimes up to 99.8 percent model
sparsity. Additional investigations provide new insights into the patterns that
are encoded in weights with high magnitudes. | Patrick Glandorf, Timo Kaiser, Bodo Rosenhahn | 2023-08-14T14:18:11Z | http://arxiv.org/abs/2308.07163v2 | # HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization
###### Abstract
Sparse neural networks are a key factor in developing resource-efficient machine learning applications. We propose the novel and powerful sparse learning method Adaptive Regularized Training (ART) to compress dense into sparse networks. Instead of the commonly used binary mask during training to reduce the number of model weights, we inherently shrink weights close to zero in an iterative manner with increasing weight regularization. Our method compresses the pre-trained model "knowledge" into the weights of highest magnitude. Therefore, we introduce a novel regularization loss named HyperSparse that exploits the highest weights while conserving the ability of weight exploration. Extensive experiments on CIFAR and TinyImageNet show that our method leads to notable performance gains compared to other sparsification methods, especially in extremely high sparsity regimes up to \(99.8\%\) model sparsity. Additional investigations provide new insights into the patterns that are encoded in weights with high magnitudes.1
Footnote 1: Code available at [https://github.com/GreenAutoML4FAS/HyperSparse](https://github.com/GreenAutoML4FAS/HyperSparse)
## 1 Introduction
Recent years have shown tremendous progress in the field of machine learning based on the use of neural networks (NN). Alongside the increasing accuracy in nearly all tasks, the computational complexity of NNs has also increased, _e.g._, for Transformers [5, 7] or Large Language Models [2]. This complexity causes high energy costs, limits applicability in cost-efficient systems [10], and is counterproductive for fairness and trustworthiness due to dwindling interpretability [38].
Facing these issues, recent years have also led to a growing community in the field of sparse NNs [12]. The goal is to find small subgraphs (_a.k.a._ sparse NNs) in well-performing NNs that have similar or comparable capabilities regarding the main task while being significantly less complex, and therefore cheaper and potentially better interpretable. Standard methods usually create sparse NNs by obtaining a binary mask that limits the number of used weights in a NN [20, 34, 42]. The most prominent method is _Iterative Magnitude Pruning (IMP)_ [16], which is based on the _Lottery Ticket Hypothesis (LTH)_ [9]. Assuming that important weights have high magnitudes after training, it trains a dense NN and removes the mask elements that correspond to the lowest weights. Afterward, the sparse NN is reinitialized and retrained from scratch. The process is iterated until the desired sparsity level is reached.
The assumption of magnitude pruning that the highest weights in dense NNs encode the most important decision rules for a diverse set of classes is problematic, because it is not guaranteed. Removed weights that could contribute
Figure 1: Magnitude of weights with their corresponding gradients at different epochs derived from our _HyperSparse_ loss sorted by the weight magnitude. The weights and gradients belong to a ResNet-32 trained on Cifar-100, where the desired pruning rate is \(\kappa=90\%\). The smallest weight \(w_{\kappa}\) that remains after pruning is marked by a dashed line. Note that we added the gradient for the \(\mathcal{L}_{1}\) loss in green.
to the prediction can no longer be reactivated during fine-tuning. In the worst case, a "layer collapse" can prohibit useful forward propagation [37]. The lack of exploration ability persists in the more accurate but resource-consuming iterative _IMP_ approach.
Reviving the key ideas of _Han et al_. [10] and _Narang et al_. [27] (comparable to [26]), we introduce a lightweight and powerful method called _Adaptive Regularized Training (ART)_ to obtain highly sparse NNs, which implicitly "removes" weights with increasing regularization until a desired sparsity level is reached. _ART_ strongly regularizes the weights before magnitude pruning. First, a dense NN is pre-trained until convergence. In the second stage, the NN is trained with an increasing and weight decaying regularization until the hypothetical magnitude pruned NN performs on par with the dense counterpart. Lastly, we apply magnitude pruning and fine-tune the NN without regularization. Avoiding binary masks in the second stage allows exploration and regularization forces the exploitation of weights that remain in the sparse NN. We introduce the new regularization approach _HyperSparse_ for the second stage that overcomes static regularization like _Lasso_[39] or _Weight Decay_[44] and adapts to the weight magnitude by penalizing small weights. _HyperSparse_ balances the exploration/exploitation tradeoff and thus increases the accuracy while leading to faster convergence in the second stage. The combination of our regularization schedule and _HyperSparse_ improves the classification accuracy and optimization time significantly, especially in high sparsity regimes with up to \(99.8\%\) zero weights. We evaluate our method on CIFAR-10/100 [19] and TinyImageNet [6] with ResNet-32 [11] and VGG-19 [33].
Moreover, we analyze the gradient and weight distribution during regularized training, showing that _HyperSparse_ leads to faster convergence to sparse NNs. The experiments also show that the claim of [34], that optimal sparse NNs can be obtained via simple weight distribution heuristics, does not hold in general. Finally, we analyze the process of compressing dense NNs into sparse NNs and show that the highest weights in NNs do not encode decision rules for a diverse set of classes with equal priority.
**In summary**, this paper
* introduces _HyperSparse_, a superior adaptive regularization loss that implicitly promotes configurable network sparsity by balancing the exploration and exploitation tradeoff.
* introduces the novel framework _ART_ to obtain sparse networks using regularization with increasing leverage, which improves the optimization time and classification accuracy of sparse neural networks, especially in high sparsity regimes.
* analyzes the continuous process of compressing patterns from dense to sparse neural networks.
## 2 Related Work
Sparse Learningmethods that find binary masks to remove a predefined amount of weights can be categorized as static or dynamic (_e.g_., in [14, 4, 12]). According to [4], in dynamic sparse training _"[...] removed elements [from masks] have chances to be grown back if they potentially benefit to predictions"_ whereas static training incorporates fixed masks.
Static methods are usually based on _Frankle et al_. [9], who introduce _LTH_ and show that well performing sparse NNs in random initialised NNs can be found after dense training via magnitude pruning. The magnitude pruning method is improved by _IMP_[16] that iterates the process. Replacing the time consuming training procedure, methods like _SNIP_[20] or _GraSP_[42] find sparse NNs in randomized dense NNs using a single network prediction and its gradients. To also address the risk of layer collapse during pruning, _SynFlow_[37] additionally conserves the total flow in the network. Contrary to the latter works, _Su et al_. [34] claim that appropriate sparse NNs do not depend on data or weight initialization and provide a general heuristic for the distribution of weights.
Different from static methods, dynamic methods prune and re-activate zero elements in the binary mask. The weights that are reactivated can be selected randomly [25] or determined by the gradients [3, 4, 8]. For example, _RigL_ [8] iteratively prunes weights with low magnitude and reactivates weights with the highest gradients. Modern dynamic methods also utilize continuous masks. For example, _Tai et al_. [36] relax the _IMP_ framework by introducing a parameterized softmax to obtain a weighted average between _IMP_ and _Top-KAST_ [15]. Similarly, [24, 31] relax the binary mask and optimize its \(\mathcal{L}_{0}\)-norm. Another way is to inherently prune the model, _e.g_., by reducing the gradients of weights with small magnitude [32]. Compared to static methods, _Liu et al_. [22, 23] show that dynamic sparse training methods overcome most static methods by allowing weight exploration.
Another property to distinguish modern sparse learning methods is the complexity during mask generation, _e.g_., as done by _Schwarz et al_. [32]. The more resource efficient _sparse_\(\rightarrow\)_sparse_ methods sustain sparse NNs during training [4, 8, 20, 32, 34, 42, 37], whereas _dense_\(\rightarrow\)_sparse_ methods utilize all parameters before finding the final mask [9, 15, 16, 36, 24, 31].
However, as explained later, our approach belongs to the _dense_\(\rightarrow\)_sparse_ methods that inherently reduce the model complexity without masking before magnitude pruning to obtain a static sparse mask for fine-tuning. We want to mention the primary works of _Han et al_. [10], _Narang et al_. [27] and _Molchanov et al_. [26], whose combination serves as a role model for us. _Han et al_. use \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) regularization to reduce the number of non-zero elements during
training. Their early framework uses regularization without bells and whistles and has no ability to control the sparsity level. _Narang et al_. and _Molchanov et al_. remove weights in fine-grained portions with an increasing removal-threshold, but do not incorporate weight exploration.
Interpretability and Understandingof machine learning is closely related to sparse learning and is also addressed in this paper. There is an increasing number of works in recent years that utilize sparse learning for other benefits, for example, to find interpretable correlations between feature- and image-space [38] or to visualize inter-class ambiguities [18]. The work of _Paul et al_. [28] gives details about the early learning stage which is crucial, _e.g_., to determine memorization of label noise [17]. They show that most data is not necessary to obtain suitable subnetworks. The general relationship between _LTH_ and generalization is investigated in [30]. _Varma et al_. [35] show that sparse NNs are better suited in data limited and noisy regimes. On the other hand _Hooker et al_. [13] show that sparse NNs have non-trivial impact on ethical bias by investigating which samples are "forgotten" first during network compression. The underlying research question of the latter work is altered to _"Which samples are compressed first?"_ and discussed in this paper.
## 3 Method
Sparsification aims to reduce the number of non-zero weights in a NN. To address this problem, we use a certain schedule for regularization such that small weights converge to zero and our model implicitly becomes sparse. In Sec. 3.1, we formally define the sparsification problem. Then, we present _Adaptive Regularized Training_ (ART) in Sec. 3.2, which iteratively increases the leverage of regularization to maximize the number of close-to-zero weights. Moreover, we introduce our regularization loss _HyperSparse_ in Sec. 3.3 that is integrated in _ART_. It simultaneously allows the exploration of new topologies while exploiting weights of the final sparse subnetwork.
### Preliminaries
We consider a NN \(f(W,x)\) with topology \(f\) and weights \(W\) that is trained to classify images from a dataset \(S=\{(x_{n},y_{n})\}_{n=1}^{N}\), where \(y_{n}\) is the ground truth class to an image sample \(x_{n}\). The training is structured in epochs, which are iterative optimizations of the weights \(W=\{w_{1},\ldots,w_{D}\}\) over all samples in \(S\) to minimize the loss objective \(\mathcal{L}\). The obtained weights after epoch \(e\) are denoted as \(W_{e}\), with \(W_{0}\) denoting the weights before optimization. Furthermore, the classification accuracy of a NN is measured by a rating function \(\psi(W)\).
The goal in sparsification is to reduce the cardinality of \(W\) by removing a pre-defined ratio of weights \(\kappa\), while maximizing \(\psi(W)\). The network is pruned by the Hadamard product \(m\odot W\) of a binary mask \(m\in\{0,1\}^{D}\) and the model weights \(W\). The mask is usually created by applying magnitude pruning \(m=\nu(W)\) [3, 4, 9, 16], a technique that sets the \(\kappa\) lowest-magnitude weights to zero.
### Adaptive Regularized Training (ART)
Regularization losses like the \(L_{1}\)-norm (_Lasso_ regression) [39] or \(L_{2}\)-norm [44] are used to prevent overfitting by shrinking the magnitude of weights. We use this effect in _ART_ for sparsification, as weights with low magnitude have little effect on the output and thus can be removed with only little impact on \(\psi(W)\).
Regularization during training can be expressed as a mixed loss
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{class}}+\lambda_{\text{init}} \cdot\eta^{e}\cdot\mathcal{L}_{\text{reg}}\, \tag{1}\]
where \(\mathcal{L}_{\text{class}}\) is the classification loss and \(\mathcal{L}_{\text{reg}}\) the regularization loss. The gradient of \(\mathcal{L}_{\text{reg}}\) shrinks a set of weights to approximately zero and creates an inherently sparse network with an undefined pruning rate [39]. Increasing \(\eta\) leverages the regularization \(\mathcal{L}_{\text{reg}}\) in an ascending manner, but current approaches use a fixed regularization rate \(\eta=1\) [4, 10, 26].
After unregularized training of a dense NN to convergence, _ART_ employs the standard regularization framework and modifies it by setting \(\eta>1\) and a low initialization \(\lambda_{\text{init}}\). Consequently, the regularization loss \(\mathcal{L}_{\text{reg}}\) has almost no effect on \(\mathcal{L}_{\text{total}}\) in the beginning, but starts to shrink weights to zero without much impact on \(\mathcal{L}_{\text{class}}\). Initially, every weight \(w_{i}\) can still attain a high magnitude such that \(w_{i}\) is shifted into the sparse NN of highest weights (exploration). With increasing regularization, the influence of the gradient \(\frac{d\mathcal{L}_{\text{reg}}}{dw_{i}}\) on \(w_{i}\) increases and is more likely to overcome the gradient \(\frac{d\mathcal{L}_{\text{class}}}{dw_{i}}\). Regularization impedes proper exploration of small weights by pulling their magnitude to zero. On the other hand, the larger weights
need to be exploited to conserve the classification results. Therefore, our increasing regularization continually shifts the exploration/exploitation tradeoff from exploration to exploitation. The method allows reordering weights to find better topologies, but forces exploitation of the highest weights regarding the classification task. Due to the increasing number of weights that are approximately zero, the dense model converges to an inherently sparse model. We stop the regularized training once the NN with the best pruned weights \(\psi(\nu(W_{\text{best}})\odot W_{\text{best}})\) reaches a higher accuracy than the latest unpruned weights \(\psi(W_{e})\), and choose \(W_{\text{best}}\) as our candidate for fine-tuning.
The overall training pipeline is defined as follows:
1. Pre-train dense model until convergence without regularization.
2. Remove weights implicitly using _ART_ as described in algorithm 1.
3. Apply magnitude pruning and fine-tune pruned network until convergence.
_ART_ relaxes the iterative _IMP_ approach that prunes the least important weights over certain iterations. Analogous to the increasing pruning ratio in standard iterative methods, we iteratively increase the amount of weights that are close to zero and thus approximate a binary mask implicitly.
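A minimal Python sketch of this regularization stage is given below; `train_epoch`, `accuracy`, and `prune_mask` are assumed helper functions supplied by the caller (they are not part of the paper), where `prune_mask` implements magnitude pruning \(\nu(W)\).

```python
def art_stage2(model, train_epoch, accuracy, prune_mask, reg_loss,
               kappa, lam_init=5e-6, eta=1.05):
    """Regularization stage of ART: train with lambda = lam_init * eta^e
    until the magnitude-pruned network performs on par with the dense one."""
    lam = lam_init
    best_mask, best_acc = None, -1.0
    while True:
        train_epoch(model, lam, reg_loss)      # minimize L_class + lam * L_reg
        mask = prune_mask(model, kappa)        # nu(W): keep largest (1 - kappa)
        acc_pruned = accuracy(model, mask)
        if acc_pruned > best_acc:
            best_mask, best_acc = mask, acc_pruned
        if best_acc >= accuracy(model, None):  # pruned >= dense: stop
            return best_mask
        lam *= eta                             # increase regularization leverage
```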
### HyperSparse Regularization
The previous Section 3.2 describes the process of shrinking weights in \(W\) by penalizing them with ascending regularization. A drawback of this procedure is that weights that remain after pruning are also penalized by the regularization, which negatively affects the exploitation regarding the main task. Thus, remaining weights should not be penalized. On the other hand, if small weights are strongly penalized, the desired exploration property of dynamic pruning methods to "grow" these elements back is restricted. To address this tradeoff between exploitation and exploration, we introduce the sparsity-inducing adaptive regularization loss _HyperSparse_.
Incorporating the _Hyper_bolic Tangent function applied on the magnitude denoted as \(t(\cdot)=\tanh(|\cdot|)\) for simplicity, the _HyperSparse_ loss is defined as
\[\begin{split}\mathcal{L}_{\text{HS}}(W)=&\frac{1}{A }\sum_{i=1}^{|W|}\bigg{(}|w_{i}|\sum_{j=1}^{|W|}t(s\cdot w_{j})\bigg{)}-\sum_{i =1}^{|W|}|w_{i}|\\ &\text{with}\quad A:=\sum_{w\in W}t(s\cdot w)\\ &\text{and}\quad\forall w\in W:\quad\frac{dA}{dw}=0,\end{split} \tag{2}\]
where \(A\) is treated as a pseudo-constant in the gradient computation and \(s\) is an alignment factor that is described later. The penalty applied to each weight depends on its gradient and can therefore vary across weights. The gradient of _HyperSparse_ with respect to a weight \(w_{i}\) is approximately
\[\begin{split}\frac{d\mathcal{L}_{\text{HS}}(W)}{dw_{i}}=\text{ sign}(w_{i})\cdot\frac{t^{\prime}(s\cdot w_{i})\cdot\sum_{j=1}^{|W|}|w_{j}|}{ \sum_{j=1}^{|W|}t(s\cdot w_{j})},\\ \text{with}\quad w_{i},w_{j}\in W,\quad t^{\prime}(\cdot)\in(0,1 ].\end{split} \tag{3}\]
The derivative \(t^{\prime}=\frac{dt}{dw_{i}}\) converges towards \(1\) for small magnitudes \(|w_{i}|\approx 0\) and towards \(0\) for large magnitudes \(|w_{i}|\gg 0\). Thus, the second term in Eq. (3) is adaptive to the weights and highly penalizes small magnitudes, but is breaking down to zero for large ones. Details for the gradient calculation and analysis can be found in the supplementary material, Sec. D.
The alignment factor \(s\) is mandatory to exploit the aforementioned properties for the sparsification task with a specific pruning rate \(\kappa\). Since \(\mathcal{L}_{\text{HS}}\) is dependent on the weights magnitude, but there is no determinable value range for weights, our loss \(\mathcal{L}_{\text{HS}}\) is not guaranteed to adapt reasonably to a given \(W\). For example, considering a fixed \(s=1\) and all weights in \(W\) are close to zero, the gradient from Eq. (3) results into nearly the same value for every weight. Therefore, we adapt \(s\) to the smallest weight \(|w_{\kappa}|\) that would remain after magnitude pruning, such that \(t^{\prime\prime\prime}(s\cdot w_{\kappa})=0\), which is the point of inflection of \(t^{\prime}\). According to this alignment, the gradients in Eq. (3) of remaining weights \(|w|\geq|w_{\kappa}|\) are shifted closer to 1 and are increased for weights \(|w|\leq|w_{\kappa}|\), while adhering a smooth gradient from remaining to removed weights. Moreover, the denominator in Eq. (3) decreases over time, if more weights in \(W\) are close to zero subsequent to ascending regularization. The gradient for different weight distributions of a NN based on _HyperSparse_ is shown in Fig. 1 and visualizes the described gradient behavior of adaptive weight regularization.
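A compact PyTorch sketch of Eq. (2) and the alignment of \(s\) is shown below. Treating \(A\) as a pseudo-constant corresponds to detaching it from the autograd graph, and the numerical constant for the inflection point of \(t^{\prime}\) follows from solving \(t^{\prime\prime\prime}=0\) for \(t=\tanh(|\cdot|)\) (our derivation, approximately \(\operatorname{artanh}(1/\sqrt{3})\approx 0.658\)). This is an illustrative reimplementation, not the authors' released code.

```python
import torch

def hypersparse_loss(w: torch.Tensor, s: float) -> torch.Tensor:
    """HyperSparse regularizer L_HS of Eq. (2) on the flattened weights w."""
    abs_w = w.abs()
    t = torch.tanh(s * abs_w)       # t(s * w) = tanh(|s * w|) for s > 0
    A = t.sum().detach()            # pseudo-constant: no gradient through A
    return (abs_w.sum() * t.sum()) / A - abs_w.sum()

def alignment_factor(w: torch.Tensor, kappa: float) -> float:
    """Choose s such that t'''(s * w_kappa) = 0, i.e., the inflection point
    of t' falls on the smallest weight surviving pruning at rate kappa."""
    k = int(kappa * w.numel())                    # number of removed weights
    w_kappa = w.abs().sort().values[k]            # smallest surviving magnitude
    return 0.658 / max(float(w_kappa), 1e-12)     # artanh(1/sqrt(3)) / |w_kappa|
```

Note that the loss evaluates to zero numerically (the detached and non-detached sums coincide), but its gradient reproduces the adaptive behavior of Eq. (3): small weights receive an increased penalty while large weights are nearly untouched.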
## 4 Experiments
This section presents experiments showing that our proposed method _ART_ outperforms comparable methods, especially in extreme high sparsity regimes. Our experimental setup is described in Sec. 4.1. In the subsequent section, we show that _HyperSparse_ has a large positive impact on the optimization time and classification accuracy. This improvement is explained by analyzes of the tradeoff between exploration and exploitation, the gradient and weight distribution in Sec. 4.3 and 4.4. Finally, we analyze and discuss the compression behaviour during regularized training and derive further insights about highest magnitude weights in Sec. 4.5.
### Experimental Setup
We evaluate _ART_ on the datasets CIFAR-10/100 [19] and TinyImageNet [6] to cover different complexities, given by
a varying number of class labels. Furthermore, we use different model complexities, where ResNet-32 [11] is a simple model with 1.8 M parameters and VGG-19 [33] is a complex model with 20 M parameters. Note that we use the implementation given in [34]. As explained in Sec. 3.2, we group our training into 3 steps. First, we train our model for 60 epochs until convergence (step 1), using a constant learning rate of \(0.1\). In the following regularization step, we initialize the regularization with \(\lambda_{\text{init}}=5\cdot 10^{-6}\) and \(\eta=1.05\), and use the same learning rate as in pre-training. The fine-tuning step (step 3) is similar to [34]: we train for 160 epochs on CIFAR-10/100 and for 300 epochs on TinyImageNet, using a learning rate of \(0.1\) with a multiplicative decay of 0.1 at 2/4 and 3/4 of the total number of epochs. We also adopt a batch size of 64 and a weight decay of \(10^{-4}\). All experiments are averaged over 5 runs.
We compare our method _ART_ to _SNIP_[20], _Grasp_[42], _SRatio_[34], and _LTH_[9] similar as done in [34, 41]. In addition we evaluate _IMP_[16] and _RigL_[8] as dynamic pruning methods. For comparability, all competitors in our experiments are trained with the same setup as given in the fine-tuning-step. To improve the performance of _RigL_, we extend the training duration by 360 epochs. Further details are given in the supplementary material, Sec. A.
### Sparsity Level
In this section, we compare the performances of _ART_ to other methods on different sparsity levels \(\kappa\in\{90\%,95\%,98\%,99\%,99.5\%,99.8\%\}\), using different datasets and models. To demonstrate the advantages of our novel regularization loss, we additionally substitute _HyperSparse_ with \(\mathcal{L}_{1}\)[39] and \(\mathcal{L}_{2}\)[44]. Table 1 shows the resulting accuracies with standard deviations.
Table 1: Resulting accuracies (mean ± standard deviation over 5 runs) of _ART_ and the compared methods for ResNet-32 and VGG-19 on CIFAR-10/100 and TinyImageNet at pruning rates \(\kappa\in\{90\%,95\%,98\%,99\%,99.5\%,99.8\%\}\).

Table 2: Number of epochs with regularization to obtain the final mask, evaluated for multiple datasets, network topologies, and pruning rates \(\kappa\). It shows that our _HyperSparse_ \(\mathcal{L}_{\text{HS}}\) loss reduces the training time significantly.

Our method _ART_ combined with _HyperSparse_ outperforms
forms the methods _SNIP_[20], _Grasp_[42], _SRatio_[34], _LTH_[9] and _RigL_[8] on all sparsity levels. Considering the high sparsity levels of \(99\%\), \(99.5\%\) and \(99.8\%\), all competitors drop drastically in accuracy, even to the minimal classification bound of random prediction for _SNIP_ and _LTH_ using VGG-19. However, _ART_ is able to keep high accuracy even at extremely high sparsity levels. In comparison to the regularization losses \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\), our _HyperSparse_ loss achieves higher accuracy in nearly all settings and even reduces the variance. If we skip the pre-training step (step 1) of _ART_, the performance drops slightly. However, _ART_ without pre-training still achieves good results.
Moreover, we present the number of trained epochs for the regularization phase (step 2) in Tab. 2. In almost all cases, _HyperSparse_ requires fewer epochs to terminate compared to \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) and converges faster to a well-performing sparse model. As a second aspect, _ART_ dynamically adapts the training length to the sparsity level and to the model and data complexity. Thus, _ART_ trains longer if higher sparsity is required or if the model has more parameters and is more complex, like VGG-19. Comparing the two datasets CIFAR-10 and CIFAR-100, which have the same number of training samples and thus the same number of optimization steps per epoch, _ART_ extends the training length for the more complex classification problem of CIFAR-100.
_ART_ trains the model for 60 epochs in pre-training (step 1) and 160 epochs in fine-tuning (step 3). Including the dynamic training length of step 2, the total number of epochs of _ART_ using \(\mathcal{L}_{\text{HS}}\) ranges from \(226.2\) to \(301.2\) on average. In comparison, iterative pruning methods are computationally much more expensive, since each model is trained multiple times. For example, IMP [16] requires 860 epochs on CIFAR-10/100 in our experiments.
### Exploration and Exploitation aware Gradient
The training schedule of _ART_ allows exploring new topologies of sparse networks, while compressing the dense network into the remaining weights that are exploited to minimize the loss \(\mathcal{L}_{\text{class}}\). To balance the tradeoff between exploration and exploitation, our regularization loss _HyperSparse_ penalizes small weights with a higher regularization and forces most weights to be close to zero, while preserving the magnitude of weights that remain after pruning. To highlight the beneficial behaviour of _HyperSparse_, this section visualizes and analyzes the gradient. Fig. 1 shows the values and the corresponding gradients of all weights, sorted by weight magnitude. Note that we only focus on the second step of _ART_, where the regularization is incorporated. Epoch \(0\) represents the first epoch using regularization. In the lower subfigure, we observe that the gradient of _HyperSparse_ with respect to weights larger than \(|w_{\kappa}|\) is closer to 0 than for smaller weights. In comparison, the gradient of \(\mathcal{L}_{1}\) remains constant at 1 for all weights. The effect of increased regularization of small weights is stronger for networks with more weights close to zero and therefore amplifies over time, since increasing regularization shrinks the weight magnitudes. For example, epoch 40 shows higher gradients for small weights compared to epoch 0, while having more weights with low magnitude. \(\mathcal{L}_{\text{HS}}\), which depends on the pruning rate \(\kappa\), increases the gradient for small weights \(|w|<|w_{\kappa}|\) over time but conserves the low gradient of larger weights \(|w|>|w_{\kappa}|\) approximately at 0 to favor
Figure 3: Distribution of weights per layer after pruning in a ResNet-32 model that is trained on CIFAR-100 with pruning rate \(\kappa=98\%\). Layer index \(i\) describes the execution order. We group the model in residual blocks (RES), downsampling blocks (DS) and the linear layer (LL). Our method distributes the weights comparable to _IMP_[16], but it has more weights in the downsampling layers.
Figure 2: Intersection of the set of weights with highest magnitude during training and the final mask measured during _ART_ with ResNet-32, CIFAR-100, pruning rate \(\kappa=98\%\) and different regularization losses. Horizontal bars mark the intersection one epoch before pruning and the dashed line at epoch \(60\) indicates the start of regularization. Our _HyperSparse_ loss reduces the optimization time and the high intersection before pruning suggests a higher stability during regularization, which leads to better exploitation.
exploitation. During optimization, the gradient remains smooth and increases slowly for weights that are smaller, but close to \(|w_{\kappa}|\). This favors exploration in the domain of weights close to \(w_{\kappa}\). Therefore, the model becomes inherently sparse and the behaviour shifts continuously from exploration to exploitation.
### Reordering Weights
We use the regularization loss with ascending leverage to find a reasonable set of weights that remain after pruning. We do this implicitly by shrinking small weights close to zero. During training, weights are reordered and can thus change their membership from the set of pruned to the set of remaining weights, and vice versa. We analyze this reordering procedure in Fig. 2, which shows the intersection of the intermediate and final mask over all epochs, using different regularization losses in _ART_. The model is pre-trained to convergence without regularization for the first 60 epochs (step 1) and with regularization in further epochs (step 2). Fine-tuning (step 3) is not visualized. After pre-training, the set of highest weights intersects only up to \(20\%\) with the final mask obtained by \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\), while _HyperSparse_ leads to an intersection of approximately \(35\%\). These results show that _HyperSparse_ changes fewer parameters while reordering weights, which implies that more structures from the dense model are exploited. They also show that _HyperSparse_ requires a significantly shorter optimization time than \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\). The horizontal bars point to the intersection before the last training epoch and show that \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) only reach intersections of \(60\%\) and \(50\%\), while _HyperSparse_ comes very close to the final mask with more than \(90\%\) intersection. This indicates that _HyperSparse_ finds a more stable set of high-valued weights and reduces exploration, as the mask has less variation in the final epochs. More results for other training settings are shown in the supplementary material, Sec. B.
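The intersection metric shown in Fig. 2 is straightforward to reproduce from weight snapshots. The following sketch is our own illustration; the flattened-array layout and function names are assumptions, not code from this work:

```python
import numpy as np

def topk_mask(w, keep_ratio):
    """Binary magnitude-pruning mask keeping the keep_ratio = 1 - kappa largest weights."""
    k = max(1, int(round(keep_ratio * w.size)))
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.abs(w) >= thresh

def mask_intersection(w_epoch, w_final, keep_ratio):
    """Fraction of the final mask already covered by the highest weights at a given epoch."""
    m_epoch = topk_mask(w_epoch, keep_ratio)
    m_final = topk_mask(w_final, keep_ratio)
    return (m_epoch & m_final).sum() / m_final.sum()
```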
Moreover, we analyze the resulting weight distribution of our method and compare it to _IMP_[16] and _SRatio_[34]. Fig. 3 shows the number of remaining weights per layer for ResNet-32, which consists of three scaling levels that end in the linear layer (LL). Each scaling level consists of four residual blocks (RES), which are connected by a downsampling block (DS). The basic topology of _ART_ and _IMP_ looks similar, since both methods show a constant keep-ratio over the residual blocks. Furthermore, _ART_ and _IMP_ use more parameters in downsampling and linear layers. We conclude that these two layer types require more weights and consequently are more important to the model. The higher accuracy discussed earlier suggests that our method exploits these weights better. To show that these results are also obtained on other datasets, models, and sparsity levels, we describe further weight distributions in the supplementary material, Sec. C, and show that the number of parameters in the linear layer decreases drastically for a small set of classes in CIFAR-10. Moreover, the compared
\begin{table}
[Table values garbled in extraction; omitted.]
\end{table}
Table 3: _Compression Position_ (see Sec. 4.5) for dense NNs (during pre-training) and \(\kappa\) pruned NNs (during regularization) for six CIFAR-10 classes. Samples of a class are split into 4 subsets according to the number of human label errors in CIFAR-N to indicate the difficulty. In sparse networks, different classes are compressed at different times and difficult samples are compressed later. All classes and pruning rates can be found in the supplementary material, Tab. 2.
Figure 4: The first 5% of CIFAR-10 samples that are compressed into the remaining highest weights after pruning with \(\kappa\in\{0\%,90\%,99.8\%\}\), deduced by the CP-metric. While dense networks learn samples approximately uniformly distributed over classes, the highest weights compress decision rules only for a subset of classes in the early learning stage. Note that we subsampled by a factor of 10 for visualization purposes and ellipses represent twice the standard deviation around the cluster centers.
method _SRatio_ assumes that suitable sparse networks can be obtained using handcrafted keep-ratios per layer. It has a quadratically decreasing keep-ratio that can be observed in Fig. 3. As shown in Tab. 1, our method _ART_ performs significantly better than _SRatio_, and we therefore deduce that fixed keep-ratios have an adverse effect on performance. Reordering weights during training favors well-performing sparse NNs, especially in high sparsity regimes.
### What do networks compress first?
Along with the introduction of _ART_, we are faced with the question of which patterns are compressed first into the large weights that remain after magnitude pruning during regularization. This question contrasts with Hooker's question _"What Do Compressed Deep Neural Networks Forget?"_[13] and challenges the fundamental assumption of magnitude pruning, namely that large weights are most important. In this section, we analyze the chronological order in which samples are compressed and introduce the metric _Compression Position_ (CP) to determine it.
According to our method, regularization starts at epoch \(e_{S}\) and ends at \(e_{E}\) and therefore the weights \(W\) have different states \(\mathcal{W}=\{W_{e}\}_{e=e_{S}}^{e_{E}}\) during training. We measure the individual accuracy over time \(\psi_{1}\) reached by the sparse network for a training sample \((x,y)\in S\), defined by
\[\psi_{1}\!\big{(}x,y,f,\mathcal{W}\big{)}=\frac{\big{|}\big{\{}W_{e}\in \mathcal{W}\mid f(\nu(W_{e})\odot W_{e},x)=y\big{\}}\big{|}}{e_{E}-e_{S}}. \tag{4}\]
After computing the individual accuracy for all samples \(\Psi=\big{\{}\psi_{1}\big{(}x_{n},y_{n},f,\mathcal{W}\big{)}\big{\}}_{n=1}^{N}\) and sorting \(\Psi\) in descending order, the metric \(\text{CP}\big{(}x,y,f,\mathcal{W}\big{)}\) describes the relative position of \(\psi_{1}\big{(}x,y,f,\mathcal{W}\big{)}\) in sort\((\Psi)\). In other words, samples that are compressed early and classified correctly obtain a low CP close to \(0\), and those compressed later obtain a CP closer to \(1\).
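As a hedged sketch, the CP metric can be computed from a per-epoch correctness record of the pruned network \(f(\nu(W_{e})\odot W_{e},\cdot)\); the boolean-array layout below is our assumption:

```python
import numpy as np

def compression_positions(correct):
    """correct: boolean array of shape (N, e_E - e_S); entry (n, e) says whether the
    pruned network classifies sample n correctly at regularization epoch e."""
    psi = correct.mean(axis=1)                   # individual accuracy over time, Eq. (4)
    order = np.argsort(-psi, kind="stable")      # sort Psi in descending order
    cp = np.empty(len(psi))
    cp[order] = np.arange(len(psi)) / len(psi)   # relative position in sort(Psi)
    return cp                                    # early compressed samples get CP close to 0
```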
We calculate the CP metric for all samples in CIFAR-10 during training of dense, low-sparsity, and high-sparsity NNs. The compression behaviour of dense NNs is measured during the pre-training phase (\(e_{S}=0\) and \(e_{E}=60\)) and that of sparse NNs during the regularization phase (\(e_{S}=60\) and \(e_{E}=e_{\text{max}}\)).
To show which samples are compressed first into the remaining highest weights, the \(5\%\) of samples with the lowest CP are visualized in Fig. 4 in the latent space of the well-known _CLIP_ framework [29], mapped by _t-SNE_[40]. As commonly known, the dense model compresses easy samples of all classes in the early stages [17, 21], while the low-sparsity model already loses some. In the high-sparsity regime, no discriminative decision rules are left at the beginning of training, and the remaining classes are compressed step by step as the training continues (see supplementary material, Sec. E). In our experiments, we consistently observed a bias towards the class _deer_. We call this effect _"the deer bias"_, which must be reduced with regularization. The _deer_ bias suggests that large weights in dense NNs do not encode decision rules for all classes.
To quantify the above results, Tab. 3 shows the average CP for all samples belonging to a specific class. Additionally, we split the class sets into four subsets according to their difficulty. We estimate the difficulty of a sample by counting the human label errors made by three human annotators, derived from CIFAR-N [43]; _e.g_., 2 means that two of three persons mislabeled the sample. The first observation is that the above-mentioned separation of classes is confirmed, since CP values are similar in dense NNs but diverge in sparse NNs. In high-sparsity regimes, the _deer_ bias persists before the first samples of other classes are compressed. The classes _horse_ and _airplane_ are only included at the end of the training. The second observation is that, within the set of samples belonging to a class, difficult samples are compressed later. This behaviour is similar to the training process of dense NNs.
Implementation details and more fine-grained results are available in the supplementary material, Sec. E.
## 5 Conclusion
Our work presents _Adaptive Regularized Training_ (_ART_), a method that utilizes regularization to obtain sparse neural networks. The regularization is amplified continuously and used to shrink most weight magnitudes close to zero. We introduce the novel regularization loss _HyperSparse_ that induces sparsity inherently while maintaining a well-balanced tradeoff between the exploration of new sparse topologies and the exploitation of weights that remain after pruning. Extensive experiments on CIFAR and TinyImageNet show that our novel framework outperforms sparse learning competitors. _HyperSparse_ is superior to standard regularization losses, leads to impressive performance gains in extremely high sparsity regimes, and is much faster. Additional investigations provide new insights into the weight distribution during network compression and into the patterns that are encoded in high-valued weights.
Overall, this work provides new insights into sparse neural networks and helps to develop sustainable machine learning by reducing neural network complexity.
## 6 Acknowledgments
This work was supported by the Federal Ministry of Education and Research (BMBF), Germany under the project AI service center KISSKI (grant no. 01IS22093C), the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122), and by the Federal Ministry of the Environment, Nature Conservation, Nuclear Safety and Consumer Protection, Germany under the project GreenAutoML4FAS (grant no. 67KI32007A). |
2301.12717 | Automatic Intersection Management in Mixed Traffic Using Reinforcement
Learning and Graph Neural Networks | Connected automated driving has the potential to significantly improve urban
traffic efficiency, e.g., by alleviating issues due to occlusion. Cooperative
behavior planning can be employed to jointly optimize the motion of multiple
vehicles. Most existing approaches to automatic intersection management,
however, only consider fully automated traffic. In practice, mixed traffic,
i.e., the simultaneous road usage by automated and human-driven vehicles, will
be prevalent. The present work proposes to leverage reinforcement learning and
a graph-based scene representation for cooperative multi-agent planning. We
build upon our previous works that showed the applicability of such machine
learning methods to fully automated traffic. The scene representation is
extended for mixed traffic and considers uncertainty in the human drivers'
intentions. In the simulation-based evaluation, we model measurement
uncertainties through noise processes that are tuned using real-world data. The
paper evaluates the proposed method against an enhanced first in - first out
scheme, our baseline for mixed traffic management. With increasing share of
automated vehicles, the learned planner significantly increases the vehicle
throughput and reduces the delay due to interaction. Non-automated vehicles
benefit virtually alike. | Marvin Klimke, Benjamin Völz, Michael Buchholz | 2023-01-30T08:21:18Z | http://arxiv.org/abs/2301.12717v2 | # Automatic Intersection Management in Mixed Traffic Using Reinforcement Learning and Graph Neural Networks
###### Abstract
Connected automated driving has the potential to significantly improve urban traffic efficiency, e.g., by alleviating issues due to occlusion. Cooperative behavior planning can be employed to jointly optimize the motion of multiple vehicles. Most existing approaches to automatic intersection management, however, only consider fully automated traffic. In practice, mixed traffic, i.e., the simultaneous road usage by automated and human-driven vehicles, will be prevalent. The present work proposes to leverage reinforcement learning and a graph-based scene representation for cooperative multi-agent planning. We build upon our previous works that showed the applicability of such machine learning methods to fully automated traffic. The scene representation is extended for mixed traffic and considers uncertainty in the human drivers' intentions. In the simulation-based evaluation, we model measurement uncertainties through noise processes that are tuned using real-world data. The paper evaluates the proposed method against an enhanced first in - first out scheme, our baseline for mixed traffic management. With increasing share of automated vehicles, the learned planner significantly increases the vehicle throughput and reduces the delay due to interaction. Non-automated vehicles benefit virtually alike.
Cooperative Automated Driving, Mixed Traffic, Reinforcement Learning, Graph Neural Network.
## I Introduction
Traffic congestion is a major cause of time loss and energy inefficiency in urban mobility. Ever increasing traffic demands challenge the currently prevailing approaches for managing intersections, like static priority rules or simple traffic light schemes. The issue is aggravated by occlusion, e.g., through buildings and other road users, which limits the view onto cross traffic for human drivers as well as on-board perception systems.
The deployment of connected automated driving has the potential to alleviate many of these issues. Thereby, human driven connected vehicles (CVs) and connected automated vehicles (CAVs) share a communication link with each other and possibly infrastructure systems. By providing edge computing resources in urban areas, an environment model of the local traffic scene, for instance at an intersection, can be maintained and provided to connected road users. This information could be used, e.g., by CAVs to automatically merge into a gap in prioritized traffic, as shown in [1].
Further increases in traffic efficiency can be obtained by leveraging automatic intersection management (AIM). For example, an automated vehicle on the main road may be requested to slow down and let another vehicle coming from a side road merge, as illustrated in Fig. 1. On public roads, CAV penetration will not be even close to \(100\,\%\) anytime soon; instead, mixed traffic will be prevalent. Thus, cooperative planning approaches have to take all vehicles into account, including those that cannot be influenced.
In the current work, we propose a mixed-traffic-capable cooperative behavior planning scheme using reinforcement learning (RL) and graph neural networks (GNNs). We build our approach on our previous work proposing a cooperative planning model for fully automated traffic [2, 3]. The contribution of the current work is fourfold:
Fig. 1: Cooperative planning in mixed traffic. The two automated vehicles (depicted in yellow) may perform a cooperative maneuver by deviating from the precedence rules, if it does not interfere with regular vehicles. Here, the maneuver of the regular vehicle (in blue) in the top has to be considered.
* To the best of our knowledge, we propose the first learning-based AIM scheme for mixed traffic,
* Introduction of a novel GNN architecture leveraging an attention mechanism and relation-dependent edges,
* Presenting an RL training procedure taking into account uncertainty about non-connected vehicles,
* Demonstration of the model's ability to increase traffic efficiency over varying automation rate.
The remainder of the paper is structured as follows: Section II gives an overview on the state of the art in AIM and machine learning based planning for automated driving. We present our learning model as well as details on the graph structure in Section III. The experimental simulation setup and evaluation results are discussed in Section IV. The paper is concluded in Section V.
## II Related Work
The state of the art in AIM consists of a broad variety of approaches, many of which can be allocated to reservation-based systems [4, 5, 6, 7], optimization algorithms [8, 9, 10, 11], or tree-based methods [12, 13]. In the following, we discuss a selection of works in commonly used paradigms and refer the reader to surveys like [14] for a more extensive overview.
The tile-based reservation scheme proposed in [4] employs a central coordination unit that assigns clearance to the vehicles in the order of their requests. This first in - first out (FIFO) policy is benchmarked on simulated multi-lane intersections against traffic lights as a baseline and an overpass as the ideal solution while disallowing turning maneuvers. In [5], the approach is extended to support turns at the intersection and traditional yielding as an additional baseline. This work also investigates intersection management in mixed traffic; however, it requires strict spatial separation between human-driven vehicles and CAVs.
Connected automated driving also supports ad-hoc negotiation of cooperative maneuvers [6, 7]. In [6], a cooperative lane change procedure is presented. If a CAV deems a lane change maneuver beneficial, it may request surrounding vehicles to keep a designated area free to safely perform the lane change. The authors extend their approach for usage at intersections by introducing a cooperation request when planned paths conflict on the intersection area [7]. The feasibility of this procedure is demonstrated in low-density real traffic using two testing vehicles.
In fully automated traffic, intersection efficiency can be further improved by optimizing the individual vehicles' trajectories freely on the 2D ground plane. To conquer the sharply rising complexity with an increasing number of agents, standard cases can be defined and solved offline, as proposed by [11]. A prerequisite for applying the precomputed solutions to a given traffic scene online is to have the approaching vehicles form a predefined formation before entering the intersection area. The authors acknowledge that high computing power is needed to solve all standard cases for a multi-lane four-way intersection, even when done offline. Other approaches assume the vehicles to follow predefined lanes and solely optimize the longitudinal motion [8, 9, 10]. Centralized model predictive control of the longitudinal vehicle motion is employed by [8]. In [9], a distributed energy-optimizing approach is presented that disallows turning maneuvers. Both works demonstrate a reduction in delay and fuel consumption compared to traditional signalized intersections. Coordination by optimizing the intersection crossing order using mixed-integer quadratic programming can yield a benefit over FIFO ordering when considering the increased inertia of heavy vehicles [10]. These approaches are not suited for mixed traffic and share the unfavorable scaling of computational demand with an increasing number of road users.
Decision trees are well-suited to represent possible variants of the scenario evolution depending on agent actions or interactions. In [12], Monte Carlo tree search is employed for decentralized cooperative trajectory planning in highway scenarios. The authors demonstrate their model's ability to generate viable trajectories in an overtaking maneuver for varying levels of cooperation. In the context of AIM, decision trees enhanced by probabilistic predictions of scene evolvement can be used to find the crossing order, which yields the highest expected efficiency [13] in mixed traffic scenarios.
In recent works, machine learning demonstrated its potential for automated driving, yielding remarkable results in road user motion prediction [15] and planning for a single ego vehicle [16]. Many planning models [17, 18] rely on the imitation of expert demonstrations taken, e.g., from driving datasets using supervised learning. In [19], it is proposed to aid a machine learning model for single ego planning using inter-vehicle communication providing LiDAR measurements from surrounding vehicles to enhance the sensor coverage. For cooperative behavior planning, ground truth data is virtually unavailable, because such maneuvers are seldom, if at all, performed by human drivers. Therefore, supervised learning approaches are rather limited for the application in AIM. Reinforcement learning (RL) evades the need for large amounts of training data, by instead exploring possible behaviors in simulation, guided by a reward signal. The application of RL to single ego planning was demonstrated for handling urban intersections [20] and highway lane change maneuvers [21]. The authors of the latter work propose a graph-based scene representation for encoding the ego vehicle's semantic surroundings and a GNN-based policy.
Learning-based algorithms have not been as extensively used for cooperative multi-agent behavior planning. In [22], RL is employed to train a policy for choosing the most suited action from a restricted discrete action space, which ensures collision-free maneuvers. Because the policy execution is performed in a decentralized fashion for each vehicle individually, the approach does not leverage explicit communication and dedicated cooperation between agents. Our previous work [2] was the first to propose a GNN trained with RL for centralized cooperative behavior planning in fully automated traffic. The representation and learning model were improved and shown to generalize to intersection layouts not encountered during training [3].
## III Proposed Approach
This section introduces the proposed learning paradigm (Sec. III-A), the graph-based input representation (Sec. III-B), the network architecture (Sec. III-C), and the reward function (Sec. III-D).
### _Learning Paradigm_
The task of cooperative planning across multiple vehicles is considered as a multi-agent RL problem. Depending on the degree of centralization, different learning paradigms can be employed [23]. Instead of deploying multiple agents independently within the simulated environment, joint cooperative behavior planning is best modeled by the centralized training centralized execution paradigm. All decentralized approaches require the agents to develop an implicit communication scheme implemented through their behavior. In connected automated driving, however, a communication link is at the disposal of the involved agents and makes centralized execution feasible. Therefore, the multi-agent planning task is modeled as a single partially observable Markov decision process (POMDP), denoted as
\[(S,\,A,\,T,\,R,\,\Omega,\,O). \tag{1}\]
The set of states \(S\) contains all reachable states of the simulator, including full state information on all automated vehicles (AVs) and manually driven vehicles (MVs). While the set of available actions is denoted by \(A\), the conditional transition probability of changing from \(s\in S\) to \(s^{\prime}\in S\) when applying action \(a\in A\) is given by \(T(s^{\prime}|s,\,a)\). The state transition is not deterministic because human reaction to a given traffic scene is not deterministic either, which has to be modeled in simulation. \(R:S\times A\rightarrow\mathbb{R}\) denotes the reward function that rates the chosen action in a given state in terms of a scalar value. Because most of the abstract traffic state \(S\) is not observable for the multi-agent planner, a reduced set of observations is defined as \(\Omega\). The probability of a state \(s\in S\) being mapped to observation \(\omega\in\Omega\) is given by \(O(\omega|s)\). This mapping is not injective, which manifests, for instance, when the driver of a non-connected vehicle changes their turning intention without any externally noticeable change. Moreover, measurement uncertainties are modeled by \(O\). The dimensionalities of the state space and the action space depend on the number of (controllable) vehicles currently in the scene and may vary over time.
### _Input Representation_
We retain the core idea of encoding the traffic scene at an urban intersection as a directed graph with vertex features and edge features from [3]. Graph-based input representations proved to be well-suited for behavior planning in automated driving, where a varying number of dynamically interacting entities must be encoded efficiently [24]. The set of observations can thus be denoted as \(\Omega=(V,\,E,\,U)\), where each vehicle is mapped to a vertex \(\nu\in V\), \(E\) describes the set of edges, and \(U\) a set of edge types. The available edge types result from the Cartesian product between the relation-dependent dimension and the automation-dependent dimension: \(U=U_{\text{rel}}\times U_{\text{aut}}\). While \(U_{\text{rel}}=\{\text{same lane, crossing}\}\) is retained from [2], \(U_{\text{aut}}=\{\text{AV/AV,AV/MV, MV/AV}\}\) denotes whether the edge encodes the interaction between two AVs, one AV and one MV or vice versa. The reason for this design follows from the fact that the coordination of two AVs is fundamentally different than an interaction of an AV with an MV. Note that there are no edges between two MVs, because pure MV interactions are not of concern for the cooperative planner. In case of, e.g., two MVs leading an AV, both MVs are connected via edges to the AV, which enables the network to infer relevant AV interactions. Formally, an edge is defined as
\[(\nu_{i},\,\nu_{j},\,g_{ij},\,r)\in E, \tag{2}\]
where the source and destination vertices are named \(\nu_{i}\) and \(\nu_{j}\), respectively. The edge feature \(g_{ij}\) will be described below and \(r\in U\) specifies the edge type. Two graph vertices are being connected by an edge, if the corresponding vehicles share a conflict point on the intersection area (crossing) or are driving on the same path (same lane), as illustrated in Fig. 2. Note that the different edge types are mutually exclusive.
A key challenge of planning in mixed traffic is the ambiguity in the maneuver intention of non-connected vehicles, i.e., they could go straight, turn left, or turn right (e.g. \(\nu_{3}\) in Fig. 2). In the graph-based scene representation, this issue is addressed by including all potential conflict points by means of additional edges. Thereby, the planner becomes more cautious, because
Fig. 2: The graph-based input representation for mixed traffic at a four-way intersection. An AV’s (yellow) turning intention is denoted by an arrow on its hood. Due to the unknown turning intention of the MV \(\nu_{3}\) (blue, denoted by ’?’), it shares edges with all three AVs, although conflicts with \(\nu_{1}\) and \(\nu_{4}\) are mutually exclusive.
each conflict point may result in a collision if not coordinated properly. As soon as the future motion of an MV can be predicted with reasonable accuracy, the edges for all other options can be removed. In the present work, we assume an MV's intention to be predictable once it has passed \(25\,\%\) of the intersection area. The cooperative planning performance might be increased further by employing a prediction algorithm like [25]. Such an extension is, however, out of scope for this work.
The sets of input features for vertices and edges are adapted to accommodate the advanced requirements for planning in mixed traffic. The vertex input features are denoted as one four-element vector per vehicle
\[\mathbf{h}^{(0)}=[s,\,v,\,\tilde{a},\,c]^{T}, \tag{3}\]
where the upper index \((0)\) denotes the input layer of the GNN. The first three elements describe the longitudinal position along its lane, the scalar velocity, and measured acceleration, respectively. \(c\) is a binary indicator on whether the corresponding vehicle is controllable, i.e., is an AV. The edge input feature with distance measure \(d_{ij}\) and heading-relative bearing \(\chi_{ij}\) is extended by the new feature \(pr_{ij}\):
\[\mathbf{g}_{ij}^{(0)}=[1/d_{ij},\,\chi_{ij},\,pr_{ij}]^{T}. \tag{4}\]
This new feature encodes the priority relation between the two vehicles and is defined as
\[pr_{ij}=\max\left(\min(pr_{i}-pr_{j},\,1),\,-1\right), \tag{5}\]
where \(pr_{i}\) is the priority of vehicle \(i\) depending on its originating road and turning intention, given as an integer value. In case of an MV with uncertain maneuver intention, its true priority is replaced by the assumed priority, given the worst-case maneuver with respect to the interacting AV.
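To make the scene encoding concrete, the sketch below assembles one directed edge of Eq. (2) with the features of Eqs. (4) and (5); the field names, the exact bearing convention, and the flat indexing of \(U=U_{\text{rel}}\times U_{\text{aut}}\) are our assumptions rather than details given in the paper:

```python
import numpy as np

REL = {"same_lane": 0, "crossing": 1}       # U_rel
AUT = {"AV/AV": 0, "AV/MV": 1, "MV/AV": 2}  # U_aut

def make_edge(v_i, v_j, rel, aut):
    """One directed edge of Eq. (2): features g_ij of Eq. (4) and the type r in U."""
    delta = v_j["pos"] - v_i["pos"]
    d_ij = np.linalg.norm(delta)
    chi_ij = np.arctan2(delta[1], delta[0]) - v_i["heading"]  # heading-relative bearing
    pr_ij = float(np.clip(v_i["priority"] - v_j["priority"], -1, 1))  # Eq. (5)
    g_ij = np.array([1.0 / d_ij, chi_ij, pr_ij])
    edge_type = REL[rel] * len(AUT) + AUT[aut]  # flat index into U_rel x U_aut
    return g_ij, edge_type
```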
### _Network Architecture_
In this work, we employ the TD3 [26] RL algorithm, which belongs to the family of actor-critic methods for actions in continuous space. As the actor and critic network architecture deviate only slightly on the output side, only the actor network is presented in detail for conciseness. We propose a GNN architecture that is composed of relational graph convolutional network (RGCN) layers [27] and graph attention (GAT) layers [28]. To process edge features in the RGCN layers, we use the extended update rule for message passing that was introduced in [3]. The different edge types correspond to independently learnable weight matrices. Thus, the network can map the fundamental difference in interaction, as described in Sec. III-B, to sensible actions.
The overall network architecture is depicted in Fig. 3. Both the vertex input features and edge input features are first mapped into a high-dimensional space using the encoders v_enc and e_enc, respectively. Afterwards, the vertex features in latent space are passed through three GNN layers for message passing, each of which gets the encoded edge features as an additional input. All GNN layers fulfill the mapping conv : \(V^{n}\times E^{m}\to V^{n}\), where \(n\) denotes the number of vehicles (nodes) and \(m\) the number of conflict relations (edges). The edge features are not updated throughout the forward pass, which is fine for our application, as there is nothing to be inferred on the edges. Using alternating layer types proved to deliver better results than a pure RGCN or GAT network in our experiments. Thus, we leverage the GAT layer's attention mechanism and explicitly include edge features and edge types in the modified RGCN layers. The original GAT layer disregards edge type information and only considers the edge features for computing the attention weights but not the node update. Recently, a graph attention mechanism for relational data was proposed [29], which did not perform well in our case, though.
The latent vertex features \(\mathbf{h}^{(4)}\) are then passed to an action decoder, composed of fully connected layers, that infers a joint action for all vehicles. Each intermediate layer is followed by a rectified linear unit (ReLU) as the activation function and all fully connected layers share their weights across vertices or edges. The critic network aggregates the latent vertex features \(\mathbf{h}^{(4)}\) to a single feature vector, which is then decoded into one q-value estimate. The graph-based scene representation and the GNN are implemented using the PyTorch Geometric API [30].
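A minimal PyTorch Geometric sketch of the actor in this spirit is given below; note that we substitute the library's standard `RGCNConv` for the paper's edge-feature-extended RGCN variant, and that the hidden width, the scalar per-vehicle action, and the `Tanh` output squashing are placeholder assumptions:

```python
import torch.nn as nn
from torch_geometric.nn import GATConv, RGCNConv

class Actor(nn.Module):
    def __init__(self, v_in=4, e_in=3, h=64, num_edge_types=6):
        super().__init__()
        self.v_enc = nn.Sequential(nn.Linear(v_in, h), nn.ReLU())
        self.e_enc = nn.Sequential(nn.Linear(e_in, h), nn.ReLU())
        self.conv1 = RGCNConv(h, h, num_relations=num_edge_types)
        self.conv2 = GATConv(h, h, edge_dim=h)  # attention layer using encoded edge features
        self.conv3 = RGCNConv(h, h, num_relations=num_edge_types)
        self.dec = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, 1), nn.Tanh())

    def forward(self, h0, edge_index, g0, edge_type):
        h, g = self.v_enc(h0), self.e_enc(g0)
        h = self.conv1(h, edge_index, edge_type).relu()
        h = self.conv2(h, edge_index, edge_attr=g).relu()
        h = self.conv3(h, edge_index, edge_type).relu()
        return self.dec(h)  # one joint-action entry per vehicle vertex
```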
### _Reward Function Engineering_
The reward function for driving the RL training is composed of a weighted sum of reward components
\[R=\sum_{k\in\mathcal{R}}w_{k}R_{k}\,, \tag{6}\]
where the weights are denoted as \(w_{k}\) and the set of reward components as \(\mathcal{R}\). We retain the weights and reward components for velocity, action, idling, proximity, and collisions from [2], with certain adaptions for mixed traffic described in the following. To encourage cooperative maneuvers, the velocity reward not only considers the speed of AVs that are directly controlled by the RL planner, but also MVs in equal weighting. Therefore, a cooperative maneuver becomes even
Fig. 3: The GNN architecture of the actor network. Vertex input features \(\mathbf{h}^{(0)}\) and edge input features \(\mathbf{g}^{(0)}\) are mapped to one joint action \(\mathbf{a}\). The edge feature enhanced RGCN layers are depicted in green, the GAT layer in blue, and fully connected layers in yellow.
more attractive, if further MVs subsequently benefit from it. The action penalty \(R_{\mathrm{action}}=-||\mathbf{a}||_{1}\), on the other hand, only considers AVs to encourage smooth driving commands.
When applied in mixed traffic, this reward function exhibits an unintentional local optimum, resulting in AVs being stopped far away from the intersection entry and MVs subsequently passing the intersection without cross traffic. This behavior is not desirable, as it effectively leads to a strong disturbance on the prioritized road whenever an MV appears on the minor road. We propose to conquer this issue through an additional reluctance reward component
\[R_{\mathrm{reluctance}}=-\max_{i\in\text{AVs}}\mathbb{I}(\nu_{i}\text{ is leader})\ \mathbb{I}(v_{i}<v_{\text{stop}})\ \delta_{i}, \tag{7}\]
where \(\mathbb{I}(\cdot)\) denotes the indicator function. This penalty is only nonzero if there is no leading vehicle and the velocity falls below a stopping threshold of \(v_{\text{stop}}=1\,\frac{\text{m}}{\text{s}}\). \(\delta_{i}\) denotes the distance of \(\nu_{i}\) to the stop point in front of the intersection. Throughout this study, the reluctance reward weight was set to \(w_{\mathrm{reluctance}}=0.01\), based on empirical observations.
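Eq. (7) translates directly into code; the per-vehicle tuple format below is our own convention:

```python
def reluctance_penalty(avs, v_stop=1.0):
    """Eq. (7); avs is an iterable of (is_leader, velocity, dist_to_stop) per AV."""
    terms = [dist for is_leader, v, dist in avs if is_leader and v < v_stop]
    return -max(terms) if terms else 0.0  # enters the total reward with w_reluctance = 0.01
```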
## IV Experiments
In this section, the modeling of measurement uncertainties (Sec. IV-A) and the simulation setup (Sec. IV-B) is introduced, before presenting evaluation results in Sec. IV-C.
### _Modeling of Measurement Uncertainties_
In addition to the epistemic uncertainty in the maneuver intention of non-connected vehicles, there is also aleatoric uncertainty due to measurement noise. Although current testing vehicles might be equipped with a highly accurate GNSS-aided inertial navigation system (GNSS-INS), perfect localization cannot be assumed for the deployed system. This issue becomes even more pressing with regular vehicles being present that must be localized via perception. Therefore, we model measurement uncertainties in simulation by means of additive noise processes, whose parameters were estimated using real-world driving logs. In this work, we use recordings of test drives of a connected testing vehicle at the pilot site in Ulm-Lehr, Germany [1], but the proposed method is applicable for larger amounts of data to obtain a more precise estimate. During the test drive, the GNSS-INS track of the testing vehicle was recorded. Additionally, the environment model state, which results from fusion of infrastructure sensor perception, was saved.
We consider the GNSS-INS track as the ground truth to the environment model track and strive to parameterize four independent noise processes for the set of measured quantities \(\lambda\in\{x,\,y,\,v,\,\psi\}\), being position in a local east-north-up frame, velocity, and heading. The model deviation in each measurand is given as \(e_{\lambda}=|\lambda-\tilde{\lambda}|\), where \(\tilde{\lambda}\) denotes the environment model state that has been aligned temporally to the ground truth samples \(\lambda\). The sample frequency is \(10\,\mathrm{Hz}\). Temporal dependencies within the noise shall be modeled by a first-order autoregressive process (AR(1)), defined as
\[\hat{e}_{\lambda,\,k}=\phi_{\lambda}\hat{e}_{\lambda,\,k-1}+\varepsilon_{ \lambda,\,k}, \tag{8a}\] \[\varepsilon_{\lambda,\,k}\sim\mathrm{N}(0,\,\sigma_{\lambda}^{2}),\,\mathrm{i.i.d.}, \tag{8b}\]
where \(\phi_{\lambda}\) and \(\sigma_{\lambda}^{2}\) are the process parameters for measurand \(\lambda\). The subscript \(\lambda\) is dropped in the following for brevity. We employ ordinary least squares estimation [31] to estimate the process parameter and its variance:
\[\hat{\phi}=\frac{\sum_{k=1}^{K}e_{k-1}e_{k}}{\sum_{k=1}^{K}e_{k-1}^{2}}, \tag{9}\]
\[\hat{\sigma}^{2}=\frac{1}{K}\sum_{k=1}^{K}\left(e_{k}-\hat{\phi}e_{k-1}\right)^{2}. \tag{10}\]
Here, \(K\) denotes the number of samples in the recorded error track. The retrieved parameter set for our dataset is given in Table I. Figure 4 exemplarily shows the additive noise process for the vehicle heading compared to actual recordings of the environment model. Note that, because of the stochastic nature of measurement uncertainties, an exact match was not to be expected. Instead, each run will yield different trajectories.
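Assuming the aligned error track is available as a NumPy array sampled at \(10\,\mathrm{Hz}\), Eqs. (8)-(10) can be implemented as follows (an illustrative sketch, not the authors' code):

```python
import numpy as np

def fit_ar1(e):
    """OLS estimates of the AR(1) parameters from an error track e (Eqs. (9)-(10))."""
    phi = np.sum(e[:-1] * e[1:]) / np.sum(e[:-1] ** 2)
    sigma2 = np.mean((e[1:] - phi * e[:-1]) ** 2)
    return phi, sigma2

def simulate_ar1(phi, sigma2, num_steps, seed=0):
    """Draw one realization of the additive noise process of Eq. (8)."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, np.sqrt(sigma2), size=num_steps)
    e = np.zeros(num_steps)
    for k in range(1, num_steps):
        e[k] = phi * e[k - 1] + eps[k]
    return e
```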
### _Training and Evaluation Environment_
Training and evaluation of the proposed cooperative planning scheme was conducted in the open-source simulator Highway-env [32] that was extended for usage in centralized multi-agent planning. A kinematic bicycle model [33] is applied for simulating plausible vehicle trajectories according to the action output of the RL algorithm. The MVs' behavior models in Highway-env are based on car following models, like the intelligent driver model (IDM) [34] and have
Fig. 4: Exemplary result of superposing the measurement noise process to the vehicle heading during one traversal of the testing site. The qualitative nature of noise in the environment model’s track is matched well by the modeled noise process.
been tweaked for improved yielding behavior at intersections [2]. For evaluation, the extended intelligent driver model (EIDM) [35] is employed, which resembles human driving behavior more closely and features non-deterministic outputs that manifest in the transition \(T\). Unless noted otherwise, measurement noise according to the noise processes introduced in Sec. IV-A is added in the observation \(O\) during evaluation.
Training of the planner network begins in fully automated traffic for the first third, before the share of MVs is gradually increased during the second third; it remains at \(50\,\%\) for the last third. Beginning with \(100\,\%\) AVs allows the RL algorithm to learn basic behavior, like collision avoidance, before tuning the policy for the more challenging case of mixed traffic. Starting the training with samples of solely MVs instead does not provide a benefit, because it lacks collision samples, thus preventing the RL algorithm from learning how to effectively avoid collisions. During training, the idealized IDM car-following model is employed, which yields superior results to using the EIDM. Notably, this still holds for evaluation with the more diverse EIDM, which shows that the model does not overfit to the modeled human driving behavior.
In this work, we use an enhanced first in - first out (eFIFO) scheme as a baseline to the proposed RL planner. A more extensive set of baselines was employed in [3] for fully automated traffic, while we restrict the current analyses to the best-performing baseline for conciseness. The algorithmic idea of the eFIFO is extended to handle mixed traffic. Therefore, AVs are prioritized according to their current distance to the intersection. Clearance to cross the intersection is assigned respecting the precedence relations to the MVs as boundary conditions, while conflict-free paths may be driven on simultaneously.
### _Simulation Results_
We evaluate the planning approaches in five scenarios of varying traffic density at a four-way intersection and under different automation levels, i.e., proportion of AVs in traffic. The simulated intersection connects a major road with a minor road that carries comparatively less traffic. Each configuration was run ten times, because the non-determinism in the setup may cause volatility in closed-loop metrics. In intersection management, the throughput of vehicles is of particular interest and is captured by the _flow rate_ metric in Fig. 5. Both planning approaches achieve a benefit for high automation levels, although the absolute maximum is much higher for the RL planner. When faced with strong traffic demand, the eFIFO sometimes even causes a slight decrease in flow rate for automation levels as low as \(20\,\%\), compared to simple precedence rules (\(0\,\%\) automation). The RL planner, on the other hand, yields virtually monotonically rising throughput with increasing automation for any traffic demand. Figure 5(a) additionally shows the results of the legacy RL planner that was trained in fully automated traffic. As the deviation in flow rate is negligible, it can be concluded that the training in mixed traffic has no negative impact on the policy's peak performance. This might be explained by the first third of the training being conducted in fully automated traffic, which enables the network to optimize for this specific case, which is supported by the independent weight matrices per edge type.
Cooperative planning significantly increases the attained velocity on the minor road, as illustrated in Fig. 6. While the RL planner yields a monotonic increase already for low automation levels, the eFIFO requires much more of the traffic to be automated to attain the same benefit. Both approaches cause a slight velocity decrease on the major road, which is expected because the cooperative maneuvers require the prioritized vehicles to refrain from crossing the intersection unconditionally. When using the RL planner in fully automated traffic, this effect is compensated entirely.
Another metric of interest is the _delay_, which is defined in accordance with [13] for vehicle \(\nu_{i}\) as
\[\mathrm{delay}(\nu_{i})=\sum_{k=1}^{L}1-\frac{v_{i}(k)}{v_{\text{lim}}(s_{i}(k ))}, \tag{11}\]
where \(L\) denotes the evaluation horizon, \(v_{i}\) the vehicle's
Fig. 5: Attained flow rate over varying automation level for the RL planner and the eFIFO baseline. Each colored line represents one scenario definition with increasing traffic demand (blue, orange, green, red, violet). The shaded area indicates the standard deviation of the metric samples.
velocity, and \(v_{\text{lim}}\) the lane speed limit queried at the vehicle's position. Note that a delay of zero may not be attainable due to the limited acceleration capabilities of the vehicles and discontinuous speed limits on the lanes. It can be seen from Fig. 7 that an increasing share of AVs performing cooperative maneuvers does not disadvantage MVs. They rather benefit from it, as their delay shrinks virtually alike when using the RL planner. This effect is less pronounced for the eFIFO, where the remaining MVs do not benefit much even in mostly automated traffic.
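For reference, Eq. (11) evaluates per vehicle on logged velocity and speed-limit tracks; a minimal sketch with an assumed array layout:

```python
import numpy as np

def delay(v, v_lim):
    """Eq. (11): accumulated relative slowdown of one vehicle over the horizon L.
    v[k] is the velocity v_i(k); v_lim[k] is the speed limit v_lim(s_i(k))."""
    v, v_lim = np.asarray(v, dtype=float), np.asarray(v_lim, dtype=float)
    return float(np.sum(1.0 - v / v_lim))
```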
The collision rates in Table II allow assessing to which extent cooperative planning is possible under measurement uncertainty. Compared to the eFIFO baseline, the learned model copes significantly better with measurement uncertainties that are modeled according to Sec. IV-A. Although a collision rate of zero is not yet attained by the RL policy, the degradation due to noise is lower than for the rule-based eFIFO. In practice, these few remaining collision cases should not cause serious problems, because a cooperative plan would be subject to feasibility checks before being executed.
## V Conclusion
This work presented a novel machine learning based cooperative planning scheme for mixed traffic at urban intersections. By training an RL policy in a simulated environment, we evaded the need for large amounts of training data, which is unavailable for cooperative maneuvers. The proposed graph-based scene representation considers the inherent uncertainty in human-driven vehicles. Evaluation of the RL planner revealed a clear benefit in flow rate and reduced delays for an increasing share of AVs in traffic. We showed that our method outperforms the eFIFO scheme for mixed traffic and is robust to measurement uncertainties.
In future works, we plan to shrink the gap to real-world application further by integrating dedicated motion planning algorithms and deploying our approach to a real testing vehicle.
|
2310.19103 | Proving Linear Mode Connectivity of Neural Networks via Optimal
Transport | The energy landscape of high-dimensional non-convex optimization problems is
crucial to understanding the effectiveness of modern deep neural network
architectures. Recent works have experimentally shown that two different
solutions found after two runs of a stochastic training are often connected by
very simple continuous paths (e.g., linear) modulo a permutation of the
weights. In this paper, we provide a framework theoretically explaining this
empirical observation. Based on convergence rates in Wasserstein distance of
empirical measures, we show that, with high probability, two wide enough
two-layer neural networks trained with stochastic gradient descent are linearly
connected. Additionally, we express upper and lower bounds on the width of each
layer of two deep neural networks with independent neuron weights to be
linearly connected. Finally, we empirically demonstrate the validity of our
approach by showing how the dimension of the support of the weight distribution
of neurons, which dictates Wasserstein convergence rates is correlated with
linear mode connectivity. | Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut | 2023-10-29T18:35:05Z | http://arxiv.org/abs/2310.19103v2 | # Proving Linear Mode Connectivity of Neural Networks via Optimal Transport
###### Abstract
The energy landscape of high-dimensional non-convex optimization problems is crucial to understanding the effectiveness of modern deep neural network architectures. Recent works have experimentally shown that two different solutions found after two runs of a stochastic training are often connected by very simple continuous paths (e.g., linear) modulo a permutation of the weights. In this paper, we provide a framework theoretically explaining this empirical observation. Based on convergence rates in Wasserstein distance of empirical measures, we show that, with high probability, two wide enough two-layer neural networks trained with stochastic gradient descent are linearly connected. Additionally, we express upper and lower bounds on the width of each layer of two deep neural networks with independent neuron weights to be linearly connected. Finally, we empirically demonstrate the validity of our approach by showing how the dimension of the support of the weight distribution of neurons, which dictates Wasserstein convergence rates is correlated with linear mode connectivity.
Footnote \(\dagger\): Work done during an internship at Ecole Polytechnique
Footnote \(\ddagger\): Canada CIFAR AI Chair
## 1 Introduction and Related Work
Training deep neural networks on complex tasks is a high-dimensional, non-convex optimization problem. While stochastic gradient-based methods (i.e., SGD and its derivatives) have proven highly efficient in finding a local minimum with low test error, the loss landscape of deep neural networks (DNNs) still contains numerous open questions. In particular, Goodfellow et al. (2014) try to find ways to connect two local minima reached by two independent runs of the same stochastic algorithm with different initialization and data orders. This problem has applications in diverse domains such as model averaging (Izmailov et al., 2018; Rame et al., 2022; Wortsman et al., 2022), loss landscape study (Gotmare et al., 2018; Vlaar and Frankle, 2022; Lucas et al., 2021), adversarial robustness (Zhao et al., 2020) or generalization theory (Pittorino et al., 2022; Juneja et al., 2022; Lubana et al., 2023).
An answer to this question is the _mode connectivity phenomenon_. It suggests the existence of a continuous low-loss path connecting all the local minima found by a given optimization procedure. The mode connectivity phenomenon has extensively been studied in the literature (Goodfellow et al., 2014; Keskar et al., 2016; Sagun et al., 2017; Venturi et al., 2019; Neyshabur et al., 2020; Tatro et al., 2020) and _non-linear connecting paths_ have been evidenced for DNNs trained on MNIST and CIFAR10 by Freeman and Bruna (2016); Garipov et al. (2018); Draxler et al. (2018).
(Linear) mode connectivity. Formally, let \(A:=\hat{f}(.,\theta_{A})\) and \(B:=\hat{f}(.,\theta_{B})\) be two neural networks sharing a common architecture \(\hat{f}\). They are parametrized by \(\theta_{A}\) and \(\theta_{B}\) after training those networks on a data distribution \(P\) with loss \(\mathcal{L}\), i.e. by minimizing \(\mathcal{E}(\theta):=\mathbb{E}_{(x,y)\sim P}[\mathcal{L}(\hat{f}(x,\theta),y)]\) over \(\theta\). Let \(p\) be a continuous path connecting \(\theta_{A}\) and \(\theta_{B}\), i.e. a continuous function defined on \([0,1]\) with \(p(0)=\theta_{A}\) and \(p(1)=\theta_{B}\). Frankle et al. (2020); Entezari et al. (2021) defined the _error barrier height_ of \(p\) as \(\sup_{t\in[0,1]}\mathcal{E}\left(p(t)\right)-\left((1-t)\mathcal{E}\left(\theta_{A}\right)+t\mathcal{E}\left(\theta_{B}\right)\right)\). The two found solutions \(\theta_{A}\) and \(\theta_{B}\) are said to be _mode connected_ if there is a continuous path with zero error barrier height connecting them. Furthermore, if \(p\) is linear, that is \(p(t)=(1-t)\theta_{A}+t\theta_{B}\), then \(\theta_{A}\) and \(\theta_{B}\) are said to be _linearly mode connected (LMC)_.
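As an illustration, the error barrier height of the linear path can be estimated on a grid of interpolation coefficients; the state-dict representation of \(\theta\) and the loss evaluator below are assumptions for the sketch:

```python
import torch

def linear_barrier(theta_a, theta_b, eval_loss, num_points=25):
    """Estimate sup_t E(p(t)) - ((1 - t) E(theta_A) + t E(theta_B)) on the linear path.
    theta_a, theta_b: state dicts with matching keys; eval_loss maps a state dict to E."""
    loss_a, loss_b = eval_loss(theta_a), eval_loss(theta_b)
    gaps = []
    for t in torch.linspace(0.0, 1.0, num_points).tolist():
        theta_t = {k: (1 - t) * theta_a[k] + t * theta_b[k] for k in theta_a}
        gaps.append(eval_loss(theta_t) - ((1 - t) * loss_a + t * loss_b))
    return max(gaps)
```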
Permutation invariance. Recently, Singh and Jaggi (2020) and Ainsworth et al. (2022) highlighted the fact that the units in a hidden layer of a given model
can be permuted while preserving the network's functionality. Figure 1 shows how one can permute the hidden layer of a two-layer network to match a different target network without changing the source function. From now on, **we will understand LMC modulo permutation invariance**, i.e. two networks \(A,B\) are said to be linearly mode connected whenever there exists a permutation of neurons in each hidden layer of network \(B\) such that the linear path in parameter space between network \(A\) and the permuted network \(B\) has low loss.
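This invariance is easy to verify numerically for a two-layer network \(x\mapsto W_{2}\sigma(W_{1}x)\): permuting the rows of \(W_{1}\) and the columns of \(W_{2}\) by the same permutation leaves the function unchanged (a toy check, not code from the paper):

```python
import torch

W1, W2 = torch.randn(8, 5), torch.randn(3, 8)  # two-layer network x -> W2 relu(W1 x)
perm = torch.randperm(8)                        # permutation of the hidden units
x = torch.randn(5)
out = W2 @ torch.relu(W1 @ x)
out_perm = W2[:, perm] @ torch.relu(W1[perm] @ x)  # permuted rows of W1, columns of W2
assert torch.allclose(out, out_perm, atol=1e-6)    # the realized function is unchanged
```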
Linear mode connectivity up to permutation. Singh and Jaggi (2020) proposed to use optimal transport (OT) theory to find a soft alignment providing a "good match" (in a certain sense) between the neurons of two trained DNNs. Furthermore, the authors propose ways to fuse the aligned networks together in a federated learning context with local steps. Ainsworth et al. (2022) further experimentally studied linear mode connectivity between two pre-aligned networks. The authors first align network B's weights on the weights of network A before connecting both of them by a linear path in the parameter space. They notably achieved a zero-loss barrier for two ResNets trained with SGD on CIFAR10. Moreover, their experiments strongly suggest that the error barrier on a linear path gets smaller for wider networks, with a detrimental effect of large depth.
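For a single hidden layer, a hard alignment of this kind can be obtained by solving a linear assignment problem on pairwise neuron-weight distances, which is a discrete OT problem with uniform marginals; the sketch below is a generic version of this idea, not the exact procedure of either cited work:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_layer(W_A, W_B):
    """Permutation aligning the rows (neurons) of W_B onto those of W_A."""
    cost = ((W_A[:, None, :] - W_B[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    _, perm = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return perm  # apply as W_B[perm]; the next layer's columns must be permuted accordingly
```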
Prior theoretical explanations. A recent work by Kuditipudi et al. (2019) shows that dropout stable networks (i.e. networks that are functionally stable to the action of randomly setting a fraction of their weights and normalizing the others) exhibit mode connectivity. Shevchenko and Mondelli (2020) use a mean field viewpoint to show that wide two-layer neural networks trained with SGD are dropout stable and hence show (non-linear) mode connectivity for two-layer neural networks in the mean field regime (i.e. one single wide hidden layer). Finally, Entezari et al. (2021) show that two-layer neural networks exhibit linear mode connectivity up to permutation at initialization for parameters initialized following uniform independent distributions properly scaled. They highlight that this result could be extended to networks trained in the Neural Tangent Kernel regime where parameters stay close to initialization (Jacot et al., 2018).
Contributions.This paper aims at building theoretical foundations for the phenomenon of linear mode connectivity up to permutation. More precisely, we theoretically prove that this phenomenon arises naturally for multi-layer perceptrons (MLPs), which goes beyond the two-layer networks on which theoretical works have focused so far. We also provide a new efficient way to find the right permutation to apply to the units of a neural network's layer. The paper is organized as follows:
* In Section 3, we focus on two-layer neural networks in the mean field regime. While Shevchenko and Mondelli (2020) proved _non-linear_ mode connectivity in this setting, we go further by proving _linear mode connectivity up to permutation_. Moreover, we provide an upper bound on the minimal width of the hidden layer that guarantees linear mode connectivity.
* In Section 4, we use general OT theory to exhibit tight asymptotics on the minimal width of a multi-layer perceptron (MLP) to ensure LMC.
* In Section 5, we apply our general results to networks with parameters following a sub-Gaussian distribution. Our result holds for deep networks, generalizing the result of Entezari et al. (2021) with better bounds. We shed light on the dependence on the dimension of the underlying distributions of the weights in each layer and explain how it connects with previous empirical observations (Ainsworth et al., 2022). Using a model of approximately low dimensional weight distributions as a proxy for sparse feature learning, we obtain more realistic bounds on the architectures of DNNs that ensure linear mode connectivity. We therefore show why LMC is possible after training and how it depends on the complexity of the task. Finally, we unify our framework with dropout stability.
* In Section 6, we validate our theoretical framework by showing how the implicit dimension of the weight distribution is correlated with linear mode connectivity for MLPs trained on MNIST with SGD and propose a new weight matching method.
## 2 Preliminaries and notations
Figure 1: Permuting the neurons in the hidden layer of network \(B\) to align them on network \(A\)

**Notations.** Let \(A\) and \(B\) be two multilayer perceptrons (MLPs) with the same depth \(L+1\) (\(L\) hidden layers), an input dimension \(m_{0}\), intermediate widths \(m_{1},\ldots,m_{L}\), and an output dimension \(m_{L+1}\). Given \(2(L+1)\) weight matrices \(W^{1,\ldots,L+1}_{A,B}\) and a non-linearity \(\sigma\), we define the neural network function of network \(A\) by \(\hat{f}_{A}\) (respectively \(\hat{f}_{B}\)): \(\forall x\in\mathbb{R}^{m_{0}}\),
\[\hat{f}_{A}(x):=\hat{f}(x;\theta_{A}):=W^{L+1}_{A}\sigma\left(W^{L}_{A}\ldots \sigma(W^{1}_{A}x)\right) \tag{1}\]
To \(W^{\ell}_{A}\in\mathcal{M}_{m_{\ell},m_{\ell-1}}(\mathbb{R})\) we associate \(\hat{\mu}_{A,\ell}\), the empirical measure of its rows \([W^{\ell}_{A}]_{i:}\in\mathbb{R}^{m_{\ell-1}}\): \(\frac{1}{m_{\ell}}\sum_{i=1}^{m_{\ell}}\delta_{[W^{\ell}_{A}]_{i:}}\), which belongs to the space of probability measures \(\mathcal{P}_{1}(\mathbb{R}^{m_{\ell-1}})\), where \([W^{\ell}_{A}]_{i:}\) is the \(i\)-th row of the matrix and \(\delta\) denotes the Dirac measure. Note that \([W^{\ell}_{A}]_{i:}\) is also the weight vector of the \(i\)-th neuron of layer \(\ell\) of network \(A\). Given an equi-partition\({}^{1}\) \(\mathcal{I}^{\ell-1}=\{I^{\ell-1}_{1},...,I^{\ell-1}_{\tilde{m}_{\ell-1}}\}\) of \([m_{\ell-1}]\), we denote \(W^{\mathcal{I}^{\ell-1}}_{A}\in\mathcal{M}_{m_{\ell},\tilde{m}_{\ell-1}}( \mathbb{R})\) the matrix obtained from \(W^{\ell}_{A}\) by summing the columns lying in the same set of the partition \(\mathcal{I}^{\ell-1}\). In that case \(\hat{\mu}^{\mathcal{I}^{\ell-1}}_{A}\in\mathcal{P}_{1}\left(\mathbb{R}^{\tilde{m}_{\ell-1}}\right)\) denotes the associated empirical measure of its rows.
Footnote 1: All subsets have the same number of elements
Denote \(\phi^{\ell}_{A}(x)\ :=\ \sigma\left(W^{\ell}_{A}\ldots\sigma(W^{1}_{A}x)\right)\) (respectively \(\phi^{\ell}_{B}\)) the activations of neurons at layer \(\ell\) of network \(A\) on input \(x\). The data \(x\) follows a distribution \(P\) in \(\mathbb{R}^{m_{0}}\).
Given permutation matrices \(\Pi_{\ell}\in\mathcal{S}_{m_{\ell}}\)\({}^{2}\), \(\ell=1,\ldots,L\), of each hidden layer of network \(B\), the weight matrix at layer \(\ell\) of the permuted network \(B\) is \(\tilde{W}^{\ell}_{B}:=\Pi_{\ell}W^{\ell}_{B}\Pi^{T}_{\ell-1}\) and its new activation vector is \(\tilde{\phi}^{\ell}_{B}(x):=\Pi_{\ell}\phi^{\ell}_{B}(x)\). Finally, \(\forall t\in[0,1]\) we define \(M_{t}\) the convex combination of network \(A\) and network \(B\) permuted, with weight matrices \(tW^{\ell}_{A}+(1-t)\tilde{W}^{\ell}_{B}\), and \(\phi^{\ell}_{M_{t}}\) its activations at layer \(\ell\).
Footnote 2: We use interchangeably \(\mathcal{S}_{m}\) to denote the space of permutations of \(\{1,\ldots,m\}\) and the corresponding space of permutations matrices. Given \(\pi\in\mathcal{S}_{m}\) its corresponding permutation matrix \(\Pi\) is defined as \(\Pi_{ij}=1\iff\pi(i)=j\).
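As a concrete illustration of the notation above, the following sketch (ours; plain NumPy, permutations stored as index arrays) applies hidden-layer permutations to the weight matrices of network \(B\) and forms the weights of the interpolated network \(M_{t}\).

```python
import numpy as np

def permute_network(weights_b, perms):
    """weights_b: list of L+1 matrices W^l of shape (m_l, m_{l-1});
    perms: list of L index arrays pi_1, ..., pi_L, one per hidden layer.
    Returns the matrices tilde W^l = Pi_l W^l Pi_{l-1}^T
    (input and output layers are never permuted)."""
    full = ([np.arange(weights_b[0].shape[1])] + list(perms)
            + [np.arange(weights_b[-1].shape[0])])
    # row-permute by pi_l, column-permute by pi_{l-1}
    return [w[full[l + 1]][:, full[l]] for l, w in enumerate(weights_b)]

def interpolate(weights_a, weights_b_perm, t):
    """Weights of M_t: t W_A^l + (1 - t) tilde W_B^l."""
    return [t * wa + (1 - t) * wb for wa, wb in zip(weights_a, weights_b_perm)]
```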
**Preliminaries.** We consider networks \(A\) and \(B\) to be independently chosen from the same distribution \(Q\) on parameters. This is coherent with considering two networks initialized independently or trained independently with the same optimization procedure (§3). We additionally suppose the choice of \(A\) and \(B\) to be independent of the choice of \(x\sim P\), which is valid when evaluating models on test data not seen during training. We denote \(\mathbb{E}_{Q},\mathbb{E}_{P},\mathbb{E}_{P,Q}\) the expectations with respect to the choice of the networks, the data, or both.
To show linear mode connectivity of networks \(A\) and \(B\) we will show the existence of permutations \(\Pi_{1},...,\Pi_{L}\) of layers \(1,...,L\) that align the neurons of network \(B\) on the closest neurons weights of network \(A\) at the same layer as shown in Figure 1. In other words, we want to find permutations that minimize for each layer \(\ell\in[L]\) the norm \(\|W^{\ell}_{A}-\Pi_{\ell}W^{\ell}_{B}\Pi^{T}_{\ell-1}\|_{2}\). Recursively on \(\ell\), we solve the following optimization problem:
\[\begin{split}\Pi_{\ell}&=\operatorname*{arg\,min}_{ \Pi\in\mathcal{S}_{m_{\ell}}}\|W^{\ell}_{A}-\Pi W^{\ell}_{B}\Pi^{T}_{\ell-1}\| _{2}^{2}\\ &=\operatorname*{arg\,min}_{\pi\in\mathcal{S}_{m_{\ell}}}\frac{1} {m_{\ell}}\sum_{i=1}^{m_{\ell}}\|[W^{\ell}_{A}]_{i:}-[W^{\ell}_{B}\Pi^{T}_{ \ell-1}]_{\pi_{i}:}\|_{2}^{2}\end{split} \tag{2}\]
For each layer, the problem can be cast as finding a pairing of the neuron weight vectors \([W^{\ell}_{A}]_{i:}\) and \([W^{\ell}_{B}\Pi^{T}_{\ell-1}]_{\pi_{i}:}\) that minimizes the sum of their squared Euclidean distances. It is known as the Monge problem in the optimal transport literature (Peyré et al., 2019). More precisely, Equation (2) can be formulated as finding an optimal transport plan corresponding to the Wasserstein distance between the empirical measures of the rows of \(W^{\ell}_{A}\) and \(W^{\ell}_{B}\Pi^{T}_{\ell-1}\). We provide more details about this connection between Equation (2) and optimal transport in Appendix A.2. In the following, the \(p-\)Wasserstein distance will be denoted \(\mathcal{W}_{p}(\cdot,\cdot)\) and defined with the underlying distance \(\|\cdot\|_{2}\) unless expressed otherwise.
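Since both empirical measures are uniform over the same number of points, the optimal transport plan can be taken to be a permutation, and Equation (2) reduces, layer by layer, to a linear assignment problem. A minimal sketch (ours, assuming SciPy) of this greedy layer-wise matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_layer(wa, wb):
    """Solve Eq. (2) for one layer: wa, wb of shape (m_l, m_{l-1}),
    with the columns of wb already permuted by Pi_{l-1}.
    Returns pi such that wb[pi] is aligned with wa row by row."""
    cost = ((wa[:, None, :] - wb[None, :, :]) ** 2).sum(axis=-1)
    _, pi = linear_sum_assignment(cost)  # Hungarian-type solver
    return pi

def weight_matching(weights_a, weights_b):
    """Match hidden layers 1..L recursively, propagating Pi_{l-1}
    into the columns of W_B^l before matching rows."""
    prev = np.arange(weights_b[0].shape[1])  # Pi_0 = identity on the input
    perms = []
    for wa, wb in zip(weights_a[:-1], weights_b[:-1]):
        pi = match_layer(wa, wb[:, prev])
        perms.append(pi)
        prev = pi
    return perms  # the output layer W^{L+1} only gets its columns permuted
```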
By controlling the cost in Equation (2) at every layer, we show that the activations of network \(A\) and of network \(B\) permuted are approximately equal. Linearly interpolating both networks will therefore keep the activations of all hidden layers unchanged, except for the last layer, which acts as a linear function of the interpolation parameter \(t\in[0,1]\).
## 3 LMC for Two-Layer NNs in the Mean Field Regime
We will first study linear mode connectivity between two two-layer neural networks independently trained with SGD for the same number of steps.
### Background on the Mean Field Regime
We will use some notations from Mei et al. (2019) and consider a two-layer neural network,
\[\hat{f}_{N}(x;\theta)=\frac{1}{N}\sum_{i=1}^{N}\sigma_{*}(x;\theta_{i}) \tag{3}\]
parametrized by \(\theta_{i}=(a_{i},w_{i})\in\mathbb{R}\times\mathbb{R}^{d}\) and where \(\sigma_{*}(x;\theta_{i})=a_{i}\sigma(w_{i}x)\). The parameters evolve so as to minimize the following regularized cost \(R_{N}(\theta)=\mathbb{E}_{(x,y)\sim P}[(y-\hat{f}_{N}(x;\theta))^{2}]+\lambda \|\theta\|_{2}^{2}\). Define noisy regularized stochastic gradient descent (or noiseless regularization-free SGD when \(\lambda=0,\tau=0\)) with step size \(s_{k}\) and i.i.d. Gaussian noise \(g^{k}\sim\mathcal{N}(0,I_{d})\):
\[\theta_{i}^{k+1}=(1-2\lambda s_{k})\theta_{i}^{k}\] (SGD) \[+2s_{k}(y_{k}-\hat{f}_{N}(x_{k};\theta^{k}))\nabla_{\theta}\sigma_ {*}(x_{k};\theta_{i}^{k})+\sqrt{\frac{2s_{k}\tau}{4}}g_{i}^{k}\]
It will be useful to consider \(\rho_{N}^{k}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{\theta_{i}^{k}}\) the empirical distribution of the weights after \(k\) SGD steps.
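A sketch of one step of the update (SGD) for this parametrization (ours; \(\sigma=\tanh\) is an illustrative choice consistent with the boundedness assumptions below, and the noise scale copies the display above):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_N(x, a, w):
    # hat f_N(x; theta) = (1/N) sum_i a_i sigma(w_i . x)
    return np.mean(a * np.tanh(w @ x))

def sgd_step(a, w, x, y, s, lam=0.0, tau=0.0):
    """One noisy regularized SGD step on all theta_i = (a_i, w_i)."""
    pre = w @ x                                   # shape (N,)
    resid = y - f_N(x, a, w)
    grad_a = np.tanh(pre)                         # d sigma_*/d a_i
    grad_w = (a * (1 - np.tanh(pre) ** 2))[:, None] * x[None, :]  # d sigma_*/d w_i
    noise = np.sqrt(2 * s * tau / 4)              # scale as in the display above
    a = (1 - 2 * lam * s) * a + 2 * s * resid * grad_a \
        + noise * rng.standard_normal(a.shape)
    w = (1 - 2 * lam * s) * w + 2 * s * resid * grad_w \
        + noise * rng.standard_normal(w.shape)
    return a, w
```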
Indeed some recent works (Chizat and Bach, 2018; Mei et al., 2018, 2019) have shown that when setting the width \(N\) to be large and the step size \(s_{k}\) to be small, the empirical distribution of weights during training remains close to an empirical measure drawn from the solution of a partial differential equation (PDE) we explicit in Appendix B.1. Especially, the parameters \(\left\{\theta_{i}^{k},i\in[N]\right\}\) evolve approximately independently.
### Proving LMC in the mean field setting
Define respectively the alignment of a neuron function on the data and the correlation between two neurons:
\[V(\theta_{1}):=a_{1}v(w_{1}):=-\mathbb{E}_{P}[y\sigma_{*}(x;\theta_{1})]\] \[U(\theta_{1},\theta_{2}):=a_{1}a_{2}u(w_{1},w_{2}):=\mathbb{E}_ {P}[\sigma_{*}(x;\theta_{1})\sigma_{*}(x;\theta_{2})]\,.\]
and, for \(\varepsilon>0\) fixed, we define the step size
\[s_{k}=\varepsilon\xi(k\varepsilon) \tag{4}\]
where \(\xi\) is a positive scaling function. The underlying training time up to step \(k_{T}\) is defined as \(T:=\sum_{k=1}^{k_{T}}s_{k}\). We now state the standard assumptions for working in the mean field regime (Mei et al., 2019).
**Assumption 1**.: _The function \(t\mapsto\xi(t)\) is bounded Lipschitz. The non-linearity \(\sigma\) is bounded Lipschitz and the data distribution has a bounded support. The functions \(w\mapsto v(w)\) and \((w_{1},w_{2})\mapsto u(w_{1},w_{2})\) are differentiable, with bounded and Lipschitz continuous gradient. The weights at initialization \(\theta_{i}^{0}\) are i.i.d. with distribution \(\rho_{0}\) which has bounded support._
Assumption 1 imposes that the step size is of order \(\mathcal{O}(\varepsilon)\) and that its variations are of order \(\mathcal{O}(\varepsilon^{2})\). A constant step size \(\varepsilon\) will work. Bounded non-linearities include arctan and sigmoid but exclude ReLU. While it is a standard assumption in mean field theory (Mei et al., 2018, 2019), we mention in §B that this assumption can be relaxed to the weaker assumption that the non-linearity stays small on some big enough compact set.
The second assumption is technical and only used for studying noisy regularized SGD.
**Assumption 2**.: \(V,U\) _are four times continuously differentiable and \(\nabla_{1}^{k}u(\theta_{1},\theta_{2})\) is uniformly bounded for \(0\leq k\leq 4\)._
The following theorem states that two wide enough two-layer neural networks trained independently with SGD exhibit, with high probability, a linear connection of the prediction modulo permutations for all data.
**Theorem 3.1**.: _Consider two two-layer neural networks as in Equation (3) trained with equation SGD with the same initialization over the weights, independently and for the same underlying time \(T\). Suppose Assumptions 1 and 2 to hold. Then \(\forall\delta,\text{err}>0,\exists N_{\min}\) such that if \(N\geq N_{\min},\exists\varepsilon_{\max}(N)\) such that if \(\varepsilon\leq\varepsilon_{\max}(N)\) in Equation (4), then with probability at least \(1-\delta\) over the training process, there exists a permutation of the second network's hidden layer such that for almost every \(x\sim P\):_
\[\left|t\hat{f}_{N}(x;\theta_{A})+(1-t)\hat{f}_{N}(x;\theta_{B})\right.\] \[-\hat{f}_{N}(x;t\theta_{A}+(1-t)\tilde{\theta}_{B})|\leq\text{err},\quad\forall t\in[0,1]\,.\]
**Remark.** Assumption 2 is not used when studying noiseless regularization-free SGD (\(\lambda=0\), \(\tau=0\)).
**Corollary 3.2**.: _Under assumptions of Theorem 3.1, \(\forall\delta,\text{err}>0,\,\exists N^{\prime}_{\min},\,\forall N\geq N^{ \prime}_{\min},\,\exists\varepsilon^{\prime}_{\max}(N),\,\forall\varepsilon \leq\varepsilon^{\prime}_{\max}(N)\) in Equation (4), then with probability at least \(1-\delta\) over the training process, there exists a permutation of the second network's hidden layer such that \(\forall t\in[0,1]\):_
\[\mathbb{E}_{P}\big{[}\big{(}\hat{f}_{N}(x;t\theta_{A}+(1-t)\tilde{ \theta}_{B})-y\big{)}^{2}\big{]}\leq\text{err}\] \[+\mathbb{E}_{P}\big{[}t(\hat{f}_{N}(x;\theta_{A})-y)^{2}+(1-t)( \hat{f}_{N}(x;\theta_{B})-y)^{2}\big{]}\]
**Discussion.** Two wide enough two-layer neural networks trained with SGD are therefore linearly mode connected, with an upper bound on the error tolerance that we make explicit in Appendix B. We have extensively used the independence between weights in the mean field regime to apply OT bounds on convergence rates of empirical measures. To go beyond the two-layer case, we will need to make such an assumption on the distribution of weights. Note that this is true at initialization and after training for two-layer networks. Studying the independence of weights in the multi-layer case is a natural avenue for future work, already studied in Nguyen and Pham (2020).
## 4 General Strategy for proving LMC of multi-layer networks
We now build the foundations to study the case of multi-layer neural networks (see Equation (1)).
We first write one formal property expressing the existence of permutations of the neurons of network \(B\) up to layer \(\ell\) such that the activations of network \(A\), of network \(B\) permuted, and of the mean network \(M_{t}\) are close up to layer \(\ell\). This property is trivially satisfied at the input layer. We then show that under two formal assumptions on the weight matrices of networks \(A\) and \(B\), this property still holds at layer \(\ell+1\).
### Formal Property at layer \(\ell\)
Let \(\varepsilon>0\), \(m_{\ell}\geq\tilde{m}_{\ell}\) and \(m_{\ell+1}\geq\tilde{m}_{\ell+1}\). Assume \(\frac{m_{\ell}}{\tilde{m}_{\ell}},\frac{m_{\ell+1}}{\tilde{m}_{\ell+1}}\in \mathbb{N}\) to simplify technical details; this hypothesis can easily be removed.
**Property 1**.: _There exist two constants \(\bar{E}_{\ell},E_{\ell}\) such that given weight matrices up to layer \(\ell\), \(W^{1,\dots,\ell}_{A},W^{1,\dots,\ell}_{B}\), one can find \(\ell\) permutations \(\Pi_{1},\cdots,\Pi_{\ell}\) of the neurons in the hidden layers \(1\) to \(\ell\) of network \(B\), an equi-partition \(\mathcal{I}^{\ell}=\{I^{\ell}_{1},\dots,I^{\ell}_{\tilde{m}_{\ell}}\}\), and a map \(\underline{\phi}^{\ell}(x)\in\mathbb{R}^{m_{\ell}}\) with \(\underline{\phi}^{\ell}_{i}(x)=\underline{\phi}^{\ell}_{j}(x)\) for all \(k\in[\tilde{m}_{\ell}]\) and all \(i,j\in I^{\ell}_{k}\), such that:_

\[\mathbb{E}_{P,Q}\|\underline{\phi}^{\ell}(x)\|_{2}^{2}\leq\bar{E}_{\ell}m_{\ell}\] \[\mathbb{E}_{P,Q}\|\phi^{\ell}_{A}(x)-\underline{\phi}^{\ell}(x)\|_{2}^{2}\leq E_{\ell}m_{\ell}\] \[\mathbb{E}_{P,Q}\|\tilde{\phi}^{\ell}_{B}(x)-\underline{\phi}^{\ell}(x)\|_{2}^{2}\leq E_{\ell}m_{\ell}\] \[\mathbb{E}_{P,Q}\|\phi^{\ell}_{M_{t}}(x)-\underline{\phi}^{\ell}(x)\|_{2}^{2}\leq E_{\ell}m_{\ell}\,,\quad\forall t\in[0,1],\]
This property not only requires proximity between the activations \(\phi^{\ell}_{A}(x),\tilde{\phi}^{\ell}_{B}(x)\) at layer \(\ell\) but also requires the existence of a vector \(\underline{\phi}^{\ell}(x)\) whose coefficients in the same groups of the partition \(\mathcal{I}^{\ell}\) are equal, and which therefore lives in a space of dimension \(\tilde{m}_{\ell}\). It bounds the size of the function space available at layer \(\ell\) and hence allows to use an effective width \(\tilde{m}_{\ell}\) independent of the real width \(m_{\ell}\), which can be much larger. It is crucial in order to show LMC for neural networks of constant width across layers. The introduction of such a map \(\underline{\phi}^{\ell}(x)\) is non trivial and is an important contribution since it allows us to extend the results of Entezari et al. (2021) beyond two layers.
### Assumptions on the weight distribution
We now make an assumption on the empirical distribution of the weights \(\hat{\mu}_{A,\ell+1}\) at layer \(\ell+1\) of \(W^{\ell+1}_{A}\).
**Assumption 3**.: _There exists an integer \(\tilde{m}_{\ell+1}\) such that for all equi-partition \(\mathcal{I}^{\ell}\) of \([m_{\ell}]\) with \(\tilde{m}_{\ell}\) sub-sets, there exists a random empirical measure \(\hat{\mu}_{\tilde{m}_{\ell+1}}\) independent of \(A\) and \(B\) composed of \(\tilde{m}_{\ell+1}\) vectors in \(\mathbb{R}^{m_{\ell}}\), such that \(\mathbb{E}_{Q}[\mathcal{W}^{2}_{2}(\hat{\mu}^{\mathcal{I}^{\ell}}_{A,\ell+1}, \hat{\mu}^{\mathcal{I}^{\ell}}_{\tilde{m}_{\ell+1}})]\leq C_{1}\)._
This assumption requires that the empirical distribution with \(m_{\ell+1}\) points of the neurons' weights of network \(A\) at layer \(\ell+1\) can be approximated by an empirical measure with a smaller number \(\tilde{m}_{\ell+1}\) of points. Note that it implies proximity in Wasserstein distance between \(\hat{\mu}^{\mathcal{I}^{\ell}}_{A}\) and \(\hat{\mu}^{\mathcal{I}^{\ell}}_{B}\) by a triangle inequality.
We finally assume some central limit behavior when summing the errors made for each neuron of layer \(\ell\).
**Assumption 4**.: _There exists a constant \(C_{2}\) such that \(\forall X\in\mathbb{R}^{m_{\ell}}\) we have:_

\[\max\left\{\mathbb{E}_{Q}[\|W^{\ell+1}_{A}X\|_{2}^{2}],\mathbb{E}_{Q}[\|W_{ \tilde{m}_{\ell+1}}X\|_{2}^{2}]\right\}\leq C_{2}\frac{m_{\ell+1}}{m_{\ell}}\|X\|_ {2}^{2},\]
Finally, we consider the following assumption on the non-linearity, verified for example by pointwise ReLU.
**Assumption 5**.: \(\sigma\) _is pointwise, \(1\)-Lipschitz, \(\sigma(0)=0\)._
### Propagating Property 1 to layer \(\ell+1\)
We now state how Property 1 propagates through the layers using Assumptions 3 to 5, with new parameters \(\bar{E}_{\ell+1},E_{\ell+1}\). We give a proof in Appendix A.6.
**Lemma 4.1**.: _Let \(\ell\in\{0,\cdots,L-1\}\) and suppose Property 1 to hold at layer \(\ell\) and Assumptions 3 to 5 to hold, then Property 1 still holds at the next layer with \(\tilde{m}_{\ell+1}\) given in Assumption 3 and_
\[\bar{E}_{\ell+1} =C_{2}\bar{E}_{\ell}\] \[E_{\ell+1} =2C_{2}E_{\ell}+2C_{1}\tilde{m}_{\ell}\bar{E}_{\ell}\]
## 5 LMC for Randomly Initialized NNs with sub-Gaussian Distributions
We will make the following assumption on the empirical distribution of neurons weights \(\hat{\mu}_{A,\ell},\hat{\mu}_{B,\ell}\) of \(W^{\ell}_{A},W^{\ell}_{B}\) at layer \(\ell\).
**Assumption 6** (Independence of neurons weights).: \(\hat{\mu}_{A,\ell},\hat{\mu}_{B,\ell}\) _correspond to two i.i.d drawings of vectors with distribution \(\mu_{\ell}\) i.e., \(\hat{\mu}_{A,\ell},\hat{\mu}_{B,\ell}\) have the law of \(\frac{1}{m_{\ell}}\sum_{i=1}^{m_{\ell}}\delta_{x_{i}}\) where \(x_{i}\sim\mu_{\ell}\) i.i.d._
Assumption 6 is verified for example at initialization, but more generally whenever the weights do not depend too much on one another. It still holds for wide two-layer neural networks trained with SGD and is at the heart of the proof of Theorem 3.1.
### Showing LMC for multilayer MLPs under Gaussian distribution
We first examine the case \(\mu_{\ell}=\mathcal{N}\left(0,\frac{I_{m_{\ell-1}}}{m_{\ell-1}}\right)\). We moreover assume that the input data distribution has bounded second moment: \(\mathbb{E}_{P}[\|x\|_{2}^{2}]\leq m_{0}\).
Our strategy detailed in Appendix A.7 consists in showing that wide enough such networks will satisfy Assumptions 3 and 4 with well controlled constants \(C_{1},C_{2}\). We can then apply Lemma 4.1 successively \(L\) times to get the following lemma:
**Lemma 5.1**.: _Under normal initialization of the weights, given \(\varepsilon>0\), if \(m_{0}\geq 5\), there exist minimal widths \(\tilde{m}_{1},\dots,\tilde{m}_{L}\) such that if \(m_{1}\geq\tilde{m}_{1},\dots,m_{L}\geq\tilde{m}_{L}\), Property 1 is verified at the last hidden layer \(L\) for \(\bar{E}_{L}=1,E_{L}=\varepsilon^{2}\). Moreover, \(\forall\ell\in[L],\exists T_{\ell}\) which only depends on \(L,\ell\) such that one can define recursively \(\tilde{m}_{\ell}\) as \(\tilde{m}_{0}=m_{0}\) and_
\[\tilde{m}_{\ell}=\tilde{\mathcal{O}}\left(\frac{T_{\ell}}{\varepsilon}\right)^{ \tilde{m}_{\ell-1}}\]
Discussion.The hypothesis \(m_{0}\geq 5\) is technical and could be relaxed at the price of slightly changing the bound on \(\tilde{m}_{1}\). Lemma 5.1 shows that given two random networks whose widths \(m_{\ell}\) are larger than \(\tilde{m}_{\ell}\), we can permute the neurons of the second one such that the activations of both networks at layer \(\ell\) are close to those of any network on the linear path in parameter space.
As \(\varepsilon\) goes to \(0\), the width of layer \(\ell\) must scale at least as \(\left(\frac{1}{\varepsilon}\right)^{\tilde{m}_{\ell-1}}\). This is a fundamental bound due to the convergence rate in Wasserstein distance of empirical measures. It imposes a recursive exponential growth of the needed width with respect to depth. This condition appears excessive as compared to the typical width of neural networks used in practice. We highlight here that Ainsworth et al. (2022) empirically demonstrate that networks at initialization do not exhibit LMC and that the loss barrier is erased only after a sufficient number of SGD steps.
### Showing Linear Mode Connectivity
We make the following assumption on the loss function to show LMC from Lemma 5.1.
**Assumption 7**.: \(\forall y\in\mathbb{R}^{m_{L+1}}\)_, the loss \(\mathcal{L}(\cdot,y)\) is convex and \(1\)-Lipschitz._
We finally prove the following bound on the loss of the mean network \(M_{t}\) in Appendix A.8:
**Theorem 5.2**.: _Under normal initialization of the weights, for \(m_{1}\geq\tilde{m}_{1},\cdots,m_{L}\geq\tilde{m}_{L}\) as defined in Lemma 5.1, \(m_{0}\geq 5\), and under Assumption 7 we know that \(\forall t\in[0,1]\), with \(Q\)-probability at least \(1-\delta_{Q}\), there exists permutations of hidden layers \(1,\ldots,L\) of network \(B\) that are independent of \(t\), such that:_
\[\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{M_{t}}(x),y\right) \right]\leq t\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{A}(x),y\right) \right]+\\ (1-t)\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{B}(x),y\right) \right]+\frac{4\sqrt{m_{L+1}}}{\delta_{Q}^{2}}\varepsilon\]
Discussion.The minimal width at layer \(\ell\) needed for Theorem 5.2 is recursively \(\tilde{m}_{\ell}\sim\varepsilon^{-\tilde{m}_{\ell-1}}\). Applied to randomly initialized two-layer networks, we need a hidden layer dimension of \(\varepsilon^{-m_{0}}\), as opposed to Entezari et al. (2021), who prove a bound of \(\varepsilon^{-(2m_{0}+4)}\).
### Tightness of the bound dependency with respect to the error tolerance
We discuss here the tightness of the minimal width \(\tilde{m}_{\ell}\) we require in Lemma 5.1 with respect to the error tolerance \(\varepsilon\). The recursive exponential growth of the width in the form \(\tilde{m}_{\ell}\sim\left(\frac{1}{\varepsilon}\right)^{\tilde{m}_{\ell-1}}\) is a consequence of the convergence rate in Wasserstein distance of empirical measures in dimension \(\tilde{m}_{\ell-1}\), which is of order \(m^{-1/\tilde{m}_{\ell-1}}\) in the number of points \(m\). Theorem 5.3 provides a corresponding lower bound which shows that this recursive exponential growth is tight at the precise rate \(\left(\frac{1}{\varepsilon}\right)^{\tilde{m}_{\ell-1}}\) (just take \(n=\tilde{m}_{\ell-1}\), \(m=\tilde{m}_{\ell}\), \(\mu=\mu_{\ell}\), \(x=\phi^{\ell-1}_{A}(x)\), \(W_{A,B}=W_{A,B}^{\ell}\)). A proof is given in Appendix A.11.
**Theorem 5.3**.: _Let \(n\geq 1\), \(x\sim P\in\mathcal{P}_{1}(\mathbb{R}^{n})\) and \(\mu\in\mathcal{P}(\mathbb{R}^{n})\) whose density with respect to the Lebesgue measure is bounded by \(F_{1}\). Suppose \(\Sigma=\mathbb{E}[xx^{T}]\) has full rank \(n\). Let \(m\geq 1\) and \(W_{A},W_{B}\in\mathcal{M}_{m,n}(\mathbb{R})\) whose rows are drawn i.i.d. from \(\mu\). Then, there exists \(F_{0}\) such that_
\[\mathbb{E}_{W_{A},W_{B}}\big{[}\min_{\Pi\in\mathcal{S}_{m}}\mathbb{E}_{P}\|(W _{A}-\Pi W_{B})x\|_{2}^{2}\big{]}\geq F_{0}\left(\frac{1}{m}\right)^{2/n}\]
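This scaling is easy to probe numerically. The sketch below (ours) draws the rows of \(W_{A},W_{B}\) i.i.d. Gaussian and, for \(x\sim\mathcal{N}(0,I_{n})\), uses that the objective reduces to a squared Frobenius norm after matching; the average matched squared row distance then mirrors the \(\mathcal{W}_{2}^{2}\) convergence rate and should not decay faster than \(m^{-2/n}\).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_cost(m, n, rng):
    wa = rng.standard_normal((m, n)) / np.sqrt(n)
    wb = rng.standard_normal((m, n)) / np.sqrt(n)
    cost = ((wa[:, None, :] - wb[None, :, :]) ** 2).sum(axis=-1)
    r, c = linear_sum_assignment(cost)
    return cost[r, c].mean()  # average matched squared row distance

rng = np.random.default_rng(0)
n = 3
for m in (16, 64, 256, 1024):
    avg = np.mean([matched_cost(m, n, rng) for _ in range(5)])
    print(m, avg, m ** (-2 / n))  # compare the decay against the m^{-2/n} rate
```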
**Remark 1.** Using an effective width \(\tilde{m}_{\ell-1}\) smaller and independent of the real width \(m_{\ell-1}\) allows to show LMC for networks of constant hidden width \(m_{1}=m_{2}=\ldots=m_{L}\) as soon as they verify \(m_{1}\geq\tilde{m}_{1},\ldots,m_{L}\geq\tilde{m}_{L}\) where \(\tilde{m}_{1},\ldots,\tilde{m}_{L}\) are defined in Lemma 5.1. Without this trick, we need a recursive exponential growth of the real width \(m_{\ell}\sim\left(\frac{1}{\varepsilon}\right)^{m_{\ell-1}}\).
**Remark 2**.: Motivated by the fact that feature learning may concentrate the weight distribution on a low dimensional subspace, we could extend our proofs to the case where the underlying weight distribution has a support of smaller dimension, to get recursive bounds no longer at rate \(\tilde{m}_{\ell-1}\) but at a smaller one. Note that this is unlikely to happen, as we expect the matrix of weight vectors of a given layer to be full rank. Therefore, we study in the next section the case when this matrix is approximately low rank, or equivalently when the weight distribution is concentrated around a low dimensional approximate support.
### Approximately low dimensional supported measures
For the sake of clarity, assume from now on that layer \(\ell-1\) of network \(A\) has been permuted such that for \(\mathcal{I}^{\ell-1}=\{I_{1}^{\ell-1},\ldots,I_{\tilde{m}_{\ell-1}}^{\ell-1}\}\) (given in Property 1) we have \(I_{1}^{\ell-1}=\{1,\ldots,p_{\ell-1}\}\), \(\ldots\), \(I_{\tilde{m}_{\ell-1}}^{\ell-1}=\{m_{\ell-1}-p_{\ell-1}+1,\ldots,m_{\ell-1}\}\) with \(p_{\ell-1}=m_{\ell-1}/\tilde{m}_{\ell-1}\). This assumption is mild since we can always consider
a permuted version of network \(A\) without changing the problem.
Motivated by the discussion in Appendix A.9.1 we consider the model where the weights at layer \(\ell\) are initialized i.i.d. multivariate Gaussian \(\mu_{\ell}=\mathcal{N}(0,\Sigma^{\ell-1})\) with
\[\Sigma^{\ell-1}:=\mathrm{Diag}\left(\lambda_{1}^{\ell}I_{p_{\ell-1}},\lambda_{ 2}^{\ell}I_{p_{\ell-1}},\ldots,\lambda_{\tilde{m}_{\ell-1}}^{\ell}I_{p_{\ell-1 }}\right)\]
with \(\frac{1}{m_{\ell-1}}\frac{\tilde{m}_{\ell-1}}{k_{\ell-1}}\geq\lambda_{1}^{ \ell}\geq\lambda_{2}^{\ell}\geq\ldots\geq\lambda_{\tilde{m}_{\ell-1}}^{\ell}\), where \(k_{\ell-1}\leq\tilde{m}_{\ell-1}\) is an approximate dimension of the support of the underlying weight distribution. Note that to balance the low dimensionality of the weight distribution, we have replaced the upper bound on the eigenvalues \(\frac{1}{m_{\ell-1}}\) by the greater value \(\frac{1}{m_{\ell-1}}\frac{\tilde{m}_{\ell-1}}{k_{\ell-1}}\) to avoid vanishing activations when \(\ell\) grows, which would have made our result vacuous.
The following assumption states that the weight distribution \(\mu_{\ell}^{\mathcal{I}^{\ell-1}}\) at layer \(\ell\), considered in \(\mathcal{P}_{1}(\mathbb{R}^{\tilde{m}_{\ell-1}})\) (with the operation made explicit in Section 2), is approximately of dimension \(k_{\ell-1}=e\tilde{m}_{\ell-1}\). The approximation becomes more accurate as \(\eta\to 0\).
**Assumption 8** (Approximately low-dimensionality).: \[\exists\eta,e\in(0,1),\forall\ell\in[L],\,\frac{\sqrt{\sum_{j=k_{\ell-1}}^{ \tilde{m}_{\ell-1}}\lambda_{j}^{\ell}}}{4\sqrt{\sum_{j=1}^{k_{\ell-1}}\lambda_{j}^{\ell}}} \leq\eta,\,\frac{k_{\ell-1}}{\tilde{m}_{\ell-1}}=e\]
**Theorem 5.4**.: _Under Assumptions 7 and 8, given \(\varepsilon>0\), if \(em_{0}\geq 5\) there exist minimal widths \(\tilde{m}_{1},\ldots,\tilde{m}_{L}\) such that if \(\eta^{-k_{0}}\geq m_{1}\geq\tilde{m}_{1},\ldots,\eta^{-k_{L-1}}\geq m_{L}\geq \tilde{m}_{L}\), Property 1 is verified at the last hidden layer \(L\) for \(\bar{E}_{L}=1,E_{L}=\varepsilon^{2}\). Moreover, \(\forall\ell\in[L]\), \(\exists T_{\ell}^{\prime}\) which only depends on \(L,e,\ell\), such that one can define recursively \(\tilde{m}_{\ell}\) as_
\[\tilde{m}_{\ell}=\tilde{\mathcal{O}}\left(\frac{T_{\ell}^{\prime}}{\varepsilon }\right)^{k_{\ell-1}}=\tilde{\mathcal{O}}\left(\frac{T_{\ell}^{\prime}}{ \varepsilon}\right)^{e\tilde{m}_{\ell-1}}\]
_where \(\tilde{m}_{0}=m_{0}\). Moreover, there exist permutations of hidden layers \(1,\ldots,L\) of network \(B\) s.t. \(\forall t\in[0,1]\), with \(Q\)-probability at least \(1-\delta_{Q}\):_
\[\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{M_{t}}(x),y\right) \right]\leq t\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{A}(x),y\right)\right]\] \[\qquad+(1-t)\mathbb{E}_{P}\left[\mathcal{L}\left(\hat{f}_{B}(x), y\right)\right]+\frac{4\sqrt{m_{L+1}}}{\sqrt{e\delta_{Q}^{2}}}\varepsilon\]
**Discussion.** We give a proof in Appendix A.10. For \(\eta\) small enough, the distribution of weights is approximately lower dimensional. It yields faster convergence rates until \(m\) becomes exponentially big in \(\eta\). This prevents the previous recursive exponential growth of width with respect to depth, though asymptotically, we recover the same rates as in Theorem 5.2. The smaller \(e\), the lower dimensional the distributions are, and the less the width needs to grow when \(\varepsilon\to 0\). The problem in that model is that the constant \(T_{\ell}^{\prime}\) explodes if \(e\to 0\), which prevents using a model with fixed \(k_{\ell}\) across the layers for the weight distribution. We want to highlight here that the proof can be extended to such a case, but we then need to assume that the constant \(C_{2}\) in Lemma 4.1 is bounded across the layers and does not depend on \(e\) (recall that with our proof, we had \(C_{2}=\frac{1}{e}\)). This assumption seems coherent because the average activations do not explode across layers in the model. Assuming this, the bound we obtain for \(\tilde{m}_{\ell}\) in Theorem 5.4 is completely independent of \(\tilde{m}_{\ell-1}\), and there is no recursive exponential growth in the width needed across the layers. We give a more explicit discussion in Appendix A.12.
### LMC for sub-Gaussian distributions
Still under the setting of Assumption 6, assume that the underlying distribution \(\mu_{\ell}\) verifies for each layer \(\ell\in[L+1]\): if \(X\sim\mu_{\ell}\), then \(\forall j\neq k\in[m_{\ell-1}]\), \(X_{j}\perp\!\!\!\perp X_{k}\). Moreover, \(\forall i\in[\tilde{m}_{\ell-1}],\forall j,k\in I_{i}^{\ell-1}\),
\[\mathbb{E}[X_{j}^{2}]=\mathbb{E}[X_{k}^{2}]=\lambda_{i}^{\ell}\]
Finally suppose the variables are sub-Gaussian i.e., \(\exists K>0,\forall i\in[\tilde{m}_{\ell-1}],\forall j\in I_{i}^{\ell-1}\), \(\forall c>0\),
\[\mathbb{P}(|X_{j}|\geq c)\leq 2\exp(-\frac{c^{2}}{K\lambda_{i}^{\ell}})\]
We explain in Appendix A.13 why both Theorem 5.2 (in the case \(\lambda_{1}^{\ell}=\ldots=\lambda_{\tilde{m}_{\ell-1}}^{\ell}=1/m_{\ell-1}\)) and Theorem 5.4 hold with mild modifications in the constants.
This considerably extends our previous results to LMC for any large enough networks whose weights are i.i.d. and whose underlying distribution has a sub-Gaussian tail (for example, the uniform distribution).
### Link with dropout stability
In Appendix A.14, we build a first step towards unifying our study with the dropout stability viewpoint (Kuditipudi et al., 2019; Shevchenko and Mondelli, 2020) by showing in a simplified setting how networks become dropout stable in the same asymptotics on the width as the one needed in our Theorem 5.2.
## 6 Experimental validation and new method to find permutations
Our previous study shows the influence of the dimension of the underlying weight distribution on LMC effectiveness. Based on this insight we develop a
new weight matching method at the crossroads between the previous naive weight matching (WM) and activation matching (AM) methods (Ainsworth et al., 2022). Given \(n\) training points \(x_{i},\,i\in[n]\), denote \(Z_{A}^{\ell}\in\mathcal{M}_{m_{\ell},n}(\mathbb{R})\) (respectively \(Z_{B}^{\ell}\)) the activations \(\phi_{A}^{\ell}(x_{i})\) for the \(n\) data points \(x_{i}\). Further denote \(\Sigma_{A}^{\ell}:=\frac{1}{n}Z_{A}^{\ell}[Z_{A}^{\ell}]^{T}\approx\mathbb{E}_ {P}\left[\phi_{A}^{\ell}(x)[\phi_{A}^{\ell}(x)]^{T}\right]\). We aim at finding for each layer \(\ell\) the optimal permutation \(\Pi\) minimizing the cost (respectively for naive WM, our new WM method and AM):
\[\min_{\Pi\in S_{m_{\ell}}}\left\|W_{A}^{\ell}-\Pi W_{B}^{\ell} \Pi_{\ell-1}^{T}\right\|_{2}^{2}\,,\] (Naive WM) \[\min_{\Pi\in S_{m_{\ell}}}\left\|W_{A}^{\ell}-\Pi W_{B}^{\ell} \Pi_{\ell-1}^{T}\right\|_{2,\Sigma_{A}^{\ell-1}}^{2},\] (WM (ours)) \[\min_{\Pi\in S_{m_{\ell}}}\left\|Z_{A}^{\ell}-\Pi Z_{B}^{\ell} \right\|_{2}^{2}\,,\] (AM)
where \(\|\cdot\|_{2,\Sigma_{A}^{\ell-1}}\) is the norm3 induced by the scalar product \((X,Y)\mapsto\text{tr}(X\Sigma_{A}^{\ell-1}Y^{T})\). We both theoretically support the gain of our method in Theorem C.2 and empirically verify that it consistently and substantially outperforms naive weight matching across different learning rates when training with SGD.
Footnote 3: Semi-norm in full generality (if \(\Sigma_{A}^{\ell-1}\) is not full rank)
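All three objectives reduce to linear assignment problems with different cost matrices. A minimal sketch (ours, assuming SciPy; one layer, with the columns of `wb` already permuted by \(\Pi_{\ell-1}\)):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(cost):
    _, pi = linear_sum_assignment(cost)
    return pi

def naive_wm(wa, wb):
    # cost[i, j] = || row_i(W_A) - row_j(W_B) ||_2^2
    diff = wa[:, None, :] - wb[None, :, :]
    return assign((diff ** 2).sum(axis=-1))

def covariance_wm(wa, wb, sigma_a):
    # ||X||_{2,Sigma}^2 = tr(X Sigma X^T), so row-wise
    # cost[i, j] = (row_i(W_A) - row_j(W_B))^T Sigma_A^{l-1} (row_i(W_A) - row_j(W_B))
    diff = wa[:, None, :] - wb[None, :, :]
    return assign(np.einsum('ijk,kl,ijl->ij', diff, sigma_a, diff))

def activation_matching(za, zb):
    # za, zb: activations of shape (m_l, n_samples)
    diff = za[:, None, :] - zb[None, :, :]
    return assign((diff ** 2).sum(axis=-1))
```

Naive WM is recovered from the covariance-weighted method by taking \(\Sigma_{A}^{\ell-1}=I\), i.e., by ignoring the second moments of the previous layer's activations.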
We train a three-hidden-layer MLP of width \(512\) on MNIST with learning rates varying between \(10^{-4}\) and \(10^{-1}\) across \(4\) runs. We plot in Figure 2(b) the approximate dimension of the considered covariance matrix for each matching method: \(W_{A}^{\ell}[W_{A}^{\ell}]^{T}\) for WM (naive), \(W_{A}^{\ell}\Sigma_{A}^{\ell-1}[W_{A}^{\ell}]^{T}\) for WM (ours) and \(\Sigma_{A}^{\ell}\) for AM (see §C.1). Our code is available at [https://github.com/9aze/OT_LMC/tree/main](https://github.com/9aze/OT_LMC/tree/main). We see the detrimental effect of a high approximate dimension on LMC effectiveness, therefore validating our theoretical approach. Note that for a learning rate of \(10^{-1}\) the correlation is less clear, but a trend is visible: the dimension decreases for naive WM as it performs better (and increases for AM and our WM method as they perform comparatively less well). An alternative would be to use a proxy taking the diameter of the distributions into account (and not only the dimension of their support). Finally, experiments with Adam lead to less clear results, which we did not report as more experimental investigation is needed. In particular, understanding the impact of the optimizer on the independence of weights during training is crucial, as it is a central assumption in our study.
## 7 Discussion
Optimal transport serves as a good framework to study linear mode connectivity of neural networks. This paper uses convergence rates of empirical measures in Wasserstein distance to upper bound the test error of the linear combination of two networks in weight space modulo permutation symmetries. Our main assumption is the independence of all neurons' weight vectors inside a given layer. This assumption is trivially true at initialization but remains valid for wide two-layer networks trained with SGD. We experimentally demonstrate the correlation between the dimension of the underlying weight distribution and LMC effectiveness and design a new weight matching method that significantly outperforms existing ones. A natural direction for future work is to focus on the behaviour of the weight distribution inside each layer of DNNs and its independence. Moreover, extending our results to only assuming approximate independence of weights is a natural direction as it seems a more realistic setting.

Figure 2: Statistics of the average network \(M\) over the linear path between networks \(A\) and \(B\) using respectively weight matching (blue), weight matching using covariance of activations (green), and activation matching (red)
## 8 Acknowledgements
The work of B. Goujaud and A. Dieuleveut is partially supported by ANR-19-CHIA-0002-01/chaire SCAI, and Hi! Paris. This work was partly funded by the French government under the management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
|
2305.16910 | Universal approximation with complex-valued deep narrow neural networks | We study the universality of complex-valued neural networks with bounded
widths and arbitrary depths. Under mild assumptions, we give a full description
of those activation functions $\varrho:\mathbb{C}\to \mathbb{C}$ that have the
property that their associated networks are universal, i.e., are capable of
approximating continuous functions to arbitrary accuracy on compact domains.
Precisely, we show that deep narrow complex-valued networks are universal if
and only if their activation function is neither holomorphic, nor
antiholomorphic, nor $\mathbb{R}$-affine. This is a much larger class of
functions than in the dual setting of arbitrary width and fixed depth. Unlike
in the real case, the sufficient width differs significantly depending on the
considered activation function. We show that a width of $2n+2m+5$ is always
sufficient and that in general a width of $\max\{2n,2m\}$ is necessary. We
prove, however, that a width of $n+m+4$ suffices for a rich subclass of the
admissible activation functions. Here, $n$ and $m$ denote the input and output
dimensions of the considered networks. | Paul Geuchen, Thomas Jahn, Hannes Matt | 2023-05-26T13:22:14Z | http://arxiv.org/abs/2305.16910v2 | # Universal approximation with complex-valued deep narrow neural networks
###### Abstract.
We study the universality of complex-valued neural networks with bounded widths and arbitrary depths. Under mild assumptions, we give a full description of those activation functions \(\varrho:\mathbb{C}\to\mathbb{C}\) that have the property that their associated networks are universal, i.e., are capable of approximating continuous functions to arbitrary accuracy on compact domains. Precisely, we show that deep narrow complex-valued networks are universal if and only if their activation function is neither holomorphic, nor antiholomorphic, nor \(\mathbb{R}\)-affine. This is a much larger class of functions than in the dual setting of arbitrary width and fixed depth. Unlike in the real case, the sufficient width differs significantly depending on the considered activation function. We show that a width of \(2n+2m+5\) is always sufficient and that in general a width of \(\max\{2n,2m\}\) is necessary. We prove, however, that a width of \(n+m+4\) suffices for a rich subclass of the admissible activation functions. Here, \(n\) and \(m\) denote the input and output dimensions of the considered networks.
Key words and phrases: complex-valued neural networks, holomorphic function, polyharmonic function, uniform approximation, universality.

2020 Mathematics Subject Classification: 68T07, 41A30, 41A63, 31A30, 30E10.

All authors contributed equally to this work.
## 1. Introduction
This paper addresses the universality of deep narrow complex-valued neural networks (CVNNs), i.e., the density of neural networks with arbitrarily large depths but bounded widths, in spaces of continuous functions over compact domains with respect to the uniform norm. Our main theorem is as follows.
**Theorem 1.1**.: _Let \(n,m\in\mathbb{N}\), and \(\varrho:\mathbb{C}\to\mathbb{C}\) be a continuous function which at some point is real differentiable with non-vanishing derivative. Then \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n+2m+5}\) is universal if and only if \(\varrho\) is neither holomorphic, nor antiholomorphic, nor \(\mathbb{R}\)-affine._
Here \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,W}\) denotes the set of complex-valued neural networks with input dimension \(n\), output dimension \(m\), activation function \(\varrho\), and \(W\) neurons per hidden layer. These neural networks are alternating compositions
\[V_{L}\circ\varrho^{\times W}\circ\ldots\circ\varrho^{\times W}\circ V_{1}: \quad\mathbb{C}^{n}\to\mathbb{C}^{m} \tag{1.1}\]
of affine maps \(V_{1}:\mathbb{C}^{n}\to\mathbb{C}^{W}\), \(V_{2},\ldots,V_{L-1}:\mathbb{C}^{W}\to\mathbb{C}^{W}\), \(V_{L}:\mathbb{C}^{W}\to\mathbb{C}^{m}\), and componentwise applications of the activation function \(\varrho\), see Section 2 for a detailed definition.
Studying the expressivity of neural networks is an important part of the mathematical analysis of deep learning. We contribute a qualitative result in that direction. Such qualitative results naturally precede the investigation of approximation rates, i.e., the decay of approximation errors as the class of approximants increases, and the design of numerical algorithms that output near-to-optimal approximants.
### Related work
In the neural network context, universal approximation theorems date back to the 1980s and 1990s [7, 24], where it was shown that real-valued shallow neural networks with output dimension \(1\) and a fixed continuous activation function are universal if and only if the activation function is not a polynomial. Modifications of the setting in which universal approximation is studied appear in the neural network literature over the past decades. These variants of the problem refer to, e.g., the input and the output dimension, the target space (typically \(L_{p}\) for \(1\leq p\leq\infty\), continuous
We show that deep narrow CVNNs with input dimension \(n\), output dimension \(m\), and \(2n+2m+5\) or even \(n+m+4\) neurons per hidden layer are universal, see Theorem 5.3. This is done by approximating polynomials in \(z\) and \(\overline{z}\) uniformly on compact sets and invoking the Stone-Weierstrass theorem.
### Organization of our paper
In Section 2 we fix our notation and recall some basics from complex analysis, the theory of neural networks, and functional analysis. Section 3 introduces the register model and shows how the identity map on \(\mathbb{C}\) and complex conjugation can be approximated using CVNNs. For non-polyharmonic functions, the proof of Theorem 1.1 can be found in Section 4. In Section 5, we present the proof of universality claimed in Theorem 1.1 in the case of polyharmonic activation functions which are neither holomorphic, nor antiholomorphic, nor \(\mathbb{R}\)-affine. In Section 6, we show that CVNNs whose activation function is holomorphic or antiholomorphic or \(\mathbb{R}\)-affine are never universal, regardless of the number of neurons per hidden layer, and derive a lower bound on the minimum width necessary for universality. In the appendix, we provide basics on the relationship between local uniform convergence and universal approximation, and on Taylor approximations in terms of Wirtinger derivatives.
## 2. Preliminaries
In this section, we recall facts from complex analysis, functional analysis, and the theory of neural networks behind the phrases in Theorem 1.1. The presentation is loosely based on [25, Chapter 7], [26, Chapter 11], and [11, Section 1].
### Complex analysis
We use the symbols \(\mathbb{N}\), \(\mathbb{R}\), and \(\mathbb{C}\) to denote the natural, real, and complex numbers, respectively. By \(\operatorname{Re}(z)\), \(\operatorname{Im}(z)\), and \(\overline{z}\), we denote the componentwise real part, imaginary part, and complex conjugate of a vector \(z\in\mathbb{C}^{n}\), respectively. We call the function \(\varrho:\mathbb{C}\to\mathbb{C}\)_partially differentiable_ at \(z_{0}\), if the _partial derivatives_

\[\frac{\partial\varrho}{\partial x}(z_{0}):=\lim_{\mathbb{R}\setminus\{0\}\ni h\to 0}\frac{\varrho(z_{0}+h)-\varrho(z_{0})}{h}\quad\text{and}\quad\frac{\partial\varrho}{\partial y}(z_{0}):=\lim_{\mathbb{R}\setminus\{0\}\ni h\to 0}\frac{\varrho(z_{0}+\mathrm{i}h)-\varrho(z_{0})}{h}\]

exist. Higher-order partial derivatives are defined in the standard manner. We write \(\varrho\in C^{k}(\mathbb{C};\mathbb{C})\) if \(\varrho\) admits partial derivatives up to order \(k\) at each point of \(\mathbb{C}\) and the \(k\)th-order partial derivatives are continuous functions \(\mathbb{C}\to\mathbb{C}\). Likewise, we write \(\varrho\in C^{\infty}(\mathbb{C};\mathbb{C})\) if \(\varrho\in C^{k}(\mathbb{C};\mathbb{C})\) for all \(k\in\mathbb{N}\).

Figure 1. Our results in a nutshell.

If \(\frac{\partial\varrho}{\partial x}(z_{0})\) and \(\frac{\partial\varrho}{\partial y}(z_{0})\) exist and the identity
\[\lim_{\mathbb{C}\setminus\{0\}\ni h\to 0}\frac{\varrho(z_{0}+h)-\varrho(z_{0})- \frac{\partial\varrho}{\partial x}(z_{0})\operatorname{Re}(h)-\frac{\partial \varrho}{\partial y}(z_{0})\operatorname{Im}(h)}{h}=0 \tag{2.1}\]
holds true, then \(\varrho\) is called _real differentiable in \(z_{0}\)_ with derivative \((\frac{\partial\varrho}{\partial x}(z_{0}),\frac{\partial\varrho}{\partial y}( z_{0}))\). Similarly, \(\varrho\) is called _complex differentiable in \(z_{0}\)_ if
\[\lim_{\mathbb{C}\setminus\{0\}\ni h\to 0}\frac{\varrho(z_{0}+h)-\varrho(z_{0})- ch}{h}=0\]
for some number \(c\in\mathbb{C}\), which in that case is given by
\[\partial_{\operatorname{wirt}}\varrho(z_{0}):=\frac{1}{2}\bigg{(}\frac{ \partial\varrho}{\partial x}(z_{0})-\mathrm{i}\frac{\partial\varrho}{\partial y }(z_{0})\bigg{)}\,.\]
Complex differentiability of \(\varrho\) in \(z_{0}\) can be equivalently stated as
\[\overline{\partial}_{\operatorname{wirt}}\varrho(z_{0}):=\frac{1}{2}\bigg{(} \frac{\partial\varrho}{\partial x}(z_{0})+\mathrm{i}\frac{\partial\varrho}{ \partial y}(z_{0})\bigg{)}=0.\]
The differential operators \(\partial_{\operatorname{wirt}}\) and \(\overline{\partial}_{\operatorname{wirt}}\) are called _Wirtinger derivatives_. If \(\overline{\partial}_{\operatorname{wirt}}\varrho(z)=0\) for all \(z\in\mathbb{C}\), then \(\varrho\) is a _holomorphic_ function. The function \(\varrho\) is called _antiholomorphic_ if the function \(\overline{\varrho}:\mathbb{C}\to\mathbb{C}\), \(\overline{\varrho}(z):=\operatorname{Re}(\varrho(z))-\mathrm{i}\operatorname {Im}(\varrho(z))\) is holomorphic or, equivalently, \(\partial_{\operatorname{wirt}}\varrho(z)=0\) for all \(z\in\mathbb{C}\). As the linear operator that maps the partial derivatives onto the Wirtinger derivatives is invertible, it follows that \((\frac{\partial\varrho}{\partial x}(z_{0}),\frac{\partial\varrho}{\partial y }(z_{0}))=(0,0)\) if and only if \((\partial_{\operatorname{wirt}}\varrho(z_{0}),\overline{\partial}_{ \operatorname{wirt}}\varrho(z_{0}))=(0,0)\). Furthermore, the symmetry of mixed partial derivatives implies for \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\) that
\[4\partial_{\operatorname{wirt}}\overline{\partial}_{\operatorname{wirt}} \varrho=4\overline{\partial}_{\operatorname{wirt}}\partial_{\operatorname{wirt }}\varrho=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{ \partial y^{2}}\right)\varrho=:\Delta\varrho. \tag{2.2}\]
If \(\varrho\in C^{\infty}(\mathbb{C};\mathbb{C})\) and \(\Delta^{m}\varrho=0\) for some \(m\in\mathbb{N}\), then \(\varrho\) is called _polyharmonic of order \(m\)_. Because of (2.2), holomorphic and antiholomorphic functions are _harmonic_, i.e., polyharmonic of order \(1\).
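The Wirtinger derivatives are easily evaluated numerically from the partial derivatives. A short sketch (ours) with central finite differences, illustrating the three regimes relevant for Theorem 1.1:

```python
import numpy as np

def wirtinger(f, z0, h=1e-6):
    dx = (f(z0 + h) - f(z0 - h)) / (2 * h)            # d/dx
    dy = (f(z0 + 1j * h) - f(z0 - 1j * h)) / (2 * h)  # d/dy
    return 0.5 * (dx - 1j * dy), 0.5 * (dx + 1j * dy)  # (wirt, bar-wirt)

z0 = 0.3 + 0.7j
print(wirtinger(lambda z: z ** 2, z0))       # ~ (2 z0, 0): holomorphic
print(wirtinger(np.conj, z0))                # ~ (0, 1): antiholomorphic
print(wirtinger(lambda z: abs(z) ** 2, z0))  # ~ (conj(z0), z0): neither
```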
The following lemma generalizes the classical real-valued Taylor expansion to the complex-valued setting; see Lemma B.1 for a proof.
**Lemma 2.1**.: _Let \(\varrho\in C(\mathbb{C};\mathbb{C})\) and \(z,z_{0}\in\mathbb{C}\). If \(\varrho\) is real differentiable in \(z_{0}\), then_
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\operatorname{wirt}}\varrho(z_{0})z+ \overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})\overline{z}+\Theta_{1} (z) \tag{2.3}\]
_for a function \(\Theta_{1}:\mathbb{C}\to\mathbb{C}\) with \(\lim_{\mathbb{C}\setminus\{0\}\ni z\to 0}\frac{\Theta_{1}(z)}{z}=0\). If \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\), then_
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\operatorname{wirt}}\varrho(z_{0})z+ \overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})\overline{z}+\frac{1}{2 }\partial_{\operatorname{wirt}}^{2}\varrho(z_{0})z^{2}+\partial_{\operatorname{ wirt}}\overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})z\overline{z}+\frac{1}{2} \overline{\partial}_{\operatorname{wirt}}^{2}\varrho(z_{0})\overline{z}^{2}+ \Theta_{2}(z) \tag{2.4}\]
_for a function \(\Theta_{2}:\mathbb{C}\to\mathbb{C}\) with \(\lim_{\mathbb{C}\setminus\{0\}\ni z\to 0}\frac{\Theta_{2}(z)}{z^{2}}=0\)._
### Neural networks
A (fully connected feed-forward) _complex-valued neural network_ (CVNN) is a function
\[V_{L}\circ\varrho^{\times N_{L-1}}\ldots\circ\varrho^{\times N_{1}}\circ V_{1}: \quad\mathbb{C}^{N_{0}}\to\mathbb{C}^{N_{L}}\]
where
* \(L\in\mathbb{N}_{\geq 2}\) is called the _depth_ of the CVNN,
* \(N_{j}\in\mathbb{N}\) is the _width_ of the \(j\)th layer,
* \(\max\,\{N_{0},\ldots,N_{L}\}\) is the _width_ of the CVNN,
* \(V_{j}:\mathbb{C}^{N_{j-1}}\to\mathbb{C}^{N_{j}}\) is a \(\mathbb{C}\)-affine map, abbreviated as \(V_{j}\in\operatorname{Aff}(\mathbb{C}^{N_{j-1}};\mathbb{C}^{N_{j}})\), i.e., there exist \(A_{j}\in\mathbb{C}^{N_{j}\times N_{j-1}}\) and \(b_{j}\in\mathbb{C}^{N_{j}}\) such that \(V_{j}(z)=A_{j}z+b_{j}\) for all \(z\in\mathbb{C}^{N_{j-1}}\),
* \(\varrho^{\times N_{j}}(z_{1},\ldots,z_{N_{j}})=(\varrho(z_{1}),\ldots,\varrho(z_{ N_{j}}))\) is the componentwise application of a (potentially non-affine) map \(\varrho:\mathbb{C}\to\mathbb{C}\) called the _activation function_.
We refer to the numbers \(N_{0}\) and \(N_{L}\) as the _input dimension_ and _output dimension_, respectively. The layers \(1,\ldots,L-1\) are the _hidden layers_ of the CVNN. Since it is always possible to pad matrices and vectors by additional zero rows and columns, we may and will assume without loss of generality that \(N_{1}=N_{2}=\ldots=N_{L-1}\).
We introduce a short-hand notation for the CVNNs that arise this way.
**Definition 2.2**.: Let \(n,m,W\in\mathbb{N}\) and \(\varrho:\mathbb{C}\to\mathbb{C}\). We denote by \(\mathcal{N}\mathcal{N}_{n,m,W}^{\varrho}\) the set of CVNNs with arbitrary depth \(L\), input dimension \(n\), output dimension \(m\), and \(N_{j}=W\) for \(j\in\{1,\ldots,L-1\}\).
The elements of \(\mathcal{N}\mathcal{N}_{n,m,W}^{\varrho}\) are thus alternating compositions
\[V_{L}\circ\varrho^{\times W}\circ\ldots\circ\varrho^{\times W}\circ V_{1}: \quad\mathbb{C}^{n}\to\mathbb{C}^{m} \tag{2.5}\]
of \(\mathbb{C}\)-affine maps \(V_{j}\) and the componentwise applications of the activation function \(\varrho\). In the subsequent Sections 4 and 5, we have \(W\geq\max\left\{n,m\right\}\), such that \(W\) turns out to be the width of the neural networks under consideration.
A typical way of thinking about neural networks is viewing the component functions of \(\varrho^{\times N_{j}}\circ V_{j}\) as building blocks called _neurons_. Each neuron performs a computation of the form
\[z\mapsto\varrho(w^{\top}z+b)\]
where \(z\) is the output of the previous layer, \(w\) a vector of _weights_, and \(b\) a number called _bias_.
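A minimal NumPy sketch (ours) of the alternating composition (2.5); the concrete activation below is only a placeholder for some continuous \(\varrho:\mathbb{C}\to\mathbb{C}\) as in Theorem 1.1:

```python
import numpy as np

def cvnn(z, layers, rho):
    """layers: list of (A_j, b_j) with complex entries; rho is applied
    componentwise after every affine map except the last one (V_L)."""
    for a, b in layers[:-1]:
        z = rho(a @ z + b)
    a, b = layers[-1]
    return a @ z + b

rng = np.random.default_rng(0)
def cplx(shape):  # random complex matrix/vector
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

n, w, m = 2, 9, 1  # input dimension, hidden width W, output dimension
layers = [(cplx((w, n)), cplx(w)), (cplx((w, w)), cplx(w)), (cplx((m, w)), cplx(m))]
rho = lambda z: np.where(np.abs(z) > 1, z, np.abs(z) * z)  # a non-holomorphic example
print(cvnn(cplx(n), layers, rho))
```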
Since the composition of affine maps is affine, it is also possible to think about neural networks as maps
\[\big{(}\Psi_{L}\circ\varrho^{\times W}\circ\Phi_{L}\big{)}\circ\big{(}\Psi_{L- 1}\circ\varrho^{\times W}\circ\Phi_{L-1}\big{)}\cdots\circ\big{(}\Psi_{1}\circ \varrho^{\times W}\circ\Phi_{1}\big{)},\]
where each of the maps \(\Phi_{k},\Psi_{k}\) is affine. This allows to perceive shallow networks, see Definition 2.3, as building blocks for neural networks. This point of view is similar to the notion of _enhanced neurons_ in [15].
For a fixed activation function \(\varrho\), different choices of the \(\mathbb{C}\)-affine functions \(V_{j}\) may lead to the same composite function (2.5). In view of this, both depth and width of a CVNN are not properties of the function (2.5) but of the tuple \((V_{1},\ldots,V_{L})\). For this reason, a different terminology is sometimes used in the literature where \((V_{1},\ldots,V_{L})\) is called the _neural network_ and (2.5) is its _realization_, cf. [11, Section 1].
Apart from the choice of the activation function \(\varrho\), restrictions on the depth or the width are common ingredients in the analysis of (fully connected feed-forward) neural networks. A CVNN is called _shallow_ if its depth equals \(2\), and _deep_ otherwise.
**Definition 2.3**.: Let \(n,m\in\mathbb{N}\) and \(\varrho:\mathbb{C}\to\mathbb{C}\). We denote by \(\mathcal{N}\mathcal{N}_{n,m}^{\varrho}\) the set of _shallow CVNNs_, i.e., CVNNs with depth \(L=2\), input dimension \(n\), and output dimension \(m\).
In contrast to shallowness, _narrowness_ is not an individual property of CVNNs but a class property. A set \(\mathcal{F}\) of CVNNs is said to be _narrow_ if it does not contain CVNNs of arbitrarily large widths, i.e., if \(\mathcal{F}\subseteq\mathcal{N}\mathcal{N}_{n,m,W}^{\varrho}\) for suitable \(n,m,W\in\mathbb{N}\) and \(\varrho:\mathbb{C}\to\mathbb{C}\).
### Functional analysis
On \(\mathbb{C}^{m}\), consider the topology induced by the Euclidean norm
\[\left\|(z_{1},\ldots,z_{m})\right\|_{\mathbb{C}^{m}}= \left(\sum_{j=1}^{m}\left|z_{j}\right|^{2}\right)^{\frac{1}{2}}.\]
We denote by
\[B_{\delta}(z_{0}):=\{z\in\mathbb{C}^{m}\;:\;\left\|z-z_{0}\right\|_{\mathbb{C} ^{m}}<\delta\}\]
and
\[\overline{B}_{\delta}(z_{0}):=\{z\in\mathbb{C}^{m}\;:\;\left\|z-z_{0}\right\|_ {\mathbb{C}^{m}}\leq\delta\}\]
the open and the closed ball with center \(z_{0}\in\mathbb{C}^{m}\) and radius \(\delta>0\), respectively. For \(K\subseteq\mathbb{C}^{n}\), we denote the vector space of continuous functions \(K\to\mathbb{C}^{m}\) by \(C(K;\mathbb{C}^{m})\). When \(K\subseteq\mathbb{C}^{n}\) is compact, the expression
\[\left\|f\right\|_{C(K;\mathbb{C}^{m})}:=\sup_{z\in K}\left\|f(z)\right\|_{ \mathbb{C}^{m}}\]
defines a norm on \(C(K;\mathbb{C}^{m})\), called the _uniform norm_, which renders \(C(K;\mathbb{C}^{m})\) a Banach space. The convergence of a sequence \((f_{j})_{j\in\mathbb{N}}\) of elements \(f_{j}\in C(K;\mathbb{C}^{m})\) to a limit \(f\in C(K;\mathbb{C}^{m})\) is written as \(f_{j}\to f\)_as \(j\to\infty\), uniformly on \(K\)_. Similarly, a sequence \((f_{j})_{j\in\mathbb{N}}\) of functions \(f_{j}\in C(\mathbb{C}^{n};\mathbb{C}^{m})\) is said to converge _locally uniformly_ to a function \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\) if it converges uniformly to \(f\) on every compact subset \(K\subseteq\mathbb{C}^{n}\). Since compositions of continuous functions are continuous, we have \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,W}\subseteq C(\mathbb{C}^{n};\mathbb{C }^{m})\) when \(\varrho\in C(\mathbb{C};\mathbb{C})\). The main objective of the paper at hand is to show that under certain additional assumptions on \(\varrho\) and \(W\), the elements of \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,W}\) are arbitrarily close to the elements of \(C(\mathbb{C}^{n};\mathbb{C}^{m})\) in the following sense.
**Definition 2.4**.: We say that a function class \(\mathcal{F}\subseteq C(\mathbb{C}^{n};\mathbb{C}^{m})\) has the _universal approximation property_ (or is _universal_) if for every function \(g\in C(\mathbb{C}^{n};\mathbb{C}^{m})\), every compact subset \(K\subseteq\mathbb{C}^{n}\) and every \(\varepsilon>0\) there is a function \(f\in\mathcal{F}\) such that
\[\sup_{z\in K}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}<\varepsilon.\]
The universal approximation property is equivalent to saying that for every function \(g\in C(\mathbb{C}^{n};\mathbb{C}^{m})\) there is a sequence \((f_{j})_{j\in\mathbb{N}}\) with \(f_{j}\in\mathcal{F}\) for every \(j\in\mathbb{N}\) that converges locally uniformly to \(g\). We elaborate this equivalence in Appendix A.
## 3. Building blocks and register model
In this section we introduce various _building blocks_ for complex-valued networks, i.e., small neural network blocks that are able to represent elementary functions (e.g., the complex identity \(\mathrm{id}_{\mathbb{C}}\) or complex conjugation \(\overline{\mathrm{id}_{\mathbb{C}}}\)) up to an arbitrarily small approximation error. These building blocks are used in Sections 4 and 5 to construct the deep narrow networks that we use to approximate a given continuous function. Throughout this section, we assume that the activation function \(\varrho:\mathbb{C}\to\mathbb{C}\) is differentiable (in the real sense) at one point with non-vanishing derivative at that point. The strategy for constructing these building blocks is always similar: using the first- and second-order Taylor expansions of the activation function \(\varrho\) as introduced in Lemma 2.1, one can localize the activation function around its point of differentiability, where it behaves like a complex polynomial in \(z\) and \(\overline{z}\) of degree \(1\) or \(2\), respectively. This enables us to extract elementary functions from that Taylor expansion.
Proposition 3.1 is fundamental for the universality results introduced in Sections 4 and 5. It shows that it is possible to uniformly approximate the complex identity or complex conjugation on compact sets using neural networks with a single hidden layer and width at most \(2\). If the activation function (or its complex conjugate) is not just real but even complex differentiable, the width can be reduced to \(1\). See Figure 2 for an illustration of the building blocks.
**Proposition 3.1**.: _Let \(\varrho\in C(\mathbb{C};\mathbb{C})\) be real differentiable at \(z_{0}\in\mathbb{C}\) with \((\partial_{\mathrm{wirt}}\varrho(z_{0}),\overline{\partial}_{\mathrm{wirt}} \varrho(z_{0}))\neq(0,0)\). Furthermore, let \(K\subseteq\mathbb{C}\) be compact and \(\varepsilon>0\)._
_(i) If_ \(\partial_{\mathrm{wirt}}\varrho(z_{0})\neq 0\) _and_ \(\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})=0\) _there are_ \(\phi,\psi\in\mathrm{Aff}(\mathbb{C};\mathbb{C})\) _such that_
\[\sup_{z\in K}\left|(\psi\circ\varrho\circ\phi)(z)-z\right|<\varepsilon.\]
_(ii) If_ \(\partial_{\mathrm{wirt}}\varrho(z_{0})=0\) _and_ \(\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\) _there are_ \(\phi,\psi\in\mathrm{Aff}(\mathbb{C};\mathbb{C})\) _such that_
\[\sup_{z\in K}\left|(\psi\circ\varrho\circ\phi)(z)-\overline{z}\right|<\varepsilon.\]
_(iii) If_ \(\partial_{\mathrm{wirt}}\varrho(z_{0})\neq 0\neq\overline{\partial}_{\mathrm{wirt}} \varrho(z_{0})\) _there are_ \(\phi\in\mathrm{Aff}(\mathbb{C};\mathbb{C}^{2})\) _and_ \(\psi\in\mathrm{Aff}(\mathbb{C}^{2};\mathbb{C}^{2})\) _such that_
\[\sup_{z\in K}\left\|(\psi\circ\varrho^{\times 2}\circ\phi)(z)-(z,\overline{z}) \right\|_{\mathbb{C}^{2}}<\varepsilon\]
Proof.: The key idea of the proof is to use the affine maps to localize the activation function around its point of differentiability, where it behaves like its derivative, i.e., like a linear map in \(z\) and \(\bar{z}\), and then to invoke Lemma 2.1. Recall that Lemma 2.1 yields the existence of a function \(\Theta:\mathbb{C}\to\mathbb{C}\) satisfying \(\lim\limits_{z\to 0}\frac{\Theta(z)}{z}=0\) and
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\text{wirt}}\varrho(z_{0})z+ \overline{\partial}_{\text{wirt}}\varrho(z_{0})\overline{z}+\Theta(z)\]
for every \(z\in\mathbb{C}\).
If \(\overline{\partial}_{\text{wirt}}\varrho(z_{0})=0\) and \(\partial_{\text{wirt}}\varrho(z_{0})\neq 0\) we see for every \(h>0\) that
\[\frac{\varrho(z_{0}+hz)-\varrho(z_{0})}{\partial_{\text{wirt}}\varrho(z_{0})h }=z+\frac{\Theta(hz)}{\partial_{\text{wirt}}\varrho(z_{0})h}\quad\text{for all }z\in K. \tag{3.1}\]
Similarly, if \(\overline{\partial}_{\text{wirt}}\varrho(z_{0})\neq 0\) and \(\partial_{\text{wirt}}\varrho(z_{0})=0\) we get
\[\frac{\varrho(z_{0}+hz)-\varrho(z_{0})}{\overline{\partial}_{\text{wirt}} \varrho(z_{0})h}=\overline{z}+\frac{\Theta(hz)}{\overline{\partial}_{\text{ wirt}}\varrho(z_{0})h}\quad\text{for all }z\in K. \tag{3.2}\]
If \(\partial_{\text{wirt}}\varrho(z_{0})\neq 0\neq\overline{\partial}_{\text{wirt}} \varrho(z_{0})\) consider
\[\frac{\mathrm{i}\varrho(z_{0}+hz)+\varrho(z_{0}+\mathrm{i}hz)-(1+\mathrm{i}) \varrho(z_{0})}{2\mathrm{i}h\partial_{\text{wirt}}\varrho(z_{0})}=z+\frac{ \mathrm{i}\Theta(hz)+\Theta(\mathrm{i}hz)}{2\mathrm{i}h\partial_{\text{wirt}} \varrho(z_{0})} \tag{3.3}\]
as well as
\[\frac{-\mathrm{i}\varrho(z_{0}+hz)+\varrho(z_{0}+\mathrm{i}hz)-(1-\mathrm{i}) \varrho(z_{0})}{-2\mathrm{i}h\overline{\partial}_{\text{wirt}}\varrho(z_{0})} =\overline{z}+\frac{-\mathrm{i}\Theta(hz)+\Theta(\mathrm{i}hz)}{-2\mathrm{i} h\overline{\partial}_{\text{wirt}}\varrho(z_{0})}. \tag{3.4}\]
Since \(K\) is compact there exists \(L>0\) satisfying \(|z|\leq L\) for all \(z\in K\). Let \(\varepsilon^{\prime}>0\) be arbitrary and take \(\delta>0\) such that
\[\left|\frac{\Theta(w)}{w}\right|<\frac{\varepsilon^{\prime}}{L}\]
for every \(w\in\mathbb{C}\setminus\{0\}\) with \(|w|<\delta\). Let \(h\in(0,\delta/L)\). Since \(|hz|<\delta\) for every \(z\in K\), we see for every \(z\in K\setminus\{0\}\) that
\[\left|\frac{\Theta(hz)}{h}\right|\leq L\cdot\left|\frac{\Theta(hz)}{hz}\right| <\varepsilon^{\prime}\]
and since \(\varepsilon^{\prime}\) has been taken arbitrarily
\[\lim\limits_{h\downarrow 0}\sup\limits_{z\in K}\left|\frac{\Theta(hz)}{h}\right|=0.\]
Here, we used \(\Theta(0)=0\). Thus, Equations (3.1) to (3.4) yield the claim by taking \(h\) sufficiently small.
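The localization argument can be tested numerically. The sketch below is our own illustration (not from the paper): we take the non-holomorphic activation \(\varrho(z)=\tanh(\operatorname{Re}z)+\mathrm{i}\tanh(\operatorname{Im}z)\), whose Wirtinger derivatives at \(z_{0}=1\) are both non-zero, evaluate the left-hand side of (3.3) on a circle of radius \(1/2\), and observe that the uniform distance to the identity shrinks as \(h\downarrow 0\).

```python
import numpy as np

rho = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)   # real differentiable, non-holomorphic
z0 = 1.0 + 0.0j
sech2 = lambda t: 1.0 / np.cosh(t) ** 2
dwirt = (sech2(z0.real) + sech2(z0.imag)) / 2            # d_wirt rho(z0), non-zero

K = 0.5 * np.exp(2j * np.pi * np.linspace(0.0, 1.0, 256))   # compact set: circle of radius 1/2
for h in (1e-1, 1e-2, 1e-3):
    # left-hand side of (3.3): a width-2 building block approximating id_C
    lhs = (1j * rho(z0 + h * K) + rho(z0 + 1j * h * K) - (1 + 1j) * rho(z0)) / (2j * h * dwirt)
    print(f"h = {h:.0e}:  sup-error = {np.max(np.abs(lhs - K)):.2e}")
```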
Proposition 3.2 is important for the case of polyharmonic activation functions, which is considered in Section 5. It essentially states that, given an activation function which is not \(\mathbb{R}\)-affine, one can approximate one of the functions \(z\mapsto z\overline{z},\;\;z\mapsto z^{2}\) or \(z\mapsto\overline{z}^{2}\) by using a shallow neural network of width \(4\), see Figure 3 for an illustration.
Figure 2. Illustration of the neural network building blocks from Proposition 3.1. Neurons in the input and output layers are depicted in filled dots at the top and bottom, respectively. Applications of the activation function \(\varrho\) are shown as circles.
**Proposition 3.2**.: _Let \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\) be not \(\mathbb{R}\)-affine, let \(K\subseteq\mathbb{C}\) compact and \(\varepsilon>0\). Then there exist \(\phi\in\mathrm{Aff}(\mathbb{C};\mathbb{C}^{4}),\psi\in\mathrm{Aff}(\mathbb{C}^{ 4};\mathbb{C})\) such that at least one of the three inequalities_
\[\sup_{z\in K}\big{|}(\psi\circ\varrho^{\times 4}\circ\phi)(z)-z \overline{z}\big{|}<\varepsilon,\] \[\sup_{z\in K}\big{|}(\psi\circ\varrho^{\times 4}\circ\phi)(z)-z^{2} \big{|}<\varepsilon,\] \[\sup_{z\in K}\big{|}(\psi\circ\varrho^{\times 4}\circ\phi)(z)- \overline{z}^{2}\big{|}<\varepsilon\]
_holds true._
Proof.: Since \(\varrho\) is not \(\mathbb{R}\)-affine, there exists \(z_{0}\in\mathbb{C}\) such that \(\partial^{2}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\), \(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\), or \(\overline{\partial}^{2}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\), see, e.g., Proposition B.2. Using the second-order Taylor expansion stated in Lemma 2.1, there is a function \(\Theta:\mathbb{C}\to\mathbb{C}\) satisfying \(\lim\limits_{z\to 0}\frac{\Theta(z)}{z^{2}}=0\) and
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\mathrm{wirt}}\varrho(z_{0})z+ \overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\overline{z}+\frac{1}{2} \partial^{2}_{\mathrm{wirt}}\varrho(z_{0})z^{2}+\partial_{\mathrm{wirt}} \overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})z\overline{z}+\frac{1}{2} \overline{\partial}^{2}_{\mathrm{wirt}}\varrho(z_{0})\overline{z}^{2}+\Theta(z)\]
for every \(z\in\mathbb{C}\). Applying this identity to \(z=w\) and \(z=-w\) and adding up, we infer for any \(w\in\mathbb{C}\) that
\[\varrho(z_{0}+w)+\varrho(z_{0}-w)=2\varrho(z_{0})+\partial^{2}_{\mathrm{wirt }}\varrho(z_{0})w^{2}+2\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{ wirt}}\varrho(z_{0})w\overline{w}+\overline{\partial}^{2}_{\mathrm{wirt}} \varrho(z_{0})\overline{w}^{2}+\Theta(w)+\Theta(-w).\]
Let \(h>0\) and \(z\in K\). If \(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\), we see with \(w=hz\) and \(w=\mathrm{i}hz\) that
\[\frac{\varrho(z_{0}+hz)+\varrho(z_{0}-hz)+\varrho(z_{0}+\mathrm{ i}hz)+\varrho(z_{0}-\mathrm{i}hz)-4\varrho(z_{0})}{4h^{2}\partial_{\mathrm{wirt}} \overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})}\] \[=z\overline{z}+\frac{\Theta(hz)+\Theta(-hz)+\Theta(\mathrm{i}hz)+ \Theta(-\mathrm{i}hz)}{4h^{2}\partial_{\mathrm{wirt}}\overline{\partial}_{ \mathrm{wirt}}\varrho(z_{0})}. \tag{3.5}\]
If \(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})=0\) and \(\partial^{2}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\), consider \(w=hz\) and \(w=\sqrt{\mathrm{i}}hz\), where \(\sqrt{\mathrm{i}}\) is a fixed square root of \(\mathrm{i}\):
\[\frac{\varrho(z_{0}+hz)+\varrho(z_{0}-hz)-\mathrm{i}\varrho(z_{0 }+\sqrt{\mathrm{i}}hz)-\mathrm{i}\varrho(z_{0}-\sqrt{\mathrm{i}}hz)+2(-1+ \mathrm{i})\varrho(z_{0})}{2h^{2}\partial^{2}_{\mathrm{wirt}}\varrho(z_{0})}\] \[=z^{2}+\frac{\Theta(hz)+\Theta(-hz)-\mathrm{i}\Theta(\sqrt{ \mathrm{i}}hz)-\mathrm{i}\Theta(-\sqrt{\mathrm{i}}hz)}{2h^{2}\partial^{2}_{ \mathrm{wirt}}\varrho(z_{0})}. \tag{3.6}\]
Last, if \(\partial^{2}_{\mathrm{wirt}}\varrho(z_{0})=\partial_{\mathrm{wirt}}\overline{ \partial}_{\mathrm{wirt}}\varrho(z_{0})=0\) consider \(w=hz\):
\[\frac{\varrho(z_{0}+hz)+\varrho(z_{0}-hz)-2\varrho(z_{0})}{h^{2}\overline{ \partial}^{2}_{\mathrm{wirt}}\varrho(z_{0})}=\overline{z}^{2}+\frac{\Theta(hz) +\Theta(-hz)}{h^{2}\overline{\partial}^{2}_{\mathrm{wirt}}\varrho(z_{0})}. \tag{3.7}\]
Since \(K\) is bounded, there is \(L>0\) with \(|z|\leq L\) for every \(z\in K\). For given \(\varepsilon^{\prime}>0\) there exists \(\delta>0\) such that
\[\left|\frac{\Theta(w)}{w^{2}}\right|<\frac{\varepsilon^{\prime}}{L^{2}}\]
for every \(w\in\mathbb{C}\setminus\{0\}\) with \(|w|<\delta\). Hence, we see for every \(h\in(0,\delta/L)\) and all \(z\in K\setminus\{0\}\) that
\[\left|\frac{\Theta(hz)}{h^{2}}\right|\leq L^{2}\cdot\left|\frac{\Theta(hz)}{(hz )^{2}}\right|<\varepsilon^{\prime}\]
where we used that \(|hz|<\delta\). Therefore, we conclude
\[\lim_{h\downarrow 0}\sup_{z\in K}\left|\frac{\Theta(hz)}{h^{2}}\right|=0,\]
using \(\Theta(0)=0\). The claim then follows from Equations (3.5) to (3.7).
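This second-order construction can be checked numerically as well. In the sketch below (our own illustration), we use \(\varrho(z)=\tanh(|z|^{2})\), which is \(C^{2}\), real-valued, not \(\mathbb{R}\)-affine, and satisfies \(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}\varrho(0)=1\) as well as \(\partial_{\mathrm{wirt}}^{2}\varrho(0)=\overline{\partial}_{\mathrm{wirt}}^{2}\varrho(0)=0\); the left-hand side of (3.5) then converges uniformly on a square to \(z\overline{z}\).

```python
import numpy as np

rho = lambda z: np.tanh(np.abs(z) ** 2)   # C^2, real-valued, not R-affine
z0 = 0.0 + 0.0j
mixed = 1.0                               # d_wirt dbar_wirt rho(0) = tanh'(0) = 1

t = np.linspace(-1.0, 1.0, 41)
K = t[:, None] + 1j * t[None, :]          # compact set: the square [-1,1] + i[-1,1]
for h in (1e-1, 1e-2, 1e-3):
    num = rho(z0 + h*K) + rho(z0 - h*K) + rho(z0 + 1j*h*K) + rho(z0 - 1j*h*K) - 4*rho(z0)
    lhs = num / (4 * h**2 * mixed)        # left-hand side of (3.5)
    print(f"h = {h:.0e}:  sup-error = {np.max(np.abs(lhs - K*np.conj(K))):.2e}")
```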
In order to approximate arbitrary polynomials in the variables \(z_{1},\ldots,z_{n}\) and \(\overline{z_{1}},\ldots,\overline{z_{n}}\), we will compute iterated products of two complex numbers in Theorem 5.3. The following result enables the approximation of such products. An illustration of the CVNN blocks appearing in the proof is given in Figure 4.
**Proposition 3.3**.: _Let \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\) be not \(\mathbb{R}\)-affine, \(K\subseteq\mathbb{C}^{2}\) compact, and \(\varepsilon>0\). Then there exist \(\phi\in\mathrm{Aff}(\mathbb{C}^{2};\mathbb{C}^{12})\) and \(\psi\in\mathrm{Aff}(\mathbb{C}^{12};\mathbb{C})\) such that at least one of the three inequalities_
\[\sup_{(z,w)\in K}\left|(\psi\circ\varrho^{\times 12}\circ\phi)(z,w)- zw\right|<\varepsilon,\] \[\sup_{(z,w)\in K}\left|(\psi\circ\varrho^{\times 12}\circ\phi)(z,w)- z\overline{w}\right|<\varepsilon,\] \[\sup_{(z,w)\in K}\left|(\psi\circ\varrho^{\times 12}\circ\phi)(z,w)- \overline{zw}\right|<\varepsilon\]
_holds true._
Proof.: The main steps of the proof are to use a variant of the polarization identity to reconstruct the three multiplication operators from their values on the diagonal, and then to apply Proposition 3.2 to approximate the latter.
Precisely, the construction is as follows: If \(\zeta\mapsto\zeta\overline{\zeta}=\left|\zeta\right|^{2}\) can be approximated according to the first case of Proposition 3.2, use the identity
\[\left(\frac{1}{4}+\frac{\mathrm{i}}{4}\right)\left|z+w\right|^{2}+\left(- \frac{1}{4}+\frac{\mathrm{i}}{4}\right)\left|z-w\right|^{2}-\frac{\mathrm{i}}{ 2}\left|z-\mathrm{i}w\right|^{2}=z\overline{w}.\]
In order to approximate \((z,w)\mapsto z\overline{w}\), one needs \(4\) hidden neurons to approximate \(\zeta\mapsto\left|\zeta\right|^{2}\) for each of the \(3\) linear combinations of \(z\) and \(w\), resulting in a total of \(12\) hidden neurons.
If we have the second case of Proposition 3.2, we can approximate \(\zeta\mapsto\zeta^{2}\) using \(4\) hidden neurons. In this case, consider
\[\frac{1}{4}\left[(z+w)^{2}-(z-w)^{2}\right]=zw,\]
so that in total one needs \(8\) neurons in order to approximate \((z,w)\mapsto zw\). In the last case of Proposition 3.2 we can approximate \(\zeta\mapsto\overline{\zeta}^{2}\) using \(4\) hidden neurons. Considering
\[\frac{1}{4}\left[\overline{(z+w)}^{2}-\overline{(z-w)}^{2}\right]=\overline{ zw},\]
we infer that \((z,w)\mapsto\overline{zw}\) can be approximated using \(8\) hidden neurons.
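The three polarization identities used above are elementary to verify directly; the following short sketch (our own check, not part of the proof) confirms them on random complex samples, with the deviations vanishing up to rounding.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
w = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)

# z * conj(w) from |.|^2 evaluated at three linear combinations of z and w
lhs = ((0.25 + 0.25j) * np.abs(z + w)**2
       + (-0.25 + 0.25j) * np.abs(z - w)**2
       - 0.5j * np.abs(z - 1j*w)**2)
print(np.max(np.abs(lhs - z * np.conj(w))))      # ~0 up to rounding

# z*w from squares, and conj(z*w) from conjugated squares, of z + w and z - w
print(np.max(np.abs(0.25*((z + w)**2 - (z - w)**2) - z*w)))
print(np.max(np.abs(0.25*(np.conj(z + w)**2 - np.conj(z - w)**2) - np.conj(z*w))))
```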
Figure 3. Illustration of the neural network building block from Proposition 3.2. Neurons in the input and output layers are depicted in filled dots at the top and bottom, respectively. Applications of the activation function \(\varrho\) are shown as circles.
At the end of this section we want to introduce the fundamental concept of the _register model_. This construction has been heavily used in [15] to prove the real-valued counterpart of the theorem established in the present paper. The key idea is to transform a shallow neural network into a deep narrow network by "flipping" the shallow network and only performing one computation per layer. This is formalized in Proposition 3.4 and illustrated in Figure 5.
**Proposition 3.4**.: _Let \(n,m\in\mathbb{N}\) and \(\varrho:\mathbb{C}\to\mathbb{C}\). Denote by \(\mathcal{I}_{n,m,n+m+1}^{\varrho}\) the set of register models_
\[T_{L}\circ\tilde{\varrho}\circ\ldots\circ\tilde{\varrho}\circ T_{1},\]
_where \(T_{1}\in\operatorname{Aff}(\mathbb{C}^{n};\mathbb{C}^{n+m+1})\), \(T_{L}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{m})\), \(T_{\ell}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{n+m+1})\) for \(\ell\in\{2,\ldots,L-1\}\), and the map \(\tilde{\varrho}\) acts on \(\mathbb{C}^{n+m+1}\) via_
\[\tilde{\varrho}:\mathbb{C}^{n+m+1}\to\mathbb{C}^{n+m+1},\;\;\;(\tilde{\varrho} (z_{1},\ldots,z_{n+m+1}))_{j}=\begin{cases}\varrho(z_{n+1}),&j=n+1,\\ z_{j},&j\neq n+1.\end{cases}\]
_Then \(\mathcal{S}\mathcal{N}_{n,m}^{\varrho}\subseteq\mathcal{I}_{n,m,n+m+1}^{\varrho}\)._
Proof.: Let \(f=V_{2}\circ\varrho^{\times W}\circ V_{1}\) with \(W\in\mathbb{N}\), \(V_{1}\in\operatorname{Aff}(\mathbb{C}^{n};\mathbb{C}^{W})\), and \(V_{2}\in\operatorname{Aff}(\mathbb{C}^{W};\mathbb{C}^{m})\). For \(j\in\{1,\ldots,m\}\), the \(j\)th component function \(f_{j}\) of \(f\) can be written as
\[f_{j}(z)= \left(\sum_{k=1}^{W}c_{k,j}\varrho(a_{k}^{\top}z+b_{k})\right) +d_{j}\]
with suitably chosen \(a_{k}\in\mathbb{C}^{n}\) and \(c_{k,j},b_{k},d_{j}\in\mathbb{C}\). We define
\[T_{1}:\mathbb{C}^{n}\to\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m},\;\; \;\;T_{1}(z)=(z,a_{1}^{\top}z+b_{1},0).\]
For \(\ell\in\{2,\ldots,W\}\), we define
\[T_{\ell}:\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m}\to\mathbb{C}^{n} \times\mathbb{C}\times\mathbb{C}^{m},\;\;\;T_{\ell}(z,u,(w_{1},\ldots,w_{m}))= \begin{pmatrix}z\\ a_{\ell}^{\top}z+b_{\ell}\\ w_{1}+c_{\ell-1,1}u\\ w_{2}+c_{\ell-1,2}u\\ \vdots\\ w_{m}+c_{\ell-1,m}u\end{pmatrix}.\]
Figure 4. Illustration of the neural network building block from Proposition 3.3. Neurons in the input and output layers are depicted in filled dots at the top and bottom, respectively. Applications of the activation function \(\varrho\) are shown as circles. From the input values \(z\) and \(w\), three or two linear combinations are computed. Then the building block from Figure 3 is inserted to approximate \(z\mapsto z^{2}\), \(z\mapsto\overline{z}^{2}\), or \(z\mapsto z\overline{z}\). The results are again combined linearly.
Last, we set
\[T_{W+1}:\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m}\to\mathbb{C}^{m}, \quad T_{W+1}(z,u,(w_{1},\ldots,w_{m}))=\begin{pmatrix}w_{1}+c_{W,1}u+d_{1}\\ w_{2}+c_{W,2}u+d_{2}\\ \vdots\\ w_{m}+c_{W,m}u+d_{m}\end{pmatrix}.\]
Then we have
\[f=T_{W+1}\circ\tilde{\varrho}\circ\ldots\circ\tilde{\varrho}\circ T_{1},\]
which clearly yields \(f\in\mathcal{I}^{\varrho}_{n,m,n+m+1}\).
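The "flipping" of a shallow network into a register model is mechanical and can be made concrete in code. The following sketch is our own illustration (with an arbitrary activation): the register model evaluates the same function as the shallow network, but performs only one application of \(\varrho\) per layer while the registers carry the input and the partial sums.

```python
import numpy as np

def shallow(z, A1, b1, A2, b2, rho):
    """f = V2 o rho^{xW} o V1, a shallow CVNN of width W."""
    return A2 @ rho(A1 @ z + b1) + b2

def register_model(z, A1, b1, A2, b2, rho):
    """Prop. 3.4: the same function, evaluated with width n+m+1 and
    a single rho-computation per layer."""
    W = A1.shape[0]
    m = A2.shape[0]
    u = A1[0] @ z + b1[0]                   # T_1: load the first pre-activation
    out = np.zeros(m, dtype=complex)        # out-register neurons
    for l in range(1, W):                   # T_2, ..., T_W
        u_act = rho(u)                      # the single rho-neuron of this layer
        out = out + A2[:, l - 1] * u_act    # accumulate c_{l-1,j} * rho(...)
        u = A1[l] @ z + b1[l]               # load the next pre-activation
    return out + A2[:, W - 1] * rho(u) + b2 # T_{W+1}

rng = np.random.default_rng(2)
n, m, W = 3, 2, 7
cplx = lambda *shape: rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
A1, b1, A2, b2 = cplx(W, n), cplx(W), cplx(m, W), cplx(m)
rho = lambda u: u * np.abs(u)
z = cplx(n)
# both evaluations agree up to rounding
print(np.max(np.abs(shallow(z, A1, b1, A2, b2, rho) - register_model(z, A1, b1, A2, b2, rho))))
```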
## 4. The non-polyharmonic case
When the activation function \(\varrho\) is not polyharmonic, the universal approximation theorem for shallow CVNNs from [28, Theorem 1.3] is applicable. For convenience, we state the following special case relevant for our investigations.
**Theorem 4.1**.: _Let \(n\in\mathbb{N}\) and \(\varrho\in C(\mathbb{C};\mathbb{C})\). Then \(\mathcal{SN}^{\varrho}_{n,1}\) is universal if and only if \(\varrho\) is not polyharmonic._
In conjunction with suitable register models, it is possible to construct deep narrow networks to approximate a given continuous function. To do so, it is necessary to approximate the identity function. By assumption, the activation function \(\varrho\) is real differentiable at some point \(z_{0}\in\mathbb{C}\) with non-zero derivative. We consider three different cases. Firstly, if \(\varrho\) is even _complex_ differentiable at \(z_{0}\), Proposition 3.1 yields an approximation of \(\mathrm{id}_{\mathbb{C}}\) using shallow CVNNs with activation function \(\varrho\) and a width of \(1\). In that case, the register model can be used where the identity connections have to be replaced by sufficiently good approximations, resulting in a sufficient width of \(n+m+1\). Secondly, if \(\overline{\varrho}\) is complex differentiable at \(z_{0}\), we combine the fact that \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\overline{\varrho}}\) is universal and that \(\overline{\operatorname{id}_{\mathbb{C}}}\) can be approximated according to Proposition 3.1 to deduce the universality of \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\varrho}\). Lastly, if neither of the two former cases occurs, we show that a width of \(2n+2m+1\) is sufficient for CVNNs with activation function \(\varrho\) to be universal, by combining the register model with the third building block described in Proposition 3.1.

Figure 5. Illustration of the register model from Proposition 3.4. Neurons where the complex identity is used as activation function are visualized as squares, whereas neurons using \(\varrho\) as activation function are visualized using circles. The in-register neurons (squares on the left) store the input values and pass them to the computation neurons (middle circles). The results of the computations are added up and stored in the out-register neurons (squares on the right). The dashed box highlights one of the blocks that are later replaced using approximations of the complex identity.
**Theorem 4.2**.: _Let \(n,m\in\mathbb{N}\). Assume that \(\varrho\in C(\mathbb{C};\mathbb{C})\) is not polyharmonic and that there exists \(z_{0}\in\mathbb{C}\) such that \(\varrho\) is real differentiable at \(z_{0}\) with non-zero derivative._
_(i) If_ \(\partial_{\operatorname{wirt}}\varrho(z_{0})\neq 0\) _and_ \(\overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})=0\)_, then the set_ \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\varrho}\) _is universal._
_(ii) If_ \(\partial_{\operatorname{wirt}}\varrho(z_{0})=0\) _and_ \(\overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})\neq 0\)_, then the set_ \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\varrho}\) _is universal._

_(iii) If_ \(\partial_{\operatorname{wirt}}\varrho(z_{0})\neq 0\) _and_ \(\overline{\partial}_{\operatorname{wirt}}\varrho(z_{0})\neq 0\)_, then the set_ \(\mathcal{N}\mathcal{N}_{n,m,2n+2m+1}^{\varrho}\) _is universal._
Proof.: Let \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\), \(K\subseteq\mathbb{C}^{n}\) a compact subset, and \(\varepsilon>0\).
Using Theorem 4.1, Proposition 3.4, and the notation introduced therein, we infer that there is a function \(g\in\mathcal{I}_{n,m,n+m+1}^{\varrho}\) with
\[\sup_{z\in K}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}<\frac{\varepsilon}{2}.\]
This means that there exist \(T_{1}\in\operatorname{Aff}(\mathbb{C}^{n};\mathbb{C}^{n+m+1})\), \(T_{L}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{m})\), and \(T_{\ell}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{n+m+1})\) for \(\ell\in\{2,\ldots,L-1\}\) such that
\[g=T_{L}\circ\tilde{\varrho}\circ T_{L-1}\circ\ldots\circ\tilde{\varrho}\circ T _{1},\]
where \(\tilde{\varrho}:\mathbb{C}^{n+m+1}\to\mathbb{C}^{n+m+1}\) is given by
\[\left(\tilde{\varrho}(z_{1},\ldots,z_{n+m+1})\right)_{j}=\begin{cases}z_{j},& j\in\{1,\ldots,n+m\}\,,\\ \varrho(z_{n+m+1}),&j=n+m+1.\end{cases}\]
We first prove (i). Combining Proposition 3.1(i), Proposition A.2, and Proposition A.5, we deduce the existence of sequences \((\Psi_{j})_{j\in\mathbb{N}}\) and \((\Phi_{j})_{j\in\mathbb{N}}\) with \(\Psi_{j},\Phi_{j}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{n+m+1})\) for \(j\in\mathbb{N}\), satisfying
\[\Psi_{j}\circ\varrho^{\times(n+m+1)}\circ\Phi_{j}\xrightarrow{j\to\infty} \tilde{\varrho}\]
locally uniformly. Proposition A.5 now implies that the sequence \((g_{j})_{j\in\mathbb{N}}\) given by
\[g_{j}=T_{L}\circ\Bigl{(}\Psi_{j}\circ\varrho^{\times(n+m+1)}\circ\Phi_{j} \Bigr{)}\circ T_{L-1}\circ\ldots\circ\Bigl{(}\Psi_{j}\circ\varrho^{\times(n+ m+1)}\circ\Phi_{j}\Bigr{)}\circ T_{1}\]
converges locally uniformly to \(g\) as \(j\to\infty\). Moreover, since the composition of affine maps is affine, we have \(g_{j}\in\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\varrho}\) for all \(j\in\mathbb{N}\). Claim (i) now follows from another application of Proposition A.2.
We now deal with (ii). By the fundamental properties of Wirtinger derivatives (see for instance [14, E. 1a]) we compute
\[\partial_{\operatorname{wirt}}\overline{\varrho}(z_{0})=\overline{\overline{ \partial}_{\operatorname{wirt}}\varrho(z_{0})}\neq 0\quad\text{and}\quad\overline{ \partial}_{\operatorname{wirt}}\overline{\varrho}(z_{0})=\overline{\partial _{\operatorname{wirt}}\varrho(z_{0})}=0.\]
It is immediate to see that \(\varrho\in C^{\infty}(\mathbb{C};\mathbb{C})\) if and only if \(\overline{\varrho}\in C^{\infty}(\mathbb{C};\mathbb{C})\). In that case, it holds \(\Delta^{m}\overline{\varrho}=\overline{\Delta^{m}\varrho}\) for any \(m\in\mathbb{N}_{0}\). These two properties imply that \(\overline{\varrho}\) is non-polyharmonic. From (i) we deduce that the set \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\overline{\varrho}}\) is universal. We claim that the set \(\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\varrho}\) is universal, too. Indeed, let \(f\in\mathcal{N}\mathcal{N}_{n,m,n+m+1}^{\overline{\varrho}}\) be arbitrary and consider the decomposition
\[f=V_{L}\circ\overline{\varrho}^{\times(n+m+1)}\circ V_{L-1}\circ\ldots\circ \overline{\varrho}^{\times(n+m+1)}\circ V_{1}\]
with \(V_{1}\in\operatorname{Aff}(\mathbb{C}^{n};\mathbb{C}^{n+m+1})\), \(V_{\ell}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{n+m+1})\), and \(V_{L}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{m})\) for every number \(\ell\in\{2,\ldots,L-1\}\). From Proposition 3.1(ii) we infer the existence of sequences \((\Phi_{j})_{j\in\mathbb{N}}\) and \((\Psi_{j})_{j\in\mathbb{N}}\) with \(\Phi_{j},\Psi_{j}\in\operatorname{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{n+m+1})\) for \(j\in\mathbb{N}\) such that
\[\Psi_{j}\circ\varrho^{\times(n+m+1)}\circ\Phi_{j}\xrightarrow{j\to\infty} \overline{\operatorname{id}_{\mathbb{C}}}^{\times(n+m+1)}\]
locally uniformly. But then we see
\[V_{L}\circ\Bigl{(}\Psi_{j}\circ\varrho^{\times(n+m+1)}\circ\Phi_{j}\Bigr{)}\circ \varrho^{\times(n+m+1)}\circ V_{L-1}\circ\ldots\circ\Bigl{(}\Psi_{j}\circ \varrho^{\times(n+m+1)}\circ\Phi_{j}\Bigr{)}\circ\varrho^{\times(n+m+1)}\circ V _{1}\xrightarrow{j\to\infty}f\]
locally uniformly, where the left-hand side is an element of \(\mathcal{NN}^{\varrho}_{n,m,n+m+1}\) for every \(j\in\mathbb{N}\). Here we applied Proposition A.5. This shows that every element of \(\mathcal{NN}^{\overline{\varrho}}_{n,m,n+m+1}\) can be locally uniformly approximated by elements of \(\mathcal{NN}^{\varrho}_{n,m,n+m+1}\), which in turn yields the universality of \(\mathcal{NN}^{\varrho}_{n,m,n+m+1}\), cf. Proposition A.2.
The proof of (iii) is analogous. Gluing together one copy of the complex identity \(\mathrm{id}_{\mathbb{C}}\) and \(n+m\) copies of the function \(\phi\) or (the projection onto the first component of) \(\psi\) of Proposition 3.1(iii), respectively, we construct sequences \((\Psi_{j})_{j\in\mathbb{N}},(\Phi_{j})_{j\in\mathbb{N}}\) satisfying \(\Psi_{j}\in\mathrm{Aff}(\mathbb{C}^{2n+2m+1};\mathbb{C}^{n+m+1})\) and \(\Phi_{j}\in\mathrm{Aff}(\mathbb{C}^{n+m+1};\mathbb{C}^{2n+2m+1})\) for \(j\in\mathbb{N}\) such that
\[\Psi_{j}\circ\varrho^{\times(2n+2m+1)}\circ\Phi_{j}\xrightarrow{j\to\infty} \tilde{\varrho}\]
locally uniformly. Because the building blocks consist of \(2\) neurons in this case, the resulting approximating neural network has width \(2n+2m+1\) instead of \(n+m+1\). An illustration of the replacement of the in-register and out-register neurons of the register model from Proposition 3.4 using a building block from Proposition 3.1 is given in Figure 6.
Next we provide two examples of activation functions which are used in practice and to which Theorem 4.2 applies.
**Example 4.3**.: The modReLU function has for example been proposed in [2] as a generalization of the classical ReLU to the complex plane. For a parameter \(b<0\) it is defined as
\[\mathrm{modReLU}_{b}:\mathbb{C}\to\mathbb{C},\quad\mathrm{modReLU}_{b}(z):= \begin{cases}(|z|+b)\frac{z}{|z|},&|z|+b\geq 0,\\ 0,&\text{otherwise}.\end{cases}\]
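A direct implementation (a minimal sketch of our own, not from [2]) reads as follows; note the radial thresholding, which preserves the phase of \(z\).

```python
import numpy as np

def modrelu(z, b):
    """modReLU_b(z) = (|z| + b) * z/|z| if |z| + b >= 0, else 0  (parameter b < 0)."""
    r = np.abs(z)
    safe_r = np.where(r == 0, 1.0, r)        # avoid 0/0; that branch returns 0 anyway
    return np.where(r + b >= 0, (r + b) * z / safe_r, 0.0 + 0.0j)

z = np.array([2.0 + 0.0j, 0.3 + 0.4j, 1.0j, 0.0j])
print(modrelu(z, b=-0.6))                    # [1.4, 0, 0.4j, 0]
```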
An application of Theorem 4.2(iii) shows that for \(n,m\in\mathbb{N}\), and \(b<0\), the set \(\mathcal{NN}^{\mathrm{modReLU}_{b}}_{n,m,2n+2m+1}\) is universal.
To this end, let us verify the assumptions of Theorem 4.2 in detail. Since the continuity of \(\mathrm{modReLU}_{b}\) is immediate for \(z\in\mathbb{C}\) with \(|z|\neq-b\), it remains to check the case \(|z|=-b\). Take any sequence \((z_{j})_{j\in\mathbb{N}}\) with \(z_{j}\to z\) as \(j\to\infty\), where we assume without loss of generality \(|z_{j}|\geq-b\) for every \(j\in\mathbb{N}\). Then \(|\mathrm{modReLU}_{b}(z_{j})|=|z_{j}|+b\to|z|+b=0\) as \(j\to\infty\). This shows the continuity of \(\mathrm{modReLU}_{b}\).
In [10, Corollary 5.4] it is shown that for all \(z\in\mathbb{C}\) with \(|z|>-b\) and all \(k,\ell\in\mathbb{N}_{0}\) one has
\[\partial_{\mathrm{wirt}}^{k}\overline{\partial}_{\mathrm{wirt}}^{\ell}\, \mathrm{modReLU}_{b}(z)\neq 0.\]
This implies that \(\mathrm{modReLU}_{b}\) is non-polyharmonic and \(\partial_{\mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)\neq 0\neq\overline{\partial}_{ \mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)\) for all \(z\in\mathbb{C}\) with \(|z|>-b\).
Further note that Theorem 4.2 cannot be used to reduce the sufficient width to \(n+m+1\): it holds \(\partial_{\mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)\neq 0\neq\overline{\partial}_{\mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)\) for all \(z\in\mathbb{C}\) with \(|z|>-b\), \(\partial_{\mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)=\overline{\partial}_{\mathrm{wirt}}\,\mathrm{modReLU}_{b}(z)=0\) for every \(z\in\mathbb{C}\) with \(|z|<-b\), and \(\mathrm{modReLU}_{b}\) is not differentiable at any \(z\in\mathbb{C}\) with \(|z|=-b\).
Figure 6. Illustration for the proof of Theorem 4.2: Replacement rules for the approximation of register models by CVNNs using building blocks.
**Example 4.4**.: The complex cardioid function has been used in [27] in the context of MRI fingerprinting, where complex-valued neural networks significantly outperformed their real-valued counterparts. It is defined as
\[\operatorname{card}:\mathbb{C}\to\mathbb{C},\quad\operatorname{card}(z):=\begin{cases} \frac{1}{2}\Big{(}1+\frac{\operatorname{Re}(z)}{|z|}\Big{)}\,z,&z\in\mathbb{C} \setminus\{0\}\,,\\ 0,&z=0.\end{cases}\]
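In polar form, \(\operatorname{card}(r\mathrm{e}^{\mathrm{i}\varphi})=\frac{1}{2}(1+\cos\varphi)\,r\mathrm{e}^{\mathrm{i}\varphi}\), i.e., the modulus is attenuated by a cardioid-shaped factor depending on the phase. A direct implementation (our own sketch, not from [27]):

```python
import numpy as np

def cardioid(z):
    """Complex cardioid: scales z by (1 + Re(z)/|z|)/2 = (1 + cos(arg z))/2."""
    r = np.abs(z)
    safe_r = np.where(r == 0, 1.0, r)
    return np.where(r == 0, 0.0 + 0.0j, 0.5 * (1.0 + z.real / safe_r) * z)

z = np.array([1.0 + 0.0j, 1.0j, -1.0 + 0.0j])
print(cardioid(z))   # [1, 0.5j, 0]: the positive real axis passes, the negative is killed
```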
An application of Theorem 4.2(i) shows that for \(n,m\in\mathbb{N}\) the set \(\mathcal{N}\mathcal{N}^{\operatorname{card}}_{n,m,n+m+1}\) is universal.
To this end, let us verify the assumptions of Theorem 4.2 in detail. The continuity of card on \(\mathbb{C}\setminus\{0\}\) is immediate. Further note that
\[|\operatorname{card}(z)|=\left|\frac{1}{2}\bigg{(}1+\frac{\operatorname{Re}(z )}{|z|}\bigg{)}\,z\right|\leq|z|\to 0\]
as \(z\to 0\), showing the continuity of card on the entire complex plane \(\mathbb{C}\). Now [10, Corollaries 5.6 and 5.7] show that card is non-polyharmonic and
\[\partial_{\operatorname{wirt}}\operatorname{card}(z)=\frac{1}{2}+\frac{1}{8} \cdot\frac{\overline{z}}{|z|}+\frac{3}{8}\cdot\frac{z}{|z|},\quad\overline{ \partial}_{\operatorname{wirt}}\operatorname{card}(z)=-\frac{1}{8}\cdot \frac{z^{3}}{|z|^{3}}+\frac{1}{8}\cdot\frac{z}{|z|}\]
for every \(z\in\mathbb{C}\setminus\{0\}\). Hence, we see \(\partial_{\operatorname{wirt}}\operatorname{card}(1)=1\) and \(\overline{\partial}_{\operatorname{wirt}}\operatorname{card}(1)=0\).
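These values can also be confirmed numerically via \(\partial_{\mathrm{wirt}}=\frac{1}{2}(\partial_{x}-\mathrm{i}\partial_{y})\) and \(\overline{\partial}_{\mathrm{wirt}}=\frac{1}{2}(\partial_{x}+\mathrm{i}\partial_{y})\). The following sketch (our own check) approximates both Wirtinger derivatives of the cardioid at \(z_{0}=1\) by central differences.

```python
def cardioid(z):
    # scalar version of the complex cardioid
    return 0.0j if z == 0 else 0.5 * (1.0 + z.real / abs(z)) * z

def wirtinger(f, z, h=1e-6):
    """Numerical Wirtinger derivatives (d/dx -/+ i d/dy)/2 via central differences."""
    dx = (f(z + h) - f(z - h)) / (2 * h)
    dy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return (dx - 1j * dy) / 2, (dx + 1j * dy) / 2

d, dbar = wirtinger(cardioid, 1.0 + 0.0j)
print(d, dbar)   # approximately 1 and 0, matching d_wirt card(1) = 1, dbar_wirt card(1) = 0
```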
## 5. The polyharmonic case
In this section, we deal with activation functions \(\varrho:\mathbb{C}\to\mathbb{C}\) that are polyharmonic. However, it turns out that this property can be relaxed to only requiring that \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\) in order for the proofs to work. The main assumptions of this section can therefore be stated as follows.
**Assumption 5.1**.: _Let \(\varrho:\mathbb{C}\to\mathbb{C}\) be a function satisfying the following conditions:_
_(i)_ \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\)_,_
_(ii)_ \(\varrho\) _is not holomorphic,_
_(iii)_ \(\varrho\) _is not antiholomorphic,_
_(iv)_ \(\varrho\) _is not_ \(\mathbb{R}\)_-affine._
The Stone-Weierstrass theorem states that any continuous function can be arbitrarily well approximated by complex polynomials in \(z_{1},\dots,z_{n}\) and \(\overline{z_{1}},\dots,\overline{z_{n}}\) in the uniform norm on compact subsets of \(\mathbb{C}^{n}\). In order to show universality of CVNNs, it suffices therefore to show that such polynomials can be approximated by deep narrow CVNNs to arbitrary precision in the uniform norm on compact sets. We follow an approach similar to that of the register model by preserving the inputs and outputs from layer to layer while gradually performing multiplications to approximate the individual monomials.
Using Proposition 3.1 we derive the following Proposition 5.2. It states that the function \(z\mapsto(z,\overline{z})\) can be uniformly approximated on compact sets by a shallow network of width \(2\).
**Proposition 5.2**.: _Let \(\varrho\) satisfy Assumption 5.1, let \(K\subseteq\mathbb{C}\) be compact and \(\varepsilon>0\). Then there are maps \(\phi\in\operatorname{Aff}(\mathbb{C};\mathbb{C}^{2})\) and \(\psi\in\operatorname{Aff}(\mathbb{C}^{2};\mathbb{C}^{2})\) such that_
\[\sup_{z\in K}\left\|(\psi\circ\varrho^{\times 2}\circ\phi)(z)-(z,\overline{z}) \right\|_{\mathbb{C}^{2}}<\varepsilon.\]
Proof.: If there is a point \(z_{0}\in\mathbb{C}\) with \(\partial_{\operatorname{wirt}}\varrho(z_{0})\neq 0\neq\overline{\partial}_{ \operatorname{wirt}}\varrho(z_{0})\), we can directly apply Proposition 3.1(iii). If there is no such point, we can still find \(z_{1}\in\mathbb{C}\) with \(\partial_{\operatorname{wirt}}\varrho(z_{1})\neq 0\) and \(z_{2}\in\mathbb{C}\) with \(\overline{\partial}_{\operatorname{wirt}}\varrho(z_{2})\neq 0\), since \(\varrho\) is neither holomorphic nor antiholomorphic. Since by assumption no point with both Wirtinger derivatives non-vanishing exists, it follows that \(\overline{\partial}_{\operatorname{wirt}}\varrho(z_{1})=0\) and \(\partial_{\operatorname{wirt}}\varrho(z_{2})=0\), and we can thus apply Proposition 3.1(i) and (ii).
In fact, it is possible to show that under the conditions formulated in Assumption 5.1, one can always find a point \(z_{0}\in\mathbb{C}\) with \(\partial_{\operatorname{wirt}}\varrho(z_{0})\neq 0\neq\overline{\partial}_{ \operatorname{wirt}}\varrho(z_{0})\).
Proposition 5.2 is a central finding, since it implies that we can build from the activation function \(\varrho\) constructions that are similar to the register model construction from Proposition 3.4. Together with Proposition 3.3 from Section 3, this leads to the following proof of the sufficiency part of Theorem 1.1.
**Theorem 5.3**.: _Let \(n,m\in\mathbb{N}\). Assume that \(\varrho\in C(\mathbb{C};\mathbb{C})\) satisfies Assumption 5.1. Then \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n+2m+5}\) is universal._
_Moreover, if there is a point \(z_{0}\in\mathbb{C}\) with either_
\[\partial_{\mathrm{wirt}}\varrho(z_{0})\neq 0=\overline{\partial}_{\mathrm{wirt}} \varrho(z_{0})\quad\text{or}\quad\partial_{\mathrm{wirt}}\varrho(z_{0})=0 \neq\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0}), \tag{5.1}\]
_then \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,n+m+4}\) is universal._
Proof.: We first provide a detailed proof of the universality of \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n+2m+5}\). Afterwards, we comment on how to adapt it to show that a width of \(n+m+4\) is sufficient if (5.1) is fulfilled.
From the Stone-Weierstrass theorem [9, Theorem 4.51], we know that the set of complex polynomials in \(z_{1},\ldots,z_{n}\) and \(\overline{z_{1}},\ldots,\overline{z_{n}}\) is dense in \(C(K;\mathbb{C})\) with respect to the uniform norm on the compact set \(K\subset\mathbb{C}^{n}\). Hence, it suffices to show that each function \(p:K\to\mathbb{C}^{m}\), whose components are polynomials in \(z_{1},\ldots,z_{n}\) and \(\overline{z_{1}},\ldots,\overline{z_{n}}\) can be uniformly approximated on \(K\) by CVNNs of width \(2n+2m+5\). Let \(p=(p_{1},\ldots,p_{m})\) be such a function and decompose \(p_{\ell}=\sum_{j=1}^{L_{\ell}}a_{j,\ell}p_{j,\ell}\) with each \(p_{j,\ell}\) being a monomial (i.e., a product of some of the variables \(z_{1},\ldots,z_{n},\overline{z_{1}},\ldots,\overline{z_{n}}\)), and \(a_{j,\ell}\in\mathbb{C}\) for \(\ell\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,L_{\ell}\}\). We now construct a CVNN that approximates \(p\) sufficiently well on \(K\).
The idea is to construct a register-like model where each layer consists of \(2n+m+1\) neurons with \(2n\)_in-register neurons_ storing the input values and their complex conjugates, \(m\)_out-register neurons_, and one _computation neuron_, see Figure 7.
For \(k\in\{1,2,3\}\), let \(\mathrm{mul}_{k}:\mathbb{C}^{2}\to\mathbb{C}\) be maps defined as \(\mathrm{mul}_{1}(z,w):=zw\), \(\mathrm{mul}_{2}(z,w):=z\overline{w}\), and \(\mathrm{mul}_{3}(z,w):=\overline{zw}\) for all \(z,w\in\mathbb{C}\). From Proposition 3.3 we know that there is \(\mathrm{mul}\in\{\mathrm{mul}_{1},\mathrm{mul}_{2},\mathrm{mul}_{3}\}\) that can be uniformly approximated by shallow CVNNs with activation function \(\varrho\) and a width of \(12\). The available multiplication \(\mathrm{mul}\) is fixed from now on. Each of the computation neurons takes two inputs and performs the multiplication \(\mathrm{mul}\).
We first sketch how to compute and save a single monomial of degree \(\ell\in\mathbb{N}\). Let \(q=z_{k_{\ell}}\cdot z_{k_{\ell-1}}\cdot\ldots\cdot z_{k_{1}}\) be such a monomial, where we assume for now that it includes no conjugated variables and that every computation neuron performs the genuine multiplication \((z,w)\mapsto zw\). The out-register neurons are initialized with \(0\) and the computation neuron is initialized with \(1\). Then the desired monomial can be computed iteratively as \(z_{k_{\ell}}\cdot(z_{k_{\ell-1}}\cdot\ldots(z_{k_{1}}\cdot 1)\ldots)\): in each layer, we pass to the computation neuron the currently needed variable \(z_{k_{j}}\) and the product obtained from the previous computation neuron. However, in general it might occur that the computation neuron only performs the operation \((z,w)\mapsto z\bar{w}\) or \((z,w)\mapsto\bar{z}\bar{w}\), and that the desired monomial also includes conjugated variables. Hence, the obtained monomial
\[\zeta_{k_{\ell}}\cdot\ldots\cdot\zeta_{k_{1}}:=\mathrm{mul}(z_{k_{\ell}}, \mathrm{mul}(z_{k_{\ell-1}},\ldots\mathrm{mul}(z_{k_{1}},1)\ldots))\]
might differ from \(q\).
However, by the basic properties of complex conjugation, for every \(j\in\{1,\ldots,\ell\}\), \(\zeta_{k_{j}}\) is either \(z_{k_{j}}\) or \(\bar{z}_{k_{j}}\). Since the input register saves both the variables and their conjugates, it suffices to pass, if necessary, \(\bar{z}_{k_{j}}\) instead of \(z_{k_{j}}\) from the input register to the computation neuron at the \(j\)th step. This ensures that it is always possible to compute the desired monomial.
Let us now describe how the entire polynomial \(p\) is computed: in each layer, the current value of the computation neuron or its complex conjugate is multiplied with either the value of one of the in-register neurons or its complex conjugate, according to Proposition 3.3. This is performed repeatedly until the first monomial \(p_{1,1}\) is completed. Once the first monomial \(p_{1,1}\) is realized, it is multiplied by its coefficient and stored in the first out-register neuron. This is then repeated for the other monomials \(p_{j,1}\), and the results are gradually added up in the first out-register neuron, which then finally stores the desired polynomial for the first component \(p_{1}\). This procedure is then repeated for the other components \(p_{2},\ldots,p_{m}\).
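The conjugation bookkeeping for a single monomial can be made concrete. The sketch below is our own illustration and assumes the worst case where only \(\operatorname{mul}_{3}(z,w)=\overline{zw}\) is available; since the input fed to the computation neuron at step \(j\) ends up conjugated \(\ell-j+1\) times in the final product, it suffices to conjugate it beforehand whenever this count is odd.

```python
import numpy as np

mul3 = lambda z, w: np.conj(z * w)        # the only available multiplication: (z, w) -> conj(zw)

def monomial_via_mul3(factors):
    """Compute t_1 * ... * t_l by iterated applications of mul3, drawing each input
    from the in-registers as either t_j or conj(t_j) (the computation-neuron trick)."""
    l = len(factors)
    acc = 1.0 + 0.0j                      # the computation neuron is initialized with 1
    for j, t in enumerate(factors, start=1):
        # in the final product, the step-j input is conjugated (l - j + 1) times
        feed = np.conj(t) if (l - j + 1) % 2 == 1 else t
        acc = mul3(feed, acc)
    return acc

rng = np.random.default_rng(3)
z1, z2, z3 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
target = z1 * np.conj(z2) * z3            # a monomial with a conjugated variable
print(abs(monomial_via_mul3([z1, np.conj(z2), z3]) - target))   # ~0
```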
The approximation of this type of register model by CVNNs is done using the building blocks from Section 3. An illustration of the replacement rules is shown in Figure 8.
In the following we want to make this idea more precise. To this end, we set up a register model in Step 1, and show in Step 2 how this register model can be uniformly approximated by elements of \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n+2m+12}\) on compact sets. This yields universality of \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n+2m+12}\). In Step 2\({}^{\prime}\) we
provide a more detailed analysis of the arguments from Step 2 which lead to the claimed universality of \(\mathcal{NN}_{n,m,2n+2m+5}^{\varrho}\).
_Step 1_.: Fix \(\operatorname{mul}\in\{\operatorname{mul}_{1},\operatorname{mul}_{2}, \operatorname{mul}_{3}\}\). We deduce that \(p\) can be written as a composition
\[p=T_{\operatorname{end}}\circ T_{N}\circ T_{N-1}\circ\ldots\circ T_{1}\circ T_ {\operatorname{init}} \tag{5.2}\]
for some number \(N\in\mathbb{N}\) with
\[T_{\operatorname{init}} :\mathbb{C}^{n}\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times \mathbb{C}\times\mathbb{C}^{m}, z\mapsto(z,\overline{z},1,0),\] \[T_{\operatorname{end}} :\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times \mathbb{C}^{m}\to\mathbb{C}^{m}, (z,u,w,v)\mapsto v\]
Figure 7. The register-like model from Theorem 5.3: Squares and diamonds depict the complex identity map and complex conjugation, respectively. Each circle stands for one of the multiplication operations \(\operatorname{mul}_{k}\) from the proof of Theorem 5.3. The choice of the connections from the in-register neurons (left) to the computation neuron (middle) depends on the monomial that is approximated and the operation \(\operatorname{mul}_{k}\) that is available (\(T_{\ell}\) of the first form). One of them is shown in the figure. The connections from the computation neuron (middle) to the out-register neuron (right) are drawn each time the computation of a monomial of one of the \(m\) output polynomials is completed (\(T_{\ell}\) of the second form). The dashed portions of the register model are to be replaced according to Figure 8.
and for every \(\ell\in\{1,\ldots,N\}\), the map \(T_{\ell}:\mathbb{C}^{2n+m+1}\to\mathbb{C}^{2n+m+1}\) has one of the two following forms: The first possible form is
\[T_{\ell}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m }\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m}, \quad(z,u,w,v)\mapsto(z,\overline{z},\operatorname{mul}((z,u)_{k},w),v)\]
where \((z,u)_{k}\) denotes the \(k\)th component of \((z,u)\in\mathbb{C}^{n}\times\mathbb{C}^{n}\) for some \(k=k(\ell)\in\{1,\ldots,2n\}\). This function realizes the multiplication of the current value of the computation neuron with one of the values of the in-registers, choosing either a variable or its complex conjugation, depending on \(k\). The second possible form is
\[T_{\ell}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m }\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m}, \quad(z,u,w,v)\mapsto(z,u,1,v+a_{j,k}we_{k})\]
for some \(k\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,L_{k}\}\). Here, \(e_{k}\) denotes the \(k\)th standard basis vector of \(\mathbb{C}^{m}\). This is the function that adds a finished monomial to the corresponding out-register neuron.
_Step \(2\)._ To conclude, we see that every function \(T_{\ell}\) occurring in the composition (5.2) is either an affine map (if it has the second form) or has the property that there are sequences \((\phi_{j}^{\ell})_{j\in\mathbb{N}}\) and \((\psi_{j}^{\ell})_{j\in\mathbb{N}}\) with \(\phi_{j}^{\ell}\in\operatorname{Aff}(\mathbb{C}^{2n+m+1};\mathbb{C}^{2n+2m+12})\) and \(\psi_{j}^{\ell}\in\operatorname{Aff}(\mathbb{C}^{2n+2m+12};\mathbb{C}^{2n+m+1})\) such that
\[\psi_{j}^{\ell}\circ\varrho^{\times(2n+2m+12)}\circ\phi_{j}^{\ell}\xrightarrow{ j\to\infty}T_{\ell}\quad\text{locally uniformly},\]
where we applied Propositions 3.3 and 5.2. Similarly, one can find sequences \((\phi_{j}^{\text{init}})_{j\in\mathbb{N}}\) and \((\psi_{j}^{\text{init}})_{j\in\mathbb{N}}\) with \(\phi_{j}^{\text{init}}\in\operatorname{Aff}(\mathbb{C}^{n};\mathbb{C}^{2n})\) and \(\psi_{j}^{\text{init}}\in\operatorname{Aff}(\mathbb{C}^{2n};\mathbb{C}^{2n+m+1})\) satisfying
\[\psi_{j}^{\text{init}}\circ\varrho^{\times 2n}\circ\phi_{j}^{\text{init}} \xrightarrow{j\to\infty}T_{\text{init}}\quad\text{locally uniformly}.\]
Using Proposition A.5, we obtain the universality of \(\mathcal{N}\mathcal{N}_{n,m,2n+2m+12}^{\varrho}\).
_Step \(2^{\prime}\)._ In the previous step, we approximated in the sense of Proposition A.5 the complex identity neurons and the complex conjugation neurons by the elementary building blocks given in Proposition 3.1, and the multiplication operation mul by the building block given by Proposition 3.3. The latter is a shallow CVNN of width \(12\). In order to further reduce the width, we will first approximate this shallow CVNN by an appropriate register model according to Proposition 3.4, and only then replace all complex identities and conjugations by the elementary building blocks.
Figure 8. Replacement rules for the highlighted portions in Figure 7.

Each of the functions \(T_{1},\ldots,T_{N}\) occurring in the composition (5.2) is either an affine map (if it has the second form) or has the following property: For each \(\ell\in\{1,\ldots,N\}\), there are sequences \((\phi_{j}^{\ell})_{j\in\mathbb{N}}\) and \((\psi_{j}^{\ell})_{j\in\mathbb{N}}\) with \(\phi_{j}^{\ell}\in\operatorname{Aff}(\mathbb{C}^{2};\mathbb{C}^{12})\) and \(\psi_{j}^{\ell}\in\operatorname{Aff}(\mathbb{C}^{12};\mathbb{C})\) such that \(F_{j}^{\ell}\xrightarrow{j\to\infty}T_{\ell}\) locally uniformly, where we define
\[F_{j}^{\ell}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C} ^{m}\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times\mathbb{C}^{m}, \quad(z,u,w,v)\mapsto(z,\overline{z},\widetilde{F}_{j}^{\ell}((z,u)_{k},w),v)\]
and \(\widetilde{F}_{j}^{\ell}=\psi_{j}^{\ell}\circ\varrho^{\times 12}\circ\phi_{j}^{ \ell}:\mathbb{C}^{2}\to\mathbb{C}\) with \(k=k(\ell)\) from the definition of \(T_{\ell}\).
Note that \(\widetilde{F}_{j}^{\ell}\in\mathcal{SN}_{2,1}^{\varrho}\subseteq\mathcal{I}_ {2,1,4}^{\varrho}\) for all \(j\in\mathbb{N}\) and \(\ell\in\{1,\ldots,N\}\) thanks to Proposition 3.4. Hence, every function \(F_{j}^{\ell}\) can be represented by a ("large") register model with input and output dimension both equal to \(2n+m+1\) and with \(2n+m+4\) neurons per hidden layer. These \(2n+m+4\) neurons can be grouped as follows: \(n\) in-register neurons use the complex identity as activation function and store the \(n\) input values, \(n\) in-register neurons use the complex conjugation as activation function and store the complex conjugates of those input values, \(m\) out-register neurons use the complex identity as activation function and carry the output values of the original polynomial, and \(4\) neurons form the ("small") register model that represents \(\widetilde{F}_{j}^{\ell}\), of which two in-register neurons and one out-register neuron use the complex identity, and one computation neuron uses \(\varrho\) as activation function. Note that one of the two in-register neurons of the small register model stores one of the input variables of the large register model and can therefore be dropped. This results in total in a register model with \(n+m+2\) identity neurons, \(n\) conjugation neurons, and \(1\) neuron using \(\varrho\) as its activation function per hidden layer.
Making this more precise, we write
\[F_{j}^{\ell}=V_{K}\circ\tilde{\varrho}\circ V_{K-1}\circ\ldots\circ V_{2} \circ\tilde{\varrho}\circ V_{1},\]
where \(\tilde{\varrho}\) is as in Proposition 3.4, i.e., it applies \(\varrho\) to the \((2n+2)\)th input variable, and the identity to all others, and for \(r\in\{2,\ldots,K-1\}\), we write
\[V_{1}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}\times \mathbb{C}^{m}\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}^{3}\times \mathbb{C}^{m}, (z,u,w,v)\mapsto(z,\overline{z},\eta_{1}(z,u,w),v),\] \[V_{r}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}^{3} \times\mathbb{C}^{m}\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}^{3} \times\mathbb{C}^{m}, (z,u,w,v)\mapsto(z,\overline{z},\eta_{r}(z,u,w),v),\] \[V_{K}:\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C}^{3} \times\mathbb{C}^{m}\to\mathbb{C}^{n}\times\mathbb{C}^{n}\times\mathbb{C} \times\mathbb{C}^{m}, (z,u,w,v)\mapsto(z,\overline{z},\eta_{K}(z,u,w),v),\]
where \(\eta_{1}\in\operatorname{Aff}(\mathbb{C}^{2n+1};\mathbb{C}^{3})\), \(\eta_{r}\in\operatorname{Aff}(\mathbb{C}^{2n+3};\mathbb{C}^{3})\), and \(\eta_{K}\in\operatorname{Aff}(\mathbb{C}^{2n+3};\mathbb{C})\).
Now we approximate the functions \(\tilde{\varrho}\circ V_{r}\) for \(r\in\{1,\ldots,K-1\}\) and \(V_{K}\) by shallow CVNNs in the sense of Proposition A.5. Using Proposition 5.2 in the large register model, we approximate each of the \(n\) in-register pairs of input values and their complex conjugates, and each of the out-register neurons, by a shallow CVNN with \(2\) hidden neurons. Similarly, the remaining in-register neuron and the out-register neuron of the small register model are approximated in the same way. This results in shallow CVNNs with \(2n+2m+(2\cdot 2+1)=2n+2m+5\) hidden neurons. Formally, we approximate
* \(\tilde{\varrho}\circ V_{1}\) by a sequence of CVNNs \(\psi\circ\varrho^{\times(2n+2m+5)}\circ\phi\) with \(\phi\in\operatorname{Aff}(\mathbb{C}^{2n+m+1};\mathbb{C}^{2n+2m+5})\) and \(\psi\in\operatorname{Aff}(\mathbb{C}^{2n+2m+5};\mathbb{C}^{2n+m+3})\),
* \(\tilde{\varrho}\circ V_{r}\) by a sequence of CVNNs \(\psi\circ\varrho^{\times(2n+2m+5)}\circ\phi\) with \(\phi\in\operatorname{Aff}(\mathbb{C}^{2n+m+3};\mathbb{C}^{2n+2m+5})\) and \(\psi\in\operatorname{Aff}(\mathbb{C}^{2n+2m+5};\mathbb{C}^{2n+m+3})\) for \(r\in\{2,\ldots,K-1\}\),
* \(V_{K}\) by a sequence of CVNNs \(\psi\circ\varrho^{\times(2n+2m+2)}\circ\phi\) with \(\phi\in\operatorname{Aff}(\mathbb{C}^{2n+m+3};\mathbb{C}^{2n+2m+2})\) and \(\psi\in\operatorname{Aff}(\mathbb{C}^{2n+2m+2};\mathbb{C}^{2n+m+1})\).
Overall, we deduce that each function \(F_{j}^{\ell}\) with \(j\in\mathbb{N}\) and \(\ell\in\{1,\ldots,N\}\) can be uniformly approximated using CVNNs from \(\mathcal{N}\mathcal{N}_{2n+m+1,2n+m+1,2n+2m+5}^{\varrho}\). Another application of Proposition A.5 shows that each function \(T_{1},\ldots,T_{N}\) appearing in (5.2) can be uniformly approximated using CVNNs from \(\mathcal{N}\mathcal{N}_{2n+m+1,2n+m+1,2n+2m+5}^{\varrho}\). Similar arguments apply to \(T_{\text{init}}\) and \(T_{\text{end}}\) in (5.2), and this shows that the polynomial \(p\) can be uniformly approximated by CVNNs from \(\mathcal{N}\mathcal{N}_{n,m,2n+2m+5}^{\varrho}\). Since \(p\) was arbitrary, this together with the Stone-Weierstrass theorem shows the universality of \(\mathcal{N}\mathcal{N}_{n,m,2n+2m+5}^{\varrho}\).
We now discuss in what way the preceding proof has to be adapted in order to show the universality of \(\mathcal{N}\mathcal{N}_{n,m,n+m+4}^{\varrho}\) if (5.1) is fulfilled.
First assume \(\partial_{\text{wirt}}\varrho(z_{0})\neq 0=\overline{\partial}_{\text{wirt}} \varrho(z_{0})\). Proposition 3.1(i) shows that we can approximate \(\operatorname{id}_{\mathbb{C}}\) arbitrarily well using shallow CVNNs of width \(1\) with \(\varrho\) as activation function. We then construct a register model in the spirit of the first part of Step 1 that has in each layer \(n\) in-register neurons, \(m\) out-register neurons, one computation neuron realizing the available multiplication \(\operatorname{mul}_{k}\), and a _single_ conjugation neuron that stores the conjugated version of one of the in-registers. The idea is that we do not need to store the conjugates of all the inputs in every layer; it suffices to include one conjugation neuron in which the conjugate of one of the inputs is stored whenever that input is needed in the current computation step. Approximating the multiplication \(\operatorname{mul}\) first by shallow networks according to Proposition 3.3 and then replacing these shallow networks by deep register models according to Proposition 3.4 yields an approximation of a given polynomial by large variants of CVNNs with a width of \(n+m+1+4\), where \(n+m+3\) neurons use the identity activation function, \(1\) neuron the conjugation, and \(1\) neuron \(\varrho\) as activation function. Note also here that we may omit one of the identity neurons since its value is already contained in either the \(n\) in-registers or the conjugation neuron. This leads in total to a network of width \(n+m+4\) with \(n+m+2\) identity neurons, one conjugation neuron, and one neuron that uses \(\varrho\) as activation function. Now we approximate \(n+m+1\) of these identity neurons in the spirit of Proposition 3.1(i) using shallow CVNNs with activation \(\varrho\) of width \(1\), and the remaining identity neuron together with the conjugation neuron in the spirit of Proposition 5.2 using a shallow CVNN with \(\varrho\) activation and a width of \(2\). This yields the universality of \(\mathcal{NN}_{n,m,n+m+4}^{\varrho}\).
If, on the other hand, \(\partial_{\operatorname{wirt}}\varrho(z_{0})=0\neq\overline{\partial}_{ \operatorname{wirt}}\varrho(z_{0})\), we argue as in the proof of Theorem 4.2(ii): by combining the previous part, which yields the universality of \(\mathcal{NN}_{n,m,n+m+4}^{\overline{\varrho}}\), with Proposition 3.1(ii), we deduce the universality of \(\mathcal{NN}_{n,m,n+m+4}^{\varrho}\). Notice that \(\varrho\) satisfies Assumption 5.1 if and only if \(\overline{\varrho}\) does.
## 6. Necessity of our assumptions
The proof of Theorem 1.1 is not yet complete. So far, we have proven that activation functions \(\varrho\in C(\mathbb{C};\mathbb{C})\) which are neither holomorphic, nor antiholomorphic, nor \(\mathbb{R}\)-affine yield universality of CVNNs of width \(2n+2m+5\), with indicated improvements under additional assumptions. The necessity part is done in this section: If \(\varrho\) is holomorphic, antiholomorphic, or \(\mathbb{R}\)-affine, then even the set of CVNNs using activation function \(\varrho\) with _arbitrary widths and depths_ is not universal, cf. Theorem 6.3. Furthermore, Theorem 1.1 states that under the mentioned constraints on the activation function, a width of \(2n+2m+5\) is sufficient for universality of CVNNs with input dimension \(n\) and output dimension \(m\). But could we have done better? In Theorem 6.4 below, we show that for a family of real-valued activation functions, the set \(\mathcal{NN}_{n,m,W}^{\varrho}\) is not universal when \(W<\max\left\{2n,2m\right\}\). In our final result Theorem 6.5, we show that real differentiability of the activation function at one point with non-vanishing derivative is not necessary for the universal approximation property of deep narrow CVNNs.
We prepare the proof of Theorem 6.3 by two lemmas, the first of which is about uniform convergence of \(\mathbb{R}\)-affine functions.
**Lemma 6.1**.: _Let \(n,m\in\mathbb{N}\), \((f_{k})_{k\in\mathbb{N}}\) be a sequence of \(\mathbb{R}\)-affine functions from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\) and \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\). Let \((f_{k})_{k\in\mathbb{N}}\) converge locally uniformly to \(f\). Then \(f\) is \(\mathbb{R}\)-affine too._
Proof.: Let \(A_{k}\in\mathbb{R}^{m\times n}\) and \(b_{k}\in\mathbb{R}^{m}\) such that \(f_{k}(x)=A_{k}x+b_{k}\). Let \(b:=f(0)\). Then we have \(b_{k}=f_{k}(0)\to f(0)=b\). Furthermore we see for every \(j\in\left\{1,\ldots,n\right\}\) that
\[\left\|A_{k}e_{j}-A_{\ell}e_{j}\right\|_{\mathbb{R}^{m}}\leq\left\|A_{k}e_{j} +b_{k}-A_{\ell}e_{j}-b_{\ell}\right\|_{\mathbb{R}^{m}}+\left\|b_{k}-b_{\ell} \right\|_{\mathbb{R}^{m}}\to 0\]
as \(k,\ell\to\infty\), uniformly over \(j\), meaning \(\max_{j\in\left\{1,\ldots,n\right\}}\left\|A_{k}e_{j}-A_{\ell}e_{j}\right\|_{ \mathbb{R}^{m}}\to 0\) as \(k,\ell\to\infty\). Consequently, \((A_{k})_{k\in\mathbb{N}}\) is a Cauchy sequence and thus converges to some \(A\in\mathbb{R}^{m\times n}\). We claim \(f(x)=Ax+b\) for every \(x\in\mathbb{R}^{n}\). Indeed, this follows from
\[\left\|A_{k}x+b_{k}-Ax-b\right\|_{\mathbb{R}^{m}}\leq\left\|A_{k}x-Ax\right\|_ {\mathbb{R}^{m}}+\left\|b_{k}-b\right\|_{\mathbb{R}^{m}}\leq\left\|A_{k}-A \right\|_{\mathbb{R}^{m\times n}}\left\|x\right\|_{\mathbb{R}^{n}}+\left\|b_{k}- b\right\|_{\mathbb{R}^{m}}\to 0\]
as \(k\to\infty\).
Our second lemma in preparation of the proof of Theorem 6.3 concerns locally uniform limits of sequences of functions that are either holomorphic or antiholomorphic.
**Lemma 6.2**.: _Let \(\mathcal{F}:=\left\{F:\mathbb{C}\to\mathbb{C}\;:\;F\;\text{holomorphic or antiholomorphic}\right\}\) and \((f_{k})_{k\in\mathbb{N}}\) be a sequence of functions with \(f_{k}\in\mathcal{F}\) for every \(k\in\mathbb{N}\). Let \(f:\mathbb{C}\to\mathbb{C}\) be such that \(f_{k}\to f\) locally uniformly. Then it holds \(f\in\mathcal{F}\)._
Proof.: We distinguish two cases:
1. If there is a subsequence of \((f_{k})_{k\in\mathbb{N}}\) consisting of holomorphic functions, the limit \(f\) of this subsequence also has to be holomorphic (see for instance [26, Theorem 10.28]).
2. If there is a subsequence of \((f_{k})_{k\in\mathbb{N}}\) consisting of antiholomorphic functions, the limit \(f\) of this subsequence also has to be antiholomorphic, where we again apply [26, Theorem 10.28] to the complex conjugates of the functions in this subsequence.

Since every \(f_{k}\) is holomorphic or antiholomorphic, at least one of the two cases occurs, which proves \(f\in\mathcal{F}\).
The necessity part of Theorem 1.1 is covered by the following theorem.
**Theorem 6.3**.: _Let \(n,m\in\mathbb{N}\) and \(\varrho:\mathbb{C}\to\mathbb{C}\) be holomorphic or antiholomorphic or \(\mathbb{R}\)-affine. Then \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m}:=\bigcup\limits_{W\in\mathbb{N}} \mathcal{N}\mathcal{N}^{\varrho}_{n,m,W}\) is not universal._
Proof.: It suffices to show the claim for \(m=1\), since local uniform approximation in \(\mathbb{C}^{m}\) means componentwise local uniform approximation.
We start with the case \(n=1\). If \(\varrho\) is holomorphic or antiholomorphic, it follows that all the elements of \(\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) are holomorphic or antiholomorphic (see, e.g., [28, Proof of Eq. (4.15), p. 28]). But then it follows from Lemma 6.2 and Proposition A.2 that \(\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) is not universal. If \(\varrho\) is \(\mathbb{R}\)-affine, each element of \(\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) is \(\mathbb{R}\)-affine (as a composition of \(\mathbb{R}\)-affine functions). By Lemma 6.1 and Proposition A.2 it follows that \(\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) is not universal.
The case \(n>1\) can be reduced to the case \(n=1\) in the following way: Assume that \(\mathcal{N}\mathcal{N}^{\varrho}_{n,1}\) is universal and pick an arbitrary function \(f\in C(\mathbb{C};\mathbb{C})\). Let \(\pi:\mathbb{C}^{n}\to\mathbb{C}\), \(\pi(z_{1},\ldots,z_{n})=z_{1}\) and \(\widetilde{\pi}:\mathbb{C}\to\mathbb{C}^{n}\), \(\widetilde{\pi}(z)=(z,0,\ldots,0)\). Note that it holds \(\pi\circ\widetilde{\pi}=\mathrm{id}_{\mathbb{C}}\). By assumption, there is a sequence \((g_{k})_{k\in\mathbb{N}}\) with \(g_{k}\in\mathcal{N}\mathcal{N}^{\varrho}_{n,1}\) for \(k\in\mathbb{N}\) and \(g_{k}\to f\circ\pi\) locally uniformly. From Proposition A.5 it follows \(g_{k}\circ\widetilde{\pi}\to f\) locally uniformly. Since \(g_{k}\circ\widetilde{\pi}\in\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) for every \(k\in\mathbb{N}\), it follows that \(\mathcal{N}\mathcal{N}^{\varrho}_{1,1}\) is universal, in contradiction to what has just been shown.
In the previous sections, we showed that for a large class of activation functions a width of \(2n+2m+5\) is sufficient for universality of CVNNs with input dimension \(n\) and output dimension \(m\). Following the lines of [5, Lemma 1], we show next that for some activation functions, a width of at least \(\max\left\{2n,2m\right\}\) is necessary to guarantee universality.
**Theorem 6.4**.: _Let \(n,m\in\mathbb{N}\)._
1. _Let_ \(\phi\in C(\mathbb{R};\mathbb{C})\)_, and_ \(\varrho:\mathbb{C}\to\mathbb{C}\) _be given by_ \(\varrho(z):=\phi(\mathrm{Re}(z))\)_. Then_ \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n-1}\) _is not universal._
2. _Let_ \(\varrho:\mathbb{C}\to\mathbb{R}\)_. Then_ \(\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2m-1}\) _is not universal._
Proof.: We start with (i). Let \(K:=[-2,2]^{n}+\mathrm{i}[-2,2]^{n}\subseteq\mathbb{C}^{n}\) and \(f(z):=\left(\left\|z\right\|_{\mathbb{C}^{n}},0,\ldots,0\right)\) for \(z\in\mathbb{C}^{n}\). Let \(g\in\mathcal{N}\mathcal{N}^{\varrho}_{n,m,2n-1}\) be arbitrary. From the definition of \(\varrho\), it follows that we may write \(g\) as
\[g(z)=\psi(\mathrm{Re}(Vz)+b),\]
where \(\psi:\mathbb{C}^{2n-1}\to\mathbb{C}^{m}\) is some function, \(V\in\mathbb{C}^{(2n-1)\times n}\), \(b\in\mathbb{R}^{2n-1}\), and the real part \(\mathrm{Re}\) is taken componentwise. Interpreting \(\mathrm{Re}\circ V\) as an \(\mathbb{R}\)-linear function from \(\mathbb{R}^{2n}\) to \(\mathbb{R}^{2n-1}\) we conclude from \(2n-1<2n\) that there exists \(v\in\mathbb{C}^{n}\) with \(\left\|v\right\|_{\mathbb{C}^{n}}=1\) satisfying \(\mathrm{Re}(Vv)=0\) and hence
\[g(z+v)=g(z)\text{ for any }z\in\mathbb{C}^{n}. \tag{6.1}\]
We set \(A:=B_{0.1}(0),B:=B_{0.1}(v)\subseteq\mathbb{C}^{n}\) and compute
\[\begin{split}\int_{K}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}\mathrm{d}z&\geq\int_{A}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}\mathrm{d}z+\int_{B}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}\mathrm{d}z\\ &\overset{B=A+v}{=}\int_{A}\left(\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}+\left\|f(z+v)-g(z+v)\right\|_{\mathbb{C}^{m}}\right)\mathrm{d}z\\ &\overset{(6.1)}{\geq}\int_{A}\left\|f(z)-f(z+v)\right\|_{\mathbb{C}^{m}}\mathrm{d}z\geq 0.8\cdot\lambda^{2n}(A)\end{split}\]
with \(\lambda^{2n}\) denoting the \(2n\)-dimensional Lebesgue measure. In the last inequality we used that for every \(z\in A\),
\[\left\|f(z)-f(z+v)\right\|_{\mathbb{C}^{m}}=\left|\left\|z\right\|_{\mathbb{C}^{n}}-\left\|z+v\right\|_{\mathbb{C}^{n}}\right|\geq\left\|z+v\right\|_{\mathbb{C}^{n}}-\left\|z\right\|_{\mathbb{C}^{n}}\geq\left\|v\right\|_{\mathbb{C}^{n}}-2\left\|z\right\|_{\mathbb{C}^{n}}\geq 0.8,\]
since \(\left\|z\right\|_{\mathbb{C}^{n}}\leq 0.1\) and \(\left\|v\right\|_{\mathbb{C}^{n}}=1\).
Hence it follows that \(\mathcal{NN}^{\varrho}_{n,m,2n-1}\) is not dense in \(C(K;\mathbb{C}^{m})\) with respect to the \(L^{1}\)-norm and thus, using Hölder's inequality, it follows that \(\mathcal{NN}^{\varrho}_{n,m,2n-1}\) is not dense in \(C(K;\mathbb{C}^{m})\) with respect to the \(L^{p}\)-norm for any \(p\in[1,\infty]\), so in particular for \(p=\infty\), which shows that \(\mathcal{NN}^{\varrho}_{n,m,2n-1}\) is not universal.
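For illustration only, the key step of part (i) — finding \(v\neq 0\) with \(\mathrm{Re}(Vv)=0\) — can be checked numerically. The sketch below (Python with NumPy/SciPy; the random \(V\) is a stand-in, not from the text) builds the induced real-linear map and reads \(v\) off its null space:

```python
import numpy as np
from scipy.linalg import null_space

n = 3
rng = np.random.default_rng(0)
V = rng.standard_normal((2 * n - 1, n)) + 1j * rng.standard_normal((2 * n - 1, n))

# For v = a + ib, Re(Vv) = Re(V) a - Im(V) b, a real-linear map R^{2n} -> R^{2n-1}.
M = np.hstack([V.real, -V.imag])        # shape (2n-1, 2n), so a nontrivial kernel exists
ker = null_space(M)
a, b = ker[:n, 0], ker[n:, 0]
v = a + 1j * b
v /= np.linalg.norm(v)                  # normalize as in the proof

print(np.max(np.abs((V @ v).real)))     # ~1e-16, i.e. Re(Vv) = 0
```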
Next, we prove (ii). To this end, we construct a function \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\), a compact set \(K\subset\mathbb{C}^{n}\), and a number \(\varepsilon>0\) such that
\[\sup_{z\in K}\left\|f(z)-g(z)\right\|_{\mathbb{C}^{m}}\geq\varepsilon\]
for all \(g\in\mathcal{NN}^{\varrho}_{n,m,2m-1}\). For a moment, fix \(g\in\mathcal{NN}^{\varrho}_{n,m,2m-1}\). Since the activation function \(\varrho\) is real-valued, the output of the last but one layer of \(g\) is a function \(\psi:\mathbb{C}^{n}\to\mathbb{R}^{2m-1}\). Also, there are a matrix \(V\in\mathbb{C}^{m\times(2m-1)}\) and a vector \(b\in\mathbb{C}^{m}\) such that \(g(z)=V\psi(z)+b\). With \(\mathbb{C}^{m}\cong\mathbb{R}^{2m}\), we may view \(V\) as a linear map \(\mathbb{R}^{2m-1}\to\mathbb{R}^{2m}\), and the range \(\{g(z)\,:\,z\in\mathbb{C}^{n}\}\) of \(g\) is thus contained in a \((2m-1)\)-dimensional affine subspace \(U=U(g)\) of \(\mathbb{R}^{2m}\). As
\[\sup_{z\in K}\left\|f(z)-g(z)\right\|_{\mathbb{R}^{2m}}\geq\sup_{z\in K}\inf_{ u\in U(g)}\left\|f(z)-u\right\|_{\mathbb{R}^{2m}},\]
it is sufficient for our purposes to find a function \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\), a compact set \(K\subset\mathbb{C}^{n}\), and a number \(\varepsilon>0\) such that
\[\inf_{U}\sup_{z\in K}\inf_{u\in U}\left\|f(z)-u\right\|_{\mathbb{R}^{2m}}\geq\varepsilon\]
where the outermost infimum traverses the \((2m-1)\)-dimensional affine subspaces \(U\) of \(\mathbb{R}^{2m}\). This is achieved by a function \(f\) whose range \(\{f(z)\,:\,z\in\mathbb{C}^{n}\}\) is not contained in any \((2m-1)\)-dimensional affine subspace \(U\) of \(\mathbb{R}^{2m}\). A semi-explicit construction is as follows:
Let \(K:=\{(\lambda,0,\ldots,0)\,:\,\lambda\in\mathbb{R},0\leq\lambda\leq 1\}\subseteq \mathbb{C}^{n}\) and
\[f_{1}:\mathbb{C}^{n}\to[0,1],\quad f_{1}(z_{1},\ldots,z_{n})=\max\{0,\min\{1, \operatorname{Re}(z_{1})\}\}.\]
Further let \(f_{2}:[0,1]\to\mathbb{C}^{m}\) be a parameterization of a curve that along the edges of the cube \(Q:=[0,1]^{m}+\mathrm{i}[0,1]^{m}\subseteq\mathbb{C}^{m}\cong\mathbb{R}^{2m}\) passes through all of its vertices \(\{0,1\}^{m}+\mathrm{i}\,\{0,1\}^{m}\subseteq\mathbb{C}^{m}\cong\mathbb{R}^{ 2m}\), and \(f=f_{2}\circ f_{1}\). From [3, Corollary 2.5], we deduce
\[\inf_{U}\sup_{z\in K}\inf_{u\in U}\left\|f(z)-u\right\|_{\mathbb{R}^{2m}}\geq \frac{1}{2}\]
and this finishes the proof.
Note that there are non-polyharmonic functions, like \(\varrho(z)=\mathrm{e}^{\mathrm{Re}(z)}\), but also polyharmonic functions that are neither holomorphic, nor antiholomorphic, nor \(\mathbb{R}\)-affine, like \(\varrho(z)=\mathrm{Re}(z)^{2}\), that meet the assumptions made in Theorem 6.4.
Following the ideas from [15, Proposition 4.15] we also want to add a short note on the necessity of the differentiability of the activation function. It turns out that the differentiability of the activation function is _not_ a necessary condition for the fact that narrow networks with width \(n+m+1\) using this activation function have the universal approximation property. The proof is in fact identical to the proof presented in [15]. However, we include a detailed proof to clarify that the reasoning also works in the case of activation functions \(\mathbb{C}\to\mathbb{C}\).
**Theorem 6.5**.: _Take any function \(w\in C(\mathbb{C};\mathbb{C})\) which is bounded and nowhere real differentiable. Then \(\varrho(z):=\sin(z)+w(z)\exp(-z)\) is also nowhere differentiable and \(\mathcal{NN}^{\varrho}_{n,m,n+m+1}\) is universal._
Proof.: Since \(\varrho\) is non-polyharmonic (polyharmonic functions are necessarily smooth, whereas \(\varrho\) is nowhere differentiable), it suffices to show that the identity function can be uniformly approximated on compact sets using compositions of the form \(\psi\circ\varrho\circ\phi\) with \(\phi,\psi\in\mathrm{Aff}(\mathbb{C};\mathbb{C})\). Then the statement can be derived similarly to the proof of Theorem 4.2(i).
Therefore, take any compact set \(K\subseteq\mathbb{C}\) and \(\varepsilon>0\). Choose \(M_{1}>0\) with \(\left|z\right|\leq M_{1}\) for every \(z\in K\). Take \(h>0\) arbitrary and consider
\[\sup_{z\in K\setminus\{0\}}\left|\frac{\sin(hz)-hz}{h}\right|\leq\sup_{z\in K \setminus\{0\}}M_{1}\left|\frac{\sin(hz)}{hz}-1\right|\to 0\]
as \(h\to 0\). Therefore, we may take \(h>0\) with
\[\left|\frac{\sin(hz)-hz}{h}\right|<\frac{\varepsilon}{2}\]
for every \(z\in K\). Furthermore, choose \(M_{2}>0\) with \(|w(z)|\leq M_{2}\) for every \(z\in\mathbb{C}\) (possible since \(w\) is bounded) and pick \(k\in\mathbb{N}\) large enough such that
\[\frac{\left|\exp(-hz)\right|\left|\exp(-2\pi k)\right|}{h}<\frac{\varepsilon} {2M_{2}}\]
for all \(z\in K\). Hence we derive
\[\left|\frac{\sin(hz+2\pi k)+w(hz+2\pi k)\exp(-hz-2\pi k)}{h}-z\right|\] \[= \left|\frac{\sin(hz)+w(hz+2\pi k)\exp(-hz-2\pi k)}{h}-z\right|\] \[\leq \left|\frac{\sin(hz)-hz}{h}\right|+M_{2}\frac{\left|\exp(-hz) \right|\left|\exp(-2\pi k)\right|}{h}<\varepsilon.\]
Thus, we get the claim by defining \(\phi(z):=hz+2\pi k\) and \(\psi(z):=\frac{1}{h}z\).
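The approximation constructed in the proof can also be observed numerically. In the sketch below (Python/NumPy), \(w\) is a truncated Weierstrass-type sum serving as a bounded, highly oscillatory stand-in (a genuinely nowhere differentiable \(w\) cannot be evaluated in floating point), and we check that \(\psi\circ\varrho\circ\phi\) is close to the identity on a compact set; the parameters \(h\) and \(k\) are our choices:

```python
import numpy as np

def w(z):  # bounded stand-in for a nowhere (real) differentiable function
    return sum(0.5 ** j * np.cos(3 ** j * z.real) for j in range(12))

def rho(z):
    return np.sin(z) + w(z) * np.exp(-z)

h, k = 1e-3, 3
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
z = x + 1j * y                            # compact set K = [-1,1] + i[-1,1]

approx = rho(h * z + 2 * np.pi * k) / h   # psi(rho(phi(z))) with phi(z)=hz+2*pi*k, psi(z)=z/h
print(np.max(np.abs(approx - z)))         # small, on the order of 1e-5 here
```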
## Appendix A Topological notes on locally uniform convergence
In this appendix, we discuss the relationship between locally uniform convergence, the compact open topology, and the universal approximation property introduced in Definition 2.4. Note that [21, Appendix B] is another account on the same topic.
Although locally uniform convergence can be studied more generally for functions defined on a topological space and taking values in a metric space, we restrict ourselves to functions \(\mathbb{C}^{n}\to\mathbb{C}^{m}\).
**Definition A.1**.: Let \((f_{k})_{k\in\mathbb{N}}\) be a sequence of functions \(f_{k}:\mathbb{C}^{n}\to\mathbb{C}^{m}\) and \(f:\mathbb{C}^{n}\to\mathbb{C}^{m}\). The sequence \((f_{k})_{k\in\mathbb{N}}\) converges locally uniformly to \(f\), if for every compact set \(K\subseteq\mathbb{C}^{n}\) we have
\[\sup_{z\in K}\left\|f_{k}(z)-f(z)\right\|_{\mathbb{C}^{m}}\xrightarrow{k\to \infty}0.\]
There is a certain equivalence between locally uniform convergence and the universal approximation property introduced in Definition 2.4.
**Proposition A.2**.: _Let \(\mathcal{F}\subseteq C(\mathbb{C}^{n};\mathbb{C}^{m})\) and \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\). Then the following are equivalent:_
_(i)_ _For every compact set_ \(K\subseteq\mathbb{C}^{n}\) _and_ \(\varepsilon>0\)_, there is a function_ \(g\in\mathcal{F}\) _satisfying_
\[\left\|f-g\right\|_{C(K;\mathbb{C}^{m})}<\varepsilon.\]
_(ii)_ _There is a sequence_ \((f_{k})_{k\in\mathbb{N}}\) _with_ \(f_{k}\in\mathcal{F}\) _for_ \(k\in\mathbb{N}\) _such that_ \((f_{k})_{k\in\mathbb{N}}\) _converges locally uniformly to_ \(f\)_._
Proof.: We start with the implication (i)\(\Rightarrow\)(ii). Let \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\). For every \(k\in\mathbb{N}\), choose \(f_{k}\in\mathcal{F}\) with
\[\left\|f_{k}-f\right\|_{C(\overline{B}_{k}(0);\mathbb{C}^{m})}\leq\frac{1}{k}.\]
Then \((f_{k})_{k\in\mathbb{N}}\) converges locally uniformly to \(f\), since every compact set \(K\subseteq\mathbb{C}^{n}\) is contained in \(\overline{B}_{k}(0)\) for all sufficiently large \(k\).
Now we show the implication (ii)\(\Rightarrow\)(i). For any compact set \(K\subseteq\mathbb{C}^{n}\) and \(\varepsilon>0\) we know by definition of locally uniform convergence that there is \(k\in\mathbb{N}\) satisfying
\[\left\|f_{k}-f\right\|_{C(K;\mathbb{C}^{m})}<\varepsilon.\]
Since \(f_{k}\in\mathcal{F}\), this shows (i).
In particular, Proposition A.2 yields the following equivalence:
(i') The set \(\mathcal{F}\) has the universal approximation property.
(ii') For every \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\), there is a sequence \((f_{k})_{k\in\mathbb{N}}\) of elements \(f_{k}\in\mathcal{F}\) such that \((f_{k})_{k\in\mathbb{N}}\) converges locally uniformly to \(f\).
Next, we show that locally uniform convergence of sequences \((f_{k})_{k\in\mathbb{N}}\) of elements \(f_{k}\in C(\mathbb{C}^{n};\mathbb{C}^{m})\) coincides with convergence with respect to the _compact-open topology_, cf. [8, Definition XII.1.1]. Hence, the compact-open topology is the relevant topology when we speak about universality of a set of continuous functions.
**Definition A.3**.: For each pair of sets \(A\subseteq\mathbb{C}^{n},B\subseteq\mathbb{C}^{m}\), we denote
\[(A,B):=\left\{f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\::\:f(A)\subseteq B\right\}.\]
The _compact-open topology_ on \(C(\mathbb{C}^{n};\mathbb{C}^{m})\) is then the smallest topology containing the sets \((K,V)\), where \(K\subseteq\mathbb{C}^{n}\) is compact and \(V\subseteq\mathbb{C}^{m}\) is open.
That this topology indeed induces locally uniform convergence is a direct consequence of [8, Theorem XII.7.2] and \(\mathbb{C}^{m}\) being a metric space.
**Proposition A.4**.: _Let \((f_{k})_{k\in\mathbb{N}}\) be a sequence of functions with \(f_{k}\in C(\mathbb{C}^{n};\mathbb{C}^{m})\) and \(f\in C(\mathbb{C}^{n};\mathbb{C}^{m})\). Then the following statements are equivalent._
* _The sequence_ \((f_{k})_{k\in\mathbb{N}}\) _converges to_ \(f\) _in the compact-open topology._
* _The sequence_ \((f_{k})_{k\in\mathbb{N}}\) _converges to_ \(f\) _locally uniformly._
In the present paper, it is of particular importance that the composition of functions is compatible with locally uniform convergence.
**Proposition A.5**.: _Let \((f_{k})_{k\in\mathbb{N}}\) and \((g_{k})_{k\in\mathbb{N}}\) be two sequences of functions with \(f_{k}\in C(\mathbb{C}^{n_{1}};\mathbb{C}^{n_{2}})\) and \(g_{k}\in C(\mathbb{C}^{n_{2}};\mathbb{C}^{n_{3}})\) for \(k\in\mathbb{N}\). Let \(f\in C(\mathbb{C}^{n_{1}};\mathbb{C}^{n_{2}})\) and \(g\in C(\mathbb{C}^{n_{2}};\mathbb{C}^{n_{3}})\) such that \(f_{k}\to f\) and \(g_{k}\to g\) locally uniformly. Then we have_
\[g_{k}\circ f_{k}\xrightarrow{k\to\infty}g\circ f\]
_locally uniformly._
Proof.: Using [8, Theorem XII.2.2], we know that the map
\[C(\mathbb{C}^{n_{1}};\mathbb{C}^{n_{2}})\times C(\mathbb{C}^{n_{2}};\mathbb{ C}^{n_{3}})\to C(\mathbb{C}^{n_{1}};\mathbb{C}^{n_{3}}),\quad(h_{1},h_{2}) \mapsto h_{2}\circ h_{1}\]
is continuous, where each space \(C(\mathbb{C}^{n_{j}};\mathbb{C}^{n_{k}})\) is equipped with the compact-open topology and Cartesian products of spaces are equipped with the product topology. Note here that we use the fact that \(\mathbb{C}^{n_{1}}\) and \(\mathbb{C}^{n_{3}}\) are Hausdorff spaces and \(\mathbb{C}^{n_{2}}\) is locally compact. Then the claim follows from Proposition A.4.
Note that the statement of Proposition A.5 can inductively be extended to the composition of \(L\) functions, where \(L\) is any natural number.
## Appendix B Taylor expansion using Wirtinger derivatives
In this appendix we give some details about the Taylor expansion introduced in Lemma 2.1. Furthermore we show that an activation function which is not \(\mathbb{R}\)-affine necessarily admits a point where one of the second-order Wirtinger derivatives does not vanish. We begin by restating and proving Lemma 2.1.
**Lemma B.1**.: _Let \(\varrho\in C(\mathbb{C};\mathbb{C})\) and \(z,z_{0}\in\mathbb{C}\). If \(\varrho\) is real differentiable in \(z_{0}\), then_
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\rm{wirt}}\varrho(z_{0})z+\overline {\partial}_{\rm{wirt}}\varrho(z_{0})\overline{z}+\Theta_{1}(z)\] (B.1)
_for a function \(\Theta_{1}:\mathbb{C}\to\mathbb{C}\) with \(\lim_{\mathbb{C}\setminus\{0\}\ni z\to 0}\frac{\Theta_{1}(z)}{z}=0\). If \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\), then_
\[\varrho(z+z_{0})=\varrho(z_{0})+\partial_{\rm{wirt}}\varrho(z_{0})z+\overline {\partial}_{\rm{wirt}}\varrho(z_{0})\overline{z}+\frac{1}{2}\partial_{\rm{wirt }}^{2}\varrho(z_{0})z^{2}+\partial_{\rm{wirt}}\overline{\partial}_{\rm{wirt}} \varrho(z_{0})z\overline{z}+\frac{1}{2}\overline{\partial}_{\rm{wirt}}^{2} \varrho(z_{0})\overline{z}^{2}+\Theta_{2}(z)\] (B.2)
_for a function \(\Theta_{2}:\mathbb{C}\to\mathbb{C}\) with \(\lim_{\mathbb{C}\setminus\{0\}\ni z\to 0}\frac{\Theta_{2}(z)}{z^{2}}=0\)._
Proof.: Equation (B.1) follows from the definition of real differentiability (2.1) by using
\[\frac{\partial\varrho}{\partial x}(z_{0})\operatorname{Re}(z)+\frac{\partial \varrho}{\partial y}(z_{0})\operatorname{Im}(z)=\frac{\partial\varrho}{ \partial x}(z_{0})\cdot\frac{1}{2}(z+\overline{z})+\frac{\partial\varrho}{ \partial y}(z_{0})\cdot\frac{1}{2\mathrm{i}}(z-\overline{z})=\partial_{\mathrm{wirt} }\varrho(z_{0})z+\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\overline{z}.\]
In order to prove (B.2) we use the second-order Taylor expansion of \(\varrho\) around \(z_{0}\) which can be found for instance in [1, Theorem VII.5.11] and obtain
\[\varrho(z+z_{0})=\varrho(z_{0})+\frac{\partial\varrho}{\partial x}(z_{0})x+ \frac{\partial\varrho}{\partial y}(z_{0})y+\frac{1}{2}\frac{\partial^{2} \varrho}{\partial x^{2}}(z_{0})x^{2}+\frac{\partial^{2}\varrho}{\partial x \partial y}(z_{0})xy+\frac{1}{2}\frac{\partial^{2}\varrho}{\partial y^{2}}(z_ {0})y^{2}+\Theta_{2}(z)\]
where \(\Theta_{2}:\mathbb{C}\to\mathbb{C}\) satisfies \(\lim_{\mathbb{C}\setminus\{0\}\ni z\to 0}\frac{\Theta_{2}(z)}{z^{2}}=0\). Furthermore, we use the notation \(x=\operatorname{Re}(z)\) and \(y=\operatorname{Im}(z)\). Letting \(x=\frac{1}{2}(z+\overline{z})\), \(y=\frac{1}{2\mathrm{i}}(z-\overline{z})\) and using
\[\frac{1}{4}\cdot\begin{pmatrix}1&-2\mathrm{i}&-1\\ 1&0&1\\ 1&2\mathrm{i}&-1\end{pmatrix}\begin{pmatrix}\frac{\partial^{2}}{\partial x^{2} }\\ \frac{\partial^{2}}{\partial x\partial y}\\ \frac{\partial^{2}}{\partial y^{2}}\end{pmatrix}=\begin{pmatrix}\partial_{ \mathrm{wirt}}^{2}\\ \partial_{\mathrm{wirt}}^{2}\overline{\partial}_{\mathrm{wirt}}\\ \overline{\partial}_{\mathrm{wirt}}^{2}\end{pmatrix}\] (B.3)
yields the claim.
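As a quick numerical sanity check of the expansion (B.2), the sketch below (Python/NumPy; the test point and test function are our choices, not from the text) uses \(\varrho(z)=\operatorname{Re}(z)^{2}\), whose Wirtinger derivatives are easily computed by hand; for this particular \(\varrho\) the second-order expansion is even exact:

```python
import numpy as np

rho = lambda z: z.real ** 2
z0 = 0.7 - 0.3j

# Wirtinger derivatives of rho(z) = Re(z)^2 at z0, computed by hand:
d, dbar = z0.real, z0.real            # first order
d2, ddbar, dbar2 = 0.5, 0.5, 0.5      # second order

for r in [1e-1, 1e-2, 1e-3]:
    z = r * np.exp(1j * 0.9)          # perturbation of modulus r
    taylor = (rho(z0) + d * z + dbar * np.conj(z)
              + 0.5 * d2 * z ** 2 + ddbar * z * np.conj(z)
              + 0.5 * dbar2 * np.conj(z) ** 2)
    # Difference is at machine precision: Theta_2 vanishes identically here.
    print(r, abs(rho(z0 + z) - taylor))
```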
The following Proposition is required in the proof of Proposition 3.2.
**Proposition B.2**.: _Let \(\varrho\in C^{2}(\mathbb{C};\mathbb{C})\) be not \(\mathbb{R}\)-affine. Then there is a point \(z_{0}\in\mathbb{C}\) such that either \(\partial_{\mathrm{wirt}}^{2}\varrho(z_{0})\neq 0\), \(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}\varrho(z_{0})\neq 0\) or \(\overline{\partial}_{\mathrm{wirt}}^{2}\varrho(z_{0})\neq 0\)._
Proof.: Assume \(\partial_{\mathrm{wirt}}^{2}\varrho\equiv\partial_{\mathrm{wirt}}\overline{ \partial}_{\mathrm{wirt}}\varrho\equiv\overline{\partial}_{\mathrm{wirt}}^{2} \varrho\equiv 0\). From the fact that the matrix on the left-hand side in (B.3) is invertible it follows \(\frac{\partial^{2}\varrho}{\partial x^{2}}\equiv\frac{\partial^{2}\varrho}{ \partial x\partial y}\equiv\frac{\partial^{2}\varrho}{\partial y^{2}}\equiv 0\). Since \(\varrho\) is \(\mathbb{R}\)-affine if and only if \(\operatorname{Re}(\varrho)\) and \(\operatorname{Im}(\varrho)\) are both \(\mathbb{R}\)-affine, we may assume that \(\varrho\) is real-valued. It is a well-known fact that a \(C^{1}\)-function with vanishing gradient is necessarily constant. Applying this fact to \(\frac{\partial\varrho}{\partial x}\) and \(\frac{\partial\varrho}{\partial y}\) separately shows
\[\nabla\varrho\equiv a\]
for a constant \(a\in\mathbb{R}^{2}\). Let \(f(z):=z^{\top}a\) where \(z\in\mathbb{C}\) is treated as an element of \(\mathbb{R}^{2}\). Then the gradient of \(\varrho-f\) vanishes identically and hence it holds \(\varrho-f\equiv b\) for a constant \(b\in\mathbb{R}\). This yields
\[\varrho(z)=z^{\top}a+b\quad\text{for all $z\in\mathbb{C}$}.\]
But then \(\varrho\) is \(\mathbb{R}\)-affine, contradicting the assumption, which proves the claim.
**Acknowledgements.** The authors thank Felix Voigtlaender for helpful comments. PG acknowledges support by the German Science Foundation (DFG) in the context of the Emmy Noether junior research group VO 2594/1-1.
id: 2303.09200
title: Reduction of rain-induced errors for wind speed estimation on SAR observations using convolutional neural networks
abstract: Synthetic Aperture Radar is known to be able to provide high-resolution estimates of surface wind speed. These estimates usually rely on a Geophysical Model Function (GMF) that has difficulties accounting for non-wind processes such as rain events. Convolutional neural networks, on the other hand, have the capacity to use contextual information and have demonstrated their ability to delimit rainfall areas. By carefully building a large dataset of SAR observations from the Copernicus Sentinel-1 mission, collocated with both GMF and atmospheric model wind speeds as well as rainfall estimates, we were able to train a wind speed estimator with reduced errors under rain. Collocations with in-situ wind speed measurements from buoys show a root mean square error that is reduced by 27% (resp. 45%) under rainfall estimated at more than 1 mm/h (resp. 3 mm/h). These results demonstrate the capacity of deep learning models to correct rain-related errors in SAR products.
authors: Aurélien Colin, Pierre Tandeo, Charles Peureux, Romain Husson, Ronan Fablet
published_date: 2023-03-16T10:19:14Z
link: http://arxiv.org/abs/2303.09200v2

# Reduction of rain-induced errors for wind speed estimation on SAR observations using convolutional neural networks
###### Abstract
Synthetic Aperture Radar is known to be able to provide high-resolution estimates of surface wind speed. These estimates usually rely on a Geophysical Model Function (GMF) that has difficulties accounting for non-wind processes such as rain events. Convolutional neural network, on the other hand, have the capacity to use contextual information and have demonstrated their ability to delimit rainfall areas. By carefully building a large dataset of SAR observations from the Copernicus Sentinel-1 mission, collocated with both GMF and atmospheric model wind speeds as well as rainfall estimates, we were able to train a wind speed estimator with reduced errors under rain. Collocations with in-situ wind speed measurements from buoys show a root mean square error that is reduced by 27% (resp. 45%) under rainfall estimated at more than 1 mm/h (resp. 3 mm/h). These results demonstrate the capacity of deep learning models to correct rain-related errors in SAR products.
Synthetic Aperture Radar, Deep Learning, Oceanography, Wind.
## I Introduction
Synthetic Aperture Radar (SAR) is a powerful tool for studying the ocean surface. C-band SAR is sensitive to variations in sea surface roughness and has been used to detect various meteorological and ocean (metocean) processes, such as atmospheric or ocean fronts [1], icebergs [2], oil surfactants from pollution [3] or generated by plankton [4], and some species of seaweed [5]. It is particularly useful for studying waves [6] and extreme events like cyclones [7]. Particular attention has been given to estimating wind speed using these sensors.
As the number of satellite missions with C-SAR sensors increases and archives of these data accumulate, it is becoming easier to build large SAR datasets. This paper focuses on the Sentinel-1 mission from the Copernicus program, which consists of two satellites, Sentinel-1A (launched in 2014) and Sentinel-1B (launched in 2016 and out of operation since December 2021). Sentinel-1C is planned to be launched in 2023. Ground Range Detected Higher Resolution Interferometric Wide-swath (GRDH IW) observations have a range extent of 250 km, an azimuth extent of about 200 km, and a resolution of 10 m/px. These observations are routinely acquired, mainly over coastal areas. Systematic processes are used to produce geophysical products from these observations, including wind speed estimates. Several Geophysical Model Functions (GMFs) have been developed for this purpose, including CMOD3 [8], CMOD4 [9], CMOD5 [10], CMOD5.N [11], CMOD6 [12], CMOD7 [13] and C_SARMOD2 [14]. However, these GMFs are sensitive to contamination from non-wind processes. In particular, rainfall can either increase or decrease sea surface roughness [15], making it difficult to correct for its effects.
Deep learning models, particularly Convolutional Neural Networks (CNNs), have demonstrated their ability to detect rain signatures in SAR observations [16]. These models are known to be able to tackle denoising [17] and inpainting [18] tasks because they use contextual information to estimate the
original signal. This paper is dedicated to estimating wind speed in rainy areas using a model that does not require an explicit rainfall prior and only uses the parameters available to GMFs.
In the first section, we present the SAR data used to train the model and the ancillary information available. The second section describes the methodology used to build the dataset, with special attention given to ensuring a balanced representation of rainfall observations. The final section presents the results on the training set and confirms them with in-situ measurements from buoys, demonstrating the model's ability to correct rain-induced overestimates.
## II Dataset
The SAR measurements used in this chapter come from 19,978 IW observations acquired globally between March 3rd, 2018 and February 23rd, 2022, inclusive. Each of these observations covers approximately 44,000 km² and has a resolution of 100 m/px, downscaled from the GRDH products available at 10 m/px.
Obtaining global information on rain that can be used in conjunction with SAR observations can be difficult. A previous study conducted using a global Sentinel-1 dataset found only 2,304 partial collocations with the satellite-based radar GPM-DPR [19] out of 182,153 IWs. "Partial collocations" refers to instances where at least 20x20 km of a swath is observed by the spaceborne weather radar 20 minutes before or after the SAR observation. Coastal ground-based radars like NEXRAD [20] could provide rainfall estimates, but they are affected by topography and may not capture all wind regimes. Therefore, SAR-based rain estimation is preferred to maximize the number of available observations and simplify the collocation process. We used a recent SAR rainfall estimator [16] that emulates NEXRAD's reflectivity and proposes three rainfall thresholds that roughly correspond to 1 mm/h, 3 mm/h, and 10 mm/h.
Ancillary information, such as incidence angle and satellite heading, is retrieved from Sentinel-1 Level-2 products. It also includes collocations with atmospheric models from the European Centre for Medium-Range Weather Forecasts, which provide modelled wind speed and direction, as well as the surface wind speed computed by the GMF. The atmospheric models have a spatial resolution of 0.25x0.25 degrees and a temporal resolution of 3 hours, while the GMF has a spatial resolution of 1 km and corresponds to the observation itself.
## III Methodology
This section presents the methodology for building the rain-invariant wind speed estimator. We first describe the deep learning architecture of the model, then we discuss the creation of the dataset, which is biased to have a large number of rain examples. The final section describes the evaluation procedure.
### _Deep Learning Model_
The architecture used in this chapter is the UNet architecture [21] depicted in Fig. 1. UNet is an autoencoder architecture with the advantage of being fully convolutional, meaning it has translation equivariance properties (translations of the input result in translations of the output). In addition, skip connections between the encoder and the decoder facilitate training, especially by reducing the vanishing gradient issue [22]. Introduced in 2015, UNet has been used in various domains and has demonstrated its importance for segmentation of SAR observations [23, 16, 24].
The output of the model always contains a single convolution kernel, activated by the ReLU function to ensure that the prediction is in the interval [0, +\(\infty\)[. All convolution kernels in the hidden layers are also activated by ReLU functions. The model is set to take input of 256x256 pixels during training, but since the weights only describe convolution kernels, it is possible to use the model for inference on images of any shape as long as the input resolution remains at 100 m/px. Variants of the model are trained with different numbers of input channels. The architecture is modified by changing the size of the first convolution kernel, which is defined as a kernel of size (3, 3, c, 32), where c is the number of input channels.
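As an illustration, a minimal PyTorch sketch of such a network is given below. Only the first kernel shape (3, 3, c, 32), the hidden ReLU activations, and the single ReLU-activated output kernel follow the text; the depth and channel progression are simplifying assumptions of ours:

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Illustrative two-level UNet: variable input channels c, single ReLU-activated output."""
    def __init__(self, c: int):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(c, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())  # 64 = skip + upsampled
        self.head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.ReLU())   # wind speed in [0, +inf)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

model = MiniUNet(c=4)                     # e.g. a multi-channel variant
y = model(torch.randn(2, 4, 256, 256))    # trained on 256x256 tiles; fully convolutional
print(y.shape)                            # torch.Size([2, 1, 256, 256])
```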
### _Dataset balancing procedure_
In this section, we describe the process for building a balanced dataset. Our goals are to (I) ensure that the wind distribution of each dataset is close to the real-world distribution, (II) prevent information leak between the training, validation, and test subsets, (III) ensure that the groundtruth wind speed, obtained from an atmospheric model, accurately represents the real-world wind speed, and (IV) include enough rain samples to allow the model to learn from them.
_Rain and rainless patches selection._ As discussed earlier, rainfall estimation is provided by a deep learning model at a resolution of 100 m/px, on the same grid as the SAR observation. Therefore, it is possible to separate the observations into two areas, \(\mathcal{A}^{+}\) and \(\mathcal{A}^{-}\), based on the 3 mm/h threshold from the rainfall estimation.
\[\mathcal{A}^{+}=\{x:\mathrm{Rainfall}(x)\geq 3\,\mathrm{mm/h}\}\] (Eq. 1) \[\mathcal{A}^{-}=\{x:\mathrm{Rainfall}(x)<3\,\mathrm{mm/h}\}\] (Eq. 2)
However, most SAR observations do not contain rain signatures. Collocations with GPM's dual polarization radar, a satellite-based weather radar, indicated that the probability of rain rates higher than 3 mm/h was 0.5%. Thus, by dividing the SAR observations into tiles of 256 by 256 pixels, we call "rain patches" the tiles with more than 5% of their surface predicted to have rain rates higher than 3 mm/h, and "rainless patches" those without rain signatures. We denote \(n_{+}\) as the number of rain patches and \(n_{-}\) as the number of rainless patches. To ensure that the model will learn regardless of the rain-situation, we set \(n_{+}=n_{-}\).
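A sketch of this tiling and labeling step follows (Python/NumPy; the array name is an assumption, and we read "without rain signatures" as zero pixels above the lowest 1 mm/h class, which the text leaves implicit):

```python
import numpy as np

def label_patches(rain_rate, tile=256, rain_frac=0.05):
    """rain_rate: 2D array of rainfall estimates (mm/h) at 100 m/px on the SAR grid."""
    rain, rainless = [], []
    H, W = rain_rate.shape
    for i in range(0, H - tile + 1, tile):
        for j in range(0, W - tile + 1, tile):
            patch = rain_rate[i:i + tile, j:j + tile]
            if (patch >= 3.0).mean() > rain_frac:   # >5% of the surface above 3 mm/h
                rain.append((i, j))
            elif (patch >= 1.0).sum() == 0:         # assumed: no rain signature at all
                rainless.append((i, j))
    return rain, rainless
```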
_Restriction to a priori Accurate Model Wind Speeds._ Atmospheric models have been known to lack resolution (0.25x0.25 degrees spatially, 3 hours temporally) and to be unable to accurately depict fine-scale wind fields. However, they are computed globally and independently of the SAR observations. On the other hand, SAR-based wind fields from the GMFs are known to be accurate on rainless patches, but to overestimate on rain signatures. We calculate \(\Delta_{\mathcal{A}^{-}}\) as the discrepancy between the GMF and the atmospheric model on rain-free pixels.

Fig. 1: Architecture of the UNet model used for estimating the wind speed.

Fig. 2: Distribution of ECMWF ERA interim wind speed collocated with surface rain rate from GPM-DPR for rainfall higher than 3 mm/h (a) -0.55% of the collocations- and 30 mm/h (b) -0.02% of the collocations. The orange curve shows the wind distribution regardless of the rainfall.
\[\Delta_{\mathcal{A}^{-}}=\mathrm{MSE}_{|\mathcal{A}^{-}}(\mathrm{Atm},\mathrm{GMF})\] (Eq. 3)
In our experiments, the threshold was set at \(\Delta_{\mathcal{A}^{-}}<1\) m/s. All patches containing a higher discrepancy between the two wind speed sources were discarded.
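A minimal sketch of this filter (Python/NumPy; the array names are assumptions; note that Eq. 3 is written as an MSE while the 1 m/s threshold reads like an RMSE, so we simply apply the threshold as stated):

```python
import numpy as np

def keep_patch(wind_atm, wind_gmf, rain_rate, thresh=1.0):
    """Discard patches where the model and the GMF disagree on rain-free pixels (Eq. 3)."""
    mask = rain_rate < 3.0                 # A- pixels, Eq. 2
    if not mask.any():
        return False
    delta = np.mean((wind_atm[mask] - wind_gmf[mask]) ** 2)
    return delta < thresh
```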
_Balancing to the real-world wind distribution._ It should be noted that this condition ensures accurate modeled wind speeds and rain distribution, especially because the rainfall estimator is known to overestimate rainfall at high wind speeds.
We denote:
* \(P^{+}\) as the wind speed distribution on \(n_{+}\).
* \(P^{-}\) as the wind speed distribution on \(n_{-}\).
* \(P\) as the wind speed distribution on \(n_{-}\cup n_{+}\).
Balancing the dataset to the real-world wind distribution translates to the following condition:
\[\forall x,P(x)=\frac{n_{+}P^{+}(x)+n_{-}P^{-}(x)}{n_{+}+n_{-}}\] (Eq. 4)
As we choose to keep all rain patches and to set \(n_{+}=n_{-}\), Eq. 4 leads to:
\[\forall x,P^{-}(x)=2P(x)-P^{+}(x)\] (Eq. 5)
For some wind speeds \(x\), \(P^{+}(x)\) is higher than twice \(P(x)\), which would make \(P^{-}(x)\) negative. In these cases, we relax the condition from Eq. 5 in order to avoid removing rain patches. Fig. 3 depicts the wind speed distribution for rain and rainless patches. The mean squared error between \(P\) and \(\frac{1}{2}(P^{+}+P^{-})\) reaches 8.8%.
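A sketch of this balancing step is given below (Python/NumPy; the bin edges, the seed, and the per-bin subsampling scheme are illustrative assumptions). It keeps all rain patches and draws rainless patches toward the target \(P^{-}\) of Eq. 5, clipped at zero where \(P^{+}>2P\):

```python
import numpy as np

def subsample_rainless(ws_rain, ws_rainless, bins=np.arange(0, 25.5, 0.5), seed=0):
    """Draw rainless patches so that P ~ (P+ + P-)/2 per wind-speed bin (Eq. 4-5)."""
    rng = np.random.default_rng(seed)
    ws_all = np.concatenate([ws_rain, ws_rainless])
    P, _ = np.histogram(ws_all, bins, density=True)
    Pp, _ = np.histogram(ws_rain, bins, density=True)
    target = np.clip(2 * P - Pp, 0, None)          # Eq. 5, relaxed where P+ > 2P
    target /= target.sum()
    idx_bin = np.digitize(ws_rainless, bins) - 1
    keep, n_draw = [], len(ws_rain)                # n- = n+
    for b, frac in enumerate(target):
        cand = np.where(idx_bin == b)[0]
        k = min(len(cand), int(round(frac * n_draw)))
        keep.extend(rng.choice(cand, size=k, replace=False))
    return np.array(keep)                          # indices of retained rainless patches
```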
The dataset can be further balanced to ensure that, for each wind speed, the number of rain and rainless patches is equal. However, this leads to the removal of 84% of the data. Appendix 1 compares the performance on this second dataset. As it did not provide improvements, this dataset is left out of the main document.
#### III-B1 Training, Validation and Test Set Division
After extracting the patches following the distributions \(P^{+}\) and \(P^{-}\), they are split into training, validation, and test sets. Each subset preserves the same distributions. Furthermore, to avoid information leakage, if a patch from one IW is in a subset, every patch from the same IW belongs to the same subset. The stochastic brute forcing method described in Algorithm 1 draws random IWs and computes the distribution of the validation and test subsets, compares them to the overall distributions, and returns the solution that minimizes the difference. In this algorithm, \(\bar{P}_{e}\) indicates the wind speed distribution multiplied by the number of patches in \(e\) and divided by the total number of patches. It ensures that the validation and test subsets each contain approximately 10% of all the patches.
This process results in 168349, 20944, and 21010 patches in the train, test, and validation subsets, respectively, for 14169, 1763, and 1763 IWs.
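A sketch of the stochastic brute forcing split described above is given below (Python/NumPy; the trial count and the distribution-distance measure are simplifying assumptions, and the patch-count weighting of \(\bar{P}_{e}\) is replaced by a plain histogram distance):

```python
import numpy as np

def split_iws(iw_of_patch, ws_of_patch, bins, trials=1000, seed=0):
    """Assign whole IWs to val/test (~10% of patches each), keeping wind distributions close."""
    rng = np.random.default_rng(seed)
    iws, inv = np.unique(iw_of_patch, return_inverse=True)
    counts = np.bincount(inv)                       # patches per IW
    n = len(ws_of_patch)
    P_ref, _ = np.histogram(ws_of_patch, bins, density=True)
    best, best_score = None, np.inf
    for _ in range(trials):
        order = rng.permutation(len(iws))
        csum = np.cumsum(counts[order])
        k1, k2 = np.searchsorted(csum, [0.1 * n, 0.2 * n])
        val, test = iws[order[:k1]], iws[order[k1:k2]]
        score = 0.0
        for subset in (val, test):
            m = np.isin(iw_of_patch, subset)
            P_sub, _ = np.histogram(ws_of_patch[m], bins, density=True)
            score += np.abs(P_sub - P_ref).sum()    # compare subset and global distributions
        if score < best_score:
            best, best_score = (val, test), score
    return best                                     # remaining IWs form the training set
```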
Before training the model, we compute the mean and standard deviation of each channel on the training set and use them to normalize the inputs during training, validation, and inference. The output, however, is not normalized. We train the model for 100,000 weight updates, with a batch size of 16 and a learning rate of \(10^{-5}\) using the Adam optimizer.
Fig. 3: Wind speed distributions for rain (blue) and rainless (red) patches. The orange curve corresponds to the real-world wind distribution.
### _Evaluation procedure_
To evaluate the impact of each input channel, we train various variants of the model:
* I uses only the VV channel.
* II uses the same inputs as the GMF: the VV channel, the incidence angle and the _a priori_ wind speed direction.
* III uses the same inputs as II and the VH channel.
* IV uses the same inputs as III and the wind speed prior.
* V uses only the wind speed prior. We note that this variant implicitly uses the VV channel, though at a resolution of only 1 km/px.
Each architecture is trained five times to reduce the impact of random initialization on the evaluation results. The results are presented as the mean and standard deviation over these five independent trainings.
We compare the results using the Root Mean Square Error (RMSE) and the Pearson correlation coefficient (PCC). The PCC is formulated in Eq. 6.
\[PCC_{Y,\hat{Y}}=\frac{\mathds{E}[(Y-\mu_{Y})(\hat{Y}-\mu_{\hat{Y}})]}{\sigma_{Y} \sigma_{\hat{Y}}}\] (Eq. 6)
The results are computed against both the groundtruths from the atmospheric model, which provides a large test set, and against collocations with buoys, which have good temporal resolution and are in-situ measurements.
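Both metrics are straightforward to compute; a minimal sketch (Python/NumPy) follows:

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Square Error between groundtruth y and prediction y_hat."""
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))

def pcc(y, y_hat):
    """Pearson Correlation Coefficient, Eq. 6: centered covariance over the two stds."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean((y - y.mean()) * (y_hat - y_hat.mean())) / (y.std() * y_hat.std())
```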
## IV Results
### _Benchmarking experiments_
The performance of the models compared to ECMWF is calculated on the test subset for each input variant and the baseline GMF. The results of this analysis can be found in Table I. It appears that the most important input is the GMF itself, as both IV and V achieve better results than the other variants. I, II and III are unable to achieve better results than the GMF, except under strong rainfall, even though II and III have access to all the channels used by the GMF.
### _Application to SAR observation with groundtruthed in-situ data_
However, ECMWF wind speeds are reanalysis data and not in-situ data, which can be obtained using anemometers on buoys. Using the dataset created in [25], 4732 collocation points between Sentinel-1 and NDBC buoys are identified. The rain prediction model estimates that 4643 of these points are rainless, 75 record rainfall of more than 1 mm/h, and 14 record rainfall of more than 3 mm/h. On a side note, the height at which in-situ measurements were taken varies, with most being between 3.8 m and 4.1 m above sea level. As mentioned in [25], the SAR inversion and deep learning prediction are both normalized to the altitude of the corresponding in-situ measurement using a power law [26]:
\[w(h)=\left(\frac{10}{h}\right)^{0.11}\] (Eq. 7)
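This normalization reduces to a simple scaling step; the snippet below (Python; the variable names and the example value are illustrative) applies the factor of Eq. 7 as printed, noting that the standard wind-profile power law is often written \(u(h)=u(10)\,(h/10)^{0.11}\):

```python
def height_factor(h_meters, alpha=0.11):
    """Scaling factor between the 10 m reference level and height h, per Eq. 7 as printed."""
    return (10.0 / h_meters) ** alpha

# Hypothetical example: normalize an 8 m/s SAR-derived wind to a buoy anemometer at 4 m.
w_normalized = 8.0 * height_factor(4.0)
```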
Table II indicates that the performance of the deep learning model is higher than that of the GMF for both the Root Mean Square Error and the Pearson Correlation Coefficient for all rain ranges. The RMSE decreases by 0.04, 0.39, and 1.33 m/s for rainless, light rain, and moderate rain situations, respectively. The bias is also lower for the deep learning model, except in rainless situations.
Table II also demonstrates the importance of the dataset building scheme, as a dataset composed of random collocations (without the aforementioned sample selection) between ECMWF and the GMF, referred to as the "neutral dataset," consistently shows lower performance than the balanced dataset.
In the following, we examine two cases where rainfall was detected at the buoy position at the time of observation.
#### IV-B1 2017-01-08 01:58:19 at NDBC 46054
The observation from 2017-01-08 01:58:19 covers the north of the Californian Channel Islands (Fig. 5.a). Several meteorological buoys are dispersed over the channel, including NDBC 46054 and NDBC 46053, which are indicated as red dots. The wind speed over the area is mostly around 6 m/s, but a squall line appears at the position of NDBC 46054 and spans over a dozen kilometers. Rain signatures are clearly visible on the southern (or upper, as the image is in sensor geometry) half of the front, which is detected by the rain detector. In the northern half, the backscattering is still high, but the rain signature is difficult to interpret. The GMF indicates very high wind speeds, higher than 20 m/s (Fig. 5.d), while the deep learning model attenuates these values to between 6 m/s and 8 m/s (Fig. 5.e). The southern half of the front, where rain signatures are visible, is the most attenuated.

| Model (inputs) | Rain rate | RMSE (Balanced) | RMSE (Neutral) | PCC (Balanced) | PCC (Neutral) |
|---|---|---|---|---|---|
| I [VV] | [0,1[ mm/h | 1.38 [0.016] | **2.40 [0.019]** | 89.9% [0.19%] | 74.2% [0.42%] |
| I [VV] | [1,3[ mm/h | 1.64 [0.046] | **2.78 [0.038]** | 92.7% [0.39%] | 79.5% [0.58%] |
| I [VV] | [3,10[ mm/h | 1.59 [0.038] | **3.18 [0.070]** | **92.7% [0.38%]** | 78.9% [0.62%] |
| I [VV] | ≥10 mm/h | **2.12 [0.052]** | **★ 3.27 [0.135]** | **81.7% [1.03%]** | 74.3% [0.90%] |
| II [VV, INC, WDIR] | [0,1[ mm/h | 0.87 [0.009] | **2.26 [0.019]** | 96.2% [0.06%] | 77.7% [0.24%] |
| II [VV, INC, WDIR] | [1,3[ mm/h | 1.04 [0.086] | **2.87 [0.061]** | 97.2% [0.47%] | 77.6% [0.24%] |
| II [VV, INC, WDIR] | [3,10[ mm/h | **1.19 [0.053]** | **3.39 [0.193]** | 96.2% [0.37%] | 75.0% [3.04%] |
| II [VV, INC, WDIR] | ≥10 mm/h | **2.27 [0.151]** | 3.86 [0.700] | **81.7% [1.89%]** | 63.2% [9.19%] |
| III [VV, VH, INC, WDIR] | [0,1[ mm/h | 0.83 [0.002] | 2.18 [0.006] | 96.5% [0.02%] | 79.3% [0.07%] |
| III [VV, VH, INC, WDIR] | [1,3[ mm/h | 0.93 [0.020] | **2.73 [0.022]** | 97.7% [0.07%] | **80.1% [0.29%]** |
| III [VV, VH, INC, WDIR] | [3,10[ mm/h | **1.09 [0.022]** | **3.25 [0.052]** | **96.7% [0.07%]** | 78.0% [0.44%] |
| III [VV, VH, INC, WDIR] | ≥10 mm/h | **2.13 [0.050]** | **3.68 [0.310]** | **83.9% [0.66%]** | 70.8% [2.95%] |
| IV [VV, VH, INC, WDIR, GMF] | [0,1[ mm/h | **★ 0.64 [0.007]** | **★ 1.90 [0.028]** | **★ 97.9% [0.04%]** | **★ 85.0% [0.33%]** |
| IV [VV, VH, INC, WDIR, GMF] | [1,3[ mm/h | **★ 0.63 [0.015]** | **★ 2.29 [0.075]** | **★ 98.9% [0.03%]** | **★ 87.1% [0.56%]** |
| IV [VV, VH, INC, WDIR, GMF] | [3,10[ mm/h | **★ 0.78 [0.040]** | **★ 2.55 [0.132]** | **★ 98.4% [0.08%]** | **★ 87.1% [1.12%]** |
| IV [VV, VH, INC, WDIR, GMF] | ≥10 mm/h | **★ 1.63 [0.162]** | **3.37 [0.113]** | **★ 90.9% [1.17%]** | 73.4% [2.31%] |
| V [GMF] | [0,1[ mm/h | **0.67 [0.003]** | 3.16 [2.650] | **97.7% [0.02%]** | 80.7% [17.55%] |
| V [GMF] | [1,3[ mm/h | **0.68 [0.005]** | 4.17 [3.582] | **98.8% [0.01%]** | **82.5% [12.67%]** |
| V [GMF] | [3,10[ mm/h | **0.88 [0.021]** | 4.53 [3.739] | **98.0% [0.03%]** | **83.4% [8.19%]** |
| V [GMF] | ≥10 mm/h | **1.94 [0.087]** | 4.79 [2.917] | **87.9% [0.66%]** | 74.0% [8.19%] |
| GMF | [0,1[ mm/h | 0.77 | 2.41 | 97.0% | 81.2% |
| GMF | [1,3[ mm/h | 0.84 | 3.16 | 98.1% | 80.0% |
| GMF | [3,10[ mm/h | 1.25 | 3.42 | 96.5% | 81.9% |
| GMF | ≥10 mm/h | 4.65 | 3.70 | 52.5% | ★ 75.4% |

Table I: Comparison of the five variants of the model and the two datasets. RMSE and PCC are computed on the respective test set and for five trainings with random initialization. Results are given as mean and standard deviation in brackets. The best result for each metric is indicated by ★; results better than the GMF are highlighted in bold.
For NDBC 46054, only one measurement of wind speed and direction per hour is available. It recorded a wind speed of 6.3 m/s eight minutes before the SAR observation. The GMF and the deep learning model estimated wind speeds of 15.1 m/s and 5.9 m/s, respectively. While the temporal resolution of NDBC 46054 is one measurement per hour, NDBC 46053 records data every ten minutes. Furthermore, the gust front appears to be moving toward the right part of the observation. This can be seen in the time series in Fig. 6 as a large variation in wind direction between 02:40:00 and 03:00:00. The variation in wind speed seems to precede the variation in direction, first increasing then decreasing to a lower wind regime. On NDBC 46053, the GMF and the deep learning model agree on a wind speed of 4.5 m/s, which is slightly lower than the in-situ data of 5.4 m/s. Since the distance between NDBC 46054 and NDBC 46053 is approximately 60 km, the progression of the gust front can be estimated to be around 90 km/h. With a width of around 5 or 6 km, the whole system would pass the buoys in three minutes. This means that even NDBC 46053 may not have been able to accurately estimate the wind speed due to its low temporal resolution. However, it is worth noting that even the gust speed at NDBC 46054, defined as the maximum wind speed over a given number of seconds, does not record a speed higher than 9 m/s.

| Metric | Rain rate | Balanced | Neutral | GMF |
|---|---|---|---|---|
| Bias | <1 mm/h | 0.73 (0.04) | 1.32 (0.04) | ★ 0.71 |
| Bias | [1,3] mm/h | **★ 1.18 (0.04)** | **1.47 (0.04)** | 1.64 |
| Bias | >3 mm/h | **★ 0.92 (0.07)** | 1.96 (0.29) | 2.93 |
| RMSE | <1 mm/h | **★ 1.14 (0.03)** | 1.76 (0.12) | 1.48 |
| RMSE | [1,3] mm/h | **★ 1.81 (0.04)** | **1.95 (0.18)** | 2.18 |
| RMSE | >3 mm/h | **★ 1.60 (0.10)** | **2.42 (0.21)** | 2.93 |
| PCC | <1 mm/h | **★ 93.6% (0.16%)** | 92.9% (0.20%) | 93.4% |
| PCC | [1,3] mm/h | **★ 96.3% (0.22%)** | 93.4% (0.19%) | 95.3% |
| PCC | >3 mm/h | **★ 95.9% (0.35%)** | **93.4% (2.04%)** | 91.3% |

Table II: Bias, Root Mean Square Error, and Pearson Correlation Coefficient of model IV (balanced and neutral datasets) and the GMF, for each rainfall level. The best result for each metric and rainfall level is indicated by ★. Results better than the baseline are in bold. Results are given as mean and standard deviation in brackets.

Figure 5: Observation from January 08th 2017 at 01:58:19 in VV channel (a), zoom on an area of 35x35 km centred on the buoy NDBC 46054 (b), segmentation of the rain rate (c), wind speed given by the GMF (d) and by the deep learning model (e).
#### IV-B2 SAR-20191006T232853 NDBC-41009
The observation from 2019-10-06 23:28:53 was recorded on the east coast of Florida. While most of the swath covers the marshes around Orlando and Cap Canaveral rather than the ocean, convective precipitation can be observed in the right part of the image (Fig. 7.a). The cells are moving downward (north-north-east), as indicated by the stronger gradient of the convective front. Since the wind from the convection is opposing the underlying wind regime, an area of lower wind speed appears as an area of lower backscatter. Rain signatures are clearly visible south of NDBC 41009 (Fig. 7.b). The GMF is impacted by these rain signatures and estimates a very high local wind speed (Fig. 7.c). The deep learning model is less affected by the rain signatures, but also appears to blur the low wind speed area (Fig. 7.d).
The time series from NDBC 41009 in Fig. 8 indicates that the lower backscattering was indeed caused by a drop in wind speed rather than a change in direction, as the latter does not significantly change during the passage of the convective cell (possibly because the underlying wind regime is strong). It does record a sudden drop in wind speed to 7.5 m/s one minute after the SAR observation, while the GMF and the deep learning model estimated wind speeds of 13.7 m/s and 8.9 m/s, respectively.
## V Conclusion
Previous studies have shown that high-resolution rain signatures can be automatically extracted from SAR observations. Using this SAR rainfall segmenter, we built a wind estimation dataset where 50% of the patches contain rainfall examples. Samples were chosen so that a SAR-based and a SAR-independent wind speed model agree on non-rain pixels, ensuring that their estimates are close to the true wind speed. A UNet architecture was trained on this dataset to estimate wind speeds based on the SAR-independent atmospheric model. We tested several input combinations and found that the most important parameter was the wind speed prior from the geophysical model function, which the deep learning model had difficulty emulating.
Collocations with buoy in-situ measurements show that the model outperforms the current Geophysical Model Function (GMF) on rain areas, reducing the Root Mean Square Error (RMSE) by 27% (resp. 45%) for rain rates higher than 1 mm/h (resp. 3 mm/h). On rainless areas, performances are similar with a small reduction of the RMSE by 2.7%. However, since the buoys have a time resolution of ten minutes, some quick sub-mesoscale processes, such as gust fronts, are difficult to register. The limited spatial range of the buoys also makes it challenging to observe rare phenomena. Future work should address these concerns.

Fig. 6: Time series of the NDBC buoy wind measurements around January 08th 2017 01:58:19 for NDBC 46054 (a) and NDBC 46053 (b), and the estimation from the GMF, the deep learning model and the atmospheric model.
A secondary dataset was created, differing from the main dataset in Eq. 4. Here, the balancing policy is defined as:

\[\forall x,P(x)=P^{+}(x)=P^{-}(x)\] (Eq. 8)

The distributions \(P^{+}\) and \(P^{-}\) are presented in Fig. 9. The balancing is performed for every wind speed, which leads to the removal of 84% of the rain patches - especially at 5 and 10 m/s - since the number of rain patches at 8 m/s is limited.

Comparison with the first balancing scheme displays lower performance despite the more accurate balancing, as indicated in Table III.
id: 2310.01770
title: A simple connection from loss flatness to compressed representations in neural networks
abstract: The generalization capacity of deep neural networks has been studied in a variety of ways, including at least two distinct categories of approaches: one based on the shape of the loss landscape in parameter space, and the other based on the structure of the representation manifold in feature space (that is, in the space of unit activities). Although these two approaches are related, they are rarely studied together explicitly. Here, we present an analysis that bridges this gap. We show that in the final phase of learning in deep neural networks, the compression of the manifold of neural representations correlates with the flatness of the loss around the minima explored by SGD. This correlation is predicted by a relatively simple mathematical relationship: a flatter loss corresponds to a lower upper bound on the compression metrics of neural representations. Our work builds upon the linear stability insight by Ma and Ying, deriving inequalities between various compression metrics and quantities involving sharpness. Empirically, our derived inequality predicts a consistently positive correlation between representation compression and loss sharpness in multiple experimental settings. Overall, we advance a dual perspective on generalization in neural networks in both parameter and feature space.
authors: Shirui Chen, Stefano Recanatesi, Eric Shea-Brown
published_date: 2023-10-03T03:36:29Z
link: http://arxiv.org/abs/2310.01770v3

# A simple connection from loss flatness to compressed representations in neural networks
###### Abstract
Deep neural networks' generalization capacity has been studied in a variety of ways, including at least two distinct categories of approach: one based on the shape of the loss landscape in parameter space, and the other based on the structure of the representation manifold in feature space (that is, in the space of unit activities). These two approaches are related, but they are rarely studied together and explicitly connected. Here, we present a simple analysis that makes such a connection. We show that, in the last phase of learning of deep neural networks, compression of the volume of the manifold of neural representations correlates with the flatness of the loss around the minima explored by ongoing parameter optimization. We show that this is predicted by a relatively simple mathematical relationship: loss flatness implies compression of neural representations. Our results build closely on prior work of Ma and Ying (1), which shows how flatness (i.e., small eigenvalues of the loss Hessian) develops in late phases of learning and lead to robustness to perturbations in network inputs. Moreover, we show there is no similarly direct connection between local dimensionality and sharpness, suggesting that this property may be controlled by different mechanisms than volume and hence may play a complementary role in neural representations. Overall, we advance a dual perspective on generalization in neural networks in both parameter and feature space.
## Introduction
The remarkable capacity of deep neural networks to generalize has been studied in many ways. Generalization is a complex phenomenon influenced by myriad factors, including model architecture, dataset size and diversity, and the specific task used to train a network - and researchers continue to develop new techniques to enhance the generalization. This is a vast field; however, from a theoretical point of view, we can identify two distinct categories of approach. These are studies that show that neural network generalization is linked to (a) properties of minima of the loss function that learning algorithms find in parameter space, and (b) to properties of the representations that optimized networks find in feature space - that is, in the space of their neural activations.
In parameter space, both empirical studies and theoretical analyses have shown that deep neural networks often converge to flat and wide minima and that this can underlie their good generalization performance (e.g. [1, 2, 3, 4, 5, 6, 7, 8]). Flat minima refer to regions in the loss landscape where the loss function has a relatively large basin: put simply, the loss doesn't change much in different directions around the minimum. Elegant arguments show how models that converge to flat minima are more likely to generalize well [9]. The intuition behind this is that in a flat minimum, small perturbations or noise in the input data are less likely to cause significant changes in the model's output, resulting in improved robustness to variations in data from training to test.
In feature space, the phenomenon of compression (or neural collapse) refers to the observation that, as neural representations develop through the course of training, the activity patterns in feedforward and recurrent neural networks can become more compact and reside in lower-dimensional spaces [10, 11, 12, 13, 14, 15]. This is closely related to the earlier information bottleneck ideas [16], which quantified allied effects via entropy. This phenomenon, which in many (but not all [17, 18]) cases develops through training, can be enhanced by the architectural design of a network: for example, higher layers of a deep neural network usually have fewer neurons compared to the input layer, and autoencoders include internal "bottleneck layers" with significantly fewer neurons [19]. Together compression and low-dimensionality in feature space can result in networks isolating the most important and discriminative features of input data, thus enhancing their ability to generalize. Throughout, we focus on the second, or final, stage of learning, which proceeds after the learning algorithm has already found parameters that give optimal performance (i.e., minimum loss) on the training data [19, 20, 1]. Here, additional learning still occurs, which changes the properties of the solutions in both feature and parameter space in very interesting ways.
Our paper proceeds as follows. First, we review arguments of Ma and Ying [1] that flatter minima are induced during the final phase of learning under gradient descent, and that this flatness can constrain the gradient of the loss with respect to network inputs. Next, we prove that lower sharpness implies a lower upper bound of the local volume (compression) of the representation manifold in feature space. We conclude our findings with simulations that confirm our central theoretical results and show how they can be applied in practice.
## Background and setup
Consider a feedforward neural network \(f\) with input data \(\mathbf{x}\in\mathbb{R}^{M}\) and parameters \(\boldsymbol{\theta}\). The output of the network is:
\[\mathbf{y}=f(\mathbf{x};\boldsymbol{\theta})\;, \tag{1}\]
with \(\mathbf{y}\in\mathbb{R}^{N}\) (\(N<M\)). Consider a quadratic loss \(L(\mathbf{y},\mathbf{y}_{\text{true}})=||\mathbf{y}-\mathbf{y}_{\text{true}}||^{2}\), a function of the outputs and the ground-truth data \(\mathbf{y}_{\text{true}}\). In the following, we'll simply write \(L(\mathbf{y})\), \(L(f(\mathbf{x},\mathbf{\theta}))\) or simply \(L(\mathbf{\theta})\) to highlight the dependence of the loss on the output, the network or its parameters.
During the last phase of learning, Ma and colleagues have recently argued that SGD appears to regularize the sharpness of the loss (4) (see also (5-8)). This is to say that the dynamics of SGD lead network parameters to minima where the local loss landscape is flatter or wider. This is best captured by the sharpness, measured by the sum of the eigenvalues of the Hessian:
\[S(\mathbf{\theta})=\text{Tr}(H)\;, \tag{2}\]
with \(H=\nabla^{2}L(\mathbf{\theta})\) being the Hessian, and with \(\lVert\cdot\rVert_{F}\) denoting the Frobenius norm used in Eq. (3) below. A solution with low sharpness is a flatter solution. Following (1, 20), we define \(\mathbf{\theta}^{*}\) to be an "interpolation solution" on the zero loss manifold in the parameter space (the zero loss manifold in what follows), where \(f(\mathbf{x}_{i},\mathbf{\theta}^{*})=\mathbf{y}_{i}\) for all \(i\)'s (with \(i\in\{1..n\}\) indexing the training set) and \(L(\mathbf{\theta}^{*})=0\). On the zero loss manifold, in particular, we have
\[S(\mathbf{\theta}^{*})=\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla_{\mathbf{\theta}}f( \mathbf{x}_{i},\mathbf{\theta}^{*})\rVert_{F}^{2} \tag{3}\]
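On the zero-loss manifold, Eq. (3) expresses the sharpness through per-sample gradients of the network outputs with respect to the parameters, which can be estimated directly with automatic differentiation. Below is a minimal PyTorch sketch (not the authors' code); the `model` and the set of probe `inputs` are assumed names, and the per-output loop makes it practical only for small probes:

```python
import torch

def sharpness_on_manifold(model, inputs):
    """Estimate Eq. (3): (1/n) * sum_i ||d f(x_i) / d theta||_F^2.

    One backward pass per output dimension per sample, so this is
    practical only for small probe sets and small output dimensions.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    total = 0.0
    for x in inputs:                        # probe samples x_i
        y = model(x.unsqueeze(0)).flatten()
        for k in range(y.numel()):          # one backward pass per output dim
            grads = torch.autograd.grad(y[k], params,
                                        retain_graph=True, allow_unused=True)
            total += sum(g.pow(2).sum().item() for g in grads if g is not None)
    return total / len(inputs)
```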
Next, we briefly review the arguments of [2, 4, 20] for why sharpness is minimized once the loss has attained its minimum. On the zero-loss manifold we have \(\nabla_{\mathbf{\theta}}L(f(\mathbf{x};\mathbf{\theta}^{*}))=0\).
Consider a perturbation \(\delta\mathbf{\theta}^{*}\) of such a point. Then the second-order expansion of the loss around \(\mathbf{\theta}^{*}+\delta\mathbf{\theta}^{*}\) is:
\[L(\mathbf{\theta}^{*}+\delta\mathbf{\theta}^{*})\approx L(\mathbf{\theta}^{*})+\nabla^{T} L(\mathbf{\theta}^{*})\delta\mathbf{\theta}^{*}+\frac{1}{2}\delta\mathbf{\theta}^{*T}H \delta\mathbf{\theta}^{*}\;, \tag{4}\]
where \(H\) is again the Hessian matrix. Taking the gradient with respect to \(\mathbf{\theta}\) we obtain:
\[\begin{split}\nabla_{\mathbf{\theta}}L(\mathbf{\theta}^{*}+\delta\mathbf{ \theta}^{*})&=\nabla_{\mathbf{\theta}}L(\mathbf{\theta}^{*})+\nabla_{\bm {\theta}}\nabla^{T}L(\mathbf{\theta}^{*})\delta\mathbf{\theta}^{*}\\ &\qquad\qquad+\nabla_{\mathbf{\theta}}(\frac{1}{2}\delta\mathbf{\theta}^ {*T}H\delta\mathbf{\theta}^{*})\\ &=0+H\delta\mathbf{\theta}^{*}+\nabla_{\mathbf{\theta}}(\frac{1}{2}\delta \mathbf{\theta}^{*T}H\delta\mathbf{\theta}^{*})\;.\end{split} \tag{5}\]
The leading term in this expansion is \(H\delta\mathbf{\theta}^{*}\). As the loss does not vary along the zero-loss manifold, the Hessian has zero eigenvalues along directions tangent to it. Therefore, \(H\delta\mathbf{\theta}^{*}\) can be viewed as a projection of the perturbation \(\delta\mathbf{\theta}^{*}\) along directions that are orthogonal to the zero-loss manifold. The third term can instead be viewed as a Hessian-based loss. This indicates that minimizing the total loss also requires minimizing the Hessian along the zero-loss manifold.
In order to see why minimizing the sharpness of the solution leads to more compressed representations, we need to move from parameter space to input space. To do so, we review the argument of Ma and colleagues [1] that relates variations in the input data \(\mathbf{x}\) to variations in the input weights. Thus, at least for a subset of parameters \(\mathbf{\theta}\) (the input weights \(\mathbf{W}\)), there exists a direct relationship between the network effects of changes in parameter space and changes in input space.
Let \(\mathbf{W}\) be the input weights to the network, and \(\bar{\mathbf{\theta}}\) the corresponding set of parameters. Following [1], as the weights \(\mathbf{W}\) multiply the inputs \(\mathbf{x}\), we have the following identities:
\[\begin{split}\lVert\nabla_{\mathbf{W}}f(\mathbf{W}\mathbf{x}; \bar{\mathbf{\theta}})\rVert_{F}&=\sqrt{\sum_{i,j,k}J_{jk}^{2}x_{i}^{2 }}=\lVert J\rVert_{F}\lVert\mathbf{x}\rVert_{2}\\ \lVert\nabla_{\mathbf{x}}f(\mathbf{W}\mathbf{x};\bar{\mathbf{\theta }})\rVert_{F}&=\lVert\mathbf{W}^{T}J\rVert_{F}\;,\end{split} \tag{6}\]
where \(J=\frac{\partial f(\mathbf{W}\mathbf{x};\bar{\mathbf{\theta}})}{\partial(\mathbf{W}\mathbf{x})}\) is the Jacobian of the network output with respect to the first-layer pre-activations, as computed in, e.g., backpropagation.
Figure 1: Trends in key variables across SGD training of the VGG10 network with fixed batch size (equal to 20) and varying learning rates (0.05, 0.1 and 0.2). After the loss is minimized (so that an interpolation solution is found) sharpness and volumes decrease together. Moreover, higher learning rates lead to lower sharpness and hence stronger compression. From left to right in row-wise order: train loss, test accuracy, sharpness (Eq. (3)), log volumetric ratio (Eq. (11)), left-hand side of Eq. (8) (axes titled \(G\)), and local dimensionality of the network output (Eq. (14)).
From Eq. (6) and the sub-multiplicative property of the Frobenius norm, we have:
\[\|\nabla_{\mathbf{x}}f(\mathbf{W}\mathbf{x};\tilde{\mathbf{\theta}})\|_{F}\leq\frac{ \|\mathbf{W}\|_{F}}{\|\mathbf{x}\|_{2}}\|\nabla_{\mathbf{W}}f(\mathbf{W} \mathbf{x};\tilde{\mathbf{\theta}})\|_{F}\;, \tag{7}\]
which is an upper bound on \(\|\nabla_{\mathbf{x}}f(\mathbf{W}\mathbf{x};\tilde{\mathbf{\theta}})\|_{F}\). If the norms \(\|\mathbf{W}\|_{F}\) and \(\|\mathbf{x}\|_{2}\) are not excessively large or small respectively, these bounds control the gradient with respect to inputs via the gradient with respect to weights. Following (1), this in turn reveals the impact of flatness in the loss function:
\[\begin{split}\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla_{\mathbf{x}}f(\mathbf{x}_{i},\mathbf{\theta}^{*})\rVert_{F}^{k}&\leq\frac{\|\mathbf{W}\|_{F}^{k}}{\min_{i}\|\mathbf{x}_{i}\|_{2}^{k}}\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla_{\mathbf{W}}f(\mathbf{x}_{i},\mathbf{\theta}^{*})\rVert_{F}^{k}\\ &\leq\frac{\|\mathbf{W}\|_{F}^{k}}{\min_{i}\|\mathbf{x}_{i}\|_{2}^{k}}\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla_{\mathbf{\theta}}f(\mathbf{x}_{i},\mathbf{\theta}^{*})\rVert_{F}^{k}\\ &\leq\frac{\|\mathbf{W}\|_{F}^{k}}{\min_{i}\|\mathbf{x}_{i}\|_{2}^{k}}\,S(\mathbf{\theta}^{*})^{k/2}\end{split} \tag{8}\]
for all \(k\geq 2\). Thus, as in [1], the effect of input perturbations is constrained by the sharpness of the loss function: the flatter the minimum of the loss, the smaller the effect of input-space perturbations on the network function \(f(\mathbf{x},\mathbf{\theta}^{*})\), as measured by its gradients.
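The left-hand side of Eq. (8), the quantity termed G in the figures, can be estimated in the same spirit; a minimal PyTorch sketch for \(k=2\), under the same assumed `model`/`inputs` interface as above:

```python
import torch
from torch.autograd.functional import jacobian

def input_gradient_term(model, inputs, k=2):
    """Left-hand side of Eq. (8): (1/n) * sum_i ||d f(x_i)/dx||_F^k."""
    total = 0.0
    for x in inputs:
        # full input-output Jacobian at x, flattened to (N outputs) x (M inputs)
        J = jacobian(lambda inp: model(inp.unsqueeze(0)).flatten(), x)
        total += J.reshape(J.shape[0], -1).norm().item() ** k
    return total / len(inputs)
```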
## From robustness to inputs to compression of representations
We now further analyze variations in the input and how they propagate through the network to shape representations of sets of inputs. Consider an input data point \(\bar{\mathbf{x}}\) drawn from the training set: \(\bar{\mathbf{x}}=\mathbf{x}_{i}\) for a specific \(i\in\{1..n\}\). The set of all possible perturbations around \(\bar{\mathbf{x}}\) in input space is the ball \(\mathcal{B}(\bar{\mathbf{x}})_{r}\): \(\mathbf{x}\in\mathcal{B}(\bar{\mathbf{x}})_{r}\) if \(\|\mathbf{x}-\bar{\mathbf{x}}\|_{2}<r\). To determine how the network represents these inputs, we will quantify how the ball \(\mathcal{B}(\bar{\mathbf{x}})_{r}\) expands or contracts as it propagates through the network's layers and reaches the network's output. This amounts to asking whether the transformation of the differential volume around \(\bar{\mathbf{x}}\) is an expansion or a contraction.
To do so we propagate the ball through the network transforming each point \(\mathbf{x}\) into its image \(f(\mathbf{x})\). Following a Taylor expansion for points within \(\mathcal{B}(\bar{\mathbf{x}})_{r}\) as \(r\to 0\) we have:
\[f(\mathbf{x})=f(\bar{\mathbf{x}})+\nabla_{\mathbf{x}}(f(\bar{ \mathbf{x}},\mathbf{\theta}^{*}))(\mathbf{x}-\bar{\mathbf{x}}). \tag{9}\]
We can express the limit of the covariance matrix \(C_{f(\mathcal{B}(\mathbf{x}))}\) of the output \(f(\mathbf{x})\) as
\[C_{f}^{\text{lim}}=\lim_{\alpha\to 0}\frac{1}{\alpha}C_{f(\mathcal{B}(\bar{\mathbf{x}}))}=\nabla_{\mathbf{x}}f(\bar{\mathbf{x}},\mathbf{\theta}^{*})\nabla_{\mathbf{x}}^{T}f(\bar{\mathbf{x}},\mathbf{\theta}^{*})\;, \tag{10}\]
where \(\alpha\) parameterizes the input's covariance, given as \(C_{\mathcal{B}(\bar{\mathbf{x}})}=\alpha\mathcal{I}\), with \(\mathcal{I}\) the identity matrix. Our covariance expressions capture the distribution of points in \(\mathcal{B}(\bar{\mathbf{x}})_{r}\) as they pass through the network \(f(\bar{\mathbf{x}},\mathbf{\theta}^{*})\). We note that our analysis of the covariance in feature space is an approximation at second order, akin to approximating the representation manifold around \(f(\bar{\mathbf{x}})\) with a Gaussian distribution.
We quantify how a network compresses its input volumes via the local volumetric ratio: the ratio between the volume of a cube of side length \(h\) at \(\bar{\mathbf{x}}\) and the volume of its image under the transformation \(f\):
\[d\operatorname{Vol}^{ratio}\lvert_{f(\bar{\mathbf{x}},\mathbf{ \theta}^{*})} =\lim_{h\to 0}\frac{\operatorname{Vol}(f(\bar{\mathbf{x}},\mathbf{ \theta}^{*}))}{\operatorname{Vol}(\bar{\mathbf{x}})} \tag{11}\] \[=\sqrt{\det\left(\nabla_{\mathbf{x}}f^{T}\nabla_{\mathbf{x}}f \right)}\]
which is equal to the square root of the product of all positive eigenvalues of \(C_{f}^{\text{lim}}\).
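In practice, since \(\nabla_{\mathbf{x}}f\) has fewer rows than columns, the product of positive eigenvalues can be computed from \(\det(JJ^{T})\), with \(J\) the input-output Jacobian; a minimal sketch, assuming a `model` as in the previous sketches:

```python
import torch
from torch.autograd.functional import jacobian

def log_volumetric_ratio(model, x):
    """Eq. (11): 0.5 * log det(J J^T), i.e. half the log of the product of
    the positive eigenvalues of C_f^lim, with J = d f / d x at input x."""
    J = jacobian(lambda inp: model(inp.unsqueeze(0)).flatten(), x)
    J = J.reshape(J.shape[0], -1)          # (N outputs) x (M input dims)
    return 0.5 * torch.logdet(J @ J.T)
```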
Figure 2: Trends in key variables across SGD training of the VGG10 network with fixed learning rate (equal to 0.1) and varying batch size (8, 20 and 32). After the loss is minimized (so that an interpolation solution is found) sharpness and volumes decrease together. Moreover, lower batch sizes lead to lower sharpness and hence stronger compression. From left to right in row-wise order: train loss, test accuracy, sharpness (Eq. (3)), log volumetric ratio (Eq. (11)), left-hand side of Eq. (8) (axes titled \(G\)), and local dimensionality of the network output (Eq. (14)).
Exploiting the bound on the gradients derived earlier in Eq. (7), we obtain a similar bound for the volumetric ratio:
\[\begin{split} d\operatorname{Vol}^{ratio}\lvert_{f(\bar{\mathbf{x}}, \boldsymbol{\theta}^{*})}&\leq\left(\frac{\operatorname{Tr}\nabla_{ \mathbf{x}}f^{T}\nabla_{\mathbf{x}}f}{N}\right)^{N/2}\\ &=N^{-N/2}\|\nabla_{\mathbf{x}}f(\mathbf{x}_{i},\boldsymbol{ \theta}^{*})\|_{F}^{N}\end{split} \tag{12}\]
where the first line uses the inequality between arithmetic and geometric means and the second the definition of the Frobenius norm. Introducing the averaged volumetric ratio across all input points \(dV^{ratio}(\boldsymbol{\theta}^{*})=\frac{1}{n}\sum_{i=1}^{n}d\operatorname{ Vol}^{ratio}\lvert_{f(\mathbf{x}_{i},\boldsymbol{\theta}^{*})}\), and comparing this with Eq. (8) we obtain:
\[dV^{ratio}(\boldsymbol{\theta}^{*})\leq\frac{\|\mathbf{W}\|_{F}^{N}}{\text{ min}_{i}\|\mathbf{x}_{i}\|_{2}^{N}}\left(\frac{S(\boldsymbol{\theta}^{*})}{N} \right)^{N/2}\;. \tag{13}\]
This equation implies that, as SGD drives parameters toward flatter minima of the loss in parameter space, the manifold of data representations in the network becomes increasingly compressed. Our analysis demonstrates that these two phenomena are linked through the network's robustness to input perturbations.
We note that an alternative way to analyze the same effect would be by means of the entropy of the representations in feature space. The entropy of a multivariate Gaussian distribution is \(H=\frac{1}{2}\ln\det\left(2\pi eC_{f}^{\text{lim}}\right)\), and other works have focused on such entropic quantities (e.g. [21, 22]).
## Experiments
### Sharpness and compression: verifying the theory.
The theoretical results derived above show that during the later phase of training - the interpolation phase - the volume of the network's representation is upper bounded by a function of the sharpness of the loss in parameter space. This links sharpness and representation volume: the flatter the loss landscape, the smaller the upper bound on the representation volume.
It remains to test in practice, however, whether these bounds are sufficiently tight for a clear relationship between sharpness and representation volume to appear. As one such test, we ran the following experiment. We trained a VGG10 network [23] to classify images from the CIFAR-10 dataset, and calculated the sharpness (Eq. (2)), the log volumetric ratio (Eq. (11)), and the left-hand side of Eq. (8) (the gradient with respect to the inputs, a quantity we term G in the figures) over the course of training (Figs. 1 and 2).
We trained the network (VGG10) using SGD on images from 2 classes (out of 10) so that convergence to the interpolation regime, i.e. zero loss, was faster. We explored the influence of two specific parameters that have a substantial effect on the network's training: learning rate and batch size. For each pair of learning rate and batch size parameters, we computed all quantities at hand across 100 input samples and five different random initializations for network weights.
In the first set of experiments, we studied the link between the decrease in sharpness during the latter phase of training and volume compression (Fig. 1). We noticed that once the network reaches the interpolation regime and the sharpness decreases, so does the volume. The quantity G similarly decreases. All these results were consistent across multiple learning rates for a fixed batch size (of 20): specifically, learning rates that gave lower values of sharpness also gave lower volumes.
We then repeated the experiments while keeping the learning rate fixed (lr=0.1) and varying the batch size. The same broadly consistent trends emerged linking a decrease in the sharpness to a compression in the representation volume (Fig. 2). However, we also find that while sharpness stops decreasing after about iteration \(50\cdot 10^{3}\) for batch size 32, the volume keeps decreasing as learning proceeds. This suggests that there may be other mechanisms at play, beyond sharpness, in driving the compression of volumes.
### Sharpness and compression on test set data.
Despite the fact that Eq. (3) is exact for interpolation solutions only (i.e., those with zero loss), we found that the test loss is small enough (Fig. 3) that it should be a good approximation for test data as well. Therefore we analyzed our simulations to study trends in sharpness and volume for these held-out test data (Fig. 3). We discovered that, on test data, sharpness increased rather than diminished over the course of training. We hypothesized that sharpness could correlate with the difficulty of classifying test points. This was supported by the fact that the sharpness of misclassified test data was even greater than that of all test data. Despite this increase in sharpness, the volume followed the same pattern as on the training set. This suggests that compression in representation space is a robust phenomenon that can be driven by additional mechanisms beyond sharpness. Nevertheless, the compression is still weaker for misclassified test samples, which have higher sharpness than other test samples. Overall, these results emphasize an interesting distinction between how sharpness evolves for training vs. test data.
### Sharpness and local dimensionality.
Lastly, we analyze the representation's local dimensionality in a manner analogous to the volume analysis. A priori, it is ambiguous whether the dimensionality of the data representation should increase or decrease as the volume is compressed. For instance, the volume could decrease while maintaining its overall form and symmetry, thus preserving its dimensionality. Alternatively, one or more of the directions in the relevant tangent space could be selectively compressed, leading to an overall reduction in dimensionality. To study this quantitatively, we use a local measure of dimensionality, the local Participation Ratio, given by:
\[D_{\text{PR}}(f(\bar{\mathbf{x}}))=\lim_{\alpha\to 0}\frac{\operatorname{Tr}[C_{f( \mathcal{B}(\mathbf{x}))}]^{2}}{\operatorname{Tr}[(C_{f(\mathcal{B}(\mathbf{x }))})^{2}]}=\frac{\operatorname{Tr}[C_{f}^{\lim}]^{2}}{\operatorname{Tr}[(C_{f }^{\lim})^{2}]} \tag{14}\]
(cf. [24, 25, 26]). This quantity is local because it is evaluated in an infinitesimal neighborhood of each point; averaging it across input points gives \(D_{\text{PR}}(\mathbf{\theta}^{*})=\frac{1}{n}\sum_{i}^{n}D_{\text{PR}}(f(\mathbf{x}_{i}))\).
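Because the participation ratio is scale invariant, it can be computed from the same Jacobian used for the volumetric ratio; a minimal sketch under the same assumptions:

```python
from torch.autograd.functional import jacobian

def participation_ratio(model, x):
    """Eq. (14): Tr[C]^2 / Tr[C^2] with C = J J^T proportional to C_f^lim;
    any overall rescaling of C cancels in the ratio."""
    J = jacobian(lambda inp: model(inp.unsqueeze(0)).flatten(), x)
    J = J.reshape(J.shape[0], -1)
    C = J @ J.T
    return (C.trace() ** 2 / (C @ C).trace()).item()
```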
Figures 1 and 2 show our experiments computing the Participation Ratio over the course of learning. Here, we find that the local dimensionality of the representation decreases as the loss decreases to 0, which is consistent with the viewpoint that the network compresses representations in feature space as much as possible, retaining only the directions that code for task-relevant features [27, 28]. However, the local dimensionality exhibits unpredictable behavior that cannot be explained by the sharpness (or the volume) once the network is on the zero-loss manifold and training continues. This discrepancy is consistent with the bounds established by our theory, which only bound the numerator of Eq. (14). It is also consistent with volume compression overall: for example, if we let \(\mathbf{\lambda}\) be all the eigenvalues of \(C_{f}^{\text{lim}}\), then the local dimensionality can be written as \(D_{\text{PR}}=(\|\mathbf{\lambda}\|_{1}/\|\mathbf{\lambda}\|_{2})^{2}\), which retains the same value when \(\mathbf{\lambda}\) is arbitrarily scaled. This shows how local dimensionality is a distinct quality of network representations compared with volume, and is driven by mechanisms that differ from sharpness alone. We emphasize that the dimensionality we study here is a local measure, on the finest scale around a point on the "global" manifold of unit activities; dimension on larger scales (i.e., across categories or large sets of task inputs [10, 24]) may show different trends.
## Conclusion
In the final phase of neural network training, it is a widely observed phenomenon that SGD training effectively reduces the (eigenvalues of the) Hessian of the loss in parameter space. In other words, training finds increasingly flat minima, and this refinement occurs in the last, and slowest, phase of training. Here, we have shown that this same phenomenon leads to compression of a network's representation of its inputs. Our results unite often separate perspectives on generalization in neural networks and add to mounting evidence for why the final phase of learning crucially shapes network representations.
|
2304.13412 | Mutual information of spin systems from autoregressive neural networks | We describe a new direct method to estimate bipartite mutual information of a
classical spin system based on Monte Carlo sampling enhanced by autoregressive
neural networks. It allows studying arbitrary geometries of subsystems and can
be generalized to classical field theories. We demonstrate it on the Ising
model for four partitionings, including a multiply-connected even-odd division.
We show that the area law is satisfied for temperatures away from the critical
temperature: the constant term is universal, whereas the proportionality
coefficient is different for the even-odd partitioning. | Piotr Białas, Piotr Korcyl, Tomasz Stebel | 2023-04-26T09:51:55Z | http://arxiv.org/abs/2304.13412v2 | # Mutual information of spin systems from autoregressive neural networks
###### Abstract
We describe a direct approach to estimate bipartite mutual information of a classical spin system based on Monte Carlo sampling enhanced by autoregressive neural networks. It allows studying arbitrary geometries of subsystems and can be generalized to classical field theories. We demonstrate it on the Ising model for four partitionings, including a multiply-connected even-odd division. We show that the area law is satisfied for temperatures away from the critical temperature: the constant term is universal, whereas the proportionality coefficient is different for the even-odd partitioning.
Mutual Information, Hierarchical Autoregressive Neural Networks, Monte Carlo simulations, Ising model

_Introduction_: The discovery of topological order in quantum many-body systems [1] initiated a very fruitful exchange of ideas between solid-state physics and information theory. Many new theoretical tools developed to quantitatively describe the flow of information or the amount of information shared by different parts of the total system have been employed in the studies of physical systems [2]. Among these tools, quantum entanglement entropy or mutual information and their various alternatives were found to be particularly useful [3]. With their help, it was understood that many new phases of matter respect the same set of symmetries but differ in long-range correlations quantified by bipartite, tripartite, or higher information-theoretic measures such as mutual information. Turning this fact around, it is expected that calculating mutual information can provide hints about the topological phase of the system, playing a role similar to order parameters in the usual Landau picture of phase transitions (see for example [4]). Indeed, such quantities are not only useful for theoretical understanding but are also measurable observables in experiments. For example, in Ref. [5] the mutual information in a quantum spin chain was measured, demonstrating the area law [6] governing the scaling of mutual information with the volume of the bipartite partition. The law [7] states that the discussed quantities, the mutual information being one of them, scale with the boundary area separating the two parts of the system, instead of the expected scaling with the volume. The increased interest comes from the fact that the black hole entropy was shown a long time ago to follow a similar law: the black hole entropy depends only on its surface and not on the interior. To a large surprise, this connection was recently made explicit with the realization that a certain quantum spin model in \(0\) spatial dimensions called the SYK model [8] is dual to a black hole via the gravity/gauge duality [9]. From this perspective, quantum entanglement entropy or quantum mutual information, which can be defined and calculated on both sides of this correspondence, became even more attractive from the theoretical point of view. There is therefore strong pressure to develop computational tools able to estimate such quantities, which in itself is, however, very difficult in the general case.
Although an important part of Condensed Matter physics [6; 10; 11; 12] aims at studying entanglement entropy and quantum mutual information in quantum spin systems, already in classical spin systems the Shannon and Renyi entropies were found to follow the area law [13]. With interface boundary sizes up to \(64\) spins, the coefficients of the area law were precisely determined and their universality was verified using different lattice shapes. Later, the mutual information of two halves of the classical Ising model on an infinitely long cylinder was calculated using the transfer matrix approach [14]. Both quantities were discussed with more precision in [15] using a different method, called the bond propagation algorithm.
In this work, we propose a new approach to directly estimate bipartite mutual information. It is based on the incorporation of machine learning techniques into Monte Carlo simulation algorithms. The calculations are stochastic and yield provably exact results within their statistical uncertainties. The main advantage of the method is its flexibility, allowing its application: to any geometry of the partitioning, to any statistical system with a finite number of degrees of freedom, in an arbitrary number of space dimensions.
We consider a classical system of spins which we divide into two arbitrary parts \(A,B\). In this case, the Shannon mutual information is defined as
\[I=\sum_{\mathbf{a},\mathbf{b}}p(\mathbf{a},\mathbf{b})\log\frac{p(\mathbf{a},\mathbf{b})}{p(\mathbf{a})p(\mathbf{b})}, \tag{1}\]
where a particular configuration \(\mathbf{s}\) of the full model has parts \(\mathbf{a}\) and \(\mathbf{b}\), and where the Boltzmann probability distribution of states, depending on the inverse temperature \(\beta\), is given by (we omit the explicit dependence of \(Z\) on \(\beta\)),
\[p(\mathbf{a},\mathbf{b})=\frac{1}{Z}e^{-\beta E(\mathbf{a},\mathbf{b})},\quad Z =\sum_{\mathbf{a},\mathbf{b}}e^{-\beta E(\mathbf{a},\mathbf{b})} \tag{2}\]
and
\[p(\mathbf{a})=\sum_{\mathbf{b}}p(\mathbf{a},\mathbf{b}),\ \ p(\mathbf{b})=\sum_{ \mathbf{a}}p(\mathbf{a},\mathbf{b}) \tag{3}\]
are probability distributions of subsystems. We shall use the same symbol \(p\) for all probability distributions defined on different state spaces distinguishing them by the arguments. In the above expressions, the summation was performed over all configurations of subsystems \(A\) or \(B\). Inserting Eq. (2) and Eq. (3) into Eq. (1) we obtain
\[I=\log Z-\sum_{\mathbf{a},\mathbf{b}}p(\mathbf{a},\mathbf{b}) \Big{[}\beta E(\mathbf{a},\mathbf{b})+\\ +\log Z(\mathbf{a})+\log Z(\mathbf{b})\Big{]}, \tag{4}\]
where \(Z(\mathbf{a})=\sum_{\mathbf{b}}e^{-\beta E(\mathbf{a},\mathbf{b})}\) and \(Z(\mathbf{b})=\sum_{\mathbf{a}}e^{-\beta E(\mathbf{a},\mathbf{b})}\). Please note that in typical Monte Carlo approaches the partition functions \(Z\), \(Z(\mathbf{a})\), and \(Z(\mathbf{b})\) are not available.
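For very small systems, Eq. (1) can be evaluated exactly by brute-force enumeration, which is useful as a cross-check of the stochastic estimator introduced below; a minimal Python sketch, where the `energy` callable (the Hamiltonian), the spin count, and the A/B split are assumed inputs:

```python
import itertools
import numpy as np

def exact_mutual_information(energy, n_spins, n_A, beta):
    """Brute-force evaluation of Eq. (1) over all 2**n_spins configurations.

    `energy(s)` evaluates the Hamiltonian on a tuple of +/-1 spins; the
    first n_A spins form subsystem A. Feasible only for very small systems.
    """
    states = list(itertools.product([-1, 1], repeat=n_spins))
    w = np.array([np.exp(-beta * energy(s)) for s in states])
    p = w / w.sum()                      # joint p(a, b), Eq. (2)
    p_a, p_b = {}, {}                    # marginals, Eq. (3)
    for s, pi in zip(states, p):
        p_a[s[:n_A]] = p_a.get(s[:n_A], 0.0) + pi
        p_b[s[n_A:]] = p_b.get(s[n_A:], 0.0) + pi
    return sum(pi * np.log(pi / (p_a[s[:n_A]] * p_b[s[n_A:]]))
               for s, pi in zip(states, p))
```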
_Method:_ Below we argue that \(I\) can be obtained from a Monte Carlo simulation enhanced with autoregressive neural networks (ANNs). It was recently shown that ANNs can be used to approximate the Boltzmann probability distribution \(p(\mathbf{a},\mathbf{b})\) for spin systems and provide a means to sample from this approximate distribution [16; 17; 18; 19]. Let us call this approximate distribution \(q_{\theta}(\mathbf{a},\mathbf{b})\); \(\theta\) stands here for the parameters of the neural network that are tuned so that \(q_{\theta}\) is as close to \(p\) as possible under an appropriate measure, typically the backward Kullback-Leibler divergence
\[D_{\mathrm{KL}}(q_{\theta}|p)=\sum_{\mathbf{a},\mathbf{b}}q_{\theta}(\mathbf{a },\mathbf{b})\,\log\left(\frac{q_{\theta}(\mathbf{a},\mathbf{b})}{p(\mathbf{a },\mathbf{b})}\right). \tag{5}\]
The formula Eq. (4) can be rewritten in terms of averages with respect to the distribution \(q_{\theta}\)
\[\begin{split} I=\log Z&-\frac{\beta}{Z}\left\langle\hat{w}(\mathbf{a},\mathbf{b})E(\mathbf{a},\mathbf{b})\right\rangle_{q_{\theta}(\mathbf{a},\mathbf{b})}\\ &-\frac{1}{Z}\left\langle\hat{w}(\mathbf{a},\mathbf{b})\log Z(\mathbf{a})\right\rangle_{q_{\theta}(\mathbf{a},\mathbf{b})}\\ &-\frac{1}{Z}\left\langle\hat{w}(\mathbf{a},\mathbf{b})\log Z(\mathbf{b})\right\rangle_{q_{\theta}(\mathbf{a},\mathbf{b})}\;,\end{split} \tag{6}\]
where the importance ratios are defined as
\[\hat{w}(\mathbf{a},\mathbf{b})=\frac{e^{-\beta E(\mathbf{a},\mathbf{b})}}{q_{ \theta}(\mathbf{a},\mathbf{b})}. \tag{7}\]
The crucial feature of ANN-enhanced Monte Carlo is that contrary to standard Monte Carlo, we can estimate directly the partition functions \(Z\), \(Z(\mathbf{a})\) and \(Z(\mathbf{b})\) (see Appendix A for details). In this way, we have expressed the mutual information \(I\) only through averages with respect to the distribution \(q_{\theta}\). It can be now estimated by sampling from this distribution. The procedure of sampling configurations from approximate probability distribution provided by neural network together with reweighting observables with importance ratios was proposed in Ref. [20] and named Neural Importance Sampling (NIS). This paper reports on the first application of this technique to information theory observables.
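A minimal sketch of the corresponding \(\log Z\) estimator follows; the `q_model.sample` interface (returning configurations together with their \(\log q_{\theta}\)) is our assumption for illustration, not a fixed library API:

```python
import math
import torch

def nis_log_z(q_model, energy, beta, n_samples):
    """Estimate log Z via Z = <w>_q, with log w = -beta * E(s) - log q(s)
    as in Eq. (7); logsumexp keeps the importance ratios numerically stable."""
    s, log_q = q_model.sample(n_samples)          # assumed interface
    log_w = -beta * energy(s) - log_q             # log importance ratios
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```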
Autoregressive neural networks rely on the product rule _i.e._ factorization of \(q_{\theta}\) into the product of conditional probabilities
\[q_{\theta}(\mathbf{a},\mathbf{b})=\prod_{i=1}^{L^{2}}q_{\theta}(s^{i}|s^{1},s^ {2},\ldots,s^{i-1}). \tag{8}\]
Due to the fact that the labeling of spins in Eq. (8) is arbitrary, we can choose it in such a way that we first enumerate all spins from part \(A\), \(\mathbf{a}=(s^{1},s^{2},\ldots,s^{n_{A}})\) and only afterward all spins from part \(B\), \(\mathbf{b}=(s^{n_{A}+1},\ s^{n_{A}+2},\ldots,s^{n_{A}+n_{B}})\). We then obtain
\[q_{\theta}(\mathbf{s})\equiv q_{\theta}(\mathbf{a},\mathbf{b})=q_{\theta}( \mathbf{a})q_{\theta}(\mathbf{b}|\mathbf{a}), \tag{9}\]
with
\[q_{\theta}(\mathbf{a})=\prod_{i=1}^{n_{A}}q_{\theta}(s^{i}|s^{1},s^{2},\ldots, s^{i-1}), \tag{10}\]
and
\[q_{\theta}(\mathbf{b}|\mathbf{a})=\prod_{i=1}^{n_{B}}q_{\theta}(s^{n_{A}+i}|s^ {n_{A}+1},s^{n_{A}+2},\ldots,s^{n_{A}+i-1},\mathbf{a}). \tag{11}\]
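Because of this A-first ordering, the marginal and conditional log-probabilities of Eqs. (9)-(11) are just partial sums of the per-spin conditionals; a sketch assuming the network returns a vector of \(\log q_{\theta}(s^{i}|s^{1},\ldots,s^{i-1})\) for a sampled configuration:

```python
def split_log_prob(per_spin_log_conditionals, n_A):
    """Eqs. (9)-(11): log q(a) and log q(b|a) from the per-spin conditionals,
    assuming the first n_A entries correspond to subsystem A."""
    log_q_a = per_spin_log_conditionals[:n_A].sum()
    log_q_b_given_a = per_spin_log_conditionals[n_A:].sum()
    return log_q_a, log_q_b_given_a
```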
This feature of ANNs is precisely what allows us to estimate \(\log Z(\mathbf{a})\) and \(\log Z(\mathbf{b})\), required by Eq. (6), directly. In this respect, the ANN approach differs from _normalizing flows_ used to approximate continuous probability distributions and employed recently in the context of lattice field theory [21; 22; 23; 24; 25; 26; 27]. There, the conditional probability \(q_{\theta}(\mathbf{b}|\mathbf{a})\) as well as the marginal distribution \(q_{\theta}(\mathbf{a})\) are not so easily available.
_Results:_ We leave the technical details of the evaluation of the terms in Eq. (6) to the Appendix A and now provide an example of its application. We demonstrate it on the Ising model on a periodic \(L\times L\) lattice, with ferromagnetic, nearest-neighbor interactions, defined by the Hamiltonian
\[E(\mathbf{a},\mathbf{b})=-\sum_{\langle i,j\rangle}s^{i}s^{j}, \tag{12}\]
where \(s^{i}\in\{1,-1\}\). We consider the following divisions and respective mutual information observables: "strip" geometry: the system is divided into equal rectangular subsystems; "square" geometry: subsystem \(A\) is the
square of size \(\frac{1}{2}L\times\frac{1}{2}L\); "quarter" geometry: \(A\) is a rectangle of size \(\frac{1}{4}L\times L\). We show them schematically in the three sketches on the left in Fig. 1. We note that all of them have the same length of the border between the subsystems, i.e. \(2L\) (as we use periodic boundary conditions), although they may differ in their volumes. In the discussion below, we refer to those three partitionings as "block" partitionings. In addition, we also consider a division, called "chessboard", where the system is divided using even-odd labeling of spins. In this case, the boundary between the subsystems is \(L^{2}\), i.e. every spin of the system is at the boundary between parts \(A\) and \(B\). Note that in the chessboard partitioning the subsystems are not simply connected, as is usually considered in the literature.
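For concreteness, the Hamiltonian of Eq. (12) on the periodic lattice can be evaluated with two array shifts; a minimal NumPy sketch:

```python
import numpy as np

def ising_energy(s):
    """Eq. (12) on a periodic L x L lattice: E = -sum_<i,j> s^i s^j, with s
    an array of +/-1 spins; each bond is counted once via the two rolls."""
    return -(s * np.roll(s, 1, axis=0) + s * np.roll(s, 1, axis=1)).sum()
```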
We investigate a wide range of system sizes, reaching \(L=66\) or \(130\) (for some \(\beta\)). We combine results obtained by the Variational Autoregressive Network (VAN) approach of Ref. [16] and our recently proposed modification, called the Hierarchical Autoregressive Network (HAN) algorithm [18]. Both approaches are applied to several divisions, as summarized in Table 1. We describe the details of the VAN and HAN architectures and the quality of their training in Appendix B. In Appendix C we discuss the details of the mutual information calculation for the chessboard partitioning. Simulations were performed at 13 values of the inverse temperature: from 0.1 to 0.4 with a step of 0.05, 0.44, and from 0.5 to 0.9 with a step of 0.1. For each \(\beta\) we collected statistics of at least \(10^{6}\) configurations and estimated the statistical uncertainties using the jackknife resampling method. In all cases, the relative total uncertainty of \(I\), combining the systematic and statistical uncertainties, is of the order of 1 %, or smaller.
In Fig. 2 we show \(I\) as a function of \(L\) for the three block partitionings and for three representative inverse temperatures \(\beta\): 0.1 (high-temperature regime), 0.44 (very close to the critical temperature), and 0.9 (low-temperature regime). We clearly observe a linear dependence on the system size with a strongly \(\beta\)-dependent slope and intercept. Similar behavior can be observed for the chessboard geometry when one plots \(I\) as a function of \(L^{2}\) - see Fig. 3. The emerging picture, supporting the conjectured area law, is that \(I\) can be written in a compact form as
\[I_{geom}(\beta,L)=\alpha_{geom}(\beta)B(L)+r_{geom}(\beta), \tag{13}\]
where the variable \(B(L)\) corresponds to the length of the boundary between the two parts \(A\) and \(B\): \(B(L)=L^{2}\) for the chessboard partitioning and \(B(L)=2L\) for the other partitionings considered in this work.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Partitioning & VAN & HAN \\ \hline strip & 8, 12, 16, 20, 24, 28 & 10, 18, 34, 66, 130 (for \(\beta\leq 0.3\)) \\ \hline square & & 10, 18, 34, 66, 130 (for \(\beta\leq 0.3\)) \\ \hline quarter & 8, 12, 16, 20, 24, 28 & \\ \hline chessboard & 8, 12, 16, 20, 24, 28, 32 & \\ \hline \end{tabular}
\end{table}
Table 1: The system sizes \(L\) considered for each partitioning type. Note that HAN requires a total system size \(L=2^{n}+2\), where \(n\) is the number of levels in the hierarchy.
Figure 1: Four considered partitioning geometries. Periodic boundary conditions are applied. The small blocks in the chessboard partitioning represent single spins.
Figure 3: General dependence of \(I\) on \(L^{2}\) for the chessboard partitioning at three inverse temperatures: \(\beta=0.10\), \(\beta=0.44\) and \(\beta=0.90\). Lines correspond to attempted fits using the area law Ansatz. Statistical uncertainties are shown but are much smaller than the symbol size.
Figure 2: General dependence of \(I\) on \(L\) for various geometries and three representative inverse temperatures: \(\beta=0.10\) in the disordered phase, \(\beta=0.44\) close to the phase transition and \(\beta=0.90\) in the ordered phase. Lines correspond to attempted fits using the area law Ansatz.
Eq. (13) has two parameters, \(\alpha\) and \(r\), which, as the notation indicates, depend on \(\beta\) and can differ for different partitioning geometries.
We expect that Eq. (13) is valid as long as finite volume effects in \(I\) are small. In general, they may depend on three length scales present in the system: the size of the whole system \(L\), the size of the smallest subsystem, and the correlation length \(\xi\) determined by the temperature of the system. Finite volume effects should vanish when \(\xi\) is smaller than the other scales, and they may depend on the geometry of the partitioning. A closer look at the data indeed reveals additional contributions to the mutual information which spoil the area law (13). To see this we plot in Fig. 4 the \(\sqrt{\chi^{2}/{\rm DOF}}\) of the fits, which were performed assuming the relation Eq. (13). Three regions of inverse temperature can be easily identified. At small \(\beta\) the Ansatz Eq. (13) describes all system sizes since \(\sqrt{\chi^{2}/{\rm DOF}}\approx 1\), suggesting that finite volume effects are smaller than the statistical uncertainties. A similar situation occurs for large \(\beta\), where again the fit involved all available system sizes. In contrast, in the region close to the phase transition, where the correlation length \(\xi\) is the largest, the fits are clearly of bad quality, \(\sqrt{\chi^{2}/{\rm DOF}}\gg 1\). This picture is further confirmed by the fact that the quality of the fit improves when we discard smaller system sizes, \(L<L_{min}\), as shown for the strip partitioning in the inset.
The values of \(\sqrt{\chi^{2}/{\rm DOF}}\) close to critical temperature are particularly large for the chessboard partitioning. However, since the errors of the individual points are much smaller than for other partitionings (due to the simplified calculation of \(Z({\bf a})\) - see Appendix C), we refrain from drawing conclusions about the size of finite-size effects in this geometry compared to block partitionings.
_Discussion of \(\alpha(\beta)\) and \(r(\beta)\) coefficients:_ For \(\beta\) far from the critical inverse temperature we can reliably describe our data with the area law Ansatz Eq. (13). This allows us to extract the coefficients \(\alpha\) and \(r\) for the four different partitionings and compare them. We show the results in Fig. 5. In the main figures, we show the dependence on \(\beta\), whereas in the insets we show the difference between the "strip" and "square" partitionings (the differences between other partitionings look similar). We have shown only the values which were obtained from fits with \(\sqrt{\chi^{2}/{\rm DOF}}\lesssim 1\), so as to be sure that the postulated dependence Eq. (13) is indeed reflected in the data. This means \(\beta\leq 0.35\) and \(\beta\geq 0.5\) for block partitionings, and \(\beta\leq 0.25\) and \(\beta\geq 0.5\) for the chessboard.
In the left panel, we show the coefficient \(\alpha\) as a function of \(\beta\). The qualitative behavior of \(\alpha(\beta)\) seems to be universal: it goes to \(0\) at \(\beta=0\) and \(\beta\to\infty\) and rises around the critical temperature. The data clearly show that the chessboard partitioning yields different values than the three other possibilities, which seem to be compatible with each other. The differences shown in the inset are indeed compatible with zero within their uncertainties. Therefore, we conclude that in this range of \(\beta\) the partitioning geometry does not influence the mutual information (when block partitionings are considered). In the right panel, we show the value of \(r\). In that case, all four partitionings give the same result: \(0\) for \(\beta<\beta_{c}\) and \(\log 2\) for \(\beta>\beta_{c}\). This seems to be quite a universal behavior [13].
_Conclusions and prospects:_ In this work, we have provided a numerical demonstration that the Shannon bipartite mutual information can be readily obtained from the Neural Importance Sampling algorithm for the classical Ising model on a square lattice. Our approach allows for studies of different partitionings, and we provided comparisons of the mutual information estimated for four geometries. We successfully exploited the hierarchical algorithm (HAN) to reach larger system sizes than achievable using standard Variational Autoregressive Networks (VAN). We discussed the validity and universality of the area law for different partitionings. We found that at low and high temperatures the area law is satisfied, whereas such an Ansatz does not describe our data in the vicinity of the phase transition, presumably due to inherent finite volume effects.
Our proposal can be applied to many other spin systems with short- and long-ranged interactions, such as various classes of spin-glass systems. Long-ranged interactions prohibit the application of the HAN algorithm, limiting, at the moment, the available system sizes to \(L\sim 30\).
We believe that by exploiting the Feynman path integral quantization prescription one may use the approach based on autoregressive networks to estimate also the entanglement entropy in quantum spin systems. In such a picture, a \(D\)-dimensional quantum system is described by a \(D+1\)-dimensional statistical system, where the machine learning enhanced Monte Carlo is applicable. In particular, thanks to the replica method [28], one can directly express the Renyi entropies in terms of partition functions [29] readily obtainable in the NIS.
Figure 4: Quality of fits involving all available data points and assuming the area law functional form of the mutual information for the four geometries shown in Figure 1: \(\sqrt{\chi^{2}/{\rm DOF}}\) plotted as a function of the inverse temperature. In the inset, we show how the fit improves as we restrict the system sizes included in the fit to be greater than or equal to \(L_{\rm min}\).
In this approach, systems in any space-time dimension can be studied, with the obvious limitation that performance is constrained by the total number of sites in the system, hence reducing the available volumes in higher dimensions. Still, due to its straightforwardness, it should be seen as a valuable alternative for studying the information-theoretic properties of one-, two-, or three-dimensional quantum systems.
One should also observe that our proposal in principle can work unaltered for systems with continuous degrees of freedom. Neural generative networks have been already discussed in the context of \(\phi^{4}\) classical field theory [30] as well as \(U(1)\), \(SU(N)\)[31], and the Schwinger model gauge theories [32]. In all these cases, one would introduce a conditional normalizing flow or another proposal that would give access to conditional probabilities (see for example Ref. [33]). For instance, a hierarchical construction similar to the HAN algorithm employing normalizing flows could be used to simulate the \(\phi^{4}\) classical field theory. In this way, conditional probabilities would be naturally introduced and could be used to calculate the mutual information for some specific partitioning. The combination of the proposed method of measuring information-theoretic quantities with the recent advancements [27] in the machine learning enhanced algorithms for simulating four-dimensional Lattice Quantum Chromodynamics (LQCD) can open new ways of investigating this phenomenologically important theory, for instance by studying quantum correlations and entanglement of the QCD vacuum.
###### Acknowledgements.
Computer time allocation 'plpng' on the Prometheus and ARES supercomputers hosted by AGH Cyfronet in Krakow, Poland was used through the Polish PLGRID consortium. T.S. kindly acknowledges the support of the Polish National Science Center (NCN) Grants No. 2019/32/C/ST2/00202 and 2021/43/D/ST2/03375 and the support of the Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University Grant No. LM/23/ST. P.K. acknowledges that this research was partially funded by the Priority Research Area Digiworld under the program Excellence Initiative - Research University at the Jagiellonian University in Krakow. P.K. and T.S. thank Alberto Ramos and the University of Valencia for their hospitality during the stay when part of this work was performed and discussed. We also acknowledge very fruitful discussions with Leszek Hadasz on entanglement entropy.
|
2307.08204 | A Quantum Convolutional Neural Network Approach for Object Detection and
Classification | This paper presents a comprehensive evaluation of the potential of Quantum
Convolutional Neural Networks (QCNNs) in comparison to classical Convolutional
Neural Networks (CNNs) and Artificial / Classical Neural Network (ANN) models.
With the increasing amount of data, utilizing computing methods like CNN in
real-time has become challenging. QCNNs overcome this challenge by utilizing
qubits to represent data in a quantum environment and applying CNN structures
to quantum computers. The time and accuracy of QCNNs are compared with
classical CNNs and ANN models under different conditions such as batch size and
input size. The maximum complexity level that QCNNs can handle in terms of
these parameters is also investigated. The analysis shows that QCNNs have the
potential to outperform both classical CNNs and ANN models in terms of accuracy
and efficiency for certain applications, demonstrating their promise as a
powerful tool in the field of machine learning. | Gowri Namratha Meedinti, Kandukuri Sai Srirekha, Radhakrishnan Delhibabu | 2023-07-17T02:38:04Z | http://arxiv.org/abs/2307.08204v1 | # A Quantum Convolutional Neural Network Approach for Object Detection and Classification
###### Abstract
This paper presents a comprehensive evaluation of the potential of Quantum Convolutional Neural Networks (QCNNs) in comparison to classical Convolutional Neural Networks (CNNs) and Artificial / Classical Neural Network (ANN) models. With the increasing amount of data, utilizing computing methods like CNN in real-time has become challenging. QCNNs overcome this challenge by utilizing qubits to represent data in a quantum environment and applying CNN structures to quantum computers. The time and accuracy of QCNNs are compared with classical CNNs and ANN models under different conditions such as batch size and input size. The maximum complexity level that QCNNs can handle in terms of these parameters is also investigated. The analysis shows that QCNNs have the potential to outperform both classical CNNs and ANN models in terms of accuracy and efficiency for certain applications, demonstrating their promise as a powerful tool in the field of machine learning.
**Keywords:** Quantum Convolutional Neural Networks, QCNNs, classical CNNs, Artificial Neural Network, ANN, fully connected neural network, machine learning, efficiency, accuracy, real-time, data, qubits, quantum environment, batch size, input size, comparison, potential, promise.
## 1 Introduction
In recent years, there has been a significant increase in investment in the field of quantum computing, with the aim of leveraging its principles to solve problems that are intractable using traditional computing techniques. The intersection of quantum computing and deep learning is of particular interest, as both fields have seen significant growth in recent years. Researchers such as Garg and Ramakrishnan [1] have highlighted the potential of quantum computing to revolutionize current techniques in areas such as security and network communication. The application of quantum computing principles to deep learning models has the potential to significantly enhance their performance and enable the solution of classically intractable problems. As such, there has been a growing interest in the exploration of the possibilities at the intersection of these two fields, commonly referred to as Quantum deep learning.
The classification outcome is obtained by utilizing the fully connected layer after the data size has been effectively reduced by multiple applications of convolutional and pooling layers. To achieve optimal results, the discrepancy between the predicted label and the actual label can be employed to train the model using optimization techniques such as gradient descent. In recent years, several studies have combined the principles of quantum computing with the CNN model, through Quantum Convolutional Neural Networks (QCNNs), to solve real-world problems that are otherwise intractable using conventional machine learning techniques. There exists an approach for efficiently solving quantum physics problems by incorporating the CNN structure into a quantum system, as well as a methodology for enhancing performance by incorporating quantum principles into problems previously solved by CNNs.
## 2 Background
### Convolutional Neural Network
Convolutional Neural Networks (CNNs) are a subclass of artificial neural networks that are widely utilized in image recognition and audio processing tasks. They possess the ability to identify specific features and patterns in a given input, making them a powerful tool in the field of computer vision. This ability to identify features is achieved by using two types of layers in a CNN: the convolutional layer and the pooling layer (Figure 1).
The convolutional layer applies a set of filters or kernels to an input image, resulting in a feature map that represents the input image with the filters applied. These layers can be stacked to create more complex models, which can learn more intricate features from images. The pooling layer, on the other hand, reduces the spatial size of the input, making it easier to process and requiring less memory. They also help to reduce the number of parameters and speed up the training process. Two main types of pooling are used: max pooling and average pooling. Max pooling takes the maximum value from each feature map, while average pooling takes the average value. Pooling layers are typically used after convolutional layers to reduce the size of the input before it is fed into a fully connected layer.
Fully connected layers are one of the most basic types of layers in a CNN: each neuron is connected to every neuron in the previous layer. They are typically used towards the end of a CNN, when the goal is to take the
features learned by the previous layers and use them to make predictions. For example, if a CNN is used to classify images of animals, the final fully connected layer might take the features learned by the previous layers and use them to classify an image as containing a dog, cat, bird, etc.
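As an illustration of how these three layer types chain together, here is a generic PyTorch sketch for 28x28 grayscale inputs (an illustrative toy model, not an architecture from the cited works):

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal conv -> pool -> fully connected classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                            # max pooling: 28x28 -> 14x14
        )
        self.classifier = nn.Linear(8 * 14 * 14, n_classes)  # fully connected

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```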
Several studies have been conducted to improve the performance of CNNs. For example, Saad Albawi et al. [2] discussed the layers of the Convolutional Neural Network in depth and found that as the number of layers and parameters increases, the time and complexity of training and testing the model increase significantly. The use of convolutional neural networks (CNNs) in image analysis tasks has been extensively studied in recent literature. Keiron O'Shea et al. in their study [3] discuss the advantages of CNNs over traditional artificial neural networks (ANNs) and the best ways to structure a network for image analysis tasks. The authors highlight that CNNs exploit the knowledge of the specific input used, but also note that they are resource-heavy algorithms, particularly when dealing with large images. Shabeer Basha et al. in [4] investigate the relationship between fully connected layers and CNN architectures, and the impact of deeper/shallower architectures on CNN performance. The authors conclude that shallow CNNs perform better with wider datasets, while deeper CNNs are more suitable for deeper datasets. This serves as an advantage for the deep learning community as it allows for the selection of the appropriate model for higher precision and accuracy on a given dataset.
In [5], Sakshi Indolia et al. highlight the architectures and learning algorithms used for CNNs. They mention that the GoogleNet architecture, while reducing the budget and number of trainable parameters, also increases the risk of overfitting as the network size increases. Youhui Tian in [6] presents a new CNN algorithm that aims to increase convergence speed and recognition accuracy by incorporating a recurrent neural network and a residual module called ShortCut3-ResNet. This ultra-lightweight network structure reduces the number of parameters, making the algorithm more diverse in feature extraction and improving test accuracy.
Figure 1: Convolutional Neural Networks
Shyava Tripathi et al. in [7] focus on the real-time implementation of image recognition with a low complexity and good classification accuracy for a dataset of 200 classes. The authors suggest that there is scope for improvement in increasing the number of classes to 1000 and focusing on feature extraction rather than raw input images. In [8], Rahul Chauhan et al. implement the algorithm on MNIST and CIFAR-10 datasets, achieving 99.6 percent and 80.17 percent accuracy, respectively. The authors suggest that the accuracy on the training set can be improved by adding more hidden layers.
Deepika Jaiswal et al. in [9] implement the algorithm against various standard data sets and measure the performance based on mean square error and classification accuracy. The classification accuracy for some datasets reaches 99 percent and 90 percent, but for others, such as large aerial images, the accuracy is in the 60s and 70s, indicating scope for improvement. In [10], Neha Sharma et al. conduct an empirical analysis of popular neural networks like AlexNets, GoogleNet, and ResNet50 against 5 standard image data sets. The authors find that 27 layers are insufficient to classify the datasets and that the more layers, the higher the accuracy in prediction. In this case, the highest accuracy was achieved at 147-177 layers, which is not suitable for training on a normal desktop. However, once trained, the model can be used in a wide number of applications due to its flexibility.
Finally, in [11], Shuying Liu and Weihong Deng aim to prove that deep CNNs can be used to fit small datasets with simple modifications without severe overfitting. The authors conclude that on large enough images, batch normalization on a very deep model will give comparable accuracy to shallow models. However, they also note that there is still scope for improvement in both methods, suggesting that deep models can be used for small datasets once overfitting is addressed and better accuracy is achieved.
Overall, the literature suggests that CNNs are powerful tools for image analysis tasks and that various architectures and modifications can be used to improve performance on different datasets. Convolutional Neural Networks (CNNs) have been widely used in image analysis tasks and have been shown to be effective in achieving high accuracy and precision. However, there are still challenges and areas for improvement, such as memory allocation for large input images and overfitting for small datasets. Various studies have attempted to address these challenges, such as introducing new architectures and algorithms to reduce the number of parameters and increase convergence speed, and implementing batch normalization on deep models to combat overfitting. It is therefore important to consider the type and size of the dataset when choosing a CNN architecture in order to achieve optimal performance.
### Quantum Convolutional Neural Network
The current research aims to explore the potential of Quantum Convolutional Neural Networks (QCNNs) in addressing the limitations of classical CNNs in solving quantum physics problems. The exponential growth of data size as the system size increases has been a significant hindrance in utilizing classical computing methods to solve quantum physics problems. QCNNs address this challenge by utilizing qubits to represent data in a quantum environment and applying CNN structures to quantum computers.
QCNNs are based on the fundamental concepts and structures of classical CNNs, but adapt them to the realm of quantum systems. The utilization of qubits in a
quantum environment allows for the property of superposition to be utilized, where qubits can exist in multiple states at the same time. This property of superposition plays a vital role in quantum computing tasks, as it allows quantum computers to perform multiple tasks in parallel without the need for a fully parallel architecture or GPUs.
In QCNNs, the image is first encoded into a quantum circuit using a given feature map, such as Qiskit's ZFeatureMap or ZZFeatureMap. Alternating convolutional and pooling layers are then applied to the encoded image, reducing the dimensionality of the circuit until only one qubit remains. The output of this remaining qubit is measured to classify the input image. The Quantum Convolutional Layer consists of a series of two-qubit unitary operators that recognize and determine relationships between the qubits in the circuit. The Quantum Pooling Layer, however, reduces the number of qubits by performing operations on the qubits up to a specific point and then discarding certain qubits in a specific layer.
In QCNNs, each layer contains parametrized circuits, meaning that the output can be altered by adjusting the parameters of each layer. During training, these parameters are adjusted to reduce the loss function of the QCNN. The present research aims to investigate the potential of QCNNs in addressing the limitations of classical CNNs and solving quantum physics problems.
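As a schematic illustration of one convolution-plus-pooling round in Qiskit (the two-qubit block and the parameter counts are our illustrative choices, not the circuits of the cited works):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.circuit.library import ZFeatureMap

def conv_block(params):
    """Two-qubit parametrized unitary used as a convolutional unit."""
    qc = QuantumCircuit(2)
    qc.ry(params[0], 0)
    qc.ry(params[1], 1)
    qc.cx(0, 1)
    qc.ry(params[2], 0)
    qc.ry(params[3], 1)
    return qc

n = 4
theta = ParameterVector("theta", 4 * (n - 1))
circuit = QuantumCircuit(n)
circuit.compose(ZFeatureMap(n), inplace=True)       # encode classical data
for i in range(n - 1):                              # convolution on neighbor pairs
    circuit.compose(conv_block(theta[4 * i:4 * i + 4]), [i, i + 1], inplace=True)

# "pooling": entangle each even qubit into its neighbor, then ignore it
phi = ParameterVector("phi", n // 2)
for j in range(0, n - 1, 2):
    circuit.cx(j, j + 1)
    circuit.ry(phi[j // 2], j + 1)
```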
The study by Rishab Parthasarathy and Rohan Bhowmik [12] aimed to investigate the potential of quantum computing for efficient image recognition by creating and assessing a novel machine learning algorithm, the quantum optical convolutional neural network (QOCNN). The QOCNN architecture combines the quantum computing paradigm with quantum photonics and was benchmarked against competing models, achieving comparable accuracy while outperforming them in terms of robustness. Additionally, the proposed model has significant potential for computational speed improvement. The results of this study demonstrate the significant potential of quantum computing for the development of artificial intelligence and machine learning.
Subsequently, other studies, such as those by Tak Hur et al. [13], Potok et al. [14], Tacchino et al. [15], and Ji Guan et al. [16], have been conducted to explore the potential of quantum computing in machine learning.
Tak Hur et al. [13] conducted a study in which they simulated the MNIST and Fashion MNIST datasets with Pennylane and various combinations of factors to test 8-qubit quantum convolutional neural network (QCNN) models for binary classification. The results of this study revealed that QCNN exhibited high classification accuracy, with the highest example being 94% for Fashion MNIST and close to 99% for MNIST. Furthermore, they compared the performance of QCNN to traditional convolutional neural networks (CNN) and found that, given the same training settings for both benchmarking datasets, QCNN outperformed CNN considerably.
Potok et al. [14] also conducted a study that compared the performance of deep learning architectures on three different types of computing platforms - quantum, high performance, and neuromorphic, and highlighted the unique strengths of each. Tacchino et al. carried out experiments using a NISQ quantum processor to test a quantum neural network (QNN) with a small number of qubits, and proposed a hybrid
algorithm that combines quantum and classical techniques to update the network parameters.
Ji Guan et al. in their study [17] investigated the formal robustness verification of quantum machine learning algorithms against unknown quantum noise. They discovered an analytical bound that can be efficiently calculated to provide a lower estimate of robust accuracy in real-world applications. Furthermore, they developed a robustness verification algorithm that can precisely verify the \(\epsilon\)-robustness of quantum machine learning algorithms and also provides helpful counter examples for adversarial training. Tensor networks are widely recognized as a powerful data structure for implementing large-scale quantum classifiers, such as QCNNs with 45 qubits in [18].
In order to meet the demands of NISQ devices with more than 50 qubits, the authors integrated tensor networks into their robustness verification algorithm for practical applications. However, more research is needed to fully understand the significance of robustness in quantum machine learning, particularly through more experiments on real-world applications such as learning the phases of quantum many-body systems.
Iris Cong et al. in their work [18] employed a finite-difference method to calculate gradients and, owing to the structural similarity of the QCNN to its classical counterpart, adopted more efficient techniques such as backpropagation. This approach allows for a more streamlined implementation of the QCNN model, making it more practical for real-world applications.
Overall, the results of these studies demonstrate the enormous potential that quantum computing holds for the development of artificial intelligence and machine learning, specifically in terms of performance and accuracy. The QCNN models [19, 20] show promising results in terms of classification accuracy, outperforming traditional CNN models. Additionally, the comparison of deep learning architectures on different types of computing platforms highlights the unique strengths of quantum computing in this field.
### Datasets
We are training our model on the MNIST dataset. The MNIST dataset is a widely used dataset for training and testing image recognition algorithms. It contains 60,000 training examples and 10,000 test examples of handwritten digits, each represented as a 28x28 grayscale image. The digits in the dataset have been size-normalized and centered in the image to ensure consistency.
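Since the experiments below restrict the task to the binary 0-versus-7 problem, a minimal data-preparation sketch is shown here, assuming the `tensorflow.keras` MNIST loader; the subsample sizes and the helper name `filter_07` are illustrative assumptions, consistent with the reduced dataset sizes mentioned in the conclusion.

```python
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def filter_07(x, y, n_per_class=500):
    """Keep only digits 0 and 7, relabel them 0/1, and subsample each class."""
    keep = (y == 0) | (y == 7)
    x, y = x[keep], (y[keep] == 7).astype(int)
    idx = np.concatenate([np.where(y == c)[0][:n_per_class] for c in (0, 1)])
    return x[idx].astype("float32") / 255.0, y[idx]

x_tr, y_tr = filter_07(x_train, y_train)
x_te, y_te = filter_07(x_test, y_test, n_per_class=100)
print(x_tr.shape, x_te.shape)  # e.g. (1000, 28, 28) (200, 28, 28)
```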
## 3 Method
### Proposed model
Figure 3 shows the architecture of the proposed quantum neural network model. In this proposed QCNN, the convolutional layer is modeled as a quasi-local unitary operation on the input state density. This unitary operator, denoted by \(U_{i}\), is applied on several successive sets of input qubits, up to a predefined depth. The pooling layer is implemented by performing measurements on some of the qubits and applying unitary rotations \(V_{i}\) to the nearby qubits.
The rotation operation is determined by the observations on the measured qubits. After the required number of blocks of convolutional and pooling unitaries, the unitary \(F\) implements the fully connected layer, and a final measurement on its output yields the network output. The measurement result is typically a probability distribution over the possible classes, and the class with the highest probability is chosen as the final output of the QCNN. The decoding process can be done in different ways, depending on the specific implementation; here, the final qubit is measured in the computational basis, and the measurement result is used to determine the class of the input image.
Quantum Convolutional Neural Networks (QCNNs) can be mathematically modeled using quantum circuits and linear algebra. In a QCNN, the input data is represented as a quantum state, which is initialized using a set of single-qubit gates. The convolution operation is implemented using a set of trainable quantum filters, which are represented by unitary matrices. The pooling operation is performed using specific quantum circuits composed of quantum gates that operate on the state of the quantum register. The performance of a QCNN is evaluated using a loss function; the most commonly used is the mean squared error (MSE), defined as:

\(L(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}\)
The optimization is performed using a quantum optimizer, such as the Variational Quantum Eigensolver (VQE), which adjusts the parameters of the quantum filters to minimize the loss function via gradient descent. The update rule for the parameters can be represented mathematically as:

\(\theta_{\text{new}}=\theta_{\text{old}}-\alpha\nabla L(\theta_{\text{old}})\)
The mathematical modeling of QCNNs involves the use of quantum circuits, quantum gates, quantum states, and linear algebra to perform the convolution and pooling operations and optimize the parameters of the quantum filters to minimize the loss function.
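To make the update rule concrete, here is a minimal sketch of plain gradient descent with a central finite-difference gradient estimate, assuming only an abstract `loss(theta)` callable (e.g., one that executes the parametrized circuit and returns the MSE); on real hardware a parameter-shift rule would typically replace the numerical gradient.

```python
import numpy as np

def numerical_gradient(loss, theta, eps=1e-3):
    """Central finite-difference estimate of the loss gradient."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = eps
        grad[i] = (loss(theta + shift) - loss(theta - shift)) / (2 * eps)
    return grad

def train(loss, theta, lr=0.1, steps=100):
    """Plain gradient descent: theta_new = theta_old - lr * grad L(theta_old)."""
    for _ in range(steps):
        theta = theta - lr * numerical_gradient(loss, theta)
    return theta
```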
**Task flow**
The present research paper proposes a framework for a quantum neural network model, which is represented through a task flow diagram depicted in Figure 4. The proposed model involves the following steps. First, a standard dataset for image classification tasks is collected; in this project the MNIST dataset, consisting of images of handwritten digits, is used. Then, the algorithms to be compared are selected; in this project, an Artificial Neural Network (ANN), a Quantum Convolutional Neural Network (QCNN), and a Classical Convolutional Neural Network (CNN) are compared for their performance. Subsequently, the dataset is preprocessed to prepare it for the training of the algorithms, including scaling, normalization, and data augmentation. The models are then trained on the preprocessed dataset using an iterative process, in which the models are trained on batches of data and the weights are updated based on their performance on the training data. After the models are trained, their performance is evaluated on a separate test dataset, where the accuracy and loss curves are measured to compare the performance of the ANN, QCNN, and CNN models. Finally, the results are analyzed and interpreted to draw conclusions about the performance of the QCNN and CNN models for image classification tasks. Additionally, the potential advantages of QCNNs in solving complex problems using qubits are explored. A sketch of a classical baseline is given below.

Figure 3: Proposed Quantum Neural Network model
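For reference, a small classical CNN baseline of the kind this pipeline compares against can be sketched as follows, assuming `tensorflow.keras`; the layer sizes are illustrative and not necessarily those of the CNN evaluated here.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(28, 28, 1)):
    """A small classical CNN baseline for the binary 0-vs-7 task."""
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn()
# history = model.fit(x_tr[..., None], y_tr, epochs=5,
#                     validation_data=(x_te[..., None], y_te))
```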
## 4 Result
The experimental results depicted in Table 2 showcase the construction of four distinct models used for binary classification on the MNIST dataset. These models encompass the Quantum Neural Network (QNN), Classical Convolutional Neural Network (CNN), Classical Neural Network (NN) without convolution, and Quantum Convolutional Neural Network (QCNN). Their training objective was to accurately classify digits as either 0 or 7, with a strong emphasis on achieving high accuracy.

Figure 4: Task flow

Figure 5: QNN training vs. testing (a) accuracy (b) loss
Upon evaluation of a reduced-scale dataset, the classical algorithms demonstrated remarkable accuracy levels, approaching 1.0. Specifically, the CNN model achieved an accuracy of 0.999, accompanied by a low loss value of 0.0031. Similarly, the NN model achieved an accuracy of 0.9141, albeit with a slightly higher loss value of 0.2180. By contrast, when the quantum algorithms and models were executed using various input parameters such as batch size and epochs, the QNN model exhibited accuracy within the range of 0.50 to 0.60, and the accuracy of the QCNN model fell within the range of 0.52 to 0.61, as indicated by the accuracy curves presented in Figure 5 and Figure 6.
## 5 Conclusion
Figure 8 shows the Mindgraph of our study. The results presented in this study demonstrate that classical neural networks outperform quantum neural networks for binary classification tasks involving the MNIST dataset. Specifically, the classical CNN and NN models achieved accuracy scores of 0.999 and 0.9141, respectively, while the accuracy scores of the quantum QNN and QCNN models were in the ranges of 0.50-0.60 and 0.52-0.61, respectively.
This finding has significant implications for the field of quantum computing and machine learning, as it suggests that classical neural networks may be more effective than quantum neural networks for certain tasks. However, it is important to note that the results presented in this study may not be generalizable to other datasets and tasks. Future research should explore the effectiveness of quantum neural networks for a wider range of tasks and datasets. Appendix A summarizes a few related QCNN studies.
The small size of the MNIST dataset, containing only 60,000 training images and 10,000 test images, which after the required preprocessing is reduced to roughly 1000-2000 training and testing images, is likely one of the reasons for the lower performance of QCNN algorithms in binary classification tasks compared to classical CNN and NN. Another potential cause is their relatively new and complex architecture, which may require more optimization effort. Additionally, hardware limitations of quantum computers, such as their limited number of qubits and coherence times, could also play a role in their lower performance on small datasets like MNIST. Moreover, the sensitivity of quantum computers to noise and errors is another factor that could affect the accuracy and performance of QCNN algorithms, particularly for near-term quantum computers with high error rates. Lastly, it is possible that the MNIST dataset does not offer a clear quantum advantage over classical algorithms, and therefore the performance of QCNN algorithms may not significantly outperform classical CNN and NN in this case.

Figure 8: Mindgraph of our study
**Discussions**
One potential avenue for future research is to investigate the effectiveness of hybrid models that combine classical and quantum neural networks. This approach has shown promise in previous studies and could potentially improve the performance of quantum neural networks for certain tasks. Additionally, the use of quantum computing hardware could potentially yield better results than simulations on classical computers.
Another area for future research is to explore the potential of quantum neural networks for unsupervised learning tasks. While classical neural networks have achieved significant success in supervised learning tasks, their effectiveness in unsupervised learning tasks is still limited. Quantum neural networks, on the other hand, have shown promise for unsupervised learning tasks such as clustering and dimensionality reduction.
In addition, the utilization of quantum computing hardware holds the potential to outperform classical computer simulations. Quantum computers excel when confronted with complex datasets and large-scale data, and their inherent processing capabilities allow them to tackle intricate computational challenges effectively. Consequently, quantum computing is expected to yield substantial advancements and superior results in domains where classical computers face limitations.
In conclusion, the results of this study highlight the current limitations of quantum neural networks for binary classification tasks involving the MNIST dataset. However, further research is needed to fully explore the potential of quantum neural networks for various applications and to determine whether they can outperform classical neural networks for certain tasks.
**Repository** [https://github.com/IamRash-7/capstone_project](https://github.com/IamRash-7/capstone_project) |
2307.09126 | Recent Advances in Metasurface Design and Quantum Optics Applications with Machine Learning, Physics-Informed Neural Networks, and Topology Optimization Methods | As a two-dimensional planar material with low depth profile, a metasurface can generate non-classical phase distributions for the transmitted and reflected electromagnetic waves at its interface. Thus, it offers more flexibility to control the wave front. A traditional metasurface design process mainly adopts the forward prediction algorithm, such as Finite Difference Time Domain, combined with manual parameter optimization. However, such methods are time-consuming, and it is difficult to keep the practical meta-atom spectrum consistent with the ideal one. In addition, since the periodic boundary condition is used in the meta-atom design process, while the aperiodic condition is used in the array simulation, the coupling between neighboring meta-atoms leads to inevitable inaccuracy. In this review, representative intelligent methods for metasurface design are introduced and discussed, including machine learning, physics-informed neural networks, and topology optimization methods. We elaborate on the principle of each approach, analyze their advantages and limitations, and discuss their potential applications. We also summarise recent advances in metasurface-enabled quantum optics applications. In short, this paper highlights a promising direction for intelligent metasurface designs and applications for future quantum optics research and serves as an up-to-date reference for researchers in the metasurface and metamaterial fields. | Wenye Ji, Jin Chang, He-Xiu Xu, Jian Rong Gao, Simon Gröblacher, Paul Urbach, Aurèle J. L. Adam | 2023-07-18T10:17:33Z | http://arxiv.org/abs/2307.09126v1 | Recent Advances in Metasurface Design and Quantum Optics Applications with Machine Learning, Physics-Informed Neural Networks, and Topology Optimization Methods
## Abstract

As a two-dimensional planar material with low depth profile, a metasurface can generate non-classical phase distributions for the transmitted and reflected electromagnetic waves at its interface. Thus, it offers more flexibility to control the wave front. A traditional metasurface design process mainly adopts the forward prediction algorithm, such as Finite Difference Time Domain, combined with manual parameter optimization. However, such methods are time-consuming, and it is difficult to keep the practical meta-atom spectrum consistent with the ideal one. In addition, since the periodic boundary condition is used in the meta-atom design process, while the aperiodic condition is used in the array simulation, the coupling between neighboring meta-atoms leads to inevitable inaccuracy. In this review, representative intelligent methods for metasurface design are introduced and discussed, including machine learning, physics-informed neural networks, and topology optimization methods. We elaborate on the principle of each approach, analyze their advantages and limitations, and discuss their potential applications. We also summarise recent advances in metasurface-enabled quantum optics applications. In short, this paper highlights a promising direction for intelligent metasurface designs and applications for future quantum optics research and serves as an up-to-date reference for researchers in the metasurface and metamaterial fields.
## 1 Introduction
Electromagnetic (EM) wave front modulation has important significance in both scientific research and industrial applications. It is highly demanded in classical information processing [1], [2], telecommunication [3], [4], military applications [5], [6], imaging systems [7], [8], and their quantum counterparts [9]-[11]. Traditionally, dielectric materials-based devices were used to control EM waves, including lenses and deflectors [12]-[14]. However, conventional dielectric materials have limited choices of dielectric constants. In addition, large dimensions and complex shapes are required to accumulate enough propagating phase to realize targeted functions [15], [16].
In recent years, two-dimensional (2D) planar materials, known as metasurfaces, have solved challenging problems encountered by traditional optical devices. Metasurfaces have artificially tunable EM responses. The modulation of EM waves is achieved by arranging meta-atoms (pixel unit cells of a metasurface) in a predefined order [17]-[38]. The principle is to use the sharp phase variation of the transmitted or reflected wave on the metasurface with the designed structure to effectively control the EM wavefront [39], [40]. After Yu et al. proposed the generalized Snell's law [38], metasurface research started to flourish, and various metasurface devices with different functionalities emerged continually [41]-[56]. A traditional metasurface design relies on forward prediction methods: Finite Element Methods or Finite Difference Time Domain Methods are used to predict the optical properties [57]-[58]. Normally, a unit cell is simulated with a periodic boundary, and more unit cells are then combined to form a large-area system. Such a process is time-consuming, and it is difficult for the designed meta-atoms to achieve ideal optical responses. Due to the use of periodic boundary conditions in meta-atom design and aperiodic boundary conditions in the array simulation process, the mutual coupling between different meta-atoms causes unavoidable inaccuracy.
Before we discuss the intelligent optimization techniques for metasurface designs, the traditional methods derived from fundamental laws of physics are introduced in detail, as their logic is straightforward to follow. The general design process for a metasurface device consists of two steps. The first step is the metasurface unit cell design.
Afterwards, one fills in the discrete metasurface pixels with unit cells of different responses, including phase, amplitude, or other EM properties of a wave [59]. There are several typical mechanisms for metasurface unit cell design. The earliest is the propagating phase metasurface [60]. Its mechanism is to change the EM resonance of the unit cell by adjusting the size of the metal structure, which introduces a phase change; this mechanism works for linearly polarized waves. Later on, the Pancharatnam-Berry (PB) phase metasurface was proposed [61], another mechanism for unit cell design that works for circularly polarized waves. The phase change is related only to the rotation angle of the unit cell and is independent of its size. Generally, the additional phase shift of a PB phase unit cell is twice the rotation angle of the structure. In designing a PB phase unit cell, according to PB theory, one generally designs a structure that achieves a phase difference of 180\({}^{\circ}\) under the radiation of two orthogonally linearly polarized waves. In 2013, Pfeiffer and Grbic proposed the Huygens' metasurface to realize perfect transmission using impedance matching theory [62]. The basic theory is as follows. Consider the total fields on both sides of a metasurface to be \(\overrightarrow{E}\) and \(\overrightarrow{H}\). An equivalent electric current \(J_{s}\) and an equivalent magnetic current \(M_{s}\) are then excited on the interface of the metasurface. The surface electric impedance \(Z_{e}\) and surface magnetic impedance \(Z_{m}\) are defined by \(\overrightarrow{E}=Z_{e}\overrightarrow{J_{s}}\) and \(\overrightarrow{H}=\frac{1}{Z_{m}}\overrightarrow{M_{s}}\). According to the boundary conditions, the reflection and transmission coefficients \(R\) and \(T\) can be characterized by \(Z_{e}\) and \(Z_{m}\): \(R=\frac{-Z_{0}}{2Z_{e}+Z_{0}}+\frac{Z_{m}}{Z_{m}+2Z_{0}}\) and \(T=\frac{2Z_{e}}{2Z_{e}+Z_{0}}-\frac{Z_{m}}{Z_{m}+2Z_{0}}\), where \(Z_{0}\) is the wave impedance in free space. By derivation, \(Z_{e}\) and \(Z_{m}\) can be characterized by \(R\) and \(T\): \(Z_{e}=\frac{Z_{0}}{2}\frac{1+R+T}{1-R-T}\) and \(Z_{m}=2Z_{0}\frac{1+R-T}{1-(R-T)}\). We assume the metasurface is lossless. If the transmission amplitude is 1, we derive \(R=0\) and \(T=e^{i\phi}\). The condition for high transmission of a Huygens' metasurface is then \(Z_{e}=\frac{Z_{0}^{2}}{Z_{m}}\). In other words, as long as one carefully designs \(Z_{e}\) and \(Z_{m}\) of the metasurface, full transmission can be realized [63]. Recently, an interesting metasurface exhibiting the Anomalous Generalized Brewster Effect (AGBE) was proposed by Fan et al. and Luo et al. [64], [65]. For the Brewster effect, when a TM-polarized wave (its magnetic field being perpendicular to the plane of incidence) is incident from vacuum onto a dielectric with permittivity \(\varepsilon_{d}\), the Brewster angle is \(\theta_{\text{B}}=\arctan\sqrt{\varepsilon_{d}}\), at which there is no reflection. However, if a dielectric layer is coated on a well-designed metasurface with an appropriate surface impedance \(R_{s}\), the reflection-less phenomenon can be realized at different angles, which is called the Generalized Brewster Effect (GBE). The relationship between \(R_{s}\) and the incident angle \(\theta_{i}\) for TM polarization is \(R_{s}=\frac{\sqrt{\varepsilon_{d}-\sin^{2}\theta_{i}}\,\cos\theta_{i}}{\sqrt{\varepsilon_{d}-\sin^{2}\theta_{i}}-\varepsilon_{d}\,\cos\theta_{i}}\,Z_{0}\), where \(Z_{0}\) is the wave impedance of free space. For a refracted wave, the refraction angle \(\theta_{t}\) is determined by \(\theta_{t}=\arcsin(\frac{\sin\theta_{i}}{\sqrt{\varepsilon_{d}}})\). Then, the previously isotropic dielectric is changed into an anisotropic dielectric with permittivity \(\varepsilon_{pe}=\varepsilon_{d}\) (perpendicular to the refraction direction) and \(\varepsilon_{pa}\) (parallel to the refraction direction). In this case, the refraction and reflection phenomena remain the same as before. However, if we switch the incident wave from the left side of the normal (\(\theta_{i}>0\)) to the right side of the normal (\(\theta_{i}<0\)), according to the reciprocity principle, the reflection is the same as before, being determined by \(R_{s}\), while the refraction angle is dominated not only by \(\varepsilon_{pe}\) but also by \(\varepsilon_{pa}\). This interesting phenomenon is called the Anomalous Generalized Brewster Effect (AGBE). By carefully designing \(R_{s}\) of a metasurface and the permittivity of the dielectric, we can control the Brewster and refraction angles. Another metasurface, proposed by Chu et al., is the random-flip metasurface [66]. Such a metasurface normally consists of two types of unit cells, which have the same shape and size but are arranged in space-inversion positions. The main mechanism behind this metasurface is the reciprocity principle and local space inversion. For demonstration, Chu et al. designed a metasurface that realizes diffusive reflection while keeping distortion-free transmission. The choice of materials also plays an important role in metasurface designs and relies on the specific application and the desired properties of a metasurface, such as operating wavelength, polarization sensitivity, and the level of loss. Generally speaking, metasurfaces are made of subwavelength metal or dielectric structures. For visible and near-infrared applications, dielectric materials such as silicon and titanium dioxide are often used because of their low loss at those wavelengths compared with metals. For mid- to far-infrared applications, gold and silver are usually used because of their strong resonant properties. In microwave applications, metals such as copper and high-refractive-index dielectric materials are normally used because the loss of metal is quite low in the microwave band. In other words, the choice of materials for a metasurface depends on the wavelength of the application and should balance material properties such as conductivity, permittivity, and loss.
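The Huygens impedance relations above are easy to verify numerically. The following sketch, assuming nothing beyond NumPy, computes \(Z_e\) and \(Z_m\) from \(R\) and \(T\) and checks that the lossless full-transmission case indeed satisfies \(Z_e=Z_0^2/Z_m\); it also evaluates the classical Brewster angle for a chosen \(\varepsilon_d\).

```python
import numpy as np

Z0 = 376.73  # free-space wave impedance (ohm)

def surface_impedances(R, T):
    """Z_e and Z_m of a metasurface from its reflection/transmission coefficients."""
    Ze = (Z0 / 2) * (1 + R + T) / (1 - R - T)
    Zm = 2 * Z0 * (1 + R - T) / (1 - (R - T))
    return Ze, Zm

# Huygens condition: lossless full transmission, R = 0 and T = exp(i*phi)
phi = 0.7
Ze, Zm = surface_impedances(0.0, np.exp(1j * phi))
print(np.isclose(Ze * Zm, Z0**2))  # True: Z_e = Z0^2 / Z_m

# Classical Brewster angle for a dielectric with permittivity eps_d
eps_d = 4.0
theta_B = np.degrees(np.arctan(np.sqrt(eps_d)))  # about 63.4 degrees
```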
In a metasurface intelligent design, a commonly used method is the genetic algorithm [51]. However, for multi-parameter problems, solving with the genetic algorithm requires complex computational processes and is time-consuming. In addition, as the number of parameters increases, the computational time grows exponentially. Alternatively, an intelligent metasurface design combining forward and reverse algorithms offers a solution to the above-mentioned problems encountered by traditional metasurfaces [67, 68]. Compared with traditional optimization algorithms, machine learning can predict unknown problems by learning complex relationships between model variables and optical properties from large known datasets. This strategy can significantly reduce the computational time of a metasurface design by providing a more comprehensive and systematic optimization of metasurface properties [69, 70]. Additionally, by using deep optics and photonics theory (e.g., Rigorous Coupled Wave Analysis, RCWA) with advanced optimization algorithms (e.g., automatic differentiation, AD), parameters are adjusted and re-evaluated to approach the final goal more efficiently [71, 72]. This method is called topology optimization. Below, we compare these methods in terms of physical accuracy, computational time, and degrees of freedom.
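As a toy illustration of the AD-based loop behind topology optimization, the sketch below optimizes two design parameters of a stand-in differentiable "solver" toward a target phase profile using PyTorch autograd; in a real design the toy function would be replaced by a differentiable RCWA or FDTD evaluation.

```python
import torch

def toy_solver(params, x):
    """Stand-in differentiable solver: design parameters -> phase profile.
    A real topology optimization would call a differentiable RCWA/FDTD here."""
    return torch.sin(params[0] * x + params[1])

x = torch.linspace(0.0, 3.14, 64)
target_phase = torch.sin(2.0 * x + 0.5)            # desired response
params = torch.tensor([1.0, 0.0], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = ((toy_solver(params, x) - target_phase) ** 2).mean()
    loss.backward()                                 # gradients via AD
    opt.step()
# params converges toward [2.0, 0.5], matching the target profile.
```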
In general, to improve the precision of network predictions, machine learning is often applied to simple and fixed metasurface structures, where certain structural parameters are treated as variables [69, 70, 73]-[78]. The precision of the results is directly dependent on the precision of the numerical algorithms used, such as the Finite Element Method (FEM) or Finite-Difference Time-Domain (FDTD), as well as the size of the dataset. Additionally, the inverse design of metasurface structures using machine learning methods may result in non-unique solutions, where the same input may produce different outputs during the training process. To address these limitations, physics-informed neural networks can be employed to accurately predict the EM response of metasurface structures by incorporating physical laws, such as Maxwell's equations and boundary conditions of EM fields, during training [67, 68, 79]-[83]. The network can find the optimal solution by learning the laws of physics. Furthermore, optimization techniques, such as dynamically adjusting the weights, can be used to improve the calculation precision. Topology optimization, the design of the size and shape of the structure within a given space, can also be employed to improve precision [71, 72, 84]-[89]. It considers physical laws, optimization goals, and constraints, such as fabrication precision, and finds a local optimal solution when combined with an optimization algorithm. This method has the highest precision when the physical model is accurate during optimization.
The computational time consumed by machine learning is primarily determined by the size of the data set, the complexity of the metasurface structure, and the number of optimization parameters [69], [70], [73]-[78]. Typically, the computational time can be reduced without compromising the precision of the results by reducing the size of the required training data set, simplifying the structure, and decreasing the number of optimized parameters. Consequently, the majority of the computational time is concentrated on collecting data sets and training networks. However, this approach may also result in a limited number of design parameters, leading to a reduction in the diversity of structure designs. Physics-informed neural networks, on the other hand, establish a framework based on mathematical and physical methods [67], [68], [79]-[83]. Compared to machine learning methods, they can use fewer data samples to train networks with better generalization capabilities and adjust more structure parameters. This approach not only reduces time loss but also increases the degree of freedom in structure design. The computational time consumed by the topology optimization method is primarily focused on the execution of Generalized Updating Procedures (GUP), which include optical theory and optimization algorithms, such as gradient descent algorithms [71], [72], [84]-[89]. Therefore, compared to the two previous methods, this one can approach the optimum more quickly. Additionally, this method allows for arbitrary arrangements in space, resulting in the highest degree of freedom in structure design. We mainly give a qualitative comparison here; detailed numerical comparisons among these methods remain to be discussed in future work.
The methods discussed above have significantly expanded the possibilities for metasurface designs and have led to notable improvements in performance. A flowchart of the metasurface design process, including principles, fabrication, experimental conditions, and applications, is illustrated in Figure 1. This paper focuses primarily on recent developments in intelligent metasurface design methods, including machine learning, physics-informed neural networks, and topology optimization. The advantages and limitations of these methods are analyzed and discussed in detail. We also present recent advances in metasurface-enabled quantum optics applications before the conclusion section. Thus, this review paper aims to provide a timely summary of recent developments in metasurface design and offers new perspectives for future metasurface designs and applications.
## 2 Results (recent advances in metasurface design and analysis)
In this section, we address various approaches and case studies pertaining to the optimization of techniques for designing metasurface-based optical devices. We analyze their underlying principles and methods, and classify them into three main categories, which will be discussed in the subsequent sections.
### 2.1 Machine learning for metasurface design
Figure 1: Overview of representative tasks for a metasurface, including (a) vortex generation, reuse with permission for ref. [120], Copyright © 2022 Song, N. T. et al. (b) light absorption, reuse with permission for ref. [121], © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement, (c) focusing lens, reuse with permission for ref. [122], Copyright © 2017, Wang, S. M. (d) light refraction, reuse with permission for ref. [123], Copyright © 2021, Ji, W. Y. et al. Laser & Photonics Reviews published by Wiley-VCH GmbH. The figure also includes the metasurface simulation principle, fabrication techniques, testing conditions, and typical metasurface applications: (e) imaging system, reuse with permission for ref. [7], Copyright © 2019 Fan, Z. B. (f) communication system, reuse with permission for ref. [124], Copyright © 2020 Zhao, H. T. (g) radar system, reuse with permission for ref. [125], Copyright © 2022 Wan, X. et al. Advanced Intelligent Systems published by Wiley-VCH GmbH. and (h) quantum optics, reuse with permission for ref. [9], Copyright © 2021, Springer Nature Limited.
#### Basic principle
The basic principle flowchart of machine learning approach for metasurface design is illustrated in Figure 2. The general design process is as follows: for a simple cylindrical structure, data sets of EM response can be obtained using forward solver algorithms with a variety of parameter combinations as an input. These data sets can then be used to train a deep neural network, which can calculate the EM response when provided with the input. This is referred to as a forward network. Through the same training process, an inverse network is also obtained. The inverse network differs in that the input is the desired response, and the output is the geometry parameters of the structure. The optimized solutions can also be evaluated using forward solver algorithms to determine if the response is acceptable [73]-[78].
Figure 2: The flowchart steps of the basic principle of machine learning for metasurface design: define the problem, collect data sets, pre-process data, train the model, validate the model, train the inverse network, input the desired response, iterate and optimize, and achieve the desired performance.
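A minimal sketch of the forward/inverse network pair described above, assuming PyTorch and toy dimensions (4 geometry parameters, 64 spectral samples), is given below; the tandem trick of training the inverse network through the frozen forward network is one common remedy for the non-uniqueness of inverse design, not necessarily the exact scheme used in the cited works.

```python
import torch
import torch.nn as nn

# Forward network: geometry parameters -> sampled EM spectrum.
forward_net = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64),          # 64 spectral sampling points
)

# Inverse network: desired spectrum -> geometry parameters.
inverse_net = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
)

def tandem_loss(spectrum):
    """Train the inverse net through the (pre-trained, frozen) forward net,
    so that non-unique geometries mapping to the same spectrum all score well."""
    pred_geometry = inverse_net(spectrum)
    return nn.functional.mse_loss(forward_net(pred_geometry), spectrum)
```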
#### Cases and approaches
In 2019, An et al. proposed a deep learning method for optimizing the optical response of metasurface structures. This method was found to be more accurate and efficient than conventional techniques [70]. Additionally, it represented the first successful application of machine learning to model 3D structures. However, it should be noted that this method is limited to simple, fixed structures, as demonstrated in Figure 3(a). For complex structures, an increase in the number of input size parameters results in a significant reduction of optimization speed and accuracy, limiting the degree of freedom. In addition, this work lacks experimental validation of the theoretical and simulation results, so further research is needed to establish its validity. In the same year, Zhang et al. proposed a binary coded metasurface structure optimized by machine learning [76], as depicted in Figure 3(b). This structure possesses a higher degree of freedom than the previous work [70], allowing for the use of a large dataset to train the network. Consequently, the network exhibits a high phase response accuracy and rapid optimization speed. The authors [76] also provided experimental validation of the feasibility of their method. However, this work is limited in its focus on coding optimization for unit cell structures and does not provide a design or strategy for array optimization. Furthermore, while the sample works in the microwave band, which allows for easy fabrication of binary coding structures, conversion to the optical band would be hindered by limitations in manufacturing accuracy, making experimental implementation of this method challenging. Later, in 2021, Jiang et al. demonstrated that deep neural networks can predict not only phase but also group delay for meta-atoms operating in the visible light band [69]. The deep neural network used in this work is illustrated in Figure 3(c). However, if there is a significant deviation in the desired phase spectrum or the spectrum is complex, there will be an inherent error between the network-trained spectrum and the test spectrum, as calculated by the forward solver. In 2022, Ma et al. proposed an innovative approach for designing a multi-functional metasurface by incorporating an optimization algorithm and machine learning in the near-infrared band [77]. The method proposed by Ma et al. allows a single metasurface to carry multiple functions, approaching the physical limitations. The maximum number of functions that a metasurface can perform is limited by several factors, including physical size, fabrication techniques, coupling between meta-atoms, and material loss. If the area of the metasurface is smaller, fewer resonators can be designed within one meta-atom period, which limits the number of functions. The fabrication techniques used to create the resonators also have an influence; for example, electron-beam lithography can define highly complex resonator shapes, ensuring high precision of the structure. The coupling between neighboring meta-atoms limits the number of functions because it causes interference between different resonators, which affects their responses. Metasurfaces are usually made of metal or dielectric materials with a high refractive index, which can cause large absorption or scattering loss; this factor also influences the functions. The design process is illustrated in Figure 3(d). The authors [77] also conducted experiments and demonstrated their strategy successfully. In the same year, Lin et al. also employed a combination of machine learning and an optimization algorithm to optimize all the meta-atoms in a metasurface array in the microwave band [78]. This approach was used to design a retroreflector, and a sample was fabricated and effectively demonstrated through an experiment.
Figure 3: Cases and approaches of machine learning for a metasurface design. (a) Illustration of a metasurface model in the solver. (b) Schematic diagram of the binary coded meta-atom and the comparison of phase between results predicted by deep learning and simulated results, reuse with permission for ref. [76], © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. (c) Schematic diagram of the framework for the deep neural network, reuse with permission for ref. [69], © 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. (d) Flowchart of optimization design of EM response by combining machine learning and an optimization algorithm, reuse with permission for ref. [77], © 2022 Wiley-VCH GmbH. (e) Flowchart of retroreflector 2-bit array structure design optimized by machine learning, reuse with permission for ref. [78], © 2022 Optical Society of America under the terms of the OSA Open Access Publishing Agreement.
#### Analysis and Conclusion
The strategy to apply machine learning for metasurface design involves the initial creation of a large "library" of potential metasurface designs, followed by the simulation of each design's response using a numerical solver. Subsequently, classical search methods are employed to traverse the library and identify the desired metasurface parameters. This traditional approach is inefficient as well as time-consuming. The machine learning techniques discussed in this section still necessitate the use of a numerical solver to generate training data, but this represents a one-time cost. Once trained, the networks can predict metasurface parameters without the need for further utilization of a numerical solver.
A prevalent issue in the application of general neural network frameworks to metasurface design is the lack of rigor in enforcing the correctness of physics solutions or the manufacturability of discovered parameters. As demonstrated in [69], these frameworks may yield negative parameters, which have no physical significance. This limitation can be addressed through the use of classical constrained optimization frameworks. It is important to eliminate the results that are technically infeasible through validation using numerical solvers. The following methods discussed in this review aim to address this issue.
### 2.2 Physics-informed neural networks for metasurface design
#### Basic principle
The basic principle of physics-informed neural networks is to add information about physical laws, such as Maxwell's equations or other partial differential equations (PDEs), into neural networks. This can be realized by incorporating the PDE governing the data set into the loss function of the framework; detailed information and theory can be found in the second paragraph of section 2 (Physics-informed neural networks) in [67]. We illustrate a flowchart of physics-informed neural networks for metasurface design in Figure 4. A multi-pillar meta-atom structure, which has a complex design and multiple parameters, provides a greater degree of freedom compared to previous methods. Data sets are obtained by simulating the structure with a forward solver, but because physical laws such as Maxwell's equations and EM boundary conditions are incorporated into the neural networks, the required data set size is reduced, leading to a significant reduction in computational time [79]-[83]. The remaining design process is consistent with the machine learning method.
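To illustrate how a PDE enters the loss function, here is a minimal sketch of a Helmholtz-equation residual term computed with PyTorch autograd; the network `model`, the collocation points, and the weighting factor are assumptions, and a full PINN would add boundary-condition and data-fit terms.

```python
import torch

def helmholtz_residual(model, xy, k):
    """PDE residual |laplacian(u) + k^2 u|^2 for a network u(x, y), via autograd."""
    xy = xy.requires_grad_(True)
    u = model(xy)                                      # shape (N, 1)
    grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    lap = 0.0
    for i in range(2):                                 # u_xx and u_yy
        lap = lap + torch.autograd.grad(grads[:, i].sum(), xy,
                                        create_graph=True)[0][:, i]
    return ((lap + (k ** 2) * u.squeeze()) ** 2).mean()

# total_loss = data_loss + lambda_pde * helmholtz_residual(model, collocation_pts, k)
```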
Figure 4: The flowchart steps of the basic principle of Physics-Informed Neural Networks for metasurface design: define the problem, create the training data, define the physics principle and loss function, define the PINN architecture, validate the network, train backpropagation networks, input the desired EM response, iterate the design process, and obtain the desired metasurface design.
#### Cases and approaches
In 2020, Chen et al. utilized a combination of neural networks and partial differential equations, specifically the Helmholtz equation, for metasurface design, a method referred to as Physics-Informed Neural Networks [67]. The aim of this work was to use a single cylinder to replace an array of small cylinders, as shown in the first column of Figure 5(a), with the goal of achieving the same electric field response from both objects when a plane wave is incident upon them. As shown in the second column of Figure 5(a), the electric field distribution of the small cylinder array is depicted for a wavelength of 2.1 \(\upmu\)m when the wave is incident from left to right. The authors then utilized Physics-Informed Neural Networks to optimize a single cylinder to replace the array. The third column of
Figure 5(a) shows the predicted electric field distribution of the single cylinder, with an error of 2.82% compared to the desired pattern in the second column. This method reduces computational time and simplifies the training process, making it useful in the design of invisibility cloaks.
In 2022, Tang et al. proposed an approach for incorporating physical guidance and explanation within recurrent neural networks for the time response of optical resonances [79]. A recurrent neural network (RNN) is a kind of artificial neural network designed to process sequential data, such as time series; it processes sequences of inputs and uses information from previous inputs to predict the output. Optical resonance is a phenomenon that occurs when light interacts with a structure that is resonant at a certain frequency. Physics-guided and physics-explainable neural networks are a type of neural network framework designed to include physical principles or knowledge in the architecture and training process. This method can improve the output accuracy and interpretability because it is consistent with known physical laws or principles. In the first column of Figure 5(b), periodic monolayer graphene stripe structures are expected to produce resonance when a THz wave is incident on them, and one can then observe the time-domain signals. Here, physics-guided and physics-explainable neural networks are adopted to predict the full-length time-domain response, as shown in the second column of Figure 5(b), using an input sequence that is only 7% of the full sequence. With this prediction, the authors were able to derive the resonant frequency of the resonant structures. Additionally, they obtained more information in the frequency domain by applying the Fourier transform, as depicted in the third column of Figure 5(b). This approach significantly reduces the time required for data collection.
Similarly, Khatib et al. proposed a framework called Deep Lorentz Neural Networks (DLNN) for metamaterial design [68], which is specifically used to model the behavior of EM waves in all-dielectric metamaterials. The DLNN is based on the Lorentz model, which describes how the electric and magnetic properties of a material change in response to an EM field. A schematic of the working principle of the framework is shown in Figure 5(c). The DLNN is trained using a dataset of simulated EM wave propagation in all-dielectric metamaterials; it is then able to learn the physical properties of these materials and predict their behavior for different inputs. The authors [68] showed that the DLNN can accurately predict the behavior of all-dielectric metamaterials under different conditions, such as changes in frequency, polarization, and angle of incidence. By comparing conventional Deep Neural Networks and Lorentz Neural Networks, the authors demonstrated that the latter requires less training data to achieve the same target with fewer errors. Furthermore, physics-informed neural networks have great universality and generalizability for solving a wide range of optical problems and creating physical models.
Later, Chen et al. updated their physics-informed neural networks with Maxwell's equations [83]. It is important to note that the proposed method can predict the 3D distribution of permittivity in an unknown target using near-field data sets. As an example, Figure 5(d) illustrates the cross-sectional plane of the electric field distributions \(E_{x}\), \(E_{y}\), and \(E_{z}\) (real part) as determined by the finite element simulation method. This information is utilized to train neural networks, which are subsequently combined with the wave equations to enable the retrieval of 3D permittivity information from electric field information. This achievement is significant as it allows for the extraction of information from 3D objects, which are more commonly encountered in realistic scenarios. As such, this method has potential applications in fields such as near-field microscopy and medical imaging.
Figure 5: Cases and approaches of Physics-informed neural networks for metasurface design. (a) First column: Schematic diagram of original cylinder array. Second column: Electric field distribution of cylinder array when the plane wave is incident from left to right for the wavelength of 2.1 \(\upmu\)m. Third column: Electric distribution of single cylinder predicted by networks, reuse with permission for ref. [67], © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. (b) First column: Graphene resonant structure. Second column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. 
Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. 
Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. 
Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. 
Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field prediction and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. 
Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain. Third column: comparison of electric field and target signal in time-domain. Third column: The comparison of electric field and target signal in time-domain.
transmission amplitude prediction and target signal in frequency-domain, reuse with permission for ref. [79], Copyright © 2022, Yingheng Tang et al., under exclusive licence to Springer Nature America, Inc. (c) Schematic diagram of Lorentz Neural Networks, reuse with permission for ref. [68], © 2022 Wiley-VCH GmbH. (d) Top left, top right and bottom left: the plane cross sections of electric field distribution of \(E_{x}\), \(E_{y}\), and \(E_{z}\) (real part). Bottom right: 3D permittivity information extracted by physics-informed neural networks, reuse with permission for ref. [83], Copyright © 2022, Chen, Y. Y. This article is distributed under a Creative Commons Attribution (CC BY) license.
#### Analysis and Conclusion
The preceding discussion demonstrates that physics-informed neural networks offer high speed, high accuracy, and a large number of degrees of freedom, and overcome the limitations imposed by training data. Additionally, this framework is readily applicable to the problem of recovering the geometrical parameters of metasurfaces for a desired EM response. However, it is important to note a disadvantage of this framework: a single trained physics-informed neural network cannot be applied to multiple similar inverse problems, and retraining is required for each individual case. Nevertheless, alternative methods that may be equally effective and more efficient without the use of machine learning techniques will be discussed in the following section.
### 2.3 Topology optimization for metasurface design
#### Basic principle
A flowchart of the topology optimization process for metasurface design is illustrated in Figure 6. The process commences with an initial structure and associated parameters, denoted as \(x_{i}\). The EM response of the structure is then computed using advanced optical theories such as the rigorous coupled-wave analysis (RCWA) method [71], [72]. A loss function, \(L\), is subsequently determined by comparing the current EM response to the desired EM response. The gradient of the loss function with respect to the parameters \(x_{i}\) is then determined using gradient algorithms such as automatic differentiation [71]. This gradient information is utilized to update the structure's parameters \(x_{i}\) in the direction that minimizes the loss function. The process is repeated until the loss function reaches its minimum value. The final output is the optimized set of parameters \(x_{i}\) for the desired structure.
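To make the loop concrete, a minimal sketch is shown below, assuming a toy differentiable forward model in place of a real RCWA or FDTD solver; the sinusoidal resonance basis, target response, and learning rate are illustrative choices, not part of any published design.

```python
import torch

# Toy differentiable "forward solver": a stand-in for an RCWA/FDTD
# evaluation of the metasurface. The sinusoidal resonance basis below is
# purely illustrative, not a physical model.
freqs = torch.linspace(0.0, 1.0, 50)

def forward_solver(x):
    return sum(xi * torch.sin((i + 1) * torch.pi * freqs)
               for i, xi in enumerate(x))

target = torch.sin(2.0 * torch.pi * freqs)   # desired EM response (toy target)
x = torch.rand(8, requires_grad=True)        # initial design parameters x_i
opt = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    opt.zero_grad()
    loss = torch.mean((forward_solver(x) - target) ** 2)  # loss L vs. target
    loss.backward()  # dL/dx_i via automatic differentiation
    opt.step()       # update x_i in the direction that minimizes L
```

In this sketch the entire loop is differentiable end to end; in the works discussed below, the forward solver is an RCWA or Fourier modal computation, and the gradient is obtained with the adjoint method or automatic differentiation.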
#### Cases and approaches
In 2019, Lin et al. proposed the use of the RCWA method for topology optimization of multi-layer metasurface structures [84]. RCWA is a mathematical approach used to analyze and calculate the EM response of multi-layer, periodic structures, making it well suited for metasurface analysis; it also offers highly efficient computation. After obtaining the EM response of the structure, the authors applied the adjoint method [88] to perform the optimization. With this purely physics-based method, they obtained thousands of degrees of freedom for the meta-atoms, far more than what was achieved with previous methods. They successfully designed a metalens, as shown in Figure 7(a), but due to the complex structure and high degree of freedom, such a lens is difficult to fabricate. While the concept is promising, it is currently infeasible to experimentally implement multi-layer, nano-scale structures. In the same year, Phan et al. designed large-area lenses with the Aperiodic Fourier Modal Method (AFMM) for finite-sized, isolated devices [85]. The authors first divided a lens into several small sections for ease of calculation. The curved phase profile is then linearized within each section, as illustrated in Figure 7(b), and each section is composed of aperiodic pillar structures. Importantly, the authors proposed the AFMM, which combines a solver for periodic systems with perfectly matched layers. A metasurface with pillar structures can be expressed as a distribution of the permittivity and permeability on the surface plane. Then, the Stratton-Chu integral equation is used to
Figure 6: Flowchart of the basic principle of topology optimization for metasurface design: define the problem, input the parameters, specify the desired EM response, calculate the loss function, calculate the gradient of the loss function, update and output the parameters, and iterate the design process until the desired design is obtained.
compute the radiation field in all of space; after optimization, the desired field distribution is obtained. To account for the coupling between the edges of neighboring sections, the authors added a gap of at least 0.2\(\lambda\) between neighboring sections, thereby reducing inter-section coupling. Together, these methods lead to a highly efficient lens design with a high NA. In 2021, Colburn et al. employed the RCWA method in conjunction with automatic differentiation (AD) to optimize metasurface parameters [71]. The overall process is similar to the one depicted in Figure 6. AD can be applied to any sequential calculation procedure, however complex, by decomposing it into a series of basic operations. By utilizing the chain rule, the differential coefficient of the complex procedure can be calculated once the differential coefficient of each basic operation is determined. Additionally, the use of parallel calculation on GPUs further increases the speed compared to the adjoint method. An example of designing a lens with meta-atoms composed of several elliptical resonators is shown in the first column of Figure 7(c). As the iteration number increases, the learning curve of the focus efficiency improves, as depicted in the second column of Figure 7(c); the normalized electric field intensity in the focal plane is presented in the third column. In the same year, Xu et al. employed RCWA with a multi-objective adjoint-based approach to optimize a discrete geometric-phase metasurface [89]. After optimization, the discrete structure becomes continuous and the efficiency is significantly improved. The final optimized refractor array and the intensity distribution of the electric field are illustrated in Figure 7(d).
Figure 7: Cases and approaches of topology optimization for metasurface design. (a) Metalens designed by the topology optimization method and the focusing effect of the simulation results for the metalens, reuse with permission for ref. [84], © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. (b) First column: Conventional lens with a curved phase profile. Second column: The topology-optimized lens with separated sections and a linearized phase profile. Third column: Comparison of optimization time between the full device and linear sections varying against device size, reuse with permission for ref. [85], Copyright © 2019 Phan, T. _et al_. (c) First column: Meta-atom with several elliptical resonators in designing a lens. Second column: The learning curve of focus efficiency varying against iteration. Third column: The normalized electric field intensity in the focal plane, reuse with permission for ref. [71], Copyright © 2021 Colburn, S. _et al_. (d) Left: Topology-optimized structure of a metasurface array. Right: Real part of the electric field distribution of the refractor, reuse with permission for ref. [89], © 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement.
#### Analysis and Conclusion
Based on the previously discussed works, it is evident that in recent years the most effective strategy for metasurface optimization has been the use of topology optimization methods that leverage advanced AD formulations or apply high-performance computing techniques. These methods are highly flexible, performant, accurate, and fast, and they offer a large degree of freedom. The choice of strategy is largely dependent on the specific application. In conclusion, to effectively handle highly complex metasurfaces, a physics-informed neural network can be employed by incorporating elements from these topology optimization approaches, such as the efficient RCWA method or high-performance parallel architectures for training.
### 2.4 Metasurfaces for quantum optics applications
Since the introduction of metasurface concepts, various devices and applications have emerged rapidly, including metasurface antennas [30], [56], radar cross-section modification [29], specialized beam generation [40], [41], active metasurfaces [90], and metasurfaces for quantum optics. In recent years, quantum optics research, as an emerging field, has experienced significant expansion and holds important prospects for applications in areas such as quantum computation, communication, storage, sensing, and fundamental quantum physics research [91]-[95]. Metasurfaces are predicted to play a crucial role in future quantum photonics technology. In this section, we highlight several recent advances in the applications of metasurfaces for quantum optics.
In 2018, Wang et al. proposed the use of flat metasurfaces as a replacement for conventional bulk components to achieve non-classical multiphoton interference [91]. The authors reconstructed one-photon and two-photon states using a polarization-insensitive detector, as shown in Figure 8(a). The experiment demonstrated the feasibility of controlling multi-photon quantum states using a metasurface. Subsequently, Georgi et al. presented a quantum system that could entangle and disentangle two-photon spin states using a metasurface, as shown in Figure 8(b) [92]. The performance of this system was found to be superior to that of conventional optical elements. In 2020, Zhou et al. proposed a
polarization-entangled photon source that can function as an optical switch for an edge-detection imaging mode, as shown in Figure 8(c) [93]. When the photon is in the "switch OFF" or "switch ON" state, the imaging shows a solid or an outlined cat, respectively. In 2022, Gao et al. demonstrated a multi-channel metasurface capable of transforming polarization-entangled photon pairs [94]. Additionally, it was shown that using two metasurfaces could enable even more channels for entangled photon pair distribution, which holds immense potential for applications in quantum information processing. The intelligent methods discussed in the previous sections can improve the performance, accuracy, and speed of metasurface design, and could become the next-generation design strategy for quantum optics devices.
Figure 8: Cases of a metasurface for quantum optics applications. (a) Schematic of quantum state reconstruction with a nanostructured metasurface, reuse with permission for ref. [91], Copyright © 2018 Wang, K. et al., some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. (b) Schematic of entanglement and disentanglement of two-photon states with a metasurface, reuse with permission for ref. [92], Copyright © 2019 Georgi, P. (c) Schematic of a metasurface functioning as an optical switch for the optical edge detection mode. When the photon indicates the switch OFF state, the imaging is a solid cat. When the photon indicates the switch ON state, the imaging is an outlined cat, reuse with permission for ref. [93], Copyright © 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution License 4.0 (CC BY). (d) Schematic of two metasurfaces for multi-channel quantum entanglement distribution and transformation, reuse with permission for ref. [94], Copyright © 2022 Gao, Y. _et al._, PRL 129, 023601 (2022). American Physical Society, [https://doi.org/10.1103/PhysRevLett.129.023601](https://doi.org/10.1103/PhysRevLett.129.023601).
## 3 Discussion
In this review, multiple important methods for the intelligent design of metasurfaces have been elaborated. These methods are poised to become effective tools for metasurface and metamaterial design and offer clear advantages in physical accuracy and computational time. Table 1 gives a detailed comparison between forward and inverse design of a metasurface. For more examples of using machine learning, physics-informed neural networks, and topology optimization methods, the readers are referred to [96]-[100]. Furthermore, the methods above can be extended to the design of other optical devices [101], such as photonic crystals [102]-[106], optical cavities [107]-[110], and integrated photonic circuits [111]-[115]. Intelligent metasurfaces are a rapidly developing direction and have important application prospects in several new revolutionary fields, especially in the field of quantum optics [9]-[11].
| Design Method | Algorithm | Physical Accuracy | Computational Cost |
| --- | --- | --- | --- |
| Forward design | Finite element method, … | A forward solver alone makes it difficult to achieve results consistent with ideal optical expectations. | Artificial design and optimization are time-consuming. |
| Machine learning [69], [70], [73]-[78] | Machine learning combined with a numerical solver | The pre-existing numerical forward solver produces physically accurate solutions. | The cost derives from training and depends on the training data size; it is a one-time cost. |
| Physics-informed neural networks [67], [68], [79]-[83] | Neural network combined with the physical laws behind the physical process | The existence of a solution cannot be presupposed and depends on the physical problem. | Lower cost, since physical laws restrict the space of admissible solutions to a manageable size. |

Table 1: The comparison between forward and inverse design of metasurfaces.
Besides academic interest, metasurfaces have emerged as a promising technology with diverse potential applications in industry. Metamaterials have shown remarkable properties such as anomalous reflection, wavefront manipulation, and polarization control. These unique features enable metasurfaces to revolutionize conventional optics and provide innovative solutions for a wide range of industrial applications [116]. In the near future, metasurfaces can be used to enhance the performance of sensors [117], antennas, large sensor arrays, and solar cells, leading to higher optical efficiency and sensitivity. They can also be used in imaging and holography, providing high-resolution imaging and 3D display capabilities. Furthermore, metasurfaces can be integrated into various devices and systems to improve their functionality. The potential for translating metasurfaces into industrial applications is therefore enormous, and further research in this field is expected to open new possibilities for their practical implementation [118], [119].
## Conflict of interests
All authors declare no conflicts of interest.
## Contributions
W. Ji and J. Chang wrote the manuscript (equal contribution) with the help of H. Xu, J. Gao, S. Groblacher, P. Urbach and A. Adam.
|
2304.06745 | End-to-end codesign of Hessian-aware quantized neural networks for FPGAs
and ASICs | We develop an end-to-end workflow for the training and implementation of
co-designed neural networks (NNs) for efficient field-programmable gate array
(FPGA) and application-specific integrated circuit (ASIC) hardware. Our
approach leverages Hessian-aware quantization (HAWQ) of NNs, the Quantized Open
Neural Network Exchange (QONNX) intermediate representation, and the hls4ml
tool flow for transpiling NNs into FPGA and ASIC firmware. This makes efficient
NN implementations in hardware accessible to nonexperts, in a single
open-sourced workflow that can be deployed for real-time machine learning
applications in a wide range of scientific and industrial settings. We
demonstrate the workflow in a particle physics application involving trigger
decisions that must operate at the 40 MHz collision rate of the CERN Large
Hadron Collider (LHC). Given the high collision rate, all data processing must
be implemented on custom ASIC and FPGA hardware within a strict area and
latency. Based on these constraints, we implement an optimized mixed-precision
NN classifier for high-momentum particle jets in simulated LHC proton-proton
collisions. | Javier Campos, Zhen Dong, Javier Duarte, Amir Gholami, Michael W. Mahoney, Jovan Mitrevski, Nhan Tran | 2023-04-13T18:00:01Z | http://arxiv.org/abs/2304.06745v1 | # End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs
###### Abstract
We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs) for efficient field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware. Our approach leverages Hessian-aware quantization (HAWQ) of NNs, the Quantized Open Neural Network Exchange (QONNX) intermediate representation, and the hls4ml tool flow for transpiling NNs into FPGA and ASIC firmware. This makes efficient NN implementations in hardware accessible to nonexperts, in a single open-sourced workflow that can be deployed for real-time machine-learning applications in a wide range of scientific and industrial settings. We demonstrate the workflow in a particle physics application involving trigger decisions that must operate at the 40 MHz collision rate of the CERN Large Hadron Collider (LHC). Given the high collision rate, all data processing must be implemented on custom ASIC and FPGA hardware within strict area and latency requirements. Based on these constraints, we implement an optimized mixed-precision NN classifier for high-momentum particle jets in simulated LHC proton-proton collisions.
Keywords: neural networks, field-programmable gate arrays, firmware, high-level synthesis
Footnote †: Also with International Computer Science Institute.

Footnote †: Also with International Computer Science Institute and Lawrence Berkeley National Laboratory.
###### Contents
* 1 Introduction
* 2 Background and Related Work
* 2.1 Quantization
* 2.2 Automatic Bit Width Selection
* 2.3 Firmware Generation Tools
* 3 Experimental Setup
* 3.1 Dataset
* 3.2 Model & Loss Definition
* 3.3 Metrics: Bit Operations & Sparsity
* 4 Quantization-Aware Training
* 4.1 Homogeneous Quantization
* 4.2 Mixed-Precision Quantization
* 5 Conversion into QONNX
* 5.1 Model Translation
* 5.2 Post-Export
* 6 Hardware Generation
* 6.1 hls4ml Ingestion
* 6.2 Synthesis Results
* 7 Summary
## 1. Introduction
Machine learning (ML) is pervasive in big data processing, and it is becoming increasingly important as data rates continue to rise. In particular, ML taking place as close to the data source as possible, or _edge ML_, is increasingly important for both scientific and industrial applications, including applications such as data compression, data volume reduction, and feature extraction for real-time decision-making (Bauer et al., 2017). Integrating ML at the edge, however, is challenging because of area, power, and latency constraints. This is especially the case for deep learning (DL) and neural network (NN) models. Deployment of NNs for edge applications requires carefully optimized protocols for training as well as finely tuned implementations for inference. This typically requires efficient computational platforms such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). Developing a NN algorithm and implementing it in hardware within system and task constraints is a multistep _codesign_ process with a large decision space. Among other things, this space includes options related to _quantization_, or using reduced precision operations. In this paper, we present a completely open-source, end-to-end workflow accessible to nonexperts for NN quantization and deployment in FPGAs and ASICs.
Quantization-aware training (QAT) has been shown to be very successful in scaling down model sizes for FPGAs (Han et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). With QAT, large NNs can be quantized to 8 bits and below, with comparable accuracy to the baseline. Quantized NNs (QNNs) generally have considerably reduced model sizes and latencies. Hessian-aware quantization (HAWQ) (Sutton et al., 2017) is a mixed-precision integer-only quantization framework for PyTorch (Paszke et al., 2017) with promising applications. HAWQ is able to quantize the model to very small bit widths by using mixed-precision guided by second-order (Hessian) information. In this approach, sensitive layers (determined by Hessian information) are kept at higher precision and insensitive layers are kept at lower precision. FPGAs are a natural use case for this: they can benefit from this approach since mixed-precision computations are much better supported by FPGAs than other hardware such as GPUs.
While these features make HAWQ an interesting choice for QAT with FPGAs, there does not currently exist a streamlined process to deploy it onto FPGAs directly. To address this, we introduce additional functionality to HAWQ in order to export QNNs as Quantized Open Neural Network Exchange (QONNX) (Paszke et al., 2017) intermediate representations. Then the QONNX representation can be ingested by hls4ml (Goyal et al., 2017), an open-source Python library for NN translation and deployment in FPGA and ASIC hardware. The hls4ml package is designed to be accessible for both hardware experts and nonexperts, and it is flexible enough to deploy QNNs with a broad range of quantization bit widths on different FPGA and ASIC platforms. It is a popular tool for both scientific and industry edge ML applications (Bauer et al., 2017; Goyal et al., 2017; Goyal et al., 2017).
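As an illustration, the intended deployment path can be sketched in a few lines. This is a minimal sketch only: the file name is a placeholder, and the converter entry points (`config_from_onnx_model`, `convert_from_onnx_model`) follow hls4ml's ONNX interface but should be treated as assumptions, since names and options can vary between hls4ml versions.

```python
import onnx
import hls4ml

# Assumption: the HAWQ-trained model was already exported to a QONNX file;
# the file name below is a placeholder.
qonnx_model = onnx.load("hawq_jet_tagger.onnx")

# Assumption: these hls4ml entry points and their signatures may differ
# between versions; consult the hls4ml documentation for the exact API.
config = hls4ml.utils.config_from_onnx_model(qonnx_model)
hls_model = hls4ml.converters.convert_from_onnx_model(
    qonnx_model, hls_config=config, output_dir="jet_tagger_prj"
)

hls_model.compile()  # bit-accurate C simulation for validating outputs
hls_model.build()    # HLS synthesis: produces latency and resource estimates
```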
To demonstrate the performance of our end-to-end workflow, we develop a NN for real-time decision-making in particle physics. The CERN Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. Particles collide in detectors every 25 ns, producing tens of terabytes of data. Because of storage capacity and processing limitations, not every collision event can be recorded. In these experiments, the online trigger system filters data and stores only the most "interesting" events for offline analysis. Typically, the trigger system uses simple signatures of interesting physics, e.g., events with large amounts of deposited energy or unusual combinations of particles, to decide which events in a detector to keep. There are multiple stages of the trigger system, and the first stage, referred to as the level-1 trigger (L1T) (Bauer et al., 2017; Goyal et al., 2017), processes data at 40 MHz with custom ASICs or FPGAs. Over the past years, the LHC has increased its center of mass collision energy and instantaneous luminosity to allow experiments to hunt for increasingly rare signals. With the extreme uptake of accumulated data, ML methods are being explored for various tasks at the L1T (Goyal et al., 2017; Goyal et al., 2017). One such task is _jet tagging_: identifying and classifying collimated showers of particles from the decay and hadronization of quarks and gluons using _jet substructure_ information (Paszke et al., 2017; Goyal et al., 2017). ML methods show great promise over
traditional algorithms in increasing our capability to identify the origins of different jets and discover new physical interactions (Kang et al., 2018; Zhang et al., 2019).
Within the context of developing a NN for real-time decision making for particle physics applications, the original contributions of this paper are the following:
* We take advantage of the QONNX format to represent QNNs with arbitrary precision and mixed-precision quantization in order to extend HAWQ for QONNX intermediate representation support.
* We perform Hessian-aware quantization on a multilayer perceptron (MLP) model used in jet tagging benchmarks, and we study in detail the effects of quantization on each layer for model performance and efficiency.
* We use hls4ml to present optimized resources and latency for FPGA hardware implementations of NNs trained in HAWQ.
The rest of this paper is structured as follows. In Section 2, we introduce the key steps that comprise the end-to-end codesign workflow for QNNs to be deployed on FPGAs and ASICs, including an overview of quantization and HAWQ. We present the task and discuss how NNs are evaluated and trained in Section 3. Preliminary QAT results with homogeneous quantization and Hessian-based quantization are presented in Section 4, and our extension to HAWQ is presented in Section 5. We then cover the firmware implementation of NNs, specifically the resource usage and estimated latency, in Section 6. Finally, a summary is presented in Section 7.
## 2. Background and Related Work
In this section, we provide an overview of quantization and HAWQ (in Section 2.1); and then we cover automatic bit width selection (in Section 2.2) and firmware generation tools (in Section 2.3).
### Quantization
Quantization in NNs refers to reducing the numerical precision used for inputs, weights, and activations. In _uniform affine quantization_, values are quantized to lower precision integers using a mapping function defined as
\[q=\text{quantize}(r)=\text{Clip}(\text{Round}((r/S)-Z),\alpha,\beta), \tag{1}\]
where \(r\) is the floating-point input, \(S\) is the _scale factor_, and \(Z\) is the _zero point_(Kang et al., 2018). Round denotes the round-to-nearest operation, and Clip clamps the result to the range \([\alpha,\beta]\). Because all quantization bins are uniformly spaced, the mapping function in Eqn. 1 is referred to as uniform quantization. Nonuniform quantization methods, whose bin sizes are variable, are more difficult to implement in hardware (Shi et al., 2019). Real values can be recovered from the quantized values through _dequantization_:
\[\tilde{r}=\text{dequantize}(q)=S(q+Z), \tag{2}\]
where \(\tilde{r}-r\) is known as the quantization error. The scale factor divides a given range of real values into \(2^{b}\) bins, with
\[S=\frac{\beta-\alpha}{2^{b}-1}, \tag{3}\]
where \([\alpha,\beta]\) is the clipping range and \(b\) is the bit width. Choosing the clipping range is referred to as _calibration_. A simple approach is to use the minimum and maximum of the values, i.e., \(\alpha=r_{\text{min}}\), and \(\beta=r_{\text{max}}\). This is an asymmetric quantization scheme because the clipping range is not necessarily symmetric with respect to the input, i.e., it could be that \(-\alpha\neq\beta\). A symmetric quantization approach uses a symmetric clipping range of \(-\alpha=\beta\), such as \(-\alpha=\beta=\max(|r_{\text{max}}|,|r_{\text{min}}|)\), and replaces the _zero point_ with \(Z=0\).
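A minimal NumPy sketch of Eqns. 1-3 with asymmetric calibration is shown below; the integer bounds are written explicitly (here for unsigned 8-bit codes) to distinguish them from the real-valued clipping range, and the zero-point convention is chosen to match Eqn. 2.

```python
import numpy as np

def quantize(r, S, Z, q_min, q_max):
    # Eqn. 1: uniform affine quantization; q_min/q_max are the integer bounds.
    return np.clip(np.round(r / S) - Z, q_min, q_max).astype(np.int32)

def dequantize(q, S, Z):
    # Eqn. 2: recover approximate reals; r_tilde - r is the quantization error.
    return S * (q + Z)

b = 8
r = np.random.randn(1000).astype(np.float32)
alpha, beta = r.min(), r.max()        # asymmetric calibration range
S = (beta - alpha) / (2 ** b - 1)     # Eqn. 3: scale factor, 2^b uniform bins
Z = np.round(alpha / S)               # zero point, matching the q + Z convention
q = quantize(r, S, Z, 0, 2 ** b - 1)  # unsigned 8-bit codes
r_tilde = dequantize(q, S, Z)
print(np.abs(r - r_tilde).max())      # worst-case error, roughly S / 2
```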
The latest version of Hessian-aware quantization, HAWQv3 (Haug et al., 2017), introduces a completely new computational graph with an automatic bit width selection policy, building on its previous works (Haug et al., 2017; Wang et al., 2018). In HAWQv3, which for simplicity we refer to here simply as HAWQ, quantization follows Eqn. 1 with additional hardware-inspired restrictions. HAWQ executes its entire computational graph using only integer multiplication, addition, and bit shifting, without any floating-point or integer division operations. The clipping range is symmetric for weights, \(\beta=2^{b-1}-1=-\alpha\), while activations can be either symmetric or asymmetric. The real-valued scale factors are precalculated by analyzing the range of outputs for different batches and fixed at inference time, a process called _static quantization_. HAWQ avoids floating-point operations and integer divisions by restricting all scale factors to be dyadic numbers (rational numbers of the form \(b/2^{c}\), where \(b\) and \(c\) are integers). To illustrate a typical computation, consider a layer with input \(h\) and weight tensor \(W\). In HAWQ, \(h\) and \(W\) are quantized to \(S_{h}q_{h}\) and \(S_{W}q_{W}\), respectively, where \(S_{h}\) and \(S_{W}\) are the real-valued scale factors, and \(q_{h}\) and \(q_{W}\) are the corresponding quantized integer values. The output result, denoted by \(a\), can be computed as
\[a=(S_{W}S_{h})(q_{W}*q_{h}), \tag{4}\]
where \(*\) denotes a low-precision integer matrix multiplication (or convolution). The result is then quantized to \(S_{a}q_{a}\) for the following layer as
\[q_{a}=\text{Int}\left(\frac{a}{S_{a}}\right)=\text{Int}\left(\frac{S_{W}S_{h} }{S_{a}}\left(q_{W}*q_{h}\right)\right), \tag{5}\]
where \(S_{a}\) is a precalculated scale factor for the output activation. This avoids floating point operations and integer divisions by implementing Eqn. 5 with integer multiplication and bit shifting.
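A small sketch of this integer-only rescaling is given below: the real ratio \(S_{W}S_{h}/S_{a}\) is approximated by a dyadic number \(m/2^{c}\), so Eqn. 5 reduces to an integer multiply followed by a bit shift. The scale-factor values are illustrative only, and the floor behavior of the shift can differ from Int() by one code.

```python
import numpy as np

def dyadic_approx(s, c=16):
    # Approximate the real rescaling factor s by a dyadic number m / 2^c.
    return int(round(s * (1 << c))), c

# Illustrative scale factors (fixed ahead of time under static quantization).
S_W, S_h, S_a = 0.0123, 0.0517, 0.0871
m, c = dyadic_approx(S_W * S_h / S_a)

rng = np.random.default_rng(0)
q_W = rng.integers(-128, 128, size=(4, 8))  # INT8 weight codes
q_h = rng.integers(-128, 128, size=8)       # INT8 activation codes
acc = q_W @ q_h                             # Eqn. 4: integer-only matmul
q_a = (acc * m) >> c                        # Eqn. 5 via multiply + bit shift
print(q_a)
print(np.round(acc * (S_W * S_h / S_a)).astype(int))  # reference rescaling
```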
### Automatic Bit Width Selection
Many methods have been proposed to measure the sensitivity of a network to quantization or to select bit widths automatically. For example, HAQ (Haug et al., 2017) proposed a reinforcement learning (RL) method to determine the quantization policy automatically. The method involves an RL agent receiving direct latency and energy feedback from hardware simulators. Ref. (Haug et al., 2018) formulated a neural architecture search (NAS) problem with a differentiable NAS (DNAS) to explore the search space efficiently. Ref. (Haug et al., 2019) proposed periodic functions as regularizers, where regularization pushes the weights into discrete points that can be encoded as integers. One disadvantage of these exploration-based methods is that they are often sensitive to hyperparameters or initialization. More recently, AutoQKeras (Haug et al., 2019) was proposed as a method to optimize both model area (measured by the number of logical elements in the FPGA design) and accuracy, given a set of resource constraints and accuracy metrics, e.g., energy consumption or bit size. Different from these previous methods, HAWQ (Wang et al., 2018) introduced an automatic way to find the mixed-precision settings based on a second-order sensitivity metric. In particular, the Hessian (specifically the top Hessian eigenvalue) can be used to measure the sensitivity. This approach was extended in Ref. (Haug et al., 2019), where the sensitivity metric is computed using the average of all the Hessian eigenvalues.
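For concreteness, the top-eigenvalue sensitivity metric can be estimated without forming the Hessian explicitly, using power iteration on Hessian-vector products. The sketch below makes simplifying assumptions (a single loss evaluation, no convergence check) and omits the eigenvalue averaging used in the extended metric:

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    """Estimate the top Hessian eigenvalue of `loss` w.r.t. `params` by
    power iteration on Hessian-vector products. Passing a single layer's
    weight tensor as `params` yields that layer's sensitivity score."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v)) + 1e-12
        v = [vi / norm for vi in v]
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))      # g . v
        Hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        eig = sum((h * vi).sum() for h, vi in zip(Hv, v))        # Rayleigh quotient
        v = [h.detach() for h in Hv]
    return eig.item()
```

Ranking layers by this score, higher-eigenvalue (more sensitive) layers would be assigned more bits, and flatter layers fewer.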
### Firmware Generation Tools
Although ML methods have shown promising results on edge devices, fitting these algorithms onto FPGAs is challenging, often very time-consuming, and requires the expertise of domain experts and engineers. Several directions aim to solve this issue. One direction, field-programmable DNN (FP-DNN) (Haug et al., 2019), is a framework that takes TensorFlow-described deep neural networks (DNNs) as input and automatically generates hardware implementations with register transfer level (RTL) and high-level synthesis (HLS) hybrid templates. Another direction, fpgaConvNet (Haug et al., 2019), specifically targets convolutional NNs (CNNs) and is an end-to-end framework for the optimized mapping of CNNs onto FPGAs. Interestingly,
fpgaConvNet proposes a multi-objective optimization problem to account for the CNN workload, target device, and metrics of interest.
These and other tools indicate a growing desire to deploy more efficient and larger ML models on edge devices in a faster and more streamlined process. This desire arises in many scientific and industrial use cases (Beng et al., 2015; Chen et al., 2017). Particle physics applications are a particularly strong stress test of such tools. This is due to the extreme requirements in computational latency and data bandwidth, as well as environmental constraints such as low-power, high-radiation, and cryogenic environments. Furthermore, particle physics practitioners are not necessarily ML experts or hardware experts, and their applications and systems require open-source tools (to the extent possible) and flexible deployment across different FPGA and ASIC platforms. The hls4ml tool originated from such use cases, and it supports multiple architectures and frameworks, such as Keras (Keras, 2016), QKeras (Keras, 2017; Keras, 2018), and PyTorch (Pasz, 2018). It is steadily increasing its scope of supported architectures, frameworks, hardware optimizations, and target devices, with the backing of a growing scientific community. Another tool, FINN (Keras, 2018; Keras, 2018) from AMD/Xilinx, aims to solve the problem of bringing NNs (more specifically, QNNs) to FPGAs by using generated high-level synthesis (HLS) code. Both tools create a streamlined process to deploy DL models as efficiently as possible, without requiring large development effort and time. The two tools, hls4ml and FINN, are similar in their goals, though there are differences in their flows, layer support, and targeted optimizations. Both of them support QONNX, an open-source exchange format representing QNNs with arbitrary precision, such that there can be interoperability between the flows. More generally, this is ideal for HAWQ, as it can target multiple hardware-generating tools. In this work, however, we focus only on hls4ml, which has implementations for FPGAs and ASICs and optimizations for a larger range of bit widths.
## 3. Experimental Setup
In this section, we describe the benchmark ML task we explore for particle physics applications. As discussed above, although there are a much broader set of scientific and industrial applications, particle physics applications are a particularly good stress test of our end-to-end workflow. The concept behind the development of particle physics benchmarks is detailed more in Ref. (Keras, 2018), and our jet tagging benchmark is one of the three described there.
### Dataset
We consider a jet classification benchmark of high-\(p_{\mathrm{T}}\) jets to evaluate performance. Particle jets are radiation patterns of quarks and gluons produced in high-energy proton-proton collisions at the LHC. As these jets propagate through detectors like ATLAS or CMS, they leave signals through the various subdetectors, such as the silicon tracker, electromagnetic or hadron calorimeters, or muon detectors. These signals are then combined using jet reconstruction algorithms. We use the benchmark presented in Ref. (Keras, 2018) consisting of 54 features from simulated particle jets produced in proton-proton collisions. Of the 54 high-level features, 16 were chosen based on Table 1 of Ref. (Keras, 2018). The features are a combination of both mass ("dimensionful") and shape ("dimensionless") observables. The dataset (Pasz, 2018) is a collection of 870,000 jets and is divided into two sets: a training set of 630,000 jets, and a test set of 240,000 jets. The dataset underwent preprocessing: all features are standardized by removing the mean and scaling to obtain unit variance. The task is to discriminate jets as originating from one of five particles: W bosons, Z bosons, light quarks (q), top quarks (t), or gluons (g). Descriptions of each observable and particle jet can be found in Ref. (Keras, 2018). Additionally, we measure the accuracy given by the number of correctly classified jets divided by the total number of classified jets.
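For reference, the standardization step can be sketched as follows; the file names are placeholders for however the 16-feature arrays are stored, and the scaler statistics are fit on the training set only:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Placeholder file names: X has one row per jet and 16 high-level features,
# with separate training and test splits as described above.
X_train, X_test = np.load("X_train.npy"), np.load("X_test.npy")

scaler = StandardScaler().fit(X_train)  # per-feature mean and variance
X_train = scaler.transform(X_train)     # zero mean, unit variance
X_test = scaler.transform(X_test)       # same statistics as the training set
```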
### Model & Loss Definition
We implement all models with the architecture presented in Ref. (Krizhevsky et al., 2017), an MLP with three hidden layers of 64, 32, and 32 nodes, respectively. The baseline model is the floating-point implementation of this MLP, i.e., with no quantization. All hidden layers use ReLU activations, and the output is a probability vector of the five classes filtered through the softmax activation function. We aim to minimize the empirical loss function
\[\mathcal{L}_{\mathbf{c}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\ell(f_{\theta}(\mathbf{ x}_{i}),\mathbf{y}_{i})=\frac{1}{N}\sum_{i=1}^{N}\ell(\hat{\mathbf{y}}_{i}, \mathbf{y}_{i}), \tag{6}\]
where \(\ell\) is the categorical cross-entropy loss function and \(N\) is the number of training samples. The model, denoted by \(f_{\theta}\), maps each input \(\mathbf{x}_{i}\in\mathbb{R}^{16}\) to a prediction \(\hat{\mathbf{y}}_{i}\in[0,1]^{5}\), using parameters \(\theta\). Predictions are then compared with ground truth \(\mathbf{y}_{i}\) to minimize the empirical loss. We train the NNs with \(L_{1}\) regularization by including an additional penalty term to the loss,
\[\mathcal{L}(\theta)=\mathcal{L}_{\mathbf{c}}(\theta)+\lambda\sum_{j=1}^{L}\| \mathbf{W}_{j}\|_{1}, \tag{7}\]
where the added penalty term is the sum of the elementwise \(L_{1}\) norms of the weight matrices, \(\mathbf{W}_{j}\) is the "vectorized" form of the weight matrix for the \(j\)th layer, and \(L\) is the number of layers in the model. The \(L_{1}\) regularization term is scaled by a tunable hyperparameter \(\lambda\). Typically, \(L_{1}\) regularization is used to prevent overfitting, enabling statistical models to generalize better outside the training data. It is also known to promote sparsity, which is desirable to reduce the number of computations. Section 4 discusses the implications of \(L_{1}\) regularization in QNNs concerning performance and the other metrics discussed below.
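For illustration, a minimal PyTorch sketch of the regularized loss in Eqn. 7 (the \(\lambda\) value and the parameter-name filter are our assumptions, not the exact HAWQ training code):

```python
import torch
import torch.nn.functional as F

def l1_regularized_loss(model, logits, targets, lam=1e-4):
    """Categorical cross-entropy plus the elementwise L1 penalty of Eqn. 7.
    `lam` is the tunable hyperparameter; note that F.cross_entropy expects
    unnormalized logits, i.e., the output before the softmax."""
    ce = F.cross_entropy(logits, targets)                     # L_c(theta), Eqn. 6
    l1 = sum(p.abs().sum() for name, p in model.named_parameters()
             if "weight" in name)                             # sum_j ||W_j||_1
    return ce + lam * l1
```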
### Metrics: Bit Operations & Sparsity
Similar to floating-point operations, bit operations (BOPs) (Botton et al., 2017) in QNNs are computed to estimate model complexity and the number of operations per inference. BOPs have been shown to accurately predict the area of hardware accelerators and, in turn, the power usage in processing elements (Krizhevsky et al., 2017). This makes BOPs an easy-to-compute metric that is a useful approximation of the total area of a QNN. The bit operations of a fully connected layer with \(b_{a}\)-bit input activations and \(b_{W}\)-bit weights are estimated by:
\[\text{BOPs}\approx mn((1-f_{p})b_{a}b_{W}+b_{a}+b_{W}+\log_{2}(n)), \tag{8}\]
where \(n\) and \(m\) are the number of input and output features, and the \((1-f_{p})\) term accounts for the fraction of weights pruned (i.e., equal to zero). From Eqn. 8, the number of BOPs decreases as the sparsity increases. Sparse models are desired, as zero-weight multiplications are optimized out of the firmware implementation by HLS. This is a highly attractive feature of HLS, and it makes BOPs a noteworthy metric to observe. We measure the total BOPs of each quantization scheme, as well as its relation with accuracy (see Section 4) and hardware usage (see Section 6).
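A small helper implementing Eqn. 8 could look as follows (the layer dimensions in the usage example are those of the jet-tagging MLP; the bit-width choices are illustrative):

```python
import math

def bops_fc(n_in, n_out, b_a, b_w, f_p=0.0):
    """Approximate bit operations of a fully connected layer (Eqn. 8).
    n_in/n_out: input/output features; b_a/b_w: activation/weight bit
    widths; f_p: fraction of weights pruned to zero."""
    return n_in * n_out * ((1 - f_p) * b_a * b_w + b_a + b_w + math.log2(n_in))

# Total BOPs of the 16-64-32-32-5 jet-tagging MLP for, e.g., INT8 weights
# with 11-bit activations (b_a = b_w + 3); the numbers are illustrative.
dims = [16, 64, 32, 32, 5]
total = sum(bops_fc(n, m, b_a=11, b_w=8) for n, m in zip(dims[:-1], dims[1:]))
print(f"{total / 1e3:.1f} kBOPs")
```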
## 4. Quantization-Aware Training
In this section, we discuss the training procedure for homogeneous and mixed-precision quantization. We start in Section 4.1 with a discussion of single-bit-width quantization, which is also referred to as homogeneous quantization. Then, in Section 4.2, we discuss mixed-precision quantization, including how it can greatly improve classification performance, as well as its downsides. In particular, in Section 4.2.2, we cover a method to automatically select the bit width of each layer in a NN using second-order Hessian information, as well as a method obtained by imposing hardware constraints in the bit-width selection process.
### Homogeneous Quantization
Quantizing all layers with the same bit width is simple, but it can cause a significant loss in performance. In Table 1, we present the accuracy for different bit settings from INT12 to INT4 with homogeneous quantization using HAWQ. As expected, we see a significant performance degradation as we quantize below INT8 (and especially below INT6). To combat this, we employed two regularization techniques during training: \(L_{1}\) regularization and batch normalization (BN) [29]. BN provides a more stable distribution of activations throughout training by normalizing the activations and producing a smoother loss landscape [40]. Although using BN improves performance across all quantization schemes, it fails to recover baseline accuracy for INT6 and INT4 quantization. Similarly, \(L_{1}\) regularization improves the model somewhat, but it fails to restore performance to its baseline. Consequently, homogeneously quantizing a model with a single bit-width setting is insufficient for quantization below 8-bit precision.
In addition to employing regularization techniques, we can increase the input quantization bit width. In HAWQ, inputs are quantized before proceeding to the first layer, ensuring all operations are integer only. A possible failure point is quantization error introduced in the inputs at low bit widths, where key features needed to classify jets may be lost. We decouple the precision of the inputs from that of the weights and activations and increase it to INT16. Fig. 1 shows results for 8-bit weights and below, with different bit widths for the activations. We find: (1) increasing the activation bit width significantly improves the classification performance of INT4 and INT6 weights; (2) similar improvements are obtained for INT16 quantized inputs, although this comes at the cost of increased hardware resource usage; and (3) \(L_{1}\) and BN (applied alone or together) are insufficient for recovering the accuracy to baseline levels. For this study, BN is less desirable, as the batch statistics parameters are implemented with floating-point values, thereby increasing the latency and memory footprint. One option is to quantize these values or (even more promisingly) apply BN folding. The idea is to remove BN by using its parameters to update the fully connected (or convolution) layer's weights and biases for inference efficiency. However, after we explored BN folding using the procedure outlined in Ref. [48], we found little to no effect on model performance. As previously mentioned, \(L_{1}\) produces sparse matrices, decreasing the number of bit operations needed in hardware. Henceforth, in later sections, we continue to use \(L_{1}\) during training for mixed-precision quantization. Fig. 1 suggests model performance can greatly benefit from more fine-grained quantization settings. However, manually adjusting all these quantization settings can be time-consuming
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
\multicolumn{2}{c}{Precision} & \multicolumn{1}{c}{Baseline [\%]} & \multicolumn{1}{c}{\(L_{1}\) [\%]} & \multicolumn{1}{c}{BN [\%]} & \multicolumn{1}{c}{\(L_{1}\)+BN [\%]} \\ \hline
Weights & Inputs & & & & \\ \hline
INT12 & INT12 & 76.916 & 72.105 & 77.180 & 76.458 \\
INT8 & INT8 & 76.605 & 76.448 & 76.899 & 76.879 \\
INT6 & INT6 & 73.550 & 73.666 & 74.468 & 74.415 \\
INT4 & INT4 & 62.513 & 63.167 & 63.548 & 63.431 \\ \hline
FP-32 & FP-32 & 76.461 & 76.826 & 76.853 & 76.813 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Classification performance with homogeneous quantization. All weights, activations, and inputs are quantized with the same precision. Models are trained with and without \(L_{1}\) regularization and BN. At INT8 and above, the accuracy is restored to baseline, but at INT6 and below, the accuracy is worse than baseline.
and suboptimal. An optimized bit-setting scheme is needed to simultaneously minimize the loss and hardware usage. In the next subsection, we explore mixed-precision quantization. We fix the input bit width to INT16. This could be further optimized, but this choice makes direct comparison with other work easier [15; 20; 21; 26].
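For reference, the BN folding mentioned above relies on the standard folding identity; below is a minimal sketch for a Linear followed by a BatchNorm1d (a generic sketch, not the exact procedure of Ref. [48]):

```python
import torch

@torch.no_grad()
def fold_bn_into_linear(linear, bn):
    """Fold a BatchNorm1d following a Linear layer into its weights/bias:
    y = gamma*(Wx + b - mu)/sqrt(var + eps) + beta."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # gamma / sigma
    linear.weight.mul_(scale[:, None])                        # W' = diag(scale) W
    bias = linear.bias.detach() if linear.bias is not None else 0.0
    new_bias = (bias - bn.running_mean) * scale + bn.bias
    linear.bias = torch.nn.Parameter(new_bias)
    return linear
```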
### Mixed-Precision Quantization
#### 4.2.1. Brute-force Search
Mixed-precision quantization aims to improve performance by keeping certain layers at a higher precision than others. The basic problem with going beyond homogeneous quantization is that, when implemented naively, the search space for determining the bit setting is exponential in the number of layers. Our model architecture's MLP search space is significantly smaller than that of deep CNNs such as ResNet-50 [27] because our MLP has only 3 hidden layers. However, assuming we have five bit-width options, finding the mixed-precision setting for our MLP classifier, with 4 fully-connected layer weights and activations, gives a search space of \(((2)(4))^{5}=32,768\) combinations. Searching this space exhaustively is impractical, especially for applications that require frequently retrained models or deeper NNs. Several methods have been proposed to address this problem of manually searching for the optimal bit configuration [9; 18; 41; 46; 47]. We use Ref. [18], which is based on the Hessian information, and we observe the relative position of Hessian-based solutions within the _brute-force search_ space.
#### 4.2.2. Hessian-Aware Quantization
As discussed in Sec. 4.1, performance greatly benefits from higher precision in the activations, suggesting that certain layers are more sensitive to quantization than others. We use the method first proposed in HAWQv2 [18] to determine the relative sensitivity of each layer for the baseline 32-bit floating-point implementation of the model. The sensitivity metric is computed using the Hutchinson algorithm,
\[\mathrm{Tr}(H)\approx\frac{1}{k}\sum_{i=1}^{k}z_{i}^{\intercal}Hz_{i}=\mathrm{ Tr}_{\mathrm{Est}}(H), \tag{9}\]

where \(H\in\mathbb{R}^{d\times d}\) is the Hessian matrix of second-order partial derivatives of the loss function with respect to all \(d\) model parameters, \(z\in\mathbb{R}^{d}\) is a random vector whose components are i.i.d. samples from a Rademacher distribution, and \(k\) is the number of Hutchinson steps used for trace estimation.

Figure 1. Model performance using homogeneous quantization. The precision of weights is indicated after "w" and activations after "a." Models are trained with \(L_{1}\) regularization and BN. We can see: (1) 16-bit input improves the model performance of all bit settings; (2) larger activation bit widths improve accuracy; and (3) \(L_{1}\) and BN (applied alone or together) show no positive impact on performance.

Fig. 2 shows the average Hessian trace (our sensitivity metric) of each layer in the baseline model, with logarithmic scaling. The first two layers are the most sensitive, with the first layer more sensitive than the second by a factor of 7. Thus, the first two layers in the network must have a larger bit-width setting, while the last two layers can be quantized more aggressively. While the Hessian traces provide a sensitivity metric, they do not directly translate to a bit configuration. Instead, Ref. [18] assigns the bit width of each layer \(i\) by checking the corresponding \(\Omega\) term, defined as:
\[\Omega=\sum_{i=1}^{L}\Omega_{i}=\sum_{i=1}^{L}\overline{\mathrm{Tr}}(H_{i}) \|Q(W_{i})-W_{i}\|_{2}^{2}, \tag{10}\]
where \(Q\) is the quantization function, \(\|Q(W_{i})-W_{i}\|_{2}^{2}\) is the squared \(L_{2}\) norm of the quantization perturbation, and \(\overline{\mathrm{Tr}}\) is the average Hessian trace. We apply the same technique as Ref. [18], where the amount of second-order perturbation, \(\Omega\), is calculated for a given set of quantization schemes, and the minimal \(\Omega\) is chosen. This procedure is fully automated without any manual intervention.
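A schematic PyTorch implementation of the Hutchinson estimator of Eqn. 9, used to obtain the average traces above (assuming `loss` was computed with a differentiable graph and `params` is the list of layer parameters; not HAWQv2's exact code):

```python
import torch

def hutchinson_trace(loss, params, k=50):
    """Estimate Tr(H) of the Hessian of `loss` w.r.t. `params` (Eqn. 9)
    using k Rademacher probe vectors and Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(k):
        zs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]  # +/-1
        # H z computed as the gradient of (g . z) w.r.t. the parameters
        hzs = torch.autograd.grad(grads, params, grad_outputs=zs,
                                  retain_graph=True)
        trace += sum((z * hz).sum() for z, hz in zip(zs, hzs)).item()
    return trace / k
```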
We follow the procedure outlined in HAWQ [48] to constrain Eqn. 10 by the total BOPs. We formulate an integer linear programming (ILP) optimization problem, where the objective is to minimize the total perturbation \(\Omega\) while satisfying the BOPs constraint. We set up an ILP problem to automatically determine the bit settings of our classifier for various BOPs limits, and we compare these solutions with the brute-force and homogeneous quantization methods.
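The ILP can be sketched with an off-the-shelf solver such as PuLP; the per-layer `omega[i][b]` and `bops[i][b]` tables are assumed to be precomputed from Eqns. 8 and 10 (an illustrative formulation, not the exact HAWQ implementation):

```python
import pulp

def ilp_bit_assignment(omega, bops, bops_budget, bit_options=(4, 5, 6, 7, 8)):
    """omega[i][b] / bops[i][b]: perturbation term and BOPs of layer i when
    quantized at bit width b. Returns one bit width per layer minimizing the
    total perturbation (Eqn. 10) subject to a total-BOPs budget."""
    prob = pulp.LpProblem("hessian_aware_bits", pulp.LpMinimize)
    x = {(i, b): pulp.LpVariable(f"x_{i}_{b}", cat="Binary")
         for i in range(len(omega)) for b in bit_options}
    prob += pulp.lpSum(omega[i][b] * x[i, b] for (i, b) in x)    # objective
    for i in range(len(omega)):                                  # one bit width per layer
        prob += pulp.lpSum(x[i, b] for b in bit_options) == 1
    prob += pulp.lpSum(bops[i][b] * x[i, b] for (i, b) in x) <= bops_budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {i: b for (i, b) in x if x[i, b].value() > 0.5}
```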
#### 4.2.3. QAT Results
With the information provided in Fig. 1, we sweep all possible bit settings based on the initial implementation in homogeneous quantization. We explore the weight bit widths \(b_{W}=\{4,5,6,7,8\}\), and we set the activation bit width \(b_{a}=b_{W}+3\) to prevent saturation and further reduce the search space. All models are trained for 100 epochs, with \(L_{1}\) regularization, and all models use quantized inputs with INT16. Fig. 3 presents the model accuracy against BOPs for all combinations of weight bits \(b_{W}\). Data points are color-coded based on the bit precision of the first layer. Several data points indicate a complete or nearly complete recovery to baseline accuracy (76.853%). The majority of points can be clustered based on the bit width of the first layer, since the model accuracy generally increases as the first layer's bit width increases. We can also see in Fig. 3 that the bit width of the first fully-connected layer greatly impacts the final model performance. Among the top 100 best-performing models, 66 had INT8 weights in the first dense layer, and 33 had INT7 weights. This coincides with the average Hessian traces shown in Fig. 2, showing the first layer is the most sensitive to quantization, by a factor of 7\(\times\) compared to the second most sensitive layer. Among the top models, the frequency of 7-bit and 8-bit settings in later layers decreases significantly; the bit width of the later layers has less effect on the classification than that of the first two layers.

Figure 2. Average Hessian trace of each fully-connected (Fc) layer in the MLP. The Hessian is used as a sensitivity metric to quantization, where layers are ranked based on their trace. The traces of the first two layers are significantly larger than the others, signifying these layers are more prone to error at lower bit widths. The average Hessian traces are used to assign each layer a bit setting, i.e., layers with higher traces are assigned larger precision.
The ILP solutions to Eqn. 10 are also shown. The solutions are obtained with respect to 7 different BOPs constraints, from 250 k to 550 k in steps of 50 k. As expected, as the BOPs constraint increases, the selected precision of the first two layers increases. Hence, we begin to see more ILP solutions closer to the 8-bit cluster. The ILP solutions also tend to be positioned towards the lower end of BOPs in their local cluster. With brute-force search quantization and the ILP solutions shown side by side, the advantages of using the Hessian information become clearer. While an optimal solution is not guaranteed, the Hessian provides a stable and reliable solution to mixed-precision quantization. This is ideal for deep learning models that need to be quantized to meet the resource constraints and inference times of the LHC 40 MHz collision rate.
## 5. Conversion into QONNX
### Intermediate Representations
To increase interoperability and hardware accessibility, the Open Neural Network Exchange (ONNX) format was established to set open standards for describing the computational graph of ML algorithms (Beng et al., 2017). ONNX defines a common and wide set of operators, enabling developers and researchers greater freedom and choice between frameworks, tools, compilers, and hardware accelerators. Currently, ONNX offers some support for quantized operators, including QuantLinear, QLinearConv, and QLinearMatMul. However, ONNX falls short in representing arbitrary-precision and ultra-low quantization, below 8-bit precision. To overcome these issues, recent work [37] introduced quantize-clip-dequantize (QCDQ), using existing ONNX operators, and a novel extension with new operators, called QONNX, to represent QNNs. QONNX introduces three new custom operators: Quant, BipolarQuant, and Trunc. The custom operators enable uniform quantization and abstract finer details, making the intermediate representation graph flexible and at a higher level of abstraction than QCDQ.

Figure 3. Brute-force search quantization using weight bit widths \(b_{W}=\{4,5,6,7,8\}\). Each data point is color-coded based on the bit width of the first fully-connected layer. Its importance in quantization coincides with the observed clusters, with higher-performing models using larger bit widths. Solutions based on ILP are also presented; all ILP solutions make trade-offs based on the quantization error and bit width, and typically lie among the lowest-BOPs points in their respective cluster.
For these reasons, we represent HAWQ NNs in the QONNX format, leveraging HAWQ's ultra-low precision and QONNX's abstraction to target two FPGA synthesis tools, hls4ml and FINN [44; 7].1 We also include the ONNX format in our model exporter for representing QNNs. In the next subsections, we describe the setup, export procedure, and validation steps to represent HAWQ NNs in the QONNX and QCDQ intermediate representations.
Footnote 1: The main focus in exporting QNNs is the QONNX intermediate format. However, the QONNX software toolkit enables conversion to QCDQ format. This allows HAWQ to target hls4ml and FINN, and indirectly all other ONNX inference accelerators and frameworks.
### Model Translation
In PyTorch, exporting to ONNX works via tracing. This is the process of capturing all the operations invoked during the forward pass on some input. PyTorch provides the means for tracing through the torch.jit API. Tracing a model will return an executable that is optimized using the PyTorch just-in-time compiler. The executable contains the structure of the model and original parameters. Tracing will not record any control flow like if-statements and loops. The returned executable will always run the same traced graph on any input, which may not be ideal for functions or modules that are expected to run different sets of operations depending on the input and model state. The executable is then used to build the ONNX graph by translating operations and parameters within the executable to standard ONNX operators. In general, all PyTorch models are translated to ONNX using this process, and we extend this existing system to build support for QONNX operators in HAWQ.
The layers in HAWQ and operators in QONNX both require extra steps to support tracing and export. For each quantized layer in HAWQ, we implement a corresponding "export" layer. These dedicated export layers implement the forward pass and specify the equivalent QONNX operators based on the original layer parameters. This is accomplished by registering _symbolic functions_ via torch.onnx.register_custom_op_symbolic. These symbolic functions decompose HAWQ layer operations into a series of QONNX nodes. Because we are using custom QONNX nodes, we must also register them via the torch.onnx API. Together, these preliminary steps define the HAWQ-to-QONNX translation. During the export process, the exporter looks for a registered symbolic function for each visited operator. If a given model contains quantized HAWQ or standard PyTorch layers, it can be traced and finally translated to standard ONNX and QONNX operators. Because tracing records computations, the input can be random as long as the dimensions and data type are correct. The model exported with ONNX and QONNX operators is shown in Fig. 4(a). With these additions, our exporter can perform the following (a condensed sketch of the registration step follows the list):
1. export models containing HAWQ layers to QONNX, with custom operators to handle a wide range of bit widths while keeping the graph at a higher level of abstraction; and
2. export models containing HAWQ layers to standard ONNX with INT8 and UINT8 restrictions.
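As referenced above, a condensed sketch of the registration and tracing-based export steps (the op name `hawq::quant`, its attributes, and the opset version are illustrative assumptions, not HAWQ's actual identifiers):

```python
import torch
import torch.onnx

OPSET = 13  # assumed ONNX opset version

def quant_symbolic(g, x, scale, zero_point, bit_width):
    """Map a HAWQ-style quantize op onto the custom QONNX Quant node."""
    return g.op("qonnx.custom_op.general::Quant", x, scale, zero_point,
                bit_width, narrow_i=0, signed_i=1, rounding_mode_s="ROUND")

# Tell the exporter how to translate the custom op wherever it is traced.
torch.onnx.register_custom_op_symbolic("hawq::quant", quant_symbolic, OPSET)

# Tracing-based export: a dummy input with the right shape/dtype suffices.
# model = ...  # HAWQ model wrapped in its dedicated export layers
# torch.onnx.export(model, torch.randn(1, 16), "jet_tagger.onnx",
#                   opset_version=OPSET)
```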
### Post-Export
#### 5.3.1. Optimization
In order to create firmware using hls4ml or FINN, the QONNX graph is expected to be normalized, i.e., to undergo several optimization steps. The QONNX software utilities [37] provide these transformations, as shown in Fig. 4(b), where shape inference and constant folding are applied to the graph. Fig. 4(c) shows the last optimization step: we merge scaling factors across ReLU activation functions. For reasons related to the underlying implementation of HAWQ, there are two scaling operations, before and after specific layers. For a detailed explanation of these scaling factors, see Section 2.1. To reduce the number of operations needed in firmware, we combine the scaling factors in cases where the ReLU function is used. This cannot always be done, as it depends on the activation function used.
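The merge relies on positive scales commuting with ReLU, i.e., \(s_{2}\,\mathrm{ReLU}(s_{1}x)=\mathrm{ReLU}(s_{2}s_{1}x)\) for \(s_{1},s_{2}>0\); a minimal sketch:

```python
import numpy as np

def merge_scales_across_relu(s_before, s_after):
    """Collapse two scaling nodes around a ReLU into one. Valid only for
    strictly positive scales, since then s2*relu(s1*x) == relu(s2*s1*x)."""
    assert np.all(s_before > 0) and np.all(s_after > 0)
    return s_after * s_before
```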
#### 5.3.2. Graph Evaluation
After exporting, we evaluate the model using the QONNX software package (Zhu et al., 2017), confirming a successful translation of our model from HAWQ to QONNX. While the main focus has been MLPs, exporting is not limited to this one architecture. All HAWQ layers now support QONNX export via the implemented symbolic functions. Moreover, with the QONNX software package, it is easy to transform, optimize, evaluate, and validate the exported HAWQ models.
## 6. Hardware Generation
In this section, we explain where HAWQ fits within the hls4ml hardware generation workflow. The total resources used, BOPs, and classification performance for different bit width configurations are shown and discussed.
### hls4ml Ingestion
The hls4ml workflow automatically performs the translation of the architecture, weights, and biases of NNs, layer by layer, into code that can be synthesized to RTL with HLS tools. The first part of this workflow entails training a NN for a task as usual with PyTorch, Keras, QKeras, or HAWQ. For HAWQ, a QONNX graph must be exported from the model, but this step can (optionally) be performed for all the frameworks (and, eventually, this will be the preferred flow). Next, hls4ml translates the QONNX graph into an HLS project that can subsequently be synthesized and implemented on an FPGA or ASIC in the final step of the workflow.

Figure 4. The QONNX graph in its three stages after exporting. (a) The first layers of the model, including the quantized fully-connected layer, before any optimizations. (b) The first layers of the model after post-clean-up operations: constant folding, shape inference, and tensor and node renaming. (c) The final optimization step: node merging across ReLU activations. All QNNs implemented in HAWQ can be exported to a QONNX or ONNX intermediate representation and undergo the transformations described in each stage.
All results presented are synthesized for a Xilinx Kintex Ultrascale FPGA with part number xcu250-figd2104-2L-e. We report the usage of different resources: digital signal processor units (DSPs), flip-flops (FFs), and look-up tables (LUTs). We do not report the usage of block RAM (BRAM), a dense memory resource, because its only use in the design is to store precomputed outputs for the softmax activation, whose numerical precision is the same for all quantization schemes. Only the "bare" firmware design needed to implement the NN is built with RTL synthesis using Vivado 2020.1. All NNs are maximally parallelized. In hls4ml, parallelization is configured with a "reuse factor" that sets the number of times a multiplier is used to compute a layer's output. A fully parallel design corresponds to a reuse factor of one. All resource usage metrics are based on this "bare" implementation after RTL synthesis, and all designs use a clock frequency of 200 MHz.
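A sketch of the ingestion step matching these settings (entry points and option names vary across hls4ml versions, so treat the calls below as illustrative):

```python
import hls4ml

# Fully parallel design (ReuseFactor = 1) at a 5 ns clock (200 MHz).
config = {"Model": {"Precision": "ap_fixed<16,6>",
                    "ReuseFactor": 1,
                    "Strategy": "Latency"}}

hls_model = hls4ml.converters.convert_from_onnx_model(
    "jet_tagger.onnx",                 # exported (Q)ONNX graph
    hls_config=config,
    output_dir="hls_prj",
    part="xcu250-figd2104-2L-e",       # target FPGA from this section
    clock_period=5)
hls_model.compile()                    # build the C-simulation model
# hls_model.build(csim=False)          # run HLS and RTL synthesis (slow)
```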
### Synthesis Results
Fig. 5 shows the resource usage compared with the accuracy of the implemented designs. Quantization bit-width settings were chosen at random. Higher-performing models use more resources. This is expected, as the top 100 performing models use larger bit widths for the first layer, which is the largest layer in the model. As such, we expect to see more resources as accuracy increases. LUTs have the most linear relationship with accuracy, while FFs and DSPs also increase with accuracy. The relationship between BOPs and resources, presented in Fig. 6, also shows a linear relationship between LUTs and BOPs, which scale with the bit width and weight matrix dimensions. The number of LUTs used is dependent on the bit width because, at low bit widths, addition and multiplication are implemented with LUTs. However, DSPs are used at larger bit widths because they become much more efficient. DSPs offer custom datapaths, efficiently implementing a series of arithmetic operations, including multiplication, addition, multiply-accumulate (MAC), and word-level logical operations. DSP datapaths are less flexible than programmable logic, but they are more efficient at multiplying and MAC operations. This is shown in Fig. 6, as DSP usage increases, dramatically at points, with larger BOPs. Switching from LUTs to DSPs depends on the target device and Vivado HLS internal biases toward DSPs for certain bit widths. The shift towards DSPs occurs with 11 or wider bits in Vivado 2020.1, with multiplications below this limit implemented using LUTs. The result of these operations is stored in FFs, displaying a steady increase with fewer variations than that seen in DSPs. The number of FFs rises at a constant pace up to 250 k BOPs, with deviations beginning to appear thereafter. The inconsistencies in the number of FFs for neighboring BOPs suggest there is a weaker correlation between the two. The deviations come from the precision needed for intermediate accumulations, and the total FFs needed will vary from network to network.
The baseline (BL) model is synthesized after adjusting the weights without any fine-tuning. In hls4ml, parameters and computations are represented using fixed-point arithmetic, and each layer in the model can be quantized after training by specifying a reduced precision. Fixed-point data types model the data with integer and fraction bits using the format ap_fixed<W,D>, where W is the total bit width and D the number of integer bits. The BL model uses ap_fixed<16,6> for all parameters and computations and is fully unrolled, i.e., maximally parallelized, as in previous results. We compare the BL logical synthesis results with the homogeneous and a Hessian-aware quantization model. From Table 1, the performance of homogeneous quantization begins to decline below INT8, so we use the INT8 scheme for comparison with BL. Of the multiple Hessian-aware solutions, we choose the solution given by the lowest BOPs constraint, i.e., the quantization scheme is 4, 4, 5, and 4 bits for the first, second, third, and final output
layers, respectively. Table 2 shows the synthesis results for three models: BL, INT8 homogeneous quantization, and the Hessian-aware solution. With INT8 homogeneous quantization, there is a significant reduction in DSPs compared to the BL model, which is further reduced with mixed Hessian-aware quantization. We expect that as the bit width increases, more MAC operations will be implemented in DSPs, which offer a much more efficient implementation than LUTs and FFs. Interestingly, there is only a minor decrease in FFs from BL with INT8, compared to the other resources, but this is mostly attributed to the INT16 inputs. Simply put, larger inputs require more FFs to store and accumulate computations, but their utilization drastically decreases with lower bit widths. The MLP with a Hessian-aware quantization scheme uses 42.2% fewer LUTs, 36.3% fewer FFs, and 95.7% fewer DSPs, compared to BL. As precision is reduced, the number of LUTs needed to compute outputs decreases. Most computations with lower precision can be implemented with LUTs; hence they have the strongest correlation with BOPs. However, this observed relationship weakens as bit width increases and DSPs are used instead. The sudden upticks in LUTs and FFs are outliers that originate from the softmax activation. As previously mentioned, the softmax activation stores precomputed outputs, and the surge comes from the lookup tables created to store all values at large bit widths. Table 2 also includes the automatic mixed-precision solution, QB, from AutoQKeras [15], a QNN optimized by minimizing the model size in terms of bits. The AutoQKeras solution for jet tagging, denoted as QB in Table 2, drives down all resource metrics by a substantial amount by employing 4-bit quantization below the threshold of the softmax activation. The advantages are also seen in latency, while accuracy drops by only a tolerable 4%. In this study, binary and ternary quantization were not explored as in AutoQKeras, but the overall gains from leveraging mixed precision are clearly shown.

Figure 5. Resource usage for a subset of brute-force quantization (BFQ) using weight bit widths \(b_{W}=\{4,5,6,7,8\}\). LUT, FF, and DSP usage versus accuracy are shown, with the higher-performing quantization schemes among the highest resource users. All solutions to the ILP problem under the BOPs constraints are presented. Extra logic elements are needed to maintain accuracy, while a considerable reduction in all metrics can be achieved with a 1-2% drop in accuracy.
The latency for these models, as estimated by Vivado HLS, is also shown in Table 2. Latency estimates are based on the specified clock, the loop transformations' analysis, and the design's parallelization. Pipelining and data flow choices can heavily change the actual throughput. However, the latency for the quantized models is about 30 ns longer than for BL. This can primarily be attributed to the additional scaling operations of the intermediate accumulations needed for lower-precision quantities. While the additional computation adds latency, the resources needed by these scaling layers are modest, approximately 1-3% relative to the rest of the design. So there is a latency-resource trade-off for the lower-precision computations. However, for the task at hand, the large reduction in resources is worth the increase in latency. The softmax activation is the other significant contributor to latency, with an estimated 10 ns runtime for all three quantized models presented in Table 2. As stated above, BRAMs are used for storing the precomputed outputs, and the latency mainly arises from reading memory. Removing the softmax activation function from the implemented design is usually possible, especially if only the top-\(k\) classes are needed for further computation.
## 7. Summary
The possible applications of HAWQ on edge devices and its automatic bit-setting procedure make it a convincing candidate for physics research. In this paper, we contributed to the HAWQ library by introducing an extension to convert NNs to the ONNX and QONNX intermediate formats. Bridging HAWQ with firmware synthesis tools that ingest these formats makes it easier to deploy NNs to edge devices, such as FPGAs or ASICs, opening many potential use cases in science. As an initial case study, we employed a NN to classify jets using a challenging benchmark commonly used for QNNs in jet tagging. We show that Hessian-aware mixed-precision quantization provides a reliable solution. We then used our new exporter in HAWQ to translate multiple MLPs optimized with various bit settings to their QONNX IR. Models were successfully translated from HAWQ to a firmware implementation, and
Figure 6. Resource usage for a subset of QNNs in a brute-force search for an optimal mixed-precision quantization scheme. LUT, FF, and DSP usage versus BOPs are shown, with LUTs having the most linear relationship to BOPs. This relationship weakens with larger bit widths, as DSPs can implement MAC operations more efficiently. In all designs, FFs are the only type of memory utilized in the fully-connected layers, and the total used can vary drastically for neighboring BOPs, implying a weaker relationship between the two.
we observed the resource usage in relation to the total BOPs and accuracy. Furthermore, we compared the resource utilization of multiple different bit settings with the automatic bit-selection process in Ref. [18], and we compared the Hessian-aware model with a homogeneous bit configuration and the baseline. The Hessian-aware solution significantly reduced all resource metrics (LUTs, FFs, and DSPs), with the most significant improvements in DSPs and LUTs, using 95.7% and 42.2% fewer DSPs and LUTs compared to baseline, respectively. Although the current study is limited to MLPs, all NN architectures can first be exported to an ONNX or QONNX intermediate representation graph and then be processed by whichever tool supports the format.
## Acknowledgments
JC, NT, AG, MWM, and JD are supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research under the "Real-time Data Reduction Codesign at the Extreme Edge for Science" Project (DE-FOA-0002501). JM is supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the DOE, Office of Science, Office of High Energy Physics. JD is also supported by the DOE, Office of Science, Office of High Energy Physics Early Career Research program under Grant No. DE-SC0021187, and the U.S. National Science Foundation (NSF) Harnessing the Data Revolution (HDR) Institute for Accelerating AI Algorithms for Data Driven Discovery (A3D3) under Cooperative Agreement No. OAC-2117997. NT is also supported by the DOE Early Career Research program under Award No. DE-0000247070,
|
2303.12797 | An algorithmic framework for the optimization of deep neural networks
architectures and hyperparameters | In this paper, we propose an algorithmic framework to automatically generate
efficient deep neural networks and optimize their associated hyperparameters.
The framework is based on evolving directed acyclic graphs (DAGs), defining a
more flexible search space than the existing ones in the literature. It allows
mixtures of different classical operations: convolutions, recurrences and dense
layers, but also more newfangled operations such as self-attention. Based on
this search space we propose neighbourhood and evolution search operators to
optimize both the architecture and hyper-parameters of our networks. These
search operators can be used with any metaheuristic capable of handling mixed
search spaces. We tested our algorithmic framework with an evolutionary
algorithm on a time series prediction benchmark. The results demonstrate that
our framework was able to find models outperforming the established baseline on
numerous datasets. | Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel | 2023-02-27T08:00:33Z | http://arxiv.org/abs/2303.12797v2 | An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters
###### Abstract
In this paper, we propose an algorithmic framework to automatically generate efficient deep neural networks and optimize their associated hyperparameters. The framework is based on evolving directed acyclic graphs (DAGs), defining a more flexible search space than the existing ones in the literature. It allows mixtures of different classical operations: convolutions, recurrences and dense layers, but also more newfangled operations such as self-attention. Based on this search space we propose neighbourhood and evolution search operators to optimize both the architecture and hyper-parameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an evolutionary algorithm on a time series prediction benchmark. The results demonstrate that our framework was able to find models outperforming the established baseline on numerous datasets.
Metaheuristics \(\cdot\) Evolutionary Algorithm \(\cdot\) AutoML \(\cdot\) Neural Architecture Search \(\cdot\) Hyperparameter Optimization \(\cdot\) Directed Acyclic Graphs \(\cdot\) Time Series Forecasting
## 1 Introduction
With the recent successes of deep learning in many research fields, deep neural network (DNN) optimization has stimulated growing interest in the scientific community (Talbi, 2021). While each new learning task requires the handcrafted design of a new DNN, automated deep learning facilitates the creation of powerful DNNs. The goals are to give less experienced practitioners access to deep learning, to reduce the tedious task of tuning many parameters to reach an optimal DNN, and, finally, to go beyond what humans can design by creating non-intuitive DNNs that may ultimately prove more efficient.
Optimizing a DNN means automatically finding an optimal architecture for a given learning task: choosing the operations, the connections between those operations, and the associated hyperparameters. The first task is known as Neural Architecture Search (NAS) (Elsken et al., 2019), and the second as HyperParameter Optimization (HPO). Most works from the literature try to tackle only one of these two optimization problems. Many papers related to NAS (White et al., 2021; Loni et al., 2020; Wang et al., 2019; Sun et al., 2018; Zhong, 2020) focus on designing optimal architectures for computer vision tasks with many stacked convolution and pooling layers. Because each DNN training is time-consuming, researchers have tried to reduce the search space by adding many constraints that prevent the search from reaching irrelevant architectures. This affects the flexibility of the designed search spaces and limits the hyperparameter optimization.
We introduce in this paper a new optimization framework for AutoML based on the evolution of Directed Acyclic Graphs (DAGs). The encoding and the search operators may be used with various deep learning and AutoML problems. We ran experiments on time series forecasting tasks and demonstrated on a large variety of datasets that our framework can find DNNs which compete with or even outperform state-of-the-art forecasters. In summary, our contributions are as follows:
* The precise definition of a flexible and complete search space based on DAGs, for the optimization of DNN architectures and hyperparameters.
* The design of efficient neighbourhoods and variation operators for DAGs. With these operators, any meta-heuristic designed for a mixed and variable-size search space can be applied. In this paper, we investigate the use of evolutionary algorithms.
* The validation of the algorithmic framework on popular time series forecasting benchmarks (Godahewa et al., 2021). We outperformed 13 statistical and machine learning models on 24 out of 40 datasets, proving the efficiency and robustness of our framework.
The paper is organized as follows: Section 2 reviews the literature on deep learning models for time series forecasting and AutoML. Section 3 defines our search space. Section 4 presents our neighbourhoods and variation operators within evolutionary algorithms. Section 5 details our experimental results obtained on popular time series forecasting benchmarks. Finally, Section 6 concludes and introduces further research opportunities.
## 2 Related Work
### Deep learning for time series forecasting
Time series forecasting has been studied for decades. The field was dominated for a long time by statistical tools such as ARIMA, Exponential Smoothing (ES), or (S)ARIMAX, this last model allowing the use of exogenous variables. It is now opening up to deep learning models (Liu et al., 2021), which have recently achieved strong performance on many datasets. Three main parts compose typical DNNs: an input layer, several hidden layers and an output layer. In this paper, we define a search space designed to search for the best hidden layers, given a meta-architecture (see Figure 5), for a specific time series forecasting task. Next, we introduce the usual DNN layers considered in our search space.
The first layer type from our search space is the fully-connected layer, or Multi-Layer Perceptron (MLP). The input vector is multiplied by a weight matrix. Most architectures use such layers as simple building blocks for dimension matching, input embedding or output modelling. The N-Beats model is a well-known example of a DNN based on fully-connected layers for time series forecasting (Oreshkin et al., 2019).
The second layer type (LeCun et al., 2015) is the convolution layer (CNN). Inspired by the human brain's visual cortex, it has mainly been popularised for computer vision. The convolution layer uses a discrete convolution operator between the input data and a small matrix called a filter. The extracted features are local and time-invariant if the considered data are time series. Many architectures designed for time series forecasting are based on convolution layers such as WaveNet (Oord et al., 2016) and Temporal Convolution Networks (Lea et al., 2017).
The third layer type is the recurrent layer (RNN), specifically designed for sequential data processing, therefore, particularly suitable for time series. These layers scan the sequential data and keep information from the sequence past in memory to predict its future. A popular model based on RNN layers is the Seq2Seq network (Cho et al., 2014). Two RNNs, an encoder and a decoder, are sequentially connected by a fixed-length vector. Various versions of the Seq2Seq model have been introduced in the literature, such as the DeepAR model (Salinas et al., 2020), which encompasses an RNN encoder in an autoregressive model. The major weakness of RNN layers is the modelling of long-term dynamics due to the vanishing gradient. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers have been introduced (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) to overcome this problem.
The fourth and final layer type in our search space is the attention layer, which has been popularized within the deep learning community as part of Vaswani's transformer model (Vaswani et al., 2017). The attention layer is more generic than the convolution: it can model the dependencies of each element of the input sequence with all the others. In the vanilla transformer (Vaswani et al., 2017), the attention layer does not factor the relative distance between inputs into its modelling, but rather the element's absolute position in the sequence. The Transformer-XL (Dai et al., 2019), a transformer variant created to tackle long-term dependency tasks, introduces a self-attention version with relative positions. Cordonnier et al. (2019) used this new attention formulation to show that, under a specific configuration of parameters, attention layers could be trained as convolution layers. Within our search space, we chose this last formulation of attention, with relative positions.
The first three layer types (i.e. MLP, CNN, RNN) have frequently been mixed into DNN architectures. Sequential and parallel combinations of convolution, recurrent and fully connected layers often compose state-of-the-art DNN models for time series forecasting. Layer diversity enables the extraction of different and complementary features from input data, allowing better predictions. Some recent DNN models introduce transformers into hybrid DNNs. In Lim et al. (2021), the authors developed the Temporal Fusion Transformer, a hybrid model stacking transformer layers on top of an RNN layer. With this in mind, we built a flexible search space which generalizes hybrid DNN models including MLPs, CNNs, RNNs and transformers.
### Search spaces for automated deep learning
Designing an efficient DNN for a given task requires choosing an architecture and tuning its many hyperparameters. It is a difficult, fastidious, and time-consuming optimization task. Moreover, it requires expertise and restricts the discovery of new DNNs to what humans can design. Research related to the automatic design and optimization of DNNs has therefore grown over the last decade (Talbi, 2021). The first challenge with automated deep learning (AutoDL), and more specifically neural architecture search (NAS), is the search space design. If the solution encoding is too broad and allows too many architectures, we might need to evaluate many architectures to explore the search space. However, training many DNNs would require considerable computing time and become unfeasible. On the contrary, if the search space is too small, we might miss promising solutions. Besides, the encoding of DNNs defining the search space should follow some rules (Talbi, 2021):
* Completeness: all candidate DNNs solutions should be encoded in the search space.
* Connexity: a path should always be possible between two encoded DNNs in the search space.
* Efficiency: the encoding should be easy to manipulate by the search operators (i.e. neighbourhoods, variation operators) of the search strategy.
* Constraint handling: the encoding should facilitate the handling of the various constraints to generate feasible DNNs.
A complete classification of encoding strategies for NAS is presented in Talbi (2021) and reproduced in Figure 1. We can discriminate between direct and indirect encodings. With direct strategies, the DNNs are completely defined by the encoding, while indirect strategies need a decoder to recover the architecture. Amongst direct strategies, one can discriminate between two categories: flat and hierarchical encodings. In flat encodings, all layers are individually encoded (Loni et al., 2020; Sun et al., 2018; Wang et al., 2018, 2019a). The global architecture can be a single chain, with each layer having a single input and a single output, which is called chain-structured (Asuncao et al., 2018), but more complex patterns, such as multiple outputs and skip connections, have been introduced in the extended flat DNN encoding (Chen et al., 2021). In hierarchical encodings, the layers are bundled into blocks (Pham et al., 2018; Shu et al., 2019; Liu et al., 2017; Zhang et al., 2019). If the optimization is made on the sequencing of the blocks, with an already chosen content, this is referred to as inner-level fixed (Camero et al., 2021; White et al., 2021). If the optimization is made on the blocks' content with a fixed sequencing, it is called outer-level fixed. A joint optimization with no level fixed is also an option (Liu et al., 2019). Regarding the indirect strategies, one popular encoding is the one-shot architecture (Bender et al., 2018; Brock et al., 2017). A single large network subsuming all candidates from the search space is trained, and the architectures are then found by pruning some branches. Only the most promising architectures are retrained from scratch.
Our search space can be categorized as a direct and extended flat encoding. Each layer is individually encoded in our search space. It is more flexible than the search spaces designed in the literature. First, we tackle both the optimization of the architecture and of the hyperparameters. Second, the diversity of candidate DNNs is much greater than in the encodings found in the literature. We allow a combination of recurrent, convolution, attention-based and fully connected layers, leading to innovative, original yet well-performing DNNs. To our knowledge, this encoding has never been investigated in the literature.

Figure 1: Classification of encoding strategies for NAS (Talbi, 2021).
### AutoML for time series forecasting
The automated design of DNNs, called Automated Deep Learning (AutoDL), belongs to a larger field (Hutter et al., 2019) called Automated Machine Learning (AutoML). AutoML aims to automatically design well-performing machine learning pipelines for a given task. Works on model optimization for time series forecasting have mainly focused on AutoML rather than AutoDL (Alshafer et al., 2022). The optimization can be performed at several levels: input feature selection, extraction and engineering, model selection, and hyperparameter tuning. Early research focused on one of these subproblems, while more recent works offer complete optimization pipelines.
The first subproblems, input feature selection, extraction and engineering, are specific to our learning task: time series forecasting. This tedious task can significantly improve prediction scores by giving the model relevant information about the data. Methods to select features include computing the importance of each feature on the results or using statistical tools on the signals to extract relevant information. Next, model selection aims at choosing, among a set of diverse machine learning models, the best-performing one on a given task. Often, the models are trained separately, and the best model is chosen. In general, the selected model has many hyperparameters, such as the number of hidden layers, the activation function or the learning rate. Their optimization usually improves the performance of the model.
Nowadays, many research works implement complete optimization pipelines combining those subproblems for time series forecasting. The Time Series Pipeline Optimization framework (Dahl, 2020) is based on an evolutionary algorithm to automatically find the right features thanks to input signal analysis, then the model and its related hyperparameters. AutoAI-TS (Shah et al., 2021) is also a complete optimization pipeline, with model selection performed among a wide assortment of models: statistical models, machine learning models, deep learning models and hybrid models. Finally, the framework Auto-Pytorch-TS (Deng et al., 2022) is specific to deep learning model optimization for time series forecasting. The framework uses Bayesian optimization with multi-fidelity optimization.
Except for Auto-Pytorch-TS, the cited works covering the entire optimization pipeline for time series do not go deeply into model optimization and only perform model selection and hyperparameter optimization. However, time series data are becoming more complex, and there is a growing need for more sophisticated and data-specific DNNs. In this work, we only tackle the model selection and hyperparameter optimization parts of the pipeline. We made this choice to show the effectiveness of our framework for designing better DNNs. If we had implemented feature selection, it would have been harder to determine whether the superiority of our results came from the input feature pool or the model itself. We discuss this further in Section 5.
## 3 Search space definition
The development of a complete optimization framework for AutoDL requires the definition of the search space, the objective function and the search algorithm. In this section, the handled optimization problem is formulated. Then, the search space and its characteristics are detailed.
### Optimization problem formulation
Our optimization problem consists in finding the best possible DNN for a given time series forecasting problem. To do so, we introduce a set \(\Omega\) representing our search space, which contains all considered DNNs. We then consider our time series dataset \(\mathcal{D}\). For any subset \(\mathcal{D}_{0}=(X_{0},Y_{0})\), we define the forecast error \(\ell\) as:
\[\ell\colon\Omega\times\mathcal{D}\rightarrow\mathbb{R},\qquad(f,\mathcal{D}_{0})\mapsto\ell\big{(}f,\mathcal{D}_{0}\big{)}=\ell\big{(}Y_{0},f(X_{0})\big{)}.\]
The explicit formula for \(\ell\) will be given later in the paper. Each element \(f\) from \(\Omega\) is a DNN defined as an operator parameterized by three parameters. First, its architecture \(\alpha\in\mathcal{A}\). \(\mathcal{A}\) is the search space of all considered architectures and will be detailed in Subsection 3.2. Given the DNN architecture \(\alpha\), the DNN is then parameterized by its hyperparameters \(\lambda\in\Lambda(\alpha)\), with \(\Lambda(\alpha)\) the search space of the hyperparameters induced by the architecture \(\alpha\) and defined in Subsection 3.3. Finally, \(\alpha\) and \(\lambda\) generate a set of possible weights \(\Theta(\alpha,\lambda)\), from which the DNN optimal
weights \(\theta\) are found by gradient descent when training the model. The architecture \(\alpha\) and the hyperparameters \(\lambda\) are optimized by our framework.
We consider the multivariate time series forecasting task. Our dataset \(\mathcal{D}=(X,Y)\) is composed of a target variable \(Y=\{\mathbf{y}_{t}\}_{t=1}^{T}\), with \(\mathbf{y}_{t}\in\mathbb{R}^{N}\), and a set of explanatory variables (features) \(X=\{\mathbf{x}_{t}\}_{t=1}^{T}\), with \(\mathbf{x}_{t}\in\mathbb{R}^{F_{1}\times F_{2}}\). Here, \(T\) is the number of time steps, \(N\) is the size of the target at each time step, and \(F_{1}\), \(F_{2}\) are the shapes of the input variable \(X\) at each time step. We choose to represent \(\mathbf{x}_{t}\) by a matrix to extend our framework's scope, but it can equally be defined as a vector by taking \(F_{2}=1\). The framework can be applied to univariate signals by taking \(N=1\). We partition our time indexes into three groups of successive time steps and split \(\mathcal{D}\) accordingly into three datasets: \(\mathcal{D}_{train}\), \(\mathcal{D}_{valid}\) and \(\mathcal{D}_{test}\).
After choosing an architecture \(\alpha\) and a set of hyperparameters \(\lambda\), we build the DNN \(f^{\alpha,\lambda}\) and use \(\mathcal{D}_{train}\) to train \(f^{\alpha,\lambda}\) and optimize its weights \(\theta\) by stochastic gradient descent:
\[\hat{\theta}\in\operatorname*{arg\,min}_{\theta\in\Theta(\alpha,\lambda)} \bigl{(}\ell(f^{\alpha,\lambda}_{\theta},\mathcal{D}_{train})\bigr{)}.\]
The forecast error of the DNN parameterized by \(\hat{\theta}\) on \(\mathcal{D}_{valid}\) is used to assess the performance of the selected \(\alpha\) and \(\lambda\). The best architecture and hyperparameters are optimized by solving:
\[(\hat{\alpha},\hat{\lambda})\in\operatorname*{arg\,min}_{\alpha\in\mathcal{A}}\Bigl{(}\operatorname*{arg\,min}_{\lambda\in\Lambda(\alpha)}\bigl{(}\ell(f^{\alpha,\lambda}_{\hat{\theta}},\mathcal{D}_{valid})\bigr{)}\Bigr{)}.\]
The function \((\alpha,\lambda)\mapsto\ell(f^{\alpha,\lambda}_{\hat{\theta}},\mathcal{D}_{valid})\) corresponds to the objective function of our algorithmic framework. We finally will evaluate the performance of our algorithmic framework by computing the forecast error on \(\mathcal{D}_{test}\) using the DNN with the best architecture, hyperparameters and weights:
\[\ell(f^{\hat{\alpha},\hat{\lambda}}_{\hat{\theta}},\mathcal{D}_{test}).\]
In practice, the second equation optimizing \(\alpha\) and \(\lambda\) can be solved separately or jointly. If we fix \(\lambda\) for each \(\alpha\), the optimization is made only on the architecture and is referred to as Neural Architecture Search (NAS). If \(\alpha\) is fixed, then the optimization is only made on the model hyperparameters and is referred to as HyperParameter Optimization (HPO). Our algorithmic framework allows us to fix \(\alpha\) or \(\lambda\) during parts of the optimization to perform a hierarchical optimization: alternating phases during which only the architecture is optimized with phases during which only the hyperparameters are optimized. In the following, we describe our search space \(\Omega=(\mathcal{A}\times\{\Lambda(\alpha),\alpha\in\mathcal{A}\})\).
### Architecture Search Space
First, we define our architecture search space \(\mathcal{A}\). We propose to model a DNN by a Directed Acyclic Graph (DAG) with a single input and output (Fiore and Devesas Campos, 2013). A DAG \(\Gamma=(\mathcal{V},\mathcal{E})\) is defined by its node (or vertex) set \(\mathcal{V}=\{v_{1},...,v_{n}\}\) and its edge set \(\mathcal{E}\subseteq\{(v_{i},v_{j})|v_{i},v_{j}\in\mathcal{V}\}\). Each node \(v\) represents a DNN layer as defined in Subsection 2.1, such as a convolution, a recurrence, or a matrix product. To eliminate isolated nodes, we impose that each node is connected by a path to the input and the output. The graph acyclicity implies a partial ordering of the nodes. If a path exists from a node \(v_{a}\) to a node \(v_{b}\), then we can define an order relation between them: \(v_{a}<v_{b}\). Acyclicity prevents the existence of a path from \(v_{b}\) to \(v_{a}\). However, this order relation is not total: in graphs where not all nodes are connected, several node orderings may be valid for the same graph. For example, in Figure 2a, the orderings \(v_{1}>v_{2}\) and \(v_{2}>v_{1}\) are both valid.

Figure 2: DNN encoding as a directed acyclic graph (DAG). The elements in blue (crosshatch) are fixed by the framework, the architecture elements from \(\alpha\) are displayed in beige and the hyperparameters \(\lambda\) are in pink (dots).
Hence, a DAG \(\Gamma\) is represented by a sorted list \(\mathcal{L}\), such that \(|\mathcal{L}|=m\), containing the graph nodes, and its adjacency matrix \(M\in\mathbb{R}^{m\times m}\) (Zhang et al., 2019). The matrix \(M\) is built such that: \(M(i,j)=1\Leftrightarrow(v_{i},v_{j})\in\mathcal{E}\). Because of the graph's acyclicity, the matrix is upper triangular with its diagonal filled with zeros. The input node has no incoming connection, and the output node has no outgoing connection, meaning \(\sum_{i=1}^{m}M_{i,1}=0\) and \(\sum_{j=1}^{m}M_{m,j}=0\). Besides, the input is necessarily connected to the first node and the last node to the output for any graph, enforcing \(M_{1,2}=1\) and \(M_{m-1,m}=1\). As isolated nodes do not exist in the graph, we need at least one non-zero value in every row and column, except for the first column and last row. We can express this property as: \(\forall i<m:\sum_{j=i+1}^{m}M_{i,j}>0\) and \(\forall j>1:\sum_{i=1}^{j-1}M_{i,j}>0\). Finally, the partial ordering of the nodes means the encoding is not bijective: several matrices \(M\) may encode the same DAG.
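These constraints can be summarized in a small validity check (a hypothetical helper, not code from the paper):

```python
import numpy as np

def is_valid_dag_matrix(M):
    """Check the adjacency-matrix constraints of Section 3.2."""
    m = M.shape[0]
    ok = np.array_equal(M, np.triu(M, k=1))            # acyclic: strictly upper triangular
    ok &= M[:, 0].sum() == 0 and M[-1, :].sum() == 0   # input has no parent, output no child
    ok &= M[0, 1] == 1 and M[-2, -1] == 1              # forced first and last edges
    ok &= all(M[i, i + 1:].sum() > 0 for i in range(m - 1))  # every node feeds a node
    ok &= all(M[:j, j].sum() > 0 for j in range(1, m))       # every node is fed
    return bool(ok)
```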
To summarize, we have \(\mathcal{A}=\{\Gamma=(\mathcal{V},\mathcal{E})=(\mathcal{L},M)\}\). The graphs \(\Gamma\) are parameterized by their size \(m\), which differs between graphs. As we will see in Section 4.1, the DNN size may vary during the optimization.
### Hyperparameters Search Space
For any fixed architecture \(\alpha\in\mathcal{A}\), let us define the hyperparameters search space induced by \(\alpha\): \(\Lambda(\alpha)\). As mentioned above, the DAG nodes represent the DNN hidden layers. A set of hyperparameters \(\lambda\), also called a graph node, is composed of a combiner, a layer operation and an activation function (see Figure 2c). Each layer operation is associated with a specific set of parameters, like output or hidden dimensions, convolution kernel size or dropout rate. We provide in Appendix A a table with all available layer types and their associated parameters. The hyperparameters search space \(\Lambda(\alpha)\) is made of sets \(\lambda\) composed of a combiner, the layer's parameters and the activation function.
First, we need a combiner, as each node can receive an arbitrary number of input connections. The parents' latent representations should be combined before being fed to the layer operation. Taking inspiration from the Google Brain Team's Evolved Transformer (So et al., 2019), we propose three types of combiners: element-wise addition, element-wise multiplication and concatenation. The input vectors may have different channel numbers, and the combiner needs to level them. This issue is rarely mentioned in the literature, where authors prefer to keep a fixed channel number (Liu et al., 2018). In the general case, for element-wise combiners, the combiner output channel number matches the maximum channel number among the input latent representations, and we apply zero-padding to the smaller inputs. For the concatenation combiner, the output channel number is the sum of the channel numbers of the inputs. Some layer types, for instance the pooling and convolution operators, have kernels, and their computation requires the number of channels of the input vector to be at least the kernel size. In these cases, we also perform zero-padding after the combiner to ensure that we have the minimum number of channels required.
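As an illustration, here is a minimal PyTorch sketch of such a combiner (our own simplification; we assume latent representations of shape (batch, time, channels) and zero-pad on the channel axis):

```python
import torch
import torch.nn.functional as F

def combine(inputs, mode):
    """Combine parent latent representations of shape (batch, time, channels)."""
    if mode == "concat":
        return torch.cat(inputs, dim=-1)          # channel numbers add up
    # Element-wise modes: zero-pad every input to the largest channel count.
    c_max = max(t.shape[-1] for t in inputs)
    padded = [F.pad(t, (0, c_max - t.shape[-1])) for t in inputs]
    out = padded[0]
    for t in padded[1:]:
        out = out + t if mode == "add" else out * t
    return out
```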
When building the DNN, we dimension each layer operation asynchronously. We first compute the layer operation's input shape according to the input vectors and the combiner. After building the operation, we compute its output shape for the next layer. Finally, the node's remaining part is the activation function, which we choose among the large set detailed in Appendix A. To summarize, we define every node as the sequence combiner \(\rightarrow\) layer type \(\rightarrow\) activation function. In our search space \(\Lambda(\alpha)\), the nodes are encoded by arrays containing the combiner name, the layer type name, the value of each layer operation's parameters and, finally, the activation function name. The set \(\mathcal{L}\) mentioned in the previous section, which contains the nodes, is then a variable-length list containing the arrays representing each node.
## 4 Search algorithm
The search space \(\Omega=(\mathcal{A}\times\{\Lambda(\alpha),\alpha\in\mathcal{A}\})\) that we defined in the previous section is a mixed and variable space: it contains integers, floats, and categorical values, and the dimension of its elements, the DNNs, is not fixed. We need to design a search algorithm able to efficiently navigate this search space. While several metaheuristics can solve mixed and variable-size optimization problems (Talbi, 2023), we chose to start with an evolutionary algorithm: for the manipulation of directed acyclic graphs, this metaheuristic was the most intuitive for us, and it has been used in other domains, for example on graphs representing logic circuits (Aguirre and Coello Coello, 2003). The design of other metaheuristics in our search space and their comparison with the evolutionary algorithm are left to future work.
### Evolutionary algorithm design
Evolutionary algorithms are popular metaheuristics well adapted to mixed and variable-space optimization problems (Talbi, 2023). They have been widely used for the automatic design of DNNs (Li et al., 2022). The idea is to evolve a randomly generated population of DAGs so that it converges towards an optimal DNN, i.e., a DNN with a small error on our forecasting task. The designed metaheuristic is based on several search operators: selection, mutation, crossover and replacement. The initial population is randomly generated, and we then evolve it during \(G\) generations. At the beginning of each new generation \(g\), we build the new population starting from the scores obtained by the individuals of the previous generation. We use tournament selection to pick the best individual among randomly drawn sub-groups. Part of the new population comes from the tournament selection, while the remaining individuals are drawn at random; these random individuals ensure that the algorithm does not depend on the initial population. Afterwards, the DAGs composing the new population are transformed using variation operators such as crossover and mutation, described thereafter. The generated individuals are called offspring. After their evaluation, the worst offspring are replaced by the best individuals from the previous generation. The best individuals are therefore kept in memory and used for evolution during the entire process. The replacement rate should stay small to prevent a premature convergence of the algorithm towards a local optimum. The complete framework is shown in Figure 3.
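A schematic version of this loop may look as follows (a sketch of ours, not the paper's code; `random_individual`, `crossover` and `mutate` are placeholders for the operators of Sections 4.2 and 4.3, and the selection and replacement fractions are our assumptions):

```python
import random

def evolve(population, fitness, generations, k=3, rand_frac=0.2, keep_frac=0.1):
    """Schematic evolutionary loop: tournament selection, variation, elitism.
    Assumes an even population size for the pairwise crossover."""
    scores = [fitness(ind) for ind in population]
    n = len(population)
    for _ in range(generations):
        # Tournament selection plus a fraction of fresh random individuals.
        parents = [min(random.sample(list(zip(population, scores)), k),
                       key=lambda p: p[1])[0]
                   for _ in range(n - int(n * rand_frac))]
        parents += [random_individual() for _ in range(n - len(parents))]
        # Variation: crossover on consecutive pairs, then mutation.
        offspring = []
        for a, b in zip(parents[0::2], parents[1::2]):
            offspring.extend(crossover(a, b))
        offspring = [mutate(ind) for ind in offspring]
        new_scores = [fitness(ind) for ind in offspring]
        # Elitist replacement: worst offspring replaced by best previous ones.
        m = max(1, int(len(offspring) * keep_frac))
        elite = sorted(zip(population, scores), key=lambda p: p[1])[:m]
        worst = sorted(range(len(offspring)), key=lambda i: new_scores[i])[-m:]
        for idx, (ind, s) in zip(worst, elite):
            offspring[idx], new_scores[idx] = ind, s
        population, scores = offspring, new_scores
    return min(zip(population, scores), key=lambda p: p[1])
```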
To design the evolution operators of the algorithm, we split them into two categories: hyperparameter-specific operators and architecture operators. The idea is to allow a sequential or joint optimization of the hyperparameters and the architecture. The layer types involved do not have the same hyperparameters; thus, drawing a new layer means modifying all its parameters, and one can lose the optimization made on the previous layer type. Using sequential optimization, the algorithm can first find well-performing architectures and layer types during the architecture search and then fine-tune the found DNNs during the hyperparameter search.
### Architecture evolution
Figure 3: Evolutionary algorithm framework.

In this section, we introduce the architecture-specific search operators. By architecture, we mean the search space \(\mathcal{A}\) defined above: the nodes' operations and the edges between them. The mutation operator is made of several simple operations inspired by the transformations used to compute the Graph Edit Distance (Abu-Aisheh et al., 2015): insertion, deletion and substitution of both nodes and edges. Given a graph \(\Gamma=(\mathcal{L},M)\), the mutation operator draws a set \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\) and applies a transformation to each node of \(\mathcal{L}^{\prime}\). Let \(v_{i}\in\mathcal{L}^{\prime}\) be the node to be transformed (a code sketch of these operations follows the list):
* **Node insertion:** we draw a new node with its combiner, operation and activation function. We insert the new node in our graph at the position \(i+1\). We draw its incoming and outgoing edges by verifying that we do not generate an isolated node.
* **Node deletion:** we delete the node \(v_{i}\). In the case where it generates other isolated nodes, we draw new edges.
* **Parents modification:** we modify the incoming edges for \(v_{i}\) and make sure we always have at least one.
* **Children modification:** we modify the outgoing edges for \(v_{i}\) and make sure we always have at least one.
* **Node modification:** we draw the new content of \(v_{i}\), the new combiner, the operation and/or the activation function.
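Taken together, these operations might be sketched as follows (our own illustration; `draw_random_node`, `insert_node`, `delete_node` and the edge-redraw helpers are hypothetical names for the steps described above):

```python
import copy
import random

def mutate(graph):
    """Apply one randomly chosen graph-edit transformation to a drawn node."""
    L, M = copy.deepcopy(graph)          # node list and adjacency matrix
    for i in random.sample(range(1, len(L) - 1), k=1):   # |L'| = 1 for brevity
        op = random.choice(["insert", "delete", "parents", "children", "content"])
        if op == "insert":
            insert_node(L, M, i + 1, draw_random_node())  # then re-wire edges
        elif op == "delete":
            delete_node(L, M, i)          # repair any isolated node afterwards
        elif op == "parents":
            redraw_incoming_edges(M, i)   # keep at least one parent
        elif op == "children":
            redraw_outgoing_edges(M, i)   # keep at least one child
        else:
            L[i] = draw_random_node()     # new combiner/operation/activation
    return L, M
```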
The idea of the crossover is to inherit patterns belonging to both parents. The primary crossover operator applies to two arrays and swaps two subparts of those arrays. We draw two subgraphs from our parents, which can be of different sizes, and we swap them. This transformation has an impact on the edges; to reconstruct the offspring, we preserve as many of the original edges from the parents and the swapped subgraphs as possible. An illustration of the crossover can be found in Figure 4.
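Below is a sketch of this crossover on the \((\mathcal{L},M)\) encoding of Section 3 (`repair_edges` stands for the edge-reconstruction step described above; it is our placeholder, not a function from the paper):

```python
import random

def crossover(parent_a, parent_b):
    """Swap two (possibly different-sized) subgraphs between two parents."""
    (La, Ma), (Lb, Mb) = parent_a, parent_b
    ia, ja = sorted(random.sample(range(1, len(La)), 2))
    ib, jb = sorted(random.sample(range(1, len(Lb)), 2))
    child_a = La[:ia] + Lb[ib:jb] + La[ja:]   # swapped subgraphs may differ in size
    child_b = Lb[:ib] + La[ia:ja] + Lb[jb:]
    # Rebuild adjacency matrices, preserving parental edges where possible.
    return ((child_a, repair_edges(child_a, Ma, Mb)),
            (child_b, repair_edges(child_b, Mb, Ma)))
```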
### Hyperparameters evolution
Figure 4: Crossover operator illustration.

One of the architecture mutations consists in disturbing the node content. In this case, the node content is modified, including the operation, and a new set of hyperparameters is then drawn. To refine this search, we defined mutations specific to the search space \(\Lambda(\alpha)\). In the hyperparameter case, the edges and the number of nodes are not affected. As for the architecture-specific mutation, the operator draws a set \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\) and applies a transformation to each node of \(\mathcal{L}^{\prime}\). For each node \(v_{i}\) of \(\mathcal{L}^{\prime}\), we draw \(h_{i}\) hyperparameters, each of which is modified to a neighbouring value (a sketch follows the list below). The hyperparameters in our search space belong to three categories:
* **Categorical values:** the new value is randomly drawn among the set of possibilities excluding the current value. For instance, the activation functions, combiners, and recurrence types (LSTM/GRU) belong to this category.
* **Integers:** we select a neighbour inside a discrete interval around the current value. This applies, for instance, to the convolution kernel size and the output dimension.
* **Floats:** we select a neighbour inside a continuous interval around the current value. Such a neighbourhood is defined, for instance, for the dropout rate.
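These three neighbourhoods might be implemented as follows (a minimal sketch; the `spec` dictionary describing each hyperparameter's type, range and step is our assumption):

```python
import random

def perturb(value, spec):
    """Draw a neighbouring value for one hyperparameter from its spec."""
    if spec["type"] == "categorical":       # activation, combiner, LSTM/GRU, ...
        return random.choice([c for c in spec["choices"] if c != value])
    lo, hi = spec["range"]
    if spec["type"] == "int":               # kernel size, output dimension, ...
        return min(hi, max(lo, value + random.randint(-spec["step"], spec["step"])))
    return min(hi, max(lo, value + random.uniform(-spec["step"], spec["step"])))
```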
## 5 Experimental study
### Experimental protocol
We evaluated our optimization algorithm framework on the established benchmark of Monash Time Series Forecasting Repository (Godahewa et al., 2021). For these experiments, we configured our algorithm to have a population of 40 individuals and a total of 100 generations. We investigated a sequential optimization of the architecture and the hyperparameters. We alternate at a certain generation frequency between two scopes: search operators applied to the architecture \(\alpha\) with \(\lambda\) fixed, and search operators applied to the hyperparameters \(\lambda\) with \(\alpha\) fixed. Thus, the architecture-centred sections of the optimization will diversify the population DNNs, while the hyperparameter-centred parts will perform a finer optimization of the obtained DNNs. We ran our experiments on 5 cluster nodes, each equipped with 4 Tesla V100 SXM2 32GB GPUs, using Pytorch 1.11.0 and Cuda 10.2. The experiments were all finished in less than 72 hours.
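The alternation between the two scopes can be expressed as a simple schedule; the period below is an assumption, since the text only states that the scopes alternate at a certain generation frequency:

```python
def scope(generation: int, period: int = 10) -> str:
    """Which family of search operators to apply at this generation."""
    return "architecture" if (generation // period) % 2 == 0 else "hyperparameters"
```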
The Monash time series forecasting archive is a benchmark containing more than 40 datasets and the results of 13 forecasting models on each of these prediction tasks (Godahewa et al., 2021). The time series are of different kinds and have variable distributions; more information on each dataset from the archive is available in Appendix B. The benchmark allows us to test the generalization and robustness abilities of our framework.
Figure 5: Meta model for Monash time series datasets.
The benchmark's authors accompany their datasets with a GitHub repository that makes it straightforward to compare different statistical, machine learning and deep learning models on the forecasting of different time series. We followed its indications to integrate new deep learning models and only changed the models' core. We kept the data preparation functions, data loaders, training parameters (number of epochs, batch size), as well as the training and evaluation functions, to obtain the most accurate comparison possible. For each dataset, we also reused the repository's configurations, notably the prediction horizons and lags. Figure 5 represents the meta-model used to replace the repository's models. The Multi-Layer Perceptron at the end of the model is used to recover the time series output dimension, as the number of channels may vary within the Directed Acyclic Graph.
We compared our results with the benchmark models, which comprise statistical, machine learning and deep learning models. The metric used to evaluate the models' performance, our forecast error \(\ell\), is the Mean Absolute Scaled Error (MASE): the mean absolute error divided by the mean absolute difference between consecutive observations (Hyndman and Koehler, 2006). Given a time series \(Y=(\mathbf{y}_{1},...,\mathbf{y}_{n})\) and the predictions \(\hat{Y}=(\mathbf{\hat{y}}_{1},...,\mathbf{\hat{y}}_{n})\), the MASE can be defined as:
\[\mathrm{MASE}(Y,\hat{Y})=\frac{n-1}{n}\times\frac{\sum_{i=1}^{n}|\mathbf{y}_{i}-\mathbf{\hat{y}}_{i}|}{\sum_{i=2}^{n}|\mathbf{y}_{i}-\mathbf{y}_{i-1}|}.\]
In our case, for \(f\in\Omega\) and \(\mathcal{D}_{0}=(X_{0},Y_{0})\subseteq\mathcal{D}\), we have \(\ell(Y_{0},f(X_{0}))=\mathrm{MASE}\big{(}Y_{0},f(X_{0})\big{)}\).
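For reference, a direct NumPy implementation of this error measure (for a univariate series) reads:

```python
import numpy as np

def mase(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Mean Absolute Scaled Error, as in the formula above."""
    n = len(y)
    num = np.abs(y - y_hat).sum()
    den = np.abs(np.diff(y)).sum()   # sum_{i=2}^{n} |y_i - y_{i-1}|
    return (n - 1) / n * num / den
```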
The results are reported in Table 1. Our model outperformed the best baseline on 24 out of 40 datasets; on the remaining 16 datasets, our framework obtained errors close to the best baseline. It is worth noting that our model outperformed the machine learning and deep learning models from the benchmark on 34 out of 40 datasets and was among the top 3 models for 34 out of 40 datasets. Figure 6 shows the convergence of the evolutionary algorithm. The right-hand panels display the mean loss of each generation and the loss of the best individual. On the mean-loss curves, one can identify the phases where the architecture is optimized, with more variability, and the phases where the hyperparameters are optimized, with less variability. The left-hand panels show heatmaps with the loss of each individual at each generation. The best individual is not over-represented in the population before the optimization ends.
### Best models analysis
In the AutoDL literature, few efforts are usually made to analyze the generated DNNs. In Shu and Cai (2019), the authors established that architectures with wide and shallow cell structures are favoured by NAS algorithms and that they suffer from poor generalization performance. It is legitimate to question the efficiency of our framework, and some of these questions can be answered through a light analysis of the generated models. In this section, we address several questions about the framework's outputs. To do so, we defined the following structural indicators and computed them in Table 2 for the best model of each dataset from Godahewa et al. (2021):
* _Nodes_: it represents the number of nodes (i.e. operations) in the graph.
* _Width_: it represents the network width, defined as the maximum number of incoming or outgoing edges over all nodes within the graph.
* _Depth_: it defines the network depth which corresponds to the size of the longest path in the graph.
* _Din_: it is the maximum channel dimension, relative to the number of input and output channels (ratio). It indicates how complex the latent spaces from the neural network might become, compared to the dataset complexity.
* _Edges_: it represents the number of edges, relative to the number of nodes in the graph. It indicates how complex the graph can be and how sparse the adjacency matrix is.
* The last 7 indicators correspond to the number of occurrences of each layer type within the DNN.
_Does our framework always converge to complex models, or is it able to find simple DNNs?_
From Table 2 and Figure 7(a), one can see that we obtain multiple simple graphs with only two layers. Knowing that the last feed-forward layer is enforced by our meta-model (see Figure 5), the DAG itself is only composed of one layer. Another indicator of this simplicity is the percentage of feed-forward layers found in the best models: according to Table 2, 41% of the layers are feed-forward, although our search space offers more complex layers such as convolution, recurrence or attention, which are picked less frequently. This shows that, even without regularization penalties, our algorithmic framework does not systematically produce over-complicated models.
_Does our algorithmic framework always converge to similar architectures for different datasets?_
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Name** & \multicolumn{3}{c|}{**Models**} \\
**Dataset** & _Stat. models_ & _ML/DL models_ & _Our framework_ \\ \hline \hline
Aus. elec & 1.174 & **0.705** & 0.893 \\ \hline
births & 1.453 & 1.537 & **1.233** \\ \hline
Bitcoin & 2.718 & **2.664** & 4.432 \\ \hline
Carparts & 0.897 & 0.746 & **0.744** \\ \hline
Covid deaths & 5.326 & 5.459 & **4.535** \\ \hline
Dominick & 0.582 & 0.705 & **0.510** \\ \hline
Elec. hourly & 3.690 & **1.606** & 1.652 \\ \hline
Elec. weekly & 1.174 & 0.705 & **0.652** \\ \hline
Fred MD & **0.468** & 0.601 & 0.489 \\ \hline
Hospital & 0.761 & 0.769 & **0.751** \\ \hline
KDD & 1.394 & 1.185 & **1.161** \\ \hline
Kaggle weekly & 0.622 & 0.628 & **0.561** \\ \hline
M1 monthly & **1.074** & 1.123 & 1.081 \\ \hline
M1 quart. & **1.658** & 1.700 & 1.683 \\ \hline
M1 yearly & **3.499** & 4.355 & 3.732 \\ \hline
M3 monthly & **0.861** & 0.934 & 0.926 \\ \hline
M3 other & **1.814** & 2.127 & 2.227 \\ \hline
M3 quart. & 1.174 & 1.182 & **1.099** \\ \hline
M3 yearly & **2.774** & 2.961 & 2.800 \\ \hline
M4 daily & 1.153 & 1.141 & **1.066** \\ \hline
M4 hourly & 2.663 & 1.662 & **1.256** \\ \hline
M4 monthly & **0.948** & 1.026 & 0.993 \\ \hline
M4 quart. & **1.161** & 1.239 & 1.198 \\ \hline
M4 weekly & 0.504 & 0.453 & **0.430** \\ \hline
NN5 daily & **0.858** & 0.916 & 0.898 \\ \hline
NN5 weekly & 0.872 & 0.808 & **0.739** \\ \hline
Pedestrians & 0.957 & 0.247 & **0.222** \\ \hline
Rideshare & 1.530 & 2.908 & **1.410** \\ \hline
Saugeen & 1.425 & 1.411 & **1.296** \\ \hline
Solar 10mn & **1.034** & 1.450 & 1.426 \\ \hline
Solar weekly & 0.848 & 0.574 & **0.511** \\ \hline
Sunspot & 0.067 & 0.003 & **0.002** \\ \hline
Temp. rain & 1.174 & 0.687 & **0.686** \\ \hline
Tourism monthly & 1.526 & **1.409** & 1.453 \\ \hline
Tourism quart. & 1.592 & 1.475 & **1.469** \\ \hline
Tourism yearly & 3.015 & 2.977 & **2.690** \\ \hline
Traffic hourly & 1.922 & 0.821 & **0.729** \\ \hline
Traffic weekly & 1.116 & 1.094 & **1.030** \\ \hline
Vehicle trips & 1.224 & **1.176** & 1.685 \\ \hline
Weather & 0.677 & 0.631 & **0.614** \\ \hline \hline
**Total bests** & 11 & 5 & 24 \\ \hline
\end{tabular}
\end{table}
Table 1: Mean MASE for each dataset; we report only the best MASE among the statistical models, the best MASE among the machine learning/deep learning models, and our optimization framework's result. Statistical models: SES, Theta, TBATS, ETS, ARIMA. Machine learning/deep learning models: PR, CatBoost, FFNN, DeepAR, N-Beats, WaveNet, Transformer, Informer.
Figure 6: Evolutionary algorithm convergence. Left: heatmap with the loss for each individual for every generation. Right: population mean loss and best individual’s loss through generations. Darker grey backgrounds represent generations during which the architecture is optimized, and lighter grey backgrounds represent generations during which the hyperparameters are optimized.
Meanwhile, the framework is also able to find complex models, as in Figure 7(b), which partially answers our question. The indicators in Table 2 show that we found models with various numbers of nodes, from 2 to 10, and with diverse edge densities. The best model for the electricity hourly dataset (see Figure 7(d)) has an average of 1.5 incoming or outgoing edges per node, whereas the best model for the electricity weekly dataset has an average of 3 incoming or outgoing edges per node. This diversity holds even among well-performing graphs for the same dataset. In Figure 8, we display two different graphs with similar (and good) performance on the Dominick dataset. The two graphs do not have the same number of nodes and have different architectures, the first one being a deep sparse graph while the other is wider with many edges.
_What is the diversity of the layer types within the best models?_
The graphs from Figure 8 also have quite different layer types. While both are mainly based on identity, pooling and feed-forward operations, the second graph introduces convolution and dropout layers. Overall, fully-connected layers dominate all layer types, representing 41% of the chosen layers in the best models (see Table 2). The identity layer is also frequently picked. Finally, it is interesting to note that all other layer types are selected at roughly the same frequency.
_Are the best models still "deep" neural networks, or are they wide and shallow as stated in Shu and Cai (2019)?_
The observations from Shu and Cai (2019) also apply to our results: our models are often almost as wide as they are deep. This observation needs further investigation, as one of the reasons mentioned in that paper, namely the premature evaluation of architectures before full convergence, does not apply here: our models are smaller than the ones studied in the paper and thus converge faster, even when they are a bit deeper.
A last remark concerns the latent spaces generated by our models. With few exceptions, our models tend to generate latent spaces larger than the number of input and output channels. On average, the maximum size of the latent representation within a network is 17 times larger than the number of input and/or output channels.
### Nondeterminism and instability of DNNs
An often overlooked robustness challenge in DNN optimization is the uncertainty of their performance (Summers and Dinneen, 2021). A single model with a fixed architecture and set of hyperparameters can produce a large variety of results on a dataset. Figure 9 shows the results on two datasets: M3 Quarterly and Electricity Weekly. For both datasets, we selected the best models found by our optimization and drew 80 seeds governing all unstable and nondeterministic aspects of our models. We trained these models and plotted the resulting MASE in Figure 9. On M3 Quarterly, the MASE reached values two times larger than our best result; on Electricity Weekly, it went up to five times worse. To overcome this problem, we represented the parametrization of the stochastic aspects of our models as a hyperparameter, which we added to our search space. Despite its impact on performance, we have not seen any work on NAS, HPO or AutoML trying to optimize the seed of DNNs. The plots in Figure 9 show that this optimization was effective, as no other seed gave better results than the one picked by our algorithmic framework.
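Concretely, treating the seed as a hyperparameter amounts to fixing every stochastic aspect of training from a single integer. A minimal sketch for a PyTorch pipeline (the experiments use PyTorch 1.11.0; the exact set of flags below is our assumption):

```python
import random
import numpy as np
import torch

def apply_seed(seed: int) -> None:
    """Fix the stochastic aspects of training so the seed can be optimized."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # trade speed for reproducibility
    torch.backends.cudnn.benchmark = False
```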
## 6 Conclusion and Future Work
In this work, we introduced a novel algorithmic framework for the joint optimization of DNN architectures and hyperparameters. We first introduced a search space based on Directed Acyclic Graphs, which is highly flexible for the architecture and also allows for the fine-tuning of hyperparameters. Based on this search space, we designed search operators compatible with any metaheuristic able to handle a mixed and variable-size search space. The algorithmic framework is generic and proved efficient on the time series forecasting task using an evolutionary algorithm.
Further work will be dedicated to investigating other metaheuristics (e.g., swarm intelligence, simulated annealing) to evolve the DAGs. Reformulating the studied optimization problems to include multiple objectives (e.g., DNN complexity) and robustness also represents an important perspective. To further improve performance on time series forecasting tasks, we can develop a more complete pipeline including a feature selection strategy.
Further research could test our framework on different learning tasks. Concerning the forecasting task, the output models show that combining different state-of-the-art DNN operations within a single model is an interesting lead for improving performance. Such models are quite novel within the deep learning community, and studies of their behavior and performance could be carried out.
Figure 7: Best DNNs output by our algorithmic framework.
## Acknowledgments
This work has been funded by Electricite de France (EDF). The supercomputers used to run the experiments belong to EDF.
Figure 8: Two different models having similar good performance on Dominick dataset.
Figure 9: MASE histogram of the best model performances with multiple seeds for two datasets.
## Appendix A Available operations and hyperparameters
Activation functions, \(\forall x\in\mathbb{R}^{D}\):
* Id: \(\mathrm{id}(x)=x\)
* Sigmoid: \(\mathrm{sigmoid}(x)=\frac{1}{1+e^{-x}}\)
* Swish: \(\mathrm{swish}(x)=x\times\mathrm{sigmoid}(\beta x)=\frac{x}{1+e^{-\beta x}}\)
* Relu: \(\mathrm{relu}(x)=\max(0,x)\)
* Leaky-relu: \(\mathrm{leakyRelu}(x)=\mathrm{relu}(x)+\alpha\times\min(0,x)\), in our case: \(\alpha=10^{-2}\)
* Elu: \(\mathrm{elu}(x)=\mathrm{relu}(x)+\alpha\times\min(0,e^{x}-1)\)
* Gelu: \(\mathrm{gelu}(x)=x\mathbb{P}(X\leq x)\approx 0.5x(1+\tanh[\sqrt{2/\pi}(x+0.044715x^{3})])\)
* Softmax: \(\sigma(\mathbf{x})_{j}=\frac{e^{x_{j}}}{\sum_{d=1}^{D}e^{x_{d}}}\ \forall j\in\{1,\ldots,D\}\)
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Operation** & \multicolumn{2}{c|}{**Optimized hyperparameters**} \\ \hline
Identity & \multicolumn{2}{c|}{--} \\ \hline
Fully-Connected (MLP) & Output shape & Integer \\ \hline
\multirow{2}{*}{Attention} & Initialization type & [convolution, random] \\
 & Heads number & Integer \\ \hline
1D Convolution & Kernel size & Integer \\ \hline
\multirow{2}{*}{Recurrence} & Output shape & Integer \\
 & Recurrence type & [LSTM, GRU, RNN] \\ \hline
\multirow{2}{*}{Pooling} & Pooling size & Integer \\
 & Pooling type & [Max, Average] \\ \hline
Dropout & Dropout rate & Float \\ \hline
\end{tabular}
\end{table}
Table 3: Operations available in our search space, used for the Monash time series archive datasets, and their hyperparameters that can be optimized.
## Appendix B Monash datasets presentation |
2304.10899 | Electromechanical memcapacitive neurons for energy-efficient spiking
neural networks | In this article, we introduce a new nanoscale electromechanical device -- a
leaky memcapacitor -- and show that it may be useful for the hardware
implementation of spiking neurons. The leaky memcapacitor is a movable-plate
capacitor that becomes quite conductive when the plates come close to each
other. The equivalent circuit of the leaky memcapacitor involves a
memcapacitive and memristive system connected in parallel. In the leaky
memcapacitor, the resistance and capacitance depend on the same internal state
variable, which is the displacement of the movable plate. We have performed a
comprehensive analysis showing that several spiking types observed in
biological neurons can be implemented with the leaky memcapacitor. Significant
attention is paid to the dynamic properties of the model. As in leaky
memcapacitors the capacitive and leaking resistive functionalities are
implemented naturally within the same device structure, their use will simplify
the creation of spiking neural networks. | Zixi Zhang, Yuriy V. Pershin, Ivar Martin | 2023-04-21T11:34:58Z | http://arxiv.org/abs/2304.10899v1 | # Electromechanical memcapacitive neurons for energy-efficient spiking neural networks
###### Abstract
In this article, we introduce a new nanoscale electromechanical device - a leaky memcapacitor - and show that it may be useful for the hardware implementation of spiking neurons. The leaky memcapacitor is a movable-plate capacitor that becomes quite conductive when the plates come close to each other. The equivalent circuit of the leaky memcapacitor involves a memcapacitive and memristive system connected in parallel. In the leaky memcapacitor, the resistance and capacitance depend on the same internal state variable, which is the displacement of the movable plate. We have performed a comprehensive analysis showing that several spiking types observed in biological neurons can be implemented with the leaky memcapacitor. Significant attention is paid to the dynamic properties of the model. As in leaky memcapacitors the capacitive and leaking resistive functionalities are implemented naturally within the same device structure, their use will simplify the creation of spiking neural networks.
## I Introduction
Information processing in biological systems relies on a complex network of interacting neurons. Each neuron, when subjected to a stimulus, responds by outputting a signal that typically has the form of anharmonic spikes. A variety of models of the spiking phenomenon have been proposed, most famously the Hodgkin-Huxley (HH) model, which successfully captured many observed features of spiking in neuronal membranes [14; 17; 18; 19; 20; 21; 24]. This model prompted the development of bio-chemically based information processing models in the subsequent decades [24; 5; 40; 27].
The HH model attempts to accurately capture the properties of ion channels with memory and is consequently quite complex. A number of simplified models, like the integrate-and-fire model, have appeared that can achieve a variety of spiking behaviors in response to different types of stimulation [14; 12; 14; 23; 24; 28; 4]. Despite being distinct from HH, one may expect qualitative features such as instabilities/bifurcations, limit cycles, and synchronization among neurons to be quite universal and robust due to the general underlying principles of dynamical systems [39]. The simplified models, however, may have the important advantage of being easier to implement artificially using existing materials and devices. This leads to an exciting possibility of biologically-inspired _neuromorphic_ information processing systems implemented in the solid state.
In the above context, the class of memory circuit elements [11] becomes increasingly important because of their capacity to store and process information on the same physical platform [10]. Memory circuit elements (in pure form) are resistors, capacitors, and inductors with memory whose response is defined by the equations
\[y(t)=g\left(\mathbf{x},u\right)u(t), \tag{1}\]
\[\dot{\mathbf{x}}=\mathbf{f}\left(\mathbf{x},u\right), \tag{2}\]
where \(y(t)\) and \(u(t)\) are any two complementary circuit variables (i.e., current, charge, voltage, or flux), \(g\left(\mathbf{x},u\right)\) is a generalized response, \(\mathbf{x}\) is a set of \(n\) state variables describing the internal state of the device, and \(\mathbf{f}\left(\mathbf{x},u\right)\) is a continuous \(n\)-dimensional vector function. Depending on the choice of the complementary circuit variables, Eqs. (1) and (2) are used to define memristive, memcapacitive, or meminductive systems [11]. Examples of physical realizations of memory circuit elements can be found in the review paper [35].
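As a minimal illustration of Eqs. (1) and (2), a generic memory element can be simulated by co-integrating the state equation with the response (an explicit-Euler sketch of ours, not a device model from the paper):

```python
import numpy as np

def simulate_memory_element(f, g, u, dt, x0):
    """Explicit-Euler integration of the generic element of Eqs. (1)-(2):
    y(t) = g(x, u) u(t), dx/dt = f(x, u), for a sampled input waveform u."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = g(x, un) * un
        x = x + dt * np.atleast_1d(np.asarray(f(x, un)))
    return y
```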
For more than a decade, significant attention has been paid to the application of memristive systems in neuromorphic computing. Indeed, the memristive systems share several common characteristics with biological synapses such as
the two-terminal structure, adaptivity, and high integration density. One of the first works in this area was the demonstration of associative memory with memristive neural networks by one of us [34]. Interestingly, the relevance of the memristive equations to the HH model was established already in 1976 by Chua and Kang [7; 8]. Specifically, they proposed that "the potassium channel of the Hodgkin-Huxley model should be identified as a first-order time-invariant voltage-controlled memristive one-port and the sodium channel should be identified as a second-order time-invariant voltage-controlled memristive one-port" [8]. Even though most of the existing research focuses on purely electronic schemes, there are also models that rely on the mechanical realization of memory and spikes [6; 13; 15; 22; 25].
Neuromorphic applications of memcapacitive systems have received much less attention. This may be explained by the fact that in general memcapacitive devices [30; 31; 32; 33] have been much less studied (in comparison to the memristive ones). At the same time, the reactive nature of the capacitive response is very promising for low-power computing applications as at equilibrium the memcapacitive devices do not consume any power. Thus, currently, the application of memcapacitors in neuromorphic circuits [36; 38] is a narrow but highly promising research field. In fact, the possibility of energy-efficient neuromorphic computing with solid-state memcapacitive structures has been demonstrated recently [9].
The mechanical aspects of biological neurons are also becoming increasingly recognized as important in neuroscience. They play a role in several physiological processes, including the generation of action potentials [29]. Mechanical changes in the cell membrane can influence the opening and closing of ion channels, which can impact the electrical signaling of the neuron. The soliton model combines the mechanical and electrical factors of signal propagation through axons and has been able to account for some experimental effects in anesthesiology beyond the Hodgkin-Huxley model [2; 3; 16; 26].
In this work, in part inspired by biological neurons, we propose a simple neuromorphic electromechanical model based on a leaky memcapacitor that is capable of achieving the periodic generation of spikes under constant stimulation. The presence of additional --mechanical-- degrees of freedom makes it easier to realize neuromorphic behaviors in artificial electromechanical systems. Applying the methods of dynamical systems, we explore the qualitative features of this model in terms of its fixed points, bifurcations, and limit cycles under DC stimulation. In addition, we demonstrate that the system is capable of complex dynamical adaptations, such as synchronization with periodic external drives, spiking frequency drift, bursting, and other adaptations (for some behaviors, an additional memory circuit element is required).
## II The model
In this section, we introduce the leaky memcapacitor, its model, and the circuit we use to simulate spiking neurons. The circuit, the physical diagram of the leaky memcapacitor, and its equivalent circuit are shown in Fig. 1.
The central element of our spiking neuron is a _leaky memcapacitor_ - a capacitor with a plate that moves in response to the force exerted by the internal electric field and restoring force of the spring (the spring constant is \(k\)). In the absence of charges on the plates, \(q=0\), the distance between two plates is \(d\). It is assumed that the displacement of the top plate, \(x\), is positive for the displacement towards the bottom plate, see Fig. 1(a). Thus, the distance between the plates is \(d-x\).
We assume that the capacitor is _leaky_: there is a finite resistance \(R(x)\), which depends on the distance between the plates, dropping rapidly in the 'contact' region, \(x>x_{c}\). The leaky memcapacitor has _memory_ (which justifies the term memcapacitor) because the current position of the moving plate (and hence the capacitance itself) depends on the prior history of the charge on the capacitor, which in turn depends on the history of the applied voltage. It is this history dependence, the memory, that is responsible for the emergence of the complex behaviors that the simple circuit in Fig. 1(a) exhibits.

Figure 1: (a) An electromechanical leaky memcapacitor connected to a voltage source through a resistor. (b) The equivalent electronic circuit of the leaky memcapacitor: a memcapacitive system, \(\mathrm{C_{M}}\), and memristive system, \(\mathrm{R_{M}}\), connected in parallel. Both systems depend on the same internal state variable \(x\).
The main operating principle of this device is straightforward. Starting from the equilibrium position at \(x=0\), upon turning on voltage \(V\), the capacitor begins to charge up. This leads to the attraction between its plates, which brings them closer together. For large enough applied voltage, the top plate reaches the contact region, the capacitor discharges, and the plate recoils back from the contact region. Note that the process of charging is limited by the resistor with the resistance \(r\) (Fig. 1(a)).
We found that the proximity-induced attraction between the two surfaces (plates) helps prevent the plates from settling into a new static equilibrium position in the contact region. We model this part of the interaction with a Lennard-Jones-like potential. The short-range attraction between the plates allows for a more effective discharge process, leading to a periodic approach of the top plate to the bottom plate followed by recoil. The exact form of \(R(x)\) and the form of the potential, to a certain extent, do not matter. In the biological context, our memcapacitor may represent two lipid monolayers forming a cell membrane: as the membrane swells or thins, the resistance of the membrane (via ion channels) is affected, as modeled by \(R(x)\).
In our modeling of the membrane dynamics, we neglect the inertial mass (in a biological setting, this is justified because membranes reside in an aqueous environment of considerable viscosity). This overdamped regime makes it nontrivial to obtain oscillatory behavior, and new types of mechanisms are required to produce the periodic spiking that we show to be possible.
Eqs. (1) and (2) provide the general framework for the description of the leaky memcapacitor. For the capacitive and resistive responses, Eq. (1) is written as
\[q=\frac{\epsilon A}{d-x}V_{C}\equiv C(x)\cdot V_{C}, \tag{3}\]
\[V_{C}=\left[R_{m}\cdot\left(\frac{1}{\pi}\mathrm{arctan}\,\beta\left(x_{c}-x\right)+0.5\right)+\rho_{0}\frac{d-x}{A}\right]I_{M}\equiv R(x)\cdot I_{M}, \tag{4}\]
where \(q\) is the memcapacitor charge, \(V_{C}\) is the voltage across the leaky memcapacitor, \(I_{M}\) is the leakage current, \(\epsilon\) is the permittivity, \(A\) is the plate area, and \(R_{m}\), \(\beta\), \(\rho_{0}\) are the parameters defining the memristance. In the normal regime, \(R(x)\approx R_{m}\), while in the contact regime, \(R(x)\) is mainly determined by the \(\rho_{0}\) term; \(\beta\) describes the rate of change of the \(R_{m}\) component of \(R(x)\) in the vicinity of \(x_{c}\).
The leaky memcapacitor is described by a single internal state variable \(x\), which is the displacement of the top plate from its equilibrium position (at \(q=0\)) in the downward direction (see Fig. 1(a)). Its dynamics (corresponding to Eq. (2)) is represented by
\[\gamma\dot{x}=\frac{q^{2}}{\epsilon A}-\frac{\mathrm{d}U(x)}{\mathrm{d}x}, \tag{5}\]
Figure 2: (a) Resistance as a function of the displacement of the top plate (Eq. (4)). (b) Potential energy as a function of the displacement of the top membrane (Eq. (6)). The insets present a zoomed-in contact region. The dashed line refers to \(x_{c}\).
where \(\gamma\) is the dissipation coefficient, and the potential \(U(x)\) is chosen as
\[U(x)=\frac{1}{2}kx^{2}+4\epsilon_{l}\left[\left(\frac{\sigma}{d-x}\right)^{12}-\left(\frac{\sigma}{d-x}\right)^{6}\right]. \tag{6}\]
Here, the first term is the spring potential energy, while the second one is the Lennard-Jones-like potential that we use to describe the contact interaction between the plates. In Eq. (6), \(k\) is the spring constant, \(\epsilon_{l}\) is the depth of the Lennard-Jones potential well, and \(\sigma\) is the distance at which the Lennard-Jones potential energy is zero.
In the following, we measure distances in the units of \(\sigma\), resistances in the units of \(R_{m}\), \(U\) in the units of \(\epsilon_{l}\), \(k\) in the units of \(\epsilon_{l}/\sigma^{2}\), time in the units of \(R_{m}A\epsilon/\sigma\), charge in the units of \(\sqrt{\epsilon\epsilon_{l}A/\sigma}\), \(\gamma\) in the units of \(\epsilon\epsilon_{l}AR_{m}/\sigma^{3}\), voltage in the units of \(\sigma\epsilon_{l}/\epsilon A\), and current in the units of \(\sqrt{\sigma\epsilon_{l}/\epsilon A}/R_{m}\). For the dimensionless variables and parameters thus defined, we keep the original notation to minimize clutter in the text.
The parameters used in our simulations are given in Table 1. Fig. 2(a) shows \(R(x)\) as defined by Eq. (4); Eq. (6) is presented in Fig. 2(b).
Note that qualitatively similar results (to the ones presented in this paper) may be obtained using a different choice of parameters and functional dependencies. In particular, we have verified that the results remain nearly the same when the constant resistance for \(x\lesssim 6\) in Fig. 2(a) is replaced with a resistance linearly dependent on \(x\) (for more details, see the Supplemental Information (SI) Appendix A).
To simulate the circuit of Fig. 1(a), we use Kirchhoff's voltage law
\[V(t)=r\left(\dot{q}+I_{M}\right)+V_{C}, \tag{7}\]
which is applied together with the equations defining the leaky memcapacitor, Eqs. (3)-(6). The trajectories \((x(t),q(t))\) are found by numerical integration of Eqs. (5) and (7). The integration was performed using the ODE solver ode45 for nonstiff differential equations in MATLAB (version R2022a).
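For readers who wish to reproduce the trajectories, a minimal Python sketch of Eqs. (3)-(7) in the dimensionless units defined above (so that \(\epsilon A=\sigma=\epsilon_{l}=R_{m}=1\)) is given below; the paper used MATLAB's ode45, and the step-size cap here is our choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Table 1 parameters; eps*A = sigma = eps_l = R_m = 1 in the dimensionless units.
d, x_c, beta, k = 8.0, 6.4, 5e4, 5.0 / 6.0
r, rho0, gamma = 1e-3, 1.25e-4, 1.25e-4

def R(x):                          # Eq. (4)
    return np.arctan(beta * (x_c - x)) / np.pi + 0.5 + rho0 * (d - x)

def dU(x):                         # derivative of the potential, Eq. (6)
    u = d - x
    return k * x + 4.0 * (12.0 / u**13 - 6.0 / u**7)

def rhs(t, state, V):
    x, q = state
    V_C = q * (d - x)              # Eq. (3) inverted: V_C = q / C(x)
    I_M = V_C / R(x)               # leakage current
    dq = (V - V_C) / r - I_M       # Kirchhoff's voltage law, Eq. (7)
    dx = (q**2 - dU(x)) / gamma    # overdamped plate dynamics, Eq. (5)
    return [dx, dq]

# Step-like voltage V = 8.0829 from zero initial conditions, as in Fig. 3(a);
# the small step cap helps resolve the fast charging and contact events.
sol = solve_ivp(rhs, (0.0, 1.5), [0.0, 0.0], args=(8.0829,), max_step=1e-4)
x_t, q_t = sol.y
```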
## III Spiking behavior and phase diagram
Figs. 3(a) and (b) show two selected simulation results that demonstrate the transient dynamics from zero initial conditions, \(x_{0}=0\) and \(q_{0}=0\), to the regime of periodic spiking. These results indicate a significant dependence of the spike shape on the magnitude of applied voltage (\(V=8.0829\) in Fig. 3(a) and \(V=15.0111\) in Fig. 3(b)). According to Figs. 3(a) and (b), the spikes are smoother at \(V=8.0829\), and sharper at \(V=15.0111\). The sharper spikes show a close resemblance to biological spikes.
A notable feature of the transient dynamics is the initial sharp increase in the voltage across the leaky memcapacitor, \(V_{C}\). This property (clearly observed in Figs. 3(a) and (b)) is associated with the smaller capacitance at short times (close to \(t=0\)) due to the initially large plate separation. Within the transient region, the leaky memcapacitor adapts to the applied voltage: the Coulomb attraction reduces the distance between the plates, which increases the capacitance. The voltage plateau in Fig. 3(a) is close to the bifurcation point at \(V_{1}^{\prime}=7.9582\) (for more information, see SI Appendix A). This explains the relatively long duration of the transient region.
Moreover, in Fig. 3(c) we present an example of non-spiking behavior. In this case, the trajectory ends in a sink (attractor).
We systematically analyzed the behavior of the circuit by studying its fixed points and limit cycles. For this purpose, we used vector field diagrams of solutions and Jacobi matrices (for more information, see SI Appendix A). This analysis has resulted in the phase diagram presented in Fig. 4(a). The diagram indicates the presence of a global limit cycle in a wide range of the applied voltage, from \(V_{1}^{\prime}\) to \(V_{2}\), which accounts for the presence of the spiking regime.
The main features of the phase diagram are as follows. First of all, since the behavior is symmetric with respect to the sign of \(V\), we show only the positive voltage region of the phase diagram. Around \(V=0\), the only global attractor is a sink. As \(V\) increases, at \(V_{0}\approx 3.4156\) there is a bifurcation that nucleates a saddle and a spiral source; they do not influence the global attractor, however, until they separate sufficiently at \(V_{1}\), where the saddle splits the phase space into two disconnected regions, one of which hosts a limit cycle and the other the original sink that corresponds to a static state, as shown in Fig. 4(b). In SI Appendix B we show the possibility of switching between these two attractors using voltage pulses. Another bifurcation occurs when the sink and saddle point annihilate at \(V_{1}^{\prime}=7.9582\), transforming the limit cycle into the global attractor with stable spikes, as shown in Fig. 4(c). At \(V_{2}\), another bifurcation generating a sink-saddle pair cuts off the limit cycle, shifting the global attractor to the sink, as shown in Fig. 4(d). As \(V\) continues to increase, the saddle and the spiral source move towards each other and then annihilate, leaving the sink alone (for more information, see SI Appendix A).

\begin{table}
\begin{tabular}{c c c c} \hline
Parameter & Value & Parameter & Value \\ \hline
\(d\) & 8 & \(r\) & \(10^{-3}\) \\
\(x_{c}\) & 6.4 & \(\rho_{0}\) & \(1.25\times 10^{-4}\) \\
\(\beta\) & \(5\times 10^{4}\) & \(\gamma\) & \(1.25\times 10^{-4}\) \\
\(k\) & 5/6 & & \\ \hline
\end{tabular}
\end{table}
Table 1: Parameters used in simulations.
To better understand the properties of the spikes, the _natural_ spike frequency, \(\omega_{natural}\), was calculated as a function of the applied voltage (see Fig. 5(a)). Interestingly, the calculated points form a half-oval shape in the frequency-voltage plot. Fig. 5(a) shows that the frequency approaches zero when \(V\to V_{1},V_{2}\). The Fourier transform of the voltage across the memcapacitor is presented in Fig. 5(b). Qualitatively, the whole spiking regime can be divided into three parts, I, II, and III, that differ in their spike patterns (see Figs. 5(c)-(e)). The "negative spikes" regime, I, and the "positive spikes" regime, III, are connected by a regime of more symmetric (harmonic) spikes, II.
## IV Synchronization with external source
To study the synchronization with an external source, an ac voltage was added to the constant driving voltage \(V_{dc}\), \(V(t)=V_{dc}+\delta V\sin(\omega_{source}t)\), with \(\delta V=0.1155\). To initialize the system close to the limit cycle, we used the initial conditions \(x_{0}=6.60\) and \(q_{0}=2.02\). The Fourier transforms for the regimes I-III of oscillations in Fig. 5(a) are presented in Fig. 6. When the circuit is in regime II (Fig. 6(b)), i.e. away from the thresholds, the synchronization occurs only when the source frequency is very close to the spike frequency, \(\omega_{source}\approx\omega_{natural}(V_{dc})\). In this regime, the Fourier plot features various combinations of integer multiples of \(\omega_{source}\) and \(\omega_{natural}\), \(N\omega_{natural}\pm M\omega_{source}\), where \(M,N=0,1,2,...\)
In the other two cases in Fig. 6, the external source has a much stronger influence on the spike generation. Figs. 6(a) and (c) represent the cases when \(V_{dc}\) is close to the thresholds \(V_{1}\) and \(V_{2}\). In these cases, the regions of synchronized driving frequencies are largely extended, covering both integer multiples and rational fractions of the self-oscillation frequency. In regime I, when the driving frequency is small, the system responds harmonically to the drive, as shown in Fig. 6(a), where only the \(\omega_{source}\) component is evident. This is because once \(V\) drops below \(V_{1}\), the spiral source and the saddle vanish simultaneously, and the system can easily switch to the sink, which lies away from the contact regime as depicted in Figs. 4(a) and (b), and may not return. In contrast, in regime III, when \(V\) crosses over \(V_{2}\), the bifurcation of a sink-saddle pair appears in the contact regime on the previous limit cycle, as depicted in Figs. 4(a) and (d); thus the system does not leave the contact regime, which results in a spiking waveform even at low source frequencies. Further detailed Fourier transforms and relevant analyses of these three regimes are discussed in SI Appendix C.
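The quantity \(P_{1}\) used in Figs. 5 and 6 appears to follow the usual single-sided amplitude convention; a sketch (assuming \(V_{C}\) has been resampled onto a uniform time grid) is:

```python
import numpy as np

def single_sided_amplitude(v, dt):
    """Single-sided amplitude spectrum P1 of a uniformly sampled signal."""
    n = len(v)
    P1 = 2.0 * np.abs(np.fft.rfft(v)) / n
    P1[0] /= 2.0                     # the DC bin is not doubled
    if n % 2 == 0:
        P1[-1] /= 2.0                # neither is the Nyquist bin
    return np.fft.rfftfreq(n, dt), P1
```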
Figure 3: The response to a step-like voltage applied at \(t=0\): starting from zero initial conditions (\(x=0\), \(q=0\)), the circuit transits to (a), (b) a periodic spiking regime or (c) a static regime. These plots were obtained using (a) \(V=8.0829\), (b) \(V=15.0111\), and (c) \(V=7.8520\). In the top panels, the black dashed line refers to \(d\) and the brown one refers to \(x_{c}\).
## V Other types of dynamics
In this section, we show that the dynamics of the circuit in Fig. 1(a) can be further enriched by using additional components with memory. Indeed, replacing the resistor \(r\) in Fig. 1(a) with a memristor may completely change the pattern of spikes to bursting (a firing pattern wherein periods of rapid spiking are separated by quiescent periods). Below, we introduce two memristive behaviors (without and with a threshold with respect to \(I_{r}\)) and study their effect on the spike generation.
In principle, it is evident that at a suitable constant applied voltage \(V\), the resistance \(r\) controls the spike generation. Since in the limiting cases \(r\to 0\) and \(r\rightarrow\infty\) the dynamics of the circuit in Fig. 1(a) should not be oscillatory, one can assume that the spiking behavior occurs within certain resistance thresholds, say, when \(r_{2}<r<r_{1}\). Consequently, a suitable memristor (e.g., with resistance changing across \(r_{1}\) and/or \(r_{2}\)) may be used to control the pattern of spikes.
For the sake of simplicity, in what follows the resistance is used as the internal state variable of the memristor (such models are known in the literature [37]). In the first model, the memristor dynamics is defined by the equation
\[\dot{r}(t)=-\alpha_{1}I_{r}(t)+\lambda_{1}\int\limits_{0}^{t}e^{-\gamma(t- \tau)}\left(r_{0}-r(\tau)\right)\mathrm{d}\tau\;\;, \tag{8}\]
where \(\alpha_{1}\) is the current proportionality coefficient (which can be of any sign), \(\lambda_{1}\) is the (positive) relaxation coefficient, \(\exp\left(-\gamma\left(t-\tau\right)\right)\) is the memory kernel, \(\gamma\) is a non-negative constant, \(r_{0}\) is the equilibrium resistance, and \(I_{r}(t)=\dot{q}+I_{M}\) is the current flowing through \(r\). It is assumed that \(r\) is confined to the interval from \(r^{\prime}\) to infinity, where \(r^{\prime}\) is the minimal value of \(r\). On the right-hand side of Eq. (8), the first term causes the drift of \(r\), mostly due to the spikes. The second term incorporates the relaxation to \(r_{0}\) and the memory of prior states.

Figure 4: (a) An overall picture: the fixed points and attractors depending on the applied voltage \(V\). The phase diagrams at (b) \(V=7.0437\) (the sink and limit cycle case in (a)), (c) \(V=7.9674\) (the limit cycle case in (a)) and (d) \(V=15.5000\) (the sink case on the right of the limit cycle one in (a)). In (b) the phase space is divided into two parts by the flow lines towards the saddle, as depicted with a dashed line. The semi-transparent gray thick line represents the limit cycle. The saddle, the sink, and the spiral source are labeled by black arrows, and the yellow star inside the limit cycle is located at the spiral source. The blue arrows in (b) and the orange arrows in (d) depict the shift direction of the three fixed points as \(V\) increases. In (d), vector fields are added in small blue arrows to help show the fixed points. The initial conditions of the solutions are set discretely along the edge, as well as around the spiral source. In (b)-(d), the evolution time \(t\) was \(0.05\).
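Returning to Eq. (8): although the memory kernel makes \(\dot{r}\) non-local in time, the exponential form allows a local reformulation via the auxiliary variable \(s(t)=\int_{0}^{t}e^{-\gamma(t-\tau)}\left(r_{0}-r(\tau)\right)\mathrm{d}\tau\), which obeys \(\dot{s}=(r_{0}-r)-\gamma s\). A sketch of the resulting update (explicit Euler, our simplification):

```python
def memristor_step(r, s, I_r, dt, alpha1, lam1, gam, r0, r_min):
    """One explicit-Euler step of Eq. (8), with s(t) the auxiliary variable
    defined above so that ds/dt = (r0 - r) - gam * s."""
    dr = -alpha1 * I_r + lam1 * s
    ds = (r0 - r) - gam * s
    return max(r_min, r + dt * dr), s + dt * ds   # r is confined to [r_min, inf)
```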
Figs. 7(a) and (b) show the results of simulations with the Eq. (8) model for \(r\). Roughly, the picture is as follows. The spikes starting at \(t=0\) cause \(r\) to decrease. As soon as \(r<r_{2}\), the spiking terminates and \(r\) starts relaxing towards \(r_{0}\) with a delay determined by \(\gamma\). When \(r\) crosses \(r_{2}\) again (in the opposite direction), the circuit transits back into the spiking mode. This process repeats periodically.
Figure 5: (a) Spike frequency as a function of \(V\), and (b) Fourier transform of \(V_{C}\) as a function of \(V\). The dashed lines, from left to right, refer to \(V_{1}\), \(V_{1}^{\prime}\) and \(V_{2}\), respectively. Here, \(P_{1}\) is the single-sided amplitude of the Fourier transform. In these calculations, to keep the system close to the limit cycle, the initial condition was selected as \(x_{0}=6.6000\) and \(q_{0}=2.0207\). The evolution time \(t\) was \(1.5\), and we skipped the initial transient interval. (c)-(e) Steady-state oscillations at (c) \(V=7.0183\) (regime I), (d) \(V=11.5470\) (regime II) and (e) \(V=15.0561\) (regime III). In (c)-(e), the black dashed line refers to \(d\) and the brown one refers to \(x_{c}\).
Figure 6: Synchronization with external source: The Fourier transform of \(V_{C}\) at (a) \(V_{dc}=7.0645\) (regime I), (b) \(V_{dc}=11.5470\) (regime II) and (c) \(V_{dc}=15.0688\) (regime III). \(P_{1}\) is the single-sided amplitude of Fourier transform. In these calculations, to keep the system close to the limit cycle, the initial condition was selected as \(x_{0}=6.6000\) and \(q_{0}=2.0207\). These plots were obtained using the integration time \(t\) of \(2\); the initial interval of transient dynamics was omitted. In (a)-(c), the horizontal and vertical dashed lines correspond to \(\omega_{natural}(V_{dc})\).
Figure 7: Achieving complex spiking behaviors by introducing memory into the resistor. (a), (b) Method of Eq. (8). (c), (d) Method of Eq. (9). Panels (b) and (d) are zoom-ins of (a) and (c), respectively. In the inset of (d), the dashed line refers to \(I^{\prime}\), and during the short plateau, the limit cycle has just disappeared and the system is switching to the sink. In simulation (a), the parameters were set as \(V=13.8564\), \(\alpha_{1}=3.4641\times 10^{-6}\), \(\lambda_{1}=1.600\times 10^{5}\), \(r_{0}=10^{-3}\), \(\gamma=0\) and \(r^{\prime}=0.8r_{0}\). In simulation (c), \((\arctan\lambda^{\prime}(I_{r}(t)-I^{\prime}))/\pi+0.5\) is used instead of step functions in Eq. (9), and the parameters were set as \(V=7.9674\), \(I^{\prime}=10.3923\), \(\lambda^{\prime}=1000\), \(\alpha_{2}=1.6000\times 10^{4}\) and \(\lambda_{2}=2000\). In both simulations, the initial conditions were \(x=0\), \(q=0\) and \(r_{0}=10^{-3}\).

Next, we consider the second model for \(r(t)\). A deeper analysis of the circuit in Fig. 1(a) (with \(r\) as a control parameter) has revealed an additional threshold \(r_{1}^{\prime}\) between \(r_{2}\) and \(r_{1}\) such that there are two attractors (a sink and a limit cycle) for \(r_{1}>r>r_{1}^{\prime}\). On the limit cycle, certain variables such as \(I_{r}(t)\) reach higher values over one spiking period than they do at the sink. Therefore, we can set a threshold, \(I^{\prime}\), to induce cycling between them. If \(I_{r}(t)>I^{\prime}\), we set \(\dot{r}>0\), otherwise \(\dot{r}<0\), as in
\[\dot{r}(t)=\begin{cases}\alpha_{2}r(t)^{2},&I_{r}(t)>I^{\prime}\\ -\lambda_{2}r(t),&I_{r}(t)<I^{\prime}\\ 0,&I_{r}(t)=I^{\prime}\end{cases} \tag{9}\]
where \(\alpha_{2}\) and \(\lambda_{2}\) are positive coefficients. It is again assumed that \(r\) is confined to the interval from \(r^{\prime}\) to infinity, where \(r^{\prime}\) is the minimal value of \(r\). This mechanism allows \(r(t)\) to increase from the interval \((r_{1}^{\prime},r_{1})\) to above \(r_{1}\), switching the system from the limit cycle to the sink; after a while, \(r(t)\) begins to decrease, closing the cycle. The simulations of this scheme for \(\dot{r}\) are shown in Figs. 7(c) and (d). Note that the second method works only when \(R_{m}\) is finite (for more details, see SI Appendix A).
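Since the caption of Fig. 7 notes that the step functions in Eq. (9) were smoothed with \((\arctan\lambda^{\prime}(I_{r}-I^{\prime}))/\pi+0.5\) in the simulations, one plausible reading of the smoothed update is the following sketch:

```python
import numpy as np

def r_dot_smooth(r, I_r, I_thr, alpha2, lam2, lam_prime=1000.0):
    """Smoothed Eq. (9): the arctan sigmoid is ~1 above threshold and ~0
    below, interpolating between the growth and decay branches."""
    s = np.arctan(lam_prime * (I_r - I_thr)) / np.pi + 0.5
    return s * alpha2 * r**2 - (1.0 - s) * lam2 * r
```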
## VI Conclusion
In summary, we have proposed a leaky memcapacitor - an electromechanical crossbreed of a memcapacitor and a memristor - that can generate neuromorphic spikes. Its model is based on a potential that combines linear elasticity with a nonlinear Lennard-Jones-like interaction between the plates at short distances, attempting to represent a realistic interaction potential. Thanks to the presence of the nonlinear interaction, the dynamical behavior of the system in the contact region differs from that when the plates are relatively far from each other. This helps achieve a stable spiking behavior when a constant voltage is applied.
In order to thoroughly understand the spiking behavior, we have conducted the stability analysis in the \((x,q)\)-space and discovered several interesting regimes characterized by different configurations of fixed points and attractors. We have shown that for some ranges of parameters, one can use a voltage pulse to switch the system from a sink to a limit cycle and vice-versa. We have also found that the spike shape depends on the applied voltage.
An important feature of the system is that the spike frequency may adapt to the frequency of an external perturbation (depending on the model and excitation parameters). A rich dynamical behavior, including synchronization, has been observed when a small-amplitude ac signal was added to the constant driving voltage. In addition, replacing the constant external resistor with a memristor extends the variety of spike waveforms that the circuit generates. With this modification, the circuit can be tuned to mimic the behaviors of some types of biological neurons.
Overall, the system introduced in this article provides a new avenue for the practical realization of neuromorphic devices based on memcapacitive and memristive effects. Our study may lead to novel energy-efficient realizations of neural dynamics with electromechanical structures, including artificial analogs of biological membranes.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
All authors contributed to conceiving the idea, formulating the model, its qualitative analysis, and writing the manuscript. Z. Z. has performed all numerical simulations.
## Funding
I. M. acknowledges funding from the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, US DOE.
## Data Availability Statement
The Supplemental Information contains additional data supporting this manuscript. Further inquiries can be directed to the corresponding authors.
|
2308.00787 | Evaluating Spiking Neural Network On Neuromorphic Platform For Human
Activity Recognition | Energy efficiency and low latency are crucial requirements for designing
wearable AI-empowered human activity recognition systems, due to the hard
constraints of battery operations and closed-loop feedback. While neural
network models have been extensively compressed to match the stringent edge
requirements, spiking neural networks and event-based sensing are recently
emerging as promising solutions to further improve performance due to their
inherent energy efficiency and capacity to process spatiotemporal data in very
low latency. This work aims to evaluate the effectiveness of spiking neural
networks on neuromorphic processors in human activity recognition for wearable
applications. The case of workout recognition with wrist-worn wearable motion
sensors is used as a study. A multi-threshold delta modulation approach is
utilized for encoding the input sensor data into spike trains to move the
pipeline into the event-based approach. The spikes trains are then fed to a
spiking neural network with direct-event training, and the trained model is
deployed on the research neuromorphic platform from Intel, Loihi, to evaluate
energy and latency efficiency. Test results show that the spike-based workouts
recognition system can achieve an accuracy (87.5\%) comparable to the
popular milliwatt RISC-V based multi-core processor GAP8 with a traditional
neural network (88.1\%) while achieving two times better energy-delay product
(0.66 \si{\micro\joule\second} vs. 1.32 \si{\micro\joule\second}). | Sizhen Bian, Michele Magno | 2023-08-01T18:59:06Z | http://arxiv.org/abs/2308.00787v1 | # Evaluating Spiking Neural Network On Neuromorphic Platform For Human Activity Recognition
###### Abstract.
Energy efficiency and low latency are crucial requirements for designing wearable AI-empowered human activity recognition systems, due to the hard constraints of battery operation and closed-loop feedback. While neural network models have been extensively compressed to match the stringent edge requirements, spiking neural networks and event-based sensing are recently emerging as promising solutions to further improve performance due to their inherent energy efficiency and capacity to process spatiotemporal data with very low latency. This work aims to evaluate the effectiveness of spiking neural networks on neuromorphic processors in human activity recognition for wearable applications. The case of workout recognition with wrist-worn wearable motion sensors is used as a study. A multi-threshold delta modulation approach is utilized for encoding the input sensor data into spike trains to move the pipeline into the event-based approach. The spike trains are then fed to a spiking neural network with direct-event training, and the trained model is deployed on Loihi, the research neuromorphic platform from Intel, to evaluate energy and latency efficiency. Test results show that the spike-based workout recognition system can achieve an accuracy (87.5%) comparable to the popular milliwatt RISC-V based multi-core processor GAP8 with a traditional neural network (88.1%) while achieving a two times better energy-delay product (0.66 µJ·s vs. 1.32 µJ·s).
neuromorphic computing, human activity recognition, spiking neural networks, workout recognition, Loihi
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: journal: Computer Vision and Pattern Recognition
Applying SNNs to low-dimensional sensory data like audio and sensor signals is relatively new and much less explored (Kyle et al., 2013). However, recent studies have shown promising results in using SNNs for ubiquitous computing with low-dimensional signal sensors, which can provide essential insights into related topics like HAR. Table 1 lists several recent studies that explore SNNs for ubiquitous computing with low-dimensional signals and their resulting performance in different applications. Kyle et al. (Kyle et al., 2013) and Federico et al. (Fedico et al., 2014) explored heartbeat classification with SNNs using different training strategies and validated them on two neuromorphic processors with competitive accuracy. The energy-delay product (EDP) on Loihi is over twenty-eight times more efficient than inference on a CPU. Enea et al. (Enea et al., 2015) ran a directly trained SNN on Loihi with customized EMG and DVS datasets for hand gesture recognition. Similar to (Kyle et al., 2013), the EMG results on Loihi outperform the results on a GPU in EDP by a factor of ninety-seven. Besides biological signals, audio signals have also been explored with SNNs (Kyle et al., 2013). In (Kyle et al., 2013), a fresh edge neuromorphic processor, Xylo, was used to classify ambient audio. An impressive inference energy of only 9.3 µJ was reported on Xylo, over twenty-six times less energy than on the edge IoT processor MAX78000, which features a convolutional hardware accelerator. One common finding of these SNN studies on low-dimensional signals is that SNNs deliver state-of-the-art inference energy and impressive EDP compared with ANNs on CPUs and GPUs. Besides this, the ANN-to-SNN training approach often results in competitive accuracy, while directly trained SNNs fall short in accuracy compared with their ANN counterparts (Enea et al., 2015). The reason is that information loss occurs while encoding the signal into spikes, for example with delta modulation, especially for fast and large signal variations (Kyle et al., 2013; Federico et al., 2014; Fedico et al., 2014).
In this work, we make the following contributions:
1. We demonstrate the feasibility of using SNNs for sensor-based HAR tasks pursuing latency and energy efficiency with a directly trained SNN on the neuromorphic platform Loihi. The first spiking-IMU dataset and the corresponding directly trained SNN are released for benchmarking HAR with the neuromorphic solution 1. Footnote 1: [https://github.com/zhaxidele/HAR-with-SNN](https://github.com/zhaxidele/HAR-with-SNN)
2. With spike trains generated by a multi-threshold delta modulation approach, an accuracy (87.5%) comparable to the ANN approach on the novel IoT processor GAP8 (88.1%) is achieved; GAP8 has a dedicated RISC-V cluster for hardware acceleration and has shown state-of-the-art edge-AI performance in a range of applications.
3. The latency and energy efficiency of the neuromorphic HAR approach and the mainstream HAR approach are compared in this case study. The neuromorphic approach using SNNs on Loihi outperforms the ANN method on GAP8 in terms of inference energy while falling slightly behind in latency; overall, the neuromorphic approach achieves a nearly two times better energy-delay product (0.66 µJ·s vs. 1.31 µJ·s).
## 3. System Architecture
Figure 1 depicts the pipeline of the proposed SNN for HAR applications, including three key steps: spike encoding from sensor data, offline SNN training, and online SNN inference on a neuromorphic processor. To enable a fair comparison with neural networks on low-power digital processors, in this work we use the public dataset RecGym (Beng et al., 2017) as a case study.
Table 1. Recent studies exploring SNNs for ubiquitous computing with low-dimensional signals and their performance in different applications.

| Work | Application | Sensor/Dataset | Neuromorphic platform | Encoding | Training | Accuracy (vs. ANN) | Latency (ms)¹ | Energy (mJ)² | EDP (µJ·s) |
|---|---|---|---|---|---|---|---|---|---|
| [15]-2019 | Heartbeat classification | ECG, MIT-BIH | DYNAP | Delta modulation | SVM + rSNN | 95.6% (94.2%) | NA³ | NA | NA |
| [32]-2021 | Oscillation detection | EEG (iEEG), customized | DYNAP-SE | Delta modulation | Direct SNN | 78.0% (67.0%)⁴ | NA | NA | NA |
| [14]-2020 | Hand gesture recognition | EMG, customized | Loihi | Delta modulation | Direct SNN | 55.7% (68.1%) | 5.9 (3.8 on GPU) | 0.173 (25.5 on GPU) | 1.0 (97.3 on GPU) |
| [11]-2019 | Keyword spotting | Audio, customized | Loihi | Rate encoding | ANN-to-SNN | 97.9% (97.9%) | 3.38 (1.30 on GPU, 5.6 on Jetson) | 0.27 (29.8 on Jetson) | 0.91 (38.7 on GPU, 13.44 on Jetson) |
| [12]-2023 | Ambient audio classification | Audio, QUT-NOISE | Xylo | Power-band bins to spikes | Direct SNN | 98.0% (97.9%) | 100 | 0.0093 (0.25 on MAX78000, 11.2 on Cortex) | 0.93 |
| Ours | Human activity recognition | IMU + capacitance, RecGym | Loihi | Delta modulation | Direct SNN | 87.5% (88.1%) | 4.4 | 0.15 | 0.66 |

¹ Time elapsed between the end of the input and the classification. ² Only dynamic energy is considered on Loihi. ³ Not available. ⁴ HFO was detected with a morphology detector (Kyle et al., 2013).
However, it is important to notice that the proposed approach can be used with other HAR-related datasets from various sensing modalities. The dataset records ten volunteers' gym sessions with a sensing unit composed of an IMU sensor and a body-capacitance sensor (Beng et al., 2017; Chen et al., 2018). The sensing units were worn at three positions: on the wrist, in the pocket, and on the calf. Twelve gym activities are recorded, including eleven workouts such as ArmCurl, LegPress, and StairsClimber, and a "Null" activity when the volunteer hangs around between different workout sessions. Each participant performed the selected workouts for five sessions on five days. Altogether, fifty sessions of gym workout data are present in this dataset. In this study, we focus only on the motion signals from the sensing unit worn on the wrist.
### Spike encoding
To directly train an SNN, traditional numerical sequential values need to be transformed into spike streams that carry both the temporal and spatial knowledge of the original signals. Different encoding approaches have been explored, mostly for vision data transformation (Zhou et al., 2017), like latency encoding, rate encoding, delta modulation, etc. Each has advantages and limitations and has been adopted in different works (Beng et al., 2017). For example, latency encoding normally achieves the best processing latency and energy consumption with fewer synaptic operations while being more susceptible to noise. Rate coding has been demonstrated to exist in sensory systems like the visual cortex and motor cortex (Srivastava et al., 2017), showing the best resilience to input noise while being limited by a lengthy processing period. In this work, we use the delta modulation approach due to its optimal trade-off between complexity and latency in both the firmware and hardware implementation. Moreover, the analog sensory information can be directly encoded into the spike trains at the front end. To address the accuracy degradation caused by information loss during encoding, we use multiple thresholds for spike-train generation. The relationship between the continuous signal \(s(t)\) and its spiking counterpart \(\hat{s}(t)\) is given by Equation 1.
\[\hat{s}_{i}(t)=\begin{cases}1,&\text{if }s(t)-s(t-1)>\epsilon_{i}\\ -1,&\text{if }s(t)-s(t-1)<-\epsilon_{i}\end{cases} \tag{1}\]
where \(\epsilon_{i}\) is the threshold empirically chosen for spike encoding, and \(i\) (\(0\sim 4\)) is the index into the list of applied thresholds. Figure 2 depicts the five spike-train channels encoded from the X-axis of the accelerometer, where fast and large signal variations produce spikes in more spike-train channels. For the inertial data, the thresholds were empirically set to \(0.00005\times(i+1)\), and to \(0.0000125\times(i+1)\) for the capacitance data; the number of thresholds was empirically set to five. A systematic exploration of the best threshold values and their number is left for future work. As seven continuous signals were collected in the dataset, we obtain thirty-five (7×5) spike trains for SNN training and inference. With a two-second time window as one classification instance, we obtain 81291 spiking samples for building the SNN model.
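As an illustration, a minimal Python sketch of this multi-threshold delta modulation (with the threshold schedule for the inertial channels stated above; the sampling details of the real pipeline are omitted) is:

```python
import numpy as np

def encode_multi_threshold(signal, base_eps=0.00005, n_thresholds=5):
    """Encode a 1-D signal into n_thresholds spike trains via Eq. (1).
    Spike value +1 (-1) if the forward difference exceeds +eps_i (-eps_i),
    with eps_i = base_eps * (i + 1)."""
    diffs = np.diff(signal)                            # s(t) - s(t-1)
    spikes = np.zeros((n_thresholds, len(diffs)), dtype=np.int8)
    for i in range(n_thresholds):
        eps_i = base_eps * (i + 1)
        spikes[i, diffs >  eps_i] =  1
        spikes[i, diffs < -eps_i] = -1
    return spikes

# Example: encode one synthetic accelerometer axis into 5 spike channels
acc_x = np.cumsum(np.random.randn(1000)) * 1e-4
trains = encode_multi_threshold(acc_x)                 # shape (5, 999)
```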
### Spiking neural network and Loihi
One of the main contributions of this paper is the design of an SNN model and its evaluation on the Intel Loihi research platform. Loihi (2017) is an asynchronous neuromorphic digital processor mainly intended for research. The processor consists of a many-core mesh of 128 neuromorphic cores for spike processing and three synchronous Lakemont x86 cores to monitor and configure the network and assist with the injection and recording of input/output spikes. Each neuromorphic core in Loihi can access its local memories independently without needing to share a global memory bus and can implement up to 1024 current-based leaky integrate-and-fire neurons. Among other research platforms, Loihi has been selected because it includes a software SDK to design and profile the proposed SNN.
Our proposed SNN is composed of two convolutional and two dense layers (32C-64C-128D-12D) with a kernel size of three, as Table 2 lists. The threshold for neuron spiking was empirically selected. The current and voltage decay constants for the leaky integrate-and-fire neurons were set to 1024 (32 ms) and 128 (4 ms), respectively. Before spike encoding, the dataset was interpolated to 1 kHz using the univariate spline method, aiming to approach the biological behavior of the brain regarding information feeding as closely as possible. Each sample contains two seconds of spike trains. The model was trained offline on a GPU with weighted classes and leave-one-user-out cross-validation, and the trained weights and delays were then used to configure the network on the Loihi hardware for inference.

Figure 1. Human activity recognition using a spiking neural network, where the network is executed on a neuromorphic platform pursuing energy and latency efficiency.

Figure 2. Encoded spike trains of the signal Acc_x from the BenchPress workout with thresholds of [0.00005, 0.0001, 0.0002, 0.0004, 0.0008].
To fully exploit the biological plausibility of SNNs, we used the SLAYER framework (Schaver, 2017) for direct training, aiming to push the envelope of energy and latency efficiency of the SNN. SLAYER evaluates the gradient of the convolutional and linear layers in an SNN by a temporal credit-assignment policy, which distributes the error credit both back through the layers and in time, as a spiking neuron's current state depends on its previous states. A probability density function is then used to estimate the change in the neuron state, thus approximating the derivative of the spike function. With SLAYER, the synaptic weights and axonal delays can be trained, and state-of-the-art performance has been achieved on neuromorphic datasets like NMNIST and IBM DVS-Gesture (Schaver, 2017). SLAYER supports a Loihi-specific implementation for the neuron model and weight quantization.
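For orientation, the sketch below reproduces the 32C-64C-128D-12D topology from Table 2 in plain PyTorch with a strongly simplified leaky integrate-and-fire update. It is a structural illustration only, not the actual SLAYER/Loihi implementation: surrogate gradients, axonal delays, the stated decay constants, and weight quantization are omitted, and the two input channels are assumed to hold the spike polarities.

```python
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Very simplified leaky integrate-and-fire layer (no refractory period)."""
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, x, mem):
        mem = self.beta * mem + x               # leaky integration
        spk = (mem >= self.threshold).float()   # fire
        mem = mem - spk * self.threshold        # soft reset
        return spk, mem

class WorkoutSNN(nn.Module):
    """32C-64C-128D-12D topology from Table 2 (7x5 spike-channel input)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 32, 3, padding=1)  # 2 channels: +/- spikes
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 7 * 5, 128)        # 2240 -> 128
        self.fc2 = nn.Linear(128, 12)                # 12 workout classes
        self.lif = LIF()

    def forward(self, x):
        # x: (time, batch, 2, 7, 5) binary spike tensor
        mems = [torch.zeros(1) for _ in range(4)]
        counts = 0.0
        for t in range(x.shape[0]):
            s, mems[0] = self.lif(self.conv1(x[t]), mems[0])
            s, mems[1] = self.lif(self.conv2(s), mems[1])
            s, mems[2] = self.lif(self.fc1(s.flatten(1)), mems[2])
            s, mems[3] = self.lif(self.fc2(s), mems[3])
            counts = counts + s
        return counts  # classify by the class with the most output spikes
```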
## 4. Experimental Evaluation
Table 3 lists the workout classification performance of the trained SNN on Loihi. For comparison, we selected an ANN model using the same dataset, deployed on two different IoT processors, as presented in (Louhihi et al., 2017). Such a comparison has seldom been made, as previous SNN evaluations mostly used ANNs deployed on GPUs/CPUs as the baselines. The result is meaningful for developing ubiquitous neuromorphic edge computing, as it supplies a straightforward comparison with the state of the art using mainstream solutions. The multi-threshold spike encoding approach results in an accuracy of 87.5% with the directly trained SNN, which is much better than the single-threshold encoding result (below 60%) and acceptable compared with the accuracy of the ANN approach, considering that the accuracy of directly trained SNNs on spike streams degrades in most cases. The inference latency of the SNN on Loihi, i.e., the time elapsed between the end of the input and the classification output, is 4.4 ms, which is much better than the latency on general IoT processors like the STM32 with a Cortex-M7 core but falls slightly behind the GAP8, which features eight RISC-V cores for dedicated hardware acceleration. However, the neuromorphic pipeline outperforms in dynamic energy consumption (0.15 mJ), benefiting from the sparsity of the spike trains and the in-memory computing of Loihi, which results in an EDP of 0.66 µJ·s, while the EDPs on GAP8 and STM32 are almost two times and over two hundred times higher, respectively. The energy reported here is the dynamic energy on Loihi, measured by enabling the energy probe during inference, as the difference between the total energy consumed by the network and the static energy when the chip is idle. We have to acknowledge that Loihi is not designed specifically for edge computing; instead, it is designed more for general-purpose neuromorphic research. Thus there is still room for raising the neuromorphic performance, for example, the spike injection speed (the primary x86 core always waits 1 ms before allowing the system to continue to the next timestep). For a fairer comparison, end-to-end solutions of both the neuromorphic and the traditional approach should be developed, adopting the newly released edge neuromorphic processors (Schaver, 2017; Schaver, 2017).
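The reported energy-delay products follow directly from the latency and energy figures in Table 3; as a quick check:

```python
# EDP = inference energy x inference latency (mJ * ms = uJ*s)
platforms = {
    "Loihi (SNN)": (0.15, 4.4),    # (energy in mJ, latency in ms)
    "GAP8  (ANN)": (0.41, 3.2),
    "STM32 (ANN)": (8.07, 20.88),
}
for name, (energy_mj, latency_ms) in platforms.items():
    print(f"{name}: EDP = {energy_mj * latency_ms:.2f} uJ*s")
# -> 0.66, 1.31, 168.50
```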
## 5. Conclusion
This work explored a neuromorphic solution for human activity recognition with a typical case study of workout recognition. Neuromorphic solutions, mainly inference with SNNs on neuromorphic processors, have been emerging thanks to their latency and energy efficiency. We started with a multi-threshold delta modulation to encode the raw motion-sensor signals into multiple spike trains, aiming to reduce the information loss during spike generation. A shallow SNN model was then trained on the spike-form workout signals with the SLAYER framework. The model running on Loihi showed a comparable accuracy of 87.5% and an impressive energy-delay product of 0.66 µJ·s, compared with the state-of-the-art ANN solution on GAP8. This work demonstrates the efficiency of neuromorphic solutions in ubiquitous computing pursuing latency and energy efficiency. For future work, we will focus on features of neuromorphic solutions that exceed traditional edge solutions, for example, on-the-fly learning that can adapt the SNN models to specific subjects and environments, boosting the inference accuracy. We will also explore the newly released edge neuromorphic platforms and Loihi 2, which has redesigned asynchronous circuits providing faster speed and enhanced learning capabilities, featuring a multifold performance boost compared with its predecessor.
###### Acknowledgements.
This work was supported by the CHIST-ERA project ReHab (20CH21-203783).
Table 2. SNN model for the spiking RecGym dataset.

| Layer | Type | Size | Feature size | Features | Stride |
|---|---|---|---|---|---|
| 0 | Input | 7×5×2 | - | - | - |
| 1 | Conv | 7×5×32 | 3×3 | 32 | 1 |
| 2 | Conv | 7×5×64 | 3×3 | 64 | 1 |
| 3 | Dense | 2240 | - | 128 | - |
| 4 | Dense | 128 | - | 12 | - |
Table 3. Classification profiling vs. general edge solutions.

| Hardware | Model | Accuracy | Latency (ms) | Energy (mJ) | Energy-delay product (µJ·s) |
|---|---|---|---|---|---|
| Loihi (neuromorphic) | SNN | 87.5% | 4.4 | 0.15 | 0.66 |
| GAP8 (RISC-V) | ANN | 88.1% | 3.2 | 0.41 | 1.31 |
| STM32 (Cortex-M7) | ANN | 89.3% | 20.88 | 8.07 | 168.5 |

|
2303.02590 | Domain Decomposition with Neural Network Interface Approximations for
time-harmonic Maxwell's equations with different wave numbers | In this work, we consider the time-harmonic Maxwell's equations and their
numerical solution with a domain decomposition method. As an innovative
feature, we propose a feedforward neural network-enhanced approximation of the
interface conditions between the subdomains. The advantage is that the
interface condition can be updated without recomputing the Maxwell system at
each step. The main part consists of a detailed description of the construction
of the neural network for domain decomposition and the training process. To
substantiate this proof of concept, we investigate a few subdomains in some
numerical experiments with low frequencies. Therein the new approach is
compared to a classical domain decomposition method. Moreover, we highlight
current challenges of training and testing with different wave numbers and we
provide information on the behaviour of the neural-network, such as convergence
of the loss function, and different activation functions. | T. Knoke, S. Kinnewig, S. Beuchler, A. Demircan, U. Morgner, T. Wick | 2023-03-05T07:12:24Z | http://arxiv.org/abs/2303.02590v1 | Domain Decomposition with Neural Network Interface Approximations for time-harmonic Maxwell's equations with different wave numbers
###### Abstract
In this work, we consider the time-harmonic Maxwell's equations and their numerical solution with a domain decomposition method. As an innovative feature, we propose a feedforward neural network-enhanced approximation of the interface conditions between the subdomains. The advantage is that the interface condition can be updated without recomputing the Maxwell system at each step. The main part consists of a detailed description of the construction of the neural network for domain decomposition and the training process. To substantiate this proof of concept, we investigate a few subdomains in some numerical experiments with low frequencies. Therein the new approach is compared to a classical domain decomposition method. Moreover, we highlight current challenges of training and testing with different wave numbers and we provide information on the behaviour of the neural-network, such as convergence of the loss function, and different activation functions.
**Keywords:** Time-Harmonic Maxwell's Equations, Machine Learning, Feedforward Neural Network, Domain Decomposition Method.
## 1 Introduction
Maxwell's equations for describing electromagnetic phenomena are of great interest in current research fields, such as optics. One present example of employing Maxwell's equations can be found in the Cluster of Excellence PhoenixD (Photonics Optics Engineering Innovation Across Disciplines)1 at the Leibniz University Hannover, in which modern methods for optics simulations are being developed. Therein, one focus is on the efficient and accurate calculation of light distribution in an optical material to design optical devices on the micro- and nanoscale [35, 26]. In comparison to other partial differential equations, such as those in solid mechanics or fluid flow, Maxwell's equations have some peculiarities, such as the curl operator, whose image is one-dimensional in two-dimensional problems but three-dimensional in three-dimensional problems. Moreover, the requirements for the discretization and definiteness of the final linear system are specific. In more detail, in numerical mathematics, Maxwell's equations are of interest because of their specific mathematical structures [27, 9, 25, 33], requirements for finite elements [27, 25, 28, 31, 21, 7, 29], their numerical solution [18, 16, 14, 10] as well as postprocessing such as a posteriori error control and adaptivity [34, 6]. As their numerical solution is challenging due to their ill-posed nature, e.g., [4], one must apply suitable techniques. The most prominent approach in the literature is based on domain decomposition (DD) techniques [36, 10]. The geometric multigrid solver developed by Hiptmair [18] can only be applied to the problem in the time domain (i.e., the well-posed problem).
In this work, we concentrate on the numerical solution using a domain decomposition method. Specifically, our starting point is the method developed in [4], based on ideas from [11], which was realized in the modern open-source finite element library deal.II [1, 2]. The crucial point of the domain decomposition method is the derivation of the interface operator [11]. Our main objective in the current work is to design a proof of concept to approximate the interface operator with the help of a feedforward neural network (NN) [5, 17, 20]. We carefully derive the governing algorithms and focus on a two-domain problem to study our new approach's mechanism and performance. Implementation-wise, the previously mentioned deal.II library (in C++) is coupled to the PyTorch library (in Python) [32], which is one of the standard packages for neural network computations. Our main aim is to showcase that our approach is feasible and can be a point of departure for future extensions. We note that the current work extends the conference proceedings paper [22] with more mathematical and algorithmic details and different numerical tests, specifically studies on different wave numbers and a comparison of two NN activation functions.
The outline of this work is as follows: In Section 2, we introduce the time-harmonic Maxwell's equations and our notation. Next, in Section 3, domain decomposition and neural network approximations are introduced. Afterwards, we address in detail the training process in Section 4. In Section 5, some numerical tests demonstrate our proof of concept. Our work is summarized in Section 6.
## 2 Equations
For the sake of simplicity, we only consider the two-dimensional time-harmonic Maxwell's equations. In the following, we will introduce these equations in detail.
### Fundamental operators
To comprehensively describe the problem, we introduce the basic operators needed to describe two-dimensional electro-magnetic problems. Therefore, let \(\phi:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be a scalar function and \(\vec{v}\in\mathbb{R}^{2}\) a two-dimensional vector. Then the gradient of \(\phi\) is given by \(\nabla\phi=\left(\frac{\partial\phi}{\partial x_{1}},\ \frac{\partial\phi}{ \partial x_{2}}\right),\) and the divergence of \(v\) is given by \(\mathrm{div}(v)\coloneqq\nabla\cdot v\coloneqq\sum_{i=1}^{2}\frac{\partial v_ {i}}{\partial x_{i}}.\) Next, \(a\cdot b=(a_{1},a_{2})^{T}\cdot(b_{1},b_{2})^{T}=a_{1}b_{1}+a_{2}b_{2}\) denotes the scalar product. We can furthermore write down the description of the two-dimensional curl operator
\[\mathrm{curl}(\vec{v})=\frac{\partial v_{2}}{\partial x_{1}}-\frac{\partial v _{1}}{\partial x_{2}}, \tag{1}\]
and the curl operator applied to a scalar function
\[\underline{\mathrm{curl}}(\phi)=\left(\begin{array}{c}\frac{\partial\phi} {\partial x_{2}}\\ -\frac{\partial\phi}{\partial x_{1}}\end{array}\right). \tag{2}\]
### Time-harmonic Maxwell's equations
Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded domain with sufficiently smooth boundary \(\Gamma\). The latter is partitioned into \(\Gamma=\Gamma^{\infty}\cup\Gamma^{\mathrm{inc}}\). The main governing function space is defined as
\[H(\mathrm{curl},\Omega):=\{v\in\mathcal{L}^{2}(\Omega)\ |\ \mathrm{curl}(v)\in \mathcal{L}^{2}(\Omega)\},\]
where \(\mathcal{L}^{2}(\Omega)\) is the well-known space of square-integrable functions in the Lebesgue sense. In order to define boundary conditions, we introduce the traces
\[\gamma^{t}:H(\mathrm{curl},\Omega)\to H_{\times}^{-1/2}( \mathrm{div},\Gamma),\] \[\gamma^{T}:H(\mathrm{curl},\Omega)\to H_{\times}^{-1/2}( \mathrm{curl},\Gamma),\]
which are defined by
\[\gamma^{t}\left(\phi\right)=\left(\begin{array}{c}\phi\ n_{2}\\ -\phi\ n_{1}\end{array}\right)\quad\text{and}\quad\gamma^{T}\left(v\right)=v- \left(n\cdot v\right)\cdot n,\]
where \(n\in\mathbb{R}^{2}\) is the normal vector of \(\Omega\), \(H_{\times}^{-1/2}(\operatorname{div},\Gamma):=\{v\in H^{-1/2}(\Gamma)\mid v\cdot n =0,\;\operatorname{div}_{\Gamma}v\in H^{-1/2}(\Gamma)\}\) is the space of well-defined surface divergence fields and \(H(\operatorname{curl},\Gamma):=\{v\in H^{-1/2}(\Gamma)\mid v\cdot n=0,\; \operatorname{curl}_{\Gamma}\left(v\right)\in H^{-1/2}(\Gamma)\}\) is the space of well-defined surface curls, see [27, Chapter 3.4]. In the following, we first state the strong form of the system. The time-harmonic Maxwell's equations are then defined as follows: Find the electric field \(E:\Omega\to\mathbb{C}^{2}\) such that
\[\left\{\begin{array}{ll}\underline{\operatorname{curl}}\left(\mu^{-1} \operatorname{curl}\left(\vec{E}\right)\right)-\varepsilon\omega^{2}\vec{E}& =\vec{0}\qquad\text{ in }\Omega\\ \mu^{-1}\gamma^{t}\left(\operatorname{curl}\left(\vec{E}\right)\right)-i \kappa\omega\gamma^{T}\left(\vec{E}\right)&=\vec{0}\qquad\text{ on }\Gamma^{\infty}\\ \gamma^{T}\left(\vec{E}\right)&=\vec{E}^{\text{inc}}\quad\text{ on }\Gamma^{\text{inc}},\end{array}\right. \tag{3}\]
where \(\vec{E}^{\text{inc}}:\mathbb{R}^{2}\to\mathbb{C}^{2}\) is some given incident electric field, \(\mu\in\mathbb{R}^{+}\) is the relative magnetic permeability, \(\kappa=\sqrt{\varepsilon}\), \(\varepsilon\in\mathbb{C}\) relative permittivity, \(\omega=\frac{2\pi}{\lambda}\) is the wave number and \(\lambda\in\mathbb{R}^{+}\) is the wave length and \(i\) denotes the imaginary number. System (3), as well as its weak form, is called time-harmonic, because the time dependence can be expressed by \(e^{i\omega\tau}\), where \(\tau\geq 0\) denotes the time.
### Weak formulation
In this subsection, we derive the weak form. This is the starting point for a finite element method (FEM) discretization. For the derivation, we first begin by rewriting the curl product with the help of integration by parts:
\[\int_{\Omega}\underline{\operatorname{curl}}\left(\phi\right)\cdot\vec{u}\; \mathsf{d}x=\int_{\Omega}\phi\operatorname{curl}\left(u\right)\;\mathsf{d}x+ \int_{\partial\Omega}\gamma^{t}(\phi)\cdot u\;\mathsf{d}s, \tag{4}\]
see for instance [15, 27]. We want to derive the weak formulation from the strong formulation (3) in the following:
\[\int_{\Omega}\underline{\operatorname{curl}}\left(\mu^{-1} \operatorname{curl}\left(\vec{E}\right)\right)\cdot\vec{\varphi}\;\mathsf{d}x- \varepsilon\omega^{2}\int_{\Omega}\vec{E}\cdot\vec{\varphi}\;\mathsf{d}x= \vec{0},\] \[\stackrel{{\eqref{eq:weakform}}}{{\Rightarrow}}\int_{ \Omega}\mu^{-1}\operatorname{curl}\left(\vec{E}\right)\operatorname{curl} \left(\vec{\varphi}\right)\;\mathsf{d}x-\varepsilon\omega^{2}\int_{\Omega} \vec{E}\cdot\vec{\varphi}\;\mathsf{d}x+\int_{\partial\Omega}\mu^{-1}\gamma^{t} \left(\operatorname{curl}\left(\vec{E}\right)\right)\cdot\vec{\varphi}\; \mathsf{d}s=\vec{0}. \tag{5}\]
By applying the definition of the boundaries \(\Gamma^{\infty}\) and \(\Gamma^{\text{inc}}\) from equation (3) to equation (5), we obtain the weak formulation of the time-harmonic Maxwell's equations. Find \(\vec{E}\in H(curl,\Omega)\) such that for all \(\vec{\varphi}\in H(curl,\Omega)\)
\[\int_{\Omega}\left(\mu^{-1}\operatorname{curl}\left(\vec{E} \right)\operatorname{curl}\left(\vec{\varphi}\right)-\varepsilon\omega^{2}\vec {E}\cdot\vec{\varphi}\right)\;\mathsf{d}x+ i\kappa\omega\int_{\Gamma^{\infty}}\gamma^{T}\left( \vec{E}\right)\cdot\gamma^{T}\left(\vec{\varphi}\right)\;\mathsf{d}s\] \[= \int_{\Gamma^{\text{inc}}}\gamma^{T}\left(\vec{E}^{\text{inc}} \right)\cdot\gamma^{T}\left(\vec{\varphi}\right)\;\mathsf{d}s. \tag{6}\]
### Two-dimensional Nedelec elements
For the implementation with the help of a Galerkin finite element method (FEM), we need the discrete weak form. Based on the de Rham cohomology, we must choose our basis functions from the Nedelec space \(V_{h}\). Therefore, we want to introduce the definition of the space \(V_{h}\) in the following, based on the formalism introduced by Zaglmayr [37, Chapter 5.2].
As a suitable polynomial basis, we introduce the integrated Legendre polynomials. Let \(x\in[-1,1]\). The following recursive formula defines the integrated Legendre polynomials:
\[L_{1}(x) =x,\] \[L_{2}(x) =\tfrac{1}{2}\left(x^{2}-1\right), \tag{7}\] \[(n+1)L_{n+1}(x) =(2n-1)xL_{n}(x)-(n-2)L_{n-1}(x),\quad\text{ for }n\geq 2.\]
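As a small illustration, the recursion (7) translates directly into code; the following Python helper (an illustrative sketch, not part of the actual deal.II implementation) evaluates \(L_{n}(x)\):

```python
def integrated_legendre(n, x):
    """Evaluate the integrated Legendre polynomial L_n(x) via Eq. (7)."""
    if n == 1:
        return x
    L_prev, L_curr = x, 0.5 * (x**2 - 1.0)   # L_1, L_2
    for k in range(2, n):
        # (k+1) L_{k+1} = (2k-1) x L_k - (k-2) L_{k-1}
        L_prev, L_curr = L_curr, ((2*k - 1) * x * L_curr - (k - 2) * L_prev) / (k + 1)
    return L_curr

# e.g. integrated_legendre(3, x) equals x*(x**2 - 1)/2
```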
Let us choose the quadrilateral reference element as \(Q=[0,1]\times[0,1]\).
We continue by defining the set of all edges with local edge-ordering (see Figure 1), and we denote the cell itself with local vertex-ordering. The polynomial order is denoted by \(p\).
With the help of these basis functions, we define the two-dimensional Nedelec space

\[V_{h}=V^{\mathcal{N}_{0}}\oplus V^{\mathcal{E}}\oplus V^{\mathcal{C}}, \tag{8}\]

where \(V^{\mathcal{N}_{0}}\) is the space of the _lowest-order Nedelec_ functions, \(V^{\mathcal{E}}\) is the space of the _edge-bubbles_ and \(V^{\mathcal{C}}\) is the space of the _cell-bubbles_. All basis functions on one element with barycentric coordinates are displayed in Figure 2. Visualizations of some basis functions are displayed in Figure 3. The description of \(V_{h}\) is still not complete; so far we have only described the basis on the reference element \(Q\). It remains to introduce the Piola transformation, which is used to map from the reference element to any given physical element, see Monk [27] (Lemma 3.57, Corollary 3.58).
### Discrete weak formulation
We have gathered everything to write down the discrete weak formulation of the time-harmonic Maxwell's equations. We obtain the discrete weak formulation by applying the Galerkin method
Figure 1: Left: Vertex and edge ordering on the reference cell, right: parametrisation of the reference cell.
to the equation (6). Find \(E_{h}\in V_{h}(\Omega)\) such that
\[\int_{\Omega}\left(\mu^{-1}\operatorname{curl}\left(\vec{E}_{h} \right)\operatorname{curl}\left(\vec{\varphi}_{h}\right)-\varepsilon\omega^{2} \vec{E}_{h}\cdot\vec{\varphi}_{h}\right)\ \mathsf{d}x+ i\kappa\omega\int_{\Gamma^{\infty}}\gamma^{T}\left(\vec{E}_{h} \right)\cdot\gamma^{T}\left(\vec{\varphi}_{h}\right)\ \mathsf{d}s\] \[=\int_{\Gamma^{\mathrm{inc}}}\gamma^{T}\left(\vec{E}^{\mathrm{ inc}}\right)\cdot\gamma^{T}\left(\vec{\varphi}_{h}\right)\ \mathsf{d}s\ \ \forall\varphi_{h}\in V_{h}(\Omega). \tag{9}\]
## 3 Numerical approach
In this section, we first describe domain decomposition and afterwards the neural network approximation. In the latter, we also outline how to replace the interface operator by the neural network.
### Domain decomposition
Since the solution of Maxwell's equation system (3) is challenging, as already outlined in the introduction, we apply a non-overlapping domain decomposition method (DDM) [36] in which the domain is divided into subdomains as follows
\[\overline{\Omega}=\bigcup_{i=0}^{n_{\mathrm{dom}}}\overline{ \Omega}_{i}\quad\text{with}\] \[\Omega_{i}\cap\Omega_{j}=\varnothing\quad\forall i\neq j,\]
where \(n_{\mathrm{dom}}+1\) is the number of subdomains. In such a way, every subdomain \(\Omega_{i}\) becomes small enough so that we can handle it with a direct solver. The global solution of the electric field \(E\) is computed via an iterative method, where we solve the time-harmonic Maxwell's equations on each subdomain with suitable interface conditions between the different subdomains. Thus, we obtain a solution \(E_{i}^{k}\) for
Figure 3: Plots of basis functions on \((0,1)^{2}\): low order edge function (left above), high order edge based basis function for \(p=2\) (right above) to the edge \(\mathcal{E}_{0}\), high order cell based basis functions for \(p=2\) of type 1 and 2 (below).
every subdomain \(\Omega_{i}\), where \(k\) denotes the \(k\)-th iteration step. The initial interface condition is given by
\[g_{ji}^{k=0}:=-\mu^{-1}\gamma_{i}^{t}\left(\operatorname{curl}\left(E_{i}^{k=0} \right)\right)-i\kappa S\left(\gamma_{i}^{T}\left(E_{i}^{k=0}\right)\right)=0, \tag{10}\]
where \(S\) describes the interface operator, \(i\) is the index of the current domain, and \(j\) is the index of the neighbouring domain [11]. Afterwards, the electric-field \(E_{i}^{k+1}\) is computed at each step by solving the following system
\[\left\{\begin{array}{ll}\operatorname{curl}\left(\mu^{-1} \operatorname{curl}\left(E_{i}^{k+1}\right)\right)-\omega^{2}\varepsilon E_{i }^{k+1}&=0&\text{in }\Omega_{i},\\ \mu^{-1}\gamma_{i}^{t}\left(\operatorname{curl}\left(E_{i}^{k+1}\right) \right)-i\omega\kappa\gamma_{i}^{T}\left(E_{i}^{k+1}\right)&=0&\text{on }\Gamma_{i}^{ \infty},\\ \gamma_{i}^{T}\left(E_{i}^{k+1}\right)&=\gamma_{i}^{T}\left(E_{i}^{\text{inc} }\right)&\text{on }\Gamma_{i}^{\text{inc}},\\ \mu^{-1}S\left(\gamma_{i}^{t}\left(\operatorname{curl}\left(E_{i}^{k+1} \right)\right)\right)-i\omega\kappa\gamma_{i}^{T}\left(E_{i}^{k+1}\right)&=g_{ ji}^{k}&\text{on }\Sigma_{ij},\end{array}\right. \tag{11}\]
where \(\Sigma_{ij}=\Sigma_{ji}:=\partial\Omega_{i}\cap\partial\Omega_{j}\) denotes the interface of two neighbouring elements and the interface condition is updated by
\[g_{ji}^{k+1}=-\mu^{-1}\gamma_{i}^{t}\left(\operatorname{curl}\left(E_{i}^{k+1 }\right)\right)-i\kappa S\left(\gamma_{i}^{T}\left(E_{i}^{k+1}\right)\right)=- g_{ij}^{k}-2i\kappa S\left(\gamma_{i}^{T}\left(E_{i}^{k+1}\right)\right). \tag{12}\]
In case of success we obtain \(\lim_{k\to\infty}E_{i}^{k}=E|_{\Omega_{i}}\), but this convergence depends strongly on the chosen interface operator \(S\) (see [10, 11]). The implementation of this approach into deal.II was done in [4].
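To make the iteration structure explicit, the following Python sketch outlines the loop over the local solves (11) and the updates (12) for the two-subdomain case with \(S=\mathds{1}\); solve_subdomain and trace are placeholders standing in for the deal.II finite element solve and the tangential trace \(\gamma_{i}^{T}\), respectively:

```python
def ddm_iterate(g, solve_subdomain, trace, kappa, n_steps):
    """Schematic DDM loop for the two subdomains 0 and 1 with S = identity.
    g[(j, i)] holds the interface datum g_ji used when solving on Omega_i."""
    E = {}
    for k in range(n_steps):
        for i, j in [(0, 1), (1, 0)]:
            # local Maxwell solve of system (11) on Omega_i
            E[i] = solve_subdomain(i, g[(j, i)])
        g_new = {}
        for i, j in [(0, 1), (1, 0)]:
            # interface update (12): g_ji^{k+1} = -g_ij^k - 2*i*kappa*trace_i(E_i)
            g_new[(j, i)] = -g[(i, j)] - 2 * 1j * kappa * trace(i, E[i])
        g.update(g_new)
    return E, g
```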
### Our new approach: Neural network approximation of \(S\)
Since the computation of a good approximation of \(S\) is challenging, we examine a new approach in which we attempt to approximate this operator with the help of a neural network (NN). For a first proof of concept, we choose a prototype example and explore whether an NN can approximate the interface values. As it is not feasible to compute the exact interface operator \(S\), we aim to compute \(g_{ij}^{k+l},\ l>0\) with an NN, using \(g_{ij}^{k}\) and \(E_{i}^{k+1}\) as input. Another benefit of this approach is that we can quickly generate a training data set from a classical domain decomposition method, as described in Section 4.4. We choose \(S=\mathds{1}\) for simplicity inside our classical domain decomposition method. Hence, the advantage of this approach is that one can update the interface condition without recomputing the system (11) at each step, raising the hope of reducing the computational cost.
## 4 Neural network training
The first step in neural network approximations is the training process, which is described in this section. Besides the mathematical realization, we also need to choose the software libraries. We utilize deal.II [2] to discretize the time-harmonic Maxwell's equations with the finite element method. The neural network is trained with PyTorch [32]. The exchange of information between the results of the deal.II code and the PyTorch code takes place via the hard disk.
### Basic definitions
First of all, we give a short definition of the neural network type employed in this work, and we introduce the basic parameters. Further information can be found in [5, 8, 24, 3, 30]. The following notation and descriptions of this subsection are mainly based on [23].
**Definition 4.1** (Artificial neuron).: _An (artificial) neuron (also known as unit [12], [5][Section 5.1]) \(u\) is a tuple of the form \((\mathfrak{x},\mathfrak{w},\sigma)\). The components have the following meanings:_
* \(\mathfrak{x}=(\mathfrak{x}_{0},\ldots,\mathfrak{x}_{n})\in\mathbb{R}^{n+1}\) _is the input vector. It contains the information, that the neuron receives._
* \(\mathfrak{w}=(\mathfrak{w}_{0},\ldots,\mathfrak{u}_{n})\in\mathbb{R}^{n+1}\) _is the weight vector, which determines the influence of the individual input information on the output of the neuron. Later,_ \(\mathfrak{w}\) _denotes the weight vector of all neurons._
* \(\sigma:\mathbb{R}\to\mathbb{R}\)_, with_ \(\sigma=\sum_{i=0}^{n}\mathbbm{x}_{i}\mathbbm{u}_{i}\mapsto a\) _is the activation function. It determines the so-called activation level_ \(a\) _from the input and the weights, which represent the output of the neuron._
**Definition 4.2** (Neural network).: _An (artificial) neural (feedforward-) network is a set of neurons \(U\) with a disjoint decomposition \(U=U_{0}\dot{\cup}\ldots\dot{\cup}U_{l}\). The partition sets \(U_{\mathsf{k}},\mathsf{k}=0,\ldots,l\) are called layers. Here, \(U_{0}\) is the input layer. It contains the neurons that receive information from outside. Moreover, \(U_{l}\) is the output layer with the neurons that return the output. Finally, \(U_{1},\ldots,U_{l-1}\) are the so-called hidden layers._
Starting from any neuron \(u\in U_{\mathsf{k}}\), there is a connection to each neuron \(\hat{u}\in U_{\mathsf{k}+1}\) for \(\mathsf{k}=0,\ldots,l-1\). Such a connection illustrates that the output \(a\) of the neuron \(u\) is passed on to the neuron \(\hat{u}\). This property is the reason for the name feedforward network.
Each \(U_{0},\ldots,U_{l-1}\) contains a so-called bias neuron of the form \((0,0,1)\). It has no input, weights and a constant output value \(1\) and only transfers a constant bias in the form of the weight to each neuron of the subsequent layer.
**Remark 4.3**.: _In the following examples all neurons of the layer \(U_{\mathsf{k}}\) will have the same activation function, given by \(\sigma^{(\mathsf{k})}\) for \(\mathsf{k}=0,\ldots,l\). Here, \(D_{\mathsf{k}}:=|U_{\mathsf{k}}|-1\) for \(\mathsf{k}=0,\ldots,l-1\) denotes the number of neurons of the \(\mathsf{k}\)-th layer (without the bias neuron) and \(D_{l}:=|U_{l}|\) is the number of neurons of the output layer._
### Decomposing the domain
Before constructing the NN, we choose the domain, the decomposition and the grid on which the system (11) is solved to obtain the training values because they will influence the network size. The domain in our chosen example, given by
\[\Omega=(0,1)\times(0,1),\]
is divided into two subdomains
\[\Omega_{0}=(0,1)\times(0,0.5)\quad\text{and}\quad\Omega_{1}=(0,1)\times(0.5,1) \quad\text{(see Figure~{}\ref{fig:sub_eq_1})},\]
and the grid on which the FEM is applied is a mesh of \(32\times 32\) elements with quadratic Nedelec elements.
Hence 32 elements, each with 4 degrees of freedom (dofs), are located on the interface in both subdomains. We evaluate the interface condition and the solution at each dof and use the values as the NN's inputs and targets. Therefore, the input contains \(4\cdot\dim(g_{ij})+4\cdot\dim(E_{i})=16\) values, the output consists of \(4\cdot\dim(g_{ji})=8\) values, and we obtain 32 input-target pairs from one computation.
Figure 4: Visualization of the domain \(\Omega\) with the chosen decomposition.
### Neural network construction
Regarding the considerations above, we need an input layer with 16 neurons (without bias) and an output layer with 8 neurons. Furthermore, we use one hidden layer with 500 neurons (without bias). Hence, for the governing network, we have
\[U=U_{0}\dot{\cup}U_{1}\dot{\cup}U_{2}\]
with \(D_{0}=16\), \(D_{1}=500\) and \(D_{2}=8\). Our tests, presented in Section 5, revealed that this is a sufficient size for our purpose. The activation function per layer is chosen as follows:
\[\sigma^{(0)} =id\quad\text{(input layer)},\] \[\sigma^{(1)} =\frac{1}{1+e^{-x}}\quad\text{(hidden layer)},\] \[\sigma^{(2)} =id\quad\text{(output layer)},\]
where \(\sigma^{(1)}\) is known as the sigmoid function, which turned out to be the most effective, since the error could be reduced further and more quickly than with the other functions we tested, e.g.
\[\sigma^{(1)} =\tanh(x),\] \[\sigma^{(1)} =\log\left(\frac{1}{1+e^{-x}}\right)\quad\text{(LogSigmoid)},\] \[\sigma^{(1)} =\max(0,x)+\min(0,e^{x}-1)\quad\text{(CELU)},\] \[\sigma^{(1)} =a\left(\max(0,x)+\min\left(0,b\left(e^{x}-1\right)\right) \right)\quad\text{(SELU)},\] \[\quad\text{with }a\approx 1.0507,\text{ and }b\approx 1.6733.\]
An exception is the ReLU function, which we discuss later in Section 5.4. Moreover, we apply separate networks \(U^{01}\) and \(U^{10}\) of the same shape for the two interface conditions \(g_{01}\) and \(g_{10}\), since it turned out that they are approximated at different speeds and accuracies. The resulting programming code is displayed in Figure 5.
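A minimal sketch consistent with this construction (the authors' actual code is shown in Figure 5; function and variable names here are our own assumptions):

```python
import torch.nn as nn

def make_interface_net(d_in: int = 16, d_hidden: int = 500, d_out: int = 8) -> nn.Sequential:
    # Input and output layers use the identity activation; the hidden
    # layer uses the sigmoid function, as described in Section 4.3.
    return nn.Sequential(
        nn.Linear(d_in, d_hidden),
        nn.Sigmoid(),
        nn.Linear(d_hidden, d_out),
    )

# Separate networks of the same shape for the two interface conditions.
net_01 = make_interface_net()  # approximates g_01
net_10 = make_interface_net()  # approximates g_10
```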
### Training
To obtain enough training data, we vary the boundary condition \(E^{\text{inc}}\) and create training and test values, the latter to monitor the network during training and avoid overfitting. The training and test sets are generated from the boundary values listed in Table 1.
Figure 5: PyTorch code of the implementation of the network construction.
Since we choose \(10\) different boundary values for the training set and \(2\) for the test set, and each of them generates a set of \(32\) training/test values (one per element on the interface), we obtain in total a set of \(32\cdot 10=320\) training values and a set of \(32\cdot 2=64\) test values for both networks. To keep the computation simple in a first set of tests, we choose a small wave number \(\omega=\frac{2\pi}{3}\) and compute the sets with the iterative DDM in \(4\) steps. Afterwards, we use the results \(\left(g_{ij}^{1},E_{i}^{2}\right)\) as the inputs and \(g_{ji}^{3}\) as the targets to train our NNs, applying the mean squared error as the loss function, given by
\[\text{Loss}(\mathbf{\mathsf{w}})=\frac{1}{2}\sum_{i=1}^{N}\|t^{(i)}-y(\mathbf{\mathsf{ x}}^{(i)},\mathbf{\mathsf{w}})\|^{2},\]
where \(N\) denotes the number of input-target pairs (in our case \(N=320\) for the training set and \(N=64\) for the test set), \(t^{(i)}\) is the target vector, \(y\) is the function generated by the network and hence \(y(\mathbf{\mathsf{x}}^{(i)},\mathbf{\mathsf{w}})\) denotes the output of the NN. We refer the reader to Section 5.1 for the specific realization.
As the optimizer, we use the Adam algorithm [19], which is a line search method based on the following iteration rule
\[x^{\rho+1}=x^{\rho}+\alpha^{\rho}p^{\rho},\]
where \(p^{\rho}\) is called the search direction and \(\alpha^{\rho}\) is the step size (or learning rate in the case of NNs) for the iteration step \(\rho\). The search direction of the Adam algorithm depends on four parameters \(\beta_{1}\), \(\beta_{2}\), \(m_{1}\) and \(m_{2}\), where \(\beta_{1}\) and \(\beta_{2}\) are fixed values in the interval \([0,1)\), and \(m_{1}\) and \(m_{2}\) are updated in each step via
\[m_{1}^{0}=m_{2}^{0}=0,\qquad m_{1}^{\rho+1}=\beta_{1}m_{1}^{\rho}+(1-\beta_{1})\cdot \nabla\text{Loss}(\mathbf{\mathsf{w}}^{\rho})\] \[\text{and}\qquad m_{2}^{\rho+1}=\beta_{2}m_{2}^{\rho}+(1-\beta_{2})\cdot\|\nabla \text{Loss}(\mathbf{\mathsf{w}}^{\rho})\|^{2}.\]
The search direction is then given by
\[p^{\rho-1}=-\widehat{m_{1}^{\rho}}\Big{/}\sqrt{\widehat{m_{2}^{\rho}}+\varepsilon}\]
with \(\widehat{m_{1}^{\rho}}=m_{1}^{\rho}/(1-(\beta_{1})^{\rho})\), \(\widehat{m_{2}^{\rho}}=m_{2}^{\rho}/(1-(\beta_{2})^{\rho})\) and \(0<\varepsilon\ll 1\).
The implementation of this training process in PyTorch is displayed in Figure 6.
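As a complement to Figure 6, a condensed sketch of such a training loop, with placeholder data and our own variable names, might look as follows:

```python
import torch
import torch.nn as nn

# Placeholder network and data standing in for the 320 training and 64
# test input-target pairs; the learning rate follows Section 4.4.
net = nn.Sequential(nn.Linear(16, 500), nn.Sigmoid(), nn.Linear(500, 8))
x_train, t_train = torch.randn(320, 16), torch.randn(320, 8)
x_test, t_test = torch.randn(64, 16), torch.randn(64, 8)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-5)

def loss_fn(y, t):
    # Loss(w) = 1/2 * sum_i ||t^(i) - y(x^(i), w)||^2, as defined above
    return 0.5 * ((t - y) ** 2).sum()

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(net(x_train), t_train)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        test_loss = loss_fn(net(x_test), t_test)  # monitored to detect overfitting
```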
\begin{table}
\begin{tabular}{|c|c|} \hline \(E^{\text{inc}}\) for the training set & \(E^{\text{inc}}\) for the test set \\ \hline \(\left(\begin{array}{c}e^{\frac{-(x-0.7)^{2}}{0.008}}\\ 0\end{array}\right)\)\(\left|\begin{array}{c}\left(\begin{array}{c}\cos(\pi^{2}y)+\sin(\pi^{2}x)i\\ \sin(\pi^{2}y)+0.5\cos(\pi^{2}x)i\end{array}\right)\end{array}\right.\) & \(\left(\begin{array}{c}e^{\frac{-(x-0.5)^{2}}{0.003}}\\ 0\end{array}\right)\) \\ \hline \(\left(\begin{array}{c}e^{\frac{-(x-0.2)^{2}}{0.002}}\\ 1\end{array}\right)\)\(\left|\begin{array}{c}\sin(\pi^{2}x)+\sin(\pi^{2}x)i\\ \sin(\pi^{2}y)+0.5\cos(\pi^{2}x)i\end{array}\right.\) & \(\left(\begin{array}{c}\cos(\pi^{2}y)+\sin(\pi^{2}x)i\\ \cos(\pi^{2}y)+0.5\cos(\pi^{2}x)i\end{array}\right)\) \\ \hline \(\left(\begin{array}{c}e^{\frac{-(x-0.7)^{2}}{0.003}}\\ 1\end{array}\right)\)\(\left|\begin{array}{c}\sin(\pi^{2}x)+\sin(\pi^{2}x)i\\ \sin(\pi^{2}x)+0.5\cos(\pi^{2}x)i\end{array}\right.\) & \\ \hline \(\left(\begin{array}{c}e^{\frac{-(x-0.8)^{2}}{0.003}}\\ \sin(\pi^{2}x)\end{array}\right)\)\(\left|\begin{array}{c}\cos(\pi^{2}y)+\sin(\pi^{2}x)i\\ \cos(\pi^{2}x)+0.5\cos(\pi^{2}x)i\end{array}\right.\) & \\ \hline \(\left(\begin{array}{c}e^{\frac{-(x-0.5)^{2}}{0.003}}\\ \cos(\pi^{2}x)\end{array}\right)\)\(\left|\begin{array}{c}\cos(\pi^{2}x)+\sin(\pi^{2}x)i\\ \cos(\pi^{2}y)+0.5\cos(\pi^{2}x)i\end{array}\right.\) & \\ \hline \end{tabular}
\end{table}
Table 1: Boundary values for generating the training set and the test set
The network \(U^{01}\) is trained with the learning rate \(10^{-5}\). The initial training error of \(3.12\) and the test error of \(5.87\) are reduced to \(1.7\cdot 10^{-4}\) and \(3\cdot 10^{-3}\) after \(29\,843\) training steps. For \(U^{10}\), the initial training error of \(0.72\) and the test error of \(1.28\) are reduced to \(3\cdot 10^{-4}\) and \(4\cdot 10^{-3}\) after \(20\,326\) steps with a learning rate of \(10^{-5}\); after further training with a learning rate of \(10^{-6}\) for \(3706\) steps, we finally achieve the training error \(2.9\cdot 10^{-4}\) and the test error \(3\cdot 10^{-3}\).
## 5 Numerical tests
In this section, we present several numerical experiments to demonstrate the current capabilities of our approach. In addition, we highlight and analyze shortcomings and challenges.
### Comparison of new approach and classical DDM
In this first numerical example, we apply the implemented and trained NNs for the following boundary condition
\[E^{\mathrm{inc}}(x,y)=\left(\begin{array}{c}\cos\left(\pi^{2} \left(y-0.5\right)\right)+\sin\left(\pi^{2}x\right)i\\ \cos\left(\pi^{2}y\right)+0.5\sin\left(\pi^{2}x\right)i\end{array}\right),\]
and compute the first interface conditions \(g^{1}_{10}\) and \(g^{1}_{01}\) and the solutions \(E^{2}_{1}\) and \(E^{2}_{0}\) by solving (11) and (12) with the parameters given in Table 2. Afterwards, these values are passed on to the networks \(U^{01}\) and \(U^{10}\). The output they return is then treated as our new interface condition, which we use to solve system (11) one more time. With that, we obtain the final solution. Moreover, we compute the same example with the DDM in 4 steps. The results, displayed in Figure 7, show excellent agreement.
Figure 6: PyTorch code of the implementation of the network training.
### Higher wave numbers
As a second example, we increase the wave number, which leads to a more difficult problem. We therefore repeat the same computation with \(\omega=\pi\) and leave the other parameters (especially the parameters and hyperparameters of the neural networks) unchanged. In contrast to the previous example, the results displayed in Figure 8 show clear differences. While the imaginary part is still well approximated, the real part of the NN solution differs significantly from the DDM solution and shows a discontinuity on the interface.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Parameter & Definition & Value \\ \hline \hline \(\mu\) & relative magnetic permeability & 1.00 \\ \hline \(\varepsilon\) & relative electric permittivity & \(1.49^{2}\) \\ \hline \(\kappa\) & & \(\sqrt{\varepsilon}=1.49\) \\ \hline \(\lambda\) & wave length & 3.00 \\ \hline \(\omega\) & wave number & \(\frac{2\pi}{\lambda}=\frac{2\pi}{3.00}\) \\ \hline & grid size & \(\frac{1}{32}\) \\ \hline \end{tabular}
\end{table}
Table 2: Parameters for the DDM
Figure 7: First example: Real part (above) and imaginary part (below) of the NN solution (left) and the DDM solution (right).
### Refined computational analysis for intermediate wave numbers
A possible reason for the mismatching results in Section 5.2 is the "problem of large wave numbers", which is very well studied for Helmholtz-type problems [13]. The same problem also applies to Maxwell's equations [4]. To verify this conjecture, and because of the very distinct results in Sections 5.1 and 5.2, we attempt two more computations with other wave numbers, namely \(\omega=\frac{2\pi}{2.9}\) and \(\omega=\frac{2\pi}{3.1}\). The results, displayed in Figures 9 and 10 (where we omit the representation of the meshes to make the differences more visible), show that the approximation becomes inaccurate if the wave number differs even slightly from the one we used for the training, regardless of whether it is larger or smaller. Therefore the bad approximation is not due to the large size of the wave number. Instead, it can be assumed that the NNs specialize to the specific wave number they are trained with and "learn along" this value during the training process.
Figure 8: Second example: Real part (above) and imaginary part (below) of the NN solution (left) and the DDM solution (right).
Figure 10: Fourth example: Real part (above) and imaginary part (below) of the NN solution (left) and the DDM solution (right) with \(\omega=\frac{2\pi}{3.1}\).
Figure 9: Third example: Real part (above) and imaginary part (below) of the NN solution (left) and the DDM solution (right) with \(\omega=\frac{2\pi}{2.9}\)
### Comparison of different neural network activation functions: Sigmoid vs. ReLU
As mentioned in Section 4.3, we tested different activation functions to train the NNs before using sigmoid. One of these is the ReLU function given by
\[f(x)=\max(0,x),\]
which is implemented in the PyTorch module torch.nn.functional. This function allows a greater and faster error reduction than the other functions we tested, including sigmoid. In most cases, the test error of the network \(U^{01}\) can be reduced to \(8\cdot 10^{-4}\) after approx. 16000 steps with a learning rate of \(10^{-5}\) and ca. 6500 further steps with a learning rate of \(10^{-6}\), which is almost a quarter of the final error obtained when training the same NN with sigmoid as the activation function (see Section 4.4). Also, the test error of \(U^{10}\) can be reduced more quickly, namely to \(2\cdot 10^{-3}\) after ca. 3500 steps with a learning rate of \(10^{-5}\). However, in other cases we observed that the test error grows again after a short reduction phase while the training error continues to shrink, revealing that the training of our ReLU networks is more susceptible to overfitting. This suspicion is strengthened when we apply the successfully trained ReLU networks to the first example with the same procedure described in Section 5.1. The results displayed in Figure 11 show a discontinuity on the interface. This suggests that even in the fortunate cases in which the test error is reduced very well, we are dealing with overfitting, and the resulting NNs cannot accurately capture the actual problem. Because of the unreliable training of the ReLU networks, it is reasonable to use sigmoid as the activation function instead.
Figure 11: Real part (above) and imaginary part (below) of the NN solution with the use of ReLU (left) and Sigmoid (right) as activation function.
Conclusion
In this contribution, we provided a proof of concept and feasibility study for approximating the interface operator in domain decomposition with a feedforward neural network. These concepts are applied to the time-harmonic Maxwell's equations. We carefully described the numerical framework from the algorithmic and implementation points of view. In the realization, we coupled deal.II (C++) for solving Maxwell's equations with PyTorch for the neural network solution. Afterwards, we conducted various numerical tests, including a comparison of our new approach with classical domain decomposition. We then studied higher wave numbers in more detail, where we detected difficulties; further investigation revealed that the training and testing of the neural network are highly sensitive to the specific wave number. Finally, a comparison of two different neural network activation functions was undertaken. As an outlook, we plan to increase the number of subdomains, to study other wave numbers further, and to apply the method to the three-dimensional Maxwell's equations.
## Acknowledgment
This work is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453).
|
2303.17610 | Ensemble weather forecast post-processing with a flexible probabilistic
neural network approach | Ensemble forecast post-processing is a necessary step in producing accurate
probabilistic forecasts. Conventional post-processing methods operate by
estimating the parameters of a parametric distribution, frequently on a
per-location or per-lead-time basis. We propose a novel, neural network-based
method, which produces forecasts for all locations and lead times, jointly. To
relax the distributional assumption of many post-processing methods, our
approach incorporates normalizing flows as flexible parametric distribution
estimators. This enables us to model varying forecast distributions in a
mathematically exact way. We demonstrate the effectiveness of our method in the
context of the EUPPBench benchmark, where we conduct temperature forecast
post-processing for stations in a sub-region of western Europe. We show that
our novel method exhibits state-of-the-art performance on the benchmark,
outclassing our previous, well-performing entry. Additionally, by providing a
detailed comparison of three variants of our novel post-processing method, we
elucidate the reasons why our method outperforms per-lead-time-based approaches
and approaches with distributional assumptions. | Peter Mlakar, Janko Merše, Jana Faganeli Pucer | 2023-03-29T15:18:00Z | http://arxiv.org/abs/2303.17610v3 | # Ensemble weather forecast post-processing with a flexible probabilistic neural network approach
###### Abstract
Ensemble forecast post-processing is a necessary step in producing accurate probabilistic forecasts. Conventional post-processing methods operate by estimating the parameters of a parametric distribution, frequently on a per-location or per-lead-time basis, which limits their expressive power. We propose a novel, neural network-based method, which produces forecasts for all locations and lead times, jointly. To relax the distributional assumption made by many post-processing methods, our approach incorporates normalizing flows as flexible parametric distribution estimators. This enables us to model varying forecast distributions in a mathematically exact way. We demonstrate the effectiveness of our method in the context of the EUPPBench benchmark, where we conduct temperature forecast post-processing for stations in a sub-region of western Europe. We show that our novel method exhibits state-of-the-art performance on the benchmark, outclassing our previous, well-performing entry. Additionally, by providing a detailed comparison of three variants of our novel post-processing method, we elucidate the reasons why our method outperforms per-lead-time-based approaches and approaches with distributional assumptions.
## 1 Introduction
Forecast post-processing is a crucial task when constructing skillful weather forecasts. This is due to the inherent biases of numerical weather predictions (NWP), which result from initial-condition errors, computational simplifications, and sub-grid parametrizations [3, 40, 18]. All these factors combine into errors that compound as the forecast horizon increases; the forecast is therefore uncertain by nature. To quantify this uncertainty, ensemble forecasts are issued by weather forecasting centers, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) [11]. ECMWF generates ensemble forecasts by perturbing the initial atmospheric conditions and varying the NWP parametrizations. This results in 50 different ensemble members, each expressing a potentially unique future weather situation. However, biases are still present in ensemble forecasts, resulting in a disconnect between the specific observations and the described ensemble distribution.
To mitigate the effects of biases in forecasts, with the aim of constructing calibrated forecast probability distributions [13], forecast providers frequently employ forecast post-processing techniques [40]. These statistical approaches transform raw ensemble forecasts so that they better concur with actual weather variable observations. Post-processing methods vary from simple to complex, from rigid [15, 33, 36, 30] to flexible [24, 21, 29, 39], and encompass a large spectrum of statistical approaches [40]. Another way of distinguishing between post-processing methods is to characterize them by their distributional assumptions or lack thereof. Since the nature of ensemble forecasts is inherently probabilistic (ensemble forecasts quantify the weather forecast uncertainty), the post-processing output should reflect that. Methods that assume the target weather variable distribution are usually simpler to implement; however, they risk assuming the wrong distribution, which can introduce additional biases. Likewise, different weather variables can exhibit different distributions; therefore, tuning and exploration of the best target distribution are required before constructing the model. Methods that do not make such assumptions are usually more complex to implement and require more data for parameter estimation.
Recently, neural networks have started showing promising results in this field [27, 26, 5, 37, 41, 4, 17, 35, 34, 9, 27, 14]. Applications such as that of [34] exhibited state-of-the-art results compared to conventional post-processing techniques. Using the ensemble mean and variance in combination with station embeddings, [34] construct a neural network that estimates the parameters of a normal distribution for each lead time. This approach is further extended in [37] by applying different distribution estimators in place of the normal distribution. To be more specific, they apply the same neural network architecture as a base parameter predictor to three different models: a distributional regression model, a Bernstein quantile regression model [4], and a histogram estimation model. Indeed, their aim to relax the distributional assumption is apparent and also necessary, as many weather variables cannot be effectively described using simple probability distributions. We can make a similar observation in the work by [41], where the authors construct a convolutional neural network for post-processing wind speeds. Similarly to [37], they tried different probability models as the output of their convolutional neural network: a quantized softmax approach, kernel mixture networks [1], and a truncated-normal fitting approach. [41] concluded that the most flexible distribution assessment method exhibited the best performance. This further bolsters the need for sophisticated distribution estimation techniques, as these increase the performance of post-processing algorithms.
To further improve upon existing post-processing techniques, we propose two novel neural network approaches, Atmosphere NETwork 1 (ANET1) and Atmosphere NETwork 2 (ANET2). The novelty of the ANET methods is threefold and can be summarized as follows:
* Two novel neural network architectures for post-processing ensemble weather forecasts
* Joint probabilistic forecasting for all lead times and locations
* Flexible parametric distribution estimation based on normalizing spline flows, implemented in ANET2
Both ANET variants are constructed such that there is only one model for the entire post-processing region and all lead times. This is often not the case with frequently used post-processing methods [6], which typically construct individual models per lead time and sometimes even per station. Our joint location-lead-time method ANET1 exhibits state-of-the-art performance on the EUPPBench benchmark [6] compared to the other submitted methods, which are implemented on a per-lead-time or per-station basis.
ANET2 is the second iteration of our post-processing approach and features an optimized neural network architecture (similar to [34]) and training procedure. Furthermore, ANET2 improves upon ANET1 by relaxing the distribution assumption of ANET1, which assumes that the target weather variable is distributed according to a parametric distribution which, in the case of temperature, is a normal distribution. ANET2 relaxes this assumption by modeling the target distribution in terms of a normalizing spline flow [10, 8, 25, 22]. This enables us to model mathematically exact distributions without specifying a concrete target distribution family. We demonstrate that ANET2 further improves upon ANET1 in a suite of performance metrics tailored for probabilistic forecast evaluation. Additionally, we provide intuition as to why such a type of joint forecasting and model construction leads to better performance by analyzing the feature importance encoded in the ANET2 model.
To demonstrate the performance of the ANET variants we compare four different novel methods for probabilistic post-processing of ensemble temperature forecasts: ANET1, ANET2, ANET2\({}_{\text{NORM}}\), and ANET2\({}_{\text{BERN}}\). This helps us quantify the impact different neural network architectures and distributional models have on the final post-processing performance of our approaches. The description of the aforementioned methods and training procedures is available in Section 2. We follow this with the evaluation results in Section 3 which we further elaborate on in Section 4.
## 2 Methods
### Anet1
ANET1 is a neural-network-based approach for probabilistic ensemble forecast post-processing. The ANET1 architecture, displayed in Figure 1, is tailored to processing ensemble forecasts with a varying number of ensemble members. ANET1 achieves this by first processing the ensemble forecasts for the entire lead time individually, passing each through a shared forecast encoder structure. We concatenate per-station predictors to each ensemble forecast to allow ANET1 to adapt to individual station conditions. These predictors include station and model altitude, longitude, latitude, and land usage, with the addition of a seasonal time encoding, defined as \(\cos(\frac{2\pi d}{365})\), where \(d\) denotes the day of year on which the individual input forecast was issued. The shared forecast encoder transforms each forecast-predictor pair into a high-dimensional latent encoding; these encodings are then weighted by a dynamic attention block and averaged into a single, mean ensemble encoding. This dynamic attention mechanism is implemented with the goal of determining the importance of individual ensemble members in a given weather situation. The mean ensemble encoding is then passed through a regression block, whose outputs are the per-lead-time additive corrections to the ensemble mean and standard deviation. We use the corrected mean and standard deviation as the parameters of the predictive normal distribution.
### Anet2
ANET1 suffers from the distributional assumption drawback, which limits its expressive power. To improve upon ANET1 we developed ANET2, which utilizes normalizing spline flows [10] in place of the normal distribution. Normalizing flows are designed for density estimation and can approximate complex distributions in a tractable and mathematically exact way. ANET2 embraces this methodology, albeit in a modified manner which better suits our context of application, by conditioning the final distributional model on the provided raw ensemble forecast.
#### 2.2.1 Flexible parametric distribution estimation using modified normalizing flows
We begin our overview of the normalizing spline flow density estimation procedure by defining a rational-quadratic spline transformation \(\tilde{T}_{\theta}(x)\), parameterized by \(\theta\), with \(x\) as the target temperature realization. The transformation \(\tilde{T}_{\theta}\) is a strictly increasing and differentiable function and is the key behind the expressive power of normalizing spline flows. The parameter set \(\theta\) includes the spline knots and the spline's values at those knots. Since our goal is to model the target temperature distribution \(F_{temp}\), we can use the change-of-variable approach [25] to express this unknown distribution in terms of the transformation \(\tilde{T}_{\theta}\) and a base distribution \(F_{norm}\) which, in our case, is a univariate normal distribution. The target variable density can then be defined as
\[p_{temp}(x;\theta)=p_{norm}(\tilde{T}_{\theta}(x))\frac{\partial\tilde{T}_{ \theta}(x)}{\partial x}.\]
Since \(p_{norm}\) is the density of a normal distribution with zero mean and standard deviation of one, the final loss function minimized (the negative log-likelihood) for a given sample \(x_{i}\) is
\[L(p_{temp}(x_{i}))=\frac{\tilde{T}_{\theta}(x_{i})^{2}}{2}-\ln(\frac{ \partial\tilde{T}_{\theta}(x_{i})}{\partial x}).\]
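A minimal sketch of this loss for a generic strictly increasing, differentiable transformation (tanh is only a stand-in for \(\tilde{T}_{\theta}\); the additive constant \(\ln\sqrt{2\pi}\) of the normal density is dropped, as in the expression above):

```python
import torch

def flow_nll(x: torch.Tensor, T) -> torch.Tensor:
    # Negative log-likelihood L = T(x)^2 / 2 - ln(dT/dx), with the
    # derivative obtained via automatic differentiation.
    x = x.detach().requires_grad_(True)
    z = T(x)
    dT_dx, = torch.autograd.grad(z.sum(), x, create_graph=True)
    return (0.5 * z ** 2 - torch.log(dT_dx)).mean()

loss = flow_nll(torch.randn(32), torch.tanh)  # tanh stands in for T_theta
loss.backward()  # gradients flow to any parameters inside T
```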
In practice we replace \(\tilde{T}_{\theta}\) with a composition of rational-quadratic spline transformations \(T_{\Theta}\), where
\[T_{\Theta}=\tilde{T}_{\theta_{i}}(\tilde{T}_{\theta_{i-1}}(...)),\]
Figure 1: The neural network architecture of ANET1. The parameter f denotes the number of features in a dense layer. The parameters t and p denote the lead time and the number of per-lead-time parameters required by the normal distribution model. The circular blocks denote the following operations: \(\mathbf{M}\) and \(\mathbf{S}\) the ensemble forecast mean and standard deviation, \(\mathbf{X}\) the element-wise product, \(\mathbf{C}\) concatenation operation, and \(\mathbf{F}\) the softplus activation [32]. The square blocks denote the following operations: \(\mathbf{E}\) corresponds to the shared forecast encoder block, \(\mathbf{A}\) corresponds to the dynamic attention block, and \(\mathbf{R}\) corresponds to the regression block. The variable \(\mathrm{e_{i}}\) denotes the i-th ensemble member in the forecast.
and \(\Theta\) denotes the entire set of individual spline parameters \(\theta_{l}\). ANET2 uses a composition of four rational-quadratic spline transformations, each described by five knot-value pairs. Since ANET2 generates a distribution for each lead time, the parametric distribution described by this model contains 840 parameters (\(21\times 4\times 10\): 21 for each lead time, 10 for each of the 4 spline transformations). To fully describe a rational-quadratic spline we require the spline's knot-value pairs and the value of the spline's derivative at those knots. The knot-value pairs are estimated by the ANET2 neural network (described in more detail in Section 2.2.4). For a specific ensemble forecast with a lead time of \(t\) steps, ANET2 computes \(t\) parameter sets \(\Theta_{j}\), where \(j\in[1,t]\) (in our case, \(t=21\)).
Each \(\theta_{l}\) consists of two vectors, where the first contains the knots and the second the values of the spline. More formally,
\[\theta_{l}:=\{\mathbf{k}_{l},\mathbf{v}_{l}\},\]
where the vector \(\mathbf{k}_{l}\) denotes the spline knots and \(\mathbf{v}_{l}\) the values. We use a neural network to estimate these values based on the input ensemble forecast. The monotonicity of the spline knot-value pairs is ensured using the Softplus function [32]. Therefore, the final vectors of knots \(\mathbf{k}_{l}\) and values \(\mathbf{v}_{l}\) are defined as
\[\mathbf{k}_{l} :=\text{CumSum}([\mathbf{k}^{\prime}_{l,1},1\text{e}^{-3}+\text{ Softplus}(\mathbf{k}^{\prime}_{l,[2,5]})]),\] \[\mathbf{v}_{l} :=\text{CumSum}([\mathbf{v}^{\prime}_{l,1},1\text{e}^{-3}+\text{ Softplus}(\mathbf{v}^{\prime}_{l,[2,5]})]),\]
where \(\mathbf{k}^{\prime}_{l}\) and \(\mathbf{v}^{\prime}_{l}\) refer to the raw neural network outputs for the knot-value pairs, \(\mathbf{k}^{\prime}_{l,1}\) refers to the first element of the vector \(\mathbf{k}^{\prime}_{l}\), while \(\mathbf{k}^{\prime}_{l,[2,5]}\) contains the remaining elements (the same holds true for \(\mathbf{v}^{\prime}_{l}\)). The index into \(\mathbf{k}^{\prime}_{l}\) and \(\mathbf{v}^{\prime}_{l}\) goes up to five as we only have five knot-value pairs per spline. Additionally, we limit the minimal distance between two consecutive knot-value pairs to \(1\text{e}^{-3}\). We implement this restriction to ensure numerical stability; in our testing it does not impact the regression performance. The function CumSum denotes the cumulative sum operation, where the first element is kept identical to that of the input vector. This, in combination with the Softplus function, ensures that the knot-value pairs increase monotonically, in turn guaranteeing the monotonicity of the rational-quadratic spline transformation [16].
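A minimal sketch of this monotone parameterization, assuming raw network outputs of shape \([\ldots,5]\):

```python
import torch
import torch.nn.functional as F

def monotone_knots(raw: torch.Tensor) -> torch.Tensor:
    # Keep the first raw element as-is; turn the remaining elements into
    # positive increments (>= 1e-3 via Softplus) and accumulate them, so
    # the result is strictly increasing along the last axis.
    first = raw[..., :1]
    deltas = 1e-3 + F.softplus(raw[..., 1:])
    return torch.cumsum(torch.cat([first, deltas], dim=-1), dim=-1)

k_raw = torch.randn(21, 5)                         # e.g. one spline per lead time
knots = monotone_knots(k_raw)
values = monotone_knots(torch.randn(21, 5))
```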
However, to provide a full estimate of the transformation we also require the positive derivative values at the knots, denoted as \(d_{l,i}\), where \(i\in[1,...,5]\). [10] propose that these derivatives be estimated by the neural network in much the same way as the knot-value pairs. This definition of the derivative can yield discontinuities in the splines and therefore in the final estimated density. This is not practical for our application context, as we frequently inspect the probability density to determine the most likely weather outcome; a density full of discontinuities conveys unnatural properties and impedes the required analysis. To rectify this, we turn to the work by [16], who propose two derivative estimation schemes that compute the required derivatives from the knot-value pairs. We adopt the second approach in our work, which is expressed as
\[d_{l,i}=\frac{\Delta_{l,i}\cdot\Delta_{l,i-1}}{\delta_{l,i}}\]
for \(i\in[2,4]\). The derivative values at the edge knots are defined as
\[d_{l,1}=\frac{\Delta_{l,1}^{2}}{\delta_{l,2}},\;\;d_{l,5}=\frac{\Delta_{l,4}^{ 2}}{\delta_{l,4}},\]
where \(\Delta_{l,j}\) and \(\delta_{l,j}\) are computed by
\[\Delta_{l,j}=\frac{\mathbf{v}_{l,j+1}-\mathbf{v}_{l,j}}{\mathbf{k}_{l,j+1}-\mathbf{k}_{l,j}}, \;\;\delta_{l,j}=\frac{\mathbf{v}_{l,j+1}-\mathbf{v}_{l,j-1}}{\mathbf{k}_{l,j+1}-\mathbf{k}_{l,j-1}}.\]
Due to our constraint on the minimal differences between consecutive knots and values, we do not have to concern ourselves with the specific edge cases outlined in [16], where the differences in the above equalities would be zero.
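A minimal sketch of this derivative estimator, assuming strictly increasing knot and value tensors of shape \([\ldots,5]\):

```python
import torch

def knot_derivatives(k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Delta_{l,j}: secant slopes between consecutive knots (j = 1..4);
    # delta_{l,j}: slopes of chords skipping one knot (j = 2..4).
    slope = (v[..., 1:] - v[..., :-1]) / (k[..., 1:] - k[..., :-1])
    chord = (v[..., 2:] - v[..., :-2]) / (k[..., 2:] - k[..., :-2])
    inner = slope[..., 1:] * slope[..., :-1] / chord        # d_{l,2..4}
    d_first = slope[..., :1] ** 2 / chord[..., :1]          # d_{l,1} = Delta_1^2 / delta_2
    d_last = slope[..., -1:] ** 2 / chord[..., -1:]         # d_{l,5} = Delta_4^2 / delta_4
    return torch.cat([d_first, inner, d_last], dim=-1)

k = torch.cumsum(torch.rand(21, 5) + 1e-3, dim=-1)  # strictly increasing knots
v = torch.cumsum(torch.rand(21, 5) + 1e-3, dim=-1)  # strictly increasing values
d = knot_derivatives(k, v)                           # positive by construction
```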
Additionally, we do not restrict the spline knots to a predetermined interval. This is contrary to [10], however, in our testing, this constraint relaxation works well and provides no tangible difference in performance. It does, however, eliminate the need for parameter tuning as the knot ranges are determined automatically during optimization.
#### 2.2.2 Anet2norm
ANET2NORM combines the ANET2 neural network model described in Section 2.2.4 with a normal distribution acting as its probability distribution model. Therefore, we model the mean and standard deviation vectors of the
distribution, each containing 21 values corresponding to the lead times. Let us denote the raw ensemble mean and standard deviation vectors as \(\mathbf{\mu}^{\rm e},\mathbf{\sigma}^{\rm e}\). We compute the final model mean and standard deviation vectors \(\mathbf{\mu},\mathbf{\sigma}\) as
\[\mathbf{\mu} =\mathbf{\mu}^{\rm e}+\mathbf{\mu}^{\prime}, \tag{1}\] \[\mathbf{\sigma} =\mathbf{\sigma}^{\rm e}+\text{Softplus}(\mathbf{\sigma}^{\prime}), \tag{2}\]
where \(\mathbf{\mu}^{\prime},\mathbf{\sigma}^{\prime}\) denote the raw neural network estimates for those parameters. We again use the Softplus function to enforce the positivity constraint on the standard deviation. We found that if we use the neural network to predict the mean and standard deviation correction residual (expressed as an additive correction term to the raw ensemble statistics) the model performs better than it would if it were to directly predict the target mean and standard deviation without the residual.
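A minimal sketch of this residual parameterization, with placeholder tensors in place of the actual ensemble statistics and network outputs:

```python
import torch
import torch.nn.functional as F

# Raw ensemble statistics and raw network outputs (placeholders), one
# value per lead time, following equations (1)-(2).
mu_ens, sigma_ens = torch.zeros(21), torch.ones(21)
mu_raw, sigma_raw = torch.randn(21), torch.randn(21)

mu = mu_ens + mu_raw                          # additive mean correction
sigma = sigma_ens + F.softplus(sigma_raw)     # Softplus keeps sigma positive

dist = torch.distributions.Normal(mu, sigma)
nll = -dist.log_prob(torch.randn(21)).mean()  # training criterion of ANET2_NORM
```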
#### 2.2.3 Anet2bern
Similarly to ANET2\({}_{\text{NORM}}\), we form ANET2\({}_{\text{BERN}}\) by combining the neural network parameter estimation architecture of ANET2 with the quantile regression framework described by [4] as its probability distribution model. In this case, the output of the neural network model is a set of 21 Bernstein polynomial coefficient vectors, each containing 13 values. Therefore, the degree of the Bernstein polynomial we fit is 12 (the degree is one less than the number of parameters). As per the suggestion of [4], we train the model by minimizing the quantile loss on 100 equidistant quantiles. To ensure that no quantile crossing can occur, [4] suggest one might restrict the polynomial coefficients to the positive reals. However, in our testing, this limits the expressive power of the model. Therefore, we leave the coefficients unconstrained, as the quantile crossing event is rare in practice [4] and we did not encounter it in our testing (however, one must not neglect this correctness concession, as it can be a source of potential issues if left unchecked). A sketch of the resulting quantile loss is given below.
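A sketch of the Bernstein quantile function and the quantile (pinball) loss; the exact placement of the 100 equidistant levels is our assumption:

```python
import torch
from math import comb

degree = 12                                   # 13 coefficients -> degree 12
taus = torch.linspace(0.01, 0.99, 100)        # 100 equidistant quantile levels (assumed placement)
j = torch.arange(degree + 1)
binom = torch.tensor([comb(degree, i) for i in range(degree + 1)], dtype=torch.float32)
# Bernstein basis evaluated at the quantile levels, shape [100, 13]
basis = binom * taus[:, None] ** j * (1 - taus[:, None]) ** (degree - j)

def quantile_loss(coeffs: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    q = basis @ coeffs                        # predicted quantiles, shape [100]
    u = y - q
    return torch.mean(torch.maximum(taus * u, (taus - 1) * u))  # pinball loss

loss = quantile_loss(torch.randn(13), torch.tensor(1.5))
```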
#### 2.2.4 Parameter estimation neural network architecture
To determine the values of these spline parameters (in the case of ANET2\({}_{NORM}\), the parameters of a normal distribution, and in the case of ANET2\({}_{BERN}\), the coefficients of the Bernstein polynomial) we use a dense neural network, whose architecture is displayed in Figure 2. ANET2 conducts post-processing jointly for the whole lead time and all spatial locations. Therefore, the input to the network is an ensemble forecast with \(m\) ensemble members, each containing 21 forecasts (we discuss the training dataset and training procedure in the following section). ANET2 first computes the ensemble forecast mean and variance. These two vectors, each containing 21 elements, are then concatenated with the per-station predictors and seasonal time encodings to form the input to the neural network. The per-station predictors include station and model altitude, longitude, latitude, and land usage. The seasonal time encoding is defined as \(\cos(\frac{2\pi d}{365})\), where \(d\) denotes the day of year the forecast was issued. There are a total of six dense layers in the ANET2 neural network. All dense layers, except for the last one, are followed by a SiLU [20] activation, and each of the layers, except for the first and last, is preceded by a dropout layer [38] with a dropout probability of 0.2. Each of these layers is a residual layer, meaning that the output of the dense layer and its subsequent activation is added to that layer's input, which then forms the output of the residual block (inspect Figure 2 for more details). The final layer produces 21 sets of parameters \(\theta\), representing the parametric distribution for each forecast time.
Figure 2: The neural network architecture of ANET2. The parameter f denotes the number of features in a dense layer. The parameters t and p denote the lead time and the number of per-lead-time parameters required by the normalizing flow. The circular blocks denote the following operations: \(\mathbf{M}\) denotes the computation of the ensemble forecast mean, \(\mathbf{S}\) block denotes the computation of the standard deviation of the ensemble forecast, and \(\mathbf{C}\) denotes a concatenation operation.
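A structural sketch of this network (the hidden width and input size are our assumptions; the paper fixes the layer count, activations, dropout, and the \(21\times p\) output):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width: int, p_drop: float = 0.2):
        super().__init__()
        # Dropout precedes the dense layer; SiLU follows it.
        self.net = nn.Sequential(nn.Dropout(p_drop), nn.Linear(width, width), nn.SiLU())

    def forward(self, x):
        return x + self.net(x)  # dense-layer output added to the block input

class ParamNet(nn.Module):
    # d_in = 48 assumes 21 means + 21 variances + 5 station predictors + 1 time encoding.
    def __init__(self, d_in: int = 48, width: int = 256, lead_times: int = 21, p: int = 40):
        super().__init__()
        self.lead_times, self.p = lead_times, p
        self.inp = nn.Sequential(nn.Linear(d_in, width), nn.SiLU())            # layer 1
        self.body = nn.Sequential(*[ResidualBlock(width) for _ in range(4)])   # layers 2-5
        self.out = nn.Linear(width, lead_times * p)                            # layer 6, no activation

    def forward(self, x):
        theta = self.out(self.body(self.inp(x)))
        return theta.view(-1, self.lead_times, self.p)  # one parameter set per lead time

theta = ParamNet()(torch.randn(8, 48))  # batch of 8 inputs -> [8, 21, 40]
```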
### Training procedure
The training procedure is different between ANET1 and the remaining ANET2 variants. The reason for this discrepancy was the fact that ANET1 preceded ANET2 in development. We describe both approaches and give our arguments as to why we modified the training procedure with ANET2.
In both cases, we used the datasets provided by the EUPPBench benchmark [6] to construct the model. The pilot study focused on the post-processing of temperature forecasts for stations in a limited region of Europe. This study included two datasets. The first dataset, denoted as \(D_{11}\), consisted of 20-year _re-forecasts_ [6] for the years \(2017,2018\), with 209 time samples for both years. The ensemble for this dataset counted \(m=11\) members; \(D_{11}\) was the designated training dataset. The second dataset, denoted as \(D_{51}\), included _forecasts_ for \(2017,2018\), with 730 time samples for both years. The ensemble for this dataset counted \(m=51\) members; \(D_{51}\) was the designated test dataset. We performed post-processing for 229 stations, which are the same in both datasets. Each training batch consisted of 256 randomly selected samples across all stations, years, and time samples.
The datasets described above and their designated uses were equal across ANET1 and all ANET2 variants. The main difference between the training procedures of ANET1 and the ANET2 variants lies in how we split the \(D_{11}\) dataset into the training and validation subsets.
#### 2.3.1 ANET1 training procedure
To create the training-validation split, we randomly partitioned the \(D_{11}\) dataset across all stations, years, and time samples. The first partition, consisting of 80 percent of \(D_{11}\), was the final training subset; the remaining 20 percent formed the validation dataset. We select the model that exhibits the lowest validation loss (the loss function being the negative log-likelihood) as the final candidate model. Additionally, the model did not use the validation or test datasets for parameter estimation during training. This way we select the model that minimizes the unseen-data loss in a particular training setup. The random partitioning required us to implement early stopping conditions, as otherwise the training quickly led to overfitting due to the distributional similarity of the training and validation datasets. Gradient descent was our optimization procedure of choice, in conjunction with the Adam optimizer [23] in the PyTorch [31] framework with its default parameters, a batch size of 128 samples, a learning rate of \(10^{-3}\), and a weight decay of \(10^{-9}\). If the validation loss did not improve for more than 20 epochs, the training was terminated. We reduced the learning rate by a factor of 0.9 after the validation loss plateaued for 10 epochs to increase numerical stability.
Even though ANET1 managed to perform well in the final evaluation against competing methods [6], this training set-up poorly reflected the nature of the train-test relationship. It also led to stability issues. We rectified this in the next iteration of the training procedure for ANET2.
#### 2.3.2 ANET2 variants training procedure
To rectify the issues of the previous training approach, we abandoned the random-split approach. Instead, we formed the validation subset of \(D_{11}\) from the re-forecasts corresponding to the year 2016, while we used all the remaining data to train the model. This better reflects the train-test dynamic of \(D_{11}\) and \(D_{51}\), measuring the "out-of-sample" predictive power of the model, as the validation subset is not sampled from the same years as the training one. Similarly to ANET1, we select the model that exhibits the lowest validation loss (ANET2 and ANET2\({}_{\text{NORM}}\) minimize the negative log-likelihood, while ANET2\({}_{\text{BERN}}\) is trained by minimizing the quantile loss) as the final candidate model, without using the validation and test datasets for parameter estimation during training. This way we select the model that minimizes the unseen-data loss in a particular training setup; however, the training procedure is now more stable and less prone to overfitting. The loss functions were minimized using gradient descent and the Adam optimizer in the PyTorch framework with its default parameters, a batch size of 256 samples, a learning rate of \(10^{-3}\), and a weight decay of \(10^{-6}\). A larger weight decay helped with the stability of the learning procedure, as ANET2's distribution modeling approach is more flexible than ANET1's and therefore required additional regularization. Finally, we also reduced the learning rate by a factor of 0.9 after the validation loss plateaued for 10 epochs to increase numerical stability.
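A sketch of this optimization setup, with stand-ins for the model and the per-epoch routines (not the authors' code):

```python
import torch

model = torch.nn.Linear(48, 840)                      # stand-in for the ANET2 network
def run_train_epoch(m, opt): pass                     # stand-in training epoch
def run_validation(m): return torch.rand(()).item()   # stand-in validation loss

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.9, patience=10)

best_loss = float("inf")
for epoch in range(100):
    run_train_epoch(model, optimizer)
    val_loss = run_validation(model)
    scheduler.step(val_loss)                          # plateau-based LR reduction
    if val_loss < best_loss:                          # keep the lowest-validation-loss model
        best_loss = val_loss
        torch.save(model.state_dict(), "best_model.pt")
```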
## 3 Results
In this section we present the evaluation of ANET1, ANET2, ANET2\({}_{\text{NORM}}\), and ANET2\({}_{\text{BERN}}\) on the \(D_{51}\) test dataset. We evaluate the performance of the above methods using the continuous ranked probability score (CRPS) [13], bias, quantile loss (QL), and quantile skill score (QSS) [4].
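For reference, the sample-based CRPS of an \(m\)-member ensemble can be estimated directly from the members; a minimal sketch (our own, not the authors' implementation):

```python
import numpy as np

def crps_ensemble(ens: np.ndarray, y: float) -> float:
    # CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, estimated from samples.
    term_obs = np.mean(np.abs(ens - y))
    term_spread = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term_obs - term_spread

rng = np.random.default_rng(0)
score = crps_ensemble(rng.normal(size=51), 0.3)  # e.g. one 51-member forecast
```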
We forgo a direct comparison of the ANET variants against conventional post-processing approaches, as ANET1 was already evaluated in detail against other methods in [6]. The conclusion from that comparison was that, while most post-processing methods performed similarly, ANET1 achieved the lowest CRPS and further attenuated the error variability in the day-night cycle. ANET1's advantage over other approaches in terms of CRPS is most prominent at high-altitude stations (stations with altitudes greater than 1000 meters). For more details about this comparison and the included methods, please refer to [6].
Our evaluation of the ANET variants is displayed in Figure 3 and Table 1. In the case of QSS, we use ANET1 as the reference model, quantifying the performance gain of ANET2 variants relative to ANET1. We compute the CRPS and bias by averaging them across all stations and time samples, resulting in a per-lead-time performance report. The performance values in Table 1 are a result of averaging the corresponding metrics across the entire test dataset.
### Impact of improved network architecture and training procedure
To quantify the contributions of the ANET2 neural network architecture and training procedure, we look at the performance differences between ANET1 and the ANET2 variants. All ANET2 variants perform better on average than ANET1 across all lead times in terms of CRPS and QSS. The differences between the individual ANET2 variants are smaller, with ANET2 leading the pack. We notice that ANET2\({}_{\text{NORM}}\) consistently outperforms ANET1 both in terms of average CRPS and QSS, which is further corroborated by the results in Table 1. However, both predict the same target distribution, that being the normal distribution. These performance enhancements of ANET2\({}_{\text{NORM}}\) can therefore be attributed to the architecture and training protocol changes relative to ANET1. ANET2\({}_{\text{NORM}}\) also exhibits the lowest bias amongst all methods. This could be due to the symmetric nature of the target normal distribution of ANET2\({}_{\text{NORM}}\). Conversely, ANET1 outputs the same distribution as ANET2\({}_{\text{NORM}}\) but has a higher bias. We believe that this difference in bias is due to the less stable and less well-aligned training procedure of ANET1, which results in sub-optimal convergence.
### Forecast calibration
To further quantify the probabilistic calibration of the individual methods we use rank histograms [19], displayed in Figure 3. The results are aggregated across all stations, time samples, and lead times, for each quantile. This gives us an estimate of how well the individual methods describe the distribution of the entire \(D_{51}\) dataset. ANET1 and ANET2\({}_{\text{NORM}}\) exhibit similar rank histograms because both use a normal distribution as their target, implying similar deficiencies. The standout models are ANET2, which uses the modified normalizing flow approach, and ANET2\({}_{\text{BERN}}\), which is based on Bernstein polynomial quantile regression. Both yield much more uniform histograms than the remaining alternatives. For example, ANET2\({}_{\text{NORM}}\) and ANET1 seem to suffer from over-dispersion, as implied by the hump between the 20-th and 40-th quantiles. We can observe similar phenomena with other post-processing methods that fit a normal distribution to temperature forecasts [6]. ANET2 and ANET2\({}_{\text{BERN}}\) do not exhibit this specific over-dispersion, apart from a small over-dispersion between the 10-th and 30-th quantiles. Additionally, both methods under-predict the upper quantiles, albeit to a lesser degree than ANET1 and ANET2\({}_{\text{NORM}}\). We also found that the ANET2\({}_{\text{BERN}}\) method exhibits artefacts that are most pronounced at the low and high quantiles. We believe that this is due to the nature of the Bernstein quantile regression method, where the fixed degree of the polynomial limits its expressive power, especially at the quantile edges. Overall, ANET2 displays the most uniform rank histogram of all evaluated methods. A sketch of the rank-histogram computation is given below.
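A minimal sketch of such a rank-histogram computation from predicted quantiles, with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
pred_q = np.sort(rng.normal(size=(5000, 99)), axis=1)  # 99 predicted quantiles per forecast case
obs = rng.normal(size=5000)                            # corresponding observations

# Rank of each observation among its predicted quantiles (0..99); a
# calibrated forecast yields a uniform histogram over the ranks.
ranks = (pred_q < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=100) / obs.size    # relative frequency per rank
```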
### Per-altitude and per-station performance
Station altitude correlates with a decrease in model predictive performance, as we show in Figure 5. Therefore, we investigate the altitude-related performance of each model in terms of QSS over three station altitude intervals: \((-5,800]\), \((800,2000]\), and \((2000,3600)\), displayed in Figure 4. We compute the per-altitude QSS by aggregating stations whose altitudes fall into a specific interval. We then average the QL across all lead times, time samples, and corresponding stations, producing the QL on a per-quantile level. We use ANET1 as the reference model for the QSS. Looking at Figure 4, we can see that all ANET2 variants improve upon ANET1. The only exceptions are ANET2\({}_{BERN}\) and ANET2\({}_{NORM}\), which exhibit roughly equal QL to ANET1 for certain quantile levels in the altitude intervals \((-5,800]\) and \((2000,3600)\), respectively. ANET2 exhibits the highest QSS of all methods. Indeed, ANET2 outperforms all the remaining methods over all quantile levels and all altitudes, with the exception of the lower quantile levels at the stations with the lowest altitudes, where ANET2\({}_{\text{NORM}}\) performs best. The closest method, exhibiting similar trends in improvement to ANET2, is ANET2\({}_{\text{BERN}}\). In our opinion, this and the results we show
in Figure 3 further bolster the value of flexible distribution estimators. Similarly, by observing the average CRPS ranking per station, shown in Figure 5, we can see that ANET2 exhibits the best CRPS at 204 of the total 229 stations. ANET2\({}_{\text{BERN}}\) is the most performant method at 18 stations, with ANET2\({}_{\text{NORM}}\) leading in CRPS at only 7 locations. ANET1 is not displayed, as its CRPS does not outperform any of the ANET2 variants at any of the stations.
## 4 Discussion
The results we presented provide a clear argument for the use of neural networks in conjunction with flexible distribution estimation methods. Even in the case of the initial run of the EUPPBench dataset with a limited number of predictors and one target variable, temperature, neural networks can tangibly increase the state-of-the-art performance in post-processing. The comparative study we conducted between ANET1, the improved ANET2, and its variants, revealed useful guidelines for future method research and development.
\begin{table}
\begin{tabular}{l|l|l|l} & CRPS & Bias & QL \\ \hline ANET2\({}_{\text{NORM}}\) & 0.940 & **0.038** & 0.373 \\ ANET2\({}_{\text{BERN}}\) & 0.935 & 0.076 & 0.367 \\ ANET2 & **0.923** & 0.069 & **0.363** \\ ANET1 & 0.988 & 0.092 & 0.386 \\ \end{tabular}
\end{table}
Table 1: CRPS, bias, and QL (quantile loss) for all evaluated methods averaged over all stations, time samples, and lead times for the \(D_{51}\) test dataset. The values in bold represent the best results for each metric.
Figure 3: (Top row): The CRPS, bias, and QSS for all ANET variants. A lower CRPS is better, a bias value closer to zero is better, and a higher QSS is preferable. (Bottom row): The rank histogram of all ANET variants. A more uniform histogram (column heights closer to the black dashed line) is better. Histograms with central humps imply over-dispersion while histograms with outliers on the edges represent under-dispersion.
### ANET1 versus ANET2
First, our results corroborate the findings and practices outlined by [34]. ANET1 uses a per-ensemble-member dynamic attention mechanism with the idea of determining the importance of individual ensemble members, conditioned on the predictors and the weather situation. While the idea is sound in our opinion, further testing with ANET2 revealed that performing regression on the ensemble mean and variance resulted in equal or even slightly better performance, with a similar number of model parameters. This result is in line with many existing post-processing approaches [34]. We also evaluated the effects of including additional forecast statistics in the input, such as the minimal, maximal, and median temperature. We found that these do not impact the model's predictive performance. We conclude that for the EUPPBench dataset v1.0 the ensemble statistics such as the mean and variance contain enough information for the formation of skillful probabilistic temperature forecasts. However, it is likely that we have not yet found an efficient way of extracting information from individual ensemble members.
### ANET2 versus ANET2\({}_{\text{NORM}}\) and ANET2\({}_{\text{Bern}}\)
ANET2\({}_{\text{NORM}}\) operates by assuming the target weather distribution. However, this method is outperformed by the two more flexible methods when observing the mean QSS and CRPS. Similar observations were made by other researchers [36, 41], further underlining the need for flexible distribution estimators. Of the two flexible approaches we implemented, ANET2 outperforms ANET2\({}_{\text{BERN}}\) in all metrics. The ability to model the probability density through a cascade of spline transformations seems to offer greater flexibility than the Bernstein polynomial approach. Additionally, ANET2 does not suffer from the quantile crossing issue that can occur with Bernstein quantile regression [4]. ANET2 produces a probabilistic model from which we can evaluate the exact density and distribution and draw samples from the distribution.
### Joint lead time forecasting
Where we believe ANET2 and ANET1 innovate compared to previously applied neural network approaches is in the treatment of the forecast lead time. All ANET variants issue predictions for the entire lead time: each is a single model for all lead times and all stations, taking as input the entire ensemble forecast (the whole lead time). This enables ANET to model dependencies between different forecast times. These ideas are further corroborated by investigating the feature importance generated using input feature permutation [12]. We investigate how different lead times in the input help form the post-processed output for other lead times.
We display the input-variable importance relative to a target lead time in Figure 6. Each row corresponds to a specific lead time in the post-processed output, while each column denotes a specific lead time in the ensemble forecast that forms the model input. In many cases, off-diagonal elements of a row exhibit high importance, implying that ensemble forecasts for different lead times contribute to the correction for a specific target lead time. For example, when ANET2 predicts the distribution for the first target lead time (row denoted with zero), it relies heavily on the first three input forecast lead times (the first three columns). We can also observe that past forecasts exhibit higher importance than future forecasts, relative to a specific target lead time. This is implied by the darker
Figure 4: QSS for all compared methods relative to stations with different altitudes. ANET1 is used as the reference model, therefore, the \(y\)-axis denotes the percentage improvement compared to ANET1.
lower triangle of the importance matrix (the area under the blue dashed line). Likewise, we can identify an interesting periodic trend when observing the columns corresponding to lead times at noon. We can see that these forecasts non-trivially impact the post-processing of any target lead time. We hypothesize that this is because the daily temperature, which frequently reaches its maximum value at noon (due to the six-hour resolution), plays an important role in the amount of heat retained throughout the night (absent all other predictors). Since the first noon forecast is the most accurate, it acts as an estimator, with consecutive noon forecasts decreasing in importance due to forecast errors. A sketch of the permutation-importance computation follows.
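A minimal sketch of this per-lead-time permutation importance, with a stand-in model and placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))             # stand-in inputs: one value per lead time
Y = X + 0.1 * rng.normal(size=X.shape)     # stand-in targets

def predict(x):                            # stand-in for the trained model
    return x

def loss_per_lead_time(y_hat, y):
    return np.mean((y_hat - y) ** 2, axis=0)           # per-output-lead-time loss, shape [21]

base = loss_per_lead_time(predict(X), Y)
importance = np.zeros((21, 21))                         # rows: output lt, cols: input lt
for j in range(21):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])                # destroy the information at input lead time j
    importance[:, j] = loss_per_lead_time(predict(Xp), Y) - base
```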
Additionally, looking at the CRPS comparison of ANET1 with other state-of-the-art post-processing techniques evaluated on the EUPPBench dataset (page 17, Figure 3 in [6]), we can observe that ANET1 exhibits better correction for temperatures at midnight. The remaining methods suffer from stronger periodic spikes in error when post-processing those lead times. However, the majority of the methods implemented in [6] (except for reliance calibration) operate on a single lead time. If we turn our attention again to Figure 6 and look at each labeled row (corresponding to midnight forecasts), we can see that the importance is spread out over multiple input lead times. Therefore, it seems that the information important for post-processing forecasts at midnight is not solely concentrated at that time but is spread out. This might explain why our method can better suppress forecast errors at these lead times.
### Modified derivative estimator
In this section we present our modification to the derivative estimator of ANET2's distribution model which is based on normalizing spline flows [10]. The implementation described by [10] requires the estimation of three sets of parameters for the spline transformations: spline knots, spline values at the knots, and spline derivatives at the knots. [10] suggest that these parameters be estimated by a neural network. However, in our testing, this resulted in sharp discontinuities in the probability density of the final estimated distribution, as can be seen in Figure 7. This is because the values of the derivatives are selected independently of the knots and values, resulting in potential discontinuities. [10] state that this induces multi-modality into the density. However, this might hamper future efforts for determining the most likely predicted weather outcome as sharp jumps in the density could introduce noise around the modes. To produce smoother densities, we modified the derivative estimation procedure such that it now entirely depends on the spline knots and values. This modified derivative estimator was described
Figure 5: (Left): Average per-station CRPS relative to the absolute difference between the model and station altitude. We used median filtering with a kernel size of 15 to suppress outliers. An increase in CRPS correlates with an increase in the absolute difference in altitude. (Right): Average per-station CRPS. The number of cases in which a specific method outperformed the remaining is denoted in the legend. ANET1 is not displayed since it never performed better versus the ANET2 variants.
by [16] and offers certain spline continuity guarantees. The resulting probability density is much smoother (right panel in Figure 7) and in our tests, the modified distribution estimator performs equally well relative to the default derivative estimator in terms of predictive power in our test scenario. Another positive attribute of the modified derivative estimation approach is the fact that it reduced the required number of parameters our neural network has to estimate by a third. This is because in our case the derivatives are estimated from the existing spline knots and values, while the default implementation required the explicit estimation of all three sets of parameters by the neural network. For example, ANET2 issues a probabilistic forecast for each lead time. If we use a normalizing spline flow with \(s\) consecutive spline transformations, each containing \(k\) knots, this results in \(21\times(s\times k\times 3)\) parameters for the default derivative estimation implementation (21 for each lead time, times \(s\) for each spline, times \(k\) for the number of knots, times 3 for each set of parameters - knots, values, derivatives). Contrary to that, the modified derivative approach requires \(21\times(s\times k\times 2)\) parameters, as the derivatives are estimated from the remaining parameters.
### ANET2 drawbacks
The enhanced distributional flexibility of ANET2 comes with two drawbacks: an increased number of parameters in the distribution model, and increased execution time. The execution time and distribution model parameter count comparison is displayed in Table 2. We can see that ANET2 is the slowest of all three variants, exhibiting a roughly five times slower inference time on the \(D_{51}\) dataset compared to ANET2BERN. ANET2NORM has the lowest execution time, followed by ANET2BERN. This difference in time is mainly due to the sequential nature of the normalizing spline flow: input data has to first pass through a cascade of spline transformations before the density can be estimated. Additionally, when a data sample passes through a spline transformation it has to locate the appropriate spline bin or interval, which incurs the additional computational burden of searching for that bin ([10] suggest the use of bisection to speed up this search; however, we do not implement this procedure). The execution times are still small in an absolute sense, as all methods require less than a minute to post-process the entire \(D_{51}\) test dataset, which contains 167170 forecasts. Nevertheless, we must keep this in mind as these computational burdens might compound in a more complex setting with multiple forecast variables in a spatial context.
We can also see that ANET2 requires the highest number of parameters for its distribution model. Although ANET2 outperforms ANET2BERN in all forecast evaluation metrics, we have reached the region of diminishing returns: ANET2 requires three times as many parameters as ANET2BERN to produce these improvements.
Still, even in the face of these drawbacks, we believe that ANET2 is the most compelling of the three compared approaches in the EUPPBench setting due to its superior distribution estimation performance and its exact probability distribution estimation with no potential quantile crossing. These drawbacks would perhaps require attention in a more complex forecast post-processing scenario; however, this requires additional research and testing.
Figure 6: Importance of input forecast lead times (\(x\) axis) relative to ANET2’s post-processed lead time outputs (\(y\) axis). Values on the diagonal (blue dashed line) represent the importance of each input lead time to its corresponding post-processed lead time output. Example: the column labeled 24 contains the importance measures of the forecast issued at that lead time on all lead times in the post-processed output.
## 5 Conclusion
In this work we introduced the ANET2 forecast post-processing model to combat two main issues plaguing current state-of-the-art techniques: the lack of expressive power in the parameter estimation and distribution models. We addressed the first by developing a novel neural network architecture and training procedure, processing the entire forecast lead time at once. By implementing a modified normalizing flow approach we tackled the second issue, which resulted in the creation of a flexible, mathematically exact distribution model. We demonstrated the effectiveness of ANET2 against the current state-of-the-art method ANET1, showing that ANET2 outperforms ANET1 in all tested metrics.
Figure 7: (Top): ANET2 with the default derivative estimator for the normalizing spline flow, as described by [10]. (Bottom): ANET2 with the derivative estimation modification based on [16]. QF denotes the plot containing the quantile function (inverse cumulative distribution function). The ANET2 variant with the modified derivative estimator on the bottom suppresses sharp non-linearities, resulting in more "natural" probability densities and a reduced number of parameters that the model needs to predict. The observations are generated by sampling two normal distributions with equal variance at different mean temperatures.

This is not to say that ANET2 is without flaws. Of all the compared methods, ANET2 is the slowest in terms of inference time (about 10 times slower compared to ANET\({}_{\text{2NORM}}\), requiring 36 seconds to process the D\({}_{51}\) dataset). Likewise, the distribution model described by the normalizing flow requires the most parameters of all evaluated models. While this is not an issue in the context of the initial EUPPBench experiment, these drawbacks have to be kept in mind if one wishes to scale ANET2 to a spatial domain with more than one weather variable to be post-processed at the same time.
Still, we consider ANET2 to be a significant contribution to the field of forecast post-processing. To the best of our knowledge, ANET2 is the first method that utilizes normalizing flows to construct weather probability distributions based on weather forecasts. When combined with the novel neural network parameter estimation model, ANET2 achieves state-of-the-art post-processing results. Our intention in introducing this new tool for weather forecast post-processing is to empower national weather forecast providers to generate higher quality probabilistic models with increased confidence.
### Data Availability
The data we used in this study is part of the EUPPBench dataset. However, a subset of the dataset pertaining to the Swiss station data is not freely available. For more information about the dataset please refer to [6]. The EUPPBench dataset is available at [7].
### Code Availability
All ANET variants are open source and available on GitHub, with ANET1 accessible at [2], and ANET2 at [28].
## Acknowledgments
We would like to thank EUMETNET for providing the necessary resources, enabling the creation of the EUPPBench benchmark, and our colleagues behind the benchmark, whose dedicated work led to its realization.
This work was supported by the Slovenian Research Agency (ARRS) research core funding P2-0209 (Jana Faganeli Pucer).
|
2305.13920 | Quantifying local and global mass balance errors in physics-informed
neural networks | Physics-informed neural networks (PINN) have recently become attractive for
solving partial differential equations (PDEs) that describe physics laws. By
including PDE-based loss functions, physics laws such as mass balance are
enforced softly in PINN. This paper investigates how mass balance constraints
are satisfied when PINN is used to solve the resulting PDEs. We investigate
PINN's ability to solve the 1D saturated groundwater flow equations for
homogeneous and heterogeneous media and evaluate the local and global mass
balance errors. We compare the obtained PINN's solution and associated mass
balance errors against a two-point finite volume numerical method and the
corresponding analytical solution. We also evaluate the accuracy of PINN in
solving the 1D saturated groundwater flow equation with and without
incorporating hydraulic heads as training data. We demonstrate that PINN's
local and global mass balance errors are significant compared to the finite
volume approach. Tuning the PINN's hyperparameters, such as the number of
collocation points, training data, hidden layers, nodes, epochs, and learning
rate, did not improve the solution accuracy or the mass balance errors compared
to the finite volume solution. Mass balance errors could considerably challenge
the utility of PINN in applications where ensuring compliance with physical and
mathematical properties is crucial. | Md Lal Mamud, Maruti K. Mudunuru, Satish Karra, Bulbul Ahmmed | 2023-05-11T00:21:51Z | http://arxiv.org/abs/2305.13920v2 | # Do Physics-informed Neural Networks Satisfy Local and Global Mass Balance?
###### Abstract
Physics-informed neural networks (PINN) have recently become attractive for solving partial differential equations (PDEs) that describe physics laws. By including PDE-based loss functions, physics laws such as mass balance are enforced softly. In this paper, we examine how well mass balance constraints are satisfied when PINNs are used to solve the resulting PDEs. We investigate PINN's ability to solve the 1D saturated groundwater flow equations for homogeneous and heterogeneous media and evaluate the local and global mass balance errors. We compare the obtained PINN's numerical solution and associated mass balance errors against a two-point finite volume numerical method and the corresponding analytical solution. We also evaluate the accuracy of PINN in solving the 1D saturated groundwater flow equation with and without incorporating hydraulic heads as training data. Our results showed that one needs to provide hydraulic head data to obtain an accurate solution for the head in the heterogeneous case. That is, only after adding data at some collocation points was the PINN methodology able to give a solution close to the finite volume and analytical solutions. In addition, we demonstrate that even after adding hydraulic head data at some collocation points, the resulting local mass balance errors are still significant at the other collocation points compared to the finite volume approach. Tuning the PINN's hyperparameters, such as the number of collocation points, epochs, and learning rate, did not improve the solution accuracy or the mass balance errors compared to the finite volume solution. Mass balance errors may pose a significant challenge to PINN's utility in applications where satisfying physical and mathematical properties is crucial.
PINN, physics-informed neural networks, mass balance errors, porous media flow, analytical and numerical modeling.
## 1 Introduction
Partial differential equations (PDEs) represent physical laws describing natural and engineered systems, e.g., heat transfer, fluid flow, wave propagation, etc., and are widely used to predict physical variables of interest in natural and engineered phenomena in both spatial and temporal domains. Often, the PDE solutions are combined with laboratory and field experiments to help stakeholders make informed decisions. Therefore, accurate estimation of variables of interest is critical. Numerous methods are used to solve PDEs, including analytical [1, 2, 3], finite difference [4, 5], finite volume [5, 6, 7], and finite element [5, 8] methods.
Recently, due to the advances in machine learning (ML) and physics-informed machine learning (PIML) methods, physics-informed neural networks (PINN) [9, 10, 11, 12, 13] are gaining popularity as a tool to solve PDEs. In this approach, one treats the primary variable in the PDE to be solved as a neural network. Then, using auto-differentiation, the underlying PDE of interest is discretized, and modern optimization frameworks are used to minimize a loss function. It captures losses due to the PDE residual along with the losses due to the initial and boundary conditions. PINN has been used to solve numerous scientific and engineering problems [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. These examples in the literature have demonstrated that PINN can handle complex geometry well. However, it has been shown that PINN's training is computationally expensive, and the resulting solution is not always accurate compared to its counterpart numerical solutions [24, 25, 26, 21]. For instance, the training process can take tens of thousands of epochs, and the absolute error of forward solutions is \(\thickapprox 10^{-5}\) because of the challenges involved in solving high-dimensional non-convex optimization problems [27, 25].
Several studies have addressed PINN's drawbacks in the areas of computational time and accuracy. For instance, Jagtap et al. [24] address the accuracy of PINN by imposing continuity/flux across the boundary of subdomains as a constraint through an additional term in the loss function. However, the additional term in the loss function makes the model more complicated to train and requires more data with increased computational cost. Huang et al. [28] looked at improving the training efficiency by incorporating multi-resolution hash encoding into PINN, which offers locally-aware coordinate inputs to the neural network. This encoding method requires careful selection of hyperparameters and auto-differentiation, complicating the training and post-training processes.
However, there is no systematic study on how PINN solutions perform with regard to satisfying the balance laws such as mass, momentum, or energy. In this study, we interrogate the performance of PINN solutions in terms of satisfying the balance of mass. We chose the application of groundwater flow where the mass flux is linearly related to the gradients in the head through Darcy's model. We used PINN to solve simple boundary value PDEs describing 1D steady-state saturated groundwater flow in both homogeneous and heterogeneous confined aquifers. For comparison, we solved the same governing equations analytically and via a traditional numerical method called two-point finite volume (FV) method [7].
The outline of our paper is as follows: Sec. 2 provides the governing equations for groundwater flow and equations for mass balance error calculation. Section 3 presents the analytical, two-point flux finite volume, and PINN solutions for the governing equations described in Sec. 2. Section 4 discusses PINN training; compares the analytical, finite volume, and PINN solutions for the hydraulic head; Darcy's flux; and mass balance error. Finally, conclusions are drawn in Sec. 5.
## 2 Governing Equations
The one-dimensional steady-state balance of mass for groundwater flow in the absence of sources and sinks within the flow domain is:
\[\frac{\partial}{\partial x}\left[q(x)\right]=0, \tag{1}\]
where, \(q(x)\), follows the Darcy's model given by :
\[q(x)=-K(x)\frac{\partial h}{\partial x}, \tag{2}\]
where, \(h[L]\) is the piezomerric head, \(x[L]\) is the coordinate, and \(K[L/T]\) is the hydraulic conductivity. Using Eqs. 1 and 2, the balance of mass governing equation reduces to:
\[\frac{\partial}{\partial x}\left[K\left(x\right)\frac{\partial}{\partial x}h( x)\right]=0. \tag{3}\]
Eq. 3 requires two boundary conditions for head. Assuming a unit length domain, if \(h_{L},h_{R}\) are the heads at the left and the right faces of the domain, then \(h(x=0)=h_{L}\) and \(h(x=1)=h_{R}\).
The model domain (\(x\in[0,1]\)) is discretized into a finite number of cells (\(NCells\)). Figure 1 shows the discretization for obtaining both the FV and PINN solutions. Assuming a unit area for the one-dimensional domain, the mass flux at face \(i-\frac{1}{2}\) (\(m_{\text{local}_{i-\frac{1}{2}}}\)), the local mass balance error (\(LMBE\)) in cell \(i\), and the global mass balance error (\(GMBE\)) are:
\[m_{\text{local}_{i-\frac{1}{2}}} =\rho_{w}q_{i-\frac{1}{2}}, \tag{4a}\] \[LMBE_{i} =m_{\text{local}_{i+\frac{1}{2}}}-m_{\text{local}_{i-\frac{1}{2} }}\quad\forall i=1,2,\cdots,NCells,\] (4b) \[GMBE =\sum_{i=1}^{NCells}LMBE_{i}, \tag{4c}\]
where, \(\rho_{w}\)\([M/L^{3}]\) is the density of water, \(q_{i-\frac{1}{2}}\)\([L/T]\) is Darcy's flux at face \(i-\frac{1}{2}\), \(x_{i-1/2}\) and \(x_{i+1/2}\) are locations of the faces where the Darcy fluxes are computed.
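Equations 4a-4c translate directly into a few lines of numpy; the sketch below is our illustration (function and variable names are ours), computing per-cell and global errors from face fluxes.

```python
import numpy as np

# Eqs. 4a-4c: local and global mass balance errors from Darcy fluxes at the
# cell faces. `q_faces` holds q_{i-1/2} for all NCells + 1 faces.
def mass_balance_errors(q_faces: np.ndarray, rho_w: float = 1000.0):
    m_local = rho_w * q_faces              # mass flux at each face (Eq. 4a)
    lmbe = m_local[1:] - m_local[:-1]      # per-cell error (Eq. 4b)
    gmbe = lmbe.sum()                      # global error (Eq. 4c)
    return lmbe, gmbe

# A spatially constant flux balances mass exactly:
lmbe, gmbe = mass_balance_errors(np.full(12, -1.0 / 30.0))
print(np.abs(lmbe).max(), gmbe)  # 0.0 0.0
```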
## 3 Methodology
In this section, we briefly describe the analytical, FV, and PINN methods to solve Eq. 3. We consider two scenarios - homogeneous (\(K\) is a constant) and heterogeneous (\(K\) varies with \(x\)) porous media.
### Analytical solution:
Considering \(K\) as constant and choosing \(h(x=0)=1\) and \(h(x=1)=0.9\) as the Dirichlet boundary conditions, the analytical solution can be derived as follows:
\[h=-0.1x+1. \tag{5}\]
With \(K\) varying as a function of space as \(K(x)=(x+0.5)^{2}/2.25\) (a convex function), the analytical solution for the heterogeneous scenario can be derived as:
\[h=\frac{0.075}{(x+0.5)}+0.85. \tag{6}\]
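Both closed forms can be verified symbolically; the quick check below is our addition (not from the paper), using exact rationals for the decimal constants and the boundary values \(h(0)=1\), \(h(1)=0.9\).

```python
import sympy as sp

x = sp.symbols("x")
half = sp.Rational(1, 2)
h_hom = 1 - x / 10                                             # Eq. 5
h_het = sp.Rational(3, 40) / (x + half) + sp.Rational(17, 20)  # Eq. 6
K_het = (x + half) ** 2 / sp.Rational(9, 4)                    # heterogeneous K(x)

# Both solutions satisfy the governing PDE (Eq. 3) ...
assert sp.diff(h_hom, x, 2) == 0
assert sp.simplify(sp.diff(K_het * sp.diff(h_het, x), x)) == 0
# ... and the Dirichlet boundary conditions.
for h in (h_hom, h_het):
    assert h.subs(x, 0) == 1 and h.subs(x, 1) == sp.Rational(9, 10)
```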
### Numerical solution using finite volume:
For the homogeneous medium, the discretized form of Eq. 3 using the two-point flux finite volume method with a central difference for the gradient, \(\frac{dh}{dx}\), is

\[h_{i+1}-2h_{i}+h_{i-1}=0, \tag{7}\]
Figure 1: Conceptual 1D model illustrating the locations of the head, Darcy’s flux, and local mass balance error (LMBE) calculations.
while for the heterogeneous medium:
\[\left(K_{i+1}+K_{i}\right)h_{i+1}-\left(K_{i+1}+2K_{i}+K_{i-1}\right)h_{i}+\left( K_{i}+K_{i-1}\right)h_{i-1}=0. \tag{8}\]
Along with the boundary conditions, \(h(x=0)=1\) and \(h(x=1)=0.9\), the linear systems of Eqs. 7 and 8 were solved using the LU decomposition method in the SciPy Python package [29].
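A sketch of this step is given below; it is our illustration rather than the authors' released code, and it places the heads on a uniform grid whose end points carry the Dirichlet values (the paper's exact grid layout may differ).

```python
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import splu

N = 81
x = np.linspace(0.0, 1.0, N)
K = (x + 0.5) ** 2 / 2.25               # heterogeneous conductivity

A = lil_matrix((N, N))
b = np.zeros(N)
A[0, 0], b[0] = 1.0, 1.0                # h(0) = 1
A[-1, -1], b[-1] = 1.0, 0.9             # h(1) = 0.9
for i in range(1, N - 1):               # interior stencil from Eq. 8
    A[i, i - 1] = K[i] + K[i - 1]
    A[i, i] = -(K[i + 1] + 2.0 * K[i] + K[i - 1])
    A[i, i + 1] = K[i + 1] + K[i]

h = splu(csc_matrix(A)).solve(b)        # LU factorization, as in the paper
print(np.abs(h - (0.075 / (x + 0.5) + 0.85)).max())  # small discretization error
```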
### PINN solution:
The deep neural network (DNN) approximation of the groundwater head as a function of \(x\), and of the weights and biases (\(\theta\)) of the neural network, is:
\[h(x)\approx\hat{h}(x;\theta). \tag{9}\]
DNN approximation of the governing Eq. 3 for the homogeneous case is given by
\[f(x)=\frac{\partial^{2}h}{\partial x^{2}}\approx\hat{f}(x;\theta)=\frac{ \partial^{2}\hat{h}(x;\theta)}{\partial x^{2}}, \tag{10}\]
while for the heterogeneous case is
\[f(x)=\frac{\partial}{\partial x}\left[K\left(x\right)\frac{\partial}{\partial x }h(x)\right]\approx\hat{f}(x;\theta)=\frac{\partial}{\partial x}\left[K\left( x\right)\frac{\partial}{\partial x}\hat{h}(x;\theta)\right]. \tag{11}\]
The loss function (accounting for the losses in the PDE residual and the residual due to the boundary conditions) without training data for the above scenarios is
\[\mathcal{L}(\theta)=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left[\hat{f}(x_{i}^{c}; \theta)\right]^{2}+\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\left[\hat{h}\left(x_{i}^ {D};\theta\right)-g_{i}^{*}\right]^{2}. \tag{12}\]
If we include training data for the heads, an additional loss term is added, and the overall loss function for the above scenarios become
\[\mathcal{L}(\theta)=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left[\hat{f}(x_{i}^{c}; \theta)\right]^{2}+\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\left[\hat{h}\left(x_{i} ^{D};\theta\right)-g_{i}^{*}\right]^{2}+\frac{1}{N_{h}}\sum_{i=1}^{N_{h}} \left[\hat{h}\left(x_{i}^{h};\theta\right)-h_{i}^{*}\right]^{2}, \tag{13}\]
where, \(N_{c}\), \(N_{D}\), and \(N_{h}\) represent the number of collocation points, the number of Dirichlet boundary conditions, and the number of head measurements/data, respectively. The head training data and Dirichlet boundary conditions are represented by \(h^{*}\) and \(g^{*}\). The first term on the right-hand side in Eq. 13 is the loss due to the PDE residual at the collocation points. The second term in Eq. 13 is the loss due to the Dirichlet boundary conditions, while the third term is the loss to match \(h(x)\) to the measurements \(h^{*}\). \(x_{i}^{C}\), \(x_{i}^{D}\), and \(x_{i}^{h}\) are the locations of the collocation points, the Dirichlet boundary points, and the head measurements, respectively.
The PINNs solution, \(\hat{h}(x;\theta)\), is obtained by
\[\underset{\hat{h}(x;\theta)\in\mathbb{R}^{+}}{\text{minimize}}\,\mathcal{L}( \theta), \tag{14}\]
where the optimal solution is obtained by selecting the PINNs model that has minimal loss through hyperparameter tuning.
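For concreteness, a minimal PyTorch sketch of the loss in Eq. 13 for the heterogeneous case is shown below. It is our illustration (the paper's implementation uses DeepXDE), all tensor names are ours, and the network mirrors the 3-hidden-layer, 50-neuron tanh architecture described next.

```python
import torch

# A minimal sketch (ours; the paper's code uses DeepXDE) of the loss in
# Eq. 13 for the heterogeneous case.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

def pinn_loss(x_c, x_d, g_star, x_h=None, h_star=None):
    x_c = x_c.clone().requires_grad_(True)
    h = net(x_c)
    dh = torch.autograd.grad(h.sum(), x_c, create_graph=True)[0]
    K = (x_c + 0.5) ** 2 / 2.25                       # heterogeneous K(x)
    f = torch.autograd.grad((K * dh).sum(), x_c, create_graph=True)[0]
    loss = (f ** 2).mean()                            # PDE residual term
    loss = loss + ((net(x_d) - g_star) ** 2).mean()   # Dirichlet term
    if x_h is not None:                               # head-data term (3rd)
        loss = loss + ((net(x_h) - h_star) ** 2).mean()
    return loss

x_c = torch.linspace(0.0, 1.0, 81).reshape(-1, 1)     # collocation points
x_d = torch.tensor([[0.0], [1.0]])                    # boundary locations
g_star = torch.tensor([[1.0], [0.9]])                 # Dirichlet values
print(pinn_loss(x_c, x_d, g_star).item())
```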
Multi-layer feed-forward DNNs [30, 31] with three hidden layers and 50 neurons per layer were used to train PINN. The hyperbolic tangent was used as the activation function because it is infinitely differentiable, which facilitates differentiating Eq. 3 twice. The remaining PINN hyperparameters are the collocation points, learning rate, training data, and number of epochs. Collocation points (\(x_{i}^{C}\)) in Eq. 13 are the locations in the model domain where PINN is trained to satisfy the PDE. We place these points at the FV cell centers to compare the PINN solution with the analytical and numerical solutions. Training data are the observations/measurements or sampled values from an analytical solution used to train PINN. An epoch is one complete pass of the training dataset (PDE, BCs, ICs where needed, and the training data) through the optimization algorithm used to train PINN. The learning rate is the step size at each epoch as PINN approaches its minimum loss. Our PINN code uses the DeepXDE Python package [32].
### Performance Metrics
In addition to the loss function, we used the mean squared error (\(MSE\)) and the coefficient of determination (\(R^{2}\)) to evaluate PINN models. The \(MSE\) and \(R^{2}\) are given by:
\[MSE=\frac{1}{n}\sum\limits_{i=1}^{n}(h_{\mathbf{a}_{i}}-h_{\mathbf{p}_{i}})^{2}, \tag{15}\]
\[R^{2}=1-\frac{\sum\limits_{i=1}^{n}\left(h_{\mathbf{a}_{i}}-h_{\mathbf{p}_{i}} \right)^{2}}{\sum\limits_{i=1}^{n}\left(h_{\mathbf{a}_{i}}-\bar{h}_{\mathbf{a} }\right)^{2}}, \tag{16}\]
where, \(n\) is the number of hydraulic heads, \(h_{\mathbf{a}}\) is the head from the analytical solution, \(h_{\mathbf{p}}\) is the predicted head from the PINN model, and \(\bar{h}_{\mathbf{a}}\) is the mean of the heads derived from the analytical solution.
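Eqs. 15 and 16 translate directly into code; a minimal numpy sketch (ours) follows.

```python
import numpy as np

def mse(h_a: np.ndarray, h_p: np.ndarray) -> float:
    return float(np.mean((h_a - h_p) ** 2))          # Eq. 15

def r2(h_a: np.ndarray, h_p: np.ndarray) -> float:
    ss_res = np.sum((h_a - h_p) ** 2)                # residual sum of squares
    ss_tot = np.sum((h_a - h_a.mean()) ** 2)         # total sum of squares
    return float(1.0 - ss_res / ss_tot)              # Eq. 16
```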
## 4 Results
This section compares the results of the analytical, FV, and PINN methods as well as the mass balance errors associated with the FV and PINN solutions.
### Homogeneous Porous Media
Two boundary conditions and 11 collocation points were used to train PINN for the hydraulic head calculation. The loss function in Eq. 12 was used in this case; that is, no head data were added in the training process. PINN was trained for 20,000 epochs, and the loss function fluctuated by 10 orders of magnitude even after reaching its minimum value, suggesting an unsteady learning process (Figure 2a).
The minimum value of the loss function is \(\approx 10^{-18}\) at 4,000 epochs. The PINN model with the minimum loss was used to calculate the hydraulic heads at each collocation point. Hydraulic heads at the same points as the PINN collocation points were also computed using the analytical and FV methods. The \(MSE\) and \(R^{2}\) between the analytical and PINN solutions are \(4.96\times 10^{-8}\) and \(9.99\times 10^{-1}\), respectively. Such a low MSE and high \(R^{2}\) suggest that the PINN solution is fairly close to the analytical solution (Figure 2b). The \(MSE\) and \(R^{2}\) between the analytical and FV solutions are \(1.68\times 10^{-32}\) (\(\approx\) square of machine precision) and \(1.00\), respectively.
Darcy's flux is one of the indicators that represent the integrity of a numerical solution in porous media and should be constant everywhere for the homogeneous case. We computed Darcy's fluxes on all nodes/collocation points of the model domain using Eq. 2. We found that while the Darcy's fluxes of the analytical and FV solutions are constant, they were not constant for the optimal PINN solution (Fig. 3a), and the magnitude decreases in space. However, the mean of the Darcy's fluxes computed from the optimal PINN solution was found to be the same as that of the analytical and FV solutions.
The mass balance in the model domain was then calculated using the _LMBE_ and _GMBE_ equations for both the FV and PINN solutions. Analytical solutions are exact solutions that do not incur round-off or numerical truncation errors. The _LMBE_ and _GMBE_ of the analytical solutions are zero because of the exactness of the solutions; therefore, they are not discussed here. The _LMBE_s of the FV solution vary from 0 to \(\approx 10^{-15}\), or close to machine precision, while they are around \(10^{-4}\) for the PINN solution (Fig. 3b). The _GMBE_s of the FV and the PINN solutions are \(5.55\times 10^{-16}\) and \(1.06\times 10^{-3}\), respectively. The _LMBE_ and _GMBE_ for PINN are much higher than machine precision, indicating that it does not conserve mass locally or globally. A primary reason for such a large discrepancy in _LMBE_ and _GMBE_ between the FV and PINN solutions is that FV puts a hard constraint on
the balance of mass when it solves Eq. 3. Although PINN computes accurate heads, it fails to balance the mass because it tries to minimize the residual contributions from the PDE and the BCs instead of exactly setting these residuals to zero. Another potential reason for such a mismatch is the inability of the neural network to optimize its weights and biases as the loss function value gets smaller, due to the non-convex nature of the PINN optimization problem.

Figure 2. a) Epoch versus loss values during the PINN training and b) analytical, FV, and optimal PINN solution (without any training data) for hydraulic heads for the homogeneous porous media.

Figure 3. a) Darcy’s flux calculated from the analytical, FV, and PINN solutions; and b) mass balance error of the FV and PINN solutions.

### Heterogeneous Porous Media

The loss function in Eq. 12 was first used to train for the heterogeneous porous media with boundary conditions, without providing head training data. The resulting PINN head prediction was highly inaccurate compared to the analytical solution, as shown in Fig. 4 in the main text and Fig. S1 in the supplementary material. The FV solution matched the analytical solution well. The \(MSE\) and \(R^{2}\) scores between the PINN and analytical solutions are 0.0024 and -2.24, respectively. Here, PINN only learns the boundary condition values but fails to learn accurate head values in the middle of the model domain.
#### 4.2.1 Hyperparameter tuning
To interrogate the performance of PINN solutions, we performed hyperparameter tuning by generating an ensemble of hyperparameters. Specifically, we generated 4800 unique scenarios by varying the following hyperparameters to find an optimal PINN model for the heterogeneous case: learning rates, the number of epochs, the number of collocation points, and the number of training data points provided for the head. Table 2 lists the values chosen for these parameters.
\begin{table}
\begin{tabular}{l l l} \hline \hline Solutions & Maximum _LMBE_ & _GMBE_ \\ \hline FV & \(1.20\times 10^{-15}\) & \(5.5\times 10^{-16}\) \\ PINN & \(1.62\times 10^{-4}\) & \(1.06\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Mass balance errors of the FV and PINN solutions for homogeneous media.

Figure 4. Hydraulic head prediction by analytical, FV, and PINN methods for the heterogeneous porous media. No training data for head was used to train the PINN solution.

We considered two metrics for finding an optimal PINN model: the minimum loss of the PINN training and the minimum MSE of the head between the PINN and analytical solutions. The minimum loss provides the best solution for the training process but does not necessarily generate the best model. For instance, a significantly low minimum loss may generate a PINN model which provides a high MSE. Therefore, we need a model which also provides a low MSE value.
Next, we describe how each PINN hyperparameter affects the PINN solution and how we found an optimal PINN model. Figure 5 shows minimum loss versus MSE values for each of the hyperparameters. Scenarios on the left side of the boxed portion have low minimum loss but high MSE, while scenarios in the right portion have high scores for both metrics. However, scenarios in the boxed portion have low values for both metrics. The red box in Fig. 5 has the following thresholds: minimum loss \(\in[10^{-16},10^{-7}]\) and \(\text{MSE}\in[10^{-14},10^{-7}]\).
Figure 5(a) demonstrates that large learning rates (\(>10^{-2}\)) prominently provide low minimum loss but high MSE. This is reasonable because a large learning rate quickly finds accurate values for boundary conditions and training data without learning the PDE solution accurately. A low learning rate allows the search over multiple local minima within the non-convex loss function, resulting in high values for both metrics. However, many scenarios with learning rates ranging from \(10^{-5}\) to \(10^{-2}\) provide low scores for both metrics, suggesting learning rates within this range will likely generate accurate PINN models.
The number of training data points provided has relatively higher control over the minimum loss and MSE (Figure 5c). Generally, more training data points generate better models. Note that a large number of training data points may also generate wrong PINN models if learning rates and epoch numbers are outside the ranges shown in the red box. Fewer training data points may still provide an accurate model if learning rates and epoch numbers are within the threshold box. Our detailed analysis indicates that more than 20 training data points will likely generate accurate PINN models.
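The selection step itself can be sketched as a simple filter over per-run summaries; everything below is hypothetical (a toy `scenarios` list rather than the actual 4800 runs), with the red-box thresholds from Fig. 5.

```python
# Hypothetical per-run summaries; a real study would collect 4800 of these.
scenarios = [
    {"min_loss": 3.2e-9, "mse": 5.1e-10, "lr": 1e-4, "epochs": 70000},
    {"min_loss": 2.0e-12, "mse": 2.4e-3, "lr": 1e-1, "epochs": 70000},
]

def in_red_box(s):
    # Keep runs whose minimum loss and MSE fall inside the red box of Fig. 5.
    return 1e-16 <= s["min_loss"] <= 1e-7 and 1e-14 <= s["mse"] <= 1e-7

candidates = [s for s in scenarios if in_red_box(s)]
best = min(candidates, key=lambda s: (s["mse"], s["min_loss"]))
print(best["lr"], best["epochs"])  # 0.0001 70000
```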
Although hyperparameter tuning is computationally expensive, finding the optimal model for a comparison study with the FV solution is necessary. The optimal model falls in the boxed portion, where the minimum loss and minimum MSE are \(1.69\times 10^{-8}\) and \(1.00\times 10^{-16}\), respectively. The optimal model's corresponding learning rate, epoch number, number of collocation points, and number of training data points are \(10^{-4}\), 70000, 81, and 40, respectively. Hydraulic head predictions by the analytical, FV, and PINN (with 40 training data points) methods are consistent (Figure 6). The \(MSE\) and \(R^{2}\) between the hydraulic head predictions by the analytical and optimal PINN solutions are 0.0 and 1, respectively. We computed Darcy's fluxes on all nodes/collocation points of the model domain using Eq. 2. We found that while the Darcy's fluxes of the analytical and FV solutions are constant, they were not constant for the PINN solution (Figure 7a), and the magnitude decreases in space. However, the mean of Darcy's fluxes from the PINN solution was the same as that of the analytical and FV solutions. In both cases, hydraulic head predictions are accurate, but Darcy's fluxes are not accurate compared to the counterpart FV solution. Figures S2 to S4 in the supplementary material show convergence studies for both the FV and PINN solutions. This includes progressively adding analytical solution data at collocation points to improve PINN's training. These figures show that mass balance errors are still significant, with minimal improvements even when more data are added.
The maximum _LMBE_ of the FV and the optimal PINN solutions are \(2.44\times 10^{-7}\) and \(3.13\times 10^{-5}\), respectively (Figure 7b and Table 3). The _LMBE_ of the optimal PINN solution is two orders of magnitude higher than that of the FV solution. The _GMBE_ of the FV and the optimal PINN solutions are \(4.53\times 10^{-6}\) and \(6.92\times 10^{-6}\), respectively. The magnitudes of the _GMBE_ of PINN and FV are close, suggesting comparable performance of the optimal PINN with FV.

Figure 5. Minimum loss and mean squared error for 4800 hyperparameter scenarios. The hyperparameters considered were: a) the learning rate, b) the number of epochs, c) the number of collocation points, and d) the number of training data points. The scenarios in the red box highlight the cases where minimum loss and MSE are low.

Figure 6. Hydraulic head prediction by analytical, FV, and optimal PINN with 40 training data points for the heterogeneous porous media.

## 5 Conclusions

PINN is an alternative to numerical methods for solving physical problems that can be described by partial differential equations. We investigated whether PINN conserves mass and compared its performance with the FV method. For this purpose, we solved a steady-state 1D groundwater flow equation for predicting hydraulic heads using analytical, FV, and PINN methods for both homogeneous and heterogeneous media. The accuracy of the PINN model was computed using \(MSE\) and \(R^{2}\) scores between hydraulic head predictions by the analytical and PINN methods. Next, the integrity of PINN models was investigated using Darcy's flux, _LMBE_, and _GMBE_. Finally, we compared its performance with the FV method.
For the homogeneous media case, the \(MSE\) and \(R^{2}\) scores are approximately 0.0 and 1.0, respectively, suggesting accurate head prediction by PINN. Darcy's flux from PINN, however, is not constant, unlike that of the analytical and FV methods. The _LMBE_ is close to zero for the FV solution, while it is on the order of \(10^{-4}\) for PINN (Table 1). The _GMBE_ of the PINN solution is roughly 12 orders of magnitude higher than that of the FV solution. Such a large discrepancy is due to how PINN finds its solution by softly enforcing the PDE constraints.
For the heterogeneous media case, we performed extensive hyperparameter tuning to find an optimal PINN and compare it with the FV solution. We found that, without adding training data points, PINN fails to predict accurate hydraulic heads with only boundary conditions, let alone Darcy's flux, _LMBE_, and _GMBE_. We found that hydraulic head predictions by the analytical, FV, and optimal PINN (with 40 training data points) methods are consistent. One needs to provide extensive training data for PINN to achieve an _LMBE_ closer to that of FV. These findings shed light on the limitations of PINNs in applications where conserving mass locally is essential.
## Abbreviations
* PINN: Physics-Informed Neural Networks
* PDE: Partial Differential Equations
* ODE: Ordinary Differential Equations
* FV: Finite Volume
* DNN: Deep Neural Networks
* MSE: Mean Squared Error
* LMF: Local Mass Flux
* LMBE: Local Mass Balance Error
* GMBE: Global Mass Balance Error
* CP: Collocation Points
* TD: Training Data

\begin{table}
\begin{tabular}{l l l} \hline \hline Method & Maximum _LMBE_ & _GMBE_ \\ \hline FV & \(2.44\times 10^{-7}\) & \(4.53\times 10^{-6}\) \\ PINN & \(3.13\times 10^{-5}\) & \(6.92\times 10^{-6}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3. Mass balance errors of the FV and optimal PINN solutions for the heterogeneous case.

Figure 7. a) Darcy’s flux calculated from the analytical, FV, and optimal PINN solutions for the heterogeneous porous media, and b) mass balance error of the FV and optimal PINN solutions for 81 collocation points and 40 training data points.
## Conflict of Interest
The authors declare that they do not have any conflicts of interest.
## Acknowledgments
MLM and BA thank the U.S. Department of Energy's Biological and Environmental Research Program for support through the SciDAC4 program. MLM also thanks the Center for Nonlinear Studies at Los Alamos National Laboratory. SK and MKM thank Environmental Molecular Sciences Laboratory for its support. Environmental Molecular Sciences Laboratory is a DOE Office of Science User Facility sponsored by the Biological and Environmental Research program under Contract No. DE-AC05-76RL01830. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
## Data Availability
The codes and the data used in the paper are available at [https://gitlab.com/bulbulahmmed/mass_balance_of_pinn](https://gitlab.com/bulbulahmmed/mass_balance_of_pinn).
## Appendix
The supplementary material contains figures on the training of PINNs, mass balance errors, and PDE residual with and without data. These figures are provided in a separate file.
|
2310.16350 | Unraveling Feature Extraction Mechanisms in Neural Networks | The underlying mechanism of neural networks in capturing precise knowledge
has been the subject of consistent research efforts. In this work, we propose a
theoretical approach based on Neural Tangent Kernels (NTKs) to investigate such
mechanisms. Specifically, considering the infinite network width, we
hypothesize the learning dynamics of target models may intuitively unravel the
features they acquire from training data, deepening our insights into their
internal mechanisms. We apply our approach to several fundamental models and
reveal how these models leverage statistical features during gradient descent
and how they are integrated into final decisions. We also discovered that the
choice of activation function can affect feature extraction. For instance, the
use of the \textit{ReLU} activation function could potentially introduce a bias
in features, providing a plausible explanation for its replacement with
alternative functions in recent pre-trained language models. Additionally, we
find that while self-attention and CNN models may exhibit limitations in
learning n-grams, multiplication-based models seem to excel in this area. We
verify these theoretical findings through experiments and find that they can be
applied to analyze language modeling tasks, which can be regarded as a special
variant of classification. Our contributions offer insights into the roles and
capacities of fundamental components within large language models, thereby
aiding the broader understanding of these complex systems. | Xiaobing Sun, Jiaxi Li, Wei Lu | 2023-10-25T04:22:40Z | http://arxiv.org/abs/2310.16350v2 | # Unraveling Feature Extraction Mechanisms in Neural Networks
###### Abstract
The underlying mechanism of neural networks in capturing precise knowledge has been the subject of consistent research efforts. In this work, we propose a theoretical approach based on Neural Tangent Kernels (NTKs) to investigate such mechanisms. Specifically, considering the infinite network width, we hypothesize the learning dynamics of target models may intuitively unravel the features they acquire from training data, deepening our insights into their internal mechanisms. We apply our approach to several fundamental models and reveal how these models leverage statistical features during gradient descent and how they are integrated into final decisions. We also discovered that the choice of activation function can affect feature extraction. For instance, the use of the _ReLU_ activation function could potentially introduce a bias in features, providing a plausible explanation for its replacement with alternative functions in recent pre-trained language models. Additionally, we find that while self-attention and CNN models may exhibit limitations in learning n-grams, multiplication-based models seem to excel in this area. We verify these theoretical findings through experiments and find that they can be applied to analyze language modeling tasks, which can be regarded as a special variant of classification. Our contributions offer insights into the roles and capacities of fundamental components within large language models, thereby aiding the broader understanding of these complex systems.
## 1 Introduction
Neural networks have become indispensable across a variety of natural language processing (NLP) tasks. There has been growing interest in understanding their successes and interpreting their characteristics. One line of work attempts to identify possible features captured by them for NLP tasks (Li et al., 2016; Linzen et al., 2016; Jacovi et al., 2018; Hewitt and Manning, 2019; Vulic et al., 2020). These works mainly develop empirical methods to verify hypotheses regarding the semantic and syntactic features encoded in the output. Such works may result in interesting findings, but those models still remain _black-boxes_ to us. Another line seeks to reveal the internal mechanisms of neural models using mathematical tools (Levy and Goldberg, 2014; Saxe et al., 2013; Arora et al., 2018; Bhojanapalli et al., 2020; Merrill et al., 2020; Dong et al., 2021; Tian et al., 2023), which can be more straightforward and insightful. However, few of them have specifically focused on the feature extraction of neural NLP models.
When applying neural models to downstream NLP tasks in practice, we often notice some modules perform better than others on specific tasks, while some exhibit similar behaviors. We may wonder what mechanisms are behind such differences and similarities between those modules. By acquiring deeper insights into the roles of those modules in a complex model with respect to feature extraction, we will be able to select or even design more suitable models for downstream tasks.
In this work, we propose a novel theoretical approach to understanding the mechanisms through which fundamental models (often used as modules in complex models) acquire features during gradient descent in text classification tasks. The evolution of model output can be described as learning dynamics involving NTKs (Jacot et al., 2018; Arora et al., 2019), which are typically used to study various properties of neural networks, including convergence and generalization. While these representations can be complex in practice, when the width of the network approaches infinity, they tend to converge to less complex representations and remain asymptotically constant (Jacot et al., 2018), allowing us to intuitively interpret the learning dynamics and identify the relevant features captured by the model.

Figure 1: Example of co-occurrence features between tokens and labels (self-attention model).
We applied our approach to several fundamental models, including a multi-layer perceptron (MLP), a convolutional neural network (CNN), a linear Recurrent Neural Network (L-RNN), a self-attention (SA) model (Vaswani et al., 2017), and a matrix-vector (MV) model (Mitchell and Lapata, 2009), and show that the MLP, CNN, and SA models may behave similarly in capturing token-label features, while the MV and L-RNN models extract different types of features. Our contributions include:
* We propose an approach to theoretically investigate feature extraction mechanisms for fundamental neural models.
* We identify significant factors such as the choice of activation and unveil the limitations of these models, e.g., both the CNN and SA models may not effectively capture meaningful n-gram information beyond individual tokens.
* Our experiments validate the theoretical findings and reveal their relevance to advanced architectures such as Transformers (Vaswani et al., 2017).
Our intention through this work is to provide new insights into the core components of complex models. By doing so, we aim to contribute to the understanding of the behaviors exhibited by state-of-the-art large language models and facilitate the development of enhanced model designs1.
Footnote 1: Our code is available at [https://github.com/richardsun-voyager/ufemnn](https://github.com/richardsun-voyager/ufemnn).
## 2 Related Work
Probing features for NLP modelsProbing linguistic features is an important topic for verifying the interpretability of neural NLP models. Li et al. (2016) employed a visualization approach to detect linguistic features such as negation captured by the hidden states of LSTMs. Linzen et al. (2016) examined the ability of LSTMs to capture syntactic knowledge using number agreement in English subject-verb dependencies. Jacovi et al. (2018) studied whether the CNN models could capture n-gram features. Vulic et al. (2020) presented a systematic analysis to probe possible knowledge that the pre-trained language models could implicitly capture. Chen et al. (2020) proposed an algorithm to detect hierarchical feature interaction for text classifiers. Empirically, such work reveals that neural NLP models can capture useful and interpretable features for downstream tasks. Our work seeks to explain how neural NLP models capture useful features during training from a theoretical perspective.
Infinite-width Neural NetworksResearchers found that there could be interesting patterns when the neural network's width approaches infinity. Lee et al. (2018) linked infinitely wide deep networks to Gaussian Processes. A recent line of work (Jacot et al., 2018; Bietti and Mairal, 2019; Nguyen et al., 2021; Loo et al., 2022) proposed that as the network width approaches infinity, the dynamics can be characterized by the NTK, which converges to a kernel determined at initialization and remains constant. This conclusion holds for fully-connected neural networks, CNNs (Arora et al., 2019) and RNNs (Emami et al., 2021; Alemohammad et al., 2021). Later, Yang and Littwin (2021) showed that such properties of NTKs can be applied to a randomly initialized neural network of any architecture. Very limited studies have delved into the analysis of feature extraction in neural NLP models. We will investigate the internal mechanisms of neural NLP models under extreme conditions.
## 3 Analysis
We use learning dynamics to describe the updates of neural models during training with the aim of identifying potentially useful properties. For the ease of presentation and discussion, we focus on binary text classification2.
Footnote 2: Analysis for multi-class classification can be found in Appendix A.
Model DescriptionAssume we have a training dataset denoted by \(\mathcal{D}\), consisting of \(m\) labeled instances. Let \(\mathcal{X}\) and \(\mathcal{Y}\) represent all the sentences and labels in the training dataset, respectively. \(x\in\mathcal{X}\) is an instance consisting of a sequence of tokens, and \(y\in\mathcal{Y}\) is the corresponding label. The vocabulary size is \(|V|\). Consider a binary text classification model, where \(y\in\{-1,+1\}\). The model output, denoted as \(s(t)\in\mathbb{R}\) at time \(t\) is
\[s(t)=\mathbf{f}_{t}(x;\mathbf{\theta}_{t}), \tag{1}\]
where \(\mathbf{\theta}_{t}\) (a vector) is the concatenation of all the parameters, which are functions of time \(t\). We refer to the model output \(s(t)\) as the _label score_ at time \(t\). This score is used for classification decisions, _positive_ if \(s(t)>0\) and _negative_ otherwise.
Learning DynamicsThe evolution of a label score can be described by learning dynamics, which may indicate interesting properties. Let \(\mathbf{f}_{t}(\mathcal{X})\in\mathbb{R}^{m}\) represent the concatenation of all the outputs of training instances at time \(t\), and \(y\in\mathcal{Y}\)
is the desired label. Given a test input \(x^{\prime}\), the corresponding label score \(s^{\prime}(t)\) follows the dynamics
\[\begin{split}\dot{s}^{\prime}(t)&=\nabla_{\theta}f_{t }^{\top}(x^{\prime})\nabla_{\theta_{t}}\mathbf{f}_{t}(\mathcal{X})\nabla_{\mathbf{f}_{t }(\mathcal{X})}\mathcal{L}\\ &=\Theta_{t}(x^{\prime},\mathcal{X})\nabla_{\mathbf{f}_{t}(\mathcal{ X})}\mathcal{L},\end{split} \tag{2}\]
where \(\Theta_{t}(x^{\prime},\mathcal{X})\) is the NTK at time \(t\) and \(\mathcal{L}\) is the empirical loss defined as
\[\mathcal{L}=-\frac{1}{m}\sum_{(x,y)\in\mathcal{D}}\log g(ys). \tag{3}\]
where \(g\) is the _sigmoid_ function. For simplicity, we will omit the time stamp \(t\) in our subsequent notations. The dynamics \(\dot{s}^{\prime}\) will obey
\[\dot{s}^{\prime}=\frac{1}{m}\underset{(x,y)\in\mathcal{D}}{\sum}g(-ys^{(x)})y \Theta(x^{\prime},x), \tag{4}\]
where \(s^{(x)}\) is the label score for the training instance \(x\). Obtaining closed-form solutions for the differential equation in Equation 4 is a challenge. We thereby consider an extreme scenario with the infinite network width, suggested by Lee et al. (2018).
Infinite-WidthWhen the network width approaches infinity, the NTK will converge and stay constant during training Jacot et al. (2018); Arora et al. (2019); Yang and Littwin (2021). Therefore, the learning dynamics can be written as follows,
\[\dot{s}^{\prime}=\frac{1}{m}\underset{(x,y)\in\mathcal{D}}{\sum}g(-ys^{(x)})y \Theta_{\infty}(x^{\prime},x), \tag{5}\]
where \(\Theta_{\infty}(x^{\prime},x)\) refers to the converged NTK determined at initialization. This convergence may allow us to simplify the representations of the learning dynamics and offer more intuitive insights to analyze its evolution over time.
There can be certain interesting properties (regarding the trend of the label scores) harnessed by the interaction \(y\Theta_{\infty}(x^{\prime},x)\), where \(y\) controls the direction and \(\Theta_{\infty}(x^{\prime},x)\) may indicate the relationship between \(x^{\prime}\) and \(x\). Certain hypotheses can be drawn from these properties. First, the converged NTK \(\Theta_{\infty}(x^{\prime},x)\) may intuitively represent the interaction between the test input \(x^{\prime}\) and the training instance \(x\). This could extend to the interaction between the basic units (tokens or n-grams) from \(x^{\prime}\) and \(x\), as the semantic meaning of an instance can be deconstructed into the combination of the meanings of its basic units Mitchell and Lapata (2008); Socher et al. (2012). Second, if \(\Theta_{\infty}(x^{\prime},x)\) depends on the similarity between \(x^{\prime}\) and \(x\), a more deterministic trend can be predicted for a test input \(x^{\prime}\) that closely resembles the training instances of a specific type. For example, suppose \(\Theta_{\infty}(x^{\prime},x)\) exhibits a significantly large gain when \(x^{\prime}\) is similar to \(x\) at a particular \(y\), and the dynamics will likely receive significant gains in a desired direction during training, thus enabling us to predict the trend of the label score.
We thereby propose the following approach to investigate a target model and verify our aforementioned hypotheses: 1) redefining the target model following the settings proposed by Jacot et al. (2018); Yang and Littwin (2021), which guarantees the convergence of NTKs; 2) obtaining the converged NTK \(\Theta_{\infty}(x^{\prime},x)\) and the learning dynamics under the infinite-width condition; 3) performing analysis on the learning dynamics of basic units and revealing possible features.
## 4 Interpreting Fundamental Models
We investigate an MLP model, a CNN model, an SA model, an MV model, and an L-RNN model, respectively. Details and proofs for the lemmas and theorems can be found in Appendix A.
NotationLet \(\mathbf{e}\in\mathbb{R}^{|V|}\) be the one-hot vector for token \(e\), \(l^{(x)}\) be the instance length, \(\mathbf{W}^{e}\in\mathbb{R}^{d_{in}\times|V|}\) be the weight of the embedding layer, and \(\mathbf{v}\in\mathbb{R}^{d_{out}}\) be the final layer weight. \(\mathbf{W}\in\mathbb{R}^{d_{out}\times d_{in}}\) is the weight of the hidden layer in the MLP model. \(\mathbf{W}^{c}_{k}\in\mathbb{R}^{d_{out}\times d_{in}}\) is the kernel weight corresponding to the \(k\)-th token in the sliding window in the CNN model. For simplicity, we let \(d_{out}=d_{in}=d\). Assume all the parameters are initialized with Gaussian distributions in our subsequent analysis, i.e., \(\mathbf{W}_{ij}\sim\mathcal{N}(0,\sigma_{w}^{2})\), \(\mathbf{W}^{e}_{ij}\sim\mathcal{N}(0,\sigma_{e}^{2})\), \(\mathbf{v}_{j}\sim\mathcal{N}(0,\sigma_{v}^{2})\), and \(\mathbf{W}^{c}_{k,ij}\sim\mathcal{N}(0,\sigma_{w}^{2})\), for the sake of NTK convergence.
### Mlp
Following Wiegreffe and Pinter (2019), given instance \(x\), the output of MLP is defined as
\[s=\frac{\mathbf{v}^{\top}}{\sqrt{d}}\sum_{j=1}^{l^{(x)}}\mathbf{\phi}(\mathbf{W}\frac{1}{ \sqrt{d}}\mathbf{W}^{e}\mathbf{e}_{j}). \tag{6}\]
The label score \(s\) will be used for making classification decisions. \(\mathbf{\phi}\) is the element-wise _ReLU_ function. \(\mathbf{e}_{j}\) is the _one-hot_ vector for token \(e_{j}\). It is not straightforward to analyze \(s\) directly, which can be viewed as the sum of token-level label scores. Instead, as basic units are tokens in this model, we focus on the label score of every single token and understand how they contribute to the instance-level label score. When the test input \(x^{\prime}\) is simply
a token \(e\), we can get the corresponding NTK with the infinite network width.
**Lemma 4.1**.: When \(d\rightarrow\infty\), the NTK between the token \(e\) and instance \(x\) in the MLP model converges to
\[\Theta_{\infty}(e,x)=\rho\sum_{j=1}^{l(x)}\boldsymbol{e}^{\top}\boldsymbol{e}_ {j}+\sum_{j=1}^{l(x)}\mu, \tag{7}\]
where \(\rho=\frac{(\pi-1)\sigma_{e}^{2}\sigma_{w}^{2}}{2\pi}+\frac{\sigma_{e}^{2} \sigma_{w}^{2}+\sigma_{w}^{2}\sigma_{w}^{2}}{2}\) and \(\mu=\frac{\sigma_{e}^{2}\sigma_{w}^{2}}{2\pi}\).
Note that, for two tokens \(e_{j}\) and \(e_{k}\), their one-hot vectors satisfy \(\boldsymbol{e}_{j}^{\top}\boldsymbol{e}_{k}=0\) if \(e_{j}\neq e_{k}\); \(\boldsymbol{e}_{j}^{\top}\boldsymbol{e}_{k}=1\) if \(e_{j}=e_{k}\). The dot-product \(\sum_{j=1}^{l(x)}\boldsymbol{e}^{\top}\boldsymbol{e}_{j}\) can be interpreted as the frequency of \(e\) appearing in instance \(x\).
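This counting interpretation is easy to check numerically; the toy vocabulary and token indices below are our own illustration.

```python
import numpy as np

V = 5  # toy vocabulary size
def one_hot(i: int) -> np.ndarray:
    v = np.zeros(V)
    v[i] = 1.0
    return v

x_tokens = [0, 2, 2, 4]                    # an instance as token indices
e = one_hot(2)
omega = sum(e @ one_hot(j) for j in x_tokens)
print(omega)                               # 2.0 -- token 2 occurs twice in x
```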
**Theorem 4.2**.: The learning dynamics of token \(e\)'s label score obey
\[\dot{s}^{e} =\frac{\rho}{m}\sum_{(x,y)\in\mathcal{D}}g(-ys^{(x)})y\omega(e,x) \tag{8}\] \[+\frac{\mu}{m}\sum_{(x,y)\in\mathcal{D}}g(-ys^{(x)})yl^{(x)},\]
where \(\omega(e,x)=\sum_{j=1}^{l(x)}\boldsymbol{e}^{\top}\boldsymbol{e}_{j}\), which depends on the training data and will not change over time.
The _non-linearity_ of the sigmoid function \(g(-ys)\) makes it a challenge to obtain a _closed-form_ solution for the dynamics.
However, we can predict trends for the label scores in special cases. Note that the polarity of the first term in Equation 8 will depend on \(y\omega(e,x)\) in each training instance. For instance, consider a token that only appears in positive instances, i.e., \(\omega(e,x)>0\) when \(y=+1\); \(\omega(e,x)=0\) when \(y=-1\). In this case, the first term remains positive and incrementally contributes to the label score \(s^{e}\) throughout the training process. The opposite trend occurs for tokens solely appearing in negative instances. If the impact of the second term is minimal, the label scores of these two types of tokens will be significantly positive or negative after sufficient updates. The final classification decisions are made based on the linear combination of the label scores for the constituent tokens. The second term in Equation 8 is unaffected by \(\omega(e,x)\) and is shared by all the tokens \(e\) at each update. It can be interpreted as an induced feature bias. Particularly, when this term is sufficiently large, it may cause an imbalance between the tokens co-occurring with the positive label and those co-occurring with the negative label, rendering one type of tokens more influential than the other for classification.
Theorem 4.2 may explain how the MLP model leverages the statistical co-occurrence features between \(e\) and \(y\) as shown in Figure 1, and integrate them in final classification decisions, i.e., tokens solely appearing in positive/negative instances will likely contribute in the direction of predicting a positive/negative label.
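The predicted trend can be seen in a toy forward-Euler integration of Equation 8. The simplifications here are ours: the training instance scores are frozen at zero (so \(g(-ys^{(x)})=0.5\)), and \(\rho\), \(\mu\) are set to illustrative values rather than the initialization-dependent constants.

```python
rho, mu, lr = 1.0, 0.01, 0.1
# Training set summaries: (label y, count of e in x, instance length l).
# Token e appears only in the positive instances.
data = [(+1, 3, 10), (+1, 1, 8), (-1, 0, 9)]

s_e = 0.0
for _ in range(100):  # Euler steps of Eq. 8 with g(-y s) frozen at 0.5
    ds = sum(0.5 * y * (rho * w + mu * l) for y, w, l in data) / len(data)
    s_e += lr * ds
print(s_e > 0)  # True: the co-occurrence term drives the score positive
```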
### Cnn
We consider the 1-dimensional CNN, with kernel size, stride size, and padding size set to \(K\), 1, and \(K-1\) respectively. For each sliding window \(c_{j}\) comprising \(K\) consecutive tokens, the corresponding feature \(\boldsymbol{c}_{j}\in\mathbb{R}^{d}\) can be represented as
\[\boldsymbol{c}_{j}=\sum_{k=1}^{K}\boldsymbol{W}_{k}^{c}\frac{1}{\sqrt{d}} \boldsymbol{W}^{e}\boldsymbol{e}_{j+k-1}, \tag{9}\]
where \(\boldsymbol{W}_{k}^{c}\) is the kernel weight corresponding to the \(k\)-th token in the sliding window.
The label score of an instance is computed as
\[s=\frac{\boldsymbol{v}^{\top}}{\sqrt{d}}\sum_{j=-(K-1)}^{l(x)}\boldsymbol{ \phi}(\boldsymbol{c}_{j}), \tag{10}\]
where \(-(K-1)\) means the position for the leftmost padding token. The first and last \(K-1\) padding tokens in an instance are represented by zero vectors. \(\boldsymbol{\phi}\) is the element-wise _ReLU_ function. For brevity, we will denote \(\sum_{j=-(K-1)}^{l(x)}\) by \(\sum_{j}\).
Let us focus on a single sliding window and study the learning dynamics of its label score.
**Lemma 4.3**.: Consider a sliding window \(c\) consisting of tokens \(e_{1},e_{2},\ldots,e_{K}\), when \(d\rightarrow\infty\) the NTK between \(c\) and instance \(x\) converges to
\[\Theta_{\infty}(c,x)=\sum_{j}F[\omega_{c}(c,c_{j})]+\rho\sum_{k=1}^{K}\sum_{j}H[\omega_{c}(c,c_{j})]\boldsymbol{e}_{k}^{\top}\boldsymbol{e}_{j+k-1}, \tag{11}\]
where
\[\omega_{c}(c,c_{j})\!=\!\!\sum_{k^{\prime}=1}^{K}\!\boldsymbol{e}_{k^{\prime}}^ {\top}\sum_{k=1}^{K}\boldsymbol{e}_{j+k-1},\ \ \rho\!=\!\sigma_{v}^{2}(\sigma_{e}^{2}\!+\!\sigma_{w}^{2}).\]
\(\omega_{c}\) means the number of shared tokens between \(c\) and \(c_{j}\) regardless of positions. \(F\) and \(H^{3}\) are _monotonically-increasing_ and _non-negative_ functions depending on \(\sigma_{e}^{2}\sigma_{w}^{2}\).
The first term in \(\Theta_{\infty}(c,x)\) captures the token similarity between sliding windows \(c\) and \(c_{j}\) regardless of token positions. In the second term, \(\sum_{j}H[\omega_{c}(c,c_{j})]\boldsymbol{e}_{k}^{\top}\boldsymbol{e}_{j+k-1}\) can be viewed as the weighted frequency of token \(e_{k}\) in instance \(x\), and when \(\sigma_{v}\) is sufficiently large, the converged NTK is majorly influenced by the sum of the weighted frequencies of the tokens in \(c\) appearing in \(x\).
**Theorem 4.4**.: The dynamics of the label score of the test sliding window \(c\) obey
\[\dot{s}^{c} =\frac{\rho}{m}\sum_{k=1}^{K}\!\!\sum_{(x,y)\in\mathcal{D}}\!\!g(- ys^{(x)})y\omega(e_{k},x) \tag{12}\] \[+\frac{1}{m}\sum_{(x,y)\in\mathcal{D}}\!\!g(-ys^{(x)})y\sum_{j}F[ \omega_{c}(c,c_{j})],\]
where \(\omega(e_{k},x)=\sum_{j}\!H[\omega_{c}(c,c_{j})]\mathbf{e}_{k}^{\top}\mathbf{e}_{j+k-1}\).
Theorem 4.4 indicates that with a sufficiently large \(\sigma_{v}\), the learning dynamics for window \(c\) may mainly depend on the linear combination of the weighted learning dynamics of its constituent tokens. Similar analysis can be performed on the label score of the sliding window. This may not exactly encode n-grams, which are inherently sensitive to order and can extend beyond their constituent elements. Instead, for each window, it is more akin to the composition model based on vector addition as described in the work of Mitchell and Lapata (2009). The second term in Equation 12 may not be zero even if \(c\) shares no tokens with \(x\), suggesting there can be an induced feature bias similar to the one in the MLP model.
When \(c\) only shares tokens with either positive or negative instances, regardless of position, the corresponding label score will receive relatively large gains in one direction during updates. This means the CNN model also captures co-occurrence features between tokens and labels. Importantly, a single token can also be viewed as a sliding window, padded with additional tokens, thereby leading to conclusions about the trend of label scores that mirror those drawn from the MLP model.
### SA
We employ a fundamental self-attention module, analogous to the component found in Transformers. The representation of the \(i\)-th output in the instance will be computed as a weighted sum of token representations as follows,
\[\mathbf{h}_{i}=\sum_{j=1}^{l^{(x)}}\frac{\alpha_{ij}}{\sqrt{d}}\mathbf{W}^{e}\mathbf{e}_{j}, \tag{13}\]
where \(\alpha_{ij}\) is the weight produced by a _softmax_ function as follows,
\[\alpha_{ij}=\frac{\exp(a_{ij})}{\sum_{j^{\prime}=1}^{l^{(x)}}\exp\bigl{(}a_{ ij^{\prime}}\bigr{)}}. \tag{14}\]
We define the attention score \(a_{ij}\) from position \(i\) to \(j\) as
\[a_{ij}=\frac{(\mathbf{W}^{e}\mathbf{e}_{i}+P_{i})^{\top}(\mathbf{W}^{e}\mathbf{e}_{j}+P_{j})}{ d}, \tag{15}\]
where \(P_{i}\) (\(P_{j}\)) is the positional embedding at position \(i\) (\(j\)) and will be fixed during training. The instance label score will be computed as
\[s=\mathbf{v}^{\top}\sum_{i=1}^{l^{(x)}}\mathbf{h}_{i}=\sum_{i=1}^{l^{(x)}}\sum_{j=1}^ {l^{(x)}}\frac{\alpha_{ij}}{\sqrt{d}}\mathbf{v}^{\top}\mathbf{W}^{e}\mathbf{e}_{j}, \tag{16}\]
which can be viewed as the weighted sum of token-level label scores if we define such a score for each token \(e\) as \(s_{e}=\frac{1}{\sqrt{d}}\mathbf{v}^{\top}\mathbf{W}^{e}\mathbf{e}\). We consider the case where the test input is also simply a token \(e\).
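The following minimal NumPy sketch evaluates Equations (13)-(16) for one instance; the random weights, fixed random positional embeddings, and one-hot token encoding are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Equations (13)-(16) with random stand-in weights.
rng = np.random.default_rng(0)
d, L, V = 16, 5, 10
We = rng.normal(size=(d, V))
v = rng.normal(size=d)
P = rng.normal(size=(L, d))                 # fixed positional embeddings

E = np.eye(V)[rng.integers(0, V, size=L)]   # (L, V) one-hot tokens
X = E @ We.T + P                            # W^e e_i + P_i, shape (L, d)

A = X @ X.T / d                             # attention scores a_ij, Eq. (15)
A = np.exp(A - A.max(axis=1, keepdims=True))
alpha = A / A.sum(axis=1, keepdims=True)    # softmax over j, Eq. (14)

token_scores = (E @ We.T) @ v / np.sqrt(d)  # token-level score s_e per position
s = (alpha * token_scores[None, :]).sum()   # Eq. (16): weighted sum over i, j
print(s)
```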
**Lemma 4.5**.: When \(d\rightarrow\infty\), the NTK between the token \(e\) and the instance \(x\) will converge to \(\Theta_{\infty}(e,x)\), which obeys
\[\Theta_{\infty}(e,x)\!\approx\!(\sigma_{e}^{2}\!+\!\sigma_{v}^{2})\sum_{i=1}^{ l^{(x)}}\!\!\sum_{j=1}^{l^{(x)}}\!\!\mathbb{E}(\alpha_{ij})\mathbf{e}^{\top}\mathbf{e}_{j}, \tag{17}\]
where \(\mathbb{E}(\alpha_{ij})\) is the expectation of \(\alpha_{ij}\).
**Theorem 4.6**.: The learning dynamics of the label score of a token \(e\) obey
\[\dot{s}^{e}=\frac{\rho}{m}\sum_{(x,y)\in\mathcal{D}}\!\!g(-ys^{(x)})y\omega(e, x), \tag{18}\]
where \(\omega(e,x)=\sum_{i=1}^{l^{(x)}}\!\sum_{j=1}^{l^{(x)}}\!\mathbb{E}(\alpha_{ij}) \mathbf{e}^{\top}\mathbf{e}_{j}\) and \(\rho=\sigma_{e}^{2}\!+\!\sigma_{v}^{2}\).
Theorem 4.6 shows that the learning dynamics of token \(e\)'s label score also depend on the weighted sum of the frequencies of \(e\) appearing in \(x\). The learning dynamics of a single token's label score will therefore likely resemble those in the MLP model, capturing the co-occurrence features between tokens and labels, up to the attention weights. Unlike the MLP model discussed in Theorem 4.2, this model may not suffer from an induced bias. This will be further explored in our experiments.
### MV
We consider the matrix-vector representation as applied in adjective-noun composition (Baroni and Zamparelli, 2010) and recursive neural networks (Socher et al., 2012). It models each word pair through matrix-vector multiplication. The label score of an instance is defined as
\[s=\mathbf{v}^{\top}\sum_{j}\frac{1}{d\sqrt{d}}\mathbf{M}(\mathbf{e}_{j})\mathbf{W}^{e}\mathbf{e}_{j +1}, \tag{19}\]
where \(\mathbf{M}(\mathbf{e}_{j})=\text{diag}(\mathbf{W}\mathbf{W}^{e}\mathbf{e}_{j})\) (diag converts a vector into a diagonal matrix) and \(j=1,2,\ldots,l^{(x)}-1\).
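A minimal NumPy sketch of Equation (19), using the fact that multiplying by \(\text{diag}(\mathbf{m})\) equals element-wise multiplication by \(\mathbf{m}\); all weights are random stand-ins.

```python
import numpy as np

# Minimal sketch of Equation (19) with random stand-in weights.
rng = np.random.default_rng(0)
d, L, V = 16, 5, 10
We = rng.normal(size=(d, V))
W = rng.normal(size=(d, d))
v = rng.normal(size=d)

E = np.eye(V)[rng.integers(0, V, size=L)]   # (L, V) one-hot tokens
emb = E @ We.T                              # W^e e_j for each position

s = 0.0
for j in range(L - 1):
    m = W @ emb[j]                          # diag(W W^e e_j) stored as a vector
    s += v @ (m * emb[j + 1]) / (d * np.sqrt(d))  # diagonal matrix-vector product
print(s)
```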
**Lemma 4.7**.: Given a bigram consisting of two tokens \(e_{a}e_{b}\), with the infinite network width the NTK will converge to
\[\Theta_{\infty}(e_{a}e_{b},x)\!=\!(\sigma_{e}^{2}\!+\!3\sigma_{w}^{2})\sigma_{e}^{ 2}\sigma_{w}^{2}\sum_{j}\mathbf{e}_{j}^{\top}\mathbf{e}_{a}\mathbf{e}_{j+1}^{\top}\mathbf{e}_{b}. \tag{20}\]
It is worth highlighting that the interaction \(\mathbf{e}_{j}^{\top}\mathbf{e}_{a}\mathbf{e}_{j+1}^{\top}\mathbf{e}_{b}\) differs from the interactions arising in the aforementioned models. When \(e_{a}\equiv e_{j}\) and \(e_{b}\equiv e_{j+1}\) (i.e., \(e_{a}e_{b}\equiv e_{j}e_{j+1}\)), the NTK gains a relatively large value, implying the ability to capture co-occurrence knowledge between bigrams and labels.
**Theorem 4.8**.: The dynamics of the label score of the test bigram \(e_{a}e_{b}\) obey
\[\dot{s}^{ab}=\frac{\rho}{m}\sum_{(x,y)\in\mathcal{D}}g(-ys^{(x)})y\sum_{j}\mathbf{e}_{j}^{\top}\mathbf{e}_{a}\,\mathbf{e}_{j+1}^{\top}\mathbf{e}_{b}, \tag{21}\]
where \(\rho=(\sigma_{e}^{2}+3\sigma_{w}^{2})\sigma_{e}^{2}\sigma_{w}^{2}\). The label score thus receives its largest gains from training instances in which the bigram \(e_{a}e_{b}\) itself occurs.
### Feature Extraction
We illustrate the label scores for the extracted co-occurrence pairs to examine the features predicted by our approach. It can be seen from Figures 2(a), 2(b), and 2(c) that the label scores for tokens in the extracted co-occurring pairs evolve as expected over epochs for the MLP, CNN, and SA models. The label scores of tokens co-occurring predominantly with the positive label consistently receive positive gains during training, whereas those of tokens co-occurring predominantly with the negative label experience negative gains, thus playing opposite roles in final classification decisions. Similar patterns can be observed on IMDB in Appendix C. We also extract bigrams co-occurring predominantly with either the positive or negative label from SSTwsub and calculate their label scores using a trained MV model, which exhibits the capability of capturing the co-occurrence between bigrams and labels as shown in Figure 3.
Our analysis of the binary classification tasks extends to the multi-class scenario on Agnews, a four-class dataset. The label scores for the tokens associated with a specific class are assigned relatively large values in the dimension corresponding to that class, as shown in Figures 4(a), 4(b), 4(c), and 4(d). These observations support our analysis of the feature extraction mechanisms within our target models.
In addition, we extend our experiments to language modeling tasks, which can be viewed as a variant of multi-class classification with the label space equal to the vocabulary size. Interestingly, despite their complexity, we observe similar token-label patterns in Transformer-based language models incorporating self-attention modules, at both the word and character levels. Particularly, we
Figure 3: Distribution of the label scores for extracted bigrams from SSTwsub. “p” refers to _positive_ and “n” refers to _negative_.
Figure 2: Evolution of the label scores for the extracted tokens from SST over epochs. “pos token” and “neg token” refer to “positive tokens” and “negative tokens”, respectively.
find that nanoGPT, a light-weight implementation of GPT, can capture the co-occurrence features between context characters and target characters on the character-level Shakespeare dataset, and reflect them in the label scores as shown in Figure 5. Given a context character, the model's output is more likely to assign higher scores to target characters that predominantly co-occur with this context character in the training data, thereby making those target characters more likely to be predicted. This implies the significance of a large dataset may be (partially) ascribed to rich co-occurrence information between tokens. Further details can be found in Appendix C.
**Induced Bias.** Our approach also indicates that factors such as activation and initial weight variances could affect feature extraction. We downscale the variances for the final-layer weight vectors at initialization and compare the learning curves of the extracted tokens' label scores from models with different activations. As can be seen from Figures 2(d) and 2(e), a smaller initialization of the final-layer weight variance can lead to a large feature bias, rendering negative tokens less significant than positive ones in the MLP and CNN models. This may not be a desirable situation, as Table 3 suggests a performance decline for the MLP model with _ReLU_. Furthermore, we compare other activation functions such as \(\tanh\), _GeLU_, and _SiLU_, which are alternatives to _ReLU_ (see footnote 7). Figures 2(g) and 2(h) show that these alternatives are more robust than _ReLU_ in the MLP model. This also suggests that while non-linear activations may not significantly alter the nature of learned features during training, they can affect the balance of the extracted features. Figure 2(f) shows the SA model is also robust to the change in initialization. However, incorporating an MLP with _ReLU_ activation after the SA model reintroduces bias, as can be observed in Figure 2(i), suggesting a possible reason why _ReLU_ was replaced in models such as BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020), and LLaMA (Touvron et al., 2023), despite its presence in the original Transformer architecture (Vaswani et al., 2017).
Footnote 7: The discussion of \(\tanh\) and visualization of _SiLU_ can be found in Appendix B and Appendix C, respectively.
**Models' Limitations.** We aim to examine whether the CNN and SA models have a limitation in encoding n-grams in situations beyond constituent tokens' semantic meanings. We choose negation phenomena as our testbed, where a negation token can (partially) reverse the meanings of both positive and negative phrases, a task that is challenging to achieve by linear combination. We run experiments on the SSTwsub dataset with labeled sub-phrases, which contains rich negation phenomena, i.e., phrases with their negation expressions achieved by prepending negation tokens such as _not_ and _never_. We extract positive and negative adjectives and create their corresponding negation expressions by prepending the negation word _not_. Figure 6(a) shows that the SA model can capture negation phenomena for positive adjectives
Figure 4: Label scores for extracted tokens from Agnews, a dataset with four classes. SA model. \(d=64\).
Figure 5: Distribution of the label scores for target characters majorly (blue) and rarely co-occurring with each extracted context character. nanoGPT. Shakespeare Dataset.
Table 3: Average accuracy (%) with scaled variances (I: \(\sigma_{v}=0.1\), II: \(\sigma_{v}=0.001\)) and different activation functions; 3 trials for each run. MLP model.

| **Dataset** | **Split** | ReLU I | ReLU II | tanh I | tanh II | GeLU I | GeLU II | SiLU I | SiLU II |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SST | valid | 78.4 | 68.0 | 77.2 | 77.3 | 78.0 | 78.3 | 77.8 | 77.8 |
| SST | test | 80.0 | 67.3 | 78.9 | 78.9 | 79.9 | 79.8 | 79.7 | 79.9 |
| Agnews | valid | 91.1 | 90.4 | 91.1 | 90.7 | 90.8 | 90.0 | 90.7 | 90.1 |
| Agnews | test | 91.0 | 90.2 | 90.6 | 90.1 | 90.6 | 89.6 | 90.6 | 89.6 |
| IMDB | valid | 89.5 | 89.6 | 89.8 | 89.7 | 91.5 | 91.7 | 91.5 | 91.8 |
| IMDB | test | 89.9 | 89.5 | 89.9 | 90.2 | 91.2 | 91.4 | 91.2 | 91.4 |
but does not perform well for negative adjectives, as shown in Figure 6(b). Specifically, prepending a negation word to the negative adjectives does not alleviate their negativity as expected but leads to the contrary. Based on our analysis, the polarity of a negation expression relies largely on the linear combination of the tokens' polarity in the SA model. As both the negation word _not_ (see footnote 8) and negative adjectives are assigned negative scores, their linear combination will still be negative. This is undesirable but perhaps not surprising, as recent studies (Liu et al., 2021; Dong et al., 2021; Orvieto et al., 2023) have challenged the necessity of self-attention modules. Similar patterns can also be observed on extracted phrases with negation words, on the CNN model, and even the Transformer model in Appendix C. Conversely, the MV model demonstrates the efficacy of capturing such negation for negative adjectives, as shown in Figure 6(c), demonstrating that the multiplication mechanism may play a more effective role in composing semantic meanings.
Footnote 8: The negation word _not_ appears more frequently in negative instances (2086 times compared to 813 in positive instances).
### Discussion
Our experimental results verify our theoretical analysis of the feature extraction mechanisms employed by fundamental models during their training process. These findings hold even with network widths as small as \(d=64\), a scenario in which the infinite-width hypothesis is not fully realized. This observed pattern underscores the robustness and generalizability of our analysis, a conclusion that aligns with the insights presented by Arora et al. (2019), who suggest that as network width expands, the NTK closely approximates the computation under infinite-width conditions while keeping the error within established bounds. In our study, we noted that both the CNN and self-attention models predominantly rely on the linear combination of token-label features. However, they exhibit limitations in effectively composing n-grams beyond tokens, a deficiency highlighted in negation cases. This observation points towards a potential need for alternative models that are adept at handling tasks involving complex n-gram features, and it aligns with studies by Bhattamishra et al. (2020), Hahn (2020), Yao et al. (2021), and Chiang and Cholak (2022), which underscore the constraints of self-attention modules despite their practical successes. In contrast, the MV model, based on matrix-vector multiplication, better captures such negation, as is evident from both the analytical and the empirical results; it emerges as a promising alternative for tasks that hinge on the interpretation of n-grams. Regarding activation functions, our findings indicate that the utilization of _ReLU_ does not significantly impact the nature of the features learned but can introduce a feature bias. Consequently, we suggest exploring alternative activation functions to mitigate this bias, enhancing the model's performance and reliability in diverse applications.
## 6 Conclusions
We propose a theoretical approach to delve into the feature extraction mechanisms behind neural models. By focusing on the learning dynamics of neural models under extreme conditions, we can shed light on the useful features acquired from training data. We apply our approach to several fundamental models for text classification and explain how these models acquire features during gradient descent. Our approach also allows us to reveal significant factors for feature extraction; for example, an inappropriate choice of activation function may induce a feature bias. Furthermore, we may infer the limitations of a model based on the features it acquires, thereby aiding in the selection (or design) of an appropriate model for specific downstream tasks. Although our analysis rests on the infinite-width hypothesis, the predicted patterns remain clearly observable at finite widths. Our future directions include analyzing more complex neural architectures.
Figure 6: Label scores for the extracted _positive adjectives_ (pos/p adj) and _negative adjectives_ (neg/n adj), as well as their negation expressions. SSTwsub. “[-]” refers to the negation operation.
### Limitations
Despite the findings on the aforementioned fundamental models, applying our approach to analyze complex models like Transformers, which incorporate numerous layers, non-linear activation functions, and normalizations, presents challenges due to the increased complexity. These factors contribute to more intricate learning dynamics, making it less straightforward to gain comprehensive insights into the model's behavior. We leave investigating and formulating these dynamics to future work.
## Acknowledgements
We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their constructive comments and support on this work. This research/project is supported by Ministry of Education, Singapore, under its Tier 3 Programme (The Award No.: MOET320200004), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No. : MOE-T2EP20122-0011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore.
|
2310.19063 | Feature Aggregation in Joint Sound Classification and Localization
Neural Networks | This study addresses the application of deep learning techniques in joint
sound signal classification and localization networks. Current state-of-the-art
sound source localization deep learning networks lack feature aggregation
within their architecture. Feature aggregation enhances model performance by
enabling the consolidation of information from different feature scales,
thereby improving feature robustness and invariance. This is particularly
important in SSL networks, which must differentiate direct and indirect
acoustic signals. To address this gap, we adapt feature aggregation techniques
from computer vision neural networks to signal detection neural networks.
Additionally, we propose the Scale Encoding Network (SEN) for feature
aggregation to encode features from various scales, compressing the network for
more computationally efficient aggregation. To evaluate the efficacy of feature
aggregation in SSL networks, we integrated the following computer vision
feature aggregation sub-architectures into a SSL control architecture: Path
Aggregation Network (PANet), Weighted Bi-directional Feature Pyramid Network
(BiFPN), and SEN. These sub-architectures were evaluated using two metrics for
signal classification and two metrics for direction-of-arrival regression.
PANet and BiFPN are established aggregators in computer vision models, while
the proposed SEN is a more compact aggregator. The results suggest that models
incorporating feature aggregations outperformed the control model, the Sound
Event Localization and Detection network (SELDnet), in both sound signal
classification and localization. The feature aggregation techniques enhance the
performance of sound detection neural networks, particularly in
direction-of-arrival regression. | Brendan Healy, Patrick McNamee, Zahra Nili Ahmadabadi | 2023-10-29T16:37:14Z | http://arxiv.org/abs/2310.19063v2 | # Feature Aggregation in Joint Sound Classification and Localization Neural Networks
###### Abstract
This study addresses the application of deep learning techniques in joint sound signal classification and localization networks. Current state-of-the-art sound source localization (SSL) deep learning networks lack feature aggregation within their architecture. Feature aggregation enhances model performance by enabling the consolidation of information from different feature scales, thereby improving feature robustness and invariance. This is particularly important in SSL networks, which must differentiate direct and indirect acoustic signals. To address this gap, we adapt feature aggregation techniques from computer vision neural networks to signal detection neural networks. Additionally, we propose the Scale Encoding Network (SEN) for feature aggregation to encode features from various scales, compressing the network for more computationally efficient aggregation. To evaluate the efficacy of feature aggregation in SSL networks, we integrated the following computer vision feature aggregation sub-architectures into a SSL control architecture: Path Aggregation Network (PANet), Weighted Bi-directional Feature Pyramid Network (BiFPN), and SEN. These sub-architectures were evaluated using two metrics for signal classification and two metrics for direction-of-arrival regression. PANet and BiFPN are established aggregators in computer vision models, while the proposed SEN is a more compact aggregator. The results suggest that models incorporating feature aggregations outperformed the control model, the Sound Event Localization and Detection network (SELDnet), in both sound signal classification and localization. Among the feature aggregators, PANet exhibited superior performance compared to other methods, which were otherwise comparable. The results provide evidence that feature aggregation techniques enhance the performance of sound detection neural networks, particularly in direction-of-arrival regression.
Joint sound signal classification and localization, Multi-task deep learning, Feature aggregation
**Availability of data, material, or code:** [https://gitlab.com/dsim-lab/paper-codes/feature-aggregation-for-neural-networks](https://gitlab.com/dsim-lab/paper-codes/feature-aggregation-for-neural-networks)
## I Introduction
Sound source localization (SSL) represents an imperative domain within the broader field of audio signal processing, holding significant implications for topics such as robotics, hearing aids, and speech recognition systems [1]. SSL techniques aim to ascertain the location or direction-of-arrival (DOA) of a sound source, which provides critical data for sound source separation [2], speech augmentation [3], robot-human interaction [4], noise control [5], and auditory scene analysis [6].
A key gap within existing SSL neural networks is the lack of feature aggregation within their architecture. Feature aggregation can boost a model performance by consolidating information from various scales and contexts, thereby enhancing feature robustness and scale invariance. It is particularly vital for SSL networks, which must distinguish between direct signals and reflections [7]. This paper aims to address this gap by adapting feature aggregation techniques from computer vision neural networks and applying them to signal detection neural networks. Moreover, we propose the development of a novel architecture, the Scale Encoding Network (SEN), which serves as a compact feature aggregator in the context of SSL.
### _Related Work_
Early endeavors in machine learning for SSL were focused on conventional machine learning models, namely the Multi-layer Perceptron (MLP) and Support Vector Machines (SVM) [8, 9]. The aforementioned models encountered difficulties, particularly in effectively managing large datasets and addressing the complexities associated with temporal relationships in the input features. In light of these difficulties, there has been a notable shift towards Convolutional Neural Networks (CNNs), which have demonstrated the ability to capture spatial features in data [6, 10]. As deep learning techniques have advanced, the development of Recurrent Neural Networks (RNNs) gave rise to Convolutional Recurrent Neural Networks (CRNNs). This combination successfully utilized both spatial and temporal dimensions of the data, leading to improved DOA estimation [6, 11].
Residual CNNs (Res-CNNs) soon emerged, incorporating shortcut connections that provide a link between the input and output layers. Res-CNNs proved superior in performance compared to both conventional CNNs and CRNNs [12, 13]. The introduction of novel architectures such as Res-CRNNs and deep generative models enhanced the DOA estimation [14, 15]. Additionally, attention mechanisms have continued to enhance the capabilities of neural networks by allowing them to selectively concentrate on pertinent features, improving the accuracy of the estimation process [16].
### _Contributions_
This paper makes the following contributions:
1. We introduce Feature Aggregation techniques from image-based Object Detection Neural Networks into SSL and Sound Detection Res-CRNN networks.
2. We propose and test a new Feature Aggregator method, Scale Encoder Network. This aggregator optimizes aggregation speed and computational costs by encoding multiple scales throughout the network.
3. We provide a Feature Aggregator Library for TensorFlow's Functional API. This library contains pre-made aggregators and allows for the efficient creation of new aggregators. This library will be made publicly available.
### _Overview_
The remainder of this paper is organized as follows. Section II (Theory) explains challenges with feature scaling in SSL networks, the role of feature aggregation in resolving them, and its practical application in real-world models. Section III (Methodology) elaborates on the training and testing procedures used to gather data for evaluating the merit of our theory. Section IV (Evaluation) describes the dataset, preprocessing, evaluation metrics, and baseline methods used to gauge our results. Section V (Results) investigates the testing outcomes and their contextual meaning. Section VI (Conclusions) provides a summary of our findings, their implications, and directions for future research.
## II Theory
This section is divided into four parts. Section II.A (Scaling) discusses the importance of scale invariance within neural networks. Section II.B (Feature aggregation) explains how feature aggregation addresses scale invariance, how feature aggregation is performed, and various feature aggregation designs. Section II.C (Scale Encoder Network) elaborates on our custom aggregation approach. Section II.D (Object Detection Architecture) gives an overview of standard object detection designs and how feature aggregation is employed within a full architecture.
### _Scaling_
Feature scaling is a powerful tool in both object detection and SSL neural networks. The ability to learn notable patterns in data and then identify these patterns at different scales reduces the amount of training data required and chances of overfitting to a particular size or amplitude input. The scale of extracted features in convolutional neural networks (CNNs) depends on the tensor dimensions and convolutional hyperparameters used in each layer of the network [17, 18]. As the input tensor is passed through the network, features are extracted at different scales. Downsampling, such as max-pooling or strided convolutions, reduces the size and spatial resolution of the feature map. As a result, finer resolution features that are present in earlier networks layers may be neglected in subsequent coarser resolution layers. This phenomenon is known as the "semantic gap," where the features in different layers of the network represent different levels of abstraction, and smaller features may be ignored as the network learns more complex representations [17, 18].
This semantic gap is of extreme importance in perception models because features of smaller scale can be overlooked in deeper convolutions. For example, in computer vision, features from distant or small objects may be lost as the input image is downsampled during processing. The model then loses the ability to identify a class at various sizes and distances and is therefore quite limited in its uses. In real-world applications, most classes vary in size and will not be at the same exact location in each image.
This same principle applies to SSL and may be even more important. This is because SSL algorithms must differentiate between direct and indirect signals (such as reflections, reverberations, and diffractions). These indirect signals generally have a similar (or identical) wave pattern as their source's direct signals, but at a reduced amplitude and with a phase shift. From a neural network's perspective, features of quieter or indirect signals are the equivalent to another source that is further away; the distinguishing patterns are the same but of different amplitude and phase. To differentiate between direct and indirect signals, a SSL model should: 1) identify all signals from the same source, and then 2) isolate the direct signal based on its scale relative to the indirect signals. For both these steps, the model must understand feature scaling.
To address the semantic gap, specific architectures have been proposed, such as U-Net and feature aggregators. Various studies have compared the performance of feature aggregation and U-Net architectures, demonstrating that feature aggregation networks achieve high performance in various image segmentation tasks [19]. However, tasks such as brain tumor segmentation, which place more emphasis on fine-grained details, benefit from the use of U-Net, provided that a sufficiently large dataset is available for training [20, 21]. Additionally, feature aggregation networks exhibited enhanced computational efficiency, reduced memory footprint, and the ability to achieve high accuracy with smaller training datasets [22]. The latter was demonstrated in a study that examined the effectiveness of feature aggregation networks in the context of image segmentation. The demand for less data is particularly crucial in the context of sound detection models, as each class of signal must be sampled at many angles or locations, with variables like room size/shape, wall material, and objects in the room affecting signal reflections and reverberations.
Therefore, the choice between feature aggregation and encoder architectures ultimately depends on the desired tradeoff between efficiency and accuracy. Some tasks may require more emphasis on fine-grained details and thus benefit from the use of U-Net, while others may prioritize computational efficiency and simplicity, making feature aggregation a more suitable option [21]. This study focuses on feature aggregation due to its practicality in real world applications. Large computational cost, memory footprint and training dataset requirements make sound detection U-Nets impractical for sound detection.
### _Feature Aggregation_
The purpose of feature aggregation is to combine features from various convolutions throughout the network, improving scale invariance and mitigating overfitting and exploding or vanishing gradients [23].
Feature aggregation has three sequential steps: resampling inputs to match shapes, aggregating inputs (weighted averaging or concatenation), and convolving the aggregated tensor [23]. These processes are completed inside a TensorFlow sub-model called a "Node", as illustrated in Fig. 1. As discussed later in this section, multiple nodes are connected residually within an aggregator, allowing this process to be repeatedly executed throughout an elaborate structure.
Resampling consists of two processes, downsampling and upsampling [23]. Downsampling reduces the resolution of feature maps, resulting in a smaller spatial dimension but a larger number of channels. It is typically done through operations such as max pooling or strided convolutions.
Upsampling increases the resolution of feature maps by interpolating the values and can be achieved through techniques such as deconvolutions, transposed convolutions, or interpolation [23, 24, 25]. One commonly used method for upsampling tensors in computer vision and image processing is bilinear interpolation. This method is preferred over other interpolation methods like nearest-neighbor interpolation because it produces smoother and more natural-looking results [26]. Nearest-neighbor interpolation simply selects the nearest pixel value without considering its neighboring pixels, which can result in jagged edges and other artifacts [27]. This spatial preservation of features is vital for the resampling of both images and spectrograms [7, 10]. Additionally, bilinear interpolation can be easily extended to higher dimensions, such as 3D volumes or tensors [22]. It is also relatively easy to implement and understand, making it a popular choice for spatial features [19]. This study utilizes bilinear interpolation over other upsampling methods due to the reasons previously listed.
Industrial computer vision object detection models that demonstrate resampling are YOLO (You Only Look Once) [24] and SSD (Single Shot MultiBox Detector) [25]. In YOLO, downsampling is performed using strided convolutions, while upsampling is achieved using transposed convolution. In SSD, downsampling is performed using max pooling, and upsampling is done using deconvolution layers. Comparatively, this study performs downsampling using strided convolutions and upsampling through bilinear interpolation. This demonstrates that there is not one universal method for completing resampling.
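As a minimal TensorFlow sketch of the two resampling operations used in this study (shapes and layer hyperparameters here are illustrative, not the tuned model values):

```python
import tensorflow as tf

# Strided convolution for downsampling and bilinear interpolation for
# upsampling; the tensor shapes below are illustrative placeholders.
x = tf.random.normal((1, 64, 64, 32))        # (batch, H, W, channels)

down = tf.keras.layers.Conv2D(64, kernel_size=3, strides=2, padding="same")
coarse = down(x)                              # -> (1, 32, 32, 64)

fine = tf.image.resize(coarse, size=(64, 64), method="bilinear")
print(coarse.shape, fine.shape)               # (1, 32, 32, 64) (1, 64, 64, 64)
```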
Once resampling is complete, the tensors are aggregated. In this study, we use weighted averaging to combine the resampled tensors. This is more efficient than the conventional method of concatenation, as it results in a smaller tensor [28]. Weighted averaging allows the model to directly contrast features derived at distinct scales and processing depths, diversifying the scale and refinement level of features used for final predictions [29]. The weights are trainable variables, allowing the model to optimize the aggregation.
As seen in Fig. 2, feature aggregators connect nodes via residual connections, creating a complex residual network. A benefit of these residual connections is the minimization of vanishing/exploding gradients throughout the network [29]. Residual connections enable the gradient to flow through the network more easily, stabilizing the training process. In Path Aggregation Network (PANet), residual connections propagate high-level semantic features from the top-down pathway to the bottom-up pathway, enabling the network to generate highly detailed object proposals at multiple scales [29, 30].
Feature Aggregators are classified based on their number of nodes and connection order, which varies with the feature extraction depth and structure. However, aggregation of additional feature scales results in computational overhead, necessitating a tradeoff between higher aggregation and prediction speed [23, 29]. This is why recent object detection models transitioned from Feature Pyramid Network (FPN) to PANet for enhanced performance [19], yet subsequent developments have focused on more compact alternatives such as Neural
Fig. 1: Aggregation node diagram illustrating sequential procedures performed within each node [23]
Fig. 2: Example diagrams of PANet and BiFPN feature aggregators with five scales [23].
Architecture Search Feature Pyramid Network (NAS-FPN) [31] and Weighted Bi-Directional Feature Pyramid Network (BiFPN) [23].
### _Scale Encoder Network_
The new aggregator introduced in this study is the Scale Encoder Network (SEN). The goal of SEN is to reduce the computational complexity of the node aggregation process while still enabling the neural network to weigh scales in a residual manner. Its premise is that encoding multiple scales into one results in fewer nodes but still addresses the semantic gap.
Aggregators such as FPN, PANet, and BiFPN essentially update a constant number of scales throughout their process. If the backbone of the network has N nodes (or scales), the final layer of these aggregators also has N nodes and outputs. SEN, on the other hand, compresses multiple scales into one, as seen in Fig. 3. In SEN, consecutive aggregation layers reduce N until it is equal to the desired number of outputs. In Fig. 3, five initial scales are compressed into two, then one scale during aggregation.
In [23], seven backbone outputs feed into aggregators. In this case, PANet adds 14 resamplings, weighted averages, and convolutions to a network of only seven convolutions. A SEN with a compression stride of two would have a first layer of three nodes and a second layer of one node. In models like DarkNet [32], there can be over 50 convolutional blocks in the backbone, and a SEN aggregator will have a huge impact on the computational cost of feature aggregation.
The number of scales between nodes in a SEN layer is referred to as the compression stride; there are several factors to consider when choosing it. The first factor is the volume and scope of resampling. While downsampling results in data loss, upsampling necessitates data approximation. Both of these are undesirable and ought to be kept to a minimum. Resampling is only carried out between nodes within one scale-size step in all prior aggregators in an effort to minimize these side effects. Connection overlaps should also be considered: a compression stride of one can result in many overlapping connections, which could lead to repetitive calculations and a preference for particular scales. To compare the effects of various compression stride sizes, this study evaluates two different SEN designs, which are explained further in Section III, Methodology.
### _Object Detection Architecture_
This section discusses Multi-Task Learning (MTL) as well as the various stages of object detection in neural networks. These aspects of neural network architecture are not exclusive to object detection networks but are pivotal for object detection execution.
MTL refers to the process of training neural networks to make predictions for multiple tasks, such as object localization and classification, simultaneously [33]. This enables models to train variables while also cross-referencing losses from each task. By collectively downplaying noise and anomalous patterns that one task may have overemphasized, each task contributes evidence for the applicability of features. This prevents overfitting by focusing on crucial traits.
Eavesdropping occurs when information obtained from simple tasks is used to finish a complex task [33]. For instance, filters used in image segmentation to identify an object's class provide information about the object's shape, which can be used to calculate the object's coordinates.
Usually, object detection models consist of the three steps of feature extraction, feature aggregation, and prediction [32, 27]. Collectively, these processes extract relevant features from input tensors, combine those features into a single representation, and then use that representation to predict the presence and location of objects within each tensor's unique coordinate system. By following these three steps, object detection neural networks can achieve cutting-edge performance for a variety of object detection applications.
The input tensors are analyzed in the feature extraction stage (or "the backbone") to identify relevant data patterns, typically using convolutional layers. A hierarchy of increasingly complex patterns is produced as the backbone processes the tensor [34, 35]. High-level estimator performance is still not assured, even though these extracted features are a more useful representation for predictions than raw data.
In the feature aggregation stage, the object detection model will produce feature maps that are resistant to changes in scale, translation, and rotation [23, 29, 35]. The feature aggregation stage is covered in detail in Section 2.B, Feature Aggregation.
During the last stage, prediction (or "the head"), aggregated features are transformed into a complete set of predictions. In computer vision's object detection models, separate dense branches frequently perform classification and box-coordinate regression for the detected objects [24, 25].
For computing final predictions in object detection, anchors, like those found in YOLO and SSD, have become standard
Fig. 3: Example diagram of SEN feature aggregator encoding five scales down to one scale.
[24, 29]. In order to localize objects in an image, anchors are a collection of predefined bounding boxes with various scales and aspect ratios. Anchors streamline the prediction process by splitting the task into two distinct tasks: determining whether an anchor box contains an object or not, and adjusting the anchor box coordinates to fit any present objects. This method enables the network to generalize objects of specific shapes and sizes while reducing the number of trainable parameters, speeding up parameter optimization. The network only predicts the presence and location of objects in a fixed set of boxes rather than predicting the precise location of each object across the entire image, making anchors computationally efficient.
Anchors do not exist yet for SSL and sound detection models. However, feature aggregation is essential to proper function of anchors; so, incorporating feature aggregation into sound detection architectures allows for the use of sound signal anchors. Although anchors are not used in this study's models, it's important to note how the developments in this study enable the possibility of these future advancements. The localization and classification of objects at various scales and orientations is substantially enhanced by the combination of anchors and feature aggregation in computer vision; so, the development of anchor compatible architectures has clear potential in succeeding research [25, 27, 32].
This study uses a control model consisting of only a basic backbone and head. The backbone of this control model consists of three sequential convolutions, and the head is two branches, each with two sequential dense layers. As will be elaborated upon in the next section, Methodology, various feature aggregators are inserted between the backbone and head of this control model, replicating the computer vision object detection architecture described in this section.
## III Methodology
To isolate the effects of feature aggregation on SSL models, this section will introduce four new SSL models that integrate various aggregators into a control model (taken from [7]) and then trained with identical datasets [36] and preprocessing [7]. The control model, Sound Event Localization and Detection network (SELDnet), can be seen in Fig. 4 and the proposed models with aggregation in Fig. 5.
### _Development_
This study used the Keras library with TensorFlow's Functional API backend to implement and test this theory. This API was chosen because of its flexibility for creating non-sequential neural networks. Feature aggregation nodes are Model class objects that incorporate sequential resampling, weighted averaging, and convolutional layers for processing input tensors.
As the current version of the TensorFlow Functional API does not include a weighted averaging layer, we utilized a custom layer of the Layer class, with the weights designated as trainable variables. Subsequently, multiple node sub-models were interconnected to create Feature Aggregator sub-models, which were then integrated into the main model. This design allows nodes to be easily arranged to quickly create aggregators, and aggregators to be efficiently integrated into larger architectures.
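A minimal sketch of such a custom layer is shown below; the normalization of the trainable weights (non-negative weights divided by their sum, as in BiFPN-style fast fusion) is an assumption of this sketch rather than a statement of our exact implementation.

```python
import tensorflow as tf

class WeightedAverage(tf.keras.layers.Layer):
    """Learnable weighted average of N same-shaped input tensors.

    Minimal sketch; the normalization scheme is an assumption here.
    """
    def build(self, input_shape):
        n = len(input_shape)  # number of input tensors in the list
        self.w = self.add_weight(
            name="w", shape=(n,), initializer="ones", trainable=True)

    def call(self, inputs):
        w = tf.nn.relu(self.w)                # keep weights non-negative
        w = w / (tf.reduce_sum(w) + 1e-4)     # normalize to sum to ~1
        return tf.add_n([w[i] * t for i, t in enumerate(inputs)])

# Usage: fused = WeightedAverage()([tensor_a, tensor_b, tensor_c])
```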
Each model was trained for a maximum of 1000 epochs, with early stoppage if the SELD score (see Section IV.C) on the test split did not improve for 100 epochs. This early stoppage prevents network over-fitting. For the training loss, we utilized a weighted combination of binary cross-entropy for classification and MSE for localization, with an Adam optimizer using default parameters [37].
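As a sketch of this joint objective in Keras (the tiny two-headed model and the loss weighting below are placeholders, not the tuned SELDnet configuration):

```python
import tensorflow as tf

# Hypothetical two-headed model: per-frame class activity ("sed") and
# per-class Cartesian DOA coordinates ("doa"); shapes are illustrative.
inp = tf.keras.Input(shape=(128, 40))                  # spectrogram frames
h = tf.keras.layers.GRU(64, return_sequences=True)(inp)
sed = tf.keras.layers.Dense(11, activation="sigmoid", name="sed")(h)
doa = tf.keras.layers.Dense(33, activation="tanh", name="doa")(h)

model = tf.keras.Model(inp, [sed, doa])
model.compile(
    optimizer=tf.keras.optimizers.Adam(),              # default parameters
    loss={"sed": "binary_crossentropy", "doa": "mse"},
    loss_weights={"sed": 1.0, "doa": 1.0},             # placeholder weighting
)
model.summary()
```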
Simple 1x1 convolutions were used in the feature aggregators due to their ability to reduce the dimensionality of feature maps [38], apply nonlinear transformations [39], and combine information from multiple feature maps [40]. In terms of dimensionality reduction, the convolutional operation with a single filter and a stride of 1 can be used to generate a new feature map with a reduced number of channels, which is particularly beneficial in deep neural networks where the number of feature maps can quickly become large and computationally expensive to process [13]. As for nonlinear transformation, the convolutional operation with multiple filters and a nonlinear activation function, such as ReLU or softmax, is applied to feature maps to increase their expressive power and improve the performance of the neural network [39].
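For instance, a single 1x1 convolution can compress the channel dimension while preserving spatial resolution (the filter counts here are illustrative):

```python
import tensorflow as tf

# A 1x1 convolution reducing 256 feature channels to 64 without changing
# the spatial dimensions; channel counts are illustrative placeholders.
x = tf.random.normal((1, 32, 32, 256))
y = tf.keras.layers.Conv2D(64, kernel_size=1, activation="relu")(x)
print(y.shape)  # (1, 32, 32, 64)
```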
### _Final Architectures_
The feature aggregators chosen for this study are Path Aggregation Network (PANet), Weighted Bi-directional Feature Pyramid Network (BiFPN), and Scale Encoder Network (SEN). PANet and BiFPN were selected because of their well-established track record in popular object detection models, including YOLO and SSD [23, 25]. SEN is a new aggregator developed in this study. To evaluate the effects of the compression stride value, two models with SEN are tested: one test model with a compression stride of one (\(\text{SEN}_{\text{N=1}}\)) and the other with a compression stride of two (\(\text{SEN}_{\text{N=2}}\)). The SEN model with a compression stride of one uses two intermediate scales; these scale sizes are the averaged dimensions of their input tensors. All of these aggregators vary in number of nodes and connection patterns, allowing for analysis and speculation about optimal approaches to feature aggregation design.
The control architecture, SELDnet, is an MTL CRNN without feature aggregation. It simultaneously predicts the presence of multiple classes and their relative positions in 3D Cartesian
Fig. 4: Illustration of the control model, SELDnet [7].
coordinates. It has been chosen for a few reasons. First, this is currently a state-of-the-art architecture that performed well in a distinguished study by Adavanne et al. (2018) [7]. Second, the architecture design allows for easy integration of feature aggregators compared to non-sequential networks. Third, the control architecture's hyperparameters have already been tuned, allowing this study to focus on tuning feature aggregators.
As observed in Fig. 4, the control model's feature extraction and prediction stages are completed by three sequential convolutions and two branches of two dense layers, respectively. Fig. 5 presents the proposed architectures by this study. Fig. 5 (a)-(b) display SELDnet with the established PANet and BiFPN aggregators. Fig. 5 (c)-(d) illustrate SELDnet with two variations of SEN. The first variation, Fig. 5 (c), incorporates an aggregator with two SEN layers with compression strides of one. Fig. 5 (d) demonstrates the SELDnet with a single SEN layer of compression stride two.
## IV Evaluation
### _Dataset_
The REAL dataset, compiled by [36], serves as a valuable resource for research on sound event detection (SED) and localization [7]. The dataset consists of 216 uncompressed WAV audio recordings, each lasting 30 seconds, captured in various indoor and outdoor settings such as street junctions, tram stations, shopping malls, and pedestrian streets. These settings represent common acoustic environments found in urban areas and feature diverse overlapping sound sources and backgrounds.
The dataset includes spatial coordinates for the loudspeakers and microphones, along with annotations that provide information about the temporal boundaries, classification, and spatial coordinates of sound events present in each recording. It encompasses 11 distinct sound categories, including car, bus, train, tram, footsteps, speech, music, dog, bird, jackhammer, and siren.
### _Preprocessing_
In order to discern the specific impact of feature aggregation within our framework, the data preprocessing methodology employed is identical to the control model study [7] and uses the code from this reference repository. The preprocessing stage involves the following steps:
1. Raw audio files were de-noised using band-pass filtration to remove low and high-frequency noise. This method is effective against typical noise in recording environments [7]. Signals were then downsampled to 16 kHz, reducing computational complexity and ensuring efficient data preprocessing for future stages.
2. A Short-Time Fourier Transform (STFT) extracted features from the preprocessed audio signals to create a detailed time-frequency representation [11] (see the sketch after this list). A window size of 1024 samples and a hop size of 256 samples provided the optimal temporal and spectral details in the resulting spectrogram [41]. This spectrogram was used as the neural network input for SED and localization.
3. To bolster the training dataset and improve model generalization, data augmentation techniques were employed, including random time shifting, frequency shifting, and amplitude scaling [22]. This enriched dataset enhanced the model's adaptability and performance in varied acoustic scenarios.
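A minimal sketch of the STFT feature extraction in step 2 is given below; the use of librosa, the random stand-in signal, and the log-magnitude representation are illustrative assumptions, not necessarily the reference implementation.

```python
import numpy as np
import librosa

# A random signal stands in for a loaded, de-noised, 16 kHz recording.
sr = 16000
audio = np.random.randn(30 * sr).astype(np.float32)    # 30-second clip
spec = librosa.stft(audio, n_fft=1024, hop_length=256) # window 1024, hop 256
log_mag = np.log1p(np.abs(spec))                       # (513, n_frames) input
print(log_mag.shape)
```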
Fig. 5: Diagrams of final model architectures proposed by this study. Subfigures a, b, c, and d illustrate SELDnet with PANet, BiFPN, SEN\({}_{\text{N=1}}\) and SEN\({}_{\text{N=2}}\), respectively.
### _Metrics_
Each model's performance is evaluated using several metrics that measure the accuracy of the SED and the sound event DOA estimation.
The SED metrics are F-score and Error Rate. F-score is a widely used metric for binary classification problems that measures the balance between precision and recall [42]. It is defined as the harmonic mean of precision and recall. In the context of SED, True Positives (TP) refer to the correctly detected events, False Positives (FP) refer to the events that were incorrectly detected, and False Negatives (FN) refer to the events that were missed by the model [7]. F-score is a useful metric because it considers both the number of correctly detected events and the number of missed and false alarms. A higher F-score indicates a better performance.
\[F=\frac{2\cdot\sum_{k=1}^{K}TP(k)}{2\cdot\sum_{k=1}^{K}TP(k)+\sum_{k=1}^{K}FP( k)+\sum_{k=1}^{K}FN(k)} \tag{1}\]
where, for each one-second segment \(k\): \(TP(k)\), the number of true positives, represents the total number of sound event classes active in both ground truth and predictions; \(FP(k)\), the number of false positives, represents the number of sound event classes inactive in ground truth but predicted as active in the \(k\)th segment; and \(FN(k)\), the number of false negatives, represents the number of sound event classes active in ground truth but predicted as inactive in the \(k\)th segment.
Error Rate (ER) is another commonly used metric for SED, which measures the percentage of incorrectly detected events [7, 43]. ER is calculated as:
\[ER=\frac{\sum_{k=1}^{K}S(k)+\sum_{k=1}^{K}D(k)+\sum_{k=1}^{K}I(k)}{\sum_{k=1}^ {K}N(k)} \tag{2}\]
where, for each one-second segment k: N(k) is the total number of active sound event classes in the ground truth. S(k), substitution, is the number of times an event was detected at the wrong level and is calculated by merging false negatives and false positives without individually correlating which false positive substitutes which false negative. The remaining false positives and false negatives, if any, are counted as insertions I(k) and deletions D(k) respectively. These values are calculated as follows:
\[S(k)=min(FN(k),FP(k)) \tag{3}\]
\[D(k)=max(0,FN(k)-FP(k)) \tag{4}\]
\[I(k)=max(0,FP(k)-FN(k)) \tag{5}\]
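A direct implementation of Equations (2)-(5) from per-segment counts (the inputs below are toy values for illustration):

```python
import numpy as np

def error_rate(fn, fp, n):
    """Segment-based error rate, Eqs. (2)-(5).

    fn, fp, n: per-segment false negatives, false positives, and
    ground-truth active class counts.
    """
    fn, fp, n = map(np.asarray, (fn, fp, n))
    s = np.minimum(fn, fp)            # substitutions, Eq. (3)
    d = np.maximum(0, fn - fp)        # deletions,     Eq. (4)
    i = np.maximum(0, fp - fn)        # insertions,    Eq. (5)
    return (s.sum() + d.sum() + i.sum()) / n.sum()

print(error_rate(fn=[1, 0], fp=[0, 2], n=[3, 2]))  # toy example -> 0.6
```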
The DOA metrics are DOA error and frame recall. DOA error measures the average angular difference, in degrees, between the estimated and ground-truth DOAs over the entire dataset, where \(D\) is the total number of DOA estimates [7]. A lower DOA error indicates better performance. The error is defined as,
\[DOA\ Error=\frac{1}{D}\sum_{d=1}^{D}\sigma((x_{G}^{d},y_{G}^{d},z_{G}^{d}),(x _{E}^{d},y_{E}^{d},z_{E}^{d})) \tag{6}\]
where \((x_{E},y_{E},z_{E})\) is the predicted DOA estimate, \((x_{G},y_{G},z_{G})\) is the ground truth DOA, and \(\sigma\) is the angle between \((x_{E},y_{E},z_{E})\) and \((x_{G},y_{G},z_{G})\) at the origin for the d-th estimate:
\[\sigma=2\cdot\arcsin\left(\frac{\sqrt{\Delta x^{2}+\Delta y^{2}+\Delta z^{2}}} {2}\right)\cdot\frac{180}{\pi} \tag{7}\]
with \(\Delta x=x_{G}-x_{E}\), \(\Delta y=y_{G}-y_{E}\), and \(\Delta z=z_{G}-z_{E}\).
Frame recall, as defined by [7], is a metric used to measure the accuracy of a model's predictions at the level of time frames or segments. It accounts for situations where the number of estimated and ground-truth DOAs may not match: frame recall measures the percentage of time frames in which the number of estimated DOAs equals the number of reference DOAs. A higher frame recall indicates better performance, and it is calculated as:
\[FR=\frac{\sum_{k=1}^{K}TP(k)}{\sum_{k=1}^{K}TP(k)+FN(k)}\cdot 100 \tag{8}\]
We used a combined localization and classification score, SELD, to perform early training stoppage. If the SELD score did not improve over 100 epochs, training was terminated to prevent overfitting. SED and DOA score represent the overall performance of an estimator for sound event detection and localization, respectively. SELD is the average of these scores and functions as a single overarching metric to compare models. A lower value indicates better performance for DOA, SED and SELD scores. Eqs. (9), (10) and (11) define these metrics [7]:
\[\text{DOA score}=\frac{(\text{DOA Error}/180+(1-FR/100))}{2} \tag{9}\]
\[\text{SED score}=\frac{(ER+(1-F/100))}{2} \tag{10}\]
\[\text{SELD}=\frac{(\text{SED score}+\text{DOA score})}{2} \tag{11}\]
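The angular error of Equation (7) and the composite scores of Equations (9)-(11) can be computed directly; in this sketch the DOA vectors are assumed to be unit-normalized, and F-score and frame recall are given as percentages:

```python
import numpy as np

def doa_angle(g, e):
    """Angular distance in degrees between unit-norm ground-truth and
    estimated DOA vectors, Eq. (7)."""
    g, e = np.asarray(g, float), np.asarray(e, float)
    return 2 * np.degrees(np.arcsin(np.linalg.norm(g - e) / 2))

def seld_score(doa_error, frame_recall, er, f):
    """Composite scores, Eqs. (9)-(11); frame_recall and f in percent."""
    doa = (doa_error / 180 + (1 - frame_recall / 100)) / 2   # Eq. (9)
    sed = (er + (1 - f / 100)) / 2                           # Eq. (10)
    return (sed + doa) / 2                                   # Eq. (11)

print(doa_angle([1, 0, 0], [0, 1, 0]))        # 90.0 degrees
print(seld_score(doa_error=30.0, frame_recall=85.0, er=0.35, f=60.0))
```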
### _Baseline Methods_
SELDnet is an advanced deep learning architecture developed specifically for combined SED and DOA estimation tasks [7]. SELDnet processes spectral features dedicated to the SED task in parallel with spatial features for the DOA estimation. By leveraging the fusion of convolutional and recurrent layers, SELDnet effectively pinpoints both the occurrence and the spatial origin of a sound event.
MSEDnet, derived from SELDnet, is designed for monaural (single-channel) SED. By focusing on the SED task, MSEDnet provides an optimal solution for applications where the sole requirement is event detection without the need for spatial localization [44]. Its architecture is tailored to single-channel audio contexts, ensuring precise event detection.
SEDnet stands as a dedicated solution for SED, capitalizing on deep learning techniques [44]. With its architecture centered around event identification, SEDnet excels in scenarios where temporal detection of sound events is the primary objective. It delivers accuracy and efficiency in sound event classification without incorporating spatial estimation components.
DOAnet offers a specialized approach towards the spatial dimension of audio signals, focusing solely on DOA estimation [45].
MUSIC (Multiple Signal Classification) is a robust algorithm for DOA estimation. Relying on subspace methods, MUSIC differentiates the signal space from the noise space, facilitating precise DOA predictions for multiple sound sources [46]. Its mathematical foundation and proven efficiency in array signal processing render it a reliable choice for DOA estimation tasks, even in contexts dominated by neural network models [7].
## V Results
The results in TABLE I indicate that feature aggregation enhanced the control model's capacity to both classify and locate sound sources. None of the models evaluated in this study outperformed the algorithms specialized for only classification or localization, but all demonstrated a clear improvement in both tasks compared to the control model, SELDnet. Overall, the models developed in this study demonstrated a healthy balance in both localization and classification.
### _Classification_
For the SELDnet variants, classification improvements were clear but minimal. All of the ER scores are closely clustered, making it difficult to determine the extent to which different feature aggregation designs affected scores. However, the F and SED scores imply that aggregation did improve SELDnet's ability to classify in a manner comparable to MSEDnet and SEDnet. The dataset may impose restrictions on these scores, preventing architectural design from displaying a large impact. Datasets can limit deep learning model performance through factors such as data size, quality, class imbalances, noisy or biased labeling, and distribution mismatches, all of which hinder the model's ability to achieve over a certain score. Compared to MSEDnet, SELDnet+PANet and SELDnet+\(\text{SEN}_{\text{N=1}}\) performed marginally worse. These two models performed marginally better than SEDnet, whereas SELDnet with BiFPN and \(\text{SEN}_{\text{N=2}}\) performed comparably to SEDnet. With respect to SEDnet, SELDnet with BiFPN and \(\text{SEN}_{\text{N=2}}\) scored slightly better on ER, slightly worse on F-score, but achieved the same overall SED score. Generally, it appears models with aggregation provide better sound event detection performance, as demonstrated by lower ER, higher F-score, and lower SED score.
### _Localization_
The improvements in localization scores of models with aggregation are notable. Compared to SELDnet, all the networks with aggregation considerably improved performance, as expected. All aggregated models had considerably lower DOA errors, higher frame recall, and a lower DOA score. SELDnet+PANet performed particularly well in DOA estimation, with metric scores that stand out from the cluster formed by the other aggregated models. This clear DOA estimation boost suggests that feature aggregation enhances the distinguishing of various sound signals, such as reverberations, diffractions, and direct signals. The increased robustness to indirect signals, observed after introducing aggregators into SELDnet, is attributable to the enhanced feature scaling. Indirect signals, such as reflections, can exhibit comparable wave patterns to direct signals at lesser amplitudes. Therefore, one would anticipate that a better comprehension of feature scales would enhance the differentiation between direct and indirect signals.
Compared to DOAnet, SELDnet and its variants have a higher frame recall but also a higher overall DOA error, signifying that they excel at correctly detecting active sources within individual time frames while being less precise in the directions they estimate. This indicates that these models have difficulty minimizing the overall disparity between predicted and ground-truth DOAs compared to DOAnet, but consistently capture signal patterns with respect to time.
It is likely that DOAnet's inconsistency with respect to time is a result of an inability to distinguish direct signals from reflections, reverberations, and diffractions. The data implies that SELDnet variants (developed by this study) are adept at pinpointing the precise time instances when sound sources appear, leading to an improved ability to distinguish between multiple signals. Furthermore, by consistently capturing signal patterns over time, these models are likely to be more robust in dynamic soundscapes where the number of sound sources and noise interference can vary. This trait is vital in real-world applications where sound sources often overlap and vary in number and characteristics.
## VI Joint Classification and Localization
The SELD scores indicate that, regardless of aggregator design, feature aggregation improves the function of joint sound classification and localization models. Although aggregators with more nodes outperformed the single-node SEN model, this modest aggregator demonstrated that even minimal aggregation can counteract the negative effects of the semantic gap.
Clearly, PANet's in-depth and equal processing of all scales is optimal for performance. However, as previously discussed, this approach can be computationally demanding, which may be unsuitable for certain situations.
## VII Aggregator Comparison
This section will compare aggregators using two metrics: their overall percentage improvement on SED, DOA, and SELD scores and the percentage improvement per node. The percentage improvement per node is intended as a metric to quantify the efficiency of aggregator designs. As can be seen in TABLE II, although some aggregators have better overall improvement, others have a better improvement ratio per node, implying a more efficient connection design.
The obvious outlier is \(\text{SEN}_{\text{N=2}}\). This model's results imply that any aggregation helps counteract the semantic gap. The collective results indicate that fewer nodes yield a higher percentage improvement per node. However, in this particular \(\text{SEN}_{\text{N=2}}\) aggregator, the magnitude of improvement per node must be taken with a grain of salt due to the simplicity of the aggregator. SELDnet is an unusually compact neural network, and most real-world models are much deeper (such as Darknet-53 with a backbone of 53 convolutional layers). For backbones deeper than three layers, which is the case for most models, SEN aggregators with a compression stride of two would involve more than one node. We hypothesize that the use of a single node causes this \(\text{SEN}_{\text{N=2}}\) to seem disproportionately effective per node because the overall score change is divided
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Aggregator & \# Nodes & \multicolumn{2}{c|}{SED \% Improvement} & \multicolumn{2}{c|}{DOA \% Improvement} & \multicolumn{2}{c|}{SELD \% Improvement} \\ \cline{3-8} & & Overall & Per Node & Overall & Per Node & Overall & Per Node \\ \hline PANet & 6 & 12.5 & 2.08 & 32.0 & 5.33 & 19.4 & 3.23 \\ BiFPN & 4 & 7.50 & 1.88 & 20.0 & 5.00 & 12.3 & 3.08 \\ \(\text{SEN}_{\text{N=1}}\) & 3 & 10.0 & 3.33 & 24.0 & 8.00 & 14.8 & 4.92 \\ \(\text{SEN}_{\text{N=2}}\) & 1 & 7.50 & 7.50 & 20.0 & 20.0 & 11.1 & 11.1 \\ \hline \end{tabular}
\end{table} TABLE II: COMPARISON OF AGGREGATORS’ EFFECTS ON CONTROL MODEL SCORES.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Algorithm & \multicolumn{3}{c|}{Classification} & \multicolumn{3}{c|}{Localization} & \multicolumn{1}{c|}{SELD Score} \\ \cline{2-7} & ER & F-Score & SED & DOA Error & Frame Recall & DOA Score & \\ \hline SELDnet [7] & 0.41 & 60.5 & 0.40 & 26.9 & 65.3 & 0.25 & 0.325 \\ SELDnet + PANet & 0.36 & 65.1 & 0.35 & 11.8 & 72.5 & 0.17 & 0.262 \\ SELDnet + BiFPN & 0.37 & 63.0 & 0.37 & 15.2 & 68.6 & 0.20 & 0.285 \\ SELDnet + \(\text{SEN}_{\text{N=1}}\) & 0.36 & 63.6 & 0.36 & 14.3 & 69.5 & 0.19 & 0.277 \\ SELDnet + \(\text{SEN}_{\text{N=2}}\) & 0.37 & 62.5 & 0.37 & 16.1 & 68.0 & 0.20 & 0.289 \\ MSEDnet [44] & 0.35 & 66.2 & 0.34 & - & - & - & - \\ SEDnet [44] & 0.38 & 64.6 & 0.37 & - & - & - & - \\ DOAnet [45] & - & - & - & 6.30 & 46.5 & 0.29 & - \\ MUSIC [46] & - & - & - & 36.3 & - & - & - \\ \hline \end{tabular}
\end{table} TABLE I: CLASSIFICATION AND LOCALIZATION SCORES OF TEST ARCHITECTURES COMPARED TO CONTROL ALGORITHMS.
by one. The overall improvement from this \(\text{SEN}_{\text{N=2}}\) aggregator is a testament to the effect of having any aggregator at all, but the results for improvement per node are skewed due to division by one. Nevertheless, it is interesting that, compared to BiFPN, this single node performed comparably in overall SED and DOA percentage improvement and only slightly worse in overall SELD. It is important to note that the DOA and SED scores of BiFPN and \(\text{SEN}_{\text{N=2}}\) appear identical because of rounding; the actual difference is seen in the SELD score. As will be discussed later in this section, we attribute this similar performance to the efficacy of SEN's encoder-style aggregation.
After removing this outlier and comparing PANet, BiFPN and \(\text{SEN}_{\text{N=1}}\), the next clear takeaway is PANet's overall percentage improvement. PANet was not as efficient as \(\text{SEN}_{\text{N=1}}\), but its overall improvement is substantial compared to all other aggregators. This is attributable to the in-depth processing at every scale, which is the most comprehensive design for addressing the semantic gap. The lower efficiency per node is likely the result of certain scales not requiring as much processing as they receive.
\(\text{SEN}_{\text{N=1}}\) is clearly an efficient design, with the second highest overall percentage improvements and the highest percentage improvement per node (excluding the outlier \(\text{SEN}_{\text{N=2}}\)). This efficiency per node is attributable to the efficacy of the encoder approach, which allows for even weighting and consideration of all scales (like PANet) without leading to uneven processing of each scale (as seen in BiFPN). This equal evaluation of all scales with reduced nodes (half of PANet) leads to an efficient aggregation process.
BiFPN's results indicate that the design is neither the best performing nor the most efficient. It is no surprise that PANet, a model with more nodes, outperformed BiFPN overall. Why, then, did the SEN models (which have fewer nodes than BiFPN) perform so well in comparison with BiFPN? We hypothesize that BiFPN overemphasized one scale due to that scale's extra nodes, whereas SEN created a compressed representation with equal weighting of all scales.
## VIII Conclusion
The results indicate that, regardless of the aggregator's design, feature aggregation can significantly improve the performance of neural networks for sound detection. A balance must be struck between computational expense and performance when deciding on an aggregator. Among the aggregators examined, SEN and PANet stand out as the most cost-effective and the most robust, respectively. The difference in aggregator performances indicates that when performing feature aggregation, it is best to emphasize all scales equally.
Future research may delve deeper into an assortment of topics, such as the establishment of anchors for spectrograms and the development of more complex SEN designs (such as stacking SEN after FPN or PANet).
## Acknowledgment
We would like to express our gratitude to Tampere University of Technology and its licensors for granting permission to use the code for the Sound Event Localization and Detection using Convolutional Recurrent Neural Network method/architecture, which is available in the GitHub repository with the handle "seld-net" found at [https://github.com/sharathadavanne/seld-net](https://github.com/sharathadavanne/seld-net). This code was described in the paper titled "Sound event localization and detection of overlapping sources using convolutional recurrent neural network" [7].
We acknowledge and honor the non-commercial nature of this grant and affirm our commitment to preserving the copyright notice in all reproductions of this Work. Furthermore, we are grateful to the original source of the Work, the Audio Research Group, Lab. of Signal Processing at Tampere University of Technology.
|
2308.06309 | Predicting Resilience with Neural Networks | Resilience engineering studies the ability of a system to survive and recover
from disruptive events, which finds applications in several domains. Most
studies emphasize resilience metrics to quantify system performance, whereas
recent studies propose statistical modeling approaches to project system
recovery time after degradation. Moreover, past studies are either performed on
data after recovering or limited to idealized trends. Therefore, this paper
proposes three alternative neural network (NN) approaches including (i)
Artificial Neural Networks, (ii) Recurrent Neural Networks, and (iii)
Long-Short Term Memory (LSTM) to model and predict system performance,
including negative and positive factors driving resilience to quantify the
impact of disruptive events and restorative activities. Goodness-of-fit
measures are computed to evaluate the models and compared with a classical
statistical model, including mean squared error and adjusted R squared. Our
results indicate that NN models outperformed the traditional model on all
goodness-of-fit measures. More specifically, LSTMs achieved an over 60\% higher
adjusted R squared, and decreased predictive error by 34-fold compared to the
traditional method. These results suggest that NN models to predict resilience
are both feasible and accurate and may find practical use in many important
domains. | Karen da Mata, Priscila Silva, Lance Fiondella | 2023-08-11T17:29:49Z | http://arxiv.org/abs/2308.06309v1 | # Predicting Resilience with Neural Networks
###### Abstract
Resilience engineering studies the ability of a system to survive and recover from disruptive events, which finds applications in several domains. Most studies emphasize resilience metrics to quantify system performance, whereas recent studies propose statistical modeling approaches to project system recovery time after degradation. Moreover, past studies are either performed on data after recovering or limited to idealized trends. Therefore, this paper proposes three alternative neural network (NN) approaches including (i) Artificial Neural Networks, (ii) Recurrent Neural Networks, and (iii) Long-Short Term Memory (LSTM) to model and predict system performance, including negative and positive factors driving resilience to quantify the impact of disruptive events and restorative activities. Goodness-of-fit measures are computed to evaluate the models and compared with a classical statistical model, including mean squared error and adjusted R squared. Our results indicate that NN models outperformed the traditional model on all goodness-of-fit measures. More specifically, LSTMs achieved an over 60% higher adjusted R squared, and decreased predictive error by 34-fold compared to the traditional method. These results suggest that NN models to predict resilience are both feasible and accurate and may find practical use in many important domains.
predictive resilience, artificial neural network, recurrent neural network, long short-term memory
## I Introduction
System resilience is the ability of a system or a process to survive and recover from disruptive events [1, 2]. Early studies emphasize resilience metrics [3] to quantify system performance, while more recent studies [4] propose resilience models to project system recovery time after failures. Past studies emphasizing resilience metrics or modeling are typically performed on data collected after recovery or are limited to smooth, idealized trends. Real-world systems do not exhibit such simplified trends. Therefore, a general model capable of characterizing a broad cross-section of systems and processes to which resilience engineering is relevant would be beneficial.
Relevant research on quantitative resilience metrics includes Bruneau and Reinhorn [5], who defined performance preserved relative to a baseline by measuring the area under the curve, while Yang and Frangopol [6] measured the performance lost due to a degrading stress as the area above the curve. Moreover, resilience models have been proposed with stochastic techniques such as Markov processes [7], Bayesian networks [8], and Petri nets [9]. Recently, Silva et al. [4] proposed statistical models with multiple linear regression methods that include covariates, characterizing multiple deteriorations and recoveries associated with multiple shocks. Although these methods and models successfully quantify or characterize the performance of a system or a process under difficult conditions, they are either limited to idealized trends or single disruptive events, or contain numerous parameters to describe multiple shocks. Hence, machine learning techniques are a powerful alternative to these statistical models: they can improve predictive accuracy and do not depend on idealized curve shapes.
In contrast to previous research, this paper considers three neural network (NN) models to predict system performance, including negative and positive factors driving deterioration and recovery, in order to better understand the application domain and precisely track and predict the impact of disruptions and restorative activities: (i) artificial neural networks [10] (ANN), (ii) recurrent neural networks [11] (RNN), and (iii) long short-term memory [12] (LSTM). Multiple linear regression with interaction [4] (MLRI), a classical statistical method, is also applied to compare the predictive accuracy of the proposed models when tracking degradation and recovery, and predicting future changes in performance. All models are applied to historical data with \(60\%\) and \(70\%\) of the data used for training. Our results suggest that LSTMs improve the adjusted R squared by over \(60\%\) and reduce predictive error 34-fold compared to the statistical method, indicating a better model fit and higher predictive accuracy.
The remainder of this paper is organized as follows: Section II summarizes NN models to characterize resilience. Section III describes the goodness-of-fit measures to validate the models and a feature selection technique considered to identify the most relevant subset of covariates. Section IV provides illustrative examples and compares the model results. Section V offers conclusions and future research.
## II Resilience Modeling
This section describes predictive resilience models, which have been developed to track and predict the performance of a system. Although the definition of the performance of a system is domain-dependent, it can be described as the level of accomplishment of a system or a task, which changes depending on disruptive events and restorative activities, also known as covariates, characterizing a resilience curve.
Figure 1 shows the four stages of a canonical resilience curve described by The National Academy of Sciences [13], including (i) plan or prepare for, (ii) absorb, (iii) recover from, and (iv) adapt to actual or potential disruptive events. In the prepare stage, the system possesses a nominal performance \(P(t)\) indicated by the dotted horizontal line until time \(t_{h}\) when an initial disruptive event occurs. Then, the system transitions to the absorb stage, where the performance deteriorates until it reaches a minimum value at time \(t_{d}\) and starts to improve
in the recovery stage. Resilient systems recover smoothly to a new steady performance until time \(t_{r}\) and enter the adapt stage. Systems capable of adapting to their environment, such as economic and computational systems, can recover to an improved (dashed) performance. However, physical systems such as power generation may only exhibit recovery to nominal (solid) or degraded (dash-dotted) performance due to damaged equipment.
A discrete resilience curve incorporating covariates [4] is
\[P(i)=P(i-1)+\Delta P(i) \tag{1}\]
where \(P(i)\) and \(P(i-1)\) are the performance in the present and previous interval, and \(\Delta P(i)\) is the change in performance predicted using the past [4] and the proposed methods described in the following subsections.
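As a minimal sketch (assuming the predicted changes are collected in an array), the performance curve can be recovered from Equation (1) by a cumulative sum; the function name is illustrative.

```python
import numpy as np

def reconstruct_performance(p0: float, delta_p: np.ndarray) -> np.ndarray:
    # Eq. (1) unrolled: each P(i) equals the previous performance plus the predicted change
    return p0 + np.cumsum(delta_p)
```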
### _Classical Statistical Modeling_
Silva et al. [4] characterized the change in performance with multiple linear regression with interaction as
\[\Delta P(i)=\beta_{0}+\sum_{j=1}^{m}\beta_{j}X_{j}(i)+\sum_{j=1}^{m}\sum_{l=j+1 }^{m}\beta_{j(m+l)}X_{j}(i)X_{l}(i) \tag{2}\]
where \(\beta_{0}\) is the baseline change in performance, \(X_{1}(i),X_{2}(i),\ldots,X_{m}(i)\) the \(m\) covariates documenting the magnitude of degrading shocks or amount of efforts dedicated to restore performance, \(\beta_{1},\beta_{2},\ldots,\beta_{m}\) the coefficients characterizing the impact of the covariates, and \(\beta_{j(m+l)}\) the interaction between two covariates.
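A sketch of the corresponding design matrix is given below; the function name is illustrative. Under Gaussian errors, the maximum likelihood fit used later in the paper reduces to ordinary least squares on this matrix.

```python
import numpy as np
from itertools import combinations

def mlri_design_matrix(X: np.ndarray) -> np.ndarray:
    # columns of Eq. (2): intercept, the m covariates, and all pairwise interactions
    n, m = X.shape
    interactions = [X[:, j] * X[:, l] for j, l in combinations(range(m), 2)]
    return np.column_stack([np.ones(n), X] + interactions)

# betas, *_ = np.linalg.lstsq(mlri_design_matrix(X), delta_p, rcond=None)
```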
### _Neural Networks_
A neural network [14] is a machine-learning approach commonly used in solving pattern recognition and non-linear problems, since it does not impose assumptions about an underlying probability distribution or functional form, making it widely applicable to numerous real-world applications.
The architecture of a neural network model consists of nodes, also known as neurons, an input layer, an output layer, and one or more hidden layers. In each hidden layer, a predefined number of neurons process the information from the previous layer. An activation function transforms this information, which is transferred to the next layer. In most cases, the neurons are fully connected between layers. Each connection possesses a weight, while each hidden layer and the output layer also possess a bias. The network weights and biases are initialized randomly and estimated using historical data during training with an optimization algorithm. In this learning process, the network processes the entire training data set several times, known as epochs. During each epoch, the optimizer adjusts the weights and biases after each observation is processed to minimize the error between actual data and the output of the network. Once the training is complete, the model can be used to predict by providing new input.
Figure 2 shows a combined topology of the three NN models considered in this paper. The ANN architecture is represented by the network disregarding the two loops in the hidden layer. The RNN architecture includes the ANN architecture as well as the dashed loop in the hidden layer. Finally, the LSTM architecture consists of the entire network, considering both the dashed and dotted loops in the hidden layer.
The input layer in Figure 2 consists of \(m\) neurons, one for each covariate \(X_{1}(i),X_{2}(i),\ldots,X_{m}(i)\) at time step \(i\), which are interconnected with the \(n\) neurons in the hidden layer, and \(W_{1,1},W_{1,2},\ldots,W_{m,n}\) are the weights associated with the connections between the neurons of the input and the hidden layer. The hidden states \(h_{1}(i),h_{2}(i),\ldots,h_{n}(i)\) at time step \(i\) are defined by the NN model applied, as introduced in the following subsections. \(b_{1}\) is the bias of the hidden layer, \(W_{1},W_{2},\ldots,W_{n}\) the weights associated with the dashed recurrent loops included in the RNN and LSTM models, and \(w_{1},w_{2},\ldots,w_{n}\) the weights associated with the cell states included only in the LSTM model.
Fig. 1: Canonical resilience curve.
Fig. 2: Topology of the neural networks.
The output layer of the model consists of a single neuron characterizing the change in performance at time step \(i\) as
\[\Delta P(i)=\sum_{k=1}^{n}W_{k,o}h_{k}(i)+b_{o} \tag{3}\]
where \(W_{k,o}\) is the weight of the connection between the \(k^{th}\) neuron of the hidden layer and the output layer, and \(b_{o}\) the bias of the output layer.
#### III-B1 Artificial Neural Network (ANN)
The artificial neural network [10] is the simplest case of an NN, where each neuron in the hidden layer receives as input the summation of the weighted input neurons and a bias \(b_{1}\) of the hidden layer. An activation function \(\alpha\) transforms this summation, introducing non-linearity to the network. Then, the output \(h_{k}(i)\) of the \(k^{th}\) neuron in the hidden layer at time step \(i\) is
\[h_{k}(i)=\alpha\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+b_{1}\right) \tag{4}\]
where \(W_{j,k}\) is the weight associated with the \(j^{th}\) node of the input layer and the \(k^{th}\) node in the hidden layer, and \(X_{j}(i)\) is the \(j^{th}\) covariate at the present time step.
To avoid vanishing or exploding gradient problems due to the activation function, the single hidden layer of this model uses the ReLU activation function
\[\text{ReLU}(x)=\begin{cases}x&\text{if }x>0\\ 0&\text{Otherwise}\end{cases} \tag{5}\]
and the input and output layers do not require activation functions. Thus, the change in performance is modeled with ANNs by substituting Equation (4) and (5) into Equation (3).
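A minimal PyTorch sketch of this architecture is given below; the paper does not name a framework, so PyTorch is used here purely for illustration, and the class name is our own.

```python
import torch.nn as nn

class ResilienceANN(nn.Module):
    # Eqs. (3)-(5): m covariates -> n ReLU hidden units -> scalar change in performance
    def __init__(self, m: int, n: int):
        super().__init__()
        self.hidden = nn.Linear(m, n)  # weights W_{j,k} and bias b_1
        self.out = nn.Linear(n, 1)     # weights W_{k,o} and bias b_o
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, m) covariates at time step i
        return self.out(self.act(self.hidden(x)))  # predicted change in performance
```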
#### III-B2 Recurrent Neural Network (RNN)
A recurrent neural network [11] is an extension of the ANN that uses both the input at the current time step and the previous output to make predictions. The dashed loop pointing back to each neuron in the hidden layer in Figure 2 indicates the previous output being passed into each neuron, also known as the previous hidden state \(h_{k}(i-1)\) of the \(k^{th}\) neuron in the hidden layer. Thus, the hidden state \(h_{k}(i)\) in the current time step is
\[h_{k}(i)=\alpha\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+W_{k}h_{k}(i-1)+b_{1}\right) \tag{6}\]
where \(W_{k}\) is the weight associated with the recurrent portion of the \(k^{th}\) node of the hidden layer. The hidden layer also uses the ReLU activation function. Thus, the change in performance is modeled with RNNs by replacing Equation (6) and (5) into Equation (3).
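In the same illustrative PyTorch style, Equation (6) corresponds to an Elman-type recurrent layer with a ReLU nonlinearity; the class name is our own.

```python
import torch.nn as nn

class ResilienceRNN(nn.Module):
    # Eq. (6): the previous hidden state feeds back through the recurrent weights W_k
    def __init__(self, m: int, n: int):
        super().__init__()
        self.rnn = nn.RNN(m, n, nonlinearity='relu', batch_first=True)
        self.out = nn.Linear(n, 1)

    def forward(self, x):    # x: (batch, time, m) covariate sequence
        h, _ = self.rnn(x)   # hidden states h_k(i) for every time step
        return self.out(h)   # predicted change in performance for every time step
```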
#### III-B3 Long Short-Term Memory (LSTM)
Long short-term memory [12] modifies the ANN/RNN architecture to address the vanishing and exploding gradient problems, in which the gradients of the network tend to zero or infinity during training, by introducing new features that enforce constant error flow through the layers. Besides taking the input and hidden states as RNNs do, the LSTM introduces a "cell state" (\(c_{k}\)) which holds important information over an arbitrary time interval.
Figure 3 shows an example of an LSTM cell containing memory cells and three gates, including the forget (\(F_{Gate}\)), input (\(I_{Gate}\)), and output (\(O_{Gate}\)) gates.
The \(\oplus\) operator in Figure 3 represents element-wise addition and \(\otimes\) element-wise multiplication, \(c_{k}(i-1)\) the previous \(k^{th}\) cell value, and \(\textbf{X}(i)\) the current input vector. The LSTM model uses two activation functions, the hyperbolic tangent \(\tanh(x)\) and the logistic \(S(x)\). The \(\tanh(x)\) processes an input and outputs a number between \(-1\) and \(1\), while the \(S(x)\) outputs a number between \(0\) and \(1\), indicating whether information should be ignored \((0)\) or kept \((1)\).
On the left side of Figure 3, the \(F_{Gate}\) uses the logistic activation function to decide if the state of the previous cell should be remembered or ignored. Thus, the forget gate output \(f_{k}(i)\) of the \(k^{th}\) neuron in the hidden layer is
\[f_{k}(i)=c_{k}(i-1)\times S\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+W_{k}h_{k}(i-1) +b_{1}\right) \tag{7}\]
The \(I_{Gate}\) shown in the middle of Figure 3 multiplies the output of both activation functions to decide which information should be stored in the cell state \(c_{k}(i)\) resulting in
\[\begin{split} I_{k}(i)&=S\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+W_{k}h_{k}(i-1)+b_{1}\right)\\ &\quad\times\tanh\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+W_{k}h_{k}( i-1)+b_{1}\right)\end{split} \tag{8}\]
Then, the updated cell state \(c_{k}(i)\) is
\[c_{k}(i)=f_{k}(i)+I_{k}(i) \tag{9}\]
The final gate \(O_{Gate}\) on the right side of Figure 3 computes the \(k^{th}\) hidden state \(h_{k}(i)\) at the current time step as
\[\begin{split} h_{k}(i)&=S\left(\sum_{j=1}^{m}W_{j,k}X_{j}(i)+W_{k}h_{k}(i-1)+b_{1}\right)\\ &\quad\times\tanh\left(w_{k}\times c_{k}(i)+b_{1}\right)\end{split} \tag{10}\]
where \(w_{k}\) is a weight associated with the cell state, and \(c_{k}(i)\) and \(h_{k}(i)\) are the final outputs of the \(k^{th}\) LSTM cell in the hidden layer. Thus, the change in performance is modeled with LSTMs by replacing Equation (10) into Equation (3).
Fig. 3: Example of a \(k^{th}\) LSTM cell.
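A sketch of the gated architecture is given below, again using PyTorch for illustration. Note that torch's built-in LSTM learns separate input-to-gate weights per gate, whereas Equations (7)-(10) write a shared pre-activation for readability; the sketch should therefore be read as an approximation of the stated cell, not a literal transcription, and the class name is our own.

```python
import torch.nn as nn

class ResilienceLSTM(nn.Module):
    # gated cell in the spirit of Eqs. (7)-(10), followed by the linear output of Eq. (3)
    def __init__(self, m: int, n: int):
        super().__init__()
        self.lstm = nn.LSTM(m, n, batch_first=True)
        self.out = nn.Linear(n, 1)

    def forward(self, x):     # x: (batch, time, m)
        h, _ = self.lstm(x)   # hidden states h_k(i); cell states c_k(i) kept internally
        return self.out(h)    # predicted change in performance
```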
## III Model Validation and Feature Selection
This section describes measures to assess the performance of the models described in Section II and a feature selection technique to identify the most relevant subset of covariates.
### _Goodness-of-fit Measures_
Goodness-of-fit measures assess how well the model performs on a given data set. Once the models described in Section II are fitted/trained and the predictions of the change in performance (\(\Delta P\)) are made, the results of Equations (2) and (3) can be substituted into Equation (1) to estimate the system performance (\(\hat{P}\)) and evaluate the models.
Statistical and machine learning models require different data splits. Multiple linear regression with interaction divides the data set into two parts, training and testing, while machine learning models split the data into three parts: training, validation, and testing. Thus, all models are fitted/trained with the training data set of \(n-l\) points. The regression model uses the remaining \(l\) points for testing, while the NN models divide the remaining \(l\) points into two equal parts, since they require a validation data set to evaluate the models during training and select the best hyperparameters, such as the number of neurons and layers. Then, the testing data set of \(\frac{l}{2}\) points provides an unbiased evaluation of the NN model selected after training.
_Predictive Mean Squared Error (PMSE)_[15] computes the mean discrepancy of the model estimate from the actual data considering the testing data set.
\[\text{PMSE}=\frac{1}{l_{m}}\sum_{i=n-l_{m}+1}^{n}\left(\hat{P}(i)-P(i)\right)^ {2} \tag{11}\]
where \(l_{m}=l\) for regression, and \(\frac{l}{2}\) for the NNs, and \(\hat{P}(i)\) and \(P(i)\) are the predicted and expected performance at time \(i\). Two special cases are the _Validation Mean Squared Error (VMSE)_ where the error considers only the validation data set presented in the NN models, and the _Mean Squared Error (MSE)_ considers the entire data set for both models.
_Mean Absolute Percentage Error (MAPE)_[16] measures the mean accuracy of time-dependent problems.
\[MAPE=\frac{100}{n}\sum_{i=1}^{n}\left|\frac{P(i)-\hat{P}(i)}{P(i)}\right| \tag{12}\]
For all error measures (PMSE, VMSE, MSE and MAPE), smaller values are preferred since they indicate a better model fit compared with other models.
_Adjusted Coefficient of Determination (\(r_{adj}^{2}\))_[17] measures the variation in the dependent variable that is explained by \(m\) independent variables incorporated into the model, quantifying the degree of linear correlation between the empirical performance and the model predictions.
\[r_{adj}^{2}=1-\left(1-\frac{\text{SSY}-\text{SSE}}{\text{SSY}}\right)\left( \frac{n-1}{n-m-1}\right) \tag{13}\]
where
\[\text{SSY}=\sum_{i=1}^{n}\left(P(i)-\overline{P}(i)\right)^{2} \tag{14}\]
is the sum of squares error associated with the naive predictor \(\overline{P}(i)\), and
\[\text{SSE}=\sum_{i=1}^{n}\left(\hat{P}(i)-P(i)\right)^{2} \tag{15}\]
is the sum of squares of the residual between the predicted \(\hat{P}(i)\) and expected performance \(P(i)\). A value of \(r_{adj}^{2}\) closer to 1.0 indicates a strong relationship between the data and the model. Negative or low \(r_{adj}^{2}\) values indicate no or weak linear relationship, which may be due to poor predictions or model fit that result in a large SSE.
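For reference, the two headline measures can be computed directly from Equations (12)-(15); the sketch below assumes the empirical and predicted performance series are NumPy arrays of equal length.

```python
import numpy as np

def mape(p: np.ndarray, p_hat: np.ndarray) -> float:
    # Eq. (12): mean absolute percentage error over the series
    return 100 / len(p) * np.sum(np.abs((p - p_hat) / p))

def adjusted_r2(p: np.ndarray, p_hat: np.ndarray, m: int) -> float:
    # Eq. (13), simplified to 1 - (SSE/SSY) * (n-1)/(n-m-1) for m covariates
    n = len(p)
    sse = np.sum((p_hat - p) ** 2)     # Eq. (15)
    ssy = np.sum((p - p.mean()) ** 2)  # Eq. (14)
    return 1 - (sse / ssy) * (n - 1) / (n - m - 1)
```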
### _A Hybrid Feature Selection Technique_
Hybrid feature selection techniques [18] are widely applied in machine learning problems to reduce the number of model inputs and decrease the possibility of under or overfitting. There are two steps to select the most relevant subset of covariates. The first step performs a forward selection search ranking possible subsets of covariates using a heuristic "merit" function. The second step trains models using the highest-ranked subsets from the first step and evaluates the models according to the goodness-of-fit measures.
The first step applies a correlation-based feature selection (CFS) technique [19] to identify a subset of relevant covariates highly correlated with the expected output (\(\Delta P\)) but uncorrelated with the other covariates to avoid including redundant attributes. The heuristic "merit" evaluation function
\[M_{s}=\frac{k\overline{r_{co}}}{\sqrt{k+k(k-1)\overline{r_{cc}}}} \tag{16}\]
ranks a subset \(S\) of \(k\) covariates, where \(\overline{r_{co}}\) is the mean of the correlation between the covariates in the subset \(S\) and the expected output \(\Delta P\), and \(\overline{r_{cc}}\) the mean of the inter-correlation between the covariates in the subset \(S\).
The forward selection search starts by evaluating one covariate at a time using Equation (16). Then, the search evaluates the highest-ranked covariate with each remaining covariate, and chooses the subset with the highest score. This process stops when the change in the merit score decreases by more than 0.01 or reaches the maximum number of covariates. The second step of the hybrid feature selection creates and trains models using the highest-ranked subsets resulting from the CFS algorithm as input, and evaluates these models according to the goodness-of-fit measures introduced in Section III-A. The subset of covariates that achieves the highest \(r_{adj}^{2}\) and the smallest overall error is then chosen as the best subset of covariates for the application.
## IV Illustrations
This section illustrates the application of machine learning models described in Section II, including artificial neural network, recurrent neural network, and long short-term memory to predict the systems change in performance. Predictions made by the NN models are compared with the traditional statistical multiple linear regression with interaction model results to evaluate the effectiveness of the approaches through the goodness-of-fit measures described in Section III-A.
The proposed resilience models are illustrated using the most recent recession in the U.S., which began in March 2020 due to the COVID-19 pandemic. In this case, performance is the normalized number of adults eligible to work in the United States, where time step zero corresponds to peak employment prior to a period of job loss and recovery. In order to identify activities that have a high impact on job losses or recovery, twenty-one covariates comprising statistical information and factors relevant to the COVID-19 pandemic were collected from January 2020 to November 2022, and normalized by dividing the values of each covariate by the maximum value observed for that covariate. To promote reproducibility, these covariates are available in a public GitHub repository [20].
For each NN model considered, the number of neurons in the hidden layer was varied between 1 and 15, due to the limited size of the data set. The models were trained with the Adam optimizer [21] and an upper limit of 1000 epochs, with an early stopping condition triggered when the change in the loss did not improve by more than 0.0001 for ten epochs. Three possible values for the learning rate were also tested, namely \(\alpha\) = \(\{10^{-2},\)\(10^{-3},\)\(10^{-4}\}\). Ultimately, \(\alpha=0.01\) was selected to achieve a good fit while avoiding overfitting. For each combination, the model was trained and tested 50 times, and the average of the results was compared using the measures described in Section III-A. The data set was split into three parts, training-validation-testing, and two splits were considered, 60-20-20 and 70-15-15. In order to conduct an objective comparison, the MLRI models were fitted using maximum likelihood estimation [22] with 60% and 70% of the data, and the remainder was used for testing.
Table I shows the result of the first part of the hybrid feature selection technique described in Section III-B applied to the 2020 U.S. recession data set. The order in which the covariates were selected is X\({}_{19}\) (_Industrial Production_), X\({}_{14}\) (_Workplace Closures_), X\({}_{4}\) (_New Orders Index_), X\({}_{7}\) (_Unemployment Benefits_), and X\({}_{6}\) (_Consumer Activity_).
As shown in Table I, the subset of covariates that has the highest merit score included three covariates, X\({}_{19}\), X\({}_{14}\), X\({}_{4}\), and when a fourth covariate was included, the merit score started to decrease. The subset with more than four covariates decreased by more than \(0.01\), and was hence disregarded.
Table II reports the goodness-of-fit comparison and the architecture of the best combinations of the models discussed in Section II for the four subsets selected in the first part of the hybrid feature selection method. Most models exhibited inconsistent results when comparing both data splits, demonstrating that these models perform differently depending on the training data size, which is not always expected. For these reasons, the architecture of each model that produced the most similar results in both data splits was chosen, shown in bold.
Table II shows that the best MLRI model with two covariates achieved an \(r_{\text{adj}}^{2}\) of \(0.6032\) on average, as well as a low PMSE and MSE and a MAPE of \(1.72\) on average. The best ANN model, with three covariates and three neurons in the hidden layer, achieved a \((\frac{0.8053}{0.6032})=33.5\%\) higher \(r_{\text{adj}}^{2}\) and a \((\frac{1.72}{0.81})=2.12\) times lower MAPE compared to the best MLRI. The best RNN model had four covariates and twelve neurons, achieving a \((\frac{0.7376}{0.6032})=22.2\%\) higher \(r_{\text{adj}}^{2}\) than the MLRI, but the highest VMSE compared to the other NNs. The best LSTM, with four covariates and seven neurons, exhibited a \((\frac{0.9876}{0.6032})=63.72\%\) improvement in \(r_{\text{adj}}^{2}\) and a \(\left(\frac{(2.884\cdot 10^{-4}+4.306\cdot 10^{-4})/2}{(2.048\cdot 10^{-5}+0.057\cdot 10^{-5})/2}\right)=34.07\) times smaller PMSE when compared to the best MLRI. The improvements achieved by the LSTM justify the additional training time required.
Figure 4 shows the normalized number of individuals employed (solid) and the model of best fit. The two vertical lines represent the time step in which the training and validation splits end. Since both considered splits have shown similar results for the chosen models, only the model fit using 60% of the data for training is shown for clarity.
In Figure 4, the MLRI model underestimated the data in most phases, which may be due to the amount of data used to estimate the parameters, since mathematical approaches usually use 80% or 90% of the data for parameter estimation. The ANN model performed exceptionally well in the validation and testing phases and followed the data relatively well at the beginning of the training phase. However, the model did not track the degradation and underestimated the recovery. On the other hand, the RNN model oscillated between overestimating and underestimating in every phase. As explained in Section II, ANNs and RNNs are prone to gradient problems resulting
\begin{table}
\begin{tabular}{l c c} \hline \hline Covariates Subset & \(k\) & \(M_{s}\) \\ \hline X\({}_{19}\) & 1 & 0.5567882 \\ X\({}_{19}\), X\({}_{14}\) & 2 & 0.6151150 \\ X\({}_{19}\), X\({}_{14}\), X\({}_{4}\) & 3 & 0.6257571 \\ X\({}_{19}\), X\({}_{14}\), X\({}_{4}\), X\({}_{7}\) & 4 & 0.6208308 \\ X\({}_{19}\), X\({}_{14}\), X\({}_{4}\), X\({}_{7}\), X\({}_{6}\) & 5 & 0.5959661 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Ranking of covariates subset using CFS algorithm.
Fig. 4: Model fit of the best models using 60% of the data for training.
in poor performance in some time-dependent problems. Meanwhile, the LSTM model characterized the data well in each phase. Thus, the LSTM's more complex architecture enables it to accommodate fluctuations better.
## V Conclusion and Future Research
This paper presented three alternative neural network approaches, including (i) Artificial Neural Network, (ii) Recurrent Neural Network, and (iii) Long Short-Term Memory, to model and predict system resilience considering disruptive events and restorative activities that characterize the degradation and the recovery in system performance. The neural network models and a traditional model (Multiple Linear Regression with interaction) were applied to a historical data set with \(60\%\) and \(70\%\) of the data used for training. The results indicated that neural network approaches outperformed the multiple linear regression with interaction model in every stage of the analysis, including tracking degradation and recovery and predicting future changes in performance. Specifically, LSTMs exhibited an improvement of over \(60\%\) in the adjusted R squared and a 34-fold reduction in predictive error.
Future research will explore more challenging data sets including multiple shocks and alternative neural network models such as gated recurrent units.
## Acknowledgment
This paper was published and presented at the \(28^{th}\) ISSAT International Conference on Reliability & Quality in Design in August 2023, which will be indexed in Elsevier Scopus.
|
2308.07003 | Deepbet: Fast brain extraction of T1-weighted MRI using Convolutional
Neural Networks | Brain extraction in magnetic resonance imaging (MRI) data is an important
segmentation step in many neuroimaging preprocessing pipelines. Image
segmentation is one of the research fields in which deep learning had the
biggest impact in recent years enabling high precision segmentation with
minimal compute. Consequently, traditional brain extraction methods are now
being replaced by deep learning-based methods. Here, we used a unique dataset
comprising 568 T1-weighted (T1w) MR images from 191 different studies in
combination with cutting edge deep learning methods to build a fast,
high-precision brain extraction tool called deepbet. deepbet uses LinkNet, a
modern UNet architecture, in a two stage prediction process. This increases its
segmentation performance, setting a novel state-of-the-art performance during
cross-validation with a median Dice score (DSC) of 99.0% on unseen datasets,
outperforming current state of the art models (DSC = 97.8% and DSC = 97.9%).
While current methods are more sensitive to outliers, resulting in Dice scores
as low as 76.5%, deepbet manages to achieve a Dice score of > 96.9% for all
samples. Finally, our model accelerates brain extraction by a factor of ~10
compared to current methods, enabling the processing of one image in ~2 seconds
on low level hardware. | Lukas Fisch, Stefan Zumdick, Carlotta Barkhau, Daniel Emden, Jan Ernsting, Ramona Leenings, Kelvin Sarink, Nils R. Winter, Benjamin Risse, Udo Dannlowski, Tim Hahn | 2023-08-14T08:39:09Z | http://arxiv.org/abs/2308.07003v1 | # Deepbet: Fast brain extraction of T1-weighted MRI using Convolutional Neural Networks
###### Abstract
Brain extraction in magnetic resonance imaging (MRI) data is an important segmentation step in many neuroimaging preprocessing pipelines. Image segmentation is one of the research fields in which deep learning had the biggest impact in recent years enabling high precision segmentation with minimal compute. Consequently, traditional brain extraction methods are now being replaced by deep learning-based methods. Here, we used a unique dataset comprising 568 T1-weighted (T1w) MR images from 191 different studies in combination with cutting edge deep learning methods to build a fast, high-precision brain extraction tool called deepbet. deepbet uses LinkNet, a modern UNet architecture, in a two stage prediction process. This increases its segmentation performance, setting a novel state-of-the-art performance during cross-validation with a median Dice score (DSC) of 99.0% on unseen datasets, outperforming current state of the art models (\(\text{DSC}=97.8\%\) and \(\text{DSC}=97.9\%\)). While current methods are more sensitive to outliers, resulting in Dice scores as low as 76.5%, deepbet manages to achieve a Dice score of > 96.9% for all samples. Finally, our model accelerates brain extraction by a factor of \(\approx\)10 compared to current methods, enabling the processing of one image in \(\approx\)2 seconds on low level hardware.
Brain extraction Skull stripping Deep learning Neural Network
## 1 Introduction
The objective of brain extraction is to remove the parts of a magnetic resonance imaging (MRI) sample which are non-brain tissue. This process, also known as skull-stripping, stands at the beginning of many popular neuroimaging tools such as [Avants et al., Cox, Fischl, Gaser et al., Smith et al.] which preprocess MRIs before further analysis. The quality of brain extraction is critical, as any errors during this first preprocessing step can harm the quality of the subsequent preprocessing steps and the downstream analysis.
Established traditional brain extraction tools such as the Brain Extraction Tool (BET) [Smith], ROBEX [Iglesias et al.], BEAST [Eskildsen et al.] and 3dSkullStrip, a component of AFNI [Cox], build on specifically handcrafted algorithms which use deformable meshes, prior probabilities and thresholding to segment brain from non-brain tissue voxels. Since convolutional neural networks (CNN) showed their superior performance for image classification [Deng et al.], handcrafted pattern recognition algorithms are being replaced with machine learning-based approaches. As a specific subset of pattern recognition, image segmentation was disrupted by the UNet [Ronneberger et al.], a neural network architecture which translated the CNN's superior performance in image classification to image segmentation. HD-BET
[Isensee et al., a] and SynthStrip [Hoopes et al.] applied UNets with three dimensional (3D) kernels to brain extraction and showed superior segmentation performance compared to the traditional brain extraction tools across modalities.
Here, we aim to further maximize segmentation performance by 1. being specific to the most common modality, i.e. T1-weighted (T1w) MRI scans of healthy adults; 2. maximizing the number of different scanners and scanner protocols in the training and testing data to ensure maximum generalizability and stability; 3. using a modern adaptation of the UNet - i.e. LinkNet [Chaurasia and Culurciello] - to enable fast image segmentation while not sacrificing accuracy.
Modal specificity. Models trained and tested on one specific modality will most likely outperform models which are trained and tested with a wide range of image modalities because the model can "concentrate" on one modality and does not have to co-model other modalities. While our approach can easily be expanded to other modalities, here, we will focus on T1w MRI scans of healthy adults.
Heterogeneous data. Neural networks learn to recognize the patterns presented during model training. While they are exceptionally good at interpolating between seen data points to adapt to new samples, extrapolation - i.e., generalizing beyond the seen data distribution - poses a challenge. Therefore, the training data should include many cases which represent the edge of the data distribution. In the case of magnetic resonance imaging, these edge cases are mostly images which are subject to intense scanner artifacts and uncommon scanner protocols. Some of these edge cases can be artificially introduced with data augmentation transformations (see Section 2.4). However, one cannot anticipate all edge cases of the final use case, and therefore we utilize a heterogeneous pool of training data that includes 568 images from 191 different datasets (see Section 2.4) published in OpenNeuro [Markiewicz et al.].
Modern CNNs. Since the proposition of the UNet in 2015, many tricks and trimmings have enabled successively higher performing segmentation models while minimizing their computational costs. LinkNet is one descendant of the UNet architecture which optimized the linking between encoder and decoder blocks such that the number of parameters in the network could be reduced, resulting in faster computation [Chaurasia and Culurciello]. The original LinkNet enabled the efficient application of segmentation models to 2D images using 2D kernels. Matching the given 3D MRI data, we utilize a LinkNet with 3D kernels (see Section 2.4).
On top of the 3D LinkNet, we also employ an approach which utilizes 2D CNNs on slices of the original 3D MR image. This method is a popular alternative to the 3D approach [Guha Roy et al., Henschel et al.], since 3D UNets need much more memory, such that high-end GPUs are needed to train them with full-view MR images. Besides the lower memory requirements of 2D CNNs, many more model architectures are specialized for 2D images, and many of them are available with pretrained parameters. To investigate this issue, we apply an approach using a 2D LinkNet pretrained on ImageNet, which we will call "deepbet 2D", parallel to the 3D approach, called "deepbet 3D" hereafter. Similar to [Henschel et al.], we use adjacent slices as model input and multi-view aggregation - also known as the 2.5D approach [Han et al.] - to recapture spatial information in the third dimension. To further smoothen the predictions along the third dimension, we develop a data augmentation technique which interpolates across slices during training (see Section 2.4), and we combine multi-view aggregation with multi-slice aggregation (see Section 2.7).
All our approaches are two-staged: First, the full-view MR image is cropped to the region of interest using a preliminary mask predicted in the first stage. Second, the final mask is calculated, using the cropped MR image as input.
We validate the performance of our approaches using the Dice score metric measured during 5-fold group cross-validation using N = 568 samples from 191 different OpenNeuro datasets. The group cross-validation guarantees that all samples in the validation folds stem from datasets unseen during training of the respective model, resulting in realistic validation performance measures. The results are compared to HD-BET and SynthStrip, the two state-of-the-art deep learning brain extraction models. Finally, we investigate the limitations of each model by visually inspecting the brain masks for the most challenging samples.
## 2 Materials and Methods
### Datasets
This study utilizes existing data from 191 studies published on the OpenNeuro platform [Markiewicz et al.]. Data availability is governed by the respective consortia. No new data was acquired for this study.
Out of the 750+ datasets available at OpenNeuro, each dataset containing at least five T1-weighted images from at least five healthy adult controls was initially included. The samples were preprocessed using the commonly used CAT12 toolbox (build 1450 with SPM12 version 7487 and Matlab 2019a; [http://dbm.neuro.uni-jena.de/cat](http://dbm.neuro.uni-jena.de/cat)) with default parameters [Gaser et al.]. For each dataset, the samples were ranked according to the weighted average of the image and
preprocessing quality provided by the CAT12 toolbox and the top three samples which passed a visual quality check were finally included. This resulted in a total of 568 samples from 191 datasets (see Figure 1).
By including only the three images of each dataset which show the highest preprocessing quality, we maintain a high quality standard regarding the CAT12 tissue segmentation masks and consequently the ground truth masks we will use for training and validation (see Section 2.2). This way we avoid dealing with "wrong" ground truth masks which would complicate the qualitative analysis (see Section 3.1).
The models are trained using all 568 samples from 191 studies using 5-fold group cross-validation based on their OpenNeuro dataset identifier. This way all samples in the validation folds stem from datasets unseen during training of the respective model resulting in realistic validation performance measures.
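A sketch of such a grouped split is shown below; `images` and `dataset_ids` are placeholder arrays holding the samples and their OpenNeuro dataset identifiers.

```python
from sklearn.model_selection import GroupKFold

# every validation image stems from a dataset unseen during training
for train_idx, val_idx in GroupKFold(n_splits=5).split(images, groups=dataset_ids):
    train_images, val_images = images[train_idx], images[val_idx]
```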
### Ground Truth masks
The ground truth masks are derived based on tissue segmentation masks generated by the CAT12 toolbox which have been meticulously quality checked (see Section 2.1). These tissue segmentation masks contain the probability for the background and the foreground tissue classes cerebrospinal fluid, grey matter and white matter in each voxel. Brain extraction masks can be easily derived by summing up the probability values of all foreground classes. We directly use this probability mask and initially do not apply thresholding (e.g. \(p_{foreground}<0.5\to 0\), \(p_{foreground}\geq 0.5\to 1\)) as this would omit the measure of uncertainty CAT12 generates.
### Preprocessing
Before model training, the images are preprocessed using bias correction followed by intensity normalization. The ANTS implementation [Avants et al.] of the standard n4 bias field correction [Tustison et al.] is applied. Then the intensity values of each sample are clipped into a range between the 0.5% and 99.5% quantile as proposed in [Isensee et al., b]. Finally, the image intensity is normalized to a mean of 0.449 and a standard deviation of 0.229, aligning it to the ImageNet [Deng et al.] intensity distribution the encoder of the 2D model has been pretrained on (see Section 2.5).
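A sketch of the clipping and normalization steps (applied after the N4 bias field correction) is given below; the exact sequence of operations in the released tool may differ, and the function name is illustrative.

```python
import numpy as np

def normalize_intensity(img: np.ndarray) -> np.ndarray:
    # clip to the [0.5%, 99.5%] quantiles, then match the ImageNet-like
    # intensity distribution (mean 0.449, std 0.229) described above
    lo, hi = np.quantile(img, [0.005, 0.995])
    img = np.clip(img, lo, hi)
    img = (img - img.mean()) / img.std()
    return img * 0.229 + 0.449
```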
### Data Augmentation
Data augmentation is used during model training to introduce artificial effects which can occur in potential use cases. This increases model generalizability since effects which are infrequent in the training data can be systematically oversampled with any desired intensity.
We combine standard image augmentations with augmentations specific to magnetic resonance images and an augmentation specific to the 2D approach which merges adjacent slices (see Figure 2). The standard image augmentations consist of four spatial transformations (flip, warp, rotate and zoom) and two intensity transformations (brightness and contrast). These transformations are implemented using the aug_transforms function of the fastai package [Howard and Gugger] with max_rotate set to 15, max_lighting set to 0.5, max_warp set to 0.1 and default settings for all remaining parameters.
Figure 1: Age and sex distribution across all 568 utilized samples.
The five MRI-specific augmentations are: bias fields, motion artifacts, noise, blurring and ghosting. Bias fields are simulated with a linear combination of polynomial basis functions [Van Leemput et al.] of order 4, and ghosting is achieved by introducing artifacts into the k-space of the image. The implementation of these augmentations is based on the torchio package [Perez-Garcia et al.].
Finally, we use a transformation specific to the 2D model (see Section 2.5) which incorporates the adjacent slices \(i+1\) and \(i-1\) (see Figure 2 bottom) into slice \(i\) with \(x_{i}=(1-\alpha)*x_{i}+\alpha*(x_{i+1}+x_{i-1})\). Alpha is randomly sampled between 0 and 0.5 and the transformation is applied to the image slice and the respective mask slice. This augmentation aims to reinforce the consistency of the predicted masks along the third dimension such that slicing artifacts (see Figure 3B center) are minimized.
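A sketch of this slice-merge transformation is shown below. How the first and last slices are handled is our assumption (they are left unchanged here), and the same call is applied to the image stack and its mask stack.

```python
import numpy as np

def merge_slices(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # x_i <- (1 - alpha) * x_i + alpha * (x_{i+1} + x_{i-1}), alpha ~ U(0, 0.5)
    # x: (slices, H, W) stack along the dimension to be smoothed
    alpha = rng.uniform(0.0, 0.5)
    out = x.copy()
    out[1:-1] = (1 - alpha) * x[1:-1] + alpha * (x[2:] + x[:-2])
    return out
```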
### Model Architecture
2D Model. The 2D model is a two-dimensional (2D) convolutional neural network (CNN) which uses five neighboring slices to predict the segmentation mask of the central of these five slices. The 2D CNN is a LinkNet [Chaurasia and Culurciello], which is an advancement of the U-Net architecture specialized to produce accurate segmentation masks with high efficiency. To utilize transfer learning, a 2D CNN pretrained on the ImageNet dataset was used as the encoder part of the LinkNet. The encoder's architecture is a GENet (GPU-Efficient Network) [Lin et al.] with five input channels, corresponding to the five neighboring coronal slices which are inputted to the network. The model is implemented using the Segmentation Models PyTorch package [Iakubovskii] with default parameter settings except for the encoder depth, which is set to 4 [Isensee et al., b].
3D Model. The 3D model is a 3D LinkNet which is based on the original 2D implementation ([https://github.com/e-lab/pytorch-linknet](https://github.com/e-lab/pytorch-linknet)), employing 3D convolution and 3D pooling instead of the 2D operations. Since the 3D MR images need more memory compared to 2D images, two additional modifications had to be implemented: First, the number of channels used for each convolutional operation was divided by 4, reducing the needed memory. Second, the batch normalizations [Ioffe and Szegedy] were replaced by instance normalizations [Ulyanov et al.] such that model training could be done with a batch size of 1.
Figure 2: All utilized MRI specific augmentations (top), standard image augmentations (middle) and the slice merge augmentation transformation (bottom), based on an axial slice of an example image (top left).
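One way to realize the second modification of the 3D model (swapping batch for instance normalization) is a recursive module replacement, sketched below for illustration; the function name is our own.

```python
import torch.nn as nn

def batchnorm_to_instancenorm(module: nn.Module) -> None:
    # swap every BatchNorm3d for InstanceNorm3d so the 3D LinkNet can be
    # trained with a batch size of 1
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm3d):
            setattr(module, name, nn.InstanceNorm3d(child.num_features, affine=True))
        else:
            batchnorm_to_instancenorm(child)
```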
### Training Procedure
Both models are trained with a learning rate of 0.001 using a combination of the Dice loss and the focal loss via the GeneralizedDiceFocalLoss of the monai package [Diaz-Pinto et al.] with lambda_focal set to 0.2. The 2D model is trained with a batch size of 32, while the 3D model is trained with a batch size of 1 due to memory constraints. To equalize the amount of training, i.e. the total number of batch iterations, the three 2D models (sagittal, coronal and axial) are each trained for 10 epochs, while the 3D model is trained for 200 epochs, resulting in \(\approx\)100,000 total iterations for each approach.
Three methods for quick convergence are applied: the Ranger optimizer, the Flatten + Cosine Annealing learning rate schedule, and discriminative learning rates. The Ranger optimizer proposed by [Wright and Demeure] combines the rectified Adam [Liu et al.] optimizer, which increases stability at the beginning of training, with LookAhead [Zhang et al.], which speeds up convergence towards the end of training. The Flatten + Cosine Annealing learning rate schedule was specifically developed for the Ranger optimizer, combining constant learning rates for stable exploration with cosine annealing to smoothly finish training. To avoid catastrophic forgetting in the 2D model, the learning rate of the pretrained encoder is set to zero, while the decoder and the head are trained with 20% and 100% of the scheduled learning rate, respectively. These methods are implemented using the fastai package with the respective default parameters.
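The combined Dice + focal objective described above can be instantiated roughly as follows; `sigmoid=True` is our assumption for a single-channel binary brain mask and is not stated in the text.

```python
from monai.losses import GeneralizedDiceFocalLoss

# combined Dice + focal objective with the stated focal weighting
loss_fn = GeneralizedDiceFocalLoss(sigmoid=True, lambda_focal=0.2)
```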
### Prediciton Procedure
The prediction procedure consists of two successive stages: First, a preliminary mask is predicted using the full-view MRI with a low resolution (128\({}^{3}\) voxels) to determine the minimal bounding box which contains all brain voxels in the preliminary mask (see Figure 3A). With a margin of 10% of the respective edge length, the image is cropped around the minimal bounding box and resampled to 256\({}^{3}\) voxels. Then the second stage prediction is done to obtain the final brain mask.
In the second stage prediction we compare the 3D approach with the popular, alternative 2D approach: The 3D approach (called "deepbet 3D") applies a 3D CNN to the MR image while the 2D approach (called "deepbet 2D") applies 2D CNNs to individual slices of the MR image (see Section 2.5).
The final prediction of deepbet 2D utilizes a multi-view aggregation approach, i.e. predictions are done on sagittal, coronal and axial slices separately using three models trained on the respective views, and these predictions are then aggregated [Guha Roy et al.]. The aggregation of the three-view predictions is typically done by calculating the voxelwise median to get a smooth ensemble prediction (see Figure 3C). To further minimize slicing artifacts, we combine multi-view aggregation with multi-slice aggregation. Multi-slice aggregation incorporates predictions of neighboring slices into each slice, resulting in masks with smoother transitions between slices (see Figure 3B). Here, multi-slice aggregation is done with n=5 slices such that the prediction of slice i is aggregated via the voxelwise median of the slices [i-2; i+2]. To maximize the accuracy of the final prediction, we combine multi-view and multi-slice aggregation such that each voxel of the final prediction is based on the median of an ensemble of 15 (3 views × 5 slices) voxel predictions.
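The combined aggregation can be sketched as follows; the assignment of slicing axes to views is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def aggregate(pred_sagittal, pred_coronal, pred_axial, n=5):
    # Multi-slice aggregation: running voxelwise median over n adjacent slices
    # along each model's slicing axis (the axis order is assumed here).
    smoothed = np.stack([
        median_filter(pred_sagittal, size=(n, 1, 1)),
        median_filter(pred_coronal, size=(1, n, 1)),
        median_filter(pred_axial, size=(1, 1, n)),
    ])
    # Multi-view aggregation: voxelwise median across the three views, so each
    # final voxel is based on 3 views * 5 slices = 15 individual predictions.
    return np.median(smoothed, axis=0)
```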
Finally, two standard postprocessing steps are applied to the predicted mask: Firstly, voxels which do not belong to the largest connected component are set to zero using the cc3d package [Silversmith] and secondly, holes in the remaining component are filled using the fill_voids package [fil].
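A sketch of these two postprocessing steps using the named packages could look as follows; the exact entry points should be treated as assumptions rather than verified project code.

```python
import cc3d
import fill_voids

def postprocess(mask):
    # Keep only the largest connected component, then fill enclosed holes.
    largest = cc3d.largest_k(mask, k=1) > 0
    return fill_voids.fill(largest)
```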
Figure 3: A.) First stage of the prediction process: The original MR image is resized to 128\({}^{3}\) voxels, the preliminary mask is predicted and the image is cropped with a margin of 10% of the respective edge length of the minimal bounding box. B.) Multi-slice aggregation: Five adjacent (coronal) prediction slices are aggregated via the voxelwise median. C.) Multi-view aggregation: The sagittal, coronal and axial view, i.e. the stacked predictions of the respective models, are aggregated via the voxelwise median.
### Evaluation
To evaluate the agreement between the predicted brain masks and the thresholded probabilistic ground truth, the Dice score is calculated. As the probability values of our ground truth masks suggest, there is room for interpretation as to where exactly to draw the boundary line of the brain mask.
As shown in Figure 4, some tools include a thicker layer of cerebrospinal fluid around the brain than others. To account for this, SynthStrip's mask border threshold is calibrated to -1 millimetre, as this setting resulted in the best agreement with our ground truth masks in terms of the Dice score. Similarly, we calibrate the threshold used to binarize the ground truth mask before calculating the Dice score individually for HD-BET and SynthStrip. We find that a threshold of 0.5 for SynthStrip and 0.5 for HD-BET maximizes the median Dice score for each tool, indicating proper calibration.
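For completeness, the evaluation metric reduces to a few lines; `threshold` corresponds to the calibrated binarization threshold discussed above.

```python
import numpy as np

def dice_score(pred_mask, ground_truth_prob, threshold=0.5):
    # Dice overlap between a binary prediction and the probabilistic ground
    # truth binarized at the calibrated threshold.
    truth = ground_truth_prob >= threshold
    intersection = np.logical_and(pred_mask, truth).sum()
    return 2.0 * intersection / (pred_mask.sum() + truth.sum())
```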
Figure 4: Boundary lines of the brain masks generated by SynthStrip, HD-BET and the ground truth probability mask thresholded at 0.1 and 0.9.
## 3 Results
During cross-validation, deepbet 3D performs best regarding the Dice scores (DSC) across the validation samples (see Figure 5). It shows the highest median DSC of 99.0%, followed by deepbet 2D (\(\text{DSC}=98.2\%\)). SynthStrip (\(\text{DSC}=97.8\%\)) and HD-BET (\(\text{DSC}=97.9\%\)) show the lowest median DSCs of the investigated methods.
In addition to the highest accuracy, deepbet 3D achieves the highest processing speed on both low-level and high-level hardware. On low-level hardware, deepbet 3D processes 25 images per minute, which translates into a 2.5x speedup compared to deepbet 2D (10 images per minute), a 7x speedup compared to SynthStrip (3.5 images per minute) and a 250x speedup compared to HD-BET (0.1 images per minute). On high-level hardware, deepbet 3D achieves 122 images per minute, being 3x faster than deepbet 2D (40 images per minute), 11x faster than SynthStrip (11 images per minute) and 27x faster than HD-BET (4.5 images per minute). We use a laptop with an Intel i7-8565U central processing unit (CPU) and without a graphical processing unit (GPU) as low-level hardware, and a system with an AMD Ryzen 9 5950X CPU and an NVIDIA GeForce RTX 3090 GPU as high-level hardware.
### Qualitative analysis
By examining the most challenging images (see Figure 5 bottom), i.e. the images with the lowest Dice scores, deficits of the respective approaches can be revealed. All extraction methods are challenged by one of two issues: first, strong noise occurring in images of the OpenNeuro studies ds001168 and ds003192; second, atypically strong rotation around the sagittal axis in images of study ds001734.
For deepbet 3D, deepbet 2D and HD-BET, the respective images with the lowest Dice scores are the noisy images. deepbet 3D maintains the highest minimal DSC of 96.9%, merely excluding the outer edge of the cerebellum for one of the noisy images. HD-BET is most sensitive, including large patches of background noise in the brain mask, resulting in strong outliers with Dice scores as low as 76.5%. deepbet 2D tends to exclude from the brain mask small parts where strong background noise blends into the cerebellum, resulting in a minimal DSC of 93.4%. This showcases the drawback of the 2D approach compared to the 3D approach: 3D models can infer the segmentation class of parts of the image which are "occluded" (e.g., by noise) by memorizing the typical 3D shape of the masks during training, which 2D models cannot.
Figure 5: Top: Dice scores and average images per minute processed by deepbet 3D, deepbet 2D, SynthStrip and HD-BET. Bottom: Sagittal slice of the worst predictions of each method (red) and the respective ground truth masks (blue).
With regard to SynthStrip, we noticed that it includes small regions of tissue around the cerebellum (see Supplement Figure S1), along with the known issues of including tissue around the eye socket and regions of extra-cerebral matter near the dorsal cortex (Hoopes et al., 2022). However, its lowest Dice score of 95.4% is caused by the exclusion of a large part of the frontal lobe in one of the images, which is atypically rotated around the sagittal axis. This behaviour can be replicated in other images by rotating them in the same direction before applying brain extraction (see Figure 6). Despite SynthStrip being trained with rotations between -45 and 45 degrees via data augmentation, a rotation of 40 degrees around the sagittal axis causes it to exclude large parts of the frontal lobe in all five cases. This indicates that the SynthStrip model is overfitted to typically oriented, non-tilted samples.
## 4 Discussion
The results confirm that with deepbet, efficient brain extraction of T1w MRIs can be done with high precision, i.e. a median Dice score (DSC) of 99.0%. deepbet outperforms the state-of-the-art tools SynthStrip (DSC \(=97.8\%\)) and HD-BET (DSC \(=97.9\%\)), which achieve higher median Dice scores than on the T1w test images of their respective original studies, indicating proper calibration (see Section 2.8). Furthermore, deepbet manages to maintain high precision even for edge cases (e.g. images with strong background noise) with a minimal DSC of 96.9%, beating SynthStrip and HD-BET with minimal Dice scores of 95.4% and 76.5%, respectively. On top of that, deepbet is 10x more efficient than current state-of-the-art tools.
Besides the best performing approach (called deepbet or deepbet 3D), another approach (called deepbet 2D), which applies 2D CNNs to 2D slices of the 3D image, was investigated. The results show that this approach is inferior to deepbet 3D since (1) it is susceptible to strong noise occluding parts of the brain and (2) it introduces slicing artifacts which have to be smoothed out by multi-slice and multi-view aggregation (see Section 2.7).
Due to the two-staged prediction process and the usage of LinkNet instead of the standard U-Net architecture, deepbet is highly computationally efficient, such that brain extraction can be done in two seconds on low-level hardware (here, an Intel i7-8565U laptop CPU) and in half a second on high-level hardware (here, an AMD Ryzen 9 5950X CPU and an NVIDIA GeForce RTX 3090 GPU). This makes deepbet attractive for integration into neuroimaging pipelines and enables processing large datasets within a single day without the need to access large compute clusters. The method is made available at [https://github.com/wuu-mml/deepbet](https://github.com/wuu-mml/deepbet).
In this work, we purposely limited deepbet to T1w MRIs of healthy adults, as this is the dominant modality in the neuroimaging field and modality-specific training and application maximizes performance. In future work, this modality-specific approach can be applied to other modalities (e.g. T2w, FLAIR, DWI, PET, CT) and patient groups (e.g. children and brain tumor patients) by training the model on the respective images, optimally stemming from a large number of different studies.
Figure 6: Systematic errors of SynthStrip's brain extraction caused by 40 degrees of rotation around the sagittal axis. Five original (top) and rotated (bottom) samples are shown with their respective brain masks (red) and Dice scores (DSC).
## Declarations
### Conflict of Interest
All authors declare that they have no conflicts of interest.
### Funding
This work was funded by the German Research Foundation (DFG grants HA7070/2-2, HA7070/3, HA7070/4 to TH) and the Interdisciplinary Center for Clinical Research (IZKF) of the medical faculty of Münster (grants Dan3/012/17 to UD and MzH 3/020/20 to TH and GMzH).
|
2302.10524 | LU-Net: Invertible Neural Networks Based on Matrix Factorization | LU-Net is a simple and fast architecture for invertible neural networks (INN)
that is based on the factorization of quadratic weight matrices
$\mathsf{A=LU}$, where $\mathsf{L}$ is a lower triangular matrix with ones on
the diagonal and $\mathsf{U}$ an upper triangular matrix. Instead of learning a
fully occupied matrix $\mathsf{A}$, we learn $\mathsf{L}$ and $\mathsf{U}$
separately. If combined with an invertible activation function, such layers can
easily be inverted whenever the diagonal entries of $\mathsf{U}$ are different
from zero. Also, the computation of the determinant of the Jacobian matrix of
such layers is cheap. Consequently, the LU architecture allows for cheap
computation of the likelihood via the change of variables formula and can be
trained according to the maximum likelihood principle. In our numerical
experiments, we test the LU-net architecture as generative model on several
academic datasets. We also provide a detailed comparison with conventional
invertible neural networks in terms of performance, training as well as run
time. | Robin Chan, Sarina Penquitt, Hanno Gottschalk | 2023-02-21T08:52:36Z | http://arxiv.org/abs/2302.10524v1 | # LU-Net: Invertible Neural Networks Based on Matrix Factorization
###### Abstract
LU-Net is a simple and fast architecture for invertible neural networks (INN) that is based on the factorization of quadratic weight matrices \(A=LU\), where \(L\) is a lower triangular matrix with ones on the diagonal and \(U\) an upper triangular matrix. Instead of learning a fully occupied matrix \(A\), we learn \(L\) and \(U\) separately. If combined with an invertible activation function, such layers can easily be inverted whenever the diagonal entries of \(U\) are different from zero. Also, the computation of the determinant of the Jacobian matrix of such layers is cheap. Consequently, the LU architecture allows for cheap computation of the likelihood via the change of variables formula and can be trained according to the maximum likelihood principle. In our numerical experiments, we test the LU-Net architecture as a generative model on several academic datasets. We also provide a detailed comparison with conventional invertible neural networks in terms of performance, training, as well as run time.
generative models \(\bullet\) invertible neural networks \(\bullet\) normalizing flows \(\bullet\) LU matrix factorization \(\bullet\) LU-Net
## I Introduction
In recent years, learning generative models has emerged as a major field of AI research [1, 2, 3]. In brief, given \(N\) i.i.d. examples \(\{X_{j}\}_{j=1}^{N}\) of a \(\mathbb{R}^{Q}\)-valued random variable \(X\sim P_{X}\), the goal of a generative model is to learn the data distribution \(P_{X}\) while having only access to the sample \(\{X_{j}\}_{j=1}^{N}\). Having knowledge about the underlying distribution of observed data allows for e.g. identifying correlations between observations. Common downstream tasks of generative models then include 1) _density estimation_, i.e. what is the probability of an observation \(x\) under the distribution \(P_{X}\), and 2) _sampling_, i.e. generating novel data points \(x_{new}\) approximately following data distribution \(P_{X}\).
Early approaches include **deep Boltzmann machines**[4, 5], which are energy-based models motivated from statistical mechanics and which learn unnormalized densities. The expressive power of such models is however limited and sampling from the learned distributions becomes intractable even in moderate dimensions due to the computation of a normalization constant, for which the (slow) Markov chain Monte Carlo method is often used instead.
On the contrary, **autoregressive models**[6, 7, 8] learn tractable distributions by relying on the chain rule of probability, which allows for factorizing the density of \(X\) over its \(Q\) dimensions. In this way, the likelihood of data can be evaluated easily, yielding promising results for density estimation and sampling by training via maximum likelihood. Nonetheless, sampling using autoregressive models still remains computationally inefficient given the sequential nature of the inference step over the dimensions.
**Generative adversarial networks** (GANs) avoid the just mentioned drawback by learning a generator in a minimax game against a discriminator [9, 1]. Here, both the generator and discriminator are represented by deep neural networks. In this way, the learning process can be conducted without an explicit formulation of the density of the distribution \(P_{X}\) and therefore can be considered as likelihood-free approach. While GANs have shown successful results on generating novel data points even in high dimensions, the absence of an expression for the density makes them unsuitable for density estimation.
Also **variational autoencoders** (VAEs) have shown fast and successful sampling results [10, 2, 11]. These models consist of an encoder and a decoder part, which are both based on deep neural network architectures. The encoder maps data points \(\{X_{j}\}_{j=1}^{N}\) to lower dimensional latent representations, which define variational distributions. The decoder subsequently maps samples from these variational distributions back to input space. Both parts are trained jointly by maximizing the evidence lower bound (ELBO), i.e. providing a lower bound for the density.
More recently, **diffusion models** have gained increased popularity [12, 13, 14, 15] given their spectacular results. In the forward pass, this type of model gradually adds Gaussian noise to data points. Then, in the backward pass the original input data is recovered from noise using deep neural networks, i.e. representing the sampling direction of diffusion models. However, sampling requires the simulation of a multi stage stochastic process, which is computationally slow compared to simply applying a map. Moreover, just like VAEs, diffusion models do not provide a tractable computation of the likelihood. By training using ELBO, they at least provide a lower bound for the density.
Another type of generative models are **Normalizing flows**[16, 17, 18, 19, 20, 21, 22], which allow for efficient and exact density estimation as well as sampling of high dimensional data. These models learn an invertible transformation
\(Z=f(X)\) for \(X\sim P_{X}\) such that the distribution of \(f^{-1}(Z)\) is as close as possible to the target distribution \(P_{X}\). Here, \(Z\) follows a simple prior distribution \(P_{Z}\) that is easy to evaluate densities with and easy to sample from, such as e.g. the multivariate standard normal distribution. Hence, \(f^{-1}\) represents the generator for the unknown (and oftentimes complicated) distribution \(P_{X}\). As normalizing flows are naturally based on the change of variables formula
\[p_{X}(x)=p_{Z}(f(x))\cdot\big|\det\left(\mathrm{J}f(x)\right)\big|, \tag{1}\]
with \(X=f^{-1}(Z)\), this expression can also be used for exact probability density evaluation and likelihood based learning.
In this way, normalizing flows offer impressive generative performance along with an explicit and tractable expression of the density. Despite the coupling layers in normalizing flows being rather specific, it has been shown recently that their expressive power is universal for target measures having a density on the target space \(\mathbb{R}^{D}\)[21]. Note however that multiple such coupling layers are usually chained for an expressive generative model. These layers are typically parameterized by the outputs of neural networks and therefore normalizing flows can require considerable computational resources.
In this work, we propose **LU-Net**: an alternative to existing invertible neural network (INN) architectures motivated by the positive properties of normalizing flows. The major advantage of LU-Net is the simplicity of its design. It is based on the elementary insight that a fully connected layer of a feed forward neural network is a bijective map from \(\mathbb{R}^{Q}\to\mathbb{R}^{D}\) if and only if (a) the weight matrix is quadratic, i.e. \(Q=D\), (b) the weight matrix is of full rank and (c) the employed activation function maps \(\mathbb{R}\) to itself bijectively. A straightforward idea is then to fully compose a neural network of such invertible layers and learn a transformation \(f\) as in Eq. (1).
However, it can be computationally expensive to actually invert the just described INN, especially if required during training. Due to the size of the weight matrices and the fact that they can be fully occupied, the computational complexity of the inversion scales as \(\mathcal{O}(M\times D^{3})\), where \(M\) is the depth and \(D\) the width of the proposed INN. The same complexity also holds for the computation of the determinant, which occurs in the computation of the likelihood. Both tasks, inversion and computation of the determinant, can be based on the LU-decomposition, where a quadratic matrix is factorized into a lower triangular matrix with ones on the diagonal and an upper triangular matrix with nonzero diagonal entries. Using this factorization, inversion becomes of order \(\mathcal{O}(M\times D^{2})\) and the computation of the determinant of order \(\mathcal{O}(M\times D)\). In the LU-Net architecture, we therefore enforce the weight matrices to be of lower or upper triangular shape and keep this shape fixed during the entire training process. Consequently, a single fully connected layer \(x\mapsto Ax+b\) is decomposed into two layers \(x\mapsto Ux\mapsto LUx+b\), where the weight matrices \(L\) and \(U\) are "masked" on the upper and lower triangular positions, respectively.
This simple architecture of LU-Net however comes with a limitation, that is universal approximation. The classical type of universal approximation theorems deal with a fixed depth and an arbitrary width of neural networks [23, 24, 25]. Obviously, these theorems cannot be applied to LU-Net due to its bijectivity constraint. More recent universal approximation theorems deal with a fixed width and an arbitrary depth [26, 27, 28, 29]. But even in the weak sense of \(L^{p}\)-distances, a network requires a width of at least \(D+1\)[29] to be a universal approximator. Thus, LU-Net just misses this property by one dimension.
One could imagine that the missing dimension in the width of LU-Net becomes less relevant the higher dimensional the problem is, so that the difference between width \(D\) and \(D+1\) becomes marginal. Nevertheless, also for dimensions as low as \(D=2\), we provide numerical evidence that the expressivity of LU-Net still achieves reasonable quality in density estimation.
Moreover, we also present reasonable results using LU-Net for the popular task of generative modeling of images, which includes density estimation and sampling. In a quantitative comparison as suggested by [30], LU-Net achieves a consistent advantage in terms of the negative log likelihood metric when compared to the widely used RealNVP INN architecture [3] with about the same number of parameters. In our experiments we further observe that training LU-Net is computationally considerably cheaper than training the just mentioned coupling layer based normalizing flow. This also points to the particular suitability of LU-Net as base model for rapid prototyping.
Overall, LU-Net provides a simple and efficient framework of an INN. Due to its simplicity, this model is applicable to a variety of problems with data of different forms. This is in contrast to other generative models, e.g. normalizing flows, which are often particularly designed for a specific application such as image generation.
The content of this paper is structured as follows: in Sec. II we describe the LU-Net architecture with details on computing the density and likelihood. Numerical results including experiments on the image datasets MNIST [31] and Fashion-MNIST [32] follow in Sec. III. Finally, in Sec. IV we conclude our article and give recommendations for future research directions.
Additionally, we report evaluations on condition numbers of the LU-layers and on the closeness of the distribution in the normalizing direction of LU-Net to a multivariate standard normal distribution in Appendix A and Appendix B, respectively. The entire code for the numerical experiments presented in this paper is publicly available on GitHub: [https://github.com/spenquitt/LU-Net-Invertible-Neural-Networks](https://github.com/spenquitt/LU-Net-Invertible-Neural-Networks).
## II LU-Net Architecture and the Likelihood
Fully connected neural networks are the most basic models for deep learning and can be applied to a wide variety of tasks. This generality is what we also aim for with invertible neural networks in the context of probabilistic generative modeling. To this end, we have to ensure that the model is bijective and that the inversion as well as the computation of the likelihood are both tractable.
The bijectivity constraint in fully connected neural networks can easily be fulfilled by using a bijective activation function and restricting the weight matrices to be quadratic and of full rank. However, the inversion and the computation of the determinant for the computation of the likelihood, cf. Eq. (1), of fully occupied matrices remain computationally expensive with a cubic complexity. To address this problem, we propose to directly learn the LU factorization replacing the weight matrices in fully connected layers without a loss in model capacity. This forms the building block of our proposed LU-Net, which we explain in more detail in what follows.
### _LU Factorization_
The LU factorization is a common method to decompose a square matrix \(\mathsf{A}\in\mathbb{R}^{\mathsf{D}\times\mathsf{D}}\) into a lower triangular matrix \(\mathsf{L}\in\mathbb{R}^{\mathsf{D}\times\mathsf{D}}\) with ones on the diagonal and an upper triangular matrix \(\mathsf{U}\in\mathbb{R}^{\mathsf{D}\times\mathsf{D}}\) with non-zero diagonal entries. Then, the matrix \(\mathsf{A}\) can be rewritten as
\[\mathsf{A}=\mathsf{L}\mathsf{U}=\begin{pmatrix}1&&\mathbf{0}\\ &\ddots&\\ \ast&&1\end{pmatrix}\begin{pmatrix}\mathsf{u}_{1,1}&&\ast\\ &\ddots&\\ \mathbf{0}&&\mathsf{u}_{\mathsf{D},\mathsf{D}}\end{pmatrix} \tag{2}\]
The leaky softplus function circumvents those aforementioned limitations, which is why we choose this activation function for all hidden layers in our proposed LU-Net, i.e. \(\phi^{(\mathsf{m})}(\mathsf{x})=\mathsf{LeakySoftplus}(\mathsf{x})\) with \(\alpha=0.1\) for all \(\mathsf{m}=1,\ldots,\mathsf{M}-1\). As we will deal with regression problems, we employ no activation in the final layer, i.e. \(\phi^{(\mathsf{M})}(\mathsf{x})=\mathsf{x}\) is the identity map.
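To make the layer structure concrete, a minimal PyTorch sketch of a single LU layer is given below. The closed form of LeakySoftplus is inferred from its derivative \(\alpha+(1-\alpha)\,\sigma(\mathsf{x})\) stated later in this section; for exact details we refer to the published code linked in Sec. I.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def leaky_softplus(x, alpha=0.1):
    # Assumed form, consistent with the derivative alpha + (1 - alpha) * sigmoid(x)
    # given in Sec. II: a convex combination of identity and softplus.
    return alpha * x + (1.0 - alpha) * F.softplus(x)

class LULayer(nn.Module):
    """One LU layer x -> phi(L U x + b) with masked triangular weight matrices."""

    def __init__(self, dim):
        super().__init__()
        self.L = nn.Parameter(torch.eye(dim))  # only the strictly lower part is used
        self.U = nn.Parameter(torch.eye(dim))  # only the upper part (incl. diagonal) is used
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        eye = torch.eye(self.L.shape[0], device=x.device)
        L = torch.tril(self.L, diagonal=-1) + eye  # enforce unit diagonal
        U = torch.triu(self.U)
        return leaky_softplus(x @ U.T @ L.T + self.b)
```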
### _Inverse LU-Net_
If all previously described requirements for the inversion of LU-Net are fulfilled, each layer can be reversed as
\[{\mathsf{f}^{(\mathsf{m})}}^{-1}(\mathsf{z})={\mathsf{U}^{(\mathsf{m})}}^{-1}{ \mathsf{L}^{(\mathsf{m})}}^{-1}\left(\phi^{-1}(\mathsf{z})-\mathsf{b}^{( \mathsf{m})}\right)\]
for some input \(\mathsf{z}\in\mathbb{R}^{\mathsf{D}}\) and for all LU layers \(\mathsf{m}=1,\ldots,\mathsf{M}\).
Then, the overall expression for the reversed LU-Net \({\mathsf{f}^{-1}}:\mathbb{R}^{\mathsf{D}}\to\mathbb{R}^{\mathsf{D}}\) is given by
\[{\mathsf{f}^{-1}}(\mathsf{z})=\left({\mathsf{f}^{(1)}}^{-1}\circ\ldots\circ{ \mathsf{f}^{(\mathsf{M}-1)}}^{-1}\circ{\mathsf{f}^{(\mathsf{M})}}^{-1}\right) (\mathsf{z})\]
and represents the "generating direction", see also Fig. 1(b).
Note that both \(\mathsf{f}\) and \({\mathsf{f}^{-1}}\) share their weights and it holds \(\left({\mathsf{f}^{-1}}\circ{\mathsf{f}}\right)(\mathsf{x})=\mathsf{x}\) for any input \(\mathsf{x}\in\mathbb{R}^{\mathsf{D}}\) even without any training.
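In practice the inversion can be carried out with two triangular solves per layer instead of explicit matrix inverses, as the following sketch (building on the `LULayer` sketch above and an assumed inverse activation `phi_inverse`) indicates.

```python
import torch

def invert_lu_layer(layer, z, phi_inverse):
    # Sketch of one reversed LU layer, f^{-1}(z) = U^{-1} L^{-1} (phi^{-1}(z) - b).
    # phi_inverse is the (possibly numerically computed) inverse activation.
    # Triangular systems are solved directly rather than forming inverses,
    # mirroring the run-time behavior discussed in the comparison section.
    eye = torch.eye(layer.L.shape[0])
    L = torch.tril(layer.L, diagonal=-1) + eye
    U = torch.triu(layer.U)
    y = (phi_inverse(z) - layer.b).unsqueeze(-1)  # column vector(s)
    y = torch.linalg.solve_triangular(L, y, upper=False, unitriangular=True)
    x = torch.linalg.solve_triangular(U, y, upper=True)
    return x.squeeze(-1)
```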
### _Training via Maximum Likelihood_
Given a dataset \(\mathcal{D}=\{\mathsf{x}^{(\mathsf{n})}\}_{n=1}^{\mathsf{N}}\), containing \(\mathsf{N}\) independently drawn examples of some random variable \(\mathsf{X}\sim\mathsf{P}_{\mathsf{X}}\), our training objective is then to maximize the likelihood
\[\mathscr{L}(\theta|\mathcal{D})=\prod_{n=1}^{\mathsf{N}}\mathsf{p}_{\mathsf{ X}}(\mathsf{x}^{(\mathsf{n})}) \tag{5}\]
where \(\mathsf{p}_{\mathsf{X}}:\mathbb{R}^{\mathsf{D}}\to\mathbb{R}\) denotes the (unknown) probability density function corresponding to the target distribution \(\mathsf{P}_{\mathsf{X}}\) and \(\theta=\{{\mathsf{U}^{(\mathsf{m})}},{\mathsf{L}^{(\mathsf{m})}},{\mathsf{b}^{(\mathsf{m})}}\}_{\mathsf{m}=1}^{\mathsf{M}}\) the set of model parameters of LU-Net. By defining the model function \({\mathsf{f}}(\cdot|\theta)\) of LU-Net to be the invertible transform such that \(\mathsf{X}={\mathsf{f}^{-1}}(\mathsf{Z}|\theta)\Leftrightarrow\mathsf{Z}={\mathsf{f}}(\mathsf{X}|\theta)\), where \(\mathsf{Z}\sim\mathsf{P}_{\mathsf{Z}}\) is another random variable following a simple prior distribution, we can use the change of variables formula to rewrite the expression in Eq. (5) to
\[\mathscr{L}(\theta|\mathcal{D})=\prod_{n=1}^{\mathsf{N}}\mathsf{p}_{\mathsf{ Z}}\left({\mathsf{f}}(\mathsf{x}^{(\mathsf{n})}|\theta)\right)\big{|}\det \left({\mathbb{J}}{\mathsf{f}}(\mathsf{x}^{(\mathsf{n})}|\theta)\right)\big{|}. \tag{6}\]
As in normalizing flows, we choose \(\mathsf{P}_{\mathsf{Z}}\) to be a \(\mathsf{D}\)-multivariate standard normal distribution with probability density function
\[\mathsf{p}(\mathsf{z})=\frac{1}{\sqrt{(2\pi)^{\mathsf{D}}}}\,\exp\left(- \frac{1}{2}\mathsf{z}^{\intercal}\mathsf{z}\right)\!,\ \mathsf{z}\in\mathbb{R}^{\mathsf{D}}. \tag{7}\]
Further, given the fact that the determinant of a triangular matrix is the product of its diagonal entries and given the chain rule of calculus, for each LU layer \(\mathsf{m}=1,\ldots,\mathsf{M}\) of LU-Net it holds that
\[\big|\det\left(\mathbb{J}\mathsf{f}^{(\mathsf{m})}(\mathsf{x})\right)\big|=\Big|\prod_{\mathsf{d}=1}^{\mathsf{D}}\phi^{\prime(\mathsf{m})}\left((\mathsf{L}^{(\mathsf{m})}\mathsf{U}^{(\mathsf{m})}\mathsf{x})_{\mathsf{d}}+\mathsf{b}_{\mathsf{d}}^{(\mathsf{m})}\right)\cdot\mathsf{u}_{\mathsf{d},\mathsf{d}}^{(\mathsf{m})}\Big| \tag{8}\]
with \(\phi^{\prime(\mathsf{m})}\) being the derivative of the \(\mathsf{m}\)-th LU layer's activation function \(\phi^{(\mathsf{m})}\).
Considering now again the chain rule of calculus and taking into account Eq. (6), Eq. (7) as well as Eq. (8), we obtain the following final expression for the negative log likelihood as training loss function:
\[\begin{split}&-\ln\mathscr{L}(\theta|\mathcal{D})\\ =&\frac{1}{2}\cdot\mathsf{N}\cdot\mathsf{D}\cdot\ln(2 \pi)+\frac{1}{2}\sum_{n=1}^{\mathsf{N}}\sum_{\mathsf{d}=1}^{\mathsf{D}} \mathsf{f}_{\mathsf{d}}(\mathsf{x}^{(\mathsf{n})}|\theta)^{2}\\ &-\sum_{n=1}^{\mathsf{N}}\sum_{\mathsf{m}=1}^{\mathsf{M}}\sum_{ \mathsf{d}=1}^{\mathsf{D}}\ln\phi^{\prime(\mathsf{m})}\left(({\mathsf{L}}^{( \mathsf{m})}{\mathsf{U}^{(\mathsf{m})}}\mathsf{x}^{(\mathsf{n})})_{\mathsf{d}}+{ \mathsf{b}}_{\mathsf{d}}^{(\mathsf{m})}\right)\\ &-\mathsf{N}\cdot\sum_{\mathsf{m}=1}^{\mathsf{M}}\sum_{\mathsf{d}=1 }^{\mathsf{D}}\ln\big{|}{\mathsf{u}}_{\mathsf{d},\mathsf{d}}^{(\mathsf{m})} \big{|}\to\min\.\end{split} \tag{9}\]
Note that for each hidden LU layer \(\mathsf{m}=1,\ldots,\mathsf{M}-1\) in LU-Net
\[\phi^{\prime(\mathsf{m})}(\mathsf{x})=\alpha+\frac{1-\alpha}{1+\exp(-\mathsf{x} )}=\alpha+(1-\alpha)\,\sigma(\mathsf{x})\]
with \(\sigma:\mathbb{R}\to\mathbb{R}\) being the logistic sigmoid function. For the final layer the derivative is constant with \(\phi^{\prime(\mathsf{M})}=1\).
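Schematically, the loss of Eq. (9) can be assembled from the network output and a per-sample log-determinant accumulator, as in the following sketch; how the log-determinant terms are collected layer by layer is left to the caller.

```python
import math
import torch

def negative_log_likelihood(z, log_det_jacobian):
    # Sketch of Eq. (9) for a batch: z is the network output f(x), and
    # log_det_jacobian holds, per sample, the accumulated terms
    # sum_m sum_d [ log phi'(.) + log |u_dd| ].
    n, d = z.shape
    prior_term = 0.5 * n * d * math.log(2 * math.pi) + 0.5 * (z ** 2).sum()
    return prior_term - log_det_jacobian.sum()
```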
## III LU-Net Experiments
In this section we present extensive experiments with LU-Net, which were conducted in different settings. As a toy example, we apply LU-Net to learn a two-dimensional Gaussian mixture. Next, we apply LU-Net to the image datasets MNIST [31] as well as the more challenging Fashion-MNIST [32]. We evaluate LU-Net as density estimator and also have a look at the sampling quality as generator.
### _Training goal and evaluation_
In general, our goal is to learn a target distribution \(\mathsf{P}_{\mathsf{X}}\) given only samples which are produced by its data generating process. To this end, we attempt to train our model distribution given by LU-Net to be as close as possible to the data distribution, i.e. the empirical distribution provided by examples \(\mathcal{D}=\{\mathsf{x}^{(\mathsf{n})}\}_{n=1}^{\mathsf{N}}\) of the random variable \(\mathsf{X}\sim\mathsf{P}_{\mathsf{X}}\).
One common way then to quantify the closeness between two distributions \(\mathsf{P}\) and \(\mathsf{Q}\) with probability density functions \(\mathsf{p}\) and \(\mathsf{q}\), respectively, is via the Kullback-Leibler divergence
\[\mathsf{D}_{\text{KL}}(\mathsf{P}\,\|\,\mathsf{Q})=\mathbb{E}_{\mathsf{X}\sim\mathsf{P}}\left[\ln\frac{\mathsf{p}(\mathsf{X})}{\mathsf{q}(\mathsf{X})}\right]=\int_{\mathbb{R}^{\mathsf{D}}}\mathsf{p}(\mathsf{x})\ln\frac{\mathsf{p}(\mathsf{x})}{\mathsf{q}(\mathsf{x})}\,\mathrm{d}\mathsf{x}.\]
It is well known that maximizing the likelihood on the dataset \(\mathcal{D}\), as presented in Eq. (9), asymptotically amounts to minimizing the Kullback-Leibler divergence between the target distribution and model distribution [9, 33], which in our case are defined by \(\mathsf{P}_{\mathsf{X}}\) and LU-Net, respectively. For this reason, the negative log likelihood (NLL) is not only used as loss function for training, but also as the standard metric to measure the density estimation capabilities of probabilistic generative models [30].
### _Gaussian mixture_
#### III-B1 Experimental setup
To begin, we create a dataset consisting of 10,000 two-dimensional data points sampled from Gaussians at four different centers with a standard deviation of \(0.2\), of which we use 9,000 for training LU-Net in different configurations, see Fig. 3(a). More precisely, we train five LU-Nets in total, comprising 2, 3, 5, 8, and 12 hidden LU layers with a final LU output layer each, for 10, 20, 30, 35, and 40 epochs, respectively. As optimization algorithm we use stochastic gradient descent with a momentum term of 0.9. We start with a learning rate of \(1.0\) that decays by a factor of 0.9 in each training epoch. Further, we clip the gradient to a maximal length of 1 w.r.t. the absolute-value norm, which we empirically found to stabilize the training process.
#### III-B2 Results
In Tab. I we report the negative log likelihood of LU-Net on the 1,000 holdout test data points. In Fig. 3(b) - Fig. 3(f) we provide visualizations of the learned and ground truth densities. Generally, we observe that by stacking more LU layers the model function becomes more flexible, which is in line with [29] stating that deeper neural networks have increased capacity. In our toy experiments, the LU-Net with 12 layers achieves the best result with an average NLL of 1.0848. This is also visible in Fig. 3(f), which clearly shows learned modes in proximity of the true centers of the target Gaussian mixture. We conclude that in practice depth can increase the expressive power of LU-Net.
### _MNIST and Fashion-MNIST_
#### III-C1 Data preprocessing
The two publicly available datasets MNIST [31] and Fashion-MNIST [32] consist of gray-scaled images with a resolution of \(28\times 28\) pixels. These images are stored as 8-bit integers, i.e. each pixel can take on a brightness value from \(\{0,1,\dots,255\}\). Modeling such a discrete data distribution with a continuous model distribution (as we do with LU-Net by choosing a Gaussian prior, cf. Eq. (7)) could lead to arbitrarily high likelihood values, since arbitrarily narrow and high densities could be placed as spikes on each of the discrete brightness values. This practice would make the evaluation via the NLL incomparable and thus meaningless.
Therefore, it is best practice in generative modeling to add real-valued uniform noise \(\mathsf{u}\sim\mathsf{U}(0,1)\), \(\mathsf{u}\in[0,1)\) to each pixel of the images in order to dequantize the discrete data [30, 34]. It turns out that the likelihood of the continuous model on the dequantized data is upper bounded by the likelihood on the original image data [30, 35]. Consequently, maximizing the likelihood on \(\mathsf{x}+\mathsf{u}\) will also maximize the likelihood on the original input \(\mathsf{x}\). This also makes the NLL on the dequantized data a comparable performance measure with NLL \(>0\) for any probabilistic generative model dealing with images. Note that the just described non-deterministic preprocessing step can easily be reverted by simply rounding off.
As additional preprocessing steps we normalize the dequantized pixel values to the unit interval \([0,1]\) by dividing by 256, and apply the logit function to transform the data distribution to a Gaussian-like shape. This output then represents the input to LU-Net. Again, these preprocessing steps can easily be reversed by applying the inverse of logit, i.e. the logistic sigmoid function, and by multiplying by 256, respectively.
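The three invertible preprocessing steps can be summarized in a few lines; the small `eps` is an implementation detail we add to guard the logit at the interval boundary.

```python
import torch

def preprocess(x_uint8):
    # Dequantize 8-bit pixels with uniform noise, rescale to (0, 1), and apply
    # the logit transform; each step is invertible (round, sigmoid, * 256).
    x = x_uint8.float()
    x = (x + torch.rand_like(x)) / 256.0
    return torch.logit(x, eps=1e-6)
```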
#### III-C2 Experimental setup
We conduct the experiments on MNIST and Fashion-MNIST with an LU-Net consisting of three hidden LU layers and a final LU output layer. The model is trained for a maximum of 40 epochs and conditioned on each class of MNIST and Fashion-MNIST. As optimization algorithm, we stick to gradient descent with a momentum
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \multicolumn{2}{|c||}{MNIST Test NLL} & \multicolumn{2}{c|}{Fashion-MNIST Test NLL} \\ \hline Class & Bits / Pixel \(\downarrow\) & Class & Bits / Pixel \(\downarrow\) \\ \hline Number 0 & 2.7180 \(\pm\) 0.0284 & T-Shirt & 3.7726 \(\pm\) 3.1228 \\ Number 1 & 2.4795 \(\pm\) 0.0125 & Trousers & 7.2577 \(\pm\) 3.4908 \\ Number 2 & 2.9395 \(\pm\) 0.0997 & Pullover & 2.4018 \(\pm\) 0.1091 \\ Number 3 & 2.7465 \(\pm\) 0.0062 & Dress & 2.5337 \(\pm\) 0.0127 \\ Number 4 & 2.7760 \(\pm\) 0.0114 & Coat & 3.4392 \(\pm\) 1.6847 \\ Number 5 & 2.7489 \(\pm\) 0.0175 & Sandal & 2.8861 \(\pm\) 0.0173 \\ Number 6 & 2.8591 \(\pm\) 0.8279 & Shirt & 2.7209 \(\pm\) 0.1560 \\ Number 7 & 2.7930 \(\pm\) 0.0511 & Sneaker & 3.6077 \(\pm\) 0.0071 \\ Number 8 & 2.7916 \(\pm\) 0.0081 & Bag & 4.3983 \(\pm\) 2.0313 \\ Number 9 & 2.6576 \(\pm\) 0.0187 & Ankle Boot & 4.4405 \(\pm\) 0.1762 \\ \hline \hline Average & 2.7480 \(\pm\) 0.2931 & Average & 3.7568 \(\pm\) 2.4632 \\ \hline \end{tabular}
\end{table} TABLE II: Class-wise negative log likelihood (NLL) when applying LU-Net to MNIST and Fashion-MNIST test dataset. The results are averaged over 30 runs. Note that the NLL is reported in bits per pixel (sometimes also called bits per dimension or simply bpd), which is the NLL with logarithm base 2 averaged over all pixels.
Fig. 3: The LU-Net with different numbers of LU layers applied to a two-dimensional Gaussian mixture. The heatmaps indicate the ground truth densities and the level curves the learned densities given by training LU-Net.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Gaussian mixture negative log likelihood} \\ \hline \# LU Layers & 2 & 3 & 5 \\ \hline Test NLL \(\downarrow\) & 3.4024 \(\pm\) 0.3403 & 2.6765 \(\pm\) 0.3607 & 1.8665 \(\pm\) 0.2875 \\ \hline \hline \# LU Layers & 8 & 12 & \\ \hline Test NLL \(\downarrow\) & 1.4633 \(\pm\) 0.2132 & 1.0848 \(\pm\) 0.3239 & \\ \hline \end{tabular}
\end{table} TABLE I: Test results of LU-Net on the two-dimensional Gaussian mixture toy problem.
parameter of 0.9. We start with a learning rate of 0.6 that decays by a factor of 0.5 every three epochs. Further, we clip the gradients to a maximal length of 1 w.r.t. the Euclidean norm, which we empirically found to stabilize the training process as well as regularize the weights. As extra loss weighting, we multiply the sum over the diagonal entries in the NLL loss function, see Eq. (9), by a factor of \(\gamma=100\), which has considerably improved the convergence speed of the training process.
#### III-C3 Results
In Tab. II we report the negative log likelihood computed by LU-Net on the MNIST and Fashion-MNIST test datasets. In Fig. 4 we provide qualitative examples of LU-Net as density estimator and in Fig. 5 as well as Fig. 6 as generator of new samples.
In terms of numerical results, we achieve adequate density estimation scores with average NLLs of \(2.75\pm 0.29\) bits/pixel and \(3.76\pm 2.46\) bits/pixel over all classes on MNIST and Fashion-MNIST, respectively. In general, the results can be considered robust for each class of both datasets. The only major deviations are related to the classes T-Shirt and Trousers of Fashion-MNIST, which can be explained by the large variety of patterns across examples and hence the increased difficulty in generative modeling, cf. also Tab. II right and Fig. 4(b). Furthermore, we notice that LU-Net is capable of assigning meaningful likelihoods, i.e. images that are more characteristic of the associated class are assigned higher likelihoods, see again Fig. 4 in particular.
With regard to LU-Net as generator, we obtain reasonable quality of the sampled images. The random samples can clearly be recognized as subsets of MNIST or Fashion-MNIST and, moreover, they can easily be assigned to the corresponding classes. However, we notice that many generated examples contain noise, most visibly for class 7 in MNIST or the classes sandal and bag in Fashion-MNIST, cf. Fig. 5(a) and Fig. 5(b), respectively. This shortcoming is not surprising since the fully connected layers of LU-Net capture fewer spatial correlations than the filter-based architectures commonly used on image data.
Another noteworthy observation refers to the learned latent space of LU-Net, i.e. the space after applying the normalizing sequence. Given the invertibility property, each latent variable represents exactly one image, which allows for traveling through the latent space and thus also its interpretation. By interpolating between two latent representations, we generally observe a smooth transition between the two corresponding images when transforming back to the original input space, see Fig. 6(a) for MNIST and Fig. 6(b) for Fashion-MNIST. This also enables visual inspection of relevant features or parts of the content associated with certain images or classes.
To conclude, we have seen that LU-Net, even with a shallow architecture, can be applied as a probabilistic generative model to images. The numerical results on these higher dimensional data indicate that the bijectivity constraint is not a significant limitation of the expressive power. Although not specifically designed to model image data, LU-Net is still capable of generating sufficiently clear images, which highlights its general purpose property as a generative model.
Fig. 4: Unseen original test samples of (a) MNIST and (b) Fashion MNIST. These images are ordered class-wise and in decreasing likelihood from left to right as estimated by LU-Net. See also Tab. II for the class names.
Fig. 5: LU-Net: Randomly generated samples of (a) MNIST and (b) Fashion MNIST. These examples are generated by sampling random numbers from a multivariate normal distribution and passing them through the inverse of LU-Net. Moreover, these images are ordered class-wise and in decreasing likelihood from left to right as estimated by LU-Net.
Fig. 6: LU-Net: Samples of (a) MNIST and (b) Fashion MNIST generated by interpolating in latent space of LU-Net. The examples in the red boxes are reconstructions from latent representations of original test data. Note that the examples are ordered randomly.
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \multicolumn{2}{|c||}{MNIST Test NLL} & \multicolumn{2}{c|}{Fashion-MNIST Test NLL} \\ \hline Class & Bits / Pixel \(\downarrow\) & Class & Bits / Pixel \(\downarrow\) \\ \hline Number 0 & 5.2647 \(\pm\) 0.0189 & T-Shirt & 6.0803 \(\pm\) 0.0321 \\ Number 1 & 5.0888 \(\pm\) 0.1320 & Trousers & 5.5130 \(\pm\) 0.0778 \\ Number 2 & 6.0978 \(\pm\) 0.0249 & Pullover & 6.1770 \(\pm\) 0.0043 \\ Number 3 & 5.0174 \(\pm\) 1.0864 & Dress & 6.0273 \(\pm\) 0.1005 \\ Number 4 & 5.0952 \(\pm\) 0.0278 & Coat & 6.1355 \(\pm\) 0.0515 \\ Number 5 & 5.4285 \(\pm\) 0.0742 & Sandal & 5.9324 \(\pm\) 0.0449 \\ Number 6 & 5.4642 \(\pm\) 0.0393 & Shirt & 6.2517 \(\pm\) 0.0437 \\ Number 7 & 5.7739 \(\pm\) 0.0040 & Sneaker & 5.6603 \(\pm\) 0.0007 \\ Number 8 & 5.4086 \(\pm\) 0.0555 & Bag & 6.3035 \(\pm\) 0.0478 \\ Number 9 & 5.0607 \(\pm\) 0.0012 & Ankle Boot & 5.8741 \(\pm\) 0.0314 \\ \hline \hline Average & 5.3682 \(\pm\) 0.1464 & Average & 5.9956 \(\pm\) 0.0386 \\ \hline \end{tabular}
\end{table} TABLE III: Class-wise negative log likelihood in bits per pixel when applying RealNVP to MNIST and Fashion-MNIST test dataset. The results are averaged over 30 runs.
### _Comparison with RealNVP_
In these final experiments we want to compare LU-Net with the popular and widely used normalizing flow architecture RealNVP [3]. To make the models better comparable, we design them to be of similar size in terms of model parameters. In more detail, we employ a RealNVP normalizing flow with 9 affine coupling layers with checkerboard mask mixing. For the coupling layers two small ResNets [36] are used to compute the scale and translation parameter, respectively. Here, every ResNet backbone consists of two residual blocks, each applying two \(3\times 3\) convolutions with 64 kernels and ReLU activations. In total this amounts to a normalizing flow with 5,388,264 weight parameters compared to 4,920,384 weight parameters with LU-Net. Finally, we conducted the same experiments as presented in Sec. III-C for RealNVP.
In Tab. III we report the negative log likelihood scores of our implemented RealNVP on the test splits of MNIST as well as Fashion-MNIST. In comparison to LU-Net, we observe more robust results but overall significantly worse density estimation performance for each class of both datasets with RealNVP. During the experiments with RealNVP, we realized that the model needs to be treated carefully, as slight modifications of the hyperparameters could quickly lead to unstable training. We ended up training the flows for 100 epochs using a small learning rate of 1e-4 that decays by a factor of 0.2 every 10 epochs.
Besides the worse performance in learning the data distributions of MNIST and Fashion-MNIST, RealNVP is furthermore computationally more expensive than LU-Net, as can be seen in Tab. IV, which compares the computational budgets. RealNVP not only requires more GPU memory than LU-Net but also considerably more time to train. The latter point can be explained by backpropagation not working as efficiently due to the deep neural networks employed in the coupling layers, which adversely affects propagating the errors from layer to layer. Moreover, RealNVP is notably slower at density evaluation, with LU-Net being 7 times faster. With regard to sampling, RealNVP is however significantly faster, by nearly 50 times. Here, we want to note that at run time linear equation systems are solved in the inverse LU-Net instead of inverting the weight matrices, cf. Sec. II-D and Appendix C, respectively. Although the inversion could be performed offline, saving a considerable amount of operations, the computation of large inverse matrices is often numerically unstable, which is highly undesirable in particular in the context of invertible neural networks, and is therefore omitted.
Lastly, we present qualitative examples of RealNVP as density estimator in Fig. 7 and as generator in Fig. 8 for MNIST and Fashion-MNIST. At first glance, it is directly noticeable that the generated images are less noisy in comparison to the images generated by LU-Net. This might be a consequence of the extensive use of convolution operations in RealNVP, which helps the model to better capture local correlations of features in images. However, the shapes of the digits and clothing articles in the images generated by our slim RealNVP implementation are still rather unnatural, which can be resolved by deeper RealNVP models [3].
## IV Conclusion and Outlook
We introduced LU-Net, a simple architecture for an invertible neural network based on the LU factorization of weight matrices combined with invertible and twice differentiable activation functions. LU-Net provides explicit likelihood evaluation and reasonable sampling quality. The execution of both tasks is computationally cheap and fast, which we demonstrated in several experiments on academic datasets.
In the future, we intend to investigate more closely the effects of the choice of the activation function. LU-Net would become even simpler if leaky ReLU activation functions could be used, which can be achieved by training adversarially. Also, we intend to revisit the universal approximation properties of LU-Nets using artificial widening via zero padding.
Fig. 8: RealNVP: Randomly generated samples of (a) MNIST and (b) Fashion MNIST. These images are ordered class-wise and in decreasing estimated likelihood from left to right.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & num weight & GPU memory & num epochs & test NLL \\ model & parameters & usage in MiB & training & in bits/pixel \\ \hline \hline LU-Net & 4.92M & 1,127 & 40 & 3.242 \\ RealNVP & 5.39M & 3,725 & 100 & 5.6819 \\ \hline \hline & train epoch & optimization & density per & sampling per \\ model & in sec & step in ms & image in ms & image in ms \\ \hline \hline LU-Net & 7.32 & 1.2 & 37.10 & 45.15 \\ RealNVP & 99.88 & 56.0 & 259.15 & 1.03 \\ \hline \end{tabular}
\end{table} TABLE IV: Model size and run time comparisons between LU-Net and RealNVP. Note that the time results are averages over 100 runs on MNIST and Fashion-MNIST. Moreover, all these tests were conducted on the same machine using an NVIDIA Quadro RTX 8000 GPU and a batch size of 128.
Fig. 7: Unseen original test samples of (a) MNIST and (b) Fashion MNIST. These images are ordered class-wise and in decreasing likelihood from left to right as estimated by RealNVP. Note that RealNVP incorrectly assigns high likelihoods to some unclear examples, and vice versa.
## Acknowledgment
This work has been funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) via the research consortium AI Delta Learning (grant no. 19A19013Q) and the Ministry of Culture and Science of the German state of North Rhine-Westphalia as part of the KI-Starter research funding program (grant no. 005-2204-0023). Moreover, this work has been supported by the German Federal Ministry of Education and Research (grant no. 01IS22069).
|
2310.00526 | Are Graph Neural Networks Optimal Approximation Algorithms? | In this work we design graph neural network architectures that capture
optimal approximation algorithms for a large class of combinatorial
optimization problems, using powerful algorithmic tools from semidefinite
programming (SDP). Concretely, we prove that polynomial-sized message-passing
algorithms can represent the most powerful polynomial time algorithms for Max
Constraint Satisfaction Problems assuming the Unique Games Conjecture. We
leverage this result to construct efficient graph neural network architectures,
OptGNN, that obtain high-quality approximate solutions on landmark
combinatorial optimization problems such as Max-Cut, Min-Vertex-Cover, and
Max-3-SAT. Our approach achieves strong empirical results across a wide range
of real-world and synthetic datasets against solvers and neural baselines.
Finally, we take advantage of OptGNN's ability to capture convex relaxations to
design an algorithm for producing bounds on the optimal solution from the
learned embeddings of OptGNN. | Morris Yau, Nikolaos Karalias, Eric Lu, Jessica Xu, Stefanie Jegelka | 2023-10-01T00:12:31Z | http://arxiv.org/abs/2310.00526v7 | # Are Graph Neural Networks Optimal Approximation Algorithms?
###### Abstract
In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems using powerful algorithmic tools from semidefinite programming (SDP). Concretely, we prove that polynomial-sized message passing algorithms can represent the most powerful polynomial time algorithms for Max Constraint Satisfaction Problems assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high-quality approximate solutions on landmark combinatorial optimization problems such as Max Cut and maximum independent set. Our approach achieves strong empirical results across a wide range of real-world and synthetic datasets against classical algorithms. Finally, we take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing dual certificates of optimality (bounds on the optimal solution) from the learned embeddings of OptGNN.
## 1 Introduction
Combinatorial Optimization is the class of problems that optimize functions subject to constraints over discrete search spaces. They are often NP-hard to solve and to approximate, owing to their typically exponential search spaces over nonconvex domains. Nevertheless, their important applications in science and engineering (Gardiner et al., 2000; Zaki et al., 1997; Smith et al., 2004; Du et al., 2017) have engendered a long history of study rooted in the following simple insight. In practice, CO instances are endowed with domain-specific structure that can be exploited by specialized algorithms (Hespe et al., 2020; Walteros and Buchanan, 2019; Ganesh and Vardi, 2020). In this context, neural networks are natural candidates for learning and then exploiting patterns in the data distribution over CO instances.
The emerging field at the intersection of machine learning (ML) and combinatorial optimization (CO) has led to novel algorithms with promising empirical results for several CO problems. However, similar to classical approaches to CO, ML pipelines have to manage a tradeoff between efficiency and optimality. Indeed, prominent works in this line of research forego optimality and focus on parametrizing heuristics (Li et al., 2018; Khalil et al., 2017; Yolcu and Poczos, 2019; Chen and Tian, 2019) or by employing specialized models (Zhang et al., 2023; Nazari et al., 2018; Toenshoff et al., 2019; Xu et al., 2021; Min et al., 2022) and task-specific loss functions (Amizadeh et al., 2018; Karalias and Loukas, 2020; Wang et al., 2022; Karalias et al., 2022; Sun et al., 2022). Exact ML solvers that can guarantee optimality often leverage general techniques like branch and bound (Gasse et al., 2019; Paulus et al., 2022) and constraint programming (Parajdis et al., 2021; Cappart et al., 2019), which offer the additional benefit of providing approximate solutions together with a bound on the distance to the optimal solution. The downside of those methods is their exponential worst-case time complexity. This makes it clear that striking a balance between efficiency and optimality is challenging, which leads us to the central question of this paper:
_Are there neural architectures for **general** combinatorial optimization that can learn to adapt to a data distribution over instances yet capture algorithms with **optimal** worst-case approximation guarantees?_
To answer this question, we build on the extensive literature on approximation algorithms and semidefinite programming. Convex relaxations of CO problems via semidefinite programming are the fundamental building block for breakthrough results in the design of efficient algorithms for NP-Hard combinatorial problems, such as the Goemans-Williamson approximation algorithm for Max Cut (Goemans & Williamson, 1995) and the use of the Lovasz theta function to find the maximum independent set on perfect graphs (Lovasz, 1979; Grotschel et al., 1981). In fact, it is known that if the Unique Games Conjecture (UGC) is true, then the approximation guarantees obtained through semidefinite programming relaxations are indeed the best that can be achieved (Raghavendra, 2008; Barak & Steurer, 2014). We will leverage these results to provide an affirmative answer to our question. Our contributions can be organized into theory and experiments.
First, on the theory side we show that a polynomial time message passing algorithm approximates the solution of an SDP with the optimal integrality gap for the class of Maximum Constraint Satisfaction Problems, assuming the UGC. The key theoretical insight is that a message-passing algorithm can be used to compute gradient updates for the augmented Lagrangian of an overparameterized reformulation of the SDP in (Raghavendra, 2008). This in turn leads to our main contribution, OptGNN, a graph neural network architecture that generalizes our message-passing algorithm and therefore captures its approximation guarantee.
Our second contribution is empirical. We show that our theoretical construction can be used directly within graph neural network pipelines for CO that are easy to implement and train. By training with the SDP objective as a loss function, OptGNN learns embeddings that can be regarded as feasible fractional solutions of an overparameterized low rank SDP, which are subsequently rounded to feasible integral solutions. On the primal side, we show OptGNN achieves strong empirical results against classical algorithms across a broad battery of datasets for landmark CO problems such as Max Cut and Vertex Cover.
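As a concrete illustration (not the exact loss of this paper, which additionally handles constraints via an augmented Lagrangian), a low-rank Max-Cut SDP objective over node embeddings and the standard hyperplane rounding could be sketched as follows.

```python
import torch
import torch.nn.functional as F

def maxcut_sdp_loss(embeddings, edge_index):
    # Low-rank Max-Cut relaxation: with unit-norm node vectors v_i, maximizing
    # the cut corresponds to minimizing the sum of <v_i, v_j> over edges.
    v = F.normalize(embeddings, dim=1)
    src, dst = edge_index
    return (v[src] * v[dst]).sum(dim=1).sum()

def hyperplane_rounding(embeddings):
    # Randomized rounding: project onto a random direction and take signs,
    # turning the fractional (vector) solution into a +/-1 assignment.
    r = torch.randn(embeddings.shape[1])
    return torch.sign(embeddings @ r)
```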
Finally, to underscore the fact that OptGNN captures powerful convex relaxations, we construct dual certificates of optimality, i.e. bounds that are provably correct, from OptGNN embeddings for the Max Cut problem that are virtually tight for small synthetic instances. See our discussion on general neural certification schemes for extracting dual certificates from OptGNN networks in Appendix B.2.
To summarize, the contribution of this paper is twofold:
* We construct a polynomial time message passing algorithm for solving the SDP of Raghavendra (2008) for the broad class of maximum constraint satisfaction problems (including Max Cut, max-SAT, etc.), which is optimal barring the possibility of significant breakthroughs in the foundations of algorithms.
* We construct graph neural architectures to capture this message passing algorithm and show that they achieve strong results against solvers and greedy algorithms.
## 2 Related Work
Optimal approximation algorithms. In theoretical computer science, it is typically quite difficult to prove that an algorithm achieves the best approximation guarantee for a given problem, as it is hard to rule out the existence of a more powerful algorithm. The Unique Games Conjecture (Khot, 2002) is a striking development in the theory of approximation algorithms because it is able to circumvent precisely this obstacle. If true, it implies several approximation hardness results which often match the approximation guarantees of the best-known algorithms (Raghavendra & Steurer, 2009b; Raghavendra et al., 2012). In fact, it implies something even stronger: there is a _general_ algorithm based on semi-definite programming that achieves the best possible approximation guarantees for several important problems (Raghavendra, 2008). Our theoretical contribution builds on these ideas to construct a neural architecture that is a candidate optimal approximation algorithm. For a complete exposition on the topic of UGC and approximation algorithms we refer the reader to Barak & Steurer (2014).
Semidefinite programming in machine learning.Semidefinite programming has already found applications in machine learning pipelines. In a similar spirit to our work, Wang et al. (2019) propose a differentiable SDP-based SAT solver based on previous works on low-rank SDPs (Wang and Kolter, 2019), while Krivachy et al. (2021) use neural networks to solve primal-dual SDP pairs for quantum information tasks. The key difference in our case is that our neural network architecture naturally aligns with solving a low-rank SDP and while our GNNs capture the properties of the SDP, in practice they can also improve upon it.
Other applications of semidefinite programming in machine learning include global minimization of functions by obtaining kernel approximations through an SDP (Rudi et al., 2020), deriving differentiable high-dimensional extensions of discrete functions (Karalias et al., 2022), and leveraging the connection between SVMs and the Lovasz theta function to efficiently find the largest common dense subgraph among a collection of graphs (Jethava et al., 2013).
Neural combinatorial optimization.Beyond semidefinite programming, prior work in the literature has examined the capabilities of neural networks to obtain solutions to combinatorial problems, including the ability of modern GNNs to achieve approximation guarantees for combinatorial problems (Sato et al., 2019) and impossibility results for computing combinatorial properties of graphs (Loukas, 2019). It was previously shown that for max-SAT, a neural network can straightforwardly obtain a 1/2-approximation (Liu et al., 2021), which is also easily obtainable through a simple randomized algorithm (Johnson, 1973). SDPs can yield at least 3/4-approximations for max-SAT (Goemans & Williamson, 1995), so our approach improves significantly over previous results. Finally, a different divide and conquer approach was proposed by McCarty et al. (2021), which uses Baker's paradigm to solve maximum independent set on geometric intersection graphs by partitioning the problem into at most a linear number (in the size of the input) of subproblems of bounded size, which allows them to use a neural network on each subproblem to obtain an approximation guarantee.
Figure 1: Schematic representation of OptGNN. During training, OptGNN produces node embeddings \(\mathbf{v}\) by calculating gradient updates in its forward pass using message passing on the input graph \(G\). These embeddings are then used to compute the augmented Lagrangian loss \(\mathcal{L}_{p}(\mathbf{v};G)\). OptGNN is trained by minimizing the average loss over the training set. At inference time, the fractional solutions (embeddings) \(\mathbf{v}\) for an input graph \(G\) produced by OptGNN are rounded using randomized rounding and a greedy heuristic.
In order to obtain exact solvers that are guaranteed to find the optimal solution, a prominent direction in the field involves combining solvers with neural networks either to provide a "warm start" (Benidis et al., 2023), or to learn branching heuristics for branch and bound (Gasse et al., 2019; Nair et al., 2020; Gupta et al., 2020; Paulus et al., 2022) and CDCL SAT solvers (Selsam and Bjorner, 2019; Kurin et al., 2020; Wang et al., 2021). Owing to their integration with powerful solvers, those ML pipelines are able to combine some of the strongest elements of classical algorithms and neural networks to obtain compelling results. Other related work in this vein includes constructing differentiable solvers and optimization layers (Wang et al., 2019; Agrawal et al., 2019), and the paradigm of neural algorithmic reasoning (Velickovic and Blundell, 2021) which focuses on training neural networks to emulate classical polynomial-time algorithms and using them to solve various combinatorial problems (Ibarz et al., 2022; Georgiev et al., 2023). Other prominent neural approaches that have achieved strong empirical results with fast execution times follow different learning paradigms including reinforcement learning (Ahn et al., 2020; Bother et al., 2022; Tonshoff et al., 2022; Barrett et al., 2022) and unsupervised learning (Min et al., 2022; Ozolins et al., 2022).
The list of works mentioned here is by no means exhaustive, for a complete overview of the field we refer the reader to the relevant survey papers Cappart et al. (2023); Bengio et al. (2021).
## 3 Optimal approximation algorithms with neural networks
This section is structured as follows. In order to build intuition, we begin the section with the example of finding the maximum cut on a graph. We show that solving the Max Cut problem using a vector (low-rank SDP) relaxation and a simple projected gradient descent scheme amounts to executing a message-passing algorithm on the graph. This sets the stage for our main result for the class of constraint satisfaction problems (CSP) which generalizes this observation. We describe a semidefinite program (SDP 1) for maximum constraint satisfaction problems that achieves the optimal integrality gap assuming the UGC. Our theorem B.1 shows that similarly to the Max Cut example, SDP 1 can be solved with message-passing (Algorithm 1). This leads us to define an overparametrized version of Algorithm 1, dubbed OptGNN. We then show that by solving SDP 1, OptGNN can achieve optimal approximation guarantees for maximum constraint satisfaction problems.
### Solving combinatorial optimization problems with message passing
To build intuition, we begin with the canonical example for the usage of semidefinite programming in combinatorial optimization: the Maximum Cut (henceforth Max Cut) problem. Given a graph \(G=(V,E)\) with vertices \(V\), \(|V|=N\), and edge set \(E\), the Max Cut problem asks for a set of nodes in \(G\) that maximizes the number of edges with exactly one endpoint in that set. Formally, this means solving the following nonconvex quadratic integer program.
\[\max_{(x_{1},x_{2},\ldots,x_{N})}\ \sum_{(i,j)\in E}\tfrac{1}{2}(1-x_{i}x_{j})\quad\text{subject to: }x_{i}^{2}=1\ \ \forall i\in[N].\tag{1}\]
The global optimum of the integer program is the Max Cut. Unfortunately the discrete variables are not amenable to the tools of continuous optimization. A standard technique is to 'lift' the problem and solve it with a rank constrained SDP. Here we introduce a matrix \(X\in\mathbb{R}^{N\times N}\) of variables, where we index the \(i\)'th row and \(j\)'th column entry as \(X_{ij}\).
\[\max_{X}\ \sum_{(i,j)\in E}\tfrac{1}{2}(1-X_{ij})\quad\text{subject to: }X_{ii}=1\ \ \forall i\in[N],\ \ X\succeq 0,\ \ \operatorname{rank}(X)=r.\tag{2}\]
The intuition is that a rank \(r=1\) solution to (2) is equivalent to solving the integer program. A common approach is to replace the integer variables \(x_{i}\) with vectors \(v_{i}\in\mathbb{R}^{r}\) and constrain \(v_{i}\) to lie on the unit sphere.
\[\min_{v_{1},v_{2},\ldots,v_{N}}\ -\sum_{(i,j)\in E}\tfrac{1}{2}(1-\langle v_{i},v_{j}\rangle)\quad\text{subject to: }\|v_{i}\|=1,\ v_{i}\in\mathbb{R}^{r}\ \ \forall i\in[N].\tag{3}\]
For \(r=\Omega(\sqrt{N})\) the global optimum of this optimization is equivalent to the standard SDP optimum with no rank constraint (Barvinok, 1995; Pataki, 1998). Burer & Monteiro (2003) proposed a fast iterative algorithm for descending this loss. The landscape of this nonconvex optimization is benign in that all local minima are approximately global minima (Ge et al., 2016), and variations on stochastic gradient descent converge to its optimum (Bhojanapalli et al., 2018; Jin et al., 2017) under a variety of smoothness and compactness assumptions. Thus, for large \(r\), simple algorithms such as block coordinate descent (Erdogdu et al., 2019) can find an approximate global optimum of the objective. For the extensive literature on low rank matrix factorization see Chi et al. (2018). However, for large \(r\), it is unclear how to transform, i.e., round, the vectors \(\mathbf{v}\) into a solution to the integral problem. Thus we need an approach that generalizes projected gradient descent and performs well for small \(r\). In iteration \(t\) (and for \(T\) iterations), projected gradient descent updates vector \(v_{i}\) in \(\mathbf{v}\) as
\[\hat{v_{i}}^{t+1}=v_{i}^{t}-\eta\sum\nolimits_{j\in N(i)}v_{j}^{t} \tag{4}\]
\[v_{i}^{t+1}=\frac{\hat{v_{i}}^{t+1}}{\|\hat{v_{i}}^{t+1}\|}, \tag{5}\]
where \(\eta\in\mathbb{R}^{+}\) is an adjustable step size and we let \(N(i)\) denote the neighborhood of node \(i\). The gradient updates to the vectors are local, i.e., each vector is updated by aggregating information from its neighboring vectors, so we can interpret the projected gradient descent steps as message-passing iterations.
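As an illustration, the iteration of equations 4 and 5 fits in a few lines of NumPy. This is a minimal sketch rather than the implementation used in this work; the step size, iteration count, and dense adjacency representation are illustrative choices.

```python
import numpy as np

def maxcut_pgd(adj, r=8, eta=0.05, iters=500, seed=0):
    """Projected gradient descent on the rank-r Max Cut relaxation (eqs. 4-5)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    v = rng.normal(size=(n, r))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    for _ in range(iters):
        # Gradient step: the gradient of the loss w.r.t. v_i is proportional to
        # sum_{j in N(i)} v_j, computed for all nodes at once as adj @ v.
        v = v - eta * (adj @ v)
        # Projection back onto the unit sphere (eq. 5).
        v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v
```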
Overparametrized message passing.Intuitively, our main contribution in this paper builds on the following observation. We may generalize the dynamics described above by considering an overparametrized version of the gradient descent updates in equations 4 and 5. Let \(\{M_{1,t}\}_{t\in[T]}\in\mathbb{R}^{r\times r}\) and \(\{M_{2,t}\}_{t\in[T]}\in\mathbb{R}^{r\times r}\) each be sets of \(T\) learnable matrices corresponding to \(T\) layers of a neural network. Then for layer \(t\) in max iterations \(T\), for embedding \(v_{i}\) in \(\mathbf{v}\), we have
\[\hat{v_{i}}^{t+1}:=M_{1,t}v_{i}^{t}-M_{2,t}\sum\nolimits_{j\in N(i)}v_{j}^{t}+b_{t}\tag{6}\] \[v_{i}^{t+1}:=\frac{\hat{v_{i}}^{t+1}}{\|\hat{v_{i}}^{t+1}\|},\tag{7}\]
where \(\{b_{t}\}_{t\in[T]}\) is a learnable affine shift. More generally, we can write the dynamics as
\[\hat{v_{i}}^{t+1} :=\text{UPDATE}(M_{1,t},v_{i}^{t},b_{t},\text{AGGREGATE}(M_{2,t},\{v_{j}^{t}\}_{j\in N(i)})) \tag{8}\] \[v_{i}^{t+1} :=\text{NONLINEAR}(\hat{v_{i}}^{t+1}), \tag{9}\]
for efficiently computable functions \(\text{UPDATE}:\mathbb{R}^{r\times r}\times\mathbb{R}^{r}\times\mathbb{R}^{r} \times\mathbb{R}^{r}\rightarrow\mathbb{R}^{r}\) and \(\text{AGGREGATE}:\mathbb{R}^{r\times r}\times\mathbb{R}^{r|N(i)|}\rightarrow \mathbb{R}^{r}\) and \(\text{NONLINEAR}:\mathbb{R}^{r}\rightarrow\mathbb{R}^{r}\). It is straightforward to see that these update dynamics can be represented by a Graph Neural Network (GNN), which we call OptGNN. It will become clear that the dynamics in equations 4 and 5 can be generalized to a large class of CO problems. Examples include vertex cover, maximum clique, and maximum constraint satisfaction problems (see additional derivations for maximum clique and vertex cover in Appendix A).
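To make the overparametrized updates of equations 6 and 7 concrete, a possible PyTorch sketch is given below. The module and argument names are ours and the dense adjacency matrix is a simplification; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class OptGNNLayer(nn.Module):
    """One overparametrized message-passing layer in the spirit of eqs. (6)-(7)."""
    def __init__(self, r):
        super().__init__()
        self.m1 = nn.Linear(r, r, bias=False)  # learnable M_{1,t}
        self.m2 = nn.Linear(r, r, bias=False)  # learnable M_{2,t}
        self.b = nn.Parameter(torch.zeros(r))  # learnable shift b_t

    def forward(self, v, adj):
        # v: (N, r) node embeddings; adj: (N, N) dense adjacency matrix.
        agg = adj @ v                           # AGGREGATE: sum over neighbours
        h = self.m1(v) - self.m2(agg) + self.b  # UPDATE, eq. (6)
        return h / h.norm(dim=1, keepdim=True)  # NONLINEAR: unit sphere, eq. (7)

class SimpleOptGNN(nn.Module):
    """T stacked layers, one set of learnable matrices per layer as in the text."""
    def __init__(self, r, T):
        super().__init__()
        self.layers = nn.ModuleList(OptGNNLayer(r) for _ in range(T))

    def forward(self, v, adj):
        for layer in self.layers:
            v = layer(v, adj)
        return v
```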
In the rest of this section, we will show that the OptGNN formulation can be used for all maximum constraint satisfaction problems (max-CSPs). Max-CSPs include Max Cut, max-SAT, etc. Moreover, we will show that OptGNN can implement a message-passing algorithm (see Appendix B.1 for a definition) for max-CSPs that can be shown to achieve optimal approximation guarantees, assuming the Unique Games Conjecture.
### Message passing for max-CSPs
Given a set of constraints over variables, Max-CSP asks to find a variable assignment that maximizes the number of satisfied constraints. Formally, a Constraint Satisfaction Problem \(\Lambda=(\mathcal{V},\mathcal{P},q)\) consists of a set of \(N\) variables \(\mathcal{V}:=\{x_{i}\}_{i\in[N]}\) each taking values in an alphabet \([q]\) and a set of predicates \(\mathcal{P}:=\{P_{z}\}_{z\subset\mathcal{V}}\) where each predicate is a payoff function over \(k\) variables denoted \(X_{z}=\{x_{i_{1}},x_{i_{2}},...,x_{i_{k}}\}\). Here we refer to \(k\) as the arity of the Max-k-CSP. We adopt the normalization that each predicate \(P_{z}\) returns outputs in \([0,1]\). We index each predicate \(P_{z}\) by its domain \(z\). The goal of Max-k-CSP is to maximize the payoff of the predicates.
\[OPT:=\max_{(x_{1},...,x_{N})\in[q]^{N}}\frac{1}{|\mathcal{P}|}\sum_{P_{z}\in \mathcal{P}}P_{z}(X_{z}), \tag{10}\]
where we normalize by the number of constraints so that the total payoff is in \([0,1]\). Therefore we can unambiguously define an \(\epsilon\)-approximate assignment as an assignment achieving a payoff of \(OPT-\epsilon\). Since our result depends on a message passing algorithm, we will need to define an appropriate graph structure over which messages will be propagated. To that end, we will leverage the constraint graph of the CSP instance. We define the constraint graph associated with a Max-CSP instance \(\Lambda\) as follows. Given a Max-k-CSP instance \(\Lambda=(\mathcal{V},\mathcal{P},q)\), a constraint graph \(G_{\Lambda}=(V,E)\) is comprised of vertices \(V=\{v_{\phi,\zeta}\}\) for every subset of variables \(\phi\subseteq z\) for every predicate \(P_{z}\in\mathcal{P}\) and every assignment \(\zeta\in[q]^{k}\) to the variables in \(z\). The edges \(E\) connect any pair of vertices \(v_{\phi,\zeta}\) and \(v_{\phi^{\prime},\zeta^{\prime}}\) such that the variables in \(\phi\) and \(\phi^{\prime}\) appear in a predicate together.
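As a small illustration of the objective in equation 10, the average payoff of an assignment can be evaluated as follows; the predicate representation is a hypothetical choice made for this sketch.

```python
import numpy as np

def average_payoff(assignment, predicates):
    """Average payoff of eq. (10).

    assignment: array of length N with values in {0, ..., q-1}.
    predicates: list of (variable_indices, payoff_fn) pairs, where payoff_fn
    maps the tuple of assigned values to a number in [0, 1].
    """
    return np.mean([fn(tuple(assignment[i] for i in idx)) for idx, fn in predicates])

# Example: Max Cut viewed as a Max-2-CSP with q = 2 and one predicate per edge.
edges = [(0, 1), (1, 2), (0, 2)]
predicates = [(e, lambda vals: float(vals[0] != vals[1])) for e in edges]
print(average_payoff(np.array([0, 1, 0]), predicates))  # 2 of 3 edges cut
```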
Optimal message passing for max-CSP.In order to construct OptGNN for this problem, we first establish that message passing can be optimal for max-CSPs. To show this, consider the SDP of Raghavendra (2008) which is known to possess the optimal integrality gap assuming the UGC. This Unique-Games-optimal program can be equivalently reformulated as described in SDP 1. Then, for Max-k-CSP we define its approximation ratio to be
\[\text{Approximation Ratio}:=\min_{\Lambda\in\text{Max-k-CSP}}\frac{OPT( \Lambda)}{SDP(\Lambda)},\]
where the minimization is being taken over all instances \(\Lambda\) with arity \(k\). The approximation ratio is always smaller than one. Similarly the integrality gap is defined to be the inverse of the approximation ratio and is always greater than one. There is no polynomial time algorithm that can achieve a superior (larger) approximation ratio assuming the truth of the conjecture. Furthermore, there is a polynomial time rounding algorithm (Raghavendra & Steurer, 2009a) that achieves the integrality gap of the SDP of (Raghavendra, 2008), and therefore outputs an integral solution with the optimal approximation ratio. Our main theoretical result is a polynomial time message passing algorithm (detailed in Algorithm 1) that solves the Unique Games optimal SDP.
**Theorem 3.1**.: _(Informal) Given a Max-k-CSP instance \(\Lambda\), there exists a message passing Algorithm 1 on constraint graph \(G_{\Lambda}\) with a per iteration update time of \(poly(|\mathcal{P}|,q^{k})\) that computes in \(poly(\epsilon^{-1},|\mathcal{P}|,q^{k},\log(\delta^{-1}))\) iterations an \(\epsilon\)-approximate solution to SDP 1 with probability \(1-\delta\). That is to say, Algorithm 1 computes a set of vectors \(\mathbf{v}\) satisfying constraints of SDP 1 to error \(\epsilon\) with objective value denoted \(OBJ(\mathbf{v})\) satisfying \(|OBJ(\mathbf{v})-SDP(\Lambda)|\leq\epsilon\)._
For the formal theorem and proof see Theorem B.1. Our algorithm is remarkably simple: perform gradient descent on the quadratically penalized objective of SDP 1. We observe that the gradient takes the form of a message-passing algorithm. For each predicate, we associate \(q^{k}\) vectors, one vector for each assignment to each subset of \(k\) variables, for a total of \(|\mathcal{P}|q^{k}\) vectors. The updates on each vector only depend on the vectors appearing in the same predicates. Therefore, if each variable \(x_{i}\) appears in no more than \(C\) predicates, every message update in the algorithm depends on no more than \(Cq^{k}\) vectors rather than the total set of \(|\mathcal{P}|q^{k}\) vectors. For Max Cut, for example, this would mean each vector corresponds to a node that is updated as a function of the vectors at adjacent vertices, as described earlier in equations 4 and 5. For more complicated CSPs this gradient iteration can be defined analogously by taking gradients of a penalized objective of the reformulated SDP 1, which is equivalent to the optimal SDP of Raghavendra (2008).
### OptGNN for maximum constraint satisfaction problems
By inspection of the gradient iteration of Algorithm 1 we see that for a Max-k-CSP instance \(\Lambda\), Algorithm 1 is a message-passing algorithm (for a precise definition see Appendix B.1) on the associated
constraint graph \(G_{\Lambda}\). This message-passing form allows us to define OptGNN, a natural GNN generalization that captures this gradient iteration.
Definition (OptGNN).: Given a Max-k-CSP instance \(\Lambda\), an OptGNN\({}_{(T,r,G_{\Lambda})}(\mathbf{v})\) is a \(T\) layer, dimension \(r\), neural network over constraint graph \(G_{\Lambda}\) with learnable matrices \(\{M_{1,t}\}_{t\in[T]}\), \(\{M_{2,t}\}_{t\in[T]}\) and stochastic affine shift \(\{b_{t}\}_{t\in[T]}\) that generalizes the gradient iteration equation 32 of Algorithm 1 with an embedding \(v\in\mathbf{v}\) for every node in \(G_{\Lambda}\) with updates of the form
\[\hat{v}_{w}^{t+1}=\text{UPDATE}(M_{1,t},v_{w}^{t},\text{AGGREGATE}(M_{2,t},\{v_{j}^{t}\}_{j\in N(w)},v_{w}^{t}))\]
\[v_{w}^{t+1}=\text{NONLINEAR}(\hat{v}_{w}^{t+1})+b_{t}\]
For arbitrary polynomial time computable functions \(\text{UPDATE}:\mathbb{R}^{r\times r}\times\mathbb{R}^{r}\times\mathbb{R}^{r}\rightarrow\mathbb{R}^{r}\), \(\text{AGGREGATE}:\mathbb{R}^{r\times r}\times\mathbb{R}^{r|N(w)|}\times\mathbb{R}^{r}\rightarrow\mathbb{R}^{r}\), and \(\text{NONLINEAR}:\mathbb{R}^{r}\rightarrow\mathbb{R}^{r}\). Here by 'generalize' we mean there exists an instantiation of the learnable parameters \(\{M_{1,t}\}_{t\in[T]}\) and \(\{M_{2,t}\}_{t\in[T]}\) such that OptGNN is equivalent to equation 32.
Naturally, we may conclude that an appropriately parameterized OptGNN solves SDP 1. To make this connection formal, we must ensure that a small amount of additive stochastic noise \(\{b_{t}\}_{t\in[T]}\) is added at each layer for the sake of theoretical gradient convergence. We then arrive at the following corollary.
Corollary 1.: Given a Max-k-CSP instance \(\Lambda\), there is an OptGNN\({}_{(T,r,G_{\Lambda})}(\mathbf{v})\) with \(T=poly(\delta^{-1},\epsilon^{-1},|\mathcal{P}|q^{k})\) layers, \(r=|\mathcal{P}|q^{k}\) dimensional embeddings, such that there is an instantiation of learnable parameters \(\{M_{1,t}\}_{t\in[T]}\) and \(\{M_{2,t}\}_{t\in[T]}\) and a random shift \(\{b_{t}\}_{t\in[T]}\) that outputs a set of vectors \(\mathbf{v}\) satisfying the constraints of SDP 1 and approximating its objective, \(OBJ_{\text{SDP}}(\Lambda)\), to error \(\epsilon\) with probability \(1-\delta\) over the randomness in \(\{b_{t}\}_{t\in[T]}\).
It is also straightforward to conclude that the rounding of Raghavendra & Steurer (2009a) achieves the integrality gap of SDP 1, and any OptGNN that approximates its solution. For the sake of completeness, we discuss the implications of the rounding. Let the integrality gap curve \(S_{\Lambda}(c)\) be defined as
\[S_{\Lambda}(c):=\inf_{\begin{subarray}{c}\Lambda\in\text{Max-k-CSP}\\ OBJ_{\text{SDP}}(\Lambda)=c\end{subarray}}OPT(\Lambda),\]
which leads to the following statement about rounding.
Corollary 2.: The OptGNN of Corollary 1, which by construction is equivalent to Algorithm 1, outputs a set of embeddings \(\mathbf{v}\) such that the rounding of Raghavendra & Steurer (2009a) outputs an integral assignment \(\mathcal{V}\) with a Max-k-CSP objective \(OBJ(\mathcal{V})\) satisfying \(OBJ(\mathcal{V})\geq S_{\Lambda}(OBJ_{\text{SDP}}(\Lambda)-\epsilon)-\epsilon\) in time \(\exp(\exp(\text{poly}(\frac{kq}{\epsilon})))\), which approximately dominates the Unique Games optimal approximation ratio.
We defer the proofs of the corollaries to subsection B.3.
Obtaining neural certificates.The goal of OptGNN is to find high quality solutions to CO problems by capturing powerful classes of convex relaxations. To underscore this point, we construct dual certificates of optimality (a proof) from the embeddings of OptGNN. The key idea is to "guess" the dual variables of SDP 1 from the output embeddings/SDP solution \(\mathbf{v}\). Since we use a quadratic penalty for constraints, the natural dual guess is one step of the augmented method of Lagrange multipliers on the SDP solution, which can be obtained straightforwardly from the overparameterized primal variables \(\mathbf{v}\). Of course, this "guess" need not be dual feasible. To handle this problem we analytically bound the error in satisfying the KKT conditions of SDP 1. See Appendix B.2 for derivations and extended discussion, and Section 4.2 for an experimental demonstration.
### OptGNN in practice
OptGNN is a neural network architecture that uses message passing to solve a low-rank semidefinite program (SDP) for a given combinatorial optimization problem. This is jointly achieved through the forward and backward pass. The message-passing steps in the forward pass are gradient updates to the node embeddings towards the direction that minimizes the augmented Lagrangian of the SDP. The backward pass aids this process by backpropagating derivatives from the augmented Lagrangian
to the parameters of the neural network. Figure 1 summarizes the OptGNN pipeline for solving CO problems. Consider the Max Cut problem as an example. Given a distribution over graphs \(\mathcal{D}\), for an input graph, OptGNN computes node embeddings \(v_{1},v_{2},\ldots,v_{N}\in\mathbb{R}^{r}\) which are then plugged into the loss function. The loss in that case would be the Lagrangian, which is calculated as \(\mathcal{L}(\mathbf{v};G)=-\sum_{(i,j)\in E}\frac{1}{2}(1-\langle v_{i},v_{j} \rangle)\) (see Appendix A for a minimum vertex cover example). The network is then trained in a completely unsupervised fashion by minimizing \(\mathbb{E}_{G\sim\mathcal{D}}[\mathcal{L}_{p}(\mathbf{v};G)]\) with a standard automatic differentiation package like PyTorch (Paszke et al., 2019).
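A minimal sketch of this unsupervised training loop for Max Cut might look as follows; `model` is assumed to be an OptGNN-style network as sketched earlier, and the data pipeline names are hypothetical.

```python
import torch

def maxcut_loss(v, edge_index):
    """L(v; G) = -sum_{(i,j) in E} (1 - <v_i, v_j>) / 2 for unit-norm embeddings."""
    src, dst = edge_index  # two LongTensors listing each undirected edge once
    return -0.5 * (1.0 - (v[src] * v[dst]).sum(dim=1)).sum()

def train(model, graphs, epochs=10, lr=1e-3):
    """`graphs` is assumed to yield (v0, adj, edge_index) triples per graph."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for v0, adj, edge_index in graphs:
            loss = maxcut_loss(model(v0, adj), edge_index)
            opt.zero_grad()
            loss.backward()
            opt.step()
```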
Our theoretical result pertains to capturing the convex relaxation of Raghavendra (2008). As a message-passing algorithm, OptGNN does not deal with the issue of rounding fractional solutions. In practice, when implementing OptGNN, we round its fractional solutions using randomized rounding. Specifically, for each node with embedding vector \(v_{i}\), its discrete assignment \(x_{i}\in\{-1,1\}\) is calculated by \(x_{i}=\text{sign}(v_{i}^{\top}y)\), for a random hyperplane vector \(y\in\mathbb{R}^{r}\). We use multiple hyperplanes to obtain multiple solutions and then pick the best one. The solution is then post-processed with a simple heuristic. This enables fast inference while also helping ensure feasibility in the case of problems with constraints.
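The randomized rounding step can be sketched as follows, omitting the greedy post-processing heuristic; the number of hyperplanes is an illustrative choice.

```python
import torch

def round_maxcut(v, edge_index, num_hyperplanes=100):
    """Round embeddings v (N x r) to a cut via random hyperplanes, keep the best."""
    src, dst = edge_index
    best_x, best_cut = None, -1.0
    for _ in range(num_hyperplanes):
        y = torch.randn(v.shape[1])          # random hyperplane normal
        x = torch.sign(v @ y)                # x_i = sign(v_i^T y)
        x[x == 0] = 1.0                      # break exact ties deterministically
        cut = 0.5 * (1.0 - x[src] * x[dst]).sum().item()
        if cut > best_cut:
            best_x, best_cut = x, cut
    return best_x, best_cut
```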
## 4 Experiments
### Comparisons with classical algorithms
We report experimental measurements of the performance of the OptGNN approach on two NP-Hard combinatorial optimization problems, _Maximum Cut_ and _Minimum Vertex Cover_. We obtain results for several datasets and compare against greedy algorithms and a state-of-the-art MIP solver (Gurobi). For details of the experimental setup see Appendix C.
Results.Overall, we may clearly see that OptGNN can consistently outperform a greedy algorithm and is competitive with Gurobi when execution time is taken into account. Table 1 presents results for the maximum cut problem. Specifically, we report the average integral cut value achieved by OptGNN and classical baselines on a variety of datasets.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Dataset & OptGNN & Greedy & Gurobi & Gurobi & Gurobi \\ & & & 0.1s & 1.0s & 8.0s \\ \hline \hline BA\({}^{\text{a}}\) (50,100) & 351.49 (18) & 200.10 & 351.87 & 352.12 & 352.12 \\ BA\({}^{\text{a}}\) (100,200) & 717.19 (20) & 407.98 & 719.41 & 719.72 & 720.17 \\ BA\({}^{\text{a}}\) (400,500) & 2197.99 (66) & 1255.22 & 2208.11 & 2208.11 & 2212.49 \\ \hline ER\({}^{\text{a}}\) (50,100) & 528.95 (18) & 298.55 & 529.93 & 530.03 & 530.16 \\ ER\({}^{\text{a}}\) (100,200) & 1995.05 (24) & 1097.26 & 2002.88 & 2002.88 & 2002.93 \\ ER\({}^{\text{a}}\) (400,500) & 16387.46 (225) & 8622.34 & 16476.72 & 16491.60 & 16495.31 \\ \hline HK\({}^{\text{a}}\) (50,100) & 345.74 (18) & 196.23 & 346.18 & 346.42 & 346.42 \\ HK\({}^{\text{a}}\) (100,200) & 709.39 (23) & 402.54 & 711.68 & 712.26 & 712.88 \\ HK\({}^{\text{a}}\) (400,500) & 2159.90 (61) & 1230.98 & 2169.46 & 2169.46 & 2173.88 \\ \hline WC\({}^{\text{a}}\) (50,100) & 198.29 (18) & 116.65 & 198.74 & 198.74 & 198.74 \\ WC\({}^{\text{a}}\) (100,200) & 389.83 (24) & 229.43 & 390.96 & 392.07 & 392.07 \\ WC\({}^{\text{a}}\) (400,500) & 1166.47 (78) & 690.19 & 1173.45 & 1175.97 & 1179.86 \\ \hline MUTAG\({}^{\text{b}}\) & 27.95 (9) & 16.95 & 27.95 & 27.95 & 27.95 \\ ENZYMES\({}^{\text{b}}\) & 81.37 (14) & 48.53 & 81.45 & 81.45 & 81.45 \\ PROTEINS\({}^{\text{b}}\) & 102.15 (12) & 60.74 & 102.28 & 102.36 & 102.36 \\ IMDB-BIN\({}^{\text{b}}\) & 97.47 (11) & 51.85 & 97.50 & 97.50 & 97.50 \\ COLLAB\({}^{\text{b}}\) & 2622.41 (22) & 1345.70 & 2624.32 & 2624.57 & 2624.62 \\ \hline REDDIT-BIN\({}^{\text{c}}\) & 693.33 (186) & 439.79 & 693.02 & 694.10 & 694.14 \\ REDDIT-M-12K\({}^{\text{c}}\) & 568.00 (89) & 358.40 & 567.71 & 568.91 & 568.94 \\ REDDIT-M-5K\({}^{\text{c}}\) & 786.09 (133) & 495.02 & 785.44 & 787.48 & 787.92 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of OptGNN, Greedy, and Gurobi 0.1s, 1s, and 8s on Maximum Cut. For each approach and dataset, we report the average cut size measured on the test slice. Here, higher score is better. In parentheses, we include the average runtime in _milliseconds_ for OptGNN.
We note that Greedy achieves poor performance compared to OptGNN and Gurobi on every dataset, indicating that for these datasets, finding the Maximum Cut is not trivial. On the worst case, WC (400,500), OptGNN achieves a cut value within 1.1% on average of Gurobi with an 8s time limit. On other datasets, OptGNN is typically within a fraction of a percent. Notably, OptGNN is within 0.1% of Gurobi 8s on all the TU datasets.
Table 2 presents the average size of the Vertex Cover achieved by OptGNN and classical baselines on our datasets. For this problem OptGNN also performs nearly as well as Gurobi 8s, remaining within 1% on the TU datasets and 3.1% on the worst case, ER (100, 200). OptGNN in certain cases outperforms Gurobi when considering execution time. Indeed, OptGNN requires only a few milliseconds to execute and is able to frequently outperform Gurobi when the latter is given a comparable time budget (0.1s). However, Gurobi with a 1s time budget is able to solve many of the datasets to optimality. While more computationally expensive than OptGNN, this constitutes only a small increase in total execution time for the entire dataset so Gurobi remains an extremely strong baseline.
### Experimental demonstration of neural certificates
In this section, we provide a simple experimental example of our neural certificate scheme on small synthetic instances. Deploying this scheme on Max Cut on random graphs, we find this dual certificate to be remarkably tight. An example can be seen in Figure 2. For \(100\) node graphs with \(1000\) edges our certificates deviate from the SDP certificate by about \(20\) nodes but are dramatically faster to produce. The runtime is dominated by the feedforward pass of OptGNN, which takes \(0.02\) seconds, versus the SDP solve time of \(0.5\) seconds with CVXPY. See Appendix B.2 for extensive discussion and additional results.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & OptGNN & Greedy & Gurobi & Gurobi & Gurobi \\ & & & 0.1s & 1.0s & 8.0s \\ \hline \hline BA\({}^{\text{a}}\) (50,100) & 42.88 (27) & 51.92 & 42.82 & 42.82 & 42.82 \\ BA\({}^{\text{a}}\) (100,200) & 83.43 (25) & 101.42 & 83.19 & 83.19 & 83.19 \\ BA\({}^{\text{a}}\) (400,500) & 248.74 (27) & 302.53 & 256.33 & 246.49 & 246.46 \\ \hline ER\({}^{\text{a}}\) (50,100) & 55.25 (21) & 68.85 & 55.06 & 54.57 & 54.67 \\ ER\({}^{\text{a}}\) (100,200) & 126.52 (18) & 143.51 & 127.83 & 123.47 & 122.76 \\ ER\({}^{\text{a}}\) (400,500) & 420.70 (41) & 444.84 & 423.07 & 423.07 & 415.52 \\ \hline HK\({}^{\text{a}}\) (50,100) & 43.06 (25) & 51.38 & 42.98 & 42.98 & 42.98 \\ HK\({}^{\text{a}}\) (100,200) & 84.38 (25) & 100.87 & 84.07 & 84.07 & 84.07 \\ HK\({}^{\text{a}}\) (400,500) & 249.26 (27) & 298.98 & 247.90 & 247.57 & 247.57 \\ \hline WC\({}^{\text{a}}\) (50,100) & 46.38 (26) & 72.55 & 45.74 & 45.74 & 45.74 \\ WC\({}^{\text{a}}\) (100,200) & 91.28 (21) & 143.70 & 89.80 & 89.80 & 89.80 \\ WC\({}^{\text{a}}\) (400,500) & 274.21 (31) & 434.52 & 269.58 & 269.39 & 269.39 \\ \hline MUTAG\({}^{\text{b}}\) & 7.79 (18) & 12.84 & 7.74 & 7.74 & 7.74 \\ ENZYMES\({}^{\text{b}}\) & 20.00 (24) & 27.35 & 20.00 & 20.00 & 20.00 \\ PROTEINS\({}^{\text{b}}\) & 25.29 (18) & 33.93 & 24.96 & 24.96 & 24.96 \\ IMDB-BIN\({}^{\text{b}}\) & 16.78 (18) & 17.24 & 16.76 & 16.76 & 16.76 \\ COLLAB\({}^{\text{b}}\) & 67.50 (23) & 71.74 & 67.47 & 67.46 & 67.46 \\ \hline REDDIT-BIN\({}^{\text{c}}\) & 82.85 (38) & 117.16 & 82.81 & 82.81 & 82.81 \\ REDDIT-M-12K\({}^{\text{c}}\) & 81.55 (25) & 115.72 & 81.57 & 81.52 & 81.52 \\ REDDIT-M-5K\({}^{\text{c}}\) & 107.36 (33) & 153.24 & 108.73 & 107.32 & 107.32 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of OptGNN, Greedy, and Gurobi 0.1s, 1s, and 8s on Minimum Vertex Cover. For each approach and dataset, we report the average Vertex Cover size measured on the test slice. Here, lower score is better. In parentheses, we include the average runtime in _milliseconds_ for OptGNN.
### Out-of-distribution generalization
It is reasonable to examine whether our model learns a general combinatorial optimization algorithm or if it merely learns to fit the data distribution. A potential proxy for this is its out of distribution generalization, i.e., its ability to perform well on data distributions that it was not trained on. We test this using a collection of different datasets.
Specifically, for each dataset in our collection, we train a model and then test the trained model on a subset of datasets in the collection. The results are shown in Table 3. It is apparent from the results that the model performance generalizes well to different datasets. Strikingly, we frequently observe that the model reaches its peak performance on a given test dataset even when trained on a different one. This suggests that the model indeed is capturing elements of a more general process instead of just overfitting to the distribution of its training data.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Train Dataset & MUTAG & ENZYMES & PROTEINS & IMDB-BIN & COLLAB \\ \hline \hline BA (50,100) & **7.74** & 20.12 & 27.66 & 17.57 & 74.15 \\ BA (100,200) & **7.74** & 20.35 & 26.03 & 16.86 & 69.29 \\ BA (400,500) & 8.05 & 21.00 & 26.54 & 17.34 & 70.17 \\ \hline ER (50,100) & **7.74** & 20.37 & 28.17 & 16.86 & 69.07 \\ ER (100,200) & 8.05 & 21.52 & 27.72 & 16.89 & 68.83 \\ ER (400,500) & 7.79 & 21.55 & 28.60 & 16.78 & 68.74 \\ \hline HK (50,100) & **7.74** & 20.42 & 25.60 & 17.05 & 69.17 \\ HK (100,200) & 7.84 & 20.43 & 27.30 & 17.01 & 70.20 \\ HK (400,500) & 7.95 & 20.63 & 26.30 & 17.15 & 69.91 \\ \hline WC (50,100) & 7.89 & **20.13** & 25.46 & 17.38 & 70.14 \\ WC (100,200) & 7.79 & 20.30 & 25.45 & 17.91 & 71.16 \\ WC (400,500) & 8.05 & 20.48 & 25.79 & 17.12 & 70.16 \\ \hline MUTAG & **7.74** & 20.83 & 26.76 & 16.92 & 70.09 \\ ENZYMES & **7.74** & 20.60 & 28.29 & 16.79 & 68.40 \\ PROTEINS & 7.89 & 20.22 & **25.29** & 16.77 & 70.26 \\ IMDB-BIN & 7.95 & 20.97 & 27.06 & **16.76** & 68.03 \\ COLLAB & 7.89 & 20.35 & 26.13 & **16.76** & **67.52** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Models for Vertex Cover trained on the dataset in each row were tested on a selection of the TU datasets (ENZYMES, PROTEINS, MUTAG, IMDB-BINARY, and COLLAB). We observe that the performance of the models generalizes well even when they are taken out of their training context.
Figure 2: Experimental comparison of SDP versus OptGNN dual certificates on random graphs of 100 nodes for the Max Cut problem. Our OptGNN certificates track closely with the SDP certificates while taking considerably less time to generate.
### Ablation
In order to better understand the effectiveness of the OptGNN architecture, we compare it against other commonly used GNN architectures from the literature that have been trained using the same loss function. We present the comparison of their performance to OptGNN for maximum cut in Table 4; please see subsection C.2 for the analogous table for minimum vertex cover. It is clear that OptGNN consistently outperforms the other architectures on every dataset that we tested. Note that while OptGNN was consistently the best model, other models were able to perform relatively well; for instance, GatedGCNN achieves average cut values within a few percent of OptGNN on nearly all the datasets (excluding COLLAB). This points to the overall viability of training using an SDP relaxation for the loss function.
## 5 Conclusion
We have presented OptGNN, a GNN that can capture provably optimal message passing algorithms for a large class of combinatorial optimization problems. OptGNN achieves the appealing combination of obtaining approximation guarantees while also being able to adapt to the data to achieve improved results. Empirically, we observed that the OptGNN architecture achieves strong performance on a wide range of datasets and on multiple problems. Since the landscape of combinatorial optimization is expansive, there remain important challenges beyond the scope of this work, such as the extension of our approach to problems with more complex constraints and objectives. OptGNN offers a novel perspective on the connections between general approximation algorithms and neural networks, and opens up new avenues for exploration. These include the design of more powerful and sound (neural) rounding procedures that can secure approximation guarantees, the construction of neural certificates that improve upon the ones we described in Appendix B.2, and the design of neural SDP-based branch and bound solvers.
## 6 Acknowledgment
The authors would like to thank Ankur Moitra, Sirui Li, and Zhongxia Yan for insightful discussions in the preparation of this work. Nikolaos Karalias is funded by the SNSF, in the context of the project "General neural solvers for combinatorial optimization and algorithmic reasoning" (SNSF grant number: P500PT_217999). Stefanie Jegelka acknowledges support from NSF award 1900933 and NSF AI Institute TILOS (NSF CCF-2112665).
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dataset & GAT & GCNN & GIN & GatedGCNN & OptGNN \\ \hline \hline ER\({}^{\text{a}}\) (50,100) & 525.92 (25) & 500.94 (17) & 498.82 (14) & 526.78 (14) & **528.95** (18) \\ ER\({}^{\text{a}}\) (100,200) & 1979.45 (20) & 1890.10 (26) & 1893.23 (23) & 1978.78 (21) & **1995.05** (24) \\ ER\({}^{\text{a}}\) (400,500) & 16317.69 (208) & 15692.12 (233) & 15818.42 (212) & 16188.85 (210) & **16387.46** (225) \\ \hline MUTAG\({}^{\text{b}}\) & 27.84 (19) & 27.11 (12) & 27.16 (13) & **27.95** (14) & **27.95** (9) \\ ENZYMES\({}^{\text{b}}\) & 80.73 (17) & 74.03 (12) & 73.85 (16) & 81.35 (9) & **81.37** (14) \\ PROTEINS\({}^{\text{b}}\) & 100.94 (14) & 92.01 (19) & 92.62 (17) & 101.68 (10) & **102.15** (12) \\ IMDB-BIN\({}^{\text{b}}\) & 81.89 (18) & 70.56 (21) & 81.50 (10) & 97.11 (9) & **97.47** (11) \\ COLLAB\({}^{\text{b}}\) & 2611.83 (22) & 2109.81 (21) & 2430.20 (23) & 2318.19 (18) & **2622.41** (22) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of various model architectures for selected datasets on Maximum Cut. Here, higher is better. GAT is the Graph Attention network (Velicković et al., 2018), GIN is the Graph Isomorphism Network (Xu et al., 2019), GCNN is the Graph Convolutional Neural Network (Morris et al., 2019), and GatedGCNN is the gated version (Li et al., 2015). |
2301.06170 | A neural network for beam background decomposition in Belle II at
SuperKEKB | We describe a neural network for predicting the background hit rate in the
Belle II detector produced by the SuperKEKB electron-positron collider. The
neural network, BGNet, learns to predict the individual contributions of
different physical background sources, such as beam-gas scattering or
continuous top-up injections into the collider, to Belle II sub-detector rates.
The samples for learning are archived 1 Hz time series of diagnostic variables
from the SuperKEKB collider subsystems and measured hit rates of Belle II used
as regression targets. We test the learned model by predicting detector hit
rates on archived data from different run periods not used during training. We
show that a feature attribution method can help interpret the source of changes
in the background level over time. | B. Schwenker, L. Herzberg, Y. Buch, A. Frey, A. Natochii, S. Vahsen, H. Nakayama | 2023-01-15T19:46:20Z | http://arxiv.org/abs/2301.06170v1 | # A neural network for beam background decomposition in Belle II at SuperKEKB
###### Abstract
We describe a neural network for predicting the background hit rate in the Belle II detector produced by the SuperKEKB electron-positron collider. The neural network, BGNet, learns to predict the individual contributions of different physical background sources, such as beam-gas scattering or continuous top-up injections into the collider, to Belle II sub-detector rates. The samples for learning are archived \(1\,\mathrm{H}\mathrm{z}\) time series of diagnostic variables from the SuperKEKB collider subsystems and measured hit rates of Belle II used as regression targets. We test the learned model by predicting detector hit rates on archived data from different run periods not used during training. We show that a feature attribution method can help interpret the source of changes in the background level over time.
keywords: Belle II, SuperKEKB, Beam background, Neural networks, Nonlinear regression, Machine learning for accelerators
## 1 Introduction
The Belle II experiment at SuperKEKB, an asymmetric electron-positron collider, aims to collect an unprecedented data set of \(50\) ab\({}^{-1}\) for high precision studies of the flavour sector and to search for physics beyond the Standard Model. SuperKEKB, located at KEK (Tsukuba, Japan), collides 7 GeV electrons with 4 GeV positrons at a center of mass energy of 10.58 GeV, which corresponds to the rest mass of the \(\Upsilon(4S)\) resonance. SuperKEKB has reached a world-record luminosity of \(4.7\times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\) for a vertical betatron function of \(\beta_{\mathrm{y}}^{*}=1.0\,\mathrm{mm}\) at the interaction point (IP) in summer 2022. In order to collect the planned data set in the next ten years, the target is to reach a peak luminosity of \(6.3\times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\) by further increasing the beam currents and reducing the beam-size at the interaction point by squeezing the betatron function down to \(\beta_{\mathrm{y}}^{*}=0.3\,\mathrm{mm}\). The Belle II experiment is a general-purpose particle detector built around the interaction point of SuperKEKB where the electrons and positrons collide. It is composed of several subsystems for tracking, particle identification and calorimetry. A detailed review of the SuperKEKB collider and its subsystems can be found in Refs. [1; 2; 3; 4]. Reviews for the design of the Belle II detector can be found in Refs. [5; 6].
Beam backgrounds seen by the Belle II detector originate from beam particles lost near the collision point. Beam particles which deviated from the nominal orbit are eventually lost by hitting the beam pipe inner wall or other machine apparatus. If their loss position is close to the interaction point, generated shower particles might reach the Belle II detector. The ionizing and non-ionizing irradiation from beam backgrounds sets limits on the lifetime of subsystems installed in the Belle II detector. Particles in background showers generate fake hits which overlay hits from triggered signal collisions posing a challenge to the event reconstruction software [7]. An efficient operation of the Belle II experiment requires to reach the target luminosity but also to keep beam backgrounds below detector limits.
In order to control and mitigate particle losses near the interaction point, a set of moveable beam collimators is installed around the high energy electron ring (HER) and the low energy positron ring (LER). There are two main types of collimators: KEKB type collimators with one jaw and SuperKEKB type collimators with two jaws. SuperKEKB type collimators can be set asymmetric,
i.e. the width of the inner jaw and the width of the outer jaw can be set differently. More details on the moveable collimators installed in SuperKEKB can be found in Ref. [2]. The collimators play a central role in the mitigation of backgrounds through stopping stray particles in the beam halo. Therefore, the monitored position of the collimator jaws relative to the beam center forms an important group of variables to predict backgrounds.
The contribution of this work is a neural network, BGNet, for the prediction of the hit rate of individual background sources in the Belle II detector. The proposed network is structured into submodels for individual background sources each exploiting heuristics from scattering theory to facilitate a physically sensible decomposition of the total hit rate. The neural network is trained on archived time series of the hit rate of a selected Belle II sub-detector as the regression target and multiple input time series of selected variables monitoring different subsystems of the SuperKEKB collider. After training, the neural network can predict all background components at any time given the values of the selected collider input variables. We propose BGNet as a diagnostic tool to provide real time predictions of background components seen in Belle II detectors during the operation of SuperKEKB.
In recent years, feature attribution methods [8; 9], like expected gradients, have emerged as a tool to quantify how much individual input variables contribute to the prediction of a model relative to a baseline. We demonstrate how expected gradients can be used to find the origin of a change in the background level over time.
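As a rough sketch of the expected gradients estimator referenced here (not the exact tooling used in this work), attributions can be estimated by averaging gradient-times-difference terms over baselines drawn from a background dataset and random interpolation coefficients:

```python
import numpy as np
import tensorflow as tf

def expected_gradients(model, x, baselines, n_samples=200, seed=0):
    """Monte Carlo estimate of expected-gradients attributions for one input x.

    attribution_j ~= E_{x' ~ baselines, a ~ U(0,1)} [ (x_j - x'_j) *
    dF/dx_j evaluated at x' + a * (x - x') ].
    """
    rng = np.random.default_rng(seed)
    attr = np.zeros_like(x, dtype=np.float64)
    for _ in range(n_samples):
        xb = baselines[rng.integers(len(baselines))]  # random baseline sample
        a = rng.uniform()                             # random interpolation point
        point = tf.convert_to_tensor((xb + a * (x - xb))[None, :], tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(point)
            out = model(point)
        grad = tape.gradient(out, point).numpy()[0]
        attr += (x - xb) * grad
    return attr / n_samples
```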
## 2 Background types
A number of different physical processes contribute to the rate of particle losses near the collision point. A detailed reference of the physical processes can be found in Ref. [10]. For the case of Belle II, the most important background sources found during previous studies, see Ref. [11], are:
_Beam-gas background._ Beam-gas background originates from the interaction between beam particles and the residual gas atoms/molecules in the evacuated beam pipe. Beam-gas Coulomb scattering changes the direction of the beam particles, and beam-gas Bremsstrahlung scattering reduces the energy of the beam particles. The beam-gas scattering rate is proportional to the vacuum pressure in the beam pipe and the beam current. Simulation studies show that the loss rate in Belle II can be reduced by vertical collimators with a narrow aperture.
_Touschek background._ Touschek scattering occurs when two particles in the same bunch approach each other closely enough that they are deflected by an angle leading to a significant transfer of momentum from a transverse to the longitudinal direction. This Coulomb scattering between two particles increases the longitudinal momentum of one particle while it decreases the longitudinal momentum of the other. After the scattering event, the energy deviation of one or both particles may be outside the energy acceptance of the collider. The Touschek scattering rate is proportional to the beam current squared and inversely proportional to the number of bunches and the beam size. It is expected that the Touschek background is sensitive to the aperture of horizontal collimators installed around the ring.
_Injection background._ Beam losses due to Touschek and beam-gas scattering limit the beam lifetime of SuperKEKB well below one hour. To allow stable operations for long periods, it is necessary to perform top-up injections following a betatron injection scheme [1] during physics data taking. The newly injected bunch is perturbed and oscillates in the horizontal plane around the main stored beam. It causes high background rates in the Belle II detector for a few milliseconds after injection. During a physics run, top-up injections with up to 25 Hz per beam are used once the beam current falls below a limit and will be paused for multiple seconds once a target current is reached. The temporary pausing of injections is indicated by a binary variable called beam gate status.
_Luminosity background._ Luminosity background originates from electron-positron collisions at the interaction point inside the Belle II detector. The dominant processes are radiative Bhabha scattering \(e^{+}e^{-}\to e^{+}e^{-}\gamma\) and the two photon process \(e^{+}e^{-}\to e^{+}e^{-}e^{+}e^{-}\). The hit rate from luminosity background is proportional to instantaneous luminosity and becomes dominant at the target luminosity of SuperKEKB, which is about 30 times higher than the record of KEKB [12].
Motivated by the theory of beam dynamics in electron positron storage rings [10], the hit rate \(\mathcal{O}\) of Belle II subsystems due to beam-gas background can be approximated by the formulae
\[\mathcal{O}_{\text{Beam-gas,H}}=S_{1}\times I_{\text{H}}P_{\text{H}}=S_{1} \times G_{1}, \tag{1}\]
\[\mathcal{O}_{\text{Beam-gas,L}}=S_{2}\times I_{\text{L}}P_{\text{L}}=S_{2} \times G_{2}, \tag{2}\]
where \(I\) is the stored beam current and \(P\) is the effective residual gas pressure seen by the beam in the center of the evacuated beam pipe. The subscripts H or L are used for variables related to the high energy electron ring (HER) or the low energy positron ring (LER) respectively. The coefficient \(S_{1}\) (\(S_{2}\)) parametrizes the sensitivity to the beam-gas background from the HER (LER). For the determination of the effective residual pressure \(P\) from the readings of the pressure gauges placed around the ring, we follow the approach described in detail in Ref. [11].
The hit rate due to Touschek scattering can be approximated by the formulae
\[\mathcal{O}_{\text{Touschek,H}}=S_{3}\times\frac{I_{\text{H}}^{2}}{n_{\text{ b,H}}\sigma_{\text{x,H}}\sigma_{\text{y,H}}\sigma_{\text{z,H}}}=S_{3}\times G_{3}, \tag{3}\]
\[\mathcal{O}_{\text{Touschek,L}}=S_{4}\times\frac{I_{\text{L}}^{2}}{n_{\text{ b,L}}\sigma_{\text{x,L}}\sigma_{\text{y,L}}\sigma_{\text{z,L}}}=S_{4}\times G_{4}, \tag{4}\]
and depends on the bunch volume \(\sigma_{\text{x}}\sigma_{\text{y}}\sigma_{\text{z}}\) and the number of bunches \(n_{\text{b}}\) stored in the collider. The coefficient \(S_{3}\) (\(S_{4}\)) is the sensitivity to the Touschek background from the HER (LER).
We model the contribution of top-up injections to the hit rate by
\[\mathcal{O}_{\mathrm{Inj,H}}=S_{5}\times G_{5}, \tag{5}\]
\[\mathcal{O}_{\mathrm{Inj,L}}=S_{6}\times G_{6}, \tag{6}\]
where \(G_{5}\) (\(G_{6}\)) is an engineered injection heuristic and \(S_{5}\) (\(S_{6}\)) describes the sensitivity of the hit rate to top-up injections into the HER (LER). We take the injection heuristic \(G_{5}\) (\(G_{6}\)) to be unity when the product of the average injected charge into the HER (LER) and the HER (LER) beam gate status is positive and zero elsewhere. The injection sensitivity \(S_{5}\) is expected to scale with the product of the repetition rate of injections \(f_{\mathrm{Rep}}\), the average injected charge \(Q_{\mathrm{Inj}}\) and the inefficiency of injections, measured as the fraction of lost charge of the injected bunch in the first 100 turns in the collider. The fraction of charge losses near Belle II will likely depend on other variables like the aperture of movable collimators. We use these expectations later by making these variables input features to the neural network model for background prediction.
The luminosity background scales linearly with the measured luminosity \(\mathcal{L}\)
\[\mathcal{O}_{\mathrm{Lumi}}=S_{7}\times\mathcal{L}=S_{7}\times G_{7}, \tag{7}\]
where \(G_{7}\) is the measured luminosity and \(S_{7}\) is the sensitivity to the luminosity background. The formula for the total predicted hit rate of the Belle II detector is
\[\mathcal{O}=\sum_{i=1}^{7}S_{i}\times G_{i}+S_{8}, \tag{8}\]
where \(S_{8}\) is a detector specific pedestal measurable when no beam is stored in the collider. For many Belle II sub-detectors, the pedestal \(S_{8}\) is stable over time.
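For illustration, the engineered heuristics \(G_{1}\) to \(G_{7}\) of Eqs. (1) to (7) could be computed from monitored values as in the following sketch; the dictionary keys are placeholders, not actual archiver channel names.

```python
import numpy as np

def engineered_features(d):
    """Compute the heuristics G_1..G_7 of eqs. (1)-(7) from monitored values."""
    g = np.zeros(7)
    g[0] = d["I_her"] * d["P_her"]                    # HER beam-gas, eq. (1)
    g[1] = d["I_ler"] * d["P_ler"]                    # LER beam-gas, eq. (2)
    g[2] = d["I_her"] ** 2 / (d["nb_her"] * d["sx_her"] * d["sy_her"] * d["sz_her"])  # eq. (3)
    g[3] = d["I_ler"] ** 2 / (d["nb_ler"] * d["sx_ler"] * d["sy_ler"] * d["sz_ler"])  # eq. (4)
    g[4] = float(d["q_inj_her"] * d["gate_her"] > 0)  # HER injection heuristic, eq. (5)
    g[5] = float(d["q_inj_ler"] * d["gate_ler"] > 0)  # LER injection heuristic, eq. (6)
    g[6] = d["lumi"]                                  # measured luminosity, eq. (7)
    return g
```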
## 3 Background prediction
In order to use Eq. (8) for hit rate prediction due to backgrounds, we need to know the value of all the variables on the right hand side. The luminosity \(\mathcal{L}\) and
all variables needed to compute the coefficients \(G_{i}\) are sampled at a frequency of \(1\,\mathrm{Hz}\) from the EPICS Archiver Appliance [13] of the Belle II slow control system [14]. Finding the correct value of the sensitivity \(S_{i}\) for each of the eight background sources is a more difficult task.
The traditional approach of the Belle II collaboration was to measure the sensitivities to beam-gas and Touschek backgrounds during dedicated background study campaigns conducted once or twice a year [11]. The idea was to store a beam only in one ring at a time and to record the decay of the beam current after pausing injections. By recording data from multiple single-beam decays, each decay differing only in the number of stored bunches, it is possible to disentangle beam-gas and Touschek backgrounds. For the same initial stored current, the density of particles per bunch and therefore the rate of Touschek scattering varies only with the number of bunches. Estimated values for the storage background sensitivities \(S_{1}\),...,\(S_{4}\) can be obtained from a least squares fit of the measured hit rates during single beam decays against a simplified background model
\[\mathcal{O}=\sum_{i=1}^{4}S_{i}\times G_{i}+S_{8}, \tag{9}\]
because top-up injections are paused during beam decays and the luminosity is zero. The sensitivity to luminosity was obtained in a second step using data with colliding beams at a luminosity of \(2.6\times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\). The idea is to fit the difference of the measured hit rate and the sum of beam-gas and Touschek backgrounds with a linear model \(S_{7}\times\mathcal{L}\). Bias from injection backgrounds was avoided by ignoring data samples during or near periods with top-up injections into either the HER or LER. Measured sensitivities were published for many Belle II sub-detectors [11; 15; 16].
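A sketch of the least squares fit of Eq. (9) on decay data is given below. The paper specifies only a least squares fit; the non-negativity constraint via `scipy.optimize.nnls` is our own assumption to keep the fitted sensitivities physical.

```python
import numpy as np
from scipy.optimize import nnls

def fit_storage_sensitivities(G, rate):
    """Fit eq. (9): rate ~ sum_i S_i * G_i + S_8 on single-beam decay data.

    G: (T, 4) array with columns G_1..G_4; rate: (T,) measured hit rate.
    """
    A = np.hstack([G, np.ones((G.shape[0], 1))])  # last column fits the pedestal S_8
    s, _ = nnls(A, rate)                          # sensitivities constrained >= 0
    return s[:4], s[4]
```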
The sensitivities extracted from background study campaigns are valuable information for the machine learning approach to background prediction. The data from the previous weeks of collider operation used to train such a model may contain insufficient information to constrain all background sensitivities. In this case it seems best to formulate the model in a way that neural network based models for background sensitivities can be initialized with values found during the latest background study day.
## 4 Neural network for background prediction
The measured detector hit rate seen at Belle II during SuperKEKB operation may deviate from the expectations derived from background study data. For example, the sensitivity to storage backgrounds may have changed because the aperture of moveable collimators was adjusted. Or the amplitude of the injection background may have worsened due to an increase in the injection repetition frequency, the injected charge per bunch, or other reasons. A large number of collider operation related diagnostics are available on the SuperKEKB archiver, timestamped with a frequency of \(1\,\mathrm{Hz}\). The diagnostic variables at any given time form a high dimensional feature space. Deep learning [17] with neural networks offers a way to learn maps from the space of collider features to the background sensitivities of Belle II from archived data.
Figure 1 shows a flow diagram of BGNet with its different submodels and their connections. At the bottom, we have the SuperKEKB database (DB) with a collection of timestamped accelerator related diagnostics. The preprocessing provides an \(m+8\) dimensional input vector \(x(t)\) with accelerator variables as input to the neural network. The preprocessing includes the selection of variables from the DB for the queried time \(t\), a time delay correction, the computation of engineered features and a scaling of all input features. The input array \(x(t)\) is fed into a neural network to compute the predicted hit rate of a Belle II sub-detector as output. The blocks \(S_{1}\) to \(S_{8}\) in Fig. 1 are fully connected feed forward networks and have a one dimensional positive output, i.e. the sensitivity for background type \(i\) given the input vector \(x\). The last eight entries in the input array are the engineered features defined in Eq. (1) to Eq. (7).
The eight sensitivities \(S_{i}(x)\) are multiplied with the eight features \(G_{i}(x)\) to yield the eight components of the background decomposition. The final step is a sum over the decomposition yielding the total predicted background as output. The background sensitivities and the decomposition are available from the intermediate network layers.
Figure 2 provides more details on the structure of the neural network. We know a priori that certain input features may be predictive for some sensitivities but cannot be causally related to others. For example the apertures of collimators in the HER ring cannot have a direct causal relation to the background sensitivity for the LER beam-gas background. In a first order approximation, it may be sufficient to predict the LER beam-gas sensitivity from the aperture of vertical LER collimators and the LER Touschek sensitivity from the aperture of horizontal collimators.
Figure 1: Flow diagram for BGNet: The output of the network is the total predicted background at a time \(t\) for a Belle II sub-detector.
In order to exploit such a priori information, we decided to create a list \(V_{i}\) of selected input features for each sensitivity network. The common structure of the sensitivity networks consists in projecting the full input array \(x\) to the \(m_{i}\) dimensional subspace of selected features \(V_{i}\) and feeding this projected vector \(x_{i}\) into a fully connected feed forward network. The networks \(S_{7}\) and \(S_{8}\) form a special case since the sensitivity to the luminosity background
and the detector noise should be constant and independent of accelerator conditions. We realize each of these networks by a single linear neuron connected to constant inputs \(x_{7}=x_{8}=1\).
The training objective for BGNet is to minimize the mean absolute error between the measured hit rate and the total predicted hit rate, i.e. the sum of all predicted background components and the detector pedestal. We use the TensorFlow/Keras machine learning framework [18; 19] to implement BGNet. The full source code for BGNet is open sourced and available online [20]. Table 1 summarizes hyperparameters of the neural network layers and the training procedure.
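The following is a hedged Keras sketch of the architecture implied by Fig. 1 and Table 1. The projection of the full input onto the selected feature subsets \(x_{1}\) to \(x_{6}\) is assumed to happen in preprocessing, and all names are illustrative rather than taken from the released code [20].

```python
import tensorflow as tf
from tensorflow.keras import layers

def sensitivity_net(m, hidden, depth, name=None):
    """Positive-output MLP for one sensitivity S_i(x_i); layer sizes as in Table 1."""
    return tf.keras.Sequential(
        [layers.InputLayer(input_shape=(m,))]
        + [layers.Dense(hidden, activation="tanh") for _ in range(depth)]
        + [layers.Dense(1, activation="softplus")],  # softplus = ln(1 + exp(x)) > 0
        name=name,
    )

def build_bgnet(m_dims=(9, 15, 8, 12, 33, 27)):
    """Sum of sensitivity-times-heuristic components plus luminosity and pedestal."""
    xs = [tf.keras.Input(shape=(m,), name=f"x{i+1}") for i, m in enumerate(m_dims)]
    gs = tf.keras.Input(shape=(len(m_dims),), name="G")  # engineered features G_1..G_6
    lumi = tf.keras.Input(shape=(1,), name="lumi")       # measured luminosity G_7
    comps = []
    for i, (x, m) in enumerate(zip(xs, m_dims)):
        s_i = sensitivity_net(m, hidden=8 if i < 4 else 32,
                              depth=2 if i < 4 else 3, name=f"S{i+1}")(x)
        g_i = layers.Lambda(lambda g, i=i: g[:, i:i + 1])(gs)
        comps.append(layers.Multiply()([s_i, g_i]))
    ones = layers.Lambda(lambda t: tf.ones_like(t))(lumi)  # constant inputs x_7 = x_8 = 1
    s7 = layers.Dense(1, use_bias=False, name="S7")(ones)  # single linear neurons
    s8 = layers.Dense(1, use_bias=False, name="S8")(ones)
    total = layers.Add()(comps + [layers.Multiply()([s7, lumi]), s8])
    return tf.keras.Model(inputs=xs + [gs, lumi], outputs=total)

model = build_bgnet()
model.compile(optimizer="adam", loss="mae")  # mean absolute error objective
```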
### Selection of training data
The data source for training BGNet is the EPICS Archiver Appliance [13]. The training setup for BGNet requires the selection of a time window, ranging from a few days to multiple weeks, and the specification of the regression target, the hit rate of a Belle II sub-detector, and a set of selected input features for sub models. After downloading the data from the EPICS Archiver Appliance, we apply a mask to select valid data for training. The mask selects data where the sub-detector was powered and operational, and a beam current \(>10\,\mathrm{mA}\) is stored in at least one ring.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Number of hidden layers & \(S_{1\text{ to }4}\): 2 & \(S_{5\text{ and }6}\): 3 \\ Number of hidden units & \(S_{1\text{ to }4}\): 8 & \(S_{5\text{ and }6}\): 32 \\ Activation function & Hidden layers: \(f(x)=\tanh(x)\) \\ & Output layer: \(f(x)=\ln(1+\exp(x))\) \\ Number of inputs & \(S_{1}\): 9, \(S_{2}\): 15, \(S_{3}\): 8, \(S_{4}\): 12, \(S_{5}\): 33, \(S_{6}\): 27 \\ Loss & Mean absolute error \\ Batch size & 32 \\ Optimizer & Adam [21] \\ Weights and bias initialization & Glorot uniform [22] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table of hyper parameters for BGNet.
Valid data was split into training and validation sets before the training. The training data is typically chosen to be the archived data from the latest few weeks of data taking with Belle II. The key idea is that the model learns by following archived time series during normal day to day operation (i.e. physics runs) and machine tuning, observing the effect of changed inputs on the measured hit rate.
### Selection of input variables
The selection of a list of input variables \(V_{i}\) for predicting the different background sensitivities is a crucial step. Currently, the choice is guided by expert opinion, seeking a plausible causal mechanism by which a SuperKEKB variable can change the sensitivity of a certain beam background. For example, the aperture of vertical LER collimators is expected to influence the LER beam-gas background sensitivity but certainly not the storage backgrounds from the HER. In this sense, the selection of input variables provides a means to bias the model to learn causally relevant patterns. Candidate variables proposed by experts were tested by Bayesian hyperparameter optimization and by ranking them with feature attribution methods, see section 8. The number of selected input variables is shown in Tab. 1.
### Preprocessing of data
All input variables and the measured hit rate are scaled by subtracting the median and dividing by the range between the 90th and 10th percentiles. The centering of values was omitted for the measured hit rate and the engineered features \(G_{1}\) to \(G_{8}\). Using the scaled measured hit rate for computing the training loss improves the convergence of training, since the raw hit rates from the archiver can be very large numbers. The center and scale parameters are computed on valid training data and are deployed along with the trained model.
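A minimal sketch of this scaling, assuming the data is available as numpy arrays, is given below; the `center` switch reflects the omitted centering for the hit rate and the features \(G_{1}\) to \(G_{8}\), and all variable names are illustrative.

```python
import numpy as np

def fit_scaler(train_cols, center=True):
    """Median/percentile-range scaler, fitted on valid training data only."""
    med = np.median(train_cols, axis=0)
    p10, p90 = np.percentile(train_cols, [10, 90], axis=0)
    scale = np.where(p90 - p10 > 0, p90 - p10, 1.0)  # guard zero ranges
    if not center:  # hit rate and G_1..G_8: scale only, no centering
        med = np.zeros_like(med)
    return med, scale

def transform(cols, med, scale):
    return (cols - med) / scale

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))       # placeholder archiver features
X_test = rng.normal(size=(200, 5))
med, scale = fit_scaler(X_train)
X_train_s = transform(X_train, med, scale)
X_test_s = transform(X_test, med, scale)   # deploy med/scale with the model
```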
The last step of the preprocessing is the correction of an empirically observed time delay between the measured hit rate \(\mathcal{O}(t)\) used as regression target and the accelerator input features \(x(t)\). We parametrize the delay correction by a
single delay shift \(d=i+w\), given in seconds, where \(i\) is the integer part and \(w\) is the fractional part of \(d\). The array used as input by the network to predict the hit rate at time \(t\) is the weighted mean of the inputs at \(i\) and \(i+1\) seconds in the past.
\[x^{\prime}(t)=(1-w)\cdot x(t-i)+w\cdot x(t-(i+1)) \tag{10}\]
The variable \(w\) is used as a weight to interpolate between two neighboring samples. The delay shift is treated as a hyperparameter of the model, and the search for its optimal value is discussed in the results section.
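The correction of Eq. (10) amounts to a linear interpolation between two integer shifts of the 1 Hz input array; a possible sketch follows, where the edge handling at the start of the array is an assumption.

```python
import numpy as np

def delay_inputs(x, d):
    """x: (T, n_features) sampled at 1 Hz; d: delay shift in seconds."""
    i = int(np.floor(d))
    w = d - i
    x_i = np.roll(x, i, axis=0)        # x(t - i)
    x_i1 = np.roll(x, i + 1, axis=0)   # x(t - (i + 1))
    x_shifted = (1.0 - w) * x_i + w * x_i1
    x_shifted[: i + 1] = x[0]          # crude padding at the array start
    return x_shifted

x = np.random.default_rng(1).normal(size=(100, 3))
x_delayed = delay_inputs(x, d=2.4)     # 0.6 * x(t-2) + 0.4 * x(t-3)
```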
### Model training
During the operation of the collider we have moments where top-up injection is paused in both rings. During those moments, the measured hit rate is caused only by storage backgrounds, luminosity background and the detector pedestal. We find such decay data by looking only at moments where the product of the beam gate status and the injected charge is zero for HER and LER, i.e. \(G_{5}=0\) and \(G_{6}=0\). As can be seen from Fig. 1, this implies that injection backgrounds cannot contribute to the output of BGNet. The fraction of decay data varies from a few percent on some days up to 20% on others. In order to disentangle the contributions of injection backgrounds on the one hand and storage plus luminosity on the other hand, it is important that the model describes decay data and non-decay data equally well. We apply random oversampling [23] to balance the ratio between decay and non-decay samples in the training data. We randomly select and duplicate examples of decay data to achieve a balanced 50:50 ratio between decay data and non-decay data during the training. The oversampling of decay data is applied day by day to make sure the 50:50 balance is achieved for every day.
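A sketch of the day-by-day oversampling is shown below, assuming a boolean decay mask derived from \(G_{5}=0\) and \(G_{6}=0\) and per-day index arrays; all variable names are illustrative.

```python
import numpy as np

def oversample_decay(indices_by_day, is_decay, rng):
    """indices_by_day: dict day -> array of sample indices for that day."""
    balanced = []
    for day, idx in indices_by_day.items():
        decay = idx[is_decay[idx]]
        other = idx[~is_decay[idx]]
        if len(decay) == 0 or len(other) == 0:
            balanced.append(idx)
            continue
        extra = rng.choice(decay, size=max(len(other) - len(decay), 0),
                           replace=True)            # duplicate decay samples
        balanced.append(np.concatenate([idx, extra]))
    return np.concatenate(balanced)

rng = np.random.default_rng(0)
is_decay = rng.random(2000) < 0.1                   # ~10% decay data (toy)
days = {0: np.arange(0, 1000), 1: np.arange(1000, 2000)}
train_idx = oversample_decay(days, is_decay, rng)   # ~50:50 per day
```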
Disentangling the contribution of beam-gas, Touschek and luminosity backgrounds to the total background is a challenge. During normal collider operation, the fraction of single beam data is small and most decay data is sampled during few second pauses of top-up injections in physics runs. In other words,
the data may not constrain all individual sensitivities for all accelerator conditions encountered in the training data. One countermeasure is to initialize the output neurons of the sensitivity networks well, by choosing the bias of the final dense layer in each sensitivity network \(S_{i}\) such that the output of the layer equals the sensitivity measured during the last background study day. For \(S_{7}\), the luminosity sensitivity model, we found that fixing the output to the sensitivity value extracted from the latest study day achieved the most consistent results.
### Hardware requirements
Training the neural network on an AMD Ryzen 5 2600, a CPU with 6 cores at 3.40GHz clock speed, reaches an average of 16,400 samples trained per second. With a training dataset spanning three weeks, consisting of 540,000 data points, this results in about 30 s per epoch. A full training, consisting of 70 epochs, takes around 35 min. Using the trained model to make predictions on archived data on the same hardware results in 27,500 predicted samples per second.
## 5 Delay optimization
The delay correction is parameterized by a number called delay shift \(d\). The fractional part of \(d\) interpolates between shifting the input array \(x\) by full seconds. For example, a delay shift of 2.4 means we use the weighted average \(0.6\cdot x(t-2s)+0.4\cdot x(t-3s)\) as input for estimating the backgrounds at time \(t\).
The impact of the delay shift is especially noticeable during top-up injections in physics runs. The pausing of top-up injections for multi-second time intervals causes injection spikes in the measured hit rate. Figure 3 shows four such injection spikes recorded by the VXD beam abort diamond detector system [24]. A delay shift between the measured hit rate and the input array, in particular the beam gate status variable, leads to a phase shift between the predicted and measured hit rate. This effect can be seen in Fig. 3, where two BGNet models were trained for different values of the delay shift.
Figure 4: Network performance during physics runs as a function of the input delay shift, for the VXD diamond at \(\phi=325\) in the backward direction, mounted on the beam pipe.
Figure 3: Measured hit rate for a VXD diamond and the predictions of two BGNet models trained with different delay shifts in data preprocessing. The time interval shown is from June 10, 2022.
The performance of a given delay shift can therefore be evaluated by the prediction error during physics runs. The performance metric is the mean absolute difference between predicted and measured hit rate, normalized to the measured hit rate. Using this characterization one can perform an optimization of the delay shift. This optimization is done in three stages. In the first stage, a grid search over a range of integer delay shifts is performed. In the second and third stage, the delay shift is further optimized by doing grid searches in smaller increments around the best result from the previous stage. The best performance is given by the delay shift with the smallest error on top-up injections. An illustration of such an optimization can be seen in Fig. 4.
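The staged search can be written compactly; in the sketch below, `evaluate` is assumed to train a model for a given delay shift and return the normalized mean absolute prediction error on top-up injection data, and the grid ranges and step sizes are illustrative.

```python
import numpy as np

def optimize_delay(evaluate, d_min=0.0, d_max=10.0):
    """Three-stage grid search: integer grid, then two refinements."""
    best = min(np.arange(d_min, d_max + 1.0, 1.0), key=evaluate)  # stage 1
    for step in (0.2, 0.05):                         # stages 2 and 3
        grid = np.arange(best - 5 * step, best + 5 * step + 1e-9, step)
        best = min(grid, key=evaluate)
    return best

# toy stand-in for the real evaluation (true optimum at d = 3.4)
d_opt = optimize_delay(lambda d: (d - 3.4) ** 2)
print(round(float(d_opt), 2))
```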
We find different optimal delay shifts for different Belle II sub-detectors. The optimal delay shift is \(1\,\mathrm{s}\) for the beam abort diamonds installed around the vertex detector (VXD), \(5.5\,\mathrm{s}\) for the Central Drift Chamber (CDC) chamber current and \(3.4\,\mathrm{s}\) for hit rates from Time-of-Propagation (TOP) detector photo multiplier tubes. We find no significant changes of the delay shifts during the run periods from 2020 to summer 2022.
## 6 Testing the background decomposition
The CDC [5] in Belle II provides a good example to experimentally study the quality of the background decomposition. The hit rate in the CDC volume is monitored online by the chamber current: the current over all wires per layer averaged over all 56 CDC layers. The chamber current is sampled with \(1\,\mathrm{Hz}\) frequency. In addition, the CDC has a fast current logger that samples the chamber current \(I_{\mathrm{Log}}\) of a sector of the fourth CDC layer at a kilohertz rate. This sampling rate is high enough to resolve spikes from individual top-up injections during physics runs. Figure 5 shows an oscilloscope display of the fast current logger during a 3-second time window in June 2022. The figure shows multiple background spikes from injections into the LER at a repetition frequency of \(12.5\,\mathrm{Hz}\), injecting a charge of \(1.16\,\mathrm{nC}\) per bunch. During this period, interleaved 1-bunch and 2-bunch injection into the LER was enabled and every second peak
corresponds to the combined injection background from two bunches injected with a spacing of a few nanoseconds. This is followed by a period of injections into both the HER and LER during physics data taking. During that time, the HER injection follows a 1-bunch injection scheme at a repetition frequency of 25 Hz, injecting a charge of 3.21 nC per bunch. The achieved beam current during that time is 955 mA (1200 mA) for the HER (LER), and the peak luminosity is \(3.46\times 10^{34}\) cm\({}^{-2}\) s\({}^{-1}\). After each injection, the chamber current decays during a few milliseconds to a baseline chamber current from combined storage and luminosity backgrounds. The baseline shift observed in Fig. 5 most likely results from not fully decayed injection spikes due to decay time and asynchronous injections into the LER and HER. The fast current logger provides the minimum chamber current \(I_{\mathrm{Log}}^{\mathrm{min}}\) during one second to the EPICS Archiver Appliance. The minimum chamber current \(I_{\mathrm{Log}}^{\mathrm{min}}\) serves as an estimate for the sum of luminosity and storage backgrounds. In order to compare it with the total CDC chamber current, it must be multiplied by a factor of four. This empirical factor arises from the fact that only a small sector of the CDC is taken into account for \(I_{\mathrm{Log}}\) measurements.
Figure 5: Fast CDC current logger resolves background spikes due to individual injections into the HER and LER. The figure shows multiple background spikes from injections into the LER followed by asynchronous injections into both the HER and LER during physics data taking. The timestamp of the data is 20:52:39 on June 12, 2022 (JST).
Figure 6 shows a comparison of \(I_{\text{Log}}^{\text{min}}\) and the predictions of BGNet trained on the total CDC chamber current using training data covering June 2021. The figure shows the predicted injection backgrounds along with a sum over all storage and luminosity backgrounds for a 4 min time window during a physics run. The \(I_{\text{Log}}^{\text{min}}\) agrees well with the BGNet prediction for the sum of storage and luminosity backgrounds. In other words, the BGNet model learned an accurate decomposition between injection and non-injection backgrounds even though it only used inputs sampled at 1 Hz frequency. The histogram in Fig. 6 shows that the mean and the standard deviation of relative error during June 2021 are well below 10%. The figure also confirms that injection backgrounds quickly decay to zero whenever injections are paused. The measured chamber current during such pauses provides an estimate for the sum of storage and luminosity backgrounds.
Figure 6: Left: Top-up injections in a 4 min time window. The storage, luminosity and pedestal components are combined and displayed together with the injection background components of HER and LER rings. The fast CDC current logger variable is used to gauge the ability of BGNet to differentiate between the sum of storage and luminosity backgrounds and injection backgrounds. Right: Histogram of the relative error between the fast CDC current logger variable and the predicted storage and luminosity backgrounds plus the pedestal over the month of June 2021.
## 7 Extrapolation with BGNet
In order to be useful as a real-time diagnostic tool, BGNet needs to provide accurate predictions on incoming data beyond the time window used for training. Figure 7 contains comparisons between BGNet-predicted and measured hit rates for the CDC chamber current. BGNet was trained with archived hit rates on the first half of June 2021 at the nominal center-of-mass energy of \(10.58\,\mathrm{GeV}\), including a background study conducted on June 16, 2021. The trained model was applied to predict the backgrounds during the full month, containing an off-resonance energy scan from June 19 to 24 which was followed by data taking at the nominal center-of-mass energy. Table 2 gives the mean and \(\sigma_{68}\) of the relative prediction error separately for data from the first half of June, the off-resonance scan and the remainder of June. Background sensitivities are underestimated (biased) by 14% during the off-resonance scan but the bias reduces to 4% on test data once data taking at the nominal center-of-mass energy is resumed. Fig. 8 shows two 10-minute time windows of physics data taking on June 15 (inside the train time window) and June 24 (inside the test time window) with \(1\,\mathrm{Hz}\) frequency. The timing and duration of backgrounds from top-up injections is well described in both time windows. In between the injection spikes, the sum of storage backgrounds and luminosity background agrees well with the measured data. The main discrepancy is an underestimation of the background amplitude from top-up injections on June 24. A similar analysis for the off-resonance data shows that the observed bias of 14% originates mostly from an underestimation of the HER injection background amplitude. There is an ongoing effort to improve predictions of injection backgrounds in the future by tuning the set of injection related input variables.
\begin{table}
\begin{tabular}{c|c c} & Mean \(\frac{\mathcal{O}_{\text{obs}}-\mathcal{O}_{\text{pred}}}{\mathcal{O}_{\text{obs}}}\) & \(\sigma_{68}\left(\frac{\mathcal{O}_{\text{obs}}-\mathcal{O}_{\text{pred}}}{\mathcal{O}_{\text{obs}}}\right)\) \\ \hline Train & -0.0012 & 0.08 \\ Off-resonance & 0.14 & 0.13 \\ Test & 0.04 & 0.15 \\ \end{tabular}
\end{table}
Table 2: Performance metrics for BGNet on the three time windows (Train, Off-Resonance, Test) defined in Fig. 7. The first column gives the mean relative prediction error and the second column the range of the central 68% of relative errors.
Figure 7: Stack plot showing the predicted decomposition and the measured CDC chamber current during June 2021. Training samples were drawn from June 1 to midnight of June 16 and the model was tested on the remainder of the month. From June 19 to June 24 the collider conducted an off-resonance scan.
Figure 8: Comparison of measured and predicted background CDC hit rates during two time windows in June 2021. After training with archived data from the first half of June 2021, the model was applied to predict backgrounds on the remainder of June 2021. We show time windows in the training data and 8 days after the end of the training data.
## 8 Towards explaining backgrounds
BGNet learns surrogate models for the background sensitivities to all background types. So far, we provided case studies showing that the sensitivity models provide an accurate prediction of background levels. Here, we address the question of whether we can analyze the BGNet model to understand which input features are responsible for an observed change of the background level. To address this question, we look at archived data from the beginning of June 2022 where the origin of a background change is experimentally well known.
Since BGNet already predicts the total hit rate as a sum over predictions for all background types, looking at the decomposition of background predictions already offers some insight. Figure 9 shows the background decomposition during the evening on June 1, 2022 made by a model trained on the data from May 27, 2022 to June 23, 2022. A beam abort happened at 18:40. During the next runs after the abort the measured CDC chamber current is much reduced. The predictions for the LER beam-gas and LER injection background show the same reduction. The figure also highlights a scan of the D06V1 collimator aperture used to experimentally locate the origin of the background reduction.
Further information can be gained by applying feature attribution to each individual background sensitivity network \(S_{i}(x)\). As the sensitivity models are real-valued and differentiable, we can directly apply the method of expected gradients [9]. The expected gradients attribution for the \(j\)th input feature of the sensitivity model \(S_{i}\) is computed as
\[\phi_{ij}(x)=\int_{x^{\star}}\left((x_{j}-x_{j}^{\star})\times\int_{\alpha=0}^ {1}\frac{\partial S_{i}(x^{\star}+\alpha(x-x^{\star}))}{\partial x_{j}}d\alpha \right)p(x^{\star})dx^{\star}, \tag{11}\]
where \(x\) is the test sample to be explained and \(x^{\star}\) is a reference sample drawn from any user-defined distribution \(p(x^{\star})\) over the data. In order to explain changes in background hit rates over time, we are using a uniform distribution over a reference time window. The inner integral in Eq. (11) integrates the partial derivative of the sensitivity along a straight line connecting the point \(x\) with the
reference point \(x^{\star}\) in the feature space. The attribution value quantifies how much the selected feature contributes to the difference between \(S_{i}(x)\) and the expectation value \(\langle S_{i}\rangle\) over the distribution \(p(x^{\star})\) of reference samples. We use the Python library _Path Explain_ [25] to compute attribution values.
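For illustration, Eq. (11) can be approximated by Monte Carlo sampling of reference points and interpolation coefficients. The paper uses the _Path Explain_ library [25]; the sketch below is an independent, simplified implementation with a toy model, not the library's API.

```python
import numpy as np
import tensorflow as tf

def expected_gradients(model, x, refs, n_samples=200, seed=0):
    """Monte Carlo estimate of Eq. (11) for a single test point x."""
    rng = np.random.default_rng(seed)
    x = tf.constant(x[None, :], tf.float32)
    attr = np.zeros(int(x.shape[1]), np.float32)
    for _ in range(n_samples):
        x_ref = tf.constant(refs[rng.integers(len(refs))][None, :], tf.float32)
        alpha = rng.random()                      # uniform alpha in [0, 1)
        point = x_ref + alpha * (x - x_ref)
        with tf.GradientTape() as tape:
            tape.watch(point)
            s = model(point)
        grad = tape.gradient(s, point)[0].numpy()
        attr += (x - x_ref)[0].numpy() * grad     # (x_j - x*_j) * dS_i/dx_j
    return attr / n_samples

# toy sensitivity model and reference window, for illustration only
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1, activation="softplus"),
])
refs = np.random.default_rng(1).normal(size=(50, 4)).astype(np.float32)
phi = expected_gradients(model, refs[0] + 1.0, refs)
```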
This method gives a handle to understand why the model predicts a reduction in the LER beam-gas hit rate in the runs after the beam abort. We use a uniform distribution \(p(x^{\star})\) over a time window before the beam abort. We compute attributions using test data points sampled uniformly from runs after the beam abort. Figure 10 shows a summary of the attributions computed for the LER beam-gas sensitivity \(S_{3}(x)\) model using Eq. (11). Each point represents a test sample, its color gives the value of the feature and its position on the x-axis gives the attribution value. The attribution value tells us how much this feature contributed to the reduction of the LER beam-gas sensitivity after the beam abort compared to the sensitivity before the beam abort. The features are ranked according to the mean absolute attribution value over all tested data points.
Figure 9: Stack plot showing the predicted decomposition and the measured CDC chamber current. The peak of the chamber current corresponds to the opening of the collimator between 19:23 and 19:25, and closing between 19:25 and 19:30.
Figure 10 shows that the LER beam-gas sensitivity model largely attributes the reduction in the sensitivity to a change of the aperture of the collimator bottom jaw: the LER beam-gas background was reduced by opening of the D06V1 collimator. The D06V1 collimator scan further reinforces this conjecture, as adjustments in the aperture coincide with large changes in the measured hit rate. Figure 11 shows the behaviour of the predicted beam-gas sensitivity for different collimator apertures.
## 9 Conclusion
We introduced a novel neural network based model, BGNet, to predict the beam-induced background hit rate in Belle II sub-detectors. The neural network model architecture uses the particle scattering formulae and domain knowledge to explicitly decompose the hit rate of the Belle II detector into a sum of contributions for different background types. The BGNet model predicts the sensitivity to a background type by a fully connected neural network using a selection of diagnostic variables provided by the collider subsystems as input. All input variables are archived on the EPICS Archiver Appliance at a 1 Hz sample frequency. The role of these neural networks is to provide a flexible parametrization of how different SuperKEKB collider parameters affect the Belle II sensitivity to a specific type of background. After training the model on archived samples from the last few weeks of collider operation, the model is able to detect the most crucial parameters to be adjusted for background mitigation and collider performance improvement.

Figure 10: Summary of the top eight features and their attribution values for the LER beam-gas sensitivity. Input variables ’VALCLM_*{TOP,BTM}_CSS_DIF_POS’ refer to the monitored aperture of the top or bottom jaw of vertical collimators in different sections of the LER ring. Variables ’BMLXRM_*_EMITT{X,Y}’ refer to readings from the X-ray monitor for the LER beam emittance. Variables ’BML_*_POS.PYP’ refer to the vertical beam position in different parts of the superconducting final focusing magnets.
A strength of the BGNet model is its ability to disentangle the contribution to the hit rate from storage beam losses against contributions from top-up injections into the HER or LER. This was achieved by modulating the predicted injection background amplitude by the beam gate status input variable that indicates if top-up injections are paused. In the case of the CDC, we could explicitly test our decomposition based on 1 Hz inputs with a direct measurement of the fast CDC current logger, a detector measuring the background-induced chamber current in a sector of the CDC at a sampling interval of 1 ms.
We provide a case study to show that BGNet is able to learn which movable collimators affect the sensitivity to storage and top-up injection induced beam losses. In this example, we demonstrated how feature attribution methods can be used to find the most relevant change of an input feature for explaining a change in the model prediction. The example illustrates how machine learning can provide insights for background mitigation and control. We plan to integrate the neural network based background decomposition into the SuperKEKB background monitor panel. A display showing the real-time background decomposition at Belle II can be used by SuperKEKB machine operators to move collimators and tune top-up injections into SuperKEKB.

Figure 11: Scatter plot of the predicted LER beam-gas sensitivity and the aperture of the bottom jaw of the D06V1 collimator on samples from June 1, 2022 between 18:20 and 19:45.
## Acknowledgements
The authors are grateful to Belle II and SuperKEKB colleagues for their hard work and contribution. We acknowledge the financial support by the Federal Ministry of Education and Research of Germany. This work was supported by the U.S. Department of Energy (DOE) via Award Number DE-SC0010504 and via U.S. Belle II Operations administered by Brookhaven National Laboratory (DE-SC0012704).
|
2301.02316 | Neural Network Adaptive Control with Long Short-Term Memory | In this study, we propose a novel adaptive control architecture, which
provides dramatically better transient response performance compared to
conventional adaptive control methods. What makes this architecture unique is
the synergistic employment of a traditional, Adaptive Neural Network (ANN)
controller and a Long Short-Term Memory (LSTM) network. LSTM structures, unlike
the standard feed-forward neural networks, can take advantage of the
dependencies in an input sequence, which can contain critical information that
can help predict uncertainty. Through a novel training method we introduced,
the LSTM network learns to compensate for the deficiencies of the ANN
controller during sudden changes in plant dynamics. This substantially improves
the transient response of the system and allows the controller to quickly react
to unexpected events. Through careful simulation studies, we demonstrate that
this architecture can improve the estimation accuracy on a diverse set of
uncertainties for an indefinite time span. We also provide an analysis of the
contributions of the ANN controller and LSTM network to the control input,
identifying their individual roles in compensating low and high-frequency error
dynamics. This analysis provides insight into why and how the LSTM augmentation
improves the system's transient response. The stability of the overall system
is also shown via a rigorous Lyapunov analysis. | Emirhan Inanc, Yigit Gurses, Abdullah Habboush, Yildiray Yildiz, Anuradha M. Annaswamy | 2023-01-05T22:22:50Z | http://arxiv.org/abs/2301.02316v1 | # Neural Network Adaptive Control with Long Short-Term Memory
###### Abstract
In this study, we propose a novel adaptive control architecture, which provides dramatically better transient response performance compared to conventional adaptive control methods. What makes this architecture unique is the synergistic employment of a traditional, Adaptive Neural Network (ANN) controller and a Long Short-Term Memory (LSTM) network. LSTM structures, unlike the standard feed-forward neural networks, can take advantage of the dependencies in an input sequence, which can contain critical information that can help predict uncertainty. Through a novel training method we introduced, the LSTM network learns to compensate for the deficiencies of the ANN controller during sudden changes in plant dynamics. This substantially improves the transient response of the system and allows the controller to quickly react to unexpected events. Through careful simulation studies, we demonstrate that this architecture can improve the estimation accuracy on a diverse set of uncertainties for an indefinite time span. We also provide an analysis of the contributions of the ANN controller and LSTM network to the control input, identifying their individual roles in compensating low and high-frequency error dynamics. This analysis provides insight into why and how the LSTM augmentation improves the system's transient response. The stability of the overall system is also shown via a rigorous Lyapunov analysis.
## I Introduction
Although Neural Networks (NNs) are a fairly old concept [1], cheap and fast parallel computing unlocked their potential and led to their current predominance in artificial intelligence and machine learning [2]. Computer vision and natural language processing are examples of fields that benefited greatly from these developments [3, 4]. Deep learning research produced derivatives of recurrent neural networks (RNNs) such as long short-term memory (LSTM) [5] and the gated recurrent unit [6], which are powerful structures for processing sequential data [7]. Reinforcement learning (RL) is another concept in machine learning that made advancements through applications of deep learning [8]. In RL, there exists a state that is updated with respect to an action selected by an agent that makes its choices based on observations to maximize its cumulative reward. A similarity can be drawn between control systems and RL, where states, control inputs and feedback connections are parallel to states, actions, and observations, respectively. There are studies that take advantage of this fact to design RL-based controllers [9, 10].
The potential shown by earlier applications provides an incentive to utilize NNs in adaptive control. The literature on this topic is extensive and well established [11, 12, 13, 14, 15]. Using an online-tuned feed-forward NN controller with adaptive update laws is proven to be stable and can successfully compensate for the uncertainties [14]. However, if there are sudden changes in the uncertainty, the transient response can be oscillatory, which results in poor performance. This problem is addressed in [16] with the addition of external memory similar to a Neural Turing Machine's (NTM) [17]. However, unlike the common practice in NTMs, a feed-forward network, instead of an RNN, is used in [16]. Feed-forward networks that have access to only the current state cannot take full advantage of the dependencies in a sequence. Instead, using a recurrent structure with internal memory can increase the capability of sequence estimation and thus improve performance [18].
To address the above mentioned issues, we propose a novel control architecture that consists of an Adaptive Neural Network (ANN) controller and an LSTM. The purpose of the LSTM is to take advantage of the long and short-term dependencies in the input sequence to improve the transient response of the controller. Specifically, we train the LSTM in such a way that it learns to compensate for the inadequacies of the ANN controller in response to sudden and unexpected uncertainty variations. Offline training in closed-loop systems is challenging since the trained element affects the system dynamics. This closed loop structure similarly exists in RL applications and is addressed by previous studies [19, 20]. Inspired by these methods, we train the LSTM in a closed-loop setting to predict and compensate for the undesired transient error dynamics. We demonstrate via simulations that thanks to its predictive nature, LSTM offsets high-frequency errors, and thus complements the ANN controller, where the latter helps with handling the low-frequency dynamics.
To summarize, the contribution of this paper is a novel adaptive control framework that provides enhanced transient performance compared to conventional approaches. This is achieved by making the traditional ANN controller work in collaboration with an LSTM network that is trained in the closed-loop system.
In Section II, we describe the formulation of the ANN controller. In Section III, the proposed LSTM augmentation and the training method are explained, together with a stability analysis. Simulation results are given in Section IV and a summary is given in Section V.
## II Problem Formulation
Consider the following plant dynamics
\[\dot{x}_{p}(t)= A_{p}x_{p}(t)+B_{p}(u(t)+f(x_{p}(t))), \tag{1a}\] \[y_{p}(t)= C_{p}^{T}x_{p}(t), \tag{1b}\]
where \(x_{p}(t)\in\mathbb{R}^{n_{p}}\) is the accessible state vector, \(u(t)\in\mathbb{R}^{m}\) is the plant control input, \(A_{p}\in\mathbb{R}^{n_{p}\times n_{p}}\) is a known system matrix, \(f(x_{p}(t)):\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{m}\) is a state-dependent, possibly nonlinear, matched uncertainty, \(B_{p}\in\mathbb{R}^{n_{p}\times m}\) is a known control input matrix, and \(C_{p}\in\mathbb{R}^{n_{p}\times s}\) is a known output matrix, and \(y_{p}(t)\in\mathbb{R}^{s}\) is the plant output. Furthermore, it is assumed that the pair \((A_{p},B_{p})\) is controllable.
**Assumption 1**: _The unknown matched uncertainty \(f(x_{p}(t)):\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{m}\) is continuous on a known compact set \(S_{p}\triangleq\{x_{p}(t):\|x_{p}(t)\|\leq b_{x}\}\subset\mathbb{R}^{n_{p}}\), where \(b_{x}\) is a known positive constant._
**Remark 1**: _The proposed method also applies for an unknown state matrix \(A_{p}\): Suppose that the plant dynamics are given by_
\[\dot{x}_{p}(t)=A_{u}x_{p}(t)+B_{p}(u(t)+f_{u}(x_{p}(t))), \tag{2}\]
_where \(A_{u}\in\mathbb{R}^{n_{p}\times n_{p}}\) is an unknown state matrix and \(f_{u}(x_{p}(t))\) is a state-dependent, possibly nonlinear, matched uncertainty. The dynamics (2) can be written in the form of (1) and for a known matrix \(A_{p}\), by defining \(f(x_{p}(t))\triangleq K_{u}x_{p}(t)+f_{u}(x_{p}(t))\), given that the matching condition \(A_{p}=A_{u}-B_{p}K_{u}\) is satisfied for some feedback gain \(K_{u}\in\mathbb{R}^{m\times n_{p}}\). This allows us to proceed assuming \(A_{p}\) is known, without loss of generality._
To achieve command following of a bounded reference input \(r(t)\in\mathbb{R}^{s}\), an error-integral state \(x_{e}(t)\in\mathbb{R}^{s}\) is defined as
\[\dot{x}_{e}(t)=r(t)-y_{p}(t), \tag{3}\]
which, when augmented with (1), yields the augmented dynamics
\[\dot{x}(t)=Ax(t)+B_{m}r(t)+B(u(t)+f(x_{p}(t))), \tag{4a}\] \[y(t)=C^{T}x(t), \tag{4b}\]
where \(x(t)\triangleq[x_{p}(t)^{T},x_{e}(t)^{T}]^{T}\in\mathbb{R}^{n}\) is the augmented state, whose dimension is \(n\triangleq n_{p}+s\), and the system matrices \(A\), \(B_{m}\), \(B\) and \(C\) are
\[A \triangleq\begin{bmatrix}A_{p}&0_{n_{p}\times s}\\ -C_{p}^{T}&0_{s\times s}\end{bmatrix}\in\mathbb{R}^{n\times n}, \tag{5a}\] \[B_{m} \triangleq\begin{bmatrix}0_{n_{p}\times s}\\ -I_{s\times s}\end{bmatrix}\in\mathbb{R}^{n\times s},\] (5b) \[B \triangleq\begin{bmatrix}B_{p}\\ 0_{s\times m}\end{bmatrix}\in\mathbb{R}^{n\times m},\] (5c) \[C \triangleq\begin{bmatrix}-C_{p}^{T}&0_{s\times s}\end{bmatrix}^{T} \in\mathbb{R}^{n\times s}. \tag{5d}\]
A baseline state feedback controller is designed as
\[u_{bl}(t)=-Kx(t), \tag{6}\]
where \(K\in\mathbb{R}^{m\times n}\) is selected such that \(A_{m}\triangleq A-BK\) is Hurwitz. Then, a reference model is defined as
\[\dot{x}_{m}(t)=A_{m}x_{m}(t)+B_{m}r(t), \tag{7}\]
where \(x_{m}(t)\in\mathbb{R}^{n}\) is the state vector of the reference model.
Although the matched uncertainty \(f(x_{p}(t))\) is of unknown structure, Assumption 1 implies that it can be approximated by a multi-layer neural network in the following form
\[f(x_{p}(t))=W^{T}\bar{\sigma}(V^{T}\bar{x_{p}}(t))+\varepsilon(x_{p}(t)), \tag{8}\]
such that
\[\|\varepsilon(x_{p}(t))\|\leq\varepsilon_{N},\qquad\forall x\in S, \tag{9}\]
respectively, where the NN reconstruction error vector \(\varepsilon(x_{p}(t))\) is unknown, but bounded by a known constant \(\varepsilon_{N}>0\) in the domain of interest \(S_{p}\subset\mathbb{R}^{n_{p}}\). Such a bound depends on the number of hidden neurons \(n_{h}\) and the weight matrices of the neural network \(W\) and \(V\). The following assumption is necessary and is standard in the literature.
**Assumption 2**: _The unknown ideal weights \(W\) and \(V\) are bounded by known positive constants such that \(\|V\|_{F}\leq V_{M}\) and \(\|W\|_{F}\leq W_{M}\)._
The input to the NN and the output of the hidden layer is given by
\[\bar{x}_{p}(t) \triangleq\begin{bmatrix}x_{p}(t)^{T}&1\end{bmatrix}^{T}\in \mathbb{R}^{n_{p}+1}, \tag{10a}\] \[\bar{\sigma}(V^{T}\bar{x}_{p}(t)) \triangleq\begin{bmatrix}\sigma(V^{T}\bar{x}_{p}(t))&1\end{bmatrix}^ {T}\in\mathbb{R}^{n_{h}+1}, \tag{10b}\]
where the nonlinear activation function \(\sigma(.):\mathbb{R}^{n_{h}}\rightarrow\mathbb{R}^{n_{h}}\) can be either sigmoid or tanh. Note that \(x_{p}(t)\) and \(\sigma(V^{T}\bar{x}_{p}(t))\) are augmented with unity elements to account for hidden and outer layers biases, respectively. For the convenience of notation, we drop the overbar notation and write (8) as
\[f(x_{p}(t))=W^{T}\sigma(V^{T}x_{p}(t))+\varepsilon(x_{p}(t)), \tag{11}\]
where the dimensions of the weight matrices are loosely defined as \(W\in\mathbb{R}^{n_{h}\times m}\) and \(V\in\mathbb{R}^{n_{p}\times n_{h}}\).
By defining the state tracking error as
\[e(t)\triangleq x(t)-x_{m}(t), \tag{12}\]
the control objective is to make \(e(t)\) converge to zero while keeping all system signals bounded, with minimal transients. To achieve this, we use the control input
\[u(t)=u_{bl}(t)+u_{ad}(t)+u_{lstm}(t)+v(t), \tag{13}\]
where \(u_{bl}(t)\) is the baseline controller (6) that achieves the reference model dynamics (7) in the absence of uncertainty, \(u_{ad}(t)\) is an adaptive neural network (ANN) controller designed to compensate for the matched uncertainty \(f(x_{p}(t))\), and \(u_{lstm}\) is the output of a long short-term memory (LSTM) network. Furthermore, \(v(t)\) is a robustifying term, which is defined subsequently.
In this paper, we focus our attention on enhancing the transient response of the closed-loop system. Particularly, we propose the synergistic employment of a traditional ANN controller and an LSTM network to obtain superior performance compared to conventional approaches. In what follows, we detail each control term in (13).
### _Adaptive Neural Network Controller_
The adaptive neural network (ANN) controller is designed to compensate for the matched nonlinear uncertainty \(f(x_{p}(t))\), and is expressed as,
\[u_{ad}(t)=-\hat{f}(x_{p}(t)), \tag{14}\]
where \(\hat{f}(x_{p}(t))\) represents an estimate of \(f(x_{p}(t))\). Using (11), we define the estimate \(\hat{f}(x_{p}(t))\) as
\[\hat{f}(x_{p}(t))=\hat{W}^{T}(t)\sigma(\hat{V}^{T}(t)x_{p}(t)), \tag{15}\]
where \(\hat{V}(t)\in\mathbb{R}^{n_{p}\times n_{h}}\) is the input-to-hidden-layer weight matrix and \(\hat{W}(t)\in\mathbb{R}^{n_{h}\times m}\) is the hidden-to-output-layer weight matrix, matching the dimensions of the ideal weights in (11). These weights serve as estimates for the unknown ideal weights \(V\) and \(W\), respectively. Through proper use of Taylor series-based arguments [14], the weights are updated using the adaptive laws
\[\dot{\hat{W}} =F(\hat{\sigma}-\hat{\sigma}^{\prime}\hat{V}^{T}x_{p})e^{T}PB-F \kappa\|e\|\hat{W}, \tag{16a}\] \[\dot{\hat{V}} =Gx_{p}e^{T}PB\hat{W}^{T}\hat{\sigma}^{\prime}-G\kappa\|e\|\hat{V}, \tag{16b}\]
where \(F\in\mathbb{R}^{n_{h}\times n_{h}}\) and \(G\in\mathbb{R}^{n_{p}\times n_{p}}\) are symmetric positive definite matrices, serving as learning rates, \(\kappa>0\) is a scalar gain, and \(\hat{\sigma}\) and \(\hat{\sigma}^{\prime}\) are defined as
\[\hat{\sigma} \triangleq\sigma(\hat{V}^{T}(t)x_{p}(t)), \tag{17}\] \[\hat{\sigma}^{\prime} \triangleq\frac{d\sigma(z)}{dz}\bigg{|}_{z=\hat{V}^{T}(t)x_{p}(t)}.\]
Further, \(P\in\mathbb{R}^{n\times n}\) is the symmetric positive definite solution of the Lyapunov equation
\[A_{m}^{T}P+PA_{m}=-Q, \tag{18}\]
for some symmetric positive definite matrix \(Q\in\mathbb{R}^{n\times n}\).
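A minimal numpy sketch of one forward-Euler step of the adaptive laws (16a)-(16b), together with the ANN control input (14)-(15), may clarify the shapes involved. The tanh activation, the diagonal form of \(\hat{\sigma}^{\prime}\), and all dimensions and gains are illustrative assumptions; the paper states the laws in continuous time, so the explicit Euler discretization with step `dt` is also an assumption.

```python
import numpy as np

def u_ad(W_hat, V_hat, x_p):
    """ANN control input, Eqs. (14)-(15), with a tanh hidden layer."""
    return -W_hat.T @ np.tanh(V_hat.T @ x_p)

def ann_step(W_hat, V_hat, x_p, e, P, B, F, G, kappa, dt):
    """One explicit-Euler step of the adaptive laws (16a)-(16b)."""
    z = V_hat.T @ x_p                     # hidden pre-activations, (n_h,)
    sigma = np.tanh(z)
    sigma_p = np.diag(1.0 - sigma**2)     # sigma'(z) as a diagonal matrix
    ePB = (e @ P @ B)[None, :]            # row vector e^T P B, shape (1, m)
    e_norm = np.linalg.norm(e)
    W_dot = (F @ ((sigma - sigma_p @ z)[:, None] @ ePB)
             - kappa * e_norm * F @ W_hat)                   # Eq. (16a)
    V_dot = (G @ (x_p[:, None] @ ePB @ W_hat.T @ sigma_p)
             - kappa * e_norm * G @ V_hat)                   # Eq. (16b)
    return W_hat + dt * W_dot, V_hat + dt * V_dot

n_p, n_h, m, n = 2, 4, 1, 3               # illustrative dimensions
rng = np.random.default_rng(0)
W_hat, V_hat = np.zeros((n_h, m)), rng.random((n_p, n_h))
P, B, e = np.eye(n), rng.normal(size=(n, m)), rng.normal(size=n)
x_p = rng.normal(size=n_p)
W_hat, V_hat = ann_step(W_hat, V_hat, x_p, e, P, B,
                        10 * np.eye(n_h), 10 * np.eye(n_p), 0.0, dt=0.01)
u = u_ad(W_hat, V_hat, x_p)               # contribution to u(t) in Eq. (13)
```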
### _Robustifying Term_
The usage of the baseline controller, ANN controller, and the robustifying term is sufficient to solve the adaptive control problem, as demonstrated in [14] for nonlinear robot arm dynamics. However, for a generic multi-input multi-output (MIMO) dynamical system with matched uncertainties (1), a difficulty arises from the control input matrix \(B\). A subsidiary contribution of this paper is a modified robustifying term \(v(t)\), which achieves the same stability results in [14], but for generic MIMO linear systems.
The robustifying term, \(v(t)\) (see (13)) is used to provide robustness against disturbances that arise from the high-order Taylor series terms [14]. For the considered MIMO linear plant dynamics, we propose a modified robustifying term which is defined as
\[v(t)=\begin{cases}0,&\text{if }\|B^{T}Pe\|=0\\ -\frac{B^{T}Pe}{\|B^{T}Pe\|}k_{z}\|e\|(\|\hat{Z}\|_{F}+Z_{M}),&\text{otherwise} \end{cases}, \tag{19}\]
with
\[k_{z}>C_{2}, \tag{20}\]
where \(Z_{M}\triangleq\sqrt{W_{M}^{2}+V_{M}^{2}}\), and matrices \(B\) and \(P\) are defined in the previous subsection, whereas \(e\) is given in (12). Furthermore, \(C_{2}\triangleq 2C_{\sigma^{\prime}}Z_{M}\), where \(C_{\sigma^{\prime}}\) is a known upper bound for the absolute value of the sigmoid or tanh activation function derivatives, i.e. \(\|\sigma^{\prime}(.)\|\leq C_{\sigma^{\prime}}\), where \(\sigma^{\prime}(z)=d\sigma(z)/dz\).
In order not to diverge from the main contribution of the paper, we defer the derivation of the ANN controller and the proposed modified robustifying term to the Appendix. The following section introduces the main focus of this paper.
## III LSTM Network Design
We propose an LSTM network that works in coordination with the ANN controller to compensate for the uncertainties in the system, with better transients compared to conventional approaches. Combined usage of LSTM and ANN is not arbitrary. Indeed, in this section, we precisely define the separate roles of LSTM and ANN in the closed-loop system. The overall control architecture is given in Fig. 1. By taking the state tracking error \(e(t)=x(t)-x_{m}(t)\) as the input, the LSTM network computes the control input \(u_{lstm}(t)\), which is fed to the plant to compensate for the ANN controller deficiencies. The motivation behind the LSTM network is to improve the transient response of the system and achieve faster convergence by taking advantage of the sequence prediction capabilities of the LSTM architecture.
**Remark 2**: _LSTM network takes a sequence as an input and computes a sequence as the output. Hence, the sampling time of the network arises as a design parameter. The particular selection of the sampling time of the network is application dependent. A proper selection would match the sampling time at which the ANN is implemented, or it can be made larger to reduce the computational load. Nevertheless, the exact sampling time of the network is irrelevant in the proposed architecture since, as shown subsequently, the LSTM network can be trained offline to perform under a given sampling time._
**Remark 3**: _The LSTM input sequence \(e^{n}\) is generated from the continuous-time signal \(e(t)\) as_
\[e^{n}=e(nT),\quad n=0,1,2,\ldots \tag{21}\]
_where \(T>0\) is the sampling time of the LSTM. Furthermore, the output sequence of the LSTM \(u_{lstm}^{n}\) is converted to the continuous-time signal \(u_{lstm}(t)\) which accounts for the control input of the LSTM, where_
\[u_{lstm}(t)=u_{lstm}^{n},\ \ \text{for}\ nT\leq t<(n+1)T,\ \ n=0,1,2,\ldots \tag{22}\]
LSTM is a Recurrent Neural Network (RNN), which, unlike a conventional neural network structure, can learn the long-term dependencies of a sequence. This is accomplished with an additional hidden state, a cell state, and gates that update the states in a systematic way. One cell of LSTM can be seen in Fig. 2.
In Fig. 2, \(e_{norm}^{n}\) is the normalized version of the input sequence \(e^{n}\), \(h^{n}\) is the hidden state, and \(c^{n}\) is the cell state. \(R,W\), and \(b\) are the recurrent weight matrices, input weight matrices, and bias vectors, respectively. \(\sigma_{g}\) denotes the gate activation function (sigmoid) and \(\sigma_{c}\) denotes the
state activation function (\(tanh\)). Subscripts \(f,g,i\), and \(o\) denote forget gate, cell candidate, input gate, and output gate, respectively. \(out_{f}^{n},\bar{c}^{n},out_{i}^{n}\), and \(out_{o}^{n}\) denote the output of the gate operations. The superscript \(n\) denotes the current time step and \(n-1\) is the previous time step.
a) Forget Gate: The forget gate decides the relevant information that the cell state should take from the hidden state and the input. This is achieved with a sigmoid activation function.
\[out_{f}^{n}=\sigma_{g}(W_{f}e_{norm}^{n}+R_{f}h^{n-1}+b_{f}). \tag{23}\]
b) Input Gate: The new information that is going to be added to the cell state is determined by the input gate and cell candidate. First, the hyperbolic tangent \(tanh\) activation function is used to determine which information is going to be added to the cell state. Then, a sigmoid function is used to determine how much of this information is going to be added. This process is given as
\[\bar{c}^{n}=\sigma_{c}(W_{g}e_{norm}^{n}+R_{g}h^{n-1}+b_{g}), \tag{24}\]
\[out_{i}^{n}=\sigma_{g}(W_{i}e_{norm}^{n}+R_{i}h^{n-1}+b_{i}). \tag{25}\]
c) Output Gate: The output gate decides how much of the cell state is going to be a part of the hidden state, using a sigmoid function as
\[out_{o}^{n}=\sigma_{g}(W_{o}e_{norm}^{n}+R_{o}h^{n-1}+b_{o}). \tag{26}\]
The cell state \(c\) is first erased using the forget gate \(out_{f}\), then new information is added through the input gate \(out_{i}\) and cell candidate \(\bar{c}\). This is achieved with the calculation
\[c^{n}=out_{f}^{n}\odot c^{n-1}+out_{i}^{n}\odot\bar{c}^{n}. \tag{27}\]
The gates mentioned above are used to determine the hidden state \(h\) and the cell state \(c\) for the next time step. The hidden state is updated using the output gate \(out_{o}\) and cell state \(c\) that is mapped between -1 and 1 using a \(tanh\) function as
\[h^{n}=out_{o}^{n}\odot\sigma_{c}(c^{n}). \tag{28}\]
A fully connected layer is utilized to obtain an \(m-\)dimensional output to match the dimension of the control input as
\[u_{lstm}^{n}=W_{fc}h^{n}+b_{fc}, \tag{29}\]
where \(W_{fc}\) and \(b_{fc}\) are the weight matrix and bias vector of the fully connected layer, respectively. Then, the control input \(u_{lstm}(t)\) is constructed from the LSTM output as shown in (22).
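The gate equations (23)-(29) can be transcribed directly; the following numpy sketch does so for a single time step, with illustrative weight shapes (hidden size 128 and three inputs, matching the setup of Section IV).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(e_norm, h_prev, c_prev, p):
    out_f = sigmoid(p["Wf"] @ e_norm + p["Rf"] @ h_prev + p["bf"])   # (23)
    c_bar = np.tanh(p["Wg"] @ e_norm + p["Rg"] @ h_prev + p["bg"])   # (24)
    out_i = sigmoid(p["Wi"] @ e_norm + p["Ri"] @ h_prev + p["bi"])   # (25)
    out_o = sigmoid(p["Wo"] @ e_norm + p["Ro"] @ h_prev + p["bo"])   # (26)
    c = out_f * c_prev + out_i * c_bar                               # (27)
    h = out_o * np.tanh(c)                                           # (28)
    u = p["Wfc"] @ h + p["bfc"]                                      # (29)
    return u, h, c

n_in, n_h, m = 3, 128, 1                  # three error inputs, Section IV
rng = np.random.default_rng(0)
p = {k + g: rng.normal(scale=0.1, size=(n_h, n_in if k == "W" else n_h))
     for g in "fgio" for k in ("W", "R")}
p.update({"b" + g: np.zeros(n_h) for g in "fgio"})
p.update({"Wfc": rng.normal(scale=0.1, size=(m, n_h)), "bfc": np.zeros(m)})
u, h, c = lstm_cell(rng.normal(size=n_in), np.zeros(n_h), np.zeros(n_h), p)
```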
The ability to learn the dependencies within sequences helps the LSTM to predict the evolution of an unknown function. It is noted that in a typical ANN implementation, the speed of the response can be increased by increasing the adaptation rates, which may cause undesired oscillations. With LSTM augmentation, faster convergence is achieved without the need for large learning rates. The training and implementation details of LSTM are explained in the following subsections.
### _Training Method_
Since the LSTM network operates in a closed-loop system, the system states and the "truth", which is the variable it is trying to estimate, are affected by the LSTM output (see Fig. 1). As the training progresses, the training data itself evolves with time, which means each collected sequence can be used only once to train the network.
Our main philosophy to use LSTM is to compensate for the inadequacies of the conventional ANN controller: We know that ANN controllers are successful in compensating for the uncertainties asymptotically. However, adjusting their transient characteristics is not a trivial task. Instead of increasing the learning rates, LSTM provides better transients by actually predicting the type of transients and compensating accordingly. To achieve this, the goal for the LSTM is set to predict the estimation error of the ANN controller and compensate for this error. The estimation error of the ANN, or the deviation of the ANN controller input (14), \(\hat{f}(x_{p}(t))\), from the actual uncertainty, \(f(x_{p}(t))\), is given as
\[\tilde{f}(x_{p}(t))\triangleq f(x_{p}(t))-\hat{f}(x_{p}(t)). \tag{30}\]
Fig. 1: Block diagram of the proposed control architecture.
Fig. 2: Detailed sketch of an LSTM cell
The "truth" for the LSTM, or the desired signal for the LSTM to produce, is selected as
\[y(t)=-\tilde{f}(x_{p}(t)). \tag{31}\]
Therefore, LSTM provides a signal to the system that is an estimate of this truth, expressed as
\[u_{lstm}(t)=\hat{y}(t). \tag{32}\]
During training, an uncertainty \(f_{train}(x_{p}(t))\) is introduced to the closed-loop system, and LSTM is expected to learn \(\tilde{f}_{train}(x_{p}(t))=f_{train}(x_{p}(t))-\hat{f}(x_{p}(t))\), where \(-\hat{f}(x_{p}(t))\) is the control signal produced by the ANN. Apart from \(\tilde{f}_{train}(x_{p}(t))\), which serves as the training target, LSTM uses the state-tracking error \(e(t)\), defined in (12), as its input (see Fig. 1). Every iteration of training affects the sequence that LSTM is trained on. This dynamic nature of training helps LSTM generalize to a set of functions that are not used in the training set. This is further emphasized in the Simulations Section.
Since there is no initial data set, the data needs to be collected by running the simulation with the initial LSTM weights. Since weight updates of the LSTM network also affect the system dynamics, new data needs to be collected after each episode of training. Before giving the LSTM output \(u_{lstm}\) to the system, a gain "\(k_{lstm}\)" scales \(u_{lstm}\). The role of this gain is to avoid an untrained LSTM network output from negatively affecting the system at the beginning of the training. \(k_{lstm}\) starts from 0 and approaches 1 as the number of training iterations increases, then stays at 1 for the rest of the training. The training loop can be seen in Fig. 3.
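The loop below sketches the training procedure of Fig. 3 in schematic form: run the closed loop with the current LSTM scaled by \(k_{lstm}\), train once on the freshly collected sequence with target \(y=-\tilde{f}\), and ramp \(k_{lstm}\) from 0 to 1. The `simulate` stub and the scalar `lstm_gain` stand in for the real simulation and the LSTM weight update; only the loop structure mirrors the text, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lstm_gain = 0.0                          # stand-in for the LSTM weights

def simulate(k_lstm, lstm_gain):
    """Stub closed-loop run: returns tracking errors e and ANN residual."""
    e = rng.normal(scale=1.0 + 0.1 * abs(lstm_gain) * k_lstm, size=(600, 3))
    f_tilde = 0.3 * e[:, :1] + rng.normal(scale=0.05, size=(600, 1))
    return e, f_tilde

n_episodes, ramp = 100, 20
for episode in range(n_episodes):
    k_lstm = min(1.0, episode / ramp)    # fade the LSTM output in
    e_seq, f_tilde = simulate(k_lstm, lstm_gain)
    target = -f_tilde                    # truth y = -f_tilde, Eq. (31)
    # one pass over the fresh sequence; the data is then discarded, since
    # the next weight update changes the closed-loop trajectories
    lstm_gain += 0.01 * float(np.mean(target * e_seq[:, :1]))
```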
### _Normalization_
The inputs of the LSTM network are the components of the tracking error vector \(e\), defined in (12). In order to make sure that these components are on a similar scale, normalization needs to be performed. However, since an initial training data set does not exist, data needs to be collected first in order to acquire the parameters for normalization. For collecting the normalization parameters only, the simulation is run with \(f_{train}\) (see explanations after (32)) \(z\) times. A gain \(k_{f}\) is uniformly sampled from \([0,1]\) in every simulation run to scale \(f_{train}\). This random scaling is used only during the normalization parameter collection, not during training. The LSTM network output \(u_{lstm}\) is not injected into the system during this process. Within the collected data, the minimum, \(e_{i}^{min}\), and maximum, \(e_{i}^{max}\), values of each component of the error vector, where "\(i\)" refers to the \(i^{th}\) component, are utilized to obtain the normalized inputs as
\[e_{norm_{i}}=(e_{i}-e_{i}^{min})/(e_{i}^{max}-e_{i}^{min}). \tag{33}\]
### _Stability Analysis_
The following theorem displays the stability properties of the overall architecture.
**Theorem 1**: _Consider the uncertain plant dynamics (1), subject to Assumptions 1 and 2, and the reference model (7). Let the control input be defined by (13), which consists of the baseline controller (6), the ANN controller defined by (14), (15) and (16), the robustifying term given in (19) and (20), and the LSTM controller explained in Section III. Then, given that \(x_{p}(0)\in S_{p}\) (see Assumption 1), the solution \((e(t),\tilde{W}(t),\tilde{V}(t))\) is uniformly ultimately bounded (UUB) and converges to a predefined compact set, where \(\tilde{V}(t)\triangleq V-\hat{V}(t)\) and \(\tilde{W}(t)\triangleq W-\hat{W}(t)\)._
The proof is deferred to the Appendix II.
## IV Simulations
In this section, the performance of the proposed control framework is examined using the short-period longitudinal flight dynamics. The short-period dynamics is given as [21]
\[\begin{bmatrix}\dot{\alpha}\\ \dot{q}\end{bmatrix}=\begin{bmatrix}\frac{Z_{\alpha}}{mU}&1+\frac{Z_{q}}{mU}\\ \frac{M_{\alpha}}{J_{y}}&\frac{M_{q}}{J_{y}}\end{bmatrix}\begin{bmatrix}\alpha\\ q\end{bmatrix}+\begin{bmatrix}\frac{Z_{\delta}}{mU}\\ \frac{M_{\delta}}{J_{y}}\end{bmatrix}(u+f(x_{p})), \tag{34}\]
where \(\alpha\) is the angle of attack (deg), \(q\) is the pitch rate (deg/s), \(u\) is the elevator deflection (deg), and \(f(x_{p})\) is a state dependent matched uncertainty. Elevator magnitude and rate saturation limits are set as \(+17/-23\) (deg) and \(+37/-37\) (deg/s) [22]. The commanded input is the pitch rate. \(Z_{\alpha}\), \(Z_{q}\), \(M_{\alpha}\), \(M_{q}\), \(Z_{\delta}\) and \(M_{\delta}\) are the stability and control derivatives.
The system matrices for a B-747 aircraft flying at the speed of 274 m/s at 6000 m altitude are given as [16]
\[A_{p}=\begin{bmatrix}-0.32&0.86\\ -0.93&-0.43\end{bmatrix},B_{p}=\begin{bmatrix}-0.02\\ -1.16\end{bmatrix},C_{p}=\begin{bmatrix}0&1\end{bmatrix}^{T}. \tag{35}\]
The augmented state vector is
\[x(t)=\begin{bmatrix}x_{1}(t)&x_{2}(t)&x_{3}(t)\end{bmatrix}^{T}, \tag{36}\]
where \(x_{1}\) is the angle of attack (rad), \(x_{2}\) is the pitch rate (rad/s) and \(x_{3}\) is the error-integral state (3). The input to the LSTM network is,
\[in_{lstm}(t)=\begin{bmatrix}e_{norm_{1}}(t)&e_{norm_{2}}(t)&e_{norm_{3}}(t) \end{bmatrix}^{T}, \tag{37}\]
where \(e_{norm_{1}}\), \(e_{norm_{2}}\), and \(e_{norm_{3}}\) are the components of the normalized version of the state tracking error vector (33).
The baseline controller (6) is an LQR controller with cost matrices \(Q_{LQR}=I\) and \(R_{LQR}=1\). The nonlinear uncertainty \(f_{train}\) (see explanations after (32)) that the Long Short-Term Memory (LSTM) network (29) is trained on is defined as

\[f_{train}(x)= \tag{38}\] \[\begin{cases}0.1x_{1}x_{2}&\text{if }0\leq t<5\\ \exp(x_{2})&\text{if }5\leq t<10\\ 2x_{1}x_{2}&\text{if }10\leq t<15\\ -0.1\cos(x_{1})&\text{if }15\leq t<20\\ 0.5(x_{1}x_{2})&\text{if }20\leq t<25\\ 0.1x_{1}x_{2}&\text{if }25\leq t<30\\ -x_{1}x_{2}(\sin(5x_{1}x_{2})+5\sin(x_{2}))&\text{if }30\leq t<35\\ x_{1}x_{2}(3\sin(2x_{1}x_{2})+2x_{1})&\text{if }35\leq t<45\\ -x_{1}x_{2}(2(\tan(2x_{1}x_{2})+x_{2}^{2}))&\text{if }45\leq t<55\\ x_{1}x_{2}(x_{1}+x_{2})&\text{if }55\leq t\leq 60\end{cases}.\]

Fig. 3: Flow chart of the training process.
The LSTM network is trained in the presence of an ANN controller (see Section II-A) that has a hidden layer consisting of four neurons. The learning rates in (16a) and (16b) are set as \(F=G=10\), and the Lyapunov matrix \(Q\) in (18) is set as \(Q=I\). The outer weights \((\hat{W})\) and bias \((\hat{b}_{w})\) are initialized to zero. The inner weights \((\hat{V})\) and biases \((\hat{b}_{v})\) are initialized randomly between 0 and 1. In the simulations, the robustifying gain \(k_{z}\) in (20) and scalar gain \(\kappa\) in (16a) and (16b) are set to 0.
The LSTM network contains one hidden layer with 128 neurons. The number of neurons in the input layer is 3 due to the number of state-tracking error components given as input to the network (37). Weights are initialized using Xavier Initialization [23]. The network is trained with stochastic gradient descent, and uses the Adam optimizer with a learning rate of 0.001, an L2 regularization factor of 0.0001, a gradient decay factor of 0.99, and a squared gradient decay factor of 0.999 [24]. The gradient clipping method is used with a threshold value of 1. The simulation step time is set to \(0.01\) s. The minibatch size is taken as 1. To cover both low and high-valued uncertainties, \(f_{train}\) is scaled by a parameter that took the alternating values of 0.2 and 2.
To obtain the normalization constants given in (33), the system is simulated 1000 times (\(z=1000\)) with the random gain \(k_{f}\) (see Section III-B).
The loss function is chosen as Mean Squared Error (MSE) given as
\[L(y,\hat{y})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}, \tag{39}\]

where \(y\) and \(\hat{y}\) are defined in (31) and (32), and \(N\) is the number of samples in the data set.
The proposed control framework is tested using the uncertainty defined as
\[f_{test}(x)=\begin{cases}0&\text{if }0\leq t<2\\ -0.1\exp(x_{1}x_{2})&\text{if }2\leq t<8\\ 0.5x_{2}^{2}&\text{if }8\leq t<12\\ 0.05\exp(x_{1}+2x_{2})&\text{if }12\leq t<20\\ -0.1\sin(0.05x_{1}x_{2})&\text{if }20\leq t<28\\ 0.1(x_{1}^{2}+x_{2}^{3})&\text{if }28\leq t<34\\ -0.1\sqrt{\cos(x_{2})^{2}}&\text{if }34\leq t<40\\ -0.2\sin(x_{1}x_{2})&\text{if }40\leq t<49\\ 0.1(x_{1}x_{2})^{2}&\text{if }49\leq t<53\\ 0.5x_{2}&\text{if }53\leq t<60\\ 0.1(|x_{2}|^{x_{1}})&\text{if }60\leq t<67\\ -0.5\sin(x_{2})&\text{if }67\leq t<79\\ 0.01\exp(x_{1})&\text{if }79\leq t<86\\ -0.4x_{1}^{2}&\text{if }86\leq t<98\\ 0.2x_{2}^{2}&\text{if }98\leq t\leq 110\end{cases}, \tag{40}\]
where the function is chosen to have different types of sub-functions with different time intervals, compared to the training uncertainty (38).
### _Controller performance in the presence of small uncertainty_
In this section, the effect of the proposed LSTM augmentation is examined using low-value uncertainties. For this purpose, \(f_{test}\) is scaled by 0.1 during the tests. Then, the tracking performance, tracking error, and control inputs of the closed-loop system are compared with and without LSTM augmentation.
Figures 4 and 5 show the tracking and tracking error curves, respectively. While the error plots demonstrate that LSTM augmentation dramatically reduces both the error magnitudes and the transient oscillations, the error values are small enough to be ignored compared to the absolute values of the states. Therefore, one can conclude that in this scenario the LSTM-augmented and non-augmented cases show similar performances. In Fig. 6, the control inputs are presented. This figure shows that although LSTM augmentation does not affect the magnitude of the total control input in a meaningful manner, it provides damping to the oscillations by providing small but fast compensation to the introduced uncertainties. As we discussed above, since the ANN controller is already successful in compensating for the uncertainties, in this case, the LSTM contribution is not prominent.

Fig. 4: Tracking performances with and without LSTM augmentation, in the presence of small uncertainty.
### _Controller performance in the presence of large uncertainty_
In this section, the effect of the proposed LSTM augmentation is examined in the presence of high-valued uncertainties, which are obtained by using \(f_{test}\) as is.
Figures 7 and 8 demonstrate that LSTM augmentation substantially improves the transient response of the system, especially in pitch rate tracking, which is the output of interest (see (35)). It is noted that for this case, the same ANN controller and the LSTM network are used as in the small uncertainty case. Individual control inputs are shown in Fig. 9. In this case, unlike the low-uncertainty case, the LSTM augmentation makes the total control signal observably more agile, which is the main reason why excessive oscillations are prevented. This shows the LSTM network uses its memory to predict high-frequency changes in the system. It is noted that in this large uncertainty case the total control input is saturated. This is evident from Fig. 9, where the middle and the bottom sub-figures show the control signals created by the individual components of the overall controller, while the top sub-figure shows the total control input after saturation. The figure shows that the LSTM network produces large and very fast compensation, which causes saturation. Here, the rate saturation can be problematic since it creates some oscillations in the total control signal (see the zoomed section, blue line, in the top sub-figure). Although the oscillations could be acceptable since they are short-lived, we believe that this issue can also be tackled by training the LSTM network with saturation information. We explain this solution in the next section.

Fig. 5: Tracking errors with and without LSTM augmentation, in the presence of small uncertainty.

Fig. 8: Tracking errors with and without LSTM augmentation, in the presence of large uncertainty.

Fig. 6: Control inputs in the presence of small uncertainty. Top: Total control input. Middle: The contributions of individual control inputs in the absence of LSTM. Bottom: The contributions of individual control inputs in the presence of LSTM.

Fig. 7: Tracking performances with and without LSTM augmentation, in the presence of large uncertainty.
### _Addressing Saturation_
To handle the saturation issue, we modified the LSTM training by a) introducing the saturation limits to the plant dynamics during training, and b) informing the LSTM network whenever the control signal rate-saturates. The latter is achieved by providing an additional input to the LSTM network as
\[u_{r}(t)=\begin{cases}0.1,&\text{if rate saturation is positive}\\ -0.1,&\text{if rate saturation is negative}\\ 0,&\text{otherwise}\end{cases} \tag{41}\]
The number of neurons in the input layer of the LSTM network is increased to 4 since the modified input to the LSTM network is
\[in_{lstm}(t)=\begin{bmatrix}e_{norm_{1}}(t)&e_{norm_{2}}(t)&e_{norm_{3}}(t)&u_{ r}(t)\end{bmatrix}^{T}. \tag{42}\]
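To make the modification concrete, a minimal sketch of assembling this four-dimensional input in Python is given below; the helper names `rate_saturation_flag` and `lstm_input` are our own, and the normalized tracking errors are assumed to come from the training pipeline described earlier.

```python
import numpy as np

def rate_saturation_flag(u_cmd_rate: float, rate_limit: float) -> float:
    """Ternary indicator u_r(t) of (41): +/-0.1 while the commanded
    control rate exceeds the actuator rate limit, 0 otherwise."""
    if u_cmd_rate > rate_limit:
        return 0.1
    if u_cmd_rate < -rate_limit:
        return -0.1
    return 0.0

def lstm_input(e_norm: np.ndarray, u_cmd_rate: float, rate_limit: float) -> np.ndarray:
    """Four-dimensional LSTM input of (42): three normalized tracking
    errors stacked with the rate-saturation flag."""
    return np.append(e_norm, rate_saturation_flag(u_cmd_rate, rate_limit))
```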
Figure 10 shows the LSTM network contribution and the total control inputs, with and without rate-limit information during training. It is seen that the LSTM network uses the rate-limit information to provide smoother compensation. Figures 11 and 12 show the tracking and tracking error curves, respectively, when the LSTM network is trained using the rate-limit information. The performance of the proposed controller remains similar to that of the case where the LSTM is not trained with saturation information, although the pitch rate shows small initial jumps at the instants of the uncertainty switches.
## V Summary
In this work, we propose a Long Short-Term Memory (LSTM) augmented adaptive neural network (ANN) control structure to improve the transient response of adaptive closed-loop control systems. We demonstrate that, thanks to its time-series prediction capabilities, the LSTM helps the ANN controller compensate for the uncertainties in a more agile fashion, resulting in dramatically improved tracking performance.
Fig. 9: Control inputs in the presence of high uncertainty. Top: Total control input. Middle: The contributions of individual control inputs in the absence of LSTM. Bottom: The contributions of individual control inputs in the presence of LSTM.
Fig. 10: Control signals with and without rate-limit-informed LSTM. Top: Total control input. Bottom: LSTM network contribution to the total control input.
Fig. 11: Tracking performances with LSTM trained without rate information and rate-limit-informed LSTM.
Fig. 12: Tracking errors with LSTM trained without rate information and rate-limit-informed LSTM.
|
2308.15349 | Lie-Poisson Neural Networks (LPNets): Data-Based Computing of
Hamiltonian Systems with Symmetries | An accurate data-based prediction of the long-term evolution of Hamiltonian
systems requires a network that preserves the appropriate structure under each
time step. Every Hamiltonian system contains two essential ingredients: the
Poisson bracket and the Hamiltonian. Hamiltonian systems with symmetries, whose
paradigm examples are the Lie-Poisson systems, have been shown to describe a
broad category of physical phenomena, from satellite motion to underwater
vehicles, fluids, geophysical applications, complex fluids, and plasma physics.
The Poisson bracket in these systems comes from the symmetries, while the
Hamiltonian comes from the underlying physics. We view the symmetry of the
system as primary, hence the Lie-Poisson bracket is known exactly, whereas the
Hamiltonian is regarded as coming from physics and is considered not known, or
known approximately. Using this approach, we develop a network based on
transformations that exactly preserve the Poisson bracket and the special
functions of the Lie-Poisson systems (Casimirs) to machine precision. We
present two flavors of such systems: one, where the parameters of
transformations are computed from data using a dense neural network (LPNets),
and another, where the composition of transformations is used as building
blocks (G-LPNets). We also show how to adapt these methods to a larger class of
Poisson brackets. We apply the resulting methods to several examples, such as
rigid body (satellite) motion, underwater vehicles, a particle in a magnetic
field, and others. The methods developed in this paper are important for the
construction of accurate data-based methods for simulating the long-term
dynamics of physical systems. | Christopher Eldred, François Gay-Balmaz, Sofiia Huraka, Vakhtang Putkaradze | 2023-08-29T14:45:23Z | http://arxiv.org/abs/2308.15349v1 | # Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems with Symmetries
###### Abstract
An accurate data-based prediction of the long-term evolution of Hamiltonian systems requires a network that preserves the appropriate structure under each time step. Every Hamiltonian system contains two essential ingredients: the Poisson bracket and the Hamiltonian. Hamiltonian systems with symmetries, whose paradigm examples are the Lie-Poisson systems, have been shown to describe a broad category of physical phenomena, from satellite motion to underwater vehicles, fluids, geophysical applications, complex fluids, and plasma physics. The Poisson bracket in these systems comes from the symmetries, while the Hamiltonian comes from the underlying physics. We view the symmetry of the system as primary, hence the Lie-Poisson bracket is known exactly, whereas the Hamiltonian is regarded as coming from physics and is considered not known, or known approximately. Using this approach, we develop a network based on transformations that exactly preserve the Poisson bracket and the special functions of the Lie-Poisson systems (Casimirs) to machine precision. We present two flavors of such systems: one, where the parameters of transformations are computed from data using a dense neural network (LPNets), and another, where the composition of transformations is used as building blocks (G-LPNets). We also show how to adapt these methods to a larger class of Poisson brackets. We apply the resulting
methods to several examples, such as rigid body (satellite) motion, underwater vehicles, a particle in a magnetic field, and others. The methods developed in this paper are important for the construction of accurate data-based methods for simulating the long-term dynamics of physical systems.
keywords: Neural equations, Data-based modeling, Long-term evolution, Hamiltonian systems, Poisson brackets
journal: Journal of Computational Physics
## 1 Introduction
### Relevant work
Machine Learning (ML) approaches to data assimilation and modeling have been successful in interpreting large amounts of unstructured data. However, for many scientific and industrial purposes, direct applications of ML methods developed for big data are challenging without an understanding of the underlying engineering and physics. To address these problems, _Physics Informed Neural Networks (PINNs)_ have been developed (Raissi et al., 2019). As it is impossible to provide a complete overview of the PINNs literature here, we refer the reader to recent reviews (Cuomo et al., 2022; Karniadakis et al., 2021) for references and a more thorough survey of the literature. In that approach, one assumes that the motion of the system \(\mathbf{u}(t)\) is approximated by a law of motion
\[\dot{\mathbf{u}}=\mathbf{f}(\mathbf{u},t), \tag{1}\]
where \(\mathbf{u}\) and \(\mathbf{f}\) can be either finite-dimensional, forming a system of ODEs, or infinite-dimensional, forming a system of PDEs. In the 'vanilla' PINNs approach, the initial conditions, the boundary conditions, and the equation residual itself form parts of the loss function optimized during the learning procedure. The last part - computing the difference \(\dot{\mathbf{u}}-\mathbf{f}(\mathbf{u},t)\) - can be achieved for a neural network approximating \(\mathbf{u}\), that is, approximating the mapping \((\mathbf{u}_{0},t)\rightarrow\mathbf{u}\). Similarly, for PDEs in \((1+1)\) dimensions, for example, the solution \(u(x,t)\) is a mapping from the position \(x\) and time \(t\) to the space of solutions. In PINNs, one approximates this mapping by a neural network. For a given set of weights and activation functions in the neural network, at a given set of points in space and time \((t_{i},\mathbf{x}_{i})\), one can compute the _exact_ values of the temporal and spatial derivatives using the method of automatic differentiation (Baydin et al., 2018). One then constructs the Mean-Square Error (MSE) of the PDE approximation of the solution and of the boundary conditions, takes it as the cost function, and optimizes the network weights to minimize that cost.
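As an illustration of this loss construction for the ODE case (1), a minimal PyTorch-style sketch is given below; the network `u_net`, the vector field `f`, and the collocation points are placeholders rather than parts of any specific PINN implementation.

```python
import torch

def pinn_loss(u_net, f, t_colloc, t0, u0):
    """Vanilla PINN loss for (1): MSE of the residual du/dt - f(u, t)
    at collocation times, plus the initial-condition mismatch."""
    t = t_colloc.clone().requires_grad_(True)   # (N, 1) collocation times
    u = u_net(t)                                # (N, dim) approximate solution
    # Exact time derivatives of each state component via automatic differentiation.
    du_dt = torch.cat(
        [torch.autograd.grad(u[:, i].sum(), t, create_graph=True)[0]
         for i in range(u.shape[1])], dim=1)
    residual = du_dt - f(u, t)
    return residual.pow(2).mean() + (u_net(t0) - u0).pow(2).mean()
```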
The advantage of PINNs is their computational efficiency: speedups of more than 10,000x for evaluations of solutions of complex systems like weather have been reported (Bi et al., 2023; Pathak et al., 2022), although the quantification of this speedup still needs to be understood (Karniadakis et al., 2021). PINNs are thus extremely useful in practical applications, and have been implemented as open source in _Python_ and _Julia_, as well as in Nvidia's _Modulus_.
In spite of these successes, there are still many uncertainties in the applications of PINNs. For example, it is known that PINNs may struggle with systems whose dynamics involve widely different time scales (Karniadakis et al., 2021), or may have trouble finding the true minimum during optimization (Krishnapriyan et al., 2021). Finally, systems with very little friction, like Hamiltonian systems, are difficult to describe with standard PINNs, since evaluation errors created at every time step tend to accumulate with no chance of dissipating quickly enough. This paper develops a method of simulating a type of non-canonical Hamiltonian system, namely, Lie-Poisson systems. These systems are extremely important in building models of physical systems in Eulerian or body coordinates. The method is also shown to generalize to some other types of non-canonical Poisson brackets. The remainder of the literature review is dedicated to work on the use of data-based methods in simulating Hamiltonian systems. In order to make the discussion more concrete, we introduce a brief background on Hamiltonian and Poisson systems. More details can be found in Section 2.
There are several approaches applying physics-informed data methods to computations of Hamiltonian systems. Most of the work has been focused on _canonical_ Hamiltonian systems, _i.e._, the systems where the law of motion (1) has the following particular structure: \(\mathbf{u}\) is \(2n\)-dimensional, \(\mathbf{u}=(\mathbf{q},\mathbf{p})\), and there is a function \(H(\mathbf{q},\mathbf{p})\), the Hamiltonian, such that \(\mathbf{f}\) in (1) becomes
\[\mathbf{f}=\mathbb{J}\nabla_{\mathbf{u}}H\,,\quad\mathbb{J}=\left(\begin{array}{cc}0&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0\end{array}\right)\,, \tag{2}\]
with \(\mathbb{I}_{n}\) the \(n\times n\) identity matrix, leading to the _canonical_ Hamilton equations for \((\mathbf{q},\mathbf{p})\):
\[\dot{\mathbf{q}}=\frac{\partial H}{\partial\mathbf{p}}\,,\quad\dot{\mathbf{p} }=-\frac{\partial H}{\partial\mathbf{q}}\,. \tag{3}\]
The evolution of an arbitrary phase space function \(F(\mathbf{q},\mathbf{p})\) along a solution of (3) is then described by the canonical Poisson bracket:
\[\frac{dF}{dt}=\{F,H\}=\frac{\partial F}{\partial\mathbf{q}}\frac{\partial H} {\partial\mathbf{p}}-\frac{\partial H}{\partial\mathbf{q}}\frac{\partial F}{ \partial\mathbf{p}}. \tag{4}\]
The bracket (4) is a mapping sending two smooth functions of \((\mathbf{q},\mathbf{p})\) into a smooth function of the same variables. This mapping is bilinear, antisymmetric, acts as a derivation on both functions, and satisfies the Jacobi identity: for all functions \(F,G,H\)
\[\{\{F,G\},H\}+\{\{H,F\},G\}+\{\{G,H\},F\}=0\,. \tag{5}\]
Brackets that satisfy all the required properties, _i.e._, are bilinear, antisymmetric, act as a derivation and satisfy Jacobi identity (5), but are not described by the canonical equations (4), are called general Poisson brackets (also known as non-canonical Poisson brackets). The corresponding equations of motion are called non-canonical Hamiltonian (or Poisson) systems. Often, these brackets have a non-trivial null space leading to the conservation of certain quantities known as the Casimir constants, or simply _Casimirs_. The Casimirs are properties of the Poisson bracket and are independent of a particular realization of a given Hamiltonian. This paper will focus on the data-based approaches for computations of an important class of non-canonical Poisson systems.
There is an avenue of thought that focuses on learning the actual Hamiltonian of the system from data, or, in the more general case, the Poisson bracket. This approach was explicitly implemented for canonical Hamiltonian systems in (Greydanus et al., 2019) under the name of _Hamiltonian Neural Networks (HNN)_, which approximated the Hamiltonian function \(H(\mathbf{q},\mathbf{p})\) by fitting the evolution of a particular data sequence through equations (3). It was shown that embedding this knowledge of the dynamics into the learning allows a much more accurate and robust approximation of the solution compared to a general Neural Network (NN). This work was further extended to include the adaptive learning of parameters and transitions to chaos (Han et al., 2021). The mathematical background guaranteeing the existence of the Hamiltonian function sought in HNN was derived in (David and Mehats, 2021). An alternative method of learning the equations is given by the _Lagrangian Neural Networks (LNNs)_ (Cranmer et al., 2020), which approximate the solutions of the Euler-Lagrange equations, _i.e._, the equations in the coordinate-velocity space \((\mathbf{q},\dot{\mathbf{q}})\) _before_ Legendre-transforming to the momentum-coordinate representation \((\mathbf{q},\mathbf{p})\) given by equations (3). More generally, learning a vector field for non-canonical Poisson brackets was suggested in (Sipka et al., 2023). The main challenge in that work was enforcing the Jacobi identity (5) for the learned equation structure.
In these and other works on the topic, one learns the vector field governing the system, with the assumption that the resulting equations can be solved using appropriate numerical methods. However, one needs to be aware that care must be
taken in computing the numerical solutions for Hamiltonian systems, especially for long-term computations, as regular numerical methods lead to distortion of quantities that should be conserved, such as total energy and, when appropriate, the momenta. In order to compute long-term evolution of systems obeying the Hamiltonian vector fields, whether exact or approximated by the Hamiltonians derived from neural networks, one can use variational integrator methods (Hall and Leok, 2015; Leok and Shingel, 2012; Marsden and West, 2001) that conserve momenta-like quantities with machine precision. However, these integrators may be substantially more computationally intensive compared to non-structure preserving methods.
In this paper, we focus on an alternative approach, namely, learning transformations in phase space that satisfy the appropriate properties. Recall that if \(\mathbf{\phi}(\mathbf{u})\) is a map in the phase space of equation (3) with \(\mathbf{u}=(\mathbf{q},\mathbf{p})\), then this map is called symplectic if
\[\left(\frac{\partial\mathbf{\phi}}{\partial\mathbf{u}}\right)^{T}\mathbb{J}\left( \frac{\partial\mathbf{\phi}}{\partial\mathbf{u}}\right)=\mathbb{J}\,. \tag{6}\]
A well-known result of Poincare states that the flow \(\mathbf{\phi}_{t}(\mathbf{u})\) of the canonical Hamiltonian system (3), sending initial conditions \(\mathbf{u}\) to the solution at time \(t\), is a symplectic map (Arnol'd, 2013; Marsden and Ratiu, 2013). Several authors pursued the idea of searching directly for the symplectic mappings obtained from the data, instead of finding actual equations of the canonical systems and then solving them.
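As a small numerical illustration of the condition (6), the sketch below checks symplecticity of the exact flow map of a harmonic oscillator with \(H=(q^{2}+p^{2})/2\); the example system is our own choice, not one from the works cited here.

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # canonical J for n = 1

def flow_jacobian(t):
    """The flow of H = (q^2 + p^2)/2 is a rotation in the (q, p) plane;
    since it is linear, its Jacobian is the rotation matrix itself."""
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

Dphi = flow_jacobian(0.73)
# Symplecticity condition (6): Dphi^T J Dphi = J, up to round-off.
print(np.allclose(Dphi.T @ J @ Dphi, J))   # True
```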
Perhaps the first work in the area of computing the symplectic transformations directly was done in (Chen et al., 2020), where _Symplectic Recurrent Neural Networks (SRNNs)_ were designed. The SRNNs computed an approximation to the symplectic transformation from data using the appropriate formulas for symplectic numerical methods. An alternative method of computation of canonical Hamilton equations for non-separable Hamiltonians was developed in (Xiong et al., 2020), building approximations to symplectic steps using a symplectic integrator suggested in (Tao, 2016). This technique was named _Non-Separable Symplectic Neural Networks (NSSNNs)_.
A more direct computation of symplectic mappings was done using three different methods in (Chen and Tao, 2021; Jin et al., 2020). The first approach, derived in (Jin et al., 2020), computes the dynamics through the composition of symplectic maps of a certain type, implemented under the name of _SympNets_. Another approach (Chen and Tao, 2021) derives the mapping directly using a generating function approach for canonical transformations, implemented as _Generating Function Neural Networks (GFNNs)_. The approach using GFNNs allows for an explicit estimate of the error in a long-term simulation. In contrast, the error analysis in SympNets focuses on the local approximation error. Finally, Burby et al. (2020) developed _HenonNets_, neural networks based on the Henon mappings, capable of accurately learning Poincare maps of Hamiltonian systems while preserving the symplectic structure. SympNets, GFNNs and HenonNets showed the ability to accurately simulate long-term behavior of simple integrable systems like a pendulum or a single planet orbit, and satisfactory long-term simulation for chaotic systems like the planar three-body problem. Thus, learning symplectic transformations directly from data shows great promise for long-term simulations of Hamiltonian systems.
The method of SympNets was further extended for non-canonical Poisson systems by transforming the non-canonical form to local canonical coordinates using the Lie-Darboux theorem and subsequently using SympNets (Jin et al., 2022) by assuming that the dynamics occurs within a neighborhood in which the Poisson structure has constant rank. This method was named _Poisson Neural Networks (PNNs)_. While this method can in principle treat any Poisson system by learning the transformation to the canonical variables and its inverse, we shall note that there are several difficulties associated with this approach:
1. It is known that the Lie-Darboux transformation, in general, is only local: a global function transforming the system to canonical coordinates may not exist, although such a transformation did exist for all examples presented in (Jin et al., 2022). When such non-locality happens, one would need to define several transformations in overlapping domains and ensure smoothness between them. It is not clear how the network-based Lie-Darboux function would perform in that case.
2. If an _absolutely accurate_ Lie-Darboux transformation coupled with a symplectic integrator was used to transform the coordinates to canonical form in (Jin et al., 2022), it would of course preserve all the Casimirs. However, any numerical errors in determining that transformation will yield corresponding errors in the Casimir evolution.
3. Since the errors in Casimir evolution are determined by the errors in the Lie-Darboux mapping, it is also not clear how these errors will accumulate over long-term evolution of the system.
The preservation of Casimirs is especially important for predicting the probabilistic properties for the long-term evolution of many trajectories in the Poisson system (Dubinkina and Frank, 2007). In particular, even when the errors in each
individual component of the solution may accumulate over time, the fact that the solution stays exactly on the Casimir surface will play an essential role in the probability distribution in phase space.
The SympNets and PNN approach was further extended in (Bajars, 2023), where volume-preserving neural networks _LocSympNets_ and their symmetric extensions _SymLocSympNets_ were derived, based on the composition of mappings of a certain type. Consistently good accuracy of long-term solutions obtained by LocSympNets and SymLocSympNets was demonstrated for several problems, including a discretized linear advection equation, rigid body dynamics, and a particle in a magnetic field. Although the methods of (Bajars, 2023) did not explicitly appeal to the Poisson structure of the equations, their efficiency was demonstrated on several problems that are essentially Poisson in nature, such as rigid body motion and the motion of a particle in a magnetic field. However, the extension of the theory to more general problems was hampered by the fact that the completeness of the activation matrices suggested in (Bajars, 2023) was not yet known.
The limitations of the methods of (Jin et al., 2022) and (Bajars, 2023) come from the fact that they relate to a general system of equations, where it is assumed that very little is known about the general system apart from the fact that it is Hamiltonian, or Poisson. On the other hand, there is a large class of physical systems where the Poisson bracket is known exactly; in particular, _Lie-Poisson_ systems. In these systems, the bracket does not come as a consequence of the equations of motion, but from general considerations of the Lie group symmetries of the system. In that case, the Poisson bracket has an explicit expression stemming from the expression for the Lie algebra bracket, and is called the Lie-Poisson bracket. The choice of the actual Hamiltonian is a secondary step coming from physics. For example, the motion of the rigid body comes from the invariance of the Lagrangian/Hamiltonian in the body frame with respect to rigid rotations, _i.e._, \(SO(3)\) symmetry; see the introduction to the theory in Section 2 below. Table 1 provides an incomplete list of physical examples admitting a description through Lie-Poisson or closely related brackets.
In this paper, we develop methods of constructing the activation maps directly from the brackets, predicting the dynamics with maps computed from particular explicit solutions of Lie-Poisson systems. The method is applicable to _all_ finite-dimensional Lie-Poisson systems, as well as to any Poisson system where explicit integration of the appropriate equations for the appropriate transformations is available. The advantage of utilizing explicit integration in Lie-Poisson equations to drastically speed up calculations at every time step was already noticed
in (McLachlan, 1993), although the application was limited to Hamiltonians of a certain form depending on the Poisson bracket. Our method is also applicable to arbitrary Hamiltonians; in fact, the Hamiltonian does not need to be known for the construction of the neural network. However, it is assumed that the underlying symmetry and the appropriate Lie-Poisson bracket are known. This is indeed often the case, as the detailed expression of the Hamiltonian in terms of the variables is often only approximate and is driven by the modeling choices for the particular physical system (Tonti, 2013), in contrast to the Lie-Poisson bracket itself.
\begin{table}
\begin{tabular}{|c|c|} \hline Problem & Reference \\ \hline Rigid body & (Holm et al., 2009) \\ & (Marsden and Ratiu, 2013) \\ \hline Heavy top & (Holm et al., 2009) \\ & (Marsden and Ratiu, 2013) \\ \hline Underwater vehicles & (Leonard, 1997) \\ & (Leonard and Marsden, 1997) \\ & (Holmes et al., 1998) \\ \hline Plasmas & (Morrison, 1980), \\ & (Marsden and Weinstein, 1982), \\ & (Holm et al., 1985), \\ & (Holm and Tronci, 2010) \\ \hline Fluids & (Marsden and Weinstein, 1983), \\ & (Marsden et al., 1984), \\ & (Holm et al., 1985), \\ & (Morrison, 1998), \\ & (Morrison et al., 2006) \\ \hline Geophysical fluid dynamics & (Weinstein, 1983), \\ & (Holm, 1986), \\ & (Salmon, 2004) \\ \hline Complex and nematic fluids & (Holm, 2002), \\ & (Gay-Balmaz and Ratiu, 2009), \\ & (Gay-Balmaz and Tronci, 2010) \\ \hline Molecular strand dynamics & (Ellis et al., 2010), \\ & (Gay-Balmaz et al., 2012) \\ \hline Fluid-structure interactions & (Gay-Balmaz and Putkaradze, 2019) \\ \hline Hybrid quantum-classical dynamics & (Gay-Balmaz and Tronci, 2022), \\ & (Gay-Balmaz and Tronci, 2023) \\ \hline \end{tabular}
\end{table}
Table 1: A short description of some physical problems that can be written in Lie-Poisson form and related Poisson brackets.
The novel contributions of the paper are as follows:
1. We show that a large class of Poisson systems, namely Lie-Poisson systems obtained by symmetry reduction, allow explicit construction of maps for every particular Lie-Poisson bracket.
2. These maps are global, in contrast to Lie-Darboux coordinates that are only local and may need to be re-computed depending on the position of solution in the configuration manifold.
3. By construction, these maps preserve Casimirs of each particular bracket exactly, which is not necessarily the case for PNNs, LocSympNets, SymLocSympNets and any other methods known to us.
## 2 An introduction to the Lie-Poisson equations
### General introduction: from Lagrangian to Hamiltonian description
We start with a brief introduction to the origin of Lie-Poisson systems, to introduce some notation and to show why this particular type of approach is essential for many physical problems. We consider only finite-dimensional systems here, so as not to make the exposition too abstract. Suppose a mechanical system is described by the coordinates \(\mathbf{q}\) and velocities \(\dot{\mathbf{q}}\), with \(\mathbf{q}\) lying on some configuration manifold \(Q\) of dimension \(n\). Hamilton's action principle is based on the Lagrangian function \(L(\mathbf{q},\dot{\mathbf{q}})\) (possibly depending on time) and on the action \(S=\int_{t_{0}}^{t_{f}}L(\mathbf{q},\dot{\mathbf{q}})\mathrm{d}t\), and imposes the condition that the variations of the action vanish for variations of \(\mathbf{q}\) that are fixed at the boundaries \(t=t_{0},t_{f}\)
\[\delta S=\delta\int_{t_{0}}^{t_{f}}L(\mathbf{q},\dot{\mathbf{q}})\mathrm{d}t= 0\,,\quad\delta\mathbf{q}(t_{0})=\delta\mathbf{q}(t_{f})=0\,. \tag{7}\]
In the Lagrangian approach, one takes the variations of (7) and gets the Euler-Lagrange equations, which are second order equations in \(\mathbf{q}\). In the Hamiltonian approach, one introduces the momenta \(\mathbf{p}=\frac{\partial L}{\partial\dot{\mathbf{q}}}\) and assumes that this relation can be inverted for each \(\mathbf{q}\), giving the velocities as \(\dot{\mathbf{q}}=\dot{\mathbf{q}}(\mathbf{q},\mathbf{p})\). One then defines the Hamiltonian function \(H(\mathbf{p},\mathbf{q})=\mathbf{p}\cdot\dot{\mathbf{q}}(\mathbf{q},\mathbf{p} )-L(\mathbf{q},\dot{\mathbf{q}}(\mathbf{q},\mathbf{p}))\), and the Euler-Lagrange equations of motion are equivalent to the _canonical_ Hamilton equations:
\[\dot{\mathbf{q}}=\frac{\partial H}{\partial\mathbf{p}}\,,\quad\dot{\mathbf{p}} =-\frac{\partial H}{\partial\mathbf{q}}\,. \tag{8}\]
Any function \(F(\mathbf{q},\mathbf{p})\) evolves according to the _canonical bracket_
\[\frac{dF}{dt}=\{F,H\}=\frac{\partial F}{\partial\mathbf{q}}\cdot\frac{\partial H }{\partial\mathbf{p}}-\frac{\partial F}{\partial\mathbf{p}}\cdot\frac{\partial H }{\partial\mathbf{q}}\,. \tag{9}\]
One can see that the bracket (9) satisfies the following properties that are valid for any functions \(F(\mathbf{q},\mathbf{p})\), \(G(\mathbf{q},\mathbf{p})\) and \(H(\mathbf{q},\mathbf{p})\):
1. Antisymmetry: \(\{F,H\}=-\{H,F\}\),
2. Linearity in each component: \(\{aF+bG,H\}=a\{F,H\}+b\{G,H\}\), \(a,b\in\mathbb{R}\),
3. Leibniz rule (acts as a derivation): \(\{FG,H\}=F\{G,H\}+G\{F,H\}\),
4. Jacobi identity: \(\{F,\{G,H\}\}+\{H,\{F,G\}\}+\{G,\{H,F\}\}=0\).
After defining the total phase space \(\mathbf{u}=(\mathbf{q},\mathbf{p})\), the canonical bracket (4) and Hamilton's equations of motion can be written in coordinates as
\[\{F,H\}=\frac{\partial F}{\partial\mathbf{u}}\cdot\mathbb{J}\frac{\partial H}{\partial\mathbf{u}}\,,\quad\dot{\mathbf{u}}=\mathbb{J}\frac{\partial H}{\partial\mathbf{u}}\,,\quad\mathbb{J}=\left(\begin{array}{cc}0&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0\end{array}\right)\,, \tag{10}\]
where \(\mathbb{I}_{n}\) is the \(n\times n\) unit matrix. From the definition of \(\mathbb{J}\) above, \(\mathbb{J}^{-1}=\mathbb{J}^{T}=-\mathbb{J}\).
### General Poisson brackets
A general Poisson bracket \(\{F,H\}\) satisfies all four properties of the canonical Poisson bracket above, but cannot be necessarily expressed in coordinates as (10). Instead, the bracket and corresponding equations of motion are described more generally in local coordinates as
\[\{F,H\}=\frac{\partial F}{\partial\mathbf{u}}\cdot\mathbb{B}(\mathbf{u})\frac {\partial H}{\partial\mathbf{u}}\,,\qquad\dot{\mathbf{u}}=\mathbb{B}(\mathbf{ u})\frac{\partial H}{\partial\mathbf{u}}\,. \tag{11}\]
In order for the bracket (11) to be Poisson, the matrix \(\mathbb{B}(\mathbf{u})\), also known as the Poisson tensor, must be antisymmetric and satisfy the Jacobi identity, a condition involving both the matrix \(\mathbb{B}(\mathbf{u})\) and its first derivatives. If such a bracket can be found, the system is called (non-canonical) Poisson.
Special attention should be paid to the case when the matrix \(\mathbb{B}\) is degenerate. In that case, there very often are special functions for that particular bracket, called _Casimirs_, which are conserved for _any_ Hamiltonian. By definition, a Casimir function \(C\) satisfies
\[\{F,C\}=0\quad\text{for all $F$}\,. \tag{12}\]
Any evolution must occur in such a way that all Casimirs of the system are conserved. Geometrically, the motion is only possible on the intersection of Casimir level sets, no matter what the Hamiltonian for the system is.
As we have seen, the motion of mechanical systems without friction is governed by canonical Poisson brackets, so the appearance of a non-canonical bracket (11) is somewhat mysterious. It turns out that in the presence of symmetries, one can drastically reduce the degrees of freedom of the canonical Hamiltonian system by expressing the dynamics in terms of reduced coordinates, such as body or spatial coordinates. In such reduced variables, the dynamics is still governed by a Poisson bracket, which is, however, no longer canonical. One of the most important examples is the case of a system whose configuration space is a Lie group: \(Q=G\). Let us consider the case of a rigid body. The configuration manifold is the Lie group of rotation matrices in \(\mathbb{R}^{3}\), also known as \(G=SO(3)\). This group consists of all \(3\times 3\) orthogonal matrices \(\Lambda\) (_i.e._, \(\Lambda^{T}\Lambda=\mathbb{I}_{3}\)) with determinant equal to 1. The Lagrangian depends on the variables \((\Lambda,\dot{\Lambda})\) and is just the kinetic energy. If one were to write either the Euler-Lagrange equations or the canonical Hamilton equations for \(\Lambda\), obtained by parameterizing \(SO(3)\) using a local representation of rotation matrices, such as rotation angles, one would end up with complicated and unwieldy equations. Instead, the equations of motion for the rigid body can be efficiently written in terms of the angular velocity in the body frame \(\mathbf{\Omega}\), using the tensor of inertia \(\mathbb{I}\), as:
\[\mathbb{I}\dot{\mathbf{\Omega}}=-\mathbf{\Omega}\times\mathbb{I}\mathbf{ \Omega}\,. \tag{13}\]
As it turns out, the equations (13) can be understood from the point of view of symmetry. The rigid body kinetic energy is invariant with respect to _left_ rotations \(\Lambda\to R\Lambda\), where \(R\in SO(3)\) is a fixed rotation matrix. We can thus express the kinetic energy in terms of the antisymmetric \(3\times 3\) matrices \(\widehat{\Omega}=\Lambda^{T}\dot{\Lambda}\), which take on the role of angular velocities. One can recover the vector \(\mathbf{\Omega}\) from \(\widehat{\Omega}_{ij}=\epsilon_{ijk}\mathbf{\Omega}_{k}\), with \(\epsilon_{ijk}\) being the Levi-Civita symbol. The equations of motion (13) can be written in terms of the body angular momentum \(\mathbf{\Pi}=\mathbb{I}\mathbf{\Omega}\), with the utilization of the rigid body bracket:
\[\dot{\mathbf{\Pi}}=-\frac{\partial H}{\partial\mathbf{\Pi}}\times\mathbf{\Pi }\,,\quad\{F,H\}:=-\mathbf{\Pi}\cdot\left(\frac{\partial F}{\partial\mathbf{ \Pi}}\times\frac{\partial H}{\partial\mathbf{\Pi}}\right)\,. \tag{14}\]
The bracket in (14), as it turns out, satisfies the four properties of a Poisson bracket and is expressed in terms of the Lie bracket on the Lie algebra of \(SO(3)\), given
by the vector product, stemming from the invariance of the system with respect to \(SO(3)\) rotations.
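To make the Casimir property of this bracket concrete: \(C(\mathbf{\Pi})=|\mathbf{\Pi}|^{2}\) satisfies \(\{F,C\}=0\) for _every_ \(F\). A short symbolic check (our own illustration) is sketched below.

```python
import sympy as sp

Pi = sp.Matrix(sp.symbols('Pi1 Pi2 Pi3'))
gradF = sp.Matrix(sp.symbols('F1 F2 F3'))   # arbitrary gradient of F
C = Pi.dot(Pi)                              # candidate Casimir |Pi|^2
gradC = sp.Matrix([sp.diff(C, p) for p in Pi])

# Rigid body bracket {F, C} = -Pi . (gradF x gradC), as in (14).
print(sp.simplify(-Pi.dot(gradF.cross(gradC))))   # 0, so C is a Casimir
```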
These ideas can be generalized as follows, see (Marsden and Ratiu, 2013, Chap.10). For a general system defined on a Lie group \(G\), one naturally has the associated Lie algebra \(\mathfrak{g}\) with Lie bracket denoted as \([\alpha,\beta]\) for \(\alpha,\beta\in\mathfrak{g}\). If \(\{e_{a}\}\), \(a=1,\ldots,n\) is a basis of \(\mathfrak{g}\), then the Lie bracket is locally expressed in terms of the structure constants \(C^{d}_{ab}\) such that
\[[e_{a},e_{b}]=C^{d}_{ab}e_{d}. \tag{15}\]
Let us denote by \(\langle\mu,\alpha\rangle\) the duality pairing between vectors \(\alpha\) in the Lie algebra \(\mathfrak{g}\) and co-vectors (or momenta) \(\mu\) in the dual space \(\mathfrak{g}^{*}\) to \(\mathfrak{g}\). The partial derivatives of functions \(F,H:\mathfrak{g}^{*}\rightarrow\mathbb{R}\) with respect to \(\mu\) thus belong to \(\mathfrak{g}\), and one can define the _Lie-Poisson bracket_ derived from the Lie bracket as follows:
\[\{F,H\}=\pm\left\langle\mu,\left[\frac{\partial F}{\partial\mu}\,,\frac{ \partial H}{\partial\mu}\right]\right\rangle\,. \tag{16}\]
We refer to A for the explanation of the \(\pm\) sign and for the relation between this bracket and the canonical bracket. In terms of coordinates, the bracket (16) is a particular case of (11) with the matrix \(\mathbb{B}(\mu)\) defined as
\[\mathbb{B}_{ab}(\mu)=\pm C^{d}_{ab}\mu_{d} \tag{17}\]
and the Lie-Poisson equations are expressed in coordinates as
\[\dot{\mu}_{a}=\pm C^{d}_{ab}\mu_{d}\frac{\partial H}{\partial\mu_{b}}\,. \tag{18}\]
One can verify that the Lie-Poisson bracket (16) satisfies all the conditions of a general Poisson bracket. While the Lie-Poisson brackets appear because of the fundamental considerations of the symmetries of the physical system, the Hamiltonian is a modelling choice related to physics. Thus, we assume that if a system possesses a Lie-Poisson bracket, it is known explicitly _a priori_ and does not need to be computed or determined from the data. However, the dependence of the Hamiltonian on its arguments and parameters is not known and must be determined.
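As a concrete instance of (17)-(18), the sketch below assembles \(\mathbb{B}(\mu)\) from the \(\mathfrak{so}(3)\) structure constants and checks numerically that it reproduces the rigid body form \(\dot{\mathbf{\Pi}}=-\partial H/\partial\mathbf{\Pi}\times\mathbf{\Pi}\) of (14); the minus sign in \(\mathbb{B}\) is the branch of (17) matching that bracket.

```python
import numpy as np

# so(3) structure constants: [e_a, e_b] = eps_{abd} e_d.
eps = np.zeros((3, 3, 3))
for a, b, d in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, d], eps[b, a, d] = 1.0, -1.0

def B(mu):
    """Lie-Poisson tensor B_ab(mu) = -C^d_ab mu_d of (17)."""
    return -np.einsum('abd,d->ab', eps, mu)

rng = np.random.default_rng(0)
mu, dH = rng.normal(size=3), rng.normal(size=3)
# Equation (18) vs. the rigid body form Pi_dot = -dH x Pi.
print(np.allclose(B(mu) @ dH, -np.cross(dH, mu)))   # True
```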
## 3 LPNets as Poisson maps for a general system
We are now ready to consider the theory of structure-preserving neural networks. We follow the ideas originally introduced in (Jin et al., 2020) as SympNets. The idea behind SympNets is to learn in the space of available symplectic
transformations for a canonical system. Further, (Jin et al., 2022) introduced a generalization of this method to an arbitrary Poisson system by using the Lie-Darboux theorem, which states that every Poisson system can be locally transformed into a canonical form, implementing these ideas as PNNs. This theory was further extended in (Patel et al., 2022; Zhang et al., 2022) for thermodynamic systems using the GENERIC metriplectic bracket approach.
Mathematically speaking, previous works in this avenue of thinking (Jin et al., 2022, 2020) and (Bajars, 2023) seek to determine a mapping \(\phi_{h}\) taking \(\mathbf{y}_{i}\) to the next computational point \(\mathbf{y}_{i+1}\), satisfying as many restrictions preserving the structure of the actual flow as possible. The authors developed universal mappings that are symplectic in the canonical coordinates \((\mathbf{q},\mathbf{p})\), and sought the solution as a combination of these mappings. Since the mappings are universal for all finite-dimensional Hamiltonian systems, one then needed to prove the convergence and completeness of combinations of these mappings, especially when coupled to the Lie-Darboux theorem, which maps the Poisson system to a canonical form.
We build on the ideas of SympNets and Poisson nets by directly constructing the transformations related to the Poisson systems (_Poisson maps_) for the _known_ Lie-Poisson bracket, and using them as elements for construction of LPNets.
We first recall the useful concept of _Poisson maps_.
**Definition 3.1**.: _(Marsden and Ratiu, 2013, SS10.3) Let \((P_{i},\{\cdot,\cdot\}_{i})\), \(i=1,2\) be two Poisson manifolds. A mapping \(f:P_{1}\to P_{2}\) is Poisson if it preserves the Poisson bracket, i.e., for all functions \(F,G:P_{2}\to\mathbb{R}\)_
\[\{F,G\}_{2}\circ f=\{F\circ f,G\circ f\}_{1}\,. \tag{19}\]
A critical piece of information for our further progress is contained in the following result.
**Theorem 3.2** (Hamiltonian flows are Poisson).: _(Marsden and Ratiu, 2013, Thm. 10.3.1.) Consider a Poisson manifold \((P,\{\cdot,\cdot\})\) and a Hamiltonian \(H:P\to\mathbb{R}\). Let \(\phi_{t}(\mathbf{u}_{0})\) be the flow of the Poisson system associated with \(H\), see (11), which maps the initial conditions \(\mathbf{u}_{0}\) to the solution \(\mathbf{u}(t)\) at time \(t\). Then, the mapping \(\phi_{t}\) is Poisson, i.e., it satisfies:_
\[\{F\circ\phi_{t},G\circ\phi_{t}\}=\{F,G\}\circ\phi_{t}\,, \tag{20}\]
_for all functions \(F,G:P\to\mathbb{R}\)._
Following this theorem, we can design Poisson maps for a particular Poisson bracket using a sequence of flows created by simplified Hamiltonians for the Lie-Poisson dynamics. The advantage of this approach is that the resulting transformations will be Poisson and will be able to approximate any flow locally for the particular system considered. The disadvantage of our approach is inherently intertwined with the advantages: the mappings have to be constructed explicitly for every Lie-Poisson bracket. Fortunately, as we show below, this is possible as these mappings are derived as solutions of a linear system of ODEs. The method presented here is reminiscent of the Hamiltonian splitting methods used in numerical analysis (McLachlan, 1993; McLachlan and Quispel, 2002), reformulated for the purpose of data-based computations and the use of neural networks to find appropriate parameters of the Hamiltonians.
## 4 A general application to finite-dimensional Lie groups
The key to this paper lies in considering the evolution of the momentum \(\mu\) coming from the equations (18) for particular expressions for the Hamiltonian, namely, Hamiltonians linear in momenta
\[H(\mu)=\langle\alpha,\mu\rangle=\alpha^{a}\mu_{a}\,, \tag{21}\]
where \(\alpha^{a}\) are some constants that are to be found based on the learning procedure. The flow generated by the Hamiltonian (21) is given by a linear equation in \(\mu\) that can be written in two equivalent ways (choosing the \(+\) sign in (18)):
\[\dot{\mu}_{a}=C^{d}_{ab}\alpha^{b}\mu_{d}:=\mathbb{M}(\alpha)^{d}_{a}\mu_{d}= \mathbb{N}(\mu)_{ab}\alpha^{b}\,. \tag{22}\]
The number of possible dimensions of \(\alpha\) is exactly equal to the dimension of the momentum space. However, the "effective" dimension of this space may be less, and is related to the dimension of the Lie algebra \(n\) minus the dimension of the kernel of the operator \(\mathbb{N}(\mu):\mathfrak{g}\to\mathfrak{g}^{*}\), see Remark 4.1. The operators \(\mathbb{M}(\alpha)\) acting on the space of momenta \(\mu\) and \(\mathbb{N}(\mu)\) acting on the space of \(\alpha\) can be described in the coordinate-free form as \(\mathrm{ad}^{*}_{\alpha}\,\square\) and \(\mathrm{ad}^{*}_{\square}\,\mu\), respectively, see A.
Let us assume that there are \(d\) independent Casimir functions \(C_{j}(\mu)\), \(j=1,...,d\). From (12) and (17) such functions satisfy \(C^{d}_{ab}\frac{\partial C_{j}}{\partial\mu_{b}}\mu_{d}=0\) for all \(\mu\), i.e., \(\partial C_{j}/\partial\mu\) must belong to the kernel of the operator \(\mathbb{N}(\mu)\). We shall consider the effective dimension of the space of all possible \(\alpha\) to be exactly \(n-d\), where \(n\) is the dimension of the momentum space and \(d\) is the number of independent Casimirs. Thus, locally the vectors \(\alpha\) in this effective space form exactly the right number
of local tangent vectors to the intersection of Casimir surfaces to reach any point locally using a flow generated by the Poisson map. The next step is to compute that map exactly.
**Remark 4.1** (On the effective number of dimensions).: _In general, the number of null directions of \(\mathbb{N}(\mu_{0})\), denoted as \(k(\mu_{0})\) may depend on the momenta \(\mu_{0}\in\mathfrak{g}^{*}\) (or, more precisely, on the actual coadjoint orbit the solution is on), whereas the number of independent Casimirs \(d\) is fixed. The dimension of the image of the map \(\xi\in\mathfrak{g}\mapsto\mathbb{N}(\mu_{0})\xi=\mathrm{ad}_{\xi}^{*}\,\mu_{0} \in\mathfrak{g}^{*}\) is the dimension of the coadjoint orbit of \(\mu_{0}\), denoted as \(\mathcal{O}_{\mu_{0}}\). Thus, in general, we have \(k(\mu_{0})+\dim(\mathcal{O}_{\mu_{0}})=n\) and we have \(d\leq k(\mu_{0})\) for almost all \(\mu_{0}\), but not necessarily \(d=k(\mu_{0})\)2. However, the data are exceptionally unlikely to lie on orbits with high codimension \(k(\mu)>d\), and we are going to assume that the effective number of dimensions of \(\alpha\) is \(n-d\). If the data was obtained from one of the orbits with high codimension, one would amend formula (26) below using the information about that exceptional orbit. Note that our method preserves the general form of coadjoint orbits in all cases, whether the orbit is exceptional or not. This fact could be useful in data-based computations of exceptional orbits. It is an interesting question which we will address in the follow-up work._
Footnote 2: Note that there are examples of Lie algebras whose generic coadjoint orbits have codimension strictly bigger than the number of independent Casimirs.
Equations (22) are linear differential equations in \(\mu\), and the solutions of these equations can be (in principle) found in explicit form. This solution can be written in compact form as
\[\mathbb{T}(t,\alpha)\mu_{0}=e^{\mathbb{M}(\alpha)t}\mu_{0}\,. \tag{23}\]
The mappings \(\mathbb{T}(t,\alpha)\) defined in (23) satisfy all the requirements as the building blocks for the neural networks.
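A minimal sketch of one such building block for \(\mathfrak{so}(3)\) follows (anticipating the rigid body example of Section 5, whose elementary flow (29) has exactly this form): the matrix exponential of (23) propagates \(\mu\) and preserves the Casimir \(|\mu|^{2}\) to machine precision by construction.

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Hat map: hat(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def T(h, alpha, mu0):
    """Building block (23) for so(3): exact flow of mu_dot = -alpha x mu,
    generated by the linear Hamiltonian H = <alpha, mu>."""
    return expm(-h * hat(alpha)) @ mu0

mu0 = np.array([1.0, -0.5, 2.0])
mu1 = T(0.1, np.array([0.3, 0.2, -0.7]), mu0)
print(np.linalg.norm(mu1) - np.linalg.norm(mu0))   # ~1e-16: Casimir preserved
```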
Let us denote by \(\mathbb{T}_{a}(t,\alpha^{a})\), \(a=1,\ldots,n\), the map obtained by setting the \(a\)-th component of \(\alpha\) to \(\alpha^{a}\) and all other components to zero. We divide each time step into \(n\) substeps \(h_{a}\) with \(\sum_{a}h_{a}=h\). Notice that the number of substeps is \(n\) and not \(n-d\), where \(d\) is the number of Casimirs, as we explain later.
These mappings satisfy the following conditions:
1. The mappings \(\mathbb{T}_{a}(h_{a},\alpha^{a})\) are Poisson, see (20), for the Lie-Poisson bracket,
2. The mappings conserve all Casimirs of the system,
Footnote 3: And, in fact, these maps also preserve all types of coadjoint orbits, _i.e._, they accurately represent the coadjoint action \(\mathrm{Ad}_{g}^{*}\mu\). This information would be useful if we were to compute exceptional orbits, which we will not do here.
3. Any two points close enough to each other on the same Casimir surface can be connected using a combination of mappings \(\mathbb{T}_{a}(h_{a},\alpha^{a})\).
The procedure of constructing LPNets is as follows:
1. Find explicit solutions for (23) by solving equations (22).
2. Define \(\bar{\alpha}=(\alpha^{1},\ldots,\alpha^{n})\) and \[\mathbb{T}(\bar{\alpha})=\mathbb{T}_{n}(h_{n},\alpha^{n})\circ\mathbb{T}_{n-1 }(h_{n-1},\alpha^{n-1})\circ\ldots\circ\mathbb{T}_{1}(h_{1},\alpha^{1})\,.\] (24) For the set of \(N\) data pairs \((\mu_{i}^{0},\mu_{i}^{f})\), \(i=1,\ldots N\) set up a minimization procedure to find the set of numbers \(\bar{\alpha}\) for each \(\mu_{i}^{0}\), minimizing the "individual" square loss \[\bar{\alpha}_{i}=\arg\min\left|\mathbb{T}(\bar{\alpha}_{i})\mu_{i}^{0}-\mu_{i }^{f}\right|^{2}\] (25) In most of the examples we consider here, we can find \(\bar{\alpha}\) analytically; however, in more general situations, one could also utilize a root-finding or a minimization procedure finding \(\bar{\alpha}_{i}\) for every pair of data points.
3. Since \(\bar{\alpha}\) are only defined up to a vector normal to the Casimir surface, in order to make the data consistent, we need to project out the appropriate components of the Casimirs. We will need to find coefficients \(p_{j}\), \(j=1,\ldots,d\) such that the projection of \(\bar{\alpha}\) on the gradients of the Casimirs vanishes (see the sketch after this list): \[\bar{\alpha}_{i}\rightarrow\bar{\alpha}_{i}-\sum_{j=1}^{d}p_{j}\left.\frac{\partial C_{j}}{\partial\mu}\right|_{\mu=\mu_{i}^{0}}\,,\quad\text{with}\quad\left\langle\bar{\alpha}_{i}\,,\,\left.\frac{\partial C_{j}}{\partial\mu}\right|_{\mu=\mu_{i}^{0}}\right\rangle=0\,,\quad j=1,\ldots,d,\] (26) where \(C_{j}\), \(j=1,\ldots,d\) are a set of \(d\) independent Casimir functions.
4. Create a neural network approximating the mapping \(\mu_{i}\rightarrow\bar{\alpha}_{i}\). This function will be denoted as \(\bar{\alpha}=NN(\mu)\).
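A small sketch of the projection step (26) is shown below; for \(d\) Casimirs it amounts to solving a \(d\times d\) Gram system for the coefficients \(p_{j}\). The helper name is hypothetical.

```python
import numpy as np

def project_alpha(alpha, casimir_grads):
    """Remove from alpha its components along the Casimir gradients,
    enforcing the orthogonality conditions of (26).
    casimir_grads: (d, n) array of dC_j/dmu evaluated at mu_i^0."""
    G = casimir_grads @ casimir_grads.T            # d x d Gram matrix
    p = np.linalg.solve(G, casimir_grads @ alpha)
    return alpha - casimir_grads.T @ p

# Example with a single Casimir gradient (rigid body: dC/dmu = 2 mu).
mu0 = np.array([1.0, -0.5, 2.0])
alpha = project_alpha(np.array([0.2, 0.4, -0.1]), 2.0 * mu0[None, :])
print(np.dot(alpha, mu0))   # ~0: no component normal to the Casimir sphere
```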
Since the operators \(\mathbb{T}_{a}\), in general, do not commute, the composition order should be fixed ahead of time and also be the same for all data pairs. A different choice of the composition order will lead to a different (but of course equivalent) set of parameters \(\alpha\). The choice of the composition order of \(\mathbb{T}_{a}\) has to be preserved in the prediction step as well. Also, we do not consider different time sub-steps \(h_{a}\) here, taking them all to be equal. It is possible that the choice of \(h_{a}\) can be made to improve accuracy or convergence properties of the scheme.
The solution starting at a given initial condition \(\mu_{0}\) can then be evaluated using the neural network. If \(\mu=\mu_{j}\) at the time step \(j\), then the value of the solution at the next step is computed as
\[\mu_{j+1}=\mathbb{T}(\bar{\alpha}_{j}^{\rm est})\mu_{j},\quad\bar{\alpha}_{j}^ {\rm est}=NN(\mu_{j})\,. \tag{27}\]
Note that we never compute the actual Hamiltonian, its gradients, or the equations of motion. Instead, we just compute the composition of Poisson transformations reproducing the dynamics in phase space of some unknown Poisson system - with the known Lie-Poisson bracket.
We will now apply this procedure to several particular examples of physical systems which have a Lie-Poisson bracket. We shall call this method Local _LPNets_, or simply LPNets, as the exponential representation of the map is only defined locally. The advantage of this method is that when operating on Lie groups, we are guaranteed to be able to reach any point locally with an exponential map, so the completeness is automatic. However, this mathematical simplicity has to be compensated by the necessity to apply a neural network to learn the mappings from data points in the neighborhood of a trajectory. After our description of LPNets, we develop a more complex procedure which derives Lie-Poisson activation modules that directly extend the work of (Bajars, 2023; Jin et al., 2022, 2020). We call these methods global Lie-Poisson networks, or G-LPNets. Using the example of a rigid body motion, we show that G-LPNets provide a promising simple, accurate and highly computationally effective neural network.
A more general derivation performed in the coordinate-free form and in the general language of modern geometric approach is presented in A.
## 5 Test cases for LPNets
We choose to use the same test cases as in (Bajars, 2023; Jin et al., 2022), except for an additional test for \(SE(3)\) (the underwater vehicle). No detailed comparisons are performed between our results and those in (Bajars, 2023; Jin et al.,
2022). This is because the accuracy of each method depends strongly on the structure of the neural network, the accuracy and distribution of the data used in training, and the specific learning procedures employed, which prevents a fair comparison between methods. Instead, we will outline the performance of LPNets on these problems and contrast it with general results from (Bajars, 2023; Jin et al., 2022). In all cases, LPNets conserve the Casimir functions to machine precision, unlike the other approaches.
In what follows, we will develop the prediction of parameters for LPNets using a dense neural network structure. Naturally, such a dense network requires an appropriate number of data points to avoid overfitting. In Section 6, we show how to reduce the size of that network and achieve accuracy in the whole phase space with a particular network structure, which we call G-LPNets.
### Rigid body dynamics
_Motion of a rigid body as dynamics on the Lie group \(SO(3)\)._ The (Lie-)Poisson bracket, the Hamiltonian, and the corresponding equations of motion for the momenta \(\mathbf{\Pi}\) (measured in the body frame of reference) are (Holm et al., 2009):
\[\begin{split}\{F,G\}&=-\mathbf{\Pi}\cdot\left(\frac{ \partial F}{\partial\mathbf{\Pi}}\times\frac{\partial G}{\partial\mathbf{\Pi}}\right) \\ H(\mathbf{\Pi})&=\frac{1}{2}\mathbf{\Pi}\cdot\mathbb{I}^{- 1}\mathbf{\Pi}\\ \dot{\mathbf{\Pi}}&=-\mathbb{I}^{-1}\mathbf{\Pi}\times\mathbf{ \Pi}\,.\end{split} \tag{28}\]
Suppose there is a sequence of pairs of initial and final points of the transformation \((\mathbf{\Pi}_{i}^{0},\mathbf{\Pi}_{i}^{f})\), \(i=1,\ldots,N\), coming from some information that we call _ground truth_. If that sequence comes from a single trajectory of length \(N\), _i.e._, has the form \(\mathbf{\Pi}_{0},\mathbf{\Pi}_{1},\ldots,\mathbf{\Pi}_{N}\), we take \(\mathbf{\Pi}_{i}^{0}=\mathbf{\Pi}_{i}\), \(\mathbf{\Pi}_{i}^{f}=\mathbf{\Pi}_{i+1}\). However, our method does not explicitly assume the existence of a single trajectory for learning.
To find the Poisson map approximating the motion of the system at every time step \(i\), let us consider Hamiltonians \(H_{i}\) linear in momenta, _i.e._ having the form \(H_{i}(\mathbf{\Pi})=\mathbf{A}_{i}\cdot\mathbf{\Pi}\), where \(\mathbf{A}_{i}\) is some unknown constant vector that is different for every pair of points. The Hamiltonian flow generated by that Hamiltonian is described by
\[\dot{\mathbf{\Pi}}=-\frac{\partial H_{i}}{\partial\mathbf{\Pi}}\times\mathbf{\Pi}=- \mathbf{A}_{i}\times\mathbf{\Pi}\,. \tag{29}\]
Note that we can also consider a Hamiltonian of the form \(H(\mathbf{\Pi})=f(\mathbf{A}\cdot\mathbf{\Pi})\). The dynamics induced by this Hamiltonian preserve the quantity \(\mathbf{A}\cdot\mathbf{\Pi}\), as one
can see from (29). Thus, that extension corresponds to a simple redefinition of \(\mathbf{A}\) by scaling and does not bring extra insight into the problem.4 The motion defined by (29) is a rotation of the vector \(\mathbf{\Pi}\) about the axis \(\mathbf{A}_{i}\) with a constant angular velocity. The flow preserves the Lie-Poisson bracket, since it is a Hamiltonian flow with the same bracket. The dynamics (28) also preserve the Casimir \(C(\mathbf{\Pi})=|\mathbf{\Pi}|^{2}\) exactly. After the time \(t=h\), the flow (29) rotates the vector of angular momentum \(\mathbf{\Pi}\) by an angle \(\phi(\mathbf{A}_{i})=|\mathbf{A}_{i}|h\) around the axis \(\mathbf{n}_{\mathbf{A}_{i}}=\mathbf{A}_{i}/|\mathbf{A}_{i}|\). In other words, if \(\mathbb{R}(\mathbf{n},\phi)\) is the matrix of rotation around the axis \(\mathbf{n}\) by the angle \(\phi\), then (29) transforms the momentum as \(\mathbf{\Pi}\rightarrow\mathbb{R}(\mathbf{n}_{\mathbf{A}_{i}},\phi(\mathbf{A}_{i}))\mathbf{\Pi}\).
Footnote 4: Of course, \(\mathbf{A}\cdot\mathbf{\Pi}\) is not a constant for the general system (28) - that is only true for the particular choice of the Hamiltonian \(H(\mathbf{\Pi})=f(\mathbf{A}\cdot\mathbf{\Pi})\).
Everywhere in this paper, the ground truth is obtained by the BDF integrator in Python's _Scipy_ package, with relative and absolute tolerances set at \(10^{-13}\) and \(10^{-14}\), respectively. Note that this numerical method is not expected to conserve Casimirs, so our method will actually be more precise than the ground truth. We note that while Lie-Poisson integrators preserving the Lie-Poisson structure do exist (Marsden et al., 1999), a precise implementation needs to be rederived in explicit form for each particular Lie-Poisson system. We found it more appropriate to use a high-accuracy algorithm common to all problems for a fair comparison, rather than build an algorithm tailored to each particular problem. The accuracy of the BDF algorithm is more than sufficient for our ground-truth calculations and for providing comparisons with previous works. Thus, to be consistent, we used a high-precision non-symplectic integrator in all cases, carefully checking its accuracy in all applications.
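A sketch of this ground-truth generation for the rigid body (28), with SciPy's BDF integrator and the stated tolerances, might look as follows; the diagonal inertia values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

I_diag = np.array([1.0, 2.0, 3.0])           # placeholder inertia tensor

def rigid_body_rhs(t, Pi):
    """Rigid body equations (28): Pi_dot = -I^{-1} Pi x Pi."""
    return -np.cross(Pi / I_diag, Pi)

h, N = 0.1, 1000
sol = solve_ivp(rigid_body_rhs, (0.0, N * h),
                np.array([1 / np.sqrt(2), -1 / np.sqrt(2), 1.0]),
                method='BDF', t_eval=np.arange(N + 1) * h,
                rtol=1e-13, atol=1e-14)
Pi_traj = sol.y.T                             # (N + 1, 3) ground-truth points
```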
_Data preparation._ In principle, one should consider three angles of rotation. Given a sequence of begin-end pairs of momenta \((\mathbf{\Pi}_{i}^{0},\mathbf{\Pi}_{i}^{f})\), we compute \(\mathbf{A}_{i}\) as a function of \(\mathbf{\Pi}_{i}^{0}\) as follows:
\[\mathbf{A}_{i}=\frac{1}{h}\mathbf{\Pi}_{i}^{0}\times\mathbf{\Pi}_{i}^{f} \tag{30}\]
and the angle \(\theta_{i}\) of rotation from \(\mathbf{\Pi}_{i}^{0}\) to \(\mathbf{\Pi}_{i}^{f}\) as the shortest motion along the sphere \(|\mathbf{\Pi}|=\)const. The cross product contains the information on both the direction normal to \(\mathbf{\Pi}_{i}^{0}\) and \(\mathbf{\Pi}_{i}^{f}\), and the angle of rotation, as described above. The factor \(1/h\) is introduced to normalize the output data to be of order 1.
We could, of course, obtain \(\mathbf{A}_{i}\) by matching three successive rotations about, say, the Euler angles, and subtracting the corresponding rotation about the axis \(\mathbf{\Pi}_{i}^{0}\) or \(\mathbf{\Pi}_{i}^{f}\), applying equation (25) directly. However, the vector cross product, available only for \(SO(3)\), provides a simple and efficient alternative to this more complex procedure.
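The target computation (30) from consecutive trajectory points then reduces to a few lines; the sketch below reuses `Pi_traj` and `h` from the ground-truth sketch above.

```python
import numpy as np

# Training pairs (Pi_i^0, Pi_i^f) from consecutive trajectory points,
# with targets A_i = (1/h) Pi_i^0 x Pi_i^f from (30).
Pi0, Pif = Pi_traj[:-1], Pi_traj[1:]          # (N, 3) each
A_targets = np.cross(Pi0, Pif) / h            # (N, 3) outputs to learn
```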
_Neural network._ We construct a standard neural network that learns the mapping \(\mathbf{\Pi}\to\mathbf{A}\) from the data. The neural network has the three components of \(\mathbf{\Pi}\) as inputs and the three components of \(\mathbf{A}\) as outputs. Here and everywhere else, we utilize the package _Tensorflow_5. In the results shown in Figure 1, the learning is done on \(N=1000\) pairs produced by a single trajectory originating at \(\mathbf{\Pi}_{0}=(1/\sqrt{2},-1/\sqrt{2},1)\), with the interval between trajectory points given by \(h=0.1\).
Footnote 5: [https://www.tensorflow.org](https://www.tensorflow.org)
The neural network has three hidden layers of 16 neurons with the sigmoid activation function, with 659 trainable parameters. Of the 1000 input data pairs, 80% are used for training and 20% for evaluation, with the loss measured as the mean-square discrepancy between \(\mathbb{R}(\mathbf{n}_{\mathbf{A}_{i}},\phi(\mathbf{A}_{i}))\mathbf{\Pi}_{i}^{0}\) and \(\mathbf{\Pi}_{i}^{f}\). The Adam optimization algorithm is used with the learning rate starting at \(10^{-3}\). The loss and validation loss reach values of approximately \(3.8\cdot 10^{-9}\) and \(4.1\cdot 10^{-9}\) after \(10^{5}\) epochs.
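A minimal Keras sketch of the described network (three inputs, three hidden layers of 16 sigmoid neurons, three outputs, giving the quoted 659 trainable parameters) might look as follows; the plain MSE loss shown here stands in for the rotation-based discrepancy described in the text.

```python
import tensorflow as tf

# 3 -> 16 -> 16 -> 16 -> 3 dense network with sigmoid activations:
# (3*16+16) + 2*(16*16+16) + (16*3+3) = 659 trainable parameters.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation='sigmoid'),
    tf.keras.layers.Dense(16, activation='sigmoid'),
    tf.keras.layers.Dense(16, activation='sigmoid'),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='mse')
# model.fit(Pi0, A_targets, validation_split=0.2, epochs=100_000)
```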
_Prediction._ The starting point \(\mathbf{\Pi}_{0}\) of the predicted trajectory is taken to coincide with the endpoint of the learning trajectory, with the values \(\mathbf{\Pi}_{0}\simeq(0.43,1.33,-0.21)\). At each step, once \(\mathbf{\Pi}_{j-1}\) is known, the neural network predicts \(\mathbf{A}_{j}\), from which the rotation axis \(\mathbf{n}_{\mathbf{A}_{j}}\) and the angle \(\phi_{j}\) are recovered. That prediction is used to produce the mapping \(\mathbf{\Pi}_{j}=\mathbb{R}(\mathbf{n}_{\mathbf{A}_{j}},\phi_{j})\mathbf{\Pi}_{j-1}\), for \(j=1\ldots m\) over the desired number of steps. The prediction by LPNets is compared with the ground-truth prediction obtained by the high-accuracy ODE solver described above. We perform 10000 time steps to reach \(t=1000\) and present the results in Figure 1 and Figure 2. Note that for clarity, only the first 2000 time steps, up to \(t=200\), are shown in the individual momenta plots in the left panel of Figure 1. The right panel of that figure shows that all available data coincide perfectly.
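A rollout sketch of this prediction loop, using SciPy's rotation utilities and the trained `model` from above, could read as follows. The angle recovery is our own reconstruction from the geometry of (30), where \(|\mathbf{A}_{j}|h=|\mathbf{\Pi}|^{2}\sin\phi_{j}\); it is valid for the small per-step rotations used here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lpnet_step(Pi, model, h):
    """One LPNets step (27) for the rigid body: rotate Pi about the
    predicted axis by the angle recovered from |A| h = |Pi|^2 sin(phi)."""
    A = model(Pi[None, :].astype(np.float32)).numpy()[0]
    phi = np.arcsin(np.clip(h * np.linalg.norm(A) / Pi.dot(Pi), -1.0, 1.0))
    return Rotation.from_rotvec(phi * A / np.linalg.norm(A)).apply(Pi)

Pi = np.array([0.43, 1.33, -0.21])    # start of the predicted trajectory
traj = [Pi]
for _ in range(10000):                # 10000 steps of h = 0.1, up to t = 1000
    Pi = lpnet_step(Pi, model, 0.1)
    traj.append(Pi)
```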
In Figure 2, we present the results for the conservation of the Hamiltonian \(H(\mathbf{\Pi})\) and the Casimir \(C(\mathbf{\Pi})=|\mathbf{\Pi}|^{2}\). The Hamiltonian is preserved to about 0.01% relative accuracy. The Casimir in the ground-truth solution is preserved to about \(10^{-9}\) accuracy. In the LPNets solution, the Casimir is preserved to machine precision, far exceeding the accuracy possible in the ground truth for this variable. On the right side of the figure, we plot the error of the solution as the \(L_{2}\) norm of the deviation between the two solutions (ground truth and LPNets) for each \(t\). The deviation grows roughly linearly in time, reaching values of about 0.5 after the time \(t=1000\).
This may come as a surprise given the excellent agreement on the right-hand side of Figure 1 for all \(t\). This phenomenon has a simple explanation: the high accuracy of the conservation laws presented in Figure 2 forces the solution to lie, with high accuracy, on the intersection of \(H=\)const (an ellipsoid) and \(C=\)const (a sphere).
_Comparison with the previous literature._ The case of rigid body motion was considered in (Bajars, 2023). The appropriate comparison is the learning of the single trajectory of the rigid body dynamics contained in §4.2.3 of that paper. The error of a single trajectory (ground truth vs. a solution obtained by the neural network) is of the same order as our results. The error in the Hamiltonian is somewhat better in our case, with the relative error being about \(10^{-4}\) (\(0.01\%\)) vs. \(0.008\) in (Bajars, 2023). However, that number depends on the particular realization of both neural networks and learning data, as we outlined above. The value of the Casimir \(C(\mathbf{\Pi})=|\mathbf{\Pi}|^{2}\) is conserved to machine precision in our case, whereas it would follow the general accuracy of computations in the previous work on the subject.
_Learning general dynamics of the rigid body._ In the above calculation, we have followed the method of (Bajars, 2023; Jin et al., 2022) and learned the dynamics continuing a _single_ trajectory. However, it is also possible to extend the LPNets to learn several trajectories simultaneously and predict the dynamics of a trajectory the method has not seen. We take the initial condition \(\mathbf{\bar{\Pi}}_{0}=(1/\sqrt{2},-1/\sqrt{2},1)\) and compute 20 ground truth trajectories with the initial
Figure 1: Left: Results of LPNets applied to the motion of a rigid body (red) versus ground truth (blue) for the individual momenta. Right: Parametric plot of the momenta in the phase space. The results are visually indistinguishable.
conditions \(\mathbf{\Pi}_{0}^{j}=\bar{\mathbf{\Pi}}_{0}+\epsilon_{j}\), where \(\epsilon_{j}\) is a uniformly distributed random variable in the cube \([-0.1,0.1]\times[-0.1,0.1]\times[-0.1,0.1]\). Each trajectory generates 200 pairs of ground truth mappings between the momenta at neighboring points, a total of 4000 data points. A neural network is constructed with a structure similar to the one used for learning a single trajectory: three inner layers with 32 neurons each, with the sigmoid activation function, and a total of 2339 trainable parameters. The neural network is trained using the Adam algorithm with the learning rate starting at \(10^{-3}\) and decaying exponentially to \(10^{-4}\), over \(10^{5}\) epochs. The final training and validation losses are slightly below \(10^{-6}\) and \(10^{-5}\), respectively.
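A sketch of this data-generation step follows; the principal moments of inertia below are placeholders (the actual values are those of Sec. 5.1), and the sign convention of the Euler equations is one common choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

I_diag = np.array([1.0, 2.0, 1.0])   # hypothetical principal moments of inertia
rng = np.random.default_rng(0)

def rigid_body(t, Pi):
    # Free rigid body in Lie-Poisson form: dPi/dt = Pi x (I^{-1} Pi)
    return np.cross(Pi, Pi / I_diag)

h, n_steps = 0.1, 200
Pi_bar = np.array([1 / np.sqrt(2), -1 / np.sqrt(2), 1.0])
pairs = []
for _ in range(20):
    Pi0 = Pi_bar + rng.uniform(-0.1, 0.1, size=3)   # epsilon_j in the cube [-0.1, 0.1]^3
    sol = solve_ivp(rigid_body, (0.0, n_steps * h), Pi0,
                    t_eval=np.arange(n_steps + 1) * h, rtol=1e-12, atol=1e-12)
    traj = sol.y.T
    pairs += [(traj[i], traj[i + 1]) for i in range(n_steps)]
# 20 trajectories x 200 steps = 4000 ground-truth pairs (Pi_i, Pi_{i+1})
```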
_Trajectory prediction using LPNets._ A trajectory is then constructed with the initial condition \(\mathbf{\Pi}_{0}=\bar{\mathbf{\Pi}}_{0}\), iterating over 10000 steps up to the time \(t=1000\). Even though all the learning data were taken a finite distance from this solution, LPNets faithfully reproduces the ground truth, as shown in Figure 3. While the solutions were computed until \(t=1000\), the left panel of Figure 3 only illustrates the evolution of momenta until \(t=200\) for clarity. This example for \(SO(3)\) shows that our method is capable of learning from multiple trajectories and capturing the general Poisson dynamics. Of course, one has to keep in mind that in order to achieve good global accuracy, one needs a rather dense covering of the configuration manifold with data points for learning, a task that may be difficult in many dimensions. A compromise presented here that is feasible to implement
Figure 2: Left: Conservation of the Hamiltonian \(H\) (top) and the Casimir \(C\) (bottom), comparing the results of LPNets (red) and ground truth (blue). Notice that LPNets conserves the Casimir exactly (to machine precision) and thus substantially exceeds the ground truth in the conservation of Casimirs. Right: The discrepancy between the results of LPNets and the ground truth. The discrepancy comes mostly from the time mismatch, whereas the amplitude of oscillations is conserved with high precision.
is to consider a few trajectories in the neighborhood of the desired trajectory for learning. To illustrate that point, we show the 3D plot of the trajectories and the corresponding learning data in the right panel of Figure 3. Note that all the data were used in the right panel, and there is no visible deviation between the ground truth and the solutions in 3D. We present the accuracy of the results compared to the ground truth, and the preservation of the conserved quantities (energy and Casimir), in Figure 4. The conservation of the Hamiltonian is satisfied with a relative accuracy of about 0.25% (0.001 in absolute terms), and the Casimir is conserved in our case to an absolute accuracy of about \(10^{-11}\), several orders of magnitude better than in the ground truth solution.
### Extended pendulum case
In order to compare our results directly with the Poisson Neural Networks developed in (Jin et al., 2022), we consider the extended pendulum test case from that paper. This example shows that our method extends beyond the Lie-Poisson case, as long as an explicit integration of the Poisson equations for linear Hamiltonians is available. This is the case here since the Poisson tensor \(\mathbb{B}(\mathbf{y})\) is affine in \(\mathbf{y}\).
Consider a standard pendulum of length 1 and mass 1, with the Hamiltonian
Figure 3: Left: Same results as in Figure 1, with the exception that the Neural Network is trained on several trajectories with random initial conditions different from the desired trajectory. The starting point is taken to be \(\boldsymbol{\Pi}_{0}=(1/\sqrt{2},-1/\sqrt{2},1)\). Right: Trajectories in 3D space, similar to the presentation in the right panel of Figure 1; the blue dot indicates the starting point. Trajectories from LPNets are shown in red, the trajectories used for data learning in black, and ground truth trajectories in blue.
\(H=\frac{1}{2}p^{2}-\cos q\). The equations of motion are written as
\[\dot{q}=\frac{\partial H}{\partial p}=p\,,\quad\dot{p}=-\frac{\partial H}{ \partial q}=-\sin q\,. \tag{31}\]
The paper (Jin et al., 2022) then introduces an extra variable \(c\) satisfying \(c=\)const, extending the system to the three-dimensional space
\[\frac{d}{dt}\left(\begin{array}{c}p\\ q\\ c\end{array}\right)=\left(\begin{array}{c}-\sin q\\ p+c\\ 0\end{array}\right)=\left(\begin{array}{ccc}0&-1&0\\ 1&0&0\\ 0&0&0\end{array}\right)\nabla_{(p,q,c)}\widetilde{H}\,, \tag{32}\]
with the new Hamiltonian \(\widetilde{H}=\frac{1}{2}p^{2}-\cos q+pc\). The paper (Jin et al., 2022) then makes a transformation of variables
\[\begin{split}&(p,q,c)=\theta(u,v,r)=(u,v,r-u^{2}-v^{2})\,,\\ &(u,v,r)=\theta^{-1}(p,q,c)=(p,q,p^{2}+q^{2}+c)\,.\end{split} \tag{33}\]
In the new variables \((u,v,r)\), the equations of motion become
\[\frac{d}{dt}\left(\begin{array}{c}u\\ v\\ r\end{array}\right)=\left(\begin{array}{ccc}0&-1&-2v\\ 1&0&2u\\ 2v&-2u&0\end{array}\right)\nabla_{(u,v,r)}K \tag{34}\]
Figure 4: Left: Conservation of the Hamiltonian \(H\) (top) and the Casimir \(C\) (bottom), comparing the results of LPNets (red) and ground truth (blue), for the case of global dynamics. Again, LPNets conserves the Casimir exactly (to machine precision) and thus substantially exceeds the ground truth in the conservation of Casimirs. Right: The discrepancy between the results of LPNets and the ground truth.
with the new Hamiltonian \(K(u,v,r)=\frac{1}{2}u^{2}-\cos v+ur-u^{3}-uv^{2}\). The system (34) is Poisson with the corresponding bracket
\[\{F,H\}=(\nabla_{\mathbf{y}}F)^{T}\cdot\mathbb{B}(\mathbf{y})\cdot\nabla_{ \mathbf{y}}H\,,\quad\mathbb{B}(\mathbf{y}):=\left(\begin{array}{ccc}0&-1&-2v \\ 1&0&2u\\ 2v&-2u&0\end{array}\right)\,, \tag{35}\]
where we have denoted \(\mathbf{y}=(u,v,r)^{T}\). The matrix \(\mathbb{B}\) defined in (35) is degenerate, and
\[C(\mathbf{y})=r-u^{2}-v^{2}=y_{3}-y_{1}^{2}-y_{2}^{2}\,, \tag{36}\]
which is just \(c\) in the old variables \((p,q,c)\), is a Casimir of the bracket (35). Indeed, one can readily check that \(\mathbb{B}(\mathbf{y})\cdot\nabla_{\mathbf{y}}C(\mathbf{y})=\mathbf{0}\). Moreover, one also checks that \(\mathbb{B}(\mathbf{y})\) only has a single eigenvalue of \(0\), so \(C(\mathbf{y})\) defined in (36) is the only Casimir. In order to apply the method of LPNets, we take the test Hamiltonian linear in \(\mathbf{y}\) as \(H(\mathbf{y})=\boldsymbol{\alpha}\cdot\mathbf{y}\). For test Hamiltonians of that type, the equations of motion become
\[\dot{\mathbf{y}}=\left(\begin{array}{c}-\alpha_{2}-2y_{2}\alpha_{3}\\ \alpha_{1}+2y_{1}\alpha_{3}\\ 2\alpha_{1}y_{2}-2\alpha_{2}y_{1}\end{array}\right)\,. \tag{37}\]
There are three test Hamiltonians to consider, \(H_{a}(\mathbf{y})=\alpha_{a}y_{a}\), \(a=1,2,3\) (no sum). Then, the equations of motion are
\[\begin{cases}H_{1}=\alpha_{1}y_{1}\Rightarrow\,\dot{y}_{1}=0\,,\dot{y}_{2}= \alpha_{1}\,,\dot{y}_{3}=2\alpha_{1}y_{2}\\ H_{2}=\alpha_{2}y_{2}\Rightarrow\,\dot{y}_{1}=-\alpha_{2}\,,\dot{y}_{2}=0\,, \dot{y}_{3}=-2\alpha_{2}y_{1}\\ H_{3}=\alpha_{3}y_{3}\Rightarrow\,\dot{y}_{1}=-2y_{2}\alpha_{3}\,,\dot{y}_{2}= 2y_{1}\alpha_{3}\,,\dot{y}_{3}=0\,.\end{cases} \tag{38}\]
Equations (38) are easily solved explicitly. Each Hamiltonian \(H_{a}=\alpha_{a}y_{a}\) leads to explicit expressions for an affine transformation \(\mathbf{T}(t,\alpha_{a},\mathbf{y}_{0})\) of the initial condition \(\mathbf{y}_{0}\) to the final solution after time \(t\):
\[H_{a}=\alpha_{a}y_{a}\,\Rightarrow\,\mathbf{y}=\mathbf{T}_{a}(t,\alpha_{a}, \mathbf{y}_{0}) \tag{39}\]
with the transformations \({\bf T}_{a}\) given by
\[{\bf T}_{1}(t,\alpha_{1},{\bf y}_{0})= \left(\begin{array}{c}y_{1}(0)\\ y_{2}(0)+\alpha_{1}t\\ y_{3}(0)+2t\alpha_{1}y_{2}(0)+\alpha_{1}^{2}t^{2}\end{array}\right) \tag{40}\] \[{\bf T}_{2}(t,\alpha_{2},{\bf y}_{0})= \left(\begin{array}{c}y_{1}(0)-\alpha_{2}t\\ y_{2}(0)\\ y_{3}(0)-2t\alpha_{2}y_{1}(0)+\alpha_{2}^{2}t^{2}\end{array}\right)\] \[{\bf T}_{3}(t,\alpha_{3},{\bf y}_{0})= \left(\begin{array}{c}y_{1}(0)\cos(2\alpha_{3}t)-y_{2}(0)\sin(2 \alpha_{3}t)\\ y_{1}(0)\sin(2\alpha_{3}t)+y_{2}(0)\cos(2\alpha_{3}t)\\ y_{3}(0)\end{array}\right)\,.\]
_Data preparation, Part 1: finding the transformations._ Suppose we have pairs of solutions \((\mathbf{y}_{i},\mathbf{y}_{i+1})\) obtained from snapshots of a single trajectory or several trajectories of equation (34); the time difference between the snapshots is \(\Delta t=h\). We separate the interval \(\Delta t=h\) between the snapshots into three equal sub-intervals (although other divisions of the time interval are also possible). Then, for each initial point of the pair \(\mathbf{y}_{i}\) we look for the sequence of parameters \((\alpha_{1},\alpha_{2},\alpha_{3})\) defining the transformations \(\mathbf{T}_{i}\) as in (40), such that
\[{\bf y}_{i}^{1} ={\bf T}_{1}\left(\frac{h}{3},\alpha_{1},{\bf y}_{i}\right)\,, \tag{41}\] \[{\bf y}_{i}^{2} ={\bf T}_{2}\left(\frac{h}{3},\alpha_{2},{\bf y}_{i}^{1}\right)\,,\] \[{\bf y}_{i}^{3} ={\bf T}_{3}\left(\frac{h}{3},\alpha_{3},{\bf y}_{i}^{2}\right)= {\bf y}_{i+1}\,.\]
In other words, \(\overline{\boldsymbol{\alpha}}_{i}=(\alpha_{1},\alpha_{2},\alpha_{3})({\bf y }_{i})\) should be such that the three transformations \({\bf T}_{1,2,3}\), performed in the corresponding order, map the initial snapshot of the pair \({\bf y}_{i}\) to the final snapshot of the pair \({\bf y}_{i+1}\).
For data that is not precise, the matching of \(\overline{\boldsymbol{\alpha}}_{i}\) to the data can be accomplished numerically using a gradient descent method, with subsequent removal of the components along the Casimir gradient \(\nabla C\). Indeed, each set of parameters \(\overline{\boldsymbol{\alpha}}_{i}=(\alpha_{1},\alpha_{2},\alpha_{3})({\bf y}_{i})\) determined in the previous step is only defined up to a multiple of \(\nabla C\). Since \(\nabla C\) is a zero eigenvector of the matrix \(\mathbb{B}({\bf y})\), changing
\[\boldsymbol{\alpha}_{i}\rightarrow\boldsymbol{\alpha}_{i}+k\nabla C \tag{42}\]
for arbitrary \(k\) does not change the result of composition of transformations \(\mathbf{T}_{3}\circ\mathbf{T}_{2}\circ\mathbf{T}_{1}\). We thus choose \(k\) in (42) such that the projection of \(\boldsymbol{\alpha}\) on \(\nabla C\) vanishes, _i.e._, take
\[\boldsymbol{\alpha}_{i}^{*}=\boldsymbol{\alpha}_{i}-\nabla C\frac{\boldsymbol{ \alpha}_{i}\cdot\nabla C}{|\nabla C|^{2}}\,. \tag{43}\]
However, in the case where the training data are _exact_, for example, obtained from a numerical simulation with very high accuracy, we can solve the equations for \(\overline{\boldsymbol{\alpha}}_{i}\) analytically. For shortness, we denote the components of \(\mathbf{y}_{i}\) as \((y_{1},y_{2},y_{3})\), dropping the index \(i\), and the components of \(\mathbf{y}_{i+1}\) by \((y_{1,f},y_{2,f},y_{3,f})\), since \(\mathbf{y}_{i+1}\) represents the final value of the interval \(t\in(t_{i},t_{i+1})\). Let us notice that the sequential application of \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}\), with time \(\Delta t=h/3\) on each sub-step, gives the intermediate values
\[\begin{split} y_{1}&\to y_{1}^{*}=y_{1}-\alpha_{2}\Delta t\,,\\ y_{2}&\to y_{2}^{*}=y_{2}+\alpha_{1}\Delta t\,,\\ y_{3}&\to y_{3}^{*}=y_{3}+2(\alpha_{1}y_{2}-\alpha_{2}y_{1})\Delta t+(\alpha_{1}^{2}+\alpha_{2}^{2})\Delta t^{2}\,.\end{split} \tag{44}\]
On the third sub-step, \(y_{3}\) doesn't change, so \(y_{3}^{*}=y_{3,f}\). Also, at that sub-step, the application of \(\mathbf{T}_{3}\) just induces a rotation of \((y_{1}^{*},y_{2}^{*})\) by the angle \(2\alpha_{3}\Delta t\), so the compatibility conditions to match the data points precisely are
\[y_{3}^{*}=y_{3,f}\,,\quad(y_{1}^{*})^{2}+(y_{2}^{*})^{2}=y_{1,f}^{2}+y_{2,f}^{ 2}\,. \tag{45}\]
Using (44), we can rewrite (45) as
\[y_{3,f}-y_{1,f}^{2}-y_{2,f}^{2}=y_{3}-y_{1}^{2}-y_{2}^{2}\,, \tag{46}\]
so the solution for \(\overline{\boldsymbol{\alpha}}_{i}\) giving an exact match between the data points can be found if and only if the Casimir is exactly the same for all values of the data. If the data contains noise, an optimization procedure must be introduced to search for an optimal value of \(\overline{\boldsymbol{\alpha}}_{i}\) at every step. In this paper, we assume that the learning data is exact. In that case, we can set \(\alpha_{3}=0\) on every time step and match the values of the components \(y_{1}\) and \(y_{2}\); the matching of the component \(y_{3}\) is done automatically due to the conservation of the Casimir. This yields
\[\alpha_{1}=\frac{3}{h}\left(y_{2,i+1}-y_{2,i}\right)\,,\quad\alpha_{2}=-\frac {3}{h}\left(y_{1,i+1}-y_{1,i}\right)\,,\quad\alpha_{3}=0\,. \tag{47}\]
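In code, this step is two finite differences per data pair; a minimal sketch assuming arrays `Y0` and `Yf` of shape \((N,3)\) holding \(\mathbf{y}_{i}\) and \(\mathbf{y}_{i+1}\):

```python
import numpy as np

def alphas_from_pairs(Y0, Yf, h):
    """Parameters (alpha1, alpha2, alpha3) of Eq. (47), assuming exact data."""
    a1 = 3.0 / h * (Yf[:, 1] - Y0[:, 1])    # matches the y2 component
    a2 = -3.0 / h * (Yf[:, 0] - Y0[:, 0])   # matches the y1 component
    a3 = np.zeros(len(Y0))                  # y3 is matched via Casimir conservation
    return np.stack([a1, a2, a3], axis=1)
```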
_Learning procedure: Neural Network approximation._ The inputs \(\mathbf{y}_{i}\) and outputs \(\overline{\boldsymbol{\alpha}}_{i}\) computed from (47) are used to learn the mapping \(\overline{\boldsymbol{\alpha}}(\mathbf{y})\) using a Neural Network. In order to match (Jin et al., 2022), we use three trajectories with initial conditions
\(\mathbf{y}_{0}=(0,1,1^{2})\), \((0,1.5,1.5^{2}+0.1)\) and \((0,2,2^{2}+0.2)\), with the time step of \(h=0.1\). We use \(N=1000\) points, with half of these points used for training and half for validation. The ground truth solution is computed using Python's _Scipy odeint_ routine with _BDF_ (Backward Differentiation Formula), with the relative and absolute tolerances set at \(10^{-13}\) and \(10^{-14}\), respectively.
_Evaluation and dynamics using Neural Network._ Just as in (Jin et al., 2022), we compute the trajectories starting at the last point of the learning trajectory \(\mathbf{y}_{N}\), for 1000 points. We use a Sequential Keras model with three inner layers of 16 neurons each, with sigmoid activation functions, for a total of 642 tunable parameters. The network uses the Adam optimizer with the learning rate of \(10^{-3}\), exponentially decaying to \(10^{-5}\), and the mean square error loss, computed for 50000 epochs. The resulting MSE achieved is between \(10^{-7}\) and \(10^{-8}\). The results of the simulations are shown in Figure 5. On the left side of this Figure, we show the match between the "exact" solution and the solution obtained by LPNets (notice that the "ground truth" solution is still obtained using a numerical method with a given accuracy). On the right-hand side of that figure, we present a three-dimensional plot of the phase space \((u,v,r)=(y_{1},y_{2},y_{3})\) comparing the numerical results in blue with the LPNets results in red. In Figure 6, we show the conservation of the Casimir for all three initial conditions. Note that the ground truth numerical solution (blue line) only conserves the Casimir \(C=r-u^{2}-v^{2}\) up to the accuracy of the calculation, accumulating an error of
Figure 5: Results of simulation of equations (34) and the corresponding solution of using LPNets procedure. Left panel: three components of the solution \(y_{1}=u\), \(y_{2}=v\) and \(y_{3}=r\) versus time. Right panel: phase space plot of the solution. Blue line: high-precision numerics taken as the exact solution; red line: LPNets.
the order of \(10^{-9}\) after \(t=100\). In contrast, the Casimir in LPNets (red line) is preserved to machine precision by the very nature of the transformations performed on each time step. On the right panel of that Figure, we present the accuracy of the evolution of the energy in LPNets. As one can see, the energy conservation by LPNets is quite satisfactory, yielding a relative error of about \(1\%\) or less for all cases.
Finally, in Figure 7, we present the results for the discrepancy between the ground truth and LPNets solutions. Even though the discrepancy grows, this is mostly due to the fact that the timing of the two solutions is slightly mismatched, which explains why the energy is conserved to much higher precision than the solution itself. The accuracy is still quite good, and the solution obtained by LPNets is virtually indistinguishable from the ground truth on the left side of Figure 5.
Finally, we would like to emphasize that a network of three inner layers with a width of 16 neurons each should be viewed as very compact. Higher accuracy can be achieved with more data points and a correspondingly wider or deeper network, or by exploiting more insight into the structure of the network, which we have assumed here to be fully dense. Achieving these efficiencies is an interesting challenge that we will consider in the future.
_Comparison with previous results for the extended pendulum system._ The visual agreement of the solutions and the ground truth is similar to the one presented in (Jin et al., 2022) for this system. The conservation of the Casimir is not presented in (Jin et al., 2022). We interrogated the data produced by the code made available in
Figure 6: Left: Conservation of the Casimir \(C=r-u^{2}-v^{2}\) in the solutions of equations (34) obtained by the high-precision numerics (blue line) and the corresponding solution using the LPNets procedure (red line). Even though the precision of the numerics is \(10^{-11}\), LPNets is substantially more precise, as it achieves machine precision of Casimir conservation on each time step. Right: Conservation of the energy of the system for all three cases. The relative accuracy in the conservation of energy is about \(0.5-1\%\).
Figure 7: The discrepancy between the ground truth and the solutions provided by LPNets for the extended pendulum case.
(Jin et al., 2022) and found that the relative errors for the Casimir in the transformed Lie-Darboux coordinates are quite small, between \(10^{-7}\) and \(10^{-6}\). The errors in the Casimir in the original coordinates \((u,v,r)\), for the parameters presented in (Jin et al., 2022), although still quite small, are several orders of magnitude larger than in the transformed coordinates. This could be attributed to the fact that the inverse of the Lie-Darboux transformation is not computed sufficiently accurately. This accuracy of PNNs can certainly be improved by an appropriate modification of the original PNN network from (Jin et al., 2022), which is beyond the scope of this paper. In any case, our method conserves the Casimir exactly for the original system, with no need to search for a Lie-Darboux transformation to canonical coordinates.
### A particle in a magnetic field
To compare with the second test case computed in (Jin et al., 2022), we study a particle of mass \(m\) and charge \(q\) moving in a magnetic field \(\mathbf{B}(\mathbf{x})\). We assume that the particle moves in \(\mathbf{x}\in\mathbb{R}^{3}\), and the relevant variables are the particle position \(\mathbf{x}=(x_{1},x_{2},x_{3})\) and its momentum \(\mathbf{p}=(p_{1},p_{2},p_{3})\). The equations of motion for a particle in a magnetic field \(\mathbf{B}(\mathbf{x})\) are:
\[\frac{d}{dt}\left(\begin{array}{c}\mathbf{p}\\ \mathbf{x}\end{array}\right)=\left(\begin{array}{cc}-\frac{q}{m}\widehat{B}( \mathbf{x})&-\mathbb{I}_{3}\\ \mathbb{I}_{3}&0\end{array}\right)\left(\begin{array}{c}\frac{\partial H}{ \partial\mathbf{p}}\\ \frac{\partial H}{\partial\mathbf{x}}\end{array}\right):=\mathbb{B}(\mathbf{x })\nabla_{(\mathbf{p},\mathbf{x})}H \tag{48}\]
Notice that equations (48) are not of Lie-Poisson form. However, explicit solutions for the transformations on each time step can be found here as well. We thus believe that this problem is a useful case for demonstrating the power and applicability of these methods beyond the Lie-Poisson equations.
The Hamiltonian for simulations is taken as
\[H(\mathbf{x},\mathbf{p})=\frac{1}{2m}|\mathbf{p}|^{2}+q\varphi(\mathbf{x})\,. \tag{49}\]
Here, \(\mathbb{I}_{3}\) is, as usual, the \(3\times 3\) identity matrix, and we used the hat map notation
\[\widehat{B}(\mathbf{x})=\left(\begin{array}{ccc}0&-B_{3}(\mathbf{x})&B_{2}( \mathbf{x})\\ B_{3}(\mathbf{x})&0&-B_{1}(\mathbf{x})\\ -B_{2}(\mathbf{x})&B_{1}(\mathbf{x})&0\end{array}\right)\,. \tag{50}\]
Note that \(\widehat{B}(\mathbf{x})\mathbf{v}=\mathbf{B}(\mathbf{x})\times\mathbf{v}\) for all \(\mathbf{x}\in\mathbb{R}^{3}\). Similar to (Bajars, 2023; Jin et al., 2022), we take the following values for parameters, electric potential \(\varphi(\mathbf{x})\) and the
magnetic field \(\mathbf{B}(\mathbf{x})\):
\[\begin{split}& q=1,\quad m=1,\\ &\mathbf{B}=\left(0,0,B_{3}\right),\quad\text{with}\quad B_{3}( \mathbf{x})=\sqrt{x_{1}^{2}+x_{2}^{2}}\,,\\ &\varphi(\mathbf{x})=\frac{1}{100\sqrt{x_{1}^{2}+x_{2}^{2}}}\,. \end{split} \tag{51}\]
One can readily check that (48) possesses no Casimirs since \(\mathbb{B}\) is non-degenerate.
_Reduction of motion and conserved quantities._ In (Jin et al., 2022), the initial conditions were chosen to be \((x_{3}=0,v_{3}=0)\) so the particle would always move on the plane. The motion in that case is four-dimensional, and we only need four test Hamiltonians. We thus take the Hamiltonians linear in velocities \((v_{1},v_{2})\) and coordinates \((x_{1},x_{2})\) and compute the corresponding motion.
Note that when the system (48) is restricted so both \(\mathbf{x}\) and \(\mathbf{p}\) are in the plane, _i.e._ both \(x_{3}=0\) and \(p_{3}=0\), and for the choice of any \(\mathbf{B}=\mathbf{e}_{3}B_{3}(r)\) and \(\varphi=\varphi(r)\), where \(r=\sqrt{x_{1}^{2}+x_{2}^{2}}\), there are two integrals of motion. One is clearly the Hamiltonian (49). Another one can be found by considering the evolution for the angular momentum in the \(\mathbf{e}_{3}\) direction \(M_{3}=\mathbf{e}_{3}\cdot(\mathbf{x}\times\mathbf{p})\). We can observe that
\[\dot{M}_{3}=-qB_{3}(r)\mathbf{x}\cdot\dot{\mathbf{x}}=-qB_{3}r\dot{r} \tag{52}\]
leading to the conservation law that for (51) and \(q=1\) reduces to
\[I=M_{3}+q\int^{r}B_{3}(s)s\mathrm{d}s=x_{1}p_{2}-x_{2}p_{1}+\frac{1}{3}\left(x _{1}^{2}+x_{2}^{2}\right)^{3/2}=\text{const.} \tag{53}\]
Therefore, the system (48), for the choice of functions (51) and reduced to four-dimensional motion, is essentially a two-dimensional motion because of the conservation of the Hamiltonian (49) and (53). Thus, one should expect a limited richness of solution behavior. Nevertheless, it is a good test problem and since it has been used in both recent papers on the subject (Bajars, 2023; Jin et al., 2022), we study this particular case as well.
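Both invariants are one-liners to monitor along any computed trajectory; a minimal sketch for the choices (51) with \(q=m=1\) (the function names are ours):

```python
import numpy as np

def energy(x, p):
    # Hamiltonian (49) with the potential of Eq. (51), q = m = 1
    return 0.5 * (p @ p) + 1.0 / (100.0 * np.hypot(x[0], x[1]))

def invariant(x, p):
    # Conserved quantity (53): M3 + (1/3) (x1^2 + x2^2)^{3/2}
    return x[0] * p[1] - x[1] * p[0] + (x[0] ** 2 + x[1] ** 2) ** 1.5 / 3.0
```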
_Lie-Poisson transformations._ Suppose we have a set of \(N\) data pairs, each pair obtained by the phase flow from \(\mathbf{y}_{i}=(\mathbf{x}_{i},\mathbf{p}_{i})\) to some value \(\mathbf{y}_{i}^{f}=(\mathbf{x}_{i}^{f},\mathbf{p}_{i}^{f})\). If these pairs are obtained from a single trajectory, then \(\mathbf{y}_{i}^{f}=\mathbf{y}_{i+1}\), although this does not have to be the case; our method is capable of learning from several trajectories.
In order to apply our method, we need to compute the results of phase flows for Hamiltonians linear in coordinates \(\mathbf{x}\) and momenta \(\mathbf{p}\).
We just present the answers for these transformations here for brevity; an interested reader may readily check these formulas. In what follows, we shall use the function \(\mathbf{\Phi}\) of time \(t\), initial conditions \(\mathbf{X}=\mathbf{x}(0)\) and parameters \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2})\) defined as follows:
\[\mathbf{\Phi}(t;\mathbf{X},\boldsymbol{\alpha})=\int_{0}^{t}B_{3}(\mathbf{X}+ \boldsymbol{\alpha}s)\mathrm{d}s\,. \tag{54}\]
For the expression of \(B_{3}(\mathbf{x})\) taken in (51), the function \(\Phi\) can be expressed in terms of a single elementary function coming from taking the quadrature of (54):
\[\begin{split}\Phi(t;\mathbf{X},\boldsymbol{\alpha})& =\frac{1}{2A^{3}}\left(A(X_{1}\alpha_{1}+X_{2}\alpha_{2}+A^{2}t) \sqrt{x_{1}(t)^{2}+x_{2}(t)^{2}}\right.\\ &\qquad+\left.(X_{1}\alpha_{2}-X_{2}\alpha_{1})^{2}\right.\\ &\qquad\times\log\left[A\sqrt{x_{1}(t)^{2}+x_{2}(t)^{2}}+(X_{1} \alpha_{1}+X_{2}\alpha_{2}+A^{2}t)\right]\right)\,,\\ A&:=|\boldsymbol{\alpha}|=\sqrt{\alpha_{1}^{2}+ \alpha_{2}^{2}}\,,\quad x_{i}(t):=X_{i}+\alpha_{i}t,\,i=1,2\,.\end{split} \tag{55}\]
The presence of explicit expression for the function (54) is helpful, but it is not essential; for general \(B_{3}(\mathbf{x})\), one can use the quadrature expressions (54). The transformations for each test Hamiltonian are:
1. \(H_{1}=\alpha_{1}p_{1}+\alpha_{2}p_{2}\) leads to the motion \(\mathbf{y}=\mathbf{T}_{1}(t,\mathbf{p}_{0},\mathbf{x}_{0})\) \[\left(\begin{array}{c}p_{1}(t)\\ p_{2}(t)\\ x_{1}(t)\\ x_{2}(t)\end{array}\right)=\mathbf{T}_{1}(t,\mathbf{p}_{0},\mathbf{x}_{0})= \left(\begin{array}{c}-\Phi(t;\mathbf{x}_{0},\boldsymbol{\alpha})+\Phi(0; \mathbf{x}_{0},\boldsymbol{\alpha})+p_{1}(0)\\ \Phi(t;\mathbf{x}_{0},\boldsymbol{\alpha})-\Phi(0;\mathbf{x}_{0},\boldsymbol{ \alpha})+p_{2}(0)\\ \alpha_{1}t+x_{1}(0)\\ \alpha_{2}t+x_{2}(0)\end{array}\right),\] (56)
2. \(H_{2}=\beta_{1}x_{1}+\beta_{2}x_{2}\) leads to the motion \(\mathbf{y}=\mathbf{T}_{2}(t,\mathbf{p}_{0},\mathbf{x}_{0})\) \[\left(\begin{array}{c}p_{1}(t)\\ p_{2}(t)\\ x_{1}(t)\\ x_{2}(t)\end{array}\right)=\mathbf{T}_{2}(t,\mathbf{p}_{0},\mathbf{x}_{0})= \left(\begin{array}{c}-\beta_{1}t+p_{1}(0)\\ -\beta_{2}t+p_{2}(0)\\ x_{1}(0)\\ x_{2}(0)\end{array}\right).\] (57)
Clearly, \(\mathbf{T}_{2}\) does not alter the coordinates \(x_{1,2}\). Thus, the computational algorithm is fully explicit, as \(\alpha_{1,2}\) and \(\beta_{1,2}\) can be found without any root-finding procedure in the learning stage.
_Data preparation._ Divide the interval \(\Delta t=h\) into two equal sub-steps of length \(h/2\). For each pair of data points \((\mathbf{y}_{i},\mathbf{y}_{i}^{f})\), where \(\mathbf{y}=(p_{1},p_{2},x_{1},x_{2})\), we compute the parameters \((\alpha_{1,i},\alpha_{2,i},\beta_{1,i},\beta_{2,i})\) as follows (a sketch in code follows the list):
1. Compute \(\alpha_{1,2}\) at the \(i\)-th data point to match \(x_{1,2}\), and the corresponding intermediate momenta \(p_{1,2}^{*}\), using the transformation \(\mathbf{T}_{1}\), according to \[\alpha_{1,i} =\frac{2}{h}\left(x_{1,i}^{f}-x_{1,i}\right)\] (58) \[\alpha_{2,i} =\frac{2}{h}\left(x_{2,i}^{f}-x_{2,i}\right)\] \[p_{1,i}^{*} =p_{1,i}-\Phi(h/2;\mathbf{x}_{i},\boldsymbol{\alpha})+\Phi(0; \mathbf{x}_{i},\boldsymbol{\alpha})\] \[p_{2,i}^{*} =p_{2,i}+\Phi(h/2;\mathbf{x}_{i},\boldsymbol{\alpha})-\Phi(0; \mathbf{x}_{i},\boldsymbol{\alpha})\,.\]
2. Compute \(\beta_{1,2}\) at the \(i\)-th data point according to \[\beta_{1,i} =\frac{2}{h}\left(p_{1,i}^{*}-p_{1,i}^{f}\right)\] (59) \[\beta_{2,i} =\frac{2}{h}\left(p_{2,i}^{*}-p_{2,i}^{f}\right)\,.\]
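In code, this data preparation reads as follows; as a sketch, we evaluate \(\Phi\) of (54) by numerical quadrature instead of the closed form (55), which is equivalent for the field (51):

```python
import numpy as np
from scipy.integrate import quad

def B3(x1, x2):
    return np.sqrt(x1 ** 2 + x2 ** 2)   # magnetic field of Eq. (51)

def prepare_pair(y0, yf, h):
    """Parameters (alpha, beta) of Eqs. (58)-(59) for one data pair,
    with y = (p1, p2, x1, x2); Phi of Eq. (54) evaluated by quadrature."""
    p0, x0 = y0[:2], y0[2:]
    pf, xf = yf[:2], yf[2:]
    alpha = 2.0 / h * (xf - x0)                      # Eq. (58), matches x_{1,2}
    dPhi, _ = quad(lambda s: B3(*(x0 + alpha * s)), 0.0, h / 2)
    p_star = np.array([p0[0] - dPhi, p0[1] + dPhi])  # intermediate momenta
    beta = 2.0 / h * (p_star - pf)                   # Eq. (59)
    return alpha, beta
```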
_Learning using Neural Network._ The network will learn the mapping between \(\mathbf{y}\) and the parameters \((\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})\), using \(\mathbf{y}_{i}\) as inputs and the corresponding parameters \((\boldsymbol{\alpha}_{i},\boldsymbol{\beta}_{i})\) as outputs.
The network structure consists of an input layer taking four inputs \((\mathbf{x},\mathbf{p})\), three densely connected hidden layers of 32 neurons with the sigmoid activation function, and an output layer producing an estimate of the four outputs \((\boldsymbol{\alpha},\boldsymbol{\beta})\). The number of trainable parameters in the network is 2404. The Adam optimization algorithm uses the learning rate of \(10^{-3}\) decaying exponentially to \(10^{-5}\), with the number of epochs equal to \(2\cdot 10^{5}\). The loss function is taken to be the mean square error (MSE). Because of the large number of parameters, to avoid overfitting, we take 20000 data points of a single trajectory separated by the time step \(h=0.1\). For that trajectory, we use the initial conditions \(\mathbf{v}_{0}=(1,0.5)\) and \(\mathbf{x}_{0}=(0.5,1)\). Of all the data points, 80% are used for learning and 20% for evaluation. At the end of the learning procedure, both the loss and the validation loss drop to values below \(10^{-5}\).
_Solution evaluations._ We choose the initial condition to coincide with the end of the learning trajectory, which is located at \(\mathbf{x}_{0}\simeq(-0.627,0.985)^{T}\) and
\(\mathbf{p}_{0}\simeq(0.119,1.112)^{T}\). We present the results of the simulations in Figure 8. The ground truth was obtained by solving (48) numerically with the given initial conditions using the BDF solver of _SciPy_, with the relative tolerance of \(10^{-13}\) and the absolute tolerance of \(10^{-14}\), for \(t=200\), providing outputs every \(h=0.1\). The ground truth is presented with a solid blue line.
The iterative solution, using the evaluation provided by successive applications of the Poisson maps (56) and (57) with the parameters \((\alpha,\beta)\) provided by the neural network, approximates the solution at the time points \(t=ih\), \(i=1,\ldots,2000\). The iteration starts from the same initial conditions as the ground truth solution. The results of the iterations are presented in Figure 8 with solid red lines. As one can see, the LPNets solution approximates the ground truth quite well. To further investigate the results, in the left panel of Figure 9 we show the conserved quantities: the Hamiltonian (energy) \(H\) given by (49) and the conserved quantity (53). In the right panel of this Figure, we present the mean-square deviation between the ground truth and LPNets solutions of (48) presented in Figure 8. As one can see, the Hamiltonian is preserved with a relative accuracy of \(1-2\%\) and the integral (53)
Figure 8: Solutions of (48): ground truth (blue lines) vs an iterative solution obtained by LPNets (red lines).
to about \(3-4\%\).
_Comparison with previous literature._ The problem of a particle in a magnetic field was considered in both (Jin et al., 2022) and (Bajars, 2023). The conservation law (53) has not been identified in these papers, thus we do not provide a direct comparison. The visual agreement of the solutions is just as good as that demonstrated in (Bajars, 2023; Jin et al., 2022). The absolute error in the solution and the Hamiltonian is also similar to that presented in (Bajars, 2023). There are no Casimirs in the system, so there is no additional advantage over the methods presented in (Jin et al., 2022) or (Bajars, 2023).
Additionally, although the system (48) is Poisson, it is not in the Lie-Poisson form. The fact that LPNets can solve this problem on par with other methods is a consequence of the particular form of the magnetic field \(\mathbf{B(x)}\) and the fact that the corresponding integrals (54) can be computed explicitly. Even though the system (48) is not Lie-Poisson, we found it useful to present the results in order to show possible extensions of the methods of LPNets for more general systems.
### Kirchhoff's equations for an underwater vehicle
The motion of a neutrally buoyant underwater body is described by the Kirchhoff equations; see (Leonard, 1997; Leonard and Marsden, 1997) for a discussion of the Lie-Poisson structure of these equations and the corresponding stability results. When the centre of gravity coincides with the centre of buoyancy, the system simplifies somewhat, but it still possesses rich dynamics, with several integrable and chaotic cases that have a long history of study (Holmes et al., 1998).
Figure 9: Left: the value of Hamiltonian (energy) obtained from (49) and the conserved quantity (53). The ground truth is plotted as blue lines, and the results from the iterative solution obtained by LPNets are shown as red lines. Right: mean-square discrepancy (in all components) between the ground truth and the results obtained by the LPNets.
We treat that particular system, with coinciding centres of buoyancy and gravity, and show the applicability of the LPNets approach there as well.
The Hamiltonian for Kirchhoff's equations consists of the kinetic energy of rotational motion, with tensor of inertia \(\mathbb{I}\), and the kinetic energy of translational motion with velocity \(\mathbf{v}=\mathbb{M}^{-1}\mathbf{p}\) and mass tensor \(\mathbb{M}\):

\[H(\mathbf{\Pi},\mathbf{p})=\frac{1}{2}\mathbf{\Pi}\cdot\mathbb{I}^{-1}\mathbf{\Pi}+\frac{1}{2}\mathbf{p}\cdot\mathbb{M}^{-1}\mathbf{p}\,. \tag{60}\]
The Poisson bracket for the underwater vehicle is expressed as the Lie-Poisson bracket for the group of rotations and translations \(SE(3)\):
\[\{F,H\}=-\mathbf{\Pi}\cdot\left(\frac{\partial F}{\partial\mathbf{\Pi}}\times\frac{ \partial H}{\partial\mathbf{\Pi}}\right)-\mathbf{p}\cdot\left(\frac{\partial F}{ \partial\mathbf{\Pi}}\times\frac{\partial H}{\partial\mathbf{p}}-\frac{\partial H }{\partial\mathbf{\Pi}}\times\frac{\partial F}{\partial\mathbf{p}}\right)\,. \tag{61}\]
This bracket has a specific form coming from the geometry of semidirect product groups, see (Holm et al., 1998, 2009; Leonard, 1997). Kirchhoff's equations of motion for the underwater vehicle are
\[\begin{split}\dot{\mathbf{\Pi}}&=-\frac{\partial H}{\partial\mathbf{\Pi}}\times\mathbf{\Pi}-\frac{\partial H}{\partial\mathbf{p}}\times\mathbf{p}\\ \dot{\mathbf{p}}&=-\frac{\partial H}{\partial\mathbf{\Pi}}\times\mathbf{p}\,.\end{split} \tag{62}\]
Equations (62) have two Casimirs: \(C_{1}=\|\mathbf{p}\|^{2}\) and \(C_{2}=\mathbf{\Pi}\cdot\mathbf{p}\). In addition, the total energy given by (60) is also conserved.
_LPNets for Kirchhoff's equations._ Bearing in mind that now we have two momenta \((\mathbf{\Pi},\mathbf{p})\), we shall take the following Hamiltonians: \(H_{1}=\mathbf{A}\cdot\mathbf{\Pi}\) and \(H_{2}=\mathbf{b}\cdot\mathbf{p}\). Equations of motion (62) reduce to
\[\begin{split} H_{1}&=\mathbf{A}\cdot\mathbf{\Pi}:\quad \dot{\mathbf{\Pi}}=-\mathbf{A}\times\mathbf{\Pi}\,,\quad\dot{\mathbf{p}}=-\mathbf{A} \times\mathbf{p}\\ H_{2}&=\mathbf{b}\cdot\mathbf{p}:\quad\dot{\mathbf{\Pi} }=-\mathbf{b}\times\mathbf{p}\,,\quad\dot{\mathbf{p}}=\mathbf{0}\,.\end{split} \tag{63}\]
The first motion is the simultaneous rotation of the vectors \(\mathbf{\Pi}\) and \(\mathbf{p}\) about the same axis \(\mathbf{A}\), by the same amount, with a given angular velocity. This is the transformation \((\mathbf{\Pi},\mathbf{p})\to(\mathbb{R}(\mathbf{A},\theta)\mathbf{\Pi},\mathbb{R}(\mathbf{A},\theta)\mathbf{p})\), where \(\mathbb{R}(\mathbf{A},\theta)\) is the rotation matrix about the axis \(\mathbf{A}\) by the angle \(\theta\). The second motion creates the transformation \((\mathbf{\Pi},\mathbf{p})\to(\mathbf{\Pi}-\mathbf{b}\times\mathbf{p},\mathbf{p})\). These transformations describe the coadjoint action of \(SE(3)\) on an element of \(\mathfrak{se}(3)^{*}\); see Appendix A.
Based on (63), we thus suggest LPNets for \(SE(3)\)-based Lie-Poisson equations (62).
#### Data preparation.
1. Select the training data pairs \((\mathbf{y}^{0},\mathbf{y}^{f})\), where \(\mathbf{y}:=(\boldsymbol{\Pi},\mathbf{p})\).
2. Find the rotation axes \(\mathbf{A}_{j}\) and angles \(\theta_{j}\) that take \(\mathbf{p}^{0}_{j}\) to \(\mathbf{p}^{f}_{j}\), and the corresponding rotation matrices \(\mathbb{R}(\mathbf{A}_{j},\theta_{j})\). This could be accomplished by finding rotation angles, for example, the Euler or Tait angles, introducing the rotation mapping \(\mathbf{p}^{0}_{j}\) into \(\mathbf{p}^{f}_{j}\). A rotation about either \(\mathbf{p}^{0}_{j}\) or \(\mathbf{p}^{f}_{j}\) does not change the end result, and this degree of freedom must be discarded. While it is possible to proceed in this manner for higher-dimensional groups, for three-dimensional groups we can utilize a shortcut based on the cross product of the two vectors, as we have done in the case of a rigid body. We compute \[\mathbf{A}_{j}=\frac{1}{h}\frac{\mathbf{p}^{0}_{j}\times\mathbf{p}^{f}_{j}}{ \|\mathbf{p}^{0}_{j}\|\|\mathbf{p}^{f}_{j}\|}\,.\] (64) The vector \(\mathbf{A}_{j}\) contains all the information necessary for the first step of the algorithm, namely the simultaneous rotation. In order to reconstruct the vector \(\mathbf{A}_{j}\) on each time step, we only need its components normal to the vector \(\mathbf{p}^{0}_{j}\). These components can be found by defining two vectors spanning the plane normal to \(\mathbf{p}^{0}_{j}\) in the following way. Take a fixed vector, for example, \(\mathbf{e}_{1}=(1,0,0)^{T}\), and for each \(\mathbf{p}^{0}_{j}\) define \[\boldsymbol{\xi}^{1}_{j}=\frac{\mathbf{p}^{0}_{j}\times\mathbf{e}_{1}}{\| \mathbf{p}^{0}_{j}\times\mathbf{e}_{1}\|}\,,\quad\boldsymbol{\xi}^{2}_{j}= \frac{\mathbf{p}^{0}_{j}\times\boldsymbol{\xi}^{1}_{j}}{\|\mathbf{p}^{0}_{j} \times\boldsymbol{\xi}^{1}_{j}\|}\,.\] (65) Since \((\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})\) are of unit length and orthogonal to each other and also to \(\mathbf{p}^{0}_{j}\), the vector \(\mathbf{A}_{j}\) defined by (64) can be uniquely reconstructed as \[\mathbf{A}_{j}=\sum_{a=1}^{2}a^{a}_{j}\boldsymbol{\xi}^{a}_{j}\,,\quad a^{a}_ {j}:=\mathbf{A}_{j}\cdot\boldsymbol{\xi}^{a}_{j}\,.\] (66) The variables \(a^{1,2}_{j}\) are the first part of the target data for any pair of data points.
3. Find the vector \(\mathbf{b}_{j}\) (or, more precisely, its two components normal to \(\mathbf{p}^{f}_{j}\)), such that \(\boldsymbol{\Pi}^{f}_{j}-(\mathbb{R}(\mathbf{A}_{j},\theta_{j})\boldsymbol{\Pi}^{0}_{j}-\mathbf{b}_{j}\times\mathbf{p}^{f}_{j})\) vanishes. This is accomplished by the following calculation. Define two vectors \(\mathbf{E}_{1,2}\) as \[\mathbf{E}_{1}=\frac{\mathbf{p}^{0}_{j}\times\mathbf{p}^{f}_{j}}{\|\mathbf{p}^ {0}_{j}\times\mathbf{p}^{f}_{j}\|}\,,\quad\mathbf{E}_{2}=\frac{\mathbf{E}_{1} \times\mathbf{p}^{f}_{j}}{\|\mathbf{E}_{1}\times\mathbf{p}^{f}_{j}\|}\,.\] (67)
Clearly, \(\mathbf{E}_{1,2}\) are orthogonal to each other and also to the vector \(\mathbf{p}_{j}^{f}\). Note that \(\mathbf{E}_{1}\) is simply the normalized version of vector \(\mathbf{A}_{j}\) given by (64).
4. We only need to equalize the components of \(\mathbf{\Pi}_{j}^{f}-\mathbf{\Pi}_{j}^{0}\) which are normal to \(\mathbf{p}_{j}^{f}\) on each time step. We thus define the coefficients \(\widetilde{b}^{1,2}\)6 according to
Footnote 6: We use tildes to distinguish the coefficients of expansion \(\widetilde{b}^{1,2}\) in the basis of moving vectors \((\mathbf{E}_{1},\mathbf{E}_{2})\) from the coefficients \(b^{1,2}\) in the fixed frame basis \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\)
\[\widetilde{b}^{1,2}=(\mathbf{\Pi}^{f}-\mathbf{\Pi}^{0})\cdot\mathbf{E}_{1,2} \,,\quad\mathbf{b}=\widetilde{b}^{1}\mathbf{E}_{1}+\widetilde{b}^{2}\mathbf{E }_{2} \tag{68}\]
One can see that, by construction, this algorithm conserves the Casimirs \(|\mathbf{p}|^{2}\) and \(\mathbf{\Pi}\cdot\mathbf{p}\) exactly on every time step.
5. Each pair of mappings \((\mathbf{\Pi}_{j}^{0},\mathbf{p}_{j}^{0})\to(\mathbf{\Pi}_{j}^{f},\mathbf{p}_ {j}^{f})\) (six coordinates) is parameterized by four variables: \((a_{j}^{1,2},\widetilde{b}_{j}^{1,2})\). These four variables are the outputs of the neural network, whereas \((\mathbf{\Pi}_{j}^{0},\mathbf{p}_{j}^{0})\) are the inputs. A sketch of this data-preparation procedure in code follows the list.
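In the sketch below, the rotation is applied to \(\boldsymbol{\Pi}^{0}\) explicitly before projecting the mismatch, with signs chosen so that the reconstruction step (70) below returns \(\boldsymbol{\Pi}^{f}\) exactly; this is our reading of the steps, not a verbatim transcription of (68):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def rodrigues(v, n, phi):
    # Rotate v about the unit axis n by the angle phi.
    return (v * np.cos(phi) + np.cross(n, v) * np.sin(phi)
            + n * (n @ v) * (1.0 - np.cos(phi)))

def se3_pair_parameters(Pi0, p0, Pif, pf, h):
    """Targets (a1, a2, b1t, b2t) for one pair (Pi0, p0) -> (Pif, pf);
    assumes p0 and pf are not parallel."""
    # Step 2: rotation axis from the cross product, Eq. (64), and its
    # components in the frame (xi1, xi2) normal to p0, Eqs. (65)-(66)
    A = np.cross(p0, pf) / (h * np.linalg.norm(p0) * np.linalg.norm(pf))
    xi1 = unit(np.cross(p0, np.array([1.0, 0.0, 0.0])))
    xi2 = unit(np.cross(p0, xi1))
    a1, a2 = A @ xi1, A @ xi2
    # The rotation taking p0 to pf (axis and angle from cross/dot products)
    n = unit(np.cross(p0, pf))
    theta = np.arctan2(np.linalg.norm(np.cross(p0, pf)), p0 @ pf)
    Pi_star = rodrigues(Pi0, n, theta)
    # Steps 3-4: moving frame normal to pf, Eq. (67), and the coefficients
    E1 = unit(np.cross(p0, pf))
    E2 = unit(np.cross(E1, pf))
    b1t, b2t = (Pi_star - Pif) @ E1, (Pi_star - Pif) @ E2
    return np.array([a1, a2, b1t, b2t])
```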
_Neural network training._ Similar to the procedure for the rigid body, we generate 50 trajectories starting in the neighborhood of \(\mathbf{\Pi}^{0}=(1,1,1)\) and \(\mathbf{p}^{0}=(-1,1,2)\). The initial points of the trajectories are randomly distributed in the phase space, with a uniform distribution in a cube of size \(2a=0.2\) in every direction. Each trajectory has 1000 points (not counting the initial point) separated by the time interval \(h=0.1\), giving \(50,000\) data points in total used for learning.
_Evaluation._ The evaluation of trajectories using the neural network mapping \((\mathbf{\Pi},\mathbf{p})\) to the variables \((a^{1,2},\widetilde{b}^{1,2})\) proceeds in the following steps (a sketch in code follows the list):
1. Suppose the initial conditions \((\mathbf{\Pi}^{0},\mathbf{p}^{0})\) are given. Using the Neural Network, estimate the values \((a^{1,2},\widetilde{b}^{1,2})\) corresponding to the inputs.
2. For each starting point of the momentum \(\mathbf{p}^{0}\) and the pair \(a^{1,2}\) evaluated by the neural network, build two vectors \((\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2})\perp\mathbf{p}^{0}\) according to (65) and reconstruct the axis of rotation \(\mathbf{A}\) according to (64).
3. Compute the three-dimensional rotation matrix \(\mathbb{R}(\mathbf{n}_{\mathbf{A}},\phi)\) about the axis \(\mathbf{n}_{\mathbf{A}}=\mathbf{A}/\|\mathbf{A}\|\) with the angle \(\phi=\arcsin\left(\|\mathbf{A}\|h/\|\mathbf{p}_{0}\|^{2}\right)\).
4. Compute the new linear momentum \(\mathbf{p}^{f}\) and the intermediate angular momentum \(\boldsymbol{\Pi}^{*}\) according to \[\mathbf{p}^{f}=\mathbb{R}(\mathbf{n}_{\mathbf{A}},\phi)\mathbf{p}^{0}\,,\quad \boldsymbol{\Pi}^{*}=\mathbb{R}(\mathbf{n}_{\mathbf{A}},\phi)\boldsymbol{\Pi}^{0 }\,.\] (69)
5. Compute the axes \(\mathbf{E}_{1,2}\) according to (67) and update the angular momentum according to \[\boldsymbol{\Pi}^{f}=\boldsymbol{\Pi}^{*}-\widetilde{b}^{1}\mathbf{E}_{1}- \widetilde{b}^{2}\mathbf{E}_{2}\,.\] (70)
6. Reset the final values \((\boldsymbol{\Pi}^{f},\mathbf{p}^{f})\) to the initial conditions and repeat Step 1.
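Assembling the steps into one update (a sketch; `model` is a hypothetical trained network returning \((a^{1},a^{2},\widetilde{b}^{1},\widetilde{b}^{2})\), and the angle formula follows step 3 verbatim):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def rodrigues(v, n, phi):
    return (v * np.cos(phi) + np.cross(n, v) * np.sin(phi)
            + n * (n @ v) * (1.0 - np.cos(phi)))

def se3_step(Pi0, p0, model, h):
    a1, a2, b1t, b2t = model.predict(np.hstack([Pi0, p0])[None, :], verbose=0)[0]
    # Steps 1-2: rebuild the axis A from the frame (65) and the expansion (66)
    xi1 = unit(np.cross(p0, np.array([1.0, 0.0, 0.0])))
    xi2 = unit(np.cross(p0, xi1))
    A = a1 * xi1 + a2 * xi2
    # Step 3: rotation angle, as stated in the text
    phi = np.arcsin(np.linalg.norm(A) * h / (p0 @ p0))
    nA = unit(A)
    # Step 4: rotate both momenta, Eq. (69)
    pf = rodrigues(p0, nA, phi)
    Pi_star = rodrigues(Pi0, nA, phi)
    # Step 5: moving frame (67) and the update (70); since every operation is
    # a rotation plus a shift normal to pf, |p|^2 and Pi.p are exact here
    E1 = unit(np.cross(p0, pf))
    E2 = unit(np.cross(E1, pf))
    Pif = Pi_star - b1t * E1 - b2t * E2
    return Pif, pf
```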
One can see that the algorithm formulated above conserves the Casimirs \(|\mathbf{p}|^{2}\) and \(\boldsymbol{\Pi}\cdot\mathbf{p}\) exactly (_i.e._, to machine precision) on every time step during the trajectory prediction phase.
_Simulation results._
**Data generation** We present the simulation results for LPNets applied to Kirchhoff's equations. The data were obtained using 500 trajectories of 100 time steps with the step \(h=0.1\). All trajectories started in the neighborhood of \(\boldsymbol{\bar{\Pi}}^{0}=(1,1,1)^{T}\) and \(\boldsymbol{\bar{p}}^{0}=(-1,1,2)^{T}\). The initial conditions are uniformly distributed in a cube of size \(2a=0.2\) in every direction of the phase space. The mass matrix \(\mathbb{M}\) and the tensor of inertia \(\mathbb{I}\) in (60) are taken to be \(\mathbb{M}=\text{diag}(1,2,1)\) and \(\mathbb{I}=\text{diag}(1,2,1)\).
**Network parameters** The \(50,000\) data pairs are used to produce \(10,000\) points \((\boldsymbol{\Pi}_{j},\mathbf{p}_{j})\) as inputs and the parameters \((a^{1},a^{2},\tilde{b}^{1},\tilde{b}^{2})\) as outputs. A fully connected network with 6 inputs, 4 outputs and 6 hidden layers of 64 neurons each is initialized to describe the mapping. The total number of trainable parameters in the network is 21,508. All activation functions are taken to be sigmoids. From the data, 80% are used for training and 20% for validation. The network is optimized using the Adam algorithm, with mean square error as the loss function and the learning rate of \(10^{-3}\) decaying exponentially to \(10^{-5}\), for \(500,000\) epochs. After the optimization procedure, the loss and validation loss reach the values of \(1.65\cdot 10^{-4}\) and \(1.35\cdot 10^{-3}\), respectively.
**Evaluation of a trajectory** A trajectory is reproduced using the LPNets algorithm described above with the trained network, starting from the initial conditions \(\boldsymbol{\bar{\Pi}}^{0}=(1,1,1)^{T}\) and \(\boldsymbol{\bar{p}}^{0}=(-1,1,2)^{T}\) until \(t=10\). As usual, the ground truth
solution is produced by a high-accuracy BDF algorithm with the relative tolerance of \(10^{-13}\) and the absolute tolerance of \(10^{-14}\). The comparison between the ground truth and the trajectories obtained by LPNets is presented in Figure 10.
In the left panel of Figure 11, we show the conservation of the Hamiltonian (top), the first Casimir \(|\mathbf{p}|^{2}\) (middle) and the second Casimir \(\mathbf{\Pi}\cdot\mathbf{p}\) (bottom). Notice that the Casimirs are conserved with higher accuracy than in the ground truth, although the ground truth already conserves them to about \(10^{-10}\).
One may wonder whether better results could be obtained by alternative ways of simulating trajectories or by more effective implementations of neural networks. We have to keep in mind that Kirchhoff's equations (62) are chaotic (Holmes et al., 1998; Leonard, 1997). We measure the rate of divergence of nearby trajectories with the same values of the Casimirs, also known as the first Lyapunov exponent, to be about \(\lambda=0.250\) in units of 1/time, starting with the same initial conditions as the simulated trajectory. The _minimum_ growth of errors expected from any numerical scheme is thus proportional to \(e^{\lambda t}\) (Ott, 2002). On
Figure 10: Results of simulation of equations (62) and the corresponding solution of using LPNets procedure. Upper panels: three components of the angular momenta \(\mathbf{\Pi}\); lower panels: three components of linear momenta \(\mathbf{p}\).
the right panel of Figure 11 we present the semilogarithmic plot of the growth of errors and show that it corresponds to the growth of errors expected from the chaotic properties of the system.
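This expected error growth can be checked directly from two nearby simulated trajectories; a crude sketch (a hypothetical helper fitting the slope of \(\log\|\delta(t)\|\)):

```python
import numpy as np

def lyapunov_estimate(traj_a, traj_b, h):
    """Crude estimate of the first Lyapunov exponent: the slope of
    log ||delta(t)|| for two nearby trajectories sampled every h."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    t = np.arange(len(d)) * h
    mask = d > 0
    slope, _ = np.polyfit(t[mask], np.log(d[mask]), 1)
    return slope
```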
## 6 G-LPNets: Derivation and test cases
### General theory
Up until now, we have developed a method of computations using transformations that are easily computable, but the coefficients of these transformations must be obtained through the action of a Neural Network. We can extend these results and compute the transformation modules with parameters that are learned through an optimization procedure, producing modules preserving the Poisson structure of the bracket. We proceed as follows.
For Lie-Poisson systems, Section 4 showed that the flow generated by a test Hamiltonian linear in the momenta, \(H(\mu)=\langle\alpha,\mu\rangle\), can be computed analytically. Instead of taking the test Hamiltonian to be a linear function of the momenta, here we take the Hamiltonian to be \(\widetilde{H}=\varphi(H)\), where \(\varphi\) is some nonlinear scalar function of the variable \(H\), which is itself a constant of motion7. Instead of
Figure 11: Left: Conservation of the Hamiltonian \(H\) (top) and the two Casimirs \(\left|\mathbf{p}\right|^{2}\) (middle) and \(\mathbf{\Pi}\cdot\mathbf{p}\) (bottom), comparing the results of LPNets (red) and ground truth (blue). The Hamiltonian is conserved to a relative accuracy of less than 0.3%. Notice that LPNets conserves the Casimirs exactly (to machine precision) and thus exceeds the ground truth in the conservation of Casimirs. Right: The discrepancy between the results of LPNets and the ground truth. Dashed red line: growth of error expected from the Lyapunov exponent in the semilog scale. The growth of error follows the Lyapunov exponent; thus, the neural network is performing as well as could be expected for a chaotic system.
equations (22), we obtain:
Footnote 7: The particular choice of the function \(\varphi(H)\) is specified below in terms of an activation function; see (73).
\[\dot{\mu}_{a}=\varphi^{\prime}(H)C^{d}_{ab}\alpha^{b}\mu_{d}:=\varphi^{\prime}(H )\mathbb{M}(\alpha)^{d}_{a}\mu_{d}=\varphi^{\prime}(H)\mathbb{N}(\mu)_{ab} \alpha^{b}\,,\quad H=\langle\alpha,\mu_{0}\rangle. \tag{71}\]
Noticing that \(\varphi^{\prime}(H)\) is constant under the flow generated by the Hamiltonian \(\widetilde{H}=\varphi(H)\), we conclude that the transformation generated by the flow of \(\varphi(H)\) is simply obtained by a time scaling of (23) and can be written as
\[\widetilde{\mathbb{T}}(\alpha,t)\mu_{0}=\mathbb{T}(\alpha,\varphi^{\prime}(H)t )\mu_{0}=e^{\mathbb{M}(\alpha)\varphi^{\prime}(H)t}\mu_{0}\,,\quad H=\langle \alpha,\mu_{0}\rangle\, \tag{72}\]
where \(\mathbb{T}(\alpha,t)\) are defined as in (23). The prefactor \(\varphi^{\prime}(H)=\varphi^{\prime}(\langle\alpha,\mu_{0}\rangle)\) can be sought as a function of the variables using some kind of scalar activation function. Thus, in the G-LPNets framework, we are looking for transformations that are compositions of moduli \(\mathbb{T}_{s}(a_{s},\alpha_{s},b_{s},t)\), \(s=1,\ldots M\) having the functional form
\[\widetilde{\mathbb{T}}_{s}(a_{s},\alpha_{s},b_{s},t)\mu_{0}=e^{\mathbb{M}( \alpha_{s})q_{s}t}\mu_{0}\,,\quad q_{s}=a_{s}\sigma\left(\langle\alpha_{s},\mu _{0}\rangle\right)+b_{s}\,, \tag{73}\]
where \(\sigma\) is the activation function, which we will choose to be the sigmoid, and \((a_{s},\alpha_{s},b_{s})\) are the parameters8 to be determined. The transformations (73) are generalized moduli preserving the Poisson bracket and the Casimirs to machine precision.
Footnote 8: In the expression (73), \(\alpha_{s}\) refers to the \(s\)-th vector in the collection of \(M\) vectors \((\alpha_{1},\ldots,\alpha_{M})\) in the Lie algebra \(\mathfrak{g}\). In the example of the applications to the rigid body motion, for simplicity, we will put \(\alpha_{s}\) to be proportional to the basis vectors of the Lie algebra with some scalar coefficients, which we will also call \(\alpha_{s}\).
The method of G-LPNets selects the order of transformations \(\widetilde{\mathbb{T}}_{s}\) and minimizes a mean-square loss function. For example, for the data comprising \(N\) points specifying the beginning \(\mu_{i}^{0}\) and end \(\mu_{i}^{f}\) values of momenta on each time interval, and assuming that the composition takes \(M\) exactly equal time-substeps, the loss function can be taken to be the mean square error (MSE)
\[L=\frac{1}{N}\sum_{i=1}^{N}\left\|\widetilde{\mathbb{T}}_{M}(a_{M},\alpha_{M}, b_{M},h/M)\circ\ldots\circ\widetilde{\mathbb{T}}_{1}(a_{1},\alpha_{1},b_{1},h/M) \mu_{i}^{0}-\mu_{i}^{f}\right\|^{2}. \tag{74}\]
The use of the mean square loss is advantageous since the derivatives of \(L\) with respect to the parameters \(a_{s},\alpha_{s},b_{s}\) can be found analytically. Denoting the collection of parameters with a bar for shortness, for example, \(\bar{a}=(a_{1},a_{2},\ldots)\) _etc._, we naturally have
\[(\bar{a},\bar{\alpha},\bar{b})=\text{arg min}\ L(\bar{a},\bar{\alpha},\bar{b}). \tag{75}\]
There are several points to keep in mind regarding the application of G-LPNets.
1. If there are no Casimirs, it is natural to choose the transformations alternating in a given sequence. For example, for an \(n\)-dimensional system one could choose a repeated application of \(\widetilde{\mathbb{T}}_{1}\), \(\widetilde{\mathbb{T}}_{2}\), ..., \(\widetilde{\mathbb{T}}_{M}\), followed by \(\widetilde{\mathbb{T}}_{1}\) again, _etc._ It would be natural (although, strictly speaking, not necessary) to take the depth of the network to be \(M=n\cdot k\), where \(k\) is an integer.
2. If there are \(d\) independent Casimirs \(C_{j}(\mu)\), \(j=1,\ldots,d\), the transformations producing the motion about the axis parallel to \(\nabla C_{j}\) produce no effective evolution of momenta. One can either choose \(n-d\) transformations on every time step by excluding certain \(\alpha\), or simply apply all transformations in a sequence with the understanding that the resulting \(\alpha\) is not unique. The latter is the approach we will use to describe rigid body rotation below.
3. The advantage of applying all \(\widetilde{\mathbb{T}}_{s}\) in a row is that there will be no accuracy loss if, at some point, the momentum becomes close to parallel with a given coordinate axis. The disadvantage of applying all transformations in a sequence, without excluding any \(\alpha\), lies in the necessity of post-processing the solutions of (75), as the parameters are only defined up to components lying in the span of the gradients of the Casimir functions \(\nabla_{\mu}C_{j}\), \(j=1,\ldots,d\). In fact, even at the continuous level, the Hamiltonian is defined only up to the Casimirs, since the same dynamics are obtained if an arbitrary Casimir is added to the Hamiltonian. Similarly, all possible solutions of (75) obtained by G-LPNets are in the same equivalence class, since they generate the same dynamics in phase space.
4. It is also possible that, because the parameters are defined only up to a certain vector or vectors, one could encounter vanishing gradients in some directions and a slowdown of convergence to the desired solution due to numerical artifacts. We have not encountered such artifacts in our solution of the rigid body equations, but we cannot exclude further numerical difficulties when applying G-LPNets to general high-dimensional problems.
5. We expect that G-LPNets will also work for the cases beyond the Lie-Poisson framework, whenever explicit integration of the trajectories with the test Hamiltonians is possible, such as the particle in a magnetic field. The transformations \(\widetilde{\mathbb{T}}_{s}\) are then computed as the generalizations of the Poisson transformations with the unknown time scaling coefficients. As long as
the completeness of these transformations is achieved, we expect them to provide efficient and accurate data-based computing methods for general Poisson problems.
The most challenging part of applying G-LPNets is, in our opinion, the lack of a general completeness result. It would be nice if the G-LPNets moduli (73) satisfied a completeness result analogous to that of SympNets (Jin et al., 2020). Right now, it seems unlikely that a general result may be proved valid for all Lie-Poisson brackets. Without specifying more information about the particular Lie group, progress in that area may be limited. However, the silver lining here is that for each particular problem, the symmetry of the problem is bound to be known _a priori_, whereas the exact value of the Hamiltonian may or may not be known. Thus, if one focuses on the particular problem at hand, a completeness result for the transformations leading to G-LPNets could be feasible to achieve. It is also possible that some of the methods of analysis used in Jin et al. (2020) to prove completeness of the modules in the symplectic case would be applicable in the more general setting of a particular Lie-Poisson bracket.
### Applications to rigid body dynamics
To show the potential power of G-LPNets, we treat the equations of a rigid body, following up on our discussion in Sec. 5.1. Now, instead of learning from trajectories that stay close to the desired area, we aim to learn the whole dynamics, and simply choose 50 initial points uniformly distributed in a cube in momentum space \(\mathbf{\Pi}\), \(-2\leq\Pi_{a}\leq 2\), \(a=1,2,3\). Each trajectory is simulated with a high-precision ODE solver with output provided every \(h=0.1\), yielding 20 data pairs per trajectory and 1000 data pairs in total. All parameters of the system are exactly as in Sec. 5.1.
Our point of comparison is the reconstruction of the dynamics of a rigid body already performed in (Bajars, 2023). Note, however, that (Bajars, 2023) takes all data pairs on _the same_ Casimir surface. In our opinion, since the Casimir surface depends on the initial conditions, a physical system, such as a satellite, could be observed on different Casimir surfaces, due to the fact that thrusters or other external forces may have moved it from one Casimir surface to another outside the observation time. The Hamiltonian and physical parameters of the satellite are assumed to be the same, so it makes sense that the ground truth is generated with the same Hamiltonian but different Casimirs.
After the data points are generated, a G-LPNet with 6 transformations (18 parameters) is constructed. The transformations \(\widetilde{\mathbb{T}}_{i}\) are rotations about the coordinate axes
in the _fixed_ frame \(\mathbf{e}_{1,2,3}\) by the angles \(\phi_{i}\) (_i.e._, \(\mathbf{e}_{1}=(1,0,0)^{T}\), _etc._). The rotations proceed in the sequence
\[\widetilde{\mathbb{T}}_{1}\rightarrow\widetilde{\mathbb{T}}_{2}\rightarrow \widetilde{\mathbb{T}}_{3}\rightarrow\widetilde{\mathbb{T}}_{1}\rightarrow\ldots \tag{76}\]
At the given step \(s\), the rotation angle \(\phi_{s}\) for the given value of the momentum \(\mathbf{\Pi}^{0}\) is given by
\[\phi_{s}=a_{s}\sigma(\mathbf{A}_{s}\cdot\mathbf{\Pi}^{0})+b_{s}\,. \tag{77}\]
The optimization finds the values of \((\bar{a},\bar{\mathbf{A}},\bar{b})\) minimizing the MSE loss function. The gradients of the MSE function with respect to the parameters are computed analytically. We tried gradient descent methods and discovered that, while they work satisfactorily, they require quite a substantial number of epochs to converge, as was already observed in (Bajars, 2023; Jin et al., 2022). To make the computation more efficient, we used an optimization procedure based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (Fletcher, 2000) as implemented in SciPy (www.scipy.org). We let the BFGS algorithm run for 3500 iterations, achieving a loss value below \(5\cdot 10^{-10}\), a procedure which is several orders of magnitude more efficient than the standard gradient descent-based algorithm.
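As a rough illustration of this fitting procedure, the sketch below composes the six axis rotations of Eqs. (76)-(77) and fits their parameters by minimizing the MSE with SciPy's BFGS routine. This is a minimal sketch, not the authors' code: the choice \(\sigma=\tanh\), the per-module parameterization \((a_{s},\mathbf{A}_{s}\in\mathbb{R}^{3},b_{s})\) (which gives 30 parameters rather than the 18 used in the paper), and the stand-in data are assumptions; the paper also uses analytic gradients, whereas SciPy's finite differences stand in here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

AXES = "xyzxyz"  # rotation sequence of Eq. (76)

def forward(theta, Pi):
    """Apply the six rotations of one time step to the momenta Pi (shape (3,))."""
    p = theta.reshape(6, 5)                      # rows: (a_s, A_s[0:3], b_s) -- assumed layout
    for s in range(6):
        a, A, b = p[s, 0], p[s, 1:4], p[s, 4]
        phi = a * np.tanh(A @ Pi) + b            # Eq. (77) with sigma = tanh (assumed)
        Pi = Rotation.from_euler(AXES[s], phi).as_matrix() @ Pi
    return Pi                                    # each rotation preserves |Pi| exactly

def mse(theta, pairs):
    return np.mean([np.sum((forward(theta, x) - y) ** 2) for x, y in pairs])

rng = np.random.default_rng(0)
pairs = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(10)]  # stand-in data
res = minimize(mse, 0.1 * rng.standard_normal(30), args=(pairs,),
               method="BFGS", options={"maxiter": 3500})
print(res.fun)  # final MSE loss
```

Because every module is an exact rotation, the Casimir \(|\mathbf{\Pi}|^{2}\) is conserved to machine precision by construction, independently of how well the parameters are fit.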
Using the parameter values found by the optimization procedure, 10 long solutions with \(10,000\) data points each, with a time step of \(h=0.1\) (max time \(t_{max}=1000\)), were generated and compared with the ground truth solution obtained by a high-precision ODE method. These long-term solutions were generated to compare with the results in (Bajars, 2023). The initial conditions \(\mathbf{\Pi}_{0}\) for these solutions were chosen uniformly at random from the cube \(|\Pi_{a}|\leq 1.25\), \(a=1,2,3\). This cube is somewhat smaller than the one used to generate the training data, so the solutions are guaranteed to remain within the area where data were available.
In the left panel of Figure 12 we show a part of one of the 10 trajectories, extending only to about \(t=500\) for clarity. This particular trajectory starts with the initial condition \(\mathbf{\Pi}_{0}=(1.011,1.178,0.585)\); the results for other trajectories are similar. The values of the momenta are nearly indistinguishable from the ground truth for a very long time. In the right panel of this Figure, we show all 10 trajectories in the phase space, plotted for all times \(0\leq t\leq 1000\) (10000 iterations). The results from the simulations and the ground truth coincide perfectly. This is due to the fact that both the Casimirs and the Hamiltonians are preserved with very high accuracy, as Figure 13 shows. The Casimir is, again, conserved to machine precision on each step and thus substantially exceeds the accuracy of the calculation for the ground truth solution (still very high, at about \(10^{-9}\)). The relative error in energy is of the order of \(0.1\%\) over all long-term solutions.
**Comparison with previous literature.** The case of learning the whole dynamics of a rigid body was considered in (Bajars, 2023). In that paper, the dynamics was considered only on a _single_ Casimir surface \(|\mathbf{\Pi}|=1\) with 300 data points, and the solutions generated by the neural network were also taken on that Casimir surface. In contrast, we presented the dynamics with initial data taken from a volume of \(\mathbf{\Pi}\) (a cube), and initial conditions for G-LPNets are also taken without any restriction to a Casimir surface. The relative error of our solutions is of the same order as the results presented in (Bajars, 2023). The conservation of the Casimir and energy is reported to be of the order of 1.5% in (Bajars, 2023). In our case, the relative error in energy is somewhat better, with a maximum error of around 0.4%. The Casimir \(|\mathbf{\Pi}|^{2}\) is conserved with machine precision on every time step, so after \(10^{5}\) steps a typical error in the Casimir is expected to be of the order of \(10^{-11}\div 10^{-10}\). While the relative error of energy in the methods presented in (Bajars, 2023) can no doubt be improved to reach the accuracy achieved here, we are not aware of any method preserving the value of the Casimir to the same precision as ours.
## 7 Conclusions
We have derived a novel method of learning the evolution of a general Lie-Poisson system that is capable of predicting the dynamics in phase space with high precision. Our method learns the system by applying exact Poisson maps for certain test Hamiltonians, which are exactly solvable for any Lie-Poisson bracket. The resulting maps preserve the Poisson bracket under the evolution and also preserve all Casimirs with machine precision. These methods are also applicable to systems beyond Lie-Poisson, such as a particle in a magnetic field, as long as the corresponding equations for the test Hamiltonians are exactly solvable.

Figure 12: Left: Results of G-LPNets applied to the motion of a rigid body (red) versus ground truth (blue) for the individual momenta. Right: Parametric plot of the momenta in the phase space for 10 solutions, chosen to start from uniformly distributed random points in the cube \(|\Pi_{a}|\leq 1.25\) in the phase space. The results of G-LPNets are indistinguishable from the ground truth solution.
We derive two types of networks. The first is the Local Lie-Poisson Neural Networks (LPNets), which derive local Poisson maps using test Hamiltonians that are linear in the momenta. The parameters of the Poisson mappings resulting from these test Hamiltonians are then learned using standard methods of data-based learning with an Artificial Neural Network (ANN). The advantage of this method is that local completeness of the mappings is achieved automatically, since they represent exponential maps on a Lie group. An additional advantage is the ability to use all the modern technology of ANNs for discovering the mapping from momenta to the parameters of the test Hamiltonians.
An alternative method, called Global LPNets (G-LPNets), was derived using nonlinear test Hamiltonians. These nonlinear Hamiltonians are arbitrary functions of the local Hamiltonians used in the local LPNets approach. The explicit evolution maps on every time step obtained by these methods can be viewed as generalizations of the symplectic modules derived in (Jin et al., 2020) to the case of a general Lie-Poisson bracket. We have presented an application of these methods to rigid body motion and showed that they demonstrate excellent accuracy and efficiency for long-term computation, and the ability to learn the dynamics in the whole phase space from quite a limited number of observations.

Figure 13: Left panel, top: Relative accuracy of the conservation of the Hamiltonian \(H\) (_i.e._, \(\Delta E/E=(E(t)-E(0))/\langle E\rangle\), where \(\langle E\rangle\) is the mean value of the Hamiltonian). Left panel, bottom: the corresponding conservation of the Casimir \(C\), computed as \(\Delta C/C=(C(t)-C(0))/\langle C\rangle\), comparing the results of G-LPNets (red) and ground truth (blue) for all 10 simulations. As usual, G-LPNets conserve the Casimir exactly (to machine precision) and thus substantially exceed the ground truth in the conservation of Casimirs. Right: The discrepancy between the results of G-LPNets and the ground truth for the solution presented in the left part of Figure 12. Again, the discrepancy comes mostly from time mismatch, whereas the amplitude of oscillations is conserved with high precision due to the corresponding high precision in the conservation of energy.
While G-LPNets seem more computationally effective than the local LPNets, we must caution the reader that a completeness result for the mappings of G-LPNets obtained through nonlinear test Hamiltonians is still missing. We believe it is highly unlikely that such a result exists for a general Lie-Poisson system; the completeness result is likely to depend on the structure of the actual Lie group and the corresponding Lie-Poisson bracket. This is an interesting topic that we intend to address in the future.
Another interesting topic is the extension of this method to more general systems. To achieve that goal, the equations generated by the Poisson bracket for the test Hamiltonians must be exactly solvable, as was the case for the particle in a magnetic field. It will be interesting to derive the conditions on the Poisson bracket for such integrability to occur, which we also plan to undertake in the future. Of particular interest are constrained systems, especially systems with nonholonomic constraints. The Lie-Poisson equations in this case become the Lie-Poisson-d'Alembert equations, where the right-hand side of (A.3) contains extra terms enforcing the vanishing of momenta projections onto certain subspaces defined by the constraints, as in the Suslov problem (Bloch, 2003). Apart from their importance in classical mechanics, these methods were recently found to be important for variational discretizations of fluids (Gawlik and Gay-Balmaz, 2020), possibly including irreversible thermodynamic processes (Gawlik and Gay-Balmaz, 2022). Extension of data-based computing to nonholonomic Lie-Poisson systems may thus play an important role in the applications of the method of this paper to continuum mechanics, including irreversible processes.
## 8 Acknowledgements
We are grateful to Anthony Bloch, Pavel Bochev, Stephen Bond, Anthony Gruber, Melvin Leok, Tomoki Ohsawa, Tanya Schmah, Andrew Sinclair, Nathaniel Trask and Dmitry Zenkov for fruitful and engaging discussions. SH acknowledges support and experience provided by the internship in ATCO's transformation team and productive exchange with the team members. SH and VP were partially supported by the NSERC Discovery grant.
This article has been co-authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns right, title and interest in and to the article and is responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan).
|
2303.16532 | Futures Quantitative Investment with Heterogeneous Continual Graph
Neural Network | This study aims to address the challenges of futures price prediction in
high-frequency trading (HFT) by proposing a continuous learning factor
predictor based on graph neural networks. The model integrates multi-factor
pricing theories with real-time market dynamics, effectively bypassing the
limitations of existing methods that lack financial theory guidance and ignore
various trend signals and their interactions. We propose three heterogeneous
tasks, including price moving average regression, price gap regression and
change-point detection to trace the short-, intermediate-, and long-term trend
factors present in the data. In addition, this study also considers the
cross-sectional correlation characteristics of future contracts, where prices
of different futures often show strong dynamic correlations. Each variable
(future contract) depends not only on its historical values (temporal) but also
on the observation of other variables (cross-sectional). To capture these
dynamic relationships more accurately, we resort to the spatio-temporal graph
neural network (STGNN) to enhance the predictive power of the model. The model
employs a continuous learning strategy to simultaneously consider these tasks
(factors). Additionally, due to the heterogeneity of the tasks, we propose to
calculate parameter importance with mutual information between original
observations and the extracted features to mitigate the catastrophic forgetting
(CF) problem. Empirical tests on 49 commodity futures in China's futures market
demonstrate that the proposed model outperforms other state-of-the-art models
in terms of prediction accuracy. Not only does this research promote the
integration of financial theory and deep learning, but it also provides a
scientific basis for actual trading decisions. | Min Hu, Zhizhong Tan, Bin Liu, Guosheng Yin | 2023-03-29T08:39:36Z | http://arxiv.org/abs/2303.16532v2 | # Futures Quantitative Investment with Heterogeneous Continual Graph Neural Network
###### Abstract
It is a challenging problem to predict trends of futures prices with traditional econometric models, as one needs to consider not only futures' historical data but also correlations among different futures. Spatial-temporal graph neural networks (STGNNs) have great advantages in dealing with such kinds of spatial-temporal data. However, we cannot directly apply STGNNs to high-frequency futures data because futures investors have to consider both the long-term and short-term characteristics when making decisions. To capture both the long-term and short-term features, we exploit more label information by designing four heterogeneous tasks: price regression, price moving average regression, price gap regression (within a short interval), and change-point detection, which involve both long-term and short-term scenes. To make full use of these labels, we train our model in a continual manner. Traditional continual GNNs define the gradient of the loss as the parameter importance to overcome catastrophic forgetting (CF). Unfortunately, the losses of the four heterogeneous tasks lie in different spaces. Hence it is improper to calculate the parameter importance with their losses. We propose to calculate parameter importance with the mutual information between the original observations and the extracted features. The empirical results based on 49 commodity futures demonstrate that our model has higher prediction performance in capturing long-term and short-term dynamic changes.
## 1 Introduction
Futures prices are regarded as an authoritative indicator of market conditions. As the core of futures quantitative investment, high-precision trend prediction of futures prices is of great practical significance. For decades, classical trend prediction methods for time series, such as vector autoregression (VAR) [22], the autoregressive integrated moving average (ARIMA) model [15], and generalized autoregressive conditional heteroskedasticity (GARCH) [1], which assume specific models to characterize financial time series data, have been widely considered a general approach to capturing temporal linear dependencies.
However, more often than not, financial systems are complex and their underlying dynamics are not known [23]. At least two significant characteristics of futures data have been ignored or partially ignored by existing works. First, in quantitative investment, and especially in high-frequency trading, faster and more precise predictions can provide reliable signals for trading strategies, enabling timely and accurate trading decisions. In practice, however, it is much harder to predict the short-term trend than the long-term trend. Most existing methods ignore the interaction between short-term and long-term features, which may cause the model to fail to obtain enough information, thus limiting the accuracy of the prediction [3]. Second, the price of one future usually shows a strong dynamic correlation with other futures [1]. That is, each variable (futures contract) depends not only on its historical values but also on the observations of other variables. For example, a shortage of petroleum may impact the price of synthetic resin downstream in the industrial chain. These two features of real time series data cannot be captured by analytical equations with fixed parametric forms [22, 14].
Deep learning models have proven to be more effective than classical econometric models. Many deep learning methods have been extended to predict multivariate time series [16, 20, 21]. For example, ConvLSTM [16] has been proposed to model the temporal dependence with the recurrent structure of LSTM and to capture the spatial correlation with a traditional convolution. However, LSTM variants do not model the pair-wise dependencies among variables explicitly, which weakens model interpretability [20]. By performing node feature aggregation and updates, graph neural networks (GNNs) [1, 1] extend the deep learning methodology to graphs, which can capture these complex dependencies naturally.
In this work, we focus on modeling multiple futures series by considering the two aforementioned characteristics simultaneously. To capture the spatial correlation among futures, we propose to use STGNNs to model the multivariate futures data as a base model. However, if we only focus on price forecasting or change-point detection (CPD), there will be only one kind of supervised label, which is insufficient for learning comprehensive features that balance long- and short-term considerations. To alleviate this problem, we propose four heterogeneous tasks: futures' close price forecasting, gap regression, moving average (MA) regression, and change-point detection (CPD). To learn holistic long-term and short-term features, we train the proposed model on these four tasks in a continual manner.
The rationale is that the task of gap regression captures the law behind the spreads between the maximum and the minimum within a fixed time interval. The task of real-time futures price forecasting aims to capture short-term features from local fluctuations. In contrast, the task of MA regression depicts the average settle prices over a window of time to show the historical volatility of futures prices, which aims to extract the long-term signal. The task of CPD helps us find abrupt changes in the data when a property of the time series changes, which takes into account both long-term and short-term features. In addition, we force our model to learn an overall experience from the four tasks in a continual way to avoid "catastrophic forgetting" of the long-term and short-term features [11].
Training artificial neural networks on new tasks typically causes them to forget previously learned tasks, an issue referred to as catastrophic forgetting [11]. Zenke et al. [14] propose a quadratic surrogate loss to solve this problem. The key point of the surrogate loss is a per-parameter regularization strength: it tries to "remember" the best model parameters for the previous tasks while adapting to the new task. Existing works [14, 11] approximate the per-parameter regularization strength with the ratio of loss dropping (the gradient of the loss). In our problem, however, the tasks are heterogeneous, and therefore their corresponding losses are heterogeneous as well. Hence, a per-parameter strength based on loss dropping is inappropriate for futures price modeling. In this paper, we instead introduce the mutual information between the extracted features and the original features to substitute for the traditional loss change ratio. The rationale is that the mutual information across different tasks is much smoother than their losses. Experimental results show that our model has higher prediction accuracy and can effectively predict the dynamic trend of multi-futures time series data.
The main achievements, including contributions to the field, can be summarized as follows:
* Conceptually, we propose four heterogeneous tasks to balance both long-term and short-term quantitative trading requirements.
* Technically, we propose a novel Heterogeneous Continual Graph Neural Network (HCGNN) to learn temporal (long-term and short-term) and spatial dependencies. We design a mutual information-based parameter strength to adapt to heterogeneous tasks.
* Empirically, we evaluate the proposed model on a real-world dataset that involves 49 types of futures in the Chinese futures market.
## 2 Related Works
### Deep Learning in Financial Quantitative Investment
A large body of literature has studied deep learning in financial quantitative investment. Solving the problem of long-term dependence in series well, LSTM [13] and its variants [12][11] are the preferred choice of most researchers in the field of financial time series forecasting, including but not limited to stock price forecasting [14], index forecasting [1, 15, 16], commodity price forecasting [17], forex price forecasting [18], and volatility forecasting [19]. Salinas et al. [16] embedded LSTM, or its simplified version gated recurrent units (GRU), as a module in the main architecture. Meanwhile, hybrid models were used in some of the papers. For example, Wu et al. [20] combined CNNs (Convolutional Neural Networks) and LSTM to predict the rise and fall of the stock market, exploiting the ability of convolutional layers to extract useful financial knowledge as the input of LSTM. Combining complementary ensemble empirical mode decomposition (CEEMD), PCA, and LSTM, Zhang et al. [21] constructed a deep learning hybrid prediction model for stock markets based on the idea of "decomposition-reconstruction-synthesis". For multivariate time series, Lai et al. [10] proposed a deep learning framework, namely the long- and short-term time-series network (LSTNet), which employs convolutional neural networks to capture local dependencies. LSTNet cannot fully exploit latent dependencies between pairs of variables and is difficult to scale beyond a few thousand time series. Similar attempts were made in [15] and [16]. In general, these methods often completely disregard available relational information or cannot take full advantage of nonlinear spatial-temporal dependencies.
As a popular model emerging in recent years, the GNN has been applied to different tasks and domains. Due to its good performance in graph modeling, the GNN has been continuously promoted in the financial field by transforming financial tasks into node classification. To represent relational data in the financial field, graphs are usually constructed, including user relation graphs [23], company-based relationship graphs [24], and stock relationship graphs [11, 12]. Sawhney et al. [25] concentrated on stock movement forecasting by blending chaotic multi-modal signals, including inter-stock correlations, via graph attention networks (GATs) [26]. Taking the lead-lag effect in the financial market into account, Cheng et al. [2] constructed heterogeneous graphs to learn from multi-modal inputs and proposed a multi-modality graph neural network (MAGNN) for financial time series prediction. Besides constructing graphs from prior knowledge, for multivariate time series forecasting, researchers have been trying to discover graphical relationships between variables from the data. With multiple ways of graph construction, it becomes challenging to obtain and select the relations for graph construction [20, 14]. For example, Cao et al. [14] proposed StemGNN, which constructs a graph with the attention mechanism and then decomposes the spatial-temporal signal into the spectral and frequency domains with GFT (graph Fourier transform) and DFT (discrete Fourier transform), respectively, for multivariate time-series forecasting. Building on their work, we extend StemGNN to a continual learning setting.
### Multi-task Continual Learning
In the area of multi-task continual learning (CL), a well-known problem is catastrophic forgetting (CF) [13]. That is, when training a new task, the model updates existing network parameters, which may cause accuracy degradation on the former tasks that the model has learned. Using regularization to mitigate CF by penalizing changes to important parameters learned in the past is a popular approach [15]. Rehearsal is another popular approach, which stores or generates examples of previous tasks and replays them when training a new task [21]. Almost all online CL methods are based on rehearsal. Farajtabar et al. [1] addressed the CF issue from a parameter-space perspective and studied an approach to restricting the direction of the gradient updates to avoid forgetting previously learned data. Another alternative CL solution is Elastic Weight Consolidation (EWC) [16] and its variations [15, 17, 1]. The idea of these methods is to preserve the optimal parameters inferred from the previous tasks while optimizing the parameters for the next task. Liu et al. [13] extended EWC to GNNs by proposing a topology-aware weight preserving (TWP) term. However, all these models assume that all the tasks are isomorphic. In this work, we propose a new framework for a heterogeneous multi-task setting.
## 3 Problem Formulation
We study 49 commodity futures in the Chinese market, which form multivariate time series data. In order to effectively represent the structural relationship among futures varieties, we use a graph to describe the inter-series correlations. Let \(\mathcal{G}=(\mathbf{A},\mathbf{X})\), where \(\mathbf{A}\in\mathbb{R}^{N\times N}\) denotes the adjacency matrix of the futures and \(\mathbf{X}=\{x_{i,t}\}\in\mathbb{R}^{N\times T}\) represents the input multivariate time series; \(N\) is the number of futures varieties.
### Four Tasks for Futures Price Modeling
We propose four well-designed tasks for futures data modeling: **price forecasting**, **gap regression**, **moving average (MA) regression**, and **change-point detection (CPD)**.
**Futures Price Forecasting.** Given the historical observations of the previous \(T\) timestamps \(\mathbf{X}_{t-T},\ldots,\mathbf{X}_{t-2},\mathbf{X}_{t-1}\), where \(\mathbf{X}_{t}\in\mathbb{R}^{N}\), the goal of multivariate time series forecasting is to learn a mapping from the historical values to the next \(T^{{}^{\prime}}\) timestamps \(\mathbf{X}_{t},\mathbf{X}_{t+1},\ldots,\mathbf{X}_{t+T^{\prime}-1}\) on the graph \(\mathcal{G}\),
\[[\mathbf{X}_{t-T},\ldots,\mathbf{X}_{t-1},\mathbf{A};\Theta;W]\rightarrow[ \mathbf{X}_{t},\ldots,\mathbf{X}_{t+T^{\prime}-1}],\]
where \(\Theta\), \(W\) are necessary parameters.
**Gap Regression.** We encourage our model to regress the dispersion ratio within a fixed time window to capture short-term features from local fluctuations, as shown in Equation 1. We call this gap regression,
\[\Delta\mathbf{X}_{t}^{(l)}=\frac{\mathbf{X}_{max}-\mathbf{X}_{min}}{l} \tag{1}\]
where \(\mathbf{X}_{max}=\max(\mathbf{X}[:,t:t\!+\!l\!-\!1])\) (\(\mathbf{X}_{min}=\min(\mathbf{X}[:,t:t+l-1])\)) is a row maximum (minimum) operation within the sliding window, which returns row maximum (minimum) values of the slice \(\mathbf{X}[:,t:t+l-1]\), \(l\) is the window length.
**Moving Average Regression.** A moving average is a type of convolution. It can be viewed as an example of a low-pass filter that can smooth signals. In this work, we resort to the moving average to capture the long-term change trends of futures prices. Given a discrete time series with observations \(\mathbf{X}_{t}\), a moving average of \(\mathbf{X}_{t}\) can be calculated as follows,
\[\bar{\mathbf{X}}_{t}=\left(\mathbf{h}*\mathbf{X}\right)_{t}=\sum_{i=-L}^{i=L} \mathbf{h}\left(i\right)\mathbf{X}\left(t-i\right),\]
where \(\bar{\mathbf{X}}_{t}\) is the smoothed time series and \(\mathbf{h}\) denotes a discrete density function (that is \(\sum_{i=-L}^{i=L}\mathbf{h}\left(i\right)=1\)). The input sequence \(\mathbf{X}_{t}\) could be the futures' historical settle prices.
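For concreteness, both regression targets above can be computed directly from a price matrix. The following NumPy sketch is illustrative; the uniform density \(\mathbf{h}(i)=1/(2L+1)\) is one common choice, not necessarily the one used in the paper.

```python
import numpy as np

def gap_targets(X, l):
    """Eq. (1): dispersion ratio over sliding windows of length l; X has shape (N, T)."""
    N, T = X.shape
    gaps = np.empty((N, T - l + 1))
    for t in range(T - l + 1):
        w = X[:, t:t + l]
        gaps[:, t] = (w.max(axis=1) - w.min(axis=1)) / l
    return gaps

def moving_average(X, L):
    """Centered moving average with uniform weights h(i) = 1/(2L+1)."""
    h = np.ones(2 * L + 1) / (2 * L + 1)
    return np.stack([np.convolve(row, h, mode="same") for row in X])
```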
**Predicting Change-Points.** During futures trading, it is of great importance for investors to detect trend changes in price series in a timely and accurate manner, whether they are manual traders or programmed traders. If investors can accurately predict a trend reversal, they can not only increase excess returns but also avoid many unnecessary losses. We have therefore investigated the task of change-point detection to increase the excess returns of investments. We calculate the change-points of the real price data with ruptures [15] and then try to classify them in our model. The labels of change-points are defined as follows,
\[y_{x_{i,t_{k}}}=\left\{\begin{array}{ll}1&\quad\text{change-point at $t_{k}$}\\ 0&\quad\text{otherwise}\end{array}\right.\]
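A sketch of how such labels can be produced offline with the ruptures package cited above is shown below; the PELT detector, the RBF cost, and the penalty value are assumptions for illustration, not necessarily the configuration used in the paper.

```python
import numpy as np
import ruptures as rpt

signal = np.cumsum(np.random.randn(500))           # toy 1-D price series
algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
breaks = algo.predict(pen=10)                      # sorted break indices, ending at len(signal)

labels = np.zeros(len(signal), dtype=int)
labels[breaks[:-1]] = 1                            # y = 1 at each change-point t_k
```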
The four tasks in this article are described as follows,

\[[\mathbf{X}_{t-T},\ldots,\mathbf{X}_{t-1},\mathbf{A};\Theta;W]\overset{F}{\rightarrow}\begin{cases}\left[\mathbf{X}_{t},\ldots,\mathbf{X}_{t+T^{\prime}-1}\right]\\ \Delta\mathbf{X}_{t}^{(l)}\\ \bar{\mathbf{X}}_{t}\\ y(x_{i,t_{k}}),\;t_{k}\geq t\end{cases} \tag{2}\]

where these values can be inferred by an overall forecasting model \(F:=g(f(x;\Theta);W)\) with parameters \(\Theta\) and \(W\), as shown in Figure 1. \(f(x;\Theta)\) is a spatial-temporal module and \(g(f;W)\) is a link function that adapts the features to the downstream tasks stated before. Usually, \(g\) is a multi-layer perceptron. To learn the common knowledge of all tasks while balancing the learning performance of each individual task, we conduct continual learning over the four tasks, which will be introduced in the next section.
## 4 Model
In this section, we provide a detailed formulation of a Heterogeneous Continual Graph Neural Network (HCGNN).
### Overview
Figure 1 illustrates the overall framework of the proposed method. With the inputs of multivariate futures data for the four tasks, we extract features \(f\) with the spatial-temporal modules in \(f(\mathbf{X};\Theta)\). The spatial-temporal modules are connected in a residual mode. We then adapt the features to the four downstream tasks (as shown in Equation 2) with the module \(g(f;W)\), which could be a multi-layer perceptron. The feature extraction process can be defined by the composite function \(F:=g(f(\mathbf{X};\Theta);W)\), as stated in Section 3.1. Finally, we train the four tasks in a continual manner.
Suppose there are four related tasks for futures data modeling, \(\mathcal{T}:=\{\mathcal{T}^{1},\mathcal{T}^{2},\mathcal{T}^{3},\mathcal{T}^{4}\}\). As shown in Equation 2, all four tasks share the feature \(\mathbf{X}_{t-T,t-1}\), but their target spaces are heterogeneous. That is, \(\mathbf{Y}_{t}^{1}=\mathbf{X}_{t,t+T^{\prime}-1}\), \(\mathbf{Y}_{t}^{2}=\Delta\mathbf{X}_{t}^{(l)}\), \(\mathbf{Y}_{t}^{3}=\bar{\mathbf{X}}_{t}\), \(\mathbf{Y}_{t}^{4}\in\{0,1\}\). We train the proposed model in a continual learning manner. Unfortunately, traditional continual learning fails to consider a heterogeneous setting of tasks. In this work, we propose a mutual-information-based method to satisfy such a heterogeneous requirement for continual learning as follows,
\[L_{k+1}(\Theta,W) =L_{k+1}^{\text{new}}(\Theta,W)+\lambda_{1}\sum_{n=1}^{k}\Omega^ {\Theta}\odot\|\Theta-\Theta_{n}^{*}\|_{2}^{2}\] \[+\lambda_{2}\sum_{n=1}^{k}\Omega^{W}\odot\|W-W_{n}^{*}\|_{2}^{2},\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are regularization penalties, and \(L_{k+1}^{\text{new}}(\Theta,W)\) denotes the \((k+1)\)-th-task-related loss function; e.g., it could be a cross-entropy loss for the classification task 4 or a mean square error for the regression tasks. \(\Theta_{n}^{*}\) and \(W_{n}^{*}\) are the optimal parameters of the \(n\)-th task. \(\odot\) is an element-wise product. \(\Omega^{\Theta}\) and \(\Omega^{W}\) play the role of "indicator" matrices that try to "divide" the network parameters \(\Theta\) (\(W\)) into groups for each task. Large values in \(\Omega^{\Theta}\) (\(\Omega^{W}\)) indicate that the corresponding parts in \(\Theta_{n}^{*}\) (\(W_{n}^{*}\)) are important memories for a previous task \(n\) (\(n=1,\ldots,k\)), hence we have to keep them when training task \(k+1\). In other words, we only allow updating the parts of \(\Theta\) (\(W\)) where the indicator values in \(\Omega^{\Theta}\) (\(\Omega^{W}\)) are small. This learning strategy not only ensures that parameters with low importance scores can be freely changed to adapt to new tasks, but also penalizes changes to parameters with high importance scores so that the model can still perform well on previous tasks.
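The regularized objective above can be written compactly in PyTorch. The sketch below is illustrative only: the penalty values and the parameter bookkeeping (one memory dict per previous task) are assumptions chosen for clarity, not taken from the paper.

```python
import torch

def continual_loss(new_task_loss, named_params, memories, lam1=1e-2, lam2=1e-2):
    """New-task loss plus importance-weighted quadratic penalties over past tasks.

    named_params: dict of current parameters (both Theta and W parts).
    memories: one dict per previous task n, mapping a parameter name to the pair
    (omega, p_star) of its importance matrix and optimal value for that task.
    lam1/lam2 mirror lambda_1/lambda_2; their values here are assumptions.
    """
    reg = torch.zeros(())
    for task_memory in memories:
        for name, p in named_params.items():
            omega, p_star = task_memory[name]
            lam = lam1 if name.startswith("theta") else lam2  # split Theta vs. W parameters
            reg = reg + lam * (omega * (p - p_star) ** 2).sum()
    return new_task_loss + reg
```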
The key part of the continual learning framework is the calculation of the per-parameter regularization strength \(\Omega:=\{\Omega^{W},\Omega^{\Theta}\}\) [20]. In [20], \(\Omega\) was defined via the gradient of the loss. However, considering the heterogeneous tasks in our problem, we propose to calculate \(\Omega\) with mutual information, as introduced in Section 4.2.
### Mutual Information Based \(\Omega\)
As we mentioned before, traditionally, people define the per-parameter regularization strength \(\Omega\) as the magnitude of the gradient of loss [20],
\[\Omega=\left\|\frac{\partial L}{\partial\Theta}\right\|,\]
however, for our problem, the four tasks (Section 3.1) are heterogeneous because their targets (Equation (2)) lie in different spaces. This suggests that the corresponding losses of the four tasks will have different scales. For example, the price regression loss may be totally different from the change-point classification loss. Even for two regression tasks, the moving average regression loss and the price regression loss may lie in different metric spaces, because the fluctuation of the real-time price is more severe than that of the smooth average.
The heterogeneous setting of the tasks motivates us to develop a new parameter strength. In this work, we replace the gradient of the training loss with _the gradient of the mutual information between the original features and the extracted features described by the model parameters_.
Specifically, we calculate the mutual information between the original feature \(\mathbf{X}^{k}\) of task \(k\) (\(k\in\{1,2,3,4\}\)) and the feature map \(f(\mathbf{X}^{k};\Theta)\) learned by the deep neural network. For an infinitesimal parameter perturbation \(\Delta\Theta=\{\Delta\boldsymbol{\theta}_{p}\}\), the change in mutual information can be approximated by

\[\text{MI}(\mathbf{X}^{k},f(\mathbf{X}^{k};\Theta+\Delta\Theta))-\text{MI}(\mathbf{X}^{k},f(\mathbf{X}^{k};\Theta))\approx\sum_{p}\frac{\partial\text{MI}(\mathbf{X}^{k},f(\mathbf{X}^{k};\Theta))}{\partial\boldsymbol{\theta}_{p}}\Delta\boldsymbol{\theta}_{p}, \tag{3}\]
where \(\text{MI}(x,y)\) is the mutual information between \(x\) and \(y\).
From Equation (3), we observe that the contribution of the parameter \(\boldsymbol{\theta}_{p}\) to the change of mutual information can be approximated by the gradient of the mutual information with respect to \(\boldsymbol{\theta}_{p}\).
Similarly, we can calculate the mutual information between the predicted value \(g(\mathbf{Z}^{k};W)\) and the ground-truth target \(\mathbf{X}^{k}\), where \(\mathbf{Z}^{k}=f(\mathbf{X}^{k};\Theta)\) is the feature map output by the deep spatial-temporal module, as shown in Figure 1. Consequently, the change in mutual information with respect to \(\Delta W=\{\Delta\boldsymbol{w}_{q}\}\) can be approximated as

\[\text{MI}(g(\mathbf{Z}^{k};W+\Delta W),\mathbf{X}^{k})-\text{MI}(g(\mathbf{Z}^{k};W),\mathbf{X}^{k})\approx\sum_{q}\frac{\partial\text{MI}(g(\mathbf{Z}^{k};W),\mathbf{X}^{k})}{\partial\boldsymbol{w}_{q}}\Delta\boldsymbol{w}_{q}. \tag{4}\]
Figure 1: \(f(\mathbf{X};\Theta)\) is a spatial-temporal module, \(g(f;W)\) is a feed-forward neural network.
According to Equations (3) and (4), we define the per-parameter regularization strength \(\Omega:=\{\Omega^{W},\Omega^{\Theta}\}\) as follows,
\[\Omega^{\Theta} =\bigg{\{}\Big{[}\frac{\partial[\text{MI}(\mathbf{X}^{k+1};\mathbf{ Z}^{k+1})+\text{MI}(g(\mathbf{Z}^{k+1};W);\mathbf{X}^{k+1})]}{\partial\mathbf{\theta}_{p}} \Big{]}_{ij}\bigg{\}}, \tag{5}\] \[\Omega^{W} =\bigg{\{}\Big{[}\frac{\partial\text{MI}(g(\mathbf{Z}^{k+1};W); \mathbf{X}^{k+1})}{\partial\mathbf{w}_{q}}\Big{]}_{ij}\bigg{\}}.\]
### The Calculation of \(\Omega\)
In practice, the mutual information in Equation 5 is notoriously difficult to calculate. To overcome this problem, deriving a lower bound and then maximizing the tractable objective has become a popular choice. Oord et al. (2018) proposed the InfoNCE loss as a proxy objective for maximizing mutual information. Following their framework, we assume that \(\mathbf{X}^{{}^{\prime}}\) is a different view of the input variable \(\mathbf{X}\) created through data transformation. We have
\[\max_{\Theta}\text{MI}(\mathbf{X};f(\mathbf{X};\Theta))\geq\max_{ \Theta}\text{MI}(f(\mathbf{X};\Theta);f(\mathbf{X}^{{}^{\prime}};\Theta)) \tag{6}\] \[\max_{W}\text{MI}(g(\mathbf{Z};W);\mathbf{X})\geq\max_{W}\text{ MI}(g(\mathbf{Z};W);\mathbf{X}^{{}^{\prime}})\]
According to [11],
\[\max_{\Theta}\text{MI}(f(\mathbf{X};\Theta);f(\mathbf{X}^{{}^{\prime}};\Theta ))\geq\log B+\text{InfoNCE}(\{\mathbf{X}_{i}\}_{i=1}^{B}) \tag{7}\]
\[\text{InfoNCE}(\{\mathbf{X}_{i}\}_{i=1}^{B})=\frac{1}{B}\sum_{i=1}^{B}\log\frac{s(\mathbf{X}_{i},\mathbf{X}^{{}^{\prime}}_{i})}{\sum_{j=1}^{B}s(\mathbf{X}_{i},\mathbf{X}^{{}^{\prime}}_{j})} \tag{8}\]
where \(s(\mathbf{X}_{i},\mathbf{X}^{{}^{\prime}}_{j})=e^{\frac{f(\mathbf{X}_{i};\Theta)^{T}f(\mathbf{X}^{{}^{\prime}}_{j};\Theta)}{r}}\) can be regarded as the similarity of \(\mathbf{X}_{i}\) and \(\mathbf{X}^{{}^{\prime}}_{j}\), and \(r\) is the temperature. \(\{\mathbf{X}_{i}\}_{i=1}^{B}\) are samples of the variable \(\mathbf{X}\) and \(B\) is the batch size. We can calculate \(\text{MI}(\mathbf{X};f(\mathbf{X};\Theta))\) with Equations 6-8 [11]; \(\text{MI}(g(\mathbf{Z};W);\mathbf{X})\) can be calculated in the same way.
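Since maximizing Equation 8 is equivalent to minimizing a cross-entropy over the batch similarity matrix, a compact PyTorch sketch is possible; the feature normalization and the temperature value here are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z, z_prime, r=0.1):
    """Negative of the InfoNCE objective in Eq. (8); z, z_prime are (B, d) features of two views."""
    z = F.normalize(z, dim=1)              # unit vectors, so dot product acts as cosine similarity
    z_prime = F.normalize(z_prime, dim=1)
    logits = z @ z_prime.T / r             # log s(X_i, X'_j) up to the exponent scaling
    labels = torch.arange(z.size(0))       # positives sit on the diagonal (i matched with i)
    return F.cross_entropy(logits, labels) # minimizing this maximizes the InfoNCE bound on MI
```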
## 5 Experiments
In this section, we evaluate our model on the tick-level high-frequency futures data on 4 related and heterogeneous tasks. To explore the best structure of the HCGNN model, we perform exhaustive ablation studies on the proposed model to validate and analyze the contribution of various components.
### Datasets
The data used in our experiments are the tick-level high-frequency futures market series from 14 February 2022 to 28 February 2022, including the close price (buy price), the highest price within 1 minute, and the lowest price within 1 minute. There are 388,800 sample points. Considering that some futures varieties have missing data and some newly listed futures varieties have no earlier trading records, we 1) delete commodity futures for which more than 50% of the log-returns are zero; 2) exclude commodity futures with no trading records in the first five days or the last five days; 3) delete commodity futures with missing slots longer than 15 minutes within the study period; and 4) fill the small number of remaining missing values with the mean. All the futures series data are normalized by the Z-score method. We finally keep 49 active commodity futures for the following analysis. We split the data into training, validation, and test sets with a ratio of 7:2:1, as shown in Table 1.
We use an experimental setting similar to that of Cao et al. (2020). Specifically, the channel size of each graph convolution layer is set to 64 and the kernel size of the 1D convolution is 3. The number of training epochs is set to 50, and the learning rate is initialized to 0.001 and decayed with a rate of 0.7 after every 5 epochs. We select the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) to evaluate the three regression tasks, and report precision, recall, accuracy, and F1 to measure the ability of change-point prediction.
### Baseline Methods
The baseline methods include **StemGNN**[14], **MTGNN**[15], **STGCN**[15], **SSTGNN**[16].
### Results
#### Performance
Table 2 illustrates the comparisons of our method with the baseline methods. We see that HCGNN has an obvious advantage in modeling high-frequency time series data. It achieves the best MAE, RMSE, and MAPE for both 1-minute and 15-minute predictions.
#### Visualizations
Figure 2 visualizes the predicted prices of twelve representative futures varieties. The real values and predictions are plotted in pink and blue, respectively. We observe that the proposed method has a strong ability to predict futures prices, especially in capturing significant events.
We also conduct change-point detection to estimate the locations of the events and provide early warning. Part of the results of the change-point detection are shown in Figure 3. The red vertical lines are real change-points and the green ones are the predictions. A line colored in both red and green indicates a correct prediction. We see that the proposed approach can capture most of the important change-points.
The bar charts shown in Figure 4 visualize the dispersion degree (gap) of the price within twenty ticks (20 seconds). We compare the predicted gaps with their corresponding ground truth over 15 minutes. We see that our method can predict the dispersion degree (gap) well, even when there are some big fluctuations at these time slots.

| Datasets | Period (# samples) |
| --- | --- |
| Training set | 02/14-02/25 (278479) |
| Validation set | 02/25-02/27 (79565) |
| Test set | 02/27-02/28 (39782) |

Table 1: Data splitting on 49 futures.
| Models | MAE (1min) | RMSE (1min) | MAPE% (1min) | MAE (15min) | RMSE (15min) | MAPE% (15min) |
| --- | --- | --- | --- | --- | --- | --- |
| StemGNN | 17.04 | 52.35 | 10.67 | 10.64 | 32.87 | 11.04 |
| MTGNN | 84.29 | 171.09 | 1.19 | 64.58 | 126.08 | 0.77 |
| STGCN | 16.79 | 36.88 | 0.1925 | 16.940 | 37.230 | 0.1938 |
| SSTGNN | 223.80 | 389.30 | 4.34 | 68.68 | 135.75 | 0.9976 |
| **HCGNN** | **6.48** | **12.40** | **0.04** | **13.70** | **42.15** | **0.1** |

Table 2: Comparisons of the performance of close price regression between the state-of-the-art approaches and our method. We report 1-minute and 15-minute results.

Figure 2: Close price forecasting results. The ground truth and predictions are colored pink and blue, respectively.

Figure 3: Change-point classification results. The pink curves are the price fluctuations. The change-points of the ground truth and the predictions are plotted as red and green dotted lines, respectively.

Figure 4: Visualizations of gap predictions. The predicted gaps and the corresponding ground truth are plotted as blue and brown bars, respectively.

### Ablation Studies

To better validate the effectiveness of the proposed model, we conduct an ablation study as follows,

* **w/o GAP+CPD+MA**: HCGNN without the GAP, CPD, and MA tasks. We train the model only on the single Forecast task.
* **w/o CPD+MA+Forecast**: HCGNN without the CPD, MA, and Forecast tasks. We train the model only on the single GAP task.
* **w/o GAP+MA+Forecast**: HCGNN without the GAP, MA, and Forecast tasks. We train the model only on the single CPD task.
* **w/o GAP+CPD+Forecast**: HCGNN without the GAP, CPD, and Forecast tasks. We train the model only on the single MA task.
* **w/o GAP+CPD**: HCGNN without the GAP and CPD tasks. We train the model on the forecasting and MA tasks in continual learning with mutual information.
* **w/o MA+CPD**: HCGNN without the MA and CPD tasks. We train the model on the forecasting and gap tasks in continual learning with mutual information.
* **w/o GAP+MA**: HCGNN without the GAP and MA tasks. We train the model on the forecasting and CPD tasks in continual learning with mutual information.
* **w/o MA**: HCGNN without the MA task. We train the model on the forecasting, gap, and CPD tasks in continual learning with mutual information.
* **w/o GAP**: HCGNN without the gap task. We train the model on the forecasting, MA, and CPD tasks in continual learning with mutual information.
* **w/o CPD**: HCGNN without the CPD task. We train the model on the forecasting, MA, and gap tasks in continual learning with mutual information.
* **w/o MI**: HCGNN without mutual information. We train the model on all four tasks continually but replace mutual information with the traditional gradient of the loss.
For all the ablation experiments, we only back-propagate the gradient of the activated task. Taking the setting w/o GAP+CPD+MA as an example, we set requires_gradient to True for the price **Forecast** task while restraining the gradients of GAP, CPD, and MA.
Based on the ablation design described above, we evaluated the various variants on the high-frequency trading dataset and present all the experimental results in Table 3. Values in bold are the best results among these settings. We have several observations based on the 1-minute and 15-minute results: 1) The best performance values appear in almost all the settings, which suggests that the promotion relationships among the four tasks are complex. 2) Most of the best values appear in the settings with more tasks, which suggests that the overall performance improves when more tasks are added. Together with the results of Table 2 (last row), the best performance is achieved when all four tasks are engaged simultaneously. 3) Mutual information is more appropriate for defining parameter strength than the traditional loss-based methods under the heterogeneous multi-task setting.
## 6 Conclusion and Future Work
In this work, we propose a novel graph-based deep learning model called the Heterogeneous Continual Graph Neural Network (HCGNN). It models spatial-temporal futures data and captures the correlations between different varieties. Meanwhile, it can capture long-term and short-term trends through multi-task continual learning. Furthermore, we creatively make use of the mutual information maximization mechanism to address the problem of CF in continual learning. The experiments show that HCGNN consistently outperforms existing approaches in futures time-series forecasting. In future work, we will continue exploring the applicability of HCGNN to stock, bond, forex, option, and other financial markets.
|
2301.02458 | Topics as Entity Clusters: Entity-based Topics from Large Language
Models and Graph Neural Networks | Topic models aim to reveal latent structures within a corpus of text,
typically through the use of term-frequency statistics over bag-of-words
representations from documents. In recent years, conceptual entities --
interpretable, language-independent features linked to external knowledge
resources -- have been used in place of word-level tokens, as words typically
require extensive language processing with a minimal assurance of
interpretability. However, current literature is limited when it comes to
exploring purely entity-driven neural topic modeling. For instance, despite the
advantages of using entities for eliciting thematic structure, it is unclear
whether current techniques are compatible with these sparsely organised,
information-dense conceptual units. In this work, we explore entity-based
neural topic modeling and propose a novel topic clustering approach using
bimodal vector representations of entities. Concretely, we extract these latent
representations from large language models and graph neural networks trained on
a knowledge base of symbolic relations, in order to derive the most salient
aspects of these conceptual units. Analysis of coherency metrics confirms that
our approach is better suited to working with entities in comparison to
state-of-the-art models, particularly when using graph-based embeddings trained
on a knowledge base. | Manuel V. Loureiro, Steven Derby, Tri Kurniawan Wijaya | 2023-01-06T10:54:54Z | http://arxiv.org/abs/2301.02458v3 | # Topics as Entity Clusters: Entity-based Topics from Language Models and Graph Neural Networks
###### Abstract
Topic models aim to reveal the latent structure behind a corpus, typically conducted over a bag-of-words representation of documents. In the context of topic modeling, most vocabulary is either irrelevant for uncovering underlying topics or contains strong relationships with relevant concepts, impacting the interpretability of these topics. Furthermore, their limited expressiveness and dependency on language demand considerable computation resources. Hence, we propose a novel approach for cluster-based topic modeling that employs conceptual entities. Entities are language-agnostic representations of real-world concepts rich in relational information. To this end, we extract vector representations of entities from (i) an encyclopedic corpus using a language model; and (ii) a knowledge base using a graph neural network. We demonstrate that our approach consistently outperforms other state-of-the-art topic models across coherency metrics and find that the explicit knowledge encoded in the graph-based embeddings provides more coherent topics than the implicit knowledge encoded with the contextualized embeddings of language models.
## 1 Introduction
Following the seminal work of Blei et al. (2003), topic models have since become the _de facto_ method for extracting and elucidating prominent themes from corpora. Traditionally, the semantic content of a document is represented by document-term frequencies or, latently, through a mixture of topic distributions, as is common with probabilistic generative models such as _Latent Dirichlet Allocation_ (LDA). Here, individual topics are represented by salient lexical constituents, such as words, that depict subjects of the corpora (Blei et al., 2003; Blei and Lafferty, 2006; Li and McCallum, 2006; Teh et al., 2006; Crain et al., 2012). In recent years, the field of _Natural Language Processing_ (NLP) has seen a trend toward continuous vector representations of words, which look to capture the paradigmatic relationships between concepts by learning distributional co-occurrence patterns in text. For example, large-scale language models such as _BERT_ (Devlin et al., 2018) have explored robust contextualized representations that can explain an array of linguistic phenomena and implicit real-world knowledge (Peters et al., 2018; Tenney et al., 2019; Petroni et al., 2019; Rogers et al., 2020), making them highly advantageous for topic modeling (Sia et al., 2020; Bianchi et al., 2021).
Despite their successes, it becomes evident that certain limitations emerge from conventional topic modeling due to the superfluous nature and limited expressiveness of word-level tokens. These methods rely on data-driven techniques -- while ignoring real-world knowledge -- to uncover statistical patterns and infer relevant lexical items, which results in topics with limited guarantees of interpretability. Furthermore, in a multilingual setting, these models require expansive, resource-intensive lexicons that may not produce a desirable set of shared, language-free universal topics (Ni et al., 2009; Boyd-Graber and Blei, 2009).
To overcome these challenges, in this paper we focus on entities; they are distinct, free-form, human-derived concepts that are represented through encyclopedic definitions and a number of key relational attributes, which offer a better alternative for topic modeling (Chemudugunta et al., 2008; Andrzejewski et al., 2009; Andrzejewski et al., 2011; Allahyari and Kochut, 2016). We supersede word-level topic modeling with real-world entities, as these are both rich in conceptual information and language-agnostic. We demonstrate that by considering purely entity-level units in the text, it is possible to construct topics that are both interpretable to humans and founded on a rich set of prior knowledge. We pursue this approach using two sources to represent entities: (1) contextualized text representations constructed from entity definitions and (2) structured graph data extracted from a knowledge base that we use to train a graph neural network to learn node embeddings. Furthermore, we propose _Topics as Entity Clusters_ (TEC), a novel topic modeling algorithm that can discover meaningful and highly informative topics by clustering either type of entity vectors or a combination of both. Through our experimental procedure, we verify that our approach outperforms a number of state-of-the-art topic models on a range of metrics across numerous datasets.
## 2 Literature Review
Previous research has attempted to represent topic models using entities. For instance, Newman et al. (2006) proposed representing documents with salient entities obtained using _Named Entity Recognition_ (NER) instead of using the words directly. Others have attempted to capture the patterns among words, entities, and topics, either by expanding LDA (Blei et al., 2003) or more complex Bayesian topic models -- see Alghamdi and Alfalqi (2015), Chauhan and Shah (2021), and Vayansky and Kumar (2020) for a general overview -- by describing entities using words (Newman et al., 2006; Kim et al., 2012; Hu et al., 2013).
### Word embeddings
Researchers have also found success by capitalizing on contemporary work in distributional semantics, integrating embedding lookup tables into their frameworks to represent words and documents. For instance, _lda2vec_ (Moody, 2016) combines embeddings with topic models by embedding word, document, and topic vectors into a common representation space. Concurrently, Van Gysel et al. (2016) introduce an unsupervised model that learns unidirectional mappings between latent vector representations of words and entities. Using a shared embedding space for words and topics, Dieng et al. (2020) instead present the _Embedded Topic Model_ (ETM), which merges traditional topic models with the neural-based word embeddings of Mikolov et al. (2013)1.
Footnote 1: We do not compare our model against ETM because to do so requires us to pick the entity embeddings to be used in place of word embeddings.
### Neural topic models
In recent years, researchers have also looked to incorporate modern deep learning techniques that utilize contextualized representations, in contrast to more traditional static embeddings (Zhao et al., 2021). Srivastava and Sutton (2016) propose _ProdLDA_, a neural variational inference method for LDA that explicitly approximates the Dirichlet prior. Other models, such as the _Neural Variational Document Model_ (NVDM; Miao et al., 2017), employ a multinomial factor model of documents that uses amortized variational inference to learn document representations. Bianchi et al. (2021) expand on ProdLDA by presenting _CombinedTM_, which improves the model with contextualized embeddings.
### Knowledge extraction
More related to our work, Piccardi and West (2021) -- leveraging the self-referencing nature of Wikipedia -- define a cross-lingual topic model in which documents are represented by extracted and densified bags-of-links. The adoption of large-scale lexical resources has recently gained popularity in NLP as a way to directly inject knowledge into the model (Gillick et al., 2019; Sun et al., 2020; Liu et al., 2020), which further motivates our research.
### Clustering
Clustering techniques have also proved effective for topic modeling. For instance, Sia et al. (2020) introduce clustering to generate coherent topic models from word embeddings, lowering complexity and producing better runtimes compared to traditional topic modeling approaches. Thompson and Mimno (2020) experiment with different pretrained contextualized embeddings and demonstrate that clustering contextualized representations at the token level is indistinguishable from a Gibbs sampling state for LDA. These findings were also recently corroborated by Zhang et al. (2022) who cluster sentence embeddings and extract top topic words using TF-IDF to produce more coherent and diverse topics than neural topic models. In contrast to our work, none of these works considers the expressiveness of entities.
## 3 Topics as Entity Clusters
In this section, we describe the steps necessary to perform topic modeling with entities and the novel approach for extracting salient entities to represent topics. We present an overview of the model in Figure 1.
### Entity representation
We explore methods to encode entities for cluster-based topic modeling. Broadly, we construct expressive entity representations from two sources of information: _Implicit Knowledge_ from a large pre-trained language model and _Explicit Knowledge_ extracted directly from a knowledge graph.
**Language embeddings.** Language models are used to construct document representations and depict knowledge obtained implicitly through a considerable amount of unsupervised learning (Petroni et al., 2019). To encode the entities, we first extract their definitions from an encyclopedic corpus before using these descriptions to build sentence embeddings to represent each entity -- for example, utilizing Reimers and Gurevych (2019). Using the text description of the entity rather than the entity alone as a query for these unsupervised models elicits a stronger response due to their highly contextualized nature (Ethayarajh, 2019).
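As a minimal sketch of this step, one could encode entity definitions with a sentence encoder in the spirit of Reimers and Gurevych (2019); the model name and the definitions below are placeholders, not necessarily the choices made in the paper.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder; any sentence model works
definitions = {                                    # hypothetical entity id -> definition text
    "Q5788": "Petra is an ancient city in southern Jordan.",
    "Q35694": "The jaguar is a large cat species native to the Americas.",
}
ids = list(definitions)
E_lm = model.encode([definitions[i] for i in ids], normalize_embeddings=True)
lang_vecs = dict(zip(ids, E_lm))                   # one normalized vector per entity
```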
**Graph embeddings.** Another advantage of using entities from lexical resources such as a knowledge base is that they provide a systematic framework for organizing and describing curated relationships between concepts. Similar to a semantic network, these entities exhibit a complex structure that provides meaningful information about their content, provided in the form of a directed graph. For instance, the triplet _<Petra, Culture, Nabataean kingdom>_ contains intricate encyclopedic knowledge about the city of Petra that can be difficult to learn from less specialized corpora. Language models may fail to adequately capture this relationship due to the abstract notion of the concept. Hence, to effectively capture these human-curated and refined factual relationships, we employ the random-walk-based node embedding method _node2vec_ (Grover and Leskovec, 2016) to encode information about the sophisticated semantic structure between these entities.
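A minimal sketch of how such graph embeddings could be obtained with the `node2vec` Python package; the toy graph, the single edge shown, and all hyperparameters are illustrative assumptions rather than the paper's actual settings.

```python
import networkx as nx
from node2vec import Node2Vec

# Toy slice of a knowledge graph: nodes are entity IDs, edges come from
# triplets such as <Petra, Culture, Nabataean kingdom> with the predicate
# dropped. The QIDs here are shown for illustration only.
graph = nx.Graph()
graph.add_edge("Q5788", "Q208404")

# Placeholder hyperparameters, not the paper's configuration.
n2v = Node2Vec(graph, dimensions=128, walk_length=30, num_walks=100)
model = n2v.fit(window=5, min_count=1)  # returns a gensim Word2Vec model
graph_embedding = model.wv["Q5788"]
```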
**Combining Approaches.** In this work, we balance the contributions of the language model and the graph embedding model. For some normalized language and graph embeddings \(\hat{E}_{LM}\in\mathbb{R}^{d_{LM}}\) and \(\hat{E}_{G}\in\mathbb{R}^{d_{G}}\), respectively, we weight their contributions using the following concatenation function,
\[\hat{E}=\left[\sqrt{\frac{1}{1+\alpha}}\cdot\hat{E}_{LM}^{\mathsf{T}},\sqrt{ \frac{\alpha}{1+\alpha}}\cdot\hat{E}_{G}^{\mathsf{T}}\right]^{\mathsf{T}} \tag{1}\]
where \(\alpha\in\mathbb{R}\) is the scalar ratio of embedding weights and \(\hat{E}\in\mathbb{R}^{d_{LM}+d_{G}}\) is our final embedding used in entity clustering. We take the square root to guarantee that the final embedding is normalized similarly to the input embeddings.
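A direct Python rendering of Eq. (1); the toy vectors below are arbitrary unit-norm inputs chosen only to demonstrate that the combined embedding remains unit-normalized.

```python
import numpy as np

def combine_embeddings(e_lm: np.ndarray, e_g: np.ndarray, alpha: float) -> np.ndarray:
    """Weighted concatenation of Eq. (1); inputs assumed unit-normalized."""
    return np.concatenate([
        np.sqrt(1.0 / (1.0 + alpha)) * e_lm,
        np.sqrt(alpha / (1.0 + alpha)) * e_g,
    ])

# alpha = 0 keeps only the language embedding; large alpha favors the graph.
e = combine_embeddings(np.ones(4) / 2.0, np.ones(3) / np.sqrt(3), alpha=1.0)
print(np.linalg.norm(e))  # ~1.0, so the result is also unit-normalized
```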
### Entity clustering
Independent of the specific method, we represent entities in an embedding space. Clustering allows us to define centroids, which we interpret as topic centroids. Therefore, we model topics to have representations in a shared embedding space with entities. To this effect, we apply K-Means to the set of entities contained in a corpus, using the implementation available in FAISS (Johnson et al., 2021).
Figure 1: Overview of Topics as Entity Clusters (TEC). The top half illustrates the processing of entity embeddings, topic centroids, and top entities per topic, while the bottom half illustrates the inference of the top topics per document.
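A minimal sketch of the clustering step using the FAISS K-Means implementation cited above; the dimensionality, number of topics, and the random entity matrix are placeholders.

```python
import numpy as np
import faiss

# entity_matrix: one row per entity in the corpus (illustrative random data).
d, K = 768, 100
entity_matrix = np.random.rand(10_000, d).astype("float32")

kmeans = faiss.Kmeans(d, K, niter=20, seed=42)
kmeans.train(entity_matrix)

topic_centroids = kmeans.centroids                       # shape (K, d)
_, assignments = kmeans.index.search(entity_matrix, 1)   # nearest centroid per entity
```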
### Entity extraction
We adopt a two-stage approach to extract entities, which allows us to represent text as a language-agnostic collection of entity identifiers arranged in order of appearance.
**Pattern matching**. We first extract candidate entities by finding language-specific text patterns in the original text. Inspired by Mendes et al. (2011) and Daiber et al. (2013), we use the deterministic Aho-Corasick algorithm (Aho and Corasick, 1975) due to its speed and effectiveness in extracting text patterns. The only language-specific components are the preprocessing components, such as lemmatizers, that increase the number of relevant entity matches. These preprocessing components are independent of each other. Consequently, we can expand the model to additional languages without compromising the performance of the others.
**Disambiguation**. Since text patterns could represent multiple entities -- for example, acronyms of organizations or people sharing the same name -- we perform disambiguation and entity filtering. For each textual pattern and its corresponding set of entities, we choose the entity that best fits the text. We embed the text using the same model used to derive the language embeddings and calculate their cosine similarity. We choose the best candidate based on the highest score if it is above a set similarity threshold. Otherwise, we discard it.
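The following Python sketch illustrates the two stages, assuming the `pyahocorasick` package for pattern matching; the surface form, the candidate QIDs, and the 0.4 similarity threshold are hypothetical, and all vectors are assumed unit-normalized so that the dot product equals cosine similarity.

```python
import ahocorasick  # pyahocorasick
import numpy as np

# Stage 1: automaton from lowercase surface forms to candidate entity IDs.
automaton = ahocorasick.Automaton()
automaton.add_word("petra", ("petra", ["Q5788", "Q111"]))  # city vs. given name
automaton.make_automaton()

def extract_candidates(text):
    """Deterministic Aho-Corasick pattern matching over preprocessed text."""
    for _end, (pattern, candidates) in automaton.iter(text.lower()):
        yield pattern, candidates

def disambiguate(text_vec, candidates, entity_embeddings, threshold=0.4):
    """Stage 2: keep the candidate most similar to the text embedding,
    if its score clears the (placeholder) threshold; otherwise discard."""
    scores = {q: float(np.dot(text_vec, entity_embeddings[q])) for q in candidates}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```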
### Topic inference
Topic inference requires representing documents in the same embedding space as entities and topic centroids. To accomplish this, we extract entities as described in Section 3.3. We then obtain the document representation by calculating the weighted average of those entity embeddings. With \(K\) representing the number of topics, we can now measure the Euclidean distances \(\textbf{d}=\left[d_{1},d_{2},...,d_{K}\right]^{\mathsf{T}}\) of the document to the topic centroids. Documents are assumed to contain a share of all topics. We infer the topic weight contributions \(\textbf{w}=\left[w_{1},w_{2},...,w_{K}\right]^{\mathsf{T}}\) to the document using inverse distance squared weighted interpolation2 (Shepard, 1968):
Footnote 2: If we consider the embedding of a document as an interpolation of topic centroids, squaring the distances yields more weight to the closest topic centroids.
\[w_{i}=\frac{d_{i}^{-2}}{\sum_{j=1}^{K}d_{j}^{-2}}\quad,\forall\,i\in\left\{1,...\,,K\right\}. \tag{2}\]
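A minimal numpy sketch of Eq. (2), assuming `centroids` holds the \(K\) topic centroids row-wise:

```python
import numpy as np

def topic_weights(doc_vec: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Inverse distance squared weighting of Eq. (2) over K topic centroids."""
    d = np.linalg.norm(centroids - doc_vec, axis=1)  # Euclidean distances
    inv_sq = 1.0 / np.maximum(d, 1e-12) ** 2         # guard against d = 0
    return inv_sq / inv_sq.sum()                     # weights sum to 1
```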
### Reranking top entities
A list of highly descriptive entities, weighted by their importance, can be used to express the theme of a topic. However, the closest entities to topic centroids are not necessarily the most descriptive as that does not consider entity co-occurrences in the corpus. In Algorithm 1, we propose a novel inference-based method to rerank top entities, which assigns the entity frequency of a document to the top topic centroid, as measured by **w**.
We start by assigning entities to topics based on their distances, weighted by a small value \(\epsilon\) (Lines 1-3). This ensures all topics have top entities. We follow by inferring the top topic for each document and updating the top entities in that topic using the document entity frequency. The update is proportional to the inference score, \(\max\left(\textbf{w}\right)\), as it represents the degree of confidence in the inference (Lines 4-10). To increase topic diversity, we only update the top topic. Lastly, we calculate the relative frequencies to obtain the top entities per topic (Lines 11-13).
```
Input:  number of topics K, number of top entities per topic N,
        small initialization weight ε, documents Docs,
        all entity identifiers in the corpus entities, entity embeddings Ê
Output: lists of top entities per topic topEntities;
        each element is a list of pairs (entityId, frequency)

 1  for topicId ∈ {1, ..., K} do
 2      topEntities[topicId] ← ClosestEntities(topicId, Ê, N, ε)
 3  end for
 4  for doc ∈ Docs do
 5      w, entityFrequency ← TopicInference(doc)        // Section 3.4
 6      topTopic ← argmax(w)
 7      for entityId ∈ entities do
 8          topEntities[topTopic][entityId] ←
                topEntities[topTopic][entityId] + max(w) · entityFrequency[entityId]
 9      end for
10  end for
11  for topicId ∈ {1, ..., K} do
12      topEntities[topicId] ← RelativeFrequency(topEntities[topicId])
13  end for
```
**Algorithm 1:** Reranking top entities
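A Python sketch of Algorithm 1, assuming `infer` and `closest_entities` are callables implementing the inference of Section 3.4 and the ε-seeding, respectively; the names and the accumulating update follow our reading of Lines 4-10.

```python
from collections import Counter, defaultdict

def rerank_top_entities(docs, closest_entities, infer, K, eps=1e-3):
    """Sketch of Algorithm 1; `infer(doc)` returns (w, entity_frequency)."""
    top = defaultdict(Counter)
    for topic_id in range(K):                     # Lines 1-3: seed with eps
        for entity_id in closest_entities(topic_id):
            top[topic_id][entity_id] = eps
    for doc in docs:                              # Lines 4-10: accumulate
        w, entity_frequency = infer(doc)
        top_topic = int(max(range(K), key=lambda i: w[i]))
        for entity_id, freq in entity_frequency.items():
            top[top_topic][entity_id] += max(w) * freq
    for topic_id in range(K):                     # Lines 11-13: normalize
        total = sum(top[topic_id].values())
        top[topic_id] = Counter({e: f / total for e, f in top[topic_id].items()})
    return top
```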
## 4 Experiments
We study the performance of TEC and qualitatively compare it to other state-of-the-art topic models using a set of corpora preprocessed into lists of entity identifiers. By contrasting the top entities and measuring results across several coherence metrics, we can infer the quality of each topic model.
In summary, we find that TEC produces significantly more coherent topics. These gains are more pronounced when using graph embeddings.
### Entity extractor
We build the entity extractor using Wikidata3 as the source of our knowledge base and Wikipedia4 as the encyclopedic corpus. Wikidata currently has more than 97 million entities, most of which would form a long tail of entities in a topic model; therefore, we restrict the entity extractor to only include the top one million entities, as ranked by QRank5 -- a public-domain project that ranks page views across Wikimedia projects. Out of these entities, we select those matching at least one predicate-object pair from lists of preselected objects for the predicates "instance of", "subclass of", and "facet of". We generate the entity embeddings used in disambiguation using _SBERT6_. Entities are matched to Wikipedia articles using Wikidata identifiers.
Footnote 3: Wikidata JSON dump downloaded on March 24, 2022.
Footnote 4: Collected with Beautiful Soup on March 28, 2022.
Footnote 5: QRank downloaded on March 24, 2022.
Footnote 6: We use paraphrase-multilingual-mpnet-base-v2.
### Corpora
We evaluate all models on various corpora: _Wikipedia_, _CC-News7_, and _MLSUM_ (Scialom et al., 2020); Table 1 contains a summary of their statistics. The _Wikipedia_ corpus consists of a sample of preprocessed documents, each matching an entity in the vocabulary. _CC-News_ consists of monolingual news articles written in English. _MLSUM_ is a collection of news articles written in German, Spanish, French, Russian, and Turkish.
Footnote 7: _CC-News_ available at Hugging Face.
We preprocess the documents according to Section 3.3. The language-specific components for documents in English, German, Spanish, and French are _spaCy_ lemmatizers (Honnibal and Montani, 2017); for documents in Russian we use _pymorphy2_ (Korobov, 2015), and for documents in Turkish we use _zeyrek8_.
Footnote 8: _zeyrek_ available on GitHub.
### Models
We start by comparing our approach with LDA (Blei et al., 2003) due to its pervasiveness in the topic model literature. Specifically, we use the _Mallet_ implementation of LDA (McCallum, 2002). In addition, we compare against other state-of-the-art topic models from the literature.
**NVDM-GSM.** The _Neural Variational Document Model_ (NVDM) is a neural network-based topic model that discovers topics through variational inference training; Miao et al. (2017) propose a number of ways to construct topic distributions, such as a _Gaussian Softmax_ (GSM) function.
**ProdLDA.** Similar to NVDM-GSM, this model is an autoencoder trained to reconstruct the input embeddings with variational inference-based training (Srivastava and Sutton, 2016).
**CombinedTM.** This model is a direct extension to ProdLDA that includes pre-trained contextualized embeddings from a pretrained language model (Bianchi et al., 2021). In this case, the authors extract contextual vectors for documents using _SBERT_.
**WikiPDA.** We also consider the Wikipedia-based Polyglot Dirichlet Allocation model, an LDA model trained on entities extracted from Wikipedia (Piccardi and West, 2021). WikiPDA has its own preprocessing method.
### Metrics
Topic models produce subjective results, so we calculate different measures to understand model performance. We use topic coherence measures to estimate the relatedness of the top entities of a topic (Röder et al., 2015).
\[C_{f_{t}}=\frac{1}{T}\sum_{t\in\{1..T\}}\left[\frac{2}{N(N-1)}\sum_{ \begin{subarray}{c}i\in\{1..N\}\\ j\in\{1..i-1\}\end{subarray}}f_{t}(w_{i},w_{j})\right] \tag{3}\]
All coherence metrics are calculated using Eq. 3, as implemented in _gensim_ (Rehurek and Sojka, 2011), over the \(N\) most relevant entities for all topics \(t\in\{1..T\}\), with \(N=10\). The specific element \(f_{t}\) changes for each measure.
**Coherence UCI.** Newman et al. (2010) present a coherence measure that averages the _Pointwise Mutual Information_ (PMI, Eq. 4) of all entity pairs in a topic using a sliding window of entities:
\[\mathrm{PMI}(w_{i},w_{j})=\log\left(\frac{p(w_{i},w_{j})}{p(w_{i})p(w_{j})} \right). \tag{4}\]
| **Corpus** | Vocabulary | Documents | Avg. Entities per Document |
| --- | --- | --- | --- |
| WIKIPEDIA | 359,507 | 359,507 | 44.62 |
| CC-NEWS | 94,936 | 412,731 | 13.97 |
| MLSUM | 89,383 | 661,422 | 11.71 |

Table 1: Statistics of the corpora.
- **LDA:** United Nations (Q1065), Teenage Mutant Ninja Turtles (Q12296099), Miles Davis (Q93341), Star Trek (Q1092), United Nations Security Council (Q37470), United Nations Relief and Works Agency for Palestine Refugees in the Near East (Q846656), public health (Q189603), Dizzy Gillespie (Q49575), Greenpeace (Q81307), John Coltrane (Q7346)
- **NVDM-GSM:** bitcoin (Q131723), Apple Inc. (Q312), Halloween [film franchise] (Q1364022), Fisker Inc. [automaker] (Q1420893), IBM (Q37156), Michael Myers (Q1426891), Yakuza [video game series] (Q2594935), Facebook (Q355), cryptocurrency (Q13479982), Vancouver (Q234053)
- **ProdLDA:** Paul McCartney (Q2599), Maxim Gorky (Q12706), Lucy-Jo Hudson (Q1394969), Bob Dylan (Q392), sport utility vehicle (Q192152), FIFA World Cup (Q19317), sedan (Q190578), American football (Q41323), concept car (Q850270), racing automobile (Q673687)
- **CombinedTM:** vocalist (Q2643890), United States of America (Q30), music interpreter (Q3153559), England (Q21), Ryuichi Sakamoto (Q345494), human rights (Q8458), David Tennant (Q214601), Harry Potter (Q76164749), Comedian (Q2591461), Aoni Production (Q1359479)
- **WikiPDA:** a cappella (Q185298), X-Men (Q128452), Marvel Comics (Q173496), To Be [music album] (Q17025795), The Allman Brothers Band (Q507327), proton-proton chain reaction (Q223073), features of the Marvel Universe (Q5439694), Features of the Marvel Cinematic Universe (Q107088537), Uncanny X-Men (Q1399747), member of parliament (Q486839)
- **TEC \(E_{LM}\) (\(\alpha=0\)):** Google (Q95), Amazon (Q3884), Microsoft (Q2283), open source (Q39162), Apple Inc. (Q312), Facebook (Q355), Meta Platforms (Q380), Cisco Systems (Q173395), Salesforce.com (Q941127), Citrix Systems (Q916196)
- **TEC \(E_{G}\) (\(\alpha=\infty\)):** Mike Tyson (Q79031), World Boxing Organization (Q830940), International Boxing Federation (Q742944), Floyd Mayweather (Q318204), World Boxing Association (Q725676), Tyson Fury (Q1000592), Manny Pacquiao (Q486359), World Boxing Council (Q724450), Evander Holyfield (Q313451), Joe Frazier (Q102301)

Table 2: Example topics using the WIKIPEDIA corpus for models trained with 300 topics. Each topic is represented by its top 10 entities.
**WIKIPEDIA -- Number of Topics = 100**

| Model | C_NPMI | C_UCI | U_Mass | TD | TQ |
| --- | --- | --- | --- | --- | --- |
| LDA | -0.05 (0.01) | -4.72 (0.21) | -11.23 (0.23) | **0.98** (0.00) | -0.05 (0.01) |
| NVDM-GSM | 0.06 (0.02) | -2.66 (0.36) | -9.17 (0.31) | 0.87 (0.02) | 0.05 (0.02) |
| ProdLDA | -0.16 (0.03) | -6.55 (0.44) | -12.91 (0.52) | 0.62 (0.16) | -0.10 (0.03) |
| CombinedTM | -0.10 (0.02) | -5.94 (0.35) | -11.54 (0.49) | 0.22 (0.03) | -0.02 (0.00) |
| WikiPDA | 0.08 (0.01) | 0.37 (0.14) | **-3.60** (0.11) | 0.73 (0.01) | 0.06 (0.00) |
| TEC \(E_{LM}\) (\(\alpha=0\)) | 0.18 (0.01) | 0.66 (0.18) | -5.91 (0.35) | 0.95 (0.00) | 0.17 (0.01) |
| TEC \(\alpha=1/2\) | 0.21 (0.01) | 1.10 (0.21) | -4.79 (0.27) | 0.95 (0.01) | 0.20 (0.01) |
| TEC \(\alpha=1\) | 0.21 (0.01) | 1.10 (0.17) | -4.82 (0.21) | 0.96 (0.01) | 0.21 (0.01) |
| TEC \(\alpha=2\) | 0.22 (0.01) | 1.26 (0.17) | -4.67 (0.23) | 0.97 (0.00) | 0.22 (0.01) |
| TEC \(E_{G}\) (\(\alpha=\infty\)) | **0.24** (0.01) | **1.67** (0.15) | -4.94 (0.26) | 0.97 (0.00) | **0.23** (0.01) |

**WIKIPEDIA -- Number of Topics = 300**

| Model | C_NPMI | C_UCI | U_Mass | TD | TQ |
| --- | --- | --- | --- | --- | --- |
| LDA | 0.09 (0.01) | -2.00 (0.19) | -9.42 (0.22) | **0.97** (0.00) | 0.09 (0.01) |
| NVDM-GSM | 0.06 (0.02) | -2.31 (0.37) | -9.23 (0.39) | 0.69 (0.03) | 0.04 (0.02) |
| ProdLDA | -0.14 (0.02) | -5.82 (0.37) | -13.28 (0.36) | 0.44 (0.17) | -0.06 (0.03) |
| CombinedTM | -0.13 (0.03) | -6.20 (0.47) | -13.01 (0.38) | 0.15 (0.03) | -0.02 (0.00) |
| WikiPDA | 0.06 (0.01) | -0.30 (0.13) | -5.72 | … | … |

Table 3: Results on the WIKIPEDIA corpus for all topic models.
**MLSUM -- Number of Topics = 100**

| Model | C_NPMI | C_UCI | U_Mass | TD | TQ |
| --- | --- | --- | --- | --- | --- |
| LDA | -0.02 (0.01) | -3.89 (0.26) | -10.49 (0.28) | **0.96** (0.00) | -0.02 (0.01) |
| NVDM-GSM | 0.08 (0.01) | -1.19 (0.32) | -7.44 (0.64) | 0.59 (0.09) | 0.04 (0.01) |
| ProdLDA | -0.21 (0.02) | -6.79 (0.42) | -12.95 (0.42) | 0.36 (0.04) | -0.08 (0.01) |
| CombinedTM | -0.25 (0.01) | -7.54 (0.24) | -12.67 (0.56) | 0.25 (0.09) | -0.06 (0.02) |
| TEC \(E_{LM}\) (\(\alpha=0\)) | 0.16 (0.01) | 0.27 (0.25) | -6.80 (0.23) | 0.79 (0.01) | 0.13 (0.01) |
| TEC \(\alpha=1/2\) | **0.24** (0.01) | 1.48 (0.16) | -5.49 (0.16) | 0.82 (0.01) | 0.19 (0.01) |
| TEC \(\alpha=1\) | **0.24** (0.01) | 1.45 (0.17) | -5.58 (0.15) | 0.82 (0.01) | 0.19 (0.01) |
| TEC \(\alpha=2\) | **0.24** (0.01) | **1.53** (0.15) | **-5.45** (0.19) | 0.82 (0.01) | **0.20** (0.01) |
| TEC \(E_{G}\) (\(\alpha=\infty\)) | **0.24** (0.01) | 1.46 (0.20) | -5.62 (0.19) | 0.83 (0.01) | **0.20** (0.01) |

**MLSUM -- Number of Topics = 300**

| Model | C_NPMI | C_UCI | U_Mass | TD | TQ |
| --- | --- | --- | --- | --- | --- |
| LDA | 0.03 (0.01) | -3.06 (0.17) | -10.79 (0.18) | **0.88** (0.00) | 0.02 (0.01) |
| NVDM-GSM | 0.13 (0.01) | -0.45 (0.21) | -7.00 (0.28) | 0.42 (0.04) | 0.06 (0.01) |
| ProdLDA | -0.16 (0.01) | -5.54 (0.13) | -12.01 (0.16) | 0.17 (0.01) | -0.03 (0.00) |
| CombinedTM | -0.20 (0.01) | -6.44 (0.15) | -12.13 (0.17) | 0.12 (0.01) | -0.02 (0.00) |
| TEC \(E_{LM}\) (\(\alpha=0\)) | 0.14 (0.01) | -0.24 (0.14) | -8.37 (0.14) | 0.74 (0.01) | 0.10 (0.01) |
| TEC \(\alpha=1/2\) | 0.22 (0.01) | 1.17 (0.12) | -6.84 (0.14) | 0.76 (0.00) | 0.16 (0.01) |
| TEC \(\alpha=1\) | 0.22 (0.01) | 1.22 (0.13) | -6.79 (0.12) | 0.76 (0.01) | **0.17** (0.01) |
| TEC \(\alpha=2\) | 0.22 (0.01) | **1.27** (0.08) | **-6.75** (0.12) | 0.76 (0.01) | **0.17** (0.01) |
| TEC \(E_{G}\) (\(\alpha=\infty\)) | **0.23** (0.01) | 1.26 (0.09) | -6.77 (0.13) | 0.76 (0.01) | **0.17** (0.00) |

Table 5: Results on the MLSUM corpus for all topic models. We record the results on five metrics: C_NPMI (normalized pointwise mutual information, more correlated with human judgments), C_UCI (PMI-based coherence), U_Mass (how often a word appears with another against how often it appears on its own), TD (Topic Diversity: the ratio of unique entities to total entities), and TQ (Topic Quality: Topic Diversity × C_NPMI). The results are reported as averages (95% confidence interval) based on 10 random experimental runs. Our model outperforms all baselines across all metrics except for TD.
**Coherence NPMI.** Bouma (2009) proposes an alternative coherence measure, where the above elements are substituted by _Normalized PMI_ (NPMI, Eq. 5), which was found to correlate better with human ratings of topic coherence:
\[\mathrm{NPMI}(w_{i},w_{j})=\frac{\mathrm{PMI}(w_{i},w_{j})}{-\log\left(p(w_{i},w _{j})\right)}. \tag{5}\]
**Coherence UMass.** Mimno et al. (2011) suggest the asymmetrical coherence measure _UMass_ (Eq. 6), which is also calculated based on intrinsic entity co-occurrences conditioned on top entity occurrences:
\[\mathrm{UMass}(w_{i},w_{j})=\log\left(\frac{p(w_{i},w_{j})}{p(w_{j})}\right). \tag{6}\]
**Topic diversity and quality.** Topic diversity (**TD**) is the ratio between the number of unique entities and the total number of entities, considering the top 25 entities per topic (Dieng et al., 2020). Topic quality (**TQ**) is the product of topic coherence, as measured by \(\mathrm{C_{NPMI}}\), and topic diversity.
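The per-pair elements of Eqs. (4)-(6) and the TD ratio can be sketched in Python as follows, assuming the joint and marginal probabilities have already been estimated from the corpus; the small ε smoothing term is an assumption added here to avoid log(0):

```python
import numpy as np

def npmi(p_ij: float, p_i: float, p_j: float, eps: float = 1e-12) -> float:
    """Normalized PMI (Eq. 5) from joint and marginal probabilities."""
    pmi = np.log((p_ij + eps) / (p_i * p_j))
    return pmi / -np.log(p_ij + eps)

def u_mass(p_ij: float, p_j: float, eps: float = 1e-12) -> float:
    """UMass coherence element (Eq. 6), conditioned on the second entity."""
    return np.log((p_ij + eps) / p_j)

def topic_diversity(topics: list) -> float:
    """TD: unique entities over total entities across the top-25 lists."""
    all_entities = [e for t in topics for e in t[:25]]
    return len(set(all_entities)) / len(all_entities)
```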
### Experiment specifications
For each combination of model, corpus, and number of topics -- 100 and 300 -- we compute metrics over 10 runs and present both the averages and the 95% confidence interval range in Tables 3, 4, and 5. We use sequential seeds for the sake of reproducibility. We use implementation defaults for all models, including TEC, with the exceptions of NVDM-GSM, which we run for 100 epochs, and ProdLDA and CombinedTM, which we each run for 250 epochs. We report metrics for the epoch with the highest \(\mathrm{C_{NPMI}}\).
We run the experiments in a shared Linux machine with 72 CPU cores, 256GB RAM and use a Tesla V100-SXM2-16GB GPU.
### Qualitative results
We present exemplar topics for the different models in Table 2. Using visual inspection, we find cases where some top entities do not match the general topic theme. These must be attributed to limitations in the models, as they all share the same preprocessed corpora, with the exception of WikiPDA. Overall, these issues seem less prevalent with TEC. Particularly for ProdLDA and CombinedTM, we also find unrelated entities that linger across many topics, with the lingering entities varying between runs. We also find topics covering multiple themes, such as the ones resulting from LDA and WikiPDA.
### Quantitative results
For all combinations of corpora and number of topics, TEC achieves better \(\mathbf{C_{NPMI}}\), \(\mathbf{C_{UCI}}\), \(\mathbf{U_{Mass}}\) and \(\mathbf{TQ}\) when compared to the other models. \(\mathbf{U_{Mass}}\) has a single exception where WikiPDA performs better for 100 topics.
As opposed to word-based preprocessing, entity extraction results in sparser representations of corpora; for that reason, we observe significantly worse results than those presented in the original topic model papers.
Documents generally assume the reader has background knowledge on the subject. Models like LDA, NVDM-GSM, ProdLDA, and WikiPDA learn based on entity co-occurrences. Relationships that are not explicit are neglected, which explains their weaker performance in comparison to TEC. WikiPDA considers the relationships across entities during its preprocessing; however, its training is based on LDA, so the same limitations apply to it. CombinedTM uses implicit knowledge but, much like ProdLDA, seems to be affected by component collapse, as can be verified by their low \(\mathbf{TD}\) scores -- a state where variational autoencoders get stuck in a poor local optimum, due to the choice of objective function, that results in topics being similar (Masada, 2022).
The results suggest that both embedding types are valuable sources of knowledge to use with topic models. Models using graph-based embeddings perform significantly better than models using embeddings obtained with language models, and we only find a few circumstances where some combination of both embeddings produces better results than graph-based embeddings alone.
## 5 Conclusions
We explore entity-based topic models based on the clustering of vector representations of entities. TEC internally represents documents using language-agnostic entity identifiers, which results in a single set of topics shared across languages and allows it to extend to new languages without sacrificing the performance of the existing languages.
Our results suggest that the implicit knowledge provided by language models is superior to the state-of-the-art in terms of coherence and quality. Nevertheless, these results are surpassed by the explicit knowledge encoded in graph-based embeddings, using the human-curated Wikidata knowledge base as a source.
## 6 Limitations
TEC assumes that documents contain entities, yet this is not necessarily the case. The proposed model is specifically valuable for entity-rich applications such as news articles. A potential solution we are interested in exploring in the future is to train a self-supervised model to generate word embeddings using the bag-of-words as input and the document embedding as the target. This would give word embeddings a representation in the shared embedding space.
We produce graph-based embeddings using _node2vec_ -- a shallow neural network that may be unable to learn deeper, more complex relationships between entities. We believe that our results can improve if we obtain embeddings using a multi-layer graph neural network with unsupervised training. Furthermore, while our approach outperforms other models on a range of metrics, it still lags behind when it comes to topic diversity. Finding a way to improve the diversity of the topics while preserving their intrinsic performance could make for important future work.
Lastly, updating the knowledge base forces a retraining of the model, which does not currently guarantee a direct relationship between former and new topics. This requires additional research, as it can be a hindrance for some applications.
|
2301.12893 | Formalizing Piecewise Affine Activation Functions of Neural Networks in
Coq | Verification of neural networks relies on activation functions being
piecewise affine (pwa) -- enabling an encoding of the verification problem for
theorem provers. In this paper, we present the first formalization of pwa
activation functions for an interactive theorem prover tailored to verifying
neural networks within Coq using the library Coquelicot for real analysis. As a
proof-of-concept, we construct the popular pwa activation function ReLU. We
integrate our formalization into a Coq model of neural networks, and devise a
verified transformation from a neural network N to a pwa function representing
N by composing pwa functions that we construct for each layer. This
representation enables encodings for proof automation, e.g. Coq's tactic lra --
a decision procedure for linear real arithmetic. Further, our formalization
paves the way for integrating Coq in frameworks of neural network verification
as a fallback prover when automated proving fails. | Andrei Aleksandrov, Kim Völlinger | 2023-01-30T13:53:52Z | http://arxiv.org/abs/2301.12893v1 | # Formalizing Piecewise Affine Activation Functions of Neural Networks in Coq
###### Abstract
Verification of neural networks relies on activation functions being _piecewise affine_ (pwa) -- enabling an encoding of the verification problem for theorem provers. In this paper, we present the first formalization of pwa activation functions for an interactive theorem prover tailored to verifying neural networks within Coq using the library Coquelicot for real analysis. As a proof-of-concept, we construct the popular pwa activation function ReLU. We integrate our formalization into a Coq model of neural networks, and devise a verified transformation from a neural network \(\mathcal{N}\) to a pwa function representing \(\mathcal{N}\) by composing pwa functions that we construct for each layer. This representation enables encodings for proof automation, e.g. Coq's tactic lra - a decision procedure for linear real arithmetic. Further, our formalization paves the way for integrating Coq in frameworks of neural network verification as a fallback prover when automated proving fails.
Keywords: Piecewise Affine Function · Neural Network · Interactive Theorem Prover · Coq · Verification.
## 1 Introduction
The growing importance of neural networks motivates the search for verification techniques for them. Verification with _automatic_ theorem provers is vastly under study, usually targeting feedforward networks with _piecewise affine_ (pwa) activation functions, since the verification problem can then be encoded as an SMT or MILP problem. In contrast, few attempts exist at investigating _interactive_ provers. Setting them up for this task, though, offers not only a fallback option when automated proving fails but also insight into the verification process.
That is why in this paper, we work towards this goal by presenting the first formalization of pwa activation functions for an interactive theorem prover tailored to verifying neural networks with Coq. We constructively define pwa functions using the polyhedral subdivision of a pwa function [25], since many algorithms working on polyhedra are known [26], with some tailored to reasoning about reachability properties of neural networks [30]. Motivated by verification, we restrict pwa functions by requiring a polyhedron's constraints to be _non-strict_ in order to suit linear programming [29] and by employing _finitely_ many polyhedra to fit
SMT/MILP solvers [11, 29]. We use reals supported by the library Coquelicot to enable reasoning about gradients and matrices, with Coq's standard library providing the tactic lra - a decision procedure for linear real arithmetic. As a proof-of-concept, we construct the activation function ReLU - one of the most popular in industry [20] and formal verification [8]. Furthermore, we devise a sequential Coq model of feedforward neural networks integrating pwa activation layers. Most importantly, we present a verified transformation from a neural network \(\mathcal{N}\) to a pwa function \(f_{\mathcal{N}}\) representing \(\mathcal{N}\), with the main benefit being again encodings for proof automation. To this end, we introduce two verified binary operations on pwa functions - usual function composition and an operator to construct a pwa function for each layer. In particular, we provide the following contributions with the corresponding Coq code available on GitHub1:
Footnote 1: At [https://github.com/verinncq/formalizing-pwa](https://github.com/verinncq/formalizing-pwa) with matrix_extensions.v (Section 2), piecewise_affine.v (Section 3.1), neuron_functions.v (Section 3.2), neural_networks.v (Section 4.1 and 4.4) and pwa_operations.v (Section 4.2 and 4.3).
1. a formalization of pwa functions based on polyhedral subdivision tailored to verification of neural networks (Section 3),
2. a construction of the popular activation function ReLU (Section 3),
3. a sequential model for feedforward neural networks with parameterized layers (Section 4),
4. composition for pwa functions and an operator for constructing higher dimensional pwa functions out of lower dimensional ones (Section 4), and
5. a verified transformation from a feedforward neural network with pwa activation to a single pwa function representing the network (Section 4).
Related Work. A variety of work on using automatic theorem provers to verify neural networks exists, with the vast majority targeting feedforward neural networks with pwa activation functions [6, 8, 12, 15, 18, 19, 24]. In comparison, little has been done regarding interactive theorem provers, with some mechanized results from machine learning [2, 22], a result on verified training in Lean [27] and, relevant to this paper, pioneering work on verifying networks in Isabelle [7] and in Coq [3]. Apart from [7] targeting Isabelle instead of Coq, neither network model is generalized by a formalization of pwa functions, and neither offers a model of the network as a (pwa) function - both contributions of this paper.
## 2 Preliminaries
We clarify notations and definitions important to this paper. We write \(dom(f)\) for a function's domain, \(dim(f)\) for the dimension of \(dom(f)\) and \((f\circ g)(x)\) for function composition. For a matrix \(M\), \(M^{T}\) is the transposed matrix. We consider block matrices. To clarify notation, consider a block matrix made out of matrices \(M_{1},...,M_{4}\):
\[\begin{bmatrix}M_{1}&M_{2}\\ \hline M_{3}&M_{4}\end{bmatrix}\]
### Piecewise Affine Topology
We give the important definitions regarding pwa functions [23, 25, 32].
Definition 1 (Linear Constraint): For some \(c\in\mathbb{R}^{n},b\in\mathbb{R}\), a _linear constraint_ is an inequality of the form \(c^{T}x\leq b\) for any \(x\in\mathbb{R}^{n}\).

Definition 2 (Polyhedron2): A _polyhedron_ \(P\) is the intersection of finitely many halfspaces, meaning \(P:=\{x\in\mathbb{R}^{n}\,|\,c_{1}^{T}x\leq b_{1}\wedge...\wedge c_{m}^{T}x\leq b_{m}\}\) with \(c_{i}\in\mathbb{R}^{n}\), \(b_{i}\in\mathbb{R}\) and \(i\in\{1,...,m\}\).
Footnote 2: In the literature often referred to as a convex, closed polyhedron.
We denote the constraints of \(P\) as \(\mathcal{C}(P):=\{(c_{1}^{T}x\leq b_{1}),...,(c_{m}^{T}x\leq b_{m})\}\) for readability even though a constraint is given by \(c_{i}\) and \(b_{i}\) while \(x\) is arbitrary.
Definition 3 (Affine Function3): A function \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is called _affine_ if there exists \(M\in\mathbb{R}^{n\times m}\) and \(b\in\mathbb{R}^{n}\) such that for all \(x\in\mathbb{R}^{m}\) holds: \(f(x)=Mx+b\).
Definition 4 (Polyhedral Subdivision): A _polyhedral subdivision_ of \(S\subseteq\mathbb{R}^{n}\) is a finite set of polyhedra \(\mathbf{P}:=\{P_{1},\ldots,P_{m}\}\) such that (1) \(S=\bigcup_{i=1}^{m}P_{i}\) and (2) for all \(P_{i},P_{j}\in\mathbf{P}\) with \(i\neq j\), for all \(x\in P_{i}\cap P_{j}\) and for all \(\epsilon>0\), there exists \(x^{\prime}\) such that \(|x-x^{\prime}|<\epsilon\) and \(x^{\prime}\notin P_{i}\cap P_{j}\).
Definition 5 (Piecewise Affine Function): A continuous function \(f:D\subseteq\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is _piecewise-affine_ if there is a polyhedral subdivision \(\mathbf{P}=\{P_{1},\ldots,P_{l}\}\) of \(D\) and a set of affine functions \(\{f_{1},\ldots,f_{l}\}\) such that for all \(x\in D\) holds \(f(x)=f_{i}(x)\) if \(x\in P_{i}\).
### Neural Networks
Neural networks approximate functions by learning from sample points during training [9], with arbitrary precision [10, 14, 16]. A feedforward neural network is a directed acyclic graph with the edges having weights and the vertices (neurons) having biases and being structured in layers. Each layer applies a generic affine function for summation and an activation function (possibly a pwa function). In many machine learning frameworks (e.g. PyTorch), these functions are modelled as separate layers that follow one another. We adopt this structure in our Coq model with a _linear layer_ implementing the generic affine function. Every network has an input and an output layer with optional hidden layers in between.
### Interactive Theorem Prover Coq & Library Coquelicot
We use the interactive theorem prover Coq [28], providing a non-Turing-complete functional programming language extractable to selected functional programming languages and a proof development system - a popular choice for formal verification of programs and formalization of mathematical foundations. Additionally, we use the real analysis library Coquelicot [5] offering derivatives, integrals, and matrices compatible with Coq's standard library.
Extensions in Coq: Column Vectors & Block Matrices. For this paper, we formalized column vectors and block matrices on top of Coquelicot. A column vector colvec is identified with matrices and equipped with a dot product dot on vectors and some additional lemmas to simplify proofs. Additionally, we formalized several notions for Coquelicot's matrix type. We provide multiplication of a matrix with a scalar scalar_mult and transposition transpose of matrices. We provide operations on different shapes of matrices and vectors, such as a right-to-left construction of block diagonal matrices block_diag_matrix, a specialization thereof on vectors colvec_concat, and extensions of vectors with zeroes on the bottom extend_colvec_at_bottom or top extend_colvec_on_top, denoted as follows: \(\begin{bmatrix}M_{1}&0\\ \hline 0&M_{2}\end{bmatrix},\begin{bmatrix}\vec{v}_{1}\\ \hline\vec{v}_{2}\end{bmatrix},\begin{bmatrix}\vec{v}\\ \hline\vec{0}\end{bmatrix},\text{ and }\begin{bmatrix}\vec{0}\\ \hline\vec{v}\end{bmatrix}.\) Moreover, we proved lemmas relating all new operations with each other and with the existing matrix operations.
## 3 Formalization of Piecewise Affine Functions in Coq
We formalize pwa functions tailored to neural network verification with pwa activation. As a proof-of-concept, we construct the activation function Rectified Linear Unit (ReLU) - one of the most popular activation functions in industry [20] and formal verification [8].
### Inductive Definition of PWA Functions
We define a linear constraint with a dimension _dim_ and parameters, vector \(c\in\mathbb{R}^{dim}\) and scalar \(b\in\mathbb{R}\), being satisfied for a vector \(x\in\mathbb{R}^{dim}\) if \(c\cdot x\leq b\):
```
Inductive LinearConstraint (dim: nat): Type :=
| Constraint (c: colvec dim) (b: R).

Definition satisfies_lc {dim: nat} (x: colvec dim) (l: LinearConstraint dim): Prop :=
  match l with
  | Constraint c b => dot c x <= b
  end.
```
We define a polyhedron as a finite set of linear constraints, together with a predicate stating that a point lies in a polyhedron:
```
Inductive ConvexPolyhedron (dim: nat): Type :=
| Polyhedron (constraints: list (LinearConstraint dim)).

Definition in_convex_polyhedron {dim: nat} (x: colvec dim) (p: ConvexPolyhedron dim) :=
  match p with
  | Polyhedron lcs =>
      forall constraint, In constraint lcs -> satisfies_lc x constraint
  end.
```
Finally, we define a pwa function as a record composed of the field body, holding the polyhedral subdivision for the piecewise construction of the function, and the field prop, holding a proof of univalence (i.e. all "pieces" together yield a function).
```
Record PWAF (in_dim out_dim: nat): Type := mkPLF {
  body: list (ConvexPolyhedron in_dim * ((matrix out_dim in_dim) * colvec out_dim));
  prop: pwaf_univalence body;
}.
```
Piecewise Construction. We construct a pwa function \(f\) by a list of polyhedra, matrices and vectors, with a triple \((P,M,b)\) defining a "piece" of \(f\) by an affine function with \(f(x)=Mx+b\) if \(x\in P\). For evaluation, we search for a polyhedron containing \(x\) and compute the affine function:
```
Fixpoint pwaf_eval_helper {in_dim out_dim: nat}
    (body: list (ConvexPolyhedron in_dim * ((matrix (T:=R) out_dim in_dim) * colvec out_dim)))
    (x: colvec in_dim)
    : option (ConvexPolyhedron in_dim * ((matrix out_dim in_dim) * colvec out_dim)) :=
  match body with
  | nil => None
  | body_el :: next =>
      match body_el with
      | (polyh, (M, b)) =>
          match polyhedron_eval x polyh with
          | true => Some body_el
          | false => pwaf_eval_helper next x
          end
      end
  end.
```
To handle the edge case where no such polyhedron is found (i.e. \(x\notin dom(f)\)), we use a wrapper function pwaf_eval. For the purpose of proving, we define a predicate in_pwaf_domain for the existence of such a polyhedron and a predicate is_pwaf_value for stating that the function evaluates to a certain value.
Univalence. We enforce the construction to be a function by stating univalence - in this case, all pairs of polyhedra having coinciding affine functions on their intersections - requiring a proof for each instance of type PWAF (see pwaf_univalence in piecewise_affine.v).
Class of Formalized PWA Functions. Motivated by pwa activation functions in the context of neural network verification, our pwa functions are restricted by
1. all linear constraints being _non-strict_, and
2. being defined over a union of _finitely_ many polyhedra.
Restriction (1) is motivated by linear programming usually dealing with non-strict constraints [29], and restriction (2) by MILP/SMT solvers commonly accepting finitely many variables [11, 29]. Since every continuous pwa function on \(\mathbb{R}^{n}\) admits a polyhedral subdivision of its domain [25], all continuous pwa functions with a finite subdivision can be encoded.
For pwa functions not belonging to this class, consider any discontinuous pwa function since discontinuity violates restriction (1), and any periodic pwa function as excluded by restriction (2) due to having infinitely many "pieces".
Choice of Formalization. We use real numbers (instead of e.g. rationals or floats) to enable Coquelicot's reasoning about derivatives, which is interesting for neural networks' gradients. Coquelicot builds upon the reals of Coq's standard library, allowing the use of Coq's tactic lra - a Coq-native decision procedure for linear real arithmetic.
Moreover, we use inductive types since they come with an induction principle and therefore ease proving. In addition, the type list (e.g. used for the definition of pwa functions) enjoys extensive support in Coq. For example, pwaf_univalence is stated using the list predicate ForAllPairs and proofs intensively involve lemmas from Coq's standard library.
A constructive definition using the polyhedral subdivision is interesting since many efficient algorithms are known that work on polyhedra [26], with some even tailored to neural network verification [30]. We expect that such algorithms are implementable in an idiomatic functional style using our model. Furthermore, we anticipate easy-to-implement encodings for proof automation.
### Example: Rectified Linear Unit Activation Function
We construct ReLU as a pwa function defined by two "pieces", each of which is a linear function. The function is defined as:
\[\textsc{ReLU}(x):=\begin{cases}0,&x<0\\ x,&x\geq 0\end{cases}\]
Piecewise Construction. The intervals, \((-\infty,0)\) and \([0,\infty)\), each correspond to a polyhedron in \(\mathbb{R}\) defined by a single constraint4: \(P_{left}:=\{x\in\mathbb{R}^{1}\,|\,[1]\cdot x\leq 0\}\) and \(P_{right}:=\{x\in\mathbb{R}^{1}\,|\,[-1]\cdot x\leq 0\}\). We define these polyhedra as follows:5
Footnote 4: Matrices involved are one-dimensional vectors since ReLU is one-dimensional. For technical reasons, in Coq, the spaces \(\mathbb{R}\) and \(\mathbb{R}^{1}\) differ with the latter working on one-dimensional vectors instead on scalars.
```
Definition ReLU1d_polyhedra_left :=
  Polyhedron 1 [Constraint 1 Mone 0].
Definition ReLU1d_polyhedra_right :=
  Polyhedron 1 [Constraint 1 (scalar_mult (-1) Mone) 0].
```
ReLU's construction list contains these polyhedra each associated with a matrix and vector, in these cases \(([0],[0])\) and \(([1],[0])\), for the affine functions:
```
Definition ReLU1d_body:
    list (ConvexPolyhedron 1 * (matrix (T:=R) 1 1 * colvec 1)) :=
  [(ReLU1d_polyhedra_left,  (Mzero, null_vector 1));
   (ReLU1d_polyhedra_right, (Mone,  null_vector 1))].
```
Univalence. Note that while ReLU's intervals are distinct, the according polyhedra with non-strict constraints are not. To ensure the construction to be a function, we prove univalence by proving that only \([0]\in(P_{left}\cap P_{right})\):

```
Lemma ReLU1d_polyhedra_intersect_0:
  forall x, in_convex_polyhedron x ReLU1d_polyhedra_left /\
            in_convex_polyhedron x ReLU1d_polyhedra_right -> x = null_vector 1.
```
Finally, we ensure for each polyhedra pair that \([1]\cdot[0]+[0]=[0]\cdot[0]+[0]\) holds, and instantiate a PWAF by Definition ReLU1dPWAF := mkPLF 1 1 ReLU1d_body ReLU1d_pwaf_univalence.
On the Construction of pwa Functions. Analogously to the ReLU example, other activation functions sharing its features of being one-dimensional and consisting of a few polyhedra can be constructed similarly. We can construct a multi-dimensional version out of a one-dimensional function, as we will illustrate for ReLU in Section 4.3. Activation functions that require more effort to construct are, for example, different types of pooling [9], mostly due to a non-trivial polyhedra structure and inherent multi-dimensionality. This effort motivates future development of more support in constructing pwa functions, with the goal to compile a library of layer types.
## 4 Verified Transformation of a Neural Network to a PWA Function
We present our main contribution: a formally verified transformation of a feedforward neural network with pwa activations into a single pwa function. First, we introduce a Coq model for feedforward neural networks (Section 4.1). We follow up with two verified binary operations on pwa functions at the heart of the transformation, _composition_ (Section 4.2) and _concatenation_ (Section 4.3), and finish with the verified transformation (Section 4.4).
### Neural Network Model in Coq
We define a neural network _NNSequential_ as a list-like structure containing layers, parameterized on the type of activation and on the input, output, and hidden layer dimensions, with dependent types preventing dimension mismatches:
```
Inductive NNSequential {input_dim output_dim: nat} :=
| NNOutput : NNSequential
| NNPlainLayer {hidden_dim: nat} :
    (colvec input_dim -> colvec hidden_dim)
    -> NNSequential (input_dim:=hidden_dim) (output_dim:=output_dim)
    -> NNSequential
| NNPWALayer {hidden_dim: nat} :
    PWAF input_dim hidden_dim
    -> NNSequential (input_dim:=hidden_dim) (output_dim:=output_dim)
    -> NNSequential
| NNUnknownLayer {hidden_dim: nat} :
    NNSequential (input_dim:=hidden_dim) (output_dim:=output_dim)
    -> NNSequential.
```
The network model has four layer types: NNOutput, as the last layer, propagates input values to the output; NNPlainLayer is a layer allowing any function in Coq defined on real vectors; NNPWALayer is a pwa activation layer - the primary target of our transformation; and NNUnknownLayer is a stub for a layer with an unknown function.
Informally speaking, the semantics of our model is as follows: for a layer NNOutput, the identity function6 is evaluated; for NNPlainLayer, the passed function; for NNPWALayer, the passed pwa function; and for NNUnknownLayer, a failure is raised. Thus, the _NNSequential_ type does not prescribe any specific functions of layers but expects them as parameters.
Footnote 6: We use the customized identity function _flex_dim_copy_.
An Example of a Neural Network. In order to give an example, we define specific layers for a network, in this case the pwa layers Linear and ReLU (see neural_networks.v).
_From a Trained Neural Network into the World of Coq._ As illustrated, we can construct feedforward neural networks in Coq. Another option is to convert a neural network trained outside of Coq into an instance of the model. In [3], a Python script is used for conversion from PyTorch to their Coq model without any correctness guarantees, while in [7] an import mechanism from TensorFlow into Isabelle is used, where correctness of the import has to be established for each instance of their model. We are working on a converter expecting a neural network in the ONNX format (i.e. a format for neural network exchange supported by most frameworks) [4] to produce a corresponding instance in our Coq model [13].7 This converter is mostly written within Coq with its core functionality being verified.
Footnote 7: A bachelor thesis supervised by one of the authors and scheduled for publication.
_Choice of Model._ While feedforward neural networks are often modeled as directed acyclic graphs [17, 1], the widely used machine learning frameworks TensorFlow and PyTorch employ a sequential model of layers as well. Our model corresponds to the latter and is inspired by the, to our knowledge, only published neural network model in Coq (having been used for generalization proofs) [3]. Our model, though, is more generic by having parameterized layers instead of being restricted to ReLU activation. Moreover, while their model works with customized floats, we opted for reals in order to support Coquelicot's real analysis, as discussed in Section 3.
A graph-based model carries the potential to be extended to other types of neural networks, such as recurrent networks featuring loops in the length of the input. For the reason of being generic, ONNX employs a graph-based model. Hence, an even more generic graph-based Coq model is in principle desirable, but it also adds complexity. In [7], the focus is on a sequential model, which the authors showed to be superior to a graph-based model for the purpose of verification. Hence, we expect that the need for a sequential Coq model for _feedforward_ networks will remain even once a graph-based model exists.
### Composition of PWA functions
Besides composition being a general-purpose binary operation closed over pwa functions [25], it is needed in our transformation to compose pwa layers. Since for pwa functions \(f:\mathbb{R}^{l}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{l}\) their composition \(z=f\circ g\) is a pwa function, composition in Coq produces an instance of type PWAF, requiring a construction and a proof of univalence:
```
Definition pwaf_compose {in_dim hidden_dim out_dim: nat}
    (f: PWAF hidden_dim out_dim) (g: PWAF in_dim hidden_dim)
    : PWAF in_dim out_dim :=
  mkPLF in_dim out_dim (pwaf_compose_body f g) (pwaf_compose_univalence f g).
```
#### 4.2.1 Piecewise Construction of Composition.
Assume a pwa function \(f\) defined on the polyhedra set \(\mathbf{P}^{f}=\{P_{1}^{f},\ldots,P_{k}^{f}\}\) with affine functions given by the parameter set \(\mathbf{A}^{f}=\{(M_{1}^{f},b_{1}^{f}),\ldots,(M_{k}^{f},b_{k}^{f})\}\). Analogously, \(g\) is given by \(\mathbf{P}^{g}\) and \(\mathbf{A}^{g}\). For computing a composed function \(z=f\circ g\) at any \(x\in\mathbb{R}^{m}\), we need a polyhedron \(P_{j}^{g}\in\mathbf{P}^{g}\) such that \(x\in P_{j}^{g}\) to compute \(g(x)=M_{j}^{g}x+b_{j}^{g}\) with \((M_{j}^{g},b_{j}^{g})\in\mathbf{A}^{g}\). Following, we need a polyhedron \(P_{i}^{f}\in\mathbf{P}^{f}\) with \(g(x)\in P_{i}^{f}\) to finally compute \(z(x)=M_{i}^{f}g(x)+b_{i}^{f}\) with \((M_{i}^{f},b_{i}^{f})\in\mathbf{A}^{f}\).

We have to carry out function composition at the level of polyhedra sets to construct \(z\)'s polyhedra set \(\mathbf{P}^{z}\). For each pair \(P_{i}^{f}\in\mathbf{P}^{f}\), \(P_{j}^{g}\in\mathbf{P}^{g}\), we create a polyhedron \(P_{i,j}^{z}\in\mathbf{P}^{z}\) such that \(x\in P_{i,j}^{z}\) iff \(x\in P_{j}^{g}\) and \(M_{j}^{g}x+b_{j}^{g}\in P_{i}^{f}\) with \((M_{j}^{g},b_{j}^{g})\in\mathbf{A}^{g}\). Consequently, \(\mathcal{C}(P_{j}^{g})\subseteq\mathcal{C}(P_{i,j}^{z})\), while the constraints of \(P_{i}^{f}\) have to be modified. For \((c_{i}\cdot x\leq b_{i})\in\mathcal{C}(P_{i}^{f})\) we have the modified constraint \(((c_{i}^{T}M_{j}^{g})\cdot x\leq b_{i}-c_{i}\cdot b_{j}^{g})\in\mathcal{C}(P_{i,j}^{z})\). We construct a polyhedra set accordingly in Coq, including empty polyhedra in case no qualifying pair of polyhedra exists:
```
Fixpoint compose_polyhedra_helper {in_dim hidden_dim: nat}
    (M: matrix hidden_dim in_dim) (b1: colvec hidden_dim)
    (l_f: list (LinearConstraint hidden_dim)) :=
  match l_f with
  | [] => []
  | (Constraint c b2) :: tail =>
      Constraint in_dim
        (transpose (Mmult (transpose c) M)) (b2 - (dot c b1)) ::
      compose_polyhedra_helper M b1 tail
  end.

Definition compose_polyhedra {in_dim hidden_dim: nat}
    (p_g: ConvexPolyhedron in_dim)
    (M: matrix hidden_dim in_dim) (b: colvec hidden_dim)
    (p_f: ConvexPolyhedron hidden_dim) :=
  match p_g with
  | Polyhedron l1 =>
      match p_f with
      | Polyhedron l2 =>
          Polyhedron in_dim (l1 ++ compose_polyhedra_helper M b l2)
      end
  end.
```
Further, each \((M^{z}_{i,j},b^{z}_{i,j})\in\mathbf{A}^{z}\) is defined as \((M^{f}_{i}M^{g}_{j},\,M^{f}_{i}b^{g}_{j}+b^{f}_{i})\) as a result of the usual composition of two affine functions:
```
Definition compose_affine_functions {in_dim hidden_dim out_dim: nat}
    (M_f: matrix (T:=R) out_dim hidden_dim) (b_f: colvec out_dim)
    (M_g: matrix (T:=R) hidden_dim in_dim) (b_g: colvec hidden_dim) :=
  (Mmult M_f M_g, Mplus (Mmult M_f b_g) b_f).
```
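For intuition, the composition rule can be checked numerically; the following Python/numpy sketch (with arbitrary toy matrices, not taken from the formalization) verifies that \((M^{f}M^{g},\,M^{f}b^{g}+b^{f})\) indeed evaluates to \(f(g(x))\):

```python
import numpy as np

# Affine pieces f(y) = Mf @ y + bf and g(x) = Mg @ x + bg (toy shapes).
Mf, bf = np.array([[1.0, -1.0]]), np.array([0.5])        # f: R^2 -> R^1
Mg, bg = np.array([[2.0], [0.0]]), np.array([1.0, 3.0])  # g: R^1 -> R^2

# Composition on one polyhedron pair: (Mf @ Mg, Mf @ bg + bf).
Mz, bz = Mf @ Mg, Mf @ bg + bf

x = np.array([4.0])
assert np.allclose(Mz @ x + bz, Mf @ (Mg @ x + bg) + bf)
```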
Univalence of Composition. Due to the level of detail, the Coq proof for the composed function \(z\) satisfying univalence is not included in this paper (see Theorem pwaf_compose_univalence).
Composition Correctness. For establishing the correctness of the composition, we proved the following theorem:
```
Theorem pwaf_compose_correct:
  forall in_dim hid_dim out_dim x f_x g_x
         (f: PWAF hid_dim out_dim) (g: PWAF in_dim hid_dim),
    in_pwaf_domain g x -> is_pwaf_value g x g_x ->
    in_pwaf_domain f g_x -> is_pwaf_value f g_x f_x ->
    let fg := pwaf_compose f g in
    in_pwaf_domain fg x /\ is_pwaf_value fg x f_x.
```
For one of the lemmas (compose_polyhedra_subset_g), we proved that the polyhedra of \(g\) can only shrink when composing \(g\) with \(f\), while the borders set by the polyhedra of \(g\) are preserved.
### Concatenation: Layers of Neural Networks as PWA Functions
While some neural networks come with each layer being _one_ multi-dimensional function, many neural networks feature layers where each neuron is assigned the same lower-dimensional function, which is then applied independently to each neuron's input. Motivated by the transformation of a neural network into a single pwa function, we introduce a binary operation _concatenation_ that constructs a single pwa function for each pwa layer of a neural network. Beyond that, concatenation is interesting because constructing a multi-dimensional pwa function by hand is challenging: a user has to define multiple polyhedra with a significant number of constraints. For illustration, we construct a multi-dimensional ReLU layer.

Concatenation of pwa functions has to yield an instance of type PWAF since pwa functions are closed under concatenation. Concatenation is defined as follows:
Definition 6 (Concatenation).: _Let \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{k}\rightarrow\mathbb{R}^{l}\). The concatenation \(\oplus\) is defined as:_

\[(f\oplus g)\left(\begin{bmatrix}x^{f}\\ \hline x^{g}\end{bmatrix}\right):=\begin{bmatrix}f(x^{f})\\ \hline g(x^{g})\end{bmatrix}\]
Piecewise Construction of Concatenation. Assume some \(f,g\), \(\mathbf{P}^{f},\mathbf{P}^{g},\mathbf{A}^{f}\) and \(\mathbf{A}^{g}\) as previously used, and \(z=f\oplus g\). The polyhedra set \(\mathbf{P}^{z}\) contains the pairwise joined polyhedra of \(\mathbf{P}^{f}\) and \(\mathbf{P}^{g}\), but with each constraint of a polyhedron lifted to the dimension of \(z\)'s domain. Consider a pair \(P_{i}^{f}\in\mathbf{P}^{f}\) and \(P_{j}^{g}\in\mathbf{P}^{g}\). For constraints \((c_{i}^{f}\cdot x^{f}\leq b_{i}^{f})\in\mathcal{C}(P_{i}^{f})\) and \((c_{j}^{g}\cdot x^{g}\leq b_{j}^{g})\in\mathcal{C}(P_{j}^{g})\) with \(\begin{bmatrix}x^{f}\\ \hline x^{g}\end{bmatrix}\in\mathbb{R}^{dim(f)+dim(g)}\), the following higher-dimensional constraints are in \(\mathcal{C}(P_{i,j}^{z})\) with \(P_{i,j}^{z}\in\mathbf{P}^{z}\): \(\begin{bmatrix}c_{i}^{f}\\ \hline 0\end{bmatrix}\cdot\begin{bmatrix}x^{f}\\ \hline x^{g}\end{bmatrix}\leq b_{i}^{f}\) and \(\begin{bmatrix}0\\ \hline c_{j}^{g}\end{bmatrix}\cdot\begin{bmatrix}x^{f}\\ \hline x^{g}\end{bmatrix}\leq b_{j}^{g}\). Thus, we get \(\begin{bmatrix}x^{f}\\ \hline x^{g}\end{bmatrix}\in P_{i,j}^{z}\) iff \(x^{f}\in P_{i}^{f}\) and \(x^{g}\in P_{j}^{g}\).
Hence, the concatenation requires the pairwise join of all polyhedra \(\mathbf{P}^{f}\) and \(\mathbf{P}^{g}\) each with their constraints lifted to the higher dimension of \(z\)'s domain:
```
Definition concat_polyhedra {in_dim1 in_dim2: nat}
    (p_f: ConvexPolyhedron in_dim1) (p_g: ConvexPolyhedron in_dim2)
    : ConvexPolyhedron (in_dim1 + in_dim2) :=
  match p_f with
  | Polyhedron l1 =>
      match p_g with
      | Polyhedron l2 =>
          Polyhedron (in_dim1 + in_dim2)
            (extend_lincons_at_bottom l1 (in_dim1 + in_dim2) ++
             extend_lincons_on_top l2 (in_dim1 + in_dim2))
      end
  end.
```
The Coq code uses two functions for the insertion of zeros, similar to the dimension operations (see Section 2). The corresponding affine function of \(P_{i,j}^{z}\) is then:
\[(M_{i,j}^{z},b_{i,j}^{z}):=\left(\begin{bmatrix}M_{i}^{f}&0\\ 0&M_{j}^{g}\end{bmatrix},\begin{bmatrix}b_{i}^{f}\\ b_{j}^{g}\end{bmatrix}\right).\]
Univalence of Concatenation. The technical proof of concatenation being univalent is outside the scope of this paper (see pwaf_concat_univalence).
Concatenation Correctness. We proved the correctness of the concatenation:
```
Theorem pwaf_concat_correct:
  forall in_dim1 in_dim2 out_dim1 out_dim2 x1 x2 f_x1 g_x2
         (f: PWAF in_dim1 out_dim1) (g: PWAF in_dim2 out_dim2),
    in_pwaf_domain f x1 -> is_pwaf_value f x1 f_x1 ->
    in_pwaf_domain g x2 -> is_pwaf_value g x2 g_x2 ->
    let fg := pwaf_concat f g in
    let x := colvec_concat x1 x2 in
    let fg_x := colvec_concat f_x1 g_x2 in
    in_pwaf_domain fg x /\ is_pwaf_value fg x fg_x.
```
The proof relies on an extensive number of lemmas connecting matrix operations to block matrices and vector reshaping.
Example: ReLU Layer. Using concatenation, we construct a multi-dimensional ReLU layer from the one-dimensional ReLU (see Section 4.1). To construct a ReLU layer \(\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), we perform \(n\) concatenations of the one-dimensional ReLU:
```
Fixpoint ReLU_PWAF_helper (in_dim: nat): PWAF in_dim in_dim :=
  match in_dim with
  | 0 => OutputPWAF (in_dim:=0) (out_dim:=0)
  | S n => pwaf_concat ReLUdPWAF (ReLU_PWAF_helper n)
  end.
```
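As a usage sketch (ours, not from the paper's development, assuming the definitions above, in particular PWAF and ReLU_PWAF_helper, are in scope; the name ReLU3_PWAF is our own), a concrete three-dimensional ReLU layer is obtained by instantiating the helper:

```
(* A 3-dimensional ReLU layer as a single pwa function:
   three concatenations of the one-dimensional ReLU. *)
Definition ReLU3_PWAF : PWAF 3 3 := ReLU_PWAF_helper 3.
```

The type annotation documents that input and output dimension coincide, which the helper guarantees by construction.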
### Transforming a Neural Network into a PWA Function
Building on the previous constructions, the transformation of a feedforward neural network with pwa activation functions into a single pwa function is straightforward. Using concatenation, we construct multi-dimensional pwa layers and then compose them into one pwa function representing the whole neural network. The transformation is illustrated conceptually in Figure 1.
The transformation in Coq simply fails, returning None, when applied to hidden layers that are not pwa:
```
Fixpoint transform_nn_to_pwaf {in_dim out_dim: nat}
  (nn: NNSequential (input_dim:=in_dim) (output_dim:=out_dim)):
  option (PWAF in_dim out_dim) :=
  match nn with
  | NNOutput => Some OutputPWAF
  | NNPlainLayer _ _ _ => None
  | NNUnknownLayer _ _ => None
  | NNPWALayer _ _ pwaf next =>
      match transform_nn_to_pwaf next with
      | Some next_pwaf => Some (pwaf_compose next_pwaf pwaf)
      | None => None
      end
  end.
```
Correctness of Transformation.For this transformation, we have also proven the following theorem in Coq to establish its correctness:
```
Theorem transform_nn_to_pwaf_correct:
  forall in_dim out_dim (x: colvec in_dim) (f_x: colvec out_dim)
         nn nn_pwaf,
    Some nn_pwaf = transform_nn_to_pwaf nn ->
    in_pwaf_domain nn_pwaf x ->
    (is_pwaf_value nn_pwaf x f_x <-> nn_eval nn x = Some f_x).
```
For a neural network \(\mathcal{N}\) and its transformed pwa function \(f_{\mathcal{N}}\), the theorem states that \(f_{\mathcal{N}}(x)=\mathcal{N}(x)\) holds for all inputs \(x\in dom(f_{\mathcal{N}})\). The proof of this theorem relies on several relatively simple properties of the composition. Note that for \(dom(f_{\mathcal{N}})=\emptyset\) the theorem holds trivially; hence, an additional proof is required for \(f_{\mathcal{N}}\)'s polyhedra being a subdivision of \(dom(\mathcal{N})\) (i.e. \(dom(f_{\mathcal{N}})=dom(\mathcal{N})\)).
On the Representation of a Neural Network as a pwa Function. The main benefit of having a pwa function obtained from a neural network lies in the option to use simple-to-implement encodings of pwa functions for different solvers, e.g. Coq's tactic lra or MILP/SMT solvers. Hence, this representation paves the way for proof automation when stating theorems about the input-output relation of a network in Coq.
Furthermore, a representation as a pwa function moves the structural complexity of a neural network to the polyhedral subdivision of the pwa function. This is interesting since local search can be applied to the set of polyhedra for reasoning about reachability properties in neural networks [30]. In addition, one may estimate the size of a pwa function's polyhedral subdivision for different neural network architectures [21].
Figure 1: Transformation of a feedforward network \(N\) with pwa activation functions into its representation as a pwa function \(F_{N}\) by concatenating the neuron activations within each layer, followed by composing the pwa layers.
## 5 Discussion
We have worked towards neural network verification in Coq; our main contribution is a verified transformation from a network to a pwa function.
Summary. We presented the first formalization of pwa activation functions for an interactive theorem prover, tailored to verifying neural networks with Coq. For our constructive formalization, we used a pwa function's polyhedral subdivision due to the numerous efficient algorithms working on polyhedra. Our class of pwa functions is deliberately restricted to suit linear programming, by using non-strict constraints, and to fit SMT/MILP solvers, by employing finitely many polyhedra. Using Coquelicot's reals, we enabled reasoning about gradients and support for Coq's tactic lra. With ReLU, we constructed one of the most popular activation functions. We presented a verified transformation from a neural network to its representation as a pwa function, enabling encodings for proof automation for theorems about the input-output relation. To this end, we devised a sequential model of neural networks and introduced two verified binary operations on pwa functions: usual function composition, together with an operator to construct a pwa function for each layer.
Future Work. Since the main benefit of having a pwa function obtained from a neural network lies in the many available encodings [8, 12] targeting different solvers, we envision encodings for our network model. These encodings have to be adapted to verification within Coq, with our starting point being the tactic lra, a Coq-native decision procedure for linear real arithmetic.
Moreover, moving the structural complexity of a neural network to the polyhedral subdivision of a pwa function opens up the investigation of algorithms working on polyhedra for proof automation, with our main candidate being local search on polyhedra for reasoning about reachability properties in neural networks [30].
Further, for our model of neural networks, we intend to provide a library of pwa activation functions with proof automation to ease their construction. We also plan a generic graph-based model for neural networks in Coq but, as argued, we expect the sequential model to remain the means of choice for feedforward networks. Additionally, since tensors are used in machine learning to express complex mathematical operations, we aim to integrate a formalization of tensors tailored to neural network verification.
|
2301.11342 | A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks | Counterexample-guided repair aims at creating neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models based on quadratic programming, surpassing existing approaches. | David Boetius, Stefan Leue, Tobias Sutter | 2023-01-26T19:00:02Z | http://arxiv.org/abs/2301.11342v2 | # A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks
###### Abstract
Counterexample-guided repair aims at creating neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models, surpassing existing approaches.
## 1 Introduction
The success of artificial neural networks in such diverse domains as image recognition [40], natural language processing [9], predicting protein folding [50], and designing novel algorithms [20] sparks interest in applying them to more demanding tasks, including applications in safety-critical domains. Neural networks are proposed to be used for medical diagnosis [1], autonomous aircraft control [31, 34], and self-driving cars [7]. Since a malfunctioning of such systems can threaten lives or cause environmental disaster, we require mathematical guarantees on the correct functioning of the neural networks involved. Formal methods, including verification and repair, allow obtaining such guarantees [48]. As the inner workings of neural networks are opaque to human engineers, automated repair is a vital component for creating safe neural networks.
Alternating search for violations and removal of violations is a popular approach for repairing neural networks [4, 15, 23, 25, 27, 48, 57]. We study this approach under the name _counterexample-guided repair_. Counterexample-guided repair uses inputs for which a neural network violates the specification (counterexamples) to iteratively refine the network until the network satisfies the specification. While empirical results demonstrate the ability of counterexample-guided repair to successfully repair neural networks [4], a theoretical analysis of counterexample-guided repair is lacking.
In this paper, we study counterexample-guided repair from the perspective of robust optimisation. Viewing counterexample-guided repair as an algorithm for solving robust optimisation problems allows us to encircle neural network repair from two sides. On the one hand, we are able to show termination and optimality for counterexample-guided repair of linear regression models and linear classifiers, as well as single ReLU neurons. On the other hand, we disprove termination for repairing ReLU networks to satisfy a specification with an unbounded input set. Additionally, we disprove termination of counterexample-guided repair when using generic
counterexamples without further qualifications, such as being most-violating. While we could not address termination for the precise robust program of neural network repair with specifications having bounded input sets, such as \(L_{\infty}\) adversarial robustness or the ACAS Xu safety properties [35], our robust optimisation framework provides, for the first time to the best of our knowledge, fundamental insights into the theoretical properties of counterexample-guided repair.
Our analysis establishes a theoretical limitation of repair with otherwise unqualified counter-examples and suggests most-violating counterexamples as a replacement. We empirically investigate the practical consequences of these findings by comparing _early-exit_ verifiers -- verifiers that stop search on the first counterexample they encounter -- and optimal verifiers that produce most-violating counterexamples. We complement this experiment by investigating the advantages of using falsifiers during repair [4, 15], which is another approach that leverages sub-optimal counterexamples.
These experiments do not reveal any practical limitations for repair using early-exit verifiers. In fact, using an early-exit verifier consistently yields faster repair for ACAS Xu networks [35] and an MNIST [40] network, compared to using an optimal verifier. While the optimal verifier often allows performing fewer iterations, this advantage is offset by its additional runtime cost most of the time. Our experiments with falsifiers demonstrate that they can provide a significant runtime advantage for neural network repair.
For repairing linear regression models, we use our theoretical insights to design an improved repair algorithm based on quadratic programming. We compare the new algorithm with the Ouroboros [57] and SpecRepair [4] repair algorithms. The new quadratic programming repair algorithm surpasses Ouroboros and SpecRepair, illustrating the practical value of our theoretical results.
We highlight the following main contributions of this paper:
* We formalise neural network repair as a robust optimisation problem and, therefore, view counterexample-guided repair as a robust optimisation algorithm.
* Using this framework, we prove termination of counterexample-guided repair for more restrained problems than neural network repair and disprove termination in a more general setting.
* We empirically investigate the merits of using falsifiers and early-exit verifiers during repair.
* Our theoretical insights into repairing linear regression models allow us to surpass existing approaches for repairing linear regression models using a new algorithm.
## 2 Related Work
Our investigation is concerned with viewing neural network repair through the lens of robust optimisation. Neural network repair relies on neural network verification and can make use of neural network falsification. We introduce related work from these fields in this section.
#### Neural Network Verification
Neural network repair relies on neural network verifiers for proving specification satisfaction. Techniques such as Satisfiability Modulo Theories (SMT) solving [17, 35] or Mixed Integer Linear Programming (MILP) [58] allow for formally proving whether a neural network satisfies a specification. Neural network verification benefits from bounds computed using linear relaxations [17], in particular, linear relaxations that are efficiently computable through forward or backward passes over a neural network [51, 64, 67]. A particularly fruitful technique from MILP is branch and
bound [10, 45, 62]. Approaches that combine branch and bound with multi-neuron linear relaxations [21] or extend branch and bound using cutting planes [66] form the current state-of-the-art [2].
Our experiments in Section 5 are concerned with using most-violating counterexamples for repair. Strong et al. [55] perform global optimisation of functions involving neural networks. Among other applications, this allows computing most-violating counterexamples. We follow a different approach than Strong et al. [55] by fully utilising the optimisation capabilities of the MILP-based ERAN verifier [53]. This is described in more detail in Section 5.1.1.
#### Neural Network Falsification

Falsifiers are designed to discover counterexamples fast at the cost of completeness -- they cannot prove specification satisfaction. Falsifiers can be used during repair to reduce expensive verifier invocations [4]. We view adversarial attacks as falsifiers for adversarial robustness specifications. Falsifiers use generic local optimisation algorithms [25, 39, 42, 56], global optimisation algorithms [4, 60], or specifically tailored search and optimisation techniques [12, 46].
#### Neural Network Repair
Neural network repair is concerned with modifying a neural network such that it satisfies a formal specification. Many approaches make use of the counterexample-guided repair algorithm (Algorithm 1) while utilising different counterexample-removal algorithms. The approaches range from augmenting the training set [25, 48, 57], through specialised neural network architectures [15, 27] and neural network training with constraints [4], to using a verifier for computing network weights [23]. Counterexample-guided repair is also applied to support vector machines and linear regression models [28, 57].
An alternative to counterexample-guided repair is training against differentiable lower bounds on the minimum specification satisfaction as introduced in Equation (6). This approach is primarily applied to train provably adversarially robust neural networks. The applied techniques include interval arithmetic [26], semi-definite relaxations [49], linear relaxations [43], and duality [63]. However, it was observed that tighter relaxations do not surpass the certified robustness obtained using interval arithmetic [26, 32]. Designing improved relaxations proves to be challenging [32].
Compared to training against differentiable lower bounds, counterexample-guided repair can be conceived as using an upper bound on the minimum specification satisfaction. This upper bound stems from a finite set of counterexamples. Training against a differentiable lower bound is limited by the degree of tightness of the lower bound, with the potential of producing overly conservative neural networks. While counterexample-guided repair is not affected by this, it remains an open question whether counterexample-guided repair is guaranteed to terminate. This is the question we are concerned with in this paper.
Beyond the above approaches, specialised neural network architectures can increase the adversarial robustness of neural networks [14, 65] but are limited to robustness specifications. Using decoupled neural networks [54] provides optimality and termination guarantees for repair but is not applicable for typical neural network specifications, such as \(L_{\infty}\) adversarial robustness and the ACAS Xu safety specifications [35].
#### Robust and Scenario Optimisation
Robust optimisation is, originally, a technique for dealing with data uncertainty [5]. Unfortunately, robust optimisation problems are, in general, NP-hard [6]. Nevertheless, in many situations, one can derive a solution in polynomial time. However, the arsenal of classical robust optimisation methods is mainly restricted to the convex setting [5]. In practice, robust optimisation problems are often solved by considering various types of relaxations. One possible relaxation is via a _chance-constrained program_, where a family of inequalities is only required to be satisfied with probability \(1-\varepsilon\) for some parameter \(\varepsilon\in(0,1)\). While, in general, chance-constrained problems are still computationally intractable [5], in many settings, tractable approximations can be obtained through the scenario program, in which only finitely many uncertainty samples are considered.
Feasibility guarantees of a scenario solution with respect to the chance-constrained program can be obtained [11] and also a link to a modified (perturbed) robust program is available [19]. While the mentioned results hold for any distribution from which the constraints are sampled, the observed empirical performance highly depends on it [18].
Madry et al. [42] and Wong and Kolter [63] use robust optimisation to train adversarially robust neural networks. Fischer et al. [22] apply the approach of Madry et al. [42] for specifications beyond adversarial robustness. Here, we study a different robust optimisation formulation where the specification is modelled as constraints. This formulation is better suited for a general setting, as it properly captures the relationship between a safety specification and a loss function. We can accept a higher loss when it is the price for satisfying the specification, as satisfying the specification is absolutely essential for safety-critical applications. Our robust optimisation formulation of repair is introduced in the following section.
## 3 Preliminaries and Problem Statement
In this section, we introduce preliminaries on robust optimisation, neural networks, and neural network verification before progressing to neural network repair, counterexample-guided repair, and the problem statement of our theoretical analysis.
### Robust Optimisation
We consider general _robust optimisation problems_
\[P:\begin{cases}\underset{\mathbf{v}}{\text{minimise}}&f(\mathbf{v})\\ \text{subject to}&g(\mathbf{v},\mathbf{d})\geq 0\quad\forall\mathbf{d}\in \mathcal{D}\\ &\mathbf{v}\in\mathcal{V},\end{cases} \tag{1}\]
where \(\mathcal{V}\subseteq\mathbb{R}^{v}\), \(\mathcal{D}\subseteq\mathbb{R}^{d}\) and \(f:\mathcal{V}\rightarrow\mathbb{R}\), \(g:\mathcal{V}\times\mathcal{D}\rightarrow\mathbb{R}\). Both \(\mathcal{V}\) and \(\mathcal{D}\) contain infinitely many elements. Therefore, a robust optimisation problem has infinitely many constraints. The _variable domain_\(\mathcal{V}\) defines eligible values for the _optimisation variable_\(\mathbf{v}\). The set \(\mathcal{D}\) may contain, for example, all inputs for which a specification needs to hold. In this example, \(g\) captures whether the specification is satisfied for a concrete input. Elaborating this example leads to neural network repair, which we introduce in Section 3.4.
A _scenario optimisation problem_ relaxes a robust optimisation problem by replacing the infinitely many constraints of \(P\) with a finite selection. For \(\mathbf{d}^{(i)}\in\mathcal{D}\), \(i\in\{1,\ldots,N\}\), \(N\in\mathbb{N}\), the scenario optimisation problem is
\[SP:\begin{cases}\underset{\mathbf{v}}{\text{minimise}}&f(\mathbf{v})\\ \text{subject to}&g\Big{(}\mathbf{v},\mathbf{d}^{(i)}\Big{)}\geq 0\quad \forall i\in\{1,\ldots,N\}\\ &\mathbf{v}\in\mathcal{V}.\end{cases} \tag{2}\]
The counterexample-guided repair algorithm that we study in this paper uses a sequence of scenario optimisation problems to solve a robust optimisation problem.
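For intuition (a toy example of ours, not from the paper), take \(\mathcal{V}=\mathbb{R}\), \(\mathcal{D}=[0,1]\), \(f(\mathbf{v})=\mathbf{v}\) and \(g(\mathbf{v},\mathbf{d})=\mathbf{v}-\mathbf{d}\). The robust problem \(P\) requires \(\mathbf{v}\geq\mathbf{d}\) for all \(\mathbf{d}\in[0,1]\) and hence has the minimiser \(\mathbf{v}^{*}=1\). A scenario problem built from the samples \(\mathbf{d}^{(1)}=0.2\) and \(\mathbf{d}^{(2)}=0.7\) only enforces two of the infinitely many constraints,

\[SP:\begin{cases}\underset{\mathbf{v}\in\mathbb{R}}{\text{minimise}}&\mathbf{v}\\ \text{subject to}&\mathbf{v}-0.2\geq 0,\ \mathbf{v}-0.7\geq 0,\end{cases}\]

and yields the smaller, robustly infeasible solution \(\mathbf{v}=0.7\); adding further samples tightens the scenario solution towards the robust one.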
### Neural Networks
A neural network \(\text{net}_{\mathbf{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), with parameters \(\mathbf{\theta}\in\mathbb{R}^{p}\) is a function composition of affine transformations and non-linear activations. For our theoretical analysis, it suffices to consider fully-connected neural networks (FCNN). Our experiments in Section 5 also use convolutional
neural networks (CNN). We refer to Goodfellow et al. [24] for an introduction to CNNs. An FCNN with _\(L\) hidden layers_ is a chain of affine functions and activation functions
\[\mathrm{net}_{\mathbf{\theta}}=h^{(L+1)}\circ\sigma^{(L)}\circ h^{(L)}\circ\cdots \circ\sigma^{(1)}\circ h^{(1)}, \tag{3}\]
where \(h^{(i)}:\mathbb{R}^{n_{i-1}}\rightarrow\mathbb{R}^{n_{i}}\) and \(\sigma^{(i)}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i}}\) with \(n_{i}\in\mathbb{N}\) for \(i\in\{0,\dots,L+1\}\) and, specifically, \(n_{0}=n\) and \(n_{L+1}=m\). Each \(h^{(i)}\) is an affine function, called an _affine layer_. It computes \(h^{(i)}(\mathbf{z})=\mathbf{W}^{(i)}\mathbf{z}+\mathbf{b}^{(i)}\) with _weight matrix_ \(\mathbf{W}^{(i)}\in\mathbb{R}^{n_{i}\times n_{i-1}}\) and _bias vector_ \(\mathbf{b}^{(i)}\in\mathbb{R}^{n_{i}}\). Stacked into one large vector, the weights and biases of all affine layers are the _parameters_ \(\mathbf{\theta}\) of the FCNN. An _activation layer_ \(\sigma^{(i)}\) applies a non-linear function, such as the ReLU \([z]^{+}=\max(0,z)\) or the sigmoid function \(\sigma(z)=\frac{1}{1+e^{-z}}\), in an element-wise fashion.
### Neural Network Verification
Neural network verification is concerned with automatically proving that a neural network satisfies a formal specification.
**Definition 1** (Specifications).: A _specification_\(\Phi=\{\varphi_{1},\dots,\varphi_{S}\}\) is a set of _properties_\(\varphi_{i}\). A _property_\(\varphi=(\mathcal{X}_{\varphi},\mathcal{Y}_{\varphi})\) is a tuple of an _input set_\(\mathcal{X}_{\varphi}\subseteq\mathbb{R}^{n}\) and an _output set_\(\mathcal{Y}_{\varphi}\subseteq\mathbb{R}^{m}\).
We write \(\mathrm{net}_{\mathbf{\theta}}\vDash\Phi\) when a neural network \(\mathrm{net}_{\mathbf{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\)_satisfies_ a specification \(\Phi\). Specifically,
\[\mathrm{net}_{\mathbf{\theta}}\vDash\Phi \Leftrightarrow\forall\varphi\in\Phi:\mathrm{net}_{\mathbf{\theta}}\vDash\varphi \tag{4a}\] \[\mathrm{net}_{\mathbf{\theta}}\vDash\varphi \Leftrightarrow\forall\mathbf{x}\in\mathcal{X}_{\varphi}:\mathrm{net} _{\mathbf{\theta}}(\mathbf{x})\in\mathcal{Y}_{\varphi}. \tag{4b}\]
Example specifications, such as \(L_{\infty}\) adversarial robustness or an ACAS Xu safety specification [35] are defined in Appendix A. _Counterexamples_ are at the core of the counterexample-guided repair algorithm that we study in this paper.
**Definition 2** (Counterexamples).: An input \(\mathbf{x}\in\mathcal{X}_{\varphi}\) for which a neural network \(\mathrm{net}_{\mathbf{\theta}}\) violates a property \(\varphi\) is called a _counterexample_.
To define verification as an optimisation problem, we introduce _satisfaction functions_. A satisfaction function quantifies the satisfaction or violation of a property regarding the output set. Definition 4 introduces the verification problem, also taking the input set of a property into account.
**Definition 3** (Satisfaction Function).: A function \(f_{\mathrm{Sat}}:\mathbb{R}^{m}\rightarrow\mathbb{R}\) is a _satisfaction function_ of a property \(\varphi=(\mathcal{X}_{\varphi},\mathcal{Y}_{\varphi})\) if
\[\mathbf{y}\in\mathcal{Y}_{\varphi}\Leftrightarrow f_{\mathrm{Sat}}(\mathbf{y}) \geq 0. \tag{5}\]
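For example (our illustration), for a binary classifier \(\mathrm{net}_{\mathbf{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{2}\) and the output set \(\mathcal{Y}_{\varphi}=\{\mathbf{y}\in\mathbb{R}^{2}\mid\mathbf{y}_{1}\geq\mathbf{y}_{2}\}\), requiring the first class to score at least as high as the second, an affine satisfaction function is

\[f_{\mathrm{Sat}}(\mathbf{y})=\mathbf{y}_{1}-\mathbf{y}_{2},\qquad\mathbf{y}\in\mathcal{Y}_{\varphi}\Leftrightarrow f_{\mathrm{Sat}}(\mathbf{y})\geq 0.\]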
**Definition 4** (Verification Problem).: The _verification problem_ for a property \(\varphi=(\mathcal{X}_{\varphi},\mathcal{Y}_{\varphi})\) and a neural network \(\mathrm{net}_{\mathbf{\theta}}\) is
\[V:f_{\mathrm{Sat}}^{*}=\begin{cases}\underset{\mathbf{x}}{\text{minimise}}&f_{ \mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))\\ \text{subject to}&\mathbf{x}\in\mathcal{X}_{\varphi}.\end{cases} \tag{6}\]
We call a specification a _linear specification_ when its properties have a closed convex polytope as an input set and an affine satisfaction function. Appendix A contains the satisfaction functions from SpecRepair [4] for several specifications. The following Proposition follows directly from the definition of a satisfaction function.
**Proposition 1**.: _A neural network \(\mathrm{net}_{\mathbf{\theta}}\) satisfies the property \(\varphi\) if and only if the minimum of the verification problem \(V\) is non-negative._
We now consider _counterexample searchers_ that evaluate the satisfaction function for concrete inputs to compute an upper bound on the minimum of the verification problem \(V\). Such tools can disprove specification satisfaction by discovering a counterexample. They can also prove specification satisfaction when they are _sound_ and _complete_.
**Definition 5** (Soundness and Completeness).: We call a counterexample searcher _sound_ if it computes valid upper bounds. A counterexample searcher is _complete_ if it is guaranteed to find a counterexample whenever a counterexample exists.
**Definition 6** (Verifiers and Falsifiers).: We call a sound and complete counterexample searcher a _verifier_. A counterexample searcher that is sound but not complete is a _falsifier_.
We additionally differentiate _optimal_ and _early-exit_ verifiers.
**Definition 7** (Optimal and Early-Exit Verifiers).: An _optimal_ verifier is a verifier that always produces a global minimiser of the verification problem -- a _most-violating counterexample_. Contrarily, an _early-exit_ verifier provides any counterexample when a property is violated, without further qualifications. It aborts on the first counterexample it encounters.
A technique that allows building a verifier is global optimisation. Performing global minimisation of the verification problem allows for attaining completeness. For ReLU-activated neural networks, this is possible, for example, using Mixed Integer Linear Programming (MILP) [13, 41]. On the other hand, a falsifier may perform local optimisation using projected gradient descent [39, 42] to become sound but not complete. We name this approach _BIM_, abbreviating the name _Basic Iterative Method_ used by Kurakin et al. [39].
### Neural Network Repair
Neural network repair means modifying a trained neural network so that it satisfies a specification it would otherwise violate. While the primary goal of repair is satisfying the specification, the key secondary goal is that the repaired neural network still performs well on the intended task. This secondary goal can be captured using a performance measure, such as the training loss function [4] or the distance between the modified and the original network parameters [23].
**Definition 8** (Repair Problem).: Given a neural network \(\mathrm{net}_{\mathbf{\theta}}\), a property \(\varphi=(\mathcal{X}_{\varphi},\mathcal{Y}_{\varphi})\) and a performance measure \(J:\mathbb{R}^{p}\rightarrow\mathbb{R}\), repair translates to solving the _repair problem_
\[R:\begin{cases}\underset{\mathbf{\theta}\in\mathbb{R}^{p}}{\text{minimise}}&J( \mathbf{\theta})\\ \text{subject to}&f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})) \geq 0\quad\forall\mathbf{x}\in\mathcal{X}_{\varphi}.\end{cases} \tag{7}\]
The repair problem \(R\) is an instance of a robust optimisation problem as defined in Equation (1). Checking whether a parameter \(\mathbf{\theta}\) is feasible for \(R\) corresponds to verification. In particular, we can equivalently reformulate \(R\) using the verification problem's minimum \(f_{\mathrm{Sat}}^{*}\) from Equation (6) as
\[R^{\prime}:\begin{cases}\underset{\mathbf{\theta}\in\mathbb{R}^{p}}{ \text{minimise}}&J(\mathbf{\theta})\\ \text{subject to}&f_{\mathrm{Sat}}^{*}\geq 0.\end{cases} \tag{8}\]
We stress several characteristics of the repair problem that we relax or strengthen in Section 4. First of all, \(\mathrm{net}_{\mathbf{\theta}}\) is a neural network and we repair all parameters \(\mathbf{\theta}\) of the network jointly. Practically, \(\mathrm{net}_{\mathbf{\theta}}\) is a ReLU-activated FCNN or CNN, as these are the models most verifiers support. For typical specifications, such as \(L_{\infty}\) adversarial robustness or the ACAS Xu safety specifications [35], the property input set \(\mathcal{X}_{\varphi}\) is a hyper-rectangle. Hyper-rectangles are closed convex polytopes and, therefore, bounded.
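For instance (our illustration; Appendix A contains the formal definition), the input set of an \(L_{\infty}\) adversarial robustness property around a reference input \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) with radius \(\varepsilon>0\) is the hyper-rectangle

\[\mathcal{X}_{\varphi}=\{\mathbf{x}\in\mathbb{R}^{n}\mid\|\mathbf{x}-\mathbf{x}_{0}\|_{\infty}\leq\varepsilon\}=\prod_{i=1}^{n}\left[\mathbf{x}_{0,i}-\varepsilon,\ \mathbf{x}_{0,i}+\varepsilon\right].\]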
### Counterexample-Guided Repair
In the previous section, we have seen that the repair problem includes the verification problem as a sub-problem. Using this insight, one approach to tackle the repair problem is to iterate running a verifier and removing the counterexamples it finds. This yields the counterexample-guided repair algorithm, which was first introduced, in a similar form, by Goldberger et al. [23]. Removing counterexamples corresponds to a scenario optimisation problem \(CR_{N}\) of the robust optimisation problem \(R\) from Equation (7), where
\[CR_{N}:\begin{cases}\underset{\mathbf{\theta}\in\mathbb{R}^{p}}{\text{minimise}}&J( \mathbf{\theta})\\ \text{subject to}&f_{\text{Sat}}\Big{(}\text{net}_{\mathbf{\theta}}\Big{(}\mathbf{x}^ {(i)}\Big{)}\Big{)}\geq 0\quad\forall i\in\{1,\dots,N\}.\end{cases} \tag{9}\]
Algorithm 1 defines the counterexample-guided repair algorithm using \(CR_{N}\) and \(V\) from Equation (6). In analogy to \(CR_{N}\), we use \(V_{N}\) to denote the verification problem in iteration \(N\). We call the iterations of Algorithm 1 _repair steps_.
Algorithm 1 defines the counterexample-guided repair algorithm for repairing a single property. However, the algorithm extends to repairing multiple properties by adding one constraint for each property to \(CR_{N}\) and verifying the properties separately. While Algorithm 1 is formulated for the repair problem \(R\), it is easy to generalise it to any robust program \(P\) as defined in Equation (1). Then, solving \(CR_{N}\) corresponds to solving \(SP\) from Equation (2) and solving \(V_{N}\) corresponds to finding maximal constraint violations of \(P\).
The question we are concerned with in this paper is whether Algorithm 1 is guaranteed to terminate after finitely many repair steps. We investigate this question in the following section by studying robust programs that are similar to the repair problem for neural networks and typical specifications but more restrained or more general.
```
1  \(N\gets 0\); do
2      \(\mathbf{\theta}^{(N)}\leftarrow\) local minimiser of \(CR_{N}\)          // counterexample removal (9)
3      \(N\gets N+1\)
4      \(\mathbf{x}^{(N)}\leftarrow\) global minimiser of \(V\) for \(\mathrm{net}_{\mathbf{\theta}^{(N-1)}}\)   // verification (6)
5  while \(f_{\mathrm{Sat}}\big(\mathrm{net}_{\mathbf{\theta}^{(N-1)}}\big(\mathbf{x}^{(N)}\big)\big)<0\)
```
**Algorithm 1** Counterexample-Guided Repair
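To make the loop concrete, consider the following small trace (our illustration; the same problem data reappear in Proposition 4). Let \(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{\theta}-\mathbf{x}\), \(f_{\mathrm{Sat}}(\mathbf{y})=\mathbf{y}\), \(J(\mathbf{\theta})=|\mathbf{\theta}|\) and \(\mathcal{X}_{\varphi}=[0,1]\). Solving the unconstrained \(CR_{0}\) gives \(\mathbf{\theta}^{(0)}=0\); the verifier returns the most-violating counterexample \(\mathbf{x}^{(1)}=1\) with \(f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}^{(0)}}(\mathbf{x}^{(1)}))=-1<0\); solving \(CR_{1}\) under the constraint \(\mathbf{\theta}-1\geq 0\) gives \(\mathbf{\theta}^{(1)}=1\); and since \(\min V_{2}=0\geq 0\), the algorithm terminates after two repair steps:

\[\mathbf{\theta}^{(0)}=0\;\xrightarrow{\;V_{1}\;}\;\mathbf{x}^{(1)}=1\;\xrightarrow{\;CR_{1}\;}\;\mathbf{\theta}^{(1)}=1,\qquad\min V_{2}=0\geq 0.\]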
## 4 Termination of Counterexample-Guided Repair
The counterexample-guided repair algorithm (Algorithm 1) repairs neural networks by iteratively searching and removing counterexamples. In this section, we study whether Algorithm 1 is guaranteed to terminate and whether it produces optimally repaired networks. Our primary focus is on studying robust optimisation problems that are more restrained or more general than the repair problem \(R\) from Equation (7). We apply Algorithm 1 to such problems and study termination of the algorithm for these problems. On our way, we also address the related questions of optimality on termination and termination when using an early-exit verifier as introduced in Definition 7.
We start our investigation by proving that Algorithm 1 provides an optimal solution whenever it terminates. Following this, we look at the more general case of repairing a neural network to conform to a property with an unbounded input set. We disprove termination by example for this case. Next, we prove termination for more restricted problems that originate from repairing linear regression models, linear classifiers, and single ReLU neurons. Lastly, we disprove termination for repairing linear regression models when relying on an early-exit verifier.
Table 1 summarises the central problem and variable names that we use throughout this section. The iterations of Algorithm 1 are called _repair steps_. We count the repair steps starting from one but index the counterexample-removal problems starting from zero, reflecting the number of constraints. Hence, the minimiser of the counterexample-removal problem \(CR_{N-1}\) from Equation (9) in repair step \(N\) is \(\mathbf{\theta}^{(N-1)}\). The verification problem in repair step \(N\) is \(V_{N}\) with the global minimiser \(\mathbf{x}^{(N)}\). We use \(\mathbf{\theta}^{\dagger}\) for a minimiser of the repair problem \(R\) from Equation (7). We are usually satisfied with finding a local minimiser of \(R\), for example, when the objective function of \(R\) is a training loss function. However, in certain settings [23], we may also seek a global minimiser of \(R\). The difference is largely irrelevant for our analysis, as we are primarily concerned with feasibility.
### Optimality on Termination
We prove that when applied to any robust program \(P\) as defined in Equation (1), counterexample-guided repair produces a minimiser of \(P\) whenever it terminates. While Algorithm 1 is formulated for the repair problem \(R\), it is easy to generalise it to \(P\), as described in Section 3.5.
**Proposition 2** (Optimality on Termination).: _Whenever Algorithm 1 terminates after \(\overline{N}\) iterations, it holds that \(\mathbf{\theta}^{(\overline{N}-1)}=\mathbf{\theta}^{\dagger}\)._
Proof.: Assume Algorithm 1 has terminated after \(\overline{N}\) iterations for some robust program \(P\). Since Algorithm 1 has terminated, we know that \(\min V_{\overline{N}}\geq 0\). Hence, \(\mathbf{\theta}^{(\overline{N}-1)}\) is feasible for \(R\). As \(\mathbf{\theta}^{(\overline{N}-1)}\) also minimises \(CR_{\overline{N}-1}\), which is a relaxation of \(R\), it follows that \(\mathbf{\theta}^{(\overline{N}-1)}\) minimises \(R\).
This proof is independent of whether we search for a local minimiser or a global minimiser of \(R\). Therefore, Proposition 2 holds regardless of the type of minimiser we are interested in.
### Non-Termination for General Robust Programs
In this section, we demonstrate non-termination and divergence of Algorithm 1 when we relax assumptions on the repair problem \(R\) that we outline in Section 3.4. In particular, we drop the assumption that the property's input set \(\mathcal{X}_{\varphi}\) is bounded. We disprove termination by example when \(\mathcal{X}_{\varphi}\) is unbounded. To simplify the proof, we use a non-standard neural network architecture. After the proof, we devise a fully-connected neural network (FCNN) that also leads to non-termination. However, here we also have to relax the assumption that we repair all parameters of a neural network jointly. Instead, we repair an individual parameter of the FCNN in isolation.
**Proposition 3** (General Non-Termination).: _Algorithm 1 does not terminate for \(J:\mathbb{R}\rightarrow\mathbb{R}\), \(f_{\mathrm{Sat}}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and \(\mathrm{net}_{\mathbf{\theta}}:\mathbb{R}\rightarrow\mathbb{R}^{2}\) where_
\[\begin{aligned} J(\mathbf{\theta})&=[-\mathbf{\theta}]^{+}&&\text{(10a)}\\ f_{\mathrm{Sat}}(\mathbf{y})&=\mathbf{y}_{2}+\mathbf{y}_{1}-1&&\text{(10b)}\\ \mathrm{net}_{\mathbf{\theta}}(\mathbf{x})&=\left[\begin{pmatrix}-\mathbf{\theta}\\ \mathbf{\theta}-\mathbf{x}\end{pmatrix}\right]^{+}&&\text{(10c)}\\ \mathcal{X}_{\varphi}&=\mathbb{R},&&\text{(10d)}\end{aligned}\]
_where \(\left[x\right]^{+}=\max(0,x)\) denotes the rectified linear unit (ReLU)._
| **Symbol** | **Meaning** | **Symbol** | **Meaning** |
| --- | --- | --- | --- |
| \(R\) | Repair Problem (7) | \(\mathbf{\theta}^{\dagger}\) | (Local) minimiser of \(R\) |
| \(CR_{N}\) | Counterexample Removal Problem (9) | \(\mathbf{\theta}^{(N)}\) | (Local) minimiser of \(CR_{N}\) |
| \(V_{N}\) | Verification Problem for \(\mathrm{net}_{\mathbf{\theta}^{(N-1)}}\) (6) | \(\mathbf{x}^{(N)}\) | Global minimiser of \(V_{N}\) |

Table 1: **Symbol Overview**
Before we begin the proof of Proposition 3, we first give an intuition for the proof using Figure 1(a). The core of the proof is that Algorithm 1 generates parameter iterates \(\mathbf{\theta}^{(N)}\) and counterexamples \(\mathbf{x}^{(N)}\) that lie on the dark-red flat surface of Figure 1(a), where \(f_{\mathrm{Sat}}\) is negative. The combination of \(f_{\mathrm{Sat}}\) and the objective function \(J\) that prefers non-negative \(\mathbf{\theta}^{(N)}\) leads to \(\mathbf{\theta}^{(N)}\geq 0\) for every \(N\in\mathbb{N}\). As there is always a new counterexample \(\mathbf{x}^{(N)}\) for every \(\mathbf{\theta}^{(N-1)}\geq 0\), Algorithm 1 does not terminate.
Proof of Proposition 3.: Let \(J\), \(f_{\mathrm{Sat}}\), \(\mathrm{net}_{\mathbf{\theta}}\) and \(\mathcal{X}_{\varphi}\) be as in Proposition 3. Assembled into a repair problem, they yield
\[R:\begin{cases}\underset{\mathbf{\theta}\in\mathbb{R}}{\text{minimise}}&[-\mathbf{ \theta}]^{+}\\ \text{subject to}&[\mathbf{\theta}-\mathbf{x}]^{+}+[-\mathbf{\theta}]^{+}-1\geq 0 \quad\forall\mathbf{x}\in\mathbb{R}.\end{cases} \tag{11}\]
We now show that Algorithm 1 does not terminate when applied to \(R\).
The problem \(CR_{0}\) is minimising \(J(\mathbf{\theta})=[-\mathbf{\theta}]^{+}\) without constraints. The minimiser of \(J\) is not unique, but all minimisers satisfy \(\mathbf{\theta}^{(0)}\geq 0\). Let \(\mathbf{\theta}^{(0)}\geq 0\) be such a minimiser.
Searching for the global minimiser \(\mathbf{x}^{(1)}\) of \(V_{1}\), we find that this minimiser is non-unique as well. However, all minimisers satisfy \(\mathbf{x}^{(1)}\geq\mathbf{\theta}^{(0)}\). This follows since any minimiser of
\[g\Big{(}\mathbf{x},\mathbf{\theta}^{(0)}\Big{)}=\Big{[}\mathbf{\theta}^{(0)}-\mathbf{ x}\Big{]}^{+}+\left[-\mathbf{\theta}^{(0)}\right]^{+}-1 \tag{12}\]
minimises \(\big{[}\mathbf{\theta}^{(0)}-\mathbf{x}\big{]}^{+}\) as the remaining terms of Equation (12) are constant regarding \(\mathbf{x}\). The observation \(\mathbf{x}^{(1)}\geq\mathbf{\theta}^{(0)}\) applies analogously for later repair steps. Therefore, \(\mathbf{x}^{(N)}\geq\mathbf{\theta}^{(N-1)}\).
For any further repair step, we find that all non-negative feasible points \(\mathbf{\theta}\) of \(CR_{N}\) satisfy
\[\mathbf{\theta}\geq\max\Big{(}\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\Big{)}+1. \tag{13}\]
This follows because \(g\big{(}\mathbf{x}^{(i)},\mathbf{\theta}\big{)}\geq 0\) has to hold for all \(i\in\{1,\ldots,N\}\) for \(\mathbf{\theta}\) to be feasible for \(CR_{N}\). Now, if \(\mathbf{\theta}\geq 0\), this condition simplifies to
\[g\Big{(}\mathbf{x}^{(i)},\mathbf{\theta}\Big{)}=\Big{[}\mathbf{\theta}-\mathbf{x}^{(i )}\Big{]}^{+}+[-\mathbf{\theta}]^{+}-1=\Big{[}\mathbf{\theta}-\mathbf{x}^{(i)}\Big{]} ^{+}-1\geq 0, \tag{14}\]
Figure 1: **Constraint Visualisations for Non-Termination Proofs.** We visualise the function \(f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))\) from Proposition 3 and for the FCNN variant from Example 1. In both cases, the parameter iterates \(\mathbf{\theta}^{(N)}\) and the counterexamples \(\mathbf{x}^{(N)}\) diverge to \(\infty\) along the dark-red flat surface where the \(f_{\mathrm{Sat}}\) value is negative. This divergence implies non-termination of Algorithm 1. The black line represents an example sequence of diverging parameter and counterexample iterates.
for all \(i\in\{1,\ldots,N\}\). We see that Equation (14) is satisfied for all \(i\in\{1,\ldots,N\}\) only if \(\mathbf{\theta}\) is larger than the largest \(\mathbf{x}^{(i)}\) by at least one. This yields equivalence of Equations (14) and (13).
As Equation (13) always has a solution, there always exists a positive feasible point for \(CR_{N}\). Now, due to \(J\), any minimiser \(\mathbf{\theta}^{(N)}\) of \(CR_{N}\) is positive and hence satisfies Equation (13). Putting these results together, we obtain
\[\begin{aligned} \mathbf{\theta}^{(0)}&\geq 0&&\text{(15a)}\\ \mathbf{x}^{(N)}&\geq\mathbf{\theta}^{(N-1)}&&\text{(15b)}\\ \mathbf{\theta}^{(N)}&\geq\mathbf{x}^{(N)}+1.&&\text{(15c)}\end{aligned}\]
Inspecting Equation (11) closely reveals that no positive value \(\mathbf{\theta}\) is feasible for \(R\) as there always exists an \(\mathbf{x}\geq\mathbf{\theta}\). However, it follows from Equations (15) that the iterate \(\mathbf{\theta}^{(N)}\) of Algorithm 1 is always positive and thus never feasible for \(R\). Since feasibility for \(R\) is the criterion for Algorithm 1 to terminate, it follows that Algorithm 1 does not terminate for this repair problem.
We might be willing to accept non-termination for problems without a minimiser. However, \(R\) from Equation (11) has a minimiser. We have already seen in the proof of Proposition 3 that all positive \(\mathbf{\theta}\) are infeasible for \(R\). Similarly all \(\mathbf{\theta}\in(-1,0]\) are infeasible. However, all \(\mathbf{\theta}\leq-1\) are feasible as
\[[\mathbf{\theta}-\mathbf{x}]^{+}+[-\mathbf{\theta}]^{+}-1\geq[-\mathbf{\theta}]^{+}-1\geq 0, \tag{16}\]
for any \(\mathbf{x}\in\mathbb{R}\). For negative \(\mathbf{\theta}\), \(J\) prefers larger values. Because of this, the only minimiser of \(R\) is \(\mathbf{\theta}^{\dagger}=-1\). Indeed, Algorithm 1 not only fails to terminate but also moves further and further away from the optimal solution.
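For concreteness (our illustration, instantiating Equations (15)), one possible run of Algorithm 1 on Equation (11) is

\[\mathbf{\theta}^{(0)}=0,\quad\mathbf{x}^{(1)}=0,\quad\mathbf{\theta}^{(1)}=1,\quad\mathbf{x}^{(2)}=1,\quad\mathbf{\theta}^{(2)}=2,\quad\mathbf{x}^{(3)}=2,\quad\ldots\]

The iterates diverge to \(\infty\), while the unique minimiser \(\mathbf{\theta}^{\dagger}=-1\) is never approached.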
**Example 1** (Non-Termination for an FCNN).: The network in Proposition 3 fits our definition of a neural network but does not have a standard neural network architecture. However, Algorithm 1 also does not terminate for repairing only the parameter \(\mathbf{\theta}\) of the FCNN
\[\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})=\left[\begin{pmatrix}-1&0\\ 1&-1\end{pmatrix}\left[\begin{pmatrix}0\\ 1\end{pmatrix}\mathbf{x}+\begin{pmatrix}\mathbf{\theta}\\ 2\end{pmatrix}\right]^{+}+\begin{pmatrix}2\\ 0\end{pmatrix}\right]^{+}, \tag{17}\]
when \(f_{\mathrm{Sat}}\) is as in Proposition 3 and \(J(\mathbf{\theta})=\mathrm{net}_{\mathbf{\theta}}(0)_{1}\). Figure 2 visualises this FCNN. The proof of non-termination for this FCNN is analogous to the proof of Proposition 3. Figure 1(b) visualises \(f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))\) for the FCNN from Equation (17). Comparison with Figure 1(a) reveals that the key aspects of \(f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))\) for the FCNN are identical to Proposition 3, except for being shifted. Most notably, there also is a flat surface with a negative \(f_{\mathrm{Sat}}\) value. As \(J\) also prefers non-negative \(\mathbf{\theta}\) in this example, Algorithm 1 diverges here as well.
### Termination for Robust Programs with Linear Constraints
In the previous section, we relax assumptions on neural network repair and show non-termination for the resulting more general problem. In this section, we look at a more restricted class of problems instead: robust problems with linear constraints. This class of problems encompasses, for example, repairing linear regression models to conform to a linear specification. Linear regression models can be understood as neural networks without hidden layers. As defined in Section 3.3, linear specifications consist only of properties with an affine satisfaction function and a closed convex polytope as an input set.
**Theorem 1** (Termination for Linear Constraints).: _Let \(g(\mathbf{\theta},\mathbf{x})=f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))\) be bi-linear and let \(\mathcal{X}_{\varphi}\) be a closed convex polytope. Algorithm 1 computes a minimiser of_
\[R:\left\{\begin{aligned} &\underset{\mathbf{\theta}\in\mathbb{R}^{p}}{ \text{minimise}}& J(\mathbf{\theta})\\ &\text{subject to}& g(\mathbf{\theta},\mathbf{x})\geq 0 \quad\forall\mathbf{x}\in\mathcal{X}_{\varphi}\end{aligned}\right. \tag{18}\]
_in a finite number of repair steps._
Proof.: We prove termination of Algorithm 1 for \(R\) from Theorem 1. Optimality then follows from Proposition 2. Let \(g:\mathbb{R}^{p}\times\mathbb{R}^{n}\to\mathbb{R}\) be bi-linear, that is, linear in each argument when the other one is fixed. Let \(\mathcal{X}_{\varphi}\) be a closed convex polytope. Given this, every \(V_{N}\) is a linear program and all \(V_{N}\) share the same feasible set \(\mathcal{X}_{\varphi}\). Because \(V_{N}\) is a linear program, its minimiser coincides with one of the vertices of the feasible set \(\mathcal{X}_{\varphi}\).
It follows that \(\forall N\in\mathbb{N}:\mathbf{x}^{(N)}\in\mathrm{vert}(\mathcal{X}_{\varphi})\), where \(\mathrm{vert}(\mathcal{X}_{\varphi})\) are the vertices of \(\mathcal{X}_{\varphi}\). Because \(\mathrm{vert}(\mathcal{X}_{\varphi})\) is finite, at some repair step \(\overline{N}\) of Algorithm 1, we obtain a minimiser that we already encountered in a previous repair step. Let \(\tilde{N}\) be that repair step, such that \(\mathbf{x}^{(\tilde{N})}=\mathbf{x}^{(\overline{N})}\). Since \(\mathbf{\theta}^{(\overline{N}-1)}\) is feasible for \(\mathit{CR}_{\overline{N}-1}\), it satisfies
\[g\Big{(}\mathbf{\theta}^{(\overline{N}-1)},\mathbf{x}^{(\tilde{N})}\Big{)}=g \Big{(}\mathbf{\theta}^{(\overline{N}-1)},\mathbf{x}^{(\overline{N})}\Big{)}=f_{ \mathrm{Sat}}\Big{(}\mathrm{net}_{\mathbf{\theta}^{(\overline{N}-1)}}\Big{(} \mathbf{x}^{(\overline{N})}\Big{)}\Big{)}\geq 0. \tag{19}\]
As this is the negation of the loop condition of Algorithm 1, the algorithm terminates in repair step \(\overline{N}\).
Note that Theorem 1 holds without assumptions on the objective \(J\). Therefore, Theorem 1 encompasses such cases as training a linear regression model or a linear support vector machine under a linear specification. The insights from our proof enable a new repair algorithm for linear regression models based on quadratic programming. We discuss and evaluate this algorithm in Section 5.4.
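As an illustration (ours), consider repairing a one-dimensional linear regression model \(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{\theta}_{1}\mathbf{x}+\mathbf{\theta}_{2}\) with \(f_{\mathrm{Sat}}(\mathbf{y})=\mathbf{y}\) and \(\mathcal{X}_{\varphi}=[0,1]\). The constraint \(g(\mathbf{\theta},\mathbf{x})=\mathbf{\theta}_{1}\mathbf{x}+\mathbf{\theta}_{2}\) is bi-linear and every \(V_{N}\) is a linear program over \([0,1]\), so every counterexample is one of the two vertices \(\mathbf{x}=0\) or \(\mathbf{x}=1\). Hence, at most two distinct counterexamples can occur, covered by the constraints

\[\mathbf{\theta}_{2}\geq 0\qquad\text{and}\qquad\mathbf{\theta}_{1}+\mathbf{\theta}_{2}\geq 0,\]

after which Algorithm 1 terminates.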
### Termination for Element-Wise Monotone Constraints
Next, we study a different restricted class of repair problems that contains repairing single ReLU and sigmoid neurons to conform to linear specifications. This includes repairing linear classifiers, which are single sigmoid neurons. In this class of problems, the constraint \(g(\mathbf{\theta},\mathbf{x})=f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{ x}))\) is _element-wise monotone_ and continuous and \(\mathcal{X}_{\varphi}\) is a hyper-rectangle. Element-wise monotone functions are monotone in each argument, all other arguments being fixed at some value. We show termination for this class of repair problems.
Figure 2: **Fully-Connected Neural Network Variant of Proposition 3.** This figure visualises Equation (17). Empty nodes represent single ReLU neurons. Edge labels between nodes contain the network weights. Where edges are omitted, the corresponding weights are zero. Biases are written next to the incoming edge above the ReLU neurons.
**Definition 9** (Element-Wise Monotone).: A function \(f:\mathcal{X}\to\mathbb{R}\), \(\mathcal{X}\subseteq\mathbb{R}^{n}\), is _element-wise monotone_ if
\[\forall i\in\{1,\ldots,n\}:\forall\mathbf{x}\in\mathcal{X}:\,f|_{\mathcal{X} \cap(\{\mathbf{x}_{1}\}\times\cdots\times\{\mathbf{x}_{i-1}\}\times\mathbb{R} \times\{\mathbf{x}_{i+1}\}\times\cdots\times\{\mathbf{x}_{n}\})}\text{ is monotone}. \tag{20}\]
_Remark_.: Affine transformations of element-wise monotone functions maintain element-wise monotonicity. This directly follows from affine transformations maintaining monotonicity.
Element-wise monotone functions can be monotonically increasing and decreasing in the same element but only for different values of the remaining elements. Examples of element-wise monotone functions include \(\big{[}\mathbf{w}^{\mathsf{T}}\mathbf{x}+\mathbf{b}\big{]}^{+}\) and \(\sigma\big{(}\mathbf{w}^{\mathsf{T}}\mathbf{x}+\mathbf{b}\big{)}\), where \([x]^{+}=\max(0,x)\) is the ReLU function and \(\sigma(x)=\frac{1}{1+e^{-x}}\) is the sigmoid function. These functions are also continuous.
**Theorem 2** (Termination for Element-Wise Monotone Constraints).: _Let \(g(\boldsymbol{\theta},\mathbf{x})=f_{\mathrm{Sat}}(\mathrm{net}_{\boldsymbol{ \theta}}(\mathbf{x}))\) be element-wise monotone and continuous. Let \(\mathcal{X}_{\varphi}\) be a hyper-rectangle. Algorithm 1 computes a minimiser of_
\[R:\begin{cases}\underset{\boldsymbol{\theta}\in\mathbb{R}^{p}}{ \text{minimise}}&J(\boldsymbol{\theta})\\ \text{subject to}&g(\boldsymbol{\theta},\mathbf{x})\geq 0\quad\forall \mathbf{x}\in\mathcal{X}_{\varphi}\end{cases} \tag{21}\]
_in a finite number of repair steps under the assumption that the algorithm prefers global minimisers of \(V_{N}\) that are vertices of \(\mathcal{X}_{\varphi}\)._
In this theorem, we make an assumption on the global minimisers that Algorithm 1 prefers when there are multiple global minimisers. In the proof of Lemma 1, we show that the assumption in Theorem 2 is a weak assumption. In particular, we show that it is easy to construct a global minimiser of \(V_{N}\) that is a vertex of \(\mathcal{X}_{\varphi}\) given any global minimiser of \(V_{N}\). Lemma 1 is a preliminary result for proving Theorem 2.
**Lemma 1** (Optimal Vertices).: _Let \(R\), \(g\) and \(\mathcal{X}_{\varphi}\) be as in Theorem 2. Then, for every \(N\in\mathbb{N}\) there is \(\tilde{\mathbf{x}}^{(N)}\in\mathrm{vert}(\mathcal{X}_{\varphi})\) that globally minimises \(V_{N}\), where \(\mathrm{vert}(\mathcal{X}_{\varphi})\) denotes the set of vertices of \(\mathcal{X}_{\varphi}\)._
Proof.: Let \(R\), \(g\), \(\mathcal{X}_{\varphi}\) be as in Lemma 1. Let \(N\in\mathbb{N}\). To prove the lemma we show that a) \(V_{N}\) has a minimiser and b) when there is a minimiser of \(V_{N}\), some vertex of \(\mathcal{X}_{\varphi}\) also minimises \(V_{N}\) and has the same \(f_{\mathrm{Sat}}\) value.
* As the feasible set of \(V_{N}\) is closed and bounded due to being a hyper-rectangle and the objective function is continuous, \(V_{N}\) has a minimiser.
* Let \(\mathbf{x}^{(N)}\in\mathbb{R}^{n}\) be a global minimiser of \(V_{N}\). We show that there is a \(\tilde{\mathbf{x}}^{(N)}\in\mathrm{vert}(\mathcal{X}_{\varphi})\) such that \(\tilde{\mathbf{x}}^{(N)}\) also minimises \(V_{N}\) since \[g\Big(\boldsymbol{\theta}^{(N-1)},\mathbf{x}^{(N)}\Big)\geq g\Big(\boldsymbol{\theta}^{(N-1)},\tilde{\mathbf{x}}^{(N)}\Big). \tag{22}\] Pick any dimension \(i\in\{1,\ldots,n\}\). As \(g\) is element-wise monotone, it is non-increasing in one of the two directions along dimension \(i\) starting from \(\mathbf{x}^{(N)}\). When \(\mathbf{x}^{(N)}\) does not already lie on a face of \(\mathcal{X}_{\varphi}\) that bounds expansion along the \(i\)-axis, we walk along the non-increasing direction along dimension \(i\) until we reach such a face of \(\mathcal{X}_{\varphi}\). As \(\mathcal{X}_{\varphi}\) is a hyper-rectangle and, therefore, bounded, it is guaranteed that we reach such a face. We pick the point on the face of \(\mathcal{X}_{\varphi}\) as the new \(\mathbf{x}^{(N)}\). While keeping dimension \(i\) fixed, we repeat the above procedure for a different dimension \(j\in\{1,\ldots,n\}, i\neq j\). We iterate the procedure over all dimensions, always keeping the value of \(\mathbf{x}^{(N)}\) in already visited dimensions fixed. In every step of this procedure, we restrict ourselves to a lower-dimensional face of \(\mathcal{X}_{\varphi}\) as we fix the value in one dimension. Thus, when we have visited every dimension, we have
reached a \(0\)-dimensional face of \(\mathcal{X}_{\varphi}\), that is, a vertex. Since we only walked along directions in which \(g\) is non-increasing and since \(g\) is element-wise monotone, the vertex \(\tilde{\mathbf{x}}^{(N)}\) that we obtain satisfies Equation (22). Since \(\mathbf{x}^{(N)}\) is a global minimiser, Equation (22) needs to hold with equality.
Together, a) and b) yield that there is always a vertex \(\tilde{\mathbf{x}}^{(N)}\in\operatorname{vert}(\mathcal{X}_{\varphi})\) that globally minimises \(V_{N}\).
Proof of Theorem 2.: Again, we prove termination with optimality following from Proposition 2. Let \(R\), \(g\), \(\mathcal{X}_{\varphi}\) be as in Theorem 2. Also, assume that Algorithm 1 prefers vertices of \(\mathcal{X}_{\varphi}\) as global minimisers of \(V_{N}\). From Lemma 1 we know that there is always a vertex of \(\mathcal{X}_{\varphi}\) that minimises \(V_{N}\). From the proof of Lemma 1 we also know that it is easy to find such a vertex given any global minimiser of \(V_{N}\).
As Algorithm 1 always chooses vertices of \(\mathcal{X}_{\varphi}\) under our assumption, there is only a finite set of minimisers \(\mathbf{x}^{(N)}\), as a hyper-rectangle has only finitely many vertices. Given this, termination follows analogously to the proof of Theorem 1.
As the class of continuous element-wise monotone functions includes single ReLU and sigmoid neurons, Theorem 2 provides a termination guarantee for repairing linear classifiers -- which are single sigmoid neurons -- to conform to linear specifications.
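As a brief illustration (ours), take a single ReLU neuron \(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})=\big[\mathbf{w}^{\mathsf{T}}\mathbf{x}+b\big]^{+}\), \(f_{\mathrm{Sat}}(\mathbf{y})=\mathbf{y}-c\) for some threshold \(c>0\), and \(\mathcal{X}_{\varphi}=[0,1]^{2}\). The constraint

\[g(\mathbf{\theta},\mathbf{x})=\big[\mathbf{w}_{1}\mathbf{x}_{1}+\mathbf{w}_{2}\mathbf{x}_{2}+b\big]^{+}-c\]

is continuous and element-wise monotone in \(\mathbf{x}\): decreasing \(\mathbf{x}_{i}\) when \(\mathbf{w}_{i}>0\), or increasing it when \(\mathbf{w}_{i}<0\), never increases \(g\). A most-violating counterexample can therefore always be found among the four vertices of \([0,1]^{2}\), e.g. \(\mathbf{x}=(0,1)\) for \(\mathbf{w}_{1}>0>\mathbf{w}_{2}\), which is exactly the walk performed in the proof of Lemma 1.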
While we have proven termination for single neurons now, the facts that we used in our proof no longer hold when we consider neural networks of multiple neurons. Notably, for such networks, \(V_{N}\) can have a minimiser anywhere inside the feasible region and this minimiser may move when the network parameters are modified. Coming from the other side, the construction that we use in Section 4.2 relies on a diverging sequence of counterexamples. However, when counterexamples need to lie in a bounded set, as it is the case with common neural network specifications, it becomes intricate to construct a diverging sequence originating from a repair problem.
In summary, although we cannot answer at this point whether Algorithm 1 terminates when applied to neural network repair for bounded property input sets, our methodology is useful for studying related questions. Our theoretical results in this paper may point in the direction in which the answer to our original question lies. In the following section, we continue our theoretical analysis, showing that early-exit verifiers are insufficient for guaranteeing termination of Algorithm 1.
### Counterexample-Guided Repair with Early-Exit Verifiers
From a verification perspective, verifiers are not required to find most-violating counterexamples. Instead, it suffices to find any counterexample if one exists. In this section, we show that using just any counterexample is not sufficient for Algorithm 1 to terminate. We show that when using an _early-exit_ verifier that produces such otherwise unqualified counterexamples, repair may fail even for linear regression models. As linear regression models are special neural networks without hidden layers, this result propagates to neural network repair.
Consider a modification of Algorithm 1, where we only search for a feasible point of \(V_{N}\) with a negative objective value instead of the global minimum. This corresponds to using an early-exit verifier during repair. The following proposition demonstrates that this modification can lead to non-termination even for robust optimisation problems with linear constraints.
**Proposition 4** (Non-Termination for Early-Exit Verifiers).: _Algorithm 1 modified to use an early-exit verifier is not guaranteed to terminate for_
\[J(\mathbf{\theta})=|\mathbf{\theta}| \tag{23a}\] \[f_{\mathrm{Sat}}(\mathbf{y})=\mathbf{y} \tag{23b}\] \[\mathrm{net}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{\theta}-\mathbf{x} \tag{23c}\] \[\mathcal{X}_{\varphi}=[0,1]. \tag{23d}\]
Proof.: Let \(J\), \(f_{\mathrm{Sat}}\), \(\mathrm{net}_{\mathbf{\theta}}\) and \(\mathcal{X}_{\varphi}\) be as in Proposition 4. When inserting these into Equation (7), we obtain the repair problem
\[R:\begin{cases}\underset{\mathbf{\theta}\in\mathbb{R}}{\text{minimise}}&|\mathbf{ \theta}|\\ \text{subject to}&\mathbf{\theta}-\mathbf{x}\geq 0\quad\forall\mathbf{x}\in[0,1]. \end{cases} \tag{24}\]
Assume the early-exit verifier generates the sequence \(\mathbf{x}^{(N)}=\frac{1}{2}-\frac{1}{N+2}\) as long as these points are counterexamples for \(\mathrm{net}_{\mathbf{\theta}^{(N-1)}}\). Otherwise, let it produce \(\mathbf{x}^{(N)}=1\), the global minimum of all \(V_{N}\). Minimising \(J\) without constraints yields \(\mathbf{\theta}^{(0)}=0\). The point \(\mathbf{x}^{(1)}=\frac{1}{2}-\frac{1}{3}\) is a valid result of the early-exit verifier for \(V_{1}\), as it is a counterexample. We observe that the constraint
\[f_{\mathrm{Sat}}(\mathrm{net}_{\mathbf{\theta}}(\mathbf{x}))=\mathbf{ \theta}-\mathbf{x}\geq 0 \tag{25}\]
is tight when \(\mathbf{\theta}=\mathbf{x}\). Smaller \(\mathbf{\theta}\) violate the constraint. Since \(J\) prefers values of \(\mathbf{\theta}\) closer to zero, it always holds for any minimiser of \(CR_{N}\) that
\[\mathbf{\theta}^{(N)}=\max\left(\mathbf{x}^{(1)},\dots,\mathbf{x}^{ (N)}\right)=\mathbf{x}^{(N)}. \tag{26}\]
The last equality is due to the construction of the points returned by the early-exit verifier. However, for these values of \(\mathbf{\theta}^{(N)}\), \(\frac{1}{2}-\frac{1}{N+2}\) always remains a valid output of the early-exit verifier for \(V_{N}\). Thus, we obtain
\[\mathbf{\theta}^{(N)}=\mathbf{x}^{(N)}=\frac{1}{2}-\frac{1}{N+2}. \tag{27}\]
The minimiser of \(R\) is \(\mathbf{\theta}^{\dagger}=1\). However, \(\mathbf{\theta}^{(N)}\) does not converge to this point but to the infeasible \(\lim_{N\to\infty}\mathbf{\theta}^{(N)}=\frac{1}{2}\). Since the iterates \(\mathbf{\theta}^{(N)}\) always remain infeasible for \(R\), the modified Algorithm 1 never terminates.
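The divergence mechanism in this proof is easy to observe numerically. The following minimal sketch simulates the early-exit sequence from Equations (24)-(27); it is purely illustrative.

```python
def simulate_early_exit_repair(steps=20):
    # Repair problem (24): minimise |theta| s.t. theta - x >= 0 for x in [0, 1].
    theta = 0.0  # unconstrained minimiser of J
    for n in range(1, steps + 1):
        x = 0.5 - 1.0 / (n + 2)  # early-exit counterexample from the proof
        if theta - x >= 0:       # not a counterexample any more
            x = 1.0              # fall back to the most-violating point
        theta = max(theta, x)    # minimiser of the counterexample-guided problem
        print(f"N={n}: theta={theta:.4f}")
    # theta creeps towards the infeasible 1/2 instead of the repair theta = 1.

simulate_early_exit_repair()
```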
This result concludes our theoretical investigation. Table 2 summarises our results regarding the termination of Algorithm 1. In the following section, we research empirical aspects of Algorithm 1, including the practical implications of the above result on using early-exit verifiers.
## 5 Experiments
_Optimal_ verifiers that compute most-violating counterexamples are theoretically advantageous but not widely available [55]. Conversely, _early-exit_ verifiers that produce plain counterexamples without further qualifications are readily available [3, 21, 36, 59, 66], but are theoretically disadvantageous, as apparent from Section 4.5. However, despite these theoretical findings, previous work reveals that practically it is possible to achieve repair while using early-exit verifiers [4, 15]. In this section, we empirically compare the effects of using most-violating counterexamples and sub-optimal counterexamples -- as produced by early-exit verifiers and falsifiers -- for repair. Additionally, we apply our insights from Section 4.3 for repairing linear regression models. Our experiments address the following questions regarding counterexample-guided repair:
* How does repair using an early-exit verifier compare quantitatively to repair using an optimal verifier?
* What quantitative advantages does it provide to use falsifiers during repair?
* Can we surpass existing repair algorithms for linear regression models using our theoretical insights?
### Experiment Design
In our experiments, we repair an MNIST [40] network, ACAS Xu networks [35], a CollisionDetection [17] network, and Integer Dataset Recursive Model Indices (RMIs) [57]. For repair, we make use of an early-exit verifier, an optimal verifier, the SpecAttack falsifier [4], and the BIM falsifier [39]. To obtain an optimal verifier, we modify the ERAN verifier [53] to compute most-violating counterexamples. We use the modified ERAN verifier both as the early-exit and as the optimal verifier in our experiments, as it supports both exit modes.
In all experiments, we use the SpecRepair counterexample-removal algorithm [4] unless otherwise noted. We use SpecRepair with a decreased initial penalty weight of \(2^{-4}\) and a satisfaction constant of \(10^{-4}\). We set up all verifiers and falsifiers to return a single counterexample. For SpecAttack, which produces multiple counterexamples, we select the counterexample with the largest violation. We make this modification to eliminate differences due to some tools returning more counterexamples than others, as we are interested in studying the effects of counterexample quality, not counterexample quantity.
#### 5.1.1 Modifying ERAN to Compute Most-Violating Counterexamples
The ETH Robustness Verifier for Neural Networks (ERAN) [53] combines abstract interpretation with Mixed Integer Linear Programming (MILP) to verify neural networks. For our experiments, we use the DeepPoly abstract interpretation [52]. ERAN leverages Gurobi [29] for MILP. To verify properties with low-dimensional input sets having a large diameter, ERAN implements the ReluVal
\begin{table}
\begin{tabular}{l l l l}
**Problem Class** & **Model** & **Specification** & **Termination of Algorithm 1** \\ \hline
\(f_{\text{Sat}}(\text{net}_{\mathbf{\theta}}(\mathbf{x}))\) bi-linear, \(\mathcal{X}_{\varphi}\) closed convex polytope & Linear Regression Model, Linear Support Vector Machine & Linear & ✓ (Theorem 1) \\
\(f_{\text{Sat}}(\text{net}_{\mathbf{\theta}}(\mathbf{x}))\) element-wise monotone and continuous, \(\mathcal{X}_{\varphi}\) hyper-rectangle & Linear Classifier, ReLU Neuron & Linear & ✓ (Theorem 2) \\
\(\text{net}_{\mathbf{\theta}}(\mathbf{x})\) neural network, \(\mathcal{X}_{\varphi}\) bounded & Neural Network & Bounded Input Set & ? \\
\(\text{net}_{\mathbf{\theta}}(\mathbf{x})\) neural network, \(\mathcal{X}_{\varphi}\) unbounded & Neural Network & Unbounded Input Set & ✗ (Proposition 3) \\
Using an early-exit verifier & Any & Any & ✗ (Proposition 4) \\
\end{tabular}
\end{table}
Table 2: **Termination Results Summary**
input-splitting branch and bound procedure [61]. We employ this branch and bound procedure only for ACAS Xu.
The Gurobi MILP solver can be configured to stop optimisation upon encountering the first point whose satisfaction function value lies below a small negative threshold. We use this feature for the early-exit mode. To compute most-violating counterexamples, we instead run the MILP solver until achieving optimality.
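For illustration, switching between the two exit modes amounts to a single solver parameter. A minimal sketch using Gurobi's `BestObjStop` parameter could look as follows; it assumes the MILP model already encodes the satisfaction function as the minimised objective, and the function name and threshold are ours.

```python
import gurobipy as gp

def find_counterexample(milp: gp.Model, early_exit: bool, threshold: float = -1e-4):
    # Negative objective values correspond to counterexamples.
    if early_exit:
        # Stop as soon as an incumbent with objective below the threshold is found.
        milp.Params.BestObjStop = threshold
    # In optimal mode, the solver runs to optimality and thus returns a
    # most-violating counterexample.
    milp.optimize()
    return milp
```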
The input-splitting branch and bound procedure evaluates branches in parallel. In the early-exit mode, the procedure terminates when it finds a counterexample on any branch. As other branches may contain more-violating counterexamples, we search the entire branch and bound tree in the optimal mode.
#### 5.1.2 Datasets, Networks, and Specifications
We perform experiments with four different datasets. In this section, we introduce the datasets, as well as what networks we repair to conform to which specifications. The network architectures for each dataset are contained in Table 3.
#### MNIST
The MNIST dataset [40] consists of \(70\,000\) labelled images of hand-written Arabic digits. Each image has \(28\times 28\) pixels. The dataset is split into a training set of \(60\,000\) images and a test set of \(10\,000\) images. The task is to predict the digit in an image from the image pixel data. We train a small convolutional neural network achieving \(97\,\%\) test set accuracy (\(98\,\%\) training set accuracy). Table 3 contains the concrete architecture.
We repair the \(L_{\infty}\) adversarial robustness of this convolutional neural network for groups of \(25\) input images. These images are randomly sampled from the images in the training set for which the network is not robust. Each robustness property has a radius of \(0.03\). Overall, we form \(50\) non-overlapping groups of input images. Thus, each repaired network is guaranteed to be locally robust for a different group of \(25\) training set images. While specifications of this size are not practically relevant, they make it feasible to perform several (\(50\)) experiments for each verifier variant. We formally define \(L_{\infty}\) adversarial robustness in Appendix A.1.
We train the MNIST network using Stochastic Gradient Descent (SGD) with a mini-batch size of \(32\), a learning rate of \(0.01\) and a momentum coefficient of \(0.9\), training for two epochs. Counterexample-removal uses the same setup, except for using a decreased learning rate of \(0.001\) and iterating only for a tenth of an epoch.
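For concreteness, this training setup corresponds to a standard PyTorch loop along the following lines; the function and its arguments are placeholders for the network from Table 3 and an MNIST loader with mini-batch size \(32\), not an excerpt of our implementation.

```python
import torch

def train_mnist(model, train_loader):
    # SGD with the hyperparameters stated above: learning rate 0.01,
    # momentum 0.9, two epochs; the mini-batch size of 32 is set in the loader.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(2):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```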
\begin{table}
\begin{tabular}{l l} \hline
**Dataset** & **Network Architecture** \\ \hline
MNIST & In(1\(\times\)28\(\times\)28), Conv(out=8, kernel=3, stride=3, pad=1), ReLU, [FC(out=80), ReLU] \(\times\) 6, FC(out=50) \\
ACAS Xu & In(5), [FC(out=50), ReLU] \(\times\) 6, FC(out=5) \\
CollisionDetection & In(6), [FC(out=10), ReLU] \(\times\) 2, FC(out=2) \\
RMI, First Stage & In(1), [FC(out=16), ReLU] \(\times\) 2, FC(out=1) \\
RMI, Second Stage & In(1), FC(out=1) \\ \hline
\end{tabular}
\end{table}
Table 3: **Network Architectures.** In(*) gives the dimension of the network input. Convolutional layers are denoted Conv(*), where out is the number of filters and kernel, stride, and pad are the kernel size, stride, and padding for all spatial dimensions of the layer input. Fully-connected layers are denoted FC(*), where out is the number of neurons. [-] \(\times\)\(n\) denotes the \(n\)-fold repetition of the block in square brackets. RMI stands for the Integer Dataset RMIs.
#### ACAS Xu
The ACAS Xu networks [35] form a collision avoidance system for aircraft without on-board personnel. Each network receives five sensor measurements that characterise an encounter with another aircraft. Based on these measurements, an ACAS Xu network computes scores for five possible steering directions: Clear of Conflict (maintain course), weak left/right, and strong left/right. The steering direction advised to the aircraft is the output with the minimal score. Each of the \(45\) ACAS Xu networks is responsible for a different class of encounter scenarios. More details on the system are provided by Julian et al. [34]. Each ACAS Xu network is a fully-connected ReLU network with six hidden layers of \(50\) neurons each.
Katz et al. [35] provide safety specifications for the ACAS Xu networks. Of these specifications, the property \(\phi_{2}\) is violated by the largest number of networks. We repair \(\phi_{2}\) for all networks violating it, yielding \(34\) repair cases. The property \(\phi_{2}\) specifies that the score for the Clear of Conflict action is never maximal (least-advised) when the intruder is far away and slow. The precise formal definition of \(\phi_{2}\) is given in Appendix A.2.
We repair the ACAS Xu networks following Bauer-Marquart et al. [4]. To replace the unavailable ACAS Xu training data, we randomly sample a training and a validation set and use the scores produced by the original network as targets. As a loss function, we use the asymmetric mean square error loss of Julian and Kochenderfer [33]. We repair using the Adam training algorithm [37] with a learning rate of \(10^{-4}\). We terminate training on convergence, when the loss on the validation set starts to increase, or after at most \(500\) iterations.
For assessing the performance of repaired networks, we compare the accuracy and the Mean Absolute Error (MAE) between the predictions of the repaired network and the predictions of the original network on a large grid of inputs, filtering out counterexamples. For all networks, the filtered grid contains more than \(24\) million points.
#### CollisionDetection
The CollisionDetection dataset [17] was introduced for evaluating neural network verifiers. The task is to predict whether two particles collide based on their relative position, speed, and turning angles. The training set of \(7000\) samples and the test set of \(3000\) samples are obtained from simulating particle dynamics for randomly sampled initial configurations. We train a small fully-connected neural network with \(20\) neurons on this dataset. The full architecture is given in Table 3.
Similarly to MNIST, we repair the adversarial robustness of this network for \(100\) non-overlapping groups of ten randomly sampled inputs from the training set. Here we also include inputs that do not violate the specification to gather a sufficient number of groups. Each robustness property has a radius of \(0.05\).
The CollisionDetection network is trained for \(1000\) iterations using Adam [37] with a learning rate of \(0.01\). Repair uses Adam with a learning rate of \(0.001\), terminating training on convergence or when reaching \(5000\) iterations.
#### Integer Dataset RMIs
Learned index structures replace index structures, such as B-trees, with machine learning models [38]. Tan et al. [57] identify these models as prime candidates for neural network verification and repair due to the strict requirements of their domain and the small size of the models. We use _Recursive Model Indices_ (RMIs) [38] in our experiments. The task of an RMI is to resolve a key to a position in a sorted sequence.
We build datasets, RMIs and specifications following Tan et al. [57], with the exception that we create models of two sizes. While Tan et al. [57] build one RMI with a second-stage size of ten, we build ten RMIs with a second-stage size of ten and \(50\) RMIs with a second-stage size of \(304\). Each RMI is constructed for a different dataset. We create models of two second-stage sizes because the smaller size does not yield unsafe first-stage models, while the larger second-stage size does not yield unsafe second-stage models. However, we want to repair models of both stages.
Each dataset is a randomly generated integer dataset consisting of a sorted sequence of \(190\,000\) integers. The integers are randomly sampled from a uniform distribution with the range \(\left[0,10^{6}\right]\). The task is to predict the index of an integer (key) in the sorted sequence.
We build an RMI for each dataset. Each RMI consists of two stages. The first stage contains one neural network. The second stage contains several linear regression models. In our case, the second stage contains either ten or \(304\) models. Each dataset is first split into several disjoint blocks, one for each second-stage model. Now, the first-stage network is trained to predict the block an integer key belongs to. The purpose of this model is to select a model from the second stage. Each model of the second stage is responsible for resolving the keys in a block to the position of the key in the sorted sequence. The architectures of the models are given in Table 3.
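To make the two-stage structure concrete, a key lookup in such an RMI proceeds roughly as in the following sketch; the names are illustrative and not taken from an existing implementation.

```python
import numpy as np

def rmi_lookup(key, first_stage, second_stage, num_blocks):
    # Stage 1: the neural network predicts the block the key belongs to,
    # thereby selecting one of the second-stage models.
    block = int(np.clip(round(float(first_stage(key))), 0, num_blocks - 1))
    # Stage 2: the selected linear regression model resolves the key to a
    # position in the sorted sequence.
    w, b = second_stage[block]
    return w * key + b
```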
We train the first-stage model to minimise the Mean Square Error (MSE) between the model predictions and the true blocks. Training uses Adam [37] with a learning rate of \(0.01\) and a mini-batch size of \(512\). For an RMI with a second-stage size of ten, we train for one epoch. For the larger second-stage size of \(304\), we train for six epochs.
The problem of minimising the MSE between the positions a second-stage model predicts and the true positions can be solved analytically. We use the analytic solution for training the second-stage models. In Section 5.4, we compare with Ouroboros [57], which also uses the analytic solution. SpecRepair [4] cannot make use of the analytic solution. Instead, it repairs second-stage models using gradient descent with a learning rate of \(10^{-13}\), running for \(150\) iterations.
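The analytic solution referred to here is the ordinary least-squares fit, which for the one-dimensional second-stage models reduces to the following sketch.

```python
import numpy as np

def fit_second_stage(keys, positions):
    # Closed-form least-squares solution minimising the MSE between the
    # predicted and true positions of a model position = w * key + b.
    keys = np.asarray(keys, dtype=float)
    X = np.stack([keys, np.ones_like(keys)], axis=1)
    (w, b), *_ = np.linalg.lstsq(X, np.asarray(positions, dtype=float), rcond=None)
    return w, b
```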
The specifications for the RMIs are error bounds on the predictions of each model. For a first-stage neural network, the specification is that it may not deviate by more than one valid block from the true block. The specification for a second-stage model consists of one property for each integer in the target block and one for all other integers that the first stage assigns to the second-stage model. The property for a key \(k_{i}\) specifies that the prediction for all keys between the previous key \(k_{i-1}\) and the next key \(k_{i+1}\) in the dataset may not deviate by more than \(\varepsilon\) from the position for \(k_{i}\). We use two sets of specifications, one with \(\varepsilon=100\) and one with \(\varepsilon=150\). The specifications express a guaranteed error bound for looking up both existing and non-existing keys [57]. The formal definitions of the specifications are given in Appendix A.3.
#### 5.1.3 Implementation and Hardware
We build upon SpecRepair [4] for our experiments, leveraging the modified ERAN. SpecRepair and ERAN are implemented in Python. SpecRepair is based on PyTorch [47]. For repairing linear regression models, we also use an ERAN-based Python reimplementation of Ouroboros [57]. The original Ouroboros implementation is not publicly available. The quadratic programming repair algorithm for linear regression models is implemented in Python and leverages Gurobi [29]. Our source code is available at [https://github.com/sen-uni-kn/specrepair](https://github.com/sen-uni-kn/specrepair).
All experiments were conducted on Ubuntu 22.04.1 LTS machines using Python 3.8. The ACAS Xu, CollisionDetection and Integer Dataset RMI experiments were run on a compute server with an Intel Xeon E5-2580 v4 2.4GHz CPU (28 cores) and 1008GB of memory. The MNIST experiments were run on a GPU compute server with an AMD Ryzen Threadripper 3960X 24-Core Processor and 252GB of memory, utilising an NVIDIA RTX A6000 GPU with 48GB of memory.
We limit the execution time for repairing each ACAS Xu network and each MNIST specification to three hours. For CollisionDetection and the Integer Dataset RMIs, we use a shorter timeout of one hour. Except for ACAS Xu, whenever we report runtimes, we repeat all experiments ten times and report the median runtime from these runs. This way, we obtain more accurate runtime measurements that are necessary for interpreting runtime differences below one minute. For ACAS Xu, the runtime differences are sufficiently large for all but one network, so that we can faithfully compare different counterexample searchers without repeating the experiments.
### Optimal vs. Early-Exit Verifier
To evaluate how repair using an early-exit verifier compares to repair using an optimal verifier, we run repair using both verifiers for CollisionDetection, MNIST, ACAS Xu, and the first-stage models of the Integer Dataset RMIs with a second-stage size of \(304\). Our findings are two-fold:
* When verification is expensive, repair using the early-exit verifier is faster most of the time.
* For smaller networks, the two methods are typically similarly fast.
We observe only minimal variations regarding the performance of the repaired networks. These results are reviewed in Appendix B.
#### Larger Networks
Figure 3 depicts which verifier leads to repair fastest, both in terms of the absolute runtime of repair and the number of repair steps Algorithm 1 performs.
For the larger MNIST and ACAS Xu networks, we observe that repair using the early-exit verifier requires less runtime in most cases. Regarding the number of repair steps, we observe the opposite trend. Here, the optimal verifier yields repair in fewer repair steps more often than not. The additional runtime cost of computing most-violating counterexamples offsets the advantage in repair steps. In extreme cases, the early-exit verifier enables repair while the optimal verifier leads to a timeout. Due to this, using the early-exit verifier has a success rate of \(100\,\%\) for ACAS Xu, compared to \(82.5\,\%\) when using the optimal verifier (MNIST: \(100\,\%\) to \(96\,\%\)).
The finding that the optimal verifier leads to fewer repair steps aligns well with theoretical intuition. From a theoretical perspective, we expect that most violating counterexamples should better guide Algorithm 1 towards a repaired network. However, the fact that \(20\,\%\) of the cases defy our expectation means that our intuition is limited. This could be due to the counterexample-removal procedure, but neural network repair may also simply yield unintuitive structures.
#### Smaller Networks
For the smaller CollisionDetection and Integer Dataset networks, we primarily observe that both verifiers typically yield interchangeable runtimes. Only infrequently does repair using one verifier outperform the other by more than \(30\) seconds. This is apparent from Figure 3(a). While
Figure 3: **Optimal vs. Early-Exit Verifier: Runtime. We plot how frequently repair using the optimal verifier \(\boxplus\) or the early-exit verifier \(\boxplus\) is faster in terms of (a) runtime and (b) repair steps. Gray bars \(\boxplus\) depict how frequently both approaches are equally fast. We consider two runtimes equal when they deviate by at most \(30\) seconds. The figure contains data for four different datasets: CollisionDetection (CD), Integer Datasets (RMI), MNIST and ACAS Xu. Gaps to \(100\,\%\) are due to failing repairs.**
there is no variation regarding the number of repair steps for the Integer Dataset RMIs, Figure 3(b) shows the same trend for CollisionDetection as for ACAS Xu and MNIST.
#### A Note on Failing Repairs
We witness several failing repairs in our experiments. These are either due to timeout or due to failing counterexample-removal. There are no indications of non-termination regarding Algorithm 1 itself in these failing repairs. In other words, we do not observe exceedingly high repair step counts. This holds true both for the optimal verifier, for which termination remains an open question, and the early-exit verifier, for which we disprove termination in Section 4.5.
### Using Falsifiers for Repair
Falsifiers are sound but incomplete counterexample searchers that specialise in finding violations fast. In this section, we study how falsifiers can speed up repair. We find that using the BIM falsifier [39] can significantly accelerate repair of the MNIST network, demonstrating the potential of falsifiers for repair.
To study the advantages of falsifiers for repair, we repair an MNIST network and the ACAS Xu networks using the SpecAttack [4] and BIM [39] falsifiers. We outline the approach of the BIM falsifier in Section 3.3. We start repair by searching counterexamples using one of the falsifiers. Only when the falsifier fails to produce further counterexamples do we turn to the early-exit verifier. Ideally, we would want the verifier to be invoked only once, to prove specification satisfaction. Practically, often several additional repair steps have to be performed using the verifier.
For ACAS Xu, we observe that BIM generally fails to find counterexamples. Therefore, we only report using SpecAttack for ACAS Xu. For the small CollisionDetection and Integer Dataset networks, the verifier is already comparatively fast, so neither BIM nor SpecAttack can provide a runtime advantage. We also evaluated combining falsifiers with the optimal verifier, but this does not improve upon using the early-exit verifier.
We run SpecAttack using Sequential Least SQuares Programming (SLSQP), as network gradients are available. In the spirit of Dong et al. (2019), we run BIM with Adam [37] as the optimiser. We find that using Adam yields the most powerful falsifier compared to using gradient descent with momentum and RMSprop. BIM performs local optimisation ten times from different random starting points.
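Conceptually, this BIM variant is a projected first-order minimisation of the satisfaction function over the property's input box, wrapped in the ten random restarts mentioned above. The sketch below is illustrative, assumes a differentiable satisfaction function, and uses names of our own choosing.

```python
import torch

def bim_search(satisfaction, x0, lo, hi, steps=100, lr=0.01):
    # Minimise the satisfaction function over the box [lo, hi]; a negative
    # value certifies a counterexample.
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    best = None
    for _ in range(steps):
        opt.zero_grad()
        value = satisfaction(x)
        if value.item() < 0 and (best is None or value.item() < best[1]):
            best = (x.detach().clone(), value.item())  # keep the largest violation
        value.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(min=lo, max=hi)  # project back into the input set
    return best  # None if no counterexample was found
```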
Figure 4: **Falsifiers: Runtime. We plot the number of repaired instances that individually require less than a certain runtime. We plot this for repair using BIM, SpecAttack, only the optimal verifier and only the early-exit verifier. Both experiments use a timeout of three hours. Runtimes are given on a logarithmic scale.**
#### BIM
The results of our experiments are summarised in Figure 4. For MNIST, we see that using the BIM falsifier can significantly accelerate repair. Repair using BIM is the fastest method in \(70\,\%\) of the repair cases, compared to \(26\,\%\) for only the early-exit verifier, \(2\,\%\) for SpecAttack and \(0\,\%\) for only the optimal verifier. In \(2\,\%\) of the cases, the runtime of the two best variants is within \(30\) seconds.
BIM is an order of magnitude faster than the early-exit verifier, yet it can find counterexamples with a larger violation than the early-exit verifier. Thus, BIM can sometimes provide the repair step advantage of the optimal verifier at a much smaller cost. Again, the breakdown of which method is fastest for each repair case shows that the figure is not as clear as we may wish it to be -- BIM provides a significant runtime advantage in \(70\,\%\) of the cases, but in \(26\,\%\) of the cases using only the early-exit verifier is faster.
#### SpecAttack
For the MNIST network, using SpecAttack is inferior to using only the early-exit verifier. In our experiments, SpecAttack provides no significant runtime advantage for generating counterexamples over the early-exit verifier and tends to compute counterexamples with a smaller violation. SpecAttack's runtime scales well with the network size but exponentially in the input dimension. Thus, it is not surprising that it provides no advantage for our MNIST network, which is tiny compared to state-of-the-art image classification networks.
For ACAS Xu, we would expect that SpecAttack outperforms using only the early-exit verifier more clearly than apparent from Figure 4. Here, SpecAttack's runtime is an order of magnitude faster than the runtime of the early-exit verifier. SpecAttack can also provide an advantage in repair steps in many cases. However, at times using SpecAttack also increases the number of repair steps. Additionally, SpecAttack sometimes makes the final invocations of the early-exit verifier more costly than when only the verifier is used.
Our experiments using falsifiers demonstrate that they can give a substantial runtime advantage to repair, but they also show that speeding up repair traces back to more intricate properties beyond just falsifier speed. Understanding these properties better is a promising future research direction for designing better falsifiers for repair.
### Repairing Linear Regression Models
Our theoretical investigation into the repair of linear regression models in Section 4.3 provides us with a termination guarantee for repairing these models. The investigation also provides interesting insights that can be used to create a repair algorithm for linear regression models based on quadratic programming. In this section, we describe this algorithm and compare it to the Ouroboros [57] and SpecRepair [4] repair algorithms.
We repair the second-stage models of ten Integer Dataset RMIs with a second-stage size of ten. The specifications that we obtain for these models have a similar average size as reported by Tan et al. [57] (\(19\,426\) properties). This indicates that our reimplementation is faithful. Both Ouroboros and SpecRepair are counterexample-guided repair algorithms. Ouroboros performs repair by augmenting the training set with counterexamples and retraining the linear regression models using an analytic solution. SpecRepair uses the \(L_{1}\) penalty function method [44], training the linear regression models using gradient descent. We perform at most two repair steps for SpecRepair. For Ouroboros, we perform up to five repair steps, following Tan et al. [57].
#### Insights into Repairing Linear Regression Models
We recall from Section 4.3 that for repairing a linear regression model to conform to a linear specification, a most-violating counterexample for a property is always located at a vertex of the property's input set. This implies two conclusions for repairing the second-stage RMI models:
* a) To verify a linear regression model, it suffices to evaluate it on the vertices of the input set. As the input of the models is one-dimensional, these are just two points per property.
* b) As we can analytically solve the verification problem \(V\), we can rewrite \(R^{\prime}\) from Equation (8) using two constraints per property. The two constraints correspond to evaluating the satisfaction function for the two vertices of the property input set. We obtain an equivalent formulation of the repair problem \(R\) from Equation (7) with a finite number of constraints.
**Repair using Quadratic Programming.** Conclusion b) from the previous paragraph gives us an equivalent formulation of the repair problem with finitely many linear constraints. We train and repair the second-stage models using MSE. Since MSE is a convex quadratic function and all constraints are linear, it follows that the repair problem is a quadratic program [8]. This allows applying a quadratic programming solver to repair the linear regression models directly. We use Gurobi [29] and report the results for this method under the name _Quadratic Programming_.
Due to the above theory, the quadratic programming repair algorithm is exact. That is, we obtain an infeasible problem if and only if the linear regression model can not satisfy the specification and otherwise obtain the optimal repaired regression model. To mitigate floating point issues, we require the satisfaction function to be at least \(10^{-2}\) in Equation (7) instead of requiring it to be just non-negative. That corresponds to applying a satisfaction constant as in SpecRepair.
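In this setting, the quadratic program has the training MSE as objective and two linear constraints per property vertex. A minimal sketch using Gurobi's Python interface could look as follows; the property tuples and the satisfaction constant of \(10^{-2}\) follow the description above, while all names are illustrative.

```python
import gurobipy as gp
from gurobipy import GRB

def repair_linear_regression(keys, positions, properties, delta=1e-2):
    # properties: iterable of (lo, hi, target, eps) error-bound properties;
    # by Section 4.3 it suffices to constrain the two interval end points.
    m = gp.Model("lr-repair")
    w = m.addVar(lb=-GRB.INFINITY, name="w")
    b = m.addVar(lb=-GRB.INFINITY, name="b")
    # Convex quadratic objective: MSE on the training data.
    mse = gp.quicksum((w * k + b - p) * (w * k + b - p)
                      for k, p in zip(keys, positions)) / len(keys)
    m.setObjective(mse, GRB.MINIMIZE)
    for lo, hi, target, eps in properties:
        for v in (lo, hi):  # the vertices of the property's input set
            m.addConstr(eps - (w * v + b - target) >= delta)
            m.addConstr(eps + (w * v + b - target) >= delta)
    m.optimize()
    # Infeasibility certifies that the model cannot satisfy the specification.
    return (w.X, b.X) if m.Status == GRB.OPTIMAL else None
```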
**Results.** The results of repairing the linear regression models are summarised in Table 4. Our new quadratic programming repair algorithm achieves the highest success rate. As this method is successful if and only if repair is possible, this is not surprising. It is followed by SpecRepair. Ouroboros is the least successful method. This means that SpecRepair's counterexample-removal procedure is stronger than Ouroboros' counterexample-removal procedure, not only theoretically but also practically. Nonetheless, there remains a significant gap between SpecRepair and Quadratic Programming. Our implementation of the different algorithms does not allow for a fair runtime comparison, but we remark that the runtime of Quadratic Programming is competitive in our experiments.
## 6 Conclusion
We view counterexample-guided repair as a robust optimisation algorithm. This approach provides a framework for studying neural network repair, including related but more restrained as well as more general problems. We are able to prove termination of counterexample-guided repair for simpler machine learning models, such as linear classifiers, assuming linear specifications. On the other hand, we show non-termination of repairing neural networks when the specification has an unbounded input set. As our results show, our methodology of viewing repair as robust optimisation is useful for studying the theoretical properties of counterexample-guided repair. We expect
\begin{table}
\begin{tabular}{l c c} & \multicolumn{2}{c}{**Success Rate**} \\
**Algorithm** & \(\varepsilon=100\) & \(\varepsilon=150\) \\ \hline Ouroboros [57] & \(30\,\%\) & \(77\,\%\) \\ SpecRepair [4] & \(58\,\%\) & \(94\,\%\) \\ Quadratic Programming & \(72\,\%\) & \(97\,\%\) \\ \end{tabular}
\end{table}
Table 4: **Comparison of Algorithms for Repairing Linear Regression Models.** We report the success rates of repairing RMI second-stage linear regression models for two specifications with different error bounds \(\varepsilon\). The success rates include models that already satisfy their specification.
that our insights will eventually help to answer the question whether counterexample-guided repair of neural networks terminates when applied to specifications with bounded input sets, such as \(L_{\infty}\) adversarial robustness or the ACAS Xu safety specifications.
Empirically, we find that -- despite a disadvantageous theoretical result -- early-exit verifiers allow achieving repair and can provide speed advantages. Similarly, falsifiers can significantly accelerate repair, but this traces back to more intricate properties than just being faster than the verifier. Studying these properties more closely is a promising direction for future research, as it may allow for designing improved falsifiers tailored to repair.
Our empirical results on repairing linear regression models show that robust optimisation is also practically useful for designing stronger repair algorithms. Overall, we believe that robust optimisation provides a rich arsenal of useful tools for studying and advancing repair.
|
2308.06447 | A Sequential Meta-Transfer (SMT) Learning to Combat Complexities of
Physics-Informed Neural Networks: Application to Composites Autoclave
Processing | Physics-Informed Neural Networks (PINNs) have gained popularity in solving
nonlinear partial differential equations (PDEs) via integrating physical laws
into the training of neural networks, making them superior in many scientific
and engineering applications. However, conventional PINNs still fall short in
accurately approximating the solution of complex systems with strong
nonlinearity, especially in long temporal domains. Besides, since PINNs are
designed to approximate a specific realization of a given PDE system, they lack
the necessary generalizability to efficiently adapt to new system
configurations. This entails computationally expensive re-training from scratch
for any new change in the system. To address these shortfalls, in this work a
novel sequential meta-transfer (SMT) learning framework is proposed, offering a
unified solution for both fast training and efficient adaptation of PINNs in
highly nonlinear systems with long temporal domains. Specifically, the
framework decomposes PDE's time domain into smaller time segments to create
"easier" PDE problems for PINNs training. Then for each time interval, a
meta-learner is assigned and trained to achieve an optimal initial state for
rapid adaptation to a range of related tasks. Transfer learning principles are
then leveraged across time intervals to further reduce the computational
cost. Through a composites autoclave processing case study, it is shown that SMT
is clearly able to enhance the adaptability of PINNs while significantly
reducing computational cost, by a factor of 100. | Milad Ramezankhani, Abbas S. Milani | 2023-08-12T02:46:54Z | http://arxiv.org/abs/2308.06447v1 | A Sequential Meta-Transfer (SMT) Learning to Combat Complexities of Physics-Informed Neural Networks: Application to Composites Autoclave Processing
###### Abstract
Physics-Informed Neural Networks (PINNs) have gained popularity in solving nonlinear partial differential equations (PDEs) via integrating physical laws into the training of neural networks, making them superior in many scientific and engineering applications. However, conventional PINNs still fall short in accurately approximating the solution of complex systems with strong nonlinearity, especially in long temporal domains. Besides, since PINNs are designed to approximate a specific realization of a given PDE system, they lack the necessary generalizability to efficiently adapt to new system configurations. This entails computationally expensive re-training from scratch for any new change in the system. To address these shortfalls, in this work a novel sequential meta-transfer (SMT) learning framework is proposed, offering a unified solution for both fast training and efficient adaptation of PINNs in highly nonlinear systems with long temporal domains. Specifically, the framework decomposes PDE's time domain into smaller time segments to create "easier" PDE problems for PINNs training. Then for each time interval, a meta-learner is assigned and trained to achieve an optimal initial state for rapid adaptation to a range of related tasks. Transfer learning principles are then leveraged across time intervals to further reduce the computational cost. Through a composites autoclave processing case study, it is shown that SMT is clearly able to enhance the adaptability of PINNs while significantly reducing computational cost, by a factor of 100.
physics-informed neural networks; sequential learning; meta-transfer learning; aerospace composites processing
## 1 Introduction
### Physics-informed neural networks and their current drawbacks
Recently, Physics-Informed Neural Networks (PINNs) [1] have gained unprecedented popularity among the research community and have proven to be highly advantageous in both scientific and industrial applications where prior knowledge of the system's underlying physics exists [2]. There are several reasons behind their prominence. Firstly, PINN models provide a powerful and flexible framework for integrating physical laws and constraints into the architecture of neural networks. By explicitly encoding domain-specific knowledge, namely, governing equations or boundary conditions, PINNs leverage a physics-guided approach to learning the unknown solution. Another major advantage of PINN models is the potential to significantly reduce computational costs compared to traditional simulation techniques (e.g., the finite element method). PINNs can leverage the efficiency of neural network computations and their adaptability to new tasks to provide fast predictions at lower computational cost. This is particularly valuable in process optimization tasks, which require an extensive and fast exploration of large and high-dimensional design spaces [3].
Despite promising results in a wide range of applications, it has been shown that conventional PINNs exhibit poor performance and fail to accurately approximate the behaviour of systems with _strong non-linearity_ [4] and _long temporal domains_ [5][6]. Earlier studies have captured the training challenges of conventional PINNs when learning PDE systems with highly nonlinear time-varying characteristics and sharp transitions (i.e., stiff PDEs where the solution exhibits a significant disparity in time scales) [7; 8; 9; 10]. Similarly, when employed to learn an intricate system with long time domains, PINNs often return poor and sub-optimal performance [6]. Various reasons can contribute to this behaviour. One explanation could be the effect of the F-principle in neural networks [11]. It has been shown that neural networks have a learning bias toward low-frequency functions and thus can fail to learn high-frequency components and highly nonlinear functions if not trained sufficiently [12]. Another reason can be the presence of large values in the temporal domain (e.g., simulating a five-hour-long manufacturing process), which plays a part in saturating the activation functions and impeding proper training [6]. In addition to PINNs' limitation in approximating stiff PDEs and long temporal domains, their performance can also suffer for other reasons, such as poor collocation point sampling, unbalanced loss terms, and inefficient network architecture [7][13]. Another major drawback of PINNs is that they are designed to approximate a specific realization of a PDE system [14], and are _not readily generalizable_. In other words, conventional PINNs are problem-specific by nature, and they need to be retrained from scratch for each and every new system configuration (e.g., when a new set of boundary or initial conditions is introduced for the given system).
### State-of-the-art to cope with PINNs' drawbacks
Various methods in the literature have been proposed to alleviate the above drawbacks of PINNs in highly nonlinear systems. Namely, adaptive training and sampling [15][16], sequential learning [4][17], domain decomposition [14][18] and novel network architectures and activation functions [19][20] are among the strategies that have shown promising results in addressing PINNs' training complications. Wang et al. [13] proposed a residual weighting strategy that balances the interaction between the PINN's loss components. Levi et al. [21] proposed a soft attention method which adaptively updates the weights of training points based on how difficult they are to learn. Sequential learning is another avenue of research that has shown promise in enhancing PINN training for nonlinear systems. Specifically, in sequential learning, the temporal domain is discretized into small training time segments, and the PINN model is trained on "simpler" problems in a sequential manner. Mattey et al. [4] introduced a sequential strategy with backward compatibility (bc-PINN) in which the temporal domain is broken down into smaller segments and each segment is trained sequentially using a single network. In a lifelong learning fashion [22], the network is updated at each time interval to adapt to the new domain while ensuring learned knowledge from the previous time segments is preserved. Wang et al. [9] further enhanced this strategy by proposing temporal weights assigned to collocation points in an attempt to respect physical causality for training PINNs. Similarly, in [17], a sequence-to-sequence learning approach is proposed in which, instead of employing a single network to learn all the segments, each time interval is learned by an individual _subnetwork_. In this approach, known as Time Marching (TM), the initial condition of each segment is determined by the latent function learned by the subnetwork in the previous time interval (see section 3.1 for details). Furthermore, the training procedure can be facilitated by implementing TL via initializing each subnetwork with the learned weights of the subnetwork trained in the preceding time segment [10][23]. Since the subnetworks are trained toward learning a similar task (solving the same PDEs under slightly different initial and boundary conditions), the latent features learned by a network trained on the time interval [\(t_{n-1},t_{n}\)] encompass useful information for efficient learning of the next interval [\(t_{n},t_{n+1}\)], and thus the new subnetwork can leverage such information via TL. In [5], the authors introduced a similar strategy called Parallel PINN (PPINN) with the objective of reducing the _computational costs_ of PINN models in long temporal domains. PPINN discretizes long temporal domains into independent and short time segments and solves them in parallel using a coarse-grained solver. A by-product of this method, as elaborated above, is the improvement of PINN's performance in highly nonlinear systems.
Recently, some attempts have been made to enhance PINNs' adaptability (generalizability). These studies have harnessed the idea of knowledge transferability among relevant tasks and implemented different methods such as transfer learning (TL) [24][25], multi-task learning (MTL) [26][27], multi-fidelity learning [28][29] and meta-learning [30; 31; 32]. The idea is to leverage the existing knowledge, such as known physics and low-fidelity labeled data, to expedite the convergence of PINNs as well as to achieve better generalization performance. TL leverages the knowledge and learned representations from a _source_ task toward improving the performance on a related but different task called _target_. Chen et al. [25] constructed different sub-tasks by varying the source term and Reynolds numbers in the Navier-Stokes equation and showed that training on one sub-task and using the learned weights for initializing other sub-tasks can drastically improve the training of PINNs. Similarly, in multi-fidelity learning, a sub-class of TL, the knowledge transfer takes place between the low-fidelity and high-fidelity domains. In a case study on advanced composites manufacturing [28], it was shown that in the presence of low-fidelity knowledge, the training of PINNs in systems with strong nonlinearity and boundary conditions with sharp transitions becomes faster and more efficient.
In MTL, by jointly learning multiple tasks (i.e., one network with multiple heads, each assigned to one task), the network can learn and benefit from the common latent representation among similar tasks and use it toward a fast and cost-effective adaptation to new unseen tasks. Desai et al. [26] proposed an MTL framework to alleviate PINNs' high computational expense associated with training networks for different but closely related tasks. Specifically, a PINN is initially trained on a family of differential equations. Then, by freezing the hidden layers and fine-tuning the final layer or training it from scratch, the network can efficiently learn the new tasks. Similarly, in [27], the authors introduced multi-head PINN (MHPINN), which simultaneously learns multiple tasks using a single network and then uses the shared hidden layers as basis functions for fast-solving similar PDE systems. In contrast to the above methods, meta-learning, instead of transferring useful representations from other tasks, aims to learn an optimal _initial state_ that is suitable for fast and efficient adaptation to a family of relevant tasks [33]. Using meta-learning, Bihlo [30] explored the effect of meta-trained learnable optimizers on the convergence and error minimization of PINN models. It was shown that using a learnable optimizer parameterized by a neural network can improve the performance of conventional PINNs trained using Adam optimizers. In [31], the authors used Reptile, a scalable meta-learning algorithm, to learn a proper initialization state for training the PINN model. Their work achieved a faster convergence of PINNs in a series of ODE and PDE systems such as the Poisson and Burgers equations.
### Objective and novelty of the present study
As reviewed in section 1.2, sequential learning has already been shown [4][17] to be an effective tool to address PINNs' shortcomings in complex nonlinear systems. However, it considerably increases the computational cost, as it introduces more loss terms and entails training multiple networks over a set of time intervals. This could result in slow training and limit the application of such strategies in solving real-world complex problems using PINNs. Besides, while some works have leveraged knowledge transferability (e.g., via utilizing TL and similar methods) toward making PINNs' training and implementation more efficient [25][26][28], no notable research has been conducted to date on reducing the computational costs and improving the adaptability of sequential learning strategies via knowledge transferability.
The present work aims at developing a novel sequential meta-transfer (SMT) learning approach for more efficient, adaptable and accurate training of PINNs in highly nonlinear systems with long temporal domains. Namely, the SMT combines and leverages the well-known TL and meta-learning principles, but under a sequential learning pattern to make the training of PINNs much faster and more efficient than conventional PINNs, while ensuring high adaptability to other relevant tasks/systems (see also Figure 1). At each time segment, instead of training task-specific networks, a series of meta-learners are trained with the goal of obtaining a set of optimal initial parameters used for a fast adaptation to a range of related tasks (e.g., different boundary condition configurations). The work also for the first time introduces an "adaptive temporal segmentation" strategy which adaptively selects the span of the next time interval for training. For each time interval, it evaluates the performance of the subnetwork trained on the previous time segment as a measure of similarity between the two domains (e.g., if the tasks are similar, the model should perform well on both) and based on that chooses the length of the interval. This results in an efficient way of reducing the number of sequential learning steps by allocating a large step to less difficult regions in the domain and employing finer time intervals for areas with highly nonlinear behaviour.
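As a rough illustration of the adaptive temporal segmentation, the choice of the next time segment can be sketched as follows; the halving rule, tolerance and names are our own simplifications rather than the exact procedure used in this work.

```python
def next_time_segment(t_start, prev_subnet, error_measure, dt_max, dt_min, tol):
    # Probe how well the subnetwork trained on the previous segment
    # extrapolates beyond t_start; a large error signals a highly nonlinear
    # region, so the next interval is shrunk until the error is acceptable.
    dt = dt_max
    while dt > dt_min and error_measure(prev_subnet, t_start, t_start + dt) > tol:
        dt /= 2.0  # refine the segment in difficult regions
    return t_start + max(dt, dt_min)
```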
### Application to Advanced Autoclave Processing
In advanced composites autoclave processing, accurate prediction of the part's temperature and degree of cure (DoC) is of paramount importance in process design optimization tasks, as it directly influences other aspects of the process outcomes, from process-induced stress and cure shrinkage strains to resin thermal degradation due to excessive exothermic reactions [33]. In addition, the design space is often very complex and high-dimensional and thus requires an extensive exploration by the optimization algorithms. This entails accessing an efficient surrogate model that provides accurate and fast predictions of the part's thermal profile during the cure cycle. The complex geometry of composite parts and the non-uniform airflow within the autoclave impose a non-uniform heat transfer coefficient (HTC) distribution across the part surfaces. Inconsistent resin flow during the curing of the part also introduces a non-uniform fibre volume fraction, which results in unequal cure kinetics behaviour. All of this necessitates incorporating a fast and accurate surrogate model that can predict the thermal profile of the part at various locations under different process configurations. As will be demonstrated in the following sections, the proposed SMT framework is well capable of training accurate and adaptable PINN models for complex and highly nonlinear systems with long temporal domains, which can be appropriately used in autoclave composites process optimization.
The rest of the manuscript is organized as follows. Section 2 provides an overview of PINNs and meta-learning concepts. Section 3 discusses the proposed framework and the role of its components in efficient learning of PINNs in complex systems, with the application/experimentation in composites autoclave processing. Section 4 presents the experimental
results and analyzes the performance of the proposed framework. Section 5 concludes the paper with a summary of the contributions and highlights potential future research directions.
## 2 Methodology: Basics
### Physics-informed neural networks
The basics of PINNs are briefly reviewed in this section, following the notations presented in [9]. In PINNs the solution of PDEs is inferred using neural networks and their universal approximation capabilities [1][9]. Specifically, PDEs can take the form of:
\[\mathbf{u}_{t}+\mathcal{N}\left[\mathbf{u}\right]=0,t\in\left[0,T\right], \mathbf{x}\in\Omega, \tag{1}\]
subject to the initial and boundary conditions:
\[\mathbf{u}\left(0,\mathbf{x}\right)=\mathbf{g}\left(\mathbf{x}\right), \mathbf{x}\in\Omega \tag{2}\]
\[\mathcal{B}\left[\mathbf{u}\right]=0,t\in\left[0,T\right],\mathbf{x}\in \partial\Omega \tag{3}\]
where \(\mathbf{u}\) is the latent solution, \(\mathcal{N}\) is the PDE's differential operator, and \(\mathcal{B}\) represents the boundary operator, which can take the form of Dirichlet, Neumann and Robin boundary conditions. \(\Omega\) and \(\partial\Omega\) denote the domain and the boundary domain, respectively. PINNs learn the solution of the PDE using a deep neural network \(\mathbf{u}_{\theta}\left(t,\mathbf{x}\right)\), where \(\theta\) denotes the network's parameters, which are trained by minimizing the following physics-aware loss function:
Figure 1: Overview of proposed sequential meta-transfer (SMT) learning approach. (a) schematic of autoclave pressure vessel and 1-D AS4/8552 composite part considered for this case study per section 3.4. Autoclave air temperature \(T_{a}\) governs the part’s boundary condition during the curing process. (b) Thermochemical evolution of the part’s mid-section during the curing process. The black dash-dotted lines represent the time intervals used for PINNs sequential learning (illustrated by light and dark green boxes). (c) schematic of the SMT framework based on meta-learning and TL principles. (For interpretation of the colors in the figures, the reader is referred to the web version of this article.)
\[\mathcal{L}\left(\theta\right)=\lambda_{ic}\mathcal{L}_{ic}\left(\theta\right)+ \lambda_{bc}\mathcal{L}_{bc}\left(\theta\right)+\lambda_{r}\mathcal{L}_{r}\left( \theta\right), \tag{4}\]
where
\[\mathcal{L}_{ic}\left(\theta\right)=\frac{1}{N_{ic}}\sum_{i=1}^{N_{ic}}\left| \mathbf{u}_{\theta}\left(0,\mathbf{x}_{ic}^{i}\right)-\mathbf{g}\left( \mathbf{x}_{ic}^{i}\right)\right|^{2}, \tag{5}\]
\[\mathcal{L}_{bc}\left(\theta\right)=\frac{1}{N_{bc}}\sum_{i=1}^{N_{bc}}\left| \mathcal{B}\left[\mathbf{u}_{\theta}\right]\left(t_{bc}^{i},\mathbf{x}_{bc}^{ i}\right)\right|^{2}, \tag{6}\]
\[\mathcal{L}_{r}\left(\theta\right)=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left| \frac{\partial\mathbf{u}_{\theta}}{\partial t}\left(t_{r}^{i},\mathbf{x}_{r}^ {i}\right)+\mathcal{N}\left[\mathbf{u}_{\theta}\right]\left(t_{r}^{i},\mathbf{ x}_{r}^{i}\right)\right|^{2}. \tag{7}\]
\(\left\{x_{ic}^{i}\right\}_{i=1}^{N_{ic}}\), \(\left\{t_{bc}^{i},x_{bc}^{i}\right\}_{i=1}^{N_{bc}}\) and \(\left\{t_{r}^{i},x_{r}^{i}\right\}_{i=1}^{N_{r}}\) represent the initial, boundary and collocation points, respectively, and each \(N\) specifies the corresponding dataset size used for training. The loss weight hyperparameter \(\lambda\) determines the influence of each loss component and can be specified by the user or learned and tuned as part of PINN training.
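For illustration, Equations (4)-(7) translate into a few lines of PyTorch; the boundary and PDE operators are passed in as callables, and all names here are placeholders rather than an excerpt of an existing implementation.

```python
import torch

def pinn_loss(u, g, bc_op, n_op, x_ic, t_bc, x_bc, t_r, x_r, lam=(1.0, 1.0, 1.0)):
    # Initial-condition loss, Eq. (5): u(0, x) should match g(x).
    loss_ic = ((u(torch.zeros_like(x_ic), x_ic) - g(x_ic)) ** 2).mean()
    # Boundary loss, Eq. (6): the boundary operator applied to u.
    loss_bc = (bc_op(u, t_bc, x_bc) ** 2).mean()
    # Residual loss, Eq. (7): u_t + N[u], with u_t via automatic differentiation.
    t_r = t_r.detach().requires_grad_(True)
    u_val = u(t_r, x_r)
    u_t = torch.autograd.grad(u_val.sum(), t_r, create_graph=True)[0]
    loss_r = ((u_t + n_op(u, t_r, x_r)) ** 2).mean()
    return lam[0] * loss_ic + lam[1] * loss_bc + lam[2] * loss_r
```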
### Meta-learning
Human intelligence has the advantage of being able to quickly learn new tasks by leveraging prior experiences from related tasks. That is, humans can quickly grasp new concepts or tasks by building upon their existing knowledge and experiences. Various techniques in ML have been developed in an attempt to mimic such rapid adaptation of human intelligence. Meta-learning, in particular, enables models to learn from a set of related tasks with the goal of fast generalization and adaptation to new tasks. This is as opposed to conventional ML, in which models are trained on a specific task with a large dataset [34]. With its learning-to-learn mechanism, meta-learning learns to accumulate experiences from relevant tasks and uses them toward improving the learning of new tasks using a few data points. Training a meta-learning model results in a set of _optimal_ parameters which can be used for initializing a base learner for fast adaptation to a new task [35]. To accomplish this, _gradient-based_ meta-learning models employ a bi-level optimization procedure. In particular, the "inner" optimization is responsible for learning a given task (base-learner), while the "outer" algorithm updates the base-learner in a way that improves the meta-training objective (meta-learner) [34][36]. Model-agnostic meta-learning (MAML) [35], as the most well-known method in this category, aims to learn a set of initial parameters which require only a few gradient steps to learn a new task. In the following, a description of the meta-learning setup for a few-shot supervised learning problem is provided as per the related work described in [35][34][37].
For training the meta-learning model, a distribution over the training tasks \(p\left(\mathcal{T}\right)\) is considered. Each task \(\mathcal{T}_{i}\) consists of a dataset \(\mathcal{D}_{i}\) which is split into a training set \(\mathcal{D}_{i}^{tr}\) and a test set \(\mathcal{D}_{i}^{test}\). Formally, a meta-learner model (e.g., a neural network) expressed by a parameterized function \(f^{\theta}\) is considered. The training starts with the inner step of meta-learning's bi-level optimization. Specifically, a task \(\mathcal{T}_{i}\) is drawn from \(p\left(\mathcal{T}\right)\) and the model parameters \(\theta\) are updated (e.g., via one gradient update) using the training data \(\mathcal{D}_{i}^{tr}\) and the corresponding training loss value \(\mathcal{L}_{\mathcal{T}_{i}}\left(\theta,\mathcal{D}_{i}^{tr}\right)\):
\[\theta_{i}^{\prime}\leftarrow\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}\left(\theta,\mathcal{D}_{i}^{tr}\right) \tag{8}\]
where \(\alpha\) is the base learner's step size. Then, the updated parameters \(\theta_{i}^{\prime}\) are used to evaluate the model performance against the test set, \(\mathcal{L}_{\mathcal{T}_{i}}\left(\theta_{i}^{\prime},\mathcal{D}_{i}^{test}\right)\). The goal here is to leverage \(\mathcal{D}_{i}^{tr}\) toward learning task-specific parameters which minimize the loss value of the test set. This procedure is repeated for all the tasks drawn from \(p\left(\mathcal{T}\right)\) during the training. Next, in the outer loop optimization, the calculated test losses from all the tasks used in the inner loop training phase \(\left\{\mathcal{L}_{\mathcal{T}_{i}}\left(\theta_{i}^{\prime},\mathcal{D}_{i}^{test}\right)\right\}_{\mathcal{T}_{i}\in p\left(\mathcal{T}\right)}\) are used to optimize the meta-learner's parameters \(\theta\) as follows:
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{\mathcal{T}_{i}\in p\left( \mathcal{T}\right)}\mathcal{L}_{\mathcal{T}_{i}}\left(\theta_{i}^{{}^{\prime}},\mathcal{D}_{i}^{test}\right) \tag{9}\]
where \(\beta\) is the meta-learner's step size. Once trained, the base learner can be used as the optimal initialization state for learning new tasks with a small number of gradient steps and few shots.
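As a concrete illustration, the following minimal JAX sketch implements the bi-level update of Eqs. (8) and (9); the step sizes and the `task_loss` interface are illustrative assumptions, not the released implementation.

```python
import jax

def maml_step(theta, tasks, task_loss, alpha=1e-5, beta=1e-3):
    """One meta-iteration implementing Eqs. (8)-(9)."""
    def adapt(theta, task):
        # Eq. (8): one inner gradient step on the task's training split.
        grads = jax.grad(task_loss)(theta, task["train"])
        return jax.tree_util.tree_map(lambda p, g: p - alpha * g, theta, grads)

    def meta_loss(theta):
        # Eq. (9): sum of test losses evaluated at the task-adapted parameters.
        return sum(task_loss(adapt(theta, t), t["test"]) for t in tasks)

    meta_grads = jax.grad(meta_loss)(theta)
    return jax.tree_util.tree_map(lambda p, g: p - beta * g, theta, meta_grads)
```

Note that differentiating `meta_loss` propagates gradients through the inner update, which is the second-order behaviour of MAML.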
## 3 Methods: Proposed Sequential Meta-Transfer (SMT) Learning and Composites Processing Case Study
Figure 1 shows the overview of the proposed SMT framework; Figure 1.a illustrates the selected autoclave manufacturing example which will be discussed in detail in section 3.4. Concerning the SMT framework, as shown in Figure 1.b, the _long_ time domain is first broken down into small intervals. Leveraging the adaptive temporal segmentation, the time domain is divided into finer time intervals where the system exhibits highly nonlinear behaviour (in Figure 1.b, notice the shorter time intervals around the sharp transition of the DoC). Then, at each time segment, instead of learning task-specific network parameters, a sub-meta-learner is trained to learn a set of optimal initial parameters which enables a _fast adaptation_ to a range of relevant tasks (e.g., different boundary condition configurations). This is deemed a major advantage compared to the conventional PINNs for which a small change in the system settings requires training the model from scratch. PINNs trained with SMT, on the other hand, can adapt to new configurations using significantly fewer training iterations (i.e., gradient steps). The sub-meta-learners are trained in a sequential manner (Figure 1.c). Once each sub-learner is trained, it is used to initialize the meta-learner for the next time interval (via TL). Since the tasks being learned in previous time intervals are similar to the current task (i.e., the physics remains the same while the initial and boundary conditions change slightly), transferring knowledge from previously learned meta-learners can hugely facilitate the training procedure in the following time segments and hence, further increase the temporal and computational efficiency. The transferred meta-learner, on the other hand, is readily fine-tuned for the new initial and boundary conditions.
In the sub-sections that follow, details of each component of the proposed adaptive SMT learning framework for PINNs are described.
### Sequential learning in PINN
This section introduces and compares the application of two sequential learning strategies, namely, TM [17] and bc-PINN [4], in composites autoclave processing. As illustrated in Figure 2, both methods revolve around the idea that decomposing the time domain into small segments facilitates the training of PINNs for highly nonlinear systems. Specifically, in TM, the time domain is divided into \(n\) time intervals. For each time interval [\(t_{i-1},t_{i}\)], a subnetwork \(f_{i}^{\theta}\) is assigned (Figure 2.a). The training is initiated with the first time interval [\(t_{0},t_{1}\)] using the system's initial condition at \(t=0\). Once \(f_{1}\) is trained, its predictions at \(t_{1}\) are used as the initial condition of the second time interval [\(t_{1},t_{2}\)] for training the second subnetwork \(f_{2}\). This procedure is repeated until the solution of all time intervals is learned and then the individual subnetworks are combined in order to provide predictions at any point within the time domain. In contrast to TM, in bc-PINN, only _one network_ is used to learn the long time domain in a sequential manner. This is done by introducing an additional loss term which ensures that the network retains the learned knowledge from previous time segments. Specifically, a new loss term is added to the PINN's loss function (Eq. (4)) that satisfies the solution for all the previous time intervals:
\[\mathcal{L}\left(\theta\right)=\lambda_{ic}\mathcal{L}_{ic}\left(\theta \right)+\lambda_{bc}\mathcal{L}_{bc}\left(\theta\right)+\lambda_{r}\mathcal{L }_{r}\left(\theta\right)+\lambda_{LL}\mathcal{L}_{LL}\left(\theta\right). \tag{10}\]
Here, \(\mathcal{L}_{LL}\) represents the loss accounting for the departure of the network from the already learned solution in the previous time intervals. After training at each time step, the learned weights are used to make predictions at some random data points from the same time intervals, and the predictions are stored as "true" labels to be used for calculating \(\mathcal{L}_{LL}\) when training the subsequent time intervals. As demonstrated in Figure 2.b and similar to TM, the time domain is initially divided into \(n\) time segments. At each time step, the network \(h^{\theta}\) is trained/fine-tuned to learn the underlying solution via the initial condition, boundary condition and PDE loss components, while ensuring knowledge retention from previous time steps using the added loss term. This means that for learning the time interval [\(t_{n-1},t_{n}\)], bcPINN requires access to the network's predictions in all previously learned time intervals [\(0,t_{n-1}\)]. This is as opposed to TM, in which only the learned latent solution in the previous time interval [\(t_{n-2},t_{n-1}\)] is required.
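A minimal sketch of the bcPINN objective in Eq. (10) is given below; the component losses `l_ic`, `l_bc`, `l_r` and the `predict` routine are assumed placeholders rather than the paper's implementation.

```python
import jax.numpy as jnp

def bcpinn_loss(params, batch, memory, lam):
    """Eq. (10): standard PINN terms plus the knowledge-retention term L_LL.

    `l_ic`, `l_bc`, `l_r` and `predict` are assumed to be defined elsewhere;
    `memory` holds points sampled from previously learned intervals together
    with the predictions stored right after those intervals were trained.
    """
    loss = (lam["ic"] * l_ic(params, batch)
            + lam["bc"] * l_bc(params, batch)
            + lam["r"] * l_r(params, batch))
    # L_LL: penalize departure from the already learned solution.
    u_now = predict(params, memory["t"], memory["x"])
    return loss + lam["LL"] * jnp.mean((u_now - memory["u_stored"]) ** 2)
```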
### Meta-transfer learning
Sequential learning methods discussed in section 3.1 focus on learning a specific realization of a PDE system. For instance, the nonlinear thermochemical behavior of a composite part in an autoclave with a specific set of configurations (i.e., initial and boundary conditions, and material properties) can be captured using the TM approach by training subnetworks on multiple time intervals. However, the proposed SMT method aims to train a model that enables efficient and rapid adaptation of PINNs to a _distribution_ over tasks. To accomplish this, the subnetworks in the TM approach are replaced with _meta-learners_, which are trained using a support set consisting of a set of tasks defined for training
purposes. Specifically in this study, various tasks are defined by modifying the boundary condition parameters (see section 3.4 for details.)
One particular limitation of meta-learning is the method's poor performance on out-of-distribution tasks. This becomes more evident when these tasks introduce a drastic domain shift [37]. This is especially true in the sequential learning of PINNs, where transitioning from one time interval to another can result in significant discrepancies in initial/boundary conditions and the system's behavior. In such a scenario, one might need to train the meta-learners from scratch for each time interval in order to ensure an efficient adaptation to new tasks. However, this can be infeasible in PINNs' sequential learning setting, as training the meta-learners from scratch for multiple time intervals is prohibitively demanding and time-consuming. One solution is to _transfer_ the learned knowledge from the trained meta-learner at hand to the training of the subsequent meta-learners in the following time intervals. Inspired by the work in [37], here, a _meta-transfer_ learning strategy is employed to make the training of the proposed sequential framework more efficient (Figure 1.c). In particular, the training begins with the first meta-learner (ML1), trained using the initial and boundary conditions of the first time interval \([0,t_{1}]\). Once trained, ML1's learned weights are used to initialize the second meta-learner (ML2), trained for the second time interval \([t_{1},t_{2}]\). This process is repeated until the meta-learners associated with all time intervals are trained.
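The following sketch outlines this sequential transfer, assuming a `train_meta_learner` routine that runs the bi-level updates of Eqs. (8)-(9) restricted to a single time interval; all names are illustrative.

```python
def train_smt(intervals, support_tasks, theta):
    """Train one meta-learner per time interval, transferring weights forward
    (Figure 1.c). `train_meta_learner` is an assumed routine running the
    bi-level updates of Eqs. (8)-(9) on one interval of the support tasks."""
    meta_learners = []
    for interval in intervals:
        # Warm-start from the previous interval's meta-learner (TL step).
        theta = train_meta_learner(theta, support_tasks, interval)
        meta_learners.append(theta)
    return meta_learners
```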
### Adaptive temporal segmentation
Sequential learning methods have proven to be effective in accurately learning regions with rapid shifts and kinks in the boundary condition or highly nonlinear behaviour in the latent solution function (e.g., temperature and DoC in composites processing). It has also been shown that employing smaller time segment sizes can significantly improve the PINN's performance. However, this comes with the trade-off of higher computational costs, as it entails training more subnetworks. Thus, one would like to avoid making the time interval _too_ small. To ensure that the segment length is appropriately proportionate to the complexity of the system's behaviour, and to avoid unnecessary computational expenses, the following adaptive segmentation strategy is implemented. Initially, the temporal domain is divided into \(n\) segments of equal length and the training begins with the first interval (Figure 1.b). Once the first subnetwork
Figure 2: Schematic of TM (a) and bcPINN (b) sequential learning approaches. Both methods use the “learned prior knowledge” from previous time intervals as part of learning the “training zone”. Specifically, the initial condition at \(t_{n}\) is determined by the learned latent solution in the previous time intervals (indicated by red dashed line). In TM, each time interval is learned by an individual subnetwork \(f_{i}^{\theta}\), whereas in bcPINN one network \(h^{\theta}\) is employed to estimate the solution in a sequential manner. The additional loss term \(\mathcal{L}_{LL}\left(\theta\right)\) in bcPINN is responsible for maintaining the prior learned knowledge across all the previous time intervals.
is trained and a desirable training/test loss is achieved, the optimized weights are transferred to initialize the next subnetwork. Then, before initiating the training, the training loss of the new subnetwork (with initialized weights) on the second time interval is calculated. This loss value can be viewed as a representative of the level of discrepancy between the source and target tasks, as well as of the difficulty level of the target task (e.g., in terms of nonlinearities). The loss value on the new task is then compared against the training loss of the previous task (source); if the difference exceeds a user-defined threshold \(\epsilon\), the new time segment is halved, and the loss value for the new time segment (now half its original length) is calculated and compared again. This step is repeated until an acceptable initial loss is attained, and then the training process begins. The benefit of the adaptive segmentation approach is twofold. First, reducing the length of the time interval makes the training of "stiff" and highly nonlinear systems easier. Second, from the TL point of view, a large loss value on the target task using the weights of the source network signals a considerable discrepancy between the source and target tasks (here, neighbouring time intervals), and thus yields poor TL performance. By shortening the time interval, the points farther from the source domain are removed, which results in a target domain with an input space closer to the source domain. Above all, this strategy ensures that more computational capacity is allocated only to "difficult" regions.
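A sketch of the halving procedure is shown below, under the assumption of an `eval_loss` routine that measures the training loss of the transferred weights on a candidate segment; the interface is illustrative.

```python
def refine_segment(theta_prev, loss_prev, t0, t1, eval_loss, eps):
    """Halve the upcoming segment [t0, t1] until the initial loss with the
    transferred weights is within `eps` of the source segment's loss.
    `eval_loss(theta, t0, t1)` is an assumed evaluation routine."""
    while eval_loss(theta_prev, t0, t1) - loss_prev > eps:
        t1 = t0 + 0.5 * (t1 - t0)  # halve the new time segment
    return t1
```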
### Experimentation: Autoclave processing case study
Autoclave processing is a widely employed method in the manufacturing of advanced composite structures. During this process, the composite part undergoes a pre-defined temperature and pressure cycle, known as the "cure cycle", with multiple heat ramps and isothermal stages enforced by the autoclave's air temperature [33]. The objective is to cure the resin matrix in a way that an optimal resin-fibre distribution is obtained and void/defect occurrence is minimized. The quality of the manufactured part highly depends on the process configurations as well as the properties of the raw materials employed. Temperature and DoC (indicative of the resin's chemical advancement) are two of the key _state variables_ in the manufacture of composite materials, influencing not only the thermochemical behaviour of the part but also the resin flow, the part's residual stress propagation and its deformation [38]. Due to the complex nature of the curing process, the part's temperature and DoC exhibit a nonlinear evolution with rapid shifts (Figure 1.b).
The thermochemical behaviour of composites during the curing process is governed by an anisotropic heat conduction equation equipped with an internal heat generation term \(\dot{Q}\) representing the exothermic curing reaction of the resin matrix [38]:
\[\frac{\partial}{\partial t}\left(\rho C_{p}T\right)=\frac{\partial}{\partial x }\left(k_{xx}\frac{\partial T}{\partial x}\right)+\frac{\partial}{\partial y }\left(k_{yy}\frac{\partial T}{\partial y}\right)+\frac{\partial}{\partial z} \left(k_{zz}\frac{\partial T}{\partial z}\right)+\dot{Q} \tag{11}\]
where \(\rho\) denotes the part's density, \(C_{p}\) is the specific heat capacity and \(k_{ii}\) represent anisotropic thermal conductivity coefficients. They can be calculated using local resin and fiber properties as well as fiber volume fraction. The heat generation term \(\dot{Q}\) in (11) can be expressed as:
\[\dot{Q}=\frac{d\alpha}{dt}\left(1-v_{f}\right)\rho_{r}H_{R} \tag{12}\]
where \(\alpha\) represents the resin's DoC, \(v_{f}\) is the fiber volume fraction, \(\rho_{r}\) is the resin density and \(H_{R}\) is the resin heat of reaction, a measure of the total amount of heat produced during a complete resin curing cycle. \(\frac{d\alpha}{dt}\) is the cure reaction rate and it is governed by the cure kinetics of the resin system. For a one-dimensional heat transfer system, Eq. (11) can be reduced to:
\[\rho C_{p}\frac{\partial T}{\partial t}=k_{xx}\frac{\partial^{2}T}{\partial x ^{2}}+\left(1-v_{f}\right)\rho_{r}H_{R}\frac{d\alpha}{dt}. \tag{13}\]
For the curing process of a composite system with a thermoset resin, the cure rate \(\frac{d\alpha}{dt}\) is governed by the resin's cure kinetics and is often described by an ordinary differential equation. Specifically for 8552 epoxy (the resin system used in this paper), the cure kinetics have already been developed in previous studies [38] and can be expressed as:
\[\frac{d\alpha}{dt}=\frac{K\alpha^{m}(1-\alpha)^{n}}{1+e^{C\left\{\alpha-\left(\alpha_{C0}+\alpha_{CT}T\right)\right\}}},\qquad K=Ae^{-\frac{\Delta E}{RT}} \tag{14}\]
where \(\Delta E\) is the activation energy, \(R\) is the gas constant, and \(\alpha_{C0}\), \(\alpha_{CT}\), \(m\), \(n\), \(C\) and \(A\) are constants determined by experiments. Table 1 summarizes the values of the parameters used in the cure kinetics equations in this study.
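For illustration, Eq. (14) can be evaluated directly with the constants of Table 1; the function below is a sketch (temperature in Kelvin, with \(\Delta E\) converted to J/gmol).

```python
import jax.numpy as jnp

# Cure kinetics constants from Table 1 (Delta E converted to J/gmol).
A, dE, R = 1.53e5, 66.5e3, 8.314
m, n, C = 0.813, 2.74, 43.1
a_C0, a_CT = -1.684, 5.475e-3

def cure_rate(alpha, T):
    """Eq. (14): cure rate d(alpha)/dt at DoC `alpha` and temperature `T` [K]."""
    K = A * jnp.exp(-dE / (R * T))
    return (K * alpha**m * (1.0 - alpha)**n
            / (1.0 + jnp.exp(C * (alpha - (a_C0 + a_CT * T)))))
```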
The initial conditions of the coupled system described above can be specified as:
\[T\left.\right|_{t=0}=T_{0}\left(x\right) \tag{15}\]
\[\alpha\left.\right|_{t=0}=\alpha_{0}\left(x\right) \tag{16}\]
\(T_{0}\) denotes the part's initial temperature and is often considered uniform throughout the part; this study assumes 20 \({}^{\circ}\)C for the part's temperature at the beginning of the curing process. \(\alpha_{0}\) is the initial DoC of the resin system; for an uncured part, it is assumed to be zero or a small value (in this study, a value of 0.001 is used).
The boundary conditions can also be specified by the autoclave air temperature \(T_{a}\left(t\right)\) prescribed by the cure cycle recipe. Specifically, Robin boundary conditions can be defined to incorporate the convective heat transfer between the composite part and the autoclave air [39]:
\[h_{t}\left(T_{a}(t)-T\left.\right|_{x=L}\right)=k_{xx}\frac{\partial T}{ \partial x}\left.\right|_{x=L} \tag{17}\]
\[h_{b}\left(T\left.\right|_{x=0}-T_{a}(t)\right)=k_{xx}\frac{\partial T}{ \partial x}\left.\right|_{x=0} \tag{18}\]
where \(h_{t}\) and \(h_{b}\) refer to the top and bottom HTC values, respectively. It has been shown that the value of the HTC within the autoclave is a strong function of the temperature and pressure of the autoclave air [40]. The presence of multiple parts and tools with various sizes and complex geometries in an autoclave introduces complex airflow patterns, resulting in considerable local variations in the air temperature and, subsequently, different HTC values. This makes the already complex thermochemical analysis of a composite part more intricate, as it necessitates separate evaluations of the part's thermal profile at various locations with different HTC values.
For training the PINN on the curing process of a 1D composite part, the following loss function is employed:
\[\mathcal{L}\left(\theta\right)=\lambda_{ic_{T}}\mathcal{L}_{ic_{T}}\left(\theta\right)+\lambda_{ic_{\alpha}}\mathcal{L}_{ic_{\alpha}}\left(\theta\right)+\lambda_{bc_{t}}\mathcal{L}_{bc_{t}}\left(\theta\right)+\lambda_{bc_{b}}\mathcal{L}_{bc_{b}}\left(\theta\right)+\lambda_{r_{T}}\mathcal{L}_{r_{T}}\left(\theta\right)+\lambda_{r_{\alpha}}\mathcal{L}_{r_{\alpha}}\left(\theta\right), \tag{19}\]
where
\[\mathcal{L}_{r_{T}}\left(\theta\right)=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left| \rho C_{p}\frac{\partial T}{\partial t}\left(t_{r}^{i},\mathbf{x}_{r}^{i} \right)-k_{xx}\frac{\partial^{2}T}{\partial x^{2}}\left(t_{r}^{i},\mathbf{x}_ {r}^{i}\right)-\left(1-v_{f}\right)\rho_{r}H_{R}\frac{d\alpha}{dt}\left(t_{r} ^{i},\mathbf{x}_{r}^{i}\right)\right|^{2} \tag{20}\]
\[\mathcal{L}_{r_{\alpha}}\left(\theta\right)=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}} \left|\frac{d\alpha}{dt}\left(t_{r}^{i},\mathbf{x}_{r}^{i}\right)-\frac{K \alpha^{m}(1-\alpha)^{n}}{1+e^{C\left\{\alpha-\left(\alpha_{C0}+\alpha_{CT}T \right)\right\}}}\left(t_{r}^{i},\mathbf{x}_{r}^{i}\right)\right|^{2} \tag{21}\]
\[\mathcal{L}_{bc_{t}}\left(\theta\right)=\frac{1}{N_{bc_{t}}}\sum_{i=1}^{N_{bc _{t}}}\left|h_{t}\left(T_{a}(t_{bc_{t}}^{i})-T\left(t_{bc_{t}}^{i},\mathbf{x} _{bc_{t}}^{i}\right)\right)-k_{xx}\frac{\partial T}{\partial x}\left(t_{bc_{ t}}^{i},\mathbf{x}_{bc_{t}}^{i}\right)\right|^{2} \tag{22}\]
\begin{table}
\begin{tabular}{l l l} \hline Parameter & Description & Value \\ \hline \(\Delta E\) & Activation energy & 66.5 (kJ/gmol) \\ \(R\) & Gas constant & 8.314 (J/gmol\(\cdot\)K) \\ \(A\) & Pre-exponential cure rate coefficient & \(1.53\times 10^{5}\) (1/s) \\ \(m\) & First exponential constant & 0.813 \\ \(n\) & Second exponential constant & 2.74 \\ \(C\) & Diffusion constant & 43.1 \\ \(\alpha_{C0}\) & Critical degree of cure at \(T=0\) K & -1.684 \\ \(\alpha_{CT}\) & Critical resin degree of cure constant & \(5.475\times 10^{-3}\) (1/K) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of parameters used in heat transfer and cure kinetics governing equations
\[\mathcal{L}_{bc_{b}}\left(\theta\right)=\frac{1}{N_{bc_{b}}}\sum_{i=1}^{N_{bc_{b}}} \left|h_{b}\left(T\left(t_{bc_{b}}^{i},\mathbf{x}_{bc_{b}}^{i}\right)-T_{a}(t_{ bc_{b}}^{i})\right)-k_{xx}\frac{\partial T}{\partial x}\left(t_{bc_{b}}^{i}, \mathbf{x}_{bc_{b}}^{i}\right)\right|^{2} \tag{23}\]
Subscripts \(T\), \(\alpha\), \(t\) and \(b\) refer to the temperature, DoC, top-side and bottom-side loss components, respectively.
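As an illustration of how the residual in Eq. (20) can be formed via automatic differentiation, the sketch below assumes a `net` function returning scalar \(T\) and \(\alpha\) predictions at a single \((t,x)\); squaring and averaging the residual over the collocation points yields Eq. (20).

```python
import jax

def thermal_residual(params, t, x, rho_Cp, k_xx, v_f, rho_r, H_R):
    """Pointwise residual of Eq. (13), used inside the loss term of Eq. (20).
    `net(params, t, x) -> (T, alpha)` is an assumed scalar-output network."""
    T = lambda t, x: net(params, t, x)[0]
    a = lambda t, x: net(params, t, x)[1]
    dT_dt = jax.grad(T, argnums=0)(t, x)
    d2T_dx2 = jax.grad(jax.grad(T, argnums=1), argnums=1)(t, x)
    da_dt = jax.grad(a, argnums=0)(t, x)
    return rho_Cp * dT_dt - k_xx * d2T_dx2 - (1.0 - v_f) * rho_r * H_R * da_dt
```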
## 4 Results and Discussion
To evaluate the prediction performance of PINNs in the above composite autoclave processing case study, a complex cure recipe with two isothermal holds is considered (Figure 1.b). The cure cycle involves two heat ramps with equal heat rates of 2 \({}^{\circ}\)C/min and two isothermal stages at 110 \({}^{\circ}\)C and 180 \({}^{\circ}\)C. A 3 cm AS4/8552 prepreg is considered as the raw material for this case study. A fully-connected architecture with 5 hidden layers and 64 neurons per layer is employed for all networks. The hyperbolic tangent activation function is used in all hidden layers. The location on the 1D composite part \(x\) and time \(t\) comprise the networks' input space, and the output layer is equipped with two neurons dedicated to the prediction of the part's temperature \(T\) and DoC \(\alpha\). For the remainder of the paper, the same network architecture is used for all case studies unless mentioned otherwise. All networks are trained using the Adam optimizer with the default hyperparameters [41]. An initial learning rate of \(1\times 10^{-5}\) with an exponential decay schedule (decay rate of 0.9 per 5000 steps) is utilized. For all networks, the weight hyperparameters \(\lambda\) in Eq. (19) are set to 1, except for the initial condition terms (\(\lambda_{ic_{T}}\) and \(\lambda_{ic_{\alpha}}\)) which are set to 100. For training SMT's meta-learners, the outer loop uses the same Adam optimizer specifications and learning rate, while the inner loop employs one-step gradient descent with a learning rate of \(1\times 10^{-5}\). The JAX implementation of the SMT framework will be made available at [https://github.com/miladramzy/SequentialMetaTransferPINNs](https://github.com/miladramzy/SequentialMetaTransferPINNs).
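A minimal sketch of this architecture in plain JAX is shown below; the weight initialization scale is an assumption, as the paper does not specify one, and the sketch is not the released implementation.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 64, 64, 64, 64, 64, 2)):
    """Inputs (x, t), 5 hidden layers of 64 units, outputs (T, alpha).
    The 1/sqrt(fan_in) initialization scale is an assumption."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def forward(params, xt):
    h = xt
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)   # hyperbolic tangent activations
    W, b = params[-1]
    return h @ W + b              # linear output layer: (T, alpha)
```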
### Training PINNs with sequential learning strategies
Three training scenarios are considered. First, the original formulation of PINN [1] is used to establish a baseline performance. Next, two sequential learning approaches, namely TM and bcPINN, are implemented to address the shortfalls of the conventional PINN. For both methods, the temporal domain is initially divided into 10 time segments, and the segment lengths and counts are adaptively updated using the method described in section 3.3. This results in 12 intervals, with the two additional intervals occurring around the sharp transition of the DoC (Figure 1.b). The training specifications of the models along with their predictive performance are summarized in Table 2. The models' predictions of the temperature and DoC of the part's mid-point are demonstrated in Figure 3 and Figure 4. The FE simulation results for the same curing process are also overlaid for comparison. The conventional PINN exhibits very poor prediction performance, as it fails to capture the governing equations, especially the drastic shift of the DoC in the middle of the cure cycle. While bcPINN outperforms the conventional PINN and results in a close alignment with the FE simulation, it yields poor predictions in a few regions throughout the curing process. This is evident right after the kinks in the cure cycle and in the proximity of the exotherm (where the part's temperature reaches its maximum value). These are associated with regions where learning the system's solution is the most difficult [42]. Such sudden shifts and nonlinearities introduce significant discrepancies between the distributions of nearby time intervals. For bcPINN, which uses one network to learn all time intervals one after another, such drastic changes can cause the model to forget the knowledge from past intervals. This is due to the fact that once the model encounters an interval with a strong nonlinearity (hence, a significant distribution shift in comparison to previous time intervals/tasks), it generates _large error gradients_ which destructively update the already trained weights. It was also observed that for time segments with high nonlinearities (e.g., the sharp DoC transition), the model fails to retain past learning. In other words, the model prioritizes minimizing the "stiff" PDE/ODE, initial and boundary losses by allowing more room for \(\mathcal{L}_{LL}\) (i.e., the loss associated with the learnings of past time intervals). This trade-off can lead to the model's poor performance on previous time segments.
TM, however, delivers the best performance among the studied approaches and results in the lowest prediction loss (Table 2). It can be observed that by learning complex sub-domains in a sequential manner via temporal domain decomposition, one can avoid the conventional PINN's pitfalls and achieve very accurate predictions. Additionally, in contrast to bcPINN, TM enjoys an easier optimization scheme with fewer loss components, which greatly improves its training time and accuracy. TM is advantageous since it does not require concurrently maintaining high accuracy on the previous intervals while solving the PDEs in difficult regions. It was observed that employing separate subnetworks throughout long time domains can result in better generalization performance. Thus, for the remainder of this paper, TM is used to sequentially train the PINN models as part of the SMT framework.
Figure 4: DoC prediction and maximum absolute error of conventional PINN (a,d), TM (b,e), and bcPINN (c,f) for the mid-point of the composite part cured in a two-hold cure cycle.
Figure 3: Temperature prediction and maximum absolute error of conventional PINN (a,d), TM (b,e), and bcPINN (c,f) for the midpoint of the composite part cured in a two-hold cure cycle.
## 5 Assessing fast task adaptation of sequential meta-transfer learning
As was shown in Section 4.1, TM can achieve superior generalization performance in learning the thermochemical behaviour of composite materials during the curing process in an autoclave. However, it involves multiple stages of training PINN sub-models for various time intervals which can lead to considerable computational expenses. Furthermore, as discussed in section 3.2, PINNs' performance is limited to a specific realization of a PDE system and they lack the necessary flexibility for rapid adaptation to relevant tasks (e.g., different boundary HTCs or variation in fiber volume fraction in composites manufacturing). In order to address this computational bottleneck and improve the generalization and adaptability of PINNs, the proposed SMT method is employed and evaluated. Specifically, 20 relevant tasks (support set) in the curing of 1D composite parts are considered for training by varying the top- and bottom-side HTCs and fixing the rest of the process setting variables. Each training task is obtained by randomly sampling the top and bottom HTCs from the _training distribution_, [40, 120]\(\frac{W}{m^{2}k}\) (Figure 5). Next, SMT's meta-learners are trained throughout the decomposed temporal domain (as elaborated in section 3.3). Once trained, the learned parameters are used as the optimal initial state for fast and data-efficient adaptation to new curing processes with different boundary specifications (i.e., HTCs).
_Remark 1 - Meta-learners' initialization:_ it was noticed that when training the first meta-learner (for the first time interval) from scratch, the PDE and ODE loss components generate large error gradients, which results in exploding gradients and an infinite loss value. To avoid this, we switched off the PDE and ODE loss terms and trained on the initial and boundary points for 1000 epochs, ensuring that the model reaches a more stable region of the optimization space, before switching the physical loss terms back on.
_Remark 2 - SMT training:_ it was realized that the learning rate of the SMT's inner loop plays a significant role in the convergence of the meta-learners, especially for regions with stiff solutions. For all time intervals, the training started with an initial learning rate of \(1\times 10^{-5}\); then, using a stepwise annealing strategy, the learning rate was reduced by a factor of 10 each time no improvement was observed over a specified number of epochs.
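The two remarks can be summarized by the following sketch; the warm-up length of 1000 epochs follows Remark 1, while the patience value and the loss components `l_ic`, `l_bc`, `l_r` are assumed placeholders.

```python
def staged_loss(params, batch, epoch, warmup=1000):
    """Remark 1: train on initial/boundary terms only for the first `warmup`
    epochs, then switch the PDE/ODE residual terms back on."""
    loss = l_ic(params, batch) + l_bc(params, batch)  # assumed components
    if epoch >= warmup:
        loss = loss + l_r(params, batch)
    return loss

def anneal_lr(lr, epochs_without_improvement, patience=500):
    """Remark 2: cut the inner-loop learning rate by a factor of 10 when no
    improvement is seen for `patience` epochs (patience value assumed)."""
    return lr * 0.1 if epochs_without_improvement >= patience else lr
```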
To evaluate SMT, its generalization performance is compared to that of the MTL and TL approaches described in [37] and [24], respectively. For MTL, three tasks with three different sets of HTC values (top HTC-bottom HTC) are selected to be learned ([60-20]\(\frac{W}{m^{2}k}\), [120-70]\(\frac{W}{m^{2}k}\), [80-40]\(\frac{W}{m^{2}k}\)). To train MTL, a network with 6 neurons in the output layer (each task has two neurons for predicting the temperature and DoC) is utilized. The idea is that training on relevant tasks using a single network can encourage learning a hidden state with a more general representation of the solution space and thus facilitate training the network on other similar tasks in a faster and more efficient way. For training a new task (e.g., a new set of HTC values), the output layer is reduced to a two-neuron layer (initialized with the weights of one of the trained tasks) and the rest of the network (the hidden layers) remains frozen. The network is then fine-tuned with the loss components of the new task. For TL, the source network is trained on a curing process with top and bottom HTCs set to 120 \(\frac{W}{m^{2}k}\) and 70 \(\frac{W}{m^{2}k}\), respectively, and then used to initialize the target network for learning a test curing process. Table 3 summarizes the predictive performance (relative \(\mathcal{L}^{2}\) error) of the above models against a test task with a symmetric (identical top and bottom values) HTC of 50 \(\frac{W}{m^{2}k}\). The models are fine-tuned for 1, 100 and 1000 iterations using 200 collocation, 40 boundary and 20 initial points per time interval.
It is worth noting that the training datasets used for this evaluation are significantly smaller in size and the number of training epochs is much less compared to the standard training procedures commonly used in the literature for training PINNs. This is done in order to evaluate the performance of the models for rapid and efficient adaptation to new tasks with a few gradient steps and very limited training data. As the number of training points and training iterations increases, the performance of all models is expected to enhance accordingly. The results show that SMT requires as few as one epoch to efficiently adapt to the new task. This is not achieved with TL and MTL as they need a longer training period to yield the same performance. This is due to the fact that the meta-learners in SMT are specifically trained to enable fast adaptations to novel scenarios with only a few gradient iterations (in this study, we chose one
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}{c} Time intervals \\ count \end{tabular}} & \multirow{2}{*}{\begin{tabular}{c} Training \\ time \end{tabular}} & \multicolumn{3}{c}{Count per time interval} & \multicolumn{2}{c}{Temperature} & \multicolumn{2}{c}{Degree of cure} \\ \cline{4-10} & & & Collocation & Boundary & Initial & Relative \(\mathcal{L}^{2}\) error & Maximum absolute error & Relative \(\mathcal{L}^{2}\) error & Maximum absolute error \\ \hline PINN [1] & 1 & 25 & 25000 & 5000 & 2000 & \(3.8\times 10^{-2}\) & 43.80 & \(9.98\times 10^{-1}\) & \(8.3\times 10^{-1}\) \\ TM [17] & 12 & 95 & 2000 & 400 & 200 & \(\mathbf{4.3\times 10^{-4}}\) & **1.705** & \(\mathbf{2.4\times 10^{-3}}\) & \(\mathbf{9.1\times 10^{-3}}\) \\ bcPINN [4] & 12 & 81 & 2000 & 400 & 200 & \(5.2\times 10^{-3}\) & 11.46 & \(1.5\times 10^{-2}\) & \(5.4\times 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Training specification and generalization performance of PINN, TM, and bcPINN on composite part’s temperature and DoC predictions.
epoch of fine-tuning for optimizing the meta-learner's inner loop). As the number of epochs increases, TL and MTL begin to perform better and obtain performance similar to SMT. This is also expected, as TL-based fine-tuning improves with longer training periods and more iterations [43]. Table 3 also compares the models' training computational cost in terms of the number of epochs per second. MTL is the slowest, as its loss function comprises 3 PDE loss terms (one per task, as opposed to 1 PDE term for the conventional PINN). SMT's training is slower than that of TL, as it requires computing gradients for both the inner and outer loops when updating its parameters. TL, however, does not add any computational cost to the conventional PINN setting, as it only fine-tunes the source parameters. Finally, Table 3 also shows how many epochs TL and MTL need on average to achieve a relative \(\mathcal{L}^{2}\) error similar to that achieved by SMT after 1 epoch. Clearly, SMT is the dominant method for fast and efficient adaptation across different tasks in PINNs.
The training process of SMT can be further investigated. Due to MTL's relatively high computational cost during the training phase and its comparable predictive performance to TL, in the following analyses only SMT and TL are compared and evaluated as the most viable models. In particular, as indicated in Figure 5, two in-distribution (symmetric \(50\frac{W}{m^{2}k}\) and symmetric \(60\frac{W}{m^{2}k}\)) and one out-of-distribution (symmetric \(40\frac{W}{m^{2}k}\)) tasks are selected for evaluation. Figure 6 shows the models' performance during the first 100 epochs of adaptation to the test tasks. Specifically, for each time interval, the corresponding meta-learner/source model is fine-tuned for 100 epochs (12 intervals, hence 1200 epochs in total) and the test performance on temperature prediction during training is measured. The relative \(\mathcal{L}^{2}\) error and maximum absolute error of SMT and TL under each task are shown in the background, overlaid by the average performance indicated by thick red and black lines. In all time intervals, SMT outperforms TL significantly within the first few epochs. Specifically, in the "stiff" region of the curing process with the sharp transition (minutes 120 to 180 in Figure 5, corresponding to epochs 500 to 800 in Figure 6), SMT was able to keep the temperature's maximum absolute error of in-distribution tasks below \(5^{\circ}\)C. Beyond the first few epochs, although SMT continues to improve its prediction performance, TL's error reduction occurs at a faster rate. This is expected, as meta-learning models are designed to yield accurate predictions with only a few gradient steps, while TL models usually achieve this in the long run with more epochs. Regardless, because of its early advantage, SMT maintains the upper hand over TL even in longer training periods.
Figure 7 compares TL's and SMT's temperature and DoC prediction performance at the first 2 epochs of fine-tuning on the test tasks. FE simulation results are presented as the reference. SMT is able to better capture the part's thermochemical behaviour, especially in the stiff regions, as well as the part's maximum temperature (exotherm). While going from epoch 1 to epoch 2 does not change the TL curves much, SMT exhibits a considerable improvement in its prediction performance. This is, again, a clear indication of SMT's success in fast and efficient adaptation. Moreover, SMT shows more robust behaviour on the out-of-distribution task (Figure 7.c and f) and does not deviate from the true response as much as TL does. This also highlights a shortfall of TL in knowledge transfer, as it performs poorly when the difference between the source and target distributions is considerable. In other words, in such scenarios, TL might require much longer training periods in order to fully adapt to the new task. It should also be pointed out that a few discontinuities (mainly around the stiff region) are evident in SMT's predictions. This is because, in contrast to TL, which leverages fully converged source weights from similar tasks, SMT uses meta-learners trained on a variety of tasks, each with a unique initial condition (as it depends on the predictions in the previous time interval). This requires a few more epochs for SMT to appropriately reduce the initial condition loss and yield a more continuous prediction. Regardless, the overall performance of SMT clearly dominates that of TL.
## 6 Conclusion
This work presented a new sequential meta-transfer-learning approach that concurrently addresses PINNs' frequently observed poor performance in solving 'stiff' problems with long temporal domains, as well as their slow and costly adaptation/generalization to new tasks (e.g., when the system parameters or boundary conditions change). This unified framework breaks down the input domain into smaller time segments, and hence easier PDE problems. It then trains the PINN model sequentially over all time intervals using a set of subnetworks. The learning framework, on the other hand, leverages the capabilities of meta-transfer learning for fast and efficient adaptations to new tasks
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{PINN model} & \multicolumn{3}{c}{Temperature prediction relative \(\mathcal{L}^{2}\) error} & \multirow{2}{*}{\begin{tabular}{c} Training speed \\ (epochs/s) \end{tabular}} & \multirow{2}{*}{\begin{tabular}{c} Adaptation \\ epochs \end{tabular}} \\ \cline{2-4} & 1 epoch & 100 epochs & 1000 epochs & & \\ \hline TL [24] & \(1.2\times 10^{-2}\) & \(4.3\times 10^{-3}\) & \(1.9\times 10^{-3}\) & **95** & 115 \\ MTL [37] & \(1.4\times 10^{-2}\) & \(3.8\times 10^{-3}\) & \(1.8\times 10^{-3}\) & 31 & 108 \\ **SMT (ours)** & \(\mathbf{3.2\times 10^{-3}}\) & \(\mathbf{2.1\times 10^{-3}}\) & \(\mathbf{1.5\times 10^{-3}}\) & 44 & **1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparing predictive performance of TL, MTL and SMT against a test task with symmetric HTC of \(50\frac{W}{m^{2}k}\).
Figure 5: Temperature and DoC profiles of SMT support set (training tasks), TL source task, in-distribution (50\(\frac{W}{m^{2}k}\)) and out-of-distribution (40\(\frac{W}{m^{2}k}\)) test tasks. The profiles indicate the behaviour of the material at its middle section.
Figure 6: Comparison of SMT and TL performance in temperature prediction. Three test scenarios, namely two in-distribution tasks (symmetric 50 and 60) and one out-of-distribution task (symmetric 40), are used, and relative \(\mathcal{L}^{2}\) error (a) and maximum absolute error (b) values are recorded. The individual performances for each case are shown by shades in the background, overlaid by thick red and black lines representing the average performances. Each time interval is trained for 100 epochs (hence 1200 epochs in total).
Figure 7: Temperature and DoC prediction performance of SMT and TL after 1 (a, c, e) and 2 (b, d, f) epochs of finetuning against the test tasks. FE simulation result is presented for reference.
(e.g., systems with different boundary conditions). Specifically, at each time interval, instead of training a conventional PINN, the framework learns a meta-learner which produces optimal initial weights suitable for fast adaptation to new tasks with only a few gradient steps. The presented hybrid and adaptive learning method was evaluated on an advanced composites autoclave processing case study, where it exhibited dominant performance in comparison to other approaches in the literature. Namely, in comparison to TL and MTL, the proposed SMT method expedites the task adaptation of PINN models by a factor of 100. Finally, one way to improve the accuracy of the proposed method is to incorporate _hard_ constraints to satisfy the initial conditions. In sequential learning, the initial condition of each time segment is determined by the approximation of the sub-network trained in the previous time interval, and training the PINN model by minimizing the loss function can still leave some residual error in the initial condition. This can cause error propagation over the time intervals and thus yield considerable deviation from the system's true behaviour. Utilizing hard constraints can alleviate such effects by strictly enforcing the initial conditions (no residual error) during the training of each sub-network.
## 7 Acknowledgments
This study was financially supported by the New Frontiers in Research Fund - Exploration stream (award number: NFRFE-2019-01440).
|
2303.11249 | What Makes Data Suitable for a Locally Connected Neural Network? A
Necessary and Sufficient Condition Based on Quantum Entanglement | The question of what makes a data distribution suitable for deep learning is
a fundamental open problem. Focusing on locally connected neural networks (a
prevalent family of architectures that includes convolutional and recurrent
neural networks as well as local self-attention models), we address this
problem by adopting theoretical tools from quantum physics. Our main
theoretical result states that a certain locally connected neural network is
capable of accurate prediction over a data distribution if and only if the data
distribution admits low quantum entanglement under certain canonical partitions
of features. As a practical application of this result, we derive a
preprocessing method for enhancing the suitability of a data distribution to
locally connected neural networks. Experiments with widespread models over
various datasets demonstrate our findings. We hope that our use of quantum
entanglement will encourage further adoption of tools from physics for formally
reasoning about the relation between deep learning and real-world data. | Yotam Alexander, Nimrod De La Vega, Noam Razin, Nadav Cohen | 2023-03-20T16:34:39Z | http://arxiv.org/abs/2303.11249v5 | What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement
###### Abstract
The question of what makes a data distribution suitable for deep learning is a fundamental open problem. Focusing on locally connected neural networks (a prevalent family of architectures that includes convolutional and recurrent neural networks as well as local self-attention models), we address this problem by adopting theoretical tools from quantum physics. Our main theoretical result states that a certain locally connected neural network is capable of accurate prediction over a data distribution _if and only if_ the data distribution admits low quantum entanglement under certain canonical partitions of features. As a practical application of this result, we derive a preprocessing method for enhancing the suitability of a data distribution to locally connected neural networks. Experiments with widespread models over various datasets demonstrate our findings. We hope that our use of quantum entanglement will encourage further adoption of tools from physics for formally reasoning about the relation between deep learning and real-world data.
## 1 Introduction
Deep learning is delivering unprecedented performance when applied to data modalities involving images, text and audio. On the other hand, it is known both theoretically and empirically [53; 1] that there exist data distributions over which deep learning utterly fails. The question of _what makes a data distribution suitable for deep learning_ is a fundamental open problem in the field.
A prevalent family of deep learning architectures is that of _locally connected neural networks_. It includes, among others: _(i)_ convolutional neural networks, which dominate the area of computer vision; _(ii)_ recurrent neural networks, which were the most common architecture for sequence (_e.g._ text and audio) processing, and are experiencing a resurgence by virtue of S4 models [26]; and _(iii)_ local variants of self-attention neural networks [60]. Conventional wisdom postulates that data distributions suitable for locally connected neural networks are those exhibiting a "local nature," and there have been attempts to formalize this intuition [64; 28; 15]. However, to the best of our knowledge, there are no characterizations providing necessary and sufficient conditions for a data distribution to be suitable to a locally connected neural network.
A seemingly distinct scientific discipline tying distributions and computational models is _quantum physics_. There, distributions of interest are described by _tensors_, and the associated computational
models are _tensor networks_. While there is a shortage of formal tools for assessing the suitability of data distributions to deep learning architectures, there exists a widely accepted theory that allows for assessing the suitability of tensors to tensor networks. The theory is based on the notion of _quantum entanglement_, which quantifies dependencies that a tensor admits under partitions of its axes (for a given tensor \(\mathcal{A}\) and a partition of its axes to sets \(\mathcal{K}\) and \(\mathcal{K}^{c}\), the entanglement is a non-negative number quantifying the dependence that \(\mathcal{A}\) induces between \(\mathcal{K}\) and \(\mathcal{K}^{c}\)).
In this paper, we apply the foregoing theory to a tensor network equivalent to a certain locally connected neural network, and derive theorems by which fitting a tensor is possible if and only if the tensor admits low entanglement under certain _canonical partitions_ of its axes. We then consider the tensor network in a machine learning context, and find that its ability to attain low approximation error, _i.e._ to express a solution with low population loss, is determined by its ability to fit a particular tensor defined by the data distribution, whose axes correspond to features. Combining the latter finding with the former theorems, we conclude that a _locally connected neural network is capable of accurate prediction over a data distribution if and only if the data distribution admits low entanglement under canonical partitions of features_. Experiments with different datasets corroborate this conclusion, showing that the accuracy of common locally connected neural networks (including modern convolutional, recurrent, and local self-attention neural networks) is inversely correlated to the entanglement under canonical partitions of features in the data (the lower the entanglement, the higher the accuracy, and vice versa).
The above results bring forth a recipe for enhancing the suitability of a data distribution to locally connected neural networks: given a dataset, search for an arrangement of features which leads to low entanglement under canonical partitions, and then arrange the features accordingly. Unfortunately, the above search is computationally prohibitive. However, if we employ a certain correlation-based measure as a surrogate for entanglement, _i.e._ as a gauge for dependence between sides of a partition of features, then the search converts into a succession of _minimum balanced cut_ problems, thereby admitting use of well-established graph theoretical tools, including ones designed for large scale [29; 57]. We empirically evaluate this approach on various datasets, demonstrating that it substantially improves prediction accuracy of common locally connected neural networks (including modern convolutional, recurrent, and local self-attention neural networks).
The data modalities to which deep learning is most commonly applied -- namely ones involving images, text and audio -- are often regarded as natural (as opposed to, for example, tabular data fusing heterogeneous information). We believe the difficulty in explaining the suitability of such modalities to deep learning may be due to a shortage in tools for formally reasoning about natural data. Concepts and tools from physics -- a branch of science concerned with formally reasoning about natural phenomena -- may be key to overcoming said difficulty. We hope that our use of quantum entanglement will encourage further research along this line.
The remainder of the paper is organized as follows. Section 2 reviews related work. Section 3 establishes preliminaries, introducing tensors, tensor networks and quantum entanglement. Section 4 presents the theorems by which a tensor network equivalent to a locally connected neural network can fit a tensor if and only if this tensor admits low entanglement under canonical partitions of its axes. Section 5 employs the preceding theorems to show that in a classification setting, accurate prediction is possible if and only if the data admits low entanglement under canonical partitions of features. Section 6 translates this result into a practical method for enhancing the suitability of data to common locally connected neural networks. Finally, Section 7 concludes. For simplicity, we treat locally connected neural networks whose input data is one-dimensional (_e.g._ text and audio), and defer to Appendix E an extension of the analysis and experiments to models intaking data of arbitrary dimension (_e.g._ two-dimensional images).
## 2 Related Work
Characterizing formal properties of data distributions that make them suitable for neural networks is a major open problem in deep learning. A number of papers provide sufficient conditions on a data distribution that allow it to be learnable by some neural network architecture [5; 6; 40; 20; 21; 44]. However, the conditions under which learnability is proven are restrictive, and are not argued to be necessary. There have also been attempts to quantify the structure of data via quantum entanglement and mutual information [41; 15; 39; 7; 64; 28], yet without formally relating properties of the data to
the prediction accuracy achievable by neural networks. To the best of our knowledge, this paper is the first to derive a necessary and sufficient condition on the data distribution for accurate prediction by locally connected neural networks.
Our work follows a long line of research employing tensor networks as theoretical models for studying deep learning. These include works analyzing the expressiveness of different neural network architectures [12; 55; 9; 10; 54; 14; 35; 2; 36; 30; 37; 31; 51], their generalization properties [38], and the implicit regularization induced by optimization [48; 49; 50; 62; 24]. We focus on expressiveness, yet our results differ from the aforementioned works in that we incorporate the data distribution into our analysis and tackle the question of what makes data suitable for deep learning.
The algorithm we propose for enhancing the suitability of data to locally connected neural networks can be considered a form of representation learning. Representation learning is a vast field, far too broad for us to survey here (for an overview see [42]). Most representation learning methods are concerned with dimensionality reduction, i.e., the discovery of low-dimensional structure in high-dimensional data, e.g., [3; 33]. Such methods are complementary to our approach, which preserves the dimensionality of the input and seeks to learn a rearrangement of features that is better suited to locally connected neural networks. A notable work of this type is IGTD [65], which arranges features in a dataset to improve its suitability for convolutional neural networks. In contrast to IGTD, our method is theoretically grounded, and as demonstrated empirically in Section 6, it leads to higher improvements in prediction accuracy for locally connected neural networks.
## 3 Preliminaries
NotationWe use \(\left\|\cdot\right\|\) and \(\left\langle\cdot,\cdot\right\rangle\) to denote the Euclidean (Frobenius) norm and inner product, respectively. We shorthand \([N]:=\{1,\ldots,N\}\), where \(N\in\mathbb{N}\). The complement of \(\mathcal{K}\subseteq[N]\) is denoted by \(\mathcal{K}^{c}:=[N]\setminus\mathcal{K}\).
### Tensors and Tensor Networks
For our purposes, a _tensor_ is a multi-dimensional array \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\), where \(N\in\mathbb{N}\) is its _dimension_ and \(D_{1},\ldots,D_{N}\in\mathbb{N}\) are its _axes lengths_. The \((d_{1},\ldots,d_{N})\)'th entry of \(\mathcal{A}\) is denoted \(\mathcal{A}_{d_{1},\ldots,d_{N}}\).
_Contraction_ between tensors is a generalization of multiplication between matrices. Two matrices \(\mathbf{A}\in\mathbb{R}^{D_{1}\times D_{2}}\) and \(\mathbf{B}\in\mathbb{R}^{D^{\prime}_{1}\times D^{\prime}_{2}}\) can be multiplied if \(D_{2}=D^{\prime}_{1}\), in which case we get a matrix in \(\mathbb{R}^{D_{1}\times D^{\prime}_{2}}\) holding \(\sum_{d=1}^{D_{2}}\mathbf{A}_{d_{1},d}\cdot\mathbf{B}_{d,d^{\prime}_{2}}\) in entry \((d_{1},d^{\prime}_{2})\in[D_{1}]\times[D^{\prime}_{2}]\). More generally, two tensors \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) and \(\mathcal{B}\in\mathbb{R}^{D^{\prime}_{1}\times\cdots\times D^{\prime}_{N^{ \prime}}}\) can be contracted along axis \(n\in[N]\) of \(\mathcal{A}\) and \(n^{\prime}\in[N^{\prime}]\) of \(\mathcal{B}\) if \(D_{n}=D^{\prime}_{n^{\prime}}\), in which case we get a tensor in \(\mathbb{R}^{D_{1}\times\cdots\cdot D_{n-1}\times D_{n+1}\times\cdots\times D_{ N}\times D^{\prime}_{1}\times\cdots\times D^{\prime}_{n^{\prime}-1}\times D^{ \prime}_{n^{\prime}+1}\cdots\times D^{\prime}_{N^{\prime}}}\) holding:
\[\sum\nolimits_{d=1}^{D_{n}}\mathcal{A}_{d_{1},\ldots,d_{n-1},d,d_{n+1},\ldots,d_{N}}\cdot\mathcal{B}_{d^{\prime}_{1},\ldots,d^{\prime}_{n^{\prime}-1},d,d^{ \prime}_{n^{\prime}+1},\ldots,d^{\prime}_{N^{\prime}}}\,,\]
in the entry indexed by \(\left\{d_{k}\in[D_{k}]\right\}_{k\in[N]\setminus\{n\}}\) and \(\left\{d^{\prime}_{k}\in[D^{\prime}_{k}]\right\}_{k\in[N^{\prime}]\setminus\{n^ {\prime}\}}\).
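For illustration, such a contraction can be expressed with an einsum; the shapes below are arbitrary examples.

```python
import jax.numpy as jnp

# Contract axis 2 of A (length 5) against axis 1 of B (length 5); the result
# keeps A's remaining axes followed by B's remaining axes, shape (3, 4, 7).
A = jnp.ones((3, 5, 4))
B = jnp.ones((5, 7))
C = jnp.einsum('idk,dj->ikj', A, B)
```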
_Tensor networks_ are prominent computational models for fitting (_i.e._ representing) tensors. More specifically, a tensor network is a weighted graph that describes formation of a (typically high-dimensional) tensor via contractions between (typically low-dimensional) tensors. As customary (_cf._[43]), we will present tensor networks via graphical diagrams to avoid cumbersome notation -- see Figure 1 for details.
### Quantum Entanglement
In quantum physics, the distribution of possible states for a multi-particle ("many body") system is described by a tensor, whose axes are associated with individual particles. A key property of the distribution is the dependence it admits under a given partition of the particles (_i.e._ between a given set of particles and its complement). This dependence is formalized through the notion of _quantum entanglement_, defined using the distribution's description as a tensor -- see Definition 1 below.
Quantum entanglement lies at the heart of a widely accepted theory which allows assessing the ability of a tensor network to fit a given tensor (_cf._[16; 36]). In Section 4 we specialize this theory to a tensor network equivalent to a certain locally connected neural network.
**Definition 1**.: For a tensor \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) and subset of its axes \(\mathcal{K}\subseteq[N]\), let \([\![\mathcal{A};\mathcal{K}]\!]\in\mathbb{R}^{\prod_{n\in\mathcal{K}}D_{n}\times\prod_{n\in\mathcal{K}^{c}}D_{n}}\) be the arrangement of \(\mathcal{A}\) as a matrix where rows correspond to axes \(\mathcal{K}\) and columns correspond to the remaining axes \(\mathcal{K}^{c}:=[N]\setminus\mathcal{K}\). Denote by \(\sigma_{1}\geq\cdots\geq\sigma_{D_{\mathcal{K}}}\in\mathbb{R}_{\geq 0}\) the singular values of \([\![\mathcal{A};\mathcal{K}]\!]\), where \(D_{\mathcal{K}}:=\min\{\prod_{n\in\mathcal{K}}D_{n},\prod_{n\in\mathcal{K}^{c}}D_{n}\}\). The _quantum entanglement_1 of \(\mathcal{A}\) with respect to the partition \((\mathcal{K},\mathcal{K}^{c})\) is the entropy of the distribution \(\{\rho_{d}:=\sigma_{d}^{2}/\sum_{d^{\prime}=1}^{D_{\mathcal{K}}}\sigma_{d^{\prime}}^{2}\}_{d=1}^{D_{\mathcal{K}}}\), _i.e._:
Footnote 1: There exist multiple notions of entanglement in quantum physics (see, _e.g._, [36]). The one we consider is the most common, known as _entanglement entropy_.
\[QE(\mathcal{A};\mathcal{K}):=-\sum\nolimits_{d=1}^{D_{\mathcal{K}}}\rho_{d} \ln(\rho_{d})\,.\]
By convention, if \(\mathcal{A}=0\) then \(QE(\mathcal{A};\mathcal{K})=0\).
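For concreteness, the sketch below computes \(QE(\mathcal{A};\mathcal{K})\) directly from Definition 1 (assuming \(\mathcal{A}\neq 0\) and that \(\mathcal{K}\) is given as an ordered sequence of axes).

```python
import math
import jax.numpy as jnp

def quantum_entanglement(A, K):
    """QE(A; K) of Definition 1: entropy of the normalized squared singular
    values of the matricization [A; K]. Assumes A != 0."""
    Kc = [n for n in range(A.ndim) if n not in K]
    rows = math.prod(A.shape[n] for n in K)
    M = jnp.transpose(A, list(K) + Kc).reshape(rows, -1)
    s = jnp.linalg.svd(M, compute_uv=False)
    p = s**2 / jnp.sum(s**2)
    p = p[p > 0]                     # convention: 0 * ln(0) = 0
    return -jnp.sum(p * jnp.log(p))
```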
## 4 Low Entanglement Under Canonical Partitions Is Necessary and Sufficient for Fitting Tensor
In this section, we prove that a tensor network equivalent to a certain locally connected neural network can fit a tensor if and only if the tensor admits low entanglement under certain canonical partitions of its axes. We begin by introducing the tensor network (Section 4.1). Subsequently, we establish the necessary and sufficient condition required for it to fit a given tensor (Section 4.2). For conciseness, the treatment in this section is limited to one-dimensional (sequential) models; see Appendix E.1 for an extension to arbitrary dimensions.
### Tensor Network Equivalent to a Locally Connected Neural Network
Let \(N\in\mathbb{N}\), and for simplicity suppose that \(N=2^{L}\) for some \(L\in\mathbb{N}\). We consider a tensor network with an underlying perfect binary tree graph of height \(L\), which generates \(\mathcal{W}_{\mathrm{TN}}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\). Figure 2(a) provides its diagrammatic definition. Notably, the lengths of axes corresponding to inner (non-open) edges are taken to be \(R\in\mathbb{N}\), referred to as the _width_ of the tensor network.2
Footnote 2: We treat the case where all axes corresponding to inner (non-open) edges in the tensor network have the same length merely for simplicity of presentation. Extension of our theory to axes of different lengths is straightforward.
As identified by previous works, the tensor network depicted in Figure 2(a) is equivalent to a certain locally connected neural network (with polynomial non-linearity -- see, _e.g._, [12; 10; 36; 50]). In
Figure 1: Tensor networks form a graphical language for fitting (_i.e._ representing) tensors through tensor contractions. **Tensor network definition:** Every node in a tensor network is associated with a tensor, whose dimension is equal to the number of edges emanating from the node. An edge connecting two nodes specifies contraction between the tensors associated with the nodes (Section 3.1), where the weight of the edge signifies the respective axes lengths. Tensor networks may also contain open edges, _i.e._ edges that are connected to a node on one side and are open on the other. The number of such open edges is equal to the dimension of the tensor produced by contracting the tensor network. **Illustrations:** Presented are exemplar tensor network diagrams of: **(a)** an \(N\)-dimensional tensor \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\); **(b)** a vector-matrix multiplication between \(\mathbf{M}\in\mathbb{R}^{D_{1}\times D_{2}}\) and \(\mathbf{v}\in\mathbb{R}^{D_{2}}\), which results in the vector \(\mathbf{M}\mathbf{v}\in\mathbb{R}^{D_{1}}\); and **(c)** a more elaborate tensor network generating \(\mathcal{W}\in\mathbb{R}^{D_{1}\times D_{2}\times D_{3}}\).
particular, contracting the tensor network with vectors \(\mathbf{x}^{(1)}\in\mathbb{R}^{D_{1}},\ldots,\mathbf{x}^{(N)}\in\mathbb{R}^{D_{N}}\), as illustrated in Figure 2(b), can be viewed as a forward pass of the data instance \((\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})\) through a locally connected neural network. This computation results in a scalar equal to \(\left\langle\otimes_{n=1}^{N}\mathbf{x}^{(n)},\mathcal{W}_{\mathrm{TN}}\right\rangle\), where \(\otimes\) stands for the outer product.34 In light of its equivalence to a locally connected neural network, we will refer to the tensor network as a _locally connected tensor network_. We note that for the equivalent neural network to be practical (in terms of memory and runtime), the width of the tensor network \(R\) needs to be of moderate size. Specifically, \(R\) cannot be exponential in the dimension \(N\), meaning \(\ln(R)\) needs to be much smaller than \(N\).
Footnote 3: For any \(\left\{\mathbf{x}^{(n)}\in\mathbb{R}^{D_{n}}\right\}_{n=1}^{N}\), the outer product \(\otimes_{n=1}^{N}\mathbf{x}^{(n)}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) is defined element-wise by \(\left[\otimes_{n=1}^{N}\mathbf{x}^{(n)}\right]_{d_{1},\ldots,d_{N}}=\prod_{n= 1}^{N}\mathbf{x}^{(n)}_{d_{n}}\), where \(d_{1}\in[D_{1}],\ldots,d_{N}\in[D_{N}]\).
Footnote 4: Contracting an arbitrary tensor \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) with vectors \(\mathbf{x}^{(1)}\in\mathbb{R}^{D_{1}},\ldots,\mathbf{x}^{(N)}\in\mathbb{R}^{D_{ N}}\) (where for each \(n\in[N]\), \(\mathbf{x}^{(n)}\) is contracted against the \(n\)’th axis of \(\mathcal{A}\)) yields a scalar equal to \(\left\langle\mathcal{A},\otimes_{n=1}^{N}\mathbf{x}^{(n)}\right\rangle\). This follows from the definitions of the contraction, inner product and outer product operations.
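To make the forward pass above concrete, here is a minimal sketch under one possible parameterization of the tree, assuming leaf tensors of shape \((R,D_{n})\), interior tensors of shape \((R,R,R)\), and an \((R,R)\) root (the exact tensor assignment in Figure 2(a) may differ; `tree_tn_forward` is a hypothetical helper). The bottom-up pairwise contraction mirrors the layers of the equivalent locally connected network:

```python
import numpy as np

def tree_tn_forward(leaves, nodes, root, xs):
    # leaves[n]: (R, D_n) tensor attached to the n'th open edge;
    # nodes: list of levels (bottom-up), each a list of (R, R, R) tensors;
    # root: (R, R) tensor; xs: the N input vectors x^(1), ..., x^(N).
    vs = [leaf @ x for leaf, x in zip(leaves, xs)]        # contract leaves
    for level in nodes:                                   # merge pairs bottom-up
        vs = [np.einsum('ijk,j,k->i', W, vs[2 * i], vs[2 * i + 1])
              for i, W in enumerate(level)]
    return float(np.einsum('jk,j,k->', root, vs[0], vs[1]))

# Example with N = 4 inputs, width R = 3, feature dimension D_n = 2:
rng = np.random.default_rng(0)
leaves = [rng.standard_normal((3, 2)) for _ in range(4)]
nodes = [[rng.standard_normal((3, 3, 3)) for _ in range(2)]]
root = rng.standard_normal((3, 3))
out = tree_tn_forward(leaves, nodes, root,
                      [rng.standard_normal(2) for _ in range(4)])
```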
By virtue of the locally connected tensor network's equivalence to a deep neural network, it has been paramount for the study of expressiveness and generalization in deep learning [12; 9; 10; 13; 14; 54; 35; 36; 2; 30; 31; 37; 48; 49; 50; 51]. Although the equivalent deep neural network (which has polynomial non-linearity) is less common than other neural networks (_e.g._, ones with ReLU non-linearity), it has demonstrated competitive performance in practice [8; 11; 55; 58; 22]. More importantly, its analyses (through its equivalence to the locally connected tensor network) brought forth numerous insights that were demonstrated empirically and led to the development of practical tools for common locally connected architectures. Continuing this line, we will demonstrate our theoretical insights through experiments with widespread convolutional, recurrent and local self-attention architectures (Section 5.3), and employ our theory for deriving an algorithm that enhances the suitability of a data distribution to said architectures (Section 6).
### Necessary and Sufficient Condition for Fitting Tensor
Herein we show that the ability of the locally connected tensor network (defined in Section 4.1) to fit (_i.e._ represent) a given tensor is determined by the entanglements that the tensor admits under the following _canonical partitions_ of \([N]\).
**Definition 2**.: The _canonical partitions_ of \([N]\) (illustrated in Figure 3) are:
\[\mathcal{C}_{N}:= \!\!\left\{\left(\mathcal{K},\mathcal{K}^{c}\right):\,\mathcal{K}= \left\{2^{L-l}\cdot(n-1)+1,\ldots,2^{L-l}\cdot n\right\},\,l\in\left\{0, \ldots,L\right\},\;n\in\left[2^{l}\right]\right\}.\]
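For illustration, the canonical partitions are straightforward to enumerate in code (a small sketch; `canonical_partitions` is a hypothetical helper, and indices are 1-based as in Definition 2):

```python
import math

def canonical_partitions(N):
    # Enumerate the sets K of Definition 2 for N = 2**L (1-based indices).
    L = int(math.log2(N))
    parts = []
    for l in range(L + 1):
        size = 2 ** (L - l)
        for n in range(1, 2 ** l + 1):
            parts.append(set(range(size * (n - 1) + 1, size * n + 1)))
    return parts

# For N = 4: [{1, 2, 3, 4}, {1, 2}, {3, 4}, {1}, {2}, {3}, {4}]
```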
By appealing to known upper bounds on the entanglements that a given tensor network supports [16, 36], we establish that if the locally connected tensor network can fit a given tensor, that tensor must admit low entanglement under the canonical partitions of its axes. Namely, suppose that \(\mathcal{W}_{\mathrm{TN}}\) -- the tensor generated by the locally connected tensor network -- well-approximates an \(N\)-dimensional tensor \(\mathcal{A}\). Then, Theorem 1 below shows that the entanglement of \(\mathcal{A}\) with respect to a canonical partition cannot be much larger than \(\ln(R)\) (recall that \(R\) is the width of the locally connected tensor network), whereas the entanglement attainable by an arbitrary tensor with respect to a canonical partition can be linear in the dimension \(N\).
In the other direction, Theorem 2 below implies that low entanglement under the canonical partitions is not only necessary for a tensor to be fit by the locally connected tensor network, but also sufficient.
**Theorem 1**.: _Let \(\mathcal{W}_{\mathrm{TN}}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) be a tensor generated by the locally connected tensor network defined in Section 4.1, and let \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\). For any \(\epsilon\in[0,\|\mathcal{A}\|/4]\), if \(\|\mathcal{W}_{\mathrm{TN}}-\mathcal{A}\|\leq\epsilon\), then for all canonical partitions \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) (Definition 2):5_
Footnote 5: If \(\mathcal{A}=0\), then \(\epsilon=0\). In this case, the expression \(\epsilon/\|\mathcal{A}\|\) is by convention equal to zero.
\[QE(\mathcal{A};\mathcal{K})\leq\ln(R)+\frac{2\epsilon}{\|\mathcal{A}\|}\cdot \ln(D_{\mathcal{K}})+2\sqrt{\frac{2\epsilon}{\|\mathcal{A}\|}}\,, \tag{1}\]
_where \(D_{\mathcal{K}}:=\min\{\prod_{n\in\mathcal{K}}D_{n},\prod_{n\in\mathcal{K}^{c }}D_{n}\}\). In contrast, there exists \(\mathcal{A}^{\prime}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) such that for all canonical partitions \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\):_
\[QE(\mathcal{A}^{\prime};\mathcal{K})\geq\min\{|\mathcal{K}|,|\mathcal{K}^{c}| \}\cdot\ln(\min_{n\in[N]}D_{n})\,. \tag{2}\]
Proof sketch (proof in Appendix G.2).: In general, the entanglements that a tensor network supports can be upper bounded through cuts in its graph [16, 36]. For the locally connected tensor network, these bounds imply that \(QE(\mathcal{W}_{\mathrm{TN}};\mathcal{K})\leq\ln(R)\) for any canonical partition \((\mathcal{K},\mathcal{K}^{c})\). Equation (1) then follows by showing that if \(\mathcal{W}_{\mathrm{TN}}\) and \(\mathcal{A}\) are close, then so are their entanglements. Equation (2) is established using a construction from [18], providing a tensor with maximal entanglements under all partitions of its axes.
**Theorem 2**.: _Let \(\mathcal{A}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) and \(\epsilon>0\). Suppose that for all canonical partitions \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) (Definition 2) it holds that \(QE(\mathcal{A};\mathcal{K})\leq\frac{\epsilon^{2}}{(2N-3)\|\mathcal{A}\|^{2}} \cdot\ln(R)\).6 Then, there exists an assignment for the tensors constituting the locally connected tensor network (defined in Section 4.1) such that it generates \(\mathcal{W}_{\mathrm{TN}}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) satisfying:_
Footnote 6: When the approximation error \(\epsilon\) tends to zero, the sufficient condition in Theorem 2 requires entanglements to approach zero, unlike the necessary condition in Theorem 1 which requires entanglements to become no greater than \(\ln(R)\). This is unavoidable. However, if for all canonical partitions \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) the singular values of \(\llbracket\mathcal{A};\mathcal{K}\rrbracket\) trailing after the \(R\)’th one are small, then we can also guarantee an assignment for the locally connected tensor network satisfying \(\|\mathcal{W}_{\mathrm{TN}}-\mathcal{A}\|\leq\epsilon\), while \(QE(\mathcal{A};\mathcal{K})\) can be on the order of \(\ln(R)\) for all \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\). See Appendix A for details.
\[\|\mathcal{W}_{\mathrm{TN}}-\mathcal{A}\|\leq\epsilon\,.\]
Proof sketch (proof in Appendix G.3).: We show that if \(\mathcal{A}\) has low entanglement under a canonical partition \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\), then the singular values of \(\llbracket\mathcal{A};\mathcal{K}\rrbracket\) must decay rapidly (recall that \(\llbracket\mathcal{A};\mathcal{K}\rrbracket\) is
Figure 3: The canonical partitions of \([N]\), for \(N=2^{L}\) with \(L\in\mathbb{N}\). Every \(l\in\{0,\ldots,L\}\) contributes \(2^{l}\) canonical partitions, the \(n\)’th one induced by \(\mathcal{K}=\{2^{L-l}\cdot(n-1)+1,\ldots,2^{L-l}\cdot n\}\).
the arrangement of \(\mathcal{A}\) as a matrix where rows correspond to axes \(\mathcal{K}\) and columns correspond to the remaining axes). The approximation guarantee is then obtained through a construction from [25], which is based on truncated singular value decompositions of every \(\llbracket\mathcal{A};\mathcal{K}\rrbracket\) for \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\).
## 5 Low Entanglement Under Canonical Partitions Is Necessary and Sufficient for Accurate Prediction
This section considers the locally connected tensor network from Section 4.1 in a machine learning setting. We show that attaining low population loss amounts to fitting a tensor defined by the data distribution, whose axes correspond to features (Section 5.1). Applying the theorems of Section 4.2, we then conclude that the locally connected tensor network is capable of accurate prediction if and only if the data distribution admits low entanglement under canonical partitions of features (Section 5.2). This conclusion is corroborated through experiments, demonstrating that the performance of common locally connected neural networks (including convolutional, recurrent, and local self-attention neural networks) is inversely correlated with the entanglement under canonical partitions of features in the data (Section 5.3). For conciseness, the treatment in this section is limited to one-dimensional (sequential) models and data; see Appendix E.2 for an extension to arbitrary dimensions.
### Accurate Prediction Is Equivalent to Fitting Data Tensor
As discussed in Section 4.1, the locally connected tensor network generating \(\mathcal{W}_{\mathrm{TN}}\in\mathbb{R}^{D_{1}\times\cdots\times D_{N}}\) is equivalent to a locally connected neural network, whose forward pass over a data instance \((\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})\) yields \(\left\langle\otimes_{n=1}^{N}\mathbf{x}^{(n)},\mathcal{W}_{\mathrm{TN}}\right\rangle\), where \(\mathbf{x}^{(1)}\in\mathbb{R}^{D_{1}},\ldots,\mathbf{x}^{(N)}\in\mathbb{R}^{D_ {N}}\). Motivated by this fact, we consider a binary classification setting, in which the label \(y\) of the instance \((\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)})\) is either \(1\) or \(-1\), and the prediction \(\hat{y}\) is taken to be the sign of the output of the neural network, _i.e._\(\hat{y}=\mathrm{sign}\big{(}\big{\langle}\otimes_{n=1}^{N}\mathbf{x}^{(n)}, \mathcal{W}_{\mathrm{TN}}\big{\rangle}\big{)}\).
Suppose we are given a training set of labeled instances drawn i.i.d. from some distribution, and we would like to learn the parameters of the neural network through the soft-margin support vector machine (SVM) objective, _i.e._ by optimizing:
\[\min\!_{\|\mathcal{W}_{\mathrm{TN}}\|\leq B}\frac{1}{M}\sum\!_{m=1}^{M}\max \!\left\{0,1-y^{(m)}\big{\langle}\otimes_{n=1}^{N}\mathbf{x}^{(n,m)}, \mathcal{W}_{\mathrm{TN}}\big{\rangle}\right\}, \tag{3}\]
for a predetermined constant \(B>0\).7 We assume instances are normalized, _i.e._ the distribution is such that all vectors constituting an instance have norm no greater than one. We also assume that \(B\leq 1\). In this case \(\big{|}y^{(m)}\big{\langle}\otimes_{n=1}^{N}\mathbf{x}^{(n,m)},\mathcal{W}_{ \mathrm{TN}}\big{\rangle}\big{|}\leq 1\), so our optimization problem can be expressed as:
Footnote 7: An alternative form of the soft-margin SVM objective includes a squared Euclidean norm penalty instead of the constraint.
\[\min\!_{\|\mathcal{W}_{\mathrm{TN}}\|\leq B}1-\left\langle\mathcal{D}_{ \mathrm{emp}},\mathcal{W}_{\mathrm{TN}}\right\rangle\,,\]
where
\[\mathcal{D}_{\mathrm{emp}}:=\frac{1}{M}\sum\!_{m=1}^{M}y^{(m)}\cdot\otimes_{n =1}^{N}\mathbf{x}^{(n,m)} \tag{4}\]
is referred to as the _empirical data tensor_. This means that the accuracy over the training data is determined by how large the inner product \(\left\langle\mathcal{D}_{\mathrm{emp}},\mathcal{W}_{\mathrm{TN}}\right\rangle\) is.
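For intuition, below is a sketch of Equation (4) in code (`empirical_data_tensor` is a hypothetical helper). Materializing \(\mathcal{D}_{\mathrm{emp}}\) is feasible only for tiny \(N\), since the tensor has \(\prod_{n}D_{n}\) entries; Section 5.2 points to how the relevant quantities are computed without building it explicitly:

```python
import numpy as np
from functools import reduce

def empirical_data_tensor(xs, ys):
    # xs[m]: list of N feature vectors x^(1,m), ..., x^(N,m); ys[m] in {1, -1}.
    # Builds D_emp of Equation (4) by summing labeled outer products.
    M = len(xs)
    return sum(y * reduce(np.multiply.outer, x) for x, y in zip(xs, ys)) / M
```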
Disregarding the degenerate case of \(\mathcal{D}_{\mathrm{emp}}=0\) (_i.e._ that in which the optimized objective is constant), the inner products \(\left\langle\mathcal{D}_{\mathrm{emp}},\mathcal{W}_{\mathrm{TN}}\right\rangle\) and \(\left\langle\frac{\mathcal{D}_{\mathrm{emp}}}{\|\mathcal{D}_{\mathrm{emp}}\|}, \mathcal{W}_{\mathrm{TN}}\right\rangle\) differ by only a multiplicative (positive) constant, so fitting the training data amounts to optimizing:
\[\max\!_{\|\mathcal{W}_{\mathrm{TN}}\|\leq B}\left\langle\frac{\mathcal{D}_{ \mathrm{emp}}}{\|\mathcal{D}_{\mathrm{emp}}\|},\mathcal{W}_{\mathrm{TN}}\right\rangle\,. \tag{5}\]
If \(\mathcal{W}_{\mathrm{TN}}\) can represent some \(\mathcal{W}\), then it can also represent \(c\cdot\mathcal{W}\) for every \(c\in\mathbb{R}\).8 Thus, optimizing Equation (5) is the same as optimizing:
Footnote 8: Since contraction is a multilinear operation, if a tensor network realizes \(\mathcal{W}\), then multiplying any of the tensors constituting it by \(c\) results in \(c\cdot\mathcal{W}\).
\[\max_{\mathcal{W}_{\mathrm{TN}}}\left\langle\frac{\mathcal{D}_{\mathrm{emp}}}{ \|\mathcal{D}_{\mathrm{emp}}\|},\frac{\mathcal{W}_{\mathrm{TN}}}{\|\mathcal{ W}_{\mathrm{TN}}\|}\right\rangle\,,\]
and multiplying the result by \(B\). Fitting the training data therefore boils down to minimizing \(\left\|\frac{\mathcal{W}_{\mathrm{TN}}}{\|\mathcal{W}_{\mathrm{TN}}\|}-\frac{ \mathcal{D}_{\mathrm{emp}}}{\|\mathcal{D}_{\mathrm{emp}}\|}\right\|\). In other words, the accuracy achievable over the training data is determined by the extent to which \(\frac{\mathcal{W}_{\mathrm{TN}}}{\|\mathcal{W}_{\mathrm{TN}}\|}\) can fit the normalized empirical data tensor \(\frac{\mathcal{D}_{\mathrm{emp}}}{\|\mathcal{D}_{\mathrm{emp}}\|}\).
The arguments above are independent of the training set size, and in fact apply to the population loss as well, in which case \(\mathcal{D}_{\mathrm{emp}}\) is replaced by the _population data tensor_:
\[\mathcal{D}_{\mathrm{pop}}:=\mathbb{E}_{(\mathbf{x}^{(1)},\ldots,\mathbf{x}^ {(N)}),y}\big{[}y\cdot\otimes_{n=1}^{N}\mathbf{x}^{(n)}\big{]}\,. \tag{6}\]
It follows that the achievable accuracy over the population is determined by the extent to which \(\frac{\mathcal{W}_{\mathrm{TN}}}{\|\mathcal{W}_{\mathrm{TN}}\|}\) can fit the normalized population data tensor \(\frac{\mathcal{D}_{\mathrm{pop}}}{\|\mathcal{D}_{\mathrm{pop}}\|}\). We refer to the minimal distance from it as the _suboptimality in achievable accuracy_.
**Definition 3**.: In the context of the classification setting above, the _suboptimality in achievable accuracy_ is:
\[\mathrm{SubOpt}:=\min_{\mathcal{W}_{\mathrm{TN}}}\left\|\frac{\mathcal{W}_{ \mathrm{TN}}}{\|\mathcal{W}_{\mathrm{TN}}\|}-\frac{\mathcal{D}_{\mathrm{pop}} }{\|\mathcal{D}_{\mathrm{pop}}\|}\right\|.\]
### Necessary and Sufficient Condition for Accurate Prediction
In the classification setting of Section 5.1, by invoking Theorems 1 and 2 from Section 4.2, we conclude that the suboptimality in achievable accuracy is small if and only if the population data tensor \(\mathcal{D}_{\mathrm{pop}}\) admits low entanglement under the canonical partitions of its axes (Definition 2).
**Corollary 1**.: _Consider the classification setting of Section 5.1, and let \(\epsilon\in[0,1/4]\). If there exists a canonical partition \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) (Definition 2) under which \(QE(\mathcal{D}_{\mathrm{pop}};\mathcal{K})>\ln(R)+2\epsilon\cdot\ln(D_{ \mathcal{K}})+2\sqrt{2\epsilon}\), where \(D_{\mathcal{K}}:=\min\{\prod_{n\in\mathcal{K}}D_{n},\prod_{n\in\mathcal{K}^{c }}D_{n}\}\), then:_
\[\mathrm{SubOpt}>\epsilon\,.\]
_Conversely, if for all \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) it holds that \(QE(\mathcal{D}_{\mathrm{pop}};\mathcal{K})\leq\frac{\epsilon^{2}}{8N-12}\cdot \ln(R)\),9 then:_
Footnote 9: Per the discussion in Footnote 6, when \(\epsilon\) tends to zero: _(i)_ in the absence of any knowledge regarding \(\mathcal{D}_{\mathrm{pop}}\), the entanglements required by the sufficient condition unavoidably approach zero; while _(ii)_ if it is known that for all \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) the singular values of \([\![\mathcal{D}_{\mathrm{pop}};\mathcal{K}]\!]\) trailing after the \(R\)’th one are small, then the entanglements in the sufficient condition can be on the order of \(\ln(R)\) (as they are in the necessary condition).
\[\mathrm{SubOpt}\leq\epsilon\,.\]
Proof sketch (proof in Appendix G.5).: Follows from Theorems 1 and 2 after accounting for the normalization of \(\mathcal{W}_{\mathrm{TN}}\) in the suboptimality in achievable accuracy.
Directly evaluating the conditions required by Corollary 1 -- low entanglement under canonical partitions for \(\mathcal{D}_{\mathrm{pop}}\) -- is impractical, since: _(i)_\(\mathcal{D}_{\mathrm{pop}}\) is defined via an unknown data distribution (Equation (6)); and _(ii)_ computing the entanglements involves taking singular value decompositions of matrices with size exponential in the number of input variables \(N\). Fortunately, as Proposition 1 below shows, the entanglements of \(\mathcal{D}_{\mathrm{pop}}\) under all partitions are with high probability well-approximated by the entanglements of the empirical data tensor \(\mathcal{D}_{\mathrm{emp}}\). Moreover, the entanglement of \(\mathcal{D}_{\mathrm{emp}}\) under any partition can be computed efficiently, without explicitly storing or manipulating an exponentially large matrix -- see Appendix B for an algorithm (originally proposed in [41]). Overall, we obtain an efficiently computable criterion (low entanglement under canonical partitions for \(\mathcal{D}_{\mathrm{emp}}\)), that with high probability is both necessary and sufficient for low suboptimality in achievable accuracy -- see Corollary 2 below.
**Proposition 1**.: _Consider the classification setting of Section 5.1, and let \(\delta\in(0,1)\) and \(\gamma>0\). If the training set size \(M\) satisfies \(M\geq\frac{128\ln(\frac{2}{\delta})(\ln(\max_{\mathcal{K}^{\prime}\subseteq[N]}D_{\mathcal{K}^{\prime}}))^{4}}{\|\mathcal{D}_{\mathrm{pop}}\|^{2}\gamma^{4}}\), where \(D_{\mathcal{K}^{\prime}}:=\min\{\prod_{n\in\mathcal{K}^{\prime}}D_{n},\prod_{n\in[N]\setminus\mathcal{K}^{\prime}}D_{n}\}\) for \(\mathcal{K}^{\prime}\subseteq[N]\), then with probability at least \(1-\delta\):_
\[|QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K})-QE(\mathcal{D}_{\mathrm{pop}}; \mathcal{K})|\leq\gamma\,\text{ for all }\mathcal{K}\subseteq[N]\text{.}\]
Proof sketch (proof in Appendix G.4).: A standard generalization of the Hoeffding inequality to random vectors in a Hilbert space allows bounding \(\|\mathcal{D}_{\mathrm{emp}}-\mathcal{D}_{\mathrm{pop}}\|\) with high probability. Bounding differences between entanglements via Euclidean distance concludes the proof.
**Corollary 2**.: _Consider the setting and notation of Corollary 1, with \(\epsilon\neq 0\). For \(\delta\in(0,1)\), suppose that the training set size \(M\) satisfies \(M\geq\frac{128\ln(\frac{2}{\delta})((16N-24)\ln(\max_{\mathcal{K}^{\prime}\subseteq[N]}D_{\mathcal{K}^{\prime}}))^{4}}{\|\mathcal{D}_{\mathrm{pop}}\|^{2}(\ln(R)\cdot\epsilon^{2})^{4}}\). Then, with probability at least \(1-\delta\) the following hold. First, if there exists a canonical partition \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) (Definition 2) under which \(QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K})>(1+\frac{\epsilon^{2}}{16N-24})\cdot\ln(R)+2\epsilon\cdot\ln(D_{\mathcal{K}})+2\sqrt{2\epsilon}\), then:_
\[\mathrm{SubOpt}>\epsilon\,.\]
_Second, if for all \((\mathcal{K},\mathcal{K}^{c})\in\mathcal{C}_{N}\) it holds that \(QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K})\leq\frac{\epsilon^{2}}{16N-24}\cdot \ln(R)\), then:_
\[\mathrm{SubOpt}\leq\epsilon\,.\]
_Moreover, the conditions above on the entanglements of \(\mathcal{D}_{\mathrm{emp}}\) can be evaluated efficiently (in \(\mathcal{O}(DN^{2}M^{2}+NM^{3})\) time and \(\mathcal{O}(DNM+M^{2})\) memory, where \(D:=\max_{n\in[N]}D_{n}\))._
Proof.: Implied by Corollary 1, Proposition 1 with \(\gamma=\frac{\epsilon^{2}}{16N-24}\cdot\ln(R)\) and Algorithm 2 in Appendix B.
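To illustrate the kind of computation involved (the precise procedure is Algorithm 2 in Appendix B, originally from [41]; the sketch below is our reconstruction and may differ in details), one can exploit the identity \(\langle\otimes_{n}\mathbf{a}^{(n)},\otimes_{n}\mathbf{b}^{(n)}\rangle=\prod_{n}\langle\mathbf{a}^{(n)},\mathbf{b}^{(n)}\rangle\): the nonzero squared singular values of \([\![\mathcal{D}_{\mathrm{emp}};\mathcal{K}]\!]\) then coincide, up to a scaling that the entropy ignores, with the eigenvalues of a product of two \(M\times M\) Gram matrices, so no exponentially large matrix is ever formed:

```python
import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def qe_empirical(xs, ys, K):
    # xs[m][n]: feature vector x^(n,m); ys[m] in {1, -1};
    # K: proper nonempty subset of the axes, 0-based.
    M, N = len(xs), len(xs[0])
    # Per-feature Gram matrices G[n][m, m'] = <x^(n,m), x^(n,m')>: O(D*N*M^2).
    G = [np.array([[xs[m][n] @ xs[mp][n] for mp in range(M)] for m in range(M)])
         for n in range(N)]
    y = np.asarray(ys, dtype=float)
    P = np.outer(y, y) * np.prod([G[n] for n in K], axis=0)
    Q = np.prod([G[n] for n in range(N) if n not in K], axis=0)
    S = sqrtm(P).real                         # P is PSD up to round-off
    lam = eigvalsh(S @ Q @ S).clip(min=0)     # eig(PQ) via a symmetric form
    rho = lam / lam.sum()
    rho = rho[rho > 1e-12]
    return float(-(rho * np.log(rho)).sum())
```

The cost is dominated by the Gram matrices and an \(M\times M\) eigendecomposition per partition, in line with the complexity stated in Corollary 2.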
### Empirical Demonstration
Corollary 2 establishes that, with high probability, the locally connected tensor network (from Section 4.1) can achieve high prediction accuracy if and only if the empirical data tensor (Equation (4)) admits low entanglement under canonical partitions of its axes. We corroborate our formal analysis through experiments, demonstrating that its conclusions carry over to common locally connected architectures. Namely, applying convolutional neural networks, S4 (a popular recurrent neural network; see [26]), and a local self-attention model [60] to different datasets, we show that the achieved test accuracy is inversely correlated with the entanglements of the empirical data tensor under canonical partitions. Below is a description of experiments with one-dimensional (_i.e._ sequential) models and data. Additional experiments with two-dimensional (imagery) models and data are given in Appendix E.2.3.
Discerning the relation between entanglements of the empirical data tensor and performance (prediction accuracy) of locally connected neural networks requires datasets admitting different entanglements. A potential way to acquire such datasets is as follows. First, select a dataset on which locally connected neural networks perform well, in the hopes that it admits low entanglement under canonical partitions; natural candidates are datasets comprising images, text or audio. Subsequently, create "shuffled" variants of the dataset by repeatedly swapping the positions of two features chosen at random.10 This erodes the original arrangement of features in the data, and is expected to yield higher entanglement under canonical partitions.
Footnote 10: It is known that as the number of random position swaps goes to infinity, the arrangement of the features converges to a random permutation [19].
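A sketch of the shuffling procedure (`shuffle_features` is a hypothetical helper; the swaps are shared across instances, so the result is a fixed permutation of feature positions):

```python
import numpy as np

def shuffle_features(X, num_swaps, seed=0):
    # X: (M, N) data matrix. Applies `num_swaps` random position swaps to the
    # feature axis; the same swaps are applied to every instance.
    rng = np.random.default_rng(seed)
    perm = np.arange(X.shape[1])
    for _ in range(num_swaps):
        i, j = rng.choice(X.shape[1], size=2, replace=False)
        perm[[i, j]] = perm[[j, i]]
    return X[:, perm]
```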
We followed the blueprint above for a binary classification version of the Speech Commands audio dataset [63]. Figure 4 presents test accuracies achieved by a convolutional neural network, S4, and a local self-attention model, as well as average entanglement under canonical partitions of the empirical data tensor, against the number of random feature swaps performed to create the dataset. As expected, when the number of swaps increases, the average entanglement under canonical partitions becomes higher. At the same time, in accordance with our theory, the prediction accuracies of the locally connected neural networks substantially deteriorate, showing an inverse correlation with the entanglement under canonical partitions.
## 6 Enhancing Suitability of Data to Locally Connected Neural Networks
Our analysis (Sections 4 and 5) suggests that a data distribution is suitable for locally connected neural networks if and only if it admits low entanglement under canonical partitions of features. Motivated by this observation, we derive a preprocessing algorithm aimed to enhance the suitability of a data distribution to locally connected neural networks (Sections 6.1 and 6.2). Empirical evaluations demonstrate that it significantly improves prediction accuracies of common locally connected neural networks on various datasets (Section 6.3). For conciseness, the treatment in this section is limited to one-dimensional (sequential) models and data; see Appendix E.3 for an extension to arbitrary dimensions.
### Search for Feature Arrangement With Low Entanglement Under Canonical Partitions
Our analysis naturally leads to a recipe for enhancing the suitability of a data distribution to locally connected neural networks: given a dataset, search for an arrangement of features which leads to low entanglement under canonical partitions, and then arrange the features accordingly. Formally, suppose we have \(M\in\mathbb{N}\) training instances \(\left\{\left((\mathbf{x}^{(1,m)},\ldots,\mathbf{x}^{(N,m)}),y^{(m)}\right) \right\}_{m=1}^{M}\), where \(y^{(m)}\in\{1,-1\}\) and \(\mathbf{x}^{(n,m)}\in\mathbb{R}^{D}\) for \(n\in[N],m\in[M]\), with \(D\in\mathbb{N}\). Assume without loss of generality that \(N\) is a power of two (if this is not the case we may add constant features as needed). The aforementioned recipe boils down to a search for a permutation \(\pi:[N]\rightarrow[N]\), which when applied to feature indices leads the empirical data tensor \(\mathcal{D}_{\mathrm{emp}}\) (Equation (4)) to admit low entanglement under the canonical partitions of its axes (Definition 2).
A greedy realization of the foregoing search is as follows. Initially, partition the features into two equally sized sets \(\mathcal{K}_{1,1}\subset[N]\) and \(\mathcal{K}_{1,2}:=[N]\setminus\mathcal{K}_{1,1}\) such that the entanglement of \(\mathcal{D}_{\mathrm{emp}}\) with respect to \((\mathcal{K}_{1,1},\mathcal{K}_{1,2})\) is minimal. That is, find \(\mathcal{K}_{1,1}\in\mathrm{argmin}_{\mathcal{K}\subset[N],|\mathcal{K}|=N/2}\,QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K})\). The permutation \(\pi\) will map \(\mathcal{K}_{1,1}\) to coordinates \(\{1,\ldots,\frac{N}{2}\}\) and \(\mathcal{K}_{1,2}\) to \(\{\frac{N}{2}+1,\ldots,N\}\). Then, partition \(\mathcal{K}_{1,1}\) into two equally sized sets \(\mathcal{K}_{2,1}\subset\mathcal{K}_{1,1}\) and \(\mathcal{K}_{2,2}:=\mathcal{K}_{1,1}\setminus\mathcal{K}_{2,1}\) such that the average of entanglements induced by these sets is minimal, _i.e._\(\mathcal{K}_{2,1}\in\mathrm{argmin}_{\mathcal{K}\subset\mathcal{K}_{1,1},|\mathcal{K}|=|\mathcal{K}_{1,1}|/2}\,\frac{1}{2}\big{[}QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K})+QE(\mathcal{D}_{\mathrm{emp}};\mathcal{K}_{1,1}\setminus\mathcal{K})\big{]}\). The permutation \(\pi\) will map \(\mathcal{K}_{2,1}\) to coordinates \(\{1,\ldots,\frac{N}{4}\}\) and \(\mathcal{K}_{2,2}\) to \(\{\frac{N}{4}+1,\ldots,\frac{N}{2}\}\). A partition of \(\mathcal{K}_{1,2}\) into two equally sized sets \(\mathcal{K}_{2,3}\) and \(\mathcal{K}_{2,4}\) is obtained similarly, where \(\pi\) will map \(\mathcal{K}_{2,3}\) to coordinates \(\{\frac{N}{2}+1,\ldots,\frac{3N}{4}\}\) and \(\mathcal{K}_{2,4}\) to \(\{\frac{3N}{4}+1,\ldots,N\}\). Continuing in the same fashion, until we reach subsets \(\mathcal{K}_{L,1},\ldots,\mathcal{K}_{L,N}\) consisting of a single feature index each, fully specifies the permutation \(\pi\).
Unfortunately, the step lying at the heart of the above scheme -- finding a balanced partition that minimizes average entanglement -- is computationally prohibitive, and we are not aware of any
Figure 4: The prediction accuracies of common locally connected neural networks are inversely correlated with the entanglements of the data under canonical partitions of features, in compliance with our theory (Sections 5.1 and 5.2). **Left:** Average entanglement under canonical partitions (Definition 2) of the empirical data tensor (Equation (4)), for binary classification variants of the Speech Commands audio dataset [63] obtained by performing random position swaps between features. **Right:** Test accuracies achieved by a convolutional neural network (CNN) [17], S4 (a popular class of recurrent neural networks; see [26]), and a local self-attention model [60], against the number of random feature swaps performed to create the dataset. **All:** Reported are the means and standard deviations of the quantities specified above, taken over ten different random seeds. See Appendix E.2.3 for experiments over (two-dimensional) image data and Appendix F for further implementation details.
tools that alleviate the computational difficulty. In the next subsection we will see that replacing entanglement with a surrogate measure paves way to a practical implementation.
### Practical Algorithm via Surrogate for Entanglement
To efficiently implement the scheme from Section 6.1, we replace entanglement with a surrogate measure of dependence. The surrogate is based on the Pearson correlation coefficient for multivariate features [46],11 and its agreement with entanglement is demonstrated empirically in Appendix D. Theoretically supporting this agreement is left for future work.
Footnote 11: For completeness, Appendix C provides a formal definition of the multivariate Pearson correlation.
**Definition 4**.: Given a set of \(M\in\mathbb{N}\) instances \(\mathcal{X}:=\{(\mathbf{x}^{(1,m)},\ldots,\mathbf{x}^{(N,m)})\in(\mathbb{R}^ {D})^{N}\}_{m=1}^{M}\), denote by \(p_{n,n^{\prime}}\) the multivariate Pearson correlation between features \(n,n^{\prime}\in[N]\). For \(\mathcal{K}\subseteq[N]\), the _surrogate entanglement_ of \(\mathcal{X}\) with respect to the partition \((\mathcal{K},\mathcal{K}^{c})\), denoted \(SE(\mathcal{X};\mathcal{K})\), is the sum of Pearson correlation coefficients between pairs of features, the first belonging to \(\mathcal{K}\) and the second to \(\mathcal{K}^{c}:=[N]\setminus\mathcal{K}\). That is:
\[SE\big{(}\mathcal{X};\mathcal{K}\big{)}:=\sum\nolimits_{n\in\mathcal{K},n^{ \prime}\in\mathcal{K}^{c}}p_{n,n^{\prime}}\,.\]
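Given a precomputed matrix of pairwise Pearson coefficients, the surrogate reduces to summing one off-diagonal block (a sketch; `surrogate_entanglement` is a hypothetical helper, and the multivariate coefficient itself is defined in Appendix C):

```python
import numpy as np

def surrogate_entanglement(P, K):
    # P: (N, N) matrix whose entry [n, n'] is the Pearson coefficient p_{n,n'};
    # K: list of feature indices (0-based). Returns SE(X; K) of Definition 4.
    Kc = [n for n in range(P.shape[0]) if n not in K]
    return float(P[np.ix_(K, Kc)].sum())
```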
As shown in Proposition 2 below, replacing entanglement with surrogate entanglement in the scheme from Section 6.1 converts each search for a balanced partition minimizing average entanglement into a _minimum balanced cut problem_. Although the minimum balanced cut problem is NP-hard (see, _e.g._, [23]), it enjoys a wide array of well-established approximation tools, particularly ones designed for large scale [29; 57]. We therefore obtain a practical algorithm for enhancing the suitability of a data distribution to locally connected neural networks -- see Algorithm 1.
**Proposition 2**.: _For any \(\bar{\mathcal{K}}\subseteq[N]\) of even size, the following optimization problem can be framed as a minimum balanced cut problem over a complete graph with \(|\bar{\mathcal{K}}|\) vertices:_
\[\min_{\mathcal{K}\subset\bar{\mathcal{K}},|\mathcal{K}|=|\bar{\mathcal{K}}|/2} \frac{1}{2}\Big{[}SE\big{(}\mathcal{X};\mathcal{K}\big{)}+SE\big{(}\mathcal{ X};\bar{\mathcal{K}}\setminus\mathcal{K}\big{)}\Big{]}\,. \tag{7}\]
_Specifically, there exists a complete undirected weighted graph with vertices \(\bar{\mathcal{K}}\) and edge weights \(w:\bar{\mathcal{K}}\times\bar{\mathcal{K}}\to\mathbb{R}\) such that for any \(\mathcal{K}\subset\bar{\mathcal{K}}\), the weight of the cut in the graph induced by \(\mathcal{K}\) -- \(\sum_{n\in\mathcal{K},n^{\prime}\in\bar{\mathcal{K}}\setminus\mathcal{K}}w(\{n,n^{\prime}\})\) -- is equal, up to an additive constant, to the term minimized in Equation (7), i.e. to \(\frac{1}{2}\big{[}SE\big{(}\mathcal{X};\mathcal{K}\big{)}+SE\big{(}\mathcal{X};\bar{\mathcal{K}}\setminus\mathcal{K}\big{)}\big{]}\)._
Proof.: Consider the complete undirected graph whose vertices are \(\bar{\mathcal{K}}\) and where the weight of an edge \(\{n,n^{\prime}\}\in\bar{\mathcal{K}}\times\bar{\mathcal{K}}\) is \(w(\{n,n^{\prime}\})=p_{n,n^{\prime}}\) (recall that \(p_{n,n^{\prime}}\) stands for the multivariate Pearson correlation between features \(n\) and \(n^{\prime}\) in \(\mathcal{X}\)). For any \(\mathcal{K}\subset\bar{\mathcal{K}}\) it holds that:
\[\sum\nolimits_{n\in\mathcal{K},n^{\prime}\in\bar{\mathcal{K}}\setminus\mathcal{ K}}w(\{n,n^{\prime}\})=\frac{1}{2}\Big{[}SE\big{(}\mathcal{X};\mathcal{K} \big{)}+SE\big{(}\mathcal{X};\bar{\mathcal{K}}\setminus\mathcal{K}\big{)} \Big{]}-\frac{1}{2}SE\big{(}\mathcal{X};\bar{\mathcal{K}}\big{)}\,,\]
where \(\frac{1}{2}SE\big{(}\mathcal{X};\bar{\mathcal{K}}\big{)}\) does not depend on \(\mathcal{K}\). This concludes the proof.
### Experiments
We empirically evaluate Algorithm 1 using common locally connected neural networks -- a convolutional neural network, S4 (a popular recurrent neural network; see [26]), and a local self-attention model [60] -- over randomly permuted audio datasets (Section 6.3.1) and several tabular datasets (Section 6.3.2). For brevity, we defer some implementation details to Appendix F.
#### 6.3.1 Randomly Permuted Audio Datasets
Section 5.3 demonstrated that audio data admits low entanglement under canonical partitions of features, and that randomly permuting the position of features leads this entanglement to increase, while substantially degrading the prediction accuracy of locally connected neural networks. A sensible test for Algorithm 1 is to evaluate its ability to recover performance lost due to the random permutation of features.
```
1:Input:\(\mathcal{X}:=\{(\mathbf{x}^{(1,m)},\ldots,\mathbf{x}^{(N,m)})\}_{m=1}^{M}\)\(\ldots\)\(M\in\mathbb{N}\) data instances comprising \(N\in\mathbb{N}\) features
2:Output: Permutation \(\pi:[N]\rightarrow[N]\) to apply to feature indices
3:Let \(\mathcal{K}_{0,1}:=[N]\) and denote \(L:=\log_{2}(N)\)
4:# We assume for simplicity that \(N\) is a power of two, otherwise one may add constant features
5:for\(l=0,\ldots,L-1\), \(n=1,\ldots,2^{l}\)do
6: Using a reduction to a minimum balanced cut problem (Proposition 2), find an approximate solution \(\mathcal{K}_{l+1,2n-1}\subset\mathcal{K}_{l,n}\) for: \[\min_{\mathcal{K}\subset\mathcal{K}_{l,n},|\mathcal{K}|=|\mathcal{K}_{l,n}|/2} \frac{1}{2}[SE(\mathcal{X};\mathcal{K})+SE(\mathcal{X};\mathcal{K}_{l,n} \setminus\mathcal{K})]\]
7: Let \(\mathcal{K}_{l+1,2n}:=\mathcal{K}_{l,n}\setminus\mathcal{K}_{l+1,2n-1}\)
8:endfor
9:# At this point, \(\mathcal{K}_{L,1},\ldots,\mathcal{K}_{L,N}\) each contain a single feature index
10:return\(\pi\) that maps \(k\in\mathcal{K}_{L,n}\) to \(n\), for every \(n\in[N]\)
```
**Algorithm 1** Enhancing Suitability of Data to Locally Connected Neural Networks
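Below is a sketch of Algorithm 1 in Python. The minimum balanced cut of Proposition 2 is approximated here with NetworkX's Kernighan-Lin bisection heuristic, one possible choice among the approximation tools mentioned above and not necessarily the solver used in the paper, and a plain scalar Pearson correlation stands in for the multivariate coefficient of Definition 4 (`arrange_features` is a hypothetical helper name):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def arrange_features(X, seed=0):
    # X: (M, N) data matrix with N a power of two (pad with constants otherwise).
    # Greedy top-down realization of Algorithm 1.
    N = X.shape[1]
    P = np.corrcoef(X.T)             # scalar stand-in for p_{n, n'}
    blocks = [list(range(N))]
    while len(blocks) < N:
        next_blocks = []
        for block in blocks:
            if len(block) == 1:
                next_blocks.append(block)
                continue
            G = nx.Graph()
            G.add_nodes_from(block)
            # Edge weights are correlations (may be negative; the KL heuristic
            # treats them as-is, which is itself an approximation here).
            G.add_weighted_edges_from(
                (a, b, P[a, b]) for i, a in enumerate(block) for b in block[i + 1:]
            )
            left, right = kernighan_lin_bisection(G, seed=seed)
            next_blocks += [sorted(left), sorted(right)]
        blocks = next_blocks
    order = [b[0] for b in blocks]   # original feature index placed at position k
    return order                     # rearranged data: X[:, order]
```

In the notation of Algorithm 1, `order[n]` is the single feature index in \(\mathcal{K}_{L,n+1}\), so applying the returned ordering to the columns of the data realizes the permutation \(\pi\).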
For the Speech Commands dataset [63], Table 1 compares the prediction accuracies of locally connected neural networks on: _(i)_ the data subject to a random permutation of features; _(ii)_ the data attained after rearranging the randomly permuted features via Algorithm 1; and _(iii)_ the data attained after rearranging the randomly permuted features via IGTD [65] -- a heuristic scheme designed for convolutional neural networks (see Section 2). As can be seen, Algorithm 1 leads to significant improvements, surpassing those brought forth by IGTD. Note that Algorithm 1 does not entirely recover the performance lost due to the random permutation of features.12 We believe this relates to phenomena outside the scope of the theory underlying Algorithm 1 (Sections 4 and 5), for example translation invariance in data being beneficial in terms of generalization. Investigation of such phenomena and suitable modification of Algorithm 1 are regarded as promising directions for future work.
Footnote 12: The prediction accuracies on the original data are \(59.8\ \pm\ 2.6\), \(69.6\ \pm\ 0.6\) and \(48.1\ \pm\ 2.1\) for CNN, S4 and Local-Attention, respectively.
#### 6.3.2 Tabular Datasets
The prediction accuracies of locally connected neural networks on tabular data, _i.e._ on data in which features are arranged arbitrarily, are known to be subpar [56]. Table 2 reports results of experiments with locally connected neural networks over standard tabular benchmarks (namely "dna", "semeion" and "isolet" [61]), demonstrating that arranging features via Algorithm 1 leads to
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Randomly Permuted & Algorithm 1 & IGTD \\ \hline CNN & \(5.2\ \pm\ 0.7\) & \(\mathbf{17.4}\ \pm\ 1.7\) & \(6.1\ \pm\ 0.4\) \\ S4 & \(9.5\ \pm\ 0.6\) & \(\mathbf{30.3}\ \pm\ 1.6\) & \(13\ \pm\ 2.4\) \\ Local-Attention & \(7.8\ \pm\ 0.3\) & \(\mathbf{12.9}\ \pm\ 0.7\) & \(6.4\ \pm\ 0.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Arranging features of randomly permuted audio data via Algorithm 1 significantly improves the prediction accuracies of locally connected neural networks. Reported are test accuracies (mean and standard deviation over ten random seeds) of a convolutional neural network (CNN), S4 (a popular recurrent neural network; see [26]), and a local self-attention model [60], over the Speech Commands dataset [63] subject to different arrangements of features: _(i)_ a random arrangement; _(ii)_ an arrangement provided by applying Algorithm 1 to the random arrangement; and _(iii)_ an arrangement provided by applying an adaptation of IGTD [65] — a heuristic scheme designed for convolutional neural networks — to the random arrangement. For each model, we highlight (in boldface) the highest mean accuracy if the difference between that and the second-highest mean accuracy is statistically significant (namely, is larger than the sum of corresponding standard deviations). As can be seen, Algorithm 1 leads to significant improvements in prediction accuracies, surpassing the improvements brought forth by IGTD. See Appendix F for implementation details.
significant improvements in prediction accuracies, surpassing improvements brought forth by IGTD (a heuristic scheme designed for convolutional neural networks [65]). Note that Algorithm 1 does not lead to state-of-the-art prediction accuracies on the evaluated benchmarks.13 However, the results suggest that it renders locally connected neural networks a viable option for tabular data. This option is particularly appealing in settings where the number of features is large, in which many alternative approaches (_e.g._ ones involving fully connected neural networks) are impractical.
Footnote 13: XGBoost for example achieves prediction accuracies \(96\), \(91\) and \(95.2\) over dna, semeion and isolet, respectively.
## 7 Conclusion
The question of what makes a data distribution suitable for deep learning is a fundamental open problem. Focusing on locally connected neural networks -- a prevalent family of deep learning architectures that includes as special cases convolutional neural networks, recurrent neural networks (in particular the recent S4 models) and local self-attention models -- we address this problem by adopting theoretical tools from quantum physics. Our main theoretical result states that a certain locally connected neural network is capable of accurate prediction (_i.e._ can express a solution with low population loss) over a data distribution _if and only if_ the data distribution admits low quantum entanglement under certain canonical partitions of features. Experiments with widespread locally connected neural networks corroborate this finding.
Our theory suggests that the suitability of a data distribution to locally connected neural networks may be enhanced by arranging features such that low entanglement under canonical partitions is attained. Employing a certain surrogate for entanglement, we show that this arrangement can be implemented efficiently, and that it leads to substantial improvements in the prediction accuracies of common locally connected neural networks on various datasets.
The data modalities to which deep learning is most commonly applied -- namely ones involving images, text and audio -- are often regarded as natural (as opposed to, for example, tabular data
\begin{table}
\begin{tabular}{l l l l} \multicolumn{4}{l}{Dataset: dna} \\ \hline & Baseline & Algorithm 1 & IGTD \\ \hline CNN & \(81.1\pm 2.2\) & \(\mathbf{91.1}\pm 0.7\) & \(86.9\pm 0.7\) \\ S4 & \(87.7\pm 2.3\) & \(89.8\pm 2.9\) & \(90\pm 1.2\) \\ Local-Attention & \(77.5\pm 3.6\) & \(\mathbf{85.5}\pm 3.6\) & \(81\pm 2.5\) \\ \hline \multicolumn{4}{l}{Dataset: semeion} \\ \hline & Baseline & Algorithm 1 & IGTD \\ \hline CNN & \(77.5\pm 1.8\) & \(80.7\pm 1\) & \(80.2\pm 1.8\) \\ S4 & \(82.6\pm 1.1\) & \(\mathbf{89.8}\pm 0.5\) & \(85.9\pm 0.7\) \\ Local-Attention & \(60.6\pm 3.8\) & \(\mathbf{78.6}\pm 1.3\) & \(68\pm 0.9\) \\ \hline \multicolumn{4}{l}{Dataset: isolet} \\ \hline & Baseline & Algorithm 1 & IGTD \\ \hline CNN & \(91.6\pm 0.4\) & \(92.5\pm 0.4\) & \(90.5\pm 2.2\) \\ S4 & \(92\pm 0.3\) & \(93.3\pm 0.5\) & \(92.8\pm 0.3\) \\ Local-Attention & \(78\pm 2.0\) & \(\mathbf{87.7}\pm 0.4\) & \(83.9\pm 0.8\) \\ \hline \end{tabular}
\end{table}
Table 2: Arranging features of tabular datasets via Algorithm 1 significantly improves the prediction accuracies of locally connected neural networks. Reported are results of experiments analogous to those of Table 1, but with the “dna”, “semeion” and “isolet” tabular classification datasets [61]. Since the arrangement of features in a tabular dataset is intended to be arbitrary, we regard as a baseline the prediction accuracies attained with a random permutation of features. For each combination of dataset and model, we highlight (in boldface) the highest mean accuracy if the difference between that and the second-highest mean accuracy is statistically significant (namely, is larger than the sum of corresponding standard deviations). Notice that, as in the experiment of Table 1, rearranging the features according to Algorithm 1 leads to significant improvements in prediction accuracies, surpassing the improvements brought forth by IGTD. See Appendix F for implementation details.
fusing heterogeneous information). We believe the difficulty in explaining the suitability of such modalities to deep learning may be due to a shortage of tools for formally reasoning about natural data. Concepts and tools from physics -- a branch of science concerned with formally reasoning about natural phenomena -- may be key to overcoming said difficulty. We hope that our use of quantum entanglement will encourage further research along this line.
## Acknowledgments and Disclosure of Funding
This work was supported by a Google Research Scholar Award, a Google Research Gift, the Yandex Initiative in Machine Learning, the Israel Science Foundation (grant 1780/21), Len Blavatnik and the Blavatnik Family Foundation, and Amnon and Anat Shashua. NR is supported by the Apple Scholars in AI/ML and the Tel Aviv University Center for AI and Data Science (TAD) PhD fellowships.
|
2310.06396 | Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach | Graph neural networks (GNNs) are vulnerable to adversarial perturbations,
including those that affect both node features and graph topology. This paper
investigates GNNs derived from diverse neural flows, concentrating on their
connection to various stability notions such as BIBO stability, Lyapunov
stability, structural stability, and conservative stability. We argue that
Lyapunov stability, despite its common use, does not necessarily ensure
adversarial robustness. Inspired by physics principles, we advocate for the use
of conservative Hamiltonian neural flows to construct GNNs that are robust to
adversarial attacks. The adversarial robustness of different neural flow GNNs
is empirically compared on several benchmark datasets under a variety of
adversarial attacks. Extensive numerical experiments demonstrate that GNNs
leveraging conservative Hamiltonian flows with Lyapunov stability substantially
improve robustness against adversarial perturbations. The implementation code
of experiments is available at
https://github.com/zknus/NeurIPS-2023-HANG-Robustness. | Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, Wee Peng Tay | 2023-10-10T07:59:23Z | http://arxiv.org/abs/2310.06396v1 | # Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
###### Abstract
Graph neural networks (GNNs) are vulnerable to adversarial perturbations, including those that affect both node features and graph topology. This paper investigates GNNs derived from diverse neural flows, concentrating on their connection to various stability notions such as BIBO stability, Lyapunov stability, structural stability, and conservative stability. We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness. Inspired by physics principles, we advocate for the use of conservative Hamiltonian neural flows to construct GNNs that are robust to adversarial attacks. The adversarial robustness of different neural flow GNNs is empirically compared on several benchmark datasets under a variety of adversarial attacks. Extensive numerical experiments demonstrate that GNNs leveraging conservative Hamiltonian flows with Lyapunov stability substantially improve robustness against adversarial perturbations. The implementation code of experiments is available at [https://github.com/zknus/NeurIPS-2023-HANG-Robustness](https://github.com/zknus/NeurIPS-2023-HANG-Robustness).
## 1 Introduction
Graph neural networks (GNNs) [1; 2; 3; 4; 5; 6; 7; 8] have achieved great success in inference tasks involving graph-structured data, including applications from social media networks, molecular chemistry, and mobility networks. However, GNNs are known to be vulnerable to adversarial attacks [9]. To fool a trained GNN, adversaries can either add new nodes to the graph during the inference phase or remove/add edges from/to the graph. The former is called an injection attack [10; 11; 12; 13], and the latter is called a modification attack [14; 15; 16]. In some works [9; 17], node feature perturbations are also considered to enable stronger modification attacks.
Neural ordinary differential equation (ODE) networks [18] have recently gained popularity due to their inherent robustness [19; 20; 21; 22; 23; 24]. Neural ODEs can be considered a continuous analog of ResNet [25]. Many neural ODE networks have since been proposed, including but not limited to [19; 20; 26; 27; 28]. Using neural ODEs, we can constrain the input and output of a neural network to follow certain physics laws. Injecting physics constraints into black-box neural networks improves their explainability. More recently, neural ODEs have also been successfully applied to GNNs by modeling
the way nodes exchange information given the adjacency structure of the underlying graph. We call these _graph neural flows_, which enable the interpretation of GNNs as evolutionary dynamical systems. These system equations can be learned by instantiating them using neural ODEs [29, 30, 31, 32, 33, 34]. For instance, [29, 30] model the message-passing process, i.e., feature exchanges between nodes, as heat diffusion, while [31, 32] model it as Beltrami diffusion. The reference [35] models the graph nodes as coupled oscillators, with a coupled oscillating ODE guiding the message-passing process.
Although the adversarial robustness of GNNs has been investigated in various works, including [9, 13, 17], the robustness study of graph neural flows is still in its infancy. To the best of our knowledge, only the recent paper [32] has started to formulate theoretical insights into why graph neural diffusion is generally more robust against topology perturbation than conventional GNNs. The concept of Lyapunov stability was used in [20, 21]. However, there are many different notions of stability in the dynamical systems literature [36, 37]. In this paper, we focus on the study of different notions of stability for graph neural flows and investigate which notion is most strongly connected to adversarial robustness. We impose an energy conservation constraint on graph neural flows, which leads to a Hamiltonian graph neural flow. We find that energy-conservative graph Hamiltonian flows endowed with Lyapunov stability improve robustness the most compared to other existing stable graph neural flows.
**Main contributions.** This research is centered on examining various stability notions within the realm of graph neural flows, especially as they relate to adversarial robustness. Our main contributions are summarized as follows:
1. We revisit the definitions of stability from the perspective of dynamical systems as applicable to graph neural flows. We argue that vanilla Lyapunov stability does not necessarily confer adversarial robustness and provide a rationale for this observation.
2. We propose Hamiltonian-inspired graph neural ODEs, noted for their energy-conservative nature. We perform comprehensive numerical experiments to verify their performance on standard benchmark datasets. Crucially, our results demonstrate that Hamiltonian flow GNNs present enhanced robustness against various adversarial perturbations. Moreover, it is found that the effectiveness of Lyapunov stability becomes pronounced when layered on top of Hamiltonian flow GNNs, thereby fortifying their adversarial robustness.
The rest of this paper is organized as follows. We introduce various stability notions from a dynamical system perspective in Section 2. A review of existing graph neural flows is presented in Section 3 with links to the stability notions defined in Section 2. We present a new type of graph neural flow inspired by the Hamiltonian system with energy conservation in Section 4. Two different variants of this model are proposed in Section 5. Section 6 details our extensive experimental outcomes. The supplementary section provides an overview of related studies, an exhaustive outline of the algorithm, further insights into model robustness, supplementary experimental data, and the proofs for the theoretical propositions made throughout the paper.
## 2 Stability in Dynamical Systems
It is well known that a small perturbation at the input of an unstable dynamical system will result in a large distortion in the system's output. In this section, we first introduce various types of stability in dynamical physical systems and then relate them to graph neural flows. We consider the evolution of a dynamical system that is described as the following autonomous nonlinear differential equation:
\[\frac{\mathrm{d}\mathbf{z}(t)}{\mathrm{d}t}=f_{\boldsymbol{\theta}}(\mathbf{z }(t)), \tag{1}\]
where \(f_{\boldsymbol{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) denotes the system dynamics, which may be non-linear in general, \(\boldsymbol{\theta}\) denotes the system parameters, and \(\mathbf{z}:[0,\infty)\rightarrow\mathbb{R}^{n}\) represents the \(n\)-dimensional system state.
We first introduce the stability notions from a dynamical systems perspective that is related to the GNN robustness against _node feature perturbation_.
**Definition 1** (BIBO stability).: _The system is called BIBO (bounded input bounded output) stable if for any bounded input, there exists a constant \(M\) s.t. the output \(\|\mathbf{z}(t)\|<M,\forall\,t\geq 0\)._
Suppose \(f\) has an equilibrium at \(\mathbf{z}_{e}\) so that \(f_{\boldsymbol{\theta}}\left(\mathbf{z}_{e}\right)=0\). We can define the stability notion for \(\mathbf{z}_{e}\).
**Definition 2** (Lyapunov stability and asymptotically stable [38]).: _The equilibrium \(\mathbf{z}_{e}\) is Lyapunov stable if for every \(\epsilon>0\), there exists a \(\delta>0\) such that, if \(\|\mathbf{z}(0)-\mathbf{z}_{e}\|<\delta\), then for every \(t\geq 0\) we have \(\|\mathbf{z}(t)-\mathbf{z}_{e}\|<\epsilon\). Furthermore, the equilibrium point \(\mathbf{z}_{e}\) is said to be asymptotically stable if it is Lyapunov stable and there exists a \(\delta^{\prime}>0\) such that if \(\|\mathbf{z}(0)-\mathbf{z}_{e}\|<\delta^{\prime}\), then \(\lim_{t\to\infty}\|\mathbf{z}(t)-\mathbf{z}_{e}\|=0\)._
**Remark 1**.: _Lyapunov stability indicates that the solutions whose initial points are near an equilibrium point \(\mathbf{z}_{e}\) stay near \(\mathbf{z}_{e}\) forever. For the special linear time-invariant system \(\,\mathrm{d}\mathbf{z}(t)/\,\mathrm{d}t=\mathbf{A}\mathbf{z}(t)\) with a constant matrix \(\mathbf{A}\), it is Lyapunov stable if and only if all eigenvalues of \(\mathbf{A}\) have non-positive real parts and those with zero real parts are the simple roots of the minimal polynomial of \(\mathbf{A}\)[39, 40]. Asymptotically stable means that not only do trajectories stay near \(\mathbf{z}_{e}\) for all time (Lyapunov stability), but trajectories also converge to \(\mathbf{z}_{e}\) as time goes to infinity (asymptotic stability)._
We next introduce the concept of structural stability from dynamical systems theory, which is related to the robustness of GNNs against _graph topological perturbation_. It describes the sensitivity of the qualitative features of a solution to changes in the parameters \(\boldsymbol{\theta}\). The definition of structural stability requires introducing a topology on the space of \(\mathbf{z}\) in (1), which we do not present rigorously here due to space constraints and so as not to distract the reader with too much mathematical detail. Instead, we provide a qualitative description of structural stability to elucidate how it can indicate the robustness of a graph neural flow against topology perturbations.
**Definition 3** (Structural stability).: _Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed \(f_{\boldsymbol{\theta}}\), structural stability deals with perturbations of the dynamic function \(f_{\boldsymbol{\theta}}\) by perturbing the parameter \(\boldsymbol{\theta}\). The qualitative behavior of the solution is unaffected by small perturbations of \(f_{\boldsymbol{\theta}}\) in the sense that there is a homeomorphism that globally maps the original solution to the solution under perturbation._
**Remark 2**.: _In the graph neural flows to be detailed in Section 3 and Section 4, the parameter \(\boldsymbol{\theta}\) includes the graph topology (i.e., the adjacency matrix) and learnable neural network weights. Unlike adversarial attacks on other deep learning neural networks where the attacker targets only the input \(\mathbf{z}\), it is worth noting that adversaries for GNNs can also attack the graph topology, which forms part of \(\boldsymbol{\theta}\). If there are different Lyapunov stable equilibrium points, one for each class of nodes, one intuitive example of breaking structural stability in graph neural flows is by perturbing the graph topology in such a way that there are strictly fewer equilibrium points than the number of classes._
In this study, we will propose GNNs drawing inspiration from Hamiltonian mechanics. In a Hamiltonian system, \(\mathbf{z}=(q,p)\in\mathbb{R}^{2n}\) refers to the generalized coordinates, with \(q\) and \(p\) corresponding to the generalized position and momentum, respectively. The dynamical system is characterized by the following nonlinear differential equation:
\[\frac{\mathrm{d}\mathbf{z}(t)}{\mathrm{d}t}=J\nabla H(\mathbf{z}(t)), \tag{2}\]
where \(\nabla H(\mathbf{z})\) is the gradient of a scalar function \(H\) at \(\mathbf{z}\) and \(J=\left(\begin{array}{cc}0&I\\ -I&0\end{array}\right)\) is the \(2n\times 2n\) skew-symmetric matrix with \(I\) being the \(n\times n\) identity matrix.
We now turn our attention to the notion of conservative stability in dynamical systems. It is worth noting that a general dynamical system, as characterized in (1), might not consistently resonate with traditional perspectives on energy and conservation, especially when compared to physics-inspired neural networks, like (2).
**Definition 4** (Conservative stability).: _In a dynamical system inspired by physical principles, such as (2), a conserved quantity might be present. This quantity, which frequently embodies the notion of the system's energy, remains invariant along the system's evolution trajectory \(\mathbf{z}(t)\)._
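As a toy illustration of Definition 4 (a sketch, not one of the graph neural flows studied here), consider the harmonic oscillator \(H(q,p)=\frac{1}{2}(q^{2}+p^{2})\), whose exact flow under (2) conserves \(H\). A naive explicit Euler discretization leaks energy, whereas a symplectic (semi-implicit) Euler step keeps \(H\) close to its initial value:

```python
import numpy as np

dt, steps = 1e-2, 10_000
H = lambda q, p: 0.5 * (q**2 + p**2)

# Explicit Euler on dq/dt = p, dp/dt = -q: energy grows over time.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + dt * p, p - dt * q
print(H(q, p))   # noticeably larger than the initial value 0.5

# Symplectic Euler: update p first, then q with the new p.
q, p = 1.0, 0.0
for _ in range(steps):
    p = p - dt * q
    q = q + dt * p
print(H(q, p))   # stays within O(dt) of 0.5 for all time
```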
Our focus in this work is on graph neural flows that can be described by either (1) or (2). Chamberlain et al. [29] postulate that many GNN architectures such as GAT can be construed as discrete versions of (1) via different choices of the function \(f_{\boldsymbol{\theta}}\) and discretization schemes. Therefore, the stability definitions provided above can offer additional insights into many popular GNNs. Most existing graph neural flows only scrutinize the BIBO/Lyapunov stability of their system. For instance, GRAND [29] exhibits BIBO/Lyapunov stability against node feature perturbations of \(\mathbf{z}\). However, the more fundamental structural stability in graph neural flows, which is related to robustness against graph topological changes, remains largely unexplored. Some models, such as GraphCON [29, 35], exhibit conservative stability under certain conditions. We direct the reader to Section 3 and Table 1 for a comprehensive discussion of the stability properties of each model.
## 3 Existing Graph Neural Flows and Stability
Consider an undirected, weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) where \(\mathcal{V}\) is a finite set of vertices and \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) denotes the set of edges. The adjacency matrix of the graph is denoted as \(\mathbf{W}\), with \(\mathbf{W}[u,v]=\mathbf{W}[v,u]\) for all \([u,v]\in\mathcal{E}\) since the graph is undirected. Let \(\mathbf{X}(t)\in\mathbb{R}^{|\mathcal{V}|\times r}\) represent the features associated with the vertices at time \(t\). In this section, we introduce several graph neural flows on \(\mathcal{G}\), categorizing them according to the stability concepts outlined in Section 2.
**GRAND:** Inspired by the heat diffusion equation, GRAND [29] employs the following dynamical system:
\[\frac{\mathrm{d}\mathbf{X}(t)}{\mathrm{d}t}=\overline{\mathbf{A}}_{G}( \mathbf{X}(t))\mathbf{X}(t)\coloneqq(\mathbf{A}_{G}(\mathbf{X}(t))-\alpha \mathbf{I})\mathbf{X}(t), \tag{3}\]
with the initial condition \(\mathbf{X}(0)\). Within this model, \(\mathbf{A}_{G}(\mathbf{X}(t))\) is either a time-invariant static matrix, represented as GRAND-l, or a trainable time-variant attention matrix \((a_{G}\left(\mathbf{x}_{i}(t),\mathbf{x}_{j}(t)\right))\), labeled as GRAND-nl, reflecting the graph's evolutionary features. The function \(a_{G}(\cdot)\) calculates similarity for pairs of vertices, and \(\mathbf{I}\) is an identity matrix with dimensions that fit the context. In [29], \(\alpha\) is set to be 1. Let \(\mathbf{D}\) be the diagonal node degree matrix where \(\mathbf{D}[u,u]=\sum_{v\in\mathcal{V}}\mathbf{W}[u,v]\).
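As a concrete illustration, a minimal forward-Euler sketch of the GRAND-l flow (3) with the row-stochastic choice \(\mathbf{A}_{G}=\mathbf{D}^{-1}\mathbf{W}\) is given below; the names are ours, and this is not the discretization scheme used in [29]:

```python
import numpy as np

def grand_l_step(X, W, alpha=1.0, dt=0.1):
    """One explicit-Euler step of dX/dt = (A_G - alpha*I) X with A_G = D^{-1} W."""
    deg = W.sum(axis=1, keepdims=True)   # node degrees D[u, u]
    A = W / deg                          # row-stochastic normalized adjacency
    return X + dt * (A @ X - alpha * X)

W = np.array([[0., 1., 1.],              # toy 3-node graph
              [1., 0., 1.],
              [1., 1., 0.]])
X = np.random.randn(3, 4)                # |V| x r node features
for _ in range(50):
    X = grand_l_step(X, W)               # features evolve by graph diffusion
```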
**Theorem 1**.: _We can prove the following stability:_
1. _For GRAND-nl, if the attention matrix \(\mathbf{A}_{G}(\mathbf{X}(t))\) is set as a doubly stochastic attention [41], we have BIBO stability and Lyapunov stability for any \(\alpha\geq 1\). When \(\alpha>1\), it reaches global asymptotic stability under any perturbation._
2. _Within the GRAND-l setting, if \(\mathbf{A}_{G}\) is set as a constant column- or row-stochastic matrix, such as the normalized adjacency matrices \(\mathbf{W}\mathbf{D}^{-1}\) or \(\mathbf{D}^{-1}\mathbf{W}\), global asymptotic stability is achieved for \(\alpha>1\) under any perturbation. If the graph is additionally assumed to be strongly connected [42, Sec. 6.3], BIBO and Lyapunov stability are realized for \(\alpha=1\)._
3. _Furthermore, when \(\mathbf{A}_{G}\) is specifically a constant column-stochastic matrix like \(\mathbf{W}\mathbf{D}^{-1}\) and \(\alpha=1\), GRAND conserves a quantity that can be interpreted as energy. Furthermore, in this setting, asymptotic stability is attained when the graph is aperiodic and strongly connected and the perturbations on \(\mathbf{X}(0)\) leave the column sums unaltered._
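Item 3 above admits a quick numerical sanity check (a sketch on a toy graph of our own choosing): with the column-stochastic \(\mathbf{A}_{G}=\mathbf{W}\mathbf{D}^{-1}\) and \(\alpha=1\), every column of \(\mathbf{A}_{G}-\mathbf{I}\) sums to zero, so the column sums of \(\mathbf{X}(t)\) are conserved; the explicit-Euler update even preserves them exactly:

```python
import numpy as np

W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
A = W / W.sum(axis=0, keepdims=True)      # column-stochastic W D^{-1}
X = np.random.randn(3, 2)
col_sums_0 = X.sum(axis=0).copy()         # conserved quantity ("energy")
for _ in range(1000):
    X = X + 0.01 * (A @ X - X)            # GRAND-l flow with alpha = 1
print(np.allclose(X.sum(axis=0), col_sums_0))   # -> True
```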
**BLEND:** _In comparison to GRAND, BLEND [31] introduces the use of positional encodings. Following a similar line of reasoning to that used for GRAND, BLEND also exhibits BIBO/Lyapunov stability as stated in Theorem 1. Moreover, it is noteworthy that if positional features_ \(\mathbf{U}(t)\) _are eliminated, for instance by setting them as a constant, BLEND simplifies to the GRAND model._
**GraphCON:** Inspired by oscillator dynamical systems, GraphCON is a graph neural flow proposed in [35] and defined as
\[\left\{\begin{array}{c}\frac{\mathrm{d}\mathbf{Y}(t)}{\mathrm{d}t}=\sigma( \mathbf{F}_{\theta}(\mathbf{X}(t),t))-\gamma\mathbf{X}(t)-\alpha\mathbf{Y}(t), \\ \frac{\mathrm{d}\mathbf{X}(t)}{\mathrm{d}t}=\mathbf{Y}(t),\end{array}\right. \tag{4}\]
where \(\mathbf{F}_{\theta}(\cdot)\) is a learnable \(1\)-neighborhood coupling function, \(\sigma\) denotes an activation function, and \(\gamma\) and \(\alpha\) are tunable parameters.
_As described in [35, Proposition 3.1], under specific settings where \(\sigma\) is the identity function and \(\mathbf{F}_{\theta}(\mathbf{X}(t),t)=\mathbf{AX}(t)\) with \(\mathbf{A}\) being a constant matrix, GraphCON conserves Dirichlet energy_ (11)_, thereby demonstrating conservative stability._
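A minimal sketch of (4) in exactly this setting (identity \(\sigma\), constant coupling \(\mathbf{F}_{\theta}(\mathbf{X})=\mathbf{A}\mathbf{X}\)) follows; we use a plain explicit-Euler update for brevity, which is our own simplification rather than the tailored scheme employed in [35]:

```python
import numpy as np

def graphcon_step(X, Y, A, gamma=1.0, alpha=0.0, dt=0.01):
    """One Euler step of dY/dt = A X - gamma*X - alpha*Y and dX/dt = Y."""
    Y_new = Y + dt * (A @ X - gamma * X - alpha * Y)
    X_new = X + dt * Y
    return X_new, Y_new
```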
**GraphBel:** Generalizing the Beltrami flow, mean curvature flow and heat flow, a stable graph neural flow [32] is designed as
\[\frac{\mathrm{d}\mathbf{X}(t)}{\mathrm{d}t}=(\mathbf{A}_{\mathbf{S}}(\mathbf{ X}(t))\odot\mathbf{B}_{\mathbf{S}}(\mathbf{X}(t))-\Psi(\mathbf{X}(t)))\mathbf{X}(t), \tag{5}\]
where \(\odot\) is the element-wise multiplication. \(\mathbf{A}_{\mathbf{S}}(\cdot)\) and \(\mathbf{B}_{\mathbf{S}}(\cdot)\) are learnable attention function and normalized vector map, respectively. \(\mathbf{\Psi}(\mathbf{X}(t))\) is a diagonal matrix in which \(\Psi(\mathbf{x}_{i},\mathbf{x}_{i})=\sum_{\mathbf{x}_{j}}(\mathbf{A}\odot \mathbf{B})(\mathbf{x}_{i},\mathbf{x}_{j})\).
_Analogous to BLEND, under certain conditions with \(\Psi(\mathbf{X}(t))=\mathbf{B}_{\mathbf{S}}(\mathbf{X}(t))=\mathbf{I}\), GraphBel simplifies to the GRAND model. Consequently, it exhibits BIBO/Lyapunov stability in certain scenarios._
The incorporation of ODEs via graph neural flows may enhance the stability of graph feature representations. A summarized relationship between model stability and these graph neural flows can be found in Table 1.
### Lyapunov Stability vs. Node Classification Robustness:
At first glance, Lyapunov stability has a strong correlation with node classification robustness against feature perturbations. However, before diving into experimental evidence, we point out an important conclusion: **Lyapunov stability _by itself_ does not necessarily imply adversarial robustness.** Consider a scenario where a graph neural flow has only one equilibrium point \(\mathbf{z}_{e}\), while the node features are derived from more than one class. In a Lyapunov asymptotically stable graph neural flow, such as GRAND (as shown in Theorem 1), all node features across different classes would inevitably converge to a single point \(\mathbf{z}_{e}\) due to global contraction. We note that this is why the model in [20] requires a diversity-promoting layer to ensure that different classes converge to different Lyapunov-stable equilibrium points.
**Example 1**.: _We provide an example to demonstrate our claim. Consider the following Lyapunov stable ODE_
\[\dot{\mathbf{x}}(t)=\begin{pmatrix}-1&0\\ 0&-5\end{pmatrix}\mathbf{x}(t) \tag{6}\]
_with initial condition \(\mathbf{x}(0)=\left[x_{1}(0),x_{2}(0)\right]^{\intercal}\). The solution to this ODE is given by \(\mathbf{x}(t)=x_{1}(0)e^{-t}[1,0]^{\intercal}+x_{2}(0)e^{-5t}[0,1]^{\intercal}\). For all initial points in \(\mathbb{R}^{2}\), we have \(\mathbf{x}(t)\rightarrow\mathbf{0}\) as \(t\rightarrow\infty\). Furthermore, as \(t\rightarrow\infty\), the trajectory \(\mathbf{x}(t)\) for any initial point is approximately parallel to the \(x\)-axis. We draw the phase plane in Fig. 1(a)._
_Assume that the points on the upper half of the y-axis belong to class 1 and that a linear classifier separates class 1 from class 2 as shown in Fig. 1(a). For the initial point \(A\) belonging to class 1, the solution starting from a slightly perturbed initial point \(A+\epsilon\) is misclassified as class 2 for large enough \(t\), no matter which linear classifier is chosen. This example shows that Lyapunov stability itself does not imply adversarial robustness in graph neural flow models._
_This example indicates that Lyapunov stability does not guarantee node classification robustness. Additionally, for a system exhibiting global contraction to a single equilibrium point, structural stability may also be ensured. For instance, in the case of GRAND, even if the edges are perturbed, the system maintains the same number of equilibrium points with global contraction._ We conclude that even an amalgamation of both Lyapunov stability and structural stability may not help the graph's adversarial robustness for node classification.
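Example 1 is easy to reproduce numerically from its closed-form solution; in the sketch below (the perturbation size is arbitrary), the perturbed trajectory collapses toward the \(x\)-axis:

```python
import numpy as np

def solve(x0, t):
    """Closed-form solution of Example 1: x(t) = (x1(0) e^{-t}, x2(0) e^{-5t})."""
    return np.array([x0[0] * np.exp(-t), x0[1] * np.exp(-5.0 * t)])

x0 = np.array([0.0, 1.0])            # point A on the upper y-axis (class 1)
eps = np.array([0.05, 0.0])          # small feature perturbation
for t in [0.0, 1.0, 3.0]:
    print(t, solve(x0, t), solve(x0 + eps, t))
# The perturbed trajectory becomes nearly parallel to the x-axis as t grows,
# so it is eventually misclassified by any fixed linear classifier.
```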
In the example shown in Fig. 1, we observe that in the case of GRAND when \(\alpha>1\), the node features from different classes tend to become closer to each other as time progresses. This phenomenon can potentially create more vulnerability to adversarial attacks.

Figure 1: (a): We plot the vector field and the system solution trajectories of Example 1. (b) and (c): In GRAND, the node features’ energy tends to converge towards each other. In HANG, we observe the node features’ energy remains relatively stable over time. Two nodes are from different classes.

## 4 Hamiltonian-Inspired Graph Neural Flow

Drawing inspiration from the principles of Hamiltonian classical mechanics, we introduce a novel graph neural flow paradigm, namely HamiltoniAN Graph diffusion (HANG). In Hamiltonian mechanics, the notation \((q,p)\) is traditionally used to represent the properties of nodes (position and momentum). Hence, in this section, we adopt this notation instead of \(\mathbf{X}\) (as used in Section 3) to denote node features.
### Physics Philosophy Behind HANG
In Hamiltonian mechanics, the state evolution of a multi-object physical system, such as an electromagnetic field, double pendulum, or spring network [43, 44], adheres to well-established physical laws. For instance, in a system of charged particles, each particle generates an electromagnetic field that influences other particles. The Hamiltonian for such a system includes terms representing the kinetic and potential energy of each particle, along with terms representing the interactions between particles via their electromagnetic fields. In this paper, we propose a novel concept of information propagation between graph nodes, where interactions follow a similar Hamiltonian style.
In a Hamiltonian system, the position \(q\) and momentum \(p\) together constitute the phase space \((q,p)\), which comprehensively characterizes the system's evolution. In our HANG model, we process the raw node features of the graph using a linear input layer, yielding \(2r\)-dimensional vectors. Following the methodologies introduced in the GNN work [31, 35], we split the \(2r\) dimensions into two equal halves, with the first half serving as the feature (position) vector and the second half as "momentum" vector that guides the system evolution. Concretely, each node \(k\) is associated with an \(r\)-dimensional feature vector \(\mathbf{q}_{k}(0)=(q_{k}^{1},\ldots,q_{k}^{r})\) and an \(r\)-dimensional momentum vector \(\mathbf{p}_{k}(0)=(p_{k}^{1},\ldots,p_{k}^{r})\). Subsequently, \(\mathbf{q}_{k}(t)\) and \(\mathbf{p}_{k}(t)\) will evolve along with the propagation of information between graph nodes, with \(\mathbf{q}_{k}(0)\) and \(\mathbf{p}_{k}(0)\) serving as the initial conditions.
Following the modeling conventions in physical systems, we concatenate the feature positions of all \(|\mathcal{V}|\) vertices into a single vector, treating it as the system's generalized coordinate within an \(r|\mathcal{V}|\)-dimensional manifold, a process that involves index relabeling.3
Footnote 3: In multilinear algebra and tensor computation, vector components employ upper indices, while covector components use lower indices. We adhere to this convention.
\[q(t)=\left(q^{1}(t),\ldots q^{r|\mathcal{V}|}(t)\right)=\left(\mathbf{q}_{1}(t ),\ldots,\mathbf{q}_{|\mathcal{V}|}(t)\right). \tag{7}\]
This \(r|\mathcal{V}|\)-dimensional coordinate representation at each time instance provides a snapshot of the state of the graph system. Similarly, we concatenate all the "momentum" vectors at time \(t\) to construct an \(r|\mathcal{V}|\)-dimensional vector:
\[p(t)=\left(p_{1}(t),\ldots p_{r|\mathcal{V}|}(t)\right)=\left(\mathbf{p}_{1}(t ),\ldots,\mathbf{p}_{|\mathcal{V}|}(t)\right), \tag{8}\]
which can be interpreted as a generalized momentum vector for the entire graph system.
In physics, the system evolves in accordance with fundamental physical laws, and a conserved quantity function \(H(q,p)\) remains constant along the system's evolution trajectory. This conserved quantity is typically interpreted as the "system energy". In our HANG model, instead of defining an explicit energy function \(H(p,q)\) from a fixed physical law, we utilize a learnable energy function \(H_{\mathrm{net}}:\mathcal{G}\rightarrow\mathbb{R}^{+}\) parameterized by a neural network, referred to as the _Hamiltonian energy function_:
\[H_{\mathrm{net}}:\mathcal{G}\rightarrow\mathbb{R}^{+} \tag{9}\]
We allow the graph features to evolve according to a learnable Hamiltonian law analogous to basic physical laws. More specifically, we model the feature evolution trajectory as the following canonical Hamilton's equations, which is a restatement of (2):
\[\dot{q}(t)=\frac{\partial H_{\mathrm{net}}}{\partial p},\quad\dot{p}(t)=- \frac{\partial H_{\mathrm{net}}}{\partial q}, \tag{10}\]
with the initial features \((q(0),p(0))\in\mathbb{R}^{2r|\mathcal{V}|}\) at time \(t=0\) being the vectors after the raw node features transformation.
The neural ODE given by (10) can be trained and solved through integration to obtain the trajectory \((q(t),p(t))\). At the terminal time point \(t=T\), the system's solution is represented as \((q(T),p(T))\). We then apply the canonical projection map \(\pi\) to extract the nodes' concatenated feature vector \(q(T)\) as follows: \(\pi((q(T),p(T)))=q(T)\). This concatenated feature vector \(q(T)\) is subsequently decompressed into individual node features for utilization in downstream tasks. For this study, we employ backpropagation to minimize the cross-entropy in node classification tasks. The complete model architecture is depicted in Fig. 2, while a comprehensive summary of the full algorithm can be found in the Appendix G.
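To make the construction concrete, the sketch below is our own simplified stand-in: a toy MLP energy instead of the graph energies of Section 5, and a plain Euler loop instead of a production ODE solver (end-to-end training would additionally require differentiating through the solver or an adjoint method). It shows how a learnable scalar \(H_{\mathrm{net}}\) induces the flow (10) via automatic differentiation:

```python
import torch

class ToyHnet(torch.nn.Module):
    """Placeholder scalar energy H_net(q, p); Section 5 uses graph layers instead."""
    def __init__(self, r):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * r, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).sum()   # one scalar energy

def hamiltonian_flow(H, q, p, T=1.0, steps=100):
    """Integrate dq/dt = dH/dp, dp/dt = -dH/dq with explicit Euler."""
    dt = T / steps
    for _ in range(steps):
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        dHdp, dHdq = torch.autograd.grad(H(q, p), (p, q))
        q, p = q + dt * dHdp, p - dt * dHdq                # Hamilton's equations
    return q, p

H = ToyHnet(r=8)
q0, p0 = torch.randn(5, 8), torch.randn(5, 8)              # 5 nodes, r = 8
qT, pT = hamiltonian_flow(H, q0, p0)
```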
### Hamiltonian Energy Conservation
Referring to [45], it is known that the total Hamiltonian energy \(H_{\mathrm{net}}\) remains constant along the trajectory of its induced Hamiltonian flow. This principle is recognized as the law of energy conservation in a Hamiltonian system.
**Theorem 2**.: _If the graph system evolves in accordance with (10), the total energy \(H_{\mathrm{net}}(q(t),p(t))\) of the system remains constant. BIBO stability is achieved if \(H_{\mathrm{net}}\) remains bounded for all bounded inputs and, as \((q,p)\to\infty\), \(H_{\mathrm{net}}(q,p)\to\infty\)._
In light of Theorem 2, if our system evolves following (10), it adheres to the law of energy conservation. As a result, our model guarantees conservative stability.
**Definition 5** (Dirichlet energy [35]).: _The Dirichlet energy is defined on node features \(q(t)\) at time \(t\) of an undirected graph \(\mathcal{G}\) as_
\[\mathcal{E}(q(t))=\frac{1}{|\mathcal{V}|}\sum_{i}\sum_{j\in\mathcal{N}(i)}\| \mathbf{q}_{i}(t)-\mathbf{q}_{j}(t)\|^{2}. \tag{11}\]
Compared to GraphCON, which conserves Dirichlet energy over time \(t\)_under specific conditions_ as detailed in Section 3, the notion of Hamiltonian energy conservation is broader in scope. This is due to the fact that \(H_{\mathrm{net}}\) can always be defined as \(\mathcal{E}(q)\). Therefore, under the settings delineated in Section 3, GraphCON can be considered a particular instance of HANG.
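For reference, the Dirichlet energy (11) is straightforward to evaluate from the node features and the (dense, for simplicity) adjacency matrix; a small helper of our own:

```python
import numpy as np

def dirichlet_energy(q, W):
    """Dirichlet energy (11): (1/|V|) sum_i sum_{j in N(i)} ||q_i - q_j||^2."""
    diff = q[:, None, :] - q[None, :, :]      # pairwise differences q_i - q_j
    sq = (diff ** 2).sum(axis=-1)             # squared distances ||q_i - q_j||^2
    return ((W != 0) * sq).sum() / q.shape[0]
```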
## 5 Different Hamiltonian Energy Functions
In physical systems, the system is often depicted as a graph where two neighboring vertices with mass are connected by a spring of given stiffness and length [44]. The system's energy is thus related to the graph's topology. Similarly, in our graph system, the energy function \(H_{\mathrm{net}}\) involves interactions between neighboring nodes, signifying the importance of the graph's topology. There exist multiple ways to learn the energy function, and we present two examples below.
### Vanilla HANG
We define \(H_{\mathrm{net}}\) as a composition of two graph convolutional layers:
\[H_{\mathrm{net}}=\left\|\left(g_{\mathrm{gcn_{2}}}\circ\tanh\circ g_{\mathrm{ gcn_{1}}}\right)\left(q,p\right)\right\|_{2}, \tag{12}\]
where \(g_{\mathrm{gcn_{1}}}:\mathbb{R}^{2r\times|\mathcal{V}|}\to\mathbb{R}^{d\times|\mathcal{V}|}\) and \(g_{\mathrm{gcn_{2}}}:\mathbb{R}^{d\times|\mathcal{V}|}\to\mathbb{R}^{|\mathcal{V}|}\) are two GCN [3] layers with different hidden dimensions. A \(\tanh\) activation function is applied between the two GCN layers, and \(\|\cdot\|_{2}\) denotes the \(\ell_{2}\) norm. We concatenate \(\mathbf{q}_{k}\) and \(\mathbf{p}_{k}\) for each node \(k\) at the input of the above composite function, resulting in an input dimension of \(2r\) per node. From Theorem 2, it follows that HANG exhibits BIBO stability. If \((q(t),p(t))\) were unbounded, the value of \(H_{\mathrm{net}}\) would also become unbounded, contradicting the energy conservation principle. In subsequent discussions, this variant is referred to as HANG.
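A minimal sketch of (12) follows; a dense normalized adjacency \(\hat{\mathbf{A}}\) stands in for the GCN propagation of [3] (an assumption for brevity), and the module plugs directly into the `hamiltonian_flow` helper sketched in Section 4.1:

```python
import torch

class HnetVanilla(torch.nn.Module):
    """H_net = || gcn2(tanh(gcn1([q; p]))) ||_2, sketched per (12)."""
    def __init__(self, A_hat, r, d=16):
        super().__init__()
        self.A = A_hat                        # |V| x |V| normalized adjacency
        self.W1 = torch.nn.Linear(2 * r, d)   # gcn1: 2r -> d per node
        self.W2 = torch.nn.Linear(d, 1)       # gcn2: d -> 1 per node

    def forward(self, q, p):                  # q, p: |V| x r
        h = torch.tanh(self.W1(self.A @ torch.cat([q, p], dim=-1)))
        h = self.W2(self.A @ h).squeeze(-1)   # one scalar per node
        return torch.linalg.norm(h, 2)        # non-negative scalar energy
```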
### Quadratic HANG (HANG-quad)
Vanilla HANG, despite its conservative stability, does not in general possess Lyapunov stability; additional conditions are required. The following Lagrange-Dirichlet theorem provides a sufficient condition.
**Theorem 3** (Lagrange-Dirichlet Theorem [46]).: _Let \(\mathbf{z}_{e}\) be an equilibrium of the natural Hamiltonian system (2) with_
\[H=T(q,p)+U(q), \tag{13}\]
_where \(T\) is a positive definite, quadratic function of \(p\). Then \(\mathbf{z}_{e}\) is Lyapunov stable if its position is a strict local minimum of \(U(q)\)._
Theorem 3 implies that we can design an energy function \(H_{\mathrm{net}}\) such that the induced graph neural flow is both Lyapunov stable and energy conservative. For instance, we can define \(T\) as
\[T(q,p)=\sum_{k}\mathbf{p}_{k}^{\intercal}\left(\mathbf{A}_{G}(\mathbf{q}_{k}, \mathbf{q}_{k})\mathbf{A}_{G}^{\intercal}(\mathbf{q}_{k},\mathbf{q}_{k})+ \sigma\mathbf{I}\right)\mathbf{p}_{k}. \tag{14}\]
\(\sigma\) is a small positive number ensuring positive definiteness. The matrix \(\mathbf{A}_{G}\) can be the adjacency matrix or one learnable attention matrix. If \(U(q)\) is chosen to possess only a single, global minimum--such as \(U(q)=\|q\|\) with an \(\ell_{2}\) norm, as indicated in Section 3.1--this kind of stability does not necessarily guarantee adversarial robustness. One potential approach to this issue could be to set \(U(q)=\|\sin(q)\|\), thereby promoting Lyapunov stability at many local equilibrium points. However, this choice considerably restricts the form that \(U\) can take and may consequently limit the model's capacity. In the implementation, we set the function \(U(q)\) in \(H\) to be a single GAT [6] layer \(g_{\mathrm{gat}}:\mathbb{R}^{r\times|\mathcal{V}|}\rightarrow\mathbb{R}^{r\times|\mathcal{V}|}\) with a \(\sin\) activation function followed by an \(\ell_{2}\) norm, i.e., \(U(q)=\left\|\sin\left(g_{\mathrm{gat}}(q)\right)\right\|_{2}\).
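A sketch of the resulting energy is given below; note that we treat the attention term \(\mathbf{A}_{G}(\mathbf{q}_{k},\mathbf{q}_{k})\) as a per-node scalar \(a_{k}\) and replace the GAT layer by a plain linear map, both simplifying assumptions of ours:

```python
import torch

def hang_quad_energy(q, p, a, Wu, sigma=1e-2):
    """H = T(q, p) + U(q) per (13)-(14): T is positive definite and quadratic
    in p; U(q) = ||sin(q @ Wu)||_2 has many local minima thanks to the sine."""
    T = ((a ** 2 + sigma) * (p ** 2).sum(dim=-1)).sum()   # per-node quadratic form
    U = torch.linalg.norm(torch.sin(q @ Wu))              # sin keeps U bounded
    return T + U
```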
The relationship between the stability of HANG and HANG-quad is summarized in Table 1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
Graph Neural Flows & BIBO stability & Lyapunov stability & Structural stability & Conservative stability \\ \hline
GRAND & ✓* & ✓* & ✓* & ✓* \\
BLEND & ✓* & ✓* & ✓* & ✓* \\
GraphCON & ✓* & × & × & ✓* \\
GraphBel & ✓* & ✓* & ✓* & ✓* \\
HANG & × & × & × & ✓ \\
HANG-quad & × & ✓ & × & ✓ \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The stability summary for different graph neural flows, where ✓* denotes that stability is affirmed only under additional conditions.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline
Dataset & Attack & HANG & HANG-quad & GraphCON & GraphBel & GRAND & GAT & GraphSAGE & GCN \\ \hline
\multirow{4}{*}{Cora} & _clean_ & 87.13\(\pm\)0.86 & 79.68\(\pm\)0.62 & 86.27\(\pm\)0.51 & 86.13\(\pm\)0.51 & 87.53\(\pm\)0.59 & 87.58\(\pm\)0.64 & 86.65\(\pm\)1.51 & 88.31\(\pm\)0.48 \\
 & PGD & 78.37\(\pm\)1.84 & 79.05\(\pm\)0.42 & 42.81\(\pm\)0.30 & 40.04\(\pm\)0.68 & 39.65\(\pm\)1.32 & 38.27\(\pm\)2.73 & 35.43\(\pm\)5.05 & 35.83\(\pm\)0.71 \\
 & TDGIA & 79.76\(\pm\)0.99 & 78.94\(\pm\)0.65 & 43.02\(\pm\)0.34 & 39.10\(\pm\)0.80 & 41.77\(\pm\)2.70 & 39.39\(\pm\)5.77 & 34.38\(\pm\)2.25 & 33.05\(\pm\)1.09 \\
 & MetaGIA & 77.48\(\pm\)1.02 & 78.28\(\pm\)0.56 & 42.30\(\pm\)0.33 & 39.93\(\pm\)0.59 & 39.36\(\pm\)1.26 & 39.49\(\pm\)1.97 & 38.14\(\pm\)2.23 & 35.84\(\pm\)0.73 \\ \hline
\multirow{4}{*}{Citeseer} & _clean_ & 74.11\(\pm\)0.62 & 71.85\(\pm\)0.48 & 74.84\(\pm\)0.49 & 69.62\(\pm\)0.56 & 74.98\(\pm\)0.45 & 67.87\(\pm\)4.97 & 63.22\(\pm\)9.14 & 72.63\(\pm\)1.14 \\
 & PGD & 72.31\(\pm\)1.16 & 71.07\(\pm\)0.41 & 40.56\(\pm\)0.36 & 55.67\(\pm\)3.53 & 36.68\(\pm\)1.05 & 32.65\(\pm\)3.80 & 32.70\(\pm\)5.11 & 30.69\(\pm\)2.33 \\
 & TDGIA & 71.21\(\pm\)0.52 & 71.09\(\pm\)0.40 & 36.67\(\pm\)1.25 & 34.74\(\pm\)4.68 & 36.67\(\pm\)1.25 & 30.53\(\pm\)3.57 & 36.11\(\pm\)2.94 & 21.10\(\pm\)2.35 \\
 & MetaGIA & 72.92\(\pm\)0.66 & 71.60\(\pm\)0.48 & 43.86\(\pm\)2.12 & 45.60\(\pm\)3.41 & 46.23\(\pm\)2.01 & 37.68\(\pm\)4.04 & 35.75\(\pm\)5.50 & 35.86\(\pm\)0.68 \\ \hline
\multirow{4}{*}{Coauthor-CS} & _clean_ & 96.16\(\pm\)0.09 & 59.27\(\pm\)1.12 & 95.10\(\pm\)0.12 & 93.93\(\pm\)0.48 & 95.08\(\pm\)0.12 & 92.84\(\pm\)0.41 & 93.09\(\pm\)0.39 & 93.33\(\pm\)0.73 \\
 & PGD & 94.80\(\pm\)0.33 & 95.08\(\pm\)0.23 & 42.68\(\pm\)1.13 & 73.15\(\pm\)2.90 & 74.96\(\pm\)1.24 & 43.22\(\pm\)32.58 & 7.92\(\pm\)2.58 & 11.02\(\pm\)5.04 \\
 & TDGIA & 95.40\(\pm\)0.13 & 95.09\(\pm\)0.09 & 7.92\(\pm\)4.11 & 43.13\(\pm\)3.58 & 5.05\(\pm\)1.43 & 16.08\(\pm\)15.74 & 6.47\(\pm\)4.25 & 3.61\(\pm\)1.77 \\
 & MetaGIA & 94.85\(\pm\)0.31 & 94.83\(\pm\)0.28 & 67.79\(\pm\)4.04 & 73.98\(\pm\)8.26 & 84.31\(\pm\)4.26 & 52.01\(\pm\)24.21 & 7.82\(\pm\)2.69 & 15.70\(\pm\)3.95 \\ \hline
\multirow{4}{*}{Pubmed} & _clean_ & 89.91\(\pm\)0.27 & 88.10\(\pm\)0.33 & 88.78\(\pm\)0.46 & 86.97\(\pm\)0.37 & 88.44\(\pm\)0.34 & 87.41\(\pm\)1.73 & 88.71\(\pm\)0.37 & 88.46\(\pm\)0.20 \\
 & PGD & 81.81\(\pm\)1.94 & 89.67\(\pm\)0.57 & 45.06\(\pm\)0.51 & 46.06\(\pm\)0.97 & 44.61\(\pm\)2.78 & 49.84\(\pm\)12.99 & 44.62\(\pm\)6.49 & 99.03\(\pm\)0.10 \\
 & TDGIA & 86.62\(\pm\)1.05 & 87.55\(\pm\)0.60 & 46.30\(\pm\)1.56 & 52.24\(\pm\)0.68 & 44.99\(\pm\)1.10 & 47.56\(\pm\)3.11 & 47.61\(\pm\)0.91 & 42.64\(\pm\)1.41 \\
 & MetaGIA & 87.58\(\pm\)0.75 & 87.04\(\pm\)0.62 & 45.53\(\pm\)1.18 & 50.04\(\pm\)0.64 & 44.36\(\pm\)1.20 & 44.75\(\pm\)2.53 & 24.29\(\pm\)0.53 & 40.42\(\pm\)0.17 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Node classification accuracy (%) on graph **injection, evasion, non-targeted** attack in **inductive** learning. The best and the second-best result for each criterion are highlighted in red and blue respectively.
Figure 2: The model architecture: each node is assigned a learnable “momentum” vector at time \(t=0\) which initializes the evolution of the system together with the node features. The graph features evolve following a _learnable_ law (10) derived from \(H_{\mathrm{net}}\). At time \(t=T\), we use \(q(T)\) as the final node feature. \(H_{\mathrm{net}}(q(t),p(t))\) is a learnable graph energy function.
## 6 Experiments
In this section, we conduct a comprehensive evaluation of our theoretical findings and assess the robustness of two conservative stable models: HANG and HANG-Quad. We compare their performance against various benchmark GNN models, including GAT [47], GraphSAGE [48], GCN [3] and other prevalent graph neural flows. We incorporate different types of graph adversarial attacks as described in Section 6.1 and Section 6.3. These attacks are conducted in a black-box setting, where a surrogate model is trained to generate perturbed graphs or features. For more experiments, we direct readers to Appendix C.
### Graph Injection Attacks (GIA)
We implement various GIA algorithms following the methodology in [49]. This framework consists of node injection and feature update procedures. Node injection involves generating new edges for injected nodes using gradient information or heuristics. We use the PGD-GIA method [49] to randomly inject nodes and determine their features with the PGD algorithm [50]. TDGIA [12] identifies topological vulnerabilities to guide edge generation and optimizes a smooth loss function for feature generation. MetaGIA [49] performs iterative updates of the adjacency matrix and node features using gradient information. Our datasets include citation networks (Cora, Citeseer, Pubmed) [51], the Coauthor academic network [52], an Amazon co-purchase network (Computers) [52], and the Ogbn-Arxiv dataset [53]. For inductive learning, we follow the data splitting method in the GRB framework [54], with 60% for training, 10% for validation, and 20% for testing. Details on data statistics and attack budgets can be found in Appendix C.1. Targeted attacks are applied to the Ogbn-Arxiv and Computers datasets [49]. Additional results for various attack strengths and white-box attacks can be found in Appendix C.4 and Appendix C.3, respectively.
### Performance Results Under GIAs
Upon examining the results in Table 2 and Table 3 pertaining to experiments conducted under GIA conditions, the robustness of our proposed HANG and HANG-quad is notably prominent across different GIA scenarios. Interestingly, GRAND, despite its Lyapunov stability as analyzed in Theorem 1, does not significantly outperform GAT under certain attacks. In contrast, HANG consistently displays robustness against attacks. Notably, HANG-quad exhibits superior performance to HANG on the Pubmed dataset under GIA perturbations, underscoring the effectiveness of integrating both Lyapunov stability and Hamiltonian mechanics to boost robustness.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline
Dataset & Ptb Rate (\%) & HANG & HANG-quad & GraphCON & GraphBel & GRAND & GAT & GCN & RGCN & GCN-SVD & Pro-GNN \\ \hline
\multirow{6}{*}{Polblogs} & 0 & 94.77\(\pm\)1.07 & 94.63\(\pm\)1.06 & 93.14\(\pm\)0.84 & 85.13\(\pm\)2.22 & 95.57\(\pm\)0.44 & 95.35\(\pm\)0.20 & 95.69\(\pm\)0.38 & 95.22\(\pm\)0.14 & 95.31\(\pm\)0.18 & 93.20\(\pm\)0.64 \\
 & 5 & 80.19\(\pm\)2.92 & 94.38\(\pm\)0.82 & 70.24\(\pm\)2.71 & 51.84\(\pm\)3.38 & 77.58\(\pm\)3.46 & 83.69\(\pm\)1.45 & 70.77\(\pm\)0.80 & 74.34\(\pm\)0.19 & 89.09\(\pm\)0.22 & 93.92\(\pm\)0.18 \\
 & 10 & 73.49\(\pm\)4.32 & 92.94\(\pm\)1.56 & 71.87\(\pm\)1.71 & 56.54\(\pm\)2.30 & 77.99\(\pm\)1.35 & 76.32\(\pm\)0.85 & 70.22\(\pm\)1.13 & 71.04\(\pm\)0.34 & 81.24\(\pm\)0.49 & 89.42\(\pm\)1.09 \\
 & 15 & 71.65\(\pm\)1.34 & 90.58\(\pm\)2.43 & 69.00\(\pm\)0.90 & 53.41\(\pm\)1.08 & 73.84\(\pm\)1.46 & 68.80\(\pm\)1.14 & 64.96\(\pm\)1.91 & 67.28\(\pm\)0.38 & 68.10\(\pm\)1.33 & 86.04\(\pm\)2.71 \\
 & 20 & 66.27\(\pm\)3.39 & 89.19\(\pm\)3.72 & 64.06\(\pm\)2.31 & 52.18\(\pm\)0.54 & 69.14\(\pm\)1.32 & 51.50\(\pm\)1.63 & 51.27\(\pm\)1.23 & 59.89\(\pm\)0.34 & 57.33\(\pm\)1.35 & 79.56\(\pm\)5.68 \\
 & 25 & 65.30\(\pm\)2.33 & 86.89\(\pm\)3.80 & 50.56\(\pm\)1.87 & 81.39\(\pm\)1.36 & 67.55\(\pm\)1.65 & 51.19\(\pm\)1.49 & 49.23\(\pm\)1.36 & 56.02\(\pm\)0.56 & 48.66\(\pm\)0.93 & 63.18\(\pm\)4.40 \\ \hline
\multirow{6}{*}{Pubmed} & 0 & 85.08\(\pm\)0.20 & 85.23\(\pm\)0.14 & 86.65\(\pm\)0.17 & 84.02\(\pm\)0.26 & 85.06\(\pm\)0.26 & 83.73\(\pm\)0.40 & 87.19\(\pm\)0.09 & 86.16\(\pm\)0.18 & 83.44\(\pm\)0.21 & 87.33\(\pm\)0.18 \\
 & 5 & 85.08\(\pm\)0.18 & 85.12\(\pm\)0.18 & 86.52\(\pm\)1.41 & 89.19\(\pm\)0.26 & 84.11\(\pm\)0.30 & 78.00\(\pm\)0.44 & 83.09\(\pm\)0.13 & 81.08\(\pm\)0.20 & 84.41\(\pm\)0.15 & 87.35\(\pm\)0.09 \\
 & 10 & 85.17\(\pm\)0.23 & 85.05\(\pm\)0.19 & 86.41\(\pm\)0.13 & 84.62\(\pm\)0.26 & 84.24\(\pm\)0.18 & 74.93\(\pm\)0.38 & 81.21\(\pm\)0.09 & 77.51\(\pm\)0.27 & 83.27\(\pm\)0.21 & 87.25\(\pm\)0.09 \\
 & 15 & 85.04\(\pm\)0.22 & 85.15\(\pm\)0.17 & 86.21\(\pm\)0.15 & 84.83\(\pm\)0.20 & 83.74\(\pm\)0.34 & 71.13\(\pm\)0.51 & 78.66\(\pm\)0.12 & 73.91\(\pm\)0.25 & 83.10\(\pm\)0.18 & 87.02\(\pm\)0.09 \\
 & 20 & 85.20\(\pm\)0.19 & 85.03\(\pm\)0.19 & 86.00\(\pm\)0.18 & 84.89\(\pm\)0.45 & 83.88\(\pm\)0.20 & 68.21\(\pm\)0.96 & 73.73\(\pm\)0.19 & 71.18\(\pm\)0.31 & 83.01\(\pm\)0.22 & 87.09\(\pm\)0.10 \\
 & 25 & 85.06\(\pm\)0.17 & 84.99\(\pm\)0.16 & 86.04\(\pm\)0.14 & 85.07\(\pm\)0.15 & 83.66\(\pm\)0.25 & 65.41\(\pm\)0.77 & 75.05\(\pm\)0.17 & 67.95\(\pm\)0.15 & 82.72\(\pm\)0.18 & 86.71\(\pm\)0.09 \\ \hline \hline
\end{tabular}
Table 4: Node classification accuracy (%) under modification, poisoning, non-targeted attack (Metattack) in **transductive** learning.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline
Dataset & Attack & HANG & HANG-quad & GraphCON & GraphBel & GRAND & GAT & GraphSAGE & GCN \\ \hline
\multirow{3}{*}{Computers} & PGD & 90.83\(\pm\)0.53 & 87.53\(\pm\)0.99 & 74.01\(\pm\)4.87 & 89.33\(\pm\)0.56 & 65.75\(\pm\)5.00 & 35.72\(\pm\)1.07 & 18.33\(\pm\)0.57 & 17.80\(\pm\)0.06 \\
 & TDGIA & 90.88\(\pm\)0.50 & 87.75\(\pm\)0.53 & 80.11\(\pm\)3.21 & 82.81\(\pm\)0.63 & 77.18\(\pm\)7.60 & 66.09\(\pm\)17.07 & 50.89\(\pm\)1.30 & 66.51\(\pm\)3.90 \\
 & MetaGIA & 90.73\(\pm\)0.64 & 89.32\(\pm\)0.48 & 87.95\(\pm\)2.65 & 89.27\(\pm\)0.57 & 81.85\(\pm\)4.66 & 60.28\(\pm\)1.11 & 37.51\(\pm\)4.49 & 36.19\(\pm\)1.00 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Node classification accuracy (%) on graph **injection**, evasion, **targeted** attack in **inductive** learning.
Although other graph neural flows might demonstrate a range of improved performance compared to conventional GNN models under GIA, the degree of improvement is not consistently distinct. Despite the pronounced association between conservative stability in Hamiltonian systems and adversarial robustness, a clear relationship between adversarial robustness and the other stability notions of the graph neural flows outlined in Table 1 is not immediately discernible. The performance differential between the HANG variants and other graph neural flows further underscores the potential of our proposed Hamiltonian models in enhancing robustness against GIA attacks.
### Graph Modification Attacks
To evaluate the robustness of our proposed conservative models, we conducted graph modification adversarial attacks using the Metattack method [17]. We followed the attack setting described in Pro-GNN [55] and utilized the perturbed graphs provided by the library [56] to ensure a fair comparison. The perturbation rate, which indicates the proportion of altered edges, was varied from 0% to 25% in 5% increments. For comparison, we also considered other defense models for GNNs, namely Pro-GNN [55], RGCN [57], and GCN-SVD [58]. We report the results of the baseline models from [55].
### Performance Results Under Modification/Poisoning/Transductive Attacks
In the case of the Polblogs dataset [59], as shown in Table 4, our proposed HANG-quad model demonstrates superior performance compared to other methods, including existing defense models. This result indicates that incorporating Lyapunov stability indeed enhances HANG's robustness against graph modification and poisoning attacks. For the Pubmed dataset, we note that the impact of Meta-attacks of varying strengths on _all_ graph neural flows, including our proposed ones, is negligible. Conversely, traditional GNN models such as GAT, GCN, and RGCN are marginally affected as the attack strength escalates. This observation underlines the robustness of graph neural flows, including our proposed models, against Meta-attacks on this dataset.
### Combination with other defense mechanisms
It merits noting that our models, HANG and HANG-quad, can be readily integrated with additional defense mechanisms against adversarial attacks. These include Adversarial Training (AT) [60] and other preprocessing methods such as GNNGUARD [61]. This integration can further bolster the robustness of the HANG model. To validate this enhancement, extensive experiments are conducted, with results detailed in Appendix C.8 and Appendix C.9.
## 7 Conclusion
In this paper, we conducted a comprehensive study on stability notions in the context of graph neural flows and made significant findings. While Lyapunov stability is frequently employed, it alone may not suffice in guaranteeing robustness against adversarial attacks. With a grounding in foundational physics principles, we proposed a shift towards conservative Hamiltonian neural flows for crafting GNNs resilient against adversarial attacks. Our empirical comparisons across diverse neural flow GNNs, as tested on multiple benchmark datasets subjected to a range of adversarial attacks, have further corroborated this proposition. Notably, GNNs that amalgamate conservative Hamiltonian flows with Lyapunov stability exhibited marked enhancement in their robustness metrics. We are optimistic that our work will inspire further research into marrying physics principles with machine learning paradigms for enhanced security.
## 8 Acknowledgments and Disclosure of Funding
This research is supported by the Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE-T2EP20220-0002, and the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research and Development Programme. To improve the readability, parts of this paper have been grammatically revised using ChatGPT [62]. |
2307.13432 | High performance artificial visual system with plasmon-enhanced 2D
material neural network | Artificial visual systems (AVS) have gained tremendous momentum because of
its huge potential in areas such as autonomous vehicles and robotics as part of
artificial intelligence (AI) in recent years. However, current machine visual
systems composed of complex circuits based on complementary metal oxide
semiconductor (CMOS) platform usually contain a photosensor array, format
conversion, memory and processing module. The large amount of redundant data
shuttling between each unit results in large latency and high power
consumption, which greatly limits the performance of the AVS. Here, we
demonstrate an AVS based on a new design concept, which consists of hardware
devices connected in an artificial neural network (ANN) that can simultaneously
sense, pre-process and recognize optical images without latency. The Ag
nanograting and the two-dimensional (2D) heterostructure integrated plasmonic
phototransistor array (PPTA) constitute the hardware ANN, and its synaptic
weight is determined by the adjustable regularized photoresponsivity matrix.
The eye-inspired pre-processing function of the device under photoelectric
synergy ensures the considerable improvement of the efficiency and accuracy of
subsequent image recognition. The comprehensive performance of the
proof-of-concept device demonstrates great potential for machine vision
applications in terms of large dynamic range (180 dB), high speed (500 ns) and
ultralow energy consumption per spike (2.4e(-17) J). | Tian Zhang, Xin Guo, Pan Wang, Linjun Li, Limin Tong | 2023-07-25T11:53:16Z | http://arxiv.org/abs/2307.13432v1 | ## High performance artificial visual system with plasmon-enhanced 2D material neural network
## Abstract
Artificial visual systems (AVS) have gained tremendous momentum in recent years because of their huge potential in areas such as autonomous vehicles and robotics as part of artificial intelligence (AI). However, current machine visual systems composed of complex circuits based on the complementary metal oxide semiconductor (CMOS) platform usually contain a photosensor array, format conversion, memory and processing modules. The large amount of redundant data shuttling between these units results in large latency and high power consumption, which greatly limits the performance of the AVS. Here, we demonstrate an AVS based on a new design concept, which consists of hardware devices connected in an artificial neural network (ANN) that can simultaneously sense, pre-process and recognize optical images without latency. The Ag nanograting and the two-dimensional (2D) heterostructure integrated plasmonic phototransistor array (PPTA) constitute the hardware ANN, and its synaptic weight is determined by the adjustable regularized photoresponsivity matrix. The eye-inspired pre-processing function of the device under photoelectric synergy ensures the considerable improvement of the efficiency and accuracy of subsequent image recognition. The comprehensive performance of the proof-of-concept device
demonstrates great potential for machine vision applications in terms of large dynamic range (180 dB), high speed (500 ns) and ultralow energy consumption per spike (\(2.4\times 10^{-17}\) J).
## Introduction
The human visual system is mainly composed of the eyes and the visual cortex of the brain [1, 2]. The retina of the eye is normally used to capture external optical information and perform first-stage image pre-processing [3, 4, 5]. The regulated visual signals are transmitted to the neural network of the visual center for final processing and recognition [6, 7]. Accordingly, a variety of bio-inspired AVS have emerged that emulate certain functions of the human eye and of neural-network image processing, performing typical image-processing functionalities that include image-contrast enhancement [1, 2, 8, 9], noise suppression [10, 11], visual adaptation [5, 12], detection and recognition [13, 14, 15, 16, 17, 18], and auto-encoding [19]. However, for current AVS, a hardware solution with both the pre-processing function of the human retina and the image recognition capability of the visual cortex has not been reported, especially for time-critical applications [18, 19]. There is a high demand to develop multifunctional electronic devices to meet the challenges of next-generation machine vision. Additionally, developing low-power and high-efficiency AVS has become a major research focus, where the most critical issue to be addressed is the efficient conversion of optical images into electrical digital signals.
Plasmonic energy conversion has been considered a promising alternative to drive a wide range of physical and chemical processes [20]. This emerging method is based on the generation of hot electrons, whose energy distribution deviates substantially from the equilibrium Fermi-Dirac distribution, in plasmonic nanostructures after light absorption through non-radiative electromagnetic decay of surface plasmons [21, 22, 23, 24, 25]. Plasmonics can further enable strong light-matter interactions in 2D materials [26, 27], while 2D semiconductors themselves have excellent optoelectronic properties [28, 29, 30] such as ultrafast response [31, 32], external tunability [19, 33] and a large photothermoelectric effect [34]. 2D materials technology has by now achieved a sufficiently high level of maturity for integration with conventional complex electronic systems [35, 36, 38]. Herein, we present a PPTA constructed of nanogratings and 2D heterostructures, which constitutes an ANN that integrates
simultaneous sensing, pre-processing and image recognition functions. The plasmonic phototransistor (PPT) takes advantage of the strong coupling of photonic and electronic resonances in an elaborately designed device, in which hot electrons are injected efficiently into the floating gate and produce a large photoelectric effect, to simulate the response of the human retina to optical color information. Moreover, the electrical dynamic modulation of the gate electrode can effectively enlarge the dynamic range of the device for image pre-processing functions (image contrast enhancement). Further real-time image recognition is realized by training the network through varying the drain-source voltage to set the photoresponsivity value of each pixel individually. As a result, the AVS integrated with image pre-processing and ANN can effectively improve the image quality, and increase the efficiency and the accuracy of image recognition.
## Results
Figure 1a illustrates the schematic structure of a 2D PPT, which consists of a 2D MoS\({}_{2}\)/Ag nanograting integrated structure on the left and a 2D MoS\({}_{2}\)/h-BN/WSe\({}_{2}\) heterostructure on the right. The left part of the device mimics the sensing and pre-processing functions of the human retina for color information (Extended Data Fig. 1a) using light-excited waveguide-plasmon polaritons (WPPs) [37] and electrical modulation of the gate electrode, respectively (see Fig. 2 for more details of the mechanism). The photocurrent signal processed in the first stage can be passed to the floating gate on the right side of the device to induce the channel current, which is analogous to the way visual information is transmitted through the optic nerve to each neuron in the visual center via synaptic interconnections (Extended Data Figs. 1a, b). The photoresponsivity (synaptic weight) of the device is modulated by changing the drain-source voltage to emulate the regulation of neurotransmitter release between biological synapses (Extended Data Fig. 1c). To avoid unnecessary direct photocurrents in the channel, the right side is covered by the Al\({}_{2}\)O\({}_{3}\)/Au layer. Interconnecting each 2D PPT (subpixel) in the form of an ANN constitutes an AVS with image sensing, pre-processing and recognition functions (Fig. 1b). It contains \(N\) pixels, which form the imaging array, and each pixel is divided into \(M\) subpixels. The circuit connections of \(M\) subpixels and \(N\) pixels are presented in Figs. 1c,d, respectively. Each subpixel delivers a photocurrent of \(I_{mn}=R_{mn}P_{n}\) under illumination,
where \(R_{mn}\) is the regularized photoresponsivity of the subpixel and \(P_{n}\) denotes the optical power at the \(n\)th pixel. \(n=1\), \(2\),..., \(N\) and \(m=1\), \(2\),..., \(M\) denote the pixel and subpixel indices, respectively.
**Fig. 1 \(|\)** AVS-inspired 2D ANN PPTA. **a**, Schematic of a 2D PPT. **b**, Disassembled diagram of the 2D ANN PPTA. The current induced by subpixels of the same color in the WSe\({}_{2}\) channel layer is connected in parallel by wires of the same color to generate an output current \(I_{M}\). **c**, **d**, Circuit diagram of the \(n\)th pixel (c) and \(M^{\mathrm{e}}\times N^{\mathrm{e}}\) subpixels (d) in the array, where \(M^{\mathrm{e}}\) is a subset of \(M\), representing a certain number of subpixels among \(M\) subpixels with the same \(m\) index, and \(N^{\mathrm{e}}\) is a subset of \(N\), representing a certain number of pixels among \(N\) pixels. **e**, Illustration of an AVS based on the 2D PPT for image pre-processing and an ANN for image recognition. **f**, Scanning electron microscopy (SEM) image of the PPTA. Scale bar, 20 \(\upmu\)m. GND, ground electrode. **g**, High-resolution scanning transmission electron microscope image captured from the black box in (f) and energy dispersive X-ray spectroscopy mapping. Scale bars, 8 nm (left) and 40 nm (right).
The schematic of a classifier is provided in Fig. 1e. The array is operated as a single-layer perceptron using pre-processed visual information as the input layer. Here, we chose the softmax function \(\phi_{m}(I)=\mathrm{e}^{I_{m}\xi}/\sum_{k=1}^{M}\mathrm{e}^{I_{k}\xi}\) as the nonlinear activation function to generate the neuron output off-chip, where \(\xi=10^{11}\,\text{A}^{-1}\) is a scaling factor. To classify images **P** into different categories **y** with a supervised learning algorithm, we chose a binary encoding in which each of the three letters corresponds to one output code. Following the elaborated design concept of the 2D PPTA, we fabricated the actual device as shown in Fig. 1f. This device consists of 27 subpixels (\(N\times M=27\)), of which every 9 subpixels were arranged to form a 3\(\times\)3 imaging array (\(N=9\)) with a subpixel size of about 17\(\times\)5 \(\upmu\text{m}^{2}\). The sample fabrication process is provided in Extended Data Fig. 2 (for details, see Methods). A schematic of the entire circuit connections of the array is presented in Extended Data Fig. 3. Summing all photocurrents generated by the 9 PPTs with the same subpixel index \(m\) according to Kirchhoff's law, the output \(I_{m}\) is expressed as
\[I_{m}=\sum_{n=1}^{N}I_{mn}=\sum_{n=1}^{N}R_{mn}P_{n} \tag{1}\]
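In software terms, the readout of (1) followed by the off-chip softmax amounts to a single matrix-vector product; a small sketch with hypothetical responsivity and power values:

```python
import numpy as np

def avs_forward(R, P, xi=1e11):
    """I_m = sum_n R[m, n] P[n] (Eq. (1)), then phi_m = softmax(I_m * xi)."""
    I = R @ P                        # M output currents, in amperes
    z = I * xi                       # scaling factor xi = 1e11 A^-1
    z -= z.max()                     # numerical stabilization
    return np.exp(z) / np.exp(z).sum()

M, N = 3, 9                          # 3 output neurons, 3x3 pixel array
R = np.random.uniform(-15e-12, 15e-12, (M, N))   # responsivities, A per uW
P = np.random.uniform(0.0, 10.0, N)              # optical powers, uW
print(avs_forward(R, P))             # class probabilities for 'z', 'j', 'u'
```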
Figure 1g shows the high-resolution scanning transmission electron microscope and energy dispersive X-ray spectroscopy element mapping characterizations of a single subpixel in the black box in Fig. 1f, indicating a clean heterostructure interface.
**Fig. 2 \(|\)** Schematic illustrations of the mechanisms of the 2D PPT. **a**, A diagrammatic model proposed to describe the whole physical process of the strong coupling between the LSPR mode and the waveguide mode and its relaxation. **b**, Simplified band diagram illustrating the hot electron injection process taking place at the Ag-MoS\({}_{2}\) interface. In addition to receiving hot electrons emitted from the Ag nanograting, MoS\({}_{2}\) itself can also generate a small amount of electrons after receiving light. **c**, The simulated transmittance spectrum of the Ag nanograting-ITO waveguide integrated structure dependent on the grating period, showing the classic Rabi splitting. **d**, The calculated electric field distribution at the 320 nm grating period corresponding to each branch at red (R), green (G) and blue (B) wavelengths in the strong coupling regime. Scale bar, 180 nm. **e-h**, Charge-flow illustrations and schematic band diagrams at different operation modes: light on (**e**), light off (**f**), light on and apply \(-V_{\mathrm{G}}\) (**g**), light on and apply \(+V_{\mathrm{G}}\) (**h**). The blue balls denote the holes, the magenta balls denote the electrons, and the magenta arrows indicate the flow direction of the electrons. \(E_{\mathrm{T}}\) represents the thermoelectric potential, \(E_{\mathrm{A}}\) represents the accumulated
potential, and \(E_{\mathrm{G}}\) represents the gate potential. The black arrows represent the direction of each potential. The black dotted arrow represents the direction of the electron transition.
In order to understand the mechanism of the 2D PPT, we present a scenario for elaboration below. As shown in Fig. 2a, following light absorption and localized surface plasmon resonance (LSPR) excitation in the Ag nanograting, the electromagnetic resonance can be damped radiatively by re-emission of photons, or non-radiatively through transferring the energy to hot electrons via Landau damping [20; 21]. In the subsequent hot electron injection [24] (Fig. 2b), hot electrons with momentum within the escape cone [23] can be rapidly emitted into MoS\({}_{2}\) through ohmic contacts during the relaxation time [22; 27]. At the same time, 2D MoS\({}_{2}\) itself also produces a fraction of energetic hot electrons after absorbing light energy, although the effect of this fraction is minimal (see Extended Data Fig. 4a-c). Figure 2c shows the simulated normalized transmittance mapping of the grating period from 250 to 450 nm in the visible region (for details, see Methods), where Rabi splitting can be clearly observed as a distinguishing characteristic of the strong coupling. It is worth mentioning that the upper, middle and lower three hybrid branches are caused by the coupling of the symmetric and antisymmetric modes in the waveguide with the LSPR mode, respectively, and the bottom two branches are caused by the presence of the mode in the quartz substrate, which is independent of the strong coupling modes. We choose the three eigenenergies corresponding to red (632 nm), green (535 nm) and blue light (469 nm) when the grating period is 320 nm as the eigenvalues of the three-coupled oscillator model to analyze the strong coupling of this structure. The obtained Rabi splitting (\(\Omega\approx 680\) meV) satisfies the strong coupling criterion between these three oscillators, that is, \(\Omega>\sum_{i=\mathrm{Pl},\,\mathrm{Sym},\,\mathrm{Asym}}\left(\mathbf{W}\cdot\mathbf{P}^{i}\right)\gamma_{i}\), where \(\mathbf{W}=\left(W_{Upper},\ W_{Middle},\ W_{Lower}\right)\) are the weights of each hybrid branch, \(\mathbf{P}^{i}=\left(P^{i}_{Upper},\ P^{i}_{Middle},\ P^{i}_{Lower}\right)\) represents the proportion of uncoupled states in each branch, and \(\gamma_{i}\) represents the linewidth of each uncoupled mode (for details, see Methods). The electric field distribution corresponding to the eigenenergy of each branch at the period of 320 nm is provided in Fig. 2d. It can be clearly found that the coupling between the LSPR mode and the waveguide mode leads to energy exchange. The above mechanism suggests that
the 2D PPT can respond to optical color information, and, compared with previous studies [27; 37], this is the first time that splitting into three absorption peaks in the visible range has been achieved. Thus, by exploiting the hybrid LSPR and waveguide modes, we realize highly efficient photoelectric conversion, while the limitation of the narrow response wavelength of the LSPR can be surmounted by adjusting the dimensions of the Ag nanograting structure.
On the other hand, the hot electrons that cannot be emitted from the decay of plasmons can generate enormous heat on the picosecond scale, which leads to a balance between the thermoelectric potential \(E_{\mathrm{T}}\) (Extended Data Fig. 4d-f) and the accumulated electropotential \(E_{\mathrm{A}}\), as shown in Figs. 2e,f [34]. With such a mechanism, the device can respond to different luminance (gray scale of an image). When the light is turned on and a negative side gate voltage \(-V_{\mathrm{G}}\) is applied, the electrons will be more easily transferred from the left side of MoS\({}_{2}\) to the right side, as there is an additional gate potential \(E_{\mathrm{G}}\) (Fig. 2g). Accordingly, a larger channel current will be induced by the floating gate. Conversely, by applying a positive gate voltage \(+V_{\mathrm{G}}\) while the light is turned on, the electrons will be dragged to the left side because of the additional gate potential \(E_{\mathrm{G}}\) (Fig. 2h). The holes left on the right side of the floating gate lead to electron doping of the channel, which gives low conductance since WSe\({}_{2}\) is a p-type semiconductor. The mechanism of the device described in Figs. 2g,h can be used to eliminate redundant information. Finally, the regulation of the photoresponsivity of a single device can be realized by changing the drain-source voltage, which can be used to train the weights in the ANN formed by the interconnected devices.
Having described the design concept of the AVS, we next present its feasibility from an experimental perspective. The optical experimental setup is shown in Extended Data Fig. 5a,b and the electrical experimental setup is shown in Extended Data Fig. 6a (for details, see Methods). Here we choose red light of \(\lambda=635\) nm, and its power (0-10 \(\upmu\)W) is divided into 11 levels. Figure 3a presents the multi-state photocurrents corresponding to different levels of optical power. These photocurrents are graphically visualised as 11 grey levels in the normalized 0-1 interval. By measuring the photocurrent corresponding to three wavelengths of light at the same power P = 10 \(\mu\)W, we can distinguish red (635 nm), green (532 nm) and blue colors (473 nm) when \(V_{\text{DS}}\) = 0.1 V (Fig. 3b). This is caused by the different absorption rates of the device for the corresponding three wavelengths of light in the strong coupling mechanism, as shown in Fig. 3c. Next, we performed photocurrent-voltage (\(I_{\text{PH}}\)-\(V_{\text{DS}}\)) characteristic measurements under different optical powers (Fig. 3d). It shows a linear dependence of the photocurrent on the voltage over a wide voltage range, which indicates that the device is dominated by ohmic contacts. Then, we extracted the photocurrent as a function of optical power under different \(V_{\text{DS}}\) values (inset in Fig. 3e). An almost symmetrical and adjustable (trainable) linear photoresponsivity between -15 and +15 pA/\(\mu\)W can be obtained by varying the \(V_{\text{DS}}\) (Fig. 3e). Considering the subsequent ANN training, we plotted the voltage tunable photocurrents corresponding to each grey level, as shown in Fig. 3f. Similar measurements of the optoelectronic characterization of green and blue light and the uniformity of each device are presented in Extended Data Fig. 7a-i. We also performed the photoresponsivity measurement when \(V_{\text{G}}\) = -1 V (Extended Data Fig. 8a-i), and the order-of-magnitude increase in photoresponsivity can be applied to image detection and recognition under weak light. Figure 3g-i show the transfer characteristic curves of the PPTs obtained under illumination with wavelengths of 635, 532 and 473 nm and different incident optical power. The dynamic range (DR) is defined by the equation \(\mathrm{DR}=20\times\log_{10}\left[I_{\text{max}}/I_{\text{min}}\right]\) (dB), where \(I_{\text{max}}\) and \(I_{\text{min}}\) are the photocurrent values corresponding to the maximum and minimum gate voltages, respectively. The calculated effective DR is up to 180 dB, corresponding to a photocurrent ratio of \(10^{9}\), which is almost the highest value reported to date [5; 11]. Therefore, this characteristic allows us to realize image pre-processing such as contrast enhancement and noise reduction by locally modulating the gate voltage of each pixel.

**Fig. 3 \(|\)** Functional implementation of the 2D PPT. **a**, Multi-state photocurrents corresponding to different levels of optical power (grey levels), where the laser wavelength is 635 nm and the drain-source voltage is 0.1 V. **b**, Photocurrent of different colors of light (R: 635 nm, G: 532 nm, B: 473 nm) under the same measurement conditions, where the power is 10 \(\upmu\)W and the drain-source voltage is 0.1 V. **c**, Experimentally measured normalized transmittance spectra of the WPPs structure on the left side of the device. **d**, \(I_{\text{PH}}\)-\(V_{\text{DS}}\) curves at different optical powers without any applied gate voltage. **e**, Voltage tunability of the regularized photoresponsivity. The inset shows \(I_{\text{PH}}\) versus P for different \(V_{\text{DS}}\) values. **f**, The voltage (\(V_{\text{DS}}\)) tunable photocurrent corresponding to each gray scale. **g-i**, The transfer characteristic curves of the devices with red (**g**), green (**h**) and blue (**i**) light measured under different P values at \(V_{\text{DS}}\)=1 V, respectively.
To test the integrated sensing, pre-processing and image recognition functions of the AVS chip, we used it as a classifier to recognize the letters 'z', 'j' and 'u'. For training and testing of the chip, a point-by-point scan is used to project the optical image using the setup shown in Fig. 4a (for details, see Methods). In this example of supervised learning algorithm, cross-entropy is used as the loss/cost function, the weight values were updated by backpropagation of the gradient of the loss function [19]. A detailed flow chart of the whole
AVS including the training algorithm is presented in Extended Data Fig. 6c. Figure 4b illustrates the input image with different Gaussian noise (\(\sigma=0.2\), \(0.4\)) added and the pre-processed image (\(\sigma=0.4\)), which is extracted from the drain-source current \(I_{\text{D}}\). After applying gate voltage \(V_{\text{G}}\) to certain pixels (the white pixels in Fig. 4b), the body features of the letters in the pre-processed image are obviously enhanced. The complete dataset used for training after pre-processing is given in Extended Data Fig. 9. In Fig. 4c, the accuracy of recognition with and without pre-processing of the images is plotted. With pre-processing, a recognition accuracy of 100% is reached faster. The initial and final responsivities/weights of the classifier are shown in Fig. 4d, and the measured currents and corresponding codes of the target port for each letter are depicted in Fig. 4e. Each code corresponds to a letter, and the corresponding letter is reconstructed through post-processing, as shown in Fig. 4f. To evaluate the overall performance (processing speed and energy consumption) of this network, we also performed time-resolved measurements. The experimental setup is shown in Extended Data Fig. 6b. The trigger/measurement pulse is provided in Extended Data Fig. 10a (see Methods for details). The response of a single spike in a single device measured with the assistance of gate voltage is approximately 500 ns (Extended Data Fig. 10b) and the leakage current is shown in Extended Data Fig. 10c. The dissipated energy per spike of the device with such sensitive photoresponse is approximately \(2.4\times 10^{-17}\) J, according to \(E=I\times V\times t\) [16]. Such a system may hence provide great potential for the development of ultrafast and ultralow power machine vision.
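For completeness, a minimal sketch of one off-chip training step as described above: a cross-entropy gradient is backpropagated to the responsivities, which are then clipped to the experimentally attainable window of roughly \(\pm\)15 pA/\(\upmu\)W (set per subpixel via \(V_{\text{DS}}\)); the learning rate here is illustrative only:

```python
import numpy as np

def train_step(R, P, y_onehot, xi=1e11, lr=1e-14):
    """One gradient step on the cross-entropy loss of the softmax readout."""
    I = R @ P                                  # Eq. (1): output currents
    z = I * xi - (I * xi).max()
    phi = np.exp(z) / np.exp(z).sum()          # softmax activation
    grad_I = (phi - y_onehot) * xi             # d(cross-entropy) / dI
    R = R - lr * np.outer(grad_I, P)           # backpropagate to responsivities
    return np.clip(R, -15e-12, 15e-12)         # keep weights physically realizable
```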
In conclusion, we have presented an AVS composed of a PPTA, which simultaneously integrates sensing, pre-processing and image recognition functions. By performing image pre-processing with this PPT, the image quality is effectively improved, and the efficiency and accuracy of subsequent image recognition are increased. This device exhibits great potential for machine vision applications in terms of large dynamic range, ultrafast response and ultralow power consumption.
**Fig. 4 \(|\)** AVS operation as a classifier. **a**, Schematic illustration of the optical setup for network training/operation. The resulting image is projected onto the photodiode array in a point-by-point scanning manner. **b**, Examples of images with (\(\sigma=0.4\)) and without (\(\sigma=0.2\), \(0.4\)) pre-processing of the device. **c**, Comparison of image recognition rate before and after pre-processing of the device. **d**, Responsivity distributions before (initial) and after (final) training. **e**, The measured three currents corresponding to 'z', 'j' and 'u' target ports, which are converted by the nonlinearity into binary activation codes. In each experiment, the letters 'z', 'j' and 'u' were projected onto the chip separately. **f**, The reconstructed letters after post-processing.
## Methods
### Device fabrication
The fabrication of the chip follows the procedure described in Extended Data Fig. 2. A quartz wafer was used as the original substrate, which was cleaned with acetone, isopropyl alcohol and deionized water, respectively. A layer of ITO film (\(\sim\)200 nm) was deposited on the cleaned quartz wafer using magnetron sputtering. Subsequently, an Al\({}_{2}\)O\({}_{3}\) layer (\(\sim\)40 nm) was grown on top of the ITO film by atomic layer deposition (Kurt J. Lesker ALD150LX). 2D crystals including MoS\({}_{2}\), h-BN and WSe\({}_{2}\) flakes were derived from bulk source materials by a mechanical peel-transfer method. For the transfer of the MoS\({}_{2}\) flake, it was first mechanically exfoliated onto a transparent polydimethylsiloxane film and then transferred to the substrate with the help of an optical microscope. To eliminate unnecessary stresses, the transferred 2D MoS\({}_{2}\) was annealed in an argon atmosphere. Standard e-beam lithography (EBL, Raith Voyager) and magnetron sputtering were then employed to define the Ti/Ag nanogratings on the produced structures by a lift-off approach. Next, we defined the mask with EBL and carried out reactive ion etching (RIE) with Ar/SF\({}_{6}\) plasma to separate the previously transferred MoS\({}_{2}\) sheet into 27 pixels. Afterwards, the mask was removed with acetone. 2D h-BN and WSe\({}_{2}\) flakes were also transferred to the structure using the same method described above. In order to maximize the absorption of the nanogratings, Ar/SF\({}_{6}\) plasma was again used to perform RIE on the 2D heterostructure through a mask defined by EBL. The top metal layer (gate electrode and drain-source electrode) was added by another EBL process and Cr/Au (3 nm/15 nm) evaporation. Finally, Al\({}_{2}\)O\({}_{3}\) (20 nm) and Cr/Au (20 nm/50 nm) layers were deposited on the produced heterostructures by lift-off methods using a standard EBL process and magnetron sputtering/thermal evaporation of materials.
### Experimental setup
Schematics of the experimental setup are shown in Extended Data Fig. 5 and Extended Data Fig. 6a, b. Light from a semiconductor laser (635/532/473 nm wavelength) was collimated by a lens before passing
through a linear polarizer. The polarization direction of the linear polarizer was mounted perpendicular to the long axis of the Ag nanograting, and the linearly polarized light was projected onto the structure at normal incidence. The gray level of each pixel in the optical image was set by adjusting the laser power, and then the optical image was projected onto the sample using a microscope objective with a long working distance. A source meter (Keithley, 2400) was used to supply the gate voltage to the PPT, and a source meter (Keithley, 2450) was used to supply the drain-source voltage to the PPT while measuring the output current. The sample was connected to the source meters via a home-made measurement box and BNC connection cables. For time-resolved measurements, a femtosecond pulsed laser source (BFL-1030-20B, BWT) was used, which was triggered using a lock-in amplifier (Stanford Research Systems, SR830) to emit a single pulse at a wavelength of 515 nm. The 500 ns cycle drain-source pulse voltage was provided by an arbitrary waveform generator (Keithley, 3390), and the output current was amplified by a preamplifier (Stanford Research Systems, SR570) and converted into a voltage signal, which was finally recorded by an oscilloscope (Siglent). In addition, all measurements were carried out at room temperature in an air environment.
### Simulation and strong coupling model
The transmittance spectra and electromagnetic field distributions of the structures with strong coupling were simulated using the finite-difference time-domain (FDTD) method. The plane-wave light source was projected onto the structure at normal incidence with the polarization direction perpendicular to the long axis of the Ag nanogratings. In order to highlight the strong coupling effect, we neglected the effect of the 2D materials in our experimental and theoretical simulations. Here, small-volume Ag nanorods with a height of 20 nm were selected to form the grating in order to achieve a large photoelectric conversion efficiency by reducing the proportion of radiation damping and increasing the ballistic transport probability [22] and hot-electron relaxation time [27]. All calculated data were collected while satisfying the steady-state energy criteria.
A coupled oscillator model was introduced to analyze the strong coupling behavior of the hybrid architecture under specific parameters. The plasmon of the Ag nanogratings, the symmetrized photonic mode, and the antisymmetrized mode can be treated as three oscillators. Therefore, the Hamiltonian of this three-coupled system can be written as:
\[H=\begin{pmatrix}E_{Pl}-i\gamma_{Pl}/2&g_{w}&g_{s}\\ g_{w}&E_{Asym}-i\gamma_{Asym}/2&0\\ g_{s}&0&E_{Sym}-i\gamma_{Sym}/2\end{pmatrix} \tag{1}\]

where \(\gamma_{Pl}\), \(\gamma_{Asym}\), and \(\gamma_{Sym}\) are the linewidths of the plasmon, antisymmetrized and symmetrized modes, \(E_{Pl}\), \(E_{Asym}\), and \(E_{Sym}\) are the resonance energies of the plasmon, antisymmetrized and symmetrized modes, while \(g_{w}\) and \(g_{s}\) represent the plasmon-antisymmetrized mode and plasmon-symmetrized mode interaction constants. In the three-oscillator model, the eigenstates of the Hamiltonian correspond to the three hybrid branches. The wave function of each branch, formed from the admixture of the plasmon, symmetric mode and antisymmetric mode, can be expressed as \(\left|\psi_{j}\right\rangle=\alpha_{Pl}^{j}\left|Pl\right\rangle+\alpha_{Sym}^{j}\left|Sym\right\rangle+\alpha_{Asym}^{j}\left|Asym\right\rangle\), where \(\alpha_{i}^{j}\) (\(i=Pl,\,Sym,\,Asym\); \(j=Upper,\,Middle,\,Lower\)) denotes the Hopfield coefficients. The modular square of the Hopfield coefficients represents the proportion of the uncoupled states \(\mathbf{P}^{i}=\left(P_{Upper}^{i},\,P_{Middle}^{i},\,P_{Lower}^{i}\right)\) in the hybrid state. Also, the weight of each hybrid branch \(\mathbf{W}=\left(W_{Upper},\,W_{Middle},\,W_{Lower}\right)\) in this strong coupling regime can be calculated as \(W_{j}=\gamma_{j}/\sum_{j}\gamma_{j}\).
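For illustration, the hybrid branch energies, linewidths and Hopfield coefficients follow from a numerical diagonalization of the Hamiltonian in equation (1). The short sketch below does this with NumPy; all parameter values are illustrative assumptions, not quantities measured in this work.

```python
import numpy as np

# Illustrative parameters in eV; assumptions for demonstration only.
E_pl, E_asym, E_sym = 2.00, 1.95, 2.05        # resonance energies
g_w, g_s = 0.03, 0.10                          # coupling constants
gam_pl, gam_asym, gam_sym = 0.20, 0.02, 0.02   # linewidths

# Non-Hermitian Hamiltonian of the three-coupled system, Eq. (1).
H = np.array([
    [E_pl - 1j * gam_pl / 2, g_w,                        g_s],
    [g_w,                    E_asym - 1j * gam_asym / 2, 0.0],
    [g_s,                    0.0,                        E_sym - 1j * gam_sym / 2],
])

# Eigenvalues give the hybrid branch energies (real part) and
# linewidths (-2x imaginary part); eigenvectors hold the Hopfield
# coefficients alpha_i^j of each branch.
vals, vecs = np.linalg.eig(H)
for E, alpha in zip(vals, vecs.T):
    frac = np.abs(alpha) ** 2 / np.sum(np.abs(alpha) ** 2)
    print(f"branch: E = {E.real:.3f} eV, linewidth = {-2 * E.imag:.3f} eV, "
          f"|Pl|^2 = {frac[0]:.2f}, |Asym|^2 = {frac[1]:.2f}, |Sym|^2 = {frac[2]:.2f}")
```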
## Data Availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
|
2303.05957 | Automated crack propagation measurement on asphalt concrete specimens
using an optical flow-based deep neural network | This article proposes a deep neural network, namely CrackPropNet, to measure
crack propagation on asphalt concrete (AC) specimens. It offers an accurate,
flexible, efficient, and low-cost solution for crack propagation measurement
using images collected during cracking tests. CrackPropNet significantly
differs from traditional deep learning networks, as it involves learning to
locate displacement field discontinuities by matching features at various
locations in the reference and deformed images. An image library representing
the diversified cracking behavior of AC was developed for supervised training.
CrackPropNet achieved an optimal dataset scale F-1 of 0.755 and optimal image
scale F-1 of 0.781 on the testing dataset at a running speed of 26
frame-per-second. Experiments demonstrated that low to medium-level Gaussian
noises had a limited impact on the measurement accuracy of CrackPropNet.
Moreover, the model showed promising generalization on fundamentally different
images. As a crack measurement technique, the CrackPropNet can detect complex
crack patterns accurately and efficiently in AC cracking tests. It can be
applied to characterize the cracking phenomenon, evaluate AC cracking
potential, validate test protocols, and verify theoretical models. | Zehui Zhu, Imad L. Al-Qadi | 2023-03-10T14:45:37Z | http://arxiv.org/abs/2303.05957v1 | Automated Crack Propagation Measurement On Asphalt Concrete Specimens Using an Optical Flow-Based Deep Neural Network
###### Abstract
This article proposes a deep neural network, namely CrackPropNet, to measure crack propagation on asphalt concrete (AC) specimens. It offers an accurate, flexible, efficient, and low-cost solution for crack propagation measurement using images collected during cracking tests. CrackPropNet significantly differs from traditional deep learning networks, as it involves learning to locate displacement field discontinuities by matching features at various locations in the reference and deformed images. An image library representing the diversified cracking behavior of AC was developed for supervised training. CrackPropNet achieved an optimal dataset scale F-1 of 0.755 and optimal image scale F-1 of 0.781 on the testing dataset at a running speed of 26 frame-per-second. Experiments demonstrated that low to medium-level Gaussian noises had a limited impact on the measurement accuracy of CrackPropNet. Moreover, the model showed promising generalization on fundamentally different images. As a crack measurement technique, the CrackPropNet can detect complex crack patterns accurately and efficiently in AC cracking tests. It can be applied to characterize the cracking phenomenon, evaluate AC cracking potential, validate test protocols, and verify theoretical models.
**Keywords:** asphalt concrete, crack propagation, digital image correlation, optical flow, deep learning.
## 1 Introduction
Approximately 95 percent of paved roads in the United States are surfaced with asphalt. Cracking is a common mode of failure in asphalt concrete (AC) pavements. Many tests have been developed to assess the cracking potential of AC materials. Accurate monitoring of crack initiation and propagation during testing is crucial.
Contact tools like a linear variable differential transformer (LVDT), extensometers, strain gauges, and crack mouth opening displacement (CMOD) clip gauges are the most widely used methods to monitor crack propagation and opening in AC cracking tests. However, these tools only provide localized information, as the measurement location must be decided before testing. As shown in Figure 1, the CMOD clip gauge is attached at the bottom of the specimen when conducting the low-temperature semi-circular bending (SCB) test (Li & Marasteanu, 2010). As such, the crack opening is
only recorded at that location. Similarly, the above-mentioned contact techniques may only provide indirect crack propagation and opening measurements. For example, the load-line displacement (LLD), measured by an extensometer mounted vertically at the surface of the specimen, is used to estimate crack propagation speed. However, such an approximation is insufficient to describe the cracking phenomenon, as a crack tends to choose a path around the aggregate as it grows (Doll, Ozer, Rivera-Perez, Al-Qadi, & Lambros, 2017). In addition, orienting the contact devices is time-consuming and requires experience, especially on small specimens. Moreover, routine calibrations are needed to ensure accurate measurement. Therefore, developing an easy-to-use, accurate, and full-field crack measurement technique for AC cracking tests is imperative.
Low-level computer vision-based crack detection methods have been proposed. The most popular algorithms include thresholding, image segmentation, filtering, and blob extraction (Hartman & Gilchrist, 2004; Oliveira & Correia, 2009; Wang, Zhang, Wang, Braham, & Qiu, 2018; Ying & Salari, 2010; A. Zhang, Li, Wang, & Qiu, 2013). However, these methods struggle to obtain accurate results under complex imaging environments. Deep learning, especially the deep convolutional neural network (CNN), has been widely used to detect and categorize cracks (Cha, Choi, & Buyukozturk, 2017; Fei et al., 2019; A. Zhang et al., 2017; L. Zhang, Yang, Zhang, & Zhu, 2016). However, these models were mainly developed for visible mature cracks; the ground truth verification relied on visual recognition. This renders the models mentioned above unsuitable for monitoring crack propagation in AC cracking tests, where small cracks in the early stages are critical but often difficult to visualize.
The digital image correlation (DIC) technique has the potential to overcome these challenges. DIC is an optical method that measures full-field displacement and strain. Because surface cracks are defined as displacement field discontinuities, cracks of varying sizes can be located, given an accurate displacement field. A few attempts have been made to measure cracks using DIC. Due to complex crack growth, locating cracks based on a DIC-measured displacement or strain field is a challenge. Current methods rely on strain or displacement thresholding, which requires significant post-processing effort and empirical knowledge (Buttlar et al., 2014; Safavizadeh & Kim, 2017). In addition, DIC analysis involves computationally expensive optimization, making it unsuitable for real-time applications such as crack propagation measurement in AC testing, where hundreds or thousands of images need to be analyzed. These drawbacks have limited the implementation of DIC as an automated crack measurement technique for AC cracking tests.

Figure 1: Low-temperature SCB test setup (Li & Marasteanu, 2010).
This article proposes a deep neural network to automatically measure crack propagation during testing based on the optical flow concept. Compared to the existing techniques discussed above, it offers an accurate, flexible, efficient, and low-cost solution. It can accurately measure crack propagation from hundreds of images collected by low-cost cameras in less than one minute.
This paper is organized into seven sections: section one discusses the background and motivation of this study; section two presents the development of the database; section three introduces the architecture of the deep neural network; section four explains the training strategy; section five presents the evaluation of the proposed network; section six discusses advantages and possible applications of CrackPropNet; and section seven presents the conclusions and recommendations.
## 2 Data Preparation
As shown in Figure 2, the data preparation process consists of four steps:
1. Collect raw images;
2. Compute displacement fields using DIC;
3. Label ground-truth crack edges;
4. Inspect and verify ground-truth labels.
Implementation details are described in the following sections.
### Raw Image Collection
Raw images were collected while conducting the Illinois Flexibility Index Test (I-FIT), as shown in Figure 4 (Ozer, Al-Qadi, Lambros, et al., 2016; Ozer, Al-Qadi, Singhvi, et al., 2016). An extensive testing program covering a wide range of testing conditions and materials was developed. The goal was to develop an extensive image database covering AC's diversified cracking behavior. All experiments were displacement controlled. Load-line displacement was used for room-temperature tests, while CMOD was used at low temperatures to provide better crack propagation stability (Doll et al., 2017). Two different testing temperatures (-12 and 25\({}^{\circ}\)C) and four different loading rates (0.7, 6.25, 25, and 50 mm/min) were considered. The fracture behavior of AC is time- and temperature-dependent. A more brittle failure is expected at lower temperatures or higher loading rates (Al-Qadi et al., 2015). As shown in Figure 3, a total of 53 AC mixes were tested. They had different N-designs, binder types and content, aggregate mineralogy, and amounts of recycled materials. The I-FIT specimens were prepared either from lab-compacted cylindrical pills or field cores. All specimens had SCB geometry, as shown in Figure 1, while their thickness, notch length, and air voids ranged from 25 mm to 60 mm, 10 mm to 35 mm, and 1% to 12%, respectively. For each I-FIT specimen, a speckle pattern consisting of a white layer of paint and a random black pattern on top was applied (Doll et al., 2017).

Figure 2: Data preparation procedure.
Two CCD (charge-coupled device) cameras were positioned perpendicularly to the surface of the I-FIT specimen to collect images during the test: a Point Grey Gazelle 4.1 MP Mono (\(2048\times 2048\) pixels, 150 frames per second (fps)) and an Allied Vision Prosilica GX6600 (\(6576\times 4384\) pixels, 4 fps) with a Tokina AT-X Pro Macro 100 2.8D lens. The Gazelle has a faster acquisition rate but a lower resolution than the Prosilica. The former is generally used in experiments where the materials can be considered homogeneous, while the latter aims to study damage zone evolution in heterogeneous materials such as AC. The database intentionally includes images taken from cameras with significantly different resolutions to ensure better generalization of the deep neural network.
### Compute Displacement Fields Using DIC
The displacement fields were first computed using DIC. DIC works by tracking pixels in a sequence of images. This is achieved using area-based matching, which extracts gray value correspondences based on their similarities. First, a reference image was taken at the unloaded state, and an area of interest was selected. Then, a subset of pixels is compared to a deformed image taken at a loaded state to identify the best match. Finally, the deformation of a point in the subset can be computed using Equation 1, which allows for translation, rotation, shear, and combinations.

Figure 3: Mix properties.
\[\begin{cases}x_{i}^{\prime}=x_{i}+u+u_{x}\Delta x+u_{y}\Delta y\\ y_{j}^{\prime}=y_{j}+v+v_{x}\Delta x+v_{y}\Delta y\end{cases} \tag{1}\]
As shown in Figure 5, \(x_{i}\) and \(y_{j}\) are Cartesian coordinates of a point \(Q(x_{i},y_{j})\) in the reference image; \(x_{i}^{\prime}\) and \(y_{j}^{\prime}\) refer to its coordinates in the deformed image; \(u\) and \(v\) denote the corresponding displacement components of the reference subset center \(P(x_{0},y_{0})\) in the x- and y- direction, respectively; \(u_{x}\), \(u_{y}\), \(v_{x}\), \(v_{y}\) are the first-order displacement gradients of the reference subset; \(\Delta x=x_{i}-x_{0}\) and \(\Delta y=y_{j}-y_{0}\). To provide adequate spatial resolution to resolve the displacement distribution between and within aggregate particles, the subset size used for correlation was carefully chosen for each test following the algorithm proposed by Pan, Xie, Wang, Qian, and Wang (2008).
### Ground-Truth Crack Edges Labeling
Once the displacement field was obtained, potential crack edges could be located following the method proposed by Zhu and Al-Qadi (2023). Figure 6 shows the displacement field (\(u\)) contour plot measured by DIC for an I-FIT specimen surface, where a crack is visible in the area of interest. The reference and deformed images were taken with the Gazelle camera when conducting the I-FIT test at 25\({}^{\circ}\)C with a 50 mm/min loading rate. A subset size of \(23\times 23\) with a correlation point spacing of 11 pixels was used for DIC analysis. This resulted in a spatial resolution of approximately 25 \(\mu\)m/pixel and produced a grid of roughly \(175\times 157\) correlation points in the area of interest.

Figure 4: Experiment set up.

Figure 5: Area-based matching (Pan, Qian, Xie, & Asundi, 2009).
Then, the first-order derivative of the opening displacement, \(u_{x}\), was obtained by filtering the displacement field with a \([-1,1]\) kernel. For example, Figure 7 plots \(u_{x}\) along three discrete \(y\) axes (\(y=496\), 1497, and 1816). A large \(u_{x}\) indicates material separation between two correlation points, suggesting that a crack may be present. Thus, potential crack edges could be found by locating the corresponding correlation points in the deformed image, as shown in Figure 8. Please note that the marked crack edges may not match the actual crack edges exactly because the correlation point spacing is usually larger than 1 pixel in DIC analysis. However, the effect is negligible as the spatial resolution is typically smaller than 25 \(\mu\)m/pixel, which results in an error of less than 0.3 mm. In this article, the target value for a crack edge pixel is 1, while the rest is 0.
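The thresholding logic described above admits a compact implementation; the sketch below is a minimal NumPy illustration in which the threshold value and the helper name are our own assumptions rather than parameters reported here.

```python
import numpy as np

def locate_crack_edges(u, threshold, spacing_px=11):
    """Locate potential crack-edge correlation points from the DIC
    horizontal displacement field `u` (2-D array with one value per
    correlation point). `threshold` is a hypothetical cutoff on the
    opening derivative; the paper does not report a numeric value."""
    # First-order derivative of the opening displacement along x,
    # i.e., the displacement field filtered with a [-1, 1] kernel.
    u_x = u[:, 1:] - u[:, :-1]
    # A large u_x indicates material separation between two adjacent
    # correlation points, suggesting a crack may be present.
    rows, cols = np.nonzero(u_x > threshold)
    # Map correlation-point indices back to approximate pixel
    # coordinates in the deformed image.
    return np.stack([rows, cols], axis=1) * spacing_px
```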
Figure 6: Contour plot of horizontal displacement (\(u\)).
Figure 7: First-order derivative of the opening displacement field \(u_{x}\) along three discrete \(y\) axes.
### Verification of Ground-Truth Labels
The above-described procedure may falsely label some pixels as crack edges under certain circumstances. For example, suppose an I-FIT specimen has irregularities and holes on the surface that cannot be painted or create shadows. In that case, the error in DIC measurements will increase compared to experiments on flat surfaces. This may lead to falsely labeled crack edge pixels. To make the ground-truth crack edges as accurate as possible, every image went through two rounds of inspection:
1. Automated inspection (a minimal sketch of this rule follows this list): for a sequence of deformed images, if pixel \(A\) was labeled as a crack edge in frame \(F_{t_{0}}\) but not in the following frames \(F_{t_{1:n}}\), the label would be corrected. In contrast, if pixel \(A\) was not labeled as a crack edge in frame \(F_{t_{n}}\) but was labeled in the previous frames \(F_{t_{0:n-1}}\) and the following frames \(F_{t_{n+1:2n}}\), the label would be corrected;
2. Manual inspection: the ground-truths were visually inspected and verified.
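The automated inspection rule can be written compactly as a temporal-consistency filter. The following is a minimal sketch under stated assumptions: the window size and array layout are our own choices, and the authors' actual implementation is not published.

```python
import numpy as np

def inspect_labels(edge_maps, n=3):
    """Automated-inspection sketch. `edge_maps` is a (T, H, W) boolean
    array of crack-edge labels; the window size `n` is an assumption.
    A label with no support in the following n frames is removed; a
    missing label with consistent labels before and after is restored."""
    maps = edge_maps.copy()
    T = maps.shape[0]
    for t in range(T):
        following = edge_maps[t + 1:t + 1 + n]
        previous = edge_maps[max(0, t - n):t]
        if following.size:
            # Rule 1: labeled in F_t0 but in none of F_t1..tn -> clear.
            maps[t] &= following.any(axis=0)
        if previous.size and following.size:
            # Rule 2: unlabeled in F_tn but labeled before and after -> set.
            maps[t] |= previous.all(axis=0) & following.all(axis=0)
    return maps
```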
### Image Library Summary
An image library, made of pairs of images with crack edge labels, was developed for supervised learning. It consisted of 2,560 frame pairs. The image library represented the diversified cracking behavior of AC. Images were collected over the past eight years by four different operators. The original images collected by the Gazelle and the Prosilica camera have resolutions of \(2048\times 2048\) and \(6576\times 4384\), respectively. The original image was downsized to \(1024\times 1024\) by min-pooling and cropping to balance computational overhead and accuracy. This resulted in a spatial resolution of approximately 0.05 mm/pixel. Because the correlation point spacing in the DIC analysis was typically larger than 10 pixels, the labeled ground-truth crack edges were discontinuous. To provide more accurate ground-truth labels and reduce the computational cost, the ground-truth crack edge map was downsized from the original resolution to \(128\times 128\). This resulted in a spatial resolution of approximately 0.4 mm/pixel, which is adequate for this task, where the measurement area is larger than \(50\times 50\) mm. Figure 9 illustrates a sequence of images together with their ground-truth crack edge maps. They were collected while conducting an I-FIT test on a typical AC mix (an Illinois N90 mix) at 25 mm/min and 25\({}^{\circ}\)C.

Figure 8: Deformed image with crack edges marked in red dots.
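The downsizing step described above (min-pooling followed by cropping to \(1024\times 1024\)) can be sketched as follows; the pooling factor and center-crop policy are assumptions, since the text specifies only the target resolution.

```python
import numpy as np

def min_pool_downsize(img, out=1024):
    """Sketch of the image downsizing: min-pooling then a center crop
    to `out` x `out`. Exact factors are assumptions, not reported."""
    f = max(min(img.shape[0] // out, img.shape[1] // out), 1)
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    # Min-pooling: reshape into f x f tiles and take each tile's minimum.
    pooled = img[:h, :w].reshape(h // f, f, w // f, f).min(axis=(1, 3))
    y0 = max((pooled.shape[0] - out) // 2, 0)
    x0 = max((pooled.shape[1] - out) // 2, 0)
    return pooled[y0:y0 + out, x0:x0 + out]
```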
To evaluate the sufficiency of the developed image library, Table 1 compares it with datasets used in previous studies for related tasks. The size of the developed image library is comparable to existing datasets. It is worth noting that StrainNet uses synthetic images, which have the advantage of developing large datasets quickly and inexpensively. However, real images were used in this paper because of the complexity of crack shapes in AC testing.
## 3 Network Architecture
Training a deep CNN from scratch requires a large dataset and significant computational power, which is infeasible in most situations. In practice, pre-trained networks can be used as initialization or feature extractors for the task of interest. Because surface cracks are defined as displacement field discontinuities, crack propagation measurement in AC fracture testing can be accomplished by stacking edge detection layers on pre-trained networks for optical flow estimation.
This section provides a brief overview of existing networks on optical flow estimation. The proposed network architecture is discussed in detail.
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline
**Dataset** & **Number of frames** & **Resolution** & **Type** \\ \hline \multicolumn{4}{l}{_Task: Pavement crack detection_} \\ \hline CRACK500 (Yang et al., 2019) & 500 & 2000\(\times\)1500 & Real \\ \hline GAPs384 (Eisenbach et al., 2017) & 1,969 & 1920\(\times\)1080 & Real \\ \hline CrackNet (A. Zhang et al., 2017) & 2,000 & 1024\(\times\)512 & Real \\ \hline CrackNet-V (Fei et al., 2019) & 3,083 & 1024\(\times\)512 & Real \\ \hline \multicolumn{4}{l}{_Task: Optical flow estimation_} \\ \hline KITTI2015 (Menze \& Geiger, 2015) & 800 pairs & 1242\(\times\)375 & Real \\ \hline Sintel (Butler, Wulff, Stanley, \& Black, 2012) & 1064 pairs & 960\(\times\)540 & Real \\ \hline \multicolumn{4}{l}{_Task: DIC with deep learning_} \\ \hline StrainNet (Boukhtache et al., 2021) & 363 reference frames & 256\(\times\)256 & Synthetic \\ \hline \hline Proposed & 2,560 pairs & 1024\(\times\)1024 & Real \\ \hline \end{tabular}
\end{table}
Table 1: Comparison with datasets used in previous studies for related tasks.
Figure 9: Representative image pairs with ground-truth labels from the image library.
### CNN-Based Methods for Optical Flow Estimation
Optical flow is the pattern of apparent motion of objects due to the relative motion between an observer and a visual scene (Warren and Strelow, 2013). Traditional energy-minimization-based approaches involve computationally expensive optimization, making them unsuitable for large-scale real-time applications such as crack propagation measurement in AC testing, where hundreds of images need to be processed.
Another promising approach is the fast and end-to-end trainable CNN framework. Dosovitskiy _et al._ proposed two CNNs, FlowNetS and FlowNetC, to learn optical flow from a synthetic dataset (Dosovitskiy et al., 2015). As shown in Figure 10, FlowNetS stacks the reference and deformed images together and feeds them through a rather generic network to extract optical flow. FlowNetC creates separate processing streams for the reference and deformed images to generate two feature maps. Then, it combines them with a correlation layer that performs multiplicative patch comparisons. However, FlowNetS and FlowNetC have problems with small displacements and noisy artifacts in the estimated optical flow fields. To improve the performance, Ilg _et al._ developed FlowNet2.0 by stacking multiple FlowNetS and FlowNetC networks (Ilg et al., 2017). It reduces the estimation error by more than 50% compared to FlowNet and has proven to be effective in many other applications such as motion segmentation and action recognition. FlowNet2.0 outperforms other state-of-the-art networks, such as SpyNet, RecSpyNet, and LiteFlowNet (Hu et al., 2018; Hui et al., 2018; Ranjan and Black, 2017), in terms of accuracy.
The above studies demonstrate that CNNs are powerful in estimating optical flow. This inspired the development of similar networks to solve analogous problems in other fields. For example, Boukhtache _et al._ developed StrainNet to retrieve displacement and strain fields from pairs of reference and deformed images. It uses FlowNetS as the backbone and achieves accuracy comparable to DIC with a significant improvement in computing time (Boukhtache et al., 2021).
Figure 10: FlowNetS and FlowNetC Dosovitskiy et al. (2015).
### FlowNetS and FlowNetC
Both FlowNetS and FlowNetC consist of a contracting and an expanding part. Although the two networks adopt different approaches in contracting, they share the same expanding part. The two parts of the network are discussed in detail below.
#### 3.2.1 Contracting
FlowNetS concatenates the reference and deformed images together as input and lets the network learn how to process the image pair to extract motion information. In contrast, FlowNetC creates separate processing streams for the reference and deformed images to generate two feature maps. Then, it combines them with a correlation layer that performs multiplicative patch comparisons. Given two feature maps \(\mathbf{f}_{1}\), \(\mathbf{f}_{2}\), with dimension \(c(\text{number of channels})\times w(\text{width})\times h(\text{height})\), the correlation layer compares patches in \(\mathbf{f}_{1}\) and \(\mathbf{f}_{2}\) as below:
\[c(\mathbf{x}_{1},\mathbf{x}_{2})=\sum_{\mathbf{o}\in[-k,k]\times[-k,k]}\langle \mathbf{f}_{1}(\mathbf{x}_{1}+\mathbf{o}),\mathbf{f}_{2}(\mathbf{x}_{2}+ \mathbf{o})\rangle \tag{2}\]
\(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) denote the centers of square patches of size \(K\coloneqq 2k+1\) in \(\mathbf{f}_{1}\) and \(\mathbf{f}_{2}\), respectively. Because the correlation operation is fundamentally equivalent to convolving data with other data, it has no trainable weights. To reduce the computation cost, the maximum displacement is constrained to \(d\), which means that for each location \(\mathbf{x}_{1}\), correlations \(c(\mathbf{x}_{1},\mathbf{x}_{2})\) are only computed in a neighborhood of size \(D\coloneqq 2d+1\). The output size of the correlation layer is \(D^{2}\times w\times h\).
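For reference, a naive sketch of this correlation for the special case \(k=0\) (patch size \(K=1\)) is shown below; production implementations of FlowNetC use an optimized CUDA kernel rather than this explicit loop.

```python
import torch
import torch.nn.functional as F

def correlation_layer(f1, f2, d=4):
    """Naive sketch of the correlation layer in Equation 2 for k = 0.
    f1, f2: (B, C, H, W) feature maps; the output has D^2 = (2d+1)^2
    channels, matching the text. The value of d is an assumption."""
    B, C, H, W = f1.shape
    f2p = F.pad(f2, (d, d, d, d))
    out = []
    for dy in range(2 * d + 1):
        for dx in range(2 * d + 1):
            shifted = f2p[:, :, dy:dy + H, dx:dx + W]
            # Inner product over channels between co-located f1
            # features and displaced f2 features.
            out.append((f1 * shifted).sum(dim=1, keepdim=True))
    return torch.cat(out, dim=1)  # (B, (2d+1)^2, H, W)
```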
#### 3.2.2 Expanding
As shown in Figure 12, to refine the coarse pooled representation and obtain a dense flow field, fractionally-strided convolution is applied to feature maps first. Then, they are concatenated with corresponding feature maps from the contracting part. This operation preserves high-level features as well as retains fine local features. The process is repeated four times, with each step doubling the resolution. A bi-linear up-sampling with a factor of four is performed at the end to obtain the original image resolution.
Figure 11: Proposed network architectures.
### Proposed Network Architecture
As shown in Figure 11, the proposed network combines one FlowNetC and two FlowNetS. First, reference (\(I_{r}\)) and deformed (\(I_{d}\)) images are fed into FlowNetC to generate an estimated flow field (\(w_{1}=(u_{1},v_{1})^{\top}\)) at the original image resolution. Second, subsequent FlowNetS gets reference image, deformed image, estimated flow field \(w_{1}\), warped deformed image (\(\tilde{I}_{d,1}(x,y)=I_{d}(x+u_{1},y+v_{1})\)), and brightness error field (\(e_{1}=||\tilde{I}_{d,1}-I_{r}||\)); and outputs an estimated flow field (\(w_{2}=(u_{2},v_{2})^{\top}\)) at the original image resolution. The concatenated input allows FlowNetS to assess the previous error more easily and compute an incremental update. Third, a modified FlowNetS receives reference image, deformed image, estimated flow field \(w_{2}\), warped deformed image (\(\tilde{I}_{d,2}(x,y)=I_{d}(x+u_{2},y+v_{2})\)), and brightness error field (\(e_{2}=||\tilde{I}_{d,2}-I_{r}||\)); and outputs an edge probability map for which the resolution is eight times smaller than the original image. The modified FlowNetS differs from FlowNetS in the expanding part:
* The last \(3\times 3\) convolution layer is cut. Inspired by Richer Convolutional Features for edge detection (Liu, Cheng, Hu, Wang, & Bai, 2017), layers shown in Figure 13 are added.
* Sigmoid units are connected to the final layer to generate an edge probability map.
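The warping and brightness-error inputs described above can be sketched as follows using bilinear sampling; the helper name and the toy tensors are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Bilinearly warp `img` by `flow` so that the output at (x, y)
    samples img at (x + u, y + v), as used for the warped deformed
    image. img: (B, C, H, W); flow: (B, 2, H, W) in pixels."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=img.dtype),
                            torch.arange(W, dtype=img.dtype),
                            indexing="ij")
    coords = torch.stack((xs, ys)).unsqueeze(0) + flow  # absolute (x, y)
    # Normalize sample positions to [-1, 1] for grid_sample.
    gx = 2 * coords[:, 0] / (W - 1) - 1
    gy = 2 * coords[:, 1] / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)                # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

# Toy usage with random tensors: the brightness error field fed to
# the next sub-network is e1 = ||warped deformed image - reference||.
I_r = torch.rand(1, 1, 64, 64)   # reference image
I_d = torch.rand(1, 1, 64, 64)   # deformed image
w1 = torch.zeros(1, 2, 64, 64)   # estimated flow from FlowNetC
e1 = (warp(I_d, w1) - I_r).abs()
```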
Figure 12: Refinement part.
Figure 13: Edge detection layers.
## 4 Training
### Class-Balanced Loss Function
The distribution of crack-edge and non-crack-edge pixels varies greatly and is heavily biased: more than 90% of the ground-truth pixels are non-crack-edge. Therefore, a cost-sensitive loss function must be considered to balance the loss between the positive (crack-edge) and negative (non-crack-edge) classes. Specifically, the following class-balanced cross-entropy loss function (Equation 3) was used:
\[\begin{split} L(W)&=-\alpha\sum_{j\in Y_{-}}\log(1-\Pr(X_{j};W))\\ &-\beta\sum_{j\in Y_{+}}\log\Pr(X_{j};W)\end{split} \tag{3}\]
in which
\[\begin{split}\alpha&=\gamma+\frac{|Y_{+}|}{|Y_{+}|+| Y_{-}|}\\ \beta&=\lambda\cdot\frac{|Y_{-}|}{|Y_{+}|+|Y_{-}|} \end{split} \tag{4}\]
\(Y_{+}\) and \(Y_{-}\) denote the positive and negative sample sets, respectively. The hyperparameters \(\gamma\) and \(\lambda\) are used to balance positive and negative samples. \(X_{j}\) represents the activation value at each pixel \(j\), \(\Pr(X)\) is the standard sigmoid function, and \(W\) denotes all parameters in the network.
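A minimal sketch of this loss, assuming a sigmoid output and leaving \(\gamma\) and \(\lambda\) as placeholders (their values are not reported here), is given below.

```python
import torch

def class_balanced_bce(logits, target, gamma=0.0, lam=1.1):
    """Sketch of the class-balanced cross-entropy of Equations 3-4.
    The default values of gamma and lam are placeholder assumptions.
    logits, target: (B, 1, H, W) tensors, target in {0, 1}."""
    n_pos = target.sum()
    n_neg = target.numel() - n_pos
    alpha = gamma + n_pos / (n_pos + n_neg)  # weight on negatives
    beta = lam * n_neg / (n_pos + n_neg)     # weight on positives
    p = torch.sigmoid(logits)
    eps = 1e-7
    loss = -(alpha * ((1 - target) * torch.log(1 - p + eps)).sum()
             + beta * (target * torch.log(p + eps)).sum())
    return loss
```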
### Data Augmentation
Data augmentation is an often-used strategy to improve model generalization (Krizhevsky, Sutskever, & Hinton, 2012). The augmentations used in this study include a geometric transformation (horizontal flip) as well as changes in brightness, contrast, saturation, and hue. It is worth mentioning that the same transformations were applied to both the reference and deformed images. The augmentation was performed online during network training.
The brightness, contrast, and saturation factors are sampled uniformly from \([0.95,1.05]\); the hue factor is chosen from \([-0.05,0.05]\).
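A possible implementation of this paired augmentation, using the parameter ranges above but otherwise assumed details (function names, sampling scheme), is sketched below; note that a horizontal flip must also be applied to the ground-truth crack edge map.

```python
import random
import torchvision.transforms.functional as TF

def paired_augment(ref, dfm):
    """Apply identical augmentations to the reference and deformed
    images. Parameter ranges follow the text; implementation details
    are assumptions."""
    if random.random() < 0.5:
        ref, dfm = TF.hflip(ref), TF.hflip(dfm)
    b = random.uniform(0.95, 1.05)   # brightness factor
    c = random.uniform(0.95, 1.05)   # contrast factor
    s = random.uniform(0.95, 1.05)   # saturation factor
    h = random.uniform(-0.05, 0.05)  # hue factor
    out = []
    for img in (ref, dfm):
        img = TF.adjust_brightness(img, b)
        img = TF.adjust_contrast(img, c)
        img = TF.adjust_saturation(img, s)
        img = TF.adjust_hue(img, h)
        out.append(img)
    return out[0], out[1]
```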
### Training Strategy
The following strategy was used in training the network:
1. The AdamW was chosen as the optimization method because it showed faster convergence than standard stochastic gradient descent with momentum in this task (Loshchilov & Hutter, 2017). The recommended parameters: \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) were used, and the weight decay coefficient was set as \(1e^{-4}\).
2. Fairly small mini-batches of six-image pairs were used.
3. The training started with a learning rate of \(5e^{-5}\), and it was divided by 2 every 5 epochs. The network was trained for 40 epochs.
4. To monitor over-fitting during training, the dataset was randomly split into 2,248 training and 312 validation pairs.
### Evaluation Metrics
Because of the similarity with edge detection, it is intuitive to directly leverage its evaluation criteria for this task. Given a crack edge probability map, a threshold is needed to generate the crack edge map. Two commonly used strategies are optimal dataset scale (ODS) and optimal image scale (OIS). The former uses a fixed threshold for all images in the dataset, while the latter employs an optimal threshold for each image (Liu et al., 2017; Xie and Tu, 2015). This paper used the F-1 (\(\frac{2\cdot Precision\cdot Recall}{Precision+Recall}\)) of both ODS and OIS to assess the network's performance. They were calculated using Equation 5 (Yang et al., 2019). It is worth noting that, unlike previous studies, zero tolerance was allowed for correct matches between ground truth and prediction (Liu et al., 2017; Xie and Tu, 2015; Yang et al., 2019).
\[\text{ODS F}=\max\{\frac{2P_{t}\cdot R_{t}}{P_{t}+R_{t}}:t=0.01,0.02, \ldots,0.99\} \tag{5}\] \[\text{OIS F}=\frac{1}{N_{i}}\sum_{i}^{N_{i}}\max\{\frac{2P_{t}^{ i}\cdot R_{t}^{i}}{P_{t}^{i}+R_{t}^{i}}:t=0.01,0.02,\ldots,0.99\}\]
The \(t\) denotes the threshold, \(i\) refers to the index of an image, and \(N_{i}\) is the total number of images. \(P_{t}\) and \(R_{t}\) represent precision and recall for the chosen threshold \(t\), respectively. Precision refers to the proportion of identified crack edge pixels that were correct, while Recall represents the fraction of crack edge pixels identified correctly. It is challenging to achieve high Precision and high Recall simultaneously because they often conflict with each other. A high F-1 can only be achieved when both Precision and Recall are high.
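A minimal sketch of computing both scores with zero pixel tolerance is given below; the dataset-level aggregation of precision and recall follows Equation 5, while the variable names are our own.

```python
import numpy as np

def ods_ois_f1(probs, gts, ts=np.arange(0.01, 1.0, 0.01)):
    """Sketch of Equation 5. probs: list of (H, W) edge-probability
    maps; gts: list of binary (H, W) ground-truth maps."""
    def f1(p, r):
        return 2 * p * r / max(p + r, 1e-12)

    tp = np.zeros(len(ts))      # dataset-level true positives per threshold
    n_pred = np.zeros(len(ts))  # dataset-level predicted positives
    n_gt = sum(g.sum() for g in gts)
    ois_sum = 0.0
    for pr, gt in zip(probs, gts):
        per_image = []
        for i, t in enumerate(ts):
            pred = pr > t
            tp_i = np.logical_and(pred, gt).sum()
            tp[i] += tp_i
            n_pred[i] += pred.sum()
            per_image.append(f1(tp_i / max(pred.sum(), 1),
                                tp_i / max(gt.sum(), 1)))
        ois_sum += max(per_image)             # best threshold per image
    ods = max(f1(tp[i] / max(n_pred[i], 1), tp[i] / n_gt)
              for i in range(len(ts)))        # one threshold for all images
    return ods, ois_sum / len(probs)
```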
### Training Result
The training took 13 hours on an _NVIDIA TESLA V100_ GPU. Figure 14 shows the class-balanced cross-entropy loss decay curve and the validation F-1 curve. To compute the F-1 during training, a fixed threshold of 0.5 was used to generate edge maps from edge probability maps instead of using ODS or OIS strategies. The highest F-1 was observed at the \(37^{th}\) epoch, and the corresponding model was considered optimal. Figure 15 shows the precision-recall curve of the final model. The trained model achieved \(\text{ODS}=0.769\) and \(\text{OIS}=0.772\) on the validation dataset.
## 5 Testing and Evaluation
### Testing Result
A testing dataset consisting of 188 frame pairs was developed to validate the trained model further. The images were collected using the Gazelle and the Prosilica cameras
while conducting I-FIT tests. The same ground-truth crack edge labeling procedure was followed. CrackPropNet provided running speeds of 6 fps and 26 fps on a _NVIDIA TESLA P100_ and a _TESLA V100_ GPU, respectively. The trained model achieved \(\text{ODS}=0.755\) and \(\text{OIS}=0.781\) on the testing dataset. The performance on the validation and testing images was similar, suggesting that over-fitting was avoided. Figure 16 shows F-1s for each frame pair. The trained model generally performed well on most frame pairs. Less than 2% of edge map predictions had F-1s smaller than 0.4. Moreover, the model performed exceptionally well in differentiating frames with no cracks from those with cracks, which indicates its robustness in capturing crack initiation. Figure 17 shows edge map predictions with F-1s lower than 0.4. It can be noticed that all of them occurred in the early stage of crack development.
Figure 18 provides a visualization of a sequence of CrackPropNet-measured crack edges. They were intentionally selected to be shown here because of their lower-than-average F-1s. The final model produced high-quality crack edges. The lower-than-average F-1 was mainly due to the nature of edge detection, where predicted edges are expected to be coarser than ground-truths (Liu et al., 2017; Xie & Tu, 2015).
Figure 14: Training progress.
Figure 15: Precision-recall curve evaluated on the validation dataset.
### Noise Robustness Evaluation
Some noise is always present in digital images, especially for those taken by non-industrial low-cost cameras. Because the training images were collected using high-performance hardware, it is critical to evaluate the noise robustness of the trained model.
Figure 16: F-1s on each frame pair.
Figure 17: CrackPropNet-measured cracks with low F-1s.
Figure 18: Examples of CrackPropNet-measured crack propagation.
Random Gaussian noise was injected into the frame pairs of the testing dataset based on the Gaussian noise model:

\[P(g)=\sqrt{\frac{1}{2\pi\sigma^{2}}}e^{-\frac{(g-\mu)^{2}}{2\sigma^{2}}} \tag{6}\]

where \(\mu\) and \(\sigma\) denote the mean and standard deviation, respectively, and \(g\) refers to the gray value. Three \(\sigma\) values (5, 15, 25) were used to simulate various degrees of noise: low, medium, and high. Figure 19 shows images with different levels of random Gaussian noise injected.
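The injection itself reduces to sampling from Equation 6 and clipping to the 8-bit range, as in the short sketch below (assuming \(\mu=0\)).

```python
import numpy as np

def add_gaussian_noise(img, sigma, mu=0.0):
    """Inject random Gaussian noise (Equation 6) into an 8-bit
    grayscale image; sigma in {5, 15, 25} reproduces the low, medium,
    and high noise levels of the robustness experiment."""
    noise = np.random.normal(mu, sigma, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```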
Table 2 shows the model performance on noise-injected images in the testing dataset. As would be expected, the measurement accuracy decreases as \(\sigma\) increases. The trained model performs well on images with low to medium noise levels.
### IDEAL-CT
To evaluate the generalization of the trained model, a small dataset consisting of 91 frame pairs was developed. The images were collected using the Prosilica (\(6576\times 4384\) pixels, 4 fps) while conducting the indirect tensile cracking test (IDEAL-CT), as shown in Figure 20. The spatial resolution was about 35 \(\mu\)m/pixel. The test was performed at 25\({}^{\circ}\)C and 50 mm/min LLD on a cylindrical specimen of 62 mm in thickness and 150 mm in diameter. The IDEAL-CT is fundamentally different from the I-FIT test. The former is a strength test requiring no notch, while the latter is a fracture test with a pre-crack (notch). Most I-FIT specimens have a single, well-defined crack path, unlike the IDEAL-CT specimens with multiple crack paths (Al-Qadi, Said, Ali, & Kaddo, 2021). Moreover, as shown in Figure 21, the images contained blurred backgrounds, which poses a new challenge to the trained model.

\begin{table}
\begin{tabular}{c|c|c} \hline \hline \(\sigma\) & **ODS** & **OIS** \\ \hline
5 & 0.6643 & 0.7720 \\ \hline
15 & 0.5918 & 0.6586 \\ \hline
25 & 0.5292 & 0.5915 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Model performance on testing images with noise injected.

Figure 19: Images with different levels of random Gaussian noise injected.
Figure 21 illustrates a sequence of images with their ground-truth crack edge maps. They were collected while conducting an IDEAL-CT test on an AC mix with high asphalt binder replacement (20%) at 50 mm/min and 25\({}^{\circ}\)C.
The trained model achieved \(\text{ODS}=0.588\) and \(\text{OIS}=0.605\) on the evaluation dataset. Figure 21 provides a visualization of a sequence of CrackPropNet-measured crack edges. Overall, the trained model showed promising accuracy on a dataset that is fundamentally different from the training dataset. The measurement accuracy increased as the crack propagated downwards. The trained model was able to measure fine details of mature cracks, as shown in Figure 21, frames 3 and 4. Figure 22 shows F-1s for each frame pair. The relatively low overall F-1 was mainly due to the poor performance on small cracks in the early stage of development, where multiple crack paths were present in an IDEAL-CT strength test specimen.
The promising accuracy of CrackPropNet in the case of IDEAL-CT indicated that the model was well-trained to locate displacement field discontinuities, which is the definition of a crack. As would be expected, CrackPropNet could provide a relatively accurate measurement of crack propagation in other AC cracking tests regardless of the cracking mechanism.

Figure 21: Representative IDEAL-CT image pairs with ground-truth and CrackPropNet-measured crack edges.

Figure 20: IDEAL-CT setup.
### Application: Crack-Propagation Speed
As a crack measurement technique, the CrackPropNet can detect complex crack patterns accurately and efficiently in AC cracking tests. The trained model was applied to calculate crack-propagation speed in a fracture test as a case study to demonstrate its usefulness.
Crack-propagation speed is one of the main AC cracking characteristic factors. A large crack-propagation speed after initiation indicates that the mix is brittle and prone to cracking. Most state-of-the-art AC cracking potential prediction indices rely on an approximate crack-propagation speed. For example, according to AASHTO T393, the flexibility index (FI) from the Illinois flexibility index test (I-FIT) uses the post-peak inflection-point slope from the load-LLD curve as a proxy for the crack-propagation speed. The speed was assumed constant (Al-Qadi et al., 2015). With the help of the trained model, the true crack-propagation speed can be easily derived and used to calculate cracking indices.
This case study included two plant-produced AC mixes, and their design details are summarized in Table 3. Raw images were collected while conducting the I-FIT test at 50 mm/min and 25\({}^{\circ}\)C. Four replicates were used for each mix. It was expected that mix two would have a much higher crack propagation speed than mix one because:
* Mix two had significantly lower asphalt content than mix one.
* Mix two used recycled materials, while mix one did not.
Figure 23 shows the CrackPropNet-measured and ground-truth mean crack-propagation speed of AC mixes one and two. The speed was calculated by tracking the crack front and averaged along the crack path. As would be expected, the mean crack-propagation speed measured on mix two specimens was 72% faster than that on mix one, indicating that mix two is more prone to cracking than mix one. The CrackPropNet-measured mean crack-propagation speed was similar to the ground-truth crack-propagation speed. The trained model achieved a mean absolute error of 1.08 mm/s on the tested specimens. Moreover, the CrackPropNet captured the AC material-inherent crack propagation speed variability.
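A simplified sketch of deriving a crack-propagation speed from a sequence of measured crack edge maps is shown below; it tracks only the top-most crack-front row per frame (the I-FIT crack grows upward from the notch), whereas the reported speed was averaged along the crack path, so the helper is an illustrative approximation.

```python
import numpy as np

def mean_crack_speed(edge_maps, fps, mm_per_px):
    """Estimate a mean crack-propagation speed from a sequence of
    binary crack edge maps. The front definition (top-most labeled
    row) is an assumption for illustration."""
    fronts = []
    for m in edge_maps:
        ys, _ = np.nonzero(m)
        fronts.append(ys.min() if ys.size else np.nan)
    fronts = np.asarray(fronts, dtype=float)
    advance_px = -np.diff(fronts)                 # rows advanced per frame
    return np.nanmean(advance_px) * mm_per_px * fps  # mm/s
```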
Figure 22: F-1s on each frame pair of the IDEAL-CT dataset.
## 6 Discussion
As summarized in Table 4, CrackPropNet offers an accurate, flexible, efficient, and low-cost solution for crack propagation measurement in AC cracking tests. To measure crack propagation on AC specimen surfaces, CrackPropNet only needs a series of images collected by a low-cost camera during cracking tests and a GPU-equipped computer for post-processing.
Although the CrackPropNet was trained on an image database of I-FIT tests, its promising measurement accuracy in the case of IDEAL-CT suggested that it could provide a relatively accurate measurement of crack propagation in other AC cracking tests. This is because the architecture of the CrackPropNet was designed to learn to locate displacement field discontinuities (i.e., cracks) regardless of the cracking mechanism.
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline Technique & Accuracy & Flexibility & Efficiency & Cost \\ \hline Contact Tools (e.g., LVDT, clip gauge) & High & Low & Medium & \(\sim\)\$1,000 \\ \hline Low-Level Computer Vision (e.g., thresholding) & Low & Medium & Medium & \(\sim\)\$500 \\ \hline Digital Image Correlation (Zhu \& Al-Qadi, 2023) & High & High & Low & \(\sim\)\$10,000 \\ \hline CrackPropNet & High\({}^{a}\) & High & High & \(\sim\)\$1,500 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison between CrackPropNet and other crack measurement techniques.
Figure 23: CrackPropNet-measured and ground-truth mean crack-propagation speed of mix one and two.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline Property & Mix 1 & Mix 2 \\ \hline Type & SMA & Dense-Graded \\ \hline Binder Grade & PG 70-22 & PG 64-22 \\ \hline Asphalt Content (\%) & 7.3 & 4.98 \\ \hline NMAS (mm) & 4.75 & 12.5 \\ \hline ABR (\%) & 0 & 20 \\ \hline VMA (\%) & 18.5 & 14.6 \\ \hline \end{tabular}
\end{table}
Table 3: Mix Design Details for AC Mixes Used in This Study.
CrackPropNet has many applications. Examples are listed below:
* Compare AC mixes' cracking potential. In a C* fracture test, a video of the specimen surface is recorded during the test, and crack propagation is measured via visual recognition, which is subjective and time-consuming (Stempihar, 2013). Instead, one can use CrackPropNet to measure crack propagation. An AC mix with a faster crack propagation speed indicates that it is more prone to cracking.
* Validate test protocols. To validate the testing protocol of the single-edge notched beam (SENB) test, wire crack detection gauges were glued to the specimen surface to monitor crack propagation (Wagoner, Buttlar, & Paulino, 2005). The gauges only provided localized information. Instead, CrackPropNet can be used for full-field crack propagation measurement.
* Derive new or calibrate existing cracking indices. The FI, the primary outcome of the I-FIT, introduced the post-peak inflection-point slope to proxy the crack-propagation speed after initiation. With the development of CrackPropNet, the actual crack propagation can be efficiently measured and used to compute cracking indices. Similarly, it can be used to calibrate existing indices.
## 7 Conclusions
This article proposes an efficient deep neural network, namely CrackPropNet, to measure crack propagation on AC specimens during testing. The proposed approach provides accuracy, flexibility, efficiency, and cost-effectiveness compared to other techniques, including contact measurements, low-level computer vision, and DIC.
CrackPropNet involves learning to locate displacement field discontinuities (i.e., cracks) by matching features at various locations in the reference and deformed images. The input of CrackPropNet includes a reference and a deformed image, and a crack edge probability map is generated as the output. This was accomplished by stacking edge detection layers on a pre-trained optical flow estimation network consisting of a FlowNetC and two FlowNetS.
An image library was developed for supervised learning. It represents the diversified AC cracking behavior. CrackPropNet provides running speeds of 6 fps and 26 fps on a _NVIDIA TESLA P100_ and a _TESLA V100_ GPU, respectively. In addition, it achieved promising measurement accuracy with an ODS F-1 of 0.755 and an OIS F-1 of 0.781 on the testing dataset.
An experiment demonstrated that low to medium-level Gaussian noise had a limited impact on the measurement accuracy of CrackPropNet. Moreover, the model showed promising performance on a fundamentally different dataset, which consisted of images collected while conducting the IDEAL-CT strength test. A case study demonstrated that CrackPropNet can accurately calculate crack-propagation speed, one of the main AC cracking characteristics.
CrackPropNet has many applications, including characterizing the cracking phenomenon, evaluating AC cracking potential, validating test protocols, and verifying theoretical models. The promising performance of CrackPropNet suggests that an optical flow-based deep learning network offers a robust solution in accurately and efficiently measuring crack propagation on AC specimens.
The followings are recommended for further research:
* Images collected during the process of crack propagation possess a sequential nature. It is worth investigating whether deep recurrent optical flow neural networks could boost the accuracy.
* The accuracy and generalization of CrackPropNet are limited by the size of the database, which could be expanded by collecting more images from various AC tests.
* Because of AC's material-inherent variability, it is suggested to monitor both sides of a specimen for the test replicates to obtain a reliable crack propagation measurement.
* It is recommended to verify existing cracking indices using CrackPropNet. It would enable contractors and transportation agencies to assess the cracking potential of AC mixes more accurately and efficiently.
## Data Availability Statement
Examples of the image database and pre-trained CrackPropNet are available at [https://github.com/zehuiz2/CrackPropNet](https://github.com/zehuiz2/CrackPropNet).
## Author Contributions
The authors confirm their contribution to the paper as follows: study conception and design: Zehui Zhu and Imad L. Al-Qadi; data collection: Zehui Zhu; analysis and interpretation of results: Zehui Zhu and Imad L. Al-Qadi; draft manuscript preparation: Zehui Zhu and Imad L. Al-Qadi. All authors reviewed the results and approved the final version of the manuscript.
## Acknowledgment
The authors would like to thank Jose Julian Rivera Perez, Berangere Doll, Uthman Mohamed Ali, and Maxwell Barry for their help in preparing test specimens and collecting raw images. The contents of this report reflect the view of the authors, who are responsible for the facts and the accuracy of the data presented herein.
## Disclosure statement
The authors report there are no competing interests to declare.
|
2304.06547 | RadarGNN: Transformation Invariant Graph Neural Network for Radar-based
Perception | A reliable perception has to be robust against challenging environmental
conditions. Therefore, recent efforts focused on the use of radar sensors in
addition to camera and lidar sensors for perception applications. However, the
sparsity of radar point clouds and the poor data availability remain
challenging for current perception methods. To address these challenges, a
novel graph neural network is proposed that does not just use the information
of the points themselves but also the relationships between the points. The
model is designed to consider both point features and point-pair features,
embedded in the edges of the graph. Furthermore, a general approach for
achieving transformation invariance is proposed which is robust against unseen
scenarios and also counteracts the limited data availability. The
transformation invariance is achieved by an invariant data representation
rather than an invariant model architecture, making it applicable to other
methods. The proposed RadarGNN model outperforms all previous methods on the
RadarScenes dataset. In addition, the effects of different invariances on the
object detection and semantic segmentation quality are investigated. The code
is made available as open-source software under
https://github.com/TUMFTM/RadarGNN. | Felix Fent, Philipp Bauerschmidt, Markus Lienkamp | 2023-04-13T13:57:21Z | http://arxiv.org/abs/2304.06547v1 | # RadarGNN: Transformation Invariant Graph Neural Network for Radar-based Perception
###### Abstract
A reliable perception has to be robust against challenging environmental conditions. Therefore, recent efforts focused on the use of radar sensors in addition to camera and lidar sensors for perception applications. However, the sparsity of radar point clouds and the poor data availability remain challenging for current perception methods. To address these challenges, a novel graph neural network is proposed that does not just use the information of the points themselves but also the relationships between the points. The model is designed to consider both point features and point-pair features, embedded in the edges of the graph. Furthermore, a general approach for achieving transformation invariance is proposed which is robust against unseen scenarios and also counteracts the limited data availability. The transformation invariance is achieved by an invariant data representation rather than an invariant model architecture, making it applicable to other methods. The proposed RadarGNN model outperforms all previous methods on the RadarScenes dataset. In addition, the effects of different invariances on the object detection and semantic segmentation quality are investigated. The code is made available as open-source software under [https://github.com/TUMFTM/RadarGNN](https://github.com/TUMFTM/RadarGNN).
## 1 Introduction
Autonomous vehicles rely on an accurate representation and understanding of their environment. To achieve this, even under severe weather conditions, the perception has to be robust against changing environmental conditions. However, current perception systems rely mainly on data from camera or light detection and ranging (lidar) sensors, which are negatively affected by certain environmental conditions [38]. For example, fog or rain reduces the perception capability of both sensor types, and camera sensors depend on an external light source, which limits their usability in the dark [38]. As a result of these limitations, research has focused on integrating radio detection and ranging (radar) sensors into perception systems.
Although radar data is mostly unaffected by adverse environmental conditions [38], the detection quality of radar-based systems cannot yet compete with state-of-the-art image- or lidar-based perception methods [6]. While there are several reasons for this discrepancy, the two major challenges of radar-based perception are the limited availability of annotated radar data and the sparsity of radar point cloud data [26, 39, 35].
Leveraging the sparse information available, a graph neural network (GNN) is proposed that will not just utilize the information encoded in the points but also the relationships between the points. As shown in Fig. 1, an object is characterized by multiple radar points, which is why the relationship between the points is important to identify and differentiate between objects. In addition, GNNs can operate on unstructured and unordered input data, eliminating the need for data discretization (voxelization) and its associated loss of information [33]. Therefore, all the information of sparse radar point clouds can be used without losing their structural information.

Figure 1: Example scenario a) of the RadarScenes dataset [29] and its corresponding radar point cloud data in the bird's eye view. The annotated ground truth data is shown in b), while the model prediction for object classes and bounding boxes is given in c).
To counteract the limited data availability, a general approach for incorporating invariances into the perception pipeline is proposed. Building upon the success of translation invariant convolution operations, a method is proposed for creating a translation and rotation invariant perception pipeline, leading to better generalization and improved perception quality.
The proposed method was evaluated on the RadarScenes dataset [29] and outperforms all previous methods for bounding box prediction as well as semantic segmentation. In summary, the contributions of this paper are:
* A novel GNN model for radar-based multi-class object detection and semantic segmentation.
* A general approach for transformation invariant object detection and semantic segmentation.
* A new state of the art for object detection and semantic segmentation on the RadarScenes dataset.
## 2 Related Work
State-of-the-art radar-based object detection methods rely on deep neural networks (DNNs) to detect objects within the provided radar point clouds. Although radar data can also be processed with more conventional methods [25, 28, 31] and object detection can be performed at different data abstraction levels [16, 20, 23], DNNs applied to radar point clouds achieve the best results.
### Radar Datasets
The performance of data-driven perception methods is to a great extent dependent on the underlying dataset. Therefore, the selection of an appropriate dataset is essential for successful model training and meaningful evaluation. However, since most popular perception datasets, such as KITTI [10] or the Waymo Open Dataset [34], do not include annotated radar data, special emphasis is placed on radar-oriented datasets.
Of these, the Dense [3], PixSet [7] and Zendar [17] datasets provide annotated two-dimensional radar data, but the spatial resolution of the deployed radar sensors as well as the extent of the datasets is comparatively small. In contrast, the Oxford Radar RobotCar [1], MulRan [14] and RADIATE [32] datasets utilize high resolution spinning radar sensors which are not representative of currently deployed automotive radar sensors. The nuScenes [6] dataset includes radar data, but multiple authors [9, 19, 29, 39] have criticized the radar data quality of the nuScenes dataset because of its sparsity, limited feature resolution and errors within the radar domain. In consequence, the RadarScenes [29] dataset is chosen for this work.
The RadarScenes [29] dataset includes point-wise annotated radar data of moving objects assigned to eleven different categories. The dataset comprises the data of four series production automotive radar sensors and contains more than four hours of driving data. The radar points are represented by their spatial coordinates (\(x,y\)), target velocities (\(v_{x},v_{y}\)), radar cross section (\(rcs\)) and a timestamp (\(t\)). Currently, most comparative results for radar-based perception are reported based on the RadarScenes dataset, even if it does not provide ground truth bounding boxes and only considers moving objects for ground truth annotations.
### Point Cloud Object Detection
The approaches used to detect objects within point clouds, using deep neural networks, can be divided into three major groups: point-based, grid-based and graph-based methods. In addition to these three general concepts, hybrid methods can be designed by combining different approaches.
Point-based approaches operate on the input point clouds directly, without the need for any preceding data transformations. Therefore, all information and the structural integrity of the point cloud is preserved. Utilizing this method, Schumann [27] performed a semantic segmentation on a proprietary radar dataset and later extended their approach to develop an instance segmentation model for radar data [26]. Building upon this, Nobis [19] developed a point-based recurrent neural network (RNN) to realize semantic segmentation on nuScenes radar data. Nevertheless, point-based approaches cannot consider individual relationships between points, even if the structure of local groups can be taken into account [33].
Grid-based methods map the point clouds to a structured grid representation by a discretization (voxelization) of the underlying space. Based on this data structure, conventional convolutional neural networks (CNN) can be applied to accomplish different computer vision tasks. Using this approach, Schumann [30] developed an autoencoder network to perform a semantic segmentation on radar point clouds, originating from a proprietary dataset. Scheiner [26] applied a YOLOv3 [24] detector to a bird's-eye view (BEV) grid representation of radar point clouds and achieved state-of-the-art object detection results on the RadarScenes [29] dataset. However, the preceding data transformation leads to a loss of information and a sparse data representation.
Graph-based methods construct a graph from the input point cloud to operate on and can be categorized into convolutional [15], attentional [37] and message passing [11] neural network types [4]. For graph neural networks, the points are used as nodes within the graph, preserving the structural information of the point cloud, and the relationships between the points are modeled as edges in the graph [5]. This method was first used by Shi and Rajkumar [33] to implement a GNN for object detection on lidar point clouds. So far, GNNs have only been used once by Svenningsson _et al_. [35] to realize a graph-based object detection on radar point clouds. However, their method was limited to a graph convolutional layer formulation, the effects of invariances were not investigated and their approach was limited to the detection of cars within the nuScenes dataset. Therefore, to the best of our knowledge, GNNs have never previously been used to accomplish a multi-class object detection task on radar point cloud data.
Additionally, hybrid methods can be used to combine several of the above mentioned techniques. Scheiner _et al_. [26] compared multiple different hybrid methods on the RadarScenes [29] dataset and most recently, Palffy _et al_. [21] evaluated a hybrid model architecture on the newly published View-of-Delft [21] dataset. However, most hybrid methods [21, 36, 26] rely on a grid-based approach for the final object detection and are therefore subject to similar disadvantages, as mentioned above.
### Transformation Invariance
The great success of convolutional neural networks over their predecessors was mainly attributed to the translation invariant property of the convolution operation [12, pp. 335-339]. A function is considered translation invariant if its output remains unchanged after a translation of its inputs [5].
Achieving an invariant object detection model is of great interest because the model should be robust to unseen scenarios, where objects can occur in different locations. Moreover, the model should also be applicable to sensors mounted in different positions. To accomplish this property, the following three methods have been used in the literature: data augmentation, invariant data representation and invariant model architectures.
Data augmentation is used to extend the training dataset by modified (e.g. translated or rotated) copies of the original data and relies on the model learning the desired invariances during the training process [2]. This technique is commonly used to support the generalization of the model and is applied in many of the above-mentioned methods [26, 35]. However, this method does not ensure an invariant model after the training process.
Invariant data representation describes the restriction of the input features to those that are invariant to certain transformations [18, p. 23 ff.]. In the Euclidean space, for example, distance is an invariant quantity, which is unaffected by all rigid transformations [18, p. 12]. Only representing the data by such quantities results in an overall invariant data representation. However, limiting the input data to invariant features results in a loss of global context (e.g. absolute vs. relative coordinates).
Invariant model architectures are designed in such a way that the output of the model remains unchanged regardless of a transformation applied to the model input. Such an invariant model architecture was used by Svenningsson _et al_. [35] to design a translation invariant GNN for radar-based object detection. However, incorporating invariances in the model architecture restricts the architectural design to certain (mathematically symmetric) operations [22] and makes the investigation of the effects of different invariances complicated.
## 3 RadarGNN
This section describes the proposed RadarGNN model for radar-based object detection and semantic segmentation. The object detection pipeline is shown in Fig. 2 and consists of four major steps: data preparation, graph construction, the graph neural network and the detection heads. The main idea behind this method is the achievement of transformation invariances not by using the model architecture itself but by creating an invariant data representation. To achieve this, three things are required: transformation invariant bounding boxes, a graph construction method for invariant input features and a generalization of the GNN layer to consider edge features.
### Data Preparation
The purpose of the data preparation is to create a similar database to that used in previous research and to enable the training of a translation and rotation invariant perception model. This step is important for preserving the comparability of the results and affects not just the model inputs but also the target value determination.
The preparation of the RadarScenes data is based on the implementation of Scheiner _et al_. [26], which serves as the benchmark for this study. Using this approach, the model input is given by an accumulation of radar point clouds within a time period of \(500\,\mathrm{ms}\). The resulting point cloud is then cropped to an area of \(100\,\mathrm{m}\) times \(100\,\mathrm{m}\) in front of the vehicle's rear axle, as shown in [26, Fig. 2]. Since the RadarScenes dataset does not include ground truth bounding boxes, they are created as minimum enclosing rectangles in bird's-eye view, including all points belonging to the same instance. Furthermore, the original eleven instance categories are mapped to five major object classes and finally, the overall dataset is split into training (\(64\,\%\)), validation (\(16\,\%\)) and test (\(20\,\%\)) sets.
The investigation of the effects of different invariances on the detection quality of the model requires different bounding box definitions. The existing absolute bounding box definition can be used to train a non-invariant baseline model. An absolute bounding box is defined as a tuple \((x,y,w,l,\theta)\) consisting of the box center coordinates \(x,y\), the box dimensions \(w,l\) and the yaw angle \(\theta\).
The training of a translation invariant model requires a translation invariant bounding box definition. To achieve this, the bounding box is no longer defined by its absolute center coordinates but by the relative position to its associated radar point \(p_{0}\). Therefore, a translation invariant bounding box is given by a tuple \((dx,dy,w,l,\theta)\), with a relative translation \(dx,dy\) between the box center and the radar point \(p_{0}\) it belongs to.
The bounding box definition for the training of a translation and rotation invariant model requires the addition of a second reference point \(p_{nn}\). On this basis, a translation and rotation invariant bounding box is defined as tuple \((d,\varphi,w,l,\theta_{nn})\). Here, the distance \(d\) is the distance between the reference point \(p_{0}\) and the bounding box center. The angle \(\varphi\) represents the angle between the vector from the reference point \(p_{0}\) to its nearest neighbor \(p_{nn}\) and the vector to the bounding box center. Finally, the angle \(\theta_{nn}\) corresponds to the angle between the directional vector of the bounding box and the vector from the reference point \(p_{0}\) to its nearest neighbor \(p_{nn}\). A graphical representation of the translation and rotation invariant bounding box definition is given in Fig. 3.
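The invariant box encoding can be made concrete with a short sketch. The following minimal numpy example, with function names and angle conventions of our own choosing, shows how the tuple \((d,\varphi,\theta_{nn})\) could be derived from an absolute box given the reference points \(p_{0}\) and \(p_{nn}\):

```python
import numpy as np

def wrap(angle):
    # Wrap an angle to [-pi, pi).
    return (angle + np.pi) % (2 * np.pi) - np.pi

def encode_box_invariant(center, theta, p0, pnn):
    """Encode an absolute BEV box (center, yaw theta) relative to the radar
    point p0 and its nearest neighbor pnn (cf. Fig. 3). The box extent
    (w, l) carries over unchanged since it is already invariant."""
    v_nn = pnn - p0                          # reference direction p0 -> pnn
    v_c = center - p0                        # direction p0 -> box center
    d = np.linalg.norm(v_c)                  # distance to the box center
    ang_nn = np.arctan2(v_nn[1], v_nn[0])
    phi = wrap(np.arctan2(v_c[1], v_c[0]) - ang_nn)
    theta_nn = wrap(theta - ang_nn)          # orientation w.r.t. reference
    return d, phi, theta_nn
```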
### Graph Construction
The graph construction module transforms the initial radar point cloud into a graph representation and ensures a transformation invariant input data representation. The proposed method comprises three major steps: the node feature transformation, the edge generation and a matrix transformation.
The node feature transformation maps the original point cloud \(\mathcal{P}\), formally defined as finite set \(\mathcal{P}\) of \(n\in\mathbb{N}\) vectors \(p_{i}\in\mathbb{R}^{d}\) with \(i=1,...,n\), to a set of nodes \(\mathcal{V}=\{\nu_{0},...,\nu_{n}|\nu\in\mathbb{R}^{d_{\nu}}\}\). During this transformation process, the number of points is preserved while the features are transformed. This is important to ensure the invariance of the data representation, while preserving the structure of the original point cloud. The key functionality, however, is the selection of the node features, which determines the invariances of the data representation.
In the non-invariant baseline configuration, the nodes are defined by their absolute spatial coordinates (\(x\), \(y\)), their velocity vectors (\(v_{x}\), \(v_{y}\)), the values of the radar cross section \(rcs\) and the associated timestamps \(t\). To achieve a translation invariant data representation, the absolute spatial coordinates are omitted and the encoding of the structural information is subject to the edge generation. Accomplishing a translation and rotation invariant representation further requires the reduction of the velocity information to the Euclidean norm of the velocity vector \(v\). In addition, all nodes hold the information about the connectivity degree c (the number of associated edges). This process shows that the achievement of certain invariances results in a loss of global context, which the edge generation attempts to compensate for.
The edge generation encodes relationships between points in the form of edges and edge features. The set of edges can be formally defined as \(\mathcal{E}\subseteq\{(u,\nu)|(u,\nu)\in\mathcal{V}^{2},u\neq\nu\}\), where every edge \(\varepsilon\in\mathcal{E}\) can be associated with an edge feature vector \(e:\mathcal{E}\mapsto\mathbb{R}^{d_{\varepsilon}}\). The final graph is then defined by the tuple of nodes and edges \(\mathcal{G}=(\mathcal{V},\mathcal{E})\).
The edges of the graph are created by a k-nearest neighbors algorithm, where the number of neighbors k is determined empirically and represents a trade-off between performance and computational resources. As a result, every node of the graph is connected to its twenty nearest neighbors.
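A minimal sketch of this edge generation is given below; only the neighbor count \(k=20\) is taken from the text, while the use of a k-d tree is our own choice:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_edges(xy, k=20):
    """Connect every node to its k nearest neighbors (k=20 in the paper).
    xy: (n, 2) array of point coordinates.
    Returns a (n * k, 2) array of (sender, receiver) index pairs."""
    tree = cKDTree(xy)
    # Query k+1 neighbors because the closest match is the point itself.
    _, idx = tree.query(xy, k=k + 1)
    senders = idx[:, 1:].reshape(-1)          # drop the self match
    receivers = np.repeat(np.arange(len(xy)), k)
    return np.stack([senders, receivers], axis=1)
```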
The edge features are generated in consideration of the desired invariances and with the aim of encoding relationship information between points. However, for the non-invariant baseline configuration, no edge features are generated and all information is contained within the node features. The translation invariant data representation requires omitting the absolute coordinates; instead, the position relative to the neighboring points, \(dx\) and \(dy\), is encoded in the edge features to preserve the spatial information. To create a rotation and translation invariant data representation,
Figure 2: Model overview from point cloud processing on the left, through graph construction and GNN feature extraction, up to the object detection and semantic segmentation on the right.
an enhanced set of point-pair features, inspired by Drost [8], is generated. This set consists of the Euclidean distance \(d\) between the two points, the relative angle \(\psi\) between their velocity vectors and the individual angles between the velocity vector of each point and their connecting line, \(\gamma_{\nu}\) and \(\gamma_{u}\). An overview of the various node and edge features for the different data representations is presented in Tab. 1.
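The point-pair features can be sketched as follows, under the assumption that all angles are taken as signed 2D angles; the helper names are ours:

```python
import numpy as np

def signed_angle(a, b):
    # Signed angle from 2D vector a to 2D vector b.
    return np.arctan2(a[0] * b[1] - a[1] * b[0], np.dot(a, b))

def point_pair_features(p_v, p_u, vel_v, vel_u):
    """Translation and rotation invariant edge features (d, psi,
    gamma_v, gamma_u) for one point pair, following Sec. 3.2."""
    line = p_u - p_v                    # connecting line between the points
    d = np.linalg.norm(line)            # Euclidean distance
    psi = signed_angle(vel_v, vel_u)    # relative angle of velocity vectors
    gamma_v = signed_angle(vel_v, line)
    gamma_u = signed_angle(vel_u, line)
    return np.array([d, psi, gamma_v, gamma_u])
```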
The matrix transformation maps the previously generated graph to a more easily processable matrix data representation. Therefore, the graph \(\mathcal{G}\) is mapped to a tuple consisting of an adjacency matrix \(\mathbf{A}\), a node feature matrix \(\mathbf{X}\) and an edge feature matrix \(\mathbf{E}\). The resulting tuple \((\mathbf{A},\mathbf{X},\mathbf{E})\) represents the actual input of the subsequently defined neural network.
### Graph Neural Network Architecture
The graph neural network consists of two major components and is responsible for the generation of an expressive feature representation for the connected detection heads. The two components are the initial feature embedding and the GNN layers themselves.
The initial feature embedding creates a high-dimensional non-contextual feature representation from the low-dimensional node and edge features. Therefore, a shared multilayer perceptron (MLP) with four layers is used for node feature embedding and one with three layers for edge feature embedding.
A major contribution of this paper is the generalization of the previously proposed graph neural network layer of Svenningsson [35], which is based on Shi [33]. Their graph convolutional layer function
\[\mathrm{h}_{\nu}^{l+1}=\zeta(\mathrm{h}_{\nu}^{l},\oplus_{u\in\mathcal{N}_{ \nu}}\ \xi(x_{\nu}-x_{u},\mathrm{h}_{u}^{l}))+\mathrm{h}_{\nu}^{l}, \tag{1}\]
updates the node features \(\mathrm{h}_{\nu}\) of node \(\nu\) by adding the results of the update function \(\zeta\) to the input node features \(\mathrm{h}_{\nu}^{l}\) of the current layer \(l\). The update values are determined by an aggregation \(\oplus\) over the node's neighborhood \(\mathcal{N}_{\nu}\) and implemented as a max pooling over the neighbor node features \(\mathrm{h}_{u}^{l}\) weighted by their relative position \(x_{\nu}-x_{u}\). However, this layer formulation does not allow feature dimension changes across GNN layers and only achieves a translational invariance.
To overcome these limitations we propose a more general message passing neural network (MPNN) layer formulation instead of the graph convolutional (GC) layer formulation of Svenningsson [35]. To achieve this, two major changes are made to the update function in Eq. (1). Firstly, the current node features \(\mathrm{h}_{u}^{l}\) are introduced in the embedding function \(\xi\) to allow dimensional changes across GNN layers. Secondly, the possibility of using arbitrary edge features is introduced instead of only using the relative position between neighboring nodes. Therefore, the achievement of certain invariances is subject to the edge and node feature generation rather than the layer formulation itself, which allows the analysis of different invariances without changing the network architecture. The resulting update function \(\zeta\) is given by
\[\mathrm{h}_{\nu}^{l+1}=\zeta(\mathrm{h}_{\nu}^{l},\oplus_{u\in\mathcal{N}_{ \nu}}\ \xi(\mathrm{h}_{\nu}^{l},\mathrm{h}_{u}^{l},\mathrm{e}_{\nu,u}^{l})) \tag{2}\]
where the new node features \(\mathrm{h}_{\nu}^{l+1}\) of node \(\nu\) after layer \(l\) are a function of the original node features \(\mathrm{h}_{\nu}^{l}\) and an aggregation \(\oplus\) over the neighborhood \(\mathcal{N}_{\nu}\). In this process, the aggregation is conducted over the embedding \(\xi\) of the features \(\mathrm{h}_{\nu}^{l}\) of the node being updated, the neighbor node features \(\mathrm{h}_{u}^{l}\) and the features of the edge \(\mathrm{e}_{\nu,u}^{l}\) connecting them. To obtain a permutation invariant network architecture, the update function \(\zeta\) and the embedding function \(\xi\) are implemented as shared MLPs, and the aggregation function determines the maximum of the embedded features - similar to [22].
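A minimal PyTorch sketch of this layer is given below; the structure of \(\xi\), \(\zeta\) and the max aggregation follow Eq. (2), whereas the MLP depths and the activation function are placeholder assumptions:

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """Message passing layer of Eq. (2): xi and zeta are shared MLPs,
    the aggregation is a feature-wise maximum over incoming messages."""
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.xi = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, out_dim), nn.ReLU())
        self.zeta = nn.Sequential(
            nn.Linear(node_dim + out_dim, out_dim), nn.ReLU())

    def forward(self, h, edge_index, e):
        # edge_index: (2, E) with sender/receiver node index per edge.
        send, recv = edge_index
        msg = self.xi(torch.cat([h[recv], h[send], e], dim=-1))
        agg = torch.full((h.size(0), msg.size(1)), float('-inf'),
                         device=h.device)
        agg = agg.scatter_reduce(0, recv.unsqueeze(-1).expand_as(msg),
                                 msg, reduce='amax', include_self=True)
        agg = torch.where(torch.isinf(agg), torch.zeros_like(agg), agg)
        return self.zeta(torch.cat([h, agg], dim=-1))
```

Unlike the graph convolutional layer of Eq. (1), the output dimension `out_dim` is free to differ from `node_dim`, which is exactly the dimensional flexibility motivated above.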
The resulting feature space of these layers forms the foundation for the subsequent detection heads to accomplish the desired perception tasks.
### Object Detection and Semantic Segmentation
The proposed RadarGNN model not only performs object detection but also semantic segmentation on the given radar point cloud. This multi-task learning approach is realized by a distinct feature extraction module and individual detection heads, which could also be extended to support additional perception tasks. For our purpose, a semantic segmentation and an object detection (bounding box prediction) head are used.
\begin{table}
\begin{tabular}{l l c} \hline \hline Invariance & Node features & Edge features \\ \hline - & \(x,y,v_{x},v_{y},rcs,t,\mathsf{c}\) & - \\ Trans. & \(v_{x},v_{y},rcs,t,\mathsf{c}\) & \(dx,dy\) \\ Trans. and rot. & \(v,rcs,t,\mathsf{c}\) & \(d,\psi,\gamma_{\nu},\gamma_{u}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Set of node and edge features of the graph for different transformation invariant data representations.
Figure 3: Definition of the translation and rotation invariant bounding box in regard to the radar point \(p_{0}\) and the reference point \(p_{nn}\). The box is defined by its extent (\(w,l\)), position (\(d,\varphi\)) and orientation (\(\theta_{nn}\)) in the bird’s-eye view.
The semantic segmentation head consists of a shared MLP, with a final softmax activation function and predicts a confidence score for each class. The final class for every individual point is then determined by the highest confidence score among all classes.
The object detection head is realized by a shared MLP with two consecutive layers and a linear activation function in order not to restrict the output space. The module predicts a bounding box for every point within the given point cloud, which is why a number of suppression schemes have to be applied to obtain the final output. Firstly, a background removal is applied to remove all bounding boxes associated with the background class. Secondly, a non-maximum suppression (NMS) is used to remove all overlapping bounding boxes and keep only the one with the highest confidence score. Thirdly, a class-specific threshold is applied to discard all remaining bounding boxes below a certain confidence. Finally, the absolute bounding box representation is restored.
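The suppression cascade can be summarized in the following sketch; the greedy NMS routine, its IOU threshold, and the `iou_fn` argument are illustrative assumptions, since the text only fixes the order of the three steps and the existence of class-specific confidence thresholds:

```python
import numpy as np

def suppress(boxes, scores, labels, bg_class, class_thr, iou_fn, nms_iou=0.5):
    """Apply background removal, greedy NMS and per-class thresholds.
    boxes: (N, 5), scores: (N,), labels: (N,) integer class ids."""
    keep = labels != bg_class                        # 1) background removal
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    order = np.argsort(-scores)                      # 2) greedy NMS
    selected = []
    while order.size > 0:
        i, order = order[0], order[1:]
        selected.append(i)
        if order.size > 0:
            ious = np.array([iou_fn(boxes[i], boxes[j]) for j in order])
            order = order[ious < nms_iou]
    sel = np.array(selected, dtype=int)
    boxes, scores, labels = boxes[sel], scores[sel], labels[sel]
    keep = scores >= class_thr[labels]               # 3) per-class threshold
    return boxes[keep], scores[keep], labels[keep]
```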
The overall model is trained with a combined loss function, consisting of multiple task-specific loss functions. The semantic segmentation branch uses a class-weighted cross-entropy loss function \(\mathcal{L}_{\text{seg}}\), whereas the object detection branch utilizes a Huber loss function \(\mathcal{L}_{\text{obj}}\)[13] with a delta value of one. Additionally, an L2 regularization term \(\mathcal{L}_{\text{reg}}\) is introduced to prevent the model from overfitting [35]. The overall loss function is given by
\[\mathcal{L}=\alpha\mathcal{L}_{\text{seg}}+\beta\mathcal{L}_{\text{obj}}+ \gamma\mathcal{L}_{\text{reg}} \tag{3}\]
where the weights \(\alpha=1,\ \beta=0.5\) and \(\gamma=5\times 10^{-6}\) balance between the different loss terms.
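A minimal PyTorch sketch of this training objective is shown below; the loss types, the delta value and the weights are taken from the text, while the class weights and the tensor shapes are assumptions:

```python
import torch
import torch.nn as nn

ALPHA, BETA, GAMMA = 1.0, 0.5, 5e-6  # loss weights from Eq. (3)

def combined_loss(seg_logits, seg_targets, box_preds, box_targets,
                  model, class_weights):
    """Combined multi-task loss of Eq. (3)."""
    l_seg = nn.functional.cross_entropy(seg_logits, seg_targets,
                                        weight=class_weights)
    l_obj = nn.functional.huber_loss(box_preds, box_targets, delta=1.0)
    l_reg = sum(p.pow(2).sum() for p in model.parameters())  # L2 term
    return ALPHA * l_seg + BETA * l_obj + GAMMA * l_reg
```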
## 4 Experimental Results
The RadarGNN model is evaluated on the RadarScenes dataset and the obtained results are compared to the current state of the art in radar-based object detection and semantic segmentation. In addition, a detailed analysis of the effects of different invariances on the model's performance is given. All experiments are conducted on a dedicated benchmark server and within a containerized environment to keep the evaluation environment constant.
### Object Detection
The object detection quality of the RadarGNN model is measured by the mean average precision (mAP) value as defined in [26] and with an intersection over union (IOU) threshold of \(0.3\). In addition, the class-specific average precision (AP) values are reported for the pedestrian (ped), pedestrian group (grp), two-wheeler (tw), car and large vehicle (trk) classes. The RadarGNN method achieves a mAP value of \(60.2\,\%\) on the RadarScenes validation set, as shown in Tab. 2. The highest AP values are achieved for the two-wheeler, car and large vehicle classes, while the lowest AP value is reported on the pedestrian class. To put that into perspective, the results are compared to the current state of the art on the RadarScenes dataset.
Since all previous results for rotated bounding boxes are only reported on the validation set and the source code of none of the comparison models is publicly available, the RadarGNN model is compared to the literature values accomplished on the validation set. However, the results of our model are also reported on the independent test set in the bottom row of Tab. 2.
The RadarGNN model achieves state-of-the-art results on the RadarScenes dataset and outperforms all previous object detection methods, as shown in Tab. 2. Our graph-based architecture achieves the highest mean average precision (mAP) value and outperforms the hybrid PointPillars as well as the grid-based YOLOv3 method of Scheiner _et al_. [26].
The increased object detection quality has multiple causes, with the utilization of point relationships and the preservation of the structural information being of particular importance. Investigations demonstrated that a higher connectivity (number of edges) results in a better object detection score, but consequently in an increase in computational resources. In addition, the results of Scheiner _et al_. [26] indicate that the detection quality of the grid-based approach is limited with respect to the yaw angle prediction
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Split & AP\({}_{\text{ped}}\) & AP\({}_{\text{grp}}\) & AP\({}_{\text{tw}}\) & AP\({}_{\text{car}}\) & AP\({}_{\text{trk}}\) & mAP\(\uparrow\) \\ \hline PointPillars [26] & val. & 11.2 & 22.5 & 38.8 & 54.2 & 53.7 & 36.1 \\ YOLOv3 [26] & val. & **34.4** & 55.7 & 57.4 & 70.2 & 61.9 & 55.9 \\ RadarGNN (ours) & val. & 34.0 & **58.3** & **66.6** & **72.0** & **70.1** & **60.2** \\ \hline RadarGNN (ours) & test & 33.1 & 54.8 & 62.0 & 70.4 & 62.0 & 56.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Object detection results on the RadarScenes dataset for the translation invariant model configuration. The first three rows are evaluated on the validation (val.) set, while the bottom row is evaluated on the test set. The detection quality is given by the average precision (AP) values for the pedestrian (ped), pedestrian group (grp), two-wheeler (tw), car and large vehicle (trk) class with an IOU threshold of \(0.3\). The mAP represents the mean average precision over all five classes. All benchmark results can be found in [26, Tab. 3].
because of the discretization and associated loss of structural information, which are not observed with the graph-based approach. Nevertheless, the previous YOLOv3 [26] method achieves better results for the pedestrian (ped) class, which is characterized by having very few radar points. Additionally, the introduced invariances greatly influence the detection quality, which is discussed below in Sec. 4.3.
To provide more context to these results, the RadarGNN model is also benchmarked in the official nuScenes detection challenge and achieves a nuScenes detection score (NDS) of \(0.059\). The proposed method therefore outperforms the comparison model of Svenningsson _et al_. [35], which achieved an NDS of \(0.034\), and currently has the second highest score among all radar-only object detection methods. However, it must be noted that our model was not designed for 3D object detection on the nuScenes dataset but rather for bird's-eye view (BEV) object detection on the RadarScenes dataset.
### Semantic Segmentation
In addition to the object detection quality, the semantic segmentation quality of the proposed multi-task neural network is evaluated on the RadarScenes dataset and compared to the current state of the art. The segmentation quality is measured by the macro-averaged F1 score, where the RadarGNN model achieves a score of \(77.1\,\%\) on the RadarScenes test dataset, as shown in Tab. 3.
The proposed method outperforms all previous radar-based multi-task learning approaches and even the dedicated semantic segmentation models of Schumann _et al_. [30]. As shown in Fig. 4, the model is able to differentiate between the different classes, while the highest confusion exists between the pedestrian (ped) and pedestrian group (grp) classes. This result indicates the potential of the graph-based approach for additional computer vision applications on radar point clouds.
On the nuScenes dataset a macro-averaged F1 score of \(19.6\,\%\) can be achieved on the validation set, which represents the first reported semantic segmentation result with the official nuScenes configuration. Although Nobis _et al_. [19] developed a semantic segmentation method on nuScenes radar data, they used a simplified class configuration and achieved a macro-averaged F1 score of \(22.8\,\%\). Using the exact same class configuration as Nobis _et al_. [19], our model achieves a macro-averaged F1 score of \(39.4\,\%\).
### Transformation Invariances
The proposed approach to transformation invariance allows a detailed analysis of the effects of different invariances on the perception performance. The conducted study compares the object detection and semantic segmentation quality of three models with a non-invariant, translation invariant as well as translation and rotation invariant configuration.
For the object detection task, the highest mAP value of \(56.5\,\%\) is achieved by the translation invariant configuration, as shown in Tab. 4. In contrast, the non-invariant and the translation and rotation invariant configurations accomplished lower detection scores of \(19.4\,\%\) and \(19.6\,\%\), respectively.
This result could be caused both by the restriction of the input features and by the differences in the bounding box description (as described in Sec. 3). For this reason, a complementary study, with a translation and rotation invariant bounding box definition for all three configurations, was conducted. The results of this study show that the more complex bounding box definition leads to an overall lower detection quality, but the general trend remains the same. Within this study, the non-invariant and translation invariant methods achieved a mAP of \(16.4\,\%\) and \(20.8\,\%\), respectively. Consequently, the input feature restriction can be identified as the root cause of the reduced detection quality. In summary, translation invariance increases the detection quality, but the additional rotation invariance, and the accompanying restriction of the input features, negatively affects the detection quality.
Figure 4: Confusion matrix of the semantic segmentation results on the RadarScenes test set. The matrix represents the ground truth values in contrast to the model prediction for the five objects classes and the background (bg) class.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & F\({}_{1}\uparrow\) \\ \hline PointPillars [26] & 47.6 \\ YOLOv3 [26] & 53.0 \\ LSTM [31] & 59.7 \\ PointNet++ [27] & 74.3 \\ Recurrent PointNet++ [30] & 75.0 \\ RadarGNN (ours) & **77.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Semantic segmentation results on the RadarScenes test set, given by the macro-averaged F1 score.
Similar to the object detection task, the highest semantic segmentation score of \(77.1\,\%\) is achieved by the translation invariant configuration, as shown in Tab. 4. The non-invariant configuration achieved a macro-averaged F1 score of \(68.2\,\%\), and the translation and rotation invariant configuration achieved \(76.5\,\%\).
The smaller differences between the semantic segmentation results, in comparison to the object detection results, can be explained by two reasons. Firstly, the bounding box description has no influence on the results, since semantic segmentation requires no bounding boxes. Secondly, directional information (which is lost with the addition of rotational invariance) is less relevant for point-wise classification than it is for the prediction of the bounding box orientation.
However, the object detection and semantic segmentation tasks are not independent but coupled through the combined loss function of Eq. (3) and joint model training. Therefore, the segmentation quality is affected by the object detection performance and vice versa. Since the differences between the segmentation results are small, we conducted a complementary study to further analyze the effects of invariances on the segmentation quality. Within this study, the segmentation branch was trained independently by setting \(\beta=0\). As a result, the non-invariant model achieved an F1 score of \(60.5\,\%\), the translation invariant model achieved \(66.5\,\%\) and the translation and rotation invariant configuration achieved \(68.2\,\%\). Consequently, the assumption is made that further invariances increase the segmentation quality and justify the necessary restriction of the input features.
To provide evidence for the claim that the introduction of transformation invariances counteracts the effects of limited data availability, we studied the model performance under a reduction of the training data. For this experiment we gradually reduced the number of training data sequences, but kept the test set constant and monitored the model performance for the three different levels of invariance. The results show that transformation invariant models are less affected by limited data availability and that the addition of further invariances contributes positively to this effect, as shown in Fig. 5.
As a result, the analysis indicates that certain perception tasks benefit differently from specific transformation invariances. Whereas semantic segmentation improves with the addition of further invariances, the highest object detection score is achieved by the translation invariant configuration. Furthermore, additional transformation invariances improve the ability of the model to handle limited data availability.
## 5 Conclusion
In this paper, we present a graph neural network for both multi-class object detection and semantic segmentation on radar point cloud data. The proposed RadarGNN model uses a generalized message passing neural network layer to consider edge features within its update function and to allow dimensional changes in the GNN. Furthermore, a more generalized approach to achieving transformation invariance is proposed through the creation of an invariant data representation rather than an invariant model architecture. This modification allows the analysis of different invariances without changing the model architecture itself and is transferable to different applications. However, since an invariant data representation always involves a restriction of the input features, a distinct set of point-pair features is proposed to compensate for this during the edge feature generation. The proposed RadarGNN model achieves state-of-the-art results on the RadarScenes dataset for both radar-based object detection and semantic segmentation. In addition, the effects of different invariances on the object detection and semantic segmentation quality are investigated. The incorporation of a sensor fusion concept or the transfer to different sensor modalities is subject to future research.
|
2308.15822 | AMDNet23: A combined deep Contour-based Convolutional Neural Network and
Long Short Term Memory system to diagnose Age-related Macular Degeneration | In light of the expanding population, an automated framework of disease
detection can assist doctors in the diagnosis of ocular diseases, yields
accurate, stable, rapid outcomes, and improves the success rate of early
detection. The work initially intended the enhancing the quality of fundus
images by employing an adaptive contrast enhancement algorithm (CLAHE) and
Gamma correction. In the preprocessing techniques, CLAHE elevates the local
contrast of the fundus image and gamma correction increases the intensity of
relevant features. This study operates on a AMDNet23 system of deep learning
that combined the neural networks made up of convolutions (CNN) and short-term
and long-term memory (LSTM) to automatically detect aged macular degeneration
(AMD) disease from fundus ophthalmology. In this mechanism, CNN is utilized for
extracting features and LSTM is utilized to detect the extracted features. The
dataset of this research is collected from multiple sources and afterward
applied quality assessment techniques, 2000 experimental fundus images
encompass four distinct classes equitably. The proposed hybrid deep AMDNet23
model demonstrates to detection of AMD ocular disease and the experimental
result achieved an accuracy 96.50%, specificity 99.32%, sensitivity 96.5%, and
F1-score 96.49%. The system achieves state-of-the-art findings on fundus
imagery datasets to diagnose AMD ocular disease and findings effectively
potential of our method. | Md. Aiyub Ali, Md. Shakhawat Hossain, Md. Kawar Hossain, Subhadra Soumi Sikder, Sharun Akter Khushbu, Mirajul Islam | 2023-08-30T07:48:32Z | http://arxiv.org/abs/2308.15822v1 | **AMDNet23: A combined deep Contour-based Convolutional Neural Network and Long Short Term Memory system to diagnose Age-related Macular Degeneration**
## Abstract
In light of the expanding population, an automated framework for disease detection can assist doctors in the diagnosis of ocular diseases, yield accurate, stable, and rapid outcomes, and improve the success rate of early detection. The work initially aimed to enhance the quality of fundus images by employing an adaptive contrast enhancement algorithm (CLAHE) and gamma correction. Among the preprocessing techniques, CLAHE elevates the local contrast of the fundus image and gamma correction increases the intensity of relevant features. This study operates on an AMDNet23 deep learning system that combines convolutional neural networks (CNN) and long short-term memory (LSTM) to automatically detect age-related macular degeneration (AMD) disease from fundus ophthalmology images. In this mechanism, CNN is utilized for extracting features and LSTM is utilized to classify the extracted features. The dataset of this research was collected from multiple sources and afterward subjected to quality assessment techniques; the 2000 experimental fundus images encompass four distinct classes equally. The proposed hybrid deep AMDNet23 model demonstrates the detection of AMD ocular disease, and the experimental results achieved an accuracy of 96.50%, specificity of 99.32%, sensitivity of 96.5%, and F1-score of 96.49%. The system achieves state-of-the-art findings on fundus imagery datasets for diagnosing AMD ocular disease, and the findings effectively demonstrate the potential of our method.
**Keywords:** AMDNet23, Fundus image classification, CNN-LSTM, ocular diseases, automated diagnosis, convolutional neural networks, long short-term memory, early detection, Medical imaging, diagnosis.
## Introduction
Over the past two decades, ocular diseases (ODs) that can cause blindness have become extremely widespread. ODs encompass a wide range of conditions that can affect various components of the eye, including the corneal tissue, lens, retina, optic nerves, and periorbital tissues. Ocular diseases include abnormalities such as cataracts, untreated nearsightedness, trachoma, age-related macular degeneration, and diabetic retinopathy. These ailments play a substantial role in global retinal degeneration and visual impairment [1]. Near- or farsighted vision deficiency affects over 2.2 billion individuals worldwide [2]. According to the World Health Organization (WHO), approximately half of these cases, amounting to at least 1 billion people, suffer from vision impairments that could have been prevented or remain unaddressed. Among these individuals, around 88.4 million have untreated refractive errors leading to moderate to severe distance vision impairment, nearly ninety-four million have cataracts, eight million are affected by age-related macular degeneration, and 3.9 million by diabetic retinopathy [3]. Despite significant investment, the number of individuals living with vision loss might increase to 1.7 billion by 2050, up from the 1.1 billion people recorded in the year 2020. Age-related macular degeneration (AMD) predominantly strikes the older demographic, resulting in the
gradual deterioration of the macula, a crucial part of the retina responsible for central vision. The consequences of AMD manifest as central vision abnormalities, including blurred or distorted vision, which significantly impede various daily activities [4].
Accurate and early identification of AMD plays a vital role in preventing irreversible damage to vision, initiating timely treatment, and safeguarding ocular health. Machine learning techniques have advanced to the point where early identification of age-related macular degeneration by an automated system has significant advantages over manual detection [5]. As aids in diagnosing eye diseases, digital pictures of the eye and computational intelligence (CI)-based technologies serve as indispensable tools that enhance doctors' diagnostic capabilities [6]. In medical imaging, various approaches are employed, including fundus photography [7], optical coherence tomography (OCT), and imaging modalities specifically designed for the eye. These imaging technologies allow for detailed visualization and analysis of ocular structures, facilitating the identification of characteristic features and abnormalities associated with age-related macular degeneration.
Several researchers have addressed a critical task in ophthalmology: facilitating the early detection and diagnosis of age-related macular degeneration (AMD) ocular disease using fundus images.
Researchers have focused on deep learning [8-10], computer vision [11,12], and machine learning [13,14] methods to develop robust classification models that accurately assign retinal images to AMD disease categories. The incorporation of deep learning methodologies [57,58] plays a pivotal role in accurately classifying diverse ocular diseases, thereby advancing intelligent healthcare practices in the field of ophthalmology [15].
Therefore, this paper presents a novel system employing an AMDNet23 framework, in which deep CNN and LSTM networks are combined for the automated identification of AMD from fundus photography. Within this approach, the CNN serves the purpose of extracting fundus features, and the LSTM undertakes the crucial task of classifying AMD based on the extracted features. The internal memory of the LSTM network enables it to learn from significant experiences over extended time intervals. In fully connected networks, successive layers are comprehensively linked while nodes within a layer remain unconnected; in contrast, LSTM nodes are connected within a directed graph along a temporal order, which serves as an input with a specific form [16]. The hybrid two-dimensional CNN-LSTM combination improves the classification of AMD from fundus ophthalmology images and assists clinical decisions; the dataset was collected from several sources, and preprocessing techniques were applied for image quality enhancement to classify AMD efficiently. The contributions of this research are articulated in the following.
a) Constructing a combined CNN-LSTM based AMDNet23 framework for the automated diagnosis of AMD, aiding clinical physicians in the early detection of patients.
b) The collected data were investigated by employing a contour-based quality assessment technique to identify the structure of the fundus photographs; fundus images with poor ocular illumination levels are automatically eliminated.
c) To enhance image quality, CLAHE improves the visibility of subtle details and enhances local contrast, and gamma correction adjusts the intensity levels, improving image quality and facilitating better diagnosis of AMD.
d) An AMDNet23 hybrid framework for the detection of AMD utilizing fundus image ophthalmology, with data comprising 2000 images distributed equally across classes.
e) An empirical evaluation is presented encompassing accuracy, specificity, sensitivity, F1-measure, and a confusion matrix to assess the effectiveness of the proposed method.
The rest of this article is arranged as follows: Section II covers the related works of this research. Section III articulates the proposed AMDNet23 methodology, including data collection and preprocessing techniques, and a comparison of some existing models. Section IV covers the experimental findings and discussion, including state-of-the-art and transfer learning comparisons. The conclusion is presented in Section V.
### Related work
In the pursuit of identifying ocular diseases, researchers have harnessed the power of deep learning techniques. These methods leverage fundus ophthalmology to facilitate the diagnosis of ocular diseases. The reviewed literature presents cutting-edge systems that employ deep-learning techniques for detecting AMD, diabetes, and cataracts.
M Sahoo et al. [17] proposed an innovative ensemble-based prediction model called weighted majority voting (WMV) for the exclusive diagnosis of dry AMD. This approach intelligently combines the predictions from various base classifiers, utilizing weights assigned to each classifier. The WMV model demonstrates remarkable accuracy, achieving accuracy rates of 96.15% and 96.94%. P Muthukannan et al. [18] introduced a computer-aided approach that leverages the flower pollination optimization approach (FPOA) in combination with a CNN mechanism for preprocessing, specifically utilizing the maximum entropy transformation on the public ODIR dataset. The model's performance was then benchmarked against other optimized models, demonstrating superior accuracy at 95.27%. In a study by Serener et al. [19], the goal was to employ OCT images and deep neural networks to detect both dry and wet AMD. For this purpose, two architectures, AlexNet and ResNet, were used. The outcomes revealed that the eighteen-layer ResNet model correctly identified AMD with an astounding accuracy of 94%, whereas the AlexNet model produced an accuracy of 63%.
Several deep learning methods exist for detecting cataracts; because of the drawbacks of feature extraction and preprocessing, these methods do not always produce adequate results. Kumar et al. [20] proposed several models to improve clinical decision-making for ophthalmologists. Paradisa et al. [21] applied a concatenated model to fundus images, with Inception-ResNet V2 and DenseNet121 implemented for feature extraction and an MLP deployed for classification; the average accuracy was 91%. Faizal et al. [22] presented an automated cataract detection algorithm using a CNN that achieves high accuracy (95%) by analyzing visible wavelength and anterior segment images, enabling cost-effective early detection of various cataract types. Pahuja et al. [23] performed data augmentation and feature extraction to enhance model performance; they used CNN and SVM models for the detection of cataracts on a dataset comprising normal and cataract retinal images, achieving high accuracies of 87.5% for the SVM and 85.42% for the CNN. The authors of [24] used a CNN model to diagnose cataract pathology from digital camera images. The model achieves high accuracy (testing: 0.9925, training: 0.9980) while optimizing processing time, demonstrating the potential of CNNs for cataract diagnosis. The authors of [25] used color fundus images to detect cataracts.
A variety of computer vision approaches are used to automatically predict the occurrence and stages of diabetic retinopathy (DR). Mondal et al. [26] presented a collaborative deep neural system for automated diabetic retinopathy (DR) diagnosis and categorization using two models: a modified DenseNet101 and ResNeXt. Experiments were conducted on the APTOS19 and DIARETDB1 datasets, with data augmentation using GAN-based techniques. Results show higher accuracy, with 86.08% for five-class and 96.98% for two-class classification. In another work, an ML-FEC model with pre-trained CNN architectures was proposed for diabetic retinopathy (DR) detection using ResNet50, ResNet152, and SqueezeNet1. On testing with DR datasets, ResNet50 achieved 93.67%, SqueezeNet1 achieved 91.94%, and ResNet152 achieved 94.40% accuracy, demonstrating suitability for clinical implementation and large-scale screening programs. Using a novel CNN model, Babenko et al. [27] were able to multi-class categorize retinal fundus pictures from a publicly accessible dataset with an accuracy of 81.33% for diabetic eye disease. Based on the UNet architecture, the authors of [28] achieved 95.65% accuracy in identifying red lesions and 94% accuracy in classifying DR severity levels. The approach was examined using publicly accessible datasets: IDRiD (99% specificity, 89% sensitivity) and MESSIDOR (94% accuracy, 93.8% specificity, 92.3% sensitivity).
## Methodology
Age-related macular degeneration (AMD) is a progressive retinal condition that predominantly impacts individuals over the age of 50. This eye disease can significantly affect eyesight, leading to various visual issues such as blurred or distorted vision, where straight lines might appear wavy or twisted. Moreover, it causes a loss of central vision, the emergence of dark or empty spots at the center of vision, and alterations in color perception. Taking proactive steps to prevent eye diseases is thus crucial for maintaining clear and vibrant sight throughout our lives. In recent years, convolutional neural networks (CNNs) have demonstrated considerable potential in medical image processing, with a remarkable capacity to automatically recognize and extract meaningful features from images. Figure 1 outlines the steps in developing the proposed CNN-based method for AMD eye disease detection:
### A. Data Collection:
The normal class represents the absence of any specific eye disease or condition; a healthy eye functions optimally, providing clear and unimpaired vision. Diabetes, a systemic disease characterized by elevated blood sugar levels, can lead to various ocular complications. Diabetic ocular disorders, notably diabetic retinopathy, occur when the blood vessels in the retina undergo damage as a consequence of elevated blood sugar concentration [29]. Age-related macular degeneration (AMD) predominantly affects older individuals and involves the progressive deterioration of the macula, a small but crucial region of the retina responsible for central vision. AMD can lead to blurred or distorted central vision, impacting daily activities [30]. A cataract is another common eye condition, particularly associated with aging. It involves clouding of the crystalline lens inside the eye, leading to hazy or foggy vision [31]. Fig. 2 shows sample images of the Normal, Cataract, AMD, and Diabetes classes, respectively.
Figure 1: Overview of the proposed method
To train a robust CNN model, a diverse and well-annotated dataset of AMD and non-AMD eye images is essential. The dataset employed in this study, containing a total of 2000 images, was put together by assessing the quality of images from six public datasets: ODIR [32], DR-200 [33], Fundus Dataset [34], RFMiD [35], ARIA [36], and Eye_Diseases_Classification [37]. The quality assessment was done using contour techniques [38]. The contour-based approach focuses on the sharpness and clarity of edges, as they play a crucial role in how humans assess image quality. The assessment considered illumination level, structural visibility, color and contrast, and whether the image directly captures the eye. Figure 3 represents a sample of the assessed image quality.
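The exact screening rule is not given in code form; the sketch below is one plausible realization that uses the variance of the Laplacian as a stand-in proxy for contour sharpness, together with simple illumination bounds, where all thresholds are illustrative assumptions:

```python
import cv2

def passes_quality_check(img_bgr, blur_thr=100.0, dark_thr=40, bright_thr=220):
    """Reject fundus images with weak edges (blurry contours) or with
    extreme illumination levels; thresholds are illustrative only."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # edge-strength proxy
    mean_lum = gray.mean()
    return sharpness > blur_thr and dark_thr < mean_lum < bright_thr
```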
As Figure 3 shows, sharp, well-defined edges contribute to high-quality images, while blurry or distorted edges indicate poor quality. Such poor-quality images would negatively impact the machine's perception. We put together a dataset consisting of four classes: Normal, Diabetes, AMD, and Cataract,
Figure 3: Contour-based approach
Figure 2: Types of fundus ophthalmology
where each class contains 500 images. Table 1 indicates the quantity of available and selected images (in parentheses) from those six public datasets.
### B. Data Pre-processing:
Preprocessing, which is one of the strong suits of the proposed work, focused on enhancing image quality, and some of the preprocessing techniques applied in this work were not used by previously proposed cataract disease detection works. The data were preprocessed in different color spaces (as shown in Figure 4) to extract features while improving the practicality of our models. Among the RGB(G), HSV(V), and LAB(L) color spaces, the vessels were most visible in the LAB(L) color space. As a result, the LAB(L) color space was chosen.
Later, preprocessing algorithms such as CLAHE and gamma correction [39] were applied to enhance image quality by adjusting the brightness and contrast. We experimented with several parameters for these algorithms and finally obtained satisfying results for CLAHE with a clip limit of 2.0 and an 8x8 tile grid. For a gamma value of 0.5, the image becomes darkened, whereas for a gamma value of 2.0, the image appears quite faded. To overcome this problem, CLAHE is used to enhance regional contrast, making the image more visually appealing and informative [40]. Figure 5 shows the resulting images for the experimented algorithms along with the finalized CLAHE (2.0, (8,8)) for our model.
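A minimal OpenCV sketch of this enhancement step is given below; the CLAHE parameters (clip limit 2.0, 8x8 tiles) and the use of the LAB(L) channel come from the text, whereas the final gamma value is not stated and is therefore an assumption:

```python
import cv2
import numpy as np

def enhance_fundus(img_bgr, clip=2.0, grid=(8, 8), gamma=1.2):
    """Apply CLAHE on the LAB L channel, then a power-law gamma
    correction (gamma=1.2 is an assumed value)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)),
                            cv2.COLOR_LAB2BGR)
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255
                    for i in range(256)]).astype('uint8')
    return cv2.LUT(enhanced, lut)
```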
Figure 4: Color spaces
A histogram comparison between the applied Contrast Limited Adaptive Histogram Equalization (CLAHE) preprocessing algorithm and a normal image in Figure 6 helps illustrate the effects of CLAHE on enhancing local contrast and improving image quality.
Figure 5: Gamma and CLAHE based quality enhancement
By comparing the histograms, we can observe the changes in pixel intensity distribution before and after applying CLAHE. In the normal image, the histogram exhibits a relatively uniform distribution of pixel intensities, with some variations depending on the content of the image. When CLAHE is implemented as a component of the preprocessing, it adapts the contrast enhancement locally, making it particularly effective in improving the contrast of regions with varying intensities. This helps reveal hidden details and textures that might have been obscured in the original image.
\begin{tabular}{|c|c|c|c|} \hline
**Images** & **MSE** & **PSNR** & **SSIM** \\ \hline
2376\_left.jpg & 2388.13 & 14.35 & 0.48 \\ \hline
84\_right.jpg & 2189.98 & 14.72 & 0.65 \\ \hline
980\_right.jpg & 927.54 & 18.45 & 0.53 \\ \hline
71\_left.jpg & 709.37 & 19.62 & 0.54 \\ \hline \end{tabular}
To demonstrate the effectiveness of the image quality preprocessing, Table 2 displays the readings of the metrics mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) [41].
Figure 6: Histogram comparisons
These metrics compare the preprocessed image to the original image to determine the level of distortion or similarity.
**Mean Squared Error (MSE):** MSE computes the mean squared disparity between the pixel values of the preprocessed and original images. Lower MSE readings indicate greater similarity between the images. MSE is calculated using the formula:
\[MSE=\frac{1}{mn}\sum_{x=0}^{m-1}\sum_{y=0}^{n-1}[I(x,y)-K(x,y)]^{2}\]
where \(m\times n\) represents the image dimensions, and \(I(x,y)\) and \(K(x,y)\) denote the pixel values of the preprocessed and original images at coordinates \((x,y)\).
**Peak Signal-to-Noise Ratio (PSNR):** The visual quality of preprocessed images is frequently assessed using the PSNR measure. It calculates the ratio of the peak signal power to the noise power, expressed in decibels (dB). Higher PSNR values indicate greater similarity between the images. PSNR is calculated using the formula:
\[PSNR=10\log_{10}\frac{MAX^{2}}{MSE}\]
where MAX is the maximum possible pixel value (for example, 255 in 8-bit images).
**Structural Similarity Index (SSIM):** SSIM evaluates the luminance, contrast, and structural similarities between the preprocessed image and the original image. SSIM readings vary from -1 to 1, with a value of 1 denoting complete similarity. Higher SSIM values indicate better similarity between the images. SSIM is calculated using a combination of the mean, variance, and covariance of the image patches.
\[SSIM=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}\]
where \(c_{1}\) and \(c_{2}\) are constants that prevent division by zero, and \(\mu\) and \(\sigma\) represent the mean and standard deviation, respectively.
By calculating MSE, PSNR, and SSIM before and after image preprocessing, the effectiveness of the preprocessing techniques in preserving image quality and reducing noise, artifacts, or other undesired effects can be determined. Lower MSE readings, higher PSNR readings, and higher SSIM values indicate better image quality preservation.
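These three metrics follow directly from the definitions above; the sketch assumes 8-bit grayscale inputs and uses scikit-image for SSIM:

```python
import numpy as np
from skimage.metrics import structural_similarity

def quality_metrics(original, processed):
    """Compute MSE, PSNR and SSIM for two 8-bit grayscale images."""
    orig = original.astype(np.float64)
    proc = processed.astype(np.float64)
    mse = np.mean((orig - proc) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float('inf')
    ssim = structural_similarity(original, processed, data_range=255)
    return mse, psnr, ssim
```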
### C. Comparison of some existing models
**Transfer Learning:**
In this research study, a handful of models were trained and evaluated. Some of these models are discussed in the following sections:
**i. ViTB16:** The ViTB16 model, also known as the Vision Transformer Base with a depth of 16 layers, is a deep learning architecture specifically designed for image classification tasks [42]. The input images were 224x224 pixels. The input image is divided into a grid of fixed-size patches, and each patch is linearly projected to obtain a lower-dimensional representation. The patch embeddings are augmented with positional encoding to provide the model with spatial information. A self-attention mechanism enables the model to capture relationships and dependencies between different patches: it calculates attention scores between all pairs of patches and applies weighted averaging to aggregate information. Layer normalization is applied after the self-attention mechanism to normalize the output and improve training stability. The SGD (stochastic gradient descent) optimizer was employed for training the model, with a learning rate of 0.0001.
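The front end described above can be sketched as follows; the dimensions correspond to the standard ViT-B/16 configuration, which is an assumption beyond the text:

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split a 224x224 image into 16x16 patches, project each patch
    linearly, and add a learned positional embedding."""
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))

    def forward(self, x):                            # x: (B, 3, 224, 224)
        z = self.proj(x).flatten(2).transpose(1, 2)  # (B, 196, 768)
        return z + self.pos                          # positional encoding
```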
**ii. DenseNet121 and DenseNet169:**
DenseNet [43] is a well-known deep-learning architecture characterized by dense connections between layers, which enable effective feature reuse and alleviate the vanishing gradient problem. DenseNet121 and DenseNet169 have 121 and 169 layers respectively, making DenseNet121 the shallower variant. DenseNet121 has fewer parameters than DenseNet169, which makes it more memory-efficient and faster to train; it performs well on various image classification tasks but may not capture features as fine-grained as deeper models do. DenseNet169 tends to perform better, especially when the dataset is larger and more complex. When choosing between DenseNet121 and DenseNet169 for a task such as AMD classification, the size and complexity of the dataset must be considered. Since the dataset used in this study was small, DenseNet121 would have been the natural choice, but we experimented with all DenseNet variants.
**iii. InceptionResNetV2:**
InceptionResNetV2 [44] is a powerful convolutional neural network that combines the Inception and ResNet modules. It was proposed as an extension of the original Inception and ResNet models, designed to improve efficiency on image classification tasks. The InceptionResNetV2 model was initialized with pre-trained ImageNet weights, excluding the top classification layers, and the pre-trained layers were frozen to prevent them from being updated during training. The model was optimized with the Adam optimizer and trained for 100 epochs.
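A minimal Keras sketch consistent with this description is shown below; the input size, global pooling, and four-class head are assumptions rather than the authors' released code:

```python
# Minimal sketch: frozen ImageNet-pretrained InceptionResNetV2 base with a
# new softmax classification head, optimized with Adam.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(256, 256, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained layers during training

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # assumed 4-class head
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```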
### D. AMDNet23
AMDNet23 combines convolutional neural networks (CNNs) with long short-term memory (LSTM) networks in a CNN-LSTM design, pairing the strength of CNNs in image feature extraction with the temporal modeling capabilities of LSTMs [45]. Before feeding the images to the model, we employed a diverse set of augmentation techniques to enhance the training data: randomized horizontal and vertical flipping, each with a probability of fifty percent; random brightness adjustments within a range of -0.1 to +0.1; random contrast adjustments with factors ranging from 0.8 to 1.2; random saturation adjustments within the same 0.8 to 1.2 bounds; and random hue adjustments to add subtle color variations. Lastly, we performed translation-based width and height shifting within a specific range on the input images. This augmentation strategy emphasizes data diversity and improves the model's ability to generalize to unseen data [46]. The model is designed with a depth of 23 layers.
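A minimal tf.image sketch of this augmentation policy is given below; the hue delta and the translation range are assumptions, since the text describes them only as "subtle" and "a specific range":

```python
# Minimal sketch: the stated augmentation policy (50% random flips,
# brightness +/-0.1, contrast and saturation factors in [0.8, 1.2],
# small random hue shifts) applied to a float image in [0, 1].
import tensorflow as tf

def augment(image: tf.Tensor) -> tf.Tensor:
    image = tf.image.random_flip_left_right(image)      # p = 0.5
    image = tf.image.random_flip_up_down(image)         # p = 0.5
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_saturation(image, 0.8, 1.2)
    image = tf.image.random_hue(image, max_delta=0.05)  # assumed delta
    return tf.clip_by_value(image, 0.0, 1.0)

# Width/height shifting can be added, e.g., with
# tf.keras.layers.RandomTranslation(height_factor=0.1, width_factor=0.1).
```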
**Input Layer:** The input to the AMDNet23 model is a collection of eye images captured from patients, represented as a tensor X with dimensions (N, W, H, C), where N is the number of eye images, W and H are the width and height of the images (the model received images of size 256 X 256), and C is the number of color channels. This tensor X is passed into the model's input layer.
**Convolutional Layers (CNN):** After the initial input layer, the CNN component of the model comprises multiple convolutional layers [47]. Each convolutional layer applies a set of learnable filters to its input. The feature map produced by the \(i^{th}\) convolutional layer is denoted \(F_{i}\), with \(i\) ranging from 1 to \(n\), and is computed as:
\[F_{i}=Conv2D(X,W_{i})+b_{i}\]
where Conv2D denotes the convolution operation, \(W_{i}\) represents the trainable weights of the \(i^{th}\) convolutional layer, and \(b_{i}\) the corresponding biases. The output feature maps \(F_{i}\) have spatial dimensions (W', H') and C' channels.
The model contains six convolutional blocks. The first four blocks consist of two convolutional layers and one batch normalization layer each, with 32, 64, 128, and 256 filters respectively and a kernel size of 3 X 3. The fifth and sixth blocks consist of three convolutional layers and one batch normalization layer each, with all convolutional layers using 512 filters.
**Pooling Layers:** After the convolutional layers, pooling layers [48] are used to downsample the feature maps. Let the output feature maps after pooling be denoted \(P_{i}\), where \(i\) ranges from 1 to \(p\) (the total number of pooling layers). Each pooling layer performs a downsampling operation on the input feature maps; after all pooling layers are applied, the resulting feature maps \(P_{p}\) have spatial dimensions (W", H") and C" channels. The pool size of the max-pooling layers was 2 X 2 in all convolutional blocks.
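One such convolutional block can be sketched in Keras as follows; the "same" padding is an assumption, and the 20% dropout follows the architecture summary given later:

```python
# Minimal sketch: one AMDNet23-style convolutional block with two (or three)
# 3x3 ReLU convolutions, one batch-normalization layer, 2x2 max pooling,
# and dropout.
from tensorflow.keras import layers

def conv_block(x, filters: int, n_convs: int = 2):
    for _ in range(n_convs):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)   # one BN layer per block
    x = layers.MaxPooling2D((2, 2))(x)   # halves the spatial dimensions
    return layers.Dropout(0.2)(x)
```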
**LSTM Layer:** Long short-term memory (LSTM) networks are an advancement over conventional recurrent neural networks (RNNs), created specifically to address the vanishing gradient problem that formerly hindered the effective training of RNNs on long sequences [49]. LSTMs introduce memory cells that enable the network to retain information over extended periods, empowering them to capture long-term dependencies within the input data. The cell state provides a long-term memory that flows through the entire sequence [50]; information is selectively retained or discarded by the three main gates: the input gate, the forget gate, and the output gate. The LSTM cell computations can be represented mathematically as follows, where \(t\) denotes the current time step, \(x_{t}\) the input at time \(t\), \(h_{t-1}\) the previous hidden state, and \(C_{t}\) the cell state:
\[i_{t}=\sigma(W_{i}[x_{t},h_{t-1}]+b_{i})\ldots(1)\]
\[\tilde{C}_{t}=\tanh(W_{c}[x_{t},h_{t-1}]+b_{c})\ldots(2)\]
\[C_{t}=f_{t}\odot C_{t-1}+i_{t}\odot\tilde{C}_{t}\ldots(3)\]
The input gate (1) combines the previous hidden state \(h_{t-1}\) and the current input \(x_{t}\) through a sigmoid function, deciding what proportion of information is incorporated into the cell state, while (2) obtains the candidate information \(\tilde{C}_{t}\) through a \(\tanh\) layer. In (3), the long-term information \(C_{t-1}\), scaled by the forget gate, and the candidate state \(\tilde{C}_{t}\), scaled by the input gate, are combined into the new cell state \(C_{t}\). The forget gate (4) determines how much of the previous cell state should be retained and carried over to the next time step, where \(W_{f}\) and \(b_{f}\) denote the weight matrix and bias respectively.
\[f_{t}=\sigma(W_{f}[x_{t},h_{t-1}]+b_{f})\ldots(4)\]
The output gate (5) likewise combines the inputs \(h_{t-1}\) and \(x_{t}\) through a sigmoid activation to determine what portion of the cell state appears as the output of the LSTM unit at time step \(t\), as given in (6).
\[o_{t}=\sigma(W_{o}[x_{t},h_{t-1}]+b_{o})\ldots(5)\]
\[h_{t}=o_{t}\odot\tanh(C_{t})\ldots(6)\]
In the above equations, \(W_{o}\) denotes the weight matrix of the output gate and \(b_{o}\) the corresponding LSTM bias.
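Equations (1)-(6) can be traced in a minimal NumPy sketch of a single LSTM step; the weight matrices act on the concatenated \([x_{t},h_{t-1}]\) vector and are assumed to be already initialized:

```python
# Minimal sketch: one LSTM time step implementing equations (1)-(6).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_i, b_i, W_c, b_c, W_f, b_f, W_o, b_o):
    z = np.concatenate([x_t, h_prev])     # joint input [x_t, h_{t-1}]
    i_t = sigmoid(W_i @ z + b_i)          # input gate, eq. (1)
    c_tilde = np.tanh(W_c @ z + b_c)      # candidate cell state, eq. (2)
    f_t = sigmoid(W_f @ z + b_f)          # forget gate, eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde    # new cell state, eq. (3)
    o_t = sigmoid(W_o @ z + b_o)          # output gate, eq. (5)
    h_t = o_t * np.tanh(c_t)              # new hidden state, eq. (6)
    return h_t, c_t
```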
**Output Layer:** The output layer delivers the final prediction regarding the presence or absence of AMD in the input eye images. The input tensor to the output layer, \(H_{out}\), is obtained by reshaping \(H_{lstm}\) to dimensions (NT, D). A dense layer followed by a softmax activation transforms \(H_{out}\) into the output tensor Y of dimensions (NT, K), where K is the number of output classes.
The AMDNet23 model (Figure 7) for AMD ocular disease detection leverages the complementary strengths of CNNs in spatial feature extraction and LSTMs in modeling sequential dependencies.
\begin{tabular}{l l l l l} \hline \hline
**Layer** & **Type** & **Kernel Size** & **Kernel** & **Input Size** \\ \hline
1 & Convolution2D & 3 X 3 & 32 & 256 X 256 X 3 \\
2 & Convolution2D & 3 X 3 & 32 & 256 X 256 X 32 \\
... & MaxPooling2D & - & - & 8 X 8 X 512 \\
... & LSTM & - & - & 16 X 512 \\ \hline \hline \end{tabular}

Figure 7: CNN-LSTM system
In this research, an innovative and novel technique was devised to automatically detect AMD by leveraging four distinct types of fundus images. This architecture synergizes convolutional neural networks and long short-term memory: the CNN module extracts intricate features from the fundus images, and the LSTM module serves as the classifier. The proposed AMDNet23 hybrid network consists of 23 layers: 14 convolutional layers, 6 pooling layers, one fully connected (FC) layer, one LSTM layer, and a single output layer with softmax activation. In our construction, an individual convolutional block comprises two or three 2-dimensional convolutional layers, one pooling layer, and one dropout layer with a 20% dropout rate. Feature extraction is carried out efficiently by convolutional layers with 3x3 kernels and the ReLU function, and the input is reduced in dimension by max-pooling layers with \(2\times 2\) kernels. The resulting output shape is (None, 4, 4, 512), and the input to the LSTM layer becomes (16, 512) after reshaping. Combining these two neural network architectures allows the model to effectively analyze eye images, capturing both local spatial patterns and temporal relationships, ultimately enabling accurate AMD diagnosis. The summarized architecture is presented in Table 2.
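Putting the pieces together, a minimal Keras sketch of this topology, reusing the conv_block helper sketched earlier, could read as follows; the LSTM width and FC size are assumptions, as the text does not state them:

```python
# Minimal sketch: the 23-layer AMDNet23 topology (14 convolutions, 6
# max-pooling layers, one LSTM, one FC layer, softmax output).
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(256, 256, 3))
x = inputs
for filters, n in [(32, 2), (64, 2), (128, 2), (256, 2), (512, 3), (512, 3)]:
    x = conv_block(x, filters, n_convs=n)    # final feature map: (4, 4, 512)
x = layers.Reshape((16, 512))(x)             # 4x4 spatial grid -> 16 LSTM steps
x = layers.LSTM(128)(x)                      # assumed width
x = layers.Dense(128, activation="relu")(x)  # fully connected layer
outputs = layers.Dense(4, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```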
### Evaluation Criteria
In this study, a total of 13 models were trained and evaluated, and their results are presented in this section. Based on the following evaluation criteria, the performance, reliability, and clinical relevance of an AMD detection system can be assessed, along with its suitability for assisting medical professionals in accurately detecting and diagnosing AMD.
**Accuracy**: The accuracy of the AMD detection system in correctly classifying images as AMD, diabetes, or cataract is a crucial evaluation criterion; it measures the overall correctness of the system's predictions.
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\]
**Precision:** Precision measures the proportion of correctly identified AMD cases among all predicted AMD cases.
\[Precision=\frac{TP}{TP+FP}\]
**Sensitivity and Specificity**: Sensitivity, also called the true positive rate, gauges the system's ability to identify AMD cases correctly. Specificity, often referred to as the true negative rate, assesses its capacity to identify non-AMD conditions. Both metrics provide insights into the system's performance on different classes and help assess its ability to avoid false positives and false negatives.
\[Sensitivity=\frac{TP}{TP+FN}\]
\[Specificity=\frac{TN}{TN+FP}\]
**F1 Score**: The F1 score is the harmonic mean of precision and recall, offering a single measurement that balances the two. It is advantageous in situations where there is class imbalance or where the costs of false positives and false negatives differ.
\[F1Score=\frac{TP}{TP+\frac{1}{2}\left(FP+FN\right)}\]
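These criteria can be computed from model predictions in a few lines, as in the following scikit-learn sketch; since scikit-learn has no built-in specificity score, it is derived per class from the confusion matrix, and the weighted-averaging choice is an assumption:

```python
# Minimal sketch: accuracy, precision, sensitivity, specificity, and F1
# for a multi-class classifier.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

def evaluate(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
    fp = cm.sum(axis=0) - np.diag(cm)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="weighted"),
        "sensitivity": recall_score(y_true, y_pred, average="weighted"),
        "specificity": float(np.mean(tn / (tn + fp))),  # macro average
        "f1": f1_score(y_true, y_pred, average="weighted"),
    }
```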
When evaluating model performance, it is crucial to consider a combination of these evaluation criteria in accordance with the precise requirements of the task and the domain. Selecting appropriate metrics and interpreting the results correctly help determine the effectiveness and suitability of the model. Table 2 reports these evaluation metrics:
\begin{tabular}{c c c c c c} \hline \hline
**Model** & **Accuracy** & **Precision** & **Sensitivity** & **Specificity** & **F1 Score** \\ \hline ViTB16 & 95.25\% & 95.26\% & 95.25\% & 98.66\% & 95.24\% \\ MobileViT\_XXS & 83.00\% & 82.58\% & 83.00\% & 98.28\% & 82.41\% \\ InceptionResNetV2 & 47.50\% & 36.26\% & 47.50\% & 82.67\% & 40.50\% \\ EfficientNetB7 & 92.75\% & 92.94\% & 92.75\% & 98.97\% & 92.62\% \\ EfficientNetB6 & 92.75\% & 92.74\% & 92.75\% & 97.00\% & 92.69\% \\ DenseNet121 & 82.25\% & 82.41\% & 82.25\% & 95.00\% & 81.46\% \\ DenseNet169 & 81.25\% & 81.24\% & 81.25\% & 92.74\% & 80.31\% \\ DenseNet201 & 84.75\% & 84.45\% & 84.75\% & 93.46\% & 84.52\% \\ InceptionV3 & 72.25\% & 71.53\% & 72.25\% & 90.16\% & 70.71\% \\ MobileNetV2 & 71.75\% & 71.68\% & 71.75\% & 88.01\% & 71.19\% \\ VGG16 & 89.75\% & 89.82\% & 89.75\% & 94.68\% & 89.78\% \\ \hline \hline \end{tabular}
In conclusion, the AMDNet23 model for AMD ocular disease detection demonstrated strong performance in accurately identifying AMD from eye images. Its high accuracy, precision, and recall values, along with the robust AUC-ROC score, validate its potential as a reliable tool for early detection and intervention. The model's efficiency makes it suitable for practical deployment in healthcare settings, contributing to improved patient care and timely treatment of AMD retinal disease.
### Results & Discussion
In this section, the findings of the proposed method are presented, together with a comparison against several cutting-edge studies. The collected data were divided into training and testing sets to construct and examine the proposed system: the model was trained on 80% of the data and evaluated on the remaining 20%. To ensure the best performance, several parameter settings were explored; the setting that yielded the most advantageous results for the proposed model is given below.
Figures 5 and 6 show the accuracy and loss curves for both the training and test sets. These plots of epoch versus accuracy and epoch versus loss provide valuable insight for monitoring and understanding the training progress of the proposed method.
The accuracy graph shows how the model's accuracy varies as the number of training epochs increases: the vertical axis shows the model's accuracy on the training or test set, and the horizontal axis the number of epochs. Figure 5 reveals that the model's accuracy rises over training as it learns from the training data. At first, the accuracy improves with each epoch, suggesting that the model benefits from additional training; eventually the accuracy plateaus, indicating that the model has converged and further training would not significantly improve accuracy.
The epoch-versus-loss graph shows the relationship between the number of training epochs and the loss, i.e., the disparity between the model's predicted output and the target output. The loss value is plotted on the y-axis, and the number of epochs on the x-axis. In Figure 6, the loss is initially high while the model makes essentially random predictions; as training progresses, the loss decreases, reflecting the model's improved performance and its ability to make more accurate predictions.
Figure X presents the confusion matrix of the AMDNet23 model in its best configuration, which was determined on the basis of the ablation study, the Adam optimizer, and the learning rate, and achieves the highest accuracy.
The rows correspond to the true labels of the images, while the columns give the model's predictions; the diagonal values indicate correct predictions (TP). The model achieved its best results for AMD: 98 of the 100 AMD images were diagnosed correctly, while two were misclassified as cataract and diabetes. Of the 100 cataract images, 99 were classified correctly and one was misclassified as AMD. Of the 100 diabetes images, 93 were correctly assigned, while seven were misidentified, three as AMD and four as the healthy class. Finally, of the 100 normal images, 96 were correctly identified, with one misclassified as AMD and three as diabetes.
### State-of-the-art work comparison
Table 3 concisely summarizes the main approaches in the existing literature for diagnosing AMD, together with our proposed method. These approaches primarily involve conventional methods and deep learning algorithms that use retinal images for diagnosis.
\begin{tabular}{c c c c c} \hline \hline
**Author** & **Year** & **Method** & **No. of images** & **Accuracy** \\ \hline
TK Yoo et al. [51] & 2018 & VGG19-RF & 3000 & 95\% accuracy \\
Huiying Liu et al. [52] & 2019 & DeepAMD & 4725 & 70\% accuracy \\ \hline \hline \end{tabular}
The AMDNet23 model proposed in this study achieves a high accuracy of 96.5%, surpassing other state-of-the-art works currently available. It can therefore be concluded that the proposed method is effective for early-stage detection and diagnosis of AMD; the novel method also diagnoses cataracts and diabetic retinopathy from fundus ophthalmology datasets, demonstrating superior accuracy.
**Comparison of the AMDNet23 model with the transfer learning models:**
The results demonstrate the effectiveness and potential of the hybrid AMDNet23 network for precisely detecting AMD eye disease from images. The excellent precision, accuracy, recall, and F1-score obtained demonstrate the model's capacity to detect AMD instances precisely. The combination of CNNs and LSTMs allows the extraction of both spatial and temporal features, capturing the subtle patterns and changes associated with AMD. Figure 5 compares the performance of our proposed model with several pre-trained models.
**Conclusion**
In essence, this study proposed the AMDNet23 model for detecting and diagnosing AMD using several image datasets. The model achieved a high accuracy of 96.5%, surpassing other state-of-the-art works in the field. Furthermore, when compared with pre-trained models, the novel deep AMDNet23 method showed superior accuracy for AMD detection, and the system can also diagnose cataracts and diabetic retinopathy. In the future, incorporating additional modalities or features could further enhance the performance of AMD detection models: combining fundus images with other clinical data, such as patient demographics, health records, or genetic information, may improve accuracy and enable a more comprehensive understanding of the disease. Overall, the findings of this research clearly demonstrate the effectiveness of the proposed AMDNet23 model in accurately detecting and diagnosing AMD cases. This model holds promise for early detection and diagnosis of AMD ocular disease, which could assist clinicians and aid in timely intervention and treatment for affected individuals.
|
2303.04485 | Onsets and Velocities: Affordable Real-Time Piano Transcription Using
Convolutional Neural Networks | Polyphonic Piano Transcription has recently experienced substantial progress,
driven by the use of sophisticated Deep Learning approaches and the
introduction of new subtasks such as note onset, offset, velocity and pedal
detection. This progress was coupled with an increased complexity and size of
the proposed models, typically relying on non-realtime components and
high-resolution data. In this work we focus on onset and velocity detection,
showing that a substantially smaller and simpler convolutional approach, using
lower temporal resolution (24ms), is still competitive: our proposed
ONSETS&VELOCITIES model achieves state-of-the-art performance on the MAESTRO
dataset for onset detection (F1=96.78%) and sets a good novel baseline for
onset+velocity (F1=94.50%), while having ~3.1M parameters and maintaining
real-time capabilities on modest commodity hardware. We provide open-source
code to reproduce our results and a real-time demo with a pretrained model. | Andres Fernandez | 2023-03-08T10:17:27Z | http://arxiv.org/abs/2303.04485v2 | # Onsets and Velocities: Affordable Real-Time Piano Transcription Using Convolutional Neural Networks
###### Abstract
Polyphonic Piano Transcription has recently experienced substantial progress, driven by the use of sophisticated Deep Learning approaches and the introduction of new subtasks such as note onset, offset, velocity and pedal detection. This progress was coupled with an increased complexity and size of the proposed models, typically relying on non-realtime components and high-resolution data. In this work we focus on _onset_ and _velocity_ detection, showing that a substantially smaller and simpler convolutional approach, using lower temporal resolution (24ms), is still competitive: our proposed Onsets&Velocities (O&V) model achieves state-of-the-art performance on the MAESTRO dataset for onset detection (F\({}_{1}\)=96.78%) and sets a good novel baseline for onset+velocity (F\({}_{1}\)=94.50%), while having \(\sim\)3.1M parameters and maintaining real-time capabilities on modest commodity hardware. We provide open-source code to reproduce our results and a real-time demo with a pretrained model 1.
deep learning, polyphonic piano transcription
Footnote 1: Setup [https://github.com/andres-fr/aimusica_training](https://github.com/andres-fr/aimusica_training)
## I Introduction
### _Polyphonic Piano Transcription_
The task of _Polyphonic Piano Transcription_ (PPT) is useful for downstream tasks like musical analysis and resynthesis. Consider an audio _waveform_ \(x(t)\in\mathbb{R}^{T}\) of duration \(T\) that corresponds to a piano performance of a _score_ \(\mathcal{S}\); then the task of PPT is to recover \(\mathcal{S}\) from \(x\). Here, \(\mathcal{S}\) is a collection of \(N\) _note events_ \(\{\mathcal{N}_{n}:=(k_{n},v_{n},\downarrow_{n},\uparrow_{n})\}_{n=1}^{N}\), where \(k\in\{1,\ldots,K\}\) specifies the _key_ (typically \(K=88\)). The value \(v\in[0,1]\) indicates the intensity of the event (also called key _velocity_). The key _onset_ (pressing) and _offset_ (releasing) timestamps are specified by \(\downarrow\) and \(\uparrow\), respectively, where \(0\leq\downarrow_{n}<\uparrow_{n}\leq T\)\(\forall n\).
There has been extensive effort in automating PPT, typically articulated through challenges like the popular _Music Information Retrieval Evaluation eXchange_ (MIREX) [1] and featuring different techniques like handcrafted features, spectrogram factorization, probabilistic models [2, 3] and, more recently, _Deep Learning_ (DL) [4]. PPT is typically evaluated by comparing the recovered score \(\hat{\mathcal{S}}\) with the ground truth \(\mathcal{S}\) on a test set, in an event-wise manner. Prominent efforts in curating datasets like MAPS [5, 6], SMD [7] and MusicNet [8] were affected by imprecise annotations, insufficient training data, unrealistic interpretations and/or constrained recording conditions, which made evaluation more difficult and impeded the establishment of a unified benchmark for PPT [9]. The introduction of the MAESTRO dataset [9] addressed many of these issues, by providing \(\sim\)200 hours of precisely annotated, high-quality audio data, encompassing a large variety of virtuosic compositions, pianists and recording conditions, and incorporating evaluation splits. As a result, it quickly became a popular benchmark. Still, all pianos in MAESTRO are fairly similar: to capture more general settings and satisfy the ever-growing demand for training data, [10] curated the GiantMIDI dataset, by sourcing over 1000 hours of piano music from YouTube and annotating them using DL [11].
### _The State of the Art in PPT_
One influential effort applying DL to PPT was [12]. Their work cemented the following main trends:
_Spectrograms:_ Despite promising efforts to use the \(x(t)\) waveforms as DL inputs [13], time-frequency representa-
Fig. 1: Log-mel spectrogram (\(X\)) of a virtuosic excerpt from the MAESTRO test set, followed by the corresponding velocity (\(\hat{\mathcal{R}}_{V}\)) and last-stage onset (\(\hat{\mathcal{R}}_{\downarrow}^{(3)}\)) predictions, as well as the ground truth _piano roll_ \(\mathds{1}_{3\downarrow}\) (see Section II for details).
tions like the spectrogram [14, ch.19] remain competitive for PPT and discriminative audio tasks in general [11, 15].
_Piano roll supervision:_ Consider an alternative representation of \(\mathcal{S}\), called _piano roll_ \(\mathcal{R}\in[0,1]^{K\times T}\) (see Figure 1), where entries \(\mathcal{R}(k,t)\) encode the activity of channel \(k\) at time \(t\) (zero if inactive). This type of supervision consists in training the model to output a piano roll \(\hat{\mathcal{R}}\) that predicts some ground truth \(\mathcal{R}\), by minimizing the binary cross-entropy loss: \(l_{BCE}(\mathcal{R},\hat{\mathcal{R}})\!=\!\langle\mathcal{R},-log(\hat{\mathcal{R}})\rangle\!+\!\langle(1\!-\!\mathcal{R}),-log(1\!-\!\hat{\mathcal{R}})\rangle\). Often, the ground truth is binarized and we have \(\mathds{1}\in\{0,1\}^{K\times T}\) instead of \(\mathcal{R}\). This approach requires _decoding_ the predicted piano roll \(\hat{\mathcal{R}}\) to obtain the event-based representation \(\hat{\mathcal{S}}=dec_{H}(\hat{\mathcal{R}})\), typically by using a heuristic \(H\), e.g. grouping consecutive active frames into single notes.
_Computer Vision:_ PPT can be tackled effectively by treating spectrograms and piano rolls as images, and models like Convolutional Neural Networks (CNNs) work well with minor adaptations.
A major turning point was Onsets&Frames (O&F) [16], which uses a sub-network to first predict a piano roll \(\mathcal{\hat{R}}_{\downarrow}\) encoding the probability of an _onset_ (trained with a mask \(\mathds{1}_{\downarrow}\) that is active only when a key is pressed), and then uses another subnetwork to predict the _frames_\(\mathcal{\hat{R}}_{\mathcal{N}}\) conditioned on \(\mathcal{\hat{R}}_{\downarrow}\) (trained with a mask \(\mathds{1}_{\mathcal{N}}\), active for the whole duration of each note). O&F is then trained jointly via a multi-task loss \(l_{BCE}(\mathds{1}_{\downarrow},\mathcal{\hat{R}}_{\downarrow})+l_{BCE}( \mathds{1}_{\mathcal{N}},\mathcal{\hat{R}}_{\mathcal{N}})\). O&F achieved a steep improvement in all PPT benchmarks, and also introduced a novel subtask, _note velocity_, modelled with a third sub-network that predicts a velocity piano roll \(\mathcal{\hat{R}}_{V}\) trained via masked \(\ell_{2}\)-norm loss \(l_{V}=\langle\mathds{1}_{\downarrow},(\mathcal{R}_{V}-\mathcal{\hat{R}}_{V})^{ 2}\rangle\). Due to its unprecedented effectiveness and versatility, O&F became a popular baseline [17, 18], but this was at the expense of increased complexity, including more elaborate decoder heuristics, a larger model, and the incorporation of bi-RNN layers [19, 20], which preclude real-time applications.
More recently, [11] pointed out issues with temporal precision on piano rolls, incorporating a trainable Regression model to enhance precision. They further expanded the model including sustain pedal detection capabilities. In [21], an _off-the-shelf_ Transformer[22] setup was used to produce \(\mathcal{\hat{S}}\) directly from spectrograms in an end-to-end fashion. Apart from their good performance, both systems have in common their substantial size, increased time resolution and replacing decoder heuristics with an end-to-end differentiable solution, suggesting that decoding is a performance bottleneck.
### _Proposed Contribution for PPT_
These state-of-the-art improvements in performance came entangled with increased complexity in the form of larger models, additional components and new sub-tasks [18]. Understanding and disentangling this complexity is an active field of research: Alternative PPT sub-task factorizations that do not rely on O&F were proposed, like nonlinear denoising vs. linear demixing [23], sound source vs. note arrangement [24] and ADSR envelopes [25]. General approaches like using invertible neural networks [26], reconstruction tasks [27] and additive attention [18] were also explored.
In this work, we pursue the orthogonal goal of achieving real-time capabilities. For that, we observe that the masked loss \(l_{V}\) imposes time-locality around the onsets, and follow up on several ideas: the importance of the onsets [16] as well as decoder heuristics [18], and the idea that note velocity is naturally associated with the onset [11]. We propose that _a convolutional end-to-end method for onsets and velocities leads to efficiency gains and affordable real-time capabilities without compromising performance_, and that _efficient decoding heuristics replace the need for high temporal resolution and complex inference schemes_.
We present Onsets&Velocities (O&V), featuring:
1. State-of-the-art performance for _onset_ detection and a good baseline for _onsets_+_velocities_ on MAESTRO.
2. A substantially reduced CNN (no recurrent layers) based on piano rolls at 24ms resolution, enabling affordable real-time inference on modest hardware.
3. A straightforward decoding mechanism, enabling a multi-task training scheme without any data augmentations or extensions.
In Section II we present our O&V method. Section III presents experiments substantiating our claims. We also provide a PyTorch [28] open-source implementation with a real-time demo. Section IV concludes and proposes future work.
## II Methodology
### _Model_
Given a waveform \(x(t)\in\mathbb{R}^{T}\) at 16kHz, we compute its Short-Term Fourier Transform (STFT) [14] with a Hann window of size 2048, and a hop size \(\delta\)=384 (i.e. a time resolution of \(\Delta_{t}\)=24ms). We then map it to 229 mel-frequency bins [29] in the 50Hz-8000Hz range, and take the logarithm, yielding our input representation: a _log-mel spectrogram_\(X(f,t^{\prime})\in\mathbb{R}^{229\times T^{\prime}}\), where \(T^{\prime}=\frac{T}{\delta}\) is the resulting "compact" time domain (see Figure 1). We also compute the first time-derivative \(\hat{X}(f,t^{\prime}):=X(f,t^{\prime})-X(f,t^{\prime}-1)\) and concatenate it to \(X\), forming the CNN input. Using the same \(\Delta_{t}\), we time-quantize the MIDI annotations into a piano roll \(\mathcal{R}_{V}\in[0,1]^{88\times T^{\prime}}\), where \(\mathcal{R}_{V}(k_{n},t^{\prime}_{n})\) contains the velocity if key \(k_{n}\) was pressed at time \(\Delta_{t}t^{\prime}_{n}\pm\frac{\Delta t}{2}\), and zero otherwise. We further binarize \(\mathcal{R}_{V}\), yielding \(\mathds{1}_{\downarrow}\).
The complete CNN is presented in Figure 2. We highlight the following design principles:
_No recurrent layers:_ Motivated by [30, 31], we follow the established CNN design of convolutional stem and body, followed by a fully connected head, making use of residual bottlenecks [32].
_No pooling:_ Motivated by [33], all residual bottlenecks maintain activation shape, and conversion from input to output shape is done in a single depthwise convolution layer [34], shown to be efficient and effective [35]. Note that the convolutions in the input domain (spectrogram) have vertical
dimensions but the convolutions in the output domain (piano roll) do not, since we assume that neighbouring frequencies are related but neighbouring piano keys aren't.
_Multi-stage:_ Inspired by OpenPose [36], O&V features a series of residual stages that sequentially refine and produce the output. This is useful for real-time applications, since stages can be easily removed without need for retraining.
_Temporal context:_ At its core, O&V features the Context-Aware Module (CAM) [37], which is a residual bottleneck that combines time-dilated convolutions and channel attention [38]. Inspired by TCNs [39] and Inception [40], we aim to capture the temporal vicinity of an onset efficiently.
_Model regularizers:_ At the input and before each output, O&V features _Sub-Spectral Batch Normalization_ (SBN), i.e. one individual BN per vertical dimension [41, 42]. We add dropout [43, 44] after the parameter-heavy layers. We use leaky ReLUs [45, 46] as nonlinearities.
_Time locality:_ The time-derivative \(\hat{X}\) is a handcrafted input feature that directly represents intensity variations.
During inference, O&V produces one piano roll per onset stage \((\hat{\mathcal{R}}_{\downarrow}^{(1)},\hat{\mathcal{R}}_{\downarrow}^{(2)},\hat{\mathcal{R}}_{\downarrow}^{(3)})\) and one velocity piano roll \(\hat{\mathcal{R}}_{V}\) (see Figure 2(d)). Then, our proposed decoder \(\hat{\mathcal{S}}:=dec_{\sigma,\rho,\mu}(\hat{\mathcal{R}}_{\downarrow}^{(3)},\hat{\mathcal{R}}_{V})\) follows a simple heuristic: temporal Gaussian smoothing (_smooth_) with variance \(\sigma^{2}\), followed by non-maximum suppression (_nms_), thresholding \(\rho\) and shifting \(\mu\), yielding the predicted score \(\hat{\mathcal{S}}\) with note onsets and velocities:
\[\begin{split}\hat{\downarrow}_{\rho}&:=\{(k,t^{\prime}):\ nms\big{(}smooth_{\sigma}(\hat{\mathcal{R}}_{\downarrow}^{(3)})\big{)}(k,t^{\prime})\geq\rho\}\\ \hat{\mathcal{S}}&:=\{(k_{n},\ \hat{\mathcal{R}}_{V}(k_{n},t^{\prime}_{n}),\ \Delta_{t}t^{\prime}_{n}+\mu):\ (k_{n},t^{\prime}_{n})\in\hat{\downarrow}_{\rho}\}\end{split} \tag{1}\]
The _nms_ operation consists in zeroing out any \((k,t)\) entry that is strictly smaller than \((k,t+1)\) or \((k,t-1)\). The note events are read at the resulting locations and shifted by a global constant \(\mu\). In this work, we use the values \(\sigma\!=\!1,\mu\!=\!-0.01s,\rho\!=\!0.74\), obtained via cross-validation of the trained CNN on a subset of the MAESTRO validation split (note that this is different from the test split used for evaluation). While the optimal \(\rho\) fluctuates during training, we found \(\sigma\!=\!1,~{}\mu\!=\!-0.01s\) to be stable.
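A minimal NumPy/SciPy sketch of this decoder could read as follows; array layout and names are illustrative:

```python
# Minimal sketch: decode note (key, velocity, onset-time) events from the
# last onset stage and the velocity roll, per Eq. (1).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def decode(R_onset, R_vel, sigma=1.0, rho=0.74, mu=-0.01, dt=0.024):
    """R_onset, R_vel: piano rolls of shape (88, T')."""
    S = gaussian_filter1d(R_onset, sigma=sigma, axis=1)  # temporal smoothing
    left = np.pad(S[:, :-1], ((0, 0), (1, 0)))           # S(k, t-1)
    right = np.pad(S[:, 1:], ((0, 0), (0, 1)))           # S(k, t+1)
    peaks = (S >= left) & (S >= right) & (S >= rho)      # NMS + threshold
    keys, frames = np.nonzero(peaks)
    return [(k, float(R_vel[k, t]), dt * t + mu)         # (key, velocity, onset)
            for k, t in zip(keys, frames)]
```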
### _Training_
We train our CNN to predict onset probability and velocity jointly via minimization of the following multi-task loss:
\[\begin{split}& l_{\downarrow V}\big{(}\mathds{1}_{3\downarrow},(\hat{\mathcal{R}}_{\downarrow}^{(1)},\hat{\mathcal{R}}_{\downarrow}^{(2)},\hat{\mathcal{R}}_{\downarrow}^{(3)}),\mathcal{R}_{V_{3}},\hat{\mathcal{R}}_{V}\big{)}:=\\ &\quad\sum_{i=1}^{3}l_{BCE}(\mathds{1}_{3\downarrow},\hat{\mathcal{R}}_{\downarrow}^{(i)})+\lambda\cdot l_{V^{\prime}}(\mathcal{R}_{V_{3}},\hat{\mathcal{R}}_{V})\text{, where}\\ & l_{V^{\prime}}(\mathcal{R}_{V_{3}},\hat{\mathcal{R}}_{V}):=\\ &\quad\big{\langle}\mathds{1}_{3\downarrow},\ \mathcal{R}_{V_{3}}\cdot\big{(}\!-\!\log(\hat{\mathcal{R}}_{V})\big{)}+(1\!-\!\mathcal{R}_{V_{3}})\cdot\big{(}\!-\!\log(1\!-\!\hat{\mathcal{R}}_{V})\big{)}\big{\rangle}\end{split}\]
The \(\mathds{1}_{3\downarrow}\) and \(\mathcal{R}_{V_{3}}\) rolls are a straightforward modification of \(\mathds{1}_{\downarrow}\) and \(\mathcal{R}_{V}\), where each active frame at \((k,t)\) is also extended into \(t+1\) and \(t+2\) (i.e. note onsets span 3 frames instead of one). This simple extension was crucial to achieve target performance, and combined with our decoder, allowed
Fig. 2: Our proposed CNN. Rank-4 tensor dimensions are Batch\(\times\)Channel\(\times\)Height\(\times\)Width. Design principles, interfaces and loss functions are described in Section II.
to bypass the need for elaborate decoding schemes as the ones discussed in [11]. The masked loss \(l_{V^{\prime}}\) is a cross-entropy variant of the previously mentioned \(l_{V}\), introduced in [16] and [11], that encourages to predict the right velocity only in the vicinity of onsets.
All model weights are initialized with the Gaussian-He distribution [47], and biases with 0, except the CAM channel attention biases (right before the sigmoid), which are initialized with 1 to promote signal flow. We use the Adam optimizer with a decoupled weight decay [48, 49] of 3\(\times\)10\({}^{-4}\), trained with random batches of 5-second segments (batch size 40, \(\sim\)14k batches per epoch) for \(\sim\)70K batches. For the learning rate, we start with a ramp-up from 0 to 0.008 across 500 batches, followed by cosine annealing with warm restarts [50], using cycles of 1000 batches and decaying by 97.5% after each cycle. BN/SBN momentum is 95%, dropout 15% and leaky ReLUs have a slope of 0.1. In \(l_{V^{\prime}}\), we use \(\lambda\!=\!10\). To compensate for the sparsity of \(\mathds{1}_{3\downarrow}\), we give positive entries a weight 8 times larger than negative entries inside \(l_{BCE}(\mathds{1}_{3\downarrow},\cdot)\). Training speed was 1800 batches per hour on a 2080Ti NVIDIA GPU.
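A minimal PyTorch sketch of this multi-task loss could look as follows; the reduction and the numerical epsilon are assumptions, and all piano rolls are assumed to hold post-sigmoid probabilities of shape (batch, 88, T'):

```python
# Minimal sketch: per-stage weighted BCE on the 3-frame onset roll (positives
# weighted 8x) plus the masked velocity cross-entropy, scaled by lambda=10.
import torch
import torch.nn.functional as F

def ov_loss(onset_stages, vel_pred, onset_target, vel_target,
            lam=10.0, pos_weight=8.0, eps=1e-8):
    w = torch.where(onset_target > 0,
                    torch.full_like(onset_target, pos_weight),
                    torch.ones_like(onset_target))
    l_onset = sum(F.binary_cross_entropy(stage, onset_target,
                                         weight=w, reduction="sum")
                  for stage in onset_stages)
    l_vel = (onset_target * (-vel_target * torch.log(vel_pred + eps)
                             - (1 - vel_target) * torch.log(1 - vel_pred + eps))
             ).sum()
    return l_onset + lam * l_vel
```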
### _Evaluation_
Following the same evaluation procedure as O&F [16], Regression[11] and Transformer[21], and applying standard metrics from [51] implemented in the mir_eval library [52], we report precision (P), recall (R) and F\({}_{1}\)-score for the predicted _onsets_, considered correct if they are within 50ms of the ground truth. The _onset+velocity_ evaluation, following O&F [16, 3.1], has an added constraint: the predicted velocity must also be within 0.1 of the ground truth normalized between 0 and 1.
Note that the MAESTRO dataset is being actively extended and curated, presenting 3 versions so far. We report the respective versions in Table I, noting that versions 2 and 3 are almost identical, although comparisons across versions should be taken approximately.
## III Experiments and Discussion
We trained O&V on the MAESTRO v3 training split without any extensions or augmentations, achieving state-of-the-art performance in onset detection (see Table I). In the following we discuss some implications:
Temporal resolutionOur results seem to counter the need for increased temporal resolution expressed in [11] (which use 8ms), showing that 24ms piano rolls coupled with our decoder presented in Equation (1) are competitive.
_Reduced memory footprint:_ Table I reports model parameters for the components that are responsible exclusively for onset and velocity detection (Transformer has \(\sim\)54M parameters in total, but it transcribes everything jointly so it cannot be fairly compared). O&V outperforms the best alternative, Regression, with \(\sim\)4 times fewer parameters.
_Affordable real-time inference:_ Bi-recurrent layers like the ones used in O&F and Regression are unsuited for real-time processing. Transformer took \(\sim\)380s to transcribe a 120s file on an Intel Xeon CPU (1 core), and \(\sim\)20s on a Tesla-T4 GPU (including offsets) when run on the official Colab implementation2. O&V took less than 2s to process the same file on an 8-core Intel i7-11800H CPU. Even accounting for the number of cores, O&V is approximately one order of magnitude faster than Transformer.
Footnote 2: [https://github.com/magenta/mt3](https://github.com/magenta/mt3)
_Conceptual simplicity:_ In essence, O&V revives the simplicity of [12] by applying a feedforward CNN to a discriminative task via piano rolls, followed by a simple decoding heuristic. Its architecture allows removing onset stages without retraining, providing a flexible trade-off between runtime and performance with little added complexity.
_Latency:_ The receptive field for our proposed O&V components is: Stem: 60 frames (1.44s), Stage\({}_{\downarrow}^{(i)}\): 99 frames (\(\sim\)2.38s), and Stage\({}_{V}\): 35 frames (0.84s). This would theoretically impose a latency of over 9s, which is far from a responsive system. We informally note that the latency can be truncated without drastically affecting results (we used a latency of 4s in a live workshop), and encourage practical applications. We also note that our focus was on finding a CNN with affordable inference and competitive performance, and we did not optimize for low receptive field, which may be obtainable with minor variations to the architecture (e.g. reducing the number of consecutive stages or CAMs).
## IV Conclusion and Future Work
We presented Onsets&Velocities, a convolutional setup that achieves real-time capabilities on modest commodity hardware without compromising performance, and with a substantially reduced size and temporal resolution. O&V achieves state-of-the-art performance in PPT note _onset_ detection, and establishes a good baseline on _onset+velocity_ detection. Future work could include reducing the receptive field, extensions to _offset_ and _pedal_ detection, training and evaluation on different instruments, and analysis of design choices via ablation studies (e.g. number of stages and temporal resolution).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Onset+Velocity**} & \multirow{2}{*}{**Architecture**} & \multirow{2}{*}{**Offset/pedal**} & \multicolumn{3}{c}{**Onset (\%)**} & \multicolumn{3}{c}{**Onset+Velocity (\%)**} & \multirow{2}{*}{**MAESTRO**} \\ & & & & P & R & F\({}_{1}\) & P & R & F\({}_{1}\) & **version** \\ \hline O\&F [16] & 10M & bi-RNN & ✓ / ✗ & 98.27 & 92.61 & 95.32 & - & - & - & v1 \\ Regression[11] & 12M & bi-RNN & ✓ / ✓ & 98.17 & 95.35 & 96.72 & - & - & - & v2 \\ Transformer[21] & – & Transformer & ✓ / ✗ & - & - & 96.13 & - & - & - & v3 \\ O\&V (ours) & **3.13M** & CNN & ✗ / ✗ & 98.58 & 95.07 & **96.78** & 96.25 & 92.86 & 94.50 & v3 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of top-performing models in terms of specifications (number of parameters for onset+velocity only, architecture and functionality) and performance (precision, recall, F\({}_{1}\)-score and MAESTRO version).
## Acknowledgments
A.F. wants to thank Jesus Monge Alvarez and Christian J. Steinmetz for their valuable feedback, the _Institut d'Estudis Balearics_ for supporting this work with research grant 389062 INV-23/2021, and the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for further support.
|
2306.01323 | Demystifying Structural Disparity in Graph Neural Networks: Can One Size
Fit All? | Recent studies on Graph Neural Networks(GNNs) provide both empirical and
theoretical evidence supporting their effectiveness in capturing structural
patterns on both homophilic and certain heterophilic graphs. Notably, most
real-world homophilic and heterophilic graphs are comprised of a mixture of
nodes in both homophilic and heterophilic structural patterns, exhibiting a
structural disparity. However, the analysis of GNN performance with respect to
nodes exhibiting different structural patterns, e.g., homophilic nodes in
heterophilic graphs, remains rather limited. In the present study, we provide
evidence that Graph Neural Networks(GNNs) on node classification typically
perform admirably on homophilic nodes within homophilic graphs and heterophilic
nodes within heterophilic graphs while struggling on the opposite node set,
exhibiting a performance disparity. We theoretically and empirically identify
effects of GNNs on testing nodes exhibiting distinct structural patterns. We
then propose a rigorous, non-i.i.d PAC-Bayesian generalization bound for GNNs,
revealing reasons for the performance disparity, namely the aggregated feature
distance and homophily ratio difference between training and testing nodes.
Furthermore, we demonstrate the practical implications of our new findings via
(1) elucidating the effectiveness of deeper GNNs; and (2) revealing an
over-looked distribution shift factor on graph out-of-distribution problem and
proposing a new scenario accordingly. | Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang | 2023-06-02T07:46:20Z | http://arxiv.org/abs/2306.01323v3 | # Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?
###### Abstract
Recent studies on Graph Neural Networks(GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both homophilic and certain heterophilic graphs. Notably, most real-world homophilic and heterophilic graphs are comprised of a mixture of nodes in both homophilic and heterophilic structural patterns, exhibiting a structural disparity. However, the analysis of GNN performance with respect to nodes exhibiting different structural patterns, e.g., homophilic nodes in heterophilic graphs, remains rather limited. In the present study, we provide evidence that Graph Neural Networks(GNNs) on node classification typically perform admirably on homophilic nodes within homophilic graphs and heterophilic nodes within heterophilic graphs while struggling on the opposite node set, exhibiting a performance disparity. We theoretically and empirically identify effects of GNNs on testing nodes exhibiting distinct structural patterns. We then propose a rigorous, non-i.i.d PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity, namely the aggregated feature distance and homophily ratio difference between training and testing nodes. Furthermore, we demonstrate the practical implications of our new findings via (1) elucidating the effectiveness of deeper GNNs; and (2) revealing an over-looked distribution shift factor on graph out-of-distribution problem and proposing a new scenario accordingly.
## 1 Introduction
Graph Neural Networks (GNNs) [1; 2; 3; 4] are a powerful technique for tackling a wide range of graph-related tasks [5; 3; 6; 7; 8; 9], especially node classification [2; 4; 10; 11], which requires predicting unlabeled nodes based on the graph structure, node features, and a subset of labeled nodes. The success of GNNs can be ascribed to their ability to capture structural patterns through the aggregation mechanism that effectively combines feature information from neighboring nodes [12].
GNNs have been widely recognized for their effectiveness on homophilic graphs [13; 2; 10; 4; 14; 15; 16]. In homophilic graphs, connected nodes tend to share the same label, which we refer to as _homophilic patterns_. An example of the homophilic pattern is depicted in the upper part of Figure 1, where node features and node labels are denoted by colors (i.e., blue and red) and numbers (i.e., 0 and 1), respectively. We can observe that all connected nodes exhibit homophilic patterns and share the same label 0. Recently, several studies have demonstrated that GNNs can also
Figure 1: Examples of homophilic and heterophilic patterns. Colors/numbers indicate node features/labels.
perform well on certain heterophilic graphs [17; 12; 18]. In heterophilic graphs, connected nodes tend to have different labels, which we refer to as _heterophilic patterns_. The example in the lower part of Figure 1 shows the heterophilic patterns. Based on this example, we intuitively illustrate how GNNs can work on such heterophilic patterns (lower right): after averaging features over all neighboring nodes, nodes with label 0 completely switch from their initial blue color to red, and vice versa; despite this feature alteration, the two classes remain easily distinguishable since nodes with the same label (number) share the same color (features).
However, existing studies on the effectiveness of GNNs [13; 17; 12; 18] only focus on either homophilic or heterophilic patterns solely and overlook the fact that real-world graphs typically exhibit a mixture of homophilic and heterophilic patterns. Recent studies [19; 20] reveal that many heterophilic graphs, e.g., Squirrel and Chameleon [21], contain over 20% homophilic nodes. Similarly, our preliminary study depicted in Figure 2 demonstrates that heterophilic nodes are consistently present in many homophilic graphs, e.g., PubMed [22] and Ogbn-arxiv [23]. Hence, real-world homophilic graphs predominantly consist of homophilic nodes as the majority structural pattern and heterophilic nodes in the minority one, while heterophilic graphs exhibit an opposite phenomenon with homophilic nodes in the majority and heterophilic ones in the minority.
To provide insights aligning the real-world scenario with structural disparity, we revisit the toy example in Figure 1, considering both homophilic and heterophilic patterns together. Specifically, for nodes labeled 0, both homophilic and heterophilic node features appear in blue before aggregation. However, after aggregation, homophilic and heterophilic nodes in label 0 exhibit different features, appearing blue and red, respectively. Such differences may lead to performance disparity between nodes in majority and minority patterns. For instance, in a homophilic graph with the majority pattern being homophilic, GNNs are more likely to learn the association between blue features and class 0 on account of more supervised signals in majority. Consequently, nodes in the majority structural pattern can perform well, while nodes in the minority structural pattern may exhibit poor performance, indicating an over-reliance on the majority structural pattern. Inspired by insights from the above toy example, we focus on answering following questions systematically in this paper: How does a GNN behave when encountering the structural disparity of homophilic and heterophilic nodes within a dataset? and Can one GNN benefit all nodes despite structural disparity?
**Present work**. Drawing inspiration from above intuitions, we investigate how GNNs exhibit different effects on nodes with structural disparity, the underlying reasons, and implications on graph applications. Our study proceeds as follows: **First**, we empirically verify the aforementioned intuition by examining the performance of testing nodes w.r.t. different homophily ratios, rather than the overall performance across all test nodes as in [12; 13; 18]. We show that GCN [2], a vanilla GNN, often underperforms MLP-based models on nodes with the minority pattern while outperforming them on the majority nodes. **Second**, we examine how aggregation, the key mechanism of GNNs, shows different effects on homophilic and heterophilic nodes. We propose an understanding of why GNNs exhibit performance disparity with a non-i.i.d PAC-Bayesian generalization bound, revealing that both feature distance and homophily ratio differences between train and test nodes are key factors leading to performance disparity. **Third**, we showcase the significance of these insights by exploring implications for (1) elucidating the effectiveness of deeper GNNs and (2) introducing a new graph out-of-distribution scenario with an over-looked distribution shift factor.
## 2 Preliminaries
**Semi-Supervised Node classification (SSNC).** Let \(G=(V,E)\) be an undirected graph, where \(V=\{v_{1},\cdots,v_{n}\}\) is the set of \(n\) nodes and \(E\subseteq V\times V\) is the edge set. Nodes are associated with node features \(\mathbf{X}\in\mathbb{R}^{n\times d}\), where \(d\) is the feature dimension. The number of class is denoted as \(K\). The adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) represents graph connectivity where \(\mathbf{A}[i,j]=1\) indicates an edge between nodes \(i\) and \(j\). \(\mathbf{D}\) is a degree matrix and \(\mathbf{D}[i,i]=d_{i}\) with \(d_{i}\) denoting degree of node \(v_{i}\). Given a small set of labeled nodes, \(V_{\text{tr}}\subseteq V\), SSNC task is to predict on unlabeled nodes \(V\setminus V_{\text{tr}}\).
**Node homophily ratio** is a common metric to quantify homophilic and heterophilic patterns. It is calculated as the proportion of a node's neighbors sharing the same label as the node [24; 25; 19]. It is formally defined as \(h_{i}=\frac{|\{u\in\mathcal{N}(v_{i}):y_{u}=y_{v_{i}}\}|}{d_{i}}\), where \(\mathcal{N}(v_{i})\) denotes the neighbor node set of \(v_{i}\) and \(d_{i}=|\mathcal{N}(v_{i})|\) is the cardinality of this set. Following [19; 26; 24], node \(i\) is considered to be homophilic when more neighbor nodes share the same label as the center node with \(h_{i}>0.5\). We
define the graph homophily ratio \(h\) as the average of node homophily ratios \(h=\frac{\sum_{i\in V}h_{i}}{|V|}\). Moreover, this ratio can be easily extended to higher-order cases \(h_{i}^{(k)}\) by considering \(k\)-order neighbors \(\mathcal{N}_{k}(v_{i})\).
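As an illustration, node and graph homophily ratios can be computed from an edge list with a few lines of NumPy; the names are illustrative:

```python
# Minimal sketch: node homophily h_i = fraction of neighbors with the same
# label; the graph homophily ratio is the mean over all nodes.
import numpy as np

def node_homophily(edges: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """edges: (E, 2) undirected edge list; labels: (n,) class labels."""
    n = labels.shape[0]
    same = np.zeros(n)
    deg = np.zeros(n)
    for u, v in edges:
        match = float(labels[u] == labels[v])
        same[u] += match
        same[v] += match
        deg[u] += 1
        deg[v] += 1
    return np.divide(same, deg, out=np.zeros(n), where=deg > 0)

# graph_h = node_homophily(edges, labels).mean()
```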
**Node subgroup** refers to a subset of nodes in the graph sharing similar properties, typically homophilic and heterophilic patterns measured with node homophily ratio. Training nodes are denoted as \(V_{\text{tr}}\). Test nodes \(V_{\text{te}}\) can be categorized into \(M\) node subgroups, \(V_{\text{te}}=\bigcup_{m=1}^{M}V_{m}\), where nodes in the same subgroup \(V_{m}\) share similar structural pattern.
## 3 Effectiveness of GNN on nodes with different structural properties
In this section, we explore the effectiveness of GNNs on different node subgroups exhibiting distinct structural patterns, specifically, homophilic and heterophilic patterns. It is different from previous studies [12; 13; 17; 27; 18] that primarily conduct analysis on the whole graph and demonstrate effectiveness with an overall performance gain. These studies, while useful, do not provide insights into the effectiveness of GNNs on different node subgroups, and may even obscure scenarios where GNNs fail on specific subgroups despite an overall performance gain. To accurately gauge the effectiveness of GNNs, we take a closer examination on node subgroups with distinct structural patterns. The following experiments are conducted on two common homophilic graphs, Ogbn-arxiv [23] and Pubmed [22], and two heterophilic graphs, Chameleon and Squirrel [21]. These datasets are chosen since GNNs can achieve better overall performance than MLP. Experiment details and related work on GNN disparity are in Appendix G and A, respectively.
**Existence of structural pattern disparity within a graph** is to recognize real-world graphs exhibiting different node subgroups with diverse structural patterns, before investigating the GNN effectiveness on them. We demonstrate node homophily ratio distributions on the aforementioned datasets in Figure 2. We can have the following observations. **Obs.1:** All four graphs exhibit a mixture of both homophilic and heterophilic patterns, rather than a uniform structural patterns. **Obs.2:** In homophilic graphs, the majority of nodes exhibit a homophilic pattern with \(h_{i}{>}0.5\), while in heterophilic graphs, the majority of nodes exhibit the heterophilic pattern with \(h_{i}{\leq}0.5\). We define nodes in majority structural pattern as majority nodes, e.g., homophilic nodes in a homophilic graph.
Figure 3: Performance comparison between GCN and MLP-based models. Each bar represents the accuracy gap on a specific node subgroup exhibiting a homophily ratio within the range specified on the x-axis. MLP-based models often outperform GCN on heterophilic nodes in homophilic graphs and homophilic nodes in heterophilic graphs with a positive value.
Figure 2: Node homophily ratio distributions. All graphs exhibit a mixture of homophilic and heterophilic nodes despite various graph homophily ratio \(h\).
**Examining GCN performance on different structural patterns.** To examine the effectiveness of GNNs on different structural patterns, we compare the performance of GCN [2] a vanilla GNN, with two MLP-based models, vanilla MLP and Graphless Neural Network (GLNN) [28], on testing nodes with different homophily ratios. It is evident that the vanilla MLP could have a large performance gap compared to GCN (i.e., 20% in accuracy) [12; 28; 2]. Consequently, the performance disparity can be overwhelmed by such gap which renders the effect from structural patterns. Therefore, we also include an advanced MLP model GLNN. It is trained in an advanced manner via distilling GNN predictions and exhibits performance on par with GNNs. Notably, only GCN has the ability to leverage structural information during the inference phase while both vanilla MLP and GLNN models solely rely on node features as input. This comparison ensures a fair study on the effectiveness of GNNs in capturing different structural patterns with mitigating the effects of node features. Experimental results on four datasets are presented in Figure 3. In the figure, y-axis corresponds to the accuracy differences between GCN and MLP-based models where positive indicates MLP models can outperform GCN; while x-axis represents different node subgroups with nodes in the subgroup satisfying homophily ratios in the given range, e.g., [0.0-0.2]. Based on experimental results, the following key observations can be made: **Obs.1:** In homophilic graphs, both GLNN and MLP demonstrate superior performance on the heterophilic nodes with homophily ratios in [0-0.4] while GCN outperforms them on homophilic nodes. **Obs.2:** In heterophilic graphs, MLP models often outperform on homophilic nodes yet underperform on heterophilic nodes. Notably, vanilla MLP performance on Chameleon is worse than that of GCN across different subgroups. This can be attributed to the training difficulties encountered on Chameleon, where an unexpected similarity in node features from different classes is observed [29]. Our observations indicate that despite the effectiveness of GCN suggested by [12; 18; 13], GCN exhibits limitations with performance disparity across homophilic and heterophilic graphs. It motivates investigation why GCN benefits majority nodes, e.g., homophilic nodes in homophilic graphs, while struggling with minority nodes. Moreover, additional results on more datasets and significant test results are shown in Appendix H and L.
**Organization.** In light of the above observations, we endeavor to understand the underlying causes of this phenomenon by answering the following research questions. Section 3.1 focuses on how aggregation, the fundamental mechanism in GNNs, affects nodes with distinct structural patterns differently. Upon identifying the differences, Section 3.2 analyzes how such disparities contribute to superior performance on majority nodes as opposed to minority nodes. Building on these observations, Section 3.3 identifies the key factors driving performance disparities across different structural patterns with a non-i.i.d. PAC-Bayes bound. Section 3.4 empirically corroborates the validity of our theoretical analysis on real-world datasets.
### How does aggregation affect nodes with structural disparity differently?
In this subsection, we examine how aggregation reveals different effects on nodes with structural disparity, serving as a precondition for performance disparity. Specifically, we focus on the discrepancy between nodes from the same class but with different structural patterns.
For a controlled study on graphs, we adopt the contextual stochastic block model (CSBM) with two classes. It is widely used for graph analysis, including generalization [12; 13; 30; 17; 31; 32], clustering [33], fairness [34; 35], and GNN architecture design [36; 37; 38]. Nodes in the CSBM model are generated as two disjoint sets \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) corresponding to two classes, \(c_{1}\) and \(c_{2}\), respectively. Each node in class \(c_{i}\) is associated with features \(x\in\mathbb{R}^{d}\) sampled from \(N(\boldsymbol{\mu}_{i},I)\), where \(\boldsymbol{\mu}_{i}\) is the feature mean of class \(c_{i}\) with \(i\in\{1,2\}\). The distance between the feature means of the two classes, \(\rho=\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|\), indicates the classification difficulty based on node features alone. Edges are then generated based on an intra-class probability \(p\) and an inter-class probability \(q\). For instance, a node with class \(c_{1}\) has probabilities \(p\) and \(q\) of connecting with another node in class \(c_{1}\) and \(c_{2}\), respectively. The CSBM model, denoted as CSBM(\(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},p,q\)), presumes that all nodes follow either a homophilic pattern (\(p>q\)) or a heterophilic pattern (\(p<q\)) exclusively. However, this assumption conflicts with real-world scenarios, where graphs often exhibit both patterns simultaneously, as shown in Figure 2. To mirror such scenarios, we propose a variant of CSBM, referred to as CSBM-Structure (CSBM-S), allowing for the simultaneous description of homophilic and heterophilic nodes.
**Definition 1** (CSBM-S\((\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},(p^{(1)},q^{(1)}),(p^{(2)},q^{(2)}),\Pr(\text{homo}))\)).: _The generated nodes consist of two disjoint sets \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Each node feature \(x\) is sampled from \(N(\boldsymbol{\mu}_{i},I)\) with \(i\in\{1,2\}\). Each set \(\mathcal{C}_{i}\) consists of two subgroups: \(\mathcal{C}_{i}^{(1)}\) for nodes in the homophilic pattern with intra-class and inter-class edge probabilities \(p^{(1)}>q^{(1)}\), and \(\mathcal{C}_{i}^{(2)}\) for nodes in the heterophilic pattern with \(p^{(2)}<q^{(2)}\). \(\Pr(\text{homo})\) denotes the probability that a node is in the homophilic pattern. \(\mathcal{C}_{i}^{(j)}\) denotes nodes in class \(i\) and subgroup \(j\) with \((p^{(j)},q^{(j)})\). We assume nodes follow the same degree distribution with \(p^{(1)}+q^{(1)}=p^{(2)}+q^{(2)}\)._
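A minimal generative sketch of Definition 1 is given below, assuming the edge probabilities \(p\) and \(q\) act as independent pairwise connection probabilities and that each edge inherits the pattern of its source node before symmetrization; these simplifications, the function name, and the parameter values are illustrative rather than the paper's exact sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_csbm_s(n, mu1, mu2, pq_homo, pq_hete, pr_homo):
    """Sample a CSBM-S graph (Definition 1): two balanced classes, each node
    assigned the homophilic or heterophilic pattern with probability pr_homo."""
    y = rng.integers(0, 2, size=n)                # class labels for c_1 / c_2
    homo = rng.random(n) < pr_homo                # structural pattern per node
    x = np.where(y[:, None] == 0, mu1, mu2) + rng.standard_normal((n, mu1.size))
    A = np.zeros((n, n))
    for i in range(n):                            # row i follows node i's pattern
        p, q = pq_homo if homo[i] else pq_hete
        A[i] = rng.random(n) < np.where(y == y[i], p, q)
    A = np.maximum(A, A.T)                        # symmetrize to an undirected graph
    np.fill_diagonal(A, 0)
    return x, A, y, homo

# p^(1)+q^(1) = p^(2)+q^(2) keeps the degree distributions matched.
x, A, y, homo = sample_csbm_s(200, np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                              pq_homo=(0.10, 0.02), pq_hete=(0.02, 0.10), pr_homo=0.7)
```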
Based on the neighborhood distributions, the mean-aggregated features \(\mathbf{F}=\mathbf{D}^{-1}\mathbf{A}\mathbf{X}\) follow Gaussian distributions on both the homophilic and heterophilic subgroups:
\[\mathbf{f}_{i}^{(j)}\sim N\left(\frac{p^{(j)}\boldsymbol{\mu}_{1}+q^{(j)} \boldsymbol{\mu}_{2}}{p^{(j)}+q^{(j)}},\frac{\mathbf{I}}{\sqrt{d_{i}}}\right), \text{for}\;i\in\mathcal{C}_{1}^{(j)};\mathbf{f}_{i}^{(j)}\sim N\left(\frac{q ^{(j)}\boldsymbol{\mu}_{1}+p^{(j)}\boldsymbol{\mu}_{2}}{p^{(j)}+q^{(j)}}, \frac{\mathbf{I}}{\sqrt{d_{i}}}\right),\text{for}\;i\in\mathcal{C}_{2}^{(j)} \tag{1}\]
where \(\mathcal{C}_{i}^{(j)}\) denotes the node subgroup with structural pattern \((p^{(j)},q^{(j)})\) in class \(i\). Our initial examination of the different effects of aggregation focuses on the aggregated feature distance between the homophilic and heterophilic node subgroups within class \(c_{1}\).
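Continuing the CSBM-S sketch above, the following lines compute the mean aggregation \(\mathbf{F}=\mathbf{D}^{-1}\mathbf{A}\mathbf{X}\) and compare the empirical aggregated means of the two class-\(c_{1}\) subgroups against the mixtures predicted by Equation 1; the match is approximate because of the symmetrization simplification.

```python
# Mean aggregation F = D^{-1} A X (Eq. 1), with zero degrees clamped to 1.
deg = A.sum(axis=1, keepdims=True)
F = (A @ x) / np.maximum(deg, 1.0)

c1 = (y == 0)  # nodes in class c_1
print("homophilic mean:  ", F[c1 & homo].mean(axis=0))   # ~ (p1*mu1 + q1*mu2)/(p1+q1)
print("heterophilic mean:", F[c1 & ~homo].mean(axis=0))  # ~ (p2*mu1 + q2*mu2)/(p2+q2)
```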
**Proposition 1**.: _Within class \(c_{1}\), the distance between the aggregated feature means of the homophilic and heterophilic node subgroups, \(\left\|\frac{p^{(1)}\boldsymbol{\mu}_{1}+q^{(1)}\boldsymbol{\mu}_{2}}{p^{(1)}+q^{(1)}}-\frac{p^{(2)}\boldsymbol{\mu}_{1}+q^{(2)}\boldsymbol{\mu}_{2}}{p^{(2)}+q^{(2)}}\right\|\), is strictly positive, since the two subgroups follow different feature distributions after aggregation. Before aggregation, this distance is \(0\), since both subgroups draw features from the same distribution regardless of their structural patterns._
Notably, the distance between original features is independent of the structural pattern. This proposition suggests that aggregation induces a distance gap between different patterns within the same class.
In addition to node feature differences with the same class, we further examine the discrepancy between nodes \(u\) and \(v\) with the same aggregated feature \(\mathbf{f}_{u}=\mathbf{f}_{v}\) but different structural patterns. We examine the discrepancy with the probability difference of nodes \(u\) and \(v\) in class \(c_{1}\), denoted as \(|\mathbf{P}_{1}(y_{u}=c_{1}|\mathbf{f}_{u})-\mathbf{P}_{2}(y_{v}=c_{1}| \mathbf{f}_{v})|\). \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) are the conditional probability of \(y=c_{1}\) given the feature \(\mathbf{f}\) on structural patterns \((p^{(1)},q^{(1)})\) and \((p^{(2)},q^{(2)})\), respectively.
**Lemma 1**.: _Assume (1) a balanced class distribution with \(\mathbf{P}(Y=1)=\mathbf{P}(Y=0)\) and (2) aggregated feature distributions sharing the same variance \(\sigma\). When nodes \(u\) and \(v\) have the same aggregated features \(\mathbf{f}_{u}=\mathbf{f}_{v}\) but different structural patterns, \((p^{(1)},q^{(1)})\) and \((p^{(2)},q^{(2)})\), we have:_
\[|\mathbf{P}_{1}(y_{u}=c_{1}|\mathbf{f}_{u})-\mathbf{P}_{2}(y_{v}=c_{1}| \mathbf{f}_{v})|\leq\frac{\rho^{2}}{\sqrt{2\pi}\sigma}|h_{u}-h_{v}| \tag{2}\]
Notably, the above assumptions are not strictly necessary but are employed for the sake of elegant expression. \(\rho=\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|\) is the original feature separability, independent of structure. Lemma 1 implies that two such nodes with a small homophily ratio difference \(|h_{u}-h_{v}|\) are likely to share the same class, and vice versa. The proof details and additional analysis on between-class effects are provided in Appendix D.
### How does Aggregation Contribute to Performance Disparity?
We have established that aggregation affects nodes with distinct structural patterns differently. However, it remains to be elucidated how such disparity contributes to performance improvement predominantly on majority nodes as opposed to minority nodes. It should be noted that, beyond the influence on features, test performance is also profoundly associated with the training labels: performance degradation may occur when the classifier is trained on biased training labels.
We then conduct an empirical discriminative analysis taking both mean-aggregated features and training labels into consideration. Drawing inspiration from existing literature [39; 40; 41; 42], we describe the discriminative ability with the distance between a train class prototype [43; 44], i.e., the feature mean of a class, and the corresponding test class prototype of the same class \(i\). For instance, it can be denoted as \(\|\mu_{i}^{\text{tr}}-\mu_{i}^{\text{ma}}\|\), where \(\mu_{i}^{\text{tr}}\) and \(\mu_{i}^{\text{ma}}\) are the prototypes of class \(i\) on train nodes and majority test nodes, respectively. A smaller value suggests that majority test nodes are close to train nodes within the same class, thus implying superior discriminative ability. A relative
Figure 4: Illustration on discriminative ratio variation along with aggregation. x-axis denotes the number of aggregations and y-axis denotes the discriminative ratio.
discriminative ratio is then proposed to compare the discriminative ability between majority and minority nodes, defined as \(r=\sum_{i=1}^{K}\frac{\|\boldsymbol{\mu}_{i}^{\text{tr}}-\boldsymbol{\mu}_{i}^{\text{ma}}\|}{\|\boldsymbol{\mu}_{i}^{\text{tr}}-\boldsymbol{\mu}_{i}^{\text{mi}}\|}\), where \(\boldsymbol{\mu}_{i}^{\text{mi}}\) corresponds to the prototype on minority test nodes. A lower relative discriminative ratio suggests that majority nodes are easier to predict than minority nodes.
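A minimal numpy sketch of the relative discriminative ratio is shown below, assuming the reconstructed formula above; the function signature and mask-based interface are illustrative.

```python
import numpy as np

def relative_discriminative_ratio(F, y, train_mask, maj_mask, min_mask, K):
    """r = sum_i ||mu_i^tr - mu_i^ma|| / ||mu_i^tr - mu_i^mi|| over classes i.

    F: (N, d) aggregated features; the masks select train, majority-test, and
    minority-test nodes. A lower r means majority test prototypes sit closer
    to the training prototypes than minority ones do.
    """
    r = 0.0
    for i in range(K):
        mu_tr = F[train_mask & (y == i)].mean(axis=0)
        mu_ma = F[maj_mask & (y == i)].mean(axis=0)
        mu_mi = F[min_mask & (y == i)].mean(axis=0)
        r += np.linalg.norm(mu_tr - mu_ma) / np.linalg.norm(mu_tr - mu_mi)
    return r
```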
The relative discriminative ratios are then calculated on aggregated features of different hop counts, with the original features denoted as 0-hop. Experimental results are presented in Figure 4, where the discriminative ratio shows an overall decreasing tendency as the number of aggregations increases across all four datasets. This indicates that majority test nodes gain better discriminative ability than minority test nodes as aggregation proceeds. We illustrate more results on GCN in Appendix K. Furthermore, instance-level experiments beyond class prototypes are in Appendix C.
### Why does Performance Disparity Happen? Subgroup Generalization Bound for GNNs
In this subsection, we conduct a rigorous analysis elucidating the primary causes of performance disparity across node subgroups with distinct structural patterns. Drawing inspiration from the discriminative metric described in Section 3.2, we identify two key factors for satisfactory test performance: (1) a test node \(u\) should have a close feature distance \(\min_{v\in V_{\text{tr}}}\|\mathbf{f}_{u}-\mathbf{f}_{v}\|\) to the training nodes \(V_{\text{tr}}\), indicating that the test node can be greatly influenced by training nodes; (2) having identified the closest training node \(v\), nodes \(u\) and \(v\) should be likely to share the same class, i.e., \(\sum_{c_{i}\in\mathcal{C}}|\mathbf{P}(y_{u}=c_{i}|\mathbf{f}_{u})-\mathbf{P}(y_{v}=c_{i}|\mathbf{f}_{v})|\) should be small. The second factor, concerning whether two close nodes are in the same class, depends on the homophily ratio difference \(|h_{u}-h_{v}|\), as shown in Lemma 1. Notably, since training nodes are randomly sampled, their structural patterns are likely to be the majority one. Therefore, training nodes show a smaller homophily ratio difference with majority test nodes sharing the same majority pattern than with minority test nodes, resulting in performance disparity across distinct structural patterns. We substantiate the above intuitions with controllable synthetic experiments in Appendix B.
To rigorously examine the role of aggregated feature distance and homophily ratio difference in performance disparity, we derive a non-i.i.d. PAC-Bayesian GNN generalization bound, based on the subgroup generalization bound for deterministic classifiers [45]. We begin by stating key assumptions on the graph data and the GNN model to clearly delineate the scope of our theoretical analysis. All remaining assumptions, proof details, and background on PAC-Bayes analysis can be found in Appendix F. Moreover, a comprehensive introduction to the generalization ability of GNNs can be found in Appendix A.
**Definition 2** (Generalized CSBM-S model).: _Each node subgroup \(V_{m}\) follows the CSBM distribution \(V_{m}\sim\text{CSBM}(\mathbf{\mu}_{1},\mathbf{\mu}_{2},p^{(i)},q^{(i)})\), where different subgroups share the same class mean but different intra-class and inter-class probabilities \(p^{(i)}\) and \(q^{(i)}\). Moreover, node subgroups also share the same degree distribution as \(p^{(i)}+q^{(i)}=p^{(j)}+q^{(j)}\)._
Instead of the CSBM-S model with only one homophilic and one heterophilic pattern, we adopt the generalized CSBM-S assumption, which allows more structural patterns with different levels of homophily.
**Assumption 1** (GNN model).: _We focus on SGC [15] with the following components: (1) a one-hop mean aggregation function \(g\) with \(g(X,G)\) denoting the output. (2) MLP feature transformation \(f(g_{i}(X,G);W_{1},W_{2},\cdots,W_{L})\), where \(f\) is a ReLU-activated \(L\)-layer MLP with \(W_{1},\cdots,W_{L}\) as parameters for each layer. The largest width of all the hidden layers is denoted as \(b\)._
Notably, although we theoretically analyze a simple GNN architecture, similar to [45; 12; 46], our analysis can easily be extended to the higher-order case, with empirical success across different GNN architectures shown in Section 3.4.
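For reference, a minimal PyTorch sketch of the SGC architecture in Assumption 1 is given below, using a dense adjacency matrix for clarity; the class name, the \(k\)-hop option, and the dense representation are illustrative simplifications.

```python
import torch
import torch.nn as nn

class SGC(nn.Module):
    """Assumption 1: k rounds of mean aggregation g, then an L-layer ReLU MLP f."""
    def __init__(self, in_dim, hid_dim, n_classes, n_layers=2, k=1):
        super().__init__()
        self.k = k
        dims = [in_dim] + [hid_dim] * (n_layers - 1) + [n_classes]
        layers = []
        for l in range(n_layers):
            layers.append(nn.Linear(dims[l], dims[l + 1]))
            if l < n_layers - 1:
                layers.append(nn.ReLU())
        self.mlp = nn.Sequential(*layers)

    def forward(self, X, A):
        # g(X, G): mean aggregation D^{-1} A X, applied k times (dense A for clarity).
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        for _ in range(self.k):
            X = (A @ X) / deg
        return self.mlp(X)

model = SGC(in_dim=1433, hid_dim=64, n_classes=7)  # e.g., Cora-sized inputs
```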
Our main theorem is based on PAC-Bayes analysis, which typically aims to bound the generalization gap between the expected margin loss \(\mathcal{L}_{m}^{0}\) on a test subgroup \(V_{m}\) for a margin \(0\) and the empirical margin loss \(\widehat{\mathcal{L}}_{\text{tr}}^{\gamma}\) on the train subgroup \(V_{\text{tr}}\) for a margin \(\gamma\). These losses are generally utilized in PAC-Bayes analysis [47; 48; 49; 50]. More details can be found in Appendix F. The formulation is shown as follows:
**Theorem 1** (Subgroup Generalization Bound for GNNs).: _Let \(\tilde{h}\) be any classifier in the classifier family \(\mathcal{H}\) with parameters \(\{\widetilde{W}_{l}\}_{l=1}^{L}\). For any \(0<m\leq M\), \(\gamma\geq 0\), and a large enough number of training nodes \(N_{\text{tr}}=|V_{\text{tr}}|\), there exists \(0<\alpha<\frac{1}{4}\) such that, with probability at least \(1-\delta\) over the sample of \(y^{\text{tr}}:=\{y_{i}\}_{i\in V_{\text{tr}}}\), we have:_

\[\mathcal{L}_{m}^{0}(\tilde{h})\leq\widehat{\mathcal{L}}_{\text{tr}}^{\gamma}(\tilde{h})+O\left(\underbrace{\frac{K\rho}{\sqrt{2\pi}\sigma}(\epsilon_{m}+|h_{\text{tr}}-h_{m}|\cdot\rho)}_{\mathbf{(a)}}+\underbrace{\frac{b\sum_{l=1}^{L}\|\widetilde{W}_{l}\|_{F}^{2}}{(\gamma/8)^{2/L}N_{\text{tr}}^{\alpha}}(\epsilon_{m})^{2/L}}_{\mathbf{(b)}}+\mathbf{R}\right) \tag{3}\]
The bound involves three terms: **(a)** shows that both a large homophily ratio difference \(|h_{\text{tr}}-h_{m}|\) and a large aggregated feature distance \(\epsilon_{m}=\max_{j\in V_{m}}\min_{i\in V_{\text{tr}}}\|g_{i}(X,G)-g_{j}(X,G)\|_{2}\) between the test node subgroup \(V_{m}\) and the training nodes \(V_{\text{tr}}\) lead to a large generalization error. \(\rho=\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|\) denotes the original feature separability, independent of structure, and \(K\) is the number of classes. **(b)** further amplifies the effect of the aggregated feature distance \(\epsilon_{m}\), leading to a large generalization error. **(c)** \(\mathbf{R}\) is a term independent of the aggregated feature distance and homophily ratio difference, given by \(\frac{1}{N_{\text{tr}}^{1-2\alpha}}+\frac{1}{N_{\text{tr}}^{2\alpha}}\ln\frac{LC(2B_{m})^{1/L}}{\gamma^{1/L}\delta}\), where \(B_{m}=\max_{i\in V_{\text{tr}}\cup V_{m}}\|g_{i}(X,G)\|_{2}\) is the maximum feature norm. \(\mathbf{R}\) vanishes as the training size \(N_{\text{tr}}\) grows. Proof details are in Appendix F.
Our theory suggests that both homophily ratio difference and aggregated feature distance to training nodes are key factors contributing to the performance disparity. Typically, nodes with large homophily ratio difference and aggregated feature distance to training nodes lead to performance degradation.
### Performance Disparity Across Node Subgroups on Real-World Datasets
To empirically examine the implications of our theoretical analysis, we compare performance on node subgroups, partitioned by both homophily ratio difference and aggregated feature distance to the training nodes, for popular GNN models including GCN [2], SGC [15], GAT [10], GCNII [51], and GPRGNN [52]. Specifically, test nodes are partitioned into subgroups based on their disparity scores to the training set in terms of both the 2-hop homophily ratio \(h_{i}^{(2)}\) and the 2-hop aggregated features \(\mathbf{F}^{(2)}\) obtained by \(\mathbf{F}^{(2)}=(\mathbf{\bar{D}}^{-1}\mathbf{\tilde{A}})^{2}\mathbf{X}\), where \(\mathbf{\tilde{A}}=\mathbf{A}+\mathbf{I}\) and \(\mathbf{\bar{D}}=\mathbf{D}+\mathbf{I}\). For a test node \(u\), we measure the node disparity by (1) selecting the closest training node \(v=\text{arg}\min_{v\in V_{\text{tr}}}\|\mathbf{F}_{u}^{(2)}-\mathbf{F}_{v}^{(2)}\|\) and (2) calculating the disparity score \(s_{u}=\|\mathbf{F}_{u}^{(2)}-\mathbf{F}_{v}^{(2)}\|_{2}+|h_{u}^{(2)}-h_{v}^{(2)}|\), where the first and second terms correspond to the aggregated-feature distance and the homophily ratio difference,
Figure 5: Test accuracy disparity across node subgroups by **aggregated-feature distance and homophily ratio difference** to training nodes. Each figure corresponds to a dataset, and each bar cluster corresponds to a GNN model. A clear performance decrease tendency can be found from subgroups 1 to 5 with increasing differences to training nodes.
Figure 6: Test accuracy disparity across node subgroups by **aggregated-feature distance** to train nodes. Each figure corresponds to a dataset, and each bar cluster corresponds to a GNN model. A clear performance decrease tendency can be found from subgroups 1 to 5 with increasing differences to training nodes.
respectively. We then sort test nodes by their disparity scores and divide them into 5 equal-binned subgroups accordingly. Performance on the different node subgroups is presented in Figure 5, yielding the following observations. **Obs.1:** We note a clear test accuracy degradation with respect to increasing differences in aggregated features and homophily ratios. We further investigate the individual effects of aggregated feature distance and homophily ratio difference in Figures 6 and 7, respectively. An overall trend of performance decline with increasing disparity score is evident, though some exceptions are present. **Obs.2:** When only considering the aggregated feature distance, there is no clear trend among groups 1, 2, and 3 for GCN, SGC, and GAT on heterophilic datasets. **Obs.3:** When only considering the homophily ratio difference, there is no clear trend among groups 1, 2, and 3 across the four datasets. These observations underscore the importance of both the aggregated-feature distance and the homophily ratio difference in shaping GNN performance disparity; combining these factors provides a more comprehensive and accurate account of the reasons for GNN performance disparity. For a more comprehensive analysis, we further substantiate our findings with higher-order information and a wider array of datasets in Appendix J.1.
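A minimal numpy sketch of this subgroup construction is given below, assuming \(\mathbf{F}^{(2)}\) and \(h^{(2)}\) have been precomputed as described; the pairwise distance matrix is materialized for clarity and would need batching on large graphs.

```python
import numpy as np

def disparity_subgroups(F2, h2, train_idx, test_idx, n_groups=5):
    """Split test nodes into equal bins by s_u = ||F2_u - F2_v|| + |h2_u - h2_v|,
    where v is the training node closest to u in 2-hop aggregated feature space."""
    test_idx = np.asarray(test_idx)
    train_idx = np.asarray(train_idx)
    d = np.linalg.norm(F2[test_idx][:, None, :] - F2[train_idx][None, :, :], axis=-1)
    v = d.argmin(axis=1)                           # closest train node per test node
    s = d[np.arange(test_idx.size), v] + np.abs(h2[test_idx] - h2[train_idx[v]])
    return np.array_split(test_idx[np.argsort(s)], n_groups)  # groups 1..5 by disparity
```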
**Summary** In this section, we study GNN performance disparity on nodes with distinct structural patterns and uncover its underlying causes. We primarily investigate the impact of aggregation, the key component of GNNs, on nodes with different structural patterns in Sections 3.1 and 3.2. We observe that aggregation effects vary across nodes with different structural patterns, notably enhancing the discriminative ability on majority nodes. These observed disparities inspire us to identify the crucial factors contributing to GNN performance disparities across nodes with a non-i.i.d. PAC-Bayes bound in Section 3.3. The theoretical analysis indicates that test nodes with larger aggregated feature distances and homophily ratio differences to the training nodes experience performance degradation. We substantiate our findings on real-world datasets in Section 3.4.
## 4 Implications of graph structural disparity
In this section, we illustrate the significance of our findings on structural disparity by (1) elucidating the effectiveness of existing deeper GNNs and (2) unveiling an overlooked aspect of distribution shift in the graph out-of-distribution (OOD) problem, introducing a new OOD scenario accordingly. Experimental details and discussions on further implications are in Appendices G and O, respectively.
Having identified where deeper GNNs excel, we still need to explain why their effectiveness primarily appears on the minority node group. Since the superiority of deeper GNNs stems from capturing higher-order information, we investigate how higher-order homophily ratio differences, \(|h_{u}^{(k)}-h_{v}^{(k)}|\), vary on minority nodes, where node \(u\) is a test node and node \(v\) is the training node closest to \(u\). We concentrate on analyzing the minority nodes \(\mathbf{V}_{\text{mi}}\) in terms of the default one-hop homophily ratio \(h_{u}\) and examine how \(\sum_{u\in\mathbf{V}_{\text{mi}}}|h_{u}^{(k)}-h_{v}^{(k)}|\) varies with the order \(k\). Experimental results are shown in Figure 9, where a decreasing trend of the homophily ratio difference is observed as the number of neighborhood hops increases. A smaller homophily ratio difference leads to smaller generalization error and thus better performance. This observation is consistent with [19], where heterophilic nodes in heterophilic graphs exhibit large higher-order homophily ratios, implicitly leading to a smaller homophily ratio difference.
### A new graph out-of-distribution scenario
The graph out-of-distribution (OOD) problem refers to the underperformance of GNNs caused by distribution shifts on graphs. Many graph OOD scenarios [61, 62, 63, 64, 23, 65, 45], e.g., biased training labels, time shift, and popularity shift, have been extensively studied. These OOD scenarios can typically be categorized into covariate shift with \(\mathbf{P}^{\text{train}}(\mathbf{X})\neq\mathbf{P}^{\text{test}}(\mathbf{X})\) and concept shift [66, 67, 68] with \(\mathbf{P}^{\text{train}}(\mathbf{Y}|\mathbf{X})\neq\mathbf{P}^{\text{test}}(\mathbf{Y}|\mathbf{X})\), where \(\mathbf{P}^{\text{train}}(\cdot)\) and \(\mathbf{P}^{\text{test}}(\cdot)\) denote the train and test distributions, respectively. Existing graph concept shift scenarios [61, 65] introduce environment variables \(e\) with \(\mathbf{P}(\mathbf{Y}|\mathbf{X},e_{\text{train}})\neq\mathbf{P}(\mathbf{Y}|\mathbf{X},e_{\text{test}})\), leading to spurious correlations, and algorithms [61, 69] have been developed to capture the environment-invariant relationship \(\mathbf{P}(\mathbf{Y}|\mathbf{X})\). Nonetheless, existing concept shift settings overlook the scenario where there is no unique environment-invariant relationship \(\mathbf{P}(\mathbf{Y}|\mathbf{X})\). For instance, \(\mathbf{P}(\mathbf{Y}|\mathbf{X}_{\text{homo}})\) and \(\mathbf{P}(\mathbf{Y}|\mathbf{X}_{\text{hete}})\) can differ, as indicated in Section 3.1, where \(\mathbf{X}_{\text{homo}}\) and \(\mathbf{X}_{\text{hete}}\) correspond to features of nodes in homophilic and heterophilic patterns. Notably, homophilic and heterophilic patterns are crucial task knowledge and cannot be treated as irrelevant environment variables, given their integral role in the node classification task. Consequently, we find that the homophily ratio difference between train and test sets can be an important factor leading to an overlooked concept shift, namely, the graph structural shift. The practical implications of this concept shift are substantiated by the following scenarios: **(1)** Graph structural shift frequently occurs in most graphs, with a performance degradation on minority nodes, as depicted in Figure 3. **(2)** Graph structural shift hides secretly in existing graph OOD scenarios. For instance, the FaceBook-100 dataset [61] reveals a substantial homophily ratio difference between train and test sets, averaging 0.36. This discrepancy could be the primary cause of OOD performance deterioration, since existing OOD algorithms [61, 70] that neglect such a concept shift attain only a minimal average performance gain of 0.12%. **(3)** Graph structural shift is a recurrent phenomenon in numerous real-world applications where new nodes in graphs may exhibit distinct structural patterns. For example, in a recommendation system, existing users with rich data can receive well-personalized recommendations in the exploitation stage (homophily), while new users with less data may receive diverse recommendations during the exploration stage (heterophily).
Figure 8: Performance comparison between GCN and deeper GNNs. Each bar represents the accuracy gap on a specific node subgroup exhibiting a homophily ratio within the range specified on the x-axis.
Figure 9: Multiple hop homophily ratio differences between training and minority test nodes.
Given the prevalence and importance of the graph structural shift, we propose a new graph OOD scenario emphasizing this concept shift. Specifically, we introduce a new data split on existing datasets, namely Cora, CiteSeer, PubMed, Ogbn-Arxiv, Chameleon, and Squirrel, where majority nodes are selected for training and validation, and minority nodes for testing. This split strategy highlights the homophily ratio difference and the corresponding concept shift. To illustrate the challenges posed by our new scenario, we conduct experiments on models including GCN, MLP, GLNN, GPRGNN, and GCNII. We also include the graph OOD algorithms SRGNN [62] and EERM [61] with GCN encoders; EERM(II) is a variant of EERM with a GCNII encoder. For a fair comparison, we show GCN performance on an i.i.d. random split, GCN(i.i.d.), sharing the same node sizes for train, validation, and test. Results are shown in Table 1, with additional results in Appendix G.4. The following observations can be made: **Obs.1:** Performance degradation is found by comparing the OOD setting with the i.i.d. one across the four datasets, confirming the existence of the OOD issue. **Obs.2:** MLP-based models and deeper GNNs generally outperform vanilla GCN, demonstrating their superiority on minority nodes. **Obs.3:** Graph OOD algorithms with GCN encoders struggle to yield good performance across datasets, indicating a challenge unique to this scenario compared with other graph OOD scenarios. This primarily stems from the difficulty of learning accurate relationships on both homophilic and heterophilic nodes with distinct \(\mathbf{P}(\mathbf{Y}|\mathbf{X})\). Nonetheless, it can be alleviated by selecting a deeper GNN encoder, as the homophily ratio difference may vanish in higher-order structural information, reducing the concept shift. **Obs.4:** EERM(II), EERM with a GCNII encoder, outperforms EERM with GCN. These observations suggest that the GNN architecture plays an indispensable role in addressing graph OOD issues, highlighting a new research direction.
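A minimal sketch of the proposed structural-shift split is shown below; the split fractions and the simple \(h>0.5\) majority rule are illustrative assumptions, whereas the paper matches the i.i.d. baseline's split sizes.

```python
import numpy as np

def structural_shift_split(h, graph_is_homophilic, train_frac=0.5, val_frac=0.25, seed=0):
    """Majority-pattern nodes go to train/validation; minority nodes form the test set."""
    majority = (h > 0.5) if graph_is_homophilic else (h <= 0.5)
    maj_idx = np.flatnonzero(majority)
    np.random.default_rng(seed).shuffle(maj_idx)
    n_tr = int(train_frac * maj_idx.size)
    n_va = int(val_frac * maj_idx.size)
    return (maj_idx[:n_tr],                 # train
            maj_idx[n_tr:n_tr + n_va],      # validation
            np.flatnonzero(~majority))      # test: minority structural pattern
```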
## 5 Conclusion & Discussion
In conclusion, this work provides crucial insights into GNN performance in the presence of structural disparity, which is common in real-world scenarios. We recognize that aggregation exhibits different effects on nodes with structural disparity, leading to better performance on majority nodes than on minority nodes. This understanding also serves as a stepping stone for multiple graph applications.
Our exploration mainly focuses on common datasets with clear majority structural patterns, while real-world scenarios may offer more complicated datasets that pose new challenges. Additional experiments are conducted on Actor, IGB-tiny, Twitch-gamer, and Amazon-ratings; dataset details and experimental results are in Appendix G.3 and Appendices H-K, respectively. Although our understanding is empirically effective on most datasets, further research and more sophisticated analysis are still necessary. Discussions on limitations and broader impact are in Appendices N and O, respectively.
## 6 Acknowledgement
We want to thank Xitong Zhang, He Lyu at Michigan State University, Yuanqi Du at Cornell University, Haonan Wang at the National University of Singapore, and Jianan Zhao at Mila for their constructive comments on this paper.
|
2310.02641 | Deformation-Invariant Neural Network and Its Applications in Distorted
Image Restoration and Analysis | Images degraded by geometric distortions pose a significant challenge to
imaging and computer vision tasks such as object recognition. Deep
learning-based imaging models usually fail to give accurate performance for
geometrically distorted images. In this paper, we propose the
deformation-invariant neural network (DINN), a framework to address the problem
of imaging tasks for geometrically distorted images. The DINN outputs
consistent latent features for images that are geometrically distorted but
represent the same underlying object or scene. The idea of DINN is to
incorporate a simple component, called the quasiconformal transformer network
(QCTN), into other existing deep networks for imaging tasks. The QCTN is a deep
neural network that outputs a quasiconformal map, which can be used to
transform a geometrically distorted image into an improved version that is
closer to the distribution of natural or good images. It first outputs a
Beltrami coefficient, which measures the quasiconformality of the output
deformation map. By controlling the Beltrami coefficient, the local geometric
distortion under the quasiconformal mapping can be controlled. The QCTN is
lightweight and simple, which can be readily integrated into other existing
deep neural networks to enhance their performance. Leveraging our framework, we
have developed an image classification network that achieves accurate
classification of distorted images. Our proposed framework has been applied to
restore geometrically distorted images by atmospheric turbulence and water
turbulence. DINN outperforms existing GAN-based restoration methods under these
scenarios, demonstrating the effectiveness of the proposed framework.
Additionally, we apply our proposed framework to the 1-1 verification of human
face images under atmospheric turbulence and achieve satisfactory performance,
further demonstrating the efficacy of our approach. | Han Zhang, Qiguang Chen, Lok Ming Lui | 2023-10-04T08:01:36Z | http://arxiv.org/abs/2310.02641v2 | Deformation-Invariant Neural Network and Its Applications in Distorted Image Restoration and Analysis
###### Abstract
Images degraded by geometric distortions pose a significant challenge to imaging and computer vision tasks such as object recognition. Deep learning-based imaging models usually fail to give accurate performance for geometrically distorted images. In this paper, we propose the deformation-invariant neural network (DINN), a framework to address the problem of imaging tasks for geometrically distorted images. The DINN outputs consistent latent features for images that are geometrically distorted but represent the same underlying object or scene. The idea of DINN is to incorporate a simple component, called the quasiconformal transformer network (QCTN), into other existing deep networks for imaging tasks. The QCTN is a deep neural network that outputs a quasiconformal map, which can be used to transform a geometrically distorted image into an improved version that is closer to the distribution of natural or good images. It first outputs a Beltrami coefficient, which measures the quasiconformality of the output deformation map. By controlling the Beltrami coefficient, the local geometric distortion under the quasiconformal mapping can be controlled. The QCTN is lightweight and simple, which can be readily integrated into other existing deep neural networks to enhance their performance. Leveraging our framework, we have developed an image classification network that achieves accurate classification of distorted images. Our proposed framework has been applied to restore geometrically distorted images by atmospheric turbulence and water turbulence. DINN outperforms existing GAN-based restoration methods under these scenarios, demonstrating the effectiveness of the proposed framework. Additionally, we apply our proposed framework to the 1-1 verification of human face images under atmospheric turbulence and achieve satisfactory performance, further demonstrating the efficacy of our approach.
Image Restoration, Turbulence Removal, Bijective Transformation, Generative Adversarial Network, Quasiconformal Geometry
## I Introduction
Deep learning methods have made significant strides in the field of imaging and computer vision, allowing us to achieve remarkable results in tasks like image restoration, object recognition, and classification. However, when it comes to degraded images, deep learning methods can face significant challenges. One such category of degraded images is those corrupted by geometric distortion, such as atmospheric turbulence or water turbulence, for which deep learning methods may fail to produce accurate results. For example, in facial recognition on images obtained by long-range cameras, the facial structure in the images is often geometrically distorted by atmospheric turbulence, causing classical classification networks to give incorrect results [1]. One intuitive approach to this problem is to include distorted images when fine-tuning the downstream classification network. However, this approach can be expensive due to the typically large size of the downstream network. Additionally, the extra variance in the data distribution introduced by the distorted images may degrade the performance of the tuned neural network. This challenge motivates the development of a framework that can effectively deal with geometrically distorted images, enabling deep learning methods to achieve accurate and reliable results even in challenging conditions.
There are two possible approaches to address this problem. One approach is to integrate a physical model that describes the geometric distortion. However, finding an appropriate physical model to describe the different types of geometric deformations can be challenging. Another approach is to train a deep neural network to describe and correct the geometric distortion. However, training a deep neural network that can handle a wide range of deformations while maintaining control over the geometric properties of the deformation is a challenging task. This difficulty often leads to inaccurate estimations of the required geometric distortion necessary to correct an image, making it crucial to develop a network that can effectively learn spatial deformation with controlled local geometric distortions.
To address the problem of imaging tasks for geometrically distorted images, we propose the deformation-invariant neural network (DINN), a framework that integrates the quasiconformal transformer network (QCTN) into existing deep neural networks. The QCTN is a deep neural network that outputs a quasiconformal map, which can transform a geometrically distorted image into an improved version that is closer to the distribution of natural or good images. The QCTN achieves this by first outputting a Beltrami coefficient, which measures the quasiconformality of the associated deformation map. By
controlling the Beltrami coefficient, the local geometric distortion under the quasiconformal mapping can be controlled. A key feature of the QCTN is its ability to generate a bijective deformation map. The bijectivity holds great importance as it ensures the preservation of the essential characteristics of the original image. Figure 1 provides an illustration of the significance of bijectivity. In Figure 1(a), an image depicting a degraded digit 9 is presented. Our objective is to transform this degraded image into a non-distorted version. However, if a non-bijective deformation is employed, the digit 9 undergoes a topological change and is transformed into the digit 8, as depicted in Figure 1(b). Conversely, when a bijective deformation is utilized, the digit 9 is transformed into a non-distorted digit 9, as shown in Figure 1(c). This example illustrates the crucial role of bijectivity in preserving the fundamental features of the original image.
Utilizing our framework, we have devised an image classification network that excels in accurately classifying distorted images. Our proposed framework has also been applied to the restoration of geometrically distorted images, including images distorted by atmospheric turbulence and water turbulence. Our proposed framework has outperformed existing GAN-based restoration methods, demonstrating its effectiveness. Additionally, we have applied our proposed framework to the 1-1 verification of human face images under strong air turbulence and achieved good performance, further demonstrating the efficacy of our approach. The proposed DINN framework is believed to be an effective model to enhance the performance of deep neural networks in imaging tasks and enable robust and accurate image analysis in various applications.
In summary, the main contributions of this paper are listed below.
* We introduce the deformation-invariant neural network (DINN) framework, designed to handle imaging tasks involving geometrically distorted images. DINN ensures consistent latent features for images capturing the same underlying object or scene. In the DINN framework, we propose the portable Quasiconformal Transformation Network (QCTN) component, enabling the correction of geometric distortions. This allows large pretrained networks to process heavily distorted images without the need for additional tuning, which can be computationally expensive.
* Based on quasiconformal theories, the QCTN component in DINN generates a bijective deformation map, preserving the salient features of the original image. This property leads to more accurate imaging results, ensuring that the restored images maintain their essential characteristics.
* We utilize the DINN framework to design deep neural networks for tackling three imaging tasks. Firstly, leveraging the DINN framework, we develop an image classification network that demonstrates proficiency in accurately classifying distorted images. Secondly, we employ the DINN framework to address the challenging problem of image restoration in the presence of atmospheric or underwater turbulence. The capability of DINN to effectively handle the complex distortions caused by turbulence in the air and water proves to be highly advantageous in this context. Thirdly, we design a deep neural network using DINN framework for 1-1 facial verification tasks involving facial images that have been corrupted by atmospheric turbulence. Through this application, DINN demonstrates its remarkable efficacy in enhancing the accuracy of facial recognition even under adverse conditions.
## II Related Work
In this section, we present a comprehensive overview of the relevant existing literature that closely relates to our work.
### _Computational quasiconformal geometry_
In this work, computational quasiconformal geometry is applied. Computational quasiconformal geometry has found extensive application in diverse imaging tasks, yielding successful results, and provides a mathematical tool to study and control the geometric distortions under a mapping. In particular, conformal mappings belong to the class of quasiconformal mappings. Conformal mappings have garnered widespread usage in geometry processing, finding application in a multitude of tasks, including but not limited to texture mapping and surface parameterizations [2, 3, 4]. To quantitatively assess local geometric distortions within a mapping, the associated Beltrami coefficient is commonly employed. By manipulating the Beltrami coefficients, effective control over the geometric properties of the mapping can be achieved.
Fig. 1: The significance of bijectivity. (a) A degraded image of the digit 9. (b) The degraded image undergoes a non-bijective deformation resulting in a topological change, transforming the digit 9 into the digit 8. (c) The degraded image undergoes a bijective deformation. The distorted digit 9 is transformed into a non-distorted digit 9.
Consequently, various surface parameterization methods minimizing conformality distortion have been proposed, leveraging the Beltrami coefficient [5, 6]. Besides, quasiconformal mappings have found applications in computational fabrication [7, 8, 9]. Moreover, numerous quasiconformal imaging models have been introduced in recent years to address diverse imaging tasks, such as image registration [10, 11], surface matching [12] and shape prior image segmentation [13, 14].
### _Deformable Convolution_
Deformable convolution has emerged as a promising solution to overcome the limitations of traditional convolution operations in Convolutional Neural Networks (CNNs). One notable approach is the Active Convolution (AC) proposed by Y. Jeon et al. [15]. AC integrates a trainable attention mechanism into the convolution operation, enabling adaptive feature selection for different input instances. Another related technique is the Spatial Transformer Network (STN) [16], which introduces a learnable transformation module capable of warping the input feature map based on learnable parameters. By incorporating an explicit spatial transformation module, the STN allows the network to learn spatial transformations that align the input with the task at hand, resulting in improved performance for tasks like digit recognition and image classification. Building upon the concept of the STN, D. Dai et al. [17] proposed the Deformable Convolution Network (DCN), which extends the idea of spatial transformation to the convolution operation itself. DCN introduces learnable offsets for each position in the convolutional kernel, enabling dynamic adjustment of the sampling locations for each input instance. This leads to enhanced performance in tasks such as object detection and semantic segmentation. However, the original DCN has limitations in handling large deformations and maintaining invariance to occlusion. To address these limitations, researchers have introduced variations such as Deformable Convolution v2 (DCNv2) [18], which incorporates additional deformable offsets for intermediate feature maps, and Deformable RoI Pooling (DRoIPool) [17], which extends the DCN to region-based object detection tasks. Furthermore, W. Luo et al. [19] discovered that each pixel does not contribute equally to the final results in DCN, highlighting the need for further improvements to the deformable convolution operation to address its limitations and maximize its performance.
### _Image Restoration_
This work incorporates the use of Generative Adversarial Networks (GANs) to address image restoration tasks. GAN models have shown success in image restoration by training a generator to restore degraded images to their original, high-quality versions [20, 21, 22, 23]. The generator learns to produce visually pleasing and realistic images by taking a degraded image as input and generating an output that closely resembles the original image. In the specific area of image deturbulence, which focuses on restoring images distorted by atmospheric or water turbulence, Lau et al. [24] propose a method that utilizes robust principal component analysis (RPCA) and quasiconformal maps. Their approach aims to restore atmospheric turbulence-distorted images. Another relevant work by Thepa et al. [25] presents a deep neural network-based approach for reconstructing dynamic fluid surfaces from various types of images, including monocular and stereo images. Lau et al. [1] propose a novel GAN-based approach specifically designed for restoring and recognizing facial images distorted by atmospheric turbulence. Their method can restore high-quality facial images from distorted inputs and recognize faces under challenging conditions. Li et al. [26] employ a GAN-based model to remove distortions caused by refractive interfaces, such as water surfaces, using only a single image as input. Furthermore, Rai et al. [27] introduce a channel attention mechanism, adapted from [28], into the generator network of their proposed GAN-based model. This mechanism helps the network focus on more relevant features during the restoration process.
## III Mathematical Formulation
In this section, we present the mathematical formulation of our proposed framework, which leverages the principles of quasiconformal geometry. We also provide an overview of the fundamental mathematical concepts related to quasiconformal theories.
### _Problem formulation_
In this subsection, we provide the mathematical formulation of our problem. Given a distorted image \(\tilde{I}:\Omega\rightarrow\mathbb{R}\) or \(\mathbb{R}^{3}\), \(\tilde{I}\) can be expressed as:
\[\tilde{I}=I\circ\tilde{f}+\epsilon, \tag{1}\]
where \(I\) is the original clean image without any refractive distortion, and \(\epsilon\) represents the additive noise. Here, \(\tilde{f}\) represents the geometric distortion imposed on \(I\).
For most imaging tasks, the input image is assumed to be clean or undistorted. The clean image is fed into a suitable algorithm or deep neural network, which produces the desired imaging results. For example, in a classification task, we use a classifier network \(\mathcal{N}\) that takes an image as input and outputs its predicted category. Usually, \(\mathcal{N}\) is trained on a dataset of
normal, undistorted images with a distribution denoted as \(P_{\text{clear}}\). When the distorted image \(\tilde{I}\) is used as input to \(\mathcal{N}\), \(\mathcal{N}(\tilde{I})\) is likely to produce an incorrect prediction since \(\tilde{I}\) significantly deviates from \(P_{\text{clear}}\). One possible solution is to adjust the parameters in the deep neural network using a training dataset of distorted images. However, in most cases, the network \(\mathcal{N}\) is so large that it becomes expensive to include these distorted images for fine-tuning. Additionally, the introduction of extra variance in the data distribution caused by the distorted images may potentially degrade the performance of the tuned neural network.
To address this problem, our strategy is to learn a deformation map \(f:\Omega\rightarrow\Omega\) to deform \(\tilde{I}\) such that the deformed image \(I^{\prime}=\tilde{I}\circ f\) is closer to the distribution \(P_{\text{clear}}\). In other words, \(I^{\prime}\sim P_{\text{clear}}\). Essentially, \(f\approx\tilde{f}^{-1}\). To solve this problem, we propose the Quasiconformal Transformer Network (QCTN) to learn the deformation \(f\). The QCTN is lightweight, making it more cost-effective than retraining \(\mathcal{N}\) with a large dataset of distorted images.
### _Quasiconformal Theory_
As discussed in the previous subsection, our strategy for handling distorted images is to train the Quasiconformal Transformer Network (QCTN). Specifically, the QCTN learns a homeomorphic mapping \(f:\Omega\rightarrow\Omega\) to remove the geometric distortion from a distorted image \(\tilde{I}\). Each homeomorphic deformation \(f:\Omega\rightarrow\Omega\) is associated with a geometric quantity known as the _Beltrami coefficient_, defined as:
\[\mu=\frac{\partial f}{\partial\bar{z}}/\frac{\partial f}{\partial z}, \tag{2}\]
where \(\frac{\partial}{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right)\) and \(\frac{\partial}{\partial\bar{z}}=\frac{1}{2}\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right)\). Here, \(i\) represents the imaginary unit.
The Beltrami coefficient \(\mu:\Omega\rightarrow\mathbb{C}\) is a complex-valued function defined on the image domain \(\Omega\). It quantifies the local geometric distortion under the mapping \(f\). In particular, if \(\mu(z)=0\), then \(\frac{\partial f}{\partial\bar{z}}(z)=0\), indicating that \(f\) is conformal at \(z\). According to quasiconformal theories, there exists a one-to-one correspondence between the space of homeomorphic mappings and the space of Beltrami coefficients. Given a homeomorphic deformation, its associated Beltrami coefficient \(\mu\) can be obtained using Equation 2. Conversely, given a Beltrami coefficient \(\mu\) with \(||\mu||_{\infty}<1\), the associated homeomorphic mapping \(f\) can be reconstructed by solving Beltrami's equation. In particular, the condition \(||\mu||_{\infty}<1\) ensures that the reconstructed mapping \(f\) is bijective. In our framework, we aim to generate a bijective deformation \(f\) to remove the geometric distortion while preserving the essential characteristics of the original image. This can be achieved by controlling the Beltrami coefficient, which measures the local geometric distortion of the associated mapping.
In our proposed model, the QCTN takes the input image \(\tilde{I}\) and outputs the Beltrami coefficient \(\mu\) associated with the desired deformation map \(f\). To ensure the bijectivity of \(f\), \(\mu\) is constrained to satisfy \(||\mu||_{\infty}<1\) by applying an activation function. The resulting \(\mu\) is then fed into another network that outputs the mapping \(f\) corresponding to \(\mu\). The mapping \(f\) is then used to remove the geometric distortion from \(\tilde{I}\) for further processing.
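For illustration, the Beltrami coefficient of Equation 2 can be approximated numerically. The sketch below assumes a map \(f=u+iv\) sampled on a regular grid and uses central finite differences, rather than the piecewise-linear triangulation used in the paper; the Wirtinger derivatives are \(f_{z}=\frac{1}{2}((u_{x}+v_{y})+i(v_{x}-u_{y}))\) and \(f_{\bar{z}}=\frac{1}{2}((u_{x}-v_{y})+i(v_{x}+u_{y}))\).

```python
import numpy as np

def beltrami_coefficient(u, v, dx=1.0, dy=1.0):
    """mu = f_zbar / f_z of the grid-sampled map f = u + i v (Eq. 2)."""
    u_y, u_x = np.gradient(u, dy, dx)   # np.gradient returns axis-0 (y) then axis-1 (x)
    v_y, v_x = np.gradient(v, dy, dx)
    f_z = 0.5 * ((u_x + v_y) + 1j * (v_x - u_y))
    f_zbar = 0.5 * ((u_x - v_y) + 1j * (v_x + u_y))
    return f_zbar / f_z

# The identity map is conformal: mu vanishes everywhere.
yy, xx = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
print(np.abs(beltrami_coefficient(xx, yy)).max())  # ~0
```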
## IV Deformation-invariant Neural Network (DINN)
In this section, we provide a comprehensive explanation of our general framework, known as the Deformation-invariant Neural Network (DINN), designed to address imaging tasks on distorted images. Each component of DINN will be discussed in subsequent subsections.
### _Overall framework_
In this subsection, we provide an overview of the overall framework of the Deformation-invariant Neural Network (DINN), as depicted in Figure 2. DINN takes the distorted image as input, which is then processed by the Quasiconformal Transformer Network (QCTN). The QCTN, a lightweight neural network, outputs a deformation map aimed at removing the geometric distortion from the distorted image(s). The QCTN comprises two main components: (1) the Beltrami coefficient estimator
Fig. 2: The overall framework of the proposed Deformation-invariant Neural Network (DINN). This framework consists of three principal modules: the Beltrami Coefficient (BC) estimator, the Beltrami Solver network (BSNet), and the network dedicated to a specific downstream imaging task.
and (2) the Beltrami Solver Network (BSNet). The Beltrami coefficient estimator is responsible for computing the Beltrami coefficient \(\mu\) associated with the deformation map \(f\), which is used to rectify the geometric distortion in the distorted image \(\tilde{I}\). By applying \(f\), the distorted image is transformed into a less distorted version, denoted as \(I^{\prime}\), which aligns more closely with the distribution of clean, non-distorted images. The resulting \(I^{\prime}\) is subsequently fed into a downstream network tailored to the specific imaging task. The following subsection provides a detailed description of the Beltrami coefficient estimator, BSNet, and the loss function employed for network training.
### _Quasiconformal Transformer Network (QCTN)_
A crucial component in DINN is the incorporation of the Quasiconformal Transformer Network (QCTN). The distorted image is fed into the QCTN, which outputs a deformation map. This deformation map is then used to spatially transform the distorted image into one that is less distorted for further processing. To better control the geometric properties of the deformation map, we incorporate quasiconformality into the network. Specifically, the QCTN consists of two components: (1) the Beltrami coefficient estimator and (2) the Beltrami Solver network (BSNet).
The Beltrami coefficient estimator aims to estimate the Beltrami coefficient associated with the desired deformation map. The Beltrami coefficient measures the geometric distortion caused by the deformation map, allowing for easy control of the geometric properties of the deformation map. The BSNet then outputs the corresponding deformation map associated with the estimated Beltrami coefficient. In the following subsections, we will provide a detailed description of each component.
#### IV-B1 Beltrami coefficient estimator
The main feature of the QCTN is the utilization of the Beltrami coefficient (BC) to represent the deformation map, as opposed to the conventional approach of using the vector field. The BC, denoted as \(\mu\), quantifies the local geometric distortion caused by the deformation map. Specifically, if the norm of \(\mu\), denoted as \(|\mu|\), is close to zero, it indicates that the geometric distortion under the associated deformation map is minimal. Therefore, a loss function can be designed to minimize \(|\mu|\) in certain regions, thereby reducing the local geometric distortion.
Moreover, for a bijective deformation, the BC satisfies the condition \(||\mu||_{\infty}<1\). In this work, generating a bijective deformation map is crucial for mitigating the geometric distortion in the distorted image. As mentioned earlier, bijectivity is essential to preserve the fundamental characteristics of the original image. By incorporating the BC into the network, bijectivity can be easily enforced by introducing an activation function that guarantees \(||\mu||_{\infty}<1\). In the discrete case, the image domain is triangulated, and each deformation mapping is treated as a piecewise linear function over each triangular face. The first derivatives of a piecewise linear function are constant on each triangular face. Consequently, the Beltrami coefficient can be regarded as a complex-valued function defined over each triangular face.
As shown in Figure 3, the Beltrami coefficient estimator is an encoder-decoder network \(\mathcal{G}^{\theta}\) with trainable parameters \(\theta\), which takes a distorted image \(\tilde{I}\) as input and outputs a BC \(\mu\in\mathbb{C}^{m}\) associated with a deformation map \(f\) for restoring the distorted image. Here, \(m\) is the number of triangular faces in the discretization of the image domain. The deformation map \(f\) associated with the output BC \(\mu\) is obtained by the BSNet \(\mathcal{H}\), which will be described in the next subsection. This mapping \(f=\mathcal{H}(\mu)\) is then used to transform the distorted image \(\tilde{I}\) into a distortion-free one, \(I^{\prime}=\tilde{I}\circ f\). In order to obtain a bijective deformation map, the BC \(\mu\) outputted by \(\mathcal{G}^{\theta}\) should satisfy the condition \(||\mu||_{\infty}<1\). For this purpose, in the last layer of the Beltrami coefficient estimator, we apply the following activation function:
\[\mathcal{A}(\mu_{j})=\frac{e^{|\mu_{j}|}-e^{-|\mu_{j}|}}{e^{|\mu_{j}|}+e^{-| \mu_{j}|}}\textbf{arg}(\mu_{j}), \tag{3}\]
Fig. 3: The architecture of the Beltrami coefficient (BC) estimator.
where \(\mu_{j}\) is the \(j\)-th entry of \(\mu\in\mathbb{C}^{m}\) and \(\mathbf{arg}(\mu_{j})\) is the argument of the complex number \(\mu_{j}\).
The activation function \(\mathcal{A}\) ensures that the network outputs a \(\mu\) whose supremum norm is strictly less than 1, so that the associated deformation map is bijective. The parameters of \(\mathcal{G}^{\theta}\) are optimized by backpropagation to minimize suitable loss functions, which will be described in subsection IV-C.
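A minimal PyTorch sketch of such a constraining activation is given below. It realizes the intent of Equation 3 by squashing the modulus of each \(\mu_{j}\) with \(\tanh\) while preserving its argument, which guarantees \(||\mu||_{\infty}<1\); this phase-preserving reading, the function name, and the real/imaginary-channel interface are our assumptions rather than the paper's exact layer.

```python
import torch

def constrain_bc(mu_real: torch.Tensor, mu_imag: torch.Tensor, eps: float = 1e-8):
    """Squash the modulus of mu with tanh, keeping the argument, so ||mu||_inf < 1."""
    modulus = torch.sqrt(mu_real**2 + mu_imag**2 + eps)
    scale = torch.tanh(modulus) / modulus      # new modulus tanh(|mu|) lies in [0, 1)
    return mu_real * scale, mu_imag * scale
```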
#### IV-B2 Beltrami Solver Network (BSNet)
Another component in the QCTN is the pretrained Beltrami Solver Network (BSNet). The BSNet, denoted by \(\mathcal{H}\), takes the BC \(\mu\in\mathbb{C}^{m}\) as input and outputs its corresponding deformation map \(f=\mathcal{H}(\mu)\). Mathematically, the BSNet solves Beltrami's equation:
\[\frac{\partial f}{\partial\bar{z}}=\mu\frac{\partial f}{\partial z}. \tag{4}\]
Beltrami's equation has a variational formulation and can be converted into a system of elliptic partial differential equations:
\[\nabla\cdot\left(A\begin{pmatrix}u_{x}\\ u_{y}\end{pmatrix}\right)=0;\ \ \nabla\cdot\left(A\begin{pmatrix}v_{x}\\ v_{y}\end{pmatrix}\right)=0, \tag{5}\]
where \(A=\begin{pmatrix}\alpha_{1}&\alpha_{2}\\ \alpha_{2}&\alpha_{3}\end{pmatrix}\), \(\alpha_{1}=\frac{(\rho-1)^{2}+\tau^{2}}{1-\rho^{2}-\tau^{2}}\), \(\alpha_{2}=\frac{-2\tau}{1-\rho^{2}-\tau^{2}}\), \(\alpha_{3}=\frac{(\rho+1)^{2}+\tau^{2}}{1-\rho^{2}-\tau^{2}}\), and \(\mu=\rho+i\tau\). In the discrete case, the image domain \(\Omega\) is discretized by a triangulation mesh, and \(f\) is piecewise linear on each triangular face. Suppose \(f=\mathbf{u}+i\mathbf{v}\), where \(\mathbf{u}\) and \(\mathbf{v}\) are the coordinate functions of \(f\) defined on every vertex. Then, the system of elliptic PDEs can be discretized into two sparse linear systems: \(C_{1}\mathbf{u}=\mathbf{0}\) and \(C_{2}\mathbf{v}=\mathbf{0}\). These can be used to define a loss function to train the BSNet:
\[\mathcal{L}_{BSNet}=||C_{1}\mathbf{u}||_{1}+||C_{2}\mathbf{v}||_{1}. \tag{6}\]
The architecture of the BSNet is shown in Figure 4. The network consists of a short path (upper network) and a long path (bottom network). The goal is to design a smaller network with fewer parameters for efficient training. To achieve this goal, we consider the Fourier transform of \(\mu\). This is inspired by the property that the low-frequency component of \(\mu\) can effectively capture the overall pattern of the corresponding deformation map \(f\) [29]. In the long path, we first perform the discrete Fourier transform on \(\mu\), and then truncate the Fourier coefficient matrix to keep a few coefficients associated with the low-frequency component. The truncated Fourier coefficient matrix contains the major information of the corresponding mapping. After that, the truncated Fourier coefficient matrix is fed into the Domain Transform Layer (DTL), which imitates the process of transforming features from the frequency domain to the spatial domain. The network is then extended with multiple convolutional layers, each followed by an activation function. The truncation of the Fourier coefficient matrix greatly reduces the number of variables and parameters in the network. However, some subtle information may be lost due to the truncation of the high-frequency components of the Fourier coefficients; specifically, some local deformation patterns may be lost. To address this, we add a short path, which consists of a few layers of convolution and downsampling. Its output is concatenated with the output from the long path. This short path is shallow and adds a minimal training burden to the overall network. More details about the BSNet can be found in [30].
### _Training process of DINN_
In the DINN framework, our main task is to train the Beltrami coefficient estimator and the BSNet. The Beltrami coefficient estimator is lightweight, making the training process cost-effective. The BSNet can either be pretrained or trained simultaneously,
Fig. 4: The architecture of the Beltrami Solver Network
depending on the application. The downstream network for performing a specific imaging task is pretrained using the clean, undistorted training dataset. For ease of discussion, denote the Beltrami coefficient estimator, BSNet, and the downstream network for the imaging task as \(\mathcal{N}_{\theta}\), \(\mathcal{H}_{\phi}\), and \(\mathcal{T}_{\varphi}\), respectively.
To train the DINN framework, we optimize the loss function with suitable loss terms associated with each component of the DINN. The loss function \(\mathcal{L}\) is in the form:
\[\mathcal{L}(\theta,\phi)=\alpha\mathcal{L}_{est}+\beta\mathcal{L}_{BSNet}+\gamma \mathcal{L}_{task}. \tag{7}\]
\(\mathcal{L}_{est}\) guides the training of the Beltrami coefficient estimator by enforcing it to output a Beltrami coefficient associated with a suitable deformation map, which aligns the deformed image with its ground truth. For instance, suppose the distorted images and their corresponding ground truth images are available in the training data. Let \(\tilde{I}\) be the distorted image and \(I\) be its ground truth. Then, \(\mathcal{L}_{est}\) can be designed to measure the mean square error between the deformed image and the ground truth image:
\[\mathcal{L}_{est}=||\tilde{I}\circ\mathcal{H}_{\phi}\circ\mathcal{N}_{\theta} (\tilde{I})-I||_{2}. \tag{8}\]
In some applications where the ground truth deformation map \(f_{\tilde{I}}\) to restore the distorted image \(\tilde{I}\) is known, \(\mathcal{L}_{est}\) can be designed as:
\[\mathcal{L}_{est}=||\mathcal{H}_{\phi}\circ\mathcal{N}_{\theta}(\tilde{I})-f_ {\tilde{I}}||_{2}. \tag{9}\]
Additionally, \(\mathcal{L}_{BSNet}\) guides the training of the parameters \(\phi\) of the BSNet, ensuring that \(\mathcal{H}\) solves Beltrami's equation. In practice, this network can also be pretrained, and we can set \(\beta=0\).
Finally, \(\mathcal{L}_{task}\) is the loss function used to train the downstream network for the specific imaging task. It is included in the loss \(\mathcal{L}\) to guide the output deformation map \(f=\mathcal{H}_{\phi}\circ\mathcal{N}_{\theta}(\tilde{I})\) such that the deformed image \(I^{\prime}=\tilde{I}\circ f\) lies within the distribution of clean, undistorted images. By adding \(\mathcal{L}_{task}\) to the loss \(\mathcal{L}\), we aim to find \(f\) such that the deformed image \(I^{\prime}\) produced by \(f\) gives an accurate imaging result when fed into the pretrained downstream network \(\mathcal{T}_{\varphi}\). Note that \(\mathcal{T}_{\varphi}\) is pretrained on the training dataset of undistorted images. Thus, minimizing \(\mathcal{L}_{task}\) encourages \(f\) to deform \(\tilde{I}\) to an image that aligns with the training dataset of undistorted images. For example, for image classification, \(\mathcal{L}_{task}\) can be chosen as the cross-entropy of the probability vectors. If \(I^{\prime}\) remains distorted, \(\mathcal{T}_{\varphi}\) is likely to produce an incorrect probability vector, resulting in a large cross-entropy with the correct probability vector (the label). By minimizing the cross-entropy, we guide the Beltrami coefficient estimator \(\mathcal{N}_{\theta}\) to output a Beltrami coefficient associated with a deformation map \(f\) that restores the distorted image \(\tilde{I}\) to an image that aligns with the training dataset of undistorted images of its corresponding class. The cross-entropy between \(\mathcal{T}_{\varphi}(I^{\prime})\) and the ground truth probability vector will then be small.
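To make the optimization concrete, the sketch below outlines one training step for Eq. (7) in the setting where ground-truth images are available (Eq. (8)) and the BSNet is pretrained (\(\beta=0\)); the module and function names (`estimator`, `bsnet`, `task_net`, `warp`) are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def train_step(opt, estimator, bsnet, task_net, warp,
               img_distorted, img_clean, label, alpha=1.0, gamma=1.0):
    """One gradient step on L = alpha * L_est + gamma * L_task (Eq. (7))."""
    opt.zero_grad()
    mu = estimator(img_distorted)          # Beltrami coefficient estimator
    f = bsnet(mu)                          # deformation map solving Eq. (4)
    restored = warp(img_distorted, f)      # I' = I_tilde o f
    l_est = F.mse_loss(restored, img_clean)                 # Eq. (8)
    l_task = F.cross_entropy(task_net(restored), label)     # e.g., L_ce
    loss = alpha * l_est + gamma * l_task
    loss.backward()
    opt.step()
    return loss.item()
```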
In the following section, we demonstrate the application of the DINN framework to real imaging tasks.
## V Applications of DINN
In this section, we describe how the DINN framework can be applied to three real applications: (1) image classification, (2) image restoration, and (3) 1-1 facial verification.
### _Classification of distorted images_
The DINN framework can be applied to perform image classification on distorted images. In certain real-world scenarios, images can undergo geometric distortions. For example, images captured using a long-range camera may experience geometric distortions caused by atmospheric turbulence. When dealing with distorted images, the predictions made by a classification network \(\mathcal{T}_{c}\) can be inaccurate due to the mismatch between the distorted image \(\tilde{I}\) and the training dataset, which consists of clean and undistorted images. To tackle this issue, we utilize the DINN framework to develop a deep neural network specifically designed for classifying distorted images. The main idea is to incorporate the QCTN component before the downstream classification network. The overall network architecture is illustrated in Figure 5. Initially, a distorted image \(\tilde{I}\) is fed into the QCTN, which produces a deformation map \(f\). The image \(I^{\prime}\) deformed by \(f\) is subsequently fed into the pre-trained downstream classification network, resulting in a probability vector.
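The resampling step \(I^{\prime}=\tilde{I}\circ f\) can be realized with a differentiable sampler; one possible sketch uses PyTorch's `grid_sample`, under the assumption that the deformation map has already been normalized to the \([-1,1]^{2}\) grid convention:

```python
import torch.nn.functional as F

def warp(image, f):
    """Resample `image` of shape (B, C, H, W) by a deformation map `f` of
    shape (B, H, W, 2) with coordinates in [-1, 1], returning I' = I o f."""
    return F.grid_sample(image, f, mode='bilinear', align_corners=True)
```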
Fig. 5: DINN based image classification network.
Suppose we are given a training dataset of distorted images \(\tilde{I}\), whose classification labels are known. In certain situations, a training dataset may also be generated using physical experiments or mathematical simulations, resulting in paired images that comprise distorted images and their corresponding original counterparts. For example, one approach is to submerge a ground truth image in water and photograph it through the turbulent water surface, generating distorted images. This training dataset serves the purpose of guiding or initializing the parameters of the QCTN. To train the network, we optimize the parameters to minimize the following loss function:
\[\mathcal{L}_{c}=\alpha\mathcal{L}_{MSE}+\beta\mathcal{L}_{BSNet}+\gamma \mathcal{L}_{ce}, \tag{10}\]
where \(\mathcal{L}_{MSE}\) is the mean square error \(||I^{\prime}-I||_{2}^{2}\) between the deformed image and its ground truth, and \(\mathcal{L}_{ce}\) is the cross-entropy loss given by \(-\sum_{i=1}^{n}p_{i}\log q_{i}\), where \(\mathbf{p}=(p_{i})_{1\leq i\leq n}\) is the ground truth probability vector and \(\mathbf{q}=(q_{i})_{1\leq i\leq n}\) is the probability vector predicted by the classification network \(\mathcal{T}_{c}\). If the original undistorted images corresponding to the distorted images in the training dataset are unavailable, we can set \(\alpha=0\). The BSNet can be pretrained, in which case \(\beta\) can be set to 0. As mentioned in the previous subsection, \(\mathcal{L}_{ce}\) guides the Beltrami coefficient estimator \(\mathcal{N}_{\theta}\) to produce a Beltrami coefficient associated with a deformation map \(f\) that transforms the distorted image \(\tilde{I}\) into a deformed image \(I^{\prime}\) aligned with the distribution of clean, undistorted images.
### _Image restoration_
The DINN framework can be applied to the restoration of turbulence-distorted images, which may be affected by atmospheric or water turbulence. Our proposed model leverages a GAN-based architecture, as depicted in Figure 6. The deep neural network comprises the QCTN and an image deblurring module: the deblurring module restores blurry content, while the QCTN eliminates the geometric distortions that turbulence introduces. Initially, the geometrically distorted image \(\tilde{I}\) is fed into the QCTN, which generates an appropriate deformation map. The resulting deformed image, denoted as \(I^{\prime}\), is an improved version with the geometric distortions removed. However, \(I^{\prime}\) may suffer from blurring caused by the spatial resampling process. To address this, \(I^{\prime}\) is further processed by a color correction network (CCnet) \(\mathcal{N}_{cc}^{\theta}\), resulting in a color-corrected image \(I^{\prime\prime}\). The cascade of the QCTN and the CCnet, which produces the restored image \(I^{\prime\prime}\), serves as the generator within the GAN framework. Additionally, a discriminator \(D\) is trained to evaluate the quality of the generated image and provide feedback to the generator. Throughout the training process, the generator aims to enhance the quality of the generated image, while the discriminator strives to distinguish between restored and undistorted images.
In this work, our training dataset consists of pairs of geometrically distorted images and their corresponding undistorted originals. To train the network, we optimize the parameters so that they minimize the following loss function:
\[\mathcal{L}=a_{1}\mathcal{L}_{MSE}(I,I^{\prime})+a_{2}\mathcal{L}_{MSE}(I,I^{\prime\prime})+a_{3}\mathcal{L}_{vgg}(I,I^{\prime})+a_{4}\mathcal{L}_{vgg}(I,I^{\prime\prime})+a_{5}\mathcal{L}_{adv}(I,I^{\prime\prime}). \tag{11}\]
Here, \(\mathcal{L}_{MSE}\) is as defined in the last subsection. It is introduced in the loss function to guide the training of the QCTN as well as the CCnet. \(\mathcal{L}_{vgg}\) denotes the VGG loss, which measures the mean square error between the VGG features of two images:
\[\mathcal{L}_{vgg}(I_{1},I_{2})=||\Phi_{vgg}(I_{1})-\Phi_{vgg}(I_{2})||^{2}, \tag{12}\]
where \(\Phi_{vgg}(I_{1})\) and \(\Phi_{vgg}(I_{2})\) denote the VGG features of \(I_{1}\) and \(I_{2}\), respectively. The VGGNet is pretrained on the ImageNet dataset for a classification task, and the output of its last hidden layer is extracted and utilized as the VGG feature. Hence, the objective of the third and fourth terms is to encourage the VGG features of \(I^{\prime}\) and \(I^{\prime\prime}\) to closely resemble the VGG feature of the ground truth image \(I\). The last term \(\mathcal{L}_{adv}(I,I^{\prime\prime})\) is the adversarial loss defined as:
\[\mathcal{L}_{adv}(I,I^{\prime\prime})=\log(D(I))+\log(1-D(I^{\prime\prime})), \tag{13}\]
Fig. 6: DINN based image restoration network.
where \(D\) is the discriminator, which outputs a value in \([0,1]\) indicating the probability that an input image is real rather than generated. Minimizing the adversarial loss with respect to the generator guides the synthesized image \(I^{\prime\prime}\) towards closely resembling a clean, undistorted image that the discriminator recognizes as real. In this task, the CCnet serves as a deblurring module and can be pretrained. The discriminator plays a crucial role in the min-max game of the GAN model and must participate in the training process. The optimization of the network parameters then follows an alternating approach, similar to the conventional GAN model. In our experiments, we set the weight parameters to \(a_{1}=1.0\), \(a_{2}=0.1\), \(a_{3}=0.5\), \(a_{4}=0.1\), and \(a_{5}=0.1\).
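A hedged sketch of the generator-side objective in Eq. (11): the convolutional features of a pretrained torchvision VGG-19 stand in for \(\Phi_{vgg}\) (the paper uses the last hidden layer of a pretrained VGGNet, so this is an approximation), and `D` denotes the discriminator.

```python
import torch
import torch.nn.functional as F
import torchvision

vgg_feat = torchvision.models.vgg19(weights="DEFAULT").features.eval()

def l_vgg(i1, i2):
    return torch.mean((vgg_feat(i1) - vgg_feat(i2)) ** 2)   # Eq. (12)

def generator_loss(I, I1, I2, D, a=(1.0, 0.1, 0.5, 0.1, 0.1)):
    """I: ground truth; I1: QCTN output I'; I2: CCnet output I''."""
    l_adv = torch.log(1.0 - D(I2)).mean()    # generator part of Eq. (13)
    return (a[0] * F.mse_loss(I, I1) + a[1] * F.mse_loss(I, I2)
            + a[2] * l_vgg(I, I1) + a[3] * l_vgg(I, I2) + a[4] * l_adv)
```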
### _1-1 facial verification_
The DINN framework can also be applied to the problem of 1-1 facial verification. In this task, the objective is to determine whether a distorted facial image \(\tilde{I}^{x}\) belongs to the same person as another facial image \(I^{y}\). In real-world scenarios, facial images captured by long-range cameras often exhibit geometric distortion caused by air turbulence, particularly when considering zoomed-in images. Our proposed model is illustrated in Figure 7. Initially, the distorted image \(\tilde{I}^{x}\) is inputted into the image restoration network, as described in the previous subsection, resulting in a geometrically restored and color-corrected image \(I^{x}\). Subsequently, both the restored image \(I^{x}\) and the reference image \(I^{y}\) are fed into a feature extractor network, denoted as \(\mathcal{N}_{feature}\), which utilizes the _IR-50_ architecture [31]. The feature representations \(\mathcal{N}_{feature}(I^{x})\) and \(\mathcal{N}_{feature}(I^{y})\) are then passed to a similarity measure network, denoted as \(\mathcal{N}_{head}\), which employs the ArcFace similarity comparison method [32]. The final output of the model determines whether the two facial images belong to the same person, with a value of 1 indicating a match and 0 indicating a mismatch. During the training process, both \(\mathcal{N}_{feature}\) and \(\mathcal{N}_{head}\) are pre-trained. On the other hand, the image restoration network, which includes the QCTN, is trained according to the methodology described in the previous subsection.
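At inference time, the verification decision reduces to a similarity threshold on the two embeddings; a minimal sketch in which the module names and the threshold value are illustrative rather than the paper's exact choices:

```python
import torch.nn.functional as F

def verify(restore_net, feature_net, img_x_distorted, img_y, thresh=0.3):
    """Restore the distorted probe, embed both faces, and compare them with
    the cosine similarity underlying the ArcFace-style comparison."""
    x = feature_net(restore_net(img_x_distorted))
    y = feature_net(img_y)
    return F.cosine_similarity(x, y, dim=-1) > thresh   # True: same person
```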
## VI Experimental Results
In this section, we evaluate the efficacy of our proposed DINN framework through a series of experiments. Specifically, we assess the performance in image classification, image restoration, and 1-1 facial verification tasks involving distorted images using the DINN framework. We compare our results against those achieved by state-of-the-art methods. Additionally, we conduct self-ablation studies to explore the impact of various parameters and settings on the performance of the framework.
The experimental setting is described in detail below.
**Computational Resources and Parameters** To ensure a fair comparison, all models were trained using the RMSprop optimizer with a fixed learning rate of \(0.00001\). Each method underwent 100 epochs of optimization to achieve sufficient convergence. The batch size was set to 64, unless stated otherwise. The training process took place on a CentOS 8.1 central cluster computing node equipped with two Intel Xeon Gold 5220R 24-core CPUs and two NVIDIA V100 Tensor Core GPUs.
**Training Details** For the classification task, the classifier network is pre-trained and kept fixed. Subsequently, the QCTN module is trained using the loss function defined in Equation (10). In the image restoration and 1-1 facial verification tasks, the overall model consists of multiple components: the Beltrami coefficient estimator, the Color Correction Network (CCnet), and the discriminator. The training process follows an alternating minimization approach.
### _DINN for classification of distorted images_
In this subsection, we provide the experimental results of image classification for distorted images using the method introduced in subsection V-A. We evaluate the performance of the method on images distorted by different types of spatial deformations,
Fig. 7: DINN based 1-1 facial verification network.
specifically (1) affine transformations, (2) elastic transformations, and (3) a combination of affine and elastic transformations. The objective is to assess the capability of the proposed DINN framework in effectively handling various types of deformations.
**Affine Deformation** We evaluated the performance of our proposed method on the MNIST handwritten digit dataset distorted by affine transformations. To introduce these deformations, we applied rotation angles within the range of \([-\frac{\pi}{3},\frac{\pi}{3}]\) and scaling parameters within the range of \([0.2,0.6]\), while allowing translations within the image domain. The downstream image classification network utilized in our experiments is a convolutional neural network consisting of three convolutional layers and two fully connected layers. In our proposed model, the QCTN is appended before the downstream classification network to correct the geometric distortions. For comparison purposes, we also considered a related framework that employs the spatial transformer network (STN) [16]. In this alternative model, the STN is appended in front of the downstream classification network instead of the QCTN.
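One way to reproduce such affine distortions, offered as an assumption about the data pipeline rather than the authors' exact code, is torchvision's `RandomAffine`; note that \([-\frac{\pi}{3},\frac{\pi}{3}]\) corresponds to roughly \(\pm 60^{\circ}\):

```python
import torchvision.transforms as T

# Rotations in [-60, 60] degrees, scaling in [0.2, 0.6], and translations of
# up to 30% of the image size (the translation fraction is our choice).
distort = T.RandomAffine(degrees=60, translate=(0.3, 0.3), scale=(0.2, 0.6))
```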
During the classification stage, models equipped with transformer layers demonstrated better performance in terms of classification accuracy on the test dataset compared to the baseline convolutional neural network. The classification accuracy was notably low when the transformer layer was omitted. However, a significant improvement in classification accuracy was observed when the STN was added before the classification network. Furthermore, the incorporation of the QCTN resulted in even higher classification accuracy. A summary of the results can be found in Table I. The models were trained for 100 epochs to ensure convergence.
**Elastic Deformation** We assessed the performance of our proposed method on images distorted by general elastic spatial deformations. Elastic deformations are commonly encountered in various scenarios, such as capturing images through refractive
Fig. 8: Images produced by the deformation maps from different transformer layers. (a) Distorted images. (b) Visualization of the mapping generated by TPS-STN on a deformed image. (c) Image recovered by TPS-STN. (d) Visualization of the mapping generated by QCTN on a deformed image. (e) Image recovered by QCTN. The class names in the top-right corner of images in columns (a), (c), and (e) indicate the predicted class by the baseline CNN, TPS-STN, and DINN, respectively. The correct labels from the top row to the last row are Bird, Deer, Deer and Truck.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Deform Type & Method & Train ACC & Test ACC & Invertible \\ \hline \multirow{3}{*}{Affine} & CNN & 91.62 & 82.73 & \\ & ST-CNN & 97.97 & 94.90 & \\ & DINN-CNN & 97.45 & **96.32** & \\ \hline \multirow{3}{*}{Elastic} & CNN & 95.43 & 78.47 & \\ & TPS-CNN & 99.37 & 81.94 & \\ & DINN-CNN & 99.11 & **84.58** & \\ \hline \multirow{4}{*}{Affine \& Elastic} & CNN & 76.77 & 70.29 & \\ & ST-CNN & 81.65 & 77.21 & \\ & TPS-CNN & 86.13 & 80.63 & \\ & DINN-CNN & 86.48 & **83.06** & \\ \hline \hline \end{tabular}
\end{table} TABLE I: Classification accuracy (%) of different methods on images distorted by affine, elastic, and combined affine and elastic deformations.
surfaces like glass or water. To test the effectiveness of our method, we conducted experiments on the CIFAR-10 dataset with large elastic deformations. Figure 8 shows some examples of the deformed images. For this experiment, we employed the deep layer aggregation model (DLA) [33] as our downstream classification network. To ensure a fair comparison, we implemented a variant of the spatial transformer network (STN) with thin-plate spline transformation, called the _TPS-STN_. This variant outputs the mapped coordinates of the control points without constraint. The architecture of the TPS-STN module that predicts the mapped coordinates is the same as our Beltrami coefficient estimator. The primary distinction between QCTN and TPS-STN is that QCTN generates a bijective, folding-free deformation, whereas TPS-STN does not possess this property. The bijectivity of QCTN plays a crucial role in this imaging task.
Table I presents the classification accuracy results of the downstream classification network, the classification network with TPS-STN, and the classification network with QCTN. Again, the classification models equipped with a transformer network exhibit significantly better accuracy. Moreover, the QCTN yielded better results than TPS-STN, which is attributed to the non-bijective nature of the deformations produced by TPS-STN.
Figure 8 displays several examples of deformed images generated by the deformation maps produced by the transformer layer. The aim is for the deformed images to effectively alleviate the geometric distortions present in the input distorted images. The results demonstrate that the images restored by the DINN approach are notably more accurate and closely resemble an image from their respective classes. This improved restoration aids the classification network in better identifying the class to which each image belongs. The label in the top-right corner of each image indicates the class recognized by the classification network. Both the baseline classification network and the network with TPS-STN produced incorrect classifications, whereas, with the integration of QCTN, the classifications were accurate.
**Combined Deformation** We further evaluate the performance of our proposed model on images distorted by a combination of elastic and affine deformations, which generally involve large deformations. The experiment is conducted on the FashionMNIST dataset, utilizing the same downstream classification network as in the experiments on images distorted by affine transformations. We compare our method with the downstream classification network, the STN network, and the TPS-STN network. The classification accuracies are presented in Table I. Once again, the classification models equipped with a transformer network demonstrate significantly improved accuracy. Moreover, the use of QCTN yields notably superior results compared to both STN and TPS-STN. Even for such large deformations, our proposed model preserves the bijectivity of the deformation map, whereas TPS-STN fails to do so.
Figure 9 shows several examples of deformed images generated by the deformation maps produced by the transformer layer. The label in the top-right corner of each image indicates the class recognized by the classification network. The results demonstrate that the images restored by the DINN approach are notably more accurate and closely resemble an image from their respective classes, even under such large deformations. The baseline classification network, the network with STN, and the network with TPS-STN all produced incorrect classifications, whereas the classifications under the DINN framework were accurate.
### _Restoration of turbulence-distorted images_
In this subsection, we present the experimental results of image restoration for turbulence-distorted images using our proposed model, as introduced in subsection V-B. Turbulence-distorted images commonly occur when imaging through turbulent refractive
Fig. 9: Images produced by the deformation maps from different transformer layers. (a) Distorted images from FashionMNIST. (b) Visualization of the mapping generated by STN on a deformed image. (c) Image recovered by STN. (d) Visualization of the mapping generated by TPS-STN on a deformed image. (e) Image recovered by TPS-STN. (f) Visualization of the mapping generated by QCTN on a deformed image. (g) Image recovered by QCTN. The class names in the top-right corner of images in columns (a), (c), (e) and (g) indicate the predicted class by the baseline CNN, STN, TPS-STN, and DINN, respectively. The correct labels from the top row to the last row are Shirt and Sandal.
media, such as air and water, owing to the refraction and scattering of light [34]. These distortions pose significant challenges in achieving high-quality and undistorted images for further analysis.
In our experiments, we obtained the training dataset through the following process. To simulate air-turbulence distortion in images, we utilized the model proposed by [35]. This model requires specific parameters related to the virtual camera. For our experiment, we set the focal distance of the virtual camera to \(300\,mm\), with a lens diameter of \(5.357\,cm\) and a pixel size of \(4\times 10^{-3}\,mm\). The virtual camera was positioned at an elevation of \(4\,m\) with an object distance of \(2\,km\). For weak turbulence, we set the turbulence strength parameter \(C_{n}^{2}=3.6\times 10^{-13}\), and for strong turbulence, we used \(C_{n}^{2}=3.6\times 10^{-12}\). By applying the turbulence fields obtained from the simulated turbulence model, we distorted the ImageNet dataset, resulting in a training dataset comprising air-turbulence-distorted images. To create the water-turbulence image dataset, we utilized a physics-based ray tracer following the methodology described in [25]. This approach allowed us to simulate various types of waves for realistic water deformations. In our experiment, we specifically employed two water deformation types: _Ripple_ and _Ocean_. Some examples of the turbulence fields are shown in Figure 11.
In all of our experiments, we utilized a total of \(360,000\) images for training purposes. Additionally, we reserved a separate set of \(40,000\) images specifically for testing and evaluation.
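As a generic stand-in for the simulators of [35] and [25], which we do not reproduce here, a smooth random displacement field already conveys the kind of geometric warping these turbulence models induce; the `strength` and `smooth` parameters below are illustrative, not calibrated to \(C_{n}^{2}\).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_turbulence_warp(img, strength=50.0, smooth=8.0, seed=0):
    """Warp a grayscale image (H, W) by a smoothed random displacement
    field to mimic turbulence-like geometric distortion."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dx = gaussian_filter(rng.normal(size=(h, w)), smooth) * strength
    dy = gaussian_filter(rng.normal(size=(h, w)), smooth) * strength
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode='reflect')
```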
Fig. 11: Examples of turbulence fields. The left shows the turbulence field of the air turbulence; the right shows that of the water turbulence.
Fig. 10: Results of image restoration for images corrupted by ocean type water turbulence by different methods.
We compare our proposed DINN-GAN model with other state-of-the-art methods for the restoration of distorted images, namely Pix2Pix [36], DeblurGAN [20], CycleGAN [22], LiGAN [26], and DTD-GAN [27]. Figure 10 presents the image restoration results for images distorted by ocean turbulence. Our proposed method achieves the best results by successfully removing geometric distortions. In contrast, the restored images produced by other methods still exhibit some turbulence distortions. Similarly, Figure 12 displays the image restoration results for images distorted by air turbulence, where, once again, our method outperforms the other approaches. Table II provides a quantitative comparison among the different methods, further demonstrating that our method delivers the best results.
Furthermore, we assessed the performance of the DINN-GAN model on real images captured by a digital camera, capturing scenes inside a pool where the images were distorted due to water flow. The distorted images are presented in Figure 13 (left), while Figure 13 (right) displays the restored image using the DINN-GAN model. Our model adeptly restores the images by effectively removing the geometric distortions. This once again highlights the effectiveness of our proposed method.
### _1-1 facial verification under air turbulence_

We next evaluate the DINN framework on 1-1 facial verification, where facial images are distorted by air turbulence simulated with the model proposed by [35]. This experiment used the same set of parameters as in the previous subsection, with the turbulence strength parameter set to \(C_{n}^{2}=3.6\times 10^{-13}\). We utilized 450 subjects for training and reserved the remaining 50 subjects for testing.
Figure 14 showcases some ground truth facial images in the first column, while the last column displays the same images distorted by strong air turbulence. The figure also presents the restored images produced by our proposed method, along with Pix2Pix, DeblurGAN, CycleGAN, LiGAN, and DTD-GAN. Additionally, for each example, corresponding error maps are included, defined as the mean square error between the predicted and ground truth values at each pixel. In these error maps, red indicates higher error, while blue represents lower error. Evidently, the restored images using the DINN framework exhibit the most favorable results. This observation is further supported by the quantitative comparison presented in Table III, which also includes the accuracy of 1-1 facial verification using different methods. Once again, our method achieves significantly higher accuracy compared to the other methods.
### _Self Ablation_
The performance of the proposed Deformation-Invariant Neural Network (DINN) as a learning approach may rely on the chosen network architecture. It is crucial to strike a balance between the depth of the architecture and its convergence capabilities. An architecture that is too shallow may fail to converge adequately, resulting in suboptimal performance. On the other hand,
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Depth of Net & 2 & 3 & 4 & 5 \\ \hline PSNR & 20.2672 & 21.4485 & 21.6674 & 21.6931 \\ SSIM & 0.5325 & 0.5977 & 0.6134 & 0.6348 \\ MSE & 0.0339 & 0.0294 & 0.0281 & 0.0273 \\ \hline \hline Conv. Type & Quadruple & Triple & Double & Single \\ \hline PSNR & 21.8411 & 21.7485 & 21.4485 & 20.0812 \\ SSIM & 0.6353 & 0.6163 & 0.5977 & 0.5184 \\ MSE & 0.0252 & 0.0268 & 0.0294 & 0.0326 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Self-ablation results on the number of downsampling levels (top) and convolution layers per level (bottom).
Fig. 13: Image restoration results for real images distorted by water turbulence.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline Metrics & Sharp & Distorted & Pix2Pix & DeblurGAN & CycleGAN & Li _et al._ & Rai _et al._ & DINN-GAN \\ \hline PSNR & / & 21.6473 & 22.1846 & 24.4311 & 24.3958 & 25.3578 & 25.5666 & **26.3400** \\ SSIM & / & 0.6738 & 0.6185 & 0.7126 & 0.7129 & 0.7931 & 0.8015 & **0.8604** \\ MSE & / & 0.0097 & 0.0077 & 0.0081 & 0.0084 & 0.0067 & 0.0065 & **0.0063** \\ Accuracy & 95.31 & 81.23 & 83.58 & 85.08 & 84.98 & 86.76 & 88.53 & **90.15** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Recognition accuracy and image quality evaluation on distorted human faces.
an excessively deep architecture may incur excessive training cost. To determine the optimal configuration, we conduct two self-ablation studies within our model. Specifically, we investigate the influence of the number of downsampling levels and convolution layers on the performance of the DINN-based image restoration model. This analysis allows us to gain insights into the impact of these factors on the overall effectiveness of the DINN framework.
**Influence of Downsampling Levels** The downsampling levels in the encoder-decoder architecture utilized by the BC estimator play a vital role in our model. To assess the influence of the number of downsampling levels on the generalization ability of our model, we conducted a self-ablation study. The results presented in Table IV demonstrate that a UNet architecture with three downsampling levels is sufficient to achieve effective learning and generate high-quality restoration mappings. This finding highlights the importance of striking a balance between the complexity of the architecture and its performance.
**Influence of Convolution Layers**
The number of convolution layers in each level of the encoder-decoder architecture is another crucial factor that affects the restoration quality. The results presented in Table IV indicate that employing a double-convolution configuration for each level in the encoder-decoder architecture is highly effective in producing satisfactory restoration results. This finding highlights the importance of carefully considering the number of convolution layers at each level to achieve optimal performance in the restoration process.
## VII Conclusion and Future Work
In this paper, we have introduced the deformation-invariant neural network (DINN) framework to solve the challenging problem of imaging tasks involving geometrically distorted images. Our proposed framework, incorporating the quasiconformal transformer network (QCTN), has demonstrated its effectiveness in addressing various imaging tasks, including image classification of
Fig. 14: Image restoration results for images severely distorted by strong air turbulence using different methods. The corresponding error maps, defined as the mean square error between the predicted and ground truth values at each pixel, are shown below the restored images. In the error maps, red indicates higher error, while blue represents lower error.
distorted images, image restoration in the presence of atmospheric or water turbulence, and 1-1 facial verification under strong air turbulence.
The key contributions of our work include the development of DINN, which ensures consistent latent features for geometrically distorted images capturing the same underlying object or scene. We have introduced the portable QCTN component, which allows large pretrained networks to process heavily distorted images without requiring additional tuning, thereby reducing computational costs. The QCTN generates bijective deformation maps that preserve the salient features of the original images, resulting in more accurate restoration and recognition results. Our experimental results have shown that the proposed DINN framework outperforms existing GAN-based restoration methods in scenarios involving atmospheric turbulence and water turbulence. Furthermore, the application of DINN to 1-1 facial verification under strong air turbulence has demonstrated its efficacy in enhancing the accuracy of facial recognition even in adverse conditions.
While our proposed framework has yielded promising results, there are still several avenues for future research. One potential direction is to investigate the application of the DINN framework to other imaging tasks, such as image registration and image segmentation. Additionally, it is worth noting that the current DINN framework may yield less satisfactory outcomes when confronted with very extreme deformations. Therefore, further exploration is needed to enhance the ability of the proposed model to handle such challenging scenarios.
In conclusion, the proposed DINN framework, incorporating the QCTN component, offers a powerful solution for addressing imaging tasks involving geometrically distorted images. Our experimental results have demonstrated its superiority in image classification, image restoration and facial verification tasks under challenging conditions. The DINN framework opens up new possibilities for handling geometric distortions in various applications and provides a valuable contribution to the field of deep learning in imaging and computer vision.
|
2308.06679 | Separable Gaussian Neural Networks: Structure, Analysis, and Function
Approximations | The Gaussian-radial-basis function neural network (GRBFNN) has been a popular
choice for interpolation and classification. However, it is computationally
intensive when the dimension of the input vector is high. To address this
issue, we propose a new feedforward network - Separable Gaussian Neural Network
(SGNN) by taking advantage of the separable property of Gaussian functions,
which splits input data into multiple columns and sequentially feeds them into
parallel layers formed by uni-variate Gaussian functions. This structure
reduces the number of neurons from O(N^d) of GRBFNN to O(dN), which
exponentially improves the computational speed of SGNN and makes it scale
linearly as the input dimension increases. In addition, SGNN can preserve the
dominant subspace of the Hessian matrix of GRBFNN in gradient descent training,
leading to a similar level of accuracy to GRBFNN. It is experimentally
demonstrated that SGNN can achieve 100 times speedup with a similar level of
accuracy over GRBFNN on tri-variate function approximations. The SGNN also has
better trainability and is more tuning-friendly than DNNs with ReLU and Sigmoid
functions. For approximating functions with complex geometry, SGNN can lead to
three orders of magnitude more accurate results than a ReLU-DNN with twice the
number of layers and the number of neurons per layer. | Siyuan Xing, Jianqiao Sun | 2023-08-13T03:54:30Z | http://arxiv.org/abs/2308.06679v1 | # Separable Gaussian Neural Networks: Structure, Analysis, and Function Approximations
###### Abstract
The Gaussian-radial-basis function neural network (GRBFNN) has been a popular choice for interpolation and classification. However, it is computationally intensive when the dimension of the input vector is high. To address this issue, we propose a new feedforward network - Separable Gaussian Neural Network (SGNN) - by taking advantage of the separable property of Gaussian functions, which splits input data into multiple columns and sequentially feeds them into parallel layers formed by uni-variate Gaussian functions. This structure reduces the number of neurons from \(O(N^{d})\) of GRBFNN to \(O(dN)\), which exponentially improves the computational speed of SGNN and makes it scale linearly as the input dimension increases. In addition, SGNN can preserve the dominant subspace of the Hessian matrix of GRBFNN in gradient descent training, leading to a similar level of accuracy to GRBFNN. It is experimentally demonstrated that SGNN can achieve a 100-times speedup with a similar level of accuracy over GRBFNN on tri-variate function approximations. The SGNN also has better trainability and is more tuning-friendly than DNNs with ReLU and Sigmoid functions. For approximating functions with complex geometry, SGNN can lead to three orders of magnitude more accurate results than a ReLU-DNN with twice the number of layers and the number of neurons per layer.
keywords: Function approximations, Separable Gaussian Neural Networks, Gaussian-radial-basis functions, Separable functions, Subspace gradient descent
Footnote †: journal: Neural Networks
## 1 Introduction
Radial-basis functions have many important applications in fields such as function interpolation (Dyn et al., 1986), meshless methods (Duan, 2008), clustering classification (Wu, 2012), surrogate models (Akhtar and Shoemaker, 2016), autoencoders (Daoud et al., 2019), and dynamic system design (Yu et al., 2011). The Gaussian-radial-basis-function neural network (GRBFNN) is a neural network with one hidden layer that produces output
in the form
\[\tilde{f}({\bf x})=\sum_{k=1}^{N}W_{k}G_{k}({\bf x}), \tag{1}\]
where \(G_{k}({\bf x})\) is a radially-symmetric unit represented by the Gaussian function such as
\[G_{k}({\bf x})=\exp\left(-\frac{1}{2\sigma_{k}^{2}}||{\bf x}-\mathbf{ \mu}_{k}||^{2}\right). \tag{2}\]
Herein, \(\mathbf{\mu}_{k}\) and \(\sigma_{k}\) are the center and width of the unit, which can be tuned to adjust its localized response. This locality is then utilized to approximate the output of a nonlinear mapping through a linear combination of Gaussian units. Although it has been shown that GRBFNN outperforms multilayer perceptrons (MLPs) in generalization (Tao, 1993), tolerance to input noise (Moody and Darken, 1989), and learning efficiency with a small set of data (Moody and Darken, 1989), the network is not scalable for problems with high-dimensional input, because the number of neurons needed for accurate predictions, and the corresponding computational cost, grows exponentially with the number of dimensions. This paper aims to tackle this issue and make the network available for high-dimensional problems.
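For reference, evaluating Eqs. (1) and (2) takes only a few lines of NumPy:

```python
import numpy as np

def grbfnn(x, centers, widths, weights):
    """GRBFNN output for one input x of shape (d,), with centers (N, d),
    widths (N,), and output weights (N,); see Eqs. (1) and (2)."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)      # ||x - mu_k||^2
    units = np.exp(-sq_dist / (2.0 * widths ** 2))    # Gaussian units G_k(x)
    return weights @ units
```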
GRBFNN was proposed by Moody and Darken (1989) and Broomhead and Lowe (1988) in the late 1980s for classification and function approximations. It was soon proved that GRBFNN is a universal approximator (Hornik et al., 1989; Park and Sandberg, 1991; Leshno et al., 1993) that can approximate a real-valued function arbitrarily well given a sufficient number of neurons. The proof of universal approximability for GRBFNN can be interpreted as a process beginning with partitioning the domain of a target function into a grid, followed by using localized radial-basis functions to approximate the target function in each grid cell, and then aggregating the localized functions to globally approximate the target function. It is evident that this approach is not feasible for high-dimensional problems because it leads to exponential growth in the number of neurons as the number of input dimensions increases. For example, \(O(N^{d})\) neurons will be required to approximate a \(d\)-variate function, with the domain of each dimension divided into \(N\) segments.
To address this issue, researchers have heavily focused on selecting the optimal number of neurons as well as their centers and widths of GRBFNN such that the features of the target nonlinear map are well captured by the network. This has been mainly investigated through two strategies: (1) using supervised learning with dynamical adjustment of neurons (e.g., numbers, centers, and widths) according to the prescribed criteria and (2) performing unsupervised-learning-based preprocessing on input to estimate the placement and configuration of neurons.
For the former, Poggio and Girosi (1990) as well as Wettschereck and Dietterich (1991) applied gradient descent to train generalized-radial-basis-function networks that have trainable centers. Regularization techniques (Poggio and Girosi, 1990) were adopted to maintain the parsimonious structure of GRBFNN. Platt (1991) developed a two-layer network that dynamically allocates localized Gaussian neurons to the positions where the output pattern is not well represented. Chen et al. (1991) adopted an Orthogonal Least Square (OLS) method and introduced a procedure that iteratively selects the optimal centers that minimize the error reduction ratio until the desired accuracy is achieved. Huang et al. (2005)
proposed a growing and pruning strategy to dynamically add/remove neurons based on their contributions to learning accuracy.
The latter strategy, unsupervised-learning-based preprocessing, has been more popular because it decouples the estimation of centers and widths from the computation of weights, which reduces both program complexity and computational load. Moody and Darken (1989) used the k-means clustering method (Wu, 2012) to determine the centers that minimize the Euclidean distance between the training set and centers, followed by the calculation of a uniform width by averaging the distance to the nearest neighbor of all units. Carvalho and Brizzotti (2001) investigated different clustering methods such as the iterative optimization (IO) technique, depth-first search (DF), and the combination of IO and DF for target recognition by RBFNNs. Niros and Tsekouras (2009) proposed a hierarchical fuzzy clustering method to estimate the number of neurons and trainable variables.
The optimization of widths has been of great interest more recently. Yao et al. (2010) numerically observed that the optimal widths of radial-basis functions are affected by the spatial distribution of the training data and the nonlinearity of the approximated functions. With this in mind, they developed a method that determines the widths using the Euclidean distance between centers and second-order derivatives of a function. However, calculating the width of each neuron is computationally expensive. Instead of assigning each neuron a distinct width, it makes more sense to assign different widths to the neurons that represent different clusters for computational efficiency. Therefore, Yao et al. (2012) further proposed a method to optimize widths by dividing a global optimization problem into several subspace optimization problems that can be solved concurrently and then coordinated to converge to a global optimum. Similarly, Zhang et al. (2019) introduced a two-stage fuzzy clustering method to split the input space into multiple overlapped regions that are then used to construct a local Gaussian-radial-basis-function network.
However, the aforementioned methods all suffer from the curse of dimensionality. As the input dimension grows, the selection of optimal neurons itself can become cumbersome. To compound the problem, the number of optimal neurons can also rise exponentially when approximating high-dimensional and geometrically complex functions. Furthermore, these methods are designed for CPU-based, general-purpose computing machines and are not appropriate for tapping into modern GPU-oriented machine-learning tools (Abadi et al., 2016; Paszke et al., 2019), whose computational efficiency drops significantly when handling branching statements and dynamic memory allocation. This gap motivates us to reevaluate the structure of GRBFNN. As stated previously, the localized property of Gaussian functions is beneficial for identifying the parsimonious structure of GRBFNN with low input dimensions, but it also leads to a blow-up of the number of neurons in high-dimensional situations.
Given that the recent development of deep neural networks has shown promise in solving such problems, _the main goal of this paper is to develop a deep-neural-network representation of GRBFNN such that it can be used for solving very high dimensional problems._ We approach this problem by utilizing the separable property of Gaussian radial basis functions. That is, every Gaussian-radial-basis function can be decomposed into the product of multiple uni-variate Gaussian functions. Based on this property, we construct a new neural network, namely separable Gaussian neural network (SGNN), whose number of layers is equal to the number of input dimensions, with the neurons of each layer formed by the corresponding
uni-variate Gaussian functions. By dividing the input into multiple columns by dimension and feeding them into the corresponding layers, an output equivalent to that of a GRBFNN is constructed from multiplications and summations in the forward propagation. It should be noted that Poggio and Girosi (1990) reported the separable property of Gaussian-radial-basis functions and proposed using it for neurobiology as early as 1990.
SGNN offers several advantages.
* The number of neurons of SGNN is given by \(O(dN)\) and increases linearly with the dimension of the input while the number of neurons of GRBFNN given by \(O(N^{d})\) grows exponentially. This reduction of neurons also decreases the number of trainable variables from \(O(N^{d})\) to \(O(dN^{2})\), yielding a more compact network than GRBFNN.
* The reduction of trainable variables further decreases the computational load during training and testing of neural networks. As shown in Section 3, this has led to 100 times speedup of training time for approximating tri-variate functions.
* SGNN is much easier to tune than other MLPs. Since the number of layers in SGNN is equal to the number of dimension of the input data, the only tunable network-structural hyper-parameter is the layer width, i.e. the number of neurons in a layer. This can significantly alleviate the tuning workload as compared to other MLPs that must simultaneously tune the width and depth of layers.
* SGNN holds a similar level of accuracy as GRBFNN, making it particularly suitable for approximating multi-variate functions with complex geometry. In Section 7, it is shown that SGNN can yield three orders of magnitude more accurate approximations for complex functions than MLPs with ReLU and Sigmoid functions.
The rest of this paper is organized as follows. In Section 2, we introduce the structure of SGNN and use it to approximate a multi-variate real-value function. In Section 3, we compare SGNN and GRBFNN regarding the number of trainable variables and computational complexity of forward and backward propagation. In Section 4, we show that SGNN can preserve the dominant sub-eigenspace of the Hessian of GRBFNN in the gradient descent search. This property can help SGNN maintain a similar level of accuracy as GRBFNN while substantially improving computational efficiency. In Section 5, we show the computational time of SGNN scales linearly with the increase of dimension and demonstrate its efficacy in function approximations through numerous examples. In Sections 6 and 7, extensive comparisons between SGNN and GRBFNN and between SGNN and MLPs are performed. Finally, the conclusions are summarized in Section 8.
## 2 Separable-Gaussian Neural Networks
**Definition 2.1**.: _A \(d\)-variate function \(f(x_{1},x_{2},\ldots,x_{d})\) is separable if it can be expressed as a product of multiple uni-variate functions; i.e.,_
\[f(x_{1},x_{2},\ldots,x_{d})=f_{1}(x_{1})\cdot f_{2}(x_{2})\cdots f_{d}(x_{d}). \tag{3}\]
**Remark**.: _The Gaussian radial-basis function_
\[G(\mathbf{x})=\exp\left(-\sum_{i=1}^{d}\frac{(x_{i}-\mu_{i})^{2}}{2\sigma_{i}^{2} }\right), \tag{4}\]
_is separable and can be represented in the form_
\[G(\mathbf{x})=\prod_{k=1}^{d}\varphi^{(k)}(x_{k}), \tag{5}\]
_where \(\varphi^{(k)}(x_{k})=\exp(-\frac{1}{2}(x_{k}-\mu_{k})^{2}/\sigma_{k}^{2})\), with \(k=1,2,\ldots,d\)._
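The identity in Eq. (5) is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
x, mu = rng.normal(size=d), rng.normal(size=d)
sigma = rng.uniform(0.5, 2.0, size=d)

lhs = np.exp(-np.sum((x - mu) ** 2 / (2 * sigma ** 2)))   # Eq. (4)
rhs = np.prod(np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)))  # Eq. (5)
assert np.isclose(lhs, rhs)
```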
The product chain in Eq. (5) can be constructed through the forward propagation of a feedforward network with a single neuron per layer where \(\varphi^{(k)}(x_{k})\) is the neuron of the \(k\)-th layer. This way, the multi-variate Gaussian function \(G(\mathbf{x})\) is reconstructed at the output of the network. By adding more neurons to each layer and assigning weights to all edges, we can eventually construct a network whose output is equivalent to the output of a GRBFNN. Fig. 1 shows an example of an SGNN approximating a tri-variate function. Next we use this property to define SGNN.
**Definition 2.2**.: _The separable-Gaussian neural network (SGNN) with \(d\)-dimensional input
Figure 1: The SGNN that approximates a tri-variate function. The input is divided and fed sequentially to each layer. Therefore, the depth (layers) of the NN is identical to the number of input dimensions. In this paper, the weights of the output layer are unity.
_can be constructed in the form_
\[\mathcal{N}_{i}^{(0)} =x_{i},\ 1\leq i\leq d, \tag{6}\] \[\mathcal{N}_{i}^{(1)} =\varphi_{i}^{(1)}(x_{1},\mu_{i}^{(1)},\sigma_{i}^{(1)}),\ 1\leq i\leq N_{1},\] (7) \[\mathcal{N}_{i}^{(\ell)} =\varphi_{i}^{(\ell)}(x_{\ell},\mu_{i}^{(\ell)},\sigma_{i}^{(\ell)})\sum_{j=1}^{N_{\ell-1}}W_{ij}^{(\ell-1)}\mathcal{N}_{j}^{(\ell-1)},\ \ 2\leq\ell\leq d,\ 1\leq i\leq N_{\ell},\] (8) \[\bar{f}(\mathbf{x}) =\mathcal{N}(\mathbf{x})=\sum_{j=1}^{N_{d}}\mathcal{N}_{j}^{(d)}, \tag{9}\]
_where \(N_{i}\) (\(i=1,2,\ldots,d\)) represents the number of neurons of the \(i\)-th layer, and \(\mathcal{N}_{i}^{(l)}\) represents the output of the \(i\)-th Gaussian neuron (activation function) of the \(l\)-th layer._
Substitution of Eqs. (6) to (8) into Eq. (9) yields
\[\bar{f}(\mathbf{x})=\sum_{i_{d}=1}^{N_{d}}\sum_{i_{d-1}=1}^{N_{d-1}}\cdots\sum _{i_{1}=1}^{N_{1}}\left[W_{i_{d}i_{d-1}}^{(d-1)}W_{i_{d-1}i_{d-2}}^{(d-2)}\ldots W _{i_{2}i_{1}}^{(1)}\right]\prod_{\ell=1}^{d}\varphi_{i_{\ell}}^{(\ell)}(x_{ \ell}), \tag{10}\]
with
\[\prod_{\ell=1}^{d}\varphi_{i_{\ell}}^{(\ell)}(x_{\ell})=\varphi_{i_{d}}^{(d)} (x_{d})\varphi_{i_{d-1}}^{(d-1)}(x_{d-1})\ldots\varphi_{i_{1}}^{(1)}(x_{1}), \tag{11}\]
where \(W_{i_{l+1}i_{l}}^{(l)}\) (\(l=1,2,\ldots,d-1\)) represents the weight connecting the \(i_{l+1}\)-th neuron of the \((l+1)\)-th layer and the \(i_{l}\)-th neuron of the \(l\)-th layer. The loss function of the SGNN is defined in the form
\[J=\|f-\bar{f}\|_{2}=\sqrt{\sum_{i=1}^{m}\left[f(\mathbf{x}_{i})-\bar{f}(\mathbf{x}_{i})\right]^{2}}, \tag{12}\]
where the sum runs over the \(m\) samples \(\mathbf{x}_{i}\) of the training dataset.
The center \(\mu_{i}^{(l)}\) and width \(\sigma_{i}^{(l)}\) in the Gaussian function \(\varphi_{i}^{(l)}\) can also be treated as trainable. They are not included in this discussion for simplicity.
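A minimal PyTorch sketch of the forward pass in Eqs. (6)-(9); the tensor layout is our assumption, and the centers and widths are held fixed as in the discussion above.

```python
import torch

def sgnn_forward(x, weights, mu, sigma):
    """x: (batch, d) inputs; mu[l], sigma[l]: (N,) parameters of the Gaussian
    neurons of layer l+1; weights is a list of d-1 matrices, where
    weights[l-1] maps layer-l activations (width N_l) to layer l+1."""
    d = x.shape[1]
    # First layer, Eq. (7): uni-variate Gaussians of x_1
    h = torch.exp(-0.5 * ((x[:, 0:1] - mu[0]) / sigma[0]) ** 2)
    for l in range(1, d):
        phi = torch.exp(-0.5 * ((x[:, l:l+1] - mu[l]) / sigma[l]) ** 2)
        h = phi * (h @ weights[l - 1].T)   # Eq. (8)
    return h.sum(dim=1)                    # Eq. (9)
```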
## 3 SGNN vs. GRBFNN
Without loss of generality, the analysis below will assume that each hidden layer has \(N\) neurons. To understand how the weights of SGNN relate to those of GRBFNN, we equate Eqs. (1) and (10), which yields a nonlinear map
\[\mathcal{W}_{j}=g_{j}\left(W_{i_{d}i_{d-1}}^{(d-1)},W_{i_{d-1}i_{d-2}}^{(d-2)},\ldots,W_{i_{2}i_{1}}^{(1)}\right), \tag{13}\]
whose explicit form is
\[\mathcal{W}_{j}=W_{i_{d}i_{d-1}}^{(d-1)}W_{i_{d-1}i_{d-2}}^{(d-2)}\ldots W_{i_ {2}i_{1}}^{(1)}, \tag{14}\]
with
\[j=i_{1}+i_{2}N+\cdots+i_{d}N^{d-1}. \tag{15}\]
It is evident that SGNN can be transformed into GRBFNN. However, GRBFNN can be converted into SGNN if and only if the mapping of Eq. (14) is invertible.
Because the mapping of Eq. (14) is not uniquely invertible, it is difficult to prove the universal approximability of SGNN. However, this paper will present extensive numerical experiments to show that SGNN can achieve comparable (occasionally even greater) accuracy with much less computation effort than GRBFNN. In addition, SGNN can have superior performance in approximating complex functions than deep neural networks with activation functions such as ReLU and Sigmoid as shown in Section 7.
In the following, we demonstrate the computational efficiency of SGNN over GRBFNN in terms of trainable variables and the number of floating-point operations of forward and backward propagation.
### Trainable Variables
Let us now treat the center and width of the uni-variate Gaussian function in SGNN as trainable. The total number \(N_{t}\) of trainable variables of SGNN is given by
\[N_{t}=\begin{cases}N+2N&\mathbf{x}\in\mathbb{R}^{1},\\ (d-1)N^{2}+2dN&\mathbf{x}\in\mathbb{R}^{d},\text{ for }d\geq 2.\end{cases} \tag{16}\]
Note that the number of trainable variables of GRBFNN is \(N^{d}\), identical to its number of neurons. SGNN and GRBFNN have identical weights when the number of layers is smaller than or equal to two. In other words, they are mutually convertible and the mapping of Eq. (14) is invertible when \(d\leq 2\). However, for high-dimensional problems, as shown in Table 1, SGNN can substantially reduce the number of trainable variables, making it more tractable than GRBFNN.
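The counts in Eq. (16) versus the \(N^{d}\) weights of GRBFNN can be tabulated directly:

```python
def n_trainable(d, N):
    """Trainable variables of SGNN (Eq. (16)) and of GRBFNN (N**d)."""
    sgnn = 3 * N if d == 1 else (d - 1) * N ** 2 + 2 * d * N
    return sgnn, N ** d

for d in (2, 4, 6, 8):
    print(d, n_trainable(d, N=10))   # e.g., d=6: 620 vs. 1,000,000
```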
### Forward Propagation
Assume the size of the input dataset is \(m\). Using Eqs. (6) to (9), we can estimate the number of floating-point operations (FLOP) of the forward pass in SGNN. More specifically, the number of FLOP to calculate the output of the \(k\)-th layer with the input from the previous layer is
\[FLOP^{(k)}(\mathcal{N}(\mathbf{x}))=m(2N^{2}+6N),\text{ for }2\leq k\leq d, \tag{17}\]
where \(2N^{2}\) is the number of arithmetic operations for the product of weights and Gaussian functions of the \(k\)-th layer, \(6N\) is the number of operations for evaluating the Gaussian functions of the layer, and \(m\) is the size of the input dataset. In addition, the numbers of FLOP associated with the first and output layers are
\[FLOP^{(1)}(\mathcal{N}(\mathbf{x}))=6mN, \tag{18}\] \[FLOP^{(d+1)}(\mathcal{N}(\mathbf{x}))=mN. \tag{19}\]
\begin{table}
\begin{tabular}{c|c|c} \hline & Neurons & No. of variables \\ \hline SGNN & \(O(dN)\) & \(O(dN^{2})\) \\ GRBFNN & \(O(N^{d})\) & \(O(N^{d})\) \\ \hline \end{tabular}
\end{table}
Table 1: Neurons and trainable variables of SGNN and GRBFNN.
Therefore, the total number of FLOP is
\[O(FLOP_{fp})=O\left(\sum_{i=1}^{d+1}FLOP^{(i)}\right)=O(mdN^{2}). \tag{20}\]
The number of operations increases linearly with the increase of the number of layers or the dimension of the input vector \(d\). On the other hand, the computational complexity of FLOP of GRBFNN is
\[O(FLOP_{\widetilde{fp}})=O(mN^{d}), \tag{21}\]
regardless of the trainability of the centers and widths of the Gaussian functions.
### Backward Propagation
Accurately estimating the computational complexity of backward propagation is challenging because techniques such as automatic differentiation (Baydin et al., 2018) and computational graphs (Abadi et al., 2016) optimize the original mathematical operations to improve performance. Automatic differentiation evaluates the derivative of numerical functions using dual numbers, with the chain rule broken into a sequence of operations such as addition, multiplication, and composition. During forward propagation, intermediate values in computational graphs are recorded for backward propagation.
We analyze the operations of backward propagation with respect to a single neuron of the \(l\)-th layer. The partial derivatives of \(\bar{f}(\mathbf{x})\) with respect to \(\mathcal{W}_{j}^{(l)}\), \(\mu_{j}^{(l)}\), and \(\sigma_{j}^{(l)}\) of the \(l\)-th (\(1\leq l\leq d\)) layer in SGNN are
\[\frac{\partial\bar{f}}{\partial\mathcal{W}_{j}^{(l)}} =\left[\frac{\partial\bar{f}}{\partial\mathcal{N}_{j}^{(l+1)}} \right]^{T}\frac{\partial\mathcal{N}_{j}^{(l+1)}}{\partial\mathcal{W}_{j}^{( l)}}, \tag{22}\] \[\frac{\partial\bar{f}}{\partial\mu_{j}^{(l)}} =\left[\frac{\partial\bar{f}}{\partial\mathcal{N}_{j}^{(l+1)}} \right]^{T}\frac{\partial\mathcal{N}_{j}^{(l+1)}}{\partial\mu_{j}^{(l)}},\] (23) \[\frac{\partial\bar{f}}{\partial\sigma_{j}^{(l)}} =\left[\frac{\partial\bar{f}}{\partial\mathcal{N}_{j}^{(l+1)}} \right]^{T}\frac{\partial\mathcal{N}_{j}^{(l+1)}}{\partial\sigma_{j}^{(l)}}, \tag{24}\]
with
\[\left[\frac{\partial\bar{f}}{\partial\mathcal{N}_{j}^{(l+1)}}\right]=\left[ \frac{\partial\bar{f}}{\partial\mathcal{N}^{(l+2)}}\right]^{T}\left[\frac{ \partial\mathcal{N}^{(l+2)}}{\partial\mathcal{N}_{j}^{(l+1)}}\right], \tag{25}\]
where
\[\mathcal{N}^{(l+2)}=(\mathcal{N}_{1}^{(l+2)},\mathcal{N}_{2}^{(l+2)},\ldots, \mathcal{N}_{N}^{(l+2)})^{T}. \tag{26}\]
The backward propagation with respect to the \(j\)-th neuron of the \(l\)-th (\(1\leq l\leq d-1\)) layer can be divided into three steps:
1. Compute the gradient of \(\bar{f}\) with respect to the output of the \(j\)-th neuron in the \((l+1)\)-th layer, \(\mathcal{N}_{j}^{(l+1)}\), as shown in Eq. (25), where \(\left[\frac{\partial\bar{f}}{\partial\mathcal{N}^{(l+2)}}\right]^{T}\) can be accessed from the back propagation of the \((l+2)\)-th layer. This leads to \(2N\) FLOP due to the dot product of two vectors.
2. Calculate the partial derivatives of \(\mathcal{N}_{j}^{(l+1)}\) with respect to weights, center, and width. Since the calculation of derivatives is computationally cheap, the analysis below will neglect the operations used to evaluate derivatives. This shall not affect the conclusion.
3. Propagate the gradients backward. This produces \(N+2\) operations.
Therefore, the number of FLOP of the \(l\)-th layer is approximately \(m(3N^{2}+2N)\), where \(m\) is the size of the input dataset. The backward propagation of the last layer leads to \(N\) operations. In total, the number of FLOP by backward propagation is
\[O(FLOP_{bp})=O(mdN^{2}). \tag{27}\]
On the other hand, the backward propagation FLOP number of GRBFNN is
\[O(FLOP_{\widetilde{bp}})=O(mN^{d}). \tag{28}\]
## 4 Subspace Gradient Descent
As illustrated in Section 3, SGNN has exponentially fewer trainable variables than the associated GRBFNN for high-dimensional input. In other words, GRBFNN may be over-parameterized. Recent work (Sagun et al., 2017; L. et al., 2018; Gur-Ari et al., 2018) has shown that optimizing a loss function constructed by an over-parameterized neural network can lead to Hessian matrices that possess a few dominant eigenvalues and many near-zero ones before and after training. This means that gradient descent can happen in a small subspace. Inspired by their work, we consider the infinitesimal variation of the loss function \(J\) of GRBFNN as
\[dJ=\left[\frac{\partial J}{\partial\tilde{\mathbf{\theta}}}\right]^{T}d\tilde{ \mathbf{\theta}}+\frac{1}{2}d\tilde{\mathbf{\theta}}^{T}\tilde{H}d\tilde{\mathbf{\theta} }+h.o.t.(\|d\tilde{\mathbf{\theta}}\|^{3}), \tag{29}\]
where \(\tilde{\mathbf{\theta}}\) represents a vector of all trainable weights, and
\[\tilde{\mathbf{H}}=\frac{\partial^{2}J}{\partial\tilde{\mathbf{\theta}}^{T}\tilde{ \mathbf{\theta}}}, \tag{30}\]
is the associated Hessian matrix. The centers and widths of Gaussian functions are assumed to be constant for simplicity. Since the Hessian matrix \(\tilde{H}\) is symmetric, we can represent it in the form
\[\tilde{\mathbf{H}}=\mathbf{P}^{T}\begin{pmatrix}\mathbf{\lambda}_{d}&\mathbf{0}\\ \mathbf{0}&\mathbf{\lambda}_{s}\end{pmatrix}\mathbf{P}, \tag{31}\]
where \(\mathbf{\lambda}_{d}=\text{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{k}, \ldots,\lambda_{dN})\) are the \(k\) dominant eigenvalues padded by \((dN-k)\) non-dominant ones (assuming \(k<dN\)), and \(\mathbf{\lambda}_{s}=\text{diag}(\lambda_{dN+1},\lambda_{dN+2},\ldots,\lambda_{N ^{d}})\) are the rest non-dominant eigenvalues.
Let \(\mathbf{\theta}\) be the weights of SGNN. The variation of the mapping from \(\mathbf{\theta}\) to \(\tilde{\mathbf{\theta}}\) in Eq. (13) reads
\[d\tilde{\mathbf{\theta}}=\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}d\mathbf{ \theta}, \tag{32}\]
where \(\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}\): \(\mathbb{R}^{dN}\mapsto\mathbb{R}^{N^{d}\times dN}\). It should be noted that \(\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}\) is a highly sparse matrix.
Substituting Eq. (32) into Eq. (29) yields
\[dJ=\left[\frac{\partial J}{\partial\tilde{\mathbf{\theta}}}\right]^{T}\frac{\partial \mathbf{g}}{\partial\mathbf{\theta}}d\mathbf{\theta}+\frac{1}{2}d\mathbf{\theta}^{T} \mathbf{H}d\mathbf{\theta}+h.o.t.(\|d\tilde{\mathbf{\theta}}\|^{3}), \tag{33}\]
with
\[\mathbf{H}=\left[\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}\right]^{T}\tilde{\mathbf{H}}\,\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}=\left[\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}\right]^{T}\mathbf{P}^{T}\begin{pmatrix}\mathbf{\lambda}_{d}&\mathbf{0}\\ \mathbf{0}&\mathbf{\lambda}_{s}\end{pmatrix}\mathbf{P}\left[\frac{\partial\mathbf{g}}{\partial\mathbf{\theta}}\right]. \tag{34}\]
Let
\[\begin{pmatrix}\mathbf{Q}_{d}\\ \mathbf{Q}_{s}\end{pmatrix}=\mathbf{P}\begin{bmatrix}\frac{\partial\mathbf{g}} {\partial\mathbf{\theta}}\end{bmatrix}, \tag{35}\]
where \(\mathbf{Q}_{d}\in\mathbb{R}^{dN\times dN}\) and \(\mathbf{Q}_{s}\in\mathbb{R}^{(N^{d}-dN)\times dN}\). Substituting Eq. (35) into Eq. (34) yields
\[\mathbf{H}=\mathbf{Q}_{d}^{T}\mathbf{\lambda}_{d}\mathbf{Q}_{d}+\mathbf{Q}_{s}^{T} \mathbf{\lambda}_{s}\mathbf{Q}_{s}\approx\mathbf{Q}_{d}^{T}\mathbf{\lambda}_{d} \mathbf{Q}_{d}. \tag{36}\]
Therefore, the dominant eigenvalues of the Hessian of GRBFNN are also included in the corresponding SGNN. This means that the gradient of SGNN can descend in the mapped dominant non-flat subspace of GRBFNN, which may contribute to the comparable accuracy and training efficiency of SGNN as opposed to GRBFNN, as discussed in Section 3.
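The low-rank structure behind Eq. (36) can be illustrated with a toy NumPy example: a symmetric matrix with only \(k\) dominant eigenvalues is reconstructed almost exactly from its dominant eigenpairs alone. The dimensions and spectra below are assumed purely for illustration:

```python
import numpy as np

# Toy illustration of Eq. (36): when the Hessian has only a few dominant
# eigenvalues, keeping the corresponding eigenpairs already reproduces H
# well, so gradient descent effectively happens in that small subspace.
rng = np.random.default_rng(0)
n, k = 200, 5                        # total dimension and dominant subspace size
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = np.concatenate([rng.uniform(10, 20, k),        # k dominant eigenvalues
                       rng.uniform(0, 1e-3, n - k)])  # near-zero remainder
H = (Q * lams) @ Q.T                 # symmetric matrix with this spectrum

H_dom = (Q[:, :k] * lams[:k]) @ Q[:, :k].T  # rank-k dominant part
rel_err = np.linalg.norm(H - H_dom) / np.linalg.norm(H)
print(f"relative error of rank-{k} approximation: {rel_err:.2e}")
```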
## 5 Numerical Experiments
### Candidate Functions
We consider ten candidate functions adapted from (Andras, 2014, 2018), as listed in Table 2. The functions cover a range of distinct features, including sinks, sources, flat and s-shaped surfaces, and multiple sinks and sources, which can assist in benchmarking function approximations of different neural networks.
We generate uniformly distributed sample sets to train neural networks for each run, with upper and lower bounds of each dimension ranging from -8 to 8. During the training process, we employ mini-batch gradient descent with the Adam optimizer in TensorFlow to update model parameters. The optimizer uses its default training parameters and stops if no improvement of the loss value is achieved in four consecutive epochs. The dataset is divided into a training set comprising 80% of the data and a validation set consisting of the remaining 20%. The mini-batch size, number of neurons, and data points are selected to balance the convergence speed and accuracy. All tests are performed on a Windows-10 desktop with a 3.6 GHz, 8-core Intel i7-9700K CPU and 64 GB of Samsung DDR-3 RAM.
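A sketch of this training protocol is shown below. Since the SGNN layer implementation is not reproduced here, a plain Keras MLP stands in for the model; the sampling range, optimizer, early-stopping criterion, and data split follow the description above, while the stand-in architecture and the monitored quantity are assumptions:

```python
import numpy as np
import tensorflow as tf

# Sketch of the training protocol: Adam with default parameters, stop
# after four epochs without loss improvement, 80/20 train/validation
# split. The MLP below is only a placeholder for the actual SGNN.
d = 5
X = np.random.uniform(-8.0, 8.0, size=(16384, d))  # uniform samples in [-8, 8]^d
y = np.sqrt((X**2).sum(axis=1))                    # e.g., candidate f_1 in Table 2

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh", input_shape=(d,)),
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4)
model.fit(X, y, batch_size=256, epochs=1000,
          validation_split=0.2, callbacks=[stop])
```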
### Dimension Scalability
In order to understand the dimensional scalability of SGNN, we apply it to candidate functions with dimensions from two to five. The number of data points is kept at 16384 so that sufficient data is sampled even for the 5-D functions, i.e., \(d=5\).
Each layer has a fixed number of 20 univariate Gaussian neurons, with initial centers evenly distributed in each dimension and widths equal to the distance between two adjacent centers. The training time per epoch grows linearly as the dimension increases, with an increment of 0.02 seconds/epoch per layer. For the majority of candidate functions, SGNN can achieve an accuracy level of \(10^{-4}\). It is sufficient to approximate the 5-D functions by an SGNN with 5 layers and in total 100 neurons. This configuration of SGNN cannot approximate the function \(f_{5}\) well in 4-D. This can be easily resolved by adding more neurons to the neural network (see a similar example in Table 7). In summary, the computational time of SGNN scales linearly with the increase of dimensions.
Next, two- and five-dimensional examples are selected to illustrate the expressiveness of SGNN in function approximations. The number of neurons, training size, and mini-batch
\begin{table}
\begin{tabular}{l l l} \hline \hline Functions & Features & Explicit expression \\ \hline Root sum squared & Sink & \(f_{1}(\mathbf{x})=\left(\sum_{i=1}^{d}x_{i}^{2}\right)^{\frac{1}{2}}\) \\ Second-degree polynomial & Saddle & \(f_{2}(\mathbf{x})=\frac{1}{50}\sum_{j=1}^{d}x_{j}^{2}x_{j+1}\) \\ Exponential-square sum & Flatter sink & \(f_{3}(\mathbf{x})=\frac{1}{5}\sum_{j=1}^{d}e^{x_{j}^{2}/50}\) \\ Exponential-sinusoid sum & Sink \& Source & \(f_{4}(\mathbf{x})=\frac{1}{5}\sum_{j=1}^{d}e^{x_{j}^{2}/50}\sin(y_{j})\) \\ Polynomial-sinusoid sum & Sink \& Source & \(f_{5}(\mathbf{x})=\frac{1}{50}\sum_{j=1}^{d}x_{j}^{2}\cos(j*x_{j})\) \\ Inverse-exponential-square sum & Source & \(f_{6}(\mathbf{x})=10/\sum_{j=1}^{d}e^{x_{j}^{2}/25}\) \\ Sigmoidal & S-shaped surface & \(f_{7}(\mathbf{x})=10/(1+e^{-\frac{1}{5}\sum_{i=1}^{d}x_{j}})\) \\ Gaussian & Flatter source & \(f_{8}(\mathbf{x})=10e^{-\frac{1}{100}\sum_{j=1}^{5}x_{j}^{2}}\) \\ Linear & Flat & \(f_{9}(\mathbf{x})=\sum_{j=1}^{d}x_{j}\) \\ Constant & Flat & \(f_{10}(\mathbf{x})=1\) \\ \hline \hline \end{tabular} Note: In \(f_{4}\), \(y_{j}=x_{j+1}\) with \(j=1,2,\ldots,d-1\) and \(y_{d}=x_{1}\).
\end{table}
Table 2: Candidate functions and their features.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{2D} & \multicolumn{2}{c|}{3D} & \multicolumn{2}{c|}{4D} & \multicolumn{2}{c}{5D} \\ \hline & sec/epoch & Loss & sec/epoch & loss & sec/epoch & loss & sec/epoch & loss \\ \hline \(f_{1}\) & **0.065** & 2.31E-04 & **0.081** & 4.43E-04 & **0.099** & 1.41E-03 & **0.113** & 4.43E-04 \\ \(f_{2}\) & **0.063** & 7.34E-05 & **0.081** & 8.31E-04 & **0.098** & 2.50E-03 & **0.114** & 8.31E-04 \\ \(f_{3}\) & **0.064** & 4.26E-06 & **0.083** & 1.50E-05 & **0.106** & 4.13E-05 & **0.110** & 1.50E-05 \\ \(f_{4}\) & **0.065** & 2.80E-06 & **0.084** & 2.91E-05 & **0.100** & 9.12E-05 & **0.108** & 2.91E-05 \\ \(f_{5}\) & **0.063** & 7.40E-05 & **0.083** & 7.53E-04 & **0.099** & 1.00E-01 & **0.107** & 7.53E-04 \\ \(f_{6}\) & **0.063** & 1.39E-06 & **0.083** & 1.27E-05 & **0.101** & 2.11E-05 & **0.115** & 1.27E-05 \\ \(f_{7}\) & **0.063** & 4.44E-05 & **0.083** & 4.08E-04 & **0.099** & 1.89E-03 & **0.111** & 4.08E-04 \\ \(f_{8}\) & **0.063** & 1.97E-05 & **0.083** & 4.36E-05 & **0.100** & 7.76E-05 & **0.113** & 4.36E-05 \\ \(f_{9}\) & **0.063** & 2.27E-04 & **0.079** & 1.76E-03 & **0.099** & 9.93E-03 & **0.111** & 1.76E-03 \\ \(f_{10}\) & **0.064** & 3.51E-06 & **0.082** & 6.32E-06 & **0.101** & 9.84E-06 & **0.113** & 6.32E-06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The computation time per epoch of SGNN scales linearly with the increase of dimensions. Data is generated by averaging the results of 30 runs. Data Size: 16384, Mini-batch size: 256. Neurons per layer: 20.
size are fine-tuned to achieve optimal results.
### 2-D Examples
First, SGNN is used to approximate the two-dimensional function \(f_{3}(\mathbf{x})=\frac{1}{5}e^{(x_{1}^{2}+x_{2}^{2})/50}\), which has four sharp peaks and one flat valley in the domain. As illustrated in Fig. 2(a), the optimizer converges in 400 steps, with the difference between training and test sets at the magnitude level of \(10^{-4}\). Figs. 2(b)-(e) show that the prediction by SGNN is nearly identical to the ground truth, except near the domain boundaries. This can be attributed to fewer sampling points in the neighborhood of the boundaries. Better alignment can be achieved by adding extra boundary points to the input dataset.
SGNN maintains its level of accuracy as candidate functions become more complex. For example, Fig. 3 presents the approximation of \(f_{4}(\mathbf{x})=\frac{1}{5}(e^{x_{1}^{2}/50}\sin x_{2}+e^{x_{2}^{2}/50}\sin x_{1})\). SGNN can approximate \(f_{4}\) with the same level of accuracy as \(f_{3}\) even with fewer training epochs, possibly due to the localization property of the Gaussian function. The largest error again appears near boundaries, with a percentage error of less than 8%. Inside the domain, the computed values precisely match the exact ones. As visualized in Figs. 3(d) and (e), the prediction by SGNN can fully capture the features of the function.
Figure 2: Approximating the two-dimensional function \(f_{3}\) by SGNN. (a) Training history, (b) absolute error, (c) prediction vs. exact value, (d) prediction, (e) ground truth. Size of training dataset: 2048.
This finding is corroborated in Fig. 4, which presents the approximation of the function \(f_{5}(\mathbf{x})=\frac{1}{50}(x_{1}^{2}\cos x_{1}+x_{2}^{2}\cos 2x_{2})\) by SGNN. The function, unlike \(f_{4}\), possesses peaks and valleys near the boundaries and becomes flat in the vicinity of the origin, as illustrated in Figs. 4(c)-(e). Interestingly, the neural network converges faster than the network for \(f_{4}\). This indicates that the loss function may become more convex and contain fewer flat regions. One possible reason is that as the function becomes more complex, more Gaussian neurons are active and have larger weights, increasing the loss gradients. The largest error is again observed near the boundaries. As shown in Fig. 4, SGNN can capture the features of the target function \(f_{5}\) well. Due to the gradient scale of the color bar, a small offset with respect to the ground truth is visible near the origin, but the corresponding absolute errors are very small, as shown in Fig. 4(b).
### 5-D Examples
The approximation of five-dimensional functions from \(f_{1}\) to \(f_{10}\) by SGNN is illustrated through cross-sectional plots in the \(x_{1}-x_{2}\) plane with three other variables fixed to zero, as shown in Figs. 5 and 6. The left panel is for prediction, and the right panel is for ground truth. During training, uniformly-sampled training sets with the size of 32768 are separately
Figure 3: Approximating the two-dimensional \(f_{4}\) by SGNN. (a) Training history, (b) absolute error, (c) prediction vs. exactness, (d) prediction, (e) ground truth. Size of training dataset: 2048.
generated for all functions in order to maintain consistency. However, fewer points can be used when the function shape is simple (e.g., sink or source). The validation set used to produce the prediction plots is generated by uniformly partitioning the subspace with a step size twice the number of neurons per layer.
The SGNN can accurately capture the features of all candidates regardless of their geometric complexity. Although the predictions of SGNN show minor disagreements with the ground truth when the function (e.g., \(f_{10}\)) is constant, the differences are less than 3%.
## 6 Comparison of SGNN and GRBFNN
The performance of SGNN and GRBFNN in approximating two-dimensional and three-dimensional candidate functions is presented in Tables 4 and 5, respectively. For comparison, the centers and widths of the Gaussian neurons of GRBFNN are also set to be trainable variables. We focus on the differences in total epochs, training time per epoch, and losses for comparison. The results are obtained by averaging the results of 30 runs.
Figure 4: Approximating the two-dimensional \(f_{5}\) by SGNN. (a) Training history, (b) absolute error, (c) prediction vs. exactness, (d) 2-D projection of prediction, (e) 2-D projection of ground truth. Size of training dataset: 2048.
As shown in Table 4, when approximating two-dimensional functions, SGNN can achieve comparable accuracy to GRBFNN, with differences of less than one order of magnitude in most cases. The worst case occurs when approximating \(f_{1}\). However, the absolute difference is around 1.0E-3, and SGNN can still give a reasonably good approximation. On the other hand, the training time per epoch of SGNN is roughly one-tenth that of GRBFNN.
The advantage of SGNN becomes more evident in three-dimensional function approximations. SGNN gains a hundredfold speedup over GRBFNN while maintaining a similar level of accuracy. Surprisingly, SGNN also yields more accurate results when approximating \(f_{3}\) to \(f_{5}\).
## 7 Comparison with Deep NNs
In this section, we compare the performance of SGNN with deep ReLU and Sigmoid NNs, which use two popular choices of activation functions. Through the approximation of four-dimensional candidate functions, SGNN shows much better trainability and approximation ability than deep ReLU and Sigmoid NNs.
Figure 5: Prediction vs. exact of \(f_{1}\)-\(f_{5}\) in five dimensions. The plots are generated by projecting the surface to \(x_{1}-x_{2}\) plane with other coordinates fixed to zero. The left panel is for prediction; the right panel is for the exact value. Size of training dataset: 32768.
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{SGNN} & \multicolumn{4}{c}{GRBFNN} \\ & Epoch & Sec/epoch & Ave loss & Min loss & Epoch & Sec/epoch & Ave loss & Min loss \\ \hline \(f_{1}\) & 483 & **0.027** & 1.86E-03 & 1.32E-03 & 568 & **0.232** & 1.98E-04 & 1.25E-04 \\ \(f_{2}\) & 333 & **0.028** & 6.51E-05 & 3.65E-05 & 351 & **0.199** & 2.17E-05 & 1.35E-05 \\ \(f_{3}\) & 334 & **0.027** & 1.32E-05 & 6.44E-06 & 349 & **0.193** & 6.21E-06 & 2.30E-06 \\ \(f_{4}\) & 223 & **0.028** & 7.12E-06 & 5.53E-06 & 209 & **0.194** & 4.62E-06 & 2.68E-06 \\ \(f_{5}\) & 123 & **0.030** & 3.72E-06 & 1.82E-06 & 126 & **0.198** & 1.99E-06 & 1.15E-06 \\ \(f_{6}\) & 595 & **0.027** & 2.41E-04 & 1.50E-04 & 629 & **0.209** & 7.41E-05 & 3.20E-05 \\ \(f_{7}\) & 602 & **0.027** & 6.17E-04 & 4.93E-04 & 636 & **0.192** & 1.77E-04 & 8.24E-05 \\ \(f_{8}\) & 822 & **0.026** & 3.62E-04 & 2.70E-04 & 756 & **0.191** & 3.48E-04 & 1.56E-04 \\ \(f_{9}\) & 625 & **0.026** & 8.56E-04 & 4.38E-04 & 591 & **0.190** & 4.84E-04 & 1.45E-04 \\ \(f_{10}\) & 437 & **0.026** & 2.97E-05 & 2.26E-05 & 444 & **0.199** & 1.09E-05 & 6.15E-06 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Two-dimensional function approximations using SGNN and GRBFNN. Data is generated by averaging the results of 30 runs. Sample points: 1024; mini-batch size: 64; neurons per layer: 10.
Figure 6: Prediction vs. exact of \(f_{6}\)-\(f_{10}\) in five dimensions. The plots are generated by projecting the surface to \(x_{1}-x_{2}\) plane with other coordinates fixed to zero. Left panel: prediction; right panel: ground truth. Size of training dataset: 32768.
Table 6 presents the training time per epoch, the total number of training epochs, and the loss after training of the three NNs, averaged over 30 runs. All NNs possess four hidden layers with 20 neurons per layer. The training-set size is fixed to 16384, with a mini-batch size of 256. As opposed to SGNN and Sigmoid-NN, which have stable training times per epoch across all candidate functions, the time of ReLU-NN fluctuates. This might be caused by the difference in calculating derivatives of a ReLU unit depending on whether its input is less than or greater than zero. SGNN has a longer training time per epoch because of the computation of the Gaussian function and the derivatives of \(\mu\) and \(\sigma\). One may argue that this comparison is unfair because SGNN has extra trainable variables. However, SGNN has fewer trainable weights (see Table 7) because no weights connect the input and the first layer, and the output layer is not trainable.
Although SGNN requires appreciably more training epochs, it also produces more accurate predictions. The loss values of SGNN after training are uniformly smaller than those of ReLU-NN and Sigmoid-NN except for \(f_{10}\). In fact, for \(f_{2}\), \(f_{4}\), \(f_{6}\), and \(f_{7}\), the accuracy of SGNN is even two orders of magnitude better than that of the other two models.
Despite the efficient training speed of Sigmoid-NN, the network is more difficult to train with random weight initialization for \(f_{1}\) and \(f_{5}\). In fact, the approximation of \(f_{5}\) by Sigmoid-NN is nowhere close to the ground truth after training. When functions become more complex, SGNN outperforms ReLU-NN and Sigmoid-NN in minimizing loss through stochastic gradient descent. This could be attributed to the locality of Gaussian functions, which increases the number of active neurons and reduces the flat subspace whose gradients diminish. Sigmoid-NN terminates after significantly fewer epochs. This could be caused by the small derivatives of Sigmoid functions when the input stays within the saturation region, which makes the network more difficult to train.
Next, we further compare the trainability of SGNN with ReLU-DNN. We train the two networks with different configurations to approximate the function \(f_{5}\), which has a more complex geometry and is more difficult to approximate. The configuration of the NNs and the training performance are listed in Table 7.
Because the number of layers of SGNN is fixed by the number of function variables, its only tunable network hyper-parameter is the number of neurons per layer. Doubling the neurons/layer of
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline & \multicolumn{4}{c|}{SGNN} & \multicolumn{4}{c}{GRBFNN} \\ & Epoch & Sec/epoch & Ave loss & Min loss & Epoch & Sec/epoch & Ave loss & Min loss \\ \hline \(f_{1}\) & 269 & **0.038** & 1.42E-03 & 9.33E-04 & 270 & **4.049** & 1.65E-04 & 9.60E-05 \\ \(f_{2}\) & 253 & **0.039** & 2.47E-04 & 1.02E-04 & 157 & **4.055** & 4.75E-05 & 3.51E-05 \\ \(f_{3}\) & 204 & **0.039** & **2.36E-05** & **1.78E-05** & 164 & **4.068** & 2.37E-05 & 1.80E-05 \\ \(f_{4}\) & 188 & **0.039** & **1.82E-05** & **1.24E-05** & 97 & **4.077** & 1.96E-05 & 1.61E-05 \\ \(f_{5}\) & 150 & **0.040** & **2.72E-06** & **1.44E-06** & 66 & **4.169** & 1.56E-05 & 1.20E-05 \\ \(f_{6}\) & 323 & **0.037** & 7.20E-05 & 4.29E-05 & 245 & **4.085** & 3.75E-05 & 2.73E-05 \\ \(f_{7}\) & 324 & **0.037** & 1.16E-03 & 7.07E-04 & 274 & **4.031** & 1.65E-04 & 1.02E-04 \\ \(f_{8}\) & 315 & **0.036** & 1.95E-03 & 8.48E-04 & 314 & **3.937** & 2.03E-04 & 1.41E-04 \\ \(f_{9}\) & 334 & **0.036** & 4.29E-03 & 1.89E-03 & 293 & **4.016** & 7.60E-04 & 4.96E-04 \\ \(f_{10}\) & 227 & **0.038** & 3.28E-05 & 2.45E-05 & 188 & **4.018** & 2.63E-05 & 1.72E-05 \\ \hline \end{tabular}
\end{table}
Table 5: Approximations of tri-variate functions using SGNN and GRBFNN. SGNN can achieve 100 times speedup over GRBFNN, with even smaller loss values for functions \(f_{3}\)-\(f_{5}\) (highlighted). Data was generated by averaging the results of 30 runs. Sample points: 2048; mini-batch size: 64; Neurons per layer: 10.
SGNN from 20 to 40 decreases the loss by two orders of magnitude. Although the training time per epoch increases by 30%, the number of epochs is reduced by 60%. Consequently, the total training time is cut by almost 50%, from 38.2 to 19.4 seconds.
In contrast, the accuracy of ReLU-NN increases only slightly with the width and depth of the model. A loss reduction of close to 50% is achieved by adding six more layers and 50 more neurons per layer. However, the error is still three orders of magnitude higher than that of a 4-layer SGNN with one-tenth the trainable variables and half the training time per epoch. Although the universal approximation theorem suggests that one can keep expanding the network structure to improve accuracy, this contradicts the observation in the last row. The reason is that the convergence of gradient descent becomes a practical obstacle when the network is over-parameterized. In this situation, the network may place very high requirements on the initial weights to yield optimal solutions.
To visualize the differences in the expressiveness between SGNN and ReLU-NN, the predictions of one run in Table 7 are selected and plotted through a cross-sectional cut in \(x_{1}-x_{2}\) plane with the other two variables \(x_{3}\) and \(x_{4}\) fixed at zero, as shown in Fig. 7. The
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{SGNN} & \multicolumn{3}{c|}{ReLU-NN} & \multicolumn{3}{c}{Sigmoid-NN} \\ & Sec/epoch & Epoch & Loss & Sec/epoch & Epoch & Loss & Sec/epoch & Epoch & Loss \\ \hline \(f_{1}\) & 0.099 & 218 & 1.41E-03 & 0.054 & 150 & 4.86E-03 & 0.063 & 39 & **4.78E-1** \\ \(f_{2}\) & 0.098 & 262 & 2.50E-03 & 0.054 & 119 & 1.07E-01 & 0.054 & 166 & 2.90E-01 \\ \(f_{3}\) & 0.106 & 193 & 4.13E-05 & 0.312 & 167 & 5.22E-04 & 0.054 & 169 & 1.30E-03 \\ \(f_{4}\) & 0.100 & 196 & 9.12E-05 & 0.234 & 161 & 7.32E-02 & 0.056 & 101 & 2.38E-01 \\ \(f_{5}\) & 0.097 & 392 & **9.65E-02** & 0.173 & 94 & **4.97E-01** & 0.066 & 29 & **6.38E-01** \\ \(f_{6}\) & 0.101 & 147 & 2.11E-05 & 0.293 & 115 & 1.18E-03 & 0.053 & 187 & 2.94E-03 \\ \(f_{7}\) & 0.099 & 246 & 1.89E-03 & 0.241 & 139 & 2.07E-03 & 0.054 & 145 & 1.58E-05 \\ \(f_{8}\) & 0.100 & 173 & 7.76E-05 & 0.344 & 109 & 9.95E-03 & 0.054 & 243 & 3.17E-03 \\ \(f_{9}\) & 0.099 & 245 & 9.93E-03 & 0.439 & 126 & 7.66E-03 & 0.054 & 374 & 7.79E-03 \\ \(f_{10}\) & 0.101 & 158 & 9.84E-06 & 0.135 & 143 & 6.86E-06 & 0.054 & 173 & 8.20E-08 \\ \hline \end{tabular}
\end{table}
Table 6: Performance comparison of SGNN and deep neural networks with ReLU and Sigmoid activation functions. Data is generated by averaging the results of 30 runs. All NNs have four hidden layers, with 20 neurons per layer.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & Layers & Neuron/layer & Parameters & Sec/epoch & Epoch & Loss & Min loss \\ \hline \multirow{2}{*}{SGNN} & 4 & 20 & **1360** & 0.097 & 392 & **9.65E-2** & **3.91E-2** \\ & 4 & 40 & **5120** & 0.130 & 149 & **7.08E-4** & **5.32E-4** \\ \hline \multirow{4}{*}{ReLU-NN} & 4 & 20 & 1381 & 0.056 & 96 & 0.497 & 0.477 \\ & 4 & 40 & 5161 & 0.067 & 99 & 0.458 & 0.409 \\ \cline{1-1} & 7 & 40 & 10081 & 0.082 & 135 & 0.336 & 0.273 \\ \cline{1-1} & 10 & 40 & 15001 & 0.176 & 112 & 0.324 & 0.258 \\ \cline{1-1} & 10 & 50 & 23251 & 0.156 & 100 & 0.309 & 0.253 \\ \cline{1-1} & 10 & 60 & 33301 & 0.250 & 96 & 0.288 & 0.232 \\ \cline{1-1} & 10 & 70 & 45151 & 0.120 & 97 & 0.278 & 0.215 \\ \cline{1-1} & 10 & 80 & 58801 & 0.261 & 89 & 0.291 & 0.205 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of SGNN and ReLU-based NN in approximation of \(f_{5}.\) Results are generated by averaging the data of 30 runs.
network configurations are listed in Table 8. The predictions of SGNN in Fig. 7(b) match the ground truth in Fig. 7(a) well. Despite the minor differences in colors near the origin, their maximum magnitude is less than 0.1. The ReLU-NN with the same structure produces a much worse approximation. Although the network gradually captures the main geometric features of \(f_{5}\) after significantly enlarging its structure to 10 layers and 70 neurons per layer, the difference in magnitude can still be as large as 0.5, as shown in Fig. 7(f).
## 8 Conclusions
In this paper, we reexamined the structure of GRBFNN in order to make it tractable for problems with high-dimensional input. By using the separable property of Gaussian radial-basis functions, we proposed a new feedforward network called Separable-Gaussian-Neural-Network (SGNN), which has output identical to that of a GRBFNN. Different from traditional MLPs, SGNN splits the input data into multiple columns by dimension and feeds them into the corresponding layers in sequence. As opposed to GRBFNN, SGNN significantly
Figure 7: Approximation of the four-dimensional \(f_{5}\) using SGNN and ReLU-NNs. The plots are generated by projecting the surface to the \(x_{1}-x_{2}\) plane with all other coordinates fixed to zero. (a) Ground truth; (b) SGNN; (c)-(f) ReLU-NNs with different network configurations. The layers and neurons per layer of the NNs are listed in Table 8.
reduces the number of neurons, trainable variables, and computational load of forward and backward propagation, leading to exponential improvement of training efficiency. SGNN can also preserve the dominant subspace of the Hessian matrix of GRBFNN in gradient descent and, therefore, offer comparable minimal loss. Extensive numerical experiments have been carried out, demonstrating that SGNN has superior computational performance over GRBFNN while maintaining a similar level of accuracy. In addition, SGNN is superior to MLPs with ReLU and Sigmoid units when approximating complex functions. Further investigation should focus on the universal approximability of SGNN and its applications to Physics-informed neural networks (PINNs) and reinforcement learning.
## Acknowledgment
Siyuan Xing would like to thank the Donald E. Bently center for Engineering Innovation for their kind support of teaching release time, which made this research possible.
|
2302.04035 | Revisit the Algorithm Selection Problem for TSP with Spatial Information
Enhanced Graph Neural Networks | Algorithm selection is a well-known problem where researchers investigate how
to construct useful features representing the problem instances and then apply
feature-based machine learning models to predict which algorithm works best
with the given instance. However, even for simple optimization problems such as
Euclidean Traveling Salesman Problem (TSP), there lacks a general and effective
feature representation for problem instances. The important features of TSP are
relatively well understood in the literature, based on extensive domain
knowledge and post-analysis of the solutions. In recent years, Convolutional
Neural Network (CNN) has become a popular approach to select algorithms for
TSP. Compared to traditional feature-based machine learning models, CNN has an
automatic feature-learning ability and demands less domain expertise. However,
it is still required to generate intermediate representations, i.e., multiple
images to represent TSP instances first. In this paper, we revisit the
algorithm selection problem for TSP, and propose a novel Graph Neural Network
(GNN), called GINES. GINES takes the coordinates of cities and distances
between cities as input. It is composed of a new message-passing mechanism and
a local neighborhood feature extractor to learn spatial information of TSP
instances. We evaluate GINES on two benchmark datasets. The results show that
GINES outperforms CNN and the original GINE models. It is better than the
traditional handcrafted feature-based approach on one dataset. The code and
dataset will be released in the final version of this paper. | Ya Song, Laurens Bliek, Yingqian Zhang | 2023-02-08T13:14:20Z | http://arxiv.org/abs/2302.04035v1 | Revisit the Algorithm Selection Problem for TSP with Spatial Information Enhanced Graph Neural Networks
###### Abstract
Algorithm selection is a well-known problem where researchers investigate how to construct useful features representing the problem instances and then apply feature-based machine learning models to predict which algorithm works best with the given instance. However, even for simple optimization problems such as Euclidean Traveling Salesman Problem (TSP), there lacks a general and effective feature representation for problem instances.
The important features of TSP are relatively well understood in the literature, based on extensive domain knowledge and post-analysis of the solutions. In recent years, Convolutional Neural Network (CNN) has become a popular approach to select algorithms for TSP. Compared to traditional feature-based machine learning models, CNN has an automatic feature-learning ability and demands less domain expertise. However, it is still required to generate intermediate representations, i.e., multiple images to represent TSP instances first.
In this paper, we revisit the algorithm selection problem for TSP, and propose a novel Graph Neural Network (GNN), called GINES. GINES takes the coordinates of cities and distances between cities as input. It is composed of a new message-passing mechanism and a local neighborhood feature extractor to learn spatial information of TSP instances. We evaluate GINES on two benchmark datasets. The results show that GINES outperforms CNN and the original GINE models. It is better than the traditional handcrafted feature-based approach on one dataset. The code and dataset will be released in the final version of this paper.
Traveling Salesperson Problem, Algorithm Selection, Instance Hardness, Graph Neural Network, Graph Classification
## I Introduction
The Euclidean Traveling Salesman Problem (TSP) is one of the most intensely studied NP-hard combinatorial optimization problems. It relates to many real-world applications and has significant theoretical value. TSP can be described as follows. Given a list of cities with known positions, find the shortest route to visit each city and return to the origin city. Researchers have developed various exact, heuristic, and learning-based algorithms to solve this routing problem [1]. As these algorithms' performance is highly variable depending on the characteristics of the problem instances, selecting algorithms for each instance helps to improve the overall efficiency [2]. The algorithm selection problem was proposed in [3], and developed further in [4, 5], where the authors consider algorithm selection as a classification problem that identifies the mapping from Problem Space to Algorithm Space [6]. Traditionally, domain experts design a group of features [7, 8, 9] that can represent the characteristics of TSP instances well. Then, one can train a machine learning classifier to be the selector using these features. This feature-based method has several potential limitations: a high requirement for domain knowledge, insufficient expressiveness of the features [10], and the need for a feature selection process [11]. Handcrafted features are not effective when directly transferred to represent instances of other optimization problems. For complex optimization problems that are much less studied than TSP, it is hard for humans to design good features to represent instances.
Deep learning models, especially Convolutional Neural Networks (CNN), have recently been applied to select TSP algorithms. By employing images to represent TSP instances, the algorithm selection problem is transformed into a computer vision challenge. Since CNN has sufficient automatic feature learning capability, this approach no longer requires handcrafted features. In [11], the authors generate three images: a point image, a Minimum Spanning Tree (MST) image, and a k-Nearest-Neighbor-Graph (kNNG) image to represent each TSP instance. Then they apply an 8-Layer CNN architecture to predict which algorithm is better. In [12], researchers use a gridding method to transform TSP instances into density maps, and then apply Residual Networks (ResNet) [13] to do the classification. In [2], a similar gridding approach is used to generate images, and then a 3-Layer CNN model is designed to predict algorithms' temporal performance at different time steps.
Although the experimental results in [2, 11, 12] show CNN can outperform traditional feature-based machine learning models in the algorithm selection task for TSP, this approach still has the following main drawbacks: (1) _Need to generate intermediate representations_. Similar to feature-based methods, the instances' intermediate representations, in this case, the images, need to be generated as the inputs of CNN. It is usually a tedious process to transform TSP instances into images. In [11], generating MST and kNNG images
for each instance requires time-consuming calculations. When applying the gridding method to obtain images, the authors perform several up-scaling operations to improve the resolution [12]. Besides, data augmentation techniques, like random rotation/flipping, are widely used to enhance CNN's generalization ability [11, 2]. As a result, multiple images must be generated to represent one TSP instance. (2) _Introduce problem-irrelevant parameters_. In [11], the authors use solid dots to represent cities and solid lines to connect cities in MST and kNNG images. The dot size and line width are irrelevant to the properties of the TSP instance. Similarly, when applying gridding methods to generate images, the key parameter we need to set is the image size or the number of grids [12]. Adding these parameters increases the input data's complexity and the effort required for parameter tuning. (3) _Potentially lose problem-relevant information_. In the image generation procedure, the TSP instance is divided into multiple grids, with the value for each grid representing the number of cities that fall into it [12, 2]. After gridding, portions of the instance's local structure will be lost. In addition, [2] sets a maximum value for the grids, leading to more information distortion. (4) _Hard to generalize to other routing problems_. The gridding methods can be applied to convert TSP instances to images since cities are in 2D Euclidean space. However, for many variants of TSP problems, such as the Asymmetric Traveling Salesman Problem (ATSP) and the Capacitated Vehicle Routing Problem (VRP), generating images to represent problem instances is not straightforward and could be very challenging. In such cases, a graph with assigned node/edge features could be a better representation form.
To remedy the above issues, we propose an enhanced Graph Neural Network (GNN) named GINES to solve algorithm selection problems for TSP. Our main contributions are:
* We are the first to successfully design a GNN to learn the representation of TSP instances for algorithm selection, outperforming the existing feature-based or CNN-based approaches.
* The proposed model merely takes the coordinates of cities and the distance between them as inputs. We show there is no need to design and generate intermediate representations, such as handcrafted features or images, for TSP instances.
* The adopted graph representation methodology has few parameter settings, and the experimental results show it can retain accurate information about the original TSP instances.
* The proposed model is able to capture local features with multiple scales by aggregating information from the neighborhood nodes. Its robust performance is demonstrated on two public TSP datasets, compared with several existing approaches.
* The proposed model can easily generalize to other complex routing problems by adding node features or modifying distance metrics.
The rest of the paper is organized as follows. Section 2 introduces the background and related works. Section 3 presents the proposed GINES. Section 4 shows the experimental results of GINES. We conclude in Section 5.
## II Background and Related Work
### _Algorithm selection for optimization problems_
The No Free Lunch (NFL) theorem states that no algorithm can outperform others on all optimization problems. Researchers have been investigating algorithm selection problems to improve overall solving performance [14]. Most researchers focus on designing features for problem instances and solving algorithm selection by traditional feature-based machine learning models. The collection of features for classical optimization problems like Satisfiability Problem [15], AI planning [16], Knapsack Problem [17], TSP [7, 8, 9], and VRP [18, 19] have been well designed. These features are restricted to specific problems and usually need great efforts to be generated.
Deep learning has been shown to perform various classification/regression tasks effectively. In addition to the algorithm selection models using CNN for TSP mentioned above [11, 2], researchers have proposed a few feature-free algorithm selection models for other optimization problems. By generating images from the text documents for SAT problem instances, CNN can be applied to selecting algorithms [20]. In [21], researchers sample landscape information from instances and transform it into images, then apply CNN to select algorithms for Black-Box Optimization Benchmarking (BBOB) function instances. The authors of [22] treat online 1D Bin-Packing Problem instances as sequence data and apply Long Short-Term Memory (LSTM) to predict heuristic algorithms' performance. In the feature-free algorithm selection field, instances are usually converted to images or sequences, and graph representations are seldom used.
### _Hardness prediction for optimization problems_
Instance hardness prediction is a research topic closely related to algorithm selection. The purpose of hardness prediction is to assess whether a problem instance is easy or difficult to solve using a specific algorithm. Researchers have studied where the hard optimization problem instances are, especially hard TSP [23] and Knapsack instances [24, 25]. Similar to algorithm selection, the main research idea is to identify key attributes that correlate with hardness levels. The authors of [26] point out that the Standard Deviation (SD) of the distance matrix is highly relevant to TSP instance hardness. In [27], the same features used in TSP algorithm selection are applied to predict TSP instance hardness for local search algorithms. Some other complex features, such as the highest edge features [28], the clustering features [29], and the Weibull distribution of distances [30], have been proposed to assess TSP hardness for heuristic algorithms such as Ant Colony Optimization (ACO). In [31], researchers find that the regularity of the TSP structure can indicate the TSP hardness for ACO, but this type of feature cannot predict the hardness
for the local search Lin-Kernighan algorithm. This implies that the valuable features may vary across algorithms. Researchers commonly apply traditional feature-based machine learning models in this research area, and to our knowledge, no deep learning models have been employed.
### _GNN for TSP_
The TSP instance can be naturally expressed by a graph \(G=(V,E)\), where \(V=\{v_{1},v_{2},...,v_{n}\}\) is a group of cities, and \(E=\{\langle v_{i},v_{j}\rangle:v_{i},v_{j}\in V\}\) is a set of paths between cities. Therefore, there exist several research lines for applying GNNs to TSP.
_GNN for TSP solving._ GNN has been successfully applied in learning-based TSP algorithms, either in the manner of reinforcement learning or supervised learning [32]. In reinforcement learning methods, researchers use graph embedding networks such as _structure2vec_ [33] and Graph Pointer Networks (GPN) [34] to represent the current policy and apply Deep Q-Learning (DQN) to update it. To tackle larger graphs, [35] introduces a two-stage learning procedure that firstly trains a Graph Convolutional Network (GCN) [36] to predict node qualities and prune some of them before taking the next action. In supervised learning methods, GNN models are commonly used as the Encoder tool [37, 38] in the upgraded version of Pointer Network [39], a sequence-to-sequence architecture.
_GNN for TSP search space reduction._ Search space reduction for TSP instances is another GNN-related research task, and it can be viewed as an edge classification problem. If a learned model can predict which edges in the TSP instance graph are likely to be included in the optimal solution, we can reduce the search space and improve computational efficiency in the subsequent search procedure [40]. In [41], the authors designed a benchmark TSP dataset for edge classification. Here a TSP instance is represented as a kNNG, where node features are node coordinates and edge features are Euclidean distances between two nodes. Many researchers use this benchmark dataset to assess proposed GNN architectures [42].
_GNN for TSP algorithm selection._ The authors of [12] investigate utilizing both CNN and GCN to select TSP algorithms and conclude that CNN performs better than GCN. The authors analyze the drawbacks of GCN, including the lack of relevant node features, the over-smoothing problem [43], and high time complexity. To the best of our knowledge, no GNN models have been successfully applied in algorithm selection for routing problems. We aim to design a suitable GNN architecture for solving the TSP algorithm selection problem.
## III TSP algorithm selection with GINES
### _Problem Statement_
The TSP algorithm selection problem can be defined as follows: given a TSP instance set \(I=\{I_{1},I_{2},...,I_{l}\}\), a TSP algorithm set \(A=\{A_{1},A_{2},...,A_{m}\}\), and a certain algorithm performance metric, the goal is to identify a per-instance mapping from \(I\) to \(A\) that maximizes its performance on \(I\) based on the given metric. As discussed in previous sections, the TSP instances can be represented by handcrafted features or images, which are inputs to supervised learning models such as SVM and CNN to learn this mapping.
In this work, we treat a TSP instance \(I_{i}\) as a graph \(G_{i}=(V,E)\), where the node features \(X_{v}\) for \(v\in V\) is a vector of its \((x_{v},y_{v})\) coordinate, the edge feature \(e_{u,v}\) for \((u,v)\in E\) is the Euclidean distance between two nodes. Here we use kNNG to represent TSP instances. We set the number of nearest nodes \(k\) to 10, which is relatively small compared to other papers [1, 41] in order to reduce the computational burden. Let \(N\) be the number of cities. The node feature is a \([N,2]\) matrix, and the matrix size of the edge feature is \([N\times 10,1]\). Given a set of TSP graphs \(\{G_{1},G_{2},...,G_{l}\}\) and their algorithm performance labels \(\{y_{1},y_{2},...,y_{l}\}\), the task of selecting TSP algorithms can be converted to a graph-level classification task. We develop a GNN model for routing problems, called GINES, which directly takes TSP graphs as inputs for classification. Next, we will describe the architecture of this model in detail.
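As an illustration, the following sketch (assuming PyTorch Geometric with the torch-cluster extension installed; the variable names are ours, not the paper's code) builds this graph representation for a random instance:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph

# Sketch of the graph representation described above: city coordinates
# as node features and Euclidean distances as 1-D edge features.
coords = torch.rand(1000, 2)                 # N cities in the unit square
edge_index = knn_graph(coords, k=10)         # [2, N * 10] kNN connectivity
row, col = edge_index
edge_attr = (coords[row] - coords[col]).norm(dim=-1, keepdim=True)  # [N*10, 1]
graph = Data(x=coords, edge_index=edge_index, edge_attr=edge_attr)
```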
### _GINES_
Graph Isomorphism Network (GIN) is one of the most expressive GNN architectures for the graph-level classification task. Researchers have shown that the representational power of GIN is equal to that of the Weisfeiler-Lehman graph isomorphism test, and GIN can obtain state-of-the-art performance on several graph classification benchmark datasets [44]. GIN uses the following formula for its neighborhood aggregation and message-passing:
\[\mathbf{x}_{i}^{\prime}=\text{MLP}\left((1+\epsilon)\cdot\mathbf{x}_{i}+\sum_ {j\in\mathcal{N}(i)}\mathbf{x}_{j}\right) \tag{1}\]
where \(\mathbf{x}_{i}\) is the target node's features, \(\mathcal{N}(i)\) denotes the neighborhood for node \(i\), and \(\mathbf{x}_{j}\) is the neighborhood nodes' features. \(\epsilon\) indicates the significance of the target node relative to its neighborhood, with a default value of zero. \(\mathbf{x}_{i}^{\prime}\) is the representation of node \(i\) we get after applying one GIN layer. Here, often the SUM aggregator is used to aggregate information from the neighborhood, as it can better distinguish different graph structures than MEAN and MAX aggregators [44]. A drawback of the original GIN is that the edge features are not taken into account. Thus, the authors of [45] proposed GINE that can incorporate edge features in the aggregation procedure:
\[\mathbf{x}_{i}^{\prime}=\text{MLP}\left((1+\epsilon)\cdot\mathbf{x}_{i}+\sum_ {j\in\mathcal{N}(i)}\text{ReLU}\left(\mathbf{x}_{j}+\mathbf{e}_{j,i}\right)\right) \tag{2}\]
where \(\mathbf{e}_{j,i}\) are edge features. In GINE, the neighboring nodes' features and edge features are added together and passed through a ReLU transform before the SUM aggregation. For a TSP graph, the dimensions of these two features do not match; therefore, we apply a linear transformation to the edge features.
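For reference, Eq. (2) corresponds to PyTorch Geometric's `GINEConv` layer. A minimal usage sketch, reusing the `graph` object constructed above and assuming the `edge_dim` argument of recent PyTorch Geometric releases, is:

```python
import torch
from torch_geometric.nn import GINEConv

# Sketch of Eq. (2) with GINEConv; edge_dim=1 adds the linear transform
# that lifts the scalar distance features to the node-feature dimension.
mlp = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 32))
conv = GINEConv(mlp, edge_dim=1)
out = conv(graph.x, graph.edge_index, graph.edge_attr)  # reuses `graph` above
```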
To better tackle the TSP algorithm selection problem, we make several modifications on GINE and propose a GINES (GINE with Spatial information) architecture as follows.
_Adopting a suitable aggregator._ Aggregators in GNNs play a crucial role in incorporating neighborhood information. Researchers have shown that the selection of aggregators significantly impacts a GNN's representational capacity [44]. The widely applied aggregators are the MEAN, MAX, and SUM aggregators, and which aggregator is best is an application-specific question. For example, the MEAN aggregator used in GCN can help capture the node distribution in graphs, and it may perform well if the distributional information in the graph is more relevant to the studied task [44]. The MAX aggregator is beneficial for identifying representative nodes, and thus for some vision tasks like point cloud classification, the MAX aggregator is a better choice [46]. The SUM aggregator enables the learning of structural graph properties, which is the default setting of GIN.
With post-analysis, researchers have shown that the standard deviation (SD) or Coefficient of Variation (CV) of the distance matrix is one of the most significant features [27, 28, 31] in algorithm selection or hardness prediction for TSP. Intuitively, when the SD of the TSP distance matrix is very high, it is easy to tell the difference between candidate solutions, and the TSP is easy to solve. At the opposite end of the spectrum, when the SD of the TSP distance matrix is very small, there are many routes with the same minimum cost, and finding one of them is not difficult. So as the SD increases, an easy-hard-easy transition can be observed [26]. Based on the above analysis, we add the SD aggregator, along with the MAX aggregator and SUM aggregator, as the three aggregators in our GINES to aggregate useful information for TSP algorithm selection.
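As a small illustration, the SD and CV of an instance's distance matrix can be computed as follows (a sketch with NumPy/SciPy; the 100-city instance is randomly generated):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Computes the hardness-related statistics discussed above: the standard
# deviation (SD) and coefficient of variation (CV) of the pairwise
# distances of a TSP instance.
coords = np.random.rand(100, 2)      # a random 100-city instance
dists = pdist(coords)                # condensed pairwise distance matrix
sd, cv = dists.std(), dists.std() / dists.mean()
print(f"SD = {sd:.4f}, CV = {cv:.4f}")
```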
_Extracting local spatial information._ In a TSP instance, cities are distributed in a 2D Euclidean space. The main characteristic that distinguishes TSP instances is the spatial distribution of cities. There exists a research topic that also focuses on learning the spatial distribution of points, namely point cloud classification. The point cloud is a type of practical 3D geometric data. Identifying point clouds is an object recognition task with many real-world applications, such as remote sensing, autonomous driving, and robotics [46]. Unlike image data made up of regular grids, the point cloud is unstructured data, as the distance between neighboring points is not fixed. As a result, applying classic convolutional operations on point clouds is difficult. To tackle this, researchers have designed several GNN architectures, such as PointNet++ [46], DGCNN [47], and Point Transformer [48]. In the message-passing formulation of these GNNs for point clouds, a common component is \((\mathbf{p}_{j}-\mathbf{p}_{i})\), where \(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\) indicate the positions of the current point and its neighboring points, respectively. Through this calculation, local neighborhood information, such as distances and angles between points, can be extracted [47]. As TSP instances can be viewed as 2D point clouds, extracting more local spatial information may help identify the TSP instances' class. We add this component to the message-passing formulation of GINES as follows:
\[\mathbf{x}_{i}^{\prime}=\text{MLP}\left((1+\epsilon)\cdot\mathbf{x}_{i}+\square_{j\in\mathcal{N}(i)}\,\text{ReLU}\left(h_{\boldsymbol{\Theta}}\left(\mathbf{x}_{j}-\mathbf{x}_{i}\right)+\mathbf{e}_{j,i}\right)\right) \tag{3}\]
where \(\square\) indicates the selected aggregator, which can be the SD, MAX, or SUM aggregator. \(h_{\boldsymbol{\Theta}}\) is a neural network that defaults to a single linear layer transforming the local spatial information. The whole neural network architecture of our GINES is shown in Figure 1. We adopt three GINES layers to extract the salient spatial information from TSP graphs and apply graph-level Sum pooling after each GINES layer to obtain the entire graph's representation at all depths of the model. Then we concatenate these representations and feed them into the following two linear layers. We make full use of the learned representations in the first two GINES layers as they may have better feature generalization ability [44].
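The message passing of Eq. (3) can be sketched with PyTorch Geometric's `MessagePassing` base class, as below. This is a minimal illustration, not the authors' released code: the class name, dimensions, and the use of the built-in `'std'` aggregation string (available in recent PyTorch Geometric releases) are our assumptions.

```python
import torch
from torch_geometric.nn import MessagePassing

# Minimal sketch of the GINES layer in Eq. (3). The aggregator string
# ('sum', 'max', or 'std') selects the SUM, MAX, or SD aggregation.
class GINESConv(MessagePassing):
    def __init__(self, in_dim, out_dim, edge_dim=1, eps=0.0, aggr='std'):
        super().__init__(aggr=aggr)
        self.eps = eps
        self.h_theta = torch.nn.Linear(in_dim, in_dim)     # transforms (x_j - x_i)
        self.edge_lin = torch.nn.Linear(edge_dim, in_dim)  # lifts edge features
        self.mlp = torch.nn.Sequential(torch.nn.Linear(in_dim, out_dim),
                                       torch.nn.ReLU(),
                                       torch.nn.Linear(out_dim, out_dim))

    def forward(self, x, edge_index, edge_attr):
        out = self.propagate(edge_index, x=x, edge_attr=edge_attr)
        return self.mlp((1 + self.eps) * x + out)

    def message(self, x_i, x_j, edge_attr):
        # local spatial information (x_j - x_i) plus transformed edge features
        return torch.relu(self.h_theta(x_j - x_i) + self.edge_lin(edge_attr))
```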
## IV Experiments
### _Dataset_
We evaluate the proposed GINES on two public TSP algorithm selection datasets. The first dataset is generated to assess the Instance Space Analysis (ISA) framework [4], and the second is for evaluating the proposed CNN-based selector [11]. The main difference between the two datasets is the size of the instances. The TSP instances in the first dataset all contain 100 cities, while instances in the second dataset are relatively larger and contain 1000 cities. Applying the proposed model to two different datasets helps us examine its adaptability and compare it with other models. The following part is a detailed description of the two datasets.
Fig. 1: The GINES neural network architecture for TSP algorithm selection
_TSP-ISA dataset_ includes 1330 TSP instances with 100 cities each, divided equally into seven groups based on instance characteristics: RANDOM, CLKeasy, CLKhard, LKCCeasy, LKCChard, easyCLK-hardLKCC, and hardCLK-easyLKCC. Here Chained Lin-Kernighan (CLK) and Lin-Kernighan with Cluster Compensation (LKCC) are two well-known local search algorithms for solving TSP. The aim is to predict whether CLK or LKCC is better for each instance, which we can view as a binary classification task. Here LKCC is the Single-Best-Solver, which means LKCC achieves the best average performance across the entire set of problem instances. As the dataset is not balanced, selecting LKCC for all instances can achieve \(71.43\%\) accuracy. We apply random oversampling to obtain a balanced training set.
_TSP-CNN dataset_ includes 1000 TSP instances with 1000 cities each. There are two algorithms to select between: a genetic algorithm, Edge-Assembly-Crossover (EAX), and a local search algorithm, the Lin-Kernighan Heuristic (LKH). Here the TSP instances are well-designed to be easy for one algorithm and hard for the other. The TSP-CNN dataset is well-balanced, and choosing EAX, the Single-Best-Solver, for all instances can only achieve \(49\%\) accuracy. In addition, the entire dataset is randomly divided into ten folds, allowing us to conduct the same 10-fold cross-validation and make a fair comparison. Researchers generate multiple images to represent TSP instances and apply CNN models to select algorithms [11]. They have shared the trained CNN model files, and we can load the trained models to obtain the test accuracy of CNNs.
### _Baseline model_
In addition to comparing with the model proposed in other articles, we create several baseline models to be TSP selectors. The baseline models can be divided into two groups: traditional feature-based models and GNNs. We can recognize which representation form and corresponding learning model performs better by comparing baseline models with the proposed GINES.
For traditional feature-based models, we use Random Forest (RF) as the classifier as it performs well in the TSP algorithm selection task [11]. The only difference is the TSP features we choose as the input data. In this work, we evaluate four groups of handcrafted TSP features:
* All140: All 140 TSP features defined by R package named \(salesperson\)[8]. These features can be divided into 10 groups, including Minimum Spanning Tree (MST) features, kNNG features, Angle features, etc. We also use this package to calculate the following groups of features.
* Top15: after the feature selection procedure, [11] propose the best 15 TSP features for the TSP-CNN dataset. Most of those features are statistical values of strong connected components of kNNG, and others are MST features and Angle features.
* MST19: all the 19 MST features defined by \(salesperson\), are multiple statistical values of MST distance and depth. Here we study the MST features as MST is strongly related to TSP and can be used to solve TSP approximately. Besides, MST features are essential features for algorithm selection according to the previous studies [11].
* kNNG51: all the 51 kNNG features defined by \(salesperson\), including statistical values of kNNG distances, as well as the weak/strong connected components of the kNNG.
For GNN baseline models, we apply two popular GNNs: GCN and GINE. Previous research has tried to apply GCN to TSP algorithm selection but discovered that it performed worse than CNN [12]. GINE is well-known for its powerful representation learning capabilities, outperforming GCN on a variety of graph-level classification tasks. Though GINE is not new, to the best of our knowledge, this is the first time that it has been applied to the algorithm selection task. The architecture and parameter settings of baseline GNN models are the same as those of our GINES. The main modification is replacing the corresponding three GINES layers with GCN and GINE. To analyze the role and performance of aggregators, we test GINES with three different aggregators: MAX (GINESMAX), SUM (GINES-SUM) and SD (GINES-SD). For a fair comparison with other works, we process the datasets in the same way as [11]. On the TSP-ISA dataset, we randomly split the entire dataset into training and test datasets and use 10-fold cross-validation to get the average performance. On the TSP-CNN dataset, we apply exactly the same 10-fold cross-validation in [11] as the data grouping information was released. We set the hidden channel dimension for the GNN layers to be \(32\) and apply PairNorm [49] after each GNN layer to address the over-smoothing problem. We use an Adam optimizer with a \(0.01\) learning rate to reduce Cross Entropy loss and train 100 epochs for each model. We apply the Early-Stopping method with 20 patience in GNN models when dealing with the TSP-CNN dataset and fix all the random seeds to be \(41\)[41] to ensure the results are reproducible. All experiments were performed on a laptop with Intel Core i7-9750H, and the code is built on Pytorch-geometric [50].
### _Result and analysis_
The average classification accuracy of each model on the TSP-ISA dataset is listed in Table I. The best model's performance is bolded, while the second-best model's performance is underlined. We can observe that out of all feature-based approaches, RF with all 140 features performs the best. Employing fewer features for classification results in substantially lower accuracy. Well-selected features in Top15 can help to keep RF's performance, and MST features are more important than kNNG features in this task. GNNs outperform all the traditional feature-based models. GNNs can automatically extract valuable features from kNNG, and they perform much better than RF with handcrafted kNNG features. Among all GNN architectures, GCN performs relatively poorly compared to GINE and GINES with different aggregators. This result suggests that an elaborate GNN design for this specific application is necessary. By adding a spatial information extractor,
our GINES can reach a higher accuracy than the original GINE. We test all three aggregators and the results show that they have comparable performance. As the proposed method does not require any domain knowledge of TSP and has high prediction accuracy, it can be a promising approach in this field.
The experimental results on the TSP-CNN dataset are shown in Table II. First, we apply the feature-based models and find that RF with MST features achieves the best performance; again, MST features are more valuable than kNNG features for TSP algorithm selection. Next, we load the trained CNN model files and test them to obtain the CNNs' performance, which shows that CNN with Points+MST images is better than CNN with other image inputs. Finally, we test the proposed GINES and the baseline GNN models. GINES can outperform the CNN models but is still worse than the feature-based models. The main reason may be that the handcrafted features fed into RF are heavily engineered, while the GNN models fail to extract some crucial features, such as MST and clustering features. Moreover, there are many more nodes per instance in this dataset, leaving less salient spatial information to be learned. In GINES, we pick the SD aggregator in the message-passing procedure, because the standard deviation of the TSP distance matrix is closely related to problem hardness. We also test the prediction accuracy of GINES-MAX and GINES-SUM; the results show that the SD aggregator is the better choice for the TSP-CNN dataset.
Table III summarizes the properties of the feature-based model, CNN, and GINES on the TSP algorithm selection task. Compared to deep learning models such as CNN and the proposed GINES, the traditional feature-based method suffers from the following _shortcomings_. First, substantial domain expertise is required to design features. Second, as shown in Figure 2, the important features of the TSP-ISA and TSP-CNN datasets are significantly different, indicating that tedious feature engineering is required to choose valuable features. Finally, these selected features are probably inapplicable to other routing problems. The experimental results in Table II show that the proposed GINES is a competitive method that can slightly outperform CNN in prediction accuracy. GINES has several other advantages over CNN. First, CNN takes multiple images as inputs, i.e., Points, MST, and kNNG images. Generating these images can be burdensome, and it is unclear which image best represents TSP instances. In contrast, GINES directly takes the cities' coordinates and distance matrices as inputs, and we do not need to prepare intermediate representations like
Fig. 2: The top 10 most important features for the TSP-ISA and TSP-CNN datasets
images. Second, when generating images for CNN, several problem-irrelevant parameters must be set, such as the image size, dot size, and line width in the MST and kNNG images. Tuning these parameters can be a heavy workload, although in theory they should not affect the learned mapping from instances to algorithms. In GINES, on the other hand, TSP instances are treated as graphs, and there are few instance-representation parameters to design or adjust. Besides, when setting the image resolution for the CNN method, we must account for the number of cities in the TSP instance; otherwise, the representational capacity of the image is inadequate and problem instance information is lost. Finally, generating images for TSP instances and applying CNN to select algorithms is feasible mainly because cities in the TSP are homogeneous and distributed in 2D Euclidean space. For more complex routing problems, applying the CNN-based method becomes challenging. For VRP algorithm selection, it is hard to differentiate the depot from the customers with image representations, whereas in GINES we can simply add node features to tell them apart. For routing problems in non-Euclidean spaces such as ATSP, drawing the problem instance on a 2D plane is nearly impossible, whereas GINES naturally recognizes neighborhoods in ATSP, and its message-passing formulation can be modified to aggregate more valuable edge features.
## V Conclusion
In this work, we propose a novel GNN named GINES to select algorithms for TSP. By adopting a suitable aggregator and local neighborhood feature extractor, this model can learn useful spatial information of TSP instances and outperform traditional feature-based models and CNNs on public algorithm selection datasets. GINES handles TSP instances as graphs and only takes cities' coordinates and distances between them as inputs. Thus no intermediate representations for problem instances, such as features or images, need to be designed and generated before model training. In contrast to converting TSP instances to images, the graph representation is more natural and efficient, as it neither introduces problem-irrelevant parameters nor loses problem-relevant information. The proposed GINES is promising as it is easy to generalize to other routing problems. For example, we can distinguish nodes and routes in the problem instances by adding node features and edge features. This work can be a good starting point for selecting algorithms or predicting instance hardness for combinatorial optimization problems defined on graphs. In the future, we will explore GINES architectures for more complex problems like ATSP, VRP, and real-world problems.
|
2307.15916 | Opportunistic Air Quality Monitoring and Forecasting with Expandable
Graph Neural Networks | Air Quality Monitoring and Forecasting has been a popular research topic in
recent years. Recently, data-driven approaches for air quality forecasting have
garnered significant attention, owing to the availability of well-established
data collection facilities in urban areas. Fixed infrastructures, typically
deployed by national institutes or tech giants, often fall short in meeting the
requirements of diverse personalized scenarios, e.g., forecasting in areas
without any existing infrastructure. Consequently, smaller institutes or
companies with limited budgets are compelled to seek tailored solutions by
introducing more flexible infrastructures for data collection. In this paper,
we propose an expandable graph attention network (EGAT) model, which digests
data collected from existing and newly-added infrastructures, with different
spatial structures. Additionally, our proposal can be embedded into any air
quality forecasting models, to apply to the scenarios with evolving spatial
structures. The proposal is validated over real air quality data from
PurpleAir. | Jingwei Zuo, Wenbin Li, Michele Baldo, Hakim Hacid | 2023-07-29T07:17:43Z | http://arxiv.org/abs/2307.15916v1 | # Opportunistic Air Quality Monitoring and Forecasting with Expandable Graph Neural Networks
###### Abstract
Air Quality Monitoring and Forecasting has been a popular research topic in recent years. Recently, data-driven approaches for air quality forecasting have garnered significant attention, owing to the availability of well-established data collection facilities in urban areas. Fixed infrastructures, typically deployed by national institutes or tech giants, often fall short in meeting the requirements of diverse personalized scenarios, e.g., forecasting in areas without any existing infrastructure. Consequently, smaller institutes or companies with limited budgets are compelled to seek tailored solutions by introducing more flexible infrastructures for data collection. In this paper, we propose an expandable graph attention network (EGAT) model, which digests data collected from existing and newly-added infrastructures, with different spatial structures. Additionally, our proposal can be embedded into any air quality forecasting models, to apply to the scenarios with evolving spatial structures. The proposal is validated over real air quality data from PurpleAir.
Air Quality Forecasting, Opportunistic Forecasting, Graph Neural Networks, Urban Computing
## I Introduction
Air quality forecasting using data-driven models has gained significant attention in recent years, thanks to the proliferation of data collection infrastructures such as sensor stations and advancements in telecommunication technologies. These infrastructures are typically managed by national institutes (e.g., AirParif1, EPA2) or large companies (e.g., PurpleAir3) that specialize in air quality monitoring or forecasting services and products. Leveraging existing data collection infrastructures proves beneficial for initial research exploration or validating product prototypes. However, reliance on fixed infrastructures presents practical constraints when customization is required for specific tasks. For instance, certain monitoring areas may be inadequately covered or completely absent from the existing infrastructures, or the density of coverage may be insufficient. This issue particularly affects small and mid-sized industrial and academic players who have specific customization needs but face budget limitations that prevent them from building their own infrastructure from scratch.
Footnote 1: [https://www.airparif.asso.fr/](https://www.airparif.asso.fr/)
Footnote 2: [https://www.epa.gov/air-quality](https://www.epa.gov/air-quality)
Footnote 3: [https://www2.purpleair.com/](https://www2.purpleair.com/)
In addition to data collection, air quality forecasting models trained solely with data from public fixed infrastructures may not perform well in users' specific scenarios, such as forecasting at a higher spatial resolution. Deploying additional sensors is a cost-effective way to enrich the data and improve forecasting performance without building infrastructures from scratch. This targeted solution leads us to a practical question: _how can we make use of the data collected from existing infrastructures when integrating new sensor infrastructures?_
As depicted in Figure 1, the topological sensor network may change as the urban infrastructure evolves, resulting in varying network structures of air quality sensors. The data collected from the network \(G_{\tau}\) needs to be augmented with enriched data from newly installed sensors \(\Delta G_{\tau^{\prime}}\) and \(\Delta G_{\tau^{\prime\prime}}\). Training a model solely on recent data with \(G_{\tau^{\prime\prime}}\) would overlook valuable information contained in the historical data with \(G_{\tau}\) and \(G_{\tau^{\prime}}\).
In this paper, we propose an expandable graph attention network (EGAT) that effectively integrates data with various graph structures. This approach is versatile and can be seamlessly embedded into any existing air quality forecasting model. Furthermore, it applies to scenarios where sensors are not installed, enabling accurate forecasting in such areas. We summarize our approach's main advantages as follows:
* **Less is more:** With fewer installed sensors, we can directly predict the air quality of unknown areas where no sensors are installed, and achieve performance comparable to models relying on extensive data collection infrastructures with more sensors.
* **Continual learning with self-adaptation:** The proposed model enables continuous learning from newly collected data with expanded sensor networks, demonstrating self-adaptability to different topological sensor networks.
* **Embeddable module with scalability:** The proposed module can be seamlessly integrated into any air quality forecasting model, enhancing its ability to forecast in real
Fig. 1: Expanded sensor networks and the related \(PM_{2.5}\) data at different time. The data was collected with \(PurpleairAPI\)[1].
world scenarios.
The rest of this paper starts with a review of the most related work. Then, we formulate the problems of the paper. Later, we present in detail our proposal, which is followed by the experiments on real-life datasets and the conclusion.
## II Related Work
### _Air Quality Forecasting_
Data-driven models for air quality forecasting have gained huge popularity recently. Recent work [2, 3] studies graph-based representations of air quality data by considering the sensor network as a graph structure, which extracts structural features between sensor data from a topological view. Air quality forecasting can then be formulated as a spatio-temporal forecasting problem.
Works like DCRNN [4], STGCN [5] and Graph WaveNet [6], have shown promising results in traffic forecasting tasks. These models can be adapted to air quality forecasting tasks owing to the shared spatio-temporal features present in the data. However, in practice, the above-mentioned models often overlook the evolving nature of sensor networks as more data collection infrastructures are incrementally built. Consequently, these models require re-training from scratch on the most recent data that reflects the evolved sensor network. It may result in the loss of valuable information contained in outdated data collected from different network configurations.
### _Expandable Graph Neural Networks_
In the field of graph learning, several works, such as ContinualGNN [7] and ER-GNN [8], have incorporated the concept of Continual Learning to capture the evolving patterns within graph nodes. While these approaches are valuable, it is important to consider spatio-temporal features in air quality forecasting tasks. Designed for traffic forecasting, TrafficStream [9] considers evolving patterns on both temporal and spatial axes; ST-GFSL [10] introduces a meta-learning model for cross-city spatio-temporal knowledge transfer. However, these works primarily focus on shared (meta-)knowledge between nodes, and give less attention to expandable graph structures. Basically, spectral-based graph neural networks (GNNs) face challenges when scaling to graphs with different structures due to the complexity of reconstructing the Laplacian matrix. To address this issue, our paper explores the use of spatial-based GNNs, such as Graph Attention Networks (GAT) [11], for expandable graph learning in air quality forecasting tasks.
## III Problem Formulation
**Definition 1**.: (Air Quality Forecasting). Given an air quality sensor network \(G=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}=\{v_{1},...,v_{N}\}\) is a set of \(N\) sensor nodes/stations and \(\mathcal{E}=\{e_{1},...,e_{E}\}\) is a set of \(E\) edges connecting the nodes, the air quality data \(\{AQI_{t}\}_{t=1}^{T}\) and meteorological data \(\{M_{t}\}_{t=1}^{T}\) are collected over the \(N\) stations, where \(T\) is current timestamp. We aim to build a model \(f\) to predict the \(AQI\) over the next \(T_{p}\) timestamps.
To simplify, we denote the input data as \(\mathcal{X}\)= \(\{AQI_{t},M_{t}\}_{t=1}^{T}\) = \(\{x_{t}\}_{t=1}^{T}\in\mathbb{R}^{N\times F\times T}\). Each node contains \(F\) features representing \(PM_{2.5}\), \(PM_{10}\), humidity, temperature, etc. As \(PM_{2.5}\) is the _most reported and most difficult-to-predict_ measure [12], we take \(PM_{2.5}\) as the AQI prediction target \(\mathcal{Y}\)=\(\{y_{t}\}_{t=T+1}^{T+T_{p}}\in\mathbb{R}^{N\times T_{p}}\).
**Definition 2**.: (Expanded Sensor Network). Given a sensor network at \(\tau\): \(G_{\tau}\) = \(\{\mathcal{V}_{\tau},\mathcal{E}_{\tau}\}\) with \(N_{\tau}\) sensors, the network at \(\tau^{\prime}\): \(G_{\tau^{\prime}}\)=\(G_{\tau}\)+\(\Delta G_{\tau}\) = \(\{V_{\tau^{\prime}},E_{\tau^{\prime}}\}\) expands \(G_{\tau}\) to \(N_{\tau^{\prime}}\) sensors.
We aim to build a model \(f\), which is first trained over a dataset \(\{\mathcal{X}_{\tau}\}\) on a sensor network \(G_{\tau}\) = \(\{\mathcal{V}_{\tau},\mathcal{E}_{\tau}\}\), and can be incrementally trained over \(\{\mathcal{X}_{\tau^{\prime}}\}\) on an expanded network \(G_{\tau^{\prime}}\). For inference, given a sequence \(\mathcal{X}\in\mathbb{R}^{N_{\tau^{\prime}}\times F\times T}\) and a sensor network \(G_{\tau^{\prime}}\), the model \(f\) predicts the \(AQI\) for the next \(T_{p}\) time steps, \(\mathcal{Y}\)=\(\{y_{t}\}_{t=T+1}^{T+T_{p}}\in\mathbb{R}^{N_{\tau^{\prime}}\times T_{p}}\), where \(N_{\tau^{\prime}}\geq N_{\tau}\).
## IV Our proposals
In this paper, we adopt Graph WaveNet [6] as the backbone model, which consists of \(l\) Spatio-Temporal (ST) Blocks. However, our proposed EGAT can be integrated to any spatio-temporal models with adaptations on graph network layers. We employ Temporal Convolution Network (TCN) to encode the temporal dynamics of the AQIs. Specifically, as shown in Figure 2, we designed an Expandable Graph Attention Network (EGAT) to learn from the data with evolving graph structures. The output forecasting layer takes skip connections on the output of the final ST Block and the hidden states after each TCN module for final predictions.
### _Temporal Dynamics with Temporal Convolution Network_
Compared to RNN-based approaches, Temporal Convolution Network (TCN) [6] allows handling long-range sequences in a parallel manner, which is critical in industrial scenarios considering the model efficiency.
Given an input air quality sequence embedding \(H\)= \(f_{linear}(\mathcal{X})\in\mathbb{R}^{N\times d\times T}\), a filter \(\mathcal{F}\in\mathbb{R}^{1\times\mathrm{K}}\), \(\mathrm{K}\) is the temporal filter size, \(\mathrm{K}=2\) by default. The dilated causal convolution operation of \(H\) with \(\mathcal{F}\) at time \(t\) is represented as:
\[H\star\mathcal{F}(t)=\sum_{s=0}^{\mathrm{K}}\mathcal{F}(s)H(t-\textbf{d}\times s )\in\mathbb{R}^{N\times d\times T^{\prime}} \tag{1}\]
where \(\star\) is the convolution operator, **d** is the dilation factor, \(d\) is the embedding size, \(T^{\prime}\) is the generated sequence length. We define the output of a gated TCN layer as:
\[\textbf{h}=tanh(W_{\mathcal{F}^{1}}\star H)\odot\sigma(W_{\mathcal{F}^{2}} \star H)\in\mathbb{R}^{N\times d\times T^{\prime}} \tag{2}\]
where \(W_{\mathcal{F}^{1}}\), \(W_{\mathcal{F}^{2}}\) are learnable parameters, \(\odot\) is the element-wise multiplication operator, \(\sigma(\cdot)\) denotes Sigmoid function.
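The gated TCN of Eqs. (1)-(2) can be sketched in a few lines of PyTorch; the class name and the default kernel size \(\mathrm{K}=2\) follow the text, while treating the \(N\) nodes as the batch dimension is an implementation assumption.

```python
import torch

class GatedTCN(torch.nn.Module):
    """Gated dilated causal convolution over (N, d, T) sequences, Eqs. (1)-(2)."""
    def __init__(self, d, kernel=2, dilation=1):
        super().__init__()
        self.filt = torch.nn.Conv1d(d, d, kernel, dilation=dilation)  # tanh branch
        self.gate = torch.nn.Conv1d(d, d, kernel, dilation=dilation)  # sigmoid branch

    def forward(self, h):                          # h: (N, d, T)
        return torch.tanh(self.filt(h)) * torch.sigmoid(self.gate(h))  # (N, d, T')
```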
### _Expandable Graph Attention Networks (EGATs)_
Graph attention network (GAT) [11], as a weighted message-passing process, models neighboring nodes' relationships via their inherent feature similarities. Given a set of air pollution features at time \(t\): \(\textbf{h}(t)\) = \(\{h_{1},h_{2},...,h_{N}\},h_{i}\in\mathbb{R}^{d}\) as input of a graph attention layer, following [11], we define the attention score between nodes \(i\) and \(j\) as:
\[\alpha_{ij}=\frac{\exp\left(\mathrm{a}\left(Wh_{i},Wh_{j}\right)\right)}{\sum_ {k\in\mathcal{N}_{i}}\exp\left(\mathrm{a}\left(Wh_{i},Wh_{k}\right)\right)} \tag{3}\]
where \(W\in\mathbb{R}^{d\times d^{\prime}}\) is a weight matrix, \(\mathrm{a}\) is the attentional mechanism as mentioned in [11]: \(\mathbb{R}^{d^{\prime}}\times\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}\), and \(\mathcal{N}_{i}\) is a set of neighbor nodes of \(v_{i}\). A _multi-head attention_ with a nonlinearity \(\sigma\) is employed to obtain abundant spatial representation of \(v_{i}\) with features from its neighbor nodes \(\mathcal{N}_{i}\):
\[h_{i}^{\prime}=\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j\in\mathcal{N}_{i}} \alpha_{ij}W^{k}h_{j}\right) \tag{4}\]
Therefore, the GAT layer in \(i\)-th ST Block can be defined as:
\[H_{i+1}=\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\mathcal{A}\mathbf{h}_{i}W^{k}\right) \tag{5}\]
where \(\mathcal{A}\)=\(\{\alpha_{ij}\}\in\mathbb{R}^{N\times N}\), \(H_{i+1}\in\mathbb{R}^{N\times d^{\prime}\times T}\), \(W^{k}\in\mathbb{R}^{d\times d^{\prime}}\).
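For reference, Eqs. (3)-(5) correspond to a standard multi-head GAT layer; a usage sketch with PyTorch Geometric's GATConv follows, where concat=False averages the \(K\) heads as in Eq. (4), and the node count and edge set are placeholders.

```python
import torch
from torch_geometric.nn import GATConv

# K = 4 heads averaged (concat=False), matching the 1/K sum in Eq. (4)
gat = GATConv(in_channels=32, out_channels=32, heads=4, concat=False)
h = torch.randn(100, 32)                        # embeddings for N = 100 sensors
edge_index = torch.randint(0, 100, (2, 400))    # kNN edges as a (2, E) tensor
h_next = gat(h, edge_index)                     # (100, 32) updated representations
```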
When expanding the graph with new sensor nodes, we scale up the GAT layers on new nodes while conserving the information learned over the old ones. Basically, new nodes can be considered during both model's training and inference.
#### Iv-B1 Expandable Graph Network Training
We consider that the sensor network expands with the newly built infrastructures. The model learned from \(G_{\tau}\) can be updated with recent data over \(G_{\tau^{\prime}}\) without re-training the model from scratch.
From Equation 5, with new embeddings \(\mathbf{h}_{\tau^{\prime}}\)\(\in\)\(\mathbb{R}^{N_{\tau^{\prime}}\times d\times T}\), the weight matrix \(W^{k}\) stays unchanged; only the adjacency matrix requires updates: \(\mathcal{A}_{\tau}\in\mathbb{R}^{N_{\tau}\times N_{\tau}}\rightarrow\mathcal{A}_{\tau^{\prime}}\in\mathbb{R}^{N_{\tau^{\prime}}\times N_{\tau^{\prime}}}\). We re-define \(\mathcal{N}_{i}\)=\(\{\mathcal{N}_{i,\tau},\mathcal{N}_{i,\tau^{\prime}}\}\) as the \(k\) nearest neighbors of \(v_{i}\), where \(\mathcal{N}_{i,\tau}\) denotes neighbors from existing nodes and \(\mathcal{N}_{i,\tau^{\prime}}\) those from newly added nodes. Given a set of new sensors \(\Delta\mathcal{V}_{\tau}\), we obtain new edge connections \(\Delta\mathcal{E}_{\tau}\)=\(\{\mathcal{N}_{i}\}_{i=1}^{\Delta N}\), where \(\Delta N\)=\(N_{\tau^{\prime}}-N_{\tau}\), with \(\mathcal{O}(N_{\tau^{\prime}}\Delta N)\) time for distance computations. According to Equation 3, the attentional mechanism applies to \(\Delta\mathcal{E}_{\tau}\) in \(\mathcal{O}(\Delta Nk)\) time. Therefore, the attention score between nodes \(i,j\) can be re-defined as:
\[\alpha_{ij}=\frac{\exp\left(\mathrm{a}\left(Wh_{i},Wh_{j}\right)\right)}{\sum \limits_{k\in\mathcal{N}_{i,\tau}}\exp\left(\mathrm{a}\left(Wh_{i},Wh_{k} \right)\right)+\sum\limits_{k\in\mathcal{N}_{i,\tau^{\prime}}}\exp\left( \mathrm{a}\left(Wh_{i},Wh_{k}\right)\right)} \tag{6}\]
In this manner, we can update the graph layer, i.e., \(\mathcal{A}_{\tau^{\prime}}\) incrementally by considering cached attention scores over \(\mathcal{E}_{\tau}\), reducing the time complexity to \(\mathcal{O}(N_{\tau^{\prime}}\Delta N+\Delta Nk)\). This is much faster than rebuilding the entire graph layer (\(\mathcal{O}(N_{\tau^{\prime}}^{2})\)).
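The incremental update can be sketched as follows; the function and variable names are mine, and the sketch adds symmetric kNN edges for each new sensor (an approximation of the re-defined neighborhoods above) while leaving cached attention scores over old edges untouched.

```python
import numpy as np

def expand_edges(coords_old, coords_new, edge_index_old, k=8):
    """Append kNN edges for newly installed sensors without rebuilding the graph.

    Only O(N' * dN) distances are computed (new nodes vs. all nodes),
    instead of the O(N'^2) cost of reconstructing the full adjacency."""
    coords_all = np.vstack([coords_old, coords_new])
    n_old, n_new = len(coords_old), len(coords_new)
    new_edges = []
    for i in range(n_old, n_old + n_new):
        d = np.linalg.norm(coords_all - coords_all[i], axis=1)
        d[i] = np.inf                       # exclude the self-loop
        for j in np.argsort(d)[:k]:         # k nearest neighbors of the new node
            new_edges.append((i, j))        # attention on these edges is computed fresh;
            new_edges.append((j, i))        # scores cached over old edges are reused
    return np.hstack([edge_index_old, np.array(new_edges).T])
```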
#### Iv-B2 Expandable Graph Network Inference
When no sensors are installed in (unseen) areas, _Spatial Smoothing_ can be performed on the unseen node \(v_{i}\). Based on its spatial location, we incorporate predictions from its neighbor nodes:
\[Y_{i}=\sum_{j\in\mathcal{N}_{i}}a_{ij}Y_{j},\qquad\mathcal{N}_{i}=\{v_{j}\,|\,dist(v_{i},v_{j})<\varepsilon\} \tag{7}\]
where \(\mathcal{N}_{i}\) is the first-order neighbors of \(v_{i}\) (excluding \(v_{i}\), as the data on \(v_{i}\) is unavailable), \(a_{ij}=1-\frac{dist(v_{i},v_{j})}{\sum_{k\in\mathcal{N}_{i}}dist(v_{i},v_{k})}\) is the inverse Euclidean Distance (ED) between \(v_{i}\) and \(v_{j}\), \(\varepsilon\) is a threshold which decides the neighboring sensor nodes.
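A small sketch of Eq. (7), assuming the neighbor forecasts and their distances to the unseen node have already been gathered:

```python
import numpy as np

def spatial_smoothing(y_neighbors, dists):
    """Eq. (7): weights a_ij = 1 - d_ij / sum_k d_ik over first-order neighbors.

    y_neighbors: (n, T_p) forecasts of the neighbors within radius eps
    dists:       (n,) distances from the unseen node to those neighbors"""
    w = 1.0 - dists / dists.sum()      # closer neighbors receive larger weights
    return w @ y_neighbors             # (T_p,) prediction for the unseen node
```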
We propose a robust _Spatial Representation Smoothing_ technique that considers richer spatial relationships, in the embedding space, between unseen and existing nodes. Given an unseen node \(v_{i}\), its embedding \(h_{i}\) can be defined as follows:
\[h_{i}=\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j\in\mathcal{N}_{i}}a_{ij}W^{ k}h_{j}\right) \tag{8}\]
where \(a_{ij}\) is the inverse ED between \(v_{i}\) and \(v_{j}\), \(W^{k}\) is the learned weights in each attention head as shown in Equation 4.
### _Output Forecasting Layer_
For final predictions, we take skip connections as shown in [6] on the final ST Block's output and hidden states after each TCN. The concatenated output features are defined as:
\[O=(\mathbf{h}_{0}W_{s}^{0}+b_{s}^{0})\,\|\,\dots\,\|\,(\mathbf{h}_{l-1}W_{s}^{l-1}+b_{s}^{l-1})\,\|\,(\mathbf{h}_{l}W_{s}^{l}+b_{s}^{l}) \tag{9}\]
where \(O\in\mathbb{R}^{N\times(l+1)d}\), \(W^{i}_{s}\), \(b^{i}_{s}\) are learnable parameters for the convolution layers. Two fully-connected layers are added to project the concatenated features into the desired dimension:
\[\hat{\mathcal{Y}}=(ReLU(OW^{l}_{fc}+b^{1}_{fc}))W^{2}_{fc}+b^{2}_{fc}\in \mathbb{R}^{N\times T_{p}} \tag{10}\]
where \(W^{1}_{fc}\), \(W^{2}_{fc}\), \(b^{1}_{fc}\), \(b^{2}_{fc}\) are learnable parameters. We use mean absolute error (MAE) [6] as loss function for training.
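A sketch of the output layer of Eqs. (9)-(10) follows; interpreting \(W_{s}^{i}\) as per-block linear skip projections and passing a list of collected hidden states are my assumptions for illustration.

```python
import torch

class OutputLayer(torch.nn.Module):
    """Eqs. (9)-(10): concatenated skip connections, then two FC layers."""
    def __init__(self, d, n_blocks, horizon):
        super().__init__()
        self.skips = torch.nn.ModuleList(
            [torch.nn.Linear(d, d) for _ in range(n_blocks + 1)])
        self.fc1 = torch.nn.Linear((n_blocks + 1) * d, d)
        self.fc2 = torch.nn.Linear(d, horizon)       # horizon = T_p

    def forward(self, hiddens):                      # list of l+1 (N, d) states
        o = torch.cat([lin(h) for lin, h in zip(self.skips, hiddens)], dim=-1)
        return self.fc2(torch.relu(self.fc1(o)))     # (N, T_p) forecasts
```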
## V Experiments
In this section, we demonstrate the effectiveness of EGAT with real-life air quality datasets. The experiments were designed to answer the following questions:
* **Q1**: _Continual learning with self-adaptation:_ How well can our model make use of historical data with different graph structures to improve the model's performance?
Fig. 2: Global system architecture of EGAT
* **Q2**: _Flexible inference on unknown areas:_ How well does our model predict air quality in areas without any installed sensors, i.e., with no available data over these areas?
### _Experimental Settings_
#### V-A1 Dataset description
We base our experiments on real air quality data [13] collected via the PurpleAir API [1], which contains the AQIs and meteorological data in San Francisco (within \(10\,km^{2}\)) between 2021-10-01 and 2023-05-15. The datasets are split into training, validation, and test sets with a ratio of _7:1:2_. Table I shows more details of the collected datasets. For PurpleAirSF-1H, we use the last 12 hours of data to predict the AQI (i.e., PM2.5) for the next 12 hours. For PurpleAirSF-6H, we consider the last 72 hours to predict the next 72 hours.
#### V-A2 Execution and Parameter Settings
We take Graph WaveNet as the backbone model; however, our proposal can be integrated into any air quality forecasting model. All tests are run on a single Tesla A100 GPU with 40 GB of memory. The forecasting accuracy of all tested models is evaluated by three metrics [2]: mean absolute error (MAE), root-mean-square error (RMSE), and mean absolute percentage error (MAPE).
#### V-A3 Baselines
We compare EGAT with various model variants and with Graph WaveNet [6]:
* **GraphWaveNet** (GWN) [6]: Trained on expanded graph data, as it is non-adaptable to different graph structures.
* **EGAT-Rec**: EGAT trained on data with expanded graph;
* **EGAT-FI-SS**: EGAT trained on data over the historical graph; Flexible Inference (FI) with _Spatial Smoothing_ is applied;
* **EGAT-FI-SRS**: EGAT trained on historical data; FI with _Spatial Representation Smoothing_ is employed;
* **EGAT**: EGAT trained on both historical and recent data.
### _Experimental Results_
Table II and Table III report the average errors (12/72H) with respect to the expanding node ratio and expanding time ratio determined by the deployment. **Bold** values indicate the best results, while underlined values represent the second-best.
EGAT consistently outperforms the other models in continual learning across different node ratios and time ratios, owing to its ability to leverage rich data from various graph structures. GWN performs better than EGAT-Rec, which can be attributed to the k-order diffusion process in GCN. Even so, EGAT surpasses GWN by incorporating historical graph data, further validating our proposal for graph adaptation (**Q1**).
When forecasting in unknown areas, EGAT-FI-SS provides approximate AQIs through _Spatial Smoothing_; however, its performance deteriorates with a high number of expanded nodes due to spatial sparsity. EGAT-FI-SRS performs better than EGAT-FI-SS, sometimes even better than GWN, and is comparable to EGAT, validating _Spatial Representation Smoothing_ as a viable approach for prediction in unknown areas (**Q2**).
## VI Perspectives and Conclusion
In this paper, we propose an Expandable Graph Attention Network (EGAT) for air quality monitoring and forecasting. It incorporates both historical and recent graph data, sparing industrial players with limited budgets from having to build their own infrastructures from scratch. EGAT also allows predicting air quality in areas without installed sensors. Future work includes comparing additional expandable graph learning models and exploring transfer learning and node alignment techniques to reduce re-training effort in industrial scenarios.
|
2310.07711 | Growing Brains: Co-emergence of Anatomical and Functional Modularity in
Recurrent Neural Networks | Recurrent neural networks (RNNs) trained on compositional tasks can exhibit
functional modularity, in which neurons can be clustered by activity similarity
and participation in shared computational subtasks. Unlike brains, these RNNs
do not exhibit anatomical modularity, in which functional clustering is
correlated with strong recurrent coupling and spatial localization of
functional clusters. Contrasting with functional modularity, which can be
ephemerally dependent on the input, anatomically modular networks form a robust
substrate for solving the same subtasks in the future. To examine whether it is
possible to grow brain-like anatomical modularity, we apply a recent machine
learning method, brain-inspired modular training (BIMT), to a network being
trained to solve a set of compositional cognitive tasks. We find that
functional and anatomical clustering emerge together, such that functionally
similar neurons also become spatially localized and interconnected. Moreover,
compared to standard $L_1$ or no regularization settings, the model exhibits
superior performance by optimally balancing task performance and network
sparsity. In addition to achieving brain-like organization in RNNs, our
findings also suggest that BIMT holds promise for applications in neuromorphic
computing and enhancing the interpretability of neural network architectures. | Ziming Liu, Mikail Khona, Ila R. Fiete, Max Tegmark | 2023-10-11T17:58:25Z | http://arxiv.org/abs/2310.07711v1 | # Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks
###### Abstract
Recurrent neural networks (RNNs) trained on compositional tasks can exhibit functional modularity [1; 2], in which neurons can be clustered by activity similarity and participation in shared computational subtasks. Unlike brains, these RNNs do not exhibit _anatomical modularity_, in which functional clustering is correlated with strong recurrent coupling and spatial localization of functional clusters. Contrasting with functional modularity, which can be ephemerally dependent on the input [2], anatomically modular networks form a robust substrate for solving the same subtasks in the future. To examine whether it is possible to grow brain-like anatomical modularity, we apply a recent machine learning method, brain-inspired modular training (BIMT), to a network being trained to solve a set of compositional cognitive tasks. We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected. Moreover, compared to standard \(L_{1}\) or no regularization settings, the model exhibits superior performance by optimally balancing task performance and network sparsity. In addition to achieving brain-like organization in RNNs, our findings also suggest that BIMT holds promise for applications in neuromorphic computing and enhancing the interpretability of neural network architectures.
## 1 Introduction
A powerful way for networks to generalize is through modularity: If seen and unseen tasks in the world consist of combinations of subtasks, then a new task can be quickly solved by decomposing it into the set of previously seen subtasks, and tackling those based on prior learning. Recent work shows that RNNs trained on a set of tasks drawn by combining subtasks from a common dictionary begin to exhibit functional modularity, with similar activity profiles across neurons responding to the same subtask. However, the formed clusters were not anatomical. Anatomical clustering with localization of function is a central feature of brains [3]: for example, visual processing for object recognition is localized to the ventral visual pathway while the initiation of voluntary movements is confined to a few motor and premotor cortical regions. Anatomical modularization can facilitate continual learning: if it takes the form of spatial localization, then new inputs can be easily routed to the module; if it takes the form of recurrent connectivity, it provides lasting substructures for solving specific computations on future tasks. By contrast, functional clustering alone can be ephemeral, with groupings that might be defined primarily by correlations in the inputs [2]. When the input correlations change, functional modules could disappear.
Recently, the method of brain-inspired modular training (BIMT) was proposed as a way to make artificial neural networks modular and more interpretable [4]. The key idea of BIMT is to encourage local neural connections via two optimization terms: distance-dependent weight regularization and discrete neuron swapping. Here we ask whether BIMT can answer a fundamental question about neuroscience: Can spatial constraints and wiring costs [5; 6; 7] together lead to the emergence of anatomical modules that are also functionally distinct?
We study how BIMT can lead to the emergence of spatial modules in a multitask learning setting relevant to cognitive systems neuroscience, with two sets of combinatorially constructed tasks: the 20-Cog-tasks [1] and the Mod-Cog-tasks [2]. We train recurrent neural networks (RNNs) on these tasks with BIMT in the supervised setup. We observe brain-like spatial organization emerging in the hidden layer of the RNN: neurons that are functionally similar are also localized in space (Figure 1c). Such locality and sparsity are gained with no sacrifice in performance, and are even accompanied by an improvement in performance. We introduce our methods in Section 2 and present results in Section 3. Due to limited space, the main paper focuses on the 20-Cog-tasks [1] and leaves the results for the Mod-Cog-tasks [2] to Appendix B.
## 2 Method
**Brain-inspired modular training (BIMT)** Biological neural networks (e.g., brains) differ from artificial ones in that biological ones restrain neuronal connections to be local in space, leading to anatomical modularity. Motivated by this observation, [4] proposed brain-inspired modular training (BIMT) to facilitate modularity and interpretability of artificial neural networks. The idea is to embed neurons into a geometric space and minimize the total connection cost by adding the connection
Figure 1: Training an RNN with BIMT on cognitive tasks. **a**: Visualization of the network. Each line represents a weight; blue/red means positive/negative weights; thickness corresponds to magnitudes. **b**: The hidden neurons are clustered into functional modules. **c**: These functional modules (distinguished by colors) are also clustered in space, visually resembling a brain. By contrast, \(L_{1}\) regularization leads to no anatomical modules. The network has (**d**) good performance, (**e**) high sparsity and good locality (**f**). **g** and **h**: Trade-off between performance (error) and sparsity. **g** uses the number of active neurons as the sparsity measure, while **h** uses the wiring length as the sparsity measure. In both cases, the Pareto frontier of BIMT is better than that of \(L_{1}\) regularization and no regularization.
cost as a penalty to the loss function and swapping neurons if necessary. On a number of math and machine learning datasets, they show that BIMT is able to put functionally relevant neurons close to each other in space, just like brains. It is thus natural to ask: Can BIMT give something back to neuroscience? In this work, we apply BIMT to recurrent neural networks (RNN) for cognitive tasks, and show that the neurons in the hidden layer are organized into modules which are both anatomically and functionally distinct, just like brains.
**RNN** We take a simple recurrent neural network (RNN) in the context of systems neuroscience, which is defined by
\[\begin{split}\mathbf{h}_{t+1}&=\phi(\mathbf{W} \mathbf{h}_{t}+\mathbf{W}_{\mathrm{in}}\mathbf{u}_{t}+\mathbf{b}^{h}),\\ \mathbf{o}_{t+1}&=\mathbf{W}_{\mathrm{out}}\mathbf{h} _{t+1}+\mathbf{b}^{o},\end{split} \tag{1}\]
where \(\mathbf{u}_{t}\in\mathbb{R}^{n_{u}}\), \(\mathbf{h}_{t}\in\mathbb{R}^{n_{h}^{2}}\), \(\mathbf{o}_{t}\in\mathbb{R}^{n_{o}}\), \(\mathbf{W}\in\mathbb{R}^{n_{h}^{2}\times n_{h}^{2}}\). We place \(n_{h}\times n_{h}\) hidden neurons uniformly on a 2D grid \([0,1]^{2}\) (see Figure 1a), so the \(ij\) neuron (the neuron in the \(i^{\mathrm{th}}\) row and the \(j^{\mathrm{th}}\) column) is located at \((i/n_{h},j/n_{h})\). The \(L_{1}\) distance between the \(ij\) neuron and the \(mn\) neuron is thus \(\mathbf{D}_{ij,mn}\equiv(|i-m|+|j-n|)/n_{h}\)2. We define the RNN's connection cost as
Footnote 2: One can choose other distances, e.g., \(L_{2}\) distance. We choose \(L_{1}\) distance because it is more consistent with \(L_{1}\) regularization.
\[\ell_{cc}=\underbrace{\|\mathbf{W}\|_{1}+\|\mathbf{W}_{\mathrm{in}}\|_{1}+\| \mathbf{W}_{\mathrm{out}}\|_{1}+|\mathbf{b}^{h}|_{1}+|\mathbf{b}^{o}|_{1}}_{ \text{vanilla $L_{1}$ regularization}}+\underbrace{A\|\mathbf{D} \odot\mathbf{W}\|_{1}}_{\text{distance-aware regularization}}, \tag{2}\]
where \(\|\mathbf{M}\|_{1}\equiv\sum_{ij}|M_{ij}|\) and \(|\mathbf{v}|_{1}\equiv\sum_{i}|v_{i}|\) are the matrix and vector \(L_{1}\)-norms, respectively, and \(A\) is a hyper-parameter controlling the strength of the locality constraint.
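A minimal PyTorch sketch of the connection cost in Eq. (2) follows; the function names and the flattened \(n_{h}^{2}\)-dimensional hidden-state layout are my assumptions.

```python
import torch

def grid_distances(n_h):
    """Pairwise L1 grid distances D between the n_h x n_h hidden neurons."""
    idx = torch.stack(torch.meshgrid(torch.arange(n_h), torch.arange(n_h),
                                     indexing="ij"), dim=-1).reshape(-1, 2).float()
    return torch.cdist(idx, idx, p=1) / n_h        # (n_h^2, n_h^2)

def connection_cost(W, W_in, W_out, b_h, b_o, D, A=1.0):
    """Eq. (2): vanilla L1 terms plus the distance-aware penalty A * ||D * W||_1."""
    l1 = sum(t.abs().sum() for t in (W, W_in, W_out, b_h, b_o))
    return l1 + A * (D * W).abs().sum()
```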
**Cognitive tasks** The 20-Cog-tasks are a set of simple cognitive tasks inspired by experiments with rodents and non-human primates performed by systems neuroscientists [1]. These tasks are designed to fall into families, where each family is defined by a set of computations drawn from a common pool of computational primitives. Thus, the tasks have shared subtasks, and an optimal solution is to form clusters of neurons specialized to these subtasks and share them across tasks, as illustrated in Figure 2a. In our experiments, we set both input rings to have 16 dimensions, so the input dimension is \(n_{u}=16\times 2+20+1=53\) (20 for the one-hot length-20 task vector and 1 for fixation). Hidden neurons are arranged as a \(20\times 20\) grid (\(n_{h}=20\)), and the output dimension is \(n_{o}=16+1=17\). The prediction loss \(\ell_{\mathrm{pred}}\) is the cross-entropy between the ground truth and the predicted reaction.
**BIMT** loss simply combines the prediction loss and the connection cost, i.e., the total loss function is
\[\ell=\ell_{\mathrm{pred}}+\lambda\ell_{cc}, \tag{3}\]
where \(\lambda\geq 0\) is the strength of the connection-cost penalty. When \(\lambda=0\), this reduces to training a fully-connected RNN without a sparsity constraint; when \(\lambda>0\) but \(A=0\), it reduces to training with vanilla \(L_{1}\) regularization. Besides adding the connection cost as regularization, BIMT allows swapping neurons to further reduce \(\ell_{cc}\) by escaping local minima: e.g., if a neural network is initialized to perform well but has non-local connections, then without swapping the network would not change much and would retain those non-local connections.
## 3 Results
We focus first on results for the 20-Cog-tasks; qualitatively similar results on the Mod-Cog-tasks are included in Appendix B.
### BIMT learns a 2D brain that solves all 20-Cog-tasks
We show results for \(A=1.0\) and \(\lambda=10^{-5}\) in Figure 1. **a** shows the connectivity graph of the RNN. All weights are plotted as lines whose thicknesses are proportional to their magnitudes 3, and blue/red indicates positive/negative weights. BIMT learns to prune away peripheral neurons and concentrate important neurons in the middle. Following [1], we cluster neurons into functional modules based on their normalized task variance (shown in **b**). These functional modules, shown in different colors, are also anatomically modular, i.e., spatially local (shown in **c**), visually resembling a brain. By contrast, \(L_{1}\) regularization induces no anatomical modules. **d** shows that the network performs reasonably well. The network is sparse: **e** shows that it contains only around 100 important neurons (measured by the sum of task variances). The connections in the hidden layer are mostly local, as we hoped: **f** shows that weights decay quickly as distance increases, similar to a fixed local-masked RNN [2]. Intriguingly, however, there is a second peak of (relatively) strong connections around a distance of 0.6, which is probably attributable to inter-module connections.
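As a sketch of this clustering step, one could compute each hidden neuron's activity variance per task, normalize it, and cluster the resulting profiles; using k-means and reusing the \(10^{-3}\) active-neuron threshold from Section 3.2 are my assumptions, not a statement of the exact procedure in [1].

```python
import numpy as np
from sklearn.cluster import KMeans

def functional_clusters(task_variance, n_clusters, active_thresh=1e-3):
    """Cluster hidden neurons by normalized task variance.

    task_variance: (n_neurons, n_tasks) activity variance of each neuron per task."""
    total = task_variance.sum(axis=1)
    active = total > active_thresh                      # drop inactive neurons
    tv = task_variance[active] / total[active, None]    # normalize per neuron
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(tv)
    return active, labels
```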
### Sparsity vs Accuracy Tradeoff
There is a Pareto frontier showing the trade-off between sparsity and accuracy, shown in Figure 1**g** and **h**. We also compute the trade-off for networks with vanilla \(L_{1}\) regularization. We use two sparsity measures: the number of active neurons (a neuron is active if its sum of task variances is larger than \(10^{-3}\)), and the wiring length (the sum of the lengths of all active connections; a connection is active if its weight magnitude is larger than \(10^{-2}\)). Under both measures, BIMT is superior to \(L_{1}\) regularization, with a better Pareto frontier.
### Anatomical modularity
Anatomical modularity means that neurons with similar functions are placed close to each other in space. Because neurons of fully-connected layers have permutation symmetries, there is no incentive for them to develop anatomical modularity. By contrast, since BIMT penalizes connection costs, BIMT networks potentially have anatomical modularity. In Figure 1**c**, each neuron's functional cluster is marked, and there are clear spatial clusters in which all neurons belong to the same functional cluster. Quantitatively, we propose two metrics: (1) the fraction of isolated neurons, where a neuron is isolated if none of its (eight) neighbors belongs to the same functional cluster; and (2) the average size of the functional clusters. For both metrics, smaller is better. For baselines, we randomly shuffle the important hidden neurons 4. Since different random shufflings may yield different results, we try 10000 different random seeds and plot histograms of the metrics. We compute the two metrics for networks trained with BIMT, \(L_{1}\) regularization, or no regularization in Figure 2. Only BIMT networks are seen to be significantly out-of-distribution from the baselines, implying anatomical modularity. In future work, we hope to explore improvements of BIMT to further increase the functional modularity of the "brain" seen in Figure 1**c**.
Figure 2: (a) The myriad ways anatomical and functional modularity can present itself in trained RNNs. (b) We test anatomical modularity for neural networks with BIMT (left), L1 regularization (middle) or no regularization (right). We propose two metrics, fraction of isolated neurons (top) and average (functional) cluster size (bottom) to measure anatomical modularity. For both metrics, smaller is better. We compare the trained network with networks whose useful hidden neurons are randomly shuffled. No regularization and L1 regularization are in the distributions of randomly shuffled networks, while BIMT is significantly out of distribution (smaller) than random ones, indicating anatomical modularity.
Our main findings remain qualitatively similar when (i) the topology of the hidden layer is changed or (ii) the tasks are significantly harder, requiring more-involved recurrent connectivity within the network. Specifically, Appendix A includes results for a 1D hidden layer, and in Appendix B we find that results on a more complex set of 84 cognitive tasks, the Mod-Cog-tasks [2], show qualitatively similar anatomical clustering, demonstrating the robustness and generality of our core findings.
## Acknowledgement
ZL and MT are supported by IAIFI through NSF grant PHY-2019786, the Foundational Questions Institute and the Rothberg Family Fund for Cognitive Science. IRF is supported by the Simons Foundation through the Simons Collaboration on the Global Brain, the ONR, the Howard Hughes Medical Institute through the Faculty Scholars Program and the K. Lisa Yang ICoN Center. MK acknowledges funding from the Department of Physics, MIT.
|
2303.02988 | Searching for Effective Neural Network Architectures for Heart Murmur
Detection from Phonocardiogram | Aim: The George B. Moody PhysioNet Challenge 2022 raised problems of heart
murmur detection and related abnormal cardiac function identification from
phonocardiograms (PCGs). This work describes the novel approaches developed by
our team, Revenger, to solve these problems.
Methods: PCGs were resampled to 1000 Hz, then filtered with a Butterworth
band-pass filter of order 3, cutoff frequencies 25 - 400 Hz, and z-score
normalized. We used the multi-task learning (MTL) method via hard parameter
sharing to train one neural network (NN) model for all the Challenge tasks. We
performed neural architecture searching among a set of network backbones,
including multi-branch convolutional neural networks (CNNs), SE-ResNets,
TResNets, simplified wav2vec2, etc.
Based on a stratified splitting of the subjects, 20% of the public data was
left out as a validation set for model selection. The AdamW optimizer was
adopted, along with the OneCycle scheduler, to optimize the model weights.
Results: Our murmur detection classifier received a weighted accuracy score
of 0.736 (ranked 14th out of 40 teams) and a Challenge cost score of 12944
(ranked 19th out of 39 teams) on the hidden validation set.
Conclusion: We provided a practical solution to the problems of detecting
heart murmurs and providing clinical diagnosis suggestions from PCGs. | Hao Wen, Jingsu Kang | 2023-03-06T09:31:42Z | http://arxiv.org/abs/2303.02988v1 | Searching for Effective Neural Network Architectures for Heart Murmur Detection from Phonocardiogram
###### Abstract
Aim: The George B. Moody PhysioNet Challenge 2022 raised problems of heart murmur detection and related abnormal cardiac function identification from phonocardiograms (PCGs). This work describes the novel approaches developed by our team, Revenger, to solve these problems.
Methods: PCGs were resampled to 1000 Hz, then filtered with a Butterworth band-pass filter of order 3, cutoff frequencies 25 - 400 Hz, and z-score normalized. We used the multi-task learning (MTL) method via hard parameter sharing to train one neural network (NN) model for all the Challenge tasks. We performed neural architecture searching among a set of network backbones, including multi-branch convolutional neural networks (CNNs), SE-ResNets, TResNets, simplified wav2vec2, etc.
Based on a stratified splitting of the subjects, 20% of the public data was left out as a validation set for model selection. The AdamW optimizer was adopted, along with the OneCycle scheduler, to optimize the model weights.
Results: Our murmur detection classifier received a weighted accuracy score of 0.736 (ranked 14th out of 40 teams) and a Challenge cost score of 12944 (ranked 19th out of 39 teams) on the hidden validation set.
Conclusion: We provided a practical solution to the problems of detecting heart murmurs and providing clinical diagnosis suggestions from PCGs.
## 1 Introduction
A heart murmur, defined as a heart sound produced by turbulent blood flow through the heart, is a common clinical indicator in pediatric cardiology [1]. Accurately detecting heart murmurs and distinguishing innocent from pathological murmurs enable early clinical intervention for serious heart diseases such as congenital heart disease, and hence have significant medical value.
Based on such motivations, the George B. Moody PhysioNet Challenge 2022 [2, 3] raised questions about detecting heart murmurs and identifying abnormal cardiac functions from phonocardiograms (PCGs), which are non-invasive heart sound recordings collected from multiple auscultation locations. In this paper, we present our methods of tackling these problems.
## 2 Methods
### Preprocess Pipeline
After a careful study of spectral characteristics of heart murmurs from medical literature [4], and with reference to previous work [5], we constructed the PCG signal preprocessing pipeline as follows:
* Resampling to 1000 Hz;
* Band-pass filtering with a Butterworth filter of order 3, cutoff frequencies 25 - 400 Hz;
* Z-score normalization to zero mean and unit variance.
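A minimal sketch of this pipeline using SciPy follows; the zero-phase sosfiltfilt filtering and the integer input sampling rate are my assumptions, as the text does not specify the filtering direction or resampling method.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_pcg(x, fs_in, fs_out=1000, band=(25, 400), order=3):
    """Resample to 1 kHz, Butterworth band-pass 25-400 Hz, z-score normalize."""
    x = resample_poly(x, fs_out, fs_in)                  # rational-rate resampling
    sos = butter(order, band, btype="bandpass", fs=fs_out, output="sos")
    x = sosfiltfilt(sos, x)                              # zero-phase filtering
    return (x - x.mean()) / (x.std() + 1e-8)             # zero mean, unit variance
```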
### Neural Network Backbones
Inspired by wav2vec2 [6], and with a view to exploiting the power of models pretrained on larger databases, we adopted a shrunken wav2vec2 as one of our neural network (NN) backbones. We used the time-domain signals, namely the PCG waveforms, as model input, rather than derived time-frequency-domain signals such as spectrograms. Since PCGs have significantly lower sampling rates than conventional human voice audio signals, we reduced the dimension (number of channels) of the wav2vec2 model's encoder as well as its depth (number of hidden layers).
Considering that PCGs share a similar physiological origin with electrocardiograms (ECGs), we further adjusted and tested several NN backbones that have proven effective on ECG problems, including MultiBranch CNN, SE-ResNet, TResNetS, TResNetF [7], ResNet-NC [8], etc. We enlarged the kernel sizes of each convolution in these backbones by a factor of 2 (the ratio of the sampling rates).
The efficacy of most of the NN backbones is validated via the experiments illustrated in Figure 1. The learning process of the wav2vec2 model was interrupted at an early stage; the cause of this abnormal behavior is left for further study.
### Multi-Task Learning
The 2 Challenge tasks [3] are per-patient classification tasks. It should be noted that the Challenge database [9] provides per-recording annotations for the murmur detection task and heart sound segmentation annotations as well. We applied the multi-task learning (MTL) paradigm [10] on each recording via hard parameter sharing. More precisely, we use one NN model for all the tasks. Each task has its specific model head, typically a stack of linear layers concatenated to the shared backbone as discussed in Section 2.2. Our MTL paradigm is illustrated in Figure 2.
As depicted in Figure 3, experiments showed that models (with the same backbone) using an additional segmentation head (denoted as "MTL3") usually outperformed models with only two classification heads (denoted as "MTL2") for the Challenge tasks.
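A sketch of this hard-parameter-sharing design in PyTorch is given below; the head dimensions (three murmur classes, two outcome classes, four heart sound states) and the mean-pooling choice are my assumptions for illustration.

```python
import torch

class MTLNet(torch.nn.Module):
    """Hard parameter sharing: one backbone, per-task heads (cf. Figure 2)."""
    def __init__(self, backbone, feat_dim, n_murmur=3, n_outcome=2, n_states=4):
        super().__init__()
        self.backbone = backbone                        # shared 1D-CNN encoder
        self.murmur_head = torch.nn.Linear(feat_dim, n_murmur)
        self.outcome_head = torch.nn.Linear(feat_dim, n_outcome)
        # the segmentation head uses the unpooled, per-timestep features
        self.seg_head = torch.nn.Conv1d(feat_dim, n_states, kernel_size=1)

    def forward(self, wav):
        feats = self.backbone(wav)                      # (B, feat_dim, T')
        pooled = feats.mean(dim=-1)                     # global average pooling
        return {"murmur": self.murmur_head(pooled),
                "outcome": self.outcome_head(pooled),
                "segmentation": self.seg_head(feats)}   # (B, n_states, T')
```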
Our NN models produce per-recording predictions for the Challenge tasks. To obtain per-patient predictions, we used the simple greedy rule described in Algorithm 1.
```
if at least one recording is positive then
    Positive for the patient;
else if all recordings are negative then
    Negative for the patient;
else    // for murmur detection only
    Unknown for the patient;
```
**Algorithm 1** The algorithm to obtain per-patient predictions
### Training Setups
For algorithm development, we divided the publicly available part of the Challenge database into the training set and the cross-validation set with a ratio of 8:2. This split was stratified on the attributes "Age", "Sex", "Pregnancy status" and the prediction targets "Murmur", "Outcome".
The batch size was set at 32 for model training, with the maximum number of epochs set at 60. Model parameters were optimized using the AMSGrad variant of the AdamW optimizer [11] along with the OneCycle scheduler [12]. We froze the backbone from a specific epoch (usually 30), only updating the parameters of the task heads.
To alleviate overfitting on the training set, an early stopping callback was added. To further improve model transferability, we applied several types of augmentations to the batched training data stochastically:
* adding coloured noises;
* polarity inversion (flipping).
| Backbone | # Params | Input Type |
| --- | --- | --- |
| MultiBranch [7] | 17.7M | waveforms |
| SE-ResNet [7] | 15.9M | waveforms |
| ResNet-NC [8] | 15.4M | waveforms |
| TResNetS [7] | 41.0M | waveforms |
| TResNetF [7] | 4.0M | waveforms |
| wav2vec2 [6] | 19.8M | waveforms |

Table 1: NN backbones tested for the Challenge tasks. wav2vec2 used the transformers implementation Wav2Vec2Model rather than the torchaudio implementation.
Figure 1: Curves of weighted accuracies of murmur detection on the cross-validation set 2.4 using 7 different NN backbones. The model heads, optimizers, loss functions, as well as other training setups, were kept the same.
Figure 2: The paradigm of multi-task learning (MTL) used in our team’s approach. The dashed lines indicate optional model heads. The “Outcome Head” and the “Murmur Head” use pooled features from the “Backbone”, while the “Segmentation Head” uses the unpooled features. The heads correspond to different tasks and share the same backbone.
We experimented with two types of loss functions: the asymmetric loss, denoted "Loss-A", and the weighted binary cross entropy (BCE), denoted "Loss-B". The weights were obtained from the weight matrix of the Challenge scoring functions [3]. The superiority of Loss-B was observed, as illustrated in Figure 4.
### Demographic Features
For the public data of the Challenge, some demographic features are strongly correlated with the prediction target "Outcome", as can be inferred from Figure 5. Experiments and official-phase submissions showed that an auxiliary random forest classifier using these features together with the murmur predictions improved the outcome scores (i.e., reduced the outcome cost). However, we did not use such auxiliary models in our final submission, since the distribution of these features might be completely different in the hidden data, and no medical literature was found to support this correlation.
## 3 Results
The Challenge scores (weighted accuracy for murmur detection and cost for clinical outcome identification), together with an extra metric of weighted accuracy for clinical outcome identification, on the training, cross-validation (Section 2.4), hidden validation, and hidden test sets are gathered in Table 2. Scores on the former two sets are reported as means and standard deviations over most of our offline experiments searching for the best NN architectures (Sections 2.2, 2.3) and loss functions (Section 2.4).
## 4 Discussion and Conclusions
Our MTL paradigm proved effective for the problems of heart murmur detection and clinical outcome identification from PCGs in this study. The rankings of our team on
Figure 4: Experiments of comparison of the 2 loss functions (Loss-A for the asymmetric loss, Loss-B for the weighted BCE loss. The model with 2 classification heads and with SE-ResNet as the backbone was used.
Figure 5: Distributions (Dist.) of the “Outcome” against 2 typical categorical demographic variables.
Figure 3: Experiments of the MTL method with 2 heads (for murmur classification and outcome classification) and with 3 heads (an additional head for heart sound segmentation) using 2 typical backbones.
the hidden validation set and on the whole public training set were 20 / 40 and 26 / 40 for murmur weighted accuracy, and 21 / 39 and 32 / 39 for outcome cost, respectively. These were all significantly lower than our rankings on the hidden test set listed in Table 2. This phenomenon is not surprising, since the MTL paradigm has been shown to improve generalizability by leveraging latent domain-specific knowledge inherent in the training data of related tasks [10]. As for the problems the Challenge raised, the additional segmentation head makes the shared representation (the common backbone) learn more general features and thus improves performance on the original two classification tasks (heads).
The convolutional neural backbones also proved effective, as already shown in Figure 1. Indeed, this figure exhibits only a small portion of the architectures we experimented with. However, there is still room for improvement compared to the top teams on the Challenge leaderboard.
One regret of this study is that the potential of derived time-frequency-domain signals was not explored. Previous studies on various physiological signals have shown the power of neural networks that combine derived time-frequency-domain signals with the original time-domain signals.
Another weakness of this work is that we failed to make the wav2vec2 model work for the Challenge problems. One possible reason is that transformer-based models need to be trained on larger datasets and perform worse on smaller datasets than CNNs. Using larger datasets for self-supervised pretraining of PCG models is a direction for our future work.
## Acknowledgments
This work is supported by NSFC under grants No. 11625105, and 12131004.
|
2305.10964 | Learning Activation Functions for Sparse Neural Networks | Sparse Neural Networks (SNNs) can potentially demonstrate similar performance
to their dense counterparts while saving significant energy and memory at
inference. However, the accuracy drop incurred by SNNs, especially at high
pruning ratios, can be an issue in critical deployment conditions. While recent
works mitigate this issue through sophisticated pruning techniques, we shift
our focus to an overlooked factor: hyperparameters and activation functions.
Our analyses have shown that the accuracy drop can additionally be attributed
to (i) Using ReLU as the default choice for activation functions unanimously,
and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts.
Thus, we focus on learning a novel way to tune activation functions for sparse
networks and combining these with a separate hyperparameter optimization (HPO)
regime for sparse networks. By conducting experiments on popular DNN models
(LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on MNIST, CIFAR-10,
and ImageNet-16 datasets, we show that the novel combination of these two
approaches, dubbed Sparse Activation Function Search, short: SAFS, results in
up to 15.53%, 8.88%, and 6.33% absolute improvement in the accuracy for
LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially
at high pruning ratios. Our code can be found at https://github.com/automl/SAFS | Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer | 2023-05-18T13:30:29Z | http://arxiv.org/abs/2305.10964v2 | # Learning Activation Functions for Sparse Neural Networks
###### Abstract
Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning a novel way to tune activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 15.53%, 8.88%, and 6.33% absolute improvement in the accuracy for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios.1
Footnote 1: Our code is available at github.com/automl/SAFS
Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer
## 1 Introduction
Deep Neural Networks, while having demonstrated strong performance on a variety of tasks, are computationally expensive to train and deploy. When combined with concerns about privacy, energy efficiency, and the lack of stable connectivity, this led to an increased interest in deploying DNNs on resource-constrained devices like micro-controllers and FPGAs (Chen and Ran, 2019).
Recent works have tried to address this problem by reducing the enormous memory footprint and power consumption of DNNs. These include quantization (Zhou et al., 2017), knowledge distillation (Hinton et al., 2015), low-rank decomposition (Jaderberg et al., 2014), and network sparsification using unstructured pruning (a.k.a. Sparse Neural Networks) (Han et al., 2015). Among these, Sparse Neural Networks (SNNs) have shown considerable benefit through their ability to remove redundant weights (Hoefler et al., 2021). However, they suffer from an accuracy drop, especially at high pruning ratios; e.g., Mousavi et al. (2022) report \(\approx\)54% reduction in top-1 accuracy for MobileNet-v2 (Sandler et al., 2018) trained on ImageNet compared to the non-pruned model. While significant blame for this accuracy drop goes to sparsification itself, we identified two underexplored, pertinent factors that can additionally impact it: (i) The activation functions of the sparse counterparts are never optimized, with the Rectified Linear Unit (ReLU) (Nair and Hinton, 2010) being the default choice. (ii) The training hyperparameters of the sparse neural networks are usually kept the same as their dense counterparts.
A natural step, thus, is to understand how the activation functions impact the learning process for SNNs. Previously, Jaiswal et al. (2022) and Tessera et al. (2021) have demonstrated that ReLU reduces the trainability of SNNs since sudden changes in gradients around zero result in blocking
gradient flow. Additionally, Apicella et al. (2021) have shown that a ubiquitous activation function cannot prevent typical learning problems such as vanishing gradients. While the field of Automated Machine Learning (AutoML) (Hutter et al., 2019) has previously explored optimizing activation functions of dense DNNs (Ramachandran et al., 2018; Loni et al., 2020; Bingham et al., 2020), most of these approaches require a huge amount of computing resources (up to 2000 GPU hours (Bingham et al., 2020)), resulting in a lack of interest in activation function optimization for various deep learning problems. On the other hand, attempts to improve the accuracy of SNNs either use sparse architecture search (Fedorov et al., 2019; Mousavi et al., 2022) or sparse training regimes (Srinivas et al., 2017). To our knowledge, there is no efficient approach for optimizing activation functions on SNN training.
**Paper Contributions**: (i) We analyze the impact of activation functions and training hyperparameters on the performance of sparse CNN architectures. (ii) We propose a novel AutoML approach, dubbed SAFS, to tweak the activation functions and training hyperparameters of sparse neural networks to deviate from the training protocols of their dense counterparts. (iii) We demonstrate significant performance gains when applying SAFS with unstructured magnitude pruning to LeNet-5 on the MNIST (LeCun et al., 1998) dataset, VGG-16 and ResNet-18 networks trained on the CIFAR-10 (Krizhevsky et al., 2014) dataset, and ResNet-18 and EfficientNet-B0 networks trained on the ImageNet-16 (Chrabaszcz et al., 2017) dataset, when compared against the default training protocols, especially at high levels of sparsity.
## 2 Related Work

To the best of our knowledge, SAFS is the first automated framework that tweaks the activation functions of sparse neural networks using a multi-stage optimization method. Our study also sheds light on the fact that tweaking the hyperparameters plays a crucial role in the accuracy of sparse neural networks. Improving the accuracy of sparse neural networks has been extensively researched in the past. Prior studies are mainly categorized as (i) recommending various criteria for selecting insignificant weights, (ii) pruning at initialization or during training, and (iii) optimizing other aspects of sparse networks apart from pruning criteria. In this section, we discuss these methods, compare them with SAFS, and briefly review state-of-the-art research on optimizing activation functions of dense networks.
### Sparse Neural Network Optimization
**Pruning Insignificant Weights**. A number of studies have proposed to prune the weight parameters below a fixed threshold, regardless of the training objective (Han et al., 2015; Li et al., 2016; Zhou et al., 2019). Recently, Azarian et al. (2020) and Kusupati et al. (2020) suggested layer-wise trainable thresholds for determining the optimal value for each layer.
**Pruning at Initialization or Training**. These methods aim to start sparse instead of first pre-training a dense network and then pruning it. To determine which weights should remain active at initialization, they use criteria such as using the connection sensitivity (Lee et al., 2018) and conservation of synaptic saliency (Tanaka et al., 2020). On the other hand, Mostafa and Wang (2019); Mocanu et al. (2018); Evci et al. (2020) proposed to leverage information gathered during the training process to dynamically update the sparsity pattern of kernels.
**Miscellaneous Sparse Network Optimization**. Evci et al. (2019) investigated the loss landscape of sparse neural networks and Frankle et al. (2020) addressed how it is impacted by the noise of Stochastic Gradient Descent (SGD). Finally, Lee et al. (2020) studied the effect of weight initialization on the performance of sparse networks. While our work also aims to improve the performance of sparse networks and enable them to achieve the same performance as their dense counterparts, we instead focus on the impact of optimizing activation functions and hyperparameters of the sparse neural networks in a joint HPO setting.
### Activation Function Search
Inappropriate selection of activation functions results in information loss during forward propagation and the vanishing and/or exploding gradient problems during backpropagation (Hayou et al., 2019). To find optimal activation functions, several studies automatically tuned activation functions for dense DNNs, based either on evolutionary computation (Bingham et al., 2020; Basirat and Roth, 2021; Nazari et al., 2019), reinforcement learning (Ramachandran et al., 2018), or gradient descent for devising parametric functions (Tavakoli et al., 2021; Zamora et al., 2022).
Despite the success of these methods, automated tuning of activation functions for dense networks is unreliable for the sparse context since the search spaces for activation functions for dense networks are not optimal for sparse networks (Dubowski, 2020). The same operations that are successful in dense networks can drastically diminish network gradient flow in sparse networks (Tessera et al., 2021). Additionally, existing methods suffer from significant search costs; e.g., Bingham et al. (2020) required 1000 GPU hours per run on NVIDIA(r) GTX 1080Ti. Jin et al. (2016) showed the superiority of SReLU over ReLU when training sparse networks as it improves the network's gradient flow. However, SReLU requires learning four additional parameters per neuron. In the case of deploying networks with millions of hidden units, this can easily lead to considerable computational and memory overhead at inference time. SAFS, on the other hand, unifies local search on a meta-level with gradient descent to create a two-tier optimization strategy and obtains superior performance with faster search convergence compared to the state-of-the-art.
## 3 Preliminaries

In this section, we develop notation for the later sections by formally introducing the two problems that we address: Network Sparsification and Hyperparameter Optimization.
### Network Sparsification
Network sparsification is an effective technique to improve the efficiency of DNNs for applications with limited computational resources. Zhan and Cao (2019) reported that network sparsification can reduce the inference time of ResNet-18 trained on ImageNet by up to 29.5\(\times\) on mobile devices. Network sparsification generally consists of three stages:
1. _Pre-training_: Train a large, over-parameterized model. Given a loss metric \(\mathcal{L}_{train}\) and network parameters \(\mathbf{\theta}\), this can be formulated as the task of finding the parameters \(\mathbf{\theta}^{\star}_{pre}\) that minimize \(\mathcal{L}_{train}\) on training data \(\mathcal{D}_{train}\): \[\mathbf{\theta}^{\star}_{pre}\in\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta} }\left[\mathcal{L}_{train}(\mathbf{\theta};\mathcal{D}_{train})\right]\] (1)
2. _Pruning_: Having trained the dense model, the next step is to remove the low-importance weight tensors of the pre-trained network. This can be done layer-wise, channel-wise, and network-wide. The usual mechanisms either simply set a certain percentage of weights (_pruning ratio_) to zero, or learn a Boolean mask \(\mathbf{m}^{\star}\) over the weight vector. Both of these notions can be generally captured in a manner similar to the dense training formulation but with a separate loss metric \(\mathcal{L}_{prune}\). The objective here is to obtain a pruning mask \(\mathbf{m}^{\star}\), where \(\odot\) represents the masking operation and \(N\) represents the size of the mask: \[\mathbf{m}^{\star}\in\operatorname*{argmin}_{\mathbf{m}\in\{0,1\}^{N}}\left[\mathcal{ L}_{prune}(\mathbf{\theta}^{\star}_{pre}\odot\mathbf{m};\mathcal{D}_{train}) \right]\ \ \text{s.t.}\ \ \ \ \|\mathbf{m}^{\star}\|_{0}\leq\epsilon\] (2) where \(\epsilon\) is a threshold on the minimal number of masked weights.
3. _Fine-tuning_: The final step is to retrain the pruned network to regain its original accuracy using a fine-tuning 2 loss \(\mathcal{L}_{fine}\), which can either be the same as the training loss or a different kind: \[\boldsymbol{\theta}_{fine}^{\star}\in\operatorname*{argmin}_{\boldsymbol{ \theta}\in\Theta}\left[\mathcal{L}_{fine}(\boldsymbol{\theta};\boldsymbol{ \theta}_{pre}^{\star}\odot\boldsymbol{m}^{\star},\mathcal{D}_{train})\right]\] (3) For the pruning stage, SAFS uses the popular magnitude pruning method (Han et al., 2015) by removing a certain percentage of weights that have a lower magnitude. Compared to structured pruning methods (Liu et al., 2018), the magnitude pruning method provides higher flexibility and a better compression rate \(\left(\frac{|\boldsymbol{\theta}_{fine}^{\star}|}{|\boldsymbol{\theta}_{pre}^{ \star}|}\times 100\right)\). Crucially, SAFS is independent of the pruning algorithm; thus, it can optimize any sparse network. Footnote 2: We use the term fine-tuning interchangeably with re-training
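As a concrete illustration of the pruning stage, the following is a minimal PyTorch sketch of global unstructured magnitude pruning; the function name and the global-threshold variant are our assumptions for illustration, not necessarily the paper's exact implementation (which would, e.g., typically prune only weight tensors, not biases).

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, pruning_ratio: float = 0.99) -> dict:
    """Zero out the fraction `pruning_ratio` of weights with smallest magnitude."""
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(pruning_ratio * all_w.numel()))
    threshold = torch.kthvalue(all_w, k).values        # global magnitude threshold
    masks = {}
    for name, p in model.named_parameters():
        masks[name] = (p.detach().abs() > threshold).float()
        p.data.mul_(masks[name])                       # apply the Boolean mask m
    return masks
```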
### Hyperparameter Optimization (HPO)

We denote the hyperparameter space of the model as \(\Lambda\), out of which we sample a hyperparameter configuration \(\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{d})\) to be tuned by some HPO method. We assume \(c:\boldsymbol{\lambda}\rightarrow\mathbb{R}\) to be a black-box cost function that maps the selected configuration \(\boldsymbol{\lambda}\) to a performance metric, such as model error3. HPO's goal can then be summarized as the task of finding an optimal configuration \(\boldsymbol{\lambda}^{\star}\) minimizing \(c\). Given the fine-tuned parameters \(\boldsymbol{\theta}_{fine}^{\star}\) obtained in Equation (3), we define the cost as minimizing a loss \(\mathcal{L}_{hp}\) on validation data \(\mathcal{D}_{val}\) as a bi-level optimization problem: \[\boldsymbol{\lambda}^{\star}\in\operatorname*{argmin}_{\boldsymbol{\lambda}\in\Lambda}c(\boldsymbol{\lambda})=\operatorname*{argmin}_{\boldsymbol{\lambda}\in\Lambda}\left[\mathcal{L}_{hp}(\boldsymbol{\theta}_{fine}^{\star}(\boldsymbol{\lambda});\mathcal{D}_{val})\right]\] (4) \[\boldsymbol{\theta}_{fine}^{\star}(\boldsymbol{\lambda})\in\operatorname*{argmin}_{\boldsymbol{\theta}\in\Theta}\left[\mathcal{L}_{fine}(\boldsymbol{\theta};\boldsymbol{\theta}_{pre}^{\star}\odot\boldsymbol{m}^{\star},\mathcal{D}_{train},\boldsymbol{\lambda})\right]\] We note that in principle HPO could also be applied to the training of the original model (Equation (1)), but we assume that the original model is given and we care only about sparsification. Footnote 3: For reasonably sized datasets and models, we estimate this error using k-fold cross-validation.
## 4 Finding Activation Functions for Sparse Networks

The aim of SAFS is to find an optimal hyperparameter configuration for pruned networks with a focus on activation functions. Given the HPO setup described in Section 3.2, we now explain how to formulate the activation function search problem and what is needed to solve it.
### Modelling Activation Functions
Using optimization techniques requires creating a search space containing promising candidate activation functions. Extremely constrained search spaces might not contain novel activation functions (_expressivity_) while searching in excessively large search spaces can be difficult (_size_) (Ramachandran et al., 2018). Thus, striking a balance between the expressivity and size of the search space is an important challenge in designing search spaces.
To tackle this issue, we model parametric activation functions as a combination of a unary operator \(f\) and two learnable scaling factors \(\alpha\), \(\beta\). Thus, given an input \(x\) and output \(y\), the activation function can be formulated as \(y=\alpha f(\beta x)\), which can alternatively be represented as the computation graph shown in Figure 3(a).
Figure 1 illustrates an example of tweaking the \(\alpha\) and \(\beta\) learnable parameters of the _Swish_ activation function. We can intuitively see that modifying the suggested learnable parameters for a
sample unary operator provides the sparse network additional flexibility to fine-tune activation functions (Godfrey, 2019; Bingham and Miikkulainen, 2022). Examples of activation functions that we consider in this work have been listed in Appendix E.
For sparse networks, this representation allows efficient implementation as well as effective parameterization. As we explain further in Section 4.2, by treating this as a two-stage optimization process, where the search for \(f\) is a discrete optimization problem and the search for \(\alpha,\beta\) is interleaved with fine-tuning, we are able to make the search process efficient while capturing the essence of input-output scaling and functional transformations prevalent with activation functions. Note that SAFS falls under the category of adaptive activation functions due to introducing trainable parameters (Dubey et al., 2022). These parameters allow the activation functions to smoothly adjust the model with the dataset complexity (Zamora et al., 2022). In contrast to popular adaptive activation functions such as PReLU and Swish, SAFS automates activation function tuning across a diverse family of activation functions for each layer of the network with optimized hyperparameters.
### Optimization Procedure
SAFS performs the optimization layer-wise, i.e., we intend to find an activation function for each layer. Given layer indices \(i=1,\ldots,L\) of a network of depth \(L\), an optimization algorithm needs to be able to select a unary operator \(f_{i}^{\star}\) and find appropriate scaling factors \((\alpha_{i}^{\star},\beta_{i}^{\star})\). We formulate these as two independent objective functions, solved in a two-stage optimization procedure combining discrete and stochastic optimization. Figure 2 shows an overview of the SAFS pipeline.
**Stage 1: Unary Operator Search**. The first stage is to find the unary operators after the network has been pruned. Crucially, the fine-tuning step happens only after this optimization for the activation function has been completed. We model the task of finding optimal unary operators for each layer as a discrete optimization problem. Given a pre-defined set of functions \(F=\{f_{1},f_{2},\ldots,f_{n}\}\), we define a space \(\mathcal{F}\) of possible sequences of operators \(\psi=\{f_{i}\mid f_{i}\in F\}_{i\in\{1,\ldots,L\}}\in\mathcal{F}\) of size \(L\). Our task is to find a sequence \(\psi\) after the pruning stage (stage 2 in Section 3.1). Since the pre-trained network parameters \(\mathbf{\theta}_{pre}^{\star}\) and the pruning mask \(\mathbf{m^{\star}}\) have already been discovered, we keep them fixed and use them as an initialization point for activation function optimization. The task is formulated as finding the
Figure 1: Modifying (a) \(\alpha\) and (b) \(\beta\) learnable scaling factors of the \(Swish\) activation function.
Figure 2: Overview of the entire SAFS pipeline.
optimal operators given the network parameters, as shown in Equation (5). During this step, \(\alpha\) and \(\beta\) parameters are set to \(1\) to focus on the function class first.
\[\psi^{\star}\in\operatorname*{argmin}_{\psi\in\mathcal{F}}\left[\mathcal{L}_{train}(\mathbf{\theta}^{\star}_{pre}\odot\mathbf{m}^{\star},\psi;\mathcal{D}_{train})\right] \tag{5}\]
Given the discrete nature of Equation (5), we use Late Acceptance Hill Climbing (LAHC) (Burke and Bykov, 2017) to solve it iteratively (please refer to Appendix A for a comparison against other search algorithms). LAHC is a hill-climbing algorithm that uses a record of the history - _History Length_ - of objective values of previously encountered solutions in order to decide whether to accept a new solution. It provides us with two benefits: (i) Being a semi-local search method, LAHC works on discrete spaces and quickly searches the space to find unary operators. (ii) LAHC extends the vanilla hill-climbing algorithm (Selman and Gomes, 2006) by allowing worse solutions in the hope of finding a better solution in the future. We represent the design space of LAHC using a chromosome that is a list of activation functions corresponding to each layer of the network. Figure 3(b) shows an example of a solution in the design space. The benefit of this representation is its flexibility and simplicity. For generating a new search candidate (_mutation operation_), we first swap two randomly selected genes of the chromosome and then randomly change one gene of the chromosome to a new candidate from the list.
Appendix E lists unary operators considered in this study. To avoid instability during training, we ignored periodic operators (e.g., \(cos(x)\)) and operators containing horizontal (\(y=0\)) or vertical (\(x=0\)) asymptotes (e.g., \(y=\frac{1}{x}\)).
The process of selecting operators to form the chromosome is repeated for a predefined number of iterations (refer to Appendix E for the configuration of LAHC). Given that we have only two mutations per search iteration, the entire chromosome is not significantly affected. Based on trial runs, we determined a budget of 20 search iterations to provide decent improvement alongside reducing the search cost. Each iteration consists of training the network using the selected activation functions and measuring the training loss \(\mathcal{L}_{train}\) as a fitness metric that needs to be minimized.
A downside of this process is the need to retrain the network for each search iteration, which can be intensive in time and compute resources. We circumvent this issue by leveraging a lower fidelity estimation of the final performance. Given that the network performance does not vary after a certain number of epochs, we leverage the work by Loni et al. (2020) and only train the network up to a certain point after which the performance should remain stable.
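The following sketch illustrates this LAHC loop together with the mutation operation described above; the history length, iteration budget, and the low-fidelity `evaluate` routine are illustrative placeholders for the actual configuration given in Appendix E.

```python
import random

def lahc_search(init_chrom, evaluate, candidates, iters=20, history_len=5):
    """Late Acceptance Hill Climbing over per-layer activation choices.

    init_chrom : list of unary-operator names, one per layer
    evaluate   : chrom -> training loss after a short, low-fidelity fine-tune
    candidates : pool of available unary operators
    """
    current = list(init_chrom)
    cur_cost = evaluate(current)
    history = [cur_cost] * history_len
    best, best_cost = list(current), cur_cost
    for t in range(iters):
        cand = list(current)
        # mutation: swap two random genes, then replace one random gene
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        cand[random.randrange(len(cand))] = random.choice(candidates)
        cost = evaluate(cand)
        v = t % history_len
        if cost <= history[v] or cost <= cur_cost:   # late acceptance rule
            current, cur_cost = cand, cost
        if cur_cost < history[v]:
            history[v] = cur_cost
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best
```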
**Stage 2: Scaling Factor and HPO** Given a learned sequence of optimal operators \(\psi\), the next step is to find a sequence \(\psi^{\prime}=\langle(\alpha_{i},\beta_{i})\mid\alpha_{i},\beta_{i}\in\mathbb{ R}\rangle_{i\in\{1,\ldots,L\}}\) representing the scaling factors for each layer. We perform this process jointly with the fine-tuning stage (Equation (3)) and HPO to discover the fine-tuning parameters \(\mathbf{\theta}^{\star}_{fine}\) and hyperparameters \(\mathbf{\lambda}^{\star}\) as shown in Equation (6).
Figure 3: (a) SAFS unary activation graph. (b) An example of a solution representing activation functions of each layer in the network.
\[\mathbf{\lambda}^{\star}\in\operatorname*{argmin}_{\mathbf{\lambda}\in\Lambda}c(\mathbf{\lambda};\mathcal{D}_{val})\ \ \text{s.t.}\ \ \psi^{\prime\star},\mathbf{\theta}^{\star}_{fine}(\mathbf{\lambda})\in\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta},\,\psi^{\prime}\in\mathbb{R}^{2L}}\left[\mathcal{L}_{fine}\big{(}(\mathbf{\theta}\mid\mathbf{\theta}^{\star}_{pre}\odot\mathbf{m}^{\star}),\psi^{\prime};\psi^{\star},\mathcal{D}_{train}\big{)}\right] \tag{6}\]
Due to the continuous nature of this stage, we use Stochastic Gradient Descent (SGD) to solve Equation (6) and use the validation accuracy as a fitness metric for the hyperparameter configuration.
Treating the scaling factors as learnable parameters allows us to learn them during the fine-tuning stage. Thus, the inner optimization in this step has nearly no overhead cost. The only additional cost is that of HPO, which we demonstrate in our experiments to be important and worthwhile, since the hyperparameters from training the original model might not be optimal for fine-tuning.
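As a sketch of this stage: because \(\alpha\) and \(\beta\) are ordinary learnable parameters (see the `ParametricActivation` sketch above), they are updated by the same optimizer that fine-tunes the surviving weights, while the pruning mask is re-applied after each step to keep pruned weights at zero (a common convention we assume here); the learning rate and optimizer settings come from the HPO loop.

```python
import torch
import torch.nn.functional as F

def finetune(model, masks, loader, lr=0.01, epochs=10):
    # alpha/beta of ParametricActivation modules are picked up automatically
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
            with torch.no_grad():            # keep pruned weights at exactly zero
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
    return model
```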
## 5 Experiments

We categorize the experiments based on the research questions this work aims to answer. Section 5.1 introduces the experimental setup. Section 5.2 motivates the problem of tuning activation functions for SNNs. Section 5.3 introduces the need for HPO with activation tuning for SNNs. In Section 5.4, we compare SAFS against different baselines. Appendix D provides an accuracy improvement vs. compression ratio trade-off to compare SAFS with state-of-the-art network compression methods. In Section 5.5 we compare the performance of SAFS for various pruning ratios. In Section 5.6 we provide insights on the activation functions learned by SAFS. Finally, we ablate SAFS in Section 5.7 to determine the impact of different design choices.
### Experimental Setup
**Datasets**. To evaluate SAFS, we use MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2014) and ImageNet-16 (Chrabaszcz et al., 2017) public classification datasets. Note that ImageNet-16 includes all images of the original ImageNet dataset, resized to 16\(\times\)16 pixels. All HPO experiments were conducted using SMAC3 (Lindauer et al., 2022). Appendix E presents the rest of the experimental setup.
### The Impact of Tweaking Activation Functions on the Accuracy of SNNs
To validate the assumption that activation functions indeed impact the accuracy, we investigated whether activation functions currently used for dense networks (Evci et al., 2022) are still reliable in the sparse context. Figure 4(a) shows the impact of several different activation functions on the accuracy of sparse architectures with various pruning ratios. To measure the performance during the search stage, we use a three-fold validation approach. However, we report the test accuracy of SAFS to compare our results with other baselines.
Our conclusions from this experiment can be summarised as follows: (i) ReLU does not perform best in all scenarios. We see that SRS, Swish, Tanh, Symlog, FLAU, and PReLU outperform ReLU at higher sparsity levels. Thus, the decision to use ReLU unanimously can limit the potential gain in accuracy. (ii) As we increase the pruning ratio to 99% (extremely sparse networks), despite the general drop in accuracy, the difference between the sparse and dense networks' accuracies varies greatly depending on the activation function. Thus, the choice of activation function for highly sparse networks becomes an important parameter. We note that, despite the success of SAFS in providing higher accuracy, it needs 47 GPU hours in total for learning activation functions and optimal hyperparameters. In comparison, simply refining a sparse neural network takes \(\approx\) 3.9 GPU hours.
### The Difficulty of Training Sparse Neural Networks
Currently, most algorithms for training sparse DNNs use configurations customized for their dense counterparts, e.g., starting from a fixed learning-rate scheduler. To validate the need for optimizing the training hyperparameters of sparse networks, we used the dense configurations as a baseline against hyperparameters learned by an HPO method. Figure 4(b) shows the curves of fine-tuning sparsified VGG-16 with a 99% pruning ratio trained on CIFAR-10. The training has been performed with the hyperparameters of the dense network (blue), and with training hyperparameters optimized using SMAC3 (orange).
We optimized the learning rate, learning rate scheduler, and optimizer hyperparameters with the range specified in Appendix E (Table 4). The type and range of hyperparameters are selected based on recommended ranges from deep learning literature (Simonyan and Zisserman, 2014; Subramanian et al., 2022), SMAC3 documentation (Lindauer et al., 2022), and from the various open-source libraries4 used to implement VGG-16. To prevent overfitting on the test data, we optimized the hyperparameters on validation data and tested the final performance on the test data. The poor performance (7.17% accuracy reduction) of the SNN learning strategy using dense parameters motivates the need for a separate sparsity-aware HPO regime.
Footnote 4: [https://www.kaggle.com/datasets/keras/vgg16/code](https://www.kaggle.com/datasets/keras/vgg16/code)
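A hedged sketch of such a sparsity-aware HPO setup is shown below, assuming the SMAC3 (>= 2.0) and ConfigSpace APIs; the search ranges and trial budget are illustrative stand-ins for those listed in Appendix E (Table 4).

```python
from ConfigSpace import ConfigurationSpace, Float, Categorical
from smac import HyperparameterOptimizationFacade, Scenario

cs = ConfigurationSpace()
cs.add_hyperparameters([
    Float("lr", (1e-4, 1e-1), log=True),
    Categorical("optimizer", ["sgd", "adam"]),
    Categorical("lr_scheduler", ["cosine", "step", "constant"]),
])

def fine_tune_and_validate(config, seed: int = 0) -> float:
    # In the real pipeline: fine-tune the pruned network with `config`
    # (Equation 6) and return 1 - validation accuracy; dummy cost here.
    return float(config["lr"])

scenario = Scenario(cs, n_trials=50, deterministic=True)
smac = HyperparameterOptimizationFacade(scenario, fine_tune_and_validate)
incumbent = smac.optimize()
```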
### Comparison with Magnitude Pruning Baselines
Table 1 shows the results of optimizing sparse network activation functions using SAFS with a 99% pruning ratio. An average of three runs is reported. Results show that SAFS provides 8.88% absolute accuracy improvement for VGG-16 and 6.33% for ResNet-18 trained on CIFAR-10 when compared against a vanilla magnitude pruning baseline. SAFS additionally yields 1.8% absolute Top-1 accuracy improvement for ResNet-18 and 1.54% for EfficientNet-B0 trained on ImageNet-16 when compared against a vanilla magnitude pruning baseline. SReLU (Jin et al., 2016) is a piece-wise linear activation function parameterized by four learnable parameters. Mocanu et al. (2018); Curci et al. (2021); Tessera et al. (2021) have shown that SReLU performs excellently for sparse neural networks due to improving the network's gradient flow. Results show that SAFS provides 15.99% and 19.17% higher accuracy compared to training VGG-16 and ResNet-18 with the SReLU activation function on CIFAR-10. Plus, SAFS provides 0.88% and 1.28% better accuracy compared to training ResNet-18 and EfficientNet-B0 with the SReLU activation function on the ImageNet-16 dataset. Lastly, Appendix B shows that SAFS significantly improves the gradient flow of sparse neural networks, which is associated with optimized activation functions and an efficient training protocol.
Figure 4: (a) CIFAR-10 test accuracy on sparse VGG-16 with various activation functions customized for dense networks with a 3-fold cross-validation procedure. The bold line represents the mean across the folds, while the shaded area represents the Confidence Intervals across the folds. (b) Fine-tuning sparse VGG-16 on CIFAR-10 with different training hyperparameters with three different random seeds. The pruning ratio is 99%. As shown, fine-tuning with dense hyperparameters results in inefficient training of SNNs.
### Evaluation of SAFS with Various Pruning Ratios
Figure 4 compares the performance of VGG-16 fine-tuned by SAFS and the default training protocol on CIFAR-10 over three different pruning ratios including 90%, 95%, and 99%. Results show that SAFS is extremely effective by achieving 1.65%, 7.45%, and 8.88% higher accuracies compared to VGG-16 with ReLU activation functions fine-tuned with the default training protocol at 90%, 95%, and 99% pruning ratios. Plus, SAFS is better than activation functions designed for dense networks, especially for networks with a 99% pruning ratio.
### Insights on Searching for Activation Functions
Figure 5 presents the dominance pattern of each unary operator in the first learning stage (\(\alpha=\beta=1\)) for the CIFAR-10 dataset. The results are the average of three runs with different random seeds. The unit of the color bar is the number of times a specific activation function was selected across all search iterations of the first learning stage. According to the results, it is evident that (i) Symexp and ELU are unfavorable activation functions, (ii) Symlog and Acon are dominant activation functions, mostly used in the early layers, and (iii) overall, Swish and HardSwish perform well, but they mostly appear in the middle layers.
### Ablation Study
We study the effect of each individual optimization stage of SAFS on the performance of the sparse networks in Table 2. Results show that each individual contribution provides higher accuracy for both VGG-16 and ResNet-18. However, the maximum performance is attained by the full SAFS pipeline (+15.53%, +8.88%, +6.33%, and +1.54% for LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0), where we first learn the most accurate unary operator for each layer and then fine-tune scaling factors with optimized hyperparameters.
Table 1: Refining sparse neural network activation functions with different methods. All methods use magnitude pruning (Han et al., 2015).

| Method | VGG-16 (CIFAR-10, Top-1) | ResNet-18 (CIFAR-10, Top-1) | ResNet-18 (ImageNet-16‡, Top-1 / Top-5) | EfficientNet-B0 (ImageNet-16‡, Top-1 / Top-5) |
| --- | --- | --- | --- | --- |
| Original Model (Dense) | 86.76% | 89.86% | 25.42% / 47.26% | 18.41% / 37.45% |
| Vanilla Pruning (Baseline) | 70.32% | 77.55% | 11.32% / 25.59% | 10.96% / 25.62% |
| SReLU | 63.21% | 64.71% | 12.24% / 26.89% | 11.22% / 25.98% |
| SAFS (Ours) | 79.2% (+8.88%) | 83.88% (+6.33%) | 13.12% (+1.8%) / 28.94% | 12.5% (+1.54%) / 27.15% |

‡ The Top-1 accuracy of WideResNet-20-1 on ImageNet-16 is 14.82% (Chrabaszcz et al., 2017).
Figure 5: Frequency of occurrence of each unary operator in the first learning stage (\(\alpha=\beta=1\)) for VGG-16 and ResNet-18 trained on CIFAR-10 with a 99% pruning ratio.
## 6 Conclusion
In this paper, we studied the impact of activation functions on training sparse neural networks and used this to learn new activation functions. To this end, we demonstrated that the accuracy drop incurred by training SNNs uniformly with ReLU for all units can be partially mitigated by a layer-wise search for activation functions. We proposed a novel two-stage optimization pipeline that combines discrete and stochastic optimization to select a sequence of activation functions for each layer of an SNN, along with discovering the optimal hyperparameters for fine-tuning. Our method SAFS provides significant improvement by achieving up to 8.88% and 6.33% higher accuracy for VGG-16 and ResNet-18 on CIFAR-10 over the default training protocols, especially at high pruning ratios. Crucially, since SAFS is independent of the pruning algorithm, it can optimize any sparse network.
## 7 Limitations and Broader Impact

The authors have determined that this work will have no negative impacts on society or the environment, since this work does not address any concrete application.

**Future Work and Limitations**. Sparse Neural Networks (SNNs) enable the deployment of large models on resource-limited devices by saving computational costs and memory consumption. In addition, this becomes important in view of decreasing the carbon footprint and resource usage of DNNs at inference time. We believe this opens up new avenues of research into methods that can improve the accuracy of SNNs. We hope that our work motivates engineers to use SNNs more than before in real-world products, as SAFS provides SNNs with performance similar to their dense counterparts. Some immediate directions for extending our work are: (i) leveraging the idea of accuracy predictors (Li et al., 2023) to expedite the search procedure; (ii) incorporating SAFS into sequential decision-making settings such as Reinforcement Learning (Vischer et al., 2022; Graesser et al., 2022), where SNNs have recently shown promise, to help with the deployability of such pipelines. SAFS has been evaluated on diverse datasets, including MNIST, CIFAR-10, and ImageNet-16, and various network architectures such as LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0. While the current results demonstrate the general applicability of our method and signs of scalability, we believe further experiments on larger datasets and more scalable networks would be an interesting avenue for future work.
## Acknowledgements
Aditya Mohan and Marius Lindauer were supported by the German Federal Ministry of the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (GreenAutoML4FAS project no. 67KI32007A). Mohammad Loni was supported by the HiPEAC project, a European Union's Horizon 2020 research and innovation program under grant agreement number 871174.
Table 2: Ablation study on optimizing activation functions of SNNs with 99% pruning ratio.

| CNN Model† | Dense Model | Magnitude Pruning‡ | Stage 1 only§ | Stage 2 only¶ | SAFS (Stage 1 + Stage 2) |
| --- | --- | --- | --- | --- | --- |
| LeNet-5 | 98.49% | 46.69% | 61.63% | 60.2% | 62.22% (+15.53%) |
| VGG-16 | 86.76% | 70.32% | 78.11% | 80.97% | 79.2% (+8.88%) |
| ResNet-18 | 89.86% | 77.55% | 79.34% | 82.74% | 83.88% (+6.33%) |
| EfficientNet-B0 | 18.41% | 10.96% | 11.84% | 11.7% | 12.5% (+1.54%) |

† LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0 are trained on MNIST, CIFAR-10, CIFAR-10, and ImageNet-16, respectively.
‡ ReLU is the default activation function for LeNet-5, VGG-16, and ResNet-18; Swish is the default activation function for EfficientNet-B0.
§ Learning activation functions using only the first stage of SAFS (\(\alpha=\beta=1\), without HPO).
¶ Learning \(\alpha\) and \(\beta\) for the ReLU operator with optimized hyperparameters. |
2303.12225 | Volatile Memory Motifs: Minimal Spiking Neural Networks | How spiking neuronal networks encode memories in their different time and
spatial scales constitute a fundamental topic in neuroscience and
neuro-inspired engineering. Much attention has been paid to large networks and
long-term memory, for example in models of associative memory. Smaller circuit
motifs may play an important complementary role on shorter time scales, where
broader network effects may be of less relevance. Yet, compact computational
models of spiking neural networks that exhibit short-term volatile memory and
actively hold information until their energy source is switched off, seem not
fully understood. Here we propose that small spiking neural circuit motifs may
act as volatile memory components. A minimal motif consists of only two
interconnected neurons -- one self-connected excitatory neuron and one
inhibitory neuron -- and realizes a single-bit volatile memory. An excitatory,
delayed self-connection promotes a bistable circuit in which a self-sustained
periodic orbit generating spike trains co-exists with the quiescent state of no
neuron spiking. Transient external inputs may straightforwardly induce
switching between those states. Moreover, the inhibitory neuron may act as an
autonomous turn-off switch. It integrates incoming excitatory pulses until a
threshold is reached after which the inhibitory neuron emits a spike that then
inhibits further spikes in the excitatory neuron, terminating the memory. Our
results show how external bits of information (excitatory signal), can be
actively held in memory for a pre-defined amount of time. We show that such
memory operations are robust against parameter variations and exemplify how
sequences of multidimensional input signals may control the dynamics of a
many-bits memory circuit in a desired way. | Fabio Schittler Neves, Marc Timme | 2023-03-21T22:58:40Z | http://arxiv.org/abs/2303.12225v1 | # Volatile Memory Motifs: Minimal Spiking Neural Networks
###### Abstract
How spiking neuronal networks encode memories in their different time and spatial scales constitute a fundamental topic in neuroscience and neuro-inspired engineering. Much attention has been paid to large networks and long-term memory, for example in models of associative memory. Smaller circuit motifs may play an important complementary role on shorter time scales, where broader network effects may be of less relevance. Yet, compact computational models of spiking neural networks that exhibit short-term volatile memory and actively hold information until their energy source is switched off, seem not fully understood. Here we propose that small spiking neural circuit motifs may act as volatile memory components. A minimal motif consists of only two interconnected neurons - one self-connected excitatory neuron and one inhibitory neuron - and realizes a single-bit volatile memory. An excitatory, delayed self-connection promotes a bistable circuit in which a self-sustained periodic orbit generating spike trains co-exists with the quiescent state of no neuron spiking. Transient external inputs may straightforwardly induce switching between those states. Moreover, the inhibitory neuron may act as an autonomous turn-off switch. It integrates incoming excitatory pulses until a threshold is reached after which the inhibitory neuron emits a spike that then inhibits further spikes in the excitatory neuron, terminating the memory. Our results show how external bits of information (excitatory signal), can be actively held in memory for a pre-defined amount of time. We show that such memory operations are robust against parameter variations and exemplify how sequences of multidimensional input signals may control the dynamics of a many-bits memory circuit in a desired way.
## I Introduction
Memory plays a fundamental role in biological, bio-inspired and abstract artificial computing systems. While in standard computers memory is usually implemented as discrete components [1], which can be addressed by other components as needed, in self-organized dynamical systems, computation and memory are typically intertwined, with computation and memory access taking place concurrently [2; 3]. In particular, neural networks are composed of many discrete computing units called neurons with memory being stored in the network's connectivity itself. The result of a computation is the observable emergent collective dynamics.
Due to its parallel and distributed nature, memory studies on neural networks have traditionally focused on large-scale systems, which not only exhibit a variety of emergent phenomena but also may become analytically tractable in the limit of large networks (\(N\rightarrow\infty\)) [4]. One key example of such collective phenomena is associative memory, in which memories are represented as attractors in state space, such that partial initial information or corrupted input signals may be sufficient to recover the original stored associated memories. Most neural network models, for memory or computation, are also non-volatile [5; 6], such that the information stored in the connections need not be actively maintained but stays long-term without ongoing energy inputs. Volatility, however, may be essential for a variety of cognitive functions, such as working memory and real-time planning [7; 8], in particular on shorter timescales, where broader network effects may be of lesser relevance.
Here we propose spiking neural network motifs that act as volatile memory components. A minimal example is composed of only two interconnected neurons (see Figure 1), one excitatory neuron with a delayed self-connection (autapse) and one inhibitory neuron, yielding a bistable motif circuit. A self-sustained periodic spike train, representing an 'on' state and thus a bit '1', co-exists with the quiescent state, representing an 'off' state and thus a bit '0'. Switching between those states is controlled by transient external inputs to either the inhibitory or the excitatory neuron. Alternatively, the inhibitory neuron may also act as an autonomous off-switch for the circuit. That neuron integrates the pulses incoming from the excitatory neuron until it reaches a spiking threshold, upon which the inhibitory neuron emits a spike and terminates the self-sustained periodic spike train at the excitatory neuron, overall turning the collective motif state from '1' to '0'. Collections of such motifs may be used in parallel to represent more complex information as independent bits, if larger-scale network effects are not desirable or not relevant.
Our results below show how an external bit of information can be actively held in memory for a pre-defined amount of time. To hold a '1' bit in memory, neural spiking activity and thus energy is needed, making the memory system volatile. The small neural circuits
Figure 1: **Spiking neural circuit motifs that implement a 1-bit volatile memory.** Blue circles labeled as ‘E’ represent excitatory neurons, red circles labeled as ‘I’ represent inhibitory neurons, circles as arrow heads represent inhibitory connections and conventional arrows represent excitatory connections. **(a)** A minimal circuit composed of two neurons, one excitatory and one inhibitory. **(b)** A circuit composed of an excitatory ring sub-network and an inhibitory neuron. **(a-b)** In both cases, the excitatory component has self-connections and connections to the inhibitory neurons while the inhibitory feedback connects to all excitatory neurons.
introduced below may serve as basic memory units for short-term volatile memory, and thus may complement the broad variety of previously proposed computational neural circuits and memory models [5; 9; 10], in particular the set of (also volatile) computing paradigms emerging in symmetric systems [11; 12; 13; 14] or from stochastic dynamics in random networks with local excitation and global inhibition [15; 16], which likewise take advantage of self-organization instead of carefully tuned (many) connections between neurons.
## II Minimal model of neuronal network motif
In this work we present a compact neuronal circuit motif that implements a 1-bit volatile memory. Volatile in this context means that spike activity, and thus energy, is needed to maintain at least one of the memory states. The system is compact because it can be implemented with as few as two neurons. For instance, as we sketch in Figure 1a, the system may be implemented with one inhibitory neuron and one excitatory neuron only. The connectivity is such that the excitatory neuron has a self connection and a connection to the inhibitory neuron; the inhibitory neuron has a single connection to the excitatory neuron. Alternatively, the excitatory component may be composed of a ring (Figure 1b) or other small population of neurons to better resemble a biological neural circuit or to otherwise circumvent self-connections. To explain the basic mechanisms and collective dynamics underlying volatile memory function of such motifs, we consider a minimal motif of two neurons in the remainder of this article.
For clarity of presentation, we here mathematically describe the neurons as Leaky Integrate-and-Fire models that exhibit parameters with a direct physical meaning, also for potential hardware implementations. Leaky integrate-and-fire models already capture the fundamental features of spiking neurons, including dynamics on two different time scales: long-term sub-threshold dynamics and short-term interactions (spikes) modeled via discrete pulse responses. Our specific model is defined by a pair of differential equations,
\[\frac{dV_{E}}{dt} =A_{E}+\xi_{E}(t)-\gamma_{E}V_{E}+\sum_{t_{i}\in P_{E}}\varepsilon _{E}\delta(t-t_{i}-\tau_{E})+\sum_{t_{j}\in P_{I}}\varepsilon_{I}\delta(t-t_{ j}-\tau_{I})+\eta_{E}(t) \tag{1}\] \[\frac{dV_{I}}{dt} =A_{I}+\xi_{I}(t)-\gamma_{I}V_{I}+\sum_{t_{i}\in P_{E}}\varepsilon _{E}\delta(t-t_{i}-\tau_{E})+\eta_{I}(t) \tag{2}\]
complemented by conditions for spike emission and reset. Specifically, we say that neuron \(X\in\{E,I\}\) emits a spike at time \(t:=t_{n}\) if its voltage reaches its threshold,

\[V_{X}(t)\geq\theta_{X},\]

after which that voltage is reset to

\[V_{X}(t^{+}):=0.\]
The time \(t_{n}\) indicates the \(n\)th spike time in the motif circuit (after some reference time \(t_{0}\)). Moreover, the parameters \(A_{X}\) represent the internal driving currents that set the equilibrium voltage (see below), \(\xi_{X}(t)\) the external driving currents serving as input signals to store or remove memories, and \(\gamma_{X}\) the leak constants. Finally, \(\varepsilon_{X}\) represent the connection weights, \(\tau_{X}\) the delays between a spike emitted by neuron \(X\) and the reception of that spike, and \(P_{X}\) denotes the set of all pulses elicited by neuron \(X\). The indices \(E\) and \(I\) indicate features of the excitatory and inhibitory neurons, respectively. The inputs \(\eta_{X}\) are the contributions of internal noise. We remark that the autonomous part of the system of equations above, i.e. for \(\eta_{X}(t)=\xi_{X}(t)\equiv 0\), has an analytical solution in between spike events and thus enables piecewise-exact, event-based simulations [17; 18; 19].
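To make the event-based picture concrete, here is a minimal sketch of a piecewise-exact simulation of Eqs. (1)-(2) with delta-pulse coupling. Noise and extended input currents are omitted, and the parameters are illustrative ones (not those of Figure 2) chosen so that a single recurrent pulse suffices to re-trigger the excitatory neuron; the initial external kick is approximated as an instantaneous voltage jump.

```python
import heapq
import math

def simulate(T=40.0):
    """Event-based simulation of the two-neuron memory motif (Eqs. 1-2)."""
    A = {"E": 0.9, "I": 0.0}
    gamma = {"E": 1.0, "I": 0.01}
    theta = {"E": 1.0, "I": 1.0}
    tau = {"E": 3.0, "I": 2.0}
    eps_E, eps_I = 0.2, -0.5                   # illustrative coupling strengths
    V = {x: A[x] / gamma[x] for x in "EI"}     # start at the resting voltages
    t_last = {"E": 0.0, "I": 0.0}
    events = [(1.0, "E", 0.5)]                 # external kick xi_E storing a '1'
    spikes = []
    while events and events[0][0] <= T:
        t, target, amp = heapq.heappop(events)
        # exact subthreshold evolution of `target` since its last update
        Veq = A[target] / gamma[target]
        V[target] = Veq + (V[target] - Veq) * math.exp(-gamma[target] * (t - t_last[target]))
        t_last[target] = t
        V[target] += amp                       # delta-pulse voltage jump
        if V[target] >= theta[target]:
            V[target] = 0.0                    # reset after threshold crossing
            spikes.append((t, target))
            if target == "E":                  # E spike reaches both neurons
                heapq.heappush(events, (t + tau["E"], "E", eps_E))
                heapq.heappush(events, (t + tau["E"], "I", eps_E))
            else:                              # I spike terminates the train
                heapq.heappush(events, (t + tau["I"], "E", eps_I))
    return spikes

print(simulate())  # E spikes with period tau_E until the I neuron fires
```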
## III Self-sustained and self-terminated memory
In the 2-unit motif network, two qualitatively different collective dynamics coexist (Figure 2), one exhibiting a self-sustained spike train created by the excitatory neuron, encoding a bit value '1', the other a quiescent state with no spikes emitted, encoding the bit value '0'. To store the value '1', an excitatory signal \(\xi_{E}\) is sent to the excitatory neuron; to switch from a bit '1' to zero, an external excitatory signal \(\xi_{I}\) is sent to the inhibitory neuron. Both types of external signals \(\xi_{X}\) need to be sufficiently strong, i.e. of sufficiently large amplitude and duration. The exact shape of these input signals as a function of time is not relevant as long as they are sufficiently rapid and charge the targeted neuron sufficiently for it to cross threshold and spike.
In the absence of external input signals, the voltage of both neurons tends over time towards their respective fixed points \(V_{E}:=A_{E}/\gamma_{E}\) and \(V_{I}:=A_{I}/\gamma_{I}\), see Figure 2 before input onset. A short transient input signal \(\xi_{E}\) triggers the release of the first spike by the excitatory neuron. In turn, this spike arrives at both neurons after a delay \(\tau_{E}\). For sufficiently strong pulse (response) amplitudes \(\varepsilon_{E}\), the excitatory neuron sends a second spike and the process repeats. The motif network then maintains a spike train with frequency \(1/\tau_{E}\) until it is interrupted by an inhibitory pulse. There are two different mechanisms potentially causing such an interruption. First, a strong excitatory signal \(\xi_{I}(t)\) could be sent at any desired time from outside the motif, see Figure 2a. Second, these systems hold the option of self-sustained and self-terminating memory function (see Figure 2b), with the memory duration set by system parameters (that might, in turn, be varied on demand): the ongoing sequence of excitatory pulses fed into the inhibitory neuron promotes consecutive voltage jumps. If one such spike brings the inhibitory neuron to or beyond its firing threshold, the inhibitory neuron elicits a spike that, after a delay \(\tau_{I}\), causes a voltage leak in the excitatory neuron, thereby interrupting the self-sustained spike train.
Figure 2: **Bistable dynamics: memory initiation and termination.** For both panels, the upper graphs represent input currents as a function of time while the lower ones represent the voltages of the inhibitory and excitatory neurons. **(a)** After a short input signal (upper panel), the excitatory neuron switches from its quiescent state to a self-sustained active state. A second external signal drives the inhibitory neuron to spike, which in turn terminates the memory, back to the quiescent state. **(b)** After the memory is initiated, the excitatory feedback loop persists until the inhibitory neuron produces a pulse, triggered by the consecutive excitatory pulses, thus terminating the memory. Parameters are: \(A_{E}=0.9\), \(A_{I}=0.01\), \(\gamma_{E}=1\), \(\gamma_{I}=0.12\), \(\theta_{E}=1\), \(\theta_{I}=0.3\), \(\tau_{E}=3\), \(\tau_{I}=2\), \(\varepsilon_{E}=0.05\), \(\varepsilon_{I}=-0.2\).
## IV Memory duration
An interesting feature of the memory circuit motif presented here is its tunable memory duration. Quantitatively, how long an on-state is held active before self-terminating depends on most of the system parameters, for example on the pulse amplitudes (and durations), the delays, and the leak constants \(\gamma_{X}\). For a qualitative analysis, we study the memory duration in terms of variations of the leakage parameter \(\gamma_{I}\) and the firing threshold \(\theta_{I}\), fixing all other parameters. A natural way to measure the memory duration is in terms of the number of elicited spikes; the absolute real time again depends on the parameters chosen in any motif implementation. Furthermore, because the excitatory neuron's role is simply to generate a spike train with a fixed frequency \(1/\tau_{E}\), we here study the memory duration from the perspective of the inhibitory neuron's response to such spike trains.
As shown in Figure 3a, if \(\gamma_{I}\) is large enough, most of the current injected into a neuron is lost during the inter-spike intervals and the voltage curve resembles a non-linear saw wave with a small upward drift. Contrariwise, in the limit \(\gamma_{I}\to 0\), no current is lost, as there is no leak term, and the voltage curve thus has a staircase shape. Intermediate values show an average logarithmic increase overlaid by the spikes. Notice that the exact values of \(\gamma_{I}\) shown in Figure 3 are only illustrative, as the same qualitative effects can be achieved for fixed \(\gamma_{I}\) by varying, for example, \(\tau_{E}\) instead. We also expect the system to be robust to small variations of \(\varepsilon_{E}\), as the leakage grows exponentially with the deviation from the resting state, see Figure 3b.
The memory duration is controllable. For illustration, we varied the firing threshold \(\theta_{I}\) and fixed all other parameters. Figure 3c shows how larger \(\gamma_{I}\) values restrict the discernible memory durations to a limited range of pulse counts, as the voltage peaks immediately after
Figure 3: **Memory duration and long-term dynamics (a)** Dynamical response of the inhibitory neuron to long spike trains with fixed frequency and amplitude. The larger the leak constant \(\gamma_{I}\), the slower the voltage increases on average. **(b)** Given an instantaneous voltage jump \(\Delta V_{fix}\) at time \(t_{0}\) from the resting state, this panel shows the recovery time \(\Delta t\), at which the voltage is \(V(\Delta t)-V_{I}=10^{-5}\) away from its resting state. The larger the leak constant \(\gamma_{I}\), the shorter the recovery time for the same \(\Delta V_{fix}\). **(c)** Number of spikes received by the inhibitory neuron until reset as a function of its threshold \(\theta_{I}\). **(d)** As in **(c)**, but for a smaller \(\gamma_{I}=10^{-10}\). The smaller \(\gamma_{I}\), the longer the curve resembles an equally spaced staircase. Parameters: the same as in Figure 2 if not stated otherwise. \(A_{I}=10^{-4}\) for all panels.
consecutive spikes get closer exponentially fast. For large enough \(\theta_{I}\), the memory duration is in practice long-term, as the actual difference between the voltage peaks (at spike times) decreases exponentially with \(\theta_{I}\), and the voltage either does not converge to the threshold in finite time or converges to a value below the threshold. For \(\gamma_{I}\to 0\), the leak is negligible, and the memory duration increases in equally spaced steps, multiples of the spike amplitude \(\varepsilon_{E}\) (Figure 3d). We remark that long-term memory in this context does not imply non-volatile storage but merely long-lasting activity, as the memory here is always sustained by pulses and is thus still volatile.
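This pulse counting has a simple closed form under the model above: between pulses the deviation from the resting state decays by \(r=e^{-\gamma_{I}\tau_{E}}\) and jumps by \(\varepsilon_{E}\) at each pulse, so after \(n\) pulses the deviation is \(\varepsilon_{E}(1-r^{n})/(1-r)\). The sketch below implements this (our derivation, assuming the inhibitory neuron starts at its resting voltage \(A_{I}/\gamma_{I}\) and ignoring noise):

```python
import math

def spikes_to_reset(theta_I, A_I, gamma_I, eps_E, tau_E):
    """Number of excitatory pulses needed before the inhibitory neuron fires."""
    r = math.exp(-gamma_I * tau_E)           # decay factor between pulses
    gap = theta_I - A_I / gamma_I            # distance from rest to threshold
    if gap >= eps_E / (1.0 - r):             # threshold unreachable: long-term memory
        return math.inf
    n = math.log(1.0 - gap * (1.0 - r) / eps_E) / math.log(r)
    return max(1, math.ceil(n))

# e.g. spikes_to_reset(theta_I=0.3, A_I=1e-4, gamma_I=0.05, eps_E=0.05, tau_E=3)
```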
## V Influence of noise
Noise is an ever-present feature in biological and artificial (hardware) neural networks. Its role in memory and computation is varied and often beneficial, in contrast to its effects in signal transmission lines. We now describe the effects of noise on our compact memory circuit. In the following, we add independent Gaussian noise sources \(\eta_{X}(t)\), with identically distributed random components, to both neurons. The noise is modeled (approximated) by adding the term
\[\eta_{X}(t+\delta t)-\eta_{X}(t)=\sqrt{\delta t}N_{rand}(\sigma,0) \tag{3}\]
to the right-hand side of equations (1) and (2). Here \(N_{rand}(\sigma,0)\) is a random number drawn from a Gaussian distribution with variance \(\sigma\) and centered at zero. To preserve the event-based time evolution, the noise is evaluated after discrete time intervals \(\delta t\) drawn independently from a Poisson distribution with average \(\langle\delta t\rangle=\tau_{E}/100\). That is, the noise sampling intervals are randomized and independent for each neuron, while their average sampling interval is fixed.
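A sketch of this noise scheme is shown below; as assumptions on our part, we treat \(N_{rand}(\sigma,0)\) as a zero-mean Gaussian with standard deviation \(\sigma\) and draw the sampling intervals as the exponential waiting times of a Poisson process with the stated mean.

```python
import math
import random

def noise_increment(sigma, dt):
    # Gaussian increment for eta_X over an interval dt (Eq. 3)
    return math.sqrt(dt) * random.gauss(0.0, sigma)

def next_sample_interval(tau_E):
    # random evaluation interval with mean tau_E / 100
    return random.expovariate(100.0 / tau_E)
```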
Figure 4 illustrates the effect of different noise amplitudes for intermediate and for small
Figure 4: **Noise-induced variability of memory duration.** Average number of spikes (black curves) received by the inhibitory neuron until its first spike and standard deviation (blue and red backgrounds) as functions of the inhibitory neuron's firing threshold. Measures calculated over 1000 repetitions at intervals of 0.01 volts. **(a-c)** For fixed \(\gamma_{I}\), changing the noise standard deviation \(\sigma\) softens the staircase features of the curve. Even though the mean does not seem to deviate far from the noiseless case, the standard deviation monotonically increases with \(\theta_{I}\). **(d-f)** Same results as in **(a-c)**, but with a slower increase rate. Parameters: same as in Figure 3 if not stated otherwise.
leak constants, \(\gamma_{I}=0.05\) and \(\gamma_{I}=10^{-10}\), respectively. Qualitatively, the results are the same in their most important aspects. Initially, for small \(\sigma\), noise mostly affects the vicinity of the transition points between two memory durations (the step jumps). As a result, the steps themselves become less steep than without noise. For larger noise strength \(\sigma\), the plateaus progressively lose their identity, due to the large variability in the memory duration between random realizations, as also reflected in an increase in the standard deviation of the number of spikes to reset. Because the voltage plateaus for large \(\gamma_{I}\) become progressively smaller without noise, the increase in standard deviation also becomes progressively larger and more apparent with larger \(\theta_{I}\); compare Figure 4b-c to Figure 4e-f. Furthermore, combinations of small enough \(\theta_{I}\) with large enough \(\sigma\) may promote eventual noise-induced spikes in the inhibitory neuron (false positives), even without an external signal to any of the neurons. As a consequence, spike events occur even before the excitatory neuron has its first spike, which translates into average memory durations below 1 in Figures 4(b-c) and Figures 4(e-f). Note that 1 is the minimum memory length for the noise-free dynamics, that is, a single excitatory spike promoting an inhibitory spike.
## VI Loading and flushing multi-dimensional memories
In neuronal systems, sets of neurons or neuronal networks can be used to represent and store information. In our approach to transiently holding bits in memory, neurons are interconnected to form small motifs. In the simplest setting, multi-dimensional memories may thus be established by multiple motifs acting independently and in parallel. In such settings, multi-dimensional inputs can be loaded concurrently into memory as independent bits, not unlike in a traditional computer (see Figure 5). In our model, loading a bit into memory is intuitive and in line with traditional computers: each single bit can be set to one of two states almost instantaneously (within one spike cycle interval), independently of the neurons' current states. Furthermore, this system exhibits a natural ground state (non-active), to which the system abruptly switches after the memory interval elapses. Moreover, the state representation is very convenient for binary codes, as one state has spike activity and the other a complete absence of spikes; thus, it does not require involved decoding approaches.
Figure 5 shows how a sequence of words can be loaded into an array of neurons. As expected from our previous discussion around Figure 2, a new value can be loaded independently of the system state. The single false-positive spikes after each active-to-quiescent change of state occur due to the delay \(\tau_{I}\), i.e., spike signals still in transit (sent but not yet received). As a consequence, the desired system state, i.e. the collective
Figure 5: **A sequence of four-bit words.** Only excitatory spikes are depicted. Four independent 1-bit neural circuits receive a sequence of four four-bit words. The last signal also serves to reset the system. Inputs labeled 0 represent short inputs to the inhibitory neuron and inputs labeled 1 short inputs to the excitatory neuron. In both cases the signal's duration is 0.3 and the amplitude is 0.5, see Figure 2 for details.
dynamics at all motifs, is assumed after a (small) lag time \(\Delta t>\tau_{I}\). The exact timing depends on the excitatory neuron's voltage at the time it receives the inhibition. This observation sets a minimum of two consecutive pulses with frequency \(1/\tau_{E}\) to guarantee a correct readout, because after a change in input signals, a single pulse may still be triggered by the former input signal (a false '1'). The real-world (clock) duration of \(\tau_{E}\), e.g., in seconds, is set by the neuron model's time scales, i.e., by the choice of units for \(\tau_{E}\).
## VII Discussion
We have proposed a general concept for implementing tunable volatile memory in simple neural networks. Such networks are small motifs that exploit bi- or multistability to realize memory dynamically. The memory duration is determined either by system parameters that set the time scale of memory self-termination or by external signals.
The concept of volatile memory is already familiar in computer science, see [20]. It is defined as a memory type that is actively maintained by the system, thus continuously consuming power. Contrary to storage, such memory is erased each time power is no longer provided to the system. Inspired by such concepts, we here proposed that simple neural motifs may act as volatile memory components. Our model is fundamentally different from previous neuronal models with similar functionality, which rely on, e.g., short-term plasticity [21], because it requires no changes to the network connectivity or weights. Instead, memory is held dynamically in the spike configuration until terminated internally or externally. We specifically analyzed a simple 1-bit volatile memory neural network motif that exhibits bistability. The bit '1' is represented by a self-sustained spike train and the bit '0' by no spiking activity.
Our focus on minimal motifs was motivated by two aspects: first, independent bits may play an important role in small systems, where network effects may be less relevant; second, the minimal two-neuron system offers maximal clarity for gaining insight into the fundamental mechanisms that underlie both the self-organized collective dynamics of a motif and its response to external control signals. We remark that the same concept and mechanisms also underlie volatile memory dynamics in larger recurrent motifs that exhibit a suitable inhibitory component (shutdown counter) and may thus self-terminate memory. In general, for larger motifs or several motifs embedded into a larger network, future work will need to investigate two aspects, local memory function and broader network effects. Larger motifs or networks may also offer additional, potentially more advanced, functionality, for instance in the direction of systematically correlated multi-bit parallel memory storage, see also [13; 22].
We chose a standard leaky integrate-and-fire neuronal model [23; 24; 25] to keep the number of defining parameters to the most essential ones. Nevertheless, the conditions to implement such a volatile memory circuit do not depend on the details of the neuronal model, but only on whether a self-sustained spike train can be initiated by an external signal and whether the inhibitory feedback can promptly terminate such a spike train. The results might thus be viewed as conceptual and largely independent of the neuron model.
Departing in some measure from the biological paradigm, independent bits (motif states) can be assembled to form larger sets of \(N\) motifs which combined have a large memory capacity (\(2^{N}\) states), as in traditional computers. While it is unclear whether the animal brain takes advantage of such a combinatorial approach, bio-inspired computers can certainly make use of it to complement the functionality of a large class of spiking neural systems, thereby keeping information maintenance and processing completely within the spiking paradigm if desired.
Our minimal motif for volatile memory complements a variety of alternative dynamical system models of neural and networked information processing systems [26; 27; 28; 29]. In particular, our model for short-term memory is a promising complement for approaches to computation relying on simple (neural) logic gates or on symmetrical spiking neural systems [11; 12; 13; 14]. To date, these systems transiently process information but cannot retain the result of a computation, neither in the long nor in the short term, for example in (noisy) heteroclinic networks [12] or, more generally, networks of unstable states [13]. Finally, we believe that such an alternative and compact form of volatile memory implementation may contribute to future computing architectures, e.g., in neuromorphic and bio-inspired chemical, physical, and robotic systems [30; 31; 32; 33].
## Acknowledgements
Partially supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) under project number 419424741 and under Germany's Excellence Strategy - EXC-2068 - 390729961 - Cluster of Excellence Physics of Life at TU Dresden, and the Saxonian State Ministry for Science, Culture and Tourism under grant number 100400118.
|
2310.15466 | EKGNet: A 10.96μW Fully Analog Neural Network for Intra-Patient
Arrhythmia Classification | We present an integrated approach by combining analog computing and deep
learning for electrocardiogram (ECG) arrhythmia classification. We propose
EKGNet, a hardware-efficient and fully analog arrhythmia classification
architecture that achieves high accuracy with low power consumption. The
proposed architecture leverages the energy efficiency of transistors operating
in the subthreshold region, eliminating the need for analog-to-digital
converters (ADC) and static random access memory (SRAM). The system design
includes a novel analog sequential Multiply-Accumulate (MAC) circuit that
mitigates process, supply voltage, and temperature variations. Experimental
evaluations on PhysioNet's MIT-BIH and PTB Diagnostics datasets demonstrate the
effectiveness of the proposed method, achieving average balanced accuracies of
95% and 94.25% for intra-patient arrhythmia classification and myocardial
infarction (MI) classification, respectively. This innovative approach presents
a promising avenue for developing low-power arrhythmia classification systems
with enhanced accuracy and transferability in biomedical applications. | Benyamin Haghi, Lin Ma, Sahin Lale, Anima Anandkumar, Azita Emami | 2023-10-24T02:37:49Z | http://arxiv.org/abs/2310.15466v1 | # EKGNet: A 10.96\(\upmu\)W Fully Analog Neural Network for Intra-Patient Arrhythmia Classification
###### Abstract
We present an integrated approach by combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification. We propose EKGNet, a hardware-efficient and fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption. The proposed architecture leverages the energy efficiency of transistors operating in the subthreshold region, eliminating the need for analog-to-digital converters (ADC) and static random-access memory (SRAM). The system design includes a novel analog sequential Multiply-Accumulate (MAC) circuit that mitigates process, supply voltage, and temperature variations. Experimental evaluations on PhysioNet's MIT-BIH and PTB Diagnostics datasets demonstrate the effectiveness of the proposed method, achieving average balanced accuracies of 95% and 94.25% for intra-patient arrhythmia classification and myocardial infarction (MI) classification, respectively. This innovative approach presents a promising avenue for developing low-power arrhythmia classification systems with enhanced accuracy and transferability in biomedical applications.
ECG, Classification, Deep Learning, CNN, Heartbeat, Arrhythmia, Myocardial Infarction, ASIC, SoC
## I Introduction
The electrocardiogram (ECG) is crucial for monitoring heart health in medical practice [1, 2]. However, accurately detecting and categorizing different waveforms and morphologies in ECG signals is challenging, as it is for other time-series data. Moreover, manual analysis is time-consuming and prone to errors. Given the prevalence and potential lethality of irregular heartbeats, achieving accurate and cost-effective diagnosis of arrhythmic heartbeats is crucial for effectively managing and preventing cardiovascular conditions [3, 4].
Deep neural network-based algorithms [5] are commonly used for ECG arrhythmia classification (AC) due to their high accuracy [6]. However, many of the current highly accurate arrhythmia classifiers that rely on neural networks (NN) require a large number of trainable parameters, often ranging from thousands to millions, to achieve their exceptional performance [6, 7, 8, 9, 10, 11]. This poses a significant challenge when implementing these classifiers on hardware, as accommodating such a vast number of parameters becomes impractical. Consequently, existing algorithms are computationally intensive, particularly when compared to biological neural networks that operate with significantly lower energy requirements. As a result, designing low-power NN-AC systems remains a significant challenge given the computational demands involved.
Current approaches aim to tackle this by (1) designing better AC algorithms, (2) improving parallelism and scheduling on existing hardware such as graphics processing units (GPUs), or (3) designing custom hardware. Previous studies [12, 13, 14, 15] that concentrate on patient-specific arrhythmia classification on chip necessitate training neural networks individually for each patient, which significantly limits their potential applications. Moreover, most of the existing hardware development targets digital circuits.
Analog computing in the subthreshold region offers potential energy-efficiency improvements and eliminates the need for ADCs and SRAM, in contrast to prior research that mainly focused on digital circuit implementations [16, 17]. This is particularly beneficial for ECG classification applications, which often face energy constraints in health monitoring devices [18, 19, 20, 21, 22, 23]. Despite the challenges associated with analog circuits, such as susceptibility to noise and device variation, they can be effectively utilized for neural network inference. The inherent system noise in analog circuits can even be leveraged to enhance robustness and improve classification accuracy, aligning with the desirable properties of AI algorithms [24, 25, 26].
In this paper, we propose EKGNet, a fully analog neural network with low power consumption (10.96\(\upmu\)W) that achieves high balanced accuracies of 95% on the MIT-BIH dataset and 94.25% on the PTB dataset for intra-patient arrhythmia classification and MI classification, respectively. To address the challenges of analog circuits, we design an integrated approach that combines AI algorithms and hardware design. By modeling EKGNet as a Bayesian neural network using Bayes by Backprop [27], we incorporate analog noise and mismatches into the EKGNet model [28]. Knowledge distillation [29] is employed to further enhance the network's performance by transferring knowledge from ResNet18 [30], used as a teacher network, to EKGNet. We also propose an algorithm to conduct weight fine-tuning after quantization to improve hardware performance.
Fig. 1: Model training pipeline - EKGNet training and optimization.
## II Datasets
In this work we utilize two databases for labeled ECG records: the PhysioNet MIT-BIH Arrhythmia dataset and the PTB Diagnostics ECG dataset [31, 32, 33]. Specifically, we focused on ECG lead II. The MIT-BIH dataset included ECG recordings from 47 subjects, sampled at 360Hz, with beat annotations by cardiologists. Following the AAMI EC57 standard [34], beats were categorized into four categories based on annotations (Table I). The PTB Diagnostics dataset contained ECG records from 290 subjects, including 148 with myocardial infarction (MI), 52 healthy controls, and other subjects with different diseases. Each record in this dataset consisted of ECG signals from 12 leads, sampled at 1000Hz. Our analysis concentrated on ECG lead II and the MI and healthy control categories.
## III Methods
### _Data Preparation_
We extract beats from ECG recordings for classification by employing a straightforward and effective method [8]. Our approach avoids signal filtering or processing techniques that rely on specific signal characteristics. The extracted beats are of uniform length, ensuring compatibility with subsequent processing stages. The process involves resampling the ECG data to 125Hz, dividing it into 10-second windows, and normalizing the amplitude values between zero and one. We identify local maxima through zero-crossings of the first derivative and determine ECG R-peak candidates using a threshold of 0.9 applied to the normalized local maxima. The median of the R-R time intervals within the window provides the nominal heartbeat period (T). Each R-peak is associated with a signal segment of length 1.2T, padded with zeros to achieve a fixed length. The inputs are adjusted to fit our hardware input range of 0.6 V to 0.7 V.
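A minimal NumPy/SciPy sketch of this pipeline is given below; the function name and the fixed output length of 187 samples are our assumptions, not values specified in the paper.

```python
import numpy as np
from scipy.signal import resample

def extract_beats(ecg, fs, out_fs=125, win_s=10, thresh=0.9, seg_len=187):
    """Software sketch of the beat-extraction pipeline described above.
    seg_len (the fixed output length) is an assumed value."""
    x = resample(ecg, int(len(ecg) * out_fs / fs))       # resample to 125 Hz
    beats, win = [], out_fs * win_s
    for start in range(0, len(x) - win + 1, win):        # 10-second windows
        w = x[start:start + win]
        w = (w - w.min()) / (w.max() - w.min() + 1e-12)  # normalize to [0, 1]
        d = np.diff(w)
        peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1  # derivative zero-crossings
        r = peaks[w[peaks] > thresh]                     # R-peak candidates
        if len(r) < 2:
            continue
        T = np.median(np.diff(r))                        # nominal beat period
        L = int(1.2 * T)
        for p in r:
            seg = w[p:p + L][:seg_len]
            beats.append(np.pad(seg, (0, seg_len - len(seg))))  # zero-pad
    return np.asarray(beats)
```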
To address dataset imbalance, we divided the data into training and testing sets. For balanced representation, we set aside a fixed number of beats for testing: 3200 beats (800 beats per class) for the MIT-BIH dataset and 2911 beats (809 healthy beats and 2102 MI beats) for the PTB dataset. The remaining beats underwent random oversampling [35], resulting in an augmented training dataset with an equal number of beats in each class. We ensured complete separation of training and testing data before augmentation to prevent overfitting. After augmentation, the training dataset consisted of 352,276 beats for the MIT-BIH dataset (88,069 beats per class) and 16,800 beats for the PTB dataset (8,400 beats per class).
### _EKGNet training_
To implement the fully analog NN-AC, we optimized the software using a co-design approach. The hardware behavior was emulated in software by extracting a mathematical model of the Multiply-Accumulate (MAC) unit from circuit simulations. EKGNet, a convolutional neural network (CNN), was trained for ECG classification using the constructed ECG training set. During training, Bayes by Backprop [27] was utilized to model the standard deviation of the weights (\(w\)) as the derived hardware input-referred thermal noise (\(\sigma=0.0021090w^{2}+0.0002000w+0.002355\))1. Hardware leakage noise (\(\sim\)\(N(0.0005\,V,0.0001\,V)\)) was integrated into the network's output. The training pipeline is depicted in Fig. 1, and the high-level architecture of EKGNet is shown in Fig. 2(a) and Table II. EKGNet consists of two 1-D convolutional layers, two ReLU activations, a max pooling layer, two fully connected layers, and a softmax layer [5]. For optimization, we employed Adam with \(L_{2}\) regularization weight decay to optimize the cross-entropy loss [37]. A learning rate of \(\alpha=0.003\) was used, which was halved every fifty epochs using a linear scheduler. This approach ensured that the trained weights remained within a small range suitable for implementation and improved linearity under the hardware noise characteristics (Fig. 5).
Footnote 1: The weights and coefficients are expressed in Volts.
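A minimal PyTorch sketch of this noise-aware forward pass might look as follows; it is ours, not the paper's code, and the layer placement and the reading of the leakage-noise parameters as mean/standard deviation are assumptions.

```python
import torch
import torch.nn.functional as F

def sigma_w(w):
    # input-referred thermal-noise std from the fitted hardware model (in volts)
    return 0.0021090 * w**2 + 0.0002000 * w + 0.002355

def noisy_forward(x, weight, bias=None):
    """One noise-aware layer evaluation in the Bayes-by-Backprop style:
    Gaussian weight noise with std sigma_w(weight), resampled per pass,
    plus output leakage noise (assumed mean 0.0005 V, std 0.0001 V)."""
    w = weight + sigma_w(weight) * torch.randn_like(weight)
    out = F.linear(x, w, bias)
    return out + 0.0005 + 0.0001 * torch.randn_like(out)
```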
Fig. 2: (a) EKGNet architecture. (b) Table I: Mapping of beat annotations to AAMI EC57 categories. Table II: EKGNet details. (c) Algorithm 1: Fine-tuning of weights after quantization. (d) Confusion matrix for MIT-BIH (left) and PTB (right) classifications. (e) t-SNE visualization of learned representation for MIT-BIH (left) and PTB (right) classifications. Task labels are color-coded. (f) Colored sections highlight important segments in EKGNet predictions.
By applying knowledge distillation [29] to further train EKGNet, we observed a performance improvement of 1.5% on the MIT-BIH dataset (resulting in 95% test accuracy) and 1.25% on the PTB dataset (resulting in 94.25% test accuracy). Knowledge distillation involves transferring knowledge from a larger teacher network (ResNet18) with high test accuracies (99.88% for MIT-BIH and 100% for PTB) to the smaller student network (EKGNet). Through experimentation, we determined that a temperature parameter value of 1.5 yielded optimal results, given that EKGNet has significantly fewer trainable parameters (336) than ResNet18 (\(\sim\)11 million).
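A standard Hinton-style distillation loss with the reported temperature is sketched below; the mixing weight `alpha` is our assumption, as the paper does not report one.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=1.5, alpha=0.5):
    """Distillation objective: temperature-softened KL term plus the usual
    cross-entropy on hard labels. T = 1.5 follows the text; alpha is assumed."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```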
To balance power consumption and accuracy, we used 6-bit uniform quantization for the weights. Employing a fine-tuning technique, we iteratively adjusted individual weights by shifting each up or down one quantization level and evaluating the impact on performance (Fig. 2c, Algorithm 1). With this approach, we achieved hardware accuracies of 94.88% and 94.10% on the MIT-BIH and PTB datasets, respectively.
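Our reading of this procedure, as a greedy coordinate search over quantization levels, is sketched below; Algorithm 1's exact loop structure is in Fig. 2c and may differ.

```python
import numpy as np

def finetune_quantized(idx, levels, eval_acc):
    """After 6-bit uniform quantization, nudge each weight one quantization
    level up/down and keep the move only if validation accuracy improves.
    `idx` holds level indices into `levels` (a 64-entry grid for 6 bits);
    `eval_acc` scores a full weight vector on held-out data."""
    best = eval_acc(levels[idx])
    for i in range(len(idx)):
        for step in (+1, -1):
            j = idx[i] + step
            if not 0 <= j < len(levels):
                continue
            trial = idx.copy()
            trial[i] = j
            acc = eval_acc(levels[trial])
            if acc > best:          # keep the shift only if it helps
                idx, best = trial, acc
    return idx, best
```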
### _Model Interpretability_
Interpreting machine learning algorithms, especially deep learning, in medical applications is a significant challenge [38]. We utilized t-SNE to visualize the learned representation by mapping high-dimensional vectors of the classified beats to a 2D space [39]. In Fig. 2e, we demonstrate clear separability between the different classes of the MIT-BIH and PTB datasets. Notably, only predicted class labels were used for colorization in the visualizations. To identify regions of the input data that receive more attention from EKGNet during prediction, we selected a representative input beat from each category of the MIT-BIH dataset (Fig. 2f). Color-coded visual representations were employed to highlight segments of higher importance in EKGNet's predictions. By calculating the average Shapley value [40] across the entire beat as a threshold, we selectively colored samples surpassing it. Fig. 2f illustrates the most typical attribution pattern for ECG classification, aligning with established ECG abnormalities such as ST-segment elevation (STE) and pathological Q waves. However, some model attributions are less conclusive, and the highlighted areas may not perfectly align with clinical significance.
## IV Hardware Architecture
The proposed hardware architecture includes a fully analog NN-AC and a System-on-Chip (SoC) implementation (Fig. 3). The analog NN-AC, optimized for analog computing, has 336 parameters. Digitally assisted analog circuits are used for the ReLU, max pooling, and max functions in the NN-AC. The SoC integrates a power-on-reset circuit, a bandgap voltage reference, a biasing hub, an oscillator, a scan chain, and low-dropout regulators (LDOs). An LDO with minimal output variations enhances the analog NN-AC's robustness against supply fluctuations. All circuits operate in the subthreshold region with strict duty cycle control for reduced power consumption.
Fig. 4: (a) 1 channel for CNN1, (b) Half FC1, (c) ReLU, max pooling (18 parallel peak detectors), and weight decoder, (d) Max function.
Fig. 5: (Left) Schematic, waveforms, and equation for the MAC unit, showing its tracking technique to reduce sensitivity to process parameter and temperature variations. (Middle) IA with automatic gain control, R-peak detector, and their measured waveforms. (Right) MAC characterization simulation results.
Fig. 3: (a) Die micrograph, (b) Power breakdown, (c) NN-AC and SoC Architectures, (d) Sample and Hold (S/H), (e) Buffer and biasing hub.
To achieve overlapping CNN operations in hardware, three parallel MAC units are used with a 2-input-sample delay. CNN1 has six channels with ReLU activation. CNN2 employs charge redistribution for average pooling across all six channels, followed by ReLU activation. The first half of the fully connected layer (FC1) in Fig. 4(b) consists of 18 input signals undergoing MAC operations in three MAC units. The outputs are combined and sequentially output as six signals. FC2 follows the same design. The max function selects the node with the highest voltage from FC2, producing a 2-bit digital code representing the input ECG's arrhythmia class. The weight decoder synchronizes with the NN-AC's control signals to convert digital codes to analog voltage levels. The fully analog NN-AC incorporates inputs from the sample and hold (S/H), enable signals from the R-peak detector, and weight levels from the weight decoder, generating the 2-bit digital output indicating the ECG's arrhythmia class.
Fig. 5 depicts the analog MAC unit. It consists of a multiplier, which outputs a current (\(I_{out}\)) proportional to the product of its inputs, followed by an accumulator. To reduce noise and cancel offsets, the multiplier incorporates autozero functionality. Linearity enhancement is achieved through the integration of an inverse hyperbolic tangent circuit. Resistor R3 is included to optimize the multiplier's output impedance, ensuring shift-invariance of the MAC. The accumulator converts \(I_{out}\) into a voltage and stores it in the ping-pong capacitors. During each conversion, one capacitor acts as \(V_{ref}\), while the other capacitor stores the updated voltage \(V_{ref}+I_{out}\times R_{FB}\). This sequential MAC operation scheme reduces hardware and power requirements compared to parallel operation. The accumulator utilizes chopper stabilization to mitigate offsets and noise, employing switches controlled by narrow window pulses to minimize the leakage effect. The equation in Fig. 5 shows that the MAC output depends solely on the weight, \(V_{in}\), and device matching.
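At the behavioral level, this sequential ping-pong accumulation amounts to the following sketch; it is an idealized abstraction of ours, and the lumped gains `k` and `r_fb` are assumptions rather than circuit values from the paper.

```python
def sequential_mac(v_in, weights, v_ref=0.0, k=1.0, r_fb=1.0):
    """Behavioral model of the sequential MAC: each cycle the multiplier
    outputs I_out = k * w * v, and the accumulator writes V_ref + I_out * R_FB
    onto the idle ping-pong capacitor, which serves as V_ref next cycle."""
    caps = [v_ref, v_ref]                       # ping-pong capacitor pair
    for step, (v, w) in enumerate(zip(v_in, weights)):
        i_out = k * w * v                       # multiplier output current
        caps[(step + 1) % 2] = caps[step % 2] + i_out * r_fb
    return caps[len(v_in) % 2]                  # accumulated dot product + v_ref
```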
We propose an R-peak detector in the analog domain for beat extraction, specifically identifying the maximum peak of the ECG R wave. Using ECG gradients, the signal is sampled at a rate of 125 samples per second (S/s) with a sample-and-hold (S/H) circuit employing two ping-pong capacitors to preserve consecutive samples (Fig. 3(d)). In contrast to previous studies relying on digital R-peak detection, we introduce a digitally assisted analog R-peak detector (Fig. 5, middle). By exploiting the higher gradient of the R wave in the ECG waveform, we accurately locate R-peaks by comparing the gradient obtained from the S/H with a predefined threshold. To address noise issues, a Schmitt trigger is integrated into the comparator, requiring two consecutive active-high outputs to confirm the presence of an R-peak. Maintaining a constant input amplitude to the NN-AC is essential for achieving an optimal balance
## Acknowledgment
This work was partially supported by the Carver Mead New Adventure Fund and Heritage Medical Research Institute at Caltech. We would like to express our gratitude to Wei Foo for his help in software and hardware validations. Additionally, we extend our thanks to James Chen and Katie Chiu for their contribution to hardware validation. Finally, we acknowledge Dr. Jialin Song and Dr. Yisong Yue for their collaboration on this project.
|
2301.05652 | Motion Classification Based on Harmonic Micro-Doppler Signatures Using a
Convolutional Neural Network | We demonstrate the classification of common motions of held objects using the
harmonic micro-Doppler signatures scattered from harmonic radio-frequency tags.
Harmonic tags capture incident signals and retransmit at harmonic frequencies,
making them easier to distinguish from clutter. We characterize the motion of
tagged handheld objects via the time-varying frequency shift of the harmonic
signals (harmonic Doppler). With complex micromotions of held objects, the
time-frequency response manifests complex micro-Doppler signatures that can be
used to classify the motions. We developed narrow-band harmonic tags at 2.4/4.8
GHz that support frequency scalability for multi-tag operation, and a harmonic
radar system to transmit a 2.4 GHz continuous-wave signal and receive the
scattered 4.8 GHz harmonic signal. Experiments were conducted to mimic four
common motions of held objects from 35 subjects in a cluttered indoor
environment. A 7-layer convolutional neural network (CNN) multi-label
classifier was developed and obtained a real-time classification accuracy of
94.24%, with a response time of 2 seconds per sample and a data processing
latency of less than 0.5 seconds. | Cory Hilton, Steve Bush, Faiz Sherman, Matt Barker, Aditya Deshpande, Steve Willeke, Jeffrey A. Nanzer | 2023-01-13T16:46:26Z | http://arxiv.org/abs/2301.05652v1 | Motion Classification Based on Harmonic Micro-Doppler Signatures Using a Convolutional Neural Network
###### Abstract
We demonstrate the classification of common motions of held objects using the harmonic micro-Doppler signatures scattered from harmonic radio-frequency tags. Harmonic tags capture incident signals and retransmit at harmonic frequencies, making them easier to distinguish from clutter. We characterize the motion of tagged handheld objects via the time-varying frequency shift of the harmonic signals (harmonic Doppler). With complex micromotions of held objects, the time-frequency response manifests complex micro-Doppler signatures that can be used to classify the motions. We developed narrow-band harmonic tags at 2.4/4.8 GHz that support frequency scalability for multi-tag operation, and a harmonic radar system to transmit a 2.4 GHz continuous-wave signal and receive the scattered 4.8 GHz harmonic signal. Experiments were conducted to mimic four common motions of held objects from 35 subjects in a cluttered indoor environment. A 7-layer convolutional neural network (CNN) multi-label classifier was developed and obtained a real-time classification accuracy of 94.24\(\%\), with a response time of 2 seconds per sample and a data processing latency of less than 0.5 seconds.
Harmonic radar, Micro-Doppler, Narrow-band antennas, RF tags, Classification, Gesture recognition
## I Introduction
Increasingly rapid developments in human-computer interaction, the internet of things (IoT), home health, and other areas are leading to growing demand for accurate detection and estimation of the motions of people in living spaces. Accurate classification of human motions enables wireless control of devices for interactive entertainment technologies such as augmented reality and virtual reality (AR/VR), determination of human activities to improve home health monitoring, and also differentiation between people and other moving objects. Classifying the motions of people in living spaces is challenging, however, due to the complexity of the environment. Optical systems may be used; however, they often entail computationally expensive image processing and raise privacy concerns since images of people are formed and processed. Infrared systems generally operate under similar principles and have similar drawbacks. Microwave radar has been increasingly used in short-range motion detection applications due to a number of beneficial aspects that radar holds over optical systems [1, 2]. Radar systems process complex signal returns, so that motion can be directly measured via Doppler frequency shifts, unlike optical systems, which are generally intensity-based and require techniques like change detection to infer motion. Microwave radar also provides a reliable method of detecting moving objects without forming images of people, alleviating such privacy concerns. Furthermore, when monitoring the motions of objects with rapid variations, such as human movements, the micromotions of the body generate distinguishable signatures in the time-frequency response of the radar return, called micro-Doppler signatures, which can be used for classification of the micromotions [3, 4, 5, 6, 7].
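As a toy illustration of such a signature (our sketch, not from the paper: the sample rate, motion parameters, and the simplified round-trip phase model at the harmonic wavelength are all assumptions), one can simulate the baseband return of a tag swaying sinusoidally and inspect its spectrogram:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000                                  # baseband sample rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
x_t = 0.01 * np.sin(2 * np.pi * 2.0 * t)   # 1 cm sway at 2 Hz (assumed motion)
lam = 3e8 / 4.8e9                          # wavelength of the 4.8 GHz harmonic
s = np.exp(1j * 4 * np.pi * x_t / lam)     # simplified round-trip phase model
f, tt, S = spectrogram(s, fs=fs, nperseg=256, noverlap=192,
                       return_onesided=False)
# |S| over (tt, fftshift(f)) traces the sinusoidal micro-Doppler signature
```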
One of the principal challenges in detecting and classifying micro-Doppler signatures in living spaces is the presence of significant signal reflections from the environment. These clutter returns can be significantly higher in power than the returns from a moving person. While the clutter returns are centered at zero Doppler, the phase noise of the radar system contributes to a broader frequency response that overlaps with slow-moving targets, making them indistinguishable. Since human motions tend to be relatively slow, these micro-Doppler responses can thus be challenging to detect in highly cluttered indoor environments, and even if detected, the presence of system noise may reduce the classification accuracy. Harmonic radars and harmonic tags have been developed to overcome these challenges by moving the signal response to a frequency band without clutter [8, 9, 10, 11]. A harmonic tag accepts the incident radar transmit signal at the fundamental frequency \(f_{c}\) and, using a nonlinear component such as a diode, generates a higher harmonic signal, typically the second harmonic \(2f_{c}\), which is then retransmitted and captured by the radar receiver. Since all the clutter returns are present only near \(f_{c}\), the harmonic tag signal can theoretically be detected more easily; however, the conversion efficiency of the tag dictates how much return power the radar captures.
While previous works on harmonic tags have principally focused on detection, we recently presented a method of detecting the motion of harmonic tags via the harmonic Doppler return and demonstrated the presence of micro-motion information in harmonic micro-Doppler signatures [12, 13]. When detecting the motion of multiple objects, the separate radar returns must be distinguishable. To address this with a low-cost tag, we recently designed a narrow-band harmonic tag that can be used for frequency-selective harmonic Doppler |
2306.11680 | The Implicit Bias of Batch Normalization in Linear Models and Two-layer
Linear Convolutional Neural Networks | We study the implicit bias of batch normalization trained by gradient
descent. We show that when learning a linear model with batch normalization for
binary classification, gradient descent converges to a uniform margin
classifier on the training data with an $\exp(-\Omega(\log^2 t))$ convergence
rate. This distinguishes linear models with batch normalization from those
without batch normalization in terms of both the type of implicit bias and the
convergence rate. We further extend our result to a class of two-layer,
single-filter linear convolutional neural networks, and show that batch
normalization has an implicit bias towards a patch-wise uniform margin. Based
on two examples, we demonstrate that patch-wise uniform margin classifiers can
outperform the maximum margin classifiers in certain learning problems. Our
results contribute to a better theoretical understanding of batch
normalization. | Yuan Cao, Difan Zou, Yuanzhi Li, Quanquan Gu | 2023-06-20T16:58:00Z | http://arxiv.org/abs/2306.11680v2 | The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks
###### Abstract
We study the implicit bias of batch normalization trained by gradient descent. We show that when learning a linear model with batch normalization for binary classification, gradient descent converges to a _uniform margin classifier_ on the training data with an \(\exp(-\Omega(\log^{2}t))\) convergence rate. This distinguishes linear models with batch normalization from those without batch normalization in terms of both the type of implicit bias and the convergence rate. We further extend our result to a class of two-layer, single-filter linear convolutional neural networks, and show that batch normalization has an implicit bias towards a _patch-wise uniform margin_. Based on two examples, we demonstrate that patch-wise uniform margin classifiers can outperform the maximum margin classifiers in certain learning problems. Our results contribute to a better theoretical understanding of batch normalization.
## 1 Introduction
Batch normalization (BN) is a popular deep learning technique that normalizes network layers by re-centering/re-scaling within a batch of training data (Ioffe and Szegedy, 2015). It has been empirically demonstrated to be helpful for training and often leads to better generalization performance. A series of works have attempted to understand and explain the empirical success of BN from different perspectives. Santurkar et al. (2018); Bjorck et al. (2018); Arora et al. (2019) showed that BN enables a more feasible learning rate in the training process, thus leading to faster convergence and possibly finding flat minima. A similar phenomenon has also been discovered in Luo et al. (2019), which studied BN by viewing it as an implicit weight decay regularizer. Cai et al. (2019); Kohler et al. (2019) conducted studies on the benefits of BN in training linear models with gradient descent: they quantitatively demonstrated the accelerated convergence of gradient descent with BN compared to that without BN. Very recently, Wu et al. (2023) studied how SGD interacts with batch normalization and can exhibit undesirable training dynamics. However, these works mainly focus on the convergence analysis, and cannot fully reveal the role of BN in model training and explain its effect on the generalization performance.
To this end, a more important research direction is to study the _implicit bias_(Neyshabur et al., 2014) for BN, or more precisely, identify the special properties of the solution obtained by gradient
descent with BN. When the BN is absent, the implicit bias of gradient descent for the linear model (or deep linear models) is widely studied (Soudry et al., 2018; Ji and Telgarsky, 2018): when the data is linearly separable, gradient descent on the cross-entropy loss will converge to the _maximum margin solution_. However, when the BN is added, the implicit bias of gradient descent is far less understood. Even for the simplest linear classification model, there are nearly no such results for the gradient descent with BN except Lyu and Li (2019), which established an implicit bias guarantee for gradient descent on general homogeneous models (covering homogeneous neural networks with BN, if ruling out the zero point). However, they only proved that gradient descent will converge to a Karush-Kuhn-Tucker (KKT) point of the maximum margin problem, while it remains unclear whether the obtained solution is guaranteed to achieve maximum margin, or has other properties while satisfying the KKT conditions.
In this paper, we aim to systematically study the implicit bias of batch normalization in training linear models and a class of linear convolutional neural networks (CNNs) with gradient descent. Specifically, consider a training data set \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) and assume that the training data inputs are centered without loss of generality. Then we consider the linear model and linear single-filter CNN model with batch normalization as follows
\[f(\mathbf{w},\gamma,\mathbf{x})=\gamma\cdot\frac{\langle\mathbf{w},\mathbf{x} \rangle}{\sqrt{n^{-1}\sum_{i=1}^{n}\langle\mathbf{w},\mathbf{x}_{i}\rangle^{2 }}},\ g(\mathbf{w},\gamma,\mathbf{x})=\sum_{p=1}^{P}\gamma\cdot\frac{\langle \mathbf{w},\mathbf{x}^{(p)}\rangle}{\sqrt{n^{-1}P^{-1}\sum_{i=1}^{n}\sum_{p=1} ^{P}\langle\mathbf{w},\mathbf{x}_{i}^{(p)}\rangle^{2}}},\]
where \(\mathbf{w}\in\mathbb{R}^{d}\) is the parameter vector, \(\gamma\) is the scale parameter in batch normalization, and \(\mathbf{x}^{(p)}\), \(\mathbf{x}_{i}^{(p)}\) denote the \(p\)-th patch in \(\mathbf{x}\), \(\mathbf{x}_{i}\) respectively. \(\mathbf{w},\gamma\) are both trainable parameters in the model. To train \(f(\mathbf{w},\gamma,\mathbf{x})\) and \(g(\mathbf{w},\gamma,\mathbf{x})\), we use gradient descent starting from \(\mathbf{w}^{(0)}\), \(\gamma^{(0)}\) to minimize the cross-entropy loss. The following informal theorem gives a simplified summary of our main results.
**Theorem 1.1** (Simplification of Theorems 2.2 and 3.2).: _Suppose that \(\gamma^{(0)}=1\), and that the initialization scale \(\|\mathbf{w}^{(0)}\|_{2}\) and learning rate of gradient descent are sufficiently small. Then the following results hold:_
1. _(Implicit bias of batch normalization in linear models)_ _When training_ \(f(\mathbf{w},\gamma,\mathbf{x})\)_, if the equation system_ \(\langle\mathbf{w},y_{i}\cdot\mathbf{x}_{i}\rangle=1\)_,_ \(i\in[n]\) _has at least one solution, then the training loss converges to zero, and the iterate_ \(\mathbf{w}^{(t)}\) _of gradient descent satisfies that_ \[\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}\left[\operatorname{margin}\left( \mathbf{w}^{(t)},(\mathbf{x}_{i^{\prime}},y_{i^{\prime}})\right)- \operatorname{margin}\left(\mathbf{w}^{(t)},(\mathbf{x}_{i},y_{i})\right) \right]^{2}=\exp(-\Omega(\log^{2}t)),\] _where_ \(\operatorname{margin}\left(\mathbf{w},(\mathbf{x},y)\right):=y\cdot\frac{ \langle\mathbf{w},\mathbf{x}\rangle}{\sqrt{n^{-1}\sum_{i=1}^{n}\langle \mathbf{w},\mathbf{x}_{i}\rangle^{2}}}\)_._
2. _(Implicit bias of batch normalization in single-filter linear CNNs)_ _When training_ \(g(\mathbf{w},\gamma,\mathbf{x})\)_, if the equation system_ \(\langle\mathbf{w},y_{i}\cdot\mathbf{x}_{i}^{(p)}\rangle=1\)_,_ \(i\in[n]\)_,_ \(p\in[P]\) _has at least one solution, then the training loss converges to zero, and the iterate_ \(\mathbf{w}^{(t)}\) _of gradient descent satisfies that_ \[\frac{1}{n^{2}P^{2}}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\left[\operatorname{margin}\left(\mathbf{w}^{(t)},(\mathbf{x}_{i^{\prime}}^{(p^{\prime})},y_{i^{\prime}})\right)-\operatorname{margin}\left(\mathbf{w}^{(t)},(\mathbf{x}_{i}^{(p)},y_{i})\right)\right]^{2}=\exp(-\Omega(\log^{2}t)),\] _where_ \(\operatorname{margin}\left(\mathbf{w},(\mathbf{x}^{(p)},y)\right):=y\cdot\frac{\langle\mathbf{w},\mathbf{x}^{(p)}\rangle}{\sqrt{n^{-1}P^{-1}\sum_{i=1}^{n}\sum_{p=1}^{P}\langle\mathbf{w},\mathbf{x}_{i}^{(p)}\rangle^{2}}}\)_._
Theorem 1.1 indicates that batch normalization in linear models and single-filter linear CNNs has an implicit bias towards a _uniform margin_, i.e., gradient descent eventually converges to such a predictor that all training data points are on its margin. Notably, for CNNs, the margins of the predictor are uniform not only over different training data inputs, but also over different patches within each data input.
The major contributions of this paper are as follows:
* By revealing the implicit bias of batch normalization towards a uniform margin, our result distinguishes linear models and CNNs with batch normalization from those without batch normalization. The sharp convergence rate given in our result also allows us to make comparisons with existing results of the implicit bias for linear models and linear networks. In contrast to the \(1/\log(t)\) convergence rate of linear logistic regression towards the maximum margin solution (Soudry et al., 2018), the linear model with batch normalization converges towards a uniform margin solution with convergence rate \(\exp(-\Omega(\log^{2}t))\), which is significantly faster. Therefore, our result demonstrates that batch normalization can significantly increase the directional convergence speed of linear model weight vectors and linear CNN filters.
* Our results identify a class of models, namely linear models and linear CNNs with batch normalization, whose implicit bias is not maximum margin but uniform margin.
* We note that the uniform margin in linear models can also be achieved by performing linear regression minimizing the square loss. However, the patch-wise uniform margin result for single-filter linear CNNs is unique and cannot be achieved by regression models without batch normalization. To further demonstrate the benefit of such an implicit bias for generalization, we also construct two example learning problems, for which the patch-wise uniform margin classifier is guaranteed to outperform the maximum margin classifier in terms of test accuracy.
* From a technical perspective, our analysis introduces many novel proof techniques that may be of independent interest. To derive tight bounds, we prove an inequality that is similar to Chebyshev's sum inequality, but for pair-wise maximums and differences. We also establish an equivalence result connecting a quantity named the margin discrepancy to the Euclidean metric and the metric induced by the data sample covariance matrix. Finally, we develop an induction argument over five induction hypotheses, which play a key role in developing the sharp convergence rate towards the uniform margin.
## 2 Batch Normalization in Linear Models
Suppose that \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{n},y_{n})\) are \(n\) arbitrary training data points, where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\{\pm 1\}\) for \(i\in[n]\). We consider using a linear model with batch normalization to fit these data points. During training, the prediction function is given as
\[f(\mathbf{w},\gamma,\mathbf{x})=\gamma\cdot\frac{\langle\mathbf{w},\mathbf{x} \rangle-\langle\mathbf{w},\overline{\mathbf{x}}\rangle}{\sqrt{n^{-1}\sum_{i=1 }^{n}(\langle\mathbf{w},\mathbf{x}_{i}\rangle-\langle\mathbf{w},\overline{ \mathbf{x}}\rangle)^{2}}},\]
where \(\overline{\mathbf{x}}=n^{-1}\sum_{i=1}^{n}\mathbf{x}_{i}\) is the mean of all the training data inputs, \(\mathbf{w}\in\mathbb{R}^{d}\) is the linear model parameter vector, and \(\gamma\) is the scale parameter in batch normalization. \(\mathbf{w},\gamma\) are both trainable parameters in the model. Clearly, the above definition exactly gives a linear model with batch normalization during the training of full-batch gradient descent. Note that we can assume the data are centered (i.e., \(\overline{\mathbf{x}}=\mathbf{0}\)) without loss of generality: if \(\overline{\mathbf{x}}\neq\mathbf{0}\), we can simply consider new data inputs \(\widetilde{\mathbf{x}}_{i}=\mathbf{x}_{i}-\overline{\mathbf{x}}\). Therefore, assuming \(\overline{\mathbf{x}}=\mathbf{0}\), we have
\[f(\mathbf{w},\gamma,\mathbf{x})=\gamma\cdot\frac{\langle\mathbf{w},\mathbf{x} \rangle}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}},\ \ \text{where}\ \ \boldsymbol{\Sigma}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{ \top}.\]
Consider training \(f(\mathbf{w},\gamma,\mathbf{x})\) by minimizing the cross-entropy loss
\[L(\mathbf{w},\gamma)=\frac{1}{n}\sum_{i=1}^{n}\ell[y_{i}\cdot f(\mathbf{w}, \gamma,\mathbf{x}_{i})]\]
with gradient descent, where \(\ell(z)=\log(1+\exp(-z))\) is the logistic loss. Then starting from the initial \(\mathbf{w}^{(0)}\) and \(\gamma^{(0)}\), gradient descent with learning rate \(\eta\) takes the following update
\[\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\frac{\eta\cdot\gamma^{(t)}} {n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}\ell^{\prime}[y _{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i}\cdot \bigg{(}\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t) \top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}, \tag{2.1}\] \[\gamma^{(t+1)}=\gamma^{(t)}-\frac{\eta}{n}\sum_{i=1}^{n}\ell^{ \prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i }\cdot\frac{\langle\mathbf{w}^{(t)},\mathbf{x}_{i}\rangle}{\|\mathbf{w}^{(t) }\|_{\boldsymbol{\Sigma}}}. \tag{2.2}\]
Our goal is to show that \(f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x})\) trained by (2.1) and (2.2) eventually achieves the same margin on all training data points. Note that \(f(\mathbf{w},\gamma,\mathbf{x})\) is \(1\)-homogeneous in \(\gamma\) and \(0\)-homogeneous in \(\mathbf{w}\), and the prediction of \(f(\mathbf{w},\gamma,\mathbf{x})\) on whether a data input \(\mathbf{x}\) belongs to class \(+1\) or \(-1\) does not depend on the magnitude of \(\gamma\). Therefore we focus on the normalized margin \(y_{i}\cdot\langle\mathbf{w},\mathbf{x}\rangle/\|\mathbf{w}\|_{\boldsymbol{ \Sigma}}\). Moreover, we also introduce the following quantity which we call margin discrepancy:
\[D(\mathbf{w}):=\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}\bigg{(}y_{i^{\prime}} \cdot\frac{\langle\mathbf{w},\mathbf{x}_{i^{\prime}}\rangle}{\|\mathbf{w}\|_{ \boldsymbol{\Sigma}}}-y_{i}\cdot\frac{\langle\mathbf{w},\mathbf{x}_{i}\rangle} {\|\mathbf{w}\|_{\boldsymbol{\Sigma}}}\bigg{)}^{2}.\]
The margin discrepancy \(D(\mathbf{w})\) measures how uniform the margin achieved by \(f(\mathbf{w},\gamma,\mathbf{x})\) is. When \(D(\mathbf{w})\) is zero, \(f(\mathbf{w},\gamma,\mathbf{x})\) achieves the exact same margin on all the training data points. Therefore, we call \(f(\mathbf{w},\gamma,\mathbf{x})\) or the corresponding linear classifier \(\mathbf{w}\) the _uniform margin classifier_ if \(D(\mathbf{w})=0\).
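The dynamics (2.1)-(2.2) and the margin discrepancy are easy to simulate numerically. The following NumPy sketch is ours, with synthetic Gaussian data in the overparameterized regime \(d>n\) (so a uniform margin solution exists almost surely) and with hyperparameters chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                                   # overparameterized toy problem
X = rng.standard_normal((n, d))                 # centered inputs (assumption)
y = rng.choice([-1.0, 1.0], size=n)
Sigma = X.T @ X / n

w = 1e-3 * rng.standard_normal(d)               # small initialization
gamma, eta = 1.0, 0.05

for _ in range(50000):
    s = np.sqrt(w @ Sigma @ w)
    m = y * (X @ w) / s                         # per-sample margins
    lp = -1.0 / (1.0 + np.exp(gamma * m))       # ell'(y_i f_i) for logistic loss
    g_w = (np.eye(d) - np.outer(Sigma @ w, w) / s**2) @ (X.T @ (lp * y))
    g_gamma = (lp * y) @ (X @ w) / s
    w = w - eta * gamma / (n * s) * g_w         # update (2.1)
    gamma = gamma - eta / n * g_gamma           # update (2.2)

m = y * (X @ w) / np.sqrt(w @ Sigma @ w)
D = np.mean((m[:, None] - m[None, :]) ** 2)     # margin discrepancy D(w)
print(D)                                        # decays toward 0: uniform margin
```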
### The Implicit Bias of Batch Normalization in Linear Models
In this subsection we present our main result on the implicit bias of batch normalization in linear models. We first state the following assumption.
**Assumption 2.1**.: _The equation system \(\langle\mathbf{w},y_{i}\cdot\mathbf{x}_{i}\rangle=1\), \(i\in[n]\) has at least one solution._
Assumption 2.1 is a very mild assumption which commonly holds under over-parameterization. For example, when \(d\geq n\), Assumption 2.1 holds almost surely when \(\mathbf{x}_{i}\) are sampled from a non-degenerate distribution. Assumption 2.1 can also hold in many low-dimensional learning problems
as well. Note that Assumption 2.1 is our _only_ assumption on the data: we do not make any distribution assumption on the data, and we do not make any assumption on \(d\) and \(n\) either.
In order to present the most general implicit bias result for batch normalization that covers both the \(d\geq n\) case and the \(d<n\) case, we introduce some additional notations to unify the argument. We note that by the gradient descent update formula (2.1), \(\mathbf{w}^{(t)}\) is only updated in the space \(\mathcal{X}:=\text{span}\{\mathbf{x}_{1},\ldots\mathbf{x}_{n}\}\), and the component of \(\mathbf{w}^{(t)}\) in \(\mathcal{X}^{\perp}\) is unchanged during training (of course, if \(\mathcal{X}=\mathbb{R}^{d}\), then there is no such component). This motivates us to define
\[\lambda_{\max}=\sup_{\mathbf{u}\in\mathcal{X}\setminus\{\mathbf{0}\}}\frac{ \mathbf{u}^{\top}\boldsymbol{\Sigma}\mathbf{u}}{\|\mathbf{u}\|_{2}^{2}},\qquad \lambda_{\min}=\inf_{\mathbf{u}\in\mathcal{X}\setminus\{\mathbf{0}\}}\frac{ \mathbf{u}^{\top}\boldsymbol{\Sigma}\mathbf{u}}{\|\mathbf{u}\|_{2}^{2}}.\]
Moreover, we let \(\mathbf{P}_{\mathcal{X}}\) be the projection matrix onto \(\mathcal{X}\) (if \(\mathcal{X}=\mathbb{R}^{d}\), then \(\mathbf{P}_{\mathcal{X}}=\mathbf{I}\) is simply the identity matrix). We can briefly check the scaling of \(\lambda_{\max}\) and \(\lambda_{\min}\) with a specific example: suppose that \(\mathbf{x}_{i}\), \(i\in[n]\) are independently drawn from \(N(\mathbf{0},\mathbf{I})\). Then when \(d/n\to c\) for some constant \(c\neq 1\), with high probability, \(\lambda_{\max}\) and \(\lambda_{\min}\) are both of constant order (Vershynin, 2010).
Our main result for the implicit bias of batch normalization in linear models is given in the following theorem.
**Theorem 2.2**.: _Suppose that Assumption 2.1 holds. There exist \(M=1/\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_{i}\|\mathbf{x}_{i}\| _{2})\), \(\overline{\eta}=1/\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_{i}\| \mathbf{x}_{i}\|_{2},\|\mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}^{-1})\) and constants \(C_{1},C_{2},C_{3},C_{4}>0\), such that when \(\gamma^{(0)}=1\), \(\|\mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}\leq M\), and \(\eta\leq\overline{\eta}\), there exists \(t_{0}\leq\eta^{-1}\cdot\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_ {i}\|\mathbf{x}_{i}\|_{2})\) and the following inequalities hold for all \(t\geq t_{0}\):_
\[\frac{C_{1}}{\eta\cdot(t-t_{0}+1)}\leq L(\mathbf{w}^{(t)},\gamma ^{(t)})\leq\frac{C_{2}}{\eta\cdot(t-t_{0}+1)},\] \[D(\mathbf{w}^{(t)})\leq\frac{C_{3}\lambda_{\max}}{\lambda_{\min} }\cdot\exp\Bigg{[}-\frac{C_{4}\lambda_{\min}}{\lambda_{\max}^{3/2}\cdot\| \mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta \cdot(t-t_{0})+1)\Bigg{]}.\]
By the first result in Theorem 2.2, we see that the loss function converges to zero, which guarantees perfect fitting on the training dataset. Moreover, the upper and lower bounds together demonstrate that the \(\Theta(1/t)\) convergence rate is sharp. The second result in Theorem 2.2 further shows that the margin discrepancy \(D(\mathbf{w})\) converges to zero at a rate of \(O(\exp(-\log^{2}t))\), demonstrating that batch normalization in linear models has an implicit bias towards the uniform margin. Note that Theorem 2.2 holds under mild assumptions: we only require that (i) a uniform margin solution exists (which is obviously a necessary assumption), and (ii) the initialization scale \(\|\mathbf{w}^{(0)}\|_{2}\) and the learning rate \(\eta\) are small enough (which are common in existing implicit bias results (Gunasekar et al., 2017; Li et al., 2018; Arora et al., 2019)). Therefore, Theorem 2.2 gives a comprehensive characterization on the implicit bias of batch normalization in linear models.
**Comparison with the implicit bias for linear models without BN.** When BN is absent, it is widely known that gradient descent, performed on linear models with the cross-entropy loss, converges to the maximum margin solution at a \(O\big{(}1/\log(t)\big{)}\) rate (Soudry et al., 2018). In comparison, our result demonstrates that when batch normalization is added to the model, the implicit bias is changed from maximum margin to uniform margin, and the (margin) convergence rate is significantly improved to \(\exp(-\Omega(\log^{2}t))\), which is far faster than \(O(1/\log(t))\).
**Comparison with the implicit bias for homogeneous models.** Note that when BN is added, the model function \(f(\mathbf{w},\gamma,\mathbf{x})\) is 1-homogeneous for any \(\mathbf{w}\neq\mathbf{0}\), i.e., for any constant \(c\), we have
\(f(c\mathbf{w},c\gamma,\mathbf{x})=c\cdot f(\mathbf{w},\gamma,\mathbf{x})\). Therefore, by the implicit bias result for general homogeneous models in Lyu and Li (2019), we can conclude that the uniform margin solution is a KKT point of the maximum margin problem. On the other hand, it is clear that the uniform margin solution may not be able to achieve the maximum margin on the training data. This implies that for general homogeneous models (or homogeneous models with BN, which is still a class of homogeneous models), it is possible that gradient descent will not converge to the maximum margin solution.
## 3 Batch Normalization in Two-layer Linear CNNs
In Section 2, we have shown that batch normalization in linear models has an implicit bias towards the uniform margin. This reveals that linear predictors with batch normalization in logistic regression may behave more similarly to the linear regression predictor trained by minimizing the square loss. To further distinguish batch normalization from other methods, in this section we extend our results in Section 2 to a class of linear CNNs with a single convolution filter.
Suppose that \((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\) are \(n\) training data points, and each \(\mathbf{x}_{i}\) consists of \(P\) patches \(\mathbf{x}_{i}=[\mathbf{x}_{i}^{(1)},\mathbf{x}_{i}^{(2)},\ldots,\mathbf{x}_ {i}^{(P)}]\), where \(\mathbf{x}_{i}^{(p)}\in\mathbb{R}^{d}\), \(p\in[P]\). We train a linear CNN with a single filter \(\mathbf{w}\) to fit these data points. During training, the CNN model with batch normalization is given as
\[g(\mathbf{w},\gamma,\mathbf{x})=\sum_{p=1}^{P}\gamma\cdot\frac{\langle\mathbf{ w},\mathbf{x}^{(p)}\rangle}{\|\mathbf{w}\|\mathbf{\Sigma}}\ \ \text{with}\ \ \mathbf{\Sigma}=\frac{1}{nP}\sum_{i=1}^{n}\sum_{p=1}^{P}\mathbf{x}_{i}^{(p)} \mathbf{x}_{i}^{(p)\top},\]
where \(\gamma\) is the scale parameter in batch normalization. Here \(\mathbf{w},\gamma\) are both trainable parameters. Similar to the linear model case, the above definition can be obtained by centering the training data points. Moreover, note that here different patches of a data point \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(P)}\) can have arbitrary overlaps, and our theory applies even when some of the patches are identical. Note also that in our definition, batch normalization is applied not only over the batch of data, but also over patches, which matches the definition of batch normalization in CNNs in Ioffe and Szegedy (2015).
We again consider training \(g(\mathbf{w},\gamma,\mathbf{x})\) by minimizing the cross-entropy loss with gradient descent starting from initialization \(\mathbf{w}^{(0)}\), \(\gamma^{(0)}\). As the counterpart of the margin discrepancy \(D(\mathbf{w})\) defined in Section 2, here we define the patch-wise margin discrepancy as
\[D_{\text{patch}}(\mathbf{w})=\frac{1}{n^{2}P^{2}}\sum_{i,i^{\prime}=1}^{n} \sum_{p,p^{\prime}=1}^{P}\bigg{(}y_{i^{\prime}}\cdot\frac{\langle\mathbf{w}, \mathbf{x}_{i^{\prime}}^{(p^{\prime})}\rangle}{\|\mathbf{w}\|_{\mathbf{ \Sigma}}}-y_{i}\cdot\frac{\langle\mathbf{w},\mathbf{x}_{i}^{(p)}\rangle}{\| \mathbf{w}\|_{\mathbf{\Sigma}}}\bigg{)}^{2}.\]
We call \(g(\mathbf{w},\gamma,\mathbf{x})\) or the corresponding linear classifier defined by \(\mathbf{w}\) the _patch-wise uniform margin classifier_ if \(D_{\text{patch}}(\mathbf{w})=0\).
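For reference, \(D_{\mathrm{patch}}\) is straightforward to compute; a NumPy sketch (ours) for patch tensors of shape \((n,P,d)\) is:

```python
import numpy as np

def patch_margin_discrepancy(w, X, y, Sigma):
    """D_patch(w) for patch tensor X of shape (n, P, d), labels y in {-1, +1},
    and Sigma = (1/(nP)) * sum_i sum_p x_i^(p) x_i^(p)^T."""
    s = np.sqrt(w @ Sigma @ w)
    m = (y[:, None] * (X @ w) / s).ravel()   # all n*P patch-wise margins
    return np.mean((m[:, None] - m[None, :]) ** 2)
```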
### The Implicit Bias of Batch Normalization in Two-Layer Linear CNNs
Similar to Assumption 2.1, we make the following assumption to guarantee the existence of the patch-wise uniform margin solution.
**Assumption 3.1**.: _The equation system \(\langle\mathbf{w},y_{i}\cdot\mathbf{x}_{i}^{(p)}\rangle=1\), \(i\in[n]\), \(p\in[P]\) has at least one solution._
We also extend the notations in Section 2.1 to the multi-patch setting as follows. We define
\(\mathcal{X}=\mathrm{span}\{\mathbf{x}_{i}^{(p)},i\in[n],p\in[P]\}\), and
\[\lambda_{\max}=\sup_{\mathbf{u}\in\mathcal{X}\setminus\{\mathbf{0}\}}\frac{ \mathbf{u}^{\top}\boldsymbol{\Sigma}\mathbf{u}}{\left\|\mathbf{u}\right\|_{2}^{ 2}},\qquad\lambda_{\min}=\inf_{\mathbf{u}\in\mathcal{X}\setminus\{\mathbf{0} \}}\frac{\mathbf{u}^{\top}\boldsymbol{\Sigma}\mathbf{u}}{\left\|\mathbf{u} \right\|_{2}^{2}}.\]
Moreover, we let \(\mathbf{P}_{\mathcal{X}}\) be the projection matrix onto \(\mathcal{X}\).
The following theorem states the implicit bias of batch normalization in two-layer linear CNNs.
**Theorem 3.2**.: _Suppose that Assumption 3.1 holds. There exist \(M=1/\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_{i}\|\mathbf{x}_{i}\|_{2},P)\), \(\overline{\eta}=1/\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_{i}\|\mathbf{x}_{i}\|_{2},P,\|\mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}^{-1})\) and constants \(C_{1},C_{2},C_{3},C_{4}>0\), such that when \(\gamma^{(0)}=1\), \(\|\mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}\leq M\), and \(\eta\leq\overline{\eta}\), there exists \(t_{0}\leq\eta^{-1}\cdot\mathrm{poly}(\lambda_{\min}^{-1},\lambda_{\max},\max_{i}\|\mathbf{x}_{i}\|_{2},P)\) such that the following inequalities hold for all \(t\geq t_{0}\):_
\[\frac{C_{1}}{\eta P\cdot(t-t_{0}+1)}\leq L(\mathbf{w}^{(t)}, \gamma^{(t)})\leq\frac{C_{2}}{\eta P\cdot(t-t_{0}+1)},\] \[D_{\mathrm{patch}}(\mathbf{w}^{(t)})\leq\frac{C_{3}\lambda_{ \max}}{\lambda_{\min}}\cdot\exp\Bigg{[}-\frac{C_{4}\lambda_{\min}}{\lambda_{ \max}^{3/2}\cdot\|\mathbf{P}_{\mathcal{X}}\mathbf{w}^{(0)}\|_{2}^{2}}\cdot \log^{2}((8/9)\eta P\cdot(t-t_{0})+1)\Bigg{]}.\]
Theorem 3.2 is the counterpart of Theorem 2.2 for two-layer single-filter CNNs. It further reveals the fact that batch normalization encourages CNNs to achieve the same margin on all data patches. It is clear that the convergence rate is \(\exp(-\Omega(\log^{2}t))\), which is fast compared with the convergence towards the maximum margin classifier when training linear CNNs without batch normalization.
### Examples Where Uniform Margin Outperforms Maximum Margin
Here we give two examples of learning problems, and show that when training two-layer, single-filter linear CNNs, the patch-wise uniform margin classifier given by batch normalization outperforms the maximum margin classifier given by the same neural network model without batch normalization.
We note that when making predictions on a test data input \(\mathbf{x}_{\mathrm{test}}\), the standard deviation calculated in batch normalization (i.e., the denominator \(\|\mathbf{w}\|_{\boldsymbol{\Sigma}}\)) is still based on the training data set and is therefore unchanged. Then as long as \(\gamma>0\), we have
\[\mathrm{sign}[g(\mathbf{w},\gamma,\mathbf{x}_{\mathrm{test}})]=\mathrm{sign} \Bigg{[}\sum_{p=1}^{P}\gamma\cdot\frac{\langle\mathbf{w},\mathbf{x}^{(p)} \rangle}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}}\Bigg{]}=\mathrm{sign}(\langle \mathbf{w},\overline{\mathbf{x}}_{\mathrm{test}}\rangle),\]
where we define \(\overline{\mathbf{x}}_{\mathrm{test}}:=\sum_{p=1}^{P}\mathbf{x}_{\mathrm{test}}^ {(p)}\). We compare the performance of the patch-wise uniform margin classifier \(\mathbf{w}_{\mathrm{uniform}}\) with the maximum margin classifier, which is defined as
\[\mathbf{w}_{\max}=\mathrm{argmin}\left\|\mathbf{w}\right\|_{2}^{2},\ \ \text{ subject to }y_{i}\cdot\langle\mathbf{w},\overline{\mathbf{x}}_{i}\rangle\geq 1,i \in[n]. \tag{3.1}\]
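As a concrete reference point, (3.1) is a small quadratic program and can be solved directly. Below is a sketch; the SciPy-based solver and the function name are our own choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def max_margin(X_bar, y):
    """Solve (3.1): minimize ||w||_2^2 subject to y_i * <w, x_bar_i> >= 1.

    X_bar has shape (n, d) with rows x_bar_i = sum_p x_i^{(p)};
    y is an array of labels in {-1, +1}.
    """
    n, d = X_bar.shape
    cons = {"type": "ineq", "fun": lambda w: y * (X_bar @ w) - 1.0}
    res = minimize(lambda w: w @ w, x0=np.ones(d), constraints=cons)
    return res.x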
By Soudry et al. (2018), \(\mathbf{w}_{\max}\) can be obtained by training a two-layer, single-filter CNN without batch normalization. We hence study the difference between \(\mathbf{w}_{\mathrm{uniform}}\) and \(\mathbf{w}_{\max}\) in two examples. Below we present the first learning problem example and the learning guarantees.
**Example 3.3**.: _Let \(\mathbf{u}\in\mathbb{R}^{d}\) be a fixed vector. Then each data point \((\mathbf{x},y)\) with_
\[\mathbf{x}_{i}=[\mathbf{x}_{i}^{(1)},\mathbf{x}_{i}^{(2)},\ldots,\mathbf{x}_{i }^{(P)}]\in\mathbb{R}^{d\times P}\]
_and \(y\in\{-1,1\}\) is generated from the distribution \(\mathcal{D}_{1}\) as follows:_
* _The label_ \(y\) _is generated as a Rademacher random variable._
* _For_ \(p\in[P]\)_, the input patch_ \(\mathbf{x}^{(p)}\) _is given as_ \(y\cdot\mathbf{u}+\boldsymbol{\xi}^{(p)}\)_, where_ \(\boldsymbol{\xi}^{(p)}\)_,_ \(p\in[P]\) _are independent noises generated from_ \(N(\mathbf{0},\sigma^{2}(\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u}\|_{2}^{2}))\)_._
**Theorem 3.4**.: _Let \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) be the training data set consisting of \(n\) independent data points drawn from the distribution \(\mathcal{D}_{1}\) in Example 3.3. Suppose that \(d=2n\), \(P\geq 4\) and \(\sigma\geq 20\|\mathbf{u}\|_{2}\cdot P^{1/2}d^{1/2}\). Then with probability \(1\), the maximum margin classifier \(\mathbf{w}_{\max}\) on \(S\) exists and is unique, and the patch-wise uniform margin classifier \(\mathbf{w}_{\mathrm{uniform}}\) on \(S\) exists and is unique up to a scaling factor. Moreover, with probability at least \(1-\exp(-\Omega(d))\) with respect to the randomness in the training data, the following results hold:_
* \(\mathbb{P}_{(\mathbf{x}_{\mathrm{test}},y_{\mathrm{test}})\sim\mathcal{D}_{1} }(y_{\mathrm{test}}\cdot\langle\mathbf{w}_{\mathrm{uniform}},\overline{ \mathbf{x}}_{\mathrm{test}}\rangle<0)=0\)_._
* \(\mathbb{P}_{(\mathbf{x}_{\mathrm{test}},y_{\mathrm{test}})\sim\mathcal{D}_{1} }(y_{\mathrm{test}}\cdot\langle\mathbf{w}_{\max},\overline{\mathbf{x}}_{ \mathrm{test}}\rangle<0)=\Theta(1)\)_._
Below we give the second learning problem example and the corresponding learning guarantees for the patch-wise uniform margin classifier and the maximum margin classifier.
**Example 3.5**.: _Let \(\mathbf{x}=[\mathbf{x}^{(1)},\mathbf{x}^{(2)}]\) be the data with two patches, where \(\mathbf{x}^{(p)}\in\mathbb{R}^{d}\) is a \(d\)-dimensional vector. Let \(\mathbf{u}\) and \(\mathbf{v}\) be two fixed vectors and \(\rho\in[0,0.5)\). Then given the Rademacher label \(y\in\{-1,1\}\), the data input \(\mathbf{x}\) is generated from the distribution \(\mathcal{D}_{2}\) as follows:_
* _With probability_ \(1-\rho\)_, one data patch_ \(\mathbf{x}^{(p)}\) _is the strong signal_ \(y\mathbf{u}\) _and the other patch is the random Gaussian noise_ \(\boldsymbol{\xi}\sim N\big{(}0,\sigma^{2}(\mathbf{I}-\mathbf{u}\mathbf{u}^{ \top}/\|\mathbf{u}\|_{2}^{2}-\mathbf{v}\mathbf{v}^{\top}/\|\mathbf{v}\|_{2}^{2 })\big{)}\)_._
* _With probability_ \(\rho\)_, one data patch_ \(\mathbf{x}^{(p)}\) _is the weak signal_ \(y\mathbf{v}\) _and the other patch is the combination of random noise_ \(\boldsymbol{\xi}\sim N(0,\sigma^{2}\mathbf{I})\) _and feature noise_ \(\alpha\cdot\zeta\mathbf{u}\)_, where_ \(\zeta\) _is randomly drawn from_ \(\{-1,1\}\) _equally and_ \(\alpha\in(0,1)\)_._
_The signals are set to be orthogonal to each other, i.e., \(\langle\mathbf{u},\mathbf{v}\rangle=0\). Moreover, we set \(d=n^{2}\log(n)\), \(\sigma=d^{-1/2}\), \(\rho=n^{-3/4}\), \(\alpha=n^{-1/2}\), \(\|\mathbf{u}\|_{2}=1\), and \(\|\mathbf{v}\|_{2}=\alpha^{2}\)._
**Theorem 3.6**.: _Suppose that the data are generated according to Example 3.5, and let \(\mathbf{w}_{\mathrm{uniform}}\) and \(\mathbf{w}_{\max}\) be the uniform margin and maximum margin solutions in the subspace \(\mathcal{X}\), respectively. Then with probability at least \(1-\exp(-\Omega(n^{1/4}))\) with respect to the randomness in the training data, the following results hold:_
* \(\mathbb{P}_{(\mathbf{x}_{\mathrm{test}},y_{\mathrm{test}})\sim\mathcal{D}_{2} }(y\cdot\langle\mathbf{w}_{\mathrm{uniform}},\overline{\mathbf{x}}_{\mathrm{ test}}\rangle<0)\leq\frac{1}{n^{10}}\)_._
* \(\mathbb{P}_{(\mathbf{x}_{\mathrm{test}},y_{\mathrm{test}})\sim\mathcal{D}_{2} }(y\cdot\langle\mathbf{w}_{\max},\overline{\mathbf{x}}_{\mathrm{test}}\rangle<0 )\geq\frac{1}{4n^{3/4}}\)_._
We note that data models similar to Examples 3.3 and 3.5 have been considered in a series of works (Allen-Zhu and Li, 2020; Zou et al., 2021; Cao et al., 2022). According to Theorems 3.4 and 3.6, the patch-wise uniform margin classifier achieves better test accuracy than the maximum margin classifier in both examples. The intuition behind these examples is that by ensuring a patch-wise uniform margin, the classifier amplifies the effect of weak, stable features over strong, unstable features and noises. We remark that it is not difficult to construct more examples where the patch-wise uniform margin classifier performs better. However, we can also similarly construct other examples where the maximum margin classifier gives better predictions. The goal of our discussion here is just to demonstrate that there exist learning problems where patch-wise uniform margin classifiers perform well.
## 4 Overview of the Proof Technique
In this section, we explain our key proof techniques in the study of the implicit bias of batch normalization, and discuss the main technical contributions of this paper. For clarity, we mainly focus on the setting of batch normalization in linear models as defined in Section 2.
We first introduce some simplifications of notation. As we discussed in Subsection 2.1, the training of \(\mathbf{w}^{(t)}\) always happens in the subspace \(\mathcal{X}=\operatorname{span}\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\). When \(\mathcal{X}\subsetneq\mathbb{R}^{d}\), we need the projection matrix \(\mathbf{P}_{\mathcal{X}}\) in our result. In fact, under this setting we need to apply such a projection whenever \(\mathbf{w}\) occurs, and the notations can thus become quite complicated. To simplify notation, throughout our proof we use the slight abuse of notation
\[\|\mathbf{a}\|_{2}:=\|\mathbf{P}_{\mathcal{X}}\mathbf{a}\|_{2},\qquad\langle \mathbf{a},\mathbf{b}\rangle=\langle\mathbf{P}_{\mathcal{X}}\mathbf{a}, \mathbf{P}_{\mathcal{X}}\mathbf{b}\rangle\]
for all \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{d}\). Then by the definition of \(\lambda_{\max},\lambda_{\min}\), we have \(\lambda_{\min}\cdot\|\mathbf{a}\|_{2}^{2}\leq\|\mathbf{a}\|_{\boldsymbol{ \Sigma}}^{2}\leq\lambda_{\max}\cdot\|\mathbf{a}\|_{2}^{2}\) for all \(\mathbf{a}\in\mathbb{R}^{d}\). Moreover, let \(\mathbf{w}^{*}\in\mathcal{X}\) be the unique vector satisfying
\[\mathbf{w}^{*}\in\mathcal{X},\quad\langle\mathbf{w}^{*},y_{i}\cdot\mathbf{x}_ {i}\rangle=1,\quad i\in[n], \tag{4.1}\]
and denote \(\mathbf{z}_{i}=y_{i}\cdot\mathbf{x}_{i}\), \(i\in[n]\). Then it is clear that \(\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle=1\) for all \(i\in[n]\). In our analysis, we frequently encounter the derivatives of the cross-entropy loss on each data point. Therefore we also denote \(\ell_{i}^{\prime}=\ell^{\prime}[y_{i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\), \(\ell_{i}^{\prime(t)}=\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\), \(i\in[n]\).
### Positive Correlation Between \(\mathbf{w}^{*}\) and the Gradient Update
In order to study the implicit bias of batch normalization, the first challenge is to identify a proper target to which the predictor converges. In our analysis, this proper target, i.e. the uniform margin solution, is revealed by a key identity which is presented in the following lemma.
**Lemma 4.1**.: _Under Assumption 2.1, for any \(\mathbf{w}\in\mathbb{R}^{d}\), it holds that_
\[\langle-\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle\] \[\qquad\qquad=\frac{\gamma}{2n^{2}\|\mathbf{w}\|_{\boldsymbol{ \Sigma}}^{3}}\sum_{i,i^{\prime}=1}^{n}|\ell_{i}^{\prime}|\cdot|\ell_{i^{\prime }}^{\prime}|\cdot(\langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle-\langle \mathbf{w},\mathbf{z}_{i}\rangle)\cdot(|\ell_{i^{\prime}}^{\prime}|^{-1}\cdot \langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle-|\ell_{i}^{\prime}|^{-1}\cdot \langle\mathbf{w},\mathbf{z}_{i}\rangle).\]
Recall that \(\mathbf{w}^{*}\) defined in (4.1) is a uniform margin solution. Lemma 4.1 thus gives an exact calculation on the component of the training loss gradient pointing towards a uniform margin classifier. More importantly, we see that the factors \(|\ell_{i}^{\prime}|^{-1}\), \(|\ell_{i^{\prime}}^{\prime}|^{-1}\) are essentially also functions of \(\langle\mathbf{w},\mathbf{z}_{i}\rangle\) and \(\langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle\) respectively. If the margins of the predictor on a pair of data points \((\mathbf{x}_{i},y_{i})\) and \((\mathbf{x}_{i^{\prime}},y_{i^{\prime}})\) are not equal, i.e., \(\langle\mathbf{w},\mathbf{z}_{i}\rangle\neq\langle\mathbf{w},\mathbf{z}_{i^{ \prime}}\rangle\), then we can see from Lemma 4.1 that a gradient descent step on the current predictor will push the predictor towards the direction of \(\mathbf{w}^{*}\) by a positive length. To more accurately characterize this property, we give the following lemma.
**Lemma 4.2**.: _For all \(t\geq 0\), it holds that_
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle\geq\langle\mathbf{w}^{(t)}, \mathbf{w}^{*}\rangle+\frac{\gamma^{(t)}\eta}{2\|\mathbf{w}^{(t)}\|_{ \boldsymbol{\Sigma}}^{2}}\cdot\max\left\{\frac{\exp(-\gamma^{(t)})}{8},\min_{i }|\ell_{i}^{\prime(t)}|\right\}\cdot D(\mathbf{w}^{(t)}),\]
Lemma 4.2 is established based on Lemma 4.1. It shows that as long as the margin discrepancy
is not zero, the inner product \(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\) will increase during training. It is easy to see that the result in Lemma 4.2 essentially gives two inequalities:
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle \geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\gamma^{( t)}\eta}{2\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}}\cdot\frac{\exp(- \gamma^{(t)})}{8}\cdot D(\mathbf{w}^{(t)}), \tag{4.2}\] \[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle \geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\gamma^{ (t)}\eta}{2\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}}\cdot\min_{i}|\ell_{i}^ {\prime(t)}|\cdot D(\mathbf{w}^{(t)}). \tag{4.3}\]
In fact, inequality (4.3) above with the factor \(\min_{i}|\ell_{i}^{\prime(t)}|\) is relatively easy to derive: we essentially lower bound each \(|\ell_{i}^{\prime(t)}|\) by the minimum over all of them. In comparison, inequality (4.2) with the factor \(\exp(-\gamma^{(t)})\) is highly nontrivial: in the early stage of training where the predictor may have very different margins on different data, we can see that \(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})=\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)},y_{i}\cdot\mathbf{x}_{i}\rangle\cdot(n^{-1}\sum_{j}\langle\mathbf{w}^{(t)},\mathbf{x}_{j}\rangle^{2})^{-1/2}\) can be as large as \(\gamma^{(t)}\cdot\sqrt{n}\) when only one inner product among \(\{\langle\mathbf{w}^{(t)},\mathbf{x}_{i}\rangle\}_{i=1}^{n}\) is large and the others are all close to zero. Hence, \(\min_{i}|\ell_{i}^{\prime(t)}|\) can be smaller than \(\exp(-\gamma^{(t)}\cdot\sqrt{n})\) in the worst case. Therefore, (4.2) is tighter than (4.3) when the margins of the predictor on different data are not close. This tighter result is proved based on a technical inequality (see Lemma C.1), and deriving it is one of the key technical contributions of this paper.
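As a side note, the identity in Lemma 4.1 is easy to verify numerically. Below is a sketch using PyTorch autograd; the problem sizes and the value of \(\gamma\) are arbitrary choices of ours:

```python
import torch

torch.manual_seed(0)
n, d, gamma = 20, 100, 1.3
X = torch.randn(n, d)
y = torch.where(torch.rand(n) < 0.5, 1.0, -1.0)
Z = y[:, None] * X                               # rows z_i = y_i * x_i
w_star = torch.linalg.pinv(Z) @ torch.ones(n)    # w* in (4.1): min-norm solution of <w, z_i> = 1
w = torch.randn(d, requires_grad=True)

def train_loss(w):
    w_sigma = torch.sqrt((X @ w).pow(2).mean())  # ||w||_Sigma
    return torch.log1p(torch.exp(-gamma * (Z @ w) / w_sigma)).mean()

(grad,) = torch.autograd.grad(train_loss(w), w)
lhs = -grad @ w_star                             # <-grad_w L, w*>

with torch.no_grad():
    u = Z @ w                                    # u_i = <w, z_i>
    w_sigma = torch.sqrt((X @ w).pow(2).mean())
    lp = 1.0 / (1.0 + torch.exp(gamma * u / w_sigma))             # |l'_i|
    diff = u[None, :] - u[:, None]                                # u_{i'} - u_i
    scaled = u[None, :] / lp[None, :] - u[:, None] / lp[:, None]  # |l'|^{-1}-weighted differences
    rhs = gamma / (2 * n**2 * w_sigma**3) * (lp[:, None] * lp[None, :] * diff * scaled).sum()

print(float(lhs), float(rhs))  # the two values agree up to floating point error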
### Equivalent Metrics of Margin Discrepancy and Norm Bounds
We note that the result in Lemma 4.2 involves the inner products \(\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle\), \(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\) as well as the margin discrepancy \(D(\mathbf{w}^{(t)})\). The inner products and the margin discrepancy are essentially different metrics on how uniform the margins are. To proceed, we need to unify these different metrics. We have the following lemma addressing this issue.
**Lemma 4.3**.: _For any \(\mathbf{w}\in\mathbb{R}^{d}\), it holds that_
\[\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}\cdot D(\mathbf{w})=\frac{1} {n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w},\mathbf{z}_{i^{\prime}} \rangle-\langle\mathbf{w},\mathbf{z}_{i}\rangle)^{2}=2\|\mathbf{w}-\langle \mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{ \mathbf{\Sigma}}^{2},\] \[\lambda_{\min}\cdot\left\|\mathbf{w}-\frac{\langle\mathbf{w}^{ *},\mathbf{w}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_ {2}^{2}\leq\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{ \Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\leq\lambda_{\max}\cdot \left\|\mathbf{w}-\frac{\langle\mathbf{w}^{*},\mathbf{w}\rangle}{\|\mathbf{w}^ {*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}.\]
By Lemma 4.3, it is clear that \(\|\mathbf{w}\|_{\boldsymbol{\Sigma}}\cdot\sqrt{D(\mathbf{w})}\), \(\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{\boldsymbol{\Sigma}}\) and \(\|\mathbf{w}-\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\langle\mathbf{w}^{*},\mathbf{w}\rangle\cdot\mathbf{w}^{*}\|_{2}\) are equivalent metrics on the distance between \(\mathbf{w}\) and \(\operatorname{span}\{\mathbf{w}^{*}\}\). Lemma 4.3 is fundamental throughout our proof as it unifies (i) the Euclidean geometry induced by the linear model and gradient descent and (ii) the geometry defined by \(\boldsymbol{\Sigma}\) induced by batch normalization. Eventually, Lemma 4.3 converts both metrics to the margin discrepancy, which is essential in our proof.
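The first identity in Lemma 4.3 can likewise be checked numerically; a NumPy sketch, with problem sizes of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 30
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))
Z = y[:, None] * X
Sigma = X.T @ X / n
w_star = np.linalg.pinv(Z) @ np.ones(n)        # uniform margin solution in X (||w*||_Sigma = 1)
w = rng.standard_normal(d)

u = Z @ w
lhs = ((u[None, :] - u[:, None]) ** 2).mean()  # (1/n^2) sum_{i,i'} (<w, z_{i'}> - <w, z_i>)^2
r = w - (w_star @ Sigma @ w) * w_star          # w - <w*, w>_Sigma * w*
rhs = 2 * r @ Sigma @ r                        # 2 ||w - <w*, w>_Sigma w*||_Sigma^2
print(lhs, rhs)                                # the two values agree up to floating point error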
Even with the metric equivalence results, Lemma 4.2 alone is still not sufficient to demonstrate the convergence to a uniform margin: if the gradient consistently has an even larger component along a different direction, then the predictor, after normalization, will be pushed farther away from the uniform margin solution. Therefore, it is necessary to control the growth of \(\mathbf{w}^{(t)}\) along the other directions. To do so, we give upper bounds on \(\|\mathbf{w}^{(t)}\|_{2}\). Intuitively, the change of \(\mathbf{w}\) in the radial direction does not change the objective function value, as \(f(\mathbf{w},\gamma,\mathbf{x})\) is \(0\)-homogeneous in \(\mathbf{w}\). Therefore, the training loss gradient is orthogonal to \(\mathbf{w}\), and when the learning rate is small, \(\|\mathbf{w}^{(t)}\|_{2}\) will hardly change during training. The following lemma is established following this intuition.
**Lemma 4.4**.: _For all \(t\geq 0\), it holds that_
\[\|{\bf w}^{(t)}\|_{2}^{2}\leq\|{\bf w}^{(t+1)}\|_{2}^{2}\leq\|{\bf w}^{(t)}\|_{2} ^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|{\bf x}_{i}\|_{2}^{3}}{ \lambda_{\min}^{2}\cdot\|{\bf w}^{(t)}\|_{2}^{2}}.\]
_Moreover, if \(\|{\bf w}^{(t)}-\langle{\bf w}^{(t)},{\bf w}^{*}\rangle\cdot\|{\bf w}^{*}\|_{2 }^{-2}\cdot{\bf w}^{*}\|_{2}\leq\|{\bf w}^{(0)}\|_{2}/2\), then_
\[\|{\bf w}^{(t+1)}\|_{2}^{2}\leq\|{\bf w}^{(t)}\|_{2}^{2}+\eta^{2}G\cdot H^{(t) }\cdot\left\|{\bf w}^{(t)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t)}\rangle}{\|{ \bf w}^{*}\|_{2}^{2}}\cdot{\bf w}^{*}\right\|_{2}^{2},\]
_where \(G=64\lambda_{\min}^{-3}\cdot\max_{i}\|{\bf x}_{i}\|_{2}^{6}\cdot\|{\bf w}^{(0 )}\|_{2}^{-4}\), and \(H^{(t)}=\max\{|\ell_{1}^{\prime(t)}|^{2},\ldots,|\ell_{n}^{\prime(t)}|^{2}, \exp(-2\gamma^{(t)})\}\cdot\max\{\gamma^{(t)2},\gamma^{(t)4}\}\)._
Lemma 4.4 demonstrates that \(\|{\bf w}^{(t)}\|_{2}\) is monotonically increasing during training, but the speed at which it increases is much slower than the speed at which \(\langle{\bf w}^{(t)},{\bf w}^{*}\rangle\) increases (when \(\eta\) is small), as is shown in Lemma 4.2. We note that Lemma 4.4 particularly gives a tighter inequality when \({\bf w}^{(t)}\) is close enough to \(\operatorname{span}\{{\bf w}^{*}\}\). This inequality shows that in the later stage of training, when the margins tend to be uniform and the loss derivatives \(|\ell_{i}^{\prime(t)}|\), \(i\in[n]\) on the training data tend to be small, \(\|{\bf w}^{(t)}\|_{2}\) increases even more slowly. The importance of this result can be seen by considering the case where all \(|\ell_{i}^{\prime(t)}|\)'s are equal (up to constant factors): in this case, combining the bounds in Lemmas 4.2, 4.3 and 4.4 gives an inequality of the form
\[\left\|{\bf w}^{(t+1)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t+1)}\rangle}{\|{ \bf w}^{*}\|_{2}^{2}}\cdot{\bf w}^{*}\right\|_{2}^{2}\leq(1-A^{(t)})\cdot \left\|{\bf w}^{(t)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t)}\rangle}{\|{\bf w}^ {*}\|_{2}^{2}}\cdot{\bf w}^{*}\right\|_{2}^{2} \tag{4.4}\]
for some \(A^{(t)}>0\) that depends on \(|\ell_{i}^{\prime(t)}|\), \(i\in[n]\). (4.4) is clearly the key to showing the monotonicity and convergence of \(\|{\bf w}-\|{\bf w}^{*}\|_{2}^{-2}\cdot\langle{\bf w}^{*},{\bf w}\rangle\cdot{\bf w}^{*}\|_{2}\), which eventually leads to a last-iterate bound on the margin discrepancy according to Lemma 4.3. However, the rigorous version of the inequality above needs to be proved within an induction, which we explain in the next subsection.
### Final Convergence Guarantee With Sharp Convergence Rate
Lemmas 4.2, 4.3 and 4.4 give the key intermediate results in our proof. However, it is still technically challenging to show the convergence and give a sharp convergence rate. We remind the readers that during training, the loss function value at each data point \(({\bf x}_{i},y_{i})\) converges to zero, and so does the absolute value of the loss derivative \(|\ell_{i}^{\prime(t)}|\). Based on the bounds in Lemma 4.2 and the informal result in (4.4), we see that if \(|\ell_{i}^{\prime(t)}|\), \(i\in[n]\) vanish too fast, then the margin discrepancy may not have sufficient time to converge. Therefore, in order to show the convergence of the margin discrepancy, we need to accurately characterize the orders of \(\gamma^{(t)}\) and \(|\ell_{i}^{\prime(t)}|\).
Intuitively, characterizing the orders of \(\gamma^{(t)}\) and \(|\ell_{i}^{\prime(t)}|\) is easier when the margins are relatively uniform, because in this case \(|\ell_{i}^{\prime(t)}|\), \(i\in[n]\) are all roughly of the same order. Inspired by this, we implement a two-stage analysis, where the first stage provides a warm start for the second stage with relatively uniform margins. The following lemma presents the guarantees in the first stage.
**Lemma 4.5**.: _Let \(\epsilon=(16\max_{i}\|{\bf z}_{i}\|_{2})^{-1}\cdot\|{\bf w}^{*}\|_{2}^{-1}\cdot \min\left\{1/3,\|{\bf w}^{(0)}\|_{2}^{-1}\cdot\lambda_{\min}^{1/2}/(40\lambda _{\max}^{3/4})\right\}\). There exist constants \(c_{1},c_{2},c_{3}>0\) such that if_
\[\|{\bf w}^{(0)}\|_{2}\leq c_{1}\cdot\min\left\{1,(\max_{i}\|{\bf z}_{i}\|_{2}) ^{-1/2}\cdot\|{\bf w}^{*}\|_{2}^{-1}\right\}\cdot\lambda_{\max}^{-3/4}\cdot \lambda_{\min}^{1/2},\]
\[\eta\leq c_{2}\cdot\min\big{\{}1,\epsilon\cdot\|{\bf w}^{(0)}\|_{2}^{2}\cdot \lambda_{\min}\cdot(\max_{i}\|{\bf x}_{i}\|_{2})^{-3/2},\epsilon^{4}\cdot\lambda_ {\min}^{3}\cdot\lambda_{\max}^{-3/2}\cdot\|{\bf w}^{*}\|_{2}^{-1}\cdot(\max_{i} \|{\bf x}_{i}\|_{2})^{-3}\big{\}},\]
_then there exists \(t_{0}\leq c_{3}\eta^{-1}\epsilon^{-2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{ \min}^{-1}\cdot\|{\bf w}^{(0)}\|_{2}^{2}\cdot\|{\bf w}^{*}\|_{2}\) such that_
1. \(1/2\leq\gamma^{(t)}\leq 3/2\) _for all_ \(0\leq t\leq t_{0}\)_._
2. \(\|{\bf w}^{(0)}\|_{2}\leq\|{\bf w}^{(t)}\|_{2}\leq(1+\epsilon/2)\cdot\|{\bf w}^{(0)}\|_{2}\) _for all_ \(0\leq t\leq t_{0}\)_._
3. \(\big{\|}{\bf w}^{(t_{0})}-\langle{\bf w}^{*},{\bf w}^{(t_{0})}\rangle\cdot\| {\bf w}^{*}\|_{2}^{-2}\cdot{\bf w}^{*}\big{\|}_{2}\leq\epsilon\cdot\|{\bf w}^ {(0)}\|_{2}\)_._
In Lemma 4.5, we set up a target rate \(\epsilon\) that only depends on the training data and the initialization \({\bf w}^{(0)}\), so that a \({\bf w}^{(t_{0})}\) satisfying the three conclusions of Lemma 4.5 can serve as a good enough warm start for our analysis of the second stage, which starts from iteration \(t_{0}\).
The study of the convergence of margin discrepancy for \(t\geq t_{0}\) is the most technical part of our proof. The results are summarized in the following lemma.
**Lemma 4.6**.: _Let \(G\) be defined as in Lemma 4.4, and \(\epsilon\) be defined as in Lemma 4.5. Suppose that there exists \(t_{0}\in\mathbb{N}_{+}\) such that \(1/2\leq\gamma^{(t_{0})}\leq 3/2\), and_
\[\|{\bf w}^{(0)}\|_{2}\leq\|{\bf w}^{(t_{0})}\|_{2}\leq(1+\epsilon/2)\cdot\|{\bf w}^{(0)}\|_{2},\ \big{\|}{\bf w}^{(t_{0})}-\langle{\bf w}^{*},{\bf w}^{(t_{0})}\rangle\cdot\|{\bf w}^{*}\|_{2}^{-2}\cdot{\bf w}^{*}\big{\|}_{2}\leq\epsilon\cdot\|{\bf w}^{(0)}\|_{2}.\]
_Then there exist constants \(c_{1},c_{2},c_{3},c_{4}\), such that as long as \(\eta\leq c_{1}G^{-1}\min\{\epsilon^{2},\lambda_{\min}\cdot\lambda_{\max}^{3/2 }\cdot\|{\bf w}^{(0)}\|_{2}^{-2}\}\), the following results hold for all \(t\geq t_{0}\):_
1. \(\big{\|}{\bf w}^{(t_{0})}-\langle{\bf w}^{*},{\bf w}^{(t_{0})}\rangle\cdot\|{ \bf w}^{*}\|_{2}^{-2}\cdot{\bf w}^{*}\big{\|}_{2}\geq\cdots\geq\big{\|}{\bf w}^ {(t)}-\langle{\bf w}^{*},{\bf w}^{(t)}\rangle\cdot\|{\bf w}^{*}\|_{2}^{-2} \cdot{\bf w}^{*}\big{\|}_{2}\)_._
2. \(\|{\bf w}^{(0)}\|_{2}\leq\|{\bf w}^{(t)}\|_{2}\leq(1+\epsilon)\cdot\|{\bf w}^ {(0)}\|_{2}\)_._
3. \(\log[(\eta/8)\cdot(t-t_{0})+\exp(\gamma^{(t_{0})})]\leq\gamma^{(t)}\leq\log[ 8\eta\cdot(t-t_{0})+2\exp(\gamma^{(t_{0})})]\)_._
4. \(\Big{\|}{\bf w}^{(t)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t)}\rangle}{\|{\bf w} ^{*}\|_{2}^{2}}\cdot{\bf w}^{*}\Big{\|}_{2}^{2}\leq\epsilon\cdot\|{\bf w}^{(0 )}\|_{2}\cdot\exp\Big{[}-\frac{c_{2}\lambda_{\min}\cdot\log^{2}((8/9)\eta\cdot( t-t_{0})+1)}{\lambda_{\max}^{3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\Big{]}\)_._
5. \(\max_{i}|\langle{\bf w}^{(t)}/\|{\bf w}^{(t)}\|_{2},{\bf z}_{i}\rangle-\|{\bf w }^{*}\|_{2}^{-1}|\cdot\gamma^{(t)}\leq\|{\bf w}^{*}\|_{2}^{-1}/4\)_._
6. _It holds that_
\[\frac{1}{40}\cdot\frac{1}{\eta\cdot(t-t_{0})+1}\leq\ell(y_{i}\cdot f({\bf w}^{(t)},\gamma^{(t)},{\bf x}_{i}))\leq\frac{12}{\eta\cdot(t-t_{0})+1},\] \[D({\bf w}^{(t)})\leq\frac{c_{3}\lambda_{\max}}{\lambda_{\min}}\cdot\exp\Bigg{[}-\frac{c_{4}\lambda_{\min}}{\lambda_{\max}^{3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}.\]
A few remarks about Lemma 4.6 are in order: The first and the second results guarantee that the properties of the warm start \({\bf w}^{(t_{0})}\) given in Lemma 4.5 are preserved during training. The third result on \(\gamma^{(t)}\) controls the values of \(|\ell^{\prime}_{i}|\), \(i\in[n]\) given uniform enough margins, and also implies the convergence rate of the training loss function. The fourth result gives the convergence rate of \(\big{\|}{\bf w}^{(t)}-\langle{\bf w}^{*},{\bf w}^{(t)}\rangle\cdot\|{\bf w}^{* }\|_{2}^{-2}\cdot{\bf w}^{*}\big{\|}_{2}\), which is an equivalent metric of the margin discrepancy in \(\ell_{2}\) distance. The fifth result essentially follows by the convergence rates of \(\gamma^{(t)}\) and \(\big{\|}{\bf w}^{(t)}-\langle{\bf w}^{*},{\bf w}^{(t)}\rangle\cdot\|{\bf w}^{* }\|_{2}^{-2}\cdot{\bf w}^{*}\big{\|}_{2}\), and it further implies that \(|\ell^{\prime}_{i}|\), \(i\in[n]\) only differ by constant factors. Finally, the sixth result reformulates the previous results and gives the conclusions of Theorem 2.2.
In our proof of Lemma 4.6, we first show the first five results based on an induction, where each of these results rely on the margin uniformity in the previous iterations. The last result is then proved based on the first five results. We also regard this proof as a key technical contribution of our work. Combining Lemmas 4.5 and 4.6 leads to Theorem 2.2, and the proof is thus complete.
## 5 Conclusion and Future Work
In this work we theoretically demonstrate that with batch normalization, gradient descent converges to a uniform margin solution when training linear models, and converges to a patch-wise uniform margin solution when training two-layer, single-filter linear CNNs. These results give a precise characterization of the implicit bias of batch normalization.
An important future work direction is to study the implicit bias of batch normalization in more complicated neural network models with multiple filters, multiple layers, and nonlinear activation functions. Our analysis may also find other applications in studying the implicit bias of other normalization techniques such as layer normalization.
## Appendix A Additional Related Work
Implicit Bias. In recent years, a large number of works have emerged studying the implicit bias of various optimization algorithms for different models. We only review those that are most closely related to this paper.
Theoretical analysis of implicit bias originated with linear models. For linear regression, Gunasekar et al. (2018) showed that, starting from the origin, gradient descent converges to the minimum \(\ell_{2}\) norm solution. For linear classification problems, gradient descent is shown to converge to the maximum margin solution on separable data (Soudry et al., 2018; Nacson et al., 2019; Ji and Telgarsky, 2019). Similar results have also been obtained for other optimization algorithms such as mirror descent (Gunasekar et al., 2018) and stochastic gradient descent (Nacson et al., 2019). The implicit bias has also been widely studied beyond linear models, such as in matrix factorization (Gunasekar et al., 2017; Li et al., 2021; Arora et al., 2019), linear networks (Li et al., 2018; Ji and Telgarsky, 2018; Gunasekar et al., 2018; Pesme et al., 2021), and more complicated nonlinear networks (Chizat and Bach, 2020), showing that (stochastic) gradient descent may converge to different types of solutions. However, none of these analyses can be directly adapted to our setting.
Theory for Normalization Methods. Many normalization methods, including batch normalization, weight normalization (Salimans and Kingma, 2016), layer normalization (Ba et al., 2016), and group normalization (Wu and He, 2018), have been developed recently to improve generalization performance. From a theoretical perspective, a series of works have established a close connection between normalization methods and an adaptive effective learning rate (Hoffer et al., 2018; Arora et al., 2019; Morwani and Ramaswamy, 2020). Based on the auto-learning-rate-adjustment effect of weight normalization, Wu et al. (2018) developed an adaptive learning rate scheduler and proved a near-optimal convergence rate for SGD. Moreover, Wu et al. (2020) studied the implicit regularization effect of weight normalization for linear regression, showing that gradient descent can always converge close to the minimum \(\ell_{2}\) norm solution, even from initializations that are far from zero. Dukler et al. (2020) further investigated the convergence of gradient descent for training
a two-layer ReLU network with weight normalization. For multi-layer networks, Yang et al. (2019) developed a mean-field theory for batch normalization, showing that the gradient explosion brought by large network depth cannot be addressed by BN if there is no residual connection. These works either concern different normalization methods or investigate the behavior of BN in different aspects, and can thus be seen as orthogonal to our results.
## Appendix B Experiments
Here we present some preliminary simulation and real-data experimental results to demonstrate that batch normalization encourages a uniform margin. The results are given in Figure 1 and Figure 2, respectively.
Figure 1: The values of \(y_{i}\cdot\langle\mathbf{w}^{(t)},\mathbf{x}_{i}\rangle\), \(i=1,\ldots,n\) during the training of linear models with and without batch normalization. Each curve in the figures corresponds to a specific \(i\in\{1,\ldots,n\}\) and illustrates the dynamics of a specific inner product \(y_{i}\cdot\langle\mathbf{w}^{(t)},\mathbf{x}_{i}\rangle\). (a) gives the result for the linear model with batch normalization; (b) gives the result for the linear model without batch normalization. For both settings, we set the sample size \(n=50\) and dimension \(d=1000\). For \(i=1,\ldots,50\), we generate the data inputs \(\mathbf{x}_{i}\) independently as standard Gaussian random vectors, and set \(y_{i}\) randomly as \(+1\) or \(-1\) with equal probability.
In Figure 1, we train linear models with/without batch normalization, and plot the values of \(y_{i}\cdot\langle\mathbf{w}^{(t)},\mathbf{x}_{i}\rangle\), \(i=1,\ldots,n\) in Figure 1. From Figure 1, we can conclude that:
1. The obtained linear model with batch normalization indeed achieves a uniform margin.
2. Batch normalization plays a key role in determining the implicit regularization effect. Without batch normalization, the obtained linear model does not achieve a uniform margin.
Clearly, our simulation results match our theory well.
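For completeness, a minimal PyTorch sketch of the Figure 1(a) setting is given below; the learning rate and number of steps are our own choices, as they are not specified here:

```python
import torch

torch.manual_seed(0)
n, d = 50, 1000
X = torch.randn(n, d)                      # standard Gaussian inputs
y = torch.where(torch.rand(n) < 0.5, 1.0, -1.0)

w = (0.01 * torch.randn(d)).requires_grad_()
gamma = torch.ones(1, requires_grad=True)  # gamma^(0) = 1, as in the theory
opt = torch.optim.SGD([w, gamma], lr=0.1)

for step in range(5000):
    w_sigma = torch.sqrt((X @ w).pow(2).mean())    # ||w||_Sigma
    margins = gamma * y * (X @ w) / w_sigma        # y_i * f(w, gamma, x_i)
    loss = torch.log1p(torch.exp(-margins)).mean() # cross-entropy (logistic) loss
    opt.zero_grad(); loss.backward(); opt.step()

# With batch normalization the n values y_i * <w, x_i> become (nearly) equal.
print((y * (X @ w)).detach())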
In Figure 2, we present experiment results for VGG-16 with/without batch normalization trained by stochastic gradient descent to classify cat and dog images in the CIFAR-10 data set. We focus on the last hidden layer of VGG-16, and for each neuron on this layer, we estimate the margin uniformity over (i) all activated data from class cat; (ii) all activated data from class dog; (iii) all activated data from both classes. The final result is then calculated by taking an average over all neurons. Note that in these experiments, the neural network model, the data, and the training algorithm all deviate from the exact setting of Theorem 3.2. Nevertheless, the experiment results still corroborate our theory to a certain extent.
## Appendix C Proofs for Batch Normalization in Linear Models
In this section we present the proofs of the lemmas in Section 4. Combining these proofs with the discussion given in Section 4 would give the complete proof of Theorem 2.2.
### Proof of Lemma 4.1
Proof of Lemma 4.1.: By definition, we have
\[\nabla_{\mathbf{w}}L(\mathbf{w},\gamma)=\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime }[y_{i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\cdot y_{i}\cdot\nabla_{ \mathbf{w}}f(\mathbf{w},\gamma,\mathbf{x}_{i}).\]
Then by the definition of the linear predictor with batch normalization, we have the following calculation using the chain rule:
\[\nabla_{\mathbf{w}}L(\mathbf{w},\gamma) =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-1}\cdot\frac{1}{n}\sum_{ i=1}^{n}\ell^{\prime}[y_{i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\cdot y_{i} \cdot\gamma\cdot\Big{(}\mathbf{I}-\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-2} \cdot\boldsymbol{\Sigma}\mathbf{w}\mathbf{w}^{\top}\Big{)}\mathbf{x}_{i}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n} \sum_{i=1}^{n}\ell^{\prime}_{i}\cdot y_{i}\cdot\Big{(}\|\mathbf{w}\|_{ \boldsymbol{\Sigma}}^{2}\cdot\mathbf{I}-\boldsymbol{\Sigma}\mathbf{w}\mathbf{w }^{\top}\Big{)}\mathbf{x}_{i}\]
\[=\|\mathbf{w}\|_{\mathbf{\Sigma}}^{-3}\cdot\frac{\gamma}{n}\sum_{i=1}^{n} \ell_{i}^{\prime}\cdot\Bigg{(}\frac{1}{n}\sum_{i^{\prime}=1}^{n}\langle\mathbf{ w},\mathbf{z}_{i^{\prime}}\rangle^{2}\cdot\mathbf{I}-\frac{1}{n}\sum_{i^{\prime}=1}^{n} \langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle\cdot\mathbf{z}_{i^{\prime}} \mathbf{w}^{\top}\Bigg{)}\mathbf{z}_{i}\] \[=\|\mathbf{w}\|_{\mathbf{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{ i=1}^{n}\sum_{i^{\prime}=1}^{n}\ell_{i}^{\prime}\cdot\langle\mathbf{w}, \mathbf{z}_{i^{\prime}}\rangle^{2}\cdot\mathbf{z}_{i}-\|\mathbf{w}\|_{\mathbf{ \Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{n} \ell_{i}^{\prime}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle\cdot \langle\mathbf{w},\mathbf{z}_{i}\rangle\cdot\mathbf{z}_{i^{\prime}},\]
where we remind readers that \(\mathbf{z}_{i}=y_{i}\cdot\mathbf{x}_{i}\), \(i\in[n]\). By Assumption 2.1, taking inner product with \(\mathbf{w}^{*}\) on both sides above then gives
\[\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle=\|\mathbf{ w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n}\sum_{i^{ \prime}=1}^{n}\ell_{i}^{\prime}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}} \rangle^{2}-\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} }\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{n}\ell_{i}^{\prime}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w},\mathbf{z}_{i}\rangle.\]
Further denote \(u_{i}=\langle\mathbf{w},\mathbf{z}_{i}\rangle\) for \(i\in[n]\). Then we have
\[\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle=\| \mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n }\sum_{i^{\prime}=1}^{n}\ell_{i}^{\prime}\cdot u_{i^{\prime}}^{2}-\|\mathbf{ w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n}\sum_{i^{ \prime}=1}^{n}\ell_{i}^{\prime}\cdot u_{i^{\prime}}u_{i}.\] (C.1)
Switching the index notations \(i,i^{\prime}\) in the above equation also gives
\[\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle=\| \mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n }\sum_{i^{\prime}=1}^{n}\ell_{i^{\prime}}^{\prime}\cdot u_{i}^{2}-\|\mathbf{ w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}}\sum_{i=1}^{n}\sum_{i^{ \prime}=1}^{n}\ell_{i^{\prime}}^{\prime}\cdot u_{i^{\prime}}u_{i}.\] (C.2)
We can add (C.1) and (C.2) together to obtain
\[2\cdot\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^ {*}\rangle =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} }\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{n}(\ell_{i}^{\prime}\cdot u_{i^{\prime}}^ {2}-\ell_{i}^{\prime}\cdot u_{i^{\prime}}u_{i}+\ell_{i^{\prime}}^{\prime} \cdot u_{i}^{2}-\ell_{i^{\prime}}^{\prime}\cdot u_{i^{\prime}}u_{i})\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} }\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{n}(u_{i^{\prime}}-u_{i})(\ell_{i}^{\prime} \cdot u_{i^{\prime}}-\ell_{i^{\prime}}^{\prime}\cdot u_{i}).\]
Note that by definition we have \(\ell_{i}^{\prime}<0\). Therefore,
\[-\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle=\| \mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{2n^{2}}\sum_{i=1}^{ n}\sum_{i^{\prime}=1}^{n}|\ell_{i}^{\prime}|\cdot|\ell_{i^{\prime}}^{\prime}| \cdot(u_{i^{\prime}}-u_{i})(|\ell_{i^{\prime}}^{\prime}|^{-1}\cdot u_{i^{ \prime}}-|\ell_{i}^{\prime}|^{-1}\cdot u_{i}).\]
This completes the proof.
### Proof of Lemma 4.2
Proof of Lemma 4.2.: By Lemma 4.1 and the gradient descent update rule, we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle=\eta\cdot\langle-\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)}),\mathbf{w}^{*}\rangle\] \[=\frac{\gamma^{(t)}\eta}{2n^{2}\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\sum_{i,i^{\prime}=1}^{n}|\ell_{i}^{\prime(t)}|\cdot|\ell_{i^{\prime}}^{\prime(t)}|\cdot(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\cdot(|\ell_{i^{\prime}}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-|\ell_{i}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\] (C.3)
for all \(t\geq 0\). Note that \(\ell(z)=\log[1+\exp(-z)]\), \(\ell^{\prime}(z)=-1/[1+\exp(z)]\) and
\[|\ell_{i}^{\prime(t)}|^{-1}=-\{\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)}, \gamma^{(t)},\mathbf{x}_{i})]\}^{-1}=1+\exp(\gamma^{(t)}\cdot\langle\mathbf{w} ^{(t)},\mathbf{z}_{i}\rangle/\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}).\]
Denote \(F_{i}=\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle/\|\mathbf{w}^ {(t)}\|_{\mathbf{\Sigma}}\) for \(i\in[n]\). Then we have
\[\frac{(|\ell_{i^{\prime}}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-|\ell_{i}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)}{\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle} =\frac{(|\ell_{i^{\prime}}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-|\ell_{i}^{\prime(t)}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\cdot\gamma^{(t)}/\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}{(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\cdot\gamma^{(t)}/\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\] \[=\frac{F_{i^{\prime}}\cdot[1+\exp(F_{i^{\prime}})]-F_{i}\cdot[1+\exp(F_{i})]}{F_{i^{\prime}}-F_{i}}\] \[=1+\frac{F_{i^{\prime}}\cdot\exp(F_{i^{\prime}})-F_{i}\cdot\exp(F_{i})}{F_{i^{\prime}}-F_{i}}\] \[\geq 1+\exp(\max\{F_{i^{\prime}},F_{i}\})\] \[=\max\{|\ell_{i}^{\prime(t)}|^{-1},|\ell_{i^{\prime}}^{\prime(t)}|^{-1}\},\] (C.4)
where the inequality follows by the fact that \([a\exp(a)-b\exp(b)]/(a-b)\geq\exp(\max\{a,b\})\) for all \(a\neq b\). Plugging (C.4) into (C.3) gives
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w} ^{(t)},\mathbf{w}^{*}\rangle\] \[\qquad\qquad\geq\frac{\gamma^{(t)}\eta}{2n^{2}\|\mathbf{w}^{(t)} \|_{\mathbf{\Sigma}}^{3}}\sum_{i,i^{\prime}=1}^{n}|\ell_{i}^{\prime(t)}|\cdot| \ell_{i^{\prime}}^{\prime(t)}|\cdot(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{ \prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2}\cdot\max\{| \ell_{i}^{\prime(t)}|^{-1},|\ell_{i^{\prime}}^{\prime(t)}|^{-1}\}\] \[\qquad\qquad=\frac{\gamma^{(t)}\eta}{2n^{2}\|\mathbf{w}^{(t)}\|_{ \mathbf{\Sigma}}^{3}}\sum_{i,i^{\prime}=1}^{n}\max\{|\ell_{i}^{\prime(t)}|,| \ell_{i^{\prime}}^{\prime(t)}|\}\cdot(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{ \prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2}.\] (C.5)
Now by Lemma C.1, we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\] \[\qquad\qquad\geq\frac{\gamma^{(t)}\eta}{8n^{2}\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\left(\frac{1}{n}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}|\right)\cdot\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2}.\] (C.6)
Moreover, by definition we have
\[\frac{1}{n}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}| =\frac{1}{n}\sum_{i=1}^{n}\Bigg{[}1+\exp\left(\frac{\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\right)\Bigg{]}^{-1}\] \[\geq\frac{1}{n}\sum_{i=1}^{n}\Bigg{[}1+\exp\left(\frac{\gamma^{(t)}\cdot|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\right)\Bigg{]}^{-1},\]
where the inequality follows by the fact that \([1+\exp(z)]^{-1}\) is a decreasing function. Further note that \([1+\exp(z)]^{-1}\) is convex over \(z\geq 0\). Therefore by Jensen's inequality, we have
\[\frac{1}{n}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}| \geq\frac{1}{n}\sum_{i=1}^{n}\Bigg{[}1+\exp\left(\frac{\gamma^{(t)}\cdot|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\right)\Bigg{]}^{-1}\] \[\geq\Bigg{[}1+\exp\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\gamma^{(t)}\cdot|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\right)\Bigg{]}^{-1}\]
\[\geq[1+\exp(\gamma^{(t)})]^{-1}\] \[\geq\exp(-\gamma^{(t)})/2.\] (C.7)
Plugging (C.7) into (C.6) then gives
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^ {(t)},\mathbf{w}^{*}\rangle \geq\frac{\gamma^{(t)}\eta}{16n^{2}\|\mathbf{w}^{(t)}\|_{\Sigma}^ {3}}\cdot\exp(-\gamma^{(t)})\cdot\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^ {(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i} \rangle)^{2}\] \[=\frac{\gamma^{(t)}\eta}{16\|\mathbf{w}^{(t)}\|_{\Sigma}^{3}} \cdot\exp(-\gamma^{(t)})\cdot D(\mathbf{w}^{(t)})\] (C.8)
Moreover, we can simply utilize (C.5) to obtain
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle \geq\frac{\gamma^{(t)}\eta}{2n^{2}\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\min_{i}|\ell_{i}^{\prime(t)}|\cdot\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2}\] (C.9) \[=\frac{\gamma^{(t)}\eta}{2\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\min_{i}|\ell_{i}^{\prime(t)}|\cdot D(\mathbf{w}^{(t)})\] (C.10)
Combining (C.8) and (C.10) finishes the proof.
### Proof of Lemma 4.3
Proof of Lemma 4.3.: We note that the following identity holds:
\[\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}, \mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w},\mathbf{z}_{i}\rangle)^{2}\] \[=\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}, \mathbf{z}_{i^{\prime}}\rangle^{2}-2\cdot\langle\mathbf{w},\mathbf{z}_{i^{ \prime}}\rangle\cdot\langle\mathbf{w},\mathbf{z}_{i}\rangle+\langle\mathbf{w},\mathbf{z}_{i}\rangle^{2})\] \[=\frac{2}{n}\sum_{i}^{n}\langle\mathbf{w},\mathbf{z}_{i}\rangle^{2 }-2\cdot\left(\frac{1}{n}\sum_{i}^{n}\langle\mathbf{w},\mathbf{z}_{i}\rangle \right)^{2}\] \[=\frac{2}{n}\sum_{i}^{n}\langle\mathbf{w},\mathbf{z}_{i}\rangle^{2 }-4\cdot\left(\frac{1}{n}\sum_{i}^{n}\langle\mathbf{w},\mathbf{z}_{i}\rangle \right)^{2}+2\cdot\left(\frac{1}{n}\sum_{i}^{n}\langle\mathbf{w},\mathbf{z}_{ i}\rangle\right)^{2}\] \[=\frac{2}{n}\sum_{i}^{n}\left(\langle\mathbf{w},\mathbf{z}_{i} \rangle-\frac{1}{n}\sum_{i^{\prime}}^{n}\langle\mathbf{w},\mathbf{z}_{i^{ \prime}}\rangle\right)^{2}\] \[=\frac{2}{n}\Big{\|}\mathbf{w}^{\top}\mathbf{Z}\Big{(}\mathbf{I} -\frac{1}{n}\mathbf{1}\mathbf{1}^{\top}\Big{)}\Big{\|}_{2}^{2}\] \[=\frac{2}{n}\mathbf{w}^{\top}\mathbf{Z}\Big{(}\mathbf{I}-\frac{1 }{n}\mathbf{1}\mathbf{1}^{\top}\Big{)}\mathbf{Z}^{\top}\mathbf{w}.\]
It is easy to see that the null space of \(\mathbf{I}-\mathbf{1}\mathbf{1}^{\top}/n\) is \(\mathrm{span}\{\mathbf{1}\}\), and the non-zero eigenvalues of \(\mathbf{I}-\mathbf{1}\mathbf{1}^{\top}/n\) are all \(1\)'s. Moreover, we note that the projection of the vector \(\mathbf{Z}^{\top}\mathbf{w}\) onto the space \(\mathrm{span}\{\mathbf{1}\}^{\perp}\) is
\[\mathbf{Z}^{\top}\mathbf{w}-\mathbf{1}\mathbf{1}^{\top}\mathbf{Z}^{\top}\mathbf{w}/n=\mathbf{Z}^{\top}\mathbf{w}-\mathbf{Z}^{\top}\mathbf{w}^{*}{\mathbf{w}^{*}}^{\top}\mathbf{Z}\mathbf{Z}^{\top}\mathbf{w}/n=\mathbf{Z}^{\top}\mathbf{w}-\mathbf{Z}^{\top}\mathbf{w}^{*}\cdot\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\boldsymbol{\Sigma}},\]
where we utilize the property that \(\mathbf{Z}^{\top}\mathbf{w}^{*}=\mathbf{1}\) to obtain the first equality. Therefore, we have
\[\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}, \mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w},\mathbf{z}_{i}\rangle)^{2} =\frac{2}{n}(\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_ {\mathbf{\Sigma}}\cdot\mathbf{w}^{*})^{\top}\mathbf{Z}\mathbf{Z}^{\top}( \mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot \mathbf{w}^{*})\] \[=2\cdot\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{ \mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}.\]
This finishes the proof of the first result. For the second result, we first note that by definition, \(\|\mathbf{w}^{*}\|_{\mathbf{\Sigma}}=1\), and therefore for any \(\mathbf{w}\in\mathbb{R}^{d}\), \(\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\) is the projection of \(\mathbf{w}\) onto \(\mathrm{span}\{\mathbf{w}^{*}\}\) under the inner product \(\langle\cdot,\cdot\rangle_{\mathbf{\Sigma}}\). Therefore we have
\[\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot \mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\leq\|\mathbf{w}-c\cdot\mathbf{w}^{*}\| _{\mathbf{\Sigma}}^{2}\]
for all \(c\in\mathbb{R}\), and hence
\[\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot \mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\leq\left\|\mathbf{w}-\frac{\langle \mathbf{w}^{*},\mathbf{w}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{ *}\right\|_{\mathbf{\Sigma}}^{2}\leq\lambda_{\max}\cdot\left\|\mathbf{w}-\frac {\langle\mathbf{w}^{*},\mathbf{w}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot \mathbf{w}^{*}\right\|_{2}^{2}.\]
Similarly, we note that \(\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\langle\mathbf{w}^{*},\mathbf{w}\rangle\cdot \mathbf{w}^{*}\) is the projection of \(\mathbf{w}\) onto \(\mathrm{span}\{\mathbf{w}^{*}\}\) under the Euclidean inner product \(\langle\cdot,\cdot\rangle\). Therefore we have
\[\lambda_{\min}\cdot\left\|\mathbf{w}-\frac{\langle\mathbf{w}^{*},\mathbf{w} \rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\leq \lambda_{\min}\cdot\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{ \mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{2}^{2}\leq\|\mathbf{w}-\langle\mathbf{ w}^{*},\mathbf{w}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}.\]
This completes the proof.
### Proof of Lemma 4.4
We first present the following technical lemma.
**Lemma C.1**.: _Let \(\{a_{i}\}_{i=1,\ldots,n}\) and \(\{b_{i}\}_{i=1,\ldots,n}\) be two sequences that satisfy_
\[a_{1}\leq a_{2}\leq\cdots\leq a_{n};\quad b_{1}\geq b_{2}\geq\cdots\geq b_{n} \geq 0.\]
_Then it holds that_
\[\sum_{i,i^{\prime}=1}^{n}\max\{b_{i},b_{i^{\prime}}\}\cdot(a_{i}-a_{i^{\prime} })^{2}\geq\frac{\sum_{i=1}^{n}b_{i}}{4n}\cdot\sum_{i,i^{\prime}=1}^{n}(a_{i}-a_ {i^{\prime}})^{2}.\]
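Before it is used, the inequality in Lemma C.1 can be spot-checked on random instances; a sketch (the test sizes are arbitrary choices of ours, and this is of course not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 10))
    a = np.sort(rng.standard_normal(n))   # a_1 <= ... <= a_n
    b = -np.sort(-rng.uniform(size=n))    # b_1 >= ... >= b_n >= 0
    diff2 = (a[:, None] - a[None, :]) ** 2
    lhs = (np.maximum(b[:, None], b[None, :]) * diff2).sum()
    rhs = b.sum() / (4 * n) * diff2.sum()
    assert lhs >= rhs - 1e-12
print("Lemma C.1 held on all random instances")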
Based on Lemma C.1, the proof of Lemma 4.4 is as follows.
Proof of Lemma 4.4.: By the gradient descent update rule, we have
\[\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\eta\cdot\nabla_{\mathbf{w}}L(\mathbf{w}^{ (t)},\gamma^{(t)}).\] (C.11)
Note that we have the calculation
\[\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})=\frac{1}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i}\cdot\gamma^{(t)}\cdot\bigg{(}\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}.\]
It is easy to see that \(\mathbf{w}^{(t)}\) is orthogonal to \(\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\). Therefore, taking \(\|\cdot\|_{2}^{2}\) on both sides of (C.11) gives
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2}=\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}\cdot\| \nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\|_{2}^{2}.\] (C.12)
Therefore we directly conclude that \(\|\mathbf{w}^{(t)}\|_{2}^{2}\leq\|\mathbf{w}^{(t+1)}\|_{2}^{2}\). Besides, plugging in the calculation of \(\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\) also gives
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2} =\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}\cdot\left\|\frac{1}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i}\cdot\gamma^{(t)}\cdot\bigg{(}\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}\right\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot\sum_{i=1}^{n}\bigg{(}1+\left\|\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\right\|_{2}\bigg{)}^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot\bigg{(}1+\frac{\|\boldsymbol{\Sigma}\mathbf{w}^{(t)}\|_{2}\cdot\|\mathbf{w}^{(t)}\|_{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot\bigg{(}\frac{\|\boldsymbol{\Sigma}\mathbf{w}^{(t)}\|_{2}\cdot\|\mathbf{w}^{(t)}\|_{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{6}}\cdot\|\boldsymbol{\Sigma}\mathbf{w}^{(t)}\|_{2}^{2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2},\]
where the first inequality follows by Jensen's inequality, and the last inequality follows by the fact that \(\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}=\langle\mathbf{w},\boldsymbol{\Sigma} \mathbf{w}\rangle\leq\|\boldsymbol{\Sigma}\mathbf{w}\|_{2}\cdot\|\mathbf{w}\|_ {2}\). Further plugging in the definition of \(\boldsymbol{\Sigma}\) gives
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2} \leq\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{6}}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}\cdot\left\|\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\mathbf{w}^{(t)}\right\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{6}}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}\cdot\left(\frac{1}{n}\sum_{i=1}^{n}|\langle\mathbf{x}_{i},\mathbf{w}^{(t)}\rangle|\right)^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{6}}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}\cdot\frac{1}{n}\sum_{i=1}^{n}\langle\mathbf{x}_{i},\mathbf{w}^{(t)}\rangle^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{4}}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{4}}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}}.\]
This finishes the proof of the first result.
To prove the second result in the lemma, we first denote \(\widehat{\mathbf{w}}_{t}^{*}=\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\cdot \|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\), and define
\[\mathcal{B}^{(t)}:=\{\mathbf{w}\in\mathbb{R}^{d}:\|\mathbf{w}- \mathbf{w}^{(t)}\|_{2}\leq\|\mathbf{w}^{(t)}\|_{2}/2\}.\]
Then the condition \(\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\|_{2}\leq\|\mathbf{w}^{(0)}\|_{2}/2\) and the result in the first part that \(\|\mathbf{w}^{(t)}\|_{2}^{2}\leq\|\mathbf{w}^{(t+1)}\|_{2}^{2}\) for all \(t\geq 0\) imply that
\[\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\leq\| \mathbf{w}^{(0)}\|_{2}/2\leq\|\mathbf{w}^{(t)}\|_{2}/2,\]
and therefore
\[\widehat{\mathbf{w}}_{t}^{*}\in\mathcal{B}^{(t)}.\] (C.13)
It is also clear that under this condition we have \(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle>0\).
We proceed to derive an upper bound on \(\|\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\|_{2}\). By the definition of \(\mathbf{w}^{*},\widehat{\mathbf{w}}_{t}^{*}\), the \(0\)-homogeneity of \(f\) in \(\mathbf{w}\), and the fact that \(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle>0\), it is easy to see that
\[y_{i}\cdot f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{ x}_{i})=y_{i}\cdot f(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\cdot\| \mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*},\gamma^{(t)},\mathbf{x}_{i})=y_{ i}\cdot f(\mathbf{w}^{*},\gamma^{(t)},\mathbf{x}_{i})=\gamma^{(t)}\]
for all \(i\in[n]\). Therefore, denoting \(\ell_{*}^{\prime(t)}=\ell^{\prime}(\gamma^{(t)})\), we have \(\ell_{*}^{\prime(t)}=\ell^{\prime}[y_{i}\cdot f(\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*},\gamma^{(t)},\mathbf{x}_{i})]=\ell^{\prime}[y_{i}\cdot f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{x}_{i})]\) for all \(i\in[n]\). Moreover, we have
\[\|\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\|_{2} \leq\left\|\frac{\gamma^{(t)}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}\ell_{*}^{\prime(t)}\cdot y_{i}\cdot\left(\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\right)\mathbf{x}_{i}\right\|_{2}\] \[\quad+\left\|\frac{\gamma^{(t)}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}(\ell_{i}^{\prime(t)}-\ell_{*}^{\prime(t)})\cdot y_{i}\cdot\left(\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\right)\mathbf{x}_{i}\right\|_{2}\] \[\leq\left\|\frac{\gamma^{(t)}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}\ell_{*}^{\prime(t)}\cdot y_{i}\cdot\left(\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\right)\mathbf{x}_{i}\right\|_{2}\] \[\quad+\frac{\gamma^{(t)}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}-\ell_{*}^{\prime(t)}|\cdot\|\mathbf{x}_{i}\|_{2}\] \[=\underbrace{\frac{\gamma^{(t)}\cdot|\ell_{*}^{\prime(t)}|}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\cdot\left\|\sum_{i=1}^{n}\left(\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\right)\mathbf{z}_{i}\right\|_{2}}_{I_{1}}\] \[\quad+\underbrace{\frac{\gamma^{(t)}}{n\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}-\ell_{*}^{\prime(t)}|\cdot\|\mathbf{x}_{i}\|_{2}}_{I_{2}}.\] (C.14)
In the following, we bound \(I_{1}\) and \(I_{2}\) separately. For \(I_{1}\), we have
\[\sum_{i=1}^{n}\bigg{(}\mathbf{I}-\frac{\mathbf{\Sigma}\mathbf{w}^{(t)} \mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}}\bigg{)}\mathbf{z}_ {i} =\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\sum_{i=1}^{n}\Big{(} \|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}\cdot\mathbf{I}-\mathbf{\Sigma}\mathbf{w}^{(t )}\mathbf{w}^{(t)\top}\Big{)}\mathbf{z}_{i}\] \[=\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\sum_{i=1}^{n}\Bigg{(} \frac{1}{n}\sum_{i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{ \prime}}\rangle^{2}\cdot\mathbf{I}-\frac{1}{n}\sum_{i^{\prime}=1}^{n}\langle \mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\mathbf{z}_{i^{\prime}} \mathbf{w}^{(t)\top}\bigg{)}\mathbf{z}_{i}\] \[=\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\frac{1}{n}\sum_{i, i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle^{2} \cdot\mathbf{z}_{i}-\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\frac{1}{n} \sum_{i,i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}} \rangle\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle\cdot\mathbf{z}_{i^{ \prime}}\] \[=\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\frac{1}{n}\sum_{i, i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}\cdot\mathbf{z}_{i^{ \prime}}-\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\frac{1}{n}\sum_{i,i^{ \prime}=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot \langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle\cdot\mathbf{z}_{i^{\prime}}\] \[=\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{-2}\cdot\frac{1}{n}\sum_{i, i^{\prime}=1}^{n}\left(\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}- \langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^ {(t)},\mathbf{z}_{i}\rangle\right)\cdot\mathbf{z}_{i^{\prime}}.\] (C.15)
Moreover, by the definition of \(\widehat{\mathbf{w}}_{t}^{*}\), it is clear that
\[\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle^{2}-\langle\widehat{ \mathbf{w}}_{t}^{*},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\widehat{ \mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle=0.\]
Therefore we have
\[\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}-\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle\] \[\quad=\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}-\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle-\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle^{2}+\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle\] \[\quad=(\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}-\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle^{2})-(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle-\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle)\] \[\quad=(\langle\mathbf{w}^{(t)}+\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle)\cdot(\langle\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle)-\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle-\langle\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\widehat{\mathbf{w}}_{t}^{*},\mathbf{z}_{i}\rangle.\]
Taking absolute value on both sides and applying triangle inequality gives
\[|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle^{2}-\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle|\] \[\leq(\|\mathbf{w}^{(t)}\|_{2}+\|\widehat{\mathbf{w}}_{t}^{*}\|_{2})\cdot\|\mathbf{z}_{i}\|_{2}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\cdot\|\mathbf{z}_{i}\|_{2}+\|\mathbf{w}^{(t)}\|_{2}\cdot\|\mathbf{z}_{i^{\prime}}\|_{2}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\cdot\|\mathbf{z}_{i}\|_{2}\] \[\quad+\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\cdot\|\mathbf{z}_{i^{\prime}}\|_{2}\cdot\|\widehat{\mathbf{w}}_{t}^{*}\|_{2}\cdot\|\mathbf{z}_{i}\|_{2}\] \[\leq 2(\|\mathbf{w}^{(t)}\|_{2}+\|\widehat{\mathbf{w}}_{t}^{*}\|_{2})\cdot\max_{i}\|\mathbf{z}_{i}\|_{2}^{2}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\] \[\leq 4\|\mathbf{w}^{(t)}\|_{2}\cdot\max_{i}\|\mathbf{z}_{i}\|_{2}^{2}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\] \[=4\|\mathbf{w}^{(t)}\|_{2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}.\] (C.16)
Plugging (C.16) into (C.15) gives
\[I_{1} =\frac{\gamma^{(t)}\cdot|\ell_{*}^{\prime(t)}|}{n\cdot\|\mathbf{w}^{ (t)}\|_{\mathbf{\Sigma}}}\cdot\Bigg{\|}\sum_{i=1}^{n}\bigg{(}\mathbf{I}-\frac{\mathbf{ \Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\mathbf{ \Sigma}}^{2}}\bigg{)}\mathbf{z}_{i}\Bigg{\|}_{2}\] \[\leq\frac{4\gamma^{(t)}\cdot|\ell_{*}^{\prime(t)}|}{\|\mathbf{w}^{ (t)}\|_{\mathbf{\Sigma}}}\cdot\frac{\|\mathbf{w}^{(t)}\|_{2}}{\|\mathbf{w}^{(t)}\|_{ \mathbf{\Sigma}}^{2}}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}\cdot\|\mathbf{w}^{(t)}- \widehat{\mathbf{w}}_{t}^{*}\|_{2}\]
\[\leq\frac{4\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{\|\mathbf{w}^{(t)}\|_ {\boldsymbol{\Sigma}}}\cdot\frac{\|\mathbf{w}^{(t)}\|_{2}}{\|\mathbf{w}^{(t)}\| _{\boldsymbol{\Sigma}}^{2}}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}\cdot\| \mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\] \[\leq\frac{4\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{\lambda_{\min}^ {3/2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3 }\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\] \[\leq\frac{4\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{\lambda_{\min}^ {3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3 }\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}\] (C.17)
where the second inequality follows by the fact that \(-\exp(-\gamma^{(t)})\leq\ell_{*}^{\prime(t)}=\ell^{\prime}(\gamma^{(t)})<0\). Regarding the bound for \(I_{2}\), by the mean value theorem, there exists \(z\) between \(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})\) and \(y_{i}\cdot f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{x}_{i})=\gamma ^{(t)}\) such that
\[|\ell_{i}^{\prime(t)}-\ell_{*}^{\prime(t)}| =|\ell^{\prime\prime}(z)\cdot y_{i}\cdot[f(\mathbf{w}^{(t)}, \gamma^{(t)},\mathbf{x}_{i})-f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)}, \mathbf{x}_{i})]|\] \[\leq|\ell^{\prime\prime}(z)|\cdot|f(\mathbf{w}^{(t)},\gamma^{(t) },\mathbf{x}_{i})-f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{x}_{i})|\] \[\leq\max\{|\ell_{1}^{\prime(t)}|,\ldots,|\ell_{n}^{\prime(t)}|, \exp(-\gamma^{(t)})\}\cdot|f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})-f( \widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{x}_{i})|.\] (C.18)
where the first inequality follows since \(|y_{i}|=1\), and the second inequality follows by the property of the cross-entropy loss that \(0\leq\ell^{\prime\prime}(z)\leq-\ell^{\prime}(z)\) together with the fact that \(z\) is between \(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})\) and \(y_{i}\cdot f(\widehat{\mathbf{w}}_{t}^{*},\gamma^{(t)},\mathbf{x}_{i})=\gamma^{(t)}\). Moreover, by (C.13) we have \(\widehat{\mathbf{w}}_{t}^{*}\in\mathcal{B}^{(t)}\) and \(\|\mathbf{w}\|_{2}\geq\|\mathbf{w}^{(t)}\|_{2}/2\) for all \(\mathbf{w}\in\mathcal{B}^{(t)}=\{\mathbf{w}\in\mathbb{R}^{d}:\|\mathbf{w}-\mathbf{w}^{(t)}\|_{2}\leq\|\mathbf{w}^{(t)}\|_{2}/2\}\). For all \(\mathbf{w}\in\mathcal{B}^{(t)}\), we have
\[\|\nabla_{\mathbf{w}}f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\|_ {2} =\left\|\frac{\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}} \cdot\bigg{(}\mathbf{I}-\frac{\boldsymbol{\Sigma}\mathbf{w}\mathbf{w}^{\top}}{ \|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}\right\|_{2}\] \[\leq\frac{\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}} \cdot\|\mathbf{x}_{i}\|_{2}\cdot\bigg{(}1+\left\|\frac{\boldsymbol{\Sigma} \mathbf{w}\mathbf{w}^{\top}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}}\right\|_ {2}\bigg{)}\] \[=\frac{\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}} \cdot\|\mathbf{x}_{i}\|_{2}\cdot\bigg{(}1+\frac{\|\boldsymbol{\Sigma} \mathbf{w}\|_{2}\cdot\|\mathbf{w}\|_{2}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2 }}\bigg{)}\] \[\leq\frac{2\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}} \cdot\|\mathbf{x}_{i}\|_{2}\cdot\frac{\|\boldsymbol{\Sigma}\mathbf{w}\|_{2} \cdot\|\mathbf{w}\|_{2}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}},\]
where the last inequality follows by the fact that \(\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}=\langle\mathbf{w},\boldsymbol{\Sigma} \mathbf{w}\rangle\leq\|\boldsymbol{\Sigma}\mathbf{w}\|_{2}\cdot\|\mathbf{w}\|_{2}\). Further plugging in the definition of \(\boldsymbol{\Sigma}\) gives
\[\|\nabla_{\mathbf{w}}f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\|_ {2} \leq\frac{2\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{3}} \cdot\|\mathbf{x}_{i}\|_{2}\cdot\|\mathbf{w}\|_{2}\cdot\left\|\frac{1}{n}\sum_{ j=1}^{n}\mathbf{x}_{j}\cdot\langle\mathbf{w},\mathbf{x}_{j}\rangle \right\|_{2}\] \[\leq\frac{2\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{3}} \cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\cdot\frac{1}{n} \sum_{i=1}^{n}|\langle\mathbf{w},\mathbf{x}_{i}\rangle|\] \[\leq\frac{2\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{3}} \cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\cdot\sqrt{\frac{1}{n} \sum_{i=1}^{n}|\langle\mathbf{w},\mathbf{x}_{i}\rangle|^{2}}\] \[=\frac{2\gamma^{(t)}}{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}} \cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\]
Combining the above bound on \(\|\nabla_{\mathbf{w}}f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\|_{2}\) over \(\mathcal{B}^{(t)}\) with (C.18) to control \(I_{2}\), plugging the resulting bounds on \(I_{1}\) and \(I_{2}\) into (C.14), and using the gradient descent update rule together with the fact that \(\langle\mathbf{w}^{(t)},\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)})\rangle=0\), we obtain
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2}\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}G\cdot\max\{\gamma^{(t)2},\gamma^{(t)4}\}\cdot\max\{|\ell_{1}^{\prime(t)}|^{2},\ldots,|\ell_{n}^{\prime(t)}|^{2},\exp(-2\gamma^{(t)})\}\cdot\|\mathbf{w}^{(t)}-\widehat{\mathbf{w}}_{t}^{*}\|_{2}^{2},\]
where \(G=64\lambda_{\min}^{-3}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{6}\cdot\|\mathbf{w }^{(0)}\|_{2}^{-4}\). This finishes the proof.
### Proof of Lemma 4.5
In preparation of the proof of Lemma 4.5, we first present the following lemma.
**Lemma C.2**.: _Suppose that a sequence \(a^{(t)}\), \(t\geq 0\) follows the iterative formula_
\[a^{(t+1)}=a^{(t)}+c\cdot\exp(-a^{(t)})\]
_for some \(c>0\). Then it holds that_
\[\log(c\cdot t+\exp(a^{(0)}))\leq a^{(t)}\leq c\exp(-a^{(0)})+\log(c\cdot t+\exp(a^ {(0)}))\]
_for all \(t\geq 0\)._
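As a quick numerical sanity check (not part of the formal argument), the two-sided bound in Lemma C.2 can be verified by iterating the recursion directly; a minimal Python sketch, with arbitrary test values of \(c\) and \(a^{(0)}\):

```python
import math

# Iterate a^{(t+1)} = a^{(t)} + c * exp(-a^{(t)}) and check the bounds of Lemma C.2:
# log(c*t + exp(a0)) <= a^{(t)} <= c*exp(-a0) + log(c*t + exp(a0)).
# The values of c and a0 are arbitrary test choices.
for c in (0.05, 0.5, 4.0):
    for a0 in (0.5, 1.0, 2.0):
        a = a0
        for t in range(5001):
            lower = math.log(c * t + math.exp(a0))
            upper = c * math.exp(-a0) + lower
            assert lower - 1e-12 <= a <= upper + 1e-12, (c, a0, t)
            a += c * math.exp(-a)  # one step of the recursion
print("Lemma C.2 bounds hold on all tested trajectories")
```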
The proof of Lemma 4.5 is given as follows.
Proof of Lemma 4.5.: Set \(T_{0}=1+400\eta^{-1}\epsilon^{-2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{\min}^ {-1}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot\|\mathbf{w}^{*}\|_{2}\). By gradient descent update rule, we have
\[|\gamma^{(t+1)}-\gamma^{(t)}| =\left|\frac{\eta}{n}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\cdot\frac{\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle}{\sqrt{n^{-1}\cdot\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\right|\] \[\leq\eta\cdot\frac{n^{-1}\cdot\sum_{i=1}^{n}|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle|}{\sqrt{n^{-1}\cdot\sum_{j=1}^{n}\langle\mathbf{w}^{(t)},\mathbf{z}_{j}\rangle^{2}}}\] \[\leq\eta,\]
where the first inequality follows by the fact that \(|\ell^{\prime}(z)|<1\) for all \(z\in\mathbb{R}\), and the second inequality follows by Jensen's inequality. Therefore, we have
\[|\gamma^{(t)}-1|=|\gamma^{(t)}-\gamma^{(0)}|\leq\eta+400\epsilon^{-2}\cdot \lambda_{\max}^{3/2}\cdot\lambda_{\min}^{-1}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2} \cdot\|\mathbf{w}^{*}\|_{2}.\]
Further plugging in the definition of \(\epsilon\) gives
\[|\gamma^{(t)}-1| \leq\eta+\frac{6400\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\|\mathbf{ w}^{*}\|_{2}^{2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{\min}^{-1}\cdot\| \mathbf{w}^{(0)}\|_{2}^{2}}{\min\left\{1/3,\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot \lambda_{\min}^{1/2}/(40\lambda_{\max}^{3/4})\right\}}\] \[=\eta+19200\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\|\mathbf{w}^{*}\|_ {2}^{2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{\min}^{-1}\cdot\|\mathbf{w}^{(0 )}\|_{2}^{2}\] \[\leq 1/2\] (C.21)
for all \(t\in[T_{0}]\), where the first equality follows by the assumption that \(\|\mathbf{w}^{(0)}\|_{2}\leq\lambda_{\min}^{1/2}/(20\lambda_{\max}^{3/4})\), and the second inequality follows by the assumption that \(\eta\leq 1/4\) and \(\|\mathbf{w}^{(0)}\|_{2}\leq(\max_{i}\|\mathbf{z}_{i}\|_{2})^{-1/2}\cdot\| \mathbf{w}^{*}\|_{2}^{-1}\cdot\lambda_{\max}^{-3/4}\cdot\lambda_{\min}^{1/2}/280\).
By Lemma 4.4, we have
\[\|\mathbf{w}^{(t)}\|_{2}^{2}\leq\|\mathbf{w}^{(t+1)}\|_{2}^{2}\leq\|\mathbf{ w}^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{\gamma^{(t)2}\cdot\max_{i}\|\mathbf{x}_{i }\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}}.\]
By the monotonicity of \(\|\mathbf{w}^{(t)}\|_{2}\) and the result in (C.21) that \(\gamma^{(t)}\leq 3/2\) for all \(t\in[T_{0}]\), we have
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2}\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+9\eta^{2}\cdot \frac{\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^ {(0)}\|_{2}^{2}}\]
for all \(t\in[T_{0}]\). Taking a telescoping sum then gives
\[\|\mathbf{w}^{(t)}\|_{2}^{2}\leq\|\mathbf{w}^{(0)}\|_{2}^{2}+\frac{9\eta^{2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot t\leq\|\mathbf{w}^{(0)}\|_{2}^{2}+\frac{9\eta^{2}T_{0}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\]
for all \(t\in[T_{0}]\). Plugging in the definition of \(T_{0}\) gives
\[\|\mathbf{w}^{(t)}\|_{2}^{2} \leq\|\mathbf{w}^{(0)}\|_{2}^{2}+\frac{9\eta^{2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot(1+400\eta^{-1}\epsilon^{-2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{\min}^{-1}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot\|\mathbf{w}^{*}\|_{2})\] \[=\|\mathbf{w}^{(0)}\|_{2}^{2}+\frac{9\eta^{2}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}+\frac{3600\eta\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{3}}{\lambda_{\min}^{3}}\cdot\epsilon^{-2}\cdot\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{*}\|_{2}\] \[\leq(1+\epsilon^{2}/4)\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\]
for all \(t\in[T_{0}]\), where the last inequality follows by the assumption that \(\eta\leq\min\{\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot\lambda_{\min} \cdot(\max_{i}\|\mathbf{x}_{i}\|_{2})^{-3/2}/9,\epsilon^{4}\cdot\lambda_{\min }^{3}\cdot\lambda_{\max}^{-3/2}\cdot\|\mathbf{w}^{*}\|_{2}^{2}\cdot(\max_{i} \|\mathbf{x}_{i}\|_{2})^{-3}/28800\}\). Therefore
\[\|\mathbf{w}^{(t)}\|_{2}\leq\sqrt{1+\epsilon^{2}/4}\cdot\|\mathbf{w}^{(0)}\|_ {2}\leq(1+\epsilon/2)\cdot\|\mathbf{w}^{(0)}\|_{2}\] (C.22)
for all \(t\in[T_{0}]\).
By Lemma 4.2, we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle\geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\gamma^{(t)}\eta}{16\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{3}}\cdot\exp(-\gamma^{(t)})\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}.\] (C.23)
By the result that \(\gamma^{(t)}\leq 3/2\) for all \(t\in[T_{0}]\), we have \(\exp(-\gamma^{(t)})\geq\exp(-3/2)\geq 1/5\) for all \(t\in[T_{0}]\). Therefore by (C.23), we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle \geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\gamma^{(t)}\eta}{80\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{3}}\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\] \[\geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\gamma^{(t)}\eta}{80\|\mathbf{w}^{(0)}\|_{\mathbf{\Sigma}}^{3}}\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\] \[\geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\eta}{160\|\mathbf{w}^{(0)}\|_{\mathbf{\Sigma}}^{3}}\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\]
for all \(t\in[T_{0}]\), where the second inequality follows by Lemma 4.4, and the third inequality follows by the proved result that \(\gamma^{(t)}\geq 1/2\) for all \(t\in[T_{0}]\). Telescoping over \(t=0,\ldots,T_{0}-1\) then gives
\[\min_{t\in[T_{0}-1]}\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2} \leq\frac{1}{T_{0}-1}\sum_{t=0}^{T_{0}-1}\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2}\] \[\leq\frac{160\|\mathbf{w}^{(0)}\|_{\mathbf{\Sigma}}^{3}}{(T_{0}-1)\eta}\cdot(\langle\mathbf{w}^{(T_{0})},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(0)},\mathbf{w}^{*}\rangle)\] \[\leq\frac{160\|\mathbf{w}^{(0)}\|_{\mathbf{\Sigma}}^{3}}{(T_{0}-1)\eta}\cdot(\|\mathbf{w}^{(T_{0})}\|_{2}+\|\mathbf{w}^{(0)}\|_{2})\cdot\|\mathbf{w}^{*}\|_{2}.\]
By the proved result that \(\|\mathbf{w}^{(T_{0})}\|_{2}\leq(1+\epsilon/2)\cdot\|\mathbf{w}^{(0)}\|_{2}\leq 1.5\|\mathbf{w}^{(0)}\|_{2}\) (by definition, \(\epsilon\leq\max_{i}\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle^{-1}/48=1/48\)), we then obtain
\[\min_{t\in[T_{0}-1]}\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{\mathbf{\Sigma}}^{2} \leq\frac{400\|\mathbf{w}^{(0)}\|_{\mathbf{\Sigma}}^{3}}{(T_{0}-1)\eta}\cdot\|\mathbf{w}^{(0)}\|_{2}\cdot\|\mathbf{w}^{*}\|_{2}\] \[\leq\frac{400\lambda_{\max}^{3/2}}{(T_{0}-1)\eta}\cdot\|\mathbf{w}^{(0)}\|_{2}^{4}\cdot\|\mathbf{w}^{*}\|_{2}\]
\[\leq\epsilon^{2}\cdot\lambda_{\min}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2},\]
where the last inequality follows by the definition that \(T_{0}=1+400\eta^{-1}\epsilon^{-2}\cdot\lambda_{\max}^{3/2}\cdot\lambda_{\min}^{ -1}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot\|\mathbf{w}^{*}\|_{2}\). Therefore, we see that there exists \(t_{0}\in[T_{0}-1]\) such that
\[\big{\|}\mathbf{w}^{(t_{0})}-\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\big{\|}_{2} \leq\|\mathbf{w}^{(t_{0})}-\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{2}\] \[\leq\lambda_{\min}^{-1/2}\cdot\|\mathbf{w}^{(t_{0})}-\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{\boldsymbol{\Sigma}}\] \[\leq\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2},\]
where the first inequality follows by the fact that \(\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle\cdot\|\mathbf{w}^{*}\|_{2} ^{-2}\cdot\mathbf{w}^{*}\) is the \(\ell_{2}\)-projection of \(\mathbf{w}^{(t_{0})}\) on \(\operatorname{span}\{\mathbf{w}^{*}\}\). Together with (C.21) and (C.22), we conclude that there exists \(t_{0}\in[T_{0}-1]\) such that all three results of Lemma 4.5 hold.
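As an illustration (not part of the formal argument), the two structural facts driving this proof, the monotonicity of \(\|\mathbf{w}^{(t)}\|_{2}\) and the per-step bound \(|\gamma^{(t+1)}-\gamma^{(t)}|\leq\eta\) derived at the start of the proof, can be observed by running gradient descent on the normalized linear model directly. A minimal Python sketch on arbitrary synthetic data (the two checked properties hold for any data, so no separability assumption is needed here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 20, 5, 0.05
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
Z = y[:, None] * X                        # z_i = y_i * x_i
Sigma = X.T @ X / n                       # Sigma = n^{-1} sum_i x_i x_i^T
w, gamma = 0.1 * rng.normal(size=d), 1.0  # gamma^{(0)} = 1

norms, gammas = [np.linalg.norm(w)], [gamma]
for t in range(500):
    s = np.sqrt(w @ Sigma @ w)                     # ||w||_Sigma
    margins = gamma * (Z @ w) / s                  # y_i * f(w, gamma, x_i)
    lp = -1.0 / (1.0 + np.exp(margins))            # ell'(y_i * f) for logistic loss
    proj = np.eye(d) - np.outer(Sigma @ w, w) / s**2
    grad_w = (gamma / (n * s)) * (proj @ (Z.T @ lp))
    grad_g = (lp @ (Z @ w)) / (n * s)
    w, gamma = w - eta * grad_w, gamma - eta * grad_g
    norms.append(np.linalg.norm(w)); gammas.append(gamma)

# Scale invariance of f in w gives <w, grad_w> = 0, so ||w^{(t)}||_2 never decreases,
# and Cauchy-Schwarz gives |gamma^{(t+1)} - gamma^{(t)}| <= eta at every step.
assert np.all(np.diff(norms) >= -1e-10)
assert np.abs(np.diff(gammas)).max() <= eta + 1e-12
print("checks passed; final gamma:", gammas[-1])
```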
### Proof of Lemma 4.6
Proof of Lemma 4.6.: We prove the first five results together by induction, and then prove the sixth result. Clearly, all the results hold at \(t=t_{0}\) by the assumptions. Now suppose that there exists \(t_{1}\geq t_{0}\) such that the results hold for \(t=t_{0},\ldots,t_{1}\), i.e., for \(t=t_{0},\ldots,t_{1}\) it holds that
1. \(\big{\|}\mathbf{w}^{(t_{0})}-\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})} \rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\big{\|}_{2},\ldots,\big{\|}\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle\cdot \|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\big{\|}_{2}\) is a decreasing sequence.
2. \(\|\mathbf{w}^{(0)}\|_{2}\leq\|\mathbf{w}^{(t)}\|_{2}\leq(1+\epsilon)\cdot\| \mathbf{w}^{(0)}\|_{2}\).
3. \(\gamma^{(t)}\) has the following upper and lower bounds: \[\gamma^{(t)} \leq\log[8\eta\cdot(t-t_{0})+2\exp(\gamma^{(t_{0})})],\] \[\gamma^{(t)} \geq\log[(\eta/8)\cdot(t-t_{0})+\exp(\gamma^{(t_{0})})].\]
4. It holds that \[\big{\|}\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)} \rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\big{\|}_{2}\] \[\leq\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2}\cdot\exp\Bigg{[}-\frac{ \lambda_{\min}}{1024\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}} \cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]},\] \[n^{-2}\sum_{i,i^{\prime}=1}^{n}\left(\langle\mathbf{w}^{(t)},\mathbf{z}_{ i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle\right)^{2}\] \[\leq\lambda_{\max}\cdot\epsilon^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2} \cdot\exp\Bigg{[}-\frac{\lambda_{\min}}{512\lambda_{\max}^{3/2}\cdot\| \mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}.\]
5. \(\max_{i}|\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i} \rangle-\|\mathbf{w}^{*}\|_{2}^{-1}|\cdot\gamma^{(t)}\leq\|\mathbf{w}^{*}\|_{2 }^{-1}/4\).
Then we aim to show that the above conclusions also hold at iteration \(t_{1}+1\).
**Preliminary results based on the induction hypotheses.** By definition, it is easy to see that
\[\epsilon\leq\|\mathbf{w}^{*}\|_{2}^{-1}\cdot(\max_{i}\|\mathbf{z}_{i}\|_{2})^ {-1}/24\leq(\min_{i}|\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle|)^{-1}/24=1/2 4<1.\]
By induction hypothesis (i), we have
\[\left\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle\cdot\| \mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\right\|_{2}\leq\left\|\mathbf{w}^{ (t_{0})}-\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle\cdot\|\mathbf{w}^{*} \|_{2}^{-2}\cdot\mathbf{w}^{*}\right\|_{2}\leq\epsilon\cdot\|\mathbf{w}^{(0)} \|_{2}.\] (C.24)
Taking the square of both sides and dividing by \(\|\mathbf{w}^{(t)}\|_{2}^{2}\) gives
\[1-\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{w}^{(t)}/\|\mathbf{w}^{ (t)}\|_{2}\rangle^{2}\leq\epsilon^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}/\| \mathbf{w}^{(t)}\|_{2}^{2}\leq\epsilon^{2},\]
where the last inequality follows by Lemma 4.4 on the monotonicity of \(\|\mathbf{w}^{(t)}\|_{2}\). Therefore, we have
\[(1-\epsilon)\leq\sqrt{1-\epsilon^{2}}\leq\langle\mathbf{w}^{*}/\|\mathbf{w}^{ *}\|_{2},\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2}\rangle\leq 1.\] (C.25)
Moreover, by (C.24), we have
\[|\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle-\langle\mathbf{w}^{*},\mathbf{ w}^{(t)}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\langle\mathbf{w}^{*}, \mathbf{z}_{i}\rangle|\leq\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\|\mathbf{w}^{(0) }\|_{2}\cdot\epsilon\]
for all \(i\in[n]\). Dividing by \(\|\mathbf{w}^{(t)}\|_{2}\) on both sides above gives
\[|\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i} \rangle-\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{w}^{(t)}/\| \mathbf{w}^{(t)}\|_{2}\rangle\cdot\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2 },\mathbf{z}_{i}\rangle|\] \[\leq\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\|\mathbf{w}^{(0)}\|_{2}/ \|\mathbf{w}^{(t)}\|_{2}\cdot\epsilon\leq\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon.\]
Recall that \(\mathbf{w}^{*}\) is chosen such that \(\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle=1\) for all \(i\in[n]\). Therefore, rearranging terms and applying (C.25) then gives
\[\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle \geq\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2}\rangle\cdot\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{z}_{i}\rangle-\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\] \[\geq\|\mathbf{w}^{*}\|_{2}^{-1}\cdot\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle-2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\] \[=\|\mathbf{w}^{*}\|_{2}^{-1}-2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon,\] \[\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle \leq\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2}\rangle\cdot\langle\mathbf{w}^{*}/\|\mathbf{w}^{*}\|_{2},\mathbf{z}_{i}\rangle+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\] \[\leq\|\mathbf{w}^{*}\|_{2}^{-1}\cdot\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\] \[=\|\mathbf{w}^{*}\|_{2}^{-1}+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon.\]
Therefore, we have
\[|\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle-\| \mathbf{w}^{*}\|_{2}^{-1}|\leq 2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\leq\| \mathbf{w}^{*}\|_{2}^{-1}/4\] (C.26)
for all \(i\in[n]\) and \(t=t_{0},\ldots,t_{1}\), where the second inequality follows by the definition of \(\epsilon\). Now denote \(\alpha=\|\mathbf{w}^{*}\|_{2}^{-1}\) and \(c^{(t)}=\max_{i}|\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i }\rangle-\alpha|\). Then we have
\[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i}) =\frac{\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i} \rangle}{\sqrt{n^{-1}\cdot\sum_{i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)}, \mathbf{z}_{i^{\prime}}\rangle^{2}}}=\frac{\gamma^{(t)}\cdot\langle\mathbf{w}^{ (t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle}{\sqrt{n^{-1}\cdot\sum_{i^{ \prime}=1}^{n}\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i^{ \prime}}\rangle^{2}}}\] \[\leq\gamma^{(t)}\cdot\frac{\alpha+c^{(t)}}{\alpha-c^{(t)}}\leq \gamma^{(t)}\cdot(1+3c^{(t)}/\alpha)\leq\gamma^{(t)}+3/4,\] (C.27)
where the second inequality follows by (C.26) that \(\max_{i}|\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle- \alpha|\leq\alpha/4\), and the last inequality follows by induction hypothesis (v). Similarly,
\[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i}) =\frac{\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i}\rangle}{\sqrt{n^{-1}\cdot\sum_{i^{\prime}=1}^{n}\langle\mathbf{w}^{(t)}/\|\mathbf{w}^{(t)}\|_{2},\mathbf{z}_{i^{\prime}}\rangle^{2}}}\] \[\geq\gamma^{(t)}\cdot\frac{\alpha-c^{(t)}}{\alpha+c^{(t)}}\geq\gamma^{(t)}\cdot(1-c^{(t)}/\alpha)\geq\gamma^{(t)}-1/4.\] (C.28)
Note that
\[-\ell^{\prime}(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)}, \mathbf{x}_{i}))=\frac{1}{1+\exp[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)}, \mathbf{x}_{i})]}\]
for all \(i\in[n]\) and all \(t\in[t_{0},t_{1}]\). Therefore by (C.27) and (C.28), we have
\[\exp(-\gamma^{(t)}-3/4)/2\leq-\ell^{\prime}(y_{i}\cdot f(\mathbf{ w}^{(t)},\gamma^{(t)},\mathbf{x}_{i}))\leq\exp(-\gamma^{(t)}+3/4)\] (C.29)
for all \(i\in[n]\) and all \(t\in[t_{0},t_{1}]\).
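As a quick numerical sanity check (not part of the formal argument), (C.29) can be verified directly from the logistic-sigmoid form of \(-\ell^{\prime}\); a minimal Python sketch over an arbitrary test grid:

```python
import numpy as np

# For u in [g - 1/4, g + 3/4] and g >= 1/2, check
# exp(-g - 3/4)/2 <= 1/(1 + exp(u)) <= exp(-g + 3/4), i.e. (C.29).
# The grid of g values is an arbitrary test choice.
for g in np.linspace(0.5, 10.0, 96):
    u = np.linspace(g - 0.25, g + 0.75, 10_001)
    sig = 1.0 / (1.0 + np.exp(u))       # equals -ell'(u) for the logistic loss
    assert np.exp(-g - 0.75) / 2 <= sig.min()
    assert sig.max() <= np.exp(-g + 0.75)
print("(C.29) verified on the test grid")
```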
**Proof of induction hypothesis (i) at iteration \(t_{1}+1\).** By Lemma 4.2, for any \(t=t_{0},\ldots,t_{1}\), we have
\[-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle\] \[\leq-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle-\frac{\eta\cdot\gamma^{(t)}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle}{2\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{3}\cdot\|\mathbf{w}^{*}\|_{2}}\cdot\min_{i}|\ell^{\prime(t)}_{i}|\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{2}^{2}\] \[\leq-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle-\frac{\eta\cdot\gamma^{(t)}\cdot\lambda_{\min}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle}{2\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{3}\cdot\|\mathbf{w}^{*}\|_{2}}\cdot\min_{i}|\ell^{\prime(t)}_{i}|\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\mathbf{\Sigma}}\cdot\mathbf{w}^{*}\|_{2}^{2}\] \[\leq-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle-\frac{\eta\cdot\gamma^{(t)}\cdot\lambda_{\min}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle}{2\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{3}\cdot\|\mathbf{w}^{*}\|_{2}}\cdot\min_{i}|\ell^{\prime(t)}_{i}|\cdot\left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] \[\leq-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle-\frac{\eta\cdot\gamma^{(t)}\cdot\lambda_{\min}}{8\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}}\cdot\exp(-\gamma^{(t)})\cdot\left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] \[\leq-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}}\cdot\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle-\frac{\eta\cdot\gamma^{(t)}\cdot\lambda_{\min}}{16\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\exp(-\gamma^{(t)})\cdot\left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2},\]
where the third inequality follows by the fact that \(\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\) is the projection of \(\mathbf{w}^{(t)}\) on \(\mathrm{span}\{\mathbf{w}^{*}\}\) and \(\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\|_{2}^{2}\leq\|\mathbf{w}^{(t)}-c\cdot\mathbf{w}^{*}\|_{2}^{2}\) for all \(c\in\mathbb{R}\), the fourth inequality follows by (C.25) and (C.29), and the last inequality follows by induction hypothesis (ii) and \((1+\epsilon)\leq\sqrt{2}\). Adding appropriate quadratic terms to both sides above and completing the square gives an upper bound on
\[\left\|\mathbf{w}^{(t+1)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}+\|\mathbf{w}^{(t)}\|_{2}^{2}-\|\mathbf{w}^{(t+1)}\|_{2}^{2}.\]
Absorbing the difference \(\|\mathbf{w}^{(t+1)}\|_{2}^{2}-\|\mathbf{w}^{(t)}\|_{2}^{2}\) via Lemma 4.4 and using that \(\langle\mathbf{w}^{*},\mathbf{w}^{(t+1)}\rangle\cdot\|\mathbf{w}^{*}\|_{2}^{-2}\cdot\mathbf{w}^{*}\) is the \(\ell_{2}\)-projection of \(\mathbf{w}^{(t+1)}\) on \(\mathrm{span}\{\mathbf{w}^{*}\}\), we arrive at
\[\left\|\mathbf{w}^{(t+1)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t+1)} \rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] \[\qquad\leq\left[1-\frac{\eta\cdot\gamma^{(t)}\cdot\lambda_{\min}}{ 32\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\exp(-\gamma^{(t) })\right]\cdot\left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^ {(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] (C.33)
for all \(t\in[t_{0},t_{1}]\). This implies that
\[\left\|\mathbf{w}^{(t+1)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t+1)} \rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\leq \left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{ \|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\]
for all \(t\in[t_{0},t_{1}]\), which completes the proof of induction hypothesis (i) at iteration \(t_{1}+1\).
**Proof of induction hypothesis (ii) at iteration \(t_{1}+1\).** By Lemma 4.4 and (C.29), for any \(t=t_{0},\ldots,t_{1}\), we have
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2}\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}G\cdot \max\{\gamma^{(t)2},\gamma^{(t)4}\}\cdot\exp(-2\gamma^{(t)}+3/2)\cdot\left\| \mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\| \mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\]
\[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+400\eta^{2}G\cdot\exp(-1.5\gamma^{(t) })\cdot\left\|\mathbf{w}^{(t)}-\frac{\left\langle\mathbf{w}^{*},\mathbf{w}^{(t) }\right\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+400\eta^{2}G\cdot\exp(-1.5\gamma ^{(t)})\cdot\left\|\mathbf{w}^{(t_{0})}-\frac{\left\langle\mathbf{w}^{*}, \mathbf{w}^{(t_{0})}\right\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{ *}\right\|_{2}^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+\frac{400\eta^{2}G}{\exp(1.5\gamma ^{(t)})}\cdot\left\|\frac{\mathbf{w}^{(t_{0})}}{\|\mathbf{w}^{(t_{0})}\|_{2} }-\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}},\frac{\mathbf{w} ^{(t_{0})}}{\|\mathbf{w}^{(t_{0})}\|_{2}}\right\rangle\cdot\frac{\mathbf{w}^{ *}}{\|\mathbf{w}^{*}\|_{2}}\right\|_{2}^{2}\cdot\|\mathbf{w}^{(t_{0})}\|_{2}^{2}\] \[=\|\mathbf{w}^{(t)}\|_{2}^{2}+\frac{400\eta^{2}G}{\exp(1.5\gamma ^{(t)})}\cdot\left(\left\|\frac{\mathbf{w}^{(t_{0})}}{\|\mathbf{w}^{(t_{0})}\| _{2}}\right\|_{2}^{2}-\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2} },\frac{\mathbf{w}^{(t_{0})}}{\|\mathbf{w}^{(t_{0})}\|_{2}}\right\rangle^{2} \right)\cdot\|\mathbf{w}^{(t_{0})}\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\frac{400\eta^{2}G}{\exp(1.5 \gamma^{(t)})}\cdot\|\mathbf{w}^{(t_{0})}\|_{2}^{2}\] \[\leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\frac{800\eta^{2}G}{\exp(1.5 \gamma^{(t)})}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2},\]
where \(G=64\lambda_{\min}^{-3}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{6}\cdot\|\mathbf{w}^{(0)}\|_{2}^{-4}\) is as defined in Lemma 4.4, the second inequality follows by the fact that \(0<\max\{z^{2},z^{4}\}\cdot\exp(-0.5z)<80\) for all \(z>0\), and the third inequality follows by induction hypothesis (i). Therefore, by induction hypothesis (iii), we have
\[\|\mathbf{w}^{(t_{1}+1)}\|_{2}^{2} \leq\|\mathbf{w}^{(0)}\|_{2}^{2}+800\eta^{2}G\cdot\|\mathbf{w}^{( 0)}\|_{2}^{2}\cdot\sum_{t=t_{0}}^{t_{1}}\frac{1}{\exp(1.5\gamma^{(t)})}\] \[\leq\|\mathbf{w}^{(0)}\|_{2}^{2}+800\eta^{2}G\cdot\|\mathbf{w}^{( 0)}\|_{2}^{2}\cdot\sum_{t=t_{0}}^{t_{1}}\frac{1}{\exp\{1.5\log[(\eta/8)\cdot(t -t_{0})+\exp(\gamma^{(t_{0})})]\}}\] \[=\|\mathbf{w}^{(0)}\|_{2}^{2}+800\eta^{2}G\cdot\|\mathbf{w}^{(0) }\|_{2}^{2}\cdot\sum_{t=t_{0}}^{t_{1}}\frac{1}{[(\eta/8)\cdot(t-t_{0})+\exp( \gamma^{(t_{0})})]^{1.5}}.\] (C.34)
Moreover, we have
\[\sum_{t=t_{0}}^{t_{1}}\frac{1}{[(\eta/8)\cdot(t-t_{0})+\exp( \gamma^{(t_{0})})]^{1.5}} =\sum_{t=0}^{t_{1}-t_{0}}\frac{1}{[(\eta/8)\cdot t+\exp(\gamma^{(t _{0})})]^{1.5}}\] \[\leq\sum_{t=0}^{\infty}\frac{1}{[(\eta/8)\cdot t+\exp(\gamma^{(t _{0})})]^{1.5}}\] \[=\frac{1}{\exp(1.5\gamma^{(t_{0})})}+\sum_{t=1}^{\infty}\frac{1}{ [(\eta/8)\cdot t+\exp(\gamma^{(t_{0})})]^{1.5}}\] \[\leq\frac{1}{\exp(1.5\gamma^{(t_{0})})}+\int_{1}^{\infty}\frac{1} {[(\eta/8)\cdot t+\exp(\gamma^{(t_{0})})]^{1.5}}\mathrm{d}t\] \[=\frac{1}{\exp(1.5\gamma^{(t_{0})})}+\frac{2}{(\eta/8)\cdot\exp( \gamma^{(t_{0})}/2)}\] \[\leq 1+16\eta^{-1}.\]
Plugging the bound above into (C.34) gives
\[\|\mathbf{w}^{(t_{1}+1)}\|_{2}^{2}\leq\|\mathbf{w}^{(0)}\|_{2}^{2}+800\eta^{2}G\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}\cdot(1+16\eta^{-1})\]
\[\leq\|{\bf w}^{(0)}\|_{2}^{2}+16000\eta G\cdot\|{\bf w}^{(0)}\|_{2}^{2}\] \[\leq(1+\epsilon^{2})\cdot\|{\bf w}^{(0)}\|_{2}^{2},\]
where the second inequality follows by the assumption that \(\eta\leq 1\), and the last inequality follows by the assumption that \(\eta\leq G^{-1}\cdot\epsilon^{2}/16000\). Therefore we have
\[\|\mathbf{w}^{(t_{1}+1)}\|_{2}\leq\sqrt{1+\epsilon^{2}}\cdot\|\mathbf{w}^{(0)}\|_{2}\leq(1+\epsilon)\cdot\|\mathbf{w}^{(0)}\|_{2},\]
which finishes the proof of induction hypothesis (ii) at iteration \(t_{1}+1\).
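As a quick numerical sanity check (not part of the formal argument), the elementary fact \(0<\max\{z^{2},z^{4}\}\cdot\exp(-0.5z)<80\) used in the proof above can be verified on a grid; the supremum is \(4096e^{-4}\approx 75.0\), attained by \(z^{4}\exp(-z/2)\) at \(z=8\):

```python
import numpy as np

# Grid check of: 0 < max(z^2, z^4) * exp(-0.5 * z) < 80 for all z > 0.
# Beyond z = 60 the expression is already below 2e-6 and keeps decaying.
z = np.linspace(1e-6, 60.0, 1_200_001)
vals = np.maximum(z**2, z**4) * np.exp(-0.5 * z)
print("max over grid:", vals.max())   # ~75.0, attained near z = 8
assert 0.0 < vals.max() < 80.0
```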
**Proof of induction hypothesis (iii) at iteration \(t_{1}+1\).** The gradient descent update rule for \(\gamma^{(t)}\) gives
\[\gamma^{(t+1)} =\gamma^{(t)}-\eta\cdot\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}(y_{ i}\cdot f({\bf w}^{(t)},\gamma^{(t)},{\bf x}_{i}))\cdot\frac{\langle{\bf w}^{(t)},{ \bf z}_{i}\rangle}{\|{\bf w}^{(t)}\|_{\bf\Sigma}}\] \[=\gamma^{(t)}-\eta\cdot\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}(y_{ i}\cdot f({\bf w}^{(t)},\gamma^{(t)},{\bf x}_{i}))\cdot\frac{\langle{\bf w}^{(t)},{ \bf z}_{i}\rangle}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle{\bf w}^{(t)},{\bf z} _{j}\rangle^{2}}}.\]
By (C.26) and (C.29), we then have
\[\gamma^{(t+1)} \leq\gamma^{(t)}-\eta\cdot\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}( y_{i}\cdot f({\bf w}^{(t)},\gamma^{(t)},{\bf x}_{i}))\cdot\frac{\alpha+ \alpha/4}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}(\alpha-\alpha/4)^{2}}}\] \[\leq\gamma^{(t)}+\eta\cdot\exp(-\gamma^{(t)}+3/4)\cdot\frac{ \alpha+\alpha/4}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}(\alpha-\alpha/4)^{2}}}\] \[\leq\gamma^{(t)}+4\eta\cdot\exp(-\gamma^{(t)}),\] (C.35) \[\gamma^{(t+1)} \geq\gamma^{(t)}-\eta\cdot\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}( y_{i}\cdot f({\bf w}^{(t)},\gamma^{(t)},{\bf x}_{i}))\cdot\frac{\alpha-\alpha/4}{ \sqrt{\frac{1}{n}\sum_{j=1}^{n}(\alpha+\alpha/4)^{2}}}\] \[\geq\gamma^{(t)}+\eta\cdot\exp(-\gamma^{(t)}-3/4)/2\cdot\frac{ \alpha-\alpha/4}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}(\alpha+\alpha/4)^{2}}}\] \[\geq\gamma^{(t)}+\frac{\eta}{8}\cdot\exp(-\gamma^{(t)})\] (C.36)
for all \(t=t_{0},\ldots,t_{1}\). The comparison theorem for discrete dynamical systems then gives \(\gamma^{(t_{1}+1)}\leq\overline{\gamma}^{(t_{1}+1)}\), where \(\overline{\gamma}^{(t)}\) is given by the iterative formula
\[\overline{\gamma}^{(t+1)}=\overline{\gamma}^{(t)}+4\eta\cdot\exp(-\overline{\gamma}^{(t)}),\quad\overline{\gamma}^{(t_{0})}=\gamma^{(t_{0})}.\]
Applying Lemma C.2 then gives
\[\gamma^{(t_{1}+1)}\leq\overline{\gamma}^{(t_{1}+1)}\leq 4\eta\cdot\exp(-\gamma^{ (t_{0})})+\log[4\eta\cdot(t_{1}+1-t_{0})+\exp(\gamma^{(t_{0})})].\]
By the assumption that \(\eta\leq 1/8\leq\log(2)/4\), we have
\[\gamma^{(t_{1}+1)}\leq\log[8\eta\cdot(t_{1}+1-t_{0})+2\exp(\gamma^{(t_{0})})].\]
Similarly, by Lemma C.2 and (C.36), we also have
\[\gamma^{(t_{1}+1)}\geq\log[(\eta/8)\cdot(t_{1}+1-t_{0})+\exp(\gamma^{(t_{0})})].\]
This finishes the proof of induction hypothesis (iii) at iteration \(t_{1}+1\).
**Proof of induction hypothesis (iv) at iteration \(t_{1}+1\).** By (C.33), we have
\[\left\|\mathbf{w}^{(t_{1}+1)}-\frac{\langle\mathbf{w}^{*}, \mathbf{w}^{(t_{1}+1)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*} \right\|_{2}^{2}\] \[\quad\leq\left[1-\frac{\eta\cdot\gamma^{(t_{1})}\cdot\lambda_{ \min}}{32\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\exp(- \gamma^{(t_{1})})\right]\cdot\left\|\mathbf{w}^{(t_{1})}-\frac{\langle\mathbf{ w}^{*},\mathbf{w}^{(t_{1})}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*} \right\|_{2}^{2}\] \[\quad\leq\prod_{\tau=t_{0}}^{t_{1}}\left[1-\frac{\eta\cdot \gamma^{(\tau)}\cdot\lambda_{\min}}{32\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{( 0)}\|_{2}^{2}}\cdot\exp(-\gamma^{(\tau)})\right]\cdot\left\|\mathbf{w}^{(t_{0} )}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t_{0})}\rangle}{\|\mathbf{w}^{*}\| _{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\] \[\quad\leq\prod_{\tau=t_{0}}^{t_{1}}\left[1-\frac{\eta\cdot\gamma^ {(\tau)}\cdot\lambda_{\min}}{32\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{ 2}^{2}}\cdot\exp(-\gamma^{(\tau)})\right]\cdot\epsilon^{2}\cdot\|\mathbf{w}^{( 0)}\|_{2}^{2}\]
Then by the fact that \(1-x\leq\exp(-x)\) for all \(x\in\mathbb{R}\) and taking square roots on both sides, we have
\[\left\|\mathbf{w}^{(t_{1}+1)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t_{1}+1)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}\leq\prod_{\tau=t_{0}}^{t_{1}}\exp\Bigg{[}-\frac{\eta\cdot\gamma^{(\tau)}\cdot\lambda_{\min}}{64\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\exp(-\gamma^{(\tau)})\Bigg{]}\cdot\epsilon\|\mathbf{w}^{(0)}\|_{2}\\ =\exp\Bigg{[}-\sum_{\tau=t_{0}}^{t_{1}}\frac{\eta\cdot\gamma^{(\tau)}\cdot\lambda_{\min}}{64\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\exp(-\gamma^{(\tau)})\Bigg{]}\cdot\epsilon\|\mathbf{w}^{(0)}\|_{2}.\] (C.37)
We then study the term \(\gamma^{(\tau)}\cdot\exp(-\gamma^{(\tau)})\). Note that by induction hypothesis (iii), for all \(\tau=t_{0},\ldots,t_{1}\), we have
\[\gamma^{(\tau)} \geq\log[(\eta/8)\cdot(\tau-t_{0})+\exp(\gamma^{(t_{0})})]\geq\gamma^{(t_{0})}\geq 1/2,\] \[\gamma^{(\tau)} \leq\log[8\eta\cdot(\tau-t_{0})+2\exp(\gamma^{(t_{0})})]\leq\log[8\eta\cdot(\tau-t_{0})+2\exp(1.5)]\leq\log[8\eta\cdot(\tau-t_{0})+9].\]
Over the interval \([1/2,\log[8\eta\cdot(\tau-t_{0})+9]]\), the function \(g(z)=z\cdot\exp(-z)\) is strictly increasing for \(1/2\leq z\leq 1\) and strictly decreasing for \(1\leq z\leq\log[8\eta\cdot(\tau-t_{0})+9]\). Note that \(\log[8\eta\cdot(\tau-t_{0})+9]>\log(9)>2\), and \(g\{\log[8\eta\cdot(\tau-t_{0})+9]\}<g(2)<g(1/2)\). Therefore we have
\[\gamma^{(\tau)}\cdot\exp(-\gamma^{(\tau)}) \geq\min_{1/2\leq z\leq\log[8\eta\cdot(\tau-t_{0})+9]}g(z)\] \[\geq\log[8\eta\cdot(\tau-t_{0})+9]\cdot\exp\{-\log[8\eta\cdot( \tau-t_{0})+9]\}\] \[=\frac{\log[8\eta\cdot(\tau-t_{0})+9]}{8\eta\cdot(\tau-t_{0})+9}.\]
Plugging the bound above into (C.37) gives
\[\left\|\mathbf{w}^{(t_{1}+1)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t_{1}+1) }\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2}\]
\[\leq\exp\Bigg{[}-\frac{\eta\cdot\lambda_{\min}}{128\lambda_{\max}^{3/2} \cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\sum_{\tau=t_{0}}^{t_{1}}\frac{\log[8\eta \cdot(\tau-t_{0})+9]}{8\eta\cdot(\tau-t_{0})+9}\Bigg{]}\cdot\epsilon\cdot\|{\bf w }^{(0)}\|_{2}\] \[\leq\exp\Bigg{[}-\frac{\eta\cdot\lambda_{\min}}{128\lambda_{\max} ^{3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\int_{0}^{t_{1}+1-t_{0}}\frac{\log(8 \eta\cdot\widetilde{\tau}+9)}{8\eta\cdot\widetilde{\tau}+9}{\rm d}\widetilde{ \tau}\Bigg{]}\cdot\epsilon\cdot\|{\bf w}^{(0)}\|_{2},\]
where the second inequality follows by \(8\eta\cdot\widetilde{\tau}+9>e\), and \(\log(z)/z\) is monotonically decreasing for \(z>e\). Further calculating the integral, we obtain
\[\left\|{\bf w}^{(t_{1}+1)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t_{ 1}+1)}\rangle}{\|{\bf w}^{*}\|_{2}^{2}}\cdot{\bf w}^{*}\right\|_{2}\] \[\qquad\leq\exp\Bigg{[}-\frac{\lambda_{\min}}{1024\lambda_{\max}^ {3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\log^{2}(Q)\Big{|}_{Q=9}^{8\eta\cdot (t_{1}+1-t_{0})+9}\Bigg{]}\cdot\epsilon\cdot\|{\bf w}^{(0)}\|_{2}\] \[\qquad=\exp\Bigg{[}-\frac{\lambda_{\min}}{1024\lambda_{\max}^{3/ 2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\left[\log^{2}(8\eta\cdot(t_{1}+1-t_{0} )+9)-\log^{2}(9)\right]\Bigg{]}\cdot\epsilon\cdot\|{\bf w}^{(0)}\|_{2}\] \[\qquad\leq\exp\Bigg{[}-\frac{\lambda_{\min}}{1024\lambda_{\max}^ {3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\left[\log(8\eta\cdot(t_{1}+1-t_{0} )+9)-\log(9)\right]^{2}\Bigg{]}\cdot\epsilon\cdot\|{\bf w}^{(0)}\|_{2}\] \[\qquad=\exp\Bigg{[}-\frac{\lambda_{\min}}{1024\lambda_{\max}^{3/ 2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t_{1}+1-t_{0})+ 1)\Bigg{]}\cdot\epsilon\cdot\|{\bf w}^{(0)}\|_{2},\] (C.38)
where the second inequality follows by the fact that \(a^{2}-b^{2}\geq(a-b)^{2}\) for \(a>b>0\). This finishes the proof of induction hypothesis (iv) at iteration \(t_{1}+1\).
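As a quick numerical sanity check (not part of the formal argument), the integral evaluated above, \(\int_{0}^{T}\frac{\log(8\eta\tilde{\tau}+9)}{8\eta\tilde{\tau}+9}\mathrm{d}\tilde{\tau}=\frac{1}{16\eta}[\log^{2}(8\eta T+9)-\log^{2}(9)]\), can be compared against numerical quadrature; a minimal Python sketch with arbitrary test values of \(\eta\) and \(T\):

```python
import numpy as np

eta, T = 0.1, 50.0   # arbitrary test values, not constants from the paper
t = np.linspace(0.0, T, 2_000_001)
integrand = np.log(8 * eta * t + 9) / (8 * eta * t + 9)
numeric = np.trapz(integrand, t)       # trapezoidal quadrature
closed = (np.log(8 * eta * T + 9) ** 2 - np.log(9.0) ** 2) / (16 * eta)
assert abs(numeric - closed) < 1e-6, (numeric, closed)
print(numeric, closed)
```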
**Proof of induction hypothesis (v) at iteration \(t_{1}+1\).** By (C.38), we have
\[\left\|{\bf w}^{(t_{1}+1)}-\frac{\langle{\bf w}^{*},{\bf w}^{(t_{1}+1)} \rangle}{\|{\bf w}^{*}\|_{2}^{2}}\cdot{\bf w}^{*}\right\|_{2}\leq{\cal E}^{(t_{ 1}+1)}\cdot\|{\bf w}^{(0)}\|_{2},\]
where
\[{\cal E}^{(t_{1}+1)}:=\exp\Bigg{[}-\frac{\lambda_{\min}}{1024\lambda_{\max}^{ 3/2}\cdot\|{\bf w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t_{1}+1-t_{0} )+1)\Bigg{]}\cdot\epsilon\] (C.39)
Taking the square of both sides and dividing by \(\|{\bf w}^{(t_{1}+1)}\|_{2}^{2}\) gives
\[1-\langle{\bf w}^{*}/\|{\bf w}^{*}\|_{2},{\bf w}^{(t_{1}+1)}/\|{\bf w}^{(t_{1}+ 1)}\|_{2}\rangle^{2}\leq({\cal E}^{(t_{1}+1)})^{2}\cdot\|{\bf w}^{(0)}\|_{2}^{2 }/\|{\bf w}^{(t_{1}+1)}\|_{2}^{2}\leq({\cal E}^{(t_{1}+1)})^{2},\]
where the last inequality follows by Lemma 4.4 on the monotonicity of \(\|{\bf w}^{(t)}\|_{2}\). Therefore, we have
\[(1-{\cal E}^{(t_{1}+1)})\leq\sqrt{1-({\cal E}^{(t_{1}+1)})^{2}}\leq\langle{\bf w }^{*}/\|{\bf w}^{*}\|_{2},{\bf w}^{(t_{1}+1)}/\|{\bf w}^{(t_{1}+1)}\|_{2} \rangle\leq 1.\] (C.40)
Moreover, by (C.38), we have
\[\left|\langle{\bf w}^{(t_{1}+1)},{\bf z}_{i}\rangle-\frac{\langle{\bf w}^{*}, {\bf w}^{(t_{1}+1)}\rangle}{\|{\bf w}^{*}\|_{2}^{2}}\cdot\langle{\bf w}^{*},{ \bf z}_{i}\rangle\right|\leq\max_{i}\|{\bf z}_{i}\|_{2}\cdot{\cal E}^{(t_{1}+1) }\cdot\|{\bf w}^{(0)}\|_{2}\]
for all \(i\in[n]\). Dividing by \(\|\mathbf{w}^{(t_{1}+1)}\|_{2}\) on both sides above gives
\[\left|\left\langle\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}}, \mathbf{z}_{i}\right\rangle-\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*} \|_{2}},\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}}\right\rangle \cdot\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}},\mathbf{z}_{i} \right\rangle\right|\leq\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{(t_{1}+ 1)}.\]
Recall again that \(\mathbf{w}^{*}\) is chosen such that \(\langle\mathbf{w}^{*},\mathbf{z}_{i}\rangle=1\) for all \(i\in[n]\). Therefore, rearranging terms and applying (C.40) then gives
\[\left\langle\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1 )}\|_{2}},\mathbf{z}_{i}\right\rangle \geq\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}}, \frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}}\right\rangle \cdot\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}},\mathbf{z}_{i} \right\rangle-\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{(t_{1}+1)}\] \[\geq\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}}, \mathbf{z}_{i}\right\rangle-2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{( t_{1}+1)}\] \[=\|\mathbf{w}^{*}\|_{2}^{-1}-2\max_{i}\|\mathbf{z}_{i}\|_{2} \cdot\mathcal{E}^{(t_{1}+1)},\] \[\left\langle\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1 )}\|_{2}},\mathbf{z}_{i}\right\rangle \leq\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}}, \frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}}\right\rangle \cdot\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}},\mathbf{z}_{i} \right\rangle+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{(t_{1}+1)}\] \[\leq\left\langle\frac{\mathbf{w}^{*}}{\|\mathbf{w}^{*}\|_{2}}, \mathbf{z}_{i}\right\rangle+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{( t_{1}+1)}\] \[=\|\mathbf{w}^{*}\|_{2}^{-1}+\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot \mathcal{E}^{(t_{1}+1)}.\]
Therefore, we have
\[\left|\left\langle\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2 }},\mathbf{z}_{i}\right\rangle-\alpha\right|=\left|\left\langle\frac{\mathbf{ w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}},\mathbf{z}_{i}\right\rangle-\| \mathbf{w}^{*}\|_{2}^{-1}\right|\leq 2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\mathcal{E}^{( t_{1}+1)}.\] (C.41)
Moreover, by the induction hypothesis (iii) at iteration \(t_{1}+1\) (which has been proved), we have
\[\gamma^{(t_{1}+1)} \leq\log[8\eta\cdot(t_{1}+1-t_{0})+2\exp(\gamma^{(t_{0})})]\] \[=\log(9)+\log[(8/9)\eta\cdot(t_{1}+1-t_{0})+(2/9)\cdot\exp( \gamma^{(t_{0})})]\] \[\leq\log(9)+\log[(8/9)\eta\cdot(t_{1}+1-t_{0})+(2/9)\cdot\exp(1.5)]\] \[\leq\log(9)+\log[(8/9)\eta\cdot(t_{1}+1-t_{0})+1].\] (C.42)
Therefore by (C.39), (C.41) and (C.42), we have
\[\left|\left\langle\frac{\mathbf{w}^{(t_{1}+1)}}{\|\mathbf{w}^{(t_{1}+1)}\|_{2}},\mathbf{z}_{i}\right\rangle-\alpha\right|\cdot\gamma^{(t_{1}+1)}\] \[\leq\frac{2\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\cdot\{\log(9)+\log[(8/9)\eta\cdot(t_{1}+1-t_{0})+1]\}}{\exp\left[\frac{\lambda_{\min}}{1024\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t_{1}+1-t_{0})+1)\right]}\] \[\leq 2\log(9)\cdot\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon+\frac{32\sqrt{2}\cdot\exp(-1/2)\cdot\lambda_{\max}^{3/4}}{\lambda_{\min}^{1/2}}\cdot\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2}\] \[\leq 6\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon+\frac{80\lambda_{\max}^{3/4}}{\lambda_{\min}^{1/2}}\cdot\max_{i}\|\mathbf{z}_{i}\|_{2}\cdot\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2}\] \[\leq\|\mathbf{w}^{*}\|_{2}^{-1}/4.\]
where the second inequality follows by the fact that \(\exp(-Az^{2})\cdot z\leq\exp(-1/2)\cdot(2A)^{-1/2}\) for all \(A,z>0\), and the last inequality follows by the definition of \(\epsilon\), which ensures that \(\epsilon\leq(16\max_{i}\|\mathbf{z}_{i}\|_{2})^{-1}\cdot\|\mathbf{w}^{*}\|_{2}^{-1}\cdot\min\left\{1/3,\|\mathbf{w}^{(0)}\|_{2}^{-1}\cdot\lambda_{\min}^{1/2}/(40\lambda_{\max}^{3/4})\right\}\). This finishes the proof of induction hypothesis (v) at iteration \(t_{1}+1\), and thus the first five results in Lemma 4.6 hold for all \(t\geq t_{0}\).
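As a quick numerical sanity check (not part of the formal argument), the elementary bound \(\exp(-Az^{2})\cdot z\leq\exp(-1/2)\cdot(2A)^{-1/2}\), with equality at \(z=(2A)^{-1/2}\), can be verified on a grid; a minimal Python sketch with arbitrary test values of \(A\):

```python
import numpy as np

# Grid check of: exp(-A * z^2) * z <= exp(-1/2) * (2A)^(-1/2) for all A, z > 0.
# The maximizer is z = (2A)^(-1/2); the test values of A are arbitrary.
for A in (0.01, 0.5, 3.0):
    z = np.linspace(1e-8, 50.0 / np.sqrt(A), 1_000_001)
    lhs_max = (np.exp(-A * z**2) * z).max()
    rhs = np.exp(-0.5) / np.sqrt(2 * A)
    assert lhs_max <= rhs * (1 + 1e-9), (A, lhs_max, rhs)
print("bound verified for all tested A")
```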
**Proof of the last result in Lemma 4.6.** As we have shown by induction, the first five results in Lemma 4.6 hold for all \(t\geq t_{0}\), and therefore (C.27) and (C.28) also hold for all \(t\geq t_{0}\). Hence, by the monotonicity of \(\ell(\cdot)\) and the third result in Lemma 4.6, we have
\[\ell(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})) \leq\log(1+\exp(-\gamma^{(t)}+1/4))\] \[\leq\exp\{-\log[(\eta/8)\cdot(t-t_{0})+\exp(\gamma^{(t_{0})})]+1 /4\}\] \[=\frac{\exp(1/4)}{(\eta/8)\cdot(t-t_{0})+\exp(\gamma^{(t_{0})})}\] \[\leq\frac{12}{\eta\cdot(t-t_{0})+1}\]
for all \(t\geq t_{0}\), where the second inequality follows by the fact that \(\log(1+z)\leq z\) for all \(z\geq 0\) and the lower bound on \(\gamma^{(t)}\). Similarly, we also have
\[\ell(y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})) \geq\log(1+\exp(-\gamma^{(t)}-3/4))\] \[\geq\exp\{-\log[8\eta\cdot(t-t_{0})+2\exp(\gamma^{(t_{0})})]-3/4 \}/2\] \[=\frac{\exp(-3/4)/2}{8\eta\cdot(t-t_{0})+2\exp(\gamma^{(t_{0})})}\] \[\geq\frac{1}{40}\cdot\frac{1}{\eta\cdot(t-t_{0})+1}\]
for all \(t\geq t_{0}\), where the second inequality follows by the fact that \(\log(1+z)\geq z/2\) for all \(z\in[0,1]\). This proves the first part of the result.
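As a quick numerical sanity check (not part of the formal argument), the two logarithm bounds used in this part, \(\log(1+z)\leq z\) for \(z\geq 0\) and \(\log(1+z)\geq z/2\) for \(z\in[0,1]\), can be verified on a grid; a minimal Python sketch:

```python
import numpy as np

z = np.linspace(0.0, 1.0, 1_000_001)
assert np.all(np.log1p(z) <= z + 1e-15)   # log(1+z) <= z
assert np.all(np.log1p(z) >= z / 2)       # log(1+z) >= z/2 on [0, 1]
z_big = np.linspace(0.0, 100.0, 1_000_001)
assert np.all(np.log1p(z_big) <= z_big + 1e-12)
print("logarithm bounds verified on the test grids")
```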
As for the second part of the result, by Lemma 4.3, for all \(t\geq t_{0}\) we have
\[\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2} =2\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{\boldsymbol{\Sigma}}^{2}\] \[\leq 2\lambda_{\max}\cdot\bigg{\|}\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\bigg{\|}_{2}^{2}.\] (C.43)
Now by the fourth result in Lemma 4.6, which has been proved to hold for all \(t\geq t_{0}\), we have
\[\bigg{\|}\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\bigg{\|}_{2}\leq\epsilon\cdot\|\mathbf{w}^{(0)}\|_{2}\cdot\exp\Bigg{[}-\frac{\lambda_{\min}\cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)}{1024\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\Bigg{]}\] (C.44)
for all \(t\geq t_{0}\). Combining (C.44) and (C.43) then gives
\[\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle) ^{2}\] \[\leq\lambda_{\max}\cdot\epsilon^{2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{ 2}\cdot\exp\Bigg{[}-\frac{\lambda_{\min}}{512\lambda_{\max}^{3/2}\cdot\| \mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}.\]
Then by the definitions of \(D(\mathbf{w})\) and \(\epsilon\), we have
\[D(\mathbf{w}^{(t)})\] \[\leq\frac{1}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot \frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i ^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)^{2}\] \[\leq\frac{1}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot \frac{\lambda_{\max}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}{2304\max_{i}\|\mathbf{ x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\exp\Bigg{[}-\frac{\lambda_{ \min}}{512\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2} ((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}.\]
By the definition of \(\mathbf{w}^{*}\), clearly we have \(\max_{i}\|\mathbf{x}_{i}\|_{2}^{2}\cdot\|\mathbf{w}^{*}\|_{2}^{2}\geq\min_{i }\langle y_{i}\cdot\mathbf{x}_{i},\mathbf{w}^{*}\rangle^{2}=1\). Therefore
\[D(\mathbf{w}^{(t)}) \leq\frac{\lambda_{\max}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}{2304 \|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{2}}\cdot\exp\Bigg{[}-\frac{ \lambda_{\min}}{512\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot \log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}\] \[\leq\frac{\lambda_{\max}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}{2304 \lambda_{\min}\|\mathbf{w}^{(t)}\|_{2}^{2}}\cdot\exp\Bigg{[}-\frac{\lambda_{ \min}}{512\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2}}\cdot\log^{2} ((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}\] \[\leq\frac{\lambda_{\max}}{2304\lambda_{\min}}\cdot\exp\Bigg{[}- \frac{\lambda_{\min}}{512\lambda_{\max}^{3/2}\cdot\|\mathbf{w}^{(0)}\|_{2}^{2 }}\cdot\log^{2}((8/9)\eta\cdot(t-t_{0})+1)\Bigg{]}\]
for all \(t\geq t_{0}\), where the last inequality follows by Lemma 4.4. Therefore the last result in Lemma 4.6 holds, and the proof of Lemma 4.6 is thus complete.
## Appendix D Proofs for Batch Normalization in Two-Layer Linear CNNs
### Proof of Theorem 3.2
As most parts of the proof of Theorem 3.2 are the same as those of the proof of Theorem 2.2, here we only highlight the differences between the two proofs. Specifically, we give the proofs of the counterparts of Lemmas 4.1, 4.2 and 4.4. The rest of the proof is essentially the same based on the multi-patch versions of these three results.
By Assumption 3.1, we can define \(\mathbf{w}^{*}\) as the minimum norm solution of the system:
\[\mathbf{w}^{*}:=\operatorname*{argmin}_{\mathbf{w}}\|\mathbf{w}\|_{2}^{2},\ \ \text{ subject to }\langle\mathbf{w},y_{i}\cdot\mathbf{x}_{i}^{(p)}\rangle=1,\ i\in[n],\ p\in[P].\] (D.1)
The following lemma is the counterpart of Lemma 4.1.
**Lemma D.1**.: _Under Assumption 3.1, for any \(\mathbf{w}\in\mathbb{R}^{d}\), it holds that_
\[\langle-\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle =\frac{\gamma}{2n^{2}P\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{3}} \sum_{i,i^{\prime}=1}^{n}|\ell^{\prime}_{i}|\cdot|\ell^{\prime}_{i^{\prime}}| \cdot(\langle\mathbf{w},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}, \mathbf{z}_{i}\rangle)\cdot(|\ell^{\prime}_{i^{\prime}}|^{-1}\cdot\langle \mathbf{w},\mathbf{z}_{i^{\prime}}\rangle-|\ell^{\prime}_{i}|^{-1}\cdot\langle \mathbf{w},\mathbf{z}_{i}\rangle)\] \[\quad+\frac{\gamma}{2n^{2}P\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{3} }\cdot\Bigg{(}\sum_{i=1}^{n}|\ell^{\prime}_{i}|\Bigg{)}\cdot\sum_{i^{\prime}= 1}^{n}\sum_{p,p^{\prime}=1}^{P}\big{(}\langle\mathbf{w},\mathbf{z}_{i^{\prime} }^{(p)}\rangle-\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})} \rangle\big{)}^{2},\]
_where \(\ell^{\prime}_{i}=\ell^{\prime}[y_{i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\), \(\mathbf{z}_{i}^{(p)}=y_{i}\cdot\mathbf{x}_{i}^{(p)}\), and \(\mathbf{z}_{i}=\sum_{p=1}^{P}\mathbf{z}_{i}^{(p)}\) for \(i\in[n]\), \(p\in[P]\)._
Proof of Lemma D.1.: By definition, we have
\[\nabla_{\mathbf{w}}L(\mathbf{w},\gamma)=\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}[y_ {i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\cdot y_{i}\cdot\nabla_{\mathbf{w }}f(\mathbf{w},\gamma,\mathbf{x}_{i}).\]
Then by the definition of \(f(\mathbf{w},\gamma,\mathbf{x})\), we have the following calculation using chain rule:
\[\nabla_{\mathbf{w}}L(\mathbf{w},\gamma) =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-1}\cdot\frac{1}{n}\sum_{i=1}^{n}\ell^{\prime}[y_{i}\cdot f(\mathbf{w},\gamma,\mathbf{x}_{i})]\cdot y_{i}\cdot\sum_{p=1}^{P}\gamma\cdot\Big{(}\mathbf{I}-\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-2}\cdot\boldsymbol{\Sigma}\mathbf{w}\mathbf{w}^{\top}\Big{)}\mathbf{x}_{i}^{(p)}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n}\sum_{i=1}^{n}\sum_{p=1}^{P}\ell^{\prime}_{i}\cdot y_{i}\cdot\Big{(}\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{2}\cdot\mathbf{I}-\boldsymbol{\Sigma}\mathbf{w}\mathbf{w}^{\top}\Big{)}\mathbf{x}_{i}^{(p)}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n}\sum_{i=1}^{n}\sum_{p=1}^{P}\ell^{\prime}_{i}\cdot\Bigg{(}\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle^{2}\cdot\mathbf{I}-\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle\cdot\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\mathbf{w}^{\top}\Bigg{)}\mathbf{z}_{i}^{(p)}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\Big{[}\ell^{\prime}_{i}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle^{2}\cdot\mathbf{z}_{i}^{(p)}-\ell^{\prime}_{i}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle\cdot\langle\mathbf{w},\mathbf{z}_{i}^{(p)}\rangle\cdot\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\Big{]},\]
where we denote \(\mathbf{z}_{i}^{(p)}=y_{i}\cdot\mathbf{x}_{i}^{(p)}\), \(i\in[n]\), \(p\in[P]\). By Assumption 3.1, taking inner product with \(-\mathbf{w}^{*}\) on both sides above then gives
\[-\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle =-\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2 }P}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\Big{[}\ell^{\prime}_{i} \cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle^{2}- \ell^{\prime}_{i}\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime}) }\rangle\cdot\langle\mathbf{w},\mathbf{z}_{i}^{(p)}\rangle\Big{]}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P }\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\Big{[}|\ell^{\prime}_{i}| \cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle^{2}-| \ell^{\prime}_{i}|\cdot\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime}) }\rangle\cdot\langle\mathbf{w},\mathbf{z}_{i}^{(p)}\rangle\Big{]},\]
where the second equality follows by the fact that \(\ell^{\prime}_{i}<0\), \(i\in[n]\). Further denote \(u_{i}^{(p)}=\langle\mathbf{w},\mathbf{z}_{i}^{(p)}\rangle\) for \(i\in[n]\), \(p\in[P]\). Then we have
\[-\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} P}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\big{[}|\ell^{\prime}_{i}| \cdot u_{i^{\prime}}^{(p^{\prime})2}-|\ell^{\prime}_{i}|\cdot u_{i^{\prime}}^{ (p^{\prime})}u_{i}^{(p)}\big{]}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2 }P}\sum_{i,i^{\prime}=1}^{n}\Bigg{[}|\ell^{\prime}_{i}|\cdot\Bigg{(}\sum_{p,p^{ \prime}=1}^{P}u_{i^{\prime}}^{(p^{\prime})2}-\sum_{p,p^{\prime}=1}^{P}u_{i^{ \prime}}^{(p^{\prime})}u_{i}^{(p)}\Bigg{)}\Bigg{]}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2 }P}\sum_{i,i^{\prime}=1}^{n}\Bigg{\{}|\ell^{\prime}_{i}|\cdot\Bigg{[}P\cdot \sum_{p^{\prime}=1}^{P}(u_{i^{\prime}}^{(p^{\prime})})^{2}-\Bigg{(}\sum_{p=1}^{P }u_{i}^{(p)}\Bigg{)}\cdot\Bigg{(}\sum_{p^{\prime}=1}^{P}u_{i^{\prime}}^{(p^{ \prime})}\Bigg{)}\Bigg{]}\Bigg{\}}.\]
Adding and subtracting a term
\[\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P}\sum_{i,i^{ \prime}=1}^{n}\Bigg{[}|\ell^{\prime}_{i}|\cdot\Bigg{(}\sum_{p^{\prime}=1}^{P}u_{i^{ \prime}}^{(p^{\prime})}\Bigg{)}^{2}\Bigg{]}\]
then gives
\[-\langle\nabla_{\mathbf{w}}L(\mathbf{w},\gamma),\mathbf{w}^{*}\rangle =\underbrace{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{ \gamma}{n^{2}P}\sum_{i,i^{\prime}=1}^{n}\left\{|\ell_{i}^{\prime}|\cdot\left[ \Bigg{(}\sum_{p^{\prime}=1}^{P}u_{i^{\prime}}^{(p^{\prime})}\Bigg{)}^{2}- \Bigg{(}\sum_{p=1}^{P}u_{i}^{(p)}\Bigg{)}\cdot\Bigg{(}\sum_{p^{\prime}=1}^{P} u_{i^{\prime}}^{(p^{\prime})}\Bigg{)}\right]\right\}_{I_{1}}\] \[\quad+\underbrace{\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot \frac{\gamma}{n^{2}P}\sum_{i,i^{\prime}=1}^{n}\left\{|\ell_{i}^{\prime}| \cdot\left[P\cdot\sum_{p^{\prime}=1}^{P}(u_{i^{\prime}}^{(p^{\prime})})^{2}- \Bigg{(}\sum_{p^{\prime}=1}^{P}u_{i^{\prime}}^{(p^{\prime})}\Bigg{)}^{2} \right]\right\}}_{I_{2}}.\] (D.2)
We then calculate the two terms \(I_{1}\) and \(I_{2}\) in (D.2) separately. The calculation for \(I_{1}\) is the same as the derivation in the proof of Lemma 4.1. To see this, let \(u_{i}=\sum_{p=1}^{P}u_{i}^{(p)}\) for \(i\in[n]\). Then we have
\[I_{1}=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P}\sum _{i,i^{\prime}=1}^{n}\big{(}|\ell_{i}^{\prime}|\cdot u_{i^{\prime}}^{2}-|\ell_ {i}^{\prime}|\cdot u_{i^{\prime}}u_{i}\big{)}.\] (D.3)
Switching the index notations \(i,i^{\prime}\) in the above equation also gives
\[I_{1}=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P}\sum _{i,i^{\prime}=1}^{n}\big{(}|\ell_{i^{\prime}}^{\prime}|\cdot u_{i}^{2}-|\ell_ {i^{\prime}}^{\prime}|\cdot u_{i^{\prime}}u_{i}\big{)}.\] (D.4)
We can add (D.3) and (D.4) together to obtain
\[2I_{1} =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{ 2}P}\sum_{i,i^{\prime}=1}^{n}\big{(}|\ell_{i}^{\prime}|\cdot u_{i^{\prime}}^{ 2}-|\ell_{i}^{\prime}|\cdot u_{i^{\prime}}u_{i}+|\ell_{i^{\prime}}^{\prime}| \cdot u_{i}^{2}-|\ell_{i^{\prime}}^{\prime}|\cdot u_{i^{\prime}}u_{i}\big{)}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} P}\sum_{i,i^{\prime}=1}^{n}(u_{i^{\prime}}-u_{i})\cdot\big{(}|\ell_{i}^{\prime}| \cdot u_{i^{\prime}}-|\ell_{i^{\prime}}^{\prime}|\cdot u_{i}\big{)}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2} P}\sum_{i,i^{\prime}=1}^{n}|\ell_{i}^{\prime}|\cdot|\ell_{i^{\prime}}^{\prime}| \cdot(u_{i^{\prime}}-u_{i})\cdot\big{(}|\ell_{i^{\prime}}^{\prime}|^{-1}\cdot u _{i^{\prime}}-|\ell_{i}^{\prime}|^{-1}\cdot u_{i}\big{)}.\]
Therefore we have
\[I_{1}=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{2n^{2}P} \sum_{i,i^{\prime}=1}^{n}|\ell_{i}^{\prime}|\cdot|\ell_{i^{\prime}}^{\prime}| \cdot(u_{i^{\prime}}-u_{i})\cdot\big{(}|\ell_{i^{\prime}}^{\prime}|^{-1}\cdot u _{i^{\prime}}-|\ell_{i}^{\prime}|^{-1}\cdot u_{i}\big{)}.\] (D.5)
This completes the calculation for \(I_{1}\). We then proceed to calculate \(I_{2}\). For any \(i^{\prime}\in[n]\), we can directly check that the following identity holds:
\[\frac{1}{2}\cdot\sum_{p,p^{\prime}=1}^{P}\big{(}u_{i^{\prime}}^{(p)}-u_{i^{ \prime}}^{(p^{\prime})}\big{)}^{2}=\frac{1}{2}\cdot\sum_{p,p^{\prime}=1}^{P} \big{[}(u_{i^{\prime}}^{(p)})^{2}+(u_{i^{\prime}}^{(p^{\prime})})^{2}-2u_{i^{ \prime}}^{(p)}u_{i^{\prime}}^{(p^{\prime})}\big{]}=P\cdot\sum_{p^{\prime}=1}^{P} (u_{i^{\prime}}^{(p^{\prime})})^{2}-\Bigg{(}\sum_{p^{\prime}=1}^{P}u_{i^{ \prime}}^{(p^{\prime})}\Bigg{)}^{2}.\]
Note that the right-hand side above appears in \(I_{2}\). Plugging the above calculation into the definition of \(I_{2}\) gives
\[I_{2} =\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{n^{2}P} \sum_{i,i^{\prime}=1}^{n}\Bigg{\{}|\ell^{\prime}_{i}|\cdot\left[\frac{1}{2} \cdot\sum_{p,p^{\prime}=1}^{P}\big{(}u_{i^{\prime}}^{(p)}-u_{i^{\prime}}^{(p^{ \prime})}\big{)}^{2}\right]\Bigg{\}}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{2n^{2 }P}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}|\ell^{\prime}_{i}|\cdot \big{(}u_{i^{\prime}}^{(p)}-u_{i^{\prime}}^{(p^{\prime})}\big{)}^{2}\] \[=\|\mathbf{w}\|_{\boldsymbol{\Sigma}}^{-3}\cdot\frac{\gamma}{2n^{2 }P}\cdot\Bigg{(}\sum_{i=1}^{n}|\ell^{\prime}_{i}|\Bigg{)}\cdot\sum_{i^{\prime }=1}^{n}\sum_{p,p^{\prime}=1}^{P}\big{(}u_{i^{\prime}}^{(p)}-u_{i^{\prime}}^{( p^{\prime})}\big{)}^{2}.\] (D.6)
Finally, plugging (D.5) and (D.6) into (D.2) completes the proof.
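As a quick numerical sanity check (not part of the proof), the identity used above to rewrite \(I_{2}\) can be verified on a random vector; the snippet below is a minimal Python sketch.

```python
# Check the identity: (1/2) * sum_{p,p'} (u_p - u_{p'})^2
#                     = P * sum_p u_p^2 - (sum_p u_p)^2.
import numpy as np

rng = np.random.default_rng(0)
P = 7
u = rng.normal(size=P)

lhs = 0.5 * np.sum((u[:, None] - u[None, :]) ** 2)  # sum over all (p, p') pairs
rhs = P * np.sum(u ** 2) - np.sum(u) ** 2
assert np.isclose(lhs, rhs), (lhs, rhs)
```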
The following lemma follows by exactly the same proof as Lemma 4.3.
**Lemma D.2**.: _For any \(\mathbf{w}\in\mathbb{R}^{d}\), it holds that_
\[\frac{1}{n^{2}P^{2}}\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\big{(}\langle\mathbf{w},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle-\langle\mathbf{w},\mathbf{z}_{i}^{(p)}\rangle\big{)}^{2}=\|\mathbf{w}-\langle\mathbf{w}^{*},\mathbf{w}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{\boldsymbol{\Sigma}}^{2}.\]
The following lemma is the counterpart of Lemma 4.2.
**Lemma D.3**.: _For all \(t\geq 0\), it holds that_
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle\geq\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle+\frac{\eta P\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{16\cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle_{\boldsymbol{\Sigma}}\cdot\mathbf{w}^{*}\|_{\boldsymbol{\Sigma}}^{2}.\]
Proof of Lemma D.3.: By Lemma D.1 and the gradient descent update rule, we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle=\eta\cdot\langle-\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)}),\mathbf{w}^{*}\rangle\] \[=\frac{\eta\gamma^{(t)}}{2n^{2}P\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\sum_{i,i^{\prime}=1}^{n}|\ell^{\prime(t)}_{i}|\cdot|\ell^{\prime(t)}_{i^{\prime}}|\cdot(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\cdot(|\ell^{\prime(t)}_{i^{\prime}}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-|\ell^{\prime(t)}_{i}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\] \[\quad+\frac{\eta\gamma^{(t)}}{2n^{2}P\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\Bigg{(}\sum_{i=1}^{n}|\ell^{\prime(t)}_{i}|\Bigg{)}\cdot\sum_{i^{\prime}=1}^{n}\sum_{p,p^{\prime}=1}^{P}\big{(}\langle\mathbf{w}^{(t)},\mathbf{z}^{(p)}_{i^{\prime}}\rangle-\langle\mathbf{w}^{(t)},\mathbf{z}^{(p^{\prime})}_{i^{\prime}}\rangle\big{)}^{2}\] (D.7)
for all \(t\geq 0\), where \(\mathbf{z}^{(p)}_{i}=y_{i}\cdot\mathbf{x}^{(p)}_{i}\) and \(\mathbf{z}_{i}=\sum_{p=1}^{P}\mathbf{z}^{(p)}_{i}\) for \(i\in[n]\), \(p\in[P]\). Note that
\[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})=y_{i}\cdot\sum_{p=1}^{P}\gamma^{(t)}\cdot\frac{\langle\mathbf{w}^{(t)},\mathbf{x}^{(p)}_{i}\rangle}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}}=\gamma^{(t)}\cdot\frac{\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle}{\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}},\]
and
\[|\ell^{\prime(t)}_{i}|^{-1}=-\{\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)}, \gamma^{(t)},\mathbf{x}_{i})]\}^{-1}=1+\exp(\gamma^{(t)}\cdot\langle\mathbf{w}^ {(t)},\mathbf{z}_{i}\rangle/\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}).\]
With the exact same derivation as (C.8) in the proof of Lemma 4.1, we have
\[\sum_{i,i^{\prime}=1}^{n}|\ell^{\prime(t)}_{i}|\cdot|\ell^{\prime(t)}_{i^{ \prime}}|\cdot(\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-\langle \mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\cdot(|\ell^{\prime(t)}_{i^{\prime}}|^{-1 }\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}\rangle-|\ell^{\prime(t) }_{i}|^{-1}\cdot\langle\mathbf{w}^{(t)},\mathbf{z}_{i}\rangle)\]
\[\geq\frac{1}{8}\exp(-\gamma^{(t)})\sum_{i,i^{\prime}=1}^{n}(\langle{\bf w}^{(t)},{ \bf z}_{i^{\prime}}\rangle-\langle{\bf w}^{(t)},{\bf z}_{i}\rangle)^{2}.\] (D.8)
Moreover, we also have
\[\frac{1}{n}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}| =\frac{1}{n}\sum_{i=1}^{n}\left[1+\exp\left(\frac{\gamma^{(t)}\cdot\langle{\bf w}^{(t)},{\bf z}_{i}\rangle}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle{\bf w}^{(t)},{\bf z}_{j}\rangle^{2}}}\right)\right]^{-1}\] \[\geq\frac{1}{n}\sum_{i=1}^{n}\left[1+\exp\left(\frac{\gamma^{(t)}\cdot|\langle{\bf w}^{(t)},{\bf z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle{\bf w}^{(t)},{\bf z}_{j}\rangle^{2}}}\right)\right]^{-1},\]
where the inequality follows by the fact that \([1+\exp(z)]^{-1}\) is a decreasing function. Further note that \([1+\exp(z)]^{-1}\) is convex over \(z\in[0,+\infty)\). Therefore by Jensen's inequality, we have
\[\frac{1}{n}\sum_{i=1}^{n}|\ell_{i}^{\prime(t)}| \geq\frac{1}{n}\sum_{i=1}^{n}\left[1+\exp\left(\frac{\gamma^{(t)}\cdot|\langle{\bf w}^{(t)},{\bf z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle{\bf w}^{(t)},{\bf z}_{j}\rangle^{2}}}\right)\right]^{-1}\] \[\geq\left[1+\exp\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\gamma^{(t)}\cdot|\langle{\bf w}^{(t)},{\bf z}_{i}\rangle|}{\sqrt{\frac{1}{n}\sum_{j=1}^{n}\langle{\bf w}^{(t)},{\bf z}_{j}\rangle^{2}}}\right)\right]^{-1}\] \[\geq[1+\exp(\gamma^{(t)})]^{-1}\] \[\geq\exp(-\gamma^{(t)})/2.\] (D.9)
Plugging (D.8) and (D.9) into (D.7), we obtain
\[\langle{\bf w}^{(t+1)},{\bf w}^{*}\rangle-\langle{\bf w}^{(t)},{\bf w}^{*}\rangle\] \[\geq\frac{\eta\gamma^{(t)}}{16n^{2}P\|{\bf w}^{(t)}\|_{\Sigma}^{3}}\cdot\exp(-\gamma^{(t)})\sum_{i,i^{\prime}=1}^{n}(\langle{\bf w}^{(t)},{\bf z}_{i^{\prime}}\rangle-\langle{\bf w}^{(t)},{\bf z}_{i}\rangle)^{2}\] \[\quad+\frac{\eta\gamma^{(t)}}{4nP\|{\bf w}^{(t)}\|_{\Sigma}^{3}}\cdot\exp(-\gamma^{(t)})\cdot\sum_{i=1}^{n}\sum_{p,p^{\prime}=1}^{P}\left(\langle{\bf w}^{(t)},{\bf z}_{i}^{(p)}\rangle-\langle{\bf w}^{(t)},{\bf z}_{i}^{(p^{\prime})}\rangle\right)^{2}\] \[\geq\frac{\eta\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{16P\|{\bf w}^{(t)}\|_{\Sigma}^{3}}\cdot\Bigg{[}\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}(\langle{\bf w}^{(t)},{\bf z}_{i^{\prime}}\rangle-\langle{\bf w}^{(t)},{\bf z}_{i}\rangle)^{2}+\frac{1}{n}\sum_{i=1}^{n}\sum_{p,p^{\prime}=1}^{P}\left(\langle{\bf w}^{(t)},{\bf z}_{i}^{(p)}\rangle-\langle{\bf w}^{(t)},{\bf z}_{i}^{(p^{\prime})}\rangle\right)^{2}\Bigg{]}.\]
Recall that \({\bf z}_{i}=\sum_{p=1}^{P}{\bf z}_{i}^{(p)}\). Denoting
\[u_{i,p}=\langle{\bf w}^{(t)},{\bf z}_{i}^{(p)}\rangle-\frac{1}{nP}\sum_{i^{ \prime}=1}^{n}\sum_{p^{\prime}=1}^{P}\langle{\bf w}^{(t)},{\bf z}_{i^{\prime }}^{(p^{\prime})}\rangle\]
for \(i\in[n]\) and \(p\in[P]\), we have
\[\langle{\bf w}^{(t+1)},{\bf w}^{*}\rangle-\langle{\bf w}^{(t)},{\bf w}^{*} \rangle\geq\frac{\eta\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{16P\|{\bf w}^{(t)}\|_ {\Sigma}^{3}}\cdot(I_{1}+I_{2}),\] (D.10)
where
\[I_{1}=\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}\Bigg{(}\sum_{p=1}^{P}u_{i,p}-\sum_{p=1}^{P}u_{i^{\prime},p}\Bigg{)}^{2},\qquad I_{2}=\frac{1}{n}\sum_{i=1}^{n}\sum_{p,p^{\prime}=1}^{P}(u_{i,p}-u_{i,p^{\prime}})^{2}.\]
By direct calculation, we have
\[I_{1} =\frac{1}{n^{2}}\sum_{i,i^{\prime}=1}^{n}\Bigg{[}\Bigg{(}\sum_{p=1}^{P}u_{i,p}\Bigg{)}^{2}-2\cdot\Bigg{(}\sum_{p=1}^{P}u_{i,p}\Bigg{)}\cdot\Bigg{(}\sum_{p=1}^{P}u_{i^{\prime},p}\Bigg{)}+\Bigg{(}\sum_{p=1}^{P}u_{i^{\prime},p}\Bigg{)}^{2}\Bigg{]}\] \[=\frac{2}{n}\sum_{i=1}^{n}\Bigg{(}\sum_{p=1}^{P}u_{i,p}\Bigg{)}^{2},\] (D.11)
where the last equality follows by the definition of \(u_{i,p}\). Moreover, we have
\[I_{2}=\frac{1}{n}\sum_{i=1}^{n}\sum_{p,p^{\prime}=1}^{P}(u_{i,p}^{2}-2u_{i,p}u _{i,p^{\prime}}+u_{i,p^{\prime}}^{2})=\frac{2P}{n}\sum_{i=1}^{n}\sum_{p=1}^{P }u_{i,p}^{2}-\frac{2}{n}\sum_{i=1}^{n}\Bigg{(}\sum_{p=1}^{P}u_{i,p}\Bigg{)}^{2}.\] (D.12)
Plugging (D.11), (D.12) and the definition of \(u_{i,p}\) into (D.10) gives
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)}, \mathbf{w}^{*}\rangle\geq\frac{\eta\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{8n \cdot\|\mathbf{w}^{(t)}\|_{\boldsymbol{\Sigma}}^{3}}\cdot\sum_{i=1}^{n}\sum_{ p=1}^{P}\Bigg{(}\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle-\frac{1}{nP} \sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}\langle\mathbf{w}^{(t)}, \mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle\Bigg{)}^{2}.\]
We continue the calculation as follows:
\[\frac{1}{nP}\sum_{i=1}^{n}\sum_{p=1}^{P}\Bigg{(}\langle\mathbf{w} ^{(t)},\mathbf{z}_{i}^{(p)}\rangle-\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{ p^{\prime}=1}^{P}\langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}^{(p^{\prime})} \rangle\Bigg{)}^{2}\] \[=\frac{1}{nP}\sum_{i=1}^{n}\sum_{p=1}^{P}\langle\mathbf{w}^{(t)}, \mathbf{z}_{i}^{(p)}\rangle^{2}-\Bigg{(}\frac{1}{nP}\sum_{i=1}^{n}\sum_{p=1}^{ P}\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle\Bigg{)}^{2}\] \[=\frac{1}{2}\cdot\Bigg{[}\frac{2}{nP}\sum_{i=1}^{n}\sum_{p=1}^{P} \langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle^{2}-2\cdot\Bigg{(}\frac{1 }{nP}\sum_{i=1}^{n}\sum_{p=1}^{P}\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)} \rangle\Bigg{)}^{2}\Bigg{]}\] \[=\frac{1}{2n^{2}P^{2}}\cdot\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{ \prime}=1}^{P}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle^{2}-2\cdot \langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle\langle\mathbf{w}^{(t)}, \mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle+\langle\mathbf{w}^{(t)},\mathbf{ z}_{i^{\prime}}^{(p^{\prime})}\rangle^{2})\] \[=\frac{1}{2n^{2}P^{2}}\cdot\sum_{i,i^{\prime}=1}^{n}\sum_{p,p^{ \prime}=1}^{P}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle-\langle \mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle)^{2}.\]
Therefore we have
\[\langle\mathbf{w}^{(t+1)},\mathbf{w}^{*}\rangle-\langle\mathbf{w}^{(t)},\mathbf{w}^ {*}\rangle\geq\frac{\eta\gamma^{(t)}\cdot\exp(-\gamma^{(t)})}{16n^{2}P\cdot\| \mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{3}}\cdot\sum_{i,i^{\prime}=1}^{n}\sum_{p, p^{\prime}=1}^{P}(\langle\mathbf{w}^{(t)},\mathbf{z}_{i}^{(p)}\rangle- \langle\mathbf{w}^{(t)},\mathbf{z}_{i^{\prime}}^{(p^{\prime})}\rangle)^{2}.\]
Applying Lemma D.2 finishes the proof.
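The mean-centering step in the second-to-last display is the standard identity "variance equals half of the average pairwise squared difference"; a numerical sanity check (Python, not part of the proof) is given below.

```python
# Check: for any array v of shape (n, P) with overall mean vbar,
#   (1/(nP)) * sum_{i,p} (v_{ip} - vbar)^2
#     = (1/(2 n^2 P^2)) * sum_{i,i',p,p'} (v_{ip} - v_{i'p'})^2.
import numpy as np

rng = np.random.default_rng(1)
n, P = 5, 3
v = rng.normal(size=(n, P))

lhs = np.mean((v - v.mean()) ** 2)
flat = v.reshape(-1)
rhs = np.sum((flat[:, None] - flat[None, :]) ** 2) / (2 * (n * P) ** 2)
assert np.isclose(lhs, rhs), (lhs, rhs)
```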
The following lemma is the counterpart of Lemma 4.4.
**Lemma D.4**.: _For all \(t\geq 0\), it holds that_
\[\|\mathbf{w}^{(t)}\|_{2}^{2}\leq\|\mathbf{w}^{(t+1)}\|_{2}^{2}\leq\|\mathbf{w} ^{(t)}\|_{2}^{2}+4\eta^{2}\cdot\frac{P^{2}\gamma^{(t)2}\cdot\max_{i}\|\mathbf{ x}_{i}\|_{2}^{3}}{\lambda_{\min}^{2}\cdot\|\mathbf{w}^{(t)}\|_{2}^{2}}.\]
_Moreover, if \(\|\mathbf{w}^{(t)}-\langle\mathbf{w}^{(t)},\mathbf{w}^{*}\rangle\cdot\| \mathbf{w}^{*}\|_{2}^{-1}\cdot\mathbf{w}^{*}\|_{2}\leq\|\mathbf{w}^{(0)}\|_{2 }/2\), then_
\[\|\mathbf{w}^{(t+1)}\|_{2}^{2} \leq\|\mathbf{w}^{(t)}\|_{2}^{2}+\eta^{2}G\cdot\max\{\gamma^{(t)2},\gamma^{(t)4}\}\cdot\max\{|\ell_{1}^{\prime(t)}|^{2},\ldots,|\ell_{n}^{\prime(t)}|^{2},\exp(-2\gamma^{(t)})\}\] \[\cdot\left\|\mathbf{w}^{(t)}-\frac{\langle\mathbf{w}^{*},\mathbf{w}^{(t)}\rangle}{\|\mathbf{w}^{*}\|_{2}^{2}}\cdot\mathbf{w}^{*}\right\|_{2}^{2},\]
_where \(G=64P^{3}\lambda_{\min}^{-3}\cdot\max_{i}\|\mathbf{x}_{i}\|_{2}^{6}\cdot\| \mathbf{w}^{(0)}\|_{2}^{-4}\)._
Proof of Lemma D.4.: Note that
\[\nabla_{\mathbf{w}}L(\mathbf{w}^{(t)},\gamma^{(t)}) =\frac{1}{n\cdot\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}}\sum_{i=1}^{n}\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i}\cdot\sum_{p=1}^{P}\gamma^{(t)}\cdot\bigg{(}\mathbf{I}-\frac{\mathbf{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}^{(p)}\] \[=\frac{1}{n\cdot\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}}\sum_{i=1}^{n}\sum_{p=1}^{P}\ell^{\prime}[y_{i}\cdot f(\mathbf{w}^{(t)},\gamma^{(t)},\mathbf{x}_{i})]\cdot y_{i}\cdot\gamma^{(t)}\cdot\bigg{(}\mathbf{I}-\frac{\mathbf{\Sigma}\mathbf{w}^{(t)}\mathbf{w}^{(t)\top}}{\|\mathbf{w}^{(t)}\|_{\mathbf{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}^{(p)}.\]
One can essentially treat \(\mathbf{x}_{i}^{(p)}\), \(i\in[n]\) and \(p\in[P]\) as different data points and use the same proof as Lemma 4.4 to prove the first inequality. As for the second inequality, for any \(\mathbf{w}\in\mathcal{B}^{(t)}:=\{\mathbf{w}\in\mathbb{R}^{d}:\|\mathbf{w}- \mathbf{w}^{(t)}\|_{2}\leq\|\mathbf{w}^{(t)}\|_{2}/2\}\), we have
\[\|\nabla_{\mathbf{w}}f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\|_ {2} =\left\|\sum_{p=1}^{P}\frac{\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{ \Sigma}}}\cdot\bigg{(}\mathbf{I}-\frac{\mathbf{\Sigma ww}^{\top}}{\|\mathbf{w} \|_{\mathbf{\Sigma}}^{2}}\bigg{)}\mathbf{x}_{i}^{(p)}\right\|_{2}\] \[\leq\frac{P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}}\cdot \max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}\cdot\bigg{(}1+\bigg{\|}\frac{\mathbf{ \Sigma ww}^{\top}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}}\bigg{\|}_{2}\bigg{)}\] \[=\frac{P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}}\cdot\max _{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}\cdot\bigg{(}1+\frac{\|\mathbf{\Sigma w}\|_{2 }\cdot\|\mathbf{w}\|_{2}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}}\bigg{)}\] \[\leq\frac{2P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}}\cdot \max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}\cdot\frac{\|\mathbf{\Sigma w}\|_{2} \cdot\|\mathbf{w}\|_{2}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}},\]
where the last inequality follows by the fact that \(\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}=\langle\mathbf{w},\mathbf{\Sigma w}\rangle \leq\|\mathbf{\Sigma w}\|_{2}\cdot\|\mathbf{w}\|_{2}\). Further
plugging in the definition of \(\mathbf{\Sigma}\) gives
\[\|\nabla_{\mathbf{w}}f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\|_{2} \leq\frac{2P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{3}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}\cdot\|\mathbf{w}\|_{2}\cdot\left\|\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}\mathbf{x}_{i^{\prime}}^{(p^{\prime})}\cdot\langle\mathbf{w},\mathbf{x}_{i^{\prime}}^{(p^{\prime})}\rangle\right\|_{2}\] \[\leq\frac{2P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{3}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\cdot\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}|\langle\mathbf{w},\mathbf{x}_{i^{\prime}}^{(p^{\prime})}\rangle|\] \[\leq\frac{2P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{3}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\cdot\sqrt{\frac{1}{nP}\sum_{i^{\prime}=1}^{n}\sum_{p^{\prime}=1}^{P}|\langle\mathbf{w},\mathbf{x}_{i^{\prime}}^{(p^{\prime})}\rangle|^{2}}\] \[=\frac{2P\gamma^{(t)}}{\|\mathbf{w}\|_{\mathbf{\Sigma}}^{2}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\] \[\leq\frac{2P\gamma^{(t)}}{\lambda_{\min}\cdot\|\mathbf{w}\|_{2}^{2}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2}\cdot\|\mathbf{w}\|_{2}\] \[\leq\frac{4P\gamma^{(t)}}{\lambda_{\min}\cdot\|\mathbf{w}^{(t)}\|_{2}}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2},\]
where the third inequality follows by Jensen's inequality, and the last inequality follows by \(\|\mathbf{w}\|_{2}\geq\|\mathbf{w}^{(t)}\|_{2}/2\) for all \(\mathbf{w}\in\mathcal{B}^{(t)}\). Therefore \(f(\mathbf{w},\gamma^{(t)},\mathbf{x}_{i})\) is \((4P\gamma^{(t)}\lambda_{\min}^{-1}\cdot\|\mathbf{w}^{(t)}\|_{2}^{-1}\cdot\max_{i,p}\|\mathbf{x}_{i}^{(p)}\|_{2}^{2})\)-Lipschitz over \(\mathcal{B}^{(t)}\). The rest of the proof is the same as the proof of Lemma 4.4.
### Proof of Theorem 3.4
Proof of Theorem 3.4.: We first show the existence and uniqueness results for the maximum margin and patch-wise uniform margin classifiers. By Example 3.3, the training data patches are given as \(\mathbf{x}_{i}^{(p)}=y_{i}\cdot\mathbf{u}+\boldsymbol{\xi}_{i}^{(p)}\) for \(i\in[n]\), \(p\in[P]\), where \(\boldsymbol{\xi}_{i}^{(p)}\sim N(\mathbf{0},\sigma^{2}(\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u}\|_{2}^{2}))\) are orthogonal to \(\mathbf{u}\). Therefore it is clear that the linear classifier defined by \(\mathbf{u}\) can linearly separate all the data points \((\overline{\mathbf{x}}_{i},y_{i})\), \(i\in[n]\), so the maximum margin solution exists. Its uniqueness then follows by the definition of the maximum margin problem (3.1), which has a strongly convex objective function and linear constraints.
As for the patch-wise uniform margin classifier, the existence also follows by the observation that \(\mathbf{u}\) gives such a classifier with patch-wise uniform margin. Moreover, for a patch-wise uniform margin classifier \(\mathbf{w}\), by definition we have
\[y_{i^{\prime}}\cdot\langle\mathbf{w},\mathbf{x}_{i^{\prime}}^{(p^{\prime})} \rangle=y_{i}\cdot\langle\mathbf{w},\mathbf{x}_{i}^{(p)}\rangle\]
for all \(i,i^{\prime}\in[n]\) and \(p,p^{\prime}\in[P]\). Plugging in the data model \(\mathbf{x}_{i}^{(p)}=y_{i}\cdot\mathbf{u}+\mathbf{\xi}_{i}^{(p)}\) then gives
\[\langle\mathbf{w},y_{i^{\prime}}\cdot\mathbf{\xi}_{i^{\prime}}^{(p^{\prime})}-y_{i }\cdot\mathbf{\xi}_{i}^{(p)}\rangle=0\] (D.13)
for all \(i,i^{\prime}\in[n]\) and \(p,p^{\prime}\in[P]\). Note that \(nP\geq 4n=2d\). Therefore it is easy to see that with probability 1,
\[\mathrm{span}\big{\{}y_{i^{\prime}}\cdot\mathbf{\xi}_{i^{\prime}}^{(p^{\prime})}-y_ {i}\cdot\mathbf{\xi}_{i}^{(p)}:i,i^{\prime}\in[n],\ p,p^{\prime}\in[P]\big{\}}= \mathrm{span}\{\mathbf{u}\}^{\perp}.\]
Therefore by (D.13), we conclude that \(\mathbf{w}\) is parallel to \(\mathbf{u}\), and thus the patch-wise uniform margin
classifier is unique up to a scaling factor. Moreover, this immediately implies that
\[\mathbb{P}_{(\mathbf{x}_{\text{test}},y_{\text{test}})\sim\mathcal{D }}(y_{\text{test}}\cdot\langle\mathbf{w}_{\text{uniform}},\overline{\mathbf{x}}_ {\text{test}}\rangle<0) =\mathbb{P}_{(\mathbf{x}_{\text{test}},y_{\text{test}})\sim \mathcal{D}}\Bigg{[}y_{\text{test}}\cdot\Bigg{\langle}\mathbf{u},\sum_{p=1}^{P} (y_{\text{test}}\cdot\mathbf{u}+\boldsymbol{\xi}_{\text{test}}^{(p)})\Bigg{\rangle} <0\Bigg{]}\] \[=\mathbb{P}_{(\mathbf{x}_{\text{test}},y_{\text{test}})\sim \mathcal{D}}(\|\mathbf{u}\|_{2}^{2}<0)=0.\]
This proves the first result.
As for the maximum margin classifier \(\mathbf{w}_{\text{max}}\), we first denote \(\overline{\boldsymbol{\xi}}_{i}=y_{i}\cdot\sum_{p=1}^{P}\boldsymbol{\xi}_{i}^ {(p)}\). Then \(\overline{\boldsymbol{\xi}}_{i}\), \(i\in[n]\) are independent Gaussian random vectors from \(N(\mathbf{0},\sigma^{2}P(\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u }\|_{2}^{2}))\). Define \(\boldsymbol{\Gamma}=[\overline{\boldsymbol{\xi}}_{1},\overline{\boldsymbol{ \xi}}_{2},\ldots,\overline{\boldsymbol{\xi}}_{n}]\in\mathbb{R}^{d\times n}\). Then focusing on the subspace \(\text{span}\{\mathbf{u}\}^{\perp}\), by Corollary 5.35 in Vershynin (2010), with probability at least \(1-\exp(-\Omega(\sigma^{2}Pd))\) we have
\[\sigma_{\min}(\boldsymbol{\Gamma})\geq(\sqrt{d-1}-\sqrt{n}-\sqrt{ d}/10)\cdot\sigma\sqrt{P}\geq\sigma\sqrt{Pd}/10,\] (D.14) \[\sigma_{\max}(\boldsymbol{\Gamma})\leq(\sqrt{d-1}+\sqrt{n}+\sqrt{ d}/10)\cdot\sigma\sqrt{P}\leq 2\sigma\sqrt{Pd},\] (D.15)
where we use \(\sigma_{\min}(\cdot)\) and \(\sigma_{\max}(\cdot)\) to denote the smallest and largest non-zero singular values of a matrix. Note that by \(d=2n>n+1\), \(\boldsymbol{\Gamma}\) has full column rank. Letting \(\widehat{\mathbf{w}}=\boldsymbol{\Gamma}(\boldsymbol{\Gamma}^{\top}\boldsymbol{\Gamma})^{-1}\mathbf{1}\), we have \(\widehat{\mathbf{w}}\in\text{span}\{\mathbf{u}\}^{\perp}\) and \(\boldsymbol{\Gamma}^{\top}\widehat{\mathbf{w}}=\mathbf{1}\). Therefore
\[y_{i}\cdot\langle\widehat{\mathbf{w}},\overline{\mathbf{x}}_{i}\rangle=\langle\widehat{\mathbf{w}},P\cdot\mathbf{u}+\overline{\boldsymbol{\xi}}_{i}\rangle=0+\langle\widehat{\mathbf{w}},\overline{\boldsymbol{\xi}}_{i}\rangle=\mathbf{e}_{i}^{\top}\boldsymbol{\Gamma}^{\top}\widehat{\mathbf{w}}=1\]
for all \(i\in[n]\). This implies that \(\widehat{\mathbf{w}}\) is a feasible solution to the maximum margin problem (3.1). Therefore by the optimality of \(\mathbf{w}_{\text{max}}\), we have
\[\langle\mathbf{w}_{\text{max}},\mathbf{u}\rangle\cdot\|\mathbf{u} \|_{2}^{-1}\leq\|\mathbf{w}_{\text{max}}\|_{2}\leq\|\widehat{\mathbf{w}}\|_{2 }=\|\boldsymbol{\Gamma}(\boldsymbol{\Gamma}^{\top}\boldsymbol{\Gamma})^{-1} \mathbf{1}\|_{2}=\sqrt{\mathbf{1}^{\top}(\boldsymbol{\Gamma}^{\top} \boldsymbol{\Gamma})^{-1}\mathbf{1}}\leq n\cdot\sigma_{\min}^{-1}(\boldsymbol{ \Gamma}).\]
Then by (D.14), we have
\[\langle\mathbf{w}_{\text{max}},\mathbf{u}\rangle\leq\|\mathbf{u} \|_{2}\cdot n\cdot\sigma_{\min}^{-1}(\boldsymbol{\Gamma})\leq 10\|\mathbf{u}\|_{2} \cdot nP^{-1/2}d^{-1/2}\sigma^{-1}.\]
Now by the assumption that \(\sigma\geq 20\|\mathbf{u}\|_{2}\cdot nP^{1/2}d^{-1/2}\), we have
\[\langle\mathbf{w}_{\text{max}},\mathbf{u}\rangle\leq 1/(2P),\] (D.16)
and therefore
\[\langle\mathbf{w}_{\text{max}},\overline{\boldsymbol{\xi}}_{i}\rangle\geq 1 -\langle\mathbf{w}_{\text{max}},P\cdot\mathbf{u}\rangle\geq 1/2.\] (D.17)
Further note that (3.1) indicates that \(\mathbf{w}_{\text{max}}\in\text{span}\{\mathbf{u},\overline{\boldsymbol{\xi}}_ {1},\overline{\boldsymbol{\xi}}_{2},\ldots,\overline{\boldsymbol{\xi}}_{n}\}\). Therefore we have
\[\|(\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u}\|_{2}^{2 })\mathbf{w}_{\text{max}}\|_{2} =\|\boldsymbol{\Gamma}(\boldsymbol{\Gamma}^{\top}\boldsymbol{ \Gamma})^{-1}\boldsymbol{\Gamma}^{\top}\mathbf{w}_{\text{max}}\|_{2}\] \[=\sqrt{(\boldsymbol{\Gamma}^{\top}\mathbf{w}_{\text{max}})^{\top}( \boldsymbol{\Gamma}^{\top}\boldsymbol{\Gamma})^{-1}(\boldsymbol{\Gamma}^{\top} \boldsymbol{\mathbf{w}}_{\text{max}})}\] \[\geq\|\boldsymbol{\Gamma}^{\top}\mathbf{w}_{\text{max}}\|_{2} \cdot\sigma_{\max}^{-1}(\boldsymbol{\Gamma})\] \[\geq\sqrt{n}\cdot(1/2)\cdot(2\sigma\sqrt{Pd})^{-1}\] \[\geq\sigma^{-1}P^{-1/2}/8,\] (D.18)
where the second inequality follows by (D.17) and (D.15). Now for a new test data point \((\mathbf{x}_{\text{test}},y_{\text{test}})\), we have \(\overline{\mathbf{x}}_{\text{test}}=P\cdot y_{\text{test}}\cdot\mathbf{u}+ \overline{\boldsymbol{\xi}}_{\text{test}}\), where \(\overline{\boldsymbol{\xi}}_{\text{test}}=y_{\text{test}}\cdot\sum_{p=1}^{P} \boldsymbol{\xi}_{\text{test}}^{(p)}\sim N(\mathbf{0},\sigma^{2}P(\mathbf{I}- \mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u}\|_{2}^{2}))\). Then by (D.16), we have
\[\langle\mathbf{w}_{\max},P\mathbf{u}\rangle\leq 1/2.\]
Moreover, by (D.18) we also have
\[\langle\mathbf{w}_{\max},\overline{\boldsymbol{\xi}}_{\text{test}}\rangle\sim N (0,\overline{\sigma}_{\text{test}}^{2}),\ \overline{\sigma}_{\text{test}}\geq\|(\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/ \|\mathbf{u}\|_{2}^{2})\mathbf{w}_{\max}\|_{2}\cdot\sigma\sqrt{P}\geq 1/8.\]
Therefore
\[\mathbb{P}_{(\mathbf{x}_{\text{test}},y_{\text{test}})\sim\mathcal{D}}(y_{ \text{test}}\cdot\langle\mathbf{w}_{\max},\overline{\mathbf{x}}_{\text{test} }\rangle<0)=\mathbb{P}_{(\mathbf{x}_{\text{test}},y_{\text{test}})\sim\mathcal{ D}}(\langle\mathbf{w}_{\max},P\mathbf{u}\rangle+\langle\mathbf{w}_{\max}, \overline{\boldsymbol{\xi}}_{\text{test}}\rangle<0)=\Theta(1).\]
This finishes the proof.
### Proof of Theorem 3.6
Proof of Theorem 3.6.: **Proof for uniform margin solution \(\mathbf{w}_{\text{uniform}}\).** Without loss of generality, we assume \(\|\mathbf{w}_{\text{uniform}}\|_{2}=1\). Then according to the definition of uniform margin, we have for all strong signal data \((\mathbf{x}_{i},y_{i})\) that
\[|\langle\mathbf{w}_{\text{uniform}},\mathbf{u}\rangle|=|\langle\mathbf{w}_{ \text{uniform}},\mathbf{v}\rangle|=|\langle\mathbf{w}_{\text{uniform}}, \boldsymbol{\xi}_{i}\rangle|;\] (D.19)
and for all weak signal data:
\[|\langle\mathbf{w}_{\text{uniform}},\mathbf{u}\rangle|=|\langle\mathbf{w}_{ \text{uniform}},\mathbf{v}\rangle|=|\langle\mathbf{w}_{\text{uniform}}, \boldsymbol{\xi}_{i}+\alpha\zeta_{i}\mathbf{u}\rangle|.\] (D.20)
Then note that \(\mathbf{w}_{\text{uniform}}\) lies in the span of \(\{\mathbf{u},\mathbf{v}\}\cup\{\boldsymbol{\xi}_{i}\}_{i=1,\ldots,n}\), we can get,
\[\left|\left\langle\mathbf{w}_{\text{uniform}},\frac{\mathbf{u}}{\|\mathbf{u} \|_{2}}\right\rangle\right|^{2}+\left|\left\langle\mathbf{w}_{\text{uniform}}, \frac{\mathbf{v}}{\|\mathbf{v}\|_{2}}\right\rangle\right|^{2}+\sum_{i=1}^{n} \left|\left\langle\mathbf{w}_{\text{uniform}},\frac{\boldsymbol{\xi}_{i}+ \alpha\zeta_{i}\mathbf{u}}{\|\boldsymbol{\xi}_{i}+\alpha\zeta_{i}\mathbf{u}\| _{2}}\right\rangle\right|^{2}\geq\|\mathbf{w}_{\text{uniform}}\|_{2}^{2}=1,\]
where we slightly abuse the notation by setting \(\zeta_{i}=0\) if \((\mathbf{x}_{i},y_{i})\) is a strong signal data point. Then, using the facts that \(\|\mathbf{u}\|_{2}=1\), \(\|\mathbf{v}\|_{2}=\alpha^{2}\), and \(\sigma=d^{-1/2}\), we can get that with probability at least \(1-\exp(-\Omega(d))\) with respect to the randomness of training data,
\[|\langle\mathbf{w}_{\text{uniform}},\mathbf{u}\rangle|^{2}+\alpha^{-4}| \langle\mathbf{w}_{\text{uniform}},\mathbf{v}\rangle|^{2}+\sum_{i=1}^{n}| \langle\mathbf{w}_{\text{uniform}},\boldsymbol{\xi}_{i}+\alpha\zeta_{i} \mathbf{u}\rangle|^{2}\geq c\]
for some absolute positive constant \(c\). Then by (D.19) and (D.20) and using the fact that \(\alpha=n^{-1/4}\), we can immediately get that
\[\langle\mathbf{w}_{\text{uniform}},\mathbf{u}\rangle=\langle\mathbf{w}_{ \text{uniform}},\mathbf{v}\rangle=\Omega(n^{-1/2}).\]
We will then move on to the test phase. Consider a new strong signal data \((\mathbf{x},y)\) with \(\mathbf{x}=[\mathbf{u},\boldsymbol{\xi}]\)
and \(y=1\), we have
\[\langle\mathbf{w}_{\text{uniform}},\mathbf{x}^{(1)}\rangle+\langle\mathbf{w}_{ \text{uniform}},\mathbf{x}^{(2)}\rangle=\langle\mathbf{w}_{\text{uniform}}, \mathbf{u}\rangle+\langle\mathbf{w}_{\text{uniform}},\boldsymbol{\xi}\rangle= \Omega(n^{-1/2})+\xi,\]
where \(\xi\) is an independent Gaussian random variable with variance smaller than \(\sigma^{2}\). Additionally, given a weak signal data \((\mathbf{x},y)\) with \(\mathbf{x}=[\mathbf{v},\boldsymbol{\xi}]\) and \(y=1\), we have
\[\langle\mathbf{w}_{\text{uniform}},\mathbf{x}^{(1)}\rangle+ \langle\mathbf{w}_{\text{uniform}},\mathbf{x}^{(2)}\rangle =\langle\mathbf{w}_{\text{uniform}},\mathbf{v}\rangle+\langle \mathbf{w}_{\text{uniform}},\boldsymbol{\xi}+\alpha\zeta\mathbf{u}\rangle\] \[=\langle\mathbf{w}_{\text{uniform}},\mathbf{v}\rangle\cdot(1+ \alpha\zeta)+\xi\] \[=\Omega(n^{-1/2})+\xi,\]
where the second equation is due to \(\langle\mathbf{w}_{\text{uniform}},\mathbf{u}\rangle=\langle\mathbf{w}_{\text{uniform}},\mathbf{v}\rangle\) and \(\xi\) is an independent Gaussian random variable with variance smaller than \(\sigma^{2}\). Further note that \(d=\omega(n\log(n))\) and \(\sigma=d^{-1/2}\); we can then immediately get that with probability at least \(1-1/\text{poly}(n)\geq 1-1/n^{10}\), the random variable \(\xi\) will not exceed \(n^{-1/2}\) in absolute value. This completes the proof for the uniform margin solution.
**Proof for maximum margin solution \(\mathbf{w}_{\text{max}}\).** For the maximum margin solution, we consider
\[\mathbf{w}_{\text{max}}=\arg\min_{\mathbf{w}}\|\mathbf{w}\|_{2}\quad\text{s.t. }y_{i}\mathbf{w}^{\top}[\mathbf{x}_{i}^{(1)}+\mathbf{x}_{i}^{(2)}]\geq 1.\] (D.21)
We first prove an upper bound on the norm of \(\mathbf{w}_{\text{max}}\) as follows. Based on the above definition, the upper bound can be obtained by simply finding a \(\mathbf{w}\) that satisfies the margin requirements. Therefore, letting \(\mathbf{z}_{i}=y_{i}\cdot[\mathbf{x}_{i}^{(1)}+\mathbf{x}_{i}^{(2)}]\), we consider a candidate solution \(\widehat{\mathbf{w}}\) that satisfies
\[\text{for all strong signal data:}\quad\widehat{\mathbf{w}}^{\top}\mathbf{u}=1,\ y_{i}\widehat{\mathbf{w}}^{\top}\boldsymbol{\xi}_{i}=0;\] \[\text{for all weak signal data:}\quad\widehat{\mathbf{w}}^{\top}\mathbf{v}=0,\ y_{i}\widehat{\mathbf{w}}^{\top}(\boldsymbol{\xi}_{i}+\alpha\zeta_{i}\mathbf{u})=1.\]
Then, let \(\mathbf{P}_{\mathcal{E}^{c}}\) be the projection on the subspace that is orthogonal to all noise vectors of the strong signal data. Then the above condition for weak signal data requires
\[y_{i}\langle\widehat{\mathbf{w}},\mathbf{P}_{\mathcal{E}^{c}}\boldsymbol{\xi} _{i}\rangle=1-\alpha y_{i}\zeta_{i},\]
where we use the fact that \(\langle\widehat{\mathbf{w}},\mathbf{u}\rangle=1\). Let \(n_{1}\) and \(n_{2}\) be the numbers of strong signal data and weak signal data respectively, which clearly satisfy \(n_{1}=\Theta(n)\) and \(n_{2}=\Theta(\rho n)=\Theta(n^{1/4})\) with probability at least \(1-\exp(-\Omega(n^{1/4}))\). Define \(\mathbf{r}\in\mathbb{R}^{n_{2}\times 1}\) to be the collection of \(y_{i}(1-\alpha y_{i}\zeta_{i})\) over all weak signal data, let \(\boldsymbol{\omega}_{i}=\mathbf{P}_{\mathcal{E}^{c}}\boldsymbol{\xi}_{i}\), and set \(\boldsymbol{\Omega}\in\mathbb{R}^{n_{2}\times d}\) as the collection of the \(\boldsymbol{\omega}_{i}\)'s. Then the above equation can be rewritten as \(\boldsymbol{\Omega}\widehat{\mathbf{w}}=\mathbf{r}\), which leads to a feasible solution
\[\widehat{\mathbf{w}}=\mathbf{u}+\boldsymbol{\Omega}^{\top}(\boldsymbol{\Omega }\boldsymbol{\Omega}^{\top})^{-1}\mathbf{r}.\]
Then note that the noise vectors for all data points are independent; conditioning on \(\mathbf{P}_{\mathcal{E}^{c}}\), the random vector \(\boldsymbol{\omega}_{i}=\mathbf{P}_{\mathcal{E}^{c}}\boldsymbol{\xi}_{i}\) can still be regarded as a Gaussian random vector in a \((d-n_{1}-2)\)-dimensional space. Then by standard random matrix theory, we can get that with probability at least \(1-\exp(-\Omega(d))\) with respect to the randomness of training data, \(\lambda_{\min}(\boldsymbol{\Omega}\boldsymbol{\Omega}^{\top}),\lambda_{\max}(\boldsymbol{\Omega}\boldsymbol{\Omega}^{\top})=\Theta\big{(}\sigma^{2}\cdot(d-n_{1}-2)\big{)}=\Theta(1)\). Further noting that \(\|\mathbf{r}\|_{2}=\Theta(n_{2}^{1/2})\), we finally obtain that
\[\|\mathbf{w}_{\text{max}}\|_{2}^{2}\leq\|\widehat{\mathbf{w}}\|_{2}^{2}=1+\big{\|} \boldsymbol{\Omega}^{\top}(\boldsymbol{\Omega}\boldsymbol{\Omega}^{\top})^{-1} \mathbf{r}\big{\|}_{2}^{2}=\Theta(n_{2})=\Theta(n^{1/4}).\] (D.22)
Therefore, we can further get
\[\langle\mathbf{w}_{\max},\mathbf{v}\rangle\leq\|\mathbf{w}_{\max}\|_{2}\cdot\| \mathbf{v}\|_{2}=O\big{(}\alpha^{2}n^{1/8}\big{)}=O(n^{-7/8}).\]
Next, we will show that \(\langle\mathbf{w}_{\max},\mathbf{u}\rangle>1/2\). In particular, under the margin condition in (D.21), we have for all strong signal data
\[\langle\mathbf{w}_{\max},\mathbf{u}\rangle+\langle\mathbf{w}_{\max},y_{i} \boldsymbol{\xi}_{i}\rangle\geq 1.\]
Then if \(\langle\mathbf{w}_{\max},\mathbf{u}\rangle\leq 1/2\), we get \(\langle\mathbf{w}_{\max},y_{i}\boldsymbol{\xi}_{i}\rangle\geq 1/2\) for all strong signal data. Letting \(\mathcal{S}_{\mathbf{u}}\) be the collection of indices of the strong signal data, we further have
\[\left\langle\mathbf{w}_{\max},\sum_{i\in\mathcal{S}_{\mathbf{u}}}y_{i}\boldsymbol{\xi}_{i}\right\rangle\geq\frac{n_{1}}{2}.\]
Letting \(\boldsymbol{\xi}^{\prime}=\sum_{i\in\mathcal{S}_{\mathbf{u}}}y_{i}\boldsymbol{\xi}_{i}\), it can be seen that \(\boldsymbol{\xi}^{\prime}\) is also a Gaussian random vector with covariance matrix \(n_{1}\sigma^{2}\big{(}\mathbf{I}-\mathbf{u}\mathbf{u}^{\top}/\|\mathbf{u}\|_{2}^{2}-\mathbf{v}\mathbf{v}^{\top}/\|\mathbf{v}\|_{2}^{2}\big{)}\). This implies that conditioning on \(\mathcal{S}_{\mathbf{u}}\), with probability at least \(1-\exp(-\Omega(d))\), it holds that \(\|\boldsymbol{\xi}^{\prime}\|_{2}=\Theta(n_{1}^{1/2})\). Then using the fact that \(n_{1}=\Theta(n)\), it further follows that
\[\|\mathbf{w}_{\max}\|_{2}\geq\frac{n_{1}}{2\|\boldsymbol{\xi}^{\prime}\|_{2}}=\Theta(n^{1/2}),\]
which contradicts the upper bound on the norm of maximum margin solution we have proved in (D.22). Therefore we must have \(\langle\mathbf{w}_{\max},\mathbf{u}\rangle>1/2\).
Finally, we are ready to evaluate the test error of \(\mathbf{w}_{\max}\). In particular, since we are proving a lower bound on the test error, we will only consider the weak signal data while assuming all strong signal data can be correctly classified. Consider a weak signal data point \((\mathbf{x},y)\) with \(\mathbf{x}=[\mathbf{v},\boldsymbol{\xi}]\) and \(y=1\); we have
\[\langle\mathbf{w}_{\max},\mathbf{x}^{(1)}\rangle+\langle\mathbf{ w}_{\max},\mathbf{x}^{(2)}\rangle =\langle\mathbf{w}_{\max},\mathbf{v}\rangle+\langle\mathbf{w}_{ \max},\boldsymbol{\xi}+\alpha\zeta\mathbf{u}\rangle\] \[=\langle\mathbf{w}_{\max},\mathbf{v}\rangle+\zeta\cdot\alpha \langle\mathbf{w}_{\max},\mathbf{u}\rangle+\|\mathbf{w}_{\max}\|_{2}\cdot\xi,\]
where \(\xi\) is a random variable with variance smaller than \(\sigma_{0}^{2}\). Note that \(\zeta=-1\) with probability \(1/2\). In this case, using our previous results on \(\langle\mathbf{w}_{\max},\mathbf{v}\rangle\) and \(\langle\mathbf{w}_{\max},\mathbf{u}\rangle\), and the fact that \(\xi\) is independent of \(\mathbf{v}\) and \(\mathbf{u}\), we can get with probability at least \(1/2\),
\[\langle\mathbf{w}_{\max},\mathbf{x}^{(1)}\rangle+\langle\mathbf{ w}_{\max},\mathbf{x}^{(2)}\rangle\leq\Theta(n^{-7/8})-\Theta(n^{-1/2})<0\]
which will lead to an incorrect prediction. This implies that the population prediction error on weak signal data is at least \(1/4\). Noting that weak signal data appear with probability \(\rho\), combining these facts completes the proof.
## Proofs of Technical Lemmas
### Proof of Lemma C.1
Proof of Lemma C.1.: First, it is easy to see that
\[\sum_{i,i^{\prime}=1}^{n}\max\{b_{i},b_{i^{\prime}}\}\cdot(a_{i}-a_{i^{\prime}})^ {2}=2\sum_{i=1}^{n}\sum_{i^{\prime}>i}b_{i}\cdot(a_{i^{\prime}}-a_{i})^{2}.\] (E.1)
Then, we can also observe that for any \(i<j\), we have
\[\sum_{i^{\prime}>i}(a_{i^{\prime}}-a_{i})^{2}\geq\sum_{i^{\prime}>j}(a_{i^{ \prime}}-a_{i})^{2}\geq\sum_{i^{\prime}>j}(a_{i^{\prime}}-a_{j})^{2},\] (E.2)
where we use the fact that \(a_{i}\leq a_{j}\). Then, letting \(\bar{b}=\frac{\sum_{i=1}^{n}b_{i}}{n}\) and \(k^{*}\) be the index satisfying \(b_{k^{*}-1}\geq\bar{b}/2>b_{k^{*}}\), we can immediately get that
\[\sum_{i\geq k^{*}}b_{i}\leq\frac{\sum_{i=1}^{n}b_{i}}{2n}\cdot n=\frac{\sum_{ i=1}^{n}b_{i}}{2},\] (E.3)
which further implies that \(\sum_{i<k^{*}}b_{i}\geq n\bar{b}/2\). Then by (E.2), we can get
\[\frac{\bar{b}}{2}\cdot\sum_{i\geq k^{*}}\sum_{i^{\prime}>i}(a_{i^{\prime}}-a_ {i})^{2}\leq\frac{\bar{b}}{2}\cdot\sum_{i\geq k^{*}}\sum_{i^{\prime}>k^{*}}(a _{i^{\prime}}-a_{k^{*}})^{2}\leq\frac{n\bar{b}}{2}\cdot\sum_{i^{\prime}>k^{*}} (a_{i^{\prime}}-a_{k^{*}})^{2}.\]
Besides, we also have
\[\sum_{i<k^{*}}\sum_{i^{\prime}>i}b_{i}\cdot(a_{i^{\prime}}-a_{i})^{2}\geq \bigg{(}\sum_{i<k^{*}}b_{i}\bigg{)}\cdot\sum_{i^{\prime}>k^{*}}(a_{i^{\prime} }-a_{k^{*}})^{2}\geq\frac{n\bar{b}}{2}\cdot\sum_{i^{\prime}>k^{*}}(a_{i^{ \prime}}-a_{k^{*}})^{2},\]
which immediately implies that
\[\sum_{i<k^{*}}\sum_{i^{\prime}>i}b_{i}\cdot(a_{i^{\prime}}-a_{i})^{2}\geq \frac{\bar{b}}{2}\cdot\sum_{i\geq k^{*}}\sum_{i^{\prime}>i}(a_{i^{\prime}}-a_ {i})^{2}.\]
Therefore, we can get
\[\sum_{i=1}^{n}\sum_{i^{\prime}>i}b_{i}\cdot(a_{i^{\prime}}-a_{i}) ^{2} \geq\sum_{i<k^{*}}\sum_{i^{\prime}>i}b_{i}\cdot(a_{i^{\prime}}-a_ {i})^{2}\] \[\geq\sum_{i<k^{*}}\sum_{i^{\prime}>i}\frac{b_{i}}{2}\cdot(a_{i^{ \prime}}-a_{i})^{2}+\frac{\bar{b}}{4}\cdot\sum_{i\geq k^{*}}\sum_{i^{\prime}>i }(a_{i^{\prime}}-a_{i})^{2}\] \[\geq\frac{\bar{b}}{4}\sum_{i=1}^{n}\sum_{i^{\prime}>i}(a_{i^{ \prime}}-a_{i})^{2},\]
where we use the fact that \(b_{i}\geq\bar{b}/2\) for all \(i<k^{*}\). Finally, putting the above inequality into (E.1) and applying the definition of \(\bar{b}\), we complete the proof.
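For readers who want an empirical check of the conclusion (not part of the proof), the following Python sketch tests the resulting inequality \(\sum_{i,i^{\prime}}\max\{b_{i},b_{i^{\prime}}\}(a_{i}-a_{i^{\prime}})^{2}\geq\frac{\bar{b}}{4}\sum_{i,i^{\prime}}(a_{i}-a_{i^{\prime}})^{2}\) on random inputs, assuming (as the use of (E.1) and (E.2) suggests) that \(a\) is nondecreasing and \(b\) is nonincreasing and nonnegative.

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    n = int(rng.integers(2, 20))
    a = np.sort(rng.normal(size=n))          # nondecreasing
    b = np.sort(rng.uniform(size=n))[::-1]   # nonincreasing, nonnegative
    sq = (a[:, None] - a[None, :]) ** 2
    lhs = np.sum(np.maximum(b[:, None], b[None, :]) * sq)
    rhs = b.mean() / 4 * np.sum(sq)
    assert lhs >= rhs - 1e-12, (lhs, rhs)
print("inequality held on all random trials")
```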
### Proof of Lemma C.2
Proof of Lemma C.2.: We first show the lower bound of \(a^{(t)}\). Consider a continuous-time sequence \(\underline{a}^{(t)}\), \(t\geq 0\), defined by the integral equation
\[\underline{a}^{(t)}=\underline{a}^{(0)}+c\cdot\int_{0}^{t}\exp(- \underline{a}^{(\tau)})\mathrm{d}\tau,\quad\underline{a}^{(0)}=a^{(0)}.\] (E.4)
Note that \(\underline{a}^{(t)}\) is obviously an increasing function of \(t\). Therefore we have
\[\underline{a}^{(t+1)} =\underline{a}^{(t)}+c\cdot\int_{t}^{t+1}\exp(-\underline{a}^{( \tau)})\mathrm{d}\tau\] \[\leq\underline{a}^{(t)}+c\cdot\int_{t}^{t+1}\exp(-\underline{a}^ {(t)})\mathrm{d}\tau\] \[=\underline{a}^{(t)}+c\cdot\exp(-\underline{a}^{(t)})\]
for all \(t\in\mathbb{N}\). Comparing the above inequality with the iterative formula \(a^{(t+1)}=a^{(t)}+c\cdot\exp(-a^{(t)})\), we conclude by the comparison theorem that \(a^{(t)}\geq\underline{a}^{(t)}\) for all \(t\in\mathbb{N}\). Note that (E.4) has an exact solution
\[\underline{a}^{(t)}=\log(c\cdot t+\exp(a^{(0)})).\]
Therefore we have
\[a^{(t)}\geq\log(c\cdot t+\exp(a^{(0)}))\]
for all \(t\in\mathbb{N}\), which completes the first part of the proof. Now for the upper bound of \(a^{(t)}\), we have
\[a^{(t)} =a^{(0)}+c\cdot\sum_{\tau=0}^{t-1}\exp(-a^{(\tau)})\] \[\leq a^{(0)}+c\cdot\sum_{\tau=0}^{t-1}\exp[-\log(c\cdot\tau+\exp(a^{(0)}))]\] \[=a^{(0)}+c\cdot\sum_{\tau=0}^{t-1}\frac{1}{c\cdot\tau+\exp(a^{(0)})}\] \[=a^{(0)}+\frac{c}{\exp(a^{(0)})}+c\cdot\sum_{\tau=1}^{t-1}\frac{1}{c\cdot\tau+\exp(a^{(0)})}\] \[\leq a^{(0)}+\frac{c}{\exp(a^{(0)})}+c\cdot\int_{0}^{t}\frac{1}{c\cdot\tau+\exp(a^{(0)})}\mathrm{d}\tau,\]
where the first inequality follows by the lower bound of \(a^{(t)}\) established in the first part of this lemma, and the last inequality follows by comparing the sum with the corresponding integral. Therefore we have
\[a^{(t)} \leq a^{(0)}+\frac{c}{\exp(a^{(0)})}+\log(c\cdot t+\exp(a^{(0)})) -\log(\exp(a^{(0)}))\] \[=c\exp(-a^{(0)})+\log(c\cdot t+\exp(a^{(0)})).\]
This finishes the proof.
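A quick numerical check (not part of the proof) confirms that the recursion \(a^{(t+1)}=a^{(t)}+c\cdot\exp(-a^{(t)})\) is indeed sandwiched between the two logarithmic bounds; the Python sketch below uses arbitrary illustrative values of \(a^{(0)}\) and \(c\).

```python
import numpy as np

a0, c, T = 0.3, 0.5, 10_000
a = a0
for t in range(T + 1):
    lower = np.log(c * t + np.exp(a0))
    upper = c * np.exp(-a0) + np.log(c * t + np.exp(a0))
    assert lower - 1e-12 <= a <= upper + 1e-12, (t, lower, a, upper)
    a = a + c * np.exp(-a)       # one step of the recursion
print("bounds held for all t <=", T)
```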
2303.01265 | Steering Graph Neural Networks with Pinning Control | Acong Zhang, Ping Li, Guanrong Chen | 2023-03-02T13:50:23Z | http://arxiv.org/abs/2303.01265v2

# Steering Graph Neural Networks with Pinning Control
###### Abstract
In the semi-supervised setting where labeled data are largely limited, it remains to be a big challenge for message passing based graph neural networks (GNNs) to learn feature representations for the nodes with the same class label that is distributed discontinuously over the graph. To resolve the discontinuous information transmission problem, we propose a control principle to supervise representation learning by leveraging the prototypes (i.e., class centers) of labeled data. Treating graph learning as a discrete dynamic process and the prototypes of labeled data as "desired" class representations, we borrow the pinning control idea from automatic control theory to design learning feedback controllers for the feature learning process, attempting to minimize the differences between message passing derived features and the class prototypes in every round so as to generate class-relevant features. Specifically, we equip every node with an optimal controller in each round through learning the matching relationships between nodes and the class prototypes, enabling nodes to rectify the aggregated information from incompatible neighbors in a graph with strong heterophily. Our experiments demonstrate that the proposed PCGCN model achieves better performances than deep GNNs and other competitive heterophily-oriented methods, especially when the graph has very few labels and strong heterophily.
Graph neural network, pinning control, heterophily, supervised feature learning
## 1 Introduction
Graphs or networks are widely used for describing the interactions between elements of a complex system, such as those in social networks [5], knowledge graphs [6], molecular graphs [4], and recommender systems [7]. To deal with such non-Euclidean data in various graph analytical tasks such as node classification [3] and link prediction [2], graph neural networks (GNNs) [13, 43] have been developed and have shown superior performance.
The core of current GNNs such as GCN [12] is message passing. In message passing, feature representations are learned for each node by recursively performing aggregation and transformation on the representations of its immediate neighbors, so that information about long-distance neighbors can be captured. However, it is still challenging for the labeled nodes to propagate their information far away using a conventional message passing algorithm, since the influence of labeled nodes decays as the topological distance increases [30]. Moreover, increasing the number of message passing rounds leads to oversmoothing [1, 30], i.e., the case where representations are determined by the graph structure and become indistinguishable. While techniques like the residual connections used in GCNII [19] allow the network architecture to be deeper, they substantially increase the number of learnable parameters and the computational complexity of the GNN.
Another shortcoming of message passing is its negative smoothing effect in circumstances where nodes of the same type are distributed discontinuously in the topology space. For instance, in heterophilious graphs, the immediate neighbors of a node come from different classes. It has been revealed [37] that, in smoothing such nodes, message passing forcefully makes the feature representations of nodes with different labels approximate the average of the local neighborhood, thus deteriorating representation learning. Previous work [19, 37, 42] suggests some solutions that improve the aggregation scheme using intermediate representations (e.g., residual connections) or higher-order neighborhoods, but they can be sub-optimal: these methods either leverage graph-level statistics about homophily while neglecting the differences in homophily level between nodes, or depend on stacking more convolution layers.
In this paper, we address these issues by proposing a novel principle that enhances supervision for message passing with the existing labeled data. Intuitively, the representation of a node should be close not only to those of its local neighbors but also to the representation of the prototype of the class it belongs to. Thus, the class prototypes (i.e., class centers) of the labeled nodes are ideal references for node representation learning. In other words, the class prototypes can be used to supervise node representation learning. In this work, we propose a strategy to implement this class prototype supervised message passing.
Our idea comes from pinning control of complex networks [7], where the coupled nodes are dynamic variables and a certain number of controllers are "pinned" (i.e., exerted) on some nodes to regulate the behaviors of all agents towards a desired common state. Here, a controller is a control feedback scheme, which feeds back the difference between a dynamical variable and a desired state so that the variable will asymptotically approach the desired state. Inspired by the pinning control idea, we consider the feature representation learning as a discrete dynamic process and the representations of class prototypes in training data as the "desired states", with which we design the controllers
to regulate node representations based on the differences between the current node representations and the "desired class representations". In this process, the class prototypes play the role of supervision.
Different from pinning control in complex systems, where the problem is to decide the minimum number of nodes needed for achieving global synchronization, our goal is to infer class labels for all unlabeled nodes, while allowing all nodes to be "pinned" (i.e., supervised). A challenge in applying pinning control to GNNs is which "controllers" should be used to pin which nodes. This is unknown beforehand, because one controller is associated with only a certain class (corresponding to one desired class prototype representation). Ideally, each node should be supervised by one controller associated with the class of that node, but this is impossible for those nodes whose labels are invisible. To resolve this issue, we propose a dynamic pinning control method, which learns the matching relationships between nodes and class prototypes (i.e., a set of "desired states") each time message passing is performed. This way, the pinning control can be adjusted adaptively so as to better align the desired states and the pinned nodes in each graph convolution iteration. By steering the message passing with class prototype-based pinning control, it is possible to teleport the information about classes to the regions that are weakly influenced by the labeled nodes without resorting to deep architectures. Meanwhile, the feedback from the controllers allows message passing to rectify the noisy information aggregated from the incompatible1 neighbors of the central node in a heterophilious graph. Thus, the proposed dynamic pinning control differs from the conventional pinning control in "pinning" the class labels rather than pinning the nodes. We experimentally verify these points by comparing our method with the vanilla message passing-based GCN and the state-of-the-art GNNs for the task of semi-supervised node classification across the full spectrum of heterophily.
Footnote 1: Following previous work [37], we use compatibility to indicate whether two connected nodes have the same class label, thus two connected nodes with different labels are called incompatible. The overall compatibility of the connected pairs is measured by the homophily index. That is, two connected nodes in a homophilious graph are more likely to be compatible than those in a heterophilious graph.
In summary, the main contributions of our work are as follows:
* We propose a novel graph representation learning framework by introducing the methodology of pinning control into message passing, which uses learning feedback controllers to supervise representation learning towards the representations of class prototypes so as to transmit the class-relevant information to each node directly.
* We develop an end-to-end model that learns the representations of class prototypes, dynamically selects a pinning controller for each node, and updates the pinning control relationships adaptively during message passing, which enables unlabeled nodes to be directly supervised by the prototype of their latent class, solving the problem of distant message passing.
* We conduct extensive experiments on a variety of real-world graph datasets, demonstrating that the proposed method improves the performance of the vanilla message passing GCN by a large margin and generally outperforms the state-of-the-art GNN models with different message passing schemes, especially when the network has limited labels.
## 2 Preliminary
**Notation.** Consider an undirected and unweighted graph \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) with \(f\)-dimensional attributes \(\mathbf{X}\in\mathbb{R}^{n\times f}\) on the nodes, where \(\mathbf{V}\) is the set of nodes, \(\mathbf{E}\) is the set of edges, and \(n=|\mathbf{V}|\) is the number of nodes. The adjacency matrix associated with graph \(\mathbf{G}\) is denoted as \(\mathbf{A}\in\mathbb{R}^{n\times n}\). Let \(\mathbf{D}\) be the diagonal degree matrix. Then, the graph with a self-loop at every node can be represented as \(\widetilde{\mathbf{A}}=\mathbf{A}+\mathbf{I_{n}}\), and the corresponding diagonal degree matrix is \(\widetilde{\mathbf{D}}=\mathbf{D}+\mathbf{I_{n}}\). Thus, the self-looped adjacency can be symmetrically normalized as \(\widehat{\mathbf{A}}=\widetilde{\mathbf{D}}^{-1/2}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-1/2}\). In this work, we focus on semi-supervised node classification [21, 22], which trains a classifier \(f_{\theta}(\cdot)\) on the labeled node set \(\mathbf{T}\) to predict class labels \(y\) for the unlabeled node set \(\mathbf{U}=\mathbf{V}-\mathbf{T}\). We denote the training sets of different classes by \((C_{1},C_{2},...,C_{c})\), where \(c\) is the number of classes.
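For concreteness, the normalization above can be computed in a few lines; the following Python/NumPy sketch uses a made-up toy graph, not one of the benchmark datasets.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # adjacency of a toy undirected graph
A_tilde = A + np.eye(len(A))                # add a self-loop at every node
d = A_tilde.sum(axis=1)                     # degrees of the self-looped graph
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetrically normalized adjacency
```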
**Homophily and Heterophily.** As one of the basic graph properties, homophily means that connected node pairs tend to have similar features and belong to the same class. Conversely, connected node pairs are less similar under heterophily. We measure homophily-heterophily using the relationships between node labels and graph structure. There are two common ways to measure homophily: edge homophily [37] and node homophily [36], defined as follows:
**Definition 1**.: (**Homophily Ratio**). Given a graph \(G\), the homophily ratio is \(\widetilde{h}=\frac{|\{(v_{i},v_{j})|(v_{i},v_{j})\in E\wedge y_{i}=y_{j}\}|}{|E|}\), where the numerator counts the intra-class edges \((v_{i},v_{j})\).
**Definition 2**.: (**Node-level Homophily Ratio**). Investigating homophily on graphs from a local perspective, the node-level homophily ratio is defined as \(\widehat{h}_{i}=\frac{|\{v_{j}|v_{j}\in\mathcal{N}(v_{i})\wedge y_{j}=y_{i}\}|}{|\mathcal{N}(v_{i})|}\).
The homophily is strong if the homophily ratio is close to 1, while the heterophily is strong if the homophily ratio is close to 0.
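Both ratios are straightforward to compute from an edge list; the Python sketch below (with made-up labels on a toy graph) illustrates Definitions 1 and 2.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]   # undirected edges of a toy graph
y = np.array([0, 0, 1, 0])                 # node class labels

# Definition 1: fraction of intra-class edges.
h_edge = np.mean([y[u] == y[v] for u, v in edges])

# Definition 2: per-node fraction of neighbors sharing the node's label.
nbrs = {i: [] for i in range(len(y))}
for u, v in edges:
    nbrs[u].append(v)
    nbrs[v].append(u)
h_node = np.array([np.mean([y[j] == y[i] for j in nbrs[i]])
                   for i in range(len(y))])
print(h_edge, h_node)
```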
**Message Passing GNN.** Learning a representation vector \(h_{v}\) for each node \(v\) from the graph structure and node attributes \(\mathbf{X}\) lies at the core of GNNs. Modern GNNs follow message passing in a neighborhood to approximate graph convolutions, where one iteratively updates the representation of a node by aggregating the representations of its neighbors. After \(k\) rounds of message passing, the structural information within the node's \(k\)-hop neighborhood is captured by its representation. Formally, the \(k\)-th step of message passing in a GNN is
\[\begin{split} a_{v}^{(k)}&=AGGREGATE^{(k)}(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\}),\\ h_{v}^{(k)}&=COMBINE^{(k)}(h_{v}^{(k-1)},a_{v}^{(k)}),\end{split} \tag{1}\]
where \(h_{v}^{(k)}\) is the representation of node \(v\) in the \(k\)-th layer and \(\mathcal{N}(v)\) is the set of nodes directly connected to node \(v\). At initialization, \(h_{v}^{(0)}=\mathbf{X}_{v}\). By choosing the element-wise _mean_ pooling of the neighborhood \(\mathcal{N}(v)\) as the \(AGGREGATE^{(k)}(\cdot)\) function and summation as \(COMBINE^{(k)}(\cdot)\), the vanilla GCN [12] can be formulated as the composition of two functions:
\[\mathbf{H}^{(l+1)}=ReLU(\widehat{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}), \tag{2}\]
where \(\mathbf{W}^{(l)}\) is a layer-specific trainable weight matrix, which can be learned by minimizing the cross-entropy between ground truth and predicted labels on the training set \(\mathbf{T}\), as
\[\mathcal{L}=-\sum_{v_{i}\in\mathbf{T}}\sum_{k=1}^{c}y_{ik}\ln z_{ik}, \tag{3}\]
in which \(z=softmax(\mathbf{H})\) is the output of the last layer.
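To make Eq.(2) and Eq.(3) concrete, here is a minimal NumPy sketch of a two-layer GCN forward pass and its training loss. The shapes, random initialization, and the identity stand-in for \(\widehat{\mathbf{A}}\) are illustrative assumptions only, and the loss is averaged rather than summed over the labeled set.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f, hidden, c = 4, 5, 8, 2
A_hat = np.eye(n)                        # stand-in for the normalized adjacency
X = rng.normal(size=(n, f))              # node attributes
W0 = rng.normal(scale=0.1, size=(f, hidden))
W1 = rng.normal(scale=0.1, size=(hidden, c))

H1 = np.maximum(A_hat @ X @ W0, 0.0)     # H^{(1)} = ReLU(A_hat X W^{(0)}), Eq.(2)
H2 = A_hat @ H1 @ W1                     # last-layer logits
Z = np.exp(H2 - H2.max(axis=1, keepdims=True))
Z = Z / Z.sum(axis=1, keepdims=True)     # z = softmax(H)

train_idx = np.array([0, 2])             # labeled node set T
y_true = np.array([0, 1])                # their class labels
loss = -np.log(Z[train_idx, y_true]).mean()   # cross-entropy, cf. Eq.(3)
```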
It is interesting to view Eq.(2) as a coupled discrete dynamic system, wherein each node, represented by its feature vector, evolves at every iteration. Therefore, it is possible to adopt control methods to guide the learning process towards some desired states (e.g., class centers in the training data), obtaining class-relevant representations.
**Pinning Control.** Our method is inspired by _Pinning Control_ of complex networks [7], which aims to synchronize a set of coupled nonlinear systems to a desired state \(\mathbf{x}_{s}\). The pinning control method to achieve this is to pin or control some of the nodes with a state-feedback law, which is described by
\[\dot{x}_{i}(t)=f(x_{i}(t),t)-\epsilon\sum_{j\in\mathcal{N}_{i}}A_{ij}(x_{i}(t) -x_{j}(t))-\delta_{i}qu_{i}(t), \tag{4}\]
where \(x_{i}\in\mathbb{R}^{d}\) is the state vector of node \(i\), \(f(\cdot)\) describes the node dynamics, and \(A_{ij}\) defines the adjacency between nodes \(i\) and \(j\), i.e., nodes \(i\) and \(j\) are connected if its value is \(1\), and disconnected if it is \(0\). Note that only when \(\delta_{i}=1\) is the controller \(u_{i}=x_{i}(t)-\mathbf{x}_{s}(t)\) pinned at node \(i\) with control gain \(q\).
This scheme can be readily extended to discrete-time networked systems as follows:
\[x_{i}(k+1)=f(x_{i}(k))-\epsilon\sum_{j\in\mathcal{N}_{i}}A_{ij}(x_{i}(k)-x_{j }(k))-\delta_{i}qu_{i}(k), \tag{5}\]
where \(k\) denotes the \(k\)-th time step. In particular, when there is no control action exerted on any node and the coupled units are linear systems, i.e., \(f(x_{i}(k))=x_{i}(k)\), the discrete-time networked system can be simplified as \(\mathbf{X}(k+1)=(\mathbf{I}-\epsilon\mathbf{L})\mathbf{X}(k)\), where \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) is the Laplacian matrix. By letting \(\epsilon=1\) and replacing the Laplacian with the symmetrically normalized Laplacian \(\widetilde{\mathbf{L}}=\mathbf{I}-\widehat{\mathbf{A}}\), the coupled system exactly describes a one-layer graph convolution, \(\mathbf{X}(k+1)=\widehat{\mathbf{A}}\mathbf{X}(k)\), the compact form of Eq.(2). Then, the augmented term \(u_{i}(k)\) in Eq.(5) can serve as a regulator to rectify the representations learned by graph convolution, which is needed when the features of incompatible neighbors are aggregated, e.g., in message passing over heterophilious graphs.
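A minimal sketch of one pinning-controlled propagation step in this linear setting is given below (Python/NumPy). The adjacency, prototypes, node-to-prototype assignment, and gain \(q\) are illustrative stand-ins; the point is only the form of the update \(\mathbf{X}(k+1)=\widehat{\mathbf{A}}\mathbf{X}(k)-q\,\delta\,(\mathbf{X}(k)-\mathbf{X}_{s})\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 5
A_hat = np.full((n, n), 1.0 / n)        # stand-in normalized adjacency
X = rng.normal(size=(n, d))             # current node states X(k)
prototypes = rng.normal(size=(2, d))    # "desired states", one per class
match = np.array([0, 1, 0, 1])          # node-to-prototype assignment
X_s = prototypes[match]                 # desired state of each node
delta = np.ones((n, 1))                 # here every node is pinned
q = 0.5                                 # control gain

X_next = A_hat @ X - q * delta * (X - X_s)   # one controlled propagation step
```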
**Other Related Work.** As the core component of GNNs, message passing was first proposed in MPNN [23] to unify various GNN models that leverage message passing algorithms and aggregation procedures on graphs. Among the variants of message passing GNNs, GCN [12] uses a linear aggregation function for the combination of the features from the immediate neighbors. Another GNN model that adopts linear aggregation is GAT [17], which learns the attentive weights for aggregating features at each iteration round. More recently, in order to expand the receptive field for the commonly used two-layer GCN models, personalized page rank is used for deep message passing in APPNP [18]. On the other hand, the residual connection technique is borrowed from deep convolutional networks to GNNs for stacking more layers. Other examples include JKNet [30], GCNII [19], EGNN [32], AP-GCN [25], NDLS [31], DAGNN [24], in which residual connections are employed to preserve the node representations at the previous layer and thus alleviate the over-smoothing problem. However, residual connection based deep architectures generally suffer from high computational complexity. It is also noteworthy that in inductive learning setting, there are some nonlinear message
Fig. 1: (a) Pinning control principle for graph neural networks. (b) The instantiation of pinning controlled message passing based on learnable desired states (i.e., class prototypes), where a feedback controller \(u_{i}^{(l)}\) for any node \(i\) in the \(l\)-th layer is implemented by the difference between its current representation and the matched prototype.
passing GNNs, e.g., GraphSAGE [13], VR-GCN [26], FastGCN [27], Cluster-GCN [28], and GraphSAINT [29].
## 3 Methodology
Modern message passing GNNs are built on the label consistency assumption that adjacent nodes most probably belong to the same cluster/class. However, this may be risky in graphs where dissimilar nodes (e.g., nodes with different labels) are more likely to be interconnected. In such a situation, message passing provably fails to capture the incompatibility between connected nodes [37]. Moreover, this way of message passing makes it difficult to pass information about labeled nodes to distant nodes that are in the same class but located in different regions of the graph. To address these limitations, it is desirable to introduce auxiliary supervision that can act directly on the nodes so as to rectify the misleading message passing between dissimilar nodes. From a control viewpoint, this is analogous to the pinning control of discrete-time networked systems, because the information utilized to supervise representation learning plays a role similar to the controllers in the evolution of the node states. Motivated by this, we propose a pinning control framework on GNNs and introduce an instantiation of the neural control scheme.
Our framework is graphically demonstrated in Figure 1 and contains two types of message passing in each layer of GNNs: neighbor-aggregation based message passing and pinning control based message passing. The latter passes information about how close the current representation is to the representation of a certain class, which will be described in detail in the following subsection. In contrast to pinning control of complex networks, where a common desired state for all the nodes is known in advance, there is no common, already-known desired state for the nodes in GNNs. A challenge in applying pinning control to semi-supervised graph learning is therefore how to design meaningful "desired states" for the nodes in different classes, and the subsequent question is how to assign the unlabeled nodes to their desired states.
Here, we present an instantiation to complete this hybrid message passing scheme, as shown in Figure 1(b). The whole architecture consists of three components, namely, _representation learning of the "desired states"_, _matching between the desired states and the pinned nodes_, and _the hybrid message passing based graph convolution layer_. The first component offers the controllers to be applied to driving node representation learning, while the second component is to learn the pinning relationships between the designed states and graph nodes. Then, the feedback control mechanism (i.e., pinning controller) is integrated into the aggregation function in the third component. The details of the above modules are demonstrated in the following subsections.
### _Representation of "Desired States"_
To embody class-relevant information into node representations, we use the class prototypes of the training data to serve as the desired states, which will supervise node representation learning by the learning feedback control.
**Definition 3**.: (**Class prototype**). Given a graph \(G\) and the associated labeled node set \(\mathbf{T}\), which is partitioned into \(c\) classes, namely, \((C_{1},C_{2},...,C_{c})\), a class prototype \(\mathbf{P}_{c_{j}}\) is the centroid of the embeddings of all the labeled nodes in class \(C_{j}\).
To learn class-relevant representations, the prototype of a class is considered to be the desired representation from representation learning for nodes in that class. This way, the control signal related to a certain class can modify the feature representations of the nodes in this class.
Intuitively, the original attribute mean of the labeled nodes in the same class can be exploited as the representation of the prototype. The embeddings of the prototypes for each class are then defined as
\[\mathbf{P}_{\mathbf{c}_{j}}=\frac{1}{|T_{j}|}\sum_{i\in T_{j}}g_{\theta}(\mathbf{X}_{T_{j} }(i)), \tag{6}\]
where \(g_{\theta}(\cdot)\) is a linear layer and \(T_{j}\) represents the set of labeled nodes of class j. The prototypes convey the information about the corresponding classes, therefore they can serve as the desired states for representation learning. Based on them, it is possible to construct "pinning controllers" to steer the representation learning towards the "desired states".
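A minimal sketch of the prototype computation of Eq.(6) follows, assuming \(g_{\theta}\) is a single linear layer as stated above; the sizes and toy labels are placeholders.

```python
# A minimal sketch of Eq.(6): class prototypes as centroids of the
# embedded labeled nodes of each class.
import torch
import torch.nn as nn

n, f, d, c = 10, 16, 8, 3
X = torch.randn(n, f)                      # node attributes
train_idx = torch.tensor([0, 1, 4, 6, 8])  # labeled node set T
y_train = torch.tensor([0, 1, 2, 0, 1])    # their class labels

g_theta = nn.Linear(f, d)                  # the linear layer g_theta(.)
emb = g_theta(X[train_idx])                # embeddings of labeled nodes
P = torch.zeros(c, d)
for j in range(c):                         # P_{c_j}: mean embedding of class j
    P[j] = emb[y_train == j].mean(dim=0)
```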
**Definition 4**.: (**Pinning controller**). Given a graph \(G\), a pinning controller for node \(i\) refers to a control loop feedback \(u_{i}\), which is the difference of the desired representation and the current representation of node \(i\) in the learning process.
That is, a pinning controller can be formulated as \(u_{i}=h_{i}-\mathbf{P}_{\mathbf{c}_{i}}\), where \(\mathbf{c}_{i}\) represents the ground truth class that
node \(i\) belongs to. However, since most of the node labels are invisible, the "desired state" for node \(i\) has to be estimated and may not exactly coincide with the ground-truth prototype.
### _Pinning Control Node Matching_
To address the above question, i.e., which class prototype a node should be associated with in order to obtain the optimal pinning controller, the scheme learns the bipartite matching matrix that depicts the relationships between prototypes and graph nodes (i.e., the connections indicated by the red lines in Figure 1(a)). Specifically, we define the pinning relationship based on the similarity between node feature representations and prototype embeddings, as \(\mathbf{S}^{(l)}=\mathbf{H}^{(l-1)}\mathbf{P}_{e}^{T}\), where \(\mathbf{P}_{e}\) is composed of the \(c\) prototype embeddings and \(l\) refers to the \(l\)-th round of message passing, suggesting that our method allows the control relation to adapt to the update of node representations, as depicted in Figure 1(a). Here \(\mathbf{S}^{(l)}\) reflects how strongly each node is pinned by the different class prototypes.
Since we aim to build matching between nodes and the prototypes with the same label, the perfect relation would be that nodes of the same type are pinned by the same prototype, which would favor the separation between different classes. Intuitively, the key to achieve this goal is to align the pinning relationship between node and prototype within the neighborhood of the concerned node, so that neighboring nodes are pinned by similar prototypes. Consequently, the pinning relationship \(\mathbf{S}^{(l)}\) can propagate on the graph, i.e., \(\widehat{\mathbf{A}}\mathbf{S}^{(l)}\). Note that, an implicit assumption here is that nodes tend to be connected with similar nodes in graphs. However, in some graphs [37] (i.e., graphs with weak homophily) nodes are more likely to be adjacent to nodes with different labels. Therefore, it would be better to distance a node from its incompatible neighbors in terms of pinning similarity, i.e., \((I-\widehat{A})\mathbf{S}^{(l)}\). For a general graph, we combine the above two quantities to improve the approximation of pinning relationships:
\[\widetilde{\mathbf{S}}^{(l)}=\alpha^{(l)}\widehat{\mathbf{A}}\mathbf{S}^{(l)}+(1-\alpha^{(l)})(\mathbf{I}-\widehat{\mathbf{A}})\mathbf{S}^{(l)}, \tag{7}\]
where \(\alpha^{(l)}\) is a learnable parameter in each layer. To obtain the index of the prototype most suitable for a certain node \(i\), we normalize the similarities between node \(i\) and all \(c\) prototypes, and retrieve the index of the prototype with the maximal normalized similarity, which is calculated by
\[\begin{split} IX^{(l)}(i)&=\arg softmax(\widetilde{\mathbf{S}}^{(l)}_{i})\\ &=\arg\max_{1\leq j\leq c}\frac{e^{\frac{1}{\tau}\widetilde{\mathbf{S}}^{(l)}_{ij}}}{\sum_{r}e^{\frac{1}{\tau}\widetilde{\mathbf{S}}^{(l)}_{ir}}},\end{split} \tag{8}\]
where \(IX^{(l)}\) is an \(n\)-dimensional vector that records the prototype index for each node, and \(\tau\) is a predefined temperature. A smaller \(\tau\) produces a more skewed output distribution, so that the influence of large similarity values is amplified. As a result, the vector \(IX^{(l)}\) depicts the matchable prototype for each node. We translate this bipartite relationship into a sparse matrix:
\[\mathbf{B}^{(l)}_{ij}=\begin{cases}1,&IX^{(l)}(i)=j\\ 0,&otherwise,\end{cases} \tag{9}\]
where \(\mathbf{B}^{(l)}\in\mathbb{R}^{n\times c}\) indicates the bipartite mapping between nodes and prototypes.
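The matching module of Eqs.(7)-(9) can be sketched as follows; \(\alpha\), \(\tau\), the toy adjacency, and all tensor sizes are illustrative assumptions rather than the paper's settings.

```python
# A sketch of Eqs.(7)-(9): similarity S, its propagated refinement
# S_tilde, and the hard bipartite assignment B.
import torch
import torch.nn.functional as F

n, d, c = 10, 8, 3
H = torch.randn(n, d)                      # H^{(l-1)}: current representations
P_e = torch.randn(c, d)                    # prototype embeddings
A_hat = torch.eye(n)                       # normalized adjacency (toy)
alpha, tau = 0.7, 0.5                      # learnable mix / fixed temperature

S = H @ P_e.t()                                             # S^{(l)} = H P_e^T
S_tilde = alpha * (A_hat @ S) \
          + (1 - alpha) * ((torch.eye(n) - A_hat) @ S)      # Eq.(7)
probs = F.softmax(S_tilde / tau, dim=1)                     # tempered softmax, Eq.(8)
IX = probs.argmax(dim=1)                                    # matched prototype index
B = F.one_hot(IX, num_classes=c).float()                    # sparse matching, Eq.(9)
```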
### _Hybrid Message Passing_
After matching the prototypes and the pinned nodes, we use the state feedback controller \(\mathbf{u}^{(l)}_{i}=\beta(\mathbf{H}^{(l)}_{i}-\mathbf{B}^{(l)}_{i}\mathbf{P}_{e})\) to regulate the representation of node \(i\) (\(1\leq i\leq n\)) in the \(l\)-th layer, where \(\beta\) is a hyperparameter that tunes the impact of the difference between the two representations through the learning process, and \(\mathbf{B}^{(l)}\) is used to look up, from \(\mathbf{P}_{e}\), the prototype representation optimal for each node. Note that from an information transmission viewpoint, the pinning control here can also be considered another kind of message passing, which propagates the information about a certain class. By combining the vanilla message passing and pinning control based "message passing", we obtain the following aggregation function:
\[\mathbf{H}^{(l+1)}=\sigma(\underbrace{\widehat{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}}_{Message\ passing}+\beta\underbrace{(\mathbf{H}^{(l)}-\mathbf{B}^{(l)}\mathbf{P}_{e})\mathbf{W}^{(l)}}_{Pinning\ control}). \tag{10}\]
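One pinning-controlled graph convolution layer in the form of Eq.(10) might look as follows; this is a sketch assuming a dense normalized adjacency and a precomputed matching matrix \(\mathbf{B}^{(l)}\), not the authors' released implementation.

```python
# A sketch of one hybrid message passing layer, Eq.(10).
import torch
import torch.nn as nn

class PinningControlLayer(nn.Module):
    def __init__(self, in_dim, out_dim, beta):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared W^{(l)}
        self.beta = beta                                 # control gain

    def forward(self, A_hat, H, B, P_e):
        msg = A_hat @ self.W(H)          # vanilla message passing term
        ctrl = self.W(H - B @ P_e)       # pinning control term (H - B P_e) W
        return torch.relu(msg + self.beta * ctrl)
```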
**Semi-supervised classification.** It is noteworthy that in our implementation of pinning control, we apply control to all nodes, including the labeled ones. To ensure that the labeled nodes are correctly pinned by the label-associated controller, besides the cross-entropy loss on node classification shown in Eq.(3), we add a regularization term that penalizes the disagreement between the model prediction and the estimate of class consistency (i.e., the normalized similarity between node representations and class prototype representations). Accordingly, the total loss is
\[\widetilde{\mathcal{L}}=\mathcal{L}-\lambda\sum_{l}\sum_{v_{i}\in\mathcal{T}} \sum_{k=1}^{C}y_{ik}\ln\widetilde{z}^{(l)}_{ik}, \tag{11}\]
where \(\widetilde{z}^{(l)}=softmax(\mathbf{S}^{(l)})\) indicates the degree of matching between a node and a controlling prototype.
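A sketch of the total objective of Eq.(11) is given below, assuming the per-layer similarity matrices \(\mathbf{S}^{(l)}\) are collected during the forward pass; the function name and signature are hypothetical.

```python
# A sketch of the total loss of Eq.(11): classification cross-entropy
# plus a per-layer consistency penalty on the matching similarities.
import torch
import torch.nn.functional as F

def pcgcn_loss(logits, S_layers, y, train_idx, lam):
    loss = F.cross_entropy(logits[train_idx], y)         # Eq.(3)
    for S in S_layers:                                   # one S^{(l)} per layer
        z_tilde = F.log_softmax(S[train_idx], dim=1)     # log of softmax(S^{(l)})
        loss = loss + lam * F.nll_loss(z_tilde, y)       # consistency term
    return loss
```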
**Complexity.** We compare the time complexity of PCGCN with that of the standard message passing GNN (i.e., the vanilla GCN). Note that one layer of PCGCN is the combination of message passing and pinning control. First, the time complexity of one round of message passing with feature transformation (i.e., a GCN layer) is \(\mathcal{O}(|E|d+nd^{2})\). Second, the time complexity of generating class prototypes is \(\mathcal{O}(|T|fd)\), and the implementation of pinning control is \(\mathcal{O}(nd+nd^{2}+c|E|)\). So, the time complexity of one-layer PCGCN is \(\mathcal{O}(|E|(c+d)+|T|fd+nd+nd^{2})\), which still depends linearly on the network size \(n\). However, it should be noted that the pinning control relationships built between nodes and "controllers" introduce additional space overhead.
## 4 Experiments and Evaluation
To demonstrate the effectiveness of the pinning control scheme, we evaluate the performance of PCGCN against the state-of-the-art GNNs on several graph benchmark datasets for the semi-supervised node classification task. In particular, we address the following questions:
**Q1** Can pinning control help the vanilla message passing to improve the expressive power of GNNs and thus achieve better performance on heterophilous graphs?

**Q2** Is pinning control effective in propagating information to distant nodes?

**Q3** How do pinning-controlled GNNs depend on the labels?
We implement the PCGCN scheme in PyTorch, and our code is available online\({}^{2}\). In what follows, we present the experimental settings, followed by our answers to the above research questions one by one.
Footnote 2: The hyperlink will be given after acceptance.
### _Experimental Setup_
To answer **Q1**, we evaluate the node classification performance of the proposed PCGCN and compare it with state-of-the-art heterophily-oriented GNN models on heterophilous graphs. Moreover, we test our model on benchmark graph datasets with strong homophily, so that the evaluation covers the full spectrum of homophily levels.
**Datasets.** We evaluate the performance of the PCGCN model and existing GNNs in node classification on various real-world datasets [34, 35, 9, 36]. We provide their statistics in Table I, where we compute the homophily level \(\overline{h}\) of a graph as the average of \(h_{i}\) over all nodes \(v_{i}\in V\). For all benchmarks, we use the feature vectors, class labels, and 10 random splits (48%/32%/20% of nodes per class for train/validation/test\({}^{3}\)) from [36].
Footnote 3: (Pei et al., 2019) claims that the ratios are 60%/20%/20%, which are different from the real data splits shared on GitHub.
In the Flickr dataset, nodes represent images, each edge indicates that two images share some common attributes, and features are the descriptions of the images. For a fair comparison with existing results, we adopt the split in [29] (i.e., 50%/25%/25% of nodes per class for train/validation/test) for the Flickr partition. We report the mean test accuracy and standard deviation over the 10 replicate results.
**Baselines.** For the heterophilous datasets, we specifically compare our model with heterophily-oriented methods, namely, two variants of H2GCN (i.e., H2GCN-1 and H2GCN-2) [37], Geom-GCN [36], FAGCN [38], one variant of GCNII [19] wherein parameters are shared between layers, GPRGNN [44], GGCN [45], LINKX [46], and GloGNN [47]. We also compare our scheme with the following methods, some of which are known to be competitive on various graphs: Multilayer Perceptron (MLP), SGC [20], Graph Convolutional Network (GCN) [12], Graph Attention Network (GAT) [17], MixHop [42], and GCNII [19].
**Model Setting.** We implement the proposed PCGCN and some necessary baselines using PyTorch and PyTorch Geometric, a library for deep learning on irregularly structured data built upon PyTorch. We try our best to provide a rigorous and fair comparison between different models. To mitigate the effects of randomness, we run each method \(10\) times and report the average performance. For the baseline methods, whose results on the benchmark datasets are publicly available, we directly present the results. For the models without publicly reported results, we use the original codes published by their authors and fine-tune them. All experiments are implemented in PyTorch on 2 NVIDIA RTX3090 24G GPUs with CUDA 11.1. We use Python 3.9.7 and python packages PyTorch 1.8.1, PYG 1.6.3 (cuda 11.1). Table II summarizes the training configuration of PCGCN for semi-supervised node classification, where lr is the learning rate, hid is the hidden dimension, wd is the weight decay, \(\lambda\) is the regularization factor and \(\beta\) is the control gain.
### _Experimental Results_
#### 4.2.1 Comparison with Baselines on Heterophily Issue
To answer **Q1**, we report the test accuracy of different GNNs on the supervised node classification task over datasets with varying homophily levels in Table III. It can be seen that PCGCN achieves new state-of-the-art performance on almost all heterophilious graphs (\(h<0.5\)) by remarkable margins, compared to the best of the existing models. Moreover, PCGCN outperforms the other methods across all datasets in terms of average rank (2.0), suggesting its strong adaptability to graphs at various homophily levels. In particular, for heterophilous datasets like Chameleon and Squirrel, PCGCN improves the accuracy by around 3.1% and 3.6%, respectively, compared to the second-best model. Compared with leading GNNs on homophilous graphs, PCGCN also achieves competitive accuracy. It is noteworthy that our model is a shallow model, i.e., a 2-layer GCN with pinning control, but it is still comparable to or even slightly better than the deep GNN model GCNII with 64 layers. All these results demonstrate that PCGCN's hybrid message passing effectively strengthens vanilla message passing GNNs.
It is interesting to explore the performance of PCGCN on specific nodes with different local homophily levels. Figure 2 shows the classification accuracy of PCGCN and of two vanilla message passing GNNs, namely GCN and GAT,
\begin{table}
\begin{tabular}{l c c c c} \hline
**Datasets** & **Nodes** & **Edges** & **Features** & **Classes** \\ \hline Cora & 2,708 & 5,429 & 1,433 & 7 \\ CiteSeer & 3,327 & 4,552 & 3,703 & 6 \\ Pubmed & 19,717 & 44,338 & 500 & 3 \\ Cornell & 183 & 295 & 1,703 & 5 \\ Wisconsin & 251 & 499 & 1,703 & 5 \\ Texas & 183 & 309 & 1,703 & 5 \\ Chameleon & 2,277 & 36,101 & 2,325 & 5 \\ Squirrel & 5,201 & 217,073 & 2,089 & 5 \\ Actor & 7,600 & 33,544 & 931 & 5 \\ Flickr & 89,250 & 449,878 & 500 & 7 \\ \hline \end{tabular}
\end{table} TABLE I: Statistics of the datasets.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline
**Datasets** & **dropout** & **hid** & **layers** & **lr** & **wd** & \(\lambda\) & \(\beta\) \\ \hline Cora & 0.8 & 512 & 2 & 0.001 & 5e-4 & 0.1 & 0.6 \\ CiteSeer & 0.7 & 256 & 2 & 0.01 & 5e-4 & 0.1 & 0.6 \\ Pubmed & 0.3 & 256 & 2 & 0.001 & 0.0001 & 1 & 0.5 \\ Cornell & 0.4 & 32 & 1 & 0.05 & 5e-4 & 1 & 5 \\ Wisconsin & 0.2 & 128 & 1 & 0.05 & 5e-4 & 1 & 5 \\ Texas & 0.7 & 256 & 1 & 0.05 & 0.001 & 10 & -3 \\ Chameleon & 0.5 & 64 & 2 & 0.01 & 5e-5 & 10 & -0.2 \\ Squirrel & 0.5 & 64 & 2 & 0.01 & 5e-5 & 1 & -0.1 \\ Actor & 0.1 & 64 & 2 & 0.01 & 5e-5 & 10 & -5 \\ Flickr & 0.6 & 128 & 2 & 0.01 & 5e-5 & 0.1 & -0.1 \\ \hline \end{tabular}
\end{table} TABLE II: Hyperparameters of PCGCN.
on the nodes with varying node-level homophily. Clearly, PCGCN is superior to the vanilla message passing GNNs for nodes with low local homophily (i.e., strongly heterophilious nodes), corresponding to the node-level homophily less than 0.4, which shows that pinning control is capable of alleviating the negative effect of heterophily on node classification. It is also clear that, for homophilous nodes, i.e., the nodes with node-level homophily greater than \(0.6\), PCGCN is only comparable with GCN. Although in heterophilious graphs the majority of nodes are weakly homophilous, a significant improvement of the overall node classification performance by PCGCN is still achieved, as shown in Table III.
To intuitively understand what change our pinning control brings to node representation learning, we visualize the node feature distribution in the embedding space. We utilize t-SNE to create 2D plots of all node embeddings at the last layer after training for two heterophilious graphs: Chameleon and Squirrel. Figure 3 and Figure 4 show the node embedding distributions achieved by GCN, LINKX and PCGCN, respectively, where different colors indicate different classes. It is clear that, compared to the random distribution of the initial node features, some clustering patterns in the feature subspace are captured by GCN, but they lack obvious boundaries. While LINKX produces a clearer
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
**Dataset** & **Chameleon** & **Squirrel** & **Actor** & **Texas** & **Wisconsin** & **Cornell** & **Flickr** & **Cora** & **Citeseer** & **Pubmed** & **Avg.** \\
**Hom.ratio**\(\widetilde{h}\) & 0.23 & 0.22 & 0.22 & 0.2 & 0.1 & 0.06 & 0.32 & 0.81 & 0.74 & 0.8 & **Rank** \\ \hline MLP & 46.93\(\pm\)1.7 & 29.95\(\pm\)1.6 & 34.78\(\pm\)1.2 & 79.19\(\pm\)6.3 & 83.15\(\pm\)5.7 & 79.79\(\pm\)4.2 & 44.32\(\pm\)0.2 & 75.13\(\pm\)2.7 & 73.26\(\pm\)1.7 & 85.69\(\pm\)0.3 & 11.1 \\ GCN & 65.92\(\pm\)2.5 & 49.78\(\pm\)2.0 & 27.51\(\pm\)1.2 & 55.14\(\pm\)5.16 & 51.76\(\pm\)3.06 & 60.54\(\pm\)5.3 & 49.68\(\pm\)0.45 & 86.98\(\pm\)1.27 & 76.50\(\pm\)1.36 & 88.42\(\pm\)0.5 & 10.9 \\ GAT & 65.32\(\pm\)1.9 & 46.79\(\pm\)2.0 & 29.03\(\pm\)0.9 & 52.16\(\pm\)6.63 & 49.41\(\pm\)4.09 & 61.89\(\pm\)5.05 & 49.67\(\pm\)0.81 & 87.30\(\pm\)1.10 & 76.55\(\pm\)1.23 & 86.33\(\pm\)0.48 & 11.2 \\ GraphSAGE & 68.71\(\pm\)2.3 & 41.05\(\pm\)1.1 & 34.37\(\pm\)3.1 & 82.70\(\pm\)5.9 & 51.76\(\pm\)5.6 & 75.59\(\pm\)5.2 & 50.1\(\pm\)1.3 & 86.60\(\pm\)1.8 & 75.61\(\pm\)6.1 & 86.01\(\pm\)0.8 & 9.8 \\ MiHop & 60.50\(\pm\)2.5 & 43.80\(\pm\)1.4 & 32.22\(\pm\)3.4 & 77.84\(\pm\)7.3 & 75.88\(\pm\)4.9 & 37.51\(\pm\)6.34 & 51.92\(\pm\)0.41 & 87.61\(\pm\)0.85 & 76.26\(\pm\)1.33 & 85.31\(\pm\)0.61 & 9.9 \\ GCNII & 63.86\(\pm\)3.0 & 36.37\(\pm\)1.6 & 34.40\(\pm\)0.7 & 77.57\(\pm\)3.8 & 80.39\(\pm\)3.4 & 77.86\(\pm\)3.7 & 50.34\(\pm\)0.22 & 88.87\(\pm\)**12.5** & 77.33\(\pm\)1.48 & 30.91\(\pm\)0.43 & 7.1 \\ \hline H2GCN-1 & 58.84\(\pm\)2.1 & 36.42\(\pm\)1.8 & 35.94\(\pm\)1.3 & 84.86\(\pm\)6.7 & 86.67\(\pm\)4.6 & 82.16\(\pm\)4.8 & 51.76\(\pm\)0.4 & 86.35\(\pm\)1.6 & 76.85\(\pm\)1.5 & 88.50\(\pm\)0.6 & 7 \\ H2GCN-2 & 59.56\(\pm\)1.8 & 37.90\(\pm\)2.0 & 35.55\(\pm\)1.6 & 82.16\(\pm\)5.2 & 85.88\(\pm\)4.3 & 82.16\(\pm\)6.0 & 52.01\(\pm\)0.1 & 88.13\(\pm\)1.4 & 76.73\(\pm\)1.4 & 88.46\(\pm\)0.7 & 6.6 \\ Geom-GCN & 60.90\(\pm\)2.8 & 38.14\(\pm\)0.9 & 31.63\(\pm\)1.5 & 60.18 & 67.57 & 64.12 & N/A & 85.35\(\pm\)1.57 & **78.02\(\pm\)1.15** & 89.95\(\pm\)0.47 & 8.6 \\ FAGCN & 45.13\(\pm\)2.2 & 31.77\(\pm\)2.1 & 34.51\(\pm\)0.7 & 72.43\(\pm\)5.6 & 67.84\(\pm\)4.8 & 77.06\(\pm\)6.3 & 49.66\(\pm\)0.6 & 87.87\(\pm\)0.8 & 76.76\(\pm\)1.6 & 88.80\(\pm\)0.6 & 10.2 \\ GPRCNN & 46.58\(\pm\)1.7 & 31.61\(\pm\)2.4 & 34.63\(\pm\)1.22 & 78.38\(\pm\)4.36 & 82.94\(\pm\)4.4 & 80.27\(\pm\)8.11 & 51.14\(\pm\)0.1 & 87.95\(\pm\)1.18 & 77.13\(\pm\)1.67 & 87.54\(\pm\)0.38 & 8.8 \\ GGCN & 71.48\(\pm\)1.4 & 55.17\(\pm\)1.8 & **37.54\(\pm\)1.56** & 84.86\(\pm\)4.58 & 86.68\(\pm\)4.29 & 85.68\(\pm\)4.63 & - & 87.95\(\pm\)1.05 & 77.14\(\pm\)1.45 & 89.15\(\pm\)0.37 & 2.8 \\ LINKX & 68.42\(\pm\)1.38 & 61.81\(\pm\)1.80 & 36.10\(\pm\)1.55 & 74.60\(\pm\)0.83 & 7.549\(\pm\)5.27 & 77.84\(\pm\)5.81 & 52.24\(\pm\)0.19 & 84.64\(\pm\)1.13 & 73.19\(\pm\)0.99 & 87.86\(\pm\)0.77 & 8.4 \\ GloGNN & 69.78\(\pm\)2.42 & 57.54\(\pm\)1.39 & 37.35\(\pm\)1.30 & 34.32\(\pm\)4.15 & 87.06\(\pm\)3.53 & 83.51\(\pm\)4.26 & 53.97\(\pm\)0.22 & 88.31\(\pm\)1.13 & 77.41\(\pm\)1.65 & 89.62\(\pm\)0.35 & 2.7 \\ \hline PCGCN & **74.29\(\pm\)1.9** & **65.47\(\pm\)2.4** & 36.43\(\pm\)0.9 & **85.95\(\pm\)3.9** & **87.64\(\pm\)3.7** & **85.94\(\pm\)6.1** & **5.64\(\pm\)0.3** & 87.65\(\pm\)1.5 & 77.40\(\pm\)1.3 & **90.34\(\pm\)0.4** & **2.0** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Mean accuracy\(\pm\)stdev over different data splits on the ten datasets. The best performance for each dataset is highlighted in bold, and the second-best performance is underlined for comparison. "N/A" denotes non-reported results. Dashes indicate experiments that could not be run due to memory issues.
Fig. 2: The node classification accuracy by three models, ordered by node-level homophily ratio range. The dotted-line indicates the proportion of nodes in each homophily interval.
clustering structure than GCN, there is still a large portion of overlap between classes. In contrast, PCGCN with adaptive parameter learning (corresponding to the last subplot in each row) assigns nodes into several distinct clusters, significantly reducing the representation noise and thereby improving the classification performance.
#### 4.2.2 Efficacy of Pinning Control on Distant Nodes
We investigate the impact of pinning control on distant nodes, that is, whether pinning control can enhance message passing for the nodes not directly connected to the labeled nodes. Towards this end, we measure the performance of PCGCN on the unlabeled nodes with varying shortest distances from the labeled nodes in the same class, which is graphically illustrated in Figure 7 and calculated as follows: let \(\mathcal{N}(1)=A\) be the one-hop neighboring matrix. The \(k\)-hop neighboring matrix is then obtained by performing \(k\) iterations as follows:
\[\mathcal{N}_{ij}(k)=\begin{cases}k,&\text{if }A_{ij}^{k}>0\ \text{and}\ \mathcal{N}_{ij}(k-1)=0,\\ \mathcal{N}_{ij}(k-1),&\text{otherwise},\end{cases} \tag{12}\]
In general, \(k\) can be set to be equal to or greater than the diameter of graph \(G\), to guarantee that all nodes have been visited at least once, i.e., \(\mathcal{N}(k)>0\). Clearly, the elements of \(\mathcal{N}(k)\) are the shortest path-lengths between nodes. Then, the shortest label distance for an arbitrary unlabeled node \(i\) of class \(C_{i}\) is the smallest value among all the shortest path-lengths between node \(i\) and the labeled nodes belonging to the same class \(C_{i}\), which formally reads as
\[SLD(i)=\min\left\{\mathcal{N}_{ij}(k):j\in\mathcal{T}(C=C_{i})\right\}, \tag{13}\]
\begin{table}
\begin{tabular}{l c|c c c c c} \hline
**Dataset** & **Chameleon** & **Chameleon-1** & **Chameleon-2** & **Chameleon-3** & **Chameleon-4** & **Chameleon-5** & **Average** \\ \hline MLP & 46.93\(\pm\)1.7 & 23.68\(\pm\)3.9 & 23.83\(\pm\)4.4 & 26.62\(\pm\)6.9 & 21.86\(\pm\)3.3 & 26.09\(\pm\)6.7 & 24.41 \\ GCN & 65.92\(\pm\)2.5 & 38.35\(\pm\)4.0 & 40.54\(\pm\)5.0 & 37.82\(\pm\)4.2 & 34.73\(\pm\)4.4 & 36.86\(\pm\)2.8 & 37.65 \\ GAT & 65.32\(\pm\)1.9 & 35.83\(\pm\)4.2 & 36.60\(\pm\)2.5 & 40.50\(\pm\)6.9 & 33.88\(\pm\)2.8 & 37.10\(\pm\)2.8 & 36.78 \\ PCGCN & **74.29\(\pm\)1.9** & **57.71\(\pm\)1.9** & **58.09\(\pm\)2.0** & **61.62\(\pm\)1.8** & **56.35\(\pm\)1.7** & **56.11\(\pm\)2.6** & **57.96** \\ \hline \end{tabular}
\end{table} TABLE IV: Results with different missing classes on Chameleon, in terms of classification accuracy (in percentage).
Fig. 4: t-SNE visualization of node representations learned by GCN, LINKX and PCGCN on Squirrel, respectively.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline
**Dataset** & **Squirrel** & **Squirrel-1** & **Squirrel-2** & **Squirrel-3** & **Squirrel-4** & **Squirrel-5** & **Average** \\ \hline MLP & 29.95\(\pm\)1.6 & 19.79\(\pm\)0.9 & 20.62\(\pm\)1.3 & 19.97\(\pm\)1.0 & 19.96\(\pm\)0.7 & 20.0\(\pm\)0.9 & 20.06 \\ GCN & 49.78\(\pm\)2.0 & 22.35\(\pm\)2.6 & 21.59\(\pm\)1.9 & 21.72\(\pm\)2.3 & 21.26\(\pm\)1.6 & 21.87\(\pm\)1.4 & 21.75 \\ GAT & 46.79\(\pm\)2.0 & 22.89\(\pm\)1.7 & 22.89\(\pm\)1.7 & 21.88\(\pm\)1.8 & 21.73\(\pm\)2.1 & 21.52\(\pm\)2.4 & 22.02 \\ PCGCN & **65.47\(\pm\)2.4** & **50.99\(\pm\)2.1** & **53.54\(\pm\)1.4** & **51.39\(\pm\)1.2** & **48.43\(\pm\)1.2** & **45.82\(\pm\)1.7** & **50.03** \\ \hline \end{tabular}
\end{table} TABLE V: Results with different missing classes on Squirrel, in terms of classification accuracy (in percentage).
Fig. 3: t-SNE visualization of node representations learned by GCN, LINKX and PCGCN on Chameleon, respectively.
where \(\mathcal{T}(C=C_{i})\) refers to the labeled nodes with class label \(C_{i}\) in the training set \(\mathcal{T}\).
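The SLD computation of Eqs.(12)-(13) can be sketched with repeated adjacency powers as follows; the toy graph and labels are illustrative, and a BFS from the labeled nodes would be an equivalent (and cheaper) implementation.

```python
# A small sketch of Eqs.(12)-(13): shortest label distance of an unlabeled
# node to the nearest same-class labeled node.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # a 4-node path graph
labels = np.array([0, -1, -1, 0])          # -1 marks unlabeled nodes
n = A.shape[0]
N = np.where(A > 0, 1, 0)                  # N(1): one-hop neighboring matrix
Ak = A.copy()
for k in range(2, n):                      # iterate up to the graph diameter
    Ak = Ak @ A
    N = np.where((Ak > 0) & (N == 0), k, N)  # Eq.(12), element-wise

def sld(i, cls):
    labeled = np.where(labels == cls)[0]
    return int(N[i, labeled].min())        # Eq.(13)

print(sld(1, 0))                           # node 1 is 1 hop from labeled node 0
```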
Figure 6 reports the node classification accuracy for different types of test nodes in terms of SLD values. One observation is that, on homophilous graphs (i.e., Cora, CiteSeer and Pubmed), pinning control can improve the accuracy of node classification for the nodes that are far away from the labeled nodes in the same class (corresponding to large SLD values), compared to the vanilla message passing, suggesting that the pinning controllers are able to transmit the information about classes to unlabeled nodes directly and more effectively than iterative message passing. Furthermore, from the node classification accuracy distribution on different SLDs shown in Figure 6, it can be seen that for a heterophilious graph PCGCN prominently boosts the classification performance on the nodes whose nearest labeled nodes are in the 1- or 2-hop neighborhood, i.e., \(SLD=1,2\), compared to the vanilla GCN. This verifies the rectification effect of pinning control in learning features via message aggregation. Specifically, the basic message passing aggregates features from the incompatible neighbors, while pinning control injects the class-relevant features into the aggregation, preventing the learned representations from straying away from their ground-truth classes.
#### 4.2.3 Influence of the Labels
As the prototypes used for feature supervision in PCGCN are derived from the labeled data, it is essential to study the influence of labeled nodes on PCGCN's performance. We evaluate this from two aspects: 1) the labeled data are limited; and 2) some classes are not labeled. For the first case, we vary the size of the training set from \(70\) to \(2\) on both homophilous and heterophilious datasets. We average the results over 10 runs on the datasets using random
Fig. 5: Performance comparison with small numbers of labels on Homophilous and Heterophilious datasets.
Fig. 6: Performance distribution v.s. SLD on six datasets.
Fig. 7: A toy example illustrating the calculation of SLD. Different colors represent different class labels. The ground-truth label of node \(i\) is 'green', and the shortest distance from node \(i\) to the 'green' labeled nodes is 2 hops.
train/validation/test splits for each training set size. The results are shown in Figure 5. It can be observed that under different training size settings, PCGCN (red line) consistently surpasses the baseline models on all datasets with varying numbers of labeled nodes, suggesting that pinning control enables GCN to exploit both the label and structural information with state feedback supervision. In particular, the large margin between PCGCN and the vanilla message passing GNNs on heterophilious graphs (i.e., Chameleon, Squirrel and Actor) again indicates that pinning control is effective in mitigating the heterophily issue.
Since the class prototypes in the pinning controllers are defined as the centers of the labeled nodes in the same classes, a question arises: when there is no labeled data for some classes (e.g., on Texas and Cornell, some classes of labels are missing from the training set in some splits), how can one derive the corresponding controllers for the nodes of those classes? To preserve the robustness of PCGCN in this label-missing situation, only minor changes are needed: the prototype corresponding to a class with no labeled nodes is randomly initialized and then learned in the training phase, as described in line 8 of Algorithm 1.
We conduct experiments on two heterophilious graphs, i.e., Chameleon and Squirrel, by masking the labels of a certain class. The resulting datasets are denoted "dataset-i", where \(i\) is the index of the masked class. We compare PCGCN with the vanilla GCN and GAT, and the non-message-passing model MLP. From Table IV and Table V one can observe that the performances of all models degrade on the datasets with labels missing for certain classes, compared to the original datasets (i.e., the second column in these two tables). However, compared to the three baselines, PCGCN preserves good performance on the masked datasets, implying that by learning the prototypes of the missing classes, PCGCN is robust to missing labels.
#### 4.2.4 Correlation between Matching and Prediction
We explore the relationship between model performance and the degree of matching between the labels of the "desired states" (i.e., prototypes) and the labels of the unlabeled nodes. To make the matching degree a controllable variable in the experiments, we assign a certain percentage of unlabeled nodes their ground-truth labels in the pinning control matching matrix \(B\) and then train PCGCN. The results in Figure 8 show that the performance of PCGCN is strongly correlated with the matching degree on heterophilious graphs, while less correlated on homophilous graphs, indicating the importance of learning a good matching relation between prototypes and nodes for resolving the heterophily issue.
#### 4.2.5 Full Control vs. Partial Control
We compare the performance of PCGCN under two control schemes, namely, full control and partial control. In the experiments, we implement partial control by leaving 10% of the nodes uncontrolled. We consider three ways to select the uncontrolled nodes: random selection, and selection of the top or bottom 10% of nodes in terms of node degree. The results in Table VII suggest that full control is necessary to achieve the best performance, compared to the partial control schemes. The reason is that pinning controllers serve as the
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Datasets** & **Cora** & **CiteSeer** & **Pubmed** & **Chameleon** & **Squirrel** & **Actor** & **Cornell** & **Wisconsin** & **Texas** \\ \hline w/o Hom-P & 87.40 & 76.83 & 89.52 & 73.85 & 65.26 & 35.91 & 85.13 & 82.15 & 83.78 \\ w/o Het-P & 87.70 & 76.97 & 89.54 & 72.69 & 64.52 & 35.94 & 83.24 & 80.98 & 83.24 \\ w/o MP & 76.53 & 73.20 & 88.87 & 49.86 & 45.41 & 36.48 & 83.78 & 81.37 & 83.24 \\ w/o CL & 87.08 & 76.94 & 89.40 & 64.01 & 54.60 & 35.51 & 85.13 & 84.90 & 82.16 \\ \hline PCGCN & **87.65** & **77.40** & **90.34** & **74.29** & **65.47** & **36.43** & **85.94** & **87.64** & **85.95** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Ablation study.
Fig. 8: Correlation between matching degree and prediction. The bold black line is the reference of perfect fitting.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
**Datasets** & **Chameleon** & **Squirrel** & **Actor** & **Texas** & **Wisconsin** & **Cornell** & **Cora** & **Citeseer** & **Pubmed** \\ \hline PCGCN-Random(10\%) & 71.14 & 61.97 & 33.80 & 74.32 & 78.82 & 79.72 & 87.28 & 76.63 & 89.22 \\ PCGCN-MinD(10\%) & 72.25 & 62.95 & 33.19 & 81.62 & 75.29 & 79.18 & 87.68 & 77.24 & 90.18 \\ PCGCN-MaxD(10\%) & 73.57 & 64.83 & 35.03 & 82.70 & 85.09 & 82.43 & 87.40 & 77.05 & 89.74 \\ \hline PCGCN & **74.29** & **65.47** & **36.43** & **85.94** & **87.64** & **85.95** & **87.65** & **77.40** & **90.34** \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Performance comparison between full control and partial control. 10% of the nodes are selected to be uncontrolled in three different ways: “-Random” denotes random selection, “-MinD” denotes the selection of the last 10 percent of nodes in terms of node degree, while “-MaxD” represents the selection of the top 10 percent of nodes also in terms of node degree.
supervisors to enhance class-relevant feature learning. It is also noteworthy that retaining the top 10% of nodes is superior to the other two schemes on all datasets with strong heterophily, but slightly inferior to the other two on the datasets with strong homophily (i.e., Cora, Citeseer and Pubmed). This suggests that high-degree nodes are less affected by heterophily in heterophilous graphs than in homophilous graphs.
### _Ablation Study_
To evaluate the effects of the different pinning relation propagation methods (i.e., homophily propagation (Hom-P) and heterophily propagation (Het-P), corresponding to the first and second terms in Eq.(7)), of message passing (MP), and of the consistency loss (CL) for pinning control, we conduct ablation experiments on homophilous and heterophilious datasets, respectively. The first and second rows of Table VI report the results of PCGCN without homophily pinning similarity propagation and without heterophily pinning similarity propagation, respectively. It can be observed that for graphs with strong homophily (i.e., Cora, CiteSeer and Pubmed), the two propagation schemes have little influence on model performance. In contrast, both schemes play a clear role on heterophilious graphs. The third row of the table lists the results of replacing message passing in Eq.(10) with the feature representations of the previous layer, while in the fourth row, the consistency loss that penalizes the pinning control matching degree is removed from the training objective. The experimental results suggest that both structural information (captured by message passing) and control matching (regularized by the consistency loss) have significant impacts on the performance.
## 5 Conclusion
In this paper, to address the challenges posed by limited training samples and heterophilious graphs, we propose a pinning control scheme for boosting message passing GNNs from a control-theoretic viewpoint. By taking the prototypes of labeled data as the desired representations for different types of nodes, we integrate state feedback control into the vanilla message passing to achieve prototypical graph representation learning. The experiments on homophilous and heterophilious benchmark graphs show that the proposed PCGCN brings substantial gains to the vanilla GCN and outperforms comparable leading GNNs. PCGCN enables us to better exploit the supervision ability of labeled data. Our research paves the way for devising more effective supervision enhancement techniques in the future.
## Acknowledgments
This work is supported by the National Natural Science Foundation (NSFC 62276099) and SWPU Innovation Base funding (No.642).
|
2308.08655 | Physics Informed Recurrent Neural Networks for Seismic Response
Evaluation of Nonlinear Systems | Dynamic response evaluation in structural engineering is the process of
determining the response of a structure, such as member forces, node
displacements, etc when subjected to dynamic loads such as earthquakes, wind,
or impact. This is an important aspect of structural analysis, as it enables
engineers to assess structural performance under extreme loading conditions and
make informed decisions about the design and safety of the structure.
Conventional methods for dynamic response evaluation involve numerical
simulations using finite element analysis (FEA), where the structure is modeled
using finite elements, and the equations of motion are solved numerically.
Although effective, this approach can be computationally intensive and may not
be suitable for real-time applications. To address these limitations, recent
advancements in machine learning, specifically artificial neural networks, have
been applied to dynamic response evaluation in structural engineering. These
techniques leverage large data sets and sophisticated algorithms to learn the
complex relationship between inputs and outputs, making them ideal for such
problems. In this paper, a novel approach is proposed for evaluating the
dynamic response of multi-degree-of-freedom (MDOF) systems using
physics-informed recurrent neural networks. The focus of this paper is to
evaluate the seismic (earthquake) response of nonlinear structures. The
predicted response will be compared to state-of-the-art methods such as FEA to
assess the efficacy of the physics-informed RNN model. | Faisal Nissar Malik, James Ricles, Masoud Yari, Malik Arsala Nissar | 2023-08-16T20:06:41Z | http://arxiv.org/abs/2308.08655v1 | # Physics Informed Recurrent Neural Networks for Seismic Response Evaluation of Nonlinear Systems
## I Abstract
Dynamic response evaluation in structural engineering is the process of determining the response of a structure, such as member forces, node displacements, etc when subjected to dynamic loads such as earthquakes, wind, or impact. This is an important aspect of structural analysis, as it enables engineers to assess structural performance under extreme loading conditions and make informed decisions about the design and safety of the structure. Conventional methods for dynamic response evaluation involve numerical simulations using finite element analysis (FEA), where the structure is modeled using finite elements and the equations of motion are solved numerically. Although effective, this approach can be computationally intensive and may not be suitable for real-time applications. To address these limitations, recent advancements in machine learning, specifically artificial neural networks, have been applied to dynamic response evaluation in structural engineering. These techniques leverage large data sets and sophisticated algorithms to learn the complex relationship between inputs and outputs, making them ideal for such problems. In this paper, a novel approach is proposed for evaluating the dynamic response of multi-degree-of-freedom (MDOF) systems using physics-informed recurrent neural networks. The focus of this paper is to evaluate the seismic (earthquake) response of nonlinear structures. The predicted response will be compared to state-of-the-art methods such as FEA to assess the efficacy of the physics-informed RNN model.
## II Introduction
Dynamic response analysis is a valuable tool for designing and assessing the performance of structures under a variety of loads, such as seismic, wind, or wave loading, as well as for conducting reliability analyses of infrastructure and large urban areas. The traditional approach for dynamic response analysis typically involves creating numerical models and solving the partial differential equations using numerical integrators, such as the Newmark-\(\beta\) method or the KR-\(\alpha\) method [3], to predict the system's response under dynamic loads. However, this approach is computationally expensive and is not feasible in scenarios such as probabilistic seismic analysis, where a large number of simulations must be performed, or real-time hybrid simulations involving structures with a large number of degrees of freedom, where the computations must be performed in real time.
Machine learning and Artificial intelligence have proven to be effective tools for predicting the response of complex nonlinear systems with a high degree of accuracy at a fraction of the computational cost compared to traditional finite element analysis. Furthermore, machine learning techniques can also be leveraged as surrogate low-fidelity models to predict the response of systems under insufficient prior knowledge of the systems. These techniques can also be leveraged to model the system's behavior based on experimental data.
The use of machine learning in predicting the behavior of structures under dynamic loads has been investigated by researchers such as Lagaros [4], Zhang [6], [7], and Eshkevari [2]. These studies demonstrate that machine learning has great potential in the area of structural dynamics and response prediction. Zhang [7] used physics-based LSTM models (PhyLSTM\({}^{2}\), PhyLSTM\({}^{3}\)) for seismic response prediction. Their proposed architecture included two or three separate LSTM-based neural network models to predict the system's responses, such as the displacement and the restoring force. Essentially, they added terms to the loss function that penalize the neural network based on discrepancies in the equations of motion. Even though the accuracy of the predictions was enhanced by imposing these physical constraints, these architectures lack structural guidance from physics, resulting in overly complex networks that require extended training periods and large amounts of training data. Therefore, in this paper, a new physics-guided neural network architecture is proposed that uses the idea of numerical integrators to predict the seismic response of highly nonlinear structural systems under limited availability of training data. The network architecture of the model is described in detail in Section III.
## III Network Architecture
Numerical solvers determine the structure's response \(X=\{x(t_{i}),\dot{x}(t_{i}),\ddot{x}(t_{i})\}\), where \(x(t_{i})\) is the displacement, \(\dot{x}(t_{i})\) the velocity, and \(\ddot{x}(t_{i})\) the acceleration at time \(t_{i}\), from the state of the system \(\{x(t_{i-1}),\dot{x}(t_{i-1}),\ddot{x}(t_{i-1})\}\) at time \(t_{i-1}\) and the forcing function \(f(t_{i})\) at time \(t_{i}\). For example, the KR-\(\alpha\) method
[3] gives:
\[\dot{X}_{i+1}=\dot{X}_{i}+\Delta t\,\alpha_{1}\ddot{X}_{i} \tag{1}\]
\[X_{i+1}=X_{i}+\Delta t\,\dot{X}_{i+1}+\Delta t^{2}\alpha_{2}\ddot{X}_{i} \tag{2}\]
\[M\ddot{\hat{X}}_{i+1}+C\dot{X}_{i+1-\alpha_{f}}+KX_{i+1-\alpha_{f}}=F_{i+1-\alpha_{f}} \tag{3}\]
where,
\[\ddot{\hat{X}}_{i+1}=(I-\alpha_{3})\ddot{X}_{i+1}+\alpha_{3}\ddot{X}_{i} \tag{5}\]
\[\dot{X}_{i+1-\alpha_{f}}=(1-\alpha_{f})\dot{X}_{i+1}+\alpha_{f}\dot{X}_{i} \tag{6}\]
\[X_{i+1-\alpha_{f}}=(1-\alpha_{f})X_{i+1}+\alpha_{f}X_{i} \tag{7}\]
\[F_{i+1-\alpha_{f}}=(1-\alpha_{f})F_{i+1}+\alpha_{f}F_{i} \tag{8}\]
and the initial acceleration vector is given by \(M\ddot{X}_{0}=F_{0}-C\dot{X}_{0}-KX_{0}\), where \(M\), \(C\), and \(K\) are the mass, damping, and stiffness matrices, respectively, of dimension \(n\times n\) for an MDOF system with \(n\) DOFs; \(X\), \(\dot{X}\), \(\ddot{X}\), and \(F\) are the displacement, velocity, acceleration, and force vectors, respectively; \(\alpha_{1},\alpha_{2},\alpha_{3}\) are the integration parameter matrices of dimension \(n\times n\); \(I\) is the identity matrix of dimension \(n\times n\); and \(\alpha_{f}\) is a scalar integration parameter.
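A sketch of one explicit KR-\(\alpha\) step implementing Eqs.(1)-(8) for an SDOF system is given below; the scalar integration parameters `a1`, `a2`, `a3`, `af` are placeholders here, whereas in [3] they are computed from the model's mass, damping, and stiffness properties.

```python
# A sketch of one KR-alpha step, Eqs.(1)-(8), for a scalar (SDOF) system.
def kr_alpha_step(x, v, a, F_i, F_ip1, M, C, K, dt, a1, a2, a3, af):
    v_new = v + dt * a1 * a                     # Eq.(1)
    x_new = x + dt * v_new + dt**2 * a2 * a     # Eq.(2)
    v_w = (1 - af) * v_new + af * v             # weighted velocity, Eq.(6)
    x_w = (1 - af) * x_new + af * x             # weighted displacement, Eq.(7)
    F_w = (1 - af) * F_ip1 + af * F_i           # weighted force, Eq.(8)
    # Eq.(3) with Eq.(5): M[(1 - a3) a_new + a3 a] + C v_w + K x_w = F_w
    a_new = (F_w - C * v_w - K * x_w - M * a3 * a) / (M * (1 - a3))
    return x_new, v_new, a_new

# toy usage with placeholder parameter values
x, v, a = 0.0, 0.0, 0.0
x, v, a = kr_alpha_step(x, v, a, 0.0, 1.0, M=1.0, C=0.1, K=4.0,
                        dt=0.01, a1=1.0, a2=0.5, a3=0.0, af=0.0)
```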
The network architecture is based on the same principle and is shown in Figure 1. The input to the model at time \(t_{i}\) is the state of the system \(\{x(t_{i-1}),\dot{x}(t_{i-1})\}\) at time \(t_{i-1}\) and the ground acceleration \(\ddot{x}_{g}(t_{i})\) at time \(t_{i}\), and the model predicts the system response at time \(t_{i}\). The model consists of two separate subnets, the state prediction model (StateNet) and the restoring force and acceleration prediction model (RestNet). The StateNet takes the state of the system \(\{x(t_{i-1}),\dot{x}(t_{i-1})\}\) and the ground acceleration \(\ddot{x}_{g}(t_{i})\) to calculate the state of the system at time \(t_{i}\). The RestNet takes the state of the system predicted by the StateNet at time \(t_{i}\) to predict the acceleration and restoring force at time \(t_{i}\). Each subnet consists of two LSTM layers followed by three fully connected layers, as shown in the figure. Each LSTM layer has 200 neurons, and the output from the LSTM layers is passed through the three fully connected dense layers with ReLU activations to produce the final output.
The outputs of the model are passed through a tensor differentiator to calculate the time derivatives of the responses. The loss of the model is calculated from these time derivatives and the predicted responses as:
\[L(\theta_{1},\theta_{2})=w_{1}L_{1}(\theta_{1})+w_{2}L_{2}(\theta_{1})+w_{3}L _{3}(\theta_{1},\theta_{2}) \tag{9}\]
where \(\theta_{1}\) and \(\theta_{2}\) are the parameters of the state and acceleration prediction models, respectively, \(L_{i}\) are the individual loss terms, \(w_{i}\) is the weight given to the \(i\)-th loss term, and the loss terms are:
\[L_{1}(\theta_{1})=\left\|\frac{dX_{pred}}{dt}-\dot{X}_{pred}\right\| \tag{10}\]
\[L_{2}(\theta_{1})=\|X_{real}-X_{pred}\|+\|\dot{X}_{real}-\dot{X}_{pred}\| \tag{11}\]
\[L_{3}(\theta_{1},\theta_{2})=\|\ddot{X}_{real}-\ddot{X}_{pred}\| \tag{12}\]
\(L_{1}(\theta_{1})\) forces the time derivative of the predicted displacement response to be consistent with the predicted velocity response. \(L_{2}(\theta_{1})\) and \(L_{3}(\theta_{1},\theta_{2})\) are mean square losses for the two subnets. For simplicity, the values of \(w_{i}\) are taken to be unity. It should be observed that \(L_{3}\) depends on both \(\theta_{1}\) and \(\theta_{2}\); therefore, the problem is a bi-objective optimization problem.
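A sketch of this composite loss, with the tensor differentiator realized as a finite difference along the time axis, could look as follows; the tensor shapes and the use of central differences are assumptions, while the weights \(w_{i}=1\) follow the paper.

```python
# A sketch of the physics-informed loss of Eqs.(9)-(12).
import torch

def time_derivative(X, dt):
    # central differences inside, one-sided at the ends; X: (batch, time, dof)
    dX = torch.zeros_like(X)
    dX[:, 1:-1] = (X[:, 2:] - X[:, :-2]) / (2 * dt)
    dX[:, 0] = (X[:, 1] - X[:, 0]) / dt
    dX[:, -1] = (X[:, -1] - X[:, -2]) / dt
    return dX

def physics_loss(x_p, v_p, a_p, x_r, v_r, a_r, dt):
    L1 = torch.mean((time_derivative(x_p, dt) - v_p) ** 2)            # Eq.(10)
    L2 = torch.mean((x_r - x_p) ** 2) + torch.mean((v_r - v_p) ** 2)  # Eq.(11)
    L3 = torch.mean((a_r - a_p) ** 2)                                 # Eq.(12)
    return L1 + L2 + L3                                               # Eq.(9), w_i = 1
```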
The network is trained using two different approaches. First, the model is trained for \(N\) epochs with teacher forcing. In this approach, the targets \(x_{real}(t_{i-1})\) and \(\dot{x}_{real}(t_{i-1})\) are passed as inputs to the StateNet, instead of the model's own predictions at time \(t_{i-1}\), to get the state at time \(t_{i}\), and the outputs from the StateNet are passed as inputs to the RestNet to get the acceleration and restoring forces. The algorithm for teacher forcing is shown in Algorithm 1.
```
1: \(X_{0}\gets 0\), \(\dot{X}_{0}\gets 0\), \(\ddot{X}_{0}\gets F_{0}/M\)
2: while \(t_{i}\leq t_{n}\) do
3:   \(X_{p}(t_{i})\leftarrow\) Subnet\({}^{1}(X_{r}(t_{i-1}),\dot{X}_{r}(t_{i-1}),\ddot{X}_{r}(t_{i-1}))\)
4:   \(\dot{X}_{p}(t_{i})\leftarrow\) Subnet\({}^{1}(X_{r}(t_{i-1}),\dot{X}_{r}(t_{i-1}),\ddot{X}_{r}(t_{i-1}))\)
5:   \(\ddot{X}_{p}(t_{i})\leftarrow\) Subnet\({}^{2}(X_{p}(t_{i}),\dot{X}_{p}(t_{i}),\ddot{X}_{g}(t_{i}))\)
6: end while
7: \(L(\theta_{1},\theta_{2})\gets L_{1}(\theta_{1})+L_{2}(\theta_{1})+L_{3}(\theta_{1},\theta_{2})\)
8: \(\theta_{i}\leftarrow\theta_{i}-lr\,\frac{dL(\theta_{1},\theta_{2})}{d\theta_{i}}\)
```
**Algorithm 1** Training the Model with Teacher Forcing on One Input Sequence
After training the model with teacher forcing, the model is then trained for \(M\) epochs with scheduled learning. In this approach, the model is trained as in Algorithm 1; however, the ground-truth inputs are randomly replaced by the model's own predictions to help the model learn to correct its own mistakes. The trained model is used to predict the response to unknown ground motions as described below.
To predict the response to an unknown ground motion, the model is called one step at a time to predict the responses at time \(t_{i}\). The predicted response is passed back into the model through feedback to calculate the response at time \(t_{i+1}\).
Fig. 1: Architecture of the proposed model
The displacement, velocity, and acceleration are initialized as \(X_{0}\), \(\dot{X}_{0}\), and \(\ddot{X}_{0}=(F_{0}-C\dot{X}_{0}-KX_{0})/M\), respectively. Since the system considered in this study starts from at-rest conditions, i.e., \(X_{0}=\dot{X}_{0}=0\), the initial acceleration becomes \(\ddot{X}_{0}=F_{0}/M=\Gamma\ddot{X}_{g}(0)\), where \(\Gamma\) is the influence vector. The pseudocode for model inference is shown in Algorithm 2.
```
1: \(X_{0}\gets 0\), \(\dot{X}_{0}\gets 0\), \(\ddot{X}_{0}\gets F_{0}/M\)
2: while \(t_{i}\leq t_{n}\) do
3:   \(X_{p}(t_{i})\leftarrow\) Subnet\({}^{1}(X_{p}(t_{i-1}),\dot{X}_{p}(t_{i-1}),\ddot{X}_{p}(t_{i-1}))\)
4:   \(\dot{X}_{p}(t_{i})\leftarrow\) Subnet\({}^{1}(X_{p}(t_{i-1}),\dot{X}_{p}(t_{i-1}),\ddot{X}_{p}(t_{i-1}))\)
5:   \(\ddot{X}_{p}(t_{i})\leftarrow\) Subnet\({}^{2}(X_{p}(t_{i}),\dot{X}_{p}(t_{i}),\ddot{X}_{g}(t_{i}))\)
6: end while
```
**Algorithm 2** Model Inference
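The feedback loop of Algorithm 2 can be sketched as below for an SDOF system, assuming `state_net` and `rest_net` callables that take the previous state and the current ground acceleration as described above; `gamma` (the influence coefficient) and all interfaces are illustrative placeholders.

```python
# A sketch of the autoregressive inference loop of Algorithm 2.
import torch

def rollout(state_net, rest_net, ag, gamma, M):
    # ag: (T,) ground acceleration history, sampled at the training rate
    T = ag.shape[0]
    x = torch.zeros(T); v = torch.zeros(T); a = torch.zeros(T)
    a[0] = gamma * ag[0] / M                 # a_0 = F_0 / M for an at-rest start
    for i in range(1, T):
        x[i], v[i] = state_net(x[i-1], v[i-1], a[i-1], ag[i])  # StateNet
        a[i] = rest_net(x[i], v[i], ag[i])                     # RestNet
    return x, v, a
```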
In the proposed architecture, the learning rate of the optimizer, the number of neurons in the layers, the number of connections between the StateNet and RestNet, and the batch size are the most important hyper-parameters. No regularization methods are used. Note that the hyper-parameters are problem-specific and need to be fine-tuned for each problem. The model is trained with a relatively small batch size: since training involves tensor differentiation and the time series are long (\(\geq 1500\) points), selecting a large batch size is not possible due to memory constraints. A batch size of five was selected, at which the model was observed to perform best.
The data-driven model used for benchmarking the performance of the proposed network is an LSTM-based network developed by Zhang [6]. The input to the data-driven model is the ground acceleration, and the model predicts the acceleration, velocity, displacement, and restoring force. The data-driven model is trained using the Adam optimizer with MSE as the loss function.
## IV Numerical examples: Dataset
The data used for training the model is generated using finite element analysis. However, experimental or a combination of experimental and simulated data can also be used to train the model. To generate the training data, a set of \(D\) ground motions is selected and scaled to match a target design spectrum. The structure's response for the ground motions is calculated through finite element analysis and numerical integration to generate the training and validation dataset.
For this paper, two numerical examples are investigated. The first example is a single-degree-of-freedom (SDOF) Bouc-Wen system. This system and dataset are the same as those used by Zhang [7]. The primary purpose of using this dataset is to benchmark the proposed architecture's performance against the PhyLSTM\({}^{3}\) model developed by Zhang [7]. The dataset used by Zhang [7] consisted of 85 synthetic ground motions. The ground motion duration is \(30\) seconds and the ground motions are sampled at a frequency of \(50\) Hz. Therefore, each ground motion has 1501 data points. 60 examples were used for training the model and 25 examples were used for evaluating it. The model is compared to the data-driven model [6] and the PhyLSTM\({}^{3}\) model [7].
For the second example, a nonlinear MDOF system as shown in Figure 3 is used. The frame has a mass of \(6,720\) kN and \(10,080\) kN for the first and second floors, respectively. The beams are rigid, and the columns are modeled with elastic beam-column elements. The bracing is modeled using a nonlinear truss element with the "Steel02" uniaxial material. The material has a post-yield stiffness ratio of 2.8%. The ground floor is also connected to nonlinear viscous dampers with a velocity exponent of 0.2, as shown in the figure. The building is modeled in OpenSees, and a combination of real-world ground motion records downloaded from the PEER NGA-West2 database [1] and synthetic ground motions is used to generate the training and validation dataset. Numerical integration is performed using the HHT-\(\alpha\) method to obtain the structure's response.
The dataset consists of 274 ground motions sampled at a frequency of \(50\) Hz, with the ground motion duration fixed at 30 sec. The sampled records are scaled to match the UHS MCE design response spectrum in the period range \(0.4\) sec to \(2.0\) sec, i.e., \(0.5\times T_{1}\) to \(2.5\times T_{1}\), where \(T_{1}\) is the fundamental period of the structure. The wavelet record scaling procedure proposed by Montejo [5] is used for scaling the ground motions to the target spectrum. The design response spectrum, the median response spectrum of the ground motion set, and the individual scaled records are shown in Figure 2. As can be seen in the figure, the response spectrum matches the target spectrum well in the period range of interest.
The output from the network is the displacement, velocity, and acceleration response of the two floors, the force in the first-floor damper, and the force in the first-floor brace. 130 examples are used for training the model and 144 are used to validate it. The model is trained as explained in Section III. The purely data-driven LSTM model described in Section III is also trained and evaluated as a benchmark.
Fig. 2: The response spectrum of the ground motion set used for the second numerical example.
## V Numerical Examples: Results
In this section, the results for the two numerical examples discussed in the earlier section are presented below.
### _Numerical Example I: SDOF Bouc-Wen System_
In this section, the results for the SDOF Bouc-Wen system are presented. Recall that the training data consisted of 85 examples, wherein 60 examples were used for training the model and 25 examples were used for evaluating it. The data-driven model is trained for 200 epochs, and the training data is shuffled before each epoch. The proposed architecture is trained for 100 epochs with teacher forcing and 100 epochs with scheduled learning, as described in Section III.
Figure 4 shows the actual displacement response of the structure, the response predicted using the proposed architecture, and the response predicted using the data-driven model on two ground motions from the validation data set. As can be seen in the figure, the proposed model is much better at predicting the displacement response of the structure in comparison to the purely data-driven LSTM model. The \(R^{2}\) values for the validation dataset are calculated as:
\[R^{2}(i)=1-\frac{SS_{res}(i)}{SS_{tot}(i)} \tag{14}\]
where \(R^{2}(i)\) is the \(R^{2}\) value on the \(i\)th example, \(SS_{res}(i)\) is the sum of squares of the residuals for the \(i\)th example, and \(SS_{tot}(i)\) is the total sum of squares for the \(i\)th example. The \(R^{2}\) values for the proposed architecture and the purely data-driven model for the 25 validation examples are calculated and are shown in Figure 5. The mean of \(R^{2}\) is \(0.9929\), and the standard deviation of \(R^{2}\) is \(0.0043\) for the proposed architecture on the validation dataset. These values are \(0.5460\) and \(0.1660\) for the data-driven model. In other words, the proposed model is able to explain \(99.29\)% of the variation in the displacement response, while the data-driven model is able to explain only \(54.60\)%.
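For concreteness, a minimal Python/NumPy sketch of the per-example \(R^{2}\) computation of Eq. (14) is given below; it is not part of the original workflow, and the synthetic `actual` and `predicted` arrays are placeholders for the measured and predicted displacement histories.

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Per-example R^2 from Eq. (14): 1 - SS_res(i) / SS_tot(i)."""
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Placeholder records: 25 validation histories of 1501 points (30 s at 50 Hz).
rng = np.random.default_rng(0)
actual = [rng.standard_normal(1501) for _ in range(25)]
predicted = [a + 0.05 * rng.standard_normal(1501) for a in actual]

r2 = np.array([r_squared(a, p) for a, p in zip(actual, predicted)])
print(f"mean R^2 = {r2.mean():.4f}, std = {r2.std():.4f}")
```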
Since the system under consideration and the dataset used for this example are the same as those used by Zhang [7], the results are compared to the PhyLSTM\({}^{3}\) model proposed therein. The code for PhyLSTM\({}^{3}\) is not publicly available, so it was not possible to reproduce the results; instead, the values reported in the original paper are compared against the proposed architecture. The \(R^{2}\) values for the best and worst cases over the entire dataset for PhyLSTM\({}^{3}\) are \(0.99\) and \(0.79\), respectively [7]. The \(R^{2}\) values for the best and worst cases over the entire dataset for the proposed architecture are \(0.995\) and \(0.981\), respectively. Thus, the proposed architecture outperforms the PhyLSTM\({}^{3}\) model for this system and dataset.
It is also noteworthy to look at the mean square error of the predicted outputs. The mean square errors for the data-driven LSTM and the proposed architecture are summarized in Table I. As can be seen in the table, the proposed model shows a reduction of \(98.50\)% in the mean square error for the displacement response and a reduction of \(89\)% in the mean square error for the velocity response on the validation dataset when compared to the purely data-driven LSTM model.
Fig. 4: The actual and predicted response for two examples in the validation dataset.
Fig. 5: Histogram of \(R^{2}\) values of the displacement response predicted by the (a) proposed model and (b) data-driven model on the validation data set.
Fig. 3: Two-story nonlinear damped building frame considered for the study
### _Numerical Example II: Nonlinear MDOF building._
In this section, the results for the nonlinear building frame are presented. Recall that the dataset consisted of 274 examples, wherein 130 examples are used for training the model and 144 examples are used for evaluating it. The data-driven model is trained for 500 epochs, and the training data is shuffled before each epoch. The proposed architecture is trained for 250 epochs with teacher forcing and 250 epochs with scheduled learning, as described earlier in Section III.
Figure 6 shows the predicted displacement response of the first floor for two ground motions in the validation dataset. As can be seen in the figure, the proposed architecture is able to predict the response of the structure with a high degree of accuracy. The figure also shows that the output from the data-driven model is noisier than the response from the proposed architecture. Furthermore, the data-driven model overestimates the displacement response. Figure 7 shows the predicted force vs. deformation (hysteretic) behavior of the first-floor brace for the proposed architecture and the data-driven model for one ground motion record in the validation dataset. As can be seen in the figure, the proposed architecture is much better at capturing the nonlinear hysteretic behavior than the purely data-driven model.
The \(R^{2}\) and mean square error values of the predicted responses from the proposed architecture and the purely data-driven LSTM model are also calculated and are shown in Table II. As can be seen in the table, the proposed architecture outperforms the purely data-driven model for all the response quantities of interest, thus highlighting the efficacy of the proposed architecture. The mean square error of the predicted responses from the proposed network is roughly one order of magnitude lower than that from the data-driven model. For illustration, a histogram of the \(R^{2}\) values of the displacement response on the validation dataset from the proposed network and the data-driven model is shown in Figure 8. The figure shows that the \(R^{2}\) values of the predicted responses from the proposed network are greater than \(0.98\), indicating a very strong correlation between the predicted and actual responses. On the other hand, the \(R^{2}\) values of the predicted responses from the purely data-driven model lie anywhere between \(0.8\) and \(0.98\). The picture is even clearer in the scatter plots of the actual and predicted displacement for the proposed architecture and the data-driven model shown in Figure 9. As can be seen in the figure, the scatter plot of the actual displacement against the output from the proposed model is much more strongly correlated than that from the purely data-driven model. Since this example was not highly nonlinear, the data-driven model was also able to predict the response of the system with good accuracy.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Response & Proposed architecture & Data-driven model \\ \hline
Displacement & \(9.14\times 10^{-5}\) & \(6.1\times 10^{-3}\) \\ \hline
Velocity & \(2.33\times 10^{-5}\) & \(2.12\times 10^{-4}\) \\ \hline
Acceleration & \(1.1\times 10^{-3}\) & \(2.8\times 10^{-3}\) \\ \hline
\end{tabular}
\end{table} TABLE I: The mean square error for the proposed architecture and the data-driven model on the validation dataset.
Fig. 8: Histogram of the \(R^{2}\) values of the predicted displacement response from (a) the proposed network and (b) the data-driven model on the validation dataset.
Fig. 6: The actual and predicted displacement response of two examples in the validation dataset for the MDOF frame system.
Fig. 7: Predicted force vs predicted deformation behavior of the first-floor brace. The first plot shows the results from the proposed network and the second plot shows the results from the data-driven model.
## VI Summary and Conclusions
In this paper, a new architecture is proposed for evaluating the seismic response of nonlinear structures. The proposed architecture is able to capture the nonlinear response of structures with a high degree of accuracy. Based on the results of the study, the following conclusions are made:
1. The proposed architecture is able to capture the nonlinear behavior of structures accurately with a smaller training-data requirement.
2. Combined with teacher forcing and scheduled learning, the proposed architecture does not require long training runs and performs well without the need for data scaling or normalization.
3. The proposed architecture is able to capture the hysteretic behavior of structures with great accuracy.
4. The proposed architecture is able to predict the seismic response of a structure in a fraction of the time required by finite element analysis, and can thus be used in combination with real-time hybrid simulations or probabilistic seismic analysis, where computational cost is a major concern.
|
2303.07172 | Evaluating Visual Number Discrimination in Deep Neural Networks | The ability to discriminate between large and small quantities is a core
aspect of basic numerical competence in both humans and animals. In this work,
we examine the extent to which the state-of-the-art neural networks designed
for vision exhibit this basic ability. Motivated by studies in animal and
infant numerical cognition, we use the numerical bisection procedure to test
number discrimination in different families of neural architectures. Our
results suggest that vision-specific inductive biases are helpful in numerosity
discrimination, as models with such biases have lowest test errors on the task,
and often have psychometric curves that qualitatively resemble those of humans
and animals performing the task. However, even the strongest models, as
measured on standard metrics of performance, fail to discriminate quantities in
transfer experiments with differing training and testing conditions, indicating
that such inductive biases might not be sufficient. | Ivana Kajić, Aida Nematzadeh | 2023-03-13T15:14:26Z | http://arxiv.org/abs/2303.07172v1 | # Evaluating Visual Number Discrimination in Deep Neural Networks
###### Abstract
The ability to discriminate between large and small quantities is a core aspect of basic numerical competence in both humans and animals. In this work, we examine the extent to which the state-of-the-art neural networks designed for vision exhibit this basic ability. Motivated by studies in animal and infant numerical cognition, we use the numerical bisection procedure to test number discrimination in different families of neural architectures. Our results suggest that vision-specific inductive biases are helpful in numerosity discrimination, as models with such biases have lowest test errors on the task, and often have psychometric curves that qualitatively resemble those of humans and animals performing the task. However, even the strongest models, as measured on standard metrics of performance, fail to discriminate quantities in transfer experiments with differing training and testing conditions, indicating that such inductive biases might not be sufficient.
## Basic Numerical Competence
The ability to represent abstract numbers and compare numerical quantities is a basic numerical competence observed in both animals and humans (Dehaene, Dehaene-Lambertz, & Cohen, 1998). It helps animals in foraging, navigation, hunting, and reproduction (Nieder, 2020), and is also correlated with the later mathematical ability in prelinguistic infants (Gilmore, McCarthy, & Spelke, 2007; Halberda, Mazzocco, & Feigenson, 2008). While such a skill is shared across species and is independent of explicit feedback or formal education (Dehaene, 1997; Gallistel & Gelman, 1992), the degree to which more advanced numerical skills, such as counting and symbolic representation of number, are present across species remains a debated topic (O'Shaughnessy, Gibson, & Piantadosi, 2021; Revkin, Piazza, Izard, Cohen, & Dehaene, 2008; Anobile, Cicchini, & Burr, 2016; Gallistel & Gelman, 1992).
To investigate number representation and processing, different neural networks have been used as cognitive models of various numerical skills such as magnitude comparison (Verguts & Fias, 2004; Dehaene & Changeux, 1993; Zorzi & Butterworth, 1999), subitizing (Peterson & Simon, 2000) and counting (Rodriguez, Wiles, & Elman, 1999; Fang, Zhou, Chen, & McClelland, 2018). Neural networks are able to encode exact magnitudes (Creatore, Sabathiel, & Solstad, 2021) and develop basic numerical abilities such as numerosity comparison (Testolin, Dolfi, Rochus, & Zorzi, 2020).
While such networks have been used successfully to explain different phenomena in numerical cognition, their architecture is often designed for a task targeting specific cognitive function. In contrast to such specialized networks, in recent years we have witnessed a radical improvement in both the performance, and the quality of representations learned by deep neural networks that are trained end-to-end across vision (Simonyan & Zisserman, 2014; He, Zhang, Ren, & Sun, 2016), language (Vaswani et al., 2017; Devlin, Chang, Lee, & Toutanova, 2018; Brown et al., 2020), and multimodal (Lu, Batra, Parikh, & Lee, 2019; Radford et al., 2021; Alayrac et al., 2022) domains.
Here, we investigate whether state-of-the-art models designed for visual processing, also referred to as vision encoders, can exhibit basic numerical competence as observed in humans and animals. Specifically, we evaluate _number discrimination_ in vision encoders, defined as the ability to make broad relative numerical judgements such as many versus few, which is imprecise and not as advanced as counting, but within the normal ability of many animals (Davis & Memmott, 1982). We draw inspiration from studies in animal and child cognition and use a simple discrimination paradigm known as the _bisection task_ to examine if recent vision encoders can learn to discriminate stimuli on the basis of number.
We consider three vision encoders with varying degrees of explicit inductive biases: ResNet(He et al., 2016), ViT (Dosovitskiy et al., 2020), and Swin(Liu et al., 2021), as well as a simple, comparatively small, multi-layer perceptron (MLP) not designed for vision tasks as a baseline. Across all conditions, Swin and ResNet with image-specific inductive biases are the most successful models in number discrimination; moreover, Swin matches the empirical data from humans and animals in more conditions than ResNet, suggesting that its additional hierarchical bias results in a better abstract number representation. Even the strongest models, however, often fail in conditions that test for the transfer of numerical skill to a new condition; for example, when models are trained on a stimulus with solid shapes but tested on a stimulus where shapes are not filled. Although models fail in such transfer conditions, we find that they do learn structured number representations, forming clusters that are ordered based on number identity. This suggests that, unlike humans and animals whose numerical skills generalize across different ecological contexts, vision encoders might require additional modeling innovations or a greater quantity and variety of data to use their learned knowledge in new situations.
## The Numerical Bisection Task
The numerical bisection task is used to assess the perception of numerical quantities in both animals and humans. First, a participant is trained to discriminate small and large sample numerosities by associating them with different responses (labels such as _few_ and _many_). For example, Emmerton, Lohmann, and Niemann (1997) train pigeons to respond to images with 1 or 2 shapes by pecking to the left (corresponding to _few_), and to the right for images with 6 or 7 shapes (corresponding to _many_). In Almeida, Arantes, and Machado (2007), children learn to pick a green cup for 2 drumbeats, or a blue cup for 8 drumbeats in one experiment, and to raise a red glove on their left hand after 2 drumbeats and a yellow glove on their right hand after 8 drumbeats in another experiment. The numerosities used for training (_e.g.,_ 1, 2, 6, 7, or 8 in the studies above) are often referred to as _anchor numerosities_.
Then, to probe number discrimination, participants are subsequently tested on intermediate numbers that are _not_ seen during training (_e.g.,_ 3 in the previous experiment). A participant is more likely to select the response associated with the larger anchor value (_e.g., many_), resulting in an s-shaped psychometric curve. Such s-shaped psychometric curves have been used to characterize basic numerical competence in rats (Meck & Church, 1983), pigeons (Honig & Stewart, 1989; Emmerton et al., 1997), rhesus macaques (Jordan & Brannon, 2006), as well as children and adults (Droit-Volet, Clement, & Fayol, 2003; Almeida et al., 2007; Jordan & Brannon, 2006). Qualitatively, psychometric curves documented in the literature have the following characteristics: (1) the initial segment with smaller numerosities is mostly labeled with _few_, (2) intermediate segment with a gradually increasing slope reflecting an increase in _many_ responses, (3) final segment with the largest numerosities mostly labeled with _many_. Although these properties characterize the majority of psychometric responses documented in the literature, between- and within-subject variability has been observed depending on the task and numerosity ranges (Almeida et al., 2007).
### Experimental Stimuli
We automatically generate images with black background and white circles varying the number of circles from 1 to 7. Similar to Emmerton et al. (1997), we use images with 1, 2, 6, or 7 circles as anchor numerosities for training. When designing stimuli, previous work has identified and controlled for potential perceptual confounds such as the size of the constituent elements (_i.e.,_ circles in our case), total white area, or total perimeter (Honig & Stewart, 1989; Testolin et al., 2020; Emmerton et al., 1997); processing these non-numerical features--which may be a confound in the observed numerical discrimination behavior--can develop independently of number processing, as has indeed been observed in children's developmental trajectory (Odic, 2018). To control for such potential confounds, we generate six different stimulus categories shown in Figure 1:
1. _Vary Size_. This is our most general setting, where for each image, we draw circles with radii sampled randomly from a set of 3 values (\(r=\{10,35,55\}\)).
2. _Constant Size_. We control for the size of circles--all circles have the same radius (\(r=20\)); this enables us to examine whether models can discriminate numbers better when circles are identical rather than varied in size.
3. _Constant Area_. In the previous condition (_Constant Size_), the white area (covered by circles) increases as the number of circles increases. We control for this potential confound by fixing the total white area to be constant across stimuli. This results in smaller circles in images that depict larger numbers.
4. _Constant Area (contour)_. We also examine whether a solid shape background has an impact on models' behavior; we consider a condition that is the same as _Constant Area_, but using contours instead of shapes with white background.
5. _Constant Circumference_. While the total area is controlled for in the _Constant Area_ condition, the total circumference of the circles still increases with numerosity. Here, we control for the total circumference by keeping it constant across stimuli.
6. _Constant Circumference (contour)_. It is the same as _Constant Circumference_, but using contours instead of shapes with white background.
Figure 1: Sample visual stimuli used in the numerical bisection task. Rows are different numerosities and columns different stimulus types. ”(C)” denotes ”contours”, as opposed to shapes with solid white background.
We generate the stimuli on-the-fly for both training and testing and store them in memory, with 100 images generated for each numerosity category of one stimulus type, resulting overall in 400 images for training and 1,100 images for testing for each stimulus type. The images are of the dimensionality expected by the models, _i.e.,_ \(224\times 224\times 3\).
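The paper does not provide the generation code; the sketch below illustrates one way such controlled stimuli could be produced in Python. The radius constants for the area- and circumference-controlled conditions (derived from a reference of four radius-20 circles) are our assumptions, and circles that cannot be placed without overlap are simply skipped in this sketch.

```python
import numpy as np
from PIL import Image, ImageDraw

def make_stimulus(n, mode="constant_size", contour=False, size=224, seed=None):
    """Draw n white circles on a 224x224 black image under a size control."""
    rng = np.random.default_rng(seed)
    if mode == "vary_size":
        radii = rng.choice([10, 35, 55], size=n)
    elif mode == "constant_size":
        radii = np.full(n, 20.0)
    elif mode == "constant_area":             # fix total white area (assumed reference)
        radii = np.full(n, 40.0 / np.sqrt(n))
    else:                                     # fix total circumference (assumed reference)
        radii = np.full(n, 80.0 / n)
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    placed = []
    for r in radii:
        for _ in range(1000):                 # rejection-sample a non-overlapping position
            x, y = rng.uniform(r, size - r, size=2)
            if all((x - px) ** 2 + (y - py) ** 2 > (r + pr) ** 2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                box = [x - r, y - r, x + r, y + r]
                if contour:
                    draw.ellipse(box, outline=255, width=2)
                else:
                    draw.ellipse(box, fill=255)
                break
    return np.stack([np.asarray(img)] * 3, axis=-1)   # shape (224, 224, 3)

few = make_stimulus(2, mode="constant_area", seed=0)   # label "few"
many = make_stimulus(7, mode="constant_area", seed=1)  # label "many"
```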
## Experimental Setup
In this section, we examine a few recent families of deep neural networks designed for computer vision (henceforth, vision encoders); all these models have achieved impressive results on computer vision tasks (such as image classification), but differ with respect to the inductive biases that their architectures encode. We first describe these models briefly, and then discuss the details of our experimental setup.
Models. We consider three types of vision encoders: ResNet [14], ViT [15], and Swin [17]. The ResNet model includes a stack of convolutional neural network (CNN) blocks that process images using convolution kernels. These kernels introduce an explicit _locality_ bias--pixels (or features, depending on the layer) that are spatially close are combined; as a result, a model with CNN blocks typically learns to encode low-level features (such as edges) in its first layers, and more high-level ones (such as parts) in its last layers.
Both ViT and Swin use Transformer blocks [21] consisting of feed-forward layers and a _self-attention_ mechanism; self-attention introduces a weaker and less explicit _locality_ bias (compared to CNNs) as a model can learn to group neighboring image patches.1 Swin builds on ViT and introduces an explicit _hierarchical_ bias by modifying how self-attention is applied across different layers; more specifically, local image patches are merged at various stages as the depth of the model increases, resulting in a hierarchical representation.
Footnote 1: Self-attention is designed for sequential data such as language; thus, it is less suitable for modelling the two-dimensional spatial relations among image patches.
We use specific variants of ResNet, ViT, and Swin encoders: the ResNet-50 variant with 25.6M parameters [14], ViT-B [15] with 86M parameters, and "tiny" Swin, Swin-T [16], with 29M parameters. We picked the smallest ViT and Swin variants, and a ResNet model that has a similar number of parameters to Swin.
Finally, as a simple baseline, we consider a generic feed-forward multi-layer perceptron (MLP) that does not include inductive biases such as convolutions or attention, which are known to be helpful for processing real-world images. We use an MLP consisting of 2 hidden layers with 256 units each, separated by ReLU non-linearities, and a final linear layer with 2 units. With 0.13M parameters and no "bells and whistles", this makes it a substantially smaller and computationally inexpensive baseline model.
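A minimal PyTorch sketch of such a baseline is given below. The paper does not state how the \(224\times 224\times 3\) images are fed to the MLP; the grayscale \(16\times 16\) pooling used here is our assumption, chosen so that the parameter count comes out to roughly the reported 0.13M.

```python
import torch
import torch.nn as nn

class MLPBaseline(nn.Module):
    """Two 256-unit ReLU hidden layers and a 2-unit head ("few" vs. "many")."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((16, 16))   # assumed input reduction
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2),                       # logits for {few, many}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.mean(dim=1, keepdim=True)              # RGB -> grayscale
        return self.net(self.pool(x))

model = MLPBaseline()
n_params = sum(p.numel() for p in model.parameters())
print(model(torch.randn(16, 3, 224, 224)).shape, f"{n_params / 1e6:.2f}M params")
```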
Training. For each stimulus type (_e.g., Constant Area_), we train the ResNet, ViT, Swin, and MLP models on data generated for that stimulus, _i.e.,_ images and their labels (_few_ and _many_). More specifically, we add a classification head to these models to predict the label _few_ for images with 1 and 2 circles, and _many_ for images with 6 and 7 circles, where labels are encoded as one-hot vectors. All models are trained with a cross-entropy loss and L2 regularization. To get an estimate of the variability in model responses for each stimulus category, we train 10 networks, each with a different seed that randomly initializes the network weights.
We perform a hyper-parameter search over the batch size, number of steps, learning rate, and optimizer type to find combinations where the training loss has converged on the validation set, and where a network achieves close to 100% accuracy on the training set. Accuracy is defined as the percentage of correctly classified labels.2
Footnote 2: We find that a batch size of 16 and 5,000 steps worked well for all models, although the losses of some models (_e.g.,_ Swin and ResNet) converged much faster. We use the Adam optimizer [15] for the MLP, ViT, and Swin models, with learning rates of 1e-04, 5e-04, and 5e-05, respectively. We use the SGD optimizer with a learning rate of 1e-2 for ResNet. The models are trained using an NVIDIA Tesla V100 GPU.
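The following sketch summarizes this training configuration; the stand-in linear model, the placeholder random batches, and the weight-decay strength used to realize the L2 regularization are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# `model` stands in for any of the four encoders with a 2-unit head.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
# Cross-entropy loss; L2 regularization via weight decay (strength assumed).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

for step in range(100):                       # the paper uses 5,000 steps
    images = torch.randn(16, 3, 224, 224)     # placeholder stimulus batch
    labels = torch.randint(0, 2, (16,))       # 0 = "few", 1 = "many"
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```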
Testing. We test the models on new images of the anchor numerosities (_i.e.,_ images _not_ seen during training), as well as images of novel interpolated numerosities: 3, 4, and 5. We use 100 images for each numerosity category and each stimulus type.
## Experimental Results
In Experiment 1, we investigate models' behavior when trained and tested on the same stimulus type. In Experiment 2, we investigate transfer of the number discrimination skill by testing models on images from a stimulus category that is not used in training (_i.e.,_ train on _Constant Size_, and test on _Constant Area_).
### Experiment 1: Number Discrimination
In this experiment, we test number discrimination using images from the same stimulus category that is used during training (_e.g.,_ train on _Constant Size_, and test on _Constant Size_). We evaluate models based on their accuracy on the stimulus test set, and on the quantitative and qualitative characteristics of the psychometric curves in relation to the empirical data.
Performance on seen numbers. We first examine the performance of the four architectures when tested on novel images of the anchor numerosities (seen during training): 1, 2, 6, 7. An error occurs when an image with a small numerosity
(_i.e.,_ 1 or 2 circles) is classified as _many_, or when an image with a large numerosity (_i.e.,_ 6 or 7) is classified as _few_. Average error rates for each network and each stimulus type are shown in Table 1. Overall, we observe that ResNet and Swin, with image-specific inductive biases, have the smallest mean error rates, below 1% in 6/12 and 11/12 conditions, respectively. ViT has mean error rates that are in some cases comparable to, or even exceed, the errors of the MLP baseline. When averaged across all 4 numerosities, we find that the highest error rates are consistently observed with the _Constant Circumference (contour)_ stimulus category (see Table 1, column "Total Error"), suggesting that this combination of visual features represents the most challenging dataset for number abstraction. Meanwhile, no such consistent pattern exists for the datasets resulting in the smallest errors--the smallest error for ResNet is observed with _Constant Size_, for ViT and Swin with _Constant Circumference_, and for the MLP with _Constant Area_. This observation is not surprising given that these models encode different inductive biases.
Performance on new numbers. Next, we examine how different models perform on numbers not seen at training time (_i.e.,_ 3, 4, and 5). We plot the psychometric curves for selected stimuli, showing percentages of _many_ responses across numerosities for models trained on those stimuli, in Figure 2. We selected these stimulus categories as representative of the easiest (_Constant Size_, _Constant Circumference_) and hardest (_Constant Circumference (contour)_) conditions based on the average error rates in Table 1. Different from Table 1, each value on the y-axis represents the proportion of _many_ responses for a certain numerosity (x-axis).
Overall, some curves in Fig. 2 exhibit the characteristics of typical psychometric functions as discussed in Sec. The Numerical Bisection Task--specifically, for small numerosities 1, 2, and sometimes 3, we observe a slowly accelerating initial segment, followed by a gradual increase over intermediate numbers, and a slowly decelerating final segment for the larger numerosities (6, 7). Examples of such curve profiles are the Swin and ResNet responses to the _Constant Size_ and _Constant Circumference_ stimulus categories. However, there are also curves with atypical flat shapes indicative of a failure to learn this task, _i.e.,_ shapes not found in the literature. Out of all 24 curves we analyzed, 4 in total exhibit such a shape, with 3 shown in Fig. 2 (_i.e.,_ MLP with _Constant Circumference (contour)_, ViT with _Constant Circumference (contour)_). Such curves either do not start at, or do not end at, the expected values, indicating a lack of sensitivity to number categories, some of which is also evident from the large error values in Table 1.
### Experiment 2: Transfer to Novel Stimuli
The previous experiment demonstrates that the Swin and ResNet architectures appear largely able to differentiate the number of items in an image when trained and tested on the same stimulus type (_e.g.,_ constant total area or circumference). To understand whether our models have indeed developed a notion of a number category, as opposed to learning a given stimulus, we draw a parallel with research in animal cognition and examine if the models "_base their behaviour on the numerosity of a set, independent of its other attributes_" (Gallistel & Gelman, 1992). In other words, if models learn an abstract representation of a number category, we would expect this representation to be agnostic to perceptual features of the stimulus. To test this, we examine models in a cross-stimulus transfer setting: we train a model on one set of stimuli, but test it on other types of stimuli (_i.e.,_ train on _Constant Size_, test on _Constant Circumference_). The test stimuli are _out of distribution (OOD)_ with respect to the model's training distribution. Compared to the _in distribution_ setting where training and test are drawn from the same distribution, the OOD setting is known to be challenging for neural networks (Geirhos et al., 2020).3
\begin{table}
\begin{tabular}{l|rrrr|rrrr|rrrr}
 & \multicolumn{4}{c|}{Few (1, 2)} & \multicolumn{4}{c|}{Many (6, 7)} & \multicolumn{4}{c}{Total Error (Few+Many)} \\
 & ResNet & ViT & Swin & MLP & ResNet & ViT & Swin & MLP & ResNet & ViT & Swin & MLP \\ \hline
Vary Size & 3.0 & 11.8 & 0.2 & **17.4** & 3.6 & **11.9** & 0.6 & 7.8 & 3.3 & 11.8 & 0.4 & **12.6** \\
Const. Size & 0.0 & **35.5** & 0.0 & 32.1 & 0.0 & 0.3 & 0.1 & **5.0** & 0.0 & 17.9 & 0.0 & **18.6** \\
Const. Area & 1.1 & **33.0** & 0.0 & 7.5 & 0.8 & **7.2** & 0.1 & 2.8 & 0.9 & **20.1** & 0.1 & 5.1 \\
Const. Area (C) & 0.4 & 8.0 & 0.0 & **63.2** & 1.5 & **12.8** & 0.3 & 10.1 & 0.9 & 10.4 & 0.1 & **36.6** \\
Const. Circ. & 0.6 & 0.4 & 0.0 & 3.5 & 0.0 & 3.5 & 0.0 & **51.8** & 0.3 & 1.9 & 0.0 & **27.6** \\
Const. Circ. (C) & 8.7 & **40.8** & 0.7 & 31.1 & 9.9 & 35.0 & 12.8 & **44.6** & 9.3 & **37.9** & 6.8 & **37.9** \\ \hline
\end{tabular}
\end{table}
Table 1: Error rates (%) in classifying anchor numerosities as either “few” or “many” on respective test sets. Highest error rates for each stimulus type and each anchor numerosity are highlighted. Highest total error rates across stimuli for each model are underlined.
Figure 2: Psychometric functions for each model trained and tested on one stimulus type on the numerical bisection task. Vertical bars are 95% bootstrapped CIs.
Figures 3A) and 3B) show a selected subset of psychometric curves evaluated using this cross-stimulus protocol, for models trained on _Vary Size_ and _Constant Circumference_. Orange curves denote cases where a model is trained and tested on the same stimulus category (_i.e.,_ the protocol from Exp. 1) and are included for reference, while blue curves are obtained when models are tested on datasets different from the ones they are trained on. We report results on _Vary Size_ and _Constant Circumference_ since they exhibit the most successful (_Vary Size_) and least successful (_Constant Circumference_) transfer cases, as defined by the expected qualitative characteristics of psychometric curves in Sec. The Numerical Bisection Task. Even for the best-matching condition _Vary Size_ (Fig. 3A), we observe a number of _transfer failures_, where a trained model shows poor transfer of the number discrimination ability to a novel stimulus category, revealing a failure to abstract the number category.
An interesting case of transfer failure is evident in _all-or-none_ responses, where models uniformly assign either the _few_ or the _many_ response to all numerosities. This has been observed across all models, and is particularly prominent with models trained on _Constant Circumference_ (Fig. 3 B). In some cases, most notably with the MLP and to a smaller extent with ViT, we also observe flatter curves with smaller slopes, resulting from frequent misclassification of _many_ responses as _few_ and vice versa. Finally, we also observe a new response pattern, an _inverted_ psychometric curve, where small numerosities are overwhelmingly assigned the label _many_, and the opposite for large numerosities. Fig. 3B) shows that this pattern is consistent across models trained on _Constant Circumference_. We conjecture that this is due to models latching onto the total white area during training, which is inversely correlated with numerosity in _Constant Circumference_.
Next, we consider an easier case of transfer, and examine if exposing models to a more diverse set of stimuli (as opposed to one type of stimulus) can help in learning a better representation of number categories; we train models on all but one stimulus type, and evaluate them on the held-out stimulus type. Instead of 100 images per number category, a model sees 500 images per number category (_i.e.,_ 100 images per number for each of the 5 training stimulus types). As shown in Fig. 3C), increasing data variability results in more curves that resemble the expected s-shaped curve, especially for ResNet and Swin. However, even then, models fail to generalize to _Constant Circumference (contour)_, confirming again the difficulty of this stimulus category. Overall, we find that Swin and ResNet produce representations that better match the observed empirical data even in the more challenging transfer setting, and that training models on a variety of stimulus types helps in generalizing to new stimuli.
Finally, we examine if the learned number representations form meaningful clusters; to answer this question, we do a forward pass on images from a given stimulus category for the two models with the lowest error rates (_i.e.,_ ResNet and Swin). For each image, we extract embeddings from the last dense layer of the model, prior to the 2-unit classification head. We use PCA followed by the t-SNE [22] dimensionality reduction method to project the high-dimensional embedding vectors (2,048 for ResNet and 768 for Swin) into 2D space. In Fig. 4A) we show one selected example of such a projection, where individual points have been color-coded based on the numerosity of the stimulus image. First, we observe that embeddings cluster in groups based on number, with greater cluster overlap for subsequent numbers. Second, we observe an ordering of clusters based on numerosity. This type of pattern is observed more often with embeddings from Swin, compared to ResNet embeddings, which generally result in less discernible clusters (with the exception of clusters for numerosities 1 and 2). Interestingly, based on visual inspection of the data, we do not find that more distinct projections imply better performance on the task. For example, while the clusters in Fig. 4A) seem to be discernible based on number, the model performs poorly when tested on _Constant Size_, possibly because the classifier does not discriminate based on the dimensions that are discriminable in the embeddings.
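A sketch of this projection pipeline is shown below; the random matrix stands in for the extracted Swin embeddings, and the intermediate 50-component PCA stage is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Placeholder embeddings: 100 images per numerosity (1-7), 768-dim (Swin).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((700, 768))
numerosity = np.repeat(np.arange(1, 8), 100)    # color-coding labels

# PCA first (assumed 50 components), then t-SNE to 2D.
reduced = PCA(n_components=50).fit_transform(embeddings)
proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(reduced)
print(proj.shape)                               # (700, 2) points to scatter-plot
```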
Figure 3: Psychometric curves in transfer experiments. Bars are standard errors of the mean. Models are represented across rows, and selected training datasets in columns. Shades of blue indicate the test stimulus, orange curves denote train and test on the same stimulus type.
## Discriminability of Small vs. Large Numbers
Empirical data from humans and animals shows that in the numerical bisection task, it is consistently more difficult to distinguish larger numerosities from each other, compared to smaller ones (Almeida et al., 2007; Emmerton et al., 1997). This observation is likely to be related to a more general finding in numerical cognition that small numbers are processed differently than larger numbers (Dehaene, 1997; Revkin et al., 2008).
We examine whether similar observations can be made for our models' responses, using a measure of discriminability for different pairs of numbers. As an example, for a given model and a pair of numbers (such as 5 and 6), we statistically test whether the mean percentage of _many_ responses for images of one number (5) is the same as that for images of the other number (6). Intuitively, the two numbers are harder to discriminate if their mean percentages of _many_ responses are the same. We consider the models with the smallest test error rates in Experiment 1 (_i.e.,_ ResNet and Swin). For a given model and each stimulus type, we consider all possible number pairs (_i.e.,_ all points on the average psychometric curves) and perform Tukey's HSD test for multiple pair-wise comparisons of means (with family-wise error rate FWER=.05). This approach is based on similar statistical tests used with pigeon responses in Emmerton et al. (1997).
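A sketch of this pairwise test is given below, using `statsmodels`; the simulated s-shaped responses are placeholders for the models' actual _many_/_few_ outputs.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated responses: 100 images per numerosity, s-shaped "many" rate.
rng = np.random.default_rng(0)
numerosity = np.repeat(np.arange(1, 8), 100)
p_many = 1 / (1 + np.exp(-(numerosity - 4)))          # placeholder response rate
responses = rng.binomial(1, p_many).astype(float)     # 1 = "many", 0 = "few"

# Tukey's HSD over all number pairs with family-wise error rate 0.05.
result = pairwise_tukeyhsd(endog=responses, groups=numerosity, alpha=0.05)
print(result.summary())   # pairs failing to reject H0 are hard to discriminate
```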
In Fig. 4B), we show the breakdown of 18 cases (out of total 228 comparisons) where we failed to reject the null hypothesis across number pairs; intuitively, the models find it difficult to discriminate between these number pairs.4 The figure shows that pairs at the higher end of the numerical range, such as (5, 7) and (5, 6) are frequently indistinguishable which is in contrast to the pairs on the lower end of the numerical scale. We conclude that similar to the empirical data, Swin and ResNet better distinguish numerosities at the lower end of the number range compared to those of the higher end of number ranges. Moreover, this effect is stronger among Swin responses compared to ResNet, suggesting that number representations learned by Swin are more discernible.
Footnote 4: From this plot we excluded 24 comparisons for number pairs (1, 2) and (6, 7) since means within these pairs are the same by the design of networks’ training objective.
## Discussion
Number discrimination is a core aspect of basic numerical competence in humans and animals. We investigate if recent, state-of-the-art neural networks used in computer vision exhibit the capacity of discriminating between small and large quantities. We evaluate these models on the numerical bisection task where models learn to categorize numerosity of sets of items, and we investigate their performance on novel stimuli and novel numerosities.
We find that ResNet and Swin, the two models with vision-specific inductive biases, achieve the smallest errors when categorizing novel stimuli as _few_ or _many_. Psychometric curves of models trained on a wide range of stimuli, as well as those of models trained and tested on the same type of stimuli, often resemble the response curves of animals and humans on the same task. In addition, Swin responses are more discernible for smaller numbers compared to larger numbers, and its internal representations are structured in a way that reflects number category and order. Swin's predecessor, ViT, which is also a transformer-based model, albeit with a weak image-specific inductive bias, has errors on the task that are comparable to, or even higher than, those of the basic, substantially smaller MLP baseline. This is surprising considering that the performance of ViT is within a few percentage points of Swin's performance on different computer vision benchmarks (Liu et al., 2021).
Finally, when perceptual attributes are controlled for (_e.g.,_ keeping the total white area constant during training, but varying it during testing), most of these models show poor transfer of the number discrimination skill. This might mean that the models latch onto features that, while correlated with number, are considered non-numerical in the literature (Honig & Stewart, 1989; Testolin et al., 2020). When analyzing the internal representations of number in Swin, we find that, despite poor transfer, the number representations are often structured in an interpretable way. In other words, although these representations could in theory support number discrimination, we do not observe this in practice. One possible reason for poor transfer might be that the models are trained in a limited data regime, in contrast to humans and animals, whose numerical cognition develops gradually in a rich environmental context and who might be biologically predisposed to represent and process numerical quantities (Dehaene et al., 1998). Future work should explore whether pretraining models on larger and more diverse sets of images would result in a more transferable skill. Finally, we only investigate one specific task--the numerical bisection task--and it remains to be explored whether our findings generalize across other perceptual domains.
## Acknowledgments
The authors would like to thank Stephanie Chan for detailed feedback on this manuscript as well as other colleagues at DeepMind for feedback and discussions that helped improve this work.
Figure 4: A) t-SNE projections of _Constant Size_ image embeddings for Swin trained on _Constant Area_. B) Numerosities of image pairs for which there was no significant difference in the percentage of “many” responses.
|
2302.03997 | SimCGNN: Simple Contrastive Graph Neural Network for Session-based
Recommendation | Session-based recommendation (SBR) problem, which focuses on next-item
prediction for anonymous users, has received increasingly more attention from
researchers. Existing graph-based SBR methods all lack the ability to
differentiate between sessions with the same last item, and suffer from severe
popularity bias. Inspired by recently emerging contrastive learning methods,
this paper presents a Simple Contrastive Graph Neural Network for Session-based
Recommendation (SimCGNN). In SimCGNN, we first obtain normalized session
embeddings on constructed session graphs. We next construct positive and
negative samples of the sessions by two forward propagations and a novel
negative sample selection strategy, and then calculate the contrastive loss.
Finally, session embeddings are used to give predictions. Extensive experiments
conducted on two real-world datasets show our SimCGNN achieves a significant
improvement over state-of-the-art methods. | Yuan Cao, Xudong Zhang, Fan Zhang, Feifei Kou, Josiah Poon, Xiongnan Jin, Yongheng Wang, Jinpeng Chen | 2023-02-08T11:13:22Z | http://arxiv.org/abs/2302.03997v1 | # SimCGNN: Simple Contrastive Graph Neural Network for Session-based Recommendation
###### Abstract
Session-based recommendation (SBR), which focuses on next-item prediction for anonymous users, has received increasingly more attention from researchers. Existing graph-based SBR methods all lack the ability to differentiate between sessions with the same last item and suffer from severe popularity bias. Inspired by recently emerging contrastive learning methods, this paper presents a Simple Contrastive Graph Neural Network for Session-based Recommendation (SimCGNN). In _SimCGNN_, we first obtain normalized session embeddings on constructed session graphs. We next construct positive and negative samples of the sessions via two forward propagations and a novel negative sample selection strategy, and then calculate the contrastive loss. Finally, the session embeddings are used to give predictions. Extensive experiments conducted on two real-world datasets show our _SimCGNN_ achieves a significant improvement over state-of-the-art methods.
## 1 Introduction
With the progressive development of the contemporary internet and the explosion of online information, recommender systems have become an essential component. Sequential recommender systems model the dynamic development of preferences and take both user-level and item-level information into consideration. In certain scenarios, however, the user can be anonymous, which means that we only have access to item-level features while user-level features are not visible. Session-based recommenders, which observe only the current session rather than sufficient historical user-item interaction records, have therefore drawn attention in recent years.
To capture sequential relationships, Markov Chain (MC)-based sequential recommenders[1][1] were the first to be proposed. However, due to their limited representation ability and strong reliance upon the last interacted item in each session, their performance is limited. Recurrent neural networks (RNNs) were then introduced into session-based recommendation thanks to their natural ability to model sequential information. With the introduction of GRU4Rec[1], RNNs became the structure of choice for the session recommendation problem; for example, NARM[1] designs two RNNs to capture a user's global and local sequential preferences, respectively. This changed with SR-GNN[21], which utilized a gated graph neural network (GGNN) to extract sequential information on a session graph. Since then, works following the basic schema of SR-GNN have been proposed. TAGNN[21] added a target-attentive mechanism to SR-GNN and gained promising performance. Also, as SR-GNN focuses only on the local session, methods like FGNN[22] and GCE-GNN[20] both utilize global information in session-based recommendation, but in different ways.
Figure 1: We trained SR-GNN on the Diginetica dataset and selected a set of sessions (127 in total) with the same last-interacted item (_item 33889_, the most commonly occurring last item) from the Diginetica dataset for prediction. This line graph counts the number of occurrences of different items in the ground truth and the model's predicted values.
Although all the methods mentioned above achieve favorable results, they retain a drawback inherited from the MC-based approaches: a strong dependency upon the last interacted item. Take the classical SR-GNN as an example: when assembling the final session embedding, it simply applies a linear transformation to the concatenation of the embedding vector of the last item and the global embedding vector. As a result, such methods may have trouble distinguishing between sessions that have the same last interacted item. As shown in Fig.1, we first trained an SR-GNN on the Diginetica dataset and then visualized the distribution of predicted labels and ground-truth labels for sessions that share the same last-interacted item. This reveals that methods with this assembling technique tend to predict the same items for sessions with the same last item, despite the underlying diversity. As mentioned earlier, a large branch of existing graph-based methods ([23, 20] for example) follows the same approach as SR-GNN in the final assembly of the session embedding, so this phenomenon is also widespread among current advanced graph-based SBR methods. In this paper, we refer to this phenomenon as the "same last-item confusion" problem. Apart from this, Fig.1 also supports the hypothesis proposed by [1] that the predictions of SR-GNN are likely to be affected by popularity bias.
To cope with the same last-item confusion problem, we intuitively aim to increase the discrepancy between sessions with the same last item, which naturally fits the schema of contrastive learning by treating sessions with the same last item as negative samples. However, as positive samples are hard to define, we simply follow the idea of [1] by treating the session itself as a positive sample through dropout[21] techniques. To this end, we propose a novel session-based recommender, namely the Simple Contrastive Graph Neural Network for Session-based Recommendation (_SimCGNN_). Firstly, in order to deal with the "same last-item confusion" problem, we design a novel contrastive module to increase the discrepancy between sessions with the same last interacted item. Secondly, to eliminate the advantage of popular items in the final prediction and to capture the underlying interests of the users themselves as much as possible, we normalize both the item and session embeddings.
Our main contributions in this paper are listed as follows.
1) We introduce a contrastive learning approach to solve the same last-item confusion problem and propose a novel negative sample selection strategy.
2) To alleviate popularity bias, we propose normalized item and session embeddings.
3) Extensive experiments conducted on real-world datasets show that SimCGNN outperforms state-of-the-art methods, and additional experiments demonstrate the effectiveness of our approach for both of these problems.
## 2 Related Work
### Traditional Methods
A series of approaches [19, 2, 18] based on matrix factorization (MF) are representative of the traditional methods. MF-based methods consider the user-item interaction matrix too sparse and decompose it into two low-rank dense matrices corresponding to users and items, respectively. However, MF-based approaches do not model users' sequential interaction behavior and are therefore not suitable for making sequential recommendations.
Given that static matrix factorization methods do not model sequential behavior, FPMC[1] models user behavior as a Markov Chain and combines Markov Chains with MF. FOSSIL[17] improves on FPMC by introducing factorized sequential prediction with an item similarity model and higher-order Markov Chains, so that both long-term and short-term user behavior are taken into account. However, due to the limited expressive capability of their shallow networks, they do not perform well in today's more complex recommendation scenarios.
### Deep Learning-Based Methods
In recent years, as deep-learning methods emerged, deep learning-based recommendation methods[19, 20] utilizing recurrent neural networks (RNNs) have been on the rise. GRU4Rec[1] is a typical pioneer in utilizing the RNN structure for making sequential recommendations. After that, more attempts have been made to perform sequential recommendation with RNNs. Tan et al.[18] improved the performance of RNN recommendation models by proposing several data augmentation and training techniques. Li et al.[19] proposed a neural attentive recommendation machine with an RNN-based encoder-decoder structure to capture the user's sequential preference. STAMP[14], which discards the RNN structure in favor of simple MLPs and attention mechanisms, proved to be efficient in capturing both users' static and dynamic interests. Although deep learning-based models have more powerful representation capabilities than traditional methods, these approaches still cannot model complex item relationships, for example, non-adjacent item transitions.
### Neural Networks on Graphs
Recently, thanks to the rise of graph neural networks[21, 22, 23, 24], many approaches[25, 19, 18] use GNN-based network structures to solve SBR problems. SR-GNN[20] is one of the earliest and most representative ones, which models each session as a session graph, and applies a gated-GNN[17] to finally get a representative embedding of the session. After the great success of SR-GNN, many variants of SR-GNN have been proposed. For example, TAGNN[20] proposed a target-aware attention mechanism upon SR-GNN, which adaptively
activates different user interests concerning varied target items. FGNN[21] takes both the sequence order and the latent order in the session graph into consideration. Disen-GNN[11] constructs a disentangled session graph to discover the underlying session purpose. \(S^{2}\)-DHCN[14] leverages hypergraph techniques to represent each session and utilizes contrastive learning techniques to perform self-supervised learning. Instead of utilizing local session graphs only, methods such as GCE-GNN[20] construct global graphs to obtain global information from the dataset. However, as mentioned in the previous section, all of these methods suffer from a serious "same last-item confusion" problem and thus from performance degradation.
## 3 Methodology
### Problem Formulation
A session-based recommender is supposed to give recommendations to users based on their anonymous historical interaction sequences, e.g., clicks or purchases. Given the item set \(\mathcal{V}=\{v_{1},v_{2},...,v_{m}\}\), an anonymous interaction session \(s_{i}\in\mathcal{S}\) is an item sequence \(s_{i}=[v_{1}^{i},v_{2}^{i},...,v_{|s_{i}|}^{i}]\), where \(v_{j}^{i}\) is the \(j\)-th interacted item of the \(i\)-th session. Given session \(s_{i}\), our goal is to predict the next item \(v_{|s_{i}|+1}^{i}\).
### Overview
The overall workflow of our proposed SimCGNN is illustrated in Fig.2.
### Session Feature Extraction Module
**Normalized Item Embedding**
Since the one-hot vectors of items are sparse and high-dimensional and do not carry pairwise distance information, we first embed each item \(v_{i}\in\mathcal{V}\) into a \(d\)-dimensional representation \(\mathbf{v}_{i}\in\mathbb{R}^{d}\).
As some existing methods directly utilize \(\mathbf{v}_{i}\) for downstream tasks, we argue that the direct use of the embedded vectors leads to popularity bias [1]. To this end, we add \(L_{2}\) normalization to the raw feature vector. Apart from this, we also apply Dropout directly on the embedding layer for the downstream contrastive module. The normalized vectors \(\hat{\mathbf{v}}_{i}\) are given as follows:
\[\hat{\mathbf{v}}_{i}=Dropout(Norm(\mathbf{v}_{i}),p) \tag{1}\]
where \(p\) is the dropout probability, \(Norm(\cdot)\) is the L2-normalization function.
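A minimal PyTorch sketch of Eq. (1) is given below; the vocabulary size, embedding dimension \(d=100\), and dropout rate \(p=0.1\) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedItemEmbedding(nn.Module):
    """Eq. (1): L2-normalize raw item embeddings, then apply Dropout."""
    def __init__(self, n_items: int, d: int = 100, p: float = 0.1):
        super().__init__()
        self.emb = nn.Embedding(n_items, d)
        self.dropout = nn.Dropout(p)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        v = self.emb(item_ids)
        return self.dropout(F.normalize(v, p=2, dim=-1))   # v_hat

emb = NormalizedItemEmbedding(n_items=10000)   # placeholder vocabulary size
v_hat = emb(torch.tensor([[1, 5, 5, 9]]))      # one session of 4 clicks
print(v_hat.shape)                             # torch.Size([1, 4, 100])
```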
**Session Graph Construction**
To explore the rich transitions among items and generate accurate latent vectors of items, we model each session \(s\) as a directed graph \(\mathcal{G}_{s}=(\mathcal{V}_{s},\mathcal{E}_{s})\), where each node \(v_{s,i}\in\mathcal{V}_{s}\) represents an item and each edge \((v_{s,i-1},v_{s,i})\in\mathcal{E}_{s}\) represents a user interacting with \(v_{s,i}\) after the interaction with \(v_{s,i-1}\). To deal with items that occur multiple times in sessions, we assign a normalized weight to each edge, calculated as the number of occurrences of the corresponding interaction \((v_{s,i-1},v_{s,i})\) divided by the out-degree of the starting node \(v_{s,i-1}\). We use the previously obtained embedding vectors \(\hat{\mathbf{v}}_{i}\) as the initial states of the nodes in the constructed session graph. After the construction, we obtain the outgoing adjacency matrix \(\mathbf{A}_{s}^{out}\in\mathbb{R}^{n\times n}\) and the incoming adjacency matrix \(\mathbf{A}_{s}^{in}\in\mathbb{R}^{n\times n}\), where \(n=|\mathcal{V}_{s}|\). By concatenating \(\mathbf{A}_{s}^{out}\) and \(\mathbf{A}_{s}^{in}\), we obtain the final connection matrix \(\mathbf{A}_{s}\in\mathbb{R}^{n\times 2n}\) of the session graph.
**Learning Graph-based Item Embeddings**
Once the session graph has been constructed, a graph neural network is needed to extract the structural information from the graph. In _SimCGNN_, we leverage a gated graph neural network (GGNN) to learn the node vectors in a session graph. Formally, for node \(v_{s,i}\) in graph \(\mathcal{G}_{s}\), the update functions can be formulated as,
\[\mathbf{a}_{s,i}^{t}=\mathbf{A}_{s,i:}\left[\mathbf{v}_{1}^{t-1},\mathbf{v}_{2}^{t-1},...,\mathbf{v}_{|s_{i}|}^{t-1}\right]^{\top}\mathbf{H}+\mathbf{b}, \tag{2}\]
\[\mathbf{z}_{s,i}^{t}=\sigma\left(\mathbf{W}_{z}\mathbf{a}_{s,i}^{t}+\mathbf{U}_{z}\mathbf{v}_{i}^{t-1}\right), \tag{3}\]
\[\mathbf{r}_{s,i}^{t}=\sigma\left(\mathbf{W}_{r}\mathbf{a}_{s,i}^{t}+\mathbf{U}_{r}\mathbf{v}_{i}^{t-1}\right), \tag{4}\]
\[\overline{\mathbf{v}}_{i}^{t}=\tanh\left(\mathbf{W}_{o}\mathbf{a}_{s,i}^{t}+\mathbf{U}_{o}\left(\mathbf{r}_{s,i}^{t}\odot\mathbf{v}_{i}^{t-1}\right)\right), \tag{5}\]
\[\mathbf{v}_{i}^{t}=\left(1-\mathbf{z}_{s,i}^{t}\right)\odot\mathbf{v}_{i}^{t-1}+\mathbf{z}_{s,i}^{t}\odot\overline{\mathbf{v}}_{i}^{t}, \tag{6}\]
where \(t\) is the training step, \(\mathbf{A}_{s,i:}\) is the \(i\)-th row of matrix \(\mathbf{A}_{s}\) corresponding to \(v_{s,i}\), \(\mathbf{H}\in\mathbb{R}^{d\times 2d}\) and \(\mathbf{b}\) are trainable parameters, \(\mathbf{z}_{s,i}^{t}\) and \(\mathbf{r}_{s,i}^{t}\) are the update and reset gates, respectively, \(\left[\mathbf{v}_{1}^{t-1},\mathbf{v}_{2}^{t-1},...,\mathbf{v}_{|s_{i}|}^{t-1}\right]\) is the list of item vectors in \(s_{i}\), \(\sigma(\cdot)\) is the sigmoid function, and \(\odot\) is the Hadamard product operator.
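The sketch below shows one GGNN propagation step in the style of common SR-GNN implementations, where the product with \(\mathbf{A}_{s}\) and \(\mathbf{H}\) is realized as separate linear maps over the incoming and outgoing adjacencies, and Eqs. (3)-(6) are fused into a GRU cell; this decomposition is our assumption about the implementation, not taken from the paper.

```python
import torch
import torch.nn as nn

class GGNNStep(nn.Module):
    """One gated propagation step over a session graph (Eqs. 2-6)."""
    def __init__(self, d: int):
        super().__init__()
        self.lin_in = nn.Linear(d, d)
        self.lin_out = nn.Linear(d, d)
        self.gru = nn.GRUCell(2 * d, d)   # update/reset gates of Eqs. (3)-(6)

    def forward(self, A_in, A_out, V):    # A_*: (n, n), V: (n, d)
        # Eq. (2): message from weighted in- and out-neighbors, concatenated.
        a = torch.cat([A_in @ self.lin_in(V), A_out @ self.lin_out(V)], dim=-1)
        return self.gru(a, V)             # gated update of node states

d, n = 100, 4
step = GGNNStep(d)
A_in, A_out = torch.rand(n, n), torch.rand(n, n)   # placeholder adjacencies
V = torch.randn(n, d)                              # initial node states v_hat
for _ in range(2):                                 # l = 2 propagation steps
    V = step(A_in, A_out, V)
print(V.shape)                                     # torch.Size([4, 100])
```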
In order to add the necessary noise to the resulting graph-based item representation vectors, we also apply Dropout to the final output vectors \(\mathbf{v}_{i}^{l}\) after \(l\) GNN layers. Also, since no positional information is embedded in the graph neural network, we apply the usual positional embedding to the generated item vectors. The final graph-based item embeddings \(\widetilde{\mathbf{v}}_{i}\) are calculated as,
\[\widetilde{\mathbf{v}}_{i}=\mathbf{w}_{pos(v_{i})}+Dropout(\mathbf{v}_{i},p), \tag{7}\]
where \(\mathbf{w}_{pos(v_{i})}\in\mathbb{R}^{d}\) is the trainable positional embedding, \(pos(v_{i})\) is the absolute position of the item in the session, and \(p\) is the Dropout probability.
**Generating Session Embeddings**
In this section, we consider constructing the session representation vector by combining long-term preference and short-term preference.
First, intuitively, we can represent the session's short-term preference by its last interacted item. Thus, for session \(s_{i}=[v_{s,1},v_{s,2},...,v_{s,n}]\), the short-term session embedding \(\mathbf{s}_{i,s}\) is defined as \(\widetilde{\mathbf{v}}_{n}\).
As for the long-term preference, we consider the long-term session embedding of session graph \(\mathcal{G}_{s}\) by aggregating all node vectors. Meanwhile, as different interaction records may own different levels of priority, we utilize the attention mechanism to gain the long-term session preference \(\mathbf{s}_{i,l}\) as
follows,
\[\alpha_{i} =\mathrm{softmax}(\mathbf{q}^{\top}\sigma(\mathbf{W}_{1}\widetilde{\mathbf{ v}}_{n}+\mathbf{W}_{2}\widetilde{\mathbf{v}}_{i}+\mathbf{b})), \tag{8}\] \[\mathbf{s}_{i,l} =\sum_{i=1}^{n}\alpha_{i}\widetilde{\mathbf{v}}_{i}, \tag{9}\]
where \(\mathbf{q}\in\mathbb{R}^{d}\) and \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{d\times d}\) are trainable weights.
Finally, we obtain the final hybrid embedding \(\mathbf{s}_{i,h}\) by applying a linear transformation over the concatenation of the short-term and long-term session embeddings.
\[\mathbf{s}_{i,h}=\mathbf{W}_{3}[\mathbf{s}_{i,l};\mathbf{s}_{i,s}], \tag{10}\]
where \(\mathbf{W}_{3}\in\mathbb{R}^{d\times 2d}\) is a trainable parameter and \([\cdot;\cdot]\) represents the concatenation operation.
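The readout of Eqs. (8)-(10) can be sketched as a small PyTorch module (a hypothetical implementation with our own naming, not the authors' code; the bias of `W2` plays the role of \(\mathbf{b}\)):

```python
import torch
import torch.nn as nn

class SessionReadout(nn.Module):
    """Soft-attention readout of Eqs. (8)-(10): hybrid session embedding.

    V holds the graph-based item embeddings of one session, shape (n, d);
    the last row corresponds to the last-interacted item.
    """
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=True)
        self.q = nn.Linear(d, 1, bias=False)
        self.W3 = nn.Linear(2 * d, d, bias=False)

    def forward(self, V):
        s_short = V[-1]                                     # short-term preference
        alpha = torch.softmax(
            self.q(torch.sigmoid(self.W1(s_short) + self.W2(V))), dim=0)  # Eq. (8)
        s_long = (alpha * V).sum(dim=0)                     # Eq. (9)
        return self.W3(torch.cat([s_long, s_short]))        # Eq. (10)

s_h = SessionReadout(100)(torch.rand(5, 100))               # one (d,) embedding
```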
### Prediction Module
After extracting session hybrid embeddings, we adopt an MF layer to predict the relevance between the given session \(s_{i}\) and each candidate item \(v_{n}\in V\) by multiplying the normalized session representation \(Norm(\mathbf{s}_{i,h})\) with the normalized item embedding \(Norm(\mathbf{v}_{n})\) to avoid popularity bias, which can be defined as,
\[\hat{h}_{n}=r\cdot Norm(\mathbf{s}_{i,h})^{\top}Norm(\mathbf{v}_{n}), \tag{11}\]
As \(Norm(\mathbf{s}_{i,h})^{\top}Norm(\mathbf{v}_{n})\) equals the cosine similarity between \(\mathbf{s}_{i,h}\) and \(\mathbf{v}_{n}\), the predicted logits are restricted to \([-1,1]\), and the softmax score is likely to saturate at high values on the training set. To this end, we add a scaling factor \(r>1\), which is useful in practice to allow for better convergence.
Then we apply a softmax function on the output logit to get the final scaled output probability vector \(\hat{\mathbf{y}}\),
\[\hat{\mathbf{y}}=\mathrm{softmax}(\hat{\mathbf{h}}), \tag{12}\]
where \(\hat{\mathbf{h}}\) denotes the recommendation scores of all candidate items \(v_{n}\in V\).
The prediction loss \(\mathcal{L}_{pred}\) is defined by calculating the cross-entropy of the prediction and ground truth,
\[\mathcal{L}_{pred}=-\sum_{i=1}^{|D|}\mathbf{y}_{i}\log(\hat{\mathbf{y}}_{i})+(1-\mathbf{y}_{i})\log(1-\hat{\mathbf{y}}_{i}), \tag{13}\]
where \(\mathbf{y}\) denotes the one-hot encoding vector for the ground truth item.
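A minimal sketch of Eqs. (11)-(13) follows (the scale `r=4.0`, the dimensions, and the target index are illustrative, not the paper's values; `F.cross_entropy` applies the log-softmax of Eq. (12) internally and is the standard reduction of Eq. (13) for one-hot targets):

```python
import torch
import torch.nn.functional as F

def predict_scores(s_h, item_emb, r=4.0):
    """Scaled cosine scores of Eq. (11) for one session against all items.

    s_h: (d,) hybrid session embedding; item_emb: (|V|, d) item table.
    The scale r > 1 counteracts softmax saturation of cosine logits.
    """
    s = F.normalize(s_h, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    return r * (v @ s)                    # cosine similarity times r

logits = predict_scores(torch.randn(100), torch.randn(5000, 100))
loss_pred = F.cross_entropy(logits.unsqueeze(0), torch.tensor([42]))
```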
### Contrastive Module
As we mentioned before, simply using the hybrid session embedding \(\mathbf{s}_{i,h}\) for item prediction inevitably leads to the same last-item confusion problem.
To address this, the most intuitive idea is to separate session representations that share the same last item as much as possible, which naturally leads to a contrastive learning solution. We follow the schema of [1].
Figure 2: The overall network architecture of _SimCGNN_.
Since we have added a Dropout layer above, we can simply perform forward propagation twice for each session \(s_{i}\) to produce two hybrid embeddings. We denote the embedding obtained by the first forward pass by \(\mathbf{s}_{i,h}^{1}\) and the second by \(\mathbf{s}_{i,h}^{2}\), which together constitute a "positive pair". We use the embeddings of all sessions with the same last item as negative samples, represented by \(\mathbf{s}_{j,h}^{-},\ j\in\{1,2,...,N\}\), where \(N\) indicates the number of negative samples. In this way, the contrastive loss \(\mathcal{L}_{con}\) can be calculated as,
\[\mathcal{L}_{con}=-\sum_{i=1}^{|D|}\log\frac{e^{\mathrm{sim}(\mathbf{s}_{i,h}^{1},\mathbf{s}_{i,h}^{2})/\tau}}{\sum_{j=1}^{N}\Big{(}e^{\mathrm{sim}(\mathbf{s}_{i,h}^{1},\mathbf{s}_{i,h}^{2})/\tau}+e^{\mathrm{sim}(\mathbf{s}_{i,h}^{1},\mathbf{s}_{j,h}^{-})/\tau}\Big{)}}, \tag{14}\]
where \(\tau\) is the temperature hyperparameter, \(D\) is the training dataset, and we use cosine similarity for \(\mathrm{sim}(\cdot,\cdot)\) as follows,
\[\mathrm{sim}(\mathbf{s}_{1},\mathbf{s}_{2})=\frac{\mathbf{s}_{1}^{\top}\mathbf{s}_{2}}{||\mathbf{s}_{1}||\,||\mathbf{s}_{2}||} \tag{15}\]
However, unlike [1], since there are not always enough negative sessions with the same last item in the same training batch, we cannot directly use in-batch sessions as negative samples; and forwarding the corresponding negative sessions each time the contrastive loss is calculated would waste considerable computation. As a result, instead of exhaustively computing these representations, similar to [21], we maintain a memory bank \(M=\{(\mathbf{f}_{i}^{1},\mathbf{f}_{i}^{2})\}\). During each learning iteration, the two hybrid embeddings \(\mathbf{s}_{i,h}^{1},\mathbf{s}_{i,h}^{2}\) are written to \(M\) at the corresponding session entry (\(\mathbf{s}_{i,h}^{1}\longrightarrow\mathbf{f}_{i}^{1}\), \(\mathbf{s}_{i,h}^{2}\longrightarrow\mathbf{f}_{i}^{2}\)), and negative session embeddings are sampled from \(M\), i.e., \(\mathbf{s}_{j,1,h}^{-}\longleftarrow\mathbf{f}_{j}^{1}\), \(\mathbf{s}_{j,2,h}^{-}\longleftarrow\mathbf{f}_{j}^{2}\). This allows us to compute the contrastive loss without additional forward propagation, using only the corresponding vectors in the memory bank \(M\).
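For a single session, Eq. (14) with memory-bank negatives can be sketched as below (plain PyTorch; the temperature value, the dictionary-based bank, and all names are illustrative assumptions, not the paper's implementation). Note that the paper's denominator repeats the positive term once per negative, which the sketch reproduces literally.

```python
import torch
import torch.nn.functional as F

def session_contrastive_loss(h1, h2, negatives, tau=0.1):
    """Contrastive loss of Eq. (14) for one session.

    h1, h2: the two dropout-perturbed hybrid embeddings (d,) forming the
    positive pair; negatives: (N, d) embeddings of sessions with the same
    last item, fetched from the memory bank M instead of being re-encoded.
    """
    pos = F.cosine_similarity(h1, h2, dim=0) / tau                       # sim/tau
    neg = F.cosine_similarity(h1.unsqueeze(0), negatives, dim=1) / tau   # (N,)
    # denominator: N copies of the positive term plus the N negative terms
    denom = torch.logsumexp(torch.cat([pos.repeat(neg.numel()), neg]), dim=0)
    return denom - pos                                                   # -log(...)

# Memory bank: one (f^1, f^2) slot per training session, updated in place.
bank = {}                                  # session_id -> detached embeddings
h1, h2 = torch.randn(100), torch.randn(100)
bank[7] = (h1.detach(), h2.detach())
```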
### Model Optimization
Having defined the prediction loss \(\mathcal{L}_{pred}\) and the contrastive loss \(\mathcal{L}_{con}\) in the previous sections, we define our final loss function by integrating both of them as follows:
\[\mathcal{L}=\mathcal{L}_{pred}+\beta\mathcal{L}_{con}+\lambda||\Theta||_{2}^{ 2}, \tag{16}\]
where \(\Theta\) denotes all trainable parameters, \(\beta\) is the hyperparameter balancing the contrastive module, and \(\lambda\) is the regularization hyperparameter.
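Eq. (16) is a straightforward weighted sum; a toy sketch is given below (all values and the stand-in model are illustrative). In practice the \(\lambda||\Theta||_{2}^{2}\) term is often realized through the optimizer's weight-decay setting rather than an explicit sum.

```python
import torch
import torch.nn as nn

model = nn.Linear(100, 100)                 # stand-in for the full SimCGNN
loss_pred = torch.tensor(1.2)               # from Eq. (13)
loss_con = torch.tensor(0.4)                # from Eq. (14)
beta, lam = 0.1, 1e-5                       # balancing / regularization weights
l2 = sum((p ** 2).sum() for p in model.parameters())
loss = loss_pred + beta * loss_con + lam * l2        # Eq. (16)
```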
## 4 Experiments
### Datasets
Following previous work, we choose two commonly used session recommendation datasets, Yoochoose 1/64 and Diginetica. The Yoochoose dataset is from the RecSys Challenge 2015 and consists of six months of interaction sessions from an e-commerce website; we only use the most recent 1/64 fraction of its training sequences. The Diginetica dataset comes from CIKM Cup 2016, and only its transaction data is used.
Footnote 1: [http://2015.recsyschallenge.com/challenge.html](http://2015.recsyschallenge.com/challenge.html)
Footnote 2: [http://cikm2016.cs.iupui.edu/cikm-cup](http://cikm2016.cs.iupui.edu/cikm-cup)
For fairness, we use the same preprocessing as [21], filtering out all sessions of length 1 and items that appear fewer than 5 times in both datasets. Moreover, we augment each session to obtain sequences and corresponding labels. For example, session \(s=[v_{1},v_{2},...,v_{n}]\) is split into \(([v_{1}],v_{2}),([v_{1},v_{2}],v_{3}),...,([v_{1},v_{2},...,v_{n-1}],v_{n})\). The statistics of the two datasets are shown in Table 1.
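This augmentation is a one-liner; the helper below (our own naming) illustrates it:

```python
def augment(session):
    """Split one session into (prefix, label) training pairs, as described above."""
    return [(session[:k], session[k]) for k in range(1, len(session))]

print(augment([1, 2, 3, 4]))
# [([1], 2), ([1, 2], 3), ([1, 2, 3], 4)]
```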
### Evaluation Metrics
Following previous works, **Recall@20** and **MRR@20** are selected to evaluate the performance of our method and the baselines.
### Baselines
To demonstrate the effectiveness of our proposed _SimCGNN_, we compare it with the following representative baselines.
* _POP_ and _S-POP_ recommend the top-K most popular items in the training dataset and in the current session, respectively.
* _Item-KNN[17]_ leverages item-to-item collaborative filtering, recommending items similar to previously interacted items based on cosine similarity.
* _BPR[1]_ is a classical matrix factorization (MF) method optimized by a pairwise ranking loss function.
* _FPMC[1]_ is a sequential prediction method combining the Markov chain and MF.
* _GRU4REC[11]_ utilizes an RNN to model users' sequential interactions and leverages multiple tricks to adapt the RNN to the session-based recommendation problem.
* _NARM[18]_ improves GRU4REC by incorporating an attention mechanism into RNN.
* _STAMP[19]_ replaces the RNN structures by employing attention mechanism.
* _SR-GNN[21]_ first introduces the session graph structure into session-based recommendation. It utilizes a gated GNN to extract on-graph item embeddings, and the last item embedding is concatenated with a weighted sum of all item embeddings for prediction.
\begin{table}
\begin{tabular}{l l l} \hline \hline Dataset & Yoochoose 1/64 & Diginetica \\ \hline \#click & 557,248 & 982,961 \\ \#training sessions & 369,859 & 719,470 \\ \#test sessions & 55,898 & 60,858 \\ \#items & 16,766 & 43,097 \\ Average Length & 6.16 & 5.12 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistic of the Datasets
* _FGNN_[20] formulates the next item recommendation within the session as a graph classification problem.
* _GCE-GNN_[21] constructs a global graph using session data and modifies the model structure to introduce the global information learned from the global graph.
* _Disen-GNN_[11] proposes a disentangled graph neural network to capture the session purpose.
* \(S^{2}\)_-DHCN_[22] constructs each session as a hypergraph and utilizes contrastive learning methods.
### Implementation Details
To align with previous work, we set the hidden size \(d=100\); model parameters are initialized from a Gaussian distribution with mean 0 and standard deviation 0.1. The mini-batch Adam optimizer is utilized to optimize model parameters. The initial learning rate is set to 1e-3 and decays by 0.1 every 3 epochs. Dropout probabilities for all Dropout layers are set to 0.1. For the contrastive module, the temperature parameter \(\tau\) is set to 12, and \(\beta\) is set to 0.1 for Yoochoose 1/64 and 1 for Diginetica. All hyperparameters are tuned on the validation set.
### Experiment Results
To demonstrate the overall performance of our proposed _SimCGNN_, we compare it with the baselines described above. The experimental results are shown in Table 2, from which we can see that our _SimCGNN_ achieves the best performance on both datasets, especially in **MRR@20**, illustrating its superior ranking capability compared with the other baseline methods.
Among traditional methods, Item-KNN achieves the best performance, although the overall performance of all traditional methods is relatively poor. To our surprise, the simple yet effective S-POP performs better than BPR and FPMC. Notably, S-POP takes only item popularity into consideration, which suggests that both the Yoochoose 1/64 and Diginetica datasets suffer from popularity bias. It is also worth pointing out that Item-KNN utilizes only pairwise item similarities yet performs better than FPMC, an MC-based approach that assumes only the last interacted item is needed to perform sequential recommendation. This phenomenon indicates that the simple MC assumption is not suitable for such complex sessions.
As for deep-learning-based methods, all of them consistently outperform traditional methods. GRU4REC and NARM are both based on RNN structures and achieve decent performance. However, since NARM adds an attention mechanism to the original RNN to give different weights to items at different positions in the session, its performance improves significantly over GRU4REC. STAMP, which replaces the RNN with attentional MLPs, shows performance comparable to NARM. Finally, all DL-based methods outperform FPMC, which shows the importance of modeling the whole interaction sequence instead of considering merely the last click. However, neither RNNs nor MLPs are well suited to capturing complex transitions within sessions, which may be why they perform worse than graph-based methods.
Graph-based methods outperform all other baselines by a large margin. More specifically, GCE-GNN outperforms SR-GNN as it effectively leverages global information in different ways. FGNN also shows competitive results by rethinking item order and replacing the assembling method with a well-designed readout function. Disen-GNN attains decent performance on both datasets, showing the importance of disentangled session graphs. \(S^{2}\)-DHCN, however, performs the worst on Yoochoose 1/64, which does not match the capacity of hypergraph neural networks. Compared with these state-of-the-art methods, our method achieves comparable or even better performance without explicitly exploiting global information. Expanding on this, our method attains the best **MRR@20** on both datasets, while its **Recall@20** is higher than GCE-GNN and lower than FGNN on the Yoochoose 1/64 dataset, and the opposite holds on the Diginetica dataset. We must also point out that the number of parameters used in our _SimCGNN_ is consistent with SR-GNN, meaning that we achieve top-level performance using the fewest parameters among all graph-based methods, which strongly demonstrates the superiority of our network structure. In conclusion, graph-based methods have an inherent advantage over traditional and DL-based methods in modeling complex sessions.
### Ablation Study
To further validate the effectiveness of each module in our _SimCGNN_, we compare our _SimCGNN_ with the following four variants.
* **SimCGNN-Contrast**. We removed the contrastive module of _SimCGNN_ to prove its effectiveness.
* **SimCGNN-WeakNeg**. We randomly sample negative sessions instead of choosing sessions with the same last item.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Yoochoose 1/64} & \multicolumn{2}{c}{Diginetica} \\ & Recall@20 & MRR@20 & Recall@20 & MRR@20 \\ \hline POP & 6.71 & 1.65 & 0.89 & 0.20 \\ S-POP & 30.44 & 18.35 & 21.06 & 13.68 \\ Item-KNN & 51.60 & 21.81 & 35.75 & 11.57 \\ BPR & 31.31 & 12.08 & 5.24 & 1.98 \\ FPMC & 45.62 & 15.01 & 26.53 & 6.95 \\ \hline GRU4REC & 60.64 & 22.89 & 29.45 & 8.33 \\ NARM & 68.32 & 28.63 & 49.70 & 16.17 \\ STAMP & 68.74 & 29.67 & 45.64 & 14.32 \\ \hline SR-GNN & 70.57 & 30.94 & 50.73 & 17.59 \\ FGNN & **71.75** & 31.71 & 51.36 & 18.47 \\ GCE-GNN & 70.91 & 30.63 & **54.22** & **19.04** \\ Disen-GNN & 71.46 & 31.36 & 53.79 & 18.99 \\ \(S^{2}\)-DHCN & 70.39 & 29.92 & 53.66 & 18.51 \\ \hline _SimCGNN_ & 71.61 & **31.99** & 54.01 & **19.04** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Recommendation performance on two datasets. The best performing method in each column is **boldfaced**, and the second best method is underlined.
* **SimCGNN-Norm**. We removed all normalizations to prove the effectiveness of normalized session embeddings.
* **SimCGNN-PE**. We removed positional embeddings to validate whether the positional information is useful for session graphs.
The results of the proposed ablation studies are shown in Table 3.
As the most essential component of our approach, we first verify the effectiveness of the proposed contrastive module. It is clear from the experimental results that _SimCGNN_-Contrast performs worse than the original _SimCGNN_ on both datasets, demonstrating the effectiveness of using a contrastive approach to enhance the representation ability of session embeddings.
In _SimCGNN_-WeakNeg, we do not emphasize negative sessions with the same last item, which greatly degrades the ranking performance of the model on both datasets. In terms of recall metrics, the model is not affected much, and there is even a marginal increase on the Yoochoose 1/64 dataset. This illustrates that our sampling method gives more favorable rankings to items that better suit the target session within a relatively similar set of candidate items.
As shown in Table 3, the overall performance of _SimCGNN_-Norm is severely degraded. This is not only because normalization effectively suppresses popularity bias, but also because removing normalization makes it harder for the contrastive module to learn the intrinsic discrepancies between sessions.
We finally verify the effectiveness of introducing positional information into the session graph. Experimental results demonstrate that the ranking performance of the model, especially on the Yoochoose 1/64 dataset, is greatly degraded after removing the positional embedding.
### Case Study
**Effect on Solving Same Last-Item Confusion**
Consistent with the Introduction, we again take sessions with the same last item (item _33889_) and use the trained SR-GNN and _SimCGNN_ to produce predictions. For each session, 20 items are recommended and counted. The relationship between the number of occurrences of an item in the prediction candidate set and its corresponding occurrence ranking is shown in Fig. 3. From the figure we can see that _SimCGNN_ yields a lower item-occurrence curve than SR-GNN, which shows that _SimCGNN_ can provide different recommendations for sessions with the same last item. At the same time, _SimCGNN_ recommends a total of 142 distinct items, 20 more than the 122 distinct items of SR-GNN, which also demonstrates the effectiveness of our method in resolving the same last-item confusion.
**Effect on Solving Popularity Bias**
To measure the popularity of recommended items, we calculated **Average Recommendation Popularity (ARP)** on SR-GNN and _SimCGNN_ to compare the difference in popularity of recommended items between the two methods. The ARP can be calculated as follows,
\[ARP=\frac{1}{|S|}\sum_{s\in S}\frac{\sum_{i\in L_{s}}\phi(i)}{K}, \tag{17}\]
where \(\phi(i)\) is the number of times that item \(i\) appears in the training dataset, \(L_{s}\) is the recommended item list for session \(s\), and \(S\) is the set of sessions in the test dataset.
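Eq. (17) admits a direct implementation; the sketch below assumes a hypothetical data layout in which `train_items` is the flat list of training interactions and `recommendations` maps each test session to its top-K list:

```python
from collections import Counter

def average_recommendation_popularity(train_items, recommendations):
    """ARP of Eq. (17): mean training-set popularity of recommended items."""
    phi = Counter(train_items)               # phi(i): occurrences in training set
    per_session = [
        sum(phi[i] for i in rec_list) / len(rec_list)
        for rec_list in recommendations.values()
    ]
    return sum(per_session) / len(per_session)

arp = average_recommendation_popularity(
    [1, 1, 2, 3], {"s1": [1, 2], "s2": [2, 3]})
print(arp)                                   # (1.5 + 1.0) / 2 = 1.25
```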
From Table 4, it is easy to see that, compared to SR-GNN, our _SimCGNN_ recommends items of substantially lower popularity on both datasets. This shows that our method can indeed alleviate popularity bias to some extent.
## 5 Conclusion
In this paper, to address the session-based recommendation problem, we proposed a novel Simple Contrastive Graph Neural Network (_SimCGNN_), which introduces a contrastive module to deal with the "same last-item confusion" problem and normalized item and session embeddings to cope with
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Yoochoose 1/64 & Diginetica \\ \hline SR-GNN & 4128.54 & 495.25 \\ _SimCGNN_ & **2678.64** & **285.85** \\ \hline \hline \end{tabular}
\end{table}
Table 4: _SimCGNN_ versus SR-GNN in terms of Average Recommendation Popularity (ARP). Lower ARP value indicates lower popularity bias.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Yoochoose 1/64} & \multicolumn{2}{c}{Diginetica} \\ & Recall@20 & MRR@20 & Recall@20 & MRR@20 \\ \hline SR-GNN & 70.57 & 30.94 & 50.73 & 17.59 \\ \hline _SimCGNN_ & 71.61 & **31.99** & **54.01** & **19.04** \\ -Contrast & 71.31 & 31.80 & 53.49 & 19.01 \\ -WeakNeg & **71.65** & 31.16 & 53.99 & 18.92 \\ -Norm & 71.01 & 31.55 & 51.77 & 17.58 \\ -PE & 71.39 & 30.92 & 53.80 & 18.94 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of variants of _SimCGNN_.
Figure 3: Predicted results of SimCGNN and SR-GNN on sessions with the same last-item (_item 33889_).
popularity bias. Experiments on two real-world datasets validate that our _SimCGNN_ outperforms state-of-the-art approaches by a significant margin in terms of **Recall@20** and **MRR@20**. In future work, we aim to propose a new combination approach that can further eliminate the impact of the last item.
|
2305.18445 | Intelligent gradient amplification for deep neural networks | Deep learning models offer superior performance compared to other machine
learning techniques for a variety of tasks and domains, but pose their own
challenges. In particular, deep learning models require larger training times
as the depth of a model increases, and suffer from vanishing gradients. Several
solutions address these problems independently, but there have been minimal
efforts to identify an integrated solution that improves the performance of a
model by addressing vanishing gradients, as well as accelerates the training
process to achieve higher performance at larger learning rates. In this work,
we intelligently determine which layers of a deep learning model to apply
gradient amplification to, using a formulated approach that analyzes gradient
fluctuations of layers during training. Detailed experiments are performed for
simpler and deeper neural networks using two different intelligent measures and
two different thresholds that determine the amplification layers, and a
training strategy where gradients are amplified only during certain epochs.
Results show that our amplification offers better performance compared to the
original models, and achieves accuracy improvement of around 2.5% on CIFAR- 10
and around 4.5% on CIFAR-100 datasets, even when the models are trained with
higher learning rates. | Sunitha Basodi, Krishna Pusuluri, Xueli Xiao, Yi Pan | 2023-05-29T03:38:09Z | http://arxiv.org/abs/2305.18445v1 | # Intelligent gradient amplification for deep neural networks.
###### Abstract
Deep learning models offer superior performance compared to other machine learning techniques for a variety of tasks and domains, but pose their own challenges. In particular, deep learning models require larger training times as the depth of a model increases, and suffer from vanishing gradients. Several solutions address these problems independently, but there have been minimal efforts to identify an integrated solution that improves the performance of a model by addressing vanishing gradients, as well as accelerates the training process to achieve higher performance at larger learning rates. In this work, we intelligently determine which layers of a deep learning model to apply gradient amplification to, using a formulated approach that analyzes gradient fluctuations of layers during training. Detailed experiments are performed for simpler and deeper neural networks using two different intelligent measures and two different thresholds that determine the amplification layers, and a training strategy where gradients are amplified only during certain epochs. Results show that our amplification offers better performance compared to the original models, and achieves accuracy improvement of \(\sim\)2.5% on CIFAR-10 and \(\sim\)4.5% on CIFAR-100 datasets, even when the models are trained with higher learning rates.
keywords: Deep learning, gradient amplification, CIFAR, backpropagation, learning rate, training time
## 1 Introduction
Deep learning models have produced results comparable or sometimes superior to human experts in many interdisciplinary applications[1; 2; 3; 4; 5]. Performance of these models generally improves with increasing depth of the network [6], but challenges arise such as vanishing gradients and high training time, even on parallel computational resources [6]. There are many architectures [7] that work well for a wide range of applications, but even the simplest models are computationally intensive. As there are millions of model parameters for each set of hyperparameters, deep neural networks are slow to train, sometimes taking several days or weeks depending on the model architecture, dataset size, and hardware resources. One way to reduce the training time is to train in parallel using Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs), or on parallel distributed systems. Another is to train the model with higher learning rates or large batch sizes. Employing larger learning rates can speed up the training process by quickly converging to local optima, but they can often miss the global optima, resulting in sub-optimal solutions or sometimes non-convergence [8]. Lower learning rates converge better to optima, but have larger training times. In general, a combination of higher and lower learning rates is employed, using scheduling schemes or algorithms that adaptively reduce the learning rates over epochs. There have been multiple efforts to analyze and modify gradients and learning rates dynamically during the training process, but there is no detailed analysis on the impact of the modification factor on model performance. Vanishing gradients [9; 10; 11] is another problem that can occur while training deep neural networks. Some approaches help to mitigate this issue, such as weight initialization followed by fine-tuning with backpropagation [12], using the Rectified Linear Unit (ReLU) activation function [13; 14], or applying batch normalization (BN) [15]. In addition, advancements in hardware capabilities of GPUs, such as double precision, help overcome this issue to some extent while training deep neural networks.
Though there have been efforts to address the above-mentioned problems independently, there have been minimal efforts to identify an integrated
solution that improves the performance of a model by addressing both vanishing gradients and accelerating the training process to achieve higher performance at larger learning rates. To improve the training process of existing deep learning models, we propose an intelligent gradient amplification approach, along with a training strategy that analyzes the net gradient change of a layer in an epoch. While performing gradient amplification, gradients are dynamically increased for some layers during backpropagation, using a training strategy where amplification is performed for a few epochs while the model is trained without gradient amplification for the remaining epochs. Neural networks trained using our method have improved testing and training accuracies, even at higher learning rates (thereby reducing the overall training time).
In this work, we extend the gradient amplification method proposed in [16] with a formulated approach that analyzes the fluctuations of gradients within and across layers during training and intelligently identifies the layers on which to perform gradient amplification. Our contributions include the following:
* We propose two measures to compute the effective gradient update direction of each layer, which are independently analyzed by performing normalization.
* We suggest two thresholding approaches to intelligently identify the layers to amplify, one when the actual normalized measure of a layer crosses the threshold and the other when the absolute normalized measure is beyond the threshold.
* We study the impact of thresholds by analyzing a wide range of values and identify threshold values that could generally work for a deep learning model.
* We perform detailed experiments using two intelligent measures with two thresholding approaches applying a training strategy where gradients are amplified during certain epochs and the model is trained in the remaining epochs without amplification.
The remainder of this paper is organized as follows. Related works are briefly described in Section 2. Our amplification method with the proposed measures and thresholding approaches is presented in Section 3. Experimental
setup, results, and their comparisons are discussed in Sections 4 and 5, followed by conclusions in Section 6.
## 2 Related Work
Although deep learning models are robust and achieve better performance, there are several areas where these models could be improved further. Designing new architectures, automatically tuning network hyperparameters, improving the training time of models, and designing efficient functions (activation, kernel, and pooling) are some of these challenges[17; 18; 19]. In this section, we briefly discuss the existing approaches to address the vanishing gradient problem, reduce the training time of deep learning models, and study the impact of learning rates.
### Vanishing gradients
The problem of vanishing gradients [9; 10; 11] occurs in gradient-based learning methods while training models with backpropagation. As the network weights get updated based on the gradients during backpropagation, lower gradient values cause the network to learn slowly, and the value of these gradients reduces further while propagating from the end layers to the initial layers of the network. When gradients are close to 0, a model cannot learn while training, and the weight updates have no effect on its performance. Though there is no direct solution for this issue, several methods suggested in the literature [20] help to mitigate it. One of the early proposed methods [12] involves a two-step weight update rule, where network weights are first updated based on unsupervised learning methods and then fine-tuned using supervised learning with backpropagation. In recent years, with the introduction of the ReLU activation function [13; 14], batch normalization (BN) [15] and Resnet networks [21], this problem has reduced further. When a model uses the ReLU activation function [13; 14], only positive inputs get propagated in the forward pass; it is observed experimentally that the gradients in the backward pass then do not diminish, which prevents vanishing gradients to some extent. The other approach is to use BN [15] layers in a model. These layers normalize the inputs to reduce their variance during the forward pass and therefore help regulate the gradients in the backward pass during training. One can use both ReLU activation layers and BN layers in a network, improving the performance of the model even further while training. Resnet networks use this combination of layers in addition to other layers and also have residual
connections in the network, connecting initial residual blocks (or layers) to later residual blocks (or layers). Since the gradients also get directly propagated through these connections to the initial layers in addition to the network layers, the problem of vanishing gradients reduces even further. In addition to these approaches, with the constant advancement in hardware and its increased computational abilities, the problem has reduced further. In our method, as we increase the gradients of some of the layers of the network during backpropagation, it helps mitigate the problem of vanishing gradients.
### Learning rates
Learning rate is one of the critical hyperparameters that directly influences the performance of deep learning models. Models can be trained faster with larger learning rates, but these can lead to sub-optimal solutions. Using lower learning rates converges the models better to optimal solutions [8] but incurs long training times. There are many approaches designed to achieve better performance using a combination of these learning rates. A learning rate scheduler is one such approach, where training begins with higher learning rates that are lowered as training continues [22]. Lowering of learning rates in a scheduler can be designed in many ways [23], such as assigning learning rates to epochs; gradually decaying the learning rate based on the current learning rate, current epoch, and total number of epochs (time-based decay); reducing the learning rate in a step-wise manner after a certain number of epochs (step decay); and exponentially decaying the learning rate based on the initial learning rate and the current epoch (exponential decay). Other approaches have also been suggested, such as normalizing model weights while training with stochastic gradient descent to speed up performance [24]. Authors of [25] increase the batch size without decaying learning rates and observe that the models achieve similar test performance. Since this uses larger batch sizes, it increases parallelism and requires fewer parameter updates, thereby reducing the overall training times.
In general, while training a model with a learning rate scheduler, higher learning rates are used in the beginning for a few epochs, followed by lower learning rates for the next few epochs, and this process is repeated until the desired optima or model performance is achieved. Some optimization algorithms automatically determine the learning rates across epochs dynamically without manual intervention. However, these methods also have some fallbacks and do not always converge to optimal
solutions. One way to improve the training speed is to develop methods to achieve optimal model parameters at larger learning rates.
#### 2.2.1 Adaptive learning rates for layers/neurons/parameters
Another approach avoids manually identifying learning rate hyperparameters and schedules by modifying the learning rate dynamically based on the performance of the optimization algorithm. A few such methods include Adagrad[26], Adadelta[27], RMSprop[28] and Adam [29]. There have also been several efforts to improve models with adaptive learning rates [30; 31; 32; 33; 34]. Ede et al. [35] prevent gradients from being propagated when the expected loss is outside defined boundaries, causing the learning rates to be dynamically adjusted during training. Paper [36] designs a controller to automatically manage the learning rate by identifying informative features. Authors of [37] identify learning rates using a reinforcement-learning-based approach by analyzing the training history; experiments performed on the CIFAR-10 and FMNIST datasets show improved performance, emphasizing its advantages. You et al. [38] adaptively scale the learning rates of layers to improve the performance of models trained with large batches in parallel and perform experiments on AlexNet[39]. Paper [40] proposes layer-wise adaptive learning rate computation using a layer-weight-dependent matching factor that is computed dynamically during training based on the layer type. The authors demonstrate the advantage of their method with mathematical derivations and experimental results.
### Analysis of gradients
Gradients provide vital information on various aspects such as training progression, weight fluctuations, and model convergence. This information can be used to address vanishing/exploding gradients, accelerate the training process, or dynamically modify learning rates while training the model. Some of the adaptive learning rate algorithms [26][27] mentioned above use the gradients of a few iterations to identify a suitable learning rate for the current iteration. Zhang et al. [40] scale the gradients of a layer based on a matching factor computed during training from the weights and type of the layer.
## 3 Proposed Gradient Amplification Strategies
Gradient amplification with random selection of layers, as proposed in [16], has the overhead of identifying the ratio of layers, the types of layers, and the
combination of those types of layers. As the number and variety of layers increases, the network becomes deeper and it is challenging to determine the best amplified model among all the combinations. In this section, we formulate a way to automatically determine the layers that need to be amplified.
### Effect of gradient amplification on learning rate
Here, we emphasize the relationship between the learning rate and the effect of gradient amplification. The general weight update formula during training of neural networks is shown below:
\[\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta\nabla J(\mathbf{W}_{t}) \tag{1}\]
where \(\mathbf{W}_{t}\) represents the weights of the network at the current iteration \(t\), \(\eta\) is the learning rate, and \(\nabla J(\mathbf{W}_{t})\) corresponds to the gradients of the weights, computed as the derivative of the cost function with respect to the weights.
After performing gradient amplification, \(\nabla J(\mathbf{W}_{t})\) gets scaled by the amplification factor. Let us denote the amplified gradient as
\[\nabla J_{amp}(\mathbf{W}_{t})=amp*\nabla J(\mathbf{W}_{t}) \tag{2}\]
Therefore the weight update formula after gradient amplification can be written as:
\[\begin{gathered}\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta\nabla J_{amp}(\mathbf{W}_{t})\\ \mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta*amp*\nabla J(\mathbf{W}_{t})\\ \mathbf{W}_{t+1}=\mathbf{W}_{t}-(\eta*amp)*\nabla J(\mathbf{W}_{t})\\ \mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta_{amp}*\nabla J(\mathbf{W}_{t}),\ where\ \eta_{amp}=\eta*amp\end{gathered} \tag{3}\]
From the above analysis, one can conclude that amplifying gradients is equivalent to increasing the learning rate. To determine the layers that need to be amplified, one approach is to identify the layers that learn actively during the training process. Authors of [41] propose a way to speed up the training of deep learning models using a layer-freezing approach that analyzes the fluctuations of gradients in layers. A similar approach can be employed to determine the layers that are actively learning. As performing amplification is equivalent to increasing the step size of weight updates, the learning rate (or step
size) can be increased when the current weights of the neurons are relatively far from the optima and their gradients are all moving in the same direction to converge toward the optima. In addition to determining the layers that are actively learning, it is also important that the gradients of the neurons in a layer all move in the same direction for the amplification to be meaningful. This can be formulated by analyzing the gradients of the neurons in a layer across iterations. One simple way to perform such an identification is to compute the sum of the gradient values over iterations for all neurons in a layer.
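Given the equivalence above, the amplification itself is easy to realize in practice. Assuming the set of layers to amplify is already known, the following PyTorch sketch (our own helper, not the paper's code) doubles the gradients of the selected sub-modules via parameter-level backward hooks and returns the handles so that amplification can be disabled for the final epochs:

```python
import torch.nn as nn

def apply_amplification(model, layer_names, amp=2.0):
    """Scale gradients of the selected layers by `amp` during backprop.

    Registers a hook on each parameter of the chosen sub-modules; by the
    equivalence of Eq. (3), this multiplies their effective learning rate
    by `amp`. Returns the hook handles so amplification can be removed.
    """
    handles = []
    for name, module in model.named_modules():
        if name in layer_names:
            for p in module.parameters(recurse=False):
                handles.append(p.register_hook(lambda g, a=amp: a * g))
    return handles

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
handles = apply_amplification(model, layer_names={"0"}, amp=2.0)
# ... train with amplification; later: for h in handles: h.remove()
```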
### Layer gradient directionality ratio measure, \(G\)
The effective direction of gradient change of the neurons in a layer \(l\) can be determined using the ratio of the sum of the gradients across iterations to the absolute gradient sum in an epoch. Here, \(m\) and \(n\) correspond to the number of iterations in an epoch and the number of neurons (or weights) in a layer, respectively.
\[G_{l}=\begin{cases}0,&\text{when }\sum_{i}^{n}\sum_{j}^{m}|g_{ilj}|=0\\ \dfrac{\sum_{i}^{n}\left|\sum_{j}^{m}g_{ilj}\right|}{\sum_{i}^{n}\sum_{j}^{m}|g_{ilj}|},&\text{otherwise}\end{cases} \tag{4}\]
The above formula determines how the weights in a layer are modified. When all the weight updates of the neurons occur in the same direction across all iterations, its value is 1. When either the model reaches an optimal solution (where the gradient of every neuron becomes zero) or half of the neuron weights move in the opposite direction to the other half with the same magnitude (the ideal case), the value becomes 0. Otherwise, it lies between 0 and 1. Values close to 1 signify that most of the weights are changing in the same direction, and vice versa.
\[0\leqslant G_{l}\leqslant 1 \tag{5}\]
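Given the per-iteration gradients of a layer recorded over an epoch, Eq. (4) can be computed as in the sketch below (an assumed \((m,n)\) tensor layout; not the authors' code):

```python
import torch

def layer_gradient_directionality(grads):
    """Eq. (4): G_l from per-iteration gradients of one layer.

    grads: tensor of shape (m, n) -- m iterations, n weights -- holding the
    gradient g_{ilj} of each weight at each iteration of the epoch.
    """
    denom = grads.abs().sum()                    # sum_i sum_j |g_ilj|
    if denom == 0:
        return torch.tensor(0.0)
    return grads.sum(dim=0).abs().sum() / denom  # sum_i |sum_j g_ilj| / denom

g = torch.tensor([[0.1, -0.2], [0.3, 0.2]])      # m=2 iterations, n=2 weights
print(layer_gradient_directionality(g))          # (0.4 + 0.0) / 0.8 = 0.5
```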
### Normalized layer gradient directionality ratio measure, \(\hat{G}\)
After computing the _layer gradient directionality ratio_ (\(G_{l}\)) for each layer in an epoch, it is observed that most of these values lie in the same range. To identify the most significant layers (those close to 1), the _layer gradient directionality ratio_ (\(G_{l}\)) values of all layers are standardized using:
\[\hat{G}=\frac{G-\overline{\mathrm{G}}}{\sigma_{G}} \tag{6}\]
These normalized values signify how far each layer lies from the mean value. If the normalized value of a layer is close to 0, it lies close to the mean; otherwise, its value signifies the number of standard deviations it is away from the mean. The larger the positive (or negative) value, the farther it is from the mean. Since the range of \(G_{l}\) is \([0,1]\), when the normalized value is a large positive value (signifying farther from the mean on the right side), it can be considered close to 1 (in equation 5), whereas large negative values can be considered close to 0 (in equation 5).
### Improved Layer gradient directionality ratio measure, \(G^{\prime}\)
The gradients of the neurons of a layer lie in a similar range, and the measure \(G\), which sums the gradients of all neurons across all iterations, gives more importance to gradient values that occur frequently with larger magnitude during training. In contrast, when the gradient-direction ratio is computed individually for each neuron across iterations and these per-neuron ratios are then used to compute the effective directionality of the layer, all neurons contribute equally to the gradient direction change. With this modification, even if only some of the neuron gradients in a layer move consistently in the same direction, the layer still has a higher chance of being identified.
The effective direction of gradient change of the neurons in a layer \(l\) is therefore determined by computing, for each neuron, the ratio of the sum of its gradients across iterations to its absolute gradient sum, and then taking the mean of these ratio values. Here, \(m\) and \(n\) correspond to the number of iterations in the epoch and the number of neurons (or weights) in a layer, respectively.
\[G^{\prime}_{l}=\frac{1}{n}\sum_{i}^{n}G^{\prime}_{il},\qquad G^{\prime}_{il}=\begin{cases}0,&\text{when }\sum_{j}^{m}|g_{ilj}|=0\\ \dfrac{\left|\sum_{j}^{m}g_{ilj}\right|}{\sum_{j}^{m}|g_{ilj}|},&\text{otherwise}\end{cases} \tag{7}\]
The above formula also determines how the weights in a layer are modified, with equal importance given to all neurons in the layer. When all the weight updates of the neurons occur in the same direction across all iterations, its value is 1. With the modified formula, when some of the per-neuron ratios are close to 1, the layer has a higher chance of being amplified.
\[0\leqslant G^{\prime}_{l}\leqslant 1 \tag{8}\]
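Eq. (7) differs from Eq. (4) only in where the ratio is taken; a sketch under the same assumed \((m,n)\) layout:

```python
import torch

def improved_layer_directionality(grads):
    """Eq. (7): G'_l averages the per-weight directionality ratios.

    grads: (m, n) per-iteration gradients of one layer; weights whose
    gradients are identically zero contribute a ratio of 0.
    """
    num = grads.sum(dim=0).abs()          # |sum_j g_ilj| per weight
    den = grads.abs().sum(dim=0)          # sum_j |g_ilj| per weight
    ratio = torch.where(den > 0, num / den, torch.zeros_like(den))
    return ratio.mean()

g = torch.tensor([[0.1, -0.2], [0.3, 0.2]])
print(improved_layer_directionality(g))   # (1.0 + 0.0) / 2 = 0.5
```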
### Normalized layer gradient directionality ratio measure, \(\hat{G^{\prime}}\)
After computing the _layer gradient directionality ratio_ (\(G^{\prime}_{l}\)) for each layer in an epoch, in order to identify the most significant layers (those close to 1), the values of all layers are standardized using:
\[\hat{G^{\prime}}=\frac{G^{\prime}-\overline{\text{G}^{\prime}}}{\sigma_{G^{ \prime}}} \tag{9}\]
As previously mentioned, these normalized values signify how far each layer lies from the mean value. As \(G^{\prime}_{l}\) ranges between 0 and 1, large positive normalized values (farther from the mean on the right side) can be considered close to 1 (see equation 8), whereas large negative normalized values can be considered close to 0 (see equation 8).
### Determining amplification layers using \(G\),\(G^{\prime}\) measures
We consider the following cases to perform amplification based on thresholds. The formulas are shown for the measure \(G\); the same cases apply to \(G^{\prime}\) (by replacing \(G\) with \(G^{\prime}\)).
Case-1: Amplify only one side (\(G_{l}\) close to 1). When \(G_{l}\) values are close to 1, most of the weights in the layer are modified in the same direction. This suggests that the neurons are all moving down (or up) the slope to approach the optima, so the learning rate can be increased. We amplify the gradients of the layer when its normalized value exceeds a threshold value,
\[if(\hat{G}>threshold)\ :\ amplify\ layer\]
Case-2: Amplify both sides (\(G_{l}\) close to 0 or 1). As mentioned earlier, \(layer\_gradient\_directionality\_ratio\) (\(G_{l}\)) values close to 0 mean the weights are either close to the optima or on a plateau surface. If the weights are close to the optima, adding small noise to the gradients will still let the weights converge eventually; otherwise, it would make the weights cross the plateau surface, thereby improving the training process. With this assumption, we propose to amplify the gradients of a layer when the \(layer\_gradient\_directionality\_ratio\) (\(G_{l}\)) values are close to either end (i.e., 0 or 1); that is, we amplify whenever the absolute normalized value crosses the threshold value.
\[if(|\hat{G}|>threshold)\ :\ amplify\ layer\]
An overview of the function that determines the amplification layers is sketched below.
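Since the original listing is not reproduced here, the following Python sketch captures its logic under our assumptions: per-layer ratios are computed with one of the measures (Eq. (4) is inlined for self-containedness), z-normalized across layers, and thresholded according to case-1 or case-2.

```python
import torch

def directionality(grads):
    """G_l of Eq. (4) for one layer's (m, n) per-iteration gradients."""
    denom = grads.abs().sum()
    return grads.sum(0).abs().sum() / denom if denom > 0 else torch.tensor(0.0)

def select_amplification_layers(layer_grads, threshold=1.5, case=1):
    """Pick the layers to amplify from one epoch of recorded gradients.

    layer_grads maps layer name -> (m, n) gradients accumulated over an
    epoch. Case 1 amplifies when G-hat > threshold; case 2 when
    |G-hat| > threshold (both tails).
    """
    names = list(layer_grads)
    ratios = torch.stack([directionality(layer_grads[n]) for n in names])
    g_hat = (ratios - ratios.mean()) / ratios.std()     # Eq. (6)
    mask = g_hat > threshold if case == 1 else g_hat.abs() > threshold
    return [n for n, keep in zip(names, mask.tolist()) if keep]

grads = {f"layer{i}": torch.randn(50, 128) for i in range(10)}
print(select_amplification_layers(grads, threshold=1.0, case=2))
```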
## 4 Experiments
Experiments are performed on the CIFAR-10 and CIFAR-100 datasets with a setup similar to the experiments in [16]. We primarily perform thorough experiments for VGG-19, Resnet-18 and Resnet-34 models. These models are trained for 150 epochs, where the learning rate is 0.1 for the first 100 epochs and 0.01 for the next 50 epochs. We perform experiments using the training strategy \(params_{1}=[(50,0.1,0,1),(100,0.1,is\_amp,2),(130,0.01,is\_amp,2),(150,0.01,0,1)]\), as shown in Fig. 1, where \(is\_amp\) is non-zero when amplification is performed and 0 otherwise. The values in each element of the \(params\) list represent the end epoch, learning rate, whether amplification is performed (non-zero), and the gradient amplification factor, respectively. For instance, \((50,0.1,0,1)\) means that the model is trained with learning rate 0.1 until epoch 50, during which no layers are selected for gradient amplification and the amplification factor is 1. In our training strategy \(params_{1}\), no amplification is performed for the first 50 epochs, and the gradients in the 51\({}^{st}\) epoch are analyzed to determine the layers to amplify until the 100\({}^{th}\) epoch. At epoch 101, the learning rate is
reduced to 0.01 and the gradients are analyzed again to determine the next set of amplification layers; the selected layers are amplified for the next 29 epochs, and the last 20 epochs are trained without amplification. Experiments are performed with thresholds varying from 0.7 to 2.5 with a step size of 0.1. Based on the analysis of these results, we also run experiments on the even deeper resnet-50 and resnet-101 models.
For deeper models, another training strategy \(params_{2}=[(10,0.1,0,1),\)\((100,0.1,is\_amp,2),(145,0.01,is\_amp,2),(150,0.01,0,1)]\) is employed, in which no amplification is performed for the first 10 epochs and the gradients in the \(11^{th}\) epoch are analyzed to identify the layers to amplify until the \(100^{th}\) epoch. At epoch 101, the learning rate is reduced to 0.01 and the gradients are analyzed again to determine the next set of amplification layers; the selected layers are amplified for the next 44 epochs, and the last 5 epochs are trained without amplification. The number of epochs trained without amplification can be varied at runtime. In our experiments, we demonstrate
Figure 1: Experimental setup and training strategy for all the models, showing the number of epochs and the corresponding learning rates(\(\eta\)).
Figure 2: Performance of the amplified models (red) where layers are selected at different rates compared to mean accuracies of the original models (blue) with no gradient amplification. In each figure, we show the performance (%) of the models when amplification layers are selected once per learning rate (top), selected every 2 epochs (middle) and selected every 5 epochs (bottom). Horizontal axis refers to the thresholds applied on the normalized gradient rate (\(\hat{G}_{l}\)) using case-2 strategy, while vertical axis shows the testing accuracies (%) of the models.
with either 5 or 20 final epochs without amplification. Experiments are performed with thresholds varying from 1 to 3 with a step size of 0.25.
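The \(params\) schedule can be interpreted with a small helper (hypothetical naming, matching the tuple layout described above):

```python
def schedule_for_epoch(params, epoch):
    """Resolve (lr, is_amp, amp_factor) for a 1-indexed epoch from a params
    list of (end_epoch, lr, is_amp, amp_factor) phases, e.g. params_1 above.
    """
    for end_epoch, lr, is_amp, amp in params:
        if epoch <= end_epoch:
            return lr, bool(is_amp), amp
    raise ValueError("epoch beyond training schedule")

params_1 = [(50, 0.1, 0, 1), (100, 0.1, 1, 2), (130, 0.01, 1, 2), (150, 0.01, 0, 1)]
print(schedule_for_epoch(params_1, 75))     # (0.1, True, 2)
```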
We also analyze how frequently the amplification layers need to be reselected while training the model. In our training strategy, amplification is applied in epochs 51-100 (\(\eta=0.1\)) and 101-130 (\(\eta=0.01\)). At first, amplification layers are determined at the onset of the amplification epochs for each learning rate; we then analyze changing the amplification layers every 2 epochs from epochs 51-130, and then every 5 epochs.
## 5 Results
In our experiments, we first analyze the simpler models, namely resnet-18, resnet-34 and VGG-19, using the CIFAR-10 dataset, and then extend to deeper resnet architectures and to the CIFAR-100 dataset.
While training these models, the first few epochs are trained normally without any amplification. Then the gradients of the model are analyzed for an epoch by computing normalized gradient rates for all layers. As mentioned previously, for each layer this determines the rate of fluctuation of the weight updates, with 1 corresponding to fewer fluctuations and 0 to more fluctuations. Since thresholds are measured on the normalized gradient rates of layers, values beyond a threshold determine the fraction of layers being amplified and thus indirectly control the ratio of amplified layers: a lower threshold means a larger ratio of amplified layers, and vice versa. In our experiments, thresholds are varied from 0.7-2.5 in steps of 0.1 for simpler models and from 1.0-3.0 in steps of 0.25 for deeper models.
### How frequently should amplification layers be modified/reselected?
We also analyze how frequently these amplification layers should be selected by running experiments where the amplification layers are changed once per learning rate, every 2 epochs, and every 5 epochs. Fig. 2 shows the performance of vgg-19, resnet-18 and resnet-34 models when amplified using the case-2 selection strategy; similar behavior is observed for the case-1 selection strategy. The performance improvement does not vary much across selection frequencies for a given model, and one can always fine-tune to determine the best layer selection frequency. For simplicity, in our further analysis on the CIFAR-10 dataset with the deeper resnet-50 and resnet-101 models, amplification layers are changed once per learning rate.
### Analysis on CIFAR-10 dataset
always perform better than the original models. Models with the case-1 strategy seem to be more robust than case-2 for lower thresholds. With the case-2 strategy at lower thresholds, amplified models perform worse than the original models, but the performance of the amplified models increases with increasing thresholds. This suggests that deeper models perform better with higher threshold values. Models with the case-1 amplification strategy perform better for thresholds above 1.25, and for case-2, models with thresholds above 1.5 perform better.
### Analysis on CIFAR-100 dataset
To emphasize the generality of amplification, experiments are also performed on the CIFAR-100 dataset. In these experiments, amplification layers are selected once per learning rate during the training epochs, using the previously mentioned training strategies. Experiments are performed with both gradient change measures \(G\) and \(G^{\prime}\) using the case-1 and case-2 layer selection strategies.
Analysis on simpler models: Figs. 7 and 8 show the performance of VGG-19, resnet-18 and resnet-34 models for thresholds in the range 0.7-2.5 with a step size of 0.1 when \(G\) and \(G^{\prime}\) amplification is applied, respectively, with \(params_{1}=[(50,0.1,0,1),(100,0.1,is\_amp,2),(130,0.01,is\_amp,2),\)\((150,0.01,0,1)]\) as the training strategy. For VGG-19, when the case-1 amplification method is used with both \(G\) and \(G^{\prime}\),
Figure 4: Testing accuracies % (Y-axis) of the amplified models (red) using \(G^{\prime}\) compared to mean accuracies of the original models (blue) with no gradient amplification for a range of thresholds (X-axis) applied on the normalized gradient rate (\(\hat{G^{\prime}_{l}}\)) on CIFAR-10 dataset.
Figure 5: Testing accuracies % (Y-axis) of resnet-50 and resnet-101 models with \(G_{l}\) layer amplification (red) applied from epochs 51-145 compared to mean accuracies of the original models (blue) with no gradient amplification for a range of threshold values (X-axis).
Figure 6: (CIFAR-10 dataset) Testing accuracies % (Y-axis) of resnet-50 and resnet-101 models with \(G^{\prime}_{l}\) layer amplification (red), applied from epochs 51-145, compared to mean accuracies of the original models (blue) with no gradient amplification for a range of threshold values (X-axis).
the models perform better at lower threshold values but have performance similar to the original models at higher thresholds. When case-2 is used, performance is sensitive to the threshold: models perform worse than the original models for small thresholds and similarly for large thresholds, while for intermediate thresholds they have better or similar performance. For resnet-18, when case-1 is used, all models perform better than the original models for all thresholds; performance increases up to thresholds around 1.5 and the improvement then remains the same. For case-2, models have reduced performance for lower thresholds (up to 0.9 for \(G\) and 1.1 for \(G^{\prime}\)), which then increases until 1.5, after which the improvement remains the same. Resnet-34 models behave similarly to resnet-18 models, maintaining the improved performance with increasing thresholds for case-2. For case-1, with both \(G\) and \(G^{\prime}\), the accuracies are lower than the original models until the threshold reaches 1.2 and better afterwards, and the improvement in accuracy is maintained with increasing thresholds.
Analysis on deeper models: For the deeper networks resnet-50 and resnet-101, amplification is performed on a reduced set of thresholds ranging from 1.0-3.0 in steps of 0.25. Figs. 9 and 10 show the testing accuracies of the resnet-50 and resnet-101 models for a range of thresholds with \(params_{2}=[(10,0.1,0,1)\), \((100,0.1,is\_amp,2),(145,0.01,is\_amp,2),(150,0.01,0,1)]\) as the training strategy.
Figure 7: Performance of the models on CIFAR-100 dataset with amplified models (red) using \(G\) applied from epochs 51-130 compared to mean accuracies of the original models (blue) with no gradient amplification. Horizontal axis refers to the thresholds applied on the normalized gradient rate (\(\hat{G_{l}}\)) and vertical axis corresponds to testing accuracies (%) of the models.
In resnet-50 models, for both \(G\) and \(G^{\prime}\) with the case-1 strategy, amplified models always perform better than the original models, and the improvement in accuracy remains almost the same across threshold values. With the case-2 strategy, models perform worse at threshold 1.00, and the testing accuracies are better than the original models from threshold 1.25 onward. In resnet-101 models, for both \(G\) and \(G^{\prime}\), amplified models perform better from thresholds 1.25 and 1.50 when using the case-1 and case-2 strategies, respectively. Though the improvement appears similar, testing accuracies improve slowly with increasing thresholds.
### Comparison of running times
Tables 1 and 2 show the mean running times (in minutes) across 10 runs of the original models and the amplified models for the different ratio measures and cases. Training is performed on the GSU high-performance cluster with NVIDIA V100 GPUs, with only our models running on the GPUs and no other user jobs. Performing amplification while training increases the training time by only 1-3 minutes for all models in most cases. Therefore, training models with amplification improves their accuracy while maintaining training times close (less than a 2% increase) to the original models.
### Best models
Here, we compare the best results of the amplified models in each case with their corresponding original models without amplification. Testing accuracies of the best amplified models are shown in Tables 3 and 4 for the CIFAR-10
Figure 8: Testing accuracies % (Y-axis) of the amplified models (red) using \(G^{\prime}\) compared to mean accuracies of the original models (blue) with no gradient amplification for a range of thresholds (X-axis) applied on the normalized gradient rate (\(\hat{G^{\prime}_{l}}\)) on CIFAR-100 dataset.
Figure 10: (CIFAR-100 dataset) Testing accuracies (Y-axis) of resnet-50 and resnet-101 models with \(G^{\prime}_{l}\) layer amplification (red) applied from epochs 51-145 compared to mean accuracies of the original models (blue) with no gradient amplification for a range of threshold values (X-axis).
Figure 9: (CIFAR-100 dataset) Testing accuracies (Y-axis) of resnet-50 and resnet-101 models with \(G_{l}\) layer amplification (red) applied from epochs 51-145 compared to mean accuracies of the original models (blue) with no gradient amplification for a range of threshold values (X-axis).
and CIFAR-100 datasets. Training and testing accuracies for each epoch of these best amplified models are plotted along with those of the original models. Fig. 11 shows the best performing models using measure \(G\) on the CIFAR-10 dataset; similar improvements are observed using \(G^{\prime}\) on CIFAR-10 and using both measures on the CIFAR-100 dataset. Since the mean accuracy of the original models is compared, their training accuracies (in gray) and testing accuracies (in blue), including their means, are plotted along with the amplified training (green) and testing (red) accuracies in Fig. 11.
\begin{table}
\begin{tabular}{c c c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{_Original (min)_} & \multicolumn{2}{c}{_Ours_ using \(\tilde{G}\) (min)} & \multicolumn{2}{c}{_Ours_ using \(\tilde{G^{\prime}}\) (min)} \\ \cline{3-6} & & Case-1 & Case-2 & Case-1 & Case-2 \\ \hline _VGG\_19_ & 31.35 min \(\pm\) 0.45 & 31.82 min \(\pm\) 0.52 (**1.5\%**) & 31.94 min \(\pm\) 0.45 (**1.89\%**) & 31.87 min \(\pm\) 0.45 (**1.66\%**) & 31.82 min \(\pm\) 0.35 (**1.5\%**) \\ _Resnet\_18_ & 42.15 min \(\pm\) 0.23 & 42.79 min \(\pm\) 0.45 (**1.53\%**) & 42.73 min \(\pm\) 0.46 (**1.38\%**) & 42.96 min \(\pm\) 0.57 (**1.92\%**) & 42.76 min \(\pm\) 0.44 (**1.45\%**) \\ _Resnet\_34_ & 66.35 min \(\pm\) 0.92 & 67.73 min \(\pm\) 0.95 (**2.09\%**) & 67.64 min \(\pm\) 0.9 (**1.94\%**) & 67.8 min \(\pm\) 0.82 (**2.18\%**) & 67.68 min \(\pm\) 0.91 (**2.01\%**) \\ _Resnet\_50_ & 139.64 min \(\pm\) 1.97 & 139.71 min \(\pm\) 2.01 (**0.05\%**) & 140.07 min \(\pm\) 1.50 (**0.31\%**) & 140.63 min \(\pm\) 1.29 (**0.71\%**) & 141.06 min \(\pm\) 1.05 (**1.01\%**) \\ _Resnet\_101_ & 224.22 min \(\pm\) 3.45 & 225.48 min \(\pm\) 2.89 (**0.56\%**) & 226.37 min \(\pm\) 2.07 (**0.96\%**) & 227.26 min \(\pm\) 2.12 (**1.35\%**) & 227.78 min \(\pm\) 2.02 (**1.58\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean running times (in minutes) of models with \(G\), \(G^{\prime}\) layer-based gradient amplification on the CIFAR-10 dataset across 10 iterations.
\begin{table}
\begin{tabular}{c c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{_Original (min)_} & \multicolumn{2}{c|}{_Ours_ using \(\tilde{G}\) (min)} & \multicolumn{2}{c}{_Ours_ using \(\tilde{G}^{\prime}\) (min)} \\ \cline{3-6} & & Case-1 & Case-2 & Case-1 & Case-2 \\ \hline _VGG\_19_ & 31.62 min \(\pm\) 0.48 & 32.55 min \(\pm\) 0.43 (**2.92\%**) & 31.92 min \(\pm\) 0.49 (**0.92\%**) & 32.16 min \(\pm\) 0.69 (**1.7\%**) & 31.83 min \(\pm\) 0.57 (**0.64\%**) \\ _Resnet\_18_ & 42.08 min \(\pm\) 0.47 & 42.83 min \(\pm\) 0.64 (**1.78\%**) & 42.24 min \(\pm\) 0.69 (**0.39\%**) & 43.44 min \(\pm\) 2.65 (**3.23\%**) & 42.19 min \(\pm\) 0.57 (**0.26\%**) \\ _Resnet\_34_ & 66.64 min \(\pm\) 0.84 & 67 min \(\pm\) 0.89 (**0.54\%**) & 66.99 min \(\pm\) 0.96 (**0.53\%**) & 69.84 min \(\pm\) 6.11 (**4.8\%**) & 66.88 min \(\pm\) 0.9 (**0.36\%**) \\ _Resnet\_50_ & 139.32 min \(\pm\) 1.18 & 140.64 min \(\pm\) 0.51 (**0.95\%**) & 140.21 min \(\pm\) 1.7 (**0.64\%**) & 140.68 min \(\pm\) 0.51 (**0.97\%**) & 140.79 min \(\pm\) 0.68 (**1.06\%**) \\ _Resnet\_101_ & 223.88 min \(\pm\) 2.09 & 226.35 min \(\pm\) 3.05 (**1.10\%**) & 226.28 min \(\pm\) 1.87 (**1.07\%**) & 225.99 min \(\pm\) 3.23 (**0.54\%**) & 225.12 min \(\pm\) 2.18 (**1.00\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean running times (in minutes) of models with \(G\), \(G^{\prime}\) layer-based gradient amplification on the CIFAR-100 dataset across 10 iterations.
Figure 11: Testing accuracies % (Y-axis) of the best models on CIFAR-10 dataset with amplification performed using \(G\) algorithm (1) compared to original models without amplification. Mean training (gray) and mean testing (blue) accuracies (as well as original testing accuracies in light blue) are plotted, along with amplified training (green) and testing (red) accuracies.
These plots signify the importance of training the final epochs of the model without amplification and also demonstrate that the models do not overfit while training with amplification.
We also perform random amplification for deeper resnet models using some of the hyperparameters that performed best for the resnet-18 and resnet-34 models. The best accuracies of these models are compared in Tables 3 and 4 for both the CIFAR-10 and CIFAR-100 datasets. Our results with amplification based on \(G\) and \(G^{\prime}\) show similar, and sometimes improved, performance for the VGG-19, resnet-18 and resnet-34 models on the CIFAR-10 dataset. Resnet-50 and resnet-101 show more than 2% improvement over both the original and the randomly amplified models. On the CIFAR-100 dataset, all models based on \(G\) and \(G^{\prime}\) show significant performance improvements compared to the original and randomly amplified models.
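As an illustration only, a minimal PyTorch sketch of thresholded layer-wise gradient amplification is given below; it is not the exact implementation used in this work, the gradient-rate measures are assumed to be computed elsewhere, and the layer names, rates, threshold and amplification factor shown are hypothetical placeholders.

```python
import torch.nn as nn

def select_layers(grad_rates, threshold):
    """Keep the layers whose normalized gradient rate exceeds the threshold."""
    return {name for name, rate in grad_rates.items() if rate > threshold}

def attach_amplification_hooks(model, selected, factor=2.0):
    """Scale the gradients of the selected layers' parameters during backprop."""
    handles = []
    for name, param in model.named_parameters():
        layer = name.rsplit(".", 1)[0]  # e.g. "0.weight" -> layer "0"
        if layer in selected:
            handles.append(param.register_hook(lambda g, f=factor: g * f))
    return handles  # call h.remove() on each to train the final epochs unamplified

# usage sketch on a toy model; rates and factor are hypothetical
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
rates = {"0": 0.9, "2": 0.4}
handles = attach_amplification_hooks(model, select_layers(rates, threshold=0.5))
```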
## 6 Conclusion
We propose two measures to compute the effective gradient direction of a layer during the training process. These measures are used to determine the amplification layers based on two amplification thresholding strategies. Detailed experiments are performed to analyze each of the measures and their
\begin{table}
\begin{tabular}{c c c c c c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\(Original\)} & \multicolumn{2}{c}{_Random layer_} & \multicolumn{2}{c}{_Ours_ using \(\hat{G}\)} & \multicolumn{2}{c}{_Ours_ using \(\hat{G}^{\prime}\)} \\ \cline{3-8} & & \multicolumn{1}{c|}{_amplification_ [16]} & Case-1 & Case-2 & Case-1 & Case-2 \\ \hline _VGG\_19_ & 65.27\% & 66.52\% (**+1.25\%**) & 69.38\% (**+4.11\%**) & 68.5\% (**+3.23\%**) & 69.83\% (**+4.56\%**) & 69.25\% (**+3.98\%**) \\ _Resnet\_18_ & 71.94\% & 72.7\% (**+0.76\%**) & 75.33\% (**+3.39\%**) & 75.41\% (**+3.47\%**) & 76.14\% (**+4.2\%**) & 75.35\% (**+3.41\%**) \\ _Resnet\_34_ & 72.18\% & 73.02\% (**+0.84\%**) & 74.86\% (**+2.68\%**) & 75.59\% (**+3.41\%**) & 75.95\% (**+3.77\%**) & 75.9\% (**+3.72\%**) \\ _Resnet\_50_ & 72.32\% & 73.05\% (**+0.73\%**) & 77.21\% (**+4.89\%**) & 77.43\% (**+5.11\%**) & 76.89\% (**+4.57\%**) & 76.97\% (**+4.65\%**) \\ _Resnet\_101_ & 73.00\% & 73.72\% (**+0.72\%**) & 77.63\% (**+4.63\%**) & 77.51\% (**+4.51\%**) & 77.53\% (**+4.53\%**) & 77.63\% (**+4.63\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of random, \(G\), \(G^{\prime}\) layer-based gradient amplification models on CIFAR-100 dataset.
\begin{table}
\begin{tabular}{c c c c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\(Original\)} & \multicolumn{2}{c}{_Random layer_} & \multicolumn{2}{c}{_Ours_ using \(\hat{G}\)} & \multicolumn{2}{c}{_Ours_ using \(\hat{G}^{\prime}\)} \\ \cline{3-8} & & \multicolumn{1}{c|}{_amplification_ [16]} & Case-1 & Case-2 & Case-1 & Case-2 \\ \hline _VGG\_19_ & 91.08\% & 93.35\% (**+2.27\%**) & 93.30\% (**+2.22\%**) & 92.92\% (**+1.84\%**) & 93.29\% (**+2.21\%**) & 93.34\% (**+2.26\%**) \\ _Resnet\_18_ & 92.39\% & 94.57\% (**+2.18\%**) & 93.90\% (**+1.51\%**) & 93.76\% (**+1.37\%**) & 94.49\% (**+2.1\%**) & 94.1\% (**+1.71\%**) \\ _Resnet\_34_ & 92.71\% & 94.39\% (**+1.68\%**) & 94.05\% (**+1.34\%**) & 93.89\% (**+1.18\%**) & 94.56\% (**+1.85\%**) & 94.14\% (**+1.43\%**) \\ _Resnet\_50_ & 91.80\% & 92.68\% (**+0.88\%**) & 94.24\% (**+2.44\%**) & 94.43\% (**+2.63\%**) & 94.34\% (**+2.54\%**) & 94.02\% (**+2.22\%**) \\ _Resnet\_101_ & 91.95\% & 93.04\% (**+1.09\%**) & 94.57\% (**+2.62\%**) & 94.54\% (**+2.59\%**) & 94.35\% (**+2.4\%**) & 94.7\% (**+2.75\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of random, \(G\), \(G^{\prime}\) layer-based gradient amplification models on CIFAR-10 dataset.
amplification strategies for a range of thresholds. Experiments performed on the CIFAR-10 and CIFAR-100 datasets using different models show that intelligent amplification achieves better accuracy than the original models without amplification, even when trained with higher learning rates. In this work, gradient amplification is evaluated on VGG and resnet models. As this method can be easily integrated into various deep learning architectures [42], future work could extend it to deep belief networks [43], recurrent neural networks [44], attention networks [45], fuzzy/hybrid neural networks [46; 47; 48; 49; 50; 51] and graph neural networks [52].
**CRediT authorship contribution statement**
**Sunitha Basodi:** Conceptualization, Methodology, Software, Validation, Investigation, Writing - original draft, Visualization. **Krishna Pusuluri:** Conceptualization, Methodology, Software, Investigation, Validation, Writing - review & editing. **Xueli Xiao:** Methodology, Resources, Software, Writing - review & editing. **Yi Pan:** Supervision, Funding acquisition, Project administration, Conceptualization, Methodology, Investigation, Validation, Writing - review & editing.
**Declaration of competing interest**
All the authors declare that they have no competing interests.
**Data availability**
Openly available datasets are used in this work.
**Acknowledgments**
This work was supported in part by Advanced Research Computing Technology and Innovation Core (ARCTIC) resources, which are supported by the National Science Foundation Major Research Instrumentation (MRI) grant number CNS-1920024. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU also used for this research. |
2310.08392 | Introducing a Deep Neural Network-based Model Predictive Control
Framework for Rapid Controller Implementation | Model Predictive Control (MPC) provides an optimal control solution based on
a cost function while allowing for the implementation of process constraints.
As a model-based optimal control technique, the performance of MPC strongly
depends on the model used where a trade-off between model computation time and
prediction performance exists. One solution is the integration of MPC with a
machine learning (ML) based process model which is quick to evaluate online.
This work presents the experimental implementation of a deep neural network
(DNN) based nonlinear MPC for Homogeneous Charge Compression Ignition (HCCI)
combustion control. The DNN model consists of a Long Short-Term Memory (LSTM)
network surrounded by fully connected layers which was trained using
experimental engine data and showed acceptable prediction performance with
under 5% error for all outputs. Using this model, the MPC is designed to track
the Indicated Mean Effective Pressure (IMEP) and combustion phasing
trajectories, while minimizing several parameters. Using the acados software
package to enable the real-time implementation of the MPC on an ARM Cortex A72,
the optimization calculations are completed within 1.4 ms. The external A72
processor is integrated with the prototyping engine controller using a UDP
connection allowing for rapid experimental deployment of the NMPC. The IMEP
trajectory following of the developed controller was excellent, with a
root-mean-square error of 0.133 bar, in addition to observing process
constraints. | David C. Gordon, Alexander Winkler, Julian Bedei, Patrick Schaber, Jakob Andert, Charles R. Koch | 2023-10-12T15:03:50Z | http://arxiv.org/abs/2310.08392v1 | Introducing a Deep Neural Network-based Model Predictive Control Framework for Rapid Controller Implementation
###### Abstract
Model Predictive Control (MPC) provides an optimal control solution based on a cost function while allowing for the implementation of process constraints. As a model-based optimal control technique, the performance of MPC strongly depends on the model used, where a trade-off between model computation time and prediction performance exists. One solution is the integration of MPC with a machine learning (ML) based process model which is quick to evaluate online. This work presents the experimental implementation of a deep neural network (DNN) based nonlinear MPC for Homogeneous Charge Compression Ignition (HCCI) combustion control. The DNN model consists of a Long Short-Term Memory (LSTM) network surrounded by fully connected layers which was trained using experimental engine data and showed acceptable prediction performance with under 5% error for all outputs. Using this model, the MPC is designed to track the Indicated Mean Effective Pressure (IMEP) and combustion phasing trajectories, while minimizing several parameters. Using the acados software package to enable the real-time implementation of the MPC on an ARM Cortex A72, the optimization calculations are completed within 1.4 ms. The external A72 processor is integrated with the prototyping engine controller using a UDP connection allowing for rapid experimental deployment of the NMPC. The IMEP trajectory following of the developed controller was excellent, with a root-mean-square error of 0.133 bar, in addition to observing process constraints.
## I Introduction
Model-based optimal control techniques leverage the significant advances in system modeling and the computational performance increases seen over the last two decades [1]. A wide range of model-based control methods have been investigated including: linear quadratic regulator [2], sliding mode controller [3], adaptive control [4], and Model Predictive Control (MPC) [5]. Of these model-based control strategies, MPC is the most widely used across a range of applications [6], taking advantage of the ability of MPC to provide an optimal control solution while allowing for the implementation of constraints on system states and controller outputs [7]. For automotive applications, MPC appears to be an ideal solution, as internal combustion engines (ICEs) have nonlinear process dynamics and many constraints that must be obeyed; however, the high computational demand of online MPC has meant that it has only recently been applied to vehicles [8, 9]. One such solution is the integration of black-box machine learning (ML) process models which are quick to evaluate during MPC optimization.
One specific application of interest for MPC is the combustion control of Homogeneous Charge Compression Ignition (HCCI), which has shown promise in reducing engine-out emissions (99% reduction in NO\({}_{\text{x}}\) vs modern gasoline combustion [10]) and increasing efficiency (up to 30% compared to current gasoline engines [11]). The challenge is that HCCI is susceptible to large cyclic variations and correspondingly poor combustion stability, as the combustion process lacks a direct mechanism to control combustion timing [12]. Specifically, HCCI operation suffers from significant cycle-to-cycle coupling, resulting from high exhaust gas re-circulation, leading to operation bounded by misfire at low loads and by high pressure rise rates and peak pressure at high loads, which complicates modeling and control implementation.
Several ML techniques have been widely used for engine performance and emission modeling and control [13, 14, 15]. As HCCI combustion experiences significant cyclic coupling, a Recurrent Neural Network (RNN) is of interest for process modeling due to the inclusion of backward connections to handle sequential inputs [16]. However, the contribution of earlier time steps becomes increasingly small (the "vanishing gradient") and thus RNNs cannot accurately capture long-term dependencies in the process. Therefore, memory cells can be introduced, of which the Long Short-Term Memory (LSTM) cell is the most common [17]. Each LSTM cell has two recurrent loops, one for long-term information and one for short-term information. However, traditionally the integration of an LSTM model into a nonlinear Model Predictive Controller (NMPC) has been focused on slow-response applications such as temperature set-point planning for buildings [18]. Recent progress in the experimental application of LSTM-NMPC to high-speed systems, including diesel combustion control, has shown the great potential of this strategy [19]. Unlike the existing literature, this work demonstrates the implementation of a real-time controller on an external ARM Cortex A72 processor using the acados optimal control framework to allow for
real-time execution of the NMPC for control of HCCI combustion [20]. The controller will be designed to track engine load and combustion timing trajectories while minimizing fuel and water consumption, pressure rise rates, and NO\({}_{\text{x}}\) emissions.
## II Experimental Setup
A single cylinder research engine (SCRE) outfitted with a fully variable electromechanical valve train (EMVT) is used. The EMVT system allows for engine operation with a variety of valve strategies, however, in this work only negative valve overlap (NVO) will be used to provide the required thermal energy for HCCI combustion. Fuel, conventional European RON 95 gasoline with 10 % ethanol, and distilled water is directly injected into the combustion chamber. Full engine details can be found in [21]. The SCRE is controlled using a dSPACE MicroAutoBox II (MABX) rapid control prototyping (RCP) ECU containing an Xilinx Kintex-7 FPGA which is used to calculate combustion metrics in real-time [22].
## III Deep Neural Network-based Engine Model
To model the HCCI engine performance and emissions, a deep neural network (DNN) with seven hidden layers (six Fully Connected (FC) layers and one LSTM layer) was developed as shown in Fig 1.
To train this DNN, which has 2260 learnable parameters, a data set of 65,000 consecutive cycles was collected from the SCRE. During the engine operation, the process inputs of duration of injection (DOI) of fuel, DOI of water, and negative valve overlap (NVO) duration were varied in both amplitude and frequency. Then, using a training process similar to previous work, the combustion outputs of engine load represented by indicated mean effective pressure (IMEP), combustion phasing angle (CA50), maximum pressure rise rate (MPRR) and engine-out nitrogen oxide emissions NO\({}_{\text{x}}\) were predicted [19]. The overall accuracy of this model for each output is summarized in Table I, which lists the root-mean-square error (RMSE) and normalized RMSE (in percent) for training and validation data. MPRR was the most difficult parameter to predict, with a 4.7% error on the validation dataset, while the other outputs were predicted with less than 4% error.
For implementation into an MPC, this model is formulated using a nonlinear state-space representation to allow for integration into acados as described in [23]. The nonlinear state-space model is given by
\[x(k+1)=f\left(x(k),u(k)\right),\tag{1a}\]

\[y(k)=f_{\text{FC,out}}\left(f\left(x(k),u(k)\right)\right)=g\left(x(k),u(k)\right),\tag{1b}\]
with \(f\) combining the input and LSTM layers, \(f_{FC,out}\) representing the output layers. Here \(x(k)\) are the internal model states (corresponding to the LSTM: four cell states, \(c(k)\), and four hidden states, \(h(k)\)), \(y(k)\) the model outputs, and \(u(k)\) the model inputs. These are:
\[x(k)=\begin{bmatrix}c(k-1)\\ h(k-1)\end{bmatrix}\in\mathbb{R}^{8},\quad y(k)=\begin{bmatrix}y_{\text{IMEP}}(k)\\ y_{\text{CA50}}(k)\\ y_{\text{NO}_{x}}(k)\\ y_{\text{MPRR}}(k)\end{bmatrix}\in\mathbb{R}^{4},\quad u(k)=\begin{bmatrix}y_{\text{IMEP}}(k-1)\\ y_{\text{CA50}}(k-1)\\ u_{\text{DOI,fuel}}(k)\\ u_{\text{DOI,water}}(k)\\ u_{\text{NVO}}(k)\end{bmatrix}\in\mathbb{R}^{5}.\tag{2}\]
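To make this state-space formulation concrete, the following PyTorch sketch mirrors Eq. (1)-(2) with an LSTM cell wrapped by fully connected layers; the layer sizes and the single input/output FC layers are illustrative simplifications, not the exact 2260-parameter network described above.

```python
import torch
import torch.nn as nn

class EngineStateSpace(nn.Module):
    """Sketch of Eq. (1): x(k+1) = f(x(k), u(k)), y(k) = g(x(k), u(k))."""
    def __init__(self, n_u=5, n_hidden=4, n_y=4):
        super().__init__()
        self.fc_in = nn.Linear(n_u, n_u)        # stand-in for the input FC layers
        self.lstm = nn.LSTMCell(n_u, n_hidden)  # carries c(k) and h(k)
        self.fc_out = nn.Linear(n_hidden, n_y)  # stand-in for f_FC,out

    def forward(self, u, state):
        c, h = state                            # x(k) = [c(k-1); h(k-1)]
        h_next, c_next = self.lstm(torch.tanh(self.fc_in(u)), (h, c))
        return self.fc_out(h_next), (c_next, h_next)

model = EngineStateSpace()
u = torch.zeros(1, 5)                           # [IMEP, CA50, DOI fuel/water, NVO]
state = (torch.zeros(1, 4), torch.zeros(1, 4))
y, state = model(u, state)                      # y ~ [IMEP, CA50, NOx, MPRR]
```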
To reduce oscillation of the outputs, the gradients of the manipulated variables are added as new inputs [24]. This ensures that the positive definite weighting matrix forces the change to zero, thus allowing the controller to achieve the desired output setpoints:
\[\underbrace{\begin{bmatrix}x(k+1)\\ u(k)\end{bmatrix}}_{\tilde{x}(k+1)}=\underbrace{\begin{bmatrix}f\left(x(k),u(k-1)+\Delta u(k)\right)\\ u(k-1)+\Delta u(k)\end{bmatrix}}_{\tilde{f}\left(\tilde{x}(k),\Delta u(k)\right)},\tag{3a}\]

\[\underbrace{\begin{bmatrix}y(k)\\ u(k-1)\end{bmatrix}}_{\tilde{y}(k)}=\underbrace{\begin{bmatrix}g\left(x(k)\right)\\ u(k-1)\end{bmatrix}}_{\tilde{g}\left(\tilde{x}(k)\right)}.\tag{3b}\]
This formulation allows for both the absolute inputs and their gradients to be penalized within the cost function.
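A minimal sketch of this input-rate augmentation, mirroring Eq. (3) for generic discrete maps \(f\) and \(g\), is given below; the placeholder dynamics in the usage example are purely illustrative and are not the engine model.

```python
import numpy as np

def augment_with_input_rate(f, g, n_u):
    """Mirror Eq. (3): append the previous input u(k-1) to the state so that
    the increment du becomes the new control of the augmented model."""
    def f_aug(x_aug, du):
        x, u_prev = x_aug[:-n_u], x_aug[-n_u:]
        u = u_prev + du
        return np.concatenate([f(x, u), u])
    def g_aug(x_aug):
        x, u_prev = x_aug[:-n_u], x_aug[-n_u:]
        return np.concatenate([g(x), u_prev])
    return f_aug, g_aug

# toy check with linear placeholder dynamics (3 states, 2 inputs)
f = lambda x, u: 0.9 * x + np.sum(u) * np.ones_like(x)
g = lambda x: x[:2]
f_aug, g_aug = augment_with_input_rate(f, g, n_u=2)
x_aug = np.zeros(5)                       # 3 states + 2 previous inputs
x_next = f_aug(x_aug, np.array([0.1, -0.2]))
```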
Thus, the discrete Optimal Control Problem (OCP) is defined as follows
\[\begin{split}\min_{\begin{subarray}{c}\Delta u_{0},\ldots,\Delta u_{N-1}\\ \tilde{x}_{0},\ldots,\tilde{x}_{N}\\ \tilde{y}_{0},\ldots,\tilde{y}_{N}\end{subarray}}\quad&\sum_{i=0}^{N}\left\|r_{i}-\tilde{y}_{i}\right\|_{Q}^{2}+\left\|\Delta u_{i}\right\|_{R}^{2}\\ \text{s.t.}\quad&\tilde{x}_{0}=\begin{bmatrix}x(k),\;u(k-1)\end{bmatrix}^{\top},\\ &\tilde{x}_{i+1}=\tilde{f}(\tilde{x}_{i},\Delta u_{i})\qquad\forall i\in\mathbb{H}\setminus N,\\ &\tilde{y}_{i}=\tilde{g}(\tilde{x}_{i},\Delta u_{i})\qquad\forall i\in\mathbb{H},\\ &u_{\min}\leq F_{u}\cdot\tilde{u}_{i}\leq u_{\max}\qquad\forall i\in\mathbb{H},\\ &y_{\min}\leq F_{y}\cdot\tilde{y}_{i}\leq y_{\max}\qquad\forall i\in\mathbb{H}\end{split}\tag{4}\]
where \(\mathbb{H}=\{0,1,\ldots,N\}\). The reference \(r_{i}\) and the weighting matrix \(Q\) are selected such that deviations from the requested IMEP and CA50 are penalized while minimizing NO\({}_{\text{x}}\) emissions, the durations of injected fuel \(u_{\text{DOI,fuel}}(k)\) and water \(u_{\text{DOI,water}}(k)\), and the change in control input \(\Delta u\). Therefore, the specific cost function \(J\) is
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Unit** & **Training** & **Validation** \\ \hline \multirow{2}{*}{\(y_{\text{IMEP}}\)} & [bar] & 0.074 & 0.077 \\ & [\%] & 2.7 & 2.8 \\ \hline \multirow{2}{*}{\(y_{\text{NO}_{x}}\)} & [ppm] & 18 & 16 \\ & [\%] & 4.2 & 3.8 \\ \hline \multirow{2}{*}{\(y_{\text{MPRR}}\)} & [bar/CAD] & 1.6 & 1.7 \\ & [\%] & 4.4 & 4.7 \\ \hline \multirow{2}{*}{\(y_{\text{CA50}}\)} & [CAD] & 1.5 & 1.6 \\ & [\%] & 3.4 & 3.6 \\ \hline \hline \end{tabular}
\end{table} TABLE I: RMSE and normalized RMSE of DNN model vs. experimental data
specified as
\[\begin{split}J=\sum_{i=0}^{N}&\underbrace{\left\|r_{\text{IMEP},i}-y_{\text{IMEP},i}\right\|_{q_{\text{IMEP}}}^{2}+\left\|r_{\text{CA50},i}-y_{\text{CA50},i}\right\|_{q_{\text{CA50}}}^{2}}_{\text{Reference tracking}}\\ &+\underbrace{\left\|u_{\text{DOI,fuel},i}\right\|_{q_{\text{DOI,fuel}}}^{2}+\left\|u_{\text{DOI,water},i}\right\|_{q_{\text{DOI,water}}}^{2}}_{\text{Fuel / water consumption reduction}}\\ &+\underbrace{\left\|y_{\text{NO}_{x},i}\right\|_{q_{\text{NO}_{x}}}^{2}}_{\text{Emission reduction}}+\underbrace{\left\|\Delta u_{i}\right\|_{R}^{2}}_{\text{Oscillation reduction}}\end{split}\tag{5}\]
One significant advantage of using MPC for combustion control is the ability to impose constraints on inputs and outputs to ensure safe engine operation. \(F_{u}\) and \(F_{y}\) in Eq. 4 are diagonal matrices which map the bounded outputs and inputs. The control outputs are limited to match the hardware used (\(u_{\text{min},\text{max}}\)) while constraints imposed on the outputs (\(y_{\text{min},\text{max}}\)) are used to guarantee safe engine operation as summarized in Table II. In this work the constraints are not overly restrictive, however, in future testing, these limits could be modified to meet specific legislation or design constraints.
## IV Experimental deployment of LSTM-NMPC
One of the main challenges of deploying an NMPC for engine control is the limited calculation time available. For this work, the engine is operated at 1500 rpm, resulting in one complete engine cycle taking 80 ms. The combustion metrics, calculated on the FPGA, are completed at 60 crank angle degrees (CAD) after top dead center (aTDC) [22] and the valve timing setpoint must be known by 260 CAD, so only 200 CAD or 22 ms are available for the NMPC calculation. To meet the real-time requirements, the computationally efficient open-source package acados is used [25]. When compared to previous LSTM-based NMPC integration [19], this work offloads the NMPC calculation to an ARM Cortex A72 processor located in a Raspberry Pi 400 (RPI) overclocked to a 2.2 GHz clock frequency. The RPI communicates with the main engine controller running on the dSPACE MABX II over UDP as shown in Figure 2.
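The listing below sketches the RPI side of such a UDP exchange in Python; the addresses, ports, packet layout, and the solve_nmpc wrapper are all assumptions for illustration, as the actual MABX/RPI protocol is not detailed here.

```python
import socket
import struct

# hypothetical addresses, ports and packet layout
MABX_ADDR = ("192.168.1.10", 5001)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5002))

while True:
    data, _ = sock.recvfrom(1024)               # measured states + references
    imep, ca50, nox, mprr, r_imep, r_ca50 = struct.unpack("<6f", data)
    # solve_nmpc is an assumed wrapper around the OCP solver, defined elsewhere
    u = solve_nmpc(imep, ca50, nox, mprr, r_imep, r_ca50)
    sock.sendto(struct.pack("<3f", *u), MABX_ADDR)  # DOI fuel, DOI water, NVO
```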
The NMPC calculation on the RPI is standalone from the main engine controller (dSPACE MABX II) other than receiving the current measured states (MPRR, IMEP, CA50 and NO\({}_{\text{x}}\) emissions) and reference from the MABX as shown in Figure 3. Both the optimizer and the LSTM model are calculated on the RPI, and the
\begin{table}
\begin{tabular}{c c c} \hline Lower bound & Variable & Upper bound \\ \hline
1 bar & \(y_{\text{IMEP}}\) & 6 bar \\
0 CAD aTDC & \(y_{\text{CA50}}\) & 17 CAD aTDC \\
0 ppm & \(y_{\text{NO}_{x}}\) & 500 ppm \\
0 bar/CAD & \(y_{\text{MPRR}}\) & 15 bar/CAD \\
0 ms & \(u_{\text{DOI},\text{fuel}}\) & 1.50 ms \\
0 ms & \(u_{\text{DOI},\text{water}}\) & 1.00 ms \\
150 CAD & \(u_{\text{NVO}}\) & 360 CAD \\ \hline \end{tabular}
\end{table} TABLE II: NMPC Constraint Values
Fig. 1: Structure of proposed deep neural network model for engine performance and emission modeling. LSTM: Long short-term memory, DOI: duration of injection, IMEP: indicated mean effective pressure, MPRR: maximum pressure rise rate, CA50: combustion phasing for 50% heat release
Fig. 2: Block diagram of the split controller structure running on a Raspberry Pi 400 and Rapid Control Prototyping (RCP) unit dSPACE MABX II
calculated actuations for the next cycle are sent to the MABX II. One significant benefit of this formulation is that the NMPC model can be updated or replaced without rebuilding the main engine control software, thus significantly reducing the development time of the controller. Additionally, this modular controller design allows for the NMPC to be executed on any external processor that can interface with the MABX II over the user datagram protocol (UDP).
When compared to other MPC implementations in simulation, acados outperforms both MATLAB's MPC toolbox using fmincon and the FORCES PRO backend [23]. This difference in solver performance is attributed to the state vector having a higher dimension than the control input vector, in addition to the short prediction horizon of three cycles needed to model HCCI ICE dynamics. This allows the OCP solver to take full advantage of the condensation benefits [24, 26].
As no discretization is required for the model used, the plant model can be directly implemented using the discrete dynamics interface of acados. The Gauss-Newton approximation is used for the computation of the Hessian in the underlying Sequential Quadratic Programming (SQP) algorithm. The Optimal Control Problem in Eq. 4 leads to a band-diagonal structure in the matrices of the Quadratic Problems (QPs), which are solved using the Interior Point (IP) based QP solver hpipm [27] provided by the acados package [25].
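A hedged configuration sketch using the acados Python interface is shown below; it only reflects the solver options described in the text, and engine_model stands in for a fully defined AcadosModel (discrete dynamics, cost and constraints) that must be set up elsewhere.

```python
from acados_template import AcadosOcp, AcadosOcpSolver

ocp = AcadosOcp()
ocp.model = engine_model                            # assumed AcadosModel with disc_dyn_expr
ocp.dims.N = 3                                      # three-cycle prediction horizon
ocp.solver_options.integrator_type = "DISCRETE"     # discrete dynamics interface
ocp.solver_options.hessian_approx = "GAUSS_NEWTON"
ocp.solver_options.nlp_solver_type = "SQP"
ocp.solver_options.nlp_solver_max_iter = 3          # three SQP iterations per cycle
ocp.solver_options.qp_solver = "PARTIAL_CONDENSING_HPIPM"
solver = AcadosOcpSolver(ocp, json_file="engine_nmpc.json")
```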
Fig. 3: Experimental cycle-to-cycle NMPC implementation on ARM Cortex A72: Showing IMEP reference tracking performance. Experimental data shown in blue, DNN model data in black and reference shown in dashed red.
Using a model-in-the-loop (MiL) simulation run on the targeted hardware, the weights, number of SQP iterations and prediction horizon are defined for the NMPC. These MiL simulations showed that three SQP iterations can be completed in the available calculation window. Additionally, previous work has shown that the cycle-to-cycle dependency of HCCI lasts approximately two cycles, so a prediction horizon of three cycles is sufficient [28].
For experimental implementation, a reference load profile is provided to the controller using a target IMEP. Additionally, a reference CA50 is provided to keep the combustion phasing at an efficient operating point of 6 CAD aTDC. Using the constraints specified in Table II, the NMPC is implemented and the experimental LSTM-NMPC performance can be seen in Figure 3. As the NMPC allows for multi-objective tracking, both the IMEP and CA50 tracking performance are evaluated. A simulated load profile of varying IMEP steps is provided, resulting in a tracking performance of 0.133 bar RMSE while keeping the CA50 at the setpoint of 6 CAD aTDC with an RMSE of 1.83 CAD. As expected, the duration of fuel injection corresponds with the requested IMEP. This trend is also seen in the NVO duration, where with increased fuelling a reduced NVO is requested to allow more air into the cylinder.
The constraints applied are obeyed for all but one of the cycles (cycle 552), where the 300 ppm constraint on \(\text{NO}_{\text{x}}\) emissions is slightly exceeded at 305 ppm. This is likely due to plant-model mismatch, resulting from the model predicting a slightly lower \(\text{NO}_{\text{x}}\) output than the experimental data. With reference to Figure 3, the model predictions for both the CA50 and IMEP parameters appear damped compared to the experimental values. However, for both engine performance metrics, the model is able to capture the trends quite accurately, with the DNN model predicting IMEP with an RMSE of 0.09 bar and CA50 with an RMSE of 1.27 CAD.
The experimental testing of the acados NMPC executed on the ARM Cortex A72 external processor resulted in an average execution time of 1.18 ms with all calculations taking below 1.4 ms for each of the 650 cycles tested. Even with the 1 ms UDP communication time, the NMPC is below the 22 ms available and shows that if needed the model complexity could be increased with the computational power available.
The LSTM model used in the NMPC implementation has the ability for the cell and hidden states to vary depending on the current engine output. This allows the LSTM model to adapt as the engine changes. The change in the LSTM states can be seen in Figure 4. These are not physical but rather internal states resulting from the structure of the DNN model used.
When compared to the time- and resource-intensive process traditionally used for developing a look-up table-based control strategy, the LSTM-based NMPC provides a strategy for rapid controller development. For the model used in this work, the experimental data collected (65,000 engine cycles) took 1.5 hours of testbench time to collect. Often the most significant drawback of black-box models is the required training time for the model itself; however, using modern computing hardware (an Intel i7-12700K based PC with an NVIDIA RTX 3090Ti), the model training took just under 3 hours. Taking advantage of the flexibility of acados allows for efficient NMPC design and integration on the external processor. This control design process is significantly quicker than traditional engine control development, saving time and reducing calibration effort and costs while providing improved controller performance by allowing for the integration of an optimal control strategy on experimental hardware.
## V Conclusions
Overall, the development and experimental implementation of an LSTM-based NMPC executed on an external processor using the acados framework was shown. To train the DNN process model, which has 2260 learnable parameters, a data set of 65,000 consecutive cycles was used. The DNN network consists of an LSTM layer, used to capture long-term dependencies and cyclic coupling, surrounded by fully connected layers providing a computationally efficient model of the HCCI combustion outputs (IMEP, MPRR, CA50 and NOx). This model resulted in an error below 5% for all outputs on validation data.
Using this DNN model of the HCCI process, the open-source software acados provided the required embedded programming for real-time implementation of the LSTM-NMPC. Experimental testing showed HCCI cycle-to-cycle combustion control provided good load and combustion phasing tracking while simultaneously obeying process constraints. With an average turnaround time of 1.18 milliseconds, this work showed the potential of the ML-based NMPC on a real-time system where only 22 ms was available for NMPC calculation and actuation.
Fig. 4: LSTM cell and hidden states during experimental implementation. Cell states in black and hidden states in blue.
The proposed LSTM-NMPC implementation allows for a shorter controller development time and efficient MPC integration.
The proof of concept shown in this work can then be further improved by tightening process constraints to match emission legislation and to improve the longevity of the engine. Additionally, the developed toolchain for ML-based NMPC could be applied to other nonlinear constrained systems including ICEs utilizing alternative fuels or hydrogen fuel cells.
## Acknowledgements
The research was performed under the Natural Sciences and Engineering Research Council of Canada Grant 2022-03411 and as part of the Research Unit (Forschungsgruppe) FOR 2401 "Optimization based Multiscale Control for Low Temperature Combustion Engines", which is funded by the German Research Association (Deutsche Forschungsgemeinschaft, DFG).
|
2302.01819 | A Hybrid Training Algorithm for Continuum Deep Learning Neuro-Skin
Neural Network | In this brief paper, a learning algorithm is developed for Deep Learning
NeuroSkin Neural Network to improve their learning properties. Neuroskin is a
new type of neural network presented recently by the authors. It is comprised
of a cellular membrane which has a neuron attached to each cell. The neuron is
the cells nucleus. A neuroskin is modelled using finite elements. Each element
of the finite element represents a cell. Each cells neuron has dendritic fibers
which connects it to the nodes of the cell. On the other hand, its axon is
connected to the nodes of a number of different neurons. The neuroskin is
trained to contract upon receiving an input. The learning takes place during
updating iterations using sensitivity analysis. It is shown that while the
neuroskin cannot present the desirable response, it improves gradually to the
desired level. | Mehrdad Shafiei Dizaji | 2023-02-03T15:54:06Z | http://arxiv.org/abs/2302.01819v1 | # A Hybrid Training Algorithm for Continuum Deep Learning Neuro-Skin Neural Network
###### Abstract
In this brief paper, a learning algorithm is developed for the Deep Learning Neuro-Skin Neural Network to improve its learning properties. The neuroskin is a new type of neural network presented recently by the authors. It is comprised of a cellular membrane which has a neuron attached to each cell. The neuron is the cell's nucleus. A neuroskin is modelled using finite elements. Each element of the finite element model represents a cell. Each cell's neuron has dendritic fibers which connect it to the nodes of the cell. On the other hand, its axon is connected to the nodes of a number of different neurons. The neuroskin is trained to contract upon receiving an input. The learning takes place during updating iterations using sensitivity analysis. It is shown that while the neuroskin initially cannot present the desirable response, it gradually improves to the desired level.
Neuro-skin, Hybrid Algorithm, Particle Swarm Optimization (PSO), L-BFGS-B Algorithm, Training, Neural Networks, Finite Element.
## 1 Introduction
_Neuro-Skins or Neuro-Membranes (NMs)._ In the previous paper, neuroskins were presented and their response characteristics were studied [1]. Neuroskins are an improved version of Dynamic Plastic Neural Networks (DPNNs), which have recently been introduced by the authors [1-2]. As opposed to a DPCNN, which contains a limited number of neurons, each of which is only connected to a small number of points of the base medium, a neuro-skin has the property that every small segment of it has the properties of a neuron. In this regard, the neuro-skin can be considered as a homogeneous skin. This property of the neuro-skin is holographic and does not change with the size of the segment taken out of the membrane. This property will be discussed more in the next sections. The membrane exhibits the activation of neurons. That is, at any point it issues an output which is a function of the response of the plate at that point. A more general neuro-membrane can be considered which has the property that its output at any point is a function of the membrane response at farther points as well.
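As a rough illustration of how the hybrid training scheme named in the title (PSO combined with L-BFGS-B) can be organized, the following generic Python sketch runs a global particle-swarm search and refines the best particle with L-BFGS-B; it is a simplified stand-in with made-up coefficients, not the exact algorithm developed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_train(loss, dim, n_particles=20, iters=50, seed=0):
    """Global PSO search followed by local L-BFGS-B refinement (generic sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    for _ in range(iters):
        gbest = pbest[pbest_f.argmin()]
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
    res = minimize(loss, pbest[pbest_f.argmin()], method="L-BFGS-B")
    return res.x, res.fun

# toy usage on a quadratic loss
weights, fun = hybrid_train(lambda w: np.sum((w - 0.3) ** 2), dim=5)
```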
While mathematical modelling and analytical study of neuro-membranes are important to formulate the underlying equations governing their behavior and properties, a first numerical study of their dynamic behavior is essential to provide more insight into their behaviour and to visualize their characteristics. To get more information about neuroskin neural networks and their characteristics, the reader can refer to the previous research [1]. The neuro-membrane utilized in this paper is a two-dimensional plate which has been studied by the authors in their previous papers on DPCNNs [1] - [26]. They have used this problem in all their studies so that the results and characteristics of the different continuous neural network models could be compared. Figure 1 shows the plate, which is 500 mm by 1000 mm. It is unrestricted on three of its edges but restricted by 11 simple supports on its fourth edge [1]. The simple supports provide a large number of redundancies in addition to the reactions needed for the static equilibrium of the |
2304.05729 | Dynamic Graph Representation Learning with Neural Networks: A Survey | In recent years, Dynamic Graph (DG) representations have been increasingly
used for modeling dynamic systems due to their ability to integrate both
topological and temporal information in a compact representation. Dynamic
graphs allow to efficiently handle applications such as social network
prediction, recommender systems, traffic forecasting or electroencephalography
analysis, that cannot be addressed using standard numeric representations. As a
direct consequence of the emergence of dynamic graph representations, dynamic
graph learning has emerged as a new machine learning problem, combining
challenges from both sequential/temporal data processing and static graph
learning. In this research area, Dynamic Graph Neural Network (DGNN) has became
the state of the art approach and plethora of models have been proposed in the
very recent years. This paper aims at providing a review of problems and models
related to dynamic graph learning. The various dynamic graph supervised
learning settings are analysed and discussed. We identify the similarities and
differences between existing models with respect to the way time information is
modeled. Finally, general guidelines for a DGNN designer when faced with a
dynamic graph learning problem are provided. | Leshanshui Yang, Sébastien Adam, Clément Chatelain | 2023-04-12T09:39:17Z | http://arxiv.org/abs/2304.05729v1 | # Dynamic Graph Representation Learning with Neural Networks: A Survey
###### Abstract
In recent years, Dynamic Graph (DG) representations have been increasingly used for modeling dynamic systems due to their ability to integrate both topological and temporal information in a compact representation. Dynamic graphs allow to efficiently handle applications such as social network prediction, recommender systems, traffic forecasting or electroencephalography analysis, which cannot be addressed using standard numeric representations. As a direct consequence of the emergence of dynamic graph representations, dynamic graph learning has emerged as a new machine learning problem, combining challenges from both sequential/temporal data processing and static graph learning. In this research area, the Dynamic Graph Neural Network (DGNN) has become the state-of-the-art approach and a plethora of models have been proposed in the very recent years. This paper aims at providing a review of problems and models related to dynamic graph learning. The various dynamic graph supervised learning settings are analysed and discussed. We identify the similarities and differences between existing models with respect to the way time information is modeled. Finally, general guidelines for a DGNN designer when faced with a dynamic graph learning problem are provided.
## 1 Introduction
Graphs are data structures used for representing both attributed entities (the vertices of the graph) and relational information between them (the edges of the graph) in a single and compact formalism. They are powerful and versatile, capable of modelling irregular structures such as skeletons, molecules, transport systems, knowledge graphs or social networks, across different application domains such as chemistry, biology or finance. This expressive power of graphs explains why graphs have been used extensively to tackle Pattern Recognition (PR) tasks, as demonstrated by the current special issue.
In the PR community, most of the existing contributions focus on static graphs where the node set, the edge set and the nodes/edge attributes do not evolve with time. Yet, for some real-world applications such as traffic flow forecasting, rumour detection or link prediction in a recommender system, graphs are asked to handle time-varying topology and/or attributes, in order to model dynamic systems. Several terms are used in the literature to refer to graphs in which the structure and the attributes of nodes/edges evolve. Dynamic graphs
[108, 103], temporal graphs [96, 162, 109], (time-/temporally) evolving graphs [153, 107, 46, 100], time-varying graphs [156, 157], time-dependent graphs [158, 159], or temporal networks [8, 15, 113, 124] are examples of such terms which refer to conceptual variants describing the same principles. This multiplicity of terms can be explained by the diversity of the scientific communities interested by this kind of models but also by the youth of the field. It illustrates the need for precise definitions and clear taxonomies of problems and models, which is one of the contributions of this paper. In the following, we choose to use the more general term Dynamic Graph (DG).
As a direct consequence of the emergence of dynamic graph representations, dynamic graph learning emerged as a new machine learning problem, combining challenges from both sequential/temporal data processing and static graph learning.
When learning on sequential data, the fundamental challenge is to capture the dependencies between the different entities of a sequence. In this domain, the original concept of recurrence, mainly instantiated by LSTM [29], has been gradually replaced in recent years by fully convolutional architectures, which offer better parallelization capabilities during learning. More recently, sequence-to-sequence models based on the encoder/decoder framework have been proposed [173], allowing to deal with desynchronised input and output signals. These models, such as the famous transformer [36], rely on the intensive use of the attention mechanism [35, 174, 175].
When learning on static graphs, the main challenge is to overcome the permutation invariance/equivariance constraint inherent to the absence of node ordering in graph representations. To solve this problem, node-based message-passing mechanisms based on graph structure have led to the first generation of Graph Neural Networks (GNNs), called Message Passing Neural Networks (MPNNs) [163]. Like convolutions on images, these models propagate the features of each node to neighbouring nodes using trainable weights that can be shared with respect to the distance between nodes (Chebnet) [164], to the connected nodes' features (GAT) [165] and/or to edge features (k-GNN) [166]. Given the maturity of such models and their applicability to large sparse graphs, they have been applied with success on many downstream tasks. This maturity also explains the existence of some exhaustive reviews and comparative studies, such as [125, 168, 167, 172], to cite a few. One can note that, despite these successes, it has been shown that MPNNs are not powerful enough [167]. That is why machine learning on graphs is still a very active field, trying to improve the expressive power of GNNs [169, 171, 170]. However, these models are still too computationally demanding to be applicable to real-world problems.
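For reference, a minimal mean-aggregation message-passing layer can be sketched as follows; it is a generic illustration of the MPNN principle, not a faithful implementation of any of the models cited above.

```python
import torch
import torch.nn as nn

class SimpleMPNNLayer(nn.Module):
    """Minimal mean-aggregation message-passing layer (generic sketch)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin_self = nn.Linear(d_in, d_out)
        self.lin_neigh = nn.Linear(d_in, d_out)

    def forward(self, X, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (A @ X) / deg                 # mean over each node's neighbours
        return torch.relu(self.lin_self(X) + self.lin_neigh(neigh))

X, A = torch.randn(5, 8), (torch.rand(5, 5) < 0.4).float()
H = SimpleMPNNLayer(8, 16)(X, A)              # permutation-equivariant update
```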
Compared to neural networks applied for learning on sequences and on static graphs, Dynamic Graph Neural Network (DGNN) is a much more recent field. To the best of our knowledge, the founding DGNN models are from 2018 [79] and 2019 [106] for respectively the discrete and the continuous cases. These papers have been at the root of a "zoo" of methods proposed by various scientific
communities, with various terminologies, various learning settings and various application domains.
In order to structure the domain, some state-of-the-art papers have been published recently [12, 6, 13]. Without focusing on DGNN, these papers give a good overview of many machine learning issues linked to dynamic graphs and describe many existing models. However, they do not compare models according to the different possible DG inputs and supervised learning settings that can be encountered when applying machine learning to DG. As an example, [12, 6] do not consider in their study the spatial-temporal case. They also do not compare the ability of existing models to consider inductive or transductive tasks.
The main objective of this review is to extend the existing studies mentioned above, focusing on **dynamic graph supervised learning using neural networks**. It is addressed to an audience with fundamental knowledge of neural networks and static graph learning. Three main contributions can be highlighted. The first one consists in **clarifying and categorizing the different dynamic graph learning contexts** that are encountered in the literature. These contexts are distinguished according to the type of input DGs (discrete vs. continuous, edge-evolving vs. node-evolving vs. attribute-evolving, homogeneous vs. heterogeneous) but also according to the learning setting (transductive vs. inductive). The second contribution is an **exhaustive review of existing DGNN models**, including the most recent ones. For this review, we choose to categorize models into five groups, according to the strategy used to incorporate time information in the model, which is the main challenge for the application of neural networks on DGs. Based on this categorisation, and using the taxonomy of contexts mentioned above, the third contribution is to provide some **general guidelines for a DGNN designer when faced with a dynamic graph learning problem** and to describe different methods for optimising DGNN performance.
The remainder of this paper is structured as follows. Section 2 relates to the first contribution, by considering the inputs, the outputs and the learning settings that can be encountered when learning on dynamic graphs. Section 3 reviews existing Dynamic Graph Neural Networks (DGNNs) and compares them according to their temporal information processing. Finally, section 4 brings forward the guidelines for designing DGNNs and discusses some optimisation methods.
## 2 Dynamic Graph Representation Learning
Regardless of the data representation, the goal of supervised learning methods is to build a parameterized statistical model or predictor \(g_{\Theta}\) that maps between an input space \(\mathcal{X}\) and an output space \(\mathcal{Y}\) (see Fig. 1). During the _learning_ phase, the training of the predictor \(g_{\Theta}\) consists in updating its parameters \(\Theta\) using a dataset of couples \((x,y)\). The update is computed by a minimization of the loss between the prediction \(\hat{\mathbf{Y}}\) and the ground truth \(\mathbf{Y}\).
Many recent models follow the encoder/decoder principle, where a variable-length input signal is encoded into a latent representation, which is then used by a decoder to compute the output signal for the downstream task (see Fig. 2). The fixed-size latent representation allows alignment between variable-size input and output signals that are not necessarily synchronised, i.e. the units of the input and output sequences may have a different order. It is of great interest in many sequence-to-sequence problems involving text, images or speech. Learning this latent representation (also known as an _embedding_) is called _representation learning_.
Sequences are signals in which information varies according to one or more dimensions that generally define a position in a structure. This structure may have one or multiple dimensions: time for speech, 1D position for text, 2D position for images, etc. In the case of dynamic graphs, the information varies according to the position too, but the structure can also evolve along time. In this section, we propose the concept of _degree of dynamism_ that defines the nature of the variability. Since the degree of dynamism of DGs is higher than that of sequences, the encoder/decoder framework provides a very suitable framework for learning DG representations.
In this section, we define important concepts specific to dynamic graphs, giving the necessary material for understanding the review of the dynamic graph embedding problem presented in section 3. The section is structured as follows (see Fig. 2): after giving useful definitions about static graphs in subsection 2.1, we present the representation of dynamic graphs in subsection 2.2. We then present
Figure 1: Left: inference phase for making predictions \(\hat{\mathbf{Y}}\) on given data \(\mathbf{X}\). Right: learning phase for updating the parameters \(\Theta\) of the predictor \(g\).
Figure 2: Encoder/decoder model applied on dynamic graphs: the encoding consists in computing \(\mathbf{Z}=f(\text{DG})\), where DG is a dynamic graph (including both topology and attributes), \(f(\cdot)\) is a parameterized statistical model (typically a neural networks with learnable parameters), and \(\mathbf{Z}\) is the encoded tensor representation of DG. The decoder \(f_{dec}(\cdot)\) takes as input the representations \(\mathbf{Z}\) to get the predictions \(\hat{\mathbf{Y}}\).
the output shape and the transductive/inductive nature of learning tasks in subsections 2.3 and 2.4. Finally, we introduce the related applications in each learning setting in subsection 2.5.
### Static Graph Modeling
A static graph \(G\) can be represented topologically by a tuple \((V,E)\) where \(V\) is the node set of \(G\) and \(E\) is the edge set of \(G\). The connectivity information is usually represented by an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|V|\times|V|}\). In this matrix, \(A(u,v)=1\) if there is an edge between node \(u\) and \(v\), \(A(u,v)=0\) otherwise. \(\mathbf{A}\) is symmetric in an undirected graph, while in the case of directed graphs \(\mathbf{A}\) is not necessarily symmetric.
Nodes usually have attributes represented by a feature matrix \(\mathbf{X}_{V}\in\mathbb{R}^{|V|\times d_{V}}\), where \(d_{V}\) is the length of the attribute vector of a single node. Similarly, edges may have attributes (such as weights, directions, etc.) which can also be represented by a matrix \(\mathbf{X}_{E}\in\mathbb{R}^{|E|\times d_{E}}\).
In the case of weighted graphs, the values in the matrix \(\mathbf{A}\) are the weights of each edge instead of \(1\) which is denoted as:
\[\mathbf{A}_{u,v}=\left\{\begin{array}{rl}w_{u,v}&\text{if }(u,v)\in E\\ 0&\text{otherwise.}\end{array}\right.\]
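A small illustrative sketch (in Python/NumPy, with made-up data) of these adjacency and feature matrices is given below.

```python
import numpy as np

V = ["a", "b", "c"]
E = [("a", "b", 0.5), ("b", "c", 2.0)]        # weighted, undirected edges
idx = {v: i for i, v in enumerate(V)}

A = np.zeros((len(V), len(V)))
for u, v, w in E:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = w # symmetric since undirected

X_V = np.random.randn(len(V), 4)              # node features, d_V = 4
X_E = np.array([[w] for *_, w in E])          # edge features, d_E = 1
```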
For some specific applications, both nodes and edges can be of different types. For example, in recommender systems, nodes can usually be mapped into two types, item and user, and have feature matrices of different sizes and contents. In such cases, we extend the notation of the graph \(G=(V,E)\) with the type mapping functions \(\phi:V\to O\) and \(\psi:E\to R\), where \(|O|\) denotes the number of possible node types and \(|R|\) the number of possible edge types [1]. When \(|O|=1\) and \(|R|=1\), nodes and edges are of a single type, and the graph is called homogeneous. In contrast, in a heterogeneous graph, \(|O|+|R|>2\) and each type of node or edge can have its own number of feature dimensions [1, 2].
Figure 3 illustrated the difference between homogeneous and heterogeneous graphs.
### Dynamic Graph Modeling
Definition 1 (Dynamic Graph): A dynamic graph is a graph whose topology and/or attributes change over time.
According to this definition, both structure and attributes may change over time in a dynamic graph. Edges and/or nodes may be added or deleted and their attributes may change. Thus, this definition covers different configurations. In order to distinguish between them, we propose the concept of _degree of dynamism defined as follows:
Definition 2 (Degree of Dynamism (Node-Centric)): The _degree of dynamism_ of a DG describes whether the topology, i.e. the edge set \(E\) and the node set \(V\), are invariant. Theoretically, there are 4 possible situations: (1) Both \(V\) and \(E\) are invariant, denoted as \(fix_{V,E}\). This situation corresponds to DGs called "Spatial-Temporal Graphs" or "Spatio-Temporal Graphs" (STGs) in the literature. (2) \(V\) is invariant but the edge set is changing, denoted as \(fix_{V}\). (3) The node set and the edge set are both changing, denoted as \(vary\). (4) The set of edges is constant but the set of nodes changes. Since an edge exists based on a tuple of nodes, this situation is meaningless.
Table 1 illustrates these different configurations, which will be discussed throughout the paper.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & & \multicolumn{2}{c}{Node Set} \\ \cline{3-4} & & Constant & Variant \\ \hline \multirow{2}{*}{Edge Set} & Constant & \(fix_{V,E}\) & N/A \\ \cline{2-4} & Variant & \(fix_{V}\) & \(vary\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Degree of Dynamism discussed in the article.
Figure 3: Left: Common static graph representations. The edges in a directed weighted graph have directions and weights, which are frequently used when modelling email networks, citation networks, etc. The nodes and edges in a heterogeneous graph can have multiple possible types i.e., recommendation systems and knowledge graphs. Middle: Discrete Time Dynamic Graph (DTDG) represented by snapshots, and Continuous Time Dynamic Graph (CTDG) represented by events, the example in the figure is the ”Contact Sequence” case. Right: Equivalent Static Graphs (ESGs) represented by edges and nodes.
In the following, we provide the main existing dynamic graph representations for DG, synthesizing previous studies that focus on subparts of these representations [158, 8, 9, 6, 94, 154]. We also introduce in this part a representation called Equivalent-Static-Graph which consists in modeling a DG using a static graph. The different models are compared in Fig. 3.
#### 2.0.1 Continuous Time Dynamic Graphs
To preserve accurate time information, Continuous Time Dynamic Graphs (CTDGs) use a set of events to represent dynamic graphs. Existing work [6] mainly focuses on the dynamics of edges and outlines three typical representation methods that represent events with index \(i=1,2,\ldots\) by giving a pair of nodes \((u_{i},v_{i})\) and time \(t_{i}\):
_Contact-Sequence_[8, 6] is used to represent the instantaneous interaction between two nodes \((u,v)\) at time \(t\).
\[\textit{Contact-Sequence}=\{(u_{i},v_{i},t_{i})\} \tag{1}\]
_Event-Based_[6, 16] dynamic graphs represent edges with a time \(t_{i}\) and duration \(\Delta_{i}\). They are similar to the _Interval-Graph_ defined in [8]. The difference is that _Interval-Graph_ uses a set \(T_{e}\) of start and end times \((t_{i},t_{i}^{\prime})\) to represent all active times of the edge, rather than the duration of each interaction in _Event-Based_.
\[\textit{Event-Based}=\{(u_{i},v_{i},t_{i},\Delta_{i})\} \tag{2}\]
\[\textit{Interval-Graph}=\{(u_{i},v_{i},T_{e});T_{e}=((t_{1},t_{1}^{\prime}),(t_ {2},t_{2}^{\prime}),...)\} \tag{3}\]
_Graph Stream_[6] is often used on massive graphs [11, 10]. It focuses on edges' addition (\(\delta_{i}=1\)) or deletion (\(\delta_{i}=-1\)).
\[\textit{Graph Stream}=\{(u_{i},v_{i},t_{i},\delta_{i})\} \tag{4}\]
#### 2.0.2 Discrete Time Dynamic Graphs
Discrete-time dynamic graphs (DTDGs) can be viewed as a sequence of \(T\) static graphs as shown in Eqn. 5. They are snapshots of the dynamic graph at different moments or time-windows. DTDGs can be obtained by periodically taking snapshots of CTDGs on the time axis [9, 6].
\[\textit{DTDG}=(G^{1},G^{2},...,G^{T}) \tag{5}\]
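The snapshot construction mentioned above can be sketched as follows; the window length and the contacts in the usage example are illustrative.

```python
import numpy as np

def contacts_to_snapshots(contacts, n_nodes, window):
    """Bin a Contact-Sequence {(u, v, t)} into adjacency snapshots, Eq. (5)."""
    t_max = max(t for _, _, t in contacts)
    T = int(t_max // window) + 1
    snaps = np.zeros((T, n_nodes, n_nodes))
    for u, v, t in contacts:
        k = int(t // window)
        snaps[k, u, v] = snaps[k, v, u] = 1   # undirected contact
    return snaps

dtdg = contacts_to_snapshots([(0, 1, 0.2), (1, 2, 0.8), (0, 2, 1.5)],
                             n_nodes=3, window=1.0)   # shape (2, 3, 3)
```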
#### 2.0.3 Equivalent Static Graphs
Representations of this category consist in constructing a single static graph, called Equivalent Static Graph (_ESG_), for representing a dynamic graph. Several methods for constructing _ESG_ have been proposed in recent years. We divide them into two categories: _edge-oriented_ and _node-oriented ESG_.
_Edge-oriented ESG_ aggregates graph sequences into a static graph with time information encoded as sequences of attributes [162, 8, 15] as shown in Fig. 3
(right top). Such representation is also called _time-then-graph_ representation [154].
_Node-oriented ESG_ builds copies of vertices at each moment of their occurrence and defines how the nodes are connected between timestamps/occurrences [162, 161, 160, 23]. A simple example is shown in Fig. 3 (right bottom), further details are discussed in subsection 3.1.
A major interest of ESG representations is that they make static graph algorithms available for learning on dynamic graphs.
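As an illustration, a minimal sketch of an edge-oriented ESG built from a Contact-Sequence is given below; aggregating each edge's activation times into a list is one simple instantiation of this idea.

```python
from collections import defaultdict

def edge_oriented_esg(contacts):
    """Aggregate a Contact-Sequence into a single static graph whose edge
    attribute is the list of activation times (time-then-graph style)."""
    times = defaultdict(list)
    for u, v, t in contacts:
        times[frozenset((u, v))].append(t)
    return dict(times)

esg = edge_oriented_esg([(0, 1, 0.2), (0, 1, 1.4), (1, 2, 0.8)])
# {frozenset({0, 1}): [0.2, 1.4], frozenset({1, 2}): [0.8]}
```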
### Dynamic Graph Output Granularity
As said before, supervised learning aims to learn a mapping function between an input space \(\mathcal{X}\) and an output space \(\mathcal{Y}\). The previous subsection introduces the space in which a DG \(x\in\mathcal{X}\) can be represented. This subsection now discusses the space \(\mathcal{Y}\).
Since a dynamic graph involves concepts from both graph and temporal/sequential data, both aspects must be considered to define the space \(\mathcal{Y}\).
When learning a statistical model on sequential data, the input \(\mathbf{X}\) can be represented by the \(d\)-dimensional features at \(T\) time steps, denoted as \(\mathbf{X}\in\mathbb{R}^{T\times d}\). The granularity of the outputs \(\mathbf{Y}\) can be either timestep-level (one output per time step, as in part-of-speech tagging or more generally sequence labelling), or aggregated (one output for many inputs, as in sentiment classification or more generally sequence classification).
In static graph learning, the inputs have topological information in addition to the features \(\mathbf{X}_{V}\) and \(\mathbf{X}_{E}\). As for sequences, the outputs \(\mathbf{Y}\) can be local (one label per node or per edge) or global (one label per subgraph or for the entire graph).
As a consequence, the output granularity when considering DG can be temporally timestep-level or aggregated, and topologically local or global.
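These four output granularities can be illustrated by pooling a per-node, per-timestep embedding tensor; the sketch below uses mean pooling purely as an example, other aggregations being possible.

```python
import torch

T, N, d = 6, 10, 16
Z = torch.randn(T, N, d)                  # per-node, per-timestep embeddings

local_timestep = Z                        # e.g. traffic flow per node per step
local_aggregated = Z.mean(dim=0)          # one vector per node for the whole DG
global_timestep = Z.mean(dim=1)           # one vector per snapshot
global_aggregated = Z.mean(dim=(0, 1))    # one vector for the entire DG
```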
### Transductive/Inductive on Dynamic Graphs
When learning on static graphs, transductive and inductive tasks are frequently distinguished. Transductive tasks consist in taking decisions for a set \(V_{inference}\) of unlabeled nodes, the model being learnt on a set of labeled nodes \(V_{learning}\) of the same graph. In such a situation, the features and the neighborhood of nodes in \(V_{inference}\) can be exploited by the learning algorithm in an unsupervised way. That is why this situation is also called semi-supervised learning on graphs [155]. In contrast, an inductive task is the case where there exist nodes in \(V_{inference}\) that have not been seen during the learning phase [46]. This situation typically occurs when learning and inference are performed on different static graphs.
In the case of dynamic graphs, the evolving nature of both \(V\) and \(E\) brings more possible scenarios when considering the transductive/inductive nature of the tasks. Various configurations should be defined, considering the "degree of
dynamism" defined in section 2. We use the term "case" to distinguish between different transductive/inductive natures of dynamic graph learning taking into account the degree of dynamism.
Specifically, we divide learning on dynamic graphs into five cases:
* _Trans-fix\({}_{V,E}\)_ (1) : in this case, the topology of the DG is fixed, \(V\) and \(E\) are therefore the same on the learning and inference sets.
* _Trans-fix\({}_{V}\)_ (2) : in this case, the learning and inference node sets are fixed and equal but the edge set evolves, which requires taking into account the evolving connectivity between nodes on the graph.
* _Trans-vary_ (3) : in this case, \(V_{learning}^{t}\) and \(V_{inference}^{t}\) may evolve over time, but the presence of each node to be predicted in the test set is already determined in the learning phase: \(\forall v\in V_{inference},v\in DG_{learning}\).
* _Ind\({}_{V}\)_ (4) : this case refers to node-level inductive tasks, where the learning and inference are on the same DG, but \(\exists v\in V_{inference},v\notin DG_{learning}\). Although the label and historical attributes from the learning phase can be reused on the inference set, a major challenge is to handle the unseen nodes and the uncertain number of nodes.
* _Ind\({}_{DG}\)_ (5) : this case refers to DG-level inductive learning, where learning and inference are performed on different DGs. The statistical model needs to handle the complete unseen dynamic graphs.
These 5 cases are illustrated in Fig. 4.
Figure 4: The transductive and inductive cases in dynamic graph learning under discrete time: (1) denotes the case where both the node set and edge set are fixed in learning and inference. (2) denotes the case where only the node set is fixed for learning and inference. (3) denotes the case where the node set changes but no unseen node appears for inference. (4) denotes the inductive case where the train and test are on the same DG but there are new nodes for inference. (5) denotes the inductive case where there are new DGs for inference.
### Dynamic Graph Predictive Tasks
In the previous subsections, we have investigated dynamic graph learning specificities related to the representations of the input (discrete vs. continuous time), the granularity of the output (local vs. global, timestep-based vs. aggregated), and the different learning _cases_. In this section, we categorize existing contributions in the literature according to these four criteria.
Table 2 gives a synthetic view of this categorization with an emphasis on the targeted applications and on the metrics used for assessing models performance.
As one can see in this table, in **Discrete Time Transductive Tasks**, related applications usually concern relatively stationary topologies, i.e. _Trans-fix\({}_{V,E}\)_, such as human body structure or geographic connectivity. Some typical node-level/local tasks predict attributes for the next time step(s) based on past time step(s), such as traffic flow [94, 80, 83, 84, 73], number of infectious disease cases [69, 77, 67], number of crimes [70] and crop yields [91]. Graph-level/global tasks either retrieve the class of each snapshot, like sleep stage classification [95], or output a prediction for the entire DG, such as the emotion of a skeletal STG [71]. When the DG has a fixed set of nodes with evolving edges, i.e. _Trans-fix\({}_{V}\)_, such _cases_ can be employed for modelling the connectivity in a telecommunication network [100] or the contact of individuals in a conference [124].
In **Discrete Time Inductive Tasks**, i.e. when unseen nodes need to be predicted in discrete time, i.e. _Ind\({}_{V}\)_, some typical tasks concern node classification [96] or link prediction [96, 124, 97] for future snapshots in social networks. An example of _Ind\({}_{DG}\)_ with graph-level output is classifying real and fake news based on the snapshots of its propagation tree on the social network [86, 76, 68].
Dynamic graphs in **Continuous Time** are widely used to model massive dynamic graphs that frequently have new events, such as recommendation systems [85, 82, 92] or social networks, in the transductive _cases_ [110, 121] or inductive _cases_ [108, 96, 109, 113, 106]. Local tasks predict the properties and interactions of seen or unseen nodes, under transductive and inductive _cases_, respectively. Since CTDGs have no access to global/entire graph information under their minimum time unit, the global timestep-level label is meaningless under continuous time. However, global aggregated tasks can be implemented by aggregating nodes of different time steps, such as rumour detection in continuous time [107]. Since a CTDG can be transformed into a DTDG by periodically taking snapshots [9, 6], the above tasks can also be considered as tasks under discrete time with a lower time resolution.
To evaluate the performance of a statistical model on a given task, traditional machine learning metrics are employed as shown in table 2. When the output predictions are discrete values, i.e. for **classification** task [108, 96, 109, 107, 106], common metrics include accuracy, precision, recall, F1, and area under the receiver operating characteristic (AUROC). When the output values are continuous values, i.e. for **regression** task [94, 80, 83], common metrics are mean absolute (percentage) error, root mean square (log) error, correlation, and R squared. **Node ranking** tasks [82, 92] predict a score for each node and then sort them. These tasks can be evaluated by the reciprocal rank, recall@N, cumulative gain
and their variants. Note that dynamic tasks are generally evaluated using static metrics computed along the time axis.
## 3 Dynamic Graph Embedding with Neural Networks
In the previous section, we introduced various DG predictive task settings and categorized literature contributions according to these settings. In this section, we take the model point of view, describing how DGNNs embed dynamic graphs into informative vectors for subsequent predictions.
We first introduce the general idea of dynamic graph embedding in the context of the different learning tasks mentioned in subsection 2.5. We then dive into different embedding approaches, by categorizing them according to the strategy used for handling both temporal and structural information. Finally, we present the methods for handling heterogeneous dynamic graphs.
From an encoder-decoder perspective, a deep learning statistical model first maps the original input into embeddings denoted as \(\mathbf{Z}\), and then exploits \(\mathbf{Z}\) to predict an output [12, 4]. Graphs can be embedded either at node/edge-level or at (sub)graph-level [13, 5]. Node-level embedding benefits a wide range of node-related tasks and allows more complete input information to be retained for later computation [5]. Similarly, when learning on sequential data, time-step level embedding retains more information than time-aggregated embedding [7].
As a consequence, embedding a dynamic graph at its finest granularity consists in computing a \(d\)-dimensional vector representation \(\mathbf{z}_{v}^{t}\in\mathbb{R}^{d}\) for each node \(v\in V\), at all time steps \(t\in T\). In this case, the embedding of the dynamic graph is given by \(\mathbf{Z}\in\mathbb{R}^{|V|\times|T|\times d}\), as shown in Fig. 5.
However, the different input time granularity and learning settings mentioned in the previous section do not always enable such an "ideal" embedding \(\mathbf{Z}\). In this subsection, we generalise the practicable embeddings under these different settings as shown in Fig. 6 and Tab. 3.
For discrete time transductive settings, when the node set is constant (i.e. for _cases_ (1) [80, 83, 71] and (2) [96, 97, 88]), the input nodes can be encoded at the finest granularity \(\mathbf{Z}\in\mathbb{R}^{|V|\times|T|\times d}\) since all the nodes are known during learning, as shown in Fig. 5. When the DTDG node set changes across snapshots in a transductive task (i.e. for _case_ (3) [93, 78, 102]), the nodes can still be encoded in the shape of \(|V|\times|T|\times d\), where \(|V|\) denotes the cardinal of the universal node set [78], by filling the missing values. An example of filling the missing values with \(\mathbf{0}\) vectors [93] is shown in Fig. 6 (A) for node C at \(t_{1}\) and node A at \(t_{3}\).

Figure 5: The most fine-grained node embedding \(\mathbf{Z}\in\mathbb{R}^{|V|\times|T|\times d}\), where \(|V|\) is the number of nodes, \(|T|\) is the number of timesteps, and \(d\) is the dimension of the embedding \(z_{v}^{t}\) of a single node \(v\) at a single timestep \(t\).
For discrete time inductive settings (i.e. for _cases_ (4) \(Ind_{V}\) [103, 105] and (5) \(Ind_{DG}\) [86, 76, 68]), the predictor cannot determine the existence of a node until it appears. This case is illustrated in Fig. 6 (B), where the predictor cannot determine the existence of node C until it appears at \(t_{4}\). Hence, \(|V^{t}|\) can vary at each timestep \(t\). Therefore, the embedding of nodes in the inference set cannot be represented in the shape of \(|V|\times|T|\times d\). In this situation, one can use a list to store the representations of all accessible time steps for each node seen [105], e.g. \(\mathbf{Z}=\left\{\mathbf{Z}^{1},\mathbf{Z}^{2},\ldots,\mathbf{Z}^{T}\right\}\) with \(\mathbf{Z}^{t}\in\mathbb{R}^{|V^{t}|\times d}\) for \(t\in\{1,\ldots,T\}\).
In continuous time, there is no longer a time grid, as shown in Fig. 6 (C & D). Therefore, there are no longer embedding updates for all nodes at each time step. Instead, when an event occurs on the CTDG, either the embeddings of the associated nodes are updated or the embedding(s) of the unseen node(s) are added [108, 109].
Figure 6: The available most fine-grained embedding under different dynamic graph learning settings.
To informatively encode dynamic graphs into tensors or a list of vectors, a DGNN must capture both the structure information and its evolution over time. Therefore, to handle topology and time respectively, DGs are often decomposed or transformed into components like equivalent static (sub)graphs [74, 89, 75], random walks [113, 124, 123, 121, 122], or sequences of matrices [95, 86, 93]. In the literature, numerous approaches have emerged by combining different encoders \(f_{G}(\cdot)\) for static graphs with \(f_{T}(\cdot)\) for temporal data. A plethora of graph and temporal data encoders have been at the root of the DG encoders reviewed in this section; these encoders are described in appendices B and C.
In the following subsections, we present a taxonomy of DGNN models which relies on five categories. Our categorization, illustrated in Figure 7, is based on the strategy for handling both temporal and structural information.
1. Modelling temporal edges and encoding _ESG via topology_, denoted as \(TE\) (Section 3.1).
2. Sequentially encoding the hidden states, denoted as \(enc(\mathbf{H})\) (Section 3.2).
3. Sequentially encoding the DGNN parameters, denoted as \(enc(\Theta)\) (Section 3.3).
4. Embedding occurrence time \(t\) as edge feature of _ESG via attributes_, denoted as \(emb(t)\) (Section 3.4).
5. Sampling causal walks, denoted as \(CausalRW\) (Section 3.5).
Note that these five approaches are not exclusive, i.e. they can be combined and used on the same DG.
### Temporal Edge Modeling
Since applying convolution on a static graph is generally easier than encoding across multiple snapshots, the DG encoding problem is frequently transformed into encoding a static graph where each node is connected to itself in the adjacent snapshot [74, 89]. This approach can also be interpreted as constructing a _time-expanded graph_[162, 161, 160] or _node-oriented ESG_ (see section 2.2.4) and is widely used to encode _Trans-fix\({}_{V,E}\)_ cases, i.e. STGs. In more complex configurations, nodes are also connected with their k-hop neighbours in the adjacent snapshot(s) [69].
A simple example of such a strategy is shown in fig. 8. An equivalent static graph \(G^{\prime}=\{V^{\prime},E^{\prime}_{S},E^{\prime}_{T}\}\) is obtained by modelling temporal edges [74]. \(G^{\prime}\) has \(|V^{\prime}|=|V|\times T\) nodes and \(|E^{\prime}_{S}|=|E|\times T\) spatial edges. Depending on the modelling approach, the number of temporal edges is at least \(|E^{\prime}_{T}|=|V|\times(T-1)\) and can be greater.

\begin{table}
\begin{tabular}{|c|c|c|} \hline DG Learning Case & (1) Trans-\(fix_{V,E}\), (2) Trans-\(fix_{V}\), (3) Trans-\(vary\) & (4) Ind\({}_{V}\), (5) Ind\({}_{DG}\) \\ \hline Discrete Time & (A) & (B) \\ \hline Continuous Time & (C) & (D) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The available finest embedding granularity under various dynamic graph learning cases.
Once the connection rule for temporal edges is defined, the traditional convolution for static graphs is applicable to ESGs. A state-of-the-art example is the **ST-GCN** module [89]. To update the hidden states \(h\) to \(h^{\prime}\) in an ST-GCN layer (see Eqn. 6), a typical spatial GNN structure aggregates the neighbour features with the \(\text{msg}(\cdot)\) function, computes their weights with the \(\text{w}(\cdot)\) function, and then sums them after normalisation with the \(\text{norm}(\cdot)\) function. Note that, unlike the neighbourhood definition in static graphs, the neighbourhood set \(\mathbf{N}\) of \(v^{t}_{i}\) is defined such that: (1) spatially, the shortest path distance to neighbour \(v_{j}\) obeys \(d(v_{j},v_{i})\leq K\), and (2) temporally, the time difference between timestamps \(q\) and \(t\), i.e. \(|q-t|\), is not greater than \(\lfloor\Gamma/2\rfloor\).
Figure 8: Comparing snapshot representation (left) and STG with temporal edges (middle) [74]. The edges in blue indicate temporal edges. In ST-GCN module (right) [89], temporal edges also connect nodes with their K-hop neighbours in the adjacent \(\Gamma/2\) snapshot(s), thus the light green area shows the neighbourhood of the orange node while applying spatial-temporal graph convolution with K=1 and \(\Gamma\)=3.
Figure 7: Dynamic graph neural network taxonomy on temporal information processing: 1. **Temporal Edge Modeling** models temporal edges to transform STGs into static graphs. 2. **Sequentially Encoding Hidden States H** encodes the hidden states of each snapshot across time with a temporal encoder \(f_{T}(\cdot)\). 3. **Sequentially Encoding Parameters \(\Theta\)** encodes the parameters \(\Theta\) of the graph encoder \(f_{G}(\cdot)\) across time with a temporal encoder \(f_{T}(\cdot)\). 4. **Embedding Time \(t\)** converts time values to vectors and concatenates or adds them to the attribute vectors when encoding node B. 5. **Causal Walks** restricts the random walks on dynamic graphs by causality.
\[h_{i}^{\prime t} = \sum_{v_{j}^{q}\in\mathbf{N}_{v_{i}^{t}}}\mathbf{norm}\left(\mathbf{msg}(h_{i}^{t},h_{j}^{q})\cdot\mathbf{w}(h_{i}^{t},h_{j}^{q})\right)\] \[\mathbf{N}_{v_{i}^{t}} = \left\{v_{j}^{q}\,|\,d(v_{j}^{t},v_{i}^{t})\leq K,\,|q-t|\leq\left\lfloor\Gamma/2\right\rfloor\right\} \tag{6}\]
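As a rough illustration of this neighbourhood definition, the following Python sketch (function name, adjacency format, and the clipping of the temporal window to the observed range are our own assumptions) enumerates \(\mathbf{N}_{v_{i}^{t}}\) on an STG whose spatial topology is fixed:

```python
from collections import deque

def st_neighbourhood(adj, i, t, K, gamma, T):
    """adj: dict mapping each node to its set of spatial neighbours (fixed STG
    topology). Returns the spatio-temporal neighbourhood of node i at time t."""
    dist, frontier = {i: 0}, deque([i])      # BFS up to K hops: d(v_j, v_i) <= K
    while frontier:
        u = frontier.popleft()
        if dist[u] == K:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                frontier.append(w)
    half = gamma // 2                        # temporal window |q - t| <= floor(gamma/2)
    return {(j, q) for j in dist
            for q in range(max(0, t - half), min(T - 1, t + half) + 1)}
```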
### Sequentially Encoding Hidden States H
In DTDGs, there are usually additions/deletions of edges or nodes [96, 100, 124, 97]. To deal with these topological changes, this category denoted as \(enc(\mathbf{H})\) uses \(f_{G}(\cdot)\) and \(f_{T}(\cdot)\) to encode the graph and time domains alternatively. \(Enc(\mathbf{H})\) is widely applied to the Trans-\(fix_{V,E}\)_case_, i.e. STGs [71, 94, 80, 83, 84, 73, 91] and the Trans-\(fix_{V}\)_cases_ on DTDGs [96, 100, 124, 97]. \(f_{T}(\cdot)\) either encodes each snapshot across time after \(f_{G}(\cdot)\) encodes each snapshot [93, 78, 79], i.e., in a stacked way [6], or incorporates graph convolution when encoding each snapshot across time [79, 88, 83], i.e., in an integrated way [6].
When the input is an STG, rather than processing them as static graphs, this approach factorises space and time and processes them differently [80, 94, 98, 99, 91, 77, 81, 95, 83]. **RSTG**[98] and **DyReG**[99] first encode each snapshot at node-level with weighted message passing as \(f_{G}(\cdot)\), and then encode each node over time using LSTM or GRU as \(f_{T}(\cdot)\). Both approaches are of the stacked fashion.
If the node set \(V_{t}\) of a DTDG is constant, then this DTDG is equivalent to an STG, except for an additional change in the edge set \(E_{t}\), i.e. the Trans-\(fix_{V}\) _case_. Therefore, the stacked architecture mentioned in the previous paragraph is equally practicable. A typical example is **CD-GCN**[93], which concatenates the attributes and the adjacency matrix for each snapshot \(t\) to form the input \(X^{t}||A^{t}\). Each snapshot is first encoded by a GCN to obtain the node-level topological hidden states \(\mathbf{z}_{i,\textit{GCN}}^{t}\), and then encoded by an LSTM in the time dimension to obtain \(\mathbf{z}_{i,\textit{LSTM}}^{t}\). Finally, an MLP maps the concatenation of the hidden states and raw features to the final node-level hidden states \(\mathbf{z}_{i}^{t}\) for each time step. Similar structures which stack a temporal encoder \(f_{T}(\cdot)\) after a graph encoder \(f_{G}(\cdot)\) are **STGCN**[80], **GraphSleepNet**[95], **E-LSTM-D**[90], **Graph WaveNet**[94], and **GRNN**[84]. A simple example is shown in Eqn. 7; in fact, they can be stacked in more complex ways (see Tab. 4 for more details).
\[\mathbf{Z}^{t}=f_{T}\left(f_{G}(\mathbf{X}^{t})\right)\text{, or }f_{T}(\mathbf{X}^{t})\oplus f_{G}(\mathbf{X}^{t}) \tag{7}\]
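To make the stacked pattern \(f_{T}(f_{G}(\cdot))\) of Eqn. 7 concrete, here is a minimal PyTorch sketch under our own assumptions (a single graph-convolution layer as \(f_{G}\), an LSTM as \(f_{T}\), and dense normalised adjacency matrices); it is one of many possible instantiations, not the implementation of any cited model:

```python
import torch
import torch.nn as nn

class StackedSnapshotEncoder(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.W = nn.Linear(d_in, d_hid)       # f_G: one graph-convolution layer
        self.lstm = nn.LSTM(d_hid, d_hid, batch_first=True)  # f_T over time

    def forward(self, A, X):
        # A: (T, N, N) normalised adjacency per snapshot, X: (T, N, d_in)
        H = torch.relu(A @ self.W(X))         # encode each snapshot: (T, N, d_hid)
        H = H.permute(1, 0, 2)                # -> (N, T, d_hid): one sequence per node
        Z, _ = self.lstm(H)                   # encode each node across time
        return Z                              # (N, T, d_hid), i.e. z_v^t for all v, t
```

In the integrated mode discussed next, the linear maps inside the recurrent cell would instead be replaced by graph convolutions.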
Another strategy for sequentially encoding the hidden states incorporates \(f_{G}(\cdot)\) into \(f_{T}(\cdot)\) rather than stacking them. Since there are usually projection or convolution modules in \(f_{T}(\cdot)\) to handle the features of nodes, this "integrated mode" turns these modules into \(f_{G}(\cdot)\) to aggregate neighbouring features. **GCRN-M2**[79] replaces the 2D convolution in convLSTM with graph convolution. Similar examples are **GC-LSTM**[88] and **DCRNN**[83], which replace the linear layer in LSTM and GRU with GCN [41] and diffusion convolution [25], respectively.
To deal with the addition/deletion of nodes in a transductive setting, i.e. the _Trans-vary_ _case_, one needs to handle \(|V_{t}|\), which may change across snapshots. **TNDCN**[78] proposes to set a universal node set \(V=\cup V_{t}\) to ensure that \(|V|\) is the same for each snapshot, which transforms the transductive _case_ into a node set-invariant _case_ on the DTDG.
In the inductive case, one cannot presume \(|V|\) to set the universal node set, which brings about an inconsistent number of nodes in each snapshot making \(f_{T}(\cdot)\) impossible to encode at the node level. Therefore, this method in inductive tasks is only applicable for encoding graph-level representations across time, e.g. fake news detection based on its propagation tree. **Dyn-GCN**[86] applies Bi-GCN [87] to encode the hidden states \(\mathbf{z}_{G_{t}}\) for each snapshot \(t\) by aggregating the hidden states of its nodes and edges, and passes \((\mathbf{z}_{G_{1}},\mathbf{z}_{G_{2}},\ldots,\mathbf{z}_{G_{T}})\) into an attention layer to compute the final hidden states of the entire DG.
### Sequentially Encoding Parameters \(\boldsymbol{\Theta}\)
Although \(enc(\mathbf{H})\) is relatively intuitive and simple in terms of model structure, it can neither handle frequent changes of the node set, especially in inductive tasks, nor pass the learned parameters of \(f_{G}(\cdot)\) across time steps [103]. To encode DTDGs more flexibly, some other approaches constrain or encode the parameters of \(f_{G}(\cdot)\) across time steps [104, 105, 103].

\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Model & Task case & Structure & Graph & Temporal & Embedded & Ref. \\ & & & Encoding & Encoding & Object & \\ \hline DyReg & Trans-fix\({}_{V,E}\) & stacked & GNN & GRU & Node & [99] \\ RSTG & Trans-fix\({}_{V,E}\) & stacked & GNN & LSTM & Node & [98] \\ STAGIN & Trans-fix\({}_{V,E}\) & stacked & GIN & Transformer & Graph & [81] \\ GraphSleepNet & Trans-fix\({}_{V,E}\) & stacked & GNN & TCN & Graph & [95] \\ GNN-RNN & Trans-fix\({}_{V,E}\) & stacked & GNN & LSTM & Node & [91] \\ Graph WaveNet & Trans-fix\({}_{V,E}\) & stacked & GNN & TCN & Node & [94] \\ STGCN & Trans-fix\({}_{V,E}\) & stacked & GNN & TCN & Node & [80] \\ DCRNN, GCRNN & Trans-fix\({}_{V,E}\) & integrated & GNN & GRU & Node & [83] \\ USSTN & Trans-fix\({}_{V,E}\) & stacked & GNN & MLP & Node & [77] \\ GRNN & Trans-fix\({}_{V,E}\) & stacked & GNN & Gated RNN & Node & [84] \\ GCRN-M1 & Trans-fix\({}_{V,E}\) & stacked & GNN & LSTM & Node & [79] \\ GCRN-M2 & Trans-fix\({}_{V,E}\) & integrated & GNN & LSTM & Node & [79] \\ \hline tNodeEmbed & Trans-fix\({}_{V}\) & stacked & RW & OP \& LSTM & Node & [96] \\ DynSEM & Trans-fix\({}_{V}\) & stacked & RW & OP & Node & [97] \\ GC-LSTM & Trans-fix\({}_{V}\) & integrated & GNN & LSTM & Node & [88] \\ LRGCN & Trans-fix\({}_{V}\) & stacked & GNN & LSTM & Path (set of edges) & [100] \\ E-LSTM-D & Trans-fix\({}_{V}\) & stacked & AE & LSTM & Node & [90] \\ WD-GCN, CD-GCN & Trans-vary & stacked & GNN & LSTM & Node & [93] \\ TNDCN & Trans-vary & stacked & GNN & TCN & Node & [78] \\ ANOMULY & Trans-vary & stacked & GNN & GRU & Node & [102] \\ \hline Dyn-GCN & Ind\({}_{DG}\) & stacked & GNN & Attention & Graph & [86] \\ DDGCN & Ind\({}_{DG}\) & stacked & GNN & Temporal Fusion & Graph & [76] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Main components of selected (DT)DGNNs which sequentially encode hidden states. Without specification, GNN refers to graph convolution or message passing, OP refers to Orthogonal Procrustes [26], TCN refers to temporal convolution and its variants, Attention refers to the attention mechanism [35], AE refers to Autoencoder and its variants, and RW refers to Random Walk-based approaches [55, 53, 54].
In order to encode the parameters \(\Theta\) of GCN, **EvolveGCN**[103] proposes to use LSTM or GRU to update the parameters of the GCN model at each time step as shown in eqn. 8 and eqn. 9:
\[\Theta_{f_{G}}^{t}=\text{LSTM}(\Theta_{f_{G}}^{t-1}) \tag{8}\]
\[\Theta_{f_{G}}^{t}=\text{GRU}(\mathbf{H}^{t},\Theta_{f_{G}}^{t-1}) \tag{9}\]
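As a minimal PyTorch sketch of Eqn. 8 (treating the rows of \(\Theta\) as the batch dimension of an LSTM cell is our own simplification, not the exact EvolveGCN implementation):

```python
import torch
import torch.nn as nn

class EvolvingGCNLayer(nn.Module):
    """Minimal sketch of eqn. 8: the GCN weights Theta are the recurrent state."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W0 = nn.Parameter(torch.randn(d_in, d_out) * 0.1)  # Theta at t=0
        self.cell = nn.LSTMCell(d_out, d_out)  # rows of Theta act as the LSTM batch

    def forward(self, As, Xs):
        # As: (T, N, N) normalised adjacencies, Xs: (T, N, d_in) node features
        W = self.W0
        h, c = torch.zeros_like(W), torch.zeros_like(W)
        outs = []
        for A, X in zip(As, Xs):
            h, c = self.cell(W, (h, c))        # Theta^t = LSTM(Theta^{t-1})
            W = h
            outs.append(torch.relu(A @ X @ W))  # GCN step with evolved weights
        return torch.stack(outs)                # (T, N, d_out)
```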
Alternatively, by constraining the GNN parameters, **DynGEM**[104] incorporates an autoencoder (AE) to encode and reconstruct the adjacency matrix of each snapshot. Its parameters \(\Theta_{t}\) are initialized with \(\Theta_{t-1}\) to accelerate and stabilize the model training. Similarly, in **VGRNN**[105] the authors combine GRNN and a variational graph AE in order to reuse the learned hidden states of \(t^{-}\) to compute the prior distribution parameters of the AE.
### Embedding Time \(t\)
When the scale of the dynamic graph is large, as in social networks and recommendation systems, aggregation into snapshots is neither precise nor efficient [6, 9]. The evolution of the graph is thus represented by a set of timestamped events. Therefore, when encoding a CTDG, one should consider not only how to asynchronously update the node representations over time, but also how to define the neighbours of nodes.
The DGNNs presented in this subsection update the representation of a node \(v\) when it changes, i.e. when it participates in a new edge or when its attributes change. Its neighbours at moment \(t\), also called _temporal neighbourhood_, are usually defined as the nodes that have common edges with \(v\) (before \(t\)) [109]. Thus, such models consider \(v\)-centric equivalent static subgraph as shown in Fig. 9. In such cases, the occurrence time of each edge can be considered as part of the edge attributes or used as weight for graph convolution.
**TDGNN**[110] proposes a weighted graph convolution by assuming that the earlier an edge is created, the more weight this edge will have in aggregation, as shown in Eqn. 10. In more detail, the weight \(\alpha_{u,v}^{t}\) of an edge \((u,v)\) at moment \(t\) is calculated by the softmax of the existence time \(t-t_{u,v}\) of the edge.

Figure 9: To aggregate information of neighbours on a dynamic graph, for example, to encode the pink node, the nodes with which it has an edge, i.e. its _temporal neighbours_ [75] (yellow, orange, purple nodes), and the temporal information (in blue) are embedded as vectors. This can also be interpreted as constructing an _equivalent static graph via attributes_.
\[\alpha_{u,v}^{t}=\frac{e^{t-t_{u,v}}}{\sum_{u^{\prime}\in N_{t}(v)\cup\{v\}}e^{t-t_{u^{\prime},v}}} \tag{10}\]
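For illustration, Eqn. 10 can be computed as follows (a Python sketch with our own names; subtracting the maximum age before exponentiation is a standard numerical-stability trick that leaves the softmax unchanged):

```python
import numpy as np

def tdgnn_weights(t, creation_times):
    """creation_times: {u: t_uv} for the temporal neighbours of v (v itself included).
    Returns the aggregation weight of each edge at time t (eqn. 10)."""
    ages = np.array([t - tu for tu in creation_times.values()], dtype=float)
    e = np.exp(ages - ages.max())              # numerically stable softmax
    return dict(zip(creation_times, e / e.sum()))
```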
On the other hand, **TGAT**[109] embeds the value of the existence time \(t-t_{(u,v)}\) of edge \((u,v)\) as a vector of \(d_{T}\) dimensions and then concatenates it to the node \(v\)'s hidden states \(\mathbf{z}_{v}^{t}\). When aggregating neighbour information for a node \(v\), the weight of each neighbour is computed by multi-head attention (Eqn. 11). Similar methods have also been applied for time embedding in Dynamic Graph Transformers [111, 112].
\[\mathbf{q}(t) =[\mathbf{Z}(t)]_{0}\mathbf{W}_{Q}\] \[\mathbf{K}(t) =[\mathbf{Z}(t)]_{1:N}\mathbf{W}_{K}\] \[\alpha_{u,v}^{t} =\frac{\exp\left(q_{v}^{T}k_{u}\right)}{\sum_{u^{\prime}\in N_{t}(v)}\exp\left(q_{v}^{T}k_{u^{\prime}}\right)} \tag{11}\]
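In the spirit of such functional time encodings, the following PyTorch sketch (the learnable log-spaced frequencies and all names are our own assumptions, not TGAT's exact parameterisation) maps elapsed times to \(d_{T}\)-dimensional vectors that can then be concatenated to the hidden states of edges or nodes:

```python
import torch
import torch.nn as nn

class TimeEncoder(nn.Module):
    """Maps an elapsed time to a d_T-dimensional vector via learnable harmonics."""
    def __init__(self, d_T):
        super().__init__()
        self.freq = nn.Parameter(torch.logspace(0, -4, d_T))  # one frequency per dim

    def forward(self, dt):
        # dt: (E,) elapsed times t - t_uv; returns (E, d_T) time embeddings
        return torch.cos(dt.unsqueeze(-1) * self.freq)
```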
To better reuse the messages aggregated for each node by TGAT, **TGN**[108] adds a memory mechanism which memorizes the historical information for each node \(v\) with a memory vector \(\mathbf{s}_{v}\). This memory is updated after each time \(t\) a node \(v\) aggregates its neighbours' information \(\mathbf{m}_{v}^{t}\).
\[\mathbf{s}_{v}^{t}=\mathrm{MLP}(\mathbf{m}_{v}^{t},\mathbf{s}_{v}^{t-}) \tag{12}\]
Similar to TGN, **TGNF**[107] embeds nodes and time information via TGAT[109], and then updates the node's memory \(\mathbf{S}\) via the temporal memory module (TMM). In order to learn variational information better, it calculates the similarity \((\mathbf{S}_{t},\mathbf{S}_{t^{-}})\) of the memory \(\mathbf{S}\) at \(t\) and \(t^{-}\) during training as part of the loss, called Time Difference Network (TDN).
Some other approaches consider the interaction predictions on the graph as a Temporal Point Process (TPP) [24] and simulate the conditional intensity function that describes the probability of the event. **DyRep**[106] encodes a strength matrix \(\mathbf{S}\in\mathbb{R}^{|V|\times|V|}\) to simulate the intensity function of the interactions between each node pair and uses \(\mathbf{S}\) as the weights for graph convolution. \(\mathbf{S}\) is initialized by the adjacency matrix \(\mathbf{A}\) and updated when an interaction \((u,v)\) occurs at time \(t\). In this case, the embedding of each node involved is updated. For example, the update of \(v\) is the sum of the three embedding components given by Eqn. 13: the latest embedding of \(v\), the aggregation of the embeddings of \(u\)'s neighbour nodes, and the time gap between \(v\)'s last update and this update.
\[\mathbf{z}_{v}^{t}=\sigma\left(\mathbf{W}_{1}\mathbf{h}_{\mathrm{N}(u)}^{t^{ -}}+\mathbf{W}_{2}\mathbf{z}_{v}^{t^{-}}+\mathbf{W}_{3}(t-t^{-})\right) \tag{13}\]
All of the models mentioned above embed timestamps or time-gaps as part of the features. However, models such as TGAT are unable to capture accurate changes in structure without node features [113]. As discussed above, DyRep [106] solves this problem but is unable to perform the inductive task as it relies on the intensity matrix \(\mathbf{S}\). This raises another challenge: how to learn in an inductive task based only on the topology when the nodes have no features.
### Causal Random Walks
Random walk-based approaches do not aggregate the neighbourhood information of nodes, but sample node sequences to capture the local structure. To incorporate time information to the random walks, different ways of defining "causal walks" on dynamic graphs are derived.
To sample random walks with temporal information in discrete time, an intuitive way is to convert the DTDG into a directed static graph. Huang et al. [115] construct an _ESG via topology_, and allow random walks to sample inside the current layer (/timestep) to get the structural information or walk into the previous layer to obtain historical evolving information. Following the supra-adjacency representation (see fig. 10), **DyAne**[124] applies the random walk approaches of static graphs (DeepWalk [53] and LINE [55]) to DTDGs. Otherwise, without using these representations, **LSTM-node2vec**[123] directly samples one neighbouring node of a target node \(v\) per snapshot as a walk.
Similar to the method based on supra-adjacency, the walks obey causality under continuous time, i.e. after walking from node B to node C via an edge occurring at \(t_{2}\), node C can only walk to the next node via an edge with occurrence time \(t>t_{2}\) (Fig. 11).
Some early methods are **CTDNE**[121] and **T-Edge**[122]. To encode a node \(v\), they both sample the causal walks starting from \(v\), each walk \(W\) being a sequence of (node, time) pairs. Then they embed each (node, time) pair and encode the sequence \(W\) by \(f_{T}(\cdot)\) like RNNs. Combined with anonymous walk embedding, Causal Anonymous Walks (**CAW**[113]) anonymise nodes and their attributes in order to focus more on graph motifs, which solves the problem raised at the end of Sec. 3.4.
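A minimal Python sketch of such causal sampling (the event-list format and all names are our own assumptions; CTDNE additionally allows biased choices of the next edge, which we omit):

```python
import random

def sample_causal_walk(events, start, t_start, length):
    """events: {node: [(neighbour, time), ...]} adjacency list of a CTDG.
    Samples a walk whose edge times strictly increase (the causality constraint)."""
    walk, node, t = [(start, t_start)], start, t_start
    for _ in range(length - 1):
        later = [(u, tu) for (u, tu) in events.get(node, []) if tu > t]
        if not later:          # no causally admissible continuation
            break
        node, t = random.choice(later)
        walk.append((node, t))
    return walk                # sequence of (node, time) pairs as in CTDNE
```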
Figure 10: Supra-adjacency representation: treating discrete time dynamic graphs as directed static graphs for sampling random walks.
### Encoding Heterogeneous Graphs
Since heterogeneous graphs can maintain more informative representations in tasks like link prediction in recommendation systems, a key challenge is to have a dedicated module for handling different types of nodes and edges on dynamic graphs. We introduce in the following paragraphs two main approaches as illustrated in Fig. 12.
GNN-based models keep the same dimension \(d\) of embedding for different node types (e.g. for user nodes and item nodes) to facilitate computations. For example, to update the node embeddings when user \(u\) and item \(i\) have a connection \(e_{u,i}^{t}\) at time \(t\), **JODIE**[92] embeds the user attributes \(\mathbf{x}_{u}\), the item attributes \(\mathbf{x}_{i}\), the edge attributes \(\mathbf{x}_{e}\) and the time difference since the last update \(\Delta\) to the same dimension \(d\) and then adds them together :
\[\mathbf{h}_{u}^{t}=\sigma\left(\mathbf{W}_{1}^{u}\mathbf{h}_{u}^{ t^{-}}+\mathbf{W}_{2}^{u}\mathbf{h}_{i}^{t^{-}}+\mathbf{W}_{3}^{u}\mathbf{h}_{e}^ {t}+\mathbf{W}_{4}^{u}\Delta_{u}\right)\] \[\mathbf{h}_{i}^{t}=\sigma\left(\mathbf{W}_{1}^{i}\mathbf{h}_{i}^ {t^{-}}+\mathbf{W}_{2}^{i}\mathbf{h}_{u}^{t^{-}}+\mathbf{W}_{3}^{i}\mathbf{h}_ {e}^{t}+\mathbf{W}_{4}^{i}\Delta_{i}\right) \tag{14}\]
where \(t^{-}\) refers to the timestamp of the last update, thus \(\Delta=t-t^{-}\), and \(h_{i}^{t}=x_{i}^{t}\) for the first layer of the model. **DMGCF**[85] and **DGCF**[82] also use similar approaches for same-dimensional embeddings.
RW-based methods deal with heterogeneous graphs by defining metapaths [57], which specify the type of each node in the walk, such as (user, item, user), so that each random walk sampled (also called an "instance") with the same metapath can be projected to the same vector space. Examples on dynamic graphs are **THINE**[114] and **HDGNN**[119], which sample instances and encode them through attention layers and bidirectional RNN layers, respectively.
So far we have presented a wide variety of approaches to capture time and graph dependencies within dynamic graphs. In the next section, we discuss their global success and limitations, as well as present guidelines for the design of DGNN architectures.
Figure 11: Schematic diagram on sampling causal walks on a CTDG; the right part shows the graph pattern extracted by anonymizing nodes.
## 4 Guidelines for Designing DGNNs
In the previous sections, we have highlighted the diversity of contexts which can be encountered when considering machine learning on dynamic graphs, as well as the diversity of existing models to tackle these problems. In this section, we present some guidelines for designing DGNNs on the basis of the taxonomies presented in sections 2 and 3. We also discuss the latest trends for optimizing DGNNs. To the best of our knowledge, while such guidelines have already been proposed for static graphs [125], they do not exist for DGNNs.
### General Design Workflow of DGNNs
For static graphs, Zhou et _al._[125] described the GNN design pipeline as: _i)_ determine the input graph structure and scale, _ii)_ determine the output representation according to the downstream task, and _iii)_ add computational modules.
For dynamic graphs, the design of a DGNN has to consider more factors. We therefore generalize the workflow of designing DGNNs as follows:
1. Clearly define the input, output and nature of the task, according to the taxonomies of section 2;
2. Choose the compatible time encoding approach according to the learning setting, using the categorizations of section 3 and more precisely Table 5;
3. Design the NN structure;
4. Optimise the DGNN model.

Figure 12: Handling heterogeneous nodes. Above: embedding them into the same vector space (e.g. with \(d=5\)). Below: sampling random walks with defined metapaths.
The key points for DGNN compatibility are the input time granularity, the nature of the task, and the object to be encoded. We summarise the DG types to which each approach is known to be adapted in Table 5 and describe the various cases in this subsection.
Transductive tasks under discrete time, as a relatively simple setting, can be encoded with any approach to incorporate time information. In the inductive DT setting, no approach using _TE_ is found in the literature, and the \(enc(\mathbf{H})\) method must also have its output aggregated because of the gap mentioned in section 3.2, e.g. node-level time-aggregated or graph-level time-step. To handle continuous time, only _emb(t)_ and _Causal RW_ are widely used in the literature.
Once the approach has been selected, the next step is to add the computational components, which are very different for each approach.
The first three methods, mainly used in discrete time, i.e. \(\circled{1}\) _TE_, \(\circled{2}\) \(enc(\mathbf{H})\), and \(\circled{3}\) \(enc(\Theta)\), can be abstracted as encoding graph and time information via \(f_{G}(\cdot)\) and \(f_{T}(\cdot)\), respectively. These neural network components are usually some of the modules introduced in the appendices B, C and Tab. 4. In particular, the encoding of temporal edges in the \(TE\) approach can be performed by \(f_{G}(\cdot)\) without a specific \(f_{T}(\cdot)\). In the \(enc(\mathbf{H})\) approach, there are more possible ways to combine \(f_{G}(\cdot)\) and \(f_{T}(\cdot)\) (stacked, integrated, etc.). In the \(enc(\Theta)\) approach, the key is how \(f_{T}(\cdot)\) acts on the parameters of \(f_{G}(\cdot)\).
For the time embedding approach \(\circled{4}\) _emb(t)_, in addition to the method for aggregating the information of neighbours, i.e. \(f_{G}(\cdot)\), one also needs to consider the function for time embedding (e.g. a set of sine or cosine functions [109] or a learnable linear layer [106]) and how to combine the resulting vectors with the hidden states of edges or nodes (e.g. by addition or concatenation). In the situation where the past representation of a node is stored [107, 108, 106], one also needs to determine the module used to update the node representation, i.e. \(f_{T}(\cdot)\).
\begin{table}
\begin{tabular}{l l l l} \hline \hline Approach & DT Trans. & DT Ind. & CT \\ \hline \(\circled{1}\) TE & ✓ & & \\ \(\circled{2}\) Enc(H) & ✓ & \(*\) & \\ \(\circled{3}\) Enc(\(\Theta\)) & ✓ & ✓ & \\ \(\circled{4}\) Emb(t) & ✓ & ✓ & ✓ \\ \(\circled{5}\) Causal RW & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 5: Adapted dynamic graph types of each method: **TE** stands for "Temporal Edge Modeling", **Enc(H)** for "Sequentially Encoding Hidden States", **Enc(\(\Theta\))** for "Sequentially Encoding Model Parameters", **Emb(t)** for "Embedding time", **Causal RW** for "Causal Random Walks". ✓ means "applicable", \(*\) means "applicable with output restrictions", and no symbol indicates that it is not yet used in the literature.
For the random walk-based method \(\circled{5}\) _Causal RW_, one needs to determine the random walk sampling strategy as well as the method for encoding the sampled walks; typical walk-encoding strategies are [54, 53, 55, 56].
Last but not least, in the case of heterogeneous graphs, an additional vector projection or setting of metapaths has to be considered.
### Trends in Optimising DGNNs
With the development of artificial neural networks and the continuous emergence of new structures over the last five years, questions have been raised about the optimisation of DGNNs. Apart from general neural network training issues (overfitting, lack of data, vanishing gradients, etc.), a main issue for GNNs is over-smoothing [133, 132, 134]: if the number of layers and iterations of a GNN is too large, the hidden states of each node will converge to the same value. The second main challenge is over-squashing [135, 136, 137]: if a node has a very large number of K-hop neighbours, then the information passed from a distant node will be compressed and distorted.
To overcome the above problems, numerous methods have been proposed. In our view, general trends to improve DGNNs can be categorised as (1) input oriented, (2) DGNN component oriented, and (3) DGNN structure oriented.
#### 4.2.1 Input Oriented Optimisation
To avoid overfitting when learning graph representations, two main issues related to the DG need to be considered: the noise in topology (e.g. missing or incorrect links) and the noise in the attributes (e.g. incorrect input attributes or output labels) [128, 129].
Since there may be pairs of similar nodes in the graph that should be connected but are not (due to geographical constraints, missing data, etc.), some approaches aim to exploit more informative graph structure or to augment attribute data.
For discrete time, an example is **ST-SHN**[70] for crime prediction. Considering each region as a node and its geographical connectivity as an edge, ST-SHN infers the hyper-edges connecting multiple regions by learning the similarity of hidden states between node pairs. These hyper-edges help to learn cross-region relations to handle the global context.
For continuous time, such as heterogeneous graph-based recommendation systems, **DMGCF**[85] constructs two additional homogeneous graphs \(G_{u}\) for users and \(G_{i}\) for items based on the known user-item graph \(G_{ui}\). Then two GCNs are used to aggregate information on \(G_{u}\cup G_{i}\) and \(G_{ui}\) respectively to learn more informative node embeddings.
Noise in attributes can also lead to overfitting DGNNs, therefore adaptive data augmentation is another direction for input-oriented improvement. Wang _et al._ proposed Memory Tower Augmentation (**MeTA**[126]) for continuous time data augmentation by perturbing time, removing edges, and adding edges with perturbed time. Each augmentation has learnable parameters to better adapt to different input data.
#### 4.2.2 DGNN Component Oriented Optimisation
To solve the over-smoothing and over-squashing problems on graphs, the improvement of the DGNN modules focuses on a more versatile message propagation and a more efficient aggregation.
To avoid stacking multiple layers of GCNs, **TNDCN**[78] uses different-step network diffusion, which provides a larger receptive field for each layer. Each step \(k\) propagates attributes within a \(k\)-hop neighbourhood and has its independent learnable parameters \(\Gamma_{k}\), as shown in Eqn. 15, where \(\widetilde{\mathbf{A}}^{k}\) refers to the parameterised \(k\)-hop connectivity matrix.
\[\mathbf{H}=\sum_{k\geq 0}\widetilde{\mathbf{A}}^{k}\mathbf{H}^{k}\Gamma_{k} \tag{15}\]
Inspired by the bidirectional LSTM [176], an easy-to-implement enhancement for propagation is bi-directional message passing, as in **Bi-GCN**[87, 86]. It processes an undirected tree graph as two directed tree graphs: the first one from the root to the leaves, and the second one from the leaves to the root. Then two GCNs with different parameters encode each case independently to obtain more informative hidden states.
Besides improving propagation methods, a widely used technique for aggregating information is the (multi-head) attention mechanism, which enables the model to use adaptive weights \(\left(f_{attn}:R^{|N_{v_{i}}|\times d}\to R^{|N_{v_{i}}|\times 1}\right)\) for enhancing vanilla GCNs [127], as well as its variant self-attention mechanism \(\left(f_{self-attn}:R^{|N_{v_{i}}|\times d}\to R^{|N_{v_{i}}|\times d^{ \prime}}\right)\), which is good at capturing the internal correlation among elements in the sequence.
The three component-oriented improvements mentioned above are applicable not only to \(f_{G}(\cdot)\), but also to \(f_{T}(\cdot)\) when encoding along time, such as dilated convolution for expanding the temporal perceptual field [177], Bi-LSTM for bidirectional propagation along the sequence [176], and (self-)attention mechanisms for encoding sequences [36].
#### 4.2.3 DGNN Structure Oriented Optimisation
Another research direction is to optimise the overall structure. For example, residual connections [131] for dealing with the vanishing gradient are widely used in DGNNs, especially for the DGNNs which sequentially encode hidden states [77, 93, 83].
In particular for the time embedding approach under continuous time, how to better store historical information and how to learn variations of information are two major challenges due to the difficulty of encoding over time. Models such as **TGAT**[109] can encode the timestamps of the event but cannot reuse the hidden states already encoded in the past timesteps. **TGN**[108] addresses this problem by incorporating memory modules, **TGNF**[107] adds the similarity between current memory and previous memory in the loss to encourage the model to learn variational information. All these improvements enhance the performance and efficiency of the model. The evolution of the relevant models is shown in Fig. 13.
## 5 Conclusion
Thanks to their ability to integrate both structural and temporal aspects in a compact formalism, dynamic graphs have emerged in the last few years as a state-of-the-art model for describing dynamic systems.
Many scientific communities have investigated this area of research using their own definition, their own vocabulary, their own constraints, and have proposed prediction models dedicated to their downstream tasks. Among these models, Dynamic Graph Neural Networks occupy an important place, taking benefit of the representation learning paradigm.
The first part of this survey provides a clarification and a categorization of the different dynamic graph learning contexts that are encountered in the literature, from the input point of view, the output point of view and the learning setting (inductive/transductive nature of the task). This categorization leads to five different learning _cases_ covering the different contexts encountered in the literature.
Using this categorization, our second contribution was to propose a taxonomy of existing DGNN models. We distinguish five families of DGNNs, according to the strategy used to incorporate time information in the model.
Finally, we also provide practitioners with some guidelines for designing and improving DGNNs.
Although dynamic graph learning is a recent discipline, it will undoubtedly be a major trend for machine learning researchers for years to come. We hope that this review is a modest contribution in this direction.
## 6 Acknowledgement
This work was financially supported by the ANR Labcom Lisa ANR-20-LCV1-0009.
Figure 13: The main differences between the time-embedding methods, with the green color indicating the graph encoding-related methods and the blue color indicating the time encoding-related methods. |
2307.10112 | Extended Graph Assessment Metrics for Graph Neural Networks | When re-structuring patient cohorts into so-called population graphs,
initially independent data points can be incorporated into one interconnected
graph structure. This population graph can then be used for medical downstream
tasks using graph neural networks (GNNs). The construction of a suitable graph
structure is a challenging step in the learning pipeline that can have severe
impact on model performance. To this end, different graph assessment metrics
have been introduced to evaluate graph structures. However, these metrics are
limited to classification tasks and discrete adjacency matrices, only covering
a small subset of real-world applications. In this work, we introduce extended
graph assessment metrics (GAMs) for regression tasks and continuous adjacency
matrices. We focus on two GAMs in specific: \textit{homophily} and
\textit{cross-class neighbourhood similarity} (CCNS). We extend the notion of
GAMs to more than one hop, define homophily for regression tasks, as well as
continuous adjacency matrices, and propose a light-weight CCNS distance for
discrete and continuous adjacency matrices. We show the correlation of these
metrics with model performance on different medical population graphs and under
different learning settings. | Tamara T. Mueller, Sophie Starck, Leonhard F. Feiner, Kyriaki-Margarita Bintsi, Daniel Rueckert, Georgios Kaissis | 2023-07-13T13:55:57Z | http://arxiv.org/abs/2307.10112v2 | # Extended Graph Assessment Metrics for Regression and Weighted Graphs
###### Abstract
When re-structuring patient cohorts into so-called population graphs, initially independent patients can be incorporated into one interconnected graph structure. This population graph can then be used for medical downstream tasks using graph neural networks (GNNs). The construction of a suitable graph structure is a challenging step in the learning pipeline that can have severe impact on model performance. To this end, different graph assessment metrics have been introduced to evaluate graph structures. However, these metrics are limited to classification tasks and discrete adjacency matrices, only covering a small subset of real-world applications. In this work, we introduce extended graph assessment metrics (GAMs) for regression tasks and weighted graphs. We focus on two GAMs in specific: _homophily_ and _cross-class neighbourhood similarity_ (CCNS). We extend the notion of GAMs to more than one hop, define homophily for regression tasks, as well as continuous adjacency matrices, and propose a light-weight CCNS distance for discrete and continuous adjacency matrices. We show the correlation of these metrics with model performance on different medical population graphs and under different learning settings, using the TADPOLE and UKBB datasets.
## 1 Introduction
The performance of graph neural networks can be highly dependent on the graph structure they are trained on [16, 15]. To this end, several graph assessment metrics (GAMs) have been introduced to evaluate graph structures and have shown strong correlations between specific graph structures and the performance of graph neural networks (GNNs) [14, 16, 15]. Especially in settings where the graph structure is not provided by the dataset but needs to be constructed from the data, GAMs are the only way to assess the quality of the constructed graph. This is for example the case when utilising so-called population graphs on medical datasets. Recent works have furthermore shown that learning the graph structure in an end-to-end manner can improve performance on population graphs [9]. Some of
these methods that learn the graph structure during model training operate with fully connected, weighted graphs, where all nodes are connected with each other and the tightness of the connection is determined by a learnable edge weight. This leads to a different representation of the graph, which does not fit the to-date formulations of GAMs. Furthermore, existing metrics are tailored to classification tasks and cannot be easily transformed for equally important regression tasks. The contributions of this work are the following: (1) We extend existing metrics to allow for an assessment of multi-hop neighbourhoods. (2) We introduce an extension of the homophily metric for regression tasks and continuous adjacency matrices and (3) define a cross-class neighbourhood similarity (CCNS) distance metric and extend CCNS to learning tasks that operate on continuous adjacency matrices. Finally, (4) we show these metrics' correlation to model performance on different medical and synthetic datasets. The metrics introduced in this work can find versatile applications in the area of graph deep learning in medical and non-medical settings, since they strongly correlate with model performance and give insights into the graph structure in various learning settings.
## 2 Background and Related Work
### Definition of graphs
A discrete graph \(G:=(V,E)\) is defined by a set of \(n\) nodes \(V\) and a set of edges \(E\), connecting pairs of nodes. The edges are unweighted and can be represented by an adjacency matrix \(\mathbf{A}\) of shape \(n\times n\), where \(\mathbf{A}_{ij}=1\) if and only if \(e_{ij}\in E\) and \(0\) otherwise. A continuous/weighted graph \(G_{w}:=(V_{w},E_{w},\mathbf{W})\), assigns a (continuous) weight to every edge in \(E_{w}\), summarised in the weight matrix \(\mathbf{W}\). Continuous graphs are for example required in cases where the adjacency matrix is learned in an end-to-end manner and backpropagation through the adjacency matrix needs to be feasible. A neighbourhood \(\mathcal{N}_{v}\) of a node \(v\) contains all direct neighbours of \(v\) and can be extended to \(k\) hops by \(\mathcal{N}_{v}^{(k)}\). For this work, we assume familiarity with GNNs [3].
### Homophily
Homophily is a frequently used metric to assess a graph structure and is correlated with GNN performance [15]. It quantifies how many neighbouring nodes share the same label as the node of interest [15]. There exist three different notions of homophily: edge homophily [10], node homophily [19], and class homophily [12, 15]. Throughout this work, we use node homophily, sometimes omitting the term "node", only referring to "homophily".
Definition 1 (Node homophily): Let \(G:=(V,E)\) be a graph with a set of node labels \(Y:=\{y_{u};u\in V\}\) and \(\mathcal{N}_{v}\) be the set of neighbouring nodes to node \(v\). Then G has the following node homophily:
\[h(G,Y):=\frac{1}{|V|}\sum_{v\in V}\frac{|\{u|u\in\mathcal{N}_{v},Y_{u}=Y_{v}\}|}{| \mathcal{N}_{v}|}, \tag{1}\]
_where \(|\cdot|\) indicates the cardinality of a set._
A graph \(G\) with node labels \(Y\) is called \(\mathit{homophilous}/\mathit{homophilic}\) when \(h(G,Y)\) is large (typically larger than 0.5) and \(\mathit{heterophilous}/\mathit{heterophilic}\) otherwise [10].
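As a minimal illustration, Eqn. 1 can be computed as follows (a Python sketch under our own assumptions: a dense binary adjacency matrix, and isolated nodes skipped since their neighbourhood ratio is undefined):

```python
import numpy as np

def node_homophily(A, y):
    """A: (n, n) binary adjacency matrix, y: (n,) integer node labels (eqn. 1)."""
    ratios = []
    for v in range(len(y)):
        nbrs = np.flatnonzero(A[v])
        if len(nbrs) > 0:                     # nodes without neighbours are skipped
            ratios.append(np.mean(y[nbrs] == y[v]))
    return float(np.mean(ratios))
```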
### Cross-class neighbourhood similarity
Ma et al. [16] introduce a metric to assess the graph structure for graph deep learning, called cross-class neighbourhood similarity (CCNS). This metric indicates how similar the neighbourhoods of nodes with the same labels are over the whole graph - irrespective of the labels of the neighbouring nodes.
Definition 2 (Cross-class neighbourhood similarity): Let \(\mathit{G}=(\mathit{V},\mathit{E})\), \(\mathcal{N}_{v}\), and \(Y\) be defined as above. Let \(C\) be the set of node label classes, and \(\mathcal{V}_{c}\) the set of nodes of class \(c\). Then the CCNS of two classes \(c\) and \(c^{\prime}\) is defined as follows:
\[\mathrm{CCNS}(c,c^{\prime})=\frac{1}{|\mathcal{V}_{c}||\mathcal{V}_{c^{\prime}}|}\sum_{u\in\mathcal{V}_{c},v\in\mathcal{V}_{c^{\prime}}}\mathrm{cosim}(d(u),d(v)). \tag{2}\]
\(d(v)\) is the histogram of a node \(v\)'s neighbours' labels and \(\mathrm{cosim}(\cdot,\cdot)\) the cosine similarity.
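The following NumPy sketch (names and the dense-matrix format are our own assumptions; it expects non-negative integer labels and at least one node per class) computes the full CCNS matrix of Eqn. 2:

```python
import numpy as np

def ccns(A, y, n_classes):
    """A: (n, n) binary adjacency, y: (n,) labels. Returns the (C, C) CCNS matrix."""
    # d(v): histogram of the labels of v's neighbours
    D = np.stack([np.bincount(y[np.flatnonzero(A[v])], minlength=n_classes)
                  for v in range(len(y))]).astype(float)
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    D = D / np.where(norms == 0, 1.0, norms)   # rows ready for cosine similarity
    S = D @ D.T                                # pairwise cosim(d(u), d(v))
    M = np.zeros((n_classes, n_classes))
    for c in range(n_classes):
        for c2 in range(n_classes):
            M[c, c2] = S[np.ix_(y == c, y == c2)].mean()
    return M
```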
## 3 Extended Graph Metrics
In this section, we introduce our main contributions by defining new extended GAMs for regression tasks and continuous adjacency matrices. We propose (1) a unidimensional version of CCNS which we call _CCNS distance_, which is easier to evaluate than the whole original CCNS matrix, (2) an extension of existing metrics to \(k\)-hops, (3) GAMs for continuous adjacency matrices, and (4) homophily for regression tasks.
### CCNS distance
The CCNS of a dataset with \(n\) classes is an \(n\times n\) matrix, which can be large and cumbersome to evaluate. The most desirable CCNS for graph learning has high intra-class and low inter-class values, indicating similar neighbourhoods for the same class and different neighbourhoods between classes. We propose to collapse the CCNS matrix into a single value by evaluating the \(L_{1}\) distance between the CCNS and the identity matrix, which we term _CCNS distance_.
Definition 3 (CCNS distance): Let \(G=(V,E)\), \(C\), and CCNS be defined as above. Then the CCNS distance of \(G\) is defined as follows:
\[D_{\text{CCNS}}:=\frac{1}{n}\left\|\text{CCNS}-\mathbb{I}\right\|_{1}, \tag{3}\]
where \(\mathbb{I}\) indicates the identity matrix and \(\|\cdot\|_{1}\) the \(L_{1}\) norm.
We note that the _CCNS distance_ is best at low values and that we do not define CCNS for regression tasks, since it requires the existence of class labels.
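Building on the sketch above, the CCNS distance then reduces to a few lines (reading \(\|\cdot\|_{1}\) in Eqn. 3 as the entrywise \(L_{1}\) norm is our interpretation):

```python
import numpy as np

def ccns_distance(M):
    """M: (C, C) CCNS matrix. L1 distance to the identity, averaged as in eqn. 3."""
    n = M.shape[0]
    return np.abs(M - np.eye(n)).sum() / n
```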
### \(k\)-hop metrics
Most GAMs only evaluate direct neighbourhoods. However, GNNs can apply the message passing scheme to more hops, including more hops in the node feature embedding. We therefore propose to extend homophily and CCNS on unweighted graphs to \(k\)-hop neighbourhoods. An extension of the metrics on weighted graphs is more challenging, since the edge weights impact the \(k\)-hop metrics. The formal definitions for \(k\)-hop homophily and CCNS for unweighted graphs can be found in the Appendix. We here exchange the notion of \(\mathcal{N}_{v}\) with the specific \(k\)-hop neighbourhood \(\mathcal{N}_{v}^{(k)}\) of interest.
### Metrics for continuous adjacency matrices
Several graph learning settings, such as [6, 9], utilise a continuous graph structure. In order to allow for an evaluation of those graphs, we here define GAMs on the weight matrix \(\mathbf{W}\) instead of the binary adjacency matrix \(\mathbf{A}\).
Definition 4 (Homophily for continuous adjacency matrices): Let \(G_{w}=(V_{w},E_{w},\mathbf{W})\) be a weighted graph defined as above with a continuous adjacency matrix. Then the \(1\)-hop node homophily of \(G_{w}\) is defined as follows:
\[\text{HCont}(G_{w},Y):=\frac{1}{|V|}\sum_{v\in V}\Bigg{(}\frac{\sum_{u\in \mathcal{N}_{v}|y_{u}=y_{v}}w_{uv}}{\sum_{u\in\mathcal{N}_{v}}w_{uv}}\Bigg{)}, \tag{4}\]
where \(w_{uv}\) is the weight of the edge from \(u\) to \(v\).
Definition 5 (CCNS for continuous adjacency matrices): Let \(G_{w}=(V_{w},E_{w},\mathbf{W})\), \(C\), and \(\mathrm{cosim}(\cdot,\cdot)\) be defined as above. Then, the CCNS for weighted graphs is defined as follows:

\[\text{CCNS}_{cont}(c,c^{\prime}):=\frac{1}{|\mathcal{V}_{c}||\mathcal{V}_{c^{\prime}}|}\sum_{u\in\mathcal{V}_{c},v\in\mathcal{V}_{c^{\prime}}}\mathrm{cosim}(d_{c}(u),d_{c}(v)), \tag{5}\]
where \(d_{c}(u)\) is the histogram built from the edge weights of the continuous adjacency matrix for the respective classes instead of the count of neighbours. The _CCNS distance_ for continuous adjacency matrices can be evaluated as above.
### Homophily for regression
Homophily is only defined for node classification tasks, which strictly limits its application to a subset of use cases. However, many relevant graph learning tasks perform a downstream node regression, such as age regression [21, 2]. We here define homophily for node regression tasks. Since homophily is a metric ranging from \(0\) to \(1\), we contain this range for regression tasks by normalising the labels between \(0\) and \(1\) prior to metric evaluation. We subtract the average node label distance from \(1\) to ensure the same range as homophily for classification.
Definition 6 (Homophily for regression): Let \(G=(V,E)\) and \(\mathcal{N}_{v}^{(k)}\) be defined as above and \(Y\) be the vector of node labels, normalised between \(0\) and \(1\). Then the \(k\)-hop homophily for regression is defined as follows:
\[\mathrm{HReg}^{(k)}(G,Y):=1-\left(\frac{1}{|V|}\sum_{v\in V}\left(\frac{1}{| \mathcal{N}_{v}^{(k)}|}\sum_{n\in\mathcal{N}_{v}^{(k)}}\left\|y_{v}-y_{n} \right\|_{1}\right)\right), \tag{6}\]
where \(\left\|\cdot\right\|_{1}\) indicates the \(L_{1}\) norm.
Definition 7 (Homophily for continuous adjacency matrices for regression): Let \(G_{w}=(V_{w},E_{w},\mathbf{W})\), \(Y\), and \(\mathcal{N}_{v}\) be defined as above and the task be a regression task. Then the homophily of \(G_{w}\) is defined as follows:
\[\mathrm{HReg}(G,Y):=1-\left(\frac{1}{|V|}\sum_{v\in V}\left(\frac{\sum_{n\in \mathcal{N}_{v}}w_{nv}\left\|y_{v}-y_{n}\right\|_{1}}{\sum_{n\in\mathcal{N}_{v }}w_{nv}}\right)\right), \tag{7}\]
where \(w_{nv}\) is the weight of the edge from \(n\) to \(v\) and \(\left\|\cdot\right\|_{1}\) the \(L_{1}\) norm.
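As a sketch, Eq. (7) can be evaluated directly from the weight matrix; with a binary \(\mathbf{W}\) the same code reduces to the 1-hop case of Eq. (6). Labels are assumed to be normalised to \([0,1]\) beforehand, as described above:

```python
import numpy as np

def homophily_regression(W, y):
    """Eq. (7): regression homophily for a weighted graph.

    For scalar labels the L1 norm is the absolute difference; nodes
    without incoming weight are skipped (an assumption of this sketch).
    """
    W, y = np.asarray(W, dtype=float), np.asarray(y, dtype=float)
    scores = []
    for v in range(W.shape[0]):
        wv = W[:, v]
        total = wv.sum()
        if total > 0:
            scores.append((wv * np.abs(y - y[v])).sum() / total)
    return 1.0 - float(np.mean(scores))
```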
### Metric evaluation
In general, we recommend evaluating GAMs separately on the train, validation, and test sets. We believe this to be an important evaluation step, since the metrics can differ significantly between the different sub-graphs, given that the graph structure is only optimised on the training set.
## 4 Experiments and Results
We evaluate our metrics on several datasets with different graph learning techniques: we (1) assess benchmark classification datasets using a standard learning pipeline, and (2) evaluate medical population graphs for regression and classification that learn the adjacency matrix end-to-end. All experiments are performed in a transductive learning setting using graph convolutional networks (GCNs) [11]. In order to evaluate all introduced GAMs, we specifically perform experiments on two task settings, classification and regression, and under two graph learning settings: one using a discrete adjacency matrix and one using a continuous one.
### Datasets
In order to evaluate the above defined GAMs, we perform node-level prediction experiments with GNNs on different datasets. We evaluate \(\{1,2,3\}\)-hop homophily and CCNS distance on the benchmark citation datasets Cora, CiteSeer, and PubMed [24], the Computers and Photo datasets, and the Coauthor CS dataset [20]. All of these datasets are classification tasks. We use \(k\)-layer GCNs and compare performance to a multi-layer perceptron (MLP).
Furthermore, we evaluate the introduced metrics on two different medical population graph datasets, as well as two synthetic datasets. The baseline results for these datasets can be found in Appendix Table 4. We generate **synthetic datasets** for classification and regression to analyse the metrics in a controllable setting. As a real-world medical classification dataset, we use **TADPOLE** [17], a neuroimaging dataset which has been frequently used for graph learning on population graphs [18, 6, 9]. For a regression population graph, we perform brain age prediction on \(6\,406\) subjects of the UK BioBank [22] (**UKBB**). We use \(22\) clinical and \(68\) imaging features extracted from the subjects' magnetic resonance imaging (MRI) brain scans, following the approach in [5]. In both medical population graphs, each subject is represented by one node, and similar subjects are either connected following the \(k\)-nearest neighbours approach, as in [9], or the graph starts without any edges.
### GNN Training
Prior to this work, the homophily metric has only existed for an evaluation on discrete adjacency matrices. In this work, we extend this metric to continuous adjacency matrices. In order to evaluate the metrics for both, discrete and continuous adjacency matrices, we use two different graph learning methods: (a) _dDGM_ and (b) _cDGM_ from [9]. DGM stands for "differentiable graph module", referring to the fact that both methods learn the adjacency matrix in an end-to-end manner. cDGM hereby uses a continuous adjacency matrix, allowing us to evaluate the metrics introduced specifically for this setting. dDGM uses a discrete adjacency matrix by sampling the edges using the Gumbel-Top-K trick [8]. Both methods are similar in terms of model training and performance, allowing us to compare the newly introduced metrics to the existing homophily metric in the dDGM setting.
### Results
#### 4.3.1 (1) Benchmark classification datasets
The results on the benchmark datasets are summarised in Table 1. We can see that the \(k\)-hop metric values can differ greatly between the different hops for some datasets, while staying more constant for others. This gives an interesting insight into the graph structure over several hops. We believe an evaluation of neighbourhoods in graph learning to be more insightful if the number of hops in the GNN matches the number of hops considered in the graph metric. Interestingly, the performance of \(k\)-hop GCNs did not align with the \(k\)-hop metric values on the specific datasets. We summarise these results in Appendix Table 3. One possible reason for this might be that, e.g., the 3-hop metrics assess the 1, 2, and 3-hop neighbourhood at once, not just the outer ring of neighbours. Another reason for this discrepancy might be that homophily and CCNS do not perfectly predict GNN performance. Furthermore, different graph convolutions have been shown to be affected differently by low-homophily graphs [25]. We believe this to be an interesting direction to further investigate GAMs for GNNs.
#### 4.3.2 (2) Population graph experiments
Table 2 shows the dDGM and cDGM results on the population graph datasets. We can see that in some settings, such as the classification tasks on the synthetic dataset using dDGM, the homophily varies greatly between train and test set. This can be an indication of overfitting on the training set, since the graph structure is optimised for the training nodes only and might not generalise well to the whole graph.
Since we here use graph learning methods which adapt the graph structure during model training, the graph metrics also change over training. Figure 1 shows the development of the accuracy as well as the mean and standard deviation of the 1-hop homophily and CCNS distance, evaluated on the train (left) and validation set (right). We can see that for both sets, the homophily increases with the accuracy, while the standard deviation (STD) of the homophily decreases, and the CCNS distance decreases with increasing performance. However, the GAMs
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
\multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Nodes**} & \multirow{2}{*}{**Cl.**} & \multicolumn{3}{c}{**Node homophily \(\uparrow\)**} & \multicolumn{3}{c}{\(D_{\text{CCNS}}\downarrow\)} \\ & & & 1-hop & 2-hop & 3-hop & 1-hop & 2-hop & 3-hop \\ \hline
Cora & 1,433 & 7 & 0.825 \(\pm\) 0.29 & 0.775 \(\pm\) 0.26 & 0.663 \(\pm\) 0.29 & 0.075 & 0.138 & 0.229 \\
CiteSeer & 3,703 & 6 & 0.706 \(\pm\) 0.40 & 0.754 \(\pm\) 0.28 & 0.712 \(\pm\) 0.29 & 0.124 & 0.166 & 0.196 \\
PubMed & 19,717 & 3 & 0.792 \(\pm\) 0.35 & 0.761 \(\pm\) 0.26 & 0.687 \(\pm\) 0.26 & 0.173 & 0.281 & 0.363 \\ \hline
Computers & 13,752 & 10 & 0.785 \(\pm\) 0.26 & 0.569 \(\pm\) 0.27 & 0.303 \(\pm\) 0.20 & 0.080 & 0.275 & 0.697 \\
Photo & 7,650 & 8 & 0.837 \(\pm\) 0.25 & 0.660 \(\pm\) 0.30 & 0.447 \(\pm\) 0.28 & 0.072 & 0.210 & 0.429 \\ \hline
Coauthor CS & 18,333 & 15 & 0.832 \(\pm\) 0.24 & 0.698 \(\pm\) 0.25 & 0.520 \(\pm\) 0.25 & 0.043 & 0.110 & 0.237 \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(K\)-hop graph metrics of benchmark node classification datasets. Cl.: number of classes, Nodes: number of nodes
Figure 1: Development of graph metrics on TADPOLE over training using **cDGM**; left: train set; right: validation set
align more accurately with the training accuracy (left), showing that the method optimised the graph structure on the training set. The validation accuracy does not improve much in this example, while the validation GAMs still converge similarly to the ones evaluated on the train set (left). Figure 2 shows the mean (left) and STD (right) of the validation regression homophily HReg on the UKBB dataset with continuous adjacency matrices (using cDGM) and the corresponding change in validation mean absolute error (MAE). Again, homophily rises when the validation MAE decreases, and the STD of the homophily decreases in parallel. On the left, the dotted grey line indicates the MAE of a mean prediction on the dataset. We can see that the mean regression homophily HReg rises once the validation MAE drops below the error of a mean prediction. We here only visualise a subset of all performed experiments, but we observe the same trends for all settings. From these experiments we conclude that the here introduced GAMs show strong correlation with model performance and can be used to assess generated graph structures that are used for graph deep learning.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Method** & **Dataset** & **Task** & **Test score** & \multicolumn{2}{c}{**1-hop node homophily** \(\uparrow\)} & \multicolumn{2}{c}{**1-hop \(D_{\text{CCNS}}\)** \(\downarrow\)} \\
 & & & & train & test & train & test \\ \hline
**cDGM** & Synthetic 1k & c & 0.7900 \(\pm\) 0.08 & 1.0000 \(\pm\) 0.00 & 1.0000 \(\pm\) 0.00 & 0.0000 & 0.0000 \\
 & & r & 0.0112 \(\pm\) 0.01 & 0.9993 \(\pm\) 0.00 & 0.9991 \(\pm\) 0.00 & - & - \\ \cline{2-8}
 & Synthetic 2k & c & 0.8620 \(\pm\) 0.03 & 1.0000 \(\pm\) 0.00 & 1.0000 \(\pm\) 0.00 & 0.0000 & 0.0000 \\
 & & r & 0.0173 \(\pm\) 0.00 & 0.8787 \(\pm\) 0.06 & 0.8828 \(\pm\) 0.05 & - & - \\ \cline{2-8}
 & Tadpole & c & 0.9333 \(\pm\) 0.01 & 1.0000 \(\pm\) 0.00 & 0.9781 \(\pm\) 0.09 & 0.0000 & 0.0314 \\ \cline{2-8}
 & UKBB & r & 4.0775 \(\pm\) 0.23 & 0.8310 \(\pm\) 0.06 & 0.8306 \(\pm\) 0.07 & - & - \\ \hline \hline
**dDGM** & Synthetic 1k & c & 0.8080 \(\pm\) 0.04 & 0.6250 \(\pm\) 0.42 & 0.1150 \(\pm\) 0.32 & 0.4483 & 0.4577 \\
 & & r & 0.0262 \(\pm\) 0.00 & 0.7865 \(\pm\) 0.16 & 0.8472 \(\pm\) 0.15 & - & - \\ \cline{2-8}
 & Synthetic 2k & c & 0.7170 \(\pm\) 0.06 & 0.6884 \(\pm\) 0.40 & 0.0950 \(\pm\) 0.29 & 0.4115 & 0.4171 \\
 & & r & 0.0119 \(\pm\) 0.00 & 0.8347 \(\pm\) 0.13 & 0.8295 \(\pm\) 0.13 & - & - \\ \cline{2-8}
 & Tadpole & c & 0.9614 \(\pm\) 0.01 & 0.9297 \(\pm\) 0.18 & 0.8801 \(\pm\) 0.31 & 0.1045 & 0.0546 \\ \cline{2-8}
 & UKBB & r & 3.9067 \(\pm\) 0.04 & 0.8941 \(\pm\) 0.13 & 0.9114 \(\pm\) 0.12 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: cDGM and dDGM results on the population graph datasets. We report the test scores averaged over 5 random seeds and 1-hop homophily and CCNS distance of one final model each. We do not report CCNS distance on regression datasets, since it is not defined for regression tasks.
Figure 2: Development of metrics on UKBB dataset using **cDGM** on validation set
## 5 Conclusion and Future Work
In this work, we extended two frequently used graph assessment metrics (GAMs) for graph deep learning, enabling the evaluation of graph structures in regression tasks and with continuous adjacency matrices. For datasets that do not come with a pre-defined graph structure, like population graphs, the assessment of the graph structure is crucial for quality checks on the learning pipeline. Node homophily and cross-class neighbourhood similarity (CCNS) are commonly used GAMs that quantify how similar the neighbourhoods in a graph are. However, these metrics are only defined for discrete adjacency matrices and classification tasks, which covers only a small portion of graph deep learning tasks. Several graph learning tasks target node regression [21, 2, 1]. Furthermore, recent graph learning methods have shown that end-to-end learning of the adjacency matrix is beneficial over statically creating the graph structure prior to learning [9]. These methods do not operate on a static binary adjacency matrix, but use weighted continuous graphs, which is not considered by most current GAMs. In order to overcome these limitations, we extend the definition of node homophily to regression tasks, and both node homophily and CCNS to continuous adjacency matrices. We formulate these metrics, evaluate them on different synthetic and real-world medical datasets, and show their strong correlation with model performance. We believe these metrics to be essential tools for investigating the performance of GNNs, especially in the setting of population graphs or similar settings that require explicit graph construction.
Our definition of the CCNS distance \(D_{\mathrm{CCNS}}\) uses the \(L_{1}\)-norm to determine the distance between the node labels in order to weight each inter-class connection equally. However, the \(L_{1}\)-norm is only one of many norms that could be used here. Given the strong correlation of our \(D_{\mathrm{CCNS}}\) with model performance, we show that the usage of the \(L_{1}\)-norm is a sensible choice. We also see an extension of the metrics for weighted graphs to multiple hops as a promising next step towards better graph assessment for GNNs.
There exist additional GAMs, such as normalised total variation and normalised smoothness value [13], neighbourhood entropy and centre-neighbour similarity [23], and aggregations similarity score and diversification distinguishability [7] that have been shown to correlate with GNN performance. An extension of these metrics to regression tasks and weighted graphs would be interesting to investigate in future works. All implementations of the here introduced metrics are differentiable. This allows for a seamless integration in the learning pipeline, e.g. as loss components, which could be a highly promising application to improve GNN performance by optimising for specific graph properties.
|
2304.10442 | Securing Neural Networks with Knapsack Optimization | MLaaS Service Providers (SPs) holding a Neural Network would like to keep the
Neural Network weights secret. On the other hand, users wish to utilize the
SPs' Neural Network for inference without revealing their data. Multi-Party
Computation (MPC) offers a solution to achieve this. Computations in MPC
involve communication, as the parties send data back and forth. Non-linear
operations are usually the main bottleneck requiring the bulk of communication
bandwidth. In this paper, we focus on ResNets, which serve as the backbone for
many Computer Vision tasks, and we aim to reduce their non-linear components,
specifically, the number of ReLUs. Our key insight is that spatially close
pixels exhibit correlated ReLU responses. Building on this insight, we replace
the per-pixel ReLU operation with a ReLU operation per patch. We term this
approach 'Block-ReLU'. Since different layers in a Neural Network correspond to
different feature hierarchies, it makes sense to allow patch-size flexibility
for the various layers of the Neural Network. We devise an algorithm to choose
the optimal set of patch sizes through a novel reduction of the problem to the
Knapsack Problem. We demonstrate our approach in the semi-honest secure 3-party
setting for four problems: Classifying ImageNet using ResNet50 backbone,
classifying CIFAR100 using ResNet18 backbone, Semantic Segmentation of ADE20K
using MobileNetV2 backbone, and Semantic Segmentation of Pascal VOC 2012 using
ResNet50 backbone. Our approach achieves competitive performance compared to a
handful of competitors. Our source code is publicly available:
https://github.com/yg320/secure_inference. | Yakir Gorski, Amir Jevnisek, Shai Avidan | 2023-04-20T16:40:10Z | http://arxiv.org/abs/2304.10442v2 | # Securing Neural Networks with Knapsack Optimization
###### Abstract
Deep learning inference brings together the data and the Convolutional Neural Network (CNN). This is problematic in case the user wants to preserve the privacy of the data and the service provider does not want to reveal the weights of his CNN. Secure Inference allows the two parties to engage in a protocol that preserves their respective privacy concerns, while revealing only the inference result to the user. This is known as Multi-Party Computation (MPC).
A major bottleneck of MPC algorithms is communication, as the parties must send data back and forth. The linear component of a CNN (i.e. convolutions) can be done efficiently with minimal communication, but the non-linear part (i.e., ReLU) requires the bulk of communication bandwidth.
We propose two ways to accelerate Secure Inference. The first is based on the observation that the ReLU outcome of many convolutions is highly correlated. Therefore, we replace the per pixel ReLU operation by a ReLU operation per patch. Each layer in the network will benefit from a patch of a different size and we devise an algorithm to choose the optimal set of patch sizes through a novel reduction of the problem to a knapsack problem.
The second way to accelerate Secure Inference is based on cutting the number of bit comparisons required for a secure ReLU operation. We demonstrate the cumulative effect of these tools in the semi-honest secure 3-party setting for four problems: Classifying ImageNet using ResNet50 backbone, classifying CIFAR100 using ResNet18 backbone, semantic segmentation of ADE20K using MobileNetV2 backbone and semantic segmentation of Pascal VOC 2012 using ResNet50 backbone. Our source code is publicly available: [https://github.com/yg320/secureinference](https://github.com/yg320/secureinference).
## 1 Introduction
With the recent rise in popularity of machine learning algorithms, an increasing number of companies have begun offering machine learning as a service (MLaaS). This trend extends to convolutional neural network (CNN)-based computer vision algorithms as well.
In a typical scenario, two parties, a client with an image and an ML-provider server with a deep-learning model, seek to collaborate for the purpose of running inference. However, privacy concerns often impede such cooperation as both the client's image and the server's model may contain information that neither party is willing to share. To overcome this limitation, a Privacy-preserving deep learning framework has been proposed by researchers.
Ideally, secure inference can be done using Homomorphic Encryption (HE) that lets the service provider work directly on encrypted data, with no need for back and forth communication. Although possible in theory, this approach is painfully slow in practice. The vast majority of secure inference algorithms rely on Multi-Party Computation (MPC) that involves multiple rounds of communication. These cryptographic protocols vary in terms of the level of secu
Figure 1: **Reducing the number of DReLUs:** We show the impact of reducing the number of DReLUs (i.e., non-linearities) on performance, compared to inference on a non-secure baseline model. The accuracy for classification tasks (ImageNet in blue, CIFAR100 in red) and mIoU for segmentation (ADE20K in green, Pascal VOC 2012 in purple) relative to the non-secure baseline model is plotted as a function of the DReLU budget (expressed in percentage). Observe that when the percentage of DReLUs is above \(5\%\) we actually improve mIoU for ADE20K. We plot DeepReDuce’s solution for the CIFAR100 dataset in dashed red. Securely evaluated working points are denoted by bold circles.
rity they provide, the type of information they hide and the number of parties involved. These protocols share a common drawback: they are slow compared to their non-secure counterparts and often consume a substantial amount of communication bandwidth. Moreover, if not devised well, they often inflict a significant accuracy hit.
One challenge faced by many cryptographic protocols is the cost of a comparison operation (i.e., ReLU [12]) that, in many cases, takes the vast majority of communication bandwidth. Therefore, we aim to reduce the cost and the amount of these operations. Our key insight is that the ReLU outcome of nearby pixels is highly correlated. Therefore, we propose to use a single ReLU operation per patch. The question now becomes what should be the optimal patch size? To address this we evaluate multiple patch sizes per layer and use a novel, Knapsack-based optimization strategy, to find an optimal configuration of patch sizes for all layers in the network.
Additionally, we propose and implement a straightforward and practical approach to lower the cost of comparison operations, which can be applied to various protocols. We achieve this by disregarding some of the most and least significant bits of the activation layers.
Our method has been successfully applied to four distinct tasks: (1) Classifying ImageNet using ResNet50 backbone (2) classifying CIFAR100 using ResNet18 backbone (3) semantic segmentation of ADE20K using DeepLabV3 with MobileNetV2 backbone, and (4) semantic segmentation of Pascal VOC 2012 using DeepLabV3 with ResNet50 backbone. We demonstrate that by accepting a slight decrease in performance, we can reduce the number of comparisons by over \(90\%\).
Figure 1 shows the trade-off between a drop in DReLU (which is the step function that returns 1 for positive values and 0 for negative values) and performance. As can be seen, using just \(10\%\) of the DReLU operations leads to a drop of \(5\%\) in accuracy for the classification tasks, about \(2\%\) for the Pascal VOC 2012 segmentation task and improves mIoU of ADE20K segmentation. Since ReLU operations contribute the most to communication bandwidth, we save a considerable amount of network traffic.
With our implementation of the semi-honest secure 3-party setting of the SecureNN protocol on three instances within the same AWS EC2 region, the image classification models can be executed in about 5.5 seconds on ImageNet and 1.5 seconds on CIFAR100, and the semantic segmentation models can be executed in about 32 seconds on ADE20K and 85 seconds on Pascal VOC 2012, with performance comparable to OpenMMLab's non-secure models [6, 7].
To summarize, the main contributions of this paper are:
* We develop a generic, Knapsack-based, data-driven algorithm that greatly reduces the number of comparisons in a given network.
* We build and release an optimized, purely Pythonic, wrapper code over OpenMMLab based packages that secures models taken from their model zoo.
* We demonstrate a significant decrease in run-time at the cost of a slight reduction in accuracy for four of OpenMMLab's most common models and datasets.
* We present a secure semantic segmentation algorithm that preserves the model accuracy while being 10 times faster than a comparable secure baseline protocol.
## 2 Related Work
Privacy Preserving Deep LearningThe research on privacy preserving deep learning shows significant differences across various aspects. These include the number of parties involved, threat models (Semi-honest, Malicious), supported layers, techniques used (e.g., Homomorphic encryption, garbled circuits, oblivious transfer, and secret sharing) and the capabilities provided (training and inference).
The pioneers in the field of performing prediction with neural networks on encrypted data were CryptoNets [15]. They employed leveled homomorphic encryption, replacing ReLU non-linearities with square activations in order to perform inference on ciphertext. SecureML [39] utilized three distinct sharing methods: Additive, Boolean, and Yao sharing, along with a protocol to facilitate conversions between them. They used linearly homomorphic encryption (LHE) and oblivious transfer (OT) to precompute Beaver's triplets. MiniONN [34] proposed to generate Beaver's triplets using additively homomorphic encryption together with the single instruction multiple data (SIMD) batch processing technique. GAZELLE [27] suggested to use packed additively homomorphic encryption (PAHE) to make linear layers more communication efficient. FALCON [1] achieved high efficiency by running convolution in the frequency domain, using Fast Fourier Transform (FFT) based ciphertext calculation. Chameleon [42] builds upon ABY [41] and employs a Semi-honest Third Party (STP) dealer to generate Beaver's triplets in an offline phase. SecureNN [48] suggested three-party computation (3PC) protocols for secure evaluation of deep learning components. The Porthos component of CrypTFlow [32] is an improved semi-honest 3-party MPC protocol that builds upon SecureNN. FALCON [49], combines techniques from SecureNN and ABY\({}^{3}\)[38], replacing the MSB and Share Convert protocols in SecureNN with cheaper Wrap\({}_{3}\) protocols.
ReLU SavingsThere are several methods to reduce ReLU expense through non-cryptographic means. CryptoDL [23] and Blind Faith [30] proposed to use polynomial approximations to approximate non linear functions.
In [45], the authors proposed to use partial activation layers and only apply ReLUs to a portion of the channels. DELPHI [37] suggested to quadratically approximate ReLU layers. They designed a planner that automatically discovers which ReLUs to replace with quadratic approximations using neural architecture search (NAS). SAFENET [35] suggested a more fine-grained channel-wise activation approximation. They exploited Population Based Training (PBT) [25] to derive the optimal polynomial coefficients. In CryptoNAS [14] the authors developed a NAS over a fixed-depth, skip connections architectures to reduce ReLU count. The current state-of-the-art method for ReLU pruning, DeepReDuce [26], proposed a three-step process to decrease the number of ReLUs. Like our method, it uses a trained network to guide the placement of ReLU reduction and incorporates knowledge distillation to enhance accuracy in the pruned network. In [22], which is the work that has most influenced us, the authors used statistics from neighboring activations and shared DReLUs among them. However, they did not provide a satisfactory method for determining the neighborhood. Finally, Circa [13] proposed the Stochastic ReLU layer and showed that we can use prior knowledge about the absolute size of activations to reduce the complexity of garbled circuits while maintaining a low error probability. They also demonstrated that neural networks can handle clipping of the least significant bits, which further reduces circuit complexity. While we share similar observations with Circa, our approximate ReLU layer differs in its application, as we ignore the shares' most significant bits instead of the negligible probability event of a shares summation overflow. This enables us to better control bandwidth with error probability, which is especially useful when reducing shares precision, as in [49].
Network PruningNetwork pruning is a technique applied to reduce model complexity by removing weights that contribute the least to model accuracy. Our method can be seen as a specialized pruning technique that targets DReLU operations instead of weights. Different approaches use various measures of importance to identify and eliminate the least important weights, such as magnitude-based pruning (e.g., [17, 18]), similarity and clustering methods (e.g., [47, 43, 10]), and sensitivity methods (e.g., [19]). Some methods, including ours, score filters based on the reconstruction error of a later layer (e.g., [21, 36, 28]). The pioneers of Knapsack based network pruning [4] used the 0/1 Knapsack solution to eliminate filters in a neural network, whereby an item's weight was based on the number of FLOPs, and its value was based on the first-order Taylor approximation of the change to the loss. Similarly, in [46] the authors optimized network latency using the Knapsack paradigm. The authors of [24] applied Integer Programming to optimize post training quantization.
## 3 Method
Our method effectively secures a pre-trained model. It accomplishes this by replacing the ReLU layers with a less expensive layer, while minimizing any distortion (i.e., error) to the network. Our work is based on SecureNN [48] and, for completeness, we refer the readers to a brief summary of that protocol in the appendix.
### Approach
Following SecureNN notations, we refer to the ReLU decision of an activation unit as its DReLU. Mainly:
\[DReLU(x)=\begin{cases}1&\text{if }x\geq 0\\ 0&\text{otherwise}\end{cases} \tag{1}\]
Then, ReLU is defined as:
\[ReLU(x)=x\cdot DReLU(x) \tag{2}\]
Block ReLU (bReLU)The key insight that allows for a reduction in the number of DReLUs is that neighboring activation units tend to have similar signs. As depicted in Figure 2, this is demonstrated by the probability that two activation units at a specific spatial distance will share their sign (Segmentation on the ADE20K dataset with MobileNetV2 backbone). To exploit this property, we propose the block ReLU (bReLU) layer.
In this layer, we utilize the local spatial context of an activation unit to estimate its ReLU decision. Each activation channel is partitioned into rectangular patches of a specified size.
Figure 2: **Activation correlation** The probability that two activation units in the same channel have the same sign, based on their spatial distance, calculated using the activation statistics of a trained MobileNetV2. As can be seen, the lowest probability is about \(0.7\).
Let \(x\) be an activation unit, Let \(P(x)\) be its neighborhood induced by this partition. We define the patch ReLU decision (pDReLU) as the DReLU of the mean activation value of the neighbourhood. I.e.,
\[pDReLU(x)=DReLU(\frac{1}{|P(x)|}\Sigma_{a\in P(x)}a) \tag{3}\]
The ReLU decision of each pixel in the neighborhood is then replaced by the patch decision of that pixel.
\[bReLU(x)=x\cdot pDReLU(x) \tag{4}\]
As a result, we perform a single DReLU operation in the \(P(x)\) patch, instead of \(|P(x)|\) DReLU operations. Figure 3 illustrates a \(2\times 3\) bReLU operating on a \(6\times 6\) activation channel.
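For concreteness, here is a plaintext PyTorch sketch of the bReLU forward pass. The paper's actual layer runs inside the MPC protocol, a single patch size is used for all channels here, and the zero-padding of non-divisible channel sizes is our own choice, not specified in the text:

```python
import torch
import torch.nn.functional as F

class BlockReLU(torch.nn.Module):
    """Plaintext sketch of bReLU (Eqs. 3-4): one ReLU decision per patch."""

    def __init__(self, ph, pw):
        super().__init__()
        self.ph, self.pw = ph, pw

    def forward(self, x):                                   # x: (B, C, H, W)
        ph, pw = self.ph, self.pw
        pad_h, pad_w = (-x.shape[2]) % ph, (-x.shape[3]) % pw
        xp = F.pad(x, (0, pad_w, 0, pad_h))                 # zeros keep patch sums intact
        means = F.avg_pool2d(xp, kernel_size=(ph, pw))      # patch means, Eq. (3)
        mask = (means >= 0).to(x.dtype)                     # pDReLU decision per patch
        mask = mask.repeat_interleave(ph, dim=2).repeat_interleave(pw, dim=3)
        return x * mask[:, :, :x.shape[2], :x.shape[3]]     # Eq. (4)
```

`BlockReLU(2, 3)` reproduces the \(2\times 3\) patching of Figure 3; since the sign of a patch mean equals the sign of the patch sum, zero padding does not flip boundary decisions.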
Knapsack OptimizationOne can use patches of different size for different channels in the network, and our goal is to determine the optimal size for each individual channel. We formulate this problem as a discrete constraint optimization problem, where we set a desired DReLU budget \(\mathcal{B}\), and attempt to allocate this budget in the most efficient manner.
In what follows, the term \(C_{i}\) denotes the \(i^{th}\) channel in a neural network such that all of its channels are enumerated across all of its layers. To illustrate, in a network consisting of two layers, with the first layer having 32 channels and the second layer having 64 channels, \(C_{40}\) would refer to the \(8^{th}\) channel in the second layer.
Now, we define \(P_{i}\) as the list of patch-sizes available for selection by \(C_{i}\), and \(P_{ij}\) as the \(j\)-th item within this list. \(\mathcal{D}(i,j)\) is then defined as the distortion caused to the network as a result of replacing the ReLUs of \(C_{i}\) with a bReLU of patch size \(P_{ij}\).
Formally, let \(F\) be a neural network, and let \(F^{i,j}\) denote the network obtained by replacing the ReLUs of \(C_{i}\) of \(F\) with a bReLU of a patch size \(P_{ij}\). Let \(F(X)\) and \(F^{i,j}(X)\) be the last activation layer of \(F\) and \(F^{i,j}\) operating on an image \(X\). Then the distortion \(\mathcal{D}(i,j)\) is defined as:
\[\mathcal{D}(i,j)=\mathbb{E}[\|F^{i,j}(X)-F(X)\|^{2}] \tag{5}\]
Specifically, we perform a forward pass on the network \(F\) with image \(X\). Then, for each channel in each layer and for each candidate patch size, we perform a forward pass with only this change to the original network.
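A Monte-Carlo sketch of this measurement, with hypothetical `model`, `model_ij`, and `loader` objects (both models returning the last activation layer), could look as follows:

```python
import torch

@torch.no_grad()
def channel_distortion(model, model_ij, loader, device="cpu"):
    """Estimate Eq. (5): E[||F^{i,j}(X) - F(X)||^2] over a sample set.

    model_ij is a copy of `model` in which only channel C_i uses a bReLU
    of patch size P_ij; both models are assumed to be in eval mode.
    """
    total, count = 0.0, 0
    for x in loader:
        x = x.to(device)
        diff = model_ij(x) - model(x)
        total += diff.flatten(1).pow(2).sum().item()  # squared L2 per sample, summed
        count += x.shape[0]
    return total / count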
We define the cost \(\mathcal{W}(i,j)\) as the number of DReLUs remaining in \(C_{i}\) of \(F^{i,j}\), which is a function of both the patch size and the activation channel size. Finally, we define \(m\) as the number of channels in \(F\), and \(S_{i}\) as the set of indices \(\{1,...,|P_{i}|\}\). Using our notations, the minimization variant of the Multiple-Choice Knapsack Problem [29] is formulated as:
\[\min_{\varphi_{ij}} \sum_{i=1}^{m}\sum_{j\in S_{i}}\mathcal{D}(i,j)\cdot\varphi_{ij}\] (6) subject to \[\sum_{i=1}^{m}\sum_{j\in S_{i}}\mathcal{W}(i,j)\cdot\varphi_{ij} \leq\mathcal{B},\] \[\sum_{j\in S_{i}}\varphi_{ij}=1,\quad i=1,...,m,\] \[\varphi_{ij}\in\{0,1\},\quad i=1,...,m,\quad j\in S_{i}\]
\(\varphi_{ij}=1\) indicates that the \(P_{ij}\) patch size has been selected for \(C_{i}\). The second constraint ensures that no multiple patch sizes can be selected for this channel. This problem is naturally solved using a dynamic programming approach with a table \(DP\) consisting of \(m\) rows and \(\mathcal{B}\) columns. The table \(DP\) is iteratively populated using the recursive formula:
\[DP[i,j]=\min_{1\leq l\leq S_{i}}\{DP[i-1,j-\mathcal{W}(i,l)]+\mathcal{D}(i,l)\} \tag{7}\]
Since this table is column independent, we are able to parallelize finding the solution \(DP[m,\mathcal{B}]\). We use GPU implementation to take full advantage of this property.
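A plain CPU sketch of this dynamic program is given below; the paper's solver parallelises the column-independent table on a GPU, and we assume integer DReLU costs:

```python
import numpy as np

def multiple_choice_knapsack(D, W, budget):
    """Solve Eq. (6) via the recurrence of Eq. (7).

    D[i][j]: distortion of giving channel i patch size j;
    W[i][j]: its (integer) DReLU cost. Returns one index per channel.
    """
    m = len(D)
    dp = np.full(budget + 1, np.inf)
    dp[0] = 0.0
    choice = np.zeros((m, budget + 1), dtype=np.int32)
    for i in range(m):
        new = np.full(budget + 1, np.inf)
        for j, (d, w) in enumerate(zip(D[i], W[i])):
            if w > budget:
                continue
            cand = np.full(budget + 1, np.inf)
            cand[w:] = dp[:budget + 1 - w] + d      # take option j at cost w
            better = cand < new
            new[better] = cand[better]
            choice[i][better] = j
        dp = new
    b = int(np.argmin(dp))                          # cheapest feasible budget cell
    picks = []
    for i in range(m - 1, -1, -1):                  # backtrack the recorded choices
        j = int(choice[i, b])
        picks.append(j)
        b -= W[i][j]
    return picks[::-1]
```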
AnalysisUsing Equation 5 makes the strong assumption that the bReLU distortion is additive. That is, that the distortion caused by the ReLU-bReLU replacement in two channels equals to the sum of distortions caused by the replacement in each channel individually. Interestingly, we can actually use a less strict assumption.
Figure 3: **Block ReLU (bReLU):** An illustration of the bReLU layer (Equation 4). In bReLU the ReLU decision is based on the average of all activations in the block (i.e., patch). Left: a \(6\times 6\) input activation channel overlaid by the induced bReLU partitioning of a \(2\times 3\) patch size. Right: the output activation channel. As can be seen, the average of the top left patch values is negative, therefore, all the activations in this patch are zero. The average of the top right patch values is positive, therefore, all the values in this patch are preserved, including the erroneous -0.6 value. In this example we execute 6 DReLU operations, as opposed to the original 36, and as a result, we introduce 12 DReLU sign flips.
Formally, let \(L\) be a list of indices corresponding to the channels patch size, such that \(|L|=m\), and \(L[i]\in S_{i}\). We denote by \(F^{L}\) the network that is obtained by replacing the ReLUs of each channel \(C_{i}\) of \(F\) with a bReLU with a patch size of \(P_{i,L[i]}\).
We define the additive distortion \(\mathcal{D}_{a}(L)\) as:
\[\mathcal{D}_{a}(L)=\sum_{i=1}^{m}\mathcal{D}(i,L[i]) \tag{8}\]
and the real distortion \(\mathcal{D}_{r}(L)\) as:
\[\mathcal{D}_{r}(L)=\mathbb{E}[\|F^{L}(X)-F(X)\|^{2}] \tag{9}\]
We would like to have some indication that the argument that minimizes \(\mathcal{D}_{a}(L)\) roughly minimizes \(\mathcal{D}_{r}(L)\) as well. A sufficient condition for that is if the real distortion is a monotonically non-decreasing function of the additive distortion.
Formally, Let \(L1\) and \(L2\) be any two lists of indices satisfying the DReLU budget constraint \(\mathcal{B}\). Then, our assumption is that:
\[\mathcal{D}_{a}(L1)\leq\mathcal{D}_{a}(L2)\rightarrow\mathcal{D}_{r}(L1)\leq \mathcal{D}_{r}(L2) \tag{10}\]
The accuracy of this assumption determines how close the Knapsack solution is to the optimal solution.
We verified our assumption on the ImageNet classification task using ResNet50. We characterized the relationship between \(\mathcal{D}_{a}\) and \(\mathcal{D}_{r}\) using randomly drawn \(L\) lists, with the underlying implicit assumption that this characterization is preserved around minimum points. Specifically, we selected a random patch index list \(L\), and calculated both \(\mathcal{D}_{a}(L)\) and \(\mathcal{D}_{r}(L)\) using \(512\) samples. This procedure was repeated for \(3000\) such lists \(L\), and the results are shown in Figure 4. As can be seen, the real distortion is roughly a monotonically non-decreasing function of the additive distortion. This indicates that our assumption is reasonable.
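In pseudocode, the check behind Figure 4 pairs the additive estimate of Eq. (8) with the measured distortion of Eq. (9) for random configurations; `eval_real` below is a hypothetical callback that evaluates \(F^{L}\) on the sample batch:

```python
import numpy as np

def additive_vs_real(distortion_table, eval_real, num_trials=3000, seed=0):
    """Draw random patch-index lists L and return paired (D_a, D_r) values.

    distortion_table[i][j] holds the precomputed D(i, j) of Eq. (5);
    eval_real(L) returns the real distortion of Eq. (9).
    """
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(num_trials):
        L = [int(rng.integers(len(d))) for d in distortion_table]
        d_a = sum(d[j] for d, j in zip(distortion_table, L))  # Eq. (8)
        pairs.append((d_a, eval_real(L)))
    return pairs
```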
Homogeneous ChannelsWe noticed that the activation units that make up certain channels rarely have different sign values. To make use of this characteristic, we augment the items in each \(P_{i}\) list with an additional special item: the identity channel, with a weight of \(\mathcal{W}=0\), since no DReLUs are employed.
### Approximate DReLU
In protocols that utilize additive sharing, the cost of a comparison operation is usually proportional to the number of bits used to represent the shares, as is evident in the depth of a Yao's Garbled Circuit, and in SecureNN-based protocols. It has been previously observed [40, 13] that by tolerating a small probability of DReLU error, the complexity of comparison operations can be significantly reduced.
The main idea is that when comparing two uniformly distributed \(n\)-bit integer numbers, the outcome can be approximated with a negligible error probability of \(\frac{1}{2^{n-k+1}}\), even if \(k\) of the least significant bits are ignored. This is done simply by comparing the remaining \((n\)-\(k)\) bits.
Moreover, when dealing with numbers that are not uniformly distributed, if we have prior knowledge about the probability of their absolute sum exceeding a certain limit, we can reduce the number of most significant bits we consider accordingly, while bounding the probability of error. This is because in 2's complement numbers, as the probability of a number having high values decreases, the probability of the \(i^{th}\) most significant bit being the same as the \((i-1)^{th}\) most significant bit increases. In 2-party additive sharing protocols, the two numbers are the shares, while their comparison is the DReLU taken over their sum.
The activation values of deep learning networks often exhibit this behavior. The decimal portion tends to be uniform, and the absolute values tend to be low. In Figure 5, we can empirically observe the behavior of ResNet50 with bReLU layers trained on the ImageNet dataset by examining millions of activation values. The graph displays the probability of a DReLU error as a function of the number of most and least significant bits ignored. According to the analysis, we can safely disregard 43 of the most significant bits and 5 of the least significant bits, resulting in a DReLU error probability of about \(5e-4\). The empirical activation statistics, together with our observation enable us to differentiate protocols that require high precision (e.g., Conv2d) from protocols that require low precision (e.g., Private Compare).
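The measurement behind Figure 5 can be imitated in plaintext by secret-sharing fixed-point activation values and truncating both shares. The Monte-Carlo sketch below is ours and does not model the cryptographic protocol itself; `samples` holds activations already encoded in \(\mathbb{Z}_{2^{64}}\) fixed point:

```python
import random

def approx_drelu_error(samples, n_bits=64, msb_ignore=43, lsb_ignore=5,
                       trials=100_000, seed=0):
    """Empirical DReLU error rate when only the middle bits are compared."""
    rng = random.Random(seed)
    mod = 1 << n_bits
    kept_mod = 1 << (n_bits - msb_ignore - lsb_ignore)
    errors = 0
    for _ in range(trials):
        x = rng.choice(samples) % mod
        exact = 1 if x < mod // 2 else 0            # sign bit in two's complement
        s0 = rng.randrange(mod)                     # additive secret sharing
        s1 = (x - s0) % mod
        # Drop LSBs of both shares, add, and keep only the low window;
        # the window's top bit serves as the approximate sign.
        t = ((s0 >> lsb_ignore) + (s1 >> lsb_ignore)) % kept_mod
        approx = 1 if t < kept_mod // 2 else 0
        errors += approx != exact
    return errors / trials
```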
### Other Non-Linear Layers
MaxPool and ReLU6MaxPool and ReLU6 are computationally costly layers with limited benefits. To enhance the performance of our algorithm, we substitute ResNet's MaxPool layer with an AveragePool layer and MobileNet's
Figure 4: **Distortion approximation:** The relationship between actual distortion and estimated distortion for ImageNet classification using the ResNet50 backbone, determined through \(3000\) random trials. Each point represent one evaluation of Equation 8
ReLU6 layers with ReLU layers. If required, we also adjust the models through fine-tuning.
### Security Analysis
Our method offers the same level of security as the underlying cryptographic protocol, SecureNN in our case. Accordingly, we protect both the client image and server model weights under the relevant threat model. However, we do make the patch sizes public, which typically consist of approximately 40K discrete parameters that we infer based on the training data statistics. It is important to note that any hyperparameters or architectural structures that are revealed may result in some degree of training data information leakage. Therefore, researchers should be aware of this gray area and determine what level of model disclosure is acceptable. Other than image size, the user's image information is entirely protected.
## 4 Experiments
Implementation DetailsOur algorithm was evaluated on four different tasks: (1) Classifying ImageNet [9], using ResNet50 [20] backbone, (2) classifying CIFAR100 [31], using ResNet18 backbone, (3) ADE20K [50] semantic segmentation using DeepLabV3 [5] with MobileNetV2 [44] backbone and (4) Pascal VOC 2012 [11] semantic segmentation using DeepLabV3 and ResNet50 backbone.
Other than CIFAR100, trained models were obtained from OpenMMLab's model zoo and served as our baseline. As MMClassification does not provide a baseline ResNet18 model on the CIFAR100 dataset, we trained two models from scratch (see Section 4.1 for more details).
Prior to distortion calculation, ResNet50's MaxPooling layer was replaced with an AveragePooling layer and fine-tuned for 15 epochs for ImageNet and 3K iterations for Pascal VOC 2012 at a learning rate of \(1e-4\), while MobileNetV2's ReLU6 layers were replaced with ReLU layers without further tuning.
The per channel, per patch-size distortions were then calculated using 512, 2048, 48 and 30 samples for ImageNet, CIFAR100, ADE20K and Pascal VOC 2012, respectively. The size of a patch-size list depends on the size of the channel. Channels of size \(512\times 512\) can have as many as \(103\) possible patch-sizes, while channels of size \(4\times 4\) will have as little as \(8\) patch-sizes to consider. The optimal patch-sizes were then determined using our CUDA-based Multiple-Choice-Knapsack solver.
Finally, the ReLU layers were replaced with bReLU layers, parameterized by the Knapsack-optimal patch-sizes, and the models were retrained for an additional 25, 120 epochs for ImageNet and CIFAR100, using a low learning rate of \(5e-3\) with a gradual step-based decrease toward \(1e-4\). For ADE20K and Pascal VOC 2012 we further trained for 40K, 20K steps, respectively, using learning rate of \(5e-4\) with a gradual polynomial decrease toward \(1e-4\). A warmup [16] low learning-rate for all tasks was applied. With the exception of the learning rate scheduling, we inherited all other parameters as they were in the OpenMMLab configuration files. This includes using batch sizes of 256 for ImageNet, 128 for CIFAR100, and 16 for ADE20K and Pascal VOC 2012. In addition, we followed the common practice of utilizing the additional augmentation training data available in Pascal VOC 2012.
Figure 5: **Approximate DReLU:** The probability of a DReLU error as a function of the number of most and least significant bits ignored. As can be seen, we can safely disregard 43 of the MSB bits and 5 of the LSB bits. We use 64 bits to represent numbers, so we can only evaluate \(16=64-43-5\) bits.
Figure 6: **Runtime and Bandwidth Vs. Performance:** We measure the impact of our approach on runtime, bandwidth and performance relative to the baseline secure model. The factor reduction in bandwidth (top) and runtime (bottom) at different accuracy points for classification (ImageNet in blue, CIFAR100 in red) and mIoU points for segmentation (ADE20K in green, Pascal VOC 2012 in purple). Securely evaluated working points are denoted by bold circles.
EvaluationThe evaluation metrics for segmentation and classification tasks are mIoU and accuracy, respectively. The number of validation images varies among different datasets, with ImageNet containing 50K images, CIFAR100 containing 10K images, ADE20K containing 2K images, and Pascal VOC 2012 containing 1449 images.
As per SecureNN, we conducted inference over 3 Amazon EC2 c4.8xlarge Ubuntu-running instances in the same region (eu-west-1). We developed our own Numba [33] based Python implementation of the SecureNN protocol. We use shares over the \(\mathbb{Z}_{2^{64}}\) ring and our comparison protocol is run over the \(\mathbb{Z}_{67}\) field. We used 16-bit approximate DReLU for classification tasks (ignoring 5 LSBs and 43 MSBs) and 20 bits for segmentation tasks (ignoring 44 MSBs). To implement the secure bReLU layer, we made the trivial conversion of the MatMul protocol from [48] to support scalar-vector multiplication as well.
Our measurements include both the online and offline phases of Beaver's triples generation. We allocate 12 bits for segmentation and 16 bits for classification to represent the values' decimal part. We avoided sending truly random shares between parties, and computed them as the output of a pseudo-random function (PRF).
Since the image sizes in our segmentation tasks are not constant, we measured the runtime and bandwidth using the average size of validation images, as determined by the MMSegmentation data pipeline. The image size was set to \(512\times 673\times 3\) for ADE20K and \(512\times 713\times 3\) for Pascal VOC 2012. For classification tasks, we used image sizes of \(32\times 32\times 3\) for CIFAR100 and \(224\times 224\times 3\) for ImageNet. We obtained the runtime measurements for each case by averaging three samples for segmentation and ten samples for classification. The baseline models we used include neither bReLU nor approximate ReLU. The MobileNet baseline model has ReLU non-linearities in place of ReLU6. Finally, the ReLU MaxPool switching optimization was performed in ResNet models (see [32] for more details).
Figure 1 illustrates the relationship between a reduction in DReLU operations and the relative decrease in mIoU (for segmentation) and accuracy (for classification) from baseline models. We have securely evaluated some of the working points in the graph which are indicated by bold circles. We further emphasize that this relationship is not specific to any particular protocol. Interestingly, in the case of ADE20K dataset, mIoU was actually better than the non-secure baseline OpenMMLab model, when the number of DReLU evaluations was above \(5\%\).
Figure 6 demonstrates the cumulative effect of the different components of our algorithm on run-time and performance (top) as well as bandwidth and performance (bottom) for these four tasks in the semi-honest secure 3-party setting of SecureNN. Similar to Figure 1, secure evaluations are indicated by bold circles. We refer the readers to the Section 4.2 for a comparison between secure and non-secure evaluation.
Figure 7 displays the distribution of patch sizes for the top 10 patches selected by our Multiple-Choice-Knapsack solver on the ADE20K dataset using MobileNetV2. Observe how the wide and short patch sizes are consistent with the higher horizontal activation correlation, as demonstrated in Figure 2.
Table 1 displays the cost of communication for the various layers in the SecureNN protocol, with the CrypTFlow convolution optimization being utilized. We define \(h\) as the activation dimension. \(i\) and \(o\) are the number of activation input and output channels. \(f\) is the convolution kernel size. \(\ell\) is the number of bits used to represent activation values. \(\ell^{*}\) is the number of bits used in approx. DReLU. \(P\) is a list of patch-sizes such that \(|P|=o\). We set \(r=6\ell\log p+14\ell\) (3,968 in our case), \(r^{*}=6\ell^{*}\log p+14\ell\) (1,664 in our case) and \(q=\frac{1}{o}\sum_{i=1}^{o}\frac{1}{P_{i}}\), the ratio of DReLUs left. The communication cost of ReLU and bReLU is defined as the communication that remains after DReLU and pDReLU have been applied. The communication cost for some typical values (\(i=128,o=256,f=3,h=64,\ell=64,\ell^{*}=16,q=0.1\)) is shown. The complexity reduction caused by using identity channels is not shown. Finally, in the approximate DReLU layer, we can theoretically further reduce communication by decreasing the \(\mathbb{Z}_{p}\) field size, as the only requirement is that \(p\) is a prime such that \(p>2+\ell^{*}\).
The usage of bReLU layer does not decrease the number of communication rounds required in comparison to a standard ReLU layer. This holds true irrespective of whether or not approximate DReLU is utilized. As a result, the bReLU layer incurs a cost of 10 communication rounds. We refer the readers to [48] for more details.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Layer & Communication & Typical \\ & & & Values \\ \hline \hline
1 & Conv2d\({}_{h,i,f,o}\) & \(h^{2}(2i+o)\ell\) & 21MB \\ & & \(+2f^{2}oi\ell\) & \\ \hline
2 & DReLU\({}_{h,o}\) & \(h^{2}ro\) & \(520\)MB \\
3 & App. DReLU\({}_{h,o}\) & \(h^{2}r^{*}o\) & \(218\)MB \\ \hline
4 & pDReLU\({}_{h,o,P}\) & \(h^{2}rqo\) & \(52\)MB \\
5 & App. pDReLU\({}_{h,o,P}\) & \(h^{2}r^{*}qo\) & \(22\)MB \\ \hline
6 & ReLU\({}_{h,o}\) & \(5oh^{2}\ell\) & \(42\)MB \\
7 & bReLU\({}_{h,o,P}\) & \((3+2q)oh^{2}\ell\) & \(27\)MB \\ \hline \hline
8 & Conv2d+ReLU & (1) + (2) + (6) & \(583\)MB \\
9 & Conv2d+bReLU & (1) + (5) + (7) & \(70\)MB \\ \hline \end{tabular}
\end{table}
Table 1: **Communication complexity:** Communication complexity of Conv2D and ReLU layers. Our approximation (line (9)) requires almost an order of magnitude less communication bandwidth, compared to the baseline approach (line (8)). See discussion and details in the text.
### Comparison with Previous Methods
The DeepReduce group [26] developed a new approach to pruning ReLU using a measure of ReLU criticality and distilled learning. They demonstrated the relationship between ReLU budget and CIFAR100 accuracy using their method. As DeepReduce outperformed all previous methods (e.g., [37, 35, 14]), we compared our approach to theirs. DeepReDuce employed different ResNet18 variants for different budgets by decreasing the activation dimensions. We adopted a similar strategy, training two ResNet18 models on the CIFAR100 dataset: a standard ResNet18 and a lightweight version with half the activation sizes in each dimension (height, width, channels). We trained the networks for 200 epochs using SGD with a learning rate of 0.1, momentum of 0.9, and weight decay of \(1e-3\), reducing the learning rate by a factor of 5 at epochs 60, 120, and 160. We used the vanilla ResNet18 for high DReLU budgets (\(>5\%\)) and the lightweight ResNet18 for the rest. We then applied our Knapsack algorithm with the appropriate DReLU budget based on the distortion of 2048 samples, and fine-tuned it for an additional 120 epochs using an initial learning rate of \(5e-3\), which was warmed up for 5 epochs and reduced by a factor of 4 every 30 epochs. We applied momentum of 0.9 and weight decay of \(1e-3\).
The results are presented in Figure 1, along with DeepReduce's results. The plotted graph displays the accuracy of secure evaluation for CIFAR100, as it varies with the DReLU budget. The results demonstrate that, for the majority of the budget range, our method outperforms DeepReduce. In addition, Figure 8 shows the pruned DReLU distribution across layers for both our method and DeepReduce. DeepReduce prunes ReLUs at the layer level, effectively making the network shallower. While this may be convenient for linear layer merging, it goes against the trend of working with deeper networks (E.g., [3, 2]). Interestingly, as can be seen later in Section4.2, CIFAR100 is the dataset that benefits the least from fine-grained DReLU budget characterization at the channel level. Lastly, we believe that our method, like DeepReduce's, can also benefit from knowledge distillation.
### Ablation Study
Alternative Patch Sizes SetsWe investigate the effect on performance of using a different set of patch sizes instead of the Knapsack-optimal patch sizes. Table 2 presents a comparison of three different patch size sets: (1) a set of naive \(4\times 4\) constant patch sizes, (2) channel-shuffled Knapsack patch sizes, which are similar to Knapsack-optimized patch sizes but are shuffled among channels within the same layer, and (3) Knapsack-optimized patch sizes using the same DReLU budget as the constant version. The purpose of the third set is to differentiate between the respective contributions of the coarse-grained budget allocation across layers and the fine-grained budget allocation across channels within the same layer. We note that shuffling preserves the layer-wise patch size distribution, and thus only partially isolates the contribution of this fine-grained allocation.
Contribution of FinetuningSince our algorithm utilizes fine-tuning to enable network security, we would like to explore the portion of performance improvement that can be solely attributed to further training OpenMMLab's models. Table 3 compares the classification and segmentation performance of three training approaches. The first approach
Figure 8: **DReLU layer histogram** The allocation of the DReLU budget (49.15K) among different layers. The green bars represent the allocation by DeepReDuce, which concentrates the entire budget on four layers and effectively reduces ResNet18 to a 4 layer network. The blue bars represent the allocation using our Knapsack approach to distribute the DReLU budget.
Figure 7: **Patch Size Distribution:** The patch size distribution of MobileNetV2 on the ADE20K dataset (Target DReLU = 9%). Observe that less than 1K layers (out of about \(17K\)) kept using the standard \(1\times 1\) ReLU. The Identity bar refers to homogeneous channels, where no DReLU evaluation is required for the entire channel.
is OpenMMLab's reported results, as well as our results on the CIFAR100 baseline model. The second approach is our results obtained using a DReLU budget of \(12\%\). The third approach uses the same training process as our algorithm, but ReLU layers are left intact.
## 5 Conclusions
We proposed a new optimization technique to reduce the number of DReLUs in a neural network by about an order of magnitude. This is based on the observation that DReLU operations are highly correlated across neighboring pixels. Therefore, one DReLU per patch is enough to approximate a naive per-pixel DReLU evaluation. Based on this observation we formulate a knapsack optimization that determines, for each layer, what should be the optimal patch size of that layer. In addition, we show that we can further accelerate communication by cutting both MSB and LSB bits from the number representations used during the secure inference.
We evaluated the proposed techniques on two datasets for classification (CIFAR100, ImageNet) and two datasets for semantic segmentation (ADE20K, Pascal VOC 2012) and observed significant gains in performance. To the best of our knowledge, we are the first to demonstrate secure semantic segmentation on large images (\(512\times 512\) images).
The secure inference was implemented within the secure 3-party SecureNN protocol, but there is nothing in our technique that prevents it from being used in other protocols as well. Our source code has been made public.
|
2301.06398 | High-fidelity reproduction of central galaxy joint distributions with
Neural Networks | The relationship between galaxies and haloes is central to the description of
galaxy formation, and a fundamental step towards extracting precise
cosmological information from galaxy maps. However, this connection involves
several complex processes that are interconnected. Machine Learning methods are
flexible tools that can learn complex correlations between a large number of
features, but are traditionally designed as deterministic estimators. In this
work, we use the IllustrisTNG300-1 simulation and apply neural networks in a
binning classification scheme to predict probability distributions of central
galaxy properties, namely stellar mass, colour, specific star formation rate,
and radius, using as input features the halo mass, concentration, spin, age,
and the overdensity on a scale of 3 $h^{-1}$ Mpc. The model captures the
intrinsic scatter in the relation between halo and galaxy properties, and can
thus be used to quantify the uncertainties related to the stochasticity of the
galaxy properties with respect to the halo properties. In particular, with our
proposed method, one can define and accurately reproduce the properties of the
different galaxy populations in great detail. We demonstrate the power of this
tool by directly comparing traditional single-point estimators and the
predicted joint probability distributions, and also by computing the power
spectrum of a large number of tracers defined on the basis of the predicted
colour-stellar mass diagram. We show that the neural networks reproduce
clustering statistics of the individual galaxy populations with excellent
precision and accuracy. | Natália V. N. Rodrigues, Natalí S. M. de Santi, Antonio D. Montero-Dorta, L. Raul Abramo | 2023-01-16T12:37:46Z | http://arxiv.org/abs/2301.06398v1 | # High-fidelity reproduction of central galaxy joint distributions with Neural Networks
###### Abstract
The relationship between galaxies and haloes is central to the description of galaxy formation, and a fundamental step towards extracting precise cosmological information from galaxy maps. However, this connection involves several complex processes that are interconnected. Machine Learning methods are flexible tools that can learn complex correlations between a large number of features, but are traditionally designed as deterministic estimators. In this work, we use the IllustrisTNG300-1 simulation and apply neural networks in a binning classification scheme to predict probability distributions of central galaxy properties, namely stellar mass, colour, specific star formation rate, and radius, using as input features the halo mass, concentration, spin, age, and the overdensity on a scale of 3 \(h^{-1}\) Mpc. The model captures the intrinsic scatter in the relation between halo and galaxy properties, and can thus be used to quantify the uncertainties related to the stochasticity of the galaxy properties with respect to the halo properties. In particular, with our proposed method, one can define and accurately reproduce the properties of the different galaxy populations in great detail. We demonstrate the power of this tool by directly comparing traditional single-point estimators and the predicted joint probability distributions, and also by computing the power spectrum of a large number of tracers defined on the basis of the predicted colour-stellar mass diagram. We show that the neural networks reproduce clustering statistics of the individual galaxy populations with excellent precision and accuracy.
keywords: galaxies: statistics - cosmology: large-scale structure of Universe - methods: data analysis - methods: statistical
## 1 Introduction
Characterising the connection between the properties of galaxies and those of the underlying population of dark-matter (DM) haloes is one of the most crucial aspects to understand the large-scale structure (LSS) of the Universe. This link not only encapsulates fundamental information about the process of galaxy formation, but it is also a crucial step to optimise the extraction of cosmological constraints from galaxy maps.
The halo-galaxy connection is nowadays investigated using a variety of techniques (see, e.g., Wechsler and Tinker, 2018). On the one hand, empirical methods use DM-only simulations as the basis on top of which different analytical prescriptions are implemented in order to establish that connection. These techniques include sub-halo abundance matching (SHAM, e.g., Conroy et al., 2006; Behroozi et al., 2010; Trujillo-Gomez et al., 2011; Favole et al., 2016; Guo et al., 2016; Contreras et al., 2020, 2020; Hadzhiyska et al., 2021; Favole et al., 2022), halo occupation distributions (HODs, e.g., Berlind and Weinberg, 2002; Zehavi et al., 2005, 2018; Artale et al., 2018; Bose et al., 2019; Hadzhiyska et al., 2020; Xu et al., 2021) and empirical forward modelling (e.g., Becker, 2015; Moster et al., 2018; Behroozi et al., 2019). On the other hand, it is possible to model, with varying degrees of detail, the physical mechanisms that shape the process of galaxy formation. In this context, hydrodynamical simulations (e.g., Somerville and Dave, 2015; Naab and Ostriker, 2017; Pillepich et al., 2018, 2018; Springel et al., 2018; Villasescusa-Navarro et al., 2021, 2022) are perhaps the most ambitious efforts. These models employ known physics to simulate, at a sub-grid level, a variety of processes that are related to galaxy formation such as star formation, radiative metal cooling, and supernova, stellar, and black hole feedback - for reviews on this, see Somerville and Dave, 2015; Naab and Ostriker, 2017. This modelling can also be approached from a semi-analytic, less computationally demanding, perspective. These semi-analytic models (SAMs, e.g., White and Frenk, 1991; Guo et al., 2013) employ physically motivated recipes to mimic the galaxy formation processes.
In this paper, we investigate the halo-galaxy connection from a machine learning (ML) perspective. The issue of the halo-galaxy connection has been addressed using ML by many works (e.g., Kamdar et al., 2016; Agarwal et al., 2018; Calderon and Berlind, 2019; Man et al., 2019; Yip et al., 2019; Zhang et al., 2019; Jo and Kim, 2019; Kasmanoff et al., 2020; Delgado et al., 2021; McGibbon and Khochfar, 2021; Shao et al., 2021; Lovell et al., 2022; Stiskalek et al., 2022; de Andres et al., 2022; Jespersen et al., 2022; Chittenden and Tojeiro, 2023). In de Santi et al. (2022) we provided an ML suite combining some of the most powerful, well-known models in the literature
to predict central galaxy properties using host halo properties. All the applied methods, however, are designed to return a single value for each galaxy property, independently of the remaining properties. Yet many complex, interrelated processes are involved in the formation and evolution of galaxies, and their properties cannot be precisely determined by halo properties alone. Therefore, a model that proposes to map the relation between galaxies and host haloes should encode not only the correlations between galaxy properties, but also the uncertainties due to the stochastic aspects of galaxy formation. In other words, any given halo could host a central galaxy with a variety of properties and, hence, a model should return joint probability distributions for the possible values of those galaxy properties, instead of a single one.
The ML suite from our precursor work (de Santi et al., 2022) provided encouraging results in terms of single-point estimation metrics, such as the Pearson correlation coefficient between true and predicted values, especially for stellar mass, which is highly correlated with halo mass. However, deterministic models that try to predict individual galaxy properties can be biased towards the most frequent values, and thus fail to recover the overall distributions of the galaxy properties. In that paper, this issue was treated as an imbalanced data problem, i.e., despite the fact that different output values could be associated with some fixed set of halo properties, the machine tends to assign the most frequent values. To address this problem, we made use of a data augmentation technique to increase the weight of the less represented instances, which allowed us to better recover the under-represented populations, but still in a way that each halo is assigned a single, individual value for each central galaxy property (de Santi et al., 2022).
In the present work, we proceed by predicting probability distributions with neural networks (NNs) using a binning classification scheme, which we refer to as NN\({}_{\rm class}\), for the same central galaxy properties as de Santi et al. (2022), namely, stellar mass, \(g-i\) colour, specific star formation rate, and galaxy radius. This not only enables us to recover the overall distributions of the galaxy properties from the IllustrisTNG300-1 (hereafter, TNG300) sample, but also to capture the intrinsic scatter in the halo-galaxy mapping by providing, for each halo, the probability distributions associated with its central galaxy properties. We also train NN\({}_{\rm class}\) to predict the galaxy properties jointly, finding that the joint distributions recover correlations that are lost when predicting univariate distributions independently. ML probability-based descriptions have been used in related contexts, in particular with NNs, such as photometric redshift estimation (e.g., Lima et al., 2022), estimation of the dynamical mass of galaxy clusters (e.g., Ho et al., 2021; Ramanah et al., 2020) and recently in the halo-galaxy connection (e.g., Stiskalek et al., 2022).
In order to study how NN\({}_{\rm class}\) captures the intrinsic stochasticity in the halo-galaxy connection, we analyse the shape of the distributions of individual galaxies, which gives some insight into the contribution of secondary halo properties. Moreover, we analyse how this uncertainty affects clustering statistics, namely the power spectrum. Our technique enables us to define as many galaxy populations as desired, and to analyse to what extent those populations occupy the same types of haloes. We explore this flexibility by computing the power spectrum of a large number of galaxy populations (tracers), selected on the basis of the colour-stellar mass diagram.
The paper is organised as follows. The IllustrisTNG data and the chosen set of halo and galaxy properties are described in §2. In §3, we explain how we applied NNs to predict joint probability distributions. Section 4 analyses the quality of the results obtained with the NNs by comparing the predictions with the IllustrisTNG catalogue. In §5, we present our results in terms of the power spectra of several galaxy populations. Finally, we outline our main conclusions in §6, and discuss our plans for future improvements and applications.
## 2 Data
Our analysis is based on data from the IllustrisTNG magnetohydrodynamical cosmological simulation (Pillepich et al., 2018, 2018; Nelson et al., 2018; Marinacci et al., 2018; Naiman et al., 2018; Springel et al., 2018; Nelson et al., 2019). This simulation suite, which was generated using the AREPO moving-mesh code (Springel, 2010), is an improved version of the previous Illustris simulation (Vogelsberger et al., 2014, 2014, 2014). IllustrisTNG features a variety of updated sub-grid models accounting for star formation, radiative metal cooling, chemical enrichment from SNII, SNIa, and AGB stars, as well as feedback mechanisms (including stellar and super-massive black hole feedback). These models were calibrated to reproduce an array of observational constraints, such as the \(z=0\) galaxy stellar mass function and the cosmic SFR density, to name but a few (see the aforementioned references for more information). The IllustrisTNG simulation adopts the standard \(\Lambda\)CDM cosmology (Planck Collaboration et al., 2016), with parameters \(\Omega_{\rm m}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(\Omega_{\Lambda}=0.6911\), \(H_{0}=100\,h\,{\rm km\,s^{-1}Mpc^{-1}}\) with \(h=0.6774\), \(\sigma_{8}=0.8159\), and \(n_{S}=0.9667\).
The ML methodology that we developed in this work to reproduce the halo-galaxy connection is applied to galaxy clustering in terms of the power spectrum. For this reason, in order to minimise cosmic variance, we chose to analyse the largest box available in the database, TNG300, spanning a side length of \(205\;h^{-1}{\rm Mpc}\) with periodic boundary conditions. TNG300 contains \(2500^{3}\) DM particles of mass \(4.0\times 10^{7}\;h^{-1}{\rm M_{\odot}}\) and \(2500^{3}\) gas cells of mass \(7.6\times 10^{6}\;h^{-1}{\rm M_{\odot}}\). The adequacy of TNG300 in the context of clustering science has been extensively proven in a variety of analyses (see, e.g., Contreras et al., 2020; Gu et al., 2020; Hadzhiyska et al., 2020; Montero-Dorta et al., 2020; Shi et al., 2020; Hadzhiyska et al., 2021; Montero-Dorta et al., 2021, 2021; Favole et al., 2022; de Santi et al., 2022).
In this work, we employ both galaxy and DM halo information from TNG300. DM haloes in the entire IllustrisTNG suite are identified using a friends-of-friends (FOF) algorithm based on a linking length of 0.2 times the mean of the inter-particle separation (Davis et al., 1985). As in de Santi et al. (2022), the following halo properties are used as input features to train the NNs:
* _Virial mass_ (\(M_{\rm vir}[h^{-1}{\rm M_{\odot}}]\)), which is computed by adding up the mass of all gas cells and particles contained within the virial radius \(R_{\rm vir}\) (based on a collapse density threshold of \(\Delta_{c}=200\)). In order to ensure that haloes are well resolved, we impose a mass cut \(\log_{10}(M_{\rm vir}[h^{-1}{\rm M_{\odot}}])\geq 10.5\), corresponding to at least 500 dark matter particles.
* _Virial concentration_ (\(c_{\rm vir}\)), defined in the standard way as the ratio between the virial radius and the scale radius, i.e., \(c_{\rm vir}=R_{\rm vir}/R_{\rm s}\). \(R_{\rm s}\) is obtained by fitting the DM density profiles of individual haloes with a NFW profile (Navarro et al., 1997).
* _Halo spin_ (\(\lambda_{\rm halo}\)), for which we follow the Bullock et al. (2001) definition: \(\lambda_{\rm halo}=|J|/(\sqrt{2}M_{\rm vir}V_{\rm vir}R_{\rm vir})\). Here, \(J\) and \(V_{\rm vir}\) are the angular momentum of the halo and its circular velocity at \(R_{\rm vir}\), respectively.
* _Halo age_, parametrised as the half-mass formation redshift \(z_{1/2}\). This parameter corresponds to the redshift at which half of the present-day halo mass has been accreted into a single subhalo for the first time. The formation redshift is measured following
the progenitors of the main branch of the subhalo merger tree computed with sublink, which is initialised at \(z=6\).
* The _overdensity_ around haloes on a scale of \(3\,h^{-1}\mathrm{Mpc}\) (\(\delta_{3}\)), defined as the number density of subhaloes within a sphere of radius \(R=3\,h^{-1}\mathrm{Mpc}\), normalised by the total number density of subhaloes in the TNG300 box (e.g., Artale et al., 2018; Bose et al., 2019); a counting sketch of this estimator follows the list.
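For concreteness, the following is a minimal sketch of such a counting estimator (not the authors' code), assuming a recent SciPy; the subhalo positions are stand-ins, and the box side matches TNG300.

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in subhalo positions in a periodic box with the TNG300 side length.
box, R = 205.0, 3.0                                  # h^-1 Mpc
pos = np.random.uniform(0.0, box, size=(100_000, 3))

tree = cKDTree(pos, boxsize=box)                     # periodic boundaries
counts = tree.query_ball_point(pos, r=R, return_length=True)
n_local = np.asarray(counts) / (4.0 / 3.0 * np.pi * R**3)
delta3 = n_local / (len(pos) / box**3)               # normalised overdensity
```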
On the other hand, subhaloes (i.e., gravitationally bound substructures) are identified in IllustrisTNG using the subfind algorithm (Springel et al., 2001; Dolag et al., 2009). Subhaloes containing a non-zero stellar mass component are labelled as galaxies. Again, following de Santi et al. (2022) for consistency, TNG300 galaxies are characterised in this work using the following basic properties:
* The _stellar mass_ (\(M_{*}\) [\(h^{-1}\mathrm{M}_{\odot}\)]), which includes all stellar particles within the subhalo. In order to ensure that galaxies are well resolved, we impose a mass cut \(\log_{10}(M_{*}[h^{-1}\mathrm{M}_{\odot}])\geq 8.75\), corresponding to at least 50 stellar particles.
* The _colour_ \(g-i\), computed from the rest-frame magnitudes, which are obtained in IllustrisTNG by adding up the luminosities of all stellar particles in the subhalo (Buser, 1978). Note that the specific choice of colour is rather arbitrary. We have checked that using other combinations (e.g., \(g-r\)) provides similar results.
* The _specific star formation rate_ (sSFR [\(\mathrm{yr}^{-1}h\)]), which is the star formation rate (SFR) normalised by stellar mass. The SFR is computed by adding up the star formation rates of all gas cells in the subhalo. Note that around 14% of the galaxies at redshift \(z=0\) in TNG300 have SFR\(=0\). In order to avoid numerical issues, we have adopted the same approach as in de Santi et al. (2022), assigning to these objects artificial values of SFR sampled from a Gaussian distribution \(\mathcal{N}(\mu=-13.5,\sigma=0.5)\).
* The _galaxy size_, parametrised by the stellar half-mass radius \(R_{1/2}^{(*)}\), i.e., the comoving radius containing half of the stellar mass in the subhalo.
## 3 Methodology
NNs are designed to learn how to map an instance, which is characterised by some set of input features \(X\), to a set of output features \(Y\), by weighting and combining the input features. These weights are fitted by minimising a loss function with some optimiser.
In this work, the input features are the halo properties and the outputs are the galaxy properties introduced in §2. Starting with a sample where the target value \(Y\) is known for all instances (the TNG300 catalogue), we split it into training, validation and test sets. The training set is used to fit the model parameters (weights). The validation set is used to monitor overfitting, i.e., to ensure that the model is properly generalising to data outside of the training set, and to fit the model's hyperparameters\({}^{1}\). The test set remains completely blind to the training and validating procedures, and can thus be used to infer the performance of the model when applied to entirely new instances. The training, validation and test sets contain, respectively, 48%, 12% and 40% of the initial sample of 174,527 objects from the TNG300 catalogue.
Footnote 1: In a NN, the model’s parameters are the weights to be learned automatically, while the hyperparameters are the number of layers, neurons, number of epochs, etc., which are often chosen manually.
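As a rough illustration (not the authors' code), the 48/12/40 split can be produced with two successive random splits; the arrays below are stand-ins for the five halo features and one galaxy property:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in arrays: five halo properties per object, one galaxy property.
X = np.random.rand(174_527, 5)
y = np.random.rand(174_527)

# 60/40 first, then 80/20 of the 60%: 0.6*0.8 = 48% train, 0.6*0.2 = 12% val.
X_trva, X_test, y_trva, y_test = train_test_split(X, y, test_size=0.40,
                                                  random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trva, y_trva,
                                                  test_size=0.20,
                                                  random_state=0)
```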
Our goal is to predict central galaxy properties from a set of halo properties. In the context of ML, this would in principle fall in the category of a supervised regression problem. However, traditional regression models are designed to output single values, while any given halo could host many different central galaxies (since the set of halo properties that we use as inputs does not determine exactly the outcome of the galaxy formation process in terms of the precise values of the galaxy properties). This is reflected, as an example, in the well-known scatter in the stellar-to-halo mass relation (Wechsler, 1994; Stiskalek et al., 2022). Therefore, in order to incorporate this uncertainty, we need a model that returns not only a single best-estimate value for each galaxy property, but some proxy for the probability distribution for those properties.
In this paper, we have addressed this issue by converting the regression problem into a classification. The idea is to define \(K\) classes by splitting each galaxy property into \(K\) intervals, or bins. Just like in the usual classification tasks, the model will return a score associated with each class (bin). These scores add up to one, giving a probabilistic interpretation of the output. This approach has been widely used, as an example, in the context of photometric redshift estimation (Sadeh et al., 2016; Pasquet et al., 2019; Lima et al., 2022). We refer to our method, which is based on training NN classifiers, as \(\mathrm{NN}_{\mathrm{class}}\).
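A minimal sketch of this discretisation step is given below; the bin range for log stellar mass is illustrative, not the value used in the paper:

```python
import numpy as np

def to_classes(y, k=50, lo=None, hi=None):
    """Discretise a continuous galaxy property into k equally spaced bins.

    The integer labels are the classification targets; the classifier's
    softmax scores over the k bins then act as a proxy for P(Y).
    """
    lo = y.min() if lo is None else lo
    hi = y.max() if hi is None else hi
    edges = np.linspace(lo, hi, k + 1)
    return np.clip(np.digitize(y, edges) - 1, 0, k - 1)

# e.g. log stellar masses binned over an illustrative range
labels = to_classes(np.array([8.9, 10.2, 11.3]), k=50, lo=8.75, hi=13.0)
```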
As a starting point, we train four models to predict each galaxy property individually as univariate distributions, i.e., we have separate models to predict \(P(M_{*})\), \(P(g-i)\), \(P(\mathrm{sSFR})\), \(P(R_{1/2}^{(*)})\). As we discuss in §4, this approach is sufficient to recover the overall distribution \(P(Y)\) for a given sample. However, this does not guarantee, _a priori_, that the joint distributions are well reproduced. Therefore, we proceed to predict jointly pairs of properties, namely \(P(M_{*},g-i)\), \(P(M_{*},\mathrm{sSFR})\), \(P(g-i,\mathrm{sSFR})\) and \(P(R_{1/2}^{(*)},M_{*})\). Our strategy is similar to the univariate \(P(Y)\) case: we make a grid in the \(\{Y_{1},Y_{2}\}\) subspace in such a way that the output corresponds to pixels in this grid. Although in this paper we restrict ourselves to only two galaxy properties when predicting joint distributions, a similar approach could be used, in principle, to characterise galaxies and define populations using an arbitrary number of properties. This generalisation will be implemented in an upcoming paper.
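In the same spirit, here is a sketch of how a two-property grid can be flattened into \(K^{2}\) classes and inverted; the bin indices are assumed to come from a discretisation like the one above:

```python
import numpy as np

def joint_classes(i1, i2, k=50):
    """Flatten bin indices on a k x k (Y1, Y2) grid into k**2 class labels.

    The predicted softmax scores over the k**2 classes are reshaped back
    to (k, k) to read off the joint distribution P(Y1, Y2) for each halo.
    """
    return i1 * k + i2

# recover the grid indices (and hence bin centres) from a flattened label
label = joint_classes(12, 37)
i1, i2 = np.divmod(label, 50)
```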
Unless otherwise stated, for all the results shown here we set \(K=50\) classes for each one of the central galaxy properties, in equally spaced bins. For stellar mass, for example, this corresponds to bins of 0.085 dex. We must draw attention to the fact that this choice of binning is arbitrary. We have tried different numbers of bins, finding similar results in terms of the recovery of the distributions. Note that more refined versions of NNs that output distributions without binning the properties, and thus keeping it as a regression problem, already exist in the literature. In the context of photo-z estimation, Lima et al. (2022), for example, compares different types of NNs that return distributions, such as Mixture Density Networks (Bishop, 1994), Bayesian NNs, and also NNs following a similar strategy as in this work, with a binning classification scheme. Ho et al. (2021) estimate the probability distribution of the dynamical mass of galaxy clusters and also compare several types of NNs, including a classifier which is similar to our \(\mathrm{NN}_{\mathrm{class}}\). In the context of the halo-galaxy connection, Stiskalek et al. (2022) model the stellar-to-halo mass relation scatter with a Gaussian distribution and train an ensemble of NNs that predicts the mean and standard deviation. We found the binned classification to be a simpler approach that works as a proof of concept. A more careful exploration of alternative methods is left for future work.
Throughout the analysis, we compare our \(\mathrm{NN}_{\mathrm{class}}\) method with the deterministic models developed by de Santi et al. (2022), which we use as our baseline. In that work, several ML models are combined
to return a final, consensus output for the same galaxy properties described in SS2. The two consensus estimators are built from either the "Raw" models, which were trained with the original TNG300 sample, or the "SMOGN" models, which were trained using a data-augmented version of that data set. The SMOGN models were developed because of the difficulty for Raw models to recover the least frequent values of galaxy properties - i.e., to reproduce the tails of the distributions. The SMOGN data augmentation technique is a strategy to handle imbalanced data sets, whereby additional objects are artificially introduced in the training sample in order to force the machine to give more importance to less represented objects (Kunz, 2019).
The specifications of \(\text{NN}_{\text{class}}\) are described as follows. We use the categorical cross-entropy loss function and the adam optimiser to train the networks. The architecture may change depending on the galaxy properties to be predicted. In general, our developed networks have a single intermediate layer, with a number of neurons that typically depends on whether the output is a univariate or a joint distribution. We use the L2 regularisation, which applies a penalty proportional to the square of the model's weights. The number of epochs (iterations) is constrained with an early-stopping criterion based on the validation set loss. In the intermediate layers we use the ReLU function as activation, while in the output layer we use the Softmax function, which is similar to the Sigmoid function, but it normalises the output in such a way that the scores of the \(K\) classes add up to one. In this way, the \(\text{NN}_{\text{class}}\) output works as a proxy for a probability in bins of galaxy properties.
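The description above translates into a very small network. The sketch below uses Keras purely for illustration; the hidden width, L2 strength, and patience are illustrative placeholders rather than the tuned values used in the paper:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_nn_class(n_features=5, n_classes=50, n_hidden=128, l2=1e-4):
    """Minimal sketch of NN_class: one ReLU hidden layer, softmax output."""
    model = keras.Sequential([
        layers.Dense(n_hidden, activation="relu", input_shape=(n_features,),
                     kernel_regularizer=regularizers.l2(l2)),
        # softmax normalises the n_classes scores so that they sum to one
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                     restore_best_weights=True)
# model.fit(X_train, y_train_onehot, validation_data=(X_val, y_val_onehot),
#           epochs=500, callbacks=[stop])
```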
## 4 Results
Fig. 1 shows the distributions of the galaxies in the test set. The first column is the ground truth, i.e., the TNG300 catalogue. The second column is the \(\text{NN}_{\text{class}}\) prediction of univariate distributions, i.e., galaxy properties predicted independently. With the univariate distributions we can compute the joint distributions as \(P(Y_{1})\cdot P(Y_{2})\), which are shown in the heatmap diagrams. The third column is the \(\text{NN}_{\text{class}}\) prediction for the joint distributions \(P(Y_{1},Y_{2})\), which can be integrated to recover the univariate distributions \(P(Y)\) shown in the marginal plots from the third column, i.e.:
\[P(Y_{i})=\int P(Y_{i},Y_{j})dY_{j}. \tag{1}\]
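In the binned representation, Eq. (1) reduces to a sum over the other axis; a stand-in example:

```python
import numpy as np

# Discrete analogue of Eq. (1): marginalise a predicted (K, K) joint grid.
K = 50
p_joint = np.random.dirichlet(np.ones(K * K)).reshape(K, K)  # stand-in scores
p_y1 = p_joint.sum(axis=1)    # P(Y1): sum over the Y2 bins
p_y2 = p_joint.sum(axis=0)    # P(Y2): sum over the Y1 bins
assert np.isclose(p_y1.sum(), 1.0) and np.isclose(p_y2.sum(), 1.0)
```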
The univariate distributions predicted by \(\text{NN}_{\text{class}}\), shown in black solid lines in the second-column plots of Fig. 1, are in excellent agreement with the true distributions from TNG300, shown in grey shaded regions. They also reproduce fairly well the joint distributions \(P(Y_{1})\cdot P(Y_{2})\) for most cases. The \(P(g-i)\cdot P(\text{sSFR})\) joint distribution, however, fails to reproduce the shape of the distribution for redder colours and lower sSFRs. According to this prediction, red galaxies could have virtually any value of sSFR, while what we actually observe in TNG300 is that as galaxies move from the blue peak to the red peak, their sSFRs decrease. This important feature is recovered when \(\text{NN}_{\text{class}}\) is trained to predict \(P(g-i,\text{sSFR})\) jointly (third column in Fig. 1).
The above result indicates that our input halo properties alone are unable to predict accurately the correlations between colour and sSFR. The model would need additional features in order to capture this relation. It is interesting, however, that we can overcome this limitation by predicting the joint distribution directly using only the presented halo properties. This exercise indicates that, in order to robustly assign galaxies to haloes, with all the properties consistently correlated, the properties should be predicted together. Note that, in principle, one could define galaxy populations based on as many parameters as wished. Therefore, in the most general case, we would have an \(N\)-dimensional distribution associated to each host halo.
As a complementary analysis, Fig. 2 shows two additional well-known relations in the context of the halo-galaxy connection: the stellar-to-halo mass relation, and the galaxy size-halo mass relation obtained with TNG300 and with \(P(M_{*})\) and \(P(R_{1/2}^{(*)})\) predicted by \(\text{NN}_{\text{class}}\).
Figures 1 and 2 allow for a visual inspection of the results. In order to quantify the similarity between the distributions, we have performed the Kolmogorov-Smirnov (KS) test, which measures the maximum distance between cumulative distributions (for more details, see Ivezic et al., 2014):
\[\Delta=\max(|F_{1}-F_{2}|). \tag{2}\]
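For the 1D case, this statistic is readily computed with SciPy; the samples below are stand-ins for true and predicted values of a single property:

```python
import numpy as np
from scipy import stats

# Two-sample KS distance of Eq. (2) between true and predicted values.
y_true = np.random.normal(10.5, 0.50, size=5_000)   # stand-in samples
y_pred = np.random.normal(10.5, 0.55, size=5_000)
delta, _ = stats.ks_2samp(y_true, y_pred)           # max |F1 - F2|
```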
The results are shown in Table 1. For comparison, we also show the values obtained with our baseline models, Raw and SMOGN, from de Santi et al. (2022). Once again, we see that for most cases the independent prediction of univariate distributions reproduces the joint distributions fairly well, except for colour and sSFR. In all cases, \(\text{NN}_{\text{class}}\) provides significantly lower values as compared to Raw and SMOGN.
So far, we have focused on the combined distributions for the entire test sample. We now turn our attention to individual objects and the probability distributions that our ML machinery predicts for them. In particular, Fig. 3 displays, in a similar format to that of Fig. 1, some examples of the joint probability distribution \(P(M_{*},g-i)\) for three illustrative cases: a red object, a blue object, and an object lying at the so-called green valley region (from left to right). In each panel, the host halo mass is specified on the top, whereas the true TNG300 values of stellar mass and colour are shown as the dashed lines. As a reference, we also include in the marginal plots the distributions of the objects in the test set within a bin of \(\pm 0.1\) in halo mass around the values indicated on the top of the plots.
The first thing to notice from Fig. 3 is that the distributions are significantly narrower along the x-axis, as compared to the y-axis. This is of course expected, since stellar mass is the galaxy property that displays a tighter relation with the halo properties (particularly with halo mass), and therefore is the easiest to predict. It is also noteworthy that not all distributions can be well approximated by a Gaussian distribution. Some distributions are significantly skewed or, depending on halo mass, even bimodal, reflecting the well-known colour/sSFR bimodality of the galaxy population (e.g., Baldry et al., 2004).
The red galaxy on the left-hand panel shows very little scatter in colour. This is typically the case for red galaxies hosted by haloes with \(\log_{10}(M_{\text{vir}}[h^{-1}M_{\odot}])\gtrsim 12.5\). By visually inspecting Fig. 1 and Fig. 2, we can get a sense as to why this happens: massive haloes are typically populated by massive galaxies, since the scatter in the stellar-to-halo mass relation is small. Massive galaxies are almost exclusively very red, which explains why the machine predicts a very narrow distribution of colours from the set of halo properties employed. The situation is very different for the blue galaxy featured in the middle panel. In this case, the predicted colour distribution is much broader than that for the red galaxy. Here, the host halo mass is much smaller, which implies a larger scatter in the stellar-to-halo mass relation. On top of that, blue galaxies intrinsically display a wide range of colours. All this uncertainty is captured by the machine in terms of a wider colour distribution.
Finally, the green-valley galaxy on the right-hand panel of Fig. 3 represents the most extreme case of the three, where the colour
Figure 1: Distributions of galaxy properties. From top to bottom: colour vs. stellar mass, sSFR vs. stellar mass, sSFR vs. colour, and radius vs. stellar mass. The first column shows the true distributions from TNG300. The second column shows the distributions computed from the univariate distributions as predicted by \(\rm NN_{class}\) – i.e., predicted independently from each other. The third column shows the joint distributions as predicted by \(\rm NN_{class}\). The grey shaded regions in the marginal plots correspond to the TNG300 distributions, while the black solid lines correspond to the \(\rm NN_{class}\) predictions. The univariate distributions shown in the third column plots were computed by marginalising the joint distributions.
degeneracy produces a bimodal distribution. These objects are caught between two intrinsically different populations, i.e., the blue cloud and the red sequence. The analysis of individual distributions reveals that these objects are the ones that display a weaker relation with the properties of their host haloes (at least the ones analysed in this work). As discussed in de Santi et al. (2022), these objects exemplify the most clear case where halo properties alone seem insufficient to predict the colour/sSFR, thus emphasising the advantages of our probability-based methodology.
This probability distribution description on an individual-object basis allows us to explore the dependence of galaxy properties on secondary halo properties at fixed halo mass (a dependence that is closely related to the so-called galaxy assembly bias effect, see, e.g., Wechsler and Tinker, 2018; Sato-Polito et al., 2019; Montero-Dorta et al., 2020, 2021). In particular, we have analysed the dependence of \(P(M_{*},g-i)\) on halo age at fixed halo mass for green-valley objects. To this end, we selected objects in the test sample with predicted colour within the range \(0.80<g-i\leq 1.05\) and halo masses of \(11.8<\log_{10}(M_{\rm vir}[h^{-1}{\rm M}_{\odot}])<12.2\) (we have checked that choosing a narrower halo mass range would not alter our results significantly). This subset was subsequently split by halo age (taking the 15% and 85% quantiles). For younger haloes, a stack of all distributions still reveals some bimodality in colour, albeit with a stronger preference for the blue peak. The predicted probability distribution for green-valley galaxies in older haloes is, conversely, much more skewed towards redder colours. The tail of the distribution for these objects still covers the green valley, which means that in some realisations these host haloes will be populated by a green-valley central galaxy (although the probability for this to happen is low). These results are reassuring in terms of the robustness of our methodology, demonstrating that our probability description is capable of capturing secondary halo dependencies.
## 5 Power Spectrum
With the help of the method presented in this work we have greater flexibility to define different tracers based on galaxy properties. In this section, we explore the performance of \(\rm NN_{class}\) in terms of the accuracy with which we can reproduce the power spectra of those tracers. We compute spectra for tracers in the test set, using the python package nbodykit (Hand et al., 2018). For the true TNG300 catalogue we use the positions of the central galaxies, but for the predictions we use the positions of the host haloes. Once again, we compare \(\rm NN_{class}\) with the baseline models from de Santi et al. (2022). As a complementary analysis, in Appendix B we compare the power spectra of tracers defined according to the same criteria of that previous work, which are based on individual galaxy properties.
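A minimal sketch of such a measurement with nbodykit follows; the positions are stand-ins, and the mesh resolution and \(k\)-binning are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np
from nbodykit.lab import ArrayCatalog, FFTPower

# Stand-in tracer positions in a periodic box with the TNG300 side length.
pos = np.random.uniform(0.0, 205.0, size=(10_000, 3))
cat = ArrayCatalog({'Position': pos}, BoxSize=[205.0] * 3)

mesh = cat.to_mesh(Nmesh=256, compensated=True)
result = FFTPower(mesh, mode='1d', kmin=0.03, dk=0.05)
k = result.power['k']
pk = result.power['power'].real - result.attrs['shotnoise']
```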
Since TNG300 is a single box, the uncertainties of the spectrum on each bandpower \(k_{i}\), for each tracer \(\alpha\), are computed according to the theoretical (Gaussian) covariance, i.e.:
\[\frac{\sigma_{\alpha,i}^{2}}{P_{\alpha,i}^{2}}=\frac{2}{V\bar{V}}\left(\frac{ 1+\bar{n}_{\alpha}P_{\alpha,i}}{\bar{n}_{\alpha}P_{\alpha,i}}\right)^{2}, \tag{3}\]
with \(\bar{V}=4\pi k_{i}^{2}\Delta k/(2\pi)^{3}\), and the residuals are defined as
\[\frac{\left(P_{\alpha,i}^{\rm pred}-P_{\alpha,i}^{\rm TNG300}\right)^{2}}{\sigma_{\alpha,i}^{2}}\,. \tag{4}\]
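Equations (3) and (4) translate directly into a few lines of code; a sketch, with illustrative argument names:

```python
import numpy as np

def relative_pk_variance(k, pk, nbar, dk, v_box):
    """Gaussian relative variance of Eq. (3) for band-powers of width dk."""
    v_tilde = 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3
    return (2.0 / (v_box * v_tilde)) * ((1.0 + nbar * pk) / (nbar * pk)) ** 2

def residuals(pk_pred, pk_true, sigma2):
    """Per-bin residuals of Eq. (4); summing them gives a chi^2 statistic.

    sigma2 is the absolute variance, e.g. relative_pk_variance(...) * pk**2.
    """
    return (pk_pred - pk_true) ** 2 / sigma2
```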
Our choice of tracers is driven by the fact that the target selection in galaxy surveys often relies on the analysis of colour-magnitude diagrams (see e.g. Eisenstein et al., 2001, 2011; Zhou et al., 2020). One of the most common ways to define galaxy populations is in terms of the red sequence and the blue cloud, which can also be clearly distinguished in the colour-stellar mass diagram, as shown in Fig. 1. They are two distinct populations with different biases, hence their interest for studies of large scale structure.
In a similar fashion, we defined seven tracers (\(\alpha=1,\ldots,7\)) based on the colour-stellar mass diagram, \(P(M_{*},g-i)\). We split red galaxies (\(g-i>1.05\)) into lower (\(\alpha=1\)) and higher (\(\alpha=2\)) stellar masses. Conversely, "green-valley" galaxies (defined as \(0.80<g-i\leq 1.05\)) are split into three mass bins, leading to populations \(\alpha=3,4,5\). Finally, blue galaxies (\(g-i\leq 0.8\)) are separated into lower (\(\alpha=6\)) and higher (\(\alpha=7\)) stellar mass bins. This selection is outlined in Table 2, and it is represented in the lower right corner of Fig. 4.
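Given predicted stellar masses and colours, the tracer selection amounts to simple boolean cuts; a sketch with stand-in arrays, following the Table 2 boundaries:

```python
import numpy as np

# Stand-in predicted values per galaxy; the cuts follow Table 2.
log_mstar = np.random.uniform(8.75, 11.5, size=50_000)
colour = np.random.uniform(0.2, 1.4, size=50_000)

red = colour > 1.05
green = (colour > 0.80) & (colour <= 1.05)
blue = colour <= 0.80
alpha4 = green & (log_mstar > 9.5) & (log_mstar <= 10.5)  # mid-mass green valley
```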
An interesting feature of the probabilistic approach is that each
| **1D KS** | \(P(Y)\) | Raw | SMOGN |
| --- | --- | --- | --- |
| \(P(M_{*})\) | 0.002 | 0.064 | 0.064 |
| \(P(g-i)\) | 0.004 | 0.181 | 0.116 |
| \(P(\mathrm{sSFR})\) | 0.004 | 0.213 | 0.168 |
| \(P(R_{1/2}^{(*)})\) | 0.009 | 0.217 | 0.110 |

| **2D KS** | \(P(Y_{1})\cdot P(Y_{2})\) | \(P(Y_{1},Y_{2})\) | Raw | SMOGN |
| --- | --- | --- | --- | --- |
| \(P(M_{*},g-i)\) | 0.010 | 0.005 | 0.183 | 0.163 |
| \(P(M_{*},\mathrm{sSFR})\) | 0.012 | 0.009 | 0.253 | 0.209 |
| \(P(g-i,\mathrm{sSFR})\) | 0.110 | 0.009 | 0.266 | 0.176 |
| \(P(M_{*},R_{1/2}^{(*)})\) | 0.015 | 0.007 | 0.217 | 0.150 |
| \(P(M_{\rm vir},M_{*})\) | 0.008 | – | 0.064 | 0.064 |
| \(P(M_{\rm vir},R_{1/2}^{(*)})\) | 0.012 | – | 0.217 | 0.110 |

Table 1: KS test values for univariate (1D) and joint (2D) distributions computed with the NNs and the baseline models.
Figure 2: Stellar-to-halo mass relation (top) and galaxy size–halo mass relation (bottom) from the TNG300 catalogue (left) and from \(\rm NN_{class}\) predictions (right).
galaxy is generated through a realisation of a probability distribution spreading over many bins. As a consequence, we can build many catalogues of central galaxy properties by drawing values \(y_{1},y_{2}\) from \(P(Y_{1},Y_{2})\). We have performed \(r=42\) realisations of \(P(M_{*},g-i)\), leading to as many values of \(M_{*}\) and \(g-i\) for each halo. We then compute the spectrum of each of these samples, and from that the mean and variance of the spectra. For the mean spectrum \(\hat{P}_{\alpha,i}\), we compute the uncertainties according to Eq. (3).
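A sketch of this sampling step, with stand-in per-halo score grids in place of the actual \(\rm NN_{class}\) outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_real, n_haloes = 50, 42, 1_000

# Stand-in per-halo scores on the K x K (M*, g-i) grid, each summing to one.
p_joint = rng.dirichlet(np.ones(K * K), size=n_haloes)

# One pixel draw per halo per realisation; each realisation is a catalogue.
draws = np.array([[rng.choice(K * K, p=p) for p in p_joint]
                  for _ in range(n_real)])
i_mstar, j_colour = np.divmod(draws, K)   # back to grid indices
# bin centres would then map (i, j) to physical M* and g-i values
```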
Fig. 4 shows the power spectra and residuals of the seven tracers defined in terms of \(P(M_{*},g-i)\) - see Table 2. Tracers \(\alpha=3,4\) are relatively rare, hence their corresponding regions in colour-stellar mass space are poorly populated by single-point estimators. Therefore, a model that predicts galaxies in these regimes improves the quality of the fit considerably - i.e., it reduces the \(\chi^{2}\). We had already seen an improvement with the SMOGN models, which better recover this region as compared to the Raw models, but with NN\({}_{\rm class}\) this improvement is even more pronounced. There are only a few \(\alpha=5\) galaxies in TNG300, which makes this population very sparse. In particular, it has the largest variance over realisations. Conversely, all models are equally good at reproducing the power spectra of tracer populations closer to the peaks of the probability distributions: for \(\alpha=1,2,6,7\), the \(\chi^{2}\) is comparable between all models.
As discussed above, we are able to draw multiple samples from the probabilities predicted by NN\({}_{\rm class}\). Each realisation leads to slightly different power spectra, as can be seen in Fig. 4. By computing the variance of the multiple \(P(k)\) we can assess the uncertainties due to the intrinsic stochasticity in the halo-galaxy connection. Fig. 5 compares the relative errors \(\sigma^{2}/P_{\rm TNG300}^{2}(k)\) computed using \(\sigma_{\rm CV}^{2}\), from Eq. 3 (which encodes the uncertainty due to cosmic variance, CV), with \(\sigma_{\rm NN_{class}}^{2}\), which encodes the statistical uncertainties in the halo-galaxy connection estimated with NN\({}_{\rm class}\). As we already saw in Fig. 4, the cosmic variance error bars are typically larger than the scatter in the power spectra due to the multiple realisations of the NN\({}_{\rm class}\) probabilities. The contribution of \(\sigma_{\rm NN_{class}}^{2}\) seems more relevant for the tracer population 5, which is very sparse. However, for all tracers \(\sigma_{\rm CV}^{2}\) decreases towards smaller scales (due to the Fourier bin volume), while \(\sigma_{\rm NN_{class}}\) remains approximately constant. Therefore, the relative contribution of \(\sigma_{\rm NN_{class}}\) to the total error budget of the power spectra appears to become more important at smaller scales.
Even though we see no evidence of a bias associated with this additional source of statistical uncertainties, the stochastic nature of the relationship between galaxies and their haloes may present further challenges for multi-tracer analyses of LSS (Seljak, 2009; McDonald & Seljak, 2009). The advantages of the multi-tracer technique are reliant upon the partial cancellation of cosmic variance that results from clustering measurements from different galaxy types that are assumed to reflect the same underlying dark matter density field - in that respect see also Abramo & Leonard (2013); Abramo et al. (2016). The "stochastic bias" associated with the nature of the galaxy-halo connection can dilute some of the expected cosmic variance cancellation. However, that stochastic component seems to affect mostly the power spectra on small scales, where non-linear effects already limit our ability to employ the multi-tracer technique effectively - see, e.g., Montero-Dorta et al. (2020a).
## 6 Discussion and Conclusions
Although there is an obvious relation between the baryonic and DM components of haloes, there is also mounting evidence that the properties of haloes alone are insufficient to reproduce the properties of galaxies, since the latter are shaped by a variety of galaxy-formation processes. On the other hand, ML regression models are traditionally designed to reproduce single-value statistics, and thus are ill-equipped to encode the intrinsic scatter in the halo-galaxy connection. Building on the recent work of de Santi et al. (2022), here we use the TNG300 hydrodynamical simulation in combination with NNs to map the connection between the properties of central galaxies and the properties of their hosting haloes. As in the aforementioned work, NNs are trained to reproduce the stellar mass, \(g-i\) colour, sSFR and radius of TNG300 galaxies based on a set of halo/environmental properties that include virial mass, concentration, formation redshift, spin, and overdensity (computed over scales of 3 \(h^{-1}\)Mpc). In order
| Tracer | \(\log_{10}(M_{*}[h^{-1}{\rm M}_{\odot}])\) | \(g-i\) | # objects |
| --- | --- | --- | --- |
| \(\alpha=1\) | \((9.5,\,10.5]\) | \(>1.05\) | 4,073 |
| \(\alpha=2\) | \(>10.5\) | \(>1.05\) | 5,207 |
| \(\alpha=3\) | \(\leq 9.5\) | \((0.80,\,1.05]\) | 4,786 |
| \(\alpha=4\) | \((9.5,\,10.5]\) | \((0.80,\,1.05]\) | 5,950 |
| \(\alpha=5\) | \(>10.5\) | \((0.80,\,1.05]\) | 1,267 |
| \(\alpha=6\) | \(\leq 9.5\) | \(\leq 0.80\) | 29,695 |
| \(\alpha=7\) | \((9.5,\,10.5]\) | \(\leq 0.80\) | 18,432 |

Table 2: Criteria for splitting central galaxies by stellar mass and colour, in order to define the tracers used in the power spectrum analysis.
Figure 3: \(P(M_{*},g-i)\) for individual objects predicted by NN\({}_{\rm class}\). The dashed green lines show the true values for stellar mass and colour from TNG300. The shaded regions in the marginal plots are the distributions of objects with similar halo mass as indicated on the top of the corresponding panel.
to alleviate the deficiencies of ML deterministic regression models, we have tested a different approach for the first time in the context of the halo-galaxy connection. The NNs are now trained to predict probability distributions instead of single-value statistics by means of a binning classification scheme. In essence, the distributions of galaxy properties are split into \(K\) narrow bins so that the NNs can associate a score to each of the \(K\) classes. This is performed in such a way that the output can be used as a proxy for the probability distributions of the central galaxy properties.
We have shown that this approach is in fact capable of producing bivariate distributions of galaxy properties, i.e., \(P(Y_{1},Y_{2})\), in outstanding agreement with those from TNG300 (here, \(\{Y_{1},Y_{2}\}\) is any pair of galaxy properties). These joint distributions can be compared with the product of the two 1D (disjoint) distributions, \(P(Y_{1})\cdot P(Y_{2})\).
Figure 4: Power spectra and residuals for seven tracers selected on the basis of the colour-stellar mass diagram (bottom right panel). The green solid lines correspond to TNG300, while the light purple solid lines correspond to spectra from \(r=42\) samples drawn from the probabilities predicted by NN\({}_{\rm class}\). The dark purple, thick dashed lines correspond to the mean of those realisations. The baseline models are shown in orange: darker dotted lines correspond to the Raw model and lighter dotted-dashed lines correspond to the SMOGN model.
Figure 5: Relative error for seven tracers selected based on the colour-stellar mass diagram. The variances are normalised by the TNG300 spectrum \(P_{T}(k)\) of each tracer \(\alpha\). Orange dotted lines correspond to the relative error computed with Eq. (3), purple dashed lines correspond to the relative error computed with NN\({}_{\rm class}\) and green solid lines correspond to the total relative error.