# Spatio-Temporal Graph Neural Networks for Predictive Learning in Urban Computing: A Survey

Guangyin Jin, Yuxuan Liang, Yuchen Fang, Zezhi Shao, Jincai Huang, Junbo Zhang, Yu Zheng

Published: 2023-03-25 | arXiv: http://arxiv.org/abs/2303.14483v3
###### Abstract
With recent advances in sensing technologies, a myriad of spatio-temporal data has been generated and recorded in smart cities. Forecasting the evolution patterns of spatio-temporal data is an important yet demanding aspect of urban computing, which can enhance intelligent management decisions in various fields, including transportation, environment, climate, public safety, healthcare, and others. Traditional statistical and deep learning methods struggle to capture complex correlations in urban spatio-temporal data. To this end, Spatio-Temporal Graph Neural Networks (STGNNs) have been proposed and have shown great promise in recent years. STGNNs enable the extraction of complex spatio-temporal dependencies by integrating graph neural networks (GNNs) and various temporal learning methods. In this manuscript, we provide a comprehensive survey of recent progress in STGNN technologies for predictive learning in urban computing. First, we briefly introduce the construction methods of spatio-temporal graph data and the prevalent deep learning architectures used in STGNNs. We then sort out the primary application domains and specific predictive learning tasks based on the existing literature. Afterward, we scrutinize the design of STGNNs and their combination with advanced technologies in recent years. Finally, we summarize the limitations of existing research and suggest potential directions for future work.
Spatio-Temporal Data Mining, Graph Neural Networks, Urban Computing, Predictive Learning, Time Series
## 1 Introduction
With the rapid advancement of sensing and data stream processing technologies, vast amounts of data in urban systems can be efficiently collected and stored. This has laid the foundation for the era of urban computing, which aims to understand urban patterns and dynamics across the application domains where big data is exploding, such as transportation, environment, and climate. Predictive learning is a typical supervised learning paradigm that learns from historical data to forecast future trends. According to urban computing theories [1], predictive learning based on massive urban data constitutes the most important loop, forming the foundation for intelligent decision-making, scheduling, and management in smart cities. In addition, the predictability of urban big data can also enable the development of emerging technologies such as digital twin cities and the metaverse [2].
The majority of urban data is spatio-temporal, representing that it pertains not only to spatial locations but also changes over time. Within urban systems, spatio-temporal data exhibits ubiquitous properties of _correlation_ and _heterogeneity_[3]. Correlation refers to the data being auto-correlated not only in the temporal dimension, but also in the spatial dimension. Heterogeneity is a property of spatio-temporal data wherein it displays varying patterns across different temporal or spatial ranges, as illustrated in Figure 1. The complex nature of the above characteristics has resulted in an increased difficulty in feature engineering. As a consequence, some methods that performed well in traditional time series forecasting, such as Support Vector Regression (SVR) [4], Random Forest (RF) [5], and Gradient Boosting Decision Tree (GBDT) [6], are less effective in achieving accurate prediction results. In the past decade, the rapid development of deep learning technologies has led to the emergence of hybrid neural networks based on Convolutional Neural Networks (CNN) [7] and Recurrent
Fig. 1: An example of spatio-temporal heterogeneity. (a) Districts with different functionality in an urban network. (b) Statistics of crowd flow data for nodes in various districts. Despite all nodes exhibiting evident peak patterns, there are notable variations in crowd flow values across nodes located in different districts. Conversely, nodes situated within the same district display similar values even if they differ in their location, _e.g._, the case with nodes 3 and 4.
Neural Networks (RNN) [8]. These hybrid networks (_e.g._, ConvLSTM [9], PredRNN [10]) have been increasingly applied to predictive learning of urban spatio-temporal data and have shown significant advantages. However, the major limitation of these methods is their inability to learn directly from non-Euclidean data existing in urban systems, such as vehicle flows over road networks, traffic on route networks, and entities in urban knowledge graphs.
Over the past few years, there have been significant breakthroughs in representation learning of non-Euclidean data through deep learning techniques, particularly Graph Neural Networks (GNN) [11]. This has paved the way for predictive learning of diverse and intricate urban data. Given the spatio-temporal characteristics of urban data, such as traffic flows, a line of studies integrated GNNs with various temporal learning methods to capture dynamics in both the space and time dimensions [3]. This type of hybrid neural architecture is generally known as the **Spatio-Temporal Graph Neural Network (STGNN)**. Recently, STGNNs have been widely used in predictive learning scenarios in urban computing, including transportation, environment, public safety, health, energy, economy, and other fields. Using Google Scholar, we performed keyword searches and tallied the relevant publications over the past five years. As depicted in Figure 2, the number of STGNN-related papers has surged year by year: in 2018 there were fewer than 20 papers, while in 2022 the number reached nearly 140. This trend indicates that STGNN-related applications have become a highly sought-after research area in recent years. Notably, a majority of these publications concentrate on predictive learning tasks.
**Related Surveys.** In recent years, there have been a few related surveys on the applications of STGNN-based predictive learning techniques across different fields. Wang et al. [3] reviewed deep learning techniques for spatio-temporal data mining, covering a series of STGNNs for predictive learning up to 2020. Several surveys [12, 13, 14] have also investigated the rapid growth of STGNNs in the transportation domain. Specifically, [12] analyzed multiple practical problems and revisited related works on prediction, detection, and control problems in urban traffic systems. Bui et al. [13] and Jiang et al. [14] focused on the latest STGNN technologies for traffic forecasting tasks. Gao et al. [15] summarized a wide range of applications using Generative Adversarial Networks (GAN) for spatio-temporal data learning, including some approaches combined with spatio-temporal graph data.
**Our Contributions.** In contrast to prior surveys, the contributions of our survey lie in four aspects:
* To our knowledge, this is the first comprehensive survey to systematically review recent studies that use STGNNs for predictive learning in urban computing. We scrutinize the progress of STGNN from both application and methodology perspectives based on extensive literature.
* We categorize the primary application domains as well as particular predictive learning tasks of STGNNs in urban computing, and sort out a list of public datasets attached with the previous works on STGNNs.
* We provide an in-depth analysis of existing STGNN methods for temporal learning, spatial learning, and spatio-temporal fusion. We further examine some recently emerging approaches that integrated STGNN with other advanced learning frameworks.
* We summarize the challenges shared by STGNNs for predictive learning tasks in urban computing and suggest future directions for addressing these challenging issues.
**Organization.** The rest is organized as follows. Section 2 illustrates how to construct spatio-temporal graphs based on prior knowledge. In Section 3, a taxonomy of STGNNs for predictive learning in urban computing is presented. Section 4 overviews various predictive learning tasks from different domains that can be addressed by STGNNs. Section 5 delineates the fundamental deep learning architectures commonly used in STGNNs. Section 6 and Section 7 delve into an in-depth analysis of the neural architecture design methods of STGNNs and popular advanced techniques that can be combined, respectively. Section 8 further highlights the limitations of existing works and suggests future directions. Finally, we conclude this survey in Section 9.
## 2 Spatio-Temporal Graph Construction
Suppose we obtain some observations from sensors, denoted as \(\mathbf{X}=\{x_{t}\in\mathbb{R}^{N\times F}|t=0,\dots,T\}\), where \(N\) is the number of spatial vertices and \(F\) is the number of features. **Spatio-temporal Graph** is an efficient structure to characterize the relationships between different vertices in a certain spatial and temporal range. We can represent a spatio-temporal graph as \(\mathcal{G}_{t}=(\mathcal{V},\mathcal{E}_{t},\mathbf{A}_{t})\), where \(\mathcal{V}\) is the vertex set, \(\mathcal{E}_{t}\) is the edge set, and \(\mathbf{A}_{t}\) denotes the adjacency matrix at time \(t\). In most scenarios, the size of \(\mathcal{V}\) is static, while the size of \(\mathcal{E}_{t}\) can be time-varying or constant, which indicates that \(\mathbf{A}_{t}\in\mathbb{R}^{N\times N}\) also changes with \(\mathcal{E}_{t}\). In terms of connectivity, spatio-temporal graphs can be either directed or undirected, as well as weighted or unweighted. From the perspective of evolution, the structure of spatio-temporal graphs can be either static or dynamic. Figure 3 illustrates the difference between static and dynamic spatio-temporal graphs. The appropriate type of spatio-temporal graph to construct depends on the task and the given data conditions.
Generally, the construction methods of predefined spatio-temporal graphs in urban computing systems can be divided into four categories: topology-based, distance-based, similarity-based, and interaction-based.
Fig. 2: The publication trend of STGNN-related papers in Google Scholar over the past five years. The blue bars represent the total number of relevant publications and the red bars denote those focusing on predictive learning tasks.
**Topology-based graph:** In the context of urban systems, topology-based graphs are usually constructed based on given topology structures, such as road networks [16, 17]. The adjacency matrix of a topology-based graph can be formulated as:
\[a_{ij}^{t}=\begin{cases}1,\text{if }v_{i}\text{ connects to }v_{j}\\ 0,\text{ otherwise}\end{cases}, \tag{1}\]
where \(a_{ij}^{t}\) denotes an element of the adjacency matrix at time \(t\), and \(v_{i}\) and \(v_{j}\) are different vertices in the graph. Since the connections in topology structures can be symmetric or asymmetric, topology-based graphs can be directed or undirected. Topology only represents connections in non-Euclidean space, so topology-based graphs are unweighted. In addition, topology structures in urban systems are usually fixed for quite a long time, so we can treat them as static graphs.
**Distance-based graph:** According to the first law of geography, _i.e._, "Everything is related to everything else, but near things are more related to each other", we can construct a distance-based graph when a predefined topology is absent. In most applications, the elements of the adjacency matrix are calculated using a kernel function of the pairwise distances [18, 19, 20]. The Gaussian radial basis function and the inverse-distance function are two common kernels in the previous literature. For example, the adjacency matrix of a distance-based graph with a Gaussian radial basis kernel can be computed as:
\[a_{ij}^{t}=\begin{cases}\exp\left(-\dfrac{\left(d_{ij}^{t}\right)^{2}}{\sigma^{2}}\right),&\text{if }d_{ij}^{t}<\epsilon,\\ 0,&\text{otherwise},\end{cases} \tag{2}\]
where \(d_{ij}^{t}\) denotes the distance between node \(i\) and node \(j\) at time \(t\); \(\epsilon\) is a predefined threshold to control the sparsity of the adjacency matrix; \(\sigma\) is a hyperparameter to control the distribution.
**Similarity-based graph:** Similarity can provide insights into the relations between different entities from a semantic perspective. Similarity-based graphs can be constructed based on either the proximity of time series [21, 22, 23] or similarity of the spatial attribute, _e.g._, Point of Interest (POI) [24]. In scenarios where additional data is unavailable, similarity-based graphs are typically constructed based on the similarity of time series. Pearson Correlation Coefficient (PCC) and Dynamic Time Wrapping (DTW) are two prevalent methods used to calculate the similarity between time series. For instance, the adjacency matrix of a similarity-based graph computed by PCC is defined as:
\[a_{ij}^{t}=\begin{cases}\dfrac{\sum_{k=1}^{n}\left(x_{i,k}-\bar{x}_{i}\right)\left(x_{j,k}-\bar{x}_{j}\right)}{\sqrt{\sum_{k=1}^{n}\left(x_{i,k}-\bar{x}_{i}\right)^{2}}\sqrt{\sum_{k=1}^{n}\left(x_{j,k}-\bar{x}_{j}\right)^{2}}},&\text{if the correlation exceeds a predefined threshold},\\ 0,&\text{otherwise},\end{cases} \tag{3}\]
where \(x_{i,k}\) and \(x_{j,k}\) denote the \(k\)-th samples of the time series \(x_{i}^{0:t}\) and \(x_{j}^{0:t}\) of node \(i\) and node \(j\) over a given time span \(t\); \(\bar{x}_{i}\) and \(\bar{x}_{j}\) are the corresponding mean values, and \(n\) denotes the number of samples over the time span \(t\).
**Interaction-based graph:** The interaction between different locations can express their connection from the perspective of information flow [18, 20]. This is especially important when representing the characteristics of mobility, as the proportion of flow between two nodes can indicate the strength of their connection. Hence, the adjacency matrix of an interaction-based graph can be written as:
\[a_{ij}^{t}=\begin{cases}\dfrac{F_{ij}^{t}}{\sum_{m\in N(i)}F_{im}^{t}},&\text{if }F_{ij}^{t}>0,\\ 0,&\text{otherwise},\end{cases} \tag{4}\]
where \(F_{ij}^{t}\) denotes the flow from node \(i\) to node \(j\) at time \(t\); \(N(i)\) indicates the set of nodes that interact with node \(i\); \(F_{im}^{t}\) is the flow from node \(i\) to other nodes (_e.g._, \(m\)) in a set \(N(i)\) at time \(t\).
In addition to the common predefined graph construction methods mentioned above, many relations in urban systems are implicit and difficult to predefine directly. Therefore, spatio-temporal graphs based on adaptive learning have been proposed in some recent works. More details about these methods can be found in Section 6.1.2.
## 3 Taxonomy
This section provides a taxonomy of STGNNs for predictive learning in urban computing, which also serves as an outline of the content that follows. As shown in Figure 4, four main parts of our survey need to be highlighted: main application domains, basic spatio-temporal learning neural architectures, improved spatio-temporal learning methods, and advanced methods combined with STGNNs. We present an overview of specific predictive learning tasks based on major application domains in Section 4. In Section 5, we review the fundamental neural architectures of STGNNs from three perspectives: spatial learning, temporal learning, and spatio-temporal fusion. Subsequently, in Section 6, we examine the enhanced spatio-temporal dependency learning methods from the same perspectives as in Section 5. Finally, we discuss the advanced techniques combined with STGNNs in Section 7.
Fig. 3: The schematic diagram of static and dynamic spatio-temporal graphs. The color shades of the nodes represent different features.
## 4 Application Domains & Task Description
This section delves into the primary application domains and specific predictive learning tasks in urban computing. Based on the available literature in recent years, we conducted a statistical analysis of the various application domains of STGNN in urban computing. Figure 5 illustrates the main application domains of STGNN, which encompass transportation, safety, environment, and public health. Among these, transportation is the most widely studied application domain of STGNN, constituting over 60% of the existing literature.
### _Transportation_
Modern urban systems have numerous sensors distributed across traffic road networks and critical regions to monitor changing traffic states, such as flow and speed. The objective of traffic state prediction is to forecast future traffic states based on historical traffic states within a particular spatial range. As shown in Figure 6, traffic state prediction can be divided into two main categories:
* **Network-based prediction.** The object of network-based prediction is usually the traffic flow or speed on a given road network [25, 26, 27, 28, 16, 19]. In most previous works, the basic graph structure can be directly derived from the road network.
* **Region-based prediction.** This task aims to forecast the traffic (_e.g._, crowd flow) in urban areas [29, 30, 31, 32]. In this case, the whole urban area is partitioned into irregular or regular regions, and a spatio-temporal graph can be constructed based on the distances, connectivity, semantic correlations between different regions, etc.
In general, traffic state prediction tasks can be summarized in the following form:
\[\left[\mathbf{X}^{\left(t-T^{\prime}+1\right)},\cdots,\mathbf{X}^{\left(t\right)}; \mathcal{G}\right]\overset{f(\cdot)}{\longrightarrow}\left[\mathbf{X}^{\left(t+1 \right)},\cdots,\mathbf{X}^{\left(t+T\right)}\right], \tag{5}\]
where \(\mathbf{X}^{\left(t\right)}\in\mathbb{R}^{N\times d}\) denotes the traffic states of \(N\) vertices at time step \(t\), \(\mathcal{G}\) is the constructed graph structure, \(f(\cdot)\) is the corresponding STGNN model for making predictions.
Fig. 4: The taxonomy for STGNN in our survey.
Fig. 5: The summary of the different application domains of STGNN in urban computing.
Fig. 6: The two categories of traffic state prediction.
#### 4.1.1 Traffic Demand Prediction
Accurately predicting changing urban traffic demand patterns (_e.g._, taxi order demands, rail transit passenger demands, and shared bike demands) in various regions can facilitate traffic scheduling to alleviate congestion during peak hours. Traffic demands can be broadly categorized into three main types: origin demands, destination demands, and origin-destination (OD) demands. Predicting origin demands and destination demands is similar to region-based traffic state prediction, _i.e._, forecasting future demands based on historical demands in \(N\) regions [20, 24, 33, 34]. However, OD demand prediction is somewhat distinct, as it requires predicting future origin-destination matrices from historical OD matrices [35, 36, 37, 38, 39]. Specifically, the outputs of OD demand prediction are a series of \(N\times N\) matrices that characterize the flow demand between region pairs.
#### 4.1.2 Traffic Incident Prediction
With the dramatic increase in the number of vehicles, more and more traffic incidents such as congestion and accidents have occurred, placing significant pressure on urban traffic management. The aim of traffic incident prediction is to predict important properties (_e.g._, occurrence probability, occurrence time) of incidents that may occur on road networks [40, 41, 42, 43, 44]. Although the predicted objects differ, accurate traffic incident prediction, like traffic state prediction, requires capturing spatio-temporal dependencies on road networks with STGNN models. Compared with the relatively macroscopic traffic state prediction, incident-oriented prediction can respond more precisely to emergencies in the traffic system and support early warning.
#### 4.1.3 Travel Time Prediction
Travel time prediction is highly valued in industry, especially for online map navigation and ride-hailing applications, where accurate travel time estimation can significantly enhance the user experience. This task aims to predict the travel time of a given trajectory using historical traffic states on road networks. To predict travel time accurately, not only the characteristics of the trajectory itself but also the spatio-temporal dynamics (_e.g._, flow, speed) attached to the road network must be considered. Accordingly, spatio-temporal graphs for this task are established based on road networks. So far, large technology companies such as Baidu [45, 46], Google [47], and DiDi [48] have deployed practical travel time prediction functions on their online platforms. STGNN-based travel time prediction can be defined as follows:
\[\mathcal{F}(P_{t}|X_{t-w:t},\mathcal{G})\to T_{g},T_{l} \tag{6}\]
where \(P_{t}\) denotes the given trajectory with departure time \(t\), and \(X_{t-w:t}\) denotes the spatio-temporal features over the historical time window \(w\) attached to the given road network \(\mathcal{G}\). \(T_{g}\) and \(T_{l}\) represent the global travel time of the entire trajectory and the local travel times of individual road segments, respectively.
#### 4.1.4 Trajectory Prediction
Trajectory prediction is a crucial task for comprehending the intricate group dynamics of humans and vehicles [49, 50, 51, 52, 53, 54], and it can foster advancements in autonomous driving and urban monitoring technologies. As shown in Figure 7, there exist correlations or interactions in the movement patterns of agents within a group, so we can build spatio-temporal graphs based on the relations between different agents. Upon creating these spatio-temporal graphs, STGNN models can be devised to predict the coordinates that agents may occupy in the future from their historically traversed coordinates, thereby facilitating the prediction of future trajectories.
#### 4.1.5 Other prediction tasks
Aside from the mainstream transportation application scenarios mentioned earlier, there are also some relatively niche scenarios that use STGNN techniques to improve prediction outcomes. For example, parking availability prediction [55, 56] and traffic delay prediction [57, 58] are two emerging tasks in transportation management that can be addressed with STGNN. Analogous to other prevalent research topics, these prediction tasks adopt STGNNs to better learn spatio-temporal contextual representations on traffic networks, yielding more accurate predictions.
### _Environment_
#### 4.2.1 Air Quality Prediction
Air quality has become a pressing issue that needs immediate attention and improvement. Accurate air quality prediction can not only assist governments in formulating energy-saving and emission-reduction policies but also provide guidance for residents' outdoor activities. The air quality index (AQI), PM2.5 concentration, and emissions are among the most significant indicators of concern. The related data are collected by city-level or national-level monitoring stations [59, 60]. Due to the fluidity of air, monitoring stations that are geospatially close or share the same wind direction may record correlated measurements [61, 62, 63]. Hence, spatio-temporal graph-based deep learning models can both establish such spatial dependencies and capture the time-varying dynamics of air quality.
Fig. 7: An example of social interactions between different agents within a group [49].
#### 4.2.2 Meteorological Prediction
Meteorological forecasting is another research topic intimately connected to the environment and human society. Similar to air quality data, meteorological data are also collected by distributed monitoring stations. However, the correlations between different stations could be more complex and susceptible to a greater number of factors. In recent years, STGNN-based approaches have been progressively applied in various meteorological prediction scenarios such as temperature prediction [64, 65, 66], frost prediction [67] and wind prediction [68, 69, 70], showcasing their superior performance in practice.
### _Public Safety_
#### 4.3.1 Crime Frequency Prediction
Effectively combating and preventing crime is the foundation for ensuring urban safety. Accurate prediction of crime frequency can assist governments in understanding real-time crime dynamics and allocating police resources rationally. Most existing work in this research line focuses on crime frequency prediction in urban areas. Given that different urban regions have distinct functions, POIs, and other characteristics, these factors can contribute to varying crime types and trends. However, regions with similar characteristics or close distances may exhibit latent correlations in crime incidents [71, 72, 73]. Consequently, many previous studies [71, 72, 74, 75, 76, 77, 78, 79, 80] have introduced a series of STGNNs to capture these correlations and reduce prediction errors.
#### 4.3.2 Disaster Situation Prediction
Natural disasters (_e.g._, earthquakes) have posed serious challenges to the safety of human society since ancient times. Accurate disaster situation prediction can enable governments to implement disaster prevention measures, allocate disaster relief materials, and evacuate residents in a timely manner. To model correlated and heterogeneous features across geographical locations, STGNN can be a fruitful approach in this task. Currently, some literature has introduced the STGNN models into scenarios such as flood prediction [82, 83], fire prediction [84, 85], typhoon forecasting [86, 87, 88] and earthquake prediction [89, 90, 91].
### _Public Health_
#### 4.4.1 Epidemic Prediction
Epidemics are one of the greatest challenges to public health systems, especially the novel coronavirus prevalent in recent years, which has caused more than six million deaths worldwide. Therefore, accurately predicting the spread of epidemics is an important but challenging task that can provide data support for strengthening urban public health systems. Some recent works have employed STGNN models to address national-level [92, 93, 94, 95, 96, 97] or international-level [98] epidemic prediction tasks. Many of them combine the mathematical formulations of epidemic dynamics with the modeling of spatio-temporal graphs, achieving better prediction results than traditional methods [97, 98, 99, 100, 101].
#### 4.4.2 Ambulance Demand Prediction
In today's aging society, the allocation of ambulance resources is a challenging task that needs careful consideration. Accurate ambulance demand prediction can effectively alleviate the burden on urban healthcare systems. Since there can be time-varying correlations in public medical resources, traffic conditions, and demand patterns among different regions, STGNN-based methods have increasingly been exploited to learn these multi-view spatial correlations in recent years [102, 103, 104].
### _Other Application Domains_
In addition to the four main application domains mentioned above, other scenarios where spatio-temporal graph structures can be established based on the intrinsic relations of data are potential areas for the development of STGNN-based predictive learning models. In recent years, such models have also been promoted to other domains such as energy, economy, finance, and production. In the energy domain, STGNN models have been utilized for wind power prediction [105, 106] and photovoltaic power prediction [107, 108]. In the economy domain, a typical application is nation-level regional economy prediction, where researchers have explored the use of STGNN models [109, 110]. In the finance domain, STGNN models have been widely applied to stock prediction [111, 112, 113]. In the production domain, Fan et al. [114] first adopted an STGNN model to predict crop yield.
### _Open Datasets in Main Application Domains_
As depicted in Table I, we have compiled a list of some of the most frequently used public datasets from previous works in the primary application domains, including details such as source links and related publications. For example, California-PEMS is a set of real-world traffic datasets used for traffic prediction tasks. These datasets are collected from the California Department of Transportation's Performance Measurement System (PeMS) and are commonly used to evaluate and benchmark various traffic prediction models. The PEMS datasets are named after the Caltrans districts from which the data were collected, such as PEMS-04 and PEMS-08. These datasets are widely used in transportation and urban mobility research, particularly for traffic state prediction. Due to their high granularity, realistic nature, and real-world applicability, they serve as valuable resources for researchers working on spatio-temporal forecasting and traffic modeling.
## 5 Basic Neural Architectures
Here we introduce basic neural architectures for STGNNs. As shown in Figure 8, the basic framework of STGNNs for predictive learning contains three main modules: Data Processing Module (DPM), Spatio-Temporal Graph Learning Module (STGLM) and Task-Aware Prediction Module (TPM). For predictive learning tasks in urban computing, DPM is responsible for constructing the spatio-temporal graph data from the raw data; STGLM seeks to capture hidden spatio-temporal dependencies from complex social systems, while TPM aims to map the spatio-temporal hidden
representations from STGLM into the space of downstream prediction tasks. STGLM serves as the most vital component of STGNNs, which usually combines spatial learning networks and temporal learning networks organically through a certain spatio-temporal fusion method. Spatial learning networks may utilize spectral graph convolutional networks (Spectral GCNs) [151], spatial graph convolutional networks (Spatial GCNs) [11, 152], and graph attention networks (GATs) [153] as potential options. Temporal learning networks, on the other hand, may incorporate recurrent neural networks (RNNs), temporal convolutional networks (TCNs), or temporal self-attention networks (TSANs). Compared with STGLM, TPM is a relatively simple neural network, thus the majority of existing research focuses on the design of the neural architectures in STGLM.
### _Graph Neural Networks_
Graph neural networks (GNNs) are fruitful tools for learning spatial dependencies in non-Euclidean space. In recent years, popular GNNs can be divided into three categories: spectral GCNs, spatial GCNs and GATs.
#### 5.1.1 Spectral Graph Convolutional Network
Initially, most GNNs were based on the Fourier transform, converting graph signals from the spatial domain into the spectral domain to conduct convolution calculations [154]. In this approach, the graph Fourier transform and the inverse graph Fourier transform are required to move between the spatial and spectral domains, and are defined as:
\[\mathcal{F}(\mathbf{x})=\mathbf{U}^{T}\mathbf{x}, \tag{7}\] \[\mathcal{F}^{-1}(\hat{\mathbf{x}})=\mathbf{U}\hat{\mathbf{x}},\]
where \(\mathbf{U}\) denotes the matrix of eigenvectors of the normalized graph Laplacian. Based on this, the graph convolution operation is defined as:
\[\mathbf{g}\star\mathbf{x}=\mathcal{F}^{-1}(\mathcal{F}(\mathbf{g})\odot\mathcal{F}(\mathbf{x} ))=\mathbf{U}\left(\mathbf{U}^{T}\mathbf{g}\odot\mathbf{U}^{T}\mathbf{x}\right), \tag{8}\]
where \(\odot\) denotes the element-wise (Hadamard) product and \(\mathbf{U}^{T}\mathbf{g}\) denotes the filter in the spectral domain. Eq. 8 can be further simplified as:
\[\mathbf{g}_{w}\star\mathbf{x}=\mathbf{U}\mathbf{g}_{w}\mathbf{U}^{T}\mathbf{x}. \tag{9}\]
Most subsequent spectral-domain GNNs mainly improve the computation of \(\mathbf{g}_{w}\). For example, ChebNet is one of the most popular spectral GNN methods. Based on the theory that \(\mathbf{g}_{w}\) can be approximated by a truncated expansion of Chebyshev polynomials [155], Defferrard et al. [151] proposed ChebNet, which can be formulated as:
\[\begin{split}&\mathbf{\tilde{L}}=\frac{2}{\lambda_{\max}}\mathbf{L}- \mathbf{I}_{N},\\ &\mathbf{g}_{w}\star\mathbf{x}=\sum_{k=0}^{K}w_{k}\mathbf{T}_{k}(\mathbf{\tilde{ L}})\mathbf{x},\end{split} \tag{10}\]
where \(\mathbf{\tilde{L}}\) is the scaled graph Laplacian, \(\lambda_{\max}\) is the largest eigenvalue of the normalized Laplacian \(\mathbf{L}\), \(\mathbf{T}_{k}(\cdot)\) denotes the Chebyshev polynomial of order \(k\), and \(w_{k}\) denotes a vector of Chebyshev
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Domain & Dataset & Link & Reference \\ \hline Transportation & California-PEMS & [http://pems.dot.ca.gov/](http://pems.dot.ca.gov/) & [16, 12, 115, 116, 117, 118, 119] \\ \cline{2-4} & METR-LA & [https://www.metro.net/](https://www.metro.net/) & [120, 121, 122, 123, 124, 125, 126, 127] \\ \hline \end{tabular}
\end{table} TABLE I: Frequently used open datasets in the main application domains.
coefficients. ChebNet eliminates the necessity of computing the eigenvectors of the Laplacian by employing the K-localized graph convolution.
#### 5.1.2 Spatial Graph Convolutional Network
While spectral graph convolutional networks (GCNs) have made significant advancements, their primary limitation lies in their dependence on the graph Laplacian matrix. Whenever the underlying graph structure changes, the graph Laplacian matrix must be recomputed, rendering spectral GCNs better suited to scenarios where the graph structure remains constant. To overcome this dependency, Kipf et al. [11] simplified the graph convolution operation by performing message passing in the spatial domain. We call this form the spatial GCN, defined as:
\[\mathbf{g}_{w}\star\mathbf{x}=w\left(\mathbf{I}_{N}+\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{- \frac{1}{2}}\right)\mathbf{x}, \tag{11}\]
where \(\mathbf{A}\) is the adjacency matrix; \(\mathbf{D}\) is the degree matrix; \(w\) is the learnable parameters in the spatial GCN.
However, the above spatial GCN has to take the full graph as input, which makes it hard to scale up to large graphs in practice. To address this problem, GraphSAGE [152] proposes a sampling aggregation approach to achieve flexible inductive learning on large graphs. The aggregation operator in GraphSAGE is formulated as:
\[\mathbf{h}_{\mathcal{N}(u)}^{k}\leftarrow\mathrm{Aggregate}_{k}\left( \left\{\mathbf{h}_{u^{\prime}}^{k-1},\forall u^{\prime}\in\mathcal{N}_{k}(u) \right\}\right), \tag{12}\] \[\mathbf{h}_{u}^{k}\leftarrow\sigma\left(\mathbf{W}^{k}\cdot\mathrm{Concat }\left(\mathbf{h}_{u}^{k-1},\mathbf{h}_{\mathcal{N}(u)}^{k}\right)\right),\]
where \(\mathcal{N}_{k}(u)\) denotes the set of sampled neighbors of node \(u\) at layer \(k\), and \(\mathbf{h}_{\mathcal{N}(u)}^{k}\) denotes the aggregated embedding of the neighborhood of node \(u\).
#### 5.1.3 Graph Attention Network
To account for the importance of neighbor nodes in learning spatial dependencies, GAT [153] integrates the attention mechanism into the node aggregation operation as:
\[\mathbf{h}_{v}^{t+1}=\rho\left(\sum_{u\in\mathcal{N}_{v}}\alpha_{vu}\mathbf{W}\mathbf{h}_{ u}^{t}\right), \tag{13}\]
\[\alpha_{vu}=\frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}\mathbf{h}_{v}\|\mathbf{W}\mathbf{h}_{u}\right]\right)\right)}{\sum_{k\in\mathcal{N}_{v}}\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}\mathbf{h}_{v}\|\mathbf{W}\mathbf{h}_{k}\right]\right)\right)},\]
where \(\alpha_{vu}\) denotes the attention score of neighbor node \(u\) with respect to the central node \(v\), \(\mathbf{W}\) is the weight matrix of the linear transformation applied to each node, and \(\mathbf{a}\) is the weight vector for the attention output. To further stabilize the computation of attention, the multi-head trick can also be introduced in GAT:
\[\mathbf{h}_{v}^{t+1} =\|_{k=1}^{K}\sigma\left(\sum_{u\in\mathcal{N}_{v}}\alpha_{vu}^{k} \mathbf{W}_{k}\mathbf{h}_{u}^{t}\right), \tag{14}\] \[\mathbf{h}_{v}^{t+1} =\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{u\in\mathcal{N}_{v}} \alpha_{vu}^{k}\mathbf{W}_{k}\mathbf{h}_{u}^{t}\right),\]
where \(\alpha_{vu}^{k}\) is the normalized attention score computed by the \(k_{th}\) attention head. The aggregation method for multiple attention heads can be achieved by concatenation or an averaging operation.
### _Recurrent Neural Networks_
Recurrent neural networks (RNNs) are a class of deep neural networks for sequential learning based on recursive computations, and they have found extensive applications in time series modeling. However, the vanilla RNN suffers from a significant limitation: the vanishing or exploding gradient problem during training [156]. In response to this challenge, two of the most prominent variants of RNNs, _i.e._, Long Short-Term Memory (LSTM) [157] and Gated Recurrent Units (GRU) [158], have been proposed.
#### 5.2.1 Long-Short Term Memory Network
To address the gradient vanishing/explosion problem, LSTM first introduces a new gated mechanism to control the information flow by selectively retaining and forgetting temporal information in each time step. The formulation of LSTM is defined as:
\[\mathbf{f}_{t} =\sigma\left(\mathbf{W}_{f}\cdot\left[\mathbf{h}_{t-1},\mathbf{x}_{t}\right] +b_{f}\right), \tag{15}\] \[\mathbf{i}_{t} =\sigma\left(\mathbf{W}_{i}\cdot\left[\mathbf{h}_{t-1},\mathbf{x}_{t}\right] +b_{i}\right),\] \[\mathbf{o}_{t} =\sigma\left(\mathbf{W}_{o}\cdot\left[\mathbf{h}_{t-1},\mathbf{x}_{t}\right] +b_{o}\right),\] \[\tilde{\mathbf{C}}_{t} =\tanh\left(\mathbf{W}_{C}\cdot\left[\mathbf{h}_{t-1},\mathbf{x}_{t}\right] +b_{C}\right),\] \[\mathbf{C}_{t} =\mathbf{f}_{t}\ast\mathbf{C}_{t-1}+\mathbf{i}_{t}\ast\mathbf{\tilde{C}}_{t},\] \[\mathbf{h}_{t} =\mathbf{o}_{t}\ast\tanh\left(\mathbf{C}_{t}\right),\]
where \(\mathbf{f}_{t}\) represents the forget gate, whose function is to discard historical information in a certain proportion; \(\mathbf{i}_{t}\) denotes the input gate, which updates the information of the current time step; and \(\mathbf{o}_{t}\) is the output gate, which controls the current output information in a certain ratio. \(\sigma(\cdot)\) represents the sigmoid function, so the output values of the forget, input, and output gates all lie between 0 and 1. \(\tilde{\mathbf{C}}_{t}\) represents the candidate state of the current LSTM unit, and \(\mathbf{C}_{t}\) represents the state of the LSTM unit after the forget gate and input gate are applied, where the forget gate acts on the unit state of the previous time step and the input gate acts on the current candidate state. Finally, the hidden state of the time step is obtained through the output gate.
#### 5.2.2 Gated Recurrent Unit Network
Due to the integration of multiple gates at each time step, LSTM bears a comparatively high computational burden. To this end, GRU simplifies the gates of each unit into two: an update gate and a reset gate. \(\mathbf{u}_{t}\) represents the update gate, which determines how to combine the information of the new input with the memory of the previous time step. \(\mathbf{r}_{t}\) represents the reset gate, which defines the amount of memory retained from the previous time step. Although the learnable parameters of GRU are streamlined, its performance is comparable to LSTM in previous works while improving training and inference efficiency. The calculation process of GRU is defined as follows:
\[\mathbf{u}_{t} =\sigma(\mathbf{W}_{u}\cdot x_{t}+\mathbf{U}_{u}\cdot\mathbf{C}_{t-1}+b_{u}), \tag{16}\] \[\mathbf{r}_{t} =\sigma(\mathbf{W}_{r}\cdot x_{t}+\mathbf{U}_{r}\cdot\mathbf{C}_{t-1}+b_{r}),\] \[\tilde{\mathbf{C}}_{t} =\tanh(\mathbf{W}_{C}\cdot x_{t}+\mathbf{U}_{C}(\mathbf{r}_{t}\odot\mathbf{C}_{t-1} )+b_{C}),\] \[\mathbf{C}_{t} =\mathbf{u}_{t}\odot\mathbf{C}_{t-1}+(1-\mathbf{u}_{t})\odot\tilde{\mathbf{C}}_{t}.\]
### _Temporal Convolutional Networks_
RNNs have been extensively applied for temporal learning in many spatio-temporal tasks, but their disadvantage is readily apparent: the recurrent structures necessitate the computation of sequences at every time step, leading to a substantial increase in computational cost and a consequent decrease in model efficiency. In contrast, Temporal Convolutional Networks (TCN) with their parallel 1D-CNN structures can address this problem effectively. Similar to 2D-CNN applied in image recognition, 1D-CNN also operates and aggregates features through convolution kernels. The major difference is that the convolution kernel of 1D-CNN is one-dimensional and only slides on the time axis.
#### 5.3.1 Gated Temporal Convolutional Network
Inspired by the gated mechanism in LSTMs and GRUs, we can also integrate it with a pure 1D-CNN architecture to enhance the capability of temporal learning. We call this hybrid neural architecture a gated temporal convolutional network (Gated-TCN) [159]. The calculation process of Gated-TCN is defined as follows:
\[F(x)=tanh(\mathbf{\Theta_{1}\star}x)\odot\sigma(\mathbf{\Theta_{2}\star}x), \tag{17}\]
where \(\mathbf{\Theta_{1}}\) and \(\mathbf{\Theta_{2}}\) represent the learnable parameters of the convolution kernels in two different 1D-CNNs; \(\star\) denotes the convolution operation; \(\odot\) is element-wise multiplication; \(\sigma(\mathbf{\Theta_{2}\star}x)\) is the gating unit, which controls the utilization rate of historical information.
#### 5.3.2 Causal Temporal Convolutional Network
While the TCN is an efficient parallel neural architecture for sequence learning, a standard convolution allows information to flow from future time steps into past ones, violating the temporal order of spatio-temporal graph data. Causal TCNs, proposed in WaveNet [160], explicitly model the causal nature of temporal data by removing connections from future time steps to past time steps, eliminating such data leakage, as depicted in Figure 9. Furthermore, to capture longer-range temporal dependencies more effectively, 1D-CNNs with dilation factors [161] can be used. By increasing the dilation factor layer by layer, the model can learn temporal dependencies from short to long range. A causal TCN with dilation factors can be expressed as:
\[F(s)=\left(\boldsymbol{x}*_{d}f\right)(s)=\sum_{i=0}^{k-1}f(i)\cdot x_{s-d\cdot i}, \tag{18}\]
where \(\boldsymbol{x}\) denotes the input time series and \(s\) the current position; \(f\) is a convolution kernel of size \(k\); \(d\) represents the dilation factor, with the ordinary convolution operator being the special case \(d=1\); and \(s-d\cdot i\) indexes the historical information being aggregated.
### _Temporal Self-Attention Networks_
Self-attention networks represent a highly effective approach for capturing long-range temporal relationships among different time steps, with the most prominent example being the Transformer model [162]. The Transformer comprises three primary components: a scaled dot-product attention network, a feed-forward network, and position encodings, as illustrated in Figure 10.
The scaled dot-product network is the core part of Transformers, in which the attention calculation is formulated as:
\[\mathrm{Attention}(\boldsymbol{Q},\boldsymbol{K},\boldsymbol{V})=\mathrm{ Softmax}\left(\frac{\boldsymbol{Q}\boldsymbol{K}^{T}}{\sqrt{\boldsymbol{d}_{k}}} \right)\boldsymbol{V}, \tag{19}\]
where queries \(\boldsymbol{Q}\), keys \(\boldsymbol{K}\), and values \(\boldsymbol{V}\) are the three basic elements of the self-attention mechanism, obtained by non-shared linear transformations of the original input. \(d_{k}\) denotes the dimension of the keys, and \(\sqrt{d_{k}}\) serves as the scaling factor. To stabilize the training process, this part can also employ the multi-head attention form, which is similar to Eq. 14.
Since Transformer contains no recurrence or convolution operator, we have to inject some positional information about the tokens in the sequence to consider the order of the sequence. Trigonometric function-based encoding is a common positional encoding approach, which is defined as:
\[\begin{split} PE_{(pos,2i)}&=\sin\left(pos/10000^{2i /d_{\text{model}}}\right),\\ PE_{(pos,2i+1)}&=\cos\left(pos/10000^{2i/d_{\text{ model}}}\right),\end{split} \tag{20}\]
where \(pos\) is the position and \(i\) is the dimension. This method applies sine encoding to even dimensions and cosine encoding to odd dimensions to distinguish different positions. In addition to trigonometric functions, we can leverage fully learnable position encodings to encode positional information.
Fig. 10: The overview of Transformer [162]
Fig. 9: The overview of causal temporal convolutional network with exponentially increasing dilated factors [160].
### _Spatio-Temporal Fusion Neural Architecture_
In addition to spatial learning networks and temporal learning networks, the spatio-temporal fusion neural architecture represents another critical area of focus, as it determines how spatial and temporal learning networks are integrated into the complete STGNN. Existing fusion neural architectures can be divided into two categories: factorized and coupled.
#### 5.5.1 Factorized Neural Architecture
In factorized neural architectures, spatial learning networks and temporal learning networks are stacked layer by layer, in parallel or in series, like building blocks. Two typical examples of factorized neural architectures in STGNN models are shown in Figure 11 and Figure 12, respectively. The first example is STGCN [19], whose temporal learning network is a TCN. In each ST-Conv block of STGCN, two TCNs and one GCN are stacked in series, forming a sandwich structure. Since this model learns temporal information through convolutional structures, its spatio-temporal learning is parallelized, _i.e._, it receives all information within a given time window as input at once. Mathematically, the calculation of each ST-Conv block can be defined as follows:
\[v^{l+1}=\mathbf{\Gamma}_{1}^{l}*_{\mathcal{T}}\mathrm{ReLU}\left(\mathbf{ \Theta}^{l}*_{\mathcal{G}}\left(\Gamma_{0}^{l}*_{\mathcal{T}}v^{l}\right) \right), \tag{21}\]
where \(\Gamma_{0}^{l}\) and \(\Gamma_{1}^{l}\) denote the upper and lower temporal convolutional kernel within block \(l\), and \(\Theta^{l}\) is the spectral kernel of graph convolution.
The second one is T-GCN [25], which utilizes GRUs for temporal learning. This model captures the spatio-temporal dependencies in a recursive manner. For each time step, graph signals are sequentially processed by GCN and GRU to learn spatial and temporal dependencies separately. The whole process of each stacked GCN and GRU in this model can be expressed as:
\[\begin{split}&\boldsymbol{f}(\boldsymbol{X},\boldsymbol{A})= \sigma\left(\boldsymbol{AXW}_{0}\right),\\ &\boldsymbol{u}_{t}=\sigma\left(\boldsymbol{W}_{u}\left[f\left( \boldsymbol{A},\boldsymbol{X}_{t}\right),\boldsymbol{h}_{t-1}\right]+ \boldsymbol{b}_{u}\right),\\ &\boldsymbol{r}_{t}=\sigma\left(\boldsymbol{W}_{r}\left[\boldsymbol {f}\left(\boldsymbol{A},\boldsymbol{X}_{t}\right),\boldsymbol{h}_{t-1}\right]+ \boldsymbol{b}_{r}\right),\\ &\boldsymbol{c}_{t}=\tanh\left(\boldsymbol{W}_{c}\left[f\left( \boldsymbol{A},\boldsymbol{X}_{t}\right),\left(\boldsymbol{r}_{t}*\boldsymbol{ h}_{t-1}\right)\right]+\boldsymbol{b}_{c}\right),\\ &\boldsymbol{h}_{t}=\boldsymbol{u}_{t}*\boldsymbol{h}_{t-1}+(1- \boldsymbol{u}_{t})*\boldsymbol{c}_{t},\end{split} \tag{22}\]
where \(f(A,X_{t})\) denotes the output of the spatial GCN at time step \(t\); it is then fed into the GRU to obtain the hidden state at time \(t\).
#### 5.5.2 Coupled Neural Architecture
In coupled neural architectures, spatial learning networks are usually integrated into the architecture of temporal learning networks as embedded components. In STGNNs, this type of neural architecture occurs almost exclusively in combinations of GNN-based spatial learning networks and RNN-based temporal learning networks. One example of a coupled neural architecture in STGNNs is DCRNN [27], which integrates GCN into the architecture of GRU, as illustrated in Figure 13. In this model, the original linear units in the GRU are replaced with a graph convolution operator, which can be written as:
\[\begin{split}\boldsymbol{r}^{(t)}&=\sigma\left(\boldsymbol{\Theta}_{r}*\mathcal{G}\left[\boldsymbol{X}^{(t)},\boldsymbol{H}^{(t-1)}\right]+\boldsymbol{b}_{r}\right),\\ \boldsymbol{u}^{(t)}&=\sigma\left(\boldsymbol{\Theta}_{u}*\mathcal{G}\left[\boldsymbol{X}^{(t)},\boldsymbol{H}^{(t-1)}\right]+\boldsymbol{b}_{u}\right),\\ \boldsymbol{C}^{(t)}&=\tanh\left(\boldsymbol{\Theta}_{C}*\mathcal{G}\left[\boldsymbol{X}^{(t)},\left(\boldsymbol{r}^{(t)}\odot\boldsymbol{H}^{(t-1)}\right)\right]+\boldsymbol{b}_{c}\right),\\ \boldsymbol{H}^{(t)}&=\boldsymbol{u}^{(t)}\odot\boldsymbol{H}^{(t-1)}+\left(1-\boldsymbol{u}^{(t)}\right)\odot\boldsymbol{C}^{(t)},\end{split} \tag{23}\]
where \(\boldsymbol{\Theta}_{r}*\mathcal{G}\) denotes the graph convolution operator with parameter \(\boldsymbol{\Theta}_{r}\). Compared with Eq. 16 of the original GRU, we can see that apart from the internal graph convolution operator, the surrounding recurrent computation is largely unchanged. Similar to some neural translation models [163], DCRNN can also employ a sequence-to-sequence structure to improve predictions.
To provide an explicit and in-depth understanding of spatio-temporal fusion neural architectures in STGNNs, we summarize and analyze the neural architectures of some existing classical models in Table II.
Fig. 11: The overview of STGCN [19].
Fig. 12: The overview of T-GCN [25].
Fig. 13: The overview of DCRNN [27].
## 6 STGNN Variants
In Section 5, we introduced the basic neural architectures of STGNNs to aid comprehension of the spatio-temporal learning paradigm in this research domain. In recent years, however, numerous innovative methods have been devised to enhance the learning of spatio-temporal dependencies. In this section, we elaborate on advanced STGNN variants that better capture spatio-temporal dependencies for predictive learning in urban computing.
### _Spatial Learning Methods_
#### 6.1.1 Multi-Graph Convolution
In urban systems, multiple types of spatial relations often exist simultaneously. For instance, in transportation systems, adjacent regions and regions with similar POIs may exhibit similar traffic patterns. Hence, jointly considering multiple spatial relations is necessary for spatio-temporal learning in STGNNs. In recent years, a series of STGNN variants integrating multi-graph convolutions have been proposed to address this challenge [166, 167, 168, 20, 21, 24, 33, 170]. Among them, STMGCN [24] is a typical model for urban ride-hailing demand prediction, as shown in Figure 14. This model first constructs multiple graphs based on neighborhood, functional similarity, and connectivity to characterize multiple spatial correlations. For each graph, a contextual gated RNN and ChebNet are adopted to capture temporal and spatial dependencies, respectively. The final prediction is obtained by fusing the parallelized multi-graph spatio-temporal hidden information.
#### 6.1.2 Adaptive Graph Learning
Despite its capability to capture multiple spatial correlations to some extent, multi-graph modeling still suffers from two limitations. Firstly, the graph construction process may be insufficient and fail to account for other implicit correlations. Secondly, the rationality of graph construction may be questioned, particularly in the absence of sufficient domain knowledge to support it. To overcome these challenges, adaptive graph learning methods have been developed gradually. According to existing literature, adaptive graph learning methods in STGNN can be broadly categorized into two main categories: random initialization-based and feature initialization-based approaches.
**Random initialization-based** methods perform adaptive graph structure learning via randomly initialized learnable matrices [34, 80, 122, 171, 172, 173, 174]. Two prominent models in this category are Graph WaveNet [122] and MTGNN [172], which have been widely applied or improved upon in subsequent works. In Graph WaveNet, the adaptive graph is produced as follows:
\[\tilde{\mathbf{A}}_{adap}=\mathrm{SoftMax}\left(\mathrm{ReLU}\left(\mathbf{E}_{1}\mathbf{ E}_{2}^{T}\right)\right), \tag{24}\]
where \(\mathbf{E}_{1}\in\mathbb{R}^{N\times C}\) and \(\mathbf{E}_{2}\in\mathbb{R}^{N\times C}\) are the source node embedding and target node embedding, respectively. They are two randomly initialized learnable matrices, where \(N\) denotes the number of nodes in the graph and \(C\) denotes the embedding dimension.
In contrast, the generation process of the adaptive graph in MTGNN is defined as:
\[\mathbf{M}_{1}=\tanh\left(\alpha\mathbf{E}_{1}\mathbf{\Theta}_{1}\right), \tag{25}\] \[\mathbf{M}_{2}=\tanh\left(\alpha\mathbf{E}_{2}\mathbf{\Theta}_{2}\right),\] (26) \[\tilde{\mathbf{A}}_{adap}=\mathrm{ReLU}\left(\tanh\left(\alpha\left( \mathbf{M}_{1}\mathbf{M}_{2}^{T}-\mathbf{M}_{2}\mathbf{M}_{1}^{T}\right)\right)\right), \tag{27}\]
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Model & Specific Task & Spatial Learning & Temporal Learning & Fusion \\ \hline STGCN [19] & Traffic state prediction & Spectral GCN & Gated TCN & Factorized \\ \hline T-GCN [25] & Traffic state prediction & Spatial GCN & GRU & Factorized \\ \hline DCRNN [27] & Traffic state prediction & Spatial GCN & GRU & Coupled \\ \hline GMAN [120] & Traffic state prediction & Graph attention & Transformer & Factorized \\ \hline Graph WaveNet [122] & Traffic state prediction & Spatial GCN & Causal TCN & Factorized \\ \hline GSNet [40] & Traffic incident prediction & Spatial GCN & GRU & Factorized \\ \hline STGNN-TTE [141] & Travel time prediction & Spatial GCN & Gated TCN+Transformer & Factorized \\ \hline Social-STGCN [53] & Trajectory prediction & Spatial GCN & TCN & Factorized \\ \hline Social-STAGE [144] & Trajectory prediction & Spatial GCN & TCN+Attention & Factorized \\ \hline PM2.5-GNN [62] & Air quality prediction & Spatial GCN & GRU & Factorized \\ \hline SpAttRNN [164] & Air quality prediction & Graph attention & GRU & Coupled \\ \hline ATGCN [165] & Air quality prediction & Graph attention & GRU & Coupled \\ \hline MGWN [68] & Wind prediction & Spatial GCN & Causal TCN & Factorized \\ \hline AGL-STAN [80] & Crime frequency prediction & Spatial GCN & Transformer & Factorized \\ \hline AGCLSTM [82] & Flood situation prediction & Spatial GCN & LSTM+Attention & Factorized \\ \hline STEP [96] & Epidemic prediction & Spatial GCN+Attention & GRU & Factorized \\ \hline \end{tabular}
\end{table} TABLE II: The summary of basic neural architectures of some existing STGNN models.
Fig. 14: The overview of STMGCN [24].
where \(\mathbf{E}_{1}\in\mathbb{R}^{N\times C}\) and \(\mathbf{E}_{2}\in\mathbb{R}^{N\times C}\) represent two randomly initialized node embeddings; \(\mathbf{\Theta}_{1}\) and \(\mathbf{\Theta}_{2}\) are learnable parameters within the model; \(\alpha\) is a hyperparameter controlling the saturation rate of the activation function. Numerous subsequent random initialization-based adaptive graph learning methods have been built upon the two aforementioned approaches. For example, CCRNN [34] introduced a layer-wise adaptive graph learning mechanism that adjusts the graph structure layer by layer, and DMSTGCN [124] presented an adaptive graph learning approach based on tensor decomposition.
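Both constructions can be sketched directly from Eqs. 24-27. The NumPy snippet below is illustrative; `E1`, `E2`, `T1` and `T2` stand for the randomly initialized node embeddings and learnable projections.

```python
# Sketches of the two adaptive-graph constructions above; in practice the
# embeddings are trained end-to-end with the rest of the STGNN.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_adj_gwnet(E1, E2):
    # Graph WaveNet (Eq. 24): row-normalized dense adjacency.
    return softmax(np.maximum(E1 @ E2.T, 0.0), axis=-1)

def adaptive_adj_mtgnn(E1, E2, T1, T2, alpha=3.0):
    # MTGNN (Eqs. 25-27): the anti-symmetric term yields a directed graph.
    M1, M2 = np.tanh(alpha * E1 @ T1), np.tanh(alpha * E2 @ T2)
    return np.maximum(np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T)), 0.0)
```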
**Feature initialization-based** approaches construct adaptive graph structures from the given inputs or hidden states [84, 123, 129, 175, 176, 177, 178]. These models usually combine learnable matrices or attention mechanisms with the given features to generate the adaptive graph structures. For example, DGCRN [123] proposed a recurrent adaptive graph learning mechanism that constructs the graph structure for each time step from the hidden states. DSTAGNN [175] presented a self-attention-based adaptive graph learning method to connect the graph structures with the hidden states. BSTGCN [176] designed a Bayesian graph learning mechanism based on pre-defined graphs and input features. GTS [178] presented a novel probabilistic graph structure learning method based on input features.
#### 6.1.3 Multi-Scale Spatial Learning
Due to the wide existence of spatial heterogeneity in urban systems, entities can be divided into communities with different functions. Entities in the same community may exhibit intra-community correlations, while entities in different communities may exhibit cross-community correlations. In light of these facts, some recent methods have investigated multi-scale spatial learning based on community partitioning, often leveraging domain knowledge to guide the partitioning process.
In this research line, some studies obtain partitioned communities by artificial division [43, 133, 179] or clustering algorithms [20, 132, 180], while others obtain them with neural networks [71, 181, 182]. For example, ST-SHN [71] and ST-HSL [147] learn the hyperedges, _i.e._, communities, of a hypergraph to capture global spatial dependencies for crime prediction. Besides, GAGNN [181] is a group-aware STGNN model for air quality prediction among hundreds of Chinese cities, as shown in Figure 15. This model first proposed a differentiable grouping network to learn an assignment matrix, which automatically computes the mapping relationships between cities and city groups. Spatial GCNs are then applied to the graph data at these two different scales to learn the intra-community and cross-community spatio-temporal dependencies. Another notable research line involves THINK [183] and DMGCRN [184], which utilize hyperbolic graph neural networks on the Poincaré ball to capture multi-scale spatial dependencies more directly. The hyperbolic space is particularly suitable for modeling hierarchies, including local and global dependencies of spatio-temporal data, which makes it a promising approach for improving STGNN models.
#### 6.1.4 Heterogeneous Spatial Learning
As mentioned in the introduction, heterogeneity is an essential property of spatio-temporal data in smart cities, wherein the data display varying patterns across different temporal or spatial ranges. In contrast to the multi-scale spatial learning methods above, some works focus on fine-grained node-to-node heterogeneous relationships in spatio-temporal data. To distinguish the influence of static undirected edges (_e.g._, distance-based edges) from dynamic directed edges (_e.g._, edges induced by vehicle mobility) in the spatio-temporal graph, HMGCN [102] performs heterogeneous aggregation along the spatial dimension. Similarly, MasterGNN [60] constructs a heterogeneous graph structure based on multiple relations between air quality and weather monitoring stations, while HT-GNN aggregates heterogeneous information from spatial-based intra-edges, temporal-based inter-edges, and spatio-temporal-based across-time edges.
Another line of heterogeneous spatial learning utilizes transportation, time, and geographical information to capture intricate spatio-temporal message passing. For instance, HeGA [185] and MOHER [186] design heterogeneous graphs over multiple transportation modes to receive information from several sources (_e.g._, bike, bus, and vehicle) simultaneously. The framework of MOHER is depicted in Figure 16, where the spatio-temporal heterogeneous graph is constructed from region pair-wise relations and inter-mode multi-relations to characterize the correlations between different transportation modes. The heterogeneous graph convolution operator is then combined with LSTM to capture complex spatio-temporal dependencies. Additionally, DH-GEM [187] proposed incorporating node-position edges, while CAP [188] further expanded this concept by designing node-time and node-location edges within the heterogeneous graph, allowing time and geographical knowledge to be derived.
Fig. 16: The overview of MOHER [186].
Fig. 15: The overview of GAGNN [181].
### _Temporal Learning Methods_
#### 6.2.1 Multi-Scale Temporal Learning
Given the prevalence of short- and long-range correlations in spatio-temporal data, capturing multi-scale temporal correlations has emerged as a crucial direction for improving temporal learning. So far, there are two mainstream designs for multi-scale temporal learning in STGNNs. The first utilizes TCNs with receptive fields of varying scales [68, 172]. A typical example is MTGNN [172], which employs multiple TCNs with various kernel sizes to learn temporal dependencies at different scales. The second direction integrates several temporal learning networks [17, 20, 117]. For example, DMVSTVGNN [20] jointly utilizes TCNs and Transformers for long- and short-range temporal learning. As depicted in Figure 17, Traffic STGNN [17] achieves multi-scale temporal learning through multi-network integration, adopting a GRU for short-range temporal learning and a Transformer for long-range temporal learning.
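The first design direction can be sketched as an inception-style temporal layer: parallel 1D convolutions with different kernel sizes whose outputs are aligned in time and concatenated. The PyTorch module below is a hedged sketch in the spirit of MTGNN's temporal layers; the class name and kernel sizes are assumptions.

```python
# Illustrative multi-kernel temporal convolution: each branch sees a
# different receptive field, capturing short- and long-range patterns.
import torch
import torch.nn as nn

class MultiScaleTCN(nn.Module):
    def __init__(self, c_in, c_out, kernel_sizes=(2, 3, 6, 7)):
        super().__init__()
        assert c_out % len(kernel_sizes) == 0
        self.convs = nn.ModuleList(
            [nn.Conv1d(c_in, c_out // len(kernel_sizes), k) for k in kernel_sizes]
        )

    def forward(self, x):                       # x: (batch, c_in, time)
        t = min(conv(x).size(-1) for conv in self.convs)
        # Keep the last t steps of each branch so outputs align causally.
        return torch.cat([conv(x)[..., -t:] for conv in self.convs], dim=1)
```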
#### 6.2.2 Multi-Granularity Temporal Learning
There are multiple types of temporal characteristics in spatio-temporal data. For instance, the traffic flow at a given time is related not only to the recent traffic flow but may also exhibit similarities to the traffic flow at the same time on the previous day or even the previous week, reflecting closeness, periodicity, and trend, respectively. To account for these three temporal granularities, many previous works [30, 31, 32, 189, 190, 191] adopted a three-branch architecture that learns features from the different temporal granularities separately and then fuses the learned hidden states for prediction. As shown in Figure 18, ASTGCN [189] employs a typical three-branch architecture for multi-granularity temporal learning, where \(\mathcal{X}_{h}\), \(\mathcal{X}_{d}\) and \(\mathcal{X}_{w}\) represent the spatio-temporal data of the latest hour, of the same hour on the previous day, and of the same hour in the previous week, respectively. After passing through separate branches, the outputs are finally fused by a learnable weight matrix.
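A minimal sketch of this weighted fusion, assuming each branch has already produced its own prediction, is as follows (the names `W_h`, `W_d`, `W_w` are illustrative):

```python
# Hedged sketch of ASTGCN-style multi-granularity fusion: hourly, daily and
# weekly branch outputs are merged with learnable element-wise weights.
import numpy as np

def fuse_granularities(Y_h, Y_d, Y_w, W_h, W_d, W_w):
    # All tensors share shape (num_nodes, horizon); * is element-wise.
    return W_h * Y_h + W_d * Y_d + W_w * Y_w
```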
#### 6.2.3 Decomposition Temporal Learning
Individual temporal patterns usually contain a variety of hidden components, such as inherent, diffusion, and periodic components. To better capture these complex temporal dependencies, decomposition-based temporal learning methods have been proposed, which automatically decompose and integrate different temporal components through special neural designs [171, 192, 193, 194, 195]. FC-GAGA [192] is a noteworthy example of the decomposition methods; it adopts the subtraction residual mechanism from N-BEATS [196] to decompose the components of traffic data and model the spatial correlations of each component. As shown in Figure 19, FC-GAGA consists of multiple stacked layers, each comprising a time gate block, a graph gate block, and several fully connected blocks. The time gate block eliminates node-specific multiplicative seasonality from the block's input and reinstates it at the block's output, while the graph gate block captures spatial correlations among entities. The fully connected blocks are similar to those in N-BEATS: through forecast and backcast projection branches, they contribute to the final model output and remove redundant temporal components for downstream blocks.
In addition to FC-GAGA, other works have also adopted decomposition-based ideas. For example, StemGNN [194] decomposed the temporal components via the N-BEATS subtraction residual but modeled spatial correlations in the spectral domain. D2STGNN [171] proposed a temporal residual decomposition method combined with graph structure learning. STWave [193] directly utilized the discrete wavelet transform to disentangle the event and the trend from the spatio-temporal graph data.
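The shared decomposition mechanism can be sketched independently of any specific model: each block emits a backcast that is subtracted from its input and a partial forecast that is accumulated. The snippet below is a hedged sketch with illustrative names.

```python
# Minimal sketch of N-BEATS-style residual decomposition as reused by
# FC-GAGA and StemGNN; `blocks` are callables returning (backcast, forecast).
import numpy as np

def decompose_forecast(x, blocks):
    residual, forecast = x, 0.0
    for block in blocks:
        backcast, partial = block(residual)
        residual = residual - backcast   # remove the component this block explains
        forecast = forecast + partial    # accumulate the block's prediction
    return forecast
```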
Fig. 19: The overview of FC-GAGA [192].
Fig. 17: The overview of Traffic STGNN [17].
Fig. 18: The overview of ASTGCN [189].
### _Spatio-Temporal Fusion Methods_
#### 6.3.1 Spatio-Temporal Joint Modeling
In Section 5.5, we discussed the basic spatio-temporal fusion architectures of STGNNs, which either factorize or couple spatial learning networks and temporal learning networks. Although these architectures can effectively learn spatial and temporal dependencies separately, they lack the ability to model joint spatio-temporal dependencies, making it challenging to capture complex spatio-temporal relations across different time steps.
In recent years, some literature has focused on jointly modeling spatio-temporal dependencies based on 3D GCN [197], Spatio-Temporal Joint GCN (STJGCN) [198] and Spatio-Temporal Synchronous GCN (STSGCN) [118]. Among them, STSGCN has become a mainstream method for the joint fusion of spatio-temporal dependencies. This type of neural architecture models spatio-temporal dependencies within a unified graph structure, which can replace the separate spatial and temporal learning networks. The crucial part of STSGCN is the construction of the spatio-temporal synchronous graph, as shown in Figure 20. The original spatio-temporal synchronous graph is simple: nodes at the same location are connected to each other across adjacent time steps. This graph construction approach characterizes not only spatial neighbors but also temporal neighbors, establishing unified spatio-temporal relations. After graph construction, STSGCN employs a simple GCN model to capture the spatio-temporal dependencies.
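The localized graph construction can be sketched as a block matrix, assuming three consecutive time steps as in the original STSGCN; names are illustrative.

```python
# Sketch of the spatio-temporal synchronous graph of STSGCN: given a spatial
# adjacency A (N x N), build a (steps*N) x (steps*N) adjacency over
# consecutive time steps, connecting each node to itself across steps.
import numpy as np

def st_synchronous_adj(A, steps=3):
    N = A.shape[0]
    A_st = np.zeros((steps * N, steps * N))
    for t in range(steps):
        A_st[t*N:(t+1)*N, t*N:(t+1)*N] = A      # spatial edges within step t
    idx = np.arange(N)
    for t in range(steps - 1):                  # temporal self-edges across steps
        A_st[t*N + idx, (t+1)*N + idx] = 1.0
        A_st[(t+1)*N + idx, t*N + idx] = 1.0
    return A_st
```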
There are some follow-up works that extend spatio-temporal synchronous graph modeling. For example, STFGNN [117] improves the graph construction by fusing a DTW-based temporal graph with the spatial graph, and automated neural architecture search approaches such as AutoSTG [204], Auto-DSTSGN [119], and AutoCTS [208] further search for effective spatio-temporal architectures (see Table III).
## 7 Advanced Learning Frameworks
In recent years, more and more advanced learning frameworks have been developed to enhance the performance of STGNNs in terms of deep representation and prediction accuracy. In this section, we review and discuss six typical advanced learning frameworks that are combined with STGNNs: adversarial learning, meta learning, self-supervised learning, continuous spatio-temporal modeling, physics-informed learning, and transfer learning.
### _Adversarial Learning_
Traditional loss functions used in predictive learning, such as the L1 and L2 norms, measure pointwise errors, so these optimization objectives may fail to capture the distribution of, and correlations within, the predicted data relative to the real data. This limitation can lead to distorted prediction results. An adversarial loss can therefore be combined with the traditional loss to mitigate this problem, a strategy that has been widely applied in time series prediction. To introduce the adversarial loss, Generative Adversarial Networks (GANs) are employed, with the neural predictor serving as the generator and the discriminator architecture designed separately. We have witnessed a flourishing of works [60, 224, 225, 226, 227, 228, 229, 230] that combine adversarial losses with STGNNs for predictive learning tasks.
However, in certain predictive learning scenarios in urban systems, it can be challenging to discriminate prediction results at spatio-temporal scales. To address this, TFGAN [225] proposed an STGNN model combined with an adversarial loss for traffic flow prediction, in which the discriminator is composed of a GCN and a GRU, as illustrated in Figure 22. The combination of GCN and GRU allows joint discrimination of the prediction results across both spatial and temporal dimensions, ensuring that the predicted results are distributed similarly to the real data at the spatio-temporal scale. TFGAN is trained adversarially as a min-max game between the generator \(G\) and the discriminator \(D\). The generator \(G\) of this model is an STGNN with multi-graph convolution. The loss functions in TFGAN are expressed as follows:
\[\begin{split}\mathcal{L}_{\mathbf{G}}&=\mathbb{E}_{\hat{z}\sim\mathcal{P}_{(F)}}[\log(1-\mathbf{D}(\hat{z}))],\\ \mathcal{L}_{\mathbf{M}}&=\frac{1}{b}\sum_{i=1}^{b}\left\|Y^{i}-\mathcal{Y}^{i}\right\|^{2},\\ \mathcal{L}_{\mathbf{D}}&=\mathbb{E}_{z\sim\mathcal{P}_{(R)}}[\log(\mathbf{D}(z))]+\mathbb{E}_{\hat{z}\sim\mathcal{P}_{(F)}}[\log(1-\mathbf{D}(\hat{z}))],\\ \theta_{\mathbf{G}},\theta_{\mathbf{D}}&=\min_{\theta_{\mathbf{G}}}\left[\lambda\mathcal{L}_{\mathbf{M}}+\max_{\theta_{\mathbf{D}}}\mathcal{L}_{\mathbf{D}}\right],\end{split} \tag{29}\]
where \(\mathcal{L}_{\mathbf{G}}\) is the generator loss, \(\mathcal{L}_{\mathbf{M}}\) is the mean squared error (MSE) loss between the predictions and the ground truths, \(\mathcal{L}_{\mathbf{D}}\) is the discriminator loss, and \(\mathbf{D}(\cdot)\) denotes the discriminator network. The parameters of the generator network \(\theta_{\mathbf{G}}\) and the discriminator network \(\theta_{\mathbf{D}}\) are optimized through the min-max objective.
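The training objective can be sketched as follows, assuming the discriminator outputs probabilities in \((0,1)\); the small epsilon and the function names are illustrative additions for numerical stability.

```python
# Hedged sketch of the TFGAN-style objective (Eq. 29): the generator is
# trained on lambda * MSE plus the adversarial term, while the discriminator
# separates real from predicted spatio-temporal sequences.
import numpy as np

def generator_loss(d_fake, y_pred, y_true, lam=1.0, eps=1e-8):
    l_adv = np.mean(np.log(1.0 - d_fake + eps))     # L_G: fool the discriminator
    l_mse = np.mean((y_pred - y_true) ** 2)         # L_M: pointwise accuracy
    return lam * l_mse + l_adv

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # D maximizes L_D, so we minimize its negation.
    return -(np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps)))
```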
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Main Categories & Sub-Categories & Models & Source Codes \\ \hline \multirow{11}{*}{Spatial Learning Methods} & \multirow{2}{*}{Multi-Graph Convolution} & PVCGN [21] & https://github.com/HCPLab-SYSU/PVCGN \\ \cline{3-4} & & ST-MGCN [24] & — \\ \cline{2-4} & \multirow{3}{*}{Adaptive Graph Learning} & Graph WaveNet [122] & https://github.com/nnzhan/Graph-WaveNet \\ \cline{3-4} & & MTGNN [172] & https://github.com/nnzhan/MTGNN \\ \cline{3-4} & & DGCRN [123] & https://github.com/tsinghua-fib-lab/Traffic-Benchmark \\ \cline{2-4} & \multirow{4}{*}{Multi-Scale Spatial Learning} & GAGNN [181] & https://github.com/Frigger/GACNN \\ \cline{3-4} & & THGCN [180] & https://github.com/guokar98/THGCN.git \\ \cline{3-4} & & ST-SHN [71] & https://github.com/akaxlh/ST-SHN \\ \cline{3-4} & & ST-HSL [147] & — \\ \cline{2-4} & \multirow{2}{*}{Heterogeneous Spatial Learning} & HTGNN [211] & https://github.com/TestLab-Code/HTGNN \\ \cline{3-4} & & DH-GEM [187] & https://github.com/gmg00M17/DH-GEM \\ \hline \multirow{5}{*}{Temporal Learning Methods} & Multi-Scale Temporal Learning & Traffic STGNN [17] & — \\ \cline{2-4} & Multi-Granularity Temporal Learning & ASTGCN [189] & https://github.com/guoshnBUPT/ASTGCN-r-pytorch \\ \cline{2-4} & \multirow{3}{*}{Decomposition Temporal Learning} & FC-GAGA [192] & https://github.com/boreshkinai/fc-gaga \\ \cline{3-4} & & StemGNN [194] & https://github.com/microsoft/StemGNN \\ \cline{3-4} & & STWave [193] & https://github.com/LMissher/STWave \\ \hline \multirow{5}{*}{Spatio-Temporal Fusion Methods} & \multirow{5}{*}{Spatio-Temporal Synchronous Graph Modeling} & STSGCN [118] & https://github.com/Davidham3/STSGCN \\ \cline{3-4} & & STFGNN [117] & https://github.com/MengzhangLI/STFGNN \\ \cline{3-4} & & AutoSTG [204] & https://github.com/AutoSTG/AutoSTG \\ \cline{3-4} & & Auto-DSTSGN [119] & https://github.com/jinguangyin/Auto-DSTSGN \\ \cline{3-4} & & AutoCTS [208] & https://github.com/VM520/AutoCTS \\ \hline \multirow{15}{*}{Advanced Methods Combined with STGNNs} & \multirow{2}{*}{Adversarial Learning} & MasterGNN [60] & https://github.com/hanjindong/MasterGAN \\ \cline{3-4} & & MT-ASTN [212] & https://github.com/MiaoHaoSunny/MT-ASTN \\ \cline{2-4} & \multirow{2}{*}{Meta Learning} & ST-MetaNet [213] & https://github.com/panzheyi/ST-MetaNet \\ \cline{3-4} & & MegaCRN [214] & https://github.com/deepkashiwa20/MegaCRN \\ \cline{2-4} & \multirow{3}{*}{Self-Supervised Learning} & STGCL [215] & https://github.com/liuxu77/STGCL \\ \cline{3-4} & & SPGCL [216] & — \\ \cline{3-4} & & ST-SSL [217] & https://github.com/Echo-Ji/ST-SSL \\ \cline{2-4} & \multirow{3}{*}{Continuous Spatio-Temporal Modeling} & STGODE [218] & https://github.com/square-coder/STGODE \\ \cline{3-4} & & MTGODE [219] & https://github.com/GRAND-Lab/MTGODE \\ \cline{3-4} & & STG-NCDE [220] & https://github.com/jeongwhanchoi/STG-NCDE \\ \cline{2-4} & \multirow{2}{*}{Physics-Informed Learning} & STDEN [26] & https://github.com/Echo-Ji/STDEN \\ \cline{3-4} & & CoLa-GNN [98] & https://github.com/github516/CoLa_GNN_review \\ \cline{2-4} & \multirow{3}{*}{Transfer Learning} & ST-GFSL [221] & https://github.com/RobinLu1209/ST-GFSL \\ \cline{3-4} & & TL-DCRNN [222] & https://github.com/tanwimallick/TL-DCRNN \\ \cline{3-4} & & DASTNet [223] & https://github.com/YihongT/DASTNet \\ \hline \end{tabular}
\end{table} TABLE III: The source codes for some typical STGNN models with improved spatio-temporal learning methods.
### _Meta Learning_
Meta learning is an advanced learning paradigm focused on the concept of "how to learn to learn". Incorporating meta learning techniques into STGNN models is important because these models must capture high-dimensional heterogeneity and dynamic spatio-temporal dependencies from raw data, and teaching them how to learn can significantly improve their prediction performance. Typically, meta learning techniques in STGNN models extract additional spatio-temporal attributes through a meta-learner. ST-MetaNet [213] is the pioneering study to introduce meta learning into STGNNs. As illustrated in Figure 23, ST-MetaNet is composed of an RNN, a Meta-GAT, and a Meta-RNN, and utilizes two types of meta-knowledge learners, namely the Node Meta-Knowledge (NMK) and Edge Meta-Knowledge (EMK) learners, to effectively incorporate additional spatio-temporal information. Both meta-knowledge learners use fully-connected networks as their basic learning networks. The NMK learner learns meta-knowledge from node attributes, such as locations and POIs, while the EMK learner learns meta-knowledge from edge attributes, such as road connectivity and distances. The learned meta-knowledge is then utilized to generate the weights of the Meta-RNN and Meta-GAT components, improving the overall learning capability of the model. Formally, the calculation process of the Meta-RNN for any spatial node \(i\) is formulated as:
\[\begin{split} W_{\Omega}^{(i)}&=g_{W_{\Omega}} \left(\mathrm{NMK}\left(v^{(i)}\right)\right),\\ U_{\Omega}^{(i)}&=g_{U_{\Omega}}\left(\mathrm{ NMK}\left(v^{(i)}\right)\right),\\ b_{\Omega}^{(i)}&=g_{b_{\Omega}}\left(\mathrm{ NMK}\left(v^{(i)}\right)\right),\\ h_{t}^{(i)}&=\mathrm{GRU}\left(z_{t}^{(i)},h_{t-1 }^{(i)}\mid W_{\Omega}^{(i)},U_{\Omega}^{(i)},b_{\Omega}^{(i)}\right),\end{split} \tag{30}\]
where \(W_{\Omega}^{(i)},U_{\Omega}^{(i)}\) and \(b_{\Omega}^{(i)}\) are learnable parameters of the GRU, generated from the node attributes \(v^{(i)}\) by the node meta-knowledge learner. The meta-learner is composed of three different fully-connected networks \(g_{W_{\Omega}}\), \(g_{U_{\Omega}}\) and \(g_{b_{\Omega}}\).
In light of the success of ST-MetaNet, other STGNN models incorporating meta learning have been proposed. For example, ST-MetaNet+ [231] fuses the dynamic spatio-temporal state with meta-knowledge to generate the weights of the GAT and GRU. AutoSTG [204] also adopts a meta learning method similar to ST-MetaNet while introducing neural architecture search, using meta-knowledge to generate the weight parameters of graph convolution and temporal convolution. MegaCRN [214] introduced an attention-based memory network that stores typical features of seen samples for pattern matching, thus improving the capability of graph structure learning. In addition, meta learning can also be used for spatio-temporal graph knowledge transfer in predictive learning scenarios [221, 232].
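The core mechanism, generating network weights from attributes, can be sketched as below (Eq. 30); all shapes and names are assumptions for illustration.

```python
# Illustrative meta-learner in the spirit of ST-MetaNet: small fully-connected
# networks map the node attributes v_i to that node's GRU weights, so every
# location gets its own recurrent parameters.
import numpy as np

def fc(x, W, b):
    return np.maximum(x @ W + b, 0.0)     # single ReLU layer

def meta_gru_weight(v_i, learner, d_in, d_hid):
    nmk = fc(v_i, learner["W_nmk"], learner["b_nmk"])     # node meta-knowledge
    W_flat = nmk @ learner["W_g"] + learner["b_g"]        # d_in * d_hid values
    return W_flat.reshape(d_in, d_hid)  # analogous heads produce U and b per gate
```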
### _Self-Supervised Learning_
Self-supervised learning transforms an unsupervised learning task into a supervised one by constructing its own labels, with the goal of learning better representations for downstream supervised tasks. Through self-supervised learning, representations with strong generalization performance can be obtained. Combining STGNN models with self-supervised learning can enhance spatio-temporal graph learning and thus improve the accuracy of downstream predictive learning tasks.
**Contrastive learning** is one of the most important self-supervised learning methods, realized by constructing positive and negative samples, and it has been introduced into STGNN models in recent years. One notable example is STGCL, introduced by Liu et al. [215], which was the first work to incorporate contrastive learning into STGNN architectures. As shown in Figure 24, the first step of STGCL is data augmentation, which constructs positive and negative samples using techniques such as edge masking, input masking, and temporal shifting. After obtaining the positive and negative samples, the same STG encoder is employed to learn spatio-temporal graph representations for both the original and augmented data. STGCL then splits into two branches: a predictive branch and a contrastive branch. In the predictive branch, the STG decoder directly outputs the prediction results, and traditional pointwise errors, such as the mean absolute error (MAE), can be used as the loss function. In the contrastive branch, the two representations \(H^{{}^{\prime}}\) and \(H^{{}^{\prime\prime}}\) are fed into the
Fig. 23: The overview of ST-MetaNet [213].
Fig. 22: The overview of discriminator in TFGAN [225].
projection head to obtain the latent representations \(z^{\prime}\) and \(z^{\prime\prime}\). For these two latent representations, the contrastive loss proposed in GraphCL [233] can be adopted, which is as follows:
\[\mathcal{L}_{cl}=\frac{1}{M}\sum_{i=1}^{M}-\log\frac{\exp\left(\operatorname{sim}\left(\mathbf{z}_{i}^{\prime},\mathbf{z}_{i}^{\prime\prime}\right)/\tau\right)}{\sum_{j\in\chi_{i}}\exp\left(\operatorname{sim}\left(\mathbf{z}_{i}^{\prime},\mathbf{z}_{j}^{\prime\prime}\right)/\tau\right)}, \tag{31}\]
where \(\operatorname{sim}(\cdot)\) denotes the cosine similarity and \(\tau\) denotes the temperature parameter. Note that STGCL also proposed filtering out unsuitable negatives based on unique properties of spatio-temporal graph data, such as the first-order neighbors of each node and closeness/periodicity temporal patterns, due to their similarity in the latent space. Thus, \(\chi_{i}\) denotes the set of acceptable negatives for the \(i\)-th object after negative filtering.
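A minimal NumPy sketch of this filtered contrastive loss is shown below, assuming `neg_mask` encodes the acceptable-negative sets \(\chi_{i}\); whether positives also appear in the denominator varies across implementations, so this is one plausible variant.

```python
# Sketch of the filtered InfoNCE loss in Eq. 31: z1, z2 are the projected
# views (M x d); neg_mask[i, j] = True marks j as an acceptable negative
# for object i after spatio-temporal negative filtering.
import numpy as np

def contrastive_loss(z1, z2, neg_mask, tau=0.5):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                           # cosine similarities
    pos = np.exp(np.diag(sim))                      # positive pairs (i, i)
    neg = (np.exp(sim) * neg_mask).sum(axis=1)      # filtered negatives only
    return -np.log(pos / (pos + neg)).mean()
```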
Based on STGCL, several other contrastive learning methods have been proposed to enhance the learning capabilities of STGNNs in recent years. For example, SPGCL [216] learns informative relations by maximizing the distinguishing margin between positive and negative neighbors to generate an optimal graph structure. ST-SSL [217] proposed an adaptive augmentation method over spatio-temporal graph data at both the attribute and structure levels. START [142] presented a spatio-temporal graph-based contrastive learning method for trajectory representation learning; it constructs negative trajectories through methods such as trajectory trimming and road-segment masking to help the STGNN model achieve better performance on travel time prediction tasks.
### _Continuous Spatio-Temporal Modeling_
Most existing STGNN-based approaches capture spatial and temporal dependencies in a discrete manner, leading to discontinuous latent state trajectories and higher prediction errors. To address this problem, some research has focused on continuous spatio-temporal modeling. Motivated by the success of the Neural Ordinary Differential Equation (Neural-ODE) [234], a well-known approach for continuous system modeling, STGNNs combined with Neural-ODEs can improve spatio-temporal graph representation learning in a continuous manner. STGODE [218] was the first attempt to introduce Neural-ODEs into STGNNs; however, it only integrates the Neural-ODE with the GCN and neglects continuous modeling of temporal patterns. To achieve joint continuous modeling of spatio-temporal dependencies, MTGODE [219] integrates Neural-ODEs with both graph convolution and temporal convolution operators, as depicted in Figure 25, enabling continuous spatio-temporal encoding.
In this model, a multi-layer GCN with residual connections is transformed into a continuous form, which is formulated as an ordinary differential equation:
\[\frac{\mathrm{d}\mathbf{H}^{G}(t)}{\mathrm{d}t}=\left(\widehat{\mathbf{A}}-\mathbf{I}_{N}\right)\mathbf{H}^{G}(t), \tag{32}\]
where \(\widehat{\mathbf{A}}\) is the adjacency matrix and \(\mathbf{H}^{G}(t)\) is the continuous hidden state. In MTGODE, \(\widehat{\mathbf{A}}\) is an adaptive graph that can be computed by Eq. 27. To obtain an approximate solution of the ODE, \(\text{ODESolver}(\cdot)\) can be any black-box ODE solver introduced in [234], such as Euler, Euler-Cauchy, or fourth-order Runge-Kutta. Given the initial hidden state \(\mathbf{H}^{G}(0)\), the continuous hidden state of the GCN at time \(t_{i}\) can be estimated as:
\[\mathbf{H}^{G}\left(t_{i}\right)=\text{ ODESolver }\left(\mathbf{H}^{G}(0),\frac{ \mathrm{d}\mathbf{H}^{G}(t)}{\mathrm{d}t},t_{i}\right), \tag{33}\]
Similar to the continuous GCN, a multi-layer TCN with residual connections can also be transformed into a continuous form, which is defined as:
\[\frac{\mathrm{d}\mathbf{H}^{T}(t)}{\mathrm{d}t}=\mathcal{P}\left(\text{TCN} \left(\mathbf{H}^{T}(t),t,\mathbf{\Theta}\right),R\right), \tag{34}\]
where \(\mathcal{P}\) denotes the padding operation; \(\mathbf{\Theta}\) denotes the parameters of the convolutional kernel; \(R\) indicates the receptive field of the TCN. Note that the temporal dimension needs to remain consistent to ensure the continuity of the hidden state, so a padding operation is necessary in this case. The continuous hidden state of the TCN at time \(t_{i}\) can be approximately computed by the ODE solver as follows:
\[\mathbf{H}^{T}\left(t_{i}\right)=\text{ ODESolver }\left(\mathbf{H}^{T}(0),\frac{\mathrm{d}\mathbf{H}^{T}(t)}{\mathrm{d}t},t_{i}\right), \tag{35}\]
In addition, Social ODE [235] extended ODE-based STGNNs to the scenario of multi-agent trajectory prediction. MixRNN+ [236] combined the Neural-ODE with an RNN for continuous recurrent hidden state modeling. STG-NCDE [220] developed an STGNN combined with the neural controlled differential equation (Neural-CDE), achieving better continuous modeling than Neural-ODE-based methods.
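As a concrete illustration of Eq. 32, the sketch below integrates the continuous graph propagation with a fixed-step explicit Euler solver, the simplest choice of ODESolver; the step count and names are assumptions.

```python
# Hedged sketch of continuous graph propagation: integrating
# dH/dt = (A_hat - I) H from H(0) up to time t_end with explicit Euler steps.
import numpy as np

def cgp_euler(A_hat, H0, t_end, n_steps=20):
    N = A_hat.shape[0]
    L = A_hat - np.eye(N)
    H, dt = H0.copy(), t_end / n_steps
    for _ in range(n_steps):
        H = H + dt * (L @ H)    # Euler update of the ODE state
    return H
```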
Fig. 24: The overview of STGCL [215].
Fig. 25: The spatio-temporal encoding part in MTGODE [219]. The continuous graph propagation (CGP) module and continuous temporal aggregation (CTA) module are stacked in this part.
### _Physics-Informed Learning_
In the last few years, a new paradigm called Physics-Informed Neural Networks (PINNs) [237] has emerged for exploring and computing real-world dynamics by integrating physical differential equations with the powerful fitting capability of neural networks. The main advantage of PINNs is their ability to enforce physical constraints on the predictions, thereby ensuring that the model's outputs are consistent with the laws of physics. Inspired by PINNs built on simple neural networks, physics-informed learning can also be combined with STGNNs, especially in epidemic prediction tasks [97, 98, 99, 100, 101]. As shown in Figure 26, STAN [97] first integrates the constraints of the SIR differential equations into the STGNN architecture. This model uses a GAT and a GRU to capture the spatial and temporal dependencies, respectively, and performs multi-task prediction. The model outputs four components: the transmission rate \(\beta\), the recovery rate \(\gamma\), the time-varying number of infections \(\Delta I\) and of recoveries \(\Delta R\). These components must satisfy physical constraints based on the SIR equations, which can be expressed as follows:
\[\frac{dR}{dt}=\gamma I, \tag{36}\] \[\frac{dI}{dt}=\beta S-\gamma I,\] \[S=N-I-R,\]
where \(S\) denotes the susceptible population and \(N\) the total number of people. In STAN, a constraint loss enforces that the predicted time-varying infections and recoveries stay close to those calculated by the SIR equations.
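The physics constraint can be sketched as an auxiliary loss that penalizes deviations of the predicted increments from those implied by Eq. 36; all variable names below are illustrative.

```python
# Hedged sketch of a STAN-style physics constraint: predicted increments
# dI, dR are pulled toward the increments implied by the SIR dynamics,
# using the predicted transmission rate beta and recovery rate gamma.
import numpy as np

def sir_constraint_loss(dI_pred, dR_pred, beta, gamma, I, R, N):
    S = N - I - R                       # susceptible population
    dI_phys = beta * S - gamma * I      # SIR-implied new infections
    dR_phys = gamma * I                 # SIR-implied new recoveries
    return np.mean((dI_pred - dI_phys) ** 2 + (dR_pred - dR_phys) ** 2)
```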
In addition to epidemic prediction tasks, there are also a few works in other domains. For example, STDEN [26] proposed a unified framework combining traffic potential energy field differential equations with neural networks for traffic flow prediction. The work in [66] proposed transferring knowledge from physics-based models to guide the learning of a recurrent graph convolutional neural network that predicts flow and temperature in river networks.
### _Transfer Learning_
Due to the scarcity of some spatio-temporal graph data, transfer learning has become a cost-effective approach to extend the same basic STGNN model to different data scenarios. However, there are two main obstacles to transfer learning for STGNNs: the heterogeneity of spatial structures and the heterogeneity of temporal patterns across circumstances. Specifically, in different scenarios, the spatial topology and relations are completely different, as are the temporal patterns such as periodicity and trend.
The existing literature on spatio-temporal graph transfer learning can be roughly divided into three categories: clustering-based [222, 238, 239, 240], domain adaptation-based [223, 241] and meta-learning-based [221, 232]. For example, TL-DCRNN [222] proposed a graph partitioning method that divides an entire highway network into sub-clusters and then used the DCRNN model to transfer the learned spatio-temporal dependencies from source sub-clusters to target sub-clusters. DASTNet [223] combined graph representation learning with multi-domain adversarial adaptation to obtain domain-invariant node embeddings, achieving knowledge transfer among scenarios with different spatial structures. ST-GFSL [221] is a spatio-temporal graph neural network that employs Model-Agnostic Meta-Learning (MAML) for cross-city knowledge transfer. The first step of this model is to meta-train a base model on multiple source datasets, which generates the parameters for adaptation. During the adaptation phase, the feature extractor of the basic STGNN is initialized with the generated parameters, and the parameters of the feature extractor and predictor are then jointly trained on the target dataset.
In order to facilitate future research in the field, we have compiled source codes of several representative STGNN models and categorized them according to the methods used to improve spatio-temporal representation learning. The models and corresponding categories are listed in Table III. This resource can serve as a valuable reference for researchers seeking to develop and compare new STGNN models.
## 8 Challenges and Future Directions
We have investigated the applications, basic neural architectures, and recent advancements of STGNNs for predictive learning in urban systems. Although STGNN models have achieved remarkable performance in recent years, several challenging problems remain to be addressed, pointing to potential future research directions. We summarize these challenges and suggest potentially feasible research directions as follows:
* **Lack of interpretability:** So far, the vast majority of STGNN-related work has focused on improving predictive performance through sophisticated model design. However, research on model interpretability has been relatively lacking; that is, we cannot clearly understand which spatio-temporal features play the leading role in predictive performance. In recent work, STNSCM [242] proposed constructing a causal graph to describe bike flow prediction and analyzing the causal relationships between spatio-temporal features and prediction results. Causality-based spatio-temporal graph modeling could be a promising direction for enhancing the interpretability of STGNN models.
* **Lack of calibration methods:** Uncertainty quantification is of great significance to practical industrial production, which reflects the degree of trust in the prediction results
Fig. 26: The overview of STAN [97].
of the model. To improve the credibility of deep models, appropriate model calibration methods are necessary; such methods have been widely used in image recognition [243] and graph representation learning [244] in recent years. At present, only a few works [245, 246] have studied the uncertainty of STGNN models, and research on calibration methods is lacking. Calibrating STGNN models must account for spatial and temporal characteristics simultaneously, making it more challenging than previous related work.
* **Lack of physical constraints:** In most previous works, STGNN models capture complex spatio-temporal dependencies through the integration of deep neural networks while ignoring the physical constraints of different application domains, which makes the models less recognized in some professional fields. In recent years, although some STGNN models for epidemic prediction have incorporated domain differential equations as physical constraints [97, 98, 99, 100, 101], such work is still scarce and needs to be extended to other application fields.
* **Lack of pre-training techniques:** Pre-training techniques have advanced greatly in the fields of time series and graph representation learning in recent years, but they remain relatively scarce in STGNN-related work. In recent work, STEP [247] proposed a pre-training model combined with the Masked Auto-Encoder (MAE) [248] architecture to efficiently learn temporal patterns from very long-term historical spatio-temporal graph data. In the future, pre-training techniques for long-range spatial and long-term temporal learning are necessary and of great value to the scalability and deployability of STGNN models.
* **Hurdle of distribution shifts:** Spatio-temporal data, such as traffic flows over road networks, are often collected from various locations and time periods, resulting in significant differences in the distributions of the training, validation, and test sets. For instance, the training set may span the first two years, while the validation and test sets come from the following two years. This poses a challenge for STGNNs: a model trained on one distribution may not perform well on the validation and test sets due to _distribution shifts_, similar to the distribution shift issue in domain adaptation (where the joint distribution of inputs and outputs differs between the training and test stages). Despite its importance, this problem has received little attention in the spatio-temporal research community. While several studies [249] investigated defeating distribution shifts in time series, they fail to encode the spatial correlations among locations.
* **Exploring new training strategies:** Previous studies have primarily focused on introducing novel STGNNs with sophisticated layers or modules to enhance human mobility analytics. However, another promising direction is to investigate new training strategies. For instance, in traffic prediction tasks, every location is treated equally, and the data belonging to these locations are jointly fed into neural networks. Nevertheless, the complexity of modeling the spatio-temporal correlations of each location can vary significantly, necessitating a new training strategy such as curriculum learning. Curriculum learning trains a machine learning model on increasingly difficult data, starting from simpler data, and may be effective in addressing this issue. In addition, other potential training strategies for STGNNs include multi-task learning, transfer learning, and continual learning. By exploring new training strategies, we can improve the performance and accuracy of STGNNs and enable them to tackle even more complex tasks.
* **Scalability issue:** One particularly challenging case for designing efficient STGNNs is when the number of locations in the sensor network is very large. For example, there are over ten thousand of loop detectors in PEMS systems. In this scenario, there is a need to develop STGNNs that can efficiently process and analyze the vast amounts of spatio-temporal data generated by the network while maintaining high prediction accuracy. Under this circumstance, more efficient AI solutions are appreciated, _e.g._, through model pruning/distillation, graph sampling techniques, or exploring the next-generation AI models with high efficiency. There are also a few studies probing into graph-free approaches [250] to reduce computational costs when scaling up to large-scale sensor networks.
## 9 Conclusion
In this paper, we present a systematic survey of spatio-temporal graph neural networks (STGNNs) for predictive learning in urban computing. We start with the basic form and construction methods of spatio-temporal graph data, and then summarize the predictive learning tasks involving STGNNs across different application domains of urban computing. Next, we delve into the fundamental neural architectures that underpin STGNNs, including spatial learning networks built on graph neural networks (GNNs) and temporal learning networks built on recurrent neural networks (RNNs), temporal convolutional networks (TCNs), and self-attention networks (SANs), and explore the basic fusion techniques used to integrate these spatio-temporal neural architectures. To stay up-to-date with the latest developments in STGNNs, we review notable recent works, focusing on spatial learning methods, temporal learning methods, spatio-temporal fusion methods, and other advanced techniques that can be combined with them. Finally, we summarize the challenges of current research and suggest some potential directions.
|
2303.03699 | An Edge-based WiFi Fingerprinting Indoor Localization Using
Convolutional Neural Network and Convolutional Auto-Encoder | With the ongoing development of Indoor Location-Based Services, the location
information of users in indoor environments has been a challenging issue in
recent years. Due to the widespread use of WiFi networks, WiFi fingerprinting
has become one of the most practical methods of locating mobile users. In
addition to localization accuracy, some other critical factors such as latency,
and users' privacy should be considered in indoor localization systems. In this
study, we propose a light Convolutional Neural Network-based method for edge
devices (e.g. smartphones) to overcome the above issues by eliminating the need
for a cloud/server in the localization system. The proposed method is evaluated
for three different open datasets, i.e., UJIIndoorLoc, Tampere and
UTSIndoorLoc, as well as for our collected dataset named SBUK-D to verify its
scalability. We also evaluate performance efficiency of our localization method
on an Android smartphone to demonstrate its applicability to edge devices. For
UJIIndoorLoc dataset, our model obtains approximately 99% building accuracy,
over 90% floor accuracy, and 9.5 m positioning mean error with the model size
and inference time of 0.5 MB and 51 us, respectively, which demonstrate high
accuracy in range of state of the art works as well as amenability to the
resource-constrained edge devices. | Amin Kargar-Barzi, Ebrahim Farahmand, Nooshin Taheri Chatrudi, Ali Mahani, Muhammad Shafique | 2023-03-07T07:30:57Z | http://arxiv.org/abs/2303.03699v2 | CAE-CNNLoc: An Edge-based WiFi Fingerprinting Indoor Localization Using Convolutional Neural Network and Convolutional Auto-Encoder
###### Abstract
With the ongoing development of Indoor Location-Based Services, accurate location information of users in indoor environments has been a challenging issue in recent years. Due to the widespread use of WiFi networks, WiFi fingerprinting has become one of the most practical methods of locating mobile users. In addition to localization accuracy, some other critical factors such as cost, latency, and users' privacy should be considered in indoor localization systems. In this study, we propose a lightweight Convolutional Neural Network (CNN)-based method for edge devices (such as smartphones) to overcome the above issues by eliminating the need for a cloud/server in the localization system. To enable the use of the proposed model on resource-constrained edge devices, post-training optimization techniques including quantization, pruning and clustering are used to compress the network model. The proposed method is evaluated for three different open datasets, i.e., UJIIndoorLoc, Tampere and UTSIndoorLoc, as well as for our collected dataset named SBUK-D to verify its scalability. The results demonstrate the superiority of the proposed method compared to state-of-the-art studies. We also evaluate the performance efficiency of our localization method on an Android smartphone to demonstrate its applicability to edge devices. For the UJIIndoorLoc dataset, our model with post-training optimizations obtains approximately 99% building accuracy, over 98% floor accuracy, and 4 m positioning mean error with a model size and inference time of 60 KB and 270 us, respectively, which demonstrate high accuracy as well as amenability to resource-constrained edge devices.
**Keywords:** Indoor Positioning, Deep Learning, Convolutional Neural Network, WiFi Fingerprinting, Edge-based model
## 1 Introduction
Nowadays, users' position-related information in indoor environments has received remarkable attention in the majority of applications [1], especially Indoor Location-Based Services (ILBSs). ILBSs can be used in various areas such as indoor navigation and tracking, location-based advertising (shopping advertisements), location-based information retrieval (tourist guiding services in a museum), and many more [1][2]. An accurate and low-cost localization system is an important component of ILBSs, which has drawn attention in both academia and industry. Generally, in outdoor environments, this issue has been solved by GPS, but GPS is not a suitable approach for indoor places because of the blockage, attenuation or reflection of satellite signals [1]. Consequently, developing an accurate and low-cost indoor localization system remains an ongoing challenge in this area.
In recent years, different technologies such as cameras [3], visible light [4], Bluetooth [5], WiFi [6], Ultra Wide Band (UWB) [7] and RFID [8] have been used for indoor localization. Among these technologies, WiFi has attracted the most attention because WiFi networks and their infrastructure are available in most public buildings, such as offices and shopping centers. Moreover, most users have a smartphone with WiFi capability. Therefore, position-related information can be obtained with this technology without any additional hardware or cost.
Traditional localization algorithms, such as trilateration or triangulation, are based on measured information like distances or angles from reference nodes to estimate the position [9]. These methods need line-of-sight (LOS) communication to measure accurate distances or angles; hence, they are not suitable for indoor environments with many walls and other kinds of obstacles [10]. Among different methods, the _WiFi fingerprinting_ method can easily overcome this issue without requiring distance or angle information, making it a proper method for non-line-of-sight (NLOS) environments. In WiFi fingerprinting methods, only the Received Signal Strength Indicator (RSSI) of WiFi signals from each Access Point (AP) is used to calculate the user's location. These methods assume that the RSSI pattern of several WiFi signals at one point is unique; therefore, this pattern (i.e., a WiFi fingerprint) can be used to estimate the location [11].
Generally, WiFi fingerprinting localization has two main phases: an offline phase and an online phase. In the _offline phase_, the WiFi fingerprint dataset, also known as the radio map, is constructed by collecting the RSSI values of accessible APs at several known points in the area of interest (each RSSI pattern is labeled with its location). In the _online phase_, users utilize the collected dataset to estimate their position. In this phase, a user measures the RSSI pattern at his/her place and sends this data to the system (server or cloud), which finds the position by matching the measured RSSI pattern with the available patterns in the dataset. The matching step aims to find the pattern in the dataset most similar to the measured RSSI pattern.
There are different methods for the matching part of WiFi fingerprinting, ranging from probabilistic approaches to K-nearest-neighbor (KNN) and Support Vector Machine (SVM) methods [12]. These methods require complex filtering and parameter adjustments that are time-consuming and computationally intensive. To reduce this overhead, Deep Neural Networks (DNNs) have recently been used for localization [13]. Although a number of studies attempt to decrease the intensive computation and time consumption, the main issue remains how to effectively optimize DNN-based methods for indoor localization applications. Towards this, in this paper, we propose a lightweight CNN-based model to solve the above issues.
Moreover, most state-of-the-art approaches are cloud-based or fog-based, gathering data and sending it to a server/cloud or other devices to analyze and compute the location [14]. _This procedure has the following drawbacks, which significantly affect the efficiency of the localization process._
* Privacy and security concerns by transmitting user's data to third-party platforms, such as a cloud or server.
* Increased latency of the localization process because the data have to be sent to a server for computation followed by the server sending the location response to the user.
* Increased network traffic and system cost coupled with a centralized method that needs a server or cloud for computation.
Recently, edge-based systems have been used in various applications in which all computations are performed at the edge, so there is no need to transfer data elsewhere. Therefore, to address the mentioned drawbacks, an edge-based indoor localization system is employed to achieve better performance in terms of privacy, latency and cost. However, edge devices are not able to execute complex algorithms, such as the complex CNNs recently proposed for indoor localization. For this purpose, a light CNN-based model is proposed to run on edge devices with limited resources. This model can be implemented on edge devices, such as smartphones, which significantly improves localization performance in terms of accuracy, latency, and cost. **The main contributions of this article are as follows:**
* A light CNN-based model to run on edge devices with limited resources in terms of energy, processing and memory, such as smartphones, for indoor localization applications.
* A light convolutional auto-encoder is utilized for feature extraction, denoising and dimension reduction; also, Global-Average-Pooling is used instead of a fully-connected layer to reduce the network parameters, complexity and size for indoor localization applications.
* In terms of pre-processing, a region gridding approach is used to improve network performance by enhancing localization accuracy and decreasing network size and complexity.
* Post-training optimization techniques including quantization, pruning and clustering are used to compress the network and shrink the model parameters, size and complexity.
* Validate the scalability of the proposed model by evaluating it on three different public datasets and also our collected dataset named SBUK-D.
* Evaluate the suggested network model on an Android smartphone.
**Paper Organization:** The rest of this paper is organized as follows: In Section 2, an overview of several related works on indoor localization methods is presented. In Section 3, we describe our proposed model and its structure in three main subsections. Afterwards, model evaluation and experimental results are reported in Section 4, and ultimately, concluding remarks are drawn in Section 7.
## 2 Related work
In the following, some recent indoor localization studies are briefly discussed along with their features and problems. In general, indoor positioning methods are categorized into two main groups: device-free and device-based methods. As is clear from the name, device-free approaches estimate the location without any special device carried by the user, for example, using CCTV to detect and locate a person in an indoor environment. Conversely, device-based methods need particular attached devices to compute the location of users [2], for instance, using a smartphone's WiFi or its other sensors for user localization.
The majority of device-free localization (DFL) techniques need some infrastructure to estimate the position. Some DFL studies, such as [15][16], benefit from wireless sensor networks (WSNs) for localization; their main idea is to place several wireless sensors around the desired area that communicate with each other. These wireless links are affected if a person is located or moves in this area; hence, the position can be estimated based on these wireless link variations. For instance, the authors in [17] used this method and formulated their DFL problem as an image classification problem. Afterwards, a three-layer convolutional auto-encoder neural network was suggested to extract features and compute the position from raw data with various patterns associated with different positions. Besides, in [18], the authors suggested a DFL method using orthogonal RFID tags attached to two adjacent walls and utilized several RFID readers to measure phase information and RSSI. The gathered information is fed to a PSO-based algorithm to estimate the position of an object in a 2D place.
It can be observed that device-free methods need several extra infrastructures that increase the localization cost. Also, they are not suitable for large buildings; in most cases, they need LOS communication and can only be used for one-floor buildings.
Conversely, device-based localization methods attempt to estimate the position by a device carried by users. These portable devices utilize various types of modules, such as Bluetooth or inertial sensors, to estimate the user location in indoor environments. Nowadays, most people use smartphones with different types of equipment, such as WiFi, Bluetooth and inertial sensors; hence, researchers have recently been attracted to using users' smartphones for localization. Among various technologies, WiFi-based methods have attracted enormous interest in indoor positioning because of their wide availability in most buildings [2]. In WiFi-based methods, some studies like [19] use the Time Of Arrival (TOA) technique to estimate the position, while the main disadvantage of TOA is the need for synchronization among all transceivers. In addition, several studies estimate the position based on the Angle of Arrival (AOA) approach, whose drawback is that it requires APs with multiple antennas. Besides, some studies locate the user based on the WiFi fingerprinting method. The main idea of fingerprinting is to estimate the location by matching the set of RSSI values collected from surrounding APs, named the fingerprint, with a prebuilt WiFi fingerprint dataset [20]. In [21], the authors proposed DeepFi for WiFi fingerprinting localization, which is a deep-learning-based approach. This method uses Channel State Information (CSI) from all antennas and all their subcarriers, which are analyzed with a four-hidden-layer deep network. Generally, CSI-based methods are more accurate than RSS-based ones because they use the amplitude and phase of the signal. However, it must be considered that modern smartphones cannot extract CSI, so CSI-based methods are not suitable for ILBS.
Unlike CSI-based methods, today's smartphones can easily measure RSS; hence, most prior studies focus on RSS-based WiFi fingerprinting. In this regard, the authors in [22] improve the accuracy of WiFi fingerprinting localization by using a Weighted K-Nearest Neighbor (WKNN) based on RSSI similarity and spatial position, while other KNN-based methods, such as [23], are based on Euclidean distance. In recent years, deep learning methods have been used for WiFi fingerprinting. In [21], a deep learning model was suggested for WiFi fingerprinting localization. Moreover, several studies use an Auto-encoder (AE) with their DNN model to improve the localization accuracy. For instance, in [24], the authors proposed a DNN system for building and floor classification and employed stacked auto-encoders (SAE) to reduce the feature space and improve accuracy. Also, a different DNN architecture with an SAE was proposed in [25] for multilabel classification of building ID, floor ID and position.
Generally, DNN models achieve higher accuracy by increasing their hidden layers. However, a deeper DNN model increases the computational complexity and the computation time. To overcome the aforesaid issues, a convolutional neural network (CNN) based structure was proposed in [26] for building and floor classification, which decreases the complexity and reduces the sensitivity of the model to signal variations. Additionally, the CNNLoc method was proposed for WiFi fingerprinting in [13], a CNN-based model including an SAE and a one-dimensional CNN. These methods are cloud/server based, in which all data processing is done on third-party platforms. Hence, the data transmission between the user and the cloud leads to privacy and security concerns and a significant time overhead.
_In summary, the main focus of most studies is to achieve the highest possible accuracy, which needs high computation and memory resources and is mostly based on cloud/server platforms._ Such cloud/server-based models are not suitable or efficient to run directly on edge devices with limited power, memory and computational resources. In addition, there are other critical factors for indoor localization, such as cost, latency and privacy/security concerns, which must be considered during system design. Hence, this study proposes a new edge-based WiFi fingerprinting approach to address these issues along with the highest possible accuracy needed for ILBS in multi-building and multi-floor places.
\begin{tabular}{|p{42.7pt}|p{56.9pt}|p{85.4pt}|p{113.8pt}|p{113.8pt}|} \hline
**Study** & **Category** & **Method** & **Technique** & **Attributes and Limitations** \\ \hline
15, 16, 17 & Device-Free & Wireless Sensor Network (WSN) & Signal variation when an object is placed between nodes & \(\bullet\) Extra infrastructure is needed \\ \hline
18 & Device-Free & RFID Tags & Phase and RSSI information is fed to a PSO algorithm to estimate the position & \(\bullet\) Not suitable for large buildings \\ \hline
19 & Device-Based & WiFi signal & TOA of the WiFi signal is used to estimate the position & \(\bullet\) Needs synchronization among all transceivers \\ \hline
21 & Device-Based & CSI-based WiFi fingerprint & Deep Learning (DL) & \(\bullet\) CSI-based methods are more accurate than RSS-based ones \(\bullet\) Modern smartphones and devices cannot extract CSI \\ \hline
22 & Device-Based & RSSI-based WiFi fingerprint & Weighted K-Nearest Neighbor (WKNN) & \(\bullet\) Modern smartphones and devices support RSS \\ \hline
23 & Device-Based & RSSI-based WiFi fingerprint & Euclidean Distance & \(\bullet\) Not edge-based methods, which leads to privacy and security concerns and time overhead \\ \hline
13, 24, 25, 26 & Device-Based & RSSI-based WiFi fingerprint & Deep Learning (DL) & \(\bullet\) Not edge-based methods, which leads to privacy and security concerns and time overhead \\ \hline \end{tabular}
## 3 Proposed method
This section introduces a new WiFi fingerprinting system to estimate a person's location in an indoor environment. The suggested architecture is based on a CNN classifier that identifies the position of an individual in indoor places. The system diagram of the proposed method is shown in Fig. 1. As shown in this figure, the localization system consists of three main phases: pre-processing, network training and post-training optimization. The details of each part are elaborated in the following.
### Data Pre-Processing and Dataset Preparation
The first step is to modify the raw input data so that only relevant data is fed into the proposed network, and in an appropriate format. There are three main phases to prepare the data for our network, as follows:
#### 3.1.1 Normalization
The normalization technique aims to compress the range of the input distribution without losing information and to facilitate model training. In this paper, the input data are the Received Signal Strength Indicator (RSSI) values of neighbouring access points, which are normalized and mapped into [0,1] by the following equation:
\[RSS_{i,new}=\begin{cases}0,&RSS_{i}>0\ (\text{no signal})\\ \dfrac{RSS_{i}-RSS_{min}}{-RSS_{min}},&\text{otherwise}\end{cases}\]
where \(RSS_{min}\) is the lowest value in the dataset. For example, in the UJIIndoorLoc dataset, RSSI values are between -104 dBm and 0 dBm; hence, for this dataset \(RSS_{min}=-104\).
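As a concrete illustration, the following is a minimal sketch of this normalization in Python, assuming the UJIIndoorLoc convention in which values above 0 (the placeholder value 100) encode an inaccessible AP:

```python
import numpy as np

def normalize_rssi(rss, rss_min=-104):
    """Map raw RSSI values (dBm) into [0, 1]; values > 0 (e.g. the
    placeholder 100 for an inaccessible AP) are mapped to 0."""
    rss = np.asarray(rss, dtype=np.float32)
    out = (rss - rss_min) / (-rss_min)
    out[rss > 0] = 0.0
    return out

print(normalize_rssi([-104, -60, 0, 100]))  # [0.0, ~0.42, 1.0, 0.0]
```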
#### 3.1.2 Region Gridding
Generally, the users' exact position is not required in the majority of ILBS applications [1]. Therefore, to improve the localization performance, we benefit from the gridding technique [27]: we divide the localization area into square cells of side length \(L\) and consider a unique coordinate for each cell, representing the user's location in the given region. This procedure for the UJIIndoorLoc dataset with \(L\)=20 is shown in Fig. 2. As is evident from this figure, each cell contains several samples whose position is taken to be the cell coordinate. For each cell, this unique coordinate is calculated by averaging the positions of all samples inside it.
This method decreases the number of classes in the proposed network model, which leads to a remarkable reduction in the network parameters and size.
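A minimal sketch of this gridding step is given below; the function name and the choice of flooring the coordinates to index cells are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def grid_cells(longitudes, latitudes, L=20.0):
    """Assign each sample to an L x L square cell and return per-sample
    cell labels plus each cell's representative coordinate (the mean
    position of its samples). Hypothetical helper, for illustration."""
    cells = np.stack([np.floor(longitudes / L),
                      np.floor(latitudes / L)], axis=1).astype(int)
    _, labels = np.unique(cells, axis=0, return_inverse=True)
    centroids = np.array(
        [[longitudes[labels == k].mean(), latitudes[labels == k].mean()]
         for k in range(labels.max() + 1)])
    return labels, centroids
```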
Figure 1: Our localization system diagram
#### 3.1.3 Dataset and Input preparation
To obtain independent and identically distributed (i.i.d.) features across the splits, which is a vital factor for training the network model, we combine the training and testing sets. Afterwards, the combined set is randomly split into training, validation and testing sets. Additionally, since we develop a CNN-based method for WiFi fingerprinting localization, the input must be a 2-D array of RSSI values. Hence, for the UJIIndoorLoc dataset, we create a 2-D array from the input, which is a vector with 520 elements. For this purpose, we reshape the input array into a 23x23 2-D array with 9 dummy elements (zero value). Note that, based on our simulations, the best place for the dummy data is the corners of the image, as shown in Fig. 3.
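The reshaping step can be sketched as follows; since the exact corner placement of the nine dummy zeros (Fig. 3) is not spelled out here, appending them before reshaping is an assumption made for illustration:

```python
import numpy as np

def to_radio_map(fingerprint):
    """Reshape a 520-element RSSI vector into a 23x23 grayscale
    'radio map'. Nine zero-valued dummy entries are appended before
    reshaping; the corner placement used in the paper (Fig. 3) is
    not specified, so this placement is an assumption."""
    padded = np.concatenate(
        [np.asarray(fingerprint, dtype=np.float32), np.zeros(9, np.float32)])
    return padded.reshape(23, 23)

print(to_radio_map(np.random.rand(520)).shape)  # (23, 23)
```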
### Network Architecture
In this paper, a Convolutional Neural Network (CNN) is used as the base of the network and a Convolutional Auto-encoder (CAE) is deployed to enhance its performance. Besides that, we use dropout, batch normalization, pooling and Global Average Pooling (GAP) layers to improve the network performance. In the following, we elaborate on the proposed network, which is shown in Fig. 5.
#### 3.2.1 Convolutional Neural Network
As mentioned above, this paper proposes a light network for WiFi fingerprinting localization with the highest possible accuracy that can run on user devices. We leverage a Convolutional Neural Network (CNN) to reduce the input size so that it is easier to process without losing important features. Moreover, compared with other machine learning methods, such as SVM and KNN, CNN networks are more robust to input data variations [26]. This is a critical property in WiFi fingerprinting localization since signal strengths can easily change in indoor environments due to different factors, ranging from the multipath effect (as a major factor) or electromagnetic interference to temporary obstacles blocking the WiFi signal [28][29].
As mentioned in sub-section 3.1.3, we reshape the input data from a vector to a 2-D form, so the input can be considered a grayscale image representing a radio map in which each pixel corresponds to the RSSI value from a different AP. Hence, the proposed CNN-based network can learn from the RSSI values (pixel values) and also the radio maps (patterns) of the surrounding APs [26].
#### 3.2.2 Convolutional Auto-encoder
A Convolutional Auto-encoder (CAE) benefits from the features of both CNNs and Auto-encoders (AEs). AEs are unsupervised learning methods which are generally used for denoising, dimension reduction and feature extraction by reconstructing the input data at the output. Therefore, in WiFi fingerprinting localization, this is a suitable way to shrink the RSSI value fluctuations caused by different noise sources in indoor environments, such as the multipath effect and the other sources mentioned before.
Figure 3: Input preparation for the UJIIndoorLoc dataset
Figure 2: UJIIndoorLoc dataset with L=20 region gridding
Unlike a general AE with some fully connected layers, a CAE uses convolution layers, which decreases the number of network parameters and subsequently reduces the network size. As shown in Fig. 4, a CAE has two main parts: a convolutional encoder with some convolution and pooling layers, and the complementary deconvolutional decoder with several deconvolution (transposed convolution) and unpooling (upsampling) layers.
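A minimal Keras sketch of such a light CAE is shown below; the channel counts and kernel sizes are illustrative placeholders rather than the tuned values reported in Section 4.1:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal CAE sketch; channel counts are illustrative placeholders.
inputs = tf.keras.Input(shape=(23, 23, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
encoded = layers.MaxPooling2D(2, padding="same")(x)            # encoder
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                           activation="relu")(encoded)         # decoder
x = layers.Conv2D(1, 3, padding="same")(x)
decoded = layers.Cropping2D(((0, 1), (0, 1)))(x)               # 24 -> 23
cae = tf.keras.Model(inputs, decoded)
cae.compile(optimizer="nadam", loss="mse")   # MSE loss as in Table 1
```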
#### 3.2.3 Network design
The proposed network architecture is shown in Fig. 5. The network comprises Input, CAE and Classifier parts. The input is a grayscale radio map image fed to the CAE layer, which is in fact the encoding part of the CAE that compresses and extracts the main features of the input. Then, the classifier is used to identify the building, floor and location of a user. To enhance the performance of this network, we use the following layers:
* **Dropout layer:** Dropout is generally used to overcome the over-fitting problem; the main idea is to randomly omit some units in each training iteration, so the network will not fit the training set too closely, which prevents over-fitting. In our proposed network, we use this technique before the last layer to avoid over-fitting.
* **Batch Normalization (BN):** This layer is utilized to address the internal covariate shift problem by modifying the input distribution of the various layers for each mini-batch; hence, the convergence rate of the network is increased by using batch normalization [30]. In the proposed method, BN is used after each convolutional layer and also after GAP, as shown in Fig. 5.
* **Global Average Pooling (GAP):** It is widely used instead of the fully-connected layer in CNN models by averaging each feature map. This layer decreases the model parameters, leading to a significant decline in model size and avoiding the over-fitting problem [31]. A sketch of a classifier head combining these layers is given below.
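Below is a hedged sketch of a classifier head combining these layers on top of the CAE encoder output (assumed 12x12x16, as in the CAE sketch above); layer sizes are illustrative, with the dropout rate of 0.6 and the Nadam optimizer taken from Tables 1 and 3:

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 1628  # number of grid cells for L = 1 (Table 5)

head = tf.keras.Sequential([
    tf.keras.Input(shape=(12, 12, 16)),   # CAE encoder output (assumed)
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.GlobalAveragePooling2D(),      # in place of a flatten + dense
    layers.BatchNormalization(),
    layers.Dropout(0.6),                  # best rate found in Table 3
    layers.Dense(num_classes, activation="softmax"),
])
head.compile(optimizer="nadam",
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
```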
### Post-training optimization
In this phase, the proposed model is optimized because it should be implemented on smartphones with limited memory and computational power. In this study, three different optimization methods are used to optimize the trained model, leading to a significant reduction in model size and inference latency, as shown in Fig. 1.
A model with a smaller size not only occupies less storage on the phone but also utilizes less RAM when it runs. Hence, there is more memory for other applications, which improves performance and stability. Besides, a model with lower latency is faster and also has a direct impact on power consumption. It must be noted that post-training optimization generally decreases the model accuracy; thus, there is a trade-off between accuracy and model size or latency which must be considered during the design process. For post-training optimization, the following methods are used [32].
**Pruning** is one of the most effective techniques for network optimization. In this technique, unnecessary weights which have the lowest impact on the network are set to zero, i.e., eliminated. This technique leads to a significant reduction in model size.
Fig. 4: Convolutional Auto-encoder
Fig. 5: Proposed network architecture
**Clustering**, also known as weight sharing, is another useful method for network compression. In this technique, in each layer, the weights are divided into N clusters. The centroid value of each cluster is used as the value of all weights in the cluster; so, the number of unique weights is considerably reduced, leading to a notable decrease in model size.
**Quantization:** To reduce the model size and the computation time, the precision of the numbers in the model, such as the weights, is reduced. The default number type is float32; in this paper, float16 and int8 quantization are used, and their impact on model performance in terms of model size and latency is evaluated.
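The following sketch shows how these three techniques might be chained with TensorFlow Model Optimization and TensorFlow Lite; the schedules, file names and dummy calibration data are assumptions, and the short fine-tuning passes that normally accompany pruning and clustering are elided:

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.models.load_model("cae_cnnloc.h5")  # hypothetical file

# Pruning to 50% sparsity; a short fine-tuning pass with
# tfmot.sparsity.keras.UpdatePruningStep() normally precedes stripping.
model = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, 0))
model = tfmot.sparsity.keras.strip_pruning(model)

# Weight clustering with 16 clusters per layer (fine-tuning elided).
model = tfmot.clustering.keras.cluster_weights(
    model, number_of_clusters=16,
    cluster_centroids_init=(
        tfmot.clustering.keras.CentroidInitialization.LINEAR))
model = tfmot.clustering.keras.strip_clustering(model)

# Int8 post-training quantization with dummy calibration data.
calibration_images = np.random.rand(100, 23, 23, 1).astype("float32")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = lambda: (
    [x[None]] for x in calibration_images)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_int8 = converter.convert()
open("cae_cnnloc_int8.tflite", "wb").write(tflite_int8)
```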
## 4 Model Evaluation and Experimental Results
In this section, the superiority of the proposed CAE-CNNLoc is evaluated in comparison with state-of-the-art methods. To examine the performance of the proposed method, we first apply CAE-CNNLoc to the UJIIndoorLoc dataset [23]; then, to test its scalability, we apply our model to two other public datasets named Tampere [33] and UTSIndoorLoc [13]. Moreover, as a case study, we use the proposed model for indoor localization in our department by collecting its WiFi fingerprinting dataset, named SBUK-D. Additionally, the CAE-CNNLoc network model is implemented with the TensorFlow 2.4.0 framework on Google Colaboratory with a Tesla T4 GPU, and its performance is then tested on an Android smartphone. The network parameters are listed in Table 1.
In this study, three different public datasets are used, whose details are explained as follows.
**UJIIndoorLoc**: This is the most common dataset for WiFi fingerprinting localization. It includes 3 buildings with 4 or 5 floors and covers a 108,703 m\({}^{2}\) region at the Universitat Jaume I in Spain. This dataset has training and testing sets with 19,938 and 1,111 samples respectively, and each sample has 529 features. The first 520 features are the RSSI values from different APs, ranging from -104 dBm to 0 dBm, with the null value 100 representing an inaccessible AP. The location information of each sample consists of the Building ID, Floor ID and Longitude-Latitude values in meters [23].
**Tampere:** This dataset was collected in a university building in Tampere, Finland. The building covers approximately 22,750 m\({}^{2}\) with five floors, but only four floors were used to create this dataset. In total, there are 991 APs in the building; hence, at each point, the RSSI value was recorded from these APs. A location in the Tampere dataset contains X, Y and Z, where Z represents the floor [33].
**UTSIndoorLoc:** This dataset was gathered in the FEIT Building at the University of Technology Sydney (UTS). This building covers nearly 44,000 m\({}^{2}\) and includes 18 floors, of which 16 floors were used to create the UTSIndoorLoc dataset. The position information consists of X, Y and Floor ID, and each sample has 590 RSSI values from different APs [13].
### CAE and CNN optimization
First, various structures of the proposed model are investigated to find the best possible model for our application. To this end, different layers, such as convolutional and pooling layers with different parameters, are tested to find the best structure. Table 2 shows some of the different structures and their performance, in which _Cx_ represents a convolutional layer with x channels, \(M\) refers to a max-pooling layer, and a slash symbol ( / ) separates the CAE part from the classifier part. For instance, _C16+C32/C32+C64+M_ refers to a model whose CAE part includes two convolutional layers with 16 and 32 channels and whose classifier part comprises two convolutional layers with 32 and 64 channels and a max-pooling layer. It must be noted that we only investigate small structures because our goal is to have a light network. As is clear from this table, the best performance is achieved by the model _C16+M/C32+C64_.
\begin{table}
\begin{tabular}{l l|l l} \hline
**Parameter** & **Value** & **Parameter** & **Value** \\ \hline
Activation Function & ReLU & Optimizer & Nadam \\ \hline
CAE Loss Function & MSE & Output Layer Activation Function & Softmax \\ \hline
CNN Loss Function & Sparse Categorical Crossentropy & & \\ \hline \end{tabular}
\end{table}
Table 1: General network parameters
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Model** & **Euclidean Mean error** & **Building hitrate** & **Floor hitrate** & **Total params** & **File Size (KB)** \\ \hline
**c16/c32+64** & 34.57 & 0.89153 & 0.751 & 129,756 & 469.69 \\ \hline
**c16+M+c32/64** & 2.65 & 0.99921 & 0.991 & 129,628 & 469.40 \\ \hline
**c16/M+c32+64+M** & 3.02 & 0.99762 & 0.990 & 129,756 & 468.97 \\ \hline
**c16+c32/c32+64+M** & 5.52 & 0.9929 & 0.986 & 139,004 & 503.67 \\ \hline
**c16+c32/c32+64+c128** & 4.82 & 0.99367 & 0.979 & 317,820 & 1,151.58 \\ \hline
**c16+M/c32** & 3.13 & 0.99921 & 0.989 & 66,972 & 212.57 \\ \hline
**c32+M/c32+64** & 2.73 & 0.99842 & 0.990 & 134,524 & 487.07 \\ \hline
**c32+c64+c128/c32+64** & 3.92 & 0.99604 & 0.985 & 254,524 & 921.97 \\ \hline
**c16/c32** & 18.54 & 0.95724 & 0.852 & 58,780 & 212.76 \\ \hline
**c16+M/c32+64** & **2.36** & 0.99921 & 0.993 & 129,756 & 469.91 \\ \hline \end{tabular}
\end{table}
Table 2: Different structures and their performance
### Impact of Improving Techniques on Localization Accuracy
In the proposed method section, some techniques are suggested to improve the localization performance. In this sub-section, we evaluate the effect of these techniques.
**Dropout**: The first technique is the dropout layer added to CAE-CNNLoc to address the over-fitting problem. This method has a _rate_ parameter that refers to the fraction of input units dropped in each epoch of the training phase. The impact of this method on localization accuracy is shown in Table 3. Based on this table, the best localization accuracy is achieved when the dropout rate is 0.6.
**Global-Average-Pooling (GAP)** replaces the fully-connected layer in the network model and reduces the network parameters. The impact of this layer on our network with \(L\)=1 is shown in Table 4. It can be seen that by replacing the fully-connected layer with a GAP layer, the number of parameters declines from 965,340 to 129,756 while the localization accuracy is barely affected. Meanwhile, this parameter reduction significantly impacts the model size, decreasing it from nearly 3.5 to 0.5 MB. Therefore, the GAP layer is a suitable way for our application to reduce the network size without any remarkable change in accuracy.
### Impact of Region Gridding on localization performance
In this sub-section, the effect of region gridding on localization performance is examined. Region gridding has a parameter \(L\), which is the side length of each square in the localization area. The CAE-CNNLoc results for different values of \(L\) are reported in Table 5, in which the localization accuracy and the model parameters and size are compared. Based on the given data, as \(L\) grows, the localization accuracy declines while the model parameters and size show a considerable reduction. This is reasonable because increasing \(L\) decreases the number of classes, which reduces the model parameters and consequently the model size. Additionally, a longer \(L\) generates a bigger square covering more points whose positions are all mapped to the unique coordinate of the square; hence, it increases the localization error. This reveals a trade-off between localization accuracy and model size, so depending on the application we can adjust \(L\) to achieve the best performance.
### Comparison with the existing methods
Now, to evaluate the efficiency of the CAE-CNNLoc method, we compare it with some recent studies. Some of the existing studies focus only on the building and floor accuracy, but in this study, in addition to the building and floor accuracy, the positioning mean error has also been investigated. Table 6 reports the localization accuracy of CAE-CNNLoc and some related methods. It can be observed that the proposed method achieves a superior improvement in localization accuracy. In this regard, the floor hitrate of CAE-CNNLoc is 0.9929 while that of all other studies is under 0.96; hence, the proposed method enhances the floor accuracy by nearly 3%. Moreover, the positioning mean error of CAE-CNNLoc is 2.36 m, which shows an improvement of about 4 meters compared with other studies, and finally, like some of the other methods, the building accuracy is approximately 100%.
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Method** & **Euclidean Mean error** & **Building hitrate** & **Floor hitrate** & **Total params** & **Num layers** \\ \hline \end{tabular}
\end{table}
Table 6: CAE-CNNLoc in comparison with other methods
\begin{table}
\begin{tabular}{c c c c c c} \hline
 & **Euclidean Mean error** & **Building hitrate** & **Floor hitrate** & **Total parameters** & **File Size (KB)** \\ \hline
**Flatten** & 2.21 & 1 & 0.993 & 965,340 & 3,494.83 \\ \hline
**GAP** & 2.36 & 0.9992 & 0.993 & 129,756 & 469.91 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of GAP and Flatten layer results
\begin{table}
\begin{tabular}{c c c c c c} \hline
**L (m)** & **Euclidean Mean error** & **Building hitrate** & **Floor hitrate** & **Num Classes** & **File Size (KB)** \\ \hline
**1** & 2.36 & 0.9992 & 0.993 & 1628 & 469.91 \\ \hline
**3** & 2.71 & 0.9992 & 0.992 & 1139 & 354.07 \\ \hline
**5** & 2.89 & 0.9992 & 0.992 & 865 & 289.97 \\ \hline
**7** & 3.52 & 0.9992 & 0.990 & 724 & 256.94 \\ \hline
**10** & 3.94 & 0.9992 & 0.992 & 541 & 214.01 \\ \hline
**20** & 6.41 & 0.9992 & 0.994 & 274 & 151.84 \\ \hline
**30** & 8.86 & 0.9992 & 0.994 & 182 & 130.40 \\ \hline \end{tabular}
\end{table}
Table 5: Region gridding effects on localization performance
### Noise resistance of CAE-CNNLoc
WiFi signals in indoor environments are highly vulnerable to noise; hence, the RSSI value changes easily under the different conditions mentioned in sub-section 3.2.1. Therefore, for indoor localization methods, it is critical to be resistant to noise. In this sub-section, we examine the performance of our model by adding noise to the data. For this aim, we randomly add 3, 5, 7 and 10 dBm of noise to the test data and evaluate the model's accuracy. As shown in Fig. 6, the localization accuracy declines as noise is added to the signals, which is reasonable; however, it must be noted that in the worst case, with 10 dBm of noise, the CAE-CNNLoc model shows 6.7 m (for L=1) and 7.6 m (for L=5) of error, which is still better than other methods. Therefore, these results properly verify the denoising property of the convolutional auto-encoder part of the CAE-CNNLoc model.
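A possible implementation of this robustness test is sketched below; whether the perturbation is drawn uniformly and how no-signal entries are treated is not specified, so both choices here are assumptions:

```python
import numpy as np

def add_rssi_noise(x_norm, noise_db, rss_min=-104):
    """Perturb normalized RSSI inputs by up to +/- noise_db dBm.
    The noise is drawn uniformly, rescaled to the normalized [0, 1]
    scale, and the result is clipped back into [0, 1]."""
    eps = np.random.uniform(-noise_db, noise_db, size=x_norm.shape)
    return np.clip(x_norm + eps / (-rss_min), 0.0, 1.0)

# e.g.: for n in (3, 5, 7, 10): evaluate(model, add_rssi_noise(x_test, n))
```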
### CAE-CNNLoc performance on android smartphone
As mentioned before, the main purpose of this paper is to suggest an on-device indoor localization model which can run on the user's device. Thus, we examined the performance of the CAE-CNNLoc model in the real world on an Android smartphone. In this regard, the proposed CAE-CNNLoc model was tested on a Redmi Note 8 and the results were reported. As two critical factors in on-device implementation, the inference time (latency) and model size were taken into consideration and the impact of the different techniques was reported.
**Region gridding:** First, the effect of the region gridding method was examined; in this regard, the results of the proposed method for different values of \(L\) are depicted in Fig. 7. From sub-section 4.3, it is clear that by increasing \(L\), the number of classes is decreased, leading to a remarkable reduction in network parameters, which consequently decreases the latency and network size, as shown in Fig. 7. For example, by setting \(L\)=10, we can decrease the latency by about 20% (65 us) and reduce the network size by over 2 times (255 KB), while the localization accuracy declines by just about 1.6 m compared with \(L\)=1. Hence, the region gridding method is a suitable choice for on-device WiFi fingerprinting localization.
Fig. 6: Effects of noise on CAE-CNNLoc performance
Fig. 7: Effects of region gridding on localization error, latency and model size
**GAP:** The second effective approach is to use a Global-Average-Pooling layer instead of a fully-connected layer in the network model. Fig. 8 illustrates the latency and network size of the CAE-CNNLoc model with GAP and Flatten layers. It is completely obvious that the GAP layer has a significant impact on network performance. Based on the results, the total network parameters with GAP are 129,756, while the network with a Flatten layer has 965,340 parameters, an over 7 times reduction. Consequently, the network size and latency decreased by over 7 and 3 times respectively, whereas the positioning error increased by only 0.15 m, as reported in Table 4.
**Pruning:** The first optimization method is pruning. Fig. 9-a shows the impact of this technique on system performance in terms of localization error and model size. Based on this figure, it is clear that as the pruning percentage increases, the localization error increases but the model size constantly decreases; so, as mentioned, there is a trade-off between localization accuracy and model size.
**Clustering:** Fig. 9-b depicts the efficiency of the clustering method as a compression technique. For instance, by considering 128 clusters for each layer, we can reduce the model size by approximately 3 times with a slight increase in localization error. Therefore, by means of this method, we obtain a remarkable reduction in model size without any considerable decrease in system accuracy.
Note that, in this study, the pruning and clustering methods do not affect latency because the weights are not removed from the network and are just set to zero (for pruning) or to the cluster value (for clustering). Hence, the computation time remains unchanged because the computation procedure is still executed with the same number of weights. However, this holds for smartphones, while in the case of hardware-level design, we can skip the zero weights (pruning) or use fewer unique weights (clustering) in computation operations, which leads to better performance and lower latency [35].
**Quantization:** In this paper, we utilize float16 and int8 quantization, whose results are shown in Fig. 10. First, with float16 quantization, the weight type changes from float32 to float16. As is clear from Fig. 10, float16 quantization reduces the network size by about 2 times but does not affect the localization accuracy. However, this quantization slightly increases the latency. This increase stems from the fact that the mobile processor can only do the computation in float32, so the numbers must be converted to float32 for computation, which takes time and increases the latency. Moreover, int8 quantization is able to shrink the network size by about 3.8 times and improve the latency by nearly 32% without any significant reduction in localization accuracy.
Fig. 8: Effects of GAP layer on latency and model size
Fig. 9: Effects of pruning and clustering on localization error and model size
Finally, we can combine all the mentioned optimization techniques to obtain the best possible network model that is as light as possible to run on end devices. Fig. 11 shows the performance of the combined model in comparison with the original model and the different optimization techniques. Based on this figure, a model that benefits from 50% pruning, clustering with 16 clusters and int8 quantization achieves a nearly 7.8 times reduction in model size with about a 1.8 m increase in localization error compared with the original model without any optimization. Also, the building and floor accuracies are approximately 99% and 98% respectively, which do not show a considerable reduction.
### CAE-CNNLoc Scalability
In this sub-section, the CAE-CNNLoc model is applied to two other public WiFi fingerprinting datasets to evaluate its scalability. Table 7 shows the performance of CAE-CNNLoc on the Tampere and UTSIndoorLoc datasets compared with recent studies. As is clear from this table, our model has superior performance on the mentioned datasets, proving the scalability of CAE-CNNLoc.
### Experiments on our dataset (SBUK-D dataset)
In this sub-section, we evaluate the performance of the proposed network model for localization in our department. For this aim, we generated the WiFi fingerprinting dataset named SBUK-D. The details of this dataset are as follows:
**SBUK-D dataset:** The authors gathered this dataset in the Engineering department at the Shahid Bahonar University of Kerman (SBUK). This building includes 3 floors and covers nearly 11,500 m\({}^{2}\). The position information consists of the Floor ID, X and Y, and there are 198 different APs in the building used to record the RSSI value at each location. The recorded RSSI values are between -100 dBm and 0 dBm, and as in the other datasets, inaccessible APs are set to 100. The dataset has 2292 samples for 70 different locations, collected with 4 Android smartphones.
Table 8 reports the results of CAE-CNNLoc on the SBUK-D dataset in comparison with the three other datasets. As is clear from this table, CAE-CNNLoc performs well on all datasets, and on SBUK-D it achieved just 1.73 m error
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline
 & \multicolumn{3}{c|}{**Best of CAE-CNNLoc**} & \multicolumn{3}{c}{**Best of other studies**} \\ \hline
**Dataset** & **Mean error** & **Building hitrate** & **Floor hitrate** & **Mean error** & **Building hitrate** & **Floor hitrate** \\ \hline
**UJIIndoorLoc** & 2.36 & 0.9992 & 0.993 & 6.20 & 1 & 0.937 \\ \hline
**Tampere** & 6.30 & - & 0.961 & 10.88 & - & 0.942 \\ \hline
**UTSIndoorLoc** & 1.72 & - & 0.996 & 7.60 & - & 0.946 \\ \hline \end{tabular}
* Best of UJIIndoorLoc from [34]; Tampere and UTSIndoorLoc from [13]
\end{table}
Table 7: Scalability of the CAE-CNNLoc model
Figure 11: Performance of the integration model (P: Pruning, C: Clustering, Q: Quantization)
Figure 10: Effects of quantization on localization error, latency and model size
with over 98 percent accuracy in predicting the floor. Moreover, the model size for our dataset is about 103 KB and the inference time on an Android phone is 198 \(\upmu\)s.
### User trajectory tracking
In this sub-section, the proposed model is used for indoor tracking. In this regard, several artificial trajectories were designed with some points from the UJIIndoorLoc dataset, and then the CAE-CNNLoc model was utilized to track the path. Fig. 12 shows a path and the one predicted by the CAE-CNNLoc model for a system with L=5. As is clear from this figure, in most cases, the model correctly tracks the trajectory. In situations where the location is not predicted accurately, it predicts the surrounding cells with the lowest error; hence, the tracking accuracy is still acceptable and the model can be utilized for indoor tracking as well.
## 5 Conclusion
This paper presents a light WiFi fingerprinting localization method named CAE-CNNLoc to estimate user positions in indoor environments. CAE-CNNLoc is based on a Convolutional Neural Network (CNN) joined with a Convolutional Auto-encoder (CAE), which leads to a significant reduction in the input dimension and in the model's sensitivity to input fluctuations. To compress the network model, three compression techniques (pruning, clustering and quantization) are used. As CAE-CNNLoc is a light model, it can easily run on edge devices such as smartphones, leading to a remarkable improvement in localization performance in terms of cost, latency, and users' privacy. The experimental results illustrate that the proposed model outperforms other studies in terms of mean positioning error and building and floor accuracy. In this regard, for the UJIIndoorLoc dataset, a model that benefits from 50% pruning, clustering with 16 clusters and int8 quantization is just 60 KB in size and obtains about 270 \(\upmu\)s of inference time with just a 4 m positioning error, while a model without any post-training optimizations achieves a 2.36 m positioning mean error with a size and inference time of 470 KB and 355 \(\upmu\)s, respectively. Besides, the proposed model shows significant performance on our new dataset, named SBUK-D, with a 1.73 m positioning error, 98% floor detection accuracy, an inference time of 198 \(\upmu\)s and a model size of 103 KB.
**Acknowledgment:** This research is partly supported by the NYUAD's Research Enhancement Fund (REF) Award on "eDLAuto: An Automated Framework for Energy-Efficient Embedded Deep Learning in Autonomous Systems", and by the NYUAD Center for Artificial Intelligence and Robotics (CAIR), funded by Tamkeen under the NYUAD Research Institute Award CG010.
## Declarations
**Conflict of interest:** The authors have no conflict of interest for publication of this paper.
**Data availability:**
Fig. 12: Trajectory tracking using the proposed model
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Dataset** & **Mean error** & **Building hitrate** & **Floor hitrate** & **Inference Time (\(\upmu\)s)** & **Model size (KB)** \\ \hline
**SBUK-D** & 1.73 & - & 0.985 & 198 & 103.49 \\ \hline
**UJIIndoorLoc** & 2.36 & 0.9992 & 0.993 & 355 & 469.91 \\ \hline
**Tampere** & 6.30 & - & 0.961 & 644 & 307.15 \\ \hline
**UTSIndoorLoc** & 1.72 & - & 0.996 & 360 & 272.31 \\ \hline \end{tabular}
\end{table}
Table 8: Best results of the CAE-CNNLoc model
**SBUK-D dataset:** The dataset generated during the current study are not publicly available due to privacy and other restrictions.
The following datasets analyzed during the current study are available in the following public domain resources:
**UJIIndoorLoc dataset:** [https://archive.ics.uci.edu/ml/datasets/ujiindoorloc](https://archive.ics.uci.edu/ml/datasets/ujiindoorloc)
**Tampere dataset:** [https://zenodo.org/record/889798](https://zenodo.org/record/889798)
**UTSIndoorLoc dataset:** [https://github.com/XudongSong/CNNLoc-Access](https://github.com/XudongSong/CNNLoc-Access)
|
2307.00667 | Morse Neural Networks for Uncertainty Quantification | We introduce a new deep generative model useful for uncertainty
quantification: the Morse neural network, which generalizes the unnormalized
Gaussian densities to have modes of high-dimensional submanifolds instead of
just discrete points. Fitting the Morse neural network via a KL-divergence loss
yields 1) a (unnormalized) generative density, 2) an OOD detector, 3) a
calibration temperature, 4) a generative sampler, along with in the supervised
case 5) a distance-aware classifier. The Morse network can be used on top of a
pre-trained network to bring distance-aware calibration w.r.t the training
data. Because of its versatility, the Morse neural network unifies many
techniques: e.g., the Entropic Out-of-Distribution Detector of (Macêdo et
al., 2021) in OOD detection, the one class Deep Support Vector Description
method of (Ruff et al., 2018) in anomaly detection, or the Contrastive One
Class classifier in continuous learning (Sun et al., 2021). The Morse neural
network has connections to support vector machines, kernel methods, and Morse
theory in topology. | Benoit Dherin, Huiyi Hu, Jie Ren, Michael W. Dusenberry, Balaji Lakshminarayanan | 2023-07-02T21:05:42Z | http://arxiv.org/abs/2307.00667v1 | # Morse Neural Networks for Uncertainty Quantification
###### Abstract
We introduce a new deep generative model useful for uncertainty quantification: the Morse neural network, which generalizes unnormalized Gaussian densities to have modes on high-dimensional submanifolds instead of just discrete points. Fitting the Morse neural network via a KL-divergence loss yields 1) an (unnormalized) generative density, 2) an OOD detector, 3) a calibration temperature, 4) a generative sampler, along with, in the supervised case, 5) a distance-aware classifier. The Morse network can be used on top of a pre-trained network to bring distance-aware calibration w.r.t. the training data. Because of its versatility, the Morse neural network unifies many techniques: e.g., the Entropic Out-Of-Distribution Detector of (Macêdo et al., 2021) in OOD detection, the one-class Deep Support Vector Description method of (Ruff et al., 2018) in anomaly detection, or the Contrastive One Class classifier in continual learning (Sun et al., 2021). The Morse neural network has connections to support vector machines, kernel methods, and Morse theory in topology.
## 1 Introduction
Neural networks are becoming prevalent in a large number of applications both in industry and research (Larson et al., 2022; Jumper et al., 2021; Ren et al., 2019; Han et al., 2021; Kivlichan et al., 2021). Because of their impressive performances, these models are likely to become increasingly trusted in a wider range of domains, including sensitive applications like in the medical field (Roy et al., 2021; Kivlichan et al., 2021; Jumper et al., 2021; Han et al., 2021). This makes the ability to quantify when the predictions of a neural network should be considered uncertain a critical issue, especially since neural networks are known to deliver wrong predictions very confidently (Nguyen et al., 2015; Goodfellow et al., 2015; Ovadia et al., 2019; Guo et al., 2017; Hendrycks and Gimpel, 2017; Lakshminarayanan et al., 2017). As a result, the development of methods to quantify neural network uncertainty is an increasingly important subject in deep learning research (Amodei et al., 2016). In particular, neural networks tend to produce confidently wrong predictions when presented with Out-Of-Distribution (OOD) inputs, that is, inputs that are far away from the data distribution with which the model was trained (Murphy, 2023; Nagarajan et al., 2021; Liu et al., 2020, 2022). Detecting OOD inputs as well as devising models that are aware of the distance from inputs to the training distribution are becoming key challenges in uncertainty quantification (Murphy, 2023).
One classical approach to detect OOD data is to fit a generative probability density to the In-Distribution data (IND) (since OOD points are often rare) and use it to detect OOD points as points with low probability (Murphy, 2023). This works well for normal data with a single mode, but becomes computationally prohibitive when the data has very complex modes requiring fitting large mixtures of simpler parametric models. On the other hand, standard deep generative models are easier to train and in theory able to express very complex modes. However, they have been shown to have some difficulty to distinguish IND from OOD even in the simpler case of distinguishing between MNIST and FashionMNIST (Nalisnick et al., 2019). Non-generative deep learning approaches like the one class deep Support Vector Data Description (SVDD) of (Ruff et al., 2018) have yielded better results on that front.
Another approach is to leverage the supervised information from a classifier to obtain finer-grained OOD detectors. For instance, (Lee et al., 2018; Ren et al., 2021) fit a multivariate Gaussian to the classifier embeddings for each of the labels, yielding a squared distance, the Mahalanobis distance, from the Gaussian modes, creating a powerful OOD detector. The Spectral Normalized Gaussian Process (SNGP) in (Liu et al., 2020, 2022) on the other hand fits an approximate Gaussian process to the classifier embeddings producing an input-dependent temperature that is used to scale the classifier logits so that it becomes distance-aware (i.e. its uncertainty grows as we move away from the training set distribution). Finally, the entropic OOD detector from (Macedo et al.,
2021) learns a classifier whose logits are distance-aware by construction and uses the entropy score of that classifier as an OOD detector. For further reference, we refer the reader to (Salehi et al., 2022; Bulusu et al., 2020; Ruff et al., 2018) for comprehensive surveys of the literature.
We propose a new deep generative model, the Morse network, which unifies a number of the separate supervised and unsupervised techniques mentioned above. This model produces a joint (unnormalized) density \(\mu(x,y)\) by taking the kernel similarity \(K(\phi_{\theta}(x),T(y))\) between the image \(\phi_{\theta}(x)\) of the feature \(x\) and the one-hot-encoded version \(T(y)\) of the label \(y\), which we set to a fixed value \(a\) in the unsupervised case (see sections 2 and 4 for details). The unsupervised formulation comprises mixtures of (unnormalized) Gaussian densities (and more generally exponential family densities) along with more flexible densities whose modes are submanifolds rather than discrete points. The unsupervised Morse network with a Gaussian kernel degenerates to the deep one-class SVDD proposed in (Ruff et al., 2018), except for a built-in regularizer in the loss somewhat reminiscent of the mixup regularizer (Pinto et al., 2022). For the Cauchy kernel, it has a temperature whose form coincides with that of SNGP, except that the variance is learned and not computed by large matrix inversion. The supervised Morse network with a Laplace kernel produces a distance-aware classifier that coincides with the entropic OOD classifier from (Macedo et al., 2021). Although this is not the focus of this work, we explain how the Morse network yields a sample generator by following certain gradient flows from random initial points, very much in the spirit of the Poisson generative model (Xu et al., 2022).
The focus of this work is to expose this unifying idea, and to explore the relationships with known approaches. Comprehensive evaluation of the approach and its extensions is the topic of future work.
## 2 The Unsupervised Morse Neural Network
We now introduce the (unsupervised) Morse neural networks. These networks produce (unnormalized) generative densities \(\mu_{\theta}(x)\in[0,1]\) directly on a feature space \(X=\mathbb{R}^{d}\) or on a space of embeddings \(\mu_{\theta}(h(x))\in[0,1]\) of the original features obtained from a pre-trained network \(h(x)\). Morse neural networks are very expressive in terms of the modes they can produce (see examples below). Recall that the modes of an (unormalized) density \(\mu:X\rightarrow[0,1]\) is the subset modes\((\mu)\subset X\) where the density achieves its highest possible value, namely 1. This set can be reduced to a single point (e.g., the mean of a Gaussian) or it can be more complex such as a smooth subset of \(X\) of higher dimension, like a curve, or a surface, or, more generally a \(k\)-dimensional submanifold of \(X\), as is the case for the Morse neural network densities. Intuitively, these generative densities are uniformly \(1\) on their mode submanifolds and decrease to \(0\) as we move away from these modes at a speed controlled by a special type of kernels, which we call Morse kernels:
**Definition 2.1**.: A _Morse kernel_\(K\) on a space \(Z=\mathbb{R}^{k}\) is a positive kernel \(K(z_{1},z_{2})\) taking its values in the interval \([0,1]\) and such that \(K(z_{1},z_{2})=1\) if and only if \(z_{1}=z_{2}\).
Many common kernels are Morse kernels. Namely, all kernels of the form \(K(z_{1},z_{2})=\exp(-\lambda D(z_{1},z_{2}))\), where \(D\) is a divergence in the sense of Amari (2016), are Morse kernels (since \(D(z_{1},z_{2})=0\) if and only if \(z_{1}=z_{2}\)). The Gaussian kernel and the Laplace kernel are Morse kernels, as is the Cauchy kernel \(K(z_{1},z_{2})=\frac{1}{1+\lambda\|z_{1}-z_{2}\|^{2}}\).
We are now ready to define the Morse neural network:
**Definition 2.2**.: A _Morse neural network_ is defined by the data of 1) a neural network \(\phi_{\theta}:X\to Z\) (with parameters \(\theta\)) from the feature space \(X=\mathbb{R}^{d}\) to a space \(Z=\mathbb{R}^{k}\), and 2) a Morse kernel \(K\) on \(Z\). The (unnormalized) density of a point \(x\in X\) is given by
\[\mu_{\theta}(x)=K(\phi_{\theta}(x),a) \tag{1}\]
where \(a\) is treated as an hyper-parameter of the model.
From the properties of Morse kernels, it is easy to see that \(\mu_{\theta}(x)\in[0,1]\) and that the modes of \(\mu_{\theta}(x)\) (i.e., the points where \(\mu_{\theta}(x)\) reaches 1, its highest possible value) coincide with the level set of \(\phi_{\theta}\) (Sec. A):
\[\text{modes}(\mu_{\theta})=\{x\in X:\;\phi_{\theta}(x)=a\}. \tag{2}\]
Fitting the Morse network to a dataset so that its modes approximate the modes of the data distribution is explained in section 3. Applications of a fitted unsupervised Morse network are detailed in section 2.1, which we briefly summarize here:
First, the input-dependent temperature \(T_{\theta}(x)=1/\mu_{\theta}(x)\), which is 1 on the density modes and grows away from them, can be used to scale the logits of a classifier to make it distance-aware in the spirit of (Liu et al., 2022). Second, the classifier \(s_{\theta}(x)=1-\mu_{\theta}(x)\) is an epistemic uncertainty score measuring our uncertainty in whether the point \(x\) comes from the training distribution. Hence, it can be used as an OOD detector. Third, the function \(V_{\theta}(x)=-\log\mu_{\theta}(x)\) is a form of squared distance from the density modes (see section A for details and the relationship with Morse-Bott theory). In consequence, the flow of its negative gradient field \(-\nabla_{x}V_{\theta}(x)\) converges to the mode submanifolds, giving us a way to generate new samples from random initial points, very much in the spirit of the generative Poisson flow from (Xu et al., 2022).
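These quantities are straightforward to compute once \(\mu_{\theta}\) is available; the following Python sketch uses a Gaussian Morse kernel and a toy feature map chosen purely for illustration:

```python
import numpy as np

def morse_density(phi, x, a, lam=1.0):
    """Unnormalized Morse density with a Gaussian Morse kernel:
    mu(x) = exp(-0.5 * lam * ||phi(x) - a||^2)."""
    z = np.atleast_2d(phi(x))
    return np.exp(-0.5 * lam * np.sum((z - a) ** 2, axis=-1))

# Toy feature map phi(x) = ||x|| (illustrative, not a trained network):
phi = lambda x: np.linalg.norm(x, axis=-1, keepdims=True)
x = np.random.randn(5, 2)
mu = morse_density(phi, x, a=1.0)  # density values in [0, 1]
ood_score = 1.0 - mu               # epistemic / OOD score s(x)
temperature = 1.0 / mu             # distance-aware temperature T(x)
V = -np.log(mu)                    # squared-distance function V(x)
```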
We highlight several examples to showcase the flexibility of this model.
Location/Scale densities: \(k=d\). All the standard location/scale densities of the form \(f((x-\mu)^{T}\Sigma^{-1}(x-\mu))\) can be recovered using a linear neural network with one layer and an invertible weight matrix. This encompasses the multivariate Gaussian, Student-\(t\), Cauchy, and Laplace densities. For all these densities we can take the same neural network \(\phi(x)=\Sigma^{-\frac{1}{2}}(\mu-x)\) and \(a=0\), but we change the kernel: The Gaussian kernel \(K(x,x^{\prime})=\exp(-\frac{1}{2}\|x-x^{\prime}\|^{2})\) produces the (unnormalized) multivariate Gaussian; the Laplace kernel \(K(x,x^{\prime})=\exp(-\|x-x^{\prime}\|)\) produces the (unnormalized) multivariate Laplace density; the Student-\(t\) kernel with \(\nu\) degrees of freedom \(K(x,x^{\prime})=(1+\frac{1}{\nu}\|x-x^{\prime}\|^{2})^{-\frac{d+\nu}{2}}\) produces the multivariate (unnormalized) Student-\(t\) density, of which the Cauchy density is a particular case.
Distributions with mode submanifolds: \(k<d\). To showcase that the Morse network can produce densities with mode submanifolds, we devise here an example where the density modes consist of a sphere whose radius is controlled by the hyper-parameter \(a\). For that, consider the map \(\phi(x)=\|x\|\) and the Gaussian kernel \(K_{\sigma}(z,z^{\prime})=\exp(-\frac{1}{2\sigma^{2}}(z-z^{\prime})^{2})\) on \(\mathbb{R}\) with bandwidth \(\sigma^{2}\). We obtain the density \(\mu_{a,\sigma^{2}}(x)=\exp(-\frac{1}{2\sigma^{2}}(\|x\|-a)^{2})\), whose modes form the sphere of radius \(a\). More generally, any regular value \(a\in Z=\mathbb{R}^{k}\) of a differentiable map \(\phi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\) will produce a density with a mode submanifold of dimension \(d-k\). In the kernel bandwidth limit \(\sigma^{2}\to 0\), the resulting limiting density is the uniform density on the level sets of \(\phi\).
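This spherical-mode density is easy to verify numerically, as in the short sketch below (the radius and bandwidth values are arbitrary illustrative choices):

```python
import numpy as np

a, sigma2 = 2.0, 0.1  # mode radius and kernel bandwidth
mu = lambda x: np.exp(-(np.linalg.norm(x, axis=-1) - a) ** 2 / (2 * sigma2))

xs = np.array([[2.0, 0.0],    # on the radius-a sphere  -> mu = 1
               [0.0, 0.0],    # at the origin           -> mu ~ 0
               [3.0, 0.0]])   # outside the sphere      -> mu ~ 0.007
print(mu(xs))
```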
Mixture models: \(k>d\). We illustrate how the Morse neural network can produce density mixtures. For instance, a mixture of \(l\) Gaussian densities on \(X\) is captured by taking the neural network \(\phi:X\to X^{l}\), where \(\phi_{i}(x)=\Sigma^{-\frac{1}{2}}(\mu_{i}-x)\) is the map for the \(i^{th}\) multivariate Gaussian on \(X\) and \(a\) is the zero vector. The kernel on \(Z=X^{l}\) is given by a convex sum of the Gaussian kernels on the components of \(Z\): \(K(z,z^{\prime})=\sum_{i=1}^{l}\alpha_{i}\exp(-\frac{1}{2}\|x_{i}-x_{i}^{\prime}\|^{2})\), where \(z=(x_{1},\ldots,x_{l})\). Since in this case the dimension of \(Z\) is larger than that of \(X\), the pre-image of zero is the empty set. However, the zero-level sets of the components of \(\phi\) give back the modes of the mixture of Gaussian densities. One can obtain mixtures of different densities by changing the component kernels.
Relation to divergences and the exponential family. Given a divergence \(D(z,z^{\prime})\) in the sense of Amari (2016), \(K(z,z^{\prime})=e^{-\lambda D(z,z^{\prime})}\) is a Morse kernel. A convex function \(A\) on \(Z\) produces a Bregman divergence \(D_{A}(z,z^{\prime})=A(z)+A^{*}(\eta^{\prime})-z\eta^{\prime}\), where \(A^{*}\) is the dual convex function and \(\eta^{\prime}=\nabla A(z^{\prime})\) is the Legendre transform (see Amari (2016) for details). The Morse network for this kernel is then \(\mu_{\theta}(x)=e^{\lambda(\phi_{\theta}(x)a-A^{*}(a)-A(\phi_{\theta}(x)))}\), which is an unnormalized version of the exponential family with canonical parameter \(a\), cumulant generating function \(A^{*}\), and dispersion \(\lambda\), when \(\phi_{\theta}(x)\) is the identity. One recovers the Gaussian case with \(A(z)=\frac{1}{2}z^{2}\).
Relation to Energy-based models. Energy-based models (Goodfellow et al., 2016) have been commonly used in generative modeling (Zhao et al., 2017; Gao et al., 2021; Arbel et al., 2021; Che et al., 2020). In fact, the Morse neural network can be rewritten as an unnormalized energy-based model as follows
\[\mu_{\theta}(x)=e^{-V_{\theta}(x)} \tag{3}\]
where the model energy is parameterized using the Morse kernel: \(V_{\theta}(x)=-\log K(\phi_{\theta}(x),a)\). In section A, we show that this positive function satisfies the Morse-Bott condition on the density modes and thus can be interpreted as a sort of squared distance from the modes. Note that normalized energy models have been used in the context of uncertainty quantification (Wang et al., 2021), where a classifier uncertainty is added as an extra dimension and learned as a joint energy model. Also note that the Morse neural network with an exponential family Morse kernel resembles the conjugate energy models from Wu et al. (2021).
### Applications and experimental results
OOD detection: Since \(\mu_{\theta}(x)\in[0,1]\) yields a density which is 1 on the modes and goes to zero as the distance from the modes increases, \(s_{\theta}(x):=1-\mu_{\theta}(x)\) provides a measure of how uncertain we are about \(x\) being drawn from the training distribution (epistemic uncertainty). Hence \(s_{\theta}(x)\) can be used as an OOD detector. Figure 1 (bottom row, middle right) shows that \(s_{\theta}(x)\) classifies points as OOD (value 1) as the distance from the two-moons dataset increases. As Table 1 shows, the unsupervised generative Morse network produces a detector capable of distinguishing MNIST images from the FashionMNIST training images, in contrast to other deep generative models (Nalisnick et al., 2019). It is also able to distinguish between CIFAR10 and CIFAR100 when trained on vision transformer embeddings, as reported in Table 1. Note that for the Gaussian kernel, \(V_{\theta}(x)=-\log\mu_{\theta}(x)\) coincides with the anomaly score introduced in (Ruff et al., 2018), and with the Mahalanobis distance when the network is furthermore linear and invertible.
Distance-aware calibration: We can use the Morse network to calibrate a classifier \(f(x)=\text{softmax}(h(x))\) trained on a supervised dataset \(D=\{(x_{i},y_{i})\}\) so that it becomes less confident on points far away from the data distribution, in the same spirit as SNGP (Liu et al., 2022). The idea is to fit a Morse network on the input data. Then we can scale the classifier logits \(h(x)\) by the inverse of the Morse temperature \(T_{\theta}(x):=1/\mu_{\theta}(x)\). This produces a classifier that becomes uncertain outside the region where
it has been trained (the more so as the Kernel bandwidth decreases), as demonstrated in Figure 1 (top row). Note that when using the kernel \(K_{\lambda}(z,z^{\prime})=1/\sqrt{(1+\lambda\|z-z^{\prime}\|^{2})}\), the temperature becomes \(T_{\theta}(x)=\sqrt{1+\lambda V_{\theta}(x)}\), where \(V_{\theta}(x)=-\log\mu_{\theta}(x)\), which has the same form as the SNGP temperature of (Liu et al., 2022, 2020), except that the squared distance function \(V_{\theta}(x)\) replaces the SNGP approximate variance.
Sample generation: The Morse gradient field \(F(x)=-\nabla_{x}V_{\theta}(x)\) is attracted to the density modes, which are the global minima (zeros) of the positive function \(V_{\theta}(x)=-\log K(\phi_{\theta}(x),a)\). If we follow its gradient flow from a random point, we obtain a new sample close to the data distribution modes. This approach resembles the Poisson generative flow from (Xu et al., 2022), where \(V_{\theta}(x)\) is called a potential and is derived by solving the Poisson PDE. We illustrate this approach with the two-moons dataset in Figure 1 (bottom, right), where random points following the Morse flow converge toward the two-moons dataset modes using gradient descent on \(V_{\theta}(x)\).
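For the spherical toy density above, the gradient flow can be written in closed form, giving the following illustrative sampler (the learning rate and step count are arbitrary choices):

```python
import numpy as np

a, sigma2, lr = 2.0, 0.1, 0.01

def grad_V(x):
    # Analytic gradient of V(x) = (||x|| - a)^2 / (2 * sigma2).
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return (r - a) / sigma2 * x / np.maximum(r, 1e-8)

x = 3.0 * np.random.randn(64, 2)        # random initial points
for _ in range(500):
    x = x - lr * grad_V(x)              # gradient flow toward the modes
print(np.linalg.norm(x, axis=-1)[:5])   # all radii close to a = 2.0
```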
## 3 Fitting the Morse neural network
Consider a dataset \(D=\{x_{1},\ldots,x_{n}\}\) sampled from a data distribution with density \(p(x)\). Theoretically, we want to find the neural network parameters \(\theta\) that minimize the KL divergence \(\text{KL}(p(x)\|\,\mu_{\theta}(x))\) for unnormalized densities (i.e., positive measures; see (Amari, 2016)) between the probability density \(p(x)\) generating the sample and the Morse network density \(\mu_{\theta}(x)\), that is,
\[\mathbb{E}_{x\sim p(x)}\left(\log\frac{p(x)}{\mu_{\theta}(x)}\right)\,+\int \mu_{\theta}(x)dx-\int p(x)dx \tag{4}\]
which amounts to minimizing w.r.t. \(\theta\) the following quantity
\[\mathbb{E}_{x\sim p(x)}\left(-\log K(\phi_{\theta}(x),a)\right)+\mathbb{E}_{x \sim\text{uni}}\left(K(\phi_{\theta}(x),a)\right)\]
The corresponding empirical loss is then
\[L(\theta)=-\frac{1}{n}\sum_{x\in D}\log K(\phi_{\theta}(x),a)+\frac{1}{n}\sum _{x\in D_{\text{uni}}}K(\phi_{\theta}(x),a)\]
which can be optimized with any iterative gradient-based optimizer. Note that \(D_{\text{uni}}\) is a set of points uniformly sampled in \(X=\mathbb{R}^{d}\), and that the second term in the Morse loss can be interpreted as a form of regularization in the spirit of the mixup regularizer term (Pinto et al., 2022). For the Gaussian kernel, the first term of this loss (with an added L2 penalty) coincides with the one-class Deep SVDD objective proposed in (Ruff et al., 2018) for anomaly detection as a way to simplify the loss of a deep one-class SVM in the case of normal data.
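A sketch of this objective and one optimization step is given below, using a Gaussian Morse kernel; truncating the uniform samples to a bounding box is a practical assumption, since sampling uniformly over all of \(\mathbb{R}^{d}\) is not possible:

```python
import tensorflow as tf

def morse_loss(phi, x_data, a, d, lam=1.0, n_uniform=128, box=10.0):
    """Empirical Morse loss with a Gaussian kernel. Uniform samples
    are drawn from [-box, box]^d -- a truncation assumption."""
    x_uni = tf.random.uniform((n_uniform, d), -box, box)
    log_k = lambda x: -0.5 * lam * tf.reduce_sum((phi(x) - a) ** 2, axis=-1)
    return (-tf.reduce_mean(log_k(x_data))            # data term
            + tf.reduce_mean(tf.exp(log_k(x_uni))))   # regularizing term

# One optimization step with a small MLP as phi:
phi = tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                           tf.keras.layers.Dense(2)])
opt = tf.keras.optimizers.Adam(1e-3)
x_batch = tf.random.normal((128, 2))                  # stand-in features
with tf.GradientTape() as tape:
    loss = morse_loss(phi, x_batch, a=0.0, d=2)
opt.apply_gradients(zip(tape.gradient(loss, phi.trainable_variables),
                        phi.trainable_variables))
```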
## 4 The supervised Morse neural network
The Morse neural network architecture offers a natural way to incorporate supervised labels to obtain a finer grained generative model. There are two ways to do it:
Separate Morse networks:For each label \(y\in\{1,\ldots,C\}\), we can fit a separate unsupervised Morse network to the subset of the data with label \(y=i\), producing a separate density \(\mu(x|i)=K_{i}(\phi_{\theta_{i}}(x),a_{i})\) for each label. The overall density is then given by the average \(\mu(x)=\frac{1}{C}\sum_{i}\mu(x|i)\). The corresponding OOD detector is \(s(x)=1-\mu(x)\). We can also create a classifier from this data by interpreting \(V_{i}(x)=-\log\mu(x|i)\) as a squared distance between \(x\) and the modes of the data with label \(i\); the classifier simply associates to a point \(x\) the label \(i\) with the smallest \(V_{i}(x)\).
\begin{table}
\begin{tabular}{c|c|c} IND/OOD & Method & AUROC \\ \hline FashionMNIST / MNIST & Morse & 0.998 \\ & **DoSE\({}_{KDE}\)** & 0.998 \\ CIFAR10 / CIFAR100 & Morse & 0.955 \\ & **DoSE\({}_{SVM}\)** & 0.571 \\ \end{tabular}
\end{table}
Table 1: **OOD detection with unsupervised Morse:** As a proof of concept, we report unsupervised Morse AUROC along with best baselines from (Morningstar et al., 2021). For CIFAR10/CIFAR100, we report Morse AUROC trained on embeddings from a pre-trained vision transformer to demonstrate the benefit of this approach. If trained on CIFAR10 directly, Morse AUROC is 0.569, corresponding to the second best performance (**DoSE\({}_{KDE}\)**) from (Morningstar et al., 2021). (See Appendix B.)
Figure 1: **Top row (distance-aware calibration): Left:** Probability output of a ResNet trained to separate the noisy two-moons dataset. **Middle and Right:** Probability plots of the same ResNet but with its logits calibrated using the unsupervised Gaussian Morse temperature at decreasing kernel bandwidth. The classifier becomes uncertain away from the training data. **Bottom row (noiseless two-moons): Left (density plot):** The Morse density concentrates on the two-moons modes. **Middle Left (mode plot):** The modes learned by the Morse network approximate well the two-moons modes. **Middle Right (OOD detection):** the Morse detector classifies points away from the modes as OOD. **Right (sample generation):** Random points following the Morse flow converge to the modes. See Appendix B for experiment details.
When the kernels are taken to be all Gaussian, the resulting classifier coincides with the ILOC classifier (Sun et al., 2021) (except for the regularizing terms in the loss) used in the context of continual learning to avoid catastrophic forgetting, since it allows learning new tasks in complete isolation from the previous ones. (See also (Hu et al., 2021) for similar ideas in the same continual learning context.)
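A minimal sketch of the induced classifier, where `V_list` is a hypothetical list of per-class potentials \(V_{i}(x)=-\log\mu(x|i)\) obtained from the separately trained networks:

```python
import torch

def separate_morse_classify(V_list, x):
    potentials = torch.stack([V(x) for V in V_list], dim=-1)  # (batch, C)
    return potentials.argmin(dim=-1)  # label whose modes are closest to x
```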
Shared Morse network:The previous approach can be computationally intensive when there is a large number of labels. The Morse network offers a more efficient alternative by taking as a model for the joint density
\[\mu(x,y)=K(\phi_{\theta}(x),T(y)). \tag{5}\]
where \(T(y)\) is the one-hot encoding of the label \(y\in\{1,\dots,C\}\) and \(\phi_{\theta}:X\rightarrow\mathbb{R}^{C}\) is a neural network shared across all labels. (For simplicity, we identify \(y\) and \(T(y)\), and will at times use \(e_{i}\) to denote the \(i^{th}\) basis vector.) We can use the same KL divergence minimization principle (between unnormalized densities) as in the unsupervised case of Section 3, but now using the joint density \(p(x,y)\) as the density generating the data, yielding the following empirical loss for the supervised network:
\[L(\theta)=-\frac{1}{n}\sum_{(x,y)}\log K(\phi_{\theta}(x),y)+\frac{1}{n}\sum_ {(x^{\prime},y^{\prime})}K(\phi_{\theta}(x^{\prime}),y^{\prime})\]
where \((x,y)\) ranges over the supervised dataset, and the \((x^{\prime},y^{\prime})\)'s are obtained by uniform sampling on the joint feature and label space. In this case, the generative density can be obtained by marginalization
\[\mu(x)=\sum_{y}K(\phi_{\theta}(x),y), \tag{6}\]
producing the OOD detector \(s(x)=1-\mu(x)\).
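A minimal sketch of the shared supervised loss and the marginalized OOD score of (6), again assuming a Gaussian kernel; `phi` maps inputs to \(\mathbb{R}^{C}\):

```python
import torch
import torch.nn.functional as F

def supervised_morse_loss(phi, x, y, x_uni, y_uni, num_classes, bw=1.0):
    """Empirical loss for mu(x, y) = K(phi(x), T(y)), Gaussian kernel assumed."""
    def sq_dist(xx, yy):
        return ((phi(xx) - F.one_hot(yy, num_classes).float()) ** 2).sum(-1)
    pos = (sq_dist(x, y) / (2 * bw**2)).mean()                     # -log K on data pairs
    neg = torch.exp(-sq_dist(x_uni, y_uni) / (2 * bw**2)).mean()   # uniform (x', y') pairs
    return pos + neg

def supervised_morse_ood_score(phi, x, num_classes, bw=1.0):
    """s(x) = 1 - mu(x), with mu(x) = sum_y K(phi(x), e_y) as in (6)."""
    d = phi(x).unsqueeze(1) - torch.eye(num_classes).unsqueeze(0)  # (batch, C, C)
    mu = torch.exp(-(d ** 2).sum(-1) / (2 * bw**2)).sum(-1)        # (batch,)
    return 1.0 - mu
```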
Experimental results:Using this supervised Morse detector trained on the embeddings \(h(x)\) of a vision transformer fine-tuned on CIFAR10, we observe an improvement in AUROC from 0.955 (for the unsupervised Morse detector) to 0.969 in the same CIFAR100-detection setting. Figure 2 also illustrates visually how the supervised Morse network can learn disconnected mode submanifolds better than the unsupervised version. (See Appendix B for details.)
Note that we also obtain a classifier that is distance-aware in the sense of (Liu et al., 2022) by construction:
\[\mu(y|x)=\frac{K(\phi_{\theta}(x),y)}{\sum_{y^{\prime}}K(\phi_{\theta}(x),y^{ \prime})}=\frac{\exp(-V_{y}(x))}{\sum_{y^{\prime}}\exp(-V_{y^{\prime}}(x))} \tag{7}\]
In the case of the Laplace kernel, this classifier coincides with the classifier used for entropic OOD detection (Macedo et al., 2021), where the entropy of the classifier output is used as an OOD score. The main difference between the supervised Morse classifier with Laplace kernel and the entropic classifier from (Macedo et al., 2021) is that the \(y\)'s are learned for each class and the maximum likelihood loss is used on \(\mu(y|x)\) rather than the Morse loss, making the second terms of the two losses related but different.
## Acknowledgements
We would like to thank Jasper Snoek, Sharat Chikkerur, Mihaela Rosca, James Allingham, Alan Weinstein, and the reviewers for helpful discussions and feedback, as well as Patrick Cole for his support.
|
2305.13704 | FlowChroma -- A Deep Recurrent Neural Network for Video Colorization | We develop an automated video colorization framework that minimizes the
flickering of colors across frames. If we apply image colorization techniques
to successive frames of a video, they treat each frame as a separate
colorization task. Thus, they do not necessarily maintain the colors of a scene
consistently across subsequent frames. The proposed solution includes a novel
deep recurrent encoder-decoder architecture which is capable of maintaining
temporal and contextual coherence between consecutive frames of a video. We use
a high-level semantic feature extractor to automatically identify the context
of a scenario including objects, with a custom fusion layer that combines the
spatial and temporal features of a frame sequence. We demonstrate experimental
results, qualitatively showing that recurrent neural networks can be
successfully used to improve color consistency in video colorization. | Thejan Wijesinghe, Chamath Abeysinghe, Chanuka Wijayakoon, Lahiru Jayathilake, Uthayasanker Thayasivam | 2023-05-23T05:41:53Z | http://arxiv.org/abs/2305.13704v1 | # FlowChroma - A Deep Recurrent Neural Network for Video Colorization+
###### Abstract
We develop an automated video colorization framework that minimizes the flickering of colors across frames. If we apply image colorization techniques to successive frames of a video, they treat each frame as a separate colorization task. Thus, they do not necessarily maintain the colors of a scene consistently across subsequent frames. The proposed solution includes a novel deep recurrent encoder-decoder architecture which is capable of maintaining temporal and contextual coherence between consecutive frames of a video. We use a high-level semantic feature extractor to automatically identify the context of a scenario including objects, with a custom fusion layer that combines the spatial and temporal features of a frame sequence. We demonstrate experimental results, qualitatively showing that recurrent neural networks can be successfully used to improve color consistency in video colorization.
Keywords:Video colorization Image colorization Recurrent Neural Networks.
## 1 Introduction
Colorizing a grayscale image to achieve a natural look has been a much-explored research problem in the recent years, especially with the rise of deep learning-based approaches for image processing. A primary goal has been to produce diverse colorizations, while also providing plausible colorizations that apply correct colors to identified objects. Desaturating an image is a surjective operation, but it is not injective. Hence, there are multiple possible colors to choose from when considering a pixel in a grayscale image - it is a one-to-many mapping.
Compared to the image colorization problem, colorizing black and white videos has largely been left behind. This problem has abundant training data, as one could easily convert a video to grayscale and test the colorization against the original video. Video colorization could be used as a video preprocessing technique, such as to enhance CCTV footage, and to restore old movies and documentaries. One could argue that video colorization could be taken as a direct extension of image colorization, where successive application of frame colorization would produce a colorized video. But obviously, there is no guarantee
that the selected image colorization technique would color successive frames consistently, known as temporal coherence, since it would consider each frame as a separate task, ignoring the contextual connections between frames. This would result in flickering colors, reducing the usefulness of such results.
The other prime obstacle has been the high computational costs in colorizing videos [14, 26] - it adds another dimension across time on top of the already computationally intensive image colorization.
Furthermore, we observed that the most realistic image colorization results from current techniques are produced when some sort of human intervention is made, such as user scribbles that guide the colorization process [6, 14]. While this is feasible for a few images, it certainly does not scale up for videos with thousands of consecutive frames, as commercial videos run at 24 or more frames per second. Thus, efficiently colorizing a video with resource constraints and minimal supervision poses an interesting research problem.
There's a plethora of early video content shot in black and white that was enjoyed by older generations and remembered fondly. Such classical content is mostly forgotten and the later generations prefer colored content. Colorizing existing content is much cheaper than reproducing them entirely in color today.
Our research contributions are as follows;
1. We propose a new fully automated video colorization framework focusing on improved temporal and contextual coherence between frames and scene changes.
2. We use a Recurrent Neural Network (RNN) based architecture to maintain contextual information across frames for consistent coloring.
3. We study the effects of using RNNs on the colorization of videos.
## 2 Related Work
Most of the previous work in the colorization domain has been done for image colorization, and video colorization is now gaining momentum with their success. The current image colorization algorithms can broadly be put into two major categories: parametric methods [1, 2, 3, 4, 5, 6, 7] and non-parametric methods [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Parametric methods learn predictive functions from large datasets of color images; once the predictive function's parameters are learned with an appropriate optimization objective, it is ready to predict colors in a fully automatic manner. Alternatively, non-parametric methods require some level of human intervention.
There are mainly two non-parametric methods explored in the literature: scribble-based and transfer-based. Scribble-based colorization schemas [12, 14, 17, 18, 20] require manually chosen color scribbles on the target grayscale image. In few instances, scribble-based colorization methods are extended to video colorization as well [14, 17]. Transfer-based colorization schemas [9, 10, 11, 13, 15, 16, 19] require the user to select semantically similar colorful reference images to match similar segments of the target grayscale image.
Applying non-parametric methods on both image and video colorization has a number of drawbacks, the most prominent among which is the inability to fully automate the colorization process. In color transferring approaches, there is a manual intervention in searching for colorful reference images. Scribble-based colorization may require tens of well-placed scribbles plus a carefully chosen, rich pallet of colors in order to achieve convincing, natural results for a complex image.
Both scribble-based and transfer-based video colorization schemas can only be automated within a frame sequence without a scene change; i.e. at each scene change, if the process is scribble-based, the user will have to introduce a new set of scribbles. If it is transfer-based, a new selection of swatches with or without a new reference image will be required.
Comparatively, parametric colorization schemas can fully automate the colorization process. Deshpande et al. [3] proposed a parametric image colorization schema which formulates the colorization problem as a quadratic objective function and trained it using the LEARCH framework [24]. With the unparalleled success of deep neural networks, solutions that utilize DNNs have been proposed as parametric image colorization schemas. Cheng et al. [2] proposed an image colorization schema which leverages a three-layer fully connected neural network trained on a set of image descriptors as inputs: luminance, DAISY features [28] and semantic features. More recently, many authors have employed convolutional neural networks (CNNs) and generative adversarial networks (GANs) in their colorization schemas rather than conventional deep neural networks (DNNs). Zhang et al. [5] proposed a CNN-based colorization schema which predicts a probability distribution of possible colors for each pixel in order to address the typical ambiguous and multimodal nature of image colorization [9].
They also introduced a CNN based color recommender system [6] that propagates user-provided scribbles while satisfying high level color preferences of the user. Larsson et al. [7] trained an end-to-end network to predict colors of an image with the hypercolumns [8] for each pixel generated from a pre-trained VGG-16 network without a classification layer. Iizuka et al. [4] proposed a colorization method that utilizes a CNN based architecture, combining a high-level semantic feature extractor, a mid-level feature network and a colorization network. More recently, inspired by the colorization model of Iizuka et al. [4], Baldassarre et al. [1] replaced the high-level semantic feature extractor in the colorization model of Iizuka et al. [4] with a pre-trained CNN image classifier: Inception-ResNet-v2 [27]. This transfer learning approach significantly reduces the computational time as well as the need for extreme amounts of data and hardware resources to train the colorization network to yield a quality colorization result.
Most of the fully-automatic, parametric image colorization solutions can be extended to video colorization domain by treating a video merely as a sequence of independent frames. But considering video frames independently causes colors to shift erratically, failing to maintain temporal coherence throughout the frame sequence, causing visual fatigue for viewers. For an example, a wall in one frame
may be colored in one shade of yellow and the same wall should maintain that color in subsequent frames, rather than changing to a shade of white. Failing to capture these details drastically reduces the quality of colored videos, because the user can notice color mismatches and flickering between video frames. In this research, we explore the effectiveness of employing RNNs to preserve the temporal coherence in video colorization while mitigating the challenges of computational time and need for large amounts of data, with the help of a transfer learning application.
## 3 Proposed Approach
When modeling the video colorization problem as a learnable function, we have chosen the CIE La*b* color space to represent video frames. According to Ruderman et al. [25], the La*b* color space was developed to minimize correlation between the three coordinate axes of the color space. It provides three decorrelated, principal channels: an achromatic luminance channel L and two chromatic channels, a* and b*. If we have a grayscale frame, we already have the luminance layer of that particular frame; the next step is finding a plausible a*, b* combination and fusing them together to produce the final colored frame, while maintaining temporal coherence when predicting the a* and b* combinations. Therefore, the main assumption here is that for every luminance component of video frames
\[{X_{\text{t}}}^{L}\in R^{\text{H}\times\text{W}\times 1} \tag{1}\]
there exists a function F such that
\[F:\{{X_{\text{t}}}^{L},{X_{\text{t-1}}}^{L},...,{X_{\text{t-(T-1)}}}^{L}\} \rightarrow({X_{\text{t}}}^{\text{a*}},{X_{\text{t}}}^{\text{b*}}) \tag{2}\]
Here, \({X_{\text{t}}}^{\text{k}}\) represents the a* or b* color layer in the \(t^{\text{th}}\) time frame, while H, W and T represent the frame height, the frame width and the total number of previous frames used for prediction, respectively.
The chromatic channels a* and b* define an Euclidean space where the distance to the origin determines the chroma. Change of values in one channel imposes minimal effect on values of the other two. This decorrelation of the three channels allows us to combine the luminance with the predicted chromatic channels, ensuring an image construction with high level of detail but with almost non-existent cross-channel artifacts.
### Proposed Architecture
FlowChroma architecture can be divided into five main components, as shown in Figure 1: the CNN encoder, global feature extractor, stacked LSTM, fusion layer and the CNN decoder. We include Inception-ResNet-v2 network as a global feature extractor; this is a transfer learning technique, drawing inspiration from
the works of Iizuka et al. [4] and Baldassarre et al. [1] This significantly reduces the computational complexity in training the model.
Although the use of Long Short-Term Memory (LSTM) units [29] to support video colorization has been proposed before, this is one of the first architectures to produce experimental results showing its effectiveness specifically for video colorization. An LSTM is a special form of recurrent neural network (RNN). All RNNs have loops within their architecture, acting as a memory cell allowing information to persist for a certain period. They are able to connect previously learned information to the present task. LSTMs specifically outperform regular RNNs in many scenarios, as they have a superior ability to learn longer-term dependencies than vanilla RNNs. An arbitrary frame sequence can include scene changes as well. Therefore, our model also needs to learn how much it should remember or forget while generating a frame sequence - this criterion makes LSTMs an ideal candidate for our use case over vanilla RNNs.
As shown in Figure 1, the CNN encoder extracts local features such as texture and shapes while the Inception-ResNet-v2 extracts high level semantic information such as objects and environments from an individual frame. A stacked LSTM is being employed to grasp temporal features of a sequence of frames. The outputs from the CNN encoder, Inception network and the LSTM are then fused together in the fusion layer to provide inputs to the colorization network or the
Figure 1: FlowChroma Architecture Diagram
CNN decoder. The CNN decoder is used to predict a* and b* layers related to the input luminance frame at the current time step in a spatio-temporal manner.
### Grasping Local & Global Features of each Individual Frame
In order to grasp local features such as shapes in the frame at each time step, we apply a CNN encoder to every temporal slice of the input. It processes a \(t\times H\times W\) grayscale frame sequence and outputs a sequence of \(t\times H/8\times W/8\times 256\) feature encodings.
Global features such as objects and environments are helpful for the CNN decoder to provide an appropriate colorization. The high-level feature extractor is a pre-trained Inception-ResNet-v2 model without the last softmax layer. When training FlowChroma, we keep Inception's weights static. At each time step, we scale the input luminance frame to \(299\times 299\) and then stack it onto itself to obtain a three-channel frame, in order to satisfy Inception's input dimensionality requirements. Then we feed the resultant frame to Inception and obtain its logits output (the output before the softmax layer). When the results at each time step are combined, we get a final embedding of \(t\times 1000\) for the entire sequence.
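The per-timestep application of the encoder can be expressed by folding the time axis into the batch axis. Below is a minimal PyTorch-style sketch (channels-first layout); the paper specifies only the input/output shapes, so the exact layer stack here is our assumption:

```python
import torch
import torch.nn as nn

class TimeDistributed(nn.Module):
    """Apply a module independently to every temporal slice of (B, T, ...)."""
    def __init__(self, module):
        super().__init__()
        self.module = module
    def forward(self, x):                      # x: (B, T, C, H, W)
        B, T = x.shape[:2]
        y = self.module(x.flatten(0, 1))       # run on all B*T frames at once
        return y.view(B, T, *y.shape[1:])      # restore the time axis

# Three stride-2 convolutions give the stated 1/8 downsampling:
# (B, T, 1, H, W) -> (B, T, 256, H/8, W/8). Inception can be wrapped the same way.
encoder = TimeDistributed(nn.Sequential(
    nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
))
```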
### Capturing Temporal Features
In order to grasp temporal variations of the frame sequence, we use a 2-layer stacked LSTM model. The CNN encoder provides a local feature encoding of \(t\times H/8\times W/8\times 256\). By employing a global average pooling operation on that encoding at each time step, we obtain an embedding of \(t\times 256\), which can be used as input to the stacked LSTM.
Figure 2: FlowChroma Architecture: The CNN encoder extracts local features while the Inception network extracts high level semantic information from a frame. The stacked LSTM grasps temporal features from a sequence of frames. The outputs from the CNN encoder, Inception network and the LSTM are then fused together in the fusion layer to provide inputs to the colorization network or the CNN decoder. Note that the CNN encoder, decoder, fusion layer and Inception network are all applied to every temporal slice of the input.
The stacked LSTM has two LSTM layers, each with 256 hidden states, giving an output with dimensions \(t\times 256\). This output improves the temporal coherence of the video colorization predictions.
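A minimal sketch of this pooling-plus-LSTM step (PyTorch assumed, channels-first layout as in the encoder sketch above):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=256, hidden_size=256, num_layers=2, batch_first=True)

def temporal_features(enc_out):              # enc_out: (B, T, 256, H/8, W/8)
    pooled = enc_out.mean(dim=(-2, -1))      # global average pool -> (B, T, 256)
    out, _ = lstm(pooled)                    # 2-layer stacked LSTM -> (B, T, 256)
    return out
```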
### Fusing Local and Global Spatial Features with Temporal Features
Fusing local and global level spatial features with temporal features is done by a specially crafted fusion layer, first introduced by Iizuka et al. [4]. Similar to the CNN encoder, we apply the fusion layer to every temporal slice of the input. The fusion layer takes the output embeddings from Inception and the stacked LSTM, replicates them \(H/8\times W/8\) times, and then concatenates them with the output provided by the CNN encoder. The fusion mechanism is more comprehensively illustrated in Figure 3.
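A minimal sketch of the fusion step: the Inception (1000-d) and LSTM (256-d) embeddings are tiled over the \(H/8\times W/8\) grid and concatenated with the 256-channel encoder features, giving the 1512 channels consumed by the decoder:

```python
import torch

def fuse(enc_out, incep_out, lstm_out):
    # enc_out: (B, T, 256, h, w); incep_out: (B, T, 1000); lstm_out: (B, T, 256)
    B, T, _, h, w = enc_out.shape
    g = torch.cat([incep_out, lstm_out], dim=-1)        # (B, T, 1256)
    g = g[..., None, None].expand(B, T, 1256, h, w)     # replicate h*w times
    return torch.cat([enc_out, g], dim=2)               # (B, T, 1512, h, w)
```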
### Colorization Decoder Network
Once the local and global spatial features are fused with temporal features, they are processed by a set of convolutions and up-sampling layers in the CNN decoder. Similar to the CNN encoder and fusion layer, we apply the CNN decoder to every temporal slice of the input. The decoder takes a \(t\times H/8\times W/8\times 1512\) input and produces a final output with dimensions \(t\times H\times W\times 2\). The resultant sequence can be considered as the sequence of a* and b* layers for the input sequence of luminance frames; once this result is appropriately merged with the input sequence, we obtain the final colorized frame sequence.
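The decoder's layer stack is not specified beyond its input and output shapes, so the sketch below is one plausible arrangement (PyTorch, time folded into the batch as before); the final Tanh assumes a*/b* values normalized to \([-1,1]\):

```python
import torch.nn as nn

# (B*T, 1512, H/8, W/8) -> (B*T, 2, H, W)
decoder = nn.Sequential(
    nn.Conv2d(1512, 256, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),
)
```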
### Optimization and Learning
Optimal model parameters were found by minimizing an objective function defined over predicted outputs and actual results. To quantify the loss, the mean squared error between each pixel in the a* and b* layers of the predicted and actual results was used.
Figure 3: Fusion Layer - the outputs of the Inception network and the LSTM are replicated and stacked with the CNN encoder’s output.
If we consider a video V, the MSE loss is estimated by,
\[C(X,\Theta)=\frac{1}{2nHW}\sum_{t=0}^{n}\sum_{k\in a,b}\sum_{i=1}^{H}\sum_{j=1}^{ W}(X^{\mathrm{k}}{}_{t_{i,j}}-\hat{X}^{\mathrm{k}}{}_{t_{i,j}})^{2} \tag{3}\]
Here \(\theta\) represents all model parameters and \(X^{\mathrm{k}}{}_{t_{i,j}}\) represents the \((i,j)\) pixel in \(t^{\mathrm{th}}\) time frame's \(k\) layer. This objective function can be extended to batch level by taking the average.
\[C(X,\beta)=\frac{1}{|\beta|}\sum_{X\in\beta}C(X,\Theta) \tag{4}\]
To optimize the above objective function, we used the Adam optimizer [23].
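In code, the objective of (3)-(4) reduces to a scaled mean squared error over the predicted chrominance tensors; the learning rate below is an assumption, as the paper does not report it:

```python
import torch

def ab_mse(pred_ab, true_ab):                 # both: (B, T, 2, H, W)
    # 0.5 * mean matches the 1/(2nHW) normalization of Equation (3)
    return 0.5 * ((pred_ab - true_ab) ** 2).mean()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```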
### Training
FlowChroma was trained for roughly 50 hours on 50,000 short, preprocessed video clips, taken from the FCVID [22] video dataset. Videos were randomly selected from the dataset, converted to LAB color space and resized to match the input shape of the network. We used a batch size of 20 and a validation split of 10%. Training was done on an AWS EC2 instance that had 32 virtual CPUs and four NVIDIA Tesla P100 GPUs, with a total video memory of 64 GB.
## 4 Experiments
We compare FlowChroma's video colorization performance by taking the Deep Koalarization framework proposed by Baldassarre et al. [1] as our baseline model. There are mainly two reasons for this choice, rather than another image colorization framework or a state-of-the-art technique.
1. Both FlowChroma and Deep Koalarization use the same transfer learning application of obtaining global features of an image or a video frame from a pre-trained object classifier and fusing them in the fusion layer, similar to Iizuka et al. [4]
2. The main purpose of our research is emphasizing the use of sequence models in preserving temporal coherence between frames and scene changes rather than extremely realistic colorizations; to achieve that, comparison of our framework with a good enough image colorization framework is sufficient.
To evaluate the performance of FlowChroma against our baseline model, we randomly selected 1,000 videos from the FCVID dataset, belonging to various categories depicting a wide range of scenarios, derived their grayscale variants and colorized them with the two models.
In order to provide a fair comparison of the two models' colorization performance, we used the pre-trained Inception-ResNet-v2 object classifier as the global feature extractor for both FlowChroma and the baseline model. We also trained both models on the same dataset and hardware environment up to a comparable validation loss. Subsequently, a qualitative assessment of the colorizations was performed.
Our model only takes a sequence of 5 frames as an input at once, but when running inference we need to colorize videos with hundreds of frames. Thus, we use a sliding window approach during inference. In contrast to that, our baseline model only takes a single frame as input at a time, thereby coloring each frame in a video independently.
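A minimal sketch of this sliding-window inference, assuming a hypothetical `model` that maps a window of \(T\) luminance frames to their a*/b* predictions; padding of the first frames is our own choice:

```python
import torch

def colorize_video(model, gray, T=5):          # gray: (N, 1, H, W)
    preds = []
    for t in range(gray.shape[0]):
        window = gray[max(0, t - T + 1):t + 1]
        if window.shape[0] < T:                # left-pad the first few steps
            pad = window[:1].repeat(T - window.shape[0], 1, 1, 1)
            window = torch.cat([pad, window], dim=0)
        ab = model(window.unsqueeze(0))        # assumed output: (1, T, 2, H, W)
        preds.append(ab[0, -1])                # keep only the newest frame
    return torch.stack(preds)                  # (N, 2, H, W)
```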
We first confirm that our model performs well in colorization, and verify that although we use a recurrent architecture, it still converges. Next, we show that we can achieve temporal and contextual coherence through video frames with LSTMs. Finally, we discuss the weaknesses of the proposed architecture and discuss possible solutions.
## 5 Results and Discussion
In general, we observed that our model produces appropriate colorization results, assigning realistic colors to objects within each scenario. Furthermore, the system successfully maintains color information between frames, keeping a natural flow and a high spatio-temporal coherence at both global and local levels for videos with common objects and environments in the training data.
Figure 4: FlowChroma generalizes commonly encountered scenes and objects and assigns them appropriate colors during inference. It also generates an acceptable variation of colors in each scene throughout the colorization results, as demonstrated in (a)a, (b)b and (c)c. In (a)a, note how the parachute and the flag are colored in red hues while the sky is in blue. In (c)c, the eye color and skin tones over different regions in the face make the frame appear more realistic.
We also observed that for sudden or quick object movements, our model added blobs of flickering that followed the movement of the object.
In terms of raw colorization, our model generalizes commonly encountered scenes and objects and assigns them appropriate colors during inference. Figure 4b depicts a scene with a large field in the foreground and the sky in the background. This type of colorization is observed throughout the results, and it stands to reason that the system generalizes the scenes in the training dataset.
We observe LSTM's sequence learning capabilities on colorization at two scales; locally and globally. At a global scale, FlowChroma maintains the overall color composition of a scene throughout the video better than the baseline image colorization model. At a local level, the baseline model sometimes mistakenly colorizes small regions of a frame with inappropriate colors, but FlowChroma avoids such mistakes.
An example of this is shown in Figure 5a, which depicts a herd of elephants strolling about. FlowChroma maintains the dry tone of the environment across the video while the baseline model shows colors changing between green and off-brown even for slight movements of elephants and their tails. Similarly, in 5b, FlowChroma again maintains the grass field in green while the baseline flickers from brown to green for the slight movements of the shooter and his gun. In 5c, note how the baseline system bleeds color from the smartphone's clock into the background while our model does a better job of keeping the background uniform.
At a local scale, the LSTM affects how FlowChroma decides which colors should be assigned even when it cannot fully identify the progression of a previously detected region. In Figure 6a, the background contains a wall that is uniformly colored throughout the frames by FlowChroma, while having blue patches in the baseline model's output. This is an example of the downsides of considering each frame as a separate colorization task, as done by image colorization models. Figure 6b contains an off-white board that is consistently colored by FlowChroma, whereas the baseline model again adds blue color patches. The blue patches probably appear because those regions are erroneously identified as sky or water in some frames.
Based on our observations, we can divide the factors affecting the consistency of colorization into temporal and non-temporal. Non-temporal factors include
1. extreme pixel values of input grayscale image e.g. extreme dark color of asphalt roads or extreme bright color of snow,
2. the prevalence of a context in the training dataset.
These factors affect image colorization extensions to video colorization as well as FlowChroma. If the pixel values are extreme, such as in the case of snow or asphalt roads, both the baseline model and FlowChroma tend to leave them as extremes without assigning new colors.
Furthermore, when colorizing commonly encountered contexts, both the baseline and our model provided consistent appropriate colors because of the high level feature extractor; Inception-ResNet-v2 that is pre-trained on the ImageNet dataset, which contains images of commonly encountered context.
Temporal factors mainly relate to the movement of objects in a scenario, where the action frequency confuses the system's perception of the trajectory of the scene. This is applicable only to FlowChroma. When the movements in a video are smooth, our system identifies the objects and applies appropriate, temporally coherent coloring. When the movement in the scenario speeds up, the perceived flow of movement breaks and thus the colorization quality degrades fast, especially in terms of segmentation and appropriate coloring.
Lastly, we observe the scenarios in which FlowChroma's colorization becomes inconsistent and propose possible solutions for them.
1. The introduction of new objects into a scene changes its context, introducing momentary flickering before stabilizing again. Training the model further may alleviate this problem.
2. When there is a high object frequency in a scene, the aptness of the colorization is reduced; an example would be a surface with a complex pattern. A potential solution would be to train the system on more videos with high object frequency.
Figure 5: In each sub-figure, the top and bottom rows show the video frame sequences colored by FlowChroma and the baseline model respectively. These show the superior global color palette maintenance throughout the scene by our model.
3. The action frequency also adversely affects the system's performance. Normalizing the action speed is one possible solution. This could be done by increasing the number of frames containing the movement by predicting intermediate frames, as recently demonstrated by Nvidia [21], and then slowing down the video to achieve the desired speed. Another potential solution is to train the system with more time steps.
## 6 Conclusions
Contemporary image colorization techniques are not directly applicable to video colorization as they treat each video frame as a separate colorization task, without maintaining temporal coherence between frames. We propose FlowChroma, a novel colorization framework with a recurrent neural network - LSTM - added to maintain temporal and contextual information between frames.
The inherent capability of LSTMs to learn how much each hidden cell should remember or forget while reading or generating a sequence justifies using LSTMs in FlowChroma rather than vanilla RNNs - this is the basis for their usage in video colorization with scene changes.
Figure 6: In each sub-figure, the top and bottom row are from FlowChroma and the baseline model, respectively, showing how the local color uniformity is better maintained by FlowChroma. Note how the baseline model flickers with blue color patches as the camera angle changes in a and as the boy moves his hands in b.
We show that the LSTM maintains the image colorization quality of current methods intact while also successfully minimizing flickering between frames. It maintains the overall color palette of a scenario across subsequent frames at a global level, while coloring identified objects within a scene consistently at a local level.
We observed some limitations in the use of recurrent architectures for video colorization, which may be common to other techniques as well. FlowChroma specifically generates inconsistent colorizations in the following scenarios;
1. Sudden introduction of new objects into the scene
2. High object frequency, i.e., a high number of objects in a scene
3. High action frequency or fast movements in a scene.
Finally, from these preliminary results, we have a promising research direction in maintaining temporal and contextual coherence in video colorization with LSTMs. As future work, we hope to quantitatively assess the performance of FlowChroma using a video colorization benchmark. We also plan to perform a visual Turing test of colorized videos from various frameworks.
|
2308.05617 | A Neural Network Based Choice Model for Assortment Optimization | Discrete-choice models are used in economics, marketing and revenue
management to predict customer purchase probabilities, say as a function of
prices and other features of the offered assortment. While they have been shown
to be expressive, capturing customer heterogeneity and behaviour, they are also
hard to estimate, often based on many unobservables like utilities; and
moreover, they still fail to capture many salient features of customer
behaviour. A natural question then, given their success in other contexts, is
if neural networks can eliminate the necessity of carefully building a
context-dependent customer behaviour model and hand-coding and tuning the
estimation. It is unclear however how one would incorporate assortment effects
into such a neural network, and also how one would optimize the assortment with
such a black-box generative model of choice probabilities. In this paper we
investigate first whether a single neural network architecture can predict
purchase probabilities for datasets from various contexts and generated under
various models and assumptions. Next, we develop an assortment optimization
formulation that is solvable by off-the-shelf integer programming solvers. We
compare against a variety of benchmark discrete-choice models on simulated as
well as real-world datasets, developing training tricks along the way to make
the neural network prediction and subsequent optimization robust and comparable
in performance to the alternates. | Hanzhao Wang, Zhongze Cai, Xiaocheng Li, Kalyan Talluri | 2023-08-10T15:01:52Z | http://arxiv.org/abs/2308.05617v1 | # A Neural Network Based Choice Model for Assortment Optimization
###### Abstract
Discrete-choice models are used in economics, marketing and revenue management to predict customer purchase probabilities, say as a function of prices and other features of the offered assortment. While they have been shown to be expressive, capturing customer heterogeneity and behaviour, they are also hard to estimate, often based on many unobservables like utilities; and moreover, they still fail to capture many salient features of customer behaviour. A natural question then, given their success in other contexts, is if neural networks can eliminate the necessity of carefully building a context-dependent customer behaviour model and hand-coding and tuning the estimation. It is unclear however how one would incorporate assortment effects into such a neural network, and also how one would optimize the assortment with such a black-box generative model of choice probabilities. In this paper we investigate first whether a single neural network architecture can predict purchase probabilities for datasets from various contexts and generated under various models and assumptions. Next, we develop an assortment optimization formulation that is solvable by off-the-shelf integer programming solvers. We compare against a variety of benchmark discrete-choice models on simulated as well as real-world datasets, developing training tricks along the way to make the neural network prediction and subsequent optimization robust and comparable in performance to the alternates.
Imperial College Business School, Imperial College London
(h.wang19, z.cai22, xiaocheng.li, kalyan.talluri)@imperial.ac.uk
## 1 Introduction
What goes on in the consumer's mind during the purchase process? How, and why, do they decide to purchase? What features of the product or environment encourage them to purchase? Given the heterogeneity and idiosyncratic thought processes of customers, these questions are unlikely to be answered definitely. Nevertheless these are fundamental questions of interest to marketers, behavioural psychologists, economists and operations researchers. Hence, for operational and algorithmic purposes, a number of stylized or reduced-form models have been proposed in the literature to explain individual-level decision outcomes as well as collective aggregate purchase behavior.
Discrete-choice models are one such class of models, widely used in marketing, economics, transportation and revenue management, especially suitable for situations where customers choose at most one of several alternatives. They are typically parsimonious models, based on micro-economic utility maximization principles in a stochastic framework, and expressive enough to incorporate product, customer and environment features, and tractable enough for estimation and incorporation into optimization models. The classical Multinomial Logit choice model (MNL) is an early and prominent example that, along with mixed-logit, nested-logit and probit, has found widespread application in many fields, both in theoretical models as well as in empirical studies. New alternatives to MNL, that are more flexible or more expressive, such as Markov chain choice (Blanchet et al., 2016) and Exponential (Alptekinoglu and Semple, 2016), have been proposed recently.
One could potentially design more elaborate choice models, folding in many known aspects of the purchase behavior. However, this line of development has some natural limitations as, one, the models have to be specialized and tuned to a particular industry or even to a firm's purchase funnel; two, would require developing and hand-coding its estimation which may be difficult both in theory and practice; and finally, using the model in an assortment optimization method may be computationally infeasible.
So, given the fundamental indeterminacy in how and why a customer chooses to purchase, our inability to observe all the data required by some of these models, and as the process itself is opaque and liable to change by industry and context, a reasonable research proposition is to search for a robust universal model that requires minimal tuning or expert modeling knowledge and works as well as behavioural models across all contexts and industries. Naturally, machine learning methods, specifically neural networks, come to mind. They have been applied with great success for many prediction tasks in industry precisely because they do not require fine-grained modeling specific to a situation and industry, are robust, and come with standardized training algorithms that work off-the-shelf. With enough data they also have proved to be very good performers in many areas, sometimes proving to be superior even to hand-crafted models.
In this paper we develop a neural network based choice model suitable for assortment optimization. The firm has to offer an assortment of products, potentially customized to each individual customer, and the customer either chooses one of the offered products or decides not to purchase. This problem arises in a wide variety of industries where customers choose at most one item, such as hotel rooms, airline seats, or expensive one-off purchases in a single category such as automobiles, laptops or mobile phones. (We leave extensions to multi-item purchases for future research.)
The challenges are the following: First, the data is not at a scale typically required for neural networks training; a hotel for instance has only a few months of seasonal data it can use for predictions. Second, the neural network predictions have to be used subsequently in an assortment optimization problem--essentially revenue-optimal subset-selection--and a neural network output promises no structure to make this optimization efficient. Consequently we want a network as compact as possible so the optimization--in an integer programming formulation that we develop--is tractable for a reasonably large number of products.
Our contributions in this paper are the following:
1. We tackle the above-mentioned challenges and develop a neural network based choice model under two settings: (i) a feature-free setting where no additional features on products or customers are used in prediction and (ii) a feature-based setting that can utilize both product and customer features.
2. We formulate an integer-program over the trained neural network to find an assortment that maximizes expected revenue.
3. We perform extensive numerical simulations that compare the predictive and optimization performance under the following scenarios: 1. Synthetic simulations where the ground truth is generated by a panoply of choice models from the literature--MNL, Markov chain, non-parametric, mixed-logit--and we cross-validate them, both on raw prediction of choice probabilities as well as on the quality of the optimized assortment. 2. Situations where the customer behavior does not follow the models' script or rational economic behaviour, but is documented experimentally and empirically in the literature.
3. Real-world assortment and sales transactions data from the following important industries: physical fashion-retail, airline, hotel, and transportation, and compare the predictive performances of the popular discrete-choice models vs. our neural network.
4. We devise a meta-learning trick that allows us to obtain good, robust prediction performance from a compact neural network even when trained on limited transactional purchase data. Moreover, we show that a well-trained network can be warm-started when adding new products to the pool.
Our numerical results suggest that the neural network gives comparable performance to the best choice model in most cases, both in simulations as well as on real data. Some of the parametric models do perform better in simulations, unsurprisingly, when the ground-truth data-generation coincides with the model. We find a similar pattern for real-world data, where on a hold-out sample our neural network gives robust predictions across the different industries. Moreover, we find that a one-layer neural choice model can capture what may be considered "irrational" purchase behavior--documented to occur in experiments and empirical studies--that cannot be explained by random utility models. In summary, shallow neural networks enjoy both prediction power and computational tractability for assortment optimization.
## 2 Literature Review
Probabilistic choice models originated in economics based on a stochastic utility model of the consumer. The oldest and a still widely used discrete-choice model is the MNL, with alternatives and generalizations proposed to overcome its known limitations, such as the nested-logit and mixed-logit. We refer the reader to the seminal paper McFadden and Train (2000) and the book Ben-Akiva and Lerman (1985) for background on discrete-choice models and their economic motivation and estimation from data.
We review the literature in Computer Science and Operations Research next on neural network modeling of discrete-choices and assortment optimization.
### Neural Network Models in Utility Modeling
In the Computer Science community, Bentz and Merunka (2000) use a neural network to capture the non-linear effects of features on the latent utility. A few recent works Wang et al. (2020); Han et al. (2020); Sifringer et al. (2020); Wong and Farooq (2021); Aouad and Desir (2022); Arkoudi et al. (2023) study different application contexts and explore different neural network architectures for the feature-utility mapping. In addition to the product and customer features, Gabel and Timoshenko (2022) also encode history purchase information. Chen et al. (2021); Chen and Misic (2022) take a different route and consider the random forest model as the mapping function. Van Cranenburgh et al. (2022) discuss the potential avenues for further integrating machine learning into choice modeling with a comprehensive review of the existing literature.
We term this line of works as deep learning based utility modeling instead of choice modeling, because these works mainly focus on predicting the unobserved product utilities with the observed features and applying an MNL-style operator for predicting purchase probabilities (except Aouad and Desir (2022), who use a mixture of a set of estimated utilities to mimic the mixed-logit model). What is missing is the _assortment effect_ on utilities--the interactions between the products within (and possibly outside) the set of offered products. This has two negative implications: First, some of the neural network models require all the training samples to have a fixed-size assortment. This requirement is simply not met in many application contexts such as hotels, airlines or fashion retail where the assortments change frequently. Second, it fails to capture the effect that the assortment has on the product utilities.
Intuitively, a product's utility is not only determined by product and customer features, but also affected by the assortment offered to the customer which is a distinguishing feature of our neural network.
Another common drawback of the existing deep learning approaches is that they all require the availability of product or customer features. When there are no features available, most of the models degenerate to a uniform choice probability. In contrast, our neural network based choice model is designed to handle the feature-free setting. Importantly, we make a distinction between the feature-free model and the feature-based model not by the availability of the features, but by whether the downstream application is interested in a population-level or personalized choice model for its assortment optimization.
### Assortment Optimization
Assortment optimization is an important problem in operations management, used in the airline, hotel, and fashion retail e-commerce. For more on the history and applications of assortment optimization, we refer readers to the books Talluri et al. (2004); Gallego and Topaloglu (2019). Here we briefly review the complexity of the problem under some of the aforementioned choice models:
**Multinomial Logit Model (MNL):** Under the MNL model, Talluri and Van Ryzin (2004) show that the optimal assortment is a revenue-ordered nested assortment; thus the optimal policy is to order the items by their revenues to form a nested set of assortments and choose the assortment with the largest expected revenue in that nested set. Rusmevichientong and Topaloglu (2012) show that such a revenue-ordered (RO) policy is robust against uncertainty in the MNL parameters and that the problem remains solvable under a cardinality constraint.
**Markov Chain Choice Model (MCCM):** Blanchet et al. (2016) give a polynomial-time algorithm for finding the optimal assortment under this model by repeatedly using a Bellman operator until convergence. Feldman and Topaloglu (2017) further prove that the optimal assortment can be found by solving a linear program. With constraints, Desir et al. (2020) prove that this assortment optimization problem is NP-hard and introduce algorithms with a provable worst-case approximation ratio.
**Mixed-Multinomial Logit model (MMNL):** Bront et al. (2009) show that even without constraints, the assortment optimization problem under MMNL is NP-hard and then provide a column-generation method to solve the problem. Rusmevichientong et al. (2014) show that a revenue-ordered policy can obtain the optimal assortment in MMNL when the choice model has some special structures. Mendez-Diaz et al. (2014) propose a branch-and-cut algorithm for solving the optimization problem.
**A General Choice Model:** When the revenue function underlying a choice model is a black-box function, algorithms for finding the optimal assortment cannot be tailored by utilizing structure. Jagabathula (2014) proposes the ADXOpt algorithm, which repeatedly finds the best action among adding (A), deleting (D), and exchanging (X) products based on the current assortment, with a provable approximation ratio under some conditions. Udwani (2023) defines submodular order functions and proposes several heuristics for assortment optimization under various types of constraints. Under mild conditions, the heuristics have bounded approximation ratios.
In this paper we show that in the context of assortment optimization, (1) a shallow network is enough to recover existing choice models and (2) good performance and robustness is achievable with a standard neural network and integer programming when dealing with different choice models compared to customized heuristics.
## 3 Problem Setup
The firm has a set of \(n\) products \(\mathcal{N}=\{1,2,...,n\}\) that can be offered to customers. An assortment \(\mathcal{S}\subseteq\mathcal{N}\) is a subset of \(\mathcal{N}\) that the firm decides to present to the customer. In practice, the assortment can
be determined by the product availability or the seller may decide the assortment to maximize profits by limiting the choice of the customer. The choice model gives a prediction of the probability of the customer choosing product \(i\) conditional on the assortment \(\mathcal{S}\):
\[\mathbb{P}(i|\mathcal{S})\text{ for all }i\in\mathcal{N}\text{ and }\mathcal{S}\subseteq \mathcal{N}.\]
In particular, we assume that \(\mathbb{P}(i|\mathcal{S})=0\) for \(i\notin\mathcal{S}\), i.e., the customer cannot choose a product not offered in the assortment. In this way, a choice model
\[\mathcal{M}=\{\mathbb{P}(i|\mathcal{S}):\mathcal{S}\subseteq\mathcal{N}\} \tag{1}\]
dictates \(2^{n}-1\) probability distributions, each of which corresponds to one possible nonempty assortment. In all applicable business contexts, there is usually a "no-purchase" option where the customer chooses not to purchase any of the items from the offered assortment. The no-purchase option can be captured by one product (say, indexed as \(n\)) in \(\mathcal{N}\) and we can always have \(n\in\mathcal{S}\).
A tabular parameterization of the choice model \(\mathcal{M}\) generally requires \(\Omega(n\cdot 2^{n})\) parameters. Additional structures are therefore imposed to facilitate efficient estimation and revenue optimization.
### Random Utility Models (RUM) and Beyond
The _random utility model_ (RUM) is the dominant framework for choice models in practice. It assigns a random utility for each product and models the consumer's choice behavior according to the principle of utility maximization. Specifically, the utility of the \(i\)-th product
\[U_{i}=u_{i}+\epsilon_{i}\text{ for }i\in\mathcal{N}.\]
Here \(u_{i}\) is deterministic and represents the mean utility (to the customer) of purchasing the \(i\)-th product among the population. The random quantities, \(\epsilon_{i}\)'s are assumed to have a zero mean and can be correlated among products, to model unobservable factors as well as heterogeneity across products. We note that customer heterogeneity is also modeled by assuming the random part is specific to product as well as the customer. Under the RUM,
\[\mathbb{P}(i|\mathcal{S})\coloneqq\mathbb{P}\left(U_{i}=\max_{j\in\mathcal{S}} U_{j}\right)\quad\text{for }i\in\mathcal{S}. \tag{2}\]
For simplicity, we assume ties are broken randomly. Different distributions of \(\epsilon_{i}\)'s specialize in this general RUM framework into different choice models as in the following.
**Multinomial Logit Model (MNL):** The MNL choice model assumes \(\epsilon_{i}\)'s are i.i.d. from a mean-zero Gumbel distribution with scale parameter 1. Then the choice probability for \(i\in\mathcal{S}\) has a nice closed-form solution:
\[\mathbb{P}(i|\mathcal{S})\coloneqq\frac{\exp(u_{i})}{\sum_{j\in\mathcal{S}} \exp(u_{j})}. \tag{3}\]
When there is a feature vector \(\mathbf{f}_{i}\) associated with each product, one can also represent the deterministic utility \(u_{i}=\mathbf{\theta}^{\top}\mathbf{f}_{i}\) and obtain the linear-in-attributes MNL model. Several recent works model \(u_{i}\) with a neural network function of the feature \(\mathbf{f}_{i}\)(Bentz and Merunka, 2000; Sifringer et al., 2020; Wang et al., 2020; Arkoudi et al., 2023).
Although the MNL model is simple to estimate and optimize for an assortment, the model's assumptions
make it restricted to the _independence of irrelevant alternatives_ (IIA) property: for any \(i,j\in\mathcal{S}\subset\mathcal{S}^{\prime}\),
\[\frac{\mathbb{P}(i|\mathcal{S})}{\mathbb{P}(j|\mathcal{S})}=\frac{\mathbb{P}(i|\mathcal{S}^{\prime})}{\mathbb{P}(j|\mathcal{S}^{\prime})}.\]
This means the ratio of two products' choice probabilities is independent of the other products in the assortment. We illustrate the effect of this for prediction and contrast it with the other methods.
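A short numeric sketch of (3) and the IIA property (the utilities below are arbitrary illustrative values):

```python
import numpy as np

def mnl_probs(u, S):
    """u: mean utilities for all products; S: offered product indices."""
    w = np.exp(u[S])
    return dict(zip(S, w / w.sum()))

u = np.array([0.0, 1.0, 0.5, -0.2])
p_small, p_large = mnl_probs(u, [0, 1]), mnl_probs(u, [0, 1, 2, 3])
# IIA: the ratio P(0|S)/P(1|S) is unchanged when products are added to S.
assert np.isclose(p_small[0] / p_small[1], p_large[0] / p_large[1])
```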
Table 1 is an example to illustrate the limitation of the IIA property: In case I there is one product \(A\) with a no-purchase option. In Case II, when we add another product \(A^{\prime}\) identical to the product \(A\), we assume the customer who would purchase product \(A\) in Case I will have equal probability to purchase \(A\) or \(A^{\prime}\) in Case II. From the predicted probabilities, we can see the estimated MNL model cannot capture this purchase behavior due to the IIA property.
The following three RUM models (along with many others) were developed to mitigate this.
**Mixed-Multinomial Logit Model (MMNL):** The MMNL choice model (McFadden and Train, 2000) is a generalization of MNL to multiple segments. It assumes there are several customer types and each type has its own deterministic utilities; the random terms are still assumed to be i.i.d. and Gumbel distributed. We denote \(\mathcal{C}\) as the set of customer types and \(\alpha_{c}\) as the probability (for a customer) of being of type \(c\). Then the choice probability for \(i\in\mathcal{S}\) is:
\[\mathbb{P}(i|\mathcal{S})\coloneqq\sum_{c\in\mathcal{C}}\alpha_{c}\frac{\exp(u _{c,i})}{\sum_{j\in\mathcal{S}}\exp(u_{c,j})}.\]
For a continuous distribution of customer types, the summation is replaced by an integral. McFadden and Train (2000) show any RUM can be represented by an MMNL choice model under mild conditions and thus MMNL also has a large model capacity. However, the estimation of an MMNL is challenging and many of the methods proposed in the literature fail to recover the latent types' parameters (Jagabathula et al., 2020; Hu et al., 2022).
**Markov Chain Choice Model (MCCM):** The MCCM (Blanchet et al., 2016) defines a discrete-time Markov chain on the space of products \(\mathcal{N}\) and models each customer's choice through the realization of a sample path. Specifically, it assumes a customer arrives at product \(i\) with probability \(\lambda_{i}\) (interpreted as the initial state of the Markov chain). Then if the product is in the assortment \(\mathcal{S}\), the customer purchases it and leaves. Otherwise, the customer transitions from product \(i\) to product \(j\) with probability \(\rho_{ij}\). Naturally, \(\sum_{i\in\mathcal{N}}\lambda_{i}=1\) and \(\sum_{j\in\mathcal{N}}\rho_{ij}=1\) for all \(i\in\mathcal{N}\). Let \(X_{t}\in\mathcal{N}\) denote the product (state) that the customer visits at time \(t\), and define the first hitting time \(\tau=\min\{t\geq 0:X_{t}\in\mathcal{S}\}\). Then, under the MCCM,
\[\mathbb{P}(i|\mathcal{S})\coloneqq\mathbb{P}(X_{\tau}=i).\]
The MCCM is a generalization of MNL, in the sense that MNL arises for a specific form of the \(\lambda_{i}\)'s and \(\rho_{ij}\)'s. It has more parameters, and hence a larger model capacity to capture more complex choice behavior. Berbeglia (2016) establishes MCCM as a RUM through a random walk argument.
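The absorption probabilities \(\mathbb{P}(X_{\tau}=i)\) can be computed in closed form with standard absorbing-Markov-chain algebra. Below is a NumPy sketch of ours; it assumes the assortment is reachable from every transient state, and the variable names (`lam`, `rho`) are our own.

```python
import numpy as np

def mccm_choice_probs(lam, rho, assortment, n):
    """Absorption probabilities of the Markov chain choice model.

    lam: (n,) arrival probabilities lambda_i; rho: (n, n) transition matrix.
    Products in the assortment act as absorbing states; P(i|S) is the
    probability that the chain is absorbed at product i.
    """
    lam = np.asarray(lam, dtype=float)
    rho = np.asarray(rho, dtype=float)
    S = sorted(assortment)
    in_S = set(S)
    T = [i for i in range(n) if i not in in_S]   # transient states
    Q = rho[np.ix_(T, T)]                        # transient -> transient
    R = rho[np.ix_(T, S)]                        # transient -> absorbing
    # lam_T (I - Q)^{-1} R is the mass absorbed in S after transient visits.
    absorbed = lam[T] @ np.linalg.solve(np.eye(len(T)) - Q, R)
    p = np.zeros(n)
    p[S] = lam[S] + absorbed
    return p
```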
**Non-parametric (NP) Choice Model:** The NP choice model, or random orderings model, has its origins in classical microeconomic demand theory based on preference rankings (Block and Marschak, 1959) and was recently reintroduced with a new estimation proposal in (Farias et al., 2009, 2013). The model assumes that there exists a distribution \(\lambda:\mathrm{Perm}_{\mathcal{N}}\to[0,1]\) over the set \(\mathrm{Perm}_{\mathcal{N}}\) of all possible permutations of the products. There are \(n!\) customer types, and each customer type has a preference list of the products corresponding to one permutation in \(\mathrm{Perm}_{\mathcal{N}}\). Customers always purchase the most preferred product on their preference list. The choice probability is given by
\[\mathbb{P}(i|\mathcal{S})\coloneqq\sum_{\sigma\in\mathrm{Perm}_{i}(\mathcal{S })}\lambda(\sigma)\]
where the set
\[\mathrm{Perm}_{i}(\mathcal{S})\coloneqq\{\sigma\in\mathrm{Perm}_{\mathcal{N} }:\sigma(i)<\sigma(j)\text{ for all }j\neq i\in\mathcal{S}\}\]
contains all customer types/permutations under which product \(i\) is the most preferable product in the assortment \(\mathcal{S}\).
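For a small product universe, the NP choice probabilities can be evaluated directly from a (sparse) distribution over rankings; the sketch below assumes the distribution is given as a dictionary and is our own illustration, not the estimation method discussed next.

```python
def np_choice_probs(ranking_dist, assortment, n):
    """NP/rank-based model: each customer type is a preference ranking.

    ranking_dist: dict mapping a ranking (tuple of product indices,
        most preferred first) to its probability lambda(sigma).
    """
    p = [0.0] * n
    S = set(assortment)
    for sigma, weight in ranking_dist.items():
        # the chosen product is the highest-ranked one inside S
        chosen = next(i for i in sigma if i in S)
        p[chosen] += weight
    return p

# Two customer types over products {0, 1, 2}:
dist = {(0, 1, 2): 0.7, (2, 1, 0): 0.3}
print(np_choice_probs(dist, assortment=[1, 2], n=3))   # [0.0, 0.7, 0.3]
```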
The NP choice model has exponentially many parameters and thus a larger model capacity than both MNL and MCCM. Indeed, Block and Marschak (1959) show that any RUM choice model can be represented by an NP model, and vice versa. Moreover, the estimation method proposed by (Farias et al., 2009, 2013) needs to solve a linear program with \(n!\) variables. Although this linear program enjoys good theoretical properties, it is computationally infeasible when \(n\) is large and has to be further simplified or approximated by sampling or representation methods.
**Non-RUM models:** RUM models have a _regularity_ property: if we add a new product to the assortment \(\mathcal{S}\), then the purchase probability of every currently offered product will never increase. However, as remarked by Chen and Misic (2022), there is an increasing body of experimental evidence suggesting that the aggregate choice behavior of customers is not always consistent with this regularity property, and thus often violates the premise of RUM. For instance, Seshadri et al. (2019) use the seminal experiment of (Tversky and Kahneman, 1985), and Chen and Misic (2022) quote the behavioral experiment of (Ariely and Jones, 2008), to motivate the development of choice models beyond the scope of RUMs. Here we use examples from these two behavioral experiments to show the limited capacity of RUM, and the neural choice model's potential to pick up such behavior from the data.
* Table 2 is an implementation of the experiment in (Ariely and Jones, 2008): The price column shows the price of each option. In Case I, an assortment has two available products, while in Case II, a clearly inferior option of Print-Only is added. The addition of Print-Only twists customer utilities for the other two options, which is reflected by the change in the true choice probability.
* Table 3 is an implementation of the gambles example in (Tversky and Kahneman, 1985): the Win Prob. column shows the winning probability of each gamble and the Payoff column indicates the winning payoff. There exists a preference cycle over the three gambles A, B, C with different winning probabilities and payoffs, where A is preferred to B, and B is preferred to C, based on the expected payoff; however, C is preferred to A since the winning probability of C is significantly larger than that of A.

| | Option | Price | True Prob. | NN | MNL-MLE | MCCM-EM |
|---|---|---|---|---|---|---|
| Case I | No-purchase | – | .14 | .14 | .14 | .14 |
| | Internet-Only | \$59 | .57 | .58 | .43 | .43 |
| | Print-\&-Internet | \$125 | .29 | .28 | .43 | .43 |
| Case II | No-purchase | – | .14 | .13 | .14 | .15 |
| | Internet-Only | \$59 | .29 | .31 | .43 | .43 |
| | Print-\&-Internet | \$125 | .57 | .55 | .43 | .42 |
| | Print-Only | \$125 | .00 | .01 | .00 | .00 |

Table 2: Example based on the behavioral experiment in (Ariely and Jones, 2008). The experiment setup is the same as in Table 1.

| | Gamble | Win Prob. | Payoff | True Prob. | NN | MNL-MLE | MCCM-EM |
|---|---|---|---|---|---|---|---|
| Case I | A | 1/4 | \$6 | .75 | .76 | .42 | .62 |
| | B | 1/3 | \$4 | .25 | .24 | .58 | .38 |
| Case II | B | 1/3 | \$4 | .75 | .74 | .43 | .62 |
| | C | 1/2 | \$2 | .25 | .26 | .57 | .38 |
| Case III | A | 1/4 | \$6 | .20 | .20 | .36 | .24 |
| | C | 1/2 | \$2 | .80 | .80 | .64 | .76 |

Table 3: Example based on the behavioral experiment in (Tversky and Kahneman, 1985). The experiment setup is the same as in Table 1.
In sum, a parametric model that fits observed purchases in all situations is unlikely. Certainly, more elaborate models of consumer search and decisions exist, but their estimation, which rests on unobservable factors and limited firm-level data, and their subsequent optimization are daunting and will certainly require substantial development resources. Hence, it is worth striving for a flexible, large-capacity learning engine that predicts purchase probabilities directly, without worrying about getting the precise consumer behaviour modeling right.
With this in mind, we develop neural network models that allow the utilities to be dependent on the assortment with or without including customer and product features. Hence our neural choice model allows violation of the regularity property, letting the utilities change depending on the assortment that is offered.
## 4 Neural Choice Model
### A Feature-free Neural Choice Model
In this section, we introduce our feature-free neural choice model defined simply as a neural network function that maps the assortment input \(\mathbf{S}\in\{0,1\}^{n}\) to the output \(\mathbf{Y}\in[0,1]^{n}\):
\[\mathbf{Y}=g(\mathbf{S};\mathbf{\theta})\]
where \(\mathbf{\theta}\) encapsulates all the model parameters. The input \(\mathbf{S}=(s_{1},...,s_{n})\) is the binary encoding of an assortment \(\mathcal{S}\) where \(s_{i}=1\) if \(i\in\mathcal{S}\) and \(s_{i}=0\) if \(i\notin\mathcal{S}\). The output \(\mathbf{Y}=(Y_{1},...,Y_{n})\) is a vector supported on the probability simplex with \(Y_{i}=\mathbb{P}(i|\mathcal{S})\).
We consider two neural network architectures, illustrated in Figure 1. For both architectures, we let \(\mathbf{W}_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\) and \(\mathbf{b}_{l}\in\mathbb{R}^{n_{l}}\) denote the weight matrix and bias vector for layer \(l\), with \(n_{l}\) nodes in the \(l\)-th layer and \(n_{1}=n_{L}=n\). We let \(\mathbf{z}_{l}=(z_{l,1},...,z_{l,n_{l}})\in\mathbb{R}^{n_{l}}\) be the output of the \(l\)-th layer and the input for the next layer. The parameters \(L\) and \(\{n_{l},\ l=1,...,L\}\) need to be specified (but we will show that \(L=1\) and \(n_{l}=n\) perform well on both real and simulated data across different generative models).

Figure 1: Neural choice models. \([n]\) denotes the dimension of the corresponding layer.
The final layer's output \(\mathbf{z}_{L}=(z_{L,1},...,z_{L,n})\) sets the choice probabilities through a gated operator:
\[Y_{i}=\begin{cases}\frac{\exp(z_{L,i})}{\sum_{i^{\prime}\in\mathcal{S}}\exp(z _{L,i^{\prime}})},&i\in\mathcal{S},\\ 0,&i\notin\mathcal{S}.\end{cases}\]
While this may appear to be a standard MNL-like function, and hence to suffer from the IIA property, it in fact does not; we note an important difference: \(\mathbf{z}_{L}\) is a function of \(\mathbf{z}_{0}=\mathbf{S}\), i.e., the assortment effect is encoded into each \(z_{L,i}\), and subsequently into \(Y_{i}\), through the neural network layers. (We should hence write \(\mathbf{z}_{L}\) as \(\mathbf{z}_{L}(\mathbf{S})\), but to avoid cumbersome notation we simply write \(\mathbf{z}_{L}\).)
The differences in the architectures are as follows.
**Gated-Assort-Net (GAsN)**
The GAsN model is a fully-connected neural network tailored to the discrete-choice setting. It adopts the classic structure commonly found in traditional neural networks, making it straightforward to implement. It takes the assortment vector \(\mathbf{S}\) as its input layer and runs it through a number of fully connected layers. Finally, it uses the assortment \(\mathbf{S}\) to create an output gate that ensures \(\mathbb{P}(i|\mathcal{S})=0\) if \(i\notin\mathcal{S}\). We initialize \(\mathbf{z}_{0}=\mathbf{S}\) as the input layer, and for each layer \(l=1,...,L\),
\[\mathbf{z}_{l}=(\mathbf{W}_{l}\mathbf{z}_{l-1}+\mathbf{b}_{l})^{+} \tag{4}\]
where \((\cdot)^{+}=\max\{\cdot,0\}\).
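To make the architecture concrete, the following is a minimal PyTorch sketch of GAsN; the class name and the use of `masked_fill` to implement the output gate are implementation choices of ours, not the authors' reference code.

```python
import torch
import torch.nn as nn

class GatedAssortNet(nn.Module):
    """Sketch of GAsN: fully-connected ReLU layers plus the assortment gate."""

    def __init__(self, n, num_layers=1):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(n, n) for _ in range(num_layers))

    def forward(self, S):
        # S: (batch, n) binary assortment encoding; z_0 = S.
        z = S
        for layer in self.layers:
            z = torch.relu(layer(z))                 # eq. (4)
        # Gated softmax: probability mass only on products in S.
        z = z.masked_fill(S == 0, float("-inf"))
        return torch.softmax(z, dim=-1)

model = GatedAssortNet(n=5)
S = torch.tensor([[1., 1., 0., 1., 0.]])
print(model(S))   # probabilities sum to 1 over the offered products
```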
**Res-Assort-Net (RAsN)**
The RAsN model incorporates the concept of residual learning, which has demonstrated a remarkable enhancement in image recognition over plain networks (He et al., 2016). As in the GAsN, it is initialized by \(\mathbf{z}_{0}=\mathbf{S}\) and finally uses the assortment to create an output gate that ensures \(\mathbb{P}(i|\mathcal{S})=0\) if \(i\notin\mathcal{S}\). However, for each layer \(l=1,...,L\) in RAsN,
\[\mathbf{z}_{l}=\left(\mathbf{W}_{l}\mathbf{z}_{l-1}+\mathbf{b}_{l}\right)^{+}+\mathbf{z}_{l-1}. \tag{5}\]
Here \(\mathbf{z}_{l}=(z_{l,1},...,z_{l,n})\in\mathbb{R}^{n}\) is the output of the \(l\)-th layer and the input for the next layer, and \(\mathbf{W}_{l}\in\mathbb{R}^{n\times n}\) and \(\mathbf{b}_{l}\in\mathbb{R}^{n}\) are the weight matrix and bias vector, the parameters of the neural network model. Note we require the dimensions \(n_{l}=n\) for all \(l=1,...,L\) so that adding \(\mathbf{z}_{l-1}\) in (5) is well-defined.

Compared to (4), the directly added term \(\mathbf{z}_{l-1}\in\mathbb{R}^{n}\) in (5) represents the residual effect. Intuitively, adding it prevents the network from forgetting or distorting the information extracted in earlier layers when some \(\mathbf{W}_{l}\) is extreme (e.g., all entries close to 0) during training, which can lead to a more robust training process and a better-trained network.
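A sketch of the residual variant, reusing the `GatedAssortNet` class from the previous snippet:

```python
class ResAssortNet(GatedAssortNet):
    """RAsN: each layer adds a residual connection as in eq. (5)."""

    def forward(self, S):
        z = S
        for layer in self.layers:
            z = torch.relu(layer(z)) + z             # residual term z_{l-1}
        z = z.masked_fill(S == 0, float("-inf"))
        return torch.softmax(z, dim=-1)
```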
#### Computational Experiment Design
We give a preview of our numerical experiment methodology and also demonstrate the model capacity of the two neural choice model architectures (full details are presented in Appendix A).
We first generate training data from four models: MNL, MCCM, NP and MMNL. We then compare the predictive performance of our two neural choice models, GAsN and RAsN, against three benchmarks and an oracle that has knowledge of the true generative model (whose CE loss equals the entropy of the true distribution). Table 4 shows the results.
| | # Samples | MNL | MCCM | NP | MMNL |
|---|---|---|---|---|---|
| Uniform | – | 2.51 / 3.40 | 2.52 / 3.44 | 2.50 / 3.43 | 2.48 / 3.40 |
| MNL-MLE | 1,000 | 1.87 / 2.65 | 1.96 / 0.50 | 1.73 / 2.50 | 1.95 / 2.41 |
| | 5,000 | 1.86 / 2.63 | 1.96 / 0.50 | 1.71 / 2.48 | 1.96 / 2.37 |
| | 100,000 | 1.86 / 2.62 | 1.95 / 0.52 | 1.71 / 2.47 | 1.96 / 2.36 |
| MCCM-EM | 1,000 | 1.86 / 2.67 | 1.92 / 0.54 | 1.67 / 2.35 | 1.86 / 2.27 |
| | 5,000 | 1.86 / 2.66 | 1.88 / 0.53 | 1.60 / 2.35 | 1.88 / 2.23 |
| | 100,000 | 1.87 / 2.66 | 1.88 / 0.51 | 1.62 / 2.37 | 1.88 / 2.22 |
| Gated-Assort-Net | 1,000 | 2.00 / 3.04 | 1.97 / 0.54 | 1.69 / 2.71 | 2.01 / 2.66 |
| | 5,000 | 1.89 / 2.81 | 1.88 / 0.48 | 1.55 / 2.43 | 1.87 / 2.30 |
| | 100,000 | 1.86 / 2.63 | 1.84 / 0.40 | 1.50 / 2.23 | 1.82 / 2.08 |
| Res-Assort-Net | 1,000 | 2.05 / 3.05 | 2.03 / 0.61 | 1.80 / 2.79 | 2.18 / 2.96 |
| | 5,000 | 1.93 / 2.82 | 1.91 / 0.56 | 1.56 / 2.51 | 1.97 / 2.53 |
| | 100,000 | 1.86 / 2.64 | 1.84 / 0.41 | 1.49 / 2.22 | 1.82 / 2.10 |
| Oracle | – | 1.86 / 2.62 | 1.82 / 0.38 | 1.42 / 2.13 | 1.80 / 2.04 |

Table 4: Feature-free choice modeling: the training data are generated from the models of MNL, MCCM, NP and MMNL, with randomly generated true parameters. The number of samples refers to the number of training samples, and an additional 5,000 samples are reserved to validate the training procedure. The reported numbers are the average out-of-sample cross-entropy (CE) loss over 10,000 test samples and 10 random trials. The row Uniform denotes the performance of predicting with a uniform distribution. MNL-MLE implements standard maximum likelihood estimation for an MNL model, while MCCM-EM implements the expectation-maximization algorithm to fit an MCCM model. GAsN and RAsN both have one hidden fully-connected layer. The row Oracle denotes the CE loss of the true model.

First, both neural choice models benefit from a larger sample size, while the performances of maximum likelihood estimation (MLE) and expectation maximization (EM) do not improve much as the sample size increases. Second, when the true model is MNL, the method of MLE is provably asymptotically optimal, and this is unsurprisingly verified in our experiment. But when the true model becomes more complex, such as NP and MMNL, the neural network models show their advantage. Third, learning a choice model requires a large sample size when the underlying true model is complex: for example, under the true model of MCCM, NP or MMNL, none of these methods gives a satisfactory performance when the sample size is \(m=1{,}000\). Finally, the neural choice models perform consistently well in recovering these true models with a large amount of data (\(m=100{,}000\)). They have the model capacity to capture complex structures and they are also capable of fitting a simpler true model such as MNL. In comparison, the MNL choice model has too little capacity to capture the complex models. Meanwhile, it is widely acknowledged that the NP and MMNL models are flexible and can model behaviour realistically, but they lack an effective learning/estimation algorithm given the number of unobservable variables they posit, making them ill-suited for operational purposes.
### Theoretical Analysis
In this subsection, we briefly discuss some theoretical properties of the neural choice model in terms of its parameter estimation. Throughout this subsection, we will focus on the model of GAsN. We remark that the RAsN can be analyzed in the same way. First, we recall that the neural network model can be written as
\[\mathbf{Y}=g(\mathbf{S};\mathbf{\theta})=(g_{1}(\mathbf{S};\mathbf{\theta}),...,g_{n}(\mathbf{S};\mathbf{ \theta}))^{\top}\]
which maps the assortment \(\mathbf{S}\in\{0,1\}^{n}\) to the choice probability vector \(\mathbf{Y}\in[0,1]^{n}\). The parameters \(\mathbf{\theta}\) are estimated through _empirical risk minimization_ on a dataset
\[\mathcal{D}=\{(i_{1},\mathbf{S}_{1}),....,(i_{m},\mathbf{S}_{m})\}\]
where \((i_{k},\mathbf{S}_{k})\)'s are i.i.d. samples from some unknown distribution \(\mathcal{P}\). Specifically, we define the _risk_ as the negative log-likelihood function (also known as the cross-entropy loss)
\[r((i,\mathbf{S});\mathbf{\theta})\coloneqq-\log g_{i}(\mathbf{S};\mathbf{\theta})\]
where \(g_{i}(\mathbf{S};\mathbf{\theta})\) gives the probability that the \(i\)-th product is chosen under parameter \(\mathbf{\theta}\).
The estimated parameters \(\hat{\mathbf{\theta}}\) are obtained by maximum likelihood estimation as follows:
\[\hat{\mathbf{\theta}}\coloneqq\operatorname*{arg\,min}_{\mathbf{\theta}\in\Theta}\hat {R}_{m}(\mathbf{\theta})\coloneqq\frac{1}{m}\sum_{k=1}^{m}r((i_{k},\mathbf{S}_{k});\bm {\theta})\]
where the function \(\hat{R}_{m}(\cdot)\) is also known as the _empirical risk_. Here the set \(\Theta\) is defined as follows:
\[\Theta\coloneqq\big{\{}\mathbf{\theta}:\|\mathbf{W}_{l}\|_{\infty}\leq\bar{W},\|\mathbf{ b}_{l}\|_{\infty}\leq\bar{b},\text{ for }l=1,...,L\big{\}}\]
where \(\mathbf{W}_{l}\) and \(\mathbf{b}_{l}\) are the weight matrix and the bias vector of the \(l\)-th layer of the neural network. Here for a matrix \(\mathbf{W}\in\mathbb{R}^{n_{1}\times n_{2}}\) we define \(\|\mathbf{W}\|_{\infty}\coloneqq\max_{i=1,..,n_{1}}\sum_{j=1}^{n_{2}}|W_{ij}|\) and for a vector \(\mathbf{b}_{l}\in\mathbb{R}^{n_{1}}\), \(\|\mathbf{b}_{l}\|_{\infty}\coloneqq\max_{i=1,..,n_{1}}|b_{i}|\). The restriction to this bounded set usually improves both the stability of the training procedure and the generalization performance of the neural network.
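For concreteness, a minimal PyTorch training loop for this empirical risk minimization might look as follows; it assumes a model such as the `GatedAssortNet` sketch above, uses Adam rather than any solver named in the paper, and omits the projection onto the norm-bounded set \(\Theta\) (which could be approximated by weight clipping).

```python
import torch

def train(model, S_batch, choices, epochs=200, lr=1e-2):
    """Minimize the empirical risk (average negative log-likelihood).

    S_batch: (m, n) float tensor of binary assortments.
    choices: (m,) long tensor with the index of the chosen product.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        probs = model(S_batch)
        # r((i, S); theta) = -log g_i(S; theta), averaged over the sample
        loss = -torch.log(probs.gather(1, choices.unsqueeze(1))).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```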
Also, we define the _expected risk_ as
\[R(\mathbf{\theta})=\mathbb{E}\left[r((i,\mathbf{S});\mathbf{\theta})\right]\]
where the expectation is taken with respect to \((i,\mathbf{S})\sim\mathcal{P}\). The expected risk can be interpreted as the expected negative log-likelihood on an unseen/new data sample. We further define \(\mathbf{\theta}^{*}\) as its minimizer:
\[\mathbf{\theta}^{*}:=\operatorname*{arg\,min}_{\mathbf{\theta}\in\Theta}R(\mathbf{\theta}).\]
The theorem below shows the gap between \(R(\hat{\mathbf{\theta}})\), the expected risk of the learned parameter, and the optimal risk \(R(\mathbf{\theta}^{*})\).
**Theorem 1**.: _The following inequality holds with probability \(1-\delta\),_
\[R(\hat{\mathbf{\theta}})\leq R(\mathbf{\theta}^{*})+\frac{4n}{\sqrt{m}}\left(\bar{b} \cdot\frac{(2\bar{W})^{L}-1}{2\bar{W}-1}+(2\bar{W})^{L}\sqrt{2\log(2n)}\right)+ 5C\sqrt{\frac{2\log(8/\delta)}{m}}\]
_where \(C\) is the upper bound of risk function such that \(|r((i,\mathbf{S});\mathbf{\theta})|\leq C\) holds for all \((i,\mathbf{S})\in\mathcal{D}\) and \(\mathbf{\theta}\in\Theta\)._
Theorem 1 bounds the excess risk of the empirical risk minimizer \(\hat{\mathbf{\theta}}\). The proof is based on the standard analysis of neural networks (Wan et al., 2013; Neyshabur et al., 2015; Golowich et al., 2018), but we need to customize it to GAsN due to the special gated operator in the final layer. We defer the proof to Appendix C. By exploiting the structure of the risk function, we can further bound the distance between the true choice probability distribution and the choice probabilities estimated by \(\hat{\mathbf{\theta}}\):
**Corollary 1**.: _The following inequality holds with probability \(1-\delta\),_
\[\mathbb{E}\left[\text{KL}(\mathcal{P}_{\mathbf{S}},g_{.}(\mathbf{S};\hat{\mathbf{\theta}} ))\right]\leq\mathbb{E}\left[\text{KL}(\mathcal{P}_{\mathbf{S}},g_{.}(\mathbf{S};\mathbf{ \theta}^{*}))\right]+\frac{4n}{\sqrt{m}}\left(\bar{b}\cdot\frac{(2\bar{W})^{L} -1}{2\bar{W}-1}+(2\bar{W})^{L}\sqrt{2\log(2n)}\right)+5C\sqrt{\frac{2\log(8/ \delta)}{m}}\]
_where \(C\) is defined as in Theorem 1, \(\mathcal{P}_{\mathbf{S}}\) is defined as the distribution of choice probability given assortment \(\mathbf{S}\), \(\text{KL}(\cdot,\cdot)\) is defined as the Kullback-Leibler divergence over two choice probability distributions and expectation is with respect to \(\mathbf{S}\)._
Proof.: Given an assortment \(\mathcal{S}\), the Kullback-Leibler divergence over the true choice probability \(\mathcal{P}_{\mathbf{S}}\) and \(g_{.}(\mathbf{S};\mathbf{\theta})\) is
\[\text{KL}(\mathcal{P}_{\mathbf{S}},g_{.}(\mathbf{S};\mathbf{\theta}))=\sum_{i\in\mathcal{ S}}P(i|\mathcal{S})\left(\log(P(i|\mathcal{S}))-\log(g_{i}(\mathbf{S};\mathbf{\theta})) \right),\]
where \(P(i|\mathcal{S})\) is the choice probability of \(i\)-th product given assortment \(\mathcal{S}\).
Note that
\[\mathbb{E}\left[-\sum_{i\in\mathcal{S}}P(i|\mathcal{S})\log(g_{i}(\mathbf{S};\mathbf{ \theta}))\right]=R(\theta),\]
where the expectation is with respect to \(\mathcal{S}\). Combining this with Theorem 1 completes the proof.
The term \(\mathbb{E}\left[\text{KL}(\mathcal{P}_{\mathbf{S}},g_{.}(\mathbf{S};\mathbf{\theta}^{*}))\right]\) is usually referred to as the _approximation error_, which depends on the capacity of the function family \(\{g_{.}(\mathbf{S};\mathbf{\theta}):\mathbf{\theta}\in\Theta\}\), and the remaining two terms of order \(O(1/\sqrt{m})\) are together referred to as the _estimation error_, which depends on the number of training samples \(m\) and also on the capacity of the function family through \(\bar{W}\), \(\bar{b}\) and \(L\). In general, with fixed training samples, a larger function family (i.e., deeper and wider networks) will reduce the approximation error but increase the estimation error.
### Feature-based Neural Choice Models
Now we extend our neural network models to the setting with features. We first make a distinction between product feature and customer feature:
* Product feature: there is a feature vector \(\mathbf{f}_{i}^{P}\in\mathbb{R}^{d}\) associated with each product \(i\in\mathcal{N}\). We also refer to these as _static_ features, as they usually remain unchanged across all the samples in the training data \(\mathcal{D}\).
* Customer feature: each sample in \(\mathcal{D}\) represents a different customer, who is associated with a feature vector \(\mathbf{f}_{k}^{C}\in\mathbb{R}^{d^{\prime}}\) for \(k=1,...,m\). With these features, the dataset is augmented as \[\mathcal{D}=\{(i_{k},\mathcal{S}_{k},\mathbf{f}_{k}^{C}),k=1,...,m\}.\] We refer to these as _dynamic_ features as they may vary across different samples. Modeling-wise, if there are product features that change over time, we can simply view them as dynamic features.
Figure 2: Feature encoder: product features and customer features are encoded and then an inner product is taken to obtain the latent utility for each product. \(d\) is the dimension of the product features, and \(d^{\prime}\) is the dimension of the customer features. Both features are encoded to \(h\)-dimensional latent features, and the latent utilities are obtained by the inner product of the corresponding latent features.

Figure 3: Choice networks with features: the networks take (i) the latent utilities from the feature encoder (in Figure 2) and (ii) the assortment vector as inputs.
Figure 2 and Figure 3 describe our choice networks with features. The neural networks inherit the architectures in the feature-free setting, but both take an additional vector of inputs which we call latent utilities. Both feature-based networks of GAsN(f) and RAsN(f) encode product features and customer features to obtain one latent utility for each product. The product encoder is an \(L\)-layer fully-connected network shared by all the products (\(L\) is 1 or 2 in our experiments). By default, the number of nodes in the intermediate layers of the encoder is the same as the dimension of the encoder's input. When there is no available customer feature, we can simply treat the customer feature as 1 for all the samples.
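A minimal PyTorch sketch of the feature encoder in Figure 2, assuming single-layer encoders; the inner product of the encoded features yields one latent utility per product, which is then fed (together with the assortment vector) into GAsN(f) or RAsN(f). The class and argument names are ours.

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Encode product and customer features into latent utilities.

    Product features f^P: (n, d); customer feature f^C: (batch, d').
    Utility of product i = <encode_P(f^P_i), encode_C(f^C)>.
    """

    def __init__(self, d, d_cust, h):
        super().__init__()
        self.prod_enc = nn.Linear(d, h)       # shared across all products
        self.cust_enc = nn.Linear(d_cust, h)

    def forward(self, prod_feats, cust_feats):
        zp = torch.relu(self.prod_enc(prod_feats))      # (n, h)
        zc = torch.relu(self.cust_enc(cust_feats))      # (batch, h)
        return zc @ zp.T                                # (batch, n) utilities
```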
**Unnecessity of Static Features**
Table 5 presents a synthetic experiment with only product (static) features to illustrate a new insight for choice modeling with features. The experiment first ignores all the product features and trains GAsN and RAsN as in the feature-free setting. Then it takes into account the product features and trains the feature-based version of the two neural networks. We find that both versions of the networks are capable of recovering the true model from the comparison against the oracle performance, and that the two versions of each neural network achieve a similar performance. This might seem counter-intuitive at first sight, because the product features may contain useful information about the product's utility, and ignoring the features may accordingly hurt the performance. However, we argue that the product utilities are essentially encoded in the choice data \((i_{k},\mathcal{S}_{k})\)'s but not in the product features. In fact, the experiment shows that such utilities can be learned by the neural networks of GAsN and RAsN without explicitly accessing the product features. An alternative interpretation is that the product utility here should be viewed as a population-level utility that reflects the preference or popularity of a product over the whole population. Things become different when customer features are also available. In that case, the choice behaviour of each data sample is determined by the personalized utilities, and thus the product and customer features become indispensable.
We emphasize that whether to include the (dynamic) customer features in choice modeling is determined not by the availability of the features, but by the downstream application. For example, the assortment optimization problem aims to find an assortment that maximizes profits under a choice model. For a brick-and-mortar store or an online retailer where the personalized assortment is not allowed, customer features should not be used to fit the choice model even if they are available, because one is more interested in the population-level choice behavior. It also underscores the potential of feature-free networks of GAsN and RAsN despite their simple architectures.
| | MNL(f) | MCCM(f) |
|---|---|---|
| MNL(f)-MLE | 2.903 | 2.212 |
| Gated-Assort-Net | 2.921 | 2.010 |
| Gated-Assort-Net(f) | 2.918 | 2.010 |
| Res-Assort-Net | 2.914 | 1.988 |
| Res-Assort-Net(f) | 2.914 | 1.984 |
| Oracle | 2.900 | 1.932 |

Table 5: Choice modeling with only product (static) features: the training data (with \(n=50\), \(m=100{,}000\), and \(d=5\)) are generated from the feature-based versions of MNL and MCCM. All networks have two-layer structures (for the networks with features, the encoder part has one layer). The reported numbers are the average out-of-sample cross-entropy (CE) loss over 10 random trials. The benchmark method implements MLE for the feature-based MNL model. More details about the experiment setup can be found in Appendix B.1.
### Comparison on Real Datasets
In this subsection, we test the predictive performance of our neural choice models against several benchmarks on four real datasets: two private datasets from revenue management industries without features and two public ones with features. We repeat all experiments 10 times, choosing the train/validation/test sets independently and randomly, and report the averaged results below. In Section 5.2 we perform extensive numerical experiments on synthetic datasets to jointly test prediction and optimization performance.
#### 4.4.1 Feature-free Neural Choice Model
We conduct numerical experiments on two private real datasets: airline data and retailing data. The airline data contains three flight markets and the offered products are different bundles, i.e., bundle of seat, food etc. The offered assortments are decided by both the selling strategy and remaining seats. The retailing data is collected from a fashion-retailing company, where the products are clothes (from a same category and thus most customers purchase at most one item) and the assortments are decided by the inventories. Both datasets contain records of purchase transactions only, so we add no-purchases as in Simsek and Topaloglu (2018): for each purchase, we create four no-purchases with the same assortment as the original transaction but as choosing the outside no-purchase option. The summary statistics of the dataset is given in Table 6.
The testing result is summarized in Table 7, where the reported numbers are out-of-sample cross-entropy loss of two benchmark methods (with the same configurations as in Table 4) and the two neural network models with one layer.
We observe that our neural networks achieve the best performance on these datasets, where the generative model is unknown and likely complex. One interesting observation is that, unlike the other numerical results in this paper, MNL-MLE is better than MCCM-EM on the airline markets. One potential reason for the relatively poor performance of MCCM-EM is the complicated assortment generation: in Subsection 6.1, we numerically show that the performance of MCCM-EM is correlated with the assortment distribution, which is special in the airline markets due to their nested booking-limits selling strategy.
| | Flight 1 | Flight 2 | Flight 3 | Retailing |
|---|---|---|---|---|
| # Samples | 90634 | 37075 | 70600 | 17200 |
| # Products | 25 | 25 | 25 | 20 |
| # Train Samples | 60000 | 30000 | 60000 | 10000 |
| # Validate Samples | 1000 | 1000 | 1000 | 1000 |
| # Test Samples | 10000 | 5000 | 10000 | 5000 |

Table 6: Description of the flights and retailing data.
| | Flight 1 | Flight 2 | Flight 3 | Retailing |
|---|---|---|---|---|
| MNL-MLE | 2.798 | 2.490 | 2.547 | 1.073 |
| MCCM-EM | 3.972 | 3.470 | 3.596 | 1.063 |
| Gated-Assort-Net | 2.738 | 2.436 | **2.489** | 1.063 |
| Res-Assort-Net | **2.734** | **2.434** | 2.490 | **1.060** |

Table 7: Performance on the airline data and retailing data. The reported numbers are out-of-sample CE loss.
#### 4.4.2 Feature-based Neural Choice Model
For the case of estimation with features, we perform numerical experiments on two public datasets - SwissMetro and Expedia Search. These two datasets contain dynamic customer/product features; that is, the features associated with each training sample may be different.
The SwissMetro dataset is a public survey dataset to analyze traveller preference among three transportation modes, and the Expedia dataset consists of hotel search records and the associated customer choices. More details of datasets, experiment setup and benchmarks can be found in Appendix B.
We implement a number of benchmark models: MNL, feature-based MNL, TasteNet (Han et al., 2020), DeepMNL (Wang et al., 2020; Sifringer et al., 2020), random forest (Chen et al., 2021). The MNL and feature-based MNL are learned by MLE. TasteNet and DeepMNL model the product utility \(u_{i}\) as a function of the product and customer features and map the utility to choice probability via MNL (3). TasteNet assumes the utility \(u_{i}\) is a linear function of product features and uses the customer feature to determine the coefficients of the linear function, while DeepMNL concatenates the customer and product features and feeds both into a fully-connected network to obtain the utility \(u_{i}\). Both models bring assortment effects only through the MNL part.
From the results in Table 8, our two neural networks give a better performance than the benchmark models. Even though the benchmark models include neural networks in this case, the key difference is that our models explicitly take the assortment as an input. This experiment result shows that even with the presence of product and customer features, incorporating assortment effects can help to better explain customer choices. Comparing the two datasets, the Gated-Assort-Net(f) performs better on SwissMetro, while Res-Assort-Net(f) performs better on Expedia. Recall that the Res-Assort-Net(f) makes a stronger usage of the assortment vector throughout the architecture than Gated-Assort-Net(f). Accordingly, one explanation can be that for the transportation setting, customer choices are less affected by the available options but more by their personal preference, but for the hotel search, the customer choices are more affected by the provided assortment, which gives Res-Assort-Net(f) more of an advantage. As for the random forest model, it is trained as a discriminative model using both features and assortment as input, so we believe it also has better potential with some further model recalibration.

| | SwissMetro | Expedia |
|---|---|---|
| MNL-MLE | 0.883 | 2.827 |
| MNL(f)-MLE | 0.810 | 2.407 |
| TasteNet | 0.681 | 2.436 |
| DeepMNL | 0.741 | 2.374 |
| Random Forest | 0.633 | 2.458 |
| Gated-Assort-Net(f) | **0.598** | 2.403 |
| Res-Assort-Net(f) | 0.607 | **2.325** |

Table 8: Performance on SwissMetro and Expedia. The reported numbers are out-of-sample CE loss.
## 5 Assortment Optimization for Neural Choice Model
An important downstream task after the estimation of a choice model is the assortment optimization problem, where the seller aims to find an assortment that maximizes the profits
\[\max_{\mathcal{S}\subset\mathcal{N}}\mathrm{Rev}(\mathcal{S})\coloneqq\sum_{i \in\mathcal{N}}\mu_{i}\mathbb{P}(i|\mathcal{S}) \tag{6}\]
where \(\mu_{i}\) is the profit/revenue of selling one unit of product \(i\). The problem usually has additional space or cardinality constraints.
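Before turning to the MIP formulation below, note that for small \(n\) the objective (6) can be evaluated and optimized directly by enumeration against a trained network; the sketch below assumes a model with the interface of the `GatedAssortNet` class above and is only viable for a handful of products.

```python
import torch
from itertools import combinations

def expected_revenue(model, assortment, mu, n):
    """Rev(S) = sum_i mu_i * P(i|S) under a trained neural choice model."""
    S = torch.zeros(1, n)
    S[0, list(assortment)] = 1.0
    with torch.no_grad():
        probs = model(S).squeeze(0)
    return float((probs * torch.tensor(mu, dtype=probs.dtype)).sum())

def best_assortment_bruteforce(model, mu, n, max_size=None):
    """Exhaustive search over all assortments; only viable for small n."""
    best, best_rev = None, float("-inf")
    for k in range(1, (max_size or n) + 1):
        for S in combinations(range(n), k):
            rev = expected_revenue(model, S, mu, n)
            if rev > best_rev:
                best, best_rev = S, rev
    return best, best_rev
```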
The literature on assortment optimization fixes an underlying choice model \(\mathcal{M}\) and devises algorithms with provable performance guarantees. However, in practice, the underlying choice model is unknown a priori and a model has to be selected first and its parameters estimated from data \(\mathcal{D}\). In this case, the suboptimality of a proposed assortment may come from two sources: (i) the approximation error: the inaccuracy of the choice model estimation and (ii) the optimization error: the optimality gap induced by solving the assortment optimization problem under the estimated model. The following bound shows the intuition:
\[\text{Rev}(\mathcal{S}^{*})-\text{Rev}(\hat{\mathcal{S}})\leq \underbrace{\bar{\mu}\cdot\text{dist}(\mathcal{M},\hat{\mathcal{M}})}_{\text{ Model Approx.Error}}+\underbrace{\text{Opt. Gap}}_{\text{Opt. Error}}. \tag{7}\]
Here \(\mathcal{M}\) denotes the true choice model and \(\hat{\mathcal{M}}\) is the choice model estimated from the data \(\mathcal{D}\). \(\mathcal{S}^{*}\) is the optimal assortment obtained from (6) using the true model \(\mathcal{M}\), and \(\hat{\mathcal{S}}\) is obtained from \(\hat{\mathcal{M}}\) via some assortment optimization algorithm, where the term Opt. Gap refers to the suboptimality induced by the algorithm for assortment optimization. The function \(\text{dist}(\cdot,\cdot)\) refers to some (pseudo-)distance function between two models, and \(\bar{\mu}\coloneqq\max_{i\in\mathcal{N}}\mu_{i}\).
We use the inequality (7) to emphasize the decomposition of the revenue gap into the approximation error and the optimization error. The existing works on assortment optimization have striven to reduce the optimization error while assuming the approximation error is zero. We make the case that both errors should be taken into account when measuring the performance of an assortment algorithm, as the firm is interested in the combined performance. In light of this, we measure the performance of our neural choice model while illustrating the tradeoff between these two errors.
### Integer Programming Formulation of the Neural Choice-based Assortment Optimization
In this section we present a mixed integer programming (MIP) formulation for the assortment optimization problem under the neural choice model. Denote \(\boldsymbol{\mu}=(\mu_{1},...,\mu_{n})^{\top}\in\mathbb{R}^{n}\) as the revenue vector as in (6). Throughout this section, we will focus on the model of GAsN. We remark that the RAsN gives a similar MIP formulation and numerical performance. We formulate the assortment optimization problem under the GAsN as follows,
\[\max \frac{\sum_{i=1}^{n}\mu_{i}z_{0,i}\exp(z_{L,i})}{\sum_{i=1}^{n}z_ {0,i}\exp(z_{L,i})}\] (8) s.t. \[\boldsymbol{z}_{l}-\tilde{\boldsymbol{z}}_{l}=\boldsymbol{W}_{l }\boldsymbol{z}_{l-1}+\boldsymbol{b}_{l},\text{ for }l=1,...,L,\] \[\boldsymbol{0}\leq\boldsymbol{z}_{l}\leq M\boldsymbol{\zeta}_{l}, \ \boldsymbol{0}\leq\tilde{\boldsymbol{z}}_{l}\leq M(\boldsymbol{1}- \boldsymbol{\zeta}_{l}),\text{ for }l=1,...,L,\] \[\boldsymbol{\zeta}_{l}\in\{0,1\}^{n_{l}},\text{ for }l=1,...,L,\] \[\boldsymbol{z}_{0}=(z_{0,1},...,z_{0,n})^{\top}\in\{0,1\}^{n},\]
where the decision variables are \(\boldsymbol{z}_{l}\in\mathbb{R}^{n_{l}}\), \(\tilde{\boldsymbol{z}}_{l}\in\mathbb{R}^{n_{l}}\), \(\boldsymbol{\zeta}_{l}\in\{0,1\}^{n_{l}}\), all for \(l=1,...,L\), and \(\boldsymbol{z}_{0}\in\{0,1\}^{n}.\) The decision variables \(\boldsymbol{z}_{0}\) for the input layer provide a binary representation of the assortment decision, and \(\boldsymbol{z}_{l}\) represents the values of the intermediate layers of the neural network. The equality constraints describe the forward propagation of the neural network layers, and the inputs \(\boldsymbol{W}_{l}\) and \(\boldsymbol{b}_{l}\) are the parameters of the neural network learned from the data. The objective function describes the mapping from the final
layer of the network to the revenue under the assortment \(\mathbf{z}_{0}\).
The (auxiliary) decision variables \(\mathbf{z}_{l}\) and \(\mathbf{\zeta}_{l}\) implement the big-M method (Fischetti and Jo, 2017; Conforti et al., 2014) where the value of \(M\) is a large positive number obtained by some prior estimate. These two jointly ensure that
\[\mathbf{z}_{l}=\left(\mathbf{W}_{l}\mathbf{z}_{l-1}+\mathbf{b}_{l}\right)^{+}.\]
It is well known that the choice of \(M\) affects the running time of solving the MIP (Belotti et al., 2016). In our numerical experiments, we follow the methods in (Fischetti and Jo, 2017) to tighten the upper bound. Also, additional linear constraints can be added to incorporate the cardinality or capacity constraints. When there is a no-purchase option, one may introduce an additional constraint of \(z_{0,n}=1\) (\(n\) being the index of the no-purchase option).
We remark that the constraints of the MIP (8) are all linear, and the objective function resembles that of the MNL model. For better numerical performance, we replace the exponential function in the objective with its second-order approximation \(\exp(x)\approx 1+x+x^{2}/2\) (Banerjee et al., 2020). Though the problem (8) cannot be solved in polynomial time, unlike some of the other models such as MNL and MCCM (when there is no capacity constraint), it provides a formulation that allows us to take advantage of off-the-shelf integer programming solvers.
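To illustrate the big-M encoding of a ReLU layer used in (8), here is a sketch with the open-source PuLP modeling library; it only builds the linear constraints for one layer (the fractional objective and its second-order approximation are omitted), and all names are our own.

```python
from pulp import LpProblem, LpVariable, LpBinary, lpSum, LpMaximize

def add_relu_layer(prob, z_prev, W, b, M=100.0, tag=""):
    """Big-M encoding of z = relu(W z_prev + b) as linear constraints.

    For each unit: z - z_tilde = W z_prev + b, with
    0 <= z <= M*zeta and 0 <= z_tilde <= M*(1 - zeta), zeta binary,
    so exactly one of (z, z_tilde) is nonzero, giving z = (W z_prev + b)^+.
    """
    n_out = len(b)
    z = [LpVariable(f"z{tag}_{i}", lowBound=0) for i in range(n_out)]
    zt = [LpVariable(f"zt{tag}_{i}", lowBound=0) for i in range(n_out)]
    zeta = [LpVariable(f"zeta{tag}_{i}", cat=LpBinary) for i in range(n_out)]
    for i in range(n_out):
        pre = lpSum(W[i][j] * z_prev[j] for j in range(len(z_prev))) + b[i]
        prob += z[i] - zt[i] == pre
        prob += z[i] <= M * zeta[i]
        prob += zt[i] <= M * (1 - zeta[i])
    return z

# Usage sketch:
# prob = LpProblem("assortment", LpMaximize)
# z0 = [LpVariable(f"s_{i}", cat=LpBinary) for i in range(n)]  # assortment
# z1 = add_relu_layer(prob, z0, W1, b1, tag="1")
```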
### Numerical Study of Joint Estimation and Optimization
In this subsection, we provide a comparative study of the four models (MNL, MCCM, 1 and 2-layer GAsN) along with their corresponding estimation methods and assortment optimization algorithms by running them on a total of 9600 different synthetic problem instances generated by a mix of ground-truth strategies. Since there are no computationally efficient estimation methods for NP and MMNL choice models, we do not include them here. Each problem instance is a combination of historical sales and revenue data, and additional capacity constraints (if any), and the algorithm(s) output a recommended assortment following this data-to-decision pipeline:
\[\mathcal{D}=\{(i_{k},\mathcal{S}_{k}),k=1,...,m\},(\mu_{1},...,\mu_{n}),\text{ Constraints}\ \rightarrow\ \text{Recommended Assortment}\ \hat{\mathcal{S}}.\]
The numerical experiment is to test performance where the true model is unknown and has to be estimated from data.
We generate the training data \(\mathcal{D}\) from four different choice models as ground truth, MNL, MCCM, NP, and MMNL with the number of training samples fixed at \(m=30,000\) for all the trials. The exact optimal assortment \(\mathcal{S}^{*}\) is computed with the knowledge of the generative model, and then the recommended assortment \(\hat{\mathcal{S}}\) is evaluated by the optimality ratio
\[\text{Opt. Ratio}\coloneqq\frac{\text{Rev}(\hat{\mathcal{S}})}{\text{Rev}( \mathcal{S}^{*})}.\]
For each problem instance, we consider both unconstrained and constrained settings and implement several benchmark methods as follows. For the estimation part, we implement the maximum-likelihood estimation to estimate the parameters of the MNL model, and the expectation-maximization algorithm to estimate those of MCCM. We provide more details of the experiments in Appendix D. Below are the assortment optimization algorithms:
* Unconstrained: The estimated choice model is used to solve for a recommended assortment through the assortment optimization specific for the method.
* Constrained:
* Revenue-ordered policy (RO): All the \(n\) products are ordered in an decreasing order by their revenues to form a nested set of assortments and the feasible assortment with largest expected revenue is picked from that nested set (Talluri and Van Ryzin, 2004).
* MNL-MIP: A mixed integer programming is solved based on the estimated MNL model (Mendez-Diaz et al., 2014).
* ADXOpt: The ADXOpt algorithm proposed by Jagabathula (2014) is a greedy algorithm to solve assortment optimization under a general choice model. We implement the algorithm on the estimated Markov chain choice model. In our implementation, we increase the removal limit (the maximal number of times for a product to be removed from the assortment) from 1 in (Jagabathula, 2014) to 5 so as to boost the algorithm performance.
* Markov Chain Capacity-Assort (MC-CA): We implement the algorithm given in Desir et al. (2020) on the estimated Markov chain choice model.
* Neural Network Optimization (NN): We implement the MIP (8) on the fitted GAsN with 1 or 2 layers (denoted as NN(1) and NN(2)).
The results of our numerical experiments are given in Table 9 and Table 10. In general, the neural network with the MIP formulation performs well under all underlying choice models, achieving the best performance in several cases (averaged over different numbers of products), with and without a capacity constraint; under the MNL choice model, the (averaged) approximation ratios of NN(1) are above 95%, both with and without constraint.
The advantage of a neural network is more apparent when the generative models are complicated: with data generated by NP and MMNL and with constraints, the averaged approximation ratios of NN are 92.55% and 84.79%, whereas the best averaged performances of the other methods are 88.28% and 67.14%. The relatively poor performance of the other methods can be due to limited expressive power or to the hardness of the subsequent optimization. The structural limitations of MNL are well known, so it is not surprising that it cannot approximate NP- and MMNL-generated data. For the MCCM, Blanchet et al. (2016) show that the worst-case error bound of MCCM for approximating the choice probability of an assortment under MMNL is negatively correlated with "the maximum probability that the most preferable product for a random customer from any segment belongs to that assortment" (Theorem 4.2). Thus, when the offered assortment includes many "preferable products", the estimated MCCM may not approximate the underlying MMNL choice model well. Also, constraints make approximation ratios worse by making the optimization harder; this can be observed by comparing Table 9 and Table 10.

Comparing NN(1) with NN(2), NN(1)'s average approximation ratios are better than NN(2)'s in all cases except MCCM with constraint. In fact, we observe overfitting of NN(2) during training in some cases, which is a plausible explanation for its worse relative performance.

In summary, when evaluating joint estimation and assortment optimization, a shallow network may be preferable to a deep one, as it reduces overfitting and leads to a more manageable optimization problem. Of course, if its estimation error is large, one can add more layers to enlarge the capacity and expressive power of the neural network.
| Model | MNL 20 | MNL 40 | MNL 60 | MNL Avg. | MCCM 20 | MCCM 40 | MCCM 60 | MCCM Avg. |
|---|---|---|---|---|---|---|---|---|
| MNL (MLE) | 99.94 | 99.36 | 99.11 | 99.47 | 93.54 | 94.26 | 94.59 | 94.13 |
| MCCM (EM) | 99.70 | 99.44 | 99.65 | **99.60** | 93.88 | 96.80 | 97.77 | 96.15 |
| NN (1-layer) | 99.71 | 98.49 | 97.80 | 98.67 | 96.93 | 95.78 | 97.68 | **96.80** |
| NN (2-layer) | 98.03 | 91.14 | 86.60 | 91.92 | 96.60 | 90.46 | 91.83 | 92.96 |

Table 9: The approximation ratios (\(\times 100\%\)) of heuristics for assortment optimization without capacity constraint, grouped by the generative model and the number of products \(|\mathcal{N}|\in\{20,40,60\}\) together with the average. Each reported value is averaged over 100 trials.

| Model | Algorithm | NP 20 | NP 40 | NP 60 | NP Avg. | MMNL 20 | MMNL 40 | MMNL 60 | MMNL Avg. |
|---|---|---|---|---|---|---|---|---|---|
| MNL (MLE) | RO | 77.82 | 78.03 | 74.47 | 76.77 | 59.81 | 47.37 | 45.46 | 50.88 |
| MNL (MLE) | MIP | 86.86 | 90.37 | 87.20 | 88.14 | 75.39 | 65.44 | 60.60 | 67.14 |
| MCCM (EM) | MC-CA | 84.96 | 85.06 | 84.49 | 84.84 | 70.54 | 55.26 | 56.17 | 60.66 |
| MCCM (EM) | ADXOpt | 89.02 | 88.81 | 87.02 | 88.28 | 73.60 | 61.70 | 59.26 | 64.85 |
| NN (1-layer) | MIP | 93.90 | 93.28 | 90.47 | **92.55** | 92.76 | 84.99 | 76.63 | **84.79** |
| NN (2-layer) | MIP | 94.80 | 91.27 | 87.53 | 91.20 | 91.58 | 77.59 | 68.28 | 79.15 |

Table 10: The approximation ratios (\(\times 100\%\)) of heuristics for assortment optimization with capacity constraint, grouped by the generative model (NP and MMNL) and the number of products \(|\mathcal{N}|\in\{20,40,60\}\) together with the average. Each reported value is averaged over 100 trials.

## 6 Extensions and Discussions

In this section, we discuss several properties of neural choice models gained from our experience with the experiments, including robustness under the assortment distribution's shift in the testing data and the effects of the depth and width of the networks. We also list some tricks for deploying neural choice models when we need to add new products or when the training samples are insufficient. We defer the details of all experiments in this section to Appendix E.

### Assortment Distribution Shift in the Training Data

A very practical but often neglected aspect of learning a choice model is the assortment distribution in the training data \(\mathcal{D}\), i.e., the distribution of the \(\mathcal{S}_{k}\)'s. Figure 4 gives empirical evidence that the learning of an MCCM using the EM algorithm (implementing (Simsek and Topaloglu, 2018)) is affected by the assortment size of the training samples. Intuitively, a smaller assortment size means it takes longer for the Markov chain to hit an absorbing state, and thus the E-step needs to impute larger unobserved quantities, which may result in larger variance. In contrast, we find the performance of the neural network model is quite robust in terms of generalization over the assortment domain. Specifically, in Appendix E.1, we train the neural network model with one distribution generating the assortments \(\mathcal{S}_{k}\)'s (in the training data \(\mathcal{D}\)) and find that it performs surprisingly well on test data where the assortments are generated from a different distribution (an aspect of out-of-domain performance).

Figure 4: Smaller assortment size leads to larger estimation error. In the experiment, the total number of products is \(n=20\), and the training data consist of \(m=10{,}000\) samples with a fixed assortment size \(|\mathcal{S}_{k}|\). The x-axis gives the assortment size and the y-axis represents the mean error. The plotted curves are averaged over 10 independent trials.
### Depth and Width of the Neural Networks
Throughout the paper, we use one-layer or two-layer neural networks for our neural choice models, i.e., \(L=1\) or \(2\). We find this provides a sufficiently good fit for both synthetic and real data. For the number of neurons/nodes in the intermediate layers, we set \(n_{l}=n\) for all \(l\)'s by default. In Appendix E.2, we provide further numerical results showing that the performance of the neural choice models remains stable with more layers and wider intermediate layers. Generally, it is not recommended to use many layers for the neural network, as this results in a bad landscape for the loss function and consequently bad local minima that do not generalize well (Keskar et al., 2016).
### Network Augmentation with Warm Start
One drawback of the models proposed in this paper is that they all require the universe of available products to be fixed, i.e., \(\mathcal{N}=\{1,...,n\}\) is fixed. When an additional set of products \(\mathcal{N}^{\prime}=\{n+1,...,n^{\prime}\}\) becomes available, the model has to be retrained on the new product set \(\mathcal{N}\cup\mathcal{N}^{\prime}\). One option is to initialize the weights of the new network according to the previously well-trained network on \(\mathcal{N}\) as a warm start. Figure 5 shows that the well-trained network on \(\mathcal{N}\) can provide a good warm start for retraining the new network.

Figure 5: Validation losses using cold start and warm start: the left panel uses \(2{,}000\) samples for retraining and the right panel uses \(100{,}000\) samples.
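A sketch of such a warm start for the `GatedAssortNet` class from Section 4.1: the trained \(n\times n\) weights are copied into the top-left block of the augmented \(n^{\prime}\times n^{\prime}\) layers, and the remaining entries keep their fresh random initialization.

```python
import torch

def warm_start(old_model, n_new):
    """Initialize an augmented network from a trained one.

    Assumes the GatedAssortNet sketch above; the old n x n weights are
    copied into the top-left block of the new (n_new x n_new) layers.
    """
    new_model = GatedAssortNet(n_new, num_layers=len(old_model.layers))
    with torch.no_grad():
        for old, new in zip(old_model.layers, new_model.layers):
            n = old.weight.shape[0]
            new.weight[:n, :n] = old.weight
            new.bias[:n] = old.bias
    return new_model
```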
### Neural Choice Model as a Meta Choice Model
From our numerical results in Table 4, we gain a new perspective, namely, to use the neural choice model as a meta choice model. Table 4 shows how the neural choice model benefits from a large data size but can be outperformed when the data size is small. In this case, we can use the estimated choice model with the smallest validation error to generate synthetic data, re-train another neural choice model on it, and fine-tune the result on the real training data. The pseudo code is given in Algorithm 1. Intuitively, the strong representation ability of the neural choice model can make its performance arbitrarily close to the well-fitted generative model in the re-training step, and it can be further improved by real data in the fine-tuning step. Since the re-training step is indeed learning the trained choice model \(\hat{\mathcal{M}}_{k^{*}}\) with the _best_ validation error among all other trained models \(\hat{\mathcal{M}}_{k}\), each of which has the _best_ training error within its choice model class \(\mathcal{M}_{k}\), we name this algorithm _Learning the Best of the Bests_. In this way, the neural choice model is trained not as a competitor to the existing choice models, but as a complementary meta choice model. In Appendix E.4, we implement Algorithm 1 on a public Hotel dataset (Bodea et al., 2009) as an example.
```
1:Input: Training data \(\mathcal{D}_{\text{train}}\), validation data \(\mathcal{D}_{\text{val}}\), a set of choice models \(\{(\mathcal{M}_{k},\mathcal{L}_{k})\}_{k=1}^{K}\) where each choice model \(\mathcal{M}_{k}\) is equipped with a learning algorithm \(\mathcal{L}_{k}\). \((\mathcal{M}_{1},\mathcal{L}_{1})\) corresponds to neural choice model.
2:%% Training phase
3:for\(k=1,...,K\)do
4:%% We omit the training details such as cross-validation
5: Fit the \(k\)-th candidate choice model using dataset \(\mathcal{D}_{\text{train}}\) and algorithm \(\mathcal{L}_{k}\)
6: Denote the trained model as \(\hat{\mathcal{M}}_{k}\)
7:endfor
8:%% Comparison phase
9: Compute the test performance of all \(\hat{\mathcal{M}}_{k}\)'s using \(\mathcal{D}_{\text{val}}\)
10: Let \(k^{*}\) be the index of the model with best test performance
11:if\(k^{*}=1\)then
12: Let \(\mathcal{M}_{\text{final}}=\hat{\mathcal{M}}_{1}\)
13:else
14:%% Re-training and fine-tuning
15: Generate \(m^{\prime}(=100,000)\) new training samples as \(\mathcal{D}^{\prime}_{\text{train}}\) from the model \(\hat{\mathcal{M}}_{k^{*}}\)
16: Use \(\mathcal{D}^{\prime}_{\text{train}}\) to train (re-train) a neural choice model \(\hat{\mathcal{M}}_{\text{tmp}}\) from scratch
17: Use \(\mathcal{D}_{\text{train}}\) to further train (fine-tune) \(\hat{\mathcal{M}}_{\text{tmp}}\) and obtain \(\hat{\mathcal{M}}^{\prime}_{1}\)
18: Let \(\mathcal{M}_{\text{final}}\) be the better one among \(\hat{\mathcal{M}}^{\prime}_{1}\) and \(\hat{\mathcal{M}}_{k^{*}}\)
19:endif
20:Output: \(\mathcal{M}_{\text{final}}\)
```
**Algorithm 1** Learning the Best of the Bests
## 7 Conclusion and Future Directions
The existing literature on choice modeling in the feature-free and feature-based settings has been largely separated. In this paper, we propose a neural choice model, a unified neural network framework that applies to both settings through a binary representation of the assortment and feature encoders. In addition, we provide a MIP formulation for assortment optimization under a trained neural choice model.
From extensive numerical experiments, we gain insights into the performance of the various methods, which we summarize in the following three points. First, a single neural choice architecture is capable of recovering all the existing choice models with a standard learning procedure. The neural choice model becomes particularly effective when the underlying model/training data is too complex to be described by a simple model such as MNL and when there is a sufficient amount of training data. Second, the neural choice models are robust under the assortment's distribution shift and can serve as a meta choice model. Third, the combined calibration and assortment optimization regime based on the neural network can outperform other optimization heuristics, especially when the underlying model is complex and when we have capacity constraints. The drawback, nevertheless, is that since the regime is based on a MIP, the optimization cannot scale beyond roughly a hundred products; further research and new optimization algorithms are needed to scale past this point.
Moreover, we conclude with the following two future directions:
* **Multiple-purchase choice model.** The existing choice models focus exclusively on modeling single-choice behavior, where only one product is chosen from the offered assortment. The multiple-purchase choice model is usually studied under an inverse optimization framework in the literature on revealed preference (Zadimoghaddam and Roth, 2012; Amin et al., 2015; Birge et al., 2022). Deep learning-based choice models provide a natural framework for studying multiple-choice behavior, and they can fit such training data by modifying the loss function.
* **Pricing optimization.** Although we equip our neural networks with an assortment optimization MIP, in a retailing context the product price is another important factor that affects customer choice. Our current models do not account for this factor but treat prices as fixed. Studying optimal pricing under the neural choice model is an interesting research problem.
### Acknowledgment
The authors thank Antoine Desir for helpful discussions and pointing to the two public datasets for numerical experiments.
|
2303.06078 | An End-to-End Neural Network for Image-to-Audio Transformation | This paper describes an end-to-end (E2E) neural architecture for the audio
rendering of small portions of display content on low resource personal
computing devices. It is intended to address the problem of accessibility for
vision-impaired or vision-distracted users at the hardware level. Neural
image-to-text (ITT) and text-to-speech (TTS) approaches are reviewed and a new
technique is introduced to efficiently integrate them in a way that is both
efficient and back-propagate-able, leading to a non-autoregressive E2E
image-to-speech (ITS) neural network that is efficient and trainable.
Experimental results are presented showing that, compared with the non-E2E
approach, the proposed E2E system is 29% faster and uses 19% fewer parameters
with a 2% reduction in phone accuracy. A future direction to address accuracy
is presented. | Liu Chen, Michael Deisher, Munir Georges | 2023-03-10T16:56:09Z | http://arxiv.org/abs/2303.06078v1 | # An End-to-End Neural Network for Image-to-Audio Transformation
###### Abstract
This paper describes an end-to-end (E2E) neural architecture for the audio rendering of small portions of display content on low resource personal computing devices. It is intended to address the problem of accessibility for vision-impaired or vision-distracted users at the hardware level. Neural image-to-text (ITT) and text-to-speech (TTS) approaches are reviewed and a new technique is introduced to efficiently integrate them in a way that is both efficient and back-propagate-able, leading to a non-autoregressive E2E image-to-speech (ITS) neural network that is efficient and trainable. Experimental results are presented showing that, compared with the non-E2E approach, the proposed E2E system is 29% faster and uses 19% fewer parameters with a 2% reduction in phone accuracy. A future direction to address accuracy is presented.
Liu Chen\({}^{1,2}\), Michael Deisher\({}^{2}\), Munir Georges\({}^{3,4}\)

\({}^{1}\)Oregon Health and Science University, Oregon USA
\({}^{2}\)Intel Corporation, Hillsboro, Oregon USA
\({}^{3}\)Intel Labs, Munich, Germany
\({}^{4}\)Technische Hochschule Ingolstadt
[email protected],{michael.deisher,munir.georges}@intel.com OCR, TTS, image-to-speech
## 1 Introduction
Users of touchscreen-enabled personal computing devices such as cell phones, tablets, and laptops may encounter situations where safety considerations or visual impairment make it difficult to take in display content. Operating system (OS) accessibility features that read aloud the text on the display can be used to mitigate this problem. However, screen readers are typically subordinate to the OS and may not render text content within images. The use of a dedicated neural network co-processor to implement such a capability has the advantages of low cost per watt, low power consumption, robustness to operating system failure, and application independence. Although it is possible to generate audio from a region of the display using separate image recognition and audio production neural networks, a non-autoregressive E2E neural network architecture is more suitable for this application since it simplifies the hardware design and the inference procedure. To keep power consumption and cost low it is also necessary to minimize the required memory footprint. In this paper we introduce a non-autoregressive E2E neural network suitable for embedded hardware implementation of an image-to-speech subsystem in personal computing devices. We build upon previous research in the areas of image-to-text (ITT) and text-to-speech (TTS) and introduce a novel method to bridge these into a single trainable ITS neural network. As far as we are aware, this is the first time this has been done.
The network structures shown in Figure 1 synthesize speech given the fixed-size image of a word. In the left half of the figure, we show the pipeline (detailed in Section 2) using separate non-autoregressive ITT and TTS models. We desire a fully backpropagatable non-autoregressive E2E network. However, the post-/pre-processing steps shown in yellow are non-back-propagatable. Therefore, we devote our efforts to the pink box in the right half of Figure 1, which addresses alignment of the speech with the encoded image. Our contribution is an intuitive sequence expansion module for the E2E ITS system that is flexible enough to accommodate a wide variety of image encoders and Mel-spectrogram generators.
Figure 1: Image-to-Audio: non-back-propagateable post-/pre-processes steps in yellow are replaced by our novel expansion module shown in red. This enables end-to-end training.
The rest of the paper is organized as follows: Section 2 describes the ITS problem in light of previous work. Section 3 presents the E2E DNN architecture in detail. Section 4 introduces the training process and evaluation metrics. Finally, Section 5 presents results followed by conclusions.
## 2 Background
**Image-to-text (ITT)** aims to recognize text in input images. Generally, ITT contains three modules: an optional rectifier, an image encoder, and a sequential decoder. The rectifier segments and normalizes images by transforming various types of text irregularities in a unified way [1]. The image encoder extracts hidden representations from the normalized image [1], and the decoder generates a sequence of characters based on the hidden representations. Non-autoregressive ITT predicts sequences of arbitrary length. In the training process, the groundtruth text is expanded with additional placeholders, i.e., \(\epsilon\) or repeated characters, so that the expanded groundtruth has the same length as the model's hidden representation sequence before being fed to the chosen loss function [2, 3, 4]. During inference, characters are obtained after removing the placeholders. The expansion algorithms vary among loss functions. Connectionist temporal classification (CTC) [2] utilizes both repeated characters and \(\epsilon\). Aggregation cross-entropy [5] utilizes only \(\epsilon\) as the placeholder. Both loss functions minimize the distance between the target text and all possible expanded texts; thus, a placeholder can occur at any position in the raw output. Cai et al. [6] revise a traditional method: expanding the groundtruth by inserting \(\epsilon\) at the tail and minimizing this expansion through cross-entropy (CE) loss. Cai et al. [6] indicate that, with proper configuration, this method can achieve performance comparable to CTC [2]. Therefore, we adopt this approach here.
**Text-to-speech (TTS)**, which aims to synthesize natural and intelligible speech given text, first converts the text (i.e., sequence of graphemes or phonemes) to acoustic features (e.g., sequence of Mel-spectra) and then transforms the acoustic features into audio samples through a vocoder. Since a phoneme sequence is much shorter than its Mel-spectra sequence, aligning phonemes with Mel frames is essential. Recent approaches to solving this problem include the use of autoregressive DNNs as well as the use of non-autoregressive DNNs which rely on the monotonicity of phoneme-to-Mel alignment and predict duration explicitly to bridge the length mismatch [7]. These sequence expansion modules contain expansion length regulators as well as phoneme duration predictors for the hard alignment between a phoneme and its Mel frames. The ground-truth alignments are obtained from an external aligner. FastSpeech [7] and ParaNet [8], which were the first proposed non-autoregressive models, utilize a pretrained autoregressive model to obtain the phoneme-level ground-truth alignments and FastSpeech2 [9] utilizes a forced aligner for the same purpose. PortaSpeech [10] eases the negative effect of imprecise phoneme alignment by learning word-level hard alignment from external aligner and phoneme-level soft alignment through an attention mechanism. Parallel Tacotron2 [11] and ESA [12] utilize a combination of differentiable duration predictor and attention-based length regulator to model the utterance duration and learn the phoneme alignment without an external aligner. Glow-TTS [13] utilizes the invertible transformation property of flow-based models and searches the most probable monotonic alignments through dynamic programming. In this work, we adopt non-autoregressive TTS with phoneme sequence input. We avoid grapheme sequence input due to problems reported in [12].
## 3 E2E Image-to-Speech Architecture
The core challenge of the E2E system is aligning Mel-spectrograms with encoded image sequences of arbitrary length. In TTS, the length of each Mel-spectra sequence is derived directly from the encoded phoneme sequence. However, in ITS, the lengths of encoded image sequences and the lengths of Mel-spectra sequences are not related. To bridge this gap, we introduce a non-phoneme symbol, \(\epsilon\), as the only placeholder and expand phoneme sequences to an arbitrary fixed length by inserting \(\epsilon\)'s at the tail. Moreover, we specify that only phonemes can be aligned to Mel-spectrograms and that the duration of any \(\epsilon\) must be zero. In summary, we assign two tasks to the duration predictor: 1) predicting phoneme durations (positive values) and 2) recognizing \(\epsilon\)'s (zeros). This eliminates the need for additional layers (e.g., used in [14]) for the second task.
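To make this target construction concrete, a minimal PyTorch sketch is given below; the fixed length of 26 matches the number of output heads used by our image encoder (Section 3), while `EPS_ID = 0` and the function name are illustrative assumptions.

```python
import torch

EPS_ID = 0  # assumed index reserved for the non-phoneme placeholder epsilon

def make_targets(phonemes, durations, max_len=26):
    """Pad a phoneme sequence to a fixed length with epsilon and give every
    epsilon a duration of zero, as required by the dual-task duration
    predictor. `phonemes`/`durations` are 1-D tensors of equal length <= max_len."""
    n = phonemes.numel()
    tgt_ph = torch.full((max_len,), EPS_ID, dtype=torch.long)
    tgt_dur = torch.zeros(max_len)
    tgt_ph[:n] = phonemes
    tgt_dur[:n] = durations.float()
    return tgt_ph, tgt_dur
```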
Figure 2: Processing an image containing the word "CAT". The symbol, \(\epsilon\), represents a placeholder. The green rectangular boxes stand for hidden representations of symbols.

We present our architecture with an example image in Figure 2. First, the image encoder encodes "cat" into the hidden representation of the expanded phoneme sequence. Second, the duration predictor predicts the duration of every vector in the sequence, generating zero if the hidden vector represents an \(\epsilon\); the linear layer on the right side transforms the hidden representation to the required dimension of the Mel-spectra generator. Third, we expand the transformed representations by repeating each vector \(d\) times, where \(d\) is its respective duration. Fourth, the Mel-spectrogram generator takes the expanded representation as input and synthesizes the Mel-spectrogram. Lastly, a vocoder synthesizes the raw waveform based on the synthesized Mel-spectrogram.
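The expansion in the third step is a standard length regulator; under the convention that \(\epsilon\) positions carry zero duration, a minimal sketch is given below (the function name and rounding choice are ours, not from the paper).

```python
import torch

def length_regulate(hidden, durations):
    """Expand a sequence of hidden vectors by repeating each vector d times,
    where d is its (rounded) predicted duration; epsilon positions get d = 0
    and are therefore dropped. hidden: (T, C), durations: (T,) non-negative."""
    reps = durations.round().long().clamp(min=0)
    return torch.repeat_interleave(hidden, reps, dim=0)  # (sum(reps), C)

# e.g. a 3-slot sequence where the last slot is an epsilon placeholder:
h = torch.randn(3, 8)
mel_input = length_regulate(h, torch.tensor([4.0, 2.0, 0.0]))  # shape (6, 8)
```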
**Image Encoder**. We follow the ITT architecture introduced in Section 2. We utilize the same rectifier as Shi et al. [1] and adopt HBONet [15] as the backbone network to extract hidden features from the rectified images. A pooling layer extracts global semantic information from the hidden features and feeds it to 26 linear layers. The \(i^{th}\) linear layer predicts the \(i^{th}\) output. If a word has only \(N\) phonemes, then the last \(26-N\) layers should predict \(\epsilon\). Our configuration of HBONet is similar to [15] except that we change the values of \(n\) and \(s\) of the first Inverted Residual block to 1 (see Table 1 in [15]). Empirically, we found that this modification impacts the model accuracy only slightly while saving 0.5M parameters.
**Duration Predictor**. The duration predictor consists of two convolutional blocks. Each block consists of a 1D time-channel separable convolution, a 1x1 step-wise convolution, layer normalization, ReLU, and dropout. A linear layer along with a softplus layer projects the sequence of hidden representations to a sequence of scalars, which are the predicted phoneme durations.
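A rough PyTorch sketch of this two-block duration predictor follows; the kernel size and dropout rate are illustrative choices, not values specified here.

```python
import torch.nn as nn

class DurationPredictor(nn.Module):
    """Two convolutional blocks (depthwise time-channel separable conv,
    1x1 pointwise conv, layer norm, ReLU, dropout), then a linear layer
    with softplus projecting to per-position scalar durations."""
    def __init__(self, dim, kernel=3, p_drop=0.1):
        super().__init__()
        def block():
            return nn.ModuleDict({
                "sep": nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim),
                "pw": nn.Conv1d(dim, dim, 1),
                "ln": nn.LayerNorm(dim),
                "act": nn.ReLU(),
                "drop": nn.Dropout(p_drop),
            })
        self.blocks = nn.ModuleList([block(), block()])
        self.proj = nn.Linear(dim, 1)
        self.softplus = nn.Softplus()  # keeps predicted durations non-negative

    def forward(self, x):              # x: (B, T, C)
        for b in self.blocks:
            h = b["pw"](b["sep"](x.transpose(1, 2))).transpose(1, 2)
            x = b["drop"](b["act"](b["ln"](h)))
        return self.softplus(self.proj(x)).squeeze(-1)  # (B, T) durations
```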
**Mel-spectrogram Generator**. To have a lightweight architecture, we use the same variational autoencoder (VAE) based synthesizer as proposed in PortaSpeech [10]. The synthesizer is comprised of an encoder, a volume-preserving (VP) flow-based prior model and a decoder. The encoder is composed of a 1D-convolution with stride 4 followed by ReLU activation, layer normalization, and a non-causal WaveNet [16]. The prior model is a volume-preserving normalizing flow, which is composed of a residual coupling layer and a channel-wise flip operation. The decoder consists of a non-causal WaveNet [16] and a 1D transposed convolution with stride 4, also followed by ReLU and layer normalization.
## 4 Training and Evaluation
The system is trained in a multi-step fashion as follows. To train the image encoder, we employ CE loss to ensure that the encoder transforms the image into hidden representations of phonemes and \(\epsilon\), and we adopt the same training configuration as Cai et al. [6]. Next, we freeze the pretrained image encoder and train the expansion module and Mel-spectrogram generator with the same configuration as PortaSpeech [10]. Finally, we train the vocoder with the raw waveform and the ground-truth Mel-spectrogram; thus, the vocoder's training is independent. We follow HiFiGAN [17]'s configuration for HiFiGAN-v2, whose model contains only 0.9M parameters.
### Training Dataset and Preprocessing
The input image size is set to \(64\times 224\). Since we use datasets that are designed for ITT tasks and only offer image-word pairs, we utilize a grapheme-to-phoneme (G2P) tool [18] to translate words into phoneme sequences. To ensure that our results are reproducible, we use the Microsoft Azure speech service to synthesize the ground truth audio. Since we adopt a multi-step training process, we introduce the training data for each step.
**Image Encoder**. We combine two datasets: MJSynth (MJ) [3], which contains 9 million word box images generated from a lexicon of 90K English words, and SynthText [19]. We exclude images that contain non-alphabetic characters.
**Sequence Expansion Module and Mel-spectrogram Generator**. We generate the ground truth audio for each word in MJ. We sample 20% of the images of each word from the cleaned MJ [3]. Moreover, to enlarge the sample size of short words, we sample 100 images from SynthText [19] for every word that has fewer than 6 characters. To increase the speed variation of each word, we randomly apply speed perturbation to each image-audio pair. Empirically, this operation reduces the impact of misalignment caused by the external aligner. We transform the raw waveform, with a sampling rate of 22050 Hz, into Mel-spectrograms with frame size 1024 and hop size 256. The dataset contains both image and word information so that we can train both ITS and TTS models.
**Vocoder**. We generate ground truth audio for each utterance in LJSpeech [20] and use that synthesized audio as training data.
### Testing Dataset
To tailor the end-to-end ITS system to likely displayed content on personal computing devices, we synthesize the top-3000 most frequent words based on the word frequency of Wikipedia through an open-source toolkit 1 with random fonts and color combinations.
Footnote 1: github.com/Belval/TextRecognitionDataGenerator
### Evaluation Metrics
Empirically, we found that the Microsoft Azure automatic speech recognition (ASR) service performed well on the ground truth audio recordings. Thus, we used the service to transcribe our synthesized output audio. However, since each synthesized output contains only one word and the ASR cannot distinguish among homonyms without context, our evaluations are based on phoneme sequences instead of character sequences. We adopt two evaluation metrics: phone error rate (PER) and word accuracy.
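As a reference point for the metric, PER can be computed as the Levenshtein edit distance between the reference and hypothesized phoneme sequences, normalized by the reference length; the self-contained sketch below assumes this standard convention.

```python
def phone_error_rate(ref, hyp):
    """Levenshtein edit distance between phoneme sequences, normalized by
    the reference length (assumed convention for PER)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

assert phone_error_rate(["K", "AE", "T"], ["K", "AE", "T"]) == 0.0
```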
## 5 Experiments
### Quality of Audio Synthesis
We compare the performance between our E2E ITS model and a non-E2E ITS pipeline. A potential advantage of an E2E architecture is reducing the number of parameters. In the standard non-E2E ITS pipeline, there are two encoders, an image encoder for text recognition and a linguistic encoder for Mel-spectrogram synthesis, and a g2p translator. Our E2E model encodes an image for Mel-spectrogram synthesis with a single image encoder.
We build a non-E2E ITS pipeline as the baseline. We deploy the same architecture as the image encoder in ITS to train an ITT model with MJSynth [3] and SynthText [19]. The VAE-based TTS [21] is trained on the same dataset as the E2E ITS model. Table 1 presents the PER, accuracy, and the number of parameters. The TTS system adopts an embedding layer at the bottom to convert input phoneme indices into phoneme embeddings; a phoneme's embedding is universal. Our ITS system, on the other hand, takes the image encoder's hidden representation to represent phonemes, and this representation is affected by variations in the input image. Based on this difference, we assume the phoneme representation is not as robust as the embeddings. Future work will evaluate how the robustness of the representation impacts accuracy.
### Inference Speed
Since we deploy non-autoregressive models for fast inference speed, we compare the speed of our model with the non-E2E pipeline used in Section 5.1. We evaluated the inference speed with both recognition speed (images per second) and real-time factor (RTF). The former is used in the ITT field and the latter is used in the TTS field. Since the ITS pipeline has two DNNs, we consider the summation of both DNNs' inference time of a given input as the pipeline's inference time. We utilize an Nvidia 2080Ti and synthesize Mel-spectrograms with batch size 1. Table 2 shows that the E2E ITS is faster than the ITS pipeline.
### Impact of Data Distribution
Today's TTS research focuses on evaluating sentence-level performance [7, 9, 10, 12, 11] as well as robustness on long utterances [22]. The clarity of isolated words is rarely studied. However, since we are interested in deploying the ITS system for reading isolated words, clear pronunciation is important. We noticed that both the ITS and the VAE-based TTS models perform better when synthesizing words with more phonemes. When we sample similar amounts of images for every word from MJSynth [3], the total sample size of 6+ phoneme words is higher. We believe the sample distribution over phoneme lengths is an essential factor in model performance.
To quantitatively evaluate this impact, we trained new ITS and TTS models utilizing a training set with fewer short-word samples and call these models _E2E ITS_few_ and _TTS_few_, respectively. We removed all samples drawn from SynthText [19], which are short words. Figure 3 shows that the E2E ITS trained with more short-word samples performs noticeably better than its counterpart, while the performance on longer words is only slightly impacted. Moreover, ITS gains more benefit from additional short training samples than TTS.
## 6 Conclusion
We propose the first non-autoregressive E2E ITS system by introducing an intuitive sequence expansion module. This module is flexible enough to accommodate a wide variety of image encoders and Mel-spectrogram generators. Our ITS model is small enough to deploy on mobile devices. Moreover, we demonstrate the importance of data distribution to training ITS models. We will enhance the generality of the image-to-phoneme representations in the future.
| Model Name | PER (%) | Acc (%) | Param |
| --- | --- | --- | --- |
| E2E ITS | 4.7 | 87.8 | 6.1M |
| Non-E2E ITS pipeline | 2.7 | 92.3 | 7.5M |

Table 1: Phone error rate and word accuracy for the image-to-speech systems. Vocoder is not included.
| Model Name | Speed (image/sec) | RTF |
| --- | --- | --- |
| E2E ITS | 78.0 | 0.0167 |
| Non-E2E ITS pipeline | 59.9 | 0.0236 |

Table 2: Inference speed comparison.
Figure 3: We categorize all testing samples based on their phoneme length and present the model performance on phoneme lengths between 2 and 8. |
2301.09254 | Learning to Linearize Deep Neural Networks for Secure and Efficient
Private Inference | The large number of ReLU non-linearity operations in existing deep neural
networks makes them ill-suited for latency-efficient private inference (PI).
Existing techniques to reduce ReLU operations often involve manual effort and
sacrifice significant accuracy. In this paper, we first present a novel measure
of non-linearity layers' ReLU sensitivity, enabling mitigation of the
time-consuming manual efforts in identifying the same. Based on this
sensitivity, we then present SENet, a three-stage training method that for a
given ReLU budget, automatically assigns per-layer ReLU counts, decides the
ReLU locations for each layer's activation map, and trains a model with
significantly fewer ReLUs to potentially yield latency and communication
efficient PI. Experimental evaluations with multiple models on various datasets
show SENet's superior performance both in terms of reduced ReLUs and improved
classification accuracy compared to existing alternatives. In particular, SENet
can yield models that require up to ~2x fewer ReLUs while yielding similar
accuracy. For a similar ReLU budget SENet can yield models with ~2.32% improved
classification accuracy, evaluated on CIFAR-100. | Souvik Kundu, Shunlin Lu, Yuke Zhang, Jacqueline Liu, Peter A. Beerel | 2023-01-23T03:33:38Z | http://arxiv.org/abs/2301.09254v1 | # Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference
###### Abstract
The large number of ReLU non-linearity operations in existing deep neural networks makes them ill-suited for latency-efficient private inference (PI). Existing techniques to reduce ReLU operations often involve manual effort and sacrifice significant accuracy. In this paper, we first present a novel measure of non-linearity layers' ReLU sensitivity, enabling mitigation of the time-consuming manual efforts in identifying the same. Based on this sensitivity, we then present SENet, a three-stage training method that for a given ReLU budget, automatically assigns per-layer ReLU counts, decides the ReLU locations for each layer's activation map, and trains a model with significantly fewer ReLUs to potentially yield latency and communication efficient PI. Experimental evaluations with multiple models on various datasets show SENet's superior performance both in terms of reduced ReLUs and improved classification accuracy compared to existing alternatives. In particular, SENet can yield models that require up to \(\sim\)\(2\times\) fewer ReLUs while yielding similar accuracy. For a similar ReLU budget SENet can yield models with \(\sim\)\(2.32\%\) improved classification accuracy, evaluated on CIFAR-100.
## 1 Introduction
With the recent proliferation of several AI-driven client-server applications including image analysis (Litjens et al., 2017), object detection, speech recognition (Hinton et al., 2012), and voice assistance services, the demand for machine learning inference as a service (MLaaS) has grown.
Simultaneously, the emergence of privacy concerns from both the users and model developers has made _private inference_ (PI) an important aspect of MLaaS. In PI the service provider retains the proprietary models in the cloud where the inference is performed on the client's encrypted data (_ciphertexts_), thus preserving both model (Kundu et al., 2021) and data-privacy (Yin et al., 2020).
Existing PI methods rely on various cryptographic protocols, including homomorphic encryption (HE) (Brakerski & Vaikuntanathan, 2014; Gentry, 2009) and additive secret sharing (ASS) (Goldreich et al., 2019) for the linear operations in the convolutional and fully connected (FC) layers. For example, popular methods like Gazelle (Juvekar et al., 2018), DELPHI (Mishra et al., 2020), and Cheetah (Reagen et al., 2021) use HE while MiniONN (Liu et al., 2017) and CryptoNAS (Ghodsi et al., 2020) use ASS. For performing the non-linear ReLU operations, the PI methods generally use Yao's Garbled Circuits (GC) (Yao, 1986). However, GCs demand orders of magnitude higher latency and communication than the PI of linear operations, making latency-efficient PI an exceedingly difficult task. In contrast, standard inference latency is dominated by the linear operations (Kundu et al., 2021) and is significantly lower than that of PI.
This has motivated the unique problem of reducing the number of ReLU non-linearity operations to lower the communication and latency overhead of PI. In particular, recent literature has leveraged neural architecture search (NAS) to optimize both the number and placement of ReLUs (Ghodsi et al., 2020; Cho et al., 2021). However, these methods often incur a significant accuracy drop, particularly when the ReLU budget is low. For example, with a ReLU budget of 86k, CryptoNAS sacrifices \(\sim\)\(9\%\) accuracy compared to the model with all ReLUs (AR) present. DeepReDuce (Jha et al., 2021) used a careful multi-stage optimization and reduced the accuracy drop to \(\sim\)\(3\%\) at similar ReLU budgets. However, DeepReDuce relies heavily on manual effort for the precise removal of ReLU layers, making this strategy exceedingly difficult, particularly for models with many layers. A portion of these accuracy drops can be attributed to the fact that these approaches are constrained to remove ReLUs at the coarser granularity of layers and channels rather than at the pixel level. Only very recently, (Cho et al., 2022) proposed \(l_{1}\)-regularized pixel-level ReLU reduction. However, such approaches are extremely hyperparameter sensitive and often do not guarantee meeting a specific ReLU budget. Moreover, the large number of training iterations required for improved accuracy may not be suitable for compute-limited servers (Mishra et al., 2020).

Figure 1: Comparison of various methods in an accuracy vs. #ReLU trade-off plot. SENet outperforms the existing approaches with an accuracy improvement of up to \(\sim\)\(4.5\%\) for a similar ReLU budget.
**Our contributions.** Our contribution is three-fold. We first empirically demonstrate the relation between a layer's sensitivity towards pruning and its associated ReLU sensitivity. Based on our observations, we introduce an automated layer-wise ReLU sensitivity evaluation strategy and propose SENet, a three-stage training process to yield secure and efficient networks for PI that guarantees meeting a target ReLU budget without any hyperparameter-dependent iterative training. In particular, for a given global ReLU budget, we first determine a sensitivity-driven layer-wise non-linearity (ReLU) unit budget. Given this budget, we then present a layer-wise ReLU allocation mask search. For each layer, we evaluate a binary mask tensor with the size of the corresponding activation map for which a 1 or 0 signifies the presence or absence of a ReLU unit, respectively. Finally, we use the trained mask to create a partial ReLU (PR) model with ReLU present only at fixed parts of the non-linearity layers, and fine-tune it via distillation from an iso-architecture trained AR model. Importantly, we support ReLU mask allocation both at the granularity of individual pixels and activation channels.
To further reduce the compute costs of both linear and non-linear (ReLU) layers, we extend our approach to SENet++. SENet++ uses a single training loop to train a model supporting different channel dropout rates (DRs) \(d_{r}\) (\(d_{r}\leq 1.0\)) of the weight tensor, where each \(d_{r}\) yields a sub-model with a MAC-ReLU budget no larger than that of the original one. In particular, inspired by the idea of ordered dropout (Horvath et al., 2021), we train a PR model with multiple dropout rates, where each dropout rate corresponds to a channel-scaled sub-model whose number of channels per layer is proportional to \(d_{r}\). This essentially allows the server to yield multiple sub-models for different compute requirements via a single training loop and without a costly memory footprint. Table 1 compares the important characteristics of our methods with existing alternatives.
We conduct extensive experiments and ablations on various models including variants of ResNet, Wide Residual Networks, and VGG on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet datasets. Experimental results show that SENet can yield SOTA accuracy-ReLU trade-off with an improved accuracy of up to \(\sim\)\(2.32\%\) for similar ReLU budgets. SENet++ (\(d_{r}=0.5\)) can further improve the MAC and ReLU cost of SENet, with an additional saving of \(4\times\) and \(\sim\)\(2\times\), respectively.
## 2 Preliminaries and Related Work
**Cryptographic primitives.** We briefly describe the relevant cryptographic primitives in this section.
_Additive secret sharing._ Given an element \(x\), an ASS of \(x\) is the pair \((\langle x\rangle_{1},\langle x\rangle_{2})=(x-r,r)\), where \(r\) is a random element and \(x=\langle x\rangle_{1}+\langle x\rangle_{2}\). Since \(r\) is random, the value \(x\) cannot be revealed by a single share, so that the value \(x\) is hidden.
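As a toy illustration of why additive shares hide the secret yet still support linear operations, consider the following pure-Python demo over the ring \(\mathbb{Z}_{2^{64}}\); this is an idealized sketch, not a production MPC implementation.

```python
import secrets

MOD = 2 ** 64  # shares live in the ring Z_{2^64} in this toy example

def share(x):
    """Split x into two additive shares (x - r, r) mod 2^64."""
    r = secrets.randbelow(MOD)
    return (x - r) % MOD, r

def reconstruct(s1, s2):
    return (s1 + s2) % MOD

a1, a2 = share(42)
b1, b2 = share(100)
# Each party adds its local shares; the sum is revealed only on reconstruction.
assert reconstruct((a1 + b1) % MOD, (a2 + b2) % MOD) == 142
```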
Table 1: Comparison between existing approaches in yielding efficient models to perform PI. Note that SENet++ can yield a model that can be switched to sub-models of reduced channel sizes.
_Homomorphic encryption._ HE (Gentry, 2009) is a public key encryption scheme that supports homomorphic operations on the ciphertexts. Here, encryption function \(E\) generates the ciphertext \(t\) of a plaintext \(m\) where \(t=E(m,pk)\), and a decryption function \(D\) obtains the plaintext \(m\) via \(m=D(t,sk)\), where \(pk\) and \(sk\) are corresponding public and secret key, respectively. In PI, the results of linear operations can be obtained homomorphically through \(m_{1}\)\(\circ\)\(m_{2}=D(t_{1}\star t_{2},sk)\), where \(\circ\) represents a linear operation, \(\star\) is its corresponding homomorphic operation, \(t_{1}\) and \(t_{2}\) are the ciphertexts of \(m_{1}\) and \(m_{2}\), respectively.
_Garbled circuits._ GC (Yao, 1986) allows two parties to jointly compute a Boolean function \(f\) over their private inputs without revealing their inputs to each other. The Boolean function \(f\) is represented as a Boolean circuit \(C\). Here, a garbler creates an encoded Boolean circuit \(\tilde{C}\) and a set of input-correspondent labels through a procedure \(Garble(C)\) to send \(\tilde{C}\) and the labels to the other party who acts as an evaluator. The evaluator further sends the output label upon evaluation via \(Eval(\tilde{C})\). Finally, the garbler decrypts the labels to get the plain results to share with the evaluator.
**Private inference.** Similar to (Mishra et al., 2020), in this paper we focus on a semi-honest client-server PI scenario where a client, holding private data, intends to use the inference service of a server that holds a private model. Specifically, the semi-honest parties strictly follow the protocol but try to reveal their collaborator's private data by inspecting the information they receive. A malicious client, on the other hand, could deviate from the protocol.

To defend against such threats, existing cryptographic protocols (Mishra et al., 2020; Ghodsi et al., 2020; Lou et al., 2021) rely on the popular online-offline topology (Mishra et al., 2020), where the client-data-independent component is pre-computed in the offline phase (Juvekar et al., 2018; Mishra et al., 2020; Ghodsi et al., 2021; Lehmkuhl et al., 2021). For the linear operations, DELPHI (Mishra et al., 2020) and MiniONN (Liu et al., 2017) move the heavy primitives in HE and ASS offline, enabling fast linear operations during the PI online stage. However, the compute-heavy \(Eval(\tilde{C})\) of GC keeps the ReLU cost high even in the online stage.
**ReLU reduction for efficient PI.** Existing works design models with reduced ReLU counts either by searching for efficient models (Mishra et al., 2020; Ghodsi et al., 2020; Lou et al., 2021; Cho et al., 2021) or by manually re-designing an existing model (Jha et al., 2021). In particular, SAFENet (Lou et al., 2021) enables more fine-grained channel-wise substitution and mixed-precision activation approximation. CryptoNAS (Ghodsi et al., 2020) re-designs neural architectures through evolutionary NAS techniques to minimize ReLU operations. Sphynx (Cho et al., 2021) further improves the search by leveraging differentiable macro-search NAS (Liu et al., 2018a) to yield efficient PI models. DeepReDuce (Jha et al., 2021), on the other hand, reduces ReLUs via a manual effort of finding and dropping redundant ReLU layers, starting from an existing standard model. Finally, a recent work (Cho et al., 2022) leveraged \(l_{1}\)-regularization to remove ReLUs at the pixel level and yield a SOTA accuracy vs. non-linearity trade-off. However, the extreme hyperparameter dependence of such methods often yields sub-optimal solutions and does not necessarily guarantee meeting a target ReLU budget. Moreover, a resource-limited server (Mishra et al., 2020) may not afford the costly iterative training (Cho et al., 2022) needed to reduce the ReLU count.
## 3 Motivational Study: Relation between ReLU importance and Pruning Sensitivity
Existing work on finding the importance of a ReLU layer (Jha et al., 2021) requires manual effort and is extremely time consuming. In contrast, the model pruning literature (Lee et al., 2018; Kundu et al., 2021) leveraged various metrics to efficiently identify a layer's sensitivity towards a target pruning ratio. In particular, a layer's _pruning sensitivity_ can be quantitatively defined as the accuracy reduction caused by pruning a certain ratio of parameters from it (Ding et al., 2019). Recent literature leveraged sparse learning (Ding et al., 2019; Kundu et al., 2021) and used a trained sparse model to evaluate the sensitivity of a layer \(l\) (\(\eta_{\mathbf{\theta}^{l}}\)) as the ratio \(\frac{\text{\# of non-zero layer parameters}}{\text{total \# of layer parameters}}\).
**Despite significant progress in weight pruning sensitivity, due to the absence of any trainable parameter in the ReLU layer, its sensitivity for a given ReLU budget is yet to be explored.** We hypothesize that there may be a correlation between a layer's pruning sensitivity (Kundu et al., 2021) and the importance of ReLU and have conducted the following experiments to explore this.
Let us assume an \(L\)-layer DNN model \(\Phi\) parameterized by \(\mathbf{\Theta}\in\mathbb{R}^{m}\) that learns a function \(f_{\Phi}\), where \(m\) represents the total number of model parameters.
The goal of DNN parameter pruning is to identify and remove the unimportant parameters from a DNN and yield a reduced parameter model that has comparable performance to the baseline unpruned model. As part of the pruning process for a given parameter density \(d\), each parameter is associated with an auxiliary indicator variable \(c\) belonging to a mask tensor \(\mathbf{c}\in\{0,1\}^{m}\) such that only those \(\theta\) remain non-zero whose corresponding \(c=1\). With these notations, we can formulate the training optimization as
\[\text{min }\mathcal{L}(f_{\Phi}(\mathbf{\Theta}\odot\mathbf{c}))\text{, s.t. }\|\mathbf{c}\|_{0}\leq d\times m \tag{1}\]
where \(\mathcal{L}(.)\) represents the loss function, which for image classification tasks is generally the cross-entropy (CE) loss. We used a sparse learning framework (Kundu et al., 2021) to train a ResNet18 on CIFAR-100 for a target \(d=0.1\) and computed the pruning sensitivity of each layer. In particular, as shown in Fig. 2, earlier layers have higher pruning sensitivity than later ones. This means that to achieve close to baseline performance, the model trains later layers' parameters towards zero more than those of earlier layers.
We then compared this trend with that of the importance of different ReLU layers as defined in (Jha et al., 2021). In particular, we first identified five different modules of ReLU placement in a ResNet18, the pre-basic-block (BB) stem, BB1, BB2, BB3, and BB4. We then created five ResNet18 variants with ReLU non-linearity present only at one of the modules while replacing non-linearity at the other modules with identity layers. We identify the modules yielding higher accuracy to be the ones with higher ReLU importance (Jha et al., 2021). We then normalized the importance of a ReLU stage with accuracy Acc as the ratio \((\texttt{Acc}-\texttt{Acc}_{min})/(\texttt{Acc}_{max}-\texttt{Acc}_{min})\). Here Acc\({}_{max}\) and Acc\({}_{min}\) correspond to the accuracy of models with all and no ReLUs, respectively.
As depicted in Figure 2, the results show that the ReLU importance and parameter pruning sensitivity of a layer are inversely correlated. This inverse correlation may imply that a pruned layer can afford to have more zero-valued weights when the associated ReLU layer forces most of the computed activation values to zero.
## 4 SENet Training Methodology
As highlighted earlier, for a large number of ReLU layers \(L_{r}\), the manual evaluation and analysis of candidate architectures becomes inefficient and time consuming. Moreover, the manual assignment of ReLUs at the pixel level becomes even more intractable because the number of pixels that must be considered explodes. To that end, we now present SENet, a three-stage automated ReLU trimming strategy that can yield models for a given reduced ReLU budget.
### Sensitivity Analysis
Inspired by our observations in Section 3, we define the ReLU sensitivity of a layer \(l\) as
\[\eta_{\mathbf{a}^{l}}=(1-\eta_{\mathbf{\theta}^{l}}) \tag{2}\]

It is important to emphasize that, unlike ReLU importance, ReLU sensitivity does not require training many candidate models. However, \(\eta_{\mathbf{\theta}^{l}}\) can only be evaluated for a specific \(d\). We empirically observe that \(d>0.3\) tends to yield uniform sensitivity across layers due to a large parameter budget. In contrast, an ultra-low density \(d<0.1\) incurs non-negligible accuracy drops (Liu et al., 2018; Kundu et al., 2022). Based on these observations, we propose to quantify ReLU sensitivity with a _proxy density_ of \(d=0.1\).
Figure 2: Layer-wise pruning sensitivity (\(d=0.1\)) vs. normalized ReLU importance. The later layers are less sensitive to pruning, and, thus, can afford significantly more zero-valued weights as opposed to the earlier ones. On the contrary, later ReLU stages generally have higher importance.

Moreover, to avoid the compute-heavy pruning process, we leverage the idea of sensitivity evaluation before training (Lee et al., 2018). On a sampled mini-batch from the training data \(\mathcal{D}\), the sensitivity of the \(j^{th}\) connection, with associated indicator variable \(c_{j}\) and indicator vector \(\mathbf{e}_{j}\), can be evaluated as

\[\Delta\mathcal{L}_{j}(f_{\Phi}(\mathbf{\Theta};\mathcal{D}))=g_{j}(f_{\Phi}(\mathbf{\Theta};\mathcal{D}))=\frac{\partial\mathcal{L}(f_{\Phi}(\mathbf{c}\odot\mathbf{\Theta};\mathcal{D}))}{\partial c_{j}}\Big|_{\mathbf{c}=\mathbf{1}}=\lim_{\delta\to 0}\frac{\mathcal{L}(f_{\Phi}(\mathbf{c}\odot\mathbf{\Theta};\mathcal{D}))-\mathcal{L}(f_{\Phi}((\mathbf{c}-\delta\mathbf{e}_{j})\odot\mathbf{\Theta};\mathcal{D}))}{\delta}\Big|_{\mathbf{c}=\mathbf{1}} \tag{3}\]
where \(\mathbf{c}\) is a vector containing all indicator variables. The \(\frac{\partial\mathcal{L}}{\partial c_{j}}\) is an infinitesimal version of \(\Delta\mathcal{L}_{j}\), measuring the impact of a change in \(c_{j}\) from \(1\to 1-\delta\). It can be computed using one forward pass for all \(j\) at once. We normalize the connection sensitivities, rank them, and identify the top d-fraction of connections. We then define the layer sensitivity \(\eta_{\mathbf{\Theta}^{l}}\) as the fraction of connections of each layer that are in the top d-fraction. For a given global ReLU budget \(r\), we then assign the \(\#\) ReLU for each layer proportional to its normalized ReLU sensitivity. The details are shown in Algorithm 1 (point 1 in Fig. 3). Note that \(r^{l}_{final}\) in Algorithm 1 represents the allocated #ReLUs of layer \(l\) at the end of stage 1, with \(\mathbf{r}_{final}\) representing the set of #ReLUs for all the ReLU layers.
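A minimal PyTorch sketch of this stage-1 allocation follows; it scores connections by \(|\theta_{j}\cdot\partial\mathcal{L}/\partial\theta_{j}|\) on one mini-batch as a single-shot proxy for Eq. 3 and assumes, for illustration, a one-to-one pairing between parameterized layers and ReLU layers. The function and argument names are ours.

```python
import torch

def allocate_relus(model, loss_fn, batch, relu_budget, d=0.1):
    """Stage 1 sketch: per-layer #ReLU counts from ReLU sensitivity (Eq. 2)."""
    x, y = batch
    weights = [p for p in model.parameters() if p.dim() > 1]  # conv/FC weights
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, weights)
    # Connection sensitivity |theta_j * dL/dtheta_j|, a proxy for Eq. 3.
    scores = [(w * g).abs().flatten() for w, g in zip(weights, grads)]
    flat = torch.cat(scores)
    k = max(1, int(d * flat.numel()))
    thresh = flat.topk(k).values.min()  # keep the global top-d fraction
    # Layer-wise pruning sensitivity eta_theta, then ReLU sensitivity (Eq. 2).
    eta_theta = torch.tensor([(s >= thresh).float().mean() for s in scores])
    eta_relu = 1.0 - eta_theta
    eta_relu = eta_relu / eta_relu.sum()                # normalize
    return (eta_relu * relu_budget).round().long()      # per-layer #ReLU counts
```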
### ReLU Mask Identification
After the layer-wise #ReLU allocation, we identify the ReLU locations in each layer's activation map. In particular, for a non-linear layer \(l\), we assign a mask tensor \(M^{l}\in\{0,1\}^{h^{l}\times w^{l}\times c^{l}}\), where \(h^{l}\), \(w^{l}\), and \(c^{l}\) represent the height, width, and number of channels of the activation map. For a layer \(l\), we initialize \(M^{l}\) with \(r^{l}_{final}\) 1's assigned at random locations. We then perform distillation-based training of the PR model, performing ReLU ops only at the mask locations with 1, while distilling knowledge from an AR model of the same architecture (see Fig. 3, point 2). At the end of each epoch, for each layer \(l\), we rank the top-\(r^{l}_{final}\) locations based on the highest absolute difference between the PR and AR model's post-ReLU activation output (averaged over all the mini-batches) for that layer, and update \(M^{l}\) with 1's at these locations. This, on average, de-emphasizes the locations where the post-ReLU activations in both the PR and AR models are positive. We terminate the mask evaluation once it reaches the maximum number of mask training epochs or when the normalized Hamming distance between the masks generated in two consecutive epochs falls below a pre-defined \(\epsilon\) value. **Notably, there has been significant research on identifying important trainable parameters (Savarese et al., 2020; Kusupati et al., 2020; Kundu et al., 2020; 2022c;b; Babakniya et al., 2022) through various proxies including magnitude, gradient, and Hessian; however, due to the absence of any trainable parameter in the ReLU layer, such methods cannot be deployed to identify the important ReLU units of a layer.**
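For a single layer, the per-epoch mask update above reduces to a top-\(k\) selection over the accumulated PR-AR post-ReLU differences; a sketch follows (the tensor layout is an assumption).

```python
import torch

def update_relu_mask(act_pr, act_ar, budget):
    """One mask-update step for one layer: rank locations by the absolute
    difference between the PR and AR post-ReLU activations (accumulated over
    the epoch's mini-batches) and keep ReLUs at the top-`budget` positions.
    act_pr/act_ar: (H, W, C) mean activations; returns a {0,1} mask."""
    diff = (act_pr - act_ar).abs().flatten()
    mask = torch.zeros_like(diff)
    mask[diff.topk(budget).indices] = 1.0
    return mask.view_as(act_pr)
```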
**Channel-wise ReLU mask identification.** The mask identification technique described above creates irregular ReLU masks. To support a coarser granularity where the ReLU removal happens "channel-wise", we now present a simple yet effective extension of the mask identification. For a layer \(l\), we first translate the total non-zero ReLU count into a total non-zero ReLU channel count as \(r_{c}^{l}=\lceil\frac{r_{final}^{l}}{h^{l}\cdot w^{l}}\rceil\). We then follow the same procedure as the irregular mask identification, but keep only the top-\(r_{c}^{l}\) channels as non-zero.
### Maximizing Activation Similarity via Distillation
Once the mask for each layer is frozen, we start our final training phase, in which we maximize the similarity between the post-ReLU activations of our PR and AR models (see Fig. 3, point 3). In particular, we initialize a PR model with the weights and mask of the best PR model of stage 2 and allow only the parameters to train. We train the PR model with distillation via KL-divergence loss (Hinton et al., 2015; Kundu and Sundaresan, 2021) from a pre-trained AR model, along with a CE loss. Moreover, we introduce an AR-PR post-ReLU activation mismatch (PRAM) penalty into the loss function. This loss drives the PR model to have activation maps that are similar to those of the AR model.
More formally, let \(\Psi_{pr}^{m}\) and \(\Psi_{ar}^{m}\) represent the \(m^{th}\) pair of vectorized post-ReLU activation maps of the same layer for \(\Phi_{pr}\) and \(\Phi_{ar}\), respectively. Our loss function for the fine-tuning phase is given as
\[\mathcal{L}=(1-\lambda)\underbrace{\mathcal{L}_{pr}(y,y^{pr})}_{\text{CE loss}}+\lambda\underbrace{\mathcal{L}_{KL}\left(\sigma\left(\frac{z^{ar}}{ \rho}\right),\sigma\left(\frac{z^{pr}}{\rho}\right)\right)}_{\text{KL-div. loss}}+ \frac{\beta}{2}\sum_{m\in I}\underbrace{\left\|\frac{\Psi_{pr}^{m}}{\|\Psi_{ pr}^{m}\|_{2}}-\frac{\Psi_{ar}^{m}}{\|\Psi_{ar}^{m}\|_{2}}\right\|_{2}}_{\text{PRAM loss}} \tag{4}\]
where \(\sigma\) represents the softmax function with \(\rho\) being its temperature. \(\lambda\) balances the importance between the CE and KL divergence loss components, and \(\beta\) is the weight for the PRAM loss. Similar to (Zagoruyko and Komodakis, 2016), we use the \(l_{2}\)-norm of the normalized activation maps to compute this loss.
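A PyTorch sketch of Eq. 4 might look as follows; `acts_pr`/`acts_ar` are assumed to be matched lists of post-ReLU activations collected from the PR and AR models, and the temperature value is an illustrative choice.

```python
import torch.nn.functional as F

def senet_loss(z_pr, z_ar, y, acts_pr, acts_ar, lam=0.9, beta=1000.0, rho=4.0):
    """Sketch of Eq. 4: CE on the PR logits, KL distillation from the AR
    teacher at temperature rho, and the PRAM penalty on l2-normalized
    post-ReLU activation pairs (here normalized per sample)."""
    ce = F.cross_entropy(z_pr, y)
    kl = F.kl_div(F.log_softmax(z_pr / rho, dim=1),
                  F.softmax(z_ar / rho, dim=1), reduction="batchmean")
    pram = sum(
        (F.normalize(a_pr.flatten(1), dim=1) -
         F.normalize(a_ar.flatten(1), dim=1)).norm(dim=1).mean()
        for a_pr, a_ar in zip(acts_pr, acts_ar))
    return (1 - lam) * ce + lam * kl + 0.5 * beta * pram
```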
### SENet++: Support for Ordered Channel Dropping
To yield further compute-communication benefits, we now present an extension of SENet, namely SENet++, that performs the ReLU reduction while also supporting inference with reduced model sizes. In particular, we leverage the idea of ordered dropout (OD) (Horvath et al., 2021) to simultaneously train multiple sub-models with different fractions of channels. The OD method is parameterized by a candidate dropout set \(\mathcal{D}_{r}\) with dropout rate values \(d_{r}\in(0,1]\). At a selected \(d_{r}\), for any layer \(l\), the model uses a \(d_{r}\)-sub-model with only the channels with indices \(\{0,1,...,\lceil d_{r}\cdot C_{l}\rceil-1\}\) active, effectively pruning the remaining \(\{\lceil d_{r}\cdot C_{l}\rceil,...,C_{l}-1\}\) channels. Hence, during training, the selection of a \(d_{r}\)-sub-model with \(d_{r}<1.0\in\mathcal{D}_{r}\) is a form of channel pruning, while \(d_{r}=1.0\) trains the full model. For each mini-batch of data, we perform a forward pass once for each value of \(d_{r}\) in \(\mathcal{D}_{r}\), accumulating the loss. We then perform a backward pass in which the model parameters are updated based on the gradients computed on the accumulated loss. We first train an AR model with a dropout set \(\mathcal{D}_{r}\). For the ReLU budget evaluation, we consider only the model with \(d_{r}=1.0\) and finalize the mask by following the methods in Sections 4.1 and 4.2. During the stage of maximizing activation similarity, we fine-tune the PR model supporting the same set \(\mathcal{D}_{r}\) as that of the AR model. In particular, the loss function for the fine-tuning is the same as Eq. 4 for \(d_{r}=1.0\). For \(d_{r}<1.0\), we exclude the PRAM loss because we empirically observed that adding the PRAM loss for each sub-model does not, on average, improve accuracy. During inference, SENet++ models can be dynamically switched to support reduced channel widths, reducing the number of both ReLUs and MACs compared to the baseline model.

Figure 3: Different stages of the proposed training methodology for efficient private inference with dynamic channel reduction. For example, the model here supports two channel SFs, \(S_{1}\) and \(S_{2}\). Note that, similar to (Horvath et al., 2021), for each SF support we use a separate batch-normalization (BN) layer to maintain separate statistics.
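A minimal sketch of one SENet++ training step follows; `set_dropout_rate` is a hypothetical hook that activates only the first \(\lceil d_{r}\cdot C_{l}\rceil\) channels of every layer (with its matching BN statistics) — it is not an API of the paper's code.

```python
def train_step(model, batch, optimizer, loss_fn, dropout_rates=(0.5, 1.0)):
    """One SENet++ update: forward once per dropout rate, accumulate the
    losses, and back-propagate once on the sum (ordered dropout training)."""
    x, y = batch
    optimizer.zero_grad()
    total = 0.0
    for d_r in dropout_rates:
        model.set_dropout_rate(d_r)     # select the d_r-sub-model
        total = total + loss_fn(model(x), y)
    total.backward()                    # one backward pass on the accumulated loss
    optimizer.step()
```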
## 5 Experiments
### Experimental Setup
**Models and Datasets.** To evaluate the efficacy of the SENet-yielded models, we performed extensive experiments on four popular datasets, CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), Tiny-ImageNet (Hansen, 2015), and ImageNet2, with three different model variants, namely ResNet (ResNet18, ResNet34) (He et al., 2016), wide residual networks (WRN22-8) (Zagoruyko and Komodakis, 2016), and VGG (VGG16) (Simonyan and Zisserman, 2014). We used the PyTorch API to define and train our models on an Nvidia RTX 2080 Ti GPU.
Footnote 2: On ImageNet, for comprehensive training with limited resources, we sample 100 classes from the ImageNet dataset with 500 and 50 training and test examples per class, respectively.
**Training Hyperparameters.** We performed standard data augmentation (horizontal flip and random cropping with reflective padding) and the SGD optimizer for all training. We trained the baseline all-ReLU model for 240, 120, and 60 epochs for CIFAR, Tiny-ImageNet, and ImageNet respectively, with a starting learning rate (LR) of 0.05 that decays by a factor of 0.1 at the 62.5%, 75%, and 87.5% training epochs completion points. For all the training we used an weight decay coefficient of \(5\times 10^{-4}\). For a target ReLU budget, we performed the mask evaluation for 150, 100, and 30 epochs, respectively, for the three dataset types with the \(\epsilon\) set to \(0.05\), meaning the training prematurely terminates when less than \(5\%\) of the total #ReLU masks change their positions. Finally, we performed the post-ReLU activation similarity improvement for 180, 120, and 50 epochs, for CIFAR, Tiny-ImageNet, and ImageNet respectively. Also, unless stated otherwise, we use \(\lambda=0.9\), and \(\beta=1000\) for the loss described in Eq. 4. Further details of our training hyper-parameter choices are provided in the Appendix. In Table 5, we report the accuracy averaged over three runs.
### SENet Results
As shown in Table 5, SENet yields models that have higher accuracy than existing alternatives by a significant margin while often requiring fewer ReLUs. For example, at a small ReLU budget of \(\leq 100\)k, our models yield up to \(4.15\%\) and \(7.8\%\) higher accuracy, on CIFAR-10 and CIFAR-100, respectively. At a ReLU budget of \(\leq 500\)k, our improvement is up to \(0.50\%\) and \(2.38\%\), respectively, on the two datasets. We further evaluate the communication saving due to the non-linearity reduction by using the per-ReLU communication cost reported in Table 2. In particular, the communication saving reported in the \(8^{th}\) column of Table 5 is computed as the ratio of the communication cost of an AR model to that of the corresponding PR model with reduced ReLUs. We do not report savings for the custom models, as they do not have a corresponding AR baseline model. On Tiny-ImageNet, SENet models can provide up to \(0.3\%\) higher performance while requiring \(3.08\times\) fewer ReLUs (Table 3). More importantly, even for a high-resolution dataset like ImageNet, SENet models can yield close to the baseline performance, demonstrating the efficacy of our proposed training method.
Table 4: Results with ReLU reduction at the granularity of activation channels, evaluated on CIFAR-100.
Table 3: Results on Tiny-ImageNet and ImageNet.
**Results with activation channel level ReLU reduction.** As shown in Table 4, when trimming ReLUs at the coarser granularity of activation channels, SENet models suffer a slightly larger accuracy drop than at the pixel level. For example, at a ReLU budget of 240k, channel-level ReLU removal yields an accuracy of \(79.3\%\) compared to \(79.81\%\) for pixel-level removal. However, compared to existing alternatives, SENet achieves improved performance of up to \(1.85\%\) for similar ReLUs.
### SENet++ Results
For SENet++, we performed experiments with \(\mathcal{D}_{r}=[0.5,1.0]\), meaning each training loop can yield models with two different channel dropout rates. The \(0.5\)-sub-model enjoys a \(\sim\)\(4\times\) MACs reduction compared to the full model. Moreover, as shown in Fig. 4, the \(0.5\)-sub-model also requires significantly less \(\#\)ReLUs due to reduced model size. In particular, the smaller models have \(\#\)ReLUs reduced by a factor of \(2.05\times\), \(2.08\times\), and \(1.88\times\) on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, compared to the PR full models, averaged over four experiments with different ReLU budgets for each dataset. _Lastly, the similar performance of the SENet and SENet++ models at \(d_{r}=1.0\) with similar ReLU budgets, clearly depicts the ability of SENet++ to yield multiple sub-models without sacrificing any accuracy for the full model._
### Analysis of Linear and ReLU Inference Latency
Table 2 shows that the GC-based online latency of one ReLU operation is \(\sim\)\(343\times\) higher than that of one linear operation (multiply and accumulate), making ReLU the dominant component of the online latency. Inspired by this observation, we quantify the online PI latency as that of the \(N\) ReLU operations for a model with a ReLU budget of \(N\). Based on this evaluation, Fig. 5(a) shows the superiority of SENet++ with up to \(\sim\)\(9.6\times\) (\(\sim\)\(1.92\times\)) reduced online ReLU latency on CIFAR-10 (CIFAR-100). With a negligible accuracy drop, this latency improvement can be up to \(\sim\)\(21\times\). Furthermore, when \(d_{r}<1.0\), SENet++ requires fewer MACs, and the linear operation latency can be significantly reduced, as demonstrated in Fig. 5(b).
### Ablation Studies
**Importance of ReLU sensitivity.** To understand the importance of layer-wise ReLU sensitivity evaluations at a given ReLU budget, we conducted experiments with evenly allocated
ReLUs. Specifically, for ResNet18 with a ReLU budget of \(25\%\) of the original model's, we randomly removed \(75\%\) of the ReLUs from each PR layer (replacing them with identity elements) to create the ReLU mask, and trained the PR model with this mask. We further trained two other PR ResNet18 models with similar and lower \#ReLU budgets, with the per-layer ReLUs allocated following the proposed sensitivity. As shown in Table 6, the sensitivity-driven PR models yield significantly improved performance of \(\sim\)\(5.76\%\) at a similar ReLU budget, demonstrating the importance of the proposed ReLU sensitivity.
**Choice of the hyperparameters \(\lambda\) and \(\beta\).** To determine the AR teacher's influence on the PR model's learning, we conducted the final-stage distillation with \(\lambda\in[0.1,0.3,0.5,0.7,0.9]\) and \(\beta\in[100,300,500,700,1000]\). As shown in Fig. 6, the performance of the student PR model improves with increasing teacher influence, in terms of both high \(\lambda\) and high \(\beta\) values. However, we also observe that the performance improvement tends to saturate at \(\beta\approx 1000\). Note that we keep \(\lambda=0.5\) and \(\beta=1000\) for the \(\beta\) and \(\lambda\) ablations, respectively.
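Since Eq. 4 itself is not reproduced here, the following is only a hedged stand-in showing how \(\lambda\) and \(\beta\) could jointly scale the teacher's influence in a standard distillation objective; the specific terms (KL on softened logits, MSE on features) and the temperature are our assumptions, not the paper's formulation.

```python
import torch.nn.functional as F

def distillation_loss(s_logits, t_logits, s_feat, t_feat, labels,
                      lam=0.5, beta=1000.0, tau=4.0):
    # Hypothetical stand-in for Eq. 4: lam trades cross-entropy against
    # logit distillation; beta scales feature matching. Both knobs
    # increase the AR teacher's influence on the PR student.
    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                  F.softmax(t_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    fm = F.mse_loss(s_feat, t_feat)
    return (1.0 - lam) * ce + lam * kd + beta * fm
```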
## 6 Conclusions
In this paper, we introduced the notion of ReLU sensitivity for the non-linear layers of a DNN model. Based on this notion, we presented an automated ReLU allocation and training algorithm for models with limited ReLU budgets that targets latency- and communication-efficient PI. The resulting networks can achieve accuracy similar to SOTA while significantly reducing the \(\#\) ReLUs by up to \(9.6\times\) on CIFAR-10, enabling a dramatic reduction of the latency and communication costs of PI. Extending this idea of efficient PI to vision transformer models is an interesting direction for future research.
## 7 Acknowledgment
This work was supported in parts by Intel, DARPA under the agreement numbers HR00112190120, and FA8650-18-1-7817.
Table 6: Importance of ReLU sensitivity.
Figure 4: Performance of SENet++ on three datasets for various \#ReLU budgets. The points labeled A, B, C, and D correspond to experiments with different target \#ReLUs for the full model (\(d_{r}=1.0\)). For SENet++, note that a single training loop yields two points with the same label, corresponding to the two different dropout rates.
Figure 5: Performance comparison of SENet++ (with \(d_{r}=1.0\) and \(0.5\)) vs. existing alternatives: (a) with VGG16 and ResNet18 in terms of ReLU latency. The labels A, B, C, and D correspond to experiments with different target \#ReLUs for the full model (\(d_{r}=1.0\)); for SENet++, a single training loop yields two points with the same label, corresponding to the two different dropout rates. (b) Comparison between DeepReDuce and SENet++ for a target \#ReLU budget of \(\sim\)\(50\)k with ResNet18 on CIFAR-100.
Figure 6: Ablation studies with different \(\lambda\) and \(\beta\) values for the loss term in Eq. 4. |
2303.10632 | Training a spiking neural network on an event-based label-free flow
cytometry dataset | Imaging flow cytometry systems aim to analyze a huge number of cells or
micro-particles based on their physical characteristics. The vast majority of
current systems acquire a large amount of images which are used to train deep
artificial neural networks. However, this approach increases both the latency
and power consumption of the final apparatus. In this work-in-progress, we
combine an event-based camera with a free-space optical setup to obtain spikes
for each particle passing in a microfluidic channel. A spiking neural network
is trained on the collected dataset, resulting in 97.7% mean training accuracy
and 93.5% mean testing accuracy for the fully event-based classification
pipeline. | Muhammed Gouda, Steven Abreu, Alessio Lugnan, Peter Bienstman | 2023-03-19T11:32:57Z | http://arxiv.org/abs/2303.10632v1 | # Training a spiking neural network on an event-based label-free flow cytometry dataset
###### Abstract.
Imaging flow cytometry systems aim to analyze a huge number of cells or micro-particles based on their physical characteristics. The vast majority of current systems acquire a large amount of images which are used to train deep artificial neural networks. However, this approach increases both the latency and power consumption of the final apparatus. In this work-in-progress, we combine an event-based camera with a free-space optical setup to obtain spikes for each particle passing in a microfluidic channel. A spiking neural network is trained on the collected dataset, resulting in 97.7% mean training accuracy and 93.5% mean testing accuracy for the fully event-based classification pipeline.
Footnote †: Both authors contributed equally to this research.
In a previous work using a traditional frame-based camera, the dataset was on the order of hundreds of gigabytes (Beng et al., 2018).
We used two different classes of spherical microparticles (class A with a diameter of 16 \(\upmu\)m and class B with a diameter of 20 \(\upmu\)m). We ran four separate experiments for each class of microparticles, where each experiment ran for \(T_{exp}\approx 60\)s, for a total of 480 seconds of data. The accumulation time for a single particle is \(T_{acc}\approx 10\) ms. Therefore, we have around 6,000 samples per experiment, or 24,000 samples in total per class. We train the network for four different train-test splits, each split using a different experiment as testing data and the remaining experiments as training data.
### Spiking neural network
We pre-process our event-based imaging data using the Tonic library (Beng et al., 2018). The pre-processing involved a spatial downsampling from \(640\times 480\times 2\) to \(32\times 24\times 2\) (event polarity is left unchanged), and a temporal downsampling by passing the events for each neuron corresponding to a downsampled pixel of fixed polarity through a discretized leaky-integrate-and-fire (LIF) neuron:
\[s_{i}^{out}(n+1)=1\text{ if }\left(\beta u_{i}(n)+w\,s_{i}^{in}(n+1)\right)\geq u_{thr}\text{ else }0\]
\[r_{i}(n+1)=\max_{t\in\{0,\dots,t_{rf}\}}s_{i}^{out}(n+1-t)\]
\[u_{i}(n+1)=0\text{ if }r_{i}(n+1)\text{ else }\left(\beta u_{i}(n)+w\,s_{i}^{in}(n+1)\right)\]
where \(u_{i}(n),s_{i}^{out}(n),r_{i}(n)\) denote the membrane potential, binary spike output, and binary refractory period of neuron \(i\) at timestep \(n\), respectively. Further, \(\beta=0.9\) is the membrane decay rate, \(w=1.0\) is the synaptic weight, \(u_{thr}=3.0\) is the threshold voltage, \(t_{rf}=2\) is the refractory period, \(s_{i}^{in}(n)\) equals unity if there is an input event from the event-based camera in the \(i\)th \(20\times 20\times 1\) patch of pixels at timestep \(n\) (and zero otherwise).
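A minimal NumPy sketch of the discretized LIF update above, applied to the event stream of one downsampled pixel/polarity; the function name and array layout are our own.

```python
import numpy as np

def lif_downsample(s_in, beta=0.9, w=1.0, u_thr=3.0, t_rf=2):
    """Temporal-downsampling LIF neuron: s_in is a binary array of
    length T with the aggregated input events at each timestep."""
    T = len(s_in)
    u = 0.0
    s_out = np.zeros(T, dtype=np.uint8)
    for n in range(T):
        v = beta * u + w * s_in[n]          # leaky integration
        s_out[n] = 1 if v >= u_thr else 0   # threshold crossing
        # refractory flag: any output spike in the last t_rf steps (incl. now)
        r = s_out[max(0, n - t_rf):n + 1].max()
        u = 0.0 if r else v                 # membrane held at zero while refractory
    return s_out
```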
For classification, we use a feedforward network of LIF neurons with an input layer of size \(32\times 24\times 2=1536\), a hidden layer of size 100, and an output layer of size 20 (using population coding with 10 neurons per output class). The neuron parameters are constant for all neurons in the network, with the membrane decay rate \(\beta=0.9\) and membrane threshold \(U_{thr}=0.5\). We use the Adam optimizer to optimize the mean squared error on the output spike rate, with a desired population spike rate of 20% (correct) and 80% (incorrect). Backpropagation is applied through a shifted Heaviside function in the forward pass and a fast sigmoid
\[S=\frac{U}{1+k\|U\|}\]
with slope \(k=75\) in the backward pass (Beng et al., 2018). We use snnTorch (Beng et al., 2018) for our implementation.
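A hedged sketch of this classifier using snnTorch's documented `Leaky` neuron and `fast_sigmoid` surrogate; the layer sizes and constants follow the text, while the forward-pass helper and data layout are illustrative.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate, utils

spike_grad = surrogate.fast_sigmoid(slope=75)   # matches k = 75 above

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 24 * 2, 100),
    snn.Leaky(beta=0.9, threshold=0.5, spike_grad=spike_grad, init_hidden=True),
    nn.Linear(100, 20),   # 2 classes x 10 neurons (population coding)
    snn.Leaky(beta=0.9, threshold=0.5, spike_grad=spike_grad,
              init_hidden=True, output=True),
)

def forward_pass(data):         # data: (T, B, 2, 32, 24)
    utils.reset(net)            # clear membrane states between samples
    spikes = [net(data[t])[0] for t in range(data.shape[0])]
    return torch.stack(spikes)  # (T, B, 20) output spike trains
```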
## 3. Results
We trained the spiking neural network described in Section 2.2 for 10 epochs on an NVIDIA GeForce RTX 2080 Ti GPU with 11 GB of memory. Figure 2 shows the training and testing performance for this experiment. The figure shows that the accuracy exceeds 98.5% for all experiments, with the exception of the testing accuracy for the network trained on experiments 1, 3, and 4. More work is needed to analyze the anomalous performance on this data split.
## 4. Outlook
As next steps, we are investigating the use of exact timing information for time-to-first-spike classification, which should yield even faster decisions. In this work, we fixed the hyperparameters to the values mentioned in Section 2.2. For further performance gains, we will optimize the hyperparameters and network architecture through a cross-validation procedure. Finally, we will transfer the SNN to a neuromorphic chip for a fully neuromorphic processing pipeline.
## Acknowledgments
This work was performed in the context of the European projects Neoteric (grant agreement 871330), Prometheus (grant agreement 101070195) and Post-Digital project (grant agreement 860360).
|
2301.09268 | PCBDet: An Efficient Deep Neural Network Object Detection Architecture
for Automatic PCB Component Detection on the Edge | There can be numerous electronic components on a given PCB, making the task
of visual inspection to detect defects very time-consuming and prone to error,
especially at scale. There has thus been significant interest in automatic PCB
component detection, particularly leveraging deep learning. However, deep
neural networks typically require high computational resources, possibly
limiting their feasibility in real-world use cases in manufacturing, which
often involve high-volume and high-throughput detection with constrained edge
computing resource availability. As a result of an exploration of efficient
deep neural network architectures for this use case, we introduce PCBDet, an
attention condenser network design that provides state-of-the-art inference
throughput while achieving superior PCB component detection performance
compared to other state-of-the-art efficient architecture designs. Experimental
results show that PCBDet can achieve up to 2$\times$ inference speed-up on an
ARM Cortex A72 processor when compared to an EfficientNet-based design while
achieving $\sim$2-4\% higher mAP on the FICS-PCB benchmark dataset. | Brian Li, Steven Palayew, Francis Li, Saad Abbasi, Saeejith Nair, Alexander Wong | 2023-01-23T04:34:25Z | http://arxiv.org/abs/2301.09268v1 | PCBDet: An Efficient Deep Neural Network Object Detection Architecture for Automatic PCB Component Detection on the Edge
###### Abstract
There can be numerous electronic components on a given PCB, making the task of visual inspection to detect defects very time-consuming and prone to error, especially at scale. There has thus been significant interest in automatic PCB component detection, particularly leveraging deep learning. However, deep neural networks typically require high computational resources, possibly limiting their feasibility in real-world use cases in manufacturing, which often involve high-volume and high-throughput detection with constrained edge computing resource availability. As a result of an exploration of efficient deep neural network architectures for this use case, we introduce PCBDet, an attention condenser network design that provides state-of-the-art inference throughput while achieving superior PCB component detection performance compared to other state-of-the-art efficient architecture designs. Experimental results show that PCBDet can achieve up to 2\(\times\) inference speed-up on an ARM Cortex A72 processor when compared to an EfficientNet-based design while achieving \(\sim\)2-4% higher mAP on the FICS-PCB benchmark dataset.
## 1 Introduction
A crucial process in printed circuit board assembly is the visual inspection of electronic components for potential defects. This can help avoid functional failure of devices, user data leakage, or even adversaries taking control of the system Lu et al. (2020). Given that there can be hundreds of electronic components on a given PCB, the task of visual inspection can be extremely time-consuming and prone to operator error, especially during large assembly runs. Therefore, the ability to automatically detect different electronic components on a PCB board for automated inspection purposes is highly desired. As a result, there has been significant interest in the research community in automatic PCB component detection, particularly leveraging deep learning Kuo et al. (2019); Li et al. (2022). However, one consideration that has been largely left unexplored in the research literature in this area is computational efficiency, which is particularly critical for real-world visual quality inspection scenarios involving high-volume, high-throughput electronics manufacturing use-cases under constrained edge computing resources.
Motivated by the need for both high efficiency and high accuracy for automatic PCB component detection, this study explores efficient deep neural network object detection architectures for the purpose of automatic PCB component detection on the edge. As a result of this exploration, we introduce PCBDet, a highly efficient, performant self-attention deep neural network architecture
design. This architecture notably makes use of the recently introduced AttendNeXt backbone, which has been shown to achieve state-of-the-art performance for TinyML, and is integrated here into RetinaNet Wong et al. (2022), Lin et al. (2017).
The paper is organized as follows. In Section 2, details about the architecture of PCBDet, the training procedure, the evaluation procedure, and the data used for training and evaluation are described. In Section 3, experimental results in terms of component detection performance, model size, and inference speed on different computing hardware are presented. Finally, conclusions are drawn and future directions are discussed in Section 4.
## 2 Methods
### Dataset
To appropriately explore the quality and impact of our network design, a dataset that can facilitate the training and validation of robust models is essential. With this in mind, models were trained and tested using the DSLR images of the FICS-PCB dataset, a public, comprehensive, and diverse PCB component dataset that includes a number of challenging cases. The dataset itself consists of a total of 31 PCB boards containing over 77 thousand components to detect, with capacitors and resistors being the most widely represented component types Lu et al. (2020). Each board in the FICS-PCB dataset was further cropped into square patches, reducing train-time resource demands while maintaining per-component resolution; each patch was then resized for additional size reduction later in the input pipeline. Figure 2 shows several examples of such patches with annotated ground truth component labels.
Division of the dataset into distinct train, validation, and test splits is another crucial element in confirming the soundness of our experimentation. The preservation of exclusivity in the test set here is integral, since it allows for performance evaluation on a strict holdout set, and thus all of the image patches extracted from seven of the 31 PCBs were used as the test set. With the exception of one of
Figure 1: Overview of PCBDet architecture. PCBDet consists of A) a double-condensing attention condenser feature encoder feeding a FPN, and B) classification and box regression convolutional sub-nets for bounding box prediction.
the boards, which was excluded due to a lack of DSLR pictures, patches from the remaining boards (which total to 23) were used for the train/validation splits. From this set, 87.5% of the patches were taken for the train set, while the other 12.5% were used for the validation set (for post-epoch performance validation).
### Architecture
As seen in Figure 1, the proposed PCBDet possesses an efficient self-attention architecture design inspired by two different architecture design paradigms: 1) RetinaNet bounding box prediction structure Lin et al. (2017), and 2) double-condensing attention condenser architecture Wong et al. (2022). As a single-stage detector architecture design, the RetinaNet structure encompasses a more efficient object detection process when compared to state-of-the-art two-stage object detectors like R-CNN Lin et al. (2017). RetinaNet has also seen increased performance when compared to one-stage detectors such as SSD or YOLO Lin et al. (2017). As such, the proposed PCBDet architectural design takes inspiration from the RetinaNet structure, looking to adopt an efficient single-stage approach without seeing substantial tradeoffs in performance.
Without an efficient backbone, however, our network cannot maximize on the possible efficiency-based benefits of the RetinaNet framework. More specifically, the backbone architecture within a RetinaNet structure is the feature encoder that feeds into the convolutional sub-nets, and while a larger, complex backbone may enable increased performance gains, this can lead to substantial losses in efficiency. The design of a small, efficient backbone architecture is thus crucial in creating an effective and efficient object detection network.
PCBDet's backbone architecture takes inspiration from the AttendNeXt double-condensing attention condenser architecture design, which has shown top of the line performance among other state-of-the-art efficient architectures for ImageNet classification Wong et al. (2022). This self-attention
Figure 2: Examples of PCB patches annotated with ground truth bounding boxes.
architecture design features double-condensing attention condenser modules, applied within a convolutional network structure, to increase the speed and efficiency of standard convolutional architectures for feature extraction, thus serving as the basis for an efficient backbone for RetinaNet Wong et al. (2022). The AttendNeXt feature encoder used in our study was first pretrained on ImageNet, establishing a basis for the weights to be used in our object detection task. The classification head was then removed, and stage outputs were taken as inputs for a feature pyramid network (FPN), whose layers respectively feed into classification and regression subnets as per general RetinaNet structure. To further increase the efficiency of our designed network, the first of four stages of the feature encoder was omitted from the construction of the FPN, decreasing the amount of subnet operations performed per pass. The resultant network from this design process is dubbed PCBDet.
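A hedged sketch of the backbone-to-FPN wiring using torchvision's `FeaturePyramidNetwork`; the stage names, channel widths, and spatial sizes below are hypothetical placeholders, since the text does not list AttendNeXt's per-stage dimensions.

```python
from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

# Hypothetical channel widths for backbone stages 2-4 (stage 1 is omitted
# from FPN construction, as described above).
stage_channels = [64, 128, 256]
fpn = FeaturePyramidNetwork(in_channels_list=stage_channels, out_channels=256)

def build_pyramid(s2, s3, s4):
    """Feed the headless backbone's stage outputs into the FPN; each
    pyramid level then goes to the classification/box-regression subnets."""
    feats = OrderedDict([("s2", s2), ("s3", s3), ("s4", s4)])
    return fpn(feats)

# Shape check with dummy feature maps:
levels = build_pyramid(torch.randn(1, 64, 80, 80),
                       torch.randn(1, 128, 40, 40),
                       torch.randn(1, 256, 20, 20))
print({k: tuple(v.shape) for k, v in levels.items()})
```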
We also tried integrating an EfficientNetB4-based backbone into RetinaNet and compared the performance of this architecture with PCBDet. EfficientNets are a family of convolutional models generally designed for, as the name implies, efficiency, with the model seeing upscaling as it progresses from B0 through B7 Tan and Le (2019). EfficientNet-B4 shows improved top-1 ImageNet performance when compared to other state-of-the-art convolutional classifiers while maintaining a lower number of parameters, and was thus chosen as an efficient but potent backbone to explore with RetinaNet Tan and Le (2019). As with PCBDet's AttendNeXt backbone, the EfficientNet feature encoder had its classification head removed and block outputs were used as inputs for an FPN. To provide a fair point of comparison for PCBDet, the integration of this feature encoder with the FPN once again seeks to achieve greater efficiency, with the first three of seven blocks of the EfficientNet-B4 feature encoder being excluded from FPN construction. The network designed here is referred to as EfficientNet-Det.
### Training
Proper exploration of our architectures requires thorough training, and for compact networks such as PCBDet in particular, slower, gradual weight learning is crucial to appropriately search for effective weights in the training process. As such, PCBDet and EfficientNet-Det were each trained for 300 epochs, with a base learning rate of 2e-4 and a proprietary learning rate scheduler, along with Adam optimization. While potential overfitting could arise from slow, gradual learning, this issue was combated with the use of image augmentation, including vertical and horizontal flipping and translation, colour degeneration, and random cutouts (patch removal), as well as the monitoring of network performance on the validation set. It is also essential to address the disproportionate representation of components in the FICS-PCB dataset. To do so, network training uses the focal loss metric, which accounts for class imbalances by adding a focusing parameter to the standard cross-entropy loss, resulting in greatly decreased weighting for easy, well-classified data points Lin et al. (2017).
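For reference, focal loss down-weights well-classified examples via its focusing parameter \(\gamma\); a minimal usage sketch with torchvision's implementation follows (the paper does not state its \(\alpha\)/\(\gamma\) values, so the RetinaNet defaults are assumed).

```python
import torch
from torchvision.ops import sigmoid_focal_loss

logits = torch.randn(8, 6)    # per-anchor class logits (6 component classes assumed)
targets = torch.zeros(8, 6)   # one-hot component labels
targets[torch.arange(8), torch.randint(0, 6, (8,))] = 1.0

loss = sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0,
                          reduction="mean")
```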
During training, the first and second blocks of the AttendNeXt feature encoder in PCBDet were frozen, allowing for the encoder to retain its memories of low-level features from ImageNet pretraining while also tuning higher-level feature blocks to better recognize the shapes and objects associated with PCB components. Similarly, the first four of seven blocks were frozen for the EfficientNet-B4 encoder in EfficientNet-Det.
### Evaluation
Given the goal of efficient model design, we need a method that can effectively measure the complexity and compactness of a model. Inference time, or the time taken per forward pass, reveals how quickly a network can perform as a predictor; in our work, inference time was measured for both PCBDet and EfficientNet-Det using an Intel Core i7-10750H processor and an NVIDIA GeForce GTX 1650 Ti, both within a Dell XPS 15 9500 laptop, as well as a Jetson Nano and a 64-bit ARM Cortex A72 processor, altogether providing a picture of the on-the-edge inference speeds of the two networks. The number of parameters was also recorded for each of the two networks, providing an additional measure of model compactness.
The predictive power for bounding boxes of our networks is another necessary measure to analyze, in order to compare the model performance of the PCBDet and EfficientNet-Det architectures. This performance assessment was realized using the mean average precision (mAP) for bounding box predictions, over IOU thresholds from 0.5 to 0.95 with a 0.05 step size, commonly known as
mAP@[0.5:0.95]; this mAP metric is also known as COCO mAP, the standard performance metric for COCO challenges Tong et al. (2020). Averaging performance over a range of IOU thresholds provides a more generalized sense of object detection performance across resolutions, as lower IOU thresholds test for more roughly correct box predictions while higher thresholds solely reward exact bounding box location. The validation and test performances of PCBDet and EfficientNet-Det were taken to be the mAP@[0.5:0.95] on the validation and holdout test sets respectively; this general AP metric helps to determine the predictive accuracy of our networks during and after training.
While individual differences in network performance and compactness can be seen through inference time and COCO mAP measures, a collective assessment can provide a better picture of the difference in the accuracy-complexity balance achieved in PCBDet and EfficientNet-Det. This unified analysis can be performed using the NetScore metric, which acts as a quantitative method of assessing this very balance Wong (2018). In this calculation, the coefficient values used were \(\alpha\) = 2, \(\beta\) = 1, and \(\gamma\) = 1, in accordance with the original NetScore study Wong (2018). The inference-time multiply-accumulate (MAC) operations measure was also replaced with the experimental inference time (in seconds) using the ARM Cortex A72 in the calculation, as this experimental metric of complexity provides a more practical measure for edge performance than MAC operations, while the COCO mAP was used as the accuracy metric for the calculation. The calculation used for NetScore was
\[NetScore=\frac{(mAP*100)^{2}}{(MParams)(Inference\ time\ (s))}\]
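A small helper computing NetScore exactly as printed above (our illustration; the input values in the example are hypothetical):

```python
def netscore(coco_map: float, mparams: float, inference_time_s: float) -> float:
    """NetScore with alpha=2, beta=1, gamma=1, using measured inference
    time in place of MAC operations, per the formula above."""
    return (coco_map * 100.0) ** 2 / (mparams * inference_time_s)

print(netscore(0.50, 10.0, 8.0))  # hypothetical mAP / params / latency values
```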
## 3 Results
The efficacy of the proposed PCBDet for PCB component detection is compared here with EfficientNet-Det across the following metrics: 1) COCO mAP, 2) model size, and 3) inference speed on various low-power computing hardware.
**COCO mAP**. It can be observed in Figure 3 that the proposed PCBDet achieves noticeable gains of approximately 4% and 2% in test and validation COCO mAP, respectively, when compared to EfficientNet-Det. This gain in mAP is particularly interesting given that PCBDet is significantly smaller and faster than EfficientNet-Det, as we discuss in greater detail below. As such, these results illustrate that a high level of performance can be achieved with the proposed PCBDet for the purpose of automatic PCB component detection on the edge.
**Model Size**. As shown in Figure 4, it can be observed that the proposed PCBDet possesses less than half the number of total parameters, as well as trainable parameters, when compared to EfficientNet-Det. This is particularly important for edge based applications such as automatic on-the-edge PCB component detection where memory resources are limited.
Figure 3: COCO mAP on validation and test data for PCBDet and EfficientNet-Det.
**Inference Speed**. As shown in Figure 5, the proposed PCBDet is more than 30% faster than EfficientNet-Det on the NVIDIA GeForce GTX 1650 Ti, with an even greater speed gain on slower hardware such as the Jetson Nano (almost 65% faster) and Intel Core i7-10750H (over 45% faster). As seen in Figure 6, the performance gains of the proposed PCBDet on lower-power hardware were especially apparent when evaluated on an ARM Cortex A72, where PCBDet was more than 2\(\times\) faster than EfficientNet-Det. These inference speed results demonstrate the efficacy of the proposed PCBDet for high-throughput PCB component detection on the edge.
Finally, using the above results, PCBDet was found to achieve a NetScore of 28.2670 while EfficientNet-Det achieved a NetScore of 13.5749, supporting our findings that PCBDet achieves a superior accuracy-complexity balance when compared to EfficientNet-Det.
These results demonstrate overall that PCBDet shows significantly greater efficacy than RetinaNet with EfficientNet-B4, which is currently considered a state-of-the-art backbone for TinyML. Ultimately, we have developed a model for PCB object detection that shows very strong performance despite its small size and high inference throughput.
Figure 4: Number of total and trainable parameters for PCBDet and EfficientNet-Det.
Figure 5: Inference time for PCBDet and EfficientNet-Det across different hardware.
## 4 Conclusion
Here, we conducted an exploration of efficient deep neural network object detection architectures for the purpose of automatic PCB component detection on the edge. The resulting network architecture, which we coin PCBDet, methodically integrates the recently introduced AttendNeXt backbone into RetinaNet. This results in an architecture which can achieve up to a 2\(\times\) inference speed-up on low-power hardware compared to other state-of-the-art efficient architectures, while still achieving a higher mAP on the FICS-PCB benchmark dataset. This makes PCBDet very well-suited for component detection in high-throughput manufacturing scenarios with limited computational resources. Future work may include investigating whether a similar strategy involving the methodical use of the AttendNeXt backbone could be employed to develop high-performance, efficient deep neural network object detection architectures for other applications.
|
2310.11366 | Lie Group Decompositions for Equivariant Neural Networks | Invariance and equivariance to geometrical transformations have proven to be
very useful inductive biases when training (convolutional) neural network
models, especially in the low-data regime. Much work has focused on the case
where the symmetry group employed is compact or abelian, or both. Recent work
has explored enlarging the class of transformations used to the case of Lie
groups, principally through the use of their Lie algebra, as well as the group
exponential and logarithm maps. The applicability of such methods is limited by
the fact that depending on the group of interest $G$, the exponential map may
not be surjective. Further limitations are encountered when $G$ is neither
compact nor abelian. Using the structure and geometry of Lie groups and their
homogeneous spaces, we present a framework by which it is possible to work with
such groups primarily focusing on the groups $G = \text{GL}^{+}(n, \mathbb{R})$
and $G = \text{SL}(n, \mathbb{R})$, as well as their representation as affine
transformations $\mathbb{R}^{n} \rtimes G$. Invariant integration as well as a
global parametrization is realized by a decomposition into subgroups and
submanifolds which can be handled individually. Under this framework, we show
how convolution kernels can be parametrized to build models equivariant with
respect to affine transformations. We evaluate the robustness and
out-of-distribution generalisation capability of our model on the benchmark
affine-invariant classification task, outperforming previous proposals. | Mircea Mironenco, Patrick Forré | 2023-10-17T16:04:33Z | http://arxiv.org/abs/2310.11366v2 | # Lie Group Decompositions for Equivariant Neural Networks
###### Abstract
Invariance and equivariance to geometrical transformations have proven to be very useful inductive biases when training (convolutional) neural network models, especially in the low-data regime. Much work has focused on the case where the symmetry group employed is compact or abelian, or both. Recent work has explored enlarging the class of transformations used to the case of Lie groups, principally through the use of their Lie algebra, as well as the group exponential and logarithm maps. The applicability of such methods to larger transformation groups is limited by the fact that depending on the group of interest \(G\), the exponential map may not be surjective. Further limitations are encountered when \(G\) is neither compact nor abelian. Using the structure and geometry of Lie groups and their homogeneous spaces, we present a framework by which it is possible to work with such groups primarily focusing on the Lie groups \(G=\text{GL}^{+}(n,\mathbb{R})\) and \(G=\text{SL}(n,\mathbb{R})\), as well as their representation as affine transformations \(\mathbb{R}^{n}\rtimes G\). Invariant integration as well as a global parametrization is realized by decomposing the 'larger' groups into subgroups and submanifolds which can be handled individually. Under this framework, we show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations. We evaluate the robustness and out-of-distribution generalisation capability of our model on the standard affine-invariant benchmark classification task, where we outperform all previous equivariant models as well as all Capsule Network proposals.
## 1 Introduction
Symmetry constraints in the form of invariance or equivariance to geometric transformations have proven to be widely applicable inductive biases in the context of deep learning (Bronstein et al., 2021). Group-theoretic methods for imposing such constraints have led to numerous breakthroughs across a variety of data modalities. Convolutional neural networks (LeCun et al., 1995), which make use of translation equivariance while operating on image data, have been generalized in several directions. Group-equivariant convolutional neural networks (GCNNs) represent one such generalization. Originally proposed in Cohen & Welling (2016), GCNNs make use of group convolution operators to construct layers that produce representations which transform in a predictable manner whenever the input signal is transformed by an a-priori chosen symmetry group \(G\). These models have been shown to exhibit increased generalization capabilities, while being less sensitive to \(G\)-perturbations of the input data. For these reasons, equivariant architectures have been proposed for signals in a variety of domains such as graphs (Han et al., 2022), sets (Zaheer et al., 2017) or point cloud data (Thomas et al., 2018). Constructing equivariant (convolutional) networks generally entails first choosing a group \(G\), a representation for the signal space in which our data lives and a description of the way this space transforms when the group _acts_ on it. Choosing a particular group \(G\) entails making a modelling assumption about the underlying (geometrical) structure of the data that should
be preserved. The group and group action employed also dictate how difficult it is to impose the desired symmetry constraint on the model. In the case of equivariant networks, early work has focused on the case where \(G\) is finite, with subsequent work largely concentrated on the Euclidean group \(\mathds{E}(n)\), and its subgroups \(\text{SE}(n,\mathbb{R})\) or \(\text{SO}(n)\).
Working with continuous groups is much more challenging, and the vast majority of equivariant models focus on the case where the group \(G\) has a set of desirable topological and structural properties, namely \(G\) is either compact or abelian, or both. Recent work (Bekkers, 2019; Finzi et al., 2020) has began to explore the possibility of building equivariant networks for "larger" groups, primarily through their representation as Lie groups - continuous groups with a differentiable structure. This research direction is promising since it allows for the modelling of symmetry groups beyond Euclidean geometry. Affine and projective geometry, and respectively affine and homography transformations are ubiquitous within computer vision, robotics and computer graphics (Zacur et al., 2014). Accounting for a larger degree of geometric variation has the promise of making vision architectures more robust to real-world data shifts. When working with non-compact and non-abelian Lie groups, for which the group exponential map is not surjective, standard tools from harmonic analysis cannot be employed straightforwardly. However, the structure and geometry of these groups can be exploited to build tractable optimization algorithms. Our main contribution is a framework by which it is possible to work with such groups, addressing the non-surjectivity of the exponential map as well as presenting a procedure by which invariant integration with respect to the Haar measure can be done in a principled manner. Rather than decomposing their representation, we choose to decompose the groups themselves, while working with the regular representation. The methodology and tools used are generally applicable to the case where one is working with a Lie group with finitely many connected components, however we restrict our focus to the groups \(\text{GL}^{+}(n,\mathbb{R})\) and \(\text{SL}(n,\mathbb{R})\), and more broadly the family of affine matrix Lie groups \(\mathbb{R}^{n}\rtimes H\), where \(H\) is one of the aforementioned groups. After presenting how one can parameterize functions and perform invariant integration on these groups, we show how an equivariant convolutional network can be constructed while still allowing kernels to be defined in the Lie algebra.
## 2 Related work
Recent proposals for Lie group equivariance (Bekkers, 2019; Finzi et al., 2020) focus on the infinite-dimensional regular representation of a group and rely on the group exponential map to allow convolution kernels to be defined analytically in the Lie algebra of the group. Working with the regular representation entails dealing with an intractable convolution integral over the group, and a (Monte Carlo) numerical integration procedure approximating the integral needs to be employed, which requires sampling group elements with respect to the _Haar_ measure of the group. Unfortunately, the applicability of these methods is limited to Lie groups for which the group exponential map is surjective, which is not the case for the affine group \(\text{Aff}(n)=\mathbb{R}^{n}\rtimes\text{GL}(n,\mathbb{R})\). Furthermore, these methods rely on the fact that for compact and abelian groups sampling with respect to the Haar measure of the group is straightforward, which is not the case for the affine groups of interest. MacDonald et al. (2022) aim to address these limitations, proposing a framework which can be applied to larger Lie groups, while still relying on the group exponential. While very general, their method comes with an exponential increase in memory requirements, as the group elements used for every convolutional layer need to be sampled and kept in memory before the forward pass.
## 3 Background
**Lie groups.** A group \(G\) is a set together with an associative binary operation \(G\times G\to G\) which tells us how group elements can be composed to form another. Every element \(g\in G\) has an inverse \(g^{-1}\in G\), and the group contains an identity element \(e\in G\) such that \(gg^{-1}=e\). A Lie group \(G\) is a group as well as a smooth manifold, such that \(\forall g,h\in G\) the group operation \((g,h)\mapsto gh\) and the inversion map \(g\mapsto g^{-1}\) are smooth. Let \(\text{M}_{n}(\mathbb{R})\) denote the vector space of \(n\times n\) real matrices, and \(\text{GL}(n,\mathbb{R})\) the Lie group consisting of all invertible \(n\times n\) matrices. A _linear or matrix_ Lie group refers to a Lie subgroup of \(\text{GL}(n,\mathbb{R})\). \(\text{GL}(n,\mathbb{R})\), the translation group \((\mathbb{R}^{n},+)\) and the family of affine groups \(\mathbb{R}^{n}\rtimes H\), \(H\leq\text{GL}(n,\mathbb{R})\) are our primary interest, with \(H\) usually being one of \(\text{GL}^{+}(n,\mathbb{R})\), \(\text{SL}(n,\mathbb{R})\leq\text{GL}^{+}(n,\mathbb{R})\) or the rotation group \(\text{SO}(n)\).
**Continuous group equivariance.** Equivariance with respect to the action of a locally compact group \(G\) can be realized by constructing layers using the cross-correlation/convolution operators. We recall that in the continuous setting we model our signals as functions \(f:X\to\mathbb{R}^{K}\) defined on some underlying domain \(X\). For example, images and feature maps can be defined as \(K\)-channel functions \(f\in L_{\mu}^{2}(\mathbb{R}^{2},\mathbb{R}^{K})\) which are square-integrable (with respect to the measure \(\mu\)), and which have bounded support in practice, e.g. \(f:[-1,1]^{2}\subseteq\mathbb{R}^{2}\to\mathbb{R}^{K}\). \(\mathcal{L}_{g}\) denotes the left-regular representation of \(G\), encoding the action of \(G\) on function spaces. For any continuous \(f\in C(X)\):
\[[\mathcal{L}_{g}f](x)\coloneqq f(g^{-1}x),\quad\forall g\in G,\;x\in X \tag{1}\]
Every locally compact \(G\) has a left (right) nonzero Radon measure \(\mu_{G}\) such that \(\mu_{G}(gB)=\mu_{G}(B)\) (\(\mu_{G}(Bg)=\mu_{G}(B)\)) for any Borel subset \(B\subseteq G\) and \(g\in G\). \(\mu_{G}\) is called the left (right) _Haar measure_ of \(G\) and it is unique up to multiplicative constants. A canonical example is the case \(G=(\mathbb{R}^{n},+)\) in which case \(\mu_{G}\) is the Lebesgue measure. The Haar measure allows for \(G\)-invariant integration to be realized, and for the group convolution to be defined. The invariance property can be easier to state without Borel sets if we define the functional \(\lambda_{\mu_{G}}:L^{1}(G)\to\mathbb{R}\) given by:
\[\lambda_{\mu_{G}}(f)=\int_{G}f(g)\text{d}\mu_{G}(g),\;\forall f\in L^{1}(G) \tag{2}\]
Then, a left Haar measure respects \(\lambda_{\mu_{G}}(\mathcal{L}_{g}f)=\lambda_{\mu_{G}}(f)\) for any \(g\in G\) and \(f\in L^{1}(G)\). For a group \(G\) acting transitively on locally compact spaces \(X\) and \(Y\) we then seek to construct an operator \(\mathcal{K}:L^{2}(X)\to L^{2}(Y)\) satisfying the _equivariance constraint_\(\mathcal{L}_{g}\circ\mathcal{K}=\mathcal{K}\circ\mathcal{L}_{g}\). Taking \(\mathcal{K}\in\mathcal{B}(L^{2}(X),L^{2}(Y))\) (linear and bounded), there exists a corresponding kernel \(k\in L^{1}(Y\times X)\) such that \(\mathcal{K}\) is an integral operator (Arendt & Bukhvalov, 1994, Theorem 1.3):
\[\mathcal{K}[f](y)=\int_{X}f(x)k(y,x)\text{d}\mu_{X}(x),\;\forall f\in L^{2}(X) \tag{3}\]
We formalize two scenarios, when \(X\) is a homogeneous space of \(G\) (not necessarily a group) and \(Y=G\), and the case where \(X=Y=G\). Focusing on the second case, if \(\text{d}\mu_{X}=\text{d}\mu_{G}\) is the Haar measure on \(G\), the integral operator can be reduced to the standard convolution/cross-correlation. Let \(k:Y\times X\to\mathbb{R}\) be a kernel that is invariant to the left action of \(G\) in both arguments, such that \(k(gx,gy)=k(x,y)\) for any \((x,y)\in Y\times X\) and \(g\in G\). Let \(\mu_{X}\) be a \(G\)-invariant Radon measure on \(X\), and define an operator \(\mathcal{K}\coloneqq C_{k}:L^{p}(X)\to L^{p}(G)\) (\(p\in\{1,2\}\)) such that \(\forall f\in L^{p}(X)\):
\[C_{k}:f\mapsto C_{k}f(y)=\int_{X}f(x)k(x,y)\text{d}\mu_{X}(x),\quad\forall y\in Y \tag{4}\]
\(C_{k}\) is \(G\)-equivariant: \(\mathcal{L}_{g}\circ C_{k}=C_{k}\circ\mathcal{L}_{g},\;\forall g\in G\) (A.2). Since \(X=Y=G\) are homogeneous spaces of \(G\) we can easily define a bi-invariant kernel by projection \(k(x,y)=\tilde{k}(g_{y}^{-1}x)\) (\(\tilde{k}:G\to\mathbb{R}\)) for any \((x,y)\in Y\times X\), where \(y=g_{y}y_{0}\) for some fixed \(y_{0}\). The kernel is bi-invariant:
\[k(hx,hy)=\tilde{k}((hg_{y})^{-1}hx)=\tilde{k}(g_{y}^{-1}h^{-1}hx)=\tilde{k}(g_{ y}^{-1}x)=k(x,y),\quad\forall h\in G \tag{5}\]
For the case \(Y=G\) and \(g_{y}=y\) (\(y_{0}=e\), the identity of \(G\)) this corresponds to a cross-correlation operator. For a convolution operator, we would analogously define \(k(x,y)=\tilde{k}(g_{x}^{-1}y)\) where \(x=g_{x}x_{0}\) for \(x_{0}\in X\) fixed. In this case we conclude that the _essential_ component needed for equivariance of the operator \(C_{k}\) is the \(G\)-invariant measure \(\text{d}\mu_{X}\), which is the Haar measure when \(X=Y=G\). When \(X\) is a homogeneous space of \(G\), but not necessarily \(G\) itself, we have to work with an operator which takes in a signal in \(L^{p}(X)\) and produces a signal \(L^{p}(G)\) on the group. This encompasses the case of the _lifting_ (cross-correlation) layers, which are commonly employed when working with the regular representation of a group (Cohen & Welling, 2016; Kondor & Trivedi, 2018; Bekkers, 2019). The form of the kernel \(k(\cdot)\) in this case can be derived through an equivariance constraint formulation as in Bekkers (2019); Cohen et al. (2019). It can also be shown (A.2) that an equivariant lifting cross-correlation can be defined as an operator \(C_{k}^{\uparrow}\) such that for any \(f\in L^{p}(X)\):
\[C_{k}^{\uparrow}:f\mapsto C_{k}^{\uparrow}f,\quad C_{k}^{\uparrow}f:g\mapsto \int_{X}f(x)k(g^{-1}x)\delta(g^{-1})\text{d}\mu_{X}(x),\;\forall g\in G \tag{6}\]
where \(\delta:G\to\mathbb{R}^{\times}_{>0}\) records the change of variables by the action of \(G\) (see A.2). Group cross-correlation \(C_{k}\) and convolution \(C_{k}^{\ast}\) operators will be defined for any \(f\in L^{p}(G)\):
\[C_{k}:f\mapsto C_{k}f,\quad C_{k}f:g\mapsto\int_{G}f(\tilde{g})k(g^{-1}\tilde{g })\text{d}\mu_{G}(\tilde{g}),\;\forall g\in G \tag{7}\]
\[C_{k}^{\ast}:f\mapsto C_{k}^{\ast}f,\quad C_{k}^{\ast}f:g\mapsto\int_{G}f(\tilde{ g})k(\tilde{g}^{-1}g)\text{d}\mu_{G}(\tilde{g}),\;\forall g\in G \tag{8}\]
**Lie algebras and the group exponential.** The tangent space at the identity of a Lie group \(G\) is denoted by \(\mathfrak{g}\) and called the Lie algebra of \(G\). A (real) Lie algebra is a vector space (over \(\mathbb{R}\)) equipped with a bilinear map \([\cdot,\cdot]:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\) called the Lie bracket. For every Lie group we can define the Lie group exponential map \(\text{expm}:\mathfrak{g}\to G\), which is a diffeomorphism locally around \(0\in\mathfrak{g}\). Since we are interested in \(\text{GL}(n,\mathbb{R})\) and its subgroups, we can make things more concrete as follows. \(\text{M}_{n}(\mathbb{R})\) equipped with the matrix commutator \([X,Y]=XY-YX\) for \(X,Y\in\text{M}_{n}(\mathbb{R})\) is a Lie algebra, and more precisely it is the Lie algebra of \(\text{GL}(n,\mathbb{R})\). The notation \(\text{gl}(n,\mathbb{R})=\text{M}_{n}(\mathbb{R})\) is used for this identification. For \(G=\text{GL}(n,\mathbb{R})\) with \(\mathfrak{g}=\text{gl}(n,\mathbb{R})\), the group exponential is the matrix exponential \(\text{expm}:\text{gl}(n,\mathbb{R})\rightarrow\text{GL}(n,\mathbb{R})\), with the power series expression \(X\mapsto e^{X}=\sum_{k=0}^{\infty}\frac{1}{k!}X^{k}\).
**Lie algebra parametrization.** To construct an equivariant layer using the Lie algebra of the group, one can define the kernels \(k(\cdot)\) in (7) or (8) as functions which take in Lie algebra elements. This requires a map \(\xi:\mathfrak{g}\to G\) which is (at least locally) a diffeomorphism, with an inverse that can be easily calculated, preferably in closed-form. This allows us to rewrite the kernel \(k:G\rightarrow\mathbb{R}\) as:
\[k(g^{-1}\tilde{g})=k(\xi(\xi^{-1}(g^{-1}\tilde{g})))=\tilde{k}_{\theta}(\xi^{ -1}(g^{-1}\tilde{g})) \tag{9}\]
\(\tilde{k}_{\theta}(\cdot)\) is effectively an approximation of \(k(\cdot)\) of the form \(\tilde{k}_{\theta}\cong k\circ\xi:\mathfrak{g}\rightarrow\mathbb{R}\) with learnable parameters \(\theta\). Using the inverse map \(\xi^{-1}(g^{-1}\tilde{g})\), \(\tilde{k}_{\theta}\) maps the Lie algebra coordinates of the 'offset' group element \(g^{-1}\tilde{g}\) (using the notation from the cross-correlation) to real values corresponding to the evaluation \(k(g^{-1}\tilde{g})\). Our kernels are now maps \(\tilde{k}_{\theta}\circ\xi^{-1}:G\rightarrow\mathbb{R}\), requiring the implementation of \(\xi^{-1}(\cdot)\) and a particular choice for the Lie algebra kernel \(\tilde{k}_{\theta}\). This description encompasses recent proposals for Lie group equivariant layers. In Bekkers (2019) the kernels are implemented by modelling \(\tilde{k}_{\theta}\) via B-splines, while Finzi et al. (2020) choose to parametrize \(\tilde{k}_{\theta}\) as small MLPs. Once \(\tilde{k}_{\theta}\) and \(\xi\) are chosen, we can approximate e.g. the cross-correlation using Monte Carlo integration:
\[\int_{G}f(\tilde{g})\tilde{k}_{\theta}(\xi^{-1}(g^{-1}\tilde{g}))\text{d}\mu _{G}(\tilde{g})\approx\frac{\mu_{G}(G)}{N}\sum_{i=1}^{N}f(\tilde{g}_{i}) \tilde{k}_{\theta}(\xi^{-1}(g^{-1}\tilde{g}_{i})),\;\tilde{g}_{i}\sim\mu_{G} \tag{10}\]
where \(\mu_{G}(G)\) denotes the volume of the integration space \(G\) and \(\tilde{g}_{i}\sim\mu_{G}\) indicates that \(\tilde{g}_{i}\) is sampled (uniformly) with respect to the Haar measure. This allows one to obtain equivariance (in expectation) with respect to \(G\). For compact groups, \(\mu_{G}\) can be normalized such that \(\mu_{G}(G)=1\).
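A minimal NumPy/SciPy sketch of the Monte Carlo estimator (10) for a matrix Lie group, dropping the global volume constant; the matrix logarithm implements \(\xi^{-1}\) and is only valid where logm is defined, which is precisely the limitation discussed next.

```python
import numpy as np
from scipy.linalg import logm

def mc_cross_correlation(f, k_theta, g, haar_samples):
    """Estimate (C_k f)(g) from group elements `haar_samples` drawn
    w.r.t. the Haar measure; k_theta consumes Lie-algebra coordinates."""
    vals = []
    g_inv = np.linalg.inv(g)
    for g_tilde in haar_samples:
        X = logm(g_inv @ g_tilde)           # xi^{-1}(g^{-1} g~), where defined
        vals.append(f(g_tilde) * k_theta(np.real(X)))
    return np.mean(vals, axis=0)
```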
## 4 Lie group decompositions for continuous equivariance
**Limitations of the group exponential.** One common choice for \(\xi\) in (9) is the matrix exponential \(\xi:=\text{expm}\). Given a subgroup \(G\leq\text{GL}(n,\mathbb{R})\) for which expm is surjective, the advantage of this parametrization is given by the fact that every element \(g\in G\) can be expressed as \(g=\text{expm}(X)=e^{X}\) for \(X\in\mathfrak{g}\). One then needs to consider if both \(\xi\) and \(\xi^{-1}\) need to be implemented, and whether these maps are available in closed form. If one does employ the group exponential for \(\xi\), the inverse map \(\xi^{-1}\) is given by the matrix logarithm:
\[\xi^{-1}(g^{-1}\tilde{g})=\text{logm}(g^{-1}\tilde{g}),\quad\text{logm}:G \rightarrow\mathfrak{g} \tag{11}\]
Assuming there exist \(X\) and \(Y\) such that \(e^{X}=g^{-1}\) and \(e^{Y}=\tilde{g}\), (11) can be rewritten as \(\text{logm}(g^{-1}\tilde{g})=\text{logm}(e^{X}e^{Y})\). A key optimization underlying this framework is enabled by employing the BCH formula (A.3), which tells us that for abelian Lie groups \(\text{logm}(e^{X}e^{Y})=X+Y\). This simplifies calculations considerably and allows one to work primarily at the level of the Lie algebra, bypassing the need to calculate and sample the kernel inputs \(g^{-1}\tilde{g}\) at the group level. Considering the affine Lie groups \(\mathbb{R}^{n}\rtimes H,\;H\leq\text{GL}(n,\mathbb{R})\), this simplification can be used for example for the abelian groups \(H=\text{SO}(n)\) and \(H=\mathbb{R}^{\times}(n)\times\text{SO}(n)\), consisting of rotations and scaling. Bekkers (2019); Finzi et al. (2020) primarily work with these groups, and choose \(\xi\) and \(\xi^{-1}\) to be the matrix exponential and logarithm, respectively. For the non-abelian Lie groups \(\text{SL}(n,\mathbb{R})\) or \(\text{GL}^{+}(n,\mathbb{R})\) the non-surjectivity of the exponential map limits the applicability of the matrix logarithm outside of a neighborhood around the identity (A.3). The class of equivariant networks that can be implemented with this framework is then firstly limited by the 'parametrization' map \(\xi:\mathfrak{g}\to G\), and its inverse \(\xi^{-1}\), motivating the search for an alternative solution. Another key limitation is that for (10) to realize an equivariant estimator when numerically approximating the convolution/cross-correlation integral, sampling needs to be realised with respect to the Haar measure of the group \(G\).
Techniques for sampling with respect to the Haar measure on the groups \(\text{SO}(n)\) or \(\mathbb{R}^{\times}(n)\times\text{SO}(n)\) are known, and generally reduce to working with uniform measures on Euclidean spaces or unit quaternions in the case of \(\text{SO}(3)\). These concerns are also discussed by MacDonald et al. (2022), whose work is closest to our own. While their approach can be applied to any matrix Lie group, in practice their method requires that all group elements \(\tilde{g}^{-1}g\) be precalculated and kept in memory before the forward pass, greatly limiting the scalability of the method due to its exponential memory requirements. We aim to address these limitations, while not restricting the class of 'Lie algebra kernels' \(k_{\theta}:\mathfrak{g}\to\mathbb{R}\) that can be used. Under our framework one should be able to employ any \(k_{\theta}\) that uses the coordinates of tangent vectors in \(\mathfrak{g}\) expressed in some basis.
### Lie group decomposition theory
We exploit the fact that the groups \(\text{GL}^{+}(n,\mathbb{R})\) and \(\text{SL}(n,\mathbb{R})\) have an underlying product structure that allows them to be decomposed into subgroups and submanifolds which are easier to work with individually. More precisely, \(G\in\{\text{GL}^{+}(n,\mathbb{R}),\text{SL}(n,\mathbb{R})\}\) can be decomposed as a product \(P\times H\), where \(H\leq G\) is the maximal compact subgroup of \(G\) and \(P\subseteq G\) is a submanifold which is diffeomorphic to \(\mathbb{R}^{k}\), for some \(k\geq 0\), and we have a diffeomorphism \(\varphi:P\times H\to G\). It can be shown that if the map \(\varphi\) is chosen correctly the Haar measure \(\mu_{G}\) can be written as the pushforward measure \(\varphi_{*}(\mu_{P}\otimes\mu_{H})\), where \(\mu_{P}\) is a \(G\)-invariant measure on \(P\) and \(\mu_{H}\) is the Haar measure on \(H\).
**Factorizing the Haar measure.** Suppose \(G\) is a locally compact group of interest (e.g. \(\text{GL}^{+}(n,\mathbb{R})\)), with (left) Haar measure \(\mu_{G}\). Assume there exist a set of subspaces or subgroups \(P\subseteq G\), \(K\subseteq G\), such that \(G=PK\), and a homeomorphism \(\varphi:P\times K\to G\). Further assume that \(\mu_{P}\) and \(\mu_{K}\) are (left) \(G\)-invariant Radon measures on the corresponding spaces. We look to express (up to multiplicative coefficients) the Haar measure \(\mu_{G}\) as the pushforward of the product measure \(\mu_{P}\otimes\mu_{K}\) under the map \(\varphi\). This allows for the following change of variables for any \(f\in L^{1}(G)\):
\[\int_{G}f(g)\text{d}\mu_{G}(g)=\int_{P\times K}f(\varphi(p,k))\text{d}(\mu_{P} \otimes\mu_{K})(p,k)=\int_{P}\int_{K}f(\varphi(p,k))\text{d}\mu_{K}(k)\text{ d}\mu_{P}(p) \tag{12}\]
In the context of Monte Carlo simulation this will enable us to produce random samples distributed according to the measure \(\mu_{G}\) by sampling on the _independent_ factor spaces \(P\) and \(K\) and constructing a sample on \(P\times K\) and respectively on \(G\) using the map \(\varphi\). The space \(P\) will either be another closed subgroup, or a measurable subset \(P\subseteq G\) that is homeomorphic to the quotient space \(G/K\). In particular, if \(P\) is not a subgroup, we will focus on the case where \(P\) is a homogeneous space of \(G\) with stabilizer \(K\) such that \(P\cong G/K\). When the left and right Haar measures of a group coincide, the group is called _unimodular_. The groups \(\text{GL}^{+}(n,\mathbb{R})\), \(\text{SL}(n,\mathbb{R})\) are unimodular; however, this is not true for all affine groups \(\mathbb{R}^{n}\rtimes H\). For groups which are volume-preserving, this is not as much of an issue in practice. However, \(\text{GL}^{+}(n,\mathbb{R})\) is not volume-preserving, and we also desire that our framework be general enough to deal with the non-unimodular case as well. If \(G\) is not unimodular and \(\mu_{G}\) is its left Haar measure, there exists a continuous group homomorphism \(\Delta_{G}:G\to\mathbb{R}^{\times}_{>0}\), called the _modular function_ of \(G\), which records the degree to which \(\mu_{G}\) fails to be right-invariant. We now have the tools necessary to record two possible integral decomposition methods.
**Theorem 4.1**.: _(1) Let \(G\) be a locally compact group, \(H\leq G\) a closed subgroup, with left Haar measures \(\mu_{G}\) and \(\mu_{H}\) respectively. There is a \(G\)-invariant Radon measure \(\mu_{G/H}\) on \(G/H\) if and only if \(\left.\Delta_{G}\right|_{H}=\Delta_{H}\). The measure \(\mu_{G/H}\) is unique up to a scalar factor and if suitably normalized:_
\[\int_{G}f(g)\text{d}\mu_{G}(g)=\int_{G/H}\int_{H}f(gh)\text{d}\mu_{H}(h)\text{ d}\mu_{G/H}(gH),\ \forall f\in L^{1}(G) \tag{13}\]
_(2) Let \(P\leq G\), \(K\leq G\) closed subgroups such that \(G=PK\). Assume that \(P\cap K\) is compact, and \(Z_{0}\) denotes the stabilizer of the transitive left action of \(P\times K\) on \(G\) given by \((p,k)\cdot g=pgk^{-1}\), for any \((p,k)\in P\times K\) and \(g\in G\). Let \(G\), \(P\) and \(K\) be \(\sigma\)-compact (which holds for matrix Lie groups), \(\mu_{G}\), \(\mu_{P}\) and \(\mu_{K}\) left Haar measures on \(G\), \(P\), and \(K\) respectively and \(\Delta_{G}|_{K}=\Lambda\) the modular function of \(G\) restricted to \(K\). Then \(\mu_{G}\) is given by \(\mu_{G}=\pi_{*}(\mu_{P}\otimes\Lambda^{-1}\mu_{K})\), where \(\pi:P\times K\to(P\times K)/Z_{0}\) is the canonical projection. In integral form we have:_
\[\int_{G}f(g)\text{d}\mu_{G}(g)=\int_{P}\int_{K}f(pk)\frac{\Delta_{G}(k)}{\Delta_{ K}(k)}\text{d}\mu_{K}(k)\text{d}\mu_{P}(p),\quad\forall f\in L^{1}(G) \tag{14}\]
Proof.: Folland (2016, Theorem 2.51) and Wijsman (1990, Proposition 7.6.1).
**Affine groups.** While these results are very abstract, when going to the Lie group setting, we can already deal with semi-direct products \(G=N\rtimes H\), where \(N,H\) are subgroups of \(G\), and \(N\) is normal. The modular function on \(N\rtimes H\) is \(\Delta_{N\rtimes H}(n,h)=\Delta_{N}(n)\Delta_{H}(h)\delta(h)^{-1}\) (Kaniuth & Taylor, 2013). The term \(\delta:H\to\mathbb{R}^{\times}_{>0}\) records the effect of the action of \(H\) on \(N\), and it coincides with the term \(\delta(\cdot)\) used in the lifting layer definition (6). Making things concrete, take the affine groups \(G=\mathbb{R}^{n}\rtimes H\), \(H\leq\text{GL}(n,\mathbb{R})\), which are defined under the semi-direct product structure by:
\[G=\mathbb{R}^{n}\rtimes H=\{(x,A)\mid x\in\mathbb{R}^{n},\ A\in H\} \tag{15}\]
\(H\) acts on \(\mathbb{R}^{n}\) by matrix multiplication and for any \((x,A),(y,B)\in G\), the product and inverse are:
\[(x,A)(y,B)=(x+Ay,AB),\quad(x,A)^{-1}=(-A^{-1}x,A^{-1}) \tag{16}\]
Elements of \(\mathbb{R}^{n}\) are concretely represented as column vectors. Viewing \((\mathbb{R}^{n},+)\) as the additive group, we have \(\delta:H\to\mathbb{R}^{\times}_{>0}\) given by \(\delta(A)=|\text{det}(A)|\) for any \(A\in H\). Applying Thm. 4.1 gives:
\[\int_{G}f(g)\;\text{d}\mu_{G}(g)=\int_{H}\int_{\mathbb{R}^{n}}f((x,A))\;\frac{\text{d}x\text{d}\mu_{H}(A)}{|\text{det}(A)|},\;\forall f\in C_{c}(G) \tag{17}\]
\(f((x,A))\) will be denoted by \(f(x,A)\) going forward. Expressing the cross-correlation \(C_{k}f\) in the product space parametrization we have for \(f\in L^{2}(\mathbb{R}^{n}\rtimes H)\) and \((x,A)\in\mathbb{R}^{n}\rtimes H\):
\[C_{k}f:(x,A)\mapsto\int_{H}\int_{\mathbb{R}^{n}}f(\tilde{x},\tilde{A})k((x,A) ^{-1}(\tilde{x},\tilde{A}))\delta(\tilde{A}^{-1})\text{d}\tilde{x}\text{d}\mu _{H}(\tilde{A}) \tag{18}\]
**Manifold splitting via Cartan/Polar decomposition.** Let \(\text{Sym}(n,\mathbb{R})\) be the vector space of \(n\times n\) real symmetric matrices and \(\text{Pos}(n,\mathbb{R})\) the subset of \(\text{Sym}(n,\mathbb{R})\) of symmetric positive definite (SPD) matrices. Denote by \(\text{SPos}(n,\mathbb{R})\) the subset of \(\text{Pos}(n,\mathbb{R})\) consisting of SPD matrices with unit determinant, and by \(\text{Sym}_{0}(n,\mathbb{R})\) the subspace of \(\text{Sym}(n,\mathbb{R})\) of traceless real symmetric matrices. Any matrix \(A\in\text{GL}(n,\mathbb{R})\) can be uniquely decomposed via the left polar decomposition as \(A=PR\), where \(P\in\text{Pos}(n,\mathbb{R})\) and \(R\in\text{O}(n)\) (A.8). The factors of this decomposition are uniquely determined, and we have a bijection \(\text{GL}(n,\mathbb{R})\to\text{Pos}(n,\mathbb{R})\times\text{O}(n)\) given by:
\[A\mapsto(\sqrt{AA^{T}},\sqrt{AA^{T}}^{-1}A),\quad\forall A\in\text{GL}(n, \mathbb{R}) \tag{19}\]
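A short numpy/scipy sketch of the bijection (19); `polar_split` is an illustrative name and the computation follows the formula directly:

```python
import numpy as np
from scipy.linalg import sqrtm

def polar_split(A):
    """Left polar decomposition A = P R with P SPD and R orthogonal, as in (19)."""
    P = sqrtm(A @ A.T).real      # unique SPD square root of A A^T
    R = np.linalg.inv(P) @ A     # R = P^{-1} A lies in O(n)
    return P, R

A = np.array([[3.0, 1.0], [0.0, 2.0]])    # det(A) > 0, so A is in GL^+(2, R)
P, R = polar_split(A)
assert np.allclose(P @ R, A)              # the factors recompose A
assert np.allclose(R @ R.T, np.eye(2))    # R is orthogonal
```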
For the reader unfamiliar with Lie group structure theory, the following results can simply be understood in terms of matrix factorizations commonly used in numerical linear algebra. The polar decomposition splits the manifold \(\text{GL}^{+}(n,\mathbb{R})\) into the product \(\text{Pos}(n,\mathbb{R})\times\text{SO}(n)\), and \(\text{SL}(n,\mathbb{R})\) into \(\text{SPos}(n,\mathbb{R})\times\text{SO}(n)\). We use the notation \(G\to M\times H\) to cover both cases. This decomposition can be generalized, as the spaces \(\text{Pos}(n,\mathbb{R})=\text{GL}^{+}(n,\mathbb{R})/\text{SO}(n)\) and \(\text{SPos}(n,\mathbb{R})=\text{SL}(n,\mathbb{R})/\text{SO}(n)\) are actually _symmetric spaces_, and a _Cartan decomposition_ is available in this case (A.8). The Cartan decomposition tells us how to decompose not only at the level of the Lie group, but also at the level of the Lie algebra. In fact, using this decomposition we can also obtain a factorization of the measure on these groups. Let \((G/H,M,\mathfrak{m})\) define our 'Lie group data', corresponding to \((\text{GL}^{+}(n,\mathbb{R})/\text{SO}(n),\text{Pos}(n,\mathbb{R}),\text{Sym} (n,\mathbb{R}))\) or \((\text{SL}(n,\mathbb{R})/\text{SO}(n),\text{SPos}(n,\mathbb{R}),\text{Sym}_{0 }(n,\mathbb{R}))\).
**Theorem 4.2**.: _Let \((G/H,M,\mathfrak{m})\) be as above, and denote by \(\mathfrak{g}\), \(\mathfrak{h}\) the Lie algebras of \(G\) and \(H\)._
1. _The matrix exponential and logarithm are mutually inverse diffeomorphisms between_ \(\mathfrak{m}\) _and_ \(M\)_. For any_ \(P\in M\) _and_ \(\alpha\in\mathbb{R}\)_, the power map_ \(P\mapsto P^{\alpha}\) _is smooth and can be expressed as:_ \[P^{\alpha}=\text{expm}(\alpha\text{logm}(P)),\quad\forall P\in\text{Pos}(n, \mathbb{R})\] (20)
2. \(G\cong M\times H\) _and_ \(G\cong\mathfrak{m}\times H\)_. We have group-level diffeomorphisms:_ \[\chi:M\times H\to G,\quad\chi:(P,R)\mapsto PR\] (21) \[\Phi:\mathfrak{m}\times H\to G,\quad\Phi:(X,R)\mapsto\text{expm}(X)R=e^{X}R\] (22)
3. _The above maps can be inverted in closed-form:_ \[\chi^{-1}:G\to M\times H,\;\chi^{-1}:A\mapsto(\sqrt{AA^{T}},\sqrt{AA^{T}}^{- 1}A)\] (23) \[\Phi^{-1}:G\to\mathfrak{m}\times H,\quad\Phi^{-1}:A\mapsto(\frac{1}{2} \text{logm}(AA^{T}),\text{expm}(-\frac{1}{2}\text{logm}(AA^{T}))A)\] (24)
See (A.9) for proofs and references. At the level of the Lie algebra, we have the decomposition \(\mathfrak{gl}(n,\mathbb{R})=\mathfrak{so}(n)\oplus\text{Sym}(n,\mathbb{R})\). The Lie algebra of \(\text{SL}(n,\mathbb{R})\) is \(\mathfrak{sl}(n,\mathbb{R})=\{X\in\mathfrak{gl}(n,\mathbb{R})\mid\text{tr}(X)=0\}\). It decomposes similarly \(\mathfrak{sl}(n,\mathbb{R})=\mathfrak{so}(n)\oplus\text{Sym}_{0}(n,\mathbb{R})\). Then \(\mathfrak{h}=\mathfrak{so}(n)\) with \(\mathfrak{m}=\text{Sym}(n,\mathbb{R})\) if \(G=\text{GL}^{+}(n,\mathbb{R})\) and \(\mathfrak{m}=\text{Sym}_{0}(n,\mathbb{R})\) if \(G=\text{SL}(n,\mathbb{R})\).
### A parametrization based on the Cartan Decomposition
Consider again the notation \((G/H,M,\mathfrak{m})\) as in Theorem 4.2 (\(G=\text{GL}^{+}(n,\mathbb{R})\) or \(G=\text{SL}(n,\mathbb{R})\)).
**Concrete integral decompositions.** From Theorem 4.2 and the fact that SPD matrices have a unique SPD square root, we actually have equivalent decompositions for \(A\in G\) as \(A=PR\) or \(A=S^{1/2}R\) for \(S,P\in M\), \(R\in H\) and \(P=S^{1/2}\). For \(\text{GL}(n,\mathbb{R})\), the decomposition \(A=S^{1/2}R\) induces a factorization of the Haar measure of \(\text{GL}(n,\mathbb{R})\) as a product of invariant measures on \(\text{Pos}(n,\mathbb{R})\) (shortened \(\text{Pos}(n)\)) and \(\text{O}(n)\). Let \(\mu_{\text{Pos}(n)}\) denote the \(\text{GL}(n,\mathbb{R})\)-invariant measure on \(\text{Pos}(n)\).
**Theorem 4.3**.: _Denote \(G=\text{GL}(n,\mathbb{R})\), \(H=\text{O}(n)\), and let \(\mu_{G}\) be the Haar measure on \(G\) and \(\mu_{H}\) the Haar measure on \(H\) normalized by \(\text{\rm Vol}(H)=1\). For \(A\in G\), under the decomposition \(A=S^{1/2}R\), \(S\in\text{Pos}(n)\), \(R\in H\), the measure on \(G\) splits as \(d\mu_{G}(A)=\beta_{n}d\mu_{\text{Pos}(n)}(S)d\mu_{H}(R)\), where \(\beta_{n}=\frac{\text{\rm Vol}(\mathfrak{m}(n))}{2}\) is a normalizing constant. Restricting to \(G=\text{\rm GL}^{+}(n,\mathbb{R})\) and \(H=\text{\rm SO}(n)\) and ignoring constants, we have:_
\[f\mapsto\int_{G}f(A)\text{\rm d}\mu_{G}(A)=\int_{\text{Pos}(n)}\int_{H}f(S^{1/ 2}R)\text{\rm d}\mu_{H}(R)\text{\rm d}\mu_{\text{Pos}(n)}(S),\;\forall f\in C _{c}(G) \tag{25}\]
The Haar measure of \(\text{\rm GL}(n,\mathbb{R})\) is \(d\mu_{\text{GL}(n,\mathbb{R})}(A)=|\text{\rm det}(A)|^{-n}dA\), with \(dA\) the Lebesgue measure on \(\mathbb{R}^{n^{2}}\). We now describe how to sample on the individual factors to obtain \(\text{\rm GL}(n,\mathbb{R})\) samples.
**Theorem 4.4**.: _If a random matrix \(A\in\text{\rm GL}(n,\mathbb{R})\) has a left-\(\text{\rm O}(n)\) invariant density function relative to \(|AA^{T}|^{-n/2}dA\), then \((AA^{T})^{1/2}=S^{1/2}\) and \(R=(AA^{T})^{-1/2}A\) are independent random matrices and \(R\) has a uniform probability distribution on \(\text{\rm O}(n)\). The uniform distribution on \(\text{\rm O}(n)\) will be the normalized Haar measure \(\mu_{\text{\rm O}(n)}\). Conversely, if \(S\in\text{\rm Pos}(n)\) has a density function \(f:\text{\rm Pos}(n)\to\mathbb{R}_{\geq 0}\) relative to \(\mu_{\text{Pos}(n)}\) and \(R\in\text{\rm O}(n)\) is uniformly distributed with respect to the Haar measure \(\mu_{\text{\rm O}(n)}\), then \(A=S^{1/2}R\) has a density function \(\beta_{n}^{-1}f(AA^{T})|\text{\rm det}(A)|^{-n}\) relative to \(dA\)._
Theorems 4.3 and 4.4 are known results that appear in the random matrix theory literature, but have not seen recent application in the context of deep learning. In (A.10) we provide more details and references. Using the decomposition \(A=S^{1/2}R\) invariant integration problems on \(G\) can be transferred to the product space \(M\times H\), and we can express up to normalization the invariant measure \(\mu_{G}\) as \(\varphi_{*}(\mu_{M}\otimes\mu_{H})\). To construct samples \(\{\mathbf{A}_{1},\dots,\mathbf{A}_{n}\}\sim\mu_{G}\) one produces samples \(\{\mathbf{R}_{1},\dots,\mathbf{R}_{n}\}\sim\mu_{H}\) where \(\mu_{H}\) will be the uniform distribution on \(H\), and samples \(\{\mathbf{M}_{1},\dots,\mathbf{M}_{n}\}\sim\mu_{M}\). Then \(\mu_{G}\)-distributed random values are obtained by \(\{\mathbf{A}_{1},\dots,\mathbf{A}_{n}\}=\{\varphi(\mathbf{M}_{1},\mathbf{R}_ {1}),\dots,\varphi(\mathbf{M}_{n},\mathbf{R}_{n})\}\), where again \(\varphi:M\times H\to G\) is given by \(\varphi:(S,R)\mapsto S^{1/2}R\).
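A compact numpy/scipy sketch of this sampling pipeline; the Gaussian-on-\(\mathfrak{m}\) choice below is purely illustrative (a convenient density, not the invariant measure \(\mu_{M}\) itself), while the \(\text{SO}(n)\) factor uses the standard QR trick for Haar-uniform orthogonal samples:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def sample_SO_n(n, rng):
    """Haar-uniform sample on SO(n) via QR of a Gaussian matrix."""
    Z = rng.standard_normal((n, n))
    Q, Rq = np.linalg.qr(Z)
    Q = Q @ np.diag(np.sign(np.diag(Rq)))  # sign correction -> Haar on O(n)
    if np.linalg.det(Q) < 0:               # flip one column to land in SO(n)
        Q[:, 0] *= -1.0
    return Q

def sample_G(n, rng, scale=0.5):
    """Sample A = phi(S, R) = S^{1/2} R: S = expm(X) for a random symmetric X
    (illustrative choice of density on Pos(n)), R Haar-uniform on SO(n)."""
    Z = rng.standard_normal((n, n)) * scale
    X = 0.5 * (Z + Z.T)                    # random symmetric matrix in Sym(n)
    S = expm(X)                            # SPD sample on Pos(n)
    R = sample_SO_n(n, rng)
    return sqrtm(S).real @ R

rng = np.random.default_rng(0)
A = sample_G(2, rng)
assert np.linalg.det(A) > 0                # A lies in GL^+(2, R)
```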
**Mapping to the Lie algebra and back.** Any \(A\in G\) can be expressed uniquely as \(A=e^{X}R\) for \(X\in\mathfrak{m}\) and \(R\in H\). Since \(H=\text{SO}(n)\) in both cases, the fact that \(\text{expm}:\mathfrak{so}(n)\to\text{SO}(n)\) is surjective allows us to write \(A=e^{X}e^{Y}\), \(Y\in\mathfrak{so}(n)\). The factors \(X\) and \(R=e^{Y}\) are obtained using \(\Phi^{-1}\) (24). Then, by taking the principal branch of the matrix logarithm on \(H=\text{SO}(n)\), \(Y=\text{logm}(R)\). A map \(\xi^{-1}:G\to\mathfrak{g}\) as described in (9) and (10) is constructed as \(\xi^{-1}=(\text{id}_{\mathfrak{m}}\times\text{logm})\circ\Phi^{-1}\). More precisely, for any \(A=e^{X}e^{Y}\in G\), using \(\xi^{-1}\) we obtain the tangent vectors \((Y,X)\in\mathfrak{so}(n)\times\mathfrak{m}\), and since \(\mathfrak{g}=\mathfrak{so}(n)\oplus\mathfrak{m}\) we have a unique \(Z=X+Y\in\mathfrak{g}\). Details are given in (A.11).
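Concretely, a sketch of the closed-form inversion (24) and the resulting tangent-space coordinates, assuming scipy's principal matrix logarithm:

```python
import numpy as np
from scipy.linalg import expm, logm

def xi_inv(A):
    """Map A = e^X e^Y in G to the tangent pair (X, Y) in m x so(n), per (24)."""
    X = 0.5 * logm(A @ A.T).real   # symmetric factor: X in m
    R = expm(-X) @ A               # rotation factor: R = e^{-X} A in SO(n)
    Y = logm(R).real               # principal log: Y in so(n)
    return X, Y

A = np.array([[1.5, 0.3], [0.1, 0.9]])    # det(A) > 0
X, Y = xi_inv(A)
assert np.allclose(X, X.T)                 # X is symmetric
assert np.allclose(Y, -Y.T, atol=1e-8)     # Y is skew-symmetric
assert np.allclose(expm(X) @ expm(Y), A)   # round trip recovers A
```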
Define \(\tilde{K}_{\theta}\coloneqq k_{\theta}\circ\xi^{-1}:G\to\mathbb{R}\) as our Lie algebra kernel. A Monte Carlo approximation of a cross-correlation operator \(C_{k}:L^{2}(G)\to L^{2}(G)\) as in (10) will be of the form:
\[C_{k}f:g\mapsto\frac{1}{N}\sum_{i=1}^{N}f(\tilde{g}_{i})\tilde{K}_{\theta}( \tilde{g}_{i}^{-1}g),\;\tilde{g}_{i}\sim\mu_{G},\quad\forall g\in G \tag{26}\]
Layer definitionsEvery element \((x,A)\) of \(\mathbb{R}^{n}\rtimes H\), can be uniquely decomposed as \((x,I)(0,A)\), with \(I\) the \(n\times n\) identity matrix. One can use the fact that \(\mathcal{L}_{(x,A)}=\mathcal{L}_{(x,I)}\mathcal{L}_{(0,A)}\) to write:
\[k((x,A)^{-1}(\tilde{x},\tilde{A}))=\mathcal{L}_{(x,A)}k(\tilde{x},\tilde{A})= \mathcal{L}_{(x,I)}[\mathcal{L}_{(0,A)}k(\tilde{x},\tilde{A})]=\mathcal{L}_{x}[k (A^{-1}\tilde{x},A^{-1}\tilde{A})] \tag{27}\]
An efficient implementation of a convolutional layer can be realised in practice for \(n\in\{2,3\}\) by first obtaining the transformed kernel \(k(A^{-1}\tilde{x},A^{-1}\tilde{A})\) and then applying the translation \(\mathcal{L}_{x}\) using an efficient convolution routine, as done for example in Cohen & Welling (2016); Bekkers (2019).
We can approximate a continuous convolution (Finzi et al., 2020) by sampling the translation factor in a uniform grid of coordinates \(\tilde{x}\sim[-1,1]^{n}\subset\mathbb{R}^{n}\) as the parametrization \(\xi:\mathbb{R}^{n}\times\mathfrak{gl}(n,\mathbb{R})\to G\) is the identity map for the first factor. Our lifting layers are:
\[C_{k}^{\dagger}f:(x,A) \mapsto\int_{\mathbb{R}^{n}}f(\tilde{x})\mathcal{L}_{x}k(A^{-1} \tilde{x})\delta(A^{-1})\text{d}\tilde{x} \tag{28}\] \[\approx\frac{1}{N}\sum_{i=1}^{N}f(\tilde{x}_{i})\mathcal{L}_{x}[k_{\theta}(A^{-1}\tilde{x}_{i})\delta(A^{-1})],\ \tilde{x}_{i}\sim[-1,1]^{n} \tag{29}\]
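To make (29) concrete, a minimal numpy sketch for \(n=2\), using \(\mathcal{L}_{x}k(\tilde{x})=k(\tilde{x}-x)\); here `f` and `k_theta` are illustrative stand-ins for the input signal and the learned kernel:

```python
import numpy as np

def lifting_layer_mc(f, k_theta, x, A, num_samples=1024, rng=None):
    """MC estimate of [C_k^† f](x, A) per (29)."""
    rng = rng or np.random.default_rng(0)
    x_tilde = rng.uniform(-1.0, 1.0, size=(num_samples, 2))  # x̃_i ~ U([-1,1]^2)
    A_inv = np.linalg.inv(A)
    shifted = (x_tilde - x) @ A_inv.T     # rows are A^{-1}(x̃_i - x)
    delta = abs(np.linalg.det(A_inv))     # δ(A^{-1}) = |det(A^{-1})|
    # the domain-volume constant is left absorbed into the kernel scale, as in (29)
    return (f(x_tilde) * k_theta(shifted) * delta).mean()

# toy signal and kernel, purely for illustration
f = lambda p: np.exp(-(p ** 2).sum(axis=1))
k_theta = lambda p: np.exp(-0.5 * (p ** 2).sum(axis=1))
out = lifting_layer_mc(f, k_theta, x=np.zeros(2), A=np.eye(2))
```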
For the non-lifting layers, starting from (18), denoting \(\text{d}\tilde{A}=\text{d}\mu_{G}(\tilde{A})\) and applying (27) we have:
\[[C_{k}f](x,A)=\int_{\mathbb{R}^{n}}\int_{G}f(\tilde{x},\tilde{A})\mathcal{L}_ {x}k(A^{-1}\tilde{x},A^{-1}\tilde{A})\delta(\tilde{A}^{-1})\text{d}\tilde{x} \text{d}\tilde{A} \tag{30}\]
Using Theorem 4.3, denote the invariant measures \(\mu_{M}\) and \(\mu_{H}\) by \(\text{d}S\) and \(\text{d}R\), we obtain:
\[[C_{k}f](x,A)=\beta_{n}\int_{\mathbb{R}^{n}}\int_{H}\int_{M}f(\tilde{x},S^{1/2 }R)\mathcal{L}_{x}k(A^{-1}\tilde{x},A^{-1}S^{1/2}R)\delta(S^{-1/2})\text{d}S \text{d}R\text{d}\tilde{x} \tag{31}\]
The kernel in (26) is now of the form \(K_{\theta}:\mathbb{R}^{n}\rtimes G\to\mathbb{R}\), giving us:
\[[C_{k}f](x,A)\approx\frac{V}{N}\sum_{i=1}^{N}f(\tilde{x_{i}},S_{i}^{1/2}R_{i })\mathcal{L}_{x}[K_{\theta}(A^{-1}\tilde{x_{i}},\xi^{-1}(A^{-1}S_{i}^{1/2}R_ {i}))\delta(S_{i}^{-1/2})] \tag{32}\]
where \(\tilde{x}_{i}\sim[-1,1]^{n}\), \((S_{i},R_{i})\sim(\mu_{M}\otimes\mu_{H})\), with \(R_{i}\) sampled uniformly with respect to \(\mu_{H}\). \(V\) records both the volume of the integration space from the MC approximation as well as the constant \(\beta_{n}\).
## 5 Experiments
For all experiments we use a ResNet-style architecture, replacing convolutional layers with cross-correlations that are equivariant (in expectation) with respect to the groups \(\mathbb{R}^{2}\rtimes\text{GL}^{+}(2,\mathbb{R})\) and \(\mathbb{R}^{2}\rtimes\text{SL}(2,\mathbb{R})\). Details regarding the network architecture and training are given in Appendix B.
**Affine-transformation invariance.** We evaluate our model on a benchmark affine-invariant image classification task employing the affNIST dataset\(^{2}\). The main works we compare with are the affine-equivariant model of MacDonald et al. (2022) and the Capsule Network of De Sousa Ribeiro et al. (2020), which are state of the art for this task. The experimental setup involves training on the standard set of \(50000\) non-transformed MNIST images (padded to \(40\times 40\)) and evaluating on the affNIST test set, which consists of \(320000\) affine-transformed MNIST images. The model never sees the transformed affNIST images during training, and we do not use any data augmentation techniques. In this case, robustness with respect to the larger groups of the affine family of transformations is needed. For a fair comparison, we roughly equalize the number of parameters with the referenced models.
Footnote 2: [http://www.cs.toronto.edu/tijmen/affNIST](http://www.cs.toronto.edu/tijmen/affNIST)
| Model | affNIST Acc. | MNIST Acc. | Parameters | MC. Samples |
| :--- | :--- | :--- | :--- | :--- |
| \(\mathbb{R}^{2}\rtimes\text{SL}(2,\mathbb{R})\) | \(97.9\,(\pm 0.25)\) | \(99.61\,(\pm 0.1)\) | \(370\)K | \(10\) |
| RU CapsNet (De Sousa Ribeiro et al., 2020) | \(97.69\) | \(99.72\) | \(>580\)K | — |
| \(\mathbb{R}^{2}\rtimes\text{GL}^{+}(2,\mathbb{R})\) | \(97.4\,(\pm 0.2)\) | \(99.5\,(\pm 0.1)\) | \(395\)K | \(10\) |
| affConv (MacDonald et al., 2022) | \(95.08\) | \(98.7\) | \(374\)K | \(100\) |
| affine CapsNet (Gu and Tresp, 2020) | \(93.21\) | \(99.23\) | — | — |
| Equivariant CapsNet (Lenssen et al., 2018) | \(89.1\) | \(98.42\) | \(235\)K | — |

Table 1: affNIST classification accuracy, after training on MNIST.
Table 1 reports the average test performance of our model at the final epoch, over five training runs with different initialisations. The results and parameter counts for Gu and Tresp (2020); De Sousa Ribeiro et al. (2020); MacDonald et al. (2022) are taken from (MacDonald et al., 2022, Figure 2) and (De Sousa Ribeiro et al., 2020, Table 3). We observe that our equivariant models are robust and generalize well, with the \(\mathbb{R}^{2}\rtimes\text{SL}(2,\mathbb{R})\) model outperforming all previous equivariant models and Capsule Networks. Note that, compared to MacDonald et al. (2022), our sampling scheme requires \(10\) times fewer samples to realize an accurate Monte Carlo approximation of the convolution. The \(\mathbb{R}^{2}\rtimes\text{GL}^{+}(2,\mathbb{R})\) model performs slightly worse than the volume-preserving affine group \(\mathbb{R}^{2}\rtimes\text{SL}(2,\mathbb{R})\). This can be explained by considering that the affNIST dataset contains only a small degree of scaling.
**Homography transformations.** We further evaluate the same model on the homNIST dataset of MacDonald et al. (2022) and report the results in Table 2. The setup is identical to the affNIST case, with the images now being transformed by random homographies. We observe a similar degree of robustness in this case, again outperforming previous methods applied to this task.
## 6 Conclusion
We have built a framework for constructing equivariant networks when working with matrix Lie groups that are not necessarily compact or abelian. Using the structure theory of semisimple/reductive Lie groups, we have shown one possible avenue for constructing invariant/equivariant (convolutional) layers, relying primarily on tools that allow us to decompose larger groups into smaller ones. In our preliminary experiments, the equivariant models were shown to be more robust and to generalize better out-of-distribution than previous proposals on tasks where the symmetry group of relevance is one of \(\text{GL}^{+}(n,\mathbb{R})\) or \(\text{SL}(n,\mathbb{R})\).
Our contribution is largely theoretical, providing a framework by which equivariance/invariance to complex symmetry groups can be obtained. Further experiments will look to validate the applicability of our method to other data modalities, such as point clouds or molecules, as in Finzi et al. (2020).
While we have primarily focused on convolution operators, we remark that the tools explored here are immediately applicable to closely-related machine learning models which employ Lie groups and their regular representation for invariance/equivariance. For example, the 'LieTransformer' architecture proposed in Hutchinson et al. (2021) opts to replace convolutional layers with self-attention layers, while still using the Lie algebra of the group as a mechanism for incorporating positional information. They face the same challenge in that their parametrization depends on mapping elements back and forth between a chosen Lie group and its Lie algebra, and they require a mechanism for sampling on the desired group. The methods presented here are directly applicable in this case. Future work will explore expanding the class of Lie groups employed by such models using the tools presented here.
Another potential avenue to explore is the applicability of the presented tools to the problem of 'partial' and 'learned' invariance/equivariance (Benton et al., 2020). The sampling mechanism of the product decomposition allows one to specify a probability distribution for the non-orthogonal factor, which could be learned from data.
| Model | homNIST Acc. | MC. Samples |
| :--- | :--- | :--- |
| \(\mathbb{R}^{2}\rtimes\text{SL}(2,\mathbb{R})\) | \(98.1\,(\pm 0.15)\) | \(10\) |
| \(\mathbb{R}^{2}\rtimes\text{GL}^{+}(2,\mathbb{R})\) | \(97.71\,(\pm 0.1)\) | \(10\) |
| affConv (MacDonald et al., 2022) | \(95.71\) | \(100\) |

Table 2: homNIST classification. |
2310.05313 | Accelerating Deep Neural Network guided MCTS using Adaptive Parallelism | Deep Neural Network guided Monte-Carlo Tree Search (DNN-MCTS) is a powerful
class of AI algorithms. In DNN-MCTS, a Deep Neural Network model is trained
collaboratively with a dynamic Monte-Carlo search tree to guide the agent
towards actions that yields the highest returns. While the DNN operations are
highly parallelizable, the search tree operations involved in MCTS are
sequential and often become the system bottleneck. Existing MCTS parallel
schemes on shared-memory multi-core CPU platforms either exploit data
parallelism but sacrifice memory access latency, or take advantage of local
cache for low-latency memory accesses but constrain the tree search to a single
thread. In this work, we analyze the tradeoff of these parallel schemes and
develop performance models for both parallel schemes based on the application
and hardware parameters. We propose a novel implementation that addresses the
tradeoff by adaptively choosing the optimal parallel scheme for the MCTS
component on the CPU. Furthermore, we propose an efficient method for searching
the optimal communication batch size as the MCTS component on the CPU
interfaces with DNN operations offloaded to an accelerator (GPU). Using a
representative DNN-MCTS algorithm - Alphazero on board game benchmarks, we show
that the parallel framework is able to adaptively generate the best-performing
parallel implementation, leading to a range of $1.5\times - 3\times$ speedup
compared with the baseline methods on CPU and CPU-GPU platforms. | Yuan Meng, Qian Wang, Tianxin Zu, Viktor Prasanna | 2023-10-09T00:02:31Z | http://arxiv.org/abs/2310.05313v1 | # Accelerating Deep Neural Network guided MCTS using Adaptive Parallelism
###### Abstract.
Deep Neural Network guided Monte-Carlo Tree Search (DNN-MCTS) is a powerful class of AI algorithms. In DNN-MCTS, a Deep Neural Network model is trained collaboratively with a dynamic Monte-Carlo search tree to guide the agent towards actions that yields the highest returns. While the DNN operations are highly parallelizable, the search tree operations involved in MCTS are sequential and often become the system bottleneck. Existing MCTS parallel schemes on shared-memory multi-core CPU platforms either exploit data parallelism but sacrifice memory access latency, or take advantage of local cache for low-latency memory accesses but constrain the tree search to a single thread. In this work, we analyze the tradeoff of these parallel schemes and develop performance models for both parallel schemes based on the application and hardware parameters. We propose a novel implementation that addresses the tradeoff by adaptively choosing the optimal parallel scheme for the MCTS component on the CPU. Furthermore, we propose an efficient method for searching the optimal communication batch size as the MCTS component on the CPU interfaces with DNN operations offloaded to an accelerator (GPU). Using a representative DNN-MCTS algorithm - Alphazero on board game benchmarks, we show that the parallel framework is able to adaptively generate the best-performing parallel implementation, leading to a range of \(1.5\times-3\times\) speedup compared with the baseline methods on CPU and CPU-GPU platforms.
* We implement both local-tree and shared-tree parallel DNN-MCTS as a single program template that allows compile-time adaptive selection of parallel implementations; the program template allows interfacing with existing high-level libraries for simulating various benchmarks, and supports offloading the DNN computations to accelerators.
* We propose a design configuration workflow that decides the optimal parallel method at compile time. This is achieved using high-level performance models for two tree-parallel DNN-MCTS implementations based on algorithm hyper-parameters (e.g., tree fanout, tree depth), hardware specifications (e.g., number of threads, DDR bandwidth and latency), and design-time profiling.
* We utilize an efficient search method that determines the best DNN-request-processing batch size in the design configuration workflow to fine-tune the DNN-MCTS performance on a CPU-GPU platform. This is achieved by overlapping DNN request transfers with in-tree operations and minimizing the GPU wait time.
* We successfully validated the proposed adaptive parallel methodology by running the Gomoku board-game benchmark, achieving up to 3\(\times\) speedup over the baselines using either parallel implementation alone.
## 2. Background
### DNN-MCTS
The complete DNN-MCTS training pipeline is an iterative process composed of two stages: tree-based search and DNN training. The tree-based search stage is guided by the DNN inference results on a tree, and generates the datasets used for DNN training. The DNN takes the current state \(s\) as the input, and outputs a value estimation of \(s\) and a policy (i.e., the probabilities of taking each available action from \(s\)). Each node in the tree represents a certain environment state. Each edge represents the action that transitions the environment from one state to another, and tracks the visit counts and application-specific values associated with the action. For example, in AlphaZero (Ajha and Zisserman, 2017), each edge maintains \(Q(s,a)\) - the expected reward (i.e., the Q value) for taking action \(a\) from state \(s\); \(N(s,a)\) - the number of times action \(a\) is taken from state \(s\) in all the iterations in a search stage; \(P(s,\cdot)\) - the policy returned by the DNN, which is the probability of taking each action from the state \(s\).
In the tree-based search stage, each iteration of the tree-based search is composed of the following operations:
1. **Node Selection:** The search starts from the current state (root node of the tree) and traverses down the tree. At every traversed node \(s\), the next edge is selected according to the statistics stored in the search tree as follows: \[a=\arg\max_{a}U(s,a),\ \text{where the UCT score} \tag{1}\] \[U(s,a)=Q(s,a)+c\cdot P(s,a)\cdot\frac{\sqrt{\sum_{b}N(s,b)}}{1+N(s,a)}\]
This leads the agents towards states with high reward values (exploitation), high policy-action probability, and low visit counts (exploration). \(c\) is a pre-set constant controlling the tradeoff between exploitation and exploration.
2. **Node Expansion & Evaluation:** When the tree traversal encounters an edge that was never visited before, the search process adds a new successor node \(s^{\prime}\), and initializes \(Q(s^{\prime},a),N(s^{\prime},a)\) to \(0\) for all its adjacent edges \(a\). Accordingly, \(P(s^{\prime},\cdot)\) is derived from the DNN inference which takes the new node \(s^{\prime}\) as input; the DNN also outputs the estimated reward value \(o(s^{\prime},\cdot)\).
3. **Backup:** To synchronize the tree with the most recent node evaluation, \(o(s^{\prime},\cdot)\) is propagated from the new leaf node back to the root. At each tree level, the visit counts \(N\) is incremented, and the state value \(Q\) is accumulated using \(o\).
After a fixed number of iterations, the best move is picked at the root node (i.e., the current state \(s_{t}\)) based on Equation 1. This generates a training datapoint \((s_{t},\vec{\pi}_{t},r)\), where \(\vec{\pi}_{t}\) is the action statistics at the root, and \(r\) is the reward recorded at terminal states. These training data points are later consumed by the DNN training stage.
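Putting the per-edge statistics and the selection rule of Equation 1 together, a minimal Python sketch (the `Edge` container and all names are our illustrative stand-ins):

```python
import math
from dataclasses import dataclass

@dataclass
class Edge:
    Q: float = 0.0  # expected reward Q(s, a)
    N: int = 0      # visit count N(s, a)
    P: float = 0.0  # prior probability P(s, a) from the DNN policy head

def select_action(edges, c=1.0):
    """Return argmax_a U(s, a) over a node's outgoing edges, per Equation 1."""
    total_visits = sum(e.N for e in edges.values())
    def uct(e):
        return e.Q + c * e.P * math.sqrt(total_visits) / (1 + e.N)
    return max(edges, key=lambda a: uct(edges[a]))

# toy example: three candidate actions at some state s
edges = {0: Edge(Q=0.1, N=10, P=0.3), 1: Edge(Q=0.3, N=2, P=0.5), 2: Edge(Q=0.0, N=0, P=0.2)}
best_action = select_action(edges, c=1.5)
```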
In the DNN training stage, the DNN performs stochastic gradient descent (SGD, (Glorot and Kool, 2018)) using the data points generated in the tree-based search stage. For example, in AlphaZero (Ajha and Zisserman, 2017), it updates the DNN parameters \(\theta\) to minimize the loss:
\[l=\sum_{t}\left(v_{\theta}\left(s_{t}\right)-r\right)^{2}-\vec{\pi}_{t}\cdot \log\left(\vec{p}_{\theta}\left(s_{t}\right)\right) \tag{2}\]
where \(v_{\theta}\) and \(p_{\theta}\) are the value head and policy head of the DNN output.
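For reference, a sketch of Equation 2 in PyTorch, assuming batched value and policy outputs; the epsilon guard, the batch averaging, and all names are ours:

```python
import torch

def alphazero_loss(v, r, pi, p, eps=1e-8):
    """Equation 2: squared value error minus the policy log-likelihood term.
    v: (B,) value head, r: (B,) recorded outcomes,
    pi: (B, A) MCTS visit-count targets, p: (B, A) policy head probabilities."""
    value_loss = (v - r).pow(2).mean()
    policy_loss = -(pi * (p + eps).log()).sum(dim=1).mean()
    return value_loss + policy_loss

# toy batch of 4 positions with 9 candidate moves each
v, r = torch.rand(4), torch.rand(4)
pi = torch.softmax(torch.rand(4, 9), dim=1)
p = torch.softmax(torch.rand(4, 9), dim=1)
loss = alphazero_loss(v, r, pi, p)
```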
In our initial profiling of the sequential DNN-MCTS on Gomoku benchmarks (Ajha and Zisserman, 2017), the tree-based search stage accounts for more than 85% of the complete training process. Therefore, there is a critical need for parallelizing both the MCTS and DNN inference processes in the tree-based search stage. _Our work focuses on (variations of) Tree Parallelization (Bengio et al., 2017; Chen et al., 2017)_. This is currently the most popular MCTS parallelization technique used in existing DNN-MCTS implementations such as AlphaZero (Ajha and Zisserman, 2017). In Tree-Parallel MCTS, after a worker traverses a certain node (path) during Node Selection, a virtual loss \(VL\) is subtracted from \(U\) of the traversed edges to lower their weights, thus encouraging other workers to take different paths. It also creates dependencies between workers during the Node Selection. \(VL\) is recovered later in the BackUp phase. Note that \(VL\) can either be a pre-defined constant value (Bengio et al., 2017), or a number tracking visit counts of child nodes (Chen et al., 2017).
In this work, we view the tree-based search stage as a composition of in-tree operations and DNN inference. The in-tree operations are all the operations that access the tree in Node Selection, Node Expansion, and BackUp phases, and the DNN inference refers to Node Evaluation. Note that the target platform for in-tree operations is a multi-core CPU, and DNN inference may be executed on the CPU or offloaded to an accelerator.
### Related Work
Other than the tree-parallel MCTS targeted in this work, multiple other parallel algorithms have been developed for high-throughput MCTS and DNN-MCTS. Leaf-parallel MCTS (Chen et al., 2017) uses a single tree and creates multiple parallel node simulations at the same leaf node, but it wastes parallelism due to the lack of diverse evaluation coverage on different selected paths, which degrades algorithm performance (Chen et al., 2017). Root-parallel MCTS (Chen et al., 2017) creates multiple trees at different workers and aggregates their statistics periodically, but still lets multiple workers visit repetitive states. The Speculated DNN-MCTS (Deng et al., 2017) complies with the sequential in-tree operations, and uses a speculative model in addition to the main model for faster node evaluation. This preserves the decision-making quality of the sequential MCTS but introduces additional computations.

The original tree-parallel MCTS (Beng et al., 2017) uses multiple workers to share and modify the same tree, and uses mutexes to avoid race conditions. However, the synchronization overhead can dominate the memory-bound in-tree operations, making the achievable speedups sub-optimal. (Krause et al., 2017) attempts to address this by developing a lock-free tree-parallel method, but the agents trained cannot win against root-parallel MCTS on hex game benchmarks without careful tuning of hyper-parameters. WU-UCT (Deng et al., 2017) puts multiple workers on the same thread and executes them in a centralized manner using a local tree, while parallelizing the node evaluations (simulations). This avoids the overheads of frequent thread synchronizations, but the speedup does not scale linearly with the allocated parallel resources when the sequential workers become the bottleneck (Deng et al., 2017; Wang et al., 2018). Overall, the existing methods trade off differently with respect to the execution speed of the best-performing agents. Therefore, we are motivated to combine the advantages of tree-parallel MCTS with a shared tree (Beng et al., 2017) and with a local tree (Deng et al., 2017), and dynamically select between them to suit different scenarios.
## 3. Parallelization Schemes and Implementation
### Parallelization Schemes
Assume that we allocate \(N\) workers sharing the tree during the tree-based search. We consider two methods to implement tree-parallel MCTS on multi-core CPUs. These methods are characterized by their usage of a local tree and a shared tree, respectively:
#### 3.1.1. Shared Tree
The shared-tree method uses \(N\) threads in total - it assigns each worker an individual thread. Each thread is responsible for its own assigned worker's in-tree operations and DNN inference. The tree is stored in a shared memory (typically DDR memory of the CPU), and nodes in the tree are distributed to parallel workers as they access the tree. The shared-tree method on a multi-core system is shown in Figure 1-(a). The in-tree operations by each worker are protected with locks so that only one worker can access a certain node at a time. The operation execution timeline of the shared-tree method is shown in Figure 1-(b). All workers start at a common root node, and the virtual loss applied to the root children needs to be updated for all workers accessing it. So, the time interval between consecutive workers involves the overhead for communicating the root-level information through the shared memory space (i.e., DDR), creating latency offsets between workers. The main advantage of the shared-tree method is that in-tree operations are parallelized. The disadvantage is that the more compute-intensive Node Evaluation process cannot fully utilize the compute power provided by the parallel threads, since they need to wait for the completion of in-tree operations by all workers, and these in-tree operations are bounded by memory access latencies.
#### 3.1.2. Local Tree
The local-tree method uses \(N+1\) threads in total - it uses a centralized master thread to manage the complete tree, and it allocates \(N\) threads to execute the Node Evaluations for \(N\) workers (each thread is solely dedicated to the DNN inferences). The complete tree is stored in the local memory of the master thread (e.g., cache memory). The master thread also manages a worker-thread pool where the master thread communicates with each worker thread through a FIFO (first-in-first-out) communication pipe. The local-tree system is shown in Figure 2-(a). The master thread executes a \(while(1)\) loop; in each iteration, it selects new nodes to send to worker threads, and checks for backup requests received
from any worker in the worker-thread pool. The worker threads' processes are completely independent of one another; they only coordinate with the centralized master thread. The main advantage of the local-tree method is that it can overlap the computation of DNN inferences and in-tree operations by separating them onto different hardware resources (Figure 2-(b)); also, for small-sized trees that can fit in the last-level cache, the memory access latencies of in-tree operations are reduced compared to the shared-tree method. The disadvantage is that all the in-tree operations are completely serialized, leading to lower in-tree throughput.

Figure 1. Shared-tree method

Figure 2. Local-tree method
### Adaptive Parallelism: System Overview
The local-tree and shared-tree methods have tradeoffs that suit different scenarios. The intuition is that when DNN inference throughput is the bottleneck, the local-tree method should be favored to fully exploit the parallelism for independent Node Evaluations; when the number of workers becomes large or the tree is very deep such that the sequential in-tree operations become the bottleneck, the shared-tree method should be utilized to parallelize the in-tree operations between workers. In this work, we are motivated to take the best of both worlds and develop a tree-parallel DNN-MCTS implementation that is able to adaptively switch between the two methods. This implementation is facilitated by an empirical model to determine which method is best suited at compile time given an arbitrary DNN-MCTS algorithm specification and multi-core CPU device specification (later discussed in Section 4).
To support adaptive parallelism that enables switching between the local-tree and shared-tree methods, we implement the DNN-MCTS program as shown in Algorithm 1. The program is an iterative process of data collection (Algorithm 1, lines 3-12) and DNN training (Algorithm 1, lines 13-15). Based on an input flag passed to the main program (Algorithm 1, lines 6-9), it selects between the shared-tree and local-tree methods, shown in Algorithm 2 and 3, respectively.
```
1   Function main(environment, flag):
2       initialize DNN parameters θ and an empty replay buffer
3       for iteration in num_train_iterations do
4           reset environment; s ← initial state
5           while s is not terminal do
6               if flag = SHARED_TREE then
7                   π ← get_action_priors(environment) via Algorithm 2
8               else
9                   π ← get_action_priors(environment) via Algorithm 3
10              sample action a ~ π; apply a; record (s, π)
11          r ← reward at the terminal state
12          store the datapoints (s_t, π_t, r) in the replay buffer
13          sample a minibatch from the replay buffer
14          compute the loss l (Equation 2)
15          update θ by SGD
```
**Algorithm 1** Adaptive Parallel DNN-MCTS
In the shared-tree method, a pool of threads is spawned to execute all the in-tree operations and DNN inferences in parallel. When a function is added to the thread pool (Algorithm 2, line 4), the input of the function is sent to an available thread, and the function is executed on the same thread. In the case of the shared-tree method, the function executed by each thread is "threadsafe_rollout". It first traverses the tree from root to leaf, performing node selection, then performing node evaluation through "neural_network_simulate", followed by node expansion and backup. During the virtual loss update and backup, multiple threads may share write accesses to the same nodes, so locks are used to ensure atomic accesses.
In the local-tree method, a centralized master thread is responsible for all the in-tree operations, and a thread pool is spawned to execute all the DNN inferences asynchronously in parallel. Specifically, the master thread executes "rollout_n_times" (Algorithm 3, lines 6-17). It repeatedly performs node selection, expansion, and backup, and assigns a "neural_network_simulate" function as a node evaluation request to the thread pool through a first-in-first-out queue. When all the threads in the thread pool are occupied by DNN inferences, the master thread waits until receiving a value for backup. Otherwise, it continues with the in-tree operation loop to generate node evaluation requests (a Python sketch of this coordination pattern is given after Algorithm 2).
```
1   Function get_action_priors(environment):
2       game ← copy(environment)
3       for i in num_playouts do
4           add threadsafe_rollout(game) to thread pool
5       wait for threads to finish all work
6       action_priors ← normalized root's children list wrt visit count
7       return action_priors
8   Function threadsafe_rollout(game):
9       node ← root
10      while node is not leaf do
11          node ← node's child with highest UCT score
12          game executes the corresponding move
13          obtain lock
14          update node's UCT score with virtual loss
15          release lock
16      priors, value ← neural_network_simulate(game)
17      node creates children list according to priors
18      obtain lock
19      backup(node, value)
20      release lock
21      return
```
**Algorithm 2** Shared-tree based search
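The body of Algorithm 3 follows the local-tree description above. As a minimal Python sketch of the same coordination pattern (a queue-based master/worker loop), where `select_leaf`, `expand_children`, `backup`, and `neural_network_simulate` are assumed stubs rather than the paper's exact interface:

```python
import queue
import threading

# illustrative stubs so the sketch runs end-to-end
def neural_network_simulate(game):  # stands in for the DNN forward pass
    return [1.0], 0.0
def select_leaf(root, game):        # stands in for sequential Selection on the local tree
    return root, game
def expand_children(node, priors):  # stands in for Expansion
    pass
def backup(node, value):            # stands in for BackUp
    pass

request_q = queue.Queue()   # master -> workers: node-evaluation requests
result_q = queue.Queue()    # workers -> master: (node, priors, value) results

def inference_worker():
    while True:
        node, game = request_q.get()
        priors, value = neural_network_simulate(game)
        result_q.put((node, priors, value))

def rollout_n_times(root, game, n, num_workers):
    """Centralized master loop: sequential in-tree ops, asynchronous evaluations."""
    for _ in range(num_workers):
        threading.Thread(target=inference_worker, daemon=True).start()
    issued = done = 0
    while done < n:
        try:  # drain any finished evaluations without blocking
            while True:
                node, priors, value = result_q.get_nowait()
                expand_children(node, priors)
                backup(node, value)
                done += 1
        except queue.Empty:
            pass
        if issued < n and issued - done < num_workers:  # a worker thread is free
            leaf, leaf_game = select_leaf(root, game)
            request_q.put((leaf, leaf_game))
            issued += 1
        elif done < n:  # all workers busy: block until a value arrives for backup
            node, priors, value = result_q.get()
            expand_children(node, priors)
            backup(node, value)
            done += 1

rollout_n_times(root={}, game=None, n=8, num_workers=4)
```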
### Accelerator-offloaded DNN Inference
Our implementation also supports offloading the DNN inferences onto a GPU. We utilize a dedicated accelerator queue for accumulating DNN inference task requests produced by the tree selection process. When the queue size reaches a predetermined threshold, all tasks are submitted together to the GPU for computation. Acceleration of DNN inferences is particularly important, especially
when the total latency of in-tree operations is relatively small. However, it does require careful tuning of the communication batch size associated with the accelerator queue.
In the case of the shared-tree method, the communication batch size is always set to the number of threads employed (i.e., thread pool size). This is because the selection processes are parallel, resulting in the nearly simultaneous arrival of all inference tasks, leaving only a small gap to wait for the inference queue to be full.
The case of the local-tree method necessitates empirical tuning of the communication batch size. This is because the selection processes on the master thread are sequential and lead to long waiting times for the worker threads; submitting a small batch of inference tasks before the worker threads reach full capacity can help reduce accelerator waiting time, overlapping DNN inference computation with in-tree operations. Our empirical exploration of the communication batch size can be found in Sections 4.2 and 5.2.
## 4. Performance Analysis for Adaptive Parallelism
### Performance Model
In this section, we provide a theoretical analysis of the time performance to understand the tradeoff between the shared tree and local tree methods. The main parallel parameters that affect their performance include the number of threads, the latency of executing in-tree operations and inferences on each thread, and the data access and/or data transfer latencies.
Assuming the complete tree-based search process is conducted on a multi-core CPU with a thread pool size of \(N\), the amortized latency for each iteration of the shared tree method on a multi-core CPU can be estimated as:
\[T_{shared}^{CPU}\approx T_{\text{shared tree access}}\times N+T_{select}+T_{ backup}+T_{DNN}^{CPU} \tag{3}\]
The \(T_{\text{shared tree access}}\) term refers to the latencies incurred when multiple threads access CPU-shared memory (DDR) as they traverse the same node. For selection and backup in a shared tree, this overhead is unavoidable, as all parallel workers start from the same root node. The in-tree operation latency and the DNN inference latency are summed up since they execute sequentially on each thread.
If we offload the batched DNN computations onto a GPU, the per-iteration latency can be estimated by replacing the DNN inference execution time with \(T_{DNN}^{GPU}\), which contains the PCIe data transfer overhead and the actual computation time.
\[T_{shared}^{CPU-GPU}\approx T_{\text{shared tree access}}\times N +T_{select}+T_{backup}\\ +T_{DNN}^{GPU}(batch=N) \tag{4}\]
The amortized latency for each iteration of the local tree method on a multi-core CPU can be estimated as:
\[T_{local}^{CPU}\approx max((T_{select}+T_{backup})\times N,T_{DNN}^{CPU}) \tag{5}\]
In the local tree method, the in-tree operations and DNN inferences are overlapped. Therefore, the per-iteration execution time is bounded by either the DNN inference latency or the total latency of the sequential in-tree operations.
\[T_{local}^{CPU-GPU}\approx max((T_{select}+T_{backup})\times N,\\ T_{PCIe},T_{DNN-compute}^{GPU}(batch=B)) \tag{6}\]
For batched DNN computations on GPU, we select a (sub-)batch size \(B<N\) such that \(\frac{N}{B}\) CUDA streams (Gupta et al., 2017) are initiated; each CUDA stream bulk-processes the node evaluation (DNN inference) requests after \(B\) loop counts of in-tree operations. Therefore, the timeline of the local tree using a CPU-GPU platform can be visualized similarly to that depicted in Figure 5; the only differences are (1) the \(N\) worker threads are replaced with \(\frac{N}{B}\) CUDA streams, and (2) the blue-colored pipe communication arrows appear every \(B\) iterations (instead of every iteration) of in-tree operations.
### Design Configuration Workflow
To decide the parallel method and relevant design parameters (i.e., accelerator inference batch size) at compile time, we first obtain \(T_{DNN}^{CPU}\), \(T_{select}\) and \(T_{backup}\) of a single worker on a single thread by profiling their amortized execution time on the target CPU for one iteration. The DNN for profiling is filled with random parameters and inputs of the same dimensions defined by the target algorithm and application. The \(T_{select}\) and \(T_{backup}\) are measured on a synthetic tree constructed for one episode (i.e., multiple iterations) with randomly generated UCT scores, emulating the same fanout and depth limit defined by the DNN-MCTS algorithm. These design-time profiled latencies provide a close prediction for the actual latencies at run time. We can also obtain \(T_{DNN}^{GPU}\) for the CPU-GPU setting, including the computation and data migration latency. In our implementation, the tree is managed as a dynamically allocated array of node structs that resides in the CPU DDR memory. Therefore, we estimate \(T_{\text{shared tree access}}\) as the DDR access latency documented for the target CPU device. These are plugged into the performance
models for \(T_{shared}^{CPU}\) and \(T_{local}^{CPU}\) at compile time to decide the optimal parallel method for an arbitrary DNN-MCTS algorithm on a CPU.
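For instance, the CPU-side decision reduces to evaluating Equations 3 and 5 on the profiled quantities; a short Python sketch, where the numbers are placeholders rather than measurements:

```python
def choose_parallel_scheme(t_select, t_backup, t_dnn_cpu, t_shared_access, num_workers):
    """Pick the scheme with the lower predicted per-iteration latency (Eqs. 3 and 5)."""
    t_shared = t_shared_access * num_workers + t_select + t_backup + t_dnn_cpu  # Eq. 3
    t_local = max((t_select + t_backup) * num_workers, t_dnn_cpu)               # Eq. 5
    return ("shared-tree", t_shared) if t_shared < t_local else ("local-tree", t_local)

# placeholder design-time profiles in microseconds, purely for illustration
scheme, predicted_latency = choose_parallel_scheme(
    t_select=12.0, t_backup=8.0, t_dnn_cpu=900.0, t_shared_access=0.1, num_workers=32)
```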
For exploring the design space on a CPU-GPU platform, an additional parameter \(B\) (i.e., the number of CUDA streams, each processing a sub-batch) can affect the performance of the local tree method. A naive method is to iterate over all the possible values for \(B\) (\(B\in[1,N]\)) and empirically run an episode to test the average latency of each iteration. However, this makes the design space exploration complexity linearly proportional to \(N\) and hard to scale to very large multi-core and accelerator systems. To address this, we make the following observations about Equation 6:
* (\(T_{select}+T_{backup}\)) remains constant or monotonically decreases with increasing \(B\). This is because the Expand operation waits for a batch of inferences to complete before the UCT scores of the newly added nodes become available for traversal in Backup and Selection. The higher the CUDA stream batch size \(B\), the less frequently new nodes become available to be traversed (new node-UCT scores become available roughly once per \(B\) loop counts on the master thread). Increasing \(B\) may in turn make the total tree depths traversed by Selection and Backup smaller due to less-frequent node insertions. Therefore, the first term of Equation 6 should be a constant or monotonically decreasing sequence wrt \(B\in\{1,...,N\}\).
* \(T_{PCIe}\) is the time for transferring a total of \(N\) data samples (i.e., DNN inference requests) between the CPU and GPU through a PCIe interconnection. It can be viewed as \(\frac{N}{B}\) transfers, each transfer processes a batch of \(B\) data samples. Each transfer is associated with a fixed communication and kernel launch latency \(L\). Therefore, \(T_{PCIe}\) can be modeled as \((\frac{N}{B})\times L+\frac{N}{\text{PCIe bandwidth}}\). Based on this model, \(T_{PCIe}\) is expected to be a monotonically decreasing sequence wrt \(B\in[1,N]\).
* \(T_{DNN-compute}^{GPU}(batch=B)\) is expected to monotonically increase with increasing \(B\). This is because a larger \(B\) leads to a higher per-batch computational workload.
* Based on Equation 6, the element-wise maximum of two monotonically decreasing sequences ((\(T_{select}+T_{backup}\)) and \(T_{PCIe}\)) is also a monotonically decreasing sequence. The element-wise maximum of this resulting monotonically decreasing sequence and a monotonically increasing sequence (\(T_{DNN-compute}^{GPU}(batch=B)\)) should be a "V-sequence", i.e., a sequence that first monotonically decreases and then monotonically increases wrt \(B\).
Essentially, we want to search the design space of \(B\) and find its value yielding the minimum execution time, i.e., \(\arg\min_{B}T_{local}^{CPU-GPU}\). Based on the above observations, this enables us to exploit the property of a "V-sequence", and develop an efficient algorithm to determine \(B\) at design time. We achieve this by modeling the problem of finding the best-performing CUDA stream batch size \(B\) as the problem of finding the minimum value of a "V-sequence" \(T\) (\(T\) is the array of per-iteration latency across different values of \(B\in\{1,...,N\}\)). Instead of testing every possible value for \(B\in[1,N]\), we can sample a subset with a reduced complexity of \(O(\log N)\) as shown in Algorithm 4. Note that this is the mirroring problem of finding the maximum value of a bitonic sequence in \(O(\log N)\) time using binary search (Hendra et al., 2017).
```
1  Function FindMin(T, lo, hi):
2    if lo == hi then
3      return B ← lo
4    mid = ⌊(lo + hi) / 2⌋
5    Test Run with B = mid and B = mid + 1
6    Record amortized latency T[mid], T[mid+1]
7    if T[mid] ≥ T[mid+1] then
8      return FindMin(T, mid+1, hi)
9    else
10     return FindMin(T, lo, mid)
```
**Algorithm 4** Exploring the optimal CUDA stream batch size \(B\)
Note that for each Test Run (Algorithm 4, line 5), we do not need to run DNN-MCTS until policy convergence; we only profile the latency of a single move (i.e., the get_action_prior functions in Algorithms 2 and 3). This is because each move made in the complete DNN-MCTS training loop involves the same amount of computation.
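For concreteness, here is a minimal Python rendering of Algorithm 4 (our own illustrative code, not the paper's implementation). The `profile` callback stands in for the Test Run of line 5; in practice it would time a single move at the given batch size, but any callable mapping \(B\) to an amortized latency works, such as indexing the modeled sequence from the toy example above.

```python
# Minimal sketch (ours) of Algorithm 4. `profile(B)` is assumed to return the
# amortized per-iteration latency measured at CUDA stream batch size B.
def find_min_batch(profile, lo, hi):
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    # Probe two adjacent batch sizes: O(log N) probes in total.
    if profile(mid) >= profile(mid + 1):
        return find_min_batch(profile, mid + 1, hi)  # still on the decreasing arm
    return find_min_batch(profile, lo, mid)          # minimum is at or left of mid

# Example: best_B = find_min_batch(lambda b: t_iter[b - 1], 1, N)
```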
## 5. Evaluation
### Experiment Setup
**Benchmark and hyper-parameters:** We use the Gomoku game benchmark (Golovolov et al., 2016) to evaluate the performance of our proposed method. The board size (i.e., the size of the input state to the policy/value network) is 15\(\times\)15, and the neural network is composed of 5 convolution layers and 3 fully-connected layers. The tree size limit per move is 1600 (i.e., the total number of selection-expansion-inference-backup operations performed per agent move is 1600).
**Hardware platform specifications:** We use the AMD Ryzen Threadripper 3990X @ 2.2 GHz as our target CPU platform. It has 64 cores (2 threads per core), a 256 MB last-level cache, and 8 \(\times\) 32 GB of DDR4 memory. The CPU is connected to an NVIDIA RTX A6000 GPU through PCIe 4.0.
**Evaluation metrics:** We conduct experiments to evaluate both the speed and parallel algorithm performance. The speed is measured through (1) the amortized per-worker-iteration latency in the tree-based search stage (Section 5.3), obtained by running and averaging all the 1600 iterations for making a move; and (2) the overall training throughput (Section 5.4) in terms of processed samples/second, obtained by \(\frac{\text{Number of samples processed per episode}}{\sum(\text{Tree-based search time + DNN update time})}\). Note that one sample is obtained by executing all 1600 rounds of in-tree operations and DNN inferences in a move. The algorithm performance (Section 5.5) is measured using the loss of the DNN (Equation 2). The lower the loss, the more accurately the DNN is able to predict the probability of winning at each state and action, and the better the MCTS at guiding the moves toward the winning state.
### Design Exploration of Host-Accelerator Communication Batch Size
We show the performance obtained during the design configuration process for choosing the CUDA stream batch size \(B\) in Figure 3, specific to the local-tree method mapped to a CPU-GPU heterogeneous platform. We only perform this design exploration for the cases where the available number of workers is \(N\geq 16\). This is because \(N\geq 16\) is the threshold where the shared-tree method starts to outperform the local-tree method with full-batched (batch size \(=N\)) inferences on the GPU (discussed later in Section 5.3, Figure 5), which raises the question of whether choosing an alternative batch size could improve the local-tree performance.
We can observe that at smaller batch sizes, sub-batches of inferences are serialized, which hinders performance. The extreme case is batch size \(=1\), where the serial inferences dominate the runtime, making the amortized iteration latency so high that even changing \(N\) does not affect performance. At larger batch sizes, inferences are parallelized with a higher degree on the GPU, but each inference request is made only after waiting for all the serial in-tree operations to complete on the master thread, leading to a large overhead. The extreme case is batch size \(=N\): the GPU waits for all \(N\) requests before it can start the computation, and the \(N\) in-tree operations on the master thread are a non-trivial overhead, contributing to a higher amortized latency at \(N=64\) compared to \(N=16\) or \(32\). Our design exploration finds the balance point where there are enough inferences within each sub-batch to saturate GPU parallelism, while enough requests are also made across sub-batches so that the GPU computation can overlap with the computation on the CPU master thread (i.e., the GPU does not idle waiting for CPU computation to finish). Based on our test runs, the optimal batch sizes are \(8\) when \(N=16\), and \(20\) when \(N=32\) or \(64\).
### Tree-based Search Iteration Latency
We plot the amortized per-worker-iteration latency of the tree-based search stage in Figures 4 and 5. Note that a worker iteration is one round of Node Selection, Node Expansion, Node Evaluation (DNN inference), and BackUp executed by one worker. In each move, \(1600\) such worker iterations are executed by the \(N\) parallel workers in total. We obtain the amortized per-worker-iteration latency by dividing the total time for a move by \(1600\). The higher \(N\) is, the more parallelism is exploited and the lower the total time for a move (and thus the amortized per-worker-iteration latency).
For the CPU-only implementation, each worker is assigned a separate CPU thread for performing one node evaluation (i.e., DNN inference). In Figure 4, we observe that under different configurations (number of workers used), the optimal method can be different. Our method using adaptive parallelism is able to always choose the optimal method, achieving up to \(1.5\times\) speedup compared to either the local tree or the shared tree baselines on the CPU-only platform.
For the CPU-GPU implementation, a communication buffer is used to collect a batch of node evaluation requests before sending them to the GPU for a batched DNN inference. In Figure 5, we observe that if we set the buffer (batch) size to the full batch size \(N\), the amortized latency using the local tree method gets higher as \(N\) increases beyond \(16\). At \(N=16\), our implementation chooses the shared tree method. At \(N=32\) and \(64\), using the optimal batch size returned by Algorithm 4, the local tree method combined with overlapped GPU inferences outperforms the shared tree method with full-batched GPU inferences. Overall, on a CPU-GPU heterogeneous platform, our method using adaptive parallelism achieves up to \(3.07\times\) speedup compared to either the local tree or the shared tree baselines.
Figure 4. Iteration latency, CPU-only
Figure 5. Iteration latency, CPU-GPU, batched inference
Figure 3. Design Exploration of Inference Batch Size
### Throughput Analysis
We plot the overall DNN-MCTS training throughput (processed samples per second) for both the CPU-only and CPU-GPU platforms in Figure 6, varying the number of workers used in the tree-based search. The throughput numbers are obtained by applying the optimal parallel method and design configuration returned by our design configuration workflow. Overall, the CPU-GPU implementations show higher throughput than the CPU-only implementations. In the CPU-GPU implementations, the tree-based search process produces samples and the training process (completely offloaded to the GPU) consumes samples. The training process execution time is hidden by the tree-based search time, especially when there is a small number of workers such that the in-tree operations and DNN inferences become the bottleneck. As the number of workers increases, we observe near-linear improvements in throughput, since the time spent producing the same number of samples for training is reduced. When the number of workers increases above 16, the tree-based search time is reduced to the extent that it is lower than the training time. As a result, the throughput improvement becomes less pronounced.
In the CPU-only implementations, given the limited number of available CPU hardware threads, we allocate 32 threads for conducting training on the CPU (these are different threads from those used for the DNN-MCTS parallel workers). In contrast to GPU-accelerated training, CPU-based DNN training now becomes the bottleneck even for a small number of DNN-MCTS workers. With a different number of workers allocated to the tree-based search process, the compute power provided to the training process is fixed (32 threads). Therefore, the throughput improvements from increasing the number of DNN-MCTS workers are not as scalable as in the CPU-GPU implementations. Still, we are able to adaptively choose the best-performing parallel method and design configurations. The optimal methods used on different hardware platforms with different available resources (i.e., number of workers) are annotated in Figure 6.
### Algorithm Performance
We show the DNN loss over time as the measurement of parallel DNN-MCTS training algorithm performance in Figure 7. The experiments are conducted on the CPU-GPU platform using the optimal parallel configurations for 4, 16, and 64 workers. As we introduce parallel workers for the tree-based search, the algorithm is modified: in the serial tree search, every iteration accesses the most up-to-date tree information modified by the previous iteration, while in the tree-parallel implementations, a worker traversing the tree may not observe the newest node UCT scores because the node evaluations (i.e., DNN inferences) of other workers have not yet completed. The more parallel workers are used, the stronger the effect of such obsolete tree information. As a result, the training samples generated (states traversed and actions taken based on tree search) in the parallel version are not the same as in the 1-worker serial baseline. Still, the converged loss is not negatively impacted by increasing parallelism, as shown in Figure 7. Additionally, the convergence curve is steeper, meaning the time taken to reach the same converged loss is reduced using the optimal parallel configurations of our adaptive parallel implementation.
## 6. Conclusion
In this work, we proposed a novel implementation of DNN-MCTS that adaptively chooses the optimal parallel scheme for the MCTS component on the CPU. We also analyzed the performance on a CPU-GPU platform and proposed an efficient method to search for the optimal communication batch size interfacing the MCTS component and DNN operations. Experimenting on CPU-only and CPU-GPU platforms using a Gomoku game benchmark, we observed up to 1.5\(\times\) and 3.07\(\times\) speedups, respectively, using our adaptive parallelism compared to existing fixed-parallelism methods. Our method and performance models are general and can also be adopted in the context of many other types of accelerators for DNN inference and training (e.g., FPGAs and ASICs such as TPUs) in the future.
Figure 6. Training throughput under optimal configurations
Figure 7. DNN loss over time, using the optimal parallel methods returned by our Design Configuration across different number of parallel workers |
2305.09947 | Understanding the Initial Condensation of Convolutional Neural Networks | Previous research has shown that fully-connected networks with small
initialization and gradient-based training methods exhibit a phenomenon known
as condensation during training. This phenomenon refers to the input weights of
hidden neurons condensing into isolated orientations during training, revealing
an implicit bias towards simple solutions in the parameter space. However, the
impact of neural network structure on condensation has not been investigated
yet. In this study, we focus on the investigation of convolutional neural
networks (CNNs). Our experiments suggest that when subjected to small
initialization and gradient-based training methods, kernel weights within the
same CNN layer also cluster together during training, demonstrating a
significant degree of condensation. Theoretically, we demonstrate that in a
finite training period, kernels of a two-layer CNN with small initialization
will converge to one or a few directions. This work represents a step towards a
better understanding of the non-linear training behavior exhibited by neural
networks with specialized structures. | Zhangchen Zhou, Hanxu Zhou, Yuqing Li, Zhi-Qin John Xu | 2023-05-17T05:00:47Z | http://arxiv.org/abs/2305.09947v1 | # Understanding the Initial Condensation of Convolutional Neural Networks
###### Abstract
Previous research has shown that fully-connected networks with small initialization and gradient-based training methods exhibit a phenomenon known as condensation during training. This phenomenon refers to the input weights of hidden neurons condensing into isolated orientations during training, revealing an implicit bias towards simple solutions in the parameter space. However, the impact of neural network structure on condensation has not been investigated yet. In this study, we focus on the investigation of convolutional neural networks (CNNs). Our experiments suggest that when subjected to small initialization and gradient-based training methods, kernel weights within the same CNN layer also cluster together during training, demonstrating a significant degree of condensation. Theoretically, we demonstrate that in a finite training period, kernels of a two-layer CNN with small initialization will converge to one or a few directions. This work represents a step towards a better understanding of the non-linear training behavior exhibited by neural networks with specialized structures.
## 1 Introduction
As large neural networks continue to demonstrate impressive performance in numerous practical tasks, a key challenge is to understand the reasons behind the strong generalization capabilities often exhibited by over-parameterized networks (Breiman, 1995; Zhang et al., 2021). A commonly employed approach to understanding neural networks is to examine their implicit biases during the training process. Several studies have shown that neural networks tend to favor simple solutions. For instance, from a Fourier perspective, neural networks have a bias toward low-frequency functions, which is known as the frequency principle (Xu et al., 2019, 2020) or spectral bias (Rahaman et al., 2019). In the parameter space, Luo et al. (2021) observed a condensation phenomenon, i.e., the input weights of hidden neurons in two-layer \(\mathrm{ReLU}\) neural networks condense into isolated orientations during training in the non-linear regime, particularly with small initialization. Fig. 1 presents an illustrative example in which a large condensed network can be reduced to an effective smaller network with only two neurons. Based on complexity theory (Bartlett and Mendelson, 2002), as the condensation phenomenon reduces the network complexity, it might provide insights into how over-parameterized neural networks achieve good generalization performance in practice. Zhang and Xu (2022) drew inspiration from this phenomenon and found that dropout (Hinton et al., 2012; Srivastava et al., 2014), a commonly used optimization technique for improving generalization,
exhibits an implicit bias towards condensation, supported by both experiments and theory. Prior literature has predominantly centered on the study of fully-connected neural networks, leaving the emergence and properties of the condensation phenomenon in neural networks with different structural characteristics inadequately understood. Consequently, this paper aims to investigate the occurrence of condensation in convolutional neural networks (CNNs).
The success of deep learning relies heavily on the structures used, such as convolution and attention. Convolution is an ideal starting point for investigating the impact of structure on learning outcomes, as it is widely used and has a simple structure. To achieve a clear condensation phenomenon in CNNs, we adopt a strategy of initializing weights with small values. Small weight initialization can result in rich non-linearity of neural network (NN) training behavior (Mei et al., 2019; Rotskoff and Vanden-Eijnden, 2018; Chizat and Bach, 2018; Sirignano and Spiliopoulos, 2020). Over-parameterized NNs with small initialization can, for instance, achieve low generalization error (Advani et al., 2020) and converge to a solution with maximum margin (Phuong and Lampert, 2020). In contrast to condensation in fully-connected networks, each kernel in a CNN is considered as a unit, and condensation refers to the behavior that a set of kernels in the same layer evolves towards the same direction.
Understanding the initial condensation can benefit the understanding of subsequent training stages (Fort et al., 2020; Hu et al., 2020; Luo et al., 2021; Jiang et al., 2019; Li et al., 2018). Previous research has shown how neural networks with small initialization can condense during the initial training stage in fully-connected networks (Maennel et al., 2018; Pellegrini and Biroli, 2020; Zhou et al., 2022; Chen et al., 2023). This work aims to demonstrate the initial condensation in CNNs during training. A major advantage of studying the initial training stage of neural networks with small initialization is that the network can be accurately approximated by the leading-order Taylor expansion at zero weights. Further, the structure of CNNs may cause kernels in the same layer to exhibit similar dynamics. Through theoretical proof, we show that CNNs can condense into one or a few directions within a finite training period under small initialization. This initial condensation plays an important role in resetting neural networks with different initializations to a similar, simple state, thus reducing the sensitivity to initialization and facilitating the tuning of initialization hyper-parameters.
## 2 Related works
For fully-connected neural networks, it has been generally studied that different initializations can lead to very different training behavior regimes (Luo et al., 2021), including linear regime (similar to the lazy regime) (Jacot et al., 2018; Arora et al., 2019; Zhang et al., 2020; E et al., 2020; Chizat and Bach, 2019), critical regime (Mei et al., 2019; Rotskoff and Vanden-Eijnden, 2018; Chizat and Bach, 2018; Sirignano and Spiliopoulos, 2020) and condensed regime (non-linear regime). The relative change of
Figure 1: Illustration of condensation. The color and its intensity of a line indicate the strength of the weight. Initially, weights are random. Soon after training, the weights from an input node to all hidden neurons are clustered into two groups, i.e., condensation. Multiple hidden neurons can be replaced by an effective neuron with low complexity, which has the same input weight as original hidden neurons and the same output weight as the summation of all output weights of original hidden neurons.
input weights as the width approaches infinity is a critical parameter that distinguishes the different regimes, namely \(0\), \(O(1)\), and \(+\infty\). Zhou et al. (2022) demonstrated that these regimes also exist for three-layer \(\mathrm{ReLU}\) neural networks with infinite width. Experiments suggest that condensation is a frequent occurrence in the non-linear regime.
Zhang et al. (2021, 2021) discovered an embedding principle in loss landscapes between narrow and wide neural networks, based on condensation. This principle suggests that the loss landscape of a deep neural network (DNN) includes all critical points of narrower DNNs, which is also studied in Fukumizu and Amari (2000); Fukumizu et al. (2019); Simsek et al. (2021). The embedding structure indicates the existence of global minima where condensation can occur. Zhang et al. (2022) have demonstrated that NNs exhibiting condensation can achieve the desired function with a substantially lower number of samples compared to the number of parameters. However, these studies do not demonstrate the training process's role in causing condensation.
CNN is one of the fundamental structures in deep learning (Gu et al., 2018). He et al. (2016) introduced the use of residual connections for training deep CNNs, which has greatly enhanced the performance of CNNs on complex practical tasks. Recently, there have also been many theoretical advances. Zhou (2020) shows that CNNs can approximate any continuous function to arbitrary accuracy when the depth of the neural network is large enough. Arora et al. (2019) exactly compute the neural tangent kernel of CNNs. Provided that the signal-to-noise ratio satisfies certain conditions, Cao et al. (2022) have demonstrated that a two-layer CNN, trained through gradient descent, can obtain negligible training and test losses. In this work, we focus on the training process of CNNs.
## 3 Preliminaries
### Some Notations
For a matrix \(\mathbf{A}\), we use \(\mathbf{A}_{i,j}\) to denote its \((i,j)\)-th entry. We also use \(\mathbf{A}_{i,:}\) to denote the \(i\)-th row vector of \(\mathbf{A}\) and define \(\mathbf{A}_{i,j:k}:=(\mathbf{A}_{i,j},\mathbf{A}_{i,j+1},\cdots,\mathbf{A}_{i,k})\) as part of the vector. Similarly \(\mathbf{A}_{:,k}\) is the \(k\)-th column vector and \(\mathbf{A}_{i:j,k}:=(\mathbf{A}_{i,k},\mathbf{A}_{i+1,k},\cdots,\mathbf{A}_{j,k})^{\mathsf{T}}\) is part of the \(k\)-th column vector.
We let \([n]=\{1,2,\ldots,n\}\). We set \(\mathcal{N}(\boldsymbol{\mu},\Sigma)\) as the normal distribution with mean \(\boldsymbol{\mu}\) and covariance \(\Sigma\). We set a special vector \(\mathbb{1}:=(1,1,1,\ldots,1)^{\mathsf{T}}\), whose dimension varies. For a vector \(\mathbf{v}\), we use \(\|\mathbf{v}\|_{2}\) to denote its Euclidean norm, and we use \(\langle\cdot,\cdot\rangle\) to denote the standard inner product between two vectors. Finally, for a matrix \(\mathbf{A}\), we use \(\|\mathbf{A}\|_{2\to 2}\) to denote its operator norm.
### Problem Setup
We focus on the empirical risk minimization problem given by the quadratic loss:
\[\min_{\boldsymbol{\theta}}R_{S}(\boldsymbol{\theta})=\frac{1}{2n}\sum_{i=1}^{ n}\left(f(\boldsymbol{x}_{i},\boldsymbol{\theta})-y_{i}\right)^{2}. \tag{1}\]
In the above, \(n\) is the total number of training samples, \(\{\boldsymbol{x}_{i}\}_{i=1}^{n}\) are the training inputs, \(\{y_{i}\}_{i=1}^{n}\) are the labels, \(f(\boldsymbol{x}_{i},\boldsymbol{\theta})\) is the prediction function, and \(\boldsymbol{\theta}\) are the parameters to be optimized, which is modeled by a \((L+1)\)-layer CNN with filter size \(m\times m\). We denote \(\boldsymbol{x}^{[l]}(i)\) as the output of the \(l\)-th layer with respect to the \(i\)-th sample for \(l\geq 1\), and \(\boldsymbol{x}^{[0]}(i):=\boldsymbol{x}_{i}\) is the \(i\)-th training input. For any \(l\in[0:L]\), we denote the size of width, height, channel of \(\boldsymbol{x}^{[l]}\) as \(W_{l},\ H_{l}\), and \(C_{l}\), respectively, i.e., \(\{\boldsymbol{x}^{[l]}(i)\}_{i=1}^{n}\subset\mathbb{R}^{W_{l}\times H_{l} \times C_{l}}\). We introduce a filter operator \(\chi(\cdot,\cdot)\), which maps the width and height indices of the output of all layers to a binary variable, i.e., for a filter of size \(m\times m\), the filter operator reads
\[\chi(p,q)=\left\{\begin{array}{ll}1,&\text{for}\ \ 0\leqslant p,q\leqslant m-1\\ 0,&\text{otherwise,}\end{array}\right. \tag{2}\]
then the \((L+1)\)-layer CNN with filter size \(m\times m\) is recursively defined for \(l\in[2:L]\),
\[\mathbf{x}_{u,v,\beta}^{[1]}:=\left[\sum_{\alpha=1}^{C_{0}}\left(\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}\mathbf{x}_{u+p,v+q,\alpha}^{[0]}\cdot\mathbf{W}_{p,q,\alpha,\beta}^{[1]}\cdot\chi(p,q)\right)\right]+\mathbf{b}_{\beta}^{[1]},\]
\[\mathbf{x}_{u,v,\beta}^{[l]}:=\left[\sum_{\alpha=1}^{C_{l-1}}\left(\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}\sigma\left(\mathbf{x}_{u+p,v+q,\alpha}^{[l-1]}\right)\cdot\mathbf{W}_{p,q,\alpha,\beta}^{[l]}\cdot\chi(p,q)\right)\right]+\mathbf{b}_{\beta}^{[l]},\]
\[f(\mathbf{x},\mathbf{\theta}):=f_{\mathrm{CNN}}(\mathbf{x},\mathbf{\theta}):=\left\langle\mathbf{a},\sigma\left(\mathbf{x}^{[L]}\right)\right\rangle=\sum_{\beta=1}^{C_{L}}\sum_{u=1}^{W_{L}}\sum_{v=1}^{H_{L}}\mathbf{a}_{u,v,\beta}\cdot\sigma\left(\mathbf{x}_{u,v,\beta}^{[L]}\right),\]
where \(\sigma(\cdot)\) is the activation function applied coordinate-wise to its input, and for each layer \(l\in[L]\), all parameters belonging to this layer are initialized as follows: for \(p,q\in\{0,\ldots,m-1\}\), \(\alpha\in[C_{l-1}]\) and \(\beta\in[C_{l}]\),
\[\mathbf{W}_{p,q,\alpha,\beta}^{[l]}\sim\mathcal{N}(0,\beta_{1}^{2}),\quad\mathbf{b}_{ \beta}^{[l]}\sim\mathcal{N}(0,\beta_{1}^{2}). \tag{3}\]
Note that for a pair of \(\alpha\) and \(\beta\), \(\mathbf{W}_{\cdot,\cdot,\alpha,\beta}^{[l]}\) is called a kernel. Moreover, for \(u\in[W_{L}]\) and \(v\in[H_{L}]\),
\[\mathbf{a}_{u,v,\beta}\sim\mathcal{N}(0,\beta_{2}^{2}), \tag{4}\]
and for convenience in theory, we set \(\beta_{1}=\beta_{2}=\varepsilon,\) where \(\varepsilon>0\) is the scaling parameter.
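For concreteness, a small PyTorch sketch of this model for the two-layer case (\(L=1\)) is given below. This is our own illustrative code, not the authors' implementation: it uses a valid convolution (matching the filter operator \(\chi\)), \(\tanh\) as the example activation, and draws all parameters i.i.d. from \(\mathcal{N}(0,\varepsilon^{2})\) for a small scale \(\varepsilon\).

```python
# Sketch (ours) of the two-layer CNN with small initialization.
import torch

def make_two_layer_cnn(c0, w0, h0, m, M, eps):
    conv = torch.nn.Conv2d(c0, M, kernel_size=m)      # produces x^{[1]}
    a = torch.nn.Parameter(eps * torch.randn(M, (w0 - m + 1) * (h0 - m + 1)))
    with torch.no_grad():
        conv.weight.normal_(0.0, eps)                 # W ~ N(0, eps^2)
        conv.bias.normal_(0.0, eps)                   # b ~ N(0, eps^2)

    def f(x):                                         # x: (batch, c0, w0, h0)
        h = torch.tanh(conv(x)).flatten(2)            # sigma(x^{[1]}), coordinate-wise
        return (a * h).sum(dim=(1, 2))                # <a, sigma(x^{[1]})>
    return f, conv, a
```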
**Cosine similarity:** The cosine similarity between two vectors \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) is defined as
\[D(\mathbf{u}_{1},\mathbf{u}_{2})=\frac{\mathbf{u}_{1}^{\intercal}\mathbf{u}_{2}}{(\mathbf{u}_{1}^ {\intercal}\mathbf{u}_{1})^{1/2}(\mathbf{u}_{2}^{\intercal}\mathbf{u}_{2})^{1/2}}. \tag{5}\]
We remark that in order to compute the cosine similarity between two kernels, each kernel \(\mathbf{W}_{\cdot,\cdot,\alpha,\beta}^{[l]}\) shall be vectorized.
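The similarity matrices shown in the experiments of Section 4 can then be computed as follows; this is a minimal NumPy sketch of our own, where `W` collects the kernels of one layer indexed as \(\mathbf{W}_{p,q,\alpha,\beta}\).

```python
# Sketch (ours): pairwise cosine similarity, Eq. (5), between all vectorized
# kernels W[:, :, alpha, beta] of one layer.
import numpy as np

def kernel_cosine_matrix(W):
    m, _, c_in, c_out = W.shape                  # one m-by-m kernel per (alpha, beta)
    K = W.reshape(m * m, c_in * c_out).T         # rows = vectorized kernels
    K = K / np.linalg.norm(K, axis=1, keepdims=True)
    return K @ K.T                               # entries in [-1, 1]
```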
In the theoretical part, as we consider two-layer CNNs (\(L=1\)), the superscript \([l]\) can be omitted since there is only one layer of weights, i.e., \(\mathbf{W}_{\cdot,\cdot,\alpha,\beta}:=\mathbf{W}_{\cdot,\cdot,\alpha,\beta}^{[1]}\). We denote \(M:=C_{1}\), the number of channels in \(\mathbf{x}^{[1]}(i)\), which can be heuristically understood as the 'width' of the hidden layer in the case of two-layer neural networks (NNs).
In experiments, the parameters are trained by either Adam or gradient descent (GD), while in theory, only GD is used. In the following, we demonstrate that kernels in multi-layer CNNs exhibit clear condensation in experiments during the initial training stage, with small initialization. We then provide theoretical evidence for two-layer CNNs.
## 4 Condensation of the convolution kernels in experiments
In this section, we will show condensation of convolution kernels in image datasets using different activation functions.
### Experimental setup
For the CIFAR10 dataset: \(500\) samples are randomly selected from the CIFAR10 dataset for training. We use a CNN with the structure \(n\times 32C\)-\((1024)\)-\(d\). The output dimension \(d=10\) or \(1\) is used for classifying the pictures or for regression, respectively. The parameters of the convolution layer are initialized by \(\mathcal{N}(0,\sigma_{1}^{2})\), and the parameters of the linear layer by \(\mathcal{N}(0,\sigma_{2}^{2})\). \(\sigma_{1}\) is given by \((\frac{(c_{in}+c_{out})m^{2}}{2})^{-\gamma}\), where \(c_{in}\) and \(c_{out}\) are the numbers of in channels and out channels, respectively; \(\sigma_{2}\) is set empirically to \(0.0001\). The training method is GD or Adam with full batch. The training loss of each experiment is shown in Fig. 15 in the appendix.
### CIFAR10 examples
We first show that when initialized with small weights, the convolutional kernels of a \(\tanh(x)\) CNN undergo condensation during the training process. As shown in Fig. 2, we train a CNN with three
layers with the cross-entropy loss until the training accuracy reaches \(100\%\) (the accuracy during training is shown in Fig. 8 in the appendix). In each layer, we compute the cosine similarity between each pair of kernels. This reveals a clear condensation phenomenon after training.
Understanding the mechanism of the condensation phenomenon is challenging. To this end, we start by studying the initial training stage, i.e., the initial condensation in CNNs.
The initial stage (accuracy less than \(20\%\)) of Fig. 2 is shown in Fig. 3. For each layer, nearly all kernels are condensed into two opposite directions in Fig. 3. In layer one, there are actually two pairs of opposite directions. This subtle phenomenon can be explained by our theory.
We further examine different activation functions. For illustration, we consider two-layer CNNs with activations \(\mathrm{ReLU}(x)\), \(\mathrm{Sigmoid}(x)\), or \(\tanh(x)\). As shown in Fig. 4, we can still see a very clear condensation phenomenon. Note that, as the learning rate is kept rather small in order to resolve the detailed training process, the epochs selected for studying the initial stage may appear large.
In our theoretical study, we consider the MSE loss. Therefore, we also show an example with MSE loss (softmax is also attached to the output layer) for the activation \(\tanh(x)\) in Fig. 5. Similarly, condensation is clearly observed. An MSE example without softmax is shown in Fig. 9 in the appendix.
These examples demonstrate that condensation occurs when using the Adam method to train networks. We can also observe condensation with the gradient descent (GD) method. Fig. 6 illustrates that with GD, the kernel weights of a two-layer CNN still condense in two opposite directions during training. Moreover, we observe that the direction of the condensation is \(\boldsymbol{v}=\mathbb{1}\). Fig. 6(b) shows that the cosine similarity between each kernel weight and \(\mathbb{1}\) is almost equal to 1 or -1.
Similar results on the MNIST dataset are also shown in the appendix: Fig. 10 for CNNs with different activations and Fig. 11 for a multi-layer \(\mathrm{tanh}\) CNN. Also, two-layer CNNs with \(32\) and \(320\) kernels trained by GD are shown in Fig. 12 and Fig. 13, respectively, in the appendix.
Figure 3: Initial stage of Fig. 2, selected at epoch \(=300\) with accuracy less than \(20\%\).
Figure 2: Final condensation of a CNN with three convolution layers. The kernel size is \(m=5\). The colors in the figure show the cosine similarity between the weight vectors of each pair of convolution kernels. The activation for all convolution layers is \(\tanh(x)\). All snapshots are taken at the end of training. The convolution kernels are initialized with \(\gamma=2\). The learning rate is \(2\times 10^{-6}\). We use cross-entropy (with softmax) as the criterion. The optimizer is full-batch Adam on the CIFAR10 dataset.
Figure 4: Condensation of CNNs with different activations (indicated by sub-captions) for the convolutional layer. The network has 32 kernels in the convolution layer, followed by a ReLU fully-connected layer with 1024 neurons and a 10-dimensional output with softmax. The kernel size is \(m=5\). The learning rate for all three experiments is \(5\times 10^{-7}\); the snapshots are taken at epoch \(=1000\), epoch \(=5000\), and epoch \(=300\), respectively. The convolution layer is initialized with \(\gamma=2\). We use full-batch Adam with cross-entropy on the CIFAR10 dataset.
Figure 5: Condensation of a \(\tanh(x)\) CNN with three convolution layers trained with the MSE loss. The kernel size is \(m=5\). The color indicates the cosine similarity between kernels. The snapshots are taken at epoch \(=200\), epoch \(=300\), and epoch \(=300\). The convolution kernels are initialized with \(\gamma=1.2\). The learning rate is \(1\times 10^{-6}\). We use one-hot vectors as labels and apply softmax. The optimizer is full-batch Adam on the CIFAR10 dataset.
Figure 6: Condensation of a two-layer CNN trained by GD with the MSE loss on the CIFAR10 dataset with data size \(n=500\). (a) cosine similarity. (b) left ordinate (red): the amplitude of each kernel; right ordinate (blue): cosine similarity between each kernel weight and \(\mathbb{1}\). The activation function of the convolution part is \(\tanh(x)\). The kernel size is \(m=3\). The learning rate is \(5\times 10^{-6}\). The snapshot is taken at epoch \(=4500\). The convolution layer is initialized with \(\gamma=4\).
## 5 Theory
In the following section, we aim to provide theoretical evidence that kernels in the same layer tend to converge to one or a few directions when initialized with small values. In this part, we shall impose some technical conditions on the activation function and input samples. We start with a technical condition (Zhou et al., 2022, Definition 1) on the activation function \(\sigma(\cdot)\).
**Definition 1** (Multiplicity \(r\)).: \(\sigma(\cdot):\mathbb{R}\to\mathbb{R}\) _has multiplicity \(r\) if there exists an integer \(r\geq 1\), such that for all \(0\leq s\leq r-1\), the \(s\)-th order derivative satisfies \(\sigma^{(s)}(0)=0\), and \(\sigma^{(r)}(0)\neq 0\)._
We list out some examples of activation functions with different multiplicities.
**Remark 1**.:
* \(\tanh(\mathrm{x}):=\frac{\exp(\mathrm{x})-\exp(-\mathrm{x})}{\exp(\mathrm{x}) +\exp(-\mathrm{x})}\) _is with multiplicity_ \(r=1\)_;_
* \(\mathrm{SiLU(\mathrm{x})}:=\frac{\mathrm{x}}{1+\exp(-\mathrm{x})}\) _is with multiplicity_ \(r=1\)_;_
* \(\mathrm{xtanh(\mathrm{x})}:=\frac{\mathrm{x}\exp(\mathrm{x})-\mathrm{x}\exp (-\mathrm{x})}{\exp(\mathrm{x})+\exp(-\mathrm{x})}\) _is with multiplicity_ \(r=2\)_._
**Assumption 1** (Multiplicity \(1\)).: _The activation function \(\sigma\in\mathcal{C}^{2}(\mathbb{R})\), and there exists a universal constant \(C_{L}>0\), such that its first and second derivatives satisfy_
\[\left\|\sigma^{(1)}(\cdot)\right\|_{\infty}\leq C_{L},\quad\left\|\sigma^{(2) }(\cdot)\right\|_{\infty}\leq C_{L}. \tag{6}\]
_Moreover,_
\[\sigma(0)=0,\quad\sigma^{(1)}(0)=1. \tag{7}\]
**Remark 2**.: _We remark that \(\sigma\) has multiplicity \(1\). \(\sigma^{(1)}(0)=1\) can be replaced by \(\sigma^{(1)}(0)\neq 0\), and we set \(\sigma^{(1)}(0)=1\) for simplicity, which can be easily satisfied by replacing the original activation \(\sigma(\cdot)\) with \(\frac{\sigma(\cdot)}{\sigma^{(1)}(0)}\)._
**Assumption 2**.: _The training inputs \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) and labels \(\{y_{i}\}_{i=1}^{n}\) satisfy that there exists a universal constant \(c>0\), such that given any \(i\in[n]\), then for each \(u\in[W_{0}]\), \(v\in[H_{0}]\) and \(\alpha\in[C_{0}]\), the following holds_
\[\frac{1}{c}\leq\left|\mathbf{x}_{u,v,\alpha}(i)\right|,\quad|y_{i}|\leq c.\]
We assume further that
**Assumption 3**.: _The following limit exists_
\[\gamma:=\lim_{M\to\infty}-\frac{\log\varepsilon^{2}}{\log M}. \tag{8}\]
As \(M\to\infty\), we have \(\varepsilon\to 0\), and since the multiplicity is \(r=1\), we can use the first-order Taylor expansion to approximate the network output and obtain the linear dynamics
\[\frac{\mathrm{d}\mathbf{\theta}_{\beta}}{\mathrm{d}t}=\mathbf{A}\mathbf{\theta}_{\beta}, \tag{9}\]
where
\[\mathbf{\theta}_{\beta}:=\Big{(} \mathbf{W}_{0,0,1,\beta},\mathbf{W}_{0,1,1,\beta},\cdots,\mathbf{W}_{0,m-1,1, \beta};\mathbf{W}_{1,0,1,\beta},\cdots,\mathbf{W}_{1,m-1,1,\beta};\cdots\cdots\mathbf{W}_ {m-1,m-1,1,\beta};\] \[\mathbf{W}_{0,0,2,\beta},\mathbf{W}_{0,1,2,\beta},\cdots,\mathbf{W}_{0,m-1,2, \beta};\mathbf{W}_{1,0,2,\beta},\cdots,\mathbf{W}_{1,m-1,2,\beta};\cdots\cdots\mathbf{W}_ {m-1,m-1,2,\beta};\] \[\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\] \[\mathbf{W}_{0,0,C_{0},\beta},\mathbf{W}_{0,1,C_{0},\beta},\cdots,\mathbf{W}_ {0,m-1,C_{0},\beta};\cdots,\mathbf{W}_{1,m-1,C_{0},\beta};\cdots\cdots\mathbf{W}_{m-1, m-1,C_{0},\beta};\mathbf{b}_{\beta};\] \[\mathbf{a}_{1,1,\beta},\mathbf{a}_{1,2,\beta},\cdots,\mathbf{a}_{1,H_{1},\beta };\mathbf{a}_{2,1,\beta},\cdots,\mathbf{a}_{2,H_{1},\beta};\cdots\cdots\mathbf{a}_{W_{1},H_ {1},\beta})^{\mathsf{T}},\]
\[\mathbf{A}:=\left[\begin{array}{cc}\mathbf{0}_{(C_{0}m^{2}+1)\times(C_{0}m^{2}+1)}&\mathbf{Z }^{\intercal}\\ \mathbf{Z}&\mathbf{0}_{W_{1}H_{1}\times W_{1}H_{1}}\end{array}\right], \tag{10}\]
where \(\mathbf{Z}\) is described in detail in the appendix: see (70) for the multi-channel case and (35) for the single-channel case. We perform singular value decomposition (SVD) on \(\mathbf{Z}\), i.e.,
\[\mathbf{Z}=\mathbf{U}\Lambda\mathbf{V}^{\intercal}, \tag{11}\]
where
\[\mathbf{U}=\left[\mathbf{u}_{1},\mathbf{u}_{2},\cdots,\mathbf{u}_{W_{1}H_{1}}\right],\quad\mathbf{ V}=\left[\mathbf{v}_{1},\mathbf{v}_{2},\cdots,\mathbf{v}_{m^{2}+1}\right],\]
and as we denote \(r:=\mathrm{rank}(\mathbf{Z})\), naturally, \(r\leq\min\{W_{1}H_{1},m^{2}+1\}\), we have \(r\) singular values,
\[\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{r}>0,\]
and WLOG, we assume that
**Assumption 4** (Spectral Gap of \(\mathbf{Z}\)).: _The singular values \(\{\lambda_{k}\}_{k=1}^{r}\) of \(\mathbf{Z}\) satisfy that_
\[\lambda_{1}>\lambda_{2}\geq\cdots\geq\lambda_{r}>0, \tag{12}\]
_and we denote the spectral gap between \(\lambda_{1}\) and \(\lambda_{2}\) by_
\[\Delta\lambda:=\lambda_{1}-\lambda_{2}.\]
We denote that
\[\mathbf{\theta}_{\mathbf{W},\beta} :=\left(\mathbf{W}_{0,0,\beta},\mathbf{W}_{0,1,\beta},\cdots,\mathbf{W}_{0,m- 1,\beta};\mathbf{W}_{1,0,\beta},\cdots,\mathbf{W}_{1,m-1,\beta};\cdots\cdots\mathbf{W}_{m -1,m-1,\beta};\mathbf{b}_{\beta}\right)^{\intercal},\] \[\mathbf{\theta}_{\mathbf{a},\beta} :=\left(\mathbf{a}_{1,1,\beta},\mathbf{a}_{1,2,\beta},\cdots,\mathbf{a}_{1,H_ {1},\beta};\mathbf{a}_{2,1,\beta},\cdots,\mathbf{a}_{2,H_{1},\beta};\cdots\cdots\mathbf{a }_{W_{1},H_{1},\beta}\right)^{\intercal},\]
hence
\[\mathbf{\theta}_{\beta}=\left(\mathbf{\theta}_{\mathbf{W},\beta}^{\intercal},\mathbf{\theta} _{\mathbf{a},\beta}^{\intercal}\right)^{\intercal}.\]
In order to study the phenomenon of condensation, we concatenate the vectors \(\left\{\mathbf{\theta}_{\mathbf{W},\beta}\right\}_{\beta=1}^{M}\) into
\[\mathbf{\theta}_{\mathbf{W}}:=\mathrm{vec}\left(\left\{\mathbf{\theta}_{\mathbf{W},\beta} \right\}_{\beta=1}^{M}\right),\]
and we denote further that
\[\mathbf{\theta}_{\mathbf{W},\mathbf{v}_{1}}:=\mathcal{P}_{1}\mathbf{\theta}_{\mathbf{W}}:=\left( \left\langle\mathbf{\theta}_{\mathbf{W},1},\mathbf{v}_{1}\right\rangle,\left\langle\mathbf{ \theta}_{\mathbf{W},2},\mathbf{v}_{1}\right\rangle,\cdots\left\langle\mathbf{\theta}_{ \mathbf{W},M},\mathbf{v}_{1}\right\rangle\right)^{\intercal},\]
where \(\mathbf{v}_{1}\) is the eigenvector of the largest eigenvalue of \(\mathbf{Z}^{\intercal}\mathbf{Z}\), or equivalently the first column vector of \(\mathbf{V}\) in (36). In the appendix, we prove that for any \(\eta_{0}>\frac{\gamma-1}{100}>0\), there exists
\[T_{\mathrm{eff}}>\frac{1}{\lambda_{1}}\left[\log\left(\frac{1}{4}\right)+ \left(\frac{\gamma-1}{4}-\eta_{0}\right)\log(M)\right], \tag{13}\]
leading to the following theorem.
**Theorem 1**.: _Given any \(\delta\in(0,1)\), under Assumption 1, Assumption 2, Assumption 3 and Assumption 4, if \(\gamma>1\), then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}^{0}\), we have_
\[\lim_{M\rightarrow+\infty}\sup_{t\in[0,T_{\mathrm{eff}}]}\frac{\left\|\mathbf{ \theta}_{\mathbf{W}}(t)-\mathbf{\theta}_{\mathbf{W}}(0)\right\|_{2}}{\left\|\mathbf{\theta}_{ \mathbf{W}}(0)\right\|_{2}}=+\infty, \tag{14}\]
_and_
\[\lim_{M\rightarrow+\infty}\sup_{t\in[0,T_{\mathrm{eff}}]}\frac{\left\|\mathbf{ \theta}_{\mathbf{W},\mathbf{v}_{1}}(t)\right\|_{2}}{\left\|\mathbf{\theta}_{\mathbf{W}}(t) \right\|_{2}}=1. \tag{15}\]
The theorem shows that, as the number of channels increases, two implications arise within a finite training time in the small initialization scheme. Firstly, the relative change of the kernel weights becomes considerably large, demonstrating significant non-linear behavior. Secondly, the kernel weights concentrate in one direction.
The mechanism of the concentration effect, or the phenomenon of initial condensation, arises from the spectral gap in Assumption 4. The original definition of spectral gap is a fundamental concept
in the analysis of Markov chains, where it refers to the minimum difference between the two largest eigenvalues of the transition matrix of a Markov chain, and is closely related to the convergence rate of the chain to its stationary distribution (Komorowski et al., 2012). A large spectral gap indicates that the Markov chain converges quickly to its stationary distribution, while a small spectral gap suggests a slower convergence rate. Similarly, one may observe from relation (64) that the larger the spectral gap is, the faster the kernel weight concentrates onto the direction of \(\mathbf{v}_{1}\). Fig. 7 shows an example of the eigenvalues of \(\mathbf{Z}^{\intercal}\mathbf{Z}\), in which a huge spectral gap can be observed. We then study the eigenvector of the maximum eigenvalue. In this example, since the input has 3 channels, we decompose the eigenvector into three corresponding parts and find that the inner product of each part with \(\mathbb{1}\) is close to 1, i.e., \(0.9891\pm 0.0349\), \(0.9982\pm 0.0009\), and \(0.9992\pm 0.0003\), respectively, computed from 50 independent trials, where in each trial we randomly selected 500 images. The spectral gap for MNIST is displayed in Fig. 14, and the inner product between the eigenvector of the maximum eigenvalue and \(\mathbb{1}\) is shown in the appendix. Also note that the second eigendirection can sometimes be observed, as in Fig. 3(a).
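These quantities can be computed directly once \(\mathbf{Z}\) is assembled from the data (its entries are constructed as in the appendix); the following short NumPy sketch is our own illustration.

```python
# Sketch (ours): spectral gap lambda_1 - lambda_2 of Z and alignment of v_1
# with the normalized all-ones direction.
import numpy as np

def spectral_gap_and_alignment(Z):
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    v1 = Vt[0]                                   # top right-singular vector
    ones = np.ones_like(v1) / np.sqrt(v1.size)
    return s[0] - s[1], abs(v1 @ ones)
```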
## 6 Conclusion
In this work, our experiments, supported by theoretical analysis, demonstrate the phenomenon of condensation in CNNs during training with small initialization. Specifically, we observe that the kernels in the same layer evolve towards one or a few directions. This initial condensation of kernels resets the neural network to a simpler state with few distinct kernels, which plays a crucial role in the subsequent learning process of the neural network.
## 7 Acknowledgements
This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200, the National Natural Science Foundation of China Grant No. 62002221, Shanghai Municipal of Science and Technology Major Project No. 2021SHZDZX0102, and the HPC of School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.
|
2305.13248 | Bayesian Numerical Integration with Neural Networks | Bayesian probabilistic numerical methods for numerical integration offer
significant advantages over their non-Bayesian counterparts: they can encode
prior information about the integrand, and can quantify uncertainty over
estimates of an integral. However, the most popular algorithm in this class,
Bayesian quadrature, is based on Gaussian process models and is therefore
associated with a high computational cost. To improve scalability, we propose
an alternative approach based on Bayesian neural networks which we call
Bayesian Stein networks. The key ingredients are a neural network architecture
based on Stein operators, and an approximation of the Bayesian posterior based
on the Laplace approximation. We show that this leads to orders of magnitude
speed-ups on the popular Genz functions benchmark, and on challenging problems
arising in the Bayesian analysis of dynamical systems, and the prediction of
energy production for a large-scale wind farm. | Katharina Ott, Michael Tiemann, Philipp Hennig, François-Xavier Briol | 2023-05-22T17:19:09Z | http://arxiv.org/abs/2305.13248v2 | # Bayesian Numerical Integration with Neural Networks
###### Abstract
Bayesian probabilistic numerical methods for numerical integration offer significant advantages over their non-Bayesian counterparts: they can encode prior information about the integrand, and can quantify uncertainty over estimates of an integral. However, the most popular algorithm in this class, Bayesian quadrature, is based on Gaussian process models and is therefore associated with a high computational cost. To improve scalability, we propose an alternative approach based on Bayesian neural networks which we call _Bayesian Stein networks_. The key ingredients are a neural network architecture based on Stein operators, and an approximation of the Bayesian posterior based on the Laplace approximation. We show that this leads to orders of magnitude speed-ups on the popular Genz functions benchmark, and on challenging problems arising in the Bayesian analysis of dynamical systems, and the prediction of energy production for a large-scale wind farm.
## 1 Introduction
Integration is a core task in probabilistic machine learning. It is required to perform operations such as marginalizing out random variables, or computing normalization constants, predictive distributions, and posterior expectations. In this paper, we consider the computation of the integral of some function \(f:\mathcal{X}\to\mathbb{R}\), where \(\mathcal{X}\subseteq\mathbb{R}^{d}\), against some distribution \(\Pi\) with (Lebesgue) density \(\pi:\mathcal{X}\to\mathbb{R}\):
\[\Pi[f]=\int_{\mathcal{X}}f(x)\pi(x)dx, \tag{1}\]
where we assume we have access to evaluations \(\{f(x_{i})\}_{i=1}^{n}\) at a set of points \(\{x_{i}\}_{i=1}^{n}\subseteq\mathcal{X}\). The problem is particularly challenging if \(f\) and \(\pi\) are multi-modal and/or are very input-sensitive in different regions of the support. A plethora of methods exist for tackling this task; the most common are Monte Carlo (MC) methods, which are sampling-based methods that have been studied extensively in theory and practice (Robert and Casella, 2000; Owen, 2013). This subsumes naive Monte Carlo, Markov chain Monte Carlo (MCMC) and quasi-Monte Carlo (QMC). Sampling is (at least asymptotically, for MCMC) unbiased and thus a gold standard, but precisely for this reason, it can only converge with stochastic rate, and thus requires a large number of samples \(n\), both for accuracy and uncertainty quantification.
This is a challenge if evaluations of \(f\) or samples from \(\pi\) are expensive. The former ("expensive \(f\)") emerges regularly in climate simulations or other large physical models. Section 5 provides an example with a wind farm model - a field where state-of-the-art models require hundreds of hours of CPU for a single evaluation (Kirby et al., 2022, 2023). The latter ("expensive sampling") occurs when \(\pi\) is a posterior distribution for a complex model conditioned on a large amount of data. Section 5 illustrates this through an example of Bayesian inference in dynamical system.
In such scenarios, probabilistic numerical methods (PNMs) (Hennig et al., 2015; Cockayne et al., 2019; Oates and Sullivan, 2019; Wenger et al., 2021; Hennig et al., 2022), and in particular Bayesian approaches, perform particularly well. For numerical integration, the principle behind Bayesian PNMs is to encode prior information about the integrand \(f\), then condition on evaluations of \(f\) to obtain a posterior distribution over \(\Pi[f]\). These methods are well suited for computationally expensive problems since informative priors can be used to encode properties of the problem and to reduce the number of evaluations needed. In addition, the posterior quantifies uncertainty for any finite value of \(n\).
The most popular Bayesian PNM for integration is Bayesian Quadrature (BQ) (O'Hagan, 1991; Diaconis, 1988; Rasmussen and Ghahramani, 2002; Briol et al., 2019), a method that places a Gaussian Process (GP) (Rasmussen and Williams, 2006) prior on \(f\). With this convenient choice of prior, the posterior on \(\Pi[f]\) is a univariate Gaussian, whose
mean and variance can be computed in closed form for certain combinations of prior covariance and distribution. However, for high-dimensional problems where large amounts of data are necessary, the computational cost of GPs, cubic in \(n\), can render BQ too computationally expensive. Fast BQ methods have been proposed to resolve this issue (Karvonen and Sarkkua, 2018; Jagadeeswaran and Hickernell, 2019), but these usually work for a limited range of \(\pi\) or \(\{x_{i}\}_{i=1}^{n}\), and therefore do not provide a widely applicable solution.
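For reference, the standard BQ closed form (background material, not a contribution of this paper) is easily sketched: for a zero-mean GP prior with kernel \(k\), the posterior mean and variance of \(\Pi[f]\) are \(z^{\top}K^{-1}y\) and \(\Pi\Pi[k]-z^{\top}K^{-1}z\), where \(K_{ij}=k(x_{i},x_{j})\) and \(z_{i}=\Pi[k(\cdot,x_{i})]\) is the kernel mean, assumed available in closed form. The cubic cost is visible in the linear solves below; the sketch is ours.

```python
# Standard BQ posterior on Pi[f] (ours; a jitter keeps K numerically invertible).
import numpy as np

def bq_posterior(K, z, y, pi_pi_k, jitter=1e-10):
    A = K + jitter * np.eye(len(y))
    mean = z @ np.linalg.solve(A, y)            # posterior mean of Pi[f]
    var = pi_pi_k - z @ np.linalg.solve(A, z)   # posterior variance of Pi[f]
    return mean, var
```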
This raises the question of whether an alternative probabilistic model could be used in place of a GP within probabilistic integration. Bayesian neural networks (BNNs) are an obvious candidate, as they are known to work well in high dimensions and with large \(n\). Unfortunately, their application to integration tasks is not straightforward since, in contrast to the GP case, analytical integration of the posterior mean of a BNN is usually intractable. This is a significant challenge which has so far prevented their use for probabilistic numerics. We resolve this challenge by proposing the concept of _Bayesian Stein (neural) networks_ (BSNs), a novel BNN architecture based on a final layer constructed through a Stein operator (Anastasiou et al., 2021). Such choice of architecture is designed specifically so that the resulting BNN is analytically integrable (see Section 3), and hence at our disposal for numerical integration.
Given these approaches (MC, BQ, BSNs), a natural question remains: "How should we select a method for a given integration task?". We provide an empirical answer to this question in Section 5, where we consider a popular benchmark dataset, compute posterior expectations arising in the Bayesian treatment of dynamical systems, and estimate the expected power output of a wind farm.
Our conclusions are summarized in Figure 1 and presented below. If sampling \(\pi\) and evaluating \(f\) is computationally cheap, so one can obtain a very large number of data points relative to the complexity of the problem, then MC methods are likely the best choice. But if \(n\) is very limited due to our computational budget, then BQ is likely a better option. BSNs excel in the intermediate region where \(n\) is such that BQ becomes prohibitively expensive but MC is not accurate enough. The architecture of neural networks, plus sophisticated deep learning software libraries, make training of (small) neural networks memory efficient and fast. However, achieving good accuracy at low training cost requires special care during training for the Stein architecture. Finding a good training setup is a main contribution of this work, outlined in Section 4.
For all integration methods, estimates from scarce data are imperfect, so uncertainty estimates are crucial. Bayesian deep learning provides this functionality. Full Bayesian inference is costly even for small neural networks, but we show that a lightweight Laplace approximation (MacKay, 1992; Ritter et al., 2018) can provide good approximate uncertainty for the Stein network.
## 2 Related Work
BQ is the method most closely related to our proposed approach; the approach is fully detailed in Appendix B. Bayesian PN methods based on alternative priors have also been proposed. These include Bayesian additive regression tree priors (Zhu et al., 2020), multi-output Gaussian process priors (Xi et al., 2018; Gessner et al., 2019), and Dirichlet process priors (Oates et al., 2017). These priors each provide different advantages, such as the ability to model highly discontinuous functions, vector-valued functions, or probability distributions, respectively. Unfortunately, none of these approaches significantly improves scalability, the main goal of our paper.
The use of (non-Bayesian) neural networks for integration was previously proposed by Lloyd et al. (2020). However, their method is only applicable for uniform \(\pi\) and shallow networks. Wan et al. (2020), Si et al. (2021) propose to use a Langevin Stein operator applied to a neural network to find good control variates for variance reduction in Monte Carlo approximations (based on an earlier construction by Oates et al. (2017)). In contrast to their work, we use the neural network to directly compute \(\Pi[f]\), and our neural network follows Bayesian semantics and can be used to quantify uncertainty. This requires a different network architecture and an efficient posterior inference algorithm.
## 3 Bayesian Stein Networks
We now describe BSNs. This requires introducing Stein operators, BNNs, and Laplace approximations.
**Stein Neural Networks.** Stein operators are a technical construction originating in probability theory, but they have recently been used as a computational tool (Anastasiou et al., 2021). Building on this line of work, we will use Stein operators to construct the final layer of our BNNs. The reason for this is simple: given some function \(u\) (with possibly unknown mean) and a distribution \(\pi\), a Stein operator can map \(u\) to a mean-zero function under \(\pi\). This final layer therefore allows us to construct flexible BNNs with the powerful property that _any draw from the posterior will have a known mean under \(\pi\)_. We now highlight this procedure in detail.
Figure 1: Integration methods can be compared at a high level in terms of their computational cost and ability to include prior information. In both respects, BSNs provide a compromise in-between MC and BQ.
We call \(\mathcal{S}\) a _Stein Operator_ if for any suitably regular continuously differentiable \(u:\mathbb{R}^{d}\to\mathbb{R}^{d}\), the following holds
\[\Pi\left[\mathcal{S}[u]\right]=0. \tag{2}\]
Suppose \(\mathcal{X}=\mathbb{R}^{d}\), \(\pi\) is continuously differentiable on \(\mathcal{X}\), such that \(\nabla_{x}\log\pi\) is well-defined (\(\left[\nabla_{x}\log\pi(x)\right]_{i}=\partial\log\pi(x)/\partial x_{i}\) for all \(i\in\{1,\ldots,d\}\)). One example of an operator fulfilling (2) is the diffusion Stein operator (Gorham et al., 2019; Barp et al., 2019):
\[\begin{split}\mathcal{S}_{m}[u](x):=&\left(m(x)^{ \top}\nabla_{x}\log\pi(x)\right)^{\top}u(x)\\ &+\nabla_{x}\cdot\left(m(x)u(x)\right),\end{split} \tag{3}\]
where \(\nabla_{x}\cdot u(x)=\sum_{i=1}^{d}\partial u_{i}(x)/\partial x_{i}\), and \(m:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\) is an invertible matrix-valued function. This operator only requires access to \(\nabla_{x}\log\pi(x_{i})\), and can thus be used even if the normalization constant of \(\pi\) is unknown. This is an advantage if \(\pi\) is itself a posterior distribution. In such settings, samples from \(\pi\) can be obtained via MCMC, but the distribution \(\pi\) itself cannot be evaluated directly.
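As a quick illustration, property (2) can be checked numerically for \(m(x)=I_{d}\) with a standard Gaussian \(\pi\), for which \(\nabla_{x}\log\pi(x)=-x\). The following sketch is our own and uses a simple smooth test function \(u\).

```python
# Numerical check (ours) of the Stein identity (2) for m(x) = I_d and
# pi = N(0, I_d), where the score is simply -x.
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 200_000
x = rng.standard_normal((n, d))
u = np.sin(x)                           # smooth test function u: R^d -> R^d
score = -x                              # grad log pi(x) for the standard Gaussian
div_u = np.cos(x).sum(axis=1)           # divergence of u, computed analytically
print(((score * u).sum(axis=1) + div_u).mean())   # approx 0, up to MC error
```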
To construct BSNs, we use an architecture based on a continuously differentiable deep neural network \(u_{\theta_{u}}:\mathcal{X}\to\mathbb{R}^{d}\), where \(\theta_{u}\in\Theta_{u}\subseteq\mathbb{R}^{p}\), combined with a final layer taking the form of a Stein operator (that we call a _Stein layer_). More precisely, we consider an architecture \(g_{\theta}:\mathcal{X}\to\mathbb{R}\):
\[g_{\theta}(x):=\mathcal{S}_{m}\left[u_{\theta_{u}}\right](x)+\theta_{0}. \tag{4}\]
We call this neural network a _Stein neural network (Stein-NN)_ following (Wan et al., 2020; Si et al., 2021; Sun et al., 2023), but note that we use the more general diffusion Stein operators \(\mathcal{S}_{m}\)(Gorham et al., 2019; Barp et al., 2019). Previous cases can be recovered with \(m(x)=I_{d}\), where \(I_{d}\) is a \(d\)-dimensional identity matrix, however we will demonstrate in Section 5 that alternative choices for \(m\) can significantly improve the performance of our method.
The parameter \(\theta=\{\theta_{0},\theta_{u}\}\in\Theta\subseteq\mathbb{R}^{p+1}\) denotes the weights of the neural network \(g_{\theta}\). Thanks to our choice of architecture, (2) holds and we have:
\[\Pi\left[g_{\theta}\right]=\theta_{0}. \tag{5}\]
The last layer of \(g_{\theta}\) directly tracks the integral of the network, which is the key property for our purpose: by training such a network \(g_{\theta}\) on data from \(f\) so that \(g_{\theta}\approx f\), we are simultaneously constructing a good approximation of the integral \(\Pi[g_{\theta}]\approx\Pi[f]\) (see Figure 2 for a summary).
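The following is a minimal PyTorch sketch (our own, with illustrative names; not the authors' released code) of the forward pass of such a network for \(m(x)=I_{d}\). The score \(\nabla_{x}\log\pi\) is supplied by the user, and the divergence term is obtained by automatic differentiation.

```python
# Sketch (ours) of a Stein network with m(x) = I_d; theta0 tracks Pi[g_theta]
# exactly, as in Eq. (5).
import torch

class SteinNet(torch.nn.Module):
    def __init__(self, d, width=64):
        super().__init__()
        # u: R^d -> R^d; tanh keeps the network continuously differentiable
        self.u = torch.nn.Sequential(
            torch.nn.Linear(d, width), torch.nn.Tanh(),
            torch.nn.Linear(width, d),
        )
        self.theta0 = torch.nn.Parameter(torch.zeros(()))

    def forward(self, x, score):
        # x: (n, d) inputs; score: (n, d) rows of grad_x log pi(x_i)
        x = x.clone().requires_grad_(True)
        u = self.u(x)
        # divergence of u via one autograd pass per output coordinate;
        # create_graph=True keeps the result differentiable for training
        div = sum(
            torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0][:, i]
            for i in range(x.shape[1])
        )
        return (score * u).sum(dim=1) + div + self.theta0
```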
**Uncertainty Estimates for Stein Neural Networks.** In the context of Bayesian PNM, proposing a BNN architecture is not enough: we are also interested in _tractable uncertainty estimates over \(\Pi[f]\)_. We show how to obtain these through the Laplace approximation and a suitable choice of prior; further details are available in Appendix C.
The specific architecture of the BSN model means that all the uncertainty on \(\Pi[f]\) is represented by the Bayesian posterior on \(\theta_{0}\). This can be obtained through a standard application of Bayes' theorem, \(p(\theta|\mathcal{D})\propto p(\mathcal{D}|\theta)p(\theta)\), where in our case the dataset is \(\mathcal{D}=\{x_{i},f(x_{i}),\nabla_{x}\log\pi(x_{i})\}_{i=1}^{n}\), and \(p(\theta)\) denotes our prior, \(p(\theta|\mathcal{D})\) the posterior, and \(p(\mathcal{D}|\theta)\) the likelihood. The posterior on \(\theta_{0}\) is then the marginal of \(p(\theta|\mathcal{D})\). Bayesian inference for deep networks provides uncertainty estimates (Neal, 1996; Mackay, 1995) through \(p(\theta|\mathcal{D})\), but this posterior is intractable in general. MCMC is a prominent tool for approximating \(p(\theta|\mathcal{D})\), but using it within an integration method would be circular and would reintroduce the spectre of high computational cost (Izmailov et al., 2021). Other popular approximate inference schemes include variational inference (Graves, 2011; Blundell et al., 2015; Hinton and van Camp, 1993) and ensemble methods (Lakshminarayanan et al., 2017). Although cheaper, the cost associated with these can still be significant.
We instead opt for the arguably most lightweight approach available for BNNs: the Laplace approximation (MacKay, 1992; Ritter et al., 2018). It is a simple and computationally cheap method, yet it provides competitive uncertainty estimates (Daxberger et al., 2021). The Laplace approximation constructs a second-order Taylor approximation around the mode of the posterior, which amounts to a Gaussian approximation of the posterior around the MAP (maximum a posteriori) estimate. This can be criticized from a Bayesian standpoint as the MAP estimate and the posterior mean of the weights do not necessarily coincide. However, the MAP
Figure 2: _Visualization of BSNs._ The BSN prior is conditioned on \(\{x_{i},f(x_{i}),\nabla\log\pi(x_{i})\}_{i=1}^{n}\) to obtain a Bayesian posterior on \(\theta_{0}\). This posterior quantifies our uncertainty about \(\Pi[f]\). For computational reasons, this posterior is approximated with the Laplace approximation around the MAP estimate \(\theta_{0,\text{MAP}}\).
estimate is the quantity that is usually tuned in deep learning and is also cheap as it only has to be computed once. To be more precise, our approximation of the posterior is implemented in two steps: a Laplace approximation, and an approximation of the corresponding Hessian.
For the first step, we train the network \(g_{\theta}\) by minimizing the mean squared error loss with weight decay regularizer, given for \(\lambda>0\) by:
\[\begin{split} l_{\text{tot}}(\theta)&=l(\theta)+ \lambda\|\theta\|_{2}^{2}\\ \text{where }l(\theta)&=\frac{1}{n}\sum_{i=1}^{n}\|f(x_{ i})-g_{\theta}(x_{i})\|_{2}^{2}\end{split} \tag{6}\]
We notice that \(l\propto-\log p(\mathcal{D}|\theta)\) and \(\lambda\|\theta\|_{2}^{2}\propto-\log p(\theta)\) whenever we take the prior to be \(p(\theta)=\mathcal{N}(\theta\mid 0,\sigma_{0}^{2}I_{p+1})\) (\(\sigma_{0}\) is related to \(\lambda\) through a known constant; see Appendix C). As a result, the minimum of the loss above is indeed a MAP estimate: \(\theta_{\text{MAP}}=\text{argmin}_{\theta}l_{\text{tot}}(\theta)\).
Of course, _any_ Bayesian treatment of neural networks requires a prior \(p(\theta)\). The choice is important since the prior encodes the model class, but there is currently no widely accepted choice. Our choice above was motivated by the fact that, for the Laplace approximation, only isotropic Gaussian priors are currently feasible [12, 13, 14]. Fortuin et al. (2022) suggest that such priors are undesirable, but Wilson and Izmailov (2020) argue to the contrary: despite their simplicity, such priors still induce sufficiently complex distributions over functions. Note that it is often beneficial to tune \(\sigma_{0}\) for inference [12].
Once the MAP has been identified, we can construct our Laplace approximation using a Taylor approximation (up to second order) of the log-posterior \(\log p\left(\theta\mid\mathcal{D}\right)\) around that point. This results in a Gaussian approximation of the posterior distribution: \(q_{\text{Laplace}}(\theta)=\mathcal{N}\left(\theta\mid\theta_{\text{MAP}},\Sigma\right)\), where \(\Sigma\) is proportional to the inverse Hessian of the loss \(l_{\text{tot}}\):
\[\begin{split}\Sigma^{-1}&=-\nabla^{2}\log p( \mathcal{D}|\theta)-\nabla^{2}\log p(\theta)\\ &=H+\sigma_{0}^{-2}I_{p+1},\quad\text{where }H\propto\nabla_{\theta}^{2}l (\theta_{\text{MAP}})\end{split}\]
Our second step consists of an approximation of the Hessian. This is necessary since it is often infeasible to calculate \(H\) due to the large computational cost when \(p\) is large. As a result, we use a positive definite approximation called the Generalized-Gauss-Newton (GGN; [10]) approximation:
\[H_{\text{GGN}}=\frac{1}{\sigma^{2}}\sum_{i=1}^{n}J(x_{i})J(x_{i})^{\top},\]
where \(J(x_{i})=\nabla_{\theta}g_{\theta}(x_{i})|_{\theta=\theta_{\text{MAP}}}\) and \(\sigma\) is the dataset noise. This gives us another approximation of the posterior, which we denote \(q_{\text{GGN-Laplace}}(\theta)\), obtained through \(\Sigma_{\text{GGN}}^{-1}=H_{\text{GGN}}+\sigma_{0}^{-2}I_{p+1}\). Hence, we can extract an approximation of the posterior on the network's prediction of the integral \(\Pi[f]\) using Eq. (5):
\[q_{\text{GGN-Laplace}}(\theta_{0})=\mathcal{N}\left(\theta_{0}|\theta_{0, \text{MAP}},\left(\Sigma_{\text{GGN}}\right)_{0,0}\right).\]
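For concreteness, the two approximation steps can be sketched as follows, assuming the scalar \(\theta_{0}\) is stored as the first entry of the flattened parameter vector, and with `g` and `params` as illustrative placeholders for the trained network (at \(\theta_{\text{MAP}}\)) and its parameters.

```python
import torch

# Sketch of the GGN-Laplace marginal variance on θ_0 (assumes params[0] is θ_0).
# sigma is the dataset noise, sigma0 the prior scale; xs is the list of inputs x_i.
def var_theta0(g, params, xs, sigma, sigma0):
    rows = []
    for x in xs:
        grads = torch.autograd.grad(g(x), params)       # J(x_i) = ∇_θ g_θ(x_i)
        rows.append(torch.cat([gr.reshape(-1) for gr in grads]))
    J = torch.stack(rows)                               # shape (n, p+1)
    Sigma_inv = J.T @ J / sigma**2 + torch.eye(J.shape[1]) / sigma0**2
    return torch.linalg.inv(Sigma_inv)[0, 0]            # marginal Var[θ_0 | D]
```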
## 4 Architecture
Due to their specific architecture, naive attempts to train BSNs can lead to unsatisfactory results. Below, as a key contribution, we provide architectural considerations that we have found to significantly improve the conditioning of the loss and lead to better training.
**Choice of Activation Function.** We require \(u_{\theta_{u}}\) to be continuously differentiable on \(\mathcal{X}\), which imposes restrictions on the activation functions of the BSN. A sufficient condition is for these activation functions to be themselves continuously differentiable. This excludes the popular ReLU activation function, but includes the CELU ('Continuously Differentiable Exponential Linear Units' [1]; \(\text{CELU}(x)=\max(0,x)+\min(0,\exp(x)-1)\)), its continuous extension. It also includes the tanh (\(\tanh(x)=(\exp(x)-\exp(-x))/(\exp(x)+\exp(-x))\)), Gaussian (\(\text{Gauss}(x)=\exp(-x^{2})\)), sigmoid (\(\text{sigmoid}(x)=1/(1+\exp(-x))\)), and TanhShrink (\(\text{TanhShrink}(x)=x-\tanh(x)\)) activations. We compared activation functions (see Figure 4 below) and found the CELU to give marginally superior performance on test problems. Based on its good performance, we use CELU activations for all experiments.
**Choice of Optimization Procedure.** Optimization for BSNs is challenging due to the unique network architecture. For one, the Stein layer contains gradient terms, which are harder to train through than standard activation functions: \(\nabla_{x}\log\pi\) can be arbitrarily complicated depending on \(\pi\). We find that training \(g_{\theta}\) with Adam (Kingma and Ba, 2015) is considerably slower than training \(u_{\theta_{u}}\) alone (see Appendix A.1.1). We suspect that this is because the loss landscape of the BSN is more narrow (i.e., has a larger spread in its curvature eigenvalue spectrum) than that of \(u_{\theta_{u}}\). A second order method should alleviate this issue. Hence, we train the BSN with L-BFGS (an approximate second order method) and the _Hessian-free_ optimizer (Martens, 2010), a conjugate gradient based second order method. Indeed, (approximate) second order optimization reaches much better performance (for an extended discussion see Appendix A.1.1).
We therefore used L-BFGS throughout all subsequent experiments. Such quasi-Newton methods have fallen out of fashion in deep learning because they are not robust to gradient noise. In our experiments, we train on the full dataset, so noise is not an issue. We achieve better (i.e., lower loss) and faster convergence (both in iterations and in compute time) with this method compared to gradient descent and its variants. Note that this approach is only feasible for relatively small (in number of weights \(p\)) network architectures, as it requires storing the gradient history for the approximate Hessian in memory. When training on the entire dataset (i.e., no mini-batching), we observe significant speed-ups from using GPUs when \(n\) is large (\(\approx 10^{4}\)).
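A minimal full-batch training loop with `torch.optim.LBFGS` and the loss of Eq. (6) could look as follows; `model`, `X`, `f_vals`, and `lam` are assumed, illustrative names rather than our exact implementation.

```python
import torch

# Full-batch L-BFGS sketch: minimize l_tot(θ) = MSE + λ‖θ‖² from Eq. (6).
opt = torch.optim.LBFGS(model.parameters(), max_iter=200,
                        line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    mse = ((f_vals - model(X)) ** 2).mean()                      # l(θ)
    reg = lam * sum((p ** 2).sum() for p in model.parameters())  # λ‖θ‖²
    loss = mse + reg
    loss.backward()
    return loss

opt.step(closure)  # no mini-batching: the full dataset enters every closure call
```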
**Choice of \(m(x)\).** For most of the experiments we set \(m(x)=I_{d}\), but in general other choices for \(m\) are possible. We test a set of different choices (\(m(x)=I_{d}/(\|x\|_{2}^{2}+1)\), \(m(x)=I_{d}/\sqrt{\|x\|_{2}^{2}+1}\), \(m(x)=I_{d}\pi(x)\), \(m(x)=\operatorname{diag}(x)\)), but find that none of these performs significantly better than \(m(x)=I_{d}\) (see Appendix A.1.4 for more details).
**Choice of Point Set.** BSNs can be implemented regardless of the choice of \(\{x_{i}\}_{i=1}^{n}\), but we expect better performance when \(\{x_{i}\}_{i=1}^{n}\) covers regions of high probability under \(\pi\). A simple solution is to use independent samples from \(\pi\); this will be our default choice. When independent sampling is not possible, we can use MCMC instead, so long as \(\pi\) can be evaluated up to some normalization constant. Alternatives also include grids of points or QMC point sets (see Appendix A.1.2 for a comparison of different point sets), but these are usually only helpful when \(\mathcal{X}\) is a hypercube and \(\pi\) is uniform. Alternatively, one could also use active learning (see Gunter et al. (2014); Briol et al. (2015) for corresponding approaches for BQ) based on the Laplace approximation of the uncertainty, but this may not perform well for large \(d\), and we did not explore the idea further.
**Stein Architecture for Bounded Domains.** The architecture outlined in Section 3 is only valid on the open integration domain \(\mathcal{X}=\mathbb{R}^{d}\). For bounded \(\mathcal{X}\subset\mathbb{R}^{d}\), it is incorrect because \(\Pi[\mathcal{S}_{m}[u]]=0\) is not necessarily true. This can be guaranteed by adding a layer before the Stein layer. For example, let \(\tilde{u}_{\theta_{u}}(x)=u_{\theta_{u}}(x)\delta(x)\), where \(\delta(x)\) is a smooth function (so that \(\tilde{u}_{\theta_{u}}\) is continuously differentiable) going to zero on the boundary of \(\mathcal{X}\). Then, \(\pi(\cdot)\tilde{u}_{\theta_{u}}(\cdot)\) is zero on the boundary of \(\mathcal{X}\), and as a result \(\Pi[\mathcal{S}_{m}[\tilde{u}_{\theta_{u}}]]=0\). When \(\mathcal{X}=(a,b)\subset\mathbb{R}\), one such function is given by \(\delta(x)=(x-a)(b-x)\), and we will use this example where necessary in our experiments. Beyond bounded \(\mathcal{X}\), the architecture can also be adapted to manifold or discrete \(\mathcal{X}\); see Barp et al. (2022) and Shi et al. (2022), respectively.
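A one-line sketch of this boundary modification for \(\mathcal{X}=(a,b)\), with illustrative names:

```python
# δ(x) = (x - a)(b - x) vanishes at x = a and x = b, so π(x)·u~(x) is zero on
# the boundary and Π[S[u~]] = 0 is restored.
def u_tilde(u_net, x, a, b):
    return u_net(x) * (x - a) * (b - x)
```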
## 5 Experiments
We consider three main experiments: the Genz functions benchmark, a parameter inference problem for a dynamical system called the Goodwin oscillator, and an example describing the energy output of a wind farm. We compare BSNs to the following approaches:
* Monte Carlo methods. When independent sampling from \(\pi\) can be used (i.e. for the Genz benchmark and the wind farm experiments) we use MC. When this is not possible, we use instead an MCMC method called Metropolis-Adjusted Langevin algorithm (MALA; Roberts and Tweedie, 1996).
* A BQ implementation based on emukit (Paleyes et al., 2019), with an RBF covariance function \(k(x,y)=\lambda\exp(-\|x-y\|_{2}^{2}/l^{2})\) for some \(l,\lambda>0\). We use log-likelihood maximization to choose \(l\) and set the GP prior mean to \(0\), as we do not have any prior knowledge about the value of the integral. In Appendix A.1.5 we conduct an additional experiment using the Matern 1/2 kernel; however, for this kernel, the posterior mean is only available in \(d=1\).
* A control functional estimator based on Stein's method (Stein-CF) as described in Oates et al. (2019) for the experiments on the Genz dataset and the Goodwin oscillator. The approach can be thought of as a kernel interpolant alternative to our neural network. We use \(m(x)=I_{d}\) and an RBF kernel. We use log-likelihood maximization to set the kernel hyperparameters.
To implement the Laplace approximation, we use the laplace-torch library (Daxberger et al., 2021). Across all experiments we employ the same fully connected architecture for \(u_{\theta_{u}}\), where each hidden layer has 32 units, and we use 2 hidden layers (see Appendix A.1.3 for more details).
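A sketch of such a backbone, consistent with the stated architecture and the CELU choice of Section 4 (the helper name `make_u` is illustrative):

```python
import torch.nn as nn

# Fully connected u_θu with two hidden layers of 32 CELU units; input and
# output dimension d, so that the Stein layer S_m[u] is well defined.
def make_u(d):
    return nn.Sequential(nn.Linear(d, 32), nn.CELU(),
                         nn.Linear(32, 32), nn.CELU(),
                         nn.Linear(32, d))
```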
**Genz Benchmark.** We first consider the Genz family of integrands (Genz, 1984) as a test ground (see Appendix A.2 for detailed definitions). This benchmark, consisting of six integrands with known integrals, was proposed to highlight the performance of numerical integration methods on challenging tasks including discontinuities, peaks and oscillations. Each integrand has a parameter which can be used to increase the dimensionality \(d\) of the domain. We follow the implementation of Si et al. (2021), where the test functions are transformed to be supported on \(\mathcal{X}=\mathbb{R}^{d}\) and integrated against a multivariate standard Gaussian distribution \(\pi\). Since these functions are very cheap, we do not expect BSN or BQ to be competitive with MC methods in terms of runtime, but we use this experiment to showcase the performance of BSNs for challenging integrands and compare methods for fixed \(n\).
In Table 1, we first consider the case \(d=2\) and \(n=5120\). BSN and BQ both outperform MC by several orders of magnitude in terms of mean relative integration error. Notably, BSN is significantly better than BQ for the discontinuous Genz function, indicating that the neural network is able to adapt to rapidly changing functions. For the Gaussian Genz function, BQ outperforms the BSN because its prior is more informative. Both methods lead to a significant improvement over MC, but we can run the BSN at a higher number of data points \(n\) than BQ. See Appendix A.2 for detailed figures.
We then considered the impact of dimensionality on MC, BQ, and BSN in Figure 3. We focus on the Continuous Genz function for simplicity. If too few evaluations \(n\) are available, the Stein network cannot approximate \(f\) well, but with a sufficiently large \(n\) (i.e. \(n\approx 10^{2}\) in \(d=1\) and \(n\approx 10^{4}\) in
\(d=10\)), BSN significantly outperforms MC and BQ.
We also considered the impact of the choice of activation functions for \(u_{\theta_{u}}\) in Figure 4. Again, we focus on the Continuous Genz integrand, but limit ourselves to \(d=1\). We consider a diverse set of activation functions (described in Section 4), all continuously differentiable as required for the final Stein layer. We find that the CELU activation leads to the best results on the Continuous Genz dataset, but other activation functions like the tanh and Gaussian activations also perform well.
Finally, we take a closer look at the Continuous Genz function in \(d=20\) in Figure 5. We observe that a large enough \(n\) (\(n\approx 10^{4}\)) is necessary for the interpolation capabilities of the model to significantly improve performance. In those cases, the BSN achieves significantly better performance than MC sampling. We note that MC sampling is cheap on the Genz benchmark, which we use only as a test bed with integrands of varying complexity, so we compare MC to the other methods in terms of sample efficiency alone. Both BQ and Stein-CF fail to achieve good performance and are too expensive (in runtime and in memory) to run for large \(n\); the kernel-based methods also exceed our allotted memory limit of 20 GB (see Appendix A.2.1). The BSN, by contrast, performs well even for much larger datasets (we ran it up to \(n\approx 10^{6}\)).
To evaluate the uncertainty estimates provided by the GGN-Laplace approach, we calculate their calibration \(\gamma\): the ratio between the relative integration error \(e_{\text{abs}}\) and the standard deviation \(\sigma_{\theta_{0}}\) of the GGN-Laplace approximation of the posterior on \(\theta_{0}\), \(\gamma=e_{\text{abs}}/\sigma_{\theta_{0}}\). Similarly, for BQ, \(\sigma_{\theta_{0}}\) is the posterior standard deviation on \(\Pi[f]\). A calibration fluctuating around one indicates a well-calibrated model, while a large calibration suggests an overconfident model whose uncertainty estimates are unreliable. Both the GGN-Laplace approach and BQ lead to uncertainty estimates that are underconfident (although less so for the BSN), especially in the high data regime (see Figure 5). Underconfident predictions are still useful in that they provide a prudent assessment of our uncertainty.
**Bayesian Inference for the Goodwin Oscillator.** A challenging computational task in Bayesian inference is posterior inference for parameters of dynamical systems (see for example Calderhead and Girolami (2011)). The challenge is due to the large computational cost of posterior sampling, which is incurred due to the need to solve systems of differential equations numerically at a high-level of accuracy. In addition, large datasets can further increase the computational cost, making the task a prime candidate for BSNs. For this experiment, we consider parameter inference in a dynamical system called the Goodwin oscillator (Goodwin, 1965). This model describes how the feedback loop between
| **Integrand** | **MC** | **BQ** | **BSN** |
| --- | --- | --- | --- |
| Continuous | 1.59e-03 ± 0.90e-03 | 1.40e-03 ± 0.09e-03 | **1.11e-05 ± 0.55e-05** |
| Discontinuous | 2.69e-02 ± 2.64e-02 | 1.12e-02 ± 0.50e-02 | **2.56e-03 ± 1.94e-03** |
| Gaussian | 1.52e-02 ± 8.85e-03 | **1.17e-06 ± 1.11e-06** | 1.83e-04 ± 1.35e-04 |
| Corner | 1.85e-02 ± 1.85e-02 | **2.49e-04 ± 1.53e-04** | 6.00e-04 ± 5.39e-04 |
| Oscillatory | 2.88e-01 ± 1.75e-01 | 4.13e-03 ± 0.89e-03 | **1.34e-03 ± 0.97e-03** |
| Product | 7.59e-03 ± 4.11e-03 | 1.82e-04 ± 0.42e-04 | **1.42e-04 ± 0.76e-04** |

Table 1: _Performance on Genz integral family in \(d=2\)._ Mean relative integration error and standard deviation (based on 5 repetitions) using \(n=5120\).
Figure 4: _Impact of the choice of activation function for the Continuous Genz function._ Loss \(l\) (_left_) and mean relative integration error (_right_) (mean based on 5 repetitions) as a function of \(n\).
Figure 3: _Continuous Genz function._ We compare methods as a function of \(d\) for \(n=100\) (left) and \(n=10000\) (right)(mean and standard deviation based on 5 repetitions).
mRNA transcription and protein expression can lead to oscillatory dynamics in a cell. It is a common benchmark for MC methods [1, 1, 14].
We analyse the setting with no intermediate protein species, leading to a system with \(d=4\) parameters: \(x=(a_{1},a_{2},k,\alpha)\in\mathbb{R}_{+}^{4}\). Given a posterior distribution \(\pi\), we want to compute the posterior mean \(\Pi[f]\) of each of the ODE parameters, i.e., \(f(x)=x\). For this experiment, the posterior distribution is conditioned on a synthetic dataset of \(2400\) observations generated for some known parameter values. Our exact experimental setup is based on [1], and we refer to the Appendix A.3 for a detailed description.
The posterior density \(\pi\) is only available in unnormalized form, and we therefore use MALA for sampling. This is relatively expensive: sampling \(n=1000\) realizations takes around \(30\) seconds, which is on the same timescale as network inference (\(\sim 1\) min). For ODE problems requiring more complex solvers or settings with a large dataset, the sampling time might increase even further.
In this setting, \(\nabla_{x}\log\pi(x)\) can take very large values, which makes training the BSN harder. We find that \(m(x)=I_{d}/C\) for \(C\in\mathbb{R}\) can considerably improve the performance. We considered two choices for the constant \(C\):
* the standard deviation of \(\{\nabla_{x}\log\pi(x_{i})\}_{i=1}^{n}\) (called \(C=\text{std}\) in Figure 6).
* the largest score value: \(C=\max_{i=1,\dots,n}\nabla_{x}\log\pi(x_{i})\) (\(C=\text{max}\) in Figure 6).
Figure 6 compares the performance of the proposed regularizations. Both choices work well, in contrast to using no regularization at all (i.e. \(C=1\)). We find that the BSN either matches the performance of MALA (for parameter \(\alpha\)) or surpasses it (parameter \(a_{1}\)). The Stein-CF performs well but struggles in the high data regime due to unstable hyperparameter optimization. The results for \(a_{2}\) and \(k\) are presented in Appendix A.3. The saturation in the accuracy reached by both the BSN and MALA can be attributed to the noisy likelihood evaluations.
Before concluding, we emphasize that the BSN is the only available Bayesian PNM here. This is because \(\pi\) is unnormalized, and BQ therefore cannot be implemented.
**Expected Local Turbine Thrust Coefficient for Wind Farm Layout Design.** The energy produced by a wind farm depends on factors including the distance between turbines, the direction of the wind, and the wake produced by individual turbines. To understand this phenomenon, fluid dynamic simulations can be used to estimate a local turbine thrust coefficient (which we denote \(f\)), which largely determines
Figure 5: _Continuous Genz function in \(d=20\)._ Mean relative integration error (_left_), run time (_centre-left_), and calibration (_right_) (mean and standard deviation based on 5 repetitions) as a function of \(n\). _Centre-right:_ mean relative integration error as a function of run time in seconds.
Figure 6: _Posterior expectations for the parameters of a Goodwin ODE. Mean relative integration error and standard deviation (top-left and bottom-left), and uncertainty estimates (top-right and bottom-right) (based on 5 repetitions) as a function of \(n\)._
energy production (Nishino, 2016). Since a number of these factors are unknown, it is common practice to represent uncertainty through a distribution (denoted \(\pi\)), and calculate the _expected_ local turbine thrust coefficient \(\Pi[f]\).
A particular challenge here is the cost of evaluating \(f\). For the model we are using (a low-order wake model from Niayifar and Porte-Agel (2016)), each evaluation of \(f\) takes approximately \(130\) seconds, but more accurate models (Kirby et al., 2022) can take up to tens of hours per evaluation. However, it is well known that \(f\) is a smooth function of the inputs, which makes Bayesian PNMs, such as BSNs, prime candidates for the task.
The inputs to our model \(f\) are the wind direction, the turbulence intensity, as well as a number of parameters representing the design of the wind farm (including parameters impacting the distance between turbines, and turbine-specific parameters such as the turbine resistance coefficient, the turbine hub height and diameter, and parameters describing the turbine wake). The distribution \(\pi\) consists of independent distributions (either mixtures of Gaussians, or a truncated Gaussian) on each input to the wake model. Appendix A.4 provides full details on the wind farm dataset.
The results are presented in Figure 7. Since the ground truth is unknown for this problem, we ran BSN on a dataset which is \(5\) times larger than what is plotted in order to get a benchmark value. We compared the runtime of all methods including sampling, where we assume that all the points were sampled sequentially (corresponding to running the experiment on a single CPU). The additional runtime of both BQ and the BSN is negligible compared to the initial sampling time. Both methods achieve a much lower mean relative integration error compared to sampling, clearly demonstrating the power of Bayesian PNM methods for problems involving expensive integrands.
On this dataset BQ cannot be used to compute uncertainty estimates, because we cannot integrate the kernel twice in closed form for truncated Gaussians. However, the uncertainty estimates computed with the Laplace approximation for the BSN accurately capture deviations from the ground truth value (shown in Figure 7).
## 6 Limitations and Discussion
The primary advantage of BSNs is in terms of scalability, but they also suffer from some limitations, discussed below.
Firstly, in contrast to GPs where prior knowledge (such as periodicity or smoothness) about \(f\) can be encoded via a kernel, selecting good functional priors for BNNs can be challenging. Our experiments show that simple prior choices are often sufficient to achieve good results for moderately hard problems. More advanced options (Sun et al., 2019; Pearce et al., 2019) could be considered, but this would require novel Laplace approximations.
Secondly, our experiments suggest that the BSN estimate converges as \(n\) grows. Although we did not analyse this convergence from a theoretical viewpoint, we note that Si et al. (2021, Propositions 1 and 2) can be used to prove consistency of the BSN posterior mean to the true value of the integral. We currently do not have results on convergence rates, which could be an interesting direction for future research (for example, Belomestny et al. (2023) provide a rate for a related approach). This is in contrast with the GP case, where convergence results are highly developed (Kanagawa et al., 2020; Kanagawa and Hennig, 2019; Karvonen et al., 2020; Wynne et al., 2021).
Thirdly, the computational cost is highly dependent on the complexity of the deep network \(u_{\theta_{u}}\). Using standard matrix multiplication, neural network training is linear in the number of (non-bias) parameters \(p\), the number of training samples \(n\), and the number of gradient iterations \(i\), i.e., \(O(pni)\). Across all our experiments we used the same architecture for \(u_{\theta_{u}}\) independent of \(n\), but we expect that the complexity of the network will need to increase significantly when high accuracy is required. In such cases, we expect that mini-batching and first order optimization could improve _scalability_, but would likely incur new issues with _stability_.
Figure 7: _Wind farm model_. Mean relative integration error (_left_), and run time (_centre-left_) (mean and standard deviation based on 5 repetitions) as a function of \(n\). _Center-right:_ Fraction of runtime BSN and BQ contribute to the total runtime which includes the runtime of the wind farm simulation. _Right:_ Uncertainty estimates provided by the Laplace approximation.
## 7 Conclusion
We have introduced a way to leverage the function approximation abilities of deep BNNs specifically for integration through the application of a Stein operator. Employing a Laplace approximation provides uncertainty quantification of good quality in this architecture. We have noted that significant work is required to stabilize the training process: both the architecture and the training method must be adapted to the non-standard form of the loss.
BSNs perform consistently well across experiments, both in accuracy and in runtime, and are thus an interesting alternative to BQ, especially in the intermediate regime between very small sample sizes (where traditional BQ works well) and very large sample sizes (where classic MC methods remain the preferred solution). Our experiments on a variety of applications also highlight some functional strengths of the BSN approach. In particular, it deals flexibly with a wide range of integration densities, including cases in which the density is known only in unnormalized form.
## Acknowledgements
The authors would like to thank Zhuo Sun, Lukas Tatzel, Frank Schneider and Andrew Kirby for helpful discussions. We would also like to thank Andrew Kirby for sharing code to run the wind-farm experiments, which is available at [https://github.com/AndrewKirby2/ctstar_statistical_model](https://github.com/AndrewKirby2/ctstar_statistical_model). Part of this work was initiated by Dagstuhl Seminar 21432 "Probabilistic Numerical Methods - From Theory to Implementation." PH gratefully acknowledges financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Wurttemberg. FXB was supported by the Lloyd's Register Foundation Programme on Data-Centric Engineering and The Alan Turing Institute under the EPSRC grant [EP/N510129/1], and through an Amazon Research Award on "Transfer Learning for Numerical Integration in Expensive Machine Learning Systems".
|
2301.07912 | Interval Reachability of Nonlinear Dynamical Systems with Neural Network
Controllers | This paper proposes a computationally efficient framework, based on interval
analysis, for rigorous verification of nonlinear continuous-time dynamical
systems with neural network controllers. Given a neural network, we use an
existing verification algorithm to construct inclusion functions for its
input-output behavior. Inspired by mixed monotone theory, we embed the
closed-loop dynamics into a larger system using an inclusion function of the
neural network and a decomposition function of the open-loop system. This
embedding provides a scalable approach for safety analysis of the neural
control loop while preserving the nonlinear structure of the system.
We show that one can efficiently compute hyper-rectangular
over-approximations of the reachable sets using a single trajectory of the
embedding system. We design an algorithm to leverage this computational
advantage through partitioning strategies, improving our reachable set
estimates while balancing its runtime with tunable parameters. We demonstrate
the performance of this algorithm through two case studies. First, we
demonstrate this method's strength in complex nonlinear environments. Then, we
show that our approach matches the performance of the state-of-the art
verification algorithm for linear discretized systems. | Saber Jafarpour, Akash Harapanahalli, Samuel Coogan | 2023-01-19T06:46:36Z | http://arxiv.org/abs/2301.07912v2 | # Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers
###### Abstract
This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers. Given a neural network, we use an existing verification algorithm to construct inclusion functions for its input-output behavior. Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system. This embedding provides a scalable approach for safety analysis of the neural control loop while preserving the nonlinear structure of the system.
We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system. We design an algorithm to leverage this computational advantage through partitioning strategies, improving our reachable set estimates while balancing runtime with tunable parameters. We demonstrate the performance of this algorithm through two case studies. First, we demonstrate this method's strength in complex nonlinear environments. Then, we show that our approach matches the performance of the state-of-the-art verification algorithm for linear discretized systems.
Reachability analysis, Neural feedback loop, Verification of neural networks, Safety verification.
## 1 Introduction
Neural network components are increasingly deployed as controllers in safety-critical applications such as self-driving vehicles and robotic systems. For example, they may be designed based on reinforcement learning algorithms (Zhang et al., 2016) or trained to approximate some dynamic optimization-based controllers (Chen et al., 2018). However, neural networks are known to be vulnerable to small input perturbations (Szegedy et al., 2014); a slight disturbance in their input can lead to a large change in their output. In many applications, these learning-based components are interconnected with nonlinear and time-varying systems. Moreover, they are usually trained with no safety and robustness guarantees. Their learned control policies can therefore suffer from a significant degradation in performance when presented with uncertainties not accounted for during training.
**Related works.** There is extensive literature on verification of isolated neural networks. Rigorous verification approaches generally fall into three different categories: (i) reachability-based methods which focus on layer-by-layer estimation of reachable sets using interval bound propagation (Mirman et al., 2018; Gowal et al., 2018; Wang et al., 2018), activation function relaxation (Zhang et al., 2018), and symbolic interval analysis (Wang et al., 2018); (ii) optimization-based methods which use linear programming (Wong and Kolter, 2018), semi-definite programming (Fazlyab et al., 2019), or search and optimization (Katz et al., 2017) to estimate the input-output behavior of the neural networks; and (iii) probabilistic methods (Cohen et al., 2019; Li et al., 2019). We refer to the survey (Liu et al., 2021) for a review of neural network verification algorithms. Reachability of nonlinear dynamical systems has been studied using optimization-based methods such as the Hamilton-Jacobi approach (Bansal et al., 2017) and the level set approach (Mitchell and Tomlin, 2000). Several computationally tractable approaches including ellipsoidal (Kurzhanski and Varaiya, 2000) and zonotope methods (Girard, 2005) have been developed for reachability analysis of linear systems. Mixed monotone theory has emerged as a computationally efficient framework for over-approximating the reachable sets of nonlinear systems (Coogan and Arcak, 2015).
Recently, there has been an increased interest in verification methods for closed-loop systems with neural network controllers. However, a direct combination of state-of-the-art neural network verification algorithms with existing reachability analysis toolboxes can lead to overly conservative estimates of reachable sets (Dutta et al., 2019, Section 2.1). For linear discrete-time systems, reachable set over-approximation has been studied using semi-definite programming (Hu et al., 2020) and linear programming (Everett et al., 2021, 2021). For nonlinear discrete-time systems, Sidrane et al. (2022) establishes a mixed integer programming framework for reachable set over-approximation using polynomial bounds of the system dynamics. For nonlinear continuous-time systems, Dutta et al. (2019) uses polynomial bounding on the dynamics as well as the neural network to over-approximate reachable sets. Other rigorous approaches for verification of closed-loop neural network controllers include using finite-state abstractions (Sun et al., 2019) and simulation-guided interval analysis (Xiang et al., 2021).
**Contributions.** In this paper, we use elements of mixed monotone system theory to develop a flexible framework for safety verification of nonlinear continuous-time systems with neural network controllers. First, we employ CROWN, a well-established verification framework for isolated neural networks (Zhang et al., 2018), to construct an inclusion function for a pre-trained neural network's output. Then, we use this inclusion function and a decomposition function of the open-loop system to construct suitable embedding systems for the closed-loop system using twice the number of states. Finally, we simulate trajectories of these embedding systems to compute hyper-rectangular over-approximations of the closed-loop reachable sets. Our framework has several advantageous features. It is agnostic to the neural network verifier, only requiring an inclusion function for the neural network. Additionally, our approach is fast and scalable, as only a single simulation of the embedding system is required. This feature, combined with a clever partitioning of the state space, is used to improve the accuracy of our reachable set over-approximations while retaining computational viability. Through several numerical experiments, we study the efficiency of our approach for obstacle avoidance in a simple vehicle model and for perturbation analysis in a linear quadrotor model.
## 2 Notation
For every \(x\in\mathbb{R}^{n}\) and every \(r\in\mathbb{R}_{\geq 0}\) we define the closed ball \(B_{\infty}(x,r)=\{y\in\mathbb{R}^{n}\mid\|y-x\|_{\infty}\leq r\}\). The partial order \(\leq\) on \(\mathbb{R}^{n}\) is defined by \(x\leq y\) if and only if \(x_{i}\leq y_{i}\), for every \(i\in\{1,\ldots,n\}\). For every \(x\leq y\), we can define the interval \([x,y]=\{z\in\mathbb{R}^{n}\mid x\leq z\leq y\}\). The partial order \(\leq\) on \(\mathbb{R}^{n}\) induces the southeast partial order \(\leq_{\mathrm{SE}}\) on \(\mathbb{R}^{2n}\) defined by \(\left[\begin{smallmatrix}x\\ \widehat{x}\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}y\\ \widehat{y}\end{smallmatrix}\right]\) if and only if \(x\leq y\) and \(\widehat{y}\leq\widehat{x}\). We define the subsets \(\mathcal{T}_{\geq 0}^{2n},\mathcal{T}_{\leq 0}^{2n},\mathcal{T}^{2n}\subseteq\mathbb{R}^{2n}\) as follows:
\[\mathcal{T}_{\geq 0}^{2n}=\left\{\left[\begin{smallmatrix}x\\ \widehat{x}\end{smallmatrix}\right]\in\mathbb{R}^{2n}\mid x\leq\widehat{x}\right\},\qquad\mathcal{T}_{\leq 0}^{2n}=\left\{\left[\begin{smallmatrix}x\\ \widehat{x}\end{smallmatrix}\right]\in\mathbb{R}^{2n}\mid x\geq\widehat{x}\right\},\qquad\mathcal{T}^{2n}=\mathcal{T}_{\geq 0}^{2n}\cup\mathcal{T}_{\leq 0}^{2n}.\]
For every two vectors \(v,w\in\mathbb{R}^{n}\) and every \(i\in\{1,\ldots,n\}\), we define the vector \(v_{[i:w]}\in\mathbb{R}^{n}\) by \(\left(v_{[i:w]}\right)_{j}=\begin{cases}v_{j}&j\neq i\\ w_{j}&j=i.\end{cases}\), for every \(j\in\{1,\ldots,n\}\). Given a matrix \(B\in\mathbb{R}^{n\times m}\), we denote the non-negative part of \(B\) by \([B]^{+}=\max(B,0)\) and the nonpositive part of \(B\) by \([B]^{-}=\min(B,0)\). The Metzler and non-Metzler part of square matrix \(A\in\mathbb{R}^{n\times n}\) are denoted by \(\lceil A\rceil^{\mathrm{Mzl}}\) and \(\lfloor A\rfloor^{\mathrm{Mzl}}\), respectively, where \((\lceil A\rceil^{\mathrm{Mzl}})_{ij}=\begin{cases}A_{ij}&A_{ij}\geq 0\text{ or }i=j\\ 0&\text{otherwise,}\end{cases}\) and \(\lfloor A\rfloor^{\mathrm{Mzl}}=A-\lceil A\rceil^{\mathrm{Mzl}}\). Consider a control system \(\dot{x}=f(x,u)\) on \(\mathbb{R}^{n}\) with a measurable input \(\mathbf{u}:\mathbb{R}_{\geq 0}\to\mathbb{R}^{p}\). We denote the trajectory of the system with the control input \(\mathbf{u}\) starting from \(x_{0}\in\mathbb{R}^{n}\) at time \(t_{0}\) by \(t\mapsto\phi_{f}(t,t_{0},x_{0},u)\). Given a map \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\) and a Lipschitz function \(\mathsf{F}:\mathcal{T}^{2n}\times\mathcal{T}^{2m}\to\mathbb{R}^{n}\), we say that \(\mathsf{F}\) is a _decomposition function for \(f\)_ if
1. \(\mathsf{F}_{i}(x,x,u,u)=f_{i}(x,u)\), for every \(x\in\mathbb{R}^{n}\) and every \(u\in\mathbb{R}^{p}\);
2. \(\mathsf{F}_{i}(x,\widehat{x},u,\widehat{u})\leq\mathsf{F}_{i}(y,\widehat{y}, u,\widehat{u})\), for every \(x\leq y\) such that \(x_{i}=y_{i}\), and every \(\widehat{y}\leq\widehat{x}\);
3. \(\mathsf{F}_{i}(x,\widehat{x},u,\widehat{u})\leq\mathsf{F}_{i}(x,\widehat{x}, v,\widehat{v})\), for every \(u\leq v\) and every \(\widehat{v}\leq\widehat{u}\).
A decomposition function \(\mathsf{F}\) is _tight_, if for every other decomposition function \(d\), we have
\[\left[\begin{smallmatrix}d(x,\widehat{x},u,\widehat{u})\\ d(\widehat{x},x,\widehat{u},u)\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}\mathsf{F}(x,\widehat{x},u,\widehat{u})\\ \mathsf{F}(\widehat{x},x,\widehat{u},u)\end{smallmatrix}\right],\qquad\text{for every }x\leq\widehat{x},\ u\leq\widehat{u}.\]
Given a map \(g:\mathbb{R}^{n}\to\mathbb{R}^{m}\), the function \(\left[\begin{smallmatrix}\underline{\mathsf{G}}\\ \overline{\mathsf{G}}\end{smallmatrix}\right]:\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2m}\) is _an inclusion function for \(g\)_ if, for every \(x\leq\widehat{x}\) and every \(z\in[x,\widehat{x}]\), we have \(\underline{\mathsf{G}}(x,\widehat{x})\leq g(z)\leq\overline{\mathsf{G}}(x,\widehat{x})\). Note that our definition of inclusion function is closely related to the notion of _inclusion interval function_ in interval analysis (Jaulin et al., 2001, Section 2.4.1).
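For reference, the elementwise matrix decompositions above can be written out directly (a small NumPy sketch; function names are illustrative):

```python
import numpy as np

def pos_part(B):   # [B]^+ = max(B, 0), entrywise
    return np.maximum(B, 0)

def neg_part(B):   # [B]^- = min(B, 0), entrywise
    return np.minimum(B, 0)

def metzler_part(A):  # ⌈A⌉^Mzl keeps the diagonal and all non-negative entries
    M = np.where(A >= 0, A, 0.0)
    np.fill_diagonal(M, np.diag(A))
    return M          # the non-Metzler part is ⌊A⌋^Mzl = A - M
```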
## 3 Problem Statement
We consider a nonlinear plant of the form
\[\dot{x}=f(x,u,w) \tag{1}\]
where \(x\in\mathbb{R}^{n}\) is the state of the system, \(u\in\mathbb{R}^{p}\) is the control input, and \(w\in\mathcal{W}\subseteq\mathbb{R}^{q}\) is the disturbance. We assume that \(f:\mathbb{R}^{n}\times\mathbb{R}^{p}\times\mathbb{R}^{q}\to\mathbb{R}^{n}\) is a parameterized vector field and the state feedback is parameterized by a \(k\)-layer feed-forward neural network controller \(\mathsf{N}:\mathbb{R}^{n}\to\mathbb{R}^{p}\) defined by:
\[\xi^{(i)}(x) =\phi_{i-1}(W^{(i-1)}\xi^{(i-1)}(x)+b^{(i-1)}),\quad i\in\{1, \ldots,k\}\] \[x =\xi^{(0)}\qquad u=W^{(k)}\xi^{(k)}(x)+b^{(k)}:=\mathsf{N}(x), \tag{2}\]
where \(n_{i}\) is the number of neurons in the \(i\)th layer, \(W^{(i-1)}\in\mathbb{R}^{n_{i}\times n_{i-1}}\) is the weight matrix of the \(i\)th layer, \(b^{(i-1)}\in\mathbb{R}^{n_{i}}\) is the bias vector of the \(i\)th layer, and \(\xi^{(i)}(x)\in\mathbb{R}^{n_{i}}\) is the \(i\)th-layer hidden variable. The activation function \(\phi_{i}\) satisfies \(0\leq\frac{\phi_{i}(x)-\phi_{i}(y)}{x-y}\leq 1\); one can show that a large class of activation functions including, but not restricted to, ReLU, leaky ReLU, sigmoid, and tanh satisfies this condition (after a possible re-scaling of their co-domains). We assume that the neural network (2) is trained to approximate an offline controller with the auxiliary objective of achieving a goal set \(\mathcal{G}\subseteq\mathbb{R}^{n}\) while remaining in a safe set \(\mathcal{S}\subseteq\mathbb{R}^{n}\). Our aim is to consider the closed-loop system given by:
\[\dot{x}=f(x,\mathsf{N}(x),w):=f^{\mathrm{cl}}(x,w). \tag{3}\]
and to verify its safety, i.e., to ensure that the closed-loop system avoids the unsafe domain \(\mathbb{R}^{n}/\mathcal{S}\). We define the reachable set of the closed-loop system (3) by:
\[\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})=\{\phi_{f^{\mathrm{cl}}}(t,0,x_{0}, \mathbf{w})\mid x_{0}\in\mathcal{X}_{0},\ \ \mathbf{w}:\mathbb{R}_{\geq 0}\to \mathcal{W}\ \ \ \text{is piecewise continuous}\}\]
Thus, our goal is to check if \(\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})\subseteq\mathcal{S}\) holds for every \(t\in\mathbb{R}_{\geq 0}\). In general, computing the exact reachable sets of the closed-loop system (3) is not computationally tractable. Our approach is based on constructing a computationally efficient over-approximation \(\overline{\mathcal{R}}(t,\mathcal{X}_{0},\mathcal{W})\) of the reachable set of the closed-loop system. Then, avoiding the unsafe set \(\mathbb{R}^{n}/\mathcal{S}\) is guaranteed when \(\overline{\mathcal{R}}(t,\mathcal{X}_{0},\mathcal{W})\subseteq\mathcal{S}\), for every \(t\in\mathbb{R}_{\geq 0}\).
## 4 Input-output reachability of neural networks
In order to estimate the input-output behavior of the neural network controller, we use the verification algorithm called CROWN (Zhang et al., 2018). We first use CROWN to obtain an inclusion function for the input-output map of the neural network. Consider the neural network (2) and let the perturbed input vector \(x\) be in the interval \([\underline{x},\overline{x}]\). For every \(i\in\{1,\ldots,k\}\), we define the pre-activation input to the \(i\)th layer as \(z^{(i)}=W^{(i-1)}\xi^{(i-1)}(x)+b^{(i-1)}\). When \(x\) is perturbed in the interval \([\underline{x},\overline{x}]\), we assume that \(L^{(i)},U^{(i)}\in\mathbb{R}^{n_{i}}\) are such that \(L^{(i)}\leq z^{(i)}\leq U^{(i)}\). In this case, for the \(j\)th neuron in the \(i\)th layer, there exist \(\alpha^{(i)}_{U,j},\beta^{(i)}_{U,j},\alpha^{(i)}_{L,j},\beta^{(i)}_{L,j}\) such that
\[\alpha^{(i)}_{L,j}(z+\beta^{(i)}_{L,j})\leq\phi_{i}(z)\leq\alpha^{(i)}_{U,j}(z+\beta^{(i)}_{U,j}),\ \ \ \ \text{for every }L^{(i)}_{j}\leq z\leq U^{(i)}_{j}. \tag{4}\]
For every \(y\in[\underline{y},\overline{y}]\), the output of the neural network is bounded as below:
\[\underline{A}(\underline{x},\overline{x})x+\underline{b}(\underline{x}, \overline{x})\leq\mathsf{N}(x)\leq\overline{A}(\underline{x},\overline{x})x+ \overline{b}(\underline{x},\overline{x}), \tag{5}\]
where matrix-valued functions \(\underline{A},\overline{A}:\mathcal{T}_{\geq 0}^{2n}\to\mathbb{R}^{p\times n}\) and two vector-valued functions \(\underline{b},\overline{b}:\mathcal{T}_{\geq 0}^{2n}\to\mathbb{R}^{p}\) are defined as follows:
\[\overline{A}(\underline{x},\overline{x})=\Lambda^{(0)},\qquad \overline{b}(\underline{x},\overline{x})=\sum\nolimits_{i=1}^{k}\Lambda^{(i)}b ^{(i)}+\mathrm{diag}(\Lambda^{(i)}\Delta^{(i)}),\] \[\underline{A}(\underline{x},\overline{x})=\Omega^{(0)},\qquad \underline{b}(\underline{x},\overline{x})=\sum\nolimits_{i=1}^{k}\Omega^{(i)}b ^{(i)}+\mathrm{diag}(\Omega^{(i)}\Theta^{(i)}) \tag{6}\]
where, for every \(i\in\{1,\ldots,k\}\), \(\Lambda^{(i)},\Delta^{(i)},\Omega^{(i)},\Theta^{(i)}\in\mathbb{R}^{n_{k}\times n_{i}}\) are as defined in (Zhang et al., 2018, Theorem 3.2) for the input perturbation set \(B_{\infty}\big(\frac{\underline{x}+\overline{x}}{2},\frac{\overline{x}-\underline{x}}{2}\big)\).
**Theorem 1** (Inclusion functions for Neural Networks): _Consider the \(k\)-layer feed-forward neural network \(u=\mathsf{N}(x)\) given by (2). Then,_
1. _for every input perturbation interval_ \([\underline{x},\overline{x}]\)_, the mapping_ \(\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[\underline{x},\overline{x}]}\\ \overline{\mathsf{G}}_{[\underline{x},\overline{x}]}\end{smallmatrix}\right]:\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2p}\) _defined by_ \[\begin{split}\underline{\mathsf{G}}_{[\underline{x},\overline{x}]}(x,\widehat{x})&=[\underline{A}(\underline{x},\overline{x})]^{+}x+[\underline{A}(\underline{x},\overline{x})]^{-}\widehat{x}+\underline{b}(\underline{x},\overline{x}),\\ \overline{\mathsf{G}}_{[\underline{x},\overline{x}]}(x,\widehat{x})&=[\overline{A}(\underline{x},\overline{x})]^{+}\widehat{x}+[\overline{A}(\underline{x},\overline{x})]^{-}x+\overline{b}(\underline{x},\overline{x}),\end{split}\tag{7}\] _is an inclusion function for the neural network_ \(\mathsf{N}\) _on_ \([\underline{x},\overline{x}]\)_;_
2. _the mapping_ \(\left[\begin{smallmatrix}\underline{\mathsf{H}}\\ \overline{\mathsf{H}}\end{smallmatrix}\right]:\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2p}\) _defined by_ \(\underline{\mathsf{H}}(x,\widehat{x})=\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})\) _and_ \(\overline{\mathsf{H}}(x,\widehat{x})=\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})\) _is an inclusion function for the neural network_ \(\mathsf{N}\) _on_ \(\mathbb{R}^{n}\)_._
**Proof**: Regarding part (i), suppose that \(\eta\leq\widehat{\eta}\in[\underline{x},\overline{x}]\) and \(z\in[\eta,\widehat{\eta}]\). Then,
\[\begin{split}\overline{\mathsf{G}}_{[\underline{x},\overline{x}]}(\eta,\widehat{\eta})&=[\overline{A}(\underline{x},\overline{x})]^{+}\widehat{\eta}+[\overline{A}(\underline{x},\overline{x})]^{-}\eta+\overline{b}(\underline{x},\overline{x})\geq[\overline{A}(\underline{x},\overline{x})]^{+}z+[\overline{A}(\underline{x},\overline{x})]^{-}z+\overline{b}(\underline{x},\overline{x})\\ &=\overline{A}(\underline{x},\overline{x})z+\overline{b}(\underline{x},\overline{x})\geq\mathsf{N}(z),\end{split}\]
where the first inequality holds because \(z\in[\eta,\widehat{\eta}]\), the matrix \([\overline{A}(\underline{x},\overline{x})]^{+}\) is non-negative, and the matrix \([\overline{A}(\underline{x},\overline{x})]^{-}\) is non-positive. The equality holds by noting that \([\overline{A}(\underline{x},\overline{x})]^{+}+[\overline{A}(\underline{x},\overline{x})]^{-}=\overline{A}(\underline{x},\overline{x})\), for every \(\underline{x}\leq\overline{x}\), and the last inequality holds by (5). Similarly, one can show that \(\underline{\mathsf{G}}_{[\underline{x},\overline{x}]}(\eta,\widehat{\eta})\leq\mathsf{N}(z)\). Regarding part (ii), suppose that \(x\leq\widehat{x}\); for every \(z\in[x,\widehat{x}]\), one can use a similar argument as in part (i) to show that
\[\begin{split}\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})&=[\overline{A}(x,\widehat{x})]^{+}\widehat{x}+[\overline{A}(x,\widehat{x})]^{-}x+\overline{b}(x,\widehat{x})\geq[\overline{A}(x,\widehat{x})]^{+}z+[\overline{A}(x,\widehat{x})]^{-}z+\overline{b}(x,\widehat{x})\\ &=\overline{A}(x,\widehat{x})z+\overline{b}(x,\widehat{x})\geq\mathsf{N}(z).\end{split}\]
One can similarly show that \(\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})\leq\mathsf{N}(z)\), for every \(z\in[x,\widehat{x}]\). Moreover, suppose that \(\underline{x}=\overline{x}=z\). In this case, both inequalities in (4) are equalities with \(\alpha_{L}^{(i)}=\alpha_{U}^{(i)}\) and \(\beta_{L}^{(i)}=\beta_{U}^{(i)}\). Thus, using equation (6), we get that \(\underline{A}(z,z)=\overline{A}(z,z)\) and \(\underline{b}(z,z)=\overline{b}(z,z)\). This implies that \(\mathsf{N}(z)=\underline{A}(z,z)z+\underline{b}(z,z)=\underline{\mathsf{G}}_{[z,z]}(z,z)\). Thus, \(\left[\begin{smallmatrix}\underline{\mathsf{H}}\\ \overline{\mathsf{H}}\end{smallmatrix}\right]\) is an inclusion function for \(\mathsf{N}\) on \(\mathbb{R}^{n}\). \(\blacksquare\)
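As an illustration of Theorem 1(i), the inclusion function of Eq. (7) can be evaluated directly from the CROWN coefficients; in the sketch below, `A_lo`, `b_lo`, `A_up`, and `b_up` are assumed to be precomputed for the perturbation interval \([\underline{x},\overline{x}]\).

```python
import numpy as np

# Evaluate Eq. (7): interval output bounds from CROWN's linear relaxations.
def crown_inclusion(A_lo, b_lo, A_up, b_up, x, xh):
    lo = np.maximum(A_lo, 0) @ x + np.minimum(A_lo, 0) @ xh + b_lo   # lower bound
    up = np.maximum(A_up, 0) @ xh + np.minimum(A_up, 0) @ x + b_up   # upper bound
    return lo, up   # lo <= N(z) <= up for every z in [x, xh]
```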
**Remark 2**: _The following remarks are in order._
1. _For each layer_ \(i\in\{1,\ldots,k\}\)_, the intermediate pre-activation bounds_ \(U^{i},L^{i}\) _can be computed using either the Interval Bound Propagation (IBP)_ _(_Gowal et al._,_ 2018_)_ _or using the CROWN itself_ _(_Zhang et al._,_ 2018_)__. In the former case, the IBP can be considered as a special case of CROWN by choosing_ \(\alpha_{U}^{(i)}=\alpha_{L}^{(i)}=0\) _and_ \(\beta_{L}^{(i)}=L^{(i)}\) _and_ \(\beta_{U}^{(i)}=U^{(i)}\)_, for every_ \(i\in\{1,\ldots,k\}\)_._
2. _The inclusion function_ \(\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[\underline{x},\overline{x}]}\\ \overline{\mathsf{G}}_{[\underline{x},\overline{x}]}\end{smallmatrix}\right]\) _defined in Theorem 1(i) is linear but it depends on the input perturbation interval_ \([\underline{x},\overline{x}]\)_. On the other hand, the inclusion function_ \(\left[\begin{smallmatrix}\underline{\mathsf{H}}\\ \overline{\mathsf{H}}\end{smallmatrix}\right]\) _defined in Theorem 1(ii) is nonlinear but it is independent of the input perturbation set. For both inclusion functions, it can be shown that the smaller the input perturbation interval, the tighter the relaxation bounds in equation (4) are and, thus, the tighter the over- and under-approximations of the neural network in equation (5) are._
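To illustrate Remark 2(i), the following is a minimal sketch of one IBP step: interval propagation through an affine layer followed by a monotone activation (all names are illustrative).

```python
import numpy as np

# One IBP layer: given x in [lo, hi], the pre-activation z = W x + b lies in
# [z_mid - z_rad, z_mid + z_rad]; a monotone activation maps the endpoints.
def ibp_layer(W, b, lo, hi, phi=lambda z: np.maximum(z, 0.0)):
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    z_mid = W @ mid + b
    z_rad = np.abs(W) @ rad
    return phi(z_mid - z_rad), phi(z_mid + z_rad)
```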
## 5 Interval reachability of closed-loop system
In this section, we present a system-level approach for over-approximating the reachable set of the closed-loop system (3) with the neural network controller (2). The key idea is to design a suitable function \(d:\mathcal{T}^{2n}\times\mathcal{T}^{2q}\rightarrow\mathbb{R}^{n}\) and use it to construct the following embedding system associated to the closed-loop system (3):
\[\frac{d}{dt}\begin{bmatrix}x\\ \widehat{x}\end{bmatrix}=\begin{bmatrix}d(x,\widehat{x},w,\widehat{w})\\ d(\widehat{x},x,\widehat{w},w)\end{bmatrix} \tag{8}\]
Then, one can use the embedding system (8) to study the propagation of the state and disturbance bounds with time. Suppose that the initial condition set is given by \(\mathcal{X}_{0}\subseteq[\underline{x}_{0},\overline{x}_{0}]\) and the disturbance set is given by \(\mathcal{W}\subseteq[\underline{w},\overline{w}]\). Let \(\mathsf{F}\) be a decomposition function for the open-loop system (1), and let \(\left[\begin{smallmatrix}\underline{\mathsf{G}}\\ \overline{\mathsf{G}}\end{smallmatrix}\right]\) and \(\left[\begin{smallmatrix}\underline{\mathsf{H}}\\ \overline{\mathsf{H}}\end{smallmatrix}\right]\) be the inclusion functions of the neural network (2) established in Theorem 1. We introduce the "global" function \(d^{\mathrm{G}}\), "hybrid" function \(d^{\mathrm{H}}\), and "local" function \(d^{\mathrm{L}}\), for every \(i\in\{1,\ldots,n\}\), as follows:
\[d^{\mathrm{G}}_{i}(x,\widehat{x},w,\widehat{w})=\begin{cases}\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{H}}(x,\widehat{x}),\overline{\mathsf{H}}(x,\widehat{x}),w,\widehat{w}),&x\leq\widehat{x},\ w\leq\widehat{w},\\ \mathsf{F}_{i}(\widehat{x},x,\overline{\mathsf{H}}(x,\widehat{x}),\underline{\mathsf{H}}(x,\widehat{x}),\widehat{w},w),&\widehat{x}\leq x,\ \widehat{w}\leq w,\end{cases}\]
\[d^{\mathrm{H}}_{i}(x,\widehat{x},w,\widehat{w})=\begin{cases}\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}),\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}),w,\widehat{w}),&x\leq\widehat{x},\ w\leq\widehat{w},\\ \mathsf{F}_{i}(\widehat{x},x,\overline{\mathsf{G}}_{[\widehat{x},x]}(x_{[i:\widehat{x}]},\widehat{x}),\underline{\mathsf{G}}_{[\widehat{x},x]}(x_{[i:\widehat{x}]},\widehat{x}),\widehat{w},w),&\widehat{x}\leq x,\ \widehat{w}\leq w,\end{cases}\]
\[d^{\mathrm{L}}_{i}(x,\widehat{x},w,\widehat{w})=\begin{cases}\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{H}}(x,\widehat{x}_{[i:x]}),\overline{\mathsf{H}}(x,\widehat{x}_{[i:x]}),w,\widehat{w}),&x\leq\widehat{x},\ w\leq\widehat{w},\\ \mathsf{F}_{i}(\widehat{x},x,\overline{\mathsf{H}}(x_{[i:\widehat{x}]},\widehat{x}),\underline{\mathsf{H}}(x_{[i:\widehat{x}]},\widehat{x}),\widehat{w},w),&\widehat{x}\leq x,\ \widehat{w}\leq w.\end{cases}\]
For every \(s\in\{\mathrm{G},\mathrm{L},\mathrm{H}\}\), the trajectory of the embedding system (8) with \(d=d^{s}\) and disturbance \(\left[\begin{smallmatrix}w\\ \widehat{w}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\underline{w}\\ \overline{w}\end{smallmatrix}\right]\) starting from \(\left[\begin{smallmatrix}\underline{x}_{0}\\ \overline{x}_{0}\end{smallmatrix}\right]\) is denoted by \(t\mapsto\left[\begin{smallmatrix}\underline{x}^{s}(t)\\ \overline{x}^{s}(t)\end{smallmatrix}\right]\).
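The computational appeal of this construction is that a single numerical integration of the \(2n\)-dimensional embedding system yields hyper-rectangular reachable set over-approximations (see Theorem 3 below). A minimal sketch, assuming `d` implements one of \(d^{\mathrm{G}},d^{\mathrm{H}},d^{\mathrm{L}}\) with the signature \(d(x,\widehat{x},w,\widehat{w})\) (illustrative names):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the embedding system (8) once; the trajectory gives interval
# bounds [x_lo(t), x_hi(t)] containing R(t, X0, W) for all t in [0, T].
def reach_bounds(d, x0_lo, x0_hi, w_lo, w_hi, T):
    n = len(x0_lo)
    def rhs(t, z):
        x, xh = z[:n], z[n:]
        return np.concatenate([d(x, xh, w_lo, w_hi), d(xh, x, w_hi, w_lo)])
    sol = solve_ivp(rhs, (0.0, T), np.concatenate([x0_lo, x0_hi]))
    return sol.y[:n, -1], sol.y[n:, -1]
```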
**Theorem 3** (Interval over-approximation of reachable sets): _Consider the control system (1) with the neural network controller (2). Suppose that the initial condition \(x_{0}\) belongs to \(\mathcal{X}_{0}\subseteq[\underline{x}_{0},\overline{x}_{0}]\) and the disturbance \(w\) belongs to \(\mathcal{W}\subseteq[\underline{w},\overline{w}]\). Let \(\mathsf{F}:\mathcal{T}^{2n}\times\mathcal{T}^{2p}\times\mathcal{T}^{2q} \rightarrow\mathbb{R}^{n}\) be a decomposition function for the open-loop system (1). Then the following statement holds:_
1. _for every_ \(s\in\{\mathrm{G},\mathrm{L},\mathrm{H}\}\) _and every_ \(t\in\mathbb{R}_{\geq 0}\)_, we have_ \[\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})\subseteq[\underline{x}^{s}(t), \overline{x}^{s}(t)].\]
_If all the activation functions of the neural network \(u=\mathsf{N}(x)\) are ReLU, then_
1. _for every_ \(t\in\mathbb{R}_{\geq 0}\)_, we have_ \[\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})\subseteq[\underline{x}^{\mathrm{L} }(t),\overline{x}^{\mathrm{L}}(t)]\subseteq[\underline{x}^{\mathrm{H}}(t), \overline{x}^{\mathrm{H}}(t)]\subseteq[\underline{x}^{\mathrm{G}}(t),\overline{ x}^{\mathrm{G}}(t)].\]
**Proof**: Regarding part (i), we provide the proof for \(s=\mathrm{H}\); the proofs of the other cases follow mutatis mutandis. Note that, by (Abate et al., 2021, Theorem 1), the tight decomposition
function \(d^{c}\) for the closed-loop system \(f^{\mathrm{cl}}\) and the tight decomposition function \(d^{o}\) for the open-loop system \(f\) can be computed as follows:
\[d^{c}_{i}(x,\widehat{x},w,\widehat{w})=\begin{cases}\min_{z\in[x,\widehat{x}],\,z_{i}=x_{i},\ \xi\in[w,\widehat{w}]}f_{i}(z,\mathsf{N}(z),\xi),&x\leq\widehat{x},\ w\leq\widehat{w},\\ \max_{z\in[\widehat{x},x],\,z_{i}=x_{i},\ \xi\in[\widehat{w},w]}f_{i}(z,\mathsf{N}(z),\xi),&\widehat{x}\leq x,\ \widehat{w}\leq w.\end{cases}\tag{9}\]
\[d^{o}_{i}(x,\widehat{x},u,\widehat{u},w,\widehat{w})=\begin{cases}\min_{z\in[x,\widehat{x}],\,z_{i}=x_{i},\ \eta\in[u,\widehat{u}],\ \xi\in[w,\widehat{w}]}f_{i}(z,\eta,\xi),&x\leq\widehat{x},\ u\leq\widehat{u},\ w\leq\widehat{w},\\ \max_{z\in[\widehat{x},x],\,z_{i}=x_{i},\ \eta\in[\widehat{u},u],\ \xi\in[\widehat{w},w]}f_{i}(z,\eta,\xi),&\widehat{x}\leq x,\ \widehat{u}\leq u,\ \widehat{w}\leq w.\end{cases}\tag{10}\]
The trajectory of the embedding system (8) with \(d=d^{c}\) and disturbance \(\left[\begin{smallmatrix}w\\ \widehat{w}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\underline{w}\\ \overline{w}\end{smallmatrix}\right]\) starting from \(\left[\begin{smallmatrix}\underline{x}_{0}\\ \overline{x}_{0}\end{smallmatrix}\right]\) is denoted by \(t\mapsto\left[\begin{smallmatrix}\underline{x}^{c}(t)\\ \overline{x}^{c}(t)\end{smallmatrix}\right]\). Since \(\mathsf{F}\) is a decomposition function for the open-loop system \(f\), for every \(x\leq\widehat{x}\), \(u\leq\widehat{u}\), \(w\leq\widehat{w}\), and \(i\in\{1,\ldots,n\}\), we get
\[\mathsf{F}_{i}(x,\widehat{x},u,\widehat{u},w,\widehat{w})\leq d^{o}_{i}(x,\widehat{x},u,\widehat{u},w,\widehat{w})=\min_{z\in[x,\widehat{x}],\,z_{i}=x_{i}\atop\xi\in[w,\widehat{w}],\,\eta\in[u,\widehat{u}]}f_{i}(z,\eta,\xi). \tag{11}\]
Suppose that \(x\leq\widehat{x}\) and let \(i\in\{1,\ldots,n\}\) and \(y\in[x,\widehat{x}]\) be such that \(y_{i}=x_{i}\). By Theorem 1(i), \(\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[x,\widehat{x}]}\\ \overline{\mathsf{G}}_{[x,\widehat{x}]}\end{smallmatrix}\right]\) is an inclusion function for \(\mathsf{N}\) on \([x,\widehat{x}]\). Moreover, \(y\in[x,\widehat{x}_{[i:x]}]\subseteq[x,\widehat{x}]\) and thus
\[\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]})\leq\mathsf{N} (y)\leq\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}). \tag{12}\]
Therefore, for every \(i\in\{1,\ldots,n\}\), we get
\[\begin{split}d^{c}_{i}(x,\widehat{x},w,\widehat{w})&=\min_{z\in[x,\widehat{x}],\,z_{i}=x_{i}\atop\xi\in[w,\widehat{w}]}f_{i}(z,\mathsf{N}(z),\xi)\geq\min_{z\in[x,\widehat{x}],\,z_{i}=x_{i},\,\xi\in[w,\widehat{w}]\atop\eta\in[\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}),\,\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]})]}f_{i}(z,\eta,\xi)\\ &\geq\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}),\overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]}),w,\widehat{w})=d^{\mathrm{H}}_{i}(x,\widehat{x},w,\widehat{w}).\end{split}\tag{13}\]
where the first equality holds by the definition of \(d^{c}\), the first inequality holds by equation (12), the second inequality holds by equation (11), and the last equality holds by the definition of \(d^{\mathrm{H}}\). Similarly, one can show that \(d^{c}_{i}(\widehat{x},x,\widehat{w},w)\leq d^{\mathrm{H}}_{i}(\widehat{x},x,\widehat{w},w)\), for every \(i\in\{1,\ldots,n\}\). This implies that \(\left[\begin{smallmatrix}d^{\mathrm{H}}(x,\widehat{x},w,\widehat{w})\\ d^{\mathrm{H}}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}d^{c}(x,\widehat{x},w,\widehat{w})\\ d^{c}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\), for every \(x\leq\widehat{x}\) and every \(w\leq\widehat{w}\). Note that, by (Abate et al., 2021, Theorem 1), the vector field \(\left[\begin{smallmatrix}d^{c}(x,\widehat{x},w,\widehat{w})\\ d^{c}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\) is monotone with respect to the southeast order \(\leq_{\mathrm{SE}}\) on \(\mathbb{R}^{2n}\). Now, we can use (Michel et al., 2008, Theorem 3.8.1) to deduce that \(\left[\begin{smallmatrix}\underline{x}^{\mathrm{H}}(t)\\ \overline{x}^{\mathrm{H}}(t)\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}\underline{x}^{c}(t)\\ \overline{x}^{c}(t)\end{smallmatrix}\right]\), for every \(t\in\mathbb{R}_{\geq 0}\). This implies that \([\underline{x}^{c}(t),\overline{x}^{c}(t)]\subseteq[\underline{x}^{\mathrm{H}}(t),\overline{x}^{\mathrm{H}}(t)]\). On the other hand, by (Abate et al., 2021, Theorem 2), we know that \(\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})\subseteq[\underline{x}^{c}(t),\overline{x}^{c}(t)]\), for every \(t\in\mathbb{R}_{\geq 0}\). This leads to \(\mathcal{R}(t,\mathcal{X}_{0},\mathcal{W})\subseteq[\underline{x}^{\mathrm{H}}(t),\overline{x}^{\mathrm{H}}(t)]\), for every \(t\in\mathbb{R}_{\geq 0}\).
Regarding part (ii), suppose that all the activation functions are ReLU and let \(L^{(i)}_{[x,\widehat{x}]}\) and \(U^{(i)}_{[x,\widehat{x}]}\) be the intermediate bounds associated with the input perturbation \([x,\widehat{x}]\). Then, for every \(\left[\begin{smallmatrix}x\\\widehat{x}\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}y\\\widehat{y}\end{smallmatrix}\right]\in\mathcal{T}_{\geq 0}^{2n}\), the intermediate bounds in CROWN satisfy:
\[L^{(i)}_{[x,\widehat{x}]}\leq L^{(i)}_{[y,\widehat{y}]},\qquad U^{(i)}_{[y, \widehat{y}]}\leq U^{(i)}_{[x,\widehat{x}]},\qquad\text{ for every }i\in\{1,\ldots,k\}.\]
Since all the activation functions are ReLU, we can use (Zhang et al., 2018, Table 1 and Table 2) to show that
\[\alpha^{(i)}_{L,[x,\widehat{x}]} \leq\alpha^{(i)}_{L,[y,\widehat{y}]},\qquad\alpha^{(i)}_{U,[y, \widehat{y}]}\leq\alpha^{(i)}_{U,[x,\widehat{x}]},\] \[\beta^{(i)}_{L,[x,\widehat{x}]} \leq\beta^{(i)}_{L,[y,\widehat{y}]},\qquad\beta^{(i)}_{U,[y, \widehat{y}]}\leq\beta^{(i)}_{U,[x,\widehat{x}]},\qquad\text{ for every }i\in\{1,\ldots,k\}\]
Therefore, using the equations (4) and (6), and the formula in (Zhang et al., 2018, Theorem 3.2) for \(\Lambda^{(i)},\Delta^{(i)},\Omega^{(i)},\Theta^{(i)}\), we get
\[\underline{A}(x,\widehat{x}) \leq\underline{A}(y,\widehat{y}),\qquad\overline{A}(y,\widehat{y })\leq\overline{A}(x,\widehat{x}),\] \[\underline{b}(x,\widehat{x}) \leq\underline{b}(y,\widehat{y}),\qquad\overline{b}(y,\widehat{y })\leq\overline{b}(x,\widehat{x}),\]
This implies that
\[\underline{\mathsf{G}}_{[x,\widehat{x}]}(w,\widehat{w}) =[\underline{A}(x,\widehat{x})]^{+}w+[\underline{A}(x,\widehat{ x})]^{-}\widehat{w}\leq[\underline{A}(y,\widehat{y})]^{+}w+[\underline{A}(y, \widehat{y})]^{-}\widehat{w}\] \[\leq[\underline{A}(y,\widehat{y})]^{+}v+[\underline{A}(y, \widehat{y})]^{-}\widehat{v}=\underline{\mathsf{G}}_{[y,\widehat{y}]}(v, \widehat{v}).\]
Similarly, we can show that \(\overline{\mathsf{G}}_{[y,\widehat{y}]}(v,\widehat{v})\leq\overline{\mathsf{ G}}_{[x,\widehat{x}]}(w,\widehat{w})\). As a result, for every \(\left[\begin{smallmatrix}x\\\widehat{x}\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}y\\\widehat{y}\end{smallmatrix}\right]\in\mathcal{T}_{\geq 0}^{2n}\) and \(\left[\begin{smallmatrix}w\\\widehat{w}\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}v\\\widehat{v}\end{smallmatrix}\right]\in\mathcal{T}_{\geq 0}^{2n}\),
\[\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[x,\widehat{x}]}(w,\widehat{w})\\ \overline{\mathsf{G}}_{[x,\widehat{x}]}(w,\widehat{w})\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[y,\widehat{y}]}(v,\widehat{v})\\ \overline{\mathsf{G}}_{[y,\widehat{y}]}(v,\widehat{v})\end{smallmatrix}\right] \tag{14}\]
Note that, using property (14) and definition of decomposition function, one can easily check that \(d^{\mathrm{G}},d^{\mathrm{L}},d^{\mathrm{H}}\) are decomposition functions for \(f^{\mathrm{cl}}\) and thus, they are monotone with respect to the southeast order \(\leq_{\mathrm{SE}}\) on \(\mathbb{R}^{2n}\). On the other hand, we have \(\widehat{x}_{[i:x]}\leq\widehat{x}\). Therefore, using (14),
\[\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]})\\ \overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x}_{[i:x]})\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}\underline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})\\ \overline{\mathsf{G}}_{[x,\widehat{x}]}(x,\widehat{x})\end{smallmatrix}\right],\qquad\text{ for every }x\leq\widehat{x} \tag{15}\]
This implies that, for every \(x\leq\widehat{x}\) and every \(w\leq\widehat{w}\),
\[d^{\mathrm{H}}_{i}(x,\widehat{x},w,\widehat{w}) =\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{G}}_{[x, \widehat{x}]}(x,\widehat{x}_{[i:x]}),\overline{\mathsf{G}}_{[x,\widehat{x}]}( x,\widehat{x}_{[i:x]}),w,\widehat{w})\] \[\leq\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{G}}_{[x, \widehat{x}]}(x,\widehat{x}),\overline{\mathsf{G}}_{[x,\widehat{x}]}(x, \widehat{x}),w,\widehat{w})=d^{\mathrm{G}}_{i}(x,\widehat{x},w,\widehat{w})\]
where the inequality holds using equation (15) and the fact that \(\mathsf{F}\) is a decomposition function for \(f^{\mathrm{cl}}\). Similarly, we can show that \(d^{\mathrm{H}}(\widehat{x},x,\widehat{w},w)\leq d^{\mathrm{G}}(\widehat{x},x,\widehat{w},w)\), for every \(x\leq\widehat{x}\) and every \(w\leq\widehat{w}\). This implies that \(\left[\begin{smallmatrix}d^{\mathrm{G}}(x,\widehat{x},w,\widehat{w})\\ d^{\mathrm{G}}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}d^{\mathrm{H}}(x,\widehat{x},w,\widehat{w})\\ d^{\mathrm{H}}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\), for every \(x\leq\widehat{x}\) and every \(w\leq\widehat{w}\). Note that \(d^{\mathrm{G}}\) is a decomposition function for \(f^{\mathrm{cl}}\) and thus the vector field \(\left[\begin{smallmatrix}d^{\mathrm{G}}(x,\widehat{x},w,\widehat{w})\\ d^{\mathrm{G}}(\widehat{x},x,\widehat{w},w)\end{smallmatrix}\right]\) is monotone with respect to the southeast order \(\leq_{\mathrm{SE}}\) on \(\mathbb{R}^{2n}\). Now, we can use (Michel et al., 2008, Theorem 3.8.1) to deduce that \(\left[\begin{smallmatrix}\underline{x}^{\mathrm{G}}(t)\\\overline{x}^{\mathrm{G}}(t)\end{smallmatrix}\right]\leq_{\mathrm{SE}}\left[\begin{smallmatrix}\underline{x}^{\mathrm{H}}(t)\\\overline{x}^{\mathrm{H}}(t)\end{smallmatrix}\right]\), for every \(t\in\mathbb{R}_{\geq 0}\). This implies that \([\underline{x}^{\mathrm{H}}(t),\overline{x}^{\mathrm{H}}(t)]\subseteq[ \underline{x}^{\mathrm{G}}(t),\overline{x}^{\mathrm{G}}(t)]\). The proofs of the other inclusions are similar and we omit them for the sake of brevity.
For the special case when the open-loop system (1) is linear in state and affine in control, one can find closed-form expressions for \(d^{\mathrm{G}}\), \(d^{\mathrm{H}}\), and \(d^{\mathrm{L}}\).
**Corollary 4** (Linear systems): _Consider the control system (1) with \(f(x,u,w)=Ax+Bu+Cw\) with \(A\in\mathbb{R}^{n\times n}\), \(B\in\mathbb{R}^{n\times p}\), and \(C\in\mathbb{R}^{n\times q}\) and with the neural network controller (2). Then we recover all the results of Theorem 3 with the following "global", "hybrid", and "local" functions:_
\[d^{\rm G}(x,\widehat{x},w,\widehat{w}) =\left(\lceil A\rceil^{\rm Mzl}+\left[R(x,\widehat{x})\right]^{+} \right)x+\left(\lfloor A\rfloor^{\rm Mzl}+\left[S(x,\widehat{x})\right]^{-} \right)\widehat{x}+C^{+}w+C^{-}\widehat{w}\] \[d^{\rm H}(x,\widehat{x},w,\widehat{w}) =\lceil A+R(x,\widehat{x})\rceil^{\rm Mzl}x+\lfloor A+S(x, \widehat{x})\rfloor^{\rm Mzl}\widehat{x}+C^{+}w+C^{-}\widehat{w}\] \[d^{\rm L}_{i}(x,\widehat{x},w,\widehat{w}) =\lceil A+R(x,\widehat{x}_{[i:x]})\rceil^{\rm Mzl}x+\lfloor A+S(x,\widehat{x}_{[i:x]})\rfloor^{\rm Mzl}\widehat{x}+C^{+}w+C^{-}\widehat{w}, \quad\forall i\in\{1,\ldots,n\},\]
_where \(R(x,\widehat{x})=B^{+}[\underline{A}(x,\widehat{x})]+B^{-}[\overline{A}(x, \widehat{x})]\) and \(S(x,\widehat{x})=B^{+}[\overline{A}(x,\widehat{x})]+B^{-}[\underline{A}(x, \widehat{x})]\)._
**Proof** Note that a decomposition function for the open-loop system is given by \(\mathsf{F}(x,\widehat{x},u,\widehat{u},w,\widehat{w})=\lceil A\rceil^{\rm Mzl }x+\lfloor A\rfloor^{\rm Mzl}\widehat{x}+B^{+}u+B^{-}\widehat{u}+C^{+}w+C^{-} \widehat{w}\)(Coogan, 2020, Example 3). Using simple algebraic manipulations, one can show that for every \(i\in\{1,\ldots,n\}\) and every \(M\in\mathbb{R}^{n\times n}\),
\[\left(M^{+}x+M^{-}\widehat{x}_{[i:x]}\right)_{i}=\left(\lceil M\rceil^{\rm Mzl }x+\lfloor M\rfloor^{\rm Mzl}\widehat{x}\right)_{i}. \tag{16}\]
Thus, using Theorem 3 and the identity (16), for every \(i\in\{1,\ldots,n\}\), we have
\[d^{\rm H}_{i}(x,\widehat{x},w,\widehat{w}) =\left(\lceil A\rceil^{\rm Mzl}x+\lfloor A\rfloor^{\rm Mzl} \widehat{x}+B^{+}[\underline{A}(x,\widehat{x})]^{+}x+B^{+}[\underline{A}(x, \widehat{x})]^{-}\widehat{x}_{[i:x]}\right)_{i}\] \[\quad\quad+\left(B^{-}[\overline{A}(x,\widehat{x})]^{+}x+B^{-}[ \overline{A}(x,\widehat{x})]^{-}\widehat{x}_{[i:x]}+C^{+}w+C^{-}\widehat{w} \right)_{i}\] \[=\left(\lceil A\rceil^{\rm Mzl}x+\lfloor A\rfloor^{\rm Mzl} \widehat{x}+\lceil B^{+}\underline{A}(x,\widehat{x})\rceil^{\rm Mzl}x+ \lfloor B^{+}\underline{A}(x,\widehat{x})\rfloor^{\rm Mzl}\widehat{x}\right)_ {i}\] \[\quad\quad+\left(\lfloor B^{-}\overline{A}(x,\widehat{x})\rfloor^ {\rm Mzl}x+\lceil B^{-}\overline{A}(x,\widehat{x})\rceil^{\rm Mzl}\widehat{x}+ C^{+}w+C^{-}\widehat{w}\right)_{i}\] \[=\left(\lceil A+B^{+}\underline{A}(x,\widehat{x})+B^{-}\overline {A}(x,\widehat{x})\rceil^{\rm Mzl}x+\lfloor A+B^{+}\overline{A}(x,\widehat{ x})+B^{-}\underline{A}(x,\widehat{x})\rfloor^{\rm Mzl}\widehat{x}+C^{+}w+C^{-} \widehat{w}\right)_{i}\] \[=\left(\lceil A+R(x,\widehat{x})\rceil^{\rm Mzl}x+\lfloor A+S(x, \widehat{x})\rfloor^{\rm Mzl}\widehat{x}+C^{+}w+C^{-}\widehat{w}\right)_{i}.\]
This completes the proof for \(s={\rm H}\). The proofs of the other cases follow similarly.
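To make the closed-form expressions of Corollary 4 concrete, the following minimal Python sketch evaluates \(d^{\rm H}\) for a linear system. It assumes the Metzler operations \(\lceil\cdot\rceil^{\rm Mzl}\) and \(\lfloor\cdot\rfloor^{\rm Mzl}\) split a matrix into its diagonal-plus-nonnegative-off-diagonal part and the remainder, takes the CROWN slope matrices \(\underline{A},\overline{A}\) as precomputed inputs, and omits the bias terms as in the corollary; the function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def metzler_split(M):
    """Split a square matrix M into its Metzler upper part (diagonal kept,
    off-diagonal entries clipped at zero from below) and the remainder,
    so that M = up + low with low having zero diagonal."""
    diag_mask = np.eye(M.shape[0], dtype=bool)
    up = np.where(diag_mask, M, np.maximum(M, 0.0))
    return up, M - up

def d_H_linear(x, xh, w, wh, A, B, C, A_lo, A_up):
    """Sketch of the hybrid decomposition function d^H of Corollary 4 for
    f(x, u, w) = Ax + Bu + Cw, given CROWN slope matrices A_lo, A_up
    valid on the box [x, xh] (bias terms omitted for brevity)."""
    Bp, Bm = np.maximum(B, 0.0), np.minimum(B, 0.0)
    Cp, Cm = np.maximum(C, 0.0), np.minimum(C, 0.0)
    R = Bp @ A_lo + Bm @ A_up            # R(x, xh) from the corollary
    S = Bp @ A_up + Bm @ A_lo            # S(x, xh) from the corollary
    up_R, _ = metzler_split(A + R)       # corresponds to ceil(A + R)^Mzl
    _, low_S = metzler_split(A + S)      # corresponds to floor(A + S)^Mzl
    return up_R @ x + low_S @ xh + Cp @ w + Cm @ wh
```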
**Remark 5**: _The following remarks are in order._
1. _(Tight decomposition function) One can use the tight decomposition function of the closed-loop system (3) in the embedding system (8) to obtain hyper-rectangular over-approximations of the reachable set of the system. However, finding the tight decomposition function requires solving a computationally complicated optimization problem and is generally difficult. In this context, Theorem 3 uses the decomposition function of the open-loop system (1) and the inclusion function of the neural network to establish three under-approximations of the tight decomposition function of the closed-loop system (3)._
2. _(Comparison with the literature) For linear systems, Corollary 4 can be used to show that the forward Euler integration of the embedding system (8) with \(d=d^{\rm H}\) and with a small enough time-step will lead to identical over-approximation sets as (Everett et al., 2021b, Lemma IV.3) applied to the forward Euler discretization of the linear system._
3. _(Generality of the approach) Theorem 3 proposes a general embedding-based framework for verification of the closed-loop system (3) which is based on combining mixed monotone reachability of the open-loop system with interval analysis of the neural network. Using this perspective, our approach can be applied to arbitrary neural network verification algorithms as long as one can construct an inclusion function for the neural network. This is in contrast with most of the existing approaches for neural network closed-loop reachability analysis, which are heavily dependent on the neural network verification algorithm (see for instance (Everett et al., 2021) and (Sidrane et al., 2022))._
4. _(Computational complexity) From a computational perspective, the framework presented in Theorem 3 consists of two main ingredients: (i) evaluating CROWN to compute the inclusion function of the neural network as in Theorem 1, and (ii) integrating the embedding dynamical system (8). For a neural network with \(k\) layers and \(N\) neurons per layer, the complexity of CROWN is \(\mathcal{O}(k^{2}N^{3})\) (Zhang et al., 2018). Moreover, the functions \(d^{\mathrm{G}}\) and \(d^{\mathrm{H}}\) call CROWN once per integration step, while the function \(d^{\mathrm{L}}\) calls CROWN \(n\) times per integration step. The run time of the integration process depends on the form of the open-loop decomposition function \(\mathsf{F}\)._
## 6 Efficient reachability analysis via partitioning
In this section, we develop a suitable partitioning of the state space and combine it with Theorem 3 to obtain a computationally efficient algorithm for generating reachable set over-approximations of the closed-loop system. Interval methods are known to suffer from large over-approximation error due to the wrapping effect (Jaulin et al., 2001, Section 2.2.4). However, their ease of computation allows for combining them efficiently with partitioning strategies. Our partitioning strategy consists of two main components: (i) a uniform division of the state space of the embedding system (8) to compute the neural network inclusion functions using CROWN, and (ii) a uniform subdivision to implement the integration on the embedding system (8). We first pick an \(s\in\{\mathrm{G},\mathrm{H},\mathrm{L}\}\) and start with the initial perturbation set \(\mathcal{X}_{0}\). In the first step, we find the smallest hyper-rectangle containing \(\mathcal{X}_{0}\) by computing \((\overline{x}_{0})_{j}=\max_{x\in\mathcal{X}_{0}}x_{j}\) and \((\underline{x}_{0})_{j}=\min_{x\in\mathcal{X}_{0}}x_{j}\), for every \(j\in\{1,\ldots,n\}\). We then divide the hyper-rectangle \([\underline{x}_{0},\overline{x}_{0}]\) into \(D_{a}\) partitions and obtain the set \(\{[\underline{x}^{1},\overline{x}^{1}],\ldots,[\underline{x}^{D_{a}}, \overline{x}^{D_{a}}]\}\). For every \(k\in\{1,\ldots,D_{a}\}\), we compute the trajectory of the embedding system (8) with \(d=d^{\mathrm{s}}\) and the initial condition \(\left[\begin{smallmatrix}\underline{x}^{k}\\\overline{x}^{k}\end{smallmatrix}\right]\) at time \(\Delta t\) as in Theorem 3. For every \(k\in\{1,\ldots,D_{a}\}\), we divide each state of the hyper-rectangle \([\underline{x}^{k},\overline{x}^{k}]\) into \(D_{s}\) partitions, obtaining the subpartitions \(\{[\underline{x}^{k,1},\overline{x}^{k,1}],\ldots,[\underline{x}^{k,D_{s}}, \overline{x}^{k,D_{s}}]\}\). For every \(k\in\{1,\ldots,D_{a}\}\) and \(l\in\{1,\ldots,D_{s}\}\), we compute the trajectory of the embedding system (8) with \(d=d^{\mathrm{Bs}}\), where, for every \(i\in\{1,\ldots,n\}\),
\[d^{\mathrm{Bs}}_{i}(x,\widehat{x},w,\widehat{w})=\mathsf{F}_{i}(x,\widehat{x},\underline{\mathsf{G}}_{[\underline{x}^{k},\overline{x}^{k}]}(x,\widehat{x} _{[i:x]}),\overline{\mathsf{G}}_{[\underline{x}^{k},\overline{x}^{k}]}( \widehat{x}_{[i:x]},x),w,\widehat{w}) \tag{17}\]
and the initial condition \(\left[\begin{smallmatrix}\underline{x}^{k,l}\\\overline{x}^{k,l}\end{smallmatrix}\right]\) at time \(\Delta t\) as in Theorem 3. We then set \(\mathcal{X}_{1}=\bigcup_{k=1}^{D_{a}}\bigcup_{l=1}^{D_{s}}[\underline{x}^{k,l} (\Delta t),\overline{x}^{k,l}(\Delta t)]\) and repeat this procedure on the initial set \(\mathcal{X}_{1}\). We keep repeating this algorithm until we get to the final time \(T\). Note that our partitioning approach is different from (Everett et al., 2021) and (Xiang et al., 2021) in that we re-partition the state space at every time step. A summary of the above procedure is presented in Algorithm 1.
```
Input: \(s\in\{\mathrm{G},\mathrm{H},\mathrm{L}\}\), the initial set \(\mathcal{X}_{0}\), the final time \(T\), the actuation step \(\Delta t\), divisions \(D_{a}\), subdivisions \(D_{s}\)
Output: the over-approximation of the reachable set \(\overline{\mathcal{R}}(T,0,\mathcal{X}_{0},\mathcal{W})\)
1:\(j\gets 0\)
2:while\(j<\lfloor\frac{T}{\Delta t}\rfloor\)do
3:\(\underline{x}_{i}\leftarrow\min_{x\in\mathcal{X}_{j}}x_{i}\), for every \(i\in\{1,\ldots,n\}\)
4:\(\overline{x}_{i}\leftarrow\max_{x\in\mathcal{X}_{j}}x_{i}\), for every \(i\in\{1,\ldots,n\}\)
5:\(\{[\underline{x}^{1},\overline{x}^{1}],\ldots,[\underline{x}^{D_{a}}, \overline{x}^{D_{a}}]\}\leftarrow\) uniform_partition\(([\underline{x},\overline{x}],D_{a})\)
6:for\(k=\{1,\ldots,D_{a}\}\)do
7: Compute \(\underline{A}(\underline{x}^{k},\overline{x}^{k})\), \(\overline{A}(\underline{x}^{k},\overline{x}^{k})\), \(\underline{b}(\underline{x}^{k},\overline{x}^{k})\), and \(\overline{b}(\underline{x}^{k},\overline{x}^{k})\) using CROWN and (6).
8:\(\{[\underline{x}^{k,1},\overline{x}^{k,1}],\ldots,[\underline{x}^{k,D_{s}}, \overline{x}^{k,D_{s}}]\}\leftarrow\) uniform_partition\(([\underline{x}^{k},\overline{x}^{k}],D_{s})\)
9:for\(l=\{1,\ldots,D_{s}\}\)do
10: Compute \(\left[\begin{smallmatrix}\underline{x}^{k,l}(\Delta t)\\\overline{x}^{k,l}(\Delta t)\end{smallmatrix}\right]\) for system (8) with \(d=d^{\mathrm{Bs}}\) and initial condition \(\left[\begin{smallmatrix}\underline{x}^{k,l}\\\overline{x}^{k,l}\end{smallmatrix}\right]\).
11:endfor
12:endfor
13:\(\mathcal{X}_{j+1}=\bigcup_{k=1}^{D_{a}}\bigcup_{l=1}^{D_{s}}[\underline{x}^{k,l}(\Delta t),\overline{x}^{k,l}(\Delta t)]\)
14:\(j\gets j+1\)
15:endwhile
16:return\(\overline{\mathcal{R}}(T,0,\mathcal{X}_{0},\mathcal{W})\leftarrow\mathcal{X} _{\lfloor\frac{T}{\Delta t}\rfloor}\)
```
**Algorithm 1** Over-approximation of reachable sets of (3)
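As a rough illustration of how Algorithm 1 might be organized in code, the following Python sketch implements the outer partition-CROWN-integrate loop under simplifying assumptions: the uniform partition splits only the first coordinate (the paper partitions every coordinate), a single forward Euler step is taken per actuation interval, and `crown_bounds` and `d_Bs` are user-supplied stand-ins for the verifier call (line 7, e.g., via auto_LiRPA) and the embedding field (17).

```python
import numpy as np

def uniform_partition(lo, hi, D):
    """Split the box [lo, hi] into D equal sub-boxes along the first
    coordinate only (a simplification of Algorithm 1's uniform partition)."""
    edges = np.linspace(lo[0], hi[0], D + 1)
    boxes = []
    for i in range(D):
        l, h = lo.copy(), hi.copy()
        l[0], h[0] = edges[i], edges[i + 1]
        boxes.append((l, h))
    return boxes

def reach_over_approx(d_Bs, crown_bounds, X0, T, dt, Da, Ds):
    """Sketch of Algorithm 1. `X0` is a list of (lo, hi) boxes,
    `crown_bounds(lo, hi)` wraps the neural network verifier, and
    `d_Bs(lo, hi, bounds)` returns the lower/upper components of the
    embedding field (17); both are hypothetical stand-ins."""
    boxes = list(X0)
    for _ in range(int(T / dt)):
        lo = np.min([b[0] for b in boxes], axis=0)   # bounding box of X_j
        hi = np.max([b[1] for b in boxes], axis=0)
        new_boxes = []
        for lo_k, hi_k in uniform_partition(lo, hi, Da):
            bounds = crown_bounds(lo_k, hi_k)        # CROWN on [x^k, xbar^k]
            for lo_kl, hi_kl in uniform_partition(lo_k, hi_k, Ds):
                dl, dh = d_Bs(lo_kl, hi_kl, bounds)  # embedding field
                # one forward Euler step on the embedding system (8)
                new_boxes.append((lo_kl + dt * dl, hi_kl + dt * dh))
        boxes = new_boxes
    return boxes  # their union over-approximates the reachable set at T
```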
## 7 Numerical Simulations
In this section, we show the efficiency of our reachability analysis using numerical experiments on a nonlinear vehicle model and a linear quadrotor model. All the code is available at [https://github.com/gtfactslab/L4DC2023_NNControllerReachability](https://github.com/gtfactslab/L4DC2023_NNControllerReachability).
### Nonlinear Vehicle model
We consider the dynamics of a vehicle adopted from (Polack et al., 2017) satisfying the following nonlinear ordinary differential equation:
\[\dot{p}_{x}=v\cos(\phi+\beta(u_{2})),\quad\dot{p}_{y}=v\sin(\phi+\beta(u_{2})),\quad\dot{\phi}=\frac{v}{\ell_{r}}\sin(\beta(u_{2})),\quad\dot{v}=u_{1}+w, \tag{18}\]
where \([p_{x},p_{y}]^{\top}\in\mathbb{R}^{2}\) is the displacement of the center of mass, \(\phi\in[-\pi,\pi)\) is the heading angle in the plane, and \(v\in\mathbb{R}^{+}\) is the speed of the center of mass. The control input \(u_{1}\) is the applied force subject to disturbance \(w\), the input \(u_{2}\) is the angle of the front wheels, and \(\beta(u_{2})=\arctan\left(\frac{\ell_{f}}{\ell_{f}+\ell_{r}}\tan(u_{2})\right)\) is the side slip angle. We set \(x=[p_{x},p_{y},\phi,v]^{\top}\) and \(u=[u_{1},u_{2}]^{\top}\). We use the following tight decomposition function for the open-loop system:
\[\mathsf{F}(x,\widehat{x},u,\widehat{u},w,\widehat{w})=\begin{bmatrix}d^{b_{1}b_ {2}}\left([v,d^{\cos}(\phi+\beta(u_{2}),\widehat{\phi}+\beta(\widehat{u}_{2})) ]^{\top},[\widehat{v},d^{\cos}(\widehat{\phi}+\beta(\widehat{u}_{2}),\phi+ \beta(u_{2}))]^{\top}\right)\\ d^{b_{1}b_{2}}\left([v,d^{\sin}(\phi+\beta(u_{2}),\widehat{\phi}+\beta( \widehat{u}_{2}))]^{\top},[\widehat{v},d^{\sin}(\widehat{\phi}+\beta(\widehat{u }_{2}),\phi+\beta(u_{2}))]^{\top}\right)\\ d^{b_{1}b_{2}}\left([v,d^{\sin}(\beta(u_{2}),\beta(\widehat{u}_{2}))]^{\top},[ \widehat{v},d^{\sin}(\beta(\widehat{u}_{2}),\beta(u_{2}))]^{\top}\right)\\ u_{1}+w\end{bmatrix},\]
where \(d^{b_{1}b_{2}}\), \(d^{\text{cos}}\), and \(d^{\text{sin}}\) are defined in (Cao et al., 2022).
We designed an offline nonlinear model predictive controller in Python using Casadi (Andersson et al., 2019) to steer the vehicle to the origin while avoiding obstacles. We use a fixed horizon of 20 with an actuation step of 0.25 seconds, and a quadratic cost function with \(Q=\operatorname{diag}(1,1,0,0)\), \(Q_{hor}=\operatorname{diag}(100,100,0,1)\), and other regularizing terms tuned empirically. Additionally, we add circular obstacles with 25% padding as hard constraints with slack variables; in Figures 1 and 2, we consider one centered at \((4,4)\) with a radius of 2.4. We simulated 65000 real trajectories (5s, 20 control actions) with initial conditions uniformly sampled from a specified region, and aggregated the data into a set of \(1.3M\) training pairs \((x,u)\in\mathbb{R}^{4}\times\mathbb{R}^{2}\). A neural network \(u=\mathsf{N}(x)\) with \(2\) hidden layers with \(100\) neurons per layer and ReLU activation was trained in Pytorch to approximate the model predictive controller under a scaled Mean Squared Error loss. We use Algorithm 1 with \(D_{a}=16\) and \(D_{s}=1\) to provide over-approximations for the reachable sets of the vehicle model (18), comparing functions \(d^{\text{G}}\), \(d^{\text{L}}\), and \(d^{\text{H}}\) from Theorem 3. Algorithm 1 line 7 is computed using auto_LiRPA (Xu et al., 2020). The results are shown in Figure 1.
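For readers who want to reproduce the imitation-learning step described above, a minimal PyTorch sketch is given below. The architecture (two hidden ReLU layers of 100 neurons, inputs in \(\mathbb{R}^{4}\), outputs in \(\mathbb{R}^{2}\)) follows the text, while the batch size, learning rate, and the plain (unscaled) MSE loss are illustrative assumptions rather than the exact training setup.

```python
import torch
import torch.nn as nn

def train_controller(X, U, epochs=50, lr=1e-3, batch=1024):
    """Fit u = N(x) to MPC data pairs (x, u): a sketch of the imitation
    learning described in the text, with an unscaled MSE loss."""
    net = nn.Sequential(nn.Linear(4, 100), nn.ReLU(),
                        nn.Linear(100, 100), nn.ReLU(),
                        nn.Linear(100, 2))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(X, U), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for xb, ub in loader:
            opt.zero_grad()
            nn.functional.mse_loss(net(xb), ub).backward()
            opt.step()
    return net
```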
### Linear 6D quadrotor model
We use the linear 6D quadrotor model adopted from (Ivanov et al., 2019) with the dynamics \(\dot{x}=Ax+Bu+c\), where \(A,B,c,u\) are as defined in (Everett et al., 2021) and \(x=[p_{x},p_{y},p_{z},v_{x},v_{y},v_{z}]^{\top}\) is the state of the system, with \(p_{x},p_{y}\), and \(p_{z}\) being the linear displacement in the \(x,y\), and \(z\) directions, respectively, and \(v_{x},v_{y}\), and \(v_{z}\) the velocity in the \(x,y\), and \(z\) directions, respectively. The neural network controller \(u=\mathsf{N}(x)\) consists of \(2\) layers with \(32\) neurons in each layer and is identical to (Everett et al., 2021, Section H). One can compute the following tight decomposition function for the open-loop system: \(\mathsf{F}(x,\widehat{x},u,\widehat{u},w,\widehat{w})=Ax+B^{+}u+B^{-}\widehat {u}+c\).
Figure 1: Performance of the three different functions \(d^{\text{G}}\), \(d^{\text{H}}\), and \(d^{\text{L}}\) in Theorem 3 for over-approximation of the reachable set of the system (18) with neural network \(u=\mathsf{N}(x)\) trained to approximate an offline model predictive controller. The \(p_{x}-p_{y}\) plot of the motion of the vehicle is shown starting from an initial set \([7.9,8.1]^{2}\times[-\frac{2\pi}{3}-0.01,-\frac{2\pi}{3}+0.01]\times[1.99,2.01]\). In terms of accuracy, the size of the over-approximations obtained using the functions \(d^{\text{H}}\) and \(d^{\text{L}}\) are close to each other, but are much smaller than the size of the over-approximations obtained using the function \(d^{\text{G}}\). On the other hand, finding the over-approximations using the functions \(d^{\text{G}}\) and \(d^{\text{H}}\) takes about the same amount of time and is much faster than finding the over-approximations using the function \(d^{\text{L}}\). The runtimes are averaged over \(10\) instances and mean and standard deviation are reported.
We compare the over-approximations of the reachable sets using different integration techniques in Algorithm 1. The results are shown in Figure 3. Note that our Algorithm 1 with the forward Euler method yields results identical to the linear programming approach in (Everett et al., 2021b).
Figure 3: The time evolution of the reachable sets of the 6D quadrotor with the neural network controller \(u=\mathsf{N}(x)\) in \(p_{x},p_{y},p_{z}\) coordinate. The blue hyper-rectangles are the over-approximations of the reachable sets of the system computed using the function \(d^{\mathrm{H}}\) in Theorem 3. **Left plot**: The ODEs are integrated using a Runge-Kutta method. **Middle plot**: The ODEs are integrated using forward Euler method with time step \(0.01\). **Right plot**: The ODEs are integrated using forward Euler method with time step \(0.1\) to match the actuation time step. Note that the reachable set estimates of this plot are identical to those in (Everett et al., 2021b, Figure 10(a)) where the over-approximations are computed using a linear program. We observed that, on the same computer, this linear program takes more than twice as long, \(0.113\pm 0.004\) seconds. The runtimes are averaged over \(10\) instances and mean and standard deviation are reported.
Figure 2: Performance of Algorithm 1 with \(s=\mathrm{H}\) for different partitions \(D_{a}\) and sub-partitions \(D_{s}\) for over-approximation of the reachable set of the system (18) with the neural network \(u=\mathsf{N}(x)\) trained to approximate an offline model predictive controller. All four figures show the \(p_{x}-p_{y}\) plot of the motion of the vehicle. The runtimes are averaged over \(10\) instances and the mean and standard deviation are reported.
## 8 Conclusions
We presented a fast and scalable method, based on mixed monotone theory, to over-approximate the reachable sets of nonlinear systems coupled with neural network controllers. We introduced three methods that intertwine neural network inclusion functions from CROWN (Zhang et al., 2018) with open-loop decomposition functions, with varying empirical results. For future research, we plan to study the role of the neural network verification algorithms in the tightness of our approximations.
|
2302.12697 | Adaptive weighting of Bayesian physics informed neural networks for
multitask and multiscale forward and inverse problems | In this paper, we present a novel methodology for automatic adaptive
weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we
demonstrate that this makes it possible to robustly address multi-objective and
multi-scale problems. BPINNs are a popular framework for data assimilation,
combining the constraints of Uncertainty Quantification (UQ) and Partial
Differential Equation (PDE). The relative weights of the BPINN target
distribution terms are directly related to the inherent uncertainty in the
respective learning tasks. Yet, they are usually manually set a priori, which
can lead to pathological behavior, stability concerns, and to conflicts between
tasks which are obstacles that have deterred the use of BPINNs for inverse
problems with multi-scale dynamics. The present weighting strategy
automatically tunes the weights by considering the multi-task nature of target
posterior distribution. We show that this remedies the failure modes of BPINNs
and provides efficient exploration of the optimal Pareto front. This leads to
better convergence and stability of BPINN training while reducing sampling
bias. The determined weights moreover carry information about task
uncertainties, reflecting noise levels in the data and adequacy of the PDE
model. We demonstrate this in numerical experiments in Sobolev training, and
compare them to an analytically $\epsilon$-optimal baseline, and in a multi-scale
Lotka-Volterra inverse problem. We eventually apply this framework to an
inpainting task and an inverse problem, involving latent field recovery for
incompressible flow in complex geometries. | Sarah Perez, Suryanarayana Maddu, Ivo F. Sbalzarini, Philippe Poncet | 2023-02-24T15:53:01Z | http://arxiv.org/abs/2302.12697v1 | # Adaptive weighting of Bayesian physics informed neural networks for multitask and multiscale forward and inverse problems
###### Abstract
In this paper, we present a novel methodology for automatic adaptive weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we demonstrate that this makes it possible to robustly address multi-objective and multi-scale problems. BPINNs are a popular framework for data assimilation, combining the constraints of Uncertainty Quantification (UQ) and Partial Differential Equation (PDE). The relative weights of the BPINN target distribution terms are directly related to the inherent uncertainty in the respective learning tasks. Yet, they are usually manually set a priori, which can lead to pathological behavior, stability concerns, and to conflicts between tasks, obstacles that have deterred the use of BPINNs for inverse problems with multi-scale dynamics.
The present weighting strategy automatically tunes the weights by considering the multi-task nature of the target posterior distribution. We show that this remedies the failure modes of BPINNs and provides efficient exploration of the optimal Pareto front. This leads to better convergence and stability of BPINN training while reducing sampling bias. The determined weights moreover carry information about task uncertainties, reflecting noise levels in the data and adequacy of the PDE model.
We demonstrate this in numerical experiments in Sobolev training, comparing them to an analytically \(\varepsilon\)-optimal baseline, and in a multi-scale Lotka-Volterra inverse problem. We eventually apply this framework to an inpainting task and an inverse problem, involving latent field recovery for incompressible flow in complex geometries.
**Keywords:** Hamiltonian Monte Carlo, Uncertainty Quantification, Multi-objective training, Adaptive weight learning, Artificial Intelligence, Bayesian physics-informed neural networks.
## 1 Introduction
Direct numerical simulation relies on appropriate mathematical models, derived from physical principles, to conceptualize real-world behavior and provide an understanding of complex phenomena. Experimental data are mainly used for parameter identification and a-posteriori model validation. However, a wide range of real-world applications are characterized by the absence of predictive physical models, notably in the life sciences. Data-driven inference of physical models has therefore emerged as a complementary approach in those applications [27]. The same is true for applications that rely on data assimilation and inverse modeling, for example in the geosciences. This has established data-driven models as complementary means to theory-driven models in scientific applications.
Depending on the amount of data available, several data-driven modeling strategies can be chosen. An overview of the state of the art in data-driven modeling, as well as of the remaining challenges, has recently been published [13] with applications focusing on porous media research. It covers methods ranging from model inference using sparse regression [8, 14, 45, 26], where the symbolic structure of a Partial Differential Equation (PDE) model is inferred from the data, to equation-free forecasting models based on extrapolation of observed dynamics [34, 53, 28]. Therefore, model inference methods are available for both physics-based and equation-free scenarios.
A popular framework combining both scenarios are Physics Informed Neural Networks (PINNs) [40]. They integrate potentially sparse and noisy data with physical principles, such as conservation laws, expressed as mathematical equations. These equations regularize the neural network while the network weights \(\theta\in\mathbb{R}^{d}\) and unknown equation coefficients \(\Sigma\in\mathbb{R}^{p}\) are inferred from data. This has enabled the use of PINNs as surrogate models, for example in fluid mechanics [33, 49, 37]. Overall, PINNs provide an effective alternative to purely data-driven methods, since a lack of high-fidelity data can be compensated by physical regularization [22, 31].
Despite their effectiveness and versatility, PINNs can be difficult to use correctly, as they are prone to a range of training instabilities. This is because their training amounts to a weighted multi-objective optimization problem for the joint set of parameters \(\Theta=\{\theta,\Sigma\}\),
\[\widehat{\Theta}=\arg\min_{\Theta}\,\sum_{k=0}^{K}\lambda_{k}\mathcal{L}_{k}( \Theta), \tag{1}\]
where each term \(\mathcal{L}_{k}(\Theta)\) of the loss function corresponds to a distinct inference task. For typical PINNs, these tasks include: data fitting, PDE residual minimization, boundary and initial condition matching, and additional physical constraints such as divergence-freeness of the learned field. Proper training of this multi-task learning problem hinges on correctly setting the loss term weights \(\lambda_{k}\)[29]. An unsuitable choice of weights can lead to biased optimization [39], vanishing task-specific gradients [46, 10], or catastrophic forgetting [29]. Automatically optimizing the loss weights, however, is not straightforward, especially in highly nonlinear and multi-scale problems.
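As a concrete toy instance of the weighted objective (1), the following PyTorch sketch assembles a two-task PINN loss for a 1D Poisson problem \(u''=f\). The function and parameter names are illustrative (not from any specific PINN library), and the manual choice of the weights `lam` is exactly the tuning burden discussed next.

```python
import torch

def pinn_loss(model, x_data, u_data, x_pde, f, lam):
    """Weighted two-task PINN loss: data fitting plus the residual of
    u'' = f, combined with hand-chosen weights lam = (lam_data, lam_pde)."""
    loss_data = torch.mean((model(x_data) - u_data) ** 2)
    x = x_pde.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss_pde = torch.mean((d2u - f(x)) ** 2)  # PDE residual term
    return lam[0] * loss_data + lam[1] * loss_pde
```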
The problem of how to tune the loss weights of a PINN is widely known and several potential solutions have been developed to balance the objectives [9, 55, 56, 29]. This offers criteria to impartially optimize the different tasks and provide a good exploration of the optimal Pareto front [43]. While it improves reliability by reducing optimization bias, several open questions remain regarding the confidence in the predictions, noise estimates, and model adequacy [15, 31, 60]. These questions motivate a need for uncertainty quantification (UQ) to ensure trustworthy inference, extending PINNs to Bayesian inference in the form of BPINNs [59, 21]. How to adapt successful PINN weighting strategies to BPINNs and integrate them with UQ, however, is an open problem.
BPINNs enable integration of UQ by providing posterior distribution estimates of the predictions -- also known as Bayesian Model Averages [57] -- based on Markov Chain Monte Carlo (MCMC) sampling. One of the most popular MCMC schemes for BPINNs is Hamiltonian Monte Carlo (HMC), which provides a particularly efficient sampler for high-dimensional problems with smooth (physical) dynamics [4]. Although HMC has been shown to be more efficient for BPINNs, its formulation implies potential energy that is related to the cost function of the PINN. The multi-objective loss of a PINN then directly translates to multi-potential energy for BPINN sampling. Therefore, it suffers from the same difficulties to avoid bias and provide an efficient exploration of the Pareto front.
This often causes HMC to not correctly explore the Pareto front during BPINN training. Efficient exploration of a high-dimensional Pareto front remains challenging for multi-task and multi-scale learning problems incorporating UQ and has not yet been addressed in the Bayesian case. The challenge arises because each term of the multi-potential energy is weighted within the Bayesian framework by parameters that relate to scaling, noise magnitude, and ultimately the inherent uncertainties in the different learning tasks [10]. While these weights are recognized as critical parameters in the sampling procedure, they are mostly hand-tuned [32, 21, 31], introducing hidden priors when the true uncertainties are not known. Appropriately setting these parameters is neither easy nor computationally efficient and can lead to either a biased estimation or a considerable waste of energy in ineffective tuning. Properly optimizing these weights is therefore
essential to ensure that HMC samples from the posterior distribution around the Pareto front. This is not only required for robust BPINN training, but also for enhanced reliability of the UQ estimates.
In order to robustly handle multi-task UQ inference in BPINNs, the open questions addressed in this article are: How can we automatically adjust the weights in BPINNs to efficiently explore the Pareto front and avoid bias in the UQ inference? How can we manage sensitivity to the noise distributions (homo- or heteroscedastic) and their amplitude, without imposing hidden priors?
We start by characterizing potential BPINN failure modes, which are particularly prevalent for multi-scale or multi-task inverse problems. We then propose a modified HMC sampler, called Adaptively Weighted Hamiltonian Monte Carlo (AW-HMC), which avoids the problem by balancing gradient variances during sampling. We show that this leads to a weighted posterior distribution that is well suited to exploring the Pareto front. Our benchmarks show that this strategy reduces sampling bias and enhances the robustness of BPINNs. In particular, our method improves the stability along the leapfrog steps during training, since it ensures optimal integration and frees the sampling from excessive time step decrease. Moreover, it is able to automatically adjust the potential energy weights and, with them, the uncertainties, according to the sensitivity to noise of each term and their different scaling. This considerably improves the reliability of the UQ by reducing the need for hyperparameter tuning, including the prior distributions, and reducing the need for prior knowledge of noise levels or appropriate task scaling. We show that this improves BPINNs with respect to both the convergence rate and the computational cost. Moreover, we introduce a new metric for the quality of the prediction, quantifying the convergence rate during the marginalization step. We finally demonstrate that our proposed approach enables the use of BPINNs in multi-scale and multi-task Bayesian inference over complex real-world problems with sparse and noisy data.
The remainder of this manuscript is organized as follows: In Sect. 2, we review the general principles of BPINNs and the HMC sampler and characterize their failure mode in Sobolev training. Sect. 3 describes the proposed adaptive weighting strategy for UQ using BPINNs. We validate this strategy in a benchmark with a known analytical solution in Sect. 3.2 and Appendix B. We then demonstrate the effectiveness of the proposed AW-HMC algorithm on a Lotka-Volterra inverse problem in Sect. 3.3, focusing on a multi-scale inference of dynamical parameters. We then illustrate the use of AW-HMC in a real-world problem from fluid dynamics in Sect. 4. This particularly demonstrates successful inpainting of incompressible stenotic blood flow from sparse and noisy data, highlighting UQ estimates consistent with the noise level and noise sensitivity. Finally, Sect. 4.2 considers an inverse flow problem in a complex geometry, where we infer both the flow regime (the inverse Reynolds number) and the latent pressure field from partial velocity measurements. We conclude and discuss our observations in Sect. 5.
## 2 From Uncertainty Quantification to Bayesian Physics-Informed Neural Networks: concepts and limitations
Real-world applications of data-driven or black-box surrogate models remain a challenging task. Predictions often need to combine prior physical knowledge, whose reliability can be questioned, with sparse and noisy data exhibiting measurement uncertainties. These real-world problems also suffer from non-linearity [21], scaling [11], and stiffness [19] issues that can considerably impact the efficiency of the usual methodologies. This calls for the development of data-driven modeling strategies that robustly address these issues.
At the same time, the need to build upon Bayesian inference raises the question of how to ensure trustworthy intervals in the estimations. This is important for quantifying uncertainties on both the underlying physical model and the measurement data, although it may be challenging in the context of stiff, multi-scale, or multi-fidelity problems. Therefore, embedding UQ in the previous data-driven methodologies is essential to effectively manage real-world applications.
### HMC-BPINN concepts and principles
The growing popularity of Bayesian Physics-Informed Neural Networks [59, 21, 20, 32, 24] offers the opportunity to incorporate uncertainty quantification into PINN standards, and benefit from their predictive power. It features a Bayesian framework that claims to handle real-world sparse and noisy data while also conferring reliability on the models and their predictions.
The basic idea behind a BPINN is to consider each unknown, namely the neural network and inverse parameters, \(\Theta\), as random variables with specific distributions instead of single parameters as for a PINN. The different sampling strategies all aim to explore the posterior distribution of \(\Theta\)
\[P(\Theta|\mathcal{D},\mathcal{M})\propto P(\mathcal{D}|\Theta)P(\mathcal{M}| \Theta)P(\Theta) \tag{2}\]
through a marginalization process, given some measurement data \(\mathcal{D}\) and a presumed model \(\mathcal{M}\), rather than looking for the best approximation satisfying the optimization problem (1). The posterior distribution expression (2) is obtained from Bayes theorem and basically involves a data-fitting likelihood term \(P(\mathcal{D}|\Theta)\), a PDE-likelihood term \(P(\mathcal{M}|\Theta)\) and a joint prior distribution \(P(\Theta)\). These specific terms are detailed, case-by-case, in the applications, along with the different sections. The Bayesian marginalization then transfers the distribution of the parameters \(\Theta\) into a posterior distribution of the predictions, also known as a Bayesian Model Average (BMA):
\[\underbrace{\mathcal{P}(y|x,\mathcal{D},\mathcal{M})}_{\text{predictive BMA distribution}}=\int\underbrace{\mathcal{P}(y|x,\Theta)}_{\text{prediction for $\Theta$}}\underbrace{\mathcal{P}(\Theta|\mathcal{D},\mathcal{M})}_{\text{ posterior}}\mathrm{d}\Theta \tag{3}\]
where \(x\) and \(y\) respectively refer to the input (e.g spatial and temporal points) and output (e.g field prediction) of the neural network. In this equation, the different predictions arising from all the \(\Theta\) parameters sampling (2) are weighted by their posterior probability and averaged to provide an intrinsic UQ of the BPINN output. Overall, BPINNs introduce a Bayesian marginalization of the parameters \(\Theta\) which forms a predictive distribution (3) of the quantities of interest (QoI), namely the learned fields and inverse parameters.
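In practice, the integral (3) is approximated by a Monte Carlo average over the posterior samples of \(\Theta\). A minimal sketch of this marginalization step is given below, with `predict` standing in for the BPINN forward pass (an assumption for illustration, not the paper's API).

```python
import torch

def bayesian_model_average(posterior_samples, predict, x):
    """Monte Carlo estimate of the BMA (3): evaluate the surrogate for each
    posterior sample of Theta and average, reporting the pointwise standard
    deviation as the intrinsic UQ of the BPINN output."""
    preds = torch.stack([predict(theta, x) for theta in posterior_samples])
    return preds.mean(dim=0), preds.std(dim=0)
```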
Different approaches were developed for Bayesian inference in deep neural networks, including Variational Inference [58, 23] and Markov Chain Monte Carlo methods. A particular MCMC sampler based on Hamiltonian dynamics -- the Hamiltonian Monte Carlo (HMC) -- has drawn increasing attention due to its ability to handle high-dimensional problems by taking into account the geometric properties of the posterior distribution. Betancourt explained the efficiency of HMC through a conceptual understanding of the method [4] and theoretically demonstrated the ergodicity and convergence of the chain [6, 25]. From a numerical perspective, Yang et al. [59] highlighted the superior performance of the HMC formulation of BPINNs on forward and inverse problems compared to its Variational Inference counterpart. This has established HMC as a highly effective MCMC scheme for BPINNs, both theoretically and numerically.
In the following, we briefly review the basic principles of the classical BPINNs-HMC and point out their limitations, especially in the case of multi-objective and multi-scale problems.
The idea of HMC is to assume a fictive particle of successive positions and momenta \((\Theta,r)\) which follows the Hamiltonian dynamics on the frictionless negative log posterior (NLP) geometry. It requires the auxiliary variable \(r\) to immerse the sampling of (2) into the exploration of a joint probability distribution \(\pi(\Theta,r)\) in the phase space
\[\pi(\Theta,r)\sim\mathrm{e}^{-H(\Theta,r)}. \tag{4}\]
The latter relies on a particular decomposition of the Hamiltonian \(H(\Theta,r)=U(\Theta)+K(r)\) where the potential and kinetic energies, \(U(\Theta)\) and \(K(r)\) respectively, are chosen such that
\[\pi(\Theta,r)\propto P(\Theta|\mathcal{D},\mathcal{M})\mathcal{N}(r|0,\mathbf{ M}) \tag{5}\]
and the momentum follows a centered multivariate Gaussian distribution, with a covariance -- or mass -- matrix \(\mathbf{M}\) often scaled identity. The Hamiltonian of the system is thus given by
\[H(\Theta,r)=U(\Theta)+\frac{1}{2}r^{T}\mathbf{M}^{-1}r \tag{6}\]
where the potential energy directly relates to the target posterior distribution. This energy term is usually expressed as the negative log posterior \(U(\Theta)=-\mathrm{ln}P(\Theta|\mathcal{D},\mathcal{M})\), which results in a multi-potential as detailed in Sect. 2.2. This ensures that the marginal distribution of \(\Theta\) provides immediate samples of the target posterior distribution
\[P(\Theta|\mathcal{D},\mathcal{M})\sim\mathrm{e}^{-U(\Theta)} \tag{7}\]
since an efficient exploration of the joint distribution \(\pi(\Theta,r)\) directly projects to an efficient exploration of the target distribution, as described by Betancourt [4]. The HMC sampling process alternates between deterministic steps, where we solve for the path of a frictionless particle given the Hamiltonian dynamical system
\[\left\{\begin{array}{l}\mathrm{d}\Theta=\mathbf{M}^{-1}r\,\mathrm{d}t\\ \mathrm{d}r=-\nabla U(\Theta)\,dt,\end{array}\right. \tag{8}\]
and stochastic steps, where the momentum is sampled according to the previously introduced Gaussian distribution. As Hamilton's equation (8) theoretically preserves the total energy of the system, each deterministic step is then constrained to a specific energy level while the stochastic steps enable us to diffuse across the energy level set for efficient exploration in the phase space. This theoretical conservation of the energy level set during the deterministic steps requires numerical schemes that ensure energy conservation.
A symplectic integrator is thus commonly used to numerically solve the Hamiltonian dynamics (8): the Störmer-Verlet scheme, also known as the leapfrog method. However, these integrators are not completely free of discretization errors that may disrupt, in practice, the Hamiltonian conservation through the deterministic iterations. Hence, a correction step is added to the process to reduce the bias induced by these discretization errors in the numerical integration: this results in a Metropolis-Hastings criterion based on the Hamiltonian transition. This acceptance criterion tends to preserve energy by rejecting samples that lead to a divergent probability transition. The exploration of the deterministic trajectories though remains sensitive to two specific hyperparameters managing the integration time: the step size \(\delta t\) and the number of iterations \(L\) used in the leapfrog method. Tuning these parameters can be challenging, especially if the posterior distribution presents pathological or high-curvature regions [4], yielding instability, under-performance, and poor validity of the MCMC estimators. Despite the use of numerical schemes that preserve the Hamiltonian properties, a conventional HMC-BPINN can be confronted with pathological discrepancies.
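For reference, a generic leapfrog-based HMC transition with an identity mass matrix can be sketched as follows; this is a textbook-style implementation for illustration, not the specific sampler used in this work.

```python
import numpy as np

def hmc_step(theta, U, grad_U, dt, L, rng):
    """One HMC transition: draw a Gaussian momentum, integrate Hamilton's
    equations (8) with the leapfrog scheme, then apply the
    Metropolis-Hastings correction for the discretization error."""
    r = rng.standard_normal(theta.shape)   # momentum r ~ N(0, I)
    th, p = theta.copy(), r.copy()
    p -= 0.5 * dt * grad_U(th)             # half step in momentum
    for _ in range(L - 1):
        th += dt * p                       # full step in position
        p -= dt * grad_U(th)               # full step in momentum
    th += dt * p
    p -= 0.5 * dt * grad_U(th)             # final half step in momentum
    H0 = U(theta) + 0.5 * r @ r            # initial total energy
    H1 = U(th) + 0.5 * p @ p               # proposed total energy
    if rng.random() < np.exp(min(0.0, H0 - H1)):
        return th                          # accept the proposal
    return theta                           # reject: keep the previous sample
```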
To counteract these divergence effects, efforts have been put into developing strategies to either adaptively set the trajectory length \(L\) [17] while preserving the detailed balance condition or use standard adaptive-MCMC approaches to adjust the step size \(\delta t\) on the fly [5]. In this regard, one of the most popular adaptive strategies is the No-U-Turn Sampler (NUTS) of Hoffman and Gelman [16]. Nonetheless, these divergent trajectories indicate significant bias in the MCMC estimation even if such adaptive methods may offer an alternative to overcome them. This raises the question of the validity of this adaptation when facing multi-potential energy terms that lead to significantly different geometrical behaviors or different scalings in the posterior distribution. In fact, the adaptive strategy mostly tunes the leapfrog parameters so that the most sensitive term respects energy conservation, which may result in poorly-chosen hyperparameters for the other potential energy terms, and hence for the whole posterior distribution. This reflects the limitations of such adaptive strategies that rely on adjusting the leapfrog hyperparameters.
When these divergent pathologies become prevalent, another approach suggested by Betancourt [4] is to regularize the target distribution, which can become strenuous in real-world applications and lead to additional tuning. Nevertheless, it offers a great opportunity to investigate the impact of each learning task on the overall behavior of the target distribution and paves the way for novel adaptive weighting strategies.
In the next sections, we focus particularly on the challenges arising from real-world multi-tasks and multi-scale paradigms. We show that present BPINN methods result in major failures in these cases and we identify the main pathologies using powerful diagnostics based on these divergent probability transitions.
### The multi-objective problem paradigm
As for the issue of the multi-objective optimization problem in a PINN, sampling of the target posterior distribution (2) arising from a direct or inverse problem requires the use of a multi-potential energy term \(U(\Theta)\). Furthermore, in real-world applications, we have to deal with sparse and noisy measurements whose fidelity can also cover different scales: this is the case of multi-fidelity problems with multi-source data [22, 31].
For the sake of generality, we introduce a spatio-temporal domain \(\Omega=\widetilde{\Omega}\times\mathcal{T}\) with \(\widetilde{\Omega}\subset\mathbb{R}^{n}\), \(n=1,2,3\)
and we assume a PDE system in the following form:
\[\left\{\begin{array}{ll}\mathcal{F}(u(t,x),\ \Sigma)&=0,\qquad(t,x)\in\Omega\\ \mathcal{H}(u(t,x),\ \Sigma)&=0,\qquad(t,x)\in\Omega\\ \mathcal{B}(u(t,x),\ \Sigma)&=0,\qquad(t,x)\in\Omega^{\partial}:=\partial\widetilde{ \Omega}\times\mathcal{T}\\ \mathcal{I}(u(t,x),\ \Sigma)&=0,\qquad(t,x)\in\Omega^{I}:=\widetilde{\Omega} \times\mathcal{T}_{0}\end{array}\right. \tag{9}\]
where \(u\) is the principal unknown, \(\mathcal{F}\) the main differential equation (e.g the Navier-Stokes equation), \(\mathcal{H}\) an additional constraint (e.g incompressibility condition), \(\mathcal{B}\) and \(\mathcal{I}\) the boundary and initial conditions respectively, and \(\Sigma\) the PDE model parameters, either known or inferred. Some partial measurements of the solution field \(u\) may also be available in a subset \(\Omega^{u}\subset\Omega\). Such a continuous description of the spatio-temporal domain is then discretized to enable the selection of the training dataset, which is used in BPINNs sampling.
We first define the dataset \(\mathcal{D}\) of training data which is decomposed into \(\mathcal{D}=\mathcal{D}^{\Omega}\cup\mathcal{D}^{\partial}\cup\mathcal{D}^{I}\cup \mathcal{D}^{u}\) and includes scattered and noisy measurements sampled in their respective sets \(\Omega\), \(\Omega^{\partial}\), \(\Omega^{I}\), and \(\Omega^{u}\). Regarding data corruption, we consider independent Gaussian noise for the sparse observations on \(u\), such that \(\mathcal{D}^{u}\) is defined as
\[\mathcal{D}^{u}=\{(t_{i},x_{i},u_{i}),\quad\text{s.t}\quad(t_{i},x_{i})\in \Omega^{u}\quad\text{and}\quad u_{i}:=u(t_{i},x_{i})+\xi_{u}(t_{i},x_{i}),\,i=1...N^{u}\} \tag{10}\]
where the noise \(\xi_{u}\sim\mathcal{N}(0,\sigma_{u}^{2}I)\) and the standard deviation \(\sigma_{u}\) might be estimated from the sensor fidelity, if accessible. The neural network component of the BPINN then provides a surrogate model of \(u\) denoted \(u_{\Theta}\) for each sample of the parameters \(\Theta=\{\theta,\Sigma\}\), whose prior distribution is referred to as \(P(\Theta)\). The latter takes into account both the priors on the neural network parameters \(\theta\), which are assumed to be centered and independent Gaussian distributions, and the priors on the model parameters \(\Sigma\), so that \(P(\Theta)=P(\theta)P(\Sigma)\) under the independence condition. In the case of a forward problem, where the PDE model parameters are prescribed, the prior distribution reduces to \(P(\theta)\). When some measurements of the unknown are available, meaning \(\mathcal{D}^{u}\) is not an empty set, which is the case in inverse or inpainting problems, then the surrogate model \(u_{\Theta}\) should satisfy a data-fitting likelihood term in the Bayesian framework. This consists in quantifying, over the set \(\mathcal{D}^{u}\), the fit between the neural network prediction and the training data defined by:
\[P(\mathcal{D}^{u}|\Theta)\propto\prod_{i=1}^{N^{u}}\exp\left(\frac{-\left(u_{ \Theta}(t_{i},x_{i})-u_{i}\right)^{2}}{2\sigma_{u}^{2}}\right). \tag{11}\]
Similarly, the boundary conditions of the model output are imposed on the set \(\mathcal{D}^{\partial}\)
\[\mathcal{D}^{\partial}=\left\{(t_{i},x_{i},\mathcal{B}(u_{i})),\quad\text{s.t }\quad(t_{i},x_{i})\in\Omega^{\partial}\quad\text{and}\quad\mathcal{B}(u_{i}): =\mathcal{B}(u(t_{i},x_{i}))+\xi_{b}(t_{i},x_{i}),\,i=1...N^{\partial}\right\} \tag{12}\]
by satisfying the following boundary-likelihood term
\[P(\mathcal{D}^{\partial}|\Theta)\propto\prod_{i=1}^{N^{\partial}}\exp\left( \frac{-\left(\mathcal{B}(u_{\Theta}(t_{i},x_{i}))-\mathcal{B}(u_{i})\right)^{ 2}}{2\sigma_{b}^{2}}\right). \tag{13}\]
The noise sensitivity on the boundary condition term is also characterized by independent Gaussian distributions in the sense that \(\xi_{b}\sim\mathcal{N}(0,\sigma_{b}^{2}I)\) where the standard deviation \(\sigma_{b}\) needs to be estimated. Such a distinction between \(\xi_{u}\) and \(\xi_{b}\) is prescribed since there is no guarantee that the data corruption is uniform: in fact, the measurement distribution variances can differ locally when facing heteroscedastic noise. This is the case in geosciences, where data-driven modeling based on X-Ray microtomography images require special attention on this boundary noise estimation \(\xi_{b}\). This is mainly due to the artifact limitations (e.g partial volume effect, edge-enhancement) that tend to enhance the blurring effects at the material interface and therefore impact the quantification of the medium effective properties, such as the permeability and micro-porosity [35, 2]. The same holds for the initial condition with potentially a different sensitivity \(\xi_{i}\). In a BPINN, the previous data-fitting terms are complemented with physical principles that regularize the neural network predictions, given the PDE system (9).
Concerning the PDE-likelihood term, the \(\mathcal{D}^{\Omega}\) dataset is defined as the training points on which we force the PDE and the additional physical constraint to be satisfied by the surrogate modeling:
\[\mathcal{D}^{\Omega}=\left\{(t_{i},x_{i})\in\Omega,\quad\mathcal{F}(u_{\Theta}(t _{i},x_{i}))=\xi_{f}(t_{i},x_{i})\quad\text{and}\quad\mathcal{H}(u_{\Theta}(t_{ i},x_{i}))=\xi_{h}(t_{i},x_{i}),\,i=1...N^{\Omega}\right\} \tag{14}\]
with \(\xi_{f}\) and \(\xi_{h}\) standing for the model uncertainty in both equations, which are usually unknown and can easily lead to physical model misspecification. According to these notations, a forward problem consists in \(\mathcal{D}^{u}=\varnothing\) and \(\Sigma\) is known to perform a direct prediction of the field \(u_{\Theta}\) on \(\Omega\) based only on the PDE physical assumptions. On the contrary, an inverse problem aims to infer \(\Sigma\) using together the PDE model with the partial and noisy information \(\mathcal{D}^{u}\) of the predictive field \(u\). Finally, an inpainting problem relies on these partial measurements to complement and recover some missing information on the predictive field, in addition to the PDE-based priors.
Finally, the target posterior distribution of \(\Theta\) (2) is decomposed according to the Bayes rule, into a sequence of multi-task likelihood terms -- involving data-fitting and PDE likelihood -- and the priors:
\[P(\Theta|\mathcal{D},\mathcal{M})\propto P(\mathcal{D}^{u}|\Theta)P(\mathcal{D}^ {\partial}|\Theta)P(\mathcal{D}^{I}|\Theta)P(\mathcal{D}^{\Omega},\mathcal{F}| \Theta)P(\mathcal{D}^{\Omega},\mathcal{H}|\Theta)P(\theta)P(\Sigma) \tag{15}\]
which results, for the HMC sampler, in the multi-potential energy
\[\begin{split} U(\Theta)=&\frac{\|u_{\Theta}-u\|_{ \mathcal{D}^{u}}^{2}}{2\sigma_{u}^{2}}+\frac{\|\mathcal{B}(u_{\Theta})- \mathcal{B}(u)\|_{\mathcal{D}^{\partial}}^{2}}{2\sigma_{b}^{2}}+\frac{\| \mathcal{I}(u_{\Theta})-\mathcal{I}(u)\|_{\mathcal{D}^{I}}^{2}}{2\sigma_{i}^ {2}}\\ &\quad+\frac{\|\mathcal{F}(u_{\Theta})\|_{\mathcal{D}^{\Omega}}^{ 2}}{2\sigma_{f}^{2}}+\frac{\|\mathcal{H}(u_{\Theta})\|_{\mathcal{D}^{\Omega} }^{2}}{2\sigma_{h}^{2}}+\frac{\|\theta\|_{\mathcal{R}^{d}}^{2}}{2\sigma_{ \theta}^{2}}+\frac{\|\Sigma-\mu_{\Sigma}\|_{\mathcal{R}^{p}}^{2}}{2\sigma_{ \Sigma}^{2}}\end{split} \tag{16}\]
according to equation (7). The notation \(\|\cdot\|\) refers either to the RMS (root mean square) norm -- inherited from the functional \(L^{2}\)-norm on the open set \(\Omega\) -- for the log-likelihood terms or to the usual Euclidean norm for the log-prior terms. In addition, the multi-potential (16) is written here, in a general framework, based on the prior assumptions \(P(\theta)\sim\mathcal{N}(0,\sigma_{\theta}^{2}I_{d})\) and \(P(\Sigma)\sim\mathcal{N}(\mu_{\Sigma},\sigma_{\Sigma}^{2}I_{p})\). We note that the log-prior term can be regarded as an \(L^{2}\)-regularization in the equivalent constrained optimization problem. Nonetheless, suitable selection of these prior distributions -- hence appropriate tuning of the parameters \(\sigma_{\theta}\), \(\mu_{\Sigma}\), and \(\sigma_{\Sigma}\) -- is usually not straightforward and is time-consuming. Overall, equation (16) highlights that, even in a simple problem setup, a BPINN may face a potential energy term that closely resembles the weighted multi-objective loss appearing in a PINN, whose weights are mainly hand-tuned.
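A minimal sketch of how such a multi-potential can be assembled is shown below. For brevity it sums every squared residual (whereas (16) uses RMS norms for the likelihood terms and Euclidean norms for the priors); the hand-chosen standard deviations `sigmas` enter exactly as the critical weights discussed in the text.

```python
import torch

def multi_potential(residuals, sigmas):
    """Multi-potential energy in the spirit of (16): each entry of
    `residuals` is the residual vector of one task (data fit, boundary,
    initial condition, PDE, constraint, priors) and each sigma is the
    corresponding standard deviation weighting that task."""
    return sum((r ** 2).sum() / (2.0 * s ** 2)
               for r, s in zip(residuals, sigmas))
```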
Therefore, the main challenge is to sample near the Pareto-optimal solution such that the BPINNs provide efficient and reliable prediction and UQ. Otherwise, the risk is that the samples obtained gravitate around a local minimum, corresponding to one of the multi-potential terms at the cost of the others.
Secondly, while the standard deviations \(\sigma_{\bullet}\) are critical parameters to select and are related to the uncertainties on the inherent tasks, most of the authors either assign them a given value or train them as additional hyperparameters [32, 50, 21]. This can lead to highly biased predictions, especially when setting the PDE-residual standard deviations \(\sigma_{f}\) and \(\sigma_{h}\) which introduce strong priors on the model adequacy.
Recently, Psaros et al. [38] discussed, _inter alia_, alternatives generalizing the adjustment of some of these parameters -- mainly the data-fitting standard deviations -- in the context of unknown and heteroscedastic noise distributions. They either rely on _offline learning_ at the cost of a pre-trained Generative Adversarial Network (GAN) or on _online learning_ of the weights based on additional parameter training. In particular, the number of these additional parameters may increase drastically when considering location-dependent variances, as suggested in [38], for realistic applications, and consequently suffer from computational costs. The open question remains of how to deal with such unknown (homo- or heteroscedastic) noise distributions without adding computational complexity by learning additional hyperparameters.
Finally, although physical model misspecification was pointed out as part of the total uncertainty quantification, it is not addressed in [38] when a misleading model uncertainty is assumed on the physical constraints \(\mathcal{F}\) and \(\mathcal{H}\). Avoiding strong priors on the model adequacy -- introduced by hand-tuning the hyperparameters \(\sigma_{f}\) and \(\sigma_{h}\), which are usually unknown or arbitrarily prescribed -- therefore remains a challenging task.
In view of this, we wanted to test the robustness of the usual BPINNs-HMC approach, as introduced in Sect. 2.1, on a test case that demonstrates the issues arising from the multi-objective and multi-scale nature of the sampling, using Sobolev training of neural networks.
### Sobolev training for BPINNs failure mode
Sobolev training is a special case of multi-objective BPINN sampling that likely leads to stiff learning due to the disparate scales involved [29]. Nevertheless, it is commonly used in the machine learning community to improve the prediction efficiency and generalization of neural networks, by adding information about the target function derivatives to the loss or its equivalent potential energy [12, 48, 54, 62].
This special training provides a baseline for testing the robustness of the present BPINNs-HMC method against the failure mode of vanishing task-specific gradients [29]. It also offers the opportunity to benchmark against the analytically \(\varepsilon\)-optimal weights that are known for Sobolev multi-objective optimization [29].
The BPINNs-HMC sampling is tested here on a Sobolev regression task, which means the dataset is restricted to \(\mathcal{D}=\mathcal{D}^{u}\) involving measurements of a function and its derivatives \(D_{x}^{k}\), \(k\geq 1\) up to order \(K\), such that the target posterior distribution is
\[P(\Theta|\mathcal{D})\propto\prod_{k=0}^{K}P(\mathcal{D}^{u},D_{x}^{k}u|\Theta )P(\Theta) \tag{17}\]
and the potential energy hence has the general form
\[U(\Theta)=\sum_{k=0}^{K}\left[\frac{\lambda_{k}}{2\sigma_{k}^{2}}\|D_{x}^{k}u _{\Theta}-D_{x}^{k}u\|^{2}\right]+\frac{\lambda_{K+1}}{2\sigma_{K+1}^{2}}\| \Theta\|^{2}:=\sum_{k=0}^{K+1}\lambda_{k}\mathcal{L}_{k}(\Theta) \tag{18}\]
where \(L_{k}=\lambda_{k}\mathcal{L}_{k}\) refers to the weighted \(k^{th}\) objective term, with \(\lambda_{k}\) some positive weighting parameters to be defined (see Sect. 3.1). In this section, we use only a uniform weighting strategy, with \(\lambda_{k}=1,\forall k\), which corresponds to the classical BPINNs-HMC formulation. For the sake of readability, equation (18) gathers the log-prior terms of the neural network and inverse parameters, assuming they all have the same prior distribution.
We first introduce a 1D Sobolev training up to second-order derivatives, with a test function \(u(x)=\sin^{3}(\omega x)\) defined on \(\Omega=[-0.7,0.7]\) for \(\omega=6\). We use 100 training points, set the leapfrog parameters \(L=100\) and \(\delta t=1\mathrm{e}{-3}\) for the number of iterations and time step respectively, and perform \(N_{s}=200\) sampling iterations. We also restrict the test to a function approximation problem so that subsequently \(\Theta\) refers only to the neural network parameters. In the following and unless otherwise indicated, all the \(\sigma_{k}\) are equal to one since in practice we do not have access to the values of these parameters for the derivatives or residual PDE terms, but rather to the observation noise on the data field \(u\) only, if available.
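To fix ideas, the following minimal PyTorch sketch (not taken from the original implementation; the network architecture is an illustrative assumption) assembles the uniformly weighted Sobolev potential (18) up to second-order derivatives for the test function \(u(x)=\sin^{3}(\omega x)\), using automatic differentiation for the derivative terms.

```python
import torch

torch.manual_seed(0)

# Illustrative assumption: a small fully connected network for u_Theta(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

omega = 6.0
x = torch.linspace(-0.7, 0.7, 100).reshape(-1, 1).requires_grad_(True)

# Target u = sin^3(omega x) and its first two analytical derivatives.
u = (torch.sin(omega * x) ** 3).detach()
du = (3 * omega * torch.sin(omega * x) ** 2 * torch.cos(omega * x)).detach()
d2u = (3 * omega ** 2 * torch.sin(omega * x)
       * (2 * torch.cos(omega * x) ** 2 - torch.sin(omega * x) ** 2)).detach()

def potential(lambdas=(1.0, 1.0, 1.0), sigma=1.0):
    """Uniformly weighted Sobolev potential U(Theta) of eq. (18), K = 2."""
    u_t = net(x)
    du_t = torch.autograd.grad(u_t.sum(), x, create_graph=True)[0]
    d2u_t = torch.autograd.grad(du_t.sum(), x, create_graph=True)[0]
    terms = [((u_t - u) ** 2).mean(),      # k = 0: data-fitting term (RMS norm)
             ((du_t - du) ** 2).mean(),    # k = 1: first-derivative term
             ((d2u_t - d2u) ** 2).mean()]  # k = 2: second-derivative term
    return sum(l * t / (2 * sigma ** 2) for l, t in zip(lambdas, terms))

print(potential())
```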
Similarly to PINNs, this test case with uniform weights \(\lambda_{k}\) leads to imbalanced gradient variances between the different objective terms. In particular, the higher-order derivatives present dominant gradient variances that contribute to the vanishing of the other tasks and lead to biased exploration of the posterior distribution. In Fig 1 (left), we see that the term \(\mathrm{Var}\{\nabla_{\Theta}L_{2}\}\) corresponding to the higher-order derivative quickly grows two orders of magnitude larger than the other effective gradient variances. In addition
Figure 1: The HMC uniform-weighting failure mode for Sobolev training up to second-order derivatives leading to non-conservative Hamiltonian (on the middle), and extremely poor resulting approximation of the function (on the right). This is due to the strong imbalances in the variances of the effective gradient \(\nabla_{\Theta}L_{k}\) distributions (\(k=0,1,2\)), plotted with respect to the (\(N_{s}\times L\)) HMC iterations (on the left).
to inefficient exploration of the Pareto front, we also face instability issues, generated by the highest order derivative terms, that result in a lack of conservation of the Hamiltonian along the leapfrog trajectories (see Fig 1 middle). As specified in Sect. 2.1, such divergence pathologies on the classical HMC with uniform weighting are powerful diagnostics of bias in the resulting estimators and raise suspicions about the validity of the latter.
An alternative to counteract these effects consists in reducing the time step \(\delta t\) to balance the order of magnitude of the derivative terms and improve the Metropolis-Hastings acceptance rate of the BPINNs-HMC. However, a small time step within the leapfrog iterations is more likely to generate pathological random-walk behaviors or biased sampling [16, 4]. To this aim, we attempt an adaptive strategy by using the No-U-Turn sampler (NUTS) with step-size adaptation, as detailed in Algorithm 5 of Hoffman and Gelman [16] and implemented in the open-source Python package hamiltorch [11]. We consider the exact same set of leapfrog parameters as previously -- in order to comply with the same assumptions -- and we impose \(N=20\) adaptive steps, which lead to a final adapted time step of \(\delta t=1.29\mathrm{e}{-4}\). In this case, we again reached a configuration where we were not efficiently exploring the Pareto front, as evidenced by the variances of the effective gradients in Fig 2. This resulted in a better approximation of the second derivative than of the signal itself and demonstrated biased sampling, in the sense that the signal \(u\) is determined up to a linear function due to the prevalence of the higher-derivative term. This linear deviation is also shown in Fig 2 -- bottom right. This confirms that the NUTS time-step adaptation focuses on the prevailing conservation of the higher-order derivative term that induced the stiffness.
In short, even a simple 1D Sobolev training with trivial uniform weights induces major failure of the classical BPINNs-HMC approaches because of the sensitivity of the posterior distribution to the higher-order derivatives that generate instabilities. Consequently, such divergence in the Hamiltonian conservation renders the sampling approach inoperative. Moreover, the alternatives ensuring the Hamiltonian conservation are ineffective because they face either inefficient exploration of the energy levels or a strong imbalance in the multi-task and multi-scale sampling. This suggests that the Hamiltonian Markov chain cannot adequately explore the Pareto front of the target distribution resulting from this potential energy, and that strong
Figure 2: Failure mode of NUTS step-size adaptation in Sobolev training up to second order derivatives: variances of the effective gradient \(\nabla_{\Theta}L_{k}\) distributions (top left) and Hamiltonian evolution (top right), respectively showing task imbalances and weak exploration of the energy levels. Signal approximation (bottom left) and pointwise error (bottom right) highlighting the linear deviation of \(u_{\Theta}\). The vertical dotted line delimits the number of adaptive steps in the NUTS sampler.
imbalanced conditions cannot be overcome with the usual methodologies.
The purpose is therefore to develop a strategy to provide balanced conditions between the different tasks, independently of their scales, by looking for an appropriate weighting formulation. This approach is essential regardless of the usual HMC concerns about the adaptive settings of the leapfrog parameters, and presents the advantage of reducing the instabilities without needlessly decreasing the time step.
## 3 An Adaptive Weighting Strategy for Unbiased Uncertainty Quantification in BPINNs
Conventional BPINN formulations exhibit limitations regarding multi-objective inferences, such as stability and stiffness issues, pathological conflicts between tasks, and biased exploration of the Pareto-optimal front. These problems cannot be tackled merely by adaptively setting the leapfrog parameters, as in the NUTS sampler, nor by hand-tuning the standard deviations \(\sigma\), which introduces additional computational costs or energy waste. We therefore investigate another adaptive approach that focuses instead on the direct regularization of the target distribution: it aims to balance task weighting by automatically adapting the critical \(\sigma\) parameters.
### An Inverse Dirichlet Adaptive Weighting algorithm: AW-HMC
The development of a new alternative considering the limits of the HMC-BPINN approach (previously discussed in Sect. 2) becomes crucial, especially in the case of complex multi-objective problems arising from real-world data. This strategy must address the main pathologies identified by: 1) ensuring the exploration of the Pareto front of the target posterior distribution, 2) managing the scaling sensitivity of the different terms, and 3) controlling the Hamiltonian instabilities.
Independently of these pathological considerations, there remains the issue of setting the critical \(\sigma\) parameters, particularly when the level of noise on the data and the confidence in the PDE model are not prior knowledge. While manual tuning of these parameters is still commonplace, we could rely on the \(\lambda\) weight adaptations to implicitly determine the noise and inherent task uncertainties rather than introduce strong priors on the model adequacy that may lead to misleading predictions.
In order to fulfill all these requirements, we consider an Inverse-Dirichlet based approach that has demonstrated its effectiveness in the PINNs framework when dealing with balanced training and multi-scale modeling [29]. It relies on adjusting the weights based on the variances of the loss term gradients, which can be interpreted as a training uncertainty with respect to the main descent direction in a high-dimensional multi-objective optimization problem. This strategy also offers considerable improvement in convergence over conventional training and avoids the vanishing of specific tasks.
The idea of developing an Inverse Dirichlet adaptively weighted algorithm for BPINNs is to incorporate such training uncertainties on the different tasks within the Bayesian framework so that it can simultaneously take into account the noise, the model adequacy and the sensitivity of the tasks, all while ensuring Pareto front exploration. Therefore, we are trying to determine the positive weighting parameters \(\lambda_{k}\), \(k=0,...,K\) in such a way that the weighted gradient \(\nabla_{\Theta}L_{k}=\lambda_{k}\nabla_{\Theta}\mathcal{L}_{k}\) distributions of the potential energy terms have balanced variances. We propose to ensure gradient distributions with the same variance
\[\gamma^{2}:=\mathrm{Var}\{\lambda_{k}\nabla_{\Theta}\mathcal{L}_{k}\}\simeq \min_{t=0,...,K}(\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{t}\}),\quad\forall k =0,...,K \tag{19}\]
by setting the weights on an Inverse-Dirichlet based approach:
\[\lambda_{k}=\left(\frac{\min_{t=0,...,K}(\mathrm{Var}\{\nabla_{\Theta} \mathcal{L}_{t}\})}{\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k}\}}\right)^{1 /2}=\left(\frac{\gamma^{2}}{\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k}\}} \right)^{1/2} \tag{20}\]
such that
\[\lambda_{k}\mathcal{N}(\mu_{k},\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k}\} )=\left(\frac{\gamma^{2}}{\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k}\}} \right)^{1/2}\mathcal{N}(\mu_{k},\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k} \})=\mathcal{N}(\mu_{k},\gamma^{2}). \tag{21}\]
Note that we do not discuss here the case of \(\lambda_{K+1}\) corresponding to the prior \(P(\Theta)\), since the log-prior term acts rather as an \(L^{2}\)-regularization in the equivalent constrained optimization problem, such that the weight balancing approach focuses essentially on the log-likelihood terms of the potential energy. In fact, the sampling should enable us to efficiently explore the Pareto front corresponding to balanced conditions between the data-fitting and the different PDE-based likelihood terms. On the contrary, we do not want to rely on a non-informative prior to achieve task balancing, so we impose the following upper bound
\[\mathrm{Var}\{\lambda_{K+1}\nabla_{\Theta}\mathcal{L}_{K+1}\}\leq\gamma^{2}, \tag{22}\]
which can be achieved by setting \(\lambda_{K+1}\leq\sigma_{K+1}\), related to the assumption on the prior \(P(\Theta)\sim\mathcal{N}(0,\sigma_{K+1}^{2}I)\). This comes from the observation that
\[\lambda_{K+1}\nabla_{\Theta}\mathcal{L}_{K+1}(\Theta^{t_{\tau}})=\frac{\lambda _{K+1}}{\sigma_{K+1}^{2}}\Theta^{t_{\tau}}\quad\text{s.t}\quad\mathrm{Var}\{ \lambda_{K+1}\nabla_{\Theta}\mathcal{L}_{K+1}(\Theta^{t_{\tau}})\}=\frac{ \lambda_{K+1}^{2}}{\sigma_{K+1}^{4}}\mathrm{Var}\{\Theta^{t_{\tau}}\}\leq \frac{1}{\sigma_{K+1}^{2}}\mathrm{Var}\{\Theta^{t_{\tau}}\} \tag{23}\]
with \(\Theta^{t_{\tau}}\) the set of parameters sampled at iteration \(\tau\). The latter upper bound also provides a dispersion indicator between the posterior variance of \(\Theta\) and its prior distribution, that can be used to set the value of \(\sigma_{K+1}\) given \(\gamma^{2}\).
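For concreteness, here is a minimal sketch of the weight update (20), assuming the per-task gradients \(\nabla_{\Theta}\mathcal{L}_{k}\) are available as flat vectors and taking the empirical variance over their components; the function name and synthetic gradients are illustrative.

```python
import torch

def inverse_dirichlet_weights(grads):
    """Eq. (20): grads is a list of flat per-task gradient vectors
    nabla_Theta L_k, k = 0,...,K; returns the weights lambda_k."""
    variances = torch.stack([g.var() for g in grads])
    gamma2 = variances.min()              # joint target variance of eq. (19)
    return torch.sqrt(gamma2 / variances)

# Illustrative usage with synthetic gradients of disparate scales:
g0 = torch.randn(1000)           # e.g. a data-fitting task
g1 = 50.0 * torch.randn(1000)    # e.g. a stiff higher-order derivative task
lam = inverse_dirichlet_weights([g0, g1])
# After weighting, both gradient distributions share the variance gamma^2:
print(lam, (lam[0] * g0).var(), (lam[1] * g1).var())
```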
We adapt the weights on the fly to deal with the BPINNs-HMC failure mode, so that the weight adaptation strategy (20) depends on the sampling iteration \(\tau\). This results in a modified Hamiltonian Monte Carlo, denoted Adaptively Weighted Hamiltonian Monte Carlo (AW-HMC) and detailed in Algorithm 1. The weighting strategy is carried out until a set number of adaptive iterations \(N\), potentially different from the usual burning steps \(M\). It assumes that \(N\leq M\), and enables us to reach a weighted posterior distribution, well-suited to the exploration of the Pareto front. In fact, finite adaptation preserves ergodicity and asymptotic convergence of the chain, while keeping \(N\leq M\) ensures the posterior distribution is drawn from the same weighted potential energy. In practice, the a priori burning phase is closely linked to the number of adaptive steps by taking \(M=N\). We also introduce the notation \(H_{\lambda_{\tau}}(\Theta,r)\) for the weighted Hamiltonian
\[H_{\lambda_{\tau}}(\Theta,r)=\sum_{k=0}^{K+1}\lambda_{k}(\tau)\mathcal{L}_{k} (\Theta)+\frac{1}{2}r^{T}\mathbf{M}^{-1}r \tag{24}\]
which defines the new transition probability for the Metropolis-Hasting acceptance criterion.
The present balancing of the target distribution, based on the minimum variance of the gradients (20), can be interpreted as adjusting the weights with respect to the most likely or the least sensitive term of the multi-potential energy. It therefore offers the advantage of improving the convergence of the BPINNs toward the Pareto-optimal solution and also enhances the reliability of the uncertainty quantification of the output, whose samples are drawn from the Pareto front. Indeed, this weighting strategy induces an automatic increase in the uncertainty of the least likely task by adaptively adjusting the \(\lambda\) parameters. Such observations arise from the development of upper bounds for each of the gradient variances, as detailed in A, which involve prediction errors and PDE residuals, as well as sensitivity terms characterizing the variability of the mean gradient descent directions for each task. In light of this, we were able to provide an upper bound on the joint variance \(\gamma^{2}\), which is developed in equation (48) from a basic and general perspective.
Last but not least, the Inverse-Dirichlet based adaptive weighting relieves us from an unreasonable decrease in the time step, which no longer has to meet all the stiff scaling requirements to ensure Hamiltonian conservation. This approach then renders the sampling free of excessive tuning of the leapfrog hyperparameters \(\delta t\) and \(L\). In addition, this prevents pathological random-walk or divergence behaviors in the sampling, since it enables the use of optimal integration times, both in terms of convergence rate and adequacy of the time step to the distinct learning tasks.
The current AW-HMC algorithm is first validated on a Sobolev training benchmark with different complexities, which provides a basis for comparison with \(\varepsilon\)-optimality results. This also allows us to establish a new indicator for convergence diagnostics of the BPINNs. The robustness and efficiency of the present method are then tested on more complex multi-task and multi-scale problems in the following sections.
```
Input: initial \(\Theta^{t_0}\), \(N_s\) number of samples, \(L\) number of leapfrog steps,
       \(\delta t\) leapfrog step size, \(N\) number of adaptive iterations,
       \(M\) burning steps and \(\mathbf{M}\) the mass matrix.

for \(\tau = 1 \ldots N_s\) do
    Sample \(r^{t_{\tau-1}} \sim \mathcal{N}(0, \mathbf{M})\)
    Set \((\Theta_0, r_0) \leftarrow (\Theta^{t_{\tau-1}}, r^{t_{\tau-1}})\)

    Weights adaptation:
    if \(\tau \leq N\) then
        Compute \(\lambda_k(\tau) = \left(\min_{j=0,\ldots,K}\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_j(\Theta_0)\} \,/\, \mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_k(\Theta_0)\}\right)^{1/2}\), \(\forall k = 0,\ldots,K\), and \(\lambda_{K+1}(\tau) = 1\)
    else
        \(\lambda_k(\tau) = \lambda_k(\tau-1)\), \(\forall k = 0,\ldots,K\), and \(\lambda_{K+1}(\tau) = 1\)
    end if

    Leapfrog:
    for \(i = 0 \ldots L-1\) do
        \(r_i \leftarrow r_i - \frac{\delta t}{2}\sum_{k=0}^{K+1}\lambda_k(\tau)\nabla_{\Theta}\mathcal{L}_k(\Theta_i)\)
        \(\Theta_{i+1} \leftarrow \Theta_i + \delta t\,\mathbf{M}^{-1} r_i\)
        \(r_{i+1} \leftarrow r_i - \frac{\delta t}{2}\sum_{k=0}^{K+1}\lambda_k(\tau)\nabla_{\Theta}\mathcal{L}_k(\Theta_{i+1})\)
    end for

    Metropolis-Hastings:
    Sample \(p \sim \mathcal{U}(0,1)\)
    Compute \(\alpha = \min(1, \exp(H_{\lambda_\tau}(\Theta_0, r_0) - H_{\lambda_\tau}(\Theta_L, r_L)))\) using (24)
    if \(p \leq \alpha\) then \(\Theta^{t_\tau} = \Theta_L\) else \(\Theta^{t_\tau} = \Theta_0\) end if
end for

Collect the samples after burning: \(\{\Theta^{t_i}\}_{i=M}^{N_s}\)
```
**Algorithm 1:** Adaptively Weighted Hamiltonian Monte Carlo (AW-HMC)
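A compact transcription of Algorithm 1 might look as follows; this is a sketch under simplifying assumptions (identity mass matrix, potentials acting on a flattened parameter vector, the last potential playing the role of the log-prior), not the authors' implementation.

```python
import math
import torch

def aw_hmc(theta0, potentials, n_samples=200, n_adapt=20, n_burn=20,
           n_leap=100, dt=1e-3):
    """Sketch of Algorithm 1 (AW-HMC) with an identity mass matrix.
    `potentials` is a list of callables L_k(theta) acting on a flat
    parameter vector; the last entry is the log-prior term (weight 1)."""
    K = len(potentials) - 1
    lam = torch.ones(K + 1)
    theta = theta0.detach().clone()
    samples = []

    def grad_terms(th):
        th = th.detach().requires_grad_(True)
        return [torch.autograd.grad(Lk(th), th)[0] for Lk in potentials]

    def U(th):  # weighted potential energy of eq. (24)
        return sum(float(l) * float(Lk(th)) for l, Lk in zip(lam, potentials))

    def grad_U(th):
        return sum(l * g for l, g in zip(lam, grad_terms(th)))

    for tau in range(1, n_samples + 1):
        if tau <= n_adapt:  # Inverse-Dirichlet weight adaptation, eq. (20)
            var = torch.stack([g.var() for g in grad_terms(theta)[:K]])
            lam[:K] = torch.sqrt(var.min() / var)
        r = torch.randn_like(theta)                 # momentum resampling
        H0 = U(theta) + 0.5 * float(r @ r)
        th = theta.clone()
        r = r - 0.5 * dt * grad_U(th)               # leapfrog integration
        for i in range(n_leap):
            th = th + dt * r
            if i < n_leap - 1:
                r = r - dt * grad_U(th)
        r = r - 0.5 * dt * grad_U(th)
        H1 = U(th) + 0.5 * float(r @ r)
        log_alpha = H0 - H1                         # Metropolis-Hastings step
        if log_alpha >= 0 or float(torch.rand(())) <= math.exp(log_alpha):
            theta = th.detach()
        if tau > n_burn:
            samples.append(theta.clone())
    return samples
```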
Figure 3: Adaptively Weighted Hamiltonian Monte Carlo (AW-HMC) on Sobolev training up to second-order derivatives compared to \(\varepsilon\)-optimal weighting. Effective gradient distributions variances \(\mathrm{Var}\{\nabla_{\Theta}L_{k}\}\) with balanced conditions between the tasks (on the left). Hamiltonian evolution throughout the sampling satisfying energy conservation (in the middle). Resulting field predictions with a comparison between \(\varepsilon\)-optimal results and AW-HMC strategy (on the right).
### Sobolev training benchmark and convergence diagnostics
We investigate the performance of the proposed auto-weighted BPINN methodology on several applications, starting in this section with a Sobolev training benchmark. We first apply this new Adaptively Weighted strategy to the 1D Sobolev training introduced in Sect. 2.3 with the exact same set of hyperparameters. The number of adaptive steps is set to \(N=20\), as for the NUTS variant, to ensure an impartial comparison of the distinct methodologies.
In addition, we compare the predictions with a reference case where the weights \(\lambda_{k}\) are set according to the \(\varepsilon\)-optimal analytical solution [52], which can be determined for Sobolev training following [29]:
\[\lambda_{k}^{\varepsilon}=\frac{\prod_{j\neq k}\mathcal{I}_{j}}{\sum_{k=1}^{K }\prod_{j\neq k}\mathcal{I}_{j}}\quad\text{ s.t. }\mathcal{I}_{k}=\int_{\Omega}|D_{x}^{k}(u)|^{2}\,\mathrm{d}x. \tag{25}\]
We tested our methodology against this \(\varepsilon\)-optimal solution assuming observation noise \(\xi_{u}\sim\mathcal{N}(0,\sigma_{u}^{2}I)\) such that \(\sigma_{k}=\sigma_{u},\forall k\) with \(\sigma_{u}=0.1\). It provides good agreement between both approaches, with a convergence of the Hamiltonian toward the same energy level, in Fig 3 (middle): in fact, the \(L^{2}\)-relative error on the Hamiltonian values between the \(\varepsilon\)-optimal and AW-HMC methods scales around \(1\mathrm{e}{-4}\) after the adaptive steps. The AW-HMC method also provides \(L^{2}\)-relative errors, compared to this optimal solution, ranging around \(1\mathrm{e}{-3}\) for both the signal and its derivatives. Finally, we also point out, in Fig 3 (left), balanced gradient variances in the same way as observed with the \(\varepsilon\)-optimal analytical weights. The AW-HMC methodology therefore provides results similar to \(\varepsilon\)-optimal solutions in terms of balance between the gradient variances, exploration of the Hamiltonian energy levels, and overall BMA predictions.
Our new approach shows exceptionally balanced conditions between the different tasks: the effective gradient distribution variances \(\mathrm{Var}\{\nabla_{\Theta}L_{k}\}\) present the same orders of magnitude throughout the training, even with a finite number of adaptive steps. This means that the posterior distribution reached after the auto-adjustment of the weights is well-suited to converging toward the Pareto front exploration, thus making the sampling more efficient. Preventing strong imbalance behavior on the gradient variances, and therefore task-specific bias has considerably improved the marginalization of such multi-objective potential energy, in comparison with the conventional approaches that presented major failures in Sect. 2.3.
To further demonstrate the robustness of the method, we consider the third-order derivative extension of this test case, where even a NUTS adaptive strategy on the time step (reaching \(\delta t=1.36\mathrm{e}{-7}\)) generates pathological random-walk behavior, making the sampling completely defective (Fig 4 top row and Fig 5). Such a significant decrease in the time step is clearly explained by the enhanced stiffness induced by the third-order derivative term in this multi-task learning. Indeed, the Hamiltonian trajectories are more likely to diverge during the deterministic steps due to this stiffness and require a small \(\delta t\) to compensate for the divergence. To avoid the resulting pathological random walks, the overall integration time must be increased, but this inevitably leads to excessive computational costs under such a constraint on the leapfrog time step. This highlights the main limitation of NUTS when facing stiff multi-task sampling that involves separate scales.
In contrast, our approach overcomes these major failures (see Fig 5) without additional constraints on \(\delta t\) and provides balanced gradient variances between the different tasks, as illustrated in Fig 4 (bottom row). We also compare the results of the AW-HMC methodology with analytical weights from \(\varepsilon\)-optimality and show great agreement between the approaches. In addition, to account for the stochasticity of the BPINN sampling, we perform several repetitions of the sampling with different initializations of the neural network parameters and momentum. This leads to the averaged weight evolutions along the adaptive steps presented in Fig 6, which show the same order of magnitude as the analytical \(\varepsilon\)-optimal weights.
Apart from these qualitative comparisons between the different methodologies and the analytical solution, we subsequently introduce a new metric that quantifies the quality of the predictions. This complements the usual metrics with a convergence quantification of the sampling along the marginalization process. The samples collected after the burning steps in the AW-HMC process -- i.e. all the instances of \(\big{\{}\Theta^{t_{i}}\big{\}}_{i=M}^{N_{s}}\) -- are first used to determine a Bayesian Model Average estimation as defined in equation (3). Each sample provides a prediction \(P(y|x,\Theta^{t_{i}})\), for the neural network characterised by \(\Theta^{t_{i}}\), and is theoretically drawn
Figure 4: Effective gradient distribution variances \(\mathrm{Var}\{\nabla_{\Theta}L_{k}\}\) and Hamiltonian evolution on 1D Sobolev training up to the third-order derivative. The NUTS formulation (top) highlights strong imbalances between the learned tasks, on the left, and random-walk behaviors in exploring the energy level sets, on the right. AW-HMC strategy (bottom) and comparison with the \(\varepsilon\)-optimality.
Figure 5: 1D Sobolev training up to third-order derivative: comparison of the BMA predictions, on the function and its derivatives, between the AW-HMC and NUTS formulations. The imbalance between tasks and random walk behavior of NUTS (see Fig 4) results in ineffective BMA predictions. The AW-HMC methodology overcomes these effects and significantly improves the sampling of the target distribution.
from the posterior distribution \(P(\Theta|\mathcal{D},\mathcal{M})\) such that the BMA is usually approximated by [57]:
\[P(y|x,\mathcal{D},\mathcal{M})\simeq\frac{1}{N_{s}-M}\sum_{i=M}^{N_{s}}P(y|x, \Theta^{t_{i}})\quad\text{with}\quad\Theta^{t_{i}}\sim P(\Theta|\mathcal{D}, \mathcal{M}). \tag{26}\]
In Sobolev training, we consider as neural network outputs the prediction of the function itself and all its derivatives, \(y=\left\{D_{x}^{k}u_{\Theta},\,k=0...K\right\}\), such that we can compute, according to equation (26), relative BMA errors with respect to each output, defined by:
\[\text{BMA-E}^{k}=\frac{\left\|P\left(D_{x}^{k}u_{\Theta}\left|\,x,\mathcal{D},\mathcal{M}\right)-D_{x}^{k}u\right\|^{2}}{\|D_{x}^{k}u\|^{2}},\qquad\forall k =0...K \tag{27}\]
where the notation \(\|\cdot\|\) used here refers to the functional \(L^{2}\)-norm. Based on the previous definition, and in order to incorporate convergence of the BMA along the marginalization process, we introduce a new diagnostic called the cumulative (relative) BMA error, defined as follows:
\[\text{BMA-CE}^{k}(\tau)=\frac{\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}P\left( D_{x}^{k}u_{\Theta}\left|\,x,\Theta^{t_{i}}\right.\right)-D_{x}^{k}u\right\|^{2}}{ \left\|D_{x}^{k}u\right\|^{2}},\qquad\forall k=0...K \tag{28}\]
depending on the sampling iterations after the adaptive steps, for \(\tau>N\) in Algorithm 1. These formulae can be directly extended to all the neural network outputs in a more general framework, and they quantify the sampling efficiency in terms of convergence rate. The cumulative BMA errors are represented in Fig 7 for the third-order extension of Sobolev training, highlighting the convergence of the AW-HMC sampler for each of the functional tasks (on the left). In contrast, these quantities remain nearly constant for the pathological HMC and NUTS formulations, due to massive rejections and random-walk behavior, respectively (see Fig 7 on the right).
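Since the cumulative BMA (28) is a running average over the kept samples, the diagnostic is cheap to evaluate; a minimal numpy sketch (array shapes and names are illustrative assumptions):

```python
import numpy as np

def bma_ce(predictions, target):
    """Cumulative relative BMA error of eq. (28): `predictions` stacks the
    per-sample predictions, one row per kept sample tau > N."""
    counts = np.arange(1, predictions.shape[0] + 1)[:, None]
    running_bma = np.cumsum(predictions, axis=0) / counts
    num = np.linalg.norm(running_bma - target, axis=1) ** 2
    return num / np.linalg.norm(target) ** 2

# Illustrative usage: noisy samples scattered around a known target.
rng = np.random.default_rng(0)
x = np.linspace(-0.7, 0.7, 100)
target = np.sin(6 * x) ** 3
predictions = target + 0.1 * rng.standard_normal((150, x.size))
err = bma_ce(predictions, target)
print(err[0], err[-1])  # the error decreases as samples accumulate
```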
We finally extended this Sobolev training test case to several 2D benchmarks, where we studied the impact of the functional complexity and the number of training points on the Bayesian Model Average errors. The details of these benchmark problems and the training setup are provided in B and show enhanced robustness and efficiency of the AW-HMC algorithm for the BPINNs.
Figure 6: Evolution of the \(\lambda_{k}\) weights along the adaptive steps (\(\tau\leq N\)), on the left, and comparison with the analytical \(\varepsilon\)-optimal weights for Sobolev training up to the third-order derivative. The weights are averaged over several repetitions of the AW-HMC algorithm, with different initializations of the neural network parameters and momentum, to account for sampling variability. The orders of magnitude of the relative weights \(\lambda_{0}/\lambda_{i},\,i=1...3\) are represented by the double-headed arrows.
Figure 8: Lotka-Volterra multi-scale inference: histogram of the marginal posterior distributions for the inverse parameters \(\alpha_{\Theta},\,\beta_{\Theta},\,\delta_{\Theta}\) and \(\gamma_{\Theta}\) (top). Phase diagrams of the parameter trajectories throughout the sampling (bottom) that characterize convergence toward their respective modes during the adaptive steps (in blue) and efficient exploration of the mode neighborhood after the adaptive steps (in red). The ground truth parameters are respectively \(\alpha=1,\,\beta=0.1,\,\delta=0.01\) and \(\gamma=0.5\), establishing an inverse problem with separate scales.
Figure 7: Cumulative relative BMA errors, computed according to equation (28), throughout the sampling iterations \(\tau>N\) for Sobolev training up to third-order derivative. Comparison between the AW-HMC strategy (on the left) and classical HMC and NUTS formulations (on the right). These quantities remain nearly constant in pathological cases, due either to massive rejection or pathological random walk, highlighting the lack of convergence in the usual BPINNs-HMC formulations (on the right).
### A multi-scale Lotka-Volterra inverse problem
We demonstrate the use of AW-HMC on a multi-scale dynamical inverse problem to quantify the impact of the scaling. As Linka et al. [21] pointed out, sensitivity to scaling may hinder the performance of classical BPINNs, especially for nonlinear dynamical systems. The multi-scale nature and stiffness resulting from real-world problems, where vanishing task-specific gradients are commonplace, therefore provide an interesting benchmark to quantify the robustness of the present method.
In this context, we consider a Lotka-Volterra dynamical system with parameters of highly varying orders of magnitude, defined by the following ordinary differential equation (ODE) system:
\[\left\{\begin{array}{l}\frac{\mathrm{d}u}{\mathrm{d}t}=\alpha u-\beta uv, \quad t\in\Omega\\ \frac{\mathrm{d}v}{\mathrm{d}t}=\delta uv-\gamma v,\quad t\in\Omega\\ u(0)=u_{0},\;v(0)=v_{0}\end{array}\right. \tag{29}\]
which characterizes the temporal evolution of predator-prey species. The notations \(u(t)\) and \(v(t)\) respectively refer to the prey and predator population size at a given time \(t\), whereas the parameters \(\alpha,\beta,\delta,\gamma\geq 0\) control the population dynamics, as growing and shrinkage rates. Thereafter, we set the initial populations to \(u_{0}=100\) and \(v_{0}=20\) with the following parameters \(\alpha=1,\,\beta=0.1,\,\delta=0.01\) and \(\gamma=0.5\) intentionally selected with different orders of magnitude. This sets up an inverse problem benchmark based on real-world dynamics with separate scales involved.
The observation data are first numerically generated by solving the ODE system (29) on a uniform temporal grid \(\Omega=[0,50]\) with a thin resolution of \(400\) points. The data are randomly sampled so as to consider only half in the training phase of the different samplers. The dataset \(\mathcal{D}\) then involves these partial measurements of \(u\) and \(v\) at \(200\) different times, potentially with some added noise, and the same collocation points are kept to satisfy the ODE constraints. In this section, we focus on an inverse problem by inferring
Figure 9: Lotka-Volterra multi-scale inference: BMA predictions of the two-species populations along the physical time with their uncertainties (bottom and top left). Relative BMA-CE errors, defined as in equation (28), for the neural network outputs \(y=(u_{\Theta},v_{\Theta})\), plotted throughout the sampling iterations (top right). The dotted vertical line marks the introduction of the ODE-likelihood terms in the sequential training.
the unknown model parameters \(\Sigma=\{\alpha,\beta,\delta,\gamma\}\) from these measurement data while recovering the whole species evolution on the original finer resolution.
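For context, the synthetic observations described above can be generated along these lines (a minimal scipy sketch; the integrator tolerances and subsampling seed are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 0.1, 0.01, 0.5   # separate-scale parameters
u0, v0 = 100.0, 20.0

def lotka_volterra(t, y):
    u, v = y
    return [alpha * u - beta * u * v, delta * u * v - gamma * v]

t_grid = np.linspace(0.0, 50.0, 400)              # fine temporal resolution
sol = solve_ivp(lotka_volterra, (0.0, 50.0), [u0, v0],
                t_eval=t_grid, rtol=1e-8, atol=1e-8)

rng = np.random.default_rng(0)                    # illustrative seed
idx = np.sort(rng.choice(t_grid.size, size=200, replace=False))  # keep half
t_obs, u_obs, v_obs = t_grid[idx], sol.y[0, idx], sol.y[1, idx]
```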
Regarding the noticeable scaling difference between the two populations, we consider a predator-prey split of the tasks such that each field \(u\) and \(v\) satisfies a data-fitting likelihood term and an ODE-residual likelihood term. We also assume log-normal prior distributions on \(\Sigma\) to ensure positivity of the inverse parameters, as Yang et al. [59] have shown that such priors improve the inference, and we set independent normal distributions on the neural network parameters \(\theta\). In practice though, we use a change of variable by introducing \(\Sigma=e^{\widetilde{\Sigma}}:=\left\{e^{\widetilde{\alpha}},e^{\widetilde{\beta}},e^{\widetilde{\delta}},e^{\widetilde{\gamma}}\right\}\) for each of the inverse parameters to infer \(\widetilde{\Sigma}\), assuming normal prior distributions as well. For this test case, we impose weakly informed priors, especially on \(\widetilde{\Sigma}\), since we expect our methodology to handle the multi-scale inference due to the unbiased auto-weighting of the tasks. We therefore assume that the neural network and inverse parameters all share the same prior distribution, given by \(\Theta\sim\mathcal{N}(0,\sigma_{\Theta}^{2}I_{p+d})\) where \(\Theta=\left\{\theta,\widetilde{\Sigma}\right\}\).
Under these assumptions, we can define the multi-potential energy of the corresponding Hamiltonian system:
\[\begin{split} U(\Theta)&=\frac{\lambda_{0}}{2 \sigma_{0}^{2}}\left\|u_{\Theta}-u\right\|_{\mathcal{D}}^{2}+\frac{\lambda_{1} }{2\sigma_{1}^{2}}\left\|v_{\Theta}-v\right\|_{\mathcal{D}}^{2}+\frac{\lambda _{2}}{2\sigma_{2}^{2}}\left\|\frac{\mathrm{d}u_{\Theta}}{\mathrm{d}t}-\alpha _{\Theta}u_{\Theta}+\beta_{\Theta}u_{\Theta}v_{\Theta}\right\|_{\mathcal{D}}^ {2}\\ &\qquad+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\frac{\mathrm{d }v_{\Theta}}{\mathrm{d}t}-\delta_{\Theta}u_{\Theta}v_{\Theta}+\gamma_{\Theta} v_{\Theta}\right\|_{\mathcal{D}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|_{ \mathbb{R}^{p+d}}^{2}\end{split} \tag{30}\]
where the inferred inverse parameters are defined by \(\Sigma_{\Theta}=e^{\widetilde{\Sigma_{\Theta}}}\) and we also set all the \(\sigma_{\bullet}\) equal to one, as we do not wish to impose strong priors on the tasks and model uncertainty. As mentioned previously in Sect. 2.2, the norms are respectively the RMS and the Euclidean norm for the last term. The prior on the parameters is assumed to follow a Gaussian distribution with a larger standard deviation \(\sigma_{\Theta}=10\), in the sense that a slightly diffuse distribution induces weakly informed priors on the \(\Theta\) parameters. This also ensures that constraint (22) for a non-informative prior is satisfied.
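The two ODE-residual terms of (30) can be evaluated with automatic differentiation of the network outputs in time; a minimal PyTorch sketch follows (the exponential reparameterization of \(\widetilde{\Sigma}\) follows the text, while the network size shown here is illustrative):

```python
import torch

# Illustrative network for (u_Theta, v_Theta)(t); the exp reparameterization
# of the inverse parameters Sigma = exp(tilde{Sigma}) follows the text.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))
log_Sigma = torch.zeros(4, requires_grad=True)  # tilde{alpha, beta, delta, gamma}

def ode_residuals(t):
    """Residuals of system (29), entering the lambda_2, lambda_3 terms of (30)."""
    t = t.detach().requires_grad_(True)
    u, v = net(t).unbind(dim=1)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0].squeeze(1)
    dv = torch.autograd.grad(v.sum(), t, create_graph=True)[0].squeeze(1)
    a, b, d, g = torch.exp(log_Sigma).unbind()  # positivity by construction
    return du - a * u + b * u * v, dv - d * u * v + g * v

t = torch.linspace(0.0, 50.0, 200).reshape(-1, 1)
r_u, r_v = ode_residuals(t)
print(r_u.square().mean().item(), r_v.square().mean().item())
```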
For such inverse modeling, the sampling is decomposed using sequential training. This means that 1) the neural network parameters are sampled with an AW-HMC strategy to mainly target the data-fitting likelihood terms (setting \(\lambda_{2}=\lambda_{3}=0\)). 2) We then introduce the ODE-residual tasks in (30) to provide estimations of the missing inverse parameters, using the AW-HMC algorithm with initial neural network parameters \(\theta^{t_{0}}\) resulting from 1). The BMA predictions and uncertainty quantification finally rely on this entire sampling procedure. In the two-step sequential training, the number of adaptive and sampling iterations are first set to \(N=20\) and \(N_{s}=100\), and then \(N=50\) and \(N_{s}=200\) while the leapfrog parameters are given by \(L=100\), \(\delta t=5\mathrm{e}{-4}\) and \(2\mathrm{e}{-4}\) respectively, for the time steps in 1) and 2). The neural network itself
| Seq. step | \(\lambda_{0}\) | \(\lambda_{1}\) | \(\lambda_{2}\) | \(\lambda_{3}\) |
| --- | --- | --- | --- | --- |
| Data-fitting (step 1) | \(3.83\mathrm{e}{-2}\) | 1 | — | — |
| Data-fitting + ODE tasks (step 2) | \(4.87\mathrm{e}{-2}\) | 1 | \(9.16\mathrm{e}{-3}\) | \(1.16\mathrm{e}{-1}\) |

| Seq. step | \(\widetilde{\sigma}_{0}\) | \(\widetilde{\sigma}_{1}\) | \(\widetilde{\sigma}_{2}\) | \(\widetilde{\sigma}_{3}\) |
| --- | --- | --- | --- | --- |
| Data-fitting (step 1) | 5.109 | 1 | — | — |
| Data-fitting + ODE tasks (step 2) | 4.531 | 1 | 10.45 | 2.936 |

Table 1: Weight parameters \(\lambda_{k}\) obtained after the adaptive steps in the Lotka-Volterra multi-scale inverse problem, for each of the sequential steps (top). Effective standard deviations \(\widetilde{\sigma}_{k}\) resulting from the weight adaptations and computed as \(\widetilde{\sigma}_{k}=\sqrt{1/\lambda_{k}}\) for each of the sequential steps (bottom). This highlights enhanced uncertainties on the tasks related to the prey species. The splitting of the sequential steps is detailed in Sect. 3.3.
is composed of 4 layers with 32 neurons per layer, and we use the sine activation function considering the periodic nature of the solution of the Lotka-Volterra system.
On such an inverse problem, the classical BPINNs-HMC algorithm faces massive rejection because the Hamiltonian trajectories are not conserved, which results in inoperative sampling (Fig 17). Even the adaptive strategies on the time step struggle to deal with the multi-scale dynamics and require an extreme decrease in the \(\delta t\) value to obtain some stability, as detailed in C. The natural implication of such constraints on the leapfrog time step is a lack of convergence toward the Pareto front and poor inference of the inverse parameters, subject to weakly informed priors (see Fig 18 and 19 from C). In fact, Linka et al. [21] addressed the same issue when learning COVID-19 dynamics and imposed (in Sect. 4.3 of [21]) log-normal prior distributions on the inverse parameters that already rely on appropriate scaling. The need for such appropriate scaling strongly impacts the inference, in the sense that it requires prior knowledge that biases the sampling.
On the contrary, we assume independent priors with respect to the scaling and show that our approach is able to properly recover all the \(\Sigma\) parameters as well as predict the species evolution with minimal tuning and decrease on \(\delta t\). The recovery of separate scales no longer requires prior knowledge of the inverse parameter scaling to converge to their respective modes. The results shown in Fig 8 represent both the marginal posterior distributions of each inferred inverse parameter \(\Sigma_{\Theta}\) and their trajectories when exploring the phase space distribution \(\pi(\Theta,r)\). For the latter, we plotted the entire sampling trajectories that converge toward their respective mode during the adaptive steps, to finally sample around them as illustrated by the final trajectories for \(\tau>N\). This confirms the ability of AW-HMC to quickly identify the separate modes of this inverse problem and manage such multi-scale dynamics.
In order to quantify the effectiveness in identifying the parameters, we also measure the relative error in the inference of the parameters \(\Sigma_{\Theta}=\{\alpha_{\Theta},\beta_{\Theta},\delta_{\Theta},\gamma_{\Theta}\}\)
\[E_{\Sigma_{\Theta}}=\frac{|\Sigma_{\Theta}-\Sigma|}{\Sigma} \tag{31}\]
where the prediction is given by \(\Sigma_{\Theta}=\frac{1}{N_{s}-N}\sum_{i=N}^{N_{s}}e^{\widetilde{\Sigma}_{\Theta}^{t_{i}}}\), and we show that these relative errors all scale around \(5\mathrm{e}{-2}\) for the four inverse parameters. The predictive evolution of the species populations is displayed in Fig 9, as a BMA on the neural network outputs \(y=(u_{\Theta},v_{\Theta})\), and compared to the exact solutions in a qualitative and quantitative way. In this sense, we computed relative BMA cumulative errors for both species, highlighting the convergence of the sampling (top right of Fig 9). We see that the insertion of the ODE-residual likelihood terms in the two-step sequential training improves the convergence of the predictions when compared to pure data-based sampling.
This test case also reveals higher uncertainties on the evolution of the prey population characterized by effective standard deviations about four times greater (see Table 1). The enhanced uncertainty on these specific tasks is highlighted by smaller values of \(\lambda_{0}\) and \(\lambda_{2}\) at the end of the adaptive steps, compared to \(\lambda_{1}\) and \(\lambda_{3}\) in the potential energy (30). Therefore, the AW-HMC strategy benefits from its ability to adaptively weight the \(\lambda\) parameters to intrinsically characterize the task uncertainties based on their gradient variances.
## 4 Application to Computational Fluid Dynamics: Stenotic Blood Flow
We illustrate the use of the methodology set out in Sect. 3.1 in a real-world problem from fluid mechanics, more precisely the study of inpainting and inverse problems on incompressible stenotic flows in asymmetric geometries. The objective is to demonstrate the generalization and performance of the present AW-HMC algorithm on more complex 2D geometries and nonlinear PDE dynamics under noise and sparsity of the data.
The measurement data are generated by randomly sampling the fully resolved Computational Fluid Dynamics (CFD) solutions on scattered locations. The direct numerical simulation of vascular flows in asymmetric stenotic vascular geometries is performed using a meshless solver based on the Discretization-Corrected Particle Strength Exchange (DC PSE) method as detailed in [7].
### Inpainting problem with sparse and noisy data
Inpainting problems have drawn increasing interest in MRI or CT medical imaging as an opportunity to reduce artifacts and recover missing information by using deep learning approaches [30, 3, 51]. Although the usual inpainting framework incorporates only measurement data in the image processing, Zheng et al. investigated a physics-informed version of the problem by incorporating the underlying physics as indirect measurements [63]. The present section falls within the same context -- the idea is to infer the whole flow reconstruction based on sparse and noisy measurements while imposing PDE constraints on some complementary collocation points.
The governing equations of the stenotic flow dynamic are written here in a velocity \(\mathbf{u}=(u,v)\) and vorticity \(\omega\) formulation in two dimensions, satisfying an incompressible steady-state Navier-Stokes equation given by
\[(\mathbf{u}\cdot\nabla)\omega=Re^{-1}\Delta\omega,\quad\text{in}\ \Omega \tag{32}\]
or equivalently
\[u\frac{\partial\omega}{\partial x}+v\frac{\partial\omega}{\partial y}=\frac{1 }{Re}\Delta\omega,\quad\text{in}\ \Omega \tag{33}\]
where \(Re\) refers to the dimensionless Reynolds number, \(\omega\) is the vorticity field \(\omega=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\) and the incompressibility condition ensures \(\nabla\cdot\mathbf{u}=0\). We consider the 2D stenotic spatial domain \(\Omega\subset[0,10]\times[0,1]\) and assume two different kinds of boundary conditions: 1) the stenosis upper and lower walls, denoted \(\partial\Omega_{1}\), where we impose no-slip conditions such that \(\mathbf{u}_{\partial\Omega_{1}}=0\) and \(\omega=(\nabla\times\mathbf{u})_{\partial\Omega_{1}}\), and 2) the inlet and outlet boundaries, denoted \(\partial\Omega_{2}\), with a prescribed parabolic profile and Neumann condition, respectively, on the velocity in the main flow direction. These boundary conditions are detailed in Sect. 4.4 of the DC PSE article [7]. We also first consider that the Reynolds number is known and set to \(Re=200\), according to the CFD simulations, such that the set of parameters \(\Theta\) to infer is restricted here to the neural network weights and biases.
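The PDE-residual term built on (33) can be evaluated pointwise by automatic differentiation; in the following minimal PyTorch sketch, a single network mapping \((x,y)\) to the three fields \((u,v,\omega)\) is an illustrative assumption about how the fields are parameterized:

```python
import torch

# Illustrative assumption: one network maps (x, y) to the fields (u, v, w).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

def pde_residual(xy, Re=200.0):
    """Residual of eq. (33): u w_x + v w_y - (1/Re)(w_xx + w_yy)."""
    xy = xy.detach().requires_grad_(True)
    u, v, w = net(xy).unbind(dim=1)
    grad_w = torch.autograd.grad(w.sum(), xy, create_graph=True)[0]
    w_x, w_y = grad_w.unbind(dim=1)
    w_xx = torch.autograd.grad(w_x.sum(), xy, create_graph=True)[0][:, 0]
    w_yy = torch.autograd.grad(w_y.sum(), xy, create_graph=True)[0][:, 1]
    return u * w_x + v * w_y - (w_xx + w_yy) / Re

xy = torch.rand(16, 2) * torch.tensor([10.0, 1.0])  # points in Omega
print(pde_residual(xy).shape)                       # one residual per point
```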
The measurement dataset, \(\mathcal{D}\), is composed of noisy vorticity data on \(\mathcal{D}^{\partial_{1}}\) and \(\mathcal{D}^{\partial_{2}}\), defined as in (12) respectively for sets \(\partial\Omega_{1}\) and \(\partial\Omega_{2}\), as well as on 1282 interior collocation points \(\mathcal{D}^{\omega}\) that cover less than
Figure 10: Vorticity physics-informed inpainting problem: Bayesian Model Average Cumulative Error diagnostics, as defined in (35), throughout the sampling iterations and for different noise levels. BMA-CE on the vorticity field prediction (on the left) and on the PDE residual \(\mathcal{F}\) satisfying (33) (on the right). The dotted vertical lines mark the introduction of the PDE constraint in sequential training.
Figure 11: Physics-informed inpainting problem: BMA prediction of the vorticity field \(\omega_{\Theta}\) in asymmetric stenosis without noise, compared to the ground truth solution \(\omega\) (top). The black dots on the exact field correspond to the training measurements of the dataset \(\mathcal{D}\). Comparison of the uncertainty standard deviations (Std) and mean squared errors (MSE) on the predicted vorticity field \(\omega_{\Theta}\) for different noise levels (\(\sigma=0,0.1,0.2\)), shown in the bottom rows.
2% of all the data required for the full vorticity field reconstruction on \(\Omega\). We finally defined the \(\mathcal{D}^{\Omega}\) dataset as 6408 interior points representing 6% of the entire reconstructed data field, where we require that the PDE (33) be satisfied in a physically-constrained inpainting formulation. The multi-potential energy is then defined by:
\[U(\Theta)=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\left\|\omega_{\Theta}-\omega\right\|_{\mathcal{D}^{\omega}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\left\|\omega_{\Theta}-\omega\right\|_{\mathcal{D}^{\partial_{1}}}^{2}+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|\omega_{\Theta}-\omega\right\|_{\mathcal{D}^{\partial_{2}}}^{2}+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\mathcal{F}(\omega_{\Theta})\right\|_{\mathcal{D}^{\Omega}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|_{\mathbb{R}^{d}}^{2} \tag{34}\]

where \(\mathcal{F}(\omega_{\Theta})\) denotes the residual of the PDE (33). The sampling again relies on sequential training, introducing the PDE-residual task after a first data-fitting phase, and its convergence is monitored with BMA-CE diagnostics analogous to (28):

\[\text{BMA-CE}^{\omega}(\tau)=\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}P\left(\omega_{\Theta}\,\middle|\,x,\Theta^{t_{i}}\right)-\omega\right\|^{2},\qquad\text{BMA-CE}^{\mathcal{F}}(\tau)=\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}\mathcal{F}\left(\omega_{\Theta^{t_{i}}}\right)\right\|^{2} \tag{35}\]

These diagnostics are shown in Fig 10 for different noise levels: the vorticity predictions converge for every noise magnitude, while the PDE residual constraints converge independently of the noise level, reaching final BMA errors around \(2\mathrm{e}{-3}\) in all cases.
To supplement the performance quantification of the inpainting formulation in recovering the entire vorticity field along with its uncertainty, we also use the Prediction Interval Coverage Probability (PICP) metric as defined by Yao et al. [61]. This consists of a quality indicator of the posterior approximation, which evaluates the percentage of the ground truth observations contained within 95% of the prediction interval, as given by:
\[PICP=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}_{(\omega_{\Theta}^{l})_{i}\,\leq\,\omega_{i}\,\leq\,(\omega_{\Theta}^{h})_{i}} \tag{36}\]
where \(\omega_{\Theta}^{l}\) and \(\omega_{\Theta}^{h}\) are respectively the 2.5% and 97.5% percentiles of the predictive distribution on the vorticity. Here, the notation \(N\) refers to the total number of observations in the predictive dataset, in other words, the grid resolution of the computational domain \(\Omega\). In our application, this PICP metric shows that more than 99% of the vorticity ground truth observations are covered by the posterior distribution of the neural network output \(\omega_{\Theta}\), independently of the level of noise.
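The PICP (36) is straightforward to evaluate from the collected samples; a short numpy sketch (shapes and names are illustrative):

```python
import numpy as np

def picp(samples, truth, lo=2.5, hi=97.5):
    """Eq. (36): fraction of ground-truth values covered by the central 95%
    band of the predictive samples (shape: n_samples x n_points)."""
    low = np.percentile(samples, lo, axis=0)
    high = np.percentile(samples, hi, axis=0)
    return np.mean((truth >= low) & (truth <= high))

# Illustrative usage with synthetic posterior samples:
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 2 * np.pi, 50))
samples = truth + 0.05 * rng.standard_normal((500, 50))
print(picp(samples, truth))  # 1.0 here, since the truth is the sample mean
```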
We also expect our self-weighted adaptation of the \(\lambda_{k}\) to capture the sensitivity to noise with respect to the value of \(\sigma\), as well as the intrinsic sensitivity of each task to the noise level, without imposing any prior on the noise estimation. This is the key point of our methodology: we intentionally decouple the \(\sigma_{k}\) in (34) from the noise magnitude and rely on the self-weighting strategy to quantify the related uncertainties. On the contrary, when dealing with noisy measurement data in applications, researchers frequently assume the fidelity of each sensor to be known and set the standard deviations \(\sigma_{k}\) accordingly. Alternatively, the \(\sigma_{k}\) can be defined as additional learnable parameters to be inferred, which usually incurs additional computational costs in _online_ learning or requires an alternative neural network formalism used as pre-training in _offline_ learning [38]. In contrast, the strength of the AW-HMC methodology lies in its computational cost, similar to that of classical BPINNs-HMC. Moreover, AW-HMC improves convergence by focusing on the exploration of the Pareto front with optimal integration times; it can therefore shorten the overall sampling requirements, making it a competitive strategy in terms of computational cost.
The results presented in Fig 11 demonstrate the noise resistance of the AW-HMC approach and highlight sensitivity considerations with respect to the noise and tasks (see Table 2). We first noticed differences in the auto-adjustment of the \(\lambda\) values relative to the noise levels, leading to globally enhanced uncertainties with increasing noise. We also observed various uncertainty adjustments depending on the sensitivity of the different tasks to the noise. In fact, the comparison of the local standard deviations on the vorticity field in Fig 11 shows that the wall boundary conditions are the most sensitive to noise, automatically increasing the uncertainties in these areas. The inlet and outlet boundaries are less sensitive, as highlighted by a weaker adaptation of their uncertainties to the noise level. In short, this application has shown the ability of our new adaptive methodology to automatically adjust the weights -- and with them the uncertainties -- to the intrinsic task sensitivities to noise, and to adapt the uncertainty to the noise magnitude itself.
### Inverse problem with parameter estimation and latent field recovery
As a second CFD application, we consider a multi-objective flow inverse problem in an asymmetric and steep stenosis geometry. The aim is both to estimate a flow-regime parameter and to recover a hidden field using our adaptively weighted strategy. Such setups, motivated by real-world applications, use incomplete or corrupted measurement data to extract additional information that would be challenging or impractical to measure directly.
With an emphasis on physical and biomedical problems, Raissi et al. investigated the extraction of hidden fluid mechanics quantities of interest from flow visualizations, using physics-informed deep learning [41, 42]. The authors relied only on measurements of a passive scalar concentration that satisfied the incompressible Navier-Stokes equations, to infer the velocity and pressure fields in both external and internal flows.
In this direction, we focus on the velocity \(\mathbf{u}=(u,v)\) and pressure \(p\) formulation of the stenotic flow
Figure 12: CFD inverse problem: BMA predictions of the velocity field \(\mathbf{u}_{\Theta}=(u_{\Theta},v_{\Theta})\) in asymmetric stenosis along with their uncertainty standard deviations (Std) and mean squared errors (MSE), at the top. BMA and uncertainty on the inferred latent pressure field with the pressure evolution plotted along the central line \(y=0.5\), leading to an average pressure drop of \(1.78\) — bottom.
Figure 13: CFD inverse problem: Bayesian Model Average Cumulative Errors throughout the sampling iterations for the velocity field components \(\mathbf{u}=(u,v)\), the divergence-free condition \(\mathcal{H}(\mathbf{u})\), on the left, and the PDE residuals \(\mathcal{F}(u)\) and \(\mathcal{F}(v)\), on the right. The dotted curve represents the a posteriori checking of the pressure gradient norm BMA-CE error, as defined in equation (40).
dynamics such that the continuity and momentum governing steady-state equations are written:
\[\begin{cases}(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+Re^{-1}\Delta\mathbf{u}, &\text{ in }\Omega\\ \nabla\cdot\mathbf{u}=0,&\text{ in }\Omega\end{cases} \tag{37}\]
under the incompressibility condition on the stenotic domain \(\Omega\subset[0,10]\times[0,1]\). We impose adherent boundary conditions on the wall interfaces such that \(\mathbf{u}_{\partial\Omega_{1}}=0\), and the following inlet/outlet boundary conditions respectively:
\[\begin{split} u=4y-4y^{2},\,v=0&\forall(x,y)\in\{0\} \times[0,1]\\ \frac{\partial u}{\partial x}=0,\,v=0&\forall(x,y)\in\{10 \}\times[0,1].\end{split} \tag{38}\]
The direct numerical simulation is performed using the DC-PSE formulation [7] with a Reynolds number set to \(Re=200\), as in the previous section. It is used to generate the observation data on \(\Omega\) with a fine resolution. The \(\mathcal{D}\) dataset is then composed of partial measurements of \(\mathbf{u}\), randomly sampled to retain 9559 training points, representing less than 3% of the entire target resolution. The same collocation points are included to impose the PDE constraints, denoted \(\mathcal{F}(\mathbf{u}):=(\mathcal{F}(u),\mathcal{F}(v))\), as well as the divergence-free condition \(\mathcal{H}(\mathbf{u})\).
Finally, we set up the inverse problem by inferring the flow regime, considering the Reynolds number as an unknown model parameter \(\Sigma=\{Re\}\). At the same time, we address the multi-task problem of recovering the latent pressure from the partial measurements of the velocity field and the fluid flow dynamics assumptions. The pressure field prediction, in particular, is adjusted throughout the sampling in such a way that its gradient satisfies the governing equations (37). As is standard for the incompressible Navier-Stokes equations, the pressure is not uniquely defined and, given the lack of precise boundary conditions on this field, is thus determined up to a constant. The predictions of each of the quantities of interest, namely the velocity and pressure, are then recovered on the original finer resolution in Fig 12. As in Sect. 3.3, we select a log-normal prior distribution for the physical parameter and independent normal distributions for the neural network parameters, and we also use a sequential training approach, incorporating the PDE constraints in the second sampling phase.
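As in Sect. 3.3, the positivity of the inferred Reynolds number can be enforced by sampling an unconstrained variable \(\log Re\); a minimal sketch of this change of variable (variable names and the initial value are illustrative):

```python
import torch

log_Re = torch.tensor(4.0, requires_grad=True)  # unconstrained variable log(Re)
Re = torch.exp(log_Re)                          # guarantees Re > 0
# A normal prior on log_Re is equivalent to a log-normal prior on Re, and the
# exponential enters the momentum residuals of (37) wherever Re appears.
print(Re.item())  # ~54.6 for this illustrative initial value
```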
The validation of the inference is first performed by computing the BMA-CE diagnostics for the velocity field components, the PDE constraints, and the incompressibility condition, written in the same way as in equation (35). The results are provided in Fig 13 and highlight the convergence of each term toward final BMA errors scaling respectively about \(\text{BMA-CE}^{u}(N_{s})=6.4\mathrm{e}{-3}\), \(\text{BMA-CE}^{v}(N_{s})=1\mathrm{e}{-3}\), \(\text{BMA-CE}^{\mathcal{F}(\mathbf{u})}(N_{s})=(4.2\mathrm{e}{-2},\,4.7\mathrm{e}{-2})\) and \(\text{BMA-CE}^{\mathcal{H}(\mathbf{u})}(N_{s})=2.9\mathrm{e}{-2}\). The Bayesian Model Average predictions of the velocity field are then compared in Fig 12 with the ground truth, providing local mean squared errors (MSE) that are embedded in their uncertainties and show enhanced standard deviations in the regions with higher errors. The PICP metric also enables us to estimate that more than \(95\%\) of the velocity field ground truth is recovered by the posterior distribution of \(\mathbf{u}_{\Theta}\).
Figure 14: CFD inverse problem: from left to right, histogram of the marginal posterior distribution for the inverse Reynolds parameter, phase diagram of its trajectory throughout the sampling, and BMA-CE error using the absolute relative norm as defined in (39). The relative BMA-CE error on \(Re_{\Theta}\) is plotted over all the \(\tau\) iterations of the second-step sampling in the sequential training.
For the inverse parameter, we computed a BMA cumulative error based on the relative \(L^{1}\)-norm, defined as follows
\[\text{BMA-CE}^{Re}(\tau)=\frac{\left|\frac{1}{\tau}\sum_{i=1}^{\tau}Re_{\Theta^{ t_{i}}}-Re\right|}{|Re|},\qquad\forall\tau=1...N_{s} \tag{39}\]
where \(Re_{\Theta^{t_{i}}}\) refers to the prediction of the Reynolds number for the sample characterized by the parameters \(\Theta^{t_{i}}\). We show in Fig 14 that this relative error converges, reaching at the end of the sampling a residual of \(5.4\mathrm{e}{-2}\). We also represent here the histogram of the marginal posterior distribution of \(Re_{\Theta}\) and its trajectory in the phase space illustrating the convergence toward its mode during the adaptive steps \(\tau<N\). In fact, our approach leads to an estimate of the Reynolds number, inferred from the measurements data \(\mathcal{D}\), which is consistent with the exact value and results in the predictive interval \(Re_{\Theta}\in[182.82,208.06]\).
The latent pressure field BMA, inferred up to a constant, is illustrated in Fig 12 with its uncertainty and is able to capture a sharp pressure drop, estimated on average at \(1.78\), arising from the steep stenosis geometry. In fact, it has been emphasized by Sun et al. that, in symmetric geometries, such pressure drops become nonlinear as the stenotic geometry becomes narrower [49], which is in line with what we obtain in our asymmetric case. As the pressure ground truth is unknown in this application, we complement the validation of the inverse problem with a-posteriori checking on the pressure gradient. In this sense, we provide a PICP estimate on the pressure recovery, which stands around \(91\%\) for its gradient norm, but also introduce the following posterior diagnostic on the pressure BMA-CE error:
\[\text{BMA-CE}^{\nabla p}(\tau)=\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}\left|P\left(\nabla p_{\Theta}\,\middle|\,x,\Theta^{t_{i}}\right)\right|-\left|\nabla p\right|\right\|^{2} \tag{40}\]
where \(\left|\cdot\right|\) denotes the vector norm, and \(\nabla p\) is the evaluation of the exact gradient pressure from equation (37). The results are plotted as a dotted line throughout the sampling iterations in Fig 13 and reach a residual error of \(7.6\mathrm{e}{-2}\). This illustrates good agreement between the ground truth and the predictive pressure gradient arising from our adaptively-weighted strategy.
Overall, the present AW-HMC methodology relies on multi-task sampling to identify the flow regime through partial measurements of the velocity field and thus handles a complex flow inverse problem with latent field recovery that satisfies non-linear physical PDE constraints.
## 5 Concluding remarks
BPINNs have recently emerged as a promising deep-learning framework for data assimilation and a valuable tool for uncertainty quantification (UQ) [59]. This offers the opportunity to merge the predictive power of Physics-Informed Neural Networks (PINN) with UQ in a Bayesian inference framework using Markov Chain Monte Carlo (MCMC) sampling. This makes it possible to quantify the confidence in predictions under sparse and noisy data with physical model constraints, which is especially appealing for applications in complex systems. For this, Hamiltonian Monte Carlo has been established as a powerful MCMC sampler due to its ability to efficiently explore high-dimensional target distributions [4]. With it, BPINNs have extended the use of PINNs to a Bayesian UQ setting.
As we have shown here, BPINNs, however, share similar failure modes as PINNs: the multi-objective cost function translates to a multi-potential sampling problem in a BPINN. This presents the same difficulties in balancing the inference tasks and efficiently exploring the Pareto front as found in standard PINNs [43]. We illustrated this in a Sobolev training benchmark, which is prone to stiffness, disparate scales, and vanishing task-specific gradients. We emphasized that BPINNs are sensitive to the choice of the \(\lambda\) weights in the potential energy, which can possibly lead to biased predictions or inoperative sampling. Hence, the standard weighting strategy appears to be inefficient in multi-scale problems and multi-task inference, while it turns out to be unsustainable to manually tune the weights in a reproducible and reliable way. Recently proposed alternatives [38] are subject to additional hyper-parameter tuning or pre-training of the weights with a GAN, at the expense of increased computational complexity. Also, previous approaches mainly
focused on measurement noise estimation and did not include physical model mis-specification concerns, which are also critical, especially when UQ modeling is the goal.
Robust automatic weighting strategies are therefore essential to apply BPINNs to multi-scale and multi-task (inverse) problems and improve the reliability of the UQ estimates. Here, we have therefore proposed the AW-HMC BPINN formulation, which provides a plug-in automatic adaptive weighting strategy for standard BPINNs. AW-HMC effectively deals with multi-potential sampling, energy conservation instabilities, disparate scales, and noise in the data, as we have shown in the presented benchmarks.
We have shown that the presented strategy ensures a weighted posterior distribution well-fitted to explore the Pareto front, providing balanced sampling by ensuring appropriate adjustment of the \(\lambda\) weights based on Inverse Dirichlet weighting [29]. The weights can therefore directly be interpreted as training uncertainties, as measured by the variances of the task-specific training gradients. This leads to weights that are adjusted with respect to the model to yield the least sensitive multi-potential energy for BPINN HMC sampling. This results in improved convergence, robustness, and UQ reliability, as the sampling focuses on the Pareto front. This enables BPINNs to effectively and efficiently address multi-task UQ.
The proposed method is also computationally more efficient than previous approaches, since it does not require additional hyper-parameters or network layers. This also ensures optimal integration time and convergence in the leapfrog training. It prevents time steps from tending to zero or becoming very small, thereby circumventing a problem commonly encountered in No-U-Turn Sampling (NUTS) when attempting to avoid the pathologically divergent trajectories characteristic of HMC instabilities. The present methodology improves the situation, since the time step no longer needs to meet all of the stiff scaling requirements to ensure energy conservation. As a result, it shortens the overall integration time and the required number of samples, combining computational efficiency with robustness against sampling instabilities.
Our results also show that AW-HMC reduces bias in the sampling, since it is able to automatically adjust the \(\lambda\) parameters, and with them the uncertainty estimates, according to the sensitivity of each term to the noise or inherent scaling. In classical approaches, this is prohibited by the bias and implicit prior introduced by manual weight tuning. In fact, we demonstrated the efficiency of the present method in capturing inverse parameters of different orders of magnitude in a multi-scale problem, assuming completely independent priors with respect to the scaling. Previously, this would have been addressed by imposing prior distributions on these parameters that already rely on appropriate scaling. Otherwise, the classic BPINN formulation is prone to failure. The proposed adaptive weighting strategy avoids these issues altogether, performing much better in multi-scale inverse problems.
We have demonstrated this in real-world applications from computational fluid mechanics (CFD) of incompressible flow in asymmetric 2D geometries. We showed the use of AW-HMC BPINNs for CFD inpainting and studied the impact of noise on the multi-potential energy. This highlighted the robustness of the present approach to noisy measurements, but also its ability to automatically adjust the \(\lambda\) values to accurately estimate the noise levels themselves. In this sense, we were able to show enhanced uncertainty with increasing noise, without any prior on the noise level itself, and to capture distinct intrinsic task sensitivities to the noise. Overall, this offers an effective alternative to automatically address multi-fidelity problems with measurements resulting from unknown heteroscedastic noise distributions.
Taken together, the present results render BPINNs a promising approach to scientific data assimilation. They now have the potential to effectively address multi-scale and multi-task inference problems, to couple UQ with physical priors, and to handle problems with sparse and noisy data. In all of these, the presented approach ensures efficient Pareto-front exploration, the ability to correctly scale multi-scale and stiff dynamics, and to derive unbiased uncertainty information from the data. Our approach involves only minimal assumptions on the noise distribution, the different problem scales, and the weights, and it is computationally efficient. This extends the application of BPINNs to more complex real-world problems that were previously not straightforward to address.
Applications we expect to particularly benefit from these improvements include porous media research, systems biology, and the geosciences, where BPINNs now offer promising prospects for data-driven modeling. They could support and advance efforts for the extraction and prediction of morphological geometries [36, 47], upscaling and coarse-graining of material properties [1] and physical properties [44] directly from sample images. However, capturing these features from imperfect images remains challenging and is usually subject to uncertainties, e.g., due to unavoidable imaging artifacts. This either requires the development
of homogenization-based approaches [18] to bridge scales and quantify these uncertainties [35] or the use of data assimilation to compensate for the partial lack of knowledge in the images. The present BPINNs formulation with AW-HMC offers a potential solution.
## Appendix A Upper bound on the Inverse-Dirichlet weighting variance
The Inverse-Dirichlet Adaptively Weighted HMC algorithm, developed in Sect. 3.1, guarantees that the gradients of the multi-potential energy terms have balanced distributions throughout the sampling, as shown by their joint variance below:
\[\gamma^{2}:=\mathrm{Var}\{\lambda_{k}\nabla_{\Theta}\mathcal{L}_{k}\}\simeq \min_{t=0,...,K}(\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{t}\}),\quad\forall k =0,...,K. \tag{41}\]
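Before turning to the bound itself, the following minimal sketch makes the balancing condition (41) concrete by computing \(\lambda\) values from flattened task gradients; it is a schematic stand-in for the actual weighting step, with the empirical variance over gradient components replacing the variance with respect to \(\Theta\).

```python
import numpy as np

def inverse_dirichlet_weights(task_grads):
    """Given flattened task gradients grad L_k, return lambda_k so that
    Var{lambda_k * grad_k} matches the smallest task-gradient variance,
    i.e. lambda_k = sqrt(min_t Var{grad_t} / Var{grad_k})."""
    variances = np.array([np.var(g) for g in task_grads])
    return np.sqrt(variances.min() / variances)

# three tasks with gradients on very different scales
rng = np.random.default_rng(2)
grads = [rng.standard_normal(10_000) * s for s in (1e-3, 1.0, 50.0)]
lam = inverse_dirichlet_weights(grads)
print([float(np.var(l * g)) for l, g in zip(lam, grads)])  # roughly equal
```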
In this section, we use a general case to demonstrate that \(\gamma^{2}\) is upper-bounded and controlled by a reliability criterion which depends on the prediction errors or PDE residuals, the dispersion of their mean variability with respect to \(\Theta\), and the setting of the \(\sigma_{\bullet}\) values.
This first states the necessity of setting the \(\sigma\) parameters adequately to avoid biased and imbalanced conditions on the task gradient distributions, since these parameters critically and arbitrarily affect the control of the gradient distributions. This also highlights that manual tuning of the \(\sigma\) values may be an extremely sensitive task, difficult to achieve in practice. Therefore, in all the applications presented in this article, we chose to set these parameters uniformly and instead rely on the automatic adjustment of the \(\lambda\) weights to ensure, inter alia, an efficient exploration of the Pareto front. It follows that these standard deviation parameters imply a strong constraint on each gradient distribution, with respect to \(\Theta\), and thus impact each task uncertainty.
For the sake of simplicity, we consider two-task sampling with a data-fitting term on a field \(u\) and a PDE constraint, denoted \(\mathcal{F}\), so the data set is decomposed into \(\mathcal{D}=\mathcal{D}^{u}\cup\mathcal{D}^{\Omega}\), following the notations introduced in Sect. 2.2. The multi-potential energy thus reduces to:
\[U(\Theta)=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\left\|u_{\Theta}-u\right\|_{ \mathcal{D}^{u}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\left\|\mathcal{F}(u _{\Theta})\right\|_{\mathcal{D}^{\Omega}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}} \left\|\Theta\right\|^{2}:=\sum_{k=0}^{K+1}\lambda_{k}\mathcal{L}_{k}(\Theta) \tag{42}\]
where we choose to keep the \(\sigma\) notation for the demonstration and restrict \(\Theta\) to the neural network parameters, even if the following also holds in an inverse problem paradigm. As a reminder, the measurement data used for the training, \(\mathcal{D}^{u}\), can differ from the collocation points \(\mathcal{D}^{\Omega}\) where we impose the PDE constraint, and their respective numbers are denoted \(N^{u}\) and \(N^{\Omega}\). With the notations from Sect. 2.2, the gradients of the two-task potential energy read, respectively:
\[\begin{split}\frac{\partial\mathcal{L}_{0}}{\partial\Theta_{j}} (\Theta)&=\frac{1}{\sigma_{0}^{2}N^{u}}\sum_{i=0}^{N^{u}}\bigg{(} u_{\Theta}(x_{i})-u_{i}\bigg{)}\frac{\partial u_{\Theta}}{\partial\Theta_{j}} (x_{i})\\ \frac{\partial\mathcal{L}_{1}}{\partial\Theta_{j}}(\Theta)& =\frac{1}{\sigma_{1}^{2}N^{\Omega}}\sum_{i=0}^{N^{\Omega}} \mathcal{F}\bigg{(}u_{\Theta}(x_{i})\bigg{)}\frac{\partial\mathcal{F}(u_{ \Theta})}{\partial\Theta_{j}}(x_{i})\end{split} \tag{43}\]
for \(\Theta\in\mathbb{R}^{p}\) and we can thus decompose the variances \(\mathrm{Var}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{L}_{k}\big{]},\,k=0,1\) with respect to these gradients. To do so, we first compute their mean with respect to \(\Theta\) and get respectively
\[\mathbb{E}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{L}_{0}\big{]}=\frac{1}{N^{p}}\sum_{j=0}^{N^{p}}\frac{\partial\mathcal{L}_{0}}{\partial\Theta_{j}}(\Theta)=\frac{1}{\sigma_{0}^{2}N^{u}}\sum_{i=0}^{N^{u}}\bigg{(}u_{\Theta}(x_{i})-u_{i}\bigg{)}\mathbb{E}_{\Theta}\big{[}\nabla_{\Theta}u_{\Theta}\big{]}(x_{i})=\frac{1}{\sigma_{0}^{2}}\mathbb{E}_{\mathcal{D}^{u}}\big{[}(u_{\Theta}-u)\mathbb{E}_{\Theta}\big{[}\nabla_{\Theta}u_{\Theta}\big{]}\big{]} \tag{44}\]
and
\[\mathbb{E}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{L}_{1}\big{]}=\frac{1}{ \sigma_{1}^{2}}\mathbb{E}_{\mathcal{D}^{\Omega}}\big{[}\mathcal{F}(u_{\Theta })\mathbb{E}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{F}(u_{\Theta})\big{]} \big{]} \tag{45}\]
with the special configuration \(\nabla_{\Theta}\mathcal{F}(u_{\Theta})=\mathcal{F}(\nabla_{\Theta}u_{\Theta})\) if \(\mathcal{F}\) is linear. Finally, we can extend it to the
variance computations, as follows:
\[\begin{split}\mathrm{Var}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{L}_ {0}\big{]}&=\frac{1}{N^{p}}\sum_{j=0}^{N^{p}}\bigg{(}\frac{ \partial\mathcal{L}_{0}}{\partial\Theta_{j}}-\mathbb{E}_{\Theta}\big{[}\nabla_{ \Theta}\mathcal{L}_{0}\big{]}\bigg{)}^{2}\\ &=\frac{1}{N^{p}(N^{u}\sigma_{0}^{2})^{2}}\sum_{j=0}^{N^{p}} \bigg{[}\sum_{i=0}^{N^{u}}\bigg{(}u_{\Theta}(x_{i})-u_{i}\bigg{)}\bigg{(}\frac {\partial u_{\Theta}}{\partial\Theta_{j}}(x_{i})-\mathbb{E}_{\Theta}\big{[} \nabla_{\Theta}u_{\Theta}\big{]}(x_{i})\bigg{)}\bigg{]}^{2}\\ &=\frac{1}{(N^{u}\sigma_{0}^{2})^{2}}\sum_{i=0}^{N^{u}}\sum_{k=0} ^{N^{u}}\bigg{(}u_{\Theta}(x_{i})-u_{i}\bigg{)}\bigg{(}u_{\Theta}(x_{k})-u_{k }\bigg{)}\mathrm{Cov}_{\Theta}\big{[}\nabla_{\Theta}u_{\Theta}(x_{i}),\nabla_ {\Theta}u_{\Theta}(x_{k})\big{]}\\ &\leqslant\frac{1}{\sigma_{0}^{4}}\|u_{\Theta}-u\|_{\infty, \mathcal{D}^{u}}^{2}\mathrm{Cov}_{\Theta}\bigg{[}\frac{1}{N^{u}}\sum_{i=0}^{ N^{u}}\nabla_{\Theta}u_{\Theta}(x_{i}),\frac{1}{N^{u}}\sum_{k=0}^{N^{u}}\nabla_{ \Theta}u_{\Theta}(x_{k})\bigg{]}\\ &=\frac{1}{\sigma_{0}^{4}}\|u_{\Theta}-u\|_{\infty,\mathcal{D}^ {u}}^{2}\mathrm{Var}_{\Theta}\big{[}\mathbb{E}_{\mathcal{D}^{u}}\big{[} \nabla_{\Theta}u_{\Theta}\big{]}\big{]}\end{split} \tag{46}\]
that provides an upper bound for the gradient variance of the data-fitting term. We then obtain, in the same way, the PDE constraint bound as:
\[\mathrm{Var}_{\Theta}\big{[}\nabla_{\Theta}\mathcal{L}_{1}\big{]}\leqslant \frac{1}{\sigma_{1}^{4}}\|\mathcal{F}(u_{\Theta})\|_{\infty,\mathcal{D}^{ \Omega}}^{2}\mathrm{Var}_{\Theta}\big{[}\mathbb{E}_{\mathcal{D}^{\Omega}} \big{[}\nabla_{\Theta}\mathcal{F}(u_{\Theta})\big{]}\big{]}. \tag{47}\]
The notation \(\|\cdot\|_{\infty,\mathcal{D}^{\bullet}}\) here refers to the discrete \(\ell^{\infty}\) norm on the spatial domain composed of the \(\mathcal{D}^{\bullet}\) training points, and \(\mathbb{E}_{\mathcal{D}^{\bullet}}\) introduces the spatial mean on the corresponding data set. Hence, the gradient variances of the tasks are controlled by the crossed components \(\mathrm{Var}_{\Theta}\mathbb{E}_{\mathcal{D}^{\bullet}}\), which can be interpreted as sensitivity terms evaluating the dispersion with respect to \(\Theta\) of the gradient descent directions, averaged in space. Finally, since the \(\sigma_{\bullet}\) values are uniformly set to one to avoid biased sampling, the \(\lambda\) values are computed in such a way that the joint variance of the gradient distributions is bounded by:
\[\gamma^{2}\leqslant\min\Big{\{}\|u_{\Theta}-u\|_{\infty,\mathcal{D}^{u}}^{2} \mathrm{Var}_{\Theta}\big{[}\mathbb{E}_{\mathcal{D}^{u}}\big{[}\nabla_{ \Theta}u_{\Theta}\big{]}\big{]},\,\|\mathcal{F}(u_{\Theta})\|_{\infty, \mathcal{D}^{\Omega}}^{2}\mathrm{Var}_{\Theta}\big{[}\mathbb{E}_{\mathcal{D}^{ \Omega}}\big{[}\nabla_{\Theta}\mathcal{F}(u_{\Theta})\big{]}\big{]}\Big{\}} \tag{48}\]
which highlights the fact that the weights are adjusted with respect to the most likely task and thus improve the reliability in the uncertainty quantification. The present computations can straightforwardly be extended to more complex multi-potential energy terms for direct and inverse real-world problems, which concludes our analysis.
## Appendix B 2D Sobolev training benchmark
We extend the Sobolev benchmark used in Sect. 3.2 to 2D training with the gradient and Laplacian operators, with a target functional of the form:
\[u(x,y)=\sum_{i=1}^{N_{rep}}A_{x}^{i}\cos\left(2\pi L^{-1}l_{x}^{i}x+\phi_{x}^{ i}\right)A_{y}^{i}\sin\left(2\pi L^{-1}l_{y}^{i}y+\phi_{y}^{i}\right) \tag{49}\]
which enables us to deal with a wide range of shape complexities and sharp interfaces, in addition to the stiffness introduced by the higher-order derivatives. We set the domain size to \(L=2\pi\), the number of repetitions \(N_{rep}=5\), while the parameters \(A_{x}\) and \(A_{y}\) are independently and uniformly sampled from the interval \([-2,2]\), as are \(\phi_{x}\) and \(\phi_{y}\) from \([0,2\pi]\). In order to treat several shape complexities, we consider a range of parameter \(l\) such that the local length scales \(l_{x}\) and \(l_{y}\) are randomly sampled from the set \(\{1,2,...,l\}\). The 2D spatial domain \([0,2\pi]^{2}\) is covered by a uniform grid with a resolution of \(256\times 256\), along with randomly-selected training points. We then study both the impact of the functional complexity, by setting different values of \(l\), and the number of training points on the Bayesian Model Average resulting from our AW-HMC methodology.
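As a minimal sketch of this setup, the code below draws one random target of the form (49) on the \(256\times 256\) grid; the parameter sampling follows the description above, while the seed and function name are illustrative.

```python
import numpy as np

def sobolev_target_2d(l=8, n_rep=5, L=2 * np.pi, res=256, seed=0):
    """Sample one random 2D target u(x, y) of eq. (49) on a uniform grid."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, L, res), np.linspace(0, L, res),
                       indexing="ij")
    u = np.zeros_like(x)
    for _ in range(n_rep):
        ax, ay = rng.uniform(-2, 2, size=2)            # amplitudes A_x, A_y
        phx, phy = rng.uniform(0, 2 * np.pi, size=2)   # phases
        lx, ly = rng.integers(1, l + 1, size=2)        # length scales in {1..l}
        u += (ax * np.cos(2 * np.pi / L * lx * x + phx)
              * ay * np.sin(2 * np.pi / L * ly * y + phy))
    return u

print(sobolev_target_2d(l=8).shape)  # (256, 256)
```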
Figure 16: 2D Sobolev training benchmark: BMA predictions, predicted standard deviation, and relative BMA errors, presented locally for each term of the multi-objective potential energy. We worked on a limited case \(l=8\) with about \(15\%\) of training points. The global relative BMA errors, averaged over the entire domain, scale around \(1.69\mathrm{e}{-3}\), \(3.42\mathrm{e}{-3}\), and \(5.6\mathrm{e}{-3}\), respectively.
Figure 15: 2D Sobolev training benchmark: comparison of the relative BMA errors, as defined in (27), plotted with respect to the number of training points for various shape complexities induced by the different values of \(l\). The number of training points is increased until about \(30\%\) of the whole data set is reached, for 20000 training points.
The results on the entire benchmark setup, presented in Fig 15, show a convergence trend with an increasing number of training points, independently of the \(l\) values, even though the relative BMA errors reach higher bounds with additional shape complexity. These relative BMA errors are computed according to equation (27) and are averaged versions of different repetitions of the Sobolev sampling, running simultaneously in parallel. In fact, in order to account for the stochasticity that may arise from the sampling variabilities themselves, we performed several realizations starting from distinct initializations of the neural network \(\Theta^{t_{0}}\) and momentum \(r^{t_{0}}\) parameters, which lead to different sampling realizations. These sampling variabilities can then be taken into account to compute the standard deviation over the repetitions, as illustrated by the colored band in Fig 15.
For each term of the multi-potential functional, we also represent in Fig 16, respectively, the BMA predictions, the uncertainties based on the predictive standard deviations throughout the sampling, and the relative BMA errors, in the case \(l=8\) with \(10000\) training points randomly sampled over the whole domain. The results here show enhanced uncertainties near the boundary walls, where the higher errors are located, and highlight the ability of our methodology to capture complex shape fields of different orders of magnitude at the same time. We can also emphasize that such a 2D Sobolev training benchmark was previously unachievable with the classical BPINNs-HMC formulation.
## Appendix C Failure of the usual methodologies on the Lotka-Volterra inverse problem
We consider the Lotka-Volterra inverse problem, as introduced in Sect. 3.3, to investigate the impact of multi-scale dynamics on the usual methodologies, namely HMC with uniform weighting and NUTS. The sampling and leapfrog parameters are set according to the AW-HMC test case, where \(N\) refers to the burn-in steps and the number of adaptive steps for the HMC and NUTS formulations, respectively. Therefore, we compare the different samplers assuming that 1) their time complexity is the same and 2) we impose no informative priors on the inverse parameter scaling. In fact, the first condition states that different
Figure 17: Failure mode of classical HMC, with uniform weighting, on the Lotka-Volterra multi-scale inverse problem defined in Sect. 3.3. BMA predictions for the two-species populations along the physical time with their uncertainties (bottom and top left figures). Relative BMA-CE errors throughout the sampling iterations illustrate the lack of convergence of the method (top right).
Figure 19: Failure mode of HMC with NUTS on the Lotka-Volterra multi-scale inference: histogram of the marginal posterior distributions for the inverse parameters (top) and phase diagrams of their trajectories throughout the sampling (bottom). The biased predictions from Fig 18 prevent proper inference of the inverse parameters, leading to random-walk pathological behavior in the updated parameters.
Figure 18: Failure mode of HMC with NUTS adaptation on the Lotka-Volterra multi-scale inverse problem defined in Sect. 3.3. BMA predictions for the two-species populations along the physical time with their uncertainties (bottom and top left figures). Relative BMA-CE errors throughout the sampling iterations show an imbalance between the tasks and a preferential adaptation of the prey population (top right figure).
leapfrog parameters might improve the inference of these conventional methodologies. However, this implies a noticeable decrease in the leapfrog time step \(\delta t\), and thus slower exploration of the energy levels. Hence, these methods require either an increase in the integration time, by increasing \(L\), or a larger number of samples to obtain suitable predictions. As a reminder, independently of this lack of efficiency in the posterior distribution sampling, poor choices of the multi-potential energy weights can bias the sampler and deviate it from the Pareto front exploration. The second assumption is motivated by the aim of addressing UQ on multi-scale dynamics without any prior knowledge of the separate scales. This arises from an assertion by Linka et al. [21] indicating that sensitivity to scaling disrupts the performance of the BPINNs-HMC. Finally, we also consider sequential training to provide an appropriate basis for comparison between the different methods.
The results show a lack of convergence of the classical HMC with uniform weighting (Fig 17, top right) and also a strong imbalance between the tasks. The relative BMA-CE errors effectively characterize an extremely poor convergence of the predator population with respect to the prey population, which translates directly into an inefficient BMA prediction for the two-species populations (Fig 17, bottom and top left). This failure mode is essentially due to the massive rejection of the samples (acceptance rate less than \(1\%\)), caused by the non-conservation of the Hamiltonian trajectories along the leapfrog steps. Hence, this confirms the lack of robustness of the BPINNs-HMC paradigm when facing instability issues due to multi-scale dynamics.
The NUTS alternative also struggles to converge on this multi-scale inverse problem and results in inadequate predictions, especially for the predator population. Here the reason is not a massive sample rejection, but rather a prohibitive decrease in the time step, reaching \(\delta t=8.26\mathrm{e}{-5}\) and \(2.81\mathrm{e}{-5}\), respectively, at the end of the adaptive steps, nearly corresponding to a ten-fold drop in the time step compared to AW-HMC. The relative BMA-CE errors (Fig 18, top right) reveal that this time-step adaptation is suitable for the convergence of the prey population, since it appears to be the most sensitive task. This sensitivity should be understood in the sense that small variations with respect to \(\Theta\) in the potential energy induce the strongest constraint on the Hamiltonian energy conservation. However, the time-step adaptation is not satisfactory for the predator population and even leads to detrimental forgetting of the neural network throughout the sequential training. This translates into misleading predictions of the population evolution (see Fig 18, bottom and top left) and an unsuccessful inference of the inverse parameters (Fig 19). The phase diagram of the inverse parameter trajectories demonstrates the difficulties of the NUTS sampler in adequately identifying the modes resulting from separate scales. Overall, the NUTS sampler suffers from a lack of convergence toward the Pareto front and a misleading inference of the inverse parameters, subject to weakly-informed priors, due to its inability to capture multi-scale behaviors.
## Appendix D Characterization of the multi-potential energy in the CFD inverse problem
The CFD inverse problem, defined in Sect. 4.2, involves the recovery of the latent pressure field \(p_{\Theta}\) in addition to the flow regime parameter, given by the Reynolds number \(Re_{\Theta}\), based upon partial measurements of the velocity field. The training dataset \(\mathcal{D}\) used for the AW-HMC sampling is first decomposed into 9559 measurements of randomly-sampled \(\mathbf{u}\), which respectively define the \(\mathcal{D}^{\mathbf{u}}\) and \(\mathcal{D}^{\partial}\) sets of interior and boundary points. The same collocation points define \(\mathcal{D}^{\Omega}\), where we impose the PDE constraints and the divergence-free condition. The steep stenosis geometry considered in this problem generates sharp gradients at the wall interface. The latter need to be adequately captured to obtain consistency in the inference of the latent pressure and inverse parameter. Hence, we complemented the training with some partial measurements of the first-order derivatives of the velocity. This enables us to ensure that the convective terms in the PDE constraints (37) are consistent with the velocity data, and therefore to infer the corresponding pressure field.
The multi-potential energy is thus written as:
\[\begin{split} U(\Theta)&=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\left\|u_{\Theta}-u\right\|_{\mathcal{D}^{\mathbf{u}}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\left\|v_{\Theta}-v\right\|_{\mathcal{D}^{\mathbf{u}}}^{2}+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|u_{\Theta}-u\right\|_{\mathcal{D}^{\partial}}^{2}+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|v_{\Theta}-v\right\|_{\mathcal{D}^{\partial}}^{2}\\ &+\frac{\lambda_{4}}{2\sigma_{4}^{2}}\left\|\partial_{x}u_{\Theta}-\partial_{x}u\right\|_{\mathcal{D}^{\mathbf{u}}}^{2}+\frac{\lambda_{5}}{2\sigma_{5}^{2}}\left\|\partial_{x}v_{\Theta}-\partial_{x}v\right\|_{\mathcal{D}^{\mathbf{u}}}^{2}+\frac{\lambda_{6}}{2\sigma_{6}^{2}}\left\|\partial_{x}u_{\Theta}-\partial_{x}u\right\|_{\mathcal{D}^{\partial}}^{2}\\ &+\frac{\lambda_{7}}{2\sigma_{7}^{2}}\left\|\partial_{x}v_{\Theta}-\partial_{x}v\right\|_{\mathcal{D}^{\partial}}^{2}+\frac{\lambda_{8}}{2\sigma_{8}^{2}}\left\|Re_{\Theta}^{-1}\Delta u_{\Theta}-(u_{\Theta}\partial_{x}u_{\Theta}+v_{\Theta}\partial_{y}u_{\Theta})-\partial_{x}p_{\Theta}\right\|_{\mathcal{D}^{\Omega}}^{2}\\ &+\frac{\lambda_{9}}{2\sigma_{9}^{2}}\left\|Re_{\Theta}^{-1}\Delta v_{\Theta}-(u_{\Theta}\partial_{x}v_{\Theta}+v_{\Theta}\partial_{y}v_{\Theta})-\partial_{y}p_{\Theta}\right\|_{\mathcal{D}^{\Omega}}^{2}\\ &+\frac{\lambda_{10}}{2\sigma_{10}^{2}}\left\|\nabla\cdot\mathbf{u}_{\Theta}\right\|_{\mathcal{D}^{\Omega}}^{2}+\frac{\lambda_{11}}{2\sigma_{11}^{2}}\left\|\partial_{y}u_{\Theta}-\partial_{y}u\right\|_{\mathcal{D}^{\mathbf{u}}}^{2}+\frac{\lambda_{12}}{2\sigma_{12}^{2}}\left\|\partial_{y}u_{\Theta}-\partial_{y}u\right\|_{\mathcal{D}^{\partial}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|_{\mathbb{R}^{p+1}}^{2}\end{split} \tag{50}\]
where the notation \(\|\cdot\|\) refers to either the RMS norm on \(\mathcal{D}^{\bullet}\) or the usual Euclidean norm on \(\mathbb{R}^{p+1}\).
| 2307.13425 | A signal processing interpretation of noise-reduction convolutional neural networks | Encoding-decoding CNNs play a central role in data-driven noise reduction and can be found within numerous deep-learning algorithms. However, the development of these CNN architectures is often done in ad-hoc fashion and theoretical underpinnings for important design choices is generally lacking. Up to this moment there are different existing relevant works that strive to explain the internal operation of these CNNs. Still, these ideas are either scattered and/or may require significant expertise to be accessible for a bigger audience. In order to open up this exciting field, this article builds intuition on the theory of deep convolutional framelets and explains diverse ED CNN architectures in a unified theoretical framework. By connecting basic principles from signal processing to the field of deep learning, this self-contained material offers significant guidance for designing robust and efficient novel CNN architectures. | Luis A. Zavala-Mondragón, Peter H. N. de With, Fons van der Sommen | 2023-07-25T11:45:28Z | http://arxiv.org/abs/2307.13425v1 |

# A signal processing interpretation of noise-reduction convolutional neural networks
###### Abstract
Encoding-decoding CNNs play a central role in data-driven noise reduction and can be found within numerous deep-learning algorithms. However, the development of these CNN architectures is often done in an ad-hoc fashion, and theoretical underpinnings for important design choices are generally lacking. Up to this moment, there are different existing relevant works that strive to explain the internal operation of these CNNs. Still, these ideas are either scattered and/or may require significant expertise to be accessible for a bigger audience. In order to open up this exciting field, this article builds intuition on the theory of deep convolutional framelets and explains diverse ED CNN architectures in a unified theoretical framework. By connecting basic principles from signal processing to the field of deep learning, this self-contained material offers significant guidance for designing robust and efficient novel CNN architectures.
Denoising, convolutional neural network, encoding-decoding.
## I Introduction
A well-known image-processing application is noise/artifact reduction of images, which consists of estimating a noise/artifact-free signal from a noisy observation. In order to achieve this, conventional signal processing algorithms often employ explicit assumptions on the signal and noise characteristics, which has resulted in well-known algorithms such as wavelet shrinkage [1], sparse dictionaries [2], total-variation minimization [3] and low-rank approximation [4]. With the advent of deep learning techniques, signal processing algorithms applied to image denoising have been regularly outperformed and increasingly replaced by encoding-decoding convolutional neural networks (CNNs).
In this article, rather than conventional signal processing algorithms, we focus on the so-called encoding-decoding CNNs. These models contain an _encoder_, which maps the input to a multi-channel/redundant representation, and a _decoder_, which maps the encoded signal back to the original domain. In both the encoder and decoder, sparsifying non-linearities which suppress parts of the signal are applied. In contrast with conventional signal processing algorithms, encoding-decoding CNNs are often presented as a solution which does not make explicit assumptions on the signal and noise. For example, in supervised algorithms, an encoding-decoding CNN _learns_ the optimal parameters to filter the signal from a set of paired examples of noise/artifact-free images and images contaminated with noise/artifacts [5, 6, 7], which highly simplifies the solution of noise-reduction problems, since this circumvents the use of explicit modeling of the signal and noise. Furthermore, the good performance and simple use of encoder-decoder CNNs/autoencoders have enabled additional data-driven noise-reduction algorithms, where CNNs are embedded as part of a larger system. Examples of such approaches are unsupervised noise reduction [8] and denoising based on generative adversarial networks [9]. Besides this, smoothness in signals can also be obtained by advanced regularization using CNNs, e.g. by exploiting data-driven model-based iterative reconstruction [10].
Despite the impressive noise-reduction performance and flexibility of encoding-decoding convolutional neural networks, these models also have downsides that should be considered. First, the complexity and heuristic nature of such designs often offers only a restricted understanding of the internal operation of such architectures [11]. Second, the training and deployment of CNNs requires specialized hardware and the use of significant computational resources. Third and final, the restricted understanding of the signal modeling in encoding-decoding CNNs does not clearly reveal the limitations of such models and, consequently, it is not obvious how to overcome these problems.
In order to overcome the limitations of encoding-decoding CNNs, new research has tackled the lack of explainability of these models by acknowledging the similarity between the building blocks of encoding-decoding CNNs applied to image noise reduction and the elements of well-known signal processing algorithms, such as wavelet decomposition, low-rank approximation [12, 13, 14], variational methods [15], lower-dimensional manifolds [8] and convolutional sparse coding [16]. Furthermore, practical works on shrinkage-based CNNs inspired by well-established wavelet shrinkage algorithms have further deepened the connections between signal processing and CNNs [17, 18]. This unified treatment of signal processing-inspired CNNs has resulted in more explainable [6, 8], better performing [6] and more memory-efficient designs [19].
This article has three main objectives. First, to summarize the diverse explanations of the components of encoding-decoding convolutional neural networks applied to image noise reduction based on the concept of deep convolutional framelets [12] and on elementary signal processing concepts.
Both aspects are considered with the aim of achieving an in-depth understanding of the internal operation of encoding-decoding CNNs and of showing that the design choices imply _implicit_ assumptions about the signal behavior inside the CNN. A second objective is to offer practitioners tools for optimizing their CNN designs with signal processing concepts. Third and final, the aim is to show practical use cases, where existing CNNs are analyzed in a unified framework, thereby enabling a better comparison of different designs by making their internal operation explicitly visible. Our analyses build on existing works [12, 6, 20], which analyzed CNNs while ignoring the non-linearities. In this article, we overcome this limitation and present a complete analysis including the non-linear activations, which reveals important assumptions implicit in the analyzed models.
The structure of this article is as follows. Section II introduces the notation that is used in this text. Section III describes the signal model and the architecture of encoding-decoding networks. Afterwards, Section IV addresses fundamental aspects of signal processing, such as singular value decomposition, low-rank approximation, framelets, as well as the estimation of signals in the framelet domain. All the concepts of Sections III and IV converge in Section V, where the encoding-decoding CNNs are interpreted in terms of a data-driven low-rank approximation and of wavelet shrinkage. Afterwards, based on the learnings from Section V, Section VI shows the analysis of diverse architectures from a signal processing perspective and under a set of explicit assumptions. Section VII then explores whether some of the theoretical properties exposed here can be observed in trained models. Based on the diverse described models and the theoretical operation of CNNs, Section VIII addresses a design criterion which can be used to design or choose new models and briefly describes the state of the art for noise reduction with CNNs. Finally, Section IX presents concluding remarks and discusses diverse elements that have not yet been (widely) explored by current CNN designs.
## II **Notation**
Convolutional neural networks (CNNs) are composed by basic elements, such as convolution, activation and down/up-sampling layers. In order to achieve better clarity in the explanations given in this paper, we define the mathematical notation to represent the basic operations of CNNs. Part of the definitions presented here are based on the work of Zavala _et al._[19].
In the following, a scalar is represented by a lower-case letter (e.g. \(a\)), while a vector is represented by an underlined lower-case letter (e.g. \(\underline{a}\)). Furthermore, a matrix, such as an image or convolution mask, is represented by a boldface lowercase letter (e.g. variables \(\mathbf{x}\) and \(\mathbf{y}\)). Finally, a tensor is defined by a boldface uppercase letter. For example, the two arbitrary tensors \(\mathbf{A}\) and \(\mathbf{Q}\) are defined by
\[\mathbf{A}=\begin{pmatrix}\mathbf{a}_{0}^{0}&\dots&\mathbf{a}_{N_{\text{C}}- 1}^{0}\\ \vdots&\ddots&\vdots\\ \mathbf{a}_{0}^{N_{\text{R}}-1}&\dots&\mathbf{a}_{N_{\text{C}}-1}^{N_{\text{R} }-1}\end{pmatrix},\mathbf{Q}=\begin{pmatrix}\mathbf{q}^{0}\\ \vdots\\ \mathbf{q}^{N_{\text{R}}-1}\end{pmatrix}. \tag{1}\]
Here, entries \(\mathbf{a}_{c}^{r}\) and \(\mathbf{q}^{r}\) represent two-dimensional arrays (matrices). Since the defined tensors are used in the context of CNNs, matrices \(\mathbf{a}_{c}^{r}\) and \(\mathbf{q}^{r}\) are learned filters, which have dimensions \((N_{\text{V}}\times N_{\text{H}})\), where \(N_{\text{V}}\) and \(N_{\text{H}}\) denote the filter dimensions in the vertical and horizontal directions, respectively. Finally, we define the total tensor dimension of \(\mathbf{A}\) and \(\mathbf{Q}\) by \((N_{\text{C}}\times N_{\text{R}}\times N_{\text{V}}\times N_{\text{H}})\) and \((N_{\text{R}}\times 1\times N_{\text{V}}\times N_{\text{H}})\), where \(N_{\text{R}}\) and \(N_{\text{C}}\) are the number of row and column entries, respectively. If the tensor \(\mathbf{A}\) contains the convolution weights in a CNN, the row-entry dimensions represent the input number of channels to a layer, while the number of column elements denotes the number of output channels.
Having defined the notation for the variables, we focus on a few relevant operators. First, the transpose of a tensor, \((\cdot)^{\intercal}\), is expressed by
\[\mathbf{Q}^{\intercal}=\begin{pmatrix}\mathbf{q}^{0}&\dots&\mathbf{q}^{N_{ \text{R}}-1}\end{pmatrix}. \tag{2}\]
Furthermore, the convolution of two tensors is written as \(\mathbf{AQ}\) and specified by
\[\mathbf{AQ}=\begin{pmatrix}\sum_{r=0}^{N_{\text{R}}-1}\mathbf{a}_{r}^{0}* \mathbf{q}^{r}\\ \vdots\\ \sum_{r=0}^{N_{\text{R}}-1}\mathbf{a}_{r}^{N_{\text{R}}-1}*\mathbf{q}^{r} \end{pmatrix}. \tag{3}\]
Here, the symbol \(*\) defines the convolution between two matrices (images).
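A minimal sketch of this multi-channel convolution, following the common CNN convention that every output channel sums the 2D convolutions of all input channels with the corresponding filters, reads as follows; the array layout and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def tensor_conv(A, Q):
    """Tensor convolution in the spirit of eq. (3).
    A: (n_out, n_in, kh, kw) filter bank, Q: (n_in, H, W) image stack;
    output channel j sums convolve2d(Q[r], A[j, r]) over input channels r."""
    n_out, n_in = A.shape[:2]
    return np.stack([
        sum(convolve2d(Q[r], A[j, r], mode="same") for r in range(n_in))
        for j in range(n_out)
    ])

# map a 3-channel 32x32 input to 8 channels with random 3x3 filters
rng = np.random.default_rng(3)
out = tensor_conv(rng.standard_normal((8, 3, 3, 3)),
                  rng.standard_normal((3, 32, 32)))
print(out.shape)  # (8, 32, 32)
```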
In this paper, images, which are 2D arrays (matrices), are often convolved with 4D tensors. When this operation is performed, images are considered to have dimensions \((1\times 1\times N_{\text{V}}\times N_{\text{H}})\). In addition, in this paper matrix \(\mathbf{I}\) is the identity signal for the convolution operator, which for a 2D image is the Kronecker delta/discrete impulse (an image with a single non-zero pixel with unity amplitude at the center of the image). Furthermore, variables in the _decoding path_ of a CNN are distinguished with a tilde (e.g. \(\tilde{\mathbf{K}}\), \(\tilde{\underline{b}}\)).
Additional symbols that will be used throughout the article are the down- and up-sampling operations by a factor \(s\), which are denoted by \(f_{(s\downarrow)}(\cdot)\) for down-sampling and \(f_{(s\uparrow)}(\cdot)\) for up-sampling. In this paper, both operations are defined in the same way as in multi-rate filter banks. For example, consider the signal
\[\underline{x}=\begin{pmatrix}1,&2,&3,&4,&5,&6,&7,&8,&9,&10\end{pmatrix}. \tag{4}\]
If we apply the down-sampling operator to \(\underline{x}\) by a factor of 2, this results in
\[\underline{z}=f_{(2\downarrow)}(\underline{x})=\begin{pmatrix}1,&3,&5,&7,&9 \end{pmatrix}, \tag{5}\]
where \(\underline{z}\) is the down-sampled version of \(\underline{x}\). Conversely, the result of applying the up-sample operator \(f_{(2\uparrow)}(\cdot)\) gives as result
\[f_{(2\uparrow)}(\underline{z})=\begin{pmatrix}1,&0,&3,&0,&5,&0,&7,&0,&9,&0 \end{pmatrix}. \tag{6}\]
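Both operators are simple to implement; the following sketch reproduces the example of equations (4)-(6) in NumPy (the function names are ours).

```python
import numpy as np

def down(x, s=2):
    """f_{(s down)}: keep every s-th sample, cf. eq. (5)."""
    return x[::s]

def up(z, s=2):
    """f_{(s up)}: insert s - 1 zeros after every sample, cf. eq. (6)."""
    out = np.zeros(len(z) * s, dtype=z.dtype)
    out[::s] = z
    return out

x = np.arange(1, 11)
z = down(x)
print(z)      # [1 3 5 7 9]
print(up(z))  # [1 0 3 0 5 0 7 0 9 0]
```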
Additional operators used in the article are the _rectified linear unit_ (ReLU), the _shrinkage/thresholding_ and the _clipping_, which are represented by \((\cdot)_{+}\), \(\tau_{(\cdot)}(\cdot)\) and \(\mathcal{C}_{(\cdot)}(\cdot)\), respectively.
For better clarity, the most important symbols used in this article are summarized in Table I. In addition, the graphical
representations of some of the symbols that will be used to graphically describe CNNs are shown in Fig. 1.
## III **Encoding-decoding CNNs**
### **Signal model and noise reduction configurations**
In noise-reduction applications, the common additive signal model is defined by
\[\mathbf{y}=\mathbf{x}+\boldsymbol{\eta}, \tag{7}\]
where the observed signal \(\mathbf{y}\) is the result of contaminating a noiseless image \(\mathbf{x}\) with additive noise \(\boldsymbol{\eta}\). Assume that the noiseless signal \(\mathbf{x}\) is to be estimated from the noisy observation \(\mathbf{y}\). In deep learning applications, this is often achieved by models with the form
\[\hat{\mathbf{x}}=G(\mathbf{y}). \tag{8}\]
Here, \(G(\cdot)\) is a generic encoding-decoding CNN. We refer to this form of noise reduction as _non-residual_. Alternatively, it is possible to find \(\hat{\mathbf{x}}\) by training \(G(\cdot)\) to estimate the noise component \(\hat{\boldsymbol{\eta}}\), and subtract it from the noisy image \(\mathbf{y}\) to estimate the noiseless image \(\hat{\mathbf{x}}\), or equivalently
\[\hat{\mathbf{x}}=\mathbf{y}-G(\mathbf{y}). \tag{9}\]
This model is referred to as _residual_[5, 21, 7], because the output of the network is subtracted from its input. For reference, Fig. 2 portrays the difference of the placement of the encoding-decoding structure in residual and non-residual configurations.
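The difference between both configurations amounts to a single global skip connection around the network, as the short sketch below illustrates; the stand-in network G is an arbitrary convolution layer, used only to make the snippet runnable.

```python
import torch

def denoise_non_residual(G, y):
    """Eq. (8): the network output is the clean-image estimate."""
    return G(y)

def denoise_residual(G, y):
    """Eq. (9): the network estimates the noise, subtracted from y."""
    return y - G(y)

G = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in for G(.)
y = torch.randn(1, 1, 64, 64)
print(denoise_non_residual(G, y).shape, denoise_residual(G, y).shape)
```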
### **Encoding-decoding CNNs**
Encoding-decoding (convolutional) neural networks are rooted in the techniques for data-dimensionality reduction and unsupervised feature extraction, where a given signal is mapped to an alternative space via a non-linear transformation. This space should have properties which are somehow attractive for the considered task. For example, for dimensionality reduction, the alternative space should be lower-dimensional than the original input. In this article, we are interested in models that are useful for noise-reduction applications. Specifically, this manuscript addresses models that are referred to as encoding-decoding _convolutional_ neural networks, such as the model by Ranzato _et al._[22], in which the encoder uses convolution filters to produce multi-channel/redundant representations, to which sparsifying non-linearities are applied. The sparsified signal is later mapped back to the original representation. It should be noted that, despite the fact that the origins of encoding-decoding CNNs are linked to feature extraction, this type of architecture quickly proved useful for other applications such as noise reduction, which is the topic of this article. For the rest of this manuscript, whenever we mention an encoding-decoding CNN, we refer to a design which follows the same basic principles as Ranzato's design.
Encoding-decoding CNNs consist of three main parts. (1) The **encoder**, which maps the incoming image to a representation with more image channels via a convolution layer. Every channel of the resulting redundant representation contains a fraction of the content of the original signal. It should be noted that the encoder often (but not necessarily) decreases the resolution of the higher-dimensional representation, to enable multi-resolution processing and to decrease the memory requirements of the design. (2) The **decoder**, which maps the multi-channel representation back to the original space. (3) The **non-linearities**, which suppress specific parts of the signal. In summary, the most basic encoding-decoding step in a CNN \(G(\cdot)\) is expressed by
\[\boxed{G(\mathbf{y})=G_{\text{dec}}(G_{\text{enc}}(\mathbf{y}))}\enspace, \tag{10}\]
Fig. 1: Symbols used for the schematic representations of the CNNs addressed in this article.
Fig. 2: Residual and non-residual network configurations. Note that the main difference between both designs is the global skip connection occurring in the residual structure. Still, it can be observed that the network \(G(\cdot)\) may contain skip connections _internally_.
where \(G_{\text{enc}}(\cdot)\) is the encoder, which is generally defined by
\[\begin{split}\mathbf{C}_{0}=& E_{0}(\mathbf{y}),\\ \mathbf{C}_{1}=& E_{1}(\mathbf{C}_{0}),\\ \mathbf{C}_{2}=& E_{2}(\mathbf{C}_{1}),\\ \vdots&\\ \mathbf{C}_{N-1}=& E_{N-1}(\mathbf{C}_{N-2}),\\ G_{\text{enc}}(\mathbf{y})=&\mathbf{C}_{N-1}. \end{split} \tag{11}\]
Here, \(\mathbf{C}_{n}\) represents the code generated by the \(n\)-th encoding \(E_{n}(\cdot)\), which can be expressed by the equation
\[\boxed{\mathbf{C}_{n}=E_{n}(\mathbf{C}_{n-1})=f_{(s\downarrow)}\big{(}A_{( \underline{b}_{n-1})}(\mathbf{K}_{n-1}\mathbf{C}_{n-1})\big{)}}\enspace. \tag{12}\]
Here, the function \(A(\cdot)_{(\cdot)}\) is a generic activation used in the encoder and \(f_{(s\downarrow)}(\cdot)\) is a down-sampling function by factor \(s\). Complementary to the encoder, the decoder network maps the multi-channel sparse signal back to the original domain. Here, we define the decoder by
\[\begin{split}\tilde{\mathbf{C}}_{N-2}=& D_{N-1}( \mathbf{C}_{N-1}),\\ \vdots&\\ \tilde{\mathbf{C}}_{1}=& D_{2}(\tilde{\mathbf{C}}_{2 }),\\ \tilde{\mathbf{C}}_{0}=& D_{1}(\tilde{\mathbf{C}}_{1 }),\\ G(\mathbf{y})=& D_{0}(\tilde{\mathbf{C}}_{0}),\end{split} \tag{13}\]
where \(\tilde{\mathbf{C}}_{n}\) is the \(n-\)th decoded signal, which is produced by the \(n\)-th decoder layer, yielding the general expression:
\[\boxed{\tilde{\mathbf{C}}_{n-1}=D_{n}(\tilde{\mathbf{C}}_{n})=\tilde{A}_{( \underline{\underline{\mathbf{j}}})}\big{(}\tilde{\mathbf{K}}_{n}^{\intercal }f_{(s\uparrow)}(\tilde{\mathbf{C}}_{n})\big{)}}\enspace. \tag{14}\]
In the above, \(\tilde{A}(\cdot)_{(\cdot)}\) is the activation function used in the decoder and \(f_{(s\uparrow)}(\cdot)\) is an up-sampling function of factor \(s\).
An important remark is that the encoder-decoder CNN does not always contain down/up-sampling layers in which case, the decimation factor \(s\) is unity, which causes \(f_{(1\uparrow)}(\mathbf{x})=f_{(1\downarrow)}(\mathbf{x})=\mathbf{x}\) for any matrix \(\mathbf{x}\). Furthermore, it should be noted also that we assume that the number of channels of the code \(\mathbf{C}_{N}\) is always larger than the previous one \(\mathbf{C}_{N-1}\). Furthermore, it should be noted that a single encoder layer \(E_{n}(\cdot)\) and its corresponding decoder layer \(D_{n}(\cdot)\) can be considered a single-layer encoder-decoder network/pair.
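The sketch below assembles one such single-layer encoder-decoder pair along the lines of equations (12) and (14), with ReLU as the generic activation, strided slicing for \(f_{(s\downarrow)}\) and zero insertion for \(f_{(s\uparrow)}\); the decoding filter is implemented as an independent convolution standing in for \(\tilde{\mathbf{K}}^{\intercal}\), and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class EncDecPair(nn.Module):
    """Single-layer encoder-decoder pair, cf. eqs. (12) and (14)."""
    def __init__(self, n_in=1, n_out=16, k=3, s=2):
        super().__init__()
        self.s = s
        self.conv = nn.Conv2d(n_in, n_out, k, padding=k // 2)    # K
        self.conv_t = nn.Conv2d(n_out, n_in, k, padding=k // 2)  # stand-in for K^T

    def encode(self, y):                      # eq. (12)
        c = torch.relu(self.conv(y))          # generic activation A
        return c[..., ::self.s, ::self.s]     # f_{(s down)}: drop samples

    def decode(self, c):                      # eq. (14)
        up = torch.zeros(c.shape[0], c.shape[1],
                         self.s * c.shape[2], self.s * c.shape[3])
        up[..., ::self.s, ::self.s] = c       # f_{(s up)}: zero insertion
        return self.conv_t(up)

pair = EncDecPair()
y = torch.randn(1, 1, 64, 64)
print(pair.decode(pair.encode(y)).shape)  # torch.Size([1, 1, 64, 64])
```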
For this article, the encoding convolution filter for a given layer \(\mathbf{K}\) has dimensions \((N_{\text{o}}\times N_{\text{i}}\times N_{\text{h}}\times N_{\text{v}})\), where \(N_{\text{i}}\) and \(N_{\text{o}}\) are the number of input and output channels for a convolution layer, respectively. Similarly, \(N_{\text{h}}\) and \(N_{\text{v}}\) are the number of elements in the horizontal and vertical directions, respectively. Note that the encoder increases the number of channels of the signal (e.g. \(N_{\text{o}}>N_{\text{i}}\)), akin to Ranzato's design [22]. Furthermore, it is assumed that the _decoder_ is symmetric in the number of channels to the _encoder_, therefore, the dimensions of the decoding convolution kernel \(\mathbf{K}^{\intercal}\) are \((N_{\text{i}}\times N_{\text{o}}\times N_{\text{h}}\times N_{\text{v}})\). The motivation of this symmetry is to emphasize the similarity between the signal processing and the CNN elements.
## IV **Signal processing fundamentals**
As shown by Ye _et al._[12], within encoding-decoding CNNs, the signal is treated akin to well-known sparse representations, where the coefficients used for the transformation are directly learned from the training data. Prior to addressing this important concept in more detail, relevant supporting concepts such as sparsity, sparse transformations and non-linear signal estimation in the wavelet domain are explained.
### **Sparsity**
A sparse image is a signal where most coefficients are small and the relatively few large coefficients capture most of the information [23]. This characteristic allows discarding the low-amplitude components with relatively small perceptual changes. Hence, the use of sparse signals is attractive for applications such as image compression, denoising and suppression of artifacts.
Despite the convenient characteristics of sparse signals, natural images are often non-sparse. Still, there are numerous transformations that map the signal to a sparse domain and that are analogous to the internal operations of CNNs. For example, _singular value decomposition_ factorizes the image in terms of two sets of orthogonal bases, of which few basis pairs contain most of the energy of the image. An alternative transformation is based on _framelets_, where an image is decomposed into a multi-channel representation, whereby each resulting channel contains a fragment of the Fourier spectrum. In the remainder of this section we will address all of these representations in more detail.
### **Sparse signal representations**
#### IV-B1 **Singular value decomposition (SVD) and low-rank approximation**
Assume that an image (patch) is represented by a matrix \(\mathbf{y}\) with dimensions \((N_{\text{r}}\times N_{\text{c}})\), where \(N_{\text{r}}\) and \(N_{\text{c}}\) are the number of rows and columns, respectively. Then, the singular value decomposition factorizes \(\mathbf{y}\) as
\[\mathbf{y}=\sum_{n=0}^{N_{\text{SV}}-1}(\underline{u}_{n}\underline{v}_{n}^{\intercal})\cdot\underline{\sigma}[n], \tag{15}\]
in which \(N_{\text{SV}}\) is the number of singular values, \(n\) is a scalar index, while \(\underline{u}_{n}\) and \(\underline{v}_{n}\) are the \(n^{\text{th}}\) left- and right-singular vectors, respectively. Furthermore, vector \(\underline{\sigma}\) contains the singular values and each of its entries \(\underline{\sigma}[n]\) is the weight assigned to every basis pair \(\underline{u}_{n}\), \(\underline{v}_{n}\). This means that the product \((\underline{u}_{n}\underline{v}_{n}^{\intercal})\) contributes more to the image content for higher values of \(\underline{\sigma}[n]\). It is customary that the singular values are ranked in descending order; the amplitudes of the singular values \(\underline{\sigma}\) are sparse, so that \(\underline{\sigma}[0]\gg\underline{\sigma}[N_{\text{SV}}-1]\). The reason for this sparsity is that image (patches) intrinsically have high correlation. For example, many images contain repetitive patterns (e.g. a wall with bricks, a fence, the tiles of a rooftop or the stripes of a zebra) or uniform regions (for example, the sky or the skin of a person). This means an image patch may contain only a few linearly independent vectors that describe most of the image content. Consequently, a higher weight is assigned to such image bases.
Given that the amplitudes of the singular values of \(\mathbf{y}\) in SVD are sparse, it is possible to approximate \(\mathbf{y}\) by \(\hat{\mathbf{y}}\) with only a few bases \((\underline{u}_{n}\underline{v}_{n}^{\intercal})\). Note that this procedure reduces the rank of signal \(\mathbf{y}\) and hence it is known as _low-rank approximation_. This process is equivalent to
\[\hat{\mathbf{y}}=\sum_{n=0}^{N_{\mathrm{LR}}-1}(\underline{u}_{n}\underline{v }_{n}^{\intercal})\cdot\sigma[n], \tag{16}\]
where \(N_{\mathrm{SV}}>N_{\mathrm{LR}}\). Note that this effectively cancels the product \((\underline{u}_{n}\underline{v}_{n}^{\intercal})\) where the weight given by \(\sigma[n]\) is low. Alternatively, it is possible to assign a weight of zero to the product \((\underline{u}_{n}\underline{v}_{n}^{\intercal})\) for \(n\geq N_{\mathrm{LR}}\).
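A minimal sketch of this rank truncation, including its denoising effect on a toy rank-1 image, is given below.

```python
import numpy as np

def low_rank(y, n_lr):
    """Rank-n_lr approximation of an image patch, cf. eq. (16)."""
    u, s, vt = np.linalg.svd(y, full_matrices=False)
    return (u[:, :n_lr] * s[:n_lr]) @ vt[:n_lr]

# a rank-1 'stripe' image plus noise is well recovered at rank 1
rng = np.random.default_rng(4)
clean = np.outer(np.ones(64), np.sin(np.linspace(0, 6, 64)))
noisy = clean + 0.3 * rng.standard_normal((64, 64))
print(np.linalg.norm(low_rank(noisy, 1) - clean)
      < np.linalg.norm(noisy - clean))  # True: noise is attenuated
```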
The low-rank representation of a matrix is desirable for diverse applications, among which is image denoising. The motivation for using low-rank approximation for this application results from the fact that, as mentioned earlier, natural images are considered low-rank due to the strong spatial correlation between pixels, whereas noise is high-rank (it is spatially uncorrelated). In consequence, reducing the rank/number of singular values decreases the presence of noise, while still providing a good approximation of the noise-free signal, as exemplified in Fig. 3.
#### IV-B2 **Framelets**
Just as SVD, framelets are also commonly used for image processing. In a nutshell, a framelet transform is a signal representation that factorizes/decomposes an arbitrary signal into multiple bands/channels. Each of these channels contains a segment of the energy of the original signal. In image and signal processing, the framelet bands are the result of convolving the analyzed signal with a group of discrete filters that have finite length/support. In this article, the most important characteristic that the filters of the framelet transform should comply with is that the bands they generate capture _all_ the energy contained in the input to the decomposition. This is important to avoid the loss of information of the decomposed signal. In this text, we refer to framelets that comply with the previous characteristics as _tight_ framelets, and the following paragraphs will describe this property in more detail.
In its decimated version, the framelet decomposition for _tight_ frames is represented by
\[\mathbf{Y}_{\text{frame}}=f_{(2\downarrow)}(\mathbf{F}\mathbf{y}), \tag{17}\]
in which \(\mathbf{Y}_{\text{frame}}\) is the decomposed signal and \(\mathbf{F}\) is the framelet basis (tensor). Note that the signal \(\mathbf{Y}_{\text{frame}}\) has more channels than \(\mathbf{y}\). Furthermore, the original signal \(\mathbf{y}\) is recovered from \(\mathbf{Y}_{\text{frame}}\) by
\[\mathbf{y}=\tilde{\mathbf{F}}^{\intercal}f_{(2\uparrow)}(\mathbf{Y}_{\text{ frame}})\cdot c. \tag{18}\]
Here, \(\tilde{\mathbf{F}}\) is the filter of the inverse framelet transform and \(c\) denotes an arbitrary constant. If \(c=1\) the framelet is _normalized_. Finally, note that the framelet transform can also be _undecimated_. This means that in undecimated representations, the down-sampling and up-sampling layers \(f_{(2\downarrow)}(\cdot)\) and \(f_{(2\uparrow)}(\cdot)\) are not used. An important property of the undecimated representation is that it is less prone to aliasing than its decimated counterpart, but more computationally expensive. Therefore, for efficiency reasons, the decimated framelet decomposition is often preferred over the undecimated representation. In summary, the decomposition and synthesis of the decimated framelet decomposition is represented by
\[\boxed{\mathbf{y}=\tilde{\mathbf{F}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2 \downarrow)}(\mathbf{F}\mathbf{y})\big{)}\cdot c}\enspace, \tag{19}\]
while for the undecimated framelet it holds that
\[\boxed{\mathbf{y}=\tilde{\mathbf{F}}^{\intercal}(\mathbf{F}\mathbf{y})\cdot c }\enspace. \tag{20}\]
A notable normalized framelet is the _discrete wavelet transform_ (DWT), where the variables \(\mathbf{F}\) and \(\tilde{\mathbf{F}}\) are replaced by the tensors \(\mathbf{W}=\big{(}\mathbf{w}_{\mathrm{LL}},\mathbf{w}_{\mathrm{LH}},\mathbf{w}_{\mathrm{HL}},\mathbf{w}_{\mathrm{HH}}\big{)}\) and \(\tilde{\mathbf{W}}=\big{(}\tilde{\mathbf{w}}_{\mathrm{LL}},\tilde{\mathbf{w}}_{\mathrm{LH}},\tilde{\mathbf{w}}_{\mathrm{HL}},\tilde{\mathbf{w}}_{\mathrm{HH}}\big{)}\), respectively. Here, \(\mathbf{w}_{\mathrm{LL}}\) is the filter for the low-frequency band, while \(\mathbf{w}_{\mathrm{LH}}\), \(\mathbf{w}_{\mathrm{HL}}\), \(\mathbf{w}_{\mathrm{HH}}\) are the filters used to extract the detail in the horizontal, vertical and diagonal directions, respectively. Finally, \(\tilde{\mathbf{w}}_{\mathrm{LL}},\tilde{\mathbf{w}}_{\mathrm{LH}},\tilde{\mathbf{w}}_{\mathrm{HL}},\tilde{\mathbf{w}}_{\mathrm{HH}}\) are the filters of the inverse decimated DWT.
In order to understand the DWT more intuitively, Fig. 4 shows the decimated framelet decomposition using the filters of the discrete wavelet transform. Note that the convolution \(\mathbf{W}\mathbf{y}\) results in a four-channel signal, where each channel contains only a fraction of the spectrum of image \(\mathbf{y}\). This allows each channel to be down-sampled with minimal aliasing. Furthermore, to recover the original signal, each individual channel is up-sampled, thereby introducing aliasing, which is then removed by the filters of the inverse transform. Finally, all the channels are added and the original signal is recovered.
Analogous to the low-rank approximation, in framelets, the reduction of noise is achieved by setting noisy components to zero. These components are typically assumed to have low-amplitude compared with the amplitude of the sparse signal, as expressed by
\[\hat{\mathbf{y}}=\tilde{\mathbf{F}}^{\intercal}f_{(2\uparrow)}\big{(}\tau_{( \underline{t})}\big{(}f_{(2\downarrow)}(\mathbf{F}\mathbf{y})\big{)}\big{)} \cdot c, \tag{21}\]
where \(\tau_{\underline{t}}(\cdot)\) is a generic thresholding/shrinkage function, which sets each of the pixels in \(f_{(2\downarrow)}(\mathbf{F}\mathbf{y})\) to zero when values are lower than the threshold level \(\underline{t}\).
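A compact way to experiment with Eq. (21) is through the PyWavelets package; the sketch below is our own illustration (the wavelet, decomposition level and threshold are arbitrary choices) and performs a decimated decomposition, soft-thresholds the detail bands, and reconstructs.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_soft_denoise(y: np.ndarray, t: float, wavelet: str = "haar",
                     level: int = 2) -> np.ndarray:
    """Decimated framelet denoising in the spirit of Eq. (21)."""
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    # coeffs[0] is the low-frequency band; it is left untouched, and the
    # thresholding function tau is applied only to the sparse detail bands.
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, t, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

rng = np.random.default_rng(0)
x = np.outer(np.hanning(64), np.hanning(64))
y = x + 0.05 * rng.standard_normal(x.shape)
print(np.linalg.norm(dwt_soft_denoise(y, t=0.1) - x) < np.linalg.norm(y - x))
```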
Fig. 3: SVD reconstruction of clean and corrupted images with a different number of singular values. Note that the reconstruction of the clean image with 8 or 32 singular values (\(N_{\mathrm{SV}}=8\) or \(N_{\mathrm{SV}}=32\), respectively) yields reconstructions indistinguishable from the original image. This contrasts with their noisy counterparts, where \(N_{\mathrm{SV}}=8\) reconstructs a smoother image in which the noise is attenuated, while \(N_{\mathrm{SV}}=32\) reconstructs the noise texture perfectly.
### _Nonlinear signal estimation in the framelet domain_
As mentioned in Section IV-B2, framelets decompose a given image \(\mathbf{y}\) by convolving it with a tensor \(\mathbf{F}\). Note that many of the filters that compose \(\mathbf{F}\) have a high-pass nature. Images often contain approximately uniform regions in which the variation is low. Therefore, convolving a signal \(\mathbf{y}\) with a high-pass filter \(\mathbf{f}_{\mathbf{h}}\) (where \(\mathbf{f}_{\mathbf{h}}\in\mathbf{F}\)) produces the sparse detail band \(\mathbf{d}=\mathbf{f}_{\mathbf{h}}*\mathbf{y}\), in which uniform regions have low amplitudes, while transitions, i.e. edges, contain most of the energy of the band.
Assume a model in which a single pixel \(d\in\mathbf{d}\) is observed, contaminated with additive noise \(\eta\). The resulting observed pixel \(z\) is then defined by
\[z=d+\eta. \tag{22}\]
In order to recover the noiseless pixel \(d\) from observation \(z\), it is possible to use the point-_maximum a posteriori_ (MAP) estimate [1, 24], defined by the maximization problem
\[\hat{d}=\operatorname*{arg\,max}_{d}\bigl{[}\ln\big{(}P(d|z)\big{)}\bigr{]}. \tag{23}\]
Here, the log-posterior \(\ln\big{(}P(d|z)\big{)}\) is defined by
\[\ln\big{(}P(d|z)\big{)}=\ln\big{(}P(z|d)\big{)}+\ln\big{(}P(d)\big{)}, \tag{24}\]
where the conditional probability density function (PDF) \(P(z|d)\) expresses the noise distribution, which is often assumed Gaussian and is defined by
\[P(z|d)\propto\exp\bigg{(}-\frac{(z-d)^{2}}{2\sigma_{\eta}^{2}}\bigg{)}. \tag{25}\]
Here, \(\sigma_{\eta}^{2}\) is the noise variance. Furthermore, as prior probability, it is assumed that the distribution of \(P(d)\) corresponds to a Laplacian distribution, which has been used in wavelet-based denoising [1]. Therefore, \(P(d)\) is mathematically described by
\[P(d)\propto\exp\bigg{(}-\frac{|d|}{\sigma_{d}}\bigg{)}, \tag{26}\]
where \(\sigma_{d}\) is the dispersion measure of the Laplace distribution. For reference, Fig. 5 portrays an example of a Gaussian and a Laplacian PDF. Note that, for the same standard deviation, the Laplacian distribution assigns a higher probability to zero elements than the Gaussian distribution.
Finally, substituting Eq. (25) and Eq. (26) in Eq. (24) results in
\[\ln\big{(}P(d|z)\big{)}\propto-\frac{(z-d)^{2}}{2\sigma_{\eta}^{2}}-\frac{|d |}{\sigma_{d}}. \tag{27}\]
Fig. 4: 2D spectrum analysis of the decimated discrete framelet decomposition and reconstruction. In the figure, function \(\mathcal{F}\{\cdot\}\) stands for the amplitude Fourier spectrum of the input argument. The yellow squares indicate a region in the low-frequency area of the Fourier spectrum, while the orange, purple and blue squares indicate the high-pass/detail bands. For these images, ideal orthogonal bases are assumed. Note that the forward transform is composed by two steps. First, the signal is convolved with the wavelet basis \((\mathbf{W}\mathbf{y})\). Afterwards, down-sampling is applied to the signal (\(f_{(2i)}(\mathbf{W}\mathbf{y})\)). During the inverse transformation, the signal is up-sampled by inserting zeros between each sample (\(f_{(21)}(f_{(2i)}(\mathbf{W}\mathbf{y}))\)), which causes spatial aliasing (dashed blocks). Finally, the spatial aliasing is removed by the inverse transform filter \(\tilde{\mathbf{W}}\) and all the channels are added \((\mathbf{W}^{\intercal}f_{(21)}(f_{(2i)}(\mathbf{W}\mathbf{y})))\).
Fig. 5: Probability density function for Gaussian (left) and Laplacian (right) distributions.
In the above, maximizing \(\ln\left(P(d|z)\right)\) over \(d\) with the first-derivative criterion, in an (un)constrained way, leads to two common activations in noise-reduction CNNs: the ReLU and the soft-shrinkage function. Furthermore, the solution can also be used to derive the so-called _clipping_ function, which is useful in residual networks.
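For completeness, we sketch the intermediate step that is omitted above. Differentiating Eq. (27) with respect to \(d\) and equating the result to zero gives

\[\frac{z-\hat{d}}{\sigma_{\eta}^{2}}-\frac{\operatorname{sign}(\hat{d})}{\sigma_{d}}=0\quad\Longrightarrow\quad\hat{d}=z-\operatorname{sign}(\hat{d})\cdot\frac{\sigma_{\eta}^{2}}{\sigma_{d}},\]

i.e. the observation is shrunk toward zero by \(t=\sigma_{\eta}^{2}/\sigma_{d}\); enforcing \(\hat{d}=0\) whenever the shrinkage would flip the sign of \(z\) (and, in the constrained case, whenever \(\hat{d}<0\)) yields exactly the functions derived next.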
For reference and further understanding, Fig. 6 portrays the elements composing the noise model of Eq. (22), the signal transfer characteristics of the ReLU, soft-shrinkage and clipping functions, and the effect that these functions have on the signal of the observed noisy detail band \(z\).
#### IV-C1 **Rectified linear unit (ReLU)**
If Eq. (27) is solved for \(d\) while constraining the estimator to be positive, the noiseless estimate \(\hat{d}\) becomes
\[\boxed{\hat{d}=(z-t)_{+}}\enspace, \tag{28}\]
which is also expressed by
\[(z-t)_{+}=\begin{cases}z-t,&\text{if }z\geq t,\\ \hskip 14.226378pt0,&\text{if }t>z.\end{cases} \tag{29}\]
Here, the threshold level is defined by
\[t=\sigma_{\eta}^{2}/\sigma_{d}. \tag{30}\]
Note that this estimator cancels the negative and low-amplitude elements of \(d\) lower than the magnitude of the threshold level \(t\). For example, if the signal content on the feature map is low, then \(\sigma_{d}\to 0\). In such case, \(t\rightarrow+\infty\) and consequently \(\hat{d}\to 0\). This means that the channel is suppressed. Alternatively, if the feature map has strong signal presence i.e. \(\sigma_{d}\rightarrow\infty\), consequently \(t\to 0\) and then \(\hat{d}\rightarrow(z)_{+}\).
A final remark is made on the link to the building blocks of a CNN. It should be noted that the estimator of Eq. (28) is analogous to the activation function of a CNN known as the _rectified linear unit_ (ReLU). However, in a CNN the value of \(t\) would be the bias \(b\) learned from the training data.
#### IV-C2 **Soft-shrinkage/thresholding**
If Eq. (27) is maximized in an unconstrained way, the estimate \(\hat{d}\) is
\[\boxed{\hat{d}=\tau_{(t)}^{\text{Soft}}(z)=(z-t)_{+}-(-z-t)_{+}}\enspace. \tag{31}\]
Here, \(\tau_{(t)}^{\text{Soft}}(\cdot)\) denotes the soft-shrinkage/-thresholding function, which is often also written in the form
\[\tau_{(t)}^{\text{Soft}}(z)=\begin{cases}z-t,&\text{if }z\geq t,\\ \hskip 14.226378pt0,&\text{if }t>z\geq-t,\\ z+t,&\text{if }-t>z.\end{cases} \tag{32}\]
It can be observed that the soft-threshold forces to zero the low-amplitude components whose magnitude is lower than the threshold level \(t\). In this case, \(t\) is also defined by Eq. (30). It should be noted that the soft-shrinkage estimator can also be obtained from a variational perspective [25]. Finally, it can be observed that the soft-shrinkage is the superposition of two ReLU functions, as has been pointed out by Fan _et al._ [18].
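The two-ReLU decomposition of the soft-shrinkage (and the complementary clipping function of the next subsection) is easy to verify numerically; the PyTorch sketch below is our own illustration and assumes nothing beyond Eqs. (31) and (34).

```python
import torch
import torch.nn.functional as F

def soft_shrink(z: torch.Tensor, t: float) -> torch.Tensor:
    # Eq. (31): superposition of two ReLUs with opposite input phase.
    return F.relu(z - t) - F.relu(-z - t)

def soft_clip(z: torch.Tensor, t: float) -> torch.Tensor:
    # Eq. (34): the residual counterpart; it keeps what shrinkage removes.
    return z - soft_shrink(z, t)

z = torch.linspace(-3.0, 3.0, steps=13)
assert torch.allclose(soft_shrink(z, 1.0), torch.nn.Softshrink(1.0)(z))
assert torch.allclose(soft_shrink(z, 1.0) + soft_clip(z, 1.0), z)
```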
#### IV-C3 **Soft clipping**
In Section IV-C1 and Section IV-C2, the estimate \(\hat{d}\) is obtained directly from the noisy observation \(z\). Alternatively, it is possible to estimate the noise \(\eta\) and subtract it from \(z\) akin to the residual CNNs represented by Eq. (9). This can be achieved by solving the model
\[\boxed{\hat{\eta}=z-\hat{d}=z-\tau_{(t)}^{\text{Soft}}(z)}\enspace, \tag{33}\]
which is equivalent to
\[\hat{\eta}=\mathcal{C}_{(t)}^{\text{Soft}}(z)=z-((z-t)_{+}-(-z-t)_{+}), \tag{34}\]
where \(\mathcal{C}_{(t)}^{\text{Soft}}(\cdot)\) is the _soft clipping_ function. Note that this function also can be expressed by
\[\mathcal{C}_{(t)}^{\text{Soft}}(z)=\begin{cases}\enspace t,&\text{if }z\geq t,\\ \enspace z,&\text{if }t\geq z>-t,\\ -t,&\text{if }-t\geq z.\end{cases} \tag{35}\]
#### IV-C4 **Other thresholding layers**
One of the main drawbacks of the soft-threshold activation is that it is a biased estimator. This limitation has been addressed by the hard and semi-hard thresholds, which are (asymptotically) unbiased estimators for large input values. In this section, we focus solely on the semi-hard thresholds and avoid the hard variant, because it is discontinuous and, therefore, not suited for models that rely on gradient-based optimization, such as CNNs.
Among the semi-hard thresholds, two notable examples are the _garrote shrink_ and the shrinkage functions generated by derivatives of Gaussians (DoG) [26, 19]. The garrote shrink function \(\tau_{(\cdot)}^{\text{Gar}}(\cdot)\) is defined by
\[\tau_{(t)}^{\text{Gar}}(z)=\frac{(z^{2}-t^{2})_{+}}{z}. \tag{36}\]
Fig. 6: Signals involved in the additive noise model, input/output transfer characteristics of activation layers and estimates produced by the activation layers when applied to the noise-contaminated signal. The first row shows the signals involved in the additive noise model. The second row depicts the output amplitude of activation functions with respect to the input amplitude. Finally, the last row depicts the application of the activation functions to the noisy observation \(z\).
Furthermore, an example of a shrinkage function based on the derivative of Gaussians is given by
\[\tau_{(t)}^{\text{DoG}}(z)=z-\mathcal{C}_{(t)}^{\text{DoG}}(z), \tag{37}\]
where the semi-hard clipping function with the derivative of Gaussians \(\mathcal{C}_{(\cdot)}^{\text{DoG}}(\cdot)\) is given by
\[\mathcal{C}_{(t)}^{\text{DoG}}(z)=z\cdot\exp\bigg{(}-\frac{z^{p}}{t^{p}}\bigg{)}, \tag{38}\]
in which \(p\) is an even number.
The garrote and semi-hard DoG shrinkage functions are shown in Fig. 7, as well as their clipping counterparts. Note that the shrinkage functions approximate unity for \(|z|\gg t\). Therefore, they are asymptotically unbiased for large signal values.
The final thresholding function addressed in this section is the linear expansion of thresholds proposed by Blu and Luisier [26]. This technique combines multiple thresholding functions to improve the performance. This approach is known as _linear expansion of thresholds_ (LET) and it is defined by
\[\tau_{(\underline{t})}^{\text{LET}}(z)=\sum_{n=0}^{N_{T}-1}a_{n}\cdot\tau_{(t_{n})}(z), \tag{39}\]
where \(a_{n}\) is the weighting factor assigned to each threshold, and all weighting factors should add up to unity.
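As a minimal sketch of these semi-hard estimators and of the LET combination, consider the following PyTorch functions (our own illustration; the choice \(p=4\) and the example thresholds/weights are arbitrary, and the `where` guard in the garrote merely avoids a 0/0 at \(z=0\)).

```python
import torch

def garrote_shrink(z: torch.Tensor, t: float) -> torch.Tensor:
    # Eq. (36); for |z| <= t the numerator is zero, so guard the division.
    safe_z = torch.where(z == 0, torch.ones_like(z), z)
    return torch.clamp(z * z - t * t, min=0.0) / safe_z

def dog_shrink(z: torch.Tensor, t: float, p: int = 4) -> torch.Tensor:
    # Eqs. (37)-(38): shrinkage via a derivative-of-Gaussians clipping term.
    return z - z * torch.exp(-(z / t) ** p)

def let_shrink(z, thresholds, weights, base=garrote_shrink):
    # Eq. (39): weighted combination of thresholds; weights sum to unity.
    return sum(a * base(z, t) for a, t in zip(weights, thresholds))

z = torch.linspace(-5.0, 5.0, steps=21)
out = let_shrink(z, thresholds=[0.5, 1.0], weights=[0.3, 0.7])
```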
## V **Bridging the gap between signal processing and CNNs: Deep convolutional framelets and shrinkage-based CNNs**
This section addresses the theoretical operation of noise-reduction convolutional neural networks based on ReLUs and shrinkage/thresholding functions. The first part of this section describes the theory of deep convolutional framelets [12], which is the most extensive study on the operation of encoding-decoding ReLU-based CNNs up to this moment. Afterwards, the section concentrates on the operation of networks which use shrinkage functions instead of ReLUs [17, 18, 19], with the aim of mimicking well-established denoising algorithms [1]. Finally, the last part of this section addresses the connections between both methods and additional links between convolutional neural networks and signal processing.
### **Theory of deep convolutional framelets**
The theory of deep convolutional framelets [12] describes the operation of encoding-decoding ReLU-based CNNs. Its most relevant contributions are as follows. (1) It establishes the equivalence between framelets and the convolutional layers of CNNs. (2) It provides the conditions to preserve the signal integrity within a ReLU CNN. (3) It explains how ReLU and convolution layers reduce noise within an encoding-decoding CNN.
The similarity between framelets and the encoding and decoding convolutional filters can be observed when comparing Eqs. (12), (14) with Eqs. (17), (18), where it becomes visible that the convolution structure of encoding-decoding CNNs is analogous to the forward and inverse framelet decomposition.
Regarding the signal reconstruction characteristics, the theory of deep convolutional framelets [12] states the following. First, in order to be able to recover an arbitrary signal \(\mathbf{y}\in\mathbb{R}^{N}\), the number of output channels of a convolution layer with ReLU activation should _at least_ double the number of input channels. Second, the encoding convolution kernel \(\mathbf{K}\) should be composed of pairs of filters with opposite phase. These two requirements ensure that both negative and positive values propagate through the network. Under these conditions, the encoding and decoding convolution filters \(\mathbf{K}\) and \(\tilde{\mathbf{K}}\) should comply with
\[\boxed{\mathbf{y}=\tilde{\mathbf{K}}^{\intercal}(\mathbf{K}\mathbf{y})_{+} \cdot c}\enspace. \tag{40}\]
It can be noticed that Eq. (40) is an extension of Eq. (20), which describes the reconstruction characteristics of tight framelets. From this point on, we refer to convolutional kernels compliant with Eq. (40) as _phase-complementary tight framelets_. As a final remark, it should be noted that a common practice in CNN designs is to use ReLU non-linearities in the decoder as well; in such a case the phase-complementary tight-framelet condition can still be met as long as the pixels \(y\in\mathbf{y}\) comply with \(y\geq 0\), which is equivalent to
\[\boxed{\mathbf{y}=(\mathbf{y})_{+}=\left(\tilde{\mathbf{K}}^{\intercal}( \mathbf{K}\mathbf{y})_{+}\cdot c\right)_{+}}\enspace. \tag{41}\]
It can be observed that the relevance of the properties defined in Eqs. (40) and (41) is that they ensure that a CNN can propagate any arbitrary signal, which is important to avoid any distortions (such as image blur) in the processed images.
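The condition of Eq. (40) can be checked numerically. The NumPy sketch below (our own illustration, with a randomly generated orthonormal basis standing in for learned filters) builds a phase-complementary pair by stacking a basis \(\mathbf{W}\) with its phase-inverted copy \(-\mathbf{W}\), so the channel count doubles and both signal polarities survive the ReLU.

```python
import numpy as np

rng = np.random.default_rng(1)
# Any orthonormal basis works; here one is drawn from a QR factorization.
w, _ = np.linalg.qr(rng.standard_normal((8, 8)))

# Phase-complementary pair: stack W with its phase-inverted copy -W.
k = np.vstack([w, -w])        # encoder K
k_tilde = np.vstack([w, -w])  # decoder K~

y = rng.standard_normal(8)
relu = lambda a: np.maximum(a, 0.0)
y_rec = k_tilde.T @ relu(k @ y)   # Eq. (40) with c = 1
print(np.allclose(y_rec, y))      # True: perfect reconstruction
```

The recovery works because \((\mathbf{W}\mathbf{y})_{+}-(-\mathbf{W}\mathbf{y})_{+}=\mathbf{W}\mathbf{y}\), so the rectification becomes transparent to the phase-complementary pair.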
An additional element of the theory of deep convolutional framelets regarding the reconstruction of the signal is to show that conventional pooling layers (e.g. average pooling) discard high-frequency information of the signal, which effectively blurs the processed signals. Furthermore, Ye _et al._ [12] have demonstrated that this can be fixed by replacing the conventional up/down-sampling layers by reversible operations, such
Fig. 7: Transfer characteristics of the semi-hard thresholds based on the derivative of Gaussians and of the garrote shrink, as well as their clipping counterparts. Note that, in contrast with the soft-shrinkage and clipping functions shown in Fig. 6, the semi-hard thresholds tend to unity for large values, while the semi-hard clipping functions tend to zero for large signal intensities.
as the discrete wavelet transform. To exemplify this property, we refer to Fig. 4. If only an average pooling layer followed by an up-sampling stage would be applied, the treatment of the signal would be equivalent to the low-frequency branch of the DWT. Consequently, only the low-frequency spectrum of the signal would be recovered and images processed with that structure would become blurred. In contrast, if the full forward and inverse wavelet transform of Fig. 4 is used for up- and down-sampling, it is possible to reconstruct any signal, irrespective of its frequency content.
The ultimate key contribution of the theory of deep convolutional framelets is the explanation of the operation of ReLU-based noise-reduction CNNs. For the non-residual configuration, ReLU CNNs perform the following operations. (1) The convolution filters decompose the incoming signal into a sparse multi-channel representation. (2) The feature maps which are uncorrelated with the signal contain mainly noise. In this case, the bias and the ReLU activation cancel the noisy feature maps in a process analogous to the MAP estimate shown in Section IV-C1. (3) The decoder reconstructs the
filtered image. Note that this process is analogous to the low-rank decomposition described in Section IV-B1. In the case of _residual_ networks, the CNN learns to estimate the noise, which means that in that configuration the ReLU non-linearities suppress the channels with high activation.
A visual example of the low-rank approximation in ReLU CNNs is shown in Fig. 8, which illustrates the operation of an idealized single-layer encoding-decoding ReLU CNN operating both in residual and in non-residual fashion. It can be noted that the ReLU activation suppresses specific channels in the sparse decomposition provided by the encoder, thereby preserving the low-rank structures in the non-residual network. Alternatively, in the residual example, the ReLUs eliminate the feature maps with high activation, which results in a noise estimate that is subtracted from the input to estimate the noiseless signal.
### _Shrinkage and clipping-based CNNs_
Just as ReLU networks, the encoder of shrinkage networks [17, 18, 19] separates the input signal into a multi-channel representation. As a second processing stage, the shrinkage networks estimate the noiseless encoded signal by cancelling the low-amplitude pixels in the feature maps in a process akin to the MAP estimate of Section IV-C2. As a final step, the decoder reconstructs the estimated noiseless image. Note that the use of shrinkage functions reduces the number of channels required by ReLU counterparts to achieve perfect signal reconstruction, because the shrinkage activation preserves positive and negative values, while the ReLU only preserves the positive part of the signal.
Fig. 8: Operation of a simplified denoising (non-) residual ReLU CNN according to the theory of deep convolutional framelets (TDCF). In the figure, the noisy observation \(\mathbf{y}\) is composed by two vertical bars plus uncorrelated Gaussian noise. Furthermore, for this example, the encoding and decoding convolution filters (\(\mathbf{K}\) and \(\bar{\mathbf{K}}\), respectively) are the Haar basis of the 2D discrete wavelet transform and its phase-inverted counterparts. Given the content of the image, the image in the decomposed domain \(\mathbf{K}\mathbf{y}\) produces only a weak activation for the vertical and diagonal filters (\(\mathbf{w_{LH}}\) and \(\mathbf{w_{HH}}\), respectively) and those feature maps contain mainly noise. In the case of the non-residual network, the ReLUs and biases suppress the channels with low activation (see column \((\mathbf{K}\mathbf{y}+\underline{b})_{+}\)), which is akin to the low-rank approximation. In contrast, in the residual example, the channels with image content are suppressed, while preserving the uncorrelated noise. Finally, the decoding section reconstructs the noise-free estimate \(\bar{\mathbf{x}}\) for the non-residual network or the noise estimate \(\hat{\boldsymbol{\eta}}\) for the residual example, where it is subtracted from \(\mathbf{y}\) to compute the noiseless estimate \(\hat{\mathbf{x}}\).
As shown in Section III-A, in residual learning, a given encoding-decoding network estimates the noise signal \(\mathbf{\eta}\), so that it can be subtracted from the noisy observation \(\mathbf{y}\) to generate the noiseless estimate \(\hat{\mathbf{x}}\). As shown in Section IV-C3, in the framelet domain this is achieved by preserving the low-amplitude values of the feature maps by clipping the signal. Therefore in residual networks, the shrinkage functions can be explicitly replaced by clipping activations.
Visual examples of the operation of single-layer shrinkage and clipping networks are presented in Fig. 9, where it can be noted that the operation of shrinkage and clipping networks is analogous to their ReLU counterparts, with the main difference that shrinkage and clipping networks do not require phase-complements in the encoding and decoding layers as ReLU-based CNNs do.
### _Shrinkage and clipping in ReLU networks_
As addressed in Section IV-C, the soft-threshold function is the superposition of two ReLU activations. As a consequence, shrinkage behavior can arise in ReLU CNNs in addition to the low-rankness enforcement mentioned in Section V-A. It should be noted that this can only happen if the number of channels of the encoder and decoder complies with the redundancy constraints of the theory of deep convolutional framelets and if the decoder is linear. To prove this, Eq. (31) is reparametrized as
\[\hat{\mathbf{d}}=\tilde{\mathbf{K}}^{\intercal}(\mathbf{K}\mathbf{y}+\underline {b})_{+}, \tag{50}\]
where the convolution filters \(\mathbf{K}\) and \(\tilde{\mathbf{K}}^{\intercal}\) are defined by \(\mathbf{K}=\begin{pmatrix}\mathbf{I}&-\mathbf{I}\end{pmatrix}^{\intercal}\) and \(\tilde{\mathbf{K}}^{\intercal}=\begin{pmatrix}\mathbf{I}&-\mathbf{I}\end{pmatrix}\), respectively, and \(\underline{b}=\begin{pmatrix}-t&-t\end{pmatrix}^{\intercal}\) represents the threshold value.
In addition to the soft-shrinkage, note that the clipping function described by Eq. (34) can also be expressed by Eq. (50) if \(\mathbf{K}=\begin{pmatrix}\mathbf{I}&-\mathbf{I}&\mathbf{I}&-\mathbf{I}\end{pmatrix}^{\intercal}\), \(\tilde{\mathbf{K}}^{\intercal}=\begin{pmatrix}\mathbf{I}&-\mathbf{I}&-\mathbf{I}&\mathbf{I}\end{pmatrix}\) and \(\underline{b}=\begin{pmatrix}0&0&-t&-t\end{pmatrix}^{\intercal}\). It can be noted that representing the clipping function in convolutional form requires _four_ times more channels than the original input signal.
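Eq. (50) can be verified with two \(1\times 1\) convolutions; the PyTorch sketch below is our own illustration of the soft-shrinkage case and uses only the filter/bias values stated above.

```python
import torch

t = 0.5
y = torch.linspace(-2.0, 2.0, steps=9).reshape(1, 1, 3, 3)  # 1-channel "image"

# Encoder K = (I, -I)^T with bias b = (-t, -t)^T; linear decoder K~^T = (I, -I).
enc = torch.nn.Conv2d(1, 2, kernel_size=1, bias=True)
dec = torch.nn.Conv2d(2, 1, kernel_size=1, bias=False)
with torch.no_grad():
    enc.weight.copy_(torch.tensor([[[[1.0]]], [[[-1.0]]]]))
    enc.bias.copy_(torch.tensor([-t, -t]))
    dec.weight.copy_(torch.tensor([[[[1.0]], [[-1.0]]]]))

d_hat = dec(torch.relu(enc(y)))                          # Eq. (50)
print(torch.allclose(d_hat, torch.nn.Softshrink(t)(y)))  # True
```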
It should be noted that the ability of ReLUs to approximate other signals has also been noticed Daubechies _et al._[31], who have proven that deep ReLU CNNs are universal function approximators. In addition, Ye and Sung [13] have demonstrated that the ReLU function is the main source of the high-approximation power of CNNs.
Fig. 9: Operation of denoising in shrinkage and clipping networks. In the non-residual configuration, the noisy signal \(\mathbf{y}\) is decomposed by a set of convolution filters, which for this example are the 2D Haar basis functions of the discrete wavelet transform (\(\mathbf{K}\mathbf{y}\)). As a second step, the semi-hard shrinkage produces a MAP estimate of the noiseless detail bands/feature maps (\(\tau_{(\underline{t})}^{\text{DoG}}(\mathbf{K}\mathbf{y})\)). As a third and final step, the decoder maps the estimated noiseless encoded signal to the original image domain. In the residual network, the behavior is similar, but the activation layer is a clipping function which performs a MAP estimate of the noise in the feature maps, which is reconstructed by the decoder to generate the noise estimate \(\hat{\mathbf{\eta}}\). After reconstruction, the noise estimate is subtracted from the noisy observation \(\mathbf{y}\) to generate the noise-free estimate \(\hat{\mathbf{x}}\).
### _Additional links between encoding-decoding CNNs and existing signal processing techniques_
Up to this moment, it has been assumed that the operation of the encoding and decoding convolution filters is limited to mapping the input image to a multi-channel representation and reconstructing it (i.e. \(\mathbf{K}\) and \(\tilde{\mathbf{K}}^{\intercal}\) comply with \(\tilde{\mathbf{K}}^{\intercal}(\mathbf{K})_{+}=\mathbf{I}\cdot c\)). Still, it is possible that, besides decomposition and synthesis tasks, the encoding-decoding structure also filters/colors the signal in a way that improves the image estimates. It should be noted that this implies that the perfect-reconstruction encoding-decoding structure is no longer preserved. For example, consider the following linear encoding-decoding structure
\[\hat{\mathbf{x}}=\bar{\mathbf{K}}^{\intercal}(\mathbf{K}\mathbf{y}), \tag{51}\]
which can be reduced to
\[\hat{\mathbf{x}}=\mathbf{k}*\mathbf{y}. \tag{52}\]
Here, \(\mathbf{k}=\tilde{\mathbf{K}}^{\intercal}\mathbf{K}\) is optimized to reduce the distance between \(\mathbf{y}\) and the ground-truth \(\mathbf{x}\). Consequently, the equivalent filter \(\mathbf{k}\) can be considered a _Wiener filter_. It should be noted that this text is not the first to address the potential Wiener-like behavior of a CNN. For example, Mohan _et al._ [14] suggested that by eliminating the bias of the convolution layers, the CNN could behave more akin to the Wiener filter and generalize better to unseen noise levels. It should be noted that by doing so the CNN can also exhibit the switching behavior described by the theory of deep convolutional framelets, which can be described by the equation
\[(z)_{+}=\begin{cases}z,&\text{if }z\geq 0,\\ 0,&\text{if }z<0,\end{cases} \tag{53}\]
where \(z\) is a pixel which belongs to the signal \(\mathbf{z}=\mathbf{k}*\mathbf{x}\). It can be observed that in contrast with the low-rank behavior described in Section V-A, in this case the switching behavior is only dependent on the correlation between signal \(\mathbf{x}\) and the filter \(\mathbf{k}\). Consequently, if the value of \(z\) is positive, its value is preserved. On the contrary, if the correlation between \(\mathbf{x}\) and \(\mathbf{k}\) is negative, then the value of \(z\) is cancelled. Consequently, the noise reduction becomes independent/invariant of the noise level. It can be observed that this effect can be considered a non-linear extension of the so-called signal annihilation filters [32].
It should be noticed that besides the low-rank approximation interpretation of ReLU-based CNNs, additional links to other techniques can be derived. For example, the decomposition and synthesis provided by the encoding-decoding structure is also akin to non-negative matrix factorization (NMF) [33], in which a signal is factorized as a weighted sum of positive bases. In this conception, the feature maps are the bases, which are constrained to be positive by the ReLU function. Furthermore, an additional interpretation of encoding-decoding CNNs can be obtained by analyzing them from a low-dimensional manifold representation perspective [8]. Here, the convolution layers of CNNs are interpreted as two operations. On one hand, they can provide a Hankel representation. On the other hand, they provide a bottleneck which reduces the dimensionality of the manifold of image patches. It should be noticed that the Hankel-like structure attributed to the convolution layers of CNNs has also been noticed by the theory of deep convolutional framelets [12]. Two final connections between signal processing and CNNs are the variational formulation combined with kernel-based methods [15] and the convolutional sparse coding interpretation of CNNs by Papyan _et al._ [16].
## VI **Mathematical analysis of relevant designs**
In order to demonstrate the application of the principles summarized in Sections IV and V, this section analyzes relevant designs of ReLU and shrinkage CNNs. The analyses focus on three main aspects: (1) the overall description of the network architecture, (2) the signal reconstruction characteristics provided by the convolutional layers of the encoder and decoder sub-networks, and (3) the number of operations \(\mathcal{O}(\cdot)\) executed by the trainable parts of the network, since this gives insight into the computational requirements to execute each network and its overall complexity.
The signal reconstruction analysis provides a theoretical indication that a given CNN design can propagate _any_ arbitrary signal when considering the use of ideal filters (i.e. they provide perfect reconstruction and are maximally sparse). In other words, for a fixed network architecture, there exists a selection of parameters (weights and biases) that makes the neural network equal to the identity function. This result is important, because a design that cannot propagate arbitrary signals under ideal conditions will potentially distort the signals that propagate through it _by design_. Consequently, this cannot be fixed by training with large datasets and/or with the application of any special loss term. In order to better understand the signal reconstruction analysis, we provide a brief example with a non-residual CNN \(G(\cdot)\), through which we propagate a noiseless signal \(\mathbf{x}\) contaminated with noise \(\boldsymbol{\eta}\), so that
\[\mathbf{x}\approx G(\mathbf{x}+\boldsymbol{\eta}). \tag{54}\]
Here, an ideal CNN allows the propagation of _any_ \(\mathbf{x}\), while cancelling the noise component \(\boldsymbol{\eta}\), irrespective of the content of \(\mathbf{x}\). If we switch our focus to an ideal _residual CNN_ \(R(\cdot)\), it is possible to observe that
\[\hat{\mathbf{x}}\approx R(\mathbf{y})=\mathbf{y}-G(\mathbf{y}). \tag{55}\]
Here, \(G(\cdot)\) is the encoding-decoding section of the residual network \(R(\cdot)\). Consequently, it is desirable that the network \(G(\cdot)\) is able to propagate the noise \(\boldsymbol{\eta}\), while suppressing the noiseless signal \(\mathbf{x}\), which is equivalent to
\[\boldsymbol{\eta}\approx G(\mathbf{x}+\boldsymbol{\eta}). \tag{56}\]
It should be noted that in both residual and non-residual cases, there are two behaviors. On one hand, there is a signal which the network decomposes and reconstructs (almost) perfectly, and on the other hand a signal is suppressed. The signal reconstruction analysis focuses on the signals that the network can propagate or reconstruct, rather than the signal cancellation behavior. In consequence, we focus on the linear part of \(G(\cdot)\) (i.e. its convolution structure), of which, according to
Fig. 10: Simplified structure of encoding-decoding ReLU CNNs. The displayed networks are the U-Net/filtered back-projection network, the encoder-decoder residual CNN (RED) and, finally, the learned wavelet-frame shrinkage network. Note that for all the designs, the encoding-decoding structures are indicated by dashed blocks. It should be borne in mind that the drawings are simplified: they do not contain normalization layers, they are shallow, and commonly appearing dual convolutions are drawn as one layer.
Section V-A, we assume that it handles the decomposition and reconstruction of the signal within the CNN. It should be noted that the idealized model assumed here is only considered for analysis purposes, since practical implementations do not guarantee that this exact behavior is factually obtained. The reader is referred to Section V-D and the text boxes _Fitting low-rank approximation in ReLU CNNs_ and _Network depth_.
In order to test the perfect reconstruction in non-residual CNNs, we propose the following procedure. (1) We assume an _idealized_ model \(G(\cdot)\), where its convolution filters \(\mathbf{K}_{n}\) and \(\tilde{\mathbf{K}}_{n}\) comply with the phase-complementary tight framelet condition and where the biases and non-linearities suppress low-amplitude (and negative for ReLU activations) samples from the feature maps. (2) The biases/thresholds of ReLU/shrinkage CNNs are set to zero (or to infinity for clipping activations). It can be observed that this condition prevents the low-rank (or high-rank for residual models) approximation behavior of the idealized CNN. Under this circumstance, it should be possible to prove that the analyzed CNN can perfectly reconstruct _any_ signal. (3) The last step involves simplifying the mathematical description of the resulting model of the previous point. The mathematical simplification of the model should lead to the identity function if the model complies with the perfect reconstruction property.
To conclude the explanation of the perfect reconstruction analysis, we provide two relevant considerations. First, it can be claimed that a residual network, such as the model \(R(\mathbf{y})=\mathbf{y}-G(\mathbf{y})\) discussed in Eq. (55), is able to reconstruct any signal when \(G(\mathbf{y})=0\) for any \(\mathbf{y}=\mathbf{x}+\mathbf{\eta}\). Still, this does not convey information about the behavior of the encoding-decoding network \(G(\cdot)\), which should be able to perform perfect decomposition and reconstruction of the noise signal \(\mathbf{\eta}\), as discussed in Eq. (56). To avoid this trivial solution, instead of analyzing the network \(R(\cdot)\), the analysis described for non-residual models is applied to the encoding-decoding structure \(G(\cdot)\), which means that the residual connection is excluded from the analysis.
The second concluding remark is that in order to distinguish the equations of the perfect signal reconstruction analysis from other models, we specify the analyzed designs of the perfect reconstruction models, in which the low-rank approximation behavior is avoided by setting the bias values to zero, with a special operator \(\mathcal{P}\{\cdot\}\).
For the analyses regarding the total number of operations of the trainable parameters, it is assumed that the tensors \(\mathbf{K}_{0}\), \(\tilde{\mathbf{K}}_{0}^{\intercal}\), \((\tilde{\mathbf{K}}_{0}^{\mathrm{u}})^{\intercal}\), \((\tilde{\mathbf{K}}_{0}^{\mathrm{d}})^{\intercal}\), \(\mathbf{K}_{1}\) and \(\tilde{\mathbf{K}}_{1}^{\intercal}\) shown in Fig. 10 have dimensions \((C_{0}\times 1\times N_{\mathrm{f}}\times N_{\mathrm{f}})\), \((1\times C_{0}\times N_{\mathrm{f}}\times N_{\mathrm{f}})\), \((1\times C_{0}\times N_{\mathrm{f}}\times N_{\mathrm{f}})\), \((1\times C_{0}\times N_{\mathrm{f}}\times N_{\mathrm{f}})\), \((C_{1}\times C_{0}\times N_{\mathrm{f}}\times N_{\mathrm{f}})\) and \((C_{0}\times C_{1}\times N_{\mathrm{f}}\times N_{\mathrm{f}})\), respectively. Here, \(C_{0}\) and \(C_{1}\) represent the number of channels after the first and second convolution layer, and all the convolution filters are assumed to be square with \(N_{\mathrm{f}}\times N_{\mathrm{f}}\) pixels. Furthermore, the input signal \(\mathbf{x}\) has dimensions \((1\times 1\times N_{\mathrm{r}}\times N_{\mathrm{c}})\), where \(N_{\mathrm{r}}\) and \(N_{\mathrm{c}}\) denote the numbers of rows and columns, respectively.
The analyses shown for the different networks in this article have the following limitations. (1) The analyzed networks have only enough decomposition levels and convolution layers to understand their basic operation. The motivation for this simplification is to keep the analyses short and clear; moreover, the same principles can be extended to deeper networks, since the same single-decomposition CNNs would be recursively embedded within the given architectures. (2) The normalization layers are not considered, because they are linear operators which provide mean shifts and amplitude scaling. Consequently, for analysis purposes it can be assumed that they are embedded in the convolution weights. (3) For every encoder convolution kernel it is assumed that there is at least one decoder filter. (4) No co-adaptation between the filters of the encoder and decoder layers is considered.
The remainder of this section shows analyses of a selection of a few representative designs. Specifically, the chosen designs are the _U-Net_ [34] and its residual counterpart, the _filtered back-projection network_ [21]1. Additional designs analyzed here are the _residual encoder-decoder CNN_ [5]2, as well as the _learned wavelet-frame shrinkage network_ (LWFSN)3. For reference, all the designs are portrayed in Fig. 10.
Footnote 1: Matlab implementation by their authors available at [https://github.com/panakinfo/FBPConvNet](https://github.com/panakinfo/FBPConvNet)
Footnote 2: Pytorch implementation by their authors available at [https://github.com/Sinyu/REDC-CNN](https://github.com/Sinyu/REDC-CNN)
Footnote 3: Pytorch implementation available at [https://github.com/LuisAlbertZM/demo_LWFSN_TMI](https://github.com/LuisAlbertZM/demo_LWFSN_TMI) and interactive demo available at IEEE’s code ocean [https://codeocean.com/capsule/9027829/tree/v1](https://codeocean.com/capsule/9027829/tree/v1). The demo also includes as reference pytorch implementations of FBPComNet and the tight-frame U-Net.
### **U-Net/filtered back-projection network**
#### VI-A1 **U-Net - overview of the design**
The first networks analyzed are the U-Net and the filtered back-projection network, both of which share the encoding-decoding structure \(U(\cdot)\). However, they differ in the fact that the U-Net is non-residual, while the filtered back-projection network operates in residual configuration. Therefore, the estimate of the noiseless signal \(\hat{\mathbf{x}}\) from the noisy observation \(\mathbf{y}\) in the conventional U-Net is achieved by
\[\hat{\mathbf{x}}=U(\mathbf{y}), \tag{57}\]
whereas in the filtered back-projection network, \(U(\cdot)\) is used in residual configuration, which is equivalent to
\[\hat{\mathbf{x}}=\mathbf{y}-U(\mathbf{y}). \tag{58}\]
If we now switch focus to the encoding-decoding structure of the U-Net \(U(\mathbf{y})\), it can be shown that it is described by
\[U(\mathbf{y})=U^{\mathsf{u}}(\mathbf{y})+U^{\mathsf{d}}(\mathbf{y}), \tag{59}\]
where \(U^{\mathsf{u}}(\mathbf{y})\) corresponds to the _undecimated path_ and is defined by
\[U^{\mathsf{u}}(\mathbf{y})=\big{(}\tilde{\mathbf{K}}_{0}^{\mathsf{u}}\big{)}^{ \mathsf{T}}\big{(}\mathbf{K}_{0}\mathbf{y}+\underline{b}_{0}\big{)}_{+}, \tag{60}\]
while the decimated path is
\[U^{\mathsf{d}}(\mathbf{y})=\big{(}\tilde{\mathbf{K}}_{0}^{\mathsf{d}}\big{)}^{ \mathsf{T}}\tilde{\mathbf{W}}_{\mathsf{L}}^{\mathsf{T}}f_{(2\uparrow)}\Big{(} \big{(}\tilde{\mathbf{K}}_{1}^{\mathsf{T}}(\mathbf{K}_{1}\mathbf{Z}+ \underline{b}_{1})_{+}+\tilde{b}_{1}\big{)}_{+}\Big{)}. \tag{61}\]
Here, signal \(\mathbf{Z}\) is defined by
\[\mathbf{Z}=f_{(2\downarrow)}\big{(}\mathbf{W}_{\mathsf{L}}(\mathbf{K}_{0} \mathbf{y}+\underline{b}_{0})_{+}\big{)}. \tag{62}\]
Note that the decimated path contains two nested encoding-decoding architectures, as observed by Jin _et al._ [21], who have acknowledged that the nested filtering structure is akin to the (learned) iterative soft-thresholding algorithm [27, 28].
#### VI-A2 **U-Net - signal reconstruction analysis**
To prove whether the U-Net can perfectly reconstruct _any_ signal, we assume that the biases are equal to zero; under this condition the network \(\mathcal{P}\{U\}(\mathbf{y})\) is defined by
\[\mathcal{P}\{U\}(\mathbf{y})=\mathcal{P}\{U^{\mathrm{u}}\}(\mathbf{y})+ \mathcal{P}\{U^{\mathrm{d}}\}(\mathbf{y}), \tag{63}\]
where sub-network \(\mathcal{P}\{U^{\mathrm{u}}\}(\cdot)\) is defined by
\[\mathcal{P}\{U^{\mathrm{u}}\}(\mathbf{y})=\big{(}\tilde{\mathbf{K}}_{0}^{ \mathrm{u}}\big{)}^{\mathsf{T}}(\mathbf{K}_{0}\mathbf{y})_{+}. \tag{64}\]
Assuming that \(\big{(}\mathbf{K}_{0},\tilde{\mathbf{K}}_{0}^{\mathrm{u}}\big{)}\) is a complementary-phase tight-framelet pair then \(\mathcal{P}\{U^{\mathrm{u}}\}(\mathbf{y})\) is simplified to
\[\mathcal{P}\{U^{\mathrm{u}}\}(\mathbf{y})=\mathbf{y}\cdot c_{0}. \tag{65}\]
Furthermore, the low-frequency path is
\[\mathcal{P}\{U^{\mathrm{d}}\}(\mathbf{y})=\big{(}\tilde{\mathbf{K}}_{0}^{\mathrm{d}}\big{)}^{\mathsf{T}}\tilde{\mathbf{W}}_{\mathrm{L}}^{\mathsf{T}}f_{(2\uparrow)}\Big{(}\big{(}\tilde{\mathbf{K}}_{1}^{\mathsf{T}}(\mathbf{K}_{1}\mathcal{P}\{\mathbf{Z}\})_{+}\big{)}_{+}\Big{)}, \tag{66}\]
where \(\mathcal{P}\{\mathbf{Z}\}\) is defined by
\[\mathcal{P}\{\mathbf{Z}\}=f_{(2\downarrow)}\big{(}\mathbf{W}_{\mathrm{L}}( \mathbf{K}_{0}\mathbf{y})_{+}\big{)}. \tag{67}\]
If \(\mathbf{K}_{1}\) is a phase-complementary tight frame, we know that \(\tilde{\mathbf{K}}_{1}^{\mathsf{T}}(\mathbf{K}_{1}\mathbf{Z})_{+}=\mathbf{Z} \cdot c_{1}\). Consequently, Eq. (66) becomes
\[\mathcal{P}\{U^{\mathrm{d}}\}(\mathbf{y})=\big{(}\tilde{\mathbf{K}}_{0}^{\mathrm{d}}\big{)}^{\mathsf{T}}\tilde{\mathbf{W}}_{\mathrm{L}}^{\mathsf{T}}f_{(2\uparrow)}\Big{(}f_{(2\downarrow)}\big{(}\mathbf{W}_{\mathrm{L}}(\mathbf{K}_{0}\mathbf{y})_{+}\big{)}\Big{)}\cdot c_{1}. \tag{68}\]
Here, it can be noticed that if \(\mathbf{K}_{0}\) is a phase-complementary tight framelet, then \(\mathcal{P}\{U^{\mathrm{d}}\}(\mathbf{y})\) approximates a low-pass version of \(\mathbf{y}\), or equivalently
\[\mathcal{P}\{U^{\mathrm{d}}\}(\mathbf{y})\approx\boldsymbol{\mathcal{W}}_{ \mathrm{L}}\mathbf{y}\cdot c_{1}, \tag{69}\]
where \(\boldsymbol{\mathcal{W}}_{\mathrm{L}}\) is a low-pass filter. Finally, substituting Eq. (65) and Eq. (69) in Eq. (63) results in
\[\mathcal{P}\{U\}(\mathbf{y})\approx(\mathbf{I}\cdot c_{0}+\boldsymbol{ \mathcal{W}}_{\mathrm{L}}\cdot c_{1})\mathbf{y}. \tag{70}\]
This result proves that the design of the U-Net cannot evenly reconstruct all the frequencies of \(\mathbf{y}\) unless \(c_{1}=0\), in which case the whole low-frequency branch of the network is ignored. Note that this limitation is inherent to its design and cannot be circumvented by training with large datasets and/or with any loss function.
#### VI-A3 **U-Net - number of operations**
It can be noted that the encoding filter \(\mathbf{K}_{0}\) convolves \(\mathbf{x}\) at its original resolution and maps it to a tensor with \(C_{0}\) channels. Therefore, the number of operations \(\mathcal{O}(\cdot)\) for kernel \(\mathbf{K}_{0}\) is \(\mathcal{O}(\mathbf{K}_{0})=C_{0}\cdot N_{\mathrm{r}}\cdot N_{\mathrm{c}}\cdot N_{\mathrm{f}}^{2}\) [FLOPS] (floating-point operations). Conversely, due to the symmetry between encoder and decoder filters, \(\mathcal{O}(\tilde{\mathbf{K}}_{0}^{\mathrm{u}})=\mathcal{O}(\tilde{\mathbf{K}}_{0}^{\mathrm{d}})=\mathcal{O}(\mathbf{K}_{0})\). Furthermore, for this design, filter \(\mathbf{K}_{1}\) processes the signal encoded by \(\mathbf{K}_{0}\), which is down-sampled by a factor of one half, and maps it from \(C_{0}\) to \(C_{1}\) channels; this results in the estimated operation cost \(\mathcal{O}(\mathbf{K}_{1})=\mathcal{O}(\tilde{\mathbf{K}}_{1})=C_{0}\cdot C_{1}\cdot N_{\mathrm{r}}\cdot N_{\mathrm{c}}\cdot N_{\mathrm{f}}^{2}\cdot(2)^{-2}\) [FLOPS]. Finally, adding the contributions of filters \(\mathbf{K}_{0}\), \(\tilde{\mathbf{K}}_{0}^{\mathrm{u}}\), \(\tilde{\mathbf{K}}_{0}^{\mathrm{d}}\), \(\mathbf{K}_{1}\) and \(\tilde{\mathbf{K}}_{1}\) results in
\[\mathcal{O}(U)=(3+2^{-1}\cdot C_{1})\cdot C_{0}\cdot N_{\mathrm{r}}\cdot N_{\mathrm{c}}\cdot N_{\mathrm{f}}^{2}\ [\text{FLOPS}]. \tag{71}\]
#### VI-A4 **U-Net - concluding remarks**
The U-Net/FBPConvNet is a flexible multi-resolution architecture. Still, as has been shown, the pooling structure of this CNN may be sub-optimal for noise-reduction applications, because this configuration does not allow the frequency content of the signal to be recovered evenly. This has been noted and fixed by Han and Ye [6], who introduced the so-called tight-frame U-Net, in which the down/up-sampling structure is replaced by the discrete wavelet transform and its inverse. This simple modification overcomes the limitations of the U-Net and improved its performance for artifact removal in compressed sensing imaging.
### **Residual encoder-decoder CNN**
#### VI-B1 **Residual encoder-decoder CNN - overview of the design**
The residual encoder-decoder CNN shown in Fig. 10 consists of nested single-layer residual encoding-decoding networks. For example, in the network showcased in Fig. 10 it is visible that network \(R_{1}(\cdot)\) is nested into \(R_{0}(\cdot)\). Furthermore, for this case the image estimate is given by
\[\hat{\mathbf{x}}=(\mathbf{y}+R_{0}(\mathbf{y})+\tilde{\underline{b}}_{0})_{+}, \tag{72}\]
in which \(R_{0}(\cdot)\) is the outer residual network and \(\tilde{\underline{b}}_{0}\) is the bias for the output layer. Note that the ReLU placed at the output layer intrinsically assumes that the estimated signal \(\hat{\mathbf{x}}\) is positive.
From Eq. (72), the output of the sub-network \(R_{0}(\cdot)\) is defined by
\[\mathbf{Z}=R_{0}^{\text{dec}}(\hat{\mathbf{Q}}). \tag{73}\]
Here, the decoder \(R_{0}^{\text{dec}}(\cdot)\) is defined by
\[R_{0}^{\text{dec}}(\hat{\mathbf{Q}})=\tilde{\mathbf{K}}_{0}^{\mathsf{T}}\hat{ \mathbf{Q}}. \tag{74}\]
In the above, \(\hat{\mathbf{Q}}\) is the noiseless estimate of the intermediate signal \(\mathbf{Q}\) and it is defined by
\[\hat{\mathbf{Q}}=(\mathbf{Q}+R_{1}(\mathbf{Q})+\tilde{\underline{b}}_{1})_{+}, \tag{75}\]
where the network \(R_{1}(\cdot)\) is
\[R_{1}(\mathbf{Q})=\tilde{\mathbf{K}}_{1}^{\mathsf{T}}\big{(}\mathbf{K}_{1} \mathbf{Q}+\underline{b}_{1})_{+}. \tag{76}\]
Furthermore, \(\mathbf{Q}\) represents the signal encoded by \(R_{0}(\cdot)\), or equivalently
\[\mathbf{Q}=R_{0}^{\text{enc}}(\mathbf{y}), \tag{77}\]
where \(R_{0}^{\text{enc}}(\cdot)\) is defined by
\[R_{0}^{\text{enc}}(\mathbf{y})=\mathbf{K}_{0}\mathbf{y}. \tag{78}\]
#### VI-B2 **Residual encoder-decoder CNN - signal reconstruction analysis**
As mentioned earlier, the residual encoder-decoder CNN is composed of nested residual blocks, which are independently analyzed to study the reconstruction characteristics of this network. First, block \(R_{1}(\cdot)\) is given by
\[\mathcal{P}\{R_{1}\}(\mathbf{Q})=\tilde{\mathbf{K}}_{1}^{\mathsf{T}}(\mathbf{K }_{1}\mathbf{Q})_{+}. \tag{79}\]
Under complementary-phase tight-frame assumptions for the pair \((\mathbf{K}_{1},\tilde{\mathbf{K}}_{1})\), Eq. (79) reduces to
\[\mathcal{P}\{R_{1}\}(\mathbf{Q})=\mathbf{Q}, \tag{80}\]
which shows that the encoder and decoder of \(R_{1}(\cdot)\) can approximately reconstruct any signal. Now switching to \(R_{0}(\cdot)\), it can be observed that its linear part is
\[\mathcal{P}\{R_{0}\}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal}(\mathbf{K}_{ 0}\mathbf{y})_{+}. \tag{81}\]
Just as with \(R_{1}(\cdot)\), it is assumed that the convolution kernels are tight-framelets. Therefore Eq. (81) becomes
\[\mathcal{P}\{R_{0}\}(\mathbf{y})=\mathbf{y}. \tag{82}\]
Consequently, \(R_{0}(\cdot)\) and \(R_{1}(\cdot)\) can reconstruct any arbitrary signal under complementary-phase tight-frame assumptions.
#### VI-B3 **Residual encoder-decoder CNN - number of operations**
In this case, all the convolution layers operate at the original resolution of image \(\mathbf{x}\). Therefore, the number of operations \(\mathcal{O}(\cdot)\) for kernels \(\mathbf{K}_{0}\) and \(\tilde{\mathbf{K}}_{0}\) is \(\mathcal{O}(\mathbf{K}_{0})=\mathcal{O}(\tilde{\mathbf{K}}_{0})=C_{0}\cdot N_{\mathrm{r}}\cdot N_{\mathrm{c}}\cdot N_{\mathrm{f}}^{2}\) [FLOPS], while \(\mathbf{K}_{1}\) and \(\tilde{\mathbf{K}}_{1}\) require \(\mathcal{O}(\mathbf{K}_{1})=\mathcal{O}(\tilde{\mathbf{K}}_{1})=C_{0}\cdot C_{1}\cdot N_{\mathrm{r}}\cdot N_{\mathrm{c}}\cdot N_{\mathrm{f}}^{2}\) [FLOPS]. By adding the contributions of both encoding-decoding pairs, the total number of operations for the residual encoder-decoder becomes
\[\mathcal{O}(R)=2\cdot(1+C_{1})\cdot C_{0}\cdot N_{\text{r}}\cdot N_{\text{c}} \cdot N_{\text{f}}^{2}\ [\text{FLOPS}]. \tag{83}\]
#### VI-B4 **Residual encoder-decoder CNN - concluding remarks**
The residual encoder-decoder network consists of a set of nested single-resolution residual encoding-decoding CNNs. The single-resolution design increases its computation cost with respect to multi-resolution designs such as the U-Net. In addition, it should be noted that the use of a ReLU as the output layer of the encoder-decoder residual network forces the signal estimates to be positive, which is not always convenient. For example, in computed tomography imaging it is common that images contain positive and negative values.
### **Learned wavelet-frame shrinkage network**

#### VI-C1 **Learned wavelet-frame shrinkage network - description of the architecture**
The learned wavelet-frame shrinkage network is a multi-resolution architecture in which the discrete wavelet transform is used for down/up-sampling and also as part of the decomposition where shrinkage is applied. In this CNN, the noiseless estimates are produced by
\[\hat{\mathbf{x}}=L(\mathbf{y}), \tag{84}\]
where \(L(\cdot)\) represents the encoding-decoding structure of the learned wavelet-frame shrinkage network and the encoding-decoding network \(L(\cdot)\) is
\[L(\mathbf{y})=L_{\text{L}}(\mathbf{y})+L_{\text{H}}(\mathbf{y}). \tag{85}\]
Here, the high-frequency path is given by
\[L_{\text{H}}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal}\tilde{\mathbf{W}}_{\text{H}}^{\intercal}f_{(2\uparrow)}\Big{(}\tau_{(\underline{t}_{0})}^{\text{LET}}\big{(}f_{(2\downarrow)}\big{(}\mathbf{W}_{\text{H}}\mathbf{K}_{0}\mathbf{y}\big{)}\big{)}\Big{)}. \tag{86}\]

Note that in this design the encoder leverages the filter \(\mathbf{W}_{\text{H}}\) for generating a sparse signal prior to the shrinkage stage, i.e. \(\tau_{(\underline{t}_{0})}^{\text{LET}}\big{(}f_{(2\downarrow)}\big{(}\mathbf{W}_{\text{H}}\mathbf{K}_{0}\mathbf{y}\big{)}\big{)}\). Meanwhile, the low-frequency path \(L_{\text{L}}(\cdot)\) is
\[L_{\text{L}}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal}\tilde{\mathbf{W}} _{\text{L}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2\downarrow)}\big{(}\mathbf{ W}_{\text{L}}\mathbf{K}_{0}\mathbf{y}\big{)}\big{)}. \tag{87}\]
#### VI-C2 **Learned wavelet-frame shrinkage network - signal reconstruction analysis**
When analyzing the signal propagation of the learned wavelet-frame shrinkage network, we set the threshold level \(\underline{t}_{0}=0\). This turns Eq. (85) into
\[\mathcal{P}\{L\}(\mathbf{y})=\mathcal{P}\{L_{\text{L}}\}(\mathbf{y})+\mathcal{ P}\{L_{\text{H}}\}(\mathbf{y}). \tag{88}\]
Here, \(\mathcal{P}\{L_{\text{H}}\}(\cdot)\) is defined by
\[\mathcal{P}\{L_{\text{H}}\}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal} \tilde{\mathbf{W}}_{\text{H}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2 \downarrow)}\big{(}\mathbf{W}_{\text{H}}\mathbf{K}_{0}\mathbf{y}\big{)}\big{)}, \tag{89}\]
while the low-frequency path \(\mathcal{P}\{L_{\text{L}}\}(\cdot)\) is mathematically described by
\[\mathcal{P}\{L_{\text{L}}\}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal} \tilde{\mathbf{W}}_{\text{L}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2 \downarrow)}\big{(}\mathbf{W}_{\text{L}}\mathbf{K}_{0}\mathbf{y}\big{)}\big{)}. \tag{90}\]
Substituting Eq. (89) and Eq. (90) in Eq. (88) results in the equation
\[\mathcal{P}\{L\}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal}\tilde{\mathbf{ W}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2\downarrow)}\big{(}\mathbf{W} \mathbf{K}_{0}\mathbf{y}\big{)}\big{)}. \tag{91}\]
For the discrete wavelet transform, it holds that \(\mathbf{Q}=\tilde{\mathbf{W}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2\downarrow )}\big{(}\mathbf{W}\mathbf{Q}\big{)}\big{)}\). Consequently, Eq. (91) is simplified to
\[\mathcal{P}\{L\}(\mathbf{y})=\tilde{\mathbf{K}}_{0}^{\intercal}\mathbf{K}_{0} \mathbf{y}. \tag{92}\]
Assuming that \(\mathbf{K}_{0}\) is a tight framelet i.e. \(\tilde{\mathbf{K}}_{0}^{\intercal}\mathbf{K}_{0}=\mathbf{I}\cdot c\), with \(c=1\), then
\[\mathcal{P}\{L\}(\mathbf{y})=\mathbf{y}. \tag{93}\]
This proves that the encoding-decoding section of the learned wavelet-frame shrinkage network allows for perfect signal reconstruction.
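The identity \(\mathbf{Q}=\tilde{\mathbf{W}}^{\intercal}f_{(2\uparrow)}\big{(}f_{(2\downarrow)}(\mathbf{W}\mathbf{Q})\big{)}\) used in the step from Eq. (91) to Eq. (92) is easy to confirm numerically; the snippet below is a minimal sketch using PyWavelets with the Haar basis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 16))
# Decimated analysis followed by synthesis reproduces the input exactly.
coeffs = pywt.dwt2(q, "haar")      # (cA, (cH, cV, cD))
q_rec = pywt.idwt2(coeffs, "haar")
print(np.allclose(q, q_rec))       # True
```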
#### VI-C3 **Learned wavelet-frame shrinkage network - number of operations**
The learned wavelet-frame shrinkage network contains a simpler convolution structure than the networks reviewed up to this moment. Therefore, for a single-level decomposition architecture, the total number of operations is
\[\mathcal{O}(L)=2\cdot C_{0}\cdot N_{\text{r}}\cdot N_{\text{c}}\cdot N_{\text{ f}}^{2}\ [\text{FLOPS}]. \tag{94}\]
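For a quick comparison of the complexity expressions of Eqs. (71), (83) and (94), the following small Python helper can be used; the image and channel sizes in the example are hypothetical choices for illustration only.

```python
def flops_unet(c0, c1, nr, nc, nf):
    # Eq. (71): three full-resolution kernels plus a half-resolution pair.
    return (3 + 0.5 * c1) * c0 * nr * nc * nf**2

def flops_red(c0, c1, nr, nc, nf):
    # Eq. (83): all four kernels operate at the original resolution.
    return 2 * (1 + c1) * c0 * nr * nc * nf**2

def flops_lwfsn(c0, nr, nc, nf):
    # Eq. (94): a single trainable encoder/decoder pair at full resolution.
    return 2 * c0 * nr * nc * nf**2

sizes = dict(c0=64, c1=128, nr=512, nc=512, nf=3)
print(f"U-Net: {flops_unet(**sizes):.2e} FLOPS, "
      f"RED: {flops_red(**sizes):.2e} FLOPS, "
      f"LWFSN: {flops_lwfsn(64, 512, 512, 3):.2e} FLOPS")
```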
#### VI-C4 **Learned wavelet-frame shrinkage network - residual variant**
To illustrate the use of clipping activations in residual noise reduction, the residual version of the learned wavelet-frame shrinkage network is included. Note that there are two main differences with the conventional learned wavelet-frame shrinkage network. First, the shrinkage functions are replaced by clipping activations. Second, the low-frequency signal is suppressed. This is performed because the original design of the learned wavelet-frame shrinkage network does
Fig. 11: Residual version of the learned wavelet-frame shrinkage network. It can be noticed that the low-frequency branch of the network is nulled. In deeper networks, the signal would be further decomposed and the nulling would be activated at the deepest level (lowest resolution).
not have any non-linearities in that section. This is akin to the low-frequency nulling proposed by Kwon and Ye [35]. The modified learned wavelet-frame shrinkage network is shown in Fig. 11. It can be observed that by setting to zero the low-frequency branch of the design the model is inherently assuming that the noise is high-pass.
#### VI-C5 **(Residual) Learned wavelet-frame shrinkage network - concluding remarks**
The (residual) learned wavelet-frame shrinkage network is a design which explicitly mimics wavelet-shrinkage algorithms. It can be observed that the (r)LWFSN inherently assumes that noise is high-frequency and explicitly avoids non-linear processing in the low-frequency band. Follow-up experiments also included non-linearities in the low-frequency band of the learned wavelet-frame shrinkage network [36] and obtained similar results to the original design.
## VII **What happens in trained models?**
### **Properties of convolution kernels and low-rank approximation**
The assumption that the convolution filters of a CNN behave as (complementary-phase) tight framelets is useful for analyzing the theoretical ability of a CNN to propagate signals. However, it is difficult to prove that trained models comply with this assumption, because there are diverse elements affecting the optimization of the model, e.g. the initialization of the network, the data presented to the model, and the optimization algorithm as well as its parameters. In addition, in real CNNs there may be co-adaptation between the diverse CNN layers, which may prevent the individual filters of the CNN from behaving as tight framelets, since the decomposition and filtering performed by one layer are not independent of the rest [37].
To test whether the filters of a trained CNN can converge to complementary-phase tight framelets, at least in a simplified environment, we propose to train a toy model, as displayed in Fig. 13. If the trained filters of an encoder-decoder pair of the toy model (\(\mathbf{K}_{l}\), \(\mathbf{\tilde{K}}_{l}\)), where \(l\) denotes one of the decomposition levels, behave as a complementary-phase tight framelet, then the pair (\(\mathbf{K}_{l}\), \(\mathbf{\tilde{K}}_{l}\)) approximately complies with the condition presented in Eq. (40), which for identity input \(\mathbf{I}\) simplifies to
\[\mathbf{\tilde{K}}_{l}^{\intercal}(\mathbf{K}_{l})_{+}=\mathbf{I}\cdot c_{l}\,, \tag{95}\]

in which \(c_{l}\) is an arbitrary constant.
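A check of Eq. (95) can be implemented by probing an encoder-decoder pair with per-channel Dirac impulses; the PyTorch sketch below is our own illustration, and the Haar-based kernel used at the end is a hypothetical stand-in for trained weights (an ideal pair for which the test should succeed).

```python
import torch
import torch.nn.functional as F

def framelet_check(k_enc, k_dec):
    """Evaluate K_dec^T (K_enc x)_+ on per-channel Dirac inputs (Eq. (95)).
    For a complementary-phase tight-framelet pair the output is ~c * x."""
    c_in, kh = k_enc.shape[1], k_enc.shape[-1]
    x = torch.zeros(c_in, c_in, 4 * kh, 4 * kh)
    x[torch.arange(c_in), torch.arange(c_in), 2 * kh, 2 * kh] = 1.0
    z = F.relu(F.conv2d(x, k_enc))            # encoder + ReLU, zero bias
    return x, F.conv_transpose2d(z, k_dec)    # adjoint (decoder) convolution

# Ideal example: 2D Haar filters stacked with their phase-inverted copies.
h1 = torch.tensor([1.0, 1.0]) / 2 ** 0.5
g1 = torch.tensor([1.0, -1.0]) / 2 ** 0.5
haar = torch.stack([torch.outer(a, b) for a in (h1, g1) for b in (h1, g1)])
k = torch.cat([haar, -haar]).unsqueeze(1)     # shape (8, 1, 2, 2)
x, rec = framelet_check(k, k)
print(torch.allclose(rec, 4.0 * x, atol=1e-6))  # True, i.e. c = 4
```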
The toy model is trained on images that contain multiple randomly-generated overlapping triangles. All the images were scaled to the range [0,1]. For this experiment, the input to the model is the noise-contaminated image and the objective/desired output is the noiseless image. For training the CNNs, normally-distributed noise with a standard deviation of 0.1 was added to the ground truth. For every epoch, a batch of 192 training images is generated. As validation and test images we use the "astronaut" and "cameraman" images included in the software package _Scipy_. The model is optimized with Adam for 25 epochs with a linearly decreasing learning rate. The initial learning rate for the optimizer is set to 10\({}^{-3}\) and the batch size is set to 1 sample. The convolution kernels were initialized with Xavier initialization using a uniform distribution (see Glorot and Bengio [38]). The code is available at IEEE's code ocean [https://codeocean.com/capsule/7845737/tree](https://codeocean.com/capsule/7845737/tree).
With the described settings, we have trained the toy model and tested whether the phase-complementary tight-framelet property holds for the filters of the deepest level \(l\)=2. The results for the operation \(\mathbf{\tilde{K}}_{2}^{\intercal}(\mathbf{K}_{2})_{+}\) are displayed in Fig. 12 (left), which shows that when the weights of the encoder and decoder have different initial values, the kernel pair (\(\mathbf{K}_{2}\), \(\mathbf{\tilde{K}}_{2}\)) is not a complementary-phase tight framelet. We have observed that the forward and inverse filters of wavelets/framelets are often the same, or at least very similar. Based on this reasoning, we have initialized the toy model with the same initial values for the kernel pairs (\(\mathbf{K}_{n}\), \(\mathbf{\tilde{K}}_{n}\)). As shown by Fig. 12 (right), with the proposed initialization, the filters of the CNN converge to tensors with properties reminiscent of complementary-phase tight framelets. This suggests that the initialization of the CNN has an important influence on the convergence of the model to a specific solution.
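For concreteness, the test of Eq. (95) can be approximated numerically by probing a single encoder-decoder kernel pair with an impulse image and checking whether the composition acts as a scaled identity. The following PyTorch sketch is purely illustrative; the kernel layout, padding, and the single-level simplification are our assumptions, not the exact test code of the linked capsule:

```python
import torch
import torch.nn.functional as F

def framelet_test(K, K_tilde, size=33):
    """Probe Eq. (95) numerically: K_tilde^T (K)_+ should act as c * identity."""
    c_in = K.shape[1]
    # Channel-wise impulse input (a crude stand-in for the identity operator).
    x = torch.zeros(1, c_in, size, size)
    x[0, :, size // 2, size // 2] = 1.0
    # (K)_+ : decompose and keep only the positive part of the coefficients.
    coeffs = F.relu(F.conv2d(x, K, padding=1))
    # K_tilde^T : reconstruct with the decoder kernels via transposed convolution.
    return F.conv_transpose2d(coeffs, K_tilde, padding=1)
```

For a pair that satisfies the complementary-phase criterion, the returned image should be (approximately) the input impulse scaled by a constant \(c_{n}\).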
Fig. 14 displays a test image processed with two toy models, one trained with different and one with the same initial values for the encoding-decoding pairs. It can be observed that there are no significant differences between the images produced by both models. In the same figure (lower row), we have set the bias of both networks to zero. In this case, it is expected that the networks reconstruct the noisy input, as confirmed by the figure, where both CNNs partly reconstruct the original noisy signal. This result suggests that the ReLU plus bias pairs operate akin to the low-rank approximation
Fig. 12: Phase-complementary tight-framelet test for the trained toy network, initialized with random weights. The left figure shows the product \(\mathbf{\tilde{K}}_{2}^{\intercal}(\mathbf{K}_{2})_{+}\), where the initialization of \(\mathbf{K}_{2}\) and \(\mathbf{\tilde{K}}_{2}\) is different. It can be seen that the pair \((\mathbf{K}_{2},\mathbf{\tilde{K}}_{2})\) does not comply with the complementary-phase framelet criterion of Eq. (95). This contrasts with the right figure, which displays the product \(\mathbf{\tilde{K}}_{2}^{\intercal}(\mathbf{K}_{2})_{+}\) for the same CNN, but where the initial values of \(\mathbf{\tilde{K}}_{2}\) and \(\mathbf{K}_{2}\) are identical. For this initialization, the filters approximate the complementary-phase tight-framelet criterion.
Fig. 13: Toy model used for the experiment on the properties of the filters of a trained CNN. The dimensions of tensors \(\mathbf{K}_{0}\), \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\) are \((6\times 1\times 3\times 3)\), \((12\times 6\times 3\times 3)\) and \((24\times 12\times 3\times 3)\). The network is symmetric and the filter dimensions of the decoder convolution kernels \(\mathbf{\tilde{K}}_{n}\) are the same as those of their corresponding encoding kernels \(\mathbf{K}_{n}\).
mechanism proposed in the theory of deep convolutional framelets.
The following conclusions can be drawn from this experiment. First, the filters of the CNN may not necessarily converge to complementary-phase tight framelets. This is possibly caused by the initialization of the network and/or the interaction/co-adaptation between the multiple encoder/decoder layers. Second, we confirm that for our experimental setting, the low-rank approximation behavior in the CNN can be observed. For example, when setting the biases and thresholds to zero, part of the noise texture (high-rank information) is recovered. Third, it is possible that linear filtering happens in the network as well, which may explain why the noise texture is not fully recovered when setting the biases to zero. Fourth and final, we have observed that the behavior of the trained models changes drastically depending on factors such as the learning rate and the initialization values of the model. For this reason, we consider this experiment and its outcome more as a proof of concept, where further investigation is needed.
### **Generalization**
From the explanations in Section IV-C, it can be noted that the bias/threshold used in CNNs can modulate how much of the signal is suppressed by the nonlinearities. In addition, Section V-D established that there are additional mechanisms for noise reduction within the CNN, such as the Wiener-like behavior observed by Mohan _et al._[14]. This raises the question of how robust conventional CNNs are to noise levels different from the level the model has been trained with. To perform such an experiment, we have trained two variants of the toy model. The first variant is inspired by the multi-scale sparse coding network by Mentl _et al._[17], where the biases of each of the nonlinearities (ReLU in this case) are multiplied by an estimate of the standard deviation of the noise. In this design, the noise estimate \(\hat{\sigma}_{\eta}\) is defined, in accordance with Chang _et al._[1], by
\[\hat{\sigma}_{\eta}=1.4826\cdot\text{Median}(|\mathbf{f}_{\text{HH}}*\mathbf{ x}|). \tag{96}\]
Here, variable \(\mathbf{f}_{\text{HH}}\) is the diagonal convolution filter of the discrete wavelet transform with Haar basis. For comparison purposes, we will refer to this model as the _adaptive_ toy model. The second variant of the toy model examines the case where the convolution layers of the model do not add bias to the signal. This model is based on the so-called bias-free CNNs proposed by Mohan _et al._, in which the bias of every convolution filter is set to zero during training. This setting has the purpose of achieving better generalization, since it is claimed that this modification causes the model to behave independently of the noise level.
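A minimal sketch of the estimator in Eq. (96) is shown below; the exact normalization of the 2×2 Haar HH filter is an assumption:

```python
import numpy as np
from scipy.signal import convolve2d

def estimate_noise_std(x):
    # Diagonal (HH) Haar analysis filter; 1/2 normalization is assumed here.
    f_hh = 0.5 * np.array([[1.0, -1.0],
                           [-1.0, 1.0]])
    d = convolve2d(x, f_hh, mode='valid')
    # Robust median-absolute-deviation estimate of sigma_eta, Eq. (96).
    return 1.4826 * np.median(np.abs(d))
```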
We have trained the described variants of the toy model with the same settings as the experiment in Section VII-A. The three models are evaluated on the test image with varying noise levels \(\sigma_{n}\in[0.100,0.150,0.175,0.200,0.225]\); the results of this evaluation are displayed in Fig. 15. These results confirm that the performance of the original toy model degrades for higher noise levels. In contrast, the adaptive and bias-free toy
Fig. 14: Processed “cameraman” image for (in)dependently sampled initialization of the encoding and decoding filters. The top-left picture represents the noise-contaminated input (\(\sigma_{\eta}=0.1\)) and the bottom-left, the noiseless reference. The middle-column images are the noisy image processed by the toy model trained with different initializations for its convolution filters, while the right-column images are processed with the model where the same initial values are used for the encoding and decoding filters. The top-middle and top-right images are nearly identical in terms of quality and SNR, suggesting that the initialization has no noticeable effect on denoising quality. The middle and bottom rows show the outputs of the same models that produced the top-middle and top-right figures, but with their biases set to zero. As expected, the noise is partly reconstructed.
models perform better than the original toy model for most noise levels.
The results of this experiment confirm the diverse noise-reduction mechanisms within a CNN, and also show that CNNs have certain modeling limitations, for example, the lack of noise-level invariance. This can be addressed by further incorporating prior knowledge into the model, as in the adaptive model, or by forcing the model to have a more Wiener-like behavior, as in the bias-free model. In the case of the bias-free model, note that, theoretically, it should be possible to obtain exactly the same behavior with the original toy model if the biases of that model had converged to zero. This reasoning suggests that the large number of free parameters and the non-linear behavior of the model can potentially prevent finding the optimal/robust solution, in which case the incorporation of prior knowledge can help to improve the model.
## VIII **Which network fits my problem?**
### **Design elements**
When choosing or designing a CNN for a specific noise-reduction application, multiple choices and design elements should be considered: for example, the required performance, the memory required to train/deploy the model, whether certain signal-preservation characteristics are required, the target execution time of the model, the characteristics of the images being processed, etc. Based on these requirements, diverse design elements of CNNs can be more or less desirable, for example, the activation functions, the use of single/multi-resolution models, the need for skip connections, and so forth. This section briefly discusses such elements, focusing on the impact they have in terms of performance
Fig. 15: Comparison of the baseline (original) toy model against its adaptive and bias-free variants. The models are evaluated on the cameraman picture with increasing noise levels. The top row displays the noisy input. The second row shows the images processed with the original toy model, the third row shows the results of the adaptive toy model, and the bottom row shows the results of the bias-free model. It can be observed that the performance of the original toy model degrades as the noise level increases, while the performance of the adaptive and bias-free models degrades less with increased noise levels, resulting in pictures with lower noise levels.
and potential computational cost. A summary of the main conclusions of these elements is included in Table II.
#### VIII-A1 **Nonlinearity**
In the literature, the most common activation function in CNNs is the ReLU. There are two main advantages of the ReLU with respect to other activations. First, ReLUs potentially enforce more sparsity in the feature maps than, for example, the soft shrinkage, because ReLUs cancel not only small values of the feature maps, as shrinkage functions do, but also all negative values. The second advantage of the ReLU is its capacity to approximate other functions (see Section V-C). Note that the high capacity of the ReLU to represent other functions [31, 13] (often referred to as _expressivity_) may also be one of the reasons why these models are prone to overfitting.
The better expressivity of ReLU-CNNs may be the reason why, at the time of writing this manuscript, ReLU-based CNNs perform marginally better than shrinkage-based models in terms of metrics such as the signal-to-noise ratio or the structural similarity index metric [39, 40, 19]. Despite this small benefit, the visual characteristics of estimates produced by ReLU- and shrinkage-based networks are very similar. Furthermore, the computational cost of ReLU-based designs is potentially higher than that of designs with shrinkage functions, because ReLUs require more feature maps to preserve the signal integrity. For example, the LWFSN shown in Section VI-C achieves a performance very close to the FBPConvNet and the tight-frame U-Net for noise reduction in computed tomography, but with only a small fraction of the total trainable parameters, which allows for a faster and less computationally expensive model [19].
As a concluding remark, it can be noted that, regardless of the expressivity of the ReLU activation, it is not entirely clear whether this means that ReLU activations outperform other functions such as the soft threshold _in general_, because we could not find articles specifically focused on comparing the performance of ReLU- and shrinkage-based models. In spite of this, there are some works that compare shrinkage-based CNNs with other (architecturally different) models based on ReLUs and indicate that the compared ReLU-based designs slightly outperform the shrinkage-based ones. For example, Herbreteau and Kervrann [40] proposed the so-called DCT2-Net, a shrinkage-based CNN which, despite its good performance, is still outperformed by the ReLU-based DnCNN [7]. A similar behavior was observed by Zavala _et al._[19], where their shrinkage-based LWFSN could not outperform the ReLU-based FBPConvNet [21] and the tight-frame U-Net [6]. Another similar case is the deep K-SVD network [39], which achieves performance close to (but slightly below) the ReLU-based DnCNN. Among the few examples where we found a shrinkage-based model performing better than a ReLU variant is the work of Fan _et al._[18], who compared variants of the so-called soft autoencoder and found that the shrinkage-based model outperformed the ReLU variant.
#### VIII-A2 **Single/multi-scale designs**
Single-scale models have the advantage that they avoid aliasing, because no down/up-sampling layers are used. Still, this comes at the cost of more computations and memory. Furthermore, this approach may lead to models with larger filters and/or deeper networks to achieve the same receptive field as multi-scale models, which may further increase the computational cost of single-scale models.
In the case of multi-scale models, the main consideration is that the down/up-sampling structure should allow perfect signal reconstruction, to avoid introducing aliasing and/or distortion into the image estimates (e.g. the discrete wavelet transform in the tight-frame U-Net and in the learned wavelet-frame shrinkage network).
#### VIII-A3 **(Non-) residual models**
Residual noise-reduction CNNs often perform better than their non-residual counterparts (e.g. the U-Net vs the FBPConvNet, and the LWFSN vs the rLWFSN). This may be because the trained models have more freedom to learn the filters, since the design does not need to learn to reconstruct the noiseless signal, but only to estimate the noise [12]. Also, it can be observed that non-residual models potentially need more parameters than residual networks, because the propagation/reconstruction of the noiseless signal also depends on the number of channels of the network.
### **State of the art**
Defining the state of the art in image denoising with CNNs is challenging for diverse reasons. First, there is a wide variety of available CNNs, which often are not compared to each other. Second, the suitability of a CNN for a given task may depend on the image and noise characteristics, such as the noise distribution and (non-) stationarity. Third, the large number of variables in terms of e.g. optimization, data and data augmentation adds reproducibility issues, which further complicates making a fair comparison between all available models [11]. In addition, it should be noted that for many of
the existing models, the performance gap between state-of-the-art models and other CNNs is often small.
Despite the mentioned challenges, we have found some models that can be regarded as the state of the art. The first model to be addressed is the so-called DRU-Net [41], which is a bias-free model [14] that incorporates a U-Net architecture with residual blocks. In addition, the DRU-Net uses an additional input to indicate the noise intensity to the network, which increases its generalization to different noise levels. An additional state-of-the-art model is the DnCNN [7]. This network is residual and single-scale, and uses ReLU activations. Another state-of-the-art model is the multi-level wavelet CNN [42], which has a design very similar to the tight-frame U-Net [6]. Both of these models are based on the original U-Net design [34], but are deployed in a residual configuration and their down/up-sampling structure is based on the discrete wavelet transform. Furthermore, in addition to using encoding-decoding CNNs stand-alone, some works have used CNNs as proximal operators within model-based methods [43, 41], which further improves the denoising power of non-model-based encoding-decoding CNNs.
## IX **Conclusions and Future Outlook**
In this paper, the widely used encoding-decoding CNN architecture has been analyzed from several signal processing principles. This analysis has revealed the following conclusions. (1) Multiple signal processing concepts converge in the mathematical formulation of encoding-decoding CNN models. For example, the convolution and down/up-sampling structure of the encoder-decoder is akin to the framelet decomposition, and the activation functions are rooted in classical signal estimators. In addition, linear filtering may also happen within the model. (2) The activations implicitly assume noise and signal characteristics of the feature maps. (3) There are still many signal processing developments that can be integrated with current CNNs, further improving their performance in terms of accuracy, efficiency or robustness.
Despite the signal processing nature of encoding-decoding CNNs, at the moment of this publication the integration of CNNs and existing signal processing algorithms is at an early stage. A clear example of the signal modeling limitations of current CNN denoisers are the activation functions, where the estimators provided by current activation layers neglect the spatial correlation of the feature maps. Possible alternatives to solve this limitation could be activation functions inspired by denoisers based on principles such as Markov random fields [44], locally-spatial indicators [45] and multi-scale shrinkage [24]. Further ideas are provided by the extensive survey on denoising algorithms by Pizurica and Philips [46]. Additional approaches that can be further explored are non-local [47] and collaborative filtering [48]. Both techniques exploit the redundancy in natural images, and only a few models are exploring these properties [49, 50].
To finalize this manuscript, we encourage the reader to actively consider the properties of the signals being processed, the design requirements and the existing signal processing algorithms when designing new CNNs. By doing so, we expect that the next generation of CNN denoisers will not only perform better, but also be more interpretable and reliable.
## Acknowledgement
We thank Dr. Ulugbek Kamilov and the anonymous reviewers for their valuable feedback and suggestions for this article.
|
2306.00016 | Incorporating Domain Knowledge in Deep Neural Networks for Discrete
Choice Models | Discrete choice models (DCM) are widely employed in travel demand analysis as
a powerful theoretical econometric framework for understanding and predicting
choice behaviors. DCMs are formed as random utility models (RUM), with their
key advantage of interpretability. However, a core requirement for the
estimation of these models is a priori specification of the associated utility
functions, making them sensitive to modelers' subjective beliefs. Recently,
machine learning (ML) approaches have emerged as a promising avenue for
learning unobserved non-linear relationships in DCMs. However, ML models are
considered "black box" and may not correspond with expected relationships. This
paper proposes a framework that expands the potential of data-driven approaches
for DCM by supporting the development of interpretable models that incorporate
domain knowledge and prior beliefs through constraints. The proposed framework
includes pseudo data samples that represent required relationships and a loss
function that measures their fulfillment, along with observed data, for model
training. The developed framework aims to improve model interpretability by
combining ML's specification flexibility with econometrics and interpretable
behavioral analysis. A case study demonstrates the potential of this framework
for discrete choice analysis. | Shadi Haj-Yahia, Omar Mansour, Tomer Toledo | 2023-05-30T12:53:55Z | http://arxiv.org/abs/2306.00016v1 | # Incorporating Domain Knowledge in Deep Neural Networks for Discrete Choice Models
###### Abstract
Discrete choice models (DCM) are widely employed in travel demand analysis as a powerful theoretical econometric framework for understanding and predicting choice behaviors. DCMs are formed as random utility models (RUM), with their key advantage of interpretability. However, a core requirement for the estimation of these models is a priori specification of the associated utility functions, making them sensitive to modelers' subjective beliefs. Recently, machine learning (ML) approaches have emerged as a promising avenue for learning unobserved non-linear relationships in DCMs. However, ML models are considered "black box" and may not correspond with expected relationships.
This paper proposes a framework that expands the potential of data-driven approaches for DCM by supporting the development of interpretable models that incorporate domain knowledge and prior beliefs through constraints. The proposed framework includes pseudo data samples that represent required relationships and a loss function that measures their fulfillment, along with observed data, for model training. The developed framework aims to improve model interpretability by combining ML's specification flexibility with econometrics and interpretable behavioral analysis. A case study demonstrates the potential of this framework for discrete choice analysis.
Deep neural networks, discrete choice models, domain knowledge, interpretability
## 1 Introduction
Discrete choice models (DCMs) have been employed widely in travel demand analysis and prediction to understand individuals' decision-making processes (Ben-Akiva et al., 1985). Most DCMs are formed as random utility models (RUMs), which assume that individuals make decisions based on utility maximization decision protocols (McFadden, 1974).
Researchers have proposed various RUM-based models, incorporating different assumptions about error structures (Ben-Akiva, 1973; McFadden, 1978; Revelt & Train, 1998; Train, 2009). However, the selection of variables and their functional forms in the utility function is subjective and can significantly impact the estimation results. Utility functions describe how different attributes of each alternative are valued by individuals, which varies depending on personal characteristics and preferences that differ across decision-makers. As such, these relationships are often nonlinear, and selecting a suitable utility model that accurately captures these complex relationships through functional forms and variable transformations can be a challenging task.
Several studies have shown that an incorrectly assumed utility functional form (e.g., linear) can cause bias in parameter estimates and in the resulting predictions. This modeling uncertainty has been a persistent concern for modelers (Torres et al., 2011).
Recently, the use of data-driven approaches to learn DCM specifications has emerged as a promising avenue to overcome the limitations of RUM specifications. Machine learning (ML) methods (e.g., neural networks, decision trees, ensemble learning) can learn non-linear mapping functions (Bishop, 2006). Deep neural networks (DNN) (LeCun et al., 2015) are an increasingly used data-driven approach that has shown higher prediction accuracy in many tasks. DNN architectures represent the decision-making process by imitating the structure of neuron activity patterns and memory. Unlike RUM, DNN models require essentially no a priori knowledge about the nature of the true underlying relationships among variables and choices. However, the main focus of their use has been on prediction at the level of the individual rather than at the level of market shares.
In DNN models, input variables are fed into the network and pass through multiple hidden layers before reaching the output layer. The output layer has the same dimension as the number of alternatives, with each output representing the score \(y_{j}\) of each alternative. These scores are then transformed into choice probabilities using a softmax function of the form \(e^{y_{k}}/\sum_{j\in C}e^{y_{j}}\). While DNNs do not have the same behavioral constraints as RUMs and no utility functions need to be defined, the similarity of DNNs to logit models promotes their use as a more flexible substitute for logit models, with the output scores interpreted as utilities. The main focus of these works has been on prediction accuracy (Chang et al., 2019; Mahajan et al., 2020; van Cranenburgh and Alwosheel, 2019). They find that DNNs can outperform RUMs in prediction accuracy (Hagenauer and Helbich, 2017; Hillel et al., 2019; Omrani, 2015).
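As an illustration only, a minimal PyTorch sketch of such a generic choice network could look as follows; the layer sizes and the architecture are assumptions for illustration, not a specification taken from the cited works:

```python
import torch.nn as nn

class ChoiceDNN(nn.Module):
    """Fully connected choice network: one output score y_j per alternative."""
    def __init__(self, n_features, n_alternatives, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_alternatives),
        )

    def forward(self, x):
        scores = self.net(x)             # y_j, read as "utilities"
        return scores.softmax(dim=-1)    # e^{y_k} / sum_j e^{y_j}
```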
However, deep neural networks (DNNs) are often criticized for their "black box" nature, which hinders the interpretation of the extracted relationships and limits their usefulness in providing insights into the factors that influence choice behavior. Unlike RUMs, DNN models do not provide explicit information about how the input variables affect the output, which may not be consistent with domain knowledge. For example, the signs of the effects of various variables on the choices (e.g., in travel choices, negative own-elasticity of choices to their costs and travel times) and the ranges of marginal rates of substitution among variables (e.g., values of time) may not align with the expected relationships. While the high flexibility of DNNs allows them to find complex non-linear specifications without a priori knowledge, the ability to explain and interpret the choice behavior and to generalize the model to unseen situations are essential features, which may be negatively affected by the "black box" form of these models (Van Cranenburgh et al., 2021).
Work has been done to promote the use of ML models for choice modeling by "opening" the "black box" through the development of external tools that derive economic behavioral information and enhance interpretability. Wang et al. (2020) numerically extract different economic quantities of interest from DNNs, including estimates of individual choice predictions and probabilities, market shares, social welfare, substitution patterns of alternatives, probability derivatives, elasticities, and heterogeneous marginal rates of substitution. They show that most economic information, when averaged over several trained models, is reasonable and more flexible than that of multinomial logit (MNL) models. However, at the disaggregate level, some results are counterintuitive, such as negative values of time.
Another technique to estimate choice sensitivities relies on partial dependence plots (PDPs) (Friedman, 2001). It calculates choice probabilities for every possible value of a variable for each observation. Zhao et al. (2020) implement and visualize PDPs to capture both the magnitude and direction of effects. The insights gained from them are qualitative in nature. While these plots are intuitive and easy to implement, they provide a clear interpretation only when
features are uncorrelated. The assumption behind this method is that the features are independent, while this is often not the case in a choice modeling context. If this assumption is violated, the averages calculated for the PDP will include data points that are very unlikely or even impossible. In their case study, they compare several ML models to MNL. In some cases, the PDPs for ML models deviate substantially from linearity, which suggests that ML models can capture nonlinear relationships among the explanatory variables and choices.
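A hedged sketch of how a one-dimensional partial dependence curve can be computed is given below; `predict_proba` is a hypothetical callable returning the choice probabilities of a fitted model:

```python
import numpy as np

def partial_dependence(predict_proba, X, feature, grid):
    # Sweep one feature over a grid and average the predicted probabilities
    # over all samples. Meaningful only if the swept feature is (nearly)
    # independent of the remaining features, as discussed above.
    curves = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        curves.append(predict_proba(Xv).mean(axis=0))
    return np.asarray(curves)   # shape: (len(grid), n_alternatives)
```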
A second approach involves combining ML and RUMs. Sifringer et al. (2018, 2020) introduced a systematic utility function that incorporates both an interpretable and a learned part. The interpretable part is specified explicitly by the modeler, as is common with RUMs, while the learned part utilizes a DNN to discover additional useful utility terms from available data. The DNN is fed with the explanatory variables and outputs a term that adds utility to the alternatives. The model improved the estimation log-likelihood value compared to a traditional MNL. However, a potential limitation of this approach is that the DNN term is unbounded, and its relationship with the explanatory variables remains unknown, which may cause problems when the model is applied with new inputs. The decision on which variables to include in each part of the utility function is also subjective and made by the modeler. Therefore, it would be beneficial to bound the DNN terms and develop a more objective way to determine which variables enter each part of the utility function.
A third approach is to use ML models with utility-like architectures that resemble RUMs. Wang et al. (2020) developed an alternative-specific utility deep neural network (ASU-DNN) model. Their model maintains separate utility functions for each alternative, which are trained using separate neural networks with only the variables relevant to that alternative. Thus, the utility score for each alternative depends only on its own attributes. Systematic heterogeneity is captured by the inclusion of socio-demographic variables, which first enter a separate DNN and are then integrated into each of the utility functions to obtain the final outputs. This model was more readily interpretable than fully connected DNNs and achieved a comparable or even better fit to the training and testing data. However, because of the model structure, as in MNL, the model exhibits the independence-of-irrelevant-alternatives property. In addition, the model architecture might still suffer from unreasonable relationships among explanatory variables and choices.
The above-mentioned methods focus on interpretability but do not specify and enforce any restrictions on the relationships among the explanatory variables and choices (Alwosheel et al., 2019, 2021). Therefore, they are unable to guarantee that the resulting non-linear relationships are controllable and consistent with domain knowledge. Inconsistency with domain knowledge also limits model's application in prediction for evaluating new policies in new scenarios. Prediction for new policy analysis might be needed beyond the fitting region. In such cases, extrapolation is required, which is a challenge for ML models. Therefore, the ability to explain and interpret the choice behavior to understand the factors that affect it and generalize the ML model to unseen situations are essential features. By incorporating domain knowledge into the model, extrapolation can be made possible in prediction.
This study aims to enhance the consistency of ML models with domain knowledge. The proposed framework allows incorporating domain knowledge through the introduction of constraints to the model. This is facilitated by generating pseudo data that contains such knowledge to be fed to the model, in combination with a loss function that penalizes violations of this knowledge. The contribution of this work is to demonstrate that domain knowledge can be incorporated while preserving flexibility. The proposed approach is independent of the model structure, making it possible to implement it on other architectures. It demonstrates that theory and expert knowledge can be introduced effectively and flexibly, as required by domain expertise, and will promote the application of data-driven ML models for travel choice predictions by building trust and giving modelers control over the model's behavior.
The rest of the paper is organized as follows: the next section describes the proposed methodology for incorporating domain knowledge in DNN. The following section presents a case study that implements the proposed framework and discusses the results of different models. Conclusions and potential enhancements and extensions to the proposed methodology are presented in the final section.
## 2 Methodology
The idea behind incorporating domain knowledge in DNNs involves augmenting the data given to the model and modifying the loss function that the model optimizes. To achieve this, additional data, termed pseudo data, is generated to hold the targeted knowledge that the model is expected to capture. The loss function is then formulated to include terms that use this data, in combination with the original model loss function, such as the negative log-likelihood. The additional loss terms measure the extent to which the trained model is consistent with the domain knowledge.
The overall framework for incorporating domain knowledge into DNNs is shown in Figure 1. The framework is independent of the model structure, allowing seamless integration with existing off-the-shelf DNN architectures. The model is trained on two sets of inputs: the originally available observed data and domain knowledge, which is mathematically formulated as a set of constraints on the outcomes of the trained model. The observed data represents the collected dataset, including socio-economic characteristics of decision makers, attributes of the alternatives, and choices. The domain knowledge represents the knowledge that the modeler wants to incorporate into the model and expects it to capture (e.g., directions of sensitivities).
In this work, the modeler's prior expectations are related to directions of the effects of an alternative's attributes on its own utility. For example, these may be negative effects of mode travel times and costs on the utilities of these modes. In this case, the model is constrained to learn a monotonically decreasing probability of choosing an alternative with respect to its travel time and cost and, consequently, monotonically increasing probabilities of choosing the remaining alternatives.
Figure 1: Overall framework for incorporating domain knowledge
Consider a training set consisting of \(N\) samples \(\{(x_{i},y_{i})\},i=1,...,N\), where \(x_{i}\in\mathbb{R}^{D}\) is a feature vector and \(y_{i}\) is the discrete choice among \(\mathcal{C}\) alternatives, \(y_{i}\in\{1,\ldots,\mathcal{C}\}\). Let \(p_{c}(x_{i})\) be the probability of choosing alternative \(c\) given input \(x_{i}\), and let \(x_{i}[m]\) be the value of feature \(m\) in the feature vector. The estimated model is considered to be monotonically increasing in \(p_{c}\) with respect to feature \(m\) if \(p_{c}(x_{i})\geq p_{c}\big{(}x_{j}\big{)}\) for any two feature vectors \(x_{i},x_{j}\) such that \(x_{i}[m]\geq x_{j}[m]\) and \(x_{i}[h]=x_{j}[h]\) for all \(h\in\{1,\ldots,D\}\backslash\{m\}\). The opposite applies for decreasing monotonicity. The rest of the components are described as follows.
### Generating Pseudo Data
Following the monotonicity constraints above, pseudo data can be generated as pairs of samples to numerically approximate the constrained probability derivatives. For each monotonicity constraint with respect to a feature \(m\), \(K\) pseudo samples \(x_{k,1}^{*}\) are generated uniformly along the value range of that feature. Each pseudo sample is then paired with a second pseudo sample that has a positive incremental change applied to feature \(m\). The relationship required for an increasing monotonicity constraint of the probability of choosing alternative \(c\) with respect to feature \(m\) is \(p_{c}\big{(}x_{k,2}^{*}\big{)}-p_{c}\big{(}x_{k,1}^{*}\big{)}\geq 0\).
The pseudo data does not require labels (i.e., chosen alternatives), as they are only used for capturing domain knowledge, not for predicting the chosen alternative. This ability to generate pseudo samples enhances the model in three ways:
1. When the dataset is small, the pseudo dataset helps increase the dataset size to learn the model's parameters.
2. When the input feature region is imperfectly covered, the pseudo data helps fill gaps and enforce the model to learn along the full range of possible values.
3. Generating pseudo data outside the range of currently observed values for specific features helps enforce better learning, hence enabling extrapolation in the outer regions (i.e., unseen scenarios); a generation sketch follows this list.
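A minimal sketch of the pseudo-pair generation described above is given below. How the remaining features of the pseudo samples are set is not specified in the text, so anchoring them to a reference feature vector `x_ref` is our assumption:

```python
import numpy as np

def make_pseudo_pairs(x_ref, m, low, high, K=100, delta=0.1):
    """Generate K pseudo pairs for a monotonicity constraint on feature m."""
    rng = np.random.default_rng(0)
    x1 = np.tile(x_ref, (K, 1))
    x1[:, m] = rng.uniform(low, high, size=K)   # uniform over the feature range
    x2 = x1.copy()
    x2[:, m] += delta                           # positive increment on feature m only
    return x1, x2                               # unlabeled; used only for Eq. (2)
```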
### Loss Function
The loss function includes two components: prediction loss and domain knowledge loss. The prediction loss quantifies the accuracy of predictions and can be calculated for example using the negative log-likelihood (\(\mathcal{L}_{NLL}\)) method commonly used in RUMs. This calculation is performed only for samples with observed choices and is represented by the following formula:
\[\mathcal{L}_{NLL}=-\sum_{i=1}^{N}\sum_{c\in c}g_{i,c}\cdot log\big{(}p_{i,c} \big{)} \tag{1}\]
Where \(g_{i,c}\) equals 1 if alternative \(c\) is chosen by individual \(i\) and 0 otherwise.
The domain knowledge loss measures the violation of monotonicity constraints on the probability of choosing alternative \(c\) with respect to feature \(m\). This is determined using pseudo sample pairs that estimate the derivatives of the probabilities, represented by the following formula:
\[\mathcal{L}_{c,m}=\sum_{k=1}^{K}\max\left(0,d_{c,m}\cdot\frac{p_{c}(x_{k,2}^{* })-p_{c}(x_{k,1}^{*})}{\Delta x_{m}^{*}}\right) \tag{2}\]
Where \(d_{c,m}\) equals \(-1\) if the probability of choosing alternative \(c\) with respect to feature \(m\) should be increasing and \(1\) otherwise, so that the penalty is positive only when the estimated derivative violates the constraint.
If it is assumed that when the probability of choosing alternative \(c\) with respect to feature \(m\) is in one direction, the probability of choosing other alternatives should be in the opposite direction, the total loss to be minimized can be expressed as follows:
\[\min\mathcal{L}_{total}=\ \mathcal{L}_{NLL}+\sum_{m\in M}\sum_{c\in C}w_{c,m} \cdot\ \mathcal{L}_{c,m} \tag{3}\]
Where \(M\) represents the set of features over which the probabilities are constrained, and \(w_{c,m}\) represents the weight of each constraint violation penalty.
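A hedged PyTorch sketch of the combined loss in Eq. (3) could look as follows; the sign convention for `d` follows the definition of \(d_{c,m}\) above, and the finite-difference step `dx` corresponds to \(\Delta x_{m}^{*}\):

```python
import torch

def total_loss(model, x, y, pseudo_pairs, d, w, dx):
    """Eq. (3): NLL on observed choices plus monotonicity penalties (Eq. (2)).

    pseudo_pairs : list of (x1, x2) tensor pairs, one entry per constrained feature
    d            : list of +/-1 tensors (one sign per alternative); a penalty
                   accrues only when d * derivative is positive, i.e. a violation
    w            : list of penalty weights w_{c,m}
    dx           : finite-difference step used to build the pseudo pairs
    """
    p = model(x)                                          # choice probabilities
    nll = -torch.log(p.gather(1, y.unsqueeze(1))).sum()   # Eq. (1)
    penalty = torch.zeros(())
    for (x1, x2), d_m, w_m in zip(pseudo_pairs, d, w):
        deriv = (model(x2) - model(x1)) / dx              # numerical dP/dx_m
        penalty = penalty + (w_m * torch.clamp(d_m * deriv, min=0)).sum()
    return nll + penalty
```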
### Model Training
The training process is illustrated in Figure 2. Observed data, represented as vector **x**, and a vector of pseudo sample pairs \(\textbf{x}^{*}=\left\{\left(x^{*}_{1,1},x^{*}_{1,2}\right),...,\left(x^{*}_{K,1},x^{*}_{K,2}\right)\right\}\) are fed into the model. The total loss, calculated as a combination of the prediction loss from the observed samples **x** and the domain knowledge loss from the pseudo data \(\textbf{x}^{*}\), is minimized using the backpropagation technique. This process continues iteratively until convergence is achieved.
## 3 Case Study
The methodology outlined above was applied to a mode choice dataset to assess the potential of incorporating domain knowledge in a DNN model and examine the impact of such knowledge on the resulting economic information.
### Dataset
The experiment relies on the openly available Swissmetro dataset (Bierlaire et al., 2001). The Swissmetro dataset is a stated preference survey that was collected in Switzerland in 1998. Participants were asked to provide information regarding their preferred mode of transportation between the new Swissmetro (SM) mode, car, and train. Each individual reported the chosen transport mode for various trips among the three alternative modes. The variables used from the dataset are described in **Table 1**. Observations with unavailable alternatives, unknown features or outlier values were filtered out, resulting in 7,778 samples. The dataset was then divided into training, validation, and testing sets in the ratio of 60:20:20.
Figure 2: Model training process
### Experimental Design
The proposed methodology was implemented on two model architectures: DNN and ASU-DNN. The DNN model was an off-the-shelf model, while the ASU-DNN model was proposed by Wang et al. (2020) and calculates alternative-specific utilities. Both models were estimated in both an unconstrained and a constrained (i.e., with domain knowledge) version. The constrained models are referred to as C-DNN and C-ASU-DNN, respectively. In addition, a Multinomial Logit (MNL) model was also estimated for comparison.
The domain knowledge incorporated in the constrained models includes negative own-sensitivities of choice probability to travel time and cost and positive cross-sensitivities. All constraints are incorporated simultaneously, and are mathematically formulated as follows:
\[\frac{\partial P_{j}}{\partial travel\ time_{j}}\leq 0,\forall j \tag{4}\]
\[\frac{\partial P_{j}}{\partial cost_{j}}\leq 0,\forall j \tag{5}\]
\[\frac{\partial P_{i}}{\partial travel\ time_{j}}\geq 0,\forall i\neq j \tag{6}\]
\begin{table}
\begin{tabular}{l l} \hline \hline
**Variable** & **Descriptions** \\ \hline Train travel time & Train travel time [minutes] \\ Train cost & Train cost [CHF] \\ Train headway & Train headway [minutes] \\ SM travel time & SM travel time [minutes] \\ SM cost & SM cost [CHF] \\ SM headway & SM headway [minutes] \\ Car travel time & Car travel time [minutes] \\ Car cost & Car cost [CHF] \\ Seats & Seats configuration in the Swissmetro (dummy). Airline seats (1) or not (0). \\ Group & Different groups in the population. 2: current rail users, 3: current road users \\ Purpose & Travel purpose. 1: Commuter, 2: Shopping, 3: Business, 4: Leisure, 5: Return from work, 6: Return from shopping, 7: Return from business, 8: Return from leisure \\ First & First class traveler (0 = no, 1 = yes) \\ Luggage & 0: none, 1: one piece, 3: several pieces \\ Age & Age class of individuals. 1: age\(\leq\)24, 2: 24\(<\)age\(\leq\)39, 3: 39\(<\)age\(\leq\)54, 4: 54\(<\)age\(\leq\)65, 5: 65\(<\)age \\ Male & Traveler’s gender. 0: female, 1: male \\ Income & Traveler’s income per year [thousand CHF]. 0 or 1: under 50, 2: between 50 and 100, 3: over 100 \\ TRAIN AV & Train availability dummy \\ CAR AV & Car availability dummy \\ SM AV & SM availability dummy \\ \hline \hline \end{tabular}
\end{table}
Table 1: Swissmetro dataset variables descriptions
\[\frac{\partial P_{i}}{\partial cost_{j}}\geq 0,\forall i\neq j \tag{7}\]
Where \(P_{j}\), \(travel\)\(time_{j}\), and \(cost_{j}\) are the choice probability, travel time and cost of alternative \(j\). Therefore, in total 18 constraints are incorporated into the model.
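For illustration, the expected derivative signs of Eqs. (4)-(7) can be collected in a 3×6 sign matrix; the feature ordering below is an assumption made for this sketch:

```python
import numpy as np

# Expected sign of dP_mode / d feature: own time/cost effects negative (Eqs. 4-5),
# cross effects positive (Eqs. 6-7). 3 modes x 6 features = 18 constraints.
features = ['train_tt', 'train_cost', 'sm_tt', 'sm_cost', 'car_tt', 'car_cost']
modes = ['train', 'sm', 'car']
sign = np.array([[-1 if f.startswith(mode) else +1 for f in features]
                 for mode in modes])
print(sign)   # rows: train, SM, car; columns: the six time/cost features
```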
To evaluate the performance of each model, negative log-likelihood and prediction accuracy were measured on each of the datasets. In addition, predicted market shares were calculated. To further illustrate the fulfillment of domain knowledge, choice probabilities were presented. Finally, to analyze the effect of domain knowledge on the extracted economic information, values of time (VOT) were calculated from all models.
## 4 Results
### Prediction performance
Table 2 presents the negative log-likelihood (NLL) and accuracy (Acc.) of each estimated model on the training, validation, and testing sets. The results demonstrate that the DNN model achieves the best NLL and accuracy across all sets, indicating its strong capability for empirical fitting to the data. However, its average NLL and accuracy degrade significantly in testing, indicating its vulnerability to overfitting.
The C-DNN model shows a significant drop in fit compared to the DNN in training, but only a slight one in testing. Moreover, its performance is similar on the training and testing sets, which indicates that incorporating domain knowledge may improve the model's generalizability. The ASU-DNN has a more restricted structure than the DNN, leading to lower performance. The C-ASU-DNN is the least flexible model among the four DNNs, and therefore performs worst in testing. The MNL performance is the weakest in both the training and testing sets because of its simple linear specification. It is worth noting, however, that the maximum drop in performance between the constrained and unconstrained versions of the DNN and ASU-DNN models in testing is limited to 0.02 points in average NLL and 1.6% in accuracy.
### Market shares
While prediction accuracy relates to predicting choices at the level of individuals, transportation policy planners are mainly interested in prediction at the market level. Table 3 shows the market shares predicted by the different models and the root mean square error (RMSE) of each model. In the training set, the unconstrained models DNN and ASU-DNN outperform their constrained counterparts due to their high flexibility in empirically fitting the data. However, in the testing set, the constrained models perform better than the unconstrained ones, highlighting the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{Training} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Testing} \\ \hline \multirow{2}{*}{Model} & Avg. & Acc & Avg. & Acc & Avg. & Acc \\ & NLL & [\%] & NLL & [\%] & NLL & [\%] \\ \hline DNN & 0.51 & 78.3 & 0.58 & 75.5 & 0.68 & 70.1 \\ C-DNN & 0.70 & 69.1 & 0.66 & 71.9 & 0.70 & 69.1 \\ ASU-DNN & 0.61 & 74.0 & 0.67 & 69.0 & 0.72 & 69.4 \\ C-ASU-DNN & 0.62 & 74.2 & 0.67 & 68.8 & 0.73 & 67.8 \\ MNL & 0.73 & 67.6 & 0.70 & 71.0 & 0.77 & 66.1 \\ \hline \end{tabular}
\end{table}
Table 2: Average negative log-likelihood (NLL) and prediction accuracy
significance of domain knowledge in improving generalizability on unseen data. Furthermore, although MNL yields exact market shares in estimation, it demonstrates the poorest performance when tested on unseen data. These findings emphasize the trade-off between model flexibility and generalizability and highlight the importance of incorporating domain knowledge for accurate market-level predictions.
### Choice probabilities
To assess whether the models fulfill the expected domain knowledge, choice probability functions were calculated for each of the six variables for the three alternatives. Each variable's value was systematically varied across all observations by a percentage ranging from -50% to +50%, while the remaining variables were kept unchanged. These plots illustrate how the model's behavior responds to changes in variable values (e.g., for evaluating new policies) at an aggregated level.
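A minimal sketch of this sweep procedure is given below, with `predict_proba` again a hypothetical callable returning the choice probabilities:

```python
import numpy as np

def probability_sweep(predict_proba, X, feature, rel=np.linspace(-0.5, 0.5, 21)):
    # Scale one variable by a relative factor across all observations and
    # record the aggregated (mean) choice probabilities at each step.
    curves = []
    for r in rel:
        Xr = X.copy()
        Xr[:, feature] = X[:, feature] * (1.0 + r)
        curves.append(predict_proba(Xr).mean(axis=0))
    return np.stack(curves)   # shape: (len(rel), n_alternatives)
```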
The estimated coefficients in the MNL have the expected signs (i.e., negative coefficients of travel time and cost in all utility functions); therefore, the directions of the choice probabilities are consistent with domain knowledge, as can be seen in Figures 3 and 4.
However, choice probabilities may not always be consistent with domain knowledge when derived from the unconstrained models, even in ASU-DNN, where each utility is calculated independently of the others, following RUM. This inconsistency can be curbed when domain knowledge is incorporated into the models.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Training set**} \\ \hline & DNN & C-DNN & ASU-DNN & C-ASU-DNN & MNL & Observed \\ \hline Train & 6.4\% & 6.6\% & 6.8\% & 5.8\% & 6.8\% & 6.8\% \\ SM & 56.4\% & 57.8\% & 55.9\% & 56.8\% & 56.0\% & 56.0\% \\ Car & 37.2\% & 35.6\% & 37.4\% & 37.3\% & 37.3\% & 37.3\% \\ \hline RMSE & 0.3\% & 1.4\% & 0.1\% & 0.7\% & 0\% & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Market shares of travel modes
For example, the expected behavior of the estimated models is that train choice probabilities decrease as its travel time increases, while SM and car choice probabilities increase. However, the DNN model presented in Figure 3(a) shows an unusual pattern: SM choice probability increases as train travel time decreases by up to 20%, but also decreases when train travel time is increased by up to 25%. Moreover, as train travel times increase by more than 20%, car choice probabilities decrease, making SM more appealing to travelers. Figure 3(c) shows the predicted choice probabilities provided by the ASU-DNN model. The model behaves consistently until train travel times increase by 30%, where car choice probabilities slightly decrease, while train choice probabilities slightly increase. These inconsistencies are addressed in the constrained models C-DNN and C-ASU-DNN in Figure 3(b) and Figure 3(d), respectively.
Similar behaviors are expected as a function of changes in train cost. However, the DNN model presented in Figure 4(a) shows that car choice probability increases as train travel costs decrease by up to 30%, and decreases when train costs are increased by up to 10%. These results suggest that, while flexible, the DNN model performs poorly in unseen scenarios beyond the fitting region, even at an aggregated level. Conversely, these inconsistencies are avoided in C-DNN (Figure 4(b)). ASU-DNN presents slight inconsistencies when train costs increase by more than 40%, where train and car choice probabilities flip directions (Figure 4(c)). This inconsistency is mitigated in the C-ASU-DNN model, as shown in Figure 4(d).
Figure 3: Alternatives’ choice probabilities as a function of a percentage change in train travel time
### Value of time
Value of time (VOT) is an important piece of economic information obtained from DCMs; it represents travelers' willingness to pay in order to save time and is used to evaluate the benefits of transport projects. The VOT for each alternative mode was calculated for each of the five models. In MNL, VOT is a single value obtained by taking the ratio of the travel time and cost coefficients for each alternative. Table 4 presents the descriptive statistics of the calculated VOTs, including the mean, median, and percentage of negative VOTs.
In the analysis of the calculated VOTs, it was found that the unconstrained models yield mean VOT values that appear reasonable for each alternative, but they obscure more complex results. Specifically, significant percentages of VOTs in these models have negative values, which contradicts common domain knowledge. On the other hand, the constrained models, C-DNN and C-ASU-DNN, which incorporate domain knowledge in the modeling process, offer more reliable results. In these models, VOT is calculated as the ratio of the probability derivatives with respect to travel time and cost, whose signs are constrained, resulting in a significant decrease in the percentage of negative VOTs for all alternatives.
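As an illustrative sketch, heterogeneous VOTs can be approximated with central finite differences of the predicted probabilities; the step size and the minutes-to-hours unit conversion below are assumptions:

```python
def value_of_time(predict_proba, X, c, tt_idx, cost_idx, eps=1e-3):
    """Per-observation VOT for alternative c: (dP_c/d time) / (dP_c/d cost)."""
    def dP(idx):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, idx] += eps
        Xm[:, idx] -= eps
        # Central finite difference of the choice probability of alternative c.
        return (predict_proba(Xp)[:, c] - predict_proba(Xm)[:, c]) / (2 * eps)
    return dP(tt_idx) / dP(cost_idx) * 60.0   # time in minutes, cost in CHF -> CHF/h
```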
Examining the VOTs in Table 4, it is evident that the DNN model tends to underestimate the mean VOT for all modes compared to the other models, potentially due to the substantial number of negative VOTs. In contrast, MNL tends to overestimate the mean VOT, which could result from biased parameter estimates due to the misspecification of linear utilities.
Figure 4: Alternatives’ choice probabilities as a function of a percentage change in train cost
Figures 5-7 present the distribution of heterogeneous VOTs in the sample from all models for train, SM, and car, respectively. The VOTs derived from the DNN model seem to be normally distributed for all modes (Figure 5(a), Figure 6(a) and Figure 7(a)). In contrast, the C-DNN model manages to almost completely prevent negative VOTs; the derived VOTs for all modes are log-normally distributed.
The analysis of the VOT distributions for ASU-DNN and C-ASU-DNN also reveals differences between the two models for the different modes. In the case of train (Figure 5), both models provide close median VOTs, with C-ASU-DNN showing a larger mean due to fewer negative values. For SM, both models exhibit bimodal VOT distributions (Figure 6 (c-d)); the median values are close, but the mean from C-ASU-DNN is three times larger than that from ASU-DNN. This can be attributed to the larger number of negative values in ASU-DNN and the presence of outliers in C-ASU-DNN on the extreme positive side. For car (Figure 7), ASU-DNN yields a log-normal distribution of VOTs, while C-ASU-DNN provides a normal distribution. However, the extremely large values from C-ASU-DNN lead to an unrealistically large mean VOT, while extreme negative values from ASU-DNN result in a negative mean. Notably, ASU-DNN exhibits larger gaps between the mean and median VOTs for all modes due to the presence of extreme values. While domain knowledge helps to reduce the number of negative VOTs, it does not address the issue of extreme values, as shown by C-ASU-DNN.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Value of Time (Train) & Mean & Median & \\ & [CHF/h] & [CHF/h] & Negative \\ \hline DNN & 11.9 & 13.9 & 24.7\% \\ C-DNN & 26.2 & 24.9 & 0\% \\ ASU-DNN & 45.6 & 65.3 & 4.0\% \\ C-ASU-DNN & 54.1 & 63.1 & 1.5\% \\ MNL & 101.0 & 101.0 & 0\% \\ \hline \multirow{2}{*}{Value of Time (SM)} & Mean & Median & \\ & [CHF/h] & [CHF/h] & Negative \\ \hline DNN & 10.6 & 15.3 & 31.2\% \\ C-DNN & 61.7 & 50.9 & 0.1\% \\ ASU-DNN & 105.6 & 41.9 & 6.1\% \\ C-ASU-DNN & 355.0 & 47.1 & 1.0\% \\ MNL & 83.4 & 83.4 & 0\% \\ \hline \multirow{2}{*}{Value of Time (Car)} & Mean & Median & \\ & [CHF/h] & [CHF/h] & Negative \\ \hline DNN & 90.0 & 37.3 & 28.0\% \\ C-DNN & 83.6 & 79.4 & 0\% \\ ASU-DNN & -84.6 & 99.6 & 5.4\% \\ C-ASU-DNN & 2746.6 & 107.0 & 0.6\% \\ MNL & 200.2 & 200.2 & 0\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Value of time descriptive statistics: mean, median, and percentage of negative VOT
Figure 5: Heterogeneous values of time for train; the extremely large and small values are cut off from histogram
Figure 6: Heterogeneous values of time for SM; the extremely large and small values are cut off from histogram
## 5 Conclusions
This study addresses a major limitation in the application of DNN models for discrete choice analysis, which often leads to counterintuitive results due to the lack of domain knowledge. Analysts typically possess knowledge that should be captured by the model, such as the negative impact of travel time on an alternative's choice probability. However, as the case study demonstrates, such knowledge is not always captured by data-driven models that rely solely on the data.
To overcome this limitation, a framework was proposed to enhance the consistency of ML models with domain knowledge. This approach involves incorporating constraints into the model to ensure that specific relationships are fulfilled, while leaving others unrestricted for data-driven learning. Only the direction of the relationships (i.e., positive or negative) is constrained, preserving
Figure 7: Heterogeneous values of time for car; the extremely large and small values are cut off from histogram
flexibility in the model's form. The proposed framework is independent of the model structure, making it easy to implement on different architectures (e.g., DNN and ASU-DNN).
The proposed methodology was applied to Swissmetro dataset using fully connected DNN and ASU-DNN models. The tradeoff between accuracy and interpretability was demonstrated, as both models' prediction performance was negatively affected, but their interpretation became more reasonable and consistent with prior expectations. The case study showcases the potential of combining domain knowledge from utility theory with DNN models for more interpretable choice analysis, regardless of the architecture. It also highlights the range of differences in analyses (e.g., predicted values of time and choice probabilities) that can arise from different models. Further work is needed to include additional domain knowledge (e.g., magnitudes of elasticity and values of time) and test it on richer datasets to explore the implications of incorporating domain knowledge for choice analysis.
|
2307.14362 | Learnable wavelet neural networks for cosmological inference | Convolutional neural networks (CNNs) have been shown to both extract more
information than the traditional two-point statistics from cosmological fields,
and marginalise over astrophysical effects extremely well. However, CNNs
require large amounts of training data, which is potentially problematic in the
domain of expensive cosmological simulations, and it is difficult to interpret
the network. In this work we apply the learnable scattering transform, a kind
of convolutional neural network that uses trainable wavelets as filters, to the
problem of cosmological inference and marginalisation over astrophysical
effects. We present two models based on the scattering transform, one
constructed for performance, and one constructed for interpretability, and
perform a comparison with a CNN. We find that scattering architectures are able
to outperform a CNN, significantly in the case of small training data samples.
Additionally we present a lightweight scattering network that is highly
interpretable. | Christian Pedersen, Michael Eickenberg, Shirley Ho | 2023-07-24T22:12:16Z | http://arxiv.org/abs/2307.14362v1 | # Learnable wavelet neural networks for cosmological inference
###### Abstract
Convolutional neural networks (CNNs) have been shown to both extract more information than the traditional two-point statistics from cosmological fields, and marginalise over astrophysical effects extremely well. However, CNNs require large amounts of training data, which is potentially problematic in the domain of expensive cosmological simulations, and it is difficult to interpret the network. In this work we apply the learnable scattering transform, a kind of convolutional neural network that uses trainable wavelets as filters, to the problem of cosmological inference and marginalisation over astrophysical effects. We present two models based on the scattering transform, one constructed for performance, and one constructed for interpretability, and perform a comparison with a CNN. We find that scattering architectures are able to outperform a CNN, significantly in the case of small training data samples. Additionally we present a lightweight scattering network that is highly interpretable.
Machine Learning, ICML 2022 Workshop on Machine Learning for Astrophysics, Baltimore, Maryland, USA, 2022
## 1 Introduction
The process of extracting information from cosmological fields is a fundamental component of modern cosmology. The early Universe, observed as the Cosmic Microwave Background, is well described by a Gaussian random field, meaning the two-point function (or the power spectrum) contains all relevant information. However, the growth of non-linear structure means this is not the case for probes of the late-time Universe, and it has been shown that significant information lies outside the two-point statistics in this regime. With the advent of several upcoming large surveys of late-time structure growth, such as Euclid (Laureijs et al., 2011), DESI (DESI Collaboration, 2016), the Roman observatory (Spergel et al., 2015), and the Rubin observatory (LSST Science Collaboration, 2009), it is becoming increasingly important to consider methods of cosmological inference that go beyond the power spectrum, in order to maximise the scientific yield from these datasets.
The situation is further complicated by the fact that on small scales, astrophysical phenomena such as supernovae and AGN feedback affect the clustering of observational tracers. Recent work has demonstrated that neural networks are able to optimally marginalise over these effects (Villaescusa-Navarro et al., 2022), and that convolutional neural networks (CNNs) are able to extract significantly more information from cosmological fields than the power spectrum (Ravanbakhsh et al., 2016; Gupta et al., 2018; Villaescusa-Navarro et al., 2021; Lu et al., 2022). However, CNNs suffer from two potential pitfalls. First, the large numbers of parameters involved in deep convolutional networks require large amounts of training data to optimise. This is a significant consideration in the context of late-universe cosmology, where the hydrodynamical simulations required to model the non-linear baryon density field are extremely computationally expensive. Second, CNNs suffer from a lack of interpretability.
In this work we attempt to address these issues by presenting two models for cosmological inference based on the scattering transform (Mallat, 2011). Scattering transforms use a cascade of analytic wavelet transforms followed by complex modulus nonlinearities to construct descriptors of the input signal that are stable to small deformations and that can be made to exhibit known symmetries of the data. Scattering transforms and the related wavelet phase harmonics have been successfully applied to cosmological parameter estimation from different types of input fields (Cheng et al., 2020; Allys et al., 2020). Recent work has demonstrated that learning the filter parameters can provide performance gains in the regime of small datasets (Gauthier et al., 2022). Using the suite of CAMELs simulations (Villaescusa-Navarro et al., 2021), we investigate two kinds of scattering networks, one designed for performance and one designed for interpretability, and perform a comparison with a traditional CNN.
## 2 Method
### Simulations
We use the publicly available CAMELs simulations (Villaescusa-Navarro et al., 2021). Whilst we refer the reader to (Villaescusa-Navarro et al., 2021) for a full description, we briefly overview the most relevant aspects. We restrict our analysis to the subset of CAMELs simulations that use the IllustrisTNG simulation code (Weinberger et al., 2017; Pillepich et al., 2018). The dataset includes a suite of 1,000 simulations varying 6 parameters: \(\Omega_{m}\) and \(\sigma_{8}\) to describe cosmology, and \(A_{\rm SN1}\), \(A_{\rm SN2}\), \(A_{\rm AGN1}\), \(A_{\rm AGN2}\) that control the efficiency of the supernova and AGN feedback. Each simulation produces 13 different fields describing various properties such as gas temperature, total matter density, and magnetic fields. We use projected 2D slices of the 3D \(25\times 25\times 25\ h^{-3}\,{\rm Mpc^{3}}\) simulation boxes. Each slice is \(5\ h^{-1}\,{\rm Mpc}\) thick, so with 5 slices along each axis, this produces a total set of 15,000 maps available for each field, each with a resolution of \(256\times 256\) pixels. These maps are then divided into training, test and validation sets at a fractional split of \(0.9\), \(0.05\) and \(0.05\) respectively.
### Models
We investigate three models\({}^{1}\). As our baseline CNN, we use a model with 12 convolutional layers, each with batch normalisation and Leaky ReLU activations. Following (Villaescusa-Navarro et al., 2022), we train the moment neural networks presented in (Jeffrey and Wandelt, 2020) to infer the mean (\(\mu_{i}\)) and standard deviation (\(\sigma_{i}\)) of the posterior for the 6 parameters (\(\theta_{i}=\{\Omega_{m,i},\sigma_{8,i},A_{\rm SN1,i},A_{\rm SN2,i},A_{\rm AGN1,i},A_{\rm AGN2,i}\}\)) describing each simulation. We therefore define our loss function:
Footnote 1: Code available on github at github.com/Chris-Pedersen/LearnableWavelets
\[\mathcal{L}=\sum_{i=1}^{6}\log\left(\sum_{j\in\mathrm{batch}}\left(\theta_{i,j}-\mu_{i,j}\right)^{2}\right)+\sum_{i=1}^{6}\log\left(\sum_{j\in\mathrm{batch}}\left(\left(\theta_{i,j}-\mu_{i,j}\right)^{2}-\sigma_{i,j}^{2}\right)^{2}\right) \tag{1}\]
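To make the objective concrete, a minimal PyTorch sketch of Eq. (1) is given below; the tensor shapes are our assumption, and the authors' actual implementation is available at the repository linked in footnote 1.

```python
import torch

def moment_network_loss(theta, mu, sigma):
    """Moment-network loss of Eq. (1).

    theta: (batch, 6) true parameters; mu, sigma: (batch, 6) predicted
    posterior means and standard deviations for the 6 parameters.
    """
    sq_err = (theta - mu) ** 2
    term1 = torch.log(sq_err.sum(dim=0)).sum()                        # first sum over i
    term2 = torch.log(((sq_err - sigma ** 2) ** 2).sum(dim=0)).sum()  # second sum over i
    return term1 + term2
```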
We also consider two models based on the scattering transform. First, we introduce the _scattering network_ (SN), shown in figure 1. The first two layers are composed of convolutions with banks of 8 wavelet filters. We use Morlet wavelets, defined as
\[\psi_{\sigma,\phi,\xi,\gamma}(u)=e^{-\|D_{\gamma}R_{\phi}(u)\|^{2}/(2\sigma^{ 2})}(e^{i\xi u^{\prime}}-\beta), \tag{2}\]
where \(\beta\) is a normalisation constant to ensure that the wavelet integrates to 0 over the spatial domain, \(u^{\prime}=u_{1}\cos\phi+u_{2}\sin\phi\), \(R_{\phi}\) is the rotation matrix of angle \(\phi\), and \(D_{\gamma}=\begin{pmatrix}1&0\\0&\gamma\end{pmatrix}\). We therefore consider 4 parameters that modify the Morlet wavelet: the orientation is controlled by \(\phi\), the spatial frequency by \(\xi\), the Gaussian envelope is determined by \(\sigma\), and the aspect ratio is set by \(\gamma\). At initialisation, we randomly sample these parameters from \(\phi\sim U[0,2\pi]\), \(\xi\sim U[0.5,1]\), \(\sigma\sim\log(U[\exp 1,\exp 5])\), and \(\gamma\sim U[0.5,1.5]\). These filter parameters are modified by gradient descent as the network trains, and so are optimised to extract information from the fields they are operating on. Unlike in (Gauthier et al., 2022), we use different wavelet filters for first- and second-order scattering, allowing the network more flexibility to optimise these parameters.
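As a concrete illustration, the following NumPy sketch evaluates Eq. (2) on a pixel grid; the grid construction and normalisation are our reading of the definition, not the authors' code.

```python
import numpy as np

def morlet_2d(size, sigma, phi, xi, gamma):
    """Sample the Morlet wavelet of Eq. (2) on a size x size grid."""
    half = size // 2
    u1, u2 = np.meshgrid(np.arange(-half, half), np.arange(-half, half),
                         indexing="ij")
    # ||D_gamma R_phi(u)||^2: rotate by phi, then scale the second axis by gamma.
    r1 = u1 * np.cos(phi) + u2 * np.sin(phi)
    r2 = -u1 * np.sin(phi) + u2 * np.cos(phi)
    envelope = np.exp(-(r1 ** 2 + (gamma * r2) ** 2) / (2 * sigma ** 2))
    u_prime = u1 * np.cos(phi) + u2 * np.sin(phi)  # u' = u1 cos(phi) + u2 sin(phi)
    wave = np.exp(1j * xi * u_prime)
    # beta makes the wavelet integrate to zero over the spatial domain.
    beta = (envelope * wave).sum() / envelope.sum()
    return envelope * (wave - beta)
```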
Figure 1: Architecture for the _scattering network_ (SN). The input fields are convolved with a set of 8 first-order wavelet filters. These convolved fields are then convolved with a further set of 8 second-order filters. During each wavelet convolution the field is downsampled by a factor of 2. We include residual connections, so the zeroth- and first-order fields are smoothed and downsampled to match the resolution of the second-order fields. These are then concatenated and passed to a 3-layer CNN, which provides the model output. The 4 parameters describing each wavelet filter are included in the backprop during training.
At each convolution, the input field is smoothed and downsampled by a factor of 2, so the second-order convolved fields have a resolution of \(64\times 64\). We include residual connections, so the zeroth- and first-order fields are smoothed and downsampled to match this resolution and concatenated with the second-order fields. This produces a total of \(1+8+64=73\) fields output by the scattering layers. These fields are then passed to a shallow CNN with 3 convolutional layers, each with batch normalisation and ReLU activations, followed by one fully connected layer that generates the output. We train this network on the same set of 11 fields as the "MultiField" (MF) results from (Villaescusa-Navarro et al., 2021), which includes all simulated fields except the dark matter mass and velocity fields.
Second, we introduce an extremely lightweight model, the "Interpretable network" (IN), shown in figure 2. The first two layers of this network are identical to the SN, except now the 73 output fields of the scattering layers are spatially averaged to form a single vector of 73 numbers. We then use a single linear layer on this vector to obtain parameter values. For the sake of simplicity in interpreting this network, we train this model on only the cold dark matter mass field (\(M_{\rm cdm}\)) to produce the results in section 3.2.
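The IN readout itself is tiny; a sketch of the spatial-average-plus-linear head might look as follows (the field count and output dimension follow the description above, but the module is our illustration, not the released code).

```python
import torch
import torch.nn as nn

class InterpretableHead(nn.Module):
    """Spatially average the 73 scattering fields, then apply one linear layer."""
    def __init__(self, n_fields=73, n_outputs=6):
        super().__init__()
        self.linear = nn.Linear(n_fields, n_outputs)

    def forward(self, fields):               # fields: (batch, 73, H, W)
        pooled = fields.mean(dim=(-2, -1))   # spatial average -> (batch, 73)
        return self.linear(pooled)
```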
For all models, we use mini-batch gradient descent with the Adam optimiser and a batch size of \(32\). Hyperparameters are optimised using the optuna package with \(30\) trials. We independently vary the learning rates for the scattering and neural layers, and in the case of the SN and CNN, we vary the number of convolutional filters.
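A search of this kind takes only a few lines with optuna; the ranges and the `train_model` stand-in below are hypothetical placeholders rather than the authors' settings.

```python
import optuna

def train_model(lr_scattering, lr_neural, n_filters):
    # Hypothetical stand-in for the real training loop; returns a validation loss.
    return (lr_scattering - 1e-2) ** 2 + (lr_neural - 1e-3) ** 2 + 1.0 / n_filters

def objective(trial):
    lr_scattering = trial.suggest_float("lr_scattering", 1e-4, 1e-1, log=True)
    lr_neural = trial.suggest_float("lr_neural", 1e-5, 1e-2, log=True)
    n_filters = trial.suggest_categorical("n_filters", [16, 32, 64])
    return train_model(lr_scattering, lr_neural, n_filters)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
```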
## 3 Results
### Scattering network
We show a comparison of model performance between the baseline CNN and SN in table 1. We focus on the accuracy of the predicted values for \(\Omega_{m}\) and \(\sigma_{8}\), when averaging over the test set. We also show the validation loss as evaluated by equation 1. The two models are compared for 3 different dataset sizes, \(10,000\), \(5,000\) and \(1,000\), in order to evaluate how performance scales with the size of the training set. These numbers include the test and validation sets. We find significantly better performance from the SN when using smaller datasets. This is consistent with a similar comparison performed in (Gauthier et al., 2022) on the CIFAR-10 dataset. The SN model has approximately an order of magnitude fewer parameters than the CNN, with \(\sim 2\times 10^{6}\) versus \(2\times 10^{7}\), and is therefore able to optimise effectively on smaller datasets. In addition, our results indicate that even as we scale to larger datasets, the SN mildly outperforms the CNN.
### Interpretable network
At the bottom of table 1 we compare the performance of the SN and IN when using a set of \(1,000\) maps of the \(M_{\rm cdm}\) field. First, we find that the SN is able to predict \(\Omega_{m}\) and
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & \(\Delta\Omega_{m}(\%)\) & \(\Delta\sigma_{8}\) (\%) & Valid. Loss \\ \hline CNN (10k) & 6.31 & 4.02 & -11.90 \\ SN (10k) & 4.24 & 3.42 & -12.18 \\ \hline CNN (5k) & 6.61 & 5.15 & -11.18 \\ SN (5k) & 5.29 & 4.15 & -11.58 \\ \hline CNN (1k) & 10.07 & 8.99 & -9.82 \\ SN (1k) & 6.32 & 4.33 & -10.62 \\ \hline SN (1k \(M_{\rm cdm}\)) & 2.48 & 1.84 & -3.83 \\ IN (1k \(M_{\rm cdm}\)) & 11.50 & 2.79 & -3.33 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of model performance for datasets of sizes 10,000, 5,000 and 1000 simulated maps. We show results for the empirical accuracy on \(\Omega_{m}\) and \(\sigma_{8}\) averaging over the test set, and the validation loss. For the final two rows, we use only the \(M_{\rm cdm}\) fields. For simplicity, for these last examples we only model the mean of the posterior, i.e. the loss function is only the first term in equation 1, which is the reason for the large difference in stated values.
Figure 2: Model architecture for the _interpretable network_ (IN). The first two layers are identical to fig 1, except now the output fields are spatially averaged, providing a single vector of 73 values. A single linear layer is used on this vector to predict the model output.
\(\sigma_{8}\) to a remarkable degree of accuracy when using only \(1,000\) maps. This is potentially due to the fact that when operating on only one type of input field, the wavelet parameters are able to better extract information from this field. Whilst the difference in performance on \(\Omega_{m}\) is very large, the IN is able to predict \(\sigma_{8}\) to \(2.79\%\) accuracy despite having only \(654\) parameters.
Finally, after training the filters for the IN, we use \(L_{1}\)-penalised linear regression to isolate the most significant filters in the model. We find that using only 10 of the 73 elements of the final linear layer, we are able to retain \(98\%\) of the model performance of the IN. These 10 fields correspond to the zeroth order field, 5 first-order wavelets, and 4 second-order wavelets. We show these 9 wavelets before and after training in figure 3. Interestingly, all 4 of the second-order fields are downstream from the same first-order wavelet, which is highlighted in red. It is possible that the network has chosen to optimise this filter to operate in conjunction with the second order fields.
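This selection step can be reproduced with scikit-learn; the sketch below assumes the 73 spatially averaged coefficients have been collected into an array, and the sparsity strength `alpha` is an illustrative value rather than the one used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

# X: (n_maps, 73) averaged scattering coefficients from the trained IN;
# y: (n_maps,) target parameter values. Random placeholders here.
X = np.random.randn(900, 73)
y = np.random.randn(900)

lasso = Lasso(alpha=0.05)   # illustrative sparsity strength
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # indices of retained coefficients
print(f"{selected.size} of 73 coefficients retained:", selected)
```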
In figure 4, we show the evolution of the parameters describing these 9 wavelet filters during training of the IN. Whilst the orientations of the wavelets do not change significantly, the frequency and aspect ratios are clearly strongly driven by the input fields to different values than they were initialised with.
## 4 Conclusion
We performed a comparison between a learnable scattering transform and a CNN for the purposes of inferring cosmological parameters and marginalising over astrophysical effects. We find that a scattering network is able to outperform a CNN, with the performance difference increasing significantly for smaller training set sizes. We also present a lightweight interpretable scattering network that is able to find a sparse wavelet compression of the input fields.
Figure 4: Evolution of the wavelet parameters describing each filter shown in figure 3 during training. First order wavelets are shown in blue and second order wavelets are shown in orange. The first order filter used to propagate fields to the second order filters is shown in blue dashed lines.
Figure 3: Wavelet filters before (top) and after (bottom) training. We show only the 9 most relevant fields after applying \(L_{1}\)-penalised linear regression to the IN described in figure 2. We find that these 9 filters plus the zeroth order filters are able to retain \(98\%\) of the model performance. All second order fields selected by the network were the product of a single first order wavelet, highlighted in red. For visualisation purposes we have magnified the spatial size of the wavelets by a factor of 4 in this figure. |
2310.19289 | AMLNet: Adversarial Mutual Learning Neural Network for
Non-AutoRegressive Multi-Horizon Time Series Forecasting | Multi-horizon time series forecasting, crucial across diverse domains,
demands high accuracy and speed. While AutoRegressive (AR) models excel in
short-term predictions, they suffer speed and error issues as the horizon
extends. Non-AutoRegressive (NAR) models suit long-term predictions but
struggle with interdependence, yielding unrealistic results. We introduce
AMLNet, an innovative NAR model that achieves realistic forecasts through an
online Knowledge Distillation (KD) approach. AMLNet harnesses the strengths of
both AR and NAR models by training a deep AR decoder and a deep NAR decoder in
a collaborative manner, serving as ensemble teachers that impart knowledge to a
shallower NAR decoder. This knowledge transfer is facilitated through two key
mechanisms: 1) outcome-driven KD, which dynamically weights the contribution of
KD losses from the teacher models, enabling the shallow NAR decoder to
incorporate the ensemble's diversity; and 2) hint-driven KD, which employs
adversarial training to extract valuable insights from the model's hidden
states for distillation. Extensive experimentation showcases AMLNet's
superiority over conventional AR and NAR models, thereby presenting a promising
avenue for multi-horizon time series forecasting that enhances accuracy and
expedites computation. | Yang Lin | 2023-10-30T06:10:00Z | http://arxiv.org/abs/2310.19289v1 | AMLNet: Adversarial Mutual Learning Neural Network for Non-AutoRegressive Multi-Horizon Time Series Forecasting
###### Abstract
Multi-horizon time series forecasting, crucial across diverse domains, demands high accuracy and speed. While AutoRegressive (AR) models excel in short-term predictions, they suffer speed and error issues as the horizon extends. Non-AutoRegressive (NAR) models suit long-term predictions but struggle with interdependence, yielding unrealistic results. We introduce AMLNet, an innovative NAR model that achieves realistic forecasts through an online Knowledge Distillation (KD) approach. AMLNet harnesses the strengths of both AR and NAR models by training a deep AR decoder and a deep NAR decoder in a collaborative manner, serving as ensemble teachers that impart knowledge to a shallower NAR decoder. This knowledge transfer is facilitated through two key mechanisms: 1) outcome-driven KD, which dynamically weights the contribution of KD losses from the teacher models, enabling the shallow NAR decoder to incorporate the ensemble's diversity; and 2) hint-driven KD, which employs adversarial training to extract valuable insights from the model's hidden states for distillation. Extensive experimentation showcases AMLNet's superiority over conventional AR and NAR models, thereby presenting a promising avenue for multi-horizon time series forecasting that enhances accuracy and expedites computation.
time series forecasting, deep learning, Transformer, knowledge distillation
## I Introduction
Time-series forecasting is integral to various practical applications, from electricity grid control [1] and economic trend prediction [2] to traffic control [3]. Such scenarios often require multi-step ahead forecasts for informed decision-making; for instance, accurately predicting hourly electricity consumption for upcoming days or weeks aids efficient resource allocation. Classical statistical methods like AutoRegressive Integrated Moving Average (ARIMA) and exponential smoothing [4] excel in single time series forecasting, yet fall short when handling related time series collectively, as they treat each series independently [5, 6, 7]. Emerging as a promising alternative, deep learning techniques have gained traction for large-scale related time series forecasting [8, 7, 9, 3].
These deep learning methods can be categorized into AutoRegressive (AR) and Non-AutoRegressive (NAR) models. AR models, including DeepAR [8], TCNN [10, 11] and LogSparse Transformer [7], predict one step ahead, using prior forecasts as input for subsequent predictions. While effective at capturing interdependencies in the output space [12, 13], AR models face issues such as training-inference discrepancies [14, 15], error accumulation [16], and high inference latency [17, 18]. Conversely, NAR models (e.g., MQ-RNN [15], N-BEATS [6], AST [5], and Informer [19]) overcome AR modeling problems by generating parallel predictions, proving superior in long horizon forecasting. However, NAR models may yield unrealistic, disjointed series due to a lack of interdependence consideration, leading to unrelated forecasts [16, 5]. Our work addresses this by employing Knowledge Distillation (KD) to incorporate both model outcomes and hidden states, yielding more coherent and accurate forecasts.
To tackle NAR model limitations, we introduce the Adversarial Mutual Learning Neural Network (AMLNet), an NAR model utilizing online KD methods. AMLNet comprises an encoder, a deep AR decoder, a deep NAR decoder, and a shallow NAR decoder (Fig. 1). During training, the encoder extracts patterns for all decoders; the deep AR and NAR decoders are trained in a mutual fashion and then serve as ensemble teachers that transfer knowledge to the shallow NAR decoder, enhancing error handling and output interdependence. At test time, only the encoder and the shallow NAR decoder are used for forecast generation. AMLNet's knowledge transfer employs two techniques: outcome-driven KD, which dynamically weights the distillation loss based on network performance to prevent error circulation; and hint-driven KD, which distills knowledge from hidden states via adversarial training, as these states contain valuable information for enhanced transfer.
Our contributions encompass: 1) Introduction of AMLNet, pioneering online KD for time series forecasting. It trains deep AR and NAR decoders mutually as ensemble teachers, transferring knowledge to a shallow NAR decoder, resulting in contiguous forecasts and superior performance with fast inference speed, as demonstrated across four time series datasets. 2) Proposal of outcome-driven and hint-driven online KD, simultaneously learning from teacher network predictions and inner features. Our experiments, compared to state-of-the-art KD methods, affirm the efficacy of both proposed techniques.
## II Problem Formulation
### _Data Sets_
We conducted experiments on four publicly available real-world datasets: Sanyo [20], Hanergy [21], Solar [22], and Electricity [23]. The **Sanyo** and **Hanergy** datasets consist of
solar power generation data from two PV plants in Australia. The data for Hanergy spans from 01/01/2011 to 31/12/2016 (6 years), while the data for Sanyo covers the period from 01/01/2011 to 31/12/2017 (7 years). The **Solar** dataset comprises solar power data from 137 PV plants in Alabama, USA, gathered between 01/01/2006 and 31/08/2006. The **Electricity** dataset contains electricity consumption data from 370 households, recorded from 01/01/2011 to 07/09/2014.
A summary of the data statistics is provided in Table I. For the Sanyo and Hanergy datasets, we considered data between 7 am and 5 pm and aggregated it at half-hourly intervals. Additionally, weather and weather forecast data were collected and used as covariates in the experiments (refer to [24] for further details). For the Solar and Electricity datasets, the data was aggregated into 1-hour intervals. Following the approach in [7, 24], calendar features were incorporated based on the granularity of each dataset. Specifically, Sanyo and Hanergy datasets used features such as _month, hour-of-the-day, and minute-of-the-hour_, Solar dataset used _month, hour-of-the-day, and age_, and Electricity dataset used _month, day-of-the-week, hour-of-the-day, and age_. For consistent preprocessing, all data was normalized to have zero mean and unit variance [24].
### _Problem Statement_
We are given: 1) a set of \(N\) univariate time series (solar or electricity series) \(\{\mathbf{Y}_{i,1:T_{l}}\}_{i=1}^{N}\), where \(\mathbf{Y}_{i,1:T_{l}}\coloneqq[y_{i,1},y_{i,2},...,y_{i,T_{l}}]\), \(T_{l}\) is the input sequence length, and \(y_{i,t}\in\Re\) is the value of the \(i\)th time series (generated PV solar power or consumed electricity) at time \(t\); 2) a set of associated time-based multi-dimensional covariate vectors \(\{\mathbf{X}_{i,1:T_{l}+T_{h}}\}_{i=1}^{N}\), where \(T_{h}\) denotes the length of the forecasting horizon. Our goal is to predict the future values of the time series \(\{\mathbf{Y}_{i,T_{l}+1:T_{l}+T_{h}}\}_{i=1}^{N}\), i.e. the PV power or electricity usage for the next \(T_{h}\) time steps after \(T_{l}\).
The covariates for the Sanyo and Hanergy datasets include: weather data \(\{\mathbf{W}\mathbf{I}_{i,1:T_{l}}\}_{i=1}^{N}\), weather forecasts \(\{\mathbf{W}\mathbf{F}_{i,T_{l}+1:T_{l}+T_{h}}\}_{i=1}^{N}\) and calendar features \(\{\mathbf{Z}_{i,1:T_{l}+T_{h}}\}_{i=1}^{N}\), while the covariates for Solar and Electricity datasets include only calendar features.
Specifically, AMLNet produces the probability distribution of the future values, given the past history:
\[p\left(\mathbf{Y}_{i,T_{l}+1:T_{l}+T_{h}}\mid\mathbf{Y}_{i,1:T_{l}},\mathbf{X}_{i,1:T_{l}+T_{h}};\theta\right)=\prod_{t=T_{l}+1}^{T_{l}+T_{h}}p\left(y_{i,t}\mid\mathbf{Y}_{i,1:T_{l}},\mathbf{X}_{i,1:T_{l}+T_{h}};\theta\right), \tag{1}\]
where the input of the model at step \(t\) is the concatenation of \(y_{i,t-1}\) and \(x_{i,t}\).
## III Related Work
### _Non-AutoRegressive Sequence Modelling_
NAR forecasting models [15, 6, 5, 19] directly eliminate the AR connection on the decoder side, instead modeling a separate conditional distribution for each prediction independently. Unlike AR models, NAR models enable parallelized training and inference. However, NAR models can produce disjointed forecasts and introduce discontinuities [16] due to the erroneous assumption of independence, limiting their ability to capture interdependencies among predictions. AST [5] stands as the sole approach addressing this within the NAR framework, utilizing adversarial training to enhance the global perspective. Recent NAR models focused on reducing output space interdependence have primarily emerged in the realm of Natural Language Processing (NLP) tasks [12, 25]. Various strategies have emerged, with KD [26, 27] garnering substantial attention. KD effectively transfers knowledge from a larger teacher to a smaller student network by offering softer, more informative target distributions. KD methods for NAR either distill the prediction distribution of a pre-trained AR teacher model [12] or incorporate hidden state patterns of the AR model [28].
### _Online Knowledge Distillation_
Classic KD methods are offline and can incur computational and memory overhead due to the reliance on powerful pre-trained teachers. In response, online KD techniques [29, 30, 31, 32] have emerged, showing superior results. These methods treat all networks as peers, enabling mutual exchange of output information and requiring less training time and memory. DML [29] introduced collaborative training, where each model can be both a student and a teacher. Further advancements, such as Wu and Gong's work [32], assemble teachers into online KD, enhancing generalization. Notably, online KD techniques can capture intermediate feature distributions through adversarial training, as seen in AFD [30] and AMLN [31].
### _Generative Adversarial Networks_
Generative Adversarial Networks (GANs) [33], comprising a generator \(G\) and a discriminator \(D\) engaged in adversarial training, were initially proposed for sample generation. However, the adversarial training paradigm has found applications in diverse domains, including computer vision, NLP [30, 31], and time series forecasting [34, 5].
### _Summary_
In contrast to prior work, our AMLNet introduces several advancements: 1) We pioneer the application of online KD for forecasting, introducing AMLNet as the first model to employ online KD methods for training a NAR forecasting model to capture target sequence interdependencies. Specifically, AMLNet trains a deep AR and a deep NAR model mutually as ensemble teachers before transferring their knowledge to a shallow NAR student. 2) While Wu and Gong [32] construct ensemble teachers and adjust KD loss weights based on training epoch number, they overlook teacher model instability during training. AMLNet utilizes outcome-driven KD, assigning dynamic weights to KD losses based on teacher model performance, specifically tailored for probabilistic forecasting. 3) We address the issue of discontinuous predictions stemming from NAR model hidden states, proposing hint-driven KD to capture hidden state distribution information. Unlike previous
approaches [30, 31], designed for networks with differing layer counts, our method is tailored to AMLNet's architecture.
## IV Adversarial Mutual Learning Neural Network
The proposed architecture, Adversarial Mutual Learning Neural Network (AMLNet), addresses the challenges in NAR forecasting by modeling output space interdependence. This section presents the architecture of AMLNet, the proposed outcome-driven and hint-driven online KD methods and the optimisation and inference process of AMLNet.
### _Network Architecture_
The central components of AMLNet, as depicted in Figure 1, encompass a shared encoder \(f_{\theta_{e}}\) consisting of \(n_{e}\) layers, a deep AR Peer1 (P1) decoder \(f_{\theta_{P1}}\) with \(n_{P}\) hidden layers, a deep NAR Peer2 (P2) decoder \(f_{\theta_{P2}}\) also possessing \(n_{P}\) hidden layers, and a shallow NAR Student (S) decoder \(f_{\theta_{S}}\) equipped with \(n_{S}\) hidden layers. It's noteworthy that Informer [19] serves as the foundational framework for AMLNet, although other deep learning forecasting models can replace Informer.
To harness temporal patterns from historical data, the encoder extracts insights from past time steps by processing the concatenation of covariate \(x_{t}\) and ground truth \(y_{t}\) as input at time step \(t\):
\[h_{e;1:T_{l}}=f_{\theta_{e}}(y_{1:T_{l}},x_{1:T_{l}}) \tag{2}\]
The shared encoder's temporal patterns \(h_{e;1:T_{l}}\) are uniformly leveraged across all decoders, exploiting the fact that all decoders condition on the same past input sequence. This sharing significantly reduces the number of network parameters and the computational overhead.
The P1, P2 and S decoders are formulated:
\[y_{P1;T_{l}+1:T}=f_{\theta_{P1}}(y_{T_{l}:T-1},x_{T_{l}+1:T},h_{e;1:T_{l}}) \tag{3}\]
\[y_{P2;T_{l}+1:T}=f_{\theta_{P2}}(x_{T_{l}-T_{de}:T_{l}},h_{e;1:T_{l}}) \tag{4}\]
\[y_{S;T_{l}+1:T}=f_{\theta_{S}}(x_{T_{l}-T_{de}:T_{l}},h_{e;1:T_{l}}) \tag{5}\]
where \(T_{de}\) is the length of the start token used by Informer [19], and the prediction \(y\) consists of a mean and a variance. NAR models whose decoders receive no input sequence perform poorly, and copying part of the past input series to the decoder as its input can enhance model performance [12].
Notably, the AR model excels at capturing output space interdependence, whereas the NAR model is adept at mitigating error propagation. Through a mutually beneficial relationship, the deep AR P1 and NAR P2 decoders coalesce as peers, leveraging each other's strengths to augment their individual abilities. This collective knowledge is then harnessed to train the shallow NAR S decoder, effectively establishing a dynamic ensemble, as illustrated in Figure 1.
### _Outcome-driven Online Knowledge Distillation_
The conventional optimization objective in probabilistic time series forecasting is the Negative Log-Likelihood (NLL) loss, denoted as \(\mathcal{L}_{\mathcal{NLL}}\). This loss function, prevalent in training probabilistic forecasting models [8, 7], is formally defined in Eq. (6):
\[\mathcal{L}_{\mathcal{NLL}}(\hat{y}_{T_{l}+1:T},y_{T_{l}+1:T})=-\frac{1}{2T_{h}}\left(T_{h}\log(2\pi)+\sum_{t=T_{l}+1}^{T}\log\left|\sigma_{t}^{2}\right|+\sum_{t=T_{l}+1}^{T}(y_{t}-\mu_{t})^{2}\sigma_{t}^{-2}\right) \tag{6}\]
where \(y_{T_{l}+1:T}\) represents the ground truth, while \(\hat{y}_{T_{l}+1:T}\) pertains to the predicted distribution, encompassing the mean \(\mu_{t}\) and standard deviation \(\sigma_{t}\) across the forecasting horizon.
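A compact PyTorch sketch of this objective is shown below; the shapes are our assumption, and the returned value is the negative of Eq. (6) so that minimising it maximises the Gaussian log-likelihood.

```python
import math
import torch

def nll_loss(y, mu, sigma):
    """Negative of Eq. (6), averaged over a batch.

    y, mu, sigma: tensors of shape (batch, T_h).
    """
    t_h = y.shape[-1]
    log_lik = -(t_h * math.log(2.0 * math.pi)
                + torch.log(sigma ** 2).sum(dim=-1)
                + ((y - mu) ** 2 / sigma ** 2).sum(dim=-1)) / (2.0 * t_h)
    return (-log_lik).mean()
```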
To address the inherent limitations of both AR and NAR models when trained only with the NLL loss, we enlist the counterpart model as a peer network. This peer relationship serves as a guiding mechanism to overcome the respective limitations: each model approximates both the ground truth and the predicted distribution of its teacher network.
In the realm of online Knowledge Distillation (KD), traditional methodologies involve aggregating prediction and KD losses with fixed predefined proportions [29] or gradually increasing the proportion of KD loss over training epochs [32]. However, these approaches overlook the diverse abilities of the peer networks and the varying quality of their predictions during training. Consequently, the contribution of each peer network to the student network should be weighted according to its performance.
By allocating a constant weight to the KD loss irrespective of the peers' performance, inaccurately predicted distributions could propagate errors within the network, limiting overall performance. Conventional remedies, such as removing misclassified data for offline KD, are unsuitable for online distillation or forecasting tasks due to reduced training data.
To address these challenges, we introduce an attention-based KD loss that assigns higher weights to well-forecasted samples. The KD loss functions for the P1, P2, and S decoders are formulated in Eq. (8), (9), and (10). P1 and P2 aim to mimic each other's predicted distributions, while S
simultaneously distills knowledge from the outputs of both P1 and P2. The Kullback Leibler (KL) divergence serves as a measure of discrepancy between predicted distributions. Given distributions \(P\) and \(Q\), their KL divergence is defined in Eq. (7).
\[\mathcal{L}_{\mathrm{KL}}(P\|Q)=\int_{-\infty}^{\infty}p(x)\log\left(\frac{p(x) }{q(x)}\right)dx \tag{7}\]
\[\mathcal{L}^{KD}_{P1}=\frac{\alpha_{o}}{T_{h}}\,\omega_{e}(y_{P2;T_{l}+1:T},y_{T_{l}+1:T})\cdot\mathcal{L}_{\mathrm{KL}}(y_{P2;T_{l}+1:T}\|y_{P1;T_{l}+1:T}) \tag{8}\]
\[\mathcal{L}^{KD}_{P2}=\frac{\alpha_{o}}{T_{h}}\,\omega_{e}(y_{P1;T_{l}+1:T},y_{T_{l}+1:T})\cdot\mathcal{L}_{\mathrm{KL}}(y_{P1;T_{l}+1:T}\|y_{P2;T_{l}+1:T}) \tag{9}\]
\[\mathcal{L}^{KD}_{S}=\frac{\alpha_{o}}{T_{h}}\,\omega_{e}(y_{P1;T_{l}+1:T},y_{T_{l}+1:T})\cdot\mathcal{L}_{\mathrm{KL}}(y_{P1;T_{l}+1:T}\|y_{S;T_{l}+1:T})+\frac{\alpha_{o}}{T_{h}}\,\omega_{e}(y_{P2;T_{l}+1:T},y_{T_{l}+1:T})\cdot\mathcal{L}_{\mathrm{KL}}(y_{P2;T_{l}+1:T}\|y_{S;T_{l}+1:T}) \tag{10}\]
In these equations, \(\alpha_{o}\) modulates the weight of the outcome-driven KD loss, while the weight \(\omega_{e}(\cdot)\) captures the importance of the teacher model's predictions. Considering a Gaussian distribution of data, we define \(\omega_{e}(\cdot)\) as follows:
\[\omega_{e}(\hat{y}_{T_{l}+1:T},y_{T_{l}+1:T})=\frac{1}{\sigma_{T_{l}+1:T}\sqrt {2\pi}}e^{-\frac{1}{2}\left(\frac{y_{T_{l}+1:T}-\mu_{T_{l}+1:T}}{\sigma_{T_{l} +1:T}}\right)^{2}} \tag{11}\]
where \(\mu_{T_{l}+1:T}\) and \(\sigma_{T_{l}+1:T}\) are the mean and standard deviation of the teacher's predicted distribution. The weight \(\omega_{e}(\cdot)\) ranges from 0 to 1 and changes during training. A higher weight signifies a more accurate teacher prediction, so the student network approximates the teacher's outputs more closely where the teacher is reliable.
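Since the decoder outputs are Gaussian, both the KL term of Eq. (7) and the weight of Eq. (11) admit closed forms. A minimal PyTorch sketch follows; the variable names, shapes, and the choice to detach the weight are ours, not the paper's code.

```python
import math
import torch

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    # Closed form of Eq. (7) for univariate Gaussians, per time step.
    return (torch.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2) - 0.5)

def teacher_weight(y, mu_t, sigma_t):
    # Eq. (11): density of the ground truth under the teacher's distribution.
    z = (y - mu_t) / sigma_t
    return torch.exp(-0.5 * z ** 2) / (sigma_t * math.sqrt(2.0 * math.pi))

def outcome_kd_loss(y, student, teacher, alpha_o):
    # Eq. (8)-style term: weight the per-step KL by the teacher's reliability.
    mu_s, sigma_s = student
    mu_t, sigma_t = teacher
    t_h = y.shape[-1]
    w = teacher_weight(y, mu_t, sigma_t).detach()  # our choice: no grad via w
    kl = gaussian_kl(mu_t, sigma_t, mu_s, sigma_s)
    return (alpha_o / t_h) * (w * kl).sum(dim=-1).mean()
```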
### _Hidden State-driven Online Knowledge Distillation_
Time series data exhibit a natural continuity, yet NAR forecasting models often yield discontinuous and unrelated predictions [16]. This disconnect arises because these models ignore output interdependence and position information, and we trace this phenomenon to the behavior of their hidden states. To illustrate this, we conducted a case study in which an AR model outperformed an NAR model. In Figure 2 (d), we display a smooth and continuous PV power measurement from the Sanyo dataset alongside forecasts from an AR model (DeepAR), an NAR model (Informer), and our proposed AMLNet. Notably, the trajectory of DeepAR is noticeably more continuous and smooth than that of the NAR models. To quantify this observation, we employed the Dynamic Time Warping (DTW) algorithm [35] to measure the similarity between two series. Specifically, the DTW distance and Mean Absolute Percentage Error (MAPE) between the DeepAR prediction and the ground truth are 3.05 and 0.102 kW, respectively, whereas Informer yields DTW and MAPE values of 4.91 and 0.143 kW, indicating that the DeepAR forecasts align more closely with the ground truth.
We visualize the cosine distances between hidden states at the last hidden layer in Figure 2, where each pixel at row \(i\) and column \(j\) represents the distance between hidden states
Fig. 1: AMLNet comprises an encoder, P1, P2, and S decoders, with each P1 and P2 layer accompanied by a dedicated discriminator. P1 operates as an AR component, while P2 and S function as NAR components. The solid lines depict the feedforward process, while dashed lines represent the data flow for knowledge distillation.
\(h_{i}\) and \(h_{j}\) at steps \(i\) and \(j\). Lighter colors correspond to lower distances. Given two hidden states \(h_{i}\) and \(h_{j}\), their cosine distance is calculated as \(1-\frac{h_{i}\cdot h_{j}}{\|h_{i}\|\cdot\|h_{j}\|}\) (a short code sketch of this diagnostic follows the list below). Notably, the average cosine distance between DeepAR hidden states \(h_{i}\) and their six closest neighbors is 0.016, while the corresponding value for Informer is 0.084. Clearly, the cosine distances of DeepAR hidden states are substantially lower, indicating greater similarity to their neighbors. Such similarity suggests that the hidden states of DeepAR vary gradually and consistently, yielding predictions that are more continuous and smooth. From our analysis, several key observations emerge:
* AR models exhibit similar hidden states patterns, while NAR models lack this property.
* Dissimilar hidden states can lead to a discontinuous predicted trajectory.
* Hidden states hold meaningful information, and distilling knowledge from them could be beneficial [31, 30].
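The diagnostic referenced above is straightforward to reproduce; a NumPy sketch (array shapes are our assumption):

```python
import numpy as np

def cosine_distance_matrix(h):
    """Pairwise distances 1 - cos(h_i, h_j) for hidden states h: (T_h, d_hid)."""
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    return 1.0 - (h @ h.T) / (norms @ norms.T)

def mean_knn_distance(dist, k=6):
    """Average distance of each state to its k nearest neighbours,
    the summary statistic quoted in the case study."""
    d = np.sort(dist, axis=1)[:, 1:k + 1]  # drop the zero self-distance
    return d.mean()
```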
Thus, we capitalize on the hidden state patterns through online KD to generate continuous predictions while utilizing their inherent information. Unlike offline KD methods which regularize the distance between pairs of hidden states [28], the hidden states of online KD models exhibit greater variability compared to model outputs during training [30]. Direct regularization in this context impairs the learning process and fails to guarantee convergence. To address this, we adopt an adversarial training approach to extract distributional information from hidden states.
In our approach, Peer 1 (P1) learns from Peer 2 (P2) to counter error accumulation, while P2 learns output interdependence from P1. The Student (S) inherits abilities from both P1 and P2. Given a shared encoder, we focus solely on the decoders' hidden states. The adversarial training involves two components: a generator producing feature mappings and a discriminator classifying these mappings. Each decoder layer serves as a generator for feature mappings, and each P1 and P2 layer is paired with a discriminator acting as a classifier (refer to Figure 1).
The discriminators receive hidden states \(h\in\Re^{T_{h}\times d_{hid}}\) and output probabilities ranging between 0 (fake) and 1 (real). A discriminator's architecture consists of a sequence of ConvLayer-BatchNorm-LeakyReLU-ConvLayer-LinearLayer-Sigmoid operations. The initial ConvLayer has an output dimension of 16, stride of 2, and kernel size of 3, while the second ConvLayer has an output dimension of 1, stride of 1, and kernel size of 3.
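A PyTorch sketch of such a discriminator is given below; treating the \(d_{hid}\) features as Conv1d channels and the padding choice are our assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class HiddenStateDiscriminator(nn.Module):
    """Maps hidden states h of shape (batch, T_h, d_hid) to a probability."""
    def __init__(self, d_hid, t_h):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(d_hid, 16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm1d(16),
            nn.LeakyReLU(0.2),
            nn.Conv1d(16, 1, kernel_size=3, stride=1, padding=1),
        )
        t_out = (t_h + 1) // 2  # sequence length after the stride-2 convolution
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(t_out, 1), nn.Sigmoid())

    def forward(self, h):
        # Transpose so the d_hid features act as channels for Conv1d.
        return self.head(self.conv(h.transpose(1, 2)))  # (batch, 1) in (0, 1)
```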
In our training process, the generators aim to fool the discriminators by approximating the distribution of hidden states from their teacher networks. Discriminators, on the other hand, strive to distinguish the origin of hidden states. Specifically, we denote the \(i\)th layer of the P1, P2, and S decoders as generators \(G_{i,P1}\), \(G_{i,P2}\), and \(G_{i,S}\), which produce hidden states \(h_{i,P1}\), \(h_{i,P2}\), and \(h_{i,S}\), respectively. The \(i\)th P1 discriminator \(D_{i,P1}\) is trained to classify P1-generated mappings as real (output: 1) and P2- or S-generated mappings as fake (output: 0). Analogously, the \(i\)th P2 discriminator \(D_{i,P2}\) distinguishes P2-generated mappings as real and P1- or S-generated mappings as fake. The parameters of \(G_{i,P1}\) are optimised by minimising the hidden-state-driven KD loss \(\mathcal{L}_{i,P1}(h_{i,P1})\):
\[\mathcal{L}_{i,P1}=\alpha_{h}\log(1-D_{i,P2}(h_{i,P1})). \tag{12}\]
where the hyperparameter \(\alpha_{h}\) controls the weight of the hint-driven KD loss. Similarly, the \(i\)th P2 layer \(G_{i,P2}\) minimises the KD loss:
\[\mathcal{L}_{i,P2}=\alpha_{h}\log(1-D_{i,P1}(h_{i,P2})). \tag{13}\]
For hint-driven KD, the student network's shallow layers imitate low-level features from the teacher networks, while its deep layers learn higher-level features. Since the S decoder has fewer layers, we let each shallow S layer acquire knowledge from multiple teacher layers. More specifically, the \(i\)th S layer learns features from the \(j\in[1+(i-1)\lfloor\frac{n_{d}-1}{n_{S}-1}\rfloor,\ \min(\lceil\frac{n_{d}-1}{n_{S}-1}\rceil+(i-1)\lfloor\frac{n_{d}-1}{n_{S}-1}\rfloor,\ n_{d})]\)th layers of the teachers, where \(n_{d}\) denotes the teacher decoder depth (\(n_{P}\) above). The \(\min(\cdot)\) term ensures that \(j\) does not exceed the depth of the teacher network.
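Read literally, this mapping can be computed as in the sketch below (1-indexed layers; our interpretation of the bracketed range, which may not exactly match the configuration drawn in Figure 1).

```python
import math

def teacher_layers(i, n_d, n_s):
    """Teacher layer indices j distilled into the i-th student layer."""
    step = (n_d - 1) // (n_s - 1)
    lo = 1 + (i - 1) * step
    hi = min(math.ceil((n_d - 1) / (n_s - 1)) + (i - 1) * step, n_d)
    return list(range(lo, hi + 1))

# e.g. a 4-layer teacher and a 2-layer student:
# teacher_layers(1, 4, 2) -> [1, 2, 3]; teacher_layers(2, 4, 2) -> [4]
```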
Consequently, each \(i\)th shallow S network layer (\(G_{i,S}\)) distills knowledge from both P1 and P2 decoders, attempting to deceive corresponding discriminators (\(D_{j,P1}\) and \(D_{j,P2}\)) for \(j\) within the specified range. The KD loss for the \(i\)th S layer is formulated in Eq. (14). Here, \(\alpha_{h}\) governs the weight of hint-driven KD loss.
Fig. 2: Hidden state cosine distance of: (a) DeepAR; (b) Informer and (c) AMLNet. (d) Ground truth vs predictions.
\[\mathcal{L}_{i,S}=\alpha_{h}\sum_{j}\left(\log(1-D_{j,P1}(h_{i,S}))+\log(1-D_{j,P2}(h_{i,S}))\right). \tag{14}\]
In parallel, the discriminators are trained to classify the origin of hidden states. The \(i\)th P1 discriminator (\(D_{i,P1}\)) distinguishes features generated by the \(i\)th P1 layer as real (output: 1), and P2 or S-generated features as fake (output: 0). Similarly, the \(i\)th P2 discriminator (\(D_{i,P2}\)) discriminates P2-generated features as real and P1 or S-generated features as fake. The loss functions for the P1 and P2 discriminators are defined in Eq. (15) and (16), respectively. Here, \(k\) represents the indexes of shallow NAR layers endeavoring to deceive the \(i\)th discriminator. For example, the first and second S layers aim to deceive the first P1 discriminator in Figure 1, resulting in \(D_{1,P1}\) having \(k=1,2\).
\[\mathcal{L}_{\mathcal{D}i,P1}=-\log D_{i,P1}(h_{i,P1})-\log(1-D_{i,P1}(h_{i,P2}))-\sum_{k}\log(1-D_{i,P1}(h_{k,S})) \tag{15}\]
\[\mathcal{L}_{\mathcal{D}i,P2}=-\log D_{i,P2}(h_{i,P2})-\log(1-D_{i,P2}(h_{i,P1}))-\sum_{k}\log(1-D_{i,P2}(h_{k,S})) \tag{16}\]
### _Optimisation and Inference_
The optimization process for AMLNet involves minimizing the forecasting loss, as well as the outcome-driven and hint-driven KD losses. During each training iteration, the encoder and P1 and P2 decoders are optimized first by minimizing Equations (17) and (18). Subsequently, the S decoder is optimized by minimizing Equation (19). Lastly, each discriminator associated with the \(i\)th P1 or P2 layer is optimized by minimizing Equation (15) or (16). During testing, only the encoder and S decoder are utilized to generate results. The entire training and testing process is outlined in Algorithm 1.
\[\mathcal{L}_{P1}=\mathcal{L}_{\mathcal{NLL},P1}+\mathcal{L}^{KD}_{P1}+\sum_{i=1}^{n_{P}}\mathcal{L}_{i,P1} \tag{17}\]
\[\mathcal{L}_{P2}=\mathcal{L}_{\mathcal{NLL},P2}+\mathcal{L}^{KD}_{P2}+\sum_{i=1}^{n_{P}}\mathcal{L}_{i,P2} \tag{18}\]
\[\mathcal{L}_{S}=\mathcal{L}_{\mathcal{NLL},S}+\mathcal{L}^{KD}_{S}+\sum_{i=1}^{n_{S}}\mathcal{L}_{i,S} \tag{19}\]
```
0: Training data \(\{(y_{i,1:T-1},x_{i,1:T},y_{i,T_{l}+1:T})\}_{i=1}^{n}\); initialised parameters of the shared encoder \(f_{\theta_{e}}\), P1 decoder \(f_{\theta_{P1}}\), P2 decoder \(f_{\theta_{P2}}\), shallow S decoder \(f_{\theta_{S}}\) and discriminators \(\{D_{i,P1}\}_{i=1}^{n_{P}}\) and \(\{D_{i,P2}\}_{i=1}^{n_{P}}\); training epochs \(e_{max}\).
1:// Training stage
2:for\(1\to e_{max}\):do
3:// Optimizing encoder and P1, P2 decoders
4: Compute encoder output \(h_{e;1:T_{l}}\) (Eq. (2))
5: Compute P1 and P2 decoder forecasts \(y_{P1;T_{l}+1:T}\) and \(y_{P2;T_{l}+1:T}\) and their hidden states \(h_{1:n_{P},P1}\) and \(h_{1:n_{P},P2}\) (Eqs. (3) and (4))
6: Compute loss with P1 forecasts \(\mathcal{L}_{P1}\) (Eq. (17))
7: Compute loss with P2 forecasts \(\mathcal{L}_{P2}\) (Eq. (18))
8: Update encoder, P1, P2 decoders by minimising \(\mathcal{L}_{P1}\) and \(\mathcal{L}_{P2}\)
9:// Optimizing S decoder
10: Compute forecasts \(y_{S;T_{l}+1:T}\) and hidden states \(h_{1:n_{S},S}\) of the shallow S decoder (Eq. (5))
11: Compute loss with S forecasts \(\mathcal{L}_{S}\) (Eq. (19))
12: Update S decoder by minimising \(\mathcal{L}_{S}\)
13:// Optimizing discriminators
14: Compute classification loss \(\mathcal{L}_{\mathcal{D}i,P1}\) and \(\mathcal{L}_{\mathcal{D}i,P2}\) for every P1 and P2 discriminators (Eq. (15 and 16))
15: Update discriminators by minimising the classification loss
16:endfor
17:// Testing stage
18: Compute encoder output \(h_{e;1:T_{l}}\) (Eq. (2))
19: Compute forecasting results \(y_{S;T_{l}+1:T}\) by the S decoder (Eq. (5))
```
**Algorithm 1** Training and Testing Process of AMLNet
## V Experiments
### _Experimental Details_
We compare the performance of AMLNet with six methods: four state-of-the-art deep learning models (DeepAR, LogSparse Transformer, N-BEATS and Informer), a statistical model (SARIMAX) and a persistence model. 1) **Persistence** is a typical baseline in forecasting which considers the time series of the previous day as the prediction for the next day; 2) **SARIMAX**[4] is an extension of ARIMA which can handle seasonality and exogenous variables; 3) **DeepAR**[8] is a widely used RNN-based forecasting model; 4) **LogSparse Transformer**[7] is a Transformer-based forecasting model, denoted as "LogTrans" in Table III; 5) **N-BEATS**[6] consists of blocks of fully-connected neural networks, organised into stacks using residual links. We introduced covariates at the input of each block to facilitate multivariate series forecasting; 6) **Informer**[19] is a Transformer-based forecasting model. We modified it to produce probabilistic forecasts by generating the mean value and variance. Note that Persistence, N-BEATS and Informer are NAR models while the others are AR models.
All models were implemented using PyTorch 1.6 and evaluated on a Tesla V100 16GB GPU. The deep learning models were optimised by mini-batch gradient descent with the Adam optimiser and a maximum number of epochs of 200. Following the experimental setup in [24, 9], we used the following training, validation and test split: for Sanyo and Hanergy - the data from the last year as test set, the second last year as validation set for early stopping and the remaining data (5 years for Sanyo and 4 years for Hanergy) as training set; for Solar and Electricity - the last week of data as test set (from 25/08/2006 for Solar and 01/09/2014 for Electricity) and the week before as validation set. For all data sets, the data preceding the validation set is split in the same way into three subsets and the corresponding validation set is used to select the best hyperparameters. We selected the hyperparameters with the minimum loss on the validation set, using Bayesian optimisation for hyperparameter search with a maximum of 20 iterations. The models used for comparison were tuned based on the authors' recommendations. For the Transformer-based models, we used learnable position and ID (for the Solar, Electricity and Exchange sets) embeddings. For AMLNet, the constant sampling factor for the Informer backbone was set to 2, and the length of the start token \(T_{de}\) was fixed to half of the forecasting horizon. The learning rates of the generator \(\lambda_{G}\) and discriminator \(\lambda_{D}\) were fixed; the loss function regularisation parameters \(\alpha_{o}\) and \(\alpha_{h}\) were chosen from {0.001, 0.05, 0.1, 0.5}, the dropout rate \(\delta\) from {0, 0.1, 0.2}, the hidden layer dimension \(d_{hid}\) from {8, 12, 16, 24, 48}, the Informer backbone position-wise FFN dimension \(d_{f}\) and number of heads \(n_{h}\) from {8, 12, 16, 24, 48, 96} and {4, 8, 16, 24, 32}, and the number of encoder layers \(n_{e}\), P1 and P2 decoder layers \(n_{d}\), and shallow NAR decoder layers \(n_{S}\) from {2, 3, 4}. Note that the number of encoder layers is not less than the number of decoder layers, the P1 and P2 decoders have the same number of layers, and the shallow NAR decoder has fewer layers than the deep decoders. The discriminators are simply a series of ConvLayer-BatchNorm-LeakyReLU-ConvLayer-LinearLayer-Sigmoid. The first ConvLayer has an output dimension of 16, stride of 2 and kernel size of 3, while the second has an output dimension of 1, stride of 1 and kernel size of 3.
The selected best hyperparameters for AMLNet are listed in Table II and used for the evaluation of the test set.
Following [8], we report the standard \(\rho\)0.5 and \(\rho\)0.9-quantile losses. The quantile loss function is applied to predict quantiles, and quantile \(\rho\) is the value below which a fraction of observations in a group falls. Given the ground truth \(y\) and \(\rho\)-quantile of the predicted distribution \(\hat{y}\), the \(\rho\)-quantile loss is given by \(\mathrm{QL}_{\rho}(y,\hat{y})\):
\[\mathrm{QL}_{\rho}(y,\hat{y})=\frac{2\sum_{t}P_{\rho}\left(y_{t},\hat{y}_{t}\right)}{\sum_{t}|y_{t}|},\qquad P_{\rho}(y,\hat{y})=\begin{cases}\rho(y-\hat{y})&\text{if }y>\hat{y}\\ (1-\rho)(\hat{y}-y)&\text{otherwise}\end{cases} \tag{20}\]
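For reference, a direct NumPy transcription of Eq. (20):

```python
import numpy as np

def quantile_loss(y, y_hat, rho):
    """Normalised rho-quantile loss of Eq. (20); y, y_hat: 1-D arrays."""
    diff = y - y_hat
    p_rho = np.where(diff > 0, rho * diff, (1 - rho) * (-diff))
    return 2.0 * p_rho.sum() / np.abs(y).sum()

# rho = 0.5 recovers the rho0.5-loss reported in Table III.
```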
### _Accuracy Analysis_
Table III shows the \(\rho\)0.5 and \(\rho\)0.9 losses of all models, including the three AMLNet versions which use different decoders (AMLNet-P1, AMLNet-P2, AMLNet-S). As N-BEATS and Persistence produce point forecasts, only the \(\rho\)0.5-loss is reported for them. AMLNet is the most accurate method: it outperforms the other methods on all data sets except for \(\rho\)0.5 on Electricity, where the LogSparse Transformer is the best model. Within AMLNet, the S decoder successfully inherits the abilities of the P1 and P2 decoders with fewer layers and achieves the best performance on the Hanergy and Solar sets. Overall, all decoders of AMLNet outperform their backbone model Informer, except for the Solar and Electricity sets where P1 underperforms, indicating that AMLNet is beneficial. Both NAR branches (P2 and S) exhibit a clear improvement over Informer, suggesting that the design for overcoming the disadvantages of AR and NAR models successfully improves forecasting accuracy.
AST [5] employs adversarial training to improve the continuity and fidelity of forecasts at the sequence level. AST is not compared directly because it generates quantile forecasts and minimises the quantile loss, while we consider probabilistic forecasting with the different objective function in Eq. (6). To compare the effectiveness of AMLNet with AST, we apply the adversarial training of AST to AMLNet and its backbone, Informer, and the results are shown in Table IV. The adversarial training improves the performance of Informer, while AMLNet still exhibits advantages. The design of AMLNet is compatible with the AST adversarial training, and combining both techniques achieves a further performance improvement on the S decoder.
### _Case Analysis_
To study the capabilities of AMLNet in addressing error accumulation and modeling output space interdependence, we conduct a comparative analysis with two benchmark models: classic AR model DeepAR and NAR model Informer. The evaluation is performed on both the Sanyo and Hanergy datasets, providing insights into AMLNet's performance across different scenarios.
Fig. 3 illustrates the \(\rho\)0.5-loss of various models across different forecasting horizons, using a fixed input history. The loss of all models tends to increase as the forecasting
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline & \(\lambda_{G}\) & \(\lambda_{D}\) & \(\alpha_{o}\) & \(\alpha_{h}\) & \(\delta\) & \(d_{hid}\) & \(n_{e}\) & \(n_{d}\) & \(n_{S}\) & \(d_{f}\) & \(n_{h}\) \\ \hline Sanyo & 0.005 & 0.001 & 0.1 & 0.5 & 0 & 48 & 4 & 4 & 2 & 16 & 8 \\ Hanergy & 0.005 & 0.001 & 0.1 & 0.5 & 0 & 48 & 4 & 4 & 2 & 16 & 8 \\ Solar & 0.005 & 0.001 & 0.5 & 0.001 & 0.2 & 96 & 4 & 3 & 2 & 48 & 32 \\ Electricity & 0.001 & 0.001 & 0.1 & 0.001 & 0.1 & 48 & 4 & 3 & 2 & 48 & 32 \\ \hline \end{tabular}
\end{table} TABLE II: Hyperparameters for AMLNet
horizon expands. However, it is evident that the performance of AR models, such as DeepAR, deteriorates more significantly compared to NAR models. Remarkably, AMLNet's P1 decoder consistently outperforms DeepAR across different horizons, demonstrating its capability to mitigate the adverse effects of error accumulation. Conversely, NAR models, including Informer and AMLNet, exhibit relatively stable performance over varying forecasting horizons. This observation indicates that AMLNet's design effectively addresses the issue of error accumulation in its P1 decoder.
Referencing Fig. 2 (c), it becomes evident that AMLNet's S decoder exhibits lower cosine distances in hidden states compared to its backbone counterpart shown in Fig. 2 (b). This distinction is particularly pronounced when observing the lighter color in Fig. 2 (c) as compared to Fig. 2 (b). Additionally, the average cosine distances of the hidden states in the P2 and S decoders on both the Sanyo and Hanergy datasets are significantly lower, by 28% and 23% respectively, compared to the backbone model. Furthermore, the average DTW distances of the P2 and S decoder predictions exhibit a reduction of 18% and 17%, respectively. These findings underline the efficacy of our designed approach in learning and leveraging output space interdependence. This enables the model's hidden states to exhibit greater similarity to neighboring states and subsequently generates more realistic and coherent prediction trajectories.
### _Speed Analysis_
We conducted an evaluation of the inference time for different configurations of AMLNet, as well as the NAR backbone Informer and the AR baseline LogTrans. The results are summarized in Table V. All experiments were conducted on the same computer configuration, and the reported values represent the average elapsed time in milliseconds along with the standard deviation from 10 runs. Notably, the NAR models exhibit faster inference times compared to the P1 decoder, primarily due to their inherent parallelizability. Informer and AMLNet-P2 demonstrate similar inference speeds, which is consistent with their comparable architectural characteristics. AMLNet-S, designed with fewer layers, stands out as the fastest among the models evaluated.
### _Ablation Analysis_
To assess the effectiveness of our proposed methods, we conducted an ablation study, focusing on the \(\rho\)0.5-loss metric for various model configurations. The results are presented in Table VI. In this table, \(\mathcal{L}_{o}\) corresponds to the classic online KD, \(\mathcal{L}_{wo}\) represents our outcome-driven KD, \(\mathcal{L}_{GAN}\) indicates adversarial KD applied to the last hidden layer, and \(\mathcal{L}_{hGAN}\) refers to our hint-driven KD.
Among the key findings from this ablation analysis:
* AMLNet, when combined with our proposed KD methods, emerges as the most effective model configuration, attaining the highest accuracy.
* Both outcome-driven KD and hint-driven KD lead to improved accuracy when incorporated into the frameworks, underscoring the efficacy of both design approaches.
Fig. 3: \(\rho\)0.5-loss of DeepAR, Informer and AMLNet with various forecasting horizon on (a) Sanyo set and (b) Hanergy set
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Sanyo & Hanergy & Solar & Electricity \\ \hline Persistence & 0.154/- & 0.242/- & 0.256/- & 0.091/- \\ SARIMAX & 0.124/0.096 & 0.145/0.098 & 0.256/0.192 & 0.196/0.079 \\ DeepAR & 0.070/0.031 & 0.092/0.045 & 0.222\({}^{\circ}\)/0.093\({}^{\circ}\) & 0.075\({}^{\circ}\)/0.040\({}^{\circ}\) \\ LogTrans & 0.067/0.036 & 0.124/0.066 & 0.210\({}^{\circ}\)/0.082\({}^{\circ}\) & **0.059\({}^{\circ}\)**/0.034\({}^{\circ}\) \\ N-BEATS & 0.070/- & 0.132/- & 0.212/- & 0.071/- \\ Informer & 0.046/0.022 & 0.084/0.046 & 0.215/0.115 & 0.068/0.033 \\ \hline AMLNet-P1 & 0.044/0.021 & 0.084/0.043 & 0.224/0.091 & 0.068/0.034 \\ AMLNet-P2 & **0.040/0.019** & 0.078/0.040 & 0.206/0.090 & 0.065/0.033 \\ AMLNet-S & 0.042/0.020 & **0.077/0.038** & **0.204/0.088** & 0.067/**0.032** \\ \hline \hline \end{tabular}
\end{table} TABLE III: \(\rho\)0.5/\(\rho\)0.9-loss of data sets with various granularities. \(\circ\) denotes results from [7].
* The Backbone+\(\mathcal{L}_{wo}\)+\(\mathcal{L}_{GAN}\) configuration substantially outperforms Backbone+\(\mathcal{L}_{o}\)+\(\mathcal{L}_{hGAN}\), suggesting that outcome-driven KD exerts a more pronounced impact on accuracy than hint-driven KD.
* The S decoder tends to outperform the P1 and P2 decoders, supporting the notion that our design of online knowledge distillation from P1 and P2 to S is a beneficial strategy.
## VI Conclusion
We introduce AMLNet, an NAR model that harnesses both outcome-driven and hint-driven online KD methods. It comprises a shared encoder alongside deep AR (P1), deep NAR (P2), and shallow NAR (S) decoders. P1 and P2 operate collaboratively, mutually distilling knowledge from each other, and collectively act as ensemble teachers that transfer this knowledge to S. Our method dynamically assigns attention-based weights to the output KD losses, thereby mitigating the risk of learning from less reliable predictions. Additionally, we employ adversarial training to distill knowledge from the distribution of hidden states; this is significant because the root of unrealistic forecasts in NAR models often lies within the hidden states, which inherently carry valuable information. Our extensive experimental evaluations substantiate the performance and effectiveness of AMLNet in comparison to state-of-the-art forecasting models and existing online KD methods. AMLNet excels not only in modeling output space interdependence, resulting in more plausible forecasts, but also in addressing the challenge of error accumulation, all while maintaining low inference latency.
|
2301.02856 | Neural Network-Based DOA Estimation in the Presence of Non-Gaussian
Interference | This work addresses the problem of direction-of-arrival (DOA) estimation in
the presence of non-Gaussian, heavy-tailed, and spatially-colored interference.
Conventionally, the interference is considered to be Gaussian-distributed and
spatially white. However, in practice, this assumption is not guaranteed, which
results in degraded DOA estimation performance. Maximum likelihood DOA
estimation in the presence of non-Gaussian and spatially colored interference
is computationally complex and not practical. Therefore, this work proposes a
neural network (NN) based DOA estimation approach for spatial spectrum
estimation in multi-source scenarios with a-priori unknown number of sources in
the presence of non-Gaussian spatially-colored interference. The proposed
approach utilizes a single NN instance for simultaneous source enumeration and
DOA estimation. It is shown via simulations that the proposed approach
significantly outperforms conventional and NN-based approaches in terms of
probability of resolution, estimation accuracy, and source enumeration accuracy
in conditions of low SIR, small sample support, and when the angular separation
between the source DOAs and the spatially-colored interference is small. | Stefan Feintuch, Joseph Tabrikian, Igal Bilik, Haim H. Permuter | 2023-01-07T13:59:45Z | http://arxiv.org/abs/2301.02856v2 | # Neural Network-Based DOA Estimation in the Presence of Non-Gaussian Interference
###### Abstract
This work addresses the problem of direction-of-arrival (DOA) estimation in the presence of non-Gaussian, heavy-tailed, and spatially-colored interference. Conventionally, the interference is considered to be Gaussian-distributed and spatially white. However, in practice, this assumption is not guaranteed, which results in degraded DOA estimation performance. Maximum likelihood DOA estimation in the presence of non-Gaussian and spatially-colored interference is computationally complex and not practical. Therefore, this work proposes a neural network (NN) based DOA estimation approach for spatial spectrum estimation in multi-source scenarios with an _a-priori_ unknown number of sources in the presence of non-Gaussian spatially-colored interference. The proposed approach utilizes a single NN instance for simultaneous source enumeration and DOA estimation. It is shown via simulations that the proposed approach significantly outperforms conventional and NN-based approaches in terms of probability of resolution, estimation accuracy, and source enumeration accuracy in conditions of low SIR, small sample support, and when the angular separation between the source DOAs and the spatially-colored interference is small.
Array Processing, DOA Estimation, Source Enumeration, Spatially-Colored Interference, Non-Gaussian Interference, Neural Networks, Deep Learning, Machine Learning, MVDR, MDL, AIC, Radar.
## I Introduction
Direction-of-arrival (DOA) estimation using a sensor array is required in multiple applications, such as radar, sonar, ultrasonic, wireless communications, and medical imaging [1]. In real-world applications, the signal received at the sensor array is a superposition of signals from the sources of interest, interference, and receiver thermal noise. In radars, the received signal consists of a target echo, clutter, and thermal noise. In multiple scenarios, the radar clutter has a spatially-colored, heavy-tailed non-Gaussian distribution [2], which can significantly degrade the performance of conventional estimators.
Minimum-variance-distortionless-response (MVDR) [3] is a conventional adaptive beamforming approach for DOA estimation. MVDR estimates the spatial spectrum and obtains the source DOAs via a one-dimensional peak search on a predefined grid. The estimation of signal parameters via rotational invariance techniques (ESPRIT) [4], multiple signal classification (MUSIC) [5], and root-MUSIC (R-MUSIC) [6] are additional widely used DOA estimation approaches. These approaches involve processing of the received signal autocorrelation matrix, which is conventionally performed via the sample autocorrelation matrix estimator [3, 4, 5, 6]. However, the performance of the sample autocorrelation matrix estimator degrades in small sample support or non-Gaussian scenarios. Furthermore, these methods use second-order statistics only and omit the higher-order statistics of non-Gaussian-distributed interference. In addition, the ESPRIT, MUSIC, and R-MUSIC approaches require _a-priori_ knowledge of the number of sources (or targets), which limits their practical use.
The problem of DOA estimation in the presence of non-Gaussian interference is of great practical interest. The maximum likelihood estimator (MLE) for DOA estimation in the presence of non-Gaussian interference does not have a closed-form analytical solution [7, 8]. Multiple model-based DOA estimation approaches have been intensively studied in the literature [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18].
Robust covariance matrix-based DOA estimation and source enumeration methods have been studied in the literature. For complex elliptically symmetric (CES) distributed data, the authors in [9] showed that a scatter matrix-based beamformer is consistent, and the semiparametric lower bound and Slepian-Bang formula for DOA estimation were derived in [10]. In [11], a generalized covariance-based (GC) approach for the covariance matrix estimation in scenarios with impulsive alpha-stable noise was proposed for MUSIC DOA estimation. However, these methods consider a specific family of distributions, such as the CES or alpha-stable, and are therefore, limited in the case of model mismatch. In [12], a probability measure transform (MT) based covariance matrix estimator was proposed for MUSIC-based DOA estimation and minimum descriptive length (MDL) based source enumeration. The MT-based covariance estimator was also adopted for robust MVDR beamformer [13]. These methods are usually based on setting a parameter that determines the tradeoff between the level of robustness and performance.
The problem of DOA estimation in the presence of a mixture of spatially-white K-distributed and Gaussian-distributed noise under a deterministic and unknown (conditional) source model was studied in [7]. An iterative MLE-based approach for the conditional and joint likelihood of interference distribution's parameters was derived in [14, 15]. This approach was further extended in [16] to marginal likelihood function. However, this approach is computationally complex due to numerical integral evaluation that involves a \(2M\) dimensional grid search for \(M\) targets [8]. Therefore, [8] proposed a kernel minimum error entropy-based adaptive estimator and a novel criterion to reduce the estimator's computational complexity. The expectation-maximization (EM) with a partial relaxation-based DOA estimation algorithm under the conditional model assumption was proposed in [17]. In [18] a sparse Bayesian learning (SBL) approach for outlier rejection of impulsive
and spatially-white interference was proposed. This EM-based approach does not require _a-priori_ knowledge of the number of sources and was shown to resolve highly-correlated and coherent sources. However, none of these model-based DOA estimation approaches considered an _a-priori_ unknown number of sources and spatially-colored interference and therefore are limited for real-world applications. Although source enumeration methods, such as MDL and Akaike information criterion (AIC) [19] can be used, they assume signal Gaussianity, and can therefore be inaccurate in non-Gaussian scenarios.
Deep learning and machine learning approaches were recently adopted for radar signal processing. Three types of NN-based DOA estimation approaches have been introduced in the literature [20]. The first approach assumes an _a-priori_ known number of sources and uses a NN that is optimized to output a vector of the estimated DOAs [21, 22, 23, 24, 25, 26, 27]. The second approach does not assume an _a-priori_ known number of sources and uses a NN for source enumeration [25, 26, 27, 28, 29, 30, 31]. The third approach uses a NN to estimate the source presence probability at each DOA on a predefined angular grid and obtains the source DOAs via a peak search [32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. However, none of these approaches have addressed non-Gaussian and spatially-colored interference [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31].
The cases of non-Gaussian and/or spatially-colored interference have been addressed using machine learning-based approaches. For massive MIMO cognitive radar, a reinforcement learning-based approach for multi-target detection under heavy-tailed spatially-colored interference was proposed in [42]. In [43], the authors addressed MIMO radar target detection under non-Gaussian spatially-colored interference by using a CNN architecture that is optimized according to a novel loss. A radial-basis-function (RBF) NN [44] and a convolutional neural network (CNN) [45] architecture were proposed for DOA estimation in the presence of non-Gaussian impulsive noise. In [46], a CNN-based architecture that includes denoising, source enumeration, and DOA estimation sub-NNs was introduced. However, [44, 45, 46] consider spatially-white noise and are suboptimal for scenarios with spatially-colored interference.
This work addresses the problem of DOA estimation of an _a-priori_ unknown number of sources in the presence of non-Gaussian, heavy-tailed, spatially-colored interference at a low signal-to-interference ratio (SIR) and small sample size. The contributions of this work include:
1. A novel NN-based processing mechanism is used for array processing within non-Gaussian spatially-colored interference. The proposed NN architecture utilizes the structure of information within the set of received complex snapshots.
2. The proposed NN is optimized to output an interference-mitigated spatial spectrum, and is used for simultaneous source enumeration and DOA estimation of sources within non-Gaussian spatially-colored interference.
The proposed approach outperforms conventional adaptive beamforming and competing straightforward NN-based methods in terms of probability of resolution and estimation accuracy in scenarios with non-Gaussian spatially-colored interference. In addition, the proposed approach outperforms conventional source enumeration techniques in scenarios characterized by non-Gaussian spatially-colored interference.
The following notations are used throughout the paper. Roman boldface lower-case and upper-case letters represent vectors and matrices, respectively, while italic letters stand for scalars. \(\mathbf{I}_{N}\) is the identity matrix of size \(N\times N\) and \(\mathbf{1}_{N}\) is a column vector of length \(N\) whose entries are equal to one. \(\mathbb{E}(\cdot)\), \((\cdot)^{T}\), and \((\cdot)^{H}\) are the expectation, transpose, and Hermitian transpose operators, respectively. \(\text{Vec}(\cdot)\), \(\text{diag}(\cdot)\), and \(|\cdot|\) stand for the vectorization, diagonalization, and absolute value operators, respectively. \([\mathbf{a}]_{n}\) and \([\mathbf{A}]_{n,m}\) are the \(n\)-th and \(n,m\)-th elements of the vector \(\mathbf{a}\) and the matrix \(\mathbf{A}\), respectively.
The remainder of this paper is organized as follows. The addressed problem is stated in Section II. Section III introduces the proposed NN-based DOA estimation approach. The proposed approach is evaluated via simulations in Section IV. Our conclusions are summarized in Section V.
## II Problem Definition
This work considers the problem of DOA estimation using an array of \(L\) receiving elements and \(M\) distinct and unknown sources with DOAs, \(\mathbf{\Theta}=\{\theta_{1},\ldots,\theta_{M}\}\). The measurements contain \(K\) spatial snapshots, \(\{\mathbf{x}_{k}\}_{k=1}^{K}\):
\[\begin{aligned} \mathbf{x}_{k} &=\mathbf{A}\left(\mathbf{\Theta}\right)\mathbf{s}_{k}+\sigma_{c}\mathbf{c}_{k}+\mathbf{n}_{k}\\ &=\sum_{m=1}^{M}\mathbf{a}\left(\theta_{m}\right)s_{k,m}+\sigma_{c}\mathbf{c}_{k}+\mathbf{n}_{k},\quad k=1,\ldots,K, \end{aligned} \tag{1}\]
where \(\mathbf{A}\left(\mathbf{\Theta}\right)=\left[\mathbf{a}\left(\theta_{1}\right)\ \cdots\ \mathbf{a}\left(\theta_{M}\right)\right]\), with \(\mathbf{a}\left(\theta_{m}\right)\in\mathbb{C}^{L}\) denoting the steering vector for a source at direction \(\theta_{m}\), and \(\mathbf{s}_{k}\triangleq\left[s_{k,1}\ \cdots\ s_{k,M}\right]^{T}\) is the source signal vector. We assume an unconditional model [47], where \(\{\mathbf{s}_{k}\}\overset{i.i.d.}{\sim}\mathcal{CN}\left(\mathbf{0}_{M},\text{diag}\left(\sigma_{1}^{2},\ldots,\sigma_{M}^{2}\right)\right)\) is temporally uncorrelated between pulses. The targets are assumed to be spatially distinct. The receiver thermal noise, denoted by \(\mathbf{n}_{k}\), is considered to be complex Gaussian-distributed, \(\{\mathbf{n}_{k}\}\overset{i.i.d.}{\sim}\mathcal{CN}\left(\mathbf{0}_{L},\sigma_{n}^{2}\mathbf{I}_{L}\right)\). The heavy-tailed non-Gaussian and spatially-colored interference is modeled by the interference amplitude \(\sigma_{c}\) and the interference component \(\mathbf{c}_{k}\in\mathbb{C}^{L}\). The considered compound-Gaussian distributed interference, \(\{\mathbf{c}_{k}\}\overset{i.i.d.}{\sim}\mathcal{K}\left(\nu,\theta_{c}\right)\), represents a non-Gaussian interference with angular spread around an unknown direction \(\theta_{c}\), such that \(\mathbf{c}\sim\mathcal{K}\left(\nu,\theta_{c}\right)\) implies
\[\begin{aligned} \mathbf{c} &=\sqrt{\tau}\,\mathbf{z},\\ \tau &\perp\mathbf{z},\quad\tau\sim\Gamma\left(\nu,\nu\right),\quad\mathbf{z}\sim\mathcal{CN}\left(\mathbf{0}_{L},\mathbf{M}_{\theta_{c}}\right). \end{aligned} \tag{2}\]
The compound-Gaussian statistical model is conventionally used in the literature to model heavy-tailed non-Gaussian interference [14, 16, 43, 7, 8, 48]. The texture component, \(\tau\in\mathbb{R}_{+}\), determines the heavy-tailed behavior and is characterized by \(\nu\). The speckle component, \(\mathbf{z}\in\mathbb{C}^{L}\), determines the spatial distribution of the interference and is characterized by the covariance matrix, \(\mathbf{M}_{\theta_{c}}\). The spatial covariance matrix of the interference upholds:
\[\mathbb{E}\left[\sigma_{c}^{2}\mathbf{c}\mathbf{c}^{H}\right]= \sigma_{c}^{2}\mathbb{E}\left[\tau\right]\mathbb{E}\left[\mathbf{z} \mathbf{z}^{H}\right]=\sigma_{c}^{2}\mathbf{M}_{\theta_{c}}\, \tag{3}\]
where \(\mathbf{M}_{\theta_{c}}\) can be modeled as [14, 15, 16, 43, 48]:
\[\left[\mathbf{M}_{\theta_{c}}\right]_{m,l}=\rho^{|m-l|}e^{j(m-l)\pi\sin\theta_{c }}. \tag{4}\]
The model in (3) and (4) represents the spatial interference, characterized by \(\rho\), with a spread around the interference DOA, \(\theta_{c}\).
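To make the signal model concrete, the following is a minimal numpy sketch (not the authors' code) of drawing snapshots according to (1)-(4); the parameter values passed in the example call at the bottom are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def steering_vector(theta_deg, L):
    """Half-wavelength ULA steering vector a(theta)."""
    return np.exp(1j * np.pi * np.arange(L) * np.sin(np.deg2rad(theta_deg)))

def speckle_covariance(theta_c_deg, rho, L):
    """M_{theta_c} from (4)."""
    diff = np.arange(L)[:, None] - np.arange(L)[None, :]
    return rho ** np.abs(diff) * np.exp(1j * diff * np.pi * np.sin(np.deg2rad(theta_c_deg)))

def snapshots(thetas_deg, sigmas, theta_c_deg, sigma_c, nu, rho, L=16, K=16, sigma_n=1.0):
    """Draw K i.i.d. snapshots x_k following (1)-(2)."""
    A = np.stack([steering_vector(t, L) for t in thetas_deg], axis=1)          # L x M
    M = len(thetas_deg)
    S = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)  # CN(0, I)
    S *= np.asarray(sigmas)[:, None]                                           # source powers
    Lc = np.linalg.cholesky(speckle_covariance(theta_c_deg, rho, L))
    Z = Lc @ (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K))) / np.sqrt(2)  # speckle z_k
    tau = rng.gamma(shape=nu, scale=1.0 / nu, size=K)                          # texture ~ Gamma(nu, nu), mean 1
    C = np.sqrt(tau)[None, :] * Z                                              # compound-Gaussian c_k
    N = sigma_n * (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K))) / np.sqrt(2)
    return A @ S + sigma_c * C + N                                             # L x K matrix X

# Illustrative draw: one source at 0.55 deg, interference at 10.55 deg.
X = snapshots([0.55], [1.0], theta_c_deg=10.55, sigma_c=2.0, nu=0.2, rho=0.9)
```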
## III The Proposed DAFC-Based Neural Network
The proposed approach generalizes the NN architecture that was introduced for linear-frequency-modulated (LFM) radar target detection in the range-Doppler domain [49]. In the following, the data pre-processing and the proposed NN-based processing mechanism are introduced in Subsections III-A and III-B. The proposed NN architecture and loss function are detailed in Subsections III-C and III-D, respectively.
### _Pre-Processing_
The input matrix, \(\mathbf{X}\in\mathbb{C}^{L\times K}\) is constructed from the set of \(K\) snapshots in (1), \(\{\mathbf{x}_{k}\}\):
\[\mathbf{X}=\begin{bmatrix}\mathbf{x}_{1}&\mathbf{x}_{2}&\cdots&\mathbf{x}_{K} \end{bmatrix}\, \tag{5}\]
where the \(k\)-th column of \(\mathbf{X}\) contains the \(k\)-th snapshot. The variation between the columns of \(\mathbf{X}\) is induced by the statistical characteristics of the source signal \(\mathbf{s}_{k}\), interference signal \(\mathbf{c}_{k}\), and thermal noise \(\mathbf{n}_{k}\). Therefore, each column in \(\mathbf{X}\) can be interpreted as a complex "feature" vector containing essential information for DOA estimation. The set of columns in \(\mathbf{X}\) can be interpreted as "realizations" of that feature.
The complex-valued matrix, \(\mathbf{X}\), is converted into the real-valued representation needed for the NN-based processing. To keep consistency with [49], we apply a transpose operator to the input matrix, such that the snapshots are stacked in rows. The output of the pre-processing, denoted by \(\mathbf{Z}_{0}\in\mathbb{R}^{K\times 2L}\), is:

\[\mathbf{Z}_{0}=\begin{bmatrix}\mathrm{Re}\big\{\mathbf{X}^{T}\big\},\ \mathrm{Im}\big\{\mathbf{X}^{T}\big\}\end{bmatrix}. \tag{6}\]
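A minimal sketch of this pre-processing step, assuming the snapshots are given as a numpy complex matrix:

```python
import numpy as np

def preprocess(X: np.ndarray) -> np.ndarray:
    """P(.): complex L x K snapshot matrix -> real K x 2L NN input (6)."""
    Xt = X.T                                          # rows become the snapshots x_k
    return np.concatenate([Xt.real, Xt.imag], axis=1)

# Z0 = preprocess(X)  # Z0.shape == (K, 2 * L)
```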
### _Dimensional Alternating Fully-Connected_
The dimensional alternating fully-connected (DAFC) block was introduced to process measurements in a form similar to the model in Section II[49]. Fig. 1 schematically shows the DAFC mechanism.
For arbitrary dimensions \(D_{1},D_{2},D_{3}\), the formulation of a general fully-connected (FC) layer applied to each row in a given matrix \(\mathbf{Z}\in\mathbb{R}^{D_{1}\times D_{2}}\) can be represented by the transform \(\mathcal{F}\left(\cdot\right)\):
\[\begin{aligned} \mathcal{F} &:\mathbb{R}^{D_{1}\times D_{2}}\rightarrow\mathbb{R}^{D_{1}\times D_{3}},\\ \mathcal{F}\left(\mathbf{Z}\right) &\triangleq h\left(\mathbf{Z}\mathbf{W}+\mathbf{1}_{D_{1}}\mathbf{b}^{T}\right). \end{aligned} \tag{7}\]
This matrix-to-matrix transformation is characterized by the "learnable" weight matrix, \(\mathbf{W}\in\mathbb{R}^{D_{2}\times D_{3}}\), the bias vector, \(\mathbf{b}\in\mathbb{R}^{D_{3}}\), and a scalar element-wise activation function, \(h(\cdot)\).
Let \(\mathcal{F}_{r}\left(\cdot\right)\) and \(\mathcal{F}_{c}(\cdot)\) be two separate, and not necessarily identical instances of \(\mathcal{F}\left(\cdot\right)\) from (7), and \(\mathbf{Z}_{in}\) be an arbitrary input matrix. The DAFC mechanism is formulated by the following operations:
**Dimensional Alternating Fully-Connected**

* **Input:** \(\mathbf{Z}_{in}\in\mathbb{R}^{H\times W}\), \(\mathcal{F}_{r}:\mathbb{R}^{H\times W}\rightarrow\mathbb{R}^{H\times W^{\prime}}\), \(\mathcal{F}_{c}:\mathbb{R}^{W^{\prime}\times H}\rightarrow\mathbb{R}^{W^{\prime}\times H^{\prime}}\)
* Apply a single FC layer to each row in \(\mathbf{Z}_{in}\): \(\mathbf{Z}_{r}=\mathcal{F}_{r}\left(\mathbf{Z}_{in}\right)\)
* Apply a single FC layer to each column in \(\mathbf{Z}_{r}\): \(\mathbf{Z}_{c}=\mathcal{F}_{c}\left(\mathbf{Z}_{r}^{T}\right)\)
* Transpose to keep orientation: \(\mathbf{Z}_{out}=\mathbf{Z}_{c}^{T}\)
* **Output:** \(\mathbf{Z}_{out}\triangleq\mathcal{S}\left(\mathbf{Z}_{in}\right)\in\mathbb{R}^{H^{\prime}\times W^{\prime}}\)
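The following is a minimal numpy sketch of a single DAFC block under the definitions above; the weight initialization and the example dimensions (chosen to match \(\mathcal{S}_{1}\) in Table I) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fc_rows(Z, W, b, h):
    """F(Z) = h(Z W + 1 b^T) from (7), applied to every row of Z."""
    return h(Z @ W + b[None, :])

def dafc(Z_in, W_r, b_r, W_c, b_c):
    """One DAFC block: row transform F_r, then column transform F_c."""
    Z_r = fc_rows(Z_in, W_r, b_r, np.tanh)                        # H x W'
    Z_c = fc_rows(Z_r.T, W_c, b_c, lambda z: np.maximum(z, 0.0))  # W' x H'
    return Z_c.T                                                  # H' x W'

# Dimensions of S_1 in Table I: (K x 2L) = (16 x 32) -> (64 x 256).
H, W, W_out, H_out = 16, 32, 256, 64
W_r, b_r = 0.1 * rng.normal(size=(W, W_out)), np.zeros(W_out)
W_c, b_c = 0.1 * rng.normal(size=(H, H_out)), np.zeros(H_out)
Z_out = dafc(rng.normal(size=(H, W)), W_r, b_r, W_c, b_c)  # shape (64, 256)
```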
In the following, three DAFC design principles are detailed.
**1) Structured transformation**
The input to the first DAFC block is the pre-processed, \(\mathbf{Z}_{0}\), given in (6). Therefore, the first FC layer, \(\mathcal{F}_{r}\), of the first DAFC block extracts spatial-related features from each row in \(\mathbf{Z}_{0}\). The second FC layer, \(\mathcal{F}_{c}\), of the first DAFC block, introduces an interaction between transformed rows. This implies that a) \(\mathcal{F}_{r}\) performs "spatial-feature" extraction by transforming the pre-processed i.i.d. snapshots (the rows of \(\mathbf{Z}_{0}\)) to a high-dimensional feature space, and b) the \(\mathcal{F}_{c}\) performs a nonlinear transformation of the extracted features (the columns of \(\mathcal{F}_{r}\left(\mathbf{Z}_{0}\right)\)) from each snapshot. In this way, the DAFC utilizes both spatial and statistical information. In addition, it can exploit high-order statistics-related features. Thus, the DAFC mechanism can contribute to estimating the source DOAs and mitigating the interference when incorporated into a NN.
**2) Sparsity**
Conventional DOA estimation considers the input data as the collection of measurement vectors (the snapshots \(\{\mathbf{x}_{k}\}\)) in a matrix form. One straightforward approach to processing the input data using a NN is to reshape it and process it via an FC-based architecture. In this way, each neuron in the layer's output interacts with every neuron in the input. On the other hand, the DAFC block transforms the data using a structured transformation, which is significantly sparser in terms of learnable parameters compared to the straightforward FC-based approach.
This parameter reduction can be observed in the following typical case. Consider an input matrix \(\mathbf{Z}_{1}\in\mathbb{R}^{D_{1}\times D_{1}}\), which is transformed to an output matrix \(\mathbf{Z}_{2}\in\mathbb{R}^{D_{2}\times D_{2}}\). The number of learnable parameters in the FC-based and the proposed DAFC-based approaches is of the order of \(\mathcal{O}\left(D_{1}^{2}D_{2}^{2}\right)\) and \(\mathcal{O}\left(D_{1}D_{2}\right)\), respectively. Notice that the number of learnable parameters in the DAFC-based transformation grows linearly with the layer dimensions, compared to the quadratic growth of the straightforward FC-based approach.
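A short numeric check of this order-of-magnitude argument, with illustrative dimensions:

```python
# FC vs. DAFC learnable-parameter count for D1 x D1 -> D2 x D2 (biases omitted).
D1, D2 = 16, 64
fc_params = (D1 ** 2) * (D2 ** 2)   # flatten + dense: O(D1^2 D2^2)
dafc_params = 2 * D1 * D2           # row + column transforms: O(D1 D2)
print(fc_params, dafc_params)       # 1048576 vs. 2048
```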
The contribution of this reduction in the number of learnable parameters is twofold. First, conventional NN optimization is gradient-based [50]. Therefore, a significant reduction in the learnable parameter dimension reduces the degrees of freedom in the optimizable parameter space and improves the convergence rate of the gradient-based learning algorithm. Second, a reduction in the learnable parameter dimension can be interpreted as increasing the "inductive bias" of the NN model [51], which conventionally contributes to the NN's statistical efficiency and generalization ability, thus reducing the NN's tendency to overfit the training data.
**3) Nonlinearity**
The proposed DAFC considers an additional degree of nonlinearity compared to the straightforward FC-based approach. A straightforward matrix-to-matrix approach includes an interaction of every neuron in the output matrix with every neuron in the input matrix, followed by an element-wise nonlinear activation function. On the other hand, the proposed DAFC consists of two degrees of nonlinearity, in \(\mathcal{F}_{r}\) and \(\mathcal{F}_{c}\). Although the weight matrices applied as part of \(\mathcal{F}_{r}\) and \(\mathcal{F}_{c}\) are of lower dimension than the weight matrix used in the straightforward approach, the extra degree of nonlinearity can increase the NN's capacity [50]. Therefore, a NN architecture with the proposed DAFC is capable of learning a more abstract and rich transformation of the input data.
### _NN Architecture_
The continuous DOA space is discretized into a \(d\)-dimensional grid: \(\mathbf{\phi}=\begin{bmatrix}\phi_{1}&\phi_{2}&\cdots&\phi_{d}\end{bmatrix}^{T}\). This implies that the entire field-of-view (FOV) is partitioned into \(d\) DOAs, \(\{\phi_{i}\}_{i=1}^{d}\), determined by the selected grid resolution, \(\Delta\phi\triangleq\phi_{i+1}-\phi_{i}\). The proposed NN is designed to represent a mapping from the input set of snapshots, \(\{\mathbf{x}_{k}\}\) given in (1), into the probability of source present in the DOAs \(\{\phi_{i}\}_{i=1}^{d}\). The proposed NN architecture is formulated as follows:
\[\begin{aligned} \mathbf{Z}_{0} &=\mathcal{P}\left(\mathbf{X}\right),\\ \mathbf{z}_{\text{vec}} &=\text{Vec}\left(\mathcal{S}_{6}\left(\ldots\mathcal{S}_{1}\left(\mathbf{Z}_{0}\right)\right)\right),\\ \hat{\mathbf{y}} &=\mathcal{G}_{3}\left(\mathcal{G}_{2}\left(\mathcal{G}_{1}\left(\mathbf{z}_{\text{vec}}\right)\right)\right), \end{aligned} \tag{8}\]
where \(\mathbf{Z}_{0}\) is the output of the pre-processing procedure, denoted as \(\mathcal{P}\left(\cdot\right)\) and detailed in Section III-A, and \(\mathbf{X}\) is the input matrix in (5).
In the next stage, six DAFC instances, represented by \(\mathcal{S}_{1}\left(\cdot\right),\ldots,\mathcal{S}_{6}\left(\cdot\right)\), of different dimensions, with tanh activation for the row transform (\(\mathcal{F}_{r}\) in Section III-B) and ReLU activation for the column transform (\(\mathcal{F}_{c}\) in Section III-B), are used to generate the vectorized signal \(\mathbf{z}_{\text{vec}}\). Our experiments showed that this configuration of row and column activation functions provides the best performance. At the last stage, the signal \(\mathbf{z}_{\text{vec}}\) is processed by three FC layers, where the first two use tanh activation, and the final (output) layer, of size equal to the DOA grid dimension \(d\), uses a sigmoid activation function to output \(\hat{\mathbf{y}}\in[0,1]^{d}\). Thus, \(\{[\hat{\mathbf{y}}]_{i}\}_{i=1}^{d}\) represent the estimated probabilities of a source presence at \(\{\phi_{i}\}_{i=1}^{d}\). Table I and Fig. 2 summarize the parameters and architecture of the proposed NN-based approach.
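To tie the pieces together, the following is a minimal sketch of the feed-forward in (8), reusing the `preprocess` and `dafc` helpers sketched above; the weight lists are hypothetical placeholders standing in for trained parameters, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, dafc_weights, fc_weights):
    """Feed-forward (8): X (L x K, complex) -> y_hat in [0, 1]^d."""
    Z = preprocess(X)                          # K x 2L (Section III-A)
    for W_r, b_r, W_c, b_c in dafc_weights:    # S_1, ..., S_6
        Z = dafc(Z, W_r, b_r, W_c, b_c)
    z = Z.reshape(-1)                          # Vec(.)
    for W, b in fc_weights[:-1]:               # G_1, G_2 with tanh
        z = np.tanh(z @ W + b)
    W, b = fc_weights[-1]                      # G_3 with sigmoid
    return sigmoid(z @ W + b)                  # estimated spatial spectrum
```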
The estimated source DOAs are extracted from the spatial spectrum via _peak_search_ and applying \(0.5\) threshold:
\[\begin{aligned} \{i_{1},\ldots,i_{\hat{N}}\} &=\text{peak\_search}\left(\{[\hat{\mathbf{y}}]_{i}\}_{i=1}^{d}\right),\\ \hat{\mathbf{\Theta}} &=\left\{\phi_{i_{n}}:\left[\hat{\mathbf{y}}\right]_{i_{n}}>0.5\right\}_{n=1}^{\hat{N}}. \end{aligned} \tag{9}\]
Namely, the set of estimated DOAs, \(\hat{\mathbf{\Theta}}\), consists of the grid points corresponding to the peaks of \(\hat{\mathbf{y}}\) that exceed the \(0.5\) threshold. The number of peaks that exceed this threshold is used for source enumeration, and therefore the proposed NN can be utilized as a source enumeration method as well.
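A minimal sketch of this extraction step; scipy's `find_peaks` is used as an assumed stand-in for the paper's peak_search, and the toy spectrum is illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_doas(y_hat, grid):
    """(9): thresholded peaks of the spectrum give DOAs and the source count."""
    peaks, _ = find_peaks(y_hat)
    peaks = peaks[y_hat[peaks] > 0.5]
    return grid[peaks], len(peaks)

grid = np.linspace(-60.0, 60.0, 121)               # 1-degree grid over the FOV
y_hat = np.exp(-0.5 * ((grid - 10.0) / 1.5) ** 2)  # toy spectrum, one source
doas, n_hat = extract_doas(y_hat, grid)            # -> array([10.]), 1
```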
\begin{table}
\begin{tabular}{l l l l} \hline \hline Operator & Output Dimension & Activation & \# Parameters \\ \hline \hline \(\mathcal{P}\) & \(K\times 2L\) & - & - \\ \hline \(\mathcal{S}_{1}\) & \(64\times 256\) & tanh-ReLu & 9,536 \\ \hline \(\mathcal{S}_{2}\) & \(128\times 512\) & tanh-ReLu & 139,904 \\ \hline \(\mathcal{S}_{3}\) & \(256\times 1024\) & tanh-ReLu & 558,336 \\ \hline \(\mathcal{S}_{4}\) & \(64\times 512\) & tanh-ReLu & 541,248 \\ \hline \(\mathcal{S}_{5}\) & \(16\times 256\) & tanh-ReLu & 132,368 \\ \hline \(\mathcal{S}_{6}\) & \(4\times 128\) & tanh-ReLu & 32,964 \\ \hline vec & 512 & - & - \\ \hline \(\mathcal{G}_{1}\) & 1024 & tanh & 525,312 \\ \hline \(\mathcal{G}_{2}\) & 256 & tanh & 262,400 \\ \hline \(\mathcal{G}_{3}\) & \(d\) & sigmoid & 31,097 \\ \hline \hline \end{tabular}
\end{table}
Table I: Specification of the proposed NN architecture for \(K=16,\ L=16,\ d=121\). "tanh-ReLu" activation stands for tanh in \(\mathcal{F}_{r}\) and ReLU in \(\mathcal{F}_{c}\) of each DAFC block. The total number of learnable parameters is \(2,233,165\).
Figure 1: The DAFC mechanism concept. Each row of dimension \(W\) in \(\mathbf{Z}_{in}\), represented by the red color, is transformed by \(\mathcal{F}_{r}\) to a row of dimension \(W^{\prime}\) in the middle matrix, represented by the transparent red color. Next, each column of dimension \(H\) in the middle matrix, represented by the blue color, is transformed by \(\mathcal{F}_{c}\) to a column of dimension \(H^{\prime}\) in \(\mathbf{Z}_{out}\), represented by the transparent blue color.
The dimensionality of the hidden layers in the proposed NN architecture expands in the first layers and then reduces. This trend resembles the NN architecture presented in [49] and characterizes both the DAFC-based and FC-based processing stages. This expansion-reduction structure can be explained by: a) the early NN stages need to learn an expressive and meaningful transformation of the input data by mapping it to a higher-dimensional representation, and b) the late stages need to extract significant features from the early mappings and are therefore limited in dimensionality. In addition, the late stages are adjacent to the output vector and therefore need to be of similar dimension.
### _Loss Function_
The label used for the supervised learning process, \(\mathbf{y}\in\{0,1\}^{d}\), is defined as a sparse binary vector with the value 1, at the grid points that correspond to the source DOAs, and 0, otherwise. In practice, the DOAs in \(\mathbf{\Theta}\) do not precisely correspond to the grid points. Therefore, for each DOA in \(\mathbf{\Theta}\), the nearest grid point in \(\{\phi_{i}\}_{i=1}^{d}\) is selected as the representative grid point in the label. Each training example is determined by the input-label pair, \((\mathbf{X},\mathbf{y})\). Using the NN feed-forward in (8), \(\mathbf{X}\) is used to generate the output spatial spectrum, \(\hat{\mathbf{y}}\), which is considered as the estimated label.
The loss function, \(\mathcal{L}\), is a weighted mean of the binary cross entropy (BCE) loss computed at each grid point:
\[\begin{aligned} \mathcal{L}\left(\mathbf{y},\hat{\mathbf{y}},t\right) &=\frac{1}{d}\sum_{i=1}^{d}w_{i}^{(t)}\,\text{BCE}\left(\left[\mathbf{y}\right]_{i},\left[\hat{\mathbf{y}}\right]_{i}\right),\\ \text{BCE}\left(y,\hat{y}\right) &=-y\log\left(\hat{y}\right)-\left(1-y\right)\log\left(1-\hat{y}\right), \end{aligned} \tag{10}\]
where \(w_{i}^{(t)}\) represents the loss weight of the \(i\)-th grid point at the \(t\)-th epoch. Due to the sparsity of the label \(\mathbf{y}\), the loss value for equally-weighted BCEs evaluated per grid point (\(w_{i}^{(t)}=1\) in (10)) does not significantly increase in the case of a large error in the estimated source/interference probability. This forces the NN to converge to a sub-optimal solution that is prone to "miss" the sources. Therefore, the loss weights, \(\{w_{i}^{(t)}\}_{i=1}^{d}\), are introduced to "focus" the penalty on source/interference grid points.
The loss weight of the \(i\)-th grid point, \(w_{i}^{(t)}\), is determined by the presence of source or interference in the corresponding label entry \(\left[\mathbf{y}\right]_{i}\). This relation is defined using the epoch and label dependent factors \(e_{0}^{(t)},e_{1}^{(t)}\), according to:
\[w_{i}^{(t)}=\begin{cases}1/e_{1}^{(t)},&\text{if $\phi_{i}$ contains source or interference}\\ 1/e_{0}^{(t)},&\text{else}\end{cases}\,. \tag{11}\]
For \(t=0\), the factor \(e_{1}^{(0)}\) is determined by the fraction of label grid points that contain source or interference out of the total label grid points in the training set, and \(e_{0}^{(0)}\) is the corresponding complement. For subsequent epochs, the factors are updated according to a predefined schedule, similarly to a predefined learning rate schedule. The loss weights are updated \(N_{w}\) times with spacing of \(\Delta t\) epochs during training. The update values are determined by updating \(e_{0}^{(t)},e_{1}^{(t)}\), according to the following decaying rule:
\[\begin{aligned} e_{q}^{(t)} &=(1-\beta^{(l)})e_{q}^{(l\Delta t)}+\beta^{(l)},\quad l\Delta t\leq t<(l+1)\Delta t,\\ q &=0,1,\quad l=1,\ldots,N_{w}, \end{aligned} \tag{12}\]
Figure 2: Proposed NN architecture. The pre-processing \(\mathcal{P}\) is described in Section III-A and appears in yellow. The purple matrices denote the concatenation of DAFC blocks, which is detailed in Section III-B. The blue vector represents a vectorization of the last DAFC output, and the orange vectors stand for FC layers with tanh activation function. The last green vector is the output of the last FC layer, which uses a sigmoid activation function and yields the estimated spatial spectrum \(\hat{\mathbf{y}}\).
where \(l\) is the loss weight update iteration, and \(\{\beta^{(l)}\}_{l=1}^{N_{w}}\) represent the loss weight update factors, which uphold \(0\leq\beta^{(l)}\leq 1\). Note that for \(N_{w}\Delta t\leq t\), the weight factor remains \(e_{q}^{(N_{w}\Delta t)}\) during the rest of the training stage. Notice that as \(\beta^{(l)}\to 1\), the corresponding loss weights tend to be equally distributed across the grid points, i.e., \(e_{1}^{(t)}\approx e_{0}^{(t)}\). In this case, an erroneously estimated probability for a source/interference-containing grid point is weighted equally to a grid point containing neither. On the other hand, as \(\beta^{(l)}\to 0\), the corresponding factors uphold \(e_{1}^{(t)}\ll e_{0}^{(t)}\), yielding a significantly larger contribution of source/interference-containing grid points to the loss value. The rule in (12) enables a "transition of focus" throughout the training. That is, during the early epochs \(\beta^{(l)}\to 0\), which assigns more weight to the source/interference-containing areas in the estimated label \(\hat{\mathbf{y}}\) (i.e., the estimated spatial spectrum) and focuses the NN on being correct at source/interference locations. During the later epochs, \(\beta^{(l)}\) is incrementally increased, which relaxes the focus on source/interference from the early epochs, thus reducing erroneously estimated sources in areas that contain neither source nor interference (i.e., "false alarms").
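A minimal numpy sketch of the weighted loss (10)-(11) and the decaying rule (12), under the assumption that the occupancy of each grid point is given as a boolean mask:

```python
import numpy as np

def weighted_bce(y, y_hat, occupied, e0, e1, eps=1e-12):
    """Loss (10) with weights (11): occupied grid points get 1/e1, others 1/e0."""
    w = np.where(occupied, 1.0 / e1, 1.0 / e0)
    bce = -(y * np.log(y_hat + eps) + (1.0 - y) * np.log(1.0 - y_hat + eps))
    return np.mean(w * bce)

def update_factors(e0, e1, beta_l):
    """Decaying rule (12): both factors move toward 1 as beta_l -> 1."""
    return (1.0 - beta_l) * e0 + beta_l, (1.0 - beta_l) * e1 + beta_l

# N_w = 6 log-spaced update factors, matching the schedule in Section IV-A.
betas = np.logspace(np.log10(1e-5), np.log10(0.2), 6)
```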
## IV Performance Evaluation
This section evaluates the performance of the proposed DAFC-based NN approach and compares it to the conventional approaches, summarized in Subsection IV-A1. The data for all considered scenarios is simulated using the measurement model from Section II.
### _Setup & Training_
This work considers a uniform linear array (ULA) with half-wavelength-spaced \(L\) elements. Each simulated example consists of the input-label pair, \((\mathbf{X},\mathbf{y})\), where the input \(\mathbf{X}\) is defined in (5), and the label \(\mathbf{y}\) is defined in Section III-D. The simulation configurations are detailed in Table II. The performance of the proposed approach is evaluated using a **single NN instance**. Therefore, a single NN model is used for various signal-to-interference ratios (SIRs), signal-to-noise ratios (SNRs), interference-to-noise ratios (INRs), DOAs, interference distribution, and the number of sources for joint DOA estimation and source enumeration. The following definitions for the \(m\)-th source are used in all experiments:
\[\text{INR}=\frac{\mathbb{E}\left[\|\sigma_{c}\mathbf{c}_{k}\|^{2}\right]}{\mathbb{E}\left[\|\mathbf{n}_{k}\|^{2}\right]}=\sigma_{c}^{2}/\sigma_{n}^{2}\;, \tag{13}\]
\[\text{SNR}_{m}=\frac{\mathbb{E}\left[\|\mathbf{a}(\theta_{m})s_{k,m}\|^{2}\right]}{\mathbb{E}\left[\|\mathbf{n}_{k}\|^{2}\right]}=\sigma_{m}^{2}/\sigma_{n}^{2}\;, \tag{14}\]
\[\text{SIR}_{m}=\frac{\mathbb{E}\left[\|\mathbf{a}(\theta_{m})s_{k,m}\|^{2}\right]}{\mathbb{E}\left[\|\sigma_{c}\mathbf{c}_{k}\|^{2}\right]}=\sigma_{m}^{2}/\sigma_{c}^{2}\;. \tag{15}\]
The NN optimization for all evaluated architectures is performed using the loss function in (10) and the Adam optimizer [52] with a learning rate of \(10^{-3}\) and a plateau learning rate scheduler with a decay of \(0.905\). The set of loss weight update factors, \(\{\beta^{(l)}\}_{l=1}^{N_{w}}\), in (12) is chosen as the evenly-spaced logarithmic scale between \(10^{-5}\) and \(0.2\) with \(N_{w}=6\), that is \(\{10^{-5},7.25\cdot 10^{-5},5.25\cdot 10^{-4},3.8\cdot 10^{-3},2.78\cdot 10^{-2},0.2\}\). The chosen batch size is \(512\), the number of epochs is \(500\), and early stopping is applied based on the last \(200\) epochs.
#### IV-A1 DOA Estimation Approaches
This subsection briefly summarizes the conventional DOA estimation approaches. The performance of the proposed approach is compared to the conventional MVDR, CNN, and FC-based NN. All the NN-based approaches were implemented using a similar number of layers and learnable parameters. In addition, the FC-based NN and CNN were optimized using the same learning algorithm and configurations.
_(a) Conventional Adaptive Beamforming_
The MVDR [3] estimator is based on adaptive beamforming, and it is the maximum likelihood estimator in the presence of unknown Gaussian interference [53]. The MVDR estimates DOAs by a peak search on the MVDR spectrum:
\[P_{MVDR}\left(\phi\right)=\frac{1}{\mathbf{a}^{H}\left(\phi\right)\hat{ \mathbf{R}}_{x}^{-1}\mathbf{a}\left(\phi\right)}\;, \tag{16}\]
where \(\hat{\mathbf{R}}_{x}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{x}_{k}\mathbf{x}_{k}^{H}\) is the sample covariance matrix estimator. Notice that the MVDR spectrum utilizes only second-order statistics of the received signal \(\mathbf{x}_{k}\). For Gaussian-only interference (i.e. \(\mathbf{c}_{k}=0\) in (1)), the second-order statistics contains the entire statistical information. However, for non-Gaussian interference, information from higher-order statistics is needed.
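For reference, a minimal numpy sketch of this baseline; the diagonal loading added before inversion is a common numerical safeguard and an assumption on our part, not part of (16):

```python
import numpy as np

def mvdr_spectrum(X, grid_deg, loading=1e-6):
    """MVDR spectrum (16) over a DOA grid; X is the L x K snapshot matrix."""
    L, K = X.shape
    R = (X @ X.conj().T) / K                        # sample covariance R_hat_x
    R_inv = np.linalg.inv(R + loading * np.eye(L))  # loaded inverse
    P = np.empty(len(grid_deg))
    for i, phi in enumerate(grid_deg):
        a = np.exp(1j * np.pi * np.arange(L) * np.sin(np.deg2rad(phi)))
        P[i] = 1.0 / np.real(a.conj() @ R_inv @ a)
    return P

# P = mvdr_spectrum(X, np.linspace(-60, 60, 121)); DOAs follow via peak search.
```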
_(b) CNN Architecture_
We consider a CNN-based DOA estimation approach using a CNN architecture that is similar to the architecture provided in [38]. The input to the CNN, of dimension \(L\times L\times 3\), consists of the real, imaginary, and angle parts of \(\hat{\mathbf{R}}_{x}\). The CNN architecture consists of \(4\) consecutive CNN blocks, such that each block contains a convolutional layer, a batch normalization layer, and a ReLU activation. The convolutional layers consist of \([128,256,256,128]\) filters. Kernel sizes of \(3\times 3\) for the first block and \(2\times 2\) for the following three blocks are used. Similarly to [38], \(2\times 2\) strides are used for the first block and \(1\times 1\) for the following three blocks. Next, a flatten layer is used to vectorize the hidden tensor, and \(3\) FC layers of dimensions \(1024,\ 512,\ 256\) are used with a ReLU activation and Dropout of \(30\%\). Finally, the output layer is identical to that of the proposed DAFC-based NN, as detailed in Subsection III-C. The considered loss function is identical to that of the proposed DAFC-based approach in (10).
\begin{table}
\begin{tabular}{l l l} \hline \hline Notation & Description & Value \\ \hline \hline \(M_{max}\) & Maximal number of sources & \(4\) \\ \hline \(L\) & Number of sensors & \(16\) \\ \hline \(K\) & Number of snapshots & \(16\) \\ \hline \(d\) & Angular grid dimension & \(121\) \\ \hline \(\Delta\phi\) & Angular grid resolution & \(1^{\circ}\) \\ \hline FOV & Field of view & \([-60^{\circ},60^{\circ}]\) \\ \hline \(\sigma_{n}^{2}\) & Thermal noise power & \(1\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Simulation Configurations.
The number of trainable parameters in the considered CNN architecture accounts for \(3,315,449\). Notice that the CNN-based architecture utilizes the information within the sample covariance matrix and is, therefore, limited to second-order statistics only.
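A minimal sketch of how such a three-channel covariance input can be formed:

```python
import numpy as np

def cnn_input(X):
    """X: L x K complex snapshots -> L x L x 3 real tensor (Re, Im, angle)."""
    R = (X @ X.conj().T) / X.shape[1]   # sample covariance R_hat_x
    return np.stack([R.real, R.imag, np.angle(R)], axis=-1)
```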
_(c) FC Architecture_
A straightforward FC-based architecture, as mentioned in Subsection III-B, was also implemented. The data matrix, \(\mathbf{X}\), is vectorized, and the real and imaginary parts of the values are concatenated to obtain a \(2KL\)-dimension input vector. The selected hidden layers are of sizes \([512,512,1024,1024,512,256]\), where each hidden layer is followed by a tanh activation function. The output layer is identical to that of the proposed DAFC-based NN approach, as detailed in Subsection III-C. The considered loss function is (10), and the number of trainable parameters in the FC-based NN accounts for \(2,787,449\). Notice that the FC-based NN architecture utilizes all the measurements by interacting with all samples in the input data. However, this processing is not specifically tailored to the structure of information within the measurements. On the other hand, the proposed DAFC-based NN utilizes the information structure to process the input data. Therefore, for the considered DOA estimation problem, the "inductive bias" [51] of the FC-based approach is improper and can result in an under-fitted NN architecture.
#### IV-A2 Performance Evaluation Metrics
This subsection discusses the criteria for the performance evaluation of the proposed DOA estimation approach. In this work, similarly to [38], the DOA estimation accuracy for a set of sources is evaluated by the Hausdorff distance between sets. The Hausdorff distance, \(d_{H}\), between the sets \(\mathcal{A}\) and \(\mathcal{B}\) is defined as:
\[\begin{aligned} d_{H}\left(\mathcal{A},\mathcal{B}\right) &=\max\left\{d\left(\mathcal{A},\mathcal{B}\right),d\left(\mathcal{B},\mathcal{A}\right)\right\},\\ d\left(\mathcal{A},\mathcal{B}\right) &=\sup\left\{\inf\left\{\left|\alpha-\beta\right|:\beta\in\mathcal{B}\right\}:\alpha\in\mathcal{A}\right\}. \end{aligned} \tag{17}\]
Notice that, in general, \(d\left(\mathcal{A},\mathcal{B}\right)\neq d\left(\mathcal{B},\mathcal{A}\right)\). Let \(\mathbf{\Theta}=\{\theta_{m}\}_{m=1}^{M}\) and \(\hat{\mathbf{\Theta}}=\{\hat{\theta}_{m}\}_{m=1}^{\hat{N}}\) be the sets of true and estimated DOAs, respectively. The estimation error is obtained by evaluating the Hausdorff distance, \(d_{H}(\mathbf{\Theta},\hat{\mathbf{\Theta}})\). We define the root mean squared distance (RMSD) for an arbitrary set of \(N\) examples (e.g., the test set), \(\left\{\mathbf{X}^{(n)},\mathbf{y}^{(n)}\right\}_{n=1}^{N}\), with the corresponding true and estimated DOAs, \(\left\{\mathbf{\Theta}^{(n)},\hat{\mathbf{\Theta}}^{(n)}\right\}_{n=1}^{N}\), as:
\[\text{RMSD}\triangleq\sqrt{\frac{1}{N}\sum_{n=1}^{N}d_{H}^{2}\left(\mathbf{ \Theta}^{(n)},\hat{\mathbf{\Theta}}^{(n)}\right)}\;. \tag{18}\]
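A minimal sketch of these set metrics:

```python
import numpy as np

def directed(A, B):
    """d(A, B) in (17): worst-case distance from A to its nearest point in B."""
    return max(min(abs(a - b) for b in B) for a in A)

def hausdorff(A, B):
    """d_H(A, B) in (17)."""
    return max(directed(A, B), directed(B, A))

def rmsd(true_sets, est_sets):
    """RMSD (18) over a collection of true/estimated DOA sets."""
    return np.sqrt(np.mean([hausdorff(t, e) ** 2
                            for t, e in zip(true_sets, est_sets)]))

# rmsd([[0.55]], [[1.0]])  -> 0.45
```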
Angular resolution is one of the key criteria for DOA estimation performance. The probability of resolution is commonly used as a performance evaluation metric for angular resolution. In the considered problem, resolution between two sources and between source and interference are used for performance evaluation. For an arbitrary example with \(M\) sources, the resolution event \(A_{res}\) is defined as:
\[A_{res}\left(\mathbf{\Theta},\hat{\mathbf{\Theta}}\right)\triangleq\begin{cases}1,&\xi_{m}\leq 2^{\circ}\ \forall m\text{ and }|\hat{\mathbf{\Theta}}|\geq M\\ 0,&\text{else}\end{cases}\;, \tag{19}\]

\[\xi_{m}\triangleq\min_{\hat{\theta}\in\hat{\mathbf{\Theta}}}|\theta_{m}-\hat{\theta}|,\;m=1,\ldots,M\;.\]
That is, a scene with \(M\) sources is considered successfully resolved if, for each true DOA, a) there exists a close-enough estimated DOA, \(\hat{\theta}\in\hat{\mathbf{\Theta}}\), that is at most \(2^{\circ}\) apart, and b) there exist at least \(M\) DOA estimates. According to (19), the probability of resolution can be defined as:
\[P_{res}=\frac{1}{N}\sum_{n=1}^{N}A_{res}\left(\mathbf{\Theta}^{(n)},\hat{ \mathbf{\Theta}}^{(n)}\right)\;. \tag{20}\]
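A minimal sketch of (19)-(20), with the \(2^{\circ}\) tolerance as defined above:

```python
import numpy as np

def resolved(true_doas, est_doas, tol=2.0):
    """Resolution event A_res in (19)."""
    if len(est_doas) < len(true_doas):
        return 0
    return int(all(min(abs(t - e) for e in est_doas) <= tol for t in true_doas))

def prob_resolution(true_sets, est_sets):
    """P_res in (20): fraction of resolved examples."""
    return float(np.mean([resolved(t, e) for t, e in zip(true_sets, est_sets)]))
```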
#### IV-A3 Data Sets
This subsection describes the structure and formation of _Training & Test sets_.
_(a) Training Set_
The considered training set contains \(N_{train}=10,000\) examples, re-generated at each epoch. For each example, i.e., an input-label pair \((\mathbf{X},\mathbf{y})\), the number of DOA sources, \(M\), is generated i.i.d. from the uniform distribution over \(\{1,\ldots,M_{max}\}\). The training set contains \(10\%\) interference-free examples and \(90\%\) interference-containing examples. Out of the interference-containing examples, \(90\%\) are generated such that the source DOAs, \(\{\theta_{m}\}_{m=1}^{M}\), and the interference's DOA, \(\theta_{c}\), are distributed uniformly over the simulated FOV. The remaining \(10\%\) are generated such that \(\theta_{c}\) is distributed uniformly over the FOV, and the source DOAs, \(\{\theta_{m}\}_{m=1}^{M}\), are distributed uniformly over the interval \([\theta_{c}-8^{\circ},\theta_{c}+8^{\circ}]\). This data set formation enables "focusing" the NN training on the challenging scenarios where the source and interference DOAs are closely spaced. The generalization capability of the proposed NN to variations in interference statistics is achieved by drawing the interference angular spread parameter, \(\rho\), from the uniform distribution \(\text{U}\left([0.7,0.95]\right)\), and the interference spikiness parameter, \(\nu\), from the uniform distribution \(\text{U}\left([0.1,1.5]\right)\). The INR for each interference-containing example and \(\{\text{SIR}_{m}\}_{m=1}^{M}\) or \(\{\text{SNR}_{m}\}_{m=1}^{M}\) are drawn independently according to Table III.
_(b) Test Set_
The test set consists of \(N_{test}=20,000\) examples. The results are obtained by averaging the evaluated performance over \(50\) independent test set realizations. Considering the low-snapshot support regime, the number of snapshots is set to \(K=16\), except for experiment (c) in IV-B2. Considering heavy-tailed interference, the spikiness parameter is set to \(\nu=0.2\). The INR is set to \(\text{INR}=5\ dB\), and the interference angular spread parameter is set to \(\rho=0.9\). The signal amplitude was set to be identical for all sources, \(\sigma_{1}=\cdots=\sigma_{M}\), except for experiment (b) in IV-B2.
### _Experiments_
#### IV-B1 Single Source Within Interference
In this scenario, the ability to resolve a single source from interference is evaluated. Let \(M=1\) with \(\theta_{1}=0.55^{\circ}\), and \(\theta_{c}=\theta_{1}+\Delta\theta_{c}\) such that \(\Delta\theta_{c}\) is the angular separation between the single source and interference. The \(0.55^{\circ}\) offset is considered to impose a realistic off-grid condition. Fig. 3 shows the RMSD and probability of resolution for all evaluated approaches.
Fig. 3(a) shows that the FC-based NN approach does not manage to resolve the single source from the interference for any of the evaluated angular separations. This result supports the under-fitting limitation of the FC-based NN approach for DOA estimation, which can be explained by the architecture processing the input data as-is, without any structured transformation or model-based pre-processing.
The MVDR and CNN performance in terms of resolution are similar, since both rely only on second-order statistics, which is sufficient in scenarios with widely separated sources and interference. Fig. 3(a) shows that the proposed DAFC-based NN approach outperforms all other considered approaches in low angular separation scenarios. This can be explained by the fact that the DAFC uses the high-order statistics needed for the resolution of closely spaced sources and interference.
Fig. 3(b) shows the RMSD of all considered DOA estimation approaches. The proposed DAFC-based NN approach outperforms the other tested approaches at low SIR. At high SIR and small angular separation, \(\Delta\theta_{c}=5^{\circ}\), the interference is negligible with respect to the strong source signal, and therefore, the DAFC-based, CNN, and MVDR approaches obtain similar performance. For large angular separation, \(\Delta\theta_{c}=30^{\circ}\), the source and the interference are sufficiently separated, and therefore, DOA estimation errors are mainly induced by the interference DOA, \(\theta_{c}\). The MVDR spectrum contains a peak at \(\theta_{c}=30.55^{\circ}\), and therefore, the MVDR's \(\text{RMSD}=30^{\circ}\) is approximately constant. The NNs are trained to output a \(0\)-probability for the interference; therefore, the NN-based approaches (FC, CNN, and DAFC) achieve a smaller DOA estimation error. The DAFC-based NN and CNN utilize structured transformations, which better fit the input data, and therefore, they outperform the FC-based NN approach in terms of RMSD.
#### IV-B2 Resolving Two Sources from Interference
This subsection evaluates the performance of the tested DOA estimation approaches in scenarios with two sources within AWGN and interference.
_(a) Resolution of Equal-Strength Sources_
In the following experiment, the resolution between two equal-power sources, \(M=2\), with \(\theta_{1}=-\frac{\Delta\theta}{2}+0.55^{\circ}\), and \(\theta_{2}=\frac{\Delta\theta}{2}+0.55^{\circ}\), is evaluated. The off-grid additional \(0.55^{\circ}\) offset to the \(\Delta\theta\) angular separation between the sources represents the practical scenario. The interference at \(\theta_{c}=0.55^{\circ}\) influences the two sources similarly. Fig. 4 shows the probability of resolution of the tested approaches in scenarios with (a) the AWGN only and (b) spatially-colored interference.
The FC-based NN approach does not resolve the two targets in either evaluated scenario. Subplot (a) in Fig. 4 shows that the proposed DAFC-based NN approach outperforms the MVDR and the CNN in low-SNR, small angular separation scenarios due to its generalization ability to spatially-white interference. Subplot (b) in Fig. 4 shows that at a low SIR of \(\text{SIR}=-5\ dB\), the performances of the MVDR and CNN significantly degrade compared to the proposed DAFC-based NN approach. Comparing the subplots in Fig. 4, notice that at \(\text{SIR}=-5\ dB\), the MVDR fails to resolve sources with angular separation \(\Delta\theta<20^{\circ}\) due to the presence of the heavy-tailed spatially-colored interference in the proximity of the sources. However, the proposed DAFC-based NN approach mitigates this interference and resolves the sources, and hence outperforms the other tested approaches at both \(\text{SIR}=0\ dB\) and \(\text{SIR}=-5\ dB\).
Subplot (b) in Fig. 4 shows a non-monotonic trend of the CNN and MVDR performance at \(4^{\circ}<\Delta\theta<18^{\circ}\) and \(\text{SIR}=-5\ dB\).
Figure 3: Scenario with a single source at \(\theta_{1}=0.55^{\circ}\) and interference located at \(\theta_{c}=\theta_{1}+\Delta\theta_{c}\). (a) probability of resolution and (b) RMSD.
\begin{table}
\begin{tabular}{l l l} \hline \hline Notation & Description & Value \\ \hline \hline \(\rho\) & Interference angular spread parameter & \(\sim\text{U}\left([0.7,0.95]\right)\) \\ \hline \(\nu\) & Interference spikiness parameter & \(\sim\text{U}\left([0.1,1.5]\right)\) \\ \hline INR & INR & \(\sim\text{U}\left([0,10]\right)\ [dB]\) \\ \hline \(\text{SIR}_{m}\) & SIR of \(m\)-th source & \(\sim\text{U}\left([-10,10]\right)\ [dB]\) \\ \hline \(\text{SNR}_{m}\) & SNR of \(m\)-th source & \(\sim\text{U}\left([-10,10]\right)\ [dB]\) \\ \hline \hline \end{tabular}
\end{table}
Table III: Training set parameters. The \(\text{SNR}_{m}\) distribution applies to interference-free examples.
For \(4^{\circ}<\Delta\theta<8^{\circ}\), the sources are closer to the peak of the interference's lobe and are therefore less mitigated by it. As \(\Delta\theta\) initially increases, \(8^{\circ}<\Delta\theta<12^{\circ}\), the sources reach DOAs in the proximity of the interference lobe's "nulls", which explains the reduction in resolution; as \(\Delta\theta\) further increases, \(16^{\circ}<\Delta\theta\), the sources are sufficiently separated from the interference such that the resolution increases. As a result, the MVDR and CNN-based approaches, which use second-order statistics only, cannot resolve the sources in the vicinity of a stronger interference.
Fig. 5 shows the average spatial spectrum of all tested approaches for \(\Delta\theta=12^{\circ}\) and \(\text{SIR}=-5\ dB\). The average spatial spectrum of the FC-based NN approach does not show two prominent peaks, which explains its poor probability of resolution in Fig. 4. The MVDR's "bell-shaped" spatial spectrum does not contain two prominent peaks at \(\theta_{1,2}\), since the interference "masks" the two sources. The CNN and the proposed DAFC-based NN approaches show two peaks in the average spatial spectrum. The peaks in the CNN's average spatial spectrum are lower, resulting in a lower probability of resolution. The average spatial spectrum of the proposed DAFC-based NN approach contains two high peaks, resulting in its superior probability of resolution in Fig. 4.
_(b) Resolution of Unequal-Power Sources_
Fig. 6 shows the probability of resolution in a scenario with two sources, \(M=2\), at \(\theta_{1}=-\Delta\theta/2+0.55^{\circ}\) and \(\theta_{2}=+\Delta\theta/2+0.55^{\circ}\), with interference located between the sources at \(\theta_{c}=0.55^{\circ}\). The source powers are set such that \(\text{SIR}_{1}=\text{SIR}_{2}+10\ dB\). Comparing Fig. 6 to Fig. 4(b), the competing methods show similar trends, except for the degradation of the CNN's probability of resolution in the \(\text{SIR}=0\ dB\) case. On the other hand, the proposed DAFC-based NN approach outperforms the other tested approaches in terms of the probability of resolution. Therefore, Fig. 6 demonstrates the generalization ability of the proposed DAFC-based NN approach to differences in source strengths.
_(c) Effect of the Number of Snapshots on the Resolution_
This experiment investigates the influence of the number of snapshots, \(K\), on the ability to resolve two proximate sources from heavy-tailed spatially-colored interference. The equal-strength resolution scenario is repeated using \(K=4,\ 8,\ 16,\ 32,\ 64\) with different instances of NN training for each \(K\) value. Fig. 7 shows the probability of resolution for two equal-strength sources at \(\theta_{1,2}=\theta_{c}\pm\Delta\theta/2\) for \(\Delta\theta=12^{\circ}\) and \(\theta_{c}=0.55^{\circ}\).
The FC-based NN approach fails to resolve the two sources. For \(\text{SIR}=0\ dB\), the MVDR, CNN, and DAFC-based NN approaches achieve a monotonically increasing probability of resolution with increasing \(K\).
Figure 4: Probability of resolution for two sources located at \(\theta_{1,2}=\theta_{c}\pm\Delta\theta/2\), and interference located at \(\theta_{c}=0.55^{\circ}\). (a) AWGN-only scenario and (b) interference-containing scenario.
Figure 5: Spatial spectrum, two sources with \(\text{SIR}=-5\ dB\) located at \(\theta_{1,2}=\theta_{c}\pm\Delta\theta/2\) with \(\Delta\theta=12^{\circ}\) and \(\theta_{c}=0.55^{\circ}\). The dashed blue lines represent the mean spatial spectrum, and the color fill represents the standard deviation around the mean obtained from \(2,000\) i.i.d. examples. The solid vertical orange lines represent the true source DOAs, and the dashed vertical green line represents the interference DOA.
The proposed DAFC-based NN approach slightly outperforms the other tested approaches. At a low SIR of \(\text{SIR}=-5\ dB\), the proposed DAFC-based NN approach significantly outperforms the other tested approaches. This can be explained by the fact that increasing \(K\) increases the probability for outliers to be present in the input data matrix, \(\mathbf{X}\). Therefore, the estimated autocorrelation matrix, \(\hat{\mathbf{R}}_{x}\), is more likely to be biased by the interference-related outliers, which results in the interference "masking" the sources. The proposed DAFC-based NN approach is immune to these outliers and successfully exploits the information from the additional snapshots to improve the probability of resolution.
Figs. 4, 5, 6, and 7 show the ability of the proposed DAFC-based NN approach to utilize the information structure of the input data by exploiting higher-order statistics and performing a domain-fitted transformation, providing superior resolution in the case of proximate heavy-tailed spatially-colored interference, low SIR, and small sample size.
#### IV-B3 Multiple Source Localization
The performances of the tested DOA estimation approaches are evaluated and compared in a multi-source scenario. Four sources (\(M=4\)) were simulated with angular separation \(\Delta\theta\): \(\{\theta_{1},\theta_{2},\theta_{3},\theta_{4}\}=\theta_{c}+\{-2\Delta\theta,-\Delta\theta,\Delta\theta,2\Delta\theta\}\), where \(\theta_{c}=0.51^{\circ}\) represents a realistic off-grid condition. The RMSD of the evaluated methods is depicted in Fig. 8. The proposed DAFC-based NN approach outperforms the other tested approaches at low SIR (\(\text{SIR}<0\ dB\)) for both large and small angular separations. For high SIR and low angular separation, \(\Delta\theta=5^{\circ}\), the MVDR achieves the lowest RMSD. The reason is that, in this case, the interference is negligible with respect to the lobe of the strong source in the MVDR's spectrum. However, at high angular separation, \(\Delta\theta=20^{\circ}\), the proposed DAFC-based NN approach significantly outperforms the other tested approaches. This is explained by Fig. 9, which shows the spectrum of the tested DOA estimation approaches. Notice that the proposed DAFC-based NN mitigates the interference, while the spectra of the other tested approaches contain high peaks at the interference DOA, \(\theta_{c}\). These peaks increase the Hausdorff distance in (17), increasing the RMSD of the other tested approaches in Fig. 8.
#### IV-B4 Multiple Source Enumeration
The source enumeration performance is evaluated in this experiment. For \(M\) sources, the DOAs are selected as the first \(M\) values of the set \(\{10.51^{\circ},-9.49^{\circ},-19.49^{\circ},10.51^{\circ}\}\). The interference is located at \(\theta_{c}=0.51^{\circ}\). The proposed DAFC-based NN approach is compared to the MDL and AIC [19]. Fig. 10 shows the source enumeration confusion matrices for the MDL, AIC, and the proposed DAFC-based NN with \(\text{SIR}=0\ dB\).
Figs. 10(a) and 10(b) show that for both the MDL and the AIC, the predicted number of sources has a constant bias for each true \(M\) due to the spatially-colored interference. Fig. 10(c) shows the source enumeration performance of the proposed DAFC-based NN approach in the presence of spatially-colored interference. The DAFC-based NN identifies the interference and does not count it as one of the sources by outputting a low probability for angular grid points near \(\theta_{c}\), resulting in better source enumeration performance.
#### V-B5 Loss Weights
This experiment evaluates the effect of the loss weight update factors, \(\{\beta^{(l)}\}_{l=1}^{N_{\text{w}}}\), introduced in (12), on the confidence level in the spatial spectrum. Let \(\mathbf{\bar{B}}\) denote the set of \(\{\beta^{(l)}\}_{l=1}^{N_{\text{w}}}\) values used in the proposed approach. The loss weights, \(\{w_{i}^{(t)}\}_{i=1}^{d}\), are defined by the factors \(e_{0}^{(t)},e_{1}^{(t)}\) according to (11), and are introduced to provide a trade-off between the penalty obtained on source/interference and the penalty obtained for the rest of the output spatial spectrum.
For comparison, we set \(\mathbf{B}_{0}=\{10^{-6},3.98\cdot 10^{-6},1.58\cdot 10^{-5},6.31\cdot 10^{-5},2.51\cdot 10^{-4},10^{-3}\}\) and \(\mathbf{B}_{1}=\{10^{-3},3.98\cdot 10^{-3},0.0158,0.063,0.25,0.1\}\) as two sets of loss weight update factors. For \(\mathbf{B}_{0}\), the loss weight update factors are closer to \(0\); hence the loss weights emphasize the source/interference, since \(e_{1}^{(t)}\ll e_{0}^{(t)}\), which, according to (11), translates to larger \(w_{i}^{(t)}\) for source/interference grid points. For \(\mathbf{B}_{1}\), the values are closer to \(1\); hence the loss weights are more equally distributed among grid points, since \(e_{1}^{(t)}\approx e_{0}^{(t)}\). The experiment in V-B1 is repeated here for the DAFC-based NN approach with the two additional sets \(\mathbf{B}_{0}\) and \(\mathbf{B}_{1}\) described above.
Let \(\hat{p}_{1}\) represent the probability assigned for the source-containing grid point in the estimated label \(\hat{\mathbf{y}}\). Let \(\hat{p}_{0}\) represent the maximum over probabilities assigned for non-source grid points in \(\hat{\mathbf{y}}\), excluding a \(5\)-grid point guard interval around the source. Fig. 11 shows \(\hat{p}_{1}\) and \(\hat{p}_{0}\) for various angular separations between the source and interference for \(\text{SIR}=-5\ dB\). For \(\mathbf{B}_{0}\), the source's contribution to the loss value is substantially higher, which results in a higher probability for the source-containing grid point. However, this results in a higher probability obtained for non-source grid points, since their contribution to the loss value is negligible compared to the source-containing grid point, increasing "false-alarm" peaks in the spatial spectrum, subsequently increasing the estimation error. Correspondingly, for \(\mathbf{B}_{1}\) the source's contribution to the loss value is less significant, which results in low probability assigned for the source-containing grid points, as well as low probability for non-source grid points.
## VI Conclusion
This work addresses the problem of DOA estimation and source enumeration of an unknown number of sources within heavy-tailed, non-Gaussian, and spatially colored interference. A novel DAFC-based NN approach is proposed for this
Figure 10: Confusion matrix for source enumeration, \(\text{SIR}=0\ dB\), sources located at \(\{10.51^{\circ},-9.49^{\circ},-19.49^{\circ},10.51^{\circ}\}\). (a) MDL, (b) AIC, (c) proposed DAFC-based NN.
Figure 9: Spatial spectrum, four sources with \(\text{SIR}=0\ dB\) located at \(\{\theta_{1},\theta_{2},\theta_{3},\theta_{4}\}=\theta_{c}+\{-2\Delta\theta,- \Delta\theta,\Delta\theta,2\Delta\theta\}\), where \(\theta_{c}=0.51^{\circ}\) and \(\Delta\theta=20^{\circ}\). The dashed blue lines represent the mean spatial spectrum, and the color fill represents the standard deviation around the mean obtained from \(2,000\) i.i.d. examples. The solid vertical orange lines represent the true source DOAs and the dashed vertical green line represents the interference DOA.
problem. The DAFC mechanism applies a structured transformation capable of exploiting the interference non-Gaussianity for its mitigation while retaining a low complexity of learnable parameters. The proposed DAFC-based NN approach is optimized to provide an interference-mitigated spatial spectrum using a loss weight scheduling routine, performing DOA estimation and source enumeration using a unified NN.
The performance of the proposed approach is compared to the MVDR, CNN-based, and FC-based approaches. Simulations showed the superiority of the proposed DAFC-based NN approach in terms of probability of resolution and estimation accuracy, evaluated by RMSD, especially in scenarios with weak signal power, a small number of snapshots, and nearby interference. The source enumeration performance of the proposed DAFC-based NN approach was compared to the MDL and AIC. It was shown that in the considered scenarios, the proposed approach outperforms the MDL and the AIC in source enumeration accuracy.
|
2306.15427 | Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and
New Directions | Despite its success in the image domain, adversarial training did not (yet)
stand out as an effective defense for Graph Neural Networks (GNNs) against
graph structure perturbations. In the pursuit of fixing adversarial training
(1) we show and overcome fundamental theoretical as well as practical
limitations of the adopted graph learning setting in prior work; (2) we reveal
that more flexible GNNs based on learnable graph diffusion are able to adjust
to adversarial perturbations, while the learned message passing scheme is
naturally interpretable; (3) we introduce the first attack for structure
perturbations that, while targeting multiple nodes at once, is capable of
handling global (graph-level) as well as local (node-level) constraints.
Including these contributions, we demonstrate that adversarial training is a
state-of-the-art defense against adversarial structure perturbations. | Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann | 2023-06-27T12:38:11Z | http://arxiv.org/abs/2306.15427v2 | # Adversarial Training for Graph Neural Networks
###### Abstract
Despite its success in the image domain, adversarial training does not (yet) stand out as an effective defense for Graph Neural Networks (GNNs) against graph structure perturbations. In the pursuit of fixing adversarial training _(1)_ we show and overcome fundamental theoretical as well as practical limitations of the adopted graph learning setting in prior work; _(2)_ we reveal that more flexible GNNs based on learnable graph diffusion are able to adjust to adversarial perturbations, while the learned message passing scheme is naturally interpretable; _(3)_ we introduce the first attack for structure perturbations that, while targeting multiple nodes at once, is capable of handling global (graph-level) as well as local (node-level) constraints. Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.1
Footnote 1: Project page: [https://www.cs.cit.tum.de/daml/adversarial-training/](https://www.cs.cit.tum.de/daml/adversarial-training/)
## 1 Introduction
Adversarial training has weathered the test of time and stands out as one of the few effective measures against adversarial perturbations. While this is particularly true for numerical input data like images [1], it is not yet established as an effective method to defend predictions of Graph Neural Networks (GNNs) against graph structure perturbations.
Although previous work reported some robustness gains by using adversarial training in GNNs [2; 3], closer inspection highlights two main shortcomings: (i) the learning setting leads to a biased evaluation and (ii) the studied architectures seem to struggle to learn robust representations.
_(i) Learning setting._ In the previously studied transductive learning settings (see Table 2), clean validation and test nodes are known during training. Thus, _perfect robustness_ can be achieved by memorizing the training graph. This can lead to a false impression of robustness. Indeed, we find that the gains of the adversarial training approach from Xu et al. [2] largely stem from exploiting this flaw. Motivated by this finding, we revisit adversarial training for node classification under structure perturbations in a fully inductive setting (i.e., validation/test nodes are excluded during training). Thus, our results do not suffer from the same evaluation pitfall and pertain to a more challenging and realistic scenario.
_(ii) GNN architectures._ The studied GNNs like GCN [4] or APPNP [5] all seem to share the same fate as they have a limited ability to adjust their message passing to counteract adversarial perturbations.
Figure 1: Robust diffusion (GPRGNN) vs. GCN on (inductive) Cora-ML. We report adversarial training and standard training.
Instead, we propose to use flexible message-passing schemes based on _learnable diffusion_ that can approximate any spectral graph filter. An adversarially trained graph diffusion significantly improves robustness compared to previous work, while yielding an interpretable message-passing scheme.
_More realistic perturbations._ Inspecting robust diffusion indicates that previously studied perturbation sets [2; 3] are overly permissive and unrealistic. Prior work only constrained the _global_ (graph-level) number of inserted and removed edges, despite studying node classification, where predictions are inherently _local_ (node-level). As a result, commonly, an adversary has been allowed to add or remove a number of edges that exceeds the average node degree by 100 times or more, providing ample leeway for a complete change of many node neighborhoods through rewiring. E.g., on Cora-ML, a 5% edge-change budget corresponds to 798 changed edges, while the average degree is only 5.68. To prevent such degenerate perturbations [6], we propose Locally constrained Randomized Block Coordinate Descent (LR-BCD), the first attack that, while targeting multiple nodes at once, is able to constrain _both_ local perturbations per node and global ones. Even though local constraints are well-studied for attacks targeting a single node [7], surprisingly, there was no attack incorporating these for jointly attacking a set of nodes at once. Thus, LR-BCD fills an important gap in the literature, while being efficient and effective.
Addressing the aforementioned points, we substantially boost empirical and certifiable adversarial robustness up to the level of state-of-the-art defenses for GNNs. For example, in Figure 1, we show a 4-fold increase in robustness over standard training (measured as the decline in accuracy after an attack).
**Contributions. (1)** We show theoretically and empirically that in the transductive learning setting, previously studied for adversarial training, one can trivially achieve perfect robustness. We show that the full inductive setting does not have this limitation and, consequently, revisit adversarial training (see Section 2). **(2)** By leveraging more flexible GNN architectures based on learnable diffusion, we significantly improve upon the robustness under adversarial training (see Section 3). **(3)** We implement more realistic adversaries for structure perturbations with a novel attack that constrains perturbations globally and locally for each node in the graph (see Section 4).
## 2 Learning Settings: Transductive vs. Inductive Adversarial Training
We first give background on adversarial training and self-training. Then, in Section 2.1, we discuss the transductive learning setting and its shortcomings in contrast to inductive learning in the context of robust generalization. Following previous work, we focus on node classification and assume we are given an undirected training graph \(\mathcal{G}=(\mathbf{X},\mathbf{A})\) consisting of \(n\) nodes of which \(m\) are labeled. By \(\mathbf{A}\in\{0,1\}^{n\times n}\) we denote the adjacency matrix, by \(\mathbf{X}\in\mathbb{R}^{n\times d}\) the feature matrix, and by \(y_{i}\in\{1,...,C\}\) the label of a node \(i\), summarized for all nodes in the vector \(\mathbf{y}\).
**Adversarial Training.** Adversarial training optimizes the parameters of a GNN \(f_{\theta}\) on an adversarially perturbed input graph with the goal to increase robustness against test-time (evasion) attacks (i.e., attacks against the test graph after training). During training the following problem is solved:
\[\operatorname*{arg\,min}_{\theta}\max_{\tilde{\mathcal{G}}\in\mathcal{B}( \mathcal{G})}\sum_{i=1}^{m}\ell(f_{\theta}(\tilde{\mathcal{G}})_{i},y_{i}) \tag{1}\]
where \(f_{\theta}(\tilde{\mathcal{G}})_{i}\) is the GNN-prediction for node \(i\) based on the perturbed graph \(\tilde{\mathcal{G}}\), \(\ell\) is a chosen loss function, and \(\mathcal{B}(\mathcal{G})\) is the set of allowed perturbed graphs given the clean graph \(\mathcal{G}\). As in previous work, we assume that an adversary maliciously changes the graph structure by inserting or deleting edges. Then, \(\mathcal{B}(\mathcal{G})\) can be defined by restricting the total number of malicious edge changes in the graph to \(\Delta\in\mathbb{N}_{\geq 0}\) (called a global constraint) and/or restricting the total number of edge changes in each neighborhood of a node \(i\) to \(\Delta_{i}\in\mathbb{N}_{\geq 0}\) (called local constraints, see Section 4). Solving Equation (1) is difficult in practice. Thus, we approximate it with alternating first-order optimization, i.e., we approximately solve Equation (1) by training \(f_{\theta}\) not on the clean graph \(\mathcal{G}\), but on a perturbed one \(\tilde{\mathcal{G}}\) that is newly crafted in each iteration through attacking the current model (see Algorithm B.2).
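A minimal sketch of this alternating first-order scheme is given below (PyTorch, assuming a dense adjacency matrix and a model with signature `model(A, X)`; the budget projection from Section 4 is omitted for brevity):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, A, X, y, train_mask, optimizer,
                              attack_steps=20, attack_lr=0.1):
    # Inner maximization: craft relaxed edge flips P via first-order ascent.
    P = torch.zeros_like(A, requires_grad=True)
    for _ in range(attack_steps):
        A_pert = A + (1.0 - 2.0 * A) * torch.clamp(P, 0.0, 1.0)
        attack_loss = F.cross_entropy(
            model(A_pert, X)[train_mask], y[train_mask])
        grad, = torch.autograd.grad(attack_loss, P)
        with torch.no_grad():
            P.add_(attack_lr * grad.sign())
    # Outer minimization: update the model on the freshly perturbed graph.
    A_pert = A + (1.0 - 2.0 * A) * torch.clamp(P.detach(), 0.0, 1.0)
    optimizer.zero_grad()
    F.cross_entropy(model(A_pert, X)[train_mask], y[train_mask]).backward()
    optimizer.step()
```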
**Self-training** is an established semi-supervised learning strategy [8] to make optimal use of unlabeled nodes via pseudo-labeling and is applied by previous works on adversarial training for GNNs [2; 3]. For this, first a model \(f_{\theta^{\prime}}\) is trained regularly using the \(m\) labeled nodes, minimizing \(\sum_{i=1}^{m}\ell(f_{\theta}(\mathcal{G})_{i},y_{i})\). Thereafter, a new (final) model \(f_{\theta}\) is randomly initialized and trained on all nodes, using the known labels for the training nodes and pseudo-labels generated by \(f_{\theta^{\prime}}\) for the
unlabeled nodes (see Algorithm B.3). Thus, if including self-training, Equation (1) changes to
\[\operatorname*{arg\,min}_{\theta}\max_{\tilde{\mathcal{G}}\in\mathcal{B}( \mathcal{G})}\left\{\sum_{i=1}^{m}\ell(f_{\theta}(\tilde{\mathcal{G}})_{i},y_{ i})+\sum_{i=m+1}^{n}\ell(f_{\theta}(\tilde{\mathcal{G}})_{i},f_{\theta^{\prime}}( \mathcal{G})_{i})\right\} \tag{2}\]
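A minimal sketch of the pseudo-labeling step in Equation (2) (PyTorch, assuming a regularly trained teacher \(f_{\theta^{\prime}}\) with signature `teacher(A, X)`):

```python
import torch

@torch.no_grad()
def self_training_targets(teacher, A, X, y, labeled_mask):
    # Pseudo-labels from the teacher on the clean graph for unlabeled
    # nodes; ground-truth labels are kept for the m labeled nodes.
    targets = teacher(A, X).argmax(dim=-1)
    targets[labeled_mask] = y[labeled_mask]
    return targets
```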
### Transductive Setting
Transductive node classification is a common and well-studied graph learning problem [9; 4]. It aims to complete the node labeling of the given and partially labeled graph \(\mathcal{G}\). More formally, the goal at test time is to accurately label the already known \(n-m\) unlabeled nodes, i.e., to achieve minimal
\[\mathcal{L}_{0/1}(f_{\theta})=\sum_{i=m+1}^{n}\ell_{0/1}(f_{\theta}(\mathcal{ G})_{i},y_{i}) \tag{3}\]
This is in contrast to an inductive setting, where at test time a new (extended) graph \(\mathcal{G}^{\prime}\) (with labels \(\mathbf{y}^{\prime}\)) is sampled conditioned on the training graph \(\mathcal{G}\). Then, the goal is to optimally classify the newly sampled nodes \(\mathcal{I}\)_in expectation_ over possible \((\mathcal{G}^{\prime},\mathbf{y}^{\prime})\)-pairs, i.e., to minimize \(\mathbb{E}\big{[}\sum_{i\in\mathcal{I}}\ell_{0/1}(f_{\theta}(\mathcal{G}^{ \prime})_{i},y^{\prime}_{i})\big{]}\). That is, in transductive learning the \(n-m\) unlabeled nodes known during training are considered test nodes, but in an inductive setting, new unseen test nodes are sampled. For additional details on the inductive setting, see Appendix A.
Many common graph benchmark datasets, such as Cora, CiteSeer, or Pubmed [10; 11], are designed for transductive learning. Naturally, this setting has been adopted as a starting point for many works on robust graph learning [7; 2; 12], including all of the adversarial training literature (see Section 6). Here, the latter is concerned with defending against test-time (evasion) attacks in transductive node classification, i.e., it is assumed that after training an adversary can select a maliciously changed graph \(\tilde{\mathcal{G}}\) out of the set of permissible perturbations \(\mathcal{B}(\mathcal{G})\), with the goal to maximize misclassification:
\[\mathcal{L}_{adv}(f_{\theta})=\max_{\tilde{\mathcal{G}}\in\mathcal{B}( \mathcal{G})}\sum_{i=m+1}^{n}\ell_{0/1}(f_{\theta}(\tilde{\mathcal{G}})_{i},y _{i}) \tag{4}\]
Since the adversary changes the graph at test time (i.e., the changes are not known during training), this, strictly speaking, corresponds to an inductive setting [9], where the only change considered is adversarial. Now, the goal of _adversarial training_ is to find parameters \(\theta\) minimizing \(\mathcal{L}_{adv}(f_{\theta})\), corresponding to the optimal (robust) classifier under attack [1].
#### 2.1.1 Theoretical Limitations
Defending against test-time (evasion) attacks in a transductive setting comes with conceptual limitations. In the case of graph learning, the test nodes are already known at training time and the only change is adversarial. Hence, we can trivially achieve _perfect robustness_ without trading off accuracy by _memorizing_ the (clean) training graph and ignoring every change.
This can be shown by choosing a slightly modified learning algorithm \(\tilde{f}_{\theta}\) corresponding to a GNN \(f_{\theta}\) with an added preprocessing routine (memory) that, during inference, replaces the perturbed graph \(\tilde{\mathcal{G}}=(\mathbf{\tilde{X}},\mathbf{\tilde{A}})\in\mathcal{B}( \mathcal{G})\) with the clean graph \(\mathcal{G}=(\mathbf{X},\mathbf{A})\) known from training, i.e., \(\tilde{f}_{\theta}(\tilde{\mathcal{G}})=f_{\theta}(\mathcal{G})\). This results in the same (clean) misclassification rate \(\mathcal{L}_{0/1}(f_{\theta})=\mathcal{L}_{0/1}(\tilde{f}_{\theta})\) because \(\tilde{f}_{\theta}\) and \(f_{\theta}\) have the same predictions on the clean graph, but also perfect robustness, i.e., \(\mathcal{L}_{0/1}(\tilde{f}_{\theta})=\mathcal{L}_{adv}(\tilde{f}_{\theta})\), as the predictions are not influenced by the adversary. Thus, we can state:
**Proposition 1**.: _For transductive node classification, we obtain a perfectly robust version \(\tilde{f}_{\theta}\) of any GNN \(f_{\theta}\), if \(\tilde{f}_{\theta}\) memorizes and uses the (clean) training graph \(\mathcal{G}\) for inference instead of the (potentially perturbed) graph \(\tilde{\mathcal{G}}\). Then, \(\mathcal{L}_{0/1}(f_{\theta})=\mathcal{L}_{0/1}(\tilde{f}_{\theta})=\mathcal{ L}_{adv}(\tilde{f}_{\theta})\)._
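The construction behind Proposition 1 is simple enough to write down; a minimal sketch (PyTorch, assuming a GNN with signature `gnn(A, X)`):

```python
import torch

class MemorizingGNN(torch.nn.Module):
    # Wraps any GNN with the "memory" preprocessing from Proposition 1:
    # inference always runs on the stored clean training graph, so any
    # test-time structure perturbation is silently discarded.
    def __init__(self, gnn, A_clean, X_clean):
        super().__init__()
        self.gnn = gnn
        self.register_buffer("A_clean", A_clean)
        self.register_buffer("X_clean", X_clean)

    def forward(self, A_perturbed, X_perturbed):
        return self.gnn(self.A_clean, self.X_clean)
```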
However, what is probably most interesting about this modified learning algorithm is that we can use it to construct an _optimal solution_ to the otherwise difficult saddle-point problem \(\min_{\theta}\mathcal{L}_{adv}(f_{\theta})=\min_{\theta}\max_{\tilde{\mathcal{G}}\in\mathcal{B}(\mathcal{G})}\mathcal{L}_{0/1}(f_{\theta})\) arising in adversarial training. The main idea to show this is to first train
a (normal) GNN \(f_{\theta}\) minimizing Equation (3), then robustifying it through memorization, and lastly showing that the result is indeed optimal. Formally, we state (proof in Appendix C.1):
**Proposition 2**.: _Assuming we solve the standard learning problem \(\theta^{*}=\arg\min_{\theta}\mathcal{L}_{0/1}(f_{\theta})\) and that \(\mathcal{G}\in\mathcal{B}(\mathcal{G})\). Then, the model from Proposition 1 is an optimal solution to the saddle-point problem \(\min_{\theta}\mathcal{L}_{adv}(f_{\theta})=\min_{\theta}\max_{\tilde{\mathcal{G}}\in\mathcal{B}(\mathcal{G})}\mathcal{L}_{0/1}(f_{\theta})\) arising in adversarial training._
Note that in a fully inductive setting, due to the expectation in the losses, neither Proposition 1 nor Proposition 2 holds, i.e., an optimal solution cannot be found through memorization (see Appendix A). Thus, inductive node classification does not suffer from the same theoretical limitations.
#### 2.1.2 Empirical Limitations
Even though prior work did not achieve perfect robustness, we show below that the reported gains from adversarial training by Xu et al. [2] actually mostly stem from self-training, i.e., the "leakage" of information on the clean test nodes through pseudo-labeling. Then, we close the gap between theory and practice and show that perfect robustness, while preserving clean accuracy, can be achieved empirically through adversarial training - effectively solving the learning setting in prior work.
**Adversarial training causes overfitting.** For transductive node classification, it is also empirically possible to achieve perfect robustness while maintaining clean test accuracy by combining self- with adversarial training and using a more flexible, learnable diffusion model (GPRGNN), as introduced in Section 3. In Figure 3, we show the test accuracy under severe perturbations for an adversarially trained GCN [2] and GPRGNN, both trained using a very strong adversary and pseudo-labels derived from self-training. The performance of the adversarially trained GCN degrades rapidly, while GPRGNN achieves (almost constant) perfect robustness. Since the pseudo-labels are derived from the clean graph, the clean graph is leaked in the training process. This allows GPRGNN to mimic the behavior of an MLP and overfit to the pseudo-labels, i.e., memorize the (node-feature, pseudo-label) pairs it has seen during training. In other words, it finds a trivial solution to this learning setting, similar to the theoretic solution discussed in Section 2.1.1. In Appendix C.2.1, we show that we achieve perfect robustness not only if using the PGD attack as Xu et al. [2], but also for many different attacks.
These findings question how to interpret the reported robustness gains of previous work, which all evaluate transductively and usually use self-training. Conceptually, being robust against test-time (evasion) attacks is most relevant if there can be a natural change in the existing data. Only then is it realistic to assume an adversary causing some of the change to be malicious and against which we want to be robust without losing our ability to generalize to new data. Therefore, in Section 5 we revisit adversarial training in an inductive setting, avoiding the same evaluation pitfalls. Indeed, we find that using robust diffusion (see Section 3), we are not only capable of solving transductive robustness, but also learn to robustly generalize to unseen nodes far better than previously studied models.
## 3 Robust Diffusion: Combining Graph Diffusion with Adversarial Training
We propose to use more flexible GNN architectures for adversarial training than previous studies. In particular, we use learnable _graph diffusion_ models in conjunction with adversarial training to obtain a _robust diffusion_. We not only show that a robust diffusion significantly outperforms other models used in previous work (see Section 5), but it also allows for increased _interpretability_ as we can gain insights into the characteristics of such robustness-enhancing models from different perspectives.
**Fixed message passing** schemes of popular GNNs like GCN or APPNP can be interpreted as the _graph convolution_ \(g(\mathbf{\Lambda})\star\mathbf{H}=\mathbf{V}g(\mathbf{\Lambda})\mathbf{V}^{\top}\mathbf{H}\) between a _fixed_ graph filter \(g(\mathbf{\Lambda})\) and the transformed node attributes \(\mathbf{H}=f_{\theta}(\mathbf{X})\), obtained with an MLP \(f_{\theta}\). This convolution is defined w.r.t. the (diagonalized) eigenvalues \(\mathbf{\Lambda}\in\mathbb{R}^{n\times n}\) and eigenvectors \(\mathbf{V}\in\mathbb{R}^{n\times n}\) of the Laplacian \(\mathbf{L}^{\prime}=\mathbf{I}-\mathbf{D}^{-\nicefrac{{1}}{{2}}}\mathbf{A}\mathbf{D}^{-\nicefrac{{1}}{{2}}}\) with diagonal degree matrix \(\mathbf{D}\). Following the common parametrization, instead of \(\mathbf{L}^{\prime}\), we use \(\mathbf{L}=\mathbf{D}^{-\nicefrac{{1}}{{2}}}\mathbf{A}\mathbf{D}^{-\nicefrac{{1}}{{2}}}\) or, depending on the specific model, \(\hat{\mathbf{L}}=\hat{\mathbf{D}}^{-\nicefrac{{1}}{{2}}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\nicefrac{{1}}{{2}}}\), where \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) with node degrees \(\hat{\mathbf{D}}\). Then, many spatial GNNs relate to the \(K\)-order polynomial approximation \(\mathbf{V}g(\mathbf{\Lambda})\mathbf{V}^{\top}\mathbf{H}\approx\sum_{k=0}^{K}\gamma_{k}\mathbf{L}^{k}\mathbf{H}\) with the \(K+1\) diffusion coefficients \(\mathbf{\gamma}\in\mathbb{R}^{K+1}\). Models like GCN stack multiple feature transformations, convolutions, and point-wise non-linearities.
Crucially, GCN's or APPNP's graph filter \(g(\mathbf{\Lambda})\) is fixed (up to scaling). Specifically, we obtain an MLP with \(\mathbf{\gamma}=[1,0,\dots,0]\). If using \(\hat{\mathbf{L}}\), a GCN corresponds to \(\mathbf{\gamma}=[0,-1,0,\dots,0]\), and in APPNP \(\mathbf{\gamma}\) are the Personalized PageRank coefficients.
**Robust diffusion.** In contrast to prior work on robust graph learning, we do not solely use a static parametrization of \(g(\mathbf{\Lambda})\). Instead, we learn the graph filter, which corresponds to training diffusion coefficients \(\mathbf{\gamma}\). This allows the model to adapt the graph diffusion to the adversarial perturbations seen during training, i.e., we learn a _robust diffusion_.
For this, the used architectures consist of two steps: **(1)** using an MLP to preprocess the node features, i.e., \(\mathbf{H}=f_{\theta}(\mathbf{X})\) where \(\theta\) are learnable parameters; and then, **(2)** using a learnable diffusion computing the logits. For the learnable diffusion, we employ GPRGNN [13], which uses the previously introduced monomial basis: \(\operatorname{softmax}\bigl{(}\sum_{k=0}^{K}\gamma_{k}\hat{\mathbf{L}}^{k}\mathbf{H}\bigr{)}\). Additionally, we study Chebyshev polynomials (see ChebNetII [14]), which are of interest due to their beneficial theoretical properties. ChebNetII can be expressed as \(\operatorname{softmax}\bigl{(}\sum_{k=0}^{K}\gamma_{k}w_{k}T_{k}(\mathbf{L})\mathbf{H}\bigr{)}\) with extra weighting factor \(w_{k}=\nicefrac{{2}}{{K+1}}\sum_{j=0}^{K}T_{k}(x_{j})\). The Chebyshev basis is given via \(T_{0}(\mathbf{L})=\mathbf{I}\), \(T_{1}(\mathbf{L})=\mathbf{L}\), and \(T_{k}(\mathbf{L})=2\mathbf{L}T_{k-1}(\mathbf{L})-T_{k-2}(\mathbf{L})\). The Chebyshev nodes \(x_{j}=\cos\bigl{(}\frac{j+1/2}{K+1}\pi\bigr{)},\,j=0,\dots,K\) for \(w_{k}\) reduce the so-called Runge phenomenon [14]. Note, the resulting Chebyshev polynomial can be expressed in the monomial basis (up to \(\mathbf{L}\) vs. \(\hat{\mathbf{L}}\)) by expanding the \(T_{k}(\mathbf{L})\) terms and collecting the powers \(\mathbf{L}^{k}\).
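A minimal sketch of the monomial-basis diffusion (PyTorch; `L_hat` is \(\hat{\mathbf{L}}\), `H` the MLP-transformed features, and `gamma` the \(K+1\) learnable coefficients):

```python
import torch

def robust_diffusion_logits(L_hat, H, gamma):
    # softmax( sum_{k=0}^{K} gamma_k * L_hat^k @ H ), accumulating the
    # powers of L_hat applied to H instead of materializing L_hat^k.
    out = gamma[0] * H
    Z = H
    for g in gamma[1:]:
        Z = L_hat @ Z
        out = out + g * Z
    return torch.softmax(out, dim=-1)
```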
**Interpretability.** We can gain insights about the learned robust representation from the _(i)_ polynomial, _(ii)_ spectral, and _(iii)_ spatial perspective.
_(i) Polynomial perspective._ The coefficients \(\gamma_{k}\) determine the importance of the respective \(k\)-hop neighborhood for the learned representations. To visualize \(\gamma_{k}\), we can always consider \(\gamma_{0}\) to be positive, which is equivalent to flipping the sign of the processed features \(\mathbf{H}\). Additionally, we normalize the coefficients s.t. \(\sum|\gamma|=1\) since \(\mathbf{H}\) also influences the scale. In Figure 6, we give an example for the polynomial perspective (details are discussed in Section 5).
_(ii) Spectral perspective._ We solve for \(g_{\theta}(\mathbf{\Lambda})\) in the polynomial approximation \(\mathbf{V}g_{\theta}(\mathbf{\Lambda})\mathbf{V}^{\top}\mathbf{H}\approx\sum_{k=0}^{K}\gamma_{k }\hat{\mathbf{L}}^{k}\mathbf{H}\) to obtain a possible graph filter \(g_{\theta}(\mathbf{\Lambda})=\mathbf{V}^{\top}(\sum_{k=0}^{K}\gamma_{k}\hat{\mathbf{L}}^{ k})\mathbf{V}\). Following Balcilar
Figure 4: Robust diffusion (GPRGNN) on Karate Club where the edge \((u,v)\) width encodes the diffusion coefficient \(T_{u,v}\) learned during training. In (a) we show the normalized adjacency matrix. The other plots show the robust diffusion transition matrix: (b) w/o adversarial training, (c) w/ adversarial training but w/o local constraints, and (d) w/ adversarial training and w/ local constraints.
et al. [15], we study the spectral characteristics w.r.t. \(\mathbf{I}-\mathbf{D}^{-\nicefrac{{1}}{{2}}}\mathbf{A}\mathbf{D}^{-\nicefrac{{1}}{{2}}}\). Then, the diagonal entries of \(g_{\theta}(\mathbf{\Lambda})\) correspond to the (relative) magnitude of how signals of frequency \(\lambda\) are present in the filtered signal. Vice versa, a low value corresponds to suppression of this frequency. Recall that low frequencies correspond to the small eigenvalues and high frequencies to the large eigenvalues. In Figure 7, we show how adversarial training and the permissible perturbations affect the spectra of the learned graph filters.
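A minimal sketch of this spectral read-out (NumPy; `T` denotes the total diffusion matrix \(\sum_{k}\gamma_{k}\hat{\mathbf{L}}^{k}\) and `L_prime` the Laplacian \(\mathbf{I}-\mathbf{D}^{-\nicefrac{{1}}{{2}}}\mathbf{A}\mathbf{D}^{-\nicefrac{{1}}{{2}}}\)):

```python
import numpy as np

def spectral_response(L_prime, T):
    # Eigendecomposition of the symmetric Laplacian yields the graph
    # frequencies; the diagonal of V^T T V is the gain per frequency.
    eigenvalues, V = np.linalg.eigh(L_prime)
    response = np.diag(V.T @ T @ V)
    return eigenvalues, response
```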
_(iii) Spatial perspective._ Robust diffusion (monomial basis) can be summarized as \(\operatorname{softmax}\left(\mathbf{T}\mathbf{H}\right)\) where \(\mathbf{T}=\sum_{k=0}^{K}\gamma_{k}\hat{\mathbf{L}}^{k}\) is the total diffusion matrix. The coefficient \(\mathbf{T}_{uv}\) indicates the diffusion strength between node \(u\) and node \(v\). For example, we visualize the total diffusion matrix \(\mathbf{T}\) in Figure 4 on Karate Club [16] with different training strategies. Depending on the learning signal we give, GPRGNN is able to adjust its diffusion.
## 4 LR-BCD: Adversarial Attack with Local Constraints
**Motivation for local constraints.** The just-discussed interpretability of robust diffusion provides empirical evidence for the importance of local constraints. Specifically, from Figure 4(c), we see that GPRGNN adversarially trained _without_ local constraints learns a diffusion that almost ignores the graph structure. While this model is certainly very robust w.r.t. structure perturbations, it is not a very useful GNN since it cannot leverage the structure information anymore. In contrast, we show in Figure 4(d) that GPRGNN trained adversarially _with_ local constraints results in a robust model that can still incorporate the structure information.
The local predictions in node classification yield an alternative motivation. In the absence of local constraints, an adversary typically has the power to rewire the entire neighborhood of many nodes. When attacking GNNs, we empirically observe that a majority of successfully attacked nodes are perturbed beyond their degree even for moderate global attack budgets (see Figure 5). That such perturbations are not reasonable is evident in the fact that studies on local attacks [7; 17], where the adversary targets a single node's prediction, do not consider perturbations (far) beyond the node degree. However, a \(5\%\) attack budget on Cora-ML allows changing \(798\) edges while the majority of nodes have a degree less than three.
Traditionally, adversarial changes are judged by noticeability [18], since unnoticeable changes do not alter the semantics. However, for graphs, a manual inspection of their content is often not feasible, and the concept of (un-)noticeability is unclear. Using generative graph models, Gosch et al. [6] revealed that perturbations beyond the node degree most often do alter the semantics. Perturbations that alter semantics can pose a problem for adversarial training. Similar observations have been made in graph contrastive learning, where perturbations that do not preserve graph semantics have a direct effect on the achievable error bounds [19].
**Locally constrained Randomized Block Coordinate Descent (LR-BCD).** We next introduce our LR-BCD attack, the first attack targeting multiple nodes at once while maintaining a local constraint for each node in addition to a global one. For this, we extend the so-called PR-BCD attack framework.
_Projected Randomized Block Coordinate Descent (PR-BCD)_[17] is a gradient-based attack framework applicable to a wide range of models. Its goal is to generate a perturbation matrix \(\mathbf{P}\in\{0,1\}^{n\times n}\) that is applied to the original adjacency matrix \(\mathbf{A}\) to construct a perturbed matrix \(\tilde{\mathbf{A}}=\mathbf{A}+(\mathbb{I}-2\mathbf{A})\odot\mathbf{P}\), where \(\mathbb{I}\) is an all-one matrix. For undirected graphs, only the upper-triangular parts of all matrices are considered. To construct \(\mathbf{P}\) in an efficient manner, PR-BCD relaxes \(\mathbf{P}\) during the attack from \(\{0,1\}^{n\times n}\) to \([0,1]^{n\times n}\) and employs an iterative process for \(T\) iterations. It consists of three repeating steps. **(1)** A random block of size \(b\) is sampled. That is, only \(b\) (non-contiguous) elements in the perturbation matrix \(\mathbf{P}_{t-1}\) are considered and all other elements are set to zero. Thus, \(\mathbf{P}_{t-1}\) is sparse. **(2)** A gradient update w.r.t. the loss to compute relaxed perturbations \(\mathbf{S}_{t}\) is performed, i.e., \(\mathbf{S}_{t}\leftarrow\mathbf{P}_{t-1}+\alpha_{t-1}\nabla_{\mathbf{P}_{t-1}}\ell(\mathbf{P}_{t-1})\), where \(\mathbf{P}_{t-1}\) are the previous perturbations, \(\alpha_{t-1}\) is the learning rate, and \(\nabla_{\mathbf{P}_{t-1}}\ell(\mathbf{P}_{t-1})\) are the perturbation gradients through the GNN. Finally, and most crucially, **(3)** a projection \(\Pi_{\mathcal{B}(\mathcal{G})}\) ensures that the perturbations are _permissible_ given the set of allowed perturbations
Figure 5: Number of successfully attacked nodes by node degree and number of connected adversarial edges. We attack self-trained GCN with PR-BCD and global budgets \(10\%\) (left) and \(25\%\) (right) on Cora-ML and aggregate the results over three different data splits. We observe that a notable amount of nodes is perturbed beyond their degree.
\(\mathcal{B}(\mathcal{G})\), i.e. \(\mathbf{P}_{t}=\Pi_{\mathcal{B}(\mathcal{G})}(\mathbf{S}_{t})\). Now, the process starts again with **(1)**, but all zero-elements in the block \(b\) (or at least \(\nicefrac{{1}}{{2}}\) of the lowest-value block elements) are resampled. After \(T\) iterations, relaxed perturbations \(\mathbf{P}\) are discretized from \([0,1]^{n\times n}\) to \(\{0,1\}^{n\times n}\) to obtain the resulting \(\tilde{\mathbf{A}}\).
_Global projection._ As mentioned above, the projection \(\Pi_{\mathcal{B}(\mathcal{G})}\) of PR-BCD is the crucial step that accounts for the attack budget. Geisler et al. [17] develop an efficient projection for an adversary implementing a global perturbation constraint, i.e., for \(\mathcal{B}(\mathcal{G})=\{\tilde{\mathbf{A}}\in\{0,1\}^{n\times n}\,|\,\|\tilde{ \mathbf{A}}-\mathbf{A}\|_{0}\leq\Delta\}\) with \(\Delta\in\mathbb{N}_{\geq 0}\). This is achieved by formulating the projection as the following optimization problem
\[\Pi_{\Delta}(\mathbf{S})=\arg\min_{\mathbf{P}}\|\mathbf{P}-\mathbf{S}\|_{F}^{2}\qquad\text{s.t. }\mathbf{P}\in[0,1]^{n\times n},\quad\sum\mathbf{P}\leq\Delta \tag{5}\]
with Frobenius norm \(\|\cdot\|_{F}^{2}\) and the sum aggregating all elements of the matrix. \(\Pi_{\Delta}(\mathbf{S})\) can be solved with the bisection method [17]. After \(T\) iterations, the discretization is performed by sampling, using the elements in \(\mathbf{P}_{T}\) as defining Bernoulli distributions. However, the projection \(\Pi_{\Delta}\) does not support limiting the number of perturbations per node, i.e., \(\sum_{v=1}^{n}P_{u,v}\leq\Delta_{u}^{(l)}\) where \(\mathbf{\Delta}^{(l)}\in\mathbb{N}_{\geq 0}^{n}\) is the vector of local budgets for each node. Indeed, extending the optimization problem above to include local constraints leads to the notoriously hard problem of Euclidean projection to polyhedra [20].
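A minimal sketch of the bisection idea behind \(\Pi_{\Delta}\) (PyTorch; the KKT conditions of Equation (5) imply a shift-and-clip solution of the form \(\operatorname{clip}(\mathbf{S}-\mu,0,1)\)):

```python
import torch

def project_global(S, budget, iters=50):
    # If clipping alone already satisfies the budget, we are done.
    P = torch.clamp(S, 0.0, 1.0)
    if P.sum() <= budget:
        return P
    # Otherwise bisect for mu >= 0 with sum(clip(S - mu, 0, 1)) = budget.
    lo, hi = torch.tensor(0.0), S.max()
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        if torch.clamp(S - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return torch.clamp(S - hi, 0.0, 1.0)
```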
_A locally constrained global projection (LR-BCD)._ With the goal of an efficient attack including local constraints, we have to develop an alternative and scalable projection strategy for \(\mathcal{B}(\mathcal{G})=\{\tilde{\mathbf{A}}\in\{0,1\}^{n\times n}\,|\,\|\tilde{\mathbf{A}}-\mathbf{A}\|_{0}\leq\Delta\wedge\sum_{j}|\tilde{\mathbf{A}}_{ij}-\mathbf{A}_{ij}|\leq\Delta_{i}^{(l)}\,\forall i\in[n]\}\). Our novel projection \(\mathbf{P}=\Pi_{\Delta}^{(l)}(\mathbf{S})=\Pi_{[0,1]}(\mathbf{S})\odot\mathbf{C}^{*}\) chooses the largest entries from the perturbation matrix \(\mathbf{S}\in\mathbb{R}^{n\times n}\) using \(\mathbf{C}^{*}\in[0,1]^{n\times n}\) and the element-wise clipping operator \(\Pi_{[0,1]}(\cdot)\), s.t. we obey global _and_ local constraints. The optimal choices \(\mathbf{C}^{*}\) can be calculated by solving a relaxed multi-dimensional knapsack problem:
\[\mathbf{C}^{*}=\arg\max_{\mathbf{C}}\sum\mathbf{S}\odot\mathbf{C}\qquad\text{s.t. }\mathbf{C}\in[0,1]^{n\times n},\quad\sum\mathbf{P}^{\prime}\leq\Delta,\quad\mathbf{P}^{ \prime}\,\mathbf{1}\leq\mathbf{\Delta}^{(l)} \tag{6}\]
where \(\mathbf{P}^{\prime}=\Pi_{[0,1]}(\mathbf{S})\odot\mathbf{C}\), i.e., \(\Pi_{[0,1]}(\mathbf{S})\) represents the weights of the individual choices, while \(\mathbf{S}\) captures their value. The sums aggregate all elements in the matrix. In the vector-valued constraint \(\mathbf{P}^{\prime}\,\mathbf{1}\leq\mathbf{\Delta}^{(l)}\), we compare row sums \((\mathbf{P}^{\prime}\,\mathbf{1})_{u}\), capturing the perturbations in the neighborhood, with node-specific budgets \(\Delta_{u}^{(l)}\). Even though this optimization problem has no closed-form solution, it can be readily approximated with greedy approaches [21]. The key idea is to iterate the non-zero entries in \(\mathbf{S}\) in descending order and construct the resulting \(\mathbf{C}^{*}\) (or directly \(\mathbf{P}\)) as follows: For each perturbation \((u,v)\) related to \(\mathbf{S}_{uv}\), we check if the global \(\Delta\) or local budgets \(\Delta_{u}^{(l)}\) and \(\Delta_{v}^{(l)}\) are not yet exhausted. Then, \(\mathbf{C}^{*}_{uv}=1\) (i.e., \(\mathbf{P}_{u,v}=\Pi_{[0,1]}(\mathbf{S}_{uv})\)). Otherwise, \(\mathbf{C}^{*}_{uv}=\min\{\Delta,\Delta_{u}^{(l)},\Delta_{v}^{(l)}\}\). The final discretization is given by \(\mathbf{C}^{*}\) corresponding to the solution of Equation (6) for \(\mathbf{S}_{T}\), but changing the weight of each choice to 1 (i.e., \(\mathbf{P}^{\prime}=\mathbf{C}\)), guaranteeing a binary solution due to the budgets being integer. We give the pseudo-code and more details in Appendix B.4.
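A minimal sketch of this greedy projection (PyTorch; `S` holds the scores of the \(b\) candidate flips in the current block, `endpoints[i]` the incident nodes of flip \(i\); the full routine, including the final discretization, is in Appendix B.4):

```python
import torch

def project_local_global(S, endpoints, global_budget, local_budget):
    # Greedily keep the highest-scoring flips while the global budget
    # and both endpoint budgets still allow them (cf. Eq. (6)).
    weights = torch.clamp(S, 0.0, 1.0)  # weight of each candidate choice
    P = torch.zeros_like(S)
    g_left = float(global_budget)
    l_left = local_budget.clone().float()
    for i in torch.argsort(S, descending=True).tolist():
        u, v = endpoints[i]
        take = min(weights[i].item(), g_left,
                   l_left[u].item(), l_left[v].item())
        if take <= 0.0:
            continue
        P[i] = take
        g_left -= take
        l_left[u] -= take
        l_left[v] -= take
    return P
```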
Our projection yields sparse solutions as it greedily selects the largest entries in \(\mathbf{S}\) until the budget is exhausted. Moreover, it is efficient. Its time complexity is \(\mathcal{O}(b\log(b))\) due to the sorting of the up to \(b\) non-zero entries in the sparse matrix \(\mathbf{S}\). Its space complexity is \(\mathcal{O}(b)\), assuming we choose the block size \(b\geq\Delta\) as well as \(b\geq n\). Thus, LR-BCD has the same asymptotic complexities as the purely global projection by Geisler et al. [17] (which we will from now on denote as PR-BCD).
**Summary.** We will now summarize our LR-BCD along the four dimensions proposed by Biggio and Roli [22]. _(1) Attacker's goal:_ Although our focus is to increase the misclassification for the target nodes in node classification, we can apply LR-BCD to different node- and graph-level tasks depending on the loss choice. _(2) Attacker's knowledge:_ We assume perfect knowledge about the model and dataset, which is reasonable for adversarial training. _(3) Attacker's capability:_ We propose to use a threat model that constrains the number of edge perturbations globally as well as locally for each node. _(4) Attacker's strategy:_ LR-BCD extends PR-BCD by a projection incorporating local constraints. Most notably, we relax the unweighted graph during the attack to continuous values and leverage randomization for an efficient attack while optimizing over all possible \(n^{2}\) edges.
## 5 Empirical Results
**Inductive learning.** All results in this section follow a fully inductive semi-supervised setting. That is, the training graph \(\mathcal{G}\) does not include validation and test nodes. For evaluation, we then successively
add the respective evaluation set. This way, we avoid the evaluation flaw of prior work since the model cannot achieve robustness via "memorizing" the predictions on the clean graph. We obtain inductive splits randomly, except for OGB arXiv [11], which comes with a split. In addition to the commonly sampled 20 nodes per class for both (labeled) training and validation, we sample a stratified test set consisting of \(10\%\) of all nodes. The remaining nodes are used as (unlabeled) training nodes.
**Setup.** We study the citation networks Cora, Cora-ML, CiteSeer [23], Pubmed [24], and OGB arXiv [11] using GPRGNN, ChebNetII, GCN, APPNP, and GAT [25]. Further, we compare to the state-of-the-art evasion defense Soft Median GDC [17]. We apply adversarial training (see Section 2 and Appendix B.5) using both PR-BCD, which only constrains perturbations globally, and our _LR-BCD_, which also allows for local constraints. Moreover, we use adversarial training in conjunction with self-training. Due to the inductive split, this does not bias results. We use the tanh margin attack loss of [17] and do not generate adversarial examples for the first 10 epochs (warm-up). We evaluate robust generalization on the test set using L/PR-BCD, which corresponds to _adaptive attacks_. Adaptive attacks are the gold standard in evaluating empirical robustness because they craft model-specific perturbations [26]. We use \(\epsilon\) to parametrize the global budget \(\Delta=\lfloor\epsilon\cdot\sum_{u\in\mathcal{A}}d_{u}/2\rfloor\) relative to the degree \(d_{u}\) for the set of targeted nodes \(\mathcal{A}\). We find that \(\Delta_{u}^{(l)}=\lfloor d_{u}/2\rfloor\) is a reasonable local budget for all datasets but arXiv, where we use \(\Delta_{u}^{(l)}=\lfloor d_{u}/4\rfloor\). We report averaged results with the standard error of the mean over three random splits. We use GTX 1080Ti (11 GB) GPUs for all experiments but arXiv, for which we use a V100 (40 GB). For details see Appendix B. We discuss limitations in Appendix F and provide code at [https://www.cs.cit.tum.de/daml/adversarial-training/](https://www.cs.cit.tum.de/daml/adversarial-training/).
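For concreteness, the budget parametrization above can be written as the following minimal sketch (Python; `degrees` maps nodes to their degrees):

```python
import math

def attack_budgets(degrees, targeted_nodes, eps, local_fraction=0.5):
    # Global budget relative to the degrees of the targeted nodes, and
    # per-node local budgets as a fraction of each node's degree.
    global_budget = math.floor(
        eps * sum(degrees[u] for u in targeted_nodes) / 2)
    local_budgets = {u: math.floor(local_fraction * d)
                     for u, d in degrees.items()}
    return global_budget, local_budgets
```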
**Certifiable robustness.** We use the model-agnostic randomized/sparse smoothing certificate of Bojchevski et al. [27] to also quantify certifiable robustness. Sparse smoothing creates a randomized ensemble given a base model \(f_{\theta}\) s.t. the majority prediction of the ensemble comes with guarantees. For the randomization, we follow Bojchevski et al. [27] and uniformly drop edges with \(p_{-}=0.6\) as well as add edges with \(p_{+}=0.01\). Sparse smoothing then returns whether a node-level prediction is certified (does not change) for the desired deletion radius \(r_{-}\) or addition radius \(r_{+}\). The guarantee holds in a probabilistic sense with significance level \(\alpha\). We report the certified accuracy \(\gamma(r_{-},r_{+})\) that aggregates correct and certifiable predictions over all nodes. We choose \(\alpha=5\%\) and obtain 100,000 random samples. For simplicity, we report in the main part the certified accuracies \(\gamma(r_{-}=0,r_{+}=0)\), \(\gamma(5,0)\), and \(\gamma(0,3)\). See Appendix D.4 for more details and results.
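A minimal sketch of one randomization draw for sparse smoothing (PyTorch, on a dense undirected adjacency; the actual certificate additionally aggregates a majority vote and confidence bounds over the 100,000 samples):

```python
import torch

def sample_sparse_smoothing(A, p_minus=0.6, p_plus=0.01):
    # Drop each existing edge with probability p_-, insert each absent
    # edge with probability p_+, working on the upper triangle only.
    upper = torch.triu(torch.ones_like(A), diagonal=1)
    rnd = torch.rand_like(A)
    flip = ((A == 1) & (rnd < p_minus)) | ((A == 0) & (rnd < p_plus))
    A_s = torch.where(flip, 1.0 - A, A) * upper
    return A_s + A_s.T
```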
**Finding I: Adversarial training is an effective defense against structure perturbations.** This is apparent from the results in Table 1, where we compare the empirical and certifiable robustness of the aforementioned models on CiteSeer. Our adversarially trained robust diffusion models GPRGNN and ChebNetII outperform the other baselines both in empirical and certifiable robustness. This includes the state-of-the-art defense Soft Median, which we outperform by a comfortable margin. Thus, we close the gap between the efficacy of adversarial training in the image domain and under structure perturbations. Notably, the increased robustness does not imply a lower clean accuracy. For example, our LR-BCD adversarially trained GPRGNN achieves a 1.7% higher clean accuracy
Table 1: Clean accuracy, adversarial accuracy under adaptive LR-BCD and PR-BCD attacks, and sparse-smoothing certified accuracies \(\gamma(0,0)\), \(\gamma(5,0)\), and \(\gamma(0,3)\) on CiteSeer for all tested models with and without adversarial training. The GCN row reports absolute values; all other rows report differences relative to GCN.
than a GCN, while with an adaptive LR-BCD attack and \(\epsilon=25\%\) perturbed edges, we outperform a GCN by 24.8%. This amounts to a clean accuracy of 73.7% and a perturbed accuracy of 70.1%. We present comparable results for the other datasets and an ablation study in Appendix D.
**Finding II: Choose the set of permissible perturbations wisely!** As argued already in Section 4, the right set of admissible perturbations can guide the learned robust diffusion. Importantly, the set of admissible perturbations is application specific. That is, depending on the choice, the resulting model might have different robustness characteristics w.r.t. different threat models. For example, in Table 1, an LR-BCD adversarially trained GPRGNN is more robust to LR-BCD attacks than a model trained with PR-BCD, and vice versa. Importantly, the learned message passing is very different, as the spectral analysis (Figure 7) reveals a striking pattern. Adversarial training with PR-BCD flips the behavior from low-pass to high-pass compared to a regularly trained model. However, adversarial training with LR-BCD seems to preserve the general characteristic while using a larger fraction of the spectrum to enhance robustness. Moreover, the polynomial coefficients (Figure 6) resulting from adversarial training without local constraints (Figure 6c) are very similar to coefficients that Chien et al. [13] associated with a non-informative graph structure. See Appendix D.2 for other datasets.
**Finding III: Adversarial training with local constraints is efficient and scalable.** We show in Figure 8 the results of an adversarially trained GPRGNN using LR-BCD with different relative budgets \(\epsilon\). Adversarial training on arXiv requires less than 20 GB of RAM, and an epoch takes approx. 10 seconds. In contrast, the globally constrained adversarial training of Xu et al. [2] would require multiple terabytes of RAM due to the \(n^{2}\) possible edge perturbations [17]. We not only show that robust diffusion with LR-BCD is scalable, but also that it improves the robustness on arXiv. That is, twice as many edge perturbations are required to push the accuracy below that of an MLP when comparing \(\epsilon=10\%\) to standard training.
## 6 Related Work
We next relate to prior work and refer to Appendix E for more detail: _(1) Learning setting._ Conceptual problems caused by remembering training data are not unique to adversarial training and have been mentioned by Zheng et al. [34] for graph injection attacks and by Scholten et al. [35] for certification. _(2) Adversarial training_ for GNNs under structure perturbations has been studied in [30; 31; 2; 32; 33]. However, all prior works study a transductive setting (see Table 2), which has serious shortcomings for evasion attacks (see Section 2). Note that only Xu et al. [2; 3] study adversarial training for (global) structure perturbations in its literal sense, i.e., directly work on Equation (1). Further, we are the first to study adversarial training in the inductive setting. _(3) Local constraints_ have been studied for local attacks [7; 17; 36; 12], i.e., when attacking a single node's prediction. They are also well-established for robustness certificates [37; 38; 39]. However, surprisingly, there is no global attack, i.e., an attack targeting a large set of node predictions at once, that incorporates local constraints. This even led to prior work naively and expensively applying local attacks to each node sequentially to generate locally constrained global perturbations [32]. With LR-BCD, we finally close this gap. _(4) GNN Architectures._ The robust GNN literature strongly focuses on GCN and variants [7; 40; 41; 42; 43; 17; 44; 32; 8]. GAT is perhaps the most flexible studied model [34]. While adversarial training improves the robustness of GAT, in our experiments, an
| Publication | Learning setting | Type of attack |
| --- | --- | --- |
| Deng et al. [28] | Transductive | evasion (attribute) |
| Feng et al. [29] | Transductive | evasion (attribute) |
| Jin and Zhang [30] | Transductive | evasion (structure + attribute) |
| Xu et al. [2] | Transductive | evasion (structure) |
| Xu et al. [3] | Transductive | evasion (structure) |
| Chen et al. [31] | Transductive | evasion (structure) |
| Li et al. [32] | Transductive | evasion (structure + attribute) |
| Guo et al. [33] | Transductive | evasion + poisoning (structure) |

Table 2: Works on adversarial training for GNNs.
adversarially trained GAT is not more robust than a GCN [4]. A novel aspect of our work is that spectral filters, in the form of polynomials [14; 13], can learn significantly more robust representations, beating a state-of-the-art graph defense. They also reveal insights about the learned robust representation.
## 7 Discussion and Conclusion
We show that the transductive learning setting in prior work on adversarial training for GNNs has fundamental limitations leading to a biased evaluation and a trivial solution through memorization. Thus, we revisit adversarial training in a fully inductive setting. We employ more flexible GNNs than before that achieve substantial robustness gains through adversarial training and are interpretable. For more realistic perturbations, we develop the first global attack able to maintain local constraints.
## Acknowledgments and Disclosure of Funding
This paper has been supported by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the German Federal Ministry of Education and Research.
|
2305.15611 | Size Generalization of Graph Neural Networks on Biological Data:
Insights and Practices from the Spectral Perspective | We investigate size-induced distribution shifts in graphs and assess their
impact on the ability of graph neural networks (GNNs) to generalize to larger
graphs relative to the training data. Existing literature presents conflicting
conclusions on GNNs' size generalizability, primarily due to disparities in
application domains and underlying assumptions concerning size-induced
distribution shifts. Motivated by this, we take a data-driven approach: we
focus on real biological datasets and seek to characterize the types of
size-induced distribution shifts. Diverging from prior approaches, we adopt a
spectral perspective and identify that spectrum differences induced by size are
related to differences in subgraph patterns (e.g., average cycle lengths).
While previous studies have identified that the inability of GNNs in capturing
subgraph information negatively impacts their in-distribution generalization,
our findings further show that this decline is more pronounced when evaluating
on larger test graphs not encountered during training. Based on these spectral
insights, we introduce a simple yet effective model-agnostic strategy, which
makes GNNs aware of these important subgraph patterns to enhance their size
generalizability. Our empirical results reveal that our proposed
size-insensitive attention strategy substantially enhances graph classification
performance on large test graphs, which are 2-10 times larger than the training
graphs, resulting in an improvement in F1 scores by up to 8%. | Gaotang Li, Danai Koutra, Yujun Yan | 2023-05-24T23:01:14Z | http://arxiv.org/abs/2305.15611v4 | Size Generalizability of Graph Neural Networks on Biological Data: Insights and Practices from the Spectral Perspective
###### Abstract
We investigate the question of whether the knowledge learned by graph neural networks (GNNs) from small graphs is generalizable to large graphs in the same domain. Prior works suggest that the distribution shift, particularly in the degree distribution, between graphs of different sizes can lead to performance degradation in the graph classification task. However, this may not be the case for biological datasets where the degrees are bounded and the distribution shift of degrees is small. Even with little degree distribution shift, our observations show that GNNs' performance on larger graphs from the same datasets still degrades, suggesting other causes. In fact, there has been a lack of exploration in real datasets to understand the types and properties of distribution shifts caused by various graph sizes. Furthermore, previous analyses of size generalizability mostly focus on the spatial domain. To fill these gaps, we take the spectral perspective and study the size generalizability of GNNs on biological data. We identify a distribution shift between small and large graphs in the eigenvalues of the normalized Laplacian/adjacency matrix, indicating a difference in the global node connectivity, which is found to be correlated with the node closeness centrality. We further find that despite of the variations in global connectivity, graphs of different sizes share similar local connectivity, which can be utilized to improve the size generalizability of GNNs. Based on our spectral insights and empirical observations, we propose a model-agnostic strategy, SIA, which uses size-irrelevant local structural features, i.e., the local closeness centrality of a node, to guide the learning process. Our empirical results demonstrate that our strategy improves the graph classification performance of various GNNs on small and large graphs when training with only small graphs.
## 1 Introduction
We focus on the size generalizability of graph neural networks on biological data. While numerous GNNs [17; 27; 13; 38; 34; 19] have exhibited exceptional performance in graph classification tasks and can handle graphs of varying size, there has been inadequate research into the ability of GNNs to generalize effectively from small to large graphs. Size generalizability is a critical concern in various fields. For instance, in graph algorithmic reasoning, neural networks (NN) learn complex
algorithms from small graph examples and generalize that reasoning to larger graphs, as obtaining exact solutions for larger graphs is challenging. In the biological domain, datasets can vary in graph size, from small atom-based structures to large compounds with thousands of atoms. It is important to assess if the learned knowledge is influenced by graph size, as size-dependent information could adversely affect performance when using pre-training strategies [15].
Many prior works [36; 7; 5] have observed performance degradation when there exists a size shift between the training and test data. However, it remains unclear: what structural properties change with the graph size? The answer to this question depends on the data domain. For instance, local degree patterns in social networks may vary with graph size [36], while the node degrees in biological networks are bounded and the difference in degree distributions between small and large graphs is small (Section 5.2). Additionally, the global connectivity of nodes in biological graphs may change with the graph size. Many prior works [7; 5; 35], however, lack sufficient exploration and verification on real datasets to understand the types and properties of distribution shifts caused by various graph sizes. Thus, it is unclear what distribution shifts they target and in what data domains their methods can improve size generalizability. In this paper, we obtain two important observations from biological data: (1) the eigenvalue distributions, obtained from the normalized Laplacian/adjacency matrix, vary with the graph size. The differences in the eigenvalue distributions reflect the variations in the global connectivity of the nodes, which are associated with the node closeness centrality; and (2) graphs with different sizes have similar distributions of local node connectivity, which can be measured by the local closeness centrality computed from the subgraphs induced from local neighborhoods. We use these observations to guide our design of a Size-Insensitive Attention (SIA) strategy, which is model agnostic and helps improve the size generalizability of many GNN models.
Furthermore, most analyses in prior works [36; 7; 5] focus on the spatial domain. In this paper, we take the spectral perspective to understand why GNNs can be sensitive to size shift, and use the insights from the spectral domain to understand what structural properties change with the graph size. In sum, our paper makes the following contributions:
* **New Observations.** We identify the types and properties of distribution shifts caused by various graph sizes in biological networks.
* **Spectral Analyses.** Unlike prior works in size generalizability, we take the spectral perspective and use the spectral insights to analyze the structural change with the graph size.
* **Model Agnostic Strategy.** Based on our spectral insights and empirical observations, we propose a model agnostic strategy, SIA, which uses size-irrelevant local structural features, i.e., the local closeness centrality of a node, to guide the learning process.
## 2 Notations and Preliminaries
In this section, we introduce the definitions and notations that are used throughout the paper. Let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) be an undirected and unweighted graph with \(N\) nodes, where \(\mathcal{V}\) denotes the node set, and \(\mathcal{E}\) denotes the edge set. The \(h\)-hop neighborhood of a node \(v_{i}\) is defined as the set of all nodes that are at a distance of \(h\) or less from the node \(v_{i}\): \(\mathcal{N}_{i}^{h}=\{v_{j}|\mathtt{dist}(v_{j},v_{i})\leq h\}\). We use \(\mathbf{A}\) to represent the adjacency matrix of the graph, \(\mathbf{D}\) to represent the degree matrix whose \(i\)th diagonal element is the degree of node \(v_{i}\).
### Closeness Centrality
**Global Closeness Centrality.** Global closeness centrality [3] of a node \(v_{i}\) measures how long it will take information to spread from \(v_{i}\) to other nodes in the network. It is calculated as the reciprocal of the average length of the shortest paths between \(v_{i}\) and all other nodes in the graph. To account for graphs with more than one connected component, we adopt the following closeness definition in [26]:
\[C(v_{i})=\frac{n-1}{N-1}\frac{n-1}{\sum_{j=1}^{n-1}\mathtt{dist}(v_{i},v_{j})},\]
where \(n\) is the number of reachable nodes from \(v_{i}\).
\(h\)**-hop Closeness Centrality.**\(h\)-hop closeness centrality is a variant of closeness centrality proposed in this paper, which aims to capture the local connectivity of the nodes. It is computed using the
subgraphs induced from \(h\)-hop neighborhood of the node \(v_{i}\):
\[C^{(h)}(v_{i})=\frac{|\mathcal{N}_{i}^{h}|-1}{\sum_{v_{j}\in\mathcal{N}_{i}^{h},v _{j}\neq v_{i}}\mathtt{dist}(v_{i},v_{j})}\]
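For concreteness, both centrality variants can be computed as follows. This is a minimal sketch using networkx; the function names are ours, and we assume unweighted, possibly disconnected graphs as defined above.

```python
import networkx as nx

def global_closeness(G, v):
    """Wasserman-Faust closeness of node v, robust to disconnected graphs."""
    dist = nx.single_source_shortest_path_length(G, v)
    n, N = len(dist), G.number_of_nodes()   # n counts reachable nodes, including v
    if n <= 1 or N <= 1:
        return 0.0
    total = sum(d for u, d in dist.items() if u != v)
    return ((n - 1) / (N - 1)) * ((n - 1) / total)

def h_hop_closeness(G, v, h):
    """Local closeness on the subgraph induced by v's h-hop neighborhood."""
    sub = nx.ego_graph(G, v, radius=h)      # induced subgraph of N_v^h
    dist = nx.single_source_shortest_path_length(sub, v)
    total = sum(d for u, d in dist.items() if u != v)
    return (len(sub) - 1) / total if total > 0 else 0.0
```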
### Graph Learning Task
In this paper, we focus on the graph classification task, where each node \(v_{i}\) is associated with a feature vector \(\mathbf{x}_{i}^{(0)}\), and the feature matrix \(\mathbf{X}^{(0)}\) is constructed by arranging the node feature vectors as rows. When using a GNN for the graph classification task, we further denote the node representation matrix at the \(l\)-th layer as \(\mathbf{X}^{(l)}\), and the representation of node \(v_{i}\) as \(\mathbf{x}_{i}^{(l)}\).
**Supervised Graph Classification.** Each graph \(\mathcal{G}_{i}\) is associated with a ground truth label \(y_{i}^{\mathcal{G}}\) sampled from a label set \(\hat{\mathcal{L}}\). Given a subset of labeled graphs (from a label set \(\hat{\mathcal{L}}\)), the goal is to learn a mapping \(f^{\mathcal{G}}:(\mathbf{A},\mathbf{X}^{(0)})_{i}\mapsto y_{i}^{\mathcal{G}}\) between each graph \(\mathcal{G}_{i}\) and its ground truth label \(y_{i}^{\mathcal{G}}\in\hat{\mathcal{L}}\). The graph classification loss is given by \(L=\frac{1}{|\mathcal{G}_{\text{train}}|}\sum_{\mathcal{G}_{i}\in\mathcal{G}_{ \text{train}}}\mathtt{CrossEntropy}\) (\(\mathbf{x}^{\mathcal{G}_{i}}\), \(y_{i}^{\mathcal{G}}\)), where \(\mathcal{G}_{\text{train}}\) is the training graph set and \(\mathbf{x}^{\mathcal{G}_{i}}\) is the representation of graph \(\mathcal{G}_{i}\).
### Graph Neural Networks for Graph Classification
GNNs can be designed from either the spatial perspective or the spectral perspective. Despite the difference in the design perspectives, a recent work [1] has shown that spectral GNNs and spatial GNNs are related and that spectral analysis of GNNs' behavior can provide a complementary point of view to understand GNNs in general.
**Spatial Perspective.** Many GNNs [17; 31; 27; 13] use the message passing framework [12], which consists of three steps: neighborhood propagation and aggregation, combination, and global pooling. The node representations at the \(l\)-th layer are given by: \(\mathbf{x}_{i}^{(l+1)}=\mathtt{Combine}(\mathtt{Aggregate}(\mathbf{x}_{j}^{(l)},v_{j}\in\mathcal{N}_{i}^{1},v_{j}\neq v_{i}),\mathbf{x}_{i}^{(l)})\), where the function Aggregate can be \(\mathtt{Sum}\)[31], Average[17], or other learned aggregating functions [27]; and the function Combine can be \(\mathtt{Concat}\)[13] or \(\mathtt{Average}\)[17]. The graph representation is obtained from the node representations at the last layer: \(\mathbf{x}^{\mathcal{G}}=\mathtt{Pooling}(\{\mathbf{x}_{i}^{(\text{Last})}\})\), where the \(\mathtt{Pooling}\) function is performed on the set of all the node representations, and it can be \(\mathtt{Global\_mean}\) or \(\mathtt{Global\_max}\) or other more complex pooling functions [37; 18].
**Spectral Perspective.** Spectral GNNs [6; 11; 21] utilize the spectral properties of a propagation matrix \(\mathbf{T}\) to perform the graph classification. The propagation matrix \(\mathbf{T}\) is usually a function of the adjacency matrix \(\mathbf{A}\), such as the normalized adjacency matrix \(\mathbf{T}=(\mathbf{D}+\mathbf{I})^{-1/2}(\mathbf{A}+\mathbf{I})(\mathbf{D}+\mathbf{I})^{-1/2}\), or the normalized graph Laplacian matrix \(\mathbf{T}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). We consider an undirected graph with a real and symmetric adjacency matrix. Consequently, the propagation matrix \(\mathbf{T}\) is also real and symmetric. Then we can perform the eigendecomposition on the propagation matrix \(\mathbf{T}\): \(\mathbf{T}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where \(\mathbf{U}\) is an orthogonal matrix whose columns \(\mathbf{U}_{i}\) are orthonormal and are the eigenvectors of \(\mathbf{T}\); \(\mathbf{\Lambda}\) is a matrix whose diagonal elements are the eigenvalues of \(\mathbf{T}\), sorted from large to small by their absolute values. The set of eigenvectors \(\{\mathbf{U}_{i}\}\) forms an orthonormal basis of \(\mathbb{R}^{N}\). The goal of a spectral GNN is to learn a proper spectral filter: \(f(\mathbf{\Lambda})=c_{0}\mathbf{I}+c_{1}\mathbf{\Lambda}+c_{2}\mathbf{\Lambda}^{2}+\cdots+c_{i}\mathbf{\Lambda}^{i}+\cdots\), where \(c_{i}\) are the learnable coefficients. The convolution at each layer can be viewed as or is equivalent to: \(\mathbf{X}^{(l+1)}=\sigma(\mathbf{U}f(\mathbf{\Lambda})\mathbf{U}^{T}\mathbf{X}^{(l)}\mathbf{W}^{(l)})\), where \(\mathbf{W}^{(l)}\) is a learnable weight matrix, and \(\sigma(\cdot)\) is a nonlinear function (e.g., ReLU). Similar to the spatial GNNs, a global pooling is performed after the last convolution layer to obtain a graph representation for the graph classification task.
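To make the spectral formulation concrete, the following numpy sketch (our own illustration, not code from any cited work) builds the normalized adjacency matrix and applies one spectral convolution with a polynomial filter \(f(\mathbf{\Lambda})\).

```python
import numpy as np

def normalized_adjacency(A):
    """T = (D+I)^{-1/2} (A+I) (D+I)^{-1/2} for a symmetric adjacency A."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def spectral_conv(A, X, W, coeffs=(0.0, 1.0)):
    """One layer X' = ReLU(U f(Lambda) U^T X W), with f a polynomial in Lambda
    whose coefficients c_i are given by `coeffs`."""
    T = normalized_adjacency(A)
    lam, U = np.linalg.eigh(T)                      # T is real and symmetric
    f_lam = sum(c * lam**i for i, c in enumerate(coeffs))
    return np.maximum(U @ np.diag(f_lam) @ U.T @ X @ W, 0.0)
```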
## 3 Size Generalizability from the Spectral Perspective
In this section, we take a spectral perspective to analyze the factors that affect the size generalizability of a GNN. At a high level, we study how the graph representation correlates with the graph size.
The output of a GNN at the \((l+1)\)-th layer is given by: \(\mathbf{X}^{(l+1)}=\sigma(\mathbf{U}f(\mathbf{\Lambda})\mathbf{U}^{T}\mathbf{ X}^{(l)}\mathbf{W}^{(l)})\) (Section 2). To ease our analysis, we rewrite \(\mathbf{U}\), \(\mathbf{\Lambda}\) and \(\mathbf{X}^{(l)}\) as: \(\mathbf{U}=[\mathbf{U}_{1},\mathbf{U}_{2},\cdots,\mathbf{U}_{N}]\), \(\mathbf{\Lambda}=\mathtt{Diag}([\lambda_{1},\lambda_{2},\cdots,\lambda_{N}])\), and \(\mathbf{X}^{(l)}=[\mathbf{X}_{1}^{(l)},\mathbf{X}_{2}^{(l)},\cdots,\mathbf{X}_ {D}^{(l)}]\), respectively. \(\mathbf{U}_{i}\) is the \(i\)-th column
vector of \(\mathbf{U}\), \(\lambda_{i}\) is the \(i\)-th largest eigenvalue, \(\texttt{Diag}(\cdot)\) creates a matrix with the vector as its diagonal elements, \(\mathbf{X}_{i}^{(l)}\) is the \(i\)-th column vector of \(\mathbf{X}^{(l)}\), and \(D\) is the feature dimension at the \(l\)-th layer.
\[\begin{split}\mathbf{X}^{(l+1)}&=\sigma\big([\mathbf{U}_{1},\mathbf{U}_{2},\cdots,\mathbf{U}_{N}]\cdot\texttt{Diag}([f(\lambda_{1}),f(\lambda_{2}),\cdots,f(\lambda_{N})])\cdot([\mathbf{U}_{1},\mathbf{U}_{2},\cdots,\mathbf{U}_{N}])^{T}\\ &\quad\cdot[\mathbf{X}_{1}^{(l)},\mathbf{X}_{2}^{(l)},\cdots,\mathbf{X}_{D}^{(l)}]\cdot\mathbf{W}^{(l)}\big)\\ &=\sigma\big([f(\lambda_{1})\mathbf{U}_{1},f(\lambda_{2})\mathbf{U}_{2},\cdots,f(\lambda_{N})\mathbf{U}_{N}]\cdot([\mathbf{U}_{1},\mathbf{U}_{2},\cdots,\mathbf{U}_{N}])^{T}\\ &\quad\cdot[\mathbf{X}_{1}^{(l)},\mathbf{X}_{2}^{(l)},\cdots,\mathbf{X}_{D}^{(l)}]\cdot\mathbf{W}^{(l)}\big).\end{split} \tag{1}\]
Since \(\{\mathbf{U}_{i}:i=1,\cdots,N\}\) are the orthonormal basis of \(\mathbb{R}^{N}\), \(\mathbf{X}_{i}^{(l)}\) can be expressed as a linear combination of \(\{\mathbf{U}_{i}\}\). Thus, suppose \(\mathbf{X}_{i}^{(l)}=\sum_{j=1,2,\cdots,N}\alpha_{j}^{i}\mathbf{U}_{j}\), then Equation 1 can be rewritten as:
\[\begin{split}\mathbf{X}^{(l+1)}&=\sigma\big([f(\lambda_{1})\mathbf{U}_{1},f(\lambda_{2})\mathbf{U}_{2},\cdots,f(\lambda_{N})\mathbf{U}_{N}]\cdot([\mathbf{U}_{1},\mathbf{U}_{2},\cdots,\mathbf{U}_{N}])^{T}\\ &\quad\cdot[\sum_{j=1,2,\cdots,N}\alpha_{j}^{1}\mathbf{U}_{j},\cdots,\sum_{j=1,2,\cdots,N}\alpha_{j}^{i}\mathbf{U}_{j},\cdots]\cdot\mathbf{W}^{(l)}\big)\\ &=\sigma\left([f(\lambda_{1})\mathbf{U}_{1},f(\lambda_{2})\mathbf{U}_{2},\cdots,f(\lambda_{N})\mathbf{U}_{N}]\cdot\begin{bmatrix}\alpha_{1}^{1}&\alpha_{1}^{2}&\cdots&\alpha_{1}^{D}\\ \alpha_{2}^{1}&\alpha_{2}^{2}&\cdots&\alpha_{2}^{D}\\ \cdots&\cdots&\cdots&\cdots\\ \alpha_{N}^{1}&\alpha_{N}^{2}&\cdots&\alpha_{N}^{D}\end{bmatrix}\cdot\mathbf{W}^{(l)}\right)\\ &=\sigma\big([\sum_{j=1,\cdots,N}f(\lambda_{j})\alpha_{j}^{1}\mathbf{U}_{j},\sum_{j=1,\cdots,N}f(\lambda_{j})\alpha_{j}^{2}\mathbf{U}_{j},\cdots,\sum_{j=1,\cdots,N}f(\lambda_{j})\alpha_{j}^{D}\mathbf{U}_{j}]\cdot\mathbf{W}^{(l)}\big).\end{split} \tag{2}\]
We know that \(\alpha_{j}^{i}=\mathbf{U}_{j}^{T}\cdot\mathbf{X}_{i}^{(l)}=\texttt{COSINE}( \mathbf{U}_{j},\mathbf{X}_{i}^{(l)})\cdot\|\mathbf{X}_{i}^{(l)}\|_{2}=\texttt {COSINE}(\mathbf{U}_{j},\mathbf{X}_{i}^{(l)})\cdot\frac{\|\mathbf{X}_{i}^{(l) }\|_{2}}{\sqrt{N}}\cdot\sqrt{N}\). Then Equation 2 can be rewritten as:
\[\begin{split}\mathbf{X}^{(l+1)}&=\sigma([\sum_{j=1, \cdots,N}f(\lambda_{j})\texttt{COSINE}(\mathbf{U}_{j},\mathbf{X}_{1}^{(l)}) \cdot\frac{\|\mathbf{X}_{1}^{(l)}\|_{2}}{\sqrt{N}}\cdot(\sqrt{N}\cdot\mathbf{U }_{j}),\\ &\cdots,\sum_{j=1,\cdots,N}f(\lambda_{j})\texttt{COSINE}(\mathbf{ U}_{j},\mathbf{X}_{D}^{(l)})\cdot\frac{\|\mathbf{X}_{D}^{(l)}\|_{2}}{\sqrt{N}} \cdot(\sqrt{N}\cdot\mathbf{U}_{j})]\cdot\mathbf{W}^{(l)}).\end{split} \tag{3}\]
The final graph representation is obtained through a global pooling (Section 2), where a set function is applied to each feature dimension. To ensure that the graph representation is independent of the graph size, it is crucial for each column of \(\mathbf{X}^{(l+1)}\) to be unaffected by the size. If the distributions of eigenvalues (\(\lambda_{j}\)) and scaled eigenvectors (\(\sqrt{N}\cdot\mathbf{U}_{j}\)), as well as the scalar \(\frac{\|\mathbf{X}_{i}^{(l)}\|_{2}}{\sqrt{N}}\), are uncorrelated with the graph size, then the graph representation will be size-invariant. It is worth noting that \(\mathbf{X}_{i}^{(l)}\) contains \(N\) elements and its 2-norm scales with \(\sqrt{N}\); and \(\|\mathbf{U}_{j}\|_{2}=1\), so its elements scale as \(1/\sqrt{N}\). Hence, we account for the scaling factor \(\sqrt{N}\) in both terms. However, as demonstrated in Section 5.2 and Appendix D, the distributions of eigenvalues and scaled eigenvectors in real biological graphs are influenced by the graph size. This poses a challenge for the size generalization of GNNs. Importantly, the difference in eigenvalue distributions cannot be attributed to the value range [4], as the eigenvalues of a normalized Laplacian matrix and adjacency matrix are constrained within the intervals [0,2] and [-1,1], respectively.
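The eigenbasis expansion above can be checked numerically. The following self-contained sketch (our own illustration) verifies that the expanded form in Eqs. (2)-(3) reproduces the direct pre-activation output of Eq. (1) on a random graph.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, Dout = 8, 3, 4
A = np.triu(rng.integers(0, 2, (N, N)), 1)
A = A + A.T                                   # random undirected graph, no self-loops
A_hat = A + np.eye(N)
d = 1.0 / np.sqrt(A_hat.sum(1))
T = A_hat * d[:, None] * d[None, :]           # normalized adjacency, real symmetric
lam, U = np.linalg.eigh(T)
X = rng.normal(size=(N, D))
W = rng.normal(size=(D, Dout))
f = lambda l: 0.5 + 0.3 * l                   # an example spectral filter f(lambda)

direct = U @ np.diag(f(lam)) @ U.T @ X @ W    # pre-activation of Eq. (1)
alpha = U.T @ X                               # alpha[j, i] = U_j^T X_i
expanded = np.stack(
    [sum(f(lam[j]) * alpha[j, i] * U[:, j] for j in range(N)) for i in range(D)],
    axis=1) @ W                               # expansion of Eqs. (2)/(3)
assert np.allclose(direct, expanded)
```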
## 4 Methodology
Before presenting our method, we highlight two observations discovered in biological networks, which will be thoroughly explained in Section 5. These observations serve as the motivation behind our development of a size-insensitive attention mechanism.
* **Observation 1**: The differences in the eigenvalue distributions reflect the variations in the global connectivity of the nodes, which are associated with the node closeness centrality.
* **Observation 2**: Graphs with different sizes have similar distributions of local node connectivity, which can be measured by the local closeness centrality computed from the subgraphs induced from local neighborhoods.
Motivated by Observation 2, our design aims at improving size generalizability by incorporating size-insensitive local structural features. Intuitively, we aim to utilize these structural features to filter out the nodes that increase in quantity with the graph size but are less relevant to the labels. Attention is a suitable mechanism in our setting as it can filter out irrelevant information [18] and be integrated with structural features. However, as noted in [28; 33], attention weights gradually lose fidelity with larger graphs. Therefore, our design focuses on two main challenges: (1) designing size-insensitive local structural features and (2) reducing the impact of graph size on the attention weights.
**Structural Feature Design.** Inspired by Observation 2, we use local closeness centrality as the node features. In more detail, we compute the local closeness centrality of a node \(v_{i}\) from its \(h\)-hop neighborhoods \(\mathcal{N}_{i}^{h}\), where \(h\) is the largest neighborhood size to maintain size-insensitivity (Section 5.4). To enrich the features [10], we further compute the maximum, minimum, average, and standard deviation of \(h\)-hop closeness for the nodes in \(v_{i}\)'s immediate neighborhood. In this way, for each node \(v_{i}\), we manually construct a 5-dimensional feature vector \(\mathbf{c}_{i}\) such that
\[\mathbf{c}_{i}=[C^{h}(v_{i}),\mathtt{max}(\{C^{h}(v_{j})\}),\mathtt{min}(\{C^ {h}(v_{j})\}),\mathtt{avg}(\{C^{h}(v_{j})\}),\mathtt{std}(\{C^{h}(v_{j})\})] ^{\top},v_{j}\in\mathcal{N}_{i}^{1} \tag{4}\]
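A sketch of this feature construction is given below; it is our reading of Eq. (4), in particular the assumption that \(v_i\) belongs to its own 1-hop neighborhood \(\mathcal{N}_{i}^{1}\).

```python
import numpy as np
import networkx as nx

def h_hop_closeness(G, v, h):
    sub = nx.ego_graph(G, v, radius=h)
    dist = nx.single_source_shortest_path_length(sub, v)
    total = sum(d for u, d in dist.items() if u != v)
    return (len(sub) - 1) / total if total > 0 else 0.0

def structural_features(G, h=3):
    """Per-node 5-dim feature of Eq. (4): own h-hop closeness plus max/min/avg/std
    of h-hop closeness over the 1-hop neighborhood (taken to include v itself)."""
    c = {v: h_hop_closeness(G, v, h) for v in G}
    feats = []
    for v in G:
        nbr = [c[u] for u in G.neighbors(v)] + [c[v]]
        feats.append([c[v], max(nbr), min(nbr), float(np.mean(nbr)), float(np.std(nbr))])
    return np.asarray(feats)   # shape (N, 5); used as C in Eq. (5)
```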
**Size-insensitive Structural Attention.** We use the structural feature matrix \(\mathbf{C}=[\mathbf{c}_{1},\ldots,\mathbf{c}_{N}],\mathbf{c}_{i}\in\mathbb{R} ^{5}\) for attention. However, attention weights often diminish with increasing graph size due to the utilization of \(\mathtt{Softmax}\). To address this issue, we propose scaling the attention weights by the graph size and employing \(\mathtt{Global\_max}\) as the global pooling operation to mitigate the impact of graph size. Mathematically, our final graph representation is given by:
\[\mathbf{k}=\mathtt{Softmax}(\mathbf{w}_{A}^{\top}\mathbf{C})\cdot N,\quad \mathbf{x}^{\mathcal{G}}=\mathtt{Global\_max}(\mathtt{Diag}(\mathbf{k})\cdot \mathbf{X}^{(\text{Last})}) \tag{5}\]
where \(\mathbf{w}_{A}^{\top}\) is a learnable vector, and \(\mathtt{Diag}(\cdot)\) creates a diagonal matrix using the vector as its elements.
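A minimal PyTorch sketch of Eq. (5) for a single graph is shown below; it is our illustration rather than the authors' released implementation (batched graphs would additionally require per-graph softmax and pooling).

```python
import torch
import torch.nn as nn

class SIAPooling(nn.Module):
    """Size-Insensitive Attention pooling (Eq. 5) for one graph."""
    def __init__(self, n_struct_feats=5):
        super().__init__()
        self.w_a = nn.Linear(n_struct_feats, 1, bias=False)    # learnable w_A

    def forward(self, X, C):
        # X: (N, d) node representations from the last GNN layer
        # C: (N, 5) structural features of Eq. (4)
        N = X.shape[0]
        k = torch.softmax(self.w_a(C).squeeze(-1), dim=0) * N  # rescale by graph size
        return (k.unsqueeze(-1) * X).max(dim=0).values         # Global_max pooling
```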
We perform a comprehensive ablation study in Appendix F to showcase the effectiveness of our feature selection. Our findings reveal that local closeness alone can enhance size generalizability to some extent. However, when incorporating more enriched features, the improvements become more consistent and pronounced.
## 5 Experiments
### Overview of the Experimental Setup
In our experiments, we use three pre-processed biological datasets (BBBP, BACE, and PROTEINS) from the Open Graph Benchmark [16] and TuDataset [23] to perform graph classification. More details about the datasets are provided in Appendix A.
**Data Preprocessing and Important Training Details.** In order to analyze size generalizability, we have four splits for each dataset: train, validation, small_test, and large_test, where large_test contains graphs with significantly larger sizes. We generate the splits as follows. First, we sort the samples in the dataset by their size. Next, we take the train, validation, and small_test splits from the 50% smallest graphs in the dataset. An intuitive way of getting the large_test split is to take the top \(k\) largest graphs. However, doing so would result in severe label skewness (class imbalances) between the small_test and the large_test as demonstrated by Table 4 in Appendix B. To avoid such a severe distribution shift, we select the same number of graphs per class as in the small_test subset, starting from the largest graph within each class. In this way, we ensure the label distribution between small_test and large_test is the same. Nevertheless, the smallest 50% samples still have significant **class imbalance**. To address this issue, we use upsampling during training throughout the experiments, and we use **F1** as the metric to measure the model performance. More details about data preprocessing, hyperparameters, and training can be found in Appendix B and Appendix C.
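The split procedure can be sketched as follows; the 70/15/15 proportions within the smaller half are our assumption, since the exact ratios are deferred to the appendix.

```python
import numpy as np

def size_splits(sizes, labels, seed=0):
    """train/val/small_test from the 50% smallest graphs; large_test takes the same
    number of graphs per class as small_test, starting from the largest graphs."""
    rng = np.random.default_rng(seed)
    order = np.argsort(sizes)
    small_half = order[: len(order) // 2].copy()
    rng.shuffle(small_half)
    n = len(small_half)
    train, val, small_test = np.split(small_half, [int(0.7 * n), int(0.85 * n)])
    small_set = set(small_half.tolist())
    large_test = []
    for c in np.unique(labels):
        need = int((labels[small_test] == c).sum())        # match per-class counts
        cls = [i for i in order[::-1] if labels[i] == c and i not in small_set]
        large_test.extend(cls[:need])
    return train, val, small_test, np.asarray(large_test)
```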
### Dependence of Structural Properties on Graph Size
In this subsection, we aim to address the question: "What structural properties change with the graph size?" To explore this, we focus on two structural properties of the graphs: degree distribution and eigenvalue distribution. We present their relationship with the graph size in Figure 1.
Figure 1 illustrates the pairwise distances of the graphs arranged in ascending order of size, where the distances are calculated using the Wasserstein distance [29]. We measure the empirical distributions of degrees (Figure 1(a)) and eigenvalues (Figure 1(b)), respectively, to compute these distances. The eigenvalues are obtained from the normalized adjacency matrix as suggested in [17]: \(\mathbf{T}=(\mathbf{D}+\mathbf{I})^{-1/2}(\mathbf{A}+\mathbf{I})(\mathbf{D}+\mathbf{I})^{-1/2}\). We have similar observations for the normalized Laplacian matrix. Note that the eigenvalues do not scale with the graph size, and they are bounded between [-1,1]. Dark blue represents a small distance (high similarity) while light red represents a large distance (low similarity). We find that Figure 1(a) does not have a clear pattern and thus the graph distance measured by degree distributions does not show a clear correlation with the graph size. In contrast, there is a wide blue band along the diagonal in the three subplots in Figure 1(b), corresponding to the three datasets in our analysis. This indicates that graphs of similar sizes have more similar eigenvalue distributions than graphs of different sizes. It suggests a strong correlation between the eigenvalue distributions and the graph size. To verify the analysis quantitatively, we compute the distance of graphs with similar sizes and graphs of different sizes in Table 1. Graphs of similar sizes include 20
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Degree Distribution Difference**} & \multicolumn{3}{c}{**Eigenvalue Distribution Difference**} \\ \hline
**Datasets** & **Different sizes** & **Similar sizes** & **Relative difference** & **Different sizes** & **Similar sizes** & **Relative difference** \\ \hline
**BBBP** & 0.039 & 0.035 & 12.1\% & 0.281 & 0.091 & 209\% \\
**BACE** & 0.027 & 0.024 & 14.7\% & 0.204 & 0.073 & 177\% \\
**PROTEINS** & 0.047 & 0.041 & 14.8\% & 0.379 & 0.130 & 192\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average Wasserstein distance between graphs of similar sizes and graphs of different sizes based on **degree** and **eigenvalue** distributions, respectively. The relative difference is computed by the difference of the Wasserstein distance normalized by the Wasserstein distance of similar graphs.
Figure 1: Graph distance is quantified using the Wasserstein distance for degree (a) and eigenvalue (b) distributions, respectively. Graphs are sorted from small to large, and the (i,j)-th pixel in the plot represents the distance between the i-th and j-th graphs. Dark blue represents a small distance (high similarity), while light red represents a large distance (low similarity). We find that **degree distributions do not show a clear correlation with the graph size** and **eigenvalue distributions show a strong correlation with the graph size**.
graphs whose size is closest to the graph of interest, and graphs of different sizes are the remaining graphs. From the table, we find that (1) the Wasserstein distance of degree distributions between graphs of different sizes is slightly larger (around 15%) than the distance between graphs of similar sizes; and (2) the Wasserstein distance of eigenvalue distributions between graphs of different sizes is significantly larger than the distance between graphs of similar sizes. This implies that the discrepancy of degree distributions is not the main challenge to the size generalizability of GNNs on real biological graphs. Furthermore, based on the results and the analysis in Section 3, the correlation between the eigenvalue distributions and the graph size results in the correlation of the final graph representation and the graph size, which prevents GNNs from generalizing over different sizes.
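The similar-size versus different-size comparison of Table 1 can be reproduced in spirit with the following sketch (helper names are ours; scipy provides the 1D Wasserstein distance):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def eigenvalue_distribution(A):
    """Eigenvalues of T = (D+I)^{-1/2}(A+I)(D+I)^{-1/2}, bounded in [-1, 1]."""
    A_hat = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A_hat.sum(1))
    return np.linalg.eigvalsh(A_hat * d[:, None] * d[None, :])

def similar_vs_different(adjs, k=20):
    """Mean Wasserstein distance to the k size-closest graphs vs. all other graphs."""
    eigs = [eigenvalue_distribution(A) for A in adjs]
    sizes = np.array([len(A) for A in adjs])
    sim, diff = [], []
    for i in range(len(adjs)):
        others = [j for j in range(len(adjs)) if j != i]
        near = set(sorted(others, key=lambda j: abs(sizes[j] - sizes[i]))[:k])
        d = {j: wasserstein_distance(eigs[i], eigs[j]) for j in others}
        sim.append(np.mean([d[j] for j in near]))
        diff.append(np.mean([d[j] for j in others if j not in near]))
    return float(np.mean(sim)), float(np.mean(diff))
```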
### Observation 1
In this subsection, we present our first observation that the discrepancies of the eigenvalue distributions reflect the differences in the global connectivity of the nodes, which are correlated with the node closeness centrality. Our intuition is that the structural connectivity of the graph is often related to the eigenvalues of the normalized Laplacian matrix. For example, the algebraic multiplicity of eigenvalue 0 represents the number of connected components in the graph [9]; the Cheeger constant [8], which measures the "bottleneck" of the graph, is related to the spectral gap of the graph. To analyze their relationship, we compute the empirical distribution of node closeness centrality for each graph (Section 2) as an indicator of global node connectivity and examine its correlation with the eigenvalue distribution. To elaborate, we randomly select 200 pairs of graphs. Each pair of graphs yields two distances: one calculated from the eigenvalue distributions and the other from the closeness distributions. We then map each graph pair to a point whose coordinates represent the two distances. The scatter plots for the three datasets are depicted in Figure 2. The \(x\)-axis represents the closeness distance, and the \(y\)-axis represents the eigenvalue distance. We find that there is a clear correlation between the two variables in all the plots we draw. To verify it quantitatively, we compute the Pearson correlation between the two distances, and the result is presented in Figure 2. To conclude, the closeness distribution is highly or moderately correlated with the eigenvalue distributions in real-world biological datasets.
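The pairing experiment behind Figure 2 can be sketched as follows; note that networkx's `closeness_centrality` already implements the Wasserman-Faust formula of Section 2.

```python
import numpy as np
import networkx as nx
from scipy.stats import wasserstein_distance, pearsonr

def eig_dist(G):
    A_hat = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())
    d = 1.0 / np.sqrt(A_hat.sum(1))
    return np.linalg.eigvalsh(A_hat * d[:, None] * d[None, :])

def distance_correlation(graphs, n_pairs=200, seed=0):
    """Pearson correlation between eigenvalue-distribution distances and
    closeness-distribution distances over random graph pairs (cf. Figure 2)."""
    rng = np.random.default_rng(seed)
    eigs = [eig_dist(G) for G in graphs]
    clos = [np.fromiter(nx.closeness_centrality(G).values(), float) for G in graphs]
    d_eig, d_clo = [], []
    for _ in range(n_pairs):
        i, j = rng.choice(len(graphs), size=2, replace=False)
        d_eig.append(wasserstein_distance(eigs[i], eigs[j]))
        d_clo.append(wasserstein_distance(clos[i], clos[j]))
    return pearsonr(d_eig, d_clo)   # returns (r, p-value)
```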
### Observation 2
In this subsection, we present our second observation that graphs with different sizes have similar distributions of local node connectivity. We measure the local node connectivity by computing the closeness of a node in its local neighborhoods. In the following analysis, we use the BACE dataset as an example, and similar analyses are conducted for the other two datasets in Appendix E.
Following the same procedures in Section 5.2, Figure 3 shows the distances of sorted graphs, where the distance is measured by the Wasserstein distance between the empirical distributions of \(h\)-hop local closeness. We can see that in the two-hop and three-hop plots, small and large graphs share similar distributions of local closeness. In contrast, there is a clear wide blue band along the diagonal in the four-hop plot, indicating that the graph distance measured by four-hop local closeness correlates with the graph size. To verify the analysis quantitatively, we compute the distance of graphs of similar sizes and graphs of different sizes in Table 2. We follow the same definitions in Section 5.2. We
Figure 2: Correlation between the distances measured by eigenvalue distributions and closeness distributions on BBBP, BACE, and PROTEINS datasets. The closeness distribution is highly or moderately correlated with the eigenvalue distribution.
observe that the relative difference between graphs of similar sizes and graphs of different sizes is relatively small for two-hop and three-hop local closeness, but becomes significant for four-hop local closeness. This quantitative result verifies the plot shown in Figure 3. We thus conclude that three-hop is the maximum neighborhood size in the BACE dataset that preserves the size-insensitivity of the local closeness features. As a consequence, we choose \(h=3\) for BACE.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Number of Hops** & **Different sizes** & **Similar sizes** & **Relative Difference** \\ \hline
**2-hop** & 0.314 & 0.268 & 17.1\% \\
**3-hop** & 0.368 & 0.265 & 38.5\% \\
**4-hop** & 0.387 & 0.201 & 92.2\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average Wasserstein distance between graphs of similar sizes and graphs of different sizes based on **h-hop local closeness** for BACE. The relative difference is computed by the difference of the average Wasserstein distance normalized by the average Wasserstein distance of similar graphs.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multicolumn{2}{c}{**Datasets**} & \multicolumn{2}{c}{**BBBP**} & \multicolumn{2}{c}{**BACE**} & \multicolumn{2}{c}{**PROTEINS**} \\ \hline
**Models** & **Small** & **Large** & **Small** & **Large** & **Small** & **Large** \\ \hline
**MLP** & \(92.66\pm_{172}\) & \(58.39\pm_{4.63}\) & \(61.75\pm_{4.64}\) & \(21.18\pm_{9.68}\) & \(36.15\pm_{2.28}\) & \(21.55\pm_{1.34}\) \\
**MLP+SAGPool (threshold)** & \(88.3\pm_{2.84}\) & \(59.02\pm_{17.70}\) & \(61.03\pm_{2.38}\) & \(40.66\pm_{4.81}\) & \(67.78\pm_{7.11}\) & \(13.04\pm_{5.02}\) \\
**MLP+SIA** & \(91.07\pm_{0.88}\) & \(59.43\pm_{3.0}\) & \(64.48\pm_{12.22}\) & \(31.36\pm_{15.19}\) & \(45.02\pm_{8.18}\) & \(36.42\pm_{13.83}\) \\ \hline
**GCN** & \(90.66\pm_{1.28}\) & \(64.45\pm_{1.38}\) & \(64.52\pm_{2.57}\) & \(27.08\pm_{13.66}\) & \(72.35\pm_{2.58}\) & \(40.49\pm_{3.22}\) \\
**GCN+SAGPool (threshold)** & \(88.51\pm_{2.94}\) & \(66.46\pm_{16.11}\) & \(52.94\pm_{7.99}\) & \(47.87\pm_{7.6}\) & \(60.75\pm_{3.09}\) & \(5.98\pm_{3.26}\) \\
**GCN+SIA** & \(91.53\pm_{1.57}\) & \(71.84\pm_{5.33}\) & \(66.46\pm_{2.63}\) & \(51.96\pm_{3.47}\) & \(71.5\pm_{1.09}\) & \(37.86\pm_{3.06}\) \\ \hline
**GIN** & \(90.24\pm_{1.8}\) & \(67.27\pm_{1.59}\) & \(56.28\pm_{1.35}\) & \(22.97\pm_{9.45}\) & \(73.73\pm_{1.85}\) & \(45.19\pm_{15.07}\) \\
**GIN+SAGPool (threshold)** & \(89.95\pm_{2.32}\) & \(67.47\pm_{1.52}\) & \(58.52\pm_{1.66}\) & \(16.71\pm_{1.19}\) & \(65.38\pm_{1.48}\) & \(25.77\pm_{12.54}\) \\
**GIN+SIA** & \(90.48\pm_{2.28}\) & \(70.01\pm_{9.90}\) & \(56.72\pm_{1.36}\) & \(250.65\pm_{2.55}\) & \(72.89\pm_{1.8}\) & \(47.68\pm_{6.04}\) \\
**GAT** & \(92.66\pm_{0.73}\) & \(68.37\pm_{4.87}\) & \(70.14\pm_{4.65}\) & \(42.94\pm_{4.4}\) & \(72.94\pm_{2.77}\) & \(42.32\pm_{5.93}\) \\
**GAT+SAGPool (threshold)** & \(89.9\pm_{2.15}\) & \(45.74\pm_{19.36}\) & \(66.1\pm_{1.64}\) & \(38.54\pm_{15.45}\) & \(59.89\pm_{4.53}\) & \(10.6\pm_{5.34}\) \\
**GAT+SIA** & \(93.56\pm_{0.74}\) & \(68.76\pm_{2.38}\) & \(69.43\pm_{5.02}\) & \(44.59\pm_{9.25}\) & \(75.01\pm_{2.95}\) & \(51.19\pm_{5.02}\) \\ \hline
**FAGCN** & \(90.63\pm_{2.56}\) & \(60.7\pm_{4.01}\) & \(55.77\pm_{4.72}\) & \(21.0\pm_{1.85}\) & \(68.38\pm_{3.99}\) & \(46.81\pm_{8.08}\) \\
**FAGCN+SAGPool (threshold)** & \(83.08\pm_{2.46}\) & \(58.47\pm_{17.34}\) & \(58.78\pm_{4.28}\) & \(33.99\pm_{4.97}\) & \(63.41\pm_{9.85}\) & \(50.09\pm_{16.49}\) \\
**FAGCN+SIA** & \(92.01\pm_{1.5}\) & \(70.23\pm_{15.87}\) & \(60.84\pm_{5.38}\) & \(29.25\pm_{1.18}\) & \(68.38\pm_{3.07}\) & \(49.5\pm_{4.42}\) \\ \hline
**GNNML3** & \(91.52\pm_{1.76}\) & \(59.13\pm_{2.82}\) & \(61.95\pm_{6.95}\) & \(18.01\pm_{1.22}\) & \(69.66\pm_{4.48}\) & \(34.69\pm_{5.36}\) \\
**GNNML3+SAGPool (threshold)** & \(87.59\pm_{2.9}\) & \(47.5\pm_{13.57}\) & \(57.21\pm_{1.67}\) & \(35.88\pm_{2.14}\) & \(54.85\pm_{9.96}\) & \(24.79\pm_{3.19}\) \\
**GNNML3+SIA** & \(93.04\pm_{9.01}\) & \(65.4\pm_{4.81}\) & \(62.06\pm_{3.48}\) & \(40.06\pm_{2.76}\) & \(72.14\pm_{2.29}\) & \(52.93\pm_{2.78}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Size generalizability evaluated by the graph classification performance on small test graphs and large test graphs. The performance is reported by the average F1 scores and its standard deviation. **+SAGPool (threshold)** means that we apply the thresholded SAG pooling [18, 20] to that model. **+SIA** means that we apply our proposed Size-Insensitive Attention to that model. Highest performance among different models is highlighted in orange.
Figure 3: Graph distance based on h-hop local closeness for BACE (left to right: h=2, h=3, and h=4). Graphs are sorted from small to large and we follow the same procedures as Figure 1. Small-hop (two-hop and three-hop) local closeness does not exhibit a clear dependence on graph size.
### Effectiveness of Proposed Strategy
In this subsection, we examine the effectiveness of SIA.
**Baselines.** We consider six neural network models. Each model consists of three layers, with a global max pooling layer employed in the final layer. The baseline models are: Multilayer Perceptron (MLP), GCN [17], GAT [27], GIN [31], FAGCN [6], and GNNML3 [2]. We also consider thresholded SAG pooling [18; 20] as a baseline strategy to be combined with the baseline models. **Upon paper acceptance, we will release the code for SIA.**
**Main Results.** Table 3 displays the average and standard deviation of F1 scores assessed on small and large test sets for various models utilizing different strategies. The evaluation is conducted on the BBBP, BACE, and PROTEINS datasets. The highest performance in small and large test sets is highlighted in orange. Our observations are as follows: first, in comparison to the original pooling method, SIA consistently enhances performance on the large test set, with minimal exceptions, resulting in performance boosts of up to 22%. Second, the improved generalizability of SIA does not come at the expense of performance degradation on the small test set. In fact, SIA outperforms the baselines in more than half of the cases for small test sets and yields comparable results in the remaining cases. These findings demonstrate that SIA is model agnostic and effectively improves the size generalizability of GNNs.
## 6 Related Work
**Size-generalizable Neural Networks for Graph Data.** Size-generalizable neural networks for graph data were first studied in various application areas. In the field of algorithmic reasoning, [33; 28] discovered that neural networks are capable of generalizing effectively to larger graphs than the ones used during training, provided that the attention weights are properly supervised. In the field of physical simulation, [24] presented a model to simulate complex physical domains, which can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time. Later, more works on the fundamental understanding of GNNs' size generalizability were proposed. [18] found that using attention with proper thresholding can improve the size generalizability of GNNs. [36] argued that GNNs' performance degradation on larger graphs can be attributed to the changes in the local degree patterns. As a result, they proposed two domain adaptation strategies to improve generalizability to larger graphs. [7] simulated a size shift in the training graphs via graph coarsening, and proposed a regularization that makes the model robust to the shift. [5] used a causal model to learn approximately invariant representations that better extrapolate between train and test data. [35] studied out-of-distribution generalization in general, and introduced an environment inference model to identify the latent factors that impact data generation from different distributions in a fully data-driven manner. Prior works have not utilized spectral analyses, and many do not specify how variations in graph size affect structural properties.
**Graph Neural Networks with Structural Attention.** Previous works have also explicitly used the structural features in their attention mechanisms for various purposes. [19; 25; 22] used structural attention to improve GNNs' in-distribution generalization in the node/graph classification tasks. [14] incorporated structural information into the embedding learning process for heterogeneous networks, while [32] leveraged degree information to enhance GNNs' performance on heterophilous graphs. In contrast, we use structural attention to improve the size generalizability of GNNs.
## 7 Conclusion
We investigate the size generalizability of GNNs on biological data. We observe a distribution shift in the eigenvalues of the normalized Laplacian/adjacency matrix between small and large graphs, which indicates the differences in the global node connectivity associated with the node closeness centrality. We further observe that graphs of different sizes share similar local connectivity, which can be leveraged to improve size generalizability. Based on the spectral insights and observations from real biological datasets, we propose a model-agnostic strategy, SIA, that utilizes size-irrelevant local structural features to guide learning. Our approach improves the graph classification performance of
various GNNs on small and large graphs, indicating the potential for more effective use of GNNs in real-world applications.
|
2308.15835 | Prediction and Anomaly Detection of accelerated particles in PIC
simulations using neural networks | Acceleration processes that occur in astrophysical plasmas produce cosmic
rays that are observed on Earth. To study particle acceleration, fully-kinetic
particle-in-cell (PIC) simulations are often used as they can unveil the
microphysics of energization processes. Tracing of individual particles in PIC
simulations is particularly useful in this regard. However, by-eye inspection
of particle trajectories includes a high level of bias and uncertainty in
pinpointing specific acceleration mechanisms that affect particles. Here we
present a new approach that uses neural networks to aid individual particle
data analysis. We demonstrate this approach on the test data that consists of
252,000 electrons which have been traced in a PIC simulation of a
non-relativistic high Mach number perpendicular shock, in which we observe the
two-stream electrostatic Buneman instability to pre-accelerate a portion of
electrons to nonthermal energies. We perform classification, regression and
anomaly detection by using a Convolutional Neural Network. We show that
regardless of how noisy and imbalanced the datasets are, the regression and
classification are able to predict the final energies of particles with high
accuracy, whereas anomaly detection is able to discern between energetic and
non-energetic particles. The methodology proposed may considerably simplify
particle classification in large-scale PIC and also hybrid kinetic simulations. | Gabriel Torralba Paz, Artem Bohdan, Jacek Niemiec | 2023-08-30T08:19:53Z | http://arxiv.org/abs/2308.15835v1 | # Prediction and Anomaly Detection of accelerated particles in PIC simulations using neural networks
###### Abstract:
Acceleration processes that occur in astrophysical plasmas produce cosmic rays that are observed on Earth. To study particle acceleration, fully-kinetic particle-in-cell (PIC) simulations are often used as they can unveil the microphysics of energization processes. Tracing of individual particles in PIC simulations is particularly useful in this regard. However, by-eye inspection of particle trajectories includes a high level of bias and uncertainty in pinpointing specific acceleration mechanisms that affect particles. Here we present a new approach that uses neural networks to aid individual particle data analysis. We demonstrate this approach on the test data that consists of 252,000 electrons which have been traced in a PIC simulation of a non-relativistic high Mach number perpendicular shock, in which we observe the two-stream electrostatic Buneman instability to pre-accelerate a portion of electrons to nonthermal energies. We perform classification, regression and anomaly detection by using a Convolutional Neural Network. We show that regardless of how noisy and imbalanced the datasets are, the regression and classification are able to predict the final energies of particles with high accuracy, whereas anomaly detection is able to discern between energetic and non-energetic particles. The methodology proposed may considerably simplify particle classification in large-scale PIC and also hybrid kinetic simulations.
## 1 Introduction
Cosmic rays (CRs) are high-energy charged particles accelerated to nearly the speed of light, carrying immense kinetic energies that can reach up to \(10^{21}\,\,\mathrm{eV}\). Such particles can be produced via a number of mechanisms, including the first- and second-order Fermi acceleration [1, 2], magnetic reconnection [3], shock drift acceleration (SDA) [4] and stochastic SDA [5] processes. To understand the microphysics of the acceleration processes, kinetic plasma simulation techniques, e.g., particle-in-cell (PIC) methods [6], are often employed. PIC simulations, which follow individual particle trajectories in self-generated electromagnetic fields, have recently facilitated significant progress in understanding complex dynamics and interactions within astrophysical plasma environments, including shock physics [7].
For an in-depth analysis of particle acceleration mechanisms, many of the PIC codes employ particle tracing. This feature enables the tracking of particles that are part of the plasma itself, as opposed to test-particle simulations [8]. However, disentangling acceleration processes requires a large amount of data, a thorough examination of the trajectories and an intuition to determine which particles were affected by the potential mechanisms involved. This could introduce a significant bias. Hence, we propose a novel approach for analysing particle tracing data based on Neural Networks (NNs). This method enables fast and reliable post-processing of thousands of particle trajectories. The NN can effectively identify particles that have been energized by detecting the distinct patterns imprinted in momentum space during acceleration. NNs have been used before in various high-energy astrophysical applications, e.g., identification of neutron star mergers [9], image processing in Cherenkov telescopes [11], or anomaly detection in gravitational wave detectors [10]. In our study, we use NNs to select non-thermal particles from the all-particle distribution obtained in kinetic plasma simulations. We utilise two algorithms, Regression and Anomaly Detection, to analyze traced particles. This approach is unique and has not been previously explored.
## 2 Description of the particle dataset
The dataset we use in this work has been obtained from our recent PIC simulations of non-relativistic high Mach number shocks [13, 14], in which the Buneman instability [15] is excited in the shock foot. The dataset consists of \(N_{p}=\) 252,000 electrons that pass through the Buneman instability region, with some of them undergoing energization. We record these particle momenta, **p**, as well as the electric field, **E**, and magnetic field, **B**, at their respective positions, across 1200 time steps. For the analysis presented in this paper we use only particle momentum data. For each particle we compute a label, \(y_{i}=\max\left(\gamma_{i}-1\right)\), representing the maximum kinetic energy achieved by that particle along its trajectory, where \(\gamma_{i}\) is the Lorentz factor of the \(i\)th particle. Figure 1 shows two examples of particle trajectories from our data. The trajectory for a particle that is influenced by the Buneman instability (illustrated in panel (a) in a map of the \(E_{x}\) electric field) and accelerated is shown by the red line, while the blue line depicts a particle that remains unaffected. This distinction is particularly evident in panel (b), which shows the evolution of the particle Lorentz factor. The shape of the momentum time series varies depending on whether the particle has undergone acceleration or not, and the neural network has the capability to discern this difference. This can be seen in panels (c), (d) and (e) for the two selected particles but in most
cases the difference between energised and thermal particles is not so clear. Figure 2 shows the histogram of the maximum kinetic energy. One can see that only a small fraction, approximately 2%, of the total particle population are affected by the instability. The majority of particles belong to the thermal population, whereas for the non-thermal population, the number of particles decreases significantly with the energy.
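For reference, the label can be computed directly from the traced momenta; this is a sketch assuming, as is common in PIC codes, that momenta are stored in units of \(m_{e}c\).

```python
import numpy as np

def max_kinetic_energy(px, py, pz):
    """Label y_i = max(gamma - 1) over a traced particle's momentum time series.
    Assumes momenta normalized to m_e * c, so gamma = sqrt(1 + |p|^2)."""
    gamma = np.sqrt(1.0 + px**2 + py**2 + pz**2)
    return float(np.max(gamma - 1.0))
```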
Figure 1: Time series for two sample electrons travelling through the Buneman instability region. Shown are particle trajectories overlaid on a map of the \(E_{x}\) electric field (a), the evolution of particle Lorentz factor (b), and the evolution of particle momentum components (c-e). The electron data shown with a red (blue) line belongs to the non-thermal (thermal) population. Spatial coordinates are given in units of the electron skin depth, \(\lambda_{\mathrm{se}}=c/\omega_{\mathrm{pe}}\). Here \(c\) is the speed of light and \(\omega_{\mathrm{pe}}\) is the electron plasma frequency, whose inverse is also the unit of time.
Figure 2: Histogram of the maximum kinetic energy of the particles in the dataset. The thermal electron population is represented by a dashed line for reference.
## 3 Results
### Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are a type of NN that use convolutional layers to process data. These layers consist of multiple filters or units. Each filter contains a convolutional window with a given width, filled with weights that perform convolutions across the time series. The resulting output is then standardized, which helps the NN in effectively processing the data. Without standardization, the performance of the NN would deteriorate. Finally, the standardized data goes through an activation function, often Leaky ReLU [16], which gives the NN its characteristic non-linear behaviour. Typically, NNs consist of multiple convolutional layers, varying from 3-5 to even hundreds of layers.
In our study, we investigate the use of a classical CNN for regression analysis on our data. Additionally, we employ an autoencoder based on a CNN to perform anomaly detection.
### Regression
The regression algorithm is a supervised learning method that provides a single value as its output. After the convolutional layers, the output passes through Max Pooling, which selects the maximum value for each filter in the layer. The resulting data is concatenated into a single array and then it goes through the linear activation function, \(y=x\), which predicts the maximum kinetic energy. This prediction is compared to the original value \(y_{i}\) by using a loss function, specifically the Huber loss function [17], to update the weights of the convolutional layer filters (see Fig. 3). Due to the heavily unbalanced dataset, we use sample weights [12] on our input data, which helps balance the training data.
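A sketch of such a regression network in Keras is shown below; the layer count, filter numbers and kernel widths are illustrative choices of ours, not the exact architecture used here.

```python
import tensorflow as tf

def build_regressor(timesteps=1200, channels=3):
    """1D CNN regressor over momentum time series; layer sizes are illustrative."""
    inp = tf.keras.Input(shape=(timesteps, channels))
    x = inp
    for filters in (32, 64, 128):
        x = tf.keras.layers.Conv1D(filters, kernel_size=7, padding="same")(x)
        x = tf.keras.layers.BatchNormalization()(x)   # standardize layer outputs
        x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    out = tf.keras.layers.Dense(1, activation="linear")(x)   # predicts max(gamma - 1)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss=tf.keras.losses.Huber())
    return model

# Training would pass per-particle sample weights that upweight the rare
# high-energy particles, e.g. model.fit(P, y, sample_weight=w, ...).
```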
Figure 4 (left) shows the relation between the original and predicted values of \(y_{i}\) in the regression analysis. Our goal is to obtain a regression line close to \(y=x\) and a high \(R^{2}\) score. Remarkably, we achieve a score of 0.9805, despite the noisy momentum data. Figure 4 (right) shows histograms comparing the true and predicted values of the maximum kinetic energy. Both histograms closely align, although there is a slight over-prediction in the energetic regime, possibly due to the impact of sample weighting on high-energy particles.
Figure 3: Sketch of a Convolutional Neural Network for a regression algorithm with input data representing the momentum time series.
### Anomaly Detection
Anomaly Detection is an unsupervised learning method that uses an autoencoder (Fig. 5). The autoencoder operates as follows: The input data goes through several convolutional layers with progressively lower number of filters, effectively encoding the data. This is very similar to the regression approach. However, the data is compressed into a bottleneck convolutional layer, containing fewer filters, but retaining essential information. Subsequently, the data is decoded, with the amount of filters in convolutional layers progressively increasing until a recreation of the original data is obtained. If the reconstructed time series significantly deviates from the original, it is tagged as an anomaly.
To distinguish between anomalies and non-anomalies, a threshold is computed, equivalent to the maximum value achieved by the loss function (in our case, LogCosh). If a particle's loss exceeds the threshold, it is labeled as an anomaly. It is important to note that we do not use any labels, as this is unsupervised learning. To train the autoencoder, we only use particles with \(\max(\gamma-1)<0.07\) (non-energetic). This is to ensure that the training phase only sees the most commonly occurring particles.
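The following sketch outlines this procedure in Keras; again, layer sizes are illustrative, and the per-feature thresholding follows the description above.

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(timesteps=1200, channels=3):
    """Convolutional autoencoder with a channel bottleneck; sizes are illustrative."""
    inp = tf.keras.Input(shape=(timesteps, channels))
    x = tf.keras.layers.Conv1D(32, 7, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv1D(8, 7, padding="same", activation="relu")(x)   # bottleneck
    x = tf.keras.layers.Conv1D(32, 7, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv1D(channels, 7, padding="same")(x)             # decoder
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss=tf.keras.losses.LogCosh())
    return model

def flag_anomalies(model, P_train, P_test):
    """Per-feature threshold = max LogCosh loss on the non-energetic training set;
    a particle is an anomaly if any momentum component exceeds its threshold."""
    logcosh = lambda x, y: np.log(np.cosh(y - x)).mean(axis=1)   # (n, channels)
    thr = logcosh(P_train, model.predict(P_train)).max(axis=0)
    loss = logcosh(P_test, model.predict(P_test))
    return (loss > thr).any(axis=1)
```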
Figure 6 (top) shows the relation between the maximum kinetic energy of particles in the dataset and the loss obtained from the autoencoder. The horizontal dashed line represents the threshold for each input value (feature), whereas the vertical line represents our separation between energetic and non-energetic particles. Ideally, all energetic particles would reside in the top-right quadrant. However, near the crossing point it is harder for the NN to differentiate between energised and non-energised particles, as their time series are relatively similar. Figure 6 (top) can be translated into the table (Fig. 6, bottom), which displays the number of energetic and non-energetic particles along with their assigned tags, either anomaly or non-anomaly. Overall, the autoencoder demonstrates an accurate prediction of energised particles just based on the shape of the time series, although 327 particles (\(\sim\)22%) are not correctly identified. Conversely, for non-energetic particles,
Figure 4: Results for regression. Left plot shows the comparison between the true value (x-axis) and predicted value (y-axis) of the maximum kinetic energy. Right plot shows the histograms for the true and predicted values of the maximum kinetic energy.
the autoencoder effectively distinguishes them, as evidenced by the low count of 17 compared to 61,468.
Figure 7 illustrates the predictions made by the NN on the data. For an anomaly, the NN fails to recreate at least one of the momentum time series, and the particle is therefore tagged as an anomaly. In the example shown in the top row, the \(p_{x}\) and \(p_{z}\) data are satisfactorily reproduced. However, the \(p_{y}\) data deviate near time steps 600-750, marking the operation of the Buneman instability that affects \(p_{y}\) as the particles surf the electrostatic waves and get energised. On the other hand, the non-anomaly is accurately recreated, and although there are slight deviations in some instances, the loss remains below the anomaly threshold.
## 4 Conclusions
We employ NNs to predict the maximum kinetic energy of particles that are energised through surfing on the Buneman instability waves in the foot of a quasi-perpendicular shock. Regression yields an excellent result, with an \(R^{2}\) score of 0.9805. Anomaly detection allows us to detect energetic particles without requiring any labels, in contrast to regression, which uses labels. Using only the momentum time series as input data, the autoencoder identifies around 78% of energetic particles. By incorporating this autoencoder in the analysis, the NN can aid in identifying energetic particles.
The results demonstrate our ability to predict the particle energy solely using momenta in a specific scenario of acceleration at the Buneman instability waves. However, we are also exploring the use of the electric field data as input to predict the maximum kinetic energy. Unlike momentum, the electric field at particle location is not directly related to the energy. The goal of this approach is to test various acceleration scenarios beyond the surfing on the Buneman instability waves and experiment with other input variables, such as the Fourier transform of the input data. As simulations grow in size, tools such as NN facilitate faster and more precise analysis, enabling improved workflow in particle acceleration research.
Figure 5: Sketch of an autoencoder used for anomaly detection. The data is encoded via several convolutional layers, culminating in a bottleneck layer, subsequently to be decoded to reconstruct the original data.
Figure 6: Maximum kinetic energy of the particles versus the loss obtained from the autoencoder (top); the horizontal dashed lines mark the anomaly thresholds for each feature, and the vertical line separates energetic particles from the non-energetic ones. The table at the bottom shows the values that are in the four quadrants in the top plots. Particles that have at least one feature (one momentum component) with a higher loss than the threshold are tagged as anomaly.
## Acknowledgments
The work of G. T. P. and J. N. has been supported by Narodowe Centrum Nauki through research project no. 2019/33/B/ST9/02569. A. B. was supported by the German Research Foundation (DFG) as part of the Excellence Strategy of the federal and state governments - EXC 2094 - 390783311. We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016378. This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #520 (Energy Partition Across Collisionless Shocks).
|
2301.11180 | Low-Rank Winograd Transformation for 3D Convolutional Neural Networks | This paper focuses on Winograd transformation in 3D convolutional neural
networks (CNNs) that are more over-parameterized compared with the 2D version.
The over-increasing Winograd parameters not only exacerbate training complexity
but also barricade the practical speedups due simply to the volume of
element-wise products in the Winograd domain. We attempt to reduce trainable
parameters by introducing a low-rank Winograd transformation, a novel training
paradigm that decouples the original large tensor into two less
storage-required trainable tensors, leading to a significant complexity
reduction. Built upon our low-rank Winograd transformation, we take one step
ahead by proposing a low-rank oriented sparse granularity that measures
column-wise parameter importance. By simply involving the non-zero columns in
the element-wise product, our sparse granularity is empowered with the ability
to produce a very regular sparse pattern to acquire effectual Winograd
speedups. To better understand the efficacy of our method, we perform extensive
experiments on 3D CNNs. Results manifest that our low-rank Winograd
transformation well outperforms the vanilla Winograd transformation. We also
show that our proposed low-rank oriented sparse granularity permits practical
Winograd acceleration compared with the vanilla counterpart. | Ziran Qin, Mingbao Lin, Weiyao Lin | 2023-01-26T15:44:22Z | http://arxiv.org/abs/2301.11180v1 | # Low-Rank Winograd Transformation for 3D Convolutional Neural Networks
###### Abstract
This paper focuses on Winograd transformation in 3D convolutional neural networks (CNNs) that are more over-parameterized compared with the 2D version. The over-increasing Winograd parameters not only exacerbate training complexity but also barricade the practical speedups due simply to the volume of element-wise products in the Winograd domain. We attempt to reduce trainable parameters by introducing a low-rank Winograd transformation, a novel training paradigm that decouples the original large tensor into two less storage-required trainable tensors, leading to a significant complexity reduction. Built upon our low-rank Winograd transformation, we take one step ahead by proposing a low-rank oriented sparse granularity that measures column-wise parameter importance. By simply involving the non-zero columns in the element-wise product, our sparse granularity is empowered with the ability to produce a very regular sparse pattern to acquire effectual Winograd speedups. To better understand the efficacy of our method, we perform extensive experiments on 3D CNNs. Results manifest that our low-rank Winograd transformation well outperforms the vanilla Winograd transformation. We also show that our proposed low-rank oriented sparse granularity permits practical Winograd acceleration compared with the vanilla counterpart.
## 1 Introduction
3D convolutional neural networks (CNNs) have achieved substantial accuracy increases in many video processing tasks, as a result of their superior capacity for extracting spatio-temporal features. Unfortunately, they demand large amounts of computing resources, primarily because the 3D kernels are more computationally intensive. Restrictions on runtime, memory, and power budget prevent 3D CNNs from running on many real-world devices.
Fast convolution algorithms such as Winograd convolution [18] and fast Fourier transform (FFT) [33] can greatly reduce the convolutional cost. Their principle is to replace spatial convolution operations with element-wise products and discard redundant multiplications in convolution. Conventional FFT-based convolution is fast for large filters; therefore, most recent attention focuses on Winograd convolution, principally for the small 3\(\times\)3 filters adopted by state-of-the-art CNNs. Another potential solution is network pruning, which reduces network complexity by removing unnecessary units [6; 32]. It seems that Winograd convolution and network pruning can be well combined to further save computation costs. However, they are not naturally compatible since the sparsity resulting from network pruning is diminished after the kernel transformation of the Winograd algorithm. To tackle this incompatibility, [24] performed pruning operations in the Winograd domain while [28] added the ReLU function after the Winograd transformation to increase the sparsity of the element-wise product. However, neither study considers that the Winograd kernel is location-sensitive, as later demonstrated by [24], where an importance factor matrix is utilized to gauge the significance of different kernel locations.
Current investigations mostly give attention to the Winograd transformation in 2D CNNs. A direct extension of these methods to 3D CNNs is inapplicable, as we analyze, for two reasons. First, 3D Winograd transformation causes a considerable parameter increase. Taking the F(2, 3)-based Winograd algorithm as an example, a typical 3D convolutional kernel with a shape of \(3\times 3\times 3\) is replaced by a \(4\times 4\times 4\) Winograd kernel, leading to 2.37\(\times\) more parameters, whereas the increase is only 1.78\(\times\) in the 2D case. More parameters from the Winograd transformation do not always benefit model capacity but cause model redundancy, as analyzed in Sec. 3.2 and verified in Sec. 4.2. Also, the over-increasing trainable parameters pose a serious challenge to the capability of the training hardware. Second, existing methods fail to accelerate the Winograd transformation even when pruning is conducted in the Winograd domain. Similar to weight pruning [20; 8; 6], prior implementations derive an irregular sparse weight matrix, which receives very limited speed gains since the irregular sparsity barely takes advantage of vector processing architectures such as single instruction multiple data (SIMD), and poorly utilizes memory buses [27]. Therefore, exploiting the Winograd transformation for acceleration remains unsolved, in particular for 3D CNNs, primarily due to the ever-increasing element-wise products in the Winograd domain.
In this paper, we put forward a novel Winograd transformation for 3D CNNs towards solving the above issues. To address the over-increasing parameters in 3D CNNs, as shown in Fig. 1, we introduce a low-rank Winograd transformation method that, in contrast to the vanilla version, represents the updating matrix (the variation from a pre-trained Winograd weight tensor to the final fine-tuned one) with two smaller matrices. In this fashion, we concentrate on updating weights in the main directions of the whole Winograd space during sparse training, leading to superior performance over the vanilla Winograd transformation. Besides, the two less storage-required matrices lead to a significant reduction in trainable Winograd parameters. With regard to Winograd transformation acceleration, we further present a low-rank oriented sparse granularity that quantifies the importance of each tensor column. The rationale behind this is to derive a more regular sparse pattern by simply involving the non-zero columns in the element-wise product of the Winograd domain. To this end, we introduce a scoring sequence that continuously accumulates the magnitude and gradient of each column location in every training iteration as an importance assessment, and we finally remove all weights belonging to the low-scored columns. Practical speedups are observed from our low-rank oriented sparse granularity (see Table 3).
## 2 Related Work
### Spatial-Domain Pruning
Spatial-domain pruning is the practice of removing parameters from an existing network. It may entail removing individual parameters, _a.k.a._ weight pruning, or parameters in groups, such as filter pruning and block pruning. For weight pruning, individual weights are measured by a certain criterion such as weight magnitude [8; 7; 6], higher-order information [20; 10; 5; 21], and so on. These methods are demonstrated to preserve model performance well. However, the resulting irregular sparse matrix requires specialized hardware/libraries to achieve practical speedups. For filter pruning, entire filters are removed by standards such as \(\ell_{1}\)/\(\ell_{2}\)-norm [23; 30; 11], activation sparsity [14], lasso regression-based channel selection [12], and rank of feature maps [26]. In contrast to weight pruning, filter pruning has advantages in acceleration but causes larger performance drops. Therefore, block pruning, where a block of weights is removed simultaneously, has received recent research focus [37; 27] for its better performance than filter pruning as well as more hardware-friendly deployment than weight pruning. However, vanilla spatial pruning methods cannot be directly combined with the Winograd convolution because the Winograd transformation diminishes the sparsity resulting from pruning [45].
### Winograd-Domain Pruning
Though vanilla spatial pruning fails to cooperate with the Winograd convolution, its main pruning principles have been extended to remove parameters in the Winograd domain. [29] removed Winograd-domain kernels while retaining the kernels of the original network. However, dimension inconsistency arises since the Winograd-domain kernels are of a higher dimension than the spatial-domain kernels. [24] introduced Winograd layers in exchange for the standard convolutional layers. The pruning and training are simultaneously conducted in the Winograd layers. Thus, the dimension inconsistency issue is eliminated and the sparsity in the Winograd domain also increases. [28] introduced the ReLU operation to the Winograd domain to derive sparse transformed activations, which improves the possibility of sparse element-wise products in the Winograd domain. [45] specified that different locations of the Winograd layers contribute differently to the output activations. Despite the progress, these studies lead to hardware-unfriendly irregular sparse patterns, causing imbalanced workloads among the data flows. To leverage the multiplication reduction from sparsity, [31; 44] devised sparse patterns that benefit more from speedups on specialized hardware.
## 3 Methodology
### 3D Winograd
Given a 3D convolution kernel \(\mathcal{G}\in\mathbb{R}^{C_{o}\times C_{i}\times r_{d}\times r_{h}\times r_{w}}\), where \(C_{o}\) and \(C_{i}\) denote the number of output and input channels and \((r_{d},r_{h},r_{w})\) forms the kernel size, it is convolved with four-dimensional input data \(\mathcal{I}\in\mathbb{R}^{C_{i}\times D_{i}\times H_{i}\times W_{i}}\), where \(D_{i}\), \(H_{i}\) and \(W_{i}\) respectively denote the depth, height, and width of the input \(\mathcal{I}\). A typical 3D convolution operation
Figure 1: Comparison between (a) the vanilla Winograd transformation and (b) our low-rank Winograd transformation. We decouple the whole Winograd weights into two smaller matrices, leading to a significant reduction in trainable parameters.
with unit stride and unit dilation can be formulated as:
\[\mathcal{O}(n,:,:,:)=\sum_{c=0}^{C_{i}-1}\mathcal{G}(n,c,:,:,:)*\mathcal{I}(c,:,:,:), \tag{1}\]
where \(\mathcal{O}\in\mathbb{R}^{C_{o}\times D_{o}\times H_{o}\times W_{o}}\) is the output feature map, \(D_{o}\), \(H_{o}\) and \(W_{o}\) denote the depth, height and width of \(\mathcal{O}\), respectively, and \(*\) stands for 3D convolution.
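To make the channel-wise summation in Eq. (1) concrete, the following minimal PyTorch sketch (toy shapes; variable names are our own) checks that summing per-input-channel 3D convolutions reproduces the library's `conv3d`:

```
# A minimal sketch of Eq. (1): the output channel n is the sum of
# single-channel 3D convolutions over all input channels c.
import torch
import torch.nn.functional as F

C_o, C_i, r = 4, 3, 3                 # output/input channels, kernel size
D_i = H_i = W_i = 8                   # input depth/height/width
G = torch.randn(C_o, C_i, r, r, r)    # kernel  \mathcal{G}
I = torch.randn(C_i, D_i, H_i, W_i)   # input   \mathcal{I}

# Reference: library 3D convolution (unit stride, unit dilation, no padding).
O_ref = F.conv3d(I.unsqueeze(0), G).squeeze(0)

# Eq. (1): for each output channel n, sum the per-channel convolutions.
O = torch.stack([
    sum(F.conv3d(I[c][None, None], G[n, c][None, None]).squeeze()
        for c in range(C_i))
    for n in range(C_o)
])
print(torch.allclose(O, O_ref, atol=1e-5))  # True
```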
The above 3D operation can be split into multiple basic convolutions, each of which can be optimized to obtain lower arithmetic complexity by using the Winograd convolution [42]. Specifically, the Winograd convolution disassembles the input data \(\mathcal{I}\) into several overlapping tiles \(\{\mathcal{I}_{1},\mathcal{I}_{2},...\}\), in which each tile \(\mathcal{I}_{k}\) is a sub-tensor of \(\mathcal{I}\) with a shape of \(C_{i}\times t_{d}\times t_{h}\times t_{w}\). Similar to Eq. (1), each tile \(\mathcal{I}_{k}\) can be convolved separately with \(\mathcal{G}\), resulting in a basic output tile \(\mathcal{O}_{k}\), a sub-tensor of \(\mathcal{O}\) with a shape of \(C_{o}\times m_{d}\times m_{h}\times m_{w}\), where \(m_{d}=t_{d}-r_{d}+1\), \(m_{h}=t_{h}-r_{h}+1\) and \(m_{w}=t_{w}-r_{w}+1\). Note that any \(\mathcal{O}_{k}\) and \(\mathcal{O}_{s}\) are non-overlapping, and the spatial convolution result \(\mathcal{O}\) can be obtained by reassembling \(\{\mathcal{O}_{1},\mathcal{O}_{2},...\}\) in order. Eq. (1) can be further formulated as:
\[\tilde{\mathcal{I}}\in\mathbb{R}^{T\times C_{i}\times t_{d}\times t_{h}\times t_{w}}\rightarrow\tilde{\mathcal{O}}\in\mathbb{R}^{T\times C_{o}\times m_{d}\times m_{h}\times m_{w}}\] \[\text{via}\quad\tilde{\mathcal{O}}(k,n,:,:,:)=\sum_{c=0}^{C_{i}-1}\mathcal{G}(n,c,:,:,:)*\tilde{\mathcal{I}}(k,c,:,:,:), \tag{2}\]
where \(\tilde{\mathcal{O}},\tilde{\mathcal{I}}\) denote the disassembled output feature and input data, \(\tilde{\mathcal{O}}(k,:,:,:,:)\) and \(\tilde{\mathcal{I}}(k,:,:,:,:)\) denote the basic output tile \(\mathcal{O}_{k}\) and the basic input tile \(\mathcal{I}_{k}\), respectively, and \(T=D_{i}H_{i}W_{i}/(t_{d}t_{h}t_{w})\) is the number of disassembled tiles. We refer readers to [18] for a thorough treatment. Note that in what follows, we introduce \(r=r_{d}=r_{h}=r_{w}\) and \(t=t_{d}=t_{h}=t_{w}\) for brevity, since current networks often have a uniform kernel size such as \(3\times 3\times 3\) across different dimensions; the basic output tile is accordingly characterized by \(m=m_{d}=m_{h}=m_{w}\). Since the spatial convolution is computationally intensive, Eq. (2) can be further optimized with the Winograd transformation [18]:
\[\tilde{\mathcal{O}}(k,n,:,:,:)=\mathscr{T}_{O}\Big{(}\sum_{c=0}^{C_{i}-1} \mathscr{T}_{K}\big{(}\mathcal{G}(n,c,:,:,:)\big{)}\odot\mathscr{T}_{I}( \tilde{\mathcal{I}}(k,c,:,:,:)\Big{)}, \tag{3}\]
where \(\odot\) stands for the element-wise product.
In Eq. (3), the kernel \(\mathcal{G}(n,c,:,:,:)\) and input tiles \(\tilde{\mathcal{I}}(k,c,:,:,:)\) are individually converted into the Winograd domain of the same shape by the Winograd kernel transformation \(\mathscr{T}_{K}(x)=(KxK^{T})^{R}K^{T}\) and input transformation \(\mathscr{T}_{I}(x)=(B^{T}xB)^{R}B\). Finally, the Winograd-domain kernel and input tile are multiplied in an element-wise manner, the results of which are transformed back to the vanilla spatial domain by the Winograd inverse transformation \(\mathscr{T}_{O}(x)=\big{(}(A^{T}xA)^{R}A\big{)}^{R}\). Herein, \(K\), \(B\) and \(A\) are three transformation matrices determined by \(F(m\times m\times m,r\times r\times r)\); their specific formats can be found in [18]. \((\cdot)^{R}\) denotes clockwise dimension rotation. Considering a 3D tensor \(T(z,y,x)\), a toy example for \((\cdot)^{R}\) is: \(T(z,y,x)^{R}=T(y,x,z)\).
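As a small illustration, one plausible NumPy realization of the rotation \((\cdot)^{R}\), under the reading that axis order \((z,y,x)\) becomes \((y,x,z)\), is:

```
# A sketch of the clockwise dimension rotation (.)^R: the axis ordered
# (z, y, x) is rotated to (y, x, z).
import numpy as np

T = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # axes ordered as (z, y, x)
T_rot = np.transpose(T, (1, 2, 0))         # new axes ordered as (y, x, z)
# Entry-wise, T_rot[y, x, z] == T[z, y, x]:
print(T_rot[1, 2, 0] == T[0, 1, 2])        # True
```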
For better illustration, we first rearrange the spatial kernel \(\mathcal{G}\in\mathbb{R}^{C_{o}\times C_{i}\times r\times r\times r}\), the input tiles \(\tilde{\mathcal{I}}\in\mathbb{R}^{T\times C_{i}\times t\times t\times t}\), and the output tiles \(\tilde{\mathcal{O}}\in\mathbb{R}^{T\times C_{o}\times m\times m\times m}\) into 2D matrices \(G\in\mathbb{R}^{C_{o}C_{i}\times r^{3}}\), \(\tilde{I}\in\mathbb{R}^{TC_{i}\times t^{3}}\), and \(\tilde{O}\in\mathbb{R}^{TC_{o}\times m^{3}}\) accordingly. After rearrangement, Eq. (3) can be rewritten as:
\[\tilde{O}(kC_{o}+n,:)=\Big{(}\sum_{c=0}^{C_{i}-1}G(C_{i}n+c,:)T_{K}\odot\tilde{I}(kC_{i}+c,:)T_{I}\Big{)}T_{O}, \tag{4}\]
where \(T_{K}\in\mathbb{R}^{r^{3}\times t^{3}}\), \(T_{I}\in\mathbb{R}^{t^{3}\times t^{3}}\), \(T_{O}\in\mathbb{R}^{t^{3}\times m^{3}}\) are transformation matrices, and the operations with \(T_{K}\), \(T_{I}\), \(T_{O}\) correspond to the Winograd kernel transformation \(\mathscr{T}_{K}(\cdot)\), the Winograd input transformation \(\mathscr{T}_{I}(\cdot)\), and the Winograd output transformation \(\mathscr{T}_{O}(\cdot)\) in Eq. (3), respectively\({}^{1}\). To simplify the description, we denote the Winograd input transformed result \(\tilde{I}T_{I}\) as \(V\) and use \(\tilde{\odot}\) to represent the consecutive operations of element-wise product and summation over the input channels in Eq. (4). Eq. (4) can be simplified as:
Footnote 1: A comprehensive derivation of Eq. (4) as well as the formats of transformation matrices \(T_{K}\), \(T_{I}\) and \(T_{O}\) can be referred to the appendix.
\[\tilde{O}=\big{(}GT_{K}\tilde{\odot}V\big{)}T_{O}. \tag{5}\]
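For intuition, the following NumPy sketch verifies the 1D \(F(2,3)\) Winograd algorithm with the standard Lavin-Gray transformation matrices, and shows how a triple Kronecker product can lift a 1D transform to the flattened 3D matrices \(T_{K}\) and \(T_{I}\) used above (under row-major flattening). This is our own illustration of the construction, not reference code from the paper:

```
# 1D F(2, 3) Winograd (t = 4, r = 3, m = 2) and the Kronecker lift to 3D.
import numpy as np

BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
K  = np.array([[1, 0, 0], [.5, .5, .5], [.5, -.5, .5], [0, 0, 1]])
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

g = np.random.randn(3)                     # 1D kernel
d = np.random.randn(4)                     # 1D input tile
y = AT @ ((K @ g) * (BT @ d))              # Winograd output, length 2
y_ref = np.correlate(d, g, mode="valid")   # direct valid correlation
print(np.allclose(y, y_ref))               # True

# Lifting to 3D on flattened tensors: the transform of the separable 3D
# algorithm is the triple Kronecker product of its 1D matrix.
T_K = np.kron(np.kron(K, K), K).T          # shape (27, 64) = (r^3, t^3)
T_I = np.kron(np.kron(BT, BT), BT).T       # shape (64, 64) = (t^3, t^3)
print(T_K.shape, T_I.shape)
```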
Particularly, [24] introduced a 2D Winograd layer parameterized by a weight tensor \(\mathcal{G}_{W}\in\mathbb{R}^{C_{o}\times C_{i}\times t\times t}\) to replace the Winograd kernel transformation. The element-wise operation costs are expected to decrease by deriving a sparse \(\mathcal{G}_{W}\). In this paper, we extend the Winograd layer to 3D and introduce the Winograd-domain weight \(\mathcal{G}_{W}\in\mathbb{R}^{C_{o}\times C_{i}\times t\times t\times t}\) to directly perform element-wise products with the Winograd-domain input tiles. We also rearrange \(\mathcal{G}_{W}\) into a 2D Winograd-domain weight matrix \(G_{W}\in\mathbb{R}^{C_{o}C_{i}\times t^{3}}\), so Eq. (5) becomes:
\[\tilde{O}=\big{(}G_{W}\tilde{\odot}V\big{)}T_{O}. \tag{6}\]
The weight matrix \(G_{W}\) is directly inherited from the pre-trained spatial-domain weight matrix transformed into the Winograd domain, _i.e._, \(G_{W}=GT_{K}\). After getting \(G_{W}\), the problem of pruning the 3D Winograd layer is converted into sparsifying the weight matrix \(G_{W}\).
Different from the 1D or 2D Winograd layer, the 3D Winograd layer introduces more parameters, which poses formidable challenges in both the increased number of trainable parameters during sparse training and the acceleration of the element-wise product. These two issues are respectively solved in this paper by introducing a low-rank Winograd transformation in Sec. 3.2 and a low-rank oriented sparse granularity in Sec. 3.3, which are the two core contributions of this paper. In what follows, we refer to the model with Winograd layers as the Winograd model and the one with convolutional layers as the spatial model.
### Low-rank Winograd Transformation
The sparse training of the Winograd model shares a two-step pipeline similar to the spatial model, including pruning weights that are unimportant and retraining the model for recovering accuracy. However, it is challenging to train the
3D Winograd model due to the increasing trainable parameters and expensive element-wise products. When looking back on the Winograd weight \(G_{W}\), we observe that the over-increasing parameters do not always benefit the performance gains. Then, we analyze the over-increasing parameters by performing singular value decomposition (SVD) on the rearranged Winograd-domain weight matrix \(G_{W}\in\mathbb{R}^{C_{o}C_{i}\times t^{3}}\). In this fashion, \(G_{W}\) can be represented in the \(t^{3}\)-dimensional subspace as \(G_{W}=\sum_{i=0}^{t^{3}-1}\sigma_{i}\vec{u}_{i}\vec{v}_{i}^{T}\), where \(\sigma_{i}\), \(\vec{u}_{i}\), and \(\vec{v}_{i}\) denote the \(i\)-th largest singular value and the corresponding left/right singular vectors. Fig. 2 visualizes the singular values, where two phenomena can be observed. First, among all the \(t^{3}\) singular values, larger-magnitude ones are concentrated in the top-\(r^{3}\) (\(r^{3}=27\) in Fig. 2), which is exactly the number of weight elements in the spatial domain. Second, among the top-\(r^{3}\) singular values, those in the front part are much larger than those in the back part. Such phenomena suggest the existence of over-parameterized weights in the Winograd domain, which means that it may not be necessary to train the Winograd model within the full Winograd domain. A more efficient training scheme is thus urgently needed for 3D Winograd models.
Given the pre-trained Winograd-domain weight \(G_{W}\), we denote the fine-tuned weight as \(G_{W}+\Delta G_{W}\), where \(\Delta G_{W}\in\mathbb{R}^{C_{o}C_{i}\times t^{3}}\) denotes the update from \(G_{W}\) to the eventual fine-tuned weight. [13] and [25] showed that the update \(\Delta G_{W}\) tends to have a low "intrinsic rank" if the pre-trained weight \(G_{W}\) is over-parameterized. This indicates that fine-tuning in the Winograd domain should focus on the main directions of the whole Winograd space. In light of this, we first freeze the pre-trained dense Winograd weight \(G_{W}\) and then achieve a low-rank update \(\Delta G_{W}\) by a low-rank decomposition \(\Delta G_{W}=G_{r}G_{c}\), where \(G_{r}\in\mathbb{R}^{C_{o}C_{i}\times s}\) and \(G_{c}\in\mathbb{R}^{s\times t^{3}}\) (\(s\ll t^{3}\)). Therefore, our low-rank Winograd transformation can be finally described as:
\[\tilde{O}=\big{(}(G_{W}+G_{r}G_{c})\tilde{\odot}V\big{)}T_{O}. \tag{7}\]
During sparse training, we freeze \(G_{W}\) and update \(G_{r}\) and \(G_{c}\) only. In this fashion, the number of trainable parameters is reduced from \(C_{o}C_{i}t^{3}\) to \(C_{o}C_{i}s+st^{3}\) in the Winograd domain. To train \(G_{r}\) and \(G_{c}\) more effectively, we initialize \(G_{r}\) by \(G_{r}(:,i)=\alpha\sigma_{i}\vec{u}_{i}\) and \(G_{c}\) by \(G_{c}(i,:)=\vec{v}_{i}^{T}\), where \(\alpha\) is a scalar hyper-parameter that controls the amplitude of the update and is set to 0.1 in this paper.
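A minimal PyTorch sketch of this training paradigm is given below, assuming \(G_{W}\) has already been flattened to \(\mathbb{R}^{C_{o}C_{i}\times t^{3}}\); the module freezes \(G_{W}\), keeps only \(G_{r},G_{c}\) trainable, and uses the SVD-based initialization with \(\alpha=0.1\). Names and shapes are our own and this is an illustration, not the released implementation:

```
import torch
import torch.nn as nn

class LowRankWinogradWeight(nn.Module):
    def __init__(self, G_W: torch.Tensor, s: int, alpha: float = 0.1):
        super().__init__()
        self.register_buffer("G_W", G_W)        # frozen, shape (C_o*C_i, t^3)
        U, S, Vh = torch.linalg.svd(G_W, full_matrices=False)
        # G_r(:, i) = alpha * sigma_i * u_i  and  G_c(i, :) = v_i^T
        self.G_r = nn.Parameter(alpha * U[:, :s] * S[:s])  # (C_o*C_i, s)
        self.G_c = nn.Parameter(Vh[:s, :].clone())         # (s, t^3)

    def forward(self) -> torch.Tensor:
        # Effective Winograd-domain weight G_W + G_r G_c of Eq. (7).
        return self.G_W + self.G_r @ self.G_c

C_o, C_i, t, s = 8, 4, 4, 6
w = LowRankWinogradWeight(torch.randn(C_o * C_i, t ** 3), s)
n_train = sum(p.numel() for p in w.parameters() if p.requires_grad)
print(n_train, "trainable vs", C_o * C_i * t ** 3, "dense")  # 576 vs 2048
```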
### Low-Rank Oriented Sparse Granularity
In addition to trainable parameter reduction, we further attempt to decrease the computation cost of the element-wise product at inference time. By virtue of our low-rank Winograd transformation in Eq. (7), the element-wise multiplications can be lessened if most elements in the resulting \(G_{W}+G_{r}G_{c}\) are zeros. [24] imposed a sparse constraint upon \(G_{W}\) given that only \(G_{W}\) is involved in the element-wise product of Eq. (6). However, sparse constraints often produce an irregular weight matrix that receives little practical acceleration, and a simple extension of this approach is impractical in our setting of Eq. (7).
**Sparse Granularity**. Instead, in this paper, we devise a low-rank oriented sparsity to pursue effectual speedups. Our motivation mainly stems from [45], which measured the element importance of the 2D Winograd kernel using a score matrix and removed low-scored weights accordingly, leading to a distinct sparsity at different locations. We also intend to measure the weight importance, but with a more regular pattern. For ease of presentation, we denote our dense 3D Winograd weight as \(G_{W}+G_{r}G_{c}=[\alpha_{1},\alpha_{2},...,\alpha_{t^{3}}]\in\mathbb{R}^{C_{o}C_{i}\times t^{3}}\), where each column \(\alpha_{i}\in\mathbb{R}^{C_{o}C_{i}\times 1}\). Our sparse granularity is a single column location in \(G_{W}+G_{r}G_{c}\). In other words, pruning based on column locations removes entire columns of elements.
The implementation of our low-rank oriented sparsity consists of two stages, location scoring and weight retraining, which respectively identify the column locations to prune and recover the performance of the pruned model. In the former stage, we freeze the Winograd parameter \(G_{W}\) and initialize the trainable parameters \(G_{r}\) and \(G_{c}\) as introduced in Sec. 3.2. Then, we introduce a score sequence \(S\in\mathbb{R}^{t^{3}}\), initialized with zeros, to evaluate location importance. Similar to Taylor pruning [35], we accumulate the location magnitude and gradient in each training iteration to serve as the values of \(S\):
\[S^{t}= S^{t-1}+\frac{1}{C_{i}^{2}C_{o}^{2}}\big{(}\sum_{u=0}^{C_{i}C_{o}-1}|G _{W}+G_{r}G_{c}|^{t-1}\left(u,:\right)\big{)}\] \[\odot\big{(}\sum_{v=0}^{C_{i}C_{o}-1}|\frac{\partial\mathcal{L}}{ \partial G_{r}}\frac{\partial\mathcal{L}}{\partial G_{c}}|^{t-1}(v,:)\big{)}, \tag{8}\]
where the superscript \(t\) denotes the weights/gradients and the score sequence at the \(t\)-th training iteration. Note that no additional parameters are introduced when determining \(S\).
With the score sequence \(S\), we can finally derive a location set \(\mathcal{P}=\{p_{1},p_{2},...,p_{l}\}\) that contains locations whose scores within the top-\(l\) largest, leading to a sparsity rate of \((t^{3}-l)/t^{3}\).
Figure 2: Singular value analysis for layer5b of C3D and layer4b_2b of R3D-18.
Then, we can obtain a binary mask \(M\in\mathbb{R}^{t^{3}}\) as:
\[M(i)=\begin{cases}1,&i\in\mathcal{P},\\ 0,&\text{Otherwise}.\end{cases} \tag{9}\]
The location scoring stage feeds back a fixed binary mask \(M\). In the stage of retraining, \(M\) is applied to remove low-scored column locations and we only need to fine-tune the trainable parameters \(G_{r}\) and \(G_{c}\) to recover the accuracy of the pruned model. Therefore, the computation of input tiles becomes:
\[\tilde{O}=\Big{(}\big{(}(G_{W}+G_{r}G_{c})\odot M\big{)}\tilde{\odot}V\Big{)}T _{O}. \tag{10}\]
After retraining, \(G_{W}\) is finally updated by \(G_{W}=(G_{W}+G_{r}G_{c})\odot M=[\vec{0},\alpha_{p_{1}},\vec{0},\alpha_{p_{2}},\cdots,\alpha_{p_{l}},\vec{0}]\), where \(\vec{0}\), \(\alpha_{p}\) represent the pruned column and reserved important column, respectively. Fig. 3(a) gives an illustrative example of sparse training of our low-rank oriented sparse granularity.
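The two-stage procedure can be sketched as follows in NumPy, with random tensors standing in for the weights and gradients accumulated during real training iterations (Eqs. (8)-(9)); shapes and names are illustrative:

```
# Location scoring (Eq. (8)) and mask construction (Eq. (9)).
import numpy as np

CoCi, t3, l = 32, 64, 20           # flattened channels, t^3, kept columns
S = np.zeros(t3)                   # score sequence, initialized to zeros
for _ in range(100):               # pretend training iterations
    W = np.random.randn(CoCi, t3)      # stand-in for G_W + G_r G_c
    grad = np.random.randn(CoCi, t3)   # stand-in for dL/dG_r dL/dG_c
    S += (np.abs(W).sum(0) * np.abs(grad).sum(0)) / CoCi ** 2

P = np.argsort(S)[-l:]             # top-l scored column locations
M = np.zeros(t3)
M[P] = 1.0                         # binary mask of Eq. (9)
print(f"sparsity = {(t3 - l) / t3:.2f}, kept columns: {np.sort(P)}")
```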
**Speedup and Compression Mechanism**. Unlike the irregular sparse patterns of [24], our low-rank oriented sparse granularity results in a very regular sparse pattern. Therefore, it readily supports practical speedups at inference by involving only the non-zero columns in the multiplication in the code implementation. The model pruned with our low-rank oriented sparse granularity can be easily compressed by storing only the reserved columns. The inference process after applying our low-rank oriented sparse granularity is illustrated in Fig. 3(b) and detailed in the following.
Similar to \(G_{W}\), the transformed input tiles can be represented as \(t^{3}\) columns: \(V=[\beta_{1},\beta_{2},...,\beta_{t^{3}}]\), where \(\beta_{i}\in\mathbb{R}^{TC_{i}\times 1}\), and output transformation matrix can be represented as \(t^{3}\) rows: \(T_{O}=[\xi_{1};\xi_{2};...;\xi_{t^{3}}]\) where \(\xi_{i}\in\mathbb{R}^{1\times m^{3}}\). Based on Eq. (6), the output can be given as:
\[\begin{split}\tilde{O}&=\big{(}[\vec{0},\alpha_{p_{1}},\vec{0},\alpha_{p_{2}},\cdots,\alpha_{p_{l}},\vec{0}]\tilde{\odot}V\big{)}T_{O}\\ &=(\alpha_{p_{1}}\tilde{\odot}\beta_{p_{1}})\xi_{p_{1}}+(\alpha_{p_{2}}\tilde{\odot}\beta_{p_{2}})\xi_{p_{2}}+\cdots+(\alpha_{p_{l}}\tilde{\odot}\beta_{p_{l}})\xi_{p_{l}}\\ &=(\bar{G}_{W}\tilde{\odot}\bar{V})\bar{T}_{O}.\end{split} \tag{11}\]
Recall that \(\mathcal{P}=\{p_{1},p_{2},...,p_{l}\}\) contains the column locations whose scores are within the top-\(l\) largest. Therefore, in the inference stage, we only need to store a compact Winograd weight \(\bar{G}_{W}=[\alpha_{p_{1}},\alpha_{p_{2}},...,\alpha_{p_{l}}]\in\mathbb{R}^{C_{o}C_{i}\times l}\) as well as the location set \(\mathcal{P}\) to extract the columns \(\bar{V}=[\beta_{p_{1}},\beta_{p_{2}},...,\beta_{p_{l}}]\) of \(V\) and the rows \(\bar{T}_{O}=[\xi_{p_{1}};\xi_{p_{2}};...;\xi_{p_{l}}]\) of \(T_{O}\), which are operated with the corresponding columns of \(\bar{G}_{W}\). The cost of data extraction is negligible compared to the large reduction in element-wise products, of which only a fraction \(l/t^{3}\) remains.
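The column-gathering identity behind Eq. (11) can be checked with a few lines of NumPy; for clarity, the sketch collapses the channel-summing \(\tilde{\odot}\) into a plain element-wise product and uses toy shapes of our own choosing:

```
# Compressed inference: only reserved columns/rows enter the computation.
import numpy as np

rows, t3, m3, l = 32, 64, 8, 20
P = np.sort(np.random.choice(t3, l, replace=False))  # reserved locations

G_W = np.zeros((rows, t3))
G_W[:, P] = np.random.randn(rows, l)                 # column-sparse weight
V = np.random.randn(rows, t3)                        # transformed input tiles
T_O = np.random.randn(t3, m3)                        # output transform

dense = (G_W * V) @ T_O                              # full element-wise product
compact = (G_W[:, P] * V[:, P]) @ T_O[P, :]          # gather-then-multiply
print(np.allclose(dense, compact))                   # True: l/t^3 of the work
```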
## 4 Experiments
### Experimental Setup
Our methodology is applied to 3D CNN models including 3D ResNet [9] (denoted as R3D) and C3D [40], which consist of plentiful 3D convolution layers with \(3\times 3\times 3\) kernels and a stride of 1. We use R3D and C3D pre-trained on the Kinetics dataset [3] and Sports-1M dataset [15], and fine-tune them on the UCF101 [39] and HMDB51 [17] datasets; the results serve as the dense models. We follow the steps in [9] and [40] to generate training samples of 16-frame length, which are cropped to \(112\times 112\). For R3D and C3D, we replace all 3D convolutional layers that have \(3\times 3\times 3\) kernels and a stride of 1 (except the first layer) with 3D Winograd layers (\(t=4\)) and prune the 3D Winograd layers with our proposed low-rank Winograd transformation. The sparsity of a 3D Winograd layer is defined as \((t^{3}-l)/t^{3}\), where \(l\) denotes the number of remaining columns after pruning. We initialize the low-rank matrices \(G_{r}\) and \(G_{c}\) as described in Sec. 3.2. The training epochs are set to 50, of which the first 2 epochs are used for location scoring and the remaining ones for weight retraining. The initial learning rate is 1e-4 for C3D and 5e-4 for R3D during location scoring and is divided by 10 every 15 epochs during weight retraining. Stochastic gradient descent (SGD) serves as the optimizer and the cross-entropy loss guides model learning.
Figure 3: (a)Sparse training of our low-rank oriented sparse granularity. (b)Inference after applying our low-rank oriented sparse granularity.
### Result Analysis
**Low-rank Winograd Transformation**. A key advantage of our low-rank Winograd transformation is that the two less storage-required matrices significantly reduce the trainable parameters during pruning in the Winograd domain. Table 1 compares the results for pruning R3D-18 with our proposed sparse pattern, with and without the low-rank Winograd transformation, on the UCF101 dataset. As can be observed, the low-rank Winograd transformation saves numerous trainable parameters (a \(5.94\times\) reduction in this case) while achieving better performance when sparsity \(<0.7\). Even when a large proportion (sparsity \(>0.7\)) of redundant parameters is pruned in the Winograd domain, the low-rank Winograd transformation can still achieve nearly the same performance as the full Winograd-domain parameters. The results in Table 1 validate the effectiveness of the low-rank Winograd transformation for pruning redundant Winograd-domain parameters and reducing the storage requirement.
**Low-Rank Oriented Sparse Granularity**. We compare the proposed method with state-of-the-art methods for pruning 3D CNNs in different domains, such as FP (Filter Pruning) [36], RT3D (Group Pruning) [37], and DPR (Stripe Pruning) [46] in the spatial domain, and MRP [38] in the Winograd domain. Table 2 displays the pruning results of the different methods. As can be seen, compared with pruning in the Winograd domain, spatial-domain pruning methods suffer the most performance degradation while achieving the same or even lower speedups. By contrast, our proposed Winograd-domain pruning pattern achieves the highest speedup ratio and even obtains the highest accuracy for both C3D and R3D-18. For example, when pruning C3D, our method achieves \(5.8\times\) speedups with an accuracy loss of \(0.9\%\), while MRP achieves \(4.3\times\) speedups with an accuracy loss of \(1.1\%\), and RT3D achieves only \(3.6\times\) speedups with an accuracy loss of \(1.4\%\). Similarly, our method also outperforms other methods by a large margin when pruning R3D-18.
To further investigate the effect of sparsity on our proposed method, we evaluate low-rank oriented sparse granularity on R3D-18 and R3D-34 on UCF101 and HMDB51 datasets. Fig. 4 demonstrates the performance results of R3D-18 and R3D-34 under different sparsity ratios. As can be seen, our method is capable of maintaining performance drops within \(1\%\) when the sparsity ratio \(<0.5\). However, due to the extremely regular sparse granularity of our pruning pattern, our method degrades drastically if the sparsity ratio \(>0.8\).
**Acceleration Performance**. The acceleration capacity of our proposed method is evaluated on CPU-based platforms. We first compare our proposed method with the Img2col and Winograd algorithms, which are commonly used for accelerating convolutional operations. For a fair comparison, the inference code for our method and the compared methods is implemented on the mobile inference framework Tencent ncnn\({}^{2}\) and optimized with advanced SIMD (Single Instruction, Multiple Data) instructions. We deploy C3D with each acceleration method to measure the end-to-end network inference latency on the Redmi Note10 platform. Table 3 presents the experimental results. Compared with Img2col, the Winograd algorithm removes a large number of multiplications, which gives it a 2.1\(\times\) acceleration ratio, and our method further reduces inference latency on top of the Winograd algorithm. For example, our method obtains a 5.0\(\times\) acceleration ratio under a sparsity of 0.9 and a 3.4\(\times\) acceleration ratio under a sparsity of 0.3. The results shown in Table 3 demonstrate the excellent acceleration capacity of our proposed method.
Footnote 2: [https://github.com/Tencent/ncnn](https://github.com/Tencent/ncnn).
For Winograd convolution, the element-wise products in the Winograd domain occupy a vast majority of the inference time. The core acceleration mechanism of our method is to reduce a large number of operations in the Winograd domain by pruning Winograd-domain weights. Due to our
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|} \hline & Trainable & \multicolumn{5}{c|}{Sparsity} \\ & Parameters & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\ \hline w/o Low-Rank & 170.0M & 83.3 & 83.3 & 82.9 & 80.7 & 73.4 \\ \hline Low-Rank & 28.6M & 83.4 & 83.4 & 83.1 & 80.7 & 73.3 \\ \hline \end{tabular}
\end{table}
Table 1: Results of our proposed sparse pattern with and without low-rank Winograd transformation. The experiments are conducted using R3D-18 pruned with different sparsity on UCF101.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Model & Methods & Speedup & Accuracy(\%)\(\downarrow\) \\ \hline \multirow{8}{*}{C3D} & FP* [36] & 1.6\(\times\) & 1.6 \\ & FP* [36] & 3.7\(\times\) & 10.1 \\ & DPR [46] & 2.0\(\times\) & 3.3 \\ & DPR [46] & 4.0\(\times\) & 6.6 \\ & RT3D [37] & 2.6\(\times\) & 1.0 \\ & RT3D [37] & 3.6\(\times\) & 1.4 \\ & MRP [38] & 3.8\(\times\) & 0.4 \\ & MRP [38] & 4.3\(\times\) & 1.1 \\ & Ours(sparsity=0.3) & 4.3\(\times\) & 0.6 \\ & Ours(sparsity=0.5) & 5.8\(\times\) & 0.9 \\ \hline \multirow{8}{*}{R3D-18} & FP* [36] & 1.6\(\times\) & 2.6 \\ & FP* [36] & 4.0\(\times\) & 8.5 \\ & DPR* [46] & 2.0\(\times\) & 3.4 \\ & DPR* [46] & 4.0\(\times\) & 5.0 \\ & MRP [38] & 3.2\(\times\) & 0.1 \\ & MRP [38] & 3.8\(\times\) & 1.2 \\ & Ours(sparsity=0.3) & 4.3\(\times\) & 0.1 \\ & Ours(sparsity=0.5) & 5.0\(\times\) & 0.5 \\ \hline \end{tabular}
\end{table}
Table 2: Results of different pruning methods on UCF101, where * denotes our re-implementation. The experiment is conducted using C3D (baseline accuracy 81.6\(\%\)) and R3D-18 (baseline accuracy 83.5\(\%\)) and the speedup ratio is computed by GFLOPs reduction.
Figure 4: Results of R3D-18 and R3D-34 pruned with our proposed low-rank oriented sparse granularity under different sparsity. The experiment is conducted on UCF101(left) and HMDB51(right).
proposed regular sparse pattern, extracting the corresponding arithmetic data introduces only a very small overhead, which ensures that the sparsity obtained by pruning can be converted into actual speedups. Fig. 5 shows the Winograd-domain inference latency of our method and its dense Winograd counterpart. As can be seen from the figure, our proposed pruning pattern effectively translates the sparsity into actual speedups in the Winograd domain across different layers of C3D.
### Ablation Study
**Rank Selection**. To further explore the effect of ranks on the low-rank Winograd transformation, we have tried different settings of the rank \(s\) for \(G_{r}\) and \(G_{c}\). For R3D, we set ranks based on different blocks, and for C3D, we set ranks based on different layers. Specifically, for a rank set \(\mathbb{S}=\{s_{1},\cdots,s_{l}\}\), \(s_{i}\) denotes the concrete rank for the \(i\)-th Winograd layer or the \(i\)-th block containing a Winograd layer, and \(l\) denotes the total number of Winograd layers/blocks. Table 4 shows the effect of the rank setting. As can be observed from the table, a modest increase in ranks improves the performance of the low-rank Winograd transformation, and deeper layers tend to require larger ranks than shallow layers. Considering the overall effect, we finally choose \(\mathbb{S}=\{2,4,8,12\}\) for the R3D model and \(\mathbb{S}=\{1,1,2,4,8,12,12\}\) for the C3D model to perform our experiments in this paper.
**Indicators of Location Importance**. In the stage of location scoring, the score sequence \(S\) updated by different metrics will produce different pruned columns, which greatly affects the performance of the sparse model. We compare several different indicators on the C3D and R3D-18 models in Table 5. The results show that, when selecting pruned columns, the gradient-based score (\(S_{grad}\)) has a greater impact on location assessment than the magnitude-based scores (\(|G_{W}|\) and \(|G_{r}G_{c}|\)), while the combination of the two gives the best results.
## 5 Conclusion
In this paper, we have presented a novel low-rank Winograd transformation to alleviate the over-parameterization issue in 3D CNNs. We decouple the original Winograd weight matrix into two less storage-required matrices, leading to a remarkable reduction in trainable parameters. The low-rank constraint eliminates the redundant parameters and drives the updating toward the main directions of the whole Winograd space. Consequently, our low-rank Winograd transformation yields better performance. In addition, we have introduced a low-rank oriented sparse granularity to achieve effectual speedups. It models the column-wise importance of the Winograd weight matrix and removes the low-scored columns. In this fashion, the sparsity is more regular, which better supports practical acceleration in comparison with existing irregular sparsity.
|
2305.15822 | Towards Label Position Bias in Graph Neural Networks | Graph Neural Networks (GNNs) have emerged as a powerful tool for
semi-supervised node classification tasks. However, recent studies have
revealed various biases in GNNs stemming from both node features and graph
topology. In this work, we uncover a new bias - label position bias, which
indicates that the node closer to the labeled nodes tends to perform better. We
introduce a new metric, the Label Proximity Score, to quantify this bias, and
find that it is closely related to performance disparities. To address the
label position bias, we propose a novel optimization framework for learning a
label position unbiased graph structure, which can be applied to existing GNNs.
Extensive experiments demonstrate that our proposed method not only outperforms
backbone methods but also significantly mitigates the issue of label position
bias in GNNs. | Haoyu Han, Xiaorui Liu, Feng Shi, MohamadAli Torkamani, Charu C. Aggarwal, Jiliang Tang | 2023-05-25T08:06:42Z | http://arxiv.org/abs/2305.15822v1 | # Towards Label Position Bias
###### Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for semi-supervised node classification tasks. However, recent studies have revealed various biases in GNNs stemming from both node features and graph topology. In this work, we uncover a new bias - label position bias, which indicates that the node closer to the labeled nodes tends to perform better. We introduce a new metric, the Label Proximity Score, to quantify this bias, and find that it is closely related to performance disparities. To address the label position bias, we propose a novel optimization framework for learning a label position unbiased graph structure, which can be applied to existing GNNs. Extensive experiments demonstrate that our proposed method not only outperforms backbone methods but also significantly mitigates the issue of label position bias in GNNs.
## 1 Introduction
Graphs are a foundational data structure, denoting pairwise relationships between entities. They find applications across a range of domains, such as social networks, transportation, and biology (Wu et al., 2020; Ma and Tang, 2021). Among these diverse applications, semi-supervised node classification has emerged as a crucial and challenging task, attracting significant attention from researchers. Given the graph structure, node features, and a subset of labels, the semi-supervised node classification task aims to predict the labels of unlabeled nodes. In recent years, Graph Neural Networks (GNNs) have demonstrated remarkable success in addressing this task due to their exceptional ability to model both the graph structure and node features (Zhou et al., 2020). A typical GNN model usually follows the message-passing scheme (Gilmer et al., 2017), which mainly contains two operators, i.e., feature transformation and feature propagation, to exploit node features, graph structure, and label information.
Despite the great success, recent studies have shown that GNNs could introduce various biases from the perspectives of node features and graph topology. In terms of node features, Jiang et al. (2022) demonstrated that the message-passing scheme could amplify sensitive node attribute bias. A series of studies (Agarwal et al., 2021; Kose and Shen, 2022; Dai and Wang, 2021) have endeavored to mitigate this sensitive attribute bias in GNNs and ensure fair classification. In terms of graph topology, Tang et al. (2020) investigated the degree bias in GNNs, signifying that high-degree nodes typically outperform low-degree nodes. This degree bias has also been addressed by several recent studies (Kang et al., 2022; Liu et al., 2023; Liang et al., 2022).
In addition to node features and graph topology, the label information, especially the position of labeled nodes, also plays a crucial role in GNNs. However, the potential bias in label information has been largely overlooked. In practice, with an equal number of training nodes, different labeled sets can result in significant discrepancies in test performance [14, 15, 16, 17]. For instance, Ma et al. [14] studied the subgroup generalization of GNNs and found that the shortest path distance to labeled nodes can also affect GNN performance, but they did not provide a deep understanding or solutions. The investigation of the influence of labeled nodes' positions on unlabeled nodes remains under-explored.
In this work, we discover the presence of a new bias in GNNs, namely the label position bias, which indicates that the nodes "closer" to the labeled nodes tend to achieve higher prediction accuracy. We propose a novel metric called the Label Proximity Score (LPS) to quantify this bias. Our study shows that different node groups with varied LPSs can exhibit a significant performance gap, which showcases the existence of label position bias. More importantly, this new metric has a much stronger correlation with performance disparity than existing metrics such as degree [13] and shortest path distance [14], which suggests that the proposed Label Proximity Score might be a more intrinsic measurement of label position bias.
Addressing the label position bias in GNNs is greatly desired. First, the label position bias raises fairness issues for nodes that are distant from the labeled nodes. For instance, in a financial system, label position bias could result in unfair assessments for individuals far from labeled ones, potentially denying them access to financial resources. Second, mitigating this bias has the potential to enhance the performance of GNNs, especially if distant nodes can be correctly classified. In this work, we propose a Label Position unbiased Structure Learning method (LPSL) to derive a graph structure that mitigates the label position bias. Specifically, our goal is to learn a new graph structure in which each node exhibits a similar Label Proximity Score. The learned graph structure can then be applied across various GNNs. Extensive experiments demonstrate that our proposed LPSL not only outperforms backbone methods but also significantly mitigates the issue of label position bias in GNNs.
## 2 Label Position Bias
In this section, we provide an insightful preliminary study to reveal the existence of label position bias in GNNs. Before that, we first define the notations used in this paper.
**Notations**. We use bold upper-case letters such as \(\mathbf{X}\) to denote matrices. \(\mathbf{X}_{i}\) denotes its \(i\)-th row and \(\mathbf{X}_{ij}\) indicates the element in the \(i\)-th row and \(j\)-th column. We use bold lower-case letters such as \(\mathbf{x}\) to denote vectors. \(\mathbf{1}_{\mathbf{n}}\in\mathbb{R}^{n\times 1}\) is the all-ones column vector. The Frobenius norm and the trace of a matrix \(\mathbf{X}\) are defined as \(\|\mathbf{X}\|_{F}=\sqrt{\sum_{ij}\mathbf{X}_{ij}^{2}}\) and \(tr(\mathbf{X})=\sum_{i}\mathbf{X}_{ii}\), respectively. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be a graph, where \(\mathcal{V}\) is the node set and \(\mathcal{E}\) is the edge set. \(\mathcal{N}_{i}\) denotes the neighborhood node set of node \(v_{i}\). The graph can be represented by an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), where \(\mathbf{A}_{ij}>0\) indicates that there exists an edge between nodes \(v_{i}\) and \(v_{j}\) in \(\mathcal{G}\), and \(\mathbf{A}_{ij}=0\) otherwise. Let \(\mathbf{D}=diag(d_{1},d_{2},\ldots,d_{n})\) be the degree matrix, where \(d_{i}=\sum_{j}\mathbf{A}_{ij}\) is the degree of node \(v_{i}\). The graph Laplacian matrix is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\). We define the normalized adjacency matrix as \(\tilde{\mathbf{A}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) and the normalized Laplacian matrix as \(\tilde{\mathbf{L}}=\mathbf{I}-\tilde{\mathbf{A}}\). Furthermore, suppose that each node is associated with a \(d\)-dimensional feature \(\mathbf{x}\), and we use \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]^{\top}\in\mathbb{R}^{n\times d}\) to denote the feature matrix. In this work, we focus on the node classification task on graphs. Given a graph \(\mathcal{G}=\{\mathbf{A},\mathbf{X}\}\) and a partial set of labels \(\mathcal{Y}_{L}=\{\mathbf{y}_{1},\ldots,\mathbf{y}_{l}\}\) for the node set \(\mathcal{V}_{L}=\{v_{1},\ldots,v_{l}\}\), where \(\mathbf{y}_{i}\in\mathbb{R}^{c}\) is a one-hot vector with \(c\) classes, our goal is to predict the labels of unlabeled nodes. For convenience, we reorder the index of nodes and use a mask matrix \(\mathbf{T}=\begin{bmatrix}\mathbf{I}_{l}&0\\ 0&0\end{bmatrix}\) to represent the indices of labeled nodes.
**Label Proximity Score.** In this study, we aim to study the bias caused by label positions. When studying prediction bias, we first need to define the sensitive groups based on certain attributes or metrics. Therefore, we propose a novel metric, namely the Label Proximity Score, to quantify the closeness between test nodes and training nodes with label information. Specifically, the proposed Label Proximity Score (LPS) is defined as follows:
\[\textit{LPS}=\mathbf{PT1}_{\mathbf{n}},\ \ \text{and}\ \ \mathbf{P}=\left( \mathbf{I}-(1-\alpha)\tilde{\mathbf{A}}\right)^{-1}, \tag{1}\]
where \(\mathbf{P}\) represents the Personalized PageRank matrix, \(\mathbf{T}\) is the label mask matrix, \(\mathbf{1_{n}}\) is an all-ones column vector, and \(\alpha\in(0,1]\) stands for the teleport probability. \(\mathbf{P}_{ij}\) represents the pairwise node proximity between node \(i\) and node \(j\). For each test node \(i\), its LPS represents the sum of its node proximity values to all labeled nodes, i.e., \((\mathbf{PT1}_{n})_{i}=\mathbf{P}_{i,:}\mathbf{T}\mathbf{1}_{n}=\sum_{j\in \mathcal{V}_{L}}\mathbf{P}_{ij}\).
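A small NumPy sketch of Eq. (1) on a toy graph is given below; the dense matrix inverse is for illustration only, and on large graphs the PPR matrix would typically be approximated (e.g., by power iteration):

```
# Computing Label Proximity Scores on a toy 5-node graph.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)     # toy adjacency
d = A.sum(1)
A_norm = A / np.sqrt(np.outer(d, d))       # D^{-1/2} A D^{-1/2}
alpha = 0.15
P = np.linalg.inv(np.eye(5) - (1 - alpha) * A_norm)  # matrix of Eq. (1)

labeled = [0]                              # indices of labeled nodes
LPS = P[:, labeled].sum(1)                 # (P T 1_n)_i = sum_j in V_L P_ij
print(np.round(LPS, 3))                    # nodes far from node 0 score lower
```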
**Sensitive Groups.** In addition to the proposed LPS, we also explore two existing metrics such as node degree (Tang et al., 2020) and shortest path distance to label nodes (Ma et al., 2021a) for comparison since they could be related to the label position bias. For instance, the node with a high degree is more likely to connect with labeled nodes, and the node with a small shortest path to a labeled node is also likely "closer" to all labeled nodes if the number of labeled nodes is small. According to these metrics, we split test nodes into different sensitive groups. Specifically, for node degree and shortest path distance to label nodes, we use their actual values to split them into seven sensitive groups, as there are only very few nodes whose degrees or shortest path distances are larger than seven. For the proposed LPS, we first calculate its value and subsequently partition the test nodes evenly into seven sensitive groups, each having an identical range of LPS values.
**Experimental Setup**. We conduct the experiments on three representative datasets used in semi-supervised node classification tasks, namely Cora, CiteSeer, and PubMed. We also experiment with three different labeling rates: 5 labels per class, 20 labels per class, and 60% of nodes per class. The experiments are performed using two representative GNN models, GCN (Kipf and Welling, 2016) and APPNP (Gasteiger et al., 2018), which cover both coupled and decoupled architectures. We also provide the evaluation on Label Propagation (LP) (Zhou et al., 2003) to exclude the potential bias caused by node features. For GCN and APPNP, we adopt the same hyperparameter settings as in their original papers. The node classification accuracy on the different sensitive groups \(\{1,2,3,4,5,6,7\}\) with a labeling rate of 20 labeled nodes per class under the APPNP, GCN, and LP models is illustrated in Figures 1, 2, and 3, respectively. Due to the space limitation, we put more details and results of other datasets and labeling rates into Appendix A.
**Observations.** From the results presented in Figure 1, 2, and 3, we can observe the following:
* Label Position bias is prevalent across all GNN models and datasets. The classification accuracy can notably vary between different sensitive groups, and certain trends are discernible. To ensure fairness and improve performance, addressing this bias is a crucial step in improving GNN models.
* While Degree and Shortest Path Distance (SPD) can somewhat reflect disparate performance, indicating that nodes with higher degrees and shorter SPDs tend to perform better, these trends lack consistency and cannot fully capture the label position bias. For instance, degree bias is not pronounced in the APPNP model as shown in Figure 1, as APPNP can capture the global structure. Moreover, SPD fails to effectively evaluate graphs with relatively low homophily, such as CiteSeer (Ma et al., 2021b). Consequently, there is a need to identify a more reliable metric.
* The Label Proximity Score (LPS) consistently exhibits a strong correlation with performance disparity across all datasets and models. Typically, nodes with higher LPS scores perform better. In addition, nodes with high degrees and low Shortest Path Distance (SPD) often have higher LPS, as previously analyzed. Therefore, LPS is highly correlated with label position bias.
* The Label Propagation, which solely relies on the graph structure, demonstrates a stronger label position bias compared to GNNs as shown in Figure 3. Moreover, the label position bias becomes less noticeable in all models when the labeling rate is high, as there typically exist labeled nodes within the two-hop neighborhood of each test node (detailed in Appendix A). These observations suggest that the label position bias is predominantly influenced by the graph structure. Consequently, this insight motivates us to address the Label Position bias from the perspective of the graph structure.
In conclusion, label position bias is indeed present in GNN models, and the proposed Label Proximity Score accurately and consistently reflects the performance disparity over different sensitive groups for different models across various datasets. Overall, the Label Proximity Score exhibits more consistent and stronger correlations with performance disparity than node degree and shortest path distance, which suggests that LPS serves as a better metric for label position bias. Furthermore, through the analysis of the Label Propagation method and the effects of different labeling rates, we deduce that the label position bias is primarily influenced by the graph structure. This understanding paves the way for mitigating label position bias.
## 3 The Proposed Framework
The studies in Section 2 suggest that Label Position bias is a prevalent issue in GNNs. In other words, nodes far away from labeled nodes tend to yield subpar performance. Such unfairness could be problematic, especially in real-world applications where decisions based on these predictions can have substantial implications. As a result, mitigating label position bias has the potential to enhance the fairness of GNNs in real-world applications, as well as improve overall model performance. Typically, there are two ways to address this problem, i.e., from a model-centric or a data-centric perspective. In this work, we opt for a data-centric perspective for two primary reasons: (1) The wide variety of GNN models in use in real-world scenarios, each with its unique architecture, makes it challenging to design a universal component that can be seamlessly integrated into all GNNs to mitigate the label position bias. Instead, the graph structure is universal and can be applied to any existing GNNs. (2) Our preliminary studies indicate that the graph structure is the primary factor contributing to the label position bias. Therefore, it is more rational to address the bias by learning a label position unbiased graph structure.
However, there are mainly two challenges: (1) How can we define a label position unbiased graph structure, and how can we learn this structure based on the original graph? (2) Given that existing graphs are typically sparse, how can we ensure that the learned data structure is also sparse to avoid excessive memory consumption? In the following subsections, we aim to address these challenges.
### Label Position Unbiased Graph Structure Learning
Based on our preliminary studies, the Label Proximity Score (LPS) can consistently reflect performance disparity across various GNNs and indicate the label position bias. Therefore, to mitigate the label position bias from the structural perspective, our objective is to learn a new graph structure in which each node exhibits similar LPSs. Meanwhile, this learned unbiased graph structure should
maintain certain properties of the original graph. To achieve this goal, we formulate the Label Position Unbiased Structure Learning (LPSL) problem as follows:
\[\begin{split}&\operatorname*{arg\,min}_{\mathbf{B}}\|\mathbf{I}- \mathbf{B}\|_{F}^{2}+\lambda\mathrm{tr}(\mathbf{B}^{\top}\tilde{\mathbf{L}} \mathbf{B})\\ &\text{s.t.}\quad\mathbf{BT1}_{n}=c\mathbf{1}_{n},\end{split} \tag{2}\]
where \(\mathbf{B}\in\mathbb{R}^{n\times n}\) represents the debiased graph structure matrix. \(\mathrm{tr}(\mathbf{B}^{\top}\tilde{\mathbf{L}}\mathbf{B})=\sum_{(v_{i},v_{j}) \in\mathcal{E}}\|\mathbf{B}_{i}/\sqrt{d_{i}}-\mathbf{B}_{j}/\sqrt{d_{j}}\|_{2} ^{2}\) measures the smoothness of the new structure based on the original graph structure. The proximity to identity matrix \(\mathbf{I}\in\mathbb{R}^{n\times n}\) encourages self-loops and avoids trivial over-smoothed structures. \(\lambda\) is a hyperparameter that controls the balance between smoothness and self-loop. \(\mathbf{T}\) is the mask matrix indicating the labeled nodes, \(\mathbf{1_{n}}\) is the all-ones vector, and \(c\) is a hyperparameter serving as the uniform Label Proximity Score for all nodes.
Notably, if we ignore the constraint, then the optimal solution of this primary problem is given by \(\mathbf{B}=(\mathbf{I}+\lambda\tilde{\mathbf{L}})^{-1}=\alpha\big{(}\mathbf{I}-(1-\alpha)\tilde{\mathbf{A}}\big{)}^{-1}\), where \(\alpha=\frac{1}{1+\lambda}\). This solution recovers (up to the scalar \(\alpha\)) the Personalized PageRank (PPR) matrix, which measures pairwise node proximity. Furthermore, the constraint in Eq. (2) ensures that all nodes have the same Label Proximity Score, denoted as \(c\). The constraint encourages fair label proximity scores for all nodes so that the learned graph structure mitigates the label position bias.
The constrained optimization problem in Eq. (2) is a convex optimization problem, and it can be solved by the Lagrange Multiplier method (Boyd and Vandenberghe, 2004). The augmented Lagrange function can be written as:
\[L_{\rho}(\mathbf{B},\mathbf{y})=\|\mathbf{I}-\mathbf{B}\|_{F}^{2}+\lambda \mathrm{tr}(\mathbf{B}^{\top}\tilde{\mathbf{L}}\mathbf{B})+\mathbf{y}^{\top} (\mathbf{BT1}_{n}-c\mathbf{1}_{n})+\frac{\rho}{2}\|\mathbf{BT1}_{n}-c\mathbf{1 }_{n}\|_{2}^{2}, \tag{3}\]
where \(\mathbf{y}\in\mathbb{R}^{n\times 1}\) is the introduced Lagrange multiplier, and \(\rho>0\) represents the augmented Lagrangian parameter. The gradient of \(L_{\rho}(\mathbf{B},\mathbf{y})\) to \(\mathbf{B}\) can be represented as:
\[\frac{\partial L_{\rho}}{\partial\mathbf{B}}=2(\mathbf{B}-\mathbf{I})+2 \lambda\tilde{\mathbf{L}}\mathbf{B}+\mathbf{y}(\mathbf{T1}_{n})^{\top}+\rho( \mathbf{BT1}_{n}-c\mathbf{1}_{n})(\mathbf{T1}_{n})^{\top}. \tag{4}\]
Then, the problem can be solved by dual ascent algorithm (Boyd et al., 2011) as follows:
\[\mathbf{B}^{k+1} =\operatorname*{arg\,min}_{\mathbf{B}}L_{\rho}(\mathbf{B},\mathbf{y}^{k}),\] \[\mathbf{y}^{k+1} =\mathbf{y}^{k}+\rho(\mathbf{B}^{k+1}\mathbf{T1}_{n}-c\mathbf{1}_{n}),\]
where \(k\) is the current optimization step, and \(\mathbf{B}^{k+1}\) can be obtained by multiple steps of gradient descent using the gradient in Eq. (4).
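A dense NumPy sketch of this procedure on a toy path graph is shown below; hyperparameters are illustrative choices of our own, and a scalable variant would use the sparse \(\ell_{1}\)-regularized formulation described later:

```
# Dual ascent for Eq. (2)-(4): inner gradient steps on B, ascent step on y.
import numpy as np

n, lam, c, rho, lr = 6, 1.0, 0.5, 1.0, 0.05
A = np.eye(n, k=1) + np.eye(n, k=-1)              # path-graph adjacency
d = A.sum(1)
L_norm = np.eye(n) - A / np.sqrt(np.outer(d, d))  # normalized Laplacian
T1 = np.zeros(n)
T1[[0, 1]] = 1.0                                  # T @ 1_n: nodes 0, 1 labeled

B, y = np.eye(n), np.zeros(n)
for _ in range(2000):
    for _ in range(5):                            # inner updates of B, Eq. (4)
        r = B @ T1 - c                            # constraint residual
        grad = (2 * (B - np.eye(n)) + 2 * lam * L_norm @ B
                + np.outer(y, T1) + rho * np.outer(r, T1))
        B -= lr * grad
    y += rho * (B @ T1 - c)                       # dual ascent step on y
print(np.round(B @ T1, 3))                        # each entry approaches c = 0.5
```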
### Understandings
In this subsection, we provide the understanding and interpretation of our proposed LPSL, establishing its connections with the message passing in GNNs.
_Remark 3.1_.: The feature aggregation using the learned graph structure \(\mathbf{B}\) directly as a propagation matrix, i.e., \(\mathbf{F}=\mathbf{B}\mathbf{X}\), is equivalent to applying the message passing in GNNs using the original graph if \(\mathbf{B}\) is the approximate or exact solution to the primary problem defined in Eq. (2) without the constraint.
The detailed proof can be found in Appendix B. Remark 3.1 suggests that we can directly substitute the propagation matrix in GNNs with the learned structure \(\mathbf{B}\). The GNNs are trained based on the labeled nodes, and the labeled nodes would influence the prediction of unlabeled nodes because of the message-passing scheme. Following the definition in (Xu et al., 2018), the influence of node \(j\) on node \(i\) can be represented by \(I_{i}(j)=sum\left[\frac{\partial\mathbf{h}_{i}}{\partial\mathbf{x}_{j}}\right]\), where \(\mathbf{h}_{i}\) is the representation of node \(i\), \(\mathbf{x}_{j}\) is the input feature of node \(j\), and \(\left[\frac{\partial\mathbf{h}_{i}}{\partial\mathbf{x}_{j}}\right]\) represents the Jacobian matrix. Afterward, we have the following Proposition based on the influence scores:
**Proposition 3.1**.: _The total influence score from all labeled nodes to any unlabeled node \(i\) will be equal, i.e., \(\sum_{j\in\mathcal{V}_{L}}I_{i}(j)=c\), when using the unbiased graph structure \(\mathbf{B}\) obtained from the optimization problem in Eq. (2) as the propagation matrix in GNNs._
The proof can be found in Appendix B. Proposition 3.1 suggests that by using the unbiased graph structure for feature propagation, each node can receive an equivalent influence from all the labeled nodes, thereby mitigating the label position bias issue.
### \(\ell_{1}\)-regularized Label Position Unbiased Sparse Structure Learning
One challenge of solving the graph structure learning problem in Eq. (2) is that it could result in a dense structure matrix \(\mathbf{B}\in\mathbb{R}^{n\times n}\). This is a memory-intensive outcome, especially when the number of nodes \(n\) is large. Furthermore, applying this dense matrix to GNNs can be time-consuming for downstream tasks, which makes it less practical for real-world applications. To make the learned graph structure sparse, we propose the following \(\ell_{1}\)-regularized Label Position Unbiased Sparse Structure Learning optimization problem:
\[\begin{split}\operatorname*{arg\,min}_{\mathbf{B}}\|\mathbf{I}-\mathbf{B}\|_{F}^{2}+\lambda\mathrm{tr}(\mathbf{B}^{\top}\tilde{\mathbf{L}}\mathbf{B})+\beta\|\mathbf{B}\|_{1}\\ \text{s.t.}\quad\mathbf{B}\mathbf{T}\mathbf{1}_{n}=c\mathbf{1}_{n},\end{split} \tag{5}\]
where \(\|\mathbf{B}\|_{1}\) represents the \(\ell_{1}\) regularization that encourages zero values in \(\mathbf{B}\), and \(\beta>0\) is a hyperparameter that controls the sparsity of \(\mathbf{B}\). The primary problem in Eq. (5) is proven to have a strong localization property that guarantees sparsity (Ha et al., 2021; Hu, 2020). The problem in Eq. (5) can also be solved by the Lagrange Multiplier method. However, when the number of nodes \(n\) is large, solving this problem using conventional gradient descent methods becomes computationally challenging. Therefore, we propose to solve the problem in Eq. (5) efficiently by the Block Coordinate Descent (BCD) method (Tseng, 2001) in conjunction with a proximal gradient step, which handles the \(\ell_{1}\) regularization. Specifically, we split \(\mathbf{B}\) into column blocks, and \(\mathbf{B}_{\cdot:j}\) represents the \(j\)-th block. The gradient of \(L_{\rho}\) with respect to \(\mathbf{B}_{\cdot:j}\) can be written as:
\[\frac{\partial L_{\rho}}{\partial\mathbf{B}_{\cdot:j}}=2(\mathbf{B}_{\cdot:j}- \mathbf{I}_{\cdot:j})+2\lambda\tilde{\mathbf{L}}\mathbf{B}_{\cdot:j}+\mathbf{y }(\mathbf{T}\mathbf{1}_{n})_{j}^{\top}+\rho(\mathbf{B}\mathbf{T}\mathbf{1}_{n }-c\mathbf{1}_{n})(\mathbf{T}\mathbf{1}_{n})_{j}^{\top}, \tag{6}\]
where \((\mathbf{T}\mathbf{1}_{n})_{j}\in\mathbb{R}^{d\times 1}\) is the corresponding block part with block size \(d\). After updating the current block \(\mathbf{B}_{:,j}\), we apply a soft thresholding operator \(S_{\beta/\rho}(\cdot)\) based on the proximal mapping. The full algorithm is detailed in Algorithm 1. Notably, lines 6-8 handle the block updates, line 9 performs the soft thresholding operation, and line 11 updates the Lagrange multiplier \(\mathbf{y}\) through a dual ascent step.
```
1: Input: Laplacian matrix \(\tilde{\mathbf{L}}\), label mask matrix \(\mathbf{T}\), hyperparameters \(\lambda,c,\beta,\rho\), learning rate \(\gamma\)
2: Output: Label position unbiased graph structure \(\mathbf{B}\)
3: Initialization: \(\mathbf{B}^{0}=\mathbf{I}\) and \(\mathbf{y}^{0}=\mathbf{0}\)
4: while not converged do
5:   for each block \(j\) do
6:     for \(i=0\) to update steps \(t\) do
7:       \(\mathbf{B}_{:,j}=\mathbf{B}_{:,j}-\gamma\cdot\frac{\partial L_{\rho}}{\partial\mathbf{B}_{:,j}}\)
8:     end for
9:     \(\mathbf{B}_{:,j}=S_{\beta/\rho}(\mathbf{B}_{:,j})\)
10:   end for
11:   \(\mathbf{y}=\mathbf{y}+\rho(\mathbf{B}\mathbf{T}\mathbf{1}_{n}-c\mathbf{1}_{n})\)
12: end while
13: return \(\mathbf{B}\)
```
**Algorithm 1** Algorithm of LPSL
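For concreteness, a minimal NumPy sketch of Algorithm 1 is given below. This is an illustrative reimplementation rather than the authors' released code: the block size, the fixed iteration counts standing in for the convergence test, and all hyperparameter defaults are placeholder assumptions.

```
import numpy as np

def soft_threshold(M, tau):
    # Proximal operator of the l1 norm: shrink entries toward zero.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lpsl_bcd(L_tilde, T, lam=4.0, c=1.0, beta=1e-3, rho=1.0,
             gamma=1e-2, block_size=64, inner_steps=5, outer_iters=50):
    n = L_tilde.shape[0]
    I = np.eye(n)
    B = I.copy()                        # B^0 = I
    y = np.zeros(n)                     # Lagrange multiplier y^0 = 0
    t1 = T @ np.ones(n)                 # the vector T 1_n from the constraint
    for _ in range(outer_iters):        # stands in for "while not converged"
        for start in range(0, n, block_size):
            j = slice(start, min(start + block_size, n))
            for _ in range(inner_steps):
                # Eq. (6): gradient of L_rho with respect to the block B[:, j]
                residual = B @ t1 - c * np.ones(n)
                grad = (2.0 * (B[:, j] - I[:, j])
                        + 2.0 * lam * (L_tilde @ B[:, j])
                        + np.outer(y, t1[j])
                        + rho * np.outer(residual, t1[j]))
                B[:, j] = B[:, j] - gamma * grad
            B[:, j] = soft_threshold(B[:, j], beta / rho)   # proximal step
        y = y + rho * (B @ t1 - c * np.ones(n))             # dual ascent
    return B
```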
### The Model Architecture
The proposed LPSL learns an unbiased graph structure with respect to the labeled nodes. Therefore, the learned graph structure can be applied to various GNN models to mitigate the Label Position bias. In this work, we test LPSL on two widely used GNN models, i.e., GCN (Kipf and Welling, 2016) and APPNP (Gasteiger et al., 2018). For the GCN model, each layer can be represented by:
\[\mathbf{H}^{l+1}=\sigma\left(\mathbf{B}_{\lambda}\mathbf{H}^{l}\mathbf{W}^{l} \right),\]
where \(\mathbf{H}^{0}=\mathbf{X}\), \(\sigma\) is the non-linear activation function, \(\mathbf{B}_{\lambda}\) is the unbiased structure with parameter \(\lambda\), and \(\mathbf{W}^{l}\) is the weight matrix in the \(l\)-th layer. We refer to this model as LPSL\({}_{\text{GCN}}\). For the APPNP model, we directly use the learned \(\mathbf{B}_{\lambda}\) as the propagation matrix, and the prediction can be written as:
\[\mathbf{Y}_{\text{pred}}=\mathbf{B}_{\lambda}f_{\theta}(\mathbf{X}),\]
where \(f_{\theta}(\cdot)\) is any machine learning model parameterized by the learnable parameters \(\theta\). We name this model LPSL\({}_{\text{APPNP}}\). The parameter \(\lambda\) provides high flexibility when applying \(\mathbf{B}_{\lambda}\) to different GNN architectures. For decoupled GNNs such as APPNP, which propagate only once, a large \(\lambda\) is necessary to encode the global graph structure, yielding a denser \(\mathbf{B}_{\lambda}\). In contrast, for coupled GNNs such as GCN, which apply propagation multiple times, a smaller \(\lambda\) can be used to encode a more local structure with higher sparsity.
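As a schematic illustration of how the learned structure plugs into the two backbones, the following NumPy sketch implements both forward passes. The layer sizes, the random inputs, and the row-normalized placeholder for \(\mathbf{B}_{\lambda}\) are illustrative assumptions only; in practice \(\mathbf{B}_{\lambda}\) comes from Algorithm 1 and the weights are trained by backpropagation.

```
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def lpsl_gcn_forward(B, X, weights):
    # Coupled backbone: H^{l+1} = sigma(B H^l W^l), with B replacing the
    # usual normalized adjacency in every layer.
    H = X
    for l, W in enumerate(weights):
        H = B @ H @ W
        if l < len(weights) - 1:        # no nonlinearity on the output layer
            H = relu(H)
    return H

def lpsl_appnp_forward(B, X, f_theta):
    # Decoupled backbone: Y_pred = B f_theta(X), a single propagation step.
    return B @ f_theta(X)

# Hypothetical usage on random data.
rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 100, 16, 32, 7
B = rng.random((n, n))
B = B / B.sum(axis=1, keepdims=True)    # placeholder for a learned B_lambda
X = rng.normal(size=(n, d_in))
Ws = [0.1 * rng.normal(size=(d_in, d_hid)), 0.1 * rng.normal(size=(d_hid, d_out))]
logits = lpsl_gcn_forward(B, X, Ws)
```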
## 4 Experiment
In this section, we conduct comprehensive experiments to verify the effectiveness of the proposed LPSL. In particular, we try to answer the following questions:
* **Q1:** Can the proposed LPSL improve the performance of different GNNs? (Section 4.2)
* **Q2:** Can the proposed LPSL mitigate the label position bias? (Section 4.3)
* **Q3:** How do different hyperparameters affect the proposed LPSL? (Section 4.4)
### Experimental Settings
**Datasets.** We conduct experiments on 8 real-world graph datasets for the semi-supervised node classification task, including three citation datasets, i.e., Cora, Citeseer, and Pubmed (Sen et al., 2008), two co-authorship datasets, i.e., Coauthor CS and Coauthor Physics, two co-purchase datasets, i.e., Amazon Computers and Amazon Photo (Shchur et al., 2018), and one OGB dataset, i.e., ogbn-arxiv (Hu et al., 2020). The details about these datasets are shown in Appendix C.
We employ the fixed data split for the ogbn-arxiv dataset, while using ten random data splits for all other datasets to ensure more reliable results (Liu et al., 2021). Additionally, for the Cora, CiteSeer, and PubMed datasets, we experiment with various labeling rates: low labeling rates with 5, 10, and 20 labeled nodes per class, and high labeling rates with 60% labeled nodes per class. Each model is run three times for every data split, and we report the average performance along with the standard deviation.
**Baselines.** To the best of our knowledge, there are no previous works that aim to address the label position bias. In this work, we select three GNNs, namely, GCN (Kipf and Welling, 2016), GAT (Velickovic et al., 2017), and APPNP (Gasteiger et al., 2018), two Non-GNNs, MLP and Label Propagation (Zhou et al., 2003), as baselines. Furthermore, we also include GRADE (Wang et al., 2022), a method designed to mitigate degree bias. Notably, SRGNN (Zhu et al., 2021) demonstrates that if labeled nodes are gathered locally, it could lead to an issue of feature distribution shift. SRGNN aims to mitigate the feature distribution shift issue and is also included as a baseline.
**Hyperparameter Settings.** We follow the best hyperparameter settings in the original papers for all baselines. For the proposed LPSL\({}_{\text{GCN}}\), we set \(\lambda\) in the range [1, 8]. For LPSL\({}_{\text{APPNP}}\), we set \(\lambda\) in the range [8, 15]. For both methods, \(c\) is set in the range [0.5, 1.5]. We fix the learning rate to 0.01, dropout to 0.5 or 0.8, the hidden dimension size to 64, and weight decay to 0.0005, except for the ogbn-arxiv dataset. The Adam optimizer (Kingma and Ba, 2014) is used in all experiments. More details about the hyperparameter settings for all methods can be found in Appendix D.
### Performance Comparison on Benchmark Datasets
In this subsection, we test the unbiased graph structure learned by the proposed LPSL on both the GCN and APPNP models. We then compare these results with seven baseline methods across all eight datasets. The primary results are presented in Table 1. Due to space limitations, we have included the results from other baselines in Appendix E. From these results, we can make several key observations:
* The integration of our proposed LPSL into both GCN and APPNP models consistently improves their performance on almost all datasets. This indicates that a label position unbiased graph structure can significantly aid semi-supervised node classification tasks.
* Concerning the different labeling rates for the first three datasets, our proposed LPSL shows greater performance improvement with a low labeling rate. This aligns with our preliminary study that label position bias is more pronounced when the labeling rate is low.
* SRGNN, designed to address the feature distribution shift issue, does not perform well on most datasets under random splits, where labeled nodes are not locally clustered. Only when the labeling rate is very low can SRGNN outperform GCN. Hence, the label position bias cannot be solved solely by addressing the feature distribution shift.
* The GRADE method, aimed at mitigating the degree-bias issue, also fails to improve overall performance with randomly split datasets.
### Evaluating Bias Mitigation Performance
In this subsection, we aim to investigate whether the proposed LPSL can mitigate the label position bias. We employ all three aforementioned bias metrics, namely label proximity score, degree, and shortest path distance, on Cora and CiteSeer datasets. We first group test nodes into different sensitive groups according to the metrics, and then use three representative group bias measurements - Weighted Demographic Parity (WDP), Weighted Standard Deviation (WSD), and Weighted Coefficient of Variation (WCV) - to quantify the bias. These are defined as follows:
\[\text{WDP}=\frac{\sum_{i=1}^{D}N_{i}\cdot|A_{i}-A_{\text{avg}}|}{N_{\text{ total}}},\text{WSD}=\sqrt{\frac{1}{N_{\text{total}}}\sum_{i=1}^{D}N_{i}\cdot(A_{i}-A_ {\text{avg}})^{2}},\text{WCV}=\frac{\text{WSD}}{A_{\text{avg}}},\]
where \(D\) is the number of groups, \(N_{i}\) is the node count of group \(i\), \(A_{i}\) is the accuracy of group \(i\), \(A_{\text{avg}}\) is the weighted average accuracy of all groups, i.e., the overall accuracy, and \(N_{\text{total}}\) is the total number of nodes. We choose six representative models, i.e., Label Propagation (LP), GRADE, GCN, APPNP, LPSL\({}_{\text{GCN}}\), and LPSL\({}_{\text{APPNP}}\), in this experiment. The results for the label proximity score, degree, and shortest path distance on the Cora and CiteSeer datasets are shown in Tables 2, 3, and 4, respectively (a code sketch of these measurements is given after the observations below). It can be observed from the tables:
* The Label Propagation method, which solely utilizes the graph structure information, exhibits the most significant label position bias across all measurements and datasets. This evidence suggests that label position bias primarily stems from the biased graph structure, thereby validating our strategy of learning an unbiased graph structure with LPSL.
* The proposed LPSL not only enhances the classification accuracy of the backbone models, but also alleviates the bias concerning label proximity score, degree, and shortest path distance.
* The GRADE method, designed to mitigate degree bias, does exhibit a lesser degree bias than GCN and APPNP. However, it still falls short when compared to the proposed LPSL. Furthermore, GRADE may inadvertently heighten the bias evaluated by other metrics. For instance, it significantly increases the label proximity score bias on the CiteSeer dataset.
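The three group bias measurements above are straightforward to compute once test nodes have been bucketed; the sketch below, with made-up group accuracies and sizes, is intended only to pin down the formulas.

```
import numpy as np

def group_bias_metrics(group_acc, group_sizes):
    # WDP, WSD and WCV over D sensitive groups, as defined above:
    # group_acc holds the accuracies A_i, group_sizes the node counts N_i.
    group_acc = np.asarray(group_acc, dtype=float)
    group_sizes = np.asarray(group_sizes, dtype=float)
    n_total = group_sizes.sum()
    a_avg = (group_sizes * group_acc).sum() / n_total    # overall accuracy
    wdp = (group_sizes * np.abs(group_acc - a_avg)).sum() / n_total
    wsd = np.sqrt((group_sizes * (group_acc - a_avg) ** 2).sum() / n_total)
    wcv = wsd / a_avg
    return wdp, wsd, wcv

# Example: three groups of test nodes bucketed by label proximity score.
print(group_bias_metrics([0.85, 0.78, 0.70], [400, 300, 300]))
```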
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline Dataset & Label Rate & GCN & APPNP & GRADE & SRGNN & LPSL\({}_{\text{GCN}}\) & LPSL\({}_{\text{APPNP}}\) \\ \hline \multirow{4}{*}{Cora} & 5 & \(70.68\pm 2.17\) & \(75.86\pm 2.34\) & \(69.51\pm 6.79\) & \(70.77\pm 1.82\) & \(76.58\pm 2.37\) & \(\mathbf{77.24\pm 2.18}\) \\ & 10 & \(76.50\pm 1.42\) & \(80.29\pm 1.00\) & \(74.95\pm 2.46\) & \(75.42\pm 1.57\) & \(80.39\pm 1.17\) & \(\mathbf{81.59\pm 0.98}\) \\ & 20 & \(79.41\pm 1.30\) & \(82.34\pm 0.67\) & \(77.41\pm 1.49\) & \(78.42\pm 1.75\) & \(82.74\pm 1.01\) & \(\mathbf{83.24\pm 0.75}\) \\ & 60\% & \(88.60\pm 1.19\) & \(88.49\pm 1.28\) & \(86.84\pm 0.99\) & \(87.17\pm 0.95\) & \(\mathbf{88.75\pm 1.21}\) & \(88.62\pm 1.69\) \\ \hline \multirow{4}{*}{CiteSeer} & 5 & \(61.27\pm 3.85\) & \(63.92\pm 3.39\) & \(63.03\pm 3.61\) & \(64.84\pm 3.41\) & \(65.65\pm 2.47\) & \(\mathbf{65.70\pm 2.18}\) \\ & 10 & \(66.28\pm 2.14\) & \(67.57\pm 2.05\) & \(64.20\pm 3.23\) & \(67.83\pm 2.19\) & \(67.73\pm 2.57\) & \(\mathbf{68.76\pm 1.77}\) \\ & 20 & \(69.60\pm 1.67\) & \(70.85\pm 1.45\) & \(67.50\pm 1.76\) & \(69.13\pm 1.99\) & \(70.73\pm 1.32\) & \(\mathbf{71.25\pm 1.14}\) \\ & 60\% & \(76.88\pm 1.78\) & \(77.42\pm 1.47\) & \(74.00\pm 1.87\) & \(74.57\pm 1.57\) & \(77.18\pm 1.64\) & \(\mathbf{77.56\pm 1.44}\) \\ \hline \multirow{4}{*}{PubMed} & 5 & \(69.76\pm 6.46\) & \(72.68\pm 5.68\) & \(66.90\pm 6.49\) & \(69.38\pm 6.48\) & \(73.46\pm 4.64\) & \(\mathbf{73.57\pm 5.30}\) \\ & 10 & \(72.79\pm 3.58\) & \(75.53\pm 3.85\) & \(73.31\pm 3.75\) & \(72.69\pm 3.49\) & \(75.67\pm 4.42\) & \(\mathbf{76.18\pm 4.05}\) \\ & 20 & \(77.43\pm 2.66\) & \(76.83\pm 2.11\) & \(75.12\pm 2.37\) & \(77.09\pm 1.68\) & \(78.75\pm 2.45\) & \(\mathbf{79.26\pm 2.32}\) \\ & 60\% & \(\mathbf{88.48\pm 0.46}\) & \(87.56\pm 0.52\) & \(86.90\pm 0.46\) & \(88.32\pm 0.55\) & \(87.75\pm 0.57\) & \(87.96\pm 0.57\) \\ \hline CS & 20 & \(91.73\pm 0.49\) & \(92.38\pm 0.38\) & \(89.43\pm 0.67\) & \(89.43\pm 0.67\) & \(91.94\pm 0.54\) & \(\mathbf{92.44\pm 0.36}\) \\ \hline Physics & 20 & \(93.29\pm 0.80\) & \(93.93\pm 0.67\) & \(91.44\pm 1.41\) & \(93.16\pm 0.64\) & \(93.56\pm 0.51\) & \(\mathbf{93.65\pm 0.50}\) \\ \hline Computers & 20 & \(79.17\pm 1.92\) & \(79.07\pm 2.34\) & \(79.01\pm 2.36\) & \(78.54\pm 2.15\) & \(\mathbf{80.05\pm 2.92}\) & \(79.58\pm 2.31\) \\ \hline Photo & 20 & \(89.94\pm 1.22\) & \(90.87\pm 1.14\) & \(90.17\pm 0.93\) & \(89.36\pm 1.02\) & \(90.85\pm 1.16\) & \(\mathbf{90.93\pm 1.40}\) \\ \hline ogbn-arxiv & 54\% & \(71.91\pm 0.15\) & \(71.61\pm 0.30\) & OOM & \(68.01\pm 0.35\) & \(\mathbf{72.04\pm 0.12}\) & \(69.20\pm 0.26\) \\ \hline \end{tabular}
\end{table}
Table 1: Semi-supervised node classification accuracy (%) on benchmark datasets.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline Dataset & \multicolumn{4}{c|}{Cora} & \multicolumn{4}{c}{CiteSeer} \\ \hline Method & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) \\ \hline LP & 0.1079 & 0.1378 & 0.1941 & 0.2282 & 0.2336 & 0.4692 \\ GRADE & 0.0372 & 0.0489 & 0.0615 & 0.0376 & 0.0467 & 0.0658 \\ GCN & 0.0494 & 0.0618 & 0.0758 & 0.0233 & 0.0376 & 0.0524 \\ LPSL\({}_{\text{GCN}}\) & **0.0361** & **0.0438** & **0.0518** & **0.0229** & **0.0346** & 0.0476 \\ APPNP & 0.0497 & 0.0616 & 0.0732 & 0.0344 & 0.0426 & 0.0594 \\ LPSL\({}_{\text{APPNP}}\) & 0.0390 & 0.0476 & 0.0562 & 0.0275 & 0.0349 & **0.0474** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of Methods in Addressing Label Proximity Score Bias.
### Ablation Study
In this subsection, we explore the impact of different hyperparameters, specifically the smoothing term \(\lambda\) and the constraint \(c\), on our model. We conducted experiments on the Cora and CiteSeer datasets using ten random data splits with 20 labels per class. The accuracy of different \(\lambda\) values for LPSL\({}_{\text{APPNP}}\) and LPSL\({}_{\text{GCN}}\) on the Cora and CiteSeer datasets is illustrated in Figure 4.
From our analysis, we note that the proposed LPSL is not highly sensitive to the \(\lambda\) within the selected regions. Moreover, for the APPNP model, the best \(\lambda\) is higher than that for the GCN model, which aligns with our discussion in Section 3 that the decoupled GNNs require a larger \(\lambda\) to encode the global graph structure. The results for hyperparameter \(c\) can be found in Appendix F with similar observations.
## 5 Related Work
Graph Neural Networks (GNNs) serve as an effective framework for representing graph-structured data, primarily employing two operators: feature transformation and propagation. The ordering of these operators classifies most GNNs into two categories: Coupled and Decoupled GNNs. Coupled GNNs, such as GCN (Kipf and Welling, 2016), GraphSAGE (Hamilton et al., 2017), and GAT (Velickovic et al., 2017), entwine feature transformation and propagation within each layer. In contrast, recent models like APPNP (Gasteiger et al., 2018) represent Decoupled GNNs (Liu et al., 2021, 2020, Zhou et al., 2021) that separate transformation and propagation. While Graph Neural Networks (GNNs) have achieved notable success across a range of domains (Wu et al., 2020), they often harbor various biases tied to node features and graph topology (Dai et al., 2022). For example, GNNs may generate predictions skewed by sensitive node features (Dai and Wang, 2021; Agarwal et al., 2021), leading to potential unfairness in diverse tasks such as recommendations (Buyl and De Bie, 2020) and loan fraud detection (Xu et al., 2021). Numerous studies have proposed different methods to address feature bias, including adversarial training (Dai and Wang, 2021; Dong et al.,
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline DataSet & \multicolumn{3}{c|}{Cora} & \multicolumn{3}{c}{CiteSeer} \\ \hline Method & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) \\ \hline LP & 0.0562 & 0.0632 & 0.0841 & 0.0508 & 0.0735 & 0.109 \\ GRADE & 0.0292 & 0.0369 & 0.0459 & 0.0282 & 0.0517 & 0.0707 \\ GCN & 0.0237 & 0.0444 & 0.0533 & 0.0296 & 0.0553 & 0.0752 \\ LPSL\({}_{\text{GCN}}\) & **0.0150** & **0.0248** & **0.0289** & **0.0246** & 0.0526 & 0.0714 \\ APPNP & 0.0218 & 0.0316 & 0.0369 & 0.0321 & 0.0495 & 0.0668 \\ LPSL\({}_{\text{APPNP}}\) & 0.0166 & 0.0253 & 0.0295 & 0.0265 & **0.0490** & **0.0654** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of Methods in Addressing Shortest Path Distance Bias.
Figure 4: The accuracy of different \(\lambda\) for LPSL\({}_{\text{APPNP}}\) and LPSL\({}_{\text{GCN}}\) on Cora and CiteSeer datasets.
\begin{table}
\begin{tabular}{c|c c|c c c c} \hline Dataset & \multicolumn{3}{c|}{Cora} & \multicolumn{3}{c}{CiteSeer} \\ \hline Method & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) & WDP \(\downarrow\) & WSD \(\downarrow\) & WCV \(\downarrow\) \\ \hline LP & 0.0893 & 0.1019 & 0.1447 & 0.1202 & 0.1367 & 0.2773 \\ GRADE & 0.0386 & 0.0471 & 0.0594 & 0.0342 & 0.0529 & 0.0744 \\ GCN & 0.0503 & 0.0566 & 0.0696 & 0.0466 & 0.0643 & 0.0901 \\ LPSL\({}_{\text{GCN}}\) & 0.0407 & 0.0468 & 0.0554 & 0.0378 & 0.0538 & 0.0742 \\ APPNP & 0.0408 & 0.0442 & 0.0527 & 0.0499 & 0.0688 & 0.0964 \\ LPSL\({}_{\text{APPNP}}\) & **0.0349** & **0.0395** & **0.0467** & **0.0316** & **0.0487** & **0.0665** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of Methods in Addressing Degree Bias.
2022, Masrour et al., 2020), and fairness constraints (Agarwal et al., 2021; Dai and Wang, 2022; Kang et al., 2020). Structural bias is another significant concern, where low-degree nodes are more likely to be falsely predicted by GNNs (Tang et al., 2020). Recently, there are several works aimed to mitigate the degree bias issue (Kang et al., 2022; Liu et al., 2023; Liang et al., 2022). Distinct from these previous studies, our work identifies a new form of bias - label position bias, which is prevalent in GNNs. To address this, we propose a novel method, LPSL, specifically designed to alleviate the label position bias.
## 6 Conclusion and Limitation
In this study, we shed light on a previously unexplored bias in GNNs, the label position bias, which suggests that nodes closer to labeled nodes typically yield superior performance. To quantify this bias, we introduce a new metric, the Label Proximity Score, which proves to be a more intrinsic measure. To combat this prevalent issue, we propose a novel optimization framework, LPSL, to learn an unbiased graph structure. Our extensive experimental evaluation shows that LPSL not only outperforms standard methods but also significantly alleviates the label position bias in GNNs. In our current work, we address the label position bias only from a structure learning perspective. Future research could incorporate feature information, which might lead to improved performance. Besides, we have primarily examined homophily graphs. It would be interesting to investigate how label position bias affects heterophily graphs. We hope this work will stimulate further research and development of methods aimed at enhancing label position fairness in GNNs.
|
2304.02911 | Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks | Unraveling the reasons behind the remarkable success and exceptional
generalization capabilities of deep neural networks presents a formidable
challenge. Recent insights from random matrix theory, specifically those
concerning the spectral analysis of weight matrices in deep neural networks,
offer valuable clues to address this issue. A key finding indicates that the
generalization performance of a neural network is associated with the degree of
heavy tails in the spectrum of its weight matrices. To capitalize on this
discovery, we introduce a novel regularization technique, termed Heavy-Tailed
Regularization, which explicitly promotes a more heavy-tailed spectrum in the
weight matrix through regularization. Firstly, we employ the Weighted Alpha and
Stable Rank as penalty terms, both of which are differentiable, enabling the
direct calculation of their gradients. To circumvent over-regularization, we
introduce two variations of the penalty function. Then, adopting a Bayesian
statistics perspective and leveraging knowledge from random matrices, we
develop two novel heavy-tailed regularization methods, utilizing Power-law
distribution and Frechet distribution as priors for the global spectrum and
maximum eigenvalues, respectively. We empirically show that heavy-tailed
regularization outperforms conventional regularization techniques in terms of
generalization performance. | Xuanzhe Xiao, Zeng Li, Chuanlong Xie, Fengwei Zhou | 2023-04-06T07:50:14Z | http://arxiv.org/abs/2304.02911v2 | # Heavy-Tailed Regularization of Weight Matrices
###### Abstract
Unraveling the reasons behind the remarkable success and exceptional generalization capabilities of deep neural networks presents a formidable challenge. Recent insights from random matrix theory, specifically those concerning the spectral analysis of weight matrices in deep neural networks, offer valuable clues to address this issue. A key finding indicates that the generalization performance of a neural network is associated with the degree of heavy tails in the spectrum of its weight matrices. To capitalize on this discovery, we introduce a novel regularization technique, termed **Heavy-Tailed Regularization**, which explicitly promotes a more heavy-tailed spectrum in the weight matrix through regularization. Firstly, we employ the Weighted Alpha and Stable Rank as penalty terms, both of which are differentiable, enabling the direct calculation of their gradients. To circumvent over-regularization, we introduce two variations of the penalty function. Then, adopting a Bayesian statistics perspective and leveraging knowledge from random matrices, we develop two novel heavy-tailed regularization methods, utilizing Power-law distribution and Frechet distribution as priors for the global spectrum and maximum eigenvalues, respectively. We empirically show that heavy-tailed regularization outperforms conventional regularization techniques in terms of generalization performance.
Keywords: Heavy-Tailed Regularization · Deep Neural Network · Random Matrix Theory.
## 1 Introduction
Deep neural networks (DNN) have shown remarkable performance in recent years, achieving unprecedented success in various fields such as computer vision, natural language processing, and recommendation systems [6, 10, 11, 25]. However, there is still a lack of clear understanding of how neural networks generalize.
Efforts to construct a generalization framework for DNNs have incorporated various mathematical tools from conventional learning theory [3, 4, 5, 24]. Nevertheless, the majority of these approaches have been found to exhibit certain limitations. For example, the VC dimension and Rademacher complexity have been deemed inadequate in offering a satisfactory explanation for the generalization performance of DNNs [26]. The uniform-convergence-based generalization bounds may fail to elucidate generalization in deep learning due to their vacuous generalization guarantee [20].
One might consider that the weight matrices of a DNN serve as a representation of its generalization capabilities for the following reasons: from a theoretical standpoint, the parameters contained within the weight matrices are intricately connected to the model's output space, input data, optimization algorithm, etc.; additionally, in practical scenarios, access to trained models often comes with limited information regarding training and testing data, which can be attributed to the highly compartmentalized nature of the industry. Recently, Martin and Mahoney [16] introduced a perspective grounded in random matrix theory (RMT) to elucidate the generalization behavior of deep neural networks. They studied the empirical spectral distribution (ESD) of weight matrices in deep neural networks and observed _5+1 phases of regularization_: throughout the training process, the ESDs of the weight matrices initially conform well to the Marchenko-Pastur (MP) law, gradually deviate from it, and ultimately approach a Heavy-Tailed (HT) distribution [14, 16]. This regularization phenomenon is referred to as _Implicit Self-Regularization_. Furthermore, this theory suggests that large, well-trained DNN architectures should exhibit Heavy-Tailed Self-Regularization, meaning that the spectra of their weight matrices can be well-fitted by a heavy-tailed distribution. Building on Martin and Mahoney's work, Meng and Yao [19] discovered that the complexity of the classification problem could influence the weight matrix spectra of DNNs. These theories offer a novel perspective for exploring the generalization of DNNs.
In addition to the aforementioned studies, several works have advocated the positive impact of heavy tails of weight matrices on the generalization of neural networks from the perspective of stochastic gradient descent (SGD). Zhou et al. [27] pointed out that the time required for both SGD and Adam to escape sharp minima is negatively related to the heavy-tailedness of gradient noise. They further explained that the superior generalization of SGD compared to Adam in deep learning is due to Adam's gradient calculation being smoothed by the exponential moving average, resulting in lighter gradient noise tails compared to SGD. Hodgkinson et al. [12] presented a similar finding, demonstrating that, within a stochastic optimization problem, multiplicative noise and heavy-tailed stationary behavior enhance the capacity for basin hopping during the exploratory phase of learning, in contrast to additive noise and light-tailed stationary behavior. Simsekli et al. [22] approximated the trajectories of SGD using a Feller process and derived a generalization bound controlled by the Hausdorff dimension, which is associated with the tail properties of the process. Their results suggest that processes with heavier tails should achieve better generalization. Barsbey et al. [2]
argued that the heavy-tailed behavior present in the weight matrices of a neural network contributes to network compressibility, thereby enhancing the network's generalization capabilities. Taken together, these results suggest that the heavy tail of weight matrices is a fundamental factor for the improved generalization of DNNs under SGD.
An intuitive notion arising from these theories is that the presence of heavy tails of weight matrices during DNN training is crucial for achieving favorable generalization performance. However, previous studies provide a limited understanding of how to enhance the heavy-tailed behavior in neural networks. In this study, our focus lies in regularizing DNNs to facilitate more rapid and pronounced heavy-tailed behavior. To this end, we introduce an explicit regularization technique called **Heavy-Tailed Regularization**. We empirically demonstrate that models trained with heavy-tailed regularization display superior generalization performance compared to those trained with conventional methods.
#### 1.0.1 Contribution of this paper.
1. We propose a regularization framework termed _Heavy-Tailed Regularization_. This proposal is motivated by prior research, which has shown that the heavy-tailed behavior of weight matrices in neural networks can improve their generalization capabilities.
2. We develop four distinct heavy-tailed regularization methods, including (a) Weighted Alpha regularization, (b) Stable Rank regularization, (c) Power-law Prior, and (d) Frechet Prior. The first two methods are inspired by existing complexity measures for neural networks, while the latter two are informed by insights from random matrix theory (RMT) and Bayesian statistics.
3. We compare with conventional methods on widely used datasets, including KMNIST and CIFAR10. Numerical experiments show that the heavy-tailed regularization approaches are effective and outperform competing conventional regularization methods.
## 2 Heavy-Tailed Regularization
### Definition
Consider a DNN \(f_{\mathbf{W}}:\mathcal{X}\rightarrow\mathcal{Y}\) with \(L\) layers and with weight matrices of its fully connected layers \(\mathbf{W}=\left\{\mathbf{W}_{1},\mathbf{W}_{2},\cdots,\mathbf{W}_{L}\right\}\) and data sample set \(S=\left\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right)\cdots,\left(x_{N},y _{N}\right)\right\}\) with sample size \(N\).
Denote \(l(f(x),y)\) the loss of example \((x,y)\in\mathcal{X}\times\mathcal{Y}\) under model \(f_{\mathbf{W}}\). The optimization problem of the DNN can be viewed as a problem of minimizing the empirical risk with a penalty term:
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\frac{1}{N}\sum_{i=1}^{N}l \left(f\left(x_{i}\right),y_{i}\right)+\lambda\sum_{l=1}^{L}p_{l}\left(\mathbf{ W}_{l}\right)\text{,} \tag{1}\]
where \(\lambda\) is a tuning parameter and \(p_{l}\left(\cdot\right)\) is a penalty function on the weight matrices.
Here, we propose a class of regularization methods called **Heavy-Tailed Regularization**, which refers to regularization methods that are conducive to making the model's weight matrices more heavy-tailed. To achieve this goal, \(p_{l}\left(\cdot\right)\) should be a complexity measure of the model that reflects the degree of its heavy-tailed behavior and decreases as the tails get heavier.
To describe the degree of heavy tails of the spectra of the weight matrices, it is critical to estimate the tail index \(\alpha\). In statistics, estimating the tail index is a tricky issue. Denote the data points \(\left\{x_{i},1\leq i\leq n\right\}\) and assume the data come from a heavy-tailed distribution with density function \(p(x)\sim cx^{-\alpha}\), i.e., its p.d.f. is comparable with the _power law_ \(x^{-\alpha}\) as \(x\rightarrow\infty\). A general method to estimate the tail index \(\alpha\) is the Hill estimator (HE), which can be used in general power-law settings. If the data are sorted in increasing order, the HE can be written as:
\[\hat{\alpha}=1+\frac{k}{\left(\sum_{i=1}^{k}\ln\frac{x_{n-i+1}}{x_{n-k}} \right)}, \tag{2}\]
where \(k\) is a tuning parameter. There is a trade-off depending on the value of \(k\) between the bias and variance of the estimator. In this study, we use HE with \(k=\frac{n}{2}\) for tail index estimation.
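A direct NumPy implementation of this estimator may look as follows; the heavy-tailed test matrix in the usage lines is a made-up illustration.

```
import numpy as np

def hill_estimator(x, k=None):
    # Hill estimator of the tail index for positive samples x,
    # with k = n // 2 by default, as used in this study.
    x = np.sort(np.asarray(x, dtype=float))      # increasing order
    n = len(x)
    if k is None:
        k = n // 2
    # alpha_hat = 1 + k / sum_{i=1}^{k} ln(x_{n-i+1} / x_{n-k})
    logs = np.log(x[n - k:] / x[n - k - 1])
    return 1.0 + k / logs.sum()

# Example: tail index of the squared singular values of a weight matrix W.
rng = np.random.default_rng(0)
W = rng.standard_t(df=3, size=(256, 256))        # heavy-tailed entries
eigs = np.linalg.eigvalsh(W.T @ W)
print(hill_estimator(eigs[eigs > 0]))
```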
### Weighted Alpha Regularization
Motivated by Martin and Mahoney's work [15, 17, 18], the _Weighted Alpha_ (also called _AlphaHat_) is used in our regularization approach. In their theory, there is a strong linear correlation between the test accuracy and the Weighted Alpha of models. The Weighted Alpha is defined as:
\[\text{Weighted Alpha}(\mathbf{W})=\sum_{l=1}^{L}\alpha_{l}\log\lambda_{\max,l}, \tag{3}\]
where \(\alpha_{l}\) is the tail index of all the positive eigenvalues of \(\mathbf{S}_{l}=\mathbf{W}_{l}^{T}\mathbf{W}_{l}\), and \(\lambda_{\max,l}\) is the maximum eigenvalue of \(\mathbf{S}_{l}\). In Martin and Mahoney's theory, only the performance of Weighted Alpha across different architectures of large-scale, pre-trained, state-of-the-art models was discussed, whereas we are interested in how this metric changes during the training process of DNNs. Here, we conducted some experiments and obtained evidence that Weighted Alpha is _negatively correlated_ with test accuracy. Thus, the penalty function \(p_{l}(\cdot)\) can be written as
\[p_{l}\left(\mathbf{W}_{l}\right)=\alpha_{l}\cdot\log\lambda_{\max,l}. \tag{4}\]
In fact, we do not need to penalize the weighted alpha throughout the training process. Our goal is to impose a heavy-tailed perturbation in the stochastic optimization, which only requires us to activate the regularization in the early stages or intermittently. Otherwise, the model will be over-regularized. On the other
hand, for practical reasons, we can terminate the regularization at some point to avoid high computational costs. Therefore, we provide two additional variants of the penalty function as follows:
1. Decay Weighted Alpha: \[p_{l}\left(\mathbf{W}_{l}\right)=d\left(\left\lfloor e/m\right\rfloor\right)\cdot\alpha_{l}\cdot\log\lambda_{\max,l},\] (5) where \(e\) is the current epoch number, \(m\) is the frequency of decay, and \(d(\cdot)\) is a decreasing function called the _decay function_. The decay function is called _power decay_ when \(d\left(x\right)=x^{-k}I_{\left\{x^{-k}>t\right\}}\) and called _exponential decay_ when \(d\left(x\right)=\exp\left(-kx\right)I_{\left\{\exp\left(-kx\right)>t\right\}}\), for hyperparameters \(k\) and \(t\). The adoption of this penalty function means that the regularization is activated only in the early epochs and becomes weaker with training.
2. Lower Threshold Weighted Alpha: \[p_{l}\left(\mathbf{W}_{l}\right)=\alpha_{l}\cdot\log\lambda_{\max,l}\cdot I \Bigg{\{}\sum_{l=1}^{L}\alpha_{l}\cdot\log\lambda_{\max,l}\geqslant t\Bigg{\}},\] (6) where \(t\) is a hyperparameter, \(I\left\{\cdot\right\}\) is the indicator function. The adoption of this penalty function means that the regularization is activated only when the Weighted Alpha is above a predetermined lower threshold \(t\), i.e., the model falls short of the degree of the heavy tail we expect.
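To make the three flavors concrete, the sketch below evaluates the penalty value for a list of weight matrices, reusing `hill_estimator` from the previous snippet. It only computes the value of the penalty: in actual training, the term is added to the loss and differentiated through \(\lambda_{\max,l}\) by the framework's autodiff, and the default constants here are placeholder assumptions.

```
import numpy as np

def weighted_alpha_penalty(weights, epoch=0, variant="plain",
                           m=10, k=1.0, t=50.0, cutoff=1e-3):
    # sum_l alpha_l * log(lambda_max,l), with optional decay / threshold.
    total = 0.0
    for W in weights:
        eigs = np.linalg.eigvalsh(W.T @ W)
        eigs = eigs[eigs > 0]
        alpha = hill_estimator(eigs)             # tail index of the spectrum
        total += alpha * np.log(eigs.max())      # alpha_l * log(lambda_max,l)
    if variant == "decay":                       # variant (5), exponential decay
        d = np.exp(-k * (epoch // m))
        return d * total if d > cutoff else 0.0
    if variant == "threshold":                   # variant (6): active only above t
        return total if total >= t else 0.0
    return total
```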
### Stable Rank Regularization
Stable rank is a classical metric in deep learning, which is defined as
\[\text{stable}\left(\mathbf{W}_{l}\right)=\frac{\left\|\mathbf{W}_{l}\right\| _{F}^{2}}{\left\|\mathbf{W}_{l}\right\|_{2}^{2}}. \tag{7}\]
It has been verified that the stable rank decreases during the training of DNNs [16]. Several recent studies [4, 21] also show that the generalization error can be upper bounded by \(O\left(\prod_{i}\left\|\mathbf{W}_{i}\right\|_{2}^{2}\sum_{i}\text{stable}\left(\mathbf{W}_{i}\right)\right)\), which implies that a smaller \(\sum_{i}\text{stable}\left(\mathbf{W}_{i}\right)\) leads to a smaller generalization error. In light of this, the penalty function for the stable rank regularization can be written as \(p_{l}\left(\mathbf{W}_{l}\right)=\text{stable}\left(\mathbf{W}_{l}\right)\), and thus the optimization problem can be written as
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\sum_{i=1}^{N}l\left(f \left(x_{i}\right),y_{i}\right)+\lambda\sum_{l=1}^{L}\frac{\left\|\mathbf{W} _{l}\right\|_{F}^{2}}{\left\|\mathbf{W}_{l}\right\|_{2}^{2}} \tag{8}\]
Note that \(||\mathbf{W}||_{F}^{2}\) is the sum of the square singular values of \(\mathbf{W}\) and \(||\mathbf{W}||_{2}\) is the maximum singular value of \(\mathbf{W}\). Recall from random matrix theory that when a matrix is heavy-tailed, its maximum eigenvalue lies far from the bulk of the spectrum. Combined with Martin's 5+1 phase transition theory [16], a smaller stable rank of the weight matrix corresponds to stronger heavy-tailed self-regularization.
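As a quick illustration, the stable rank penalty of Eq. (8) can be evaluated as below; as with the previous sketch, a training implementation would differentiate this term through the singular values rather than recompute it in NumPy.

```
import numpy as np

def stable_rank_penalty(weights):
    # sum_l ||W_l||_F^2 / ||W_l||_2^2 over the fully connected layers.
    total = 0.0
    for W in weights:
        fro_sq = np.sum(W ** 2)                  # squared Frobenius norm
        spec_sq = np.linalg.norm(W, ord=2) ** 2  # squared spectral norm
        total += fro_sq / spec_sq
    return total
```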
Similar to the weighted alpha regularization, in order to avoid over-regularization, we can also add decay and lower-threshold variants to the stable rank regularization as follows:
1. Decay Stable Rank: \[p_{l}\left(\mathbf{W}_{l}\right)=d\left(\left\lfloor e/m\right\rfloor\right)\cdot \frac{\left\|\mathbf{W}_{l}\right\|_{F}^{2}}{\left\|\mathbf{W}_{l}\right\|_{2}^ {2}}.\] (9)
2. Lower Threshold Stable Rank: \[p_{l}\left(\mathbf{W}_{l}\right)=\frac{\left\|\mathbf{W}_{l}\right\|_{F}^{2}}{ \left\|\mathbf{W}_{l}\right\|_{2}^{2}}\cdot I\Bigg{\{}\sum_{l=1}^{L}\frac{ \left\|\mathbf{W}_{l}\right\|_{F}^{2}}{\left\|\mathbf{W}_{l}\right\|_{2}^{2}} \geqslant t\Bigg{\}}.\] (10)
### Heavy-Tailed Regularization from a Bayesian Perspective
Here, we propose two heavy-tailed regularization methods from a Bayesian perspective. Let us view the deep neural network as a probabilistic model \(P(\mathbf{y}|\mathbf{x},\mathbf{W})\), where \(\mathbf{x}\in\mathcal{X}=\mathbb{R}^{p}\) is the input and \(\mathbf{y}\in\mathcal{Y}\) is the output probability assigned by the neural network. \(\mathbf{W}=\left\{\mathbf{W}_{1},\cdots,\mathbf{W}_{L}\right\}\) is the set of weight matrices of the neural network. Given a training sample set \(S=\left\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right)\cdots,\left(x_{N}, y_{N}\right)\right\}\) with sample size \(N\), a common method for estimating the weights \(\mathbf{W}\) is the maximum likelihood estimation (MLE):
\[\mathbf{W}^{\mathrm{MLE}}=\operatorname*{arg\,max}_{\mathbf{W}}\sum_{i=1}^{N} \log P\left(y_{i}\left|x_{i},\mathbf{W}\right.\right). \tag{11}\]
Specifically, for a multi-classification task, the probabilistic model is usually a multinomial distribution, and then the MLE can be written as
\[\mathbf{W}^{\mathrm{MLE}}=\operatorname*{arg\,max}_{\mathbf{W}}\sum_{i=1}^{N} y_{i}\log f_{\mathbf{W}}\left(x_{i}\right). \tag{12}\]
From a Bayesian perspective, if we want to introduce heavy-tailed regularization into the model, we can place a heavy-tailed prior on the weights and then find the maximum a posteriori (MAP) estimate rather than the MLE:
\[\mathbf{W}^{\mathrm{MAP}}=\operatorname*{arg\,max}_{\mathbf{W}}\log P\left( \mathbf{y}\left|\mathbf{x},\mathbf{W}\right.\right)+\log P\left(\mathbf{W} \right). \tag{13}\]
Thus, it is important to choose a reasonable prior \(P(\mathbf{W})\) for the weights which can make the weights more heavy-tailed.
Recall from random matrix theory that if a random matrix is heavy-tailed, its limiting spectral distribution (LSD) follows a power law [7, 8, 9] and its largest eigenvalue follows a Frechet distribution [1, 23]. Therefore, it is natural to take the power-law or Frechet distribution as the prior when introducing prior knowledge of the global spectrum or of the maximum eigenvalue.
Now we introduce the heavy-tailed prior upon the model. Firstly we consider the prior of global spectra of weight matrices. When the weight matrices are
heavy-tailed, the LSD is power-law, so the prior distribution can be set as
\[P\left(\mathbf{W}\right)=\prod_{l=1}^{L}P\left(\mathbf{W}_{l}\right)\propto\prod _{l=1}^{L}\prod_{j=1}^{K_{l}}\lambda_{l,j}^{-\alpha_{l}}, \tag{14}\]
where \(\alpha_{l}\) is the tail index of the power law of the square singular values of the weight matrix \(\mathbf{W}_{l}\) in the \(l\)-th layer of the neural network, and \(\lambda_{l,j}\) is the \(j\)-th square singular value of \(\mathbf{W}_{l}\). \(K_{l}\) is the number of singular values of \(\mathbf{W}_{l}\) that are considered to come from a heavy-tailed distribution; it is a hyperparameter, and we choose \(K_{l}\) as half the size of \(\mathbf{W}_{l}\) in our study. Substituting this into (13), we have the following optimization problem:
\[\mathbf{W}^{\mathrm{MAP}}=\operatorname*{arg\,max}_{\mathbf{W}}\sum_{i=1}^{N}y _{i}\log f_{\mathbf{W}}\left(x_{i}\right)-\sum_{l=1}^{L}\sum_{j=1}^{K_{l}} \alpha_{l}\log\lambda_{l,j} \tag{15}\]
Secondly, we consider the prior on the maximum square singular value of the weight matrices. When the weight matrices are heavy-tailed, the distribution of the maximum square singular value is a Frechet distribution, so the prior distribution can be set as
\[P\left(\mathbf{W}\right)=\prod_{l=1}^{L}P\left(\mathbf{W}_{l}\right)=\prod_{l =1}^{L}\exp\left(-\lambda_{\max,l}^{-\alpha_{l}}\right)\!. \tag{16}\]
where \(\alpha_{l}\) is the tail index of \(\mathbf{W}_{l}\) and \(\lambda_{\max,l}\) is the maximum square singular value. Similarly, substituting this into (13), we have the following optimization problem:
\[\mathbf{W}^{\mathrm{MAP}}=\operatorname*{arg\,max}_{\mathbf{W}}\sum_{i=1}^{N}y _{i}\log f_{\mathbf{W}}\left(x_{i}\right)-\sum_{l=1}^{L}\lambda_{\max,l}^{- \alpha_{l}}. \tag{17}\]
So far we have derived two forms of the MAP, but two problems remain: how to determine the hyperparameters \(\boldsymbol{\alpha}=\{\alpha_{l},1\leq l\leq L\}\), and how to solve the maximization problem. In empirical Bayes, the hyperparameters are determined by maximizing the marginal likelihood of the data, that is
\[\boldsymbol{\alpha}=\operatorname*{arg\,max}_{\boldsymbol{\alpha}}\log\int P \left(\mathbf{y},\mathbf{W}\left|\mathbf{x},\boldsymbol{\alpha}\right.\right) \mathrm{d}\mathbf{W}. \tag{18}\]
This is infeasible in practice since the integral is intractable. According to Mandt et al. [13], SGD can be seen as a variational expectation maximization (VEM) method for Bayesian inference. The maximization problems in (15) and (17) are equivalent to the following minimization problems:
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\frac{1}{N}\sum_{i=1}^{N}l \left(f\left(x_{i}\right),y_{i}\right)+\sum_{l=1}^{L}\sum_{j=1}^{K_{l}}\alpha _{l}\log\lambda_{l,j}, \tag{19}\]
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\frac{1}{N}\sum_{i=1}^{N}l \left(f\left(x_{i}\right),y_{i}\right)+\sum_{l=1}^{L}\lambda_{\max,l}^{-\alpha _{l}}, \tag{20}\]
where \(l(f(x),y)=-y\log f(x)\) is the cross-entropy loss. The hyperparameters \(\boldsymbol{\alpha}\) can be optimized when SGD is seen as a type of VEM algorithm. Instead of the MLE, the Hill estimator is a better choice for estimating the hyperparameters \(\boldsymbol{\alpha}\). When a tuning parameter is added, (19) and (20) can be modified as:
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\frac{1}{N}\sum_{i=1}^{N}l \left(f\left(x_{i}\right),y_{i}\right)+\mu\sum_{l=1}^{L}\sum_{j=1}^{K_{l}} \hat{\alpha}_{l}\log\lambda_{l,j}, \tag{21}\]
\[\min_{\mathbf{W}}\quad\mathcal{L}\left(x,y\right)=\frac{1}{N}\sum_{i=1}^{N}l \left(f\left(x_{i}\right),y_{i}\right)+\mu\sum_{l=1}^{L}\lambda_{\max,l}^{- \hat{\alpha}_{l}}, \tag{22}\]
where \(\hat{\alpha}_{l}\) is the Hill estimator of the \(l\)-th layer weight matrix.
Note that (21) and (22) are special cases of (1) with \(p_{l}(\mathbf{W})=\alpha_{l}\cdot\sum_{j}\log\lambda_{l,j}\) and \(p_{l}(\mathbf{W})=\lambda_{\max,l}^{-\alpha_{l}}\), respectively. Since the penalty terms are similar to the Weighted Alpha, these regularization terms can be considered variants of Weighted Alpha. According to the priors used in these regularizations, we call (21) **Heavy-Tailed Regularization under Power-law Prior** and (22) **Heavy-Tailed Regularization under Frechet Prior**.
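A sketch of the two prior-induced penalty terms in (21) and (22) is given below, again reusing `hill_estimator`. Taking the \(K_{l}\) largest eigenvalues for the power-law term is our reading of which part of the spectrum is treated as heavy-tailed, and the default values of \(\mu\) are placeholders.

```
import numpy as np

def power_law_prior_penalty(weights, mu=5e-4):
    # Eq. (21): mu * sum_l alpha_hat_l * sum_{j <= K_l} log(lambda_{l,j})
    total = 0.0
    for W in weights:
        eigs = np.sort(np.linalg.eigvalsh(W.T @ W))[::-1]   # descending
        eigs = eigs[eigs > 0]
        alpha = hill_estimator(eigs)
        K = len(eigs) // 2                        # K_l = half the spectrum
        total += alpha * np.sum(np.log(eigs[:K])) # largest K_l eigenvalues
    return mu * total

def frechet_prior_penalty(weights, mu=5e-5):
    # Eq. (22): mu * sum_l lambda_max,l ** (-alpha_hat_l)
    total = 0.0
    for W in weights:
        eigs = np.linalg.eigvalsh(W.T @ W)
        eigs = eigs[eigs > 0]
        alpha = hill_estimator(eigs)
        total += eigs.max() ** (-alpha)
    return mu * total
```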
## 3 Experiment
In this section, we experimentally demonstrate the performance of Heavy-Tailed Regularization on multi-classification tasks. To verify the effectiveness of heavy-tailed regularization, we employed Weighted Alpha Regularization and Stable Rank Regularization with their variants, as well as the heavy-tailed regularization under the Power-law and Frechet spectral priors. Here, every tail index is replaced by its Hill estimator with \(k=\frac{n}{2}\), where \(n\) is the size of the corresponding weight matrix. In our experiments, we used the following training methods to compare with the heavy-tailed regularization approaches:
1. Vanilla problem (Base): We considered the original model without any explicit regularization.
2. Weight Decay: We considered the most commonly used explicit regularization in (1) where \(p_{l}(\mathbf{W})=\frac{1}{2}||\mathbf{W}||_{F}^{2}\).
3. Spectral Norm Regularization: We considered another explicit spectrum-based regularization method, whose penalty function is \(p_{l}(\mathbf{W})=\frac{1}{2}||\mathbf{W}||_{2}^{2}\).
All the experiments here are based on mini-batch SGD and learning rate decay. In our experiments, we used the following four settings on the model and dataset:
1. The Three Layer Neural Network (FC3) on KMNIST and CIFAR10.
2. The LeNet5 on CIFAR10.
3. The ResNet18 on CIFAR10.
### FC3
First, we train the neural network with three hidden layers on the KMNIST and CIFAR10 datasets for 200 epochs. The KMNIST dataset is adapted from the Kuzushiji dataset and is a drop-in replacement for the MNIST dataset; its image size is 28\(\times\)28. The CIFAR10 dataset consists of 60000 color images of size 32\(\times\)32. The CIFAR10 dataset is more complex than the KMNIST dataset and is therefore more difficult to classify correctly. Because the image sizes of the two datasets differ, we use a three-layer network of a different size for each dataset: for the KMNIST dataset, a network with layer sizes \(\mathbf{n}=[784,128,128,128,10]\); for the CIFAR10 dataset, a network with layer sizes \(\mathbf{n}=[3072,512,256,256,10]\).
The results are shown in Figures 1 and 2 and Table 1. The heavy-tailed regularizations all show better accuracy than the vanilla problem on both the KMNIST and CIFAR10 datasets. The Frechet prior achieves the best test accuracy on the KMNIST dataset, and the stable rank with a lower threshold of \(t=15\) achieves the best test accuracy on the CIFAR10 dataset.
### LeNet5
Secondly, we train LeNet5 on the CIFAR10 dataset for 200 epochs. LeNet5 is a famous and classical convolutional neural network (CNN) architecture. The results are shown in Figure 3 and Table 2. The heavy-tailed regularizations again all show better accuracy than the vanilla problem on the CIFAR10 dataset. As shown in the table, the stable rank with \(\beta=0.1\) achieves the best test accuracy.
### ResNet18
Thirdly, we train the ResNet18 on the CIFAR10 dataset for 200 epochs. The ResNet is a CNN architecture which greatly advanced the SOTA in various
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Network & Dataset & Method & \(\beta\) & Test accuracy \\ \hline \multirow{14}{*}{FC3} & \multirow{7}{*}{KMNIST} & base & & \(89.19\pm 0.020\) \\ & & weight decay & \(5.00\times 10^{-4}\) & \(89.44\pm 0.037\) \\ & & spectral norm & \(1.00\times 10^{-4}\) & \(89.27\pm 0.013\) \\ & & weighted alpha\({}^{1}\) & \(5.00\times 10^{-5}\) & \(89.60\pm 0.011\) \\ & & stable rank\({}^{2}\) & \(1.00\times 10^{-4}\) & \(89.48\pm 0.008\) \\ & & Power-law prior & \(5.00\times 10^{-4}\) & \(89.58\pm 0.175\) \\ & & **Fréchet prior** & \(\mathbf{2.00\times 10^{-5}}\) & \(\mathbf{89.64\pm 0.173}\) \\ \cline{2-5} & \multirow{7}{*}{CIFAR10} & base & & \(54.97\pm 0.039\) \\ & & weight decay & \(5.00\times 10^{-4}\) & \(55.56\pm 0.092\) \\ & & spectral norm & \(1.00\times 10^{-4}\) & \(55.27\pm 0.003\) \\ & & weighted alpha\({}^{1}\) & \(1.00\times 10^{-4}\) & \(55.72\pm 0.053\) \\ & & **stable rank\({}^{2}\)** & \(\mathbf{1.00\times 10^{-4}}\) & \(\mathbf{55.82\pm 0.041}\) \\ & & Power-law prior & \(5.00\times 10^{-5}\) & \(55.44\pm 0.038\) \\ & & Fréchet prior & \(5.00\times 10^{-5}\) & \(55.45\pm 0.029\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The average (\(\pm\) standard error) of test accuracy of FC3 with different regularization methods on the KMNIST and CIFAR10 datasets.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Network & Dataset & Method & \(\beta\) & Test Accuracy \\ \hline \multirow{7}{*}{LeNet5} & \multirow{7}{*}{CIFAR10} & base & & \(72.42\pm 0.213\) \\ & & weight decay & \(5.00\times 10^{-4}\) & \(72.62\pm 0.277\) \\ & & spectral norm & \(1.00\times 10^{-4}\) & \(71.98\pm 0.275\) \\ & & weighted alpha\({}^{1}\) & \(4.00\times 10^{-3}\) & \(72.61\pm 0.300\) \\ & & **stable rank** & \(\mathbf{0.1}\) & \(\mathbf{73.63\pm 0.193}\) \\ & & Power-law prior & \(7.00\times 10^{-4}\) & \(72.61\pm 1.061\) \\ & & Fréchet prior & \(5.00\times 10^{-5}\) & \(72.58\pm 0.270\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The average (\(\pm\) standard error) of test accuracy of LeNet5 with different regularization methods on the CIFAR10 dataset.
computer vision tasks. In this experiment, we add one linear layer of size 512\(\times\)128 before the linear layer in the original ResNet18 architecture. The results are shown in Figure 4 and Table 3. As shown in the table, the stable rank with \(\beta=5\times 10^{-4}\) achieves the best test accuracy.
|
2303.11060 | Quantile and moment neural networks for learning functionals of
distributions | We study new neural networks to approximate functions of distributions in a
probability space. Two classes of neural networks based on quantile and moment
approximation are proposed to learn these functions and are theoretically
supported by universal approximation theorems. By mixing the quantile and
moment features in other new networks, we develop schemes that outperform
existing networks on numerical test cases involving univariate distributions.
For bivariate distributions, the moment neural network outperforms all other
networks. | Xavier Warin | 2023-03-20T12:23:31Z | http://arxiv.org/abs/2303.11060v1 | # Quantile and moment neural networks for learning functionals of distributions. +
###### Abstract
We study new neural networks to approximate functions of distributions in a probability space. Two classes of neural networks based on quantile and moment approximation are proposed to learn these functions and are theoretically supported by universal approximation theorems. By mixing the quantile and moment features in other new networks, we develop schemes that outperform existing networks on numerical test cases involving univariate distributions. For bivariate distributions, the moment neural network outperforms all other networks.
## 1 Introduction
Deep neural networks have been successfully used to solve high-dimensional PDEs, either by solving the PDE with physics-informed methods or by using backward stochastic differential equations (see [2], [6] for an overview). Recently, mean field game and control theory has allowed the formalization of problems involving large populations of interacting agents. The solution of such problems is a function depending on the probability distribution of the population and can be obtained by solving a PDE in the Wasserstein space of probability measures (called the Master equation) or by solving BSDEs of McKean-Vlasov (MKV) type (see [3, 4]). In this case, the resulting PDE is infinite dimensional and must be reduced to a (high but) finite-dimensional problem to be tractable.
To solve such problems, [11] developed two schemes approximating functions depending on both a random variable \(X\) and a probability distribution \(\mu\), where \(X\sim\mu\). The first scheme is based on a bin representation of the density, and the second uses a neural network to automatically extract the key features of the distribution. In both cases, these key features and the \(X\) values are used as input to a neural network that approximates the value function. The first approach is the bin network and the second is the cylinder network. Both schemes have been successfully applied to various toy cases involving one-dimensional distributions and used to solve the master equation through its semi-Lagrangian representation in [10]. As we explain in the next section, the dependence of the functional on \(X\) is not relevant for testing the various networks developed, and we focus in this article only on the dependence on the distribution.
In this article, we propose several new networks to approximate functions of distributions:
* The first one, limited to one-dimensional distributions, uses the quantiles of the distribution as key features: the resulting scheme is the quantile network.
* The second ones uses the moments of the distribution as key features and leads to the moment network scheme.
* Finally, the two previous features can be mixed to take advantage of the first two networks.
We give some universal approximation theorems for the first two networks. We test the developed networks on functions of univariate and bivariate distributions, where possible. We compare the proposed networks with the bin network and the cylinder network and show that:
* The moment network **or** the quantile network outperforms the cylinder network and the bin network in the case of univariate distributions: the better of the two networks always gives better results (on our tests) than the cylinder network and the bin network.
* By combining quantile and moment, we obtain a neural network that always outperforms the cylinder and the bin networks.
* In the case of bivariate distributions, the bin network fails and the moment network outperforms all other networks.
The structure of the article is as follows. In the first section, we formalize our problem as a minimization problem on the distribution space using a generic neural network. We give the general methodology for sampling distributions in the multivariate case and show how to solve this minimization problem using a stochastic gradient method. The second section is dedicated to the different proposed neural networks. The last one presents numerical results for univariate and bivariate distributions. A final conclusion is given.
**Notations.** Denote by \(\mathcal{P}_{2}(\mathbb{R}^{d})\) the Wasserstein space of square integrable probability measures equipped with the 2-Wasserstein distance \(\mathcal{W}_{2}\). Given some \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), and \(\phi\) a measurable function on \(\mathbb{R}^{d}\) with quadratic growth condition, hence in \(L^{2}(\mu)\), we set: \(\mathbb{E}_{X\sim\mu}[\phi(X)]:=\int\phi(x)\mu(\mathrm{d}x)\).
## 2 Learning distribution functions
Given a function \(V\) on \(\mathcal{P}_{2}(\mathbb{R}^{d})\), valued on \(\mathbb{R}^{p}\), we want to approximate the infinite-dimensional mapping
\[\mathcal{V}:\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\longmapsto\ V(\mu)\in \mathbb{R}^{p},\]
called the distribution function, by a map \(\mathcal{N}\) constructed from suitable classes of neural networks. The distribution network \(\mathcal{N}\) takes input \(\mu\) a probability measure and outputs \(\mathcal{N}(\mu)\). The quality of this approximation is measured by the error:
\[L(\mathcal{N}):=\int_{\mathcal{P}_{2}(\mathbb{R}^{d})}\big{|} \mathcal{V}(\mu)-\mathcal{N}(\mu)\big{|}^{2}\nu(\mathrm{d}\mu) \tag{2.1}\]
where \(\nu\) is a probability measure on \(\mathcal{P}_{2}(\mathbb{R}^{d})\), called the training measure. The distribution function \(\mathcal{V}\) is learned by minimizing the loss function over the parameters of the neural network operator \(\mathcal{N}\).
In the article [11], the authors learn what they call a mean-field function, a function \(\hat{V}\) depending on both \(\mu\) and \(x\). The network is a function \(\hat{\mathcal{N}}\) that takes as input a probability measure \(\mu\) and a point \(x\) in the support of \(\mu\), and outputs \(\hat{\mathcal{N}}(\mu)(x)\). The solution is found by minimizing:
\[\hat{L}(\hat{\mathcal{N}}):=\int_{\mathcal{P}_{2}(\mathbb{R}^{d})}\mathbb{E}_ {X\sim\mu}\big{|}\hat{V}(X,\mu)-\hat{\mathcal{N}}(\mu)(X)\big{|}^{2}\nu( \mathrm{d}\mu). \tag{2.2}\]
The resolution of equation (2.2) is more general than that of equation (2.1), but in fact the result is simply obtained by concatenating \(x\) and the representation of the distribution \(\mu\) as input to the neural network, similarly to what is suggested in [5]. Therefore, we focus on the resolution of problem (2.1).
In the following, we explain how to sample distributions and how to generate samples for a given distribution in the multivariate case. Then we explain the global methodology used to train the neural networks. This methodology is used for all the networks developed in the next sections.
### Sampling distributions on a compact set
To learn a function of a distribution \(\mu\) with support in \(\mathcal{K}=[\underline{\mathcal{K}}_{1},\bar{\mathcal{K}}_{1}]\times\ldots \times[\underline{\mathcal{K}}_{d},\bar{\mathcal{K}}_{d}]\subset\mathbb{R}^{d}\), we must be able to "generate" distributions and, having chosen the distribution \(\mu\), to efficiently sample \(X\sim\mu\) in \(\mathbb{R}^{d}\).
As done in [11], we use a bin representation but propose a different algorithm to tackle the multivariate
case. By tensorization, a multivariate bin representation for a lattice \((J_{1},\ldots,J_{d})\) is given, for \((j_{1},\ldots,j_{d})\in[1,J_{1}]\times\ldots\times[1,J_{d}]\), by \[\text{Bin}(j_{1},\ldots,j_{d})=\prod_{i=1}^{d}[\underline{\mathcal{K}}_{i}+(j_{ i}-1)\frac{\bar{\mathcal{K}}_{i}-\underline{\mathcal{K}}_{i}}{J_{i}}, \underline{\mathcal{K}}_{i}+j_{i}\frac{\bar{\mathcal{K}}_{i}-\underline{ \mathcal{K}}_{i}}{J_{i}}].\]
* First, we generate a distribution \(\mu\) by sampling \(e_{1},\ldots,e_{\prod_{i=1}^{d}J_{i}}\), positive random variables drawn from an exponential law, and set for \((j_{1},\ldots,j_{d})\in[1,J_{1}]\times\ldots\times[1,J_{d}]\) \[p(j_{1},\ldots,j_{d})=\frac{e_{j_{1}+j_{2}J_{1}+\cdots+j_{d}\prod_{i=1}^{d-1}J_{i}}}{\sum_{m=1}^{\prod_{i=1}^{d}J_{i}}e_{m}}\] which gives a constant-per-bin probability measure where the probability of sampling in \(\text{Bin}(j_{1},\ldots,j_{d})\) is given by \(p(j_{1},\ldots,j_{d})\).
* Now that we have chosen \(\mu\), we can generate \(N\) samples of \(d\) dimensional coordinates \((j_{1}^{n},\ldots,j_{d}^{n})\in[1,J_{1}]\times\ldots\times[1,J_{d}]\) for \(n\in[1,N]\) such that \[\text{proba}[(j_{1}^{n},\ldots,j_{d}^{n})=(j_{1},\ldots,j_{d})]=p(j_{1}, \ldots,j_{d}).\] Finally, we sample \(Y^{n}\sim\mathbf{U}([0,1]^{d})\) for \(n=1,N\), and set \[X^{n}= (X_{1}^{n},\ldots,X_{d}^{n}),\] where \[X_{i}^{n}=\underline{\mathcal{K}}_{i}+(j_{i}^{n}-1+Y_{i}^{n}) \frac{\bar{\mathcal{K}}_{i}-\underline{\mathcal{K}}_{i}}{J_{i}},\text{for} \quad i=1,\ldots,d.\]
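The two steps can be condensed into a few lines of NumPy, as in the sketch below. The flattening of bin coordinates uses C-order indexing rather than the explicit index formula above, which is harmless since the exponential weights are exchangeable; the bin counts, box bounds, and seed in the usage line are arbitrary.

```
import numpy as np

def sample_bin_distribution(J, lo, hi, N, rng):
    # Draw a random piecewise-constant law on the box, then N samples of it.
    # J: bins per dimension (J_1, ..., J_d); lo, hi: box bounds per dimension.
    J = np.asarray(J)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    e = rng.exponential(size=int(np.prod(J)))    # exponential weights
    p = e / e.sum()                              # one probability per bin
    flat = rng.choice(p.size, size=N, p=p)       # bin index of each sample
    coords = np.stack(np.unravel_index(flat, tuple(J)), axis=1)  # 0-based j - 1
    Y = rng.uniform(size=(N, J.size))            # uniform offset inside the bin
    X = lo + (coords + Y) * (hi - lo) / J        # X_i = lo_i + (j_i - 1 + Y_i) * width_i
    return p, X

rng = np.random.default_rng(0)
p, X = sample_bin_distribution([10, 10], [0.0, 0.0], [1.0, 1.0], N=1000, rng=rng)
```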
**Remark 2.1**.: _This procedure allows to generate points according to a constant density function per bin. In dimension one, it is equivalent to the algorithm proposed in [11] which generates points with a linear representation of the cumulative distribution function._
### The training methodology
Since problem (2.1) is infinite-dimensional, we need to introduce a discretization of the measure. We denote by \(R^{K}(\mu):=(R^{K}_{k}(\mu))_{k=1,\ldots,K}\) the \(K\) features estimated from the law \(\mu\). The features selected depend on the method developed and will be detailed in the following sections.
The neural network \(\mathcal{N}(\mu):=\Phi_{\theta}(R^{K}(\mu))\) is such that \(\Phi_{\theta}\) is an operator from \(\mathbb{R}^{K}\) to \(\mathbb{R}^{p}\) depending on some parameters \(\theta\) and we use a gradient descent algorithm (ADAM [8]) with Tensorflow software [1] to minimize the loss
\[\tilde{L}(\theta):=\int_{\mathcal{P}_{2}(\mathbb{R}^{d})}\big{|} \mathcal{V}(\mu)-\Phi_{\theta}(R^{K}(\mu))\big{|}^{2}\nu(\mathrm{d}\mu) \tag{2.3}\]
with respect to the parameters \(\theta\).
At each iteration of the stochastic gradient,
* \(M\) distributions \((\mu^{m})_{m=1,M}\) are generated and for each \(\mu^{m}\), \(X^{m,n}\sim\mu^{m}\) are generated for \(n=1,\ldots,N\), following the methodology given in section 2.1.
* The \(K\) features representing the law are estimated from the \(N\) samples for a given estimator \(R^{K,N}(\mu^{m}):=(R^{K,N}_{k}((X^{m,n})_{n=1,N}))_{k=1,\ldots,K}\).
The discretized version of the loss function (2.3) is then
\[\tilde{L}(\theta):=\ \frac{1}{M}\sum_{m=1}^{M}\big{|}\mathcal{V}(\mu^{m})-\Phi_{ \theta}(R^{K,N}(\mu^{m}))\big{|}^{2}.\]
The learning rate associated with the gradient method is equal to \(5\times 10^{-3}\) in all the experiments.
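A minimal sketch of this training loop, reusing the sampler above, might look as follows. The feature map \(R^{K,N}\) is taken here to be the empirical quantile vector used in the next section; `V_target` is an illustrative test functional and `n_iterations` an assumed budget, while the architecture (2 hidden layers of 20 ReLU neurons) and learning rate follow the text.

```python
import numpy as np
import tensorflow as tf

K, M, N = 200, 20, 200_000      # number of features, batch size, samples per law
n_iterations = 1_000            # illustrative training budget
rng = np.random.default_rng(0)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(learning_rate=5e-3)

def V_target(X):                # illustrative functional: the 0.7-quantile (case B)
    return np.quantile(X, 0.7)

for it in range(n_iterations):
    feats, targets = [], []
    for _ in range(M):          # M fresh distributions per gradient step
        _, X = sample_bin_distribution((400,), [-2.0], [2.0], N, rng)
        # R^{K,N}: empirical quantiles at levels k/(K+1), k = 1, ..., K
        feats.append(np.quantile(X, np.arange(1, K + 1) / (K + 1)))
        targets.append(V_target(X))
    feats = tf.constant(np.stack(feats), tf.float32)
    targets = tf.constant(np.array(targets)[:, None], tf.float32)
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((model(feats) - targets) ** 2)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
```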
## The networks
### The quantile network for one dimensional distribution
Let \(\mathcal{D}_{2}(\mathbb{R})\) be the subset of probability measures \(\mu\) in \(\mathcal{P}_{2}(\mathbb{R})\) admitting a density function \(\mathrm{p}^{\mu}\) with respect to the Lebesgue measure \(\lambda\) on \(\mathbb{R}\). Fixing \(\mathcal{K}\) as a bounded segment in \(\mathbb{R}\), we want to approximate the functions of the distributions with support in \(\mathcal{K}\).
For a distribution \(\mu\), we denote by \(F_{\mu}\) its cumulative distribution function and by \(Q_{\mu}\) the quantile function defined as \(Q_{\mu}(p)=\inf\{x\in\mathcal{K}:p\leq F_{\mu}(x)\}\). In the sequel, we also use the notation \(Q_{X}\) for \(Q_{\mu}\) if \(X\sim\mu\).
Choosing \(K>0\), the main characteristics of the distribution \(\mu\) are given by
\[\mathrm{Q}_{\mu}^{K}=(Q_{\mu}(\frac{k}{K+1}))_{k=1,K}\]
which lies on \(\mathcal{D}_{K}:=\{Q:=(q_{k})_{k=1,K}:q_{1}<\cdots<q_{K}\}\).
A quantile network is thus an operator on \(\mathcal{D}_{2}(\mathbb{R})\) in the form
\[\mathcal{N}_{Q}(\mu)=\Phi_{\theta}(\mathrm{Q}_{\mu}^{K}),\]
so setting \(R^{K}(\mu)=\mathrm{Q}_{\mu}^{K}\) in equation (2.3).
Let us denote by \(\mathcal{D}_{C^{1}}(\mathcal{K})\) the subset of elements \(\mu\) in \(\mathcal{D}_{2}(\mathbb{R})\) with support in \(\mathcal{K}\), with continuously differentiable density functions \(\mathrm{p}^{\mu}\). We get the following universal approximation theorem:
**Theorem 3.1**.: _Let \(\mathcal{K}=[\underline{\mathcal{K}},\bar{\mathcal{K}}]\) be a bounded segment in \(\mathbb{R}\), \(V\) a continuous function from \(\mathcal{P}_{2}(\mathbb{R})\) to \(\mathbb{R}\). Then, for all \(\varepsilon>0\), there exists \(K\in\mathbb{N}^{*}\), and \(\Phi\) a neural network on \(\mathbb{R}^{K}\) with values in \(\mathbb{R}\) such that_
\[\big{|}V(\mu)-\Phi(\mathrm{Q}_{\mu}^{K})\big{|}\leq\ \varepsilon,\quad\forall \mu\in\mathcal{D}_{C^{1}}(\mathcal{K}).\]
Proof.: The proof is very similar to the proof of Theorem 2.1 in [11]; the only difference is the modification of the first step. From the quantile representation of the density function, we get the following step-function density approximation:
\[p_{\mu,K}^{\mathrm{Q}}(x)=\frac{1}{K(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K})},\ \text{for}\ x\in]Q_{\mu,k}^{K},Q_{\mu,k+1}^{K}],\quad 0 \leq k\leq K\]
where \(Q_{\mu,k}^{K}=Q_{\mu}(\frac{k}{K+1})\), for \(k=1,\ldots,K\), \(Q_{\mu,0}^{K}=\underline{\mathcal{K}}\), \(Q_{\mu,K+1}^{K}=\bar{\mathcal{K}}\).
For \(\mu\in\mathcal{D}_{C^{1}}(\mathcal{K})\) with density \(\mathrm{p}^{\mu}\), denote by \(\hat{\mu}^{K}=\mathcal{L}_{D}(p_{\mu,K}^{\mathrm{Q}})\) the probability measure with density representation \(p_{\mu,K}^{\mathrm{Q}}\).
Since \(\mu\), \(\hat{\mu}^{K}\) are supported on the compact set \(\mathcal{K}\), they lie in \(\mathcal{P}_{1}(\mathbb{R})\), the set of probability measures with finite first moment. From the Kantorovich-Rubinstein dual representation of the 1-Wasserstein distance, we have
\[\mathcal{W}_{1}(\mu,\hat{\mu}^{K}) = \sup_{\phi}\int_{\mathcal{K}}\phi(x)(\mathrm{p}^{\mu}(x)-p_{\mu, K}^{\mathrm{Q}}(x))\mathrm{d}x,\]
where the supremum is taken over all Lipschitz continuous functions \(\phi\) on \(\mathcal{K}\) with Lipschitz constant bounded by 1, and where we can assume w.l.o.g. that \(\phi(x_{0})=0\) for some fixed point \(x_{0}\) in \(\mathcal{K}\).
Writing \(\bar{\mathrm{p}}_{k}^{\mu}:=\frac{1}{(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K})}\int_{Q_{\mu,k}^{K}}^{Q_{\mu,k+1}^{K}}\mathrm{p}^{\mu}(s)ds=\mathrm{p}^{\mu}(\tilde{x}_{k})\) with \(\tilde{x}_{k}\in[Q_{\mu,k}^{K},Q_{\mu,k+1}^{K}]\) by the mean value theorem:
\[\mathcal{W}_{1}(\mu,\hat{\mu}^{K}) \leq \sup_{\phi}\sum_{k=1}^{K}\int_{Q_{\mu,k}^{K}}^{Q_{\mu,k+1}^{K}}| \phi(x)||\big{(}\mathrm{p}^{\mu}(x)-\frac{1}{K(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K})} \big{)}|\mathrm{d}x \tag{3.1}\] \[\leq \mathrm{diam}(\mathcal{K})\sum_{k=1}^{K}\int_{Q_{\mu,k}^{K}}^{Q_{ \mu,k+1}^{K}}|\big{(}\mathrm{p}^{\mu}(x)-\frac{1}{K(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K })}\big{)}|\mathrm{d}x\] \[= \mathrm{diam}(\mathcal{K})\sum_{k=1}^{K}\int_{Q_{\mu,k}^{K}}^{Q_ {\mu,k+1}^{K}}|\mathrm{p}^{\mu}(x)-\bar{\mathrm{p}}_{k}^{\mu}|dx\] \[= \mathrm{diam}(\mathcal{K})\sum_{k=1}^{K}(1_{\bar{\mathrm{p}}_{k} ^{\mu}<\epsilon}+1_{\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon})\int_{Q_{\mu,k}^{K }}^{Q_{\mu,k+1}^{K}}|\mathrm{p}^{\mu}(x)-\bar{\mathrm{p}}_{k}^{\mu}|dx\]
where we used that \(|\phi(x)|\leq|x-x_{0}|\leq\mathrm{diam}(\mathcal{K})\).
Then :
\[\sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}<\epsilon}\int_{Q_{ \mu,k}^{K}}^{Q_{\mu,k+1}^{K}}|\mathrm{p}^{\mu}(x)-\bar{\mathrm{p}}_{k}^{\mu}|dx\leq \sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}<\epsilon}\int_{Q_{\mu, k}^{K}}^{Q_{\mu,k+1}^{K}}(\mathrm{p}^{\mu}(x)+\bar{\mathrm{p}}_{k}^{\mu})dx\] \[=\sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}<\epsilon}(Q_{\mu,k+1 }^{K}-Q_{\mu,k}^{K})2\bar{\mathrm{p}}_{k}^{\mu}\leq 2\epsilon \tag{3.2}\]
Notice that if \(\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon\), then \(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K}\leq\frac{1}{\epsilon K}\); again by the mean value theorem, and setting \(C=\sup_{y}|\mathrm{p}^{\mu\prime}(y)|\):
\[\sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon}\int_{Q_{ \mu,k}^{K}}^{Q_{\mu,k+1}^{K}}|\mathrm{p}^{\mu}(x)-\bar{\mathrm{p}}_{k}^{\mu}|dx= \sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon}\int_{Q_ {\mu,k}^{K}}^{Q_{\mu,k+1}^{K}}|\mathrm{p}^{\mu}(x)-\mathrm{p}^{\mu}(\tilde{x }_{k})|dx\] \[\leq C\sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon}\int_{Q_ {\mu,k}^{K}}^{Q_{\mu,k+1}^{K}}|x-\tilde{x}_{k}|dx\] \[\leq C\sum_{k=1}^{K}1_{\bar{\mathrm{p}}_{k}^{\mu}\geq\epsilon}(Q_{ \mu,k+1}^{K}-Q_{\mu,k}^{K})^{2}\] \[\leq C\sum_{k=1}^{K}(Q_{\mu,k+1}^{K}-Q_{\mu,k}^{K})\frac{1}{\epsilon K }=C\frac{1}{\epsilon K} \tag{3.3}\]
Plugging equations (3.2) and (3.3) into (3.1) gives:
\[\mathcal{W}_{1}(\mu,\hat{\mu}^{K})\leq\mathrm{diam}(\mathcal{K})(2\epsilon+C \frac{1}{\epsilon K})\]
For a given \(\epsilon\), it is possible to take \(K\) large enough to get \(\mathcal{W}_{1}(\mu,\hat{\mu}^{K})\leq\mathrm{diam}(\mathcal{K})3\epsilon\), and noting that \(\mathcal{W}_{2}(\mu,\hat{\mu}^{K})\leq\sqrt{\mathrm{diam}(\mathcal{K})\mathcal{W}_{1}(\mu,\hat{\mu}^{K})}\) by the Hölder inequality, we get that \(\mathcal{W}_{2}(\mu,\hat{\mu}^{K})\leq\mathrm{diam}(\mathcal{K})\sqrt{3\epsilon}\).
Therefore we have shown that
\[\sup_{\mu\in\mathcal{D}_{C^{1}}(\mathcal{K})}\mathcal{W}_{2}(\mu,\hat{\mu}^{K}) \rightarrow\ 0,\quad\text{ as }K\rightarrow\infty.\]
Then we use the same argument as in [11] to get that, for a given \(\epsilon\), it is possible to set \(K\) such that
\[|V(\mu)-V(\hat{\mu}^{K})|\leq\ \frac{\varepsilon}{2},\quad\forall\ \mu\in\mathcal{D}_{C^{1}}( \mathcal{K}).\]
Finally, using a classical universal approximation theorem, we can conclude as in Step 2 of [11].
### The moment network
Let \(\mathcal{D}_{2}(\mathbb{R}^{d})\) be the subset of probability measures \(\mu\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) that admit a density function \(\mathrm{p}^{\mu}\) with respect to the Lebesgue measure \(\lambda_{d}\) on \(\mathbb{R}^{d}\). Fixing \(\mathcal{K}\) as a bounded rectangle in \(\mathbb{R}^{d}\), we want to approximate functions of distributions with support in \(\mathcal{K}\). Choosing \(K>0\), the main features of the distribution \(\mu\) are captured by its lowest-order moments:
\[\mathbf{M}_{\mu}^{K}=(\mathbb{E}_{X\sim\mu}[\prod_{i=1,\ldots,d}X_{i}^{k_{i}}])_{ \sum_{i=1}^{d}k_{i}\leq K}\]
with values in \(\mathbb{R}^{\hat{K}}\), with \(\hat{K}=\#\{p\in\mathbb{N}^{d}/\sum_{i=1}^{d}p_{i}\leq K\}\).
**Remark 3.2**.: _Since the support of the distribution is bounded all moments are well defined._
A moment network is an operator on \(\mathcal{D}_{2}(\mathbb{R}^{d})\) in the form
\[\mathcal{N}_{M}(\mu)=\Phi_{\theta}(\mathbf{M}_{\mu}^{K}),\]
so setting \(R^{\hat{K}}(\mu)=\mathbf{M}_{\mu}^{K}\) in the equation (2.3).
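A possible empirical estimator of the moment features \(\mathbf{M}_{\mu}^{K}\) from \(N\) samples is sketched below; the function name is ours, and the enumeration of multi-indices is one straightforward choice.

```python
import itertools
import numpy as np

def moment_features(X, K):
    """Empirical estimate of M_mu^K from samples X of shape (N, d):
    all mixed moments E[prod_i X_i^{k_i}] with k_1 + ... + k_d <= K."""
    d = X.shape[1]
    feats = []
    for ks in itertools.product(range(K + 1), repeat=d):
        if sum(ks) <= K:
            feats.append(np.mean(np.prod(X ** np.array(ks), axis=1)))
    return np.array(feats)   # length K_hat = #{k in N^d : sum_i k_i <= K}
```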
**Remark 3.3**.: _This approach is closely related to the moment problem, which consists in determining a distribution from its moments if they exist. If the support of the distribution is \([0,\infty[\), this is the Stieltjes moment problem, and if the support is \(\mathbb{R}\), this is the Hamburger moment problem. If \(\mu\) is a positive measure with all moments defined, we say that \(\mu\) is a solution to the moment problem. If the solution to the moment problem is unique, the moment problem is called determinate. Otherwise the moment problem is said to be indeterminate. In our case, where the support is compact, this problem is known as the Hausdorff moment problem and it is determinate. The connection between the moment problem and the reconstruction of an approximation of the quantile has been studied for example in [9]._
We now give a universal approximation theorem for this neural network:
**Theorem 3.4**.: _Let \(\mathcal{K}\) be a bounded rectangle in \(\mathbb{R}^{d}\), and \(V\) be a continuous function from \(\mathcal{P}_{2}(\mathcal{K})\) into \(\mathbb{R}^{p}\), then, for all \(\varepsilon>0\), there exists \(K\) and \(\Psi\) a neural network from \(\mathbb{R}^{\hat{K}}\) to \(\mathbb{R}^{p}\) such that_
\[\big{|}V(\mu)-\Psi(\mathbf{M}_{\mu}^{K})\big{|}\leq\ \varepsilon\quad\forall\mu\in \mathcal{P}(\mathcal{K})\]
Proof.: By the density of cylindrical polynomial functions with respect to functions of distributions (see Lemma 3.12 in [7]), for all \(\varepsilon>0\), there exists \(K\in\mathbb{N}^{*}\) and \(P\) a linear function from \(\mathbb{R}^{\hat{K}}\) into \(\mathbb{R}^{p}\), s.t.
\[\big{|}V(\mu)-P(\mathbf{M}_{\mu}^{K})\big{|}\leq\ \frac{\varepsilon}{2},\quad \forall\mu\in\mathcal{P}(\mathcal{K}).\]
Note that since \(\mathcal{K}\) is bounded, \(\mathbf{M}_{\mu}^{K}\) lies in a compact set \(\mathcal{Y}\), and we use the classical universal approximation theorem for finite-dimensional functions to obtain the existence of a feedforward neural network \(\Psi\) on \(\mathbb{R}^{\hat{K}}\) such that
\[\big{|}P(x)-\Psi(x)\big{|}\leq\ \frac{\varepsilon}{2},\quad\forall x\in \mathcal{Y}.\]
We conclude that for all \(\mu\in\mathcal{P}(\mathcal{K})\),
\[\big{|}V(\mu)-\Psi(\mathbf{M}_{\mu}^{K})\big{|}\] \[\leq\ \big{|}V(\mu)-P(\mathbf{M}_{\mu}^{K})\big{|}+\big{|}P(\mathbf{M}_{\mu}^{K })-\Psi(\mathbf{M}_{\mu}^{K})\big{|}\ \leq\ \varepsilon.\]
## Numerical tests
### Univariate case
All functions are tested with the developed networks, and the results are compared with those obtained using the bin and cylinder methods of [11]. In dimension one, we propose to learn the following functions \(V\) of distributions with support in \([-2,2]\).
1. The moment case \[V(\mu)=\mathbb{E}_{X\sim\mu}[X]\mathbb{E}_{X\sim\mu}[X^{4}]-\mathbb{E}_{X\sim \mu}[X^{2}]\]
2. The pure quantile case \[V(\mu)=Q_{\mu}(q)\] and we take \(q=0.7\).
3. The quantile-moment case \[V(\mu)=\mathbb{E}_{X\sim\mu}[X^{3}](1+Q_{\mu}(q))\] taking \(q=0.9\).
4. The quantile-superquantile case \[V(\mu)=\mathbb{E}_{X\sim\mu}[X/X>Q_{\mu}(q)]+Q_{\mu}(q)\] and we take \(q=0.3\).
All distribution features are estimated with \(N=200000\) samples, and the distributions are sampled using the method of Section 2.1 with \(J_{1}=400\) bins. During training, \(M=20\) distributions (the batch size) are used. All curves plot the MSE obtained during the gradient iterations as follows: every 100 iterations, the MSE is estimated using 1000 distributions, and the results are plotted using a window averaging the estimates over 20 consecutive iterations. The ReLU activation function is used for all networks; similar results are obtained using the \(\tanh\) activation function. The quantile and moment networks (and the networks derived from these features) use 2 hidden layers with 20 neurons. The cylinder network, which uses 2 networks, has 3 layers and 20 neurons for the "inner" network and 2 layers with 20 neurons for the "outer" network (see [11]).
The quantile network seems to be less accurate for functions involving moments (cases A and C); see Figure 1. Taking 200 quantiles seems to be sufficient to obtain good accuracy.
Figure 1: Quantile network convergence depending on the number of quantiles.
In contrast to the quantile network, the moment network is unsurprisingly better when the functional to be approximated is mainly a function of the moments (see Figure 2). In case A, it is optimal to use a small number of moments, since the functional is only a function of moments with degrees less than 5.
Since the best network depends on the case, we can develop new networks based on moments and quantiles:
* A first one uses a concatenation of the features of the two proposed networks. Using the same notation as in the section 3, \[\mathcal{N}_{QM}(\mu)=\Phi_{\theta}(\boldsymbol{M}_{\mu}^{K^{M}},\mathrm{Q}_{ \mu}^{K^{Q}}),\] where now \(K^{M}\) is the number of moments retained in the approximation and \(K^{Q}\) is the number of quantiles. This neural network is the **moment and quantile network**. The results obtained for this network are shown in Figure 3. Overall, it seems that a moment number of \(K^{M}=7\) and a quantile number of \(K^{Q}=200\) is a good choice.
Figure 2: Moment network convergence depending on the number of moments.
* In a second one, instead of concatenating some moment expectations and quantiles, we can take quantiles of the moments by defining: \[\mathbf{L}_{\mu}^{K_{M},K_{Q}}=\big{[}Q_{\prod_{i=1,\ldots,d}X_{i}^{\hat{k}_{i}}/X \sim\mu}\big{(}\frac{\hat{k}}{K_{Q}+1}\big{)}\big{]}_{\sum_{i=1}^{d}\hat{k}_{i }\leq K_{M},1\leq\hat{k}\leq K_{Q}}\] A **quantile of moments network** is an operator on \(\mathcal{D}_{2}(\mathbb{R}^{d})\) in the form \[\mathcal{N}_{Q}(\mu)=\Phi_{\theta}(\mathbf{L}_{\mu}^{K_{M},K_{Q}}),\] thus setting \(R^{K}(\mu)=\mathbf{L}_{\mu}^{K_{M},K_{Q}}\) in equation (2.3). The results for this network are shown in Figure 4. The convergence seems good in all cases, but we observe that it is less regular than with the previous neural network.
Figure 3: Moment and quantile network convergence depending on \(K^{M}\) and \(K^{Q}\).
**Remark 4.1**.: _We have also tested networks based on superquantiles or superquantiles of moments._
* _Defining for one-dimensional distributions_ \[\boldsymbol{V}_{\mu}^{K}=\big{[}E_{X\sim\mu}[X/X\geq Q_{\mu}(\frac{\hat{k}}{K+1})] \big{]}_{0\leq\hat{k}\leq K},\] _a superquantile network is an operator on_ \(\mathcal{D}_{2}(\mathbb{R})\) _in the form_ \[\mathcal{N}_{Q}(\mu)=\Phi_{\theta}(\boldsymbol{V}_{\mu}^{K}),\]
* _and defining for potentially multivariate distributions:_ \[\boldsymbol{S}_{\mu}^{K_{M},K_{Q}}=\big{[}E_{X\sim\mu}[\prod_{i=1,\dots,d}X_{i }^{\bar{k}_{i}}/\prod_{i=1,\dots,d}X_{i}^{\bar{k}_{i}}\geq Q_{\prod_{i=1,\dots, d}X_{i}^{\bar{k}_{i}}/X\sim\mu}(\frac{\hat{k}}{K_{Q}+1})]\big{]}_{\sum_{i=1}^{d} \bar{k}_{i}\leq K_{M},0\leq\hat{k}\leq K_{Q}},\] _a superquantile of moment network is an operator on_ \(\mathcal{D}_{2}(\mathbb{R}^{d})\) _in the form_ \[\mathcal{N}_{Q}(\mu)=\Phi_{\theta}(\boldsymbol{S}_{\mu}^{K_{M},K_{Q}}).\]
Figure 4: Quantile of moments network convergence.
_Both neural networks give good results, but never better than the cylinder network; we do not report them._
We now compare all the networks together in Figure 5, using
* \(K=200\) quantiles for the quantile network,
* \(K=10\) moments for the moment network,
* \(K^{M}=7\) moments and \(K^{Q}=200\) quantiles for the "quantile of moments" and "moment and quantile" networks,
* 200 bins for the bin network.
Overall, the "moment and quantile" network and the "quantile of moments" network seem to be the best choices.
Finally, we compare the different networks, taking 3 layers and 40 neurons for all networks.
Figure 5: Convergence of the different networks in 1D.
The results for the bin network are improved, but the conclusions remain the same.
### Bivariate case
We assume that the support is in \([-2,2]^{2}\). For a distribution \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and \((j,m)\in\mathbb{N}^{*}\times\mathbb{N}^{*}\), we denote by \(\hat{F}_{\mu,j,m}\) the cumulative distribution function of \(X_{1}^{j}X_{2}^{m}\) where \(X\sim\mu\), and \(\hat{Q}_{\mu,j,m}(p)=\inf\{x\in\mathbb{R}:p\leq\hat{F}_{\mu,j,m}(x)\}\). We define the following test cases:
1. The moment case \[V(\mu)=\sum_{i=1}^{2}\left[\mathbb{E}_{X\sim\mu_{i}}[X]\mathbb{E}_{X\sim\mu_{i} }[X^{4}]-\mathbb{E}_{X\sim\mu_{i}}[X^{2}]\right].\]
2. The quantile-superquantile case \[V(\mu)= \sum_{i=1}^{2}[\mathbb{E}_{X\sim\mu_{i}}[X/X>Q_{\mu_{i}}(q)]+Q_{ \mu_{i}}(q)]+\] \[\mathbb{E}_{X\sim\mu}[X_{1}X_{2}/X_{1}X_{2}>\hat{Q}_{\mu,1,1}(q)]\] with \(q=0.7\).
Figure 6: Convergence of the different networks in 1D taking 3 layers and 40 neurons.
3. The quantile moment case \[V(\mu)= (1+Q_{\mu_{1}}(q))\mathbb{E}_{X\sim\mu_{1}}[X^{3}]+\mathbb{E}_{X\sim \mu_{2}}[X^{3}]+\] \[\hat{Q}_{\mu,2,1}(q)+\hat{Q}_{\mu,1,2}(q)\] with \(q=0.9\).
4. The quantile-superquantile marginal case \[V(\mu)= \sum_{i=1}^{2}[\mathbb{E}_{X\sim\mu_{i}}[X/X>Q_{\mu_{i}}(q_{i})]+ Q_{\mu_{i}}(q_{i})]\] with \(q=(0.6,0.3)\).
5. The quantile-cross-superquantile case \[V(\mu)= \mathbb{E}_{X\sim\mu}[X_{2}/X_{2}>Q_{\mu_{1}}(q)]+Q_{\mu_{1}}(q)\] with \(q=0.2\).
6. The quantile marginal case \[V(\mu)= Q_{\mu_{1}}(q)+Q_{\mu_{2}}(q)\] with \(q=0.8\).
We test the bin network, the cylinder network, the moment network, and the quantile of moments network on the different cases. The bin network fails in all the test cases, with 2 or 3 layers and 20, 40 or 80 neurons. As for the other networks, we keep the same number of layers and neurons as in the previous section. For the moment network we use \(K=7\), while for the quantile of moments network we use \(K^{M}=5\) and \(K^{Q}=200\). The distribution features are estimated using \(N=400000\) samples, and we take \((J_{1},J_{2})=(200,200)\) to sample a given distribution.
The tests are shown in Figure 7. In all cases, the moment network gives the best results. For case C, the loss is not as good as in dimension one.
Figure 7: Convergence of the different networks in 2D.
## Conclusion
New networks have been developed to learn functions of distributions: some of them outperform the existing ones. In all cases, the best networks are based on some moments of the distribution. For univariate distributions, it is optimal to add some information, for example based on quantiles, to get an effective scheme on all the test cases. For bivariate distributions, it is sufficient to take the expectation of the moments to get the best scheme. Using this moment scheme, the resolution of the PDE in Wasserstein space becomes possible in the multivariate case.
|
2310.13029 | Blending gradient boosted trees and neural networks for point and
probabilistic forecasting of hierarchical time series | In this paper we tackle the problem of point and probabilistic forecasting by
describing a blending methodology of machine learning models that belong to
gradient boosted trees and neural networks families. These principles were
successfully applied in the recent M5 Competition on both Accuracy and
Uncertainty tracks. The keypoints of our methodology are: a) transform the task
to regression on sales for a single day b) information rich feature engineering
c) create a diverse set of state-of-the-art machine learning models and d)
carefully construct validation sets for model tuning. We argue that the
diversity of the machine learning models along with the careful selection of
validation examples, where the most important ingredients for the effectiveness
of our approach. Although forecasting data had an inherent hierarchy structure
(12 levels), none of our proposed solutions exploited that hierarchical scheme.
Using the proposed methodology, our team was ranked within the gold medal range
in both Accuracy and the Uncertainty track. Inference code along with already
trained models are available at
https://github.com/IoannisNasios/M5_Uncertainty_3rd_place | Ioannis Nasios, Konstantinos Vogklis | 2023-10-19T09:42:02Z | http://arxiv.org/abs/2310.13029v1 | Blending gradient boosted trees and neural networks for point and probabilistic forecasting of hierarchical time series
###### Abstract
In this paper we tackle the problem of point and probabilistic forecasting by describing a blending methodology of machine learning models that belong to gradient boosted trees and neural networks families. These principles were successfully applied in the recent M5 Competition on both Accuracy and Uncertainty tracks. The keypoints of our methodology are: a) transform the task to regression on sales for a single day b) information rich feature engineering c) create a diverse set of state-of-the-art machine learning models and d) carefully construct validation sets for model tuning. We argue that the diversity of the machine learning models along with the careful selection of validation examples, where the most important ingredients for the effectiveness of our approach. Although forecasting data had an inherent hierarchy structure (12 levels), none of our proposed solutions exploited that hierarchical scheme. Using the proposed methodology, our team was ranked within the gold medal range in both Accuracy and the Uncertainty track. Inference code along with already trained models are available at [https://github.com/IoannisNasios/M5_Uncertainty_3rd_place](https://github.com/IoannisNasios/M5_Uncertainty_3rd_place)
keywords: M5 Competition, Point forecast, Probabilistic forecast, Regression models, Gradient Boosted Trees, Neural Networks, Machine learning +
Footnote †: journal: International Journal of Forecasting
## 1 Introduction
Machine Learning (ML) methods have been well established in the academic literature as alternatives to statistical ones for time series forecasting [1; 2; 3; 4; 5]. The results of the recent M-competitions also reveal such a trend from the classical Exponential Smoothing and ARIMA methods [6] towards more data-driven generic ML models such as Neural Networks [7] and Gradient Boosted Trees [8; 9].
One of the most important results of the recent M5 Competition [1] was the superiority of Machine Learning methods, particularly the Gradient Boosted Trees and Neural Networks, against statistical time series methods (e.g., exponential smoothing, ARIMA etc.). This realization is in the opposite direction
of the findings of previous M competitions that classical time series methods were more accurate, and establishes a milestone in the domination of Machine Learning in yet another scientific domain.
The M5 forecasting competition was designed to empirically evaluate the accuracy of new and existing forecasting algorithms in a real-world scenario of hierarchical unit sales series. The dataset provided contains \(42,840\) hierarchical sales series from Walmart. It covers stores in three US states (California, Texas, and Wisconsin) and includes sales data at the item, department, product category, and store levels, ranging from February 2011 to April 2016. The products have a (maximum) selling history of 1941 days. The competition was divided into two tracks, one requiring point forecasts (Accuracy track), and one requiring the estimation of the uncertainty distribution (Uncertainty track). The time horizon for both tracks was 28 days ahead. For the Accuracy track, the task was to predict the sales for each one of the \(42,840\) hierarchical time series following day 1941. For the Uncertainty track, the task was to provide probabilistic forecasts for the corresponding median and four prediction intervals (50%, 67%, 95%, and 99%).
Table 1 presents all hierarchical groupings of the data. Level 12, containing \(30,490\) unique combinations of product per store, is the most disaggregated level. Following the competition rules, only these \(30,490\) series needed to be submitted; the forecasts of all higher levels would be automatically calculated by aggregating (summing) the ones of this lowest level. So, regardless of the way we used the hierarchy information to produce 28-day ahead predictions, all we needed to actually submit was the level 12 series only.
\begin{table}
\begin{tabular}{l l l r} \hline Level & Level Description & Aggr. Level & \#of series \\ \hline
1 & Unit sales of all products, aggregated for all stores/states & Total & 1 \\
2 & Unit sales of all products, aggregated for each State & State & 3 \\
3 & Unit sales of all products, aggregated for each store & Store & 10 \\
4 & Unit sales of all products, aggregated for each category & Category & 3 \\
5 & Unit sales of all products, aggregated for each department & Department & 7 \\
6 & Unit sales of all products, aggregated for each State and category & State/Category & 9 \\
7 & Unit sales of all products, aggregated for each State and department & State/Department & 21 \\
8 & Unit sales of all products, aggregated for each store and category & Store/Category & 30 \\
9 & Unit sales of all products, aggregated for each store and department & Store/Department & 70 \\
10 & Unit sales of product x, aggregated for all stores/states & Product & 3,049 \\
11 & Unit sales of product x, aggregated for each State & Product/State & 9,147 \\
12 & Unit sales of product x, aggregated for each store & Product/Store & 30,490 \\ Total & & & 42,840 \\ \end{tabular}
\end{table}
Table 1: M5 series aggregation levels
## 2 Machine Learning based forecasting
Machine Learning methods are designed to learn patterns from data and they make no assumptions about their nature. Time series forecasting can be easily formulated as a supervised learning task. The goal is to approximate a function \(f(\cdot;\theta):\mathbf{R}^{d}\rightarrow\mathbf{R}\) controlled by a set of parameters \(\theta\), which corresponds to the relation between a vector of input variables (features) and a target quantity. The machine learning setup is completed once we define: a) a (training) dataset \(\mathcal{D}=(x^{(k)},y^{(k)}),\quad k=1...n\) consisting of a set of tuples containing features \(x^{(k)}\in\mathbf{R}^{d}\) that describe a target \(y^{(k)}\in\mathbf{R}\) (number of daily sales in the context of M5 competition) and b) a suitable loss function to be minimized \(\mathcal{L}(Y,f(x;\theta))\). The parameters of \(f(\cdot;\theta)\) are then tuned to minimize the loss function \(\mathcal{L}\) using the tuples of the training dataset \(\mathcal{D}\).
### Feature engineering
One of the key ingredients in any Machine Learning method is the creation of representative and information-rich input features \(x^{(\cdot)}\in\mathbf{R}^{d}\). The task of feature engineering is a largely ad hoc procedure, considered equal parts science and art, and it crucially depends on the experience of the practitioner on similar tasks. In the context of the M5 Competition, we worked with the following feature groups:
1. Categorical id-based features: This is a special type of feature that takes only discrete values. The numerical encoding can take many forms and depends on the methodology chosen.
   1. Categorical variable encoding via mean target value: each category is replaced with the corresponding probability of the target value in the presence of this category [10], an encoding perfectly suited to categorical variables.
   2. Trainable embedding encoding: a common paradigm that gained a lot of popularity since its application to Natural Language Processing [11; 12] is to project the distinct states of a categorical feature to a real-valued, low-dimensional latent space. This is usually implemented as a Neural Network layer (called an Embedding Layer) and is used to compactly encode all the discrete states of a categorical feature. Contrary to one-hot encodings, which are binary, sparse, and very high-dimensional, trainable embeddings are low-dimensional floating-point vectors (see Fig. 1).
2. Price related: Sell prices, provided on a week level for each combination of store and product. Prices are constant at weekly basis, although they may change through time. Using this information we calculate statistical features such as maximum, minimum, mean, standard deviation and also number of unique historical prices for each combination of store and product (level 12).
3. Calendar related
* Special events and holidays (e.g. Super Bowl, Valentine's Day, and Orthodox Easter), organized into four classes, namely Sporting, Cultural, National, and Religious.
* Supplement Nutrition Assistance Program (SNAP) activities that serve as promotions. This is a binary variable (0 or 1) indicating whether the stores of CA, TX or WI allow SNAP purchases on the examined date.
4. Lag-related features:
   * Lag only: these features are based on the historical sales of each store/product combination (level 12) \(28+k\) days before a given date \(t\), with \(k\) ranging from 1 to 14, thus spanning two weeks.
   * Rolling only: rolling mean and standard deviation of historical sales for each store/product combination (level 12), ending 28 days before a given date \(t\).
   * Lag and rolling: rolling mean and standard deviation up to a lag date in the past.
In total we devised around 80 input features, each one used to predict the unit sales of a specific product/store (Level 12) for one specific date.
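As an illustration of the lag and rolling feature groups, the sketch below builds them with pandas on a toy long-format frame; the frame layout, column names, and the rolling windows shown are our assumptions, not the exact ones used in the competition.

```python
import pandas as pd
import numpy as np

# Toy long-format frame: one row per (id, date) with the daily 'sales' count.
df = pd.DataFrame({
    "id": np.repeat(["item_A", "item_B"], 100),
    "date": np.tile(pd.date_range("2016-01-01", periods=100), 2),
    "sales": np.random.default_rng(1).poisson(1.0, 200),
})
df = df.sort_values(["id", "date"]).reset_index(drop=True)
g = df.groupby("id")["sales"]

# Lag-only features: sales 28 + k days in the past, k = 1..14.
for k in range(1, 15):
    df[f"lag_{28 + k}"] = g.shift(28 + k)

# Rolling-only features: mean / std of windows ending 28 days before t.
for w in (7, 28):
    df[f"rmean_{w}"] = g.transform(lambda s: s.shift(28).rolling(w).mean())
    df[f"rstd_{w}"] = g.transform(lambda s: s.shift(28).rolling(w).std())
```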
### Cross Validation
The inherent ordering of time series forecasting (i.e., the time component) forces ML practitioners to define special cross validation schemes and avoid k-fold validation [14], whose random splits mix information back and forth in time. We chose the following training/validation splits.
Figure 1: Trainable embeddings vs one hot encoding. Image taken from [13]

* Validation split 1:
* Training days \([d_{1},d_{2},\ldots,d_{1913}]\)
* Validation days \([d_{1914},d_{1915},\ldots,d_{1941}]\) (last 28 days)
* Validation split 2:
* Training days \([d_{1},d_{2},\ldots,d_{1885}]\)
* Validation days \([d_{1886},d_{1887},\ldots,d_{1913}]\) (28 days before last 28 days)
* Validation split 3:
* Training days \([d_{1},d_{2},\ldots,d_{1577}]\)
* Validation days \([d_{1578},d_{1579},\ldots,d_{1605}]\) (exactly one year before)
Modeling and blending were based on improving the mean of the competition metric on these splits. For each split we followed a two-step modeling procedure:
**Tuning**: Use all three validation sets and the competition metric to select the best architecture and parameters of the models.
**Full train**: Use the fine-tuned parameters of the previous step to perform a full training run until day \(d_{1941}\).
The biggest added value of using multiple validation sets was the elimination of any need for external adjustments on the final prediction (see Finding 5 in [1]).
## 3 Point Forecast Methodology
Point forecasting is based on ML models that predict the number of daily sales for a specific date and a specific product/store combination. In order to forecast the complete 28-day horizon we apply a recursive multi-step scheme [15]. This involves using the prediction of the model at a specific time step (day) as an input in order to predict the subsequent time step. This process is repeated until the desired number of steps has been forecasted. We found this methodology to be superior to a direct multi-step scheme where the models would predict all 28 days in the future at once.
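A schematic version of the recursive multi-step loop is given below; `model` is any fitted regressor exposing a `predict` method, and `make_features` is a stand-in for the feature engineering described earlier.

```python
import numpy as np

def recursive_forecast(model, history, make_features, horizon=28):
    """Recursive multi-step scheme: each day-ahead prediction is appended
    to the history so that lag/rolling features for the next day can be built."""
    history = list(history)
    preds = []
    for _ in range(horizon):
        x = make_features(history)                 # feature vector for the next day
        y_hat = float(model.predict(np.asarray([x]))[0])
        preds.append(y_hat)
        history.append(y_hat)                      # feed the prediction back in
    return preds
```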
The performance measure selected for this forecasting task was a variant of Mean Absolute Scaled Error (MASE) [16] called Root Mean Squared Scaled Error (RMSSE). The measure for the 28 day horizon is defined in Eq. 1
\[RMSSE=\sqrt{\frac{\frac{1}{h}\sum\limits_{t=n+1}^{n+h}\left(y_{t}-\hat{y}_{t} \right)^{2}}{\frac{1}{n-1}\sum\limits_{t=2}^{n}\left(y_{t}-y_{t-1}\right)^{2}}} \tag{1}\]
where \(y\) is the actual future value of the examined time series (for a specific aggregation level \(l\)) at point \(t\) and \(\hat{y}\) the predicted value, \(n\) the number of historical observations, and \(h\) the 28 day forecasting horizon. The choice of
this metric is justified by the intermittency of the forecasting data, which involve sporadic unit sales with many zeros.
The overall accuracy of each forecasting method at each aggregation level is computed by averaging the RMSSE scores across all the series of the dataset using appropriate price related weights. The measure, called weighted RMSSE (WRMSSE) by the organizers, is defined in Eq. 2.
\[WRMSSE=\sum_{i=1}^{42,840}w_{i}\cdot RMSSE_{i} \tag{2}\]
where \(w_{i}\) is a weight assigned to the \(i\)-th series. This weight is computed based on the last 28 observations of the training sample of the dataset, i.e., the cumulative actual dollar sales that each series displayed in that particular period (sum of units sold multiplied by their respective price) [1]. The weights \(w_{i}\) are computed once and kept constant throughout the analysis.
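Eqs. 1 and 2 translate directly into code; the sketch below is a plain NumPy rendering, with function names of our choosing.

```python
import numpy as np

def rmsse(y_train, y_true, y_pred):
    """Eq. 1: RMSE of the forecast, scaled by the in-sample one-step naive error."""
    h = len(y_true)
    num = np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2) / h
    den = np.sum(np.diff(np.asarray(y_train)) ** 2) / (len(y_train) - 1)
    return np.sqrt(num / den)

def wrmsse(series, weights):
    """Eq. 2: weighted sum of RMSSE over all aggregated series.
    `series` is an iterable of (y_train, y_true, y_pred) triples."""
    return sum(w * rmsse(*s) for w, s in zip(weights, series))
```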
### Models
Here we describe briefly the specific instances of GBM and neural network models used.
#### 3.1.1 LightGBM models
LightGBM [17] is an open source Gradient Boosting Decision Tree (GBDT) [9; 18] implementation by Microsoft. It uses a histogram-based algorithm to speed up the training process and reduce memory. LightGBM models have proven to be very efficient in terms of speed and quality in many practical regression problems. For the Accuracy track, we trained two variations of LightGBM models:
1. lgb_cos: Single lightGBM regression model for all available data (see Table 2)
2. lgb_nas: A different lightGBM regression model for each store (10 models in total) (see Table 3)
The target output of each LightGBM model was the sales count of a specific product. Since the target quantity (sales count) is intermittent and has a lot of zeros (\(\approx 68\%\) of daily sales are zeros), using the MSE loss function may lead to suboptimal solutions. For this reason, we implemented a special loss function that follows the Tweedie-Gaussian distribution [19]. Tweedie regression [20] is designed to deal with right-skewed data where most of the target values are concentrated around zero. In Figure 2 we show the histogram of sales for all available training data; it is clearly right-skewed, with a heavy concentration around zero.
The formula of the Tweedie loss function, given a predefined parameter \(p\), is shown in Eq. 3. In our implementation we used the default value \(p=1.5\), which is a good balance between the two terms:
\[\mathcal{L}(y,\hat{y})=\sum_{k}\left(-y^{(k)}\cdot\frac{\left(\hat{y}^{(k)}\right)^ {1-p}}{1-p}+\frac{\left(\hat{y}^{(k)}\right)^{2-p}}{2-p}\right),\ \ \hat{y}^{(k)}=f(x^{(k)};\theta) \tag{3}\]
Table 2: LightGBM parameter settings (boosting\_type = gbdt)
where \(f(\cdot;\theta):\mathbf{R}^{d}\rightarrow\mathbf{R}\) is a regression model controlled by a set of parameters \(\theta\) that maps a set of multidimensional features \(x^{(k)}\in\mathbf{R}^{d}\) to a target value \(y^{(k)}\) (daily sales count). \(\hat{y}^{(k)}\) is the output of the regression model for input \(x^{(k)}\).
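A plain NumPy rendering of Eq. 3 is sketched below; the clipping constant is our own safeguard. Note that LightGBM also exposes this objective natively (objective="tweedie" with tweedie_variance_power=1.5), which is an equivalent route to the same loss.

```python
import numpy as np

def tweedie_loss(y, y_hat, p=1.5):
    """Tweedie deviance of Eq. 3, up to terms that do not depend on y_hat."""
    y_hat = np.maximum(np.asarray(y_hat, dtype=float), 1e-10)  # keep predictions positive
    return np.sum(-y * y_hat ** (1 - p) / (1 - p)
                  + y_hat ** (2 - p) / (2 - p))

# LightGBM exposes the same objective natively, e.g.:
# lgb.LGBMRegressor(objective="tweedie", tweedie_variance_power=1.5)
```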
#### 3.1.2 Neural Network models
We implemented the following two classes of Neural Network models:
* Keras MLP models. Keras [13] is a model-level library, providing high-level building blocks for developing neural network models. It has been adopted by Google to become the standard interface for Tensorflow [21], its flagship Machine Learning library. Using the highly intuitive description of Keras building modules, one can define complex Neural Network architectures and experiment with training and inference. In total we trained 15 slightly different Keras models and averaged their predictions. These models were grouped into 3 slightly different architecture groups, and within each group we kept the final 5 weights during training. This strategy helped reduce variance among predictions and stabilized the result, both in every validation split and in the final training. The loss function chosen was the mean squared error between the actual and predicted sale counts. All model groups share the same architecture, depicted in Figure 3, and are presented in detail at [https://github.com/IoannisNasios/M5_Uncertainty_3rd_place](https://github.com/IoannisNasios/M5_Uncertainty_3rd_place).
* FastAI MLP models. FastAI [22] is a Pytorch [23] based deep learning library which provides high-level components for many machine learning tasks. To tackle the regression task at hand, we incorporated the _tabular_ module that implements state-of-the-art deep learning models for tabular data. One important aspect of the FastAI tabular module is again the use of embedding layers for categorical data. Similarly to the Keras case, using the embedding layer introduced good interactions for the categorical variables and leveraged deep learning's inherent mechanism of automatic feature extraction. Our modelling approach includes implementing a special Tweedie loss function to be used during training. Roughly speaking, the FastAI tabular model is the Pytorch equivalent of the Keras/Tensorflow keras_nas model, with the difference of using a specially implemented objective function. Empirical results in different regression contexts, mostly from Kaggle competitions, support this decision of using a similar modelling methodology on completely different Deep Learning frameworks.
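To make the embedding-plus-MLP pattern of Figure 3 concrete, a minimal Keras sketch is shown below. It is illustrative only: the layer sizes, dropout rate, and output activation are our assumptions, not the exact architecture used in the competition.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_mlp(n_items, n_stores, n_numeric, emb_dim=16):
    """Illustrative Keras MLP: embedding layers for the categorical ids,
    concatenated with the numeric (price / calendar / lag) features."""
    item_in = layers.Input(shape=(1,), name="item_id")
    store_in = layers.Input(shape=(1,), name="store_id")
    num_in = layers.Input(shape=(n_numeric,), name="numeric")
    item_emb = layers.Flatten()(layers.Embedding(n_items, emb_dim)(item_in))
    store_emb = layers.Flatten()(layers.Embedding(n_stores, emb_dim)(store_in))
    x = layers.Concatenate()([item_emb, store_emb, num_in])
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(1, activation="softplus")(x)   # non-negative sales
    model = tf.keras.Model([item_in, store_in, num_in], out)
    model.compile(optimizer="adam", loss="mse")
    return model
```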
### Ensembling
The final part of our modelling approach was to carefully blend the predictions of the diverge set of models. We divide our models in 4 groups:
* lgb_cos: 1 LightGBM model, using all available data
* lgb_nas: 10 LightGBM models (1 model per store), using all available data for every store.
* keras_nas: 3 Keras models using only last \(17\times 28\) days of data and simple averaging.
* fastai_cos: 1 FastAI model, using all available data
All these 4 model groups were finetuned to perform best on the average of the three validation splits described in Section 2.2 above. After fine-tuning, and using the best parameters, all 4 model groups were retrained using the information available until \(d_{1941}\) and then used to produce forecasts for days \(d_{1942}\) until \(d_{1969}\).
Figure 3: Basic Keras model architecture

The individual predictions were blended using the geometric averaging shown in Eq. 4.
\[\text{blend}=\begin{cases}\left(\text{lgb\_nas}^{3.5}\cdot\text{lgb\_cos}^{1} \cdot\text{keras\_nas}^{1}\cdot\text{fastai\_cos}^{0.5}\right)^{\frac{1}{6}}& \text{for days 1-27}\\ \\ \left(\text{lgb\_nas}^{3}\cdot\text{lgb\_cos}^{0.5}\cdot\text{fastai\_cos}^{1.5 }\right)^{\frac{1}{6}}&\text{for day 28}\end{cases} \tag{4}\]
By reviewing several predictions of the keras_nas group of models, we noticed an unexpectedly large peak on the last day of the private set (day \(d_{1969}\)). After confirming this behavior to be dominant in almost all cases, we decided to exclude that group's prediction for the last day. This is the reasoning for not using keras_nas predictions in the lower branch of Eq. 4. In Tables 5 and 6 we present the weights for each model component as well as the validation WRMSSE score for each validation split.
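The blending of Eq. 4 amounts to a weighted geometric mean, conveniently computed in log space; in the sketch below, `preds` is a hypothetical dictionary of per-model forecasts and `root` reproduces the \(1/6\) exponent of the upper branch.

```python
import numpy as np

def geometric_blend(preds, weights, root):
    """Eq. 4: (prod_m preds_m ** w_m) ** (1 / root), computed in log space."""
    log_sum = sum(w * np.log(np.maximum(preds[name], 1e-9))
                  for name, w in weights.items())
    return np.exp(log_sum / root)

rng = np.random.default_rng(2)
preds = {m: rng.uniform(0.5, 3.0, 27)            # hypothetical per-model forecasts
         for m in ["lgb_nas", "lgb_cos", "keras_nas", "fastai_cos"]}
blend_d1_27 = geometric_blend(
    preds,
    {"lgb_nas": 3.5, "lgb_cos": 1.0, "keras_nas": 1.0, "fastai_cos": 0.5},
    root=6,
)
```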
### Post processing
We tried several post-processing smoothing techniques to improve forecasting accuracy. In the final solution we employed a simple exponential smoothing (\(\alpha=0.96\)) per product and store id (Level 12, \(30,490\) series) that led to substantial improvements on both validation sets and in the final evaluation (private leaderboard).
We note that, although this post-processing should have been performed for both competition tracks, we were only able to use it for the Uncertainty track, as it was a last-minute finding (2 hours before the competition's closing); as it turned out, had we also applied it to the Accuracy track submission, we would have ended 3 positions higher on the Accuracy track leaderboard.
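One plausible reading of the smoothing step, applied independently to each Level 12 series, is sketched below; the exact recursion used in the competition is not spelled out in the text, so this is an assumption.

```python
import numpy as np

def exp_smooth(forecast, alpha=0.96):
    """Simple exponential smoothing of one 28-day forecast (one series)."""
    forecast = np.asarray(forecast, dtype=float)
    out = np.empty_like(forecast)
    out[0] = forecast[0]
    for t in range(1, len(forecast)):
        out[t] = alpha * forecast[t] + (1 - alpha) * out[t - 1]
    return out

smoothed = exp_smooth([1.2, 0.0, 3.1, 2.4], alpha=0.96)
```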
## 4 Probabilistic Forecast Methodology
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
 & lgb\_nas & lgb\_cos & keras\_nas & fastai\_cos & Ensemble \\ \hline
Val. split 1 & 0.474 & 0.470 & 0.715 & 0.687 & 0.531 \\ \hline
Val. split 2 & 0.641 & 0.671 & 0.577 & 0.631 & 0.519 \\ \hline
Val. split 3 & 0.652 & 0.661 & 0.746 & 0.681 & 0.598 \\ \hline
Mean / std & 0.589 / 0.08 & 0.602 / 0.09 & 0.679 / 0.05 & 0.667 / 0.02 & 0.549 / 0.03 \\ \hline
\end{tabular}
\end{table}
Table 6: Competition scores for each component

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
 & lgb\_nas & lgb\_cos & keras\_nas & fastai\_cos \\ \hline
weights \(d_{1}-d_{27}\) & 3.5 & 1.0 & 1.0 & 0.5 \\ \hline
weight \(d_{28}\) & 3.0 & 0.5 & 0.0 & 1.5 \\ \hline
\end{tabular}
\end{table}
Table 5: Final weights selected for blending

The performance measure selected for this competition track was the Scaled Pinball Loss (SPL) function. The measure is calculated over the 28-day horizon for each series and quantile, as shown in Eq. 5:
\[SPL=\frac{1}{h}\frac{\sum_{t=n+1}^{n+h}\left[\left(Y_{t}-Q_{t}(u)\right)u\mathbf{1}\{Q_{ t}(u)\leq Y_{t}\}+\left(Q_{t}(u)-Y_{t}\right)(1-u)\mathbf{1}\{Q_{t}(u)>Y_{t}\}\right]}{ \frac{1}{n-1}\sum_{t=2}^{n}|Y_{t}-Y_{t-1}|} \tag{5}\]
where \(Y_{t}\) is the actual future value of the examined time series at point \(t\), \(Q_{t}(u)\) the generated forecast for quantile \(u\), \(h\) the forecasting horizon, \(n\) is the number of historical observations, and \(\mathbf{1}\) the indicator function (being 1 if Y is within the postulated interval and 0 otherwise). The values \(u\) were set to \(u_{1}=0.005\), \(u_{2}=0.025\), \(u_{3}=0.165\), \(u_{4}=0.25\), \(u_{5}=0.5\), \(u_{6}=0.75\), \(u_{7}=0.835\), \(u_{8}=0.975\), and \(u_{9}=0.995\), so that they correspond to the requested median, 50%, 67%, 95%, and 99% prediction intervals.
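Eq. 5 can be implemented in a few lines; the sketch below computes the SPL for one series and one quantile level, with function names of our choosing.

```python
import numpy as np

def spl(y_train, y_true, q_pred, u):
    """Eq. 5: scaled pinball loss of a quantile forecast q_pred at level u."""
    y_true, q_pred = np.asarray(y_true), np.asarray(q_pred)
    pinball = np.where(q_pred <= y_true,
                       (y_true - q_pred) * u,
                       (q_pred - y_true) * (1 - u))
    scale = np.mean(np.abs(np.diff(np.asarray(y_train))))
    return pinball.mean() / scale
```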
After estimating the SPL for all time series and all the requested quantiles of the competition, the Uncertainty track competition entries were ranked using the Weighted SPL (WSPL) shown in Eq. 6:
\[WSPL=\sum_{i=1}^{42,840}w_{i}*\frac{1}{9}\sum_{j=1}^{9}SPL(u_{j}) \tag{6}\]
where the weights \(w_{i}\) are the same as those described in Section 3.
### Quantile estimation via optimization
Using our best point forecast as the median, we proceeded to optimize the WSPL objective function on validation split 1 (last 28 days) and calculated the factors in Table 7. Due to time restrictions of the competition we could not extend our analysis to cover all three validation splits. These factors were used to multiply the median solution (quantile \(u=0.50\)) and produce the remaining upper and lower quantiles. We assumed symmetric distributions on levels 1-9 and skewed distributions on levels 10-12. Furthermore, due to the right-skewness of our sales data (zero-bounded on the left) at every level, the proposed factor for the last quantile (99.5%) was multiplied by either 1.02 or 1.03. These factors were determined so as to minimize the WSPL on validation split 1.
\begin{table}
\begin{tabular}{l l r r r r r r r r r r} \hline
**Level** & **Aggr.** & **\#** & **0.005** & **0.025** & **0.165** & **0.25** & **0.5** & **0.75** & **0.835** & **0.975** & **0.995** \\ \hline
1 & Total & 1 & 0.890 & 0.922 & 0.963 & 0.973 & 1.000 & 1.027 & 1.037 & 1.078 & 1.143 \\
2 & State & 3 & 0.869 & 0.907 & 0.956 & 0.969 & 1.000 & 1.031 & 1.043 & 1.093 & 1.166 \\
3 & Store & 10 & 0.848 & 0.893 & 0.950 & 0.964 & 1.000 & 1.036 & 1.049 & 1.107 & 1.186 \\
4 & Category & 3 & 0.869 & 0.907 & 0.951 & 0.969 & 1.000 & 1.031 & 1.043 & 1.093 & 1.166 \\
5 & Dept & 7 & 0.827 & 0.878 & 0.943 & 0.960 & 1.000 & 1.040 & 1.057 & 1.123 & 1.209 \\
6 & State/Cat. & 9 & 0.827 & 0.878 & 0.943 & 0.960 & 1.000 & 1.040 & 1.057 & 1.123 & 1.209 \\
7 & State/Dept. & 21 & 0.787 & 0.850 & 0.930 & 0.951 & 1.000 & 1.048 & 1.070 & 1.150 & 1.251 \\
8 & Store/Cat. & 30 & 0.767 & 0.835 & 0.924 & 0.947 & 1.000 & 1.053 & 1.076 & 1.166 & 1.272 \\
9 & Store/Dept. & 70 & 0.707 & 0.793 & 0.905 & 0.934 & 1.000 & 1.066 & 1.095 & 1.208 & 1.335 \\
10 & Product & 3,049 & 0.249 & 0.416 & 0.707 & 0.705 & 1.000 & 1.218 & 1.323 & 1.720 & 2.041 \\
11 & Product/State & 9,147 & 0.111 & 0.254 & 0.590 & 0.708 & 1.000 & 1.336 & 1.504 & 2.158 & 2.662 \\
12 & Product/Store & 30,490 & 0.005 & 0.055 & 0.295 & 0.446 & 1.000 & 1.884 & 2.328 & 3.548 & 4.066 \\ \hline \end{tabular}
\end{table}
Table 7: Quantile factors for all levels
### Quantile correction via statistical information
Level 12 (the only non-aggregated one) was the most difficult for accurate estimations, and the previously calculated factors could be further improved. Statistical information on past days played a major role for this level. For every product/store id, eight statistical sales quantiles (four intervals excluding the median) were calculated over the last \(13\times 28\) days (1 year) and over the last 28 days. These quantiles were first averaged and then used to correct the quantile estimation of Section 4.1 for the corresponding level. For the same reason we calculated weekly sales quantiles over the last \(13\times 28\) days and \(3\times 28\) days. The final formula for estimating level 12 quantiles, which led to the minimum WSPL score for validation split 1, is given below:
level 12 \[= 0.2\cdot\text{quantile estimation}\] \[+ 0.7\cdot\left(\frac{\text{statistical correction}_{13\times 28\text{ days}}+1.75\cdot\text{statistical correction}_{28\text{ days}}}{2.75}\right)\] \[+ 0.1\cdot\left(\frac{\text{weekly statistical correction}_{13\times 28 \text{ days}}+\text{weekly statistical correction}_{3\times 28\text{ days}}}{2}\right)\]
Level 11 was also corrected in a similar manner and the final formula is given below:
level 11 \[= 0.91\cdot\text{quantile estimation}\] \[+ 0.09\cdot\left(\frac{\text{statistical correction}_{13\times 28\text{ days}}+1.75\cdot\text{statistical correction}_{28\text{ days}}}{2.75}\right)\]
No corrections were applied for levels other than 12 and 11, so the respective quantile factors for levels 1-10 remain as shown in Table 7.
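The two correction formulas reduce to fixed-weight averages of quantile vectors; the sketch below transcribes the level 12 case with hypothetical stand-in arrays for the optimised and statistical quantiles.

```python
import numpy as np

# Hypothetical stand-ins: one value per requested quantile for a single
# product/store series (the real arrays are per-series at level 12).
rng = np.random.default_rng(3)
q_opt = rng.uniform(0.5, 2.0, size=9)            # optimised-factor quantiles (Sec. 4.1)
s_13x28, s_28 = rng.uniform(0.5, 2.0, (2, 9))    # statistical quantiles, 13x28 / 28 days
w_13x28, w_3x28 = rng.uniform(0.5, 2.0, (2, 9))  # weekly quantiles, 13x28 / 3x28 days

level12 = (0.2 * q_opt
           + 0.7 * (s_13x28 + 1.75 * s_28) / 2.75
           + 0.1 * (w_13x28 + w_3x28) / 2)
# Level 11 is analogous: 0.91 * q_opt + 0.09 * (s_13x28 + 1.75 * s_28) / 2.75
```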
Figure 4: Leaderboard ranking vs. score

The evolution of our submission attempts (SPL score / ranking) for the Uncertainty track of the competition is shown in Fig. 4. This scatter plot highlights:

* the importance of ensembling diverse models, as our rank increased by almost 90 places through ensembling alone,
* the contribution of the statistical correction of levels 11 and 12, which led us to a winning placement.
The overall procedure of ensembling, optimizing, and correcting for the probabilistic forecasting is depicted in Fig. 5.
## 5 Discussion
We have presented a Machine Learning solution for point and probabilistic forecasting of hierarchical time series that represent daily unit sales of retail products. The methodology involves two state-of-the-art Machine Learning approaches, namely Gradient Boosted Trees and Neural Networks, tuned and combined using carefully selected training and validation sets. The proposed methodology was applied successfully to the recent M5 Competition, yielding a prize-winning placement in the Uncertainty track.
### Point Forecasting score breakdown
Figure 5: Flowchart of probabilistic forecasting pipeline

In order to get deeper insight into the point forecasting task, we broke down the WRMSSE calculation of Eq. 2 for each hierarchical level. The resulting per-level WRMSSE for validation split 1 is shown in Fig. 6. It is obvious that the performance varies significantly among different aggregation levels, with levels 10, 11 and 12 being the hardest to predict. An interesting observation is that, although Level 1 is simply the aggregation of all Level 12 predictions, the Level 1 loss is less than half the magnitude of the Level 12 loss. It would seem that aggregation _cancels out_ the poor predictive capability at Level 12. This is stressed even more by the fact that, even though a mixture of traditional forecasting techniques (ARIMA, exponential smoothing) achieved lower errors on levels 10, 11 and 12 than the ones shown in Figure 6, the overall score was really poor.
The mean validation WRMSSE score using the final blend of Eq. 4 is 0.549, while our final submission score was 0.552, only 0.003 units apart. This close tracking of the unseen test data error by the validation error is generally sought after by ML practitioners in real-world problems, and it serves as an extra indication of the soundness of our methodology.
### Probabilistic Forecasting Factors visualized
In Fig. 7 we present a graphical plot of the factors sorted by increasing level. We notice an asymmetric widening of the calculated factors as the number of series of the corresponding level increases. This delta-like shape is correlated with the increasing WRMSSE error on levels 10-12 shown in Fig. 6.
### Takeaways
It is worth mentioning that our point and probabilistic forecasting results are highly correlated, since we use the point forecast as the starting point for our probabilistic analysis. It is expected that starting from better point forecasts will result in better probabilistic forecasts as well.
Figure 6: Loss per Level for predictions of validation split 1

Model diversity and the selected training/validation splits were crucial for the overall performance, and these choices eliminated any need for external adjustments on the final prediction in both tracks. This is also highlighted in the M5 Competition findings [1]: had we used the _magic_ multiplier 0.97, we would have ended up first in the Accuracy track.
Finally, although the M5 Competition dataset was hierarchical in nature, we have not used this information explicitly via a reconciliation procedure as described in [24; 6]. Hierarchy was only implicitly considered via the nature of WRMSSE objective function.
### Extensions
The competition data were carefully curated and clean, providing a rich variety of potential features. There was also a multitude of ML regression methods that could be tested. However, we feel that the most crucial decision was not the selection of features or ML models but the selection of representative validation sets. Although the three validation splits described in Section 2.2 were enough to stabilize our point forecasting results, we believe that using more validation splits could be beneficial and would even eliminate the need for _magic_ external multipliers (0.97) to reach the top place.
One obvious extension to the probabilistic forecast methodology would be to use a weighted average over the three validation splits instead of validation split 1 alone (last 28 days).
|
2303.16041 | Referenceless characterisation of complex media using physics-informed
neural networks | In this work, we present a method to characterise the transmission matrices
of complex scattering media using a physics-informed, multi-plane neural
network (MPNN) without the requirement of a known optical reference field. We
use this method to accurately measure the transmission matrix of a commercial
multi-mode fiber without the problems of output-phase ambiguity and dark spots,
leading to upto 58% improvement in focusing efficiency compared with
phase-stepping holography. We demonstrate how our method is significantly more
noise-robust than phase-stepping holography and show how it can be generalised
to characterise a cascade of transmission matrices, allowing one to control the
propagation of light between independent scattering media. This work presents
an essential tool for accurate light control through complex media, with
applications ranging from classical optical networks, biomedical imaging, to
quantum information processing. | Suraj Goel, Claudio Conti, Saroch Leedumrongwatthanakun, Mehul Malik | 2023-03-28T15:16:23Z | http://arxiv.org/abs/2303.16041v3 | # Referenceless characterisation of complex media using physics-informed neural networks
###### Abstract
In this work, we present a method to characterise the transmission matrices of complex scattering media using a physics-informed, multi-plane neural network (MPNN) without the requirement of a known optical reference field. We use this method to accurately measure the transmission matrix of a commercial multi-mode fiber without the problems of output-phase ambiguity and dark spots, leading to up to 58% improvement in focusing efficiency compared with phase-stepping holography. We demonstrate how our method is significantly more noise-robust than phase-stepping holography and show how it can be generalised to characterise a cascade of transmission matrices, allowing one to control the propagation of light between independent scattering media. This work presents an essential tool for accurate light control through complex media, with applications ranging from classical optical networks, biomedical imaging, to quantum information processing.
## I Introduction
A scattering matrix provides complete knowledge of the linear optical response of a material. This not only gives us a better understanding of light-matter interactions, but also allows us to harness their use in photonic technology [1; 2; 3; 4]. As the complexity of the optical system of interest grows to large-scale complex media, such as programmable circuits and dynamic biological tissues, efficient and accurate measurement of a scattering matrix has become a necessary tool in many tasks and applications. For example, it can be utilised in the control of light propagation through an opaque scattering medium [5; 6], serve as an alternative way to construct optical devices [4], and has many applications spanning from imaging through scattering tissue [7; 8], mode-multiplexing through a multimode fiber [9], optical neural networks [10; 11; 12; 13] to the transport and manipulation of quantum states of light [14; 15; 16; 17; 18].
Over the last few decades, techniques for measuring a scattering matrix, both its transmission (TM) and reflection (RM) components, have been advanced and tailored to particular scenarios with different conditions and prior assumptions [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Conventionally, the measurement of a TM is performed under the assumption that a medium preserves the coherence properties of the input light, i.e., the scattering process (channel) is pure. In this case, the measurement is usually performed by sending different pure input states in a given basis sequentially to probe a medium/system of interest and detecting the corresponding output fields by means of off-axis holography using an external reference [32; 33; 34; 35]. The technique requires an interferometer that is insensitive to environmental disturbances, particularly for a long optical path length.
To mitigate the problem of stability, alternative methods such as phase-stepping holography with a co-propagating internal reference field have been developed [6]. Nevertheless, the use of a common-path reference field for an accurate TM measurement poses additional challenges since the internal reference needs to interfere with every optical mode within the complex medium with sufficient interference visibility for an optimal TM measurement. However, the internal reference field usually turns into a speckle pattern due to scattering, with a large variance in amplitude and consequently interference visibility, leading to a drawback known as the "dark-spot" problem [36; 37; 34]. An alternative way to characterise complex media without using a reference field has been achieved through optimisation algorithms, which search for complex vectors using intensity-only measurements under the assumption of a pure process matrix. Various phase retrieval techniques has been developed, such as ones using Bayesian approaches [38], generalized approximate message passing [39], alternating projection methods such as the Gerchberg-Saxton algorithm[40; 41], and convex relaxations [42; 43; 44].
A caveat of all these techniques that do not involve an external reference is that they do not completely characterise the transmission matrix. This is because the intensity-only measurements used in these techniques cause a phase ambiguity at the output, leaving the relative phases between output modes undefined [45]. While a TM obtained in this manner is sufficient for the majority of imaging applications through complex media [36; 46], complete coherent information about the TM is necessary for many applications such as programmable photonic circuits [47] and quantum information processing, where complex media are used to transport [14; 15] and manipulate quantum states of light [16; 17; 18]. An extension to phase-stepping holography allows one to characterise this output-phase ambiguity by interfering different optical modes after the scattering process [48]. Alternatively, phase diversity techniques can be applied to characterise this phase ambiguity by effectively measuring the output field intensity at different propagation distances in free space [29; 49; 50]. However, these methods are still subject to the dark-spot problem mentioned above or require full reconstruction of the optical field.
In recent years, artificial neural networks such as perceptrons and convolutional neural networks have been applied to tasks such as image reconstruction, classification, and transfer through a scattering medium [51; 52; 53; 54; 55], demonstrating their potential for learning the transfer function of a scattering medium.
While light scattering through complex media is a linear process, its measurement in intensity is non-linear, which makes it a suitable system to model within the framework of artificial neural networks. Incorporating the physical model that describes this scattering process into a neural network architecture is thus a clear contender for solving such optimisation problems [56]. Unlike general machine-learning models, physics-informed neural networks are based on a model of the physical system, and thus do not require treating the algorithm like a black box; instead, the network acts as a simplification tool. Recent advances in optics-based machine learning have not only led us towards enhanced control through linear complex media [57; 58; 46], but have also enabled useful applications in non-linear microscopy [59] and in programming quantum gates [60].
In this work, we demonstrate the complete characterisation of a coherent transmission matrix, without the use of a reference field and the accompanying problem of dark spots, by employing physics-informed neural networks, referred to as a multi-plane neural network (MPNN). We do so by performing a set of random measurements before and after the complex medium, probing only the intensity at the output, and subsequently feeding the data into a neural network designed to mimic the experimental apparatus. We study the accuracy of the method experimentally by comparing the focusing efficiency achieved through the medium using transmission matrices obtained with an MPNN to ones recovered with conventional phase-stepping (PS) holography. Furthermore, we investigate the number of measurements required to characterise the TM and show that while phase-stepping requires fewer measurements, the TM recovered with an MPNN is much more accurate. We also show that our technique is significantly more noise-robust than phase-stepping holography, allowing the recovery of a high-fidelity TM in the presence of large amounts of noise. Finally, using a numerical simulation, we demonstrate the general usage of MPNNs for the TM measurement of a cascade of random scattering media.
## II Multi-plane neural network
We start by describing the model of a multi-layer optical network, where each complex medium is placed between reconfigurable phase-shifter planes, as illustrated in Fig. 1. The \(k\)-th layer of the optical system is composed of a reconfigurable phase-shifter plane represented by a vector \(x_{k}\) and a complex medium with complex transmission matrix \(T_{k}\). The intensity \(y\) observed at the output detectors of an \(n\)-layered network for a uniform excitation at an input of the network is given by
\[\begin{split} y&=|\mathbf{F}\mathbf{P}_{n+1}\mathbf{T}_{n}\mathbf{P}_{n}\ldots\mathbf{T}_{k}\mathbf{P}_{k}\ldots\mathbf{T}_{2}\mathbf{P}_{2}\mathbf{T}_{1}x_{1}|^{2},\\ &=|\mathbf{F}(x_{n+1}\odot(\mathbf{T}_{n}(x_{n}\odot\ldots\mathbf{T}_{2}(x_{2}\odot\mathbf{T}_{1}x_{1})\ldots)))|^{2},\end{split} \tag{1}\]
where \(\odot\) represents an element-wise (Hadamard) product, \(\mathbf{T}_{k}\) is the transmission matrix of the \(k^{\text{th}}\) complex medium in the network, \(\mathbf{F}\) is a known complex matrix defined by the detection optics, and \(\mathbf{P}_{k}=\text{diag}[x_{k}]\). Eq. 1 also describes the neural network that models this optical process, where \(\{x_{i}\}_{i}\) are \(n+1\) input vectors, \(\{\mathbf{T}_{i}\}_{i}\) are fully-connected "hidden" complex layers with a linear activation, and \(\mathbf{F}\) is a fully-connected known complex layer with a non-linear activation of \(|.|^{2}\) to simulate intensity measurements. Since the input layers are located at different planes in the network, we refer to this model as the multi-plane neural network (MPNN).
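To make the architecture concrete, the single-layer case (the configuration characterised experimentally in Sec. III) can be sketched in TensorFlow/Keras, the framework used in this work. This is a minimal illustration under our own naming conventions rather than the released code: the `ComplexDense` layer, the layer name `"tm"`, and the fixed matrix `F` passed to the model are illustrative choices.

```python
import tensorflow as tf

class ComplexDense(tf.keras.layers.Layer):
    """Trainable complex matrix T, stored as separate real and imaginary weights."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        shape = (int(input_shape[-1]), self.units)
        self.re = self.add_weight(name="re", shape=shape, initializer="glorot_uniform")
        self.im = self.add_weight(name="im", shape=shape, initializer="glorot_uniform")

    def call(self, x):
        # Linear "hidden" layer acting on a complex field: x -> x T
        return tf.matmul(x, tf.complex(self.re, self.im))

class MPNN(tf.keras.Model):
    """Single-layer MPNN computing y = |F (x2 . T x1)|^2, i.e. Eq. (1) with n = 1."""
    def __init__(self, n_out, F):
        super().__init__()
        self.tm = ComplexDense(n_out, name="tm")     # unknown T, to be learned
        self.F = tf.constant(F, dtype=tf.complex64)  # known detection optics (n_out x n_cam)

    def call(self, inputs):
        phi1, phi2 = inputs                                  # real-valued SLM phase patterns
        x1 = tf.exp(tf.complex(tf.zeros_like(phi1), phi1))   # e^{i phi1}
        x2 = tf.exp(tf.complex(tf.zeros_like(phi2), phi2))   # e^{i phi2}
        field = tf.matmul(x2 * self.tm(x1), self.F)          # F (x2 . T x1), row-vector form
        return tf.abs(field) ** 2                            # |.|^2 intensity non-linearity
```

Because the only trainable weights are the real and imaginary parts of \(\mathbf{T}\), the optimiser's search space is exactly the physical unknown, which is the sense in which the network is physics-informed.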
Figure 1: Schematic of an \(n\)-layered optical network: a cascade of \(n\) complex media denoted by their optical transmission matrices \(T_{k}\) are separated by reconfigurable phase-shifter planes \(x_{k}\), followed by detection optics \(F\). This optical network can be represented as a multi-plane neural network (MPNN) with \(n+1\) input layers of \(x_{k}\), \(n\) “hidden” layers for \(T_{k}\), and a known layer for \(F\) with a \(|.|^{2}\) activation.

We train the described model on a measured dataset using TensorFlow and Keras in Python. TensorFlow provides an open-source framework that makes models like these convenient to implement and extend [61]. Building models in this framework allowed us to reuse the complex-valued neural network layers developed in [46] and keeps our work open to extensions using different optimisation techniques. Our model is optimised using adaptive moment estimation, also referred to as the Adam optimizer [62], with the mean-squared error loss given by
\[MSE=\sum_{i}|\bar{y}_{i}-y_{i}|^{2}, \tag{2}\]
where \(i\) indexes points in the dataset, \(\bar{y}\) is the predicted output from the model, and \(y\) is the measured output. Once the loss function is defined, gradients of the MSE with respect to each weight in the layer \(\mathbf{T_{k}}\) are calculated via the chain rule using the backpropagation algorithm [63]. To keep the learning efficient, we set the learning rate appropriately before beginning the optimisation and reduce it during training if the loss plateaus. After training, retrieving the weights of the layer \(\mathbf{T_{k}}\) gives us the required transmission matrix of the \(k^{\text{th}}\) complex medium.
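Continuing the sketch above, training and weight retrieval might look as follows; `phi1_data`, `phi2_data`, `y_data`, and `F_known` are placeholders for the recorded dataset and the calibrated detection matrix, not quantities from the actual codebase.

```python
model = MPNN(n_out=832, F=F_known)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-2), loss="mse")

# Reduce the learning rate if the loss plateaus, as described in the text.
plateau = tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.5, patience=2)
model.fit([phi1_data, phi2_data], y_data, batch_size=500, epochs=20, callbacks=[plateau])

# Retrieve the learned transmission matrix from the hidden layer.
T_learned = (model.tm.re.numpy() + 1j * model.tm.im.numpy()).T  # back to the T x1 convention
```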
## III Experimental Method
In this work, we use the formalism of MPNN to measure the coherent transmission matrix of a multi-mode fiber using two programmable phase planes. The phase planes are implemented on spatial light modulators (SLMs) placed before and after the MMF to probe the optical fields propagating through the fiber using intensity-only measurements. A schematic of the setup is illustrated in Fig. 2a, where light from a superluminescent diode is filtered by a spectral filter centered at 810 nm (FWHM 3.1 nm) and coupled into a 2 m-long graded-index multi-mode fiber (MMF, Thorlabs-M116L02) sandwiched between two SLMs (Hamamatsu LCOS-X10468). The MMF has a core size of 50 \(\mu\)m and supports approximately 200 modes for each polarization at 810 nm wavelength. The telescopes placed between each component in the setup are designed such that the incident beam covers a large area on each SLM. In this particular experiment, we only control a single polarization channel of the MMF. However, the techniques discussed here can be equivalently applied to characterise both polarization channels simultaneously.

Figure 2: (a) Experiment: Light from a superluminescent diode is filtered by a 3.1 nm spectral filter centered at 810 nm, modulated by a random phase pattern displayed on a spatial light modulator (SLM\({}_{1}\)) and coupled into a multi-mode fiber (MMF), which is the complex medium of interest. The output speckle field from the MMF is projected onto SLM\({}_{2}\), which displays another random phase pattern, and is then detected on a CMOS camera placed in the Fourier plane of SLM\({}_{2}\). (b) The phase patterns on the SLMs are constructed in a specific macro-pixel basis with varying pixel size based on the incident field distribution. Intensities of the output modes are recorded at a given set of points at the camera, which enclose an area corresponding to the MMF core. (c) The physics-informed neural network consists of two input layers encoding the phase patterns on SLM\({}_{1}\) and SLM\({}_{2}\), and a single output layer encoding the intensity pattern of the output speckle in the given basis. The hidden layer \(T\) denotes the complex transformation between SLM\({}_{1}\) and SLM\({}_{2}\) in the macro-pixel bases, while \(F\) is a known layer corresponding to the \(2f\)-lens system between SLM\({}_{2}\) and the camera.
We choose to work in the so-called macro-pixel basis, which consists of groups of SLM pixels chosen such that the intensity per macro-pixel is approximately equal (Fig. 2b). SLM\({}_{1}\) is used to prepare a set of input modes in the macro-pixel basis, and SLM\({}_{2}\) in combination with a CMOS camera (XIMEA-SiC USB3.1) allows us to perform measurements on the output modes of the fiber. \(\mathbf{T}\) denotes the optical transmission matrix between SLM\({}_{1}\) and SLM\({}_{2}\) in the macro-pixel basis. The field after SLM\({}_{2}\) is given by the element-wise product \(x_{2}\odot\mathbf{T}x_{1}\), where \(x_{1(2)}\) is a vector corresponding to the hologram implemented on SLM\({}_{1(2)}\). Finally, this field propagates to the camera placed in the Fourier plane of SLM\({}_{2}\), a mapping described by the transfer matrix \(\mathbf{F}\). The resultant measured intensity \(y\) on the camera is given by
\[y=|\mathbf{F}(x_{2}\odot\mathbf{T}x_{1})|^{2}, \tag{3}\]
which describes a single-layered MPNN model. A set of random measurements is performed on the setup to generate a dataset for this model. A number of holograms are generated with phases randomly varied following a uniform distribution. These holograms are displayed on each SLM and the corresponding intensities measured by the CMOS camera are recorded. We sample the intensity of the field at the camera at positions separated by a distance corresponding to the size of a diffraction-limited spot (as shown in Fig. 2b), calculated using the effective focal length of the lens system (L\({}_{9-11}\)) between SLM\({}_{2}\) and the camera.
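As a purely numerical stand-in for this data-collection step (useful for sanity-checking the model before running on experimental data), one can synthesise a dataset from a random ground-truth TM. Here `T_true` and the Fourier-type `F_known` are synthetic assumptions, with dimensions matching the experiment described below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_cam, n_meas = 800, 832, 367, 16000   # alpha = n_meas / n_in = 20

# Synthetic ground-truth TM and a known Fourier-type detection matrix F (n_out x n_cam).
T_true = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)
F_known = np.exp(-2j * np.pi * np.outer(np.arange(n_out), np.arange(n_cam)) / n_out) / np.sqrt(n_out)

# Random uniform-phase holograms on both SLMs, and the resulting camera intensities.
phi1_data = rng.uniform(0, 2 * np.pi, size=(n_meas, n_in))
phi2_data = rng.uniform(0, 2 * np.pi, size=(n_meas, n_out))
x1, x2 = np.exp(1j * phi1_data), np.exp(1j * phi2_data)
y_data = np.abs((x2 * (x1 @ T_true.T)) @ F_known) ** 2   # Eq. (3): y = |F (x2 . T x1)|^2
```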
While we expect the TM to have a dimension of approximately \(200\times 200\) based on the fiber specifications, we oversample our measurements at the SLM planes to capture any optical misalignment and aberrations in the experiment [58]. Thus, we use 800 macro-pixels at the input phase plane (SLM\({}_{1}\)) and 832 macro-pixels at the output phase plane (SLM\({}_{2}\)) to perform the measurements. Since the intensity sampling at the camera plane is limited by the resolution of the optical system, we only use 367 sampling points here. We thus train a model containing an \(800\times 832\)-dimensional TM. The optimisation is carried out in multiple batches with a batch size of 500 samples, and is accelerated using a GPU (NVIDIA GeForce RTX 3060). We observe good convergence in about 20 epochs of training, as plotted in the training and test metrics shown in Fig. 3(a-b). Retrieving the weights from the hidden layer of the model recovers the measured TM in the macro-pixel basis, as visualised in Fig. 3c. Transforming this TM to the Laguerre-Gauss modal basis with 20 mode groups reveals a block-diagonal structure with crosstalk, as shown in Fig. 3d, which is the typical structure expected for a graded-index multi-mode fiber [48; 64].
Figure 3: Learning the coherent transmission matrix of a multi-mode fiber: Test and train (a) loss and (b) mean-absolute percentage error (MAPE) as learning proceeds in epochs. A visual plot of the learnt transmission matrix in the (c) macro-pixel and (d) Laguerre-Gauss (LG) basis, where modes are segregated into respective mode groups. In order to change the basis of the TM from (c) to (d), a transformation matrix \(B_{\text{Pix}\to\text{LG}}\) is constructed that maps the set of macro-pixel modes to a set of LG modes. The TM in the LG basis is calculated as \(T_{\text{LG}}=B_{\text{Pix}\to\text{LG}}^{\dagger}T_{\text{Pix}}B_{\text{Pix}\to\text{LG}}\).
## IV Results and Discussions
### Accuracy of the measured transmission matrix
To verify the accuracy of the measured transmission matrix, we perform an optical phase conjugation (OPC) experiment to focus light from a given input mode into a particular output mode after scattering through the fiber, by displaying a phase-only solution on the first plane (SLM\({}_{1}\)), on the second plane (SLM\({}_{2}\)), and on both planes simultaneously. The focusing efficiency obtained by controlling light using only the first plane or only the second plane allows us to assess the quality of individual rows and columns of the transmission matrix, respectively. On the other hand, manipulating light using both planes allows us to assess the overall quality of the measured transmission matrix. The focusing efficiency is defined as the ratio of the intensity measured in a diffraction-limited spot at the camera to that measured in an output region that is 1.75 times the area corresponding to the output facet of the fiber; the enlarged region captures any light that is diffracted outside the output facet due to phase modulation at SLM\({}_{2}\).
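The phase-only OPC solutions themselves are simple to state. The sketch below (our own simplified version, using the column-vector convention of Eq. (3) with `T` of shape `(n_out, n_in)` and `F` of shape `(n_cam, n_out)`, and an idealised efficiency that ignores the 1.75x facet-area normalisation) shows the two single-plane cases.

```python
import numpy as np

def focus_with_first_plane(T, j):
    """Phase-only OPC on SLM1: choose x1 so all contributions to output mode j add in phase."""
    x1 = np.exp(-1j * np.angle(T[j, :]))
    out = T @ x1
    return np.abs(out[j]) ** 2 / np.sum(np.abs(out) ** 2)   # idealised focusing efficiency

def focus_with_second_plane(T, F, x1, c):
    """Phase-only OPC on SLM2: conjugate the speckle phase so camera point c adds in phase."""
    v = T @ x1                                 # speckle field arriving at SLM2
    x2 = np.exp(-1j * np.angle(F[c, :] * v))   # cancel the accumulated phase per mode
    cam = F @ (x2 * v)
    return np.abs(cam[c]) ** 2 / np.sum(np.abs(cam) ** 2)
```

Note that the second-plane solution requires the relative phases between the rows of \(\mathbf{T}\), which is exactly the coherent information that intensity-only phase retrieval leaves undefined.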
We measure the TM of a multi-mode fiber using the MPNN technique and compare it with one measured using the phase-stepping (PS) technique. Measurement of a TM using the PS technique is a two-step process. In the first step, we measure the joint transfer matrix of the fiber and \(2f\)-lens system up to an ambiguity set by the reference field, i.e. \(D\mathbf{F}\mathbf{T}\), where \(D\) is an arbitrary diagonal matrix owing to the unknown reference field. We do so by displaying the superposition of each input mode with the chosen reference mode at SLM\({}_{1}\) and varying their relative phase in multiple steps, while measuring the intensity at each output spot. In order to maximize the intensity per mode, we choose the input basis to be a discrete Fourier transform of the macro-pixel basis, with the \(50^{\text{th}}\) mode chosen as the reference. The second step of the PS technique involves the measurement of the reference field (the characterisation of \(D\)). To measure \(D\), we display only the input reference mode at SLM\({}_{1}\) and choose an output reference mode on SLM\({}_{2}\). We then display superpositions of each output mode with the chosen output reference mode on SLM\({}_{2}\) and vary their relative phase in multiple steps in a similar manner to above. The measured intensity at the camera then allows us to reconstruct the matrix \(D\) corresponding to the input reference field. It should be noted that knowledge of \(D\) is unnecessary for many applications, such as imaging through complex media as well as the aforementioned experiment on performing OPC using only the first plane.
The focusing efficiency achieved at different points across the output facet of the fiber when performing OPC with only the first plane (SLM\({}_{1}\)), using TMs obtained via the PS and MPNN techniques, is shown in Fig. 4a. Using the PS technique, we observe a reduction of the focusing efficiency at several output points due to the dark-spot problem [34]. This is due to the nature of speckle, which results in a high probability of obtaining very low output intensities of the internal reference mode after scattering through the fiber. This leads to low interference visibility in the PS technique for specific output modes, which consequently results in inaccuracy of the reconstructed transmission matrix at these outputs. This problem is solved by measuring the transmission matrix with the MPNN technique, as this does not involve a reference mode. The more uniform focusing efficiency achieved across the output facet of the fiber with the MPNN technique is clearly illustrated in Fig. 4a. A histogram of focusing efficiencies achieved with these two methods (Fig. 4b) shows that low focusing efficiencies are only observed with the PS technique, while the MPNN technique achieves a significantly higher maximum focusing efficiency.

Figure 4: A comparison of optical phase conjugation (OPC) performed with only the first phase plane: (a) Focusing efficiencies achieved at different points across the output facet of the multi-mode fiber using the phase-stepping (PS) and multi-plane neural network (MPNN) techniques. Output points with low focusing efficiency due to the inherent dark-spot problem can be seen in the PS method (see text for more details). In contrast, focusing efficiencies obtained with the MPNN method show more uniformity across the fiber facet. (b) A histogram comparing the PS and MPNN methods shows that the PS technique leads to some output points with very low focusing efficiencies, while those obtained with the MPNN method are more uniform and significantly higher. Note: Only the points within the first 80% of the core diameter from the center of the core are included in this histogram.
While improved control with the first phase plane using various optimisation techniques has been well studied in previous works, one of the chief merits of our approach lies in the simultaneous measurement of the relative phases between the rows of the transmission matrix, i.e. the coherence between output modes. We assess the accuracy of the reconstructed relative phases between output modes by performing an OPC experiment to focus light using only the second phase plane (SLM\({}_{2}\)). Log-scale images of a focused spot at the centre of the output facet of the fiber using the two techniques are compared in Fig. 5a-b. We observe a suppression of the unwanted speckle background when using a TM obtained with the MPNN technique as compared to the one acquired using the PS technique. By measuring the focusing efficiencies for different input modes, the overall enhancement obtained with the MPNN technique as compared to the PS technique is evident in Fig. 5c. The average focusing efficiency using the second plane increases from \(26.5\pm 2.3\%\) using PS to \(40.8\pm 1.7\%\) using MPNN. The underlying reason for this improvement is that learning the TM with the MPNN technique does not require a static internal reference mode, whereas the use of the fixed internal reference mode in the PS technique results in errors at particular outputs due to the dark-spot problem.
As discussed above, a complete characterisation of the transmission matrix, including the relative phases between its rows and columns, is essential for coherent control of optical fields propagating through a complex medium. To examine this coherent control, we perform an OPC experiment, focusing light at different output spots by simultaneously utilising both phase planes. The solution of phase patterns for focusing is determined using an iterative wavefront-matching technique [65, 66]. As seen in the log-scale images shown in Fig. 5d-e, light focused using both phase planes with a TM acquired using the MPNN method has significantly less speckle background than one acquired with the phase-stepping technique. More quantitatively, we are able to achieve an average focusing efficiency of \(65.5\pm 2.5\%\) (up to a maximum of \(73.8\%\)) with both planes using the MPNN technique (Fig. 5f). This is a substantial increase with respect to that achieved with the PS technique, where we observe an average focusing efficiency of \(42.4\pm 3.1\%\) (up to a maximum of \(46.7\%\)). This result also demonstrates the increase in focusing efficiency achievable with two-plane control as compared to individual phase planes, which makes it particularly suitable for applications such as programmable optical circuits [17, 18].

Figure 5: (a-c) A comparison of optical phase conjugation (OPC) performed with only the second phase plane. Log-scale images of light focused using only SLM\({}_{2}\) with a TM obtained by the (a) phase-stepping and (b) multi-plane neural network techniques. (c) A histogram comparing second-plane focusing efficiencies of 50 random input modes achieved with the PS and MPNN techniques shows a marked improvement with the latter. (d-f) A comparison of OPC performed with both phase planes. Log-scale images of light focused using both SLMs with a TM obtained by the (d) PS and (e) MPNN techniques. (f) A histogram comparing focusing efficiencies for all output modes achieved with both planes shows a significant improvement with MPNN over the PS technique. The focusing efficiencies are corrected for the SLM\({}_{2}\) basis-dependent loss for both methods.
The maximum achievable focusing efficiencies (ideal simulations) shown in Figs. 4 and 5 are numerically calculated by taking into account the presence of polarization coupling in the fiber. The fiber TM is represented by a truncated random unitary matrix considering that only one polarization channel is measured and controlled. This results in the reduction of the maximum achievable focusing efficiency of the first phase plane as compared to that of the second/both phase planes.
### Efficiency of learning with the MPNN technique
In this section, we study the number of measurements required to obtain an optimal TM using the MPNN technique as compared to the PS technique. First, we evaluate this experimentally by performing an OPC experiment with the fiber TM reconstructed from datasets of different sizes. For the MPNN method, the size of the dataset (\(\alpha\)) is quantified by the total number of intensity-only measurements performed on the camera divided by the number of input modes that characterize the TM. For the PS method, this quantity is somewhat more nuanced, since we only control the number of phase steps (\(n_{\phi}\)) within the interval \([0,2\pi]\) that are used per mode. As described in Section III, the first part of the PS method requires \((n_{\text{in}}-1)n_{\phi}+1\) measurements, where \(n_{\text{in}}\) is the number of input modes and the additional measurement corresponds to that of the reference itself. The second part requires \((n_{\text{out}}-1)n_{\phi}\) measurements, where \(n_{\text{out}}\) is the number of output modes measured on the camera. The total number of measurements for each input mode is then given by \(\alpha\approx[(n_{\text{in}}+n_{\text{out}})/n_{\text{in}}]n_{\phi}\). In our experiment, \(n_{\text{in}}=800\) and \(n_{\text{out}}=367\), which gives \(\alpha\approx 1.46n_{\phi}\) for the PS technique. In this manner, the parameter \(\alpha\) corresponds to the total number of measurements performed per mode in both the MPNN and the PS techniques.
The focusing efficiency achieved with a TM obtained via the MPNN and PS techniques as a function of the number of measurements per mode (\(\alpha\)) is plotted in Fig. 6 for all three cases: focusing with the first, the second, and both planes. For the PS technique, the focusing efficiency is seen to converge to its maximum value at \(\alpha\sim 6-8\), which is close to the minimum required number of phase steps (\(n_{\phi}\sim 3-4\)) [67]. However, it reaches a plateau after this point and cannot be improved further. In contrast, the focusing efficiency obtained via the MPNN technique is seen to converge at a higher number of measurements per mode (\(\alpha=20\)). However, in all three OPC cases, the maximum efficiency achieved with the MPNN is higher than that achieved with the PS technique: \(46.5\%\), \(43.6\%\), and \(73.8\%\) versus \(45.0\%\), \(30.2\%\), and \(46.7\%\) with the first, second, and both planes respectively. In particular, the maximum efficiency is significantly higher when focusing with only the second or with both phase planes--cases where complete coherent information of the TM plays a critical role. Thus, while the MPNN technique takes longer to learn a given TM, the reconstructed TM is more accurate, as quantified by the focusing efficiency achieved through it. One should note that the number of measurements can also be reduced by incorporating the underlying physical model of a multimode optical fiber [57, 58].

Figure 6: Number of measurements required for an optimal transmission matrix measurement: Experimental focusing efficiency achieved with (a) the first phase plane, (b) the second phase plane, and (c) both planes simultaneously, using a TM reconstructed with the phase-stepping technique and the multi-plane neural network (MPNN), plotted as a function of the number of measurements per input mode (\(\alpha\)). While phase-stepping converges to a maximum faster, the MPNN technique achieves a much higher focusing efficiency.
### Noise-robustness of the MPNN technique
From the previous sections, it is clear that one of the advantages of the MPNN technique is improved performance over the PS technique in the presence of noise. While our experiment studies one specific case, here we quantify this improvement by simulating the effects of different noise levels on both techniques. An \(800\times 800\)-dimensional random unitary matrix is chosen as our ground truth TM and intensity measurements using the PS and MPNN techniques are simulated while varying the number of measurements per mode (\(\alpha\)). Noise in the measurement is modelled as additive white Gaussian noise on the readout intensity
\[\tilde{y}_{\sigma}=|\mathbf{F}(x_{2}\odot\mathbf{T}x_{1})|^{2}+\mathcal{N}(2 \sigma,\sigma), \tag{4}\]
where \(\mathcal{N}(\mu,\sigma)\) is additive Gaussian noise with mean \(\mu\) and standard deviation \(\sigma\). The signal-to-noise ratio (SNR) is the ratio of the average norm of the signal intensity to the mean noise, i.e. \(\text{SNR}=\overline{|y|}/\mu\). It should be noted that each simulated data point is normalised to unity before adding noise, which implies an SNR of \(1/(2\sigma)\) in our model. The fidelity between the recovered TM (\(\widetilde{\mathbf{T}}\)) and the ground-truth TM (\(\mathbf{T}\)) is calculated as \(\mathcal{F}(\mathbf{T})=|\text{Tr}(\widetilde{\mathbf{T}}^{\dagger}\mathbf{T})|^{2}/(\text{Tr}(\widetilde{\mathbf{T}}^{\dagger}\widetilde{\mathbf{T}})\,\text{Tr}(\mathbf{T}^{\dagger}\mathbf{T}))\). The fidelity is sensitive to the relative phases between rows and columns of the TM, and thus only reaches its maximum value of 1 when complete coherent information about the TM is present.
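Both ingredients of this simulation are one-liners; the following sketch, assuming unit-normalised intensities, makes the noise and fidelity conventions explicit.

```python
import numpy as np

def add_readout_noise(y, snr, rng):
    """Eq. (4): additive Gaussian noise N(2*sigma, sigma) on unit-normalised intensities."""
    sigma = 1.0 / (2.0 * snr)   # SNR = mean|y| / mu = 1 / (2 sigma) since mu = 2 sigma
    return y + rng.normal(loc=2.0 * sigma, scale=sigma, size=y.shape)

def tm_fidelity(T_rec, T_true):
    """Fidelity |Tr(T_rec^dag T_true)|^2 / (Tr(T_rec^dag T_rec) Tr(T_true^dag T_true))."""
    num = np.abs(np.trace(T_rec.conj().T @ T_true)) ** 2
    den = np.real(np.trace(T_rec.conj().T @ T_rec) * np.trace(T_true.conj().T @ T_true))
    return num / den
```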
The TM fidelity as a function of the number of measurements (\(\alpha\)) is plotted in Fig. 7 for different levels of noise. In these simulations, we choose \(n_{\text{in}}=n_{\text{out}}=800\), which implies that \(\alpha\approx 2n_{\phi}\) for the PS technique. In the noiseless case (\(\text{SNR}=\infty\)), the PS technique converges to perfect fidelity at \(\alpha\approx 6\), while the MPNN technique requires \(\alpha\approx 10\) to do the same. Note that in this case, the PS technique is able to reach perfect fidelity regardless of the dark-spot problem, as even the smallest interference signal provides complete information when no noise is present. As the SNR decreases, the maximum fidelity achievable via the PS technique drops rapidly, and it is unable to reach perfect fidelity regardless of the number of measurements used. For example, even with a small amount of noise (SNR=10), the PS technique can only recover a TM with a fidelity of less than 60%. In contrast, the MPNN technique is able to achieve very high fidelity in the presence of noise through slight increases in the number of measurements. Markedly, we are able to recover a near-perfect TM even when the SNR is as low as 1.25 by increasing \(\alpha\) to 16. This highlights the significantly improved noise-robustness of the MPNN technique in comparison to the PS technique.
In practice, our experimental results still fall short of the ideal values estimated by the numerical simulations (black dotted lines in Figs. 4 and 5). This deviation from the ideal might originate from a variety of imperfections in the experiment that are not captured by the simple noise model studied here. These include the choice of basis, instability of the light source, phase instability of the SLMs, temperature-dependent drift of the optomechanics and the optical medium, and imperfect linearity of the camera. Many of these issues can be addressed and improved in the experiment to approach perfect focusing efficiency [68, 69, 70], and such improvements could be combined with the MPNN technique presented here.
Figure 7: Noise-robustness of the MPNN: A simulated \(800\times 800\)-dimensional TM is recovered using the (a) phase-stepping (PS) and (b) multi-plane neural network (MPNN) techniques for different additive Gaussian noise levels. The fidelity of the recovered TM is plotted as a function of the number of measurements per input mode (\(\alpha\)). While the PS technique quickly degrades in the presence of noise, the MPNN technique is able to reach high fidelities with small increases in the number of measurements.
### Learning multiple transmission matrices with an MPNN
In this section, we demonstrate how the MPNN technique can be used to reconstruct a complex optical network characterised by a series of transmission matrices, as described in Eq. (1). Furthermore, we show how knowledge of each individual TM allows us to focus light at any intermediate plane within this network. As shown in Fig. 8a, we simulate a cascade of three \(16\times 16\)-dimensional TMs (T\({}_{i}\)) separated by programmable phase planes. These are followed by a known mode-mixer (F) that performs a 2D discrete Fourier transform. By randomly modulating the phase planes and performing intensity measurements after F, the MPNN technique is able to fully characterise this optical network. The recovered TM of each medium as well as the training loss are shown in Fig. 8b-c.
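The only extra ingredient needed for this multi-layer case is the forward model of the cascade in Eq. (1); a compact NumPy version (our own sketch) reads:

```python
import numpy as np

def cascade_intensity(Ts, xs, F):
    """Eq. (1): y = |F (x_{n+1} . T_n (x_n . ... T_2 (x_2 . T_1 x_1)))|^2.

    Ts: list of n complex TMs; xs: list of n+1 complex phase-plane vectors.
    """
    field = Ts[0] @ xs[0]
    for T, x in zip(Ts[1:], xs[1:-1]):
        field = T @ (x * field)           # pass through the next phase plane and medium
    return np.abs(F @ (xs[-1] * field)) ** 2
```

Training proceeds exactly as in the single-layer case, with one `ComplexDense`-style hidden layer per medium.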
A multi-plane network such as this can allow us to not only control light through the whole system, but also control light at intermediate planes as conceptualised in Fig. 8a. As an example, we simulate optical phase conjugation using the three recovered TMs to focus light at each intermediate plane in the system. Insets in Fig. 8b show the focused image obtained at each plane in the trained network using the preceding phase planes. The size of the dataset required to characterise this network with dimension \(3\times 16\times 16\) is \(\alpha\approx 10^{5}\). However, this is not the minimal size required to train this model and strongly depends on the training parameters. Further tuning and investigations can lead to a better understanding of how the minimum dataset size required to train this model varies with the number of planes and dimensions of the TM. Nonetheless, it should be noted that the model for such a neural network is very complex and to our knowledge, MPNN is the only known method to date that can perform such a task.
A caveat of our technique is that the measured TM of two consecutive complex media in the series can be ambiguous up to a diagonal matrix on either side. In a cascade of complex media \(T_{i}\) separated by planes \(P_{i}\), taking a single layer at the \(k^{\text{th}}\) plane,
\[T_{k}P_{k}T_{k-1}=T_{k}DP_{k}D^{-1}T_{k-1}=\widehat{T}_{k}P_{k}\widehat{T}_{k-1} \tag{5}\]
where \(D\) is a diagonal matrix. Due to the commutation relation between \(D\), \(P_{k}\) and \(D^{-1}\), the MPNN technique measures the TM of an individual complex medium only up to such an ambiguity in the TMs of consecutive media, for example, the presence of equal but opposite diagonal phases. We anticipate that this ambiguity affects the training, since a few elements in \(D\) can acquire high amplitudes, leading to over-fitting; however, this problem can be tackled using suitable regularizers. Importantly, this ambiguity between \(T_{k}\) and \(\widehat{T}_{k}\) does not affect the description of the overall cascaded optical network.

Figure 8: Learning multiple transmission matrices with an MPNN: (a) A cascade of multiple complex media interspersed with programmable phase planes can be characterised through the use of the MPNN technique. The recovered TMs corresponding to these media allow us to control the propagation of light at intermediate phase planes. (b) Numerical simulation showing three \(16\times 16\)-dimensional TMs reconstructed with the MPNN technique, which are then used to focus light at each intermediate plane (insets). (c) Optimisation loss using the training and testing datasets for the three-plane MPNN.
Systems consisting of cascaded programmable phase planes separated by optical media are fast gaining popularity, with recent work demonstrating their use in a variety of applications such as spatial mode-sorting [66, 71, 72], projective measurements [73], unambiguous state discrimination [74], programmable quantum circuits [18, 75], and optical neural networks [76, 11]. In all these implementations, accurate knowledge of the optical system between phase planes is critical to their performance. While free-space propagation is relatively straightforward to model, aberrations arising from the devices and elements used can introduce significant design challenges. By enabling the characterisation of a cascade of independent TMs, the MPNN technique provides a way to more accurately design such multi-plane devices, with uses ranging from classical to quantum optical networks.
## V Conclusion
In this work, we have conceptualised and experimentally demonstrated a method to characterise a complex medium, or a network of complex media, using physics-informed neural networks, referred to as multi-plane neural networks (MPNNs). We apply the proposed MPNN technique to measure the full coherent transmission matrix of a multi-mode fiber without the need for an external reference field. The key idea behind the measurement is to randomly modulate the phases of optical fields both before and after the fiber and to measure the intensity-only outcomes to form a dataset. The trained model produces a transmission matrix capable of controlling light by manipulating fields not only before the multi-mode fiber, but also after it, which relies on the complete coherent information of the obtained transmission matrix, free of the problems of dark spots and output-phase ambiguity. We demonstrate accurate control of optical modes through a multi-mode fiber using the MPNN method, with a significant improvement over the phase-stepping technique in terms of focusing efficiency and noise-robustness. Finally, we show the capability of this technique to learn more complex systems such as a cascade of transmission matrices interspersed with multiple phase planes and discuss possible applications. Our technique will allow for accurate control of coherent light not only through complex media but also through complex optical networks, with applications ranging from optical communication systems to biomedical imaging.
**Funding -** This work was made possible by financial support from the QuantERA ERA-NET Co-fund (FWF Project I3773-N36), the UK Engineering and Physical Sciences Research Council (EPSRC) (EP/P024114/1) and the European Research Council (ERC) Starting grant PIQUaNT (950402).
**Acknowledgments -** We would like to thank Dr. Will McCutcheon and Mayuna Gupta for fruitful discussions. The scientific colour maps batlow and romaO [77] are used in this study to prevent visual distortion of the data and exclusion of readers with colour vision deficiencies [78].
**Disclosures -** The authors declare no conflicts of interest.
**Data availability -** Data underlying the results presented in this paper are available in Ref. [79].
|
2310.05105 | How Graph Neural Networks Learn: Lessons from Training Dynamics | A long-standing goal in deep learning has been to characterize the learning
behavior of black-box models in a more interpretable manner. For graph neural
networks (GNNs), considerable advances have been made in formalizing what
functions they can represent, but whether GNNs will learn desired functions
during the optimization process remains less clear. To fill this gap, we study
their training dynamics in function space. In particular, we find that the
gradient descent optimization of GNNs implicitly leverages the graph structure
to update the learned function, as can be quantified by a phenomenon which we
call \emph{kernel-graph alignment}. We provide theoretical explanations for the
emergence of this phenomenon in the overparameterized regime and empirically
validate it on real-world GNNs. This finding offers new interpretable insights
into when and why the learned GNN functions generalize, highlighting their
limitations in heterophilic graphs. Practically, we propose a parameter-free
algorithm that directly uses a sparse matrix (i.e. graph adjacency) to update
the learned function. We demonstrate that this embarrassingly simple approach
can be as effective as GNNs while being orders-of-magnitude faster. | Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun, Junchi Yan | 2023-10-08T10:19:56Z | http://arxiv.org/abs/2310.05105v3 | # How Graph Neural Networks Learn: Lessons from Training Dynamics in Function Space
###### Abstract
A long-standing goal in deep learning has been to characterize the learning behavior of black-box models in a more interpretable manner. For graph neural networks (GNNs), considerable advances have been made in formalizing what functions they can represent; however, it remains less clear whether and how GNNs learn desired functions during the optimization process. To fill this critical gap, we study the learning dynamics of GNNs in function space via the analytic framework of overparameterization. In particular, we find that the seemingly complicated training process of GNNs can be re-cast into a more familiar label propagation framework, due to the graph inductive bias implicit in this process. From this vantage point, we provide explanations for why the learned GNN functions successfully generalize and for their pathological behavior on heterophilic graphs, which are consistent with observations. Practically, sparsifying and implementing the learning dynamics lead to a minimalist semi-supervised learning algorithm with the efficiency of classic algorithms and the effectiveness of modern GNNs.
## 1 Introduction
_Graph Neural Networks (GNNs)_(Gori et al., 2005; Scarselli et al., 2008; Bruna et al., 2014; Kipf and Welling, 2017) represent network architectures for learning on entities with explicit relations. In addition to their empirical success, the recent pursuit of theoretical understanding has also led researchers to dissect GNN models, especially regarding their representation (Maron et al., 2019; Xu et al., 2019; Oono and Suzuki, 2019; Chen et al., 2019) and generalization (Scarselli et al., 2018; Verma and Zhang, 2019; Garg et al., 2020) powers. Another research area in parallel with representation and generalization is optimization, which focuses on understanding how the training algorithm identifies desirable GNN functions. Despite its importance, the optimization process of training GNNs is far less understood compared with other topics. While previous work has studied the learning dynamics of linearized GNNs (Xu et al., 2021) in their _weight space_, with emphasis on the convergence speed of empirical risk, an analysis of their dynamics in _function space_ that directly examines how the GNN function evolves has been lacking. In particular, we focus on the following open questions:
_1 How do GNN models evolve in function space during optimization, and how exactly do graphs regularize or bias this process?_
_2 In light of the above, are there interpretable ways to explain why the learned GNN functions successfully generalize and when they might fail?_
Studying these questions can lead to deeper understanding of GNNs and provide guidelines for designing more principled algorithms (as we will exemplify in Sec. 3). As an initial probe in this direction, we study GNN learning dynamics in function space, i.e., how the learned function \(f_{t}(\mathbf{x})\) (or model prediction) evolves on an arbitrary sample \(\mathbf{x}\) as the model weights \(\mathbf{W}_{t}\) evolve over time according to gradient descent. Our contributions are summarized as follows:
**Theoretical Contributions.** We provide answers to the above two questions, via the widely adopted analytical framework (Jacot et al., 2018; Arora et al., 2019; Du et al., 2019) of studying neural networks in overparameterized regimes, and map our insights to practice via empirical verification.
\(\circ\)_GNN Dynamics Resemble Label Propagation:_ The evolution of GNNs in function space follows a cross-instance propagation scheme with pairwise weights controlled by a kernel matrix \(\mathbf{\Theta}_{t}\) called the _Neural Tangent Kernel (NTK)_, which we demonstrate tends to align well with, or even become equivalent to, various specialized graph adjacency matrices \(\mathbf{A}\) in overparameterized regimes. In this way, each gradient descent step for GNNs resembles classic _Label Propagation (LP)_ algorithms (Szummer and Jaakkola, 2001; Zhu and Ghahramani, 2002; Zhu et al., 2003; Zhou et al., 2003; Chapelle et al., 2009) that propagate true labels from the training set to unseen samples along the graph, as illustrated in Fig. 1(a). Such a characterization of the evolution of learned GNN functions provides answers to question 1 and reveals the role of graphs in biasing this procedure.
\(\circ\)_Rethinking Generalization and Heterophily:_ The alignment between the NTK \(\mathbf{\Theta}_{t}\) and the graph adjacency \(\mathbf{A}\) (dubbed kernel-graph alignment) explains both the success and the pathologies of learned GNN functions. Let \(\mathbf{\Theta}^{*}\) be the optimal kernel indicating whether samples share the same (true) label; the alignment between \(\mathbf{\Theta}^{*}\) and \(\mathbf{A}\) indicates the homophily level of a graph (i.e. whether connected nodes indeed have the same label). As illustrated by the triangle relation in Fig. 1(b), a larger homophily level combined with inherently good kernel-graph alignment translates to better kernel-target alignment (Cristianini et al., 2001), which is desired for favorable generalization. This answers question 2 and sheds additional light on heterophily problems. As support, we establish a strong correlation between generalization and homophily level in overparameterized regimes, and characterize the Bayesian optimality of GNNs that minimizes the population risk. Our empirical results on the time evolution of real-world GNN NTKs on various datasets serve to corroborate our analysis.
**Practical Contributions.** Drawing upon the novel message passing perspective on the learning dynamics of neural networks, we propose to directly replace the NTK matrix with the adjacency matrix \(\mathbf{A}\), which leads to a parameter-free algorithm that we refer to as _Residual Propagation (RP)_. Theoretically, it serves as an extreme-case characterization of the learning dynamics of GNNs where the graph structure dominates the optimization process. Empirically, we found this embarrassingly simple algorithm can outperform GNNs by \(3.0\%\) on OGB benchmarks even without using node features (improvements are also shown on \(12\) additional datasets in the Appendix). Notably, RP takes orders of magnitude less time (\(<0.01\times\)) to exceed the performance of GNNs.
## 2 Preliminaries
Figure 1: **(a) Learning dynamics of (graph) neural networks from a function space viewpoint where residuals (i.e. difference between ground-truth labels and predictions) propagate from observed to unobserved samples based on a kernel matrix and shape the learned function. (Each image is a data point) (b) Alignment of matrices that control the residual propagation process: NTK \(\mathbf{\Theta}_{t}\), adjacency matrix \(\mathbf{A}\), and the optimal kernel matrix \(\mathbf{\Theta}^{*}\). Kernel-graph alignment is introduced in this work.**

**Notations and Setup.** Given a set of training inputs \(\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1}^{n_{l}}\in\mathbb{R}^{n_{l}\times d}\) and labels \(\mathcal{Y}=\{y_{i}\}_{i=1}^{n_{l}}\in\mathbb{R}^{n_{l}}\), where \(n_{l}\) is the size of the labeled instances, we aim to learn a predictive function \(f(\mathbf{x})\) (such as a neural network) parameterized by weights \(\mathbf{W}\). For (semi-)supervised learning, we minimize the squared loss \(\mathcal{L}\) using _Gradient Descent (GD)_,

\[\mathcal{L}=\|\mathcal{F}_{t}-\mathcal{Y}\|^{2}/2,\quad\partial\mathbf{W}_{t}/\partial t=-\eta\nabla_{\mathbf{W}}\mathcal{L}, \tag{1}\]
where \(\mathbf{W}_{t}\) and \(\mathcal{F}_{t}=\{f_{t}(\mathbf{x})\}_{\mathbf{x}\in\mathcal{X}}\in\mathbb{R}^{n_{l}}\) are the weights and predictions for the training set at optimization time index \(t\), and \(\eta\) is the learning rate. Temporal discretization of this gradient flow system with time-step \(\Delta t=1\) yields the fixed step-size GD algorithm commonly used in practice, i.e. \(\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta\nabla_{\mathbf{W}}\mathcal{L}\). Additionally, \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\) denote testing instances, such that \(\bar{\mathcal{X}}=[\mathcal{X},\mathcal{X}^{\prime}]\in\mathbb{R}^{n\times d}\) (resp. \(\bar{\mathcal{Y}}\)) represents the concatenation of training and testing inputs (resp. labels), where \(n\) is the full dataset size. For convenience, we generally refer to \(f_{t}(\mathbf{x})\) as the prediction for a single data point, which, by a slight abuse of notation, is allowed to use additional information such as the input features of other nodes; \(\mathcal{F}_{t}\) and \(\mathcal{F}^{\prime}_{t}\) denote predictions for the training and testing sets, which are agnostic to model parameterization in order to apply to parameter-free models. Our analysis generalizes to other loss functions and multi-dimensional outputs (see Appendices C.2 and C.3).
Following (Xu et al., 2021), we focus on node classification or regression tasks, where instances (i.e. nodes) and their relations (i.e. edges) are described by an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), \(|\mathcal{V}|=n\). The graph defines a symmetric adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) where \(\mathbf{A}_{ij}=\mathbf{A}_{\mathbf{x}_{i}\mathbf{x}_{j}}=1\) for a pair of connected nodes \((\mathbf{x}_{i},\mathbf{x}_{j})\) and \(0\) otherwise. Based on the data split, we denote submatrices of \(\mathbf{A}\) using \(\mathbf{A}_{\mathcal{X}\mathcal{X}}\) and \(\mathbf{A}_{\mathcal{X}^{\prime}\mathcal{X}}\). As a standard technique in graph theory (Chung, 1997), we let \(\mathbf{A}\) be normalized, i.e. \(\mathbf{A}\leftarrow\mathbf{D}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-\frac{1}{2}}\) where \(\mathbf{D}\) is the node degree matrix. Our insights apply to both transductive and inductive settings (see Appendix C.1), but we will focus on the former case unless stated otherwise.
**Graph Neural Networks (GNNs)** are a class of network architectures for learning representations on graphs, commonly trained based on the aforementioned optimization procedure. For GCN (Kipf & Welling, 2017) (and other GNNs that differ by the definition of \(\mathbf{A}\)), each layer can be written as
\[\mathbf{Z}^{(\ell)}=\sigma(\mathbf{A}\mathbf{Z}^{(\ell-1)}\mathbf{W}^{(\ell) })\in\mathbb{R}^{n\times m}, \tag{2}\]
where \(\mathbf{Z}^{(\ell)}\) are node representations at the \(\ell\)-th layer with \(\mathbf{Z}^{(0)}=\bar{\mathcal{X}}\), \(\sigma\) is ReLU activation, and \(m\) is the model width. We denote the GNN prediction for a single data point as \(f(\mathbf{x};\mathbf{A})\), conditioned on \(\mathbf{A}\) to distinguish it from other models. We adopt this architecture as the default GNN model.
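For reference, the normalisation and the layer update above are only a few lines of NumPy; this is a sketch, and computing degrees from \(\mathbf{A}+\mathbf{I}\) is an assumption on our part about the intended convention.

```python
import numpy as np

def normalize_adjacency(A):
    """A <- D^{-1/2} (A + I) D^{-1/2}; degrees computed from A + I (assumed convention)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, Z, W):
    """Eq. (2): Z^(l) = ReLU(A Z^(l-1) W^(l))."""
    return np.maximum(A_norm @ Z @ W, 0.0)
```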
**Label Propagation (LP)** represents a class of algorithms for semi-supervised learning, in which ground-truth labels \(\mathcal{Y}\) are propagated along edges to predict \(\mathcal{F}^{\prime}\) for the testing (unlabeled) set. From (Zhu & Ghahramani, 2002), the LP update equation can be written as
\[\mathrm{LP}(\mathcal{Y};k,\alpha)=[\mathcal{F}_{k},\mathcal{F}^{\prime}_{k}] =\alpha\mathbf{A}\;[\mathcal{F}_{k-1},\mathcal{F}^{\prime}_{k-1}]+(1-\alpha)[ \mathcal{Y},\mathbf{0}], \tag{3}\]
where \([\cdot,\cdot]\) is concatenation, \(k\) is the iteration number, and \(\alpha\) is a hyperparameter. As initialization \([\mathcal{F}_{0},\mathcal{F}^{\prime}_{0}]=[\mathcal{Y},\mathbf{0}]\), and after convergence \([\mathcal{F}_{\infty},\mathcal{F}^{\prime}_{\infty}]\propto(\mathbf{I}_{n}- \alpha\mathbf{A})^{-1}[\mathcal{Y},\mathbf{0}]\). LP algorithms have found wide applicability due to their superior efficiency, scalability, and simplicity for deployment.
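A direct NumPy implementation of the update in (3) is equally short (a sketch; `A_norm` is the normalised adjacency and `train_idx` the labeled node indices, both illustrative names):

```python
import numpy as np

def label_propagation(A_norm, Y_train, train_idx, k=10, alpha=0.9):
    """Eq. (3), initialised at [Y, 0]; returns per-node label scores."""
    n = A_norm.shape[0]
    Y0 = np.zeros((n, Y_train.shape[1]))
    Y0[train_idx] = Y_train          # one-hot labels on the training rows, 0 elsewhere
    F = Y0.copy()
    for _ in range(k):
        F = alpha * (A_norm @ F) + (1 - alpha) * Y0
    return F
```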
## 3 A Label Propagation View of Gradient Descent Dynamics
Before delving into GNNs specifically, we detour by providing an LP perspective on the evolution of general parameterized models during GD-based optimization. From this vantage point, we propose a simple and interpretable propagation algorithm for semi-supervised learning, with an aim towards illustrating core ideas that will later contribute to our understanding of GNN behavior during optimization (Sec. 3.1). This new algorithm has interesting connections with classic LP algorithms, encompassing them as special cases with additional global optimization guarantees. We evaluate on OGB benchmarks (and \(12\) additional datasets in Appendix E) to verify its effectiveness, efficiency, and scalability (Sec. 3.2). We will circle back to connections with GNN learning dynamics in Section 4.
### Learning Dynamics in Function Space and Residual Propagation
We start by characterizing the evolution of a general model (with no restriction on model architecture) in its function space \(\mathcal{F}_{t}\), induced by GD-based optimization that continuously updates the weights \(\mathbf{W}_{t}\). On the training set, this process can be described by (Jacot et al., 2018)
\[\partial\mathcal{F}_{t}/\partial t=\eta\;\mathbf{\Theta}_{t}(\mathcal{X},\mathcal{X})\mathcal{R}_{t},\quad\text{(NTK Matrix): }\;\mathbf{\Theta}_{t}(\mathcal{X},\mathcal{X})\triangleq\nabla_{\mathbf{W}}\mathcal{F}_{t}^{\top}\nabla_{\mathbf{W}}\mathcal{F}_{t}\in\mathbb{R}^{n_{l}\times n_{l}}, \tag{4}\]
where \(\mathcal{R}_{t}=\mathcal{Y}-\mathcal{F}_{t}\in\mathbb{R}^{n_{l}}\) denotes _residuals_ (Hastie et al., 2009) (a.k.a. errors), the difference between ground-truth labels and model predictions. The so-called _Neural Tangent Kernel (NTK)_ (Jacot
et al., 2018) \(\mathbf{\Theta}_{t}(\mathcal{X},\mathcal{X})\) is produced by the product of Jacobian matrices, which depends on the network architecture and evolves over time due to its association with the time-varying weights \(\mathbf{W}_{t}\). In the special case where the kernel is constant (such as for linear models and infinitely-wide neural networks), (4) reduces to the learning dynamics of kernel regression (Shawe-Taylor & Cristianini, 2004).
**Information Propagation Interpretation.** While (4) has found usage for analyzing the convergence of empirical risk (Du et al., 2019; Arora et al., 2019) and the spectral bias of deep learning (Mei et al., 2019; Cao et al., 2019), it is restricted to a limited set of samples (i.e. training set). To see how the model evolves on arbitrary inputs for fully characterizing the learned function, we extend (4) to accommodate unseen samples (which could be chosen arbitrarily). Specifically, let \(\mathcal{R}^{\prime}_{t}=\mathcal{Y}^{\prime}-\mathcal{F}^{\prime}_{t}\) denote residuals for the testing set, and after temporal discretization, (4) can be extended and rewritten neatly using a single variable residual \(\mathcal{R}\) (see derivation in Appendix B.1)
\[\left[\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\right]=-\eta\ \left[ \begin{array}{cc}\mathbf{\Theta}_{t}(\mathcal{X},\mathcal{X})&\mathbf{\Theta}_{t}( \mathcal{X},\mathcal{X}^{\prime})\\ \mathbf{\Theta}_{t}(\mathcal{X}^{\prime},\mathcal{X})&\mathbf{\Theta}_{t}(\mathcal{X }^{\prime},\mathcal{X}^{\prime})\end{array}\right]\left[\mathcal{R}_{t},\mathbf{0 }\right]+\left[\mathcal{R}_{t},\mathcal{R}^{\prime}_{t}\right], \tag{5}\]
where \(\mathbf{\Theta}_{t}(\mathcal{X}^{\prime},\mathcal{X})\triangleq\nabla_{\mathbf{W}}\mathcal{F}^{\prime\top}_{t}\nabla_{\mathbf{W}}\mathcal{F}_{t}\in\mathbb{R}^{(n-n_{l})\times n_{l}}\) is the NTK matrix between training and testing sets, which quantifies similarity between seen and unseen instances based on how differently their outputs change by an infinitesimal perturbation of weights. The whole \(n\times n\) NTK matrix will be henceforth abbreviated as \(\mathbf{\Theta}_{t}\). This equation implies that the GD-based optimization of arbitrary parameterized models can be viewed as an information propagation process where residuals propagate unidirectionally from training to arbitrary unseen samples based on a kernel similarity measure.
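For intuition, the NTK blocks in (4)-(5) can be computed by brute force. The sketch below assumes a generic scalar-output PyTorch model; for a GNN one would feed the full graph once and differentiate individual node outputs instead.

```python
import torch

def empirical_ntk(model, X1, X2):
    """Theta(X1, X2)[i, j] = <grad_W f(x_i), grad_W f(x_j)> for a scalar-output model."""
    params = list(model.parameters())

    def jacobian_rows(X):
        rows = []
        for x in X:
            grads = torch.autograd.grad(model(x.unsqueeze(0)).squeeze(), params)
            rows.append(torch.cat([g.flatten() for g in grads]))
        return torch.stack(rows)

    return jacobian_rows(X1) @ jacobian_rows(X2).T
```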
**Residual Propagation.** Drawing upon an analogy between the information propagation process in (5) induced by optimization and the message passing schemes between instances commonly seen in graph learning, we propose to replace the dense NTK matrix \(\mathbf{\Theta}_{t}\) with a sparse matrix (e.g. a high-order graph adjacency matrix \(\mathbf{A}^{K}\)) as a similarity measure, which allows us to implement (5) as a new semi-supervised algorithm called _Residual Propagation (RP)_ with the following update equation:
\[\left[\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\right]=-\eta\mathbf{A}^{K} [\mathcal{R}_{t},\mathbf{0}]+[\mathcal{R}_{t},\mathcal{R}^{\prime}_{t}],\ \text{ where }\mathcal{R}_{0}=\mathcal{Y}, \mathcal{R}^{\prime}_{0}=\mathbf{0}. \tag{6}\]
As initialization, the model predictions \(\mathcal{F}_{0}\) and \(\mathcal{F}^{\prime}_{0}\) are defined as \(\mathbf{0}\), and unknown testing labels are defined as \(\mathcal{Y}^{\prime}=\mathbf{0}\) such that negative \(\mathcal{R}^{\prime}_{t}\) equals model predictions. Compared with (5) merely as an expression, RP is a practical algorithm without trainable weights or relying on input features, and is efficient with linear time complexity owing to sparse matrix multiplication. For further intuition, consider an arbitrary unseen data point \(\mathbf{x}^{\prime}\), whose update equation in RP is
\[f_{t+1}(\mathbf{x}^{\prime})=f_{t}(\mathbf{x}^{\prime})+\eta\sum_{\mathbf{x}\in\mathcal{X}}\mathbf{A}^{K}_{\mathbf{x}^{\prime}\mathbf{x}}\left(y(\mathbf{x})-f_{t}(\mathbf{x})\right), \tag{7}\]
where \(y(\mathbf{x})\) is the ground-truth label for \(\mathbf{x}\), and \(\mathbf{A}^{K}_{\mathbf{x}^{\prime}\mathbf{x}}\) is the element of \(\mathbf{A}^{K}\) corresponding to the sample pair \((\mathbf{x},\mathbf{x}^{\prime})\). For an unseen instance \(\mathbf{x}^{\prime}\) that is more 'similar' or 'close' to \(\mathbf{x}\), more ground-truth label information \(y(\mathbf{x})\) will propagate to the testing instance, and vice versa, to shape the prediction for the unseen one (illustrated in Fig. 1(a)). For \(f_{t}(\mathbf{x})\neq 0\), the ground-truth label is adjusted by subtracting the current model prediction, i.e. \(y(\mathbf{x})-f_{t}(\mathbf{x})\), enabling the propagation process to diminish progressively as the predictions become more accurate.
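A faithful NumPy sketch of the update (6) takes only a few lines; in practice one would keep \(\mathbf{A}\) sparse and apply it \(K\) times rather than materialising \(\mathbf{A}^{K}\).

```python
import numpy as np

def residual_propagation(A_pow_K, Y_train, train_idx, eta=0.1, steps=50):
    """Eq. (6): residuals propagate from labeled to unlabeled nodes through A^K."""
    n = A_pow_K.shape[0]
    R = np.zeros((n, Y_train.shape[1]))
    R[train_idx] = Y_train                     # R_0 = Y on train rows, R'_0 = 0 elsewhere
    for _ in range(steps):
        masked = np.zeros_like(R)
        masked[train_idx] = R[train_idx]       # [R_t, 0]: only labeled residuals propagate
        R = R - eta * (A_pow_K @ masked)
    preds = -R                                 # on test rows, F' = -R' since Y' := 0
    preds[train_idx] = Y_train - R[train_idx]  # on train rows, F = Y - R
    return preds
```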
### Theoretical Analysis and Empirical Evaluation
Intriguingly, we found the RP algorithm is connected with various classic methods including LP and kernel regression, though they emerge from very different contexts. (Analysis here for RP is also useful for understanding GNNs as will be discussed in the next section.)
**Proposition 1** (Connection with Label Propagation).: _The first step of RP in (6) yields identical classification results as LP in (3) (with \(\alpha=1\) and \(k=K\)):_
\[\text{(First Step of RP): }[\mathcal{F}_{1},\mathcal{F}^{\prime}_{1}]=\eta\mathbf{A}^{K}[ \mathcal{Y},\mathbf{0}],\ \ \text{(Label Propagation): }\ \ \text{LP}(\mathcal{Y};K,1)=\mathbf{A}^{K}[\mathcal{Y},\mathbf{0}]. \tag{8}\]
_Remark_.: Besides the first step, each subsequent step of RP can also be viewed as LP on adjusted ground-truth labels, i.e. \(\mathcal{Y}-\mathcal{F}_{t}=\mathcal{R}_{t}\). This connection offers the potential for improvement as we iterate beyond the initial LP solution, and also demonstrates the flexibility of RP: the propagation term \(\mathbf{A}^{K}[\mathcal{R}_{t},\mathbf{0}]\) in (6) can be replaced
with a more general function \(\operatorname{LP}^{*}(\cdot):\mathbb{R}^{n_{l}}\to\mathbb{R}^{n}\) that takes as input ground-truth labels and outputs predictions, leading to generalized RP with the following update equation:
\[\left[\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\right]=-\eta \operatorname{LP}^{*}(\mathcal{R}_{t})+\left[\mathcal{R}_{t},\mathcal{R}^{ \prime}_{t}\right]. \tag{9}\]
**Theorem 2** (Convergence and Optimization Guarantee).: _For RP in (6) and sufficiently small step size \(\eta<2\sigma_{max}^{-1}[\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K}]\), where \(\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K}\) is a submatrix of \(\mathbf{A}^{K}\), and \(\sigma_{max}[\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K}]\) is its largest eigenvalue, \(\mathcal{R}_{t}\) and \(\mathcal{R}^{\prime}_{t}\) converge as \(t\to\infty\) for positive definite \(\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K}\) or positive semi-definite \(\mathbf{A}^{K}\)._
**Corollary 3** (Connection with Kernel Regression).: _Upon convergence for positive definite \(\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K}\), the model predictions are equivalent to the kernel regression solution w.r.t. the kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})\triangleq\mathbf{A}_{\mathbf{x}\mathbf{x}^{\prime}}^{K}\):_

\[\mathcal{F}_{\infty}=\mathcal{Y}\quad\text{(Perfect fit of training set)},\quad\mathcal{F}_{\infty}^{\prime}=\mathbf{A}_{\mathcal{X}^{\prime}\mathcal{X}}^{K}(\mathbf{A}_{\mathcal{X}\mathcal{X}}^{K})^{-1}\mathcal{Y}. \tag{10}\]
_Remark_.: There is one exception where convergence is not guaranteed, namely when \(\mathbf{A}\) has negative eigenvalues and \(K\) is odd. While this can easily be avoided by adding diagonal components (e.g. the PageRank matrix \(\alpha\mathbf{A}+(1-\alpha)\mathbf{I}_{n}\)) to enforce positive semi-definiteness, empirically we do not need the algorithm to converge: we simply stop at the step with peak validation performance (just like the standard training procedure for neural networks), and the algorithm can still achieve satisfactory generalization performance (often better than the version of RP that is guaranteed to converge).
**Empirical Evaluation.** We compare the basic version of RP from (6) with representative GNN architectures (LinearGNN (Wu et al., 2019) and GCN (Kipf and Welling, 2017)) on the challenging OGB (Hu et al., 2020) node classification datasets Arxiv, Proteins, and Products, with up to millions of nodes and edges. Details of the experimental setting are deferred to Appendix D.1. The code will be available at [https://github.com/chr26195/ResidualPropagation](https://github.com/chr26195/ResidualPropagation).
\(\circ\)_Effectiveness:_ In Table 1, the proposed RP demonstrates decent performance, often surpassing GNNs even though no node or edge feature information is utilized. Notably, its performance even approaches that of more advanced models such as node-level graph Transformers (Wu et al., 2022). As depicted in Fig. 2, RP matches the performance of LP after a single step, and quickly improves until reaching its peak performance, which surpasses the GNN. Specifically, on Proteins, where the graph contains relatively richer structural information, a single step of RP exceeds a well-trained deep GNN, while on Products, \(4\) steps of RP exceed the GNN.
\(\circ\)_Efficiency and Scalability:_ In addition to its effectiveness, RP offers several advantages. It requires no learnable parameters and is fast, with each step being more than 10 times faster than a GNN training step, in part because it does not need the backward pass typically required in conventional
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c|c}
\hline \hline
**Model** & **Feat.** & \multicolumn{2}{c|}{Arxiv} & \multicolumn{2}{c|}{Proteins} & \multicolumn{2}{c|}{Products} & **Avg.** & **Param.** \\
 & & Validation & Test & Validation & Test & Validation & Test & & \\
\hline
MLP & \(\mathcal{X}\) & \(57.65\pm 0.12\) & \(55.50\pm 0.23\) & \(77.06\pm 0.14\) & \(72.04\pm 0.48\) & \(75.54\pm 0.14\) & \(61.06\pm 0.08\) & \(66.48\) & \(O(dm^{2})\) \\
LinearGNN & \(\mathcal{X}\), \(\mathbf{A}\) & \(70.67\pm 0.02\) & \(69.39\pm 0.11\) & \(66.11\pm 0.87\) & \(62.99\pm 0.11\) & \(88.97\pm 0.01\) & \(74.21\pm 0.04\) & \(72.04\) & \(O(dc)\) \\
GNN & \(\mathcal{X}\), \(\mathbf{A}\) & \(\mathbf{73.00\pm 0.17}\) & \(71.74\pm 0.29\) & \(79.21\pm 0.18\) & \(72.51\pm 0.03\) & \(92.00\pm 0.00\) & \(75.64\pm 0.21\) & \(77.35\) & \(O(dm^{2})\) \\
LP & \(\mathbf{A}\) & \(70.14\pm 0.08\) & \(68.32\pm 0.00\) & \(83.02\pm 0.00\) & \(76.23\pm 0.00\) & \(90.91\pm 0.00\) & \(74.34\pm 0.00\) & \(76.91\) & \(0\) \\
\hline
RP (ours) & \(\mathbf{A}\) & \(71.37\pm 0.00\) & \(70.06\pm 0.00\) & \(85.19\pm 0.00\) & \(78.17\pm 0.00\) & \(91.31\pm 0.00\) & \(78.25\pm 0.00\) & \(79.06\) & \(0\) \\
Speedup / step & & \multicolumn{2}{c|}{\(\times 14.48\)} & \multicolumn{2}{c|}{\(\times 14.00\)} & \multicolumn{2}{c|}{\(\times 12.46\)} & & \\
Time to Acc. & & \multicolumn{2}{c|}{\(\times 0.01461\)} & \multicolumn{2}{c|}{\(\times 0.0008\)} & \multicolumn{2}{c|}{\(\times 0.00427\)} & & \\
Memory & & \multicolumn{2}{c|}{\(\times 0.094\)} & \multicolumn{2}{c|}{\(\times 0.363\)} & \multicolumn{2}{c|}{\(\times 0.151\)} & & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Empirical evaluation of RP on OGB datasets. Accuracy is reported for Arxiv and Products, and ROC-AUC for Proteins. The last three rows compare RP against a full-batch GNN.
Figure 2: Learning curves of RP and comparison with the performance of LP (\(\alpha=1\)), linear GNN and deep GNN. Transition from yellow to purple denotes RP with decreasing step size \(\eta\).
learning frameworks. Furthermore, RP achieves its peak performance within a mere \(20\) steps, and thus overall takes orders of magnitude less time (down to \(<0.01\times\) the GNN's training time; see the "Time to Acc." row of Table 1) to attain GNN-level performance. In terms of memory cost, RP inherits the scalability of LP and only requires storage space for the model predictions; in practice, we often find that the memory bottleneck lies in the data preprocessing, \(O(|\mathcal{E}|)\), rather than in the algorithm itself, \(O(n)\).
In Appendices E.1 and E.2 respectively, we discuss extensions of RP that combine with kernels (e.g. the Gaussian kernel) to accommodate node features, and evaluate \(7\) additional datasets (Cora, Citeseer, Pubmed, Computer, Photo, CS, Physics) on which RP still outperforms popular GNNs.
## 4 Learning Dynamics of Graph Neural Networks
We have introduced a simple propagation method that competes with deep GNN models, which raises the question of whether GNNs are also (secretly) taking advantage of a similar residual propagation effect with a graph inductive bias in their optimization process. To examine this possibility, we formally study the learning dynamics of GNNs in function space during GD-based training.
### General Form by Node-Level GNTK
Similar to the learning dynamics of general parameterized models in (5), the learning dynamics of GNNs in node-level tasks is characterized by their NTK defined as follows:
**Definition 4** (Node-Level Graph Neural Tangent Kernel).: _For an \(\ell\)-layer GNN in node-level tasks defined in Sec. 2, the NTK is defined as_
\[\mathbf{\Theta}_{t}^{(\ell)}(\mathbf{x},\mathbf{x}^{\prime};\mathbf{A})=\nabla_{\mathbf{W }}f(\mathbf{x};\mathbf{A})^{\top}\nabla_{\mathbf{W}}f(\mathbf{x}^{\prime};\mathbf{A}), \tag{11}\]
_which we refer to as the Node-Level Graph Neural Tangent Kernel (GNTK) to differentiate it from the graph-level GNTK for graph-level tasks proposed in Du et al. (2019), or simply the NTK of GNNs._
How GNNs evolve in function space during training also follows the residual propagation process:
\[\begin{bmatrix}\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\end{bmatrix}=- \eta\mathbf{\Theta}_{t}^{(\ell)}[\mathcal{R}_{t},\mathbf{0}]+[\mathcal{R}_{t}, \mathcal{R}^{\prime}_{t}]\,,\quad\mathbf{\Theta}_{t}^{(\ell)}\triangleq[\mathbf{ \Theta}_{t}^{(\ell)}(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{A})]_{i,j\in[n]\times[n]}, \tag{12}\]
where the initial values of residuals depend on initial weights. A precise characterization of (12) requires mathematically computing the node-level GNTK, which is prohibitively challenging for finitely-wide GNNs. Fortunately, we can derive its explicit formula in overparameterized regimes where the model width \(m\) tends to infinity, following that of fully-connected neural networks (Jacot et al., 2018; Lee et al., 2019). Given that the limiting NTK in this regime behaves as a constant kernel function (Yang and Littwin, 2021), we change the notation by omitting the subscript \(t\).
Concretely, we present the computation of the node-level GNTK in a recurrent layer-wise manner, where each layer is decomposed into two steps: _Feature Transformation_ (corresp. \(\mathbf{Z}\leftarrow\sigma(\mathbf{Z}\mathbf{W})\)) and _Feature Propagation_ (corresp. \(\mathbf{Z}\leftarrow\mathbf{A}\mathbf{Z}\)). As elements in the computation, we denote the node-level GNTK of the GNN without feature propagation at the \(\ell\)-th layer as \(\bar{\mathbf{\Theta}}^{(\ell)}\), the covariance matrix of the \(\ell\)-th layer's outputs with (without) feature propagation as \(\mathbf{\Sigma}^{(\ell)}\) (\(\bar{\mathbf{\Sigma}}^{(\ell)}\)), and the covariance matrix of the derivative at the \(\ell\)-th layer as \(\dot{\mathbf{\Sigma}}^{(\ell)}\). Then the feature transformation and propagation steps in each layer respectively correspond to (we concisely show the key steps here and defer the complete formula to Appendix B.3):
\[\text{(Feature Transformation): }\ \bar{\mathbf{\Theta}}^{(\ell)}=\mathbf{\Theta}^{(\ell-1)}\odot\dot{\mathbf{\Sigma}}^{(\ell)}+\bar{\mathbf{\Sigma}}^{(\ell)},\qquad\text{(Feature Propagation): }\ \mathbf{\Theta}^{(\ell)}=\mathbf{A}\,\bar{\mathbf{\Theta}}^{(\ell)}\mathbf{A}. \tag{13}\]
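As a concrete illustration of this recursion, the following is a minimal sketch (our own simplification, not the complete formula of Appendix B.3) of the node-level GNTK for an infinitely-wide ReLU GNN under NTK parameterization; the closed-form Gaussian expectations for ReLU use the standard arc-cosine kernel formulas, and the initialization \(\mathbf{\Theta}^{(0)}=\mathbf{\Sigma}^{(0)}=\mathbf{X}\mathbf{X}^{\top}\) is an assumption for illustration.

```python
# A minimal sketch of the node-level GNTK recursion in (13) for an
# infinitely-wide ReLU GNN. `A` is a (possibly normalized) dense adjacency
# matrix (n x n) and `X` the input features (n x d); both numpy arrays.
import numpy as np

def relu_expectations(Sigma):
    """Closed-form E[sigma(u)sigma(v)] and E[sigma'(u)sigma'(v)] for
    (u, v) ~ N(0, Sigma) with ReLU sigma and NTK scaling c_sigma = 2."""
    d = np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma))) + 1e-12
    cos = np.clip(Sigma / d, -1.0, 1.0)
    theta = np.arccos(cos)
    Sigma_bar = d * (np.sin(theta) + (np.pi - theta) * cos) / np.pi
    Sigma_dot = (np.pi - theta) / np.pi
    return Sigma_bar, Sigma_dot

def node_level_gntk(A, X, num_layers):
    Sigma = X @ X.T                 # input covariance Sigma^(0)
    Theta = Sigma.copy()            # Theta^(0), assumed equal to Sigma^(0)
    for _ in range(num_layers):
        Sigma_bar, Sigma_dot = relu_expectations(Sigma)
        Theta = Theta * Sigma_dot + Sigma_bar   # feature transformation in (13)
        Theta = A @ Theta @ A.T                 # feature propagation in (13)
        Sigma = A @ Sigma_bar @ A.T             # propagate the covariance too
    return Theta
```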
**Implications.** Compared with the NTK computation for a fully-connected neural network (Jacot et al., 2018), the node-level GNTK has an equivalent feature transformation step, while its uniqueness stems from the feature propagation step, whereby the adjacency matrix \(\mathbf{A}\) integrates into the kernel similarity measure. Consequently, this kernel function naturally accommodates a graph inductive bias, and thus the residual propagation process of GNNs also tends to follow the trajectory regulated by the graph, similar to the behavior of the RP algorithm. While such an analysis offers valuable mathematical intuitions, we next present how the learning dynamics of certain GNNs in function space can be exactly formulated under the framework of generalized RP.
### Case Studies: from Two Layers to Arbitrary Depth
Given that our primary focus centers on the role of graphs in GNNs (as without them, GNNs are largely equivalent to MLPs), we exclude external node features and instead define inputs as either: 1) an identity matrix \(\bar{\mathcal{X}}\triangleq\mathbf{I}_{n}\) that assigns each node a one-hot vector as an indication of its unique identity (as sometimes assumed in practice (Kipf and Welling, 2017; Zhu et al., 2021)), which can be viewed as learning a unique embedding for each node by treating the first-layer weights as an embedding table; 2) fixed node embeddings from graph spectral decomposition \(\bar{\mathcal{X}}\triangleq\operatorname*{arg\,min}_{\mathbf{B}}\|\mathbf{A}-\mathbf{B}\mathbf{B}^{\top}\|_{F}^{2}\), which aligns with various network embedding approaches based on definitions of \(\mathbf{A}\) (Qiu et al., 2018).
**Two-Layer GNN.** Building on the setup from prior work analyzing infinitely-wide two-layer neural networks (Arora et al., 2019), we consider two-layer GNNs:
**Theorem 5** (Two-Layer GNN Dynamics).: _For an infinitely-wide two-layer GNN defined as \([\mathcal{F},\mathcal{F}^{\prime}]=\mathbf{A}\sigma(\mathbf{A}\bar{\mathcal{ X}}\mathbf{W}^{(1)})\mathbf{W}^{(2)}/\sqrt{m}\) with ReLU activation, \(\bar{\mathcal{X}}=\mathbf{I}_{n}\) and standard NTK parameterization, its learning dynamics by optimizing \(\mathbf{W}^{(1)}\) can be written as a generalized RP process_
\[\left[\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\right]=-\eta\mathbf{A}(\mathbf{A}^{2}\odot\mathbf{S})\mathbf{A}[\mathcal{R}_{t},\mathbf{0}]+[\mathcal{R}_{t},\mathcal{R}^{\prime}_{t}],\ \ \mathbf{S}_{ij}=\left(\pi-\arccos\Big(\frac{\mathbf{A}_{i}^{\top}\mathbf{A}_{j}}{\|\mathbf{A}_{i}\|\,\|\mathbf{A}_{j}\|}\Big)\right)\Big/2\pi. \tag{14}\]
_The matrix \(\mathbf{S}\) reweights each element in \(\mathbf{A}^{2}\) by the similarity of neighborhood distributions of two nodes. For \(\bar{\mathcal{X}}=\operatorname*{arg\,min}_{\mathbf{B}}\|\mathbf{A}-\mathbf{B }\mathbf{B}^{\top}\|_{F}^{2}\), the propagation matrix is replaced by \(\mathbf{A}(\mathbf{A}^{3}\odot\tilde{\mathbf{S}})\mathbf{A}\) where \(\tilde{\mathbf{S}}\) is another reweighting matrix (details given in Appendix B.4)._
**Arbitrarily Deep GNN.** Pushing further, we can also characterize the evolution of arbitrarily deep GNNs where feature propagation is applied at the last layer (e.g. Klicpera et al. (2019); Liu et al. (2020); Spinelli et al. (2020); Chien et al. (2021), called decoupled GNNs in some other literature):
**Theorem 6** (Deep and Wide GNN Dynamics).: _For arbitrarily deep and infinitely-wide GNNs with feature propagation deferred to the last layer, i.e. \([\mathcal{F},\mathcal{F}^{\prime}]=\mathbf{A}^{\ell}\operatorname{MLP}(\bar{ \mathcal{X}})\) with \(\bar{\mathcal{X}}=\mathbf{I}_{n}\), the learning dynamics that result from optimizing MLP weights can be written as the generalized RP process_
\[\left[\mathcal{R}_{t+1},\mathcal{R}^{\prime}_{t+1}\right]=-\eta\mathbf{A}^{ \ell}(\mathbf{I}_{n}+c\mathbf{1}\mathbf{1}^{\top})\mathbf{A}^{\ell}[\mathcal{R }_{t},\mathbf{0}]\ +\ \left[\mathcal{R}_{t},\mathcal{R}^{\prime}_{t}\right], \tag{15}\]
_where \(c\geq 0\) is a constant determined by the model depth, and \(\mathbf{1}\) is an all-\(1\) column vector._
As a special case of the above, when the backbone MLP only has one layer, the model degrades to a linear GNN (e.g. SGC Wu et al. (2019)), and the constant correspondingly is \(c=0\), giving us the basic version of RP in (6) or its approximation for odd \(K\):
**Corollary 7** (Linear GNN Dynamics).: _The learning dynamics of the linear GNN \([\mathcal{F},\mathcal{F}^{\prime}]=\mathbf{A}^{\ell}\bar{\mathcal{X}}\mathbf{W}\) is identical to the basic version of RP in (6) with \(K=2\ell\) for input features \(\bar{\mathcal{X}}=\mathbf{I}_{n}\), and \(K=2\ell+1\) for \(\bar{\mathcal{X}}=\operatorname*{arg\,min}_{\mathbf{B}}\|\mathbf{A}-\mathbf{B }\mathbf{B}^{\top}\|_{F}^{2}\) and positive semi-definite \(\mathbf{A}\)._
_Remark_.: Despite this equivalence, it is important to note that this specific linear GNN (on our given input features) is not as lightweight as it may appear. First, its parameter count scales with the size of the dataset, often reaching orders of magnitude more parameters than deep GNN models (e.g., \(115,104,363\) for the linear model vs. \(103,727\) for GCN on Products). Additionally, the full-rank spectral decomposition of \(\mathbf{A}\) is computationally very expensive (\(O(n^{3})\)) for large graphs. In contrast, the basic version of RP in (6) efficiently yields identical results to this heavily parameterized GNN without actually training parameters or computing expensive matrix decompositions. Furthermore, the generalized RP from (9) can implement algorithms that are infeasible within a conventional deep learning framework, e.g. training infinitely-wide GNNs.
## 5 Rethinking the Success and Pathology of GNNs
In the last section, we have given theoretical analysis with concrete examples to illustrate that NTKs of GNNs naturally tend to align with the graph. This finding, coupled with the consistently strong performance of the RP algorithm -- an example of perfect graph-kernel alignment -- prompts the question of whether this alignment between NTK and the graph plays a crucial role in understanding the success of GNNs. Next, we offer interpretable explanations of "when and why GNNs successfully generalize" and their pathological training behavior on heterophilic graphs (Sec. 5.1). We also study the time evolution of real-world GNN NTKs to empirically verify our theoretical results (Sec. 5.2).
### When and Why GNNs Successfully Generalize?
Our previous discussions have revolved around two matrices, namely the graph adjacency matrix \(\mathbf{A}\) and the NTK matrix \(\mathbf{\Theta}_{t}\).1 To complete the theoretical picture, we introduce another matrix called the ideal _or optimal kernel matrix_(Cristianini et al., 2001), defined as \(\mathbf{\Theta}^{*}\triangleq\tilde{\mathcal{Y}}\tilde{\mathcal{Y}}^{\top}\in \mathbb{R}^{n\times n}\) to indicate whether two instances have the same label, and a metric to quantify alignment of (kernel) matrices:
Footnote 1: We here refer to \(\mathbf{A}\) as a class of similarity matrices based on original \(\mathbf{A}\) in a general sense, such as \(\mathbf{A}^{K}\) etc.
**Definition 8** (Alignment, Cristianini et al. (2001)).: _Given two (kernel) matrices \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\), their alignment is defined as \(A\left(\mathbf{K}_{1},\mathbf{K}_{2}\right)\triangleq\langle\mathbf{K}_{1},\mathbf{K}_{2}\rangle_{F}/(\|\mathbf{K}_{1}\|_{F}\|\mathbf{K}_{2}\|_{F})\in[0,1]\). This is a generalization of cosine similarity from vectors to matrices, the \(\arccos\) of which satisfies the triangle inequality._
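For reference, this alignment is straightforward to compute; below is a minimal sketch assuming dense NumPy matrices of matching shape (the names are ours).

```python
# A minimal sketch of the alignment measure in Definition 8.
import numpy as np

def alignment(K1, K2):
    # Frobenius inner product <K1, K2>_F divided by the Frobenius norms
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

# e.g. the homophily level A(A, Theta*) with Theta* = Y @ Y.T for a one-hot
# label matrix Y: alignment(A_dense, Y @ Y.T)
```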
In the following, we will first take a separate look at the effects of alignment for each pair of the three matrices of interest (cf. Fig. 1(a)), and then discuss their collective implications.
\(\circ\)_Homophily Level:_\(A(\mathbf{A},\mathbf{\Theta}^{*})\). The alignment between \(\mathbf{A}\) and \(\mathbf{\Theta}^{*}\) quantifies the _homophily level_ of the graph structure, i.e. whether two connected nodes indeed have the same label, and is determined by the dataset. While many empirical results (e.g. Zhu et al. (2020)) suggest that a high homophily level is important for the performance of GNNs, deeper theoretical understanding is mostly lacking.
\(\circ\)_Kernel-Target Alignment:_\(A(\mathbf{\Theta}_{t},\mathbf{\Theta}^{*})\). The alignment between kernel matrix and optimal \(\mathbf{\Theta}^{*}\) has been widely studied in the research on kernel learning (Cristianini et al., 2001; Kwok and Tsang, 2003; Lanckriet et al., 2004; Gonen and Alpaydin, 2011). Better kernel-target alignment has been recognized as a critical factor that leads to favorable generalization. For intuition, one can quickly verify that substituting \(\mathbf{\Theta}^{*}\) to the residual dynamics in (5) leads to perfect generalization performance (since ground-truth labels only propagate to unseen instances with the same label).
\(\circ\)_Kernel-Graph Alignment:_\(A(\mathbf{\Theta}_{t},\mathbf{A})\). The alignment between the NTK and the graph is a novel notion in our work, as prior sections have shown that GNN NTK matrices naturally tend to align with \(\mathbf{A}\). E.g., the RP algorithm (and variants thereof) serves as an extreme case where the two matrices are identical.
**Implications.** We consider two cases. For _homophilic_ graphs where \(A(\mathbf{A},\mathbf{\Theta}^{*})\) is naturally large, better kernel-graph alignment \(A(\mathbf{\Theta}_{t},\mathbf{A})\uparrow\) consequently leads to better kernel-target alignment \(A(\mathbf{\Theta}_{t},\mathbf{\Theta}^{*})\uparrow\). In other words, the NTK of GNNs naturally approaches the optimum when the graph structure possesses the homophily property, and leveraging it in the optimization process (12) encourages training residuals to flow to unseen samples with the same label, and thus better generalization. In contrast, for _heterophilic_ graphs where \(A(\mathbf{A},\mathbf{\Theta}^{*})\) is small, better kernel-graph alignment will hinder kernel-target alignment, explaining the pathological learning behavior of GNNs on heterophilic graphs in an interpretable manner.
**Theoretical Support 1.** To support this interpretation, we examine the generalization behavior of infinitely-wide neural networks in the extreme case where their NTK matrix is assumed to be governed by the graph, say, \(\lim_{k\to\infty}\sum_{i=0}^{k}(\alpha\mathbf{A})^{i}\) with \(\alpha\in(0,1)\) as adopted by the converged LP algorithm in (3). With the common assumption that training instances are drawn i.i.d. from a distribution \(\mathcal{P}\), and based on the Rademacher complexity generalization bound for kernel regression (Bartlett and Mendelson, 2002; Arora et al., 2019; Du et al., 2019), we have a label-dependent high-probability (at least \(1-\delta\)) upper bound on the population risk (derivation in Appendix B.7):
\[\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[l\left(f(\mathbf{x}),y\right)\right]= O\left(\sqrt{\frac{n_{l}-cA(\mathbf{A},\mathbf{\Theta}^{*})}{n_{l}}}+\sqrt{ \frac{\log(1/\delta)}{n_{l}}}\right), \tag{16}\]
where \(c=\alpha\|\mathbf{\Theta}^{*}\|_{F}\|\mathbf{A}\|_{F}\) is a constant and \(A(\mathbf{A},\mathbf{\Theta}^{*})\) is the homophily level of the training set (with slight abuse of notation). This bound can also be viewed as a theoretical guarantee for the converged (generalized) RP algorithm, and it clearly demonstrates that _for \(A(\mathbf{\Theta}_{t},\mathbf{A})\) fixed at \(1\), a higher level of graph homophily \(A(\mathbf{A},\mathbf{\Theta}^{*})\) plays a dominant role in better generalization_. The above analysis can also potentially be extended to the infinitely-wide GNNs discussed in Sec. 4 with further assumptions on the condition number of their NTK matrix, which leads to similar (but less straightforward) results.
**Theoretical Support 2.** Pushing further, in a complementary setting to the above, where the objective is to find the optimal a priori kernel for directly minimizing the population risk (but without access to any input features or ground-truth labels), we demonstrate that when homophily assumptions are
imposed on the graph, infinitely-wide GNNs and the RP algorithm will yield the optimal kernel regression predictor with the provably best generalization performance:
**Theorem 9** (Bayesian Optimality of GNN).: _Assume that the underlying data generation distribution \(\mathcal{P}\) is such that the probability of a pair of instances having the same label \(P(y(\mathbf{x}_{i})=y(\mathbf{x}_{j}))\) is proportional to \(\mathbf{A}_{ij}-1/2\). Then the optimal kernel regression predictor that minimizes the population risk with squared loss has kernel matrix \(\mathbf{A}\)._
_Remark_.: The matrix \(\mathbf{A}\) in the above result can vary depending on different assumptions on \(P(y(\mathbf{x}_{i})=y(\mathbf{x}_{j}))\), such as \(\mathbf{A}^{K}\) for the basic version of RP, or \(\mathbf{A}\left(\mathbf{A}^{2}\odot\mathbf{S}\right)\mathbf{A}\) for infinitely-wide two-layer GNNs in Theorem 5. Ideally, if one had privileged access to the labels of all instances, this optimal matrix would be exactly the optimal (posterior) kernel \(\mathbf{\Theta}^{*}\) discussed above, and perfect generalization would be achieved. This result further verifies our interpretation of generalization by showing that _for \(A(\mathbf{A},\mathbf{\Theta}^{*})\) fixed to be large, better \(A(\mathbf{\Theta}_{t},\mathbf{A})\) (e.g. GNN and RP) leads to favorable generalization_.
### Empirical Verification and Further Insights on Heterophily
We next empirically verify results from Sec. 4 and 5.1 by studying the time evolution of real-world GNN NTKs. Figure 3 plots the results reflected by alignment between \(\mathbf{\Theta}_{t}\), \(\mathbf{A}\) and \(\mathbf{\Theta}^{*}\).
**Synthetic Dataset.** To demonstrate the effects of different homophily levels, we use a stochastic block model (Holland et al., 1983) to generate a synthetic dataset with informative node features. As shown in Fig. 3(a), the kernel-graph alignment \(A(\Theta_{t},\mathbf{A})\) for GNNs stays at a high level regardless of different graph structures, as a natural result of the network architecture (Sec. 4). Consequently, as we gradually alter edge probabilities to decrease the homophily level \(A\left(\mathbf{A},\mathbf{\Theta}^{*}\right)\), the kernel-target alignment \(A\left(\Theta_{t},\mathbf{\Theta}^{*}\right)\) also decreases and the testing accuracy drops dramatically (Sec. 5.1).
**Real-World Datasets.** On real-world homophilic and heterophilic datasets, we progressively coarsen the graph until the GNN degrades to an MLP, allowing us to analyze the effects of feature propagation in the model. For now, let us first focus on comparing the red and blue lines in Fig. 3(b,c). We find that the kernel-graph alignment \(A(\mathbf{\Theta}_{t},\mathbf{A})\) overall decreases with less graph structure, again verifying the results in Sec. 4. However, the impact of the graph structure depends on the homophily level \(A\left(\mathbf{A},\mathbf{\Theta}^{*}\right)\): on the homophilic dataset, more graph structure and better \(A(\mathbf{\Theta}_{t},\mathbf{A})\) optimize the NTK matrix, as reflected by better kernel-target alignment \(A\left(\mathbf{\Theta}_{t},\mathbf{\Theta}^{*}\right)\), but worsen it on the heterophilic one. This results in distinct generalization behavior of GNNs, reflected by the testing accuracy.
**Can and How (G)NNs Handle Heterophily?** Recently, there has been a growing discussion on whether standard GNNs are capable of handling heterophily, with empirical results pointing to diverging conclusions (Zhu et al., 2020; Ma et al., 2022; Luan et al., 2022; Platonov et al., 2023). From the learning dynamics perspective, we add new insights to this debate: \(\circ\)**1)** While we have shown that infinitely-wide GNNs are sub-optimal on heterophilic graphs, MLPs in contrast have
Figure 3: Evolution of NTK matrix \(\mathbf{\Theta}_{t}\) of finitely-wide GCN during training, reflected by matrix alignment. **(a)** Synthetic dataset generated by a stochastic block model, where the homophily level gradually decreases by altering edge probabilities, i.e. homophilic \(\rightarrow\) heterophilic; **(b & c)** Real-world homophilic (Cora) and heterophilic (Texas) datasets, where the graph is gradually coarsened until there is no edge left when evaluating \(\mathbf{\Theta}_{t}\), i.e. more graph \(\rightarrow\) less graph. (Details in Appendix D.2)
no guarantee of kernel-target alignment (since neither \(A(\mathbf{A},\mathbf{\Theta}^{*})\) nor \(A(\mathbf{\Theta}_{t},\mathbf{A})\) is well-aligned), indicating that _they could generalize either better or worse without clear-cut (dis)advantages, as opposed to the case on homophilic graphs where GNNs are provably at an advantage_, which explains the diverse empirical results when comparing GNNs with MLPs in prior work. \(\circ\)**2)** Turning to the NTK's time evolution in Fig. 3, an additional phenomenon we found across all datasets is the overall increase of kernel-target alignment \(A\left(\mathbf{\Theta}_{t},\mathbf{\Theta}^{*}\right)\) during the training process. This indicates that training enables real-world GNNs to adjust their NTK's feature space such that the kernel matrix learns towards an ultimately homophilic structure (i.e. \(\mathbf{\Theta}^{*}\)) to adapt to heterophilic datasets (which also explains the different evolutionary trends of \(A\left(\mathbf{\Theta}_{t},\mathbf{A}\right)\) for homophilic and heterophilic datasets). Such a phenomenon has also been found for other models in vision tasks (Baratin et al., 2021). Additionally, since our analysis also applies to other forms of \(\mathbf{A}\) for feature propagation in GNNs, it could potentially explain how some specialized models with different definitions of \(\mathbf{A}\) (e.g. allowing signed propagation) mitigate the heterophily issue.
In Appendix E.3, we consider \(5\) additional heterophilic datasets and compare RP against recently proposed GNNs specially designed for handling heterophily. We find that this simple algorithm still achieves decent performance despite being theoretically sub-optimal.
## 6 More Discussions and Conclusion
**Abridged Related Work.** Most existing work on theoretical aspects of GNNs focuses on representation and generalization (see Jegelka (2022) and references therein), while their optimization properties remain under-explored (Zhang et al., 2020; Xu et al., 2021; Yadati, 2022). Specifically, existing work on representation (or expressiveness) does not answer which GNN functions are actually found during the optimization process. For generalization, prior work is insufficient to explain the effects of training, which is widely recognized as a crucial ingredient, and does not connect to heterophily, a practically relevant aspect. (See the unabridged related work in Appendix A.)
**Applicability.** Our insights apply to both transductive and inductive settings, other loss functions, and multi-dimensional outputs (see Appendix C). The analytical framework could also be extended to other tasks (which are left as future work): for graph classification or regression, the definition of GNTK in (12) should be modified to the graph-level one (Du et al., 2019); for link prediction and self-supervised learning, the loss function in the derivation of (5) should be adjusted accordingly. A similar message passing process in function space would still hold in these settings. Moreover, the proposed RP is compatible with regression tasks due to minimization of the squared loss.
**Broader Impacts.** Theoretically, our framework of interpreting GD dynamics in function space as a label propagation process across seen and unseen samples could be adapted for dissecting other models and tasks, and the analysis in Sec. 5 exemplifies the use of kernel learning to justify model architectures. Practically, the proposed RP unlocks the possibility of leveraging a sparse structure in complicated training dynamics, acting as an efficient forward-only solution for implementing dynamics in function space that were previously impractical or extremely expensive (Arora et al., 2020). More broadly, it also serves as a bridge between classic graph-based algorithms (Chapelle et al., 2009) and modern deep learning, and echoes recent efforts to develop forward-only learning frameworks (Hinton, 2022).
**Conclusion.** This work represents a preliminary endeavor in formally studying the learning dynamics of GNNs in function space. We reveal the alignment between external graph structure and NTK matrix that controls the evolution of models, which enables deeper understanding of the generalization capability and pathological behavior of GNNs by connecting to the study of heterophily, and leads to a minimalist algorithm with superior efficiency, decent performance and theoretical interpretability.
## Ethics Statement
In this research, we investigate the learning dynamics of GNNs in function space and offer insights to their optimization and generalization behaviors. Our insights can deepen understandings for graph-based deep learning, and do not appear to carry immediate harmful consequences to the best of our knowledge.
## Reproducibility Statement
The complete proofs of our theoretical results and their assumptions are provided in Appendix B. Extensive implementation details of the experiments from Section 3.2 and Section 5.2 regarding baselines, hyperparameters and datasets are provided in Appendix D.1 and D.2. Detailed descriptions for the implementation of different versions of RP algorithms are given in Appendix D.3.
|
2310.19263 | A Metadata-Driven Approach to Understand Graph Neural Networks | Graph Neural Networks (GNNs) have achieved remarkable success in various
applications, but their performance can be sensitive to specific data
properties of the graph datasets they operate on. Current literature on
understanding the limitations of GNNs has primarily employed a
$\textit{model-driven}$ approach that leverages heuristics and domain knowledge
from network science or graph theory to model the GNN behaviors, which is
time-consuming and highly subjective. In this work, we propose a
$\textit{metadata-driven}$ approach to analyze the sensitivity of GNNs to graph
data properties, motivated by the increasing availability of graph learning
benchmarks. We perform a multivariate sparse regression analysis on the
metadata derived from benchmarking GNN performance across diverse datasets,
yielding a set of salient data properties. To validate the effectiveness of our
data-driven approach, we focus on one identified data property, the degree
distribution, and investigate how this property influences GNN performance
through theoretical analysis and controlled experiments. Our theoretical
findings reveal that datasets with more balanced degree distribution exhibit
better linear separability of node representations, thus leading to better GNN
performance. We also conduct controlled experiments using synthetic datasets
with varying degree distributions, and the results align well with our
theoretical findings. Collectively, both the theoretical analysis and
controlled experiments verify that the proposed metadata-driven approach is
effective in identifying critical data properties for GNNs. | Ting Wei Li, Qiaozhu Mei, Jiaqi Ma | 2023-10-30T04:25:02Z | http://arxiv.org/abs/2310.19263v1 | # A Metadata-Driven Approach to Understand Graph Neural Networks
###### Abstract
Graph Neural Networks (GNNs) have achieved remarkable success in various applications, but their performance can be sensitive to specific data properties of the graph datasets they operate on. Current literature on understanding the limitations of GNNs has primarily employed a _model-driven_ approach that leverages heuristics and domain knowledge from network science or graph theory to model the GNN behaviors, which is time-consuming and highly subjective. In this work, we propose a _metadata-driven_ approach to analyze the sensitivity of GNNs to graph data properties, motivated by the increasing availability of graph learning benchmarks. We perform a multivariate sparse regression analysis on the metadata derived from benchmarking GNN performance across diverse datasets, yielding a set of salient data properties. To validate the effectiveness of our data-driven approach, we focus on one identified data property, the degree distribution, and investigate how this property influences GNN performance through theoretical analysis and controlled experiments. Our theoretical findings reveal that datasets with a more balanced degree distribution exhibit better linear separability of node representations, thus leading to better GNN performance. We also conduct controlled experiments using synthetic datasets with varying degree distributions, and the results align well with our theoretical findings. Collectively, both the theoretical analysis and controlled experiments verify that the proposed metadata-driven approach is effective in identifying critical data properties for GNNs.
## 1 Introduction
Graph Neural Networks (GNNs), as a broad family of graph machine learning models, have gained increasing research interests in recent years. However, unlike the ResNet model [14] in computer vision or the Transformer model [36] in natural language processing, there has not been a dominant GNN architecture that is universally effective across a wide range of graph machine learning tasks. This may be attributed to the inherently diverse nature of graph-structured data, which results in the GNN performance being highly sensitive to specific properties of the graph datasets. Consequently, GNNs that demonstrate high performance on certain benchmark datasets often underperform on others with distinct properties. For example, early GNNs have been shown to exhibit degraded performance when applied to non-homophilous graph datasets, where nodes from different classes are highly interconnected and mixed [45; 46; 32; 11; 9].
However, it is non-trivial to identify and understand critical graph data properties that are highly influential on GNN performance. Current literature primarily employs what we term a _model-driven_ approach, which attempts to model GNN performance using specific heuristics or domain knowledge derived from network science or graph theory [41; 45]. Although this approach can offer an in-depth understanding of GNN performance, it can also be time-consuming and subjective, and it may not fully capture the entire spectrum of relevant data properties.
To address these limitations and complement the model-driven approach, we propose a _metadata-driven approach_ to identify critical data properties affecting GNN performance. With the increasing availability of diverse benchmark datasets for graph machine learning [16; 27], we hypothesize that critical graph data properties can be inferred from the benchmarking performance of GNNs on these datasets, which can be viewed as the metadata of the datasets. More concretely, we carry out a multivariate sparse regression analysis on the metadata obtained from large-scale benchmark experiments [27] involving multiple GNN models and a variety of graph datasets. Through this regression analysis, we examine the correlation between GNN performance and the data properties of each dataset, thereby identifying a set of salient data properties that significantly influence GNN performance.
To validate the effectiveness of the proposed metadata-driven approach, we further focus on a specific salient data property, degree distribution, identified from the regression analysis, and investigate the mechanism by which this data property affects GNN performance. In particular, our regression analysis reveals a decline in GNN performance as the degree distribution becomes more imbalanced. We delve deeper into this phenomenon through a theoretical analysis and a controlled experiment.
We initiate our investigation with a theoretical analysis of the GNN performance under the assumption that the graph data is generated by a Degree-Corrected Contextual Stochastic Block Model (DC-CSBM). Here, we define DC-CSBM by combining and generalizing the Contextual Stochastic Block Model [4] and the Degree-Corrected Stochastic Block Model [17]. Building upon the analysis by Baranwal et al. [3], we establish a novel theoretical result on how the degree distribution impacts the linear separability of the GNN representations and, subsequently, the GNN performance. Within the DC-CSBM context, our theory suggests that a more imbalanced degree distribution leads to fewer nodes being linearly separable in their GNN representations, thus negatively impacting GNN performance.
Complementing our theoretical analysis, we conduct a controlled experiment, evaluating GNN performance on synthetic graph datasets with varying degree distribution while holding other properties fixed. Remarkably, we observe a consistent decline in GNN performance correlating with the increase of the Gini coefficient of degree distribution, which reflects the imbalance of degree distribution. This observation further corroborates the findings of our metadata-driven regression analysis.
In summary, our contribution in this paper is two-fold. Firstly, we introduce a novel metadata-driven approach to identify critical graph data properties affecting GNN performance and demonstrate its effectiveness through a case study on a specific salient data property identified by our approach. Secondly, we develop an in-depth understanding of how the degree distribution of graph data influences GNN performance through both a novel theoretical analysis and a carefully controlled experiment, which is of interest to the graph machine learning community in its own right.
## 2 Related Work
### Analysis on the Limitations of GNNs
There has been a wealth of existing literature investigating the limitations of GNNs. However, most of the previous works employ the model-driven approach. Below we summarize a few well-known limitations of GNNs while acknowledging that an exhaustive review of the literature is impractical. Among the limitations identified, GNNs have been shown to be sensitive to the extent of homophily in graph data, and applying GNNs to non-homophilous data often has degraded performance [1; 9; 23; 46; 45]. In addition, over-smoothing, a phenomenon where GNNs lose their discriminative power with deeper layers [20; 34; 6], is a primary concern particularly for node-level prediction tasks where distinguishing the nodes within the graph is critical. Further, when applied to graph-level prediction tasks, GNNs are limited by their ability to represent and model specific functions or patterns on graph-structured data, an issue often referred to as the expressiveness problem of GNNs. [41; 30; 25; 43]. Most of these limitations are understood through a _model-driven_ approach, which offers in-depth insights but is time-consuming and highly subjective. In contrast, this paper presents a _metadata-driven_ approach, leveraging metadata from benchmark datasets to efficiently screen through a vast array of data properties.
### Data-Driven Analysis in Graph Machine Learning
With the increasing availability of graph learning benchmarks, there have been several recent studies that leverage diverse benchmarks for data-driven analysis. For example, Liu et al. [24] presents a principled pipeline to taxonomize benchmark datasets. Specifically, by applying a number of different perturbation methods on each dataset and obtaining the sensitivity profile of the resulting GNN performance on perturbed datasets, they perform hierarchical clustering on these sensitivity profiles to cluster statistically similar datasets. However, this study only aims to categorize datasets instead of identifying salient data properties that influence GNN performance. Ma et al. [27] establish a Graph Learning Indexer (GLI) library that curates a large collection of graph learning benchmarks and GNN models and conducts a large-scale benchmark study. We obtain our metadata from their benchmarks. Palowitch et al. [31] introduce a GraphWorld library that can generate diverse synthetic graph datasets with various properties. These synthetic datasets can be used to test GNN models through controlled experiments. In this paper, we have used this library to verify the effectiveness of the identified critical data properties.
### Impact of Node Degrees on GNN Performance
There have also been a few studies investigating the impact of node degrees on GNNs. In particular, it has been observed that within a single graph dataset, there tends to be an accuracy discrepancy among nodes with varying degrees [35; 22; 44; 39]. Typically, GNN predictions on nodes with lower degrees tend to have lower accuracy. However, the finding that the Gini coefficient of the degree distribution is a strong indicator of GNN performance is novel. Furthermore, this indicator describes dataset-level characteristics, allowing GNN performance to be compared across different graph datasets. In addition, this paper presents a novel theoretical analysis that directly relates the degree distribution to the generalization performance of GNNs.
## 3 A Metadata-Driven Analysis on GNNs
### Understanding GNNs with Metadata
**Motivation.** Real-world graph data are heterogeneous and incredibly diverse, contrasting with images or texts that often possess common structures or vocabularies. The inherent diversity of graph data makes it particularly challenging, if not infeasible, to have one model rule all tasks and datasets in the graph machine learning domain. Indeed, specific types of GNN models often only perform well on a selected set of graph learning datasets. For example, the expressive power of GNNs [41] is primarily relevant to graph-level prediction tasks rather than node-level tasks: higher-order GNNs with improved expressive power are predominantly evaluated on graph-level prediction tasks [30; 41]. As another example, several early GNNs such as Graph Convolutional Networks (GCN) [19] or Graph Attention Networks (GAT) [37] only work well when the graphs exhibit homophily [45]. Consequently, it becomes crucial to identify and understand the critical data properties that influence the performance of different GNNs, allowing for more effective model design and selection.
The increasing availability of graph learning benchmarks that offer a wide range of structural and feature variations [16; 27] presents a valuable opportunity: one can possibly infer critical data properties from the performance of GNNs on these datasets. To systematically identify these critical data properties, we propose to conduct a regression analysis on the metadata of the benchmarks.
**Regression Analysis on Metadata.** In the regression analysis, the performance metrics of various GNN models on each dataset serve as the dependent variables, while the extracted data properties from each dataset act as the independent variables. Formally, we denote the number of datasets as \(n\), the number of GNN models as \(q\), and the number of data properties as \(p\). Define the response variables \(\{\mathbf{y}_{i}\}_{i\in[q]}\) to be the GNN model performance on each dataset and the covariate variables \(\{\mathbf{x}_{j}\}_{j\in[p]}\) to be the properties of each dataset. Note that \(\mathbf{y}_{i}\in\mathbb{R}^{n},\forall i\in[q]\) and \(\mathbf{x}_{j}\in\mathbb{R}^{n},\forall j\in[p]\). For ease of notation, we define \(\mathbf{Y}=(\mathbf{y}_{1},...,\mathbf{y}_{q})\in\mathbb{R}^{n\times q}\) to be the response matrix of \(n\) samples and \(q\) variables, and \(\mathbf{X}=(\mathbf{x}_{1},...,\mathbf{x}_{p})\in\mathbb{R}^{n\times p}\) to be the covariate matrix of \(n\) samples and \(p\) variables.
Given these data matrices, we establish the following multivariate linear model to analyze the relationship between response matrix \(\mathbf{Y}\) and covariate matrix \(\mathbf{X}\), which is characterized by the coefficient matrix \(\mathbf{B}\).
**Definition 3.1** (Multivariate Linear Model).: \[\mathbf{Y}=\mathbf{X}\mathbf{B}+\mathbf{W},\] (1)
_where \(\mathbf{B}\in\mathbb{R}^{p\times q}\) is the coefficient matrix and \(\mathbf{W}=(\mathbf{w}_{1},...,\mathbf{w}_{q})\in\mathbb{R}^{n\times q}\) is the matrix of error terms._
Our goal is to find the most salient data properties that correlate with the performance of GNN models given a number of samples. To this end, we introduce two sparse regularizers for feature selections, which leads to the following Multivariate Sparse Group Lasso problem.
**Definition 3.2** (Multivariate Sparse Group Lasso Problem).: \[\underset{\mathbf{B}}{\operatorname{argmin}}\ \frac{1}{2n}\|\mathbf{Y}-\mathbf{X}\mathbf{B}\|_{F}^{2}+\lambda_{1}\|\mathbf{B}\|_{1}+\lambda_{g}\|\mathbf{B}\|_{2,1},\] (2)
_where \(\|\mathbf{B}\|_{1}=\sum_{i=1}^{p}\sum_{j=1}^{q}|\mathbf{B}_{ij}|\) is the \(L_{1}\) norm of \(\mathbf{B}\), \(\|\mathbf{B}\|_{2,1}=\sum_{i=1}^{p}\sqrt{\sum_{j=1}^{q}\mathbf{B}_{ij}^{2}}\) is the \(L_{2,1}\) group norm of \(\mathbf{B}\), and \(\lambda_{1},\lambda_{g}>0\) are the corresponding penalty parameters._
In particular, the \(L_{1}\) penalty encourages the coefficient matrix \(\mathbf{B}\) to be sparse, only selecting salient data properties. The \(L_{2,1}\) penalty further leverages the structure of the dependent variables and tries to make only a small set of the GNN models' performance depends on each data property, thus differentiating the impacts on different GNNs.
To solve for the coefficient matrix \(\mathbf{B}\) in Equation 2, we employ the R package MSGLasso [21], using the matrices \(\mathbf{Y}\) and \(\mathbf{X}\) as input. To ensure proper input for the MSGLasso solver [21], we preprocess the data by standardizing the columns of both \(\mathbf{Y}\) and \(\mathbf{X}\).
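The paper solves (2) with the R package MSGLasso; purely as an illustration of the objective, the following is a minimal proximal-gradient sketch in Python, where the grouping is simplified to one group per row of \(\mathbf{B}\) (one data property across all \(q\) models) and all names and hyperparameter values are our own assumptions.

```python
# An illustrative proximal-gradient sketch of the objective in (2);
# not the MSGLasso implementation.
import numpy as np

def multivariate_sparse_group_lasso(X, Y, lam1=0.01, lam_g=0.01,
                                    lr=1e-3, iters=5000):
    n, p = X.shape
    q = Y.shape[1]
    B = np.zeros((p, q))
    for _ in range(iters):
        grad = X.T @ (X @ B - Y) / n          # gradient of (1/2n)||Y - XB||_F^2
        B = B - lr * grad
        # prox of lam1 * ||B||_1 (soft-thresholding)
        B = np.sign(B) * np.maximum(np.abs(B) - lr * lam1, 0.0)
        # prox of lam_g * ||B||_{2,1} (row-wise group shrinkage)
        row_norms = np.linalg.norm(B, axis=1, keepdims=True)
        B = B * np.maximum(1.0 - lr * lam_g / (row_norms + 1e-12), 0.0)
    return B

# X (n x p) and Y (n x q) are assumed column-standardized, as in the paper.
```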
### Data Properties and Model Performance
Next, we introduce the metadata used for the regression analysis. We obtain both the benchmark datasets and the model performance using the Graph Learning Indexer (GLI) library [27].
**Data Properties.** We include the following benchmark datasets in our regression analysis: cora [42], citeseer [42], pubmed [42], texas [33], cornell [33], wisconsin [33], actor [33], squirrel [33], chameleon [33], arxiv-year [23], snap-patents [23], penn94 [23], pokec [23], genius [23], and twitch-gamers [23].
For each graph dataset, we calculate 15 data properties, which can be categorized into the following six groups:
* _Basic_: Edge Density, Average Degree, Degree Assortativity;
* _Distance_: Pseudo Diameter;
* _Connectivity_: Relative Size of Largest Connected Component (RSLCC);
* _Clustering_: Average Clustering Coefficient (ACC), Transitivity, Degeneracy;
* _Degree Distribution_: Gini Coefficient of Degree Distribution (Gini-Degree; a code sketch follows after this list);
* _Attribute_: Edge Homogeneity, In-Feature Similarity, Out-Feature Similarity, Feature Angular SNR, Homophily Measure, Attribute Assortativity.
The formal definition of these graph properties can be found in Appendix A.
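As an example of computing one of these properties, below is a minimal sketch of Gini-Degree using the standard Gini coefficient formula on a graph's degree sequence; the paper's exact definition is in Appendix A, so treating it as the standard Gini coefficient is our assumption.

```python
# A minimal sketch of Gini-Degree: the Gini coefficient of a graph's
# degree sequence (0 for a regular graph, approaching 1 for heavy skew).
import numpy as np

def gini_degree(degrees):
    d = np.sort(np.asarray(degrees, dtype=float))  # ascending degree sequence
    n = d.size
    idx = np.arange(1, n + 1)                      # 1-based ranks
    return (2.0 * np.sum(idx * d)) / (n * np.sum(d)) - (n + 1.0) / n

# gini_degree([3, 3, 3, 3]) == 0.0 for a regular graph; heavily skewed
# degree sequences yield values approaching 1.
```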
**Model Performance.** For GNN models, we include GCN [19], GAT [37], GraphSAGE [13], MoNet [29], MixHop [1], and LINKX [23] in our regression analysis. We also include a non-graph model, the Multi-Layer Perceptron (MLP). The complete experimental setup for these models can be found in Appendix B.
### Analysis Results
The estimated coefficient matrix \(\mathbf{B}\) is presented in Table 1. As can be seen, the estimated coefficient matrix is fairly sparse, allowing us to identify salient data properties. Next, we will discuss the six most salient data properties that correlate to some or all of the GNN models' performance. For the data properties that have an impact on all GNNs' performance, we call them **Widely Influential Factors**; for the data properties that have an impact on over one-half of GNNs' performance, we call them **Narrowly Influential Factors**. Notice that the \((+,-)\) sign after the name of the factors indicates whether this data property has a positive or negative correlation with the GNN performance.
**Widely Influential Factors.** We discover that the Gini coefficient of the degree distribution (Gini-Degree), Edge Homogeneity, and In-Feature Similarity consistently impact the performance of all GNN models.
* _Gini-Degree_\((-)\) measures how the graph's degree distribution deviates from the perfectly equal distribution, i.e., a regular graph. This is a crucial data property that dramatically influences GNNs' performance but remains under-explored in prior literature.
* _Edge Homogeneity_\((+)\) is a salient indicator for all GNN models' performance. This phenomenon coincides with the fact that various GNNs assume strong homophily condition [28] to obtain improvements on node classification tasks [13; 19; 37].
* _In-feature Similarity_\((+)\) calculates the average of feature similarity within each class. Under the homophily assumption, GNNs work better when nodes with the same labels additionally have similar node features, which also aligns with existing findings in the literature [15].
**Narrowly Influential Factors.** We find that Average Degree, Pseudo Diameter, and Feature Angular SNR are salient factors for a subset of GNN models, although we do not yet have a good understanding of the mechanisms by which these data properties impact model performance.
* _Average Degree_\((+)\) is more significant for GCN, GraphSAGE, MoNet, and LINKX.
* _Pseudo Diameter_\((-)\) is more significant for GAT, GraphSAGE, MixHop, LINKX, and MLP.
* _Feature Angular SNR_\((+)\) is more significant for GCN, GraphSAGE, MixHop, LINKX, and MLP.
\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
Graph Data Property & GCN & GAT & GraphSAGE & MoNet & MixHop & LINKX & MLP \\
\hline
Edge Density & 0 & 0 & 0 & 0 & 0 & 0.0253 & 0.0983 \\
**Average Degree** & 0.2071 & 0 & 0.1048 & 0.1081 & 0 & 0.3363 & 0 \\
**Pseudo Diameter** & 0 & -0.349 & -0.1531 & 0 & -0.4894 & -0.3943 & -0.6119 \\
Degree Assortativity & 0 & 0 & 0 & -0.0744 & 0 & 0 & 0 \\
RSLCC & 0.1019 & 0 & 0 & 0.0654 & 0 & 0.1309 & 0 \\
ACC & 0 & 0 & 0 & 0 & 0 & 0 & -0.0502 \\
Transitivity & 0 & -0.0518 & 0 & -0.1372 & 0 & 0.2311 & 0 \\
Degeneracy & 0 & 0 & 0 & 0 & 0 & 0 & -0.1657 \\
**Gini-Degree** & -0.4403 & -0.2961 & -0.3267 & -0.2944 & -0.4205 & -0.367 & -0.1958 \\
**Edge Homogeneity** & 0.7094 & 0.4705 & 0.7361 & 0.8122 & 0.6407 & 0.2006 & 0.4776 \\
**In-Feature Similarity** & 0.3053 & 0.1081 & 0.1844 & 0.1003 & 0.4613 & 0.6396 & 0.2399 \\
Out-Feature Similarity & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
**Feature Angular SNR** & 0.2522 & 0 & 0.2506 & 0 & 0.2381 & 0.3563 & 0.3731 \\
Homophily Measure & 0 & 0.4072 & 0 & 0 & 0 & 0 & 0 \\
Attribute Assortativity & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The estimated coefficient matrix \(\mathbf{B}\) of the multivariate sparse regression analysis. Each entry indicates the strength (magnitude) and direction \((+,-)\) of the relationship between a graph data property and the performance of a GNN model. The six most salient data properties are indicated in **bold**.
We note that the regression analysis only indicates associative relationships between data properties and model performance. While our analysis has successfully identified well-known influential data properties, e.g., Edge Homogeneity, the mechanisms through which most of the identified data properties impact GNN performance remain under-explored.
To further verify the effectiveness of the proposed metadata-driven approach in identifying critical data properties, we perform an in-depth analysis for _Gini-Degree_, one of the most widely influential factors. In the following Section 4 and 5, we conduct theoretical analysis and controlled experiments to understand how Gini-Degree influences GNNs' performance.
## 4 Theoretical Analysis on the Impact of Degree Distribution
In this section, we present a theoretical analysis of the influence of the graph data's degree distribution on the performance of GNNs. Specifically, our analysis investigates the linear separability of node representations produced by applying graph convolution to the node features. In the case that the graph data comes from a Degree-Corrected Contextual Stochastic Block Model, we show that nodes from different classes are more separable when their degrees exceed a threshold. This separability result relates the graph data's degree distribution to the GNN performance. Finally, we discuss the role of Gini-Degree in GNN performance using the implications of our theory.
### Notations and Sketch of Analysis
**The Graph Data.** Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) be an undirected graph, where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of edges. The information regarding the connections within the graph can also be summarized as an adjacency matrix \(\mathbf{A}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\), where \(|\mathcal{V}|\) is the number of nodes in the graph \(\mathcal{G}\). Each node \(i\in\mathcal{V}\) possesses a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\). The features for all nodes in \(\mathcal{G}\) can be stacked and represented as a feature matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\). In the context of node classification, each node \(i\) is associated with a class label \(y_{i}\in\mathcal{C}\), where \(\mathcal{C}\) is the set of labels.
**Graph Convolutional Network [19].** In our analysis, we consider a single-layer graph convolution, which can be defined as an operation on the adjacency matrix and feature matrix of a graph \(\mathcal{G}\) to produce a new feature matrix \(\tilde{\mathbf{X}}\). Formally, the output of a single-layer graph convolution operation can be represented as \(\tilde{\mathbf{X}}=\mathbf{D}^{-1}\tilde{\mathbf{A}}\mathbf{X}\), where \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) is the augmented adjacency matrix with added self-loops, and \(\mathbf{D}\) is the diagonal degree matrix with \(\mathbf{D}_{ii}=\text{deg}(i)=\sum_{j\in[n]}\tilde{\mathbf{A}}_{ij}\). Hence, for each node \(i\in\mathcal{V}\), the new node representation becomes \(\tilde{\mathbf{x}}_{i}\in\mathbb{R}^{d}\), which is the \(i\)th row of the output matrix \(\tilde{\mathbf{X}}\).
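For illustration, this single-layer graph convolution can be sketched in a few lines of NumPy (a dense adjacency matrix is assumed for simplicity).

```python
# A minimal sketch of the single-layer graph convolution above,
# X_tilde = D^{-1} (A + I) X, assuming a dense numpy adjacency matrix.
import numpy as np

def graph_convolution(A, X):
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    deg = A_tilde.sum(axis=1, keepdims=True)  # D_ii = deg(i)
    return (A_tilde @ X) / deg                # row-normalized neighborhood mean
```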
**Sketch of Our Analysis.** Our analysis builds upon and generalizes the theoretical framework introduced by Baranwal et al. [3], who demonstrate that, in comparison to raw node features, the graph convolution representations of nodes have better linear separability if the graph data comes from a Contextual Stochastic Block Model (CSBM) [4; 8]. However, in CSBM, the nodes within the same class all have similar degrees with high probability, which prevents us from drawing meaningful conclusions about the impact of degree distribution.
To better understand the role of degree distribution in GNN performance, we develop a non-trivial generalization of the theory by Baranwal et al. [3]. Specifically, we first coin a new graph data generation model, the Degree-Corrected Contextual Stochastic Block Model (DC-CSBM), which combines and generalizes the Degree-Corrected SBM (DC-SBM) [17] and the CSBM, taking heterogeneity in node degrees into consideration. Under DC-CSBM, we find that node degrees play a crucial role in the statistical properties of the node representations, and the node degrees have to exceed a certain threshold in order for the node representations to sufficiently leverage the neighborhood information and become reliably separable. Notably, incorporating node degree heterogeneity into the analysis requires a non-trivial adaptation of the analysis by Baranwal et al. [3].
### Degree-Corrected Contextual Stochastic Block Model (DC-CSBM)
In this section, we introduce the DC-CSBM that models the generation of graph data. Specifically, we assume the graph data is randomly sampled from a DC-CSBM with two classes.
DC-CSBM With 2 Classes.Let us define the class assignments \((\epsilon_{i})_{i\in[n]}\) as independent and identically distributed (i.i.d.) Bernoulli random variables drawn from \(\text{Ber}(\frac{1}{2})\), where \(n=|\mathcal{V}|\) is the number of nodes in the graph \(\mathcal{G}\). These class assignments divide the \(n\) nodes into 2 classes: \(C_{0}=\{i\in[n]:\epsilon_{i}=0\}\) and \(C_{1}=\{i\in[n]:\epsilon_{i}=1\}\). Assume that the intra-class edge probability is \(p\), the inter-class edge probability is \(q\), and no self-loops are allowed. For each node \(i\), we additionally introduce a degree-correction parameter \(\theta_{i}\in(0,n]\), which can be interpreted as the propensity of node \(i\) to connect with others. Note that to keep the DC-CSBM identifiable and easier to analyze, we adopt a normalization rule to enforce the following constraint: \(\sum_{i\in C_{0}}\theta_{i}=|C_{0}|\), \(\sum_{i\in C_{1}}\theta_{i}=|C_{1}|\), and thus \(\sum_{i\in\mathcal{V}}\theta_{i}=n\).
Assumptions on Adjacency Matrix and Feature Matrix.Conditioning on \((\epsilon_{i})_{i\in[n]}\), each entry of the adjacency matrix \(\mathbf{A}\) is a Poisson random variable with \(\mathbf{A}_{ij}\sim\text{Poi}(\theta_{i}\theta_{j}p)\) if \(i,j\) are in the same class and \(\mathbf{A}_{ij}\sim\text{Poi}(\theta_{i}\theta_{j}q)\) if \(i,j\) are in different classes. On top of this, let \(\mathbf{X}\in\mathbb{R}^{n\times d}\) be the feature matrix where each row \(\mathbf{x}_{i}\) represents the node feature of node \(i\). Assume each \(\mathbf{x}_{i}\) is an independent \(d\)-dimensional Gaussian random vector with \(\mathbf{x}_{i}\sim\mathcal{N}(\boldsymbol{\mu},\frac{1}{d}\mathbf{I})\) if \(i\in C_{0}\) and \(\mathbf{x}_{i}\sim\mathcal{N}(\boldsymbol{\nu},\frac{1}{d}\mathbf{I})\) if \(i\in C_{1}\). We let \(\boldsymbol{\mu},\boldsymbol{\nu}\in\mathbb{R}^{d}\) be fixed \(d\)-dimensional vectors with \(\|\boldsymbol{\mu}\|_{2},\|\boldsymbol{\nu}\|_{2}\leq 1\), which serve as the Gaussian means for the two classes.
Given a particular choice of \(n,\boldsymbol{\mu},\boldsymbol{\nu},p,q\) and \(\theta=(\theta_{i})_{i\in[n]}\), we can define a class of random graphs generated by these parameters and sample a graph from such DC-CSBM as \(\mathcal{G}=(\mathbf{A},\mathbf{X})\sim\text{DC-CSBM}(n,\boldsymbol{\mu}, \boldsymbol{\nu},p,q,\theta)\).
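For concreteness, below is a sketch of sampling \((\mathbf{A},\mathbf{X})\sim\text{DC-CSBM}(n,\boldsymbol{\mu},\boldsymbol{\nu},p,q,\theta)\); the Pareto draw used to produce heterogeneous \(\theta\) and all numeric values are illustrative choices, not part of the model definition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, q = 200, 16, 0.10, 0.02          # illustrative parameters

eps = rng.integers(0, 2, size=n)          # class assignments ~ Ber(1/2)

# Heterogeneous degree-correction parameters, normalized so that they sum to
# the class size within each class (the identifiability constraint above).
theta = rng.pareto(2.5, size=n) + 1.0
for c in (0, 1):
    mask = eps == c
    theta[mask] *= mask.sum() / theta[mask].sum()

# A_ij ~ Poi(theta_i * theta_j * p) intra-class and Poi(theta_i * theta_j * q)
# inter-class; symmetrized, with no self-loops.
rate = np.where(eps[:, None] == eps[None, :], p, q) * np.outer(theta, theta)
A = np.triu(rng.poisson(rate), 1)
A = A + A.T

# Gaussian features: N(mu, I/d) for class 0 and N(nu, I/d) for class 1.
mu = np.ones(d) / np.sqrt(d)              # ||mu||_2 = 1
nu = -mu
X = rng.normal(scale=1 / np.sqrt(d), size=(n, d)) + np.where(eps[:, None] == 0, mu, nu)
```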
### Linear Separability After Graph Convolution
Linear Separability.Linear separability refers to the ability to linearly differentiate nodes in the two classes based on their feature vectors. Formally, for any \(\mathcal{V}_{s}\subseteq\mathcal{V}\), we say that \(\{\tilde{\mathbf{x}}_{i}:i\in\mathcal{V}_{s}\}\) is linearly separable if there exists some unit vector \(\mathbf{v}\in\mathbb{R}^{d}\) and a scalar \(b\) such that \(\mathbf{v}^{\top}\tilde{\mathbf{x}}_{i}+b<0,\forall i\in C_{0}\cap\mathcal{V} _{s}\) and \(\mathbf{v}^{\top}\tilde{\mathbf{x}}_{i}+b>0,\forall i\in C_{1}\cap\mathcal{V} _{s}\). Note that linear separability is closely related to GNN performance. Intuitively, more nodes being linearly separable will lead to better GNN performance.
Degree-Thresholded Subgroups of \(C_{0}\) and \(C_{1}\).To better control the behavior of the graph convolution operation, we will focus on particular subgroups of \(C_{0}\) and \(C_{1}\) whose member nodes have degree-correction parameters larger than or equal to a pre-defined threshold \(\alpha>0\). Slightly abusing notation, we denote these subgroups as \(C_{0}(\alpha)\) and \(C_{1}(\alpha)\), which are formally defined below.
**Definition 4.1** (\(\alpha\)-Subgroups).: _Given any \(\alpha\in(0,n]\), define \(\alpha\)-subgroups of \(C_{0}\) and \(C_{1}\) as follows:_
\[C_{0}(\alpha) =\{j\in[n]:\theta_{j}\geq\alpha\text{ and }j\in C_{0}\},\] \[C_{1}(\alpha) =\{j\in[n]:\theta_{j}\geq\alpha\text{ and }j\in C_{1}\}.\]
Let \(\mathcal{V}_{\alpha}:=C_{0}(\alpha)\cup C_{1}(\alpha)\); we are interested in analyzing the linear separability of the node representations after the graph convolution operation, namely \(\{\tilde{\mathbf{x}}_{i}:i\in\mathcal{V}_{\alpha}\}\). Recall that for each node \(i\), \(\tilde{\mathbf{x}}_{i}=\frac{1}{\text{deg}(i)}\sum_{j\in\mathcal{N}(i)}\mathbf{ x}_{j}\), where \(\mathcal{N}(i)\) is the set of neighbors of node \(i\) (including \(i\) itself, due to the added self-loops).
Relationship Between \(\alpha\) and Linear Separability.We first make the following assumptions about the DC-CSBM, closely following the assumptions made by Baranwal et al. [3].
**Assumption 4.2** (Graph Size).: _Assume the relationship between the graph size \(n\) and the feature dimension \(d\) follows \(\omega(d\log d)\leq n\leq O(\text{poly}(d))\)._
**Assumption 4.3** (Edge Probabilities).: _Define \(\Gamma(p,q):=\frac{p-q}{p+q}\). Assume the edge probabilities \(p,q\) satisfy \(p,q=\omega(\log^{2}(n)/n)\) and \(\Gamma(p,q)=\Omega(1)\)._
Theorem 4.4 asserts that if the threshold \(\alpha\) is not too small, then the set \(\mathcal{V}_{\alpha}=C_{0}(\alpha)\cup C_{1}(\alpha)\) can be linearly separated with high probability. The proof of Theorem 4.4 can be found in Appendix C.
**Theorem 4.4** (Linear Separability of \(\alpha\)-Subgroups).: _Suppose that Assumption 4.2 and 4.3 hold. For any \((\mathbf{X},\mathbf{A})\sim\text{DC-CSBM}(n,\boldsymbol{\mu},\boldsymbol{\nu},p,q,\theta)\), if \(\alpha=\omega\left(\max\left(\frac{1}{\log n},\frac{\log n}{dn(p+q)\|\boldsymbol {\mu}-\boldsymbol{\nu}\|_{2}^{2}}\right)\right)\), then_
\[\mathbb{P}(\{\tilde{\mathbf{x}}_{i}:i\in\mathcal{V}_{\alpha}\}\text{ is linearly separable})=1-o_{d}(1),\]
_where \(o_{d}(1)\) is a quantity that converges to 0 as \(d\) approaches infinity._
Note that Theorem 4.4 suggests that, when the heterogeneity of node degrees is taken into consideration, the nodes with degrees exceeding a threshold (proportional to \(\alpha\)) are more likely to be linearly separable. The requirement on the threshold \(\alpha\) depends on the DC-CSBM parameters \(n,p,q,\boldsymbol{\mu},\boldsymbol{\nu}\).
**Remark 4.5**.: _If we let \(p,q\in\Theta(\frac{\log^{3}n}{n})\) and \(\|\boldsymbol{\mu}-\boldsymbol{\nu}\|_{2}\) be a fixed constant, then the requirement reduces to \(\alpha\in\omega(\frac{1}{\log n})\), which is not too large. Given this particular setting and a reasonable selection of \(p,q\), the regime of acceptable \(\alpha\) is broad, which demonstrates the generalizability of Theorem 4.4._
### Implications on Gini-Degree
Finally, we qualitatively discuss the relationship between Gini-Degree and GNNs' performance using the results from Theorem 4.4. For any \(\alpha>0\) that meets the criteria in the statement, we consider the following two factors.
1. _Negative correlation between Gini-Degree and the size of \(\mathcal{V}_{\alpha}\)_: If the number of nodes and edges is fixed, a higher Gini-Degree implies that the degrees concentrate on a few high-degree nodes, and thus the majority of nodes receive lower degrees. Clearly, if most of the nodes have lower degrees, then fewer nodes have degrees exceeding a certain threshold proportional to \(\alpha\)1 and are placed in \(\mathcal{V}_{\alpha}\). Hence, a dataset with a higher (or lower) Gini-Degree will lead to a smaller (or larger) size of \(\mathcal{V}_{\alpha}\). Footnote 1: Note that the expected value of the degree of node \(i\) is proportional to \(\theta_{i}\) when we ignore self-loops. (See Appendix C for more information.) Thus, the lower bound \(\alpha\) on degree-correction parameters can be translated to the lower bound \(n(p+q)\alpha\) on degrees.
2. _Positive correlation between the size of \(\mathcal{V}_{\alpha}\) and model performance_: Intuitively, the GNN performance tends to be better if more nodes are linearly separable after graph convolution. Consequently, the GNN performance is positively related to the size of \(\mathcal{V}_{\alpha}\) corresponding to the minimum possible \(\alpha\).
Combining the two factors above, our analysis suggests that Gini-Degree tends to have a negative correlation with GNNs' performance.
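For reference, Gini-Degree is the Gini coefficient of the node degree sequence; a minimal sketch of the standard computation is below (the implementation used in the experiments may differ in details):

```python
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient of a nonnegative sequence (here, node degrees)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Sorted-sequence identity: Gini = sum_i (2i - n - 1) x_(i) / (n * sum_i x_i),
    # with x sorted in ascending order.
    return float(((2 * np.arange(1, n + 1) - n - 1) * x).sum() / (n * x.sum()))

rng = np.random.default_rng(0)
print(gini(np.full(1000, 30.0)))        # regular degrees -> 0.0
print(gini(rng.pareto(1.5, 1000) + 1))  # heavy-tailed degrees -> much larger
```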
## 5 Controlled Experiment on Gini-Degree
To further verify whether there is a causal relationship between the degree distribution of graph data (in particular, measured by Gini-Degree) and the GNN performance, we conduct a controlled experiment using synthetic graph datasets.
Experiment Setup.We first generate a series of synthetic graph datasets using the GraphWorld library [31]. To investigate the causal effect of Gini-Degree, we manipulate the data generation parameters to obtain datasets with varying Gini-Degree while keeping the other properties fixed. Specifically, we use the SBM generator in the GraphWorld library and set the number of nodes \(n=5000\), the average degree as \(30\), the number of clusters as \(4\), the cluster size slope as \(0.5\), the feature center distance as \(0.5\), the edge probability ratio \(p/q=4.0\), the feature dimension as \(16\), and the feature cluster variance as \(0.05\). These parameters are fixed throughout our experiments, and their complete definitions can be found in the Appendix. By manipulating the power-law exponent parameter of the generator, we obtain five synthetic datasets with Gini-Degree equal to \(0.906,0.761,0.526,0.354\), and \(0.075\), respectively.
Then we train the same set of GNN models and the MLP model listed in Table 1 on each dataset. We randomly split the nodes into training, validation, and test sets with a ratio of 3:1:1. We closely
follow the hyperparameters and the training protocol in the GLI library [27], which is where we obtain the metadata in Section 3. We run five independent trials with different random seeds.
**Experiment Results.** The experiment results are shown in Table 2.
We observe an evident monotonically decreasing trend in the performance of the graph-based models, GCN, GAT, GraphSAGE, MoNet, MixHop, and LINKX, as Gini-Degree increases. However, there is no clear pattern for the non-graph model, MLP. This result suggests that these widely-used GNN models are indeed sensitive to Gini-Degree, which validates the result of our sparse regression analysis. Note that MLP does not take the graph structure into consideration, and hence the degree distribution has less influence on its performance. The result on MLP also indicates that the experiment is reasonably well controlled.
## 6 Conclusion
In this work, we propose a novel metadata-driven approach that can efficiently identify critical graph data properties influencing the performance of GNNs. This is a significant contribution given the diverse nature of graph-structured data and the sensitivity of GNN performance to these specific properties. We also verify the effectiveness of the proposed approach through an in-depth case study around one identified salient graph data property.
As a by-product, this paper also highlights the considerable impact of the degree distribution, a salient data property identified through our metadata-driven regression analysis, on the GNN performance. We present a novel theoretical analysis and a carefully controlled experiment to demonstrate this impact.
#### Acknowledgement
The authors would like to thank Pingbang Hu for the feedback on the draft.
|
2310.05892 | A Generalization Bound of Deep Neural Networks for Dependent Data | Existing generalization bounds for deep neural networks require data to be
independent and identically distributed (iid). This assumption may not hold in
real-life applications such as evolutionary biology, infectious disease
epidemiology, and stock price prediction. This work establishes a
generalization bound of feed-forward neural networks for non-stationary
$\phi$-mixing data. | Quan Huu Do, Binh T. Nguyen, Lam Si Tung Ho | 2023-10-09T17:33:37Z | http://arxiv.org/abs/2310.05892v1 | # A Generalization Bound of Deep Neural Networks for Dependent Data
###### Abstract
Existing generalization bounds for deep neural networks require data to be independent and identically distributed (iid). This assumption may not hold in real-life applications such as evolutionary biology, infectious disease epidemiology, and stock price prediction. This work establishes a generalization bound of feed-forward neural networks for non-stationary \(\varphi\)-mixing data.
**Keywords:** neural networks, generalization bound, non-stationary process, mixing stochastic process
## 1 Introduction
Explaining the generalization ability of machine learning methods (that is, they can provide a close fit to new, unseen data) lies at the heart of theoretical machine learning. The main direction for this research topic is to bound the difference between the expected loss (population loss) and the empirical loss (training loss). This is known as a generalization bound, which has been studied extensively in various settings (Freund et al., 2004; Zou et al., 2009; Agarwal and Duchi, 2012; Cuong et al., 2013; Bartlett et al., 2017; Golowich et al., 2018; Lugosi and Neu, 2022).
In the last decade, deep neural networks have become the central attention of the machine learning community due to their remarkable success in solving complex tasks that are considered to be challenging for existing machine learning methods. For example, in computer vision, tasks like image classification, facial recognition, and object detection have seen significant progress by applying deep neural networks (Krizhevsky et al., 2012). In natural language processing, deep learning models have become state-of-the-art in language translation, sentiment analysis, and chatbots (Vaswani et al., 2017). Additionally,
they have made undeniable contributions to fields beyond computer science, including autonomous vehicles, healthcare (Esteva et al., 2019), and finance (Heaton et al., 2017).
Effort has been made to derive generalization bounds for neural networks (Bartlett et al., 2017; Golowich et al., 2018; Dinh and Ho, 2020; Ho and Dinh, 2022). However, these results assume that data are independent and identically distributed (iid). Unfortunately, this assumption is not often satisfied in many applications, including evolutionary biology, infectious disease epidemiology, and stock price prediction. Therefore, it is crucial to study the generalization ability of deep neural networks when data are not iid. In this paper, we will bridge this gap by establishing a generalization bound of feed-forward neural networks for non-stationary \(\varphi\)-mixing data. It is worth noting that mixing data are the most common alternative to iid data (e.g. White and Domowitz, 1984; Modha and Masry, 1996; Mohri and Rostamizadeh, 2010; Dinh et al., 2015; Ho et al., 2020). Specifically, we focus on the popular \(\varphi\)-mixing sequences: data are dependent, but the dependency between two data points decreases as their distance increases. Furthermore, we do not require data to be identically distributed. Instead, we allow the marginal distribution of data to converge to an unknown target distribution. Under this setting, we establish a new generalization bound for feed-forward neural networks.
## 2 Setting and main results
**Setting:** We consider a classification problem setting where the input-output pairs \(\{(X_{i},Y_{i})\}_{i=1}^{n}\subset\mathbb{R}^{d}\times\{1,2,\ldots,K\}\) are not i.i.d. Specifically, we relax the independence assumption by assuming that the data \(\mathcal{Z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) are generated from a \(\varphi\)-mixing sequence:
**Definition 1**.: _Let \(\{Z_{k}\}_{k=0}^{\infty}\) be a sequence of random variables. For any \(i,j\in\mathbb{Z}\), let \(\sigma_{i}^{j}\) denote the \(\sigma\)-algebra generated by the random variables \(\{Z_{k}\}_{k=i}^{j}\). Then, for any positive integer \(k\), the \(\varphi\)-mixing coefficient of the stochastic process \(Z\) is defined as_
\[\varphi(k)=\sup_{\begin{subarray}{c}A\in\sigma_{n+k}^{\infty}\\ B\in\sigma_{0}^{n}\end{subarray}}\big{|}\mathbb{P}\,[A|B]-\mathbb{P}[A]\big{|}.\]
_The sequence of variables \(\{Z_{k}\}_{k=0}^{\infty}\) is said to be \(\varphi\)-mixing if \(\varphi(k)\to 0\) as \(k\to\infty\)._
Additionally, we assume that the data \(\mathcal{Z}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) are not identically distributed. Instead, the marginal distribution of \((X_{i},Y_{i})\) converges to the target distribution \(\Pi\), which is the marginal distribution of the test data. More precisely,
\[\mu_{n}:=\big{|}\big{|}\mathbb{P}_{n}-\Pi\big{|}\big{|}_{\mathrm{TV}}\to 0\]
where \(\mathbb{P}_{n}\) is the marginal distribution of \((X_{n},Y_{n})\) and \(\|\cdot\|_{\mathrm{TV}}\) is the total variation distance.
This paper will focus on feed-forward neural networks with \(L\) hidden layers, where the \(i\)-th layer has a weight matrix \(A_{i}\) and an activation function \(\sigma_{i}\). Throughout the paper, we assume that each weight matrix has dimension at most \(W\) along each axis. Moreover, each activation function \(\sigma_{i}\) is \(p_{i}\)-Lipschitz (i.e., \(|\sigma_{i}(x)-\sigma_{i}(y)|\leq p_{i}|x-y|\) for all \(x,y\in\mathbb{R}\)) and satisfies \(\sigma_{i}(0)=0\).
Denote \(\mathcal{A}=(A_{1},\ldots,A_{L})\) and \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{L})\). The corresponding feed-forward neural network \(F_{\mathcal{A},\boldsymbol{\sigma}}\) is
\[F_{\mathcal{A},\boldsymbol{\sigma}}(x)=\sigma_{L}(A_{L}(\sigma_{L-1}(A_{L-1} \ldots\sigma_{1}(A_{1}x)\ldots))).\]
The network output \(F_{\mathcal{A},\boldsymbol{\sigma}}(x)\in\mathbb{R}^{K}\) is converted to a class label in \(\{1,\ldots,K\}\) by taking the \(\arg\max\) over components, with an arbitrary rule for breaking ties. We will work with the popular ramp loss \(\ell_{\gamma}:\mathbb{R}\rightarrow\mathbb{R}^{+}\): \(\ell_{\gamma}(r):=(1+r^{-}/\gamma)^{+}\) where \(a^{+}=\max\{a,0\}\) and \(a^{-}=\min\{a,0\}\). The empirical loss \(\mathcal{L}_{\mathcal{Z}}(F_{\mathcal{A},\boldsymbol{\sigma}})\) and expected loss \(\mathcal{L}(F_{\mathcal{A},\boldsymbol{\sigma}})\) are defined as
\[\mathcal{L}_{\mathcal{Z}}(F_{\mathcal{A},\boldsymbol{\sigma}}) =\frac{1}{n}\sum_{i=1}^{n}\ell_{\gamma}(-\mathcal{M}(F_{\mathcal{ A},\boldsymbol{\sigma}}(X_{i}),Y_{i}))\] \[\mathcal{L}(F_{\mathcal{A},\boldsymbol{\sigma}}) =\underset{(X,Y)\sim\Pi}{\mathbb{E}}\left[\ell_{\gamma}(-\mathcal{ M}(F_{\mathcal{A},\boldsymbol{\sigma}}(X),Y))\right]\]
where \(\mathcal{M}(v,j):=v_{j}-\max_{i\neq j}v_{i}\) is the margin operator.
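To make these definitions concrete, below is a small numpy sketch of the margin operator, the ramp loss, and the empirical loss (the function names are ours):

```python
import numpy as np

def margin(v: np.ndarray, j: int) -> float:
    """Margin operator M(v, j) = v_j - max_{i != j} v_i."""
    return float(v[j] - np.delete(v, j).max())

def ramp(r: float, gamma: float) -> float:
    """Ramp loss l_gamma(r) = (1 + min(r, 0)/gamma)^+, valued in [0, 1]."""
    return max(1.0 + min(r, 0.0) / gamma, 0.0)

def empirical_loss(scores: np.ndarray, labels: np.ndarray, gamma: float) -> float:
    """L_Z(F) = (1/n) sum_i l_gamma(-M(F(X_i), Y_i)); scores[i] = F(X_i)."""
    return float(np.mean([ramp(-margin(s, y), gamma) for s, y in zip(scores, labels)]))
```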
**Main results:** First, we will derive a uniform bound of the gap between expected loss and empirical loss for a general hypothesis space \(\mathcal{H}\) and a bounded loss \(\ell\) using Rademacher complexity.
**Definition 2**.: _Given a class of function \(\mathcal{F}\) and a data set \(\mathcal{Z}=(Z_{i})_{i=1}^{n}\), the empirical Rademacher complexity is defined as_
\[\mathcal{R}_{\mathcal{Z}}(\mathcal{F})=\underset{\theta_{1},\theta_{2},\ldots,\theta_{n}}{\mathbb{E}}\left[\sup_{f\in\mathcal{F}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta _{i}f(Z_{i})\right)\right]\]
_where \(\{\theta_{i}\}_{i=1}^{n}\) are independent Rademacher random variables. The Rademacher complexity is defined as_
\[\mathcal{R}_{n}(\mathcal{F})=\underset{Z_{1},\ldots,Z_{n}}{\mathbb{E}}\left[ \mathcal{R}_{\mathcal{Z}}(\mathcal{F})\right].\]
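For intuition only, the empirical Rademacher complexity of a finite function class can be estimated by Monte Carlo over the Rademacher signs, as in the following sketch (the classes analyzed below are of course infinite):

```python
import numpy as np

def empirical_rademacher(F_values: np.ndarray, n_draws: int = 2000, seed: int = 0) -> float:
    """Monte Carlo estimate of R_Z(F) for a finite class.

    F_values[k, i] = f_k(Z_i): row k is one function in F evaluated on the data.
    """
    rng = np.random.default_rng(seed)
    _, n = F_values.shape
    total = 0.0
    for _ in range(n_draws):
        theta = rng.choice([-1.0, 1.0], size=n)   # i.i.d. Rademacher signs
        total += (F_values @ theta).max() / n     # sup_f (1/n) sum_i theta_i f(Z_i)
    return total / n_draws
```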
**Theorem 1**.: _Suppose \(\mathcal{H}\) is a hypothesis space and \(\ell\) is a loss function bounded in \([0,1]\). Let \(\delta\) be a positive number. Under our setting, with probability at least \(1-\delta\), for all \(h\in\mathcal{H}\), we have_
\[\underset{(X,Y)\sim\Pi}{\mathbb{E}}\left[\ell(h(X),Y)\right]\leq\frac{1}{n} \sum_{i=1}^{n}\ell(h(X_{i}),Y_{i})+2\mathcal{R}_{\mathcal{Z}}(\mathcal{F}_{ \ell})+\frac{1}{n}\sum_{i=1}^{n}\mu_{i}+3\sqrt{||\Delta_{n}||_{\infty}^{2} \frac{\log(2/\delta)}{2n}}\]
_where \(\mathcal{F}_{\ell}=\{(X,Y)\rightarrow\ell(h(X),Y)|h\in\mathcal{H}\}\) and \(||\Delta_{n}||_{\infty}=1+2\sum_{k=1}^{n}\varphi(k)\)._
**Remark 2.1**.: _Kuznetsov and Mohri (2017) establish a generalization bound for asymptotically stationary processes. However, their setting is different from ours. They consider the scenario where data include \(m\) independent blocks of mixing sequences of size \(a\). That is, the number of data points is \(n=ma\). They assume that the mixing sequences are asymptotically stationary. More precisely, for a sequence \(\{Z_{i}\}_{i=1}^{\infty}\), they define_
\[\beta(a):=\sup_{t}\mathbb{E}\|\mathbb{P}_{t+a}(.|Z_{1},...,Z_{t})-\Pi\|_{\text {TV}}.\]
_The sequence \(\{Z_{i}\}_{i=1}^{\infty}\) is asymptotically stationary if \(\beta(a)\to 0\). It is easy to see that \(\mu_{i}\leq\beta(i)\) for any integer \(i\). Therefore, the marginal distribution of an asymptotically stationary sequence converges to the target distribution \(\Pi\). In other words, their condition is more restrictive than ours. Moreover, the convergence rate of their bound is \(\mathcal{O}(1/\sqrt{m}\,)\), which depends on the number of independent sequences. So, their result is not applicable to the scenario we consider in this paper, where the data consist of only one mixing sequence. On the other hand, they also require \(\lim_{i\to\infty}i\beta(i)=0\) while we only require \(\lim_{i\to\infty}\mu_{i}=0\). Thus, their result requires the marginal distribution to converge to the target distribution at a faster rate than ours._
Based on Theorem 1, we can derive the following generalization bound for feed-forward neural networks:
**Theorem 2**.: _Assume that \(\mu_{n}=\mathcal{O}\left(1/\sqrt{n}\right)\) and \(\varphi(n)=\mathcal{O}\left(1/n\right)\). Under our setting, with probability at least \(1-\delta\), for all margin \(\gamma>0\) and network \(F_{\mathcal{A},\boldsymbol{\sigma}}\), we have_
\[\mathbb{P}_{(X,Y)\sim\Pi}\left\{\arg\max_{j}[F_{\mathcal{A}, \boldsymbol{\sigma}}(X)]_{j}\neq Y\right\} \leq\mathcal{L}_{\mathcal{Z}}(F_{\mathcal{A},\boldsymbol{\sigma}})\] \[\quad+\tilde{\mathcal{O}}\left(\frac{\sqrt{\sum_{i}||X_{i}||_{2} ^{2}}}{n\gamma}T_{\mathcal{A}}\log(W)+\sqrt{\frac{\log(2/\delta)}{n}}\right)\]
_where \(T_{\mathcal{A}}=\left(\prod_{i=1}^{L}p_{i}\|A_{i}\|_{S}\right)\left(\sum_{i=1} ^{L}\left(\frac{\|A_{i}^{T}\|_{2,1}}{\|A_{i}\|_{S}}\right)^{2/3}\right)^{3/2}\). Here, \(f(x)=\tilde{\mathcal{O}}(g(x))\) means there exist \(C,x_{0}>0\) such that \(|f(x)|\leq C\log(x)g(x)\) for all \(x\geq x_{0}\), \(||.||_{S}\) is the spectral norm, and \(||.||_{p,q}\) is the \((p,q)\)-matrix norm, defined by \(||A||_{p,q}=||(||A_{:,1}||_{p},...,||A_{:,m}||_{p})||_{q}\)._
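The complexity term \(T_{\mathcal{A}}\) is directly computable from the weight matrices; a numpy sketch (assuming the weights and Lipschitz constants are given as lists `As` and `ps`) is:

```python
import numpy as np

def spectral_complexity(As, ps):
    """T_A = (prod_i p_i ||A_i||_S) * (sum_i (||A_i^T||_{2,1}/||A_i||_S)^{2/3})^{3/2}."""
    s = [np.linalg.norm(A, 2) for A in As]            # spectral norms ||A_i||_S
    # ||A^T||_{2,1}: l2 norm of each column of A^T (i.e., each row of A), summed.
    b = [np.linalg.norm(A, axis=1).sum() for A in As]
    lipschitz_product = np.prod([p * si for p, si in zip(ps, s)])
    return lipschitz_product * sum((bi / si) ** (2 / 3) for bi, si in zip(b, s)) ** 1.5
```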
**Remark 2.2**.: _The generalization bound in Bartlett et al. (2017) is a special case of Theorem 2 when data are iid._
## 3 Proofs of main theorems
In this section, we provide the proofs of our main theorems.
### Proof of Theorem 1
We first introduce some supporting Lemmas.
**Lemma 1**.: _Let \(P\) be the distribution of a \(\varphi\)-mixing sequence and \(\mathcal{F}\) be any class of functions. Then_
\[\mathop{\mathbb{E}}_{(Z_{1},Z_{2},\ldots,Z_{n})\sim P}\left[\sup_{f\in\mathcal{F }}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})-\mathop{\mathbb{E}}_{(Z_{1}^{\prime}, Z_{2}^{\prime},\ldots,Z_{n}^{\prime})\sim P}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^{ \prime})\right]\right]\right]\leq 2\mathcal{R}_{n}(\mathcal{F}).\]
Proof.: We first bound the term inside the first expectation:
\[\sup_{f\in\mathcal{F}}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})-\mathop{\mathbb{ E}}_{Z_{1}^{\prime},\ldots,Z_{n}^{\prime}}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^ {\prime})\right]\right]\leq\frac{1}{n}\mathop{\mathbb{E}}_{Z_{1}^{\prime}, \ldots,Z_{n}^{\prime}}\left[\sup_{f\in\mathcal{F}}\left(\sum_{i=1}^{n}f(Z_{i} )-\sum_{i=1}^{n}f(Z_{i}^{\prime})\right)\right].\]
Let \(\{\theta_{i}\}_{i=1}^{n}\) be independent Rademacher random variables. Taking the expectation with respect to \(\{Z_{i}\}_{i=1}^{n}\) for both sides, we have
\[\mathop{\mathbb{E}}_{Z_{1},\ldots,Z_{n}}\left[\sup_{f\in\mathcal{ F}}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})-\mathop{\mathbb{E}}_{Z_{1}^{\prime}, \ldots,Z_{n}^{\prime}}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^{\prime})\right] \right]\right]\] \[\leq\frac{1}{n}\mathop{\mathbb{E}}_{Z_{1},\ldots,Z_{n}}\left[ \mathop{\mathbb{E}}_{Z_{1}^{\prime},\ldots,Z_{n}^{\prime}}\left[\sup_{f\in \mathcal{F}}\left(\sum_{i=1}^{n}f(Z_{i})-\sum_{i=1}^{n}f(Z_{i}^{\prime}) \right)\right]\right]\] \[\leq\frac{1}{n}\mathop{\mathbb{E}}_{Z_{1},\ldots,Z_{n},Z_{1}^{\prime}, \ldots,Z_{n}^{\prime}}\left[\mathop{\mathbb{E}}_{\theta_{1},\ldots,\theta_{n}} \left[\sup_{f\in\mathcal{F}}\left(\sum_{i=1}^{n}\theta_{i}f(Z_{i})-\sum_{i=1}^ {n}\theta_{i}f(Z_{i}^{\prime})\right)\right]\right]\] \[\leq\mathop{\mathbb{E}}_{Z_{1},\ldots,Z_{n},Z_{1}^{\prime}, \ldots,Z_{n}^{\prime},\theta_{1},\ldots,\theta_{n}}\left[\sup_{f\in\mathcal{ F}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i})\right)+\sup_{f\in \mathcal{F}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i}^{\prime})\right)\right]\] \[=2\mathcal{R}_{n}(\mathcal{F}).\]
**Lemma 2**.: _(Mohri and Rostamizadeh, 2010) Let \(\Phi:Z^{n}\to\mathbb{R}\) be a measurable function that is \(c\)-Lipschitz with respect to the Hamming metric for some \(c>0\) and let \(\{Z_{i}\}_{i=1}^{n}\) be a \(\varphi\)-mixing sequence. Then, for any \(\epsilon>0\), the following inequality holds:_
\[\mathbb{P}\left[\left|\Phi(Z_{1},\ldots,Z_{n})-\mathbb{E}[\Phi(Z_{1},\ldots,Z_{ n})]\right|\geq\epsilon\right]\leq 2\exp\left(\frac{-2\epsilon^{2}}{nc^{2}\|\Delta_{n}\|_{ \infty}^{2}}\right),\]
_where \(\|\Delta_{n}\|_{\infty}=1+2\sum_{k=1}^{n}\varphi(k)\)._
**Lemma 3**.: _Let \(\mathcal{Z}=\{Z_{i}\}_{i=1}^{n}\) be a \(\varphi\)-mixing sequence and \(\mathcal{F}\) be any class of functions bounded in \([0,1]\). Then, with probability at least \(1-\delta\), we have_
\[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})\right]-\frac{1}{n}\sum_{i=1} ^{n}f(Z_{i})\leq 2\mathcal{R}_{\mathcal{Z}}(\mathcal{F})+3\sqrt{||\Delta_{n}||_{ \infty}^{2}\frac{\log(2/\delta)}{2n}},\quad\forall f\in\mathcal{F}.\]
Proof.: Define
\[g(Z_{1},\ldots,Z_{n})\triangleq\sup_{f\in\mathcal{F}}\left[\mathbb{E}\left[ \frac{1}{n}\sum_{i=1}^{n}f(Z_{i})\right]-\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}) \right].\]
We first show that \(g\) is \(\frac{1}{n}\)-Lipschitz with respect to the Hamming distance. For any \(\mathcal{Z}=(Z_{1},\ldots,Z_{n})\) and \(\mathcal{Z}^{\prime}=(Z_{1}^{\prime},\ldots,Z_{n}^{\prime})\), we have
\[|g(\mathcal{Z})-g(\mathcal{Z}^{\prime})|\] \[=\left|\sup_{f\in\mathcal{F}}\left[\mathbb{E}\left[\frac{1}{n} \sum_{i=1}^{n}f(Z_{i})\right]-\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})\right]-\sup_{f \in\mathcal{F}}\left[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^{\prime })\right]-\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^{\prime})\right]\right|\] \[\leq\sup_{f\in\mathcal{F}}\left|\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}) -\frac{1}{n}\sum_{i=1}^{n}f(Z_{i}^{\prime})\right|\leq\frac{\|\mathcal{Z}- \mathcal{Z}^{\prime}\|_{H}}{n}\]
where \(\|\cdot\|_{H}\) is the Hamming distance. The last inequality holds since every \(f\in\mathcal{F}\) is bounded in \([0,1]\). Since \(g\) is \(\frac{1}{n}\)-Lipschitz with respect to the Hamming distance, we apply Lemma 2 to obtain:
\[\mathbb{P}\left[g(\mathcal{Z})\geq\underset{Z_{1},Z_{2},\ldots,Z_{n}}{\mathbb{ E}}[g]+\epsilon\right]\leq\exp\left(\frac{-2n\epsilon^{2}}{\|\Delta_{n}\|_{ \infty}^{2}}\right).\]
Applying Lemma 1, we get \(\mathbb{E}_{Z_{1},Z_{2},\ldots,Z_{n}}[g]\leq 2\mathcal{R}_{n}(\mathcal{F}).\) We will show that \(\mathcal{R}_{\mathcal{Z}}(\mathcal{F})\) is also \(\frac{1}{n}\)-Lipschitz with respect to the Hamming distance. Indeed, using similar arguments, we have
\[\left|\underset{\theta_{i}}{\mathbb{E}}\left[\sup_{f\in\mathcal{ F}}\left[\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i})\right]\right]-\underset{ \theta_{i}}{\mathbb{E}}\left[\sup_{f\in\mathcal{F}}\left[\frac{1}{n}\sum_{i=1 }^{n}\theta_{i}f(Z_{i}^{\prime})\right]\right]\right|\] \[\leq\underset{\theta_{i}}{\mathbb{E}}\left|\sup_{f\in\mathcal{F}} \left[\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i})\right]-\sup_{f\in\mathcal{F}} \left[\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i}^{\prime})\right]\right|\] \[\leq\underset{\theta_{i}}{\mathbb{E}}\left[\sup_{f\in\mathcal{F}} \left|\frac{1}{n}\sum_{i=1}^{n}\theta_{i}f(Z_{i})-\frac{1}{n}\sum_{i=1}^{n} \theta_{i}f(Z_{i}^{\prime})\right|\right]\leq\frac{\|\mathcal{Z}-\mathcal{Z}^ {\prime}\|_{H}}{n}.\]
We can thus apply Lemma 2:
\[\mathbb{P}\left[\underset{Z_{1},Z_{2},\ldots,Z_{n}}{\mathbb{E}}[\mathcal{R}_{ \mathcal{Z}}(\mathcal{F})]\geq\mathcal{R}_{\mathcal{Z}}(\mathcal{F})+\epsilon \right]\leq\exp\left(\frac{-2n\epsilon^{2}}{||\Delta_{n}||_{\infty}^{2}}\right).\]
We set \(\epsilon=\sqrt{||\Delta_{n}||_{\infty}^{2}\frac{\log(2/\delta)}{2n}}\). Then with probability at least \(1-\delta\),
\[g\leq\mathbb{E}[g]+\epsilon\leq 2\mathcal{R}_{n}(\mathcal{F})+\epsilon\leq 2( \mathcal{R}_{\mathcal{Z}}(\mathcal{F})+\epsilon)+\epsilon=2\mathcal{R}_{ \mathcal{Z}}(\mathcal{F})+3\epsilon.\]
**Lemma 4**.: _Let \(f\) be a function bounded in \([0,1]\). Let \(\{Z_{i}\}_{i=1}^{n}\) be a non-stationary \(\varphi\)-mixing sequence such that the marginal distributions converge to a target distribution \(\Pi\) with rate \(\mu_{n}\). Then_
\[\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})\right]- \underset{Z\sim\Pi}{\mathbb{E}}\left[f(Z)\right]\right|\leq\frac{1}{n}\sum_{i =1}^{n}\mu_{i}.\]
Proof.: For any \(i\),
\[\mathbb{E}\left[f(Z_{i})\right]-\mathbb{E}\left[f(Z)\right]=\int_{\Omega}f(z)\cdot(f_{\mathbb{P}_{i}}(z)-f_{\Pi}(z))\,dz. \tag{1}\]
Define \(A=\{z\in\Omega|f_{\mathbb{P}_{i}}(z)<f_{\Pi}(z)\}\) and \(B=\{z\in\Omega|f_{\mathbb{P}_{i}}(z)>f_{\Pi}(z)\}\). We rewrite Eq. (1) as
\[\mathbb{E}\left[f(Z_{i})\right]-\mathbb{E}\left[f(Z)\right]=\int_{A}f(z)\cdot(f_{ \mathbb{P}_{i}}(z)-f_{\Pi}(z))\,dz+\int_{B}f(z)\cdot(f_{\mathbb{P}_{i}}(z)-f_{\Pi}( z))\,dz.\]
For the first term,
\[0\geq\int_{A}f(z)\cdot(f_{\mathbb{P}_{i}}(z)-f_{\Pi}(z))\,dz\geq\int_{A}(f_{\mathbb{ P}_{i}}(z)-f_{\Pi}(z))\,dz\geq-\mu_{i}.\]
For the second term,
\[0\leq\int_{B}f(z)\cdot(f_{\mathbb{P}_{i}}(z)-f_{\Pi}(z))\,dz\leq\int_{B}(f_{\mathbb{ P}_{i}}(z)-f_{\Pi}(z))\,dz\leq\mu_{i}.\]
Then
\[-\mu_{i}\leq\mathbb{E}\left[f(Z_{i})\right]-\mathbb{E}\left[f(Z)\right]\leq \mu_{i}.\]
Averaging over \(i\) and applying the triangle inequality, we obtain
\[\left|\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})\right]- \underset{Z\sim\Pi}{\mathbb{E}}\left[f(Z)\right]\right|\leq\frac{1}{n}\sum_{i =1}^{n}\left|\mathbb{E}\left[f(Z_{i})\right]-\underset{Z\sim\Pi}{\mathbb{E}} \left[f(Z)\right]\right|\leq\frac{1}{n}\sum_{i=1}^{n}\mu_{i}.\]
Theorem 1 is a direct consequence of Lemmas 3 and 4.
### Proof of Theorem 2
Theorem 2 can be achieved by combining Theorem 1 and the proof technique of Bartlett et al. (2017). Denote \(\mathcal{F}_{\mathcal{A},\boldsymbol{\sigma},\gamma}=\{(x,y)\to\ell_{\gamma}( -\mathcal{M}(F_{\mathcal{A},\boldsymbol{\sigma}}(x),y))\}\). Applying Theorem 1, we have
\[\mathcal{L}(F_{\mathcal{A},\boldsymbol{\sigma}})\leq\mathcal{L}_{\mathcal{Z}}(F_{\mathcal{A}, \boldsymbol{\sigma}})+2\mathcal{R}_{\mathcal{Z}}(\mathcal{F}_{\mathcal{A}, \boldsymbol{\sigma},\gamma})+\frac{1}{n}\sum_{i=1}^{n}\mu_{i}+3\sqrt{|| \Delta_{n}||_{\infty}^{2}\frac{\log(2/\delta)}{2n}} \tag{2}\]
with probability at least \(1-\delta\).
Next, we introduce some supporting Lemmas.
**Lemma 5** (Lemma A.4 in Bartlett et al. (2017)).: _For any network \(F_{\mathcal{A},\boldsymbol{\sigma}}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) and every \(\gamma>0\), we have_
\[\mathbb{P}_{(X,Y)\sim\Pi}\left\{\arg\max_{j}[F_{\mathcal{A},\boldsymbol{ \sigma}}(X)]_{j}\neq Y\right\}\leq\mathcal{L}(F_{\mathcal{A},\boldsymbol{\sigma}}).\]
**Lemma 6**.: _Assume that \(\sqrt{\sum_{i}||X_{i}||_{2}^{2}}\leq B\). For all feed-forward neural networks \(F_{\mathcal{A},\boldsymbol{\sigma}}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) such that \(||A_{i}||_{S}\leq s_{i}\) and \(||A_{i}^{T}||_{2,1}\leq b_{i}\), we have_
\[\mathcal{R}_{\mathcal{Z}}(\mathcal{F}_{\mathcal{A},\boldsymbol{\sigma},\gamma})\leq \frac{4}{\sqrt{n^{3}}}+\frac{36B\ln(2W)\ln(n)}{\gamma n}\left(\sum_{i=1}^{L} \left(\frac{b_{i}}{s_{i}}\right)^{2/3}\right)^{3/2}\left(\prod_{i=1}^{L}s_{i} p_{i}\right).\]
Proof.: Using the same argument as in the proof of Lemma A.8 in Bartlett et al. (2017), we obtain:
\[\mathcal{R}_{\mathcal{Z}}(\mathcal{F}_{\mathcal{A},\boldsymbol{\sigma},\gamma})\leq \inf_{\alpha>0}\left(\frac{4\alpha}{\sqrt{n}}+\frac{12}{n}\int_{\alpha}^{\sqrt{n}} \sqrt{\frac{R}{\epsilon^{2}}}\,d\epsilon\right)=\inf_{\alpha>0}\left(\frac{4\alpha}{ \sqrt{n}}+\ln(\sqrt{n}/\alpha)\frac{12\sqrt{R}}{n}\right);\]
where
\[R=\frac{4B^{2}\ln(2W^{2})}{\gamma^{2}}\left(\sum_{i=1}^{L}\left(\frac{b_{i}}{ s_{i}}\right)^{2/3}\right)^{3}\left(\prod_{i=1}^{L}s_{i}p_{i}\right)^{2}.\]
The desired bound may be obtained by setting \(\alpha=1/n\).
Combining equation (2), Lemma 5, and Lemma 6, we get the following Lemma
**Lemma 7**.: _With probability at least \(1-\delta\) over a non-stationary \(\varphi\)-mixing sequence \(\mathcal{Z}=((X_{i},Y_{i}))_{i=1}^{n}\) with \(\sqrt{\sum_{i}||X_{i}||_{2}^{2}}\leq B\), for all feed-forward neural networks \(F_{\mathcal{A},\boldsymbol{\sigma}}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) such that \(||A_{i}||_{S}\leq s_{i}\) and \(||A_{i}^{T}||_{2,1}\leq b_{i}\), we have_
\[\mathbb{P}_{(X,Y)\sim\Pi}\left\{\arg\max_{j}[F_{\mathcal{A}, \boldsymbol{\sigma}}(X)]_{j}\neq Y\right\}\leq\mathcal{L}_{\mathcal{Z}}(F_{ \mathcal{A},\boldsymbol{\sigma}})+\frac{1}{n}\sum_{i=1}^{n}\mu_{i}+3\sqrt{|| \Delta_{n}||_{\infty}^{2}\frac{\log(2/\delta)}{2n}}\\ +\frac{8}{\sqrt{n^{3}}}+\frac{72B\ln(2W)\ln(n)}{\gamma n}\left( \sum_{i=1}^{L}\left(\frac{b_{i}}{s_{i}}\right)^{2/3}\right)^{3/2}\left(\prod_ {i=1}^{L}s_{i}p_{i}\right).\]
Now, we can achieve Theorem 2 by utilizing Lemma 7 to derive a union bound over the parameter space and input space. This can be done by following the same steps as the proofs of Lemma A.9 in Bartlett et al. (2017). Finally, we note that \(\mu_{n}=\mathcal{O}\left(1/\sqrt{n}\right)\) and \(\varphi(n)=\mathcal{O}\left(1/n\right)\). Therefore,
\[\frac{1}{n}\sum_{i=1}^{n}\mu_{i}=\mathcal{O}\left(\frac{1}{\sqrt{n}}\right), \quad||\Delta_{n}||_{\infty}=\mathcal{O}\left(\log n\right).\]
## 4 Discussion and conclusion
In this paper, we propose a generalization bound of feed-forward neural networks for non-stationary \(\varphi\)-mixing sequences using Rademacher complexity. We first derive a generalization bound for bounded losses on a general hypothesis space when data are non-stationary and \(\varphi\)-mixing. Our result allows data to converge to the target distribution at a slower rate than Kuznetsov and Mohri (2017). Moreover, the generalization bound in Kuznetsov and Mohri (2017) does not apply to our setting, where the data include only one mixing sequence. Using our new bound, we establish a generalization bound of feed-forward neural networks, including the result of Bartlett et al. (2017) for iid data as a special case. A future research direction is extending our generalization bound beyond mixing data. Alternative options include data generated from a dynamical system (Ho et al., 2023), evolutionary data (Ho and Ane, 2013), and data from infectious disease epidemics (Ho et al., 2018). Another direction is to develop generalization bounds for other types of deep neural networks. This requires new bounds for the Rademacher complexity of these neural networks.
## Acknowledgement
LSTH was supported by the Canada Research Chairs program, the NSERC Discovery Grant RGPIN-2018-05447, and the NSERC Discovery Launch Supplement DGECR-2018-00181. We want to thank the University of Science, Vietnam National University Ho Chi Minh City, and AISIA Research Lab for supporting us in this project.
|
2306.15340 | A Toolbox for Fast Interval Arithmetic in numpy with an Application to
Formal Verification of Neural Network Controlled Systems | In this paper, we present a toolbox for interval analysis in numpy, with an
application to formal verification of neural network controlled systems. Using
the notion of natural inclusion functions, we systematically construct interval
bounds for a general class of mappings. The toolbox offers efficient
computation of natural inclusion functions using compiled C code, as well as a
familiar interface in numpy with its canonical features, such as n-dimensional
arrays, matrix/vector operations, and vectorization. We then use this toolbox
in formal verification of dynamical systems with neural network controllers,
through the composition of their inclusion functions. | Akash Harapanahalli, Saber Jafarpour, Samuel Coogan | 2023-06-27T09:50:47Z | http://arxiv.org/abs/2306.15340v1 | A Toolbox for Fast Interval Arithmetic in numpy with an Application to Formal Verification of Neural Network Controlled Systems
###### Abstract
In this paper, we present a toolbox for interval analysis in numpy, with an application to formal verification of neural network controlled systems. Using the notion of natural inclusion functions, we systematically construct interval bounds for a general class of mappings. The toolbox offers efficient computation of natural inclusion functions using compiled C code, as well as a familiar interface in numpy with its canonical features, such as \(n\)-dimensional arrays, matrix/vector operations, and vectorization. We then use this toolbox in formal verification of dynamical systems with neural network controllers, through the composition of their inclusion functions.
Machine Learning, Neural Network
## 1 Introduction
Interval analysis is a classical field that provides a computationally efficient approach for propagating errors by computing function bounds (Jaulin et al., 2001). It has been successfully used for floating point error bounding in numerical and scientific analysis (Hickey et al., 2001). Interval bounds are often used for dynamical systems: (i) in reachability analysis, using methods such as Differential Inequalities (Scott and Barton, 2013) and Mixed Monotonicity (Meyer et al., 2019; Abate et al., 2021); (ii) for invariant set computation (Abate and Coogan, 2020). Recently, interval analysis has been increasingly used for the verification of learning algorithms: (i) standalone neural network verification approaches such as Interval Bound Propagation (Gowal et al., 2019); (ii) in-the-loop neural network control system verification approaches including simulation-guided approaches (Xiang et al., 2021), POLAR (Huang et al., 2022) and ReachMM (Jafarpour et al., 2023).
Since all of these techniques use a similar suite of tools, there is value in creating an efficient, user-friendly toolbox for general interval analysis. Python has become the standard for the learning community, and as such there are several existing tools for interval arithmetic. However, they come with key drawbacks: pyinterval does not natively support interval vectors and matrices, and portion supports lists of intervals, but not matrix and vector operations.
ContributionsIn this paper, we introduce a novel interval analysis framework called npinterval1, implemented in numpy(Harris et al., 2020), the computational backbone of most scientific Python packages. This framework is built upon the notion of inclusion functions, which provide interval bounds on the output of a given function. We first define tight inclusion functions for several elementary functions, then use Theorem 2.3 to build natural inclusion functions for a more general class of composed functions. The proposed package extends the prominent benefits of numpy directly to interval analysis, including its efficiency with compiled C implementations, versatility with \(n\)-dimensional arrays, matrix/vector operations, vectorization, and its familiar user interface. We then demonstrate its utility through an application in formal verification of neural network controlled systems, by composing CROWN (Zhang et al., 2018), a state-of-the-art neural network verification method with a natural inclusion functions of the system in Theorem 3.4. The proofs of all the Theorems are presented in Appendix B.
Footnote 1: The most recent code for npinterval can be viewed at [https://github.com/gtfactslab/npinterval](https://github.com/gtfactslab/npinterval); to reproduce the figures in this paper, see [https://github.com/gtfactslab/Harapanahalli_WFVML2023](https://github.com/gtfactslab/Harapanahalli_WFVML2023).
NotationWe denote the standard partial order on \(\mathbb{R}^{n}\) by \(\leq\), i.e., for \(x,y\in\mathbb{R}^{n}\), \(x\leq y\) if and only if \(x_{i}\leq y_{i}\) for all \(i\in\{1,\ldots,n\}\). A (bounded) _interval_ of \(\mathbb{R}^{n}\) is a set of form \(\{z:\underline{x}\leq z\leq\overline{x}\}=:[\underline{x},\overline{x}]\) for some endpoints \(\underline{x},\overline{x}\in\mathbb{R}^{n}\), \(\underline{x}\leq\overline{x}\). Let \(\mathbb{IR}^{n}\) denote the set of all intervals on \(\mathbb{R}^{n}\). We also use the notation \([x]\in\mathbb{IR}^{n}\) to denote an interval when its endpoints are not relevant or implicitly understood to be \(\underline{x}\) and \(\overline{x}\). For every two vectors \(v,w\in\mathbb{R}^{n}\) and every \(i\in\{1,\ldots,n\}\), we define the vector \(v_{\{i:w\}}\in\mathbb{R}^{n}\) by \(\left(v_{\{i:w\}}\right)_{j}=\left\{\begin{matrix}v_{j}&j\neq i\\ w_{j}&j=i\end{matrix}\right..\) For a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) and a set \(\mathcal{X}\subseteq\mathbb{R}^{n}\), define the set-valued extension
\(f(\mathcal{X})=\{f(x):x\in\mathcal{X}\}\). For two vectors \(x,y\in\mathbb{R}^{n}\), let \((x,y)\in\mathbb{R}^{2n}\) denote their concatenation.
## 2 Interval Analysis
### Interval Arithmetic
Interval analysis extends operations and functions to intervals (Jaulin et al., 2001). For example, if we know that some number \(a\in[\underline{a},\overline{a}]\), and \(b\in[\underline{b},\overline{b}]\), it is easy to see that the sum \((a+b)\in[\underline{a}+\underline{b},\overline{a}+\overline{b}]\).
**Definition 2.1** (Inclusion Function (Jaulin et al., 2001)).: Given a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), the interval function \([f]:\mathbb{IR}^{n}\rightarrow\mathbb{IR}^{m}\) is called an
1. _inclusion function_ for \(f\) if, for every \([x]\in\mathbb{IR}^{n}\), \(f([x])\subseteq[f]([x])\);
2. \([y]\)_-localized inclusion function_ for \(f\) if for every \([x]\subseteq[y]\), we have \(f([x])\subseteq[f]([x])\).
Moreover, an inclusion function \([f]\) for \(f\) is
1. _monotone_ if \([x]\subseteq[y]\) implies that \([f]([x])\subseteq[f]([y])\).
2. _tight_ if, for every \([x]\), \([f]([x])\) is the smallest interval containing \(f([x])\).
In the next Theorem, we provide a closed-form expression for the tight inclusion function.
**Theorem 2.2** (Uniqueness and Monotonicity of the Tight Inclusion Function).: _Given a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), the tight inclusion function can be characterized as_
\[[f]([x])=\bigg{[}\inf_{x\in[x]}f(x),\sup_{x\in[x]}f(x)\bigg{]},\]
_where the \(\inf\) and \(\sup\) are taken element-wise, which is unique and monotone._
For some common functions, the tight inclusion function is easily defined. For example, if a function is monotonic, the tight inclusion function is simply the interval created by the function evaluated at its endpoints.
More generally, on a closed and bounded interval, a continuous function attains its maximal and minimal values either at the endpoints or at a critical point inside the interval. The tight inclusion function can therefore be evaluated by taking the maximum and minimum of the function over the endpoints of the input interval and every critical point within it. Tight inclusion functions for some elementary functions such as \(\sin\) and \(\cos\) can be defined in this manner (see Table 1). However, when considering general functions, finding the tight inclusion function is often not computationally viable. The following theorem shows a more computational approach, by chaining known inclusion functions.
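For instance, a tight inclusion function for \(\sin\) only needs to compare the endpoint values with the values at the critical points \(\pi/2+k\pi\) lying inside the input interval, as in the following minimal sketch:

```python
import numpy as np

def sin_inclusion(lo: float, hi: float) -> tuple:
    """Tight inclusion [sin]([lo, hi]): extrema occur at the interval's
    endpoints or at critical points pi/2 + k*pi lying inside it."""
    candidates = [np.sin(lo), np.sin(hi)]
    k_lo = int(np.ceil((lo - np.pi / 2) / np.pi))
    k_hi = int(np.floor((hi - np.pi / 2) / np.pi))
    for k in range(k_lo, k_hi + 1):
        candidates.append(np.sin(np.pi / 2 + k * np.pi))  # equals +1 or -1
    return float(min(candidates)), float(max(candidates))

print(sin_inclusion(0.0, 3.0))   # (0.0, 1.0): the sup is attained at pi/2
```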
**Theorem 2.3** (Natural Inclusion Functions).: _Given a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) defined by a composition of functions with known monotone inclusion functions, i.e., \(f=e_{\ell}\circ e_{\ell-1}\circ\cdots\circ e_{1}\), an inclusion function for \(f\) is formed by replacing each composite function with its inclusion function, i.e., \([f]=[e_{\ell}]\circ[e_{\ell-1}]\circ\cdots\circ[e_{1}]\), and is called the natural inclusion function. Additionally, if each \([e_{j}]\) is a monotone inclusion function, the natural inclusion function is also monotone._
Note that two different decompositions of the function \(f\) can lead to two different natural inclusion functions for \(f\). Thus, the natural inclusion function is not guaranteed to be the tight inclusion function. For example, consider the function
\[(x+1)^{2}=x^{2}+2x+1,\]
on the interval \([-1,1]\). With the natural inclusion function for the first expression (LHS), the output interval is \(([-1,1]+1)^{2}=[0,4]\). With the natural inclusion function for the second expression (RHS), the output interval is \([-1,1]^{2}+2\cdot[-1,1]+1=[0,1]+[-2,2]+1=[-1,4]\). Figure 1 demonstrates this phenomenon in further detail.
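The effect is easy to reproduce with a minimal pure-Python interval class; the sketch below is an illustrative stand-in, not npinterval's compiled implementation:

```python
class I:
    """Interval [lo, hi] with tight inclusion functions for +, *, and squaring."""
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def _wrap(self, o): return o if isinstance(o, I) else I(o, o)
    def __add__(self, o):
        o = self._wrap(o); return I(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        o = self._wrap(o)
        c = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return I(min(c), max(c))
    def __pow__(self, k):  # tight squaring: accounts for 0 inside the interval
        assert k == 2
        lo2, hi2 = self.lo**2, self.hi**2
        return I(0.0 if self.lo <= 0.0 <= self.hi else min(lo2, hi2), max(lo2, hi2))
    __radd__, __rmul__ = __add__, __mul__
    def __repr__(self): return f"[{self.lo}, {self.hi}]"

x = I(-1.0, 1.0)
print((x + 1)**2)       # [0.0, 4.0]  -- natural inclusion of (x+1)^2
print(x**2 + 2*x + 1)   # [-1.0, 4.0] -- natural inclusion of x^2+2x+1 (looser)
```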
### Automated Interval Analysis using npinterval
The main contribution of this paper is to introduce the open source npinterval package, an extension of numpy to allow native support for interval arithmetic. npinterval defines a new interval data-type, internally represented as a tuple of two doubles, \([a]=(a.l,a.u)\). The interval type is fully implemented in C, and therefore the standard operations from Table 1 are all compiled into efficient machine code executed as needed at runtime. Additionally, since interval is implemented as a dtype in numpy, all of numpy's prominent features, including \(n\)-dimensional arrays, fast matrix multiplication, and vectorization, are available to use. In particular, each function from Table 1 is registered as a numpy universal function, allowing for quick, element-wise operation over an \(n\)-dimensional array. While there are existing interval arithmetic toolboxes, none plug directly into numpy, opting instead to rewrite every operation in Python. Although these packages support the same standard operations from Table 1, they lose the flexibility and utility of numpy, as well as the efficiency of compiled C code.
## 3 Interval Reachability of Neural Network Controlled Systems
One application of interval analysis is in reachability analysis of neural network controlled systems. This section
revisits and extends the framework considered in (Jafarpour et al., 2023).
### Problem Statement
Consider a dynamical system of the following form
\[\dot{x}=f(x,u,w), \tag{1}\]
where \(x\in\mathbb{R}^{n}\) is the system state, \(u\in\mathbb{R}^{p}\) is the control input, \(w\in\mathcal{W}\subseteq\mathbb{R}^{q}\) is an unknown disturbance input in a compact set \(\mathcal{W}\), and \(f:\mathbb{R}^{n}\times\mathbb{R}^{p}\times\mathbb{R}^{q}\to\mathbb{R}^{n}\) is a parameterized vector field. We assume that a feedback control policy for the system (1) is given by a \(k\)-layer fully connected feed-forward neural network \(N:\mathbb{R}^{n}\to\mathbb{R}^{p}\) as follows:
\[\begin{split}\xi^{(i)}&=\sigma^{(i-1)}\left(W^{(i-1 )}\xi^{(i-1)}+b^{(i-1)}\right),\,i=1,\ldots k\\ &\xi^{(0)}=x,\quad N(x)=W^{(k)}\xi^{(k)}+b^{(k)}\end{split} \tag{2}\]
where \(m_{i}\) is the number of neurons in the \(i\)-th layer, \(W^{(i)}\in\mathbb{R}^{m_{i}\times m_{i-1}}\) is the weight matrix on the \(i\)-th layer, \(b^{(i)}\in\mathbb{R}^{m_{i}}\) is the bias vector on the \(i\)-th layer, \(\xi^{(i)}\in\mathbb{R}^{m_{i}}\) is the \(i\)-th layer hidden variable, and \(\sigma^{(i)}\) is the activation function for the \(i\)-th layer. Thus, we consider the closed-loop system
\[\dot{x}=f(x,N(x),w)=f^{c}(x,w). \tag{3}\]
Given an initial time \(t_{0}\), an initial state \(x_{0}\), and a piecewise continuous mapping \(\mathbf{w}:\mathbb{R}\to\mathcal{W}\), denote the trajectory of the system for any \(t\geq t_{0}\) as \(\phi_{f^{c}}(t,t_{0},x_{0},\mathbf{w})\). Given an initial set \(\mathcal{X}_{0}\), we denote the reachable set of \(f^{c}\) at some \(t\geq t_{0}\):
\[\mathcal{R}_{f^{c}}(t,t_{0},\mathcal{X}_{0},\mathcal{W})=\left\{\phi_{f^{c}}(t,t_{0},x_{0},\mathbf{w}):x_{0}\in\mathcal{X}_{0},\ \mathbf{w}:\mathbb{R}\to\mathcal{W}\text{ piecewise cont.}\right\} \tag{4}\]
One can use the reachable set of the system to verify safety specifications, e.g., by ensuring an empty intersection with unsafe states for all time. However, in general, computing the reachable set exactly is not computationally tractable; instead, approaches typically compute an over-approximation \(\overline{\mathcal{R}}_{f^{c}}(t,t_{0},\mathcal{X}_{0},\mathcal{W})\supseteq \mathcal{R}_{f^{c}}(t,t_{0},\mathcal{X}_{0},\mathcal{W})\). The main challenge addressed in this section is to develop an approach for providing tight over-approximations of reachable sets while remaining computationally tractable for runtime computation.
### Open-loop System Interval Reachability
Previously, Jafarpour et al. (2023) considered a known decomposition function for the open-loop system. The interval analysis framework from Section 2 allows us to extend this theory and remove the need to define a decomposition function _a priori_.
**Assumption 3.1**.: For the dynamical system (1), there exists a known monotone inclusion function \([f]\) for \(f\).
Using Theorem 2.3, this assumption reduces to knowing a particular form \(f=e_{\ell}\circ\cdots\circ e_{1}\) with known monotone inclusion functions for each \(e_{j}\), thus removing the need to manually define a decomposition function for \(f\).
The open-loop embedding function is defined by:
\[\begin{split}&\underline{\mathsf{E}}_{i}([x],[u],[w])=\underline{f }_{i}([\underline{x},\overline{x}_{\{i:\underline{x}\}}],[u],[w]),\\ &\overline{\mathsf{F}}_{i}([x],[u],[w])=\overline{f}_{i}([ \underline{x}_{\{i:\overline{x}\}},\overline{x}],[u],[w]),\end{split} \tag{5}\]
for every \(i\in\{1,\ldots,n\}\), where \([f]=[\underline{f},\overline{f}]\) is the monotone inclusion function of \(f\), and \(\underline{\mathsf{E}},\overline{\mathsf{F}}:\mathbb{IR}^{n}\times\mathbb{IR} ^{p}\times\mathbb{IR}^{q}\to\mathbb{R}^{n}\). Using this embedding function, the following embedding dynamics can be defined
\[\begin{bmatrix}\dot{x}\\ \dot{\widehat{x}}\end{bmatrix}=\begin{bmatrix}\underline{\mathsf{E}}([x, \widehat{x}],[u],[w])\\ \overline{\mathsf{F}}([x,\widehat{x}],[u],[w])\end{bmatrix}, \tag{6}\]
where \((x,\widehat{x})\in\mathbb{R}^{2n}\), \(x\leq\widehat{x}\).
**Theorem 3.2** (Open-Loop System Interval Reachability).: _Let \(t\mapsto(\underline{x}(t),\overline{x}(t))\) denote the trajectory of the system (6) with initial condition \((\underline{x}_{0},\overline{x}_{0})\), under interval control mapping \([\mathbf{u}]:\mathbb{R}\to\mathbb{IR}^{p}\) and interval disturbance mapping \([\mathbf{w}]:\mathbb{R}\to\mathbb{IR}^{q}\). Let \(t\mapsto x(t)\) denote the trajectory of the system (1) with initial condition \(x_{0}\), under control \(\mathbf{u}:\mathbb{R}\to\mathbb{R}^{p}\) and disturbance \(\mathbf{w}:\mathbb{R}\to\mathcal{W}\). If \(x_{0}\in[\underline{x}_{0},\overline{x}_{0}]\), \(\mathbf{u}(t)\in[\mathbf{u}](t)\), and \(\mathbf{w}(t)\in[\mathbf{w}](t)\) for every \(t\geq t_{0}\), then_
\[x(t)\in[\underline{x}(t),\overline{x}(t)],\quad\text{for every $t\geq t_{0}$.}\]
Figure 1: **Left:**npinterval is used to generate interval approximations for a function \(f\) using two different natural inclusion functions. Blue: \(f(x_{1},x_{2})=[(x_{1}+x_{2})^{2},4\sin((x_{1}-x_{2})/4)]^{T}\) Green: \(f(x_{1},x_{2})=[x_{2}^{2}+2x_{1}x_{2}+x_{2}^{2},4\sin(x_{1}/4)\cos(x_{2}/4)-4 \cos(x_{1}/4)\sin(x_{2}/4)]^{T}\). The approximations are generated using the initial set \([-1,1]\times[-1,1]\), and 2000 uniformly sampled outputs are shown in red. **Right:** The same function is analyzed, with the same two natural inclusion functions, but the initial set is partitioned into 1024 uniform sections, and the union of the interval approximations are shown.
### Interconnected Closed-Loop System Interval Reachability
While Theorem 3.2 provides a method of over-approximating the system (3) using global bounds on the control input, it disregards all interactions between the neural network controller and the dynamical system.
**Assumption 3.3**.: For the neural network (2), there exists a known monotone inclusion function \([N]\) for \(N\).
For example, one can use CROWN (Zhang et al., 2018) to obtain these bounds. Using CROWN, given an interval \([y]\), we can obtain affine upper and lower bounds of the following form
\[\underline{C}_{[y]}x+\underline{d}_{[y]}\leq N(x)\leq\overline{C}_{[y]}x+ \overline{d}_{[y]}, \tag{7}\]
valid for any \(x\in[y]\), which can be used to create the following monotone \([y]\)-localized inclusion function \([N]_{[y]}=[\underline{N}_{[y]},\overline{N}_{[y]}]\), with
\[\underline{N}_{[y]}([x]) =\underline{C}_{[y]}^{+}\underline{x}+\underline{C}_{[y]}^{-} \overline{x}+\underline{d}_{[y]}, \tag{8}\] \[\overline{N}_{[y]}([x]) =\overline{C}_{[y]}^{+}\overline{x}+\overline{C}_{[y]}^{-} \underline{x}+\overline{d}_{[y]},\]
valid for any \([x]\subseteq[y]\). One can then construct the following "hybrid" closed-loop embedding function by interconnecting the open-loop embedding function with the monotone inclusion function for the neural network as follows
\[\underline{\mathsf{E}}_{i}^{c}([x],[w]) =\underline{f}_{i}([\underline{x},\overline{x}_{\{i:\underline{ x}\}}],[N]_{[x]}([\underline{x},\overline{x}_{\{i:\underline{x}\}}]),[w]), \tag{9}\] \[\overline{\mathsf{F}}_{i}^{c}([x],[w]) =\overline{f}_{i}([\underline{x}_{\{i:\overline{x}\}},\overline {x}],[N]_{[x]}([\underline{x}_{\{i:\overline{x}\}},\overline{x}]),[w]),\]
for every \(i\in\{1,\ldots,n\}\), and \(\underline{\mathsf{E}}^{c},\overline{\mathsf{F}}^{c}:\mathbb{IR}^{n}\times \mathbb{IR}^{q}\rightarrow\mathbb{R}^{n}\). Using this embedding function, the following embedding dynamics can be defined
\[\begin{bmatrix}\dot{x}\\ \dot{\widehat{x}}\end{bmatrix}=\begin{bmatrix}\underline{\mathsf{E}}^{c}([x, \widehat{x}],[w])\\ \overline{\mathsf{F}}^{c}([x,\widehat{x}],[w])\end{bmatrix}, \tag{10}\]
where \((x,\widehat{x})\in\mathbb{R}^{2n}\), \(x\leq\widehat{x}\).
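As an illustration, the inclusion function (8) is a one-line computation once the CROWN matrices are available; the following numpy sketch assumes \(\underline{C},\underline{d},\overline{C},\overline{d}\) are given (e.g., by auto_LiRPA):

```python
import numpy as np

def crown_inclusion(C_lo, d_lo, C_hi, d_hi, x_lo, x_hi):
    """Inclusion (8): split C into positive/negative parts (C^+ = max(C, 0),
    C^- = min(C, 0)) and pair each part with the matching interval endpoint."""
    N_lo = np.maximum(C_lo, 0) @ x_lo + np.minimum(C_lo, 0) @ x_hi + d_lo
    N_hi = np.maximum(C_hi, 0) @ x_hi + np.minimum(C_hi, 0) @ x_lo + d_hi
    return N_lo, N_hi   # underline N_{[y]}([x]) and overline N_{[y]}([x])
```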
**Theorem 3.4** (Closed-Loop System Interval Reachability).: _Let \(t\mapsto(\underline{x}(t),\overline{x}(t))\) denote the trajectory of the system (10) with initial condition \((\underline{x}_{0},\overline{x}_{0})\), under interval disturbance mapping \([\mathbf{w}]:\mathbb{R}\rightarrow\mathbbm{R}^{q}\). Let \(t\mapsto x(t)\) denote the trajectory of the closed-loop system (3) with initial condition \(x_{0}\), under disturbance \(\mathbf{w}:\mathbb{R}\rightarrow\mathcal{W}\). If \(x_{0}\in[\underline{x}_{0},\overline{x}_{0}]\) and \(\mathbf{w}(t)\in[\mathbf{w}](t)\) for every \(t\geq t_{0}\), then_
\[x(t)\in[\underline{x}(t),\overline{x}(t)],\quad\text{ for every }t\geq t_{0}.\]
### Experiments
Vehicle ModelConsider the nonlinear dynamics of a vehicle adopted from (Polack et al., 2017):
\[\dot{p}_{x}=v\cos(\phi+\beta(u_{2})),\qquad\dot{p}_{y}=v\sin(\phi+\beta(u_{2})), \tag{11}\] \[\dot{\phi}=\frac{v}{\ell_{r}}\sin(\beta(u_{2})),\qquad\dot{v}=u_{1}. \tag{12}\]
where \([p_{x},p_{y}]^{\top}\in\mathbb{R}^{2}\) is the displacement of the center of mass, \(\phi\in[-\pi,\pi)\) is the heading angle in the plane, and \(v\in\mathbb{R}_{\geq 0}\) is the speed of the center of mass. Control input \(u_{1}\) is the applied force, input \(u_{2}\) is the angle of the front wheels, and \(\beta(u_{2})=\arctan\left(\frac{\ell_{f}}{\ell_{f}+\ell_{r}}\tan(u_{2})\right)\) is the side slip angle. Let \(x=[p_{x},p_{y},\phi,v]^{\top}\) and \(u=[u_{1},u_{2}]^{\top}\). We use the neural network controller (\(4\times 100\times 100\times 2\) ReLU) defined in (Jafarpour et al., 2023), which is applied at evenly spaced intervals of \(0.25\) seconds. This neural network is trained to mimic an MPC that stabilizes the vehicle to the origin while avoiding a circular obstacle centered at \((4,4)\) with a radius of \(2\).
The natural inclusion function is constructed using npinterval with Table 1, and the monotone inclusion function (8) is computed using autoLiRPA (Xu et al., 2020). The closed-loop embedding function (10) is then used to over-approximate the reachable set of the system using Theorem 3.4. The dynamics are simulated using Euler integration with a step size of \(0.05\). The results are shown in Figure 2.
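The reachability computation itself reduces to ordinary Euler integration of the embedding dynamics. The following minimal scalar sketch of Theorem 3.2 uses the illustrative dynamics \(\dot{x}=-x^{3}+w\) in place of the vehicle model; the interval endpoints and disturbance bound are stand-ins:

```python
def f(x, w):
    """Illustrative scalar open-loop dynamics: xdot = -x^3 + w."""
    return -x**3 + w

def embedding(x_lo, x_hi, w_lo, w_hi):
    """Embedding dynamics (6) for n = 1: the pinned interval is degenerate,
    and f is increasing in w, so the bounds evaluate f at matched endpoints."""
    return f(x_lo, w_lo), f(x_hi, w_hi)

x_lo, x_hi = 0.9, 1.1          # initial interval
dt, T = 0.05, 1.25             # Euler step and horizon, as in the experiments
for _ in range(int(T / dt)):
    d_lo, d_hi = embedding(x_lo, x_hi, -0.01, 0.01)
    x_lo, x_hi = x_lo + dt * d_lo, x_hi + dt * d_hi

print(f"over-approximated reachable set at t = {T}: [{x_lo:.4f}, {x_hi:.4f}]")
```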
## 4 Conclusion
In this paper, we introduced a framework for interval analysis implemented directly in numpy, called npinterval. The framework provides an automatic way to generate interval bounds on the output of a general class of functions. We use this framework to formally verify the output of nonlinear neural network controlled systems. In the future, npinterval will be updated to account for floating point error, as well as to support a wider array of standard inclusion functions.
Figure 2: The over-approximated reachable set of the nonlinear vehicle model in the \(p_{x}\)-\(p_{y}\) coordinates is shown in blue for the initial set \([7.95,8.05]^{2}\times[-\frac{2\pi}{3}-0.005,-\frac{2\pi}{3}+0.005]\times[1.995,2.005]\) over the time interval \([0,1.25]\). 100 true trajectories of the system are shown in red, and the average runtime and standard deviation over 100 runs are reported.
## Acknowledgements
This work was supported in part by the National Science Foundation under grant #2219755 and by the Ford Motor Company.
|
2306.11132 | Fairness-aware Message Passing for Graph Neural Networks | Graph Neural Networks (GNNs) have shown great power in various domains.
However, their predictions may inherit societal biases on sensitive attributes,
limiting their adoption in real-world applications. Although many efforts have
been taken for fair GNNs, most existing works just adopt widely used fairness
techniques in machine learning to graph domains and ignore or don't have a
thorough understanding of the message passing mechanism with fairness
constraints, which is a distinctive feature of GNNs. To fill the gap, we
propose a novel fairness-aware message passing framework GMMD, which is derived
from an optimization problem that considers both graph smoothness and
representation fairness. GMMD can be intuitively interpreted as encouraging a
node to aggregate representations of other nodes from different sensitive
groups while subtracting representations of other nodes from the same sensitive
group, resulting in fair representations. We also provide a theoretical
analysis to justify that GMMD can guarantee fairness, which leads to a simpler
and theory-guided variant GMMD-S. Extensive experiments on graph benchmarks
show that our proposed framework can significantly improve the fairness of
various backbone GNN models while maintaining high accuracy. | Huaisheng Zhu, Guoji Fu, Zhimeng Guo, Zhiwei Zhang, Teng Xiao, Suhang Wang | 2023-06-19T19:31:35Z | http://arxiv.org/abs/2306.11132v1 | # Fairness-aware Message Passing for Graph Neural Networks
###### Abstract
Graph Neural Networks (GNNs) have shown great power in various domains. However, their predictions may inherit societal biases on sensitive attributes, limiting their adoption in real-world applications. Although many efforts have been taken for fair GNNs, most existing works just adopt widely used fairness techniques in machine learning to graph domains and ignore or don't have a thorough understanding of the message passing mechanism with fairness constraints, which is a distinctive feature of GNNs. To fill the gap, we propose a novel fairness-aware message passing framework GMMD, which is derived from an optimization problem that considers both graph smoothness and representation fairness. GMMD can be intuitively interpreted as encouraging a node to aggregate representations of other nodes from different sensitive groups while subtracting representations of other nodes from the same sensitive group, resulting in fair representations. We also provide a theoretical analysis to justify that GMMD can guarantee fairness, which leads to a simpler and theory-guided variant GMMD-S. Extensive experiments on graph benchmarks show that our proposed framework can significantly improve the fairness of various backbone GNN models while maintaining high accuracy.
## 1 Introduction
In recent years, Graph Neural Networks (GNNs) have emerged as a powerful approach for node representation learning on graphs, facilitating various domains such as recommendation systems [1; 2], social network mining [3; 4] and knowledge graphs [5; 6]. The success of GNNs relies on the message passing mechanism, which allows them to aggregate information from neighboring nodes and preserve both topological and node feature information [4; 7; 8]. Despite the significant progress of GNNs, they may inherit societal biases on sensitive attributes in data such as age, gender, and race [9; 10; 11], resulting in biased predictions. Moreover, nodes with the same sensitive attributes are likely to be connected. The message passing process smooths the representations of connected nodes, further segregating the representations of nodes in different sensitive groups and causing over-association of their predictions with sensitive attributes. Such biased predictions largely limit the adoption of GNNs in many real-world applications, e.g., healthcare [12], credit scoring [13] and job hunting [14].
Recently, many efforts have been taken to learn fair GNNs [9; 10; 11; 15; 16; 17]. Most of these methods adapt regularization [16], adversarial debiasing [9; 11], or contrastive learning techniques [15; 17] from traditional fairness models for Euclidean data to graph-structured data. Additionally, some methods aim to directly remove bias from node features and graph topology [10]. While these methods can alleviate bias, it is still difficult to understand their effect in conjunction with the message passing process, which is a distinctive feature of GNN models. Training fair models in machine learning is a complex task, as it must balance utility and fairness. In the case of GNNs, the message passing process is a critical component: (i) it can greatly influence the utility performance of GNN models [18]; and (ii) it might magnify the bias
in graphs [9]. To control the trade-off between the utility and fairness of models, it is important to design a fairness-aware message passing scheme with a deep understanding of how it achieves fair predictions. Hence, the above problems pose an important and challenging research question: _How to design a new message passing process that has high accuracy, is fairness-aware, and also provides a thorough understanding of how it can debias GNN models?_
Generally, message passing can be treated as solving an optimization problem with smoothness regularization [19; 20; 21; 22; 23; 24]. Specifically, message passing schemes are derived as one gradient step on the smoothness regularization, and they typically entail the aggregation of node features from neighboring nodes, which yields strong utility performance. However, such an optimization problem with smoothness regularization only considers utility without fairness. Therefore, we propose a new optimization formulation that considers both utility and fairness. By solving the new optimization problem, we obtain a novel fairness-aware message passing framework, GMMD, whose utility performance is ensured by the smoothness regularization and whose fairness performance is ensured by the fairness regularization. It can easily trade off utility and fairness by adjusting the weight of each regularization term. Intuitively, the message passing of GMMD can be interpreted as encouraging each node \(v_{i}\) to weightedly aggregate representations of other nodes whose sensitive attributes differ from that of \(v_{i}\), and to weightedly subtract representations of other nodes with the same sensitive attribute, which can alleviate over-association of the learned representations with sensitive attributes, resulting in fair representations with good utility ensured by smoothness. We also theoretically show that GMMD can minimize the upper bound of the metric for group fairness, resulting in fair predictions. Based on our theoretical analysis, we further propose a more efficient and simpler variant called GMMD-S.
Our main contributions are: **(i)** We propose a novel fairness-aware message passing framework GMMD to mitigate biased predictions on graph data, which guarantees graph smoothness and enhances fairness; **(ii)** We theoretically show that the proposed message passing can achieve better downstream performance with regard to fairness metrics; and **(iii)** Experimental results on real-world datasets show that GMMD exhibits a superior trade-off between prediction performance and fairness.
## 2 Related Works
**Graph Neural Networks.** Graph Neural Networks have shown great ability in representation learning on graphs [18; 25]. Existing GNNs can be roughly divided into spectral-based and spatial-based GNNs. Spectral-based approaches are defined according to spectral graph theory [4; 26; 27; 28; 29; 30; 31]. The convolution operation was first extended to graph data in the spectral domain [26]. Then, GCN [4] simplified graph convolution with a first-order approximation. Spatial-based GNN models learn node representations by aggregating the information of neighboring nodes, such as GraphSage [3], APPNP [7] and GAT [32]. Despite their differences, most GNN variants can be summarized as a message-passing framework composed of pattern extraction and interaction modeling within each layer. Recent works also show that the message passing scheme in each layer can be treated as one gradient step of solving a graph signal denoising problem [19; 20; 21; 22; 23; 24].
**Fairness in Graph Learning.** Despite their great success, most existing GNNs focus only on utility performance while ignoring the fairness issue, which can lead to discriminatory decisions [33]. Therefore, many efforts have been taken for fair GNNs [9; 10; 11; 15; 16; 17; 34; 35; 36; 37; 38]. For example, adversarial debiasing is adopted to train fair GNNs by preventing an adversary from predicting sensitive attributes from learned fair node representations [9; 16]. Fairwalk [34] uses a fair random walk method to learn fair network embeddings. Some recent works also study contrastive learning methods with fairness concerns [15; 39]. FMP [40] proposes a fair message passing with transparency. Even though the message passing of FMP can implicitly optimize fairness constraints, it still lacks a direct and theoretical understanding of how its message passing process can mitigate biases in GNNs with respect to fairness metrics. Therefore, the aforementioned works either ignore a fairness-aware message passing component or do not provide a thorough understanding of how fairness-aware message passing can debias predictions in GNNs. In this work, we develop a novel fairness-aware message passing approach with a theoretical understanding of how it produces fair prediction results.
## 3 Notations and Problem Definition
**Notations.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) be an undirected attributed graph, where \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) is the set of \(N\) nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges, and \(\mathbf{X}\) is the node feature matrix with the \(i\)-th row of \(\mathbf{X}\), i.e., \(\mathbf{X}_{i,:}\) as node feature of \(v_{i}\). The graph structure can be denoted as the adjacency
matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). \(A_{ij}=1\) if \((v_{i},v_{j})\in\mathcal{E}\); \(A_{ij}=0\) otherwise. The symmetrically normalized graph Laplacian matrix is defined as \(\tilde{\mathbf{L}}=\mathbf{I}-\tilde{\mathbf{A}}\), where \(\tilde{\mathbf{A}}=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\) is a normalized self-loop adjacency matrix with \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), and \(\hat{\mathbf{D}}\) is the degree matrix of \(\hat{\mathbf{A}}\). \(\mathcal{N}_{i}=\{v_{j}:(v_{i},v_{j})\in\mathcal{E}\}\) denotes the set of \(v_{i}\)'s neighbors. We use \(\mathbf{s}\in\{0,1\}^{N}\) to denote the sensitive attributes of all nodes, where the \(i\)-th element of \(\mathbf{s}\), i.e., \(s_{i}\in\{0,1\}\), is the binary sensitive attribute of \(v_{i}\). The set of nodes from group 0 (\(s_{i}=0\)) is denoted as \(\mathcal{S}_{0}=\{v_{i}:v_{i}\in\mathcal{V}\wedge s_{i}=0\}\). The set of nodes from group 1 (\(s_{i}=1\)) is denoted as \(\mathcal{S}_{1}=\{v_{i}:v_{i}\in\mathcal{V}\wedge s_{i}=1\}\). \(N_{0}\) and \(N_{1}\) are the numbers of nodes in \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\), respectively.
**Graph Signal Denoising.** Recently, it is shown that many popular GNNs can be uniformly understood as solving graph signal denoising problems with various diffusion properties [19; 20; 21; 22; 23; 24]. For instance, the message passing in GCN [4], GAT [32] and APPNP [7] can be considered as one gradient descent iteration for minimizing \(\lambda_{s}\cdot\operatorname{tr}\left(\mathbf{F}^{T}\tilde{\mathbf{L}} \mathbf{F}\right)+\left\|\mathbf{F}-\mathbf{X}_{\text{in}}\right\|_{F}^{2}\) with the initialization \(\mathbf{F}^{(0)}=\mathbf{X}_{\text{in}}\) and weight \(\lambda_{s}\), where \(\mathbf{X}_{\text{in}}\) can be the original feature \(\mathbf{X}\) or hidden features after feature transformation on \(\mathbf{X}\). These hidden features can be obtained by passing \(\mathbf{X}\) through several layers of MLPs. These GNN models optimize smoothness regularization over graph structure implicitly, but they may also inherit biased representations from the graph structure [10; 41].
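As a concrete illustration of this view, the following minimal PyTorch sketch (our own notation, not code from the cited works) performs one gradient-descent step on the denoising objective above; with step size \(\gamma=1/(1+\lambda_s)\), it collapses to the familiar "aggregate neighbors, then add a residual to the input" update.

```python
import torch

def denoising_step(F, X_in, A_hat, lam_s, gamma):
    """One gradient step on (lam_s/2) tr(F^T L F) + (1/2) ||F - X_in||_F^2,
    where L = I - A_hat is the symmetrically normalized Laplacian."""
    L = torch.eye(A_hat.shape[0]) - A_hat
    grad = lam_s * (L @ F) + (F - X_in)
    return F - gamma * grad

# With gamma = 1 / (1 + lam_s), the update simplifies algebraically to
#   F <- (1 - gamma) * (A_hat @ F) + gamma * X_in,
# i.e., a GCN/APPNP-style neighbor aggregation plus a residual to X_in.
```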
**Problem Formulation.** For fair semi-supervised node classification, part of nodes \(v_{i}\in\mathcal{V}_{L}\) are provided with labels \(y_{i}\in\mathcal{Y}_{L}\), where \(\mathcal{V}_{L}\subseteq\mathcal{V}\) is the set of labeled nodes, and \(\mathcal{Y}_{L}\) is the corresponding label set. Given \(\mathcal{G},\mathcal{V}_{L}\) and \(\mathbf{s}\), the goal of GNNs for fair semi-supervised node classification is to learn a function \(f\left(\mathcal{G}\right)\rightarrow\mathcal{Y}_{L}\) and predict the labels of unlabeled nodes, where the predicted labels should maintain high accuracy whilst satisfying the fairness criteria about sensitive attributes \(\mathbf{s}\).
## 4 Graph Neural Network with Fairness-aware Message Passing
In this section, we introduce the proposed fairness-aware message passing framework GMMD, which aims to give fair predictions while maintaining high accuracy. Unlike traditional message passing methods that optimize smoothness regularization over graphs while ignoring fairness constraints, GMMD formulates a new optimization problem that simultaneously considers smoothness and fairness. Then, by performing one-step gradient descent on this optimization function, we obtain a fairness-aware message passing approach that learns fair and smooth representations, preserving both utility and fairness performance. We also provide a theoretical analysis showing that GMMD can guarantee fairness performance. Based on our theoretical analysis, a simpler and theory-guided fairness-aware message passing method (GMMD-S) is introduced with the fairness guarantee.
### Fairness-aware Message Passing - GMMD
Traditional message passing methods in GNN models learn smooth representations over graphs while ignoring fairness issues, which hinders their adoption in many real-world applications [9; 33]. This motivates us to design a fairness-aware message passing method. Inspired by previous works [19; 20; 21] that treat message passing as one gradient step of an optimization problem based on the smoothness assumption of graph representations, we propose a new optimization problem with fairness considerations. Specifically, to guarantee both smoothness and fairness, a fairness-aware message passing method should be the solution of the following optimization function:
\[\min_{\mathbf{F}}h(\mathbf{F})=h_{s}(\mathbf{F})+\lambda_{f}h_{f}(\mathbf{F} )=\frac{\lambda_{s}}{2}\operatorname{tr}\left(\mathbf{F}^{T}\tilde{\mathbf{L}} \mathbf{F}\right)+\frac{1}{2}\left\|\mathbf{F}-\mathbf{X}_{\text{in}}\right\|_ {F}^{2}+\lambda_{f}h_{f}(\mathbf{F}), \tag{1}\]
where \(h_{s}(\mathbf{F})=\frac{\lambda_{s}}{2}\operatorname{tr}\left(\mathbf{F}^{T}\tilde{\mathbf{L}}\mathbf{F}\right)+\frac{1}{2}\left\|\mathbf{F}-\mathbf{X}_{\text{in}}\right\|_{F}^{2}\) is a smoothness regularization to recover a clean node representation matrix \(\mathbf{F}\in\mathbb{R}^{N\times d}\) and guide the smoothness of \(\mathbf{F}\) over the graph. \(h_{f}(\mathbf{F})\) is a regularization to control the fairness of \(\mathbf{F}\). In this paper, we adopt Maximum Mean Discrepancy (MMD) [42] as the fairness regularization \(h_{f}(\mathbf{F})\), _as it enables us to gain a better understanding of the resulting message passing approach from both an intuitive and a theoretical standpoint_. We leave other choices as future work. MMD is a popular estimator for measuring the distribution discrepancy of two groups. For two groups \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\), MMD measures the distribution discrepancy as:
\[\ell_{\text{MMD}}\left(\mathbf{F}\right)=\frac{1}{N_{0}^{2}}\sum_{v_{i}\in \mathcal{S}_{0}}\sum_{v_{j}\in\mathcal{S}_{0}}k\left(\mathbf{F}_{i},\mathbf{F} _{j}\right)+\frac{1}{N_{1}^{2}}\sum_{v_{i}\in\mathcal{S}_{1}}\sum_{v_{j}\in \mathcal{S}_{1}}k\left(\mathbf{F}_{i},\mathbf{F}_{j}\right)-\frac{2}{N_{0}N_{1 }}\sum_{v_{i}\in\mathcal{S}_{0}}\sum_{v_{j}\in\mathcal{S}_{1}}k\left(\mathbf{F }_{i},\mathbf{F}_{j}\right), \tag{2}\]
where \(\mathbf{F}_{i}\) is the node representation of \(v_{i}\) and \(k(\mathbf{F}_{i},\mathbf{F}_{j})\) denotes the kernel similarity between \(\mathbf{F}_{i}\) and \(\mathbf{F}_{j}\). As our fairness constraint involves exactly matching the considered distributions using
MMD, \(k(\cdot,\cdot)\) should be a characteristic kernel. As shown in [43], RBF and Laplace are popular stationary characteristic kernels. Thus, in this paper, we set \(k(\mathbf{F}_{i},\mathbf{F}_{j})\) to be the RBF kernel, i.e., \(k(\mathbf{F}_{i},\mathbf{F}_{j})=\exp\left(-\alpha\|\mathbf{F}_{i}-\mathbf{F}_ {j}\|^{2}\right)\), where \(\alpha\) determines the rate at which the similarity between two data points decreases as the distance between them increases. Several works [44; 45; 46] have shown the benefits of directly using MMD as a regularizer to train fair models; e.g., we can directly adopt MMD to regularize the node representations of GNNs. However, such direct adoption does not consider the message passing process, which makes it challenging to comprehend the actual impact of using MMD regularization in conjunction with GNNs' message passing. In addition, we empirically show that this simple adoption does not perform well in Section 5. Hence, instead of simply using MMD as a regularizer, we take one gradient step on \(h(\mathbf{F})\) from this new optimization problem and propose the fairness-aware message passing based on MMD, which can be written as:
\[\mathbf{F}^{(k)}=\mathbf{F}^{(k-1)}-\gamma\nabla h\big{(}\mathbf{ F}^{(k-1)}\big{)} =\mathbf{F}^{(k-1)}-\gamma(\lambda_{s}\tilde{\mathbf{L}}\mathbf{F} ^{(k-1)}+\mathbf{F}^{(k-1)}-\mathbf{X}_{\text{in}})+4\gamma\lambda_{f}\alpha \mathbf{PF}^{(k-1)} \tag{3}\] \[=\big{(}(1-\gamma)\mathbf{I}-\gamma\lambda_{s}\tilde{\mathbf{L}} +4\gamma\lambda_{f}\alpha\mathbf{P}\big{)}\mathbf{F}^{(k-1)}+\gamma\mathbf{X} _{\text{in}},\]
where for \(i\neq j\), if \(v_{i}\) and \(v_{j}\) are in the same sensitive group \(\mathcal{S}_{t}\), \(P_{ij}=-k(\mathbf{F}_{i}^{(k-1)},\mathbf{F}_{j}^{(k-1)})/N_{t}^{2}\), \(t\in\{0,1\}\). If \(v_{i}\) and \(v_{j}\) are in different sensitive groups, \(P_{ij}=k(\mathbf{F}_{i}^{(k-1)},\mathbf{F}_{j}^{(k-1)})/N_{0}N_{1}\). And \(P_{ii}=-\sum_{m=1,m\neq i}^{N}P_{im}\). The detailed derivation for Eq.(3) is in Appendix A.1. Following [19], we set the learning rate \(\gamma=\frac{1}{1+\lambda_{s}}\). Then the final _message passing of GMMD_ can be written as:
\[\mathbf{F}^{(k)}=\mathbf{F}^{(k-1)}-\gamma\nabla h\big{(}\mathbf{F}^{(k-1)} \big{)}=\big{(}(1-\gamma)(\mathbf{I}-\tilde{\mathbf{L}})+4\gamma\lambda_{f} \alpha\mathbf{P}\big{)}\mathbf{F}^{(k-1)}+\gamma\mathbf{X}_{\text{in}}. \tag{4}\]
The proposed message passing method can implicitly optimize both the fairness and smoothness regularization in Eq.(1) to learn fair and smooth representations. To better understand the message passing mechanism in Eq.(4), we interpret it as modeling inter-sensitive-group and intra-sensitive-group relations to learn fair representations. Specifically, for two nodes \(v_{i}\) and \(v_{j}\) from different sensitive groups, we have \(P_{ij}>0\), meaning that \(v_{i}\) will aggregate information from \(v_{j}\) with an aggregation weight given by the kernel similarity \(P_{ij}\); on the contrary, if \(v_{i}\) and \(v_{j}\) are from the same sensitive group, then \(P_{ij}<0\), meaning that \(v_{i}\) will subtract the information from \(v_{j}\). Hence, the proposed approach can mitigate bias in GNNs by aggregating representations of nodes with different sensitive attributes to bring nodes from different sensitive groups closer in the embedding space, and subtracting representations of nodes with the same sensitive attributes to prevent representations from becoming overly correlated with sensitive attributes. Therefore, our proposed message passing mechanism can mitigate the issue of biased label assignment in GNNs by modifying the way node representations are updated during message passing, promoting fairer and more accurate predictions. To gain a deeper comprehension of GMMD, we will also provide a theoretical understanding of our model's fairness performance.
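To make Eq.(4) concrete, below is a minimal PyTorch sketch of one GMMD propagation step (variable names and the dense implementation are ours; the authors' released code may differ). Zeroing the within-group entries of \(\mathbf{P}\) instead yields the GMMD-S step of Section 4.3.

```python
import torch

def rbf(X, Y, alpha):
    """RBF kernel k(x, y) = exp(-alpha * ||x - y||^2)."""
    return torch.exp(-alpha * torch.cdist(X, Y) ** 2)

def gmmd_step(F, X_in, A_hat, s, lam_f, gamma, alpha):
    """One fairness-aware message passing step, Eq. (4), dense version.
    Note (1 - gamma)(I - L~) equals (1 - gamma) * A_hat."""
    N0, N1 = float((s == 0).sum()), float((s == 1).sum())
    K = rbf(F, F, alpha)
    same = s[:, None] == s[None, :]
    group_size = torch.where(s == 0, torch.tensor(N0), torch.tensor(N1))
    # P_ij = -k/N_t^2 within group t, +k/(N0*N1) across groups.
    P = torch.where(same, -K / group_size[:, None] ** 2, K / (N0 * N1))
    P.fill_diagonal_(0.0)
    P = P - torch.diag(P.sum(dim=1))   # P_ii = -sum_{m != i} P_im
    return ((1 - gamma) * A_hat + 4 * gamma * lam_f * alpha * P) @ F + gamma * X_in
```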
### Theoretical Analysis
Next, we provide theoretical understandings of the proposed GMMD. We show that GMMD can minimize the upper bound of the metric for group fairness. All detailed proofs can be found in Appendix F.
One of the most widely used fairness metrics for group fairness is demographic parity [14], which ensures an equal positive rate for two groups. The difference in demographic parity, denoted as \(\Delta_{\text{DP}}=|\mathbb{E}(\hat{y}|S=0)-\mathbb{E}(\hat{y}|S=1)|\), measures the fairness of the model, where \(\hat{y}\) is the predicted probability for nodes being positive and \(S\) is the sensitive attribute of nodes. Note that following [10; 9; 40] we only consider binary sensitive attributes, but our analysis can be easily extended to multivariate or continuous sensitive attributes. The details of \(\Delta_{\text{DP}}\) are in Appendix E.
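For reference, the two group-fairness metrics used in this paper can be estimated from predictions as in the short sketch below (helper names are ours; \(\Delta_{\text{EO}}\) additionally conditions on the positive class \(y=1\), following [48]).

```python
import torch

def delta_dp(y_hat, s):
    """Empirical demographic-parity gap |E(y_hat | S=0) - E(y_hat | S=1)|."""
    return (y_hat[s == 0].float().mean() - y_hat[s == 1].float().mean()).abs()

def delta_eo(y_hat, y, s):
    """Equal-opportunity gap: the same difference restricted to y == 1."""
    pos = y == 1
    return delta_dp(y_hat[pos], s[pos])
```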
With the definition of \(\Delta_{\text{DP}}\), we theoretically analyze how our proposed message passing affects the difference in demographic parity. We consider a \(K\)-layer GMMD with each layer performing fairness-aware message passing as in Eq.(4), with input \(\mathbf{F}^{(0)}=\mathbf{X}\). Then, we obtain the representation \(\mathbf{F}^{(K)}\) with a \(K\)-layer model, under both fairness and smoothness constraints. Finally, a linear model is applied to \(\mathbf{F}^{(K)}\) with a Softmax function to predict the probability of labels as \(\hat{\mathbf{Y}}=\text{softmax}(\mathbf{F}^{(K)}\mathbf{W})\), where \(\mathbf{W}\in\mathbb{R}^{d\times c}\) is the learnable weight matrix. \(\hat{\mathbf{Y}}\in\mathbb{R}^{N\times c}\) is a matrix whose \(i\)-th row is the predicted label distribution of node \(v_{i}\). \(d\) is the dimension of the input feature \(\mathbf{X}\) and \(c\) is the number of classes. We use \(\mu\in\mathbb{R}^{d}\) to denote the sample mean of the original features of all nodes, i.e., \(\mu=\sum_{v_{i}\in\mathcal{V}}\mathbf{X}_{i}/N\). Furthermore, \(\mu^{(s)}\in\mathbb{R}^{d}\) represents the sample mean of the original features for the sensitive group \(s\), \(s\in\{0,1\}\). We first demonstrate that \(\Delta_{\text{DP}}^{\text{Rep}}(\mathbf{F}^{(K)})\) at the representation level is an upper bound of \(\Delta_{\text{DP}}\) through the following theorem:
**Theorem 4.1**.: _Minimizing representation discrepancy between two sensitive groups \(\Delta_{\text{DP}}^{\text{Rep}}(\mathbf{F}^{(K)})=\|\mathbb{E}(\mathbf{F}^{(K)} \mid S=0)-\mathbb{E}(\mathbf{F}^{(K)}\mid S=1)\|\) is equivalent to minimizing \(\Delta_{\text{DP}}\) of the model prediction for a binary classification problem, and \(\mathbf{F}^{(K)}\) is the final output of a \(K\) layer model before the softmax function. When \(K\) is set as 2, we have:_
\[\Delta_{\text{DP}}\leq\frac{L}{2}\left\|\mathbf{W}\right\|\left(\Delta_{\text{ DP}}^{\text{Rep}}(\mathbf{F}^{(K)})+C_{1}\left\|\boldsymbol{\Delta}\right\| \right), \tag{5}\]
_where \(C_{1}=(2+\frac{4}{N_{0}}+\frac{4}{N_{1}})^{2}+6+\frac{8}{N_{0}}+\frac{8}{N_{1}}\) and \(\boldsymbol{\Delta}\) is the maximal deviation of \(\mathbf{X}\) from \(\mu\), i.e., \(\boldsymbol{\Delta}_{m}=\max_{i}|\mu_{m}-\mathbf{X}_{i,m}|\). \(L<1\) is the Lipschitz constant for the Softmax function._
The proof is in Appendix F.1. Theorem 4.1 shows that minimizing \(\Delta_{\text{DP}}^{\text{Rep}}\) at the representation level will minimize the fairness metric \(\Delta_{\text{DP}}\). With this theorem, we can analyze the influence of our GMMD on \(\Delta_{\text{DP}}^{\text{Rep}}\) to evaluate whether it promotes fairness in the classification task by reducing discrimination.
Next, we theoretically show that performing our proposed message passing to learn node representation can minimize the fairness metric at the representation level \(\Delta_{\text{DP}}^{\text{Rep}}\).
**Theorem 4.2**.: _For an arbitrary graph, after \(K\)-layer fairness-aware message passing with Eq.(4) on the original feature \(\mathbf{X}\) to obtain \(\mathbf{F}^{(K)}\), when \(K=2\), the consequent representation discrepancy on \(\mathbf{F}^{(K)}\) between two sensitive groups is upper bounded as:_
\[\Delta_{\text{DP}}^{\text{Rep}}(\mathbf{F}^{(K)})\leq\big{(}3-(\frac{1}{N_{0} N_{1}^{2}}+\frac{1}{N_{0}^{2}N_{1}})\sum_{i\in\mathcal{S}_{0}}\sum_{j\in\mathcal{S}_{1}}k(\mathbf{F} _{i}^{(K-1)},\mathbf{F}_{j}^{(K-1)})\big{)}C_{3}+C_{2}, \tag{6}\]
_where \(C_{2}=\|\mu^{(0)}-\mu^{(1)}\|+8(1+\frac{1}{N_{0}}+\frac{1}{N_{1}})^{2}\| \boldsymbol{\Delta}\|+2\|\boldsymbol{\Delta}\|\). Also, the constant value \(C_{3}\) is \(C_{3}=(3-\frac{1}{N_{0}}-\frac{1}{N_{1}})\|(\mu^{(0)}-\mu^{(1)})\|+(4+\frac{2} {N_{0}}+\frac{2}{N_{1}})\|\boldsymbol{\Delta}\|\)._
The proof is in Appendix F.2. Theorem 4.2 shows that \(\Delta_{\text{DP}}^{\text{Rep}}(\mathbf{F}^{(K)})\) is upper bounded by the third term of the MMD loss in Eq.(2) and the constant values \(C_{2}\) and \(C_{3}\). Therefore, combining Theorem 4.2 and Theorem 4.1, optimizing the third term of the MMD regularization can minimize \(\Delta_{\text{DP}}^{\text{Rep}}(\mathbf{F}^{(K)})\), which minimizes the upper bound of the fairness metric \(\Delta_{\text{DP}}\). The proposed GMMD can implicitly optimize the MMD regularization and can be flexibly incorporated into various GNN backbones such as GAT [32] and GIN [47] (see Appendix C). These findings highlight the efficacy of GMMD in mitigating fairness issues and facilitating various backbone GNNs.
### Simple Fairness-aware Message Passing - GMMD-S
From the theoretical analysis in Section 4.2, we find that directly optimizing the third term of the MMD regularization can also effectively minimize the upper bound of the fairness metric. This motivates us to further simplify GMMD and obtain a theory-guided method for mitigating bias in GNNs. Specifically, we simplify the fairness regularization function in Eq.(1) as \(h_{f}(\mathbf{F})=-\frac{1}{N_{0}N_{1}}\sum_{v_{i}\in\mathcal{S}_{0}}\sum_{v _{j}\in\mathcal{S}_{1}}k\left(\mathbf{F}_{i},\mathbf{F}_{j}\right)\) and obtain the simplified fairness-aware message passing as follows:
\[\mathbf{F}^{(k)}=\mathbf{F}^{(k-1)}-\gamma\nabla h\big{(}\mathbf{F}^{(k-1)} \big{)}=\big{(}(1-\gamma)(\mathbf{I}-\tilde{\mathbf{L}})+4\gamma\lambda_{f} \alpha\widetilde{\mathbf{P}}\big{)}\mathbf{F}^{(k-1)}+\gamma\mathbf{X}_{\text{ in}}, \tag{7}\]
where for \(i\neq j\), if \(v_{i}\) and \(v_{j}\) are in the same sensitive group, \(\widetilde{P}_{ij}=0\). If \(v_{i}\) and \(v_{j}\) are in different sensitive groups, \(\widetilde{P}_{ij}=k(\mathbf{F}_{i},\mathbf{F}_{j})/N_{0}N_{1}\). And \(\widetilde{P}_{ii}=-\sum_{m=1,m\neq i}^{N}\widetilde{P}_{im}\), \(\gamma=\frac{1}{1+\lambda_{s}}\). The details can be found in Appendix A.2. Compared to GMMD, this simplified version does not need to calculate the similarity between node pairs from the same sensitive group, which greatly improves the efficiency of the algorithm. Using this message passing immediately optimizes the third term of the MMD regularization, which provably minimizes \(\Delta_{\text{DP}}\) by Theorem 4.2 and Theorem 4.1. We call this simplified version GMMD-S.
### Model Framework and Training Method
**Model Framework.** To increase the expressive power of GMMD, we perform transformations on the original features, resulting in \(\mathbf{X}_{\text{in}}=\text{MLP}_{\theta}(\mathbf{X})\), where \(\mathbf{X}_{\text{in}}\in\mathbb{R}^{N\times c}\) and \(\text{MLP}_{\theta}(\cdot)\) denotes an \(M\)-layer MLP parameterized by \(\theta\). Note that \(c\) can be replaced with any hidden dimension, but setting it to \(c\) yields better empirical results in our experiments. Following [19], we treat \(\mathbf{F}^{(0)}=\mathbf{X}_{\text{in}}\).
After the transformations, we apply the fairness-aware message passing in Eq.(7) or Eq.(4) for \(K\) layers, obtaining \(\mathbf{F}^{(K)}=\text{GMMD}(\mathbf{A},\mathbf{X}_{\text{in}},\mathbf{s})\). We apply a Softmax function to \(\mathbf{F}^{(K)}\) to get predictions. Finally, the cross-entropy loss is used to train the model. Our full algorithm is given in Appendix B.
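Putting the pieces together, a sketch of the full pipeline might look as follows (reusing the `gmmd_step` helper sketched in Section 4.1; the hidden width of the MLP is our assumption):

```python
import torch.nn as nn

class GMMDNet(nn.Module):
    """M-layer MLP feature transform followed by K fairness-aware steps."""
    def __init__(self, d_in, n_class, M=2, K=2, lam_s=1.0, lam_f=1.0,
                 alpha=1.0, d_hid=64):
        super().__init__()
        dims = [d_in] + [d_hid] * (M - 1) + [n_class]
        layers = []
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        self.mlp = nn.Sequential(*layers[:-1])  # drop the final ReLU
        self.K, self.gamma = K, 1.0 / (1.0 + lam_s)
        self.lam_f, self.alpha = lam_f, alpha

    def forward(self, X, A_hat, s):
        X_in = self.mlp(X)   # F^(0) = X_in, with c output channels
        F = X_in
        for _ in range(self.K):
            F = gmmd_step(F, X_in, A_hat, s, self.lam_f, self.gamma, self.alpha)
        return F             # train with cross-entropy on softmax(F)
```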
**Incorporate Different Backbones.** Moreover, our model is flexible enough to facilitate various GNN backbones. For the fairness-aware message passing in Eq.(4) and Eq.(7), the first term of these equations is the message passing of GCN, \(\mathbf{I}-\tilde{\mathbf{L}}\). Therefore, we can simply replace this message passing style with that of different GNN models. We also conduct experiments with GAT and GIN backbones to show the flexibility of our proposed method in Section 5.4. We discuss the details of how to incorporate our model with different backbones in Appendix C.
**Efficient Training.** Calculating \(\mathbf{P}\) or \(\widetilde{\mathbf{P}}\) is time-consuming, with time complexities of \(\mathcal{O}(N^{2}c)\) and \(\mathcal{O}(N_{0}N_{1}c)\), respectively. To efficiently train our algorithm, we randomly sample an equal number of nodes from \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\) in each training epoch and update only the sampled nodes' representations by Eq.(4) or Eq.(7). Let the sampled sensitive group sets be \(\mathcal{S}^{\prime}_{0}\) and \(\mathcal{S}^{\prime}_{1}\), where \(|\mathcal{S}^{\prime}_{0}|\) and \(|\mathcal{S}^{\prime}_{1}|\) are both equal to \(N_{s}\). We can get the sampled fairness-aware message passing as follows:
\[\mathbf{F}^{(k)}=(1-\gamma)(\mathbf{I}-\tilde{\mathbf{L}})\mathbf{F}^{(k-1)}+\gamma\mathbf{X}_{\text{in}},\quad\mathbf{F}^{(k)}_{i}=\mathbf{F}^{(k)}_{i}+4\gamma\lambda_{f}\alpha\sum_{j\in\mathcal{S}^{\prime}_{0}\cup\mathcal{S}^{\prime}_{1}}P_{ij}\mathbf{F}^{(k-1)}_{j},\;i\in\mathcal{S}^{\prime}_{0}\cup\mathcal{S}^{\prime}_{1}, \tag{8}\]
where we can also replace \(\mathbf{P}\) with \(\widetilde{\mathbf{P}}\) for our simplified fairness-aware message passing in Eq.(7). We give time complexity of our proposed method in Appendix D. A detailed training time comparison with state-of-the-art methods is given in Appendix H.4.
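A minimal sketch of the sampled update in Eq.(8) follows (function and variable names are ours): the base propagation is applied to all nodes, while the fairness correction touches only the \(2N_s\) sampled rows.

```python
import torch

def sampled_fairness_correction(F_new, F_prev, s, n_s, gamma, lam_f, alpha):
    """Apply the fairness term of Eq. (8) to n_s nodes per sensitive group.
    Since both sampled groups have size n_s, the 1/N_t^2 and 1/(N0*N1)
    normalizations both reduce to 1/n_s^2 here."""
    idx0 = torch.nonzero(s == 0).flatten()
    idx1 = torch.nonzero(s == 1).flatten()
    idx = torch.cat([idx0[torch.randperm(len(idx0))[:n_s]],
                     idx1[torch.randperm(len(idx1))[:n_s]]])
    K = torch.exp(-alpha * torch.cdist(F_prev[idx], F_prev[idx]) ** 2)
    same = s[idx][:, None] == s[idx][None, :]
    P = torch.where(same, -K, K) / float(n_s * n_s)
    P.fill_diagonal_(0.0)
    P = P - torch.diag(P.sum(dim=1))
    F_new[idx] = F_new[idx] + 4 * gamma * lam_f * alpha * (P @ F_prev[idx])
    return F_new
```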
## 5 Experiment
In this section, we empirically evaluate the effectiveness of the proposed fairness-aware message passing framework GMMD on real-world graphs and analyze its behavior to gain further insights.
### Experimental Setup
**Datasets.** We perform experiments on three widely-used datasets for fairness-aware node classification, i.e., German, Credit and Bail [39]. We provide detailed descriptions, statistics, and sensitive attribute homophily measures of the datasets in Appendix G.1.
**Baselines.** In the following, we denote the proposed message passing method in Eq.(4) as GMMD and that in Eq.(7) as GMMD-S. To evaluate the effectiveness of GMMD and GMMD-S, we consider the following representative and state-of-the-art baselines for group fairness in graph domains: NIFTY [39], EDITS [10], FairGNN [9], FairVGNN [11], and FMP [40]. We also adopt a baseline named MMD, which directly adds MMD regularization at the output layer of the GNN. For all baselines and our model, GCN [4] is used as the backbone. For fair comparisons, we set the number of layers for GCN in all methods to \(2\). The details of baselines and implementations are given in Appendix G.3.
**Evaluation Protocol.** We use the default data splitting following NIFTY [39] and EDITS [10]. We adopt AUC, F1-score and Accuracy to evaluate node classification performance. As for fairness, we adopt two quantitative group fairness metrics to measure prediction bias, i.e., demographic parity \(\Delta_{\text{DP}}\) and equal opportunity \(\Delta_{\text{EO}}\) [48], where smaller values represent better performance. A detailed description of these two fairness metrics is given in Appendix E.
**Setup.** We run each experiment 5 times and report the average performance. For a fair comparison, we select the best configuration of hyperparameters based on performance on the validation set for all methods. For all baselines, we use the same hyperparameter search space used in [11]. The detailed setup and hyper-parameter settings are given in Appendix G.3.
### Overall Performance Comparison
In this subsection, we conduct experiments on real-world graphs to evaluate the utility and fairness of GMMD. Table 1 reports the average AUC, F1-score and Accuracy with standard deviation on node classification after five runs. We also report average \(\Delta_{\text{EO}}\) and \(\Delta_{\text{DP}}\) with standard deviation as the fairness metric. From the table, we make the following observations: (**i**) Generally, our GMMD and GMMD-S achieve the best performance in terms of the average rank for all datasets and across all evaluation metrics, which indicates the superiority of our model in achieving a better trade-off
between model utility and fairness. The average ranks of GMMD and GMMD-S are 2.4 and 2.6 with regard to all metrics. **(ii)** GMMD consistently outperforms FairGNN, FairVGNN, EDITS, NIFTY and FMP. This improvement is mainly because our model combines fairness regularization with the original message passing process of GNNs; it can effectively learn a fair representation while simultaneously achieving better model utility through smoothness regularization. **(iii)** Our simplified model GMMD-S, which minimizes only the third term of the MMD regularization, inspired by Theorem 4.2, achieves comparable or even better results compared with GMMD, which verifies our motivation to design a simpler fairness-aware message passing mechanism with lower time cost that mitigates the fairness issue in GNNs by minimizing the inter-group kernel similarity.
### Parameter Analysis and Empirical Analysis on Theory
**Hyper-parameters Analysis.** Here we investigate the hyper-parameters most essential to our framework, i.e., \(\lambda_{f}\) and \(\lambda_{s}\) (\(\gamma\) is set as \(1/(1+\lambda_{s})\)), which control the fairness and smoothness regularization, respectively. The results are shown in Figure 1. _To better understand the influence of hyperparameters, we use the homophily ratio and sensitive homophily ratio of the graphs to explain the observations_, which are discussed in Appendix G.1. From the figure, we can make the following observations: **(i)** For the German dataset, excessively high or low weights assigned to the smoothness regularization term \(\lambda_{s}\) lead to lower accuracy because of the dataset's lower homophily ratio, meaning that neighboring nodes do not necessarily have similar labels. In this case, it is challenging to improve performance by simply increasing the weight of the smoothness regularization. In contrast, for the Bail dataset with a higher homophily ratio, the accuracy increases and remains stable as \(\lambda_{s}\) grows. **(ii)** Also, \(\Delta_{\text{DP}}\) drops with increasing \(\lambda_{s}\) for the Bail dataset, which has a lower sensitive homophily ratio, but the bias issue is exacerbated for the German dataset, which has a higher sensitive homophily ratio. This is because encouraging connected nodes to have similar label predictions via smoothness regularization also yields consistent predictions across groups on the Bail dataset, where connected nodes have a higher probability of having different sensitive attributes; such consistent predictions across groups lower \(\Delta_{\text{DP}}\). However, for the German dataset with a higher sensitive homophily ratio, the same regularization can exacerbate the bias issue by giving similar predictions to nodes within the same sensitive group. Therefore, it is important to select \(\lambda_{s}\) and \(\lambda_{f}\) with the homophily ratio and sensitive homophily ratio in mind. **(iii)** Generally, our experimental results suggest selecting \(\lambda_{s}\) between 0.5 and 1.0 to guarantee both utility and fairness. A too small value of \(\lambda_{s}\) (e.g., 0.1) may degrade accuracy on all three datasets. Meanwhile, \(\lambda_{f}\) is best selected between 1 and 5 to obtain a lower \(\Delta_{\text{DP}}\). The hyperparameter sensitivity
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c} \hline
**Dataset** & **Metric** & **GCN** & **NIFTY** & **EDITS** & **FairGNN** & **FairVGNN** & **FMP** & **MMD** & **GMMD** & **GMMD-S** \\ \hline
\multirow{5}{*}{**German**} & AUC (\(\uparrow\)) & 74.1\(\pm\)0.37 & 68.78\(\pm\)2.69 & 69.41\(\pm\)2.33 & 67.35\(\pm\)2.13 & 72.41\(\pm\)2.10 & 68.20\(\pm\)4.63 & 72.41\(\pm\)3.10 & 74.03\(\pm\)2.28 & 73.10\(\pm\)1.46 \\
 & F1 (\(\uparrow\)) & 82.46\(\pm\)0.89 & 82.78\(\pm\)0.50 & 81.55\(\pm\)0.59 & 82.01\(\pm\)0.26 & 83.25\(\pm\)0.50 & 81.38\(\pm\)1.30 & 81.70\(\pm\)0.67 & 83.13\(\pm\)0.64 & 82.54\(\pm\)0.92 \\
 & Acc (\(\uparrow\)) & 74.44\(\pm\)0.99 & 69.92\(\pm\)1.14 & 71.60\(\pm\)0.89 & 69.68\(\pm\)0.30 & 72.27\(\pm\)2.73 & 72.53\(\pm\)2.22 & 69.90\(\pm\)0.56 & 72.28\(\pm\)1.64 & 71.87\(\pm\)1.24 \\
 & \(\Delta_{\text{DP}}\) (\(\downarrow\)) & 31.77\(\pm\)2.77 & 13.65\(\pm\)3.25 & 40.48\(\pm\)4.39 & 2.15\(\pm\)1.71 & 7.11\(\pm\)8.68 & 6.48\(\pm\)4.04 & 7.56\(\pm\)7.47 & 2.09\(\pm\)1.60 & 9.09\(\pm\)0.44 \\
 & \(\Delta_{\text{EO}}\) (\(\downarrow\)) & 25.17\(\pm\)5.89 & 5.08\(\pm\)4.29 & 3.89\(\pm\)4.23 & 3.40\(\pm\)2.15 & 0.88\(\pm\)5.58 & 5.81\(\pm\)4.03 & 5.34\(\pm\)5.31 & 0.76\(\pm\)0.52 & 0.99\(\pm\)0.33 \\ \hline
\multirow{5}{*}{**Credit**} & AUC (\(\uparrow\)) & 83.02\(\pm\)0.42 & 74.30\(\pm\)2.01 & 73.01\(\pm\)0.11 & 71.95\(\pm\)4.13 & 71.94\(\pm\)5.14 & 69.42\(\pm\)4.25 & 69.94\(\pm\)0.83 & 71.62\(\pm\)0.07 & 71.78\(\pm\)0.30 \\
 & F1 (\(\uparrow\)) & 81.92\(\pm\)0.21 & 81.20\(\pm\)0.58 & 81.18\(\pm\)0.18 & 81.14\(\pm\)1.97 & 87.08\(\pm\)0.74 & 84.99\(\pm\)0.73 & 84.99\(\pm\)0.73 & 87.57\(\pm\)0.71 & 86.96\(\pm\)0.58 \\
 & Acc (\(\uparrow\)) & 73.67\(\pm\)0.03 & 73.45\(\pm\)0.06 & 73.51\(\pm\)0.30 & 73.41\(\pm\)1.24 & 78.04\(\pm\)0.33 & 75.81\(\pm\)2.35 & 76.13\(\pm\)0.35 & 78.11\(\pm\)0.30 & 78.08\(\pm\)0.36 \\
 & \(\Delta_{\text{DP}}\) (\(\downarrow\)) & 20.49\(\pm\)0.19 & 1.68\(\pm\)0.07 & 19.02\(\pm\)12.64 & 71.60\(\pm\)1.21 & 50.25\(\pm\)2.52 & 50.54\(\pm\)2.28 & 1.33\(\pm\)1.35 & 1.55\(\pm\)1.99 & 2.27\(\pm\)1.75 \\
 & \(\Delta_{\text{EO}}\) (\(\downarrow\)) & 10.63\(\pm\)0.13 & 9.99\(\pm\)0.07 & 8.75\(\pm\)1.21 & 10.41\(\pm\)2.03 & 3.60\(\pm\)3.41 & 4.60\(\pm\)4.49 & 2.24\(\pm\)1.06 & 1.08\(\pm\)1.53 & 1.95\(\pm\)1.43 \\ \hline
\multirow{5}{*}{**Bail**} & AUC (\(\uparrow\)) & 87.87\(\pm\)0.35 & 78.20\(\pm\)2.78 & 86.44\(\pm\)2.27 & 87.36\(\pm\)0.90 & 55.88\(\pm\)6.09 & 83.68\(\pm\)0.26 & -- & 88.18\(\pm\)2.04 & 89.12\(\pm\)0.66 \\
 & F1 (\(\uparrow\)) & 79.02\(\pm\)0.74 & 76.49\(\pm\)0.57 & 75.83\(\pm\)3.77 & 75.01\(\pm\)0.69 & 79.11\(\pm\)0.33 & 77.46\(\pm\)0.40 & 77.51\(\pm\)0.67 & 77.92\(\pm\)1.98 & 78.72\(\pm\)0.49 \\
 & Acc (\(\uparrow\)) & 84.56\(\pm\)0.68 & 74.19\(\pm\)2.57 & 84.49\(\pm\)2.27 & 82.94\(\pm\)1.67 & 84.73\(\pm\)0.46 & 83.69\(\pm\)0.26 & 84.41\(\pm\)0.40 & 84.95\(\pm\)1.63 & 85.62\(\pm\)0.45 \\
 & \(\Delta_{\text{DP}}\) (\(\downarrow\)) & 73.55\(\pm\)0.72 & 2.44\(\pm\)1.29 & 6.64\(\pm\)0.39 & 6.90\(\pm\)0.17 & 6.53\(\pm\)0.67 & 6.85\(\pm\)0.19 & 7.33\(\pm\)1.05 & 2.83\(\pm\)1.99 & 1.68\(\pm\)1.34 \\
 & \(\Delta_{\text{EO}}\) (\(\downarrow\)) & 4.96\(\pm\)0.62 & 1.72\(\pm\)1.08 & 7.51\(\pm\)1.20 & 4.65\(\pm\)0.14 & 4.95\(\pm\)1.22 & -- & -- & -- & -- \\ \hline
\end{tabular}
\end{table}
Table 1: Node classification utility and fairness performance comparison on the German, Credit, and Bail datasets; unavailable entries are marked "--".
analysis on \(\alpha\) and the number of MLP layers \(M\) is not included here, as they are not directly relevant to our key motivation; their sensitivity results are therefore included in Appendix H.2.
**Effect of Layers \(K\).** We then explore the effect of the number of layers \(K\) on utility (AUC, F1, Acc) and fairness (\(\Delta_{\text{DP}}\) and \(\Delta_{\text{EO}}\)). Figure 2 shows the corresponding results. We can find that **(i)** the utility performance first increases and then decreases as \(K\) grows. This is because a larger \(K\) leads to over-smoothing, which degrades the model's performance; and **(ii)** \(\Delta_{\text{DP}}\) and \(\Delta_{\text{EO}}\) consistently drop as \(K\) increases. This is because more GMMD layers can more effectively minimize the MMD loss and lead to a fairer model. To learn a fair and accurate model, the number of layers \(K\) can be selected from 2 to 4 based on our experimental results.
**Empirical Analysis for Theorem 4.2.** We can understand the empirical implications of Theorem 4.2, combined with Theorem 4.1, on GMMD and GMMD-S as follows: by maximizing the inter-group kernel similarity term, defined as \(1/N_{0}N_{1}\sum_{i\in S_{0}}\sum_{j\in S_{1}}k(\mathbf{F}_{i}^{(K-1)},\mathbf{F}_{j}^{(K-1)})\), we simultaneously minimize the fairness metric. The corresponding results are shown in Figure 3, where Sim denotes the inter-group kernel similarity multiplied by 100. We can observe that \(\Delta_{\text{DP}}\) decreases as the inter-group kernel similarity increases. Furthermore, at the initial stage of training, the smoothness regularization dominates, which increases accuracy but leads to unfair predictions. This is because the message passing of GNNs derived from smoothness regularization may introduce or exacerbate bias [10]. Then, our proposed GMMD maximizes the inter-group similarity and helps GNN models mitigate biased prediction results.
### Ablation Study and Different Backbones
**Ablation Study.** We conduct ablation studies to gain insights into the effect of message passing without the smoothness and fairness regularizations, denoted as "w/o smooth" and "w/o fair" in Table 2. We also conduct the experiment without the sampling method, denoted "w/o sample". First, smoothness regularization alone achieves great performance on AUC, F1 and Acc but introduces bias and does not perform well on the fairness metrics. Fairness regularization alone as an optimization target for GMMD produces better results on the fairness metrics. The full model (last row) achieves the best performance, which illustrates that we can effectively control the trade-off between fairness and utility with the message passing derived from the two regularization terms. Moreover, the average ranks for "w/o sample" and the full model are 2.4 and 2.4, which shows that our model with the sampling method achieves performance comparable to training with all samples. This verifies the effectiveness of our sampling method for training GMMD.
**Different Backbones Analysis.** To further show the effectiveness of our model, we adopt various backbones. Specifically, we replace the message passing term (\(\mathbf{I}-\tilde{\mathbf{L}}\)) in Eq.(4) and Eq.(7) with the message passing employed in GAT and GIN. A detailed discussion of GMMD with different backbones is given in Appendix H.1. The corresponding results on the Bail
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
\multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**German**} \\
 & AUC (\(\uparrow\)) & F1 (\(\uparrow\)) & ACC (\(\uparrow\)) & \(\Delta_{\text{DP}}\) (\(\downarrow\)) & \(\Delta_{\text{EO}}\) (\(\downarrow\)) \\ \hline
w/o smooth & 70.85\(\pm\)0.43 & 82.46\(\pm\)0.38 & 70.80\(\pm\)0.30 & 70.83\(\pm\)0.42 & 2.01\(\pm\)0.37 \\
w/o fair & 73.41\(\pm\)9.38 & 83.46\(\pm\)0.58 & 73.20\(\pm\)1.13 & 2.92\(\pm\)1.50 & 3.04\(\pm\)1.29 \\
w/o sample & 73.68\(\pm\)0.28 & 81.85\(\pm\)1.57 & 73.37\(\pm\)1.00 & 1.37\(\pm\)0.29 & 2.11\(\pm\)1.51 \\ \hline
GMMD & 74.01\(\pm\)2.86 & 83.13\(\pm\)0.44 & 22.86\(\pm\)1.60 & 2.09\(\pm\)1.60 & 2.06\(\pm\)0.65 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Ablation study of GMMD on the German dataset.
dataset are given in Table 3, and results on the other datasets are in Table 5 in Appendix G.3. As shown in Table 3, our GMMD and GMMD-S outperform the state-of-the-art method FairVGNN in terms of both utility and fairness in most cases, which once again demonstrates the effectiveness of our proposed method and shows that our framework is flexible enough to facilitate various GNN backbones.
### Visualization and Case Study
Generally, GMMD encourages a node \(v_{i}\) to aggregate nodes \(v_{j}\) from a different sensitive group to mitigate fairness issues, using weights based on the kernel similarity \(k(\mathbf{F}_{i}^{(K-1)},\mathbf{F}_{j}^{(K-1)})\). Next, we provide a visualization and a case study to understand how GMMD preserves utility while learning fair representations. More results can be found in Appendix H.3. Specifically, in Figure 4 (a) and (b), we calculate the pair-wise kernel similarity of randomly sampled nodes from different sensitive groups. We visualize the distribution of similarity for node pairs with the same labels and with different labels, respectively. We can observe that node pairs with the same labels exhibit larger kernel similarity, whereas nodes with distinct labels show lower similarity. This observation suggests that our model assigns higher weights to nodes with the same label, thereby encouraging nodes to aggregate relevant label semantics and preserving utility. In Figure 4 (c) and (d), we randomly sample the one-hop sub-graph of a central node and calculate the kernel similarity, based on the learned representations, between the central node and its neighbors. We can observe that our model successfully assigns higher weights to inter-sensitive-group edges that connect two nodes with the same label. This again demonstrates that our proposed message passing preserves utility by assigning higher weights to same-label nodes from different sensitive groups.
## 6 Conclusion
In this paper, we propose a novel fairness-aware message passing framework GMMD to mitigate fairness issues on graph data. The key idea is to derive a new message passing method for GNNs from an optimization problem with smoothness and fairness constraints. Interestingly, the derived message passing can be intuitively interpreted as encouraging a node to aggregate representations of other nodes from different sensitive groups while subtracting representations of other nodes from the same sensitive group. By mixing node representations from various sensitive groups, GMMD alleviates the issue of learning representations that are highly correlated with sensitive attributes. Moreover, we theoretically show that GMMD enjoys good downstream performance on the fairness metric. We empirically validate our theoretical findings, and our extensive experiments on several graph benchmarks validate the superiority of GMMD in both utility and fairness performance.
Figure 4: Visualization and Case Study. For case study in (c) and (d), node color denotes the class label of nodes and node shape denotes sensitive group of nodes. Blue lines denote inter sensitive group edges and their line width is proportional to kernel similarity between the connected nodes.
\begin{table}
\begin{tabular}{l|l|c c c c|c c c c} \hline \hline
\multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Metric**} & \multicolumn{4}{c|}{**GIN**} & \multicolumn{4}{c}{**GAT**} \\
 & & **Vanilla** & **FairVGNN** & **GMMD** & **GMMD-S** & **Vanilla** & **FairVGNN** & **GMMD** & **GMMD-S** \\ \hline
\multirow{5}{*}{**Bail**} & AUC (\(\uparrow\)) & 86.14\(\pm\)0.25 & 83.22\(\pm\)1.60 & 86.53\(\pm\)1.54 & 86.50\(\pm\)1.61 & 88.10\(\pm\)3.52 & 87.46\(\pm\)0.69 & 90.58\(\pm\)0.74 & 89.77\(\pm\)0.74 \\
 & F1 (\(\uparrow\)) & 76.49\(\pm\)0.57 & 76.36\(\pm\)2.20 & 76.62\(\pm\)1.88 & 76.64\(\pm\)1.71 & 76.80\(\pm\)5.54 & 76.12\(\pm\)0.92 & 78.45\(\pm\)3.21 & 77.34\(\pm\)0.47 \\
 & ACC (\(\uparrow\)) & 81.70\(\pm\)0.67 & 83.86\(\pm\)1.57 & 84.08\(\pm\)1.35 & 84.04\(\pm\)1.26 & 83.31\(\pm\)4.55 & 81.65\(\pm\)0.94 & 83.39\(\pm\)4.50 & 83.44\(\pm\)2.13 \\
 & \(\Delta_{\text{DP}}\) (\(\downarrow\)) & 8.55\(\pm\)1.61 & 5.67\(\pm\)0.76 & 3.75\(\pm\)1.11 & 3.61\(\pm\)1.42 & 5.66\(\pm\)1.78 & 5.41\(\pm\)3.25 & 1.24\(\pm\)0.28 & 3.27\(\pm\)1.79 \\
 & \(\Delta_{\text{EO}}\) (\(\downarrow\)) & 6.99\(\pm\)1.51 & 5.77\(\pm\)1.26 & 4.69\(\pm\)0.69 & 4.71\(\pm\)0.66 & 4.34\(\pm\)1.35 & 2.25\(\pm\)1.61 & 1.18\(\pm\)0.34 & 2.52\(\pm\)2.22 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Node classification performance with GIN and GAT as backbones. The best and runner-up results of different backbones are colored in red and blue.
2303.07830 | Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding
Spiking Neural Network Facilitate Predictions of Neural Computation | Despite its better bio-plausibility, goal-driven spiking neural network (SNN)
has not achieved applicable performance for classifying biological spike
trains, and showed little bio-functional similarities compared to traditional
artificial neural networks. In this study, we proposed the motorSRNN, a
recurrent SNN topologically inspired by the neural motor circuit of primates.
By employing the motorSRNN in decoding spike trains from the primary motor
cortex of monkeys, we achieved a good balance between classification accuracy
and energy consumption. The motorSRNN communicated with the input by capturing
and cultivating more cosine-tuning, an essential property of neurons in the
motor cortex, and maintained its stability during training. Such
training-induced cultivation and persistency of cosine-tuning was also observed
in our monkeys. Moreover, the motorSRNN produced additional bio-functional
similarities at the single-neuron, population, and circuit levels,
demonstrating biological authenticity. Thereby, ablation studies on motorSRNN
have suggested long-term stable feedback synapses contribute to the
training-induced cultivation in the motor cortex. Besides these novel findings
and predictions, we offer a new framework for building authentic models of
neural computation. | Tengjun Liu, Yansong Chua, Yiwei Zhang, Yuxiao Ning, Pengfu Liu, Guihua Wan, Zijun Wan, Shaomin Zhang, Weidong Chen | 2023-03-14T12:06:56Z | http://arxiv.org/abs/2303.07830v1 | Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation
###### Abstract
Despite its better bio-plausibility, goal-driven spiking neural network (SNN) has not achieved applicable performance for classifying biological spike trains, and showed little bio-functional similarities compared to traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN topologically inspired by the neural motor circuit of primates. By employing the motorSRNN in decoding spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN communicated with the input by capturing and cultivating more cosine-tuning, an essential property of neurons in the motor cortex, and maintained its stability during training. Such training-induced cultivation and persistency of cosine-tuning was also observed in our monkeys. Moreover, the motorSRNN produced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating biological authenticity. Thereby, ablation studies on motorSRNN have suggested long-term stable feedback synapses contribute to the training-induced cultivation in the motor cortex. Besides these novel findings and predictions, we offer a new framework for building authentic models of neural computation.
## Introduction
Recently, invasive brain-machine interface (iBMI) technology has made great strides. Subjects can now voluntarily modulate their neuronal activities to control robotic arms [1, 2], or type text for communication [3, 4]. These studies utilized neuronal firing rates for decoding, which are compressed from spike trains. Theoretically, spike trains as features are more computationally powerful and efficient [5]. However, it remains unclear whether spike trains can be directly decoded with applicable performance. Spiking neural networks (SNN) serve as a good candidate to test this idea, since they can take spike trains as inputs and make use of their temporal coordinates [6]. Yet, there are only a few studies employing SNNs to directly decode cortical spike trains. In one of the pioneering works, a feedforward fully-connected SNN was used to decode the cortical spike trains collected from the primary motor cortex (M1) of monkeys and achieved the highest accuracy of 90.4% and 67.8% for a 2-category and a 3-category decoding task, respectively [7]. Notably, the above accuracies resulted from optimal neuron selection, a practice of selecting the best neurons across multiple experiments for decoding. Additionally, two more SNN applications for decoding cortical spike trains were developed on neuromorphic chips. One study implemented a feedforward SNN for a real-time 4-category decoding task on spike trains collected from an anesthetized rat's brain reacting to 4 stimuli, which only achieved relatively low accuracies between 50% and 70% [8]. The other employed an SNN with a contralateral suppression strategy for a 2-
category decoding task on spike trains collected from the motor cortex of monkeys. Its highest accuracy reached 89.3% after optimal neuron selection, but its decoding performance on simultaneously recorded neurons only marginally exceeded the chance level [9]. Overall, previous models without optimal neuron selection suffered from low accuracy, while models with optimal neuron selection were inapplicable to real-time iBMI applications. Therefore, we aim to investigate whether an SNN can achieve satisfactory performance for directly decoding cortical spike trains without employing optimal neuron selection.
Aside from decoding, we also wonder whether some biological similarity could emerge within such a goal-driven SNN of better bio-plausibility, as has been widely shown in traditional artificial neural network (ANN) models. Goal-driven deep learning (DL) network models have gained much popularity in helping to understand sensory neural systems, especially convolutional neural networks (CNN) [10]. It has been reported that a supervised deep CNN could explain the representation of the inferior temporal (IT) cortex well [11]. Recently, Khaled et al. trained a deep CNN to classify objects in natural images, in which numerosity-selective neurons spontaneously emerged, similar to biological neurons with numerosity preference [12]. Another study conducted by Katharina et al. also showed that functional separations, especially of objects and faces, spontaneously emerged in a deep CNN, just like in the biological brain [13]. Other bio-functional similarities have also been found between CNNs and the rodent whisker-trigeminal system [14], and between word-predictive natural language processing (NLP) ANN models and the human cortex responsible for NLP [15]. Besides sensory systems, there are far fewer reports on the resemblance between ANN models and biological motor circuits. David et al. found that the dynamics of artificial neurons were similar to those of motor cortex neurons in monkeys when training a recurrent neural network (RNN) model to generate muscle activity [16]. Receiving inputs of different images of objects, a modular RNN trained to produce object-corresponding grasping activities could also capture the neural dynamics in the visuomotor circuit of monkeys [17]. Based on these similarities in ANNs, researchers started to seek their potential neuroscientific implications. Since ANNs are proper models of the primate brain's ventral visual stream, Pouya et al. developed an ANN-driven image synthesis method [18]. By applying the generated images to the primate retinae, researchers could predict the neuronal tuning and control the activity of the neuronal population in V4. Taking deep CNNs trained for object classification as models of the primate visual circuit, Bao et al. found a hierarchical map of object space in the IT cortex, which was further validated in biology [19]. As for SNNs, Chen et al. showed that a data-based large-scale model of the primary visual cortex exhibits some neural coding properties similar to those of the biological brain [20]. Yet, for neural motor circuits, whether an SNN can show such bio-functional similarity as its traditional counterpart does remains unclear. Therefore, we intend to examine SNNs for bio-functional similarity and see whether any insight can be gained for a better understanding of motor neuroscience.
In this study, we proposed the motorSRNN, a spiking recurrent neural network (SRNN) topologically inspired by the neural motor circuit of primates, to decode the cortical spike trains collected from the motor cortex of monkeys, and evaluated its engineering performance. Next, we investigated how the motorSRNN processes the input biological spike trains. Afterward, we examined the emergence of bio-functional similarity in the motorSRNN. Finally, we sought some other biologically verifiable observations in motorSRNN as predictions for features of the motor circuits, one of which was validated through biological experiments.
## Results
In order to investigate the SNN's potential applicability in iBMI, we proposed the motorSRNN. Topologically inspired by the neural motor circuit in primates, the motorSRNN is designed to reconstruct a path between the biological brain and external devices, as shown in Fig. 1 (**A**). It consists of multiple layers that correspond to different regions of the biological brain, including the motor cortex (input layer, MC1, MC2), subcortical areas (SC), the spine (Sp), and motoneurons that connect to the muscles (Ms), as presented in Fig. 1 (**A-B**). A detailed description of the motorSRNN is provided in section _Materials and Methods_.
Firstly, our study sought to investigate the advantages of the proposed motorSRNN when decoding neural data collected from M1 of two monkeys, B04 and C05. To this end, we compared the performance of the motorSRNN with that of the feedforward SNN (fSNN), the best-reported SNN classification algorithm applied to a similar iBMI task [7], long short-term memory (LSTM), and support vector machine (SVM) on both datasets. Our results, presented in Fig. 1 (**C-D**), demonstrated that the motorSRNN outperformed the fSNN and LSTM in terms of TOP-1 validating accuracy, with a comparable number of parameters. In particular, compared to the fSNN, the motorSRNN advanced the TOP-1 validating accuracy by more than 25% on both datasets, and it achieved a ~5% improvement over the LSTM. The performance of the motorSRNN was also superior to that of the SVM, a traditional decoding algorithm in neuroscience, albeit one with a considerably smaller number of parameters. Note that the SVM took neuronal firing rates as input, while the other algorithms directly took spike trains. Moreover, the standard deviation of the motorSRNN was lower than that of the other algorithms, suggesting great robustness. Combining this fact with its accuracy, our results highlight the potential of the motorSRNN for applicable iBMI with neuromorphic computing. Furthermore, the motorSRNN exhibited the fastest convergence among the models considered (Fig. 1 (**C**)): it converged in less than 5 epochs, whereas the fSNN and LSTM required around 10 and more than 20 epochs, respectively. The motorSRNN's superior performance in terms of convergence speed and decoding accuracy was further corroborated by the results of the 10-fold cross-validation analysis, as depicted in Fig. S1 and Fig. S2.
Though the motorSRNN achieved practical accuracy, it must also be energy-efficient for application in fully implantable iBMIs. The estimation of energy cost was based on the numbers of accumulation (AC) and multiply-and-accumulate (MAC) operations of the different algorithms, as listed in Tab. S3. According to the theoretical energy computations and the architectures of the networks, the estimated energy consumption of the motorSRNN, fSNN, and LSTM is shown in Tab. S4. As demonstrated in Fig. 1 (**D**), the motorSRNN consumed only approximately 1/50 of the energy required by the LSTM, which is consistent with previous studies [21-23]. The motorSRNN did need about 50 times more energy than the fSNN; however, the fSNN performed much worse than the motorSRNN, failing to achieve practical accuracy. Similarly, since its features were traditional firing rates compressed from spike trains and its calculation was simpler, the SVM consumed considerably less energy than the other algorithms, but its performance was also significantly worse than that of the motorSRNN. Overall, the motorSRNN achieved applicable classification performance while keeping its energy consumption relatively low, paving the way toward fully implantable iBMIs with strict energy-consumption requirements.
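As a minimal illustration of this style of estimate (the per-operation energies and operation counts below are placeholders, not the values of Tab. S3-S4):

```python
# Sketch of the AC/MAC-based energy estimate described above. The
# per-operation energies are illustrative figures for a 45 nm CMOS
# process, commonly assumed in the SNN literature; the true values and
# per-model operation counts are given in Tab. S3-S4.
E_AC = 0.9e-12   # J per accumulate (AC) operation, assumed
E_MAC = 4.6e-12  # J per multiply-and-accumulate (MAC) operation, assumed

def estimated_energy(n_ac: float, n_mac: float) -> float:
    """Total energy (J) of one forward pass, given operation counts."""
    return n_ac * E_AC + n_mac * E_MAC

# Hypothetical operation counts per inference:
print(estimated_energy(n_ac=5e5, n_mac=1e3))  # sparse, spike-driven SNN
print(estimated_energy(n_ac=0.0, n_mac=1e6))  # dense ANN such as an LSTM
```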
### Capture and cultivation of cosine-tuning in the motorSRNN

Many neurons in the primate motor cortex are cosine-tuned: each fires maximally at its preferred direction (PD) with respect to the direction of movement of the primate's hand [24], and this impressive finding has served as a cornerstone for a multitude of motor iBMI applications [25-28]. A more detailed description of cosine-tuned neurons in M1 is given in section _Materials and Methods_. Cosine-tuning is thus an important property of the recorded biological neurons generating the input spike trains. Intriguingly, some artificial neurons that were significantly cosine-tuned to the labels (i.e., the moving directions of the macaques' hands) were also found in the motorSRNN, especially in layer MC1.
For both datasets B04 and C05, Fig. 2 (**A**, **D**) shows the distribution of \(R^{2}\) for cosine fitting of the neuronal average firing rates in layer MC1 of the motorSRNN. After training, most neurons obtained high \(R^{2}\), and a substantial portion of neurons increased their \(R^{2}\). In order to prevent overestimation of the dependence between the reaching directions and the firing rates of neurons, a statistical test for cosine fitting was conducted, where an \(R^{2}\) was considered significant only if \(p\)\(<\)0.05 in the fitting. Significantly cosine-tuned neurons with \(R^{2}\) larger than 0.7 will be referred to as SCtNs. A comparison of the numbers of SCtNs in layer MC1 of the motorSRNN and in the hidden layer (HL) of the fSNN before and after training is shown in Fig. 2 (**B**, **E**). After training, a prominent increase in the number of SCtNs was observed in the motorSRNN, whereas there was no significant difference before and after training for the fSNN. Thus, the motorSRNN captured and cultivated more cosine-tuning from the input biological spike trains, which was not found in the fSNN.
However, the results presented in Fig. 2 (**A-B**, **D-E**) were computed based on the epoch with the highest validating accuracy, leaving the stability of the reported cultivation unknown. To assess the persistence of this cultivation, we investigated whether the number of SCtNs remained significantly elevated compared to the pre-training level throughout training for both datasets B04 and C05. Our findings confirmed that this cultivation was stable across epochs, as demonstrated in Fig. 2 (**C**, **F**).
### Bio-functional similarity between the neural motor circuit and the motorSRNN
The results analyzed above demonstrated a transfer of cosine-tuning from the input biological spike trains to the motorSRNN. This prompted us to investigate whether the motorSRNN cultivated additional bio-functional similarities or simply maintained the information passed on by the input. In Fig. 2 (**B**, **E**), we observed a relatively small number of SCtNs in the motorSRNN prior to training, which we interpret as a reflection of the cosine-tuning purely induced by the input collected from M1. Following training, the number of SCtNs increased, indicating the emergence of additional bio-functional similarity of cosine-tuning in the proposed motorSRNN. This finding is in agreement with a large body of other inspiring models [10-20]. Below, we elaborate on the concrete bio-functional similarities observed in the proposed motorSRNN.
**Single-neuron level.** Cosine-tuning is a vital feature of neurons in the motor cortex of primates [24]. Some neurons in layer MC1 of the motorSRNN that were not significantly cosine-tuned prior to training became tuned to the labels, i.e., the moving directions of the monkeys' hands. This was demonstrated by two typical example neurons, one trained with dataset B04 and the other with dataset C05, both in a single run, as shown in Fig. 3 (**A-B**). Notably, these two neurons were not significantly cosine-tuned before training, despite receiving spikes from cosine-tuned input M1 neurons. After training, however, they exhibited significant cosine-tuning with PDs of 4.03 and 1.76 rad, respectively (\(R^{2}\)=0.9999 for the neuron trained with dataset B04, and \(R^{2}\)=0.9979 for the one trained with dataset C05). More cases of SCtNs with different PDs, which were not significantly cosine-tuned before training, are shown in Fig. S3.
**Population level.** Several studies have shown that symmetry is an important feature of neuronal PD distributions in the motor cortex of primates [29-31]. In our investigation, we examined the PD distributions of SCtNs in layer MC1 of the motorSRNN, using both datasets B04 and C05. We
found that these PD distributions were consistent with the symmetry observed _in vivo_, as demonstrated by the polar histograms in Fig. 3 (**C-D**). Note that peaks in the polar plot indicate directions to which more neurons were cosine-tuned. A decrease in the resultant vector length (RVL) showed that the PDs of SCtNs in layer MC1 of the motorSRNN became more symmetrically distributed after training. We also calculated the firing rates of neurons in the motorSRNN trained on both datasets B04 and C05. Our findings, as shown in Fig. S4, indicate that the firing rates of these neurons became more biologically plausible after training, which did not occur in the case of the fSNN.
**Circuit level.** The most unique structure of the motorSRNN is the M1LC, inspired by the cortico-motoneuronal (CM) connections that evolved specifically in primates [32]. To explore the function of the M1LC of the motorSRNN, we conducted ablation experiments. Previous studies have established a consensus that the biological CM connections mainly contribute to the dexterous control of hands and fingers [33]. Accordingly, given the correspondence between the M1LC in the motorSRNN and the biological CM connections, we hypothesized that the ablation of the M1LC would lead to two changes. First, since the monkeys in our experiment performed a task not demanding dexterity, we expected the ablation of the M1LC to have a small effect on the final classification performance. Second, we anticipated that it might increase the variability of the training process, since learning motor skills requires fine coordination of muscles. The results showed that, after the ablation, the validating accuracy of the motorSRNN slightly decreased (Fig. 3 (**E-F**), left panel); and importantly, the standard deviations significantly increased (Fig. 3 (**E-F**), right panel). These findings were consistent with our expectations and highlighted the bio-functional similarity between the M1LC in the motorSRNN and the biological CM connections.
### Cultivation and persistency of cosine-tuning in the biological motor cortex
In the preceding analyses, we observed the cultivation and persistency of the emergent cosine-tuning in layer MC1 of the motorSRNN after training. It is therefore of interest to validate the prediction that new task training will also induce such cultivation and persistency in the motor cortex of primates, the biological counterpart of layer MC1. To explore this idea, we designed mind control experiments in which monkeys were trained to modulate their neural activities to adapt to a new adaptive Kalman filter decoder. In order to ensure a large compositional difference of the recorded neuronal groups for more generalizable results, a new monkey (B11) was trained along with monkey C05, in separate sessions, with an interval of approximately one month between consecutive sessions. Beginning with a look block, each session consisted of several mind control blocks, during which the level of assistance decreased, and ended with pure mind control blocks. Both monkeys C05 and B11 were able to manipulate the cursors on the computer screen purely by modulating their neural activities, with success rates of 95.60% and 90.65%, respectively.
In the experiments, the look block was identified as the before-training phase, the subsequent blocks were considered the during-training phase, and the pure mind control blocks were designated the after-training phase. A comparison of the number of SCtNs in the recorded M1 neurons of the two monkeys, C05 and B11, before and after training is shown in Fig. 4 (**A**). The results indicated significant increases in the number of SCtNs in the motor cortex of both monkeys after training, consistent with the findings for layer MC1 of the motorSRNN presented in Fig. 2 (**B**, **E**). Furthermore, the persistence of this cultivation was also examined to assess its stability over time. Specifically, we investigated whether the number of SCtNs remained significantly higher than the before-training level during training for both monkeys C05 and B11. The results demonstrated that this cultivation remained stable across blocks, as shown in Fig. 4 (**B**). These findings confirmed that new task training can induce the cultivation and persistence of cosine-tuning in the motor cortex of primates.
### Fixed feedback connections contributed to the cultivation of cosine-tuning
Previous goal-driven models have shown potential for exploring neural mechanisms [18, 19]. Following the observation of multiple bio-functional similarities, we believed that valuable predictions regarding the biological motor circuit could also be inferred from the motorSRNN. We have noticed the cultivation of cosine-tuning both _in silico_ and _in vivo_, yet the underlying topological structures responsible for this phenomenon are unclear. To investigate, we conducted several ablation studies on different structures of the motorSRNN to assess their contributions to the emergence of cosine-tuning cultivation. The results, presented in Fig. 5, indicated that the only structures whose ablation consistently led to significant decreases in the cosine-tuning cultivation ratio across datasets B04 and C05 were the feedback connections themselves and the fixed weights of the feedback connections. Conversely, ablations of other structures, including the M1LC, the second module of the motor cortex, the recurrent connections, and the topology-dependent initialized weights, did not consistently exhibit such an effect in both datasets. These findings supported that fixed feedback connections play a crucial role in the cultivation of cosine-tuning in layer MC1 of the motorSRNN after training. Accordingly, long-term stable feedback synapses were suggested to contribute to the training-induced cultivation of cosine-tuning in the motor cortex of the biological brain.
## Discussion
In this work, we proposed the motorSRNN, an SRNN topologically inspired by the neural motor circuit of primates. Our primary objective was to evaluate the feasibility of applying SNNs to iBMIs. Thus, we employed the motorSRNN to decode the cortical spike trains collected from M1 of monkeys, and achieved applicable engineering performance in terms of a good balance between classification accuracy and energy consumption. Next, we investigated how the motorSRNN processes the input spike trains and found that it can capture and cultivate more cosine-tuning, an essential property of the recorded M1 neurons generating the input, and that this cultivation remained stable during training. Additionally, the motorSRNN not only received cosine-tuning from the input cortical spike trains, but also produced more bio-functional similarities at the single-neuron, population, and circuit levels. Furthermore, we observed that new task training can induce the cultivation and persistency of cosine-tuning in the biological motor cortex of monkeys. Finally, since the motorSRNN shared multiple bio-functional similarities, we conducted ablation studies, which suggested that long-term stable feedback synapses may contribute to the training-induced cultivation of cosine-tuning in the biological motor cortex.
### Implications for neural engineering
Recently, iBMI has made significant progress [4, 34, 35]. However, the externally head-mounted pedestal remains a potential source of infection for the subjects. To overcome this problem, a fully implantable iBMI will be necessary [36, 37], which will impose stricter energy consumption requirements. As per the regulations of the Association for the Advancement of Medical Instrumentation, the temperature increase in tissues caused by chronically implanted medical devices must be less than 1\({}^{\circ}\)C [38]. Practical decoding performance is also necessary. Along with the development of neuromorphic chips [39, 40], SNNs may provide a great opportunity for this type of iBMI due to their high compatibility with cortical spike trains, low energy consumption, and potential computing capability. In this study, the proposed motorSRNN was utilized to decode the cortical spike trains obtained from M1 of two macaque monkeys. The energy consumption of the motorSRNN was approximately 1/50 of that of the LSTM. Notably, it improved the accuracy by more than 25% compared to the best-reported SNN algorithm (fSNN) in iBMI classification tasks so far. As a type of SNN, the motorSRNN is highly compatible with neuromorphic chips. Furthermore, the decoded cortical spike trains in this study were simultaneously recorded from neurons, unlike
some previous research that selected neurons from different experimental sessions [7, 9], which paves the way for real-time applications. In summary, this study lays a preliminary foundation for constructing a real-time, fully implantable iBMI equipped with neuromorphic chips.
### Implications for neuroscience
After training the motorSRNN, a topologically brain-inspired SNN, we discovered several bio-functional similarities at the single-neuron, population, and circuit levels. Firstly, the neurons in the motor cortex, corresponding to layer MC1 of the motorSRNN, possess the essential characteristic of cosine-tuning [24]. We found that more cosine-tuned neurons emerged in layer MC1 of the motorSRNN, beyond those induced by the input. Secondly, a symmetrical distribution of the PDs of SCtNs in layer MC1 of the motorSRNN was observed, in line with reports in the literature [29-31]. Thirdly, previous studies have established that the biological CM connections, the counterpart of the M1LC, primarily contribute to the dexterous control of hands and fingers [33]. Similarly, after ablating the M1LC, the validating accuracy of the motorSRNN slightly decreased, and importantly, the standard deviations significantly increased (Fig. 3 (**E**-**F**)). As far as the authors are aware, these bio-functional similarities were observed for the first time in a goal-driven SNN model.
Due to these discoveries of bio-functional similarities, we hypothesize that the motorSRNN can provide valuable insights into the biological motor circuit. Based on the cultivation and persistency of cosine-tuning observed in layer MC1 of motorSRNN, we predict that training can also induce such cultivation and persistency in the biological motor cortex. Additionally, ablation studies of motorSRNN demonstrate that fixed feedback connections are crucial for cultivation of cosine-tuning, indicating that long-term stable feedback synapses contribute to the training-induced cultivation of cosine-tuning in the biological motor cortex.
The design of biological mind control experiments was inspired by the observation of cosine-tuning cultivation in layer MC1 of motorSRNN. We conducted mind control experiments on two monkeys with implanted microelectrode arrays, and confirmed the prediction that training can indeed promote cultivation and persistency of cosine-tuning in the biological motor cortex. Previous studies have reported that neurons in the motor cortex jointly associate on a low-dimensional manifold to facilitate movement preparation, execution, and adaptation [41]. The emergence of more cosine-tuned neurons after training suggests stronger neuronal covariability, consistent with prior findings [42].
Fig. 3 (**E**, **F**) illustrated a slower learning process after the ablation of the M1LC in the motorSRNN, which implies that the biological CM connections may also contribute to fast motor learning in addition to the previously reported dexterous control. Besides, while there is still debate over whether rate coding or temporal coding is the neural scheme being followed [43], most neuroscience research presumes rate coding [44-46]. It is worth noting that in this study, the motorSRNN utilized spike trains, rather than the traditional firing rates used by the SVM, as shown in the example sample in Fig. 6 (**A**). The motorSRNN's significantly better performance compared to the SVM implies that the temporal coordination, not just the firing rate, may also encode information about the direction of the monkeys' hand movements [47, 48]. Besides, we observed a decrease in the firing rates of neurons in the motorSRNN after learning, while the previously proposed fSNN did not demonstrate such a characteristic, as illustrated in Fig. S4. It should be noted that a decrease in neuronal firing rates after learning is not a universal feature in neuroscience. In some cases, fewer neurons may participate in the task, leading to an increase in their firing rates [49]. For efficient coding [50], however, a decrease in neuronal firing rates is expected. Our findings in the motorSRNN were consistent with the latter strategy.
Task-tuning is bound to appear in decoding; however, it is not necessarily cosine-like, as the absence of cosine-tuning in the fSNN suggests that this type of tuning is not inevitable. In this study, the analysis of cosine-tuning was based on a statistical \(F\)-test for fitting. Thus, only neurons
passing the test would be further considered. In a classical work, Perge et al. also fitted neuronal firing rates with four directions using a cosine function, based on statistical _F_-tests [51]. But importantly, we emphasize that the purpose of our cosine-tuning investigation is to establish the biological consistency of the proposed motorSRNN and the biological motor circuit, rather than to exclude other possible mathematical fitting functions.
In this study, we focused on training the motorSRNN for classification instead of regression, as an initial step to validate the feasibility of applying SNNs to iBMI. However, it should not be inferred that the role of the motor cortex is just classification. The motor cortex's primary function is to generate motion, a complex process that cannot be simply reduced to a classifier or regressor. Nevertheless, classification, in our view, represents a reasonable simplification, which can still facilitate our understanding of neural systems to some degree. Notably, the continuity of muscles was also considered in the motorSRNN, so that the output neurons were non-spiking integrators. The classification targets could be regarded as the directions of "muscles" in the output layer. Even under such simplification, some bio-functional similarities still emerged in the proposed motorSRNN. Similarly, we cannot assert that the visual cortex solely performs image classification; yet convolutional neural networks, which are classifiers, have been widely accepted as neural models of the visual system by the community [10, 11, 12, 13].
### Comparison between motorSRNN and other goal-driven DL models
**Bio-functional similarities and predictability.** Bio-functional similarities have been reported in numerous goal-driven DL models at different levels, as shown by the following examples. At the single-neuron level, Khaled et al. found that ANNs can spontaneously develop numerosity-selective neurons similar to those found in the biological brain [12]. At the population level, Katharina et al. showed that ANNs can exhibit functional separations for objects and faces, just like the biological brain [13]. At the circuit level, Shahab et al. demonstrated that self-supervision can promote the emergence of both the biological ventral and dorsal pathways in a deep neural network [52]. Moreover, David et al. found that artificial neurons in a recurrent neural network (RNN) can closely resemble the dynamics of motor cortex neurons in monkeys at both the single-neuron and population levels [16]. In our study, the motorSRNN showed bio-functional similarities at the single-neuron, population, and circuit levels. These bio-functional similarities have motivated researchers to seek potential neuroscientific inspirations from ANNs. For example, Pouya et al. developed an ANN-driven image synthesis method for predicting neuronal tuning, taking ANNs as models of the primate brain's ventral visual stream [18]. Bao et al. found a hierarchical map of object space in the IT cortex, and further validated it in biology [19]. In this work, inspired by the observation in layer MC1 of the motorSRNN, we likewise validated that training can indeed promote the cultivation and persistency of cosine-tuning in the biological motor cortex of primates. Additionally, ablation studies predicted that long-term stable feedback synapses may contribute to the training-induced cultivation of cosine-tuning.
**Layer representations.** DL research has demonstrated that various layers of a neural network represent distinct types of information, with deeper layers encoding more abstract information [53]. Fig. S5 illustrates examples of the representations learned by different layers in the motorSRNN. Notably, the number of significantly cosine-tuned neurons decreases as the layers become deeper, suggesting that the neurons in deeper layers may represent more complex features. Specifically, the neurons in deep layers may not be restricted to encoding only the direction of the monkeys' hand movements.
**M1LC vs. skip connections.** The M1LC of the motorSRNN appears similar to the skip connections employed in DL and it is intriguing that the biological motor circuit of primates also utilizes such a skip-like connection. Additionally, both skip connections and M1LC have been
found to contribute to fast convergence. However, there are notable differences to consider. Firstly, skip connections are predominantly implemented in CNNs, whose underlying mechanism is totally different from that of SNNs. Moreover, outstanding DL models incorporating skip connections, such as ResNet [54] and DenseNet [55], mainly focus on computer vision (CV) applications that differ markedly from our task. Furthermore, the success of ResNet, DenseNet, and other variants in DL relies on multiple skip connections in hidden layers. In contrast, the motorSRNN has only one long-looped connection from the input to the output. Given these discrepancies, it is difficult to conclude that skip connections inevitably lead to fast convergence of SNNs in an iBMI application. Notably, when the M1LC was ablated and the motorSRNN was applied to the SHD dataset, an audio-based spike-train classification dataset [56], the results did not exhibit a significant decrease in convergence speed (Fig. S6), indicating that the M1LC is not simply a borrowing of skip connections from DL and does not necessarily lead to fast convergence. Nevertheless, we believe that exploring the functional similarities between skip connections in DL and the CM connections in the primate neural motor circuit would be insightful.
### Goal-driven SNN models for neural computation
In neuroscience, cosine-tuning has long been recognized as a key feature of the motor cortex, yet how it arises in this brain region has remained a fundamental open question for decades. To deconstruct this question, we propose that the following three processes be considered: origination, cultivation, and preservation. In this study, the SCtNs present in the motorSRNN before training can be attributed to cosine-tuning induced directly by the input biological spike trains. The number of SCtNs significantly increased after training, indicating that the proposed motorSRNN can generate additional cosine-tuning not directly induced by the input. This finding provides a possible avenue for exploring the second process: cultivation. Ablation studies indicated that fixed feedback connections, i.e., long-term stable feedback synapses, may contribute to this cultivation. Moreover, in Fig. 2 (**C**, **F**), we observed that the average number of SCtNs in layer MC1 of the motorSRNN remained stable throughout the training epochs. This observation implies that the motorSRNN may also be a suitable candidate for exploring the third process: preservation. Investigating these two processes may shed light on potential solutions to more interesting and challenging questions about the process of origination, such as how cosine-tuning emerges within a neural network with high-level input (without pre-existing cosine-tuning).
More importantly, the significance of this work goes beyond the proposition of motorSRNN, as it introduces a novel framework for building models of neural computation. Incorporating biological signals as input enhances the authenticity of models and results in more biologically plausible representations. Consequently, such models are likely to exhibit more bio-functional similarities and offer valuable insights into the mechanisms of neural computation. The proposed motorSRNN demonstrated bio-functional similarities at the single-neuron, population, and circuit levels, and generated two predictions: first, the presence of training-induced cultivation and persistence of cosine-tuning in the biological motor cortex, which we also validated in monkeys; and second, the contribution of long-term stable feedback synapses to the cultivation of cosine-tuning. However, the utilization of this framework should be approached with caution. The bio-functional similarities directly arising from biological signal input must be accounted for first, before identifying emergent bio-functional similarities. In this study, the occurrence of cosine-tuning before training was not surprising, but the emergence of more cosine-tuned neurons after training was a novel finding.
### Limitations and future work
In this pilot study, our aim was to assess the feasibility of employing SNNs for iBMIs. To this end, we utilized the motorSRNN to classify four reaching directions of monkeys' hands based on recorded biological spike trains. Although the motorSRNN demonstrated satisfactory
engineering performance, the relatively simplistic nature of the task limits the development of more complex applications. In future work, to establish a stronger foundation for constructing a more applicable SNN-based iBMI, a more sophisticated model capable of classifying more categories or predicting hand trajectories is required. Additionally, before proceeding with the development of a fully implanted iBMI with neuromorphic chips, we must first conduct on-chip testing of the motorSRNN. Ultimately, our findings suggest that biological topology may aid in the design of an effective SNN architecture for decoding, and further research is needed to validate this idea in various application scenarios.
Besides, despite the emergent bio-functional similarities, the motorSRNN currently mimics the primate neural motor circuit only in a simplistic manner. Numerous biological facts could be incorporated to improve the model, such as the projections from other brain areas [57], more details of different structures [58], as well as delays and refractory periods [59], among others. By taking these facts into account, we anticipate that more precise bio-functional similarities will be observed, leading to more accurate predictions regarding the underlying mechanisms of neural computation. In particular, as an SNN, the improved motorSRNN is expected to yield bio-functional similarities and predictions regarding spiking at a fine timescale. Moreover, apart from cultivation and persistence, a more comprehensive model may potentially elucidate the origin of cosine-tuning. Such a goal-driven SNN, trained using machine learning, may provide a novel yet effective framework for comprehending the biological brain.
## Materials and Methods
### Overall Experimental Design
To explore the feasibility of applying SNNs to iBMI, we first trained 2 monkeys with implanted microelectrode arrays to perform a joystick control task, thereby obtaining the training and validating samples for the SNN. Next, we proposed a novel SNN architecture called the motorSRNN and applied it to classify the aforementioned samples. Afterwards, we used cosine-tuning and statistical analyses to identify the bio-functional similarities of the motorSRNN. Finally, 2 monkeys with implanted microelectrode arrays were trained to conduct the mind control experiments, in order to biologically validate one prediction of neural computation made by the motorSRNN.
### Animal Experiments
Three macaque monkeys, B04, C05, and B11, were each implanted with a 96-channel microelectrode array (Blackrock Microsystems Inc., USA) in the right primary motor cortex (M1). B04 and C05 were trained to conduct the joystick control experiment, while C05 and B11 were trained to execute the mind control experiments. All experimental procedures were approved by the Experimental Animal Welfare Ethics Committee of Zhejiang University.
\(\bullet\) **Experiment Paradigm: joystick control**
The joystick control experiment paradigm is shown in Fig. 6 (**A**). In order to complete a 4-radial center-out reaching task, B04 and C05 were trained to operate a joystick with their left hand while seated in a primate chair, facing a computer screen displaying operational instructions. On the screen were two circular cursors: an orange cursor representing the target, and a blue cursor representing the position of the joystick. At the start of a trial, the orange cursor randomly appeared in one of four positions (_top_, _bottom_, _left_, or _right_), and the monkeys were required to manipulate the joystick such that the blue cursor overlapped with the orange target for 300 ms within each 2-second trial. Upon successful completion, the monkeys were rewarded, and the cycle repeated with the reappearance of the randomly positioned orange cursor.
During the experiments, the monkeys' neural signals and the joystick position signals were concurrently recorded by the Cerebus multi-channel data acquisition system (Blackrock Microsystems Inc., USA). The neural signals were recorded at 30 kHz, while the joystick position signals were recorded at 1 kHz. For each monkey, the data utilized in this study were drawn from a single experiment.
\(\bullet\) **Experiment Paradigm: mind control**
In this study, we conducted the mind control experiments to validate the predictability of the motorSRNN. By designing a different decoder for every experimental session, we could guide the biological neural network in the motor cortex of primates to learn multiple new tasks, thereby obtaining more generalizable results. In the mind control experiment shown in Fig. 6 (**B**), C05 and B11, seated in a primate chair, were trained to voluntarily modulate their neural activity to complete a 4-radial center-out task. A computer screen in front of the monkeys displayed the operational cues. The collected neural signals were mapped to the movement of the blue cursor on the screen using a Kalman filter, while the orange cursor indicated the target. Other designs were identical to the joystick control experiment. Similarly, during the experiments, the Cerebus multi-channel data acquisition system (Blackrock Microsystems Inc., USA) was utilized to collect the neural signals of the monkeys at a sampling rate of 30 kHz.
C05 and B11 were both required to undertake the experiment for 4 sessions. Due to neural variability [60, 61], the interval between consecutive sessions was set to around one month, to ensure a large compositional difference of the recorded neuronal groups across sessions for more generalizable results. As shown in Fig. 6 (**C**), every session consisted of a look block, several assisted mind control (MC) blocks, and pure MC blocks, where the numbers of assisted and pure MC blocks were determined by the willingness of the monkeys. The neural signals collected during the look block were used to establish a new Kalman filter decoder. During the assisted MC blocks, an auxiliary vector was utilized to aid in guiding the blue cursor towards the target, and the strength of the auxiliary vector gradually decreased. Finally, the auxiliary vector was eliminated during the pure MC blocks, so the monkeys were required to rely solely on modulating their neural activity to control the blue cursor. In every assisted and pure MC block, the decoder was updated based on the neural activities in the preceding block. The look and pure MC blocks were identified as the before-training and after-training phases, respectively, while the assisted and pure MC blocks as a whole were regarded as the during-training phase.
### Signal Processing
\(\bullet\) **Experiment Paradigm: joystick control**
To extract the spike trains, the raw neural signals underwent filtering via a 250 Hz 4th-order Butterworth high-pass filter, followed by spike detection using a threshold of 4.5 times the root mean square (RMS) value of the baseline signal. The detected spikes were then sorted into distinct neurons using Offline Sorter (Plexon Inc., USA), based on criteria including spike waveform similarity and the distribution of principal components. 157 and 153 spike trains were isolated and served as the decoded neural signals for B04 and C05, respectively. For the sake of computational efficiency, the neural signals were then downsampled to 10 kHz. The joystick position signals were smoothed using a moving average with a 20-sample window.
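A minimal sketch of this filtering-and-thresholding step is given below (illustrative only; the subsequent spike sorting in Offline Sorter is not reproduced):

```python
# Sketch of the spike-extraction pipeline described above, assuming the
# raw signal is a 1-D numpy array sampled at 30 kHz. The filter order,
# cutoff, and threshold factor follow the text; everything else is
# illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 30_000  # Hz, raw sampling rate

def detect_spikes(raw, thresh_factor=4.5):
    # 250 Hz 4th-order Butterworth high-pass filter
    sos = butter(4, 250, btype="highpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw)
    # Threshold at a multiple of the baseline RMS
    threshold = thresh_factor * np.sqrt(np.mean(filtered**2))
    # Indices where the signal crosses the threshold (rising edge)
    crossings = np.flatnonzero(
        (filtered[1:] > threshold) & (filtered[:-1] <= threshold)
    )
    return crossings  # spike times in samples; unit sorting not shown
```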
Considering the transmission delay of neural signals to the muscles, delays of 120 ms and 140 ms were applied for data segmentation in B04 and C05, respectively. A time window of 50 ms was then used to segment the neural signals, followed by the calculation of the correspondingly delayed joystick velocities. Samples whose speed exceeded the threshold were categorized based on the direction of their velocities: _Top_ (45° to 135°), _Bottom_ (-135° to -45°), _Right_ (135° to 225°), and _Left_ (-45° to 45°); these categories served as the classification labels. After segmenting the data, we obtained
1981 and 3755 samples for B04 and C05, respectively. To form the validation set, 1/10 of the samples were randomly selected, while the remaining samples formed the training set. Subsequently, two separate datasets were created for B04 and C05. Every sample contains 2 types of features, namely the spike trains and the firing rates, as shown by an example in Fig. 6 (**A**).
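The direction labelling can be sketched as follows, using the angular ranges given above (`label_direction` is a hypothetical helper, not from the original code):

```python
# Sketch of the direction labelling described above, assuming the input
# is a joystick velocity direction in degrees.
import numpy as np

def label_direction(angle_deg: float) -> str:
    a = (angle_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if 45.0 <= a < 135.0:
        return "Top"
    if -135.0 <= a < -45.0:
        return "Bottom"
    if -45.0 <= a < 45.0:
        return "Left"   # per the ranges given in the text
    return "Right"       # remaining range, 135 to 225 degrees

labels = [label_direction(a) for a in (90.0, -90.0, 0.0, 180.0)]
print(labels)  # ['Top', 'Bottom', 'Left', 'Right']
```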
\(\bullet\) **Experiment Paradigm: mind control**
Similarly, a 250 Hz 4th-order Butterworth high-pass filter was first applied to the neural signals. Next, to reduce the influence of recording instability during the experiments, we set the threshold for spike detection to 8.0 times the root mean square (RMS) of the baseline signal. The spikes were sorted online based on waveform shape. The signals of the first sorted neuron in every valid channel were mapped to the movement of the blue cursor via a Kalman filter (Fig. 6 (**B**)). In each trial, we decoded the cursor velocity every 30 ms, then calculated and updated the cursor position on the screen.
### motorSRNN: an SNN architecture inspired by the motor circuit in primates
Aiming to reconstruct the pathway between the brain and external devices, the motorSRNN is an SNN architecture inspired by the motor circuit in primates (Fig. 1 (**B**)). The motor cortex, the subcortical region (layer SC), the spine (layer Sp), and the motoneurons to muscles (layer Ms) are represented in the motorSRNN as different layers (Fig. 1 (**A**)). Considering modularity [62] and hierarchy [63], we constructed three layers for the motor cortex: the input layer, layer MC1, and layer MC2. The input layer carries the cortical spike trains collected from M1 of the monkeys. Layer MC1 is regarded as a module directly receiving the cortical spike trains, while layer MC2 is a module of lower hierarchy that is not directly connected to the input layer. Additionally, the MC1 and MC2 modules are sparsely connected, with 20% of the connections randomly selected and set to zero. In the primate motor cortex, some neurons convey signals to the subcortical region, while some others are directly connected to the motoneurons to muscles via the CM connections (bottom left sub-figure in Fig. 1 (**B**)), which evolved specifically in primates [32]. In the motorSRNN, layer SC receives inputs from layers MC1 and MC2, and then feeds its outputs back to them, as the thalamus does in the biological neural system (top left sub-figure in Fig. 1 (**B**)). Moreover, layer Ms directly receives signals from the input layer via a long-loop connection (M1LC), whose biological counterpart is the CM connection. Layer Sp also receives outputs from the basal ganglia in layer SC and transmits its outputs to layer Ms. SRNNs are embedded in layers MC1, MC2, and SC [64, 65], while a feedforward SNN is used in layer Sp. Finally, we considered distance-dependent initialized connections for a more biologically plausible topology [66], where the strength of a connection weakens as its length increases. Layer Ms outputs the cumulative probabilities of the 4 labels over time, and the maximum cumulative probability at the final moment indicates the predicted label.
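For orientation, the connectivity described above can be summarized as a connection list; this is a sketch only, and the layer sizes and weight initializations (Tab. S1 and the distance-dependent scheme) are omitted:

```python
# A minimal sketch of the motorSRNN wiring implied by the description
# above; each tuple is (source, target, connection type).
CONNECTIONS = [
    ("input", "MC1", "feedforward"),        # cortical spike trains into MC1
    ("MC1",   "MC1", "recurrent"),
    ("MC1",   "MC2", "sparse (20% zeroed)"),
    ("MC2",   "MC1", "sparse (20% zeroed)"),
    ("MC2",   "MC2", "recurrent"),
    ("MC1",   "SC",  "feedforward"),
    ("MC2",   "SC",  "feedforward"),
    ("SC",    "SC",  "recurrent"),
    ("SC",    "MC1", "fixed feedback"),     # thalamus-like feedback
    ("SC",    "MC2", "fixed feedback"),
    ("SC",    "Sp",  "feedforward"),        # basal-ganglia output to the spine
    ("Sp",    "Ms",  "feedforward"),
    ("input", "Ms",  "M1LC"),               # long-loop, CM-like connection
]
```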
\(\bullet\) **Neuron models**
Firing-threshold-adaptive leaky integrate-and-fire (ALIF) neurons constitute the SNNs in layers MC1, MC2, SC, and Sp, owing to their better biological plausibility [67] and classification performance [68] compared to traditional leaky integrate-and-fire (LIF) neurons. The dynamics of an ALIF neuron are mathematically represented by Equations (1) through (7). An ALIF neuron receives presynaptic spikes, which leads to an increase in its membrane potential. Upon reaching the firing threshold, the ALIF neuron emits a spike. Subsequently, its membrane potential is reset to the resting potential (set to 0 in this study) and its firing threshold is increased. When no spike is received, both the membrane potential and the firing threshold of the ALIF neuron gradually decay.
Equation (1) describes the dynamical process of an ALIF neuron's membrane potential.
\[u_{t}=\alpha u_{t-1}+\left(1-\alpha\right)R_{m}I_{t}-s_{t-1}\theta\,, \tag{1}\]
where \(u_{t}\) and \(u_{t-1}\) denote the membrane potential at times \(t\) and \(t\)-1; \(\alpha\) represents the decay coefficient of the membrane potential, as shown in Equation (2); \(R_{m}\) indicates the constant membrane resistance of the ALIF neuron; \(I_{t}\) is the total current received from the presynaptic ALIF neurons, as shown in Equation (3); \(s_{t-1}\) denotes the spiking state of the ALIF neuron at time \(t\)-1, as shown in Equation (4); and \(\theta\) indicates the dynamical firing threshold, as shown in Equation (5).
\[\alpha=\exp\left(-\frac{\mathrm{d}t}{\tau_{m}}\right), \tag{2}\]
where \(\mathrm{d}t\) denotes the unit time, \(\tau_{m}\) represents the time constant of the membrane potential.
\[I_{t}=\sum_{i}s_{t}^{i}\,, \tag{3}\]
where \(i\) indicates the index of the presynaptic ALIF neurons connected to the current ALIF neuron, and \(s_{t}^{i}\) denotes the spiking state of the \(i\)-th connected ALIF neuron at time \(t\). Equation (3) shows that the input current of an ALIF neuron at the current moment is the sum of the spikes from the presynaptic ALIF neurons at the current moment.
\[s_{t}=\begin{cases}1,&u_{t}\geq\theta\\ 0,&u_{t}<\theta\end{cases}, \tag{4}\]
where \(s_{t}=1\) indicates an ALIF neuron firing at time \(t\), while \(s_{t}=0\) means no spike fired at time \(t\). The dynamical threshold \(\theta\) is given by:
\[\theta=b_{0}+\beta\eta_{t}\,, \tag{5}\]
where \(b_{0}\) indicates the minimal firing threshold, and the product of \(\beta\) and \(\eta_{t}\) denotes the change of the dynamical threshold \(\theta\) at time \(t\), where \(\beta\) is a constant, and \(\eta_{t}\) is calculated as shown in Equation (6).
\[\eta_{t}=\rho\eta_{t-1}+\left(1-\rho\right)s_{t-1}\,, \tag{6}\]
where \(\rho\) represents the decay coefficient of the dynamical firing threshold, as shown in Equation (7).
\[\rho=\exp\left(-\frac{\mathrm{d}t}{\tau_{adp}}\right), \tag{7}\]

where \(\tau_{adp}\) denotes the time constant of the dynamical firing threshold. The output layer Ms consists of non-spiking leaky integrator (LI) neurons, whose membrane potential follows Equation (1) without the spiking and reset mechanisms, as shown in Equation (8):

\[u_{t}=\alpha u_{t-1}+\left(1-\alpha\right)R_{m}I_{t}\,. \tag{8}\]

The trainable parameters of the motorSRNN include the weights of the connections
themselves, as well as the membrane time constant \(\tau_{m}\) and the adaptive threshold time constant \(\tau_{adp}\). No feedback connections were trained after initialization.
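For concreteness, one ALIF update step per Equations (1)-(7) can be sketched as follows (parameter values are illustrative, not those of Tab. S1):

```python
# Minimal sketch of one ALIF update step following Equations (1)-(7),
# for a single neuron; the constants below are illustrative.
import numpy as np

dt, tau_m, tau_adp = 1.0, 20.0, 200.0   # ms, assumed values
alpha = np.exp(-dt / tau_m)             # Eq. (2)
rho = np.exp(-dt / tau_adp)             # Eq. (7)
R_m, b0, beta = 1.0, 0.1, 1.8           # assumed constants

def alif_step(u, eta, s_prev, pre_spikes):
    """One time step. u: membrane potential, eta: threshold adaptation,
    s_prev: the neuron's own spike at t-1, pre_spikes: binary array of
    presynaptic spikes at t."""
    I = pre_spikes.sum()                        # Eq. (3), unweighted sum
    eta = rho * eta + (1.0 - rho) * s_prev      # Eq. (6)
    theta = b0 + beta * eta                     # Eq. (5)
    u = alpha * u + (1.0 - alpha) * R_m * I - s_prev * theta  # Eq. (1)
    s = 1.0 if u >= theta else 0.0              # Eq. (4)
    return u, eta, s
```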
The key parameter settings are detailed in Tab. S1. The size of the input layer is equal to the number of biological neurons recorded, and the output layer contains 4 LI neurons corresponding to the 4 labels of movement directions. At each time step, the membrane potentials of the output LI neurons are transformed into probabilities using a SoftMax function; these probabilities are accumulated over all time steps to produce the final output of the network, and the label with the largest accumulated probability is the network's prediction. The models were implemented using PyTorch 1.8.1 and trained on an NVIDIA GeForce RTX 3090.
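The accumulated-SoftMax readout described above can be sketched as follows (assuming `potentials` is the (T, 4) record of the LI output neurons' membrane potentials over T time steps):

```python
# Sketch of the accumulated-SoftMax readout described above.
import torch

def predict_label(potentials: torch.Tensor) -> int:
    probs = torch.softmax(potentials, dim=-1)  # per-time-step probabilities
    accumulated = probs.sum(dim=0)             # accumulate over time
    return int(accumulated.argmax())           # largest accumulated probability

potentials = torch.randn(100, 4)               # dummy run of 100 time steps
print(predict_label(potentials))
```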
\(\bullet\) **Baseline models for comparison**
The feedforward SNN (fSNN) employed in [7] was chosen as the baseline model for the SNN-decoded iBMI classification task, due to its high similarity to our task and the best-reported performance among related work. A long short-term memory (LSTM) network is considered since it is a classical algorithm in deep learning. As a commonly used decoder in neuroscience research, the support vector machine (SVM) is also compared [69-71]. Note that the features utilized by the motorSRNN, fSNN, and LSTM are spike trains, while those used by the SVM are traditional firing rates.
In the fSNN, every connection has 31 synapses with trainable weights and fixed delays of {1, 3,..., 59, 61} ms. LIF neurons build up the network. Their dynamics can also be described by Equations (1) to (4), while the firing threshold \(\theta^{IF}\) remains constant. The fixed firing threshold and the initialized weights are adjusted so that most input spikes lead the LIF neurons to fire before learning, to better prevent the output neurons from falling silent. The network consists of the input, hidden, and output layers. Similarly, the size of the input layer equals the number of collected biological neurons, and the output layer contains 4 LIF neurons corresponding to the 4 labels. There are 5 LIF neurons in the hidden layer, so that the number of parameters is close to that of the motorSRNN. A winner-take-all strategy is used for output classification encoding, such that the output LIF neuron coding for the respective class learns to fire early (at 51 ms), while all other output neurons fire later (at 61 ms). The output neuron firing the earliest denotes the output label. Other parameter settings are given in Tab. S2. The time constants are set according to Fang's study [7], while the learning rates are optimized for the fSNN. The mean squared error (MSE) is chosen as the loss function for its better performance than the cross-entropy in this scenario.
The long short-term memory (LSTM) network employed in this paper consists of 4 layers: the input layer, two hidden layers, and the output layer. Similarly, the input and output layers are of sizes equal to the number of collected biological neurons and the number of labels, respectively. Each hidden layer consists of 26 units, giving a number of parameters close to that of the motorSRNN. \(L_{2}\) regularization is used to prevent overfitting. The batch size is 32, the number of epochs is 100, and the learning rate is 5e-3. Cross-entropy is selected as the loss function. The coefficients of the \(L_{2}\)-norm regularization are 5e-4 and 1e-3 for the datasets of B04 and C05, respectively.
A voting mechanism is implemented for the support vector machine (SVM) to accomplish the 4-label classification. That is, 6 one-vs-one classifiers with linear kernels are constructed.
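A minimal sketch with scikit-learn, using dummy data (scikit-learn's SVC resolves multi-class problems by exactly this kind of one-vs-one voting):

```python
# Sketch of the one-vs-one SVM decoder described above; X holds
# per-sample firing-rate vectors and y the four direction labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((80, 157))          # 80 dummy samples, 157 firing-rate features
y = rng.integers(0, 4, size=80)    # labels: Top/Bottom/Right/Left as 0-3

# SVC handles the 4-class problem internally by one-vs-one voting,
# i.e., it builds the 6 pairwise linear classifiers.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:5]))
```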
\(\bullet\) **Estimation of energy consumption**
The theoretical energy consumption of recurrent layers composed of different units can be found in Tab. S3. For a layer connected to multiple other layers, the blue term should be counted the corresponding number of times; for a feedforward layer, the red term should be ignored.
### Cosine-tuning Analysis
\(\bullet\)**Cosine-tuning neurons in the motor cortex**
In 1982, the firing rates of many pyramidal neurons in the M1 of monkeys were found to vary systematically with the direction of monkeys' hand movements [24]. A neuron exhibited the highest firing rate when a monkey moved its hand to the neuron's preferred direction (PD), and the firing rate decreased as movements deviated from the PD. This relation between the firing rate (\(F\)) of a specific neuron and the direction of movements of the monkeys' hands (\(\delta\)) could be fitted by a cosine function as shown in Equation (9):
\[F=f_{0}+g\cos\big{(}\delta-\delta_{0}\big{)}, \tag{9}\]
where \(f_{0}\) denotes the baseline firing rate, \(g\) indicates the modulation depth, and \(\delta_{0}\) represents the PD.
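As a concrete illustration, the fit and \(R^{2}\) of Equation (9) can be computed as follows (a sketch with dummy values; the actual analysis used the recorded firing rates and the \(F\)-test described under _Statistical Analysis_):

```python
# Sketch of fitting Equation (9) to one neuron's average firing rates.
import numpy as np
from scipy.optimize import curve_fit

def cosine_tuning(delta, f0, g, delta0):
    return f0 + g * np.cos(delta - delta0)     # Eq. (9)

directions = np.array([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])  # rad
rates = np.array([12.0, 25.0, 14.0, 5.0])      # Hz, dummy values

params, _ = curve_fit(cosine_tuning, directions, rates,
                      p0=[rates.mean(), 1.0, 0.0])
f0, g, delta0 = params
ss_res = np.sum((rates - cosine_tuning(directions, *params)) ** 2)
ss_tot = np.sum((rates - rates.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"PD = {delta0 % (2 * np.pi):.2f} rad, R^2 = {r_squared:.3f}")
# The F-test deciding fitting significance is not shown here.
```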
\(\bullet\)**Cultivation ratio**
To explore the influence of ablation of different structures in the motorSRNN on cosine-tuning, we define the cultivation ratio \(R_{a}\) in Equation (10):
\[R_{a}=\frac{1}{N_{r}}\sum_{k}\frac{C_{AT}^{k}-\overline{C_{BT}}}{\overline{C_{BT}}}\,, \tag{10}\]
where \(C_{AT}^{k}\) is the number of SCtNs after training in run \(k\), \(\overline{C_{BT}}\) denotes the average count of SCtNs before training, and \(N_{r}\) and \(k\) are the number and index of runs, respectively. The ratio measures the degree to which training amplifies the number of SCtNs: a larger cultivation ratio indicates that more SCtNs emerge.
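A minimal sketch of Equation (10), with dummy counts:

```python
# Sketch of the cultivation ratio of Equation (10); `c_at` holds the
# SCtN counts after training in each run and `c_bt_mean` the average
# SCtN count before training.
import numpy as np

def cultivation_ratio(c_at, c_bt_mean):
    c_at = np.asarray(c_at, dtype=float)
    return np.mean((c_at - c_bt_mean) / c_bt_mean)  # Eq. (10)

print(cultivation_ratio([14, 16, 12], c_bt_mean=5.0))  # dummy counts
```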
\(\bullet\)**Resultant vector length and the distributional symmetry**
Symmetry has been found to be an important feature of the neuronal PD distributions in the motor cortex of primates [29, 30, 31]. In circular statistics, the distributional symmetry of data points on the unit circle can be deduced by the resultant vector length (_RVL_). Every data point \(r_{j}\) represents an angle \(\gamma_{j}\), as shown in Equation (11):
\[r_{j}=\left(\begin{array}{c}\cos\gamma_{j}\\ \sin\gamma_{j}\end{array}\right), \tag{11}\]
where \(j\) indicates the index of the data point. Next, _RVL_ is the length of the average vector of all the data points, as shown in Equation (12):
\[RVL=\|\,\frac{1}{N_{s}}\sum_{j}r_{j}\,\|\,, \tag{12}\]
where \(N_{s}\) denotes the total number of data points. The closer the _RVL_ is to 0, the more symmetrical the distribution of data points is. On the contrary, if the _RVL_ equals 1, the data points are totally concentrated in one single direction. In this study, every data point on the unit circle represents the PD of a SCtN; that is, a smaller _RVL_ indicates a more symmetrical distribution of the PDs of the SCtNs. Note that the symmetry is defined in polar coordinates, so an absolute symmetry satisfies \(N_{\gamma}=N_{\gamma+\pi}\), where \(N_{\gamma}\) and \(N_{\gamma+\pi}\) represent the numbers of data points with angles \(\gamma\) and \(\gamma+\pi\), respectively. CircStat, a MATLAB toolbox, was employed for the aforementioned calculations [72].
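A minimal numpy equivalent of Equations (11)-(12) (the actual analysis used CircStat):

```python
# Sketch of the resultant vector length (RVL) of Equations (11)-(12),
# where `pds` holds the preferred directions (rad) of the SCtNs.
import numpy as np

def resultant_vector_length(pds):
    pds = np.asarray(pds)
    mean_vector = np.array([np.cos(pds).mean(), np.sin(pds).mean()])
    return np.linalg.norm(mean_vector)

print(resultant_vector_length([0.1, np.pi + 0.1]))  # ~0: symmetric pair
print(resultant_vector_length([0.1, 0.2, 0.15]))    # ~1: concentrated
```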
### Statistical Analysis
\(F\)-test was applied to determine whether the cosine fitting regression is significant. \(p\)\(<\)0.05 indicates a significant dependency between the response and predictor variables. In the
motorSRNN, since there are only four values of the predictor variable (the four movement directions) and three parameters to be fitted in the cosine function, only fittings with very high \(R^{2}\) can pass the statistical test.
The significance of distributional differences was judged by a 2-sample \(t\)-test. The null hypothesis is that the two groups of data to be tested are independent random samples from normal distributions with equal means. \(p\)\(<\)0.05 indicates a significant distributional difference, and a smaller \(p\)-value indicates stronger significance.
## Acknowledgments
We thank Yunying Wu, Zihao Li, and Tuoru Li for their insightful discussion and suggestions.
**Funding:**
STI 2030-Major Projects 2022ZD0208604
the Key R&D Program for Zhejiang 2021C03003, 2022C03029, 2021C03050
the National Natural Science Foundation of China 31371001
**Author contributions:**
Conceptualization: TL, YC, SZ
Methodology: TL, YC, YZ, YN, GW
Investigation: TL, YC, SZ
Visualization: TL
Supervision: YC, SZ, WC
Writing--original draft: TL
Writing--review & editing: TL, YC, YZ, YN, GW, ZW, PL, SZ, WC
**Competing interests:** Authors declare that they have no competing interests.
**Data and materials availability:** All data are available in the main text or the supplementary materials. The data and code are accessible upon reasonable request from the corresponding author, which will be made public after publication.
**Fig. 2 Cosine-tuning was captured and cultivated in layer MC1 of motorSRNN, and existed persistently.** (**A**, **D**) For dataset B04 (**A**) and C05 (**D**), the plots show the \(R^{2}\) values for cosine fitting of the average firing rates of neurons in layer MC1 of the motorSRNN, both before and after training. Gray dots represent all the neurons pooled from 10 runs, while red dots denote neurons that did not show significant cosine tuning (n.s., \(p\)\(\geq\)0.05 in cosine fitting) before training but became significantly cosine-tuned (\(p\)\(<\)0.05 in cosine fitting) after training. The marginal density plots show the distribution of data on the two axes. (**B**, **E**) The average number of significantly cosine-tuned neurons (SCtNs) in layer MC1 of the motorSRNN and the hidden layer (HL) of the fSNN before training (BT, in gray) and after training (AT, in red) over 10 runs are presented for dataset B04 (**B**) and C05 (**E**). Error bars denote the standard deviation. Under 2-sample _t_-test, n.s.: not significant (\(p\)\(\geq\)0.05), *: \(p\)\(<\)0.05, **: \(p\)\(<\)0.01, ***: \(p\)\(<\)0.001. (**C**, **F**) For dataset B04 (**C**) and C05 (**F**), the plots of the average number of SCtNs over training are represented by the red curves, while the gray line indicates the number of SCtNs before training. The symbol '*' has the same meaning as in (**B**, **E**). BT: before training, AT: after training, DT: during training.
**Fig. 4 Changes in the average number of significantly cosine-tuned neurons (SCtNs) in the mind control experiment for monkeys C05 and B11.** (**A**) The average number of SCtNs in M1 of monkeys C05 and B11 before training (BT, in dark gray) and after training (AT, in red) across sessions are presented, with error bars indicating the standard deviation. The average success rates of pure mind control for the two monkeys are indicated by the white text. Under 2-sample _t_-test, n.s.: not significant (\(p\)\(\geq\)0.05), *: \(p\)\(<\)0.05, **: \(p\)\(<\)0.01, ***: \(p\)\(<\)0.001. (**B**) The plots represent the average number of SCtNs during mind control experiments for monkeys C05 and B11. The red curves indicate the average number of SCtNs during mind control experiments, while the dark gray line shows the number of SCtNs before training. The light gray curves represent the average number of SCtNs during individual mind control sessions. The symbol '*' has the same meaning as in (**A**). BT: before training, DT: during training.
**Fig. 5 The influence of ablations of different structures in the motorSRNN on the cultivation ratio of cosine-tuning for dataset B04 (Top) and C05 (Bottom).** Yellow bars indicate the average cosine-tuning cultivation ratio of layer MC1 in the original motorSRNN, while bars in other colors represent those of layer MC1 in the corresponding ablated motorSRNNs. Error bars denote the standard deviations. BC: feedback connections, M1LC: the long-loop connection from M1, M1Mod12: the second module of the motor cortex, RC: recurrent connections, TopoW: topology-dependent initialized weights, FixedBC: weights of fixed feedback connections. Under 2-sample _t_-test, n.s.: not significant (\(p\)\(\geq\)0.05), *: \(p\)\(<\)0.05, **: \(p\)\(<\)0.01, ***: \(p\)\(<\)0.001.
**Fig. 6 Biological experiment paradigms.** (**A**) Experimental paradigm for joystick control and an example sample for classification. Two monkeys (B04 & C05) were trained to complete a 4-radial center-out task using a joystick. The blue and orange cursors represent the positions of the joystick and the target, respectively. After finishing the task, the monkeys were rewarded. The neural signal and joystick position signal were simultaneously collected. After preprocessing and segmentation, an example sample of B04 labeled as _Right_ is shown. Samples contain 2 types of features: spike trains and firing rates. (**B**) Experimental paradigm for mind control. Two monkeys (B11 & C05) were trained to voluntarily modulate their neural activity to complete a 4-radial center-out task. The collected neural signals were mapped to the movement of the blue cursor on the screen via a Kalman filter. The orange cursor indicates the target. Upon completing the task, the monkeys were rewarded. (**C**) Timeline of the mind control experiment. The monkeys completed the mind control experiments for \(n\) sessions, with consecutive intervals of around 1 month. Every session consisted of a look block, several assisted mind control (MC) blocks, and several pure MC blocks. A new Kalman filter was established from the neural signals collected in the look block. The look and pure MC blocks were regarded as before training and after training, respectively, while the assisted and pure MC blocks were considered during training. |
2305.18446 | Trompt: Towards a Better Deep Neural Network for Tabular Data | Tabular data is arguably one of the most commonly used data structures in
various practical domains, including finance, healthcare and e-commerce. The
inherent heterogeneity allows tabular data to store rich information. However,
based on a recently published tabular benchmark, we can see deep neural
networks still fall behind tree-based models on tabular datasets. In this
paper, we propose Trompt--which stands for Tabular Prompt--a novel architecture
inspired by prompt learning of language models. The essence of prompt learning
is to adjust a large pre-trained model through a set of prompts outside the
model without directly modifying the model. Based on this idea, Trompt
separates the learning strategy of tabular data into two parts. The first part,
analogous to pre-trained models, focus on learning the intrinsic information of
a table. The second part, analogous to prompts, focus on learning the
variations among samples. Trompt is evaluated with the benchmark mentioned
above. The experimental results demonstrate that Trompt outperforms
state-of-the-art deep neural networks and is comparable to tree-based models. | Kuan-Yu Chen, Ping-Han Chiang, Hsin-Rung Chou, Ting-Wei Chen, Tien-Hao Chang | 2023-05-29T03:51:18Z | http://arxiv.org/abs/2305.18446v2 | # Trompt: Towards a Better Deep Neural Network for Tabular Data
###### Abstract
Tabular data is arguably one of the most commonly used data structures in various practical domains, including finance, healthcare and e-commerce. However, based on a recently published tabular benchmark, we can see deep neural networks still fall behind tree-based models on tabular datasets (Grinsztajn et al., 2022). In this paper, we propose _Trompt_--which stands for **T**abular P**rompt**--a novel architecture inspired by prompt learning of language models. The essence of prompt learning is to adjust a large pre-trained model through a set of prompts outside the model without directly modifying the model. Based on this idea, Trompt separates the learning strategy of tabular data into two parts, for the intrinsic information of a table and the varied information among samples. Trompt is evaluated with the benchmark mentioned above. The experimental results demonstrate that Trompt outperforms state-of-the-art deep neural networks and is comparable to tree-based models (Figure 1).
Machine Learning, tabular data
## 1 Introduction
Tabular data plays a vital role in many real-world applications, such as financial statements for banks to evaluate the credibility of a company, diagnostic reports for doctors to identify the aetiology of a patient, and customer records for e-commerce platforms to discover the potential interests of a customer. In general, tabular data can be used to record activities consisting of heterogeneous features and has many practical uses.
On the other hand, deep learning has achieved great success in various domains, including computer vision, natural language processing (NLP) and robotics (He et al., 2016; Redmon et al., 2016; Gu et al., 2017; Devlin et al., 2018). Besides extraordinary performance, there are numerous benefits of the end-to-end optimization nature of deep learning, including (i) online learning with streaming data (Sahoo et al., 2017), (ii) multi-modal integration that incorporates different types of input, e.g., image and text (Ramachandran and Taylor, 2017) and (iii) representation learning that realizes semi-supervised learning and generative modeling (Van Engelen and Hoos, 2020; Goodfellow et al., 2020).
Consequently, researchers have been dedicated to applying deep learning to tabular data, either through (i) transformers (Huang et al., 2020; Somepalli et al., 2021; Gorishniy et al., 2021) or (ii) inductive bias investigation (Katzir et al., 2020; Arik and Pfister, 2021).
Though many of the previous publications claimed that they had achieved the state of the art, further research pointed out that previous works were evaluated on favorable datasets and that tree-based models still show superior performance in the realm of tabular data (Borisov et al., 2021; Gorishniy et al., 2021; Shwartz-Ziv and Armon, 2022). For a fair comparison
Figure 1: Benchmark results.
between different algorithms, a standard benchmark for tabular data was proposed by (Grinsztajn et al., 2022). The benchmark, denoted as _Grinsztajn45_ in this work, consists of 45 curated datasets from various domains.
In this paper, we propose a novel prompt-inspired architecture, _Trompt_, which abbreviates **T**abular P**rompt**. Prompt learning has played an important role in the recent development of language models. For example, GPT-3 can handle a wide range of tasks well with appropriate prompt engineering (Radford et al., 2018; Brown et al., 2020). In Trompt, prompts are utilized to derive feature importances that vary across samples. Trompt consists of multiple _Trompt Cells_ and a shared _Trompt Downstream_, as shown in Figure 2. Each Trompt Cell is responsible for feature extraction, while the Trompt Downstream is for prediction.
The performance of Trompt is evaluated on the Grinsztajn45 benchmark and compared with three deep learning models and five tree-based models. Figure 1 illustrates the overall evaluation results on Grinsztajn45. The x-axis is the number of hyperparameter search iterations and the y-axis is the normalized performance. In Figure 1, Trompt is consistently better than the state-of-the-art deep learning models (SAINT and FT-Transformer) and the gap between deep learning models and tree-based models is narrowed.
Our key contributions are summarized as follows:
* The experiments are conducted on a recognized tabular benchmark, Grinsztajn45. Additionally, we add two well-performing tree-based models, LightGBM (Ke et al., 2017) and CatBoost (Prokhorenkova et al., 2018), to the baselines.
* Trompt achieves state-of-the-art performance among deep learning models and narrows the performance gap between deep learning models and tree-based models.
* Thorough empirical studies and ablation tests were conducted to verify the design of Trompt. The results further shed light on future research directions for the architecture design of tabular neural networks.
## 2 Related Work
In this section, we first discuss the prompt learning of language models. Secondly, we discuss two research branches of tabular neural networks, transformer and inductive bias investigation. Lastly, we discuss the differences between Trompt and the related works and highlight the uniqueness of our work.
### Prompt Learning
The purpose of prompt learning is to transform the input and output of downstream tasks to the original task used to build a pre-trained model. Unlike fine-tuning, which changes the task and usually involves updating model weights, a pre-trained model with prompts can dedicate itself to one task. With prompt learning, good results can be achieved with a small amount of data or even in a zero-shot setting (Radford et al., 2018; Brown et al., 2020). The emergence of prompt learning substantially improves the application versatility of pre-trained models that are too large for common users to fine-tune.
To prompt a language model, one can insert a task-specific prompt before a sentence and hint the model to adjust its responses for different tasks (Brown et al., 2020). Prompts can either be discrete or soft. The former are composed of discrete tokens from the vocabulary of natural languages (Radford et al., 2018; Brown et al., 2020), while the latter are learned representations (Li and Liang, 2021; Lester et al., 2021).
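As a minimal illustration of the soft variant, the sketch below prepends a few trainable prompt embeddings to the token embeddings of a frozen backbone. The class and variable names are ours and do not come from any specific library.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal sketch of a soft prompt: trainable vectors prepended to token
    embeddings before a frozen pre-trained encoder. Illustrative names only."""

    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model)
        batch_size = token_embeddings.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Only self.prompt is trained; the backbone's weights stay frozen.
        return torch.cat([prompt, token_embeddings], dim=1)
```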
### Tabular Neural Network
**Transformer.** Self-attention has revolutionized NLP since 2017 (Vaswani et al., 2017), and soon been adopted by other domains, such as computer vision, reinforcement learning and speech recognition (Dosovitskiy et al., 2020; Chen et al., 2021; Zhang et al., 2020). The intention of transformer blocks is to capture the relationships among features, which can be applied on tabular data as well.
TabTransformer (Huang et al., 2020) is the first transformer-based tabular neural network. However, TabTransformer only fed categorical features to transformer blocks and ignored the potential relationships between categorical and numerical features. FT-Transformer (Gorishniy et al., 2021) fixed this issue by feeding both categorical and numerical features to transformer blocks. SAINT (Somepalli et al., 2021) further improved on FT-Transformer by applying attention not only across the feature dimension but also across the sample dimension.
**Inductive Bias Investigation.** Deep neural networks perform well on tasks with a clear inductive bias. For example, Convolutional Neural Networks (CNNs) work well on images: the CNN kernel is designed to capture local patterns, since neighboring pixels usually relate to each other (LeCun et al., 1995). Recurrent Neural Networks (RNNs) are widely used in language understanding because the causal relationships among words are well encapsulated through recurrent units (Rumelhart et al., 1986). However, unlike other popular tasks, the inductive bias of tabular data has not been well discovered.
Given that tree-based models have been the solid state of the art for tabular data (Borisov et al., 2021; Gorishniy et al., 2021; Shwartz-Ziv and Armon, 2022), Net-DNF (Katzir et al., 2020) and TabNet (Arik and Pfister, 2021) hypothesized that the inductive bias for tabular data might be the learning
strategy of tree-based models. The strategy is to find the optimal root-to-leaf decision paths by selecting a portion of the features and deriving the optimal split from the selected features in non-leaf nodes. To emulate this learning strategy, TabNet utilized sequential attention and sparsity regularization. On the other hand, Net-DNF theoretically proved that a decision tree is equivalent to some disjunctive normal form (DNF) and proposed disjunctive neural normal form to emulate a DNF formula.
### The Uniqueness of Trompt
We argue that the column importances of tabular data are not invariant across samples and can be grouped into multiple modalities. Since prompt learning is designed to adapt a model to multiple tasks, the concept is used in Trompt to handle these multiple modalities. To this end, Trompt separates the learning strategy of tabular data into two parts. The first part, analogous to pre-trained models, focuses on learning the intrinsic column information of a table. The second part, analogous to prompts, focuses on diversifying the feature importances of different samples.
To the best of our understanding, Trompt is the first prompt-inspired tabular neural network. Compared to transformer-based models, Trompt learns separate column importances instead of focusing on the interactions among columns. Compared to TabNet and Net-DNF, Trompt handles multiple modalities by emulating prompt learning instead of the branch splits of decision trees.
## 3 Trompt
In this section, we elaborate on the architecture design of Trompt. As Figure 2 shows, Trompt consists of multiple Trompt Cells and a shared Trompt Downstream. Each Trompt Cell is responsible for feature extraction and providing diverse representations, while the Trompt Downstream is for prediction. The details of Trompt Cell and Trompt Downstream are discussed in Section 3.1 and Section 3.2, respectively. In Section 3.3, we further discuss the prompt learning of Trompt.
### Trompt Cell
Figure 3 illustrates the architecture of a Trompt Cell, which can be divided into three parts. The first part derives feature importances (\(\mathbf{M}_{\text{importance}}\)) based on column embeddings (\(\mathbf{E}_{\text{column}}\)), the previous cell's output (\(\mathbf{O}_{\text{prev}}\)) and prompt embeddings (\(\mathbf{E}_{\text{prompt}}\)). The second part transforms the input into feature embeddings (\(\mathbf{E}_{\text{feature}}\)) with two paths for categorical and numerical columns, respectively. The third part expands \(\mathbf{E}_{\text{feature}}\) for the later multiplication.
The details of the first part are illustrated in Section 3.1.1 and the details of the second and third parts are illustrated in Section 3.1.2. Lastly, the generation of the output of a Trompt Cell is illustrated in Section 3.1.3.
#### 3.1.1 Derive Feature Importances
Let \(\mathbf{E}_{\text{column}}\in\mathbb{R}^{C\times d}\) be column embeddings and \(\mathbf{E}_{\text{prompt}}\in\mathbb{R}^{P\times d}\) be prompt embeddings. \(C\) is the number of columns of a table defined by the dataset, while \(P\) and \(d\) are hyperparameters for the number of prompts and the hidden dimension, respectively. Both \(\mathbf{E}_{\text{column}}\) and \(\mathbf{E}_{\text{prompt}}\) are input independent and trainable. Let \(\mathbf{O}_{\text{prev}}\in\mathbb{R}^{B\times P\times d}\) be the previous cell's output and \(B\) be the batch size.
\(\mathbf{O}_{\text{prev}}\) is fused with the prompt embeddings as Equations (1) and (2). Since \(\mathbf{E}_{\text{prompt}}\) is input independent and lacks a batch dimension, \(\mathbf{E}_{\text{prompt}}\) is expanded to \(\mathbf{SE}_{\text{prompt}}\) through the stack operation as Equation (1). Later, we concatenate \(\mathbf{SE}_{\text{prompt}}\) and \(\mathbf{O}_{\text{prev}}\) and then reduce the dimension of the concatenated tensor back to \(\mathbb{R}^{B\times P\times d}\) for the final addition as Equation (2).
Figure 2: Overall architecture of the proposed Trompt.
For the same reason as \(\mathbf{E}_{\text{prompt}}\), \(\mathbf{E}_{\text{column}}\) is expanded to \(\mathbf{SE}_{\text{column}}\) as Equation (3). Subsequently, feature importances are derived through Equation (4), where \(\otimes\) is the batch matrix multiplication, \(\mathsf{T}\) is the batch transpose, and the softmax is applied to the column axis.
\[\mathbf{SE}_{\text{prompt}}=\mathtt{stack}(\mathbf{E}_{\text{prompt}})\in\mathbb{R}^{B\times P\times d} \tag{1}\]
\[\hat{\mathbf{SE}}_{\text{prompt}}=\mathtt{dense}(\mathtt{concat}(\mathbf{SE}_{\text{prompt}},\mathbf{O}_{\text{prev}}))+\mathbf{SE}_{\text{prompt}}+\mathbf{O}_{\text{prev}}\in\mathbb{R}^{B\times P\times d} \tag{2}\]
\[\mathbf{SE}_{\text{column}}=\mathtt{stack}(\mathbf{E}_{\text{column}})\in\mathbb{R}^{B\times C\times d} \tag{3}\]
\[\mathbf{M}_{\text{importance}}=\mathtt{softmax}(\hat{\mathbf{SE}}_{\text{prompt}}\otimes\mathbf{SE}_{\text{column}}^{\intercal})\in\mathbb{R}^{B\times P\times C} \tag{4}\]
The output of the first part is \(\mathbf{M}_{\text{importance}}\in\mathbb{R}^{B\times P\times C}\), which accommodates the feature importances yielded by \(P\) prompts. Notice that the column embeddings are not connected to the input and the prompt embeddings are fused with the previous cell's output. In Section 3.3, we further discuss these designs and their connections to the prompt learning of NLP.
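A minimal PyTorch sketch of this first part, written directly from Equations (1)-(4) and the shapes above, could look as follows. This is our illustrative reading of the equations, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImportanceGetter(nn.Module):
    """Sketch of part one of a Trompt Cell: Equations (1)-(4)."""

    def __init__(self, n_columns: int, n_prompts: int, d: int):
        super().__init__()
        self.e_prompt = nn.Parameter(torch.randn(n_prompts, d))  # E_prompt
        self.e_column = nn.Parameter(torch.randn(n_columns, d))  # E_column
        self.dense = nn.Linear(2 * d, d)                         # for Eq. (2)

    def forward(self, o_prev: torch.Tensor) -> torch.Tensor:
        # o_prev: (B, P, d), the previous Trompt Cell's output
        B = o_prev.shape[0]
        se_prompt = self.e_prompt.unsqueeze(0).expand(B, -1, -1)        # Eq. (1)
        fused = self.dense(torch.cat([se_prompt, o_prev], dim=-1))
        se_prompt_hat = fused + se_prompt + o_prev                      # Eq. (2)
        se_column = self.e_column.unsqueeze(0).expand(B, -1, -1)        # Eq. (3)
        scores = torch.matmul(se_prompt_hat, se_column.transpose(1, 2))
        return F.softmax(scores, dim=-1)                                # Eq. (4), (B, P, C)
```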
#### 3.1.2 Construct and Expand Feature Embeddings
In Trompt, categorical features are embedded through an embedding layer and numerical features are embedded through a dense layer, as in previous works (Somepalli et al., 2021; Gorishniy et al., 2021). The embedding construction procedure is illustrated in part two of Figure 3, where \(\mathbf{E}_{\text{feature}}\in\mathbb{R}^{B\times C\times d}\) is the feature embeddings of the batch.
The shapes of \(\mathbf{M}_{\text{importance}}\) and \(\mathbf{E}_{\text{feature}}\) are \(\mathbb{R}^{B\times P\times C}\) and \(\mathbb{R}^{B\times C\times d}\), respectively. Since \(\mathbf{E}_{\text{feature}}\) lacks the prompt dimension, Trompt expands \(\mathbf{E}_{\text{feature}}\) into \(\mathbf{\hat{E}}_{\text{feature}}\in\mathbb{R}^{B\times P\times C\times d}\) to accommodate the \(P\) prompts by a dense layer in part three of Figure 3.
#### 3.1.3 Generate Output
The output of a Trompt Cell is the column-wise sum of the element-wise multiplication of \(\mathbf{\hat{E}}_{\text{feature}}\) and \(\mathbf{M}_{\text{importance}}\) as Equation (5), where \(\odot\) is element-wise multiplication. Notice that, during the element-wise multiplication, the shape of \(\mathbf{M}_{\text{importance}}\) is considered \(\mathbb{R}^{B\times P\times C\times 1}\). In addition, since column is the third axis, the shape is reduced from \(\mathbb{R}^{B\times P\times C\times d}\) to \(\mathbb{R}^{B\times P\times d}\) after the column-wise summation.
\[\mathbf{O}=\sum_{i=1}^{C}(\hat{\mathbf{E}}_{\text{feature}}\odot\mathbf{M}_{\text{importance}})_{:,:,i,:}\in\mathbb{R}^{B\times P\times d} \tag{5}\]
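Parts two and three, together with Equation (5), can then be sketched as below. Note that implementing the expansion as one dense layer mapping \(d\) to \(P\times d\) is our assumption; the text only states that a dense layer expands \(\mathbf{E}_{\text{feature}}\) to accommodate the \(P\) prompts.

```python
import torch
import torch.nn as nn

class TromptCellOutput(nn.Module):
    """Sketch of parts two/three of a Trompt Cell and of Equation (5)."""

    def __init__(self, n_prompts: int, d: int):
        super().__init__()
        self.n_prompts = n_prompts
        self.expand = nn.Linear(d, n_prompts * d)  # assumed form of the expansion

    def forward(self, e_feature: torch.Tensor, m_importance: torch.Tensor) -> torch.Tensor:
        # e_feature: (B, C, d), m_importance: (B, P, C)
        B, C, d = e_feature.shape
        e_hat = self.expand(e_feature)                                   # (B, C, P*d)
        e_hat = e_hat.view(B, C, self.n_prompts, d).permute(0, 2, 1, 3)  # (B, P, C, d)
        # Eq. (5): weight each column's embedding by its importance, sum over columns.
        return (e_hat * m_importance.unsqueeze(-1)).sum(dim=2)           # (B, P, d)
```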
### Trompt Downstream
A Trompt Downstream makes a prediction based on a Trompt Cell's output, which contains representations corresponding to \(P\) prompt embeddings. To aggregate these representations, the weight for each prompt is first derived through a dense layer and a softmax activation function as Equation (6). Afterwards, the weighted sum is calculated as Equation (7).
The prediction is subsequently made through two dense layers as Equation (8), where \(T\) is the target dimension.
Figure 3: Architecture of a Trompt Cell.
For classification tasks, \(T\) is the number of target classes. For regression tasks, \(T\) is set to 1. As Figure 2 shows, a sample gets a prediction through a Trompt Cell and thus multiple predictions through all cells. During training, the loss of each prediction is separately calculated and the loss is summed up to update model weights. During inference, on the other hand, predictions through all cells are simply averaged as the final prediction as Equation (9), where \(L\) is the number of Trompt Cells.
\[\mathbf{W}_{\text{prompt}}=\mathtt{softmax}(\mathtt{dense}(\mathbf{O}))\in\mathbb{R}^{B\times P} \tag{6}\]
\[\hat{\mathbf{O}}=\sum_{i=1}^{P}(\mathbf{W}_{\text{prompt}}\odot\mathbf{O})_{:,i,:}\in\mathbb{R}^{B\times d} \tag{7}\]
\[\mathbf{P}=\mathtt{dense}(\mathtt{relu}(\mathtt{dense}(\hat{\mathbf{O}})))\in\mathbb{R}^{B\times T} \tag{8}\]
\[loss=\sum_{i=1}^{L}\mathtt{loss}_{\mathtt{fn}}(\mathbf{P}_{i},y),\qquad pred=\frac{1}{L}\sum_{i=1}^{L}\mathbf{P}_{i} \tag{9}\]
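The Trompt Downstream of Equations (6)-(9) can be sketched in the same spirit; the training and inference conventions of Equation (9) are shown as comments. Again, this is an illustrative reading of the equations rather than the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TromptDownstream(nn.Module):
    """Sketch of Equations (6)-(8); shared across all Trompt Cells."""

    def __init__(self, d: int, n_targets: int):
        super().__init__()
        self.to_weight = nn.Linear(d, 1)
        self.head = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_targets))

    def forward(self, o: torch.Tensor) -> torch.Tensor:
        # o: (B, P, d), the output of one Trompt Cell
        w = F.softmax(self.to_weight(o).squeeze(-1), dim=-1)  # Eq. (6), (B, P)
        o_hat = (w.unsqueeze(-1) * o).sum(dim=1)              # Eq. (7), (B, d)
        return self.head(o_hat)                               # Eq. (8), (B, T)

# Eq. (9): per-cell losses are summed for training, predictions averaged at inference:
#   preds = [downstream(o_l) for o_l in cell_outputs]
#   loss = sum(loss_fn(p, y) for p in preds)
#   y_hat = torch.stack(preds).mean(dim=0)
```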
### Prompt Learning of Trompt
Trompt's architecture is specifically designed for tabular data, taking into account the unique characteristics of this type of data and the impressive performance of tree-based models. At first glance, the design may appear unconventional and detached from the characteristics of tabular data. In this section, we explain the rationale behind Trompt's network design and how we adapted prompt learning to a tabular neural network.
Tabular data is structured, with each column representing a specific dataset property that remains constant across individual samples. The success of tree-based models relies on assigning feature importances to individual samples. This concept has been explored in models such as TabNet (Arik and Pfister, 2021) and Net-DNF (Katzir et al., 2020). However, tree-based algorithms do not explicitly assign feature importances to individual samples. Instead, importances vary implicitly along the path from the root to a leaf node. Only the columns involved in this path are considered important features for the samples reaching the corresponding leaf node, representing sample-specific feature importances.
Given the fundamental characteristic of tabular data and the learning strategy of tree-based models, Trompt aims to combine the intrinsic properties of columns with sample-specific feature importances using a prompt learning-inspired architecture from NLP (Radford et al., 2018; Brown et al., 2020). Trompt employs column embeddings to represent the intrinsic properties of each column and prompt embeddings to prompt column embeddings, generating feature importances for given prompts. Both column embeddings and prompt embeddings are invariant across samples. However, before prompting column embeddings with prompt embeddings, the prompt embeddings are fused with the output of the previous Trompt Cell as shown in Equation (2), enabling input-related representations to flow through and derive sample-specific feature importances. The "prompt" mechanism in Trompt is implemented as a matrix multiplication in Equation (4).
A conceptual analogy of Trompt's prompt learning approach to NLP is presented in Table 1. It's important to note that the implementation details of prompt learning differ substantially between tabular data and NLP tasks due to the fundamental differences between the two domains. Therefore, appropriate adjustments must be made to bridge these two domains.
## 4 Experiments
In this section, the experimental results and analyses are presented. First, we elaborate on the settings of experiments and the configurations of Trompt in Section 4.1. Second, the performance of Trompt on Grinsztajn45 is reported in
\begin{table}
\begin{tabular}{c|c|c}
\hline \hline
**Problem Identification** & **Implemented by** & **Inspired by** \\
\hline
Sample-invariant Intrinsic Properties & \(\mathbf{E}_{\text{column}}\) & Fixed Large Language Model \\
\hline
Sample-specific Feature Importances & \(\mathbf{M}_{\text{importance}}\) & Task-specific Predictions \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Analogy of the prompt learning of Trompt to that of NLP.
Figure 4: Architecture of a Trompt Downstream.
Section 4.2. Third, ablation studies regarding the hyperparameters and the architecture of Trompt are presented in Section 4.3. Lastly, the interpretability of Trompt is investigated using synthetic and real-world datasets in Section 4.4.
### Setup
The performance and ablation study of Trompt primarily focus on the Grinsztajn45 benchmark (Grinsztajn et al., 2022)1. This benchmark comprises datasets from various domains and follows a unified methodology for evaluating different models, providing a fair and comprehensive assessment. Furthermore, we evaluate the performance of Trompt on datasets selected by FT-Transformer and SAINT to compare it with state-of-the-art tabular neural networks.
Footnote 1: [https://github.com/LeoGrin/tabular-benchmark](https://github.com/LeoGrin/tabular-benchmark)
For interpretability analysis, we follow the experimental settings of TabNet (Arik and Pfister, 2021). This involves using two synthetic datasets (Syn2 and Syn4) and a real-world dataset (mushroom) to visualize attention masks.
The settings of Grinsztajn45 are presented in Section 4.1.1 and the implementation details of Trompt are presented in Section 4.1.2. Furthermore, the settings of datasets chosen by FT-Transformer and SAINT are provided in Appendix B.2 and Appendix B.3, respectively.
#### 4.1.1 Settings of Grinsztajn45
To fairly evaluate the performance, we follow the configurations of Grinsztajn45, including the train/test data split, data preprocessing and evaluation metrics. Grinsztajn45 comprises two kinds of tasks: classification tasks and regression tasks. Please see Appendix A.1 and Appendix A.2 for the dataset selection criteria and dataset normalization process of Grinsztajn45. The tasks are further grouped according to (i) the size of the datasets (medium-sized and large-sized) and (ii) the inclusion of categorical features (numerical only and heterogeneous).
In addition, we make the following adjustments: (i) models with incomplete experimental results in (Grinsztajn et al., 2022) are omitted, (ii) two well-performed tree-based models are added for comparison, and (iii) Trompt used a hyperparameter search space smaller than its opponents. The details of the adjustments are described in Appendix A.3 and Appendix A.4.
#### 4.1.2 Implementation Details
Trompt is implemented using PyTorch. The default hyperparameters are shown in Table 2. The size of the embeddings and the hidden dimension of the dense layers are both configured as \(d\). Note that only the sizes of the column and prompt embeddings must be equal by architectural design; the hidden dimension of the dense layers is set to \(d\) to reduce the number of hyperparameters and save computing resources. The number of prompts and the number of Trompt Cells are set to \(P\) and \(L\), respectively. Please refer to Appendix F for the hyperparameter search spaces of all baselines and Trompt.
### Evaluation Results
The results of classification tasks are discussed in Section 4.2.1 and the results of regression tasks are discussed in Section 4.2.2. The evaluation metrics are accuracy and r2-score for classification and regression tasks, respectively. In this section, we report an overall result and leave results of individual datasets in Appendix B.1. In addition, the evaluation results on datasets chosen by FT-Transformer and SAINT are provided in Appendix B.2 and Appendix B.3, respectively.
#### 4.2.1 Classification
On the medium-sized classification tasks, Figure 5 shows that Trompt outperforms the DNN models. The curve of Trompt is consistently above those of the deep neural networks (SAINT, FT-Transformer and ResNet) on tasks with and without categorical features. Additionally, Trompt narrows the gap between deep neural networks and tree-based models, especially on the tasks with heterogeneous features. In Figure 5(b), Trompt appears to be a member of the leading cluster together with the four tree-based models. GradientBoostingTree starts slow but catches up with the leading cluster by the end of the search. The other deep neural networks form a second cluster with a clear gap to the leading one.
On the large-sized classification tasks, tree-based models retain the leading positions, but the gap to deep neural networks is less pronounced. This echoes the observation that deep neural networks require more samples for training (LeCun et al., 2015). Figure 6(a) shows that Trompt outperforms all models on the task with numerical features and Figure 6(b) shows that Trompt achieves a performance comparable to FT-Transformer on tasks with heterogeneous features.
With the small hyperparameter search space, the curve of Trompt is relatively flat. The flat curve also suggests that Trompt performs well with its default hyperparameters. Its
\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
**Hyperparameter** & **Symbol** & **Value** \\
\hline
Feature / Prompt / Column Embeddings, Hidden Dimension & \(d\) & 128 \\
\hline
Prompts & \(P\) & 128 \\
\hline
Layers & \(L\) & 6 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Default hyperparameters of Trompt.
performance after an exhaustive search is worth exploring in future work.
#### 4.2.2 Regression
On the medium-sized regression tasks, Figure 7 shows that Trompt outperforms the deep neural networks, as the curves of Trompt are consistently higher than those of SAINT, FT-Transformer and ResNet on tasks with and without categorical features. The gap between deep neural networks and tree-based models is less obvious in Figure 7(a) than in Figure 7(b). On the tasks with numerical features only, Trompt achieves a performance comparable to random forest. On the tasks with heterogeneous features, Trompt narrows the gap but remains below all the tree-based models.
On the large-sized regression tasks with numerical features only, Figure 8(a) shows that Trompt is slightly worse than SAINT and FT-Transformer at the end of the search. On the large-sized regression tasks with heterogeneous features, Figure 8(b) shows that Trompt outperforms the deep neural networks by a large margin.
In general, deep learning models are not good at handling categorical features. Trompt alleviates this weakness, as shown in all tasks with heterogeneous features in Figures 5 to 8. Trompt achieves superior performance over state-of-the-art deep neural networks except on the large-sized regression tasks with numerical features only.
### Ablation Study
In this subsection, we discuss the ablation study results of Trompt regarding hyperparameters and architecture design. Please refer to Appendix C for the settings of the ablation study. In the main article, we report two major ablations on (i) the number of prompts and (ii) the necessity of expanding feature embeddings by a dense layer. Other ablations can be found in Appendix D.
**Ablations on the number of prompts.** Prompt embeddings (\(\mathbf{E}_{\text{prompt}}\)) play a vital role in deriving the feature importances. Here we discuss the effectiveness of adjusting the number of prompts.
As shown in Table 3, setting the number of prompts to one results in the worst performance. However, halving or doubling the default number (128) does not have much effect on the performance. The results demonstrate that Trompt is not sensitive to the number of prompts, as long as the number of prompts is large enough to accommodate the modalities of the dataset.
**Ablations on expanding feature embeddings by a dense layer.** Part three of Figure 3 uses a dense layer to expand feature embeddings to accommodate \(P\) prompts. Here we discuss the necessity of the dense layer.
As shown in Table 4, adding a dense layer leads to better results and is one of the key architectural designs of Trompt. By design, adding the dense layer enables
Figure 5: Benchmark on **medium-sized classification** datasets.
Figure 8: Benchmark on **large-sized regression** datasets.
Figure 6: Benchmark on **large-sized classification** datasets.
Figure 7: Benchmark on **medium-sized regression** datasets.
Trompt to generate different feature embeddings for each prompt. Without the dense layer, Trompt is degraded to a simplified situation where each prompt uses the same feature embeddings. The results of Table 3 and Table 4 suggest that the variation of feature importances, which comes from both the prompt embedding and the expansion dense layer, is the key to the excellent performance of Trompt.
### Interpretability
Besides their outstanding performance, tree-based models are well-known for their interpretability. Here we explore whether Trompt can also provide concise feature importances that highlight salient features. To investigate this, we conduct experiments on both synthetic and real-world datasets, following the experimental design of TabNet (Arik and Pfister, 2021). To derive the feature importances of Trompt for each sample, \(\mathbf{M}_{\text{importance}}\in\mathbb{R}^{B\times P\times C}\) is reduced to \(\hat{\mathbf{M}}_{\text{importance}}\in\mathbb{R}^{B\times C}\) as Equation (10), where the weight applied to \(\mathbf{M}_{\text{importance}}\) is the \(\mathbf{W}_{\text{prompt}}\) of Equation (6).
Notice that all Trompt Cells derive separate feature importances. We report the averaged results over all cells here and leave the results of each cell to Appendix E.1.
\[\hat{\mathbf{M}}_{\text{importance}}=\sum_{i=1}^{P}(\mathbf{W}_{\text{prompt}}\odot\mathbf{M}_{\text{importance}})_{:,i,:}\in\mathbb{R}^{B\times C} \tag{10}\]
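Equation (10) amounts to a weighted average of the per-prompt importance maps, which can be sketched in a few lines:

```python
import torch

def reduce_importance(m_importance: torch.Tensor, w_prompt: torch.Tensor) -> torch.Tensor:
    """Sketch of Eq. (10): collapse per-prompt importances (B, P, C) into
    per-sample, per-column importances (B, C) using the prompt weights (B, P)."""
    return (w_prompt.unsqueeze(-1) * m_importance).sum(dim=1)
```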
**Synthetic datasets.** The Syn2 and Syn4 datasets are used to study the feature importances learned by each model (Chen et al., 2018). A model is trained on an oversampled training set (10k to 100k) using default hyperparameters and evaluated on 20 randomly picked testing samples. The configuration is identical to that in TabNet (Arik and Pfister, 2021).
Figure 9 and Figure 10 compare the important features of the datasets and those learned by Trompt. In the Syn2 dataset, features 2-5 are important (Figure 9(a)) and Trompt correctly focuses on them (Figure 9(b)). In the Syn4 dataset, either features 0-1 or features 2-5 could be important, depending on the value of feature 10 (Figure 10(a)). As Figure 10 shows, Trompt still properly focuses on features 0-5 and discovers the influence of feature 10.
**Real-world datasets.** The mushroom dataset (Dua and Graff, 2017) is used as the real-world dataset for visualization, as in TabNet (Arik and Pfister, 2021). With only the _Odor_ feature, most machine learning models can achieve \(>95\%\) test accuracy (Arik and Pfister, 2021). As a result, a high feature importance is expected on _Odor_.
Table 5 shows the three most important features of Trompt and five tree-based models. As shown, all models place _Odor_ in their top three. The second and third places of Trompt, _gill-size_ and _gill-color_, also appear in the top three of the other models. In fact, _cap-color_ is selected only by XGBoost. If it is excluded, the union of the top important features of all models comes down to four features. The one Trompt missed is _spore-print-color_, which ranks fifth for Trompt. Overall, the important features selected by Trompt are consistent with those selected by tree-based models, and can therefore be used in the various analyses familiar in the field of machine learning.
To further demonstrate that the experimental results were not ad-hoc, we repeat the experiments on additional real-world datasets. Please see Appendix E.2 for the details and
\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
 & **w (default)** & **w/o** \\
\hline
Classification & \(81.81\%\) & \(80.76\%\) \\
\hline
Regression & \(74.15\%\) & \(73.73\%\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The performance with and without expanding the feature embeddings by a dense layer.
Figure 10: Attention mask on Syn4 dataset (synthetic).
\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline \hline
 & **1** & **64** & **128 (default)** & **256** \\
\hline
Classification & \(79.74\%\) & \(81.76\%\) & \(81.81\%\) & \(81.85\%\) \\
\hline
Regression & \(72.07\%\) & \(74.11\%\) & \(74.15\%\) & \(74.14\%\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The performance with different numbers of prompts.
Figure 9: Attention mask on Syn2 dataset (synthetic).
experimental results.
## 5 Discussion
In this section, we further explore the "prompt" mechanism of Trompt. Section 5.1 clarifies the underlying hypothesis of how the prompt learning of Trompt fits tabular data. In addition, as Trompt is partially inspired by the learning strategy of tree-based models, we further discuss the differences between Trompt and tree-based models in Section 5.2.
### Further exploration of the "prompt" mechanism in Trompt
The "prompt" mechanism in Trompt is realized as Equation (4). This equation involves a matrix multiplication of the expanded prompt embeddings (\(\mathbf{SE}_{\text{prompt}}\in\mathbb{R}^{B\times P\times d}\)) and the transpose of the expanded column embeddings (\(\mathbf{SE}_{\text{column}}\in\mathbb{R}^{B\times C\times d}\)). It results in \(\mathbf{M}_{\text{importance}}\in\mathbb{R}^{B\times P\times C}\), which represents prompt-to-column feature importances. The matrix multiplication calculates the cosine-based distance between \(\mathbf{SE}_{\text{prompt}}\) and \(\mathbf{SE}_{\text{column}}\), and favors high similarity between the sample-specific representations and the sample-invariant intrinsic properties.
To make it clearer, \(\mathbf{SE}_{\text{prompt}}\) consists of \(P\) embeddings that are specific to individual samples, except for the first Trompt Cell where \(\mathbf{O}_{\text{prev}}\) is a zero tensor since there is no previous Trompt Cell, as stated in Equations (1) and (2). On the other hand, \(\mathbf{SE}_{\text{column}}\) consists of \(C\) embeddings that represent intrinsic properties specific to a tabular dataset as stated in Equation (3).
Unlike self-attention, which calculates the distance between queries and keys and derives token-to-token similarity measures, Trompt calculates the distance between \(\mathbf{SE}_{\text{prompt}}\) and \(\mathbf{SE}_{\text{column}}\) in Equation (4) to derive sample-to-intrinsic-property similarity measures. The underlying idea of the calculation is to capture the distance between each sample and intrinsic property of a tabular dataset and we hypothesize that incorporating the intrinsic properties into the modeling of a tabular neural network might help making good predictions.
### The differences between Trompt and Tree-based Models
As discussed in Section 3.3, the idea of using prompt learning to derive feature importances is inspired by the learning algorithm of tree-based models and the intrinsic properties of tabular data. As a result, Trompt and tree-based models share a common characteristic in that they enable sample-dependent feature importances. However, there are two main differences between them. First, to incorporate the intrinsic properties of tabular data, Trompt uses column embeddings to share column information across samples, while tree-based models learn column information through their node-splitting nature. Second, Trompt and tree-based models use different techniques to learn feature importances. Trompt derives feature importances explicitly through prompt learning, while tree-based models vary feature importances implicitly along the root-to-leaf path.
## 6 Conclusion
In this study, we introduce Trompt, a novel network architecture for tabular data analysis. Trompt utilizes prompt learning to determine varying feature importances in individual samples. Our evaluation shows that Trompt outperforms state-of-the-art deep neural networks (SAINT and FT-Transformer) and closes the performance gap between deep neural networks and tree-based models.
The emergence of prompt learning in deep learning is promising. While the design of Trompt may not be an intuitive or perfect analog of language model prompting, it demonstrates the potential of leveraging prompts in tabular data analysis. This work introduces a new strategy for deep neural networks to challenge tree-based models, and future research in this direction can explore more prompt-inspired architectures.
\begin{table}
\begin{tabular}{l|c|c|c}
\hline \hline
 & **1st** & **2nd** & **3rd** \\
\hline
RandomForest & odor (\(15.11\%\)) & gill-size (\(12.37\%\)) & gill-color (\(10.42\%\)) \\
\hline
XGBoost & spore-print-color (\(29.43\%\)) & odor (\(22.71\%\)) & cap-color (\(14.07\%\)) \\
\hline
LightGBM & spore-print-color (\(22.08\%\)) & gill-color (\(14.95\%\)) & odor (\(12.96\%\)) \\
\hline
CatBoost & odor (\(72.43\%\)) & spore-print-color (\(10.57\%\)) & gill-size (\(2.71\%\)) \\
\hline
GradientBoostingTree & gill-color (\(31.08\%\)) & spore-print-color (\(19.89\%\)) & odor (\(17.44\%\)) \\
\hline
Trompt (ours) & odor (\(24.93\%\)) & gill-size (\(8.13\%\)) & gill-color (\(5.73\%\)) \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The top-3 importance score ratio on the mushroom dataset. |
2307.11555 | Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of
a Spiking Neural Network | Recently, interest has grown in exploring the hypothesis that neural activity
conveys information through precise spiking motifs. To investigate this
phenomenon, various algorithms have been proposed to detect such motifs in
Single Unit Activity (SUA) recorded from populations of neurons. In this study,
we present a novel detection model based on the inversion of a generative model
of raster plot synthesis. Using this generative model, we derive an optimal
detection procedure that takes the form of logistic regression combined with
temporal convolution. A key advantage of this model is its differentiability,
which allows us to formulate a supervised learning approach using a gradient
descent on the binary cross-entropy loss. To assess the model's ability to
detect spiking motifs in synthetic data, we first perform numerical
evaluations. This analysis highlights the advantages of using spiking motifs
over traditional firing rate based population codes. We then successfully
demonstrate that our learning method can recover synthetically generated
spiking motifs, indicating its potential for further applications. In the
future, we aim to extend this method to real neurobiological data, where the
ground truth is unknown, to explore and detect spiking motifs in a more natural
and biologically relevant context. | Laurent U Perrinet | 2023-07-21T13:04:28Z | http://arxiv.org/abs/2307.11555v2 | # Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network +
###### Abstract
Recently, interest has grown in exploring the hypothesis that neural activity conveys information through precise spiking motifs. To investigate this phenomenon, various algorithms have been proposed to detect such motifs in Single Unit Activity (SUA) recorded from populations of neurons. In this study, we present a novel detection model based on the inversion of a generative model of raster plot synthesis. Using this generative model, we derive an optimal detection procedure that takes the form of logistic regression combined with temporal convolution. A key advantage of this model is its differentiability, which allows us to formulate a supervised learning approach using a gradient descent on the binary cross-entropy loss. To assess the model's ability to detect spiking motifs in synthetic data, we first perform numerical evaluations. This analysis highlights the advantages of using spiking motifs over traditional firing rate based population codes. We then successfully demonstrate that our learning method can recover synthetically generated spiking motifs, indicating its potential for further applications. In the future, we aim to extend this method to real neurobiological data, where the ground truth is unknown, to explore and detect spiking motifs in a more natural and biologically relevant context.
Keywords: Neurobiology, spike trains, population coding, spiking motifs, heterogeneous delays, pattern detection.
## 1 Introduction
### The age of large-scale neurobiological event-based data
Over the past decade, remarkable technological progress across multiple disciplines has expanded the potential for experimental neuroscience research. These cutting-edge methods, such as _in vivo_ two-photon imaging, large population recording arrays, optogenetic circuit control tools, transgenic manipulations, and large volume circuit reconstructions, allow researchers to explore neural networks' function, structure, and dynamics with unparalleled precision.
The complexity revealed by these advanced technologies underscores the significance of neurobiological knowledge in bridging the gap between abstract brain function principles and their biological implementation in neural circuits. Consequently, there is a growing need to scale up analysis methods to handle the vast amounts of data generated by these powerful techniques. By meeting this demand, researchers can gain deeper insights into brain function, further our understanding of neural circuits, and make groundbreaking discoveries in neuroscience.
One approach aimed at addressing this challenge is the Rastermap algorithm [24]. This algorithm rearranges neurons in the raster map based on the similarity of their activity and utilizes a deconvolution strategy with a linear model. However, it's worth noting that the Rastermap algorithm's primary testing has been on calcium imaging data, which may introduce some imprecision in the timing of spiking activity observed in Single Unit Activity (SUA) recordings. Another significant contribution is from the work of Williams _et al._[32]. They propose a point process model that overcomes limitations present in existing models, such as the need for discretized spike times or lack of uncertainty estimates for model predictions and estimated parameters. By incorporating learnable time-warping parameters to model sequences of varying durations, the model effectively captures experimentally observed patterns in neural circuits.
### Decoding neural activity using spike distances
Neuroscience research heavily relies on defining appropriate metrics to compute the distance between spike trains, and one well-known measure for this purpose is the Victor-Purpura distance [30]. This metric effectively addresses inconsistencies observed with firing rate-based estimation of spike trains. Another study refines the Victor-Purpura distance by introducing a time constant as a parameter, allowing for interpolation between a coincidence detector and a rate difference counter [27]. Additionally, researchers have extended these distance measures to non-Euclidean metrics and morphological manipulations, enabling the computation of spike train dissimilarity.
Regarding spike timings, various methods have been developed to estimate the latency of neural responses. Bayesian binning [19] is one such method. Unitary event analysis, based on a statistical model of chance detection, has been widely used to detect significant synchronous patterns above chance in neuron pair recordings [11]. Recent extensions of these methods, such as the 3D-SPADE approach [29], enable the identification of reoccurring patterns in parallel spike train data and assess their statistical significance. Incorporating possible temporal dithering in spike timings has been shown to improve performance, particularly in the presence of patterns with varying durations, such as surrogates used to evaluate precisely timed higher-order spike correlations.
However, some of these methods may suffer from computational complexity, block-based implementations, and narrow specialization for specific tasks. To address these challenges, novel methods like SpikeShip[28] are being developed. The complexity and diversity of these spike train distance and timing
comparison methods demonstrate the growing interest in integrating such measures to understand the neural code. A critical step in testing their potential usefulness is scaling these methods to handle larger amounts of data, enabling broader applications and deeper insights into neural activity patterns and their significance.
### A novel hypothesis: spiking motifs
In recent studies, the importance of spike timing has been emphasized, especially in the barn owl auditory system, where precise spike timing in response to the sound of a mouse allows the brain to determine the prey's position [7]. This discovery aligns with a growing body of literature suggesting that the brain's dynamics often exhibit stereotyped sequences known as _spiking motifs_[9]. The
Figure 1: **Core Mechanism of Spiking Motif Detection:** In this illustrative example, we consider a scenario involving three presynaptic neurons denoted as \(a_{1}\), \(a_{2}\), and \(a_{3}\), which are fully connected to two postsynaptic neurons \(b_{1}\) and \(b_{2}\). The synaptic delays for the connections to \(b_{1}\) are 1, 5, and 9 ms, while for \(b_{2}\) they are 8, 5, and 1 ms, respectively. In the middle panel, when the three presynaptic neurons emit synchronous pulses, the postsynaptic potentials generated in \(b_{1}\) and \(b_{2}\) reach them asynchronously due to the heterogeneous delays. Consequently, the postsynaptic potentials may not be sufficient to reach the membrane threshold (dashed line) in either of the postsynaptic neurons, and no output spike is generated. In the right panel, the pulses emitted by the presynaptic neurons are arranged in such a way that, taking into account the delays, they reach the postsynaptic neuron \(b_{1}\) at the same time (at \(t=10\) ms in this example). As a result, the postsynaptic potentials \(V_{t}\) evoked by the three presynaptic neurons sum up, causing the voltage threshold to be crossed. This leads to the emission of an output spike, signaling the detection of a spiking motif in the presynaptic population (highlighted in red color). This core mechanism illustrates how the interplay between heterogeneous delays in the network allows for precise spike timing, enabling the detection of spiking motifs in neural populations.
concept of spiking motifs is a generalization of the patterns observed in the _polychronization_ model developed by Izhikevich [16]. This theoretical model comprises a random recurrent network of spiking neurons with biologically realistic synaptic delays and evolving weights governed by the Spike-Time Dependent Plasticity (STDP) learning rule.
The interplay between the synaptic delays and STDP leads to the spontaneous organization of neurons into groups called "polychronous groups." Despite neurons in one of these groups firing at different times, the heterogeneous delays enable their spikes to converge synchronously on the postsynaptic neuron. This convergence results in the summation of excitatory postsynaptic potentials, leading to the firing of the postsynaptic neuron (see Figure 1). The polychronization model allows spiking neurons to self-organize into groups and generate reproducible time-locked spiking motifs. The STDP rule increases synaptic weights selectively for neurons involved in these polychronous groups, thereby consolidating the formation of such groups.
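The delayed-coincidence mechanism of Figure 1 can be made concrete with a short simulation using the delays from the caption. This is only a counting sketch; a real neuron would integrate postsynaptic potentials with a leaky membrane rather than count arrivals per time bin.

```python
import numpy as np

# Synaptic delays (ms) from presynaptic neurons a1, a2, a3 (Figure 1).
delays = {"b1": np.array([1, 5, 9]), "b2": np.array([8, 5, 1])}
spike_times = np.array([9, 5, 1])  # one presynaptic spike per neuron (ms)

for name, d in delays.items():
    arrivals = spike_times + d      # when each PSP reaches the soma
    counts = np.bincount(arrivals)
    t_star = int(counts.argmax())
    print(f"{name}: max coincidence {counts[t_star]} at t = {t_star} ms")
# b1 receives all three PSPs at t = 10 ms (motif detected); b2 never more than one.
```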
While the polychronization model provides valuable insights into understanding spiking neural networks and their potential role in generating spatio-temporal spiking motifs, it has a limitation. The model's heterogeneous delays are fixed and cannot evolve over time, which may limit its applicability in certain scenarios. However, the underlying mechanism offers valuable implications for studying neural activity motifs and their significance in the brain. To effectively detect spiking motifs, we propose a novel metric inspired by this model.
### The Heterogeneous Delays Spiking Neural Network (HD-SNN)
In this work, we propose to accurately detect spatio-temporal spiking motifs using a feed-forward, single-layer heterogeneous delays spiking neural network (HD-SNN). The paper is organized as follows. We develop a theoretically defined HD-SNN for which we can tune both the weights and the delays. We first detail the methodology by defining the basic mechanism of spiking neurons that utilize heterogeneous delays. This will allow us to formalize the spiking neuron used to learn the model's parameters in a supervised manner and test its effectiveness. In the results section, we will first evaluate the efficiency of the learning scheme. We will also study the robustness of the spiking motif detection mechanism and in particular its resilience to changing the dimensions of the presynaptic or postsynaptic populations, or the depth in the number of different possible delays. Then, we will explore how the spiking motifs may be learned using supervised learning, and evaluate how the efficiency of the algorithm may depend on the parameters of the HD-SNN architecture. This will allow us to show how such a model can provide an efficient solution which may in the future be applied to neurobiological data. Finally, we will conclude by highlighting the main contributions of this paper, while defining some limitations which will open perspectives for future detection methods.
## 2 Methods
Let us formally define the HD-SNN model. First, we will define raster plots similar to those obtained from Single Unit Activity (SUA) recordings using an event-based and then binarized setting. We will then derive a generative model for raster plots using a HD-SNN, and derive a model for efficient detection of event-based motifs using a similar HD-SNN with "inverted" delays.
### Raster plots: from event-based to binarized
In neurobiological recordings, any generic raster plot consists of a stream of _spikes_. This can be formalized as a list of neural addresses and timestamps tuples \(\epsilon=\{(a_{r},t_{r})\}_{r\in[1,N_{ev}]}\) where \(N_{ev}\in\mathbb{N}\) is the total number of events in the data stream and the rank \(r\) is the index of each event in the list of events. Each event has a time of occurrence \(t_{r}\) (these are typically ordered) and an associated address \(a_{r}\) in the space \(\mathcal{A}\) of the neural population. In a neurobiological recording like that of SUAs, this can be the identified set of neurons.
Events are generated by neurons which are defined, on the one hand, by the equations governing the evolution of the membrane potential dynamics at their soma and, on the other hand, by the integration of the synaptic potentials propagating along their dendritic tree. A classical characterization consists in detailing the synaptic weights of each synaptic contact, the so-called weight matrix. As we saw above, neurons can receive inputs from multiple presynaptic neurons with heterogeneous delays. These delays represent the time it takes for a presynaptic spike to reach the soma of the postsynaptic neuron. In such neurons, input presynaptic spikes \(\epsilon\) will be multiplexed in time by the dendrites defined by this synaptic set (see Figure 1).
Let's formalize such a layer of spiking neurons in the HD-SNN model. Each postsynaptic neuron \(b\in\mathcal{B}\) connects to presynaptic neurons from a set of addresses in \(\mathcal{A}\). In biology, a single cortical neuron generally has several thousand synapses. Each may be defined by its synaptic weight and also by its delay. Note that two neurons may be connected by multiple synapses, and thus with different delays. Scanning all neurons \(b\), we thus define the set of \(N_{s}\in\mathbb{N}\) synapses as \(\mathcal{S}=\{(a_{s},b_{s},w_{s},\delta_{s})\}_{s\in[1,N_{s}]}\), where each synapse is associated to a presynaptic address \(a_{s}\), a postsynaptic address \(b_{s}\), a weight \(w_{s}\), and a delay \(\delta_{s}\).
This defines the full connectivity of the HD-SNN model. The receptive field of a postsynaptic neuron refers to the set of synapses that connect to it. Similarly, the emitting field of a presynaptic neuron refers to the set of synapses it connects to. These fields determine the synaptic inputs and outputs of individual neurons. More formally, the receptive field of a postsynaptic neuron is defined as \(\mathcal{S}^{b}=\{(a_{s},b_{s},w_{s},\delta_{s})\mid b_{s}=b\}_{s\in[1,N_{s}]}\), and the emitting field of a presynaptic neuron as \(\mathcal{S}_{a}=\{(a_{s},b_{s},w_{s},\delta_{s})\mid a_{s}=a\}_{s\in[1,N_{s}]}\). Following this definition, an event stream which evokes neurons in the presynaptic address space is multiplexed by the synapses into a new event stream which is defined by the union of the sets generated by each emitting field from the presynaptic space: \(\cup_{r\in[1,N_{ev}]}\{(b_{s},w_{s},t_{r}+\delta_{s})\}_{s\in\mathcal{S}_{a_{r}}}\). In biology, this new stream of events is naturally
ordered in time as events reach the soma of post-synaptic neurons. Synchronous activation of postsynaptic neurons, where multiple spikes converge on the soma simultaneously, will increase the firing probability of those neurons.
From the perspective of simulating such event-based computations on standard CPU- or GPU-based computers, it is useful to transform this event-based representation into a dense representation. Indeed, we may transform any event-based input as the boolean matrix \(A\in\{0,1\}^{N\times T}\), where \(N\) is the number of presynaptic neurons in \(\mathcal{A}\) and \(T\) is the number of time bins (see Figure 2a). In this simplified model, we will consider that heterogeneous delays are integers limited in range between \(0\) and \(D\) (that is, \(\forall s\in[1,N_{s}]\), \(0\leq\delta_{s}<D\)) such that the synaptic set can be represented by the dense matrix \(K^{b}\in\mathbb{R}^{N\times D}\) giving for each neuron \(b\) the weights as a function of presynaptic address and delay (see Figure 2b). It is equal to zero except on synapses: \(\forall s\in\mathcal{S}^{b},K^{b}(a_{s},\delta_{s})=w_{s}\). Equivalently, one may define for each presynaptic neuron \(a\) the emitting kernel as the transpose kernel \(K_{a}^{T}\in\mathbb{R}^{M\times D}\), where \(M\) is the number of postsynaptic neurons, whose values are zero except on synapses: \(\forall s\in\mathcal{S}_{a},K_{a}^{T}(b_{s},\delta_{s})=w_{s}\).
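To make this dense representation concrete, the following minimal sketch (our illustration, not the authors' reference implementation) binarizes a hypothetical event stream into the boolean matrix \(A\) and fills the dense kernel \(K^{b}\) of a single postsynaptic neuron; the event list, synapse list, and dimensions are illustrative assumptions.

```python
import numpy as np

N, T, D = 10, 100, 31               # presynaptic neurons, time bins, delays
events = [(2, 5), (7, 5), (2, 42)]  # hypothetical (address a_r, time t_r) tuples

A = np.zeros((N, T), dtype=bool)
for a_r, t_r in events:
    A[a_r, t_r] = True              # one entry per spike

# synapses of one postsynaptic neuron b: (a_s, w_s, delta_s)
synapses_b = [(2, 1.5, 3), (7, 0.8, 12)]
K_b = np.zeros((N, D))
for a_s, w_s, d_s in synapses_b:
    K_b[a_s, d_s] = w_s             # zero everywhere except on synapses
```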
### A generative model for raster plots
As described in Figure 1, a spiking motif can be detected using a properly tuned HD-SNN that maximizes spike synchronization at the postsynaptic terminal. Taking the argument the other way around, one may form a generative model for realistic raster plots in which spikes in the presynaptic address space are generated as the conjunction of spiking motifs defined in the postsynaptic space, knowing that both populations are connected by a set of weights and delays whose structure is stable relative to the coding timescale. When the connection weights are strong and sparsely distributed, the activation of a motif will robustly cause a specific temporal pattern of presynaptic spikes. Overall, this shows that raster plots may be considered as a mixture of the effects of different elementary causes, where each cause triggers a specific spatio-temporal spiking motif.
Formally, the activation of spiking motifs can occur independently and at random times. The activity is represented as a boolean matrix \(B\in\{0,1\}^{M\times T}\), where \(M\) is the number of different spiking motifs (see Figure 2c). Each entry \(B(b,t)\) indicates whether a particular motif \(b\) is activated at time \(t\). The firing of a neuron \(a\) at time \(t\) is considered a Bernoulli trial with a bias parameter \(p(a,t)\in[0,1]\). This bias is conditioned by the presence of spiking motifs on the postsynaptic neurons with the corresponding delays. Assuming that the contributions of _all_ these postsynaptic factors combine independently, it can be shown that the logit (the inverse of the sigmoid) of this probability bias can be written as the sum of the logits of each of these factors, whose values we define as the corresponding weights in the kernel. We can thus write the probability bias \(p(a,t)\) as the accumulated evidence given these factors as
\[p(a,t)=\sigma\big{(}K_{\mathcal{A}}(a)+\sum_{b\in\mathcal{S}_{a},0\leq\delta<D}B(b,t+\delta)\cdot K_{a}(b,\delta)\big{)}\]
Figure 2: **From generating raster plots to inferring spiking motifs**. _(a)_ As an illustration for the generative model, we draw a multiunit raster plot synthesized from 4 different spiking motifs and for 10 presynaptic neurons. _(b)_ We show these motifs, each identified at the top by a different color. The evidence of activation (red) or deactivation (blue) is assigned to each presynaptic neuron and 31 different possible delays. _(c)_ The activation in time of the different motifs (denoted by stars) is drawn at random and then used to generate a raster plot on the multi-unit address space (see panel a). By inverting this model, an inference model can be defined for their efficient detection, outputting an evidence value (continuous line) from which the identity and timing of SMs can be inferred (vertical bars). _(d)_ The original raster plot can be annotated with each identified spiking motif (as represented by the respective color assigned to SMs).
where \(\sigma\) is the sigmoid function. We will further assume that the kernel weights are balanced (their mean is zero) and that \(K_{\mathcal{A}}\) is a bias such that \(\forall a,t\), \(\sigma(K_{\mathcal{A}}(a))\) is the average background firing rate.
Finally, we obtain the raster plot \(A\in\{0,1\}^{N\times T}\) by drawing spikes using independent Bernoulli trials based on the computed probability biases, \(A\sim\mathcal{B}(p)\). Note that, depending on the definition of the kernels, the generative model can model a discretized Poisson process, generate rhythmic activity or, more generally, propagating waves. This formulation thus defines a simple generative model for raster plots as a combination of independent spiking motifs. This generative model can easily be extended to include a refractory period in order to ensure that there is a minimum time gap between successive action potentials, preventing them from overlapping. This temporal separation allows for discrete and well-defined neural signals, enabling accurate information processing and mitigating signal interference. The refractory period contributes to energy efficiency in neural systems and plays a crucial role in temporal coding by creating distinct time windows between successive spikes.
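A minimal numpy sketch of this generative model is given below, assuming random sparse kernels (about 1% non-zero weights, as used in the results section) and a hypothetical bias logit \(K_{\mathcal{A}}\); it draws motif activations \(B\), accumulates the evidence into the logit, and samples the raster \(A\) by independent Bernoulli trials.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T, D = 10, 4, 100, 31
# sparse random kernels: ~1% non-zero weights (assumed here)
K = rng.normal(size=(M, N, D)) * (rng.random((M, N, D)) < 0.01)
K_A = -4.0                              # bias logit setting the background rate

B = rng.random((M, T)) < 0.01           # random, independent motif activations
logit = np.full((N, T), K_A)
for b, t in zip(*np.nonzero(B)):
    for d in range(min(D, t + 1)):
        logit[:, t - d] += K[b, :, d]   # presynaptic spikes precede the motif time by delta

p = 1.0 / (1.0 + np.exp(-logit))        # sigmoid of the accumulated evidence
A = rng.random((N, T)) < p              # independent Bernoulli draws
```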
### Detecting spiking motifs
Assuming the spiking motifs (as defined by the kernel \(K\)) are known, the generative model allows us to derive an inference model for detecting the sources \(\hat{B}\) when observing a raster plot \(A\). Indeed, by using this forward model, it is possible to estimate the likelihood \(p(b,t)\) for the presence of a spiking motif of address \(b\) at time \(t\) by using the transpose convolution operator. This consists in using the emitting field \(\mathcal{S}_{a}\) of presynaptic neurons in place of the receptive field \(\mathcal{S}^{b}\) of postsynaptic neurons. It follows that, when observing \(A\), one may infer the logit of the probability as the sum of evidences:
\[p(b,t)=\sigma\big{(}K_{\mathcal{B}}(b)+\sum_{a\in\mathcal{S}^{b},0\leq\delta<D}A(a,t-\delta)\cdot K^{b}(a,\delta)\big{)}\]
This also takes the form of a temporal convolution. This assumption holds as long as the kernels are uncorrelated, a condition which is met here numerically by choosing a relatively sparse set of synapses (approximately 1% of active synapses). Finally, we compute \(\hat{B}\) by selecting the most likely items, allowing to identify the spiking motifs in the input raster plot (see Figure 2d).
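As a sketch of this inference step (our illustration under the same dense representation as above), the logit of \(p(b,t)\) can be computed as a temporal correlation of the raster with each kernel, and the most likely motif occurrences read out by thresholding; the bias value and the threshold are assumptions.

```python
import numpy as np

def infer_motifs(A, K, K_B=-4.0, threshold=0.5):
    """A: (N, T) boolean raster; K: (M, N, D) known kernels."""
    M, N, D = K.shape
    T = A.shape[1]
    logit = np.full((M, T), K_B)
    for d in range(D):
        # evidence from spikes A(a, t - d) weighted by the kernel at delay d
        logit[:, d:] += K[:, :, d] @ A[:, :T - d]
    p = 1.0 / (1.0 + np.exp(-logit))
    return p > threshold                # estimated motif raster B_hat
```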
One may naturally extend this algorithm to the case where the spiking motifs (that is, the weights) are not known, but where we know the timing and identity of the spiking motifs. Indeed, the equation above is differentiable: the activation function of our spiking neural network is a sigmoid function implementing a form of Multinomial Logistic Regression (MLR) [10]. The underlying metric is the binary cross-entropy, as used in the logistic regression model. In particular, if we consider kernels with a similar decreasing exponential time profile, one can prove that this detection model is similar to the method of Berens _et al._[2]. In our specific case, the difference is that the regression is performed in both dendritic and delay space by extending the summation using a temporal convolution operator.
## 3 Results
To quantify the efficiency of this operation, we generated raster plots parameterized by \(N=128\) presynaptic inputs and \(M=144\) synthetic spiking motifs as random independent kernels and with \(D=31\) possible delays. We drew random independent instances of \(B\) with a length of \(T=1000\) time steps and an average of 1.0 spikes per neuron. This allowed us to generate a large number of synthetic raster plots, which we use to infer \(\hat{B}\). We compute accuracy as the rate of true positive detections (both for inferring the address and its exact timing) and observe on average \(\approx 98.8\%\) correct detections.
We extended this result by showing how accuracy evolves as a function of the number of simultaneous spiking motifs, holding the frequency of occurrence constant. We show in Fig. 3 (left) that the accuracy of finding the right spiking motif remains above \(80\%\) with more than \(1364\) overlapping spiking motifs. This observation illustrates quantitatively the capacity of the HD-SNN to represent a high number of simultaneous motifs. Furthermore, we show in Fig. 3 (middle) that (with \(M=144\) spiking motifs fixed) the accuracy increases significantly with increasing temporal depth \(D\) of the spiking motif kernel, quantitatively demonstrating the computational advantage of using heterogeneous delays. These results were obtained under the assumption that we know the spiking motifs through \(K\). However, this is generally not the case, for example, when considering the raster plot of biological neurons.
Finally, we evaluated the performance of the supervised learning scheme in inferring the connection kernel when the address and timing of spiking motifs are known. The kernel was initialized with random independent values, and we used stochastic gradient descent with a learning rate of \(1\mathrm{e}{-4}\) over \(1\mathrm{e}4\) trials (i.e., over rasters as defined above with \(T=1000\) and \(N=128\)). Qualitatively, the convergence was monotonous, and the correct values of the \(M=144\) spiking motifs were quickly recovered. Quantitatively, the correlation between the true and learned kernel weights showed that all kernels were correctly recovered (see Figure 3, right). Performing inference with the learned weights was as efficient as with the true kernels, and showed no significant difference (not shown).
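A hedged PyTorch sketch of such a supervised training step is shown below: the kernels are fit by stochastic gradient descent on the binary cross-entropy between the predicted probability biases and the observed raster, given the known motif raster. The conv1d formulation and the zero initialization are our assumptions, the bias term \(K_{\mathcal{A}}\) is omitted for brevity, and the dimensions and learning rate follow the values quoted above.

```python
import torch

N, M, T, D = 128, 144, 1000, 31
K = torch.zeros(N, M, D, requires_grad=True)   # kernels to be learned
opt = torch.optim.SGD([K], lr=1e-4)

def training_step(A, B):
    """A: (1, N, T) float raster targets; B: (1, M, T) float motif raster."""
    # sum_{b, delta} B(b, t + delta) * K(a, b, delta), realized as a
    # 1-d cross-correlation; the bias K_A is omitted for brevity
    logit = torch.nn.functional.conv1d(B, K, padding=D - 1)[:, :, D - 1:]
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logit, A)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```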
## 4 Discussion
### Synthesis and Main Contributions
In this paper, we present a novel Heterogeneous Delays Spiking Neural Network (HD-SNN) model designed for the detection of spiking motifs in synthetic neurobiologically-inspired raster plots.
Our contributions encompass several innovations. Firstly, we formulate the HD-SNN model from first principles, optimizing the detection of event-based spatiotemporal motifs. Unlike previous models like the tempotron, which are evaluated on simplified problems, our model is rigorously tested on realistic synthetic data. The results demonstrate that, assuming that the spiking motifs are known, our model accurately detects the identity and timing of spiking motifs,
even when multiple motifs are superimposed. Additionally, we show that our method outperforms correlation-based heuristics, such as those used in previous works like [6, 33], in terms of efficiency. Secondly, compared to other event-based methods, like HOTS [18], our model's weights are interpretable. These weights are directly related to the logit, which is the inverse sigmoid of the probability of detecting each spatiotemporal spiking motif. Finally, a crucial novelty lies in the simultaneous learning of weights and delays in our model. In contrast, models like the polychronization model [16] only learn the weights, while the delays are frozen. These contributions highlight the significance and effectiveness of our HD-SNN model for detecting spiking motifs, offering insights into the neural mechanisms involved in pattern recognition and information processing.
### Main limits
The model comes with certain limitations. First, the entire framework is based on discrete time binning, which is incompatible with the continuous nature of biological time. While this choice facilitated efficient implementation on conventional hardware such as GPUs, it can be extended to a purely event-based SNN framework [8]. By analytically incorporating a precision term in the temporal value of the input spikes, a purely event-based scheme can be achieved, promising speedups and computational energy gains.
Second, the current model is purely feed-forward, i.e. the spikes generated by postsynaptic neurons are based solely on information from their classical receptive fields. However, neural systems often involve lateral interactions between neurons in the same layer and feedback connections, which can be crucial for computational principles and modulation of neural information. While our theoretical model can incorporate these recurrent connections by inserting new spikes into the list of spikes reaching presynaptic addresses, it requires proper tuning to avoid perturbations of the homeostatic state. For the implementation of predictive or anticipatory processes, recurrent activity would be essential, especially when dealing with multiple different delays that require temporal alignment. Such recurrent activity has previously been modelled to explain phenomena such as the flash-lag illusion. Implementing this using generalised coordinate and delay operators would allow predictive mechanisms to be incorporated into our proposed HD-SNN model, providing an elegant solution to this problem.

Figure 3: **Detecting spiking motifs using spiking neurons with heterogeneous delays**. Accuracy of detection for the classical correlation (red) and the HD-SNN method (blue) as a function of _(Left)_ the number \(M\) of kernels, _(Middle)_ the number of presynaptic neurons, _(Right)_ Correlation matrix of true vs learned kernels.
Addressing these limitations and exploring the extension of the HD-SNN model to event-based schemes and recurrent connections would enrich its potential applications and pave the way for a better understanding of neural information processing in complex systems.
### Perspectives
The coding results were obtained under the assumption that we know the spiking motifs by way of \(K\), or using supervised learning by knowing the identity and timing of spiking motifs. However, this is generally not the case, e.g. when observing the neurobiological raster plot of a population of neurons. One perspective would be to extend the model to a fully self-supervised learning paradigm, i.e. without any labeled data [1]. This type of learning is thought to be prevalent in the central nervous system and, assuming the signal is sparse [23], one could extend these Hebbian sparse learning schemes to spikes [25, 22].
We expect that this would be particularly adapted for exploring neurobiological data [21]. Indeed, there is a large literature showing that brain dynamics often organize into stereotyped sequences such as synfire chains [15], packets [20], or hippocampal sequences [31] (for a review, see [9]). These motifs are stereotyped and robust, as they can be activated in the same motif from day to day [13]. In contrast to conventional methods used to process neurobiological data, such an event-based model would be able to answer key questions regarding the representation of information in neurobiological data.
|
2308.08859 | Tutorial: How to Train a Neural Network Potential | The introduction of modern Machine Learning Potentials (MLP) has led to a
paradigm change in the development of potential energy surfaces for atomistic
simulations. By providing efficient access to energies and forces, they allow
to perform large-scale simulations of extended systems, which are not directly
accessible by demanding first-principles methods. In these simulations, MLPs
can reach the accuracy of electronic structure calculations provided that they
have been properly trained and validated using a suitable set of reference
data. Due to their highly flexible functional form the construction of MLPs has
to be done with great care. In this tutorial, we describe the necessary key
steps for training reliable MLPs, from data generation via training to final
validation. The procedure, which is illustrated for the example of a
high-dimensional neural network potential, is general and applicable to many
types of MLPs. | Alea Miako Tokita, Jörg Behler | 2023-08-17T08:35:37Z | http://arxiv.org/abs/2308.08859v2 | # Tutorial: How to Train a Neural Network Potential
###### Abstract
The introduction of modern Machine Learning Potentials (MLP) has led to a paradigm change in the development of potential energy surfaces for atomistic simulations. By providing efficient access to energies and forces, they allow us to perform large-scale simulations of extended systems, which are not directly accessible by demanding first-principles methods. In these simulations, MLPs can reach the accuracy of electronic structure calculations provided that they have been properly trained and validated using a suitable set of reference data. Due to their highly flexible functional form the construction of MLPs has to be done with great care. In this Tutorial, we describe the necessary key steps for training reliable MLPs, from data generation via training to final validation. The procedure, which is illustrated for the example of a high-dimensional neural network potential, is general and applicable to many types of MLPs.
## I Introduction
In recent decades, advances in atomistic simulations have revolutionized the way of studying complex systems in many fields, from chemistry and physics via materials science to the life sciences. At the present time, computer simulations allow us to understand complex experimental data, and to rationalize or even predict the properties of molecules and solids as well as their reactions based on detailed structural and dynamical information at the atomic level. A fundamental requirement to perform such simulations is the knowledge about the atomic interactions, i.e., the potential energy and the forces, which in principle can be obtained by solving the Schrodinger equation. Unfortunately, such quantum mechanical calculations are computationally very demanding, even if relatively efficient methods such as density functional theory (DFT) [1; 2] are used. Therefore, the accessible time and length scales of ab initio molecular dynamics (MD) simulations [3; 4], in which the energies and forces are determined by DFT for each visited atomic configuration, are limited to a few hundred atoms and tens to hundreds of picoseconds.
The computational effort could be substantially reduced if the multidimensional function defining the relation between the atomic positions and the potential energy, i.e., the potential energy surface (PES), were directly accessible. Founded on the Born-Oppenheimer approximation of quantum mechanics [5], the PES contains a wealth of information, such as global and local minima defining stable and metastable geometries, barriers of chemical reactions, and forces governing the nuclear motion. Approximate atomistic potentials and force fields representing the PES in analytic form have been used for decades [6; 7; 8; 9; 10; 11; 12], and conventional approaches rely on physically reasonable simplifications to increase the computational efficiency. Such approximations typically reduce the accuracy but often still maintain an acceptable transferability of the potential.
In recent years, machine learning (ML) has emerged as a new and powerful computational tool with many applications in the chemical and physical sciences [13; 14; 15], such as drug design [16; 17], synthesis planning [18; 19], protein structure prediction [20] and the analysis and prediction of spectra [21; 22]. Another important usage of ML is the representation of the PES by machine learning potentials (MLP), which was first proposed more than a quarter of a century ago [23]. Since then, MLPs have witnessed tremendous progress [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] by exploiting the flexibility of ML methods such as neural networks (NNs) to learn the atomic interactions from reference energies and forces obtained from electronic structure calculations. The resulting analytical ML expression can then provide the energy and its derivatives with about the accuracy of the reference method at a small fraction of the computational costs, which works well as long as the requested structures are not too different from those included in the training set. Due to these limited extrapolation capabilities, often large and structurally diverse data sets are used in the construction of MLPs. Another advantage of MLPs is their ability to represent all types of bonding, such as covalent and metallic bonds as well as ionic and dispersion interactions, at the same level of accuracy by utilizing general and unbiased functional forms. Moreover, like the underlying electronic structure methods, they are "reactive", i.e., they can describe the making and breaking of bonds. However, the computational costs of MLPs are usually higher than those of simple classical force fields, since a large number of terms needs to be evaluated and more complex coordinate transformations are required.
Current MLPs can be assigned to different generations [38; 36], and a suitable choice depends on the specific system of interest. While MLPs of the first generation are restricted to low-dimensional systems, the introduction of second-generation high-dimensional neural network potentials in 2007 paved the way to the
application of MLPs to high-dimensional systems containing large numbers of atoms [38; 39; 40; 41]. This has been achieved by expressing the potential energy of the system as a sum of atomic energies and by the introduction of atom-centered symmetry functions (ACSFs) as atomic environment descriptors maintaining the mandatory translational, rotational and permutational invariances of the energy [42]. Over the years, many types of second-generation MLPs have been introduced differing in the employed ML algorithms and descriptors, such as various forms of neural network potentials [39; 43; 44; 45; 46], Gaussian approximation potentials [47; 48], moment tensor potentials [49], spectral neighbor analysis potentials [50], atomic cluster expansion[51] and many others.
Long-range electrostatic interactions are included in third-generation MLPs [52; 53; 54; 55; 56]. Here, the necessary charges or even multipoles can be predicted as a function of the atomic environments by machine learning [52; 53; 57; 58]. Also MLPs with explicit dispersion interactions beyond the local atomic environments can be classified as third-generation [59; 60; 61]. Fourth-generation MLPs [62; 63; 64; 65; 66], which employ global charge equilibration techniques [67] or self-consistent charge distributions [63], can describe non-local phenomena such as long-range charge transfer and can be applied to multiple charge states of a system.
While many MLPs make use of predefined features to describe environment-dependent atomic properties such as energies, charges, or electronegativities, in recent years different forms of message passing neural networks [68] have also been developed for the representation of PESs. Here, the description of the atomic environments is included in the training process by iteratively passing structural information through the system [44; 45; 53; 69; 70; 71], which in principle can provide information beyond a predefined local environment.
Despite the significant progress that has been made in the increasingly automated generation of MLPs in the past two decades [72; 73; 74], constructing MLPs is still not a black box task. For instance, it is important to be aware that MLPs can spectacularly fail as a consequence of their flexible functional form if they have not been carefully validated or if they are used beyond the range of structures they have been developed for. Therefore, the construction and the validation of MLPs should go hand in hand, and users need to know about the applicability and the limitations of a specific potential, which strongly depends on the underlying data set. Hence it is important to distinguish between the general capabilities of a MLP method and the performance of a specific parameterization for a given system. This subtle difference poses a significant challenge for an unbiased comparison of different methodologies.
In spite of the rapidly growing body of publications on MLPs and their applications, there are not many Tutorials in the literature [75; 32; 76] addressing these issues. With this Tutorial we aim to fill this gap by discussing in detail all important aspects of training and validating MLPs. As a real-life example, we will use high-dimensional neural network potentials (HDNNP) [39] representing a common type of MLP. Nevertheless, most of the discussion equally applies to different classes of neural network potentials as well as MLPs employing other types of machine learning algorithms. The general procedures are illustrated using a simple model system consisting of a LiOH ion pair in a small box of water.
The overall workflow for training a neural network potential is shown schematically in Fig. 1 and corresponds to the structure of this Tutorial. After a concise summary of the HDNNP methodology in Section II, the generation of the initial data set is discussed in Section III. Moreover, several decisions have to be made before the training process can be started, such as the selection of the features or descriptors and the settings of the neural networks, e.g. the architecture and activation functions. These preparations for the training are presented in Section IV. The training procedure, which is usually an iterative process that also involves the extension of the data set by active learning, is then explained in Sec. V followed by a recipe for the validation of the potential in Sec. VI. The Tutorial ends with some final remarks and conclusions in Sec. VII. Some illustrative examples are given in the supplementary material.
Figure 1: Overview of the construction of a neural network potential.
## II High-dimensional neural network potentials
### Method
A major drawback of early, first-generation MLPs has been the limitation to a small number of atoms, which prevented their use in simulations of complex molecular and condensed systems. A strategy to address large systems, which has been used in empirical potentials for a long time with great success, is the decomposition of the total potential energy \(E_{\mathrm{tot}}\) of the system into structure-dependent atomic energies \(E_{i}\), e.g., in the famous Tersoff potential [6] and the embedded atom method [7].
However, transferring this strategy to the realm of MLPs has been a frustrating task in the early days of MLP development. While the high dimensionality of machine learning algorithms is the reason for their superior accuracy, it also posed severe challenges for finding suitable coordinates considering the mandatory invariances of the potential energy with respect to translation, rotation and permutation, i.e., the order of chemically equivalent atoms of the same element.
The rotational and translational invariances can in principle be incorporated easily by using internal coordinates such as interatomic distances instead of the Cartesian positions. However, the output of machine learning algorithms such as neural networks (NN) depends on the order in which such internal coordinates are supplied in the vector of input values, which violates permutation invariance. Moreover, feed-forward NNs, which have almost exclusively been used in early MLPs, have a fixed dimensionality, preventing the construction of PESs for systems with variable numbers of atoms.
A solution to this problem has been found in 2007 with the introduction of atom-centered symmetry functions (ACSF) [39; 42] as atomic environment descriptors fulfilling all required invariances. These enabled the development of high-dimensional neural network potentials [39] applicable to systems consisting of thousands of atoms using the total energy expression
\[E_{\mathrm{tot}}=\sum_{i=1}^{N_{\mathrm{atoms}}}E_{i}=\sum_{j=1}^{N_{ \mathrm{elements}}}\sum_{i=1}^{N_{\mathrm{atoms}}^{j}}E_{i}^{j}\quad. \tag{1}\]
As the atomic environments determining the atomic energies are defined by a cutoff radius \(R_{\mathrm{c}}\), HDNNPs of this form represent a second-generation potential. Today, a very large number of atomic environment descriptors [77; 78; 79; 80; 81] and many flavors of very accurate high-dimensional MLPs are available offering access to large-scale atomistic simulations of condensed systems.
The structure of an HDNNP is shown in Fig. 2 for the example of a LiOH ion pair in water. Starting from the Cartesian coordinate vectors \(\mathbf{R}_{i}\) of the atoms, first a transformation to the ACSF vectors \(\mathbf{G}_{i}\) is carried out for each atom. In general, the number and definition of the ACSFs (cf. Sec. II.2) can be different for each element. These vectors then enter the respective atomic feed-forward NNs (see Box 1), which are the central components of the HDNNP; they provide the atomic energies that are summed to yield \(E_{\mathrm{tot}}\).
For a given element, the architecture and weight parameters of the atomic NNs are constrained to be the same. Consequently, \(N_{\mathrm{elements}}\) different feed-forward NNs need to be trained simultaneously, and in applications of the HDNNP each element-specific NN is evaluated once for each atom of the respective element. Apart from the element information and the atomic positions - and if applicable the lattice vectors - no further structural information about the system is required for second-generation MLPs. Moreover, in contrast to most traditional force fields, no further classification of atoms of a given element into atom types depending on the local bonding patterns is required.
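The resulting energy evaluation of Eq. (1) is simple to express in code. The following minimal Python sketch (our illustration, not the API of a specific HDNNP code) applies one trained atomic network per element to each atom's ACSF vector and sums the outputs:

```python
def hdnnp_energy(symbols, acsf_vectors, atomic_nets):
    """symbols: element symbol per atom; acsf_vectors: one descriptor
    vector G_i per atom; atomic_nets: dict element -> callable(G_i) -> E_i."""
    return sum(atomic_nets[s](G) for s, G in zip(symbols, acsf_vectors))

# hypothetical usage with trained per-element networks net_H, net_O, net_Li:
# E_tot = hdnnp_energy(["Li", "O", "H", ...], G_list,
#                      {"H": net_H, "O": net_O, "Li": net_Li})
```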
**Box 1: Feed-forward neural networks**
Multilayer feed-forward neural networks [82; 83], often also called "deep neural networks", are a type of artificial neural network whose functional form is inspired by the interaction of neurons in the brain [84]. While initially they were used to develop mathematical models of these interactions, nowadays they have become a powerful technique in the field of machine learning to establish functional relations between input and target properties [82; 85].
An example of a feed-forward neural network is shown in Fig. 3. As its name implies, the flow of information is from left to right, i.e., from the input to the output layer along the black lines connecting the neurons, or nodes, in the network, which are represented by the circles. The structural information is given by the vector \(\mathbf{G}=\{G_{i}\}\) and is provided in the input layer. It is then passed via one or more hidden layers to the node(s) in the output layer.
Each hidden layer contains several neurons, which are connected to the neurons in the neighboring layers by weight parameters \(a_{ij}^{kl}\) representing the connection strength between node \(i\) in layer \(k\) and node \(j\) in layer \(l=k+1\). In addition, each neuron \(j\) in layer \(l\), which may be a hidden or the output layer, is connected to a bias node by a bias weight \(b_{j}^{l}\) acting as an adjustable offset.
The numerical value \(y_{j}^{l}\) of a node \(j\) in layer \(l\) is given by a linear combination of the values of all neurons \(y_{i}^{k}\) in the previous layer weighted by the respective connecting weight parameters. Then, the bias weight is added and a non-linear activation function \(f_{j}^{l}\) is applied, yielding
\[y_{j}^{l}=f_{j}^{l}\left(b_{j}^{l}+\sum_{i=1}^{N_{\rm node}^{k}}a_{ij}^{kl}\cdot y _{i}^{k}\right)\quad. \tag{2}\]
Consequently, the output \(E\) of the small feed-forward neural network in Fig. 3 is given by
\[E=f_{1}^{3}\left(b_{1}^{3}+\sum_{k=1}^{4}a_{k1}^{23}\cdot f_{k}^{2}\left(b_{k}^{2}+\sum_{j=1}^{4}a_{jk}^{12}\cdot f_{j}^{1}\left(b_{j}^{1}+\sum_{i=1}^{3}a_{ij}^{01}\cdot G_{i}\right)\right)\right)\quad. \tag{3}\]
Activation functions are an important component of a neural network, since they introduce non-linearity to the model and thereby give feed-forward neural networks the ability to fit arbitrary non-linear functions [86; 87] such as potential energy surfaces. Various functional forms can be used such as the hyperbolic tangent, the sigmoid function, and Gaussians. They have in common that they contain a non-linear region, and are continuous and differentiable, which is needed for the gradient-based optimization of the network and for the computation of atomic forces in MLPs. For the output layer usually a linear function is used to avoid restricting the output values of the feed-forward neural network. Alternatively, the target values can be (re)scaled.
The flexibility of a network is determined by its architecture, i.e., the number of hidden layers and nodes per layer. The more layers and nodes are included, the greater the fitting capabilities of the network become; such networks often contain a few thousand weights. Typically, the feed-forward NNs in HDNNPs consist of two to three hidden layers including between 15 and 50 neurons each.
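A compact numpy sketch of Eqs. (2) and (3) is given below; the 3-4-4-1 architecture mirrors Fig. 3, hyperbolic tangent activations are used in the hidden layers, the output layer is linear, and the random initialization is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [3, 4, 4, 1]                                # input, 2 hidden, output
weights = [rng.normal(size=(n_in, n_out))            # a_{ij}^{kl}
           for n_in, n_out in zip(layers[:-1], layers[1:])]
biases = [rng.normal(size=n_out) for n_out in layers[1:]]   # b_j^l

def forward(G):
    """Evaluate the network output E for an input descriptor vector G."""
    y = np.asarray(G)
    for l, (a, b) in enumerate(zip(weights, biases)):
        z = y @ a + b                                # linear combination + bias
        y = np.tanh(z) if l < len(weights) - 1 else z  # linear output layer
    return y[0]                                      # energy E
```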
Figure 2: Schematic structure of a second-generation high-dimensional neural network potential (HDNNP) [39] for the example of a lithium hydroxide ion pair in water (lithium: dark blue; oxygen: red; hydrogen: grey). First, the Cartesian coordinate vectors \({\bf R}_{i}^{\alpha}\) of the atoms \(i\) of element \(\alpha\) are transformed to vectors of \(j\) atom-centered symmetry functions (ACSF) \({\bf G}_{j}^{\alpha}\)[42], which describe the local atomic environments up to the cutoff radius shown as circle for the case of the lithium atom. For each atom the respective ACSF vector is then used as input for an atomic neural network (NN) predicting its atomic energy \(E_{i}^{\alpha}\). Finally, the atomic energies are summed to obtain the total potential energy \(E_{\rm tot}\) of the system (Eq. 1). The sets of ACSFs, the architectures of the atomic NNs, and the weight parameters are the same for all atoms of a given element.
### Atom-Centered Symmetry Functions
The availability of suitable descriptors for the atomic environments is a key for the construction of high-dimensional MLPs. Here, we will just briefly summarize two types of atom-centered symmetry functions, which are most commonly chosen when constructing HDNNPs, but in principle many other types of descriptors could equally be used. The spatial extension of the ACSFs as a function of distance \(R_{ij}\) of atom \(j\) from central atom \(i\) is defined by a cutoff function such as the monotonously decreasing part of the cosine function,
\[f_{\mathrm{c}}\left(R_{ij}\right)=\left\{\begin{array}{r@{\quad}l}0.5\cdot \left[\cos\left(\frac{\pi R_{ij}}{R_{\mathrm{c}}}\right)+1\right]\quad\text{ for}\quad&R_{ij}\leq R_{\mathrm{c}}\\ 0.0\quad\quad\text{for}\quad&R_{ij}>R_{\mathrm{c}}\,,\end{array}\right. \tag{4}\]
which smoothly decays to zero in value and slope at \(R_{\mathrm{c}}\) (cf. Fig. 4a). There are two main classes of ACSFs termed radial and angular, which can be used to provide local structural fingerprints of the atomic environments. The most common radial ACSF has the form
\[G_{i}^{\mathrm{rad}}=\sum_{j=1}^{N_{\mathrm{atoms}}\in R_{\mathrm{c}}}e^{-\eta \left(R_{ij}-R_{\mathrm{s}}\right)^{2}}\cdot f_{\mathrm{c}}\left(R_{ij}\right) \tag{5}\]
and consists of a set of Gaussian functions (see Fig. 4b) evaluated at the radial distances of all neighboring atoms inside the cutoff sphere. To ensure that the number of symmetry functions is independent of the number of neighbors, which is required for compatibility with the fixed dimensionality of the input vectors of the atomic NNs, the Gaussian functions are multiplied by the cutoff function and the sum of all terms is calculated to yield a single value, which can be interpreted as an effective coordination number within a range defined by the Gaussian width parameter \(\eta\). The parameter \(R_{\mathrm{s}}\) allows the centers of the Gaussians to be shifted away from central atom \(i\) to increase the sensitivity of the function in specific spherical shells. To obtain a radial profile of the neighboring atoms, a set of radial symmetry functions is constructed employing element pair-specific sets of different \(\eta\) values (see Sec. IV).
Angular symmetry functions consider the angles \(\theta_{ijk}\) of triplets of atoms centered between the connections \(ij\) and \(ik\) employing the cosine function such that the periodicity of the angle and in particular its mandatory symmetry with respect to angles of \(0^{\circ}\) and \(180^{\circ}\) are taken into account,
\[G_{i}^{\mathrm{ang}}= 2^{1-\zeta}\sum_{j\neq i}^{N_{\mathrm{atoms}}\in R_{\mathrm{c}} }\sum_{k\neq i,j}^{N_{\mathrm{atoms}}\in R_{\mathrm{c}}}\left[\left(1+ \lambda\cdot\cos\theta_{ijk}\right)^{\zeta}\right.\] \[\left.\cdot e^{-\eta\left(R_{ij}^{2}+R_{ik}^{2}+R_{jk}^{2}\right) }\cdot f_{\mathrm{c}}\left(R_{ij}\right)\cdot f_{\mathrm{c}}\left(R_{ik} \right)\cdot f_{\mathrm{c}}\left(R_{jk}\right)\right]\,. \tag{6}\]
There are two parameters, which can be varied to characterize the atomic environment. These are the values of \(\zeta\) and \(\lambda\). As can be seen in Fig. 4c, the purpose of \(\lambda=+1\) or \(\lambda=-1\) is to center the maxima of the cosine function at angles of \(0^{\circ}\) or \(180^{\circ}\), respectively, while a series of \(\zeta\) values controls the angular resolution of the functions. Multiplying the angular terms by the three cutoff functions of the three pairwise distances ensures that only triplets of close atoms enter the summation, while the Gaussian functions can be used to contract the spatial range to close neighbors, if desired. This can be useful since for a given angle the distance between neighbors \(j\) and \(k\) increases with \(R_{ij}\) and \(R_{ik}\), resulting in weaker interactions.
It is important to note that there is no need for a linear relation between the values of the ACSFs and the potential energy, since the atomic NNs are able to express very general functional forms. Therefore, only some mandatory properties of the ACSFs are relevant for the construction of HDNNPs (see Box 2). For each element in the system, its radial and angular functions are constructed for all element combinations present for atoms \(j\) (radial functions) or \(j\) and \(k\) (angular functions), respectively, to reflect the different interactions between different chemical species. Typically, between 10 and 200 ACSFs are used per atom depending on the complexity of the system. Although ACSFs consist of atomic distances and angles, they simultaneously depend on all atomic positions inside the cutoff sphere. Hence, they are formally many-body functions. Still, it has been shown that in rare situations descriptors consisting of two- and three-body terms may not be unique in particular for low-dimensional systems [88]. A more detailed discussion and further types of ACSFs can be found in Ref. [42].
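As a small illustration (a sketch under assumed parameter values, not a reference implementation), the cutoff function of Eq. (4) and a radial symmetry function of Eq. (5) can be evaluated for one central atom as follows; an analogous loop over neighbor pairs yields the angular functions of Eq. (6):

```python
import numpy as np

def f_c(R_ij, R_c):
    """Cosine cutoff function of Eq. (4)."""
    return np.where(R_ij <= R_c,
                    0.5 * (np.cos(np.pi * R_ij / R_c) + 1.0), 0.0)

def G_rad(R_ij, eta, R_s, R_c):
    """Radial symmetry function of the central atom (Eq. 5)."""
    return np.sum(np.exp(-eta * (R_ij - R_s) ** 2) * f_c(R_ij, R_c))

# e.g. a radial fingerprint from a set of Gaussian widths (values assumed)
R_ij = np.array([1.8, 2.4, 3.1, 5.0])      # neighbor distances in a_0
G = [G_rad(R_ij, eta, 0.0, R_c=12.0) for eta in (0.001, 0.01, 0.1, 1.0)]
```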
**Box 2: Properties of ACSFs**
To be suitable descriptors for the atomic environments, atom-centered symmetry functions must fulfill several requirements:
* They need to describe the structural details of the atomic environments.
* Their values must be invariant with respect to rotation, translation and permutation.
* They have to decay smoothly to zero in value and slope at the cutoff radius to restrict the atomic interactions to the environments inside the cutoff spheres.
* They need to be continuous and differentiable for the gradient-based training of HDNNPs and the calculation of analytic forces.
* The number of ACSFs in the atomic NN input vectors must be independent of the number of neighbors inside the cutoff sphere, because this number can be different for each atom and can change in MD simulations.

Figure 3: Architecture of a feed-forward neural network. The output energy \(E\) is a function of the neurons \(\left\{G_{i}\right\}\) in the input layer. In between the input and the output layer the neurons are arranged in hidden layers and determine the functional flexibility of the neural network. The black lines connecting pairs of neurons and also dotted lines between the bias node and the neurons represent the fitting weight parameters of the network.
### Long-Range Electrostatic Interactions
So far, we have discussed the structure of second-generation HDNNPs and ACSFs as descriptors for the training of environment-dependent atomic energies. The main assumption of such second-generation potentials is the locality of a major part of the atomic interactions, which, if valid, can be expressed to a good approximation in terms of local atomic energies. For clarity it should be noted that even second-generation MLPs contain all types of interactions, including electrostatics and dispersion, between atoms within the cutoff radius, since Eq. 1 does not distinguish between different types of bonding. However, while this ansatz works surprisingly well for many systems, in some cases it is necessary to explicitly include long-range interactions such as electrostatics without truncation [89].
The inclusion of long-range electrostatic interactions is possible using, e.g., an Ewald sum [90] employing atomic charges determined by ML. The extended total energy expression is then given by
\[E_{\rm tot}=E_{\rm elec}+E_{\rm short}=E(\{Q_{i}\},\{{\bf R}_{i}\})+\sum_{i=1} ^{N_{\rm atoms}}E_{i}(\{{\bf R}_{i}\}) \tag{7}\]
as a sum of a short-range and a long-range part.
There are different options to obtain the required charges. If the atomic charges are essentially local, i.e., they only depend on the close chemical environment, they can be learned as atomic properties using ACSF descriptors and a second set of atomic neural networks, e.g., in third-generation HDNNPs [91, 52]. If, however, the charges depend on distant structural features and are influenced by long-range charge transfer, such as in aromatic systems or molecules containing conjugated \(\pi\)-bonds, fourth-generation potentials may be required [64]. In this case, the charges are determined indirectly via environmental-dependent atomic electronegativities in combination with a charge equilibration step or a self-consistent redistribution of the charges.
Most of the aspects of training neural network potentials are not much affected by the decision if a second-, third-, or fourth-generation HDNNP is trained, since the procedures for the selection of the data, the iterative training and the validation are essentially the same. Therefore, in this Tutorial we will focus on the example of a second-generation HDNNP, while some comments on the particularities of training HDNNPs including electrostatic interactions are given in Section V.3. Further details on third- and fourth-generation HDNNPs can be found in Refs. [38, 52, 64, 91, 36].
## III Data Set Generation
### Reference Electronic Structure Calculations
Before a MLP can be trained, first a reference electronic structure method has to be chosen. When constructing potentials for condensed systems, often DFT calculations are used. These can be carried out routinely for systems containing a few hundred atoms with and without periodic boundary conditions even if thousands of data points are required, which is typical for MLP training. However, also coupled cluster and other wave function-based methods are increasingly popular [92, 93], since any limitation in the accuracy of the training data will become an intrinsic property of the MLP, which thus cannot provide more accurate results than the underlying reference method. Still, due to the high computational costs, often a compromise between accuracy and computational efficiency has to be found when choosing the reference method. It is thus important to note that the choice of the reference method determines the reliability of the final MLP, i.e., the agreement with experimental observables, and the only aim of MLP construction is to reproduce this method with all its intrinsic properties.
Ideally, the reference method should not only provide accurate energies but also atomic forces, which contain a wealth of information about the local shape of the PES. Including forces in the training, which is good practice in almost any modern MLP [94, 95, 96, 75, 97], offers several advantages. First, the amount of information that can be extracted from expensive electronic structure calculations is dramatically increased, since each single point calculation can only provide one energy value but \(3N_{\rm atoms}\) force components. Moreover, as has been shown recently for the case of metal-organic frameworks [97], MLPs can be subject to arbitrary internal energy shifts between the individual atoms. These internal energy shifts, which can strongly reduce the transferability of MLPs, are invisible when monitoring the total energy errors in the training process, because there are many ways to distribute the total energy in the system for a given total energy. Forces, which can provide local information about the gradients of the PES at the atomic positions, can contribute to overcome these problems.
The generation of the reference data set is often the computational bottleneck in the construction of MLPs as
they usually consist of tens of thousands of data points. This effort can be reduced in two ways: by carefully choosing and thus decreasing the number of structures, which is a very active field of research (see Sec. V.2), and by reducing the costs of each calculation by optimizing the settings of the employed electronic structure code.
The numerical convergence of the electronic structure calculations is a crucial point, since any numerical noise in the data represents a limit to the minimum error that can be reached in the training process. Particularly problematic is the choice of k-points if periodic systems of different size or symmetry are combined in a single training set, because underconverged k-point grids can have a significant impact on the accuracy of energies and forces, which finally results in inconsistent data in the reference set (see Fig. 5a). Furthermore, attention should also be paid to the density of numerical integration grids, which usually have a fixed orientation in space [98; 99]. If too sparse grids are used, data sets with different molecular orientations might contain similar structures with contradictory energies and forces (see Fig. 5b). While this is straightforward to detect for small gas phase molecules, in condensed systems such as liquid water, which contain many similar bonds in multiple different directions, such inaccuracies might be extremely difficult to detect, resulting in poor potentials. In general, a numerical convergence to about 1 meV/atom for energy differences and 0.05-0.10 eV/A for the forces should be considered as a minimum requirement.
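A simple way to expose such grid artifacts is to recompute the reference energy of a rigidly rotated structure; for a converged setup the spread should stay on the order of the thresholds quoted above. The sketch below is our illustration: `compute_energy` is a hypothetical stand-in for a call to the chosen electronic structure code.

```python
import numpy as np

def invariance_spread(coords, compute_energy, n_rot=10, seed=0):
    """Max spread of reference energies over random rigid rotations of
    coords (N, 3); compute_energy is a user-supplied callable."""
    rng = np.random.default_rng(seed)
    center = coords.mean(axis=0)
    energies = []
    for _ in range(n_rot):
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
        if np.linalg.det(Q) < 0:
            Q[:, 0] *= -1                             # enforce a proper rotation
        energies.append(compute_energy((coords - center) @ Q.T + center))
    return max(energies) - min(energies)              # ideally ~1 meV/atom or less
```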
Finally, there are other properties next to energies and forces, which might be needed for the construction of MLPs and should be computed and stored. Apart from atomic partial charges for MLPs containing electrostatic interactions, this might be information about the spin for magnetic systems [100; 101; 102] or Hirshfeld volumes for dispersion interactions [60]. Sometimes also the stress tensor is used for training [103].
### System Size and Dimensionality
Once the decision concerning the electronic structure method has been made and converged settings have been determined, the required structure types have to be selected. This includes many aspects, such as the system size, the chemical composition and the structural diversity, because MLPs have limited extrapolation capabilities beyond the range of structures included in the training set. Unfortunately, with the exception of the smallest molecules in vacuum, mapping atomic configurations on a regular grid is not a feasible option due to the exponential growth in the number of possible structures with increasing number of atoms.
The construction of the total energy as a sum of environment-dependent atomic energies in second-generation MLPs corresponds to an effective reduction of the dimensionality for large condensed systems, since only the positions of the atoms inside the cutoff spheres need to be learned while the total energy of the system still explicitly depends on all atomic positions. Fig. 6 shows the number of neighboring atoms included in the local environment for cutoff radii between 4 and 7 A for the examples of liquid water and fcc bulk copper. As can be seen, typical environments contain between 50 and 150 atoms. These atoms define the configuration space that should be mapped for the generation of the MLP, which requires a system size of at least the extension of the cutoff sphere. In case of periodic systems, the smallest cell diameter should be larger than the diameter of the cutoff sphere to avoid sampling only a subspace of the possible atomic configurations due to an artificial periodicity of the system. Thus, training structures for condensed systems often reach a size between 150 and 250 atoms. Still, it is possible to combine systems of different size in the training set to benefit from faster calculations for smaller systems, as long as consistent settings are used (see Sec. III.1). We note here that also MLPs of the third and fourth generation follow the same principle of environment-dependent properties such that also for these generations in principle it is not necessary to significantly increase the size of the reference structures. However, the choice of the cutoff is a critical decision and will be discussed in Sec. IV.2.

Figure 4: Different types and components of atom-centered symmetry functions (ACSF). Panel a) shows the cutoff function \(f_{\rm c}(R_{ij})\) (Eq. 4) for different cutoff radii \(R_{\rm c}\) (in \(a_{0}\)). Panel b) displays a term \(g_{ij}^{\rm rad}=e^{-\eta(R_{ij}-R_{\rm c})^{2}}\cdot f_{\rm c}(R_{ij})\) of the radial symmetry function in Eq. 5 for different Gaussian exponents \(\eta\) (in \(a_{0}^{-2}\)) with \(R_{\rm c}=12\;a_{0}\) and \(R_{\rm s}=0\;a_{0}\). In panel c) the dependence of the angular symmetry functions (Eq. 6) on the angle \(\theta_{ijk}\) and the distance \(R_{ij}\) as radial coordinate is displayed for a cutoff of \(R_{\rm c}=12\;a_{0}\) and the cases \(\lambda=+1\) and \(\lambda=-1\) with different \(\zeta\) parameters.
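Such neighbor statistics are straightforward to reproduce for one's own system; the following minimal sketch (assuming a cubic periodic box and the minimum image convention) counts the neighbors of each atom inside a given cutoff sphere:

```python
import numpy as np

def neighbor_counts(coords, box_length, R_c):
    """coords: (N, 3) Cartesian positions in a cubic box of edge box_length;
    returns the number of neighbors within R_c for each atom."""
    diff = coords[:, None, :] - coords[None, :, :]
    diff -= box_length * np.round(diff / box_length)   # minimum image convention
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                     # exclude the atom itself
    return (dist <= R_c).sum(axis=1)
```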
Even if the effective dimensionality is reduced due to the cutoff sphere, a comprehensive sampling is still out of reach and systematic strategies to select the most important data points are needed (see Sec. V.2). Unphysical atomic configurations, such as too close atomic encounters or chemically unreasonable bonding patterns should be avoided, as they would have very high potential energies and would be irrelevant in atomistic simulations. Still, any moderate and thus chemically still relevant increase in temperature enlarges the size of accessible configuration space and makes the mapping more demanding. For some condensed and molecular systems it is also possible to reduce the complexity of the reference systems by making use of molecular fragments, which can be extracted from the system [95; 104; 105; 97].
### Generation of an Initial Data Set
Before a first MLP can be trained, an initial data set needs to be available, which - depending on the complexity of the system - might contain a few hundred up to a few thousand structures. Ideally, the atomic configurations should be chemically reasonable and as close as possible to the geometries visited in the intended applications of the potential. They should therefore be sampled at the corresponding conditions, such as temperature, pressure, and chemical composition.

Figure 5: Numerical accuracy of integration grids in electronic structure calculations. Panel a) shows the selection of consistent k-point grids in periodic DFT calculations when changing the cell size. For supercells it is sometimes possible to choose exactly equivalent k-point meshes in reciprocal space, e.g., when the lattice parameter \(a\) is doubled and the number of k-points in the corresponding direction can be divided by two. For the combination of arbitrary systems of different size in the training set the k-point density should be kept constant. Panel b) illustrates the role of numerical integration grids for the case of a diatomic molecule, which might, e.g., be atom-centered grids used for the representation of numerical atomic orbitals or electron densities. For a sparse grid, the energy of the system does not show the required rotational invariance (e.g. in representing the grey region), while for a dense grid this invariance is to a good accuracy achieved. The same effect can be observed for regular three-dimensional grids used, e.g., in Fourier transformations.
A straightforward although expensive way of generating reasonable initial data sets is the use of ab initio MD. Here, often less tight settings are used, such as \(\Gamma\)-point sampling of k-space only. Therefore, energies and forces from ab initio MD trajectories are often not used directly but provide geometries, which are then recomputed with a higher degree of convergence or even more demanding electronic structure methods. This also makes it possible to avoid the computation of structurally correlated geometries, such as configurations visited in subsequent MD steps, which only provide a marginal amount of new information, by selecting structures in specific intervals.
A significant advantage of ab initio MD is the generation of reasonable structures right from the beginning of the data set construction. Still, it should be taken into account that potential energy wells are more frequently sampled than repulsive high-energy regions or transition states. The representation of repulsive walls can be improved by running simulations at slightly elevated temperatures and pressures, which increase the probability of closer atomic encounters. Transition states can be included using, e.g., metadynamics [106].
If available for the system of interest, first structures for electronic structure calculations can also be extracted from conventional force field trajectories. It should be noted, however, that different equilibrium geometries may result in structural biases, which - to a much smaller extent - can also be present in ab initio MD simulations when using less converged settings or different functionals.
There are several alternative approaches to generate preliminary data sets. In particular in the field of materials science it is common to start from the known crystal structures of a material and to introduce step-by-step random atomic displacements of increasing amplitude to sample the local minimum wells. In molecular systems, also normal mode sampling is frequently used [43; 107]. Furthermore, randomly placed atoms and molecules can be used in combination with suitable interatomic distance constraints. Finally, also the increasing availability of repositories and databases might be an interesting source of initial data sets [108; 109]. However, the initial data set represents only a first step, and its extension by further consistent data is crucial, which makes the reuse of repository data difficult, as the exact input settings need to be known and the same version of the same electronic structure code has to be available (see Sec. V.2).
**Case study: Construction of a first data set for the LiOH-water system**
For the DFT calculations of the lithium hydroxide system the Car-Parrinello Projector Augmented Wave (CP-PAW) code (version 28-09-2016) was employed [110], using the PBE0r local hybrid functional [111] with the settings described in Ref. [112]. CP-PAW implements the projector augmented-wave (PAW) method [113], in which augmentations of the 1s orbital of H, the 2s and 2p orbitals of Li, and the 2s, 2p, and 3d orbitals of O were used. The auxiliary wave functions were set up as node-less partial waves. Their matching radii for all orbitals are 0.7 times the covalent radii, which are 0.32 A for H, 1.23 A for Li, and 0.73 A for O. The plane wave cutoffs of the auxiliary wave functions and the auxiliary density have been set to 35 \(E_{\mathrm{H}}\) and 140 \(E_{\mathrm{H}}\), respectively. In the PBE0r tight binding orbitals used for the calculation of the Hartree-Fock exchange, contributions of the 1s orbital of H, the 2s orbital of Li, and the 2s and 2p orbitals of O were incorporated.

Figure 6: Structural complexity of atomic environments. Panels a) and b) show typical numbers of atoms inside the cutoff spheres for different cutoff radii \(R_{\mathrm{c}}\) for bulk liquid water and fcc bulk copper at ambient conditions. The number of possible atomic configurations inside the cutoff spheres increases strongly with the effective dimensionality of the atomic environments.
For cubic cells with a lattice constant of 8.2 A a \(2\times 2\times 2\) k-point grid was chosen. k-point grids of similar k-point density were selected for other cell sizes. Wave function and geometry optimizations were carried out using the Car-Parrinello ab initio molecular dynamics method [3] including a friction term. The total energy was converged up to a numerical accuracy of \(10^{-5}\)\ \(E_{\mathrm{H}}\). The D3 dispersion correction [114] was applied by the DFTD3 software (version 14-06-2016), which uses Becke-Johnson damping.
The periodic system contains one lithium ion, one hydroxide ion and between 19 to 23 solvating water molecules. We note that these systems are too small to provide a realistic description of general LiOH solutions and have been chosen for demonstration purposes in this Tutorial only. To construct an initial data set, at first the lithium ion, the hydroxide ion, and the water molecules were placed randomly in small boxes. The dimensions of the box were chosen to yield densities in the range between 0.97 and 1.02 kg dm\({}^{-3}\). Structures were checked and only accepted if atom-atom distances were larger than element-pair specific minimum distance thresholds. These thresholds are 1.2 A for H to H, 0.85 A for H to O and H to Li, and 2.3 A for O to O.
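A minimal Python sketch of the random placement procedure described above is given below; the element-pair thresholds are taken from the case study, while the neglect of periodic images and the uniform sampling are simplifying assumptions for illustration.

```python
import itertools
import numpy as np

# Minimal sketch: accept a random structure only if all interatomic
# distances exceed element-pair specific thresholds (in Angstrom, from
# the case study above); periodic images are neglected for brevity.
MIN_DIST = {("H", "H"): 1.2, ("H", "O"): 0.85, ("H", "Li"): 0.85,
            ("O", "O"): 2.3}

def min_dist(e1, e2):
    return MIN_DIST.get((e1, e2), MIN_DIST.get((e2, e1), 0.0))

def accept(elements, pos):
    for i, j in itertools.combinations(range(len(elements)), 2):
        if np.linalg.norm(pos[i] - pos[j]) < min_dist(elements[i], elements[j]):
            return False
    return True

def random_structure(elements, box_length, rng=None):
    """Draw random coordinates until the distance criteria are fulfilled."""
    rng = rng or np.random.default_rng()
    while True:
        pos = rng.random((len(elements), 3)) * box_length
        if accept(elements, pos):
            return pos
```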
Initially, the structures have been optimized. Then, ab initio MD simulations with different simulation temperatures ranging from 100 K to 500 K were set up. After 5000 equilibration time steps, the configurations were saved every 1000 time steps. In this way an initial data set of 630 reference structures was created. To this first data set an existing set of pure water structures reported in Ref. [115] was added.
## IV Preparations for the Training
### Preparing the Atomic Neural Networks
With an initial data set at hand, several further choices have to be made before the training process can begin. First of all, the architectures of the individual element-specific atomic neural networks have to be chosen. Often, several architectures differing in size are used for training, because a priori it is not clear which NN architecture, i.e., which number of hidden layers and neurons per hidden layer, will be best suited for representing the data set. Moreover, the non-linear activation functions have to be selected. This process will then generate a set of different HDNNPs, which are also needed for the iterative improvement of the data set by active learning (see Sec. V.2).
In principle, for each element a different NN architecture could be defined, but for simplicity it is common to use the same architecture for all elements, which typically consists of two - sometimes three - hidden layers, each of which contains between 15 and 50 neurons per layer. Therefore, compared to other applications in data science, the atomic NNs in HDNNPs are relatively small and can be quickly evaluated. The atomic NNs are usually standard, i.e., fully connected, feed-forward NNs, but also modifications such as direct links from the input to the output layer or omitted connections are possible.
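To make the typical dimensions tangible, the following PyTorch sketch builds one element-specific atomic NN of this kind; the layer sizes and the hyperbolic tangent activation are common choices for illustration, not a prescription of any particular code.

```python
import torch

# Minimal sketch of an element-specific atomic NN: a small fully
# connected feed-forward network mapping the ACSF vector of one atom
# to its atomic energy contribution (linear output neuron).
def make_atomic_nn(n_acsf, hidden=(25, 20, 15), activation=torch.nn.Tanh):
    layers, n_in = [], n_acsf
    for n_out in hidden:
        layers += [torch.nn.Linear(n_in, n_out), activation()]
        n_in = n_out
    layers.append(torch.nn.Linear(n_in, 1))
    return torch.nn.Sequential(*layers)

# One network per element; the total energy is the sum over all atoms.
atomic_nns = {el: make_atomic_nn(n_acsf=30) for el in ("H", "O", "Li")}
```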
When choosing the architecture, care should be taken that the number of weight and bias parameters defined in this way is smaller than the information content of the training set. Although the atomic NNs typically contain a few thousand parameters for each element, this is usually not a problem, if also the wealth of information included in the forces is used for training.
Another point to be considered regarding the size of the atomic NNs is the increasing flexibility as a consequence of the growing number of parameters in larger atomic NNs. If the NNs are too small, the flexibility is insufficient and the HDNNP will not be able to resolve all the details of the PES even if this information is present in the data set, which is called "underfitting". If the NNs are too large, "overfitting" will occur resulting in a poor prediction quality for structures not included in the training set. A more detailed description of the detection of overfitting and the selection of NN architectures is given in Sec. V.1.
### Atom-Centered Symmetry Functions
Next, the atom-centered symmetry functions have to be chosen. As explained in section II.1 the purpose of the descriptors is to provide a structural fingerprint of the environments inside the cutoff spheres while being invariant with respect to permutational, translational and rotational symmetry. In contrast to the atomic NN architectures, the ACSFs are usually not the same for
each element, because the typical bond distances between atoms of different elements depend on the specific chemical species.
First, the cutoff radius defining the atomic environments has to be selected. It represents a convergence parameter that has to be increased until all important atomic interactions are included. If the cutoff is too small, atoms outside the cutoff sphere still interact significantly with the central atom; since no information about the positions of these atoms is available in the ACSFs, this is equivalent to noise in the data and limits the accuracy that can be reached. If the cutoff is too large, the construction of the MLP becomes more demanding, since a large configuration space in the atomic environments has to be mapped by the reference electronic structure calculations. Consequently, for large cutoffs the effort to be invested in the reference calculations increases with respect to both the number of structures and the size of the reference systems. Moreover, the detailed representation of the atomic positions inside the cutoff spheres requires a larger number of descriptors, which increases the computational costs of the HDNNP energy and force prediction.
Several procedures to determine the size of the cutoff radius, which is required for a certain level of convergence of the forces acting on the central atom, have been suggested in the literature. Since atomic interactions beyond the cutoff introduce noise in the forces, the variance of the forces when freezing the atoms inside and moving atoms outside the cutoff sphere can be used for uncertainty quantification [47, 116]. Only if this uncertainty is small is the cutoff large enough. Convergence tests of the forces as a function of the environment have also been reported in Ref. [95]. An alternative approach, in which the individual interaction strength with each neighboring atom can be determined, is based on the Hessian [117], which provides the derivatives of the atomic forces with respect to all atomic positions in the system. This method is also applicable to crystalline environments, since small net forces may also be a consequence of compensating interactions in symmetric environments.
Interestingly, as discussed in Refs. [41] and [105], the analytic forces of second-generation MLPs have twice the spatial range of the cutoff spheres defining the atomic energy contributions. Since the forces in MLPs can be constructed as the analytic derivatives of the atomic energies (Eq. 1), it is therefore possible to use reference systems, which are large enough to learn the atomic energies but too small to provide forces of a condensed phase environment. This has been demonstrated recently for the benchmark case of metal-organic frameworks [105] and can, if carefully tested, reduce the costs of the electronic structure calculations for the training set in that only information from small systems is used, while the obtained potentials can predict correct forces beyond the training range.
Once the cutoff has been determined, which is typically between 5 and 8 A, the next step is the selection of the parameters of the ACSFs. Overall, there are two main strategies for the determination of these parameters. The first strategy aims for a balanced and systematic description of space in the atomic environments, similar to basis sets in electronic structure calculations. The second strategy is data-centered and aims to identify the best set of ACSFs for a given data set. This is motivated by the typically very inhomogeneous distribution of atoms in the cutoff spheres, which is governed by chemical bonds and thus does not require the capability to describe all imaginable geometries.
Since the reference data sets employed in the construction of the potential are not static but are continuously extended by active learning (see Sec. V.2), often a combination of both strategies is used. In early stages, unbiased and very general "default" sets of ACSFs (see Box 3) are generated following the first strategy, while for large and converged data sets the optimum set of ACSFs is finally selected according to the second strategy to yield the smallest number of descriptors providing the desired accuracy. These can be selected in different ways, such as optimizing the ACSF parameters by genetic algorithms [118], principal component analysis [119], or selecting functions from large pools of candidates based on CUR decomposition or the Pearson correlation in an automatic way [120].
Different ACSFs can have very different ranges of values. It is therefore common to rescale the ACSF values before using them as input for the atomic NNs [121]. This can be done by subtracting the mean value and adjusting the standard deviation of the symmetry functions of the reference data set. This procedure has to be repeated whenever new structures are included in the training set, because the composition of symmetry function values changes with the data set. In particular in early stages of the training set construction, there might also be individual ACSFs with a very small range of values. Such symmetry functions should be preliminarily eliminated from the set of functions, because a narrow range of values prevents proper scaling and can result in unstable training by learning numerical noise. Once the structural diversity in the data set increases, resulting in an extended range of values, these ACSFs should be included again.
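A minimal sketch of this rescaling, including the elimination of functions with a too narrow range of values, could look as follows (NumPy, illustrative only); note that the stored mean and standard deviation must be reused unchanged when the potential is applied.

```python
import numpy as np

# Minimal sketch: standardize each ACSF over the current reference set
# and drop functions whose range of values is too narrow for scaling.
def scale_acsfs(G, range_threshold=1e-6):
    """G: array (n_atoms_total, n_acsf) of ACSF values for one element."""
    keep = (G.max(axis=0) - G.min(axis=0)) > range_threshold
    G = G[:, keep]
    mean, std = G.mean(axis=0), G.std(axis=0)
    return (G - mean) / std, mean, std, keep
```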
For all these reasons, the set of ACSFs is not at all fixed in the training process but has to adapt to the increasing data set size for optimum performance, which can in principle be done in an automatic way. The final dimensionality of the ACSF vector is usually smaller than the formal dimensionality of the configuration space in the atomic cutoff spheres but strongly depends on the chemical composition of the system. Since the radial and angular symmetry functions contain all combinations of elements in the system, the number of descriptors strongly increases with the number of elements in the system.
**Box 3: Constructing a default set of ACSFs**
In early stages of HDNNP construction, often a
default set of ACSFs is used to provide a balanced description of all possible configurations in the atomic environments.
_Procedure for radial symmetry functions:_
* For each element combination in the data set determine the minimum interatomic distance \(R_{\min}\).
* For each element combination set \(\eta_{\max}=0\ \mathrm{a}_{0}^{-2}\) to define the radial function with the largest spatial extension.
* Determine for each element combination the value of \(\eta_{\min}\) such that the inflection point of the term \(g_{ij}=e^{-\eta_{\min}R_{ij}^{2}}\cdot f_{\mathrm{c}}(R_{ij})\) is located at \(R_{\min}\). If this inflection point is close to \(R_{\mathrm{c}}\) the cutoff should be increased.
* Select further \(\eta\) values between \(\eta_{\max}\) and \(\eta_{\min}\) for each element combination to obtain overall 5-6 radial functions with equidistant inflection points to yield a balanced radial resolution.
Since by construction the minimum interatomic distance has been taken into account, all functions should have a reasonable range of values enabling the scaling of the ACSFs. Radial functions of element combinations, which are not present in the system, are left out.
_Procedure for angular symmetry functions_
* Identify all possible element triplets.
* Set the exponent \(\eta=0\ \mathrm{a}_{0}^{-2}\) in Eq. 6 to generate angular functions with maximum spatial extension.
* For each triplet generate angular functions with \(\zeta=1,2,4,16\) for an approximately equidistant angular resolution.
* Combine each function with \(\lambda=1,-1\) to yield in total eight angular functions per element triplet.
* Check for each angular symmetry function the range of values and eliminate those functions with a too small range.
Optionally, a second set of contracted angular symmetry functions can be constructed using a larger value of \(\eta\) to increase the angular sensitivity close to the central atom.
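For illustration, a radial symmetry function of the type used in Box 3 can be evaluated as in the following sketch; the cosine-type cutoff function is the standard choice for HDNNPs and is assumed here, since its explicit form is not reproduced in this section.

```python
import numpy as np

# Minimal sketch of a radial ACSF for one central atom. R contains the
# distances to all neighbors of one element combination within R_c.
def f_cutoff(R, R_c):
    # Standard cosine cutoff (assumed form): smooth decay to zero at R_c.
    return np.where(R < R_c, 0.5 * (np.cos(np.pi * R / R_c) + 1.0), 0.0)

def radial_acsf(R, eta, R_c, R_s=0.0):
    """G = sum_j exp(-eta (R_ij - R_s)^2) * f_c(R_ij)."""
    return np.sum(np.exp(-eta * (R - R_s) ** 2) * f_cutoff(R, R_c))

# Example: the broadest function of a default set (eta = 0, R_c = 6 A).
G = radial_acsf(np.array([1.8, 2.4, 3.1]), eta=0.0, R_c=6.0)
```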
### Initial Weight Parameters and Data Preparation
Finally, while the number of weight parameters is defined by the architectures of the atomic NNs, a set of initial weight parameters has to be chosen as the starting point for the HDNNP training. Apart from random numbers, e.g., in the interval \([-1,1]\), with uniform or Gaussian distribution, several different methods for the selection of the starting weights have been proposed, such as the Nguyen-Widrow scheme [122], in which weights are chosen such that each node approximates a part of the target function, or the Xavier method [123], which aims to avoid values in the saturating region of the activation functions and is therefore especially suited for funnel-like architectures.
However, the choice of the initial weights a priori does not ensure that the predicted energies and forces are close to the respective target values. This can be achieved by an additional preconditioning step before the training (see Fig. 7). If the initial weights are chosen randomly, the predicted mean energies and their standard deviations often differ significantly from the respective values of the reference data, giving rise to a large error at the beginning of the training process. This error can be reduced by adjusting the initial weights such that the deviations between the mean energies of the HDNNP and the reference data as well as the differences in their standard deviations disappear. For this purpose, the mean HDNNP energy can be shifted by adjusting the bias weights of the output neuron, while the standard deviation can be controlled by additionally modifying the connecting weights between the neurons of the last hidden layer and the output layer.
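For a linear output neuron, this preconditioning amounts to a simple rescaling and shift of the output-layer parameters, as in the following sketch (illustrative; in an HDNNP the adjustment is applied per element and is only approximate, because the total energy is a sum of atomic contributions).

```python
import numpy as np

# Minimal sketch of the preconditioning of Fig. 7 for a linear output
# neuron E = w . h + b: match mean and standard deviation of the
# predicted energies to those of the reference data before training.
def precondition_output(w_out, b_out, E_pred, E_ref):
    scale = E_ref.std() / E_pred.std()
    w_new = scale * w_out                         # matches the std. deviation
    mean_scaled = scale * (E_pred.mean() - b_out) + b_out
    b_new = b_out + E_ref.mean() - mean_scaled    # matches the mean
    return w_new, b_new
```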
Further, some preprocessing is performed for the reference data. Generally, splitting into a training and a test set is done (see Sec. V.1), and often a third validation set is also used. The difference between the test and the validation set is that the test set is used to identify the potential with the minimum error for unknown structures. Since, however, the test set is thus involved in the selection of the potential, the latter is not completely independent of the test set. For this reason, the validation set might provide a more unbiased estimate for the generalization properties of the potential. However, as discussed in Sec. VI, this splitting is not a reliable measure for the transferability of a potential, since it is based on the assumption that the available reference set covers the entire configuration space reliably, which cannot be taken for granted.
Another preparation of the data set that is used if the range of output values is restricted by a hyperbolic tangent or sigmoid activation function in the output neuron is the scaling of the target energies to a unit interval, which in the application of the potential then has to be reversed. Moreover, depending on the choice of the reference method, the target total energies can have very large values, which is numerically inconvenient. Therefore, often the energies of the free atoms of all elements in vacuum are removed from the data set such that instead numerically much smaller binding energies are used in the training. The resulting offset can then be corrected by adding the corresponding free atom energies when applying the potential to yield total energies, which are numerically consistent with the reference method.
**Case study: Symmetry function values, architecture, and weight initialization for the LiOH-water system**
The symmetry function parameters of the LiOH-water system have been chosen following the recipe for generating a default set of ACSFs given in Box 3. As the values of the parameters \(\eta\) of the radial symmetry functions depend on the minimum distances between the atoms of the respective element pairs in the reference data set, the set of symmetry function values has been adjusted throughout the construction of the HDNNP. In Table 1 the final \(\eta\) parameters of the converged potential are given, while the cutoff has been set to \(R_{\mathrm{c}}=6\) A and a shift of \(R_{\mathrm{s}}=0\) A has been used. The parameters of the angular symmetry functions have been selected as described in Box 3. In addition to the set of angular symmetry functions derived following this procedure, a second set of angular symmetry functions with the same values for \(\zeta\) and \(\lambda\) but a value of \(\eta=0.025\)\ \(\mathrm{a}_{0}^{-2}\) has been employed to better describe the interaction of close atoms. Since there is only one lithium ion in the system and since the lithium ions in the periodic images are outside the cutoff radius, radial and angular ACSFs involving Li-Li interactions have been removed.
For training the HDNNPs of the LiOH system the RuNNer code [32; 41] has been used. Various atomic NN architectures containing two or three hidden layers with 10 to 30 neurons per layer have been tested. The number of nodes was chosen to obtain a funnel-like architecture, i.e., each hidden layer contains fewer neurons than the previous hidden layer. The highest accuracy was found employing an architecture containing 25 nodes in the first hidden layer, 20 nodes in the second hidden layer, and 15 nodes in the third hidden layer, resulting in 3251 weights per element. The values of these weights were initialized following a modified Xavier initialization [124].
## V Training
### Optimization of the Weight Parameters
The aim of the training process is to minimize the deviations between the HDNNP and the reference PES in the energetically relevant range. This is achieved by adjusting the values of the weight parameters using information from the reference data set. The optimization of the atomic NNs is a high-dimensional problem depending on thousands of weight and bias parameters and consequently it is impossible to find the global minimum in this parameter space. Still, there is a large number of roughly equivalent local minima of high quality, which represent the training data with low root mean squared errors (RMSE, see Box 4).
\begin{table}
\begin{tabular}{c c} \hline Element pair & \(\eta\) [\(\mathrm{a}_{0}^{-2}\)] \\ \hline H-H & 0.000, 0.006, 0.016, 0.038, 0.099 \\ H-O & 0.000, 0.007, 0.019, 0.051, 0.166 \\ H-Li & 0.000, 0.005, 0.012, 0.025, 0.052 \\ O-O & 0.000, 0.004, 0.008, 0.015, 0.027 \\ O-Li & 0.000, 0.005, 0.012, 0.024, 0.051 \\ \hline \end{tabular}
\end{table}
Table 1: \(\eta\) values of the radial symmetry functions of the HDNNP for LiOH in water.
Figure 7: Preconditioning of the neural network weights. In a first step, the mean energies \(\bar{E}\) and the standard deviations of the energies \(\sigma\) of the training data are determined for the HDNNP and the reference method (a). Then, the weight parameters are adjusted to match the respective values of the reference method to minimize the root mean squared error at the beginning of the training process (b).
The typical order of magnitude of the energy and force RMSEs of high-dimensional MLPs is about 1 meV/atom and 0.1 eV/A, respectively, but these values strongly depend on the diversity of atomic configurations, the quality of the data set and the energy and force ranges to be represented. The energy error is usually normalized per atom to make the RMSE size-consistent. For instance, a primitive unit cell of an fcc metal containing only one atom should have the same energy error as the conventional cubic unit cell containing four atoms, since the structures are identical.
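Box 4 referenced above is not reproduced in this section; for completeness, the size-consistent per-atom energy RMSE discussed here has the standard form (our notation, assuming \(N_{\text{struct}}\) reference structures):

\[\mathrm{RMSE}(E)=\sqrt{\frac{1}{N_{\text{struct}}}\sum_{i=1}^{N_{\text{struct}}}\left(\frac{E_{i,\text{Ref}}-E_{i,\text{HDNNP}}}{N_{\text{atoms},i}}\right)^{2}}\;.\]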
The training is done iteratively presenting energies and forces of the training data again and again for a predefined number of epochs (iterations), i.e., by supervised learning. However, the training process contains much more than merely the optimization of the parameter values. Instead, it also includes a continuous assessment of the quality of the potential (see Sec. VI), and the extension of the data set by active learning (see Sec. V.2).
Typical data sets of about 10,000 structures, which may contain roughly 100 atoms each, provide a wealth of information, i.e., 10,000 total energies and about 1,000,000 atomic force vectors corresponding to 3,000,000 force components, amounting in total to 3,010,000 pieces of information that can be exploited for the weight optimization. This is orders of magnitude larger than the number of parameters in HDNNPs, but this relation has to be treated with care, as the formal amount of data does not contain any information about the diversity of the data set and its suitability to cover the configuration space of interest. For instance, the energy and force RMSEs will be low although the overall shape of the PES is not reliably represented if only a small subspace is sampled and all structures are very similar. Therefore, validating the PES quality for diverse structures, which are representative for the intended applications, is essential, and RMSE values alone can be difficult to interpret, because RMSEs can be arbitrarily low when computed for homogeneous and strongly correlated data sets.
The course of the training process is shown schematically in Fig. 8. Starting with an initial set of weights (see Sec. IV.3), in each epoch the optimization algorithm loops over all structures in the training set in random order. There are different ways to update the weights, batch-wise for a group of energies or forces, which may even comprise the entire data set, or after the presentation of each individual piece of information in a point-by-point fashion corresponding to a batch-size of one.
Since in a point-by-point training strategy the parameters are updated once for each energy and for each force component, an epoch contains a large number of weight adjustments resulting in a rapid progress of the training in terms of epochs, while numerous updates also increase the computational costs of each of these epochs. In particular in case of large numbers of force components such a procedure can be very time-consuming.
To speed-up the training process, several measures can be taken. First, in adaptive training algorithms [125] an error threshold is defined for the energies and forces, and only the energies and forces exceeding this threshold are used for the training, while those, which are already accurately represented, are skipped. A common procedure is to define the energy and force error thresholds in terms of the respective RMSEs of the previous epoch such that the update criteria become tighter along with the improvement of the potential. As a consequence, in initial epochs when the RMSEs are still high only a few data points will be used and the overall shape of the PES will be learned quickly, while in later stages more information will be processed resulting in increasing computational costs of later epochs when the fine details of the PES are learned.
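A minimal sketch of such an adaptive selection criterion is given below; tying the threshold to the previous epoch's RMSE via a prefactor is one common variant, and the prefactor value is an illustrative assumption.

```python
import numpy as np

# Minimal sketch of adaptive data selection: only data points whose
# current error exceeds a threshold tied to the previous epoch's RMSE
# are used for weight updates in this epoch.
def select_for_update(errors, rmse_prev, factor=1.0):
    """errors: per-point |target - prediction| values of the current epoch."""
    return np.flatnonzero(np.abs(errors) > factor * rmse_prev)
```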
Another optional way of reducing the number of weight
updates is to use batches of energy and forces and to perform an update only every \(N^{\text{th}}\) energy or force component or even structure. In this approach, the individual errors and their gradients are accumulated, finally averaged, and used only for a single weight update per data group, which is particularly suited for parallelization. Such a parallelization is more difficult in case of constantly changing weight parameter values in the point-by-point approach in analogy to MD simulations, in which several MD steps cannot be parallelized due to the changing positions. Still, the latter update strategy is often more efficient, and reasonable potentials can be obtained after about 30-50 epochs, while in case of large batches that might even contain the entire training set thousands of epochs may be needed to reach a satisfactory accuracy of the potential.
The most basic loss function \(\Gamma_{\text{E}}\) to be minimized in the training for a set of \(N_{\text{batch}}\) energies is given by
\[\Gamma_{\text{E}}=\frac{1}{N_{\text{batch}}}\sum_{i=1}^{N_{\text{batch}}}\left(E_{i,\text{Ref}}-E_{i,\text{HDNNP}}\right)^{2}\;. \tag{12}\]
It should be noted that although the output values of the individual atomic NNs can be interpreted as atomic energies, these are just mathematical auxiliary quantities without a physical meaning. Hence, \(\Gamma_{\text{E}}\) is constructed using total binding energies only, without prior partitioning into individual atomic energies. For gradient-based optimization algorithms the value and the derivative of this loss function with respect to the HDNNP parameters have to be computed. The analytic derivatives are efficiently available by recursion through backpropagation [126]. Libraries for the construction of neural networks such as TensorFlow [127] or PyTorch [128] directly provide access to these gradients.
The loss function for the forces is defined in a similar way, but here the batch of \(N_{\text{batch}}\) force components \(F_{i}\) over all atoms of all structures is evaluated,
\[\Gamma_{\text{F}}=\frac{\beta}{N_{\text{batch}}}\sum_{i=1}^{N_{\text{batch}}}\left(F_{i,\text{Ref}}-F_{i,\text{HDNNP}}\right)^{2}\quad. \tag{13}\]
The constant prefactor \(\beta\) can be used to balance the relative impact of the energies and forces in a joint loss function \(\Gamma_{\text{E}}+\Gamma_{\text{F}}\), but for instance in point-by-point update strategies such a joint loss function is not required. Once the loop over all structures as well as their energies and forces has been completed and the respective updates of the weight parameters have been carried out, finally the new RMSE values of the energies and forces in the training as well as in the test set are computed and stored to assess the improvement of the potential. Then, the next epoch is started until the selected number of epochs has been completed and the final potential is obtained, which needs to be further validated (see Sec. VI). The weight updates are usually performed using gradient-based optimization algorithms, such as the global adaptive extended Kalman filter [129; 130; 131] (see Box 5), conjugate gradients [132], or the Levenberg-Marquardt algorithm [133]. Very simple algorithms, such as steepest descent, are usually not used.
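The following PyTorch sketch illustrates Eqs. 12 and 13 for a single structure using a generic gradient-based optimizer; the analytic forces are obtained by automatic differentiation. The `model` object is a placeholder for an HDNNP-like module, and the point-by-point Kalman filter update of Box 5 is not reproduced here.

```python
import torch

# Minimal sketch of a joint energy/force loss (cf. Eqs. 12 and 13) for
# one structure; `model` maps coordinates to the total energy (scalar).
def joint_loss(model, R, E_ref, F_ref, beta=1.0):
    R = R.detach().requires_grad_(True)     # (n_atoms, 3) coordinates
    E = model(R)                            # predicted total energy
    F = -torch.autograd.grad(E, R, create_graph=True)[0]  # analytic forces
    loss_E = (E_ref - E) ** 2
    loss_F = beta * torch.mean((F_ref - F) ** 2)
    return loss_E + loss_F                  # minimized by backpropagation
```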
Alternatively, for large data sets the computational performance can be improved using the method of stochastic gradient descent [134]. To optimize the convergence of this procedure, momentum can be added or the learning rate can be modified, leading to algorithms such as the Adaptive Gradient Algorithm [135], AdaDelta [136], and the popular Adam [137] optimizer.
Figure 8: Flowchart of the weight optimization process using energies and forces. In a point-by-point update strategy the weights are updated once per energy and per force component, while also batch-wise grouping of errors and gradients is possible. At the end of each epoch, the RMSEs of the entire training and also test sets are computed.
An important decision for the accuracy of the potential that has to be made in the training process is the architecture of the atomic NNs, which governs the flexibility of the HDNNP. While too small networks prevent an accurate representation of the PES, too many parameters can give rise to overfitting, which can substantially reduce the transferability of the potential for structures not included in the training set. The presence of underfitting or overfitting can be monitored by the early-stopping method. For this purpose, the available reference data set is split into a training set used for the optimization of the weight and bias parameters, and an independent test set including only structures unknown to the HDNNP. Then, the error of both data sets in the iterative training process is observed (see Fig. 9). If both, training and test set, do not reach low errors (Fig. 9a), the flexibility of the HDNNP is not large enough. If the flexibility of the HDNNP is increased, both errors decrease, resulting in a reasonable fit quality for both, known and unknown structures (Fig. 9b). For too large atomic neural networks, overfitting results in a close to perfect fit for the known training data, while artificial oscillations are introduced, which yield poor predictions for the test data (Fig. 9c). The reliability of the early-stopping method can be further improved by using cross-validation techniques and different splittings of the reference data, such that individual outliers are less relevant. Still, while useful for the determination of the required atomic NN architecture, the early-stopping method has to be used with great care as a tool to assess the overall quality of a potential, as will be discussed in Sec. VI.
Even with an optimal architecture, overfitting cannot be completely avoided. Still, it can be further reduced by the use of regularization methods such as ridge regression. As large values of the weights can be connected to overfitting, in ridge regression a term that penalizes large weight parameter values is added to the loss function. Another possibility to avoid overfitting is to use dropout [137]. In this method, nodes of the neural network are randomly dropped during training such that different thinned networks are trained. In this way, complex co-adaptations are avoided, in which layers would learn to correct mistakes of previous layers, leading to bad generalization properties of the network.
Finally, several points have to be considered when restarting the training process from parameters obtained in a previous optimization. Since these parameters have been determined for a well-defined set of structures and their associated ACSF values, any scaling factors possibly employed in the preprocessing of the ACSFs become part of the potential and need to be applied in the same way to newly added structures. If, for instance, the training set is increased before restarting the training process, these scaling factors change. Consequently, the scaled symmetry function values are modified, resulting in slightly different energies and forces even for the structures included in the original data set and even if no adjustments of the weights have been made. This can be avoided by restarting the training process using fixed scaling factors at the cost of slight changes in the effective range of the ACSF values. Moreover, some optimization algorithms, such as the Kalman filter, generate further information such as a covariance matrix in the fitting procedure, which must be stored and reused when restarting fits if a strict continuation of the training is intended (see Fig. 10).
**Box 5: Kalman Filter**
Originally, the Kalman filter has been developed to recursively find the minimum variance estimates of state variables of linear dynamic systems, which has been generalized in the extended Kalman filter to non-linear systems [129]. Feed-forward neural networks can be considered as such non-linear dynamic systems and the Kalman filter can be efficiently used to find optimal weight parameters while recursively looping through a set of reference data [125].
Generally, the update procedure is the same for all properties, which will be denoted here as \(V\). According to the scheme in Fig. 8, the Kalman filter is usually used with a batch size of one, i.e., a weight update is performed after each presented energy or force component.
Each update \(k\) follows the same procedure: First, the error is estimated as the absolute difference between the predicted target value \(V_{k}\) and its reference value \(V_{k}^{\text{Ref}}\) and normalized by the number of atoms in the current structure \(N_{\text{atoms}}\),

\[\nu_{k}=|V_{k}-V_{k}^{\text{Ref}}|N_{\text{atoms}}^{-1}\;. \tag{14}\]
In the following adaptive step this error is compared to an error threshold, which must be exceeded to continue with the update process.
Then, the Kalman gain matrix \(\mathbf{K}\) is computed using the Kalman covariance matrix of the previous step \(\mathbf{P}_{k-1}\), the current value of a forgetting factor \(\lambda\), the identity matrix \(\mathbf{I}\), and the Jacobian \(\mathbf{J}\) containing the derivatives of the loss function with respect to all weight parameters,
\[\mathbf{K}_{k}=\mathbf{P}_{k-1}\mathbf{J}_{k}\left[\lambda_{k-1}\mathbf{I}+ \left(\mathbf{J}_{k}\right)^{T}\mathbf{P}_{k-1}\mathbf{J}_{k}\right]^{-1}\;. \tag{15}\]
The covariance matrix is usually initialized as the identity matrix. During the training, it stores information about all updates. Its diagonal elements can be interpreted as the uncertainty of the current weight estimate. The gain matrix is then used to update the weight vector \(\mathbf{w}\) as
\[\mathbf{w}_{k}=\mathbf{w}_{k-1}+\mathbf{K}_{k}\nu_{k}\;. \tag{16}\]
In the following, the covariance matrix and \(\lambda\) are updated:
\[\mathbf{P}_{k}=\lambda_{k}^{-1}\left[\mathbf{I}-\mathbf{K}_{k}\left(\mathbf{J}_{k} \right)^{T}\right]\mathbf{P}_{k-1}\;. \tag{17}\]
\[\lambda_{k}=\lambda_{k-1}\lambda_{0}+1-\lambda_{0}\;. \tag{18}\]
\(\lambda\) here takes the role of keeping the information of previous updates in the current update. Its initial value \(\lambda_{0}\) is chosen close to unity and its value increases as the optimization progresses. As the value of \(\lambda\) increases, the amount of reference data points considered in each update also increases. This has the advantage of enabling larger changes at the beginning of the optimization process, thereby avoiding getting stuck in local minima. As the optimization proceeds, more information is used and the local minima can be found by taking smaller steps.

Figure 9: Illustration of overfitting. The predictive power for structures of the target surface not included in the training can be estimated by comparing the root mean squared errors (RMSE) of the training set and an independent test set. Panel a) shows a typical HDNNP for the case of “underfitting” due to the use of small and thus inflexible neural networks. Both, the training and the test set RMSEs converge to high values (left) and neither the training nor the test points are well represented (right). The fine details of the potential energy surface cannot be reproduced and only rough features can be learned. Panel b) displays the RMSEs obtained when using more flexible neural networks, which can represent all details present in the training set. Also the test points are reliably predicted if they are not too different from the training data and if the flexibility is kept as low as necessary. The case of too flexible neural networks is shown in panel c). Here, the RMSE of the training set is very low, and as can be seen on the right, all training points are closely matched although the rapid oscillations resulting from the high flexibility prevent accurate predictions for the test data. This “overfitting”, which can be detected by comparing the RMSEs of the training and the test set (“early-stopping method”), can be reduced by also including gradient information, i.e., the forces, of the training points.
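A direct transcription of the update equations of Box 5 into NumPy, for a single scalar target (one energy or one force component), might look as follows; the initialization of \(\mathbf{P}\) as the identity matrix follows the text, while all dimensions are illustrative.

```python
import numpy as np

# Minimal sketch of one Kalman filter weight update (Eqs. 14-18).
# w: weight vector (n,), P: covariance matrix (n, n), J: Jacobian as a
# column vector (n, 1), nu: scaled error of Eq. 14, lam/lam0: forgetting
# factor and its fixed base value.
def kalman_update(w, P, J, nu, lam, lam0):
    denom = lam + (J.T @ P @ J)             # scalar for a single target
    K = (P @ J) / denom                     # Kalman gain, Eq. 15
    w = w + (K * nu).ravel()                # weight update, Eq. 16
    P = (P - K @ (J.T @ P)) / lam           # covariance update, Eq. 17
    lam = lam * lam0 + 1.0 - lam0           # forgetting factor, Eq. 18
    return w, P, lam

n = 10
w, P = np.zeros(n), np.eye(n)               # typical initialization
```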
### Active Learning
Since the quality of the potential obtained in the training process critically depends on the available data set, an important task in the construction of MLPs is the generation of reference data sets covering the relevant configuration space as comprehensively as possible. While in conventional empirical potentials and classical force fields a certain transferability is ensured by the physically inspired functional form, the extrapolation capabilities of MLPs are very limited and all information about the topology of the PES must be learned from the reference data. Therefore, the reference structures must cover the part of the PES, which is energetically accessible in the intended simulations. For the data set, not only conditions such as temperature and pressure, but, depending on the intended application, also different structures such as various polymorphs of a crystal in the case of solids, or configurations occurring in classical and quantum simulations of liquids such as water [92, 138], need to be considered. Further, repulsive structures for close atomic encounters need to be learned, which can also be assisted by explicitly including two- and three-body terms [139, 140].
Since there is no hope to have at hand a complete set of all relevant structures before the training process, an automatic and unbiased approach to identify important atomic configurations is needed. Ideally, these structures should be determined taking the current training status of the MLP and the existing data into account. This approach is called active learning and is based on the concept of query by committee [141]. In this method, the high flexibility of MLPs, which is essential for their numerical accuracy but also the origin of failures for predictions far from the training geometries, is turned into an advantage by making use of the energy or force variance of predictions in an ensemble of MLPs when applied to unseen structures. To ensure that these structures are relevant for the intended applications of the MLP, they are often generated using simulation protocols and conditions, such as temperature and pressure, similar to those of the final production simulations. If this variance is on the order of the RMSEs, the prediction can be considered reliable and the unseen structure is sufficiently close to a known training point. If, however, the variance is much larger, the unseen structure is far away from the training data and should be added to improve the potential in this region. In this way, the data set and the quality of the MLP can be iteratively improved step by step until a reliable potential has been generated. As a side note we mention that the use of a committee is not necessary in all types of MLPs, as, for instance, in Gaussian process regression a single instance of the regressor can provide an estimate of the variance.

The general procedure of active learning, which has become a standard in the development of modern MLPs [72, 73, 94, 142, 143, 144, 145], is shown in Fig. 11. First, an initial data set is generated (see Sec. III.3) and an ensemble of MLPs is trained, as represented by two HDNNPs in Fig. 11a. All members of the ensemble should have approximately the same quality in representing the available training data, as measured, e.g., by the energy and force RMSEs. Then, one of these HDNNPs is chosen to generate a large number of trial structures, e.g., by the simulation method of interest, and the second HDNNP is used to recompute the energies and forces for these structures. Alternatively, an ensemble of HDNNPs can be used to predict these properties on-the-fly during the simulation. Then, the variance in the predictions is investigated and structures with large deviations are selected for additional reference electronic structure calculations. These new data points are then added to the extended data set and used to train the next generation of MLPs. This procedure is repeated for several cycles until the variance of all trial structures remains close to the RMSEs. The HDNNPs trained on this final data set are then ready for the production simulations. As can be seen in Fig. 11b, adding more and more data points reduces the gray-shaded regions of high uncertainty in the PES until a reliable representation of the PES has been obtained.
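The selection step at the heart of query by committee can be written compactly as in the sketch below; the threshold, typically tied to the RMSE as described above, is a free parameter here.

```python
import numpy as np

# Minimal sketch of query by committee: structures on which an ensemble
# of MLPs disagrees by more than a threshold are flagged for additional
# reference electronic structure calculations.
def select_by_committee(predictions, threshold):
    """predictions: array (n_models, n_structures) of ensemble predictions."""
    disagreement = predictions.std(axis=0)  # committee disagreement
    return np.flatnonzero(disagreement > threshold)
```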
Several comments should be made at this point. While in particular for Gaussian approximation potentials the retraining of MLPs on-the-fly during MD simulations has been suggested [146], this procedure is usually not applied in case of HDNNPs. Instead, for neural network potentials often a few hundred new structures are first identified before a new generation of potentials is trained. Before this training step, new structures should be carefully investigated, since in particular in early stages of active learning unphysical structures may be suggested by poor preliminary potentials. Such structures complicate the training process while they are irrelevant for the final potential. Moreover, including similar structures should be avoided, which can be achieved, e.g., by farthest point sampling [147].

Figure 10: Restarting the training process with the Kalman filter [125]. The violet curve shows a typical decrease of the energy RMSE in a training process consisting of 20 epochs. If, however, the training is stopped after 10 epochs and shall be restarted, there are two options. If only the weight parameters of epoch 10 are used in the restart, the further progress of the training differs from the violet reference curve, since the covariance matrix of the Kalman filter is initialized again from scratch. If, however, this Kalman matrix is stored and used in the restart along with the weights, the obtained RMSE values are indistinguishable from the continuous fit of 20 epochs.
In general, monitoring the variance of the atomic forces should be preferred for the selection of new structures, since total energies are difficult to interpret in particular for large systems. The reason is the tendency for error compensation among the individual atomic energies, which can lead to apparently reliable energy predictions while in fact reducing transferability. Moreover, the investigation of atomic forces allows the identification of specific atomic environments, which are not well represented in the training set. Transferring the respective atoms from complex systems to smaller structures for efficient reference calculations can be challenging, as a substantial part of the environment has to be included. For some systems the use of molecular fragments is a viable path to reduce the system size while keeping the relevant parts of the atomic environments intact [95; 104; 105; 97]. Active learning and farthest point sampling can also be used to select the minimum number of structures from an already existing data set needed to achieve a certain accuracy of the potential [145].
Figure 11: Iterative improvement of HDNNPs by active learning. Panel a) shows a flowchart of the self-consistent determination of the required training data set. Starting from an initial data set, several HDNNPs are trained. One of these HDNNPs is then used to generate a large pool of structures. These structures are then recomputed by the remaining HDNNPs. If the variance of predicted properties, such as energies or forces, in the ensemble is too large for some structures (framed in red), these are selected for additional reference calculations to increase the data set. This extended data set is then used for training in the next cycle and so forth, until no poorly predicted structures are found and a converged potential has been obtained. In panel b) this procedure is shown schematically for an unknown target potential energy surface (violet line). The symbols represent the available training points. Close to these training points HDNNP 1 and HDNNP 2 predict very similar energies, while the regions with high uncertainty (grey) decrease with increasing data set until the final HDNNPs represent the potential energy surface in the entire region of interest.
The convergence of the data set in the active learning process can also be investigated by plotting histograms of the energy distributions in the training set. Fig. 12a shows a balanced distribution of training structures starting from the smallest energy \(E_{\mathrm{min}}\), which is often the optimized global ground state structure of the system, and the highest energy of interest \(E_{\mathrm{max}}\). The possible presence of energy gaps, however, should be carefully investigated (Fig. 12b). Such gaps can arise from different chemical compositions of structures in the data set, which is in general no problem, but they can also point to missing parts of molecular trajectories. If, for example, a reaction from \(A\) to \(B\) shall be studied, a continuous representation of all intermediate structures and the respective energies in the data set is mandatory.
Another common procedure to investigate the completeness of the data set is the use of learning curves (see Fig. 13). Here, the RMSEs of the training set and the test set are plotted as a function of the training set size. Small training sets can be accurately learned while they are not representative for the entire configuration space. Therefore, the RMSE of an independent test set is very large. If the training set is increased, the learning task is more difficult and the training error increases, while the overall quality of the MLP improves such that the RMSE of the test set decreases. For an ideal, infinitely large data set, training and test set RMSEs converge to very similar values. However, the usefulness of learning curves strongly depends on the coverage of configuration space. If certain structures are completely missing, they are absent in both, the training and the test set, and consequently no true convergence can be achieved.
Figure 12: Possible shapes of energy histograms of the reference data (schematic). Panel a) shows a histogram with balanced energy distribution in the energy range of interest. The lowest energy \(E_{\mathrm{min}}\) is the energy of the optimized system in its global minimum configuration. \(E_{\mathrm{max}}\) is the highest energy in the data set, which should be larger than the highest energy of interest. Panel b) shows an incomplete distribution of reference structures. Such isolated groups of structures can point to disconnected parts of trajectories in configuration space. These disconnected regions must be filled in the active learning process if transitions between these structure types are relevant for the intended simulations. The energies are normalized per atom to avoid offsets due to possible different chemical compositions.

Figure 13: Schematic learning curve. Small training sets can be learned with high accuracy, but the resulting HDNNP has poor generalization capabilities and provides high errors for test structures not included in the training set. If the number of training structures is increased, the training process is more challenging resulting in an increased training error, while the test set error decreases due to the overall improving potential. For very large training sets, the configuration space is well represented by the training data and the root mean squared errors (RMSE) of the training and test sets are very similar.

Several variants of active learning have been proposed to reduce the computational effort of the reference calculations. In \(\Delta\)-Learning [148; 149; 150; 151], first a baseline potential is constructed. This baseline potential should be cost-effective to evaluate and can be, e.g., a simple empirical potential, a moderately expensive electronic structure method, or another MLP trained to a large data set obtained from such a method. This baseline potential is then used to represent the rough overall topology of the PES. In a second step, an MLP is trained to represent the energy difference between the baseline potential and a very accurate high-level electronic structure method. Since only a small energy range needs to be learned for this correction, its error is easier to control. Moreover, the hope is that a smaller data set may be required compared to a conventional MLP based directly on high-level data. Still, as the corrugation of the PES is usually more or less independent of the choice of the reference method, the possibility for the reduction of the data set size is limited. An interesting use of \(\Delta\)-Learning is the combination of a cost-effective method providing energies and analytic forces, and an expensive method for which no forces are available [92]. In this case, at least the baseline potential benefits from the availability of additional force information.
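In our notation, the \(\Delta\)-Learning ansatz described above can be summarized as

\[E_{\text{total}}(\mathbf{R})=E_{\text{baseline}}(\mathbf{R})+\Delta E_{\text{ML}}(\mathbf{R}),\qquad \Delta E_{\text{ML}}(\mathbf{R})\approx E_{\text{high-level}}(\mathbf{R})-E_{\text{baseline}}(\mathbf{R})\;,\]

such that only the difference surface, with its much smaller energy range, has to be represented by the machine learning model.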
Finally, another approach is transfer learning [152]. Here, first an MLP is trained to a large and computationally affordable reference data set. A part of the training data is then recalculated at a higher level of theory and the potential is retrained to this data set. Some parameters of the MLP are fixed and do not change during the retraining, while others can adjust to represent the modified shape of the PES at reduced costs.
**Case study: Training of the LiOH-water system**
The initial reference data set of the LiOH system was iteratively expanded following the active learning scheme. Six cycles of active learning were conducted using the tool RuNNerActiveLearn [101]. In Fig. 14 histograms of the reference data set of each cycle are shown.
The progress of the active learning can also be monitored by evaluating the RMSE values after each completed cycle of adding new structures and retraining the potential. The resulting RMSE values are shown in Fig. 15. The obtained curve represents a learning curve, but there are notable differences in the shape of the test set RMSE values compared to the idealized learning curve shown in Fig. 13, which would be found when extracting data from a close-to-complete data set that is never available in any realistic scenario for complex systems. In the real case of Fig. 15, the data set increases from cycle to cycle and new parts of configuration space are explored, which thus increase the complexity of the training and the test set in the same way. Hence, the test points are described with comparable accuracy as the training points, illustrating the need for a careful use of learning curves in case of incomplete data sets. If at all, learning curves are best used for an already large and converged data set, starting from a small subset of data which is then step-by-step increased; they are not a very useful tool during active learning cycles.
Overall, the most accurate HDNNP obtained for the final LiOH data set has energy RMSEs of 1.204 \(\mathrm{meV\,atom^{-1}}\) for the training and 1.682 \(\mathrm{meV\,atom^{-1}}\) for the test data set. The force component RMSEs are 0.069272 \(\mathrm{eV\,a_{0}^{-1}}\) for the training and 0.069355 \(\mathrm{eV\,a_{0}^{-1}}\) for the test data set.
### Beyond Short-Ranged Potentials
So far, we have discussed all the steps of the construction of MLPs for the example of second-generation HDNNPs. Most of these steps are general and equally apply to third- and fourth-generation HDNNPs. However, the total energy expression of third- and fourth-generation HDNNPs consists of the electrostatic and the short-range parts (Eq. 7), which have to be trained sequentially following the flowchart in Fig. 16.
First, the energies, forces and atomic partial charges are determined in reference electronic structure calculations. Since atomic charges are not quantum mechanical observables, many different partitioning schemes can be used, such as Hirshfeld [153], Bader [154] or density-derived electrostatic and chemical (DDEC) charges [155]. Any numerical uncertainty resulting from this choice can be compensated inside the atomic environments by the short-range atomic energies. Then, the charges are learned, which is done either by using a second set of atomic NNs (third-generation HDNNPs) or by training environment-dependent electronegativities (fourth-generation HDNNPs) yielding these charges in a charge equilibration process. Once an expression for the charges has been learned, the electrostatic energies and forces are computed and removed from the total reference energies and forces to determine the reference data for the short-range training. This sequential procedure has two reasons: first, a functional relation between the structure-dependent charges and the atomic positions needs to be available to compute the electrostatic forces, which contain the derivatives of the charges with respect to the atomic coordinates [38]. Such a relation cannot be obtained from the reference electronic structure calculations. Second, since all those energy contributions, which are not electrostatic, are combined in the short-range part, there is no double counting of electrostatics by construction. Finally, the remaining short-range energies and forces are trained to yield the HDNNP consisting of the electrostatic and the short-range part.

Figure 14: Histograms of the lithium hydroxide data set at each cycle of the active learning process. The full data set also contains pure water structures, which are not included in these histograms for clarity. The initial data set has been generated by ab initio MD (AIMD).

Figure 15: Change of the RMSE values during the active learning cycles. For consistency the same settings and architectures of HDNNPs have been used in all potentials. The radial symmetry function \(\eta\) values are adjusted to the current data set.
The Coulomb potential has a singularity at short interatomic distances, which can give rise to an increased energy range of the short-range part that is more difficult to learn than the original reference energies. Therefore, the Coulomb interaction is usually screened to zero inside a screening radius \(R_{\text{screen}}\), which must be smaller than the cutoff radius of the ACSFs, as illustrated in Fig. 17. The screening function [91] (Fig. 17a) is given by
\[f_{\text{screen}}=\begin{cases}\frac{1}{2}\Big{[}1-\cos\Big{(}\frac{\pi R_{ ij}}{R_{\text{screen}}}\Big{)}\Big{]}&\text{for }R_{ij}\leq R_{\text{screen}}\\ 1&\text{for }R_{ij}>R_{\text{screen}}.\end{cases} \tag{19}\]
When multiplied by this screening function, the Coulomb potential decays smoothly to zero for small distances \(R_{ij}\) (Fig. 17b). This screened Coulomb potential is then removed from the total energy reference curve before training the short-range part (Fig. 17c). As can be seen for the example of an interatomic distance of about 1.2 A, removing the screened Coulomb potential results in a much smaller energy range to be learned by the short-range part compared to the case of removing the unmodified Coulomb potential.
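Eq. 19 and the resulting screened pair interaction can be implemented directly, as in the following sketch (illustrative; consistent units and \(R_{ij}>0\) are assumed):

```python
import numpy as np

# Minimal sketch of the screening of Eq. 19 applied to the Coulomb
# interaction of a charge pair; the screened potential decays smoothly
# to zero below R_screen.
def f_screen(R, R_screen):
    return np.where(R <= R_screen,
                    0.5 * (1.0 - np.cos(np.pi * R / R_screen)),
                    1.0)

def screened_coulomb(q_i, q_j, R, R_screen):
    return f_screen(R, R_screen) * q_i * q_j / R
```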
## VI Validation
Due to the absence of physically restricted functional forms in MLPs, validation is of central importance not only for the final potentials but also during the training process and during the extension of the data set by active learning, as all these steps cannot be strictly separated. The validation is a multistep process, which is illustrated schematically in Fig. 18. Starting from the available reference data set, a set of preliminary HDNNPs is constructed. For this purpose, the data set is split randomly into a training and a test set, and early stopping is used to generate potentials with low test set RMSEs, indicating good generalization of the potentials to structures not included in the training set. As in the case of the learning curves discussed above, the early-stopping method can provide information about the density and completeness of the data in the part of configuration space that is covered by the reference data, but no assessment of the quality for other types of structures is possible.
Figure 16: Training of HDNNPs including long-range electrostatic interactions. First, the reference energies, forces and atomic charges are obtained from electronic structure calculations. Then, the charges are trained and the HDNNP electrostatic energies and forces are computed. After applying the screening (Eq. 19), these are removed from the reference energies and forces to obtain the target energies and forces for the training of the short-range part. Once the short-range part has also been learned, the total HDNNP is given as the sum of the short-range and electrostatic energies and forces (Eq. 7).
For all data in the training and the test sets, further analyses need to be done, since the RMSEs of the energies and forces provide only averaged information. Therefore, outliers and individual energies or forces, which are difficult to learn, might remain undetected when inspecting RMSE values only. Fig. 19 shows the energy correlation plot relating the energies predicted by the HDNNP to the reference energies. For reasonable potentials all points should be located close to the line of perfect correlation. Such plots should be routinely investigated for the energies and forces in the training as well as in the test set for all generated HDNNPs. Outliers in these plots can often be related to problems in the underlying reference data, such as failed electronic convergence.
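Such a parity check is easy to automate. The sketch below is a hypothetical helper, not taken from any published code: it plots HDNNP versus reference binding energies and returns the indices of points deviating by more than a chosen multiple of the RMSE; the factor of 4 is an arbitrary illustrative threshold.

```python
import numpy as np
import matplotlib.pyplot as plt

def energy_correlation_plot(e_ref, e_nnp, n_rmse=4.0):
    """Parity plot of HDNNP vs. reference binding energies; flags outliers
    whose absolute error exceeds n_rmse times the RMSE."""
    e_ref, e_nnp = np.asarray(e_ref), np.asarray(e_nnp)
    err = e_nnp - e_ref
    rmse = np.sqrt(np.mean(err ** 2))
    bad = np.abs(err) > n_rmse * rmse
    plt.scatter(e_ref[~bad], e_nnp[~bad], s=4)
    plt.scatter(e_ref[bad], e_nnp[bad], s=10, color="red")
    lim = [e_ref.min(), e_ref.max()]
    plt.plot(lim, lim, "k--")  # line of perfect correlation
    plt.xlabel("reference binding energy")
    plt.ylabel("HDNNP binding energy")
    return np.flatnonzero(bad)  # indices of structures to inspect
```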
A closer investigation of outliers is possible when plotting the correlation of the errors obtained with two different potentials (see Fig. 20). Points which have a low error in one of the two or even in both potentials are acceptable, since the representation quality of individual points may differ from HDNNP to HDNNP. If, however, a point has large errors in both potentials, this is an indication of contradictory data in the training set. Such contradictions may arise, e.g., from inconsistent k-point sets (see Sec. III.1) resulting in the assignment of different energies or forces to very similar structural features. Such points cannot be learned and can be identified in error correlation plots for closer examination. It is important to note in this context that even a small number of problematic data points can result in very poor HDNNPs, because they can have a strong impact on the weight optimization and thus affect the quality of the overall PES.
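The same idea extends to the error-correlation analysis. A minimal sketch follows, with an arbitrary error threshold chosen only for illustration:

```python
import numpy as np

def contradictory_data(e_ref, e_nnp_a, e_nnp_b, threshold=0.01):
    """Indices of structures with a large energy error in *both* potentials,
    i.e., candidates for contradictory reference data that cannot be learned."""
    err_a = np.abs(np.asarray(e_nnp_a) - np.asarray(e_ref))
    err_b = np.abs(np.asarray(e_nnp_b) - np.asarray(e_ref))
    return np.flatnonzero((err_a > threshold) & (err_b > threshold))
```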
Figure 17: Screening of electrostatic interactions [91]. Panel a) shows the screening function \(f_{\rm screen}\) (Eq. 19), which smoothly decays from one to zero inside the screening radius \(R_{\rm screen}=6\) Å. Panel b) shows the Coulomb energy of two atoms with opposite charges as well as the screened Coulomb interaction, which approaches zero for short interatomic distances. Inside the cutoff radius of the ACSFs the missing part of the Coulomb energy can be represented by the short-range atomic energies. Panel c) shows a reference pair potential \(E_{\rm ref}\), as well as the energy curves obtained when removing the unscreened Coulomb energy (\(E_{\rm ref}-E_{\rm Coulomb}\)) or the screened Coulomb energy (\(E_{\rm ref}-E_{\rm screened}\)). The curve obtained using the screened Coulomb potential covers a much smaller energy range and thus can be represented more accurately by the short-range energy.

Figure 18: Multistep validation of machine learning potentials.

Figure 19: Energy correlation plots of the LiOH-water data set. Panel a) shows the signed energy errors and the density of data points as a function of the target energy of the reference method. Panel b) shows the correlation of the HDNNP energies and the energies of the reference method. To avoid energy offsets, binding energies are used for this purpose instead of total energies, which would strongly depend on the chemical composition. For a perfect potential, all points should be aligned along the diagonal line with a slope of 45\({}^{\circ}\). Correlation plots should be generated separately for the training and the test data set, for the energies as well as the force components.

Figure 20: Error correlation for different HDNNPs fitted to the LiOH data set. Panels a) and b) show the correlation between the training set errors of total energies and atomic forces with respect to the reference data obtained with two different HDNNPs trained on the same data set. Data points with high errors in both potentials indicate possibly contradictory data, which cannot be learned due to insufficient accuracy of the reference data. Such problems might arise from underconverged settings in the reference calculations (cf. Fig. 5) or incomplete electronic self-consistency.

In the next step, a large number of configurations is generated, ideally by the simulation technique that will be used in the production calculations. These configurations should be searched for extrapolations, i.e., the occurrence of ACSF values outside the range of function values defined by the training set (see Fig. 21). This needs to be done separately for each ACSF, which is straightforward and computationally cost-effective, since information about the minimum and maximum values is typically available for each function from the preparatory scaling of the ACSFs before training. Since MLPs are not reliable in case of extrapolation, such situations must be avoided in the production simulations. This can be achieved by systematically searching for extrapolating structures to extend the reference data, with the aim of covering an increasing part of configuration space. It should be noted, however, that the absence of extrapolation is not a sufficient criterion for a reliable potential, as structural outliers can drastically extend the range of symmetry function values without a suitable coverage of configuration space. In such a situation, extrapolation in the descriptor space cannot be automatically detected. Another form of extrapolation refers to the potential energies and forces in the system: predictions of energies or forces outside the range of values in the training set should be carefully checked in production simulations, as they might be less reliable.
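In practice, the descriptor-range check can be scripted directly from the stored scaling information. The following is a minimal sketch, assuming the ACSF values of the new configurations are available as a NumPy array together with the training-set minima and maxima; the function name is illustrative.

```python
import numpy as np

def find_extrapolating(G, G_min, G_max):
    """Return indices of structures with at least one ACSF value outside the
    training range. G has shape (n_structures, n_atoms, n_acsf); G_min and
    G_max have shape (n_acsf,) from the preparatory descriptor scaling."""
    outside = (G < G_min) | (G > G_max)   # broadcasts over structures and atoms
    return np.flatnonzero(outside.any(axis=(1, 2)))
```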
In principle, extrapolating structures can also be found in the active learning process, which is the most general approach, since deviations in the energy and force predictions of different HDNNPs are very likely in this case. However, reasonable predictions even in the case of extrapolation cannot be excluded, and in such situations the elimination of extrapolating structures by active learning can be difficult. Once extrapolation occurs, simulations should be stopped, and moderately extrapolating structures should be included, while structures exhibiting very different descriptor values should be discarded, since very unphysical structures occurring in the later stages of extrapolating trajectories should not enter the reference data set. As a simple check, the interatomic distances should be computed for each newly added structure, and structures containing too short bonds or other unwanted structural features should be excluded.
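A corresponding distance-based sanity check might look as follows. This sketch applies to non-periodic structures (periodic systems would require the minimum-image convention), and the 0.7 Å threshold is an assumed placeholder that should be replaced by element-pair-specific values:

```python
import numpy as np

def has_too_short_bonds(positions, d_min=0.7):
    """True if any interatomic distance in the structure is below d_min (Å)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(positions), k=1)  # unique atom pairs
    return bool(np.any(dist[i, j] < d_min))
```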
Much more difficult than the detection of extrapolation is the identification of regions in configuration space that are not sufficiently sampled, i.e., "holes" in the multidimensional data set. Examples of points in such holes, which formally do not fulfill the criterion of extrapolation, are shown in Fig. 21. These points can be identified by active learning as described above, which is thus a central component of potential validation. Only if the active learning process has been completed without finding further structures with substantial uncertainty can a reasonable transferability of the HDNNP be expected.
The last and most important quality check of a data set is the performance of the fitted HDNNP in applications. Whenever possible, direct comparisons with data obtained with the reference electronic structure method should be made, including, e.g., equilibrium geometries and lattice parameters, vibrational frequencies and radial distribution functions. Ideally, independently trained HDNNPs that have passed the full hierarchy of validation steps should also be employed to check the robustness of the simulation results with respect to the specific parameterization.
## VII Conclusions
Much progress has been made in the past two decades in the construction of MLPs for atomistic simulations of large molecular and condensed systems. These developments do not only concern the general methodical framework, such as the adaption and use of modern machine learning algorithms and the derivation of suitable descriptors, but also the applicability to more and more complex systems. At the present time, the increasing availability of methods and their implementations in easily accessible software packages has substantially lowered the barrier to the parameterization and use of MLPs. Still, this availability also bears some risks as the quality of MLPs is much more difficult to assess than the performance of conventional empirical potentials and force fields.
While MLPs in principle can reach a very high accuracy, which is often indistinguishable from the underlying electronic structure method, validation takes a central role as an excellent performance for some atomic configurations may go along with dramatic failures for other geometries. The use of "canned" potentials should therefore only be recommended if detailed information about the applicability of a potential for a specific system is available. Unfortunately, no community standards have been established for such information yet.
Figure 21: Illustration of poorly represented regions in a two-dimensional configuration space spanned by coordinates \(G_{1}\) and \(G_{2}\). The blue points represent the available training data covering the grey-shaded region. Any structure outside the minimum and maximum values of \(G_{1}\) and \(G_{2}\) is formally extrapolating. The three green structures, however, are inside the intervals \([G_{1}^{\min},G_{1}^{\max}]\) and \([G_{2}^{\min},G_{2}^{\max}]\) and therefore do not fulfill the criterion of extrapolation. Still, these structures are located in poorly represented regions and will be unreliable. Such structures can be identified by active learning.

An important take-home message of this Tutorial is the insight that the parameterization, the generation of the reference data set and the validation of a potential are equally important steps, which cannot be separated from each other. An important example is active learning, in which preliminary, i.e., not yet perfect, potentials are used to identify structures missing in the training set. Hence, active learning also represents an important opportunity for validation on-the-fly. An entire hierarchy of validation steps is available, with active learning being the most general approach, which in principle also can cover the detection of overfitting and extrapolation. A lot has been achieved in (semi)automatic active learning schemes and the uncertainty quantification of MLPs during application, but the state of black-box methods has not yet been reached. Moreover, even some apparently simple tests, such as energy and force correlation plots, the analysis of emerging structures and the detailed inspection of outliers, which can often be traced back to some problem in the data set, can contribute substantially to the improvement of potentials. Although this Tutorial takes the perspective of high-dimensional neural network potentials, almost all discussed aspects are generally valid for a wide range of MLPs currently in use.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy--EXC 2033-390677874--RESOLV. Funding by the DFG (priority program SPP 2363, project number 495842446) and discussions with Marco Eckhoff are gratefully acknowledged.
|
2305.16325 | Graph Neural Network Interatomic Potential Ensembles with Calibrated
Aleatoric and Epistemic Uncertainty on Energy and Forces | Inexpensive machine learning potentials are increasingly being used to speed
up structural optimization and molecular dynamics simulations of materials by
iteratively predicting and applying interatomic forces. In these settings, it
is crucial to detect when predictions are unreliable to avoid wrong or
misleading results. Here, we present a complete framework for training and
recalibrating graph neural network ensemble models to produce accurate
predictions of energy and forces with calibrated uncertainty estimates. The
proposed method considers both epistemic and aleatoric uncertainty and the
total uncertainties are recalibrated post hoc using a nonlinear scaling
function to achieve good calibration on previously unseen data, without loss of
predictive accuracy. The method is demonstrated and evaluated on two
challenging, publicly available datasets, ANI-1x (Smith et al.) and
Transition1x (Schreiner et al.), both containing diverse conformations far from
equilibrium. A detailed analysis of the predictive performance and uncertainty
calibration is provided. In all experiments, the proposed method achieved low
prediction error and good uncertainty calibration, with predicted uncertainty
correlating with expected error, on energy and forces. To the best of our
knowledge, the method presented in this paper is the first to consider a
complete framework for obtaining calibrated epistemic and aleatoric uncertainty
predictions on both energy and forces in ML potentials. | Jonas Busk, Mikkel N. Schmidt, Ole Winther, Tejs Vegge, Peter Bjørn Jørgensen | 2023-05-10T13:03:06Z | http://arxiv.org/abs/2305.16325v2 | Graph Neural Network Interatomic Potential Ensembles with Calibrated Aleatoric and Epistemic Uncertainty on Energy and Forces
###### Abstract
Inexpensive machine learning potentials are increasingly being used to speed up structural optimization and molecular dynamics simulations of materials by iteratively predicting and applying interatomic forces. In these settings, it is crucial to detect when predictions are unreliable to avoid wrong or misleading results. Here, we present a complete framework for training and recalibrating graph neural network ensemble models to produce accurate predictions of energy and forces with calibrated uncertainty estimates. The proposed method considers both epistemic and aleatoric uncertainty and the total uncertainties are recalibrated post hoc using a nonlinear scaling function to achieve good calibration on previously unseen data, without loss of predictive accuracy. The method is demonstrated and evaluated on two challenging, publicly available datasets, ANI-1x (Smith _et al._1) and Transition1x (Schreiner _et al._2), both containing diverse conformations far from equilibrium. A detailed analysis of the predictive performance and uncertainty calibration is provided. In all experiments, the proposed method achieved low prediction error and good uncertainty calibration, with predicted uncertainty correlating with expected error, on energy and forces. To the best of our knowledge, the method presented in this paper is the first to consider a complete framework for obtaining calibrated epistemic and aleatoric uncertainty predictions on both energy and forces in ML potentials.
## 1 Introduction
Accurate and computationally inexpensive machine learning (ML) potentials are increasingly being used to accelerate atomic structure optimization and molecular dynamics simulations by iteratively predicting and applying interatomic energies and forces [3, 4]. This development has the potential to revolutionise disciplines of computational chemistry such as predicting molecular properties and structures, predicting reaction mechanisms and networks, as well as discovering new materials, e.g., for energy conversion and storage of renewable energy. In these settings, it is crucial to assess the confidence of predictions and to detect when predictions are unreliable, to avoid wrong or misleading results by either ending the simulation early or enabling recovery by falling back to higher fidelity but also more computationally expensive methods, such as density functional theory (DFT) [5]. Uncertainty quantification (UQ) methods can enable assessment of the confidence in predictions and thus make applications of ML potentials more robust and reliable. To ensure uncertainty estimates are useful and informative, they need to be calibrated, i.e., there should be an agreement between the predictive distribution and the empirical distribution, especially if the uncertainty is expected to indicate the range of plausible values of the predictions. Good calibration thus ensures that ML-based uncertainty estimates are interpretable and actionable, and enables the selection of a suitable confidence threshold for a given application on the original unit scale of the quantity of interest.
A widely used UQ method for ML potentials is to apply an ensemble of models and use the agreement of the predictions of the ensemble members as a measure of confidence in the prediction [6]. This approach relies on the observation that randomly initialised models will often provide increasingly different predictions further away from the training data distribution. From a Bayesian perspective, if the individual ensemble members are seen as draws from the posterior distribution, the variance between the predictions of the ensemble members is a measure of the posterior uncertainty [7, 8, 9]. This type of uncertainty in the model parameters is often referred to as model uncertainty or _epistemic uncertainty_. Other popular approaches for estimating the epistemic uncertainty with neural networks include Bayesian neural networks [10, 11], which can directly learn a probability distribution over the neural network parameters, and Monte Carlo dropout [12], which estimates the predictive distribution through multiple stochastic forward passes. However, ensembles as well as these other UQ methods are not inherently calibrated and do not account for uncertainty in the data, which makes it difficult to select an appropriate confidence threshold for a given application. Data uncertainty, also known as _aleatoric uncertainty_, can originate from noisy observations and can be estimated with a mean-variance model [13] that treats the uncertainty variance as an additional model output, or more recently with evidential learning [14, 15], which learns the parameters of a higher order distribution over the likelihood parameters, and conformal prediction [16], a distribution-free approach that estimates a prediction interval directly. The deep ensemble approach [17] combines an ensemble of mean-variance networks to estimate both aleatoric and epistemic uncertainty in a unified model.
As discussed above, an important aspect of predictive uncertainty is the concept of _calibration_, which implies an agreement between the predicted uncertainty and the expected empirical error. When evaluating calibration, it is important to consider the asymmetric relationship between errors and uncertainties. By the common assumption that errors are drawn from a distribution (usually Gaussian), small uncertainties should be associated with small errors and large errors should be associated with large uncertainties, but large uncertainties can be associated with both small and large errors. Therefore, there is no direct correlation between uncertainties and the magnitude of errors. However, there should be a correlation between the uncertainties and the expected magnitude of errors. Several works have proposed methods for evaluating and validating the calibration of regression models. Kuleshov _et al._[18] proposed evaluating the coverage of the errors by the predictive distribution averaged over the data using a calibration curve. Later, Levi _et al._[19] proposed checking the correlation of uncertainties and expected errors computed in bins of increasing uncertainty in a reliability diagram. Recently, Pernot [20] highlighted the limitations of the previous approaches and proposed an additional analysis of z-scores (standard scores) for variance-based UQ methods. UQ and calibration for molecular property prediction and interatomic ML potentials have been explored in recent literature [21, 22, 23, 16, 24], but often the training and inference methods used do not inherently ensure good calibration. The method presented in this paper is, to the best of our knowledge, the first to consider a complete framework for obtaining calibrated epistemic and aleatoric uncertainty predictions on both energy and forces.
In previous work, we have shown how to extend a graph neural network model for predicting formation energy of molecules to also provide calibrated uncertainty estimates that can be decomposed into aleatoric and epistemic uncertainty [25]. The method works by combining an ensemble of models with mean-variance outputs and applying post hoc recalibration with isotonic regression on data not used for training. In this work, we further extend the approach to include calibrated uncertainty on the force predictions. Specifically, we extend a neural network potential with a probabilistic predictive distribution on energy and forces, and consider a deep ensemble of models [17] to express the aleatoric and epistemic uncertainty about the energy and force predictions. The uncalibrated predictive distributions are then recalibrated post hoc to fit the error distribution on previously unseen data. An added benefit of this approach is that ensemble models are generally known to produce more accurate predictions than single models [6]. Through computer experiments, we demonstrate that the proposed method results in accurate and calibrated predictions on two publicly available datasets, ANI-1x [1] and Transition1x [2], containing out-of-equilibrium and near-transition-state structures, respectively. The main contribution of the work is a complete framework for training and evaluating neural network potentials with accurate predictions and calibrated aleatoric and epistemic uncertainty on both energies and forces.
The rest of the paper is structured as follows. The proposed method including the extended graph neural network model and the recalibration procedure is described in Section 2. The datasets, experiments and results are presented in Section 3. Finally, the main findings and perspectives are discussed in Section 4 and we conclude in Section 5.
## 2 Methods
### Graph neural network model
As the base model for our ensemble we use PaiNN[26], an equivariant message passing neural network (MPNN) model designed specifically for predicting properties of molecules and materials. The model provides a mapping from sets of atomic species
and positions \(\{(Z_{i},\vec{r}_{i})\}\) to potential energy \(E\) and interatomic forces \(\{\vec{F}_{i}\}\). The potential energy is modelled as a sum over the atomic contributions \(E_{i}\):
\[E=\sum_{i}E_{i}\,, \tag{1}\]
and the forces are computed as the derivative of the potential energy with respect to the atomic positions, ensuring conservation of energy:
\[\vec{F}_{i}=-\partial E/\partial\vec{r}_{i}\,. \tag{2}\]
Specifically, the model input is represented as a graph, where there is an edge between a pair of atoms if the mutual distance between the atoms is below a certain cutoff. The cutoff distance is treated as a hyperparameter and is fixed at 5.0 Å in all of our experiments. The neural network architecture consists of a number of interaction layers, where information, or "messages", is exchanged along the edges of the input graph to update the hidden node states, followed by a readout function represented by a fully connected neural network that outputs the atom-wise quantities. The number of interaction layers and the size of the node hidden states are hyperparameters of the model.
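As an illustration of this graph construction, a minimal Python sketch (not the authors' implementation) that connects all atom pairs within the cutoff, ignoring periodic boundary conditions:

```python
import numpy as np

def radius_graph(positions, cutoff=5.0):
    """Directed edge list of shape (2, n_edges): one edge for every atom pair
    whose distance is below the cutoff radius (5.0 Å in the experiments)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    src, dst = np.nonzero((dist < cutoff) & (dist > 0.0))
    return np.stack([src, dst])
```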
### Extended model with aleatoric uncertainty
We extend the base model with additional outputs representing the aleatoric energy uncertainty \(\sigma_{E}^{2}=\sum_{i}\sigma_{E_{i}}^{2}\) and atom-wise aleatoric force uncertainties \(\{\sigma_{F_{i}}^{2}\}\). The atom-wise quantities, \(\sigma_{E_{i}}^{2}\) and \(\sigma_{F_{i}}^{2}\), are constrained to be positive by passing them through a softplus activation function, \(\log(1+\exp(\cdot))\), and adding a small constant for numerical stability. Note that here we chose to represent the atom-wise aleatoric force uncertainty by a single scalar even though the force vectors are 3-dimensional. This simplifying assumption means that we consider the noise scale in the spatial dimensions to be isotropic, i.e., uniform in all directions. Other options would be to represent the aleatoric force uncertainty by a common scalar for all atoms or as atom-wise vectors representing the uncertainty in each direction. However, we found the isotropic approach to be a reasonable compromise that works well in practice, and we did not study the other solutions further.
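The positivity constraint can be sketched as follows in PyTorch; the function name and the assumption that the readout produces exactly three channels per atom are illustrative, not the authors' exact code:

```python
import torch
import torch.nn.functional as F

def split_atomwise_outputs(readout, eps=1e-6):
    """Split the 3 per-atom readout channels into (E_i, sigma2_Ei, sigma2_Fi),
    with the variances made positive via softplus plus a small constant."""
    e_i = readout[..., 0]
    sigma2_ei = F.softplus(readout[..., 1]) + eps
    sigma2_fi = F.softplus(readout[..., 2]) + eps
    return e_i, sigma2_ei, sigma2_fi

# Molecular quantities follow by summing atomic contributions (Eq. 1):
# E = e_i.sum(-1), sigma2_E = sigma2_ei.sum(-1)
```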
### Model training procedure
Each network in the ensemble is initialized with random weight parameters \(\theta\) and trained individually on the same training dataset using a loss function composed of a weighted sum of the energy and force loss terms:
\[\mathcal{L}(\theta)=\lambda_{E}\mathcal{L}_{E}(\theta)+\lambda_{F}\mathcal{L}_ {F}(\theta)\,, \tag{3}\]
where the weight \(\lambda_{F}\) is between 0 and 1 and \(\lambda_{E}=(1-\lambda_{F})\).
ML potentials are usually trained with mean squared error (MSE) loss for both the energy and forces. Using a negative log likelihood (NLL) loss function provides
a natural way of training mean-variance models that also consider uncertainty [13]. The mean squared error (MSE) loss for the energy is straightforward. The MSE loss for the forces is evaluated per atom and component-wise over the spatial dimensions and is then averaged over the number of atoms. The negative log likelihood (NLL) loss for the energy, assuming a normally distributed error, is given for a single instance by the following expression, where \(x=\{(Z_{i},\vec{r}_{i})\}\) represents the model input and the observed values of energy and forces are denoted by \(E^{\text{obs}}\) and \(F^{\text{obs}}\), respectively:
\[\text{NLL}_{E}(\theta) =-\log p(E^{\text{obs}}|x,\theta) \tag{4}\] \[=\frac{1}{2}\left(\frac{\left(E^{\text{obs}}-E(x)\right)^{2}}{ \sigma_{E}^{2}(x)}+\log\sigma_{E}^{2}(x)+\log 2\pi\right). \tag{5}\]
The instance-wise energy losses are then averaged over the number of instances.
Analogous to the MSE loss for forces, the NLL loss for forces is evaluated per atom \(i\) and component-wise over the spatial dimensions \(D\) (recall that the predicted atom-wise force uncertainty \(\sigma_{F_{i}}^{2}\) is a single scalar applied over all spatial dimensions):
\[\text{NLL}_{F_{i}}(\theta) =\sum_{d=1}^{D}-\log p(F_{i,d}^{\text{obs}}|x,\theta) \tag{6}\] \[=\sum_{d=1}^{D}\frac{1}{2}\Bigg{(}\frac{\left(F_{i,d}^{\text{obs} }-F_{i,d}(x)\right)^{2}}{\sigma_{F_{i}}^{2}(x)}+\log\sigma_{F_{i}}^{2}(x)+ \log 2\pi\Bigg{)}\,. \tag{7}\]
The atom-wise force losses are then averaged over the total number of atoms. Note that NLL with fixed variance is equivalent to (scaled) MSE and the \(\log 2\pi\) terms are constant and can be omitted in training. Here, for models that are trained with a combination of MSE and NLL loss on either energy or forces, we scale the NLL loss by the expected uncertainty (determined empirically) to avoid the NLL loss dominating.
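For concreteness, Eqs. 5 and 7 can be written out in a few lines of PyTorch. This is a minimal sketch for a single molecule, with the constant \(\log 2\pi\) terms dropped as noted above:

```python
import torch

def energy_nll(e_obs, e_pred, sigma2_e):
    """Gaussian NLL of Eq. 5 (constants omitted), averaged over molecules."""
    return (0.5 * ((e_obs - e_pred) ** 2 / sigma2_e
                   + torch.log(sigma2_e))).mean()

def force_nll(f_obs, f_pred, sigma2_f):
    """Isotropic force NLL of Eq. 7 (constants omitted). f_obs and f_pred
    have shape (n_atoms, 3); sigma2_f has shape (n_atoms,) and is shared
    across the spatial components."""
    per_comp = 0.5 * ((f_obs - f_pred) ** 2 / sigma2_f[:, None]
                      + torch.log(sigma2_f)[:, None])
    return per_comp.sum(dim=-1).mean()  # sum over x,y,z, average over atoms
```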
Training directly with NLL loss can be unstable due to interactions between the mean and variance in the loss function, so we apply a training procedure similar to previous work [25], where the model is always trained with MSE loss for an initial warmup period before linearly interpolating to the NLL loss. Other more sophisticated methods for training with NLL loss exist [27, 28], but we found this simple approach to be sufficient to achieve training stability in our experiments.
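A sketch of such a warmup schedule is shown below; the step counts mirror the ANI-1x settings reported later in the experiments and are otherwise assumptions to be adjusted per dataset:

```python
def mse_nll_weights(step, warmup_steps=2_000_000, interp_steps=1_000_000):
    """Pure MSE during warmup, then the NLL fraction grows linearly to 1."""
    alpha = 0.0 if step < warmup_steps else min(
        1.0, (step - warmup_steps) / interp_steps)
    return 1.0 - alpha, alpha  # (weight_mse, weight_nll)

# loss = w_mse * mse_loss + w_nll * nll_loss  (hypothetical composition)
```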
### Ensemble model with epistemic uncertainty
To estimate the epistemic uncertainty, we follow the approach of [17] and make an ensemble approximation by combining the predictions of \(M\) individual models. Using a Bayesian interpretation of deep ensemble models [7, 8, 9], we can interpret the model weights \(\theta^{(m)}\) of each ensemble member \(m\) as samples from an approximate posterior distribution \(q(\theta)\approx p(\theta|\mathcal{D})\), where \(\mathcal{D}\) is the training data. For a regression model with
input \(x\) and output \(y\) trained on a dataset \(\mathcal{D}\) we have:
\[p(y|x,\mathcal{D})=\int p(y|x,\theta)p(\theta|\mathcal{D})d\theta, \tag{8}\] \[\approx\frac{1}{M}\sum_{m=1}^{M}p(y|x,\theta^{(m)}),\quad\theta^{(m)}\sim p(\theta|\mathcal{D}), \tag{9}\] \[\approx\frac{1}{M}\sum_{m=1}^{M}p(y|x,\theta^{(m)}),\quad\theta^{(m)}\sim q(\theta). \tag{10}\]
The first approximation is to estimate the integral with \(M\) samples from the distribution \(p(\theta|\mathcal{D})\) and the second approximation comes from approximating the true posterior \(p(\theta|\mathcal{D})\) with the distribution \(q(\theta)\). The uncertainty arising from \(p(y|x,\theta)\) is the aleatoric uncertainty, while the epistemic uncertainty is modeled as the uncertainty arising from the distribution of the model parameters \(q(\theta)\).
When applying this interpretation to a ML potential energy model, the underlying assumption of the model is that energy and force observations are generated by first sampling \(\theta\) and then sampling the normally distributed noise, i.e., for a single molecular energy and atomic force we then have:
\[\theta \sim q(\theta), \tag{11}\] \[E^{\text{obs}}(x) \sim\mathcal{N}\left(E_{\theta}(x),\sigma_{\theta,E}^{2}(x) \right),\] (12) \[\vec{F}_{i}^{\text{obs}}(x) \sim\mathcal{N}\left(-\frac{\partial E_{\theta}}{\partial\vec{r} _{i}}(x),\mathbf{I}\sigma_{\theta,F_{i}}^{2}(x)\right). \tag{13}\]
We have two levels of stochastic variables, so to calculate the variance we can use the law of total variance:
\[\text{Var}\left[Y\right]=\underbrace{\mathbb{E}\left[\text{Var}\left[Y|X \right]\right]}_{\text{aleatoric}}+\underbrace{\text{Var}\left[\mathbb{E} \left[Y|X\right]\right]}_{\text{epistemic}}. \tag{14}\]
Using the law of total variance, we get the following expression for the observed energy variance:
\[\text{Var}\left[E^{\text{obs}}\right] =\mathbb{E}\left[\text{Var}\left[E^{\text{obs}}|\theta\right] \right]+\text{Var}\left[\mathbb{E}\left[E^{\text{obs}}|\theta\right]\right]\, \tag{15}\] \[=\underbrace{\mathbb{E}_{\theta}\left[\sigma_{\theta,E}^{2}(x) \right]}_{\text{aleatoric}}+\underbrace{\text{Var}_{\theta}\left(E_{\theta}(x )\right)}_{\text{epistemic}}. \tag{16}\]
Since the force observation is a vector, we compute the total variance element-wise:
\[\text{Var}\left[F_{i,d}^{\text{obs}},F_{i,d}^{\text{obs}}\right] =\mathbb{E}\left[\text{Var}\left[F_{i,d}^{\text{obs}},F_{i,d}^{ \text{obs}}|\theta\right]\right] \tag{17}\] \[\quad+\text{Var}\left[\mathbb{E}\left[F_{i,d}^{\text{obs}}| \theta\right],\mathbb{E}\left[F_{i,d}^{\text{obs}}|\theta\right]\right]\] \[=\underbrace{\mathbb{E}_{\theta}\left[\sigma_{\theta,F_{i}}^{2}(x )\right]}_{\text{aleatoric}}+\underbrace{\text{Var}_{\theta}\left(-\frac{ \partial E_{\theta}(x)}{\partial r_{i,d}},-\frac{\partial E_{\theta}(x)}{ \partial r_{i,d}}\right)}_{\text{epistemic}}. \tag{18}\]
Treating the parameters of the ensemble member as samples from an approximate posterior \(q(\theta)\) and using the samples to approximate the expectations, we get the following expressions for the energy mean and variance:
\[E^{(*)} =\frac{1}{M}\sum_{m=1}^{M}E^{(m)}\,, \tag{19}\] \[\sigma_{E^{(*)}}^{2} =\underbrace{\frac{1}{M}\sum_{m=1}^{M}\sigma_{E^{(m)}}^{2}}_{ \text{aleatoric}}+\underbrace{\frac{1}{M}\sum_{m=1}^{M}\left(E^{(m)}-E^{(*)} \right)^{2}}_{\text{epistemic}}\,. \tag{20}\]
Similarly, the mean and variance of the forces for a single atom \(i\) are given by the following expressions:
\[\vec{F}_{i}^{(*)} =\frac{1}{M}\sum_{m=1}^{M}\vec{F}_{i}^{(m)}\,, \tag{21}\] \[\sigma_{F_{i}^{(*)}}^{2} =\underbrace{\frac{1}{M}\sum_{m=1}^{M}\sigma_{F_{i}^{(m)}}^{2}}_ {\text{aleatoric}}+\underbrace{\frac{1}{M}\sum_{m=1}^{M}\frac{1}{D}\left\| \vec{F}_{i}^{(m)}-\vec{F}_{i}^{(*)}\right\|^{2}}_{\text{epistemic}}, \tag{22}\]
where \(D\) denotes the spatial dimensions. With the assumption of isotropic force variance, we average across the spatial dimension in (22) when estimating the epistemic force variance of (18) using the sample variance. The mean energy and forces represent the ensemble prediction and the energy and force variances represent the ensemble total uncertainties, which can be decomposed into aleatoric and epistemic components as shown above.
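Eqs. 19-22 translate directly into array operations over stacked member outputs. A minimal NumPy sketch, with the assumed array shapes given in the comments:

```python
import numpy as np

def ensemble_energy(e_m, sigma2_e_m):
    """e_m, sigma2_e_m: shape (M,). Returns mean energy and total variance,
    decomposed as aleatoric + epistemic (Eqs. 19-20)."""
    e_star = e_m.mean()
    aleatoric = sigma2_e_m.mean()
    epistemic = ((e_m - e_star) ** 2).mean()
    return e_star, aleatoric + epistemic

def ensemble_forces(f_m, sigma2_f_m):
    """f_m: shape (M, n_atoms, 3); sigma2_f_m: shape (M, n_atoms).
    Returns mean forces and per-atom total variance (Eqs. 21-22)."""
    f_star = f_m.mean(axis=0)
    aleatoric = sigma2_f_m.mean(axis=0)
    epistemic = ((f_m - f_star) ** 2).mean(axis=-1).mean(axis=0)
    return f_star, aleatoric + epistemic
```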
When we want to evaluate the likelihood of an observation, we need access to the predictive distribution, and knowing the mean and variance is not sufficient. Following [17], we parameterize a normal distribution with the mean and variance. With the mean and variance specified, the normal distribution is the maximum entropy probability distribution, i.e., we make the least assumptions about the data by using a normal distribution, following the maximum entropy principle [29, 30]. However, for this parameterization to be exact, all the variance outputs of the ensemble members would have to be equal. If the variances follow an inverse-gamma distribution, the predictive distribution would be a Student-t in the infinite ensemble limit, which is the assumption used in deep evidential regression [14, 27].
### Uncertainty calibration
Several methods exist for evaluating the calibration of regression models. NLL provides a standard metric for quantifying the overall quality of probabilistic models by measuring the probability of observing the data given the predicted distribution. However, NLL depends on both the predicted mean and uncertainty (see eq. 5 and 7) and it can be useful to evaluate only the quality of the uncertainty estimates. For example, it is often informative to visually compare the predicted uncertainties with the empirical errors by plotting them. Since the total uncertainty of the ensemble model can be
interpreted as a variance of a normal distribution, we expect most of the errors to lie within 2-3 standard deviations of the predictive distribution. The variance-based approach also allows us to evaluate standard scores, also known as z-scores, defined as the empirical error divided by the standard deviation of the predictive distribution [20]:
\[z=\frac{y^{\text{obs}}-y(x)}{\sigma(x)}\,. \tag{23}\]
A z-score variance (ZV) close to 1 indicates that the predicted uncertainty on average corresponds to the variance of the error and is thereby an indication of good average calibration. The same approach can be applied to subsets of the data, leading to a local z-score variance (LZV) analysis, for example by evaluating ZV in bins of increasing uncertainty to test the consistency of the uncertainty estimates. For plotting, we found it useful to report the square root of the z-variance (RZV). Additionally, we can assess how well the uncertainty estimates correspond to the expected error locally by sorting the predictions into equal size bins of increasing uncertainty and plotting the root mean variance (RMV) of the uncertainty versus the empirical root mean squared error (RMSE), also known as an error-calibration plot or reliability diagram. The error-calibration can be summarized by the expected normalized calibration error (ENCE) [19], which measures the mean difference between RMV and RMSE normalised by RMV:
\[\text{ENCE}=\frac{1}{K}\sum_{k=1}^{K}\frac{|\text{RMV}_{k}-\text{RMSE}_{k}|} {\text{RMV}_{k}}\,, \tag{24}\]
where \(k=1,\ldots,K\) iterates the bins. LZV analysis and the reliability diagram provide two useful ways to evaluate the local consistency of the uncertainty estimates.
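Both diagnostics are straightforward to compute. The sketch below bins predictions by increasing uncertainty and evaluates the ENCE of Eq. 24; the function name and the choice of 10 bins are illustrative:

```python
import numpy as np

def ence(errors, sigmas, n_bins=10):
    """Expected normalized calibration error (Eq. 24): per uncertainty bin,
    compare the root mean variance (RMV) with the empirical RMSE."""
    order = np.argsort(sigmas)
    err_bins = np.array_split(np.asarray(errors)[order], n_bins)
    sig_bins = np.array_split(np.asarray(sigmas)[order], n_bins)
    terms = []
    for e, s in zip(err_bins, sig_bins):
        rmv = np.sqrt(np.mean(s ** 2))
        rmse = np.sqrt(np.mean(e ** 2))
        terms.append(abs(rmv - rmse) / rmv)
    return float(np.mean(terms))
```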
Good average or local calibration is not sufficient to ensure that individual uncertainty estimates are informative, i.e., if the uncertainty estimates are homoscedastic, they are not very useful. Thus it is generally desirable for uncertainty estimates to be as small as possible while also having some variation, which is also known as sharpness. To measure sharpness, we report the root mean variance (RMV) of the uncertainty, which should be small and correspond to the RMSE, and the coefficient of variation (CV), which quantifies the ratio of the standard deviation of the uncertainties to the mean uncertainty and thus the overall dispersion, or heteroscedasticity, of the predicted uncertainty:
\[\text{CV}=\frac{\sqrt{N^{-1}\sum_{n=1}^{N}(\sigma(x_{n})-\overline{\sigma})^{ 2}}}{\overline{\sigma}}\,, \tag{25}\]
where \(n=1,\ldots,N\) in this case iterates the test dataset, \(\sigma(x_{n})\) is the predicted standard deviation (uncertainty) of instance \(n\) and \(\overline{\sigma}=N^{-1}\sum_{n=1}^{N}\sigma(x_{n})\) is the mean predicted standard deviation. If uncertainties are heteroscedastic while having good local calibration, it is also an indication of good ranking ability, which is important in certain applications such as active learning [6].
### Uncertainty recalibration
The model training procedure described above does not by itself ensure good uncertainty calibration when the model is applied to unseen data. The individual models
may overfit to the training data and the total ensemble uncertainties (eq. 20 and 22) are strictly greater than any of the individual model uncertainties, and not fitted on any data. Therefore, following the approach of our previous work [25], we recalibrate the ensemble uncertainty estimates post hoc by using a recalibration function that maps the uncalibrated predictive distribution to a well calibrated distribution. The recalibration function is a non-linear uncertainty scaling function based on isotonic regression fitted to predict empirical squared errors on the validation set. Specifically, the recalibration function maps the uncalibrated uncertainty estimates \(\sigma^{2}\) to scaled uncertainty estimates \(s^{2}\sigma^{2}\), where \(s^{2}\) is the predicted scaling factor. In our experiments, both energy uncertainties \(\sigma^{2}_{E^{(*)}}\) and force uncertainties \(\sigma^{2}_{F^{(*)}}\) are recalibrated in this way.
Because the recalibration function is a scaling function, the recalibration procedure does not change the mean of the predictive distribution and thus does not change the prediction. Additionally, the isotonic regression results in a monotonic increasing scaling function and thus preserves the ordering of the uncertainty estimates and thereby the ranking. We use the implementation of isotonic regression available in the scikit-learn Python package [31].
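A minimal sketch of this recalibration step with scikit-learn is given below. It fits the monotonic map from uncalibrated variances directly to empirical squared errors, which is one simple way to realize the \(\sigma^{2}\rightarrow s^{2}\sigma^{2}\) scaling described above; the exact parameterization used in the paper may differ.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(sigma2_val, sq_err_val):
    """Fit a monotonic map from uncalibrated variances to empirical squared
    errors on held-out validation data (post hoc recalibration)."""
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    iso.fit(np.asarray(sigma2_val), np.asarray(sq_err_val))
    return iso

def recalibrate(iso, sigma2_new):
    """Recalibrated variances on new data; the mean prediction and the
    ordering of the uncertainties are unchanged by construction."""
    return np.maximum(iso.predict(np.asarray(sigma2_new)), 1e-12)
```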
## 3 Experiments and Results
### Datasets
The proposed method was evaluated on two publicly available datasets designed specifically for the development and evaluation of ML potentials, ANI-1x [1] and Transition1x [2]. The datasets include out-of-equilibrium and near-transition-state structures, respectively, and represent varied energy and force distributions. The ANI-1x dataset consists of Density Functional Theory (DFT) calculations for approximately 5 million diverse molecular conformations with an average of 8 heavy atoms (C, N, O) and an average of 15 total atoms (including H), along with multiple properties including total energy and interatomic forces computed at the \(\omega\)B97x/6-31G(d) level of theory. The dataset was generated by perturbing equilibrium configurations using an active learning procedure to ensure conformational diversity, with the aim of developing an accurate and transferable ML potential. The Transition1x dataset contains DFT calculations of energy and forces for 9.6 million molecular conformations with up to 7 heavy atoms (C, N, O) and an average of 14 total atoms (including H), likewise computed at the \(\omega\)B97x/6-31G(d) level of theory. Here, the structures were sampled on and around full reaction pathways, thus including conformations far from equilibrium and near transition states. The dataset was generated by running a nudged elastic band (NEB) [32] algorithm with DFT on a set of approximately 10 thousand organic reactions with up to 6 bond changes, while saving intermediate calculations, with the aim of improving ML potentials around transition states. Transition1x is more varied in terms of interatomic distances between pairs of heavy atoms than ANI-1x, but less varied in terms of the distribution of forces, since forces are generally minimised along reaction pathways [2, 32].

Table 1: Test results of ensemble models (\(M=5\)) trained on the ANI-1x (A1x) and Transition1x (T1x) datasets with different combinations of mean squared error (MSE) and negative log likelihood (NLL) loss functions on the energies and forces. Energy errors are averaged over molecules, while force errors are computed component-wise and averaged over the spatial dimensions and atoms. Bold marks the best NLL per dataset.

Energy (eV):

| Data | \(\mathcal{L}_{E}\) | \(\mathcal{L}_{F}\) | MAE \(\downarrow\) | RMSE \(\downarrow\) | NLL \(\downarrow\) | RZV | ENCE \(\downarrow\) | CV | RMV |
|------|------|------|--------|--------|--------|------|--------|------|--------|
| A1x | MSE | MSE | 0.0123 | 0.0278 | -2.81 | 0.97 | 0.0773 | 1.27 | 0.0283 |
| A1x | NLL | MSE | 0.0118 | 0.0256 | -3.04 | 1.00 | 0.0411 | 1.00 | 0.0209 |
| A1x | MSE | NLL | 0.0117 | 0.0305 | -2.91 | 0.97 | 0.0928 | 1.53 | 0.0305 |
| A1x | NLL | NLL | 0.0105 | 0.0296 | **-3.26** | 1.02 | 0.0600 | 1.51 | 0.0237 |
| T1x | MSE | MSE | 0.0344 | 0.0612 | -1.79 | 0.89 | 0.1144 | 0.82 | 0.0682 |
| T1x | NLL | MSE | 0.0318 | 0.0578 | -1.94 | 0.92 | 0.1383 | 0.86 | 0.0655 |
| T1x | MSE | NLL | 0.0332 | 0.0600 | -1.84 | 0.95 | 0.1108 | 0.83 | 0.0628 |
| T1x | NLL | NLL | 0.0303 | 0.0574 | **-2.09** | 0.98 | 0.0906 | 0.96 | 0.0562 |

Forces (eV/Å):

| Data | \(\mathcal{L}_{E}\) | \(\mathcal{L}_{F}\) | MAE \(\downarrow\) | RMSE \(\downarrow\) | NLL \(\downarrow\) | RZV | ENCE \(\downarrow\) | CV | RMV |
|------|------|------|--------|--------|--------|------|--------|------|--------|
| A1x | MSE | MSE | 0.0180 | 0.0362 | -2.56 | 0.98 | 0.0243 | 1.24 | 0.0386 |
| A1x | NLL | MSE | 0.0179 | 0.0364 | -2.56 | 0.98 | 0.0229 | 1.22 | 0.0381 |
| A1x | MSE | NLL | 0.0175 | 0.0399 | -2.77 | 1.00 | 0.0099 | 1.51 | 0.0410 |
| A1x | NLL | NLL | 0.0171 | 0.0402 | **-2.79** | 1.00 | 0.0093 | 1.56 | 0.0409 |
| T1x | MSE | MSE | 0.0370 | 0.0743 | -1.94 | 0.99 | 0.0292 | 1.21 | 0.0804 |
| T1x | NLL | MSE | 0.0366 | 0.0744 | -1.97 | 0.98 | 0.0293 | 1.27 | 0.0831 |
| T1x | MSE | NLL | 0.0369 | 0.0745 | -1.99 | 0.94 | 0.0615 | 1.18 | 0.0817 |
| T1x | NLL | NLL | 0.0359 | 0.0751 | **-2.05** | 0.94 | 0.0645 | 1.15 | 0.0773 |
### Model hyperparameters
The same general model configuration was used in all experiments. Each individual graph neural network model was configured with 3 interaction layers, a hidden node state size of 256 and a 2-layer atom-wise readout network with 3 outputs representing \(E_{i}\), \(\sigma_{E_{i}}^{2}\) and \(\sigma_{F_{i}}^{2}\). The input molecular graphs were generated with an edge cutoff radius of 5.0 Å. Models were trained using the Adam optimizer with an initial learning rate of \(10^{-4}\), an exponential decay learning rate scheduler, a batch size of 64 molecular graphs, a force loss weight \(\lambda_{F}=0.5\) and an early stopping criterion on the validation loss to prevent overfitting. In each experiment, models were trained individually with the same hyperparameters on the same training data, but with different random parameter initialisation and random shuffling of the training data to induce model diversity in the ensembles. For each ensemble model, after the training was completed, a recalibration function was fitted using predictions on the respective validation dataset following the procedure described in Section 2.6 and applied to the predictions on the test data. For ensemble models trained with MSE loss, only the epistemic uncertainty is considered.
### ANI-1x results
Ensembles of graph neural network models were trained on ANI-1x using the data splits from Schreiner _et al._[2], where the validation and test datasets consist of approximately 5% of the data each and the training set consists of the remaining 90% of the data. The splits are stratified by chemical formula to ensure different splits do not contain configurations made up of exactly the same atoms and selected such that all splits include all species of heavy atoms. Individual models were trained for up to 10 million gradient steps (approximately 144 epochs) with an initial warmup period of 2 million steps where the model was trained only with MSE loss followed by an interpolation period of 1 million steps, where the loss was interpolated linearly to NLL loss (only NLL models). The ensemble predictions were then recalibrated post hoc using a recalibration function fitted using the validation dataset. The trade-off between validation
performance and ensemble size \(M\) using models trained with NLL loss on both energy and forces is illustrated in Figure 1. Using a larger ensemble size results in lower error, as expected, but comes at the cost of additional computations. We observe that a reasonably low error is obtained at \(M=5\) and only small improvements are gained beyond that, which is similar to what we found in previous work [25].
Test results for \(M=5\) ensembles trained with different combinations of MSE and NLL loss on energy and forces are presented in the first four rows of Table 1. The ensemble trained with MSE loss on both energy and forces is a standard ensemble model with post hoc recalibration. The other three ensembles show the effect of training with NLL loss on either energy or forces or both. All four ensembles achieved a low error on energy and forces in terms of MAE and RMSE compared to the results reported by Schreiner _et al._[2] using a PaiNN model similar to the base model of our ensembles (MAE=0.023 on energy and MAE=0.039 on forces). Importantly, training with NLL loss on either energy or forces or both did not result in worse performance in terms of prediction error.
Figure 1: Trade-off between performance and ensemble size on the ANI-1x validation dataset using ensembles of models trained with NLL loss on both energy and forces.

All four ensemble models achieved good average calibration on ANI-1x in terms of NLL and ZV after recalibration, but ensembles trained with NLL loss performed slightly better, both on energy and forces, and the ensemble trained with NLL loss on both energy and forces performs best overall. Additionally, all ensembles scored a high CV, indicating that uncertainty estimates are heteroscedastic and thus informative. Calibration plots for the ensemble trained on ANI-1x with NLL loss on energy and forces are presented in Figure 2. The uncertainty vs. error plots show that in general large errors are associated with large uncertainties and most errors are within 2-3 standard deviations of uncertainty, as desired. For the energy, the model appears to be biased for some examples with large errors, but these are relatively few and are correctly identified as problematic by high uncertainty. For the forces, the distribution of errors looks more symmetrical around zero. This is also clearly shown by the local z-score analysis plots, where for the energy the variance of the z-scores is slightly off for very low and very high uncertainties, although still centered around 1, whereas for the forces the variance of the z-scores is close to 1 for all uncertainties, which indicates high consistency. Finally, the reliability diagrams show the relation between predicted uncertainty and expected error for the energy and forces, respectively. Both plots show a clear correlation between the uncertainty and the expected error, as the curves lie close to the identity line. Again, the model very slightly underestimates the expected error of the energy at low and high uncertainties, and the curve for the forces is near perfect. The reliability diagrams are summarised by ENCE scores in Table 1.
### Transition1x results
Analogous to the first experiment, ensembles of graph neural network models were trained on Transition1x using data splits from Schreiner _et al._[2] based on the same splitting criteria as ANI-1x described above. Models were trained for up to 3 million gradient steps (approximately 21 epochs) with an initial warmup period of 2 million steps followed by an interpolation period of 2 million steps (training was stopped before finishing the full interpolation period). When training for longer on this dataset, we observed severe overfitting. We believe this is because the data was generated from a relatively small set of chemical reactions making the models prone to overfit the many similar configurations associated with the same reactions in the training data. The ensemble predictions were recalibrated post hoc using a recalibration function fitted using the validation dataset. Varying the ensemble size yielded similar results to the first experiment (Figure 1) and \(M=5\) was selected as a good compromise between performance and computational cost.
Figure 2: Calibration results on the ANI-1x dataset of energy (top row) and forces (bottom row) for an ensemble of \(M=5\) models trained with NLL loss on both energy and forces. Transparent curves show results before applying recalibration.
Test results for \(M=5\) ensembles are presented in the last four rows of Table 1. As in the first experiment, all ensembles achieved a low error on energy and forces in terms of MAE and RMSE compared to the results reported in Schreiner _et al._[2] (MAE=0.048 on energy and MAE=0.058 on forces), and training with NLL loss did not decrease performance in terms of prediction error in any case. All four ensembles achieved acceptable average calibration in terms of NLL and ZV on the Transition1x test data. Surprisingly, ensembles trained with MSE loss were calibrated as well as or better than ensembles trained with NLL loss on this dataset. All ensembles scored similarly high CV values, indicating that uncertainty estimates are heteroscedastic and thus informative. Calibration plots for an ensemble trained on Transition1x with NLL loss on energy and forces are presented in Figure 3. The uncertainty vs. error plots show that in general large errors are associated with large uncertainties, as desired. For both energy and forces, some errors extend beyond 2-3 standard deviations of uncertainty, indicating that the error distributions have wider tails and may not be Gaussian in this case. Similar to the ANI-1x experiment, it looks like the model is biased for some instances with large energy errors, but these cases are correctly identified as problematic by high uncertainty. For the forces, the error distribution appears more symmetrical around zero but with wide tails. The local z-score analysis plot for the energy indicates some inconsistencies in the energy uncertainties. Plotting the root variance of the z-scores as a function of the observed molecular energies (Figure 4) shows that the uncertainty is underestimated for low energies and overestimated for high energies on average. Taking a closer look at the energy distribution reveals significant differences between the validation and test set, which could be the reason for this problem. The variances of the local z-scores for the forces are more consistent, but values below one indicate that the model generally overestimates the uncertainty on the forces. The reliability diagram for the energy also shows signs of some inconsistencies, as the curve does not form a straight line along the diagonal, but overall the uncertainties are correlated with the expected error. The corresponding reliability diagram for the forces shows a more consistent result, with only a tiny overestimation of the force uncertainty. The reliability diagrams are summarised by ENCE scores in Table 1.
## 4 Discussion
The proposed method achieved good predictive performance as well as calibrated and consistent uncertainty estimates in experiments on two challenging, publicly available molecular datasets. A major advantage of the approach is that it considers both epistemic and aleatoric uncertainty through an ensemble approximation of mean-variance models. We believe that considering both aleatoric and epistemic uncertainty is critical to ensure good calibration in and out of the training data distribution. Often the training procedures of uncertainty aware models do not inherently ensure good calibration on unseen data. For example, ensemble members trained on the same data will often fit the same mean prediction without accounting for noise in the data and mean-variance methods will estimate the noise on the training data but do not guarantee good extrapolation of the uncertainty estimates to previously unseen data. Therefore, the post hoc recalibration procedure is key to achieve good calibration on unseen data in our
experiments, but is not commonly applied by other UQ methods in the literature.
The computational overhead of training and evaluating ensemble models is sometimes pointed out as a major disadvantage of using ensembles. However, it is important to note that most of this computation can be performed in parallel and thus only leads to a small overhead of computing the ensemble approximation and recalibration in real time. Some works have proposed methods for speeding up the training of ensembles, such as snapshot ensembles [33, 34], which could also be applied in this case. Another widely accepted advantage of ensembles is that they often improve prediction accuracy (see Figure 1 as an example), which can be considered a positive side effect of the proposed method. Here, we have used ensembles of size 5, but larger ensembles can be expected to further improve performance (up to a limit) at the cost of more computation. The approach could also potentially benefit from other recent extensions to ensembles such as using randomized priors [35] to improve the quality of especially the epistemic uncertainty estimates.
Figure 3: Calibration results on the Transition1x dataset of energy (top row) and forces (bottom row) for an ensemble of \(M=5\) models trained with NLL loss on both energy and forces. Transparent curves show results before applying recalibration.

Evaluation of uncertainty calibration for regression models is an active area of research [18, 19, 20, 21]. Standard procedures for assessing the quality of uncertainty estimates are necessary within the field to establish confidence in uncertainty quantification methods and to enable fair comparison. We recommend recent work by Pernot [36], which provides a good overview of calibration assessment methods and a detailed approach for evaluating uncertainty. Our experiments show that the uncertainty estimates obtained with the proposed method are largely consistent with the expected error for varying size of the uncertainty (Figures 2 and 3). However, we observed indications that uncertainties are not equally well calibrated along different molecular energies (Figure 4). The current recalibration method only considers the magnitude of the predicted uncertainty. It would be an interesting direction for future work to design a recalibration function that can account for additional input features, such as the (predicted) energy, while remaining a monotonic increasing scaling function, with the aim of achieving equally good calibration throughout the input space. Applying the calibration evaluation framework proposed by Pernot [20] could help provide additional insights into the consistency and adaptivity of predictive uncertainty.
## 5 Conclusion
In this work, we have presented a complete framework for training neural network potentials with calibrated uncertainty estimates on both energy and forces. The proposed method was demonstrated and evaluated on two challenging, publicly available molecular datasets containing diverse conformations far from equilibrium. In all cases, the proposed method achieved low prediction error and good uncertainty calibration. On the ANI-1x dataset, training with NLL loss improved the calibration over training with standard MSE loss. On the Transition1x dataset, the same improvement was not observed, and good calibration was achieved by training with standard MSE loss and applying post hoc nonlinear recalibration. This could be because the validation and test data are more out of distribution in this case. The proposed method does not depend on the particular architecture of the neural network model, and can thus easily be adapted to new models in the future. We hope that this work will contribute to better calibrated ML potentials and enable more robust and reliable applications.
|
2307.10139 | Artificial Neural Networks for Predicting Mechanical Properties of
Crystalline Polyamide12 via Molecular Dynamics Simulations | Predicting material properties of 3D printed polymer products is a challenge
in additive manufacturing due to the highly localized and complex manufacturing
process. The microstructure of such products is fundamentally different from
the ones obtained by using conventional manufacturing methods, which makes the
task even more difficult. As a first step of a systematic multiscale approach,
in this work, we have developed an artificial neural network (ANN) to predict
the mechanical properties of the crystalline form of Polyamide12 (PA12) based
on data collected from molecular dynamics (MD) simulations. Using the machine
learning approach, we are able to predict the stress-strain relations of PA12
once the macroscale deformation gradient is provided as an input to the ANN. We
have shown that this is an efficient and accurate approach, which can provide a
three-dimensional molecular-level anisotropic stress-strain relation of PA12
for any macroscale mechanics model, such as finite element modeling at
arbitrary quadrature points. This work lays the foundation for a multiscale
finite element method for simulating semicrystalline polymers, which will be
published as a separate study. | Caglar Tamur, Shaofan Li, Danielle Zeng | 2023-07-19T17:05:04Z | http://arxiv.org/abs/2307.10139v3 | Artificial neural networks for predicting mechanical properties of crystalline polyamide12 via molecular dynamics simulations
###### Abstract
Predicting material properties of 3D printed polymer products is a challenge in additive manufacturing due to the highly localized and complex manufacturing process. The microstructure of such products is fundamentally different from the ones obtained by using conventional manufacturing methods, which makes the task even more difficult. As a first step of a systematic multiscale approach, in this work, we have developed an artificial neural network (ANN) to predict the mechanical properties of the crystalline form of Polyamide12 (PA12) based on data collected from molecular dynamics (MD) simulations. Using the machine learning approach, we are able to predict the stress-strain relations of PA12 once the macroscale deformation gradient is provided as an input to the ANN. We have shown that this is an efficient and accurate approach, which can provide a three-dimensional molecular-level anisotropic stress-strain relation of PA12 for any macroscale mechanics model, such as finite element modeling at arbitrary quadrature points. This work lays the foundation for a multiscale finite element method for simulating semicrystalline polymers, which will be published as a separate study.
keywords: _Molecular dynamics, crystalline polymers, ReaxFF, polyamide, machine learning, neural network_
## 1 Introduction
Material modeling of 3D printed products has been a great challenge for additive manufacturing and other advanced manufacturing technologies. In additive manufacturing, the product is built material point upon material point; thus, the resultant properties are highly dependent on the printing process, both spatially and temporally, at localized regions. A microstructurally informed and robust mechanical model is highly desirable, as it would help to overcome challenges in the manufacturing process, product design, and material selection.
One of the promising approaches is the multiscale modeling of 3D printed materials [1; 2; 3], which has the ability to extract bottom-up material information to assess the
macroscale properties of a product that is additively manufactured. However, the cost of multiscale modeling may vary, especially when it is involved with molecular-scale simulations. Traditionally, in multiscale modeling of crystalline materials, this is overcome by using a technique called the Cauchy-Born rule [4, 5], which is an approximation of molecular dynamics by using simplified molecular statics. However, for semicrystalline or amorphous materials, due to the lack of a definite crystal structure, full-scale molecular dynamics must be utilized, which will significantly increase the cost of the multiscale simulation. This is because the macroscale finite element method would require a molecular level stress-strain relation at each quadrature point for an arbitrarily given strain, and it is almost impossible to conduct concurrent molecular dynamics simulations unless one has abundant computational resources.
The task at hand is to simulate the molecular-level stress-strain relation under arbitrary strain and temperature with good accuracy, efficiency, and low cost. To solve this problem, in the present work, we adopted a machine learning approach by developing an artificial neural network (ANN) that is trained on a molecular dynamics (MD) dataset, which can predict the molecular-level mechanical material properties of polymeric materials.
Current efforts in machine-learning-driven material property prediction involve a variety of approaches. One common approach is to use features based on the material composition to obtain structural and electronic properties, such as predicting the bandgap of semiconductor materials using convolutional neural networks [6] and predicting adsorption energies of high-entropy alloys through deep neural networks [7]. In a recent study, standard feed-forward neural networks were used with EAM-based MD simulations to model crystal plasticity in a multiscale framework [5].
Polymeric materials have three different microstructural forms: crystalline, amorphous, and semicrystalline. As a starting point, we shall first develop an artificial neural network to predict the mechanical response of the anisotropic crystalline forms of polymers. The approach we present can be extended to include amorphous phases and would eventually be incorporated into the constitutive relations of a multiscale finite element method for semicrystalline polymers.
One of the polymeric materials widely used in additive manufacturing is Polyamide12 (PA12), also known as Nylon12, which is a synthetic polymer that is used in many industrial applications such as automobile parts, aerospace applications and medical components. An automotive part manufactured using the Multi Jet Fusion process is shown in Fig. 1.
PA12 has the chemical formula [-(CH\({}_{2}\))\({}_{11}\)C(O)NH-]\({}_{n}\) as visualized in Fig. 2. In recent years, many 3D printed PA12 products have been fabricated, and they have shown some outstanding material properties. Additively manufactured PA12 is a semicrystalline material, in which the microstructure has a sandwich-like structure of alternating regions of crystal and amorphous zones.
Its structural characterization has been extensively studied since the 1970s with x-ray diffraction (XRD) experiments [8; 9] and with nuclear magnetic resonance (NMR) spectroscopy [10], which revealed that PA12 displays polymorphism and can have crystal phases \(\alpha,\alpha^{\prime},\gamma\) and \(\gamma^{\prime}\). The most abundant phase is the \(\gamma\) form, which results from slow cooling at atmospheric pressure, whereas the other phases require specific conditions such as rapid quenching and/or high-pressure treatment. For the purposes of our study, we focus on the \(\gamma\) crystal form.
Figure 1: 3D printed Nylon PA12 auto part via HP multi-jet fusion (Courtesy of Ford Motor Company).
Figure 2: Polyamide12 molecules. Gray, white, blue and red spheres represent carbon, hydrogen, nitrogen and oxygen atoms respectively. Dashed lines indicate the H bonds.
## 2 Molecular dynamics simulations
In this section, we describe the details of molecular dynamics simulations, such as the preparation of initial configurations, the force field selection, and simulation procedures, and we conduct a preliminary study.
### System setup
We start by constructing the unit cell of the \(\gamma\) form PA12 crystal. The \(\gamma\) phase has a pseudo-hexagonal monoclinic structure with the lattice parameters summarized in Table 1[8; 9]. Using the atomistic coordinates and the lattice parameters from the experimental literature [9], the unit cell of the \(\gamma\) phase is constructed. This monoclinic structure contains four PA12 chains and is visualized in Fig. 3. Note that the dashed lines indicate hydrogen bonds, and the unit cell is periodic in all directions.
### Force field selection
Force fields represent the interactions between atoms and molecules using a set of equations. In classical forms, force fields consist of empirical potentials that describe bonded interactions (primary bonds, bond angles, and dihedrals) and non-bonded interactions (van der Waals and electrostatic). Choosing a suitable force field for the system under consideration is a crucial part of molecular dynamics simulations.
Several empirical force fields have been used in the simulation of polyamides; some popular examples being OPLS, CVFF, COMPASS, and DREIDING [11]. However, these potentials are typically unable to model chemical reactions and cannot account for bond dissociation. Hence, they are not the ideal choice for investigating the mechanical response of the system in the context of fracture mechanics and rupture of polymer crystals, which may involve the chain scission process [12; 13].
Reactive force fields, on the other hand, have been developed to simulate complex chemical reactions and have been used successfully in deformation simulations that involve bond cleavage. One such force field is ReaxFF, originally developed by van Duin [14], which utilizes the so-called bond order formalism to describe chemical bonding. ReaxFF requires no topology information, and it is easy to construct initial MD configurations since only the atomistic coordinates are needed. ReaxFF treats weak hydrogen bonds specially, with an explicit energy term in the functional, which plays an important role in polymer systems. The current implementation in LAMMPS utilizes the functional described in [15].
ReaxFF is parameterized for a wide range of materials through quantum mechanical calculations [16] and has also been used in the simulation of polyamide crystals before [17]. For our polymer system, we tested three such parameterizations [18; 19; 20]. The test process involved relaxing the initial structure in the NPT ensemble and comparing the resultant lattice parameters with the experimental values. Among these, the ReaxFF parameterization by Mattson [19] yielded the best result; therefore, it was chosen as the force field for this study.
Figure 4: Supercell of perfect PA12 crystals (a) View from x-y plane (b) View from y-z plane.
### Simulation details and preliminary analysis
Molecular dynamics simulations are performed using the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [21]. The simulations are run on Berkeley's high-performance computing cluster Savio using a \([4\times 2\times 4]\) CPU grid. MD time steps are chosen as 0.5 fs. Nose-Hoover thermostat and barostat [22] are used to control temperature and pressure, respectively, with damping parameters chosen as 50 fs and 500 fs.
The initial structure is relaxed in the constant pressure and temperature (NPT) ensemble at 300 K and 1 atm, where the pressure is controlled independently in all directions. We observed that an NPT simulation for 50 ps was sufficient to reach equilibrium, as the lattice parameters stabilized adequately at this point.
As a preliminary study and to investigate in detail the anisotropic mechanical response of PA12 crystals, we constructed a larger MD cell with 38,000 atoms. After relaxation under ambient conditions, we deform the system in different directions under uniaxial tension. Uniaxial deformation is imposed by stretching the unit cell in one direction at a constant strain rate and relaxing the other two dimensions with the NPT ensemble at 1 atm. For this part of the study, we used slower strain rates and deformed the cell quasi-statically in a step-by-step fashion as described in [23]. In this process, the MD cell is deformed by 1% of its final stretch at each step, relaxed in the NVT ensemble, and the stress is sampled and averaged over 5 ps intervals. These steps are repeated until rupture of the polymer chains is observed, and the resultant stress-strain behaviors are shown in Fig. 5 for each direction.
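The step-by-step protocol above lends itself to a simple driver loop. The sketch below illustrates the control flow only; `apply_strain_increment`, `run_nvt`, and `sample_stress` are hypothetical stand-ins for the corresponding LAMMPS operations (box deformation, NVT relaxation, and time-averaged Virial stress output), not a real API.

```python
# Hypothetical helpers standing in for LAMMPS commands; bodies omitted.
def apply_strain_increment(delta: float) -> None: ...      # deform the box along one axis
def run_nvt(picoseconds: float) -> None: ...               # relax in the NVT ensemble
def sample_stress(window_ps: float) -> float: return 0.0   # time-averaged Virial stress

def quasi_static_tension(final_strain: float, n_steps: int = 100):
    """Deform by 1% of the final stretch per step, relax for 5 ps, then sample stress."""
    curve = []
    for i in range(1, n_steps + 1):
        apply_strain_increment(final_strain / n_steps)
        run_nvt(5.0)
        curve.append((i * final_strain / n_steps, sample_stress(5.0)))
    return curve
```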
From Fig. 5, we observe that the system is highly anisotropic; the crystal is significantly stiffer in the z-direction, which is the direction of polymer chain alignment, and more ductile in the other two directions. Interestingly, for the x-direction (Fig. 5(a)), there was a remarkable increase in the elastic modulus once the strain reached around 12%. We explored this phenomenon to determine whether it involves a mechanically induced phase transformation, by deforming the original monoclinic crystal supercell in the x-direction. The resulting atomistic configurations before and after the change in elastic modulus are visualized in Fig. 6, in which we clearly observe a change in the microstructural configurations.
Figure 5: Stress (MPa) versus Strain (engineering) behavior of PA12 in uniaxial tension.
Upon additional analysis, we understood that the process is completely reversible and the effect disappears once the MD cell is unloaded to its original state. Further investigation may be required to draw a meaningful conclusion on the phenomenon.
Figure 6: Snapshots of the atomistic configurations during deformation in the x-direction.
## 3 Artificial neural network for constitutive law
In this section, we present a data-driven approach to model the hyperelastic constitutive law of crystalline PA12, to be used in multiscale mechanics simulations. Our task is to find the mapping between the deformation state and the resulting material response, using the data obtained from the molecular dynamics simulations on an artificial neural network (ANN). We discuss data collection, model selection, training phase, and predictions in detail.
### Data collection and processing
In order to generate the data set to train the learning model, we performed MD simulations by deforming the PA12 supercell in different directions with varying strain rates. For the purposes of this study, we limit our attention to uniaxial and biaxial tensile deformations, although the presented methodology should be applicable to any deformation mode. Using the findings of the preliminary study (Fig. 5), we determined the final stretch limits of the simulation box necessary in each direction. Accordingly, we chose the ranges of strain rates in each direction, which are summarized in Table 2. In all MD simulations, the supercell is deformed for 10,000 steps with a rate chosen from the prescribed ranges depending on the loading direction. Specifically, we chose 15 evenly spaced values from the ranges in Table 2 and took their combinations to obtain uniaxial and biaxial loading cases, resulting in 720 different MD simulations, each having different final stretches and strain rates.
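As a quick sanity check on the size of this campaign, the 720 loading cases can be enumerated from the Table 2 ranges as follows. This is a minimal sketch; the specific direction pairings used for the biaxial cases are our assumption.

```python
import numpy as np
from itertools import product

# Strain-rate ranges from Table 2, in units of 1e-6 / fs.
rate_ranges = {"x": (3.3, 50.0), "y": (4.0, 60.0), "z": (1.0, 15.0)}
rates = {d: np.linspace(lo, hi, 15) for d, (lo, hi) in rate_ranges.items()}

# Uniaxial cases: one loaded direction at a time -> 3 * 15 = 45 simulations.
uniaxial = [(d, r) for d in rates for r in rates[d]]

# Biaxial cases: rate pairs for each direction pair -> 3 * 15^2 = 675 simulations.
pairs = [("x", "y"), ("x", "z"), ("y", "z")]
biaxial = [(da, db, ra, rb) for (da, db) in pairs
           for ra, rb in product(rates[da], rates[db])]

print(len(uniaxial) + len(biaxial))  # 720 loading cases in total
```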
Each deformation simulation is run for 5 ps and the Virial stress and the box dimensions are sampled at 50 fs intervals. Consequently, after running all the 720 simulations, we collected a data set consisting of 72,720 data points. Each data point is 12 dimensional, which represents the deformation (6 components) and stress (6 components) states of the PA12 crystal. Note that the process described here involves remarkably high strain rates and short simulation times, since it was not computationally feasible to conduct slower simulations and collect a large data set at the same time.
The original data from MD simulations consist of the box dimensions \(l=\{l_{x},l_{y},l_{z}\}\), which can be used to construct the deformation gradient \(\mathbf{F}\); the stress measure we obtain is the pressure tensor, known as the Virial stress. Virial stress has been shown to be equivalent to the continuum Cauchy stress \(\mathbf{\sigma}\)[24]. In order to adopt the model into the continuum mechanics framework, we need to transform the data set into the tensor quantities utilized in finite deformation. To preserve material objectivity [25; 5], we chose the energetic conjugates, the right Cauchy-Green tensor \(\mathbf{C}\) and the second Piola-Kirchhoff (PK2) stress tensor \(\mathbf{S}\), to represent the deformation state and material response, respectively. Each of these second-order tensors has nine components, but making use of symmetry and Voigt notation we represent
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Deformation Direction} \\ & x & y & z \\ \hline Ultimate Strain & 0.5 & 0.6 & 0.15 \\ Strain Rate (\(10^{-6}/fs\)) & [3.3, 50] & [4, 60] & [1, 15] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ultimate strain and resulting ranges of strain rates for each direction.
them in the vector form as below.
\[\mathbf{C} =[C_{11},\ C_{22},\ C_{33},\ C_{12},\ C_{13},\ C_{23}] \tag{1}\] \[\mathbf{S} =[S_{11},\ S_{22},\ S_{33},\ S_{12},\ S_{13},\ S_{23}] \tag{2}\]
Using the elementary relations of continuum mechanics, we can obtain the \(\mathbf{C}\) and \(\mathbf{S}\) tensors as follows.
\[J =\det\mathbf{F} \tag{3}\] \[\mathbf{C} =\mathbf{F}^{T}\mathbf{F}\] (4) \[\mathbf{S} =J\mathbf{F}^{-1}\mathbf{\sigma}\mathbf{F}^{-T} \tag{5}\]
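For concreteness, the following minimal NumPy sketch applies Eqs. (3)-(5) and flattens the results into the Voigt vectors of Eqs. (1)-(2); the example stress value is purely illustrative.

```python
import numpy as np

VOIGT = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]  # ordering of Eqs. (1)-(2)

def conjugate_pair(F: np.ndarray, sigma: np.ndarray):
    """Map deformation gradient F and Cauchy (Virial) stress sigma to the Voigt-form
    right Cauchy-Green tensor C and second Piola-Kirchhoff stress S, Eqs. (3)-(5)."""
    J = np.linalg.det(F)                    # Eq. (3)
    C = F.T @ F                             # Eq. (4)
    Finv = np.linalg.inv(F)
    S = J * Finv @ sigma @ Finv.T           # Eq. (5)
    voigt = lambda T: np.array([T[i, j] for i, j in VOIGT])
    return voigt(C), voigt(S)

# Illustrative example: 5% uniaxial stretch in z with a purely axial Cauchy stress.
F = np.diag([1.0, 1.0, 1.05])
sigma = np.diag([0.0, 0.0, 120.0])          # MPa, made-up value for demonstration
C_voigt, S_voigt = conjugate_pair(F, sigma)
```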
Now we can state our goal as finding the map,
\[\Psi:\mathbf{C}\rightarrow\mathbf{S} \tag{6}\]
where \(\Psi\) encapsulates the constitutive model that we are going to approximate through supervised learning.
Finally, we normalize our input data set to have a mean of zero and a standard deviation of one, a process known as _standardization_. It is common practice in gradient-based learning methods to standardize the data, as it improves the performance of ANNs by helping to resolve convergence issues of the backpropagation algorithm [26; 27]. Standardization can be described as follows.
\[C_{ij}^{std}=\frac{C_{ij}-\hat{\mu}_{ij}}{\hat{\sigma}_{ij}}\quad\text{for} \quad\mathrm{i,j}=1,2,3. \tag{7}\]
where \(C_{ij}^{std}\) are the normalized components of the right Cauchy-Green tensor, \(\hat{\mu}_{ij}\) is the sample mean and \(\hat{\sigma}_{ij}\) is the sample standard deviation of the respective component.
### Model selection and results
We adopt a feedforward, fully connected network architecture to construct our regression ANN, which may be called a multilayer perceptron (MLP). As shown by the universal approximation theorem, feedforward neural networks can approximate any continuous function, provided that they have at least one hidden layer, have enough neurons, and the activation functions satisfy certain properties [28; 29]. Thus, we believe that an MLP is a suitable choice to approximate the hyperelastic constitutive law. The Keras [30] Python interface of the TensorFlow [31] machine learning package is used to select and train our ANN model.
Our task is to find an approximation to the constitutive model defined in Eq.(6). Formally, we can express the learning problem as follows.
Find
\[\mathcal{N}(\mathbf{C},\mathbf{w})=\mathbf{\hat{S}} \tag{8}\]
such that
\[\mathbf{w}=\operatorname*{argmin}_{\mathbf{w}}\mathcal{L}(\mathbf{S},\mathbf{\hat{S}}) \tag{9}\]
where \(\mathcal{N}\) encodes the ANN, \(\mathbf{w}\) are the _weights_ of the neurons, and \(\mathcal{L}\) is a _loss function_. The task in supervised learning is to find the optimal weights \(\mathbf{w}\) such that the metric defined by the loss function is minimized.
To select the ANN model parameters known as _hyperparameters_, such as the number of hidden layers, the number of neurons in each layer, the type of activation functions, and the learning rate of the gradient descent, we conduct hyperparameter optimization. We chose two candidate activation functions commonly used in regression: the _ReLu_ function and its smooth approximation known as the _Softplus_ function.
\[\text{ReLu}\ :\ \sigma(\text{x})=\max(0,\text{x})\qquad\quad\text{Softplus}\ :\ \sigma(\text{x})=\log(1+\text{e}^{\text{x}}) \tag{10}\]
For the loss function, we choose the mean squared error (MSE) of the PK2 stress, which is defined as
\[\mathcal{L}_{\text{MSE}}(\mathbf{S},\mathbf{\hat{S}})=\frac{1}{n}\sum_{i=1}^{n}||\mathbf{S}_{i}-\mathbf{\hat{S}}_{i}||_{2}^{2}\, \tag{11}\]
where \(\mathbf{S}\) is the PK2 stress obtained from MD simulations, \(\mathbf{\hat{S}}\) is the predicted stress from the ANN, and \(n\) is the number of data points.
We tried two methodologies to select the best set of hyperparameters: the HyperBand algorithm [32] and Bayesian Optimization with Gaussian Processes [33]. Both algorithms pick, from a predefined search space, the parameters that yield the lowest validation loss. The search space is informed by our preliminary analysis, and the optimizer of the ANN is chosen as the Adam algorithm [34], which implements a version of the stochastic gradient descent (SGD) method. Accordingly, we chose the following ranges of parameters to perform hyperparameter tuning.
We start by partitioning the data into a random 80% - 20% train-test split. Then, we run hyperparameter optimization using the Hyperband and Bayesian Optimization methods in the domain described in Table 3, leaving aside 20% of the training data to compute the validation loss. For all models, a learning rate of \(\gamma=10^{-2}\) and four or five hidden layers gave the best results. For the remaining hyperparameters, we investigated the top ten models and chose the ones with the lowest complexity to reduce overfitting. The resulting candidate models are summarized in Table 4.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Hyperparameters} \\ \hline Hidden Layers & Neurons & Activation Function & Learning Rate \\ \([2,5]\) & \([32,256]\) & \(\{\text{ReLu, Softplus}\}\) & \(\{10^{-1},10^{-2},10^{-3}\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Hyperparameter search grid
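Both search strategies are available in the `keras_tuner` package. The sketch below shows how a search over the Table 3 grid could be set up; the random placeholder data and the tuner settings (e.g., `max_epochs`, `max_trials`) are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
import keras_tuner as kt

X_train = np.random.randn(1024, 6).astype("float32")  # placeholder for standardized C data
y_train = np.random.randn(1024, 6).astype("float32")  # placeholder for PK2 stress data

def build_model(hp):
    model = tf.keras.Sequential([tf.keras.Input(shape=(6,))])
    for i in range(hp.Int("hidden_layers", 2, 5)):
        model.add(tf.keras.layers.Dense(hp.Int(f"units_{i}", 32, 256, step=32),
                                        activation=hp.Choice(f"act_{i}", ["relu", "softplus"])))
    model.add(tf.keras.layers.Dense(6, activation="linear"))
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-1, 1e-2, 1e-3])),
                  loss="mse")
    return model

tuner = kt.Hyperband(build_model, objective="val_loss", max_epochs=100)
# Alternatively: kt.BayesianOptimization(build_model, objective="val_loss", max_trials=50)
tuner.search(X_train, y_train, validation_split=0.2)
```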
The models summarized above are trained on the train data set for 1000 epochs, and predictions are made on the test data to assess our final performance. The resulting train-test history curves are presented in Fig. 7. We conclude that the ANNs are stable, there is no significant overfitting, and they perform well against the test data set, as seen from the test loss. The best model proves to be ANN-4, the rightmost column in Table 4, and its architecture is schematically visualized in Fig. 8.
\begin{table}
\begin{tabular}{c c c c c} \hline & Hyperband (1) & Bayesian (2) & Hyperband (3) & Bayesian (4) \\ \hline Input Layer & 6 & 6 & 6 & 6 \\ Hidden Layer 1 & 192 \(\times\) ReLu & 64 \(\times\) ReLu & 160 \(\times\) ReLu & 224 \(\times\) ReLu \\ Hidden Layer 2 & 32 \(\times\) Softplus & 32 \(\times\) Softplus & 64 \(\times\) Softplus & 96 \(\times\) Softplus \\ Hidden Layer 3 & 32 \(\times\) ReLu & 256 \(\times\) Softplus & 32 \(\times\) ReLu & 64 \(\times\) ReLu \\ Hidden Layer 4 & 64 \(\times\) ReLu & 32 \(\times\) ReLu & 128 \(\times\) Softplus & 96 \(\times\) Softplus \\ Hidden Layer 5 & 128 \(\times\) ReLu & 64 \(\times\) ReLu & - & - \\ Output Layer & 6 \(\times\) Linear & 6 \(\times\) Linear & 6 \(\times\) Linear & 6 \(\times\) Linear \\ Validation Loss & 15630 & 15026 & 14109 & 13538 \\ \hline \end{tabular}
\end{table}
Table 4: ANN architectures resulting from Hyperband method and Bayesian Optimization
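The best architecture (ANN-4, rightmost column of Table 4) translates directly into Keras. The sketch below leaves initializers and other training details at library defaults, since they are not specified above.

```python
import tensorflow as tf

# ANN-4 from Table 4 reconstructed in Keras (a sketch, not the authors' exact code).
ann4 = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),                        # Voigt components of C
    tf.keras.layers.Dense(224, activation="relu"),
    tf.keras.layers.Dense(96, activation="softplus"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(96, activation="softplus"),
    tf.keras.layers.Dense(6, activation="linear"),     # Voigt components of S (MPa)
])
ann4.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2), loss="mse")
# ann4.fit(X_train, y_train, epochs=1000, validation_split=0.2)
```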
Figure 7: Learning curves of the ANNs: Training and test loss during the training phase.
Finally, we tested ANN-4 against a new set of uniaxial tension MD simulations to observe our model's performance against variance in strain rate. Two of the simulations utilized strain rates \(\dot{\epsilon}_{1}=5\times 10^{-6}/fs\) and \(\dot{\epsilon}_{2}=10\times 10^{-6}/fs\), which are within the training intervals defined in Table 2. Predictions of ANN-4 are compared with the MD results, and the resultant \(C_{33}\) versus \(S_{33}\) (MPa) relations are shown in Fig. 9. Predictions of the elastic range and the ultimate strength display great agreement with the MD findings, which demonstrates the predictive power of the ML approach.
Figure 8: ANN-4 schematic representation [35].
Figure 9: Predictions of ANN-4 for uniaxial tension in z-direction for a new set of simulations. The strain rates are contained in the training range.
We consider two additional MD simulations that employed strain rates lying beyond the training range, encompassing both extremes. These strain rates are defined as \(\dot{\epsilon}_{3}=0.5\times 10^{-6}/fs\) (slower) and \(\dot{\epsilon}_{4}=30\times 10^{-6}/fs\) (faster). The corresponding outcomes are depicted in Fig. 10, alongside the MD results, for comparison. While the slower rate led to a slightly higher ultimate strength, a remarkable agreement is observed within the elastic range. Here, the ANN effectively captured intricate aspects of the loading path. The findings clearly demonstrate the model's capacity for generalization and robustness across various strain rates, which makes it a promising candidate for effectively describing the constitutive laws of crystalline polymers to be used in large deformation FEM simulations.
## 4 Conclusion and future perspectives
In this study, we have developed an artificial neural network to model the mechanical response of crystalline polymers based on molecular dynamics simulations. Initial configurations are generated to model PA12 in \(\gamma\) crystal form. ReaxFF was our choice for the force field, and MD simulations are performed under uniaxial and biaxial tensile deformations to generate a data set for the learning model.
The ANN architecture is selected through hyperparameter tuning algorithms and carefully calibrated using the training set. The training and validation process identified ANN-4 as the best model, which proved robust and accurate with no significant overfitting. The best model is tested against different strain rates, covering values both within and beyond the loading-rate range of the training set. The results demonstrated the predictive power and generalization capacity of our approach, even with strain rates that are much slower or faster than the original data.
Figure 10: Predictions of ANN-4 for uniaxial tension in z-direction for a new set of simulations. The strain rates are beyond the range of the training data.
The presented methodology proved to be an efficient way of modeling the microstructurally informed constitutive relation of crystalline polymers, which can be used in macroscale FEM simulations. Namely, we presented an approach to accurately predict the mapping between the right Cauchy-Green (\(\mathbf{C}\)) and second Piola-Kirchhoff (\(\mathbf{S}\)) stress tensors, which shall be utilized to compute the strain energy density in large deformation problems. Once the model is trained, we can simply treat it as a black box for computing the strain energy at arbitrary quadrature points and time steps, corresponding to the current deformation state of the material.
There are some caveats that need further attention. As detailed in Section 2.3, there may be a mechanically induced phase transformation of PA12, which may depend on the particular additive manufacturing method and process parameters such as temperature and printing speed. Furthermore, we only considered uniaxial and biaxial tensile deformations in our training set. For general loading conditions, such as pure shear, compression, and torsion, the artificial neural network will need more data and the associated ML training procedure may become more complicated and expensive.
It is worth noting that our current investigation was limited to the fully crystalline form of PA12 for purposes of demonstration. Actual material components consist of semicrystalline structures wherein the microstructure comprises both amorphous and crystalline regions. However, the presented MD and ML methodologies can easily be adapted to amorphous domains. Additionally, considering larger simulation cells, longer relaxation windows, and slower strain rates may be necessary to simulate more accurate and realistic scenarios. These matters will be subject to further investigation and will be reported in a separate study.
## 5 Acknowledgements
We acknowledge the support from the University Grant by Ford Motor Company. Caglar Tamur gratefully acknowledges the financial support for his graduate studies and research by the Fulbright Scholarship Program. |
2309.01771 | ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency
Transformation | The edge processing of deep neural networks (DNNs) is becoming increasingly
important due to its ability to extract valuable information directly at the
data source to minimize latency and energy consumption. Frequency-domain model
compression, such as with the Walsh-Hadamard transform (WHT), has been
identified as an efficient alternative. However, the benefits of
frequency-domain processing are often offset by the increased
multiply-accumulate (MAC) operations required. This paper proposes a novel
approach to an energy-efficient acceleration of frequency-domain neural
networks by utilizing analog-domain frequency-based tensor transformations. Our
approach offers unique opportunities to enhance computational efficiency,
resulting in several high-level advantages, including array micro-architecture
with parallelism, ADC/DAC-free analog computations, and increased output
sparsity. Our approach achieves more compact cells by eliminating the need for
trainable parameters in the transformation matrix. Moreover, our novel array
micro-architecture enables adaptive stitching of cells column-wise and
row-wise, thereby facilitating perfect parallelism in computations.
Additionally, our scheme enables ADC/DAC-free computations by training against
highly quantized matrix-vector products, leveraging the parameter-free nature
of matrix multiplications. Another crucial aspect of our design is its ability
to handle signed-bit processing for frequency-based transformations. This leads
to increased output sparsity and reduced digitization workload. On a
16$\times$16 crossbars, for 8-bit input processing, the proposed approach
achieves the energy efficiency of 1602 tera operations per second per Watt
(TOPS/W) without early termination strategy and 5311 TOPS/W with early
termination strategy at VDD = 0.8 V. | Nastaran Darabi, Maeesha Binte Hashem, Hongyi Pan, Ahmet Cetin, Wilfred Gomes, Amit Ranjan Trivedi | 2023-09-04T19:19:39Z | http://arxiv.org/abs/2309.01771v1 | # ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation
###### Abstract
The edge processing of deep neural networks (DNNs) is becoming increasingly important due to its ability to extract valuable information directly at the data source to minimize latency and energy consumption. Although pruning techniques are commonly used to reduce model size for edge computing, they have certain limitations. Frequency-domain model compression, such as with the Walsh-Hadamard transform (WHT), has been identified as an efficient alternative. However, the benefits of frequency-domain processing are often offset by the increased multiply-accumulate (MAC) operations required. This paper proposes a novel approach to an energy-efficient acceleration of frequency-domain neural networks by utilizing analog-domain frequency-based tensor transformations. Our approach offers unique opportunities to enhance computational efficiency, resulting in several high-level advantages, including array micro-architecture with parallelism, ADC/DAC-free analog computations, and increased output sparsity. Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix. Moreover, our novel array micro-architecture enables adaptive stitching of cells column-wise and row-wise, thereby facilitating perfect parallelism in computations. Additionally, our scheme enables ADC/DAC-free computations by training against highly quantized matrix-vector products, leveraging the parameter-free nature of matrix multiplications. Another crucial aspect of our design is its ability to handle signed-bit processing for frequency-based transformations. This leads to increased output sparsity and reduced digitization workload. On a 16\(\times\)16 crossbars, for 8-bit input processing, the proposed approach achieves the energy efficiency of 1602tera operations per second per Watt (TOPS/W) without early termination strategy and 5311 TOPS/W with early termination strategy at VDD = 0.8 V.
Compute-in-SRAM; deep neural network; frequency transforms; low power computing.
## I Introduction
In recent years, deep learning has gained significant traction in critical domains such as healthcare, finance, security, and autonomous vehicles [1, 2, 3, 4, 5]. Especially as the complexity and accuracy requirements of deep learning applications continue to grow, deploying deep neural networks (DNNs) at the network's edge has become increasingly common for these applications. The edge, characterized by limited computing and storage resources, poses challenges for running large-scale DNN models efficiently. To overcome these limitations, pruning techniques have emerged as a popular approach to enhance edge computing for DNNs [6]. By selectively removing network parameters that contribute only minimally to prediction accuracy, pruning reduces the model's size and the computing/storage resources required for inference.
Two main types of pruning techniques have been developed: unstructured and structured pruning [7, 8]. Unstructured pruning removes connections with minimal magnitude weights, minimizing the impact on prediction accuracy. However, this technique does not always lead to proportional performance benefits, as it disrupts the organization of network weights, affecting their mappability to modular computing substrates like GPUs, FPGAs, or spatial digital processors. On the other hand, structured pruning removes entire channels, filters, rows, or columns in fully connected layers, thereby preserving the data organization structure for mappability [9]. However, structured pruning is susceptible to over-pruning. The constraint of removing entire structures, such as weight channels, may inadvertently eliminate salient weights and connections, creating a trade-off between model size reduction and accuracy. This limitation becomes particularly significant in applications like object detection or segmentation, where fine-grained representations are crucial for achieving high performance [10, 11, 12, 13].
Figure 1: **Frequency-domain processing of neural networks: (a)** Frequency transformations of neural tensors. (b) Prediction accuracy and model compression by increasingly processing more layers of ResNet20 with Walsh-Hadamard transforms (WHT). The accuracy is characterized for the CIFAR10 dataset. Model compression is the ratio of the number of parameters in the frequency domain processing over conventional processing. (c) Increase in multiply-accumulate (MAC) operations under frequency domain processing compared to conventional processing for MobileNetV2 and ResNet20 by increasingly processing more layers in the frequency domain.
Frequency-domain model compression has emerged as an efficient alternative to traditional model pruning techniques [14, 15, 16]. Frequency-based compression leverages fast algorithms for transforms such as the discrete cosine transform (DCT) or discrete Fourier transform (DFT), effectively identifying and removing redundant or uncorrelated information in the frequency domain. Various frequency transformations have been applied to DNNs, including Walsh-Hadamard transform (WHT) [17, 18], discrete cosine transform (DCT) [19, 20], block circulant matrices (BCM) [21], discrete Fourier transform (DFT) [22], discrete wavelet transform (DWT) [23], singular value decomposition (SVD) [24], and principal component analysis (PCA) [25].
For instance, Fig. 1(b) illustrates the potential benefits of mapping the layers of ResNet20 to the transform domain using Walsh-Hadamard transforms (WHT), under which the frequency-domain parameters are computed approximately and convolutions reduce to element-wise multiplications in the WHT domain. By progressively processing more layers in the frequency domain, a remarkable reduction in model size can be achieved while incurring only a limited loss in accuracy. In the figure, when all network layers are processed in the frequency domain, the number of trainable parameters is reduced by 55.6% with only a \(\sim\)3% accuracy loss for the classification of the CIFAR10 dataset. The figure also demonstrates that frequency-domain processing of a typical neural network traces out an accuracy-compression trade-off _curve_; thereby, under accuracy constraints, an optimal number of layers can be processed in the frequency domain to achieve maximal model compression. Moreover, WHT, as considered in this example, is well-suited for low-power and computationally efficient processing, as its transformation matrices consist only of binary values. A more detailed characterization of WHT-based frequency transformation was presented in [26, 27]. Motivated by these findings, this work primarily focuses on WHT-based model compression.
While frequency-domain model compression using WHT provides substantial advantages in model compression, it also introduces a notable increase in the required multiply-accumulate (MAC) operations. Fig. 1(c) compares MAC operations with conventional processing and frequency domain processing when applying frequency transforms to the bottleneck layers of MobileNetV2 and the residual layers of ResNet20. On average, the MAC operations increase three-fold under the transforms for MobileNetV2 when all layers are processed in the frequency domain. This significant increase in computational workload offsets the benefits of frequency domain model size reduction, emphasizing the critical need for computationally efficient transformations to leverage this technique fully.
We present a novel analog acceleration approach to address this challenge and make the following key contributions:
* _Analog acceleration of frequency-domain tensor transformation:_ We leverage the analog representation of operands to simplify the frequency transformation of neural tensors. E.g., by leveraging Kirchhoff's law, we obviate overheads of a digital multiplier to use a transistor for analog-domain multiplications between operands. Similarly, weight-input product terms represented as charges in our scheme are summed over a wire without a dedicated adder. Thus, exploiting physics minimizes the workload, processing elements, and data movements.
* _Matrix-level operational parallelism:_ Analog processing for operands in our scheme also simplifies the design of computing cells, enabling a large crossbar to operate within a limited area and energy budget. Additionally, our novel crossbar micro-architecture facilitates matrix-level operational parallelism through adaptive stitching of computing cells along both column and row dimensions. This enables parallel processing on all input vector elements and parallel determination of all output vector elements.
* _Co-designing of analog crossbar operations and compute flow:_ Even though internal operands are represented in the analog domain, the crossbar computations are performed without the need for ADC/DAC (Analog-to-Digital and Digital-to-Analog) conversions using a co-design approach. This eliminates the overhead of domain conversion and allows for better technology node scalability. We adopt input bitplane-wise processing directly on digital input vector streams to achieve DAC-free computations. Our computations are ADC-free by training against extremely quantized matrix-vector products and leveraging matrix multiplications' parameter-free nature in frequency domain neural processing. Our crossbar design supports signed-bit processing, which promotes increased output sparsity and reduces the workload. Additionally, we present a novel training loss function that promotes output sparsity, further capitalizing on these co-design aspects. Sec. II presents the necessary background for the proposed techniques. Sec. III discusses the architecture for analog-domain frequency transforms. Sec. IV presents simulation results. Sec. V concludes.
## II Background
### _Walsh-Hadamard Transform (WHT)_
WHT shares similarities with the Fast Fourier Transform (FFT) in that both can convert convolutions in the time or spatial domain into multiplications in the frequency domain. However, a major advantage of the WHT over other transformations is that its transform matrix comprises exclusively binary values (-1 and 1). This makes the transformations essentially multiplication-free, resulting in higher efficiency.
Let \(X,Y\in\mathbf{R}^{m}\) be the vectors in the time domain and the WHT domain, respectively, where \(m\) should be an integer power of 2 (\(2^{k},k\in\mathbf{N}\)). Under WHT:
\[Y=W_{k}X \tag{1}\]
Here \(W_{k}\) is a \(2^{k}\times 2^{k}\) Walsh matrix. For WHT of \(X\), a Hadamard matrix \(H_{k}\) is constructed using the following procedure:
\[H_{k}=\left\{\begin{array}{ll}1,&k=0\\ \begin{bmatrix}H_{k-1}&H_{k-1}\\ H_{k-1}&-H_{k-1}\end{bmatrix},&k>0\end{array}\right. \tag{2}\]
Here, we rearrange matrix rows to increase the sign change order, resulting in the Walsh matrix derived from the Hadamard
matrix as generated in Eq. (2). The process entails partitioning the Hadamard matrix of size \(N\) into four sub-matrices of dimensions \(N/2\times N/2\), followed by replacing the upper-left and lower-right sub-matrices with their negative counterparts, i.e., negating each entry. These steps are repeated until each sub-matrix is reduced to a size of \(1\times 1\). The resulting matrix exhibits the unique property that every row of the matrix is orthogonal to each other, with the dot product of any two rows being zero. This orthogonality renders the Walsh matrix particularly advantageous in a wide range of signal and image processing applications [28].
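The construction in Eq. (2) and the subsequent row reordering can be reproduced in a few lines of NumPy. The sketch below orders rows by sequency (the number of sign changes per row), a standard way to obtain the Walsh matrix, used here in place of the recursive sub-matrix procedure described above:

```python
import numpy as np

def hadamard(k: int) -> np.ndarray:
    """Sylvester construction of the 2^k x 2^k Hadamard matrix, Eq. (2)."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def walsh(k: int) -> np.ndarray:
    """Reorder Hadamard rows by sequency (number of sign changes per row)."""
    H = hadamard(k)
    sign_changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]

W = walsh(2)                                   # 4 x 4 Walsh matrix
assert np.array_equal(W @ W.T, 4 * np.eye(4))  # rows are mutually orthogonal
```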
WHT presents a computational challenge when the dimension of the input vector is not a power of two. A technique called blockwise Walsh-Hadamard transforms (BWHT) was introduced to address this issue in [26]. The BWHT approach divides the transform matrix into multiple blocks, each sized to an integer power of two. By partitioning the transform matrix, only the last block requires zero padding if the input vector's dimension is not a power of two. This partitioning strategy significantly reduces the worst-case size of operating tensors, mitigating excessive zero-padding.
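A sketch of the BWHT idea, reusing the `walsh` helper from the previous snippet: the input is split into fixed-size power-of-two blocks, only the tail is zero-padded, and each block is transformed independently. Normalization conventions may differ from those in [26].

```python
import numpy as np

def bwht(x: np.ndarray, block: int = 8) -> np.ndarray:
    """Blockwise WHT: zero-pad only the last block to a power-of-two length."""
    W = walsh(int(np.log2(block)))            # Walsh matrix from the previous sketch
    n_blocks = -(-len(x) // block)            # ceiling division
    padded = np.zeros(n_blocks * block)
    padded[: len(x)] = x
    return (W @ padded.reshape(n_blocks, block).T).T.reshape(-1)

y = bwht(np.arange(20, dtype=float))          # 20 samples -> 3 blocks of 8, 4 zeros padded
```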
### _Frequency-Domain Compression of Deep Neural Networks_
The BWHT-based frequency transformations in Fig. 2 can expand or project channels. The expansion operation typically uses a 1D-BWHT layer to increase the number of channels in the feature map, providing a richer representation for subsequent layers to learn from. On the other hand, the projection operation employs a 1D-BWHT layer to reduce the dimensionality, making the network computationally efficient while retaining essential features. In Pan et al.'s study [26], these transformations maintained matching accuracy while achieving significantly greater compression than the standard implementation on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet. The number of parameters in the BWHT layer thus reduces to the thresholding parameters \(T\), which is significantly smaller than the number of parameters in a \(1\times 1\) convolution layer. The activation function \(S_{T}\) for this frequency-domain compression is given as
\[y=S_{T}(x)=\text{sign}(x)\max(|x|-T,0)=\left\{\begin{array}{ll}x+T,&x<-T\\ 0,&|x|\leq T\\ x-T,&x>T\end{array}\right. \tag{3}\]
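Eq. (3) is the familiar soft-thresholding (shrinkage) operator. A minimal NumPy version, with \(T\) treated as a fixed scalar for illustration (in the actual layer, \(T\) is trainable):

```python
import numpy as np

def soft_threshold(x: np.ndarray, T: float) -> np.ndarray:
    """S_T of Eq. (3): shrink toward zero by T; zero out values with |x| <= T."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

soft_threshold(np.array([-2.0, -0.3, 0.5, 3.0]), T=1.0)   # -> [-1., 0., 0., 2.]
```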
Frequency-domain transformations, such as BWHT, can be integrated into state-of-the-art deep learning architectures to compress them. For example, in the context of MobileNetV2, which uses \(1\times 1\) convolutions in its bottleneck layers to reduce computational complexity, BWHT can replace these convolution layers while achieving similar accuracy with fewer parameters, as shown in Fig. 3(b). Unlike \(1\times 1\) convolution layers, which have a parameter count proportional to the product of input feature map size and the number of output channels, BWHT-based binary layers use fixed Walsh-Hadamard matrices, eliminating trainable parameters. Instead, a soft-thresholding activation function with a trainable parameter \(T\) can be used to attend to important frequency bands selectively. We opt for soft-thresholding over the ReLU activation function
Figure 3: **(a)** Residual block of ResNet20 with \(1\times 1\) convolution layers, in our design \(1\times 1\) convolution layers are replaced with 1D-BWHT layer in ResNet20. **(b)** MobileNetV2 bottleneck layers _vs._ MobileNetV2 bottleneck layers with 1D-BWHT layer. BWHT can replace \(1\times 1\) convolution layers, achieving similar accuracy with fewer parameters.
Figure 2: **(a)** Channel expansion and **(b)** channel projection under one-dimensional Blockwise Hadamard Transformation (BWHT). The figure pictrically shows the transformation of input tensors under expansion and projection steps by zero-padding, Hadamard multiplications, and soft thresholding. A flow diagram for tensor transformations is shown to the right.
because the magnitude of the coefficients in the transform domain holds significant importance. This approach allows us to preserve high-amplitude negative coefficients in the transform domain, which are crucial for our analysis. Similarly, in ResNet20, we can replace \(1\times 1\) convolutions with BWHT layers. This modification is illustrated in Fig. 3(a).
However, BWHT transforms also increase the necessary computations for deep networks, as demonstrated in Fig. 1(c). In Sec. III, we discuss how micro-architectures and circuits can enhance the computational efficiency of BWHT-based tensor transformations.
## III Analog-Domain Frequency Transforms
Analog domain processing has emerged as a promising approach to accelerate vector and matrix-level parallel instructions suitable for low-precision computations, such as matrix-vector products in deep neural networks (DNNs). For instance, analog computations simplify processing cells and enable the integration of storage and computations within a single cell in many compute-in-memory designs. This significantly reduces data movement during deep learning computations, which is a critical bottleneck for energy and performance in traditional processors [29, 30, 31].
However, a major drawback of analog domain processing is its dependence on analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) for domain conversions of operands. These conversions are necessary when interfacing with digital systems or processing digital data, as analog signals must be converted to a digital format for processing and vice versa. Nevertheless, ADC and DAC operations introduce design complexities, significant area/power overheads, and limitations in technology scalability. Moreover, the performance of ADCs and DACs is constrained by speed, power consumption, and cost, further limiting the overall capabilities of analog domain computations.
Figure 4: **Architecture and operation flow for an analog acceleration of frequency-domain neural processing:** The operation consists of four steps: (1) precharging bit lines (BL/BLB) and applying input, (2) enabling parallel local computations in O and OB, (3) activating row-merge to connect all cells row-wise and summing O/OB in sum lines (SL/SLB), and (4) comparing SL/SLB values and applying soft thresholding for accurate output generation.
Figure 5: **Timing diagram of signal flows:** Waveforms of key signals in the CIM operation, including clock signal (CLK), precharge signal (PCH), bit lines (BL/BLB), column lines (CL/CLB), column-merge signal (CM), row lines (RL), row-merge signal (RM), sum lines (SL/SLB), and operating points (O/OB). The four-step CIM operation is completed in two clock cycles. The compact cell design is comprised solely of NMOS transistors.
In the following discussion, we present our proposed techniques for analog domain processing of frequency operations, eliminating the need for ADC/DAC conversions, even when operating on digital input vectors and computing output vectors in the digital domain. To achieve this, our approach incorporates bitplane-wise input vector processing and co-designs learning methods that can operate accurately under extreme quantization, enabling ADC/DAC-free operations.
### _Crossbar Micro-architecture Design and Operation_
In Fig. 4, we leverage analog computations for frequency-domain processing of neural networks. The crossbar in the design combines six-transistor (6T) NMOS-based cells for analog-domain frequency transformation. The corresponding cells for '-1' and '1' entries in the Walsh-Hadamard transform matrix are shown to the figure's right. The crossbar combines these cells according to the elements in the transform matrix. Since the transform matrix is parameter-free, the computing cells in the proposed design are simpler, being NMOS-only, and occupy a lower area than conventional 6T or 8T SRAM-based compute-in-memory cells. Additionally, processing cells in the proposed crossbar are _stitchable_ along rows and columns to enable perfect parallelism and extreme throughput.
Crossbar's operation comprises four steps. These steps are marked in Fig. 4. In the _first step_, the precharge signal (PCH) is activated, charging all the bit lines (BL and BLB) to the supply voltage (VDD). The column merge signal (CM) is set to high, effectively connecting or "stitching" all cells along a column. The input vector is loaded bitwise along the column peripherals. Depending on the sign bit of each element in the input vector, CL or CLB is applied with the corresponding magnitude bit, CL is activated for positive input vector elements, and CLB is activated for negative elements. In the _second step_, the row line (RL) is set to high while the CM and PCH signals are turned off, thus disconnecting cells along a column to allow independent local computations to take place in parallel. Depending on CL/CLB, cell output nodes O and OB either retain the precharge voltage or are grounded. In the _third step_, cell outputs O and OB are summed row-wise by activating the row merge signal (RM) while turning OFF CM and RL. The potential of O and OB are then averaged row-wise on sum lines, SL and SLB, by stitching the computing cells row-wise using a row-merge signal in the figure. In the _fourth and final step_, the values at SL and SLB are compared using a single comparator, resulting in single-bit output.
Fig. 5 shows the signal flow diagram for the above four steps. The 16 nm predictive technology models (PTM) simulation results are shown using the low standby power (LSTP) library in [32] and VDD = 0.85V while boosting RM and CM signals. Unlike comparable compute-in-memory designs such as [33], which place product computations on bit lines, in our design, these computations are placed on local nodes in parallel at all array cells. This improves parallelism, energy efficiency, and performance by computing on significantly less capacitive local nodes than bit lines of traditional designs.
### _ADC-Free by Training against 1-bit Quantization_
Fig. 6 presents the high-level operation flow of our scheme for processing multi-bit digital input vectors and generating the corresponding multi-bit digital output vector using the analog crossbar illustrated in Fig. 4, without the need for ADC and DAC operations. The scheme utilizes bitplane-wise processing of the multi-bit digital input vector and is trained to operate effectively with extreme quantization. In the figure, bits of the same significance across the input vector's elements are grouped and processed in a single step using the scheme described in Fig. 4, which spans two clock cycles. The analog charge-represented output is computed along the row-wise charge sum lines and thresholded to generate the corresponding digital bits. This extreme quantization approach is applied to the computed MAC output, eliminating the need for ADCs. With multiple input bitplanes, as labeled in the figure, the corresponding output bitplanes are concatenated to form the final multi-bit output vector.
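A functional (software) model of this flow is sketched below, assuming signed-magnitude inputs: each bitplane is multiplied with the transform matrix, the analog row sum is reduced to a single comparator bit, and the bits are re-weighted into the output (the per-bitplane rule is formalized later in Eq. (4)). This is our illustration of the scheme, not the hardware itself.

```python
import numpy as np

def bitplane_transform(B: np.ndarray, x: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """ADC/DAC-free evaluation of y ~= Bx with 1-bit quantized per-bitplane sums."""
    sign, mag = np.sign(x), np.abs(x).astype(np.int64)
    y = np.zeros(B.shape[0])
    for b in range(1, n_bits + 1):
        bitplane = sign * ((mag >> (b - 1)) & 1)           # signed bitplane I_{jb}
        comparator = np.where(B @ bitplane > 0, 1, -1)     # 1-bit quantized row sums
        y += comparator * 2 ** (b - 1)
    return y

H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
bitplane_transform(H, np.array([3, -5, 2, 7]))
```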
Due to the extreme quantization applied to the analog output in the above scheme, the resulting digital output vector only approximates the true output vector under transformation. However, unlike a standard deep neural network (DNN) weight matrix, the transformation matrix used in our frequency domain processing is parameter-free. This characteristic enables training the system to effectively mitigate the impact of the approximation while allowing for a significantly simpler implementation without needing ADCs or DACs. The following training methodology achieves this:
Consider the frequency-domain processing of an input vector \(\mathbf{x_{i}}\). As shown in Fig. 2, we process \(\mathbf{x_{i}}\) by transforming it to the frequency domain, followed by parameterized
Figure 6: **Bitplane-wise operation flow: A multi-bit input vector is processed in a bitplane-wise manner. The same significant bits of vector elements are grouped together and processed in a single step. Analog the crossbar rows output vector bits are computed in parallel. The output bits generated for all input bitplanes are concatenated to produce the multi-bit output vector.**
Figure 7: **Approximation functions: Continuous function approximation to (a) sign and (b) quantization function with well-defined derivatives to accelerate loss convergence.**
thresholding, and then reverting the output to the spatial domain. Consider a DNN with \(n\) layers that chain the above sequence of operations as \(\mathbf{x_{i+1}}=F_{0}(S_{T,i}(F_{0}(\mathbf{x_{i}})))\). Here, \(F_{0}()\) is a parameter-free _approximate_ frequency transformation as followed in our scheme in Fig. 6. \(S_{T,i}()\) is a parameterized thresholding function at the i\({}^{th}\) layer whose parameters \(T_{i}\) are learned from the training data.
In stochastic gradient descent (SGD)-based training frameworks, minimizing the loss function \(\mathcal{L}(\mathbf{x_{n}})\) involves the derivatives of \(S_{T}()\) and \(F_{0}()\). Since \(S_{T}()\) is a smooth activation function, its derivatives are well-defined. However, due to the bitplane-wise processing of quantized \(\mathbf{x_{n}}\) and the 1-bit quantization of the resultant dot products, \(F_{0}()\) is a non-continuous function with ill-defined derivatives. Therefore, we approximate the partial derivatives of \(F_{0}()\) to accelerate the loss convergence.
For input \(\mathbf{x}\), the i\({}^{th}\) element of the output vector \(F_{0}(\mathbf{x})\) under our approximate frequency transform can be defined as
\[F_{0,i}(\mathbf{x})=\sum_{b=1}^{B}\text{sign}\Bigg{(}\sum_{j=1}^{N}I_{jb}B_{ij }\Bigg{)}\times 2^{b-1} \tag{4}\]
Here, \(j\) indices over BWHT matrix columns running up to \(N\), and \(b\) indices over input bits running up to \(B\). \(I_{jb}\) is the b\({}^{th}\) bit of digitized input vector \(\mathbf{x}\) at column \(j\). \(B_{ij}\) is BWHT matrix entry at i\({}^{th}\) row and j\({}^{th}\) column. sign() function's output is one if the operand is positive; otherwise, the output is -1. Now consider the partial derivative of \(F_{0,i}()\) w.r.t. j\({}^{th}\) element of \(\mathbf{x}\), i.e., \(x_{j}\):
\[\frac{\partial F_{0,i}}{\partial x_{j}} =\frac{\partial}{\partial x_{j}}\sum_{b=1}^{B}\text{sign}\Bigg{(} \sum_{j=1}^{N}I_{jb}B_{ij}\Bigg{)}\times 2^{b-1} \tag{5a}\] \[=\sum_{b=1}^{B}\text{sign}^{\prime}\Bigg{(}\sum_{j=1}^{N}I_{jb}B_{ij }\Bigg{)}\times\frac{\partial I_{jb}}{\partial x_{j}}\times 2^{b-1} \tag{5b}\]
As can be noticed, the partial derivatives involve the discontinuous sign() and quantization functions. To handle these discontinuities, we approximated them with continuous functions as follows:
\[sign(x)=\lim_{\tau\rightarrow\infty}tanh(x\cdot\tau) \tag{6}\]
\[I_{b}(x)=\lim_{\tau\rightarrow\infty}\frac{exp(-\tau\cdot sin(2\pi\cdot 2^{b_{ max}-b}\cdot x/x_{max}))}{1+exp(-\tau\cdot sin(2\pi\cdot 2^{b_{ max}-b}\cdot x/x_{max}))} \tag{7}\]
Here, the sign() function is approximated as tanh(). Quantization functions corresponding to various significance bits are approximated as in Eq. (7). The above approximation functions involve the hyperparameter \(\tau\) and match the original functions exactly in the limit \(\tau\to\infty\). For loss function minimization over several training iterations, \(\tau\) can be incrementally increased to avoid creating sharp local minima. Fig. 7 plots the approximating functions; for \(I_{b}()\), results are shown for the quantization function corresponding to the second most significant bit. Fig. 8 shows the accuracy under training with 1-bit quantization of the product-sum for ResNet20 and MobileNetV2 on the CIFAR-10 dataset. The results demonstrate that accuracy converges to a similar level across all input quantization levels and is 3-4% lower than the floating-point baseline.
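The two surrogates transcribe Eqs. (6) and (7) directly. A sketch (overflow guards omitted), with \(\tau\) passed explicitly so it can be annealed across training iterations:

```python
import numpy as np

def sign_surrogate(x: np.ndarray, tau: float) -> np.ndarray:
    """Smooth stand-in for sign(), Eq. (6); exact in the limit tau -> infinity."""
    return np.tanh(tau * x)

def bit_surrogate(x: np.ndarray, b: int, b_max: int, x_max: float, tau: float) -> np.ndarray:
    """Smooth stand-in for the b-th bit-extraction function, Eq. (7)."""
    z = -tau * np.sin(2 * np.pi * 2 ** (b_max - b) * x / x_max)
    return np.exp(z) / (1.0 + np.exp(z))
```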
### _Predictive Early Termination by exploiting Output Sparsity_
In Fig. 6, the BWH-transformed output (\(x\)) undergoes filtering via an activation function \(S_{T}()\). This activation function yields a zero output when \(|x|\leq T\). Consequently, we can anticipate a substantial level of output sparsity within the frequency domain processing under consideration. To leverage this sparsity, in the following, we discuss an early termination strategy that enables us to avoid processing all input bitplanes. Instead, we can predictably terminate the processing when we anticipate a zero output, decreasing computation time and energy consumption.
For the predictive early termination scheme, the execution begins with processing the input vector's most significant bitplane (MSB) and progresses to increasingly lower-significance bitplanes. At the currently executed bitplane \(b\), the running output of frequency processing is computed as \(y_{b}=\sum_{k=b}^{B}O_{k}\times 2^{k-1}\). Here, \(O_{k}\in\{-1,1\}\) is the binary output computed during the \(k^{th}\) input bitplane processing by thresholding the computed analog sum on the sum lines. Based on the current running sum output, the expected upper and lower bounds of \(y_{b}\) are computed as \(y_{b,UB}=\sum_{k=b}^{B}O_{k}\times 2^{k-1}+\sum_{k=1}^{b-1}2^{k-1}\) and \(y_{b,LB}=\sum_{k=b}^{B}O_{k}\times 2^{k-1}-\sum_{k=1}^{b-1}2^{k-1}\). Next, the bounds on the running sum are compared to the respective thresholding parameter \(T\). If \(y_{b,UB}\leq T\) and \(y_{b,LB}\geq-T\), processing of the corresponding output element under frequency transformation can be terminated early since it is expected to be zero post-activation.
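A bit-serial software sketch of this check is given below (illustrative only; the hardware realizes it with comparators as in Fig. 10). The slack \(2^{k-1}-1\) for the unprocessed bitplanes follows from \(O_{k}\in\{-1,1\}\).

```python
def early_terminate(bit_outputs, T):
    """bit_outputs holds O_k in {-1, +1}, ordered MSB (bit B) to LSB (bit 1).
    Returns (terminated, bitplane) once the output is guaranteed to be
    zeroed by the activation, i.e., |y| <= T."""
    B = len(bit_outputs)
    running = 0
    for i, o in enumerate(bit_outputs):
        k = B - i                                # current bitplane index
        running += o * 2 ** (k - 1)
        slack = 2 ** (k - 1) - 1                 # max |sum| of remaining bits 1..k-1
        if running + slack <= T and -T <= running - slack:
            return True, k
    return False, 1
```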
Moreover, adjusting thresholding parameters can further improve the opportunities for early termination. Specifically, if the thresholding parameters (\(T\)) of \(S_{T}()\) can be maximized during training, the output sparsity and early termination opportunities are maximized. A \(T\)-dependent regularizer term is added to the loss function for this,
\[\mathcal{L}_{mod}=\mathcal{L}_{acc}(T)-\lambda\log\left(\sqrt{\frac{1}{g(T)^{3} }}\exp\left(-\frac{g(T)}{2}\right)\right). \tag{8}\]
Here, \(\mathcal{L}_{acc}\) is the accuracy-dependent loss based on cross-entropy for classification. Through the second term, \(T\) values tend to gravitate towards either 1 or -1 during training to follow
Fig. 8: **Accuracy under training with 1-bit quantization: Impact of input quantization on the performance of deep learning models (a) ResNet20 and (b) MobileNetV2 on the CIFAR-10 dataset, while considering 1-bit product-sum quantization and varying input quantization levels. The results demonstrate that accuracy converges to a similar level across all input quantization levels, and it is 3-4% lower than the floating-point baseline.**
an inverted Gaussian (Wald) distribution within the interval \((-1,1)\). \(\lambda\) is a hyperparameter controlling the strength of the regularization term, and \(g(T)=abs(T/T_{max})\) normalizes and takes the absolute value of \(T\). Notably, the second term represents the log-likelihood of the absolute value of \(T\) under the inverted Gaussian distribution.
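A PyTorch-style sketch of the modified loss in Eq. (8) is shown below; the small `eps` guard is our addition for numerical stability and is not part of the formulation above.

```python
import torch

def regularized_loss(ce_loss, T, T_max, lam, eps=1e-8):
    # Eq. (8): cross-entropy minus lambda * log-likelihood of g(T) = |T / T_max|
    # under the inverted Gaussian-shaped penalty, pushing T towards +/-1
    g = torch.abs(T / T_max) + eps
    log_term = torch.log(torch.sqrt(1.0 / g ** 3) * torch.exp(-g / 2.0))
    return ce_loss - lam * log_term.sum()
```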
Fig. 9(a) illustrates the impact of the early termination technique on the distribution of the soft-thresholding parameter (\(T\)). In the absence of early termination, the distribution of \(T\) is uniform. However, when we apply the modified loss function defined in Eq. (8), the \(T\) parameter is driven towards -1 and 1, which aids output sparsification and workload reduction. Fig. 9(b) demonstrates an example scenario showcasing how the comparison bounds on the product sum (PSUM), i.e., \(\text{PSUM}_{low}\) and \(\text{PSUM}_{high}\), which determine the early termination, progressively tighten to improve the early termination opportunities with increasing bitplane processing. Fig. 10 shows a digital implementation of predictive early termination. As PSUM computations progress from MSB to lower significance bits, the unknown values are clamped to the highest and lowest values (1 or -1) to determine \(\text{PSUM}_{low}\) and \(\text{PSUM}_{high}\) for checking the early termination criteria.
Fig. 9(c) shows the quantitative advantages of early termination. The figure shows the execution cycles for 8-bit input vector processing over 10,000 random cases. Without the early termination technique, the execution of all eight input bitplanes would be necessary. However, the average number of input bitplanes is significantly lower with early termination while optimizing the distribution of \(T\) parameters, as shown in Fig. 9(a). For most operations, extracting only one bit is sufficient before the operation can be predictably terminated. The average number of extraction cycles needed is less than 2 while maintaining the same level of accuracy. As shown in Fig. 10, the early termination scheme can be implemented with simple digital comparators, shift registers, and logic.
## IV Simulation Results and Discussions
In this section, we discuss simulation results on the presented analog acceleration of frequency domain DNN processing. Simulations were conducted using HSPICE and predictive technology models for 16 nm CMOS technology [34].
### _Exploiting BWHT's Algorithmic Non-ideality Tolerance (ANT) for Processing Simplicity_
Due to the analog processing steps, computations in the proposed approach are susceptible to process variability and noise. For example, in Fig. 6, processing accuracy is susceptible to the offset and noise when the charge-domain processed PSUM is thresholded to -1 or 1 using analog comparators. Although sophisticated analog techniques such as autozeroing [34], chopper stabilization [35], layout guard rings [36], _etc._ can minimize the non-idealities, they invariably come with additional power or area overheads. In Fig. 11(a), we investigate the algorithmic non-ideality tolerance (ANT) of BWHT-based frequency domain processing, which can be leveraged to retain processing simplicity. The figure shows the accuracy trends of BWHT-based ResNet20 on the CIFAR-10 dataset while injecting noise in the product sum. Here, we algorithmically mimic the bitplane-wise processing as implemented by our ADC/DAC-free analog structure in Fig. 4. Before the product sum (PSUM) output is digitized, noise is injected into it to emulate various non-idealities in the design as \(\text{PSUM}\leftarrow\text{PSUM}+N(0,L_{I}\times\sigma_{\text{ANT}})\). \(N()\) is a
Figure 10: **Digital Implementation of Early Termination:** This diagram shows the implementation of early termination using digital components.
Figure 9: **Predictive Early Termination: (a) Distribution of thresholding parameters with and without early termination-based loss function. A modified loss function pushes \(T\)-values towards 1 or -1, aiding the sparsity of neuron output. (b) Comparison bounds determining the early termination with varying bitplane cycles. As the processing moves to lower significance bitplanes, progressively, the comparison bounds tighten, thus increasing the opportunities for early termination. (c) Histogram illustrating the number of bits required for early termination using 10,000 randomly generated 8-bit inputs and weights. The analysis was conducted under two scenarios: 1) when the soft thresholding parameter (T) follows a uniform distribution and 2) when it adheres to an Inverted Gaussian distribution, as dictated by our loss function.**
Gaussian random number generation function, \(\sigma_{\text{ANT}}\) controls the standard deviation of the injected noise, and \(L_{I}\) is the input vector length for which PSUM is computed. Fig. 11(a) shows the prediction accuracy with increasing \(\sigma_{\text{ANT}}\), indicating that \(\sigma_{\text{ANT}}<2\times 10^{-3}\) has an inconsequential impact on the overall accuracy.
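The noise-injection experiment can be mimicked in a few lines; this sketch assumes input bits \(I_{jb}\in\{0,1\}\) stored column-wise and interprets \(N(0,\cdot)\) as zero-mean Gaussian noise with the stated standard deviation.

```python
import numpy as np

def noisy_bitplane_psum(input_bits, bwht_row, sigma_ant, rng=None):
    """input_bits: (L_I, B) array of per-column bitplanes (0/1, LSB first);
    bwht_row: (L_I,) row of +/-1 BWHT entries. Gaussian noise is injected
    into each analog product sum before its 1-bit quantization."""
    rng = rng or np.random.default_rng()
    L_I, B = input_bits.shape
    out = 0
    for b in range(1, B + 1):
        psum = input_bits[:, b - 1] @ bwht_row          # charge-domain sum
        psum += rng.normal(0.0, L_I * sigma_ant)        # PSUM <- PSUM + N(0, L_I * sigma_ANT)
        out += (1 if psum > 0 else -1) * 2 ** (b - 1)   # sign() quantization, Eq. (4)
    return out
```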
In Fig. 11(b), we investigate the implications of such ANT of frequency-domain processing for the simplicity of the proposed analog acceleration. For a hundred random input and weight vectors, we compare the accuracy of the quantized product sum vector, as processed by our ADC/DAC-free analog structure in Fig. 4 under process variability, to the true values. The mismatch in the threshold voltage (i.e., local variability) is simulated by accounting for \(\sigma_{TH}\) = 24 mV for the minimum-sized transistors [34] and scaling \(\sigma_{TH}\) using Pelgrom's law for larger transistors. All analog cell transistors are minimum-sized. Peripherals are scaled according to the array size for the necessary driving strength. In the figure, if the true value of PSUM lies within the safety margin (SM), i.e., \(|\text{PSUM}|<L_{I}\times\text{SM}\), its quantization errors are ignored considering BWHT's ANT in Fig. 11(a). \(L_{I}\) is the input vector length mapped onto the analog processing array. Otherwise, a product sum vector bit inaccuracy is marked as a processing failure. Fig. 11(b) shows the processing failure for 16\(\times\)16 and 32\(\times\)32 array sizes while sweeping the safety margin (SM) in the above simulation setting. Notably, while in Fig. 11(a), \(\sigma\sim 2\times 10^{-3}\) causes an inconsequential impact on the overall accuracy of BWHT-based processing, with a comparable SM, the analog processing is accurate for more than 95% of the considered random cases and array sizes. Thereby, leveraging the algorithmic non-ideality tolerance of BWHT-based processing retains the processing simplicity of our analog acceleration scheme.
### _Design and Operating Space Exploration_
In Fig. 11(c), we demonstrate the variation in processing failure across different supply voltages for \(16\times 16\) and \(32\times 32\) crossbars. Evidently, for the \(32\times 32\) crossbar, the processing failure increases sharply with lower supply voltage, whereas the \(16\times 16\) crossbar design shows significantly better scalability with barely any increase in processing failure. Because of the row- and column-wise parallelism enabled by stitching cells through column-merge and row-merge signals, a larger array becomes quadratically more vulnerable to process variability under supply voltage scaling. Importantly, in Fig. 11(c), boosting the column-merge and row-merge signal voltages by 0.2 V reduces the processing failure for \(32\times 32\) crossbars. Still, the smaller crossbar allows an easier implementation by containing the processing variability even with a single VDD.
In Fig. 11(d), the energy per operation of multiply-accumulate (MAC) processing for one bitplane of the input vector against the frequency-domain parameters is compared at varying supply voltages (VDD) for \(16\times 16\) and \(32\times 32\) crossbars. Notably, by splitting the bit lines of the analog crossbar cell-wise, the processing energy per operation is expected to be only weakly dependent on the crossbar size. This is also evident in the figure. Another notable aspect of our ADC/DAC-free crossbar design is that the simplicity of peripherals is leveraged for crossbar size downscaling. In typical analog crossbar processing, excessive crossbar downscaling results in more pronounced (area/power) resources dedicated to peripherals, thereby limiting downscaling. Meanwhile, smaller crossbars have lower bit line parasitics, and with suitable architecture choices, they can also better adapt to mapping deep learning layers of diverse sizes and channel widths. While the traditional analog crossbar approaches cannot take full advantage of crossbar downscaling, the proposed approach leads to better energy efficiency and allows better mapping
Figure 11: **Noise-Induced Quantization Effects: (a) Algorithmic accuracy vs. standard deviation of noise added to normalized product sum (\(\sigma_{\text{ANT}}\)) (Algorithmic Noise Tolerance). (b) Processing failure vs. safety margin (SM) for \(16\times 16\) and \(32\times 32\) crossbars. The evaluations are conducted with a nominal supply voltage of 0.90 V. (c) Processing failure vs. supply voltage. (d) 1-bit MAC energy/operation vs. supply voltage.**
Figure 12: **Power distribution: This plot shows power distribution among various components of the operation for \(16\times 16\) crossbar.**
by simplifying the peripherals. In Fig. 12, the average power consumption across various components for a \(16\times 16\) crossbar is demonstrated. Row/Column-wise stitching in our scheme incurs \(\sim\)27% power overhead. However, matrix-level parallelism enabled by the scheme also improves the throughput of our design. Note that the scheme processes all elements of an input vector in parallel and computes all elements of the resulting output vector in parallel.
### _Comparison to State-of-the-Art_
Table I presents a comparison of our method with the current state-of-the-art techniques for macro-level multiply-accumulate (MAC) processing. Using 16\(\times\)16 crossbars and 8-bit input processing, our proposed method achieves an energy efficiency of 1602 TOPS/W without employing an early termination strategy. This efficiency increases to 5311 TOPS/W with the inclusion of the early termination strategy at VDD = 0.8 V. In the absence of early termination, processing an eight-bit input requires eight 1-bit MAC cycles, as illustrated in Fig. 6. However, by leveraging our early termination strategy, the average cycle count is significantly reduced to approximately 1.34, as highlighted in Fig. 9(c). The redesigned loss function achieves this efficiency by driving the \(T\) parameters closer to 1 or -1, increasing the opportunities for early termination. It is worth noting that while scaling weight-bit precision often compromises the accuracy and energy efficiency in conventional deep learning implementations, our method effectively navigates around this challenge. Our approach processes inputs using only a one-bit frequency domain transformation matrix while applying high-precision parameters in the activation function, thus effectively sidestepping the typical trade-offs encountered during the model design phase. This synergistic integration of processing and model design facilitates simpler processing without compromising accuracy.
## V Conclusions
We have introduced an ADC/DAC-free analog acceleration approach for deep learning by processing weight-input matrix products in the frequency domain. The proposed approach significantly reduces the necessary network size while matching the accuracy of a full-scale network. Despite the downsizing of weight matrices, the proposed approach maintains the structure of processing matrices by pruning in the frequency domain, thereby maximally leveraging the vector processing abilities for high performance, unlike in unstructured pruning-based approaches. This is due to the implementation of convolutions as element-wise multiplications in the transform domain. Analog computations in the proposed approach leverage physics for computing to minimize the workload while avoiding ADC/DAC for design simplicity and scalability of the processing node. Furthermore, we have implemented a predictive early termination strategy that intelligently terminates computations by accounting for the output sparsity due to rectified activation. To further enhance the potential of the early termination strategy, we have developed a novel loss function that optimizes model parameters to minimize the workload. With 16\(\times\)16 crossbars and 8-bit input processing, our proposed method delivers an energy efficiency of 1602 TOPS/W, which rises to 5311 TOPS/W when incorporating an early termination strategy at VDD = 0.8 V.
|
2303.16524 | Ensemble Learning Model on Artificial Neural Network-Backpropagation
(ANN-BP) Architecture for Coal Pillar Stability Classification | Pillars are important structural units used to ensure mining safety in
underground hard rock mines. Therefore, precise predictions regarding the
stability of underground pillars are required. One common index that is often
used to assess pillar stability is the Safety Factor (SF). Unfortunately, such
crisp boundaries in pillar stability assessment using SF are unreliable. This
paper presents a novel application of Artificial Neural Network-Backpropagation
(ANN-BP) and Deep Ensemble Learning for pillar stability classification. There
are three types of ANN-BP used for the classification of pillar stability
distinguished by their activation functions: ANN-BP ReLU, ANN-BP ELU, and
ANN-BP GELU. This research also presents a new labeling alternative for pillar
stability by considering its suitability with the SF. Thus, pillar stability is
expanded into four categories: failed with a suitable safety factor, intact
with a suitable safety factor, failed without a suitable safety factor, and
intact without a suitable safety factor. There are five inputs used for each
model: pillar width, mining height, bord width, depth to floor, and ratio. The
results showed that the ANN-BP model with Ensemble Learning could improve
ANN-BP performance with an average accuracy of 86.48% and an F_2-score of
96.35% for the category of failed with a suitable safety factor. | G. Aileen Mendrofa, Gatot Fatwanto Hertono, Bevina Desjwiandara Handari | 2023-03-29T08:26:26Z | http://arxiv.org/abs/2303.16524v3 | Ensemble Learning Model on Artificial Neural Network-Backpropagation (ANN-BP) Architecture for Coal Pillar Stability Classification
###### Abstract
Pillars are important structural units used to ensure mining safety in underground hard rock mines. Unstable pillars can significantly increase worker safety hazards and sudden roof collapse. Therefore, precise predictions regarding the stability of underground pillars are required. One common index that is often used to assess pillar stability is the Safety Factor (SF). Unfortunately, such crisp boundaries in pillar stability assessment using SF are unreliable. This paper presents a novel application of Artificial Neural Network-Backpropagation (ANN-BP) and Deep Ensemble Learning for pillar stability classification. There are three types of ANN-BP used for the classification of pillar stability distinguished by their activation functions: ANN-BP ReLU, ANN-BP ELU, and ANN-BP GELU. These three activation functions were chosen because they can solve the vanishing gradient problem in ANN-BP. In addition, a Deep Ensemble Learning process was carried out on these three types of ANN-BP to reduce the prediction variance and improve the classification results. This study also presents new labeling for pillar stability by considering its suitability with the SF. Thus, pillar stability is expanded into four categories: failed with a suitable safety factor, intact with a suitable safety factor, failed without a suitable safety factor, and intact without a suitable safety factor. There are five features used for each model: pillar width, mining height, bond width, depth to floor, and ratio. In constructing the model, the initial dataset is divided into training data, validation data, and testing data. In this case, four type of proportions are used. For training-testing division the proportions are: 80%:20%, 70%:30%, for training-validation-testing division the proportions are: 80%:10%:10%, 70%:15%:15%. Average accuracy, \(F_{1}\)-score, and \(F_{2}\)-score from 10 trials were used as performance indicators for each model. The results showed that the ANN-BP model with Ensemble Learning could improve ANN-BP performance with an average accuracy 87.98% and an \(F_{2}\)-score 99.27% for the category of failed with a suitable safety factor.
Artificial Neural Network, Ensemble Learning, Pillar Stability
## 1 Introduction
Pillars are important structural components in underground hard rock and coal mining. This is because pillars can provide temporary or permanent support for mining and tunneling operations [1]. Pillars can protect machinery and ensure the safety of workers [2]. Unstable pillars can increase employee safety risks and the likelihood of roof collapse [3]. In addition, as the mining depth increases, the increased ground pressure can also lead to more frequent and serious pillar failures [4]. Therefore, a proper assessment of the stability of the underground pillars is necessary. Assessment of the stability of the existing pillars can provide a reference for the designer to avoid unwanted accidents [5].
In general, pillar stability can be divided into three categories, namely: stable, unstable and failed [6]. Safety Factor (SF) is a common index used in several pillar design methodologies to assess pillar stability in relation to pillar strength
and average pillar stress [2]. SF is calculated by dividing the pillar strength by the pillar stress [1]. Theoretically, a rock or coal pillar is considered "unstable" if the SF value is less than 1, and "stable" if it is greater than 1. However, such rigid boundaries are often unreliable, because unstable pillars also frequently appear when the SF value is above 1 [2][6].
Machine learning techniques have recently been used effectively to evaluate the stability of pillars with better accuracy compared to other methods. This is due to an increase in the availability of pillar stability data [4]. Tawadrous and Katsabanis [7] used an Artificial Neural Network (ANN) with a logistic activation function and 12 inputs to classify pillar stability, achieving a testing accuracy of 90-93%. Zhou et al. [8] conducted a comparative study on the performance of six supervised learning methods in classifying pillar stability. One of the supervised learning methods compared was an ANN with a logistic activation function. The highest classification accuracy obtained using the ANN was 80.3% with five model inputs. Li et al. [1] proposed the Logistic Model Trees (LMT) algorithm to classify pillar stability with five model inputs. The training accuracy and the average accuracy (10-fold CV) of LMT were then compared to other existing algorithms, including ANN. The result showed that the LMT model was among the most accurate, with a 10-fold cross-validation accuracy of 79.1-80.5%. Unfortunately, in the existing literature on pillar stability prediction using ANN [1][7][8], the choice of activation function is still limited to the sigmoid (logistic) function. The use of this function can cause vanishing gradient problems when there are many layers in an ANN [9][10]. The vanishing gradient problem is a situation where the partial derivatives of the loss/error function approach zero and effectively disappear. As a result, the ANN stops updating its weights [11] and its performance stops improving. Therefore, the prediction of pillar stability using the ANN algorithm needs to be improved.
One way to overcome the vanishing gradient problem in ANN is to replace the activation function. Several activation functions that have been proposed in recent years to overcome the vanishing gradient problem include ReLU, ELU, and GELU. On the other hand, ensemble learning techniques are also used in ANN to improve ANN performance. Ensemble learning combines the decisions of several predictors in two ways, namely majority vote and averaging. With this combination, the variance of the prediction results will be lower, and consequently the accuracy of the prediction results will increase [12].
In this study, the authors use the South African coal mining data used in [13] by expanding the initial pillar stability categories (intact and failed) into 4 categories based on the stability and its suitability with SF value. In addition, the authors also add a ratio variable referring to research [14]. Next, the authors use Artificial Neural Network-Backpropagation (ANN-BP) with ReLU, ELU, and GELU activation functions for pillar stability classification. Furthermore, classification of pillar stability using ensemble learning with the ANN-BP basic model was also carried out. The results of the classification using ensemble learning are then compared with the performance of a single ANN-BP. The framework of this study can be seen in Fig. 1.
Figure 1: Research framework
## 2 Materials and Method
### Dataset
The dataset used in this study was taken from a journal written by J.N. van der Merwe and M. Mathey [13] entitled "Update of Coal Pillar Database for South African Coal Mining". The data consists of 423 case histories of coal mines in South Africa with 4 types of variables (depth, mining height, bord width, and pillar width) and 2 types of pillar stability labels (intact and failed). The intact label means that the pillar is stable, while the failed label means that the pillar is unstable, which can cause it to collapse. Based on the results of data exploration, it was found that the data contained no missing values, and 337 of the 423 cases in the data were labeled intact. This means that there is a class imbalance in the data. In addition to the 4 variables contained in the data, in this study one additional variable was added, namely Ratio. The value of the Ratio variable is calculated by dividing the pillar width by the mining height.
Furthermore, the SF value is also calculated for each case in the data using the equation (1)-(3) [13][15][16].
\[SF=\frac{S_{p}}{\sigma_{p}}, \tag{1}\]
\[S_{p}=5.47\ \frac{w^{0.8}}{h}\ MPa, \tag{2}\]
\[\sigma_{p}=\frac{25HC^{2}}{w^{2}}\ kPa, \tag{3}\]
where \(w\) denotes pillar width (meters), \(h\) denotes pillar height (meters), 5.47 is the strength of the coal material (MPa), \(H\) denotes the depth to the bottom of the mine (meters), \(C\) denotes the sum of the pillar width and the distance between the pillars (bord width) in meters, and 25 is the product of the overburden density and gravitational acceleration (kPa/m). According to the theory, a rock or coal pillar will be classified as "unstable" if the SF value is less than 1, and "stable" if it is greater than 1. However, unstable or failed pillars also frequently appear when the SF value is above 1 [1]. Previously, pillar stability was classified only by observing its stability in the field. Because there are cases of pillar stability that are not in accordance with the theory, in this study an extension of the pillar stability label was carried out in the dataset used by considering its suitability with the safety factor value. The initial labels in the dataset will be expanded from 2 categories (intact and failed) to 4 categories (F0, F1, I0, and I1) with the description of each label shown in Table 1. Because the expanded label contains additional information on the conformity of the pillar stability to the SF, it is necessary to specify the boundaries that determine whether the stability of the pillars is suitable with the calculation of the SF.
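As a worked illustration, Eqs. (1)-(3) can be computed directly; note that \(S_{p}\) is in MPa while \(\sigma_{p}\) is in kPa, so a unit conversion is needed (our reading of the mixed units; the variable names are ours).

```python
def safety_factor(w, h, H, bord_width):
    """w: pillar width (m), h: pillar height (m), H: depth to floor (m)."""
    S_p = 5.47 * w ** 0.8 / h                # pillar strength, MPa (Eq. 2)
    C = w + bord_width                       # pillar width + bord width, m
    sigma_p = 25.0 * H * C ** 2 / w ** 2     # pillar stress, kPa (Eq. 3)
    return S_p / (sigma_p / 1000.0)          # SF = S_p / sigma_p in MPa (Eq. 1)
```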
In this study, cases with class labels F1 and I1 are considered as outliers in the SF data for each label (intact and failed). Referring to Yang et al [17], assuming the data is normally distributed, the general equation for calculating the outlier threshold in a data is presented in equations (4) and (5).
\[T_{min}=mean-a\ \times SD, \tag{4}\]
\[T_{max}=mean+a\times SD, \tag{5}\]
\begin{table}
\begin{tabular}{c c c} \hline
**Label** & **Description** & **Safety Factor (SF) Value** \\ \hline F0 & Failed with a suitable safety factor & SF value is lower than the specified boundary \\ F1 & Failed without a suitable safety factor & SF value is higher than the specified boundary \\ I0 & Intact with a suitable safety factor & SF value is higher than the specified boundary \\ I1 & Intact without a suitable safety factor & SF value is lower than the specified boundary \\ \hline \end{tabular}
\end{table}
Table 1: Expanded label descriptions
where \(T_{min}\) and \(T_{max}\) denote the minimum and maximum thresholds, respectively. The mean denotes the average of the data, and SD denotes its standard deviation. Thus, the boundary is determined by calculating the SF threshold for each label (intact and failed). The thresholds used in this study were calculated from the average and standard deviation of the safety factor for each label using equations (4) and (5) with \(a=1\). The threshold used for each label is as follows.
\[T_{failed}=\text{Avg. SF~{}Failed}+\text{Std. SF~{}Failed}=2.48\, \tag{6}\]
\[T_{intact}=\text{Avg. SF~{}Intact}-\text{Std. SF~{}Intact}=1.42\, \tag{7}\]
where \(T_{failed}\) denotes the maximum SF value for which a failed label is considered suitable, and \(T_{intact}\) denotes the minimum SF value for which an intact label is considered suitable. The labeling rules are presented in Table 2.
## 3 Preprocessing
Before conducting model training, preprocessing is carried out on the dataset used. This step is needed so that the data can be used as input to the model and the model can learn the characteristics of the dataset better. The preprocessing performed on the dataset includes three stages, namely oversampling, label encoding, and dataset splitting. The oversampling method used in this study is SMOTE. After oversampling, the number of cases for each label was 312. The difference in the number of cases for each category before and after SMOTE can be seen in Table 3. Next, label encoding was also performed for each class. This needs to be done because ANN-BP cannot read categorical data types. Class F0 is encoded as 0, class F1 is encoded as 1, class I0 is encoded as 2, and class I1 is encoded as 3. Finally, the data is divided into training, validation, and testing sets. There are four combinations of data proportions used to test each model, each of which can be seen in Table 4.
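A minimal sketch of this pipeline using scikit-learn and imbalanced-learn is shown below; the column names and random seed are illustrative assumptions.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

FEATURES = ["depth", "pillar_width", "mining_height", "bord_width", "ratio"]  # assumed names
LABEL_MAP = {"F0": 0, "F1": 1, "I0": 2, "I1": 3}

def preprocess(df: pd.DataFrame, test_size=0.2, seed=42):
    X = df[FEATURES].to_numpy()
    y = df["label"].map(LABEL_MAP).to_numpy()                   # label encoding
    X_res, y_res = SMOTE(random_state=seed).fit_resample(X, y)  # oversampling
    return train_test_split(X_res, y_res, test_size=test_size,
                            stratify=y_res, random_state=seed)
```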
## 4 Model Training
The ANN-BP architecture used in this study is multilayer perceptron (MLP). The number of neurons used in the input layer and the output layer in this architecture are 5 and 4 respectively (because there are 5 inputs, namely depth, pillar width, mining height, bord width, and ratio, and 4 output labels, namely F0, F1, I0, and I1). In this study, it was
\begin{table}
\begin{tabular}{c c} \hline Data Proportion & \% Training, Validation, and Testing \\ \hline
1 & 80\% Training, 20\% Testing \\
2 & 70\% Training, 30\% Testing \\
3 & 80\% Training, 10\% Validation, 10\% Testing \\
4 & 70\% Training, 15\% Validation, 15\% Testing \\ \hline \end{tabular}
\end{table}
Table 4: Types of proportions and percentages of training, validation, and testing data
\begin{table}
\begin{tabular}{c c} \hline
**Conditions** & **Label** \\ \hline _Failed_ \& _SF_ \(\leq\) 2.48 & F0 \\ _Failed_ \& _SF_ \(>\) 2.48 & F1 \\ _Intact_ \& _SF_ \(\geq\) 1.42 & I0 \\ _Intact_ \& _SF_ \(<\) 1.42 & I1 \\ \hline \end{tabular}
\end{table}
Table 2: Labeling conditions
Table 3: The difference in the number of cases for each stability category before and after SMOTE
determined that the number of hidden layers was 4 and the numbers of neurons contained in the hidden layers were 512, 256, 256, and 128, respectively. There were 3 ANN-BP models used in this study, differentiated based on the type of activation function used, namely ReLU [18], ELU [19], and GELU [9].
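The architecture described above can be sketched in TensorFlow/Keras as follows; the optimizer and loss are our assumptions, since only the layer sizes, activations, batch size, and early stopping are specified. The built-in `"gelu"` activation requires a recent TensorFlow version.

```python
import tensorflow as tf

def build_ann_bp(activation="gelu"):
    # 5 inputs -> hidden layers of 512, 256, 256, 128 -> 4-class softmax output
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(5,)),
        tf.keras.layers.Dense(512, activation=activation),
        tf.keras.layers.Dense(256, activation=activation),
        tf.keras.layers.Dense(256, activation=activation),
        tf.keras.layers.Dense(128, activation=activation),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam",                       # assumed
                  loss="sparse_categorical_crossentropy", # assumed
                  metrics=["accuracy"])
    return model

# Early stopping on validation loss, as described later in the text
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
```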
Mathematically, the output of the ReLU activation function can be expressed as follows [18].
\[ReLU(x)=\max(0,x)\,,x\in\mathbb{R}. \tag{8}\]
From the mathematical expression of ReLU, we can see that not all neurons are activated by the ReLU function. This is why the ReLU function is said to be more effective than other activation functions. However, the ReLU function cannot produce negative values. This can lead to a particular case of the vanishing gradient problem known as the dying ReLU problem. On the other hand, the ELU activation function was proposed as an improvement over ReLU because it can produce negative values. Mathematically, the output of the ELU activation function can be expressed as follows [19].
\[ELU(x)=\max(0,x)+\min(0,\alpha(e^{x}-1))\,,x\in\mathbb{R}, \tag{9}\]
where \(\alpha\) is a hyperparameter that controls the output for negative inputs. The GELU activation function is a state-of-the-art activation function proposed in 2020 by Hendrycks & Gimpel. While ReLU and ELU gate neurons deterministically based solely on the sign of the input, GELU is formulated as the deterministic expectation of a stochastic gating of neurons. GELU assumes normally distributed inputs with mean 0 and standard deviation 1 (\(X\sim N(0,1)\)), and uses the standard Gaussian cumulative distribution function (\(\Phi(x)\)) as the input multiplier. The GELU function is defined as follows [9].
\[GELU(x)=xP(X\leq x)=x\;\Phi(x)=x\cdot\frac{1}{2}\big{[}1+erf\big{(}x/\sqrt{2} \;\big{)}\big{]} \tag{10}\]
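For reference, Eqs. (8)-(10) translate directly into NumPy/SciPy:

```python
import numpy as np
from scipy.special import erf

def relu(x):            # Eq. (8)
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):  # Eq. (9)
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x) - 1.0))

def gelu(x):            # Eq. (10): exact form via the standard Gaussian CDF
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
```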
The type of ANN-BP ensemble learning used in this study is bagging. The ensemble learning process begins with bootstrapping, i.e., sampling with replacement from the training data to serve as new training data for each base model of the ensemble (ANN-BP ReLU, ANN-BP ELU, and ANN-BP GELU). Three bootstrap percentages are used in this study, namely 70%, 80%, and 90%. After bootstrapping, the training process for each ANN-BP is carried out independently.
The class in ensemble learning is determined by taking a majority vote on the prediction results of the base models. If the predictions of the three ANN-BPs all differ, the final ensemble prediction is the class with the smallest encoding label; in this study, that class is F0. Class F0 is the best choice when the three base models disagree because it carries the smallest risk of prediction error compared to the other classes (F1, I0, and I1): the stability of pillars in class F0 agrees with the stability predicted by the safety factor calculation. In addition, because the predicted stability in class F0 is failed, the pillar is less likely to be built, and consequently the operational costs incurred are smaller. The majority vote and the ensemble learning process carried out in this study are shown in Table 5 and Fig. 2, respectively.
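The decision rule reduces to a few lines; the fallback to class 0 (F0) implements the tie-breaking described above.

```python
from collections import Counter

def ensemble_predict(pred_relu, pred_elu, pred_gelu):
    """Majority vote over the three base model predictions (encoded labels)."""
    votes = Counter([pred_relu, pred_elu, pred_gelu])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else 0   # all three differ -> F0 (encoded 0)
```

For example, inputs (1, 0, 2) yield 0 and (2, 2, 1) yield 2, matching the rows of Table 5.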
In this study, the model simulation was implemented using the Python programming language and executed on Google Colab with GPU running time. The model is built using the TensorFlow library and trained for 10 times. In TensorFlow, the batch_size parameter specifies the number of samples in a batch that are inputted into the neural network before updating the model parameters, while the epochs parameter specifies the number of times the entire dataset is inputted into the neural network. In this simulation the batch_size used for each model is 16 with the maximum number of epochs for one training is 400. Models without data validation are trained using accuracy Early Stopping with a value of \(\text{patience}=10\), which means the model will stop the training process if the training accuracy
\begin{table}
\begin{tabular}{c c c c} \hline \hline Predicted Class by & Predicted Class by & Predicted Class by & Final Prediction by \\ ANN-BP ReLU & ANN-BP ELU & ANN-BP GELU & Ensemble Learning \\ \hline
1 & 1 & 1 & 1 \\
2 & 2 & 1 & 2 \\
1 & 0 & 2 & 0 \\
0 & 3 & 1 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Example of Ensemble Learning Decision Making Process using Majority Vote
value is not improved after 10 epochs. Meanwhile, models with validation data are trained using val_loss Early Stopping with a value of \(\text{patience}=10\), which means the model will stop the training process if the validation loss value does not improve after 10 epochs.
## 4 Results and Discussion
The average accuracy and standard deviation of accuracy from 10 trials of the four models (ANN-BP ReLU, ANN-BP ELU, ANN-BP GELU, and Ensemble Learning) are presented in Table 6-7.
Based on the average model accuracy, ANN-BP with the GELU activation function produces a better average accuracy than ANN-BP with the ReLU and ELU activation functions for each data proportion; the margin is at least 1.36% for the 3rd data proportion. The best average accuracy produced by ANN-BP GELU is obtained with the 4th data proportion, namely 87.98%. On the other hand, the use of ensemble learning increases the average accuracy produced by a single ANN-BP by at least 1.17% (4th data proportion). The best average accuracy produced using ensemble learning is obtained with a 70% bootstrap percentage on the 2nd data proportion, namely 90.64%. This means that the model provides correct pillar stability predictions for around 90.64% of the testing data used.
Overall, ensemble learning increases the average accuracy produced using a single ANN-BP for each data proportion. However, the differences in average accuracy among the three bootstrap percentages are still very small (less than 1%).
\begin{table}
\begin{tabular}{c c c c c c c} \hline
**Data** & **ANN-BP** & **ANN-BP** & **ANN-BP** & **Ensemble** & **Ensemble** & **Ensemble** \\
**Proportion** & **ReLU** & **ELU** & **GELU** & **Learning (70\%)** & **Learning (80\%)** & **Learning (90\%)** \\ \hline
1 & 83.48\% & 82.48\% & 86.32\% & 88.32\% & 88.68\% & 88.76\% \\
2 & 83.60\% & 85.44\% & 87.73\% & 90.64\% & 90.37\% & 90.21\% \\
3 & **82.16\%** & 82.56\% & **83.52\%** & 87.12\% & 87.92\% & 86.48\% \\
4 & 85.48\% & 83.24\% & **87.98\%** & 89.15\% & 89.20\% & **89.31\%** \\ \hline \end{tabular}
\end{table}
Table 6: Average accuracy of the four models
Figure 2: Ensemble Learning Scheme on ANN-BP ReLU, ANN-BP ELU, and ANN-BP GELU
Based on the results from Table 7, the standard deviation of ensemble learning accuracy only ranges from 0.47-1.86%, with the smallest standard deviation of accuracy obtained using Ensemble Learning 90% in the 4th data proportion and the largest obtained using Ensemble Learning 80% in the 3rd data proportion. Meanwhile, the standard deviation of single ANN-BP accuracy ranges from 1.57-6.84%, with the smallest standard deviation of accuracy obtained using the GELU activation function in the 4th data proportion and the largest obtained using the ELU activation function in the 1st data proportion.
Because its value is smaller, the standard deviation of the ensemble learning accuracy is better than that of the single ANN-BP accuracy. This means that the accuracy of predictions made with ensemble learning does not fluctuate much around the average.
In addition to the average accuracy and standard deviation of accuracy, the \(F_{1}\) score for each class label is also calculated and presented in Tables 8-15. In particular, cases with class label F1 must be prioritized compared to cases with other class labels (F0, I0, and I1) because F1 is the most dangerous class. It is said to be the most dangerous because, based on the calculation of the safety factor, pillars in the F1 class are categorized as intact, but in reality these pillars failed. Cases in this class will cause greater operational losses compared to other classes. Therefore, the authors assume that it is much worse to miss a failed stability prediction than to give a false alarm for an intact pillar. This means that, in the F1 category, recall is more important than precision. In this study, F1 class recall is considered to be twice as important as F1 class precision. Therefore, the \(F_{2}\) score for each model is calculated for the F1 category by choosing the value \(\beta=2\) in the \(F_{\beta}\) evaluation metric. The results of calculating the \(F_{1}\) score and \(F_{2}\) score are presented in Tables 8-11 and Fig. 3, respectively.
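For reference, the \(F_{\beta}\) metric is the standard weighted harmonic mean of precision and recall, so that \(\beta=2\) weighs recall twice as heavily as precision:

\[F_{\beta}=(1+\beta^{2})\times\frac{\text{precision}\times\text{recall}}{\beta^{2}\times\text{precision}+\text{recall}}\]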
Based on the \(F_{1}\) scores obtained, among the three types of activation functions, the GELU activation function provides better model performance in classifying the majority of class labels in each data proportion (except for label I1 in the 1st, 2nd, and 3rd data proportions, and label I0 in the 3rd data proportion). The best \(F_{1}\) scores obtained using the GELU activation function are found in the 2nd data proportion: 79.27% for class F0, 96.84% for class F1, 86.53% for class I0, and 87.66% for class I1. On this data, ensemble learning provides equal or better performance in detecting the stability of each label for almost every data proportion. The best \(F_{1}\) scores obtained using ensemble learning are found in the 2nd data proportion with a 70% bootstrap percentage: 82.96% for class F0, 98.19% for class F1, 91.70% for class I0, and 90.24% for class I1.
Based on Fig. 3, it can be seen that ensemble learning provides better performance in detecting class F1 for all data proportions compared to a single ANN-BP. The \(F_{2}\) score of class F1 obtained using ensemble learning reaches 99.27% (Ensemble Learning 70%). This means that the ensemble learning model has a very good ability to detect the presence of class F1.
## 5 Conclusion
Among the three types of ANN-BP activation functions, the GELU activation function provides the best performance in classifying pillar stability, measured in terms of average accuracy, standard deviation, and \(F_{1}\) score. This might be caused by its ability to deterministically deactivate neurons in the neural network during the training process. The best average accuracy using ANN-BP GELU is 87.98% (in the 4th data proportion) and the best standard deviation of accuracy obtained using ANN-BP GELU is 1.57% (4th data proportion).
Ensemble learning (EL) provides excellent performance in the pillar stability classification measured by accuracy, standard deviation, and \(F_{1}\) score. The best average accuracy using EL is 90.64% (EL 70%) and the best standard deviation
Figure 3: \(F_{2}\) Score Comparison of Single ANN-BP and Ensemble Learning Model
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Label**} & **ANN-BP** & **ANN-BP** & **ANN-BP** & **Ensemble** & **Ensemble** & **Ensemble** \\ & **ReLU** & **ELU** & **GELU** & **Learning (70\%)** & **Learning (80\%)** & **Learning (90\%)** \\ \hline F0 & 70.15\% & 70.56\% & 74.14\% & 77.55\% & 77.62\% & 74.68\% \\ F1 & 94.85\% & 94.23\% & 93.87\% & 96.68\% & 97.37\% & 96.68\% \\ I0 & 79.28\% & 79.77\% & **76.52\%** & 88.05\% & 88.93\% & 86.44\% \\ I1 & 79.78\% & 81.33\% & 84.53\% & 83.84\% & 84.91\% & 84.42\% \\ \hline \hline \end{tabular}
\end{table}
Table 11: \(F_{1}\) score for 4th data proportion
of accuracy using EL is 0.47% (EL 90%). Ensemble learning can also improve the performance of the pillar stability classification of a single ANN-BP, with an increase in average accuracy of at least 1.17% and a reduction of the standard deviation of accuracy to at most 1.86%.
Finally, the change of the activation functions in the previous ANN algorithm can improve the classification results. It gives the same or better classification performance with a smaller number of inputs. The additional implementation of ensemble learning also stabilizes the classification process by making the classification results more robust against changes in the input.
The research is supported by Hibah Riset Penugasan FMIPA UI No. 003/UN2.F3.D/PPM.00.02/2022
|
2305.18687 | Graph-based Multi-ODE Neural Networks for Spatio-Temporal Traffic
Forecasting | There is a recent surge in the development of spatio-temporal forecasting
models in the transportation domain. Long-range traffic forecasting, however,
remains a challenging task due to the intricate and extensive spatio-temporal
correlations observed in traffic networks. Current works primarily rely on road
networks with graph structures and learn representations using graph neural
networks (GNNs), but this approach suffers from over-smoothing problem in deep
architectures. To tackle this problem, recent methods introduced the
combination of GNNs with residual connections or neural ordinary differential
equations (ODE). However, current graph ODE models face two key limitations in
feature extraction: (1) they lean towards global temporal patterns, overlooking
local patterns that are important for unexpected events; and (2) they lack
dynamic semantic edges in their architectural design. In this paper, we propose
a novel architecture called Graph-based Multi-ODE Neural Networks (GRAM-ODE)
which is designed with multiple connective ODE-GNN modules to learn better
representations by capturing different views of complex local and global
dynamic spatio-temporal dependencies. We also add some techniques like shared
weights and divergence constraints into the intermediate layers of distinct
ODE-GNN modules to further improve their communication towards the forecasting
task. Our extensive set of experiments conducted on six real-world datasets
demonstrate the superior performance of GRAM-ODE compared with state-of-the-art
baselines as well as the contribution of different components to the overall
performance. The code is available at https://github.com/zbliu98/GRAM-ODE | Zibo Liu, Parshin Shojaee, Chandan K Reddy | 2023-05-30T02:10:42Z | http://arxiv.org/abs/2305.18687v2 | # Graph-based Multi-ODE Neural Networks for Spatio-Temporal Traffic Forecasting
###### Abstract
There is a recent surge in the development of spatio-temporal forecasting models in the transportation domain. Long-range traffic forecasting, however, remains a challenging task due to the intricate and extensive spatio-temporal correlations observed in traffic networks. Current works primarily rely on road networks with graph structures and learn representations using graph neural networks (GNNs), but this approach suffers from the over-smoothing problem in deep architectures. To tackle this problem, recent methods introduced the combination of GNNs with residual connections or neural ordinary differential equations (ODE). However, current graph ODE models face two key limitations in feature extraction: (1) they lean towards global temporal patterns, overlooking local patterns that are important for unexpected events; and (2) they lack dynamic semantic edges in their architectural design. In this paper, we propose a novel architecture called Graph-based Multi-ODE Neural Networks (GRAM-ODE) which is designed with multiple connective ODE-GNN modules to learn better representations by capturing different views of complex local and global dynamic spatio-temporal dependencies. We also add some techniques like shared weights and divergence constraints into the intermediate layers of distinct ODE-GNN modules to further improve their communication towards the forecasting task. Our extensive set of experiments conducted on six real-world datasets demonstrates the superior performance of GRAM-ODE compared with state-of-the-art baselines as well as the contribution of different components to the overall performance. The code is available at [https://github.com/zbliu98/GRAM-ODE](https://github.com/zbliu98/GRAM-ODE)
## 1 Introduction
Spatio-temporal forecasting is one of the main research topics studied in the context of temporally varying spatial data which is commonly seen in many real-world applications such as traffic networks, climate networks, urban systems, etc. (Jiang & Luo, 2022; Du et al., 2017; Jones, 2017; Longo et al., 2017). In this paper, we investigate the problem of traffic forecasting, in which the goal is to statistically model and identify historical traffic patterns in conjunction with the underlying road networks to predict the future traffic flow. This task is challenging primarily due to the intricate and extensive spatio-temporal dependencies in traffic networks, also known as intra-dependencies (i.e., temporal correlations within one traffic series) and inter-dependencies (i.e., spatial correlations among correlated traffic series). In addition to this, frequent events such as traffic peaks or accidents lead to the formation of non-stationary time-series among different locations, thus, posing challenges for the prediction task.
Traffic forecasting is a spatio-temporal prediction task that exploits both the location and event data collected by sensors. Traditional methods such as AutoRegressive Integrated Moving Average (ARIMA) and Support Vector Machine (SVM) algorithms only consider the temporal patterns and ignore the corresponding spatial relations (Joong et al., 2013; Williams & Hoel, 2003; Van Der Voort et al., 1996). Statistical and classical
machine learning methods suffer from limitations in learning complex spatio-temporal interactions, and thus deep learning models were later introduced for this task. Early examples of deep learning approaches to capture spatial and temporal relations are FC-LSTM (Shi et al., 2015), where the authors integrate CNN and LSTM modules, and ST-ResNet (Zhang et al., 2017), which uses deep residual CNNs for both spatial and temporal views. However, these CNN-based methods are developed for grid data and cannot account for traffic road networks, which are more akin to graph structures. Hence, researchers in the domain have recently developed Graph Neural Network (GNN) based approaches for effectively learning graph-based representations. Examples of these graph-based models are STGCN (Yu et al., 2018), which utilizes complete convolutional structures for both temporal and spatial views, and DCRNN (Li et al., 2018), which combines the diffusion process with bi-directional random walks on directed graphs in order to capture spatio-temporal dependencies.
However, such GNN-based methods cannot capture long-range spatio-temporal relations or develop deeper representations due to their limitation of over-smoothing (Lan et al., 2022; Li et al., 2018; Zhou et al., 2020). Deep GNN over-smoothing occurs when a GNN model with a deeper architecture loses its discriminative ability and learns similar node representations for all nodes, making it challenging to learn richer representations and investigate more complex graph structures. In order to address this problem, researchers have combined GNNs with residual or skip connections (Chen et al., 2020), which bypass one or more layers and allow information to flow more freely through the network, thereby improving the network's ability to learn more complex temporal relations. A Neural Ordinary Differential Equation (NODE) is a type of deep learning model that uses continuous-time dynamics to learn more powerful representations of time series data. NODEs can also be used to address over-smoothing in GNNs by providing a more flexible and expressive model architecture for capturing temporal relations. Recently, this was studied by the STGODE (Fang et al., 2021) model that integrates GNNs with NODEs (Chen et al., 2018) to model the dynamics of traffic over time. This combination can derive a continuous GNN (CGNN) with continuous temporal connections toward alleviating the over-smoothing problem (a minimal sketch of this continuous formulation is given after the limitations below). Nevertheless, current CGNN models still encounter the following limitations: (1) Previous approaches in this area tend to overemphasize the global temporal structures while undervaluing local patterns, which are often crucial for predictions in the presence of unexpected traffic events, e.g., a road has a higher chance of getting clogged shortly after a car accident. This will cause drivers to switch to a faster route, and thus the traffic flow on this road may significantly drop for a short period of time. Therefore, ignoring these local temporal patterns may cause significant problems in the final predictions for roads facing unexpected events. (2) The dynamic
Figure 1: An overview of the proposed model alongside prior models. The sub-figures depict traffic data over time with blue nodes representing recording points and lines representing roads. The orange dashed line shows the common global temporal pattern. Our model incorporates local temporal patterns represented by the purple dashed line, node-edge correlations depicted by red arrows, and dynamic edge-edge correlations displayed by green arcs.
correlation of traffic nodes is ignored by existing approaches. In other words, models usually do not consider the dynamic semantic spatial correlations (i.e., dynamic semantic edges) in their architecture. (3) Several baselines use vanilla aggregations, such as average pooling, to combine latent features learned from multiple streams which ignore the high-dimensional feature interactions.
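As background for the continuous graph formulation referenced above, the sketch below shows the core idea with explicit Euler integration; the specific vector field is illustrative and is not the exact formulation of STGODE or GRAM-ODE.

```python
import torch

def ode_func(h, A_hat, W):
    # dh/dt: a simple graph-convolutional vector field over node states h
    return A_hat @ h @ W - h

def continuous_gnn(h0, A_hat, W, t_end=1.0, steps=10):
    """Instead of stacking discrete GCN layers, node states evolve along an
    ODE; the continuous residual path helps mitigate over-smoothing."""
    h, dt = h0, t_end / steps
    for _ in range(steps):
        h = h + dt * ode_func(h, A_hat, W)   # explicit Euler step
    return h
```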
To overcome the above limitations, we propose a novel framework called **GRAM-ODE**, **GRA**ph-based **M**ulti-**ODE** Neural Networks. First, in order to balance the consideration of global and local temporal patterns, we design new ODE modules for the local temporal patterns in addition to the existing ODE module for the global temporal patterns (Fang et al., 2021) using different sizes of temporal kernels (represented as purple and orange dashed lines in Fig. 1). Specifically, for local dependencies, we assign ODE functions to the local kernel's output that approximate local patterns, and then concatenate the results. These local and global ODE modules are depicted with more details in Fig. 3(a) and Fig. 3(b), respectively. Second, we design a new ODE module into our model to consider the dynamic correlation of traffic nodes as well as edges. In other words, at each time step, we find the intermediate dynamic spatial correlations based on the embedding representations of nodes (i.e., dynamic semantic edges), and then construct a new ODE module to approximate patterns of semantic edge-to-edge correlations over time (represented with different edge patterns over time in Fig. 1). More details regarding this dynamic semantic edge ODE module are provided in Fig. 3(c). Third, we also design the nonlinear aggregation paradigm and a multi-head attention across different ODE modules (represented in Fig. 3(d)) and different streams of traffic graphs (represented in the last layer of Fig. 2), respectively. Through these two operations, we account for high-dimensional correlations and similarities of latent features corresponding to different ODE modules and traffic graphs. By doing so, we let the model select and fuse latent features from different views for the forecasting task.
Also, since our proposed **GRAM-ODE** includes multiple ODE modules, we ensure that these different modules are not isolated and have effective connectivity in the intermediate layers. To do so, we design coupled ODE modules in two ways: (1) adding similarity constraint between the semantic parts of local and global modules to ensure these two semantic embeddings do not diverge from one another (represented with red marks in Fig. 3); (2) sharing weights for the global node-based and edge-based ODE modules (represented with green marks in Fig. 3). Therefore, we create a coupled graph-based multi-ODE structure as **GRAM-ODE** in which all modules are designed to effectively connect with each other for the downstream application of traffic prediction. The major contributions of this work are summarized below.
* _Developing a new ODE module for capturing local temporal patterns._ Due to the importance of short-term temporal dependencies in the traffic prediction in case of unexpected events, we develop a new ODE module for short-term dependencies in addition to the current ODE block for global temporal patterns.
* _Developing a new ODE module for the dynamic semantic edges._ In addition to ODE blocks for traffic nodes, which model the dynamic node-to-node correlations over time, we also add a new ODE module for the dynamic semantic edges based on node representations (i.e., dynamic spatial correlations) to model the edge-to-edge correlations over time.
* _Building effective connections between different ODE modules (coupled multi-ODE blocks)._ To build effective interactions between multiple ODE modules, we consider shared weights for node-based and edge-based ODE modules as well as adaptive similarity constraints for the outputs of local and global temporal ODE modules.
* _Designing a new aggregation module with multi-head attention across features of different streams._ To enhance the model's awareness in the selection and fusion of different ODE modules as well as the streams corresponding to different types of traffic graphs, we design a multi-head attention mechanism at the aggregation layers.
The rest of this paper is organized as follows. Section 2 summarizes existing traffic forecasting works based on machine learning, graph-based and NODE-based methods. Section 3 provides some of the required preliminaries and explains the problem formulation. Section 4 covers the details of our proposed **GRAM-ODE** method and its different components. Our experimental evaluation including both quantitative and qualitative comparison results, ablation studies, and robustness assessments are reported in Section 5. Finally, Section 6 concludes the paper.
## 2 Related Works
### Machine Learning and Deep Learning Methods
Researchers have employed traditional statistical and machine learning methods for the task of traffic forecasting. Some prominent example models are (1) K-Nearest Neighbors (KNN) (Zhang et al., 2013), which predicts the traffic of a node based on its k-nearest neighbors; (2) ARIMA (Van Der Voort et al., 1996; Alghamdi et al., 2019), which integrates an autoregressive model with a moving-average operation; and (3) SARIMA (Williams and Hoel, 2003), which extends ARIMA with the ability to recognize seasonal patterns. Many of these machine learning models only consider temporal dependencies and ignore spatial information. Also, they are usually based on human-designed features and are limited in learning informative features for the intended task. Later, deep learning methods became popular due to their ability to capture complex and high-dimensional spatio-temporal dependencies through richer representations. Early examples of deep learning models considering spatial and temporal relations are FC-LSTM (Shi et al., 2015), ConvLSTM (Xingjian et al., 2015), ST-ResNet (Zhang et al., 2017), ST-LSTM (Liu et al., 2016), and STCNN (He et al., 2019), which are usually based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to account for spatial and temporal information. However, these models are developed for grid-based data and disregard the graph structure of traffic road networks. Due to this limitation, researchers moved to the application of graph-based deep learning models such as graph neural networks.
### Graph-based Methods
Graphs provide vital knowledge about the spatial, temporal, and spatio-temporal relationships that can potentially improve the performance of the final model. Recently, researchers employed the graph convolution network (GCN) (Kipf and Welling, 2017) to model spatio-temporal interactions. DCRNN (Veeriah et al., 2015) is an early example that utilizes a diffusion GCN with bi-directional random walks to model the spatial correlations, as well as a GRU (Gated Recurrent Unit) based network to model the temporal correlations. The GRU is an RNN-based method that is neither effective nor efficient at modeling long-range temporal dependencies. To address this limitation, works such as STGCN (Yu et al., 2018) and GraphWaveNet (Wu et al., 2021) utilize convolutional operations for both the spatial and temporal domains. After the rise of attention-based models in deep learning, researchers realized that these two models still have limitations in learning spatial and temporal correlations due to the limited capability of convolutional operations in capturing high-dimensional correlations. Therefore, two attention layers were later employed in ASTGCN (Guo et al., 2019) to capture the spatial and temporal correlations. However, these models are limited in capturing local relations and may lose local information due to the sensitivity of representations to the dilation operation. STSGCN (Song et al., 2020) used localized spatio-temporal subgraphs to enhance prior models in capturing local correlations. However, since its formulation does not incorporate global information, this model still has limitations in long-term forecasting and in dealing with data that include missing entries or noise. In addition to the spatial graphs from predefined road networks, STFGNN (Li and Zhu, 2021) later introduced the use of Dynamic Time Warping (DTW) for data-driven spatial networks, which helps the model learn representations from both data-driven and domain-driven views. STFGNN also utilized a new fusion module to capture spatial and temporal graphs in parallel. However, due to the over-smoothing problem of deep GNNs (which occurs when a GNN learns similar representations for all nodes as the architecture gets deeper), current GNN-based models have difficulty learning rich spatio-temporal representations with deep layers.
### Neural Differential Equation Methods
Neural Differential Equations (NDEs) (Kidger, 2022) provide a new perspective of optimizing neural networks in a continuous manner using differential equations (DEs). In other words, DEs help to create a continuous depth generation mechanism for neural networks, provide a high-capacity function approximation, and offer strong priors on model space. There are a few types of NDEs: (1) neural ordinary differential equation (NODE) such as ODE-based RNN models (Habiba and Pearlmutter, 2020; Lechner and Hasani, 2022) (e.g., ODE-RNN, ODE-GRU, ODE-LSTM) and latent ODE models (Rubanova et al., 2019); (2) neural controlled differential equation (NCDE) (Choi et al., 2022; Kidger et al., 2020; Morrill et al., 2021) which is usually used to learn functions for the analysis of irregular time-series data; and (3) neural stochastic differential equation
(NSDE) (Kidger et al., 2021), which is usually employed for generative models that can represent complex stochastic dynamics. More recently, NODEs have been employed for the traffic forecasting task (Fang et al., 2021; Su et al., 2022; Pu et al., 2022). For example, STGODE (Fang et al., 2021) was proposed to address the aforementioned over-smoothing problem of GNN-based models. STGODE provides a new perspective of continuous depth generation by employing Neural Graph ODEs to capture the spatio-temporal dependencies. However, complex spatio-temporal correlations, such as local temporal dependencies and dynamic node-edge communications, are not captured by this model. In this study, we employ the idea of Neural Graph ODEs for multiple coupled ODE-GNN modules, which take into account the local temporal patterns and dynamic spatio-temporal dependencies, and as a result can provide a better function approximation for forecasting in the presence of complex spatio-temporal correlations.
## 3 Problem Formulation
This section explains the required preliminaries and definitions for GRAM-ODE, and then defines the main problem statement. All notations used in this paper are summarized in Table 1.
### Definitions
_Definition 1: Traffic Graphs_. We consider the traffic network as a graph \(\mathcal{G}=(V,E,A)\), where \(V\) is the set of nodes, \(E\) is the set of edges, and \(A\) is the adjacency matrix such that \(A\in\mathbb{R}^{N\times N}\) if \(|V|=N\). In this paper, we use two types of graphs, the connection map graph and the DTW (dynamic time warping) graph. In the connection map graph, the adjacency matrix, denoted by \(A^{C}\), represents the roads and the actual connectivity among traffic nodes with binary values.
\[A^{C}_{i,j}=\left\{\begin{array}{ll}1,&if\ v_{i}\ and\ v_{j}\ are\ neighbors\\ 0,&otherwise\end{array}\right. \tag{1}\]
\begin{table}
\begin{tabular}{c l} \hline \hline
**Notation** & **Description** \\ \hline \(\mathcal{G}\) & Traffic graph \\ \(V\) & Set of nodes \\ \(E\) & Set of edges \\ \(A^{C}\) & Connection adjacency matrix \\ \(A^{SE}\) & DTW adjacency matrix \\ \(D\) & Degree matrix \\ \(\hat{A}\) & Normalized adjacency matrix \\ \(f_{e}/f_{g}/f_{l}\) & Message generation for edge/global/local Neural ODEs \\ \(\mathcal{H}(t)\) & ODE function (integrated from 0 to \(t\)) initialized with \(\mathcal{H}(0)\) \\ \(\mathcal{H}_{e}(t)/\mathcal{H}_{g}(t)/\mathcal{H}_{l}(t)\) & Edge/global/local ODE module features \\ \(\mathcal{X}\) & Historical time series data \\ \(\mathcal{Y}/\hat{\mathcal{Y}}\) & True/predicted future time series \\ \(L\) & Length of historical time series \\ \(L^{\prime}\) & Length of target time series \\ \(N\) & Number of nodes, \(|V|\) \\ \(C\) & Number of channels \\ \(H\) & Input of multi ODE-GNN \\ \(\hat{\mathcal{A}}/M\) & Updated adjacency matrix / shared spatial weight matrix \\ \(W_{s1},W_{s2}\) & Global temporal shared weights \\ \(W_{l1},W_{l2}\) & Local temporal weights \\ \(\mathcal{W}\) & Weight matrix modeling channel interactions \\ \(\mathbf{EM}\) & Edge message \\ \(\mathbf{GM}\) & Global message \\ \(\mathbf{LM}\) & Local message \\ \(\mathcal{H}_{Ai}(t)\) & Features of a single embedded time step in local ODE-GNN \\ \(L^{\prime\prime}\) & Length of the embedded temporal window in local ODE-GNN \\ \(\mathcal{K}(i)\) & Local ODE-GNN’s output for the \(i\)-th embedded temporal step \\ \(e\) & Learnable threshold parameter in message filtering \\ \(p_{n}\) & Embeddings of the \(n\)-th ODE module in the aggregation layer \\ \(H^{\prime}\) & Output of multi ODE-GNN’s aggregation layer \\ \(W_{r}/b_{r}\) & Weight matrix/bias vector in update layer \\ \(H^{\prime\prime}\) & Output of multi ODE-GNN’s update layer \\ \(H^{l}_{tcn}\) & Hidden states of the \(l\)-th TCN layer \\ \(C^{l}\) & Embedding size of the \(l\)-th TCN layer \\ \(W^{l}\) & Convolutional kernel of the \(l\)-th TCN layer \\ \(d^{l}\) & Exponential dilation rate of the \(l\)-th TCN layer \\ \(W_{q}/W_{k}/W_{v}\) & The attention query/key/value weight matrices \\ \(b_{q}/b_{k}/b_{v}\) & The attention query/key/value bias vectors \\ \(h\) & Number of attention heads \\ \(X^{\prime}_{i}\) & Attention output of the \(i\)-th head \\ \(\delta\) & Error threshold in the Huber loss \\ \hline \hline \end{tabular}
\end{table}
Table 1: Notations used in this paper.
where \(v_{i}\) and \(v_{j}\) refer to traffic nodes \(i\) and \(j\) in the graph. In the DTW graph, the adjacency matrix, indicated by \(A^{SE}\), is generated from the Dynamic Time Warping (DTW) algorithm (Berndt & Clifford, 1994) which calculates the distance between two time-series corresponding to a pair of nodes.
\[A^{SE}_{i,j}=\left\{\begin{array}{ll}1,&if\ DTW\left(X^{i},X^{j}\right)<\epsilon\\ 0,&otherwise\end{array}\right. \tag{2}\]
where \(X^{i}\) and \(X^{j}\) refer to the time-series data of nodes \(i\) and \(j\), respectively, and \(\epsilon\) controls the sparsity of the adjacency matrix. Notably, DTW is more effective than other point-wise similarity metrics (such as Euclidean distance) due to its sensitivity to shape and pattern similarities.
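To make Eqs. (1)–(2) concrete, below is a minimal NumPy sketch of assembling the DTW-based adjacency matrix \(A^{SE}\); the quadratic-time dynamic-programming DTW and the function names are illustrative assumptions, not the exact implementation used in the paper (which could instead rely on an optimized DTW library).

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming DTW between two 1-D series.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_adjacency(X, eps):
    # X: (N, T) array of per-node time series; returns the binary A^SE of Eq. (2).
    N = X.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            if dtw_distance(X[i], X[j]) < eps:
                A[i, j] = A[j, i] = 1.0
    return A
```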
_Definition 2: Graph Normalization._ The adjacency matrix \(A\in\{A^{C},A^{SE}\}\), with \(A\in\mathbb{R}^{N\times N}\), is normalized as \(D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\), where \(D\) is the degree matrix of \(A\). As shown in Eq. (3), the self-loop identity matrix \(I\) is incorporated in the normalization to avoid negative eigenvalues.
\[\hat{A}=\alpha\left(I+D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\right) \tag{3}\]
where \(\hat{A}\) is the normalized adjacency matrix, and \(\alpha\in(0,1)\) restricts the eigenvalues to the range \([0,\alpha]\).
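For reference, Eq. (3) amounts to a few lines of NumPy; the value \(\alpha=0.8\) below is an arbitrary placeholder within the stated range \((0,1)\).

```python
import numpy as np

def normalize_adjacency(A, alpha=0.8):
    # \hat{A} = alpha * (I + D^{-1/2} A D^{-1/2}), following Eq. (3).
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)   # guard isolated nodes
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return alpha * (np.eye(A.shape[0]) + D_inv_sqrt @ A @ D_inv_sqrt)
```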
_Definition 3: Graph-based Neural ODE._ The standard formulation of GNN-based continuous-time ODE function is defined in Eq. (4). It takes the initial value \(\mathcal{H}(0)\), temporal integrals from 0 to given time \(t\), traffic graph \(\mathcal{G}\), and the Neural ODE's network parameter \(\theta\).
\[\mathcal{H}(t)=\mathcal{H}(0)+\int_{0}^{t}f(\mathcal{H}(s),s;\mathcal{G}, \theta)\mathrm{d}s \tag{4}\]
where \(f\) is the process that generates the semantic message of the Neural ODE, parameterized by \(\theta\), to model the hidden dynamics of graph \(\mathcal{G}\). Since the structure of our proposed method consists of multiple ODE-GNN blocks, we instantiate several variants of the graph neural ODE in Eq. (4), which are explained in more detail in Section 4.
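Definition 3 can be realized numerically with an off-the-shelf ODE solver. The sketch below uses the `torchdiffeq` package (assumed to be installed), with a simple linear graph dynamics standing in for the actual message-generation process \(f\) of GRAM-ODE.

```python
import torch
from torchdiffeq import odeint  # assumes torchdiffeq is installed

class GraphODEFunc(torch.nn.Module):
    # A stand-in for f(H(s), s; G, theta): linear dynamics driven by the graph.
    def __init__(self, A_hat, channels):
        super().__init__()
        self.A_hat = A_hat                                # normalized adjacency, (N, N)
        self.W = torch.nn.Linear(channels, channels, bias=False)

    def forward(self, s, H):
        # H: (N, C) node features at integration time s.
        return self.A_hat @ self.W(H)

def solve_graph_ode(H0, A_hat, t_end=1.0, steps=10):
    # Integrates Eq. (4) from the initial value H(0) = H0 up to t_end.
    func = GraphODEFunc(A_hat, H0.shape[-1])
    t = torch.linspace(0.0, t_end, steps)
    return odeint(func, H0, t)                            # shape (steps, N, C)
```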
_Definition 4: Tensor \(n\)-mode multiplication._ We use subscript to identify the tensor-matrix multiplication on the specific dimension as shown below.
\[(\mathcal{B}\times_{2}\mathcal{C})_{ilk}=\sum_{j=1}^{N_{2}}\mathcal{B}_{ijk}\mathcal{C}_{jl} \tag{5}\]

where \(\mathcal{B}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\) and \(\mathcal{C}\in\mathbb{R}^{N_{2}\times N_{2}^{\prime}}\), so \(\mathcal{B}\times_{2}\mathcal{C}\in\mathbb{R}^{N_{1}\times N_{2}^{\prime}\times N_{3}}\). The \(n\)-mode tensor-matrix multiplication is denoted as \(\times_{n}\) with the \(n\)-th subscript.
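As an illustration, the \(n\)-mode product of Eq. (5) reduces to an axis move and a matrix multiplication; the helper below is a hypothetical utility (note that the paper's 1-indexed mode \(\times_{2}\) corresponds to axis 1 in 0-indexed code).

```python
import torch

def mode_n_mul(B, C, n):
    # Contracts dimension `n` of tensor B with the first dimension of matrix C.
    B_moved = B.movedim(n, -1)        # bring mode n to the last axis
    out = B_moved @ C                 # (..., N_n) @ (N_n, N_n') -> (..., N_n')
    return out.movedim(-1, n)

B = torch.randn(4, 5, 6)
C = torch.randn(5, 7)
print(mode_n_mul(B, C, 1).shape)      # torch.Size([4, 7, 6]), matching Eq. (5)
```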
### Problem Statement
The spatio-temporal forecasting problem is described as learning a mapping function \(\mathcal{F}\) that transforms the historical spatio-temporal data \(\mathcal{X}=(X_{t-L+1},X_{t-L+2},...,X_{t})\) into the future spatio-temporal data \(\mathcal{Y}=(X_{t+1},X_{t+2},...,X_{t+L^{\prime}})\), where \(L\) and \(L^{\prime}\) denote the lengths of the historical and target time-series, respectively. In the traffic forecasting problem, we have a historical tensor \(\mathcal{X}\in\mathbb{R}^{B\times N\times L\times C}\) and a traffic graph \(\mathcal{G}\), where \(B\) is the batch size; \(N\) is the number of nodes; \(L\) is the input temporal length; and \(C\) is the number of input features (e.g., traffic speed, flow, density). The goal is to find \(\hat{\mathcal{Y}}=\mathcal{F}(\mathcal{X},f,\mathcal{G})\), in which \(\mathcal{F}\) is the overall forecasting network and \(f\) corresponds to the different graph-based Neural ODE processes.
## 4 Gram-Ode
### Overview
The overview of our proposed GRAM-ODE is shown in Fig. 2, which is composed of two streams of operations: (\(i\)) DTW-based graph (top row) which is constructed based on semantical similarities, and (\(ii\)) connection map graph (bottom row) which is constructed based on geographical spatial connectivities. These two types of adjacency matrices are fed into the model separately to capture spatial correlations from both data-driven and geographical domain-driven views. They are later integrated with the multi-head attention mechanism in the final layer. As shown in Fig. 2, we have three parallel channels for each graph, each of which has two consecutive GRAM-ODE layers with a multi ODE-GNN block sandwiched between two blocks of temporal
dilated convolution (TCN). The final features of each graph from different channels are then concatenated and imported into the attention module (\(AM\)) which is designed to effectively fuse these two separate sets of operations by taking into account all the high-dimensional relations towards the intended forecasting task. We provide further details about each of these components in the subsections below.
### Gram-Ode Layer
Each GRAM-ODE layer consists of a multi ODE-GNN block placed between two TCNs. Fig. 3 illustrates details of our proposed Multi ODE-GNN block in which we combine multiple ODE modules to take into account all the essential temporal and spatial patterns from different viewpoints and extract informative features from their fusion. In the multi ODE-GNN block, we design three types of message passing operations, a message filtering constraint as well as a new aggregation module to consider high-dimensional correlations.
#### 4.2.1 Message Passing Layer
Existing models primarily focus on node-based global temporal correlations and overlook short-term temporal patterns as well as dynamic edge-based correlations in their formulation. Hence, we design three types of message passing processes to formulate ODE modules corresponding to global, local, and edge-based temporal dependencies.
_Shared Weights:_ Although node-based and edge-based features have distinct semantic meanings, they can still exhibit common spatial and temporal patterns. In order to consider these common patterns in our problem formulation, we design a shared weight matrix between the node-based and edge-based global temporal ODE modules (shown in Fig. 3(a) and 3(c)). We define two groups of weight matrices for these shared spatial and temporal weights. The shared spatial weight, denoted by \(M\), is added to the normalized adjacency matrix as \(\hat{\mathcal{A}}=\hat{A}+M\), where \(M\) is initialized randomly from the normal distribution. The shared temporal weights, denoted by \(W_{s1},W_{s2}\), are applied in the node-based and edge-based global temporal modules, given in Eqs. (6) and (7). To consider the local temporal patterns in the model formulation, we apply different sizes of temporal kernels \(W_{l1},W_{l2}\) in the local temporal module, as shown in Eq. (8).
Figure 2: A graphical overview of the proposed GRAM-ODE framework. The DTW-based graph will be obtained from data at the blue triangle mark. The connection map will be obtained from the distance among recording nodes at the blue diamond mark. These two traffic graphs are separately imported into the model which consists of three parallel channels of two consecutive GRAM-ODE layers. Each layer contains a sandwiched multi ODE-GNN block (explained in Fig. 3) between two temporal convolution networks (TCNs). Outputs of these two streams are then aggregated with an Attention Module (\(AM\)) in the final layer.
\[\mathcal{T}_{e} =(\mathcal{H}_{e}(t)\times_{4}W_{s1})^{T}(\mathcal{H}_{e}(t)\times_{4 }W_{s2}) \tag{6}\] \[\mathcal{T}_{g} =(\mathcal{H}_{g}^{T}(t)\times_{3}W_{s1})(\mathcal{H}_{g}^{T}(t) \times_{3}W_{s2})^{T}\] (7) \[\mathcal{T}_{l} =(\mathcal{H}_{l}^{T}(t)\times_{3}W_{l1})(\mathcal{H}_{l}^{T}(t) \times_{3}W_{l2})^{T} \tag{8}\]
where \(W_{s1},W_{s2}\in\mathbb{R}^{L\times L}\) represent the global temporal shared weights, with \(L\) referring to the global temporal length; \(W_{l1},W_{l2}\in\mathbb{R}^{1\times 1}\) represent the local temporal weights for each time step in the embedded temporal space; and \(\mathcal{H}_{e}(t)\), \(\mathcal{H}_{g}(t)\), and \(\mathcal{H}_{l}(t)\) represent the features of the edge, global, and local ODE modules, respectively.
_Global and Local Message Passing:_ Fig. 3(a) represents the global message passing, whose goal is to model long-term node-based temporal patterns in a continuous manner. Eqs. (9) and (10) define the global message passing operation, which returns the global message **GM**. Fig. 3(b) represents the local message passing operations on local temporal data, which emphasize the importance of short-term temporal patterns. In this module, we first use a self-attention mechanism with dense projection layers to create the input in a lower-dimensional embedded temporal space while considering all the possible high-dimensional correlations (similar to the attention module (\(AM\)) explained in more detail in Section 4.3). Then, the features of each time stamp in the embedded temporal inputs are separately imported into the local ODE functions, encouraging the network to learn implicit local dependencies. These features at each embedded time step are denoted by \(\mathcal{H}_{Ai}\in\mathbb{R}^{B\times N\times 1\times C}\), where \(i\in\{0,1,2,\ldots,L^{\prime\prime}-1\}\) and \(L^{\prime\prime}\) is the size of the embedded temporal window. As shown in Eqs. (13) and (14), at each embedded time step, \(\mathcal{H}_{Ai}\) is used as the initial value to formulate the temporal dependencies of the future \(t=L/L^{\prime\prime}\) time stamps, which are returned as \(\mathcal{K}(i)\). Then, the outputs are concatenated to form the final local message \(\textbf{LM}=\mathcal{K}(0)\|\mathcal{K}(1)\|\ldots\|\mathcal{K}(L^{\prime\prime}-1)\).
\[\textbf{GM} =GlobalMessagePassing(\mathcal{H}_{g}(0),\hat{\mathcal{A}}, \mathcal{T}_{g},\mathcal{W})=\mathcal{H}_{g}(0)+\int_{0}^{t}f_{g}(\mathcal{H }_{g}(\tau),\hat{\mathcal{A}},\mathcal{T}_{g},\mathcal{W})\mathrm{d}\tau \tag{9}\] \[f_{g} =\mathcal{H}_{g}(t)\times_{2}(\hat{\mathcal{A}}-I)+((S(\mathcal{ T}_{g})-I)\mathcal{H}_{g}^{T}(t))^{T}+\mathcal{H}_{g}(t)\times_{4}(\mathcal{W}-I) \tag{10}\]
Figure 3: An overview of the multi ODE-GNN block which consists of three ODE modules, i.e., (a) global, (b) local, and (c) edge-based temporal dependencies as well as a (d) new aggregation layer. The inputs and outputs of the multi ODE-GNN block are displayed with \(H\) and \(H^{\prime}\) blocks on the left and right sides of the diagram. The shared weights among different ODE modules are marked in green, and a constraint to limit the divergence of embeddings is marked in red. AM denotes the Attention Module defined in Section 4.3.
\[\textbf{LM}= LocalMessagePassing(ATT,\mathcal{H}_{l}(0),\hat{\mathcal{A}},\mathcal{T}_{l},\mathcal{W})=\mathcal{K}(0)\|\mathcal{K}(1)\|\cdots\|\mathcal{K}(L^{\prime\prime}-1) \tag{11}\] \[\mathcal{H}_{A0},\mathcal{H}_{A1},\ldots,\mathcal{H}_{A(L^{\prime\prime}-1)}=ATT(\mathcal{H}_{l}(0)) \tag{12}\] \[\mathcal{K}(i)= F_{l}(\mathcal{H}_{Ai},t_{0})\|F_{l}(\mathcal{H}_{Ai},t_{1})\|\cdots\|F_{l}(\mathcal{H}_{Ai},t_{L/L^{\prime\prime}-1}) \tag{13}\] \[F_{l}(\mathcal{H}_{Ai},t_{j})=\mathcal{H}_{Ai}+\int_{0}^{t_{j}}f_{l}(\mathcal{H}_{Ai}(\tau),\hat{\mathcal{A}},\mathcal{T}_{l},\mathcal{W})\mathrm{d}\tau \tag{14}\] \[f_{l}= \mathcal{H}_{Ai}(t)\times_{2}(\hat{\mathcal{A}}-I)+((S(\mathcal{T}_{l})-I)\mathcal{H}_{Ai}^{T}(t))^{T}+\mathcal{H}_{Ai}(t)\times_{4}(\mathcal{W}-I) \tag{15}\]
_Edge Message Passing:_ Fig. 3(c) depicts the edge message passing procedure, which takes into account the dynamic edge-based temporal patterns in addition to the node-based correlations. We first create the initial edge features \(\mathcal{H}_{e}(0)\in\mathbb{R}^{B\times N\times N\times L}\) from the node representation \(H\in\mathbb{R}^{B\times N\times L\times C}\) by taking an average over the \(C\) channels and repeating along the \(N\) dimension. Eqs. (16) and (17) represent the edge message passing operations, which return the edge message **EM**.
\[\textbf{EM}= EdgeMessagePassing(\mathcal{H}_{e}(0),\hat{\mathcal{A}},\mathcal{T}_{e} )=\mathcal{H}_{e}(0)+\int_{0}^{t}f_{e}(\mathcal{H}_{e}(\tau),\hat{\mathcal{A} },\mathcal{T}_{e})\mathrm{d}\tau \tag{16}\] \[f_{e}=\mathcal{H}_{e}(t)\times_{2}(\hat{\mathcal{A}}-I)+ \mathcal{H}_{e}(t)(S(\mathcal{T}_{e})-I) \tag{17}\]
In Eqs. (9)–(17), \(\mathcal{H}_{g}\), \(\mathcal{H}_{l}\), and \(\mathcal{H}_{e}\) represent the features of the global, local, and edge ODE modules, respectively; \(\mathcal{T}_{g}\), \(\mathcal{T}_{l}\), and \(\mathcal{T}_{e}\) represent the global, local, and edge temporal weights formulated in Eqs. (6)–(8), respectively; \(\hat{\mathcal{A}}\) represents the updated adjacency matrix based on the shared spatial weights; \(\mathcal{W}\in\mathbb{R}^{C\times C}\) represents the weight matrix modeling the interaction of different channels; \(S(\cdot)\) denotes the sigmoid operation; and \(I\) is the identity matrix.
#### 4.2.2 Message Filtering
To prevent the semantic parts of the local and global modules from diverging from one another, we add a message filtering constraint to the problem formulation (shown with a red mark in Fig. 3). As Eq. (18) shows, this message filtering constraint works like a both-way clipping operation in which we clip the local semantic embeddings if they are significantly larger or smaller than the global semantic embeddings.
\[LM=\left\{\begin{array}{ll}GM+e,&if\ LM>GM+e\\ GM-e,&if\ LM<GM-e\\ LM,&otherwise\end{array}\right. \tag{18}\]
where \(LM\) and \(GM\) represent the local and global semantic embeddings, and \(e\) represents a learnable threshold parameter initialized from a normal distribution.
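As a sketch, the both-way clipping of Eq. (18) can be written with element-wise min/max operations (assuming PyTorch tensors; in the full model \(e\) would be a learnable parameter, e.g., a `torch.nn.Parameter`):

```python
import torch

def message_filter(LM, GM, e):
    # Eq. (18): keep the local message inside the band [GM - e, GM + e].
    return torch.maximum(torch.minimum(LM, GM + e), GM - e)
```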
#### 4.2.3 Aggregation Layer
Fig. 3(d) represents our aggregation paradigm designed for combining the output features of the three ODE modules. Instead of employing simple add or pooling operations in the aggregation layer, we use a non-linear matrix multiplication operation to account for key correlations in higher dimensions. Eq. (19) represents our designed aggregation, in which the output of each ODE module is multiplied by the sum of the other modules normalized by a softmax operation. This gated aggregation has several benefits: (\(i\)) it enables the selection of features that are more crucial for forecasting; (\(ii\)) it allows for non-linear aggregation; and (\(iii\)) it contains a linear gradient path, reducing the risk of vanishing gradients.
\[H^{\prime}=Aggregation(GM,LM,EM)=\frac{1}{2K}\sum_{m=1}^{K}\sum_{n\neq m}^{K}p_{m}\odot softmax(p_{n}) \tag{19}\]
where \(H^{\prime}\) represents the output of the multi ODE-GNN aggregation layer; \(GM\), \(LM\), and \(EM\) represent the global, local, and edge-based semantic embeddings, respectively; \(K=3\) refers to the three ODE modules (\(p_{0}=GM,\ p_{1}=LM,\ p_{2}=EM\)); and \(\odot\) denotes point-wise multiplication.
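For illustration, Eq. (19) with \(K=3\) can be written as below; applying the softmax over the last feature axis is our assumption, since the normalization axis is not specified above.

```python
import torch

def aggregate(GM, LM, EM):
    # Gated aggregation of Eq. (19): each message is modulated by the
    # softmax-normalized versions of the other messages, then averaged.
    msgs = [GM, LM, EM]
    K = len(msgs)
    out = torch.zeros_like(GM)
    for m in range(K):
        for n in range(K):
            if n != m:
                out = out + msgs[m] * torch.softmax(msgs[n], dim=-1)
    return out / (2 * K)
```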
#### 4.2.4 Update Layer
After obtaining the aggregated information, we add a residual connection around the multi ODE-GNN block to update the node information of the GNN. To achieve this, the input of the multi ODE-GNN block is remapped and combined with the aggregated output using a fully connected layer, as shown below.
\[H^{\prime\prime}=Update(H^{\prime},H)=\alpha*Sigmoid(W_{r}H+b_{r})+\beta*H^{\prime} \tag{20}\]
where \(H\) and \(H^{\prime}\) represent the inputs and outputs of the multi ODE-GNN block, respectively; \(W_{r}\) and \(b_{r}\) denote the weight matrix and bias vector of the residual fully connected layer, respectively. Also, \(\alpha\) and \(\beta\) are the hyperparameters identifying the combination weights in the residual connection.
#### 4.2.5 Temporal Convolutional Network
Since the problem considered in this paper is spatio-temporal in nature, we need to model the temporal correlations of nodes in addition to their spatial relations. To model the temporal correlations, we utilize temporal convolutional networks (TCNs), which usually offer more efficient training, more stable gradients, and faster responses to dynamic changes compared to recurrent neural networks (RNNs). The TCN architecture adopts dilated convolutional operations with 1-D kernels along the time axis.
\[H_{tcn}^{l}=\begin{cases}X&,\ \ l=0\\ Sigmoid\left(W^{l}\star_{d^{l}}H_{tcn}^{l-1}\right)&,\ \ l=1,2,\ldots,L\end{cases} \tag{21}\]

where \(X\in\mathbb{R}^{B\times N\times L\times C}\) is the input of the TCN; \(H_{tcn}^{l}\in\mathbb{R}^{B\times N\times L\times C^{l}}\) is the latent output, with \(C^{l}\) as the embedding size of the \(l\)-th TCN layer; \(W^{l}\) represents the convolutional kernel of the \(l\)-th layer; and \(\star_{d^{l}}\) denotes the dilated convolution with exponential dilation rate \(d^{l}\), which is used to expand the receptive field. We usually take \(d^{l}=2^{l-1}\) in the temporal dilated convolution.
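For reference, a single layer of Eq. (21) might look as follows in PyTorch; the kernel size of 2 and the causal left-padding are assumptions, since those details are not specified above.

```python
import torch

class DilatedTCNLayer(torch.nn.Module):
    # One layer of Eq. (21): 1-D convolution along time with dilation d^l = 2^(l-1).
    def __init__(self, c_in, c_out, kernel_size=2, layer_idx=1):
        super().__init__()
        self.dilation = 2 ** (layer_idx - 1)
        self.pad = (kernel_size - 1) * self.dilation        # causal left-padding
        self.conv = torch.nn.Conv1d(c_in, c_out, kernel_size, dilation=self.dilation)

    def forward(self, x):
        # x: (B * N, C_in, L) -- batch and node dims flattened for the 1-D conv.
        x = torch.nn.functional.pad(x, (self.pad, 0))
        return torch.sigmoid(self.conv(x))
```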
Algorithm 1 provides the pseudocode for the GRAM-ODE layer, which sequentially passes the input through a TCN block, the multi ODE-GNN block, and another TCN block. The previously explained steps of the multi ODE-GNN block, including initialization, message passing, message filtering, aggregation, and update, are summarized in lines 4–26.
```
Input: Node Information \(\mathcal{X}\), Traffic Graph \(\mathcal{G}\)
Output: Updated Node Information \(\hat{\mathcal{X}}\)
1  # TCN Block
2  \(H\leftarrow\) TCN(\(\mathcal{X}\))
3  # Find Initial Values
4  \(\mathcal{H}_{g}(0)\leftarrow H\)
5  \(\mathcal{H}_{A0},\ldots,\mathcal{H}_{A(L^{\prime\prime}-1)}\leftarrow\) ATT(\(H\))
6  \(\bar{H}\leftarrow\) Mean(\(H\)) # average over the channel dimension
7  \(\mathcal{H}_{e}(0)\leftarrow\) Repeat(\(\bar{H}\)) # repeat N times
8  # Edge Message Passing
9  \(EM\leftarrow\) Eq. (16) with \(f_{e}=\mathcal{H}_{e}(t)\times_{2}(\hat{\mathcal{A}}-I)+\mathcal{H}_{e}(t)(S(\mathcal{T}_{e})-I)\)
10 # Global Message Passing
11 \(GM\leftarrow\) Eq. (9) with \(f_{g}=\mathcal{H}_{g}(t)\times_{2}(\hat{\mathcal{A}}-I)+((S(\mathcal{T}_{g})-I)\mathcal{H}_{g}^{T}(t))^{T}+\mathcal{H}_{g}(t)\times_{4}(\mathcal{W}-I)\)
12 # Local Message Passing
13 \(LM\leftarrow\) Eq. (11) with \(f_{l}=\mathcal{H}_{Ai}(t)\times_{2}(\hat{\mathcal{A}}-I)+((S(\mathcal{T}_{l})-I)\mathcal{H}_{Ai}^{T}(t))^{T}+\mathcal{H}_{Ai}(t)\times_{4}(\mathcal{W}-I)\)
14 # Message Filtering
15 if \(LM>GM+e\) then
16     \(LM\leftarrow GM+e\)
17 else if \(LM<GM-e\) then
18     \(LM\leftarrow GM-e\)
19 else
20     \(LM\leftarrow LM\)
21 end if
22 # Aggregate Multi ODE-GNN Messages
23 \(p_{0},p_{1},p_{2}\leftarrow GM,LM,EM\)
24 \(H^{\prime}\leftarrow\frac{1}{2K}\sum_{m=1}^{K}\sum_{n\neq m}^{K}p_{m}\odot softmax(p_{n})\)
25 # Update
26 \(H^{\prime\prime}\leftarrow\alpha*Sigmoid(W_{r}H+b_{r})+\beta*H^{\prime}\)
27 # TCN Block
28 \(\hat{\mathcal{X}}\leftarrow\) TCN(\(H^{\prime\prime}\))
```
**Algorithm 1** GRAM-ODE Layer
### Attention Module
We use the attention mechanism in the last layer of GRAM-ODE to effectively aggregate the final learned embeddings of the two traffic graphs (i.e., the DTW-based graph and the connection map graph) in a way that is better aligned with the forecasting objective. The attention module (AM) is designed to replace the previous fully connected layers while capturing the correlations of high-dimensional information. In this module, we first concatenate the embeddings of the two graphs and then compute the attention scores between them. Eqs. (22) and (23) mathematically describe this attention operation.
\[Q=XW_{q}+b_{q},\ K=XW_{k}+b_{k},\ V=XW_{v}+b_{v} \tag{22}\]
\[X^{\prime}_{i}=softmax\ \Bigg{(}\sqrt{\frac{h}{C^{\prime}}}\ *(\ Q_{i}^{T}K_{i}\,)\Bigg{)}\ V_{i} \tag{23}\]
where \(W_{q}(b_{q})\), \(W_{k}(b_{k})\), and \(W_{v}(b_{v})\) represent the attention query, key, and value weight matrices (bias vectors), respectively; \(X\) is the input of the attention module; \(h\) is the number of heads; and \(\sqrt{\frac{h}{C^{\prime}}}\) is the normalization factor, with \(C^{\prime}\) representing the embedding dimension. Therefore, \(Q\):\(\{Q_{1},Q_{2},...,Q_{h}\}\), \(K\):\(\{K_{1},K_{2},...,K_{h}\}\), and \(V\):\(\{V_{1},V_{2},...,V_{h}\}\) denote the query, key, and value sets for the multiple attention heads. Finally, the output of the attention module \(X^{\prime}_{i}\) can be mapped back to the feature space of the original data for prediction with a linear dense layer.
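A sketch of this module is given below, assuming inputs of shape (batch, sequence, embedding), where the sequence axis indexes the concatenated embeddings of the two graphs; the class and variable names are hypothetical.

```python
import torch

class AttentionModule(torch.nn.Module):
    # Multi-head self-attention following Eqs. (22)-(23).
    def __init__(self, dim, heads=12):
        super().__init__()
        assert dim % heads == 0
        self.h, self.dk = heads, dim // heads
        self.q = torch.nn.Linear(dim, dim)
        self.k = torch.nn.Linear(dim, dim)
        self.v = torch.nn.Linear(dim, dim)

    def forward(self, x):
        B, S, dim = x.shape
        split = lambda t: t.view(B, S, self.h, self.dk).transpose(1, 2)  # (B, h, S, dk)
        Q, K, V = split(self.q(x)), split(self.k(x)), split(self.v(x))
        # Scores use the sqrt(h / C') normalization factor from Eq. (23).
        scores = torch.softmax((self.h / dim) ** 0.5 * (Q @ K.transpose(-2, -1)), dim=-1)
        return (scores @ V).transpose(1, 2).reshape(B, S, dim)           # concat heads
```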
### Loss Function
In regression problem settings (such as traffic flow forecasting), the Huber loss between the true and predicted values is widely used in the literature. It combines the merits of the \(L1\) and \(L2\) loss functions. The Huber loss is a piece-wise function consisting of two parts: (1) a squared term with a small and smooth gradient when the difference between the true and predicted values is small (i.e., less than a threshold \(\delta\)), and (2) a restricted-gradient term when the true and predicted values are far from each other. Eq. (24) represents the standard form of the Huber loss function.
\[L(\hat{\mathcal{Y}},\mathcal{Y})=\begin{cases}\dfrac{1}{2}(\hat{\mathcal{Y}}- \mathcal{Y})^{2},\qquad|\hat{\mathcal{Y}}-\mathcal{Y}|\leq\delta\\ \delta|\hat{\mathcal{Y}}-\mathcal{Y}|-\dfrac{1}{2}\delta^{2},\ otherwise\end{cases} \tag{24}\]
where \(\delta\) is a hyperparameter value set for the intended threshold; \(\mathcal{Y}\) is the true future spatio-temporal data; and \(\hat{\mathcal{Y}}\) is the predicted future data.
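Eq. (24) translates directly into a few lines of PyTorch, as sketched below; `torch.nn.HuberLoss(delta=...)` provides an equivalent built-in.

```python
import torch

def huber_loss(y_hat, y, delta=1.0):
    # Piece-wise Huber loss of Eq. (24); `delta` is the error threshold.
    err = torch.abs(y_hat - y)
    return torch.where(err <= delta,
                       0.5 * err ** 2,
                       delta * err - 0.5 * delta ** 2).mean()
```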
Algorithm 2 provides the complete pseudocode of GRAM-ODE training. In each optimization step of this algorithm, the outputs of all GRAM-ODE layers across parallel channels are concatenated, and these concatenated outputs of different graph types are then fed into the attention module for better aggregation and prediction.
```
Input: Historical Data \(\mathcal{X}\), Future Data \(\mathcal{Y}\), Traffic Graph \(\mathcal{G}\)
Output: Forecast Model with parameters \(\theta\)
1  initialize model parameters \(\theta\)
2  normalize the historical data \(X\leftarrow\frac{\mathcal{X}-mean(\mathcal{X})}{std(\mathcal{X})}\)
3  for number of epochs until convergence do
4      for batch in num_batches do
5          layers \(\leftarrow[\,]\)
6          for graph \(g\) in \(\mathcal{G}\) do
7              repeat
8                  \(\hat{X}\leftarrow\) GRAM-ODE Layer(\(X,g\))
9                  layers \(\leftarrow\) [layers, \(\hat{X}\)] # Concatenation
10             until num_parallel_layers
11         end for
12         \(X^{\prime}\leftarrow\) Attention Module(layers)
13         \(\hat{\mathcal{Y}}\leftarrow X^{\prime}\times std(\mathcal{X})+mean(\mathcal{X})\)
14         \(l\leftarrow\) Huber Loss(\(\hat{\mathcal{Y}}\), \(\mathcal{Y}\)) # Compute Loss
15         \(\theta\leftarrow\theta-\nabla_{\theta}l\) # Update Parameters
16     end for
17 end for
```
**Algorithm 2** GRAM-ODE training
## 5 Our Experiments
We conduct experiments on six real-world datasets and seven baseline models to evaluate the effectiveness of our proposed GRAM-ODE and its components for the traffic forecasting task.
### Datasets
We show the performance of our model on six widely used public benchmark traffic datasets 1: PEMS03, PEMS04, PEMS07, and PEMS08 released by (Song et al., 2020), as well as PEMS-BAY (Li et al., 2017) and METR-LA (Jagadish et al., 2014). The first four datasets (PEMS03, PEMS04, PEMS07, PEMS08) are named after the California districts they cover (District 3, District 4, District 7, and District 8, respectively). PEMS-BAY covers the San Francisco Bay Area, and METR-LA focuses on the traffic data of the Los Angeles metropolitan area. All these datasets record three features (flow, occupancy, and speed) at each location point over a period of time, with 5-minute time intervals. The spatial connection network for each dataset is constructed from the existing road network. The data statistics are shown in Table 2. Before use in the experiments, we pre-process the features with z-score normalization.
Footnote 1: Datasets are downloaded from STSGCN github repository [https://github.com/Davidham3/STSGCN/](https://github.com/Davidham3/STSGCN/)
### Evaluation Metrics
We use Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE) metrics to evaluate the spatio-temporal forecasting. These metrics are defined as follows.
\[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}|,\quad MAPE=(\frac{1}{n}\sum_ {i=1}^{n}|\frac{y_{i}-\hat{y}_{i}}{y_{i}}|)*100\%,\quad RMSE=\sqrt{\frac{1}{ n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}},\]
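These metrics can be computed with the short helper below; masking out zero-valued targets before the MAPE division is our assumption, as the handling of zeros is not specified above.

```python
import numpy as np

def evaluate(y, y_hat):
    # MAE / MAPE / RMSE as defined above; y and y_hat are arrays of equal shape.
    mae = np.mean(np.abs(y - y_hat))
    mask = y != 0                      # avoid division by zero (assumed convention)
    mape = np.mean(np.abs((y[mask] - y_hat[mask]) / y[mask])) * 100.0
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return mae, mape, rmse
```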
### Baselines
We compared our proposed GRAM-ODE with the following baselines.
* **ARIMA** (Box and Pierce, 1970): Auto-Regressive Integrated Moving Average is one of the most well-known statistical models for time-series analysis.
* **DCRNN** (Veeriah et al., 2015): Diffusion Convolutional Recurrent Neural Network utilizes diffusion graph convolutional networks with bidirectional random walks on directed graphs, and seq2seq gated recurrent unit (GRU) to capture spatial and temporal dependencies, respectively.
* **STGCN** (Yan et al., 2018): Spatio-Temporal Graph Convolutional Network combines graph structure convolutions with 1D temporal convolutional kernels to capture spatial dependencies and temporal correlations, respectively.
* **GraphWaveNet**(Wu et al., 2021): GraphWaveNet integrates adaptive graph convolution with 1D dilated casual convolution to capture spatio-temporal dependencies.
* **STSGCN**(Song et al., 2020): Spatio-Temporal Synchronous Graph Convolutional Networks decompose the problem into multiple localized spatio-temporal subgraphs, assisting the network in better capturing of spatio-temporal local correlations and consideration of various heterogeneities in spatio-temporal data.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline
**Data** & **PEMS03** & **PEMS04** & **PEMS07** & **PEMS08** & **PEMS-BAY** & **METR-LA** \\ \hline Location & \multicolumn{6}{c}{CA, USA} \\ \hline Time Span & 9/1/2018 - & 1/1/2018 - & 5/1/2017 - & 7/1/2016 - & 1/1/2017 - & 3/1/2012 - \\ & 11/30/2018 & 2/28/2018 & 8/31/2017 & 8/31/2016 & 5/31/2017 & 6/30/2012 \\ \hline \hline Time Interval & \multicolumn{6}{c}{5 min} \\ \hline Sensors & 358 & 307 & 883 & 170 & 325 & 207 \\ \hline Edges & 547 & 340 & 866 & 295 & 2,369 & 1,515 \\ \hline Time Steps & 26,208 & 16,992 & 28,224 & 17,856 & 52,116 & 34,272 \\ \hline \end{tabular}
\end{table}
Table 2: Basic statistics of the datasets used in our experiments.
* **STFGNN**(Li & Zhu, 2021): Spatio-Temporal Fusion Graph Neural Networks uses Dynamic Time Warping (DTW) algorithm to gain features, and follow STSGCN (Song et al., 2020) in using sliding window to capture spatial, temporal, and spatio-temporal dependencies.
* **STGODE**(Fang et al., 2021): Spatio-Temporal Graph ODE Networks attempt to bridge continuous differential equations to the node representations of road networks in the area of traffic forecasting.
### Experimental Settings
Following previous works in this domain, we split the entire dataset into 6:2:2 for the train, validation, and test sets. The split follows temporal order, using the first 60% of the time span for training and the subsequent 20% each for validation and testing. We use the past one hour to predict the future one hour; since the data collection interval is 5 minutes, \(L=L^{\prime}=12\) temporal data points. As shown in Table 2, the number of sensors, and therefore \(|V|\), differs across datasets. The DTW threshold (\(\epsilon\)) in Eq. (2) is 0.1; the number of channels (\(C\)) is 3 in the historical data (i.e., flow, speed, and occupancy) and 64 in the embedding space. The shared temporal weights \(W_{s1},W_{s2}\in\mathbb{R}^{12\times 12}\) are initialized randomly from the normal distribution. The length of the latent space for the input of the local ODE block is \(L^{\prime\prime}=4\), and the number of attention heads in the final attention module is \(h=12\). During training, we use learning rates of \(10^{-4}\), \(10^{-4}\), \(10^{-5}\), \(10^{-5}\), \(10^{-4}\), and \(10^{-4}\) for the PEMS03, PEMS04, PEMS07, PEMS08, PEMS-BAY, and METR-LA datasets, respectively. The optimizer is AdamW. All experiments are implemented in PyTorch (Paszke et al., 2019) and trained on a Quadro RTX 8000 GPU with 48GB of RAM.
### Experimental Results
Our proposed GRAM-ODE outperforms the other baselines on all datasets in Table 3, except for the MAPE metric on PEMS07, where it is slightly higher than that of STFGNN. ARIMA performs considerably worse than the other baselines, likely because it ignores the graph structure of the spatio-temporal data. GraphWaveNet performs relatively poorly, possibly due to its limited capability in stacking spatio-temporal layers and expanding the receptive field of its 1D CNN temporal kernels. DCRNN uses bi-directional random walks and a GRU to model spatial and temporal information, respectively, but its relatively low modeling efficacy and efficiency for long-range temporal information may explain its subpar performance. STGCN uses a GCN for the spatial domain and 1D dilated convolutional operations for the temporal domain; it may lose local information due to the dilation operation, and the absence of attention-based operations together with the limited capability of convolutional operations in modeling high-dimensional spatial, temporal, and spatio-temporal correlations may further contribute to its relatively poor performance. STSGCN captures local correlations with localized spatio-temporal subgraphs but may miss global information, and thus performs poorly in long-range forecasting and with data containing missing entries or noise. STFGNN uses DTW for latent spatial networks and performs well, but is limited in learning comprehensive spatio-temporal correlations. STGODE
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline
**Dataset** & **Metric** & **ARIMA** & **DCRNN** & **STGCN** & **GraphWaveNet** & **STSGCN** & **STFGNN** & **STGODE** & **GRAM-ODE** \\ \hline \multirow{3}{*}{PEMS03} & MAE & 33.51 & 18.18 & 17.48 & 19.85 & 17.48 & 16.77 & 16.50 & 15.72 \\ & MAPE(\%) & 33.78 & 18.91 & 17.15 & 19.31 & 16.78 & 16.30 & 16.69 & 15.98 \\ & RMSE & 47.59 & 30.31 & 30.12 & 32.94 & 29.21 & 28.34 & 27.84 & 26.40 \\ \hline \multirow{3}{*}{PEMS04} & MAE & 33.73 & 24.70 & 22.70 & 25.45 & 21.19 & 20.84 & 20.84 & 19.55 \\ & MAPE(\%) & 24.18 & 17.12 & 14.59 & 17.29 & 13.90 & 13.02 & 13.77 & **12.66** \\ & RMSE & 48.80 & 38.12 & 35.55 & 39.70 & 33.65 & 32.51 & 32.82 & **31.05** \\ \hline \multirow{3}{*}{PEMS07} & MAE & 38.17 & 25.30 & 25.38 & 26.85 & 24.26 & 23.46 & 22.99 & **21.75** \\ & MAPE(\%) & 19.46 & 11.16 & 11.08 & 12.12 & 10.21 & **9.21** & 10.14 & 2.74 \\ & RMSE & 59.27 & 38.58 & 38.78 & 42.78 & 39.03 & 36.60 & 37.54 & 34.42 \\ \hline \multirow{3}{*}{PEMS08} & MAE & 31.09 & 17.86 & 18.02 & 19.13 & 17.13 & 16.94 & 16.81 & **16.05** \\ & MAPE(\%) & 22.73 & 11.45 & 11.40 & 12.68 & 10.96 & 10.60 & 10.62 & **10.58** \\ & RMSE & 44.32 & 27.83 & 27.83 & 31.05 & 26.80 & 25.22 & 25.97 & **25.17** \\ \hline \multirow{3}{*}{PEMS-BAY} & MAE & 3.38 & 2.07 & 2.49 & 1.95 & 2.11 & 2.02 & 2.30 & **1.67** \\ & MAPE(\%) & 8.30 & 4.90 & 5.79 & 4.61 & 4.96 & 4.79 & 4.61 & **3.83** \\ & RMSE & 6.50 & 4.74 & 5.69 & 4.88 & 4.85 & 4.63 & 4.89 & **3.34** \\ \hline \multirow{3}{*}{METR-LA} & MAE & 6.90 & 3.60 & 4.59 & 3.53 & 3.65 & 3.55 & 3.75 & **3.44** \\ & MAPE(\%) & 17.40 & 10.50 & 12.70 & 10.01 & 10.67 & 10.56 & 10.26 & **9.38** \\ \cline{1-1} & RMSE & 13.23 & 7.60 & 9.40 & 7.37 & 7.81 & 7.47 & 7.37 & **6.64** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of GRAM-ODE and baselines on six benchmark datasets. A lower MAE/MAPE/RMSE indicates better performance. The **best** results are in bold and the second-best are underlined.
uses Neural ODEs to capture spatio-temporal dynamics and achieves very good performance compared to other baselines, but still lacks the ability to capture complex spatio-temporal correlations, balance local and global patterns, and model dynamic interactions between its components.
### Case Study
We select two nodes from the PEMS08 road network to conduct a case study for a qualitative demonstration of the results. As Fig. 4 shows, the curve predicted by our proposed GRAM-ODE (red) aligns more closely with the ground truth than that of STGODE (grey). The ground truth at node 109 has more fluctuations than at node 17, which makes the forecasting task more difficult. We can also observe that our model responds faster to these abrupt fluctuations. This highlights the effectiveness of the local ODE-GNN block in our architecture, which helps the model better learn the local fluctuations.
### Ablation Study
To investigate the effect of different components of GRAM-ODE, we conduct ablation experiments on PEMS04 and PEMS08 with several different variants of our model.
1. **Base:** In this model, data only passes through a TCN, an ODE block and another TCN module. The ODE block only contains the global ODE-GNN with a fully connected layer after that. The output of TCN is then imported to a simple fully connected layer instead of an attention module.
2. **+E:** Beyond (1), this model adds the dynamic edge correlations with an edge-based ODE-GNN block and aggregates its outputs with the global ODE-GNN block through a simple weighted sum operation.
3. **+L:** Beyond (2), this model adds the local ODE-GNN block with different temporal kernels to better consider the local temporal patterns. The outputs of local ODE modules are aggregated with other ODE modules through a weighted sum operation.
4. **+share:** Compared to (3), this model uses shared temporal weights between the edge and global ODE-GNN modules and uses shared spatial weights among all three ODE-GNN modules, global, local and edge module.
5. **+cons:** Beyond (4), this model adds an adaptive message filtering constraint to restrict the divergence of embeddings from local and global ODE modules.
6. **+agg:** Beyond (5), this model replaces the weighted sum aggregation with a newly designed aggregation module explained in Eq. (19).
7. **+res:** Beyond (6), this model adds the intra-block residual connections between outputs and inputs of the multi ODE-GNN blocks (which is explained in Eq. (20)).
8. **GRAM-ODE:** Beyond (7), this model replaces the last linear layer with the attention module to better combine the features learned from different traffic graphs and parallel channels of GRAM-ODE layers (given by Eqs. (22) and (23)).
Figure 4: The comparison of traffic flow forecasting between our proposed GRAM-ODE and STGODE visualized for node 17 (left column) and node 109 (right column) of the PEMS08 dataset.
Fig. 5 shows the results of the ablation experiments with the MAE, MAPE, and RMSE metrics. It can be observed that the edge and local ODE-GNN modules both enhance the feature representation for the traffic forecasting task. The model variant '+E' improves the performance of the base model across all metrics and both datasets. This shows that the simple node-based global ODE-GNN is insufficient for learning informative features, and adding the edge-based global ODE-GNN module considerably helps the performance. Without any further techniques, the model gets worse when only adding '+L' beyond '+E'. However, after adding the shared-weight and divergence-constraint techniques, the model usually improves across all metrics. The shared weights are applied in the spatial and temporal operations of the global node-based and edge-based ODE-GNN blocks to take into account the dynamic correlations of edges as well as nodes in the model formulation. The constraint is added to prevent the local and global ODE-GNN embeddings from deviating from one another. In this figure, we can also observe the impact of the aggregation layer, the residual-based update layer, and the designed attention module. It appears that, among these three elements, adding the attention module (\(AM\)) always results in better performance, which is consistent with our hypothesis that the attention mechanism makes the model more effective in considering all the high-dimensional correlations during feature extraction.
### Robustness Study
To evaluate the robustness of our model, we add noise to the historical input of the training data, which can potentially mimic uncertainties and biases that arise during the data collection process. The added noise follows a zero-mean i.i.d. Gaussian distribution with fixed variance, e.g., \(\mathcal{N}(0,\gamma^{2})\), where \(\gamma^{2}\in\{2,4\}\). We conduct the robustness analysis across different values of \(n\in\{0.1,0.2,0.3,0.4\}\), representing the ratio of training data impacted by noise; \(n=0\) corresponds to the performance without any noise. Fig. 6 presents the robustness comparison between GRAM-ODE and STGODE on the PEMS04 dataset across all three metrics (MAE, MAPE, and RMSE). It can be observed that GRAM-ODE is more robust than STGODE at all noise levels \(n\in\{0.1,0.2,0.3,0.4\}\), which is likely due to the powerful spatio-temporal feature extraction of the multiple ODE-GNN modules. We also note that, even when the noise level is high (\(\gamma^{2}=4\) and \(n=0.4\)), GRAM-ODE still beats many baseline models listed in Table 3, demonstrating the significant benefit of incorporating the various ODE modules in our framework.
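A sketch of this noise-injection protocol is shown below, assuming the training inputs are stacked in a NumPy array whose first axis indexes samples; the function name and fixed seed are illustrative.

```python
import numpy as np

def add_training_noise(X, n=0.2, gamma2=2.0, seed=0):
    # Corrupt a ratio `n` of training samples with zero-mean Gaussian noise N(0, gamma2).
    rng = np.random.default_rng(seed)
    X_noisy = X.copy()
    idx = rng.choice(len(X), size=int(n * len(X)), replace=False)
    X_noisy[idx] += rng.normal(0.0, np.sqrt(gamma2), size=X_noisy[idx].shape)
    return X_noisy
```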
Figure 5: Ablation experiment results with different configurations of GRAM-ODE on PEMS03 (top row) and PEMS08 (bottom row) datasets.
## 6 Conclusion
In this paper, we propose Spatio-Temporal Graph Multi-ODE Neural Networks (GRAM-ODE) for forecasting traffic. In this model, multiple coupled ODE-GNN blocks are used to capture complex spatio-temporal dependencies from different views and learn better representations. We also add some techniques to further improve the communication between different ODE-GNN blocks including sharing weights, advanced aggregation, and divergence constraint. Extensive experiments on six real-world datasets show the superior performance of our proposed model compared to other state-of-the-art models as well as the effectiveness of each component. Future work may focus on investigating model compression techniques to reduce model size without sacrificing performance, exploring distributed computing strategies for efficiency, and evaluating GRAM-ODE's applicability to other spatio-temporal applications like climate modeling or social network analysis.
|
2308.08468 | An Expert's Guide to Training Physics-informed Neural Networks | Physics-informed neural networks (PINNs) have been popularized as a deep
learning framework that can seamlessly synthesize observational data and
partial differential equation (PDE) constraints. Their practical effectiveness
however can be hampered by training pathologies, but also oftentimes by poor
choices made by users who lack deep learning expertise. In this paper we
present a series of best practices that can significantly improve the training
efficiency and overall accuracy of PINNs. We also put forth a series of
challenging benchmark problems that highlight some of the most prominent
difficulties in training PINNs, and present comprehensive and fully
reproducible ablation studies that demonstrate how different architecture
choices and training strategies affect the test accuracy of the resulting
models. We show that the methods and guiding principles put forth in this study
lead to state-of-the-art results and provide strong baselines that future
studies should use for comparison purposes. To this end, we also release a
highly optimized library in JAX that can be used to reproduce all results
reported in this paper, enable future research studies, as well as facilitate
easy adaptation to new use-case scenarios. | Sifan Wang, Shyam Sankaran, Hanwen Wang, Paris Perdikaris | 2023-08-16T16:19:25Z | http://arxiv.org/abs/2308.08468v1 | # An Expert's Guide to Training Physics-informed Neural Networks
###### Abstract
Physics-informed neural networks (PINNs) have been popularized as a deep learning framework that can seamlessly synthesize observational data and partial differential equation (PDE) constraints. Their practical effectiveness however can be hampered by training pathologies, but also oftentimes by poor choices made by users who lack deep learning expertise. In this paper we present a series of best practices that can significantly improve the training efficiency and overall accuracy of PINNs. We also put forth a series of challenging benchmark problems that highlight some of the most prominent difficulties in training PINNs, and present comprehensive and fully reproducible ablation studies that demonstrate how different architecture choices and training strategies affect the test accuracy of the resulting models. We show that the methods and guiding principles put forth in this study lead to state-of-the-art results and provide strong baselines that future studies should use for comparison purposes. To this end, we also release a highly optimized library in JAX that can be used to reproduce all results reported in this paper, enable future research studies, as well as facilitate easy adaptation to new use-case scenarios.
## 1 Introduction
Recent advances in deep learning have revolutionized fields such as computer vision, natural language processing and reinforcement learning [1; 2; 3]. Powered by rapid growth in computational resources, deep neural networks are also increasingly used for modeling and simulating physical systems. Examples of these include weather forecasting [4; 5; 6], quantum chemistry [7; 8] and protein structure prediction [9].
Notably, the fusion of scientific computing and machine learning has led to the emergence of physics-informed neural networks (PINNs) [10], an emerging paradigm for tackling forward and inverse problems involving partial differential equations (PDEs). These deep learning models are known for their capability to seamlessly incorporate noisy experimental data and physical laws into the learning process. This is typically accomplished by parameterizing unknown functions of interest using deep neural networks and formulating a multi-task learning problem with the aim of matching observational data and approximating an underlying PDE system. Over the last couple of years, PINNs have led to a series of promising results across a range of problems in computational science and engineering, including fluids mechanics [11; 12; 13], bio-engineering [14; 15], materials [16; 17; 18], molecular dynamics [19], electromagnetics [20; 21], geosciences [22; 23], and the design of thermal systems [24; 25].
Despite some empirical success, PINNs are still facing many challenges that define open areas for research and further methodological advancements. In recent years, there have been numerous studies focusing on improving the performance of PINNs, mostly by designing more effective neural network architectures or better training algorithms. For example, loss re-weighting schemes have emerged as a prominent strategy for promoting a more balanced training process and improved test accuracy [26; 27; 28; 29]. Other efforts aim to achieve similar goals by adaptively re-sampling collocation points, such as importance sampling [30], evolutionary sampling [31] and residual-based adaptive sampling [32]. Considerable efforts have also been dedicated towards developing new neural network architectures to improve the representation capacity of PINNs. Examples include the use of adaptive activation functions [33], positional embeddings [34; 35], and novel architectures [26; 36; 37; 38; 39; 40]. Another research avenue explores alternative objective functions for PINNs training, beyond the weighted summation of residuals [41]. Some approaches incorporate numerical differentiation [42], while others draw inspiration from Finite Element Methods (FEM), adopting variational formulations [43; 44]. Other approaches propose adding additional regularization terms to accelerate the training of PINNs [45; 46]. Lastly, the evolution of training strategies has been an area of active research. Techniques such as sequential training [47; 48] and transfer learning [49; 50; 51] have shown potential in speeding up the learning process and yielding better predictive accuracy.
While new research on PINNs is currently being produced at high frequency, a suite of common benchmarks and baselines is still missing from the literature. Almost all existing studies put forth their own collection of benchmark examples, and typically compare against the original PINNs formulation put forth by Raissi _et al._, which is admittedly a weak baseline. This introduces many difficulties in systematically assessing progress in the field, but also in determining how to use PINNs from a practitioner's standpoint.
To address this gap, this work proposes a training pipeline that seamlessly integrates recent research developments to effectively resolve the identified issues in PINNs training, including spectral bias [52; 35], unbalanced back-propagated gradients [26; 27] and causality violation [53]. In addition, we present a variety of techniques that could further enhance performance, shedding light on some practical tips that form a guideline for selecting hyper-parameters. This is accompanied by an extensive suite of fully reproducible ablation studies performed across a wide range of benchmarks. This allows us to identify the setups that consistently yield the state-of-the-art results, which we believe should become the new baseline that future studies should compare against. We also release a high-performance library in JAX that can be used to reproduce all findings reported in this work, enable future research studies, as well as facilitate easy adaptation to new use-case scenarios. As such, we believe that this work can equally benefit researchers and practitioners to further advance PINNs and deploy them in more realistic application settings.
The rest of this paper is organized as follows. In Section 2, we provide a brief overview of the original formulation of PINNs as introduced by Raissi et al. [10], and outline our training pipeline. From Section 3 to Section 5, we delve into the motivation and implementation details of the key components of the proposed algorithm. These consist of non-dimensionalization, network architectures that employ Fourier feature embeddings and random weight factorization, as well as training algorithms such as causal training, curriculum training and loss weighting strategies. Section 6 discusses various aspects of PINNs that lead to improved stability and superior training performance. Finally, in Section 7 we validate the effectiveness and robustness of the proposed pipeline across a wide range of benchmarks and showcase state-of-the-art results.
## 2 Physics-informed Neural Networks
Following the original formulation of Raissi _et al._, we begin with a brief overview of physics-informed neural networks (PINNs) [10] in the context of solving partial differential equations (PDEs). Generally, we consider PDEs taking the form
\[\mathbf{u}_{t}+\mathcal{N}[\mathbf{u}]=0,\ \ t\in[0,T],\ \mathbf{x}\in\Omega, \tag{2.1}\]
subject to the initial and boundary conditions
\[\mathbf{u}(0,\mathbf{x})=\mathbf{g}(\mathbf{x}),\ \ \mathbf{x}\in\Omega, \tag{2.2}\] \[\mathcal{B}[\mathbf{u}]=0,\ \ t\in[0,T],\ \mathbf{x}\in\partial\Omega, \tag{2.3}\]
where \(\mathcal{N}[\cdot]\) is a linear or nonlinear differential operator, and \(\mathcal{B}[\cdot]\) is a boundary operator corresponding to Dirichlet, Neumann, Robin, or periodic boundary conditions. In addition, \(\mathbf{u}\) describes the unknown latent solution that is governed by the PDE system of Equation (2.1).
We proceed by representing the unknown solution \(\mathbf{u}(t,\mathbf{x})\) by a deep neural network \(\mathbf{u}_{\theta}(t,\mathbf{x})\), where \(\theta\) denotes all tunable parameters of the network (e.g., weights and biases). This allows us to define the PDE residuals as
\[\mathcal{R}_{\theta}(t,\mathbf{x})=\frac{\partial\mathbf{u}_{\theta}}{\partial t}(t,\mathbf{x})+\mathcal{N}[\mathbf{u}_{\theta}](t,\mathbf{x}). \tag{2.4}\]
Then, a physics-informed model can be trained by minimizing the following composite loss function
\[\mathcal{L}(\theta)=\mathcal{L}_{ic}(\theta)+\mathcal{L}_{bc}(\theta)+\mathcal{L}_ {r}(\theta), \tag{2.5}\]
where
\[\mathcal{L}_{ic}(\theta) =\frac{1}{N_{ic}}\sum_{i=1}^{N_{ic}}\left|\mathbf{u}_{\theta}(0, \mathbf{x}_{ic}^{i})-\mathbf{g}(\mathbf{x}_{ic}^{i})\right|^{2}, \tag{2.6}\] \[\mathcal{L}_{bc}(\theta) =\frac{1}{N_{bc}}\sum_{i=1}^{N_{bc}}\left|\mathcal{B}[\mathbf{u} _{\theta}](t_{bc}^{i},\mathbf{x}_{bc}^{i})\right|^{2},\] (2.7) \[\mathcal{L}_{r}(\theta) =\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|\mathcal{R}_{\theta}(t_{ r}^{i},\mathbf{x}_{r}^{i})\right|^{2}. \tag{2.8}\]
Here \(\{\mathbf{x}_{ic}^{i}\}_{i=1}^{N_{ic}}\), \(\{t_{bc}^{i},\mathbf{x}_{bc}^{i}\}_{i=1}^{N_{bc}}\) and \(\{t_{r}^{i},\mathbf{x}_{r}^{i}\}_{i=1}^{N_{r}}\) can be the vertices of a fixed mesh or points that are randomly sampled at each iteration of a gradient descent algorithm. Notice that all required gradients with respect to input variables or network parameters \(\theta\) can be efficiently computed via automatic differentiation [54].
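To make this setup concrete, the following minimal JAX sketch (our own illustration, not the exact implementation of our library) assembles the residual and composite loss for a hypothetical viscous Burgers-type equation \(u_{t}+uu_{x}-\nu u_{xx}=0\) with Dirichlet boundary data; the tiny inline MLP merely stands in for any network \(\mathbf{u}_{\theta}\):

```python
import jax
import jax.numpy as jnp

NU = 0.01  # hypothetical viscosity for this illustration

def u_fn(params, t, x):
    # Tiny tanh MLP standing in for u_theta(t, x); params is a list of (W, b)
    z = jnp.array([t, x])
    for W, b in params[:-1]:
        z = jnp.tanh(W @ z + b)
    W, b = params[-1]
    return (W @ z + b)[0]

def residual(params, t, x):
    # PDE residual of Equation (2.4), here for u_t + u u_x - nu u_xx = 0,
    # with all derivatives obtained via automatic differentiation
    u = u_fn(params, t, x)
    u_t = jax.grad(u_fn, argnums=1)(params, t, x)
    u_x = jax.grad(u_fn, argnums=2)(params, t, x)
    u_xx = jax.grad(jax.grad(u_fn, argnums=2), argnums=2)(params, t, x)
    return u_t + u * u_x - NU * u_xx

def loss(params, batch):
    # Composite loss of Equations (2.5)-(2.8); Dirichlet data assumed for brevity
    x_ic, g_ic = batch["ic"]
    t_bc, x_bc, g_bc = batch["bc"]
    t_r, x_r = batch["res"]
    u_batched = jax.vmap(u_fn, in_axes=(None, 0, 0))
    l_ic = jnp.mean((u_batched(params, jnp.zeros_like(x_ic), x_ic) - g_ic) ** 2)
    l_bc = jnp.mean((u_batched(params, t_bc, x_bc) - g_bc) ** 2)
    l_r = jnp.mean(jax.vmap(residual, in_axes=(None, 0, 0))(params, t_r, x_r) ** 2)
    return l_ic + l_bc + l_r
```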
However, as demonstrated by recent work, several critical training pathologies prevent PINNs from yielding accurate and robust results. These pathologies include spectral bias [52; 35], causality violation [53], and unbalanced back-propagated gradients among different loss terms [26]. To address these issues, we propose a training pipeline that integrates key recent advancements, which we believe are indispensable for the successful implementation of PINNs. As shown in Figure 1, the pipeline consists of three main steps: PDE non-dimensionalization, choosing a suitable network architecture, and employing appropriate training algorithms. Further details are provided in Algorithm 1. In the following sections, we carefully demonstrate the motivation and necessity of each component of the proposed algorithm and validate its effectiveness via a wide range of benchmarks.
## 3 Non-dimensionalization
It is well-known that data normalization is an important pre-processing step in traditional deep learning, which typically involves scaling the input features of a data-set so that they have similar magnitudes and ranges [55; 56]. However, this process may not be generally applicable for PINNs, as the target solutions are typically not available when solving forward PDE problems. In such cases, it is important to ensure that the target output variables vary within a reasonable range. One way to achieve this is through non-dimensionalization, a common technique used in mathematics and physics to simplify and analyze complex systems by transforming the original system into an equivalent dimensionless one. This is performed by selecting one or more fundamental units or characteristic values, and scaling the variables in the problem so that they become dimensionless and of order one. In our experience, non-dimensionalization plays a crucial role in building physics-informed models, especially when dealing with experimental data or real-world problems. The reasons are as follows:
* Lack of consistent network initialization schemes: The initialization of neural networks has a crucial role on the effectiveness of gradient descent algorithms. Common initialization schemes (e.g., Glorot [55]) not
Figure 1: Illustration of the proposed training pipeline. The procedure begins with the non-dimensionalization of the PDE system, ensuring that input and output variables are in a reasonable range. Subsequently, an appropriate network architecture is constructed to represent the unknown PDE solution. The use of Fourier feature embeddings and random weight factorization is highly recommended for mitigating spectral bias and accelerating convergence. The training phase of the PINN model integrates various advanced algorithms, including self-adaptive loss balancing, causal training, and curriculum training.
1. Non-dimensionalize the PDE system (2.1).
2. Represent the PDE solution by a multi-layer perceptron network (MLP) \(\mathbf{u}_{\theta}\) with Fourier feature embeddings and random weight factorization. In general, we recommend using \(\tanh\) activations and initializing the network using the Glorot scheme.
3. Formulate the weighted loss function according to the PDE system: \[\mathcal{L}(\theta)=\lambda_{ic}\mathcal{L}_{ic}(\theta)+\lambda_{bc}\mathcal{L }_{bc}(\theta)+\lambda_{r}\mathcal{L}_{r}(\theta),\] (2.9) where \(\mathcal{L}_{ic}(\theta)\) and \(\mathcal{L}_{bc}(\theta)\) are defined in (2.6), (2.7) respectively, and \[\mathcal{L}_{r}(\theta)=\frac{1}{M}\sum_{i=1}^{M}w_{i}\mathcal{L}_{r}^{i}( \theta).\] (2.10) Here we partition the temporal domain into \(M\) equal sequential segments and introduce \(\mathcal{L}_{r}^{i}\) to denote the PDE residual loss within the \(i\)-th segment of the temporal domain.
4. Set all global weights \(\lambda_{ic},\lambda_{bc},\lambda_{r}\) and temporal weights \(\{w_{i}\}_{i=1}^{M}\) to 1.
5. Use \(S\) steps of a gradient descent algorithm to update the parameters \(\theta\) as:
**for \(n=1,\ldots,S\)do**
(a) Randomly sample \(\{\mathbf{x}_{ic}^{i}\}_{i=1}^{N_{ic}}\), \(\{t_{bc}^{i},\mathbf{x}_{bc}^{i}\}_{i=1}^{N_{bc}}\) and \(\{t_{r}^{i},\mathbf{x}_{r}^{i}\}_{i=1}^{N_{r}}\) in the computational domain and evaluate each loss term \(\mathcal{L}_{ic},\mathcal{L}_{bc}\) and \(\{\mathcal{L}_{r}^{i}\}_{i=1}^{M}\).
(b) Compute and update the temporal weights by
\[w_{i}=\exp\left(-\epsilon\sum_{k=1}^{i-1}\mathcal{L}_{r}^{k}(\theta)\right), \text{ for }i=2,3,\ldots,M. \tag{2.11}\]
Here \(\epsilon>0\) is a user-defined hyper-parameter that determines the "slope" of temporal weights.
**if \(n\mod f=0\) then**
(c) Compute the global weights by
\[\hat{\lambda}_{ic} =\frac{\|\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|+\|\nabla_{ \theta}\mathcal{L}_{bc}(\theta)\|+\|\nabla_{\theta}\mathcal{L}_{r}(\theta)\|} {\|\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|}, \tag{2.12}\] \[\hat{\lambda}_{bc} =\frac{\|\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|+\|\nabla_{ \theta}\mathcal{L}_{bc}(\theta)\|+\|\nabla_{\theta}\mathcal{L}_{r}(\theta)\|} {\|\nabla_{\theta}\mathcal{L}_{bc}(\theta)\|},\] (2.13) \[\hat{\lambda}_{r} =\frac{\|\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|+\|\nabla_{ \theta}\mathcal{L}_{bc}(\theta)\|+\|\nabla_{\theta}\mathcal{L}_{r}(\theta)\|} {\|\nabla_{\theta}\mathcal{L}_{r}(\theta)\|}, \tag{2.14}\]
where \(\|\cdot\|\) denotes the \(L^{2}\) norm.
(d) Update the global weights \(\lambda=(\lambda_{ic},\lambda_{bc},\lambda_{r})\) using a moving average of the form
\[\lambda_{\text{new}}=\alpha\lambda_{\text{old}}+(1-\alpha)\hat{\lambda}_{ \text{new}}. \tag{2.15}\]
where the parameter \(\alpha\) determines the balance between the old and new values.
**end if**
(e) Update the parameters \(\theta\) via gradient descent
\[\theta_{n+1}=\theta_{n}-\eta\nabla_{\theta}\mathcal{L}(\theta_{n}) \tag{2.16}\]
**end for**
The recommended default values for hyper-parameters are as follows: \(f=1,000,\alpha=0.9,\gamma=1.0,\epsilon=1.0\). Please note that we freeze the back-propagation of the weights \(w_{i}\)'s and \(\lambda_{i}\)'s with respect to the network parameters \(\theta\).

**Algorithm 1** Training pipeline of physics-informed neural networks
only prevent vanishing gradients but also accelerate training convergence. A critical assumption for these initialization methods is that input variables should be in a moderate range, such as having a zero mean and unit variance, which enables smooth and stable forward and backward propagation. To satisfy this assumption, we propose using non-dimensionalization to scale the input and output variables so that they are of order one.
* Mitigating the disparities in variable scales: If input and output variables have different scales, some can dominate over others, leading to unbalanced contributions during training and hindering the learning of meaningful correlations between them. Non-dimensionalization, which scales variables to have similar magnitudes and ranges, can help to reduce this discrepancy and facilitate model training.
* Improving convergence: If the variables are not properly scaled, the optimization algorithm may have to take very small steps to adjust the weights for one variable while taking large steps for another. This may result in a slow and unstable training process. Through non-dimensionalization, the optimizer can take more consistent steps, yielding faster convergence and better performance.
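To make the above points concrete, consider a minimal worked example of our own (not one of the benchmarks studied later): the heat equation \(u_{t}=\kappa u_{xx}\) on \(x\in[0,L]\), whose solution has a characteristic magnitude \(U\) set by the initial data. Choosing the dimensionless variables

\[t^{*}=\frac{\kappa t}{L^{2}},\quad x^{*}=\frac{x}{L},\quad u^{*}=\frac{u}{U},\]

transforms the problem into the parameter-free form \(u^{*}_{t^{*}}=u^{*}_{x^{*}x^{*}}\) on \(x^{*}\in[0,1]\), where all inputs and outputs are of order one regardless of the physical scales \(\kappa\), \(L\) and \(U\).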
While non-dimensionalization is an indispensable pre-processing step, it is not a "silver bullet" that can resolve all issues in training PINNs. One of the main differences between PINNs and conventional deep learning tasks is the minimization of PDE residuals, which introduces additional difficulties in the optimization process. Even if all variables are properly scaled via non-dimensionalization, the scale of the PDE residuals can still vastly differ from the scale of the latent solution function, leading to a considerable discrepancy in the scale of different loss terms. Therefore, it is important to carefully inspect and re-scale the loss terms that define the PINNs objective. In section 5.2, we introduce two self-adaptive loss weighting schemes based on the magnitude of back-propagated gradients and Neural Tangent Kernel (NTK) theory. We will show that these methods can automatically balance the interplay between different loss terms during training and lead to more robust model performance.
## 4 Network Architecture
In this section, we delve into the selection of suitable network architectures for training PINNs. We begin by providing a brief overview of multi-layer perceptrons, along with common hyper-parameter choices, activation functions, and initialization schemes. Then, we discuss random Fourier feature embeddings, a simple yet effective technique that enables coordinate MLPs to learn complex high frequency functions. Finally, we introduce random weight factorization, a simple drop-in replacement of dense layers that has been shown to consistently accelerate training convergence and improve model performance.
### Multi-layer Perceptrons (MLP)
We mainly use multi-layer perceptrons (MLPs) as a universal approximator to represent the latent functions of interest, which take the coordinates of a spatio-temporal domain as inputs and predict the corresponding target solution functions. Specifically, let \(\mathbf{x}\in\mathbb{R}^{d}\) be the input, \(\mathbf{g}^{(0)}(\mathbf{x})=\mathbf{x}\) and \(d_{0}=d\). An MLP \(f_{\theta}(\mathbf{x})\) is recursively defined by
\[\mathbf{f}^{(l)}(\mathbf{x})=\mathbf{W}^{(l)}\cdot\mathbf{g}^{(l-1)}(\mathbf{x})+\mathbf{b}^{(l)},\quad\mathbf{g}^{(l)}(\mathbf{x})=\sigma(\mathbf{f}^{(l)}(\mathbf{x})),\quad l=1,2,\ldots,L, \tag{4.1}\]
with a final layer
\[\mathbf{f}_{\theta}(\mathbf{x})=\mathbf{W}^{(L+1)}\cdot\mathbf{g}^{(L)}( \mathbf{x})+\mathbf{b}^{(L+1)}, \tag{4.2}\]
where \(\mathbf{W}^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) is the weight matrix of the \(l\)-th layer and \(\sigma\) is an element-wise activation function. Here, \(\theta=\big{(}\mathbf{W}^{(1)},\mathbf{b}^{(1)},\ldots,\mathbf{W}^{(L+1)},\mathbf{b}^{(L+1)}\big{)}\) represents all trainable parameters of the network.
In practice, the choice of an appropriate network architecture impacts the success of physics-informed neural networks. In our experience, networks that are too narrow and shallow lack the expressive capacity to capture complex nonlinear functions, while networks that are too wide and deep can be difficult to optimize. Therefore, we recommend employing networks with widths ranging from \(128\) to \(512\) and depths ranging from \(3\) to \(6\), which tend to yield relatively optimal and robust results. To build a continuously differentiable neural representation, we recommend using the hyperbolic tangent (tanh) activation. Other popular choices include sinusoidal functions [36] and GeLU [57]. We point out that ReLU is not suitable, since its second-order derivative is identically zero, so PDE residuals involving second-order derivatives cannot be properly represented. Finally, dense layers are typically initialized using the Glorot scheme [55].
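For concreteness, a minimal hand-rolled sketch of such an MLP in JAX is given below; any standard deep learning library offers equivalent functionality:

```python
import jax
import jax.numpy as jnp

def init_mlp(key, layer_sizes):
    # Glorot (Xavier) normal initialization for each dense layer
    params = []
    for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        key, subkey = jax.random.split(key)
        W = jax.random.normal(subkey, (d_out, d_in)) * jnp.sqrt(2.0 / (d_in + d_out))
        params.append((W, jnp.zeros(d_out)))
    return params

def mlp_apply(params, x):
    # Forward pass of Equations (4.1)-(4.2) with tanh activations
    for W, b in params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = params[-1]
    return W @ x + b

# Example: a 4-hidden-layer, 256-neuron network mapping (t, x) to u
params = init_mlp(jax.random.PRNGKey(0), [2, 256, 256, 256, 256, 1])
```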
### Random Fourier features
As demonstrated by [52; 58; 59], MLPs suffer from a phenomenon referred to as spectral bias, showing that they are biased towards learning low frequency functions. This undesired preference also prevents PINNs from learning
high frequencies and fine structures of target solutions [35]. In Appendix A, we present a detailed analysis of this phenomenon through the lens of Neural Tangent Kernel (NTK) theory.
To mitigate spectral bias, Tancik _et al._ [60] proposed random Fourier feature embeddings, which map input coordinates into high frequency signals before passing them through an MLP. This encoding \(\gamma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{2m}\) is defined by
\[\gamma(\mathbf{x})=\begin{bmatrix}\cos(\mathbf{B}\mathbf{x})\\ \sin(\mathbf{B}\mathbf{x})\end{bmatrix}, \tag{4.3}\]
where each entry in \(\mathbf{B}\in\mathbb{R}^{m\times d}\) is sampled from a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) and \(\sigma>0\) is a user-specified hyper-parameter.
This simple method has been shown to significantly enhance the performance of PINNs in approximating sharp gradients and complex solutions [35]. It is worth emphasizing the significance of the scale factor \(\sigma\) in the performance of neural networks. As demonstrated in Appendix A and [35], this hyper-parameter directly governs the frequencies of \(\gamma_{i}\) and the resulting eigenspace of the NTK, thereby biasing the network to learn certain band-limited signals. Specifically, lower encoding frequencies can result in blurry predictions, while higher encoding frequencies can introduce salt-and-pepper artifacts. Ideally, an appropriate \(\sigma\) should be selected such that the band width of NTK matches that of the target signals. This not only accelerates the training convergence, but also improves the prediction accuracy. However, the spectral information of the solution may not be accessible when solving forward PDEs. In practice, we recommend using a moderately large \(\sigma\in[1,10]\).
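A minimal JAX sketch of this embedding is shown below; note that \(\mathbf{B}\) is sampled once at initialization and kept fixed (non-trainable) throughout training:

```python
import jax
import jax.numpy as jnp

def init_fourier_features(key, in_dim, num_features, sigma=2.0):
    # Entries of B ~ N(0, sigma^2); sigma controls the encoded frequency band
    return sigma * jax.random.normal(key, (num_features, in_dim))

def fourier_embed(B, x):
    # gamma(x) = [cos(Bx), sin(Bx)], cf. Equation (4.3)
    z = B @ x
    return jnp.concatenate([jnp.cos(z), jnp.sin(z)])
```

The embedded coordinates are then fed to the first dense layer of the MLP, whose input dimension becomes twice the number of Fourier features.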
### Random weight factorization
Recently, Wang _et al._[61] proposed random weight factorization (RWF) and demonstrated that it can consistently improve the performance of PINNs. RWF factorizes the weights associated with each neuron in the network as
\[\mathbf{w}^{(k,l)}=s^{(k,l)}\cdot\mathbf{v}^{(k,l)}, \tag{4.4}\]
for \(k=1,2,\ldots,d_{l},\quad l=1,2,\ldots,L+1\), where \(\mathbf{w}^{(k,l)}\in\mathbb{R}^{d_{l-1}}\) is a weight vector representing the \(k\)-th row of the weight matrix \(\mathbf{W}^{(l)}\), \(s^{(k,l)}\in\mathbb{R}\) is a trainable scale factor assigned to each individual neuron, and \(\mathbf{v}^{(k,l)}\in\mathbb{R}^{d_{l-1}}\). Consequently, the proposed weight factorization can be written as
\[\mathbf{W}^{(l)}=\mathrm{diag}(\mathbf{s}^{(l)})\cdot\mathbf{V}^{(l)},\quad l =1,2,\ldots,L+1. \tag{4.5}\]
with \(\mathbf{s}^{(l)}\in\mathbb{R}^{d_{l}}\).
We provide a geometric intuition of weight factorization in Appendix B. More theoretical and experimental results can be found in Appendix B and [61].
In practice, RWF is applied as follows. We first initialize the parameters of an MLP using a standard scheme, e.g. the Glorot scheme [55]. Then, for every weight matrix \(\mathbf{W}\), we initialize a scale vector \(\exp(\mathbf{s})\), where \(\mathbf{s}\) is sampled from a multivariate normal distribution \(\mathcal{N}(\mu,\sigma I)\). Every weight matrix is then factorized by the associated scale factor as \(\mathbf{W}=\mathrm{diag}(\exp(\mathbf{s}))\cdot\mathbf{V}\) at initialization. Finally, we apply gradient descent to the new parameters \(\mathbf{s},\mathbf{V}\) directly. This procedure is summarized in Algorithm 2. The use of the exponential parameterization is motivated by Weight Normalization [62], to strictly avoid zeros or very small values in the scale factors and allow them to span a wide range of different magnitudes. Empirically, too small values of \(\mu,\sigma\) may lead to performance that is similar to a conventional MLP, while too large \(\mu,\sigma\) can result in an unstable training process. Therefore, we recommend setting \(\mu=0.5\) or \(1\), and \(\sigma=0.1\), which seem to consistently and robustly improve the loss convergence and model accuracy.
```
1. Initialize a neural network \(f_{\theta}\) with \(\theta=\{\mathbf{W}^{(l)},\mathbf{b}^{(l)}\}_{l=1}^{L+1}\) (e.g. using the Glorot scheme [55]).
for \(l=1,2,\ldots,L\) do
    (a) Initialize each scale factor as \(\mathbf{s}^{(l)}\sim\mathcal{N}(\mu,\sigma I)\).
    (b) Construct the factorized weight matrices as \(\mathbf{W}^{(l)}=\text{diag}(\exp(\mathbf{s}^{(l)}))\cdot\mathbf{V}^{(l)}\).
end for
2. Train the network by gradient descent on the factorized parameters \(\{\mathbf{s}^{(l)},\mathbf{V}^{(l)},\mathbf{b}^{(l)}\}_{l=1}^{L+1}\).
The recommended hyper-parameters are \(\mu=1.0,\sigma=0.1\).
```
**Algorithm 2** Random weight factorization (RWF)
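A minimal JAX sketch of RWF, applied to the list-of-\((\mathbf{W},\mathbf{b})\) parameterization from Section 4.1, could look as follows (an illustration under our stated conventions, not the exact implementation of our library):

```python
import jax
import jax.numpy as jnp

def factorize(key, params, mu=1.0, sigma=0.1):
    # Replace each (W, b) with (s, V, b) such that W = diag(exp(s)) @ V,
    # so the initial forward pass is unchanged
    new_params = []
    for W, b in params:
        key, subkey = jax.random.split(key)
        s = mu + sigma * jax.random.normal(subkey, (W.shape[0],))
        V = W / jnp.exp(s)[:, None]
        new_params.append((s, V, b))
    return new_params

def rwf_apply(params, x):
    # Forward pass on the factorized parameters; gradients flow to both s and V
    for s, V, b in params[:-1]:
        x = jnp.tanh((jnp.exp(s)[:, None] * V) @ x + b)
    s, V, b = params[-1]
    return (jnp.exp(s)[:, None] * V) @ x + b
```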
## 5 Training
### Respecting Temporal Causality
In this section, we discuss the motivation and details of equation (2.11) in Algorithm 1. Recently, Wang _et al._ [53] illustrated that PINNs may violate temporal causality when solving time-dependent PDEs, and are hence susceptible to converging towards erroneous solutions. This is mainly because conventional PINNs tend to minimize all PDE residuals simultaneously, and are undesirably biased toward minimizing the PDE residuals at later times even before obtaining the correct solutions for earlier times. A more detailed analysis can be found in Appendix C and [53].
To impose the missing causal structure within the optimization process, we first split the temporal domain into \(M\) equal sequential segments and introduce \(\mathcal{L}_{r}^{i}\) to denote the PDE residual loss within the \(i\)-th segment of the temporal domain. Then the original PDE residual loss can be rewritten as
\[\mathcal{L}_{r}(\theta)=\frac{1}{M}\sum_{i=1}^{M}w_{i}\mathcal{L}_{r}^{i}( \theta). \tag{5.1}\]
Combining with equation (2.11), we obtain
\[\mathcal{L}_{r}(\theta)=\frac{1}{M}\sum_{i=1}^{M}\exp\left(-\epsilon\sum_{k=1 }^{i-1}\mathcal{L}_{r}^{k}(\theta)\right)\mathcal{L}_{r}^{i}(\theta). \tag{5.2}\]
It can be observed that \(w_{i}\) is inversely exponentially proportional to the magnitude of the cumulative residual loss from the previous time steps. As a result, \(\mathcal{L}_{r}^{i}(\theta)\) will not be minimized unless all previous residuals \(\{\mathcal{L}_{r}^{k}(\theta)\}_{k=1}^{i-1}\) decrease to sufficiently small values such that \(w_{i}\) is large enough. These temporal weights encourage PINNs to learn the PDE solution progressively along the time axis, in accordance with how the information propagates in time as the dynamics evolve throughout the spatio-temporal domain.
We emphasize that the computational cost of calculating temporal weights is negligible, as the temporal weights are computed by directly evaluating the PINNs loss functions, whose values are already stored in the computational graph during training. Moreover, it is important to note that the temporal weights are functions of the trainable parameters \(\theta\). In our JAX implementation [63], we use lax.stop_gradient to avoid the computation of back-propagated gradients of temporal weights, thereby further conserving computational resources.
We must point out that the proposed weighted residual loss does exhibit some sensitivity to the causality parameter \(\epsilon\). Choosing a very small \(\epsilon\) may fail to impose enough temporal causality, resulting in the PINN model behaving similarly to the conventional one. Conversely, choosing a large \(\epsilon\) value can result in a more difficult optimization problem, because the temporal residuals at earlier times have to decrease to a very small value in order to activate the latter temporal weights. Achieving this may be difficult in some cases due to limited network capacity for minimizing the target residuals. In practice, we recommend choosing a moderately large \(\epsilon\) to ensure that all temporal weights can properly converge to 1 at the end of training. If this is not the case, it would be advisable to slightly reduce the value of \(\epsilon\).
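In code, the temporal weights amount to a single exclusive prefix sum over the per-segment residual losses, as in the following minimal JAX sketch:

```python
import jax.numpy as jnp
from jax import lax

def causal_residual_loss(seg_losses, eps=1.0):
    # seg_losses: vector of M residual losses ordered along the time axis
    # w_i = exp(-eps * sum_{k<i} L_r^k), cf. Equation (2.11); note w_1 = 1
    cum = jnp.cumsum(seg_losses) - seg_losses        # exclusive prefix sum
    w = lax.stop_gradient(jnp.exp(-eps * cum))       # weights are not differentiated
    return jnp.mean(w * seg_losses)
```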
### Loss Balancing
As mentioned in Section 3, one of the main challenges in training PINNs is addressing multi-scale losses that arise from the minimization of PDE residuals. These losses cannot be normalized during the pre-processing step. An alternative approach involves assigning appropriate weights to each loss term to scale them during training. However, manually choosing weights is impractical, as the optimal weights can vary greatly across different problems, making it difficult to find a fixed empirical recipe that is transferable to various PDEs. More importantly, since the solution to a PDE is unknown, there is no validation data-set available for fine-tuning these hyper-parameters in the context of solving PDEs.
Given this, our training pipeline integrates a self-adaptive learning rate annealing algorithm that automatically balances the losses during training. Specifically, we first compute \(\hat{\lambda}\) via equations (2.12)-(2.14), which yields
\[\|\hat{\lambda}_{ic}\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|=\|\hat{\lambda}_{bc}\nabla_{\theta}\mathcal{L}_{bc}(\theta)\|=\|\hat{\lambda}_{r}\nabla_{\theta}\mathcal{L}_{r}(\theta)\|=\|\nabla_{\theta}\mathcal{L}_{ic}(\theta)\|+\|\nabla_{\theta}\mathcal{L}_{bc}(\theta)\|+\|\nabla_{\theta}\mathcal{L}_{r}(\theta)\| \tag{5.3}\]
This effectively guarantees that the norm of gradients of each weighted loss is equal to each other, preventing our model from being biased towards minimizing certain terms during training. The actual weights are then updated as a running average of their previous values, as defined by Equation (2.15). This technique mitigates the instability of stochastic gradient descent. In practice, these updates can either take place every hundred or thousand iterations of the gradient
descent loop or at a user-specified frequency. Consequently, the extra computational overhead associated with these updates is negligible, particularly when updates are infrequent.
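A minimal JAX sketch of this gradient-based update, with hypothetical dictionary keys for the individual loss terms, could read:

```python
import jax
import jax.numpy as jnp

def grad_norm(loss_fn, params):
    # L2 norm of the back-propagated gradient of a scalar loss
    g = jax.grad(loss_fn)(params)
    return jnp.sqrt(sum(jnp.sum(leaf ** 2) for leaf in jax.tree_util.tree_leaves(g)))

def update_global_weights(params, loss_fns, lam_old, alpha=0.9):
    # loss_fns and lam_old are dicts keyed by, e.g., "ic", "bc", "r"
    norms = {k: grad_norm(f, params) for k, f in loss_fns.items()}
    total = sum(norms.values())
    lam_hat = {k: total / n for k, n in norms.items()}  # Equations (2.12)-(2.14)
    # moving average update, Equation (2.15)
    return {k: alpha * lam_old[k] + (1 - alpha) * lam_hat[k] for k in lam_old}
```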
We now introduce an alternative weighting scheme that leverages the resulting NTK matrix of PINNs [27]. To this end, we define the NTK matrices corresponding to \(\mathcal{L}_{ic}\), \(\mathcal{L}_{bc}\), and \(\mathcal{L}_{r}\) as follows:
\[K_{ic} =\left[\left\langle\frac{\partial\mathbf{u}_{\theta}}{\partial\theta}(0,\mathbf{x}_{ic}^{i}),\frac{\partial\mathbf{u}_{\theta}}{\partial\theta}(0,\mathbf{x}_{ic}^{j})\right\rangle\right], \tag{5.4}\] \[K_{bc} =\left[\left\langle\frac{\partial\mathcal{B}[\mathbf{u}_{\theta}]}{\partial\theta}(t_{bc}^{i},\mathbf{x}_{bc}^{i}),\frac{\partial\mathcal{B}[\mathbf{u}_{\theta}]}{\partial\theta}(t_{bc}^{j},\mathbf{x}_{bc}^{j})\right\rangle\right],\] (5.5) \[K_{r} =\left[\left\langle\frac{\partial\mathcal{R}[\mathbf{u}_{\theta}]}{\partial\theta}(t_{r}^{i},\mathbf{x}_{r}^{i}),\frac{\partial\mathcal{R}[\mathbf{u}_{\theta}]}{\partial\theta}(t_{r}^{j},\mathbf{x}_{r}^{j})\right\rangle\right]. \tag{5.6}\]
where \(\mathcal{R}[\cdot]\) denotes the residual operator defined in (2.4).
With this definition, we can establish an NTK-based weighting scheme as shown below
\[\hat{\lambda}_{ic} =\frac{Tr(\mathbf{K}_{ic})+Tr(\mathbf{K}_{bc})+Tr(\mathbf{K}_{r} )}{Tr(\mathbf{K}_{ic})}, \tag{5.7}\] \[\hat{\lambda}_{bc} =\frac{Tr(\mathbf{K}_{ic})+Tr(\mathbf{K}_{bc})+Tr(\mathbf{K}_{r} )}{Tr(\mathbf{K}_{bc})},\] (5.8) \[\hat{\lambda}_{r} =\frac{Tr(\mathbf{K}_{ic})+Tr(\mathbf{K}_{bc})+Tr(\mathbf{K}_{r} )}{Tr(\mathbf{K}_{r})}. \tag{5.9}\]
We proceed by updating the \(\lambda_{i}\) values using a moving average, as previously described. As detailed in Appendix A, the eigenvalues of NTK characterize the convergence rate of a loss function. Higher eigenvalues imply a faster convergence rate. Given that the trace of an NTK matrix is equal to the sum of all its eigenvalues, this scheme aims to balance the convergence rates of different loss terms such that their convergence rates are comparable to one another. In practice, it should be noted that we avoid constructing the full NTK matrix. Instead, we evaluate only the diagonal elements of the NTK matrix for computing the weights, which significantly saves computational resources.
We observed that while the performance of the gradient-based and NTK-based weighting schemes is similar, the updated weights in the gradient-based scheme are less stable compared to the NTK-based scheme. This instability may be attributed to the noisy back-propagated gradients due to random mini-batches. However, the NTK-based scheme demands a higher computational cost, making it more difficult to scale to complex problems. As a result, we generally recommend employing the gradient-based scheme as a first choice.
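Since the \(i\)-th diagonal entry of each NTK matrix is simply the squared parameter-gradient norm of the corresponding network output, the traces in Equations (5.7)-(5.9) can be estimated without ever forming the full kernel. A minimal JAX sketch of this computation is shown below:

```python
import jax
import jax.numpy as jnp

def ntk_trace(scalar_fn, params, inputs):
    # Tr(K) = sum_i || d scalar_fn(params, x_i) / d params ||^2, using only
    # the diagonal entries of the NTK matrix
    def sq_grad_norm(x):
        g = jax.grad(lambda p: scalar_fn(p, x))(params)
        return sum(jnp.sum(leaf ** 2) for leaf in jax.tree_util.tree_leaves(g))
    return jnp.sum(jax.vmap(sq_grad_norm)(inputs))
```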
### Curriculum Training
While the techniques detailed in the preceding sections have greatly enhanced the performance and application range of PINNs, there remain certain complex domains where PINNs encounter challenges, especially in scenarios where high predictive accuracy is required. For example, when simulating chaotic dynamical systems such as the Navier-Stokes equations at high Reynolds numbers, enhanced accuracy is required to prevent error accumulation and trajectory divergence. In this section, we aim to shed light on these challenging areas and explore pathways to overcome them.
A promising approach we will delve into is the curriculum training strategy introduced by Krishnapriyan _et. al._[48]. The core idea involves decomposing the entire optimization task for PINNs into a sequence of more manageable sub-tasks. In this work, we mainly focus on integrating this strategy into our training pipeline for solving time-dependent PDEs and singular perturbation problems.
For time-dependent PDEs, we divide the temporal domain into smaller intervals and employ Algorithm 1 to train PINNs for solving the PDE within each of these intervals. Except for the first time window, the initial condition for each subsequent time window is updated using the prediction from the last time-step of the previous time window. This approach can be viewed as a temporal domain decomposition strategy, and significantly reduces the optimization difficulty of learning the full evolution of a dynamical system, while increasing computational costs due to model retraining for each window.
It is worth noting that we also partition the temporal domain in Algorithm 1 to compute the causal weights within each time window. We emphasize that the causal weighting shares a similar motivation with "time-marching", in the sense of respecting temporal causality by learning the solution sequentially along the time axis. Nevertheless, the causal weighting discussed in section 5.1 should not be considered a replacement for time-marching approaches, but rather a crucial enhancement, as violations of causality may still occur within each time window of a time-marching algorithm.
In addressing singular perturbation problems, our strategy hinges on a progressive approach. We initiate the training process with a less singular PDE and progressively increase its singularity throughout the training. For example, if our goal is to solve the Navier-Stokes equation at moderately high Reynolds numbers, we typically start by training a model for a lower Reynolds number and use this result as a suitable initialization for minimizing PDE residuals at higher Reynolds numbers. Through our experiments, we have observed that this approach effectively stabilizes the training process. It reduces the likelihood of PINNs becoming trapped in unfavorable local minima, thus enabling them to accurately capture complex and nonlinear PDE solutions. For a more concrete illustration, readers are directed to the example of lid-driven cavity flow in section 7.5.
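Schematically, the Reynolds-number curriculum described above amounts to a warm-started training loop of the following form, where `make_pinn_loss` and `train` are hypothetical placeholders for a loss constructor and an Algorithm 1 training routine:

```python
# Hypothetical curriculum over increasing Reynolds numbers; each stage
# warm-starts from the parameters of the previous, easier problem.
reynolds_schedule = [100, 400, 1000, 3200]
iters_schedule = [50_000, 50_000, 100_000, 500_000]

params = init_params  # assumed to be initialized as in Section 4
for Re, n_iters in zip(reynolds_schedule, iters_schedule):
    loss_fn = make_pinn_loss(Re)              # closes over the current Reynolds number
    params = train(loss_fn, params, n_iters)  # Algorithm 1 training loop (placeholder)
```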
## 6 Miscellaneous
In this section, we introduce several aspects that researchers and practitioners should consider when using PINNs to promote robust and optimal performance. The discussion highlights the importance of selecting appropriate optimizers and learning rates, imposing exact boundary conditions, employing random sampling and a modified MLP architecture.
### Optimizer and learning rate
Numerous optimizers have been developed for deep learning applications; however, we find that the Adam optimizer consistently yields good performance without heavy tuning. Moreover, we do not recommend using weight decay especially for forward problems, as it tends to degrade the resulting predictive accuracy. Furthermore, the learning rate is a crucial factor in PINNs' performance. Our experience suggests that an initial learning rate of \(0.001\), combined with exponential decay, typically yields good results.
### Random sampling
The choice of an appropriate sampling strategy for collocation points plays an important role in the training efficiency and model performance. In comparison to full batch sampling, random sampling significantly reduces the memory requirements and the computational cost of each iteration. More importantly, random sampling introduces regularization effects, which ultimately contribute to the improved generalization capabilities of PINNs. Based on our observations, training PINNs using full-batch gradient descent may result in over-fitting the PDE residuals. Consequently, we strongly recommend using random sampling in all PINN simulations to achieve optimal performance.
### Imposing boundary conditions
Recent work by Dong _et al._[64] showed how to strictly impose periodic boundary conditions in PINNs as hard-constraints, which not only effectively reduces the number of loss constraints but also significantly enhances training convergence and predictive accuracy. To illustrate the main idea, let us consider enforcing periodic boundary conditions with period \(P\) in a one-dimensional setting. Our goal is to build a network architecture satisfying
\[u^{(l)}(a)=u^{(l)}(a+P),\quad l=0,1,2,\ldots. \tag{6.1}\]
To this end, we construct a special Fourier feature embedding of the form
\[\mathbf{v}(x)=\left(\cos(\omega x),\sin(\omega x)\right), \tag{6.2}\]
with \(\omega=\frac{2\pi}{P}\). Then, for any network representation \(u_{\theta}\), it can be proved that \(u_{\theta}(\mathbf{v}(x))\) exactly satisfies the periodic boundary conditions.
The same idea can be directly extended to higher-dimensional domains. For instance, let \((x,y)\) denote the coordinates of a point in two dimensions, and suppose that \(u(x,y)\) is a smooth periodic function to be approximated in a periodic cell \([a,a+P_{x}]\times[b,b+P_{y}]\), satisfying the following constraints
\[\frac{\partial^{l}}{\partial x^{l}}u\left(a,y\right) =\frac{\partial^{l}}{\partial x^{l}}u\left(a+P_{x},y\right),\quad y\in\left[b,b+P_{y}\right], \tag{6.3}\] \[\frac{\partial^{l}}{\partial y^{l}}u\left(x,b\right) =\frac{\partial^{l}}{\partial y^{l}}u\left(x,b+P_{y}\right),\quad x\in\left[a,a+P_{x}\right], \tag{6.4}\]
for \(l=0,1,2,\ldots\), where \(P_{x}\) and \(P_{y}\) are the periods in the \(x\) and \(y\) directions, respectively. Similar to the one-dimensional setting, these constraints can be implicitly encoded in a neural network by constructing a two-dimensional Fourier features embedding as
\[\mathbf{v}(x,y)=\left[\cos\left(\omega_{x}x\right),\sin\left(\omega_{x}x \right),\cos\left(\omega_{y}y\right),\sin\left(\omega_{y}y\right)\right] \tag{6.5}\]
with \(\omega_{x}=\frac{2\pi}{P_{x}},\ \omega_{y}=\frac{2\pi}{P_{y}}\).
For time-dependent problems, we simply concatenate the time coordinates \(t\) with the constructed Fourier features embedding, i.e., \(u_{\theta}([t,\mathbf{v}(x)])\), or \(u_{\theta}([t,\mathbf{v}(x,y)])\). In particular, if the PDE solutions are known to exhibit periodic behavior over time, we can also enforce periodicity along the time axis. More precisely, we consider the following special Fourier embedding
\[\mathbf{w}(t,x)=[\cos(\omega_{t}t),\sin(\omega_{t}t),\mathbf{v}(x)] \tag{6.6}\]
where \(\omega_{t}=\frac{2\pi}{P_{t}}\). The key difference is that \(P_{t}\) is a trainable parameter. Typically, \(P_{t}\) is initialized to the temporal domain's length, allowing networks to learn the solution's correct period. It is worth emphasizing that this assumption of time periodicity is not a severe restriction, and this technique can be applied to arbitrary dynamical systems, even if the solution is not periodic. This is because one can always set the initial \(P_{t}\) greater than the length of the temporal domain.
Lastly, other types of boundary conditions, including Dirichlet, Neumann, Robin, etc., can also be enforced in a "hard" manner by modifying the network outputs, see [65; 66] for more details.
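A minimal JAX sketch of the two-dimensional embeddings (6.5) and (6.6) is given below; when temporal periodicity is enforced, `Pt` would be registered as a trainable parameter:

```python
import jax.numpy as jnp

def periodic_embed_2d(x, y, Px, Py):
    # v(x, y) of Equation (6.5): hard-constrained spatial periodicity
    wx, wy = 2 * jnp.pi / Px, 2 * jnp.pi / Py
    return jnp.array([jnp.cos(wx * x), jnp.sin(wx * x),
                      jnp.cos(wy * y), jnp.sin(wy * y)])

def time_periodic_embed(t, x, y, Px, Py, Pt):
    # w(t, x) of Equation (6.6); Pt is trainable and initialized to the
    # length of the temporal domain
    wt = 2 * jnp.pi / Pt
    temporal = jnp.array([jnp.cos(wt * t), jnp.sin(wt * t)])
    return jnp.concatenate([temporal, periodic_embed_2d(x, y, Px, Py)])
```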
### Modified MLP
In practice, we found that a simple modification of MLPs proposed by Wang _et al._[26] demonstrates an enhanced capability for learning nonlinear and complex PDE solutions. The forward pass of an \(L\)-layer modified MLP is defined as follows. First, we introduce two encoders for the input coordinates
\[\mathbf{U}=\sigma(\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1}),\ \ \mathbf{V}= \sigma(\mathbf{W}_{2}\mathbf{x}+\mathbf{b}_{2}). \tag{6.7}\]
Then, for \(l=1,2,\ldots,L\),
\[\mathbf{f}^{(l)}(\mathbf{x}) =\mathbf{W}^{(l)}\cdot\mathbf{g}^{(l-1)}(\mathbf{x})+\mathbf{b}^{(l)}, \tag{6.8}\] \[\mathbf{g}^{(l)}(\mathbf{x}) =\sigma(\mathbf{f}^{(l)}(\mathbf{x}))\odot\mathbf{U}+(1-\sigma(\mathbf{f}^{(l)}(\mathbf{x})))\odot\mathbf{V}. \tag{6.9}\]
The final network output is given by
\[\mathbf{f}_{\theta}(\mathbf{x})=\mathbf{W}^{(L+1)}\cdot\mathbf{g}^{(L)}( \mathbf{x})+\mathbf{b}^{(L+1)}. \tag{6.10}\]
Here, \(\sigma\) denotes a nonlinear activation function, and \(\odot\) denotes an element-wise multiplication. All trainable parameters are given by
\[\theta=\{\mathbf{W}_{1},\mathbf{b}_{1},\mathbf{W}_{2},\mathbf{b}_{2},( \mathbf{W}^{(l)},\mathbf{b}^{(l)})_{l=1}^{L+1}\}. \tag{6.11}\]
This architecture is almost the same as a standard MLP network, with the addition of two encoders and a minor modification in the forward pass. Specifically, the inputs \(\mathbf{x}\) are embedded into a feature space via two encoders \(\mathbf{U},\mathbf{V}\), respectively, and merged in each hidden layer of a standard MLP using a point-wise multiplication. In our experience, the modified MLP demands greater computational resources; however, it generally outperforms the standard MLP in effectively minimizing PDE residuals, thereby yielding more accurate results.
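A minimal JAX sketch of this forward pass, assuming all hidden layers share the width of the two encoders, is given below:

```python
import jax.numpy as jnp

def modified_mlp_apply(params, x):
    # params = ((W1, b1), (W2, b2), layers); cf. Equations (6.7)-(6.10)
    (W1, b1), (W2, b2), layers = params
    U = jnp.tanh(W1 @ x + b1)  # first encoder
    V = jnp.tanh(W2 @ x + b2)  # second encoder
    g = x
    for W, b in layers[:-1]:
        a = jnp.tanh(W @ g + b)
        g = a * U + (1.0 - a) * V  # point-wise mixing of the two encodings
    W, b = layers[-1]
    return W @ g + b
```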
## 7 Results
In this section, we present a series of challenging benchmarks for evaluating PINNs' performance and illustrate the effectiveness of Algorithm 1, along with the proposed training strategies. In addition, we showcase state-of-the-art results for each benchmark, demonstrating the current performance capacity of PINNs. More importantly, we believe that these results establish robust and strong baselines, enabling future researchers to perform thorough evaluations and comparisons of their novel methods. This paves the way for continued innovation and development in this field.
For each benchmark, except for the last two, we perform comprehensive ablation studies to assess the effectiveness of the methods presented in the previous sections. In each ablation study we systematically disable each methodological component individually, while keeping the others active under the same hyper-parameter settings, and evaluate the resulting relative \(L^{2}\) error and run-time. This allows us to isolate the effects of each component and understand their contribution to the overall model performance. Throughout all ablation studies, we maintain the following hyper-parameter settings, unless stated otherwise. Specifically, we employ an MLP with 4 hidden layers, 256 neurons per hidden layer, and tanh activation functions as our backbone, initializing it using the Glorot scheme [55]. For model training, we use the Adam optimizer [67], starting with a learning rate of \(10^{-3}\) and an exponential decay with a decay
rate of 0.9 for every \(2,000\) decay steps. The collocation points are uniformly sampled from the computational domain with a batch size of \(4096\). The total number of training iterations can vary depending on the complexity of the example.
Furthermore, we conduct extensive hyper-parameter sweeps across various learning rate schedules, network sizes, and activations, in order to produce state-of-the-art results for each example. Note that the hyper-parameter settings for our ablation studies differ from those yielding the best results. We summarize our results in Table 1 and provide detailed hyper-parameter settings for our optimal models in the Appendix. Throughout all numerical experiments, when applicable, we enforce the exact periodic boundary conditions as described in section 6.
The code and data accompanying this manuscript will be made publicly available at [https://github.com/PredictiveIntelligenceLab/jaxpi](https://github.com/PredictiveIntelligenceLab/jaxpi). It should be highlighted that our implementation automatically supports efficient data-parallel multi-GPU training. As illustrated in Figure 2, we show great weak scaling capabilities up to 256 GPUs, enabling the effective simulation of large-scale problems. Additionally, our code includes valuable utilities for monitoring gradient norms and NTK eigenvalues throughout training, which are key metrics for identifying potential training issues.
### Allen-Cahn equation
We start with the 1D Allen-Cahn equation, a representative case with which conventional PINN models are known to struggle. It takes the form
\[u_{t}-0.0001u_{xx}+5u^{3}-5u=0,\quad t\in[0,1],x\in[-1,1], \tag{7.1}\] \[u(0,x)=x^{2}\cos(\pi x),\] (7.2) \[u(t,-1)=u(t,1),\] (7.3) \[u_{x}(t,-1)=u_{x}(t,1). \tag{7.4}\]
For this example, we first train a conventional PINN model to diagnose potential issues. In Figure 3, we visualize the histogram of back-propagated gradients, the resulting NTK eigenvalues and the temporal PDE residual loss (equation
\begin{table}
\begin{tabular}{l c}
\hline
**Benchmark** & **Relative \(L^{2}\) error** \\
\hline
Allen-Cahn equation & \(5.37\times 10^{-5}\) \\
Advection equation & \(6.88\times 10^{-4}\) \\
Stokes flow & \(8.04\times 10^{-5}\) \\
Kuramoto–Sivashinsky equation & \(1.61\times 10^{-1}\) \\
Lid-driven cavity flow (Re=3200) & \(1.58\times 10^{-1}\) \\
Navier–Stokes flow in a torus & \(2.45\times 10^{-1}\) \\
Navier–Stokes flow around a cylinder & – \\
\hline
\end{tabular}
\end{table}
Table 1: State-of-the-art relative \(L^{2}\) error for various benchmark equations using our proposed model.
Figure 2: Efficiency of weak scaling using the Navier-Stokes flow (section 7.6) as a benchmark. We employ a neural network with hyper-parameters shown in Table 12 and measure the execution time for 10,000 iterations, maintaining a consistent batch size of 40960 per GPU.
(2.10)) at the early stages of training. On the top left panel, one can see that the gradients of the PDE residual loss dominate those of the initial condition loss, which implies unbalanced back-propagated gradients. Moreover, the top right panel reveals that the network tends to minimize the PDE residuals at later times first, suggesting a violation of causality. In the bottom panel, a rapid decay in the NTK eigenvalues can be observed, indicating the presence of spectral bias. These findings strongly suggest that conventional PINNs suffer from multiple severe training pathologies, which need to be addressed simultaneously to yield satisfactory results.
To showcase the effectiveness of the proposed training pipeline in addressing these issues, we employ Algorithm 1 and disable individual methodological components one at a time. The results are summarized in Table 2 and Figure 4. It can be concluded that the full algorithm yields the best performance, with a relative \(L^{2}\) error of \(5.84\times 10^{-4}\). Removing any individual component from the algorithm generally leads to worse performance, which indicates the positive contribution of each component to the overall model accuracy. The most significant negative impact occurs when disabling the Fourier feature embedding, resulting in a relative \(L^{2}\) error of \(4.35\times 10^{-1}\); this implies that spectral bias degrades the predictive accuracy the most for this example. Furthermore, it is worth noting that the run-times across different configurations are relatively similar, except for the case corresponding to conventional PINNs, which shows a slightly shorter run-time of \(12.93\) minutes. This suggests that each component in Algorithm 1 introduces only negligible computational overhead. Finally, we present our best result in Figure 5, whereas Table 6 details the corresponding hyper-parameter configuration, and Figure 19 visualizes the loss convergence and the weight changes during training. One can see that the predicted solution achieves an excellent agreement with the reference solution, yielding a relative \(L^{2}\) error of \(5.37\times 10^{-5}\).
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multicolumn{4}{c}{**Ablation Settings**} & \multicolumn{2}{c}{**Performance**} \\
\hline
**Fourier Feature** & **RWF** & **Grad Norm** & **Causal** & **Rel. \(L^{2}\) error** & **Run time (min)** \\
\hline
✓ & ✓ & ✓ & ✓ & \(5.84\times 10^{-4}\) & 16.26 \\
✗ & ✓ & ✓ & ✓ & \(4.35\times 10^{-1}\) & 13.20 \\
✓ & ✗ & ✓ & ✓ & \(6.62\times 10^{-3}\) & 16.53 \\
✓ & ✓ & ✗ & ✓ & \(7.51\times 10^{-3}\) & 16.36 \\
✓ & ✓ & ✓ & ✗ & \(1.59\times 10^{-3}\) & 16.11 \\
✗ & ✗ & ✗ & ✗ & \(51.74\times 10^{-1}\) & 12.93 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: _Allen Cahn equation:_ Relative \(L^{2}\) error and run time for an ablation study illustrating the impact of disabling individual components of the proposed training pipeline. Note that the GPU run time may vary due to factors such as hardware utilization, batch processing, and other computational loads.
Figure 4: _Allen Cahn equation:_ Convergence of relative \(L^{2}\) error for the ablation study with different components disabled. Plain: Conventional PINN formulation. Default: PINN model trained using Algorithm 1. No RWF: PINN model trained using Algorithm 1 without random weight factorization. No Grad Norm: PINN model trained using Algorithm 1 without grad norm weighting scheme. No Fourier feature: PINN model trained using Algorithm 1 without random Fourier feature embeddings. No Causal: PINN model trained using Algorithm 1 without causal weighting.
Figure 5: _Allen Cahn equation:_ Comparison of the best prediction against the reference solution. The resulting relative \(L^{2}\) error is \(5.37\times 10^{-5}\). The hyper-parameter configuration can be found in Table 6.
### Advection equation

Next, we consider the one-dimensional advection equation

\[u_{t}+cu_{x}=0,\quad t\in[0,1],\ x\in[0,2\pi], \tag{7.5}\] \[u(0,x)=g(x),\quad x\in[0,2\pi], \tag{7.6}\]

with periodic boundary conditions. This example has been studied in [48; 31], exposing some of the limitations that PINNs suffer from as the transport velocity \(c\) is increased. In our experiments, we consider the challenging setting of \(c=80\) with an initial condition \(g(x)=\sin(x)\).
Analogous to the first example, we train a conventional PINN model with the aim of identifying the issues that lead to inaccurate results. As illustrated in Figure 3, it is evident that PINNs experience the same challenges as those observed in the first example. This observation strongly suggests the widespread nature of these issues in the training of PINNs, further emphasizing the necessity of addressing them to obtain robust and accurate PINN models.
As mentioned in section 6.3, we can impose the spatial and temporal periodicity by
\[\mathbf{v}(t,x)=[\cos(\omega_{t}t),\sin(\omega_{t}t),\cos(\omega_{x}x),\sin( \omega_{x}x)], \tag{7.7}\]
where \(\omega_{t}=\frac{2\pi}{P_{t}}\) and \(\omega_{x}=\frac{2\pi}{P_{x}}\), with \(P_{x}=2\pi\) and \(P_{t}\) a trainable parameter. For this example, we incorporate the imposition of temporal periodicity into Algorithm 1 and subsequently perform an ablation study on the enhanced algorithm. The performance of the various configurations is summarized in Table 3. One can conclude that integrating all the techniques together yields the optimal accuracy. The exclusion of any of these elements, especially the time periodicity, the Fourier features and the grad norm weighting scheme, leads to a significant increase in test error, highlighting their crucial role in achieving accurate results. Additionally, we present the state-of-the-art result in Figure 8. We see that the model prediction achieves an excellent agreement with the exact solution, with a relative \(L^{2}\) error of \(6.88\times 10^{-4}\). The hyper-parameter configuration and loss convergence are presented in Table 7 and Figure 20, respectively.
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multicolumn{5}{c}{**Ablation Settings**} & \multicolumn{2}{c}{**Performance**} \\
\hline
**Time Period** & **Fourier Feature** & **RWF** & **Grad Norm** & **Causal** & **Rel. \(L^{2}\) error** & **Run time (min)** \\
\hline
✓ & ✓ & ✓ & ✓ & ✓ & \(1.02\times 10^{-2}\) & 9.18 \\
✗ & ✓ & ✓ & ✓ & ✓ & \(7.37\times 10^{-1}\) & 8.76 \\
✓ & ✗ & ✓ & ✓ & ✓ & \(4.29\times 10^{-1}\) & 7.60 \\
✓ & ✓ & ✗ & ✓ & ✓ & \(1.31\times 10^{-2}\) & 9.25 \\
✓ & ✓ & ✓ & ✗ & ✓ & \(1.13\times 10^{0}\) & 7.46 \\
✓ & ✓ & ✓ & ✓ & ✗ & \(1.49\times 10^{-2}\) & 9.18 \\
✗ & ✗ & ✗ & ✗ & ✗ & \(9.51\times 10^{-1}\) & 7.12 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: _Advection equation:_ Relative \(L^{2}\) error and run time for an ablation study illustrating the impact of disabling individual components of the proposed technique and training pipeline.
Figure 8: _Advection equation:_ Comparison of the best prediction against the reference solution obtained from the hyper-parameter sweep. The resulting relative \(L^{2}\) error is \(6.88\times 10^{-4}\). The hyper-parameter configuration can be found in Table 7.
Figure 7: _Advection equation:_ Convergence of relative \(L^{2}\) error for the ablation study with different components disabled. Plain: Conventional PINN formulation. Default: PINN model trained imposing time periodicity and using Algorithm 1. No Time Period: PINN model trained using Algorithm 1. No RWF: PINN model trained imposing time periodicity and using Algorithm 1 without random weight factorization. No Grad Norm: PINN model trained imposing time periodicity and using Algorithm 1 without grad norm weighting scheme. No Fourier feature: PINN model trained imposing time periodicity and using Algorithm 1 without random Fourier feature embeddings. No Causal: PINN model trained imposing time periodicity and using Algorithm 1 without causal weighting.
### Stokes flow

Next, we consider a Stokes flow, which models fluid motion in the regime where viscous forces dominate inertial ones, as encountered, for example, in micro-organism locomotion in fluid environments. The governing equation is given by
\[-\nu\Delta\mathbf{u}+\nabla p =0, \tag{7.8}\] \[\nabla\cdot\mathbf{u} =0, \tag{7.9}\]
where \(\mathbf{u}=(u,v)\) defines the velocity and \(p\) the pressure, and \(\nu\) is the kinematic viscosity.
As depicted in Figure 9, the underlying geometry is a pipe \(\Omega=[0,2.2]\times[0,0.41]\backslash B_{r}(0.2,0.2)\) with a circular cylinder obstacle of radius \(r=0.05\). For the top and bottom walls \(\Gamma_{1}=[0,2.2]\times 0.41\) and \(\Gamma_{2}=[0,2.2]\times 0\) as well as the boundary \(S=\partial B_{r}(0.2,0.2)\), we impose the no-slip boundary condition
\[u_{|\Gamma_{1}}=u_{|\Gamma_{2}}=u_{|S}=0. \tag{7.10}\]
At the inlet \(\Gamma_{3}=0\times[0,0.41]\), a parabolic inflow profile is prescribed,
\[\mathbf{u}(0,y)=\mathbf{u}_{\mathrm{in}}=\left(\frac{4Uy(0.41-y)}{0.41^{2}},0 \right), \tag{7.11}\]
with a maximum velocity \(U=0.3\). At the outlet \(\Gamma_{4}=2.2\times[0,0.41]\), we define the outflow condition
\[\nu\partial_{\mathbf{n}}\mathbf{u}-p\mathbf{n}=0, \tag{7.12}\]
where \(\mathbf{n}\) denotes the outer normal vector.
To non-dimensionalize the system, we select the characteristic flow velocity and length as \(U^{*}=0.2\) and \(L^{*}=0.1\), respectively. This results in a Reynolds number of
\[\text{Re}=\frac{U^{*}L^{*}}{\nu}=\frac{0.2\cdot 0.1}{0.001}=20. \tag{7.13}\]
We can then define the non-dimensionalized variables as
\[\mathbf{x}^{*}=\frac{\mathbf{x}}{L^{*}},\quad\mathbf{u}^{*}=\frac{\mathbf{u}} {U^{*}},\quad p^{*}=\frac{pL^{*}}{\nu U^{*}},\quad\nabla^{*}=L^{*}\nabla. \tag{7.14}\]
By substituting these scales into the dimensionalized system, we obtain the non-dimensionalized PDE as
\[-\frac{1}{\text{Re}}\Delta\mathbf{u}^{*}+\nabla^{*}p^{*} =\mathbf{0} \text{in}\ \Omega^{*}, \tag{7.15}\] \[\nabla^{*}\cdot\mathbf{u}^{*} =0 \text{in}\ \Omega^{*},\] (7.16) \[\mathbf{u}^{*} =0 \text{on}\ \Gamma_{1}^{*}\cup\Gamma_{2}^{*}\cup S^{*},\] (7.17) \[\mathbf{u}^{*} =\frac{\mathbf{u}_{\mathrm{in}}}{U^{*}} \text{on}\ \Gamma_{3}^{*},\] (7.18) \[\frac{1}{\text{Re}}\frac{\partial\mathbf{u}^{*}}{\partial \mathbf{n}}-p^{*}\mathbf{n} =0 \text{on}\ \Gamma_{4}^{*}, \tag{7.19}\]
where \(\Omega^{*}\), \(S^{*}\) and \(\{\Gamma_{i}^{*}\}_{i=1}^{4}\) denote the non-dimensionalized domain and its boundaries, respectively.
To perform an ablation study for Algorithm 1, we employ an MLP with 4 hidden layers, 128 neurons per hidden layer, and GeLU activation functions, and train each model for \(10^{5}\) iterations of gradient descent using the Adam optimizer. The results are summarized in Table 4, and strongly indicate the positive impact of all proposed components on model performance; disabling any one component leads to worse predictive accuracy. In particular, comparing the configuration with non-dimensionalization enabled (1st row) to the one with non-dimensionalization disabled (5th row), we observe a substantial increase in the relative \(L^{2}\) error when non-dimensionalization is removed. This observation highlights the importance of non-dimensionalization in achieving optimal performance for solving the Stokes equation. Moreover, as evidenced by the 2nd and 3rd rows of the table, models trained without Fourier features or RWF fail to capture the correct solution, thus implying their essential contribution to the overall model performance. Lastly, we present the results of a fine-tuned PINN model in Figure 11, which exhibits excellent agreement with the reference solution and achieves a relative \(L^{2}\) error of \(8.04\times 10^{-5}\). The detailed hyper-parameter configuration and the loss convergence are respectively shown in Table 8 and Figure 21.
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multicolumn{4}{c}{**Ablation Settings**} & \multicolumn{2}{c}{**Performance**} \\
\hline
**Fourier Feature** & **RWF** & **Grad Norm** & **Non-dimensionalization** & **Rel. \(L^{2}\) error** & **Run time (min)** \\
\hline
✓ & ✓ & ✓ & ✓ & \(5.41\times 10^{-4}\) & 9.51 \\
✗ & ✓ & ✓ & ✓ & \(9.56\times 10^{-1}\) & 7.93 \\
✓ & ✗ & ✓ & ✓ & \(9.86\times 10^{-1}\) & 9.58 \\
✓ & ✓ & ✗ & ✓ & \(1.01\times 10^{-2}\) & 8.63 \\
✓ & ✓ & ✓ & ✗ & \(9.74\times 10^{-1}\) & 9.58 \\
✗ & ✗ & ✗ & ✗ & \(9.21\times 10^{-1}\) & 7.95 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: _Stokes equation:_ Relative \(L^{2}\) error and run time for an ablation study illustrating the impact of disabling non-dimensionalization and individual components of the proposed training pipeline. The error is measured against the norm of flow velocity \(\|\mathbf{u}\|_{2}=\sqrt{u^{2}+v^{2}}\).
Figure 10: _Stokes equation:_ Convergence of relative \(L^{2}\) error for the ablation study with different components disabled.
Figure 9: _Stokes equation:_ Illustration of the pipe geometry for Stokes flow.
### Kuramoto–Sivashinsky equation

Next, we consider the Kuramoto–Sivashinsky equation, which exhibits a wealth of spatially and temporally nontrivial dynamical behavior, and has served as a model example in efforts to understand and predict the complex dynamical behavior associated with a variety of physical systems. The equation takes the form
\[u_{t}+\alpha uu_{x}+\beta u_{xx}+\gamma u_{xxxx}=0,\quad t\in[0,1],\;x\in[0,2 \pi], \tag{7.20}\]
subject to periodic boundary conditions and an initial condition
\[u(0,x)=u_{0}(x). \tag{7.21}\]
Specifically, we take \(\alpha=100/16,\beta=100/16^{2},\gamma=100/16^{4}\) and \(u_{0}(x)=\cos(x)(1+\sin(x))\).
Based on our experience, it appears highly challenging to conduct long-term integration of this PDE system via a single-shot training of PINNs. This could potentially be attributed to the inherently chaotic nature of the system and the insufficient accuracy of PINN predictions. To illustrate this point, we train a PINN to simulate the dynamical system up to different final times \(T\) without time-marching, while keeping the same hyper-parameter settings. As shown in the left panel of Figure 12, the resulting relative \(L^{2}\) error drastically increases for larger temporal domains and eventually leads to a failure in correctly capturing the PDE solution. This illustrates the necessity of applying time-marching in order to mitigate the difficulties of approximation and optimization, thus leading to more accurate results. However, we must emphasize that the computational cost of time-marching is considerably larger than that of one-shot learning, as one needs to train multiple PINN models sequentially. It would be interesting to explore the acceleration of this training process in future work.
Moreover, we present an ablation study on Algorithm 1 and summarize our results in Table 5. It can be concluded that all proposed components positively contribute to the overall model performance, and removing any one of them results in increased errors. Notably, the use of the modified MLP greatly enhances the predictive accuracy, reflected in the substantial error reduction from \(2.98\times 10^{-3}\) to \(1.42\times 10^{-4}\). In our experience, modified MLPs typically outperform plain MLPs, especially for tackling nonlinear PDE systems. Furthermore, the predicted solution obtained from our best model is visualized in Figure 13, and is in good agreement with the ground truth. Nevertheless, some discrepancies can be observed near \(t=1\), which may be due to error accumulation and the inherently chaotic nature of the system. More details of the implementation and training are provided in Table 9 and Figure 22.
Figure 11: _Stokes equation:_ Comparison of the best prediction against the reference solution obtained from the hyper-parameter sweep. The resulting relative \(L^{2}\) error is \(8.04\times 10^{-5}\). The hyper-parameter configuration can be found in Table 8.
Figure 12: _Kuramoto–Sivashinsky equation: Left: Relative \(L^{2}\) errors from one-shot PINN training for different system final time \(T\) under the same hyper-parameter setting. Right: Convergence of relative \(L^{2}\) error for the ablation study with different components disabled._
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multicolumn{5}{c}{**Ablation Settings**} & \multicolumn{2}{c}{**Performance**} \\
\hline
**Modified MLP** & **Fourier Feature** & **RWF** & **Grad Norm** & **Causal** & **Rel. \(L^{2}\) error** & **Run time (min)** \\
\hline
✓ & ✓ & ✓ & ✓ & ✓ & \(\mathbf{1.42\times 10^{-4}}\) & 13.33 \\
✗ & ✓ & ✓ & ✓ & ✓ & \(2.98\times 10^{-3}\) & 6.21 \\
✓ & ✗ & ✓ & ✓ & ✓ & \(1.86\times 10^{-2}\) & 7.60 \\
✓ & ✓ & ✗ & ✓ & ✓ & \(1.86\times 10^{-4}\) & 14.11 \\
✓ & ✓ & ✓ & ✗ & ✓ & \(2.19\times 10^{-1}\) & 14.11 \\
✓ & ✓ & ✓ & ✓ & ✗ & \(2.58\times 10^{-4}\) & 9.18 \\
✗ & ✗ & ✗ & ✗ & ✗ & \(2.59\times 10^{-1}\) & 7.12 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: _Kuramoto–Sivashinsky equation:_ Relative \(L^{2}\) error and run time for an ablation study illustrating the impact of disabling individual components of the proposed technique and training pipeline.
Figure 13: _Kuramoto–Sivashinsky equation: Comparison of the best prediction against the reference solution. The relative \(L^{2}\) error of the spatio-temporal predicted solution is \(1.61\times 10^{-1}\). Note that the majority of this error is attributed to the last few time steps._
### Lid-driven cavity flow
In this example, we consider a classical benchmark problem in computational fluid dynamics, describing the motion of an incompressible fluid in a two-dimensional square cavity. The system is governed by the incompressible Navier-Stokes equations written in a non-dimensional form
\[\mathbf{u}\cdot\nabla\mathbf{u}+\nabla p-\frac{1}{Re}\Delta\mathbf{u} =0,\quad(x,y)\in(0,1)^{2}, \tag{7.22}\] \[\nabla\cdot\mathbf{u} =0,\quad(x,y)\in(0,1)^{2}, \tag{7.23}\]
where \(\mathbf{u}=(u,v)\) denotes the velocity in the \(x\) and \(y\) directions, respectively, and \(p\) is the scalar pressure field. We assume \(\mathbf{u}=(1,0)\) on the top lid of the cavity and a no-slip boundary condition on the other three walls. We are interested in the velocity and pressure distribution for a Reynolds number of \(3200\).
In our experience, when trained directly at a high Reynolds number, PINNs tend to be unstable and susceptible to converging to erroneous solutions. This observation is verified by the left panel of Figure 14, where we plot the relative \(L^{2}\) errors from training PINNs with Algorithm 1 at varying Reynolds numbers under the same hyper-parameter settings. Our results demonstrate that PINNs struggle to yield accurate solutions for Reynolds numbers greater than \(1{,}000\). One effective way to improve this result is to start training with a lower initial Reynolds number and gradually increase the Reynolds number during training. In this way, the model parameters obtained from training at smaller Reynolds numbers serve as a good initialization for training at higher Reynolds numbers. To demonstrate this, we select an increasing sequence of Reynolds numbers \([100,400,1000,3200]\) and train PINNs with Algorithm 1 for \(5\times 10^{4},5\times 10^{4},1\times 10^{5},5\times 10^{5}\) iterations, respectively. The detailed hyper-parameter configuration is summarized in Table 10. As shown in Figure 15, our predicted velocity field agrees well with the reference results of Ghia _et al._[68], yielding a relative \(L^{2}\) error of \(1.58\times 10^{-1}\) against the reference solution.
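The curriculum schedule just described can be expressed as a short training loop. The sketch below assumes a hypothetical `train_stage` routine that resumes optimization from the given parameters at a fixed Reynolds number; it illustrates the warm-starting idea rather than the exact implementation.

```python
def curriculum_train(init_params, train_stage):
    """Warm-start training over an increasing sequence of Reynolds numbers."""
    params = init_params()
    # Parameters learned at a lower Reynolds number initialize training at
    # the next, higher one, stabilizing the otherwise brittle optimization.
    for re, n_iters in zip([100, 400, 1000, 3200],
                           [5 * 10**4, 5 * 10**4, 1 * 10**5, 5 * 10**5]):
        params = train_stage(params, reynolds=re, num_iterations=n_iters)
    return params
```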
Figure 14: _Lid-driven cavity:_ Relative \(L^{2}\) error of training PINNs with Algorithm 1 at different Reynolds numbers \(Re\in[100,400,1000,3200]\).
Figure 15: _Lid-driven cavity (Re=3200): Left: Predicted velocity of the fine-tuned model. Right: Comparison of the predicted velocity profiles on the vertical and horizontal center-lines against Ghia _et al._[68]. The resulting relative \(L^{2}\) error against the reference solution is \(1.58\times 10^{-1}\)._
### Navier-Stokes flow in a torus
As our second-to-last example, our goal is to showcase the capability of PINNs in simulating incompressible Navier-Stokes flow using the velocity-vorticity formulation. The equations are given by
\[w_{t}+\mathbf{u}\cdot\nabla w =\frac{1}{\text{Re}}\Delta w,\quad\text{in }[0,T]\times\Omega, \tag{7.24}\] \[\nabla\cdot\mathbf{u} =0,\quad\text{in }[0,T]\times\Omega, \tag{7.25}\] \[w(0,x,y) =w_{0}(x,y),\quad\text{in }\Omega. \tag{7.26}\]
Here, \(\mathbf{u}=(u,v)\) represents the flow velocity field, \(w=\nabla\times\mathbf{u}\) denotes the vorticity, and Re denotes the Reynolds number. For this example, we define \(\Omega=[0,2\pi]^{2}\) and set Re to 100.
Having demonstrated the validity and effectiveness of the proposed PINN algorithm in the previous examples, our focus here is on simulating the vorticity evolution up to \(T=10\). To this end, we split the temporal domain into 5 intervals and employ a time-marching strategy. For each interval, we use a PINN model with a modified MLP (4 hidden layers, 256 neurons per hidden layer, Tanh activations) and train it using Algorithm 1 for \(10^{5}\) iterations of gradient descent with the Adam optimizer. The results of this simulation are summarized in Figure 16, which provides a visual comparison of the reference and predicted vorticity at \(T=10\). While a slight misalignment between the two can be observed, the model prediction is in good agreement with the corresponding numerical estimates. This demonstrates the capability of PINNs to closely match the reference solution, emphasizing their effectiveness in simulating vortical fluid flows.
### Navier-Stokes flow around a cylinder
In our last example, we investigate a classical benchmark in computational fluid dynamics, describing the behaviour of a transient fluid in a pipe with a circular obstacle. Previous research by Chuang _et al._[69] reported that PINNs act as a steady-flow solver, and fail to capture the phenomenon of vortex shedding. Here we challenge these findings and demonstrate that, if properly used, PINNs can successfully simulate the development of vortex shedding in this scenario.
Specifically, we consider a fluid with a density of \(\rho=1.0\) and describe its behavior using the time-dependent incompressible Navier-Stokes equations
\[\mathbf{u}_{t}+\mathbf{u}\cdot\nabla\mathbf{u}+\nabla p-\nu\Delta\mathbf{u} =0, \tag{7.27}\] \[\nabla\cdot\mathbf{u} =0, \tag{7.28}\]
with \(\mathbf{u}=(u,v)\) defining the velocity field and \(p\) the pressure. The kinematic viscosity is taken as \(\nu=0.001\).
The underlying geometry is identical to Figure 9 and the boundary conditions are the same as the Stokes flow example discussed in section 7.3. However, we introduce a parabolic inflow profile with a maximum velocity of \(U=1.5\). As a result, we have characteristic flow velocity and length values of \(U=1.0\) and \(L=0.1\), respectively, and a Reynolds number of \(Re=100\).
We begin by normalizing the PDE system as follows:
\[\mathbf{x}^{*}=\frac{\mathbf{x}}{L^{*}},\quad t^{*}=\frac{tU^{*}}{L^{*}},\quad\mathbf{u}^{*}=\frac{\mathbf{u}}{U^{*}},\quad p^{*}=\frac{pL^{*}}{\nu U^{*}},\quad\nabla^{*}=L^{*}\nabla. \tag{7.29}\]
Figure 16: _Navier-Stokes flow in a torus:_ Comparison of the best prediction against the reference solution at the last time step. The animation is provided in [https://github.com/PredictiveIntelligenceLab/jaxpi](https://github.com/PredictiveIntelligenceLab/jaxpi).
This leads us to the non-dimensionalized equations:
\[\mathbf{u}^{*}_{t^{*}}+\mathbf{u}^{*}\cdot\nabla^{*}\mathbf{u}^{*}+\nabla^{*}p^{*}-\frac{1}{Re}\Delta^{*}\mathbf{u}^{*} =0, \tag{7.30}\] \[\nabla^{*}\cdot\mathbf{u}^{*} =0. \tag{7.31}\]
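As a quick illustration, the change of variables in Eq. (7.29) can be written as a small helper; the default characteristic scales below are the values stated in the text (\(L^{*}=0.1\), \(U^{*}=1.0\), \(\nu=0.001\)).

```python
def non_dimensionalize(x, t, u, p, L_star=0.1, U_star=1.0, nu=0.001):
    """Map physical variables to the starred quantities of Eq. (7.29)."""
    x_star = x / L_star
    t_star = t * U_star / L_star
    u_star = u / U_star
    p_star = p * L_star / (nu * U_star)
    return x_star, t_star, u_star, p_star
```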
To obtain a proper initial condition for PINNs, we start with a zero solution and run a numerical simulation for 4 seconds at a very coarse spatial and temporal resolution. We then use the last time-step as our initial condition for the PINNs simulation.
Using a time-marching strategy, we partition the temporal domain \([0,10]\) into 10 individual time windows. For each window, a modified MLP is employed as our model backbone, and PINN training runs for \(2\times 10^{5}\) iterations per window following Algorithm 1. Key hyper-parameters are detailed in Table 13. It is worth mentioning that there are more than 10 terms in the total loss, so it is practically infeasible to manually adjust the weight of each loss term. The predicted velocity and pressure fields at \(T=10\) are plotted in Figure 17. For this benchmark, we do not report the test error against the numerical solution, as the onset time of vortex shedding in numerical solvers fluctuates depending on the underlying discretization. To the best of our knowledge, our work presents the first empirical evidence of a PINN model being able to capture the phenomenon of vortex shedding. This finding opens up new avenues for further research and application of PINNs in the field of computational fluid dynamics.
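Because manually tuning this many loss weights is impractical, an automatic balancing scheme is needed. The following is a minimal sketch in the spirit of the gradient-norm ("Grad Norm") weighting referenced in Table 5, where each loss term receives a weight inversely proportional to the norm of its gradient; `grads_of` is a hypothetical helper returning the flattened gradient of one loss term with respect to the network parameters, and the exact scheme in Algorithm 1 may differ in detail.

```python
import numpy as np

def grad_norm_weights(loss_terms, grads_of, eps=1e-8):
    """Weight each loss term inversely to the norm of its parameter gradient."""
    norms = np.array([np.linalg.norm(grads_of(term)) for term in loss_terms])
    # With weight (sum of norms) / (own norm), every weighted term contributes
    # a gradient of comparable magnitude to the total loss.
    return norms.sum() / (norms + eps)
```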
## 8 Conclusions
In this work, we introduce a comprehensive training pipeline for physics-informed neural networks, addressing various training pathologies such as spectral bias, imbalanced losses, and causality violation. Our pipeline seamlessly integrates essential techniques, including equation non-dimensionalization, Fourier feature embeddings, loss weighting schemes and causal training strategies. Moreover, we explore additional techniques such as a modified MLP architecture, random weight factorization and curriculum training, which can further improve the training stability and model performance. By sharing our empirical findings, we also provide insights into selecting appropriate hyper-parameters associated with network architectures and learning rate schedules in conjunction with the aforementioned algorithms. To demonstrate
Figure 17: _Navier-Stokes flow around a cylinder:_ Predicted velocity field and pressure at \(T=10\), the last time step. The animation is provided in [https://github.com/PredictiveIntelligenceLab/jaxpi](https://github.com/PredictiveIntelligenceLab/jaxpi).
the effectiveness of the proposed training pipeline, we perform thorough ablation studies on a collection of benchmarks with which PINNs often struggle, and showcase state-of-the-art results that we believe should serve as a strong baseline for future studies. By establishing these benchmarks, we hope that our contribution will serve as a cornerstone for fairer and more systematic comparisons in the development and adoption of PINN-based methodologies, ultimately propelling PINN research towards more effective and reliable solutions in computational science and engineering.
## Acknowledgments
We would like to acknowledge support from the US Department of Energy under the Advanced Scientific Computing Research program (grant DE-SC0019116), the US Air Force (grant AFOSR FA9550-20-1-0060), and US Department of Energy/Advanced Research Projects Agency (grant DE-AR0001201). We also thank the developers of the software that enabled our research, including JAX [63], JAX-CFD[70], Matplotlib [71], and NumPy [72].
|
2303.02551 | Discrepancies among Pre-trained Deep Neural Networks: A New Threat to
Model Zoo Reliability | Training deep neural networks (DNNs) takes significant time and resources. A
practice for expedited deployment is to use pre-trained deep neural networks
(PTNNs), often from model zoos -- collections of PTNNs; yet, the reliability of
model zoos remains unexamined. In the absence of an industry standard for the
implementation and performance of PTNNs, engineers cannot confidently
incorporate them into production systems. As a first step, discovering
potential discrepancies between PTNNs across model zoos would reveal a threat
to model zoo reliability. Prior works indicated existing variances in deep
learning systems in terms of accuracy. However, broader measures of reliability
for PTNNs from model zoos are unexplored. This work measures notable
discrepancies between accuracy, latency, and architecture of 36 PTNNs across
four model zoos. Among the top 10 discrepancies, we find differences of
1.23%-2.62% in accuracy and 9%-131% in latency. We also find mismatches in
architecture for well-known DNN architectures (e.g., ResNet and AlexNet). Our
findings call for future works on empirical validation, automated tools for
measurement, and best practices for implementation. | Diego Montes, Pongpatapee Peerapatanapokin, Jeff Schultz, Chengjun Gun, Wenxin Jiang, James C. Davis | 2023-03-05T02:27:55Z | http://arxiv.org/abs/2303.02551v1 | # Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
###### Abstract.
Training deep neural networks (DNNs) takes significant time and resources. A practice for expedited deployment is to use pre-trained deep neural networks (PTNNs), often from model zoos--collections of PTNNs; yet, the reliability of model zoos remains unexamined. In the absence of an industry standard for the implementation and performance of PTNNs, engineers cannot confidently incorporate them into production systems. As a first step, discovering potential discrepancies between PTNNs across model zoos would reveal a threat to model zoo reliability. Prior works indicated existing variances in deep learning systems in terms of accuracy. However, broader measures of reliability for PTNNs from model zoos are unexplored. This work measures notable discrepancies between accuracy, latency, and architecture of 36 PTNNs across four model zoos. Among the top 10 discrepancies, we find differences of 1.23%-2.62% in accuracy and 9%-131% in latency. We also find mismatches in architecture for well-known DNN architectures (_e.g._, ResNet and AlexNet). Our findings call for future works on empirical validation, automated tools for measurement, and best practices for implementation.
Neural networks, Model zoos, Software reuse, Empirical software engineering
## 1. Introduction
With the growing energy consumption of training DNNs (26), taking advantage of the re-usability of PTNNs can significantly reduce the costs of training (13). In particular, transfer learning can result in shorter training times and higher asymptotic accuracies compared to other weight initialization methods (22; 36). This kind of technique accelerates model reuse and development. The history of PTNNs and their impact on the development of artificial intelligence has been extensively documented (13; 25). As such, collections of PTNNs have been created, referred to as _model zoos_. Notably, maintainers of popular machine learning frameworks, such as TensorFlow (2), maintain corresponding model zoos developed with their framework, such as the TensorFlow Model Garden (38).
There are many model zoos (18; 23; 38) and an expanding use of PTNNs in production systems (13). Past work has emphasized the difficulties in adopting software engineering practices in machine learning, and specifically, the challenges with reproducing machine learning research papers (4; 17). These reproducibility issues may affect PTNNs, leading to variations across model zoos (28). Disparities in the accuracy, latency, or architecture of a PTNN could negatively affect a deep learning system, threatening PTNNs' reuse potential. Consider a model zoo that hosts an incorrect implementation of a well-known DNN architecture with significantly increased latency. An engineer using the PTNN from this zoo would unknowingly receive a lower-quality PTNN than they might otherwise have from a different model zoo, and the intended quick turnaround of reusing a PTNN would be undermined. Discovering discrepancies would shine a light on the reliability of model zoos.
To explore the reliability of model zoos, we performed a measurement study to identify discrepancies among 36 image classification PTNN architectures across four model zoos: _TensorFlow Model Garden_, _ONNX Model Zoo_, _Torchvision Models_, and _Keras Applications_. We find that top-1 accuracy differences between PTNNs of the same name evaluated on the same
dataset (ImageNet) can be as large as 2.62% (Krizhevsky et al., 2014).1 Similarly, over 20% of the PTNNs measured had latency differences (FLOPs) of 10% or more when comparing PTNNs of the same name across the model zoos. Lastly, we discover architectural differences in several PTNNs, including implementations of _AlexNet_ and _ResNet V2_. We conclude with an agenda for future research on further empirical validation, automated tools for measurement, and best practices for implementing model zoo PTNNs.
Footnote 1: The _ILSVRC-2012-CLS_ image dataset has 50,000 validation images. A 1% accuracy difference is equivalent to 500 images.
## 2. Background and Related Work
PTNNs are applied in a wide variety of domains (Krizhevsky et al., 2014). With the demand for engineers far exceeding supply (Krizhevsky et al., 2014), companies are looking for best practices that can boost the productivity of their engineers. Major companies (_e.g._, Google and Microsoft) have shared best practices on machine learning development and informed future directions on model reuse (Bannis et al., 2016; Bannis et al., 2016). A case study from SAP indicates possible compatibility, portability, and scalability challenges in machine learning model deployment, which may affect their performance (Krizhevsky et al., 2014). There have been many efforts to improve the quality of model zoos. For example, IBM has developed a tool to extract model metadata (Wang et al., 2017) to support better model management. Bannis _et al._ promote best practices for reproducing and publishing high-quality PTNNs (Bannis et al., 2016). However, the reliability of model zoos has not been validated by prior works.
The ability to replicate the accuracy of a DNN in identical training environments is hindered by non-deterministic factors. Accuracy differences of up to 10.8%, stemming purely from non-determinism, have been reported with popular DNN architectures (Krizhevsky et al., 2015). Closely related, research has investigated and benchmarked the performance variances tied to deep learning frameworks (Krizhevsky et al., 2014; Krizhevsky et al., 2014). This variability threatens the reliability of new deep learning techniques. As such, automated variance testing (Krizhevsky et al., 2015) has been proposed to assure the validity of these comparisons. However, PTNNs in model zoos may also suffer from varying architectural implementations, affecting more than just accuracy. Our work measures the disparities in PTNNs across different model zoos as opposed to attempting to improve the standard in just one (Bannis et al., 2016). Our results enlighten future works validating the quality and promoting the standardization of model zoos.
## 3. Methodology
We perform a measurement study to assess our problem statement: whether discrepancies exist between the accuracy, latency, and architecture of PTNNs across different model zoos.
### Subjects
A _model zoo_ is a collection of PTNNs for various tasks. We carry out a selection process for four model zoos. Our selection criteria included the model zoo being maintained alongside a machine learning framework: this increases the likelihood of the model zoo being actively maintained. Furthermore, to ensure the popularity of the model zoo, the zoo must have a public GitHub repository with at least three thousand stars (Bannis et al., 2016). Using GitHub search2 to identify potential model zoo candidates, 11 model zoos were selected that met the criteria.3 The PTNNs within the 11 model zoos were categorized into deep learning tasks, including image classification, object detection, and natural language processing. We focused on image classification models because it is the most common type in 8 of the 11 model zoos.
Footnote 2: [https://github.com/search](https://github.com/search)
A PTNN availability analysis was done on the candidate model zoos to assess how many model zoos offered the same image classification PTNN architectures. Based on the largest shared availability, we chose _TensorFlow Model Garden_, _ONNX Model Zoo_, _Torchvision Models_, and _Keras Applications_. Within these model zoos, we selected all the image classification PTNN architectures that were present in at least two of the four model zoos, yielding 36 PTNN architectures. The selected PTNNs are either directly downloadable from the model zoos' GitHub repositories or can be pulled using the model zoos' APIs.
### Evaluation Metrics
**Accuracy.** Image classification DNNs' effectiveness is measured in accuracy, which is a critical component of a PTNN. We are measuring discrepancies between the claims of model zoos as opposed to verifying them. _Top-1 accuracy_ is the conventional accuracy, where the model prediction must exactly match the expected label, while _top-5 accuracy_ measures the fraction of images where any of the top five predicted labels matches the target label (Bannis et al., 2016; Krizhevsky et al., 2014). 35 image classification PTNN architectures reported top-1 ImageNet classification accuracies, while only 32 reported top-5 ImageNet classification accuracies.
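For concreteness, here is a small sketch of how these two metrics can be computed from a matrix of per-class scores; it is an illustrative definition, not code from the study.

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    top_k = np.argsort(scores, axis=1)[:, -k:]   # indices of the k best classes
    return (top_k == labels[:, None]).any(axis=1).mean()

# top_1 = top_k_accuracy(scores, labels, k=1)
# top_5 = top_k_accuracy(scores, labels, k=5)
```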
**Latency.** The latency of a DNN is a key factor that engineers consider (Krizhevsky et al., 2014). For example, _MobileNet_ is a DNN image classification architecture that prioritizes low latency on mobile and embedded systems (Krizhevsky et al., 2014). We used open-source tools (Bannis et al., 2016; Wang et al., 2017; Wang et al., 2017) to measure the latency by counting the floating point operations (FLOPs) (Bannis et al., 2016).
Figure 1. Overview of the measurement process. We gather PTNNs from the model zoos with the same name, perform measurements on each PTNN, and compare for discrepancies.
FLOPs are framework and hardware-agnostic, allowing for unbiased comparisons.
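As an illustration, FLOP counts for a PyTorch PTNN can be obtained with open-source profilers. The sketch below uses the thop package on an untrained _SqueezeNet 1.0_ architecture (the weights do not affect the FLOP count); the exact tools used in the study may differ.

```python
import torch
import torchvision.models as models
from thop import profile

model = models.squeezenet1_0()                 # architecture only; no weights needed
dummy = torch.randn(1, 3, 224, 224)            # standard ImageNet input shape
macs, params = profile(model, inputs=(dummy,))
flops = 2 * macs                               # 1 multiply-accumulate = 2 FLOPs
print(f"{flops / 1e6:.2f} million FLOPs, {params / 1e6:.2f}M parameters")
```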
**Architecture**. PTNNs are trained weights based on research papers that propose DNN architectures. As a result, model zoos advertise PTNNs by their architecture name. The observed accuracy differences and past work on DNN vulnerabilities motivated us to examine architecture (Han et al., 2017). Qualitative observations of discrepancies in the descriptions, source code, and visualizations of PTNN architectures were employed. Specifically, Netron, an open-source neural network visualizer, was used to inspect the architecture of the PTNNs (Wang et al., 2017). However, not all neural network weight formats are supported, so all PTNNs were converted to the ONNX format for architectural analysis using an appropriate tool for each framework (Zhou et al., 2018; Wang et al., 2019). The source code for the implementations of the PTNNs is present in the model zoos' GitHub repositories and was used as an additional form of PTNN inspection.
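A minimal sketch of the PyTorch-to-ONNX conversion step used for this kind of architectural inspection; the file name and opset version are illustrative choices.

```python
import torch
import torchvision.models as models

model = models.alexnet()
model.eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "alexnet.onnx", opset_version=12)
# The resulting alexnet.onnx file can be opened directly in Netron.
```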
## 4. Results and Analysis
### Accuracy
We compared the top-1 accuracy of 35 PTNN architectures and the top-5 accuracy of 32 PTNN architectures by using ImageNet. Notably, 12 of the 35 profiled PTNN architectures had top-1 accuracy differences greater than 0.96%. For top-5 accuracies, 6 of the 32 PTNN architectures had differences greater than 0.94%. The large differences present in Figure 2 have significant consequences. For example, _ResNet V1 152_ from Keras is noticeably less accurate than the PTNN of the same name from Torchvision, with top-1 accuracies of 76.6% and 78.31%, respectively. This difference is pronounced enough that _ResNet V1 101_ from Torchvision, with a top-1 accuracy of 77.37%, is more accurate than _ResNet V1 152_ from Keras.4
Footnote 4: ResNet V1 101 was originally reported to be 0.32% less accurate than _ResNet V1 152_(Keras, 2016).
Table 1 shows the aggregation of accuracy differences across model zoos, highlighting how often a model zoo had the highest or lowest top-1 or top-5 accuracy for a given PTNN architecture. As seen, 48% of the PTNNs that were available on Torchvision had the highest top-1 accuracy among the model zoos. On the other hand, Keras had the lowest top-1 accuracy 44% of the time for its selection of PTNNs.
### Latency
36 PTNN architectures were measured for their FLOPs. Figure 3 shows that there are 8 PTNN architectures where the PTNN with the highest amount of FLOPs had greater than 10% more FLOPs than the PTNN with the lowest FLOP count. At the extreme, Torchvision's _SqueezeNet 1.0_, sitting at 819.08 million FLOPs, had 2.31\(\times\) the FLOPs of ONNX's _SqueezeNet 1.0_. Likewise, the three PTNN architectures from the _ResNet V2_ family all had greater than 85% more FLOPs than the lowest FLOPs PTNN. All the high FLOP-count _ResNet V2_ come from TFMG.
We discuss the possible explanations for the FLOPs differences seen in Figure 3. The high FLOPs difference measured in _SqueezeNet 1.0_ can be explained by looking at its successor, _SqueezeNet 1.1_. _SqueezeNet 1.1_ is advertised by ONNX to contain 2.4\(\times\) less computation than the former. However, _SqueezeNet 1.1_ from ONNX has the same number of measured FLOPs as the _1.0_ PTNN offered. ONNX has been advertising _SqueezeNet 1.1_ as its _1.0_ counterpart. Similarly, looking at the _ResNet V2_ from TFMG: a primary contributor to the large amount of FLOPs is the input shape. _ResNet V2_ architectures, according to the origin paper, accept 224\(\times\)224 inputs (Keras, 2016); however, TFMG states that the _ResNet V2_ PTNNs it provides use Inception pre-processing and an input image size of 299\(\times\)299. A trade-off between accuracy and throughput, FLOPs, was potentially made here by the model zoo maintainers.
Across all FLOP-counted PTNNs, Torchvision had the highest FLOPs PTNNs for 78% of the PTNNs it offered. Close behind, TFMG had 69%. Pointedly, Keras never had the highest FLOPs PTNN and had the lowest FLOPs implementation 81% of the time.
### Architecture
We frame our results for architecture in terms of the discrepancies we discovered in our analysis. Specifically, we discuss differences among PTNNs for _AlexNet_, _ResNet V1 101_, _ResNet V2 50_, and _ResNet V2 101_ and against the PTNNs' origin papers.
The _AlexNet_ from Torchvision cites a different origin paper than other model zoos (Zhou et al., 2018; Wang et al., 2019). Both papers contain the same first author; however, only the latter contains an explicit description of a DNN architecture. As such, our analysis pegs the PTNN against the latter paper (Wang et al., 2019). We notice two main discrepancies: the PTNN is missing the response normalization layers and the kernel-size and number of kernels for the convolution layers are incorrect. For instance, Torchvision's PTNN has 64 kernels in the first convolution layer as opposed to the 96 that are described in the origin paper.
\begin{table}
\begin{tabular}{l c c c c} & Highest Top-1 & Lowest Top-1 & Highest Top-5 & Lowest Top-5 \\ \hline
Torchvision Models & 48\% & 41\% & 52\% & 36\% \\
TF Model Garden & 40\% & 33\% & 36\% & 43\% \\
Keras Applications & 37\% & 44\% & 36\% & 40\% \\
ONNX Model Zoo & 35\% & 41\% & 31\% & 44\% \\ \hline
\end{tabular}
\end{table}
Table 1. Frequency at which each model zoo had the most or least accurate model, ordered by highest top-1 accuracy.
Figure 2. Top 10 largest top-1 accuracy differences. For a PTNN architecture, the accuracy of the PTNN with the lowest reported top-1 accuracy is subtracted from that of the PTNN with the largest top-1 accuracy.
The _ResNet V1 101_ from ONNX and Keras contain convolution shortcuts, which were introduced in the _ResNet V2_ paper and do not appear in the _ResNet V1_ origin paper (Keras, 2018; Keras, 2018). Torchvision's and TFMG's _ResNet V1 101_ do not include this shortcut. Also in the _ResNet_ family, both the _ResNet V2 50_ and _ResNet V2 101_ share a discrepancy. As seen in Figure 4, Keras' _ResNet V2 50_ implementation contains max pool skip connections, which are not present in the paper, and uses convolutions with larger strides in these residual blocks (Keras, 2018).
The observed discrepancies in architecture may affect the accuracy and latency. For example, the larger convolution strides and max pool skip connection in the _ResNet V2 50_ from Keras allows the network to use less compute, FLOPs, compared to the PTNN from ONNX. This can be seen in the FLOP measurements of the _ResNet V2 50_ from Keras and ONNX. ONNX's _ResNet V2 50_ has 4.12 billion FLOPs while Keras' PTNN only has 3.49 billion FLOPs, an 18.1% difference. Moreover, the Keras PTNN did not sacrifice accuracy through this implementation, reporting a 76% top-1 accuracy, which is greater than ONNX's _ResNet V2 50_ top-1 accuracy of 75.81%. While the Keras maintainers did not implement _ResNet V2 50_ faithfully to the origin paper, they produced a more accurate PTNN with lower latency.
## 5. Discussion and Future Work
**Empirical Validation.** The top-1 accuracy differences depicted in Figure 2 suggest that the choice of model zoo matters. Specifically, 34% of the PTNN architectures having top-1 accuracy differences greater than 0.96% is not easily overlooked. An engineer may receive a PTNN that incorrectly classifies more than 500 additional ImageNet validation images compared to a PTNN from a different model zoo. Model zoo choice should not result in a noticeable impact on the accuracy of the PTNNs that engineers receive. Although model zoos currently report the accuracy of the PTNNs they offer, our work has shown that this does not guarantee that there is not another model zoo offering the same PTNN at a higher accuracy. Publicly available and actively maintained comparisons of model zoo PTNNs would allow engineers to be more informed when choosing a model zoo. Furthermore, we only studied the accuracies of image classification models at face value. We recommend future works focus on empirical validation of the claims of PTNNs in model zoos to check for the existence of false advertising.
**New Metrics and Automated Tools.** The measured FLOP disparities seen in Figure 3 have consequences, especially on edge devices with limited compute. For example, ONNX incorrectly listing _SqueezeNet 1.1_ as _SqueezeNet 1.0_ may lead to confusion when an engineer switches to _SqueezeNet 1.1_ from _SqueezeNet 1.0_ expecting a drop in latency. Similar confusion may arise from instances like the one seen in TFMG's selection of _ResNet V2_. While the increased input size is stated, the impact on latency is not made clear. To effectively inform engineers of the latency of PTNNs, model zoos should report FLOP counts alongside accuracy. Also of interest is the energy usage of these PTNNs, another important property for edge devices. The lack of reporting of these properties may make choosing PTNNs more difficult. We recommend future works create new metrics to measure the reliability and quality of PTNNs from model zoos and develop tools for automatically measuring these properties. Publishing updated results frequently can support easier decision-making of models for deployment.
**Naming Conventions.** The differences in the architectures of PTNNs may indicate an underlying improper documentation standard and a need for improved naming conventions in model zoos. As indicated in SS4.3, Torchvision's _AlexNet_ did not adhere to the origin paper while still claiming to be _AlexNet_. Seemingly, model zoos are advertising PTNNs labeled as well-known DNN architectures, like _ResNet_ and _AlexNet_, but when they do this, they really mean that the PTNNs are based on the DNN architecture and are not strict implementations. This inadequate naming convention leads to a false sense of equality and thus confusion. We recommend the community comprehensively document PTNN naming conventions to increase cohesion among model zoos. Likewise, we suggest future works investigate the expectations of engineers with regard to the PTNNs from model zoos to see whether they prefer exact reproductions or more accurate and lower-latency PTNNs.
Figure 4. _ResNet V2 50_ architecture differences between _Keras Applications_ (left) and _ONNX Model Zoo_ (right). The top-right convolution on the left has a stride size of 2, while the top-right convolution on the right has a stride size of 1.
Figure 3. Top 10 largest FLOPs differences. For a PTNN architecture, the FLOP count of the PTNN with the most FLOPs is divided by the FLOPs of the PTNN with the fewest.
The result of such a study would inform model zoo maintainers on how to best implement and train PTNNs.
## 6. Conclusion
We present an investigation of the discrepancies between 36 image classification PTNN architectures from four popular model zoos through accuracy, latency, and architecture analyses. We find several significant discrepancies along these three axes that challenge the well-established use of PTNNs from model zoos, suggesting that an engineer will receive a PTNN with different characteristics depending on the model zoo. The goal of PTNNs, shortening model deployment time, is diminished by the time investment needed to verify the properties of a PTNN. We discuss the importance of future works to validate the claims of model zoos, develop automated tools for measurement, and explore best practices for implementing model zoo PTNNs.
###### Acknowledgements.
This work was supported by gifts from Google and Cisco and by NSF-OAC award #2107230. We thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions. |
2301.11354 | Permutation-based Hypothesis Testing for Neural Networks | Neural networks are powerful predictive models, but they provide little
insight into the nature of relationships between predictors and outcomes.
Although numerous methods have been proposed to quantify the relative
contributions of input features, statistical inference and hypothesis testing
of feature associations remain largely unexplored. We propose a
permutation-based approach to testing that uses the partial derivatives of the
network output with respect to specific inputs to assess both the significance
of input features and whether significant features are linearly associated with
the network output. These tests, which can be flexibly applied to a variety of
network architectures, enhance the explanatory power of neural networks, and
combined with powerful predictive capability, extend the applicability of these
models. | Francesca Mandel, Ian Barnett | 2023-01-26T19:07:16Z | http://arxiv.org/abs/2301.11354v1 | # Permutation-based Hypothesis Testing for Neural Networks
###### Abstract
Neural networks are powerful predictive models, but they provide little insight into the nature of relationships between predictors and outcomes. Although numerous methods have been proposed to quantify the relative contributions of input features, statistical inference and hypothesis testing of feature associations remain largely unexplored. We propose a permutation-based approach to testing that uses the partial derivatives of the network output with respect to specific inputs to assess both the significance of input features and whether significant features are linearly associated with the network output. These tests, which can be flexibly applied to a variety of network architectures, enhance the explanatory power of neural networks, and combined with powerful predictive capability, extend the applicability of these models.
Machine Learning, Neural Networks
## 1 Introduction
While neural networks are well known for their predictive capability, compared to traditional regression approaches they generally provide little explanatory insight into how they make their predictions. While the mathematics of each layer-to-layer transformation are relatively simple, how and why a network combines information from the inputs to predict the outputs becomes more difficult to understand as the network architecture grows in complexity. This issue of interpretability of neural networks has been addressed extensively in the literature (Gilpin et al., 2018; Zhang et al., 2021). Despite the challenges, there are many settings in which it is desirable or necessary to interpret neural networks. In applications such as credit, employment, and criminal justice, understanding how predictions are made is extremely useful for evaluating whether the algorithms are fair and non-discriminatory (Bostrom and Yudkowsky, 2014; Hardt et al., 2016). Recent laws mandating the "right to explanation," a right to information about individual decisions made by algorithms, have accelerated the need for interpretability of complex models (Goodman and Flaxman, 2017). In many scientific research fields where data contain highly complex patterns, there is interest not just in accurate prediction but also in gleaning knowledge of the subject from the model fit. Machine learning methods are well-equipped to handle the size and complexity of genetic sequencing data, but improving network interpretability can further understanding of how novel variants contribute to susceptibility to diseases such as Parkinson's disease (Bryant et al., 2021). Prognostic models for patients with severe brain injuries are critical to prescribing appropriate individualized treatment, and machine learning models that combine a variety of sources of information have shown great potential for enhancing the medical decision-making process. However, these models require a level of transparency and interpretability to be implemented in practice (Farzaneh et al., 2021).
Despite the well-documented need for interpretability with neural networks, there is not a clear consensus on what interpretability in this context means (Doshi-Velez and Kim, 2017; Lipton, 2018). A wide variety of methods have been proposed to address different elements of interpretability. Many can be categorized as feature importance methods. Early work in this area included connection weights introduced in Garson (1991) and a saliency measure described in Ruck et al. (1990). Dimopoulos et al. (1995) proposed using partial derivatives to measure the sensitivity of a network and thereby determine its generalizability. Newer work has extended some of these ideas to more general concepts of feature relevance and explainability. Bach et al. (2015) proposed layer-wise relevance propagation (LRP), a framework for determining the relevance of input features in the determination of the network output. On an observation-by-observation basis, the output is propagated backward through the network according to a set of propagation rules that incorporate information from the weights and can be tailored to the network architecture and structure of the data. Ribeiro et al. (2016) introduced local interpretable model-agnostic explanations (LIME), a technique for explaining the predictions of any classifier or regressor, including neural networks, by learning an interpretable model locally around the prediction. DeepLIFT, an algorithm for assigning contribution scores to network inputs based on a difference-from-reference approach, was pre
sented in Shrikumar et al. (2017). Lundberg and Lee (2017) unified these concepts in their 2017 paper and introduced Shapley additive explanation (SHAP) values, which quantify the contribution of each input feature for a particular prediction. Sundararajan et al. (2017) proposed integrated gradients as a measure of feature relevance. Zhang et al. (2021) provides a useful survey of these and other methods. Several of these methods use some form of network gradients, however, their focus is on constructing measures of feature importance or interpretability rather than conducting formal tests of statistical significance.
A second category of methods aims to design network architectures that enable interpretability. Potts (1999) proposed a generalized additive neural network (GANN), which fits a separate neural network with a single hidden layer for each input variable and combines the individual outputs. Leveraging advances in deep learning from the past decades, Agarwal et al. (2021) developed neural additive models (NAM), which replace the smooth functions in generalized additive models (GAM) with deep neural networks. Wojtas and Chen (2020) introduced a dual-net architecture for population-level feature importance that simultaneously finds an optimal feature set that maximizes model performance and ranks the importance of the features in the subset. A selector network learns the optimal feature subset and ranks feature importance while an operator network makes predictions based on the optimal subset.
Developments in a third category of methods, significance testing of network inputs, have been more limited. Olden and Jackson (2002) designed a randomization test for network weights that can be combined with the connection weights metric introduced by Garson (1991) to test for statistical significance of input features. Horel and Giesecke (2020) developed a test for the significance of input features in a single-layer feed-forward neural network. They proposed a gradient-based test statistic that is a weighted average of squared partial derivatives and studied its large-sample asymptotic behavior. Racine (1997) addressed the issue of significance testing for nonparametric regression and devised a test based on partial derivatives with estimation of the null distribution via resampling methods. However, the test was designed for kernel regression rather than neural networks. Each of these methods focuses on testing whether associations exist between network inputs and outputs. Since neural networks can flexibly model complex nonlinear associations, it is of interest to extend the significance testing framework and study the nature of associations between inputs and outputs.
Hypothesis testing for neural networks offers several advantages over other interpretability methods. Many feature importance methods only provide local explanations of network behavior. Focusing on explainability at the individual prediction level can potentially obscure important information at the global level. When assessing the associations between network inputs and outputs, it is more desirable to take a global approach and account for the overall behavior of the network. Additionally, a feature importance ranking is only interpretable relative to the other network inputs. On the other hand, hypothesis testing provides a clear and objective interpretation of the significance of the inputs in predicting the network output. Methods that modify the network architecture to increase explainability can be powerful tools. However, the network architecture must still be compatible with the structure of the data and the type of interpretation desired, potentially limiting their usability in some settings. In contrast, the significance testing framework can be flexibly applied to general architectures.
Here we propose two hypothesis testing frameworks for evaluating the association between network inputs and outputs. The first test determines whether an input is nonlinearly associated with an output, and the second test evaluates the statistical significance of any type of association between each input feature and an output. In both tests, we construct a gradient-based test statistic and use permutation methods to estimate the null distribution. We use simulation studies to demonstrate the performance of our test under various types of data and compare to competing methods in the literature. Additionally, we apply our proposed tests to evaluate feature associations in pediatric concussion data and to test genetic links to Parkinson's disease.
The rest of the paper is organized as follows. In Section 2, we introduce the hypotheses, test statistics, and testing procedures. Section 3 evaluates the performance of our test relative to competing methods in simulation studies. In Section 4, we apply our tests in two settings: pediatric concussion data and genomic data from Parkinson's patients. We conclude with discussions in Section 5.
## 2 Methodology
For notational simplicity, we henceforth assume a one-layer feed-forward neural network, but the approach is general and can be easily extended to more complex architectures. Consider the \(i\)th of \(n\) observations with univariate outcome \(y_{i}\) and vector \(\mathbf{x}_{i}=(x_{i1},...,x_{ip})^{T}\) of predictors. Let \(\mathbf{X}\) be the \(n\) by \(p\) matrix of predictors and \(\mathbf{y}\) be the vector of length \(n\) of outcomes. Suppose a neural network with \(p\) input features, a single hidden layer with \(k\) nodes and nonlinear, differentiable activation function \(g_{1}\), a univariate output \(\mu_{i}\), and final-layer activation function \(g_{0}\) has been trained on \((\mathbf{x}_{i},y_{i}),i=1,...,n\). Of interest is the association between an input feature \(\mathbf{X}_{j}=(x_{1j},...,x_{nj})^{T}\) and outcome \(\mathbf{y}\). It is relevant to consider the partial derivative of the network output with respect to \(\mathbf{X}_{j}\). For a single-hidden-layer network
with a univariate output, the partial derivative is
\[\begin{split}\frac{\partial\mu_{i}}{\partial x_{ij}}=& g_{0}^{\prime}\left\{\mathbf{\omega}^{(0)}\mathbf{\alpha}_{i}^{(1)}+\delta^{(0)} \right\}\\ &\cdot\mathbf{\omega}^{(0)}\left[g_{1}^{\prime}\left\{\mathbf{\omega}^{(1 )}\mathbf{x}_{i}+\mathbf{\delta}^{(1)}\right\}\odot\mathbf{\omega}_{j}^{(1)}\right], \end{split} \tag{1}\]
where \(\mathbf{\omega}^{(0)}\) are the final layer weights, \(\mathbf{\omega}^{(1)}\) are the hidden layer weights, \(\delta^{(0)}\) is the final layer bias, \(\mathbf{\delta}^{(1)}\) are the hidden layer biases, \(\mathbf{\omega}_{j}^{(1)}\) is the vector \((\omega_{j1}^{(1)}\omega_{j2}^{(1)}...\omega_{jk}^{(1)})^{T}\), and \(\odot\) is the Hadamard product. It is natural to assume that if the partial derivative in (1) is equal to 0 for all \(\mathbf{x}_{i}\in\mathbf{X}\), then \(\mathbf{X}_{j}\) is not associated with \(\mathbf{y}\). Similarly, if the partial derivative is equal to a constant \(c\) for all \(\mathbf{x}_{i}\in\mathbf{X}\), then \(\mathbf{X}_{j}\) is linearly associated with \(\mathbf{y}\). This provides our motivation for basing our test statistic on this partial derivative. However, the partial derivative varies over the domain of \(\mathbf{X}\), so we must account for the wide range of values the test statistic can take. Furthermore, the asymptotic distribution of the partial derivative function is not easily derived. Instead, we rely on resampling techniques to estimate the null distribution of the test statistic. We outline the procedures for two tests: a test for nonlinear association between \(\mathbf{X}_{j}\) and \(\mathbf{y}\) and a test for general association between \(\mathbf{X}_{j}\) and \(\mathbf{y}\).
### Test for nonlinear association
Suppose a network has been trained as described above. Of interest is whether a nonlinear association exists between input feature \(\mathbf{X}_{j}\) and outcome \(\mathbf{y}\). We can state the null hypothesis that \(\mathbf{X}_{j}\) is linearly associated with \(\mathbf{y}\) in terms of the partial derivatives:
\[H_{0}:\frac{\partial\mu_{i}}{\partial x_{ij}}=c\quad\forall\mathbf{x}_{i}\in\mathbf{X}\]
\[H_{A}:\frac{\partial\mu_{i}}{\partial x_{ij}}\neq c\quad\text{for some }\mathbf{x}_{i}\in\mathbf{X}\]
for some constant \(c\). We propose the following testing procedure. We first calculate the partial derivative in (1) for every observation in the data. Under \(H_{0}\), the partial derivatives should be fairly constant across the domain of \(\mathbf{X}\). To evaluate whether the partial derivatives are sufficiently close to \(c\), we calculate the residuals of the observed partial derivatives from their mean. We then fit a smooth function to the \(n\) residuals and let the test statistic be the mean of the squared coefficients from the smooth function. To obtain a null distribution of the test statistic, we use networks trained on permutations of the observed data. We generate permutations of the data by permuting the model residuals of a GAM fit to \((\mathbf{X},\mathbf{y})\), where \(\mathbf{X}_{j}\) is restricted to a linear term in the model. In general, the test is robust to the specification of the smooth terms for the other \(p-1\) variables. However, enough flexibility should be provided to reasonably capture the contribution of each input to the outcome variable. The permuted data consists of the observed predictors and a permuted outcome vector that is the sum of the fitted values from the estimated GAM and a permutation of the vector of model residuals. Permuting the data in this way forces the association between \(\mathbf{X}_{j}\) and \(\mathbf{y}\) to be linear while preserving any potential nonlinearity between the outcome and the other \(p-1\) predictors. We then train a network on the permuted data and calculate the partial derivatives at every observation. We again fit a smooth function to the residuals of the partial derivatives and calculate the corresponding test statistic to be the mean of the squared coefficients. Under the null, the residuals of the partial derivatives will be randomly scattered around 0, so it should be the case that the smooth function is approximately 0 and therefore the test statistic is close to 0. Under the alternative, there will be a systematic pattern in the residuals, so the smooth function will be nonzero and the test statistic will be larger than 0. The p-value is then the proportion of the test statistics calculated under the null that are larger than the observed test statistic. For additional detail, see Appendix A.
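The procedure can be condensed into the following sketch. Here `fit_network` and `smooth_stat` are hypothetical helpers: the former trains a network on the given data, and the latter fits a smooth function to the residuals of the partial derivatives for feature \(j\) and returns the mean of its squared coefficients. The GAM is fit with the pygam package, with \(\mathbf{X}_{j}\) restricted to a linear term.

```python
import numpy as np
from pygam import LinearGAM, l, s

def nonlinearity_test(X, y, j, fit_network, smooth_stat, n_perm=500):
    """Permutation p-value for H0: X_j enters the outcome linearly."""
    terms = l(j)                         # X_j restricted to a linear term
    for k in range(X.shape[1]):
        if k != j:
            terms += s(k)                # smooth terms for the other inputs
    gam = LinearGAM(terms).fit(X, y)
    fitted = gam.predict(X)
    resid = y - fitted

    t_obs = smooth_stat(fit_network(X, y), X, j)
    t_null = np.empty(n_perm)
    for b in range(n_perm):
        y_b = fitted + np.random.permutation(resid)  # linear-in-X_j null data
        t_null[b] = smooth_stat(fit_network(X, y_b), X, j)
    return (t_null >= t_obs).mean()
```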
The test relies on the assumption that the neural networks are well-fitted to the data. Poorly trained networks may not accurately capture the true associations between predictors and the output, impacting the performance of the test. The test is fairly robust to the degree of smoothing; the estimated smooth functions should capture important patterns in the residuals without overfitting. Additionally, there are some implicit assumptions that arise from fitting a GAM in the permutation step of the test. First, GAMs are limited to modeling smooth effects, so any nonsmooth associations between predictors and the outcome may hurt the model fit and therefore affect the values of the permuted outcome vector. In practice, this may not have a large effect on test performance if the smooth estimate of the true nonsmooth association is reasonable. Second, unless explicitly specified in the model, GAMs cannot capture interactions like a neural network can. If there is knowledge or evidence of interaction effects, these can be included in the GAM used to permute the data. However, these should be restricted to predictors that are not being tested so the interpretation of the predictor of interest is not affected.
### Test for association
Since the null hypothesis of the test for nonlinearity includes the possibility of no association between the input feature and the output of the network, it is of interest to test whether any type of association exists between \(\mathbf{X}_{j}\) and \(\mathbf{y}\). Specifically, we wish to test the following hypotheses:
\[H_{0}:\mathbf{X}_{j}\text{ is not associated with }\mathbf{y}\]
\[H_{A}:\mathbf{X}_{j}\text{ is associated with }\mathbf{y}.\]
Alternatively, we can state the hypotheses in terms of the partial derivatives:
\[H_{0}:\frac{\partial\mu_{i}}{\partial x_{ij}}=0\quad\forall\mathbf{x}_{i}\in\mathbf{X}\]
\[H_{A}:\frac{\partial\mu_{i}}{\partial x_{ij}}\neq 0\quad\text{for some }\mathbf{x}_{i}\in\mathbf{X}.\]
A resampling procedure similar to the nonlinearity test is used to test for an association. For a neural network trained on \((\mathbf{X},\mathbf{y})\), we calculate the partial derivative in (1) for every observation in the data and let the observed test statistic \(T\) be the mean of the squared partial derivatives. Under \(H_{0}\), the partial derivatives should be approximately 0 across the domain of \(\mathbf{X}\). To obtain a null distribution of \(T\), we use network fits based on permutations of the original data. We permute the vector of observed values of \(\mathbf{X}_{j}\) such that all columns of the new predictor matrix are identical to the original predictor matrix \(\mathbf{X}\) except the \(j\)th. By permuting the values of \(\mathbf{X}_{j}\), any potential association between the \(j\)th input and the output is erased, reflecting \(H_{0}\). We then train a network on the permuted data, calculate the partial derivatives at every observation, and compute the test statistic as the mean of the squared partial derivatives. We expect the partial derivatives to be close to 0 under the null, so the test statistic will be close to 0. Under the alternative, the partial derivatives should be nonzero, so the test statistic will be larger than 0. The p-value is then the proportion of the test statistics calculated under the null that are larger than the observed test statistic. We list the steps of the test in detail in Appendix A.
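A condensed sketch of this procedure follows; `fit_network` (a training routine returning a fitted model) and `input_gradients` (as sketched in Section 2) are hypothetical helpers standing in for the full pipeline.

```python
import numpy as np

def association_test(X, y, j, fit_network, input_gradients, n_perm=500):
    """Permutation p-value for H0: X_j is not associated with y."""
    t_obs = np.mean(input_gradients(fit_network(X, y), X)[:, j] ** 2)
    t_null = np.empty(n_perm)
    for b in range(n_perm):
        X_b = X.copy()
        X_b[:, j] = np.random.permutation(X_b[:, j])  # erase any X_j-y link
        net_b = fit_network(X_b, y)
        t_null[b] = np.mean(input_gradients(net_b, X_b)[:, j] ** 2)
    return (t_null >= t_obs).mean()
```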
It is important to note that the joint distribution of the predictors is broken by permuting \(\mathbf{X}_{j}\). Thus, a key drawback of this approach is the implicit assumption of independence among the predictors. At a minimum, the test assumes that the predictor of interest \(\mathbf{X}_{j}\) is not correlated with the other predictors, though the other \(p-1\) predictors can be correlated with one another in an arbitrary pattern. The sensitivity of test performance to correlated predictors is explored empirically in Section 3. Additionally, as with the test for nonlinearity, the neural network must be well-fitted to the data. If the network has not been trained well, the partial derivatives may not represent the true nature of the relationship between the predictors and outputs, and consequently the test may not perform well.
### Suggested usage of tests
To fully characterize the association between \(\mathbf{X}_{j}\) and \(\mathbf{y}\), both the nonlinearity and association tests may be needed. If the test for nonlinearity is implemented first and there is evidence of nonlinearity, then no further testing is needed. However, if the test suggests there is a linear relationship, then the association test must be used to determine whether an association exists at all. Two unassociated variables trivially satisfy the linear relationship \(\mathbf{y}=c\mathbf{X}_{j}\) with \(c=0\). Therefore, the nonlinearity test cannot be used alone to determine a nonzero linear association. Alternatively, if the association test is implemented first and the test suggests there is no association, then no further testing is required. If the test finds evidence of an association, the test for nonlinearity can then be used to determine the nature of that association. To implement the tests in practice, a network can be fit on the original observed data and used for both tests. Then the permutation of the data and retraining of the network can be run separately for each test.
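A compact sketch of this decision logic, taking as inputs the p-values from the two permutation tests described above (the threshold \(\alpha=0.05\) is illustrative):

```python
def characterize_association(p_association, p_nonlinearity, alpha=0.05):
    """Combine the two permutation tests as suggested above: first establish
    that an association exists, then characterize its form."""
    if p_association > alpha:
        return "no evidence of association"
    return "nonlinear association" if p_nonlinearity <= alpha else "linear association"
```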
## 3 Simulation Studies
We evaluate the performance of our proposed tests through several simulation studies. Where applicable, we include comparisons to competing methods.
### Power and Type-I error of nonlinearity test
We estimate the power and Type-I error of the proposed test for nonlinearity through simulation. Let \(i\) denote the observation. For \(i=1,...,500\), we generate five continuous independent variables \(\mathbf{x}_{i}=(x_{i1},x_{i2},x_{i3},x_{i4},x_{i5})^{T}\) from a standard normal distribution. A univariate continuous outcome is generated by the model \(y_{i}=-\beta x_{i1}+\beta x_{i2}^{2}-\beta x_{i3}^{3}+\beta\sin(2x_{i4})-\beta |x_{i5}|+\epsilon_{i}\), where \(\beta=0.2\) and \(\epsilon_{i}\sim N(0,0.2)\), and a test of nonlinearity with 500 permutations is conducted for each of the five predictors. We fit a neural network with one hidden layer with 40 nodes and a sigmoid activation function. The network is trained for 150 epochs using stochastic gradient descent, \(L_{2}\) regularization, and a decreasing learning rate at every epoch. The number of hidden nodes, the initial learning rate, and the regularization parameter are chosen to minimize loss on a validation set. Power and Type-I error are estimated across 300 simulations.
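A sketch of this data-generating mechanism (treating the second argument of \(N(0,0.2)\) as the noise scale, which is an assumption) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 500, 0.2
X = rng.standard_normal((n, 5))              # five independent N(0,1) predictors
eps = rng.normal(0.0, 0.2, size=n)           # noise, scale as written in the text
y = (-beta * X[:, 0] + beta * X[:, 1] ** 2 - beta * X[:, 2] ** 3
     + beta * np.sin(2 * X[:, 3]) - beta * np.abs(X[:, 4]) + eps)
```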
Power and Type-I error of the nonlinearity test are presented in Table 1. The estimated Type-I error rate is 0.05. The test has high power to detect a variety of nonlinear effects. The nonlinearity of the quadratic term (\(X_{2}\)) is easily detected by the test, with an estimated power of 1. Although the association between the cubic term (\(X_{3}\)) and the outcome could be reasonably approximated by a linear function, the test maintains high power in detecting the true nonlinearity with power equal to 1. The power of the test is slightly lower for the trigonometric (\(X_{4}\)) term, due to the complexity of modeling a periodic function. The test has high power even in the nonsmooth setting, with an estimated power of 0.94 for \(X_{5}\). Overall, the test for nonlinearity performs well under a variety of alternatives, even when the association can be well approximated by a linear function.
### Power and Type-I error of association test
We compare the performance of the proposed association test for neural networks to association tests from two traditional interpretable models: linear models and GAMs. The properties of standard \(t\)-tests from a linear regression model are well-established; however, these only hold under the assumption of linear associations between predictors and outcomes. GAMs can flexibly model many nonlinear trends, and testing for associations between predictors and outcomes is straightforward using the p-values for smooth terms outlined in Wood (2013). However, GAMs are limited to smooth effects (Hastie & Tibshirani, 2017).
As discussed in the introduction, NAMs have been introduced as a way to combine the predictive capability of deep neural networks with the interpretability of GAMs (Agarwal et al., 2021). The model's explainability is due to the ability to easily visualize how it computes a prediction. Since the impact of a feature on the predicted output does not rely on the other network inputs, each feature can be assessed individually by plotting its estimated shape function. While the architecture of NAMs makes them far easier to interpret than standard deep neural networks, explainability is still based on subjective interpretation of a graph. Therefore, we view NAMs not as a competing method, but as a model to which our proposed tests could be applied, and as such, we do not include them in our simulation studies.
To compare the performance of testing for association in neural networks, GAMs, and linear models, power and Type-I error under various settings are estimated through simulation. Three data generation mechanisms are considered: linear, smooth nonlinear, and nonsmooth nonlinear. Let \(i\) denote the observation. For each setting, four continuous independent variables \(\mathbf{x}_{i}=(x_{i1},x_{i2},x_{i3},x_{i4})^{T}\) are generated from a standard normal distribution. In the linear setting, a univariate continuous outcome is generated from the linear model \(y_{i}=\mathbf{x}_{i}^{T}\mathbf{\beta}+\epsilon_{i}\), where under the null, \(\beta_{j}\sim N(0.3,0.01^{2}),\ j=1,2,3\) and \(\beta_{4}=0\), and under the alternative, \(\beta_{j}\sim N(m,0.01^{2}),\ j=1,2,3,4\). The mean \(m\) takes values \(\{0.24,0.27,0.30,0.33,0.36\}\). In the smooth nonlinear setting, a univariate continuous outcome is generated from the model \(y_{i}=\beta_{1}x_{i1}^{3}+\beta_{2}\cos(x_{i2})+\beta_{3}\tanh(x_{i3})+\beta_{4}\sin(3x_{i4})+\epsilon_{i}\), where under the null, \(\beta_{j}\sim N(0.3,0.01^{2}),\ j=1,2,3\) and \(\beta_{4}=0\), and under the alternative, \(\beta_{j}\sim N(m,0.01^{2}),\ j=1,2,3,4\). The mean \(m\) takes values \(\{0.24,0.27,0.30,0.33,0.36\}\). In the nonsmooth nonlinear setting, \(\mathbf{x}_{i}\) defines a vector \(\mathbf{z}_{i}=(z_{i1},z_{i2},z_{i3},z_{i4})^{T}\), where
\[z_{i1}=\begin{cases}x_{i1}x_{i2}&x_{i1}x_{i2}<0\\ 0&x_{i1}x_{i2}\geq 0\end{cases}\qquad z_{i2}=\begin{cases}x_{i2}x_{i3}&x_{i2}x_{i3}>0\\ 0&x_{i2}x_{i3}\leq 0\end{cases}\]
\[z_{i3}=\begin{cases}x_{i3}x_{i4}&x_{i3}x_{i4}<0\\ 0&x_{i3}x_{i4}\geq 0\end{cases}\qquad z_{i4}=\begin{cases}x_{i4}x_{i1}&x_{i4}x_{i1}>0\\ 0&x_{i4}x_{i1}\leq 0.\end{cases}\]
A univariate continuous outcome is generated from \(y_{i}=\mathbf{z}_{i}^{T}\mathbf{\beta}+\epsilon_{i}\), where under the null, \(\beta_{j}\sim N(0.18,0.01^{2}),\ j=1,2,3\) and \(\beta_{4}=0\), and under the alternative, \(\beta_{j}\sim N(m,0.01^{2}),\ j=1,2,3,4\). The mean \(m\) takes values \(\{0.12,0.24,0.36,0.48,0.60\}\). In all settings, \(\epsilon_{i}\sim N(0,0.3)\). We generate 500 observations for each setting.
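The nonsmooth features are straightforward to construct; a sketch in NumPy:

```python
import numpy as np

def nonsmooth_features(X):
    """Construct z_i1,...,z_i4 from x_i1,...,x_i4 as defined above.
    X is an (n, 4) array of the standard normal predictors."""
    x1, x2, x3, x4 = X.T
    z1 = np.where(x1 * x2 < 0, x1 * x2, 0.0)
    z2 = np.where(x2 * x3 > 0, x2 * x3, 0.0)
    z3 = np.where(x3 * x4 < 0, x3 * x4, 0.0)
    z4 = np.where(x4 * x1 > 0, x4 * x1, 0.0)
    return np.column_stack([z1, z2, z3, z4])
```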
We perform a test of association between the predictor \(\mathbf{X}_{4}\) and the outcome \(\mathbf{y}\) in each setting. For the linear model (LM), a standard linear regression is fit on all four predictors, and the p-value from a \(t\)-test for \(\beta_{4}=0\) is used. GAMs are fit with a separate smooth term with a 10-dimensional cubic regression spline basis for each predictor, and p-values for significance of the smooth term for \(\mathbf{X}_{4}\) are calculated as described in Wood (2013). We fit neural networks with one (NN-1) and two hidden layers (NN-2) and sigmoid activations. We train using stochastic gradient descent, \(L_{2}\) regularization, and a decaying learning rate. The number of nodes, the initial learning rate, and the regularization parameter are chosen to minimize loss on a validation set. The proposed test for association is conducted with 500 permutations. Power and Type-I error are estimated across 500 simulations.
Table 2 contains the estimated Type-I error rates, and Figure 1 shows the power curves for the three data generation models and four testing methods. Due to the conservative nature of the p-values for smooth terms in a GAM, the significance threshold was adjusted to 0.035 for these models to allow fair comparison among the methods. Type-I error of the permutation test is accurate across all settings, though slightly conservative in the smooth and nonsmooth settings. In the linear setting, all methods perform similarly, an expected result given that linear models, GAMs, and neural networks can all estimate linear effects. LM and GAM slightly outperform the permutation test under low signal, likely due to a loss of efficiency from fitting a neural network with a large parameter space for a simple linear association. In the smooth setting, power is significantly lower for LM since it is limited to modeling linear effects, while power remains high for both GAM and NN. In the nonsmooth setting, our proposed test significantly outperforms the competing methods. This is expected as linear models and GAMs cannot adequately model nonsmooth nonlinear associations. In contrast, these complex relationships are easily learned by neural networks, and therefore the proposed permutation test can accurately detect the presence of associations between the predictors and outcome. At small signal levels, the added complexity of a second hidden layer in NN-2 results in a slight improvement in power compared to NN-1.

\begin{table}
\begin{tabular}{l l r}
\hline \hline
Variable & Association & Pr(Reject \(H_{0}\)) \\
\hline
\(X_{1}\) & Linear & 0.05 \\
\(X_{2}\) & Quadratic & 1.00 \\
\(X_{3}\) & Cubic & 1.00 \\
\(X_{4}\) & Trigonometric & 0.86 \\
\(X_{5}\) & Nonsmooth & 0.94 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Power and Type-I error of the neural network permutation test for nonlinearity. The probability of rejecting \(H_{0}\) is calculated at the 5% level for five types of associations: linear, quadratic, cubic, trigonometric, and nonsmooth.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & Linear & Smooth & Nonsmooth \\
\hline
NN-1 & 0.030 & 0.026 & 0.028 \\
NN-2 & 0.056 & 0.024 & 0.026 \\
LM & 0.056 & 0.050 & 0.048 \\
GAM & 0.048 & 0.038 & 0.050 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Type-I error of tests for association using neural networks (NN-1 and NN-2), linear models (LM), and generalized additive models (GAM). Data is generated from three models (linear, smooth nonlinear, and nonsmooth nonlinear). Type-I error is the rate of rejecting \(H_{0}\) (based on \(p\leq 0.05\) for NN-1, NN-2, and LM, \(p\leq 0.035\) for GAM) from 500 simulations.

Figure 1: Power curves of tests for association using neural networks (NN-1 and NN-2), linear models (LM), and generalized additive models (GAM). Data is generated from three models (linear, smooth nonlinear, and nonsmooth nonlinear) and five signal levels. Each point is the rate of rejecting \(H_{0}\) (based on \(p\leq 0.05\) for NN-1, NN-2, and LM, \(p\leq 0.035\) for GAM) from 500 simulations.
In real data settings, the predictors are likely to be correlated. We assess the degree to which correlation among the network inputs impacts the Type-I error of the association test. We draw 500 observations of \(\mathbf{x}_{i}=(x_{i1},x_{i2},x_{i3},x_{i4},x_{i5},x_{i6},x_{i7},x_{i8})^{T}\) from a multivariate normal distribution with correlation \(\mathbf{\Sigma}\). We consider three settings for \(\mathbf{\Sigma}\): independence, low correlation, and high correlation. We choose \(\mathbf{\Sigma}\) to reflect real data structures by selecting the low and high correlation settings from the empirical correlation matrix of a subset of features from the Children's Hospital of Philadelphia pediatric concussion data, described in Section 4. The low correlation matrix is estimated from eight elements of the Sport Concussion Assessment Tool and has a mean magnitude of correlation of 0.13, and the high correlation matrix is estimated from eight elements of the Post-Concussion Symptom Inventory and has a mean magnitude of correlation of 0.60. For each of the 500 observations of \(\mathbf{x}_{i}\), we generate \(y_{i}\) from the model \(y_{i}=\beta x_{i2}^{2}+\beta\cos(x_{i3})+\beta\sin(2x_{i4})+\beta x_{i5}+\beta x_{i6}+\beta x_{i7}+\beta x_{i8}+\epsilon_{i}\), where \(\beta\sim N(0.1,0.01^{2})\) and \(\epsilon_{i}\sim N(0,0.1)\). We conduct an association test for \(\mathbf{X}_{1}\) with 500 permutations. A one-layer network with 30 nodes and sigmoid activation is trained for 150 epochs using stochastic gradient descent, \(L_{2}\) regularization, and a decaying learning rate. The number of nodes, the initial learning rate, and the regularization parameter are chosen to minimize loss on a validation set. Type-I error is estimated across 300 simulations.
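Correlated predictors of this kind can be drawn by transforming independent standard normals with a Cholesky factor of the target correlation matrix; a sketch, with `Sigma` standing in for the empirical matrices estimated from the concussion data:

```python
import numpy as np

def correlated_predictors(n, Sigma, seed=0):
    """Draw n observations with correlation matrix Sigma (assumed positive
    definite) by multiplying i.i.d. standard normals by a Cholesky factor."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    return rng.standard_normal((n, Sigma.shape[0])) @ L.T
```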
With independent predictors, the estimated Type-I error rate is 0.05. However, this rate increases to 0.09 under low correlation and to 0.32 under high correlation. The diminished performance of the test under correlated settings is a direct result of breaking the joint distribution of the predictors in the permutation step of the testing procedure. The increase in Type-I error is moderate under low correlation but very large under high correlation. These results suggest the proposed test is best implemented when there is a reasonable assumption of near-independence among the predictors.
## 4 Data Applications
We apply the permutation tests in two settings. Our first application uses pediatric concussion data from the Center for Injury Research and Prevention at the Children's Hospital of Philadelphia (CHOP) to assess associations between clinical and device-based diagnostic measures (Corwin et al., 2021). Our second application uses genomic data from the Accelerating Medicines Partnership Parkinson's Disease (AMP PD) project (2019 v1 release) to test for associations of genes linked to Parkinson's with disease status.
### Testing associations among concussion diagnostics
Teenage participants from a suburban school were enrolled in a large, prospective observational cohort study assessing various diagnostic measures of concussion. Concussed subjects sustained sport-related injuries; non-concussed subjects completed testing as part of an assessment for a scholastic sport season. The Sport Concussion Assessment Tool, \(5^{th}\) Edition (SCAT-5) is a concussion assessment battery that measures symptom burden, memory, and concentration (Echemendia et al., 2017). We consider three variables from SCAT-5: the symptom score (0-22) and symptom severity score (0-132), both derived from assessing 22 symptoms on a 7-point scale, and delayed word memory, where subjects repeat a list of five words after five minutes elapse. The Post-Concussion Symptom Inventory (PCSI) is a self-report questionnaire of symptoms rated on a 7-point scale (Sady et al., 2014). We consider two individual elements of the PCSI (headache and dizziness) and two combined scores (emotional symptom score and total symptom score). Total symptom score is the sum of each individual symptom, including headache and dizziness. Pupillary light reflex (PLR) metrics can potentially assess visual dysfunction following concussion (Master et al., 2020). We consider two PLR metrics: average pupil constriction velocity (ACV) and time for pupil redilation to 75% of maximum diameter (T75).
We analyze a data set of 544 observations including cases and controls. We fit two separate one-layer networks with sigmoid activations and \(L_{2}\) regularization. The first network (NN-SCAT) predicts SCAT-5 symptom score with four inputs: SCAT-5 symptom severity score, SCAT-5 delayed word memory, PCSI emotional score, and age. The second network (NN-PCSI) predicts PCSI total score with four inputs: PCSI headache, PCSI dizziness, PLR ACV, and PLR T75. We employ stochastic gradient descent with an initial learning rate of 0.005 for NN-SCAT and 0.01 for NN-PCSI; the learning rates decay by 1.5% at each epoch. The number of nodes and the regularization parameter are chosen to minimize validation loss. Both networks have 20 nodes and a regularization parameter of 0.03. The networks train for 175 epochs.
Tests for nonlinearity and association were conducted for each input in each network. The p-values are reported in Table 3. In NN-SCAT, the tests suggest that SCAT-5 symptom severity is nonlinearly associated with SCAT-5 symptom score [nonlinearity: \(p<0.001\); association: \(p<0.001\)]. Symptom score and symptom severity score are highly related measures as they are calculated from assessing the same 22 symptoms. However, the number of symptoms and the corresponding severity do not increase at the same rate. This nonlinear relationship is clearly visible in Figure 2(a). Additionally, the tests suggest that PCSI emotional score is associated with symptom score [\(p<0.001\)], but there is no evidence that the association is nonlinear [\(p=0.912\)]. This result makes sense given that both metrics measure the presence of symptoms. The other two inputs, SCAT-5 delayed memory and age, do not show evidence of association with symptom score, an expected result given the even spread of values in Figure 2(b) and (d). In NN-PCSI, the tests indicate that both headache and dizziness are strongly associated with total score [headache: \(p<0.001\); dizziness: \(p<0.001\)], but neither association is nonlinear [headache: \(p=0.642\), dizziness: \(p=0.158\)]. The tests confirm known linear associations; headache and dizziness are two of the elements summed to calculate total score. The linear associations are evident in Figure 3(a) and (b). There is no evidence of an association between PCSI total score and either ACV [\(p=0.100\)] or T75 [\(p=0.442\)], an expected result given that PLR metrics measure a different dimension of concussion than symptoms, and Figure 3(c) and (d) show no distinguishable relationships between the metrics.
### Testing genetic associations with Parkinson's disease

Using whole genome sequencing (WGS) data from the AMP PD project, we consider three genes with studied links to PD: SNCA (Polymeropoulos et al., 1997), SPPL3 (Zhang and Wong, 2022), and PLXNA4 (Schulte et al., 2013). We include two genes linked to other conditions for comparison: HBB and CD4.
We analyze a set of 2890 subjects with WGS data; 1760 subjects have a PD diagnosis. For each gene, we calculate the mean minor allele count across all SNPs as a summary measure. We train a one-layer network with 20 nodes and sigmoid activation using the five genetic predictors as well as age and sex to predict PD case status. We employ stochastic gradient descent with an initial learning rate of 0.06, which decays by 1.5% at each epoch, and \(L_{2}\) regularization with \(\lambda=0.002\). The network trains for 125 epochs.
In addition to the neural network permutation test for association, we fit a linear model and a GAM to the data and conduct significance testing. The p-values are reported in Table 4. All three methods suggest SNCA and PLXNA4 are significantly associated with PD, while none of the tests have sufficient evidence to suggest that SPPL3 is linked to PD. As expected, none of the methods find a significant association between PD and the HBB and CD4 genes.
## 5 Discussion
In this article, we introduce a flexible permutation-based approach to hypothesis testing for neural networks. Our proposed tests utilize the partial derivative function of a network output with respect to specific inputs to evaluate associations between predictors and outcomes. There are several advantages to addressing the neural network explainability problem from the perspective of hypothesis testing. First, testing accounts for the overall network behavior rather than relying on local explanations at the individual prediction level. A global approach to network interpretation can better capture the nature of associations between predictors and outcomes. Second, feature importance methods provide relative rankings of predictors, but testing offers an objective interpretation of the significance of predictors analogous to inference conducted in classical statistical modeling like regression. Third, testing places no restrictions on network structure, allowing the data, rather than the need for explainability, to determine the optimal size and architecture.
Basing our hypothesis testing framework on the partial derivatives allows for the method to be flexibly applied to general network structures. In this paper, we have implemented the tests in small, feed-forward networks which are appropriate to the scale and complexity of our data. However, the tests can be employed in any architecture where the partial derivatives can be calculated. For complex architectures where analytically deriving the partial derivatives proves difficult, it may be possible to approximate them in order to conduct testing. Despite the flexibility it affords, the permutation-based approach is computationally intensive, which limits the practicality of implementing the proposed tests for very large or complex networks. In these settings, an asymptotic test based on the partial derivatives would be an excellent alternative. However, while deriving an analytic form of the distribution of the partial derivative function is feasible, verifying it empirically is not a straightforward task. Many of the existing theoretical results for neural network parameters require that the network reach an optimal global solution, a condition that, in practice, may be difficult to achieve or impossible to verify. Additionally, since the optimal network fit is achieved through numerical optimization rather than a closed-form solution, empirical estimates of the variance of the partial derivative must account for added variability due to training, especially when using methods such as stochastic gradient descent. Lastly, deriving an analytic form of the variance of the partial derivative requires the use of linear approximations to nonlinear functions. It is difficult to quantify the extent to which these approximations may bias the estimate or to study the conditions necessary to ensure a reasonable estimate. Together, these considerations make the development and implementation of an asymptotic test a rather challenging task. Empirical tests are therefore a useful alternative, especially for moderately sized networks.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & LM & GAM & NN \\ \hline SNCA & 0.001 & 0.003 & \(<\)0.001 \\ PLXNA4 & 0.014 & 0.013 & 0.010 \\ SPPL3 & 0.058 & 0.102 & 0.086 \\ HBB & 0.652 & 0.292 & 0.756 \\ CD4 & 0.919 & 0.873 & 0.410 \\ \hline \hline \end{tabular}
\end{table}
Table 4: P-values for linear model (LM), GAM, and neural network (NN) tests for association for genetic predictors of PD. Tests are conducted for each feature at the 5% level.
Figure 3: Density scatter plots of network output PCSI total score vs. inputs (a) PCSI headache, (b) PCSI dizziness, (c) PLR ACV, and (d) PLR T75. Dark blue indicates higher density of observations. |
2310.01148 | Cryptocurrency Portfolio Optimization by Neural Networks | Many cryptocurrency brokers nowadays offer a variety of derivative assets
that allow traders to perform hedging or speculation. This paper proposes an
effective algorithm based on neural networks to take advantage of these
investment products. The proposed algorithm constructs a portfolio that
contains a pair of negatively correlated assets. A deep neural network, which
outputs the allocation weight of each asset at a time interval, is trained to
maximize the Sharpe ratio. A novel loss term is proposed to regulate the
network's bias towards a specific asset, thus enforcing the network to learn an
allocation strategy that is close to a minimum variance strategy. Extensive
experiments were conducted using data collected from Binance spanning 19 months
to evaluate the effectiveness of our approach. The backtest results show that
the proposed algorithm can produce neural networks that are able to make
profits in different market situations. | Quoc Minh Nguyen, Dat Thanh Tran, Juho Kanniainen, Alexandros Iosifidis, Moncef Gabbouj | 2023-10-02T12:33:28Z | http://arxiv.org/abs/2310.01148v1 | # Cryptocurrency Portfolio Optimization by Neural Networks
###### Abstract
Many cryptocurrency brokers nowadays offer a variety of derivative assets that allow traders to perform hedging or speculation. This paper proposes an effective algorithm based on neural networks to take advantage of these investment products. The proposed algorithm constructs a portfolio that contains a pair of negatively correlated assets. A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio. A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy. Extensive experiments were conducted using data collected from Binance spanning 19 months to evaluate the effectiveness of our approach. The backtest results show that the proposed algorithm can produce neural networks that are able to make profits in different market situations.
Deep learning, portfolio optimization, financial engineering, cryptocurrency, decision making.
## I Introduction
Portfolio optimization is the process of distributing wealth over a universe of assets to satisfy specific criteria (e.g., maximize accumulated return or Sharpe ratio, or minimize volatility). One of the pioneering works in this field is the modern portfolio theory [1], which takes advantage of the diversification of the portfolio to reduce volatility, given the same level of portfolio returns. Recent research has incorporated machine learning into the model-based approach, designed Reinforcement Learning (RL) frameworks in which an agent learns how to trade from market feedback, or even trained end-to-end models to output the portfolio weights directly. Regardless of the approach, portfolio diversification still plays a vital role in the success of portfolio optimization algorithms. However, asset selection can be challenging since the universe of assets can be large and correlation properties between assets change over time.
In modern portfolio theory, the optimal weights are the solution to an optimization problem that uses the expected returns and covariance matrix as parameters. These two quantities are estimated from historical returns, which have a low signal-to-noise ratio, and this is likely to affect the accuracy of the estimations. Many attempts have been made to bypass the need for the estimation step. One common approach is to design an RL framework [2, 3, 4] that uses an agent to take actions based on market information (e.g., prices, returns, volumes) to maximize the expected accumulated reward. To adapt the RL framework to the portfolio optimization problem, the action can be the vector of allocation weights used to construct the portfolio, while the state can be created from the previous trading action, asset prices, and volumes. Another less-discussed but also promising approach is the end-to-end framework [5, 6, 7]. In this approach, the asset information forms the input to a neural network which outputs the allocation weights. This paper will focus on the latter approach.
Choosing the market from which to construct the portfolio is also an important factor. Cryptocurrency is a type of virtual currency that uses cryptography to make counterfeiting infeasible. The best-known example of cryptocurrencies is Bitcoin (BTC), which has a market capitalization of over 800 billion dollars\({}^{1}\), the highest among cryptocurrencies. In 2020, Binance, which is the largest cryptocurrency exchange in terms of the daily trading volume of cryptocurrencies, introduced Binance Leveraged Tokens (BLVTs), with the first two of them called BTCUP and BTCDOWN. These tokens are essentially tokenized versions of BTC futures positions allocated by Binance. This investment product lets traders perform hedging or speculate on BTC's future price movements (up or down) and achieve leveraged profits without liquidation risk.
Footnote 1: Data reference from [https://coinmarketcap.com](https://coinmarketcap.com) on April 8, 2022
This paper proposes to use an end-to-end machine learning framework to optimize the Sharpe ratio of a portfolio containing BTCUP and BTCDOWN. We estimate the price relation of the two assets and utilize this information to design a loss term that regulates the model's bias towards a specific asset. When combined with the Sharpe ratio, this loss term can enforce the model to make allocations close to a neutral position and profit from arbitrage opportunities in these assets. In the ideal situation, the portfolio at the neutral position is invariant with respect to changes in BTCUP and BTCDOWN prices, and this position is constant over time. However, due to Binance's proprietary allocation mechanism, which affects the leverage effect, and the trading activity of BLVTs on the market, this position changes over time. Hence, our portfolio value can go up or down, even when we initially bought a portfolio of BTCUP and BTCDOWN at a neutral position.
This work makes several contributions. First, we study how
to construct a portfolio from a pair of BLVTs. Second, we adopt an end-to-end machine learning framework to allocate a profitable BLVT portfolio in no-cost and cost-included settings. Finally, we make further improvements to the portfolio performance by exploiting the negative correlation characteristic of the BLVTs pair through a custom loss term. To the best of our knowledge, this is the first work that uses BLVTs for a portfolio optimization problem.
## II Related Work
Modern portfolio theory [1], or the so-called mean-variance framework, is widely used to benchmark other portfolio optimization strategies. The goal is to maximize returns at a given risk level. Its variant, the Global Minimum Variance portfolio (GMVP) [1], can be constructed by setting the objective function in the optimization step to the portfolio variance. This strategy is suitable for risk-averse investors whose focus is on reducing portfolio risk. Another common benchmark method is the capitalization-weighted portfolio [8] which is used to build market indices such as S&P500. In the capitalization-weighted approach, a large amount of money is distributed over a small set of high-capital assets, while the low-capital assets have little contribution to the portfolio performance. Hence, the diversification of the portfolio is decreased. This drawback is overcome by the Equal-weighted portfolio (EWP) [9], in which the total wealth is distributed equally to all assets. On the other hand, an equally-weighted risk contributions portfolio [10] implements the idea of having the portfolio components contribute equal risk. Authors in [10] state that this method offers a volatility level higher than the GMVP portfolio but lower than the EWP, allowing a trade-off between two approaches in terms of the absolute level of risk, risk budgeting, and diversification.
In the machine learning context, a natural approach for portfolio optimization is the RL framework since the problem involves interacting with a market via actions like asset allocation and received rewards such as portfolio returns. In the work of [2], the authors explored how the asset allocation problem can be addressed using the RL method. The experiments conducted with multiple trading agents showed superior results over some model-based portfolio optimization strategies. Deep Portfolio Management (DPM) [3] introduces many improvements to the RL framework in the portfolio optimization problem. The most important proposal in this work is to use a neural network architecture that enables sharing of network weights between different assets. This approach allows the network to assess an asset based on patterns learned from other assets. Since the network weights are not specialized to any asset, this method can update the portfolio's constituents or even the size of the portfolio. Going further toward a real-world trading system, Hierarchical Reinforced trading system for Portfolio Management (HRPM) [4] is an RL framework that decomposes the trading process into hierarchical tasks of portfolio optimization and allocation or trading. In this framework, each task is associated with a separate policy. The high-level policy manages the portfolio optimization task that determines the portfolio weights at a lower frequency, such as days, while the low-level policy operates at a higher frequency and places limit orders to fulfill goals from the high-level policy.
In the RL frameworks, the objective function is an expected cumulative reward, whereas popular risk-adjusted metrics like the Sharpe ratio [11] cannot be decomposed into immediate rewards. To be able to use the Sharpe ratio as the optimization objective, the authors in [5] proposed an end-to-end framework that directly outputs allocation weights from the asset prices and returns via a Long Short-Term Memory (LSTM). Risk-based allocation in [7] takes a similar approach. This method trains a fully connected neural network to output the allocation of risk contributions and solves for the weight allocation from those risk contributions using an implicit optimization layer embedded in the neural network. The framework proposed in [5] has the flexibility to be extended to incorporate different types of portfolio constraints such as cardinality, leverage, or maximum position for individual assets by utilizing several custom neural network layers. Overall, the end-to-end framework is simpler than the RL approach in terms of framework complexity, and it can be easier to design specialized components for a specific problem.
A special case of the portfolio optimization problem is the portfolio of Binance Leveraged Tokens considered in this paper. Every BLVT is associated with a price called Net Asset Value (NAV). At this price, the owner can redeem their tokens to Binance and receive USDT back. The NAV is updated based on the fluctuation in the value of the futures position basket that each BLVT represents. However, the BLVTs can also be traded in the spot market, where their prices fluctuate around the NAVs. These prices on the spot market also reflect speculation on the future price movements of the underlying assets in the futures market. The first available BLVTs are BTCUP and BTCDOWN. BTCUP aims to generate leveraged gains when the price of BTC goes up, while BTCDOWN aims to generate leveraged gains when the price of BTC goes down. From that property, every pair of UP and DOWN tokens in BLVTs has a consistent negative correlation over time. Figure 1 illustrates this correlation between BTCUP and BTCDOWN.

Fig. 1: Rolling correlation of the returns (top) and the prices (bottom) of BTCUP and BTCDOWN from May 2020 to July 2021. The correlation window length is \(72\) hours, and the timeframe between price samples is \(1\) hour.
## III Method
This section describes the proposed strategy to construct a portfolio that consists of two leveraged tokens that are negatively correlated. In addition, this section presents the formulation of a novel loss function that is used to optimize a neural network generating the portfolio reallocation weights. The proposed loss function consists of two terms. The former maximizes the Sharpe ratio, while the latter is used to regularize the network's bias toward specific assets.
### _Mathematical Formulation_
Given an initial amount of capital, an agent allocates all capital into two assets, denoted as asset \(A\) and asset \(B\). At each time interval, the agent adjusts the amounts of the two assets so that the portfolio contains only asset \(A\) and asset \(B\) (without any cash). The allocation of each asset in the portfolio is subject to the long-only and budget constraints. The goal is to design a reallocation strategy that maximizes some portfolio metric over time, for example, the Sharpe ratio. We start by constructing the mathematical formulation for a portfolio of \(N\) assets as in [3]. Later, we restrict \(N=2\) for our specific problem. A portfolio of \(N\) assets at time \(t-1\) is associated with a weight vector
\[\mathbf{w}_{t-1}=(w_{t-1,0},w_{t-1,1},\dots,w_{t-1,N-1}), \tag{1}\]
which is the relative value of each asset compared to the portfolio value. At any time \(t\), the long-only constraint is realized as
\[w_{t,i}\geq 0\quad\forall i\in\{0,1,\dots,N-1\}, \tag{2}\]
and the budget constraint can be written as
\[\sum_{i=0}^{N-1}w_{t,i}=1. \tag{3}\]
Let period \([t-1,t]\) denote the period between time \(t-1\) and time \(t\). At the beginning of period \([t-1,t]\), the asset price vector is denoted as
\[\mathbf{y}_{t-1}=(y_{t-1,0}\,,y_{t-1,1}\,,\dots,y_{t-1,N-1}), \tag{4}\]
and the volume vector for the whole period is
\[\mathbf{v}_{t-1}=(v_{t-1,0}\,,v_{t-1,1}\,,\dots,v_{t-1,N-1}). \tag{5}\]
The asset value vector at time \(t-1\), which denotes the absolute value (monetary value) of \(N\) assets, is
\[\mathbf{a}_{t-1}=\mathbf{y}_{t-1}\odot\mathbf{v}_{t-1} \tag{6}\]
where \(\odot\) is the element-wise multiplication operator. The portfolio value is \(p_{t-1}\), which is computed by summing over the absolute value of \(N\) assets
\[p_{t-1}=\mathbf{1}_{N}^{T}\,\mathbf{a}_{t-1}=\sum_{i=0}^{N-1}y_{t-1,i}\,v_{t-1,i}, \tag{7}\]
where \(\mathbf{1}_{N}\) is the one-vector of size \(N\).
At the end of period \([t-1,t]\), before we make any reallocation, the portfolio value is denoted as \(p_{t}^{\prime}\), and the asset price vector is \(\mathbf{y}_{t}\). Here we assume that we only reallocate the portfolio at the end of each period, and asset prices at the beginning of the incoming period are equal to the prices at the end of the current period. The asset returns vector in period \([t-1,t]\) is
\[\begin{split}\mathbf{r}_{t}&=((\mathbf{y}_{t}- \mathbf{y}_{t-1})\oslash\mathbf{y}_{t-1})\\ &=\left(\frac{y_{t,0}}{y_{t-1,0}}-1,\frac{y_{t,1}}{y_{t-1,1}}-1, \dots,\frac{y_{t,N-1}}{y_{t-1,N-1}}-1\right),\end{split} \tag{8}\]
where \(\oslash\) denotes the element-wise division operator. Right before making the reallocation in period \([t-1,t]\), the volume vector is determined by
\[\mathbf{v}_{t-1}=p_{t-1}\mathbf{w}_{t-1}\oslash\mathbf{y}_{t-1}, \tag{9}\]
and the asset value vector is
\[\mathbf{a}_{t}^{\prime}=\mathbf{y}_{t}\odot\mathbf{v}_{t-1}=p_{t-1}\mathbf{w }_{t-1}\odot(\mathbf{y}_{t}\oslash\mathbf{y}_{t-1})\,. \tag{10}\]
Following Eq. (8) and Eq. (10), the portfolio value at time \(t\), before making reallocation, is
\[p_{t}^{\prime}=\mathbf{1}_{N}^{T}\,\mathbf{a}_{t}^{\prime}=p_{t-1}\,\sum_{i=0}^{N-1}w_{t-1,i}(1+r_{t,i}). \tag{11}\]
The corresponding portfolio weight vector for this portfolio value is
\[\mathbf{w}_{t}^{\prime}=\frac{1}{p_{t}^{\prime}}(\mathbf{y}_{t}\odot\mathbf{ v}_{t-1}). \tag{12}\]
At the end of period \([t-1,t]\), the agent needs to decide the weight vector \(\mathbf{w}_{t}\) for the next period \([t,t+1]\) and make the reallocation accordingly. This reallocation is executed via trading on the market, which incurs trading fees. The fees reduce the portfolio value by a shrinkage parameter
\[\mu_{t}=\frac{p_{t}}{p_{t}^{\prime}}, \tag{13}\]
where \(0<\mu_{t}\leq 1\). If there is no trading fee, the portfolio value will remain unchanged after the reallocation, which means \(\mu_{t}\) is \(1\). Figure 2 summarizes the relationship between portfolio quantities in period \([t-1,t]\).
Fig. 2: Illustration of the effect of price change and allocation on the portfolio quantities. In period \([t-1,t]\), the price of assets in the portfolio changes from \(\mathbf{y}_{t-1}\) to \(\mathbf{y}_{t}\), which results in the return vector \(\mathbf{r}_{t}\). This price change adjusts \(p_{t-1}\), \(\mathbf{a}_{t-1}\), and \(\mathbf{w}_{t-1}\) to \(p_{t}^{\prime}\), \(\mathbf{a}_{t}^{\prime}\), and \(\mathbf{w}_{t}^{\prime}\). The price change does not affect the volume vector \(\mathbf{v}_{t-1}\). At the end of period \([t-1,t]\), the reallocation activity adjusts the previously mentioned quantities to \(p_{t}\), \(\mathbf{a}_{t}\), and \(\mathbf{w}_{t}\). By the effect of transaction fees and other fees, the portfolio value \(p_{t}^{\prime}\) shrinks to \(p_{t}\) by a factor of \(\mu_{t}\). After the reallocation, the volume vector takes a new value.
The portfolio return in period \([t-1,t]\) is computed by comparing the portfolio value at the start of the current and incoming periods
\[R_{t}=\frac{p_{t}}{p_{t-1}}-1=\frac{\mu_{t}p_{t}^{\prime}}{p_{t-1}}-1=\mu_{t} \left(\sum_{i=0}^{N-1}w_{t-1,i}(1+r_{t,i})\right)-1. \tag{14}\]
The portfolio returns of consecutive periods will be used to calculate the Sharpe ratio for training the neural network model. This return is an essential input for computing other portfolio performance metrics.
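As a concrete reference, a sketch of Eq. (14) for one period:

```python
import numpy as np

def portfolio_return(w_prev, r, mu=1.0):
    """One-period portfolio return R_t of Eq. (14): w_prev are the weights
    held at the start of the period, r the asset returns over the period,
    and mu the shrinkage factor (1.0 when trading is free)."""
    return mu * float(np.dot(w_prev, 1.0 + np.asarray(r))) - 1.0
```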
### _Transaction and Management Fee_
The transaction fee resulting from portfolio reallocation in the general case of \(N\) assets has no analytical formula [12], but it can be solved for iteratively [3]. In our portfolio optimization problem, there are only two assets, called \(A\) and \(B\), and cash is not allowed. When trading occurs, exactly one asset needs to be sold, and all the cash received afterward is used to buy the other asset. We can find the closed form of the shrinkage parameter using these portfolio properties.
At the end of period \([t-1,t]\), we need to adjust the weight vector from \(\mathbf{w}_{t}^{\prime}\) to \(\mathbf{w}_{t}\) by reallocation. If the new weight for asset \(A\) satisfies
\[w_{t,A}^{\prime}\,p_{t}^{\prime}>w_{t,A}\,p_{t}\Longleftrightarrow w_{t,A}^{ \prime}>\mu_{t}\,w_{t,A}, \tag{15}\]
then the allocation for this asset is decreased. In this case, we need to sell asset \(A\), and the money obtained is used to buy more of asset \(B\). It can be shown that when the portfolio contains only two assets, the selling condition above is equivalent to \(w_{t,A}^{\prime}>w_{t,A}\). Similarly, asset \(B\) needs to be bought if and only if \(w_{t,B}^{\prime}<w_{t,B}\).
We suppose that at the end of period \([t-1,t]\) the sold asset is asset \(A\) and the bought asset is asset \(B\). When we perform buying and selling, these actions incur transaction fees. The change in the portfolio value before and after reallocation is caused only by these transaction fees. Therefore, the total trading fee from reallocation is \(p_{t}^{\prime}-p_{t}\). This cost can be broken down into the costs associated with selling and buying. We assume the cost rates for selling and buying are equal, denoted as \(0\leq c<1\). The cost generated from selling asset \(A\) is
\[c\,(w_{t,A}^{\prime}\,p_{t}^{\prime}-w_{t,A}\,p_{t})=c\,p_{t}^{\prime}\,(w_{t, A}^{\prime}-w_{t,A}\,\mu_{t}). \tag{16}\]
The change in absolute value after reallocation of the bought asset \(B\) is
\[w_{t,B}\,p_{t}-w_{t,B}^{\prime}\,p_{t}^{\prime}=p_{t}^{\prime}\,(w_{t,B}\,\mu_ {t}-w_{t,B}^{\prime}). \tag{17}\]
However, to account for the trading fee, the buying order must be larger than this value change by a factor of \(1/(1-c)\). Then, the cost generated from buying asset \(B\) is
\[c\,\frac{p_{t}^{\prime}(w_{t,B}\,\mu_{t}-w_{t,B}^{\prime})}{1-c}. \tag{18}\]
The sum of the selling cost and the buying cost equals the difference between the portfolio values before and after reallocation
\[c\,p_{t}^{\prime}(w_{t,A}^{\prime}-w_{t,A}\,\mu_{t})+c\,\frac{p_{t}^{\prime}( w_{t,B}\,\mu_{t}-w_{t,B}^{\prime})}{1-c}=p_{t}^{\prime}-p_{t}. \tag{19}\]
Solving \(\mu_{t}\) from this equation, the shrinkage parameter of the portfolio for reallocation is
\[\mu_{t}=\frac{(1-c)+c\,(w_{t,B}^{\prime}-w_{t,A}^{\prime}(1-c))}{(1-c)+c\,(w _{t,B}-w_{t,A}(1-c))}. \tag{20}\]
In case the sold asset is \(B\) and the bought asset is \(A\), we just need to exchange the indices \(A\) and \(B\) of the weights in Eq. (20) to obtain the right formula.
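A sketch of the closed-form shrinkage in Eq. (20), with the caller responsible for identifying which asset is sold (its weight decreases) and which is bought:

```python
def shrinkage_mu(w_sold_old, w_bought_old, w_sold_new, w_bought_new, c):
    """Closed-form shrinkage parameter of Eq. (20) for the two-asset case.
    '_old' weights are the pre-reallocation weights w'_t, '_new' weights are
    the target weights w_t, and c is the trading fee rate."""
    num = (1.0 - c) + c * (w_bought_old - (1.0 - c) * w_sold_old)
    den = (1.0 - c) + c * (w_bought_new - (1.0 - c) * w_sold_new)
    return num / den
```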
BLVTs are also subject to a management fee. This fee is charged at 00:00 UTC directly on the net portfolio value. To account for this type of fee, we multiply \(\mu_{t}\) by a factor \((1-m)\) every \(24\) hours, where \(0\leq m<1\) is the management fee rate.
### _Baseline Loss Function_
The Sharpe ratio integrates two important aspects of portfolio performance, that is, profitability and risk, into one measure. More specifically, it is defined as the portfolio's expected excess return divided by the portfolio volatility. Since the portfolio return distribution is unknown, we estimate the Sharpe ratio with the portfolio return samples. For \(T\) trading periods \(\{[0,1],[1,2],\ldots,[T-1,T]\}\), the sample mean of portfolio return is
\[E_{R}=\frac{1}{T}\sum_{t=1}^{T}R_{t}, \tag{21}\]
where \(R_{t}\) is determined by Eq. (14) and the portfolio volatility is computed by the sample standard deviation of portfolio returns, which is
\[\sigma_{R}=\sqrt{\frac{1}{T-1}\sum_{t=1}^{T}(R_{t}-E_{R})^{2}}. \tag{22}\]
The Sharpe ratio, which omits the risk-free rate for simplicity, is
\[SR_{T}=\frac{E_{R}}{\sigma_{R}}. \tag{23}\]
In this paper, we train a neural network model, represented by the function \(\mathcal{F}_{\boldsymbol{\theta}}(\cdot)\), to output the portfolio weights in an end-to-end manner [5].
\[\mathbf{w}_{t}=\mathcal{F}_{\boldsymbol{\theta}}(\mathbf{x}_{t}), \tag{24}\]
where \(\mathbf{x}_{t}\) is the market information available at time \(t\). The Sharpe ratio in Eq. (23) is a function of portfolio weights. Therefore, it can be used as the loss function for training. We refer to this approach as the baseline method, which involves training a neural network model using the negative Sharpe ratio as the loss function.
\[L_{BL}=-SR_{T}. \tag{25}\]
Minimizing this loss function is equivalent to maximizing the Sharpe ratio. The long-only and budget constraints are fulfilled by adding the Softmax activation layer as the final layer in the neural network model.
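A minimal PyTorch sketch of this baseline loss, assuming the caller aligns the tensors so that row \(t\) of `weights` holds \(\mathbf{w}_{t-1}\) and row \(t\) of `asset_returns` holds \(\mathbf{r}_{t}\):

```python
import torch

def baseline_sharpe_loss(weights, asset_returns, mu=None):
    """L_BL = -SR_T (Eqs. (14), (21)-(23), (25)). `weights` is (T, N) with
    row t holding w_{t-1}; `asset_returns` is (T, N) with row t holding r_t;
    `mu` holds the per-period shrinkage factors (ones in the no-fee setting)."""
    if mu is None:
        mu = torch.ones(asset_returns.shape[0])
    port_returns = mu * (weights * (1.0 + asset_returns)).sum(dim=1) - 1.0  # R_t
    return -port_returns.mean() / port_returns.std()                        # -SR_T
```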
### _Neutral Position Constraint_
In this section, we form a constraint for the training of neural networks for the optimal allocation of the two tokens BTCUP and BTCDOWN. This is based on a linear regression model of the relationship between the prices of the two tokens:
\[y_{t,u}=\alpha+\beta^{market}\,y_{t,d}+\epsilon_{t}, \tag{26}\]
The terms \(y_{t,u}\) and \(y_{t,d}\) denote the prices of BTCUP and BTCDOWN at time \(t\), respectively. The error term \(\epsilon_{t}\) is a zero-mean random variable with variance \(\sigma^{2}\). We assume that \(\epsilon_{t}\) is independent of \(y_{t,d}\). The coefficients \(\beta^{market}\) and \(\alpha\) are unknown parameters. Figure 3 illustrates the price relationship of the two tokens and the corresponding estimated values of \(\beta^{market}\).
Suppose that in period \([t,t+1]\), the price of BTCDOWN changes by \(\Delta y_{t+1,d}\) to
\[y_{t+1,d}=y_{t,d}+\Delta y_{t+1,d}. \tag{27}\]
Then Eq. (26) implies that
\[y_{t+1,u}=y_{t,u}+\Delta y_{t+1,d}\,\beta^{market}+(\epsilon_{t+1}-\epsilon_{ t}). \tag{28}\]
At the beginning of period \([t,t+1]\), we hold a portfolio of BTCUP and BTCDOWN with volumes \(v_{t,u}\) and \(v_{t,d}\), respectively. The portfolio value at time \(t\) is
\[p_{t}=y_{t,u}\,v_{t,u}+y_{t,d}\,v_{t,d}. \tag{29}\]
The portfolio value at the end of the period \([t,t+1]\) just before the portfolio reallocation is
\[p^{\prime}_{t+1} =y_{t+1,u}\,v_{t,u}+y_{t+1,d}\,v_{t,d}\] \[=p_{t}+(\beta^{market}\,v_{t,u}+v_{t,d})\Delta y_{t+1,d}+v_{t,u} \,\Delta\epsilon_{t+1}, \tag{30}\]
where \(\Delta\epsilon_{t+1}=\epsilon_{t+1}-\epsilon_{t}\).
Our regularization strategy is based on the idea that the value of the portfolio at \(t+1\) just before the portfolio reallocation, \(p^{\prime}_{t+1}\), is _immune_ to changes in \(y_{t+1,d}\). In this way, our strategy does not depend dynamically on instantaneous changes in the token prices, but is static. To exclude the second term in Eq. (30), which captures the dependency on \(y_{t+1,d}\), we set
\[\frac{v_{t,d}}{v_{t,u}}=-\beta^{market}. \tag{31}\]
This does not guarantee that \(p^{\prime}_{t+1}=p_{t}\), as \(\epsilon_{t+1}-\epsilon_{t}\) has nonzero variance, and for that reason the portfolio value can change due to the residual component. However, assuming that \(\epsilon_{t+1}\) is uncorrelated with \(y_{t+1,d}\), this strategy minimizes the influence of \(\Delta y_{t+1,d}\) on \(\Delta p^{\prime}_{t+1}=p^{\prime}_{t+1}-p_{t}\).
We call the weight values that satisfy Eq. (31) the neutral weights and denote them as \(w^{*}_{t,u}\), and \(w^{*}_{t,d}\). Then by rewriting the volumes in terms of portfolio weights, we get
\[\frac{w^{*}_{t,d}}{w^{*}_{t,u}}=-\beta^{market}\left(\frac{y_{t,d}}{y_{t,u}} \right). \tag{32}\]
Suppose we explicitly set the portfolio weights to the neutral weights in every period. In that case, the portfolio has no bias toward specific assets and is neutral with respect to asset price changes. One can expect the portfolio value to be stable across successive trading periods, and the maximum drawdown over the long term will be low. This strategy resembles the GMVP strategy because both aim to minimize portfolio volatility. However, when using the above analysis instead of GMVP, we directly exploit the linear relation of a pair of BLVTs.
To compute these neutral weights, we estimate \(\beta^{market}\) by the ordinary least squares method on data from the \(K\) most recent periods. Then
\[\frac{\widehat{w}_{t,d}}{\widehat{w}_{t,u}}=-\widehat{\beta}^{market}_{t} \left(\frac{y_{t,d}}{y_{t,u}}\right), \tag{33}\]
where \(\widehat{\beta}^{market}_{t}\) is the estimate at time \(t\) of \(\beta^{market}\), and \(\widehat{w}_{t,d}\) and \(\widehat{w}_{t,u}\) are the estimates of \(w^{*}_{t,d}\) and \(w^{*}_{t,u}\), respectively. Eq. (33), together with the budget constraint, determines the values of \(\widehat{w}_{t,u}\) and \(\widehat{w}_{t,d}\).
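A sketch of this estimation step, assuming `y_up` and `y_down` hold the \(K\) most recent prices with the latest observation last:

```python
import numpy as np

def neutral_weights(y_up, y_down):
    """Estimate beta^market by OLS (Eq. (26)) on the K most recent prices and
    return the neutral weights (w_up, w_down) implied by Eq. (33) and the
    budget constraint."""
    beta_hat = np.polyfit(y_down, y_up, 1)[0]        # slope of y_up on y_down
    ratio = -beta_hat * (y_down[-1] / y_up[-1])      # = w_down / w_up, Eq. (33)
    w_up = 1.0 / (1.0 + ratio)
    return w_up, 1.0 - w_up
```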
### _Variance-controlled Loss Terms_
Controlling the neutrality of a portfolio has some benefits. First, gearing the portfolio towards the neutral position lowers directional risk, hence possibly improving the Sharpe ratio. Second, reducing the network's bias toward a specific asset can reduce portfolio losses in case the asset performs worse than it has in the past. Finally, the region within a margin around the neutral weights will always contain some weight values with which the model can make profits or at least preserve the portfolio value. If we encourage the model outputs to be in this region, it might be easier for the network to learn how to take profits than to search from random portfolio weight values.
To control the neutrality of the portfolio, we first rewrite the approximate portfolio variance
\[\begin{split}\mathrm{Var}[p^{\prime}_{t+1}]&\approx( \beta^{market}\,v_{t,u}+v_{t,d})^{2}\,\mathrm{Var}[\Delta y_{t+1,d}]\\ &=v^{2}_{t,u}(\beta^{market}-\beta^{model}_{t})^{2}\,\mathrm{ Var}[\Delta y_{t+1,d}],\end{split} \tag{34}\]
where
\[\beta^{model}_{t}=-\frac{v_{t,d}}{v_{t,u}}=-\frac{y_{t,u}\,w_{t,d}}{y_{t,d}\,w_ {t,u}} \tag{35}\]
Fig. 3: Scatter plot and ordinary least squares linear regression line of the prices of BTCUP and BTCDOWN. In each plot, the prices are sampled every \(1\) hour and there are \(48\) samples. Even though the price relation is a straight line, simple linear regression does not fit well in some cases.
Here \(\beta_{t}^{model}\) is a parameter computed from model allocation weights. At training time, we use the estimate \(\widehat{\beta}_{t}^{market}\) for \(\beta^{market}\). Our purpose is to constrain the variance via the term \((\beta_{t}^{market}-\beta_{t}^{model})^{2}\) within a margin
\[(\beta_{t}^{market}-\beta_{t}^{model})^{2} \leq(\gamma\,\beta_{t}^{market})^{2} \tag{36}\] \[\Longleftrightarrow(1+\gamma)\,\beta_{t}^{market}\leq\beta_{t}^{model}\leq(1-\gamma)\,\beta_{t}^{market}.\]
This condition is equivalent to
\[\begin{cases}C_{1}=\beta_{t}^{model}-(1+\gamma)\beta_{t}^{market}\geq 0\\ C_{2}=(1-\gamma)\beta_{t}^{market}-\beta_{t}^{model}\geq 0.\end{cases} \tag{37}\]
where \(\gamma\geq 0\) is the parameter that controls the degree of neutrality of the portfolio. We design the following loss terms, which take the form of hinge losses
\[A_{1}(\mathbf{w}_{t};\gamma)=\max(0,-C_{1}\,C_{2}), \tag{38}\]
and
\[A_{2}(\mathbf{w}_{t};\gamma)=\max(0,-C_{1})^{2}+\max(0,-C_{2})^{2}. \tag{39}\]
These two loss terms will only penalize the model when the \(\beta_{t}^{model}\) lies outside the constrained region. When \(\beta_{t}^{model}\) lies in the range defined in Eq. (36), the two loss terms will vanish. Otherwise, the penalized terms will be equal to the following positive terms
\[A_{1}(\mathbf{w}_{t};\gamma)=-C_{1}C_{2}, \tag{40}\]
and
\[A_{2}(\mathbf{w}_{t};\gamma)=\begin{cases}C_{1}^{2}&\text{if }C_{1}<0,\\ C_{2}^{2}&\text{if }C_{2}<0.\end{cases} \tag{41}\]
We conducted experiments in which we trained the neural network with loss functions combining the negative Sharpe ratio with \(A_{1}(\mathbf{w}_{t};\gamma)\) or \(A_{2}(\mathbf{w}_{t};\gamma)\). The obtained Sharpe ratios were significantly lower than the baseline. We suspect that the fractional form \(w_{t,d}/w_{t,u}\) of the model output in the proposed loss terms may make it difficult for the model to learn the optimal allocation. We solve this problem by multiplying the proposed loss terms by the squared BTCUP volume \(v_{t,u}^{2}\) to eliminate the fraction, and we observe better results. The final forms of the proposed loss terms are
\[\begin{split} HL_{1}(\mathbf{w}_{t};\gamma)&=v_{t,u }^{2}\,A_{1}(\mathbf{w}_{t};\gamma)\\ &=\max\left(0,-v_{t,u}^{2}\,C_{1}\,C_{2}\right),\end{split} \tag{42}\]
and
\[\begin{split} HL_{2}(\mathbf{w}_{t};\gamma)&=v_{t,u }^{2}\,A_{2}(\mathbf{w}_{t};\gamma)\\ &=\max(0,-v_{t,u}\,C_{1})^{2}+\max(0,-v_{t,u}\,C_{2})^{2}.\end{split} \tag{43}\]
The proposed loss function is the negative Sharpe ratio, combined with the variance-controlled loss terms
\[L_{1}=-SR_{T}+\xi\,\frac{1}{T}\left(\sum_{t=1}^{T}HL_{1}(\mathbf{w}_{t}; \gamma)\right), \tag{44}\]
and
\[L_{2}=-SR_{T}+\xi\,\frac{1}{T}\left(\sum_{t=1}^{T}HL_{2}(\mathbf{w}_{t};\gamma )\right), \tag{45}\]
where \(\xi\) is the parameter that controls the effects of the additional loss terms. We define the proposed methods as training a machine learning model that uses the proposed loss functions \(L_{1}\) or \(L_{2}\).
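A PyTorch sketch of the per-period terms \(HL_{1}\) and \(HL_{2}\) in Eqs. (42)-(43); the mean of either term over the \(T\) periods, scaled by \(\xi\), is added to the negative Sharpe ratio as in Eqs. (44) and (45). All inputs are assumed to be tensors, and `beta_market` is the OLS estimate.

```python
import torch

def variance_controlled_terms(w_up, w_down, y_up, y_down, v_up, beta_market, gamma):
    """Per-period hinge terms HL_1 and HL_2 of Eqs. (42)-(43)."""
    beta_model = -(y_up * w_down) / (y_down * w_up)        # Eq. (35)
    c1 = beta_model - (1.0 + gamma) * beta_market          # Eq. (37)
    c2 = (1.0 - gamma) * beta_market - beta_model
    hl1 = torch.clamp(-(v_up ** 2) * c1 * c2, min=0.0)     # Eq. (42)
    hl2 = (torch.clamp(-v_up * c1, min=0.0) ** 2
           + torch.clamp(-v_up * c2, min=0.0) ** 2)        # Eq. (43)
    return hl1, hl2
```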
## IV Datasets and Evaluations
In this section, we present empirical experiments on the baseline and proposed methods. Before that, details about the dataset, model architecture, feature selection, and hyperparameters setting are described.
### _Dataset Description_
We pulled \(1\)-hour timeframe OHLCV data of three assets with ticker names BTCUSD (BTC), BTCUPUSDT (BTCUP), and BTCDOWNUSDT (BTCDOWN) using the Binance API. OHLCV is an aggregate form of market data standing for Open, High, Low, Close, and Volume. The entire dataset spans 19 months, starting from the introduction of BTCUP and BTCDOWN. The data is split following the forward validation split scheme, where the previous training and test datasets are merged to form a new training dataset. The new test dataset is the data available right after the new training dataset. In our experiment, we use three test datasets. A summary of the data ranges in training and testing is shown in Table I. The test periods are selected so that all have a duration of two months and cover different price trends of BTC. In the first and second periods, BTC has an uptrend price movement, whereas BTC lost about \(25\%\) of its value in the third period.
### _Model Architecture and Feature Selection_
The input features for the neural network are derived from OHLCV data and return data for all the assets in the portfolio. Returns for individual assets are computed from the closing prices of two sequential periods, as defined in Eq. (8). Given the heterogeneity in price and volume ranges among assets, we apply normalization to all data, including returns. For each type of data, the mean and standard deviation over \(L_{norm}\) consecutive periods are computed, and these statistics are then used to perform z-score normalization for the subsequent \(L_{norm}\) periods. This forward normalization scheme guarantees that information from the future does not flow backward in time.
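A sketch of this forward normalization for a single series, under the assumption that statistics from one block of \(L_{norm}\) periods are applied to the next block:

```python
import numpy as np

def forward_zscore(x, window):
    """Block-wise forward z-score: statistics from one block of `window`
    periods normalize the following block, so no future information is used.
    The first `window` entries have no history and are left as NaN."""
    out = np.full(len(x), np.nan)
    for start in range(window, len(x), window):
        hist = x[start - window:start]
        mu, sd = hist.mean(), hist.std() + 1e-8      # guard against zero spread
        out[start:start + window] = (x[start:start + window] - mu) / sd
    return out
```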
After normalization, the features of each ticker are organized into a matrix whose columns are concatenated from the OHLCV and return features. The features of the three assets are combined along the column dimension. Fig. 4(a) visualizes the input features. To estimate the portfolio weights \(\mathbf{w}_{t}\), the model uses a lookback window containing data from the most recent \(L\) periods, including period \([t-1,t]\).
Our choice of network is a Long Short-Term Memory (LSTM) [13] with one layer and a hidden feature size of \(64\), followed by a fully connected layer with the Softmax activation function. While numerous specialized network architectures, such as [14, 15, 16, 17, 18, 19], have been proposed for financial forecasting, our choice leans towards the LSTM, primarily due to its robust validation through extensive testing in portfolio optimization [5]. Fig. 4(b) summarizes the model architecture.
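A sketch of this architecture in PyTorch (layer sizes as stated above; the class and argument names are illustrative):

```python
import torch
import torch.nn as nn

class AllocationLSTM(nn.Module):
    """One-layer LSTM (hidden size 64) followed by a fully connected layer
    and Softmax, mapping a lookback window of features to allocation weights;
    the Softmax enforces the long-only and budget constraints."""
    def __init__(self, n_features, n_assets=2, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, n_assets)

    def forward(self, x):                      # x: (batch, L, n_features)
        out, _ = self.lstm(x)
        return torch.softmax(self.head(out[:, -1]), dim=-1)
```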
### _Experiment Protocols_
We perform a hyperparameter search using the following parameters. The batch size is selected from the set \(\{64,128,256,512\}\), while the number of epochs ranges between \(80\) and \(140\) in increments of \(20\). We use the Adam optimizer [20] and a cosine annealing learning rate scheduler [21] with the start learning rate chosen from the set \(\{\text{1e-5},\text{3e-5},\text{5e-5},\text{1e-4},\text{3e-4},\text{5e-4}, \text{1e-3}\}\). The weight decay is chosen from \(\{0.0,\text{1e-4},\text{3e-4},\text{5e-4},\text{1e-3}\}\). The value of \(\gamma\) is chosen from the set \(\{0.0,0.1,0.2,\dots,1.0\}\). The parameter \(\xi\) is selected from the set \(\{\text{1e-6},\text{3e-6},\text{5e-6},\text{1e-5},\text{3e-5},\text{5e-5}, \text{1e-4},\text{3e-4},\text{5e-4},\text{1e-3}\}\). We conduct experiments under both fee-included and no-fee configurations. In the no-fee scheme, \(\mu_{t}\) is set to \(1\), and no management fee is imposed. Conversely, the fee-included scheme incorporates the Binance trading fee for BLVTs, setting the trading fee rate \(c\) at \(0.075\%\) and the daily management fee \(m\) at \(0.01\%\). Lastly, the length of the normalization window is fixed at \(L_{norm}=12\), and the number of periods used to estimate \(\beta^{market}\) and the length of the lookback window are both fixed at \(K=L=48\).
Due to the random initialization of the model parameters, the performance of the trained models differs for each training session. Each parameter configuration is run five times, and the reported metric is the mean and standard deviation of the average Sharpe ratio over three testing periods across five runs.
### _Experimental Results_
We compare our proposed methods with the baseline method (NS). The proposed methods are presented under the name SVC\({}_{1}\) for model training with loss function \(L_{1}\), and SVC\({}_{2}\) for loss function \(L_{2}\). We also include other benchmark methods for a comprehensive comparison. The Neutral-Weight Portfolio (NWP) involves the allocation of estimated neutral weights as outlined in Eq. (33). The Equal-Weight Portfolio (EWP) [9] uniformly distributes weights between BTCUP and BTCDOWN for each allocation period. The Global Minimum Variance Portfolio (GMVP) [1] determines optimal weights that minimize the estimated return covariance matrix, using a window length of \(48\) hours for estimation. Finally, the performance of the underlying cryptocurrency of BTCUP and BTCDOWN, which is BTC, is presented for reference. For holding BTC, including the trading and management fees does not affect the portfolio performance, since no trading is conducted and holding BTC does not incur the management fee.
The findings presented in Table II indicate that our proposed methods enhance the performance of the baseline method in most cases. Other strategies such as NWP, EWP, and GMVP yield Sharpe ratios near zero, underscoring the advantages of deploying neural network-based methods over conventional ones. The performance of BTC is much lower than that of the neural network-based methods because the profits BTC gains in the first and second periods are eroded by the downtrend price movement in the third period, while the neural network-based methods can put more weight on BTCDOWN to speculate on the decreasing BTC price. Therefore, these results show the benefit of holding a pair of BLVTs over the underlying asset.
In Tables III and IV, we choose the median result of the \(5\) runs to show a breakdown of the performance of all methods over the three testing periods, in order to observe how each method performs in different market situations. In addition, the final accumulated portfolio value (fAPV) is presented as a profitability metric. With the initial portfolio value set to 1, the fAPV reflects the accumulated return over each testing period. The Maximum Drawdown (MDD) is also considered to highlight the risk associated with each method.
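Both metrics are straightforward to compute from the per-period portfolio returns; a sketch with illustrative function names follows.

```python
import numpy as np

def fapv(returns: np.ndarray) -> float:
    # Final accumulated portfolio value with initial value 1.
    return float(np.prod(1.0 + returns))

def max_drawdown(returns: np.ndarray) -> float:
    # Largest peak-to-trough decline of the accumulated value curve.
    curve = np.cumprod(1.0 + returns)
    peaks = np.maximum.accumulate(curve)
    return float(np.max((peaks - curve) / peaks))
```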
These tables show that the NWP, EWP, and GMVP methods usually have the lowest MDD in all test periods, whether or not the trading and management fees are considered. While these methods are effective in reducing portfolio risk, they do so at the expense of portfolio profitability. The fAPVs of these methods show that they can only preserve the original portfolio value and cannot make profits. The Sharpe ratios of the proposed methods outperform the baseline method in almost every period under the different settings. When BTC loses one-fourth of its value in the third period, the proposed methods can profit and achieve a higher Sharpe ratio than both BTC and the baseline method.
Fig. 4: In (a), the elements of the feature matrices are the z-score normalized data. (b) shows the forward computation of the model and its structure. The market information consists of the BTCUP and BTCDOWN close prices, their future returns, and \(\beta^{market}_{t}\).
## V Conclusion
This work adopts the approach of using deep learning for the portfolio optimization problem, with the negative Sharpe ratio as the loss function. A pair of BLVTs is chosen to construct a portfolio to benefit from their consistent negative correlation. The portfolio contains only two assets, which allows us to investigate the neutral position of the portfolio using simple mathematical analysis. Additional loss terms are designed to control the neutrality of the portfolio. We compare the proposed methods with the baseline and other non-learning approaches. Experimental results show that holding a portfolio containing a pair of BLVTs is superior to holding only the underlying asset, especially in a high-volatility market like cryptocurrency. In addition, the proposed methods show their effectiveness in improving the baseline method, achieving the best results among all compared methods across the different settings.
|
2307.11794 | Artificial Intelligence-Generated Terahertz Multi-Resonant Metasurfaces
via Improved Transformer and CGAN Neural Networks | It is well known that the inverse design of terahertz (THz) multi-resonant
graphene metasurfaces by using traditional deep neural networks (DNNs) has
limited generalization ability. In this paper, we propose improved Transformer
and conditional generative adversarial neural networks (CGAN) for the inverse
design of graphene metasurfaces based upon THz multi-resonant absorption
spectra. The improved Transformer can obtain higher accuracy and generalization
performance in the StoV (Spectrum to Vector) design compared to traditional
multilayer perceptron (MLP) neural networks, while the StoI (Spectrum to Image)
design achieved through CGAN can provide more comprehensive information and
higher accuracy than the StoV design obtained by MLP. Moreover, the improved
CGAN can achieve the inverse design of graphene metasurface images directly
from the desired multi-resonant absorption spectra. It is turned out that this
work can finish facilitating the design process of artificial
intelligence-generated metasurfaces (AIGM), and even provide a useful guide for
developing complex THz metasurfaces based on 2D materials using generative
neural networks. | Yangpeng Huang, Naixing Feng, Yijun Cai | 2023-07-21T02:49:03Z | http://arxiv.org/abs/2307.11794v1 | Artificial Intelligence-Generated Terahertz Multi-Resonant Metasurfaces via Improved Transformer and CGAN Neural Networks
###### Abstract
It is well known that the inverse design of terahertz (THz) multi-resonant graphene metasurfaces by using traditional deep neural networks (DNNs) has limited generalization ability. In this paper, we propose improved Transformer and conditional generative adversarial neural networks (CGAN) for the inverse design of graphene metasurfaces based upon THz multi-resonant absorption spectra. The improved Transformer can obtain higher accuracy and generalization performance in the StoV (Spectrum to Vector) design compared to traditional multilayer perceptron (MLP) neural networks, while the StoI (Spectrum to Image) design achieved through CGAN can provide more comprehensive information and higher accuracy than the StoV design obtained by MLP. Moreover, the improved CGAN can achieve the inverse design of graphene metasurface images directly from the desired multi-resonant absorption spectra. This work can facilitate the design process of artificial intelligence-generated metasurfaces (AIGM), and even provide a useful guide for developing complex THz metasurfaces based on 2D materials using generative neural networks.
Conditional generative adversarial neural networks (CGAN), inverse design, improved Transformer, multi-resonant metasurface, terahertz.
## I Introduction
Terahertz (THz) technology has garnered a great deal of research interest due to its unique technical characteristics [1]-[5]. Graphene-based THz metasurfaces, with their distinctive optical properties, have enabled the development and improvement of numerous applications including optical detection [6], imaging [7], [8], sensing [9], and tunable absorption [10]. The design of metasurfaces often relies on the desired THz spectrum, which requires extensive simulation and continuous trial and error by experienced experts. However, due to limited computational performance, only a small number of design parameters can be adjusted, making it difficult to obtain the optimal structure [11], [12]. For a long time, traditional algorithms like adjoint methods [13], topology optimization [14], and genetic algorithms [15] have been used to solve the inverse design problem in metasurfaces. Nevertheless, these traditional methods typically require significant computing power and time. Furthermore, as the number of optimization parameters and the spatial dimension increase, the required computing power also increases, which is not conducive to the rapid development of metasurfaces.
Deep learning (DL) is widely recognized for its unique advantage of being data-driven, enabling models to automatically extract valuable information from large amounts of data [16]-[19]. As an end-to-end inverse design approach, DL demonstrates outstanding performance in designing metasurfaces. It not only eliminates the need for trial-and-error experience in numerical simulations, but also effectively addresses the computational challenges faced by traditional methods. Furthermore, DL algorithms can explore a large number of design parameters and obtain optimal structures that cannot be achieved by traditional algorithms. DL technology has been widely applied in the inverse design of various nanostructures, such as multilayer nanoparticles [20], multilayer thin films [21], metamaterials [22], and metasurfaces [23]. Especially for the inverse design of graphene-based metasurfaces, DL technology exhibits greater advantages over conventional algorithms considering both efficiency and accuracy. For example, Harper et al. developed artificial neural networks (ANNs) that can relate metasurface geometries to reflection and transmission spectra, enabling the neural networks to suggest device geometries based on a desired optical performance, which inverts the design process for metasurfaces [24]. Lin _et al_. proposed an improved transfer function (TF)-based ANN model that can directly generate structure parameters to match the customer-expected THz transmission spectrum, using an inner ring resonator and a split outer ring resonator [25]. Du _et al_. proposed a scalable multi-task learning neural network that can capture the impact of nanostructure dimensions on their optical absorption in graphene-based metasurfaces, obtained through inverse design of the absorption spectrum at visible frequencies [26]. Liu _et al_. constructed a generative network which can produce candidate patterns that match the desired spectrum with high fidelity. The generative network can produce a specific metasurface
corresponding to a given input transmission spectrum in the THz range [27].
The inverse design approaches described in previous work have primarily used conventional neural networks. Furthermore, the inverse design of graphene-based metasurfaces has relied primarily on structural parameters, which results in limited information content, limited generalization capacity, and practical constraints. As a result, there is an urgent need to develop a neural network model with higher generalization capacity and improved accuracy for the inverse design of graphene-based THz multi-resonant metasurfaces. In our previous work, we employed an improved Transformer neural network to achieve the inverse design of multilayer metamaterials, obtaining significant improvements in both generalization ability and accuracy [28]. However, it was applicable only to one-dimensional layered structures and could not handle more complex metasurface structures.
In this article, we propose the improved Transformer and conditional generative adversarial neural networks (CGAN) for the inverse design of graphene metasurfaces based on THz multi-resonant absorption spectra. The inputs for both neural networks are the THz multi-resonant absorption spectrum, and the outputs are the parameter vector and the pattern image of the graphene metasurface, referred to as StoV (Spectrum to Vector) and StoI (Spectrum to Image), respectively, in the remaining parts of the paper. Besides, as compared with the traditional multilayer perceptron (MLP) neural networks, the improved Transformer can achieve both higher accuracy and generalization performance in the StoV design. Furthermore, the StoI design achieved through CGAN can provide more comprehensive information and higher accuracy than the StoV design obtained by the traditional MLP neural network, and it truly facilitates the design process of AIGM (AI-Generated Metasurface).
## II Structure and Parameters
To verify the effectiveness of the proposed inverse design algorithm, we constructed a graphene metasurface composed of graphene strips with different widths, as shown in Fig. 1. It consists of a monolayer graphene metasurface, SiC dielectric layer and conventional perfect electric conductor (PEC) substrate, where the thickness of the dielectric layer SiC \(h\) is 2.8 \(\mu\)m and the periodic width \(P\) is 20 \(\mu\)m. Graphene strips with different widths generate different resonances, making the device suitable for use as a multi-resonant THz resonator. To facilitate the subsequent design, we divided the 20 \(\mu\)m period width of the multi-resonant metasurface into 20 strips, each with a width of 1 \(\mu\)m. There are 6 different cases of graphene chemical potential, including 0 eV, 0.6 eV, 0.7 eV, 0.8 eV, 0.9 eV, and 1.0 eV, where 0 eV represents the absence of graphene on the surface of the strip.
In the design, the refractive index of SiC is set as 2.5 according to [29, 30] and the refractive index of graphene can be calculated using its surface conductivity, as defined in (1)-(6) [28, 31]:
\[\sigma(\omega,\mu_{c},\Gamma,T)=\sigma_{\rm intra}+\sigma_{\rm inter}\, \tag{1}\]

\[\sigma_{\rm intra}=\frac{je^{2}}{\pi h^{2}(\omega-2j\Gamma)}\int_{0}^{\infty} \xi\left(\frac{\partial f_{d}(\xi,\mu_{c},T)}{\partial\xi}-\frac{\partial f_{d}(-\xi,\mu_{c},T)}{\partial\xi}\right)d\xi\, \tag{2}\]

\[\sigma_{\rm inter}=-\frac{je^{2}(\omega-2j\Gamma)}{\pi h^{2}}\int_{0}^{\infty} \frac{f_{d}(-\xi,\mu_{c},T)-f_{d}(\xi,\mu_{c},T)}{(\omega-2j\Gamma)^{2}-4(\xi/h)^{2}}d\xi\, \tag{3}\]
\[f_{d}(\xi,\mu_{c},T)=\left(e^{(\xi-\mu_{c})/k_{B}T}+1\right)^{-1}, \tag{4}\]
\[\varepsilon_{in}(\omega)=\varepsilon_{xx}(\omega)=\varepsilon_{yy}(\omega)= \varepsilon_{0}+i\frac{\sigma}{\omega\Delta}\,, \tag{5}\]
\[n_{\rm g}=(\varepsilon_{in}\mu_{r})^{1/2}. \tag{6}\]
Here, the symbols \(\sigma_{intra}\) and \(\sigma_{inter}\) denote the conductivity of graphene resulting from the intraband and interband transitions, respectively. \(h\), \(\Gamma\), \(\omega\), \(e\), \(k_{B}\), \(T\), \(\mu_{c}\) and \(\xi\) denote the reduced Planck constant, the scattering rate, the radian frequency, the electron charge, the Boltzmann constant, the temperature in Kelvin, the chemical potential and the electron energy, respectively. The Fermi-Dirac distribution is represented by \(f_{d}\left(\xi,\mu_{c},T\right)\). The in-plane components of the graphene relative permittivity are represented by \(\varepsilon_{in}\), \(\varepsilon_{xx}\) and \(\varepsilon_{yy}\), and the permittivity of vacuum is represented by \(\varepsilon_{0}\). \(\Delta\) denotes the thickness of graphene. \(n_{\rm g}\) and \(\mu_{r}\) are the refractive index and relative permeability of graphene, respectively. Moreover, \(\mu_{r}\) is assumed to be 1 due to the nonmagnetic property of graphene.
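As a numerical illustration, the intraband term of Eq. (2) can be evaluated directly by quadrature. The sketch below works in units of \(k_{B}T\) for numerical stability, and the scattering rate in the example call is an illustrative placeholder rather than a value used in this work.

```python
import numpy as np
from scipy.integrate import quad

HBAR = 1.054571817e-34  # reduced Planck constant, h in the paper (J*s)
E = 1.602176634e-19     # electron charge (C)
KB = 1.380649e-23       # Boltzmann constant (J/K)

def sigma_intra(omega: float, mu: float, Gamma: float, T: float) -> complex:
    """Intraband graphene conductivity, Eq. (2), by quadrature.
    omega in rad/s, mu in J, Gamma in 1/s, T in K."""
    m = mu / (KB * T)
    # s(a) = e^a / (1 + e^a)^2 in an overflow-safe form; the bracketed
    # derivative term of Eq. (2) equals -(s(u - m) + s(u + m)) / (kB*T)
    # with u = xi / (kB*T).
    s = lambda a: np.exp(-abs(a)) / (1.0 + np.exp(-abs(a))) ** 2
    val, _ = quad(lambda u: -u * (s(u - m) + s(u + m)), 0.0, m + 60.0)
    return 1j * E**2 * KB * T * val / (np.pi * HBAR**2 * (omega - 2j * Gamma))

# Example: f = 3 THz, mu_c = 0.9 eV, T = 300 K, Gamma = 1e12 1/s (placeholder)
print(sigma_intra(2 * np.pi * 3e12, 0.9 * E, 1e12, 300.0))
```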
In order to obtain the absorption spectrum of THz multi-resonant metasurfaces based on graphene, we performed simulations using the finite element method (FEM). In our simulation, we applied periodic boundary conditions in the \(x\)-axis and \(y\)-axis directions and imposed THz incidence downward from the top surface of the graphene metasurface. To capture the localized enhanced electromagnetic fields of the graphene layer, meshes of user-defined size were applied, while tetrahedral meshes were used for the remaining domains of the structure. The simulation was performed on a metasurface with a period width of 20 \(\mu\)m, which was divided into 20 strips, each with six different values of chemical potential. Besides, we employed uniform sampling to generate 20,000 samples, with 19,000 used for training and 1,000 for testing, to avoid solution space bias. The dataset consists of different combinations of graphene chemical potentials and their corresponding absorption spectra.
## III StoV Design for Graphene Metasurface
We first consider the StoV inverse design for the THz multi-resonant metasurface with our proposed improved Transformer network. The graphene metasurface with a 20 \(\mu\)m period is divided into 20 strips, and the chemical potential parameters of the 20 graphene strips are represented using the vector \(C\)= [\(c_{1}\), \(c_{2}\),..., \(c_{20}\)], where \(c_{1}\), \(c_{2}\),..., \(c_{20}\) represent the chemical potentials of the individual graphene strips.
Electrical biasing or chemical doping can be used to tune the chemical potential and design the diversity of THz multi-resonant metasurfaces based on graphene. The green and red curve samples in Fig. 2 correspond to Segments I and II and have chemical potential representations of \(C_{\rm I}\)=[0, 0.9, 0, 0.9, 0.9, 0.9, 0.9, 0.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] eV and \(C_{\rm II}\)=[0, 0.7, 0, 0, 1.0, 1.0, 0, 0, 1.0, 1.0, 0, 0, 0.9, 0.9, 0, 0.9, 0.9, 0, 0, 0] eV, respectively. As shown in Fig. 2, these two samples have two and four resonant absorption peaks, respectively.
Because some simulated absorption spectra do not perform well, and real-world fabrication conditions must be taken into account, such as ensuring that the chemical potentials of adjacent graphene strips are the same, collecting datasets for THz multi-resonant metasurfaces based on graphene is challenging. This can result in a smaller amount of data than expected, which can easily lead to overfitting in larger neural networks. To solve this problem, we propose an improved Transformer network that performs the inverse design from the desired THz absorption spectrum based on the training data \(D=\{(S_{i},C_{j}),\ i=1,2,\dots,n,\ j=1,2,\dots,h\}\), where \(S_{i}\) and \(C_{j}\) represent the sampled values of the THz multi-resonant absorption spectrum and the chemical potentials of the 20 graphene strips in the metasurface, respectively. In the proposed improved Transformer model, the THz multi-resonant absorption spectrum is used as the input to the neural network, which consists of 181 sampling points representing frequencies ranging from 1 THz to 10 THz with an interval of 0.05 THz. The output of the network is the chemical potential vector of the 20 strips in the graphene metasurface.
As seen in Fig. 3, like the original Transformer [32], our proposed improved Transformer network employs the classical Encoder-Decoder model. The Encoder and Decoder have identical structures. In the Encoder module, each input vector is linearly mapped into three vectors \(Q\), \(K\) and \(V\); \(Q\) and \(K\) are used to compute the attention map of the Encoder. The Encoder processes \([a^{1},a^{2},\dots,a^{181}]\) by means of the self-attention mechanism. Firstly, the \(Q\), \(K\) and \(V\) matrices are computed by three FCLs, see (7)-(10), where \(W^{q}\), \(W^{k}\), \(W^{v}\) are the weights of the three FCLs, respectively, and \([q^{1},q^{2},\dots,q^{181}]\), \([k^{1},k^{2},\dots,k^{181}]\), \([v^{1},v^{2},\dots,v^{181}]\) are the results of the linear mapping of \([a^{1},a^{2},\dots,a^{181}]\) in the three FCLs:

\[q^{i}=W^{q}a^{i}, \tag{7}\]

\[k^{i}=W^{k}a^{i}, \tag{8}\]

\[v^{i}=W^{v}a^{i}, \tag{9}\]

\[b^{i}=\sum_{j}\mathrm{softmax}_{j}\!\left(\frac{q^{i}\cdot k^{j}}{\sqrt{d}}\right)v^{j}. \tag{10}\]
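For concreteness, a minimal sketch of this single-head self-attention over the 181 spectrum tokens is given below; the \(\sqrt{d}\) scaling follows the standard Transformer, and all variable names are ours.

```python
import torch

def self_attention(A: torch.Tensor, Wq: torch.Tensor,
                   Wk: torch.Tensor, Wv: torch.Tensor) -> torch.Tensor:
    # A: (181, d_model) embedded spectrum tokens; Wq/Wk/Wv: (d_model, d).
    Q, K, V = A @ Wq, A @ Wk, A @ Wv                      # Eqs. (7)-(9)
    attn = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    return attn @ V                                       # Eq. (10)
```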
To train and validate the improved Transformer in our work, we choose the Adam optimizer; the batch size is set to 256, the learning rate to 0.001, and the weight decay to 1e-5. For the testing set, we define a function to calculate the prediction accuracy of the neural networks, as shown in (14):
\[H=1-\frac{\sqrt{\sum_{i=1}^{n}(p_{i}-x_{i})^{2}}}{\sqrt{\sum_{i=1}^{n}{p_{i}}^{2}}}. \tag{14}\]
where \(H\), \(p_{i}\), \(x_{i}\) and \(n\) denote the average relative accuracy, the sampling value of the target absorption spectrum, the sampling value of the predicted absorption spectrum and the total number of sampling points, respectively. In this work, Equation (14) is utilized to evaluate the test results of the improved Transformer and the traditional MLP networks.
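In code, the metric is a one-liner (function name ours):

```python
import numpy as np

def relative_accuracy(target: np.ndarray, predicted: np.ndarray) -> float:
    # Eq. (14): one minus the relative L2 error between the target
    # spectrum p and the predicted spectrum x.
    return 1.0 - np.linalg.norm(target - predicted) / np.linalg.norm(target)
```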
It is obvious that the inverse design accuracy based on the improved Transformer is higher than that based on the traditional MLP for the StoV design. As reflected in Fig. 4(a), the Transformer converges at epoch 16, while the MLP converges at epoch 33. Moreover, the training and testing loss after the Transformer converges is 1.13, while that after the MLP converges is 1.87. Next, in Fig. 4(b), the test accuracy of the Transformer reaches 96.14% after convergence, while that of the MLP is only 94.27%. Therefore, the proposed network exhibits higher accuracy and faster convergence than the traditional neural network.
To further confirm the superiority of the improved Transformer in the application of inverse design, we compare the target spectrum with the spectra predicted by the standard MLP and the improved Transformer networks, as seen in Fig. 5. Owing to the introduction of the self-attention mechanism, the Transformer is able to better extract the key information from the data, capture the internal relevance of the data or features, ignore unimportant information, and reduce the dataset requirements. As a result, the Transformer neural network achieves higher accuracy and better generalization capability in the inverse design of THz multi-resonant metasurfaces based on graphene.
Fig. 4: Performance of StoV design for THz multi-resonant graphene metasurfaces. (a) Training and test loss curves of MLP and improved Transformer. (b) Prediction accuracy of MLP and improved Transformer.
Fig. 3: Structure of the proposed improved Transformer Network.
## IV StoI Design for Graphene Metasurfaces
The StoV design is based on the chemical potential combination of 20 graphene strips; however, the inverse design based on this method is not intuitive enough and cannot achieve the inverse design of complex graphene metasurface shapes. Therefore, we further propose an improved Conditional Generative Adversarial Network (CGAN) to achieve the inverse design of graphene metasurface images directly from the desired multi-resonant absorption spectra, namely the StoI design.
In the StoI dataset, each metasurface image corresponds to a THz multi-resonant absorption spectrum. The size of the metasurface is 80\(\times\)80 and the dimension of the absorption spectrum is 181. Two sample metasurface images on the left side of Fig. 6 correspond to blue and red multi-resonant absorption spectra, respectively. In these images, green, yellow, red, blue, and dark red colors represent the five chemical potentials of graphene, namely 0.6 eV, 0.7 eV, 0.8 eV, 0.9 eV, and 1.0 eV, respectively, while white indicates the absence of graphene. This dataset can be extended to include other complex-shaped graphene metasurfaces as long as a sufficient dataset is available for those shapes.
In the original CGAN neural network [33], not only conditional information but also random noise signals are required as input, which can lead to poor learning of the THz absorption spectrum, thereby affecting the accuracy of the inverse design. In contrast, the improved CGAN directly inputs the THz absorption spectrum to the generator as conditional information to guide the model in generating metasurface images with specific characteristics. The structure of the improved CGAN neural network is shown in Fig. 7; it is a type of generative adversarial network (GAN) with conditional constraints. In the improved CGAN, the THz multi-resonant absorption spectrum is used as a condition to guide the generation process of graphene metasurfaces in the model. In this way, the generator can better learn the THz
Fig. 5: Absorption spectra of target and predicted based on MLP and improved Transformer algorithms. (a), (b), (c) and (d) represent datasets with one, two, three and four absorption peaks, respectively.
Fig. 6: Typical StoI data with metasurface images and the corresponding absorption spectra.
absorption spectrum and generate more accurate metasurfaces. During training, the generator produces metasurface samples from the THz absorption spectrum, while the discriminator judges the realism of the generated metasurfaces by comparing them with the real ones. Through continuous iterative training, the generator gradually learns to generate more realistic metasurface images, and the discriminator becomes more accurate at distinguishing generated metasurfaces from real ones, thus improving the overall generation quality of the model.
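A minimal sketch of this adversarial loop is given below, assuming a standard binary cross-entropy GAN objective; the exact loss is not spelled out here, so the formulation and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def cgan_step(G, D, spectra, real_imgs, opt_G, opt_D):
    """One training step: G maps spectra (batch, 181) to images
    (batch, 3, 80, 80); D scores (image, spectrum) pairs with a
    single logit (assumed output shape: (batch, 1))."""
    ones = torch.ones(spectra.size(0), 1)
    zeros = torch.zeros(spectra.size(0), 1)
    fake = G(spectra)
    # Discriminator: real pairs -> 1, generated pairs -> 0.
    opt_D.zero_grad()
    loss_D = (F.binary_cross_entropy_with_logits(D(real_imgs, spectra), ones)
              + F.binary_cross_entropy_with_logits(D(fake.detach(), spectra), zeros))
    loss_D.backward()
    opt_D.step()
    # Generator: fool the discriminator into outputting 1.
    opt_G.zero_grad()
    loss_G = F.binary_cross_entropy_with_logits(D(fake, spectra), ones)
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_D.item()
```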
In the improved CGAN, the specific layers of the generator network and the discriminator network are illustrated in Table I. The generator network consists of a fully connected layer and five transposed convolutional layers, and the discriminator consists of two fully connected layers and three convolutional layers. In the generator network, the THz multi-resonant absorption spectrum with 181 sampling points is first passed through a fully connected layer, expanding the input to 5\(\times\)5\(\times\)128. Then, this is converted into an image of size 5\(\times\)5 with 128 channels. Next, the image is upsampled to size 80\(\times\)80 with 32 channels using four transposed convolution layers. In the last transposed convolution layer, the image size is unchanged and the number of channels is reduced to 3, thus achieving the generation of an 80\(\times\)80 image from a THz multi-resonant absorption spectrum with 181 sampling points. In the discriminator network, the absorption spectrum is first transformed into a long sequence of 3\(\times\)80\(\times\)80 by a fully connected layer, and then converted into an image with a size of 80\(\times\)80 and 3 channels. The image generated by the generator is concatenated with the absorption spectrum image along the channel dimension, resulting in an image with a size of 80\(\times\)80 and 6 channels. Next, the resulting image is passed through three convolutional layers, yielding an output of dimension 10\(\times\)10\(\times\)64. Finally, it is reduced to one dimension through a fully connected layer, which is used to identify whether the input image is real or fake.
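Following this description, the generator can be sketched as below; the intermediate channel widths, kernel sizes, and activations are our assumptions, since only the endpoints (5\(\times\)5\(\times\)128 and 80\(\times\)80\(\times\)3) are specified.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Spectrum (181) -> 5x5x128 via a fully connected layer, then four
    stride-2 transposed convolutions up to 80x80x32, and a final
    size-preserving transposed convolution down to 3 channels."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(181, 5 * 5 * 128)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 96, 4, stride=2, padding=1), nn.ReLU(),  # 10x10
            nn.ConvTranspose2d(96, 64, 4, stride=2, padding=1), nn.ReLU(),   # 20x20
            nn.ConvTranspose2d(64, 48, 4, stride=2, padding=1), nn.ReLU(),   # 40x40
            nn.ConvTranspose2d(48, 32, 4, stride=2, padding=1), nn.ReLU(),   # 80x80
            nn.ConvTranspose2d(32, 3, 3, stride=1, padding=1), nn.Tanh(),    # 80x80x3
        )

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:  # (batch, 181)
        x = self.fc(spectrum).view(-1, 128, 5, 5)
        return self.up(x)
```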
To train and validate the improved CGAN neural network in this work, we choose the Adam optimizer; the weight decay is set to 1e-5, and the batch size is set to 128. The learning rates for the generator and discriminator are set to 0.0005 and 0.00005, respectively. When calculating the accuracy of the neural network, the metasurfaces generated by the CGAN were simulated using FEM to obtain the corresponding THz absorption spectra. Then, Eq. (14) was employed to evaluate the accuracy of the inverse design by comparing the target spectrum with the THz absorption spectrum of the generated metasurface.
Next, we plot the training loss curves and the prediction accuracy curve of the CGAN network, as shown in Fig. 8. The model converges at epoch 17 with a generator loss of 0.68 and a discriminator loss of 1.33, as reflected in Fig. 8(a). Fig. 8(b) shows that the CGAN model achieves a test accuracy of 95.34%. In the inverse design of graphene-based THz metasurfaces, the CGAN model demonstrates higher accuracy compared to the StoV inverse design achieved through the MLP neural network. The main reason is that the CGAN is capable of learning more information from the StoI dataset, which leads to higher accuracy in inverse design and allows the model to reach stability more quickly.
In Fig. 9, we present four THz multi-resonant metasurfaces based on graphene to evaluate the performance of the CGAN in inverse design for more specific samples. A comparison is depicted between the target spectrum and the predicted spectrum. The two small images on the left show the desired metasurface image (top) and the metasurface image generated by the CGAN neural network (bottom), respectively, while the spectra of the target metasurface and the metasurface generated by the CGAN are shown in the main figure. The results of the StoI inverse design demonstrate that the improved CGAN neural network offers a more intuitive inverse design process compared to the traditional MLP neural network. It is capable of directly generating the desired graphene metasurface and has stronger adaptability to the inverse design of complex-shaped graphene metasurfaces.
Fig. 7: Structure of the proposed improved CGAN network.
In order to comprehensively compare the performance of the three neural networks in implementing the inverse design of different types of graphene metasurfaces, we list the corresponding inverse design performance in Table II. It can be seen that in the StoV inverse design, the improved Transformer neural network has higher accuracy than the traditional MLP neural network. Besides, the CGAN neural network can learn the two-dimensional graphene metasurface through the StoI dataset, which yields higher accuracy in inverse design. Moreover, the model can achieve stability more quickly and is also applicable to more complex inverse design of graphene metasurfaces.
Fig. 8: Performance of StoI design for THz multi-resonant graphene metasurfaces. (a) Training loss curves of discriminator and generator. (b) Prediction accuracy of CGAN.
Fig. 9: Metasurface images and their corresponding absorption spectra, both for the target and the predicted values.
## V Conclusion
In this work, we investigated two improved neural networks for the on-demand design of graphene-based THz multi-resonant metasurfaces. The improved Transformer and CGAN networks were validated using the StoV and StoI datasets, respectively. The results show that the improved Transformer network has higher accuracy and faster convergence speed than the traditional MLP neural networks in the StoV design. The improved CGAN achieved higher inverse design accuracy through the StoI dataset and provided a more intuitive inverse design process, and it can also be employed to design other complex graphene-based metasurfaces in the microwave or millimeter wave range.
## Acknowledgment
The authors declare no conflicts of interest regarding this article.
|
2303.03667 | Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks | To design fast neural networks, many works have been focusing on reducing the
number of floating-point operations (FLOPs). We observe that such reduction in
FLOPs, however, does not necessarily lead to a similar level of reduction in
latency. This mainly stems from inefficiently low floating-point operations per
second (FLOPS). To achieve faster networks, we revisit popular operators and
demonstrate that such low FLOPS is mainly due to frequent memory access of the
operators, especially the depthwise convolution. We hence propose a novel
partial convolution (PConv) that extracts spatial features more efficiently, by
cutting down redundant computation and memory access simultaneously. Building
upon our PConv, we further propose FasterNet, a new family of neural networks,
which attains substantially higher running speed than others on a wide range of
devices, without compromising on accuracy for various vision tasks. For
example, on ImageNet-1k, our tiny FasterNet-T0 is $2.8\times$, $3.3\times$, and
$2.4\times$ faster than MobileViT-XXS on GPU, CPU, and ARM processors,
respectively, while being $2.9\%$ more accurate. Our large FasterNet-L achieves
impressive $83.5\%$ top-1 accuracy, on par with the emerging Swin-B, while
having $36\%$ higher inference throughput on GPU, as well as saving $37\%$
compute time on CPU. Code is available at
\url{https://github.com/JierunChen/FasterNet}. | Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, S. -H. Gary Chan | 2023-03-07T06:05:30Z | http://arxiv.org/abs/2303.03667v3 | # Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks
###### Abstract
To design fast neural networks, many works have been focusing on reducing the number of floating-point operations (FLOPs). We observe that such reduction in FLOPs, however, does not necessarily lead to a similar level of reduction in latency. This mainly stems from inefficiently low floating-point operations per second (FLOPS). To achieve faster networks, we revisit popular operators and demonstrate that such low FLOPS is mainly due to frequent memory access of the operators, especially the depthwise convolution. We hence propose a novel partial convolution (PConv) that extracts spatial features more efficiently, by cutting down redundant computation and memory access simultaneously. Building upon our PConv, we further propose FasterNet, a new family of neural networks, which attains substantially higher running speed than others on a wide range of devices, without compromising on accuracy for various vision tasks. For example, on ImageNet-1k, our tiny FasterNet-T0 is \(2.8\times\), \(3.3\times\), and \(2.4\times\) faster than MobileViT-XXS on GPU, CPU, and ARM processors, respectively, while being 2.9% more accurate. Our large FasterNet-L achieves impressive 83.5% top-1 accuracy, on par with the emerging Swin-B, while having 36% higher inference throughput on GPU, as well as saving 37% compute time on CPU. Code is available at [https://github.com/JierunChen/FasterNet](https://github.com/JierunChen/FasterNet).
## 1 Introduction
Neural networks have undergone rapid development in various computer vision tasks such as image classification, detection and segmentation. While their impressive performance has powered many applications, a roaring trend is to pursue fast neural networks with low latency and high throughput for great user experiences, instant responses, safety reasons, etc.
How to be fast? Instead of asking for more costly computing devices, researchers and practitioners prefer to design cost-effective fast neural networks with reduced computational complexity, mainly measured in the number of floating-point operations (FLOPs)1. MobileNets [24, 25, 54], ShuffleNets [84, 46] and GhostNet [17], among others, leverage the depthwise convolution (DWConv) [55] and/or group convolution (GConv) [31] to extract spatial features. However, in the effort to reduce FLOPs, the operators often suffer from the side effect of increased memory access. MicroNet [33] further decomposes and sparsifies the network to push its FLOPs to an extremely low level. Despite its improvement in FLOPs, this approach experiences inefficient fragmented computation. Besides, the above networks are often accompanied by additional data manipulations, such as concatenation, shuffling, and pooling, whose running time tends to be significant for tiny models.
Footnote 1: We follow a widely adopted definition of FLOPs, as the number of multiply-adds [84, 42].
Apart from the above pure convolutional neural networks (CNNs), there is an emerging interest in making vision transformer (ViT) [12] and multilayer perceptron (MLP) architectures [64] smaller and faster. For example, MobileViTs [48, 49, 70] and MobileFormer [6] reduce the computational complexity by combining DWConv with a modified attention mechanism. However, they still suffer from the aforementioned issue with DWConv and also need dedicated hardware support for the modified attention mechanism. The use of advanced yet time-consuming
Figure 1: Our partial convolution (PConv) is fast and efficient by applying filters on only a few input channels while leaving the remaining ones untouched. PConv obtains lower FLOPs than the regular convolution and higher FLOPS than the depthwise/group convolution.
normalization and activation layers may also limit their speed on devices.
All these issues together lead to the following question: Are these "fast" neural networks really fast? To answer this, we examine the relationship between latency and FLOPs, which is captured by
\[Latency=\frac{FLOPs}{FLOPS}, \tag{1}\]
where FLOPS is short for floating-point operations per second, as a measure of the effective computational speed. While there are many attempts to reduce FLOPs, they seldom consider optimizing FLOPS at the same time to achieve truly low latency. To better understand the situation, we compare the FLOPS of typical neural networks on an Intel CPU. The results in Fig. 2 show that many existing neural networks suffer from low FLOPS, and their FLOPS is generally lower than the popular ResNet50. With such low FLOPS, these "fast" neural networks are actually not fast enough. Their reduction in FLOPs cannot be translated into the exact amount of reduction in latency. In some cases, there is no improvement, and it even leads to worse latency. For example, CycleMLP-B1 [5] has half the FLOPs of ResNet50 [20] but runs more slowly (_i.e._, CycleMLP-B1 _vs._ ResNet50: 116.1ms _vs._ 73.0ms). Note that this discrepancy between FLOPs and latency has also been noticed in previous works [46, 48] but remains unresolved partially because they employ the DWConv/GConv and various data manipulations with low FLOPS. It is deemed that there are no better alternatives available.
This paper aims to eliminate the discrepancy by developing a simple yet fast and effective operator that maintains high FLOPS with reduced FLOPs. Specifically, we reexamine existing operators, particularly DWConv, in terms of the computational speed - FLOPS. We uncover that the main reason causing the low FLOPS issue is _frequent memory access_. We then propose a novel partial convolution (PConv) as a competitive alternative that reduces the computational redundancy as well as the number of memory access. Fig. 1 illustrates the design of our PConv. It takes advantage of redundancy within the feature maps and systematically applies a regular convolution (Conv) on only a part of the input channels while leaving the remaining ones untouched. By nature, PConv has lower FLOPs than the regular Conv while having higher FLOPS than the DWConv/GConv. In other words, PConv better exploits the on-device computational capacity. PConv is also effective in extracting spatial features as empirically validated later in the paper.
We further introduce FasterNet, which is primarily built upon our PConv, as a new family of networks that run highly fast on various devices. In particular, our FasterNet achieves state-of-the-art performance for classification, detection, and segmentation tasks while having much lower latency and higher throughput. For example, our tiny FasterNet-T0 is \(2.8\times\), \(3.3\times\), and \(2.4\times\) faster than MobileViT-XXS [48] on GPU, CPU, and ARM processors, respectively, while being 2.9% more accurate on ImageNet-1k. Our large FasterNet-L achieves 83.5% top-1 accuracy, on par with the emerging Swin-B [41], while offering 36% higher throughput on GPU and saving 37% compute time on CPU. To summarize, our contributions are as follows:
* We point out the importance of achieving higher FLOPS beyond simply reducing FLOPs for faster neural networks.
* We introduce a simple yet fast and effective operator called PConv, which has a high potential to replace the existing go-to choice, DWConv.
* We introduce FasterNet which runs favorably and universally fast on a variety of devices such as GPU, CPU, and ARM processors.
* We conduct extensive experiments on various tasks and validate the high speed and effectiveness of our PConv and FasterNet.
Figure 2: (a) FLOPS under varied FLOPs on CPU. Many existing neural networks suffer from low computational speed issues. Their effective FLOPS are lower than the popular ResNet50. By contrast, our FasterNet attains higher FLOPS. (b) Latency under varied FLOPs on CPU. Our FasterNet obtains lower latency than others with the same amount of FLOPs.
## 2 Related Work
We briefly review prior works on fast and efficient neural networks and differentiate this work from them.
**CNN.** CNNs are the mainstream architecture in the computer vision field, especially when it comes to deployment in practice, where being fast is as important as being accurate. Though there have been numerous studies [7, 8, 21, 33, 55, 56, 83, 86] to achieve higher efficiency, the rationale behind them is more or less to perform a low-rank approximation. Specifically, the group convolution [31] and the depthwise separable convolution [55] (consisting of depthwise and pointwise convolutions) are probably the most popular ones. They have been widely adopted in mobile/edge-oriented networks, such as MobileNets [24, 25, 54], ShuffleNets [46, 84], GhostNet [17], EfficientNets [61, 62], TinyNet [18], Xception [8], CondenseNet [27, 78], TVConv [4], MnasNet [60], and FBNet [74]. While they exploit the redundancy in filters to reduce the number of parameters and FLOPs, they suffer from increased memory access when increasing the network width to compensate for the accuracy drop. By contrast, we consider the redundancy in feature maps and propose a partial convolution to reduce FLOPs and memory access _simultaneously_.
**ViT, MLP, and variants.** There is a growing interest in studying ViT ever since Dosovitskiy et al. [12] expanded the application scope of transformers [69] from machine translation [69] or forecasting [73] to the computer vision field. Many follow-up works have attempted to improve ViT in terms of training setting [58, 65, 66] and model design [15, 40, 41, 72, 85]. One notable trend is to pursue a better accuracy-latency trade-off by reducing the complexity of the attention operator [1, 29, 45, 63, 68], incorporating convolution into ViTs [6, 10, 57], or doing both [3, 34, 49, 52]. Besides, other studies [5, 64, 35] propose to replace the attention with simple MLP-based operators. However, they often evolve to be CNN-like [39]. In this paper, we focus on analyzing the convolution operations, particularly DWConv, due to the following reasons: First, the advantage of attention over convolution is unclear or debatable [42, 71]. Second, the attention-based mechanism generally runs slower than its convolutional counterparts and thus becomes less favorable for the current industry [26, 48]. Finally, DWConv is still a popular choice in many hybrid models, so it is worth a careful examination.
## 3 Design of PConv and FasterNet
In this section, we first revisit DWConv and analyze the issue with its frequent memory access. We then introduce PConv as a competitive alternative operator to resolve the issue. After that, we introduce FasterNet and explain its details, including design considerations.
### Preliminary
DWConv is a popular variant of Conv and has been widely adopted as a key building block for many neural networks. For an input \(\mathbf{I}\in\mathbb{R}^{c\times h\times w}\), DWConv applies \(c\) filters \(\mathbf{W}\in\mathbb{R}^{k\times k}\) to compute the output \(\mathbf{O}\in\mathbb{R}^{c\times h\times w}\). As shown in Fig. 1(b), each filter slides spatially on one input channel and contributes to one output channel. This depthwise computation makes DWConv have as low FLOPs as \(h\times w\times k^{2}\times c\) compared to a regular Conv with \(h\times w\times k^{2}\times c^{2}\). While effective in reducing FLOPs, a DWConv, which is typically followed by a pointwise convolution, or PWConv, cannot be simply used to replace a regular Conv as it would incur a severe accuracy drop. Thus, in practice the channel number \(c\) (or the network width) of DWConv is increased to \(c^{\prime}\left(c^{\prime}>c\right)\) to compensate for the accuracy drop, _e.g_., the width is expanded by six times for the DWConv in the inverted residual blocks [54]. This, however, results in much higher memory access that can cause non-negligible delay and slow down the overall computation, especially for I/O-bound devices. In particular, the number of memory access now escalates to
\[h\times w\times 2c^{\prime}+k^{2}\times c^{\prime}\approx h\times w\times 2c^{ \prime}, \tag{2}\]
which is higher than that of a regular Conv,,
\[h\times w\times 2c+k^{2}\times c^{2}\approx h\times w\times 2c. \tag{3}\]
Note that the \(h\times w\times 2c^{\prime}\) memory access is spent on the I/O operation, which is deemed to be already the minimum cost and hard to optimize further.
### Partial convolution as a basic operator
We below demonstrate that the cost can be further optimized by leveraging the feature maps' redundancy. As visualized in Fig. 3, the feature maps share high similarities among different channels. This redundancy has also
Figure 3: Visualization of feature maps in an intermediate layer of a pre-trained ResNet50, with the top-left image as the input. Qualitatively, we can see the high redundancies across different channels.
been covered in many other works [17, 82], but few of them make full use of it in a simple yet effective way.
Specifically, we propose a simple PConv to reduce computational redundancy and memory access simultaneously. The bottom-left corner in Fig. 4 illustrates how our PConv works. It simply applies a regular Conv on only a part of the input channels for spatial feature extraction and leaves the remaining channels untouched. For contiguous or regular memory access, we consider the first or last consecutive \(c_{p}\) channels as the representatives of the whole feature maps for computation. Without loss of generality, we consider the input and output feature maps to have the same number of channels. Therefore, the FLOPs of a PConv are only
\[h\times w\times k^{2}\times c_{p}^{2}. \tag{4}\]
With a typical partial ratio \(r=\frac{c_{p}}{c}=\frac{1}{4}\), the FLOPs of a PConv is only \(\frac{1}{16}\) of a regular Conv. Besides, PConv has a smaller amount of memory access, _i.e_.,
\[h\times w\times 2c_{p}+k^{2}\times c_{p}^{2}\approx h\times w\times 2c_{p}, \tag{5}\]
which is only \(\frac{1}{4}\) of a regular Conv for \(r=\frac{1}{4}\).
Since there are only \(c_{p}\) channels utilized for spatial feature extraction, one may ask whether we can simply remove the remaining \((c-c_{p})\) channels. If so, PConv would degrade to a regular Conv with fewer channels, which deviates from our objective to reduce redundancy. Note that we keep the remaining channels untouched instead of removing them from the feature maps. This is because they are useful for a subsequent PWConv layer, which allows the feature information to flow through all channels.
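A minimal PyTorch sketch of PConv follows; the split/concatenate formulation keeps the untouched channels intact and differentiable (the official implementation is available in the released code, so this is only illustrative).

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: a regular k x k Conv on the first
    c_p = r * c channels; the remaining channels pass through untouched."""
    def __init__(self, c: int, r: float = 0.25, k: int = 3):
        super().__init__()
        self.cp = int(c * r)
        self.conv = nn.Conv2d(self.cp, self.cp, k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.cp, x.size(1) - self.cp], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)
```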
### PConv followed by PWConv
To fully and efficiently leverage the information from all channels, we further append a pointwise convolution (PWConv) to our PConv. Their effective receptive field together on the input feature maps looks like a T-shaped Conv, which focuses more on the center position compared to a regular Conv uniformly processing a patch, as shown in Fig. 5. To justify this T-shaped receptive field, we first evaluate the importance of each position by calculating the position-wise Frobenius norm. We assume that a position tends to be more important if it has a larger Frobenius norm than other positions. For a regular Conv filter \(\mathbf{F}\in\mathbb{R}^{k^{2}\times c}\), the Frobenius norm at position \(i\) is calculated by \(\|\mathbf{F}_{i}\|=\sqrt{\sum_{j=1}^{c}|f_{ij}|^{2}}\), for \(i=1,2,3...,k^{2}\). We consider a salient position to be the one with the maximum Frobenius norm. We then collectively examine each filter in a pre-trained ResNet18, find out their salient positions, and plot a histogram of the salient positions. Results in Fig. 6 show that the center position turns out to be the salient position most frequently among the filters. In other words, the center position weighs more than its surrounding neighbors. This is consistent with the T-shaped computation which concentrates on the center position.
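This analysis is easy to reproduce for any pretrained network; a sketch with our own naming:

```python
import torch

def salient_positions(weight: torch.Tensor) -> torch.Tensor:
    """weight: Conv2d weight of shape (c_out, c_in, k, k). Returns the
    spatial position (1..k*k) with the largest Frobenius norm across
    input channels for each filter, ready for histogramming."""
    c_out, _, k, _ = weight.shape
    norms = weight.pow(2).sum(dim=1).sqrt().reshape(c_out, k * k)
    return norms.argmax(dim=1) + 1
```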
While the T-shaped Conv can be directly used for efficient computation, we show that it is better to decompose the T-shaped Conv into a PConv and a PWConv because the decomposition exploits the inter-filter redundancy and further saves FLOPs. For the same input \(\mathbf{I}\in\mathbb{R}^{c\times h\times w}\) and output \(\mathbf{O}\in\mathbb{R}^{c\times h\times w}\), a T-shaped Conv's FLOPs can be calculated as
\[h\times w\times\left(k^{2}\times c_{p}\times c+c\times(c-c_{p})\right), \tag{6}\]
which is higher than the FLOPs of a PConv and a PWConv, _i.e_.,
\[h\times w\times(k^{2}\times c_{p}^{2}+c^{2}), \tag{7}\]
where \((k^{2}-1)c>k^{2}c_{p}\), _e.g_. when \(c_{p}=\frac{c}{4}\) and \(k=3\). Besides, we can readily leverage the regular Conv for the two-step implementation.
### FasterNet as a general backbone
Given our novel PConv and off-the-shelf PWConv as the primary building operators, we further propose FasterNet, a new family of neural networks that runs favorably fast and is highly effective for many vision tasks. We aim to keep the architecture as simple as possible, without bells and whistles, to make it hardware-friendly in general.
We present the overall architecture in Fig. 4. It has four hierarchical stages, each of which is preceded by an embedding layer (a regular Conv \(4\times 4\) with stride 4) or a merging layer (a regular Conv \(2\times 2\) with stride 2) for spatial downsampling and channel number expanding. Each stage has a stack of FasterNet blocks. We observe that the blocks in the last two stages consume less memory access and tend to have higher FLOPS, as empirically validated in Tab. 1. Thus, we put more FasterNet blocks and correspondingly assign more computations to the last two stages. Each FasterNet block has a PConv layer followed by two PWConv (or Conv \(1\times 1\)) layers. Together, they appear as inverted residual blocks where the middle layer has an expanded number of channels, and a shortcut connection is placed to reuse the input features.
In addition to the above operators, the normalization and activation layers are also indispensable for high-performing neural networks. Many prior works [17, 20, 54], however, overuse such layers throughout the network, which may limit the feature diversity and thus hurt the performance. It can also slow down the overall computation. By contrast, we put them only after each middle PWConv to preserve the feature diversity and achieve lower latency. Besides, we use the batch normalization (BN) [30] instead of other alternative ones [75, 2, 67]. The benefit of BN is that it can be merged into its adjacent Conv layers for faster inference
while being as effective as the others. As for the activation layers, we empirically choose GELU [22] for smaller FasterNet variants and ReLU [51] for bigger FasterNet variants, considering both running time and effectiveness. The last three layers, _i.e_. a global average pooling, a Conv \(1\times 1\), and a fully-connected layer, are used together for feature transformation and classification.
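Putting the pieces together, a FasterNet block can be sketched as follows, reusing the PConv sketch above; the expansion ratio of 2 is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class FasterNetBlock(nn.Module):
    """PConv followed by two pointwise convs in an inverted-residual
    layout, with BN and activation only after the middle PWConv."""
    def __init__(self, c: int, r: float = 0.25, expand: int = 2):
        super().__init__()
        self.pconv = PConv(c, r)  # from the PConv sketch above
        self.mlp = nn.Sequential(
            nn.Conv2d(c, expand * c, 1, bias=False),
            nn.BatchNorm2d(expand * c),
            nn.GELU(),
            nn.Conv2d(expand * c, c, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))  # shortcut reuses input features
```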
To serve a wide range of applications under different computational budgets, we provide tiny, small, medium, and large variants of FasterNet, referred to as FasterNet-T0/1/2, FasterNet-S, FasterNet-M, and FasterNet-L, respectively. They share a similar architecture but vary in depth and width. Detailed architecture specifications are provided in the appendix.
## 4 Experimental Results
We first examine the computational speed of our PConv and its effectiveness when combined with a PWConv. We then comprehensively evaluate the performance of our FasterNet for classification, detection, and segmentation tasks. Finally, we conduct a brief ablation study.
To benchmark the latency and throughput, we choose the following three typical processors, which cover a wide range of computational capacity: GPU (2080Ti), CPU (Intel i9-9900X, using a single thread), and ARM (Cortex-A72, using a single thread). We report their latency for inputs with a batch size of 1 and throughput for inputs with a batch size of 32. During inference, the BN layers are merged to their adjacent layers wherever applicable.
### PConv is fast with high FLOPS
We below show that our PConv is fast and better exploits the on-device computational capacity. Specifically, we stack 10 layers of pure PConv and take feature maps of typical dimensions as inputs. We then measure FLOPs and latency/throughput on GPU, CPU, and ARM processors, which also allow us to further compute FLOPS. We repeat the same procedure for other convolutional variants and make comparisons.
Results in Tab. 1 show that PConv is overall an appealing choice for high FLOPS with reduced FLOPs. It has only \(\frac{1}{16}\) FLOPs of a regular Conv and achieves \(10.5\times\), \(6.2\times\), and \(22.8\times\) higher FLOPS than the DWConv on GPU, CPU, and ARM, respectively. We are unsurprised to see that the regular Conv has the highest FLOPS as it has been constantly optimized for years. However, its total FLOPs and
Figure 4: Overall architecture of our FasterNet. It has four hierarchical stages, each with a stack of FasterNet blocks and preceded by an embedding or merging layer. The last three layers are used for feature classification. Within each FasterNet block, a PConv layer is followed by two PWConv layers. We put normalization and activation layers only after the middle layer to preserve the feature diversity and achieve lower latency.
Figure 5: Comparison of convolutional variants. A PConv followed by a PWConv (a) resembles a T-shaped Conv (b), which spends more computation on the center position compared to a regular Conv (c).
Figure 6: Histogram of salient position distribution for the regular Conv \(3\times 3\) filters in a pre-trained ResNet18. The histogram contains four kinds of bars, corresponding to different stages in the network. In all stages, the center position (position 5) appears as a salient position most frequently.
latency/throughput are unaffordable. GConv and DWConv, despite their significant reduction in FLOPs, suffer from a drastic decrease in FLOPS. In addition, they tend to increase the number of channels to compensate for the performance drop, which, however, increases their latency.
### PConv is effective together with PWConv
We next show that a PConv followed by a PWConv is effective in approximating a regular Conv to transform the feature maps. To this end, we first build four datasets by feeding the ImageNet-1k val split images into a pre-trained ResNet50, and extract the feature maps before and after the first Conv \(3\times 3\) in each of the four stages. Each feature map dataset is further split into the train (70%), val (10%), and test (20%) subsets. We then build a simple network consisting of a PConv followed by a PWConv and train it on the feature map datasets with a mean squared error loss. For comparison, we also build and train networks for DWConv + PWConv and GConv + PWConv under the same setting.
Tab. 2 shows that PConv + PWConv achieve the lowest test loss, meaning that they better approximate a regular Conv in feature transformation. The results also suggest that it is sufficient and efficient to capture spatial features from only a part of the feature maps. PConv shows a great potential to be the new go-to choice in designing fast and effective neural networks.
### FasterNet on ImageNet-1k classification
To verify the effectiveness and efficiency of our FasterNet, we first conduct experiments on the large-scale ImageNet-1k classification dataset [53]. It covers 1k categories of common objects and contains about 1.3M labeled images for training and 50k labeled images for validation. We train our models for 300 epochs using the AdamW optimizer [44]. We set the batch size to 2048 for FasterNet-M/L and 4096 for the other variants. We use a cosine learning rate scheduler [43] with a peak value of \(0.001\cdot\text{batch size}/1024\) and a 20-epoch linear warmup. We apply commonly-used regularization and augmentation techniques, including Weight Decay [32], Stochastic Depth [28], Label Smoothing [59], Mixup [81], Cutmix [80] and Rand Augment [9], with varying magnitudes for different FasterNet variants. To reduce the training time, we use \(192\times 192\) resolution for the first 280 training epochs and \(224\times 224\) for the remaining 20 epochs. For a fair comparison, we do not use knowledge distillation [23] or neural architecture search [87]. We report our top-1 accuracy on the validation set with a center crop at \(224\times 224\) resolution and a 0.9 crop ratio. Detailed training and validation settings are provided in the appendix.
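The schedule described above can be summarized by the small sketch below; the linear warmup shape and cosine decay to zero are assumptions consistent with the stated peak value and 20-epoch warmup.

```python
import math

def lr_at(epoch: int, batch_size: int, total_epochs: int = 300,
          warmup_epochs: int = 20, base_lr: float = 0.001) -> float:
    """Peak LR scales linearly with batch size; linear warmup, then cosine decay."""
    peak = base_lr * batch_size / 1024
    if epoch < warmup_epochs:
        return peak * (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * peak * (1.0 + math.cos(math.pi * t))
```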
Fig. 7 and Tab. 3 demonstrate the superiority of our FasterNet over state-of-the-art classification models. The trade-off curves in Fig. 7 clearly show that FasterNet sets the new state-of-the-art in balancing accuracy and latency/throughput among all the networks examined.
\begin{table}
\begin{tabular}{c|c|c|c c|c c|c c} \hline \hline \multirow{2}{*}{Operator} & \multirow{2}{*}{\begin{tabular}{c} Feature map \\ size \\ \end{tabular}} & \multirow{2}{*}{\begin{tabular}{c} FLOPs (M), \\ \(\times 10\) layers \\ \end{tabular}} & \multicolumn{2}{c|}{GPU} & \multicolumn{2}{c|}{CPU} & \multicolumn{2}{c}{ARM} \\
 & & & Throughput (fps) & FLOPS (G/s) & Latency (ms) & FLOPS (G/s) & Latency (ms) & FLOPS (G/s) \\ \hline
\multirow{5}{*}{Conv 3\(\times\)3} & 96\(\times\)56\(\times\)56 & 2601 & 3010 & 7824 & 35.67 & 72.90 & 779.57 & 3.33 \\
 & 192\(\times\)28\(\times\)28 & 2601 & 4893 & 12717 & 28.41 & 91.53 & 619.64 & 4.19 \\
 & 384\(\times\)14\(\times\)14 & 2601 & 4558 & 11854 & 31.85 & 81.66 & 595.09 & 4.37 \\
 & 768\(\times\)7\(\times\)7 & 2601 & 3159 & 8212 & 62.71 & 41.47 & 662.17 & 3.92 \\
 & Average & - & - & 10151 & - & 71.89 & - & 3.95 \\ \hline
\multirow{5}{*}{\begin{tabular}{c} GConv 3\(\times\)3 \\ (16 groups) \\ \end{tabular}} & 96\(\times\)56\(\times\)56 & 162 & 2888 & 469 & 21.90 & 7.42 & 166.30 & 0.97 \\
 & 192\(\times\)28\(\times\)28 & 162 & 10811 & 1754 & 7.58 & 21.44 & 96.22 & 1.68 \\
 & 384\(\times\)14\(\times\)14 & 162 & 15534 & 2514 & 4.40 & 36.88 & 63.57 & 2.55 \\
 & 768\(\times\)7\(\times\)7 & 162 & 16000 & 2598 & 4.28 & 37.97 & 65.20 & 2.49 \\
 & Average & - & - & 1833 & - & 25.93 & - & 1.92 \\ \hline
\multirow{5}{*}{DWConv 3\(\times\)3} & 96\(\times\)56\(\times\)56 & 27.09 & 11940 & 323 & 3.59 & 7.52 & 108.70 & 0.24 \\
 & 192\(\times\)28\(\times\)28 & 13.54 & 23358 & 315 & 1.97 & 6.86 & 82.01 & 0.16 \\
 & 384\(\times\)14\(\times\)14 & 6.77 & 46377 & 313 & 1.06 & 6.35 & 94.89 & 0.07 \\
 & 768\(\times\)7\(\times\)7 & 3.38 & 88889 & 302 & 0.68 & 4.93 & 150.89 & 0.02 \\
 & Average & - & - & 313 & - & 6.42 & - & 0.12 \\ \hline
\multirow{5}{*}{\begin{tabular}{c} PConv 3\(\times\)3 \\ (ours, with \(r=\frac{1}{4}\)) \\ \end{tabular}} & 96\(\times\)56\(\times\)56 & 162 & 9215 & 1493 & 5.46 & 29.67 & 85.30 & 1.90 \\
 & 192\(\times\)28\(\times\)28 & 162 & 14360 & 2326 & 3.09 & 52.43 & 66.46 & 2.44 \\
 & 384\(\times\)14\(\times\)14 & 162 & 24408 & 3954 & 3.58 & 45.25 & 49.98 & 3.24 \\
 & 768\(\times\)7\(\times\)7 & 162 & 32866 & 5324 & 5.02 & 32.27 & 48.30 & 3.35 \\
 & Average & - & - & 3274 & - & 39.91 & - & 2.73 \\ \hline \hline \end{tabular}
\end{table}
Table 1: On-device FLOPS for different operations. PConv appears as an appealing choice for high FLOPS with reduced FLOPs.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Stage & DWConv+PWConv & \begin{tabular}{c} GConv+PWConv \\ (16 groups) \\ \end{tabular} & \begin{tabular}{c} PConv+PWConv \\ (\(r=\frac{1}{4}\)) \\ \end{tabular} \\ \hline
1 & 0.0089 & 0.0065 & 0.0069 \\
2 & 0.0158 & 0.0137 & 0.0136 \\
3 & 0.0214 & 0.0202 & 0.0172 \\
4 & 0.0130 & 0.0128 & 0.0115 \\ \hline \hline Average & 0.0148 & 0.0133 & 0.0123 \\ \hline \hline \end{tabular}
\end{table}
Table 2: A PConv followed by a PWConv well approximates the regular Conv \(3\times 3\) at different stages of a pre-trained ResNet50. PConv + PWConv together have the lowest test loss on average.
From another perspective, FasterNet runs faster than various CNN, ViT and MLP models on a wide range of devices, when having similar top-1 accuracy. As quantitatively shown in Tab. 3, FasterNet-T0 is \(2.8\times\), \(3.3\times\), and \(2.4\times\) faster than MobileViT-XXS [48] on GPU, CPU, and ARM processors, respectively, while being 2.9% more accurate. Our large FasterNet-L achieves 83.5% top-1 accuracy, comparable to the emerging Swin-B [41] and ConvNeXt-B [42], while having 36% and 28% higher inference throughput on GPU, as well as saving 37% and 15% compute time on CPU. Given such promising results, we highlight that our FasterNet is much simpler than many other models in terms of architectural design, which showcases the feasibility of designing simple yet powerful neural networks.
### FasterNet on downstream tasks
To further evaluate the generalization ability of FasterNet, we conduct experiments on the challenging COCO dataset [36] for object detection and instance segmentation. As a common practice, we employ the ImageNet pre-trained FasterNet as a backbone and equip it with the popular Mask R-CNN detector [19]. To highlight the effectiveness of the backbone itself, we simply follow PoolFormer [79] and adopt an AdamW optimizer, a \(1\times\) training schedule (12 epochs), a batch size of 16, and other training settings without further hyper-parameter tuning.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Network & \begin{tabular}{c} Params \\ (M) \\ \end{tabular} & \begin{tabular}{c} FLOPs \\ (G) \\ \end{tabular} & \begin{tabular}{c} Throughput \\ on GPU \\ (fps) \(\uparrow\) \\ \end{tabular} & \begin{tabular}{c} Latency \\ on CPU \\ (ms) \(\downarrow\) \\ \end{tabular} & \begin{tabular}{c} Latency \\ on ARM \\ (ms) \(\downarrow\) \\ \end{tabular} & \begin{tabular}{c} Acc. \\ (\%) \\ \end{tabular} \\ \hline ShuffleNetV2 \(\times\)1.5 [46] & 3.5 & 0.30 & 4878 & 12.1 & 266 & 72.6 \\ MobileNetV2 [54] & 3.5 & 0.31 & 4198 & 12.2 & 442 & 72.0 \\ MobileViT-XXS [48] & 1.3 & 0.42 & 2393 & 30.8 & 348 & 69.0 \\ EdgeNeXt-XXS [47] & 1.3 & 0.26 & 2765 & 15.7 & 239 & 71.2 \\ FasterNet-T0 & 3.9 & 0.34 & 6807 & 9.2 & 143 & 71.9 \\ \hline GhostNet \(\times\)1.3 [17] & 7.4 & 0.24 & 2988 & 17.9 & 481 & 75.7 \\ ShuffleNetV2 \(\times\)2 [46] & 7.4 & 0.59 & 3339 & 17.8 & 403 & 74.9 \\ MobileNetV2 \(\times\)1.4 [54] & 6.1 & 0.60 & 2711 & 22.6 & 650 & 74.7 \\ MobileViT-XS [48] & 2.3 & 1.05 & 1392 & 40.8 & 648 & 74.8 \\ EdgeNeXt-XS [47] & 2.3 & 0.54 & 1738 & 24.4 & 434 & 75.0 \\ PVT-Tiny [72] & 13.2 & 1.94 & 1266 & 55.6 & 708 & 75.1 \\ FasterNet-T1 & 7.6 & 0.85 & 3782 & 17.7 & 285 & 76.2 \\ \hline CycleMLP-B1 [5] & 15.2 & 2.10 & 865 & 116.1 & 892 & 79.1 \\ PoolFormer-S12 [79] & 11.9 & 1.82 & 1439 & 49.0 & 665 & 77.2 \\ MobileViT-S [48] & 5.6 & 2.03 & 1039 & 56.7 & 941 & 78.4 \\ EdgeNeXt-S [47] & 5.6 & 1.26 & 1128 & 39.2 & 743 & 79.4 \\ ResNet50 [20, 42] & 25.6 & 4.11 & 959 & 73.0 & 1131 & 78.8 \\ FasterNet-T2 & 15.0 & 1.91 & 1991 & 33.5 & 497 & 78.9 \\ \hline CycleMLP-B2 [5] & 26.8 & 3.90 & 528 & 186.3 & 1502 & 81.6 \\ PoolFormer-S24 [79] & 21.4 & 3.41 & 748 & 92.8 & 1261 & 80.3 \\ PoolFormer-S36 [79] & 30.9 & 5.00 & 507 & 138.0 & 1860 & 81.4 \\ ConvNeXt-T [42] & 28.6 & 4.47 & 657 & 86.3 & 1889 & 82.1 \\ Swin-T [41] & 28.3 & 4.51 & 609 & 122.2 & 1424 & 81.3 \\ PVT-Small [72] & 24.5 & 3.83 & 689 & 89.6 & 1345 & 79.8 \\ PVT-Medium [72] & 44.2 & 6.69 & 438 & 143.6 & 2142 & 81.2 \\ FasterNet-S & 31.1 & 4.56 & 1029 & 71.2 & 1103 & 81.3 \\ \hline PoolFormer-M36 [79] & 56.2 & 8.80 & 320 & 215.0 & 2979 & 82.1 \\ ConvNeXt-S [42] & 50.2 & 8.71 & 377 & 153.2 & 3484 & 83.1 \\ Swin-S [41] & 49.6 & 8.77 & 348 & 224.2 & 2613 & 83.0 \\ PVT-Large [72] & 61.4 & 9.85 & 306 & 203.4 & 3101 & 81.7 \\ FasterNet-M & 53.5 & 8.74 & 500 & 129.5 & 2092 & 83.0 \\ \hline PoolFormer-M48 [79] & 73.5 & 11.59 & 242 & 281.8 & OOM & 82.5 \\ ConvNeXt-B [42] & 88.6 & 15.38 & 253 & 257.1 & OOM & 83.8 \\ Swin-B [41] & 87.8 & 15.47 & 237 & 349.2 & OOM & 83.5 \\ FasterNet-L & 93.5 & 15.52 & 323 & 219.5 & OOM & 83.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison on ImageNet-1k benchmark. Models with similar top-1 accuracy are grouped together. For each group, our FasterNet achieves the highest throughput on GPU and the lowest latency on CPU and ARM. All models are evaluated at \(224\times 224\) resolution except for the MobileViT and EdgeNeXt with \(256\times 256\). OOM is short for out of memory.
Figure 7: FasterNet has the highest efficiency in balancing accuracy-throughput and accuracy-latency trade-offs for different devices. To save space and make the plots more proportionate, we showcase network variants within a certain range of latency. Full plots can be found in the appendix, which show consistent results.
Tab. 4 shows the results for the comparison between FasterNet and representative models. FasterNet consistently outperforms ResNet and ResNeXt by having higher average precision (AP) with similar latency. Specifically, FasterNet-S yields \(+1.9\) higher box AP and \(+2.4\) higher mask AP compared to the standard baseline ResNet50. FasterNet is also competitive against the ViT variants. Under similar FLOPs, FasterNet-L reduces PVT-Large's latency by 38%, _i.e_., from 152.2 ms to 93.8 ms on GPU, and achieves \(+1.1\) higher box AP and \(+0.4\) higher mask AP.
### Ablation study
We conduct a brief ablation study on the value of the partial ratio \(r\) and on the choices of activation and normalization layers. We compare different variants in terms of ImageNet top-1 accuracy and on-device latency/throughput. Results are summarized in Tab. 5. For the partial ratio \(r\), we set it to \(\frac{1}{4}\) for all FasterNet variants by default, which achieves higher accuracy, higher throughput, and lower latency at similar complexity. A too large partial ratio \(r\) would make PConv degrade to a regular Conv, while a too small value would render PConv less effective in capturing the spatial features. For the normalization layers, we choose BatchNorm over LayerNorm because BatchNorm can be merged into its adjacent convolutional layers for faster inference while being as effective as LayerNorm in our experiments. For the activation function, interestingly, we empirically found that GELU fits the FasterNet-T0/T1 models better than ReLU, whereas the opposite holds for FasterNet-T2/S/M/L. Here we only show two examples in Tab. 5 due to space constraints. We conjecture that GELU strengthens FasterNet-T0/T1 by having higher non-linearity, while the benefit fades away for larger FasterNet variants.
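The BN merging mentioned above is the standard convolution-BN folding identity; a minimal PyTorch sketch (our own, for illustration) is:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta into one conv."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # per output channel
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```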
## 5 Conclusion
In this paper, we have investigated the common and unresolved issue that many established neural networks suffer from low floating-point operations per second (FLOPS). We have revisited a bottleneck operator, DWConv, and analyzed its main cause for a slowdown - frequent memory access. To overcome the issue and achieve faster neural networks, we have proposed a simple yet fast and effective operator, PConv, that can be readily plugged into many existing networks. We have further introduced our general-purpose FasterNet, built upon our PConv, that achieves state-of-the-art speed and accuracy trade-off on various devices and vision tasks. We hope that our PConv and FasterNet would inspire more research on simple yet effective neural networks, going beyond academia to impact the industry and community directly.
Acknowledgement. This work was supported, in part, by Hong Kong General Research Fund under grant number 16200120. The work of C.-H. Lee was supported, in part, by the NSF under Grant IIS-2209921.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Backbone & \begin{tabular}{c} Params \\ (M) \\ \end{tabular} & \begin{tabular}{c} FLOPs \\ (G) \\ \end{tabular} & \begin{tabular}{c} Latency on \\ GPU (ms) \\ \end{tabular} & \(AP^{b}\) & \(AP^{b}_{50}\) & \(AP^{b}_{75}\) & \(AP^{m}\) & \(AP^{m}_{50}\) & \(AP^{m}_{75}\) \\ \hline ResNet50 [20] & 44.2 & 253 & 54.9 & 38.0 & 58.6 & 41.4 & 34.4 & 55.1 & 36.7 \\ PoolFormer-S24 [79] & 41.0 & 233 & 111.0 & 40.1 & 62.2 & 43.4 & 37.0 & 59.1 & 39.6 \\ PVT-Small [72] & 44.1 & 238 & 89.5 & 40.4 & 62.9 & 43.8 & 37.8 & 60.1 & 40.3 \\ FasterNet-S & 49.0 & 258 & 54.3 & 39.9 & 61.2 & 43.6 & 36.9 & 58.1 & 39.7 \\ \hline ResNet101 [20] & 63.2 & 329 & 68.9 & 40.4 & 61.1 & 44.2 & 36.4 & 57.7 & 38.8 \\ ResNeXt101-32\(\times\)4d [77] & 62.8 & 333 & 80.5 & 41.9 & 62.5 & 45.9 & 37.5 & 59.4 & 40.2 \\ PoolFormer-S36 [79] & 50.5 & 266 & 146.9 & 41.0 & 63.1 & 44.8 & 37.7 & 60.1 & 40.0 \\ PVT-Medium [72] & 63.9 & 295 & 117.3 & 42.0 & 64.4 & 45.6 & 39.0 & 61.6 & 42.1 \\ FasterNet-M & 71.2 & 344 & 71.4 & 43.0 & 64.4 & 47.4 & 39.1 & 61.5 & 42.3 \\ \hline ResNeXt101-64\(\times\)4d [77] & 101.9 & 487 & 112.9 & 42.8 & 63.8 & 47.3 & 38.4 & 60.6 & 41.3 \\ PVT-Large [72] & 81.0 & 358 & 152.2 & 42.9 & 65.0 & 46.6 & 39.5 & 61.9 & 42.5 \\ FasterNet-L & 110.9 & 484 & 93.8 & 44.0 & 65.6 & 48.2 & 39.9 & 62.3 & 43.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on COCO object detection and instance segmentation benchmarks. FLOPs are calculated with image size (1280, 800).
## Appendix
In this appendix, we provide further details on the experimental settings, full comparison plots, architectural configurations, PConv implementations, comparisons with related work, limitations, and future work.
## Appendix A ImageNet-1k experimental settings
We provide the ImageNet-1k training and evaluation settings in Tab. 6. They can be used for reproducing our main results in Tab. 3 and Fig. 7. Different FasterNet variants vary in the magnitude of regularization and augmentation techniques. The magnitude increases as the model becomes larger to alleviate overfitting and improve accuracy. Note that most of the compared works in Tab. 3 and Fig. 7, _e.g_., MobileViT, EdgeNeXt, PVT, CycleMLP, ConvNeXt, Swin, etc., also adopt such advanced training techniques (ADT). Some even heavily rely on hyper-parameter search. For the others w/o ADT, _i.e_., ShuffleNetV2, MobileNetV2, and GhostNet, though the comparison is not totally fair, we include them for reference.
## Appendix B Downstream tasks experimental settings
For object detection and instance segmentation on the COCO2017 dataset, we equip our FasterNet backbone with the popular Mask R-CNN detector. We use ImageNet-1k pre-trained weights to initialize the backbone and Xavier to initialize the add-on layers. Detailed settings are summarized in Tab. 7.
## Appendix C Full comparison plots on ImageNet-1k
Fig. 8 shows the full comparison plots on ImageNet-1k, which is the extension of Fig. 7 in the main paper with a larger range of latency. Fig. 8 shows consistent results that FasterNet strikes better trade-offs than others in balancing accuracy and latency/throughput on GPU, CPU, and ARM processors.
## Appendix D Detailed architectural configurations
We present the detailed architectural configurations in Tab. 8. While different FasterNet variants share a unified architecture, they vary in the network width (the number of channels) and network depth (the number of FasterNet blocks at each stage). The classifier at the end of the architecture is used for classification tasks but removed for other downstream tasks.
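Putting the configurations together, one FasterNet block from Tab. 8 and Fig. 4 can be sketched in PyTorch as follows; it reuses the PConv module sketched earlier, and the residual shortcut is an assumption based on Fig. 4 rather than something spelled out in the text.

```python
import torch.nn as nn

class FasterNetBlock(nn.Module):
    """PConv -> PWConv (expand 2x) -> BN -> act -> PWConv (project back)."""
    def __init__(self, c: int, r: float = 1 / 4, act=nn.ReLU):
        super().__init__()
        self.mixer = nn.Sequential(
            PConv(c, kernel_size=3, r=r),
            nn.Conv2d(c, 2 * c, kernel_size=1, bias=False),
            nn.BatchNorm2d(2 * c),    # norm/act only after the middle layer
            act(),
            nn.Conv2d(2 * c, c, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return x + self.mixer(x)      # shortcut as drawn in Fig. 4 (assumed)
```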
## Appendix E Comparisons with related work

Prior works generally follow existing operators and try to find their proper configurations, _e.g_., RepLKNet [11] simply increases the kernel size while TRT-ViT [76] reorders different blocks in the architecture. By contrast, this paper advances the field by proposing a novel and efficient PConv, opening up new directions and potentially larger room for FLOPS improvement.
PConv vs. GConv.PConv is schematically equivalent to a modified GConv [31] that operates on a single group and leaves other groups untouched. Though simple, such a modification remains unexplored before. It's also significant in the sense that it prevents the operator from excessive memory access and is computationally more efficient. From the perspective of low-rank approximations, PConv improves GConv by further reducing the intra-filter redundancy beyond the inter-filter redundancy [16].
FasterNet vs. ConvNeXt.Our FasterNet appears similar to ConvNeXt [42] after substituting DWConv with our PConv. However, they are different in motivations. While ConvNeXt searches for a better structure by trial and error, we append PWConv after PConv to better aggregate information from all channels. Moreover, ConvNeXt follows ViT to use fewer activation functions, while we intentionally remove them from the middle of PConv and PWConv, to minimize their error in approximating a regular Conv.
Figure 8: Comparison of FasterNet with state-of-the-art networks. FasterNet consistently achieves better accuracy-throughput (the top plot) and accuracy-latency (the middle and bottom plots) trade-offs than others.
Other paradigms for efficient inference.Our work focuses on efficient network design, orthogonal to the other paradigms, _e.g._, neural architecture search (NAS) [13], network pruning [50], and knowledge distillation [23]. They can be applied in this paper for better performance. However, we opt not to do so to keep our core idea centered and to make the performance gain clear and fair.
Other partial/masked convolution works.There are several works [14, 37, 38] sharing similar names with our PConv. However, they differ a lot in objectives and methods. For example, they apply filters on partial pixels to exclude invalid patches [38], enable self-supervised learning [14], or synthesize novel images [37], while we target the channel dimension for efficient inference.
## Appendix F Limitations and future work
We have demonstrated that PConv and FasterNet are fast and effective, being competitive with existing operators and networks. Yet there are some minor technical limitations of this paper. For one thing, PConv is designed to apply a regular convolution on only a part of the input channels while leaving the remaining ones untouched. Thus, the stride of the partial convolution should always be 1, in order to align the spatial resolution of the convolutional output with that of the untouched channels. Note that it is still feasible to down-sample the spatial resolution, as there can be additional downsampling layers in the architecture.
\begin{table}
\begin{tabular}{l|c|c|c|c c c c c c} \hline \hline Name & Output size & Layer specification & & T0 & T1 & T2 & S & M & L \\ \hline Embedding & \(\frac{h}{4}\times\frac{w}{4}\) & Conv\_4\_\(c\)\_4, BN & \# Channels \(c\) & 40 & 64 & 96 & 128 & 144 & 192 \\ \hline Stage 1 & \(\frac{h}{4}\times\frac{w}{4}\) & [PConv\_3\_\(c\)\_1\_1/4, Conv\_1\_\(2c\)\_1, BN, Acti, Conv\_1\_\(c\)\_1] \(\times b_{1}\) & \# Blocks \(b_{1}\) & 1 & 1 & 1 & 1 & 3 & 3 \\ \hline Merging & \(\frac{h}{8}\times\frac{w}{8}\) & Conv\_2\_\(2c\)\_2, BN & \# Channels \(2c\) & 80 & 128 & 192 & 256 & 288 & 384 \\ \hline Stage 2 & \(\frac{h}{8}\times\frac{w}{8}\) & [PConv\_3\_\(2c\)\_1\_1/4, Conv\_1\_\(4c\)\_1, BN, Acti, Conv\_1\_\(2c\)\_1] \(\times b_{2}\) & \# Blocks \(b_{2}\) & 2 & 2 & 2 & 2 & 4 & 4 \\ \hline Merging & \(\frac{h}{16}\times\frac{w}{16}\) & Conv\_2\_\(4c\)\_2, BN & \# Channels \(4c\) & 160 & 256 & 384 & 512 & 576 & 768 \\ \hline Stage 3 & \(\frac{h}{16}\times\frac{w}{16}\) & [PConv\_3\_\(4c\)\_1\_1/4, Conv\_1\_\(8c\)\_1, BN, Acti, Conv\_1\_\(4c\)\_1] \(\times b_{3}\) & \# Blocks \(b_{3}\) & 8 & 8 & 8 & 13 & 18 & 18 \\ \hline Merging & \(\frac{h}{32}\times\frac{w}{32}\) & Conv\_2\_\(8c\)\_2, BN & \# Channels \(8c\) & 320 & 512 & 768 & 1024 & 1152 & 1536 \\ \hline Stage 4 & \(\frac{h}{32}\times\frac{w}{32}\) & [PConv\_3\_\(8c\)\_1\_1/4, Conv\_1\_\(16c\)\_1, BN, Acti, Conv\_1\_\(8c\)\_1] \(\times b_{4}\) & \# Blocks \(b_{4}\) & 2 & 2 & 2 & 2 & 3 & 3 \\ \hline Classifier & \(1\times 1\) & Global average pool, Conv\_1\_1280\_1, Acti, FC\_1000 & Acti & GELU & GELU & ReLU & ReLU & ReLU & ReLU \\ \hline \multicolumn{4}{c}{FLOPs (G)} & 0.34 & 0.85 & 1.90 & 4.55 & 8.72 & 15.49 \\ \hline \multicolumn{4}{c}{Params (M)} & 3.9 & 7.6 & 15.0 & 31.1 & 53.5 & 93.4 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Configurations of different FasterNet variants. “Conv\_k\_c\_s” means a convolutional layer with the kernel size of \(k\), the output channels of \(c\), and the stride of \(s\). “PConv\_k\_c\_s\_r” means a partial convolution with an extra parameter, the partial ratio \(r\). “FC\_1000” means a fully connected layer with 1000 output channels. \(h\times w\) is the input size while \(b_{i}\) is the number of FasterNet blocks at stage \(i\). The FLOPs are calculated given the input size of \(224\times 224\).
And for another, our FasterNet is simply built upon convolutional operators with a possibly limited receptive field. Future efforts can be made to enlarge its receptive field and combine it with other operators to pursue higher accuracy.
|
2305.16682 | Sharpend Cosine Similarity based Neural Network for Hyperspectral Image
Classification | Hyperspectral Image Classification (HSIC) is a difficult task due to high
inter and intra-class similarity and variability, nested regions, and
overlapping. 2D Convolutional Neural Networks (CNN) emerged as a viable network
whereas, 3D CNNs are a better alternative due to accurate classification.
However, 3D CNNs are highly computationally complex due to their volume and
spectral dimensions. Moreover, down-sampling and hierarchical filtering (high
frequency) i.e., texture features need to be smoothed during the forward pass
which is crucial for accurate HSIC. Furthermore, CNN requires tons of tuning
parameters which increases the training time. Therefore, to overcome the
aforesaid issues, Sharpened Cosine Similarity (SCS) concept as an alternative
to convolutions in a Neural Network for HSIC is introduced. SCS is
exceptionally parameter efficient due to skipping the non-linear activation
layers, normalization, and dropout after the SCS layer. Use of MaxAbsPool
instead of MaxPool which selects the element with the highest magnitude of
activity, even if it's negative. Experimental results on publicly available HSI
datasets proved the performance of SCS as compared to the convolutions in
Neural Networks. | Muhammad Ahmad | 2023-05-26T07:04:00Z | http://arxiv.org/abs/2305.16682v1 | # Sharpenal Cosine Similarity based Neural Network for Hyperspectral Image Classification
###### Abstract
Hyperspectral Image Classification (HSIC) is a difficult task due to high inter- and intra-class similarity and variability, nested regions, and overlapping. 2D Convolutional Neural Networks (CNNs) emerged as a viable network, whereas 3D CNNs are a better alternative due to accurate classification. However, 3D CNNs are highly computationally complex due to their volume and spectral dimensions. Moreover, down-sampling and hierarchical filtering mean that high-frequency information, i.e., texture features, is smoothed during the forward pass, although it is crucial for accurate HSIC. Furthermore, CNNs require tons of tuning parameters, which increases the training time. Therefore, to overcome the aforesaid issues, the Sharpened Cosine Similarity (SCS) concept is introduced as an alternative to convolutions in a Neural Network for HSIC. SCS is exceptionally parameter efficient due to skipping the non-linear activation layers, normalization, and dropout after the SCS layer. It uses MaxAbsPool instead of MaxPool, which selects the element with the highest magnitude of activity, even if it is negative. Experimental results on publicly available HSI datasets prove the performance of SCS as compared to convolutions in Neural Networks.
Cosine Similarity; Convolutional Neural Network (CNN); Hyperspectral Images Classification (HSIC).
## I Introduction
Hyperspectral Image Classification (HSIC) has been an extensively studied area of research for decades for many applications [1, 2, 3, 4, 5]. The HSIC task requires assigning a unique class label to a set of pixels according to the information presented in the HSI [6, 7, 8]. Early HSIC approaches are based on handcrafted features extracted from spectral (color, intensity, etc.) or spatial (texture, shape, etc.) information or their combination. Among the handcrafted features, HoG, texture, and color features are widely employed. However, classification performance largely deteriorates since these approaches are not able to extract rich semantic features from HSI.
To overcome the aforesaid limitations, several unsupervised feature-learning techniques have been proposed. Such techniques focus on learning a set of basis functions used for feature encoding of an image [6]. These approaches can learn more discriminative information and are thus suitable for representing the information. However, their discriminative performance is still limited, as these techniques do not make use of class information and hence do not represent distinguishable features among different classes well [9].
To overcome the aforesaid limitations, deep learning approaches have been proposed, especially the Convolutional Neural Network (CNN) [10, 11, 12, 13]. CNNs are able to use class information to learn discriminative features; thus the features learned by a CNN are more robust and discriminative for HSIC. However, due to the down-sampling and hierarchical filtering processes, high-frequency information (i.e., texture features) is gradually smoothed during the forward pass [14]. Thus, to some extent, the CNN architecture can be considered a low-frequency pathway for HSIC. Moreover, traditional CNN models require tons of parameters to train, which increases the training time. Therefore, to overcome the aforesaid issues, this work proposes the use of the Sharpened Cosine Similarity (SCS) concept as an alternative to convolutions in a neural network. More specifically, the following benefits can be obtained using SCS instead of convolutions in neural networks. SCS offers parameter efficiency and architectural simplicity due to the absence of non-linear activation layers, dropout, and normalization layers (such as batch normalization or layer normalization) after SCS layers. It uses MaxAbsPool instead of MaxPool, which selects the element with the highest magnitude of activity, even if it is negative.
## II Literature Review
High-level earth observation using HSI has become cutting-edge research. Hence HSIC has drawn widespread attention and brought several state-of-the-art methodologies since its emergence [15]. However, most of the early approaches only considered shallow processing which requires feature engineering prior to the classification. Moreover, shallow-level processing requires extensive parameter tuning, settings, and experience which affects the learning capabilities. Therefore, during the past decade, deep learning such as CNNs and similar methodologies have been proposed.
The works [16, 17] proposed HSIC methods based on deep pyramid residual and capsule networks. Similarly, the work [18] proposed a spectral-spatial residual network in which the residual blocks were used for identity mapping to connect 3D CLs, whereas the work [19] proposed an unsupervised HSIC method to better explore the residual CLs. The work [20] proposed a 3D CNN network for both spatial-spectral feature learning and classification. The work [21] proposed a mini-batch Graph CNN (miniGCN) which addresses the complexity of graph computation and fuses the features extracted by the CNN and the miniGCN as an end-to-end network.
From the above discussion, one can conclude that features learned by CNNs are more robust and discriminative for HSIC as compared to the other methods. However, due to the down-sampling and hierarchical filtering process, the high-frequency features are gradually smoothed during the forward pass. To some extent, these features are crucial for accurate classification; therefore, the CNN architecture is referred to as a low-frequency pathway. Moreover, parameter explosion is yet another issue that increases the computational time. To overcome the aforesaid issues, this work implements SCS as an alternative to Convolutional Layers (CLs) for HSIC. There are a number of benefits that can be obtained using SCS instead of CLs in neural networks.

For instance, SCS is parameter efficient and architecturally simple. While it does not set accuracy records, it is highly competitive in parameter efficiency. It skips the non-linear activation layers, dropout layers, and normalization layers after the SCS layer, and it uses MaxAbsPool instead of MaxPool, which selects the element with the highest magnitude of activity even if it is negative. Thus, in a nutshell, SCS compares favorably to deep CNNs in terms of the computational cost required to train a deep neural network.
## III Methodology
Let us consider an HSI \(\textbf{X}\in\mathcal{R}^{(M\times N)\times B}\), where \((M\times N)\) represents the spatial region over the Earth's surface and \(B\) denotes the number of spectral bands in the HSI. All pixels \(x_{ij}\in\textbf{X}\) are classified into \(C\) disjoint land-cover classes denoted as \(Y=(y_{1},y_{2},\ldots,y_{n})\). Each \(x_{ij}\in\textbf{X}\) (where \(i=1,2,\ldots,M\) and \(j=1,2,\ldots,N\)) defines a land-cover pixel as a spectral vector \(x_{ij}=[x_{i,j,1},x_{i,j,2},\ldots,x_{i,j,B}]\in\textbf{X}\) containing \(B\) values.
Moreover, to process the spatial information, patch extraction is carried out as a preprocessing step, where an HSI cube \(\textbf{x}_{i,j}\in\mathcal{R}^{(k\times k)\times B}\) with a neighboring region of size \(k\times k\) is centered at the target pixel \((i,j)\). Since joint spectral-spatial features can increase the discriminative power of any model, the spectral-spatial cubes \(\textbf{x}_{i,j}\in\mathcal{R}^{(k\times k)\times B}\) are extracted from the raw data and stacked into \(X\) before feature extraction. Finally, the training and test samples are selected for each class.
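A minimal NumPy sketch of this patch-extraction step is given below; reflect padding at the borders and the convention that label 0 marks unlabeled background pixels are assumptions for illustration.

```python
import numpy as np

def extract_patches(X: np.ndarray, y: np.ndarray, k: int = 15):
    """Cut k x k spatial patches centered at each labeled pixel of an HSI cube X."""
    m = k // 2
    Xp = np.pad(X, ((m, m), (m, m), (0, 0)), mode="reflect")
    patches, labels = [], []
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            if y[i, j] > 0:                      # 0 = unlabeled background (assumed)
                patches.append(Xp[i:i + k, j:j + k, :])
                labels.append(y[i, j] - 1)
    return np.stack(patches), np.array(labels)
```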
Convolution in a neural network is a sliding dot product between an image patch and the kernel aligned with it, which may not be a good similarity metric and thus may skip some important information, making it a suboptimal feature detector. However, normalizing both to a magnitude of 1 turns it into cosine similarity. The cosine of the angle between two non-zero vectors can be derived from the dot product \(k\cdot x_{i}=||k||\ ||x_{i}||\cos\theta\); given \(k\) and \(x_{i}\), the cosine similarity \(\cos(\theta)\) can be computed as \(\cos(\theta)=\frac{k\cdot x_{i}}{||k||\ ||x_{i}||}=\frac{\sum_{i=1}^{n}k_{i}\,x_{i}}{\sqrt{\sum_{i=1}^{n}k_{i}^{2}}\,\sqrt{\sum_{i=1}^{n}x_{i}^{2}}}\).
The resulting similarity ranges from \(-1\) to \(1\), where \(-1\) means that the kernel and the image patch point in exactly opposite directions, \(0\) means they are orthogonal (no similarity), and \(1\) means that the kernel and the patch are aligned. The main issue with cosine similarity is that it is magnitude-invariant: a kernel or image patch of very small magnitude yields the same similarity as a large one. Taken to an extreme, this may end up capturing background or noise rather than foreground information. Thus, extra parameters can help to overcome the aforesaid issues.
Therefore, SCS was introduced, a strided operation similar to convolution that extracts features from an image patch. SCS is similar to the convolutional process, however, with some important differences. In practice, the convolution is a strided dot product between the kernel \(k\) and the image patch \(x_{i}\), i.e., \(k\cdot x_{i}\), whereas in cosine similarity, the image patch and kernel are normalized to have a magnitude of 1 before the dot product is taken. It is so named because, in two dimensions, it gives the cosine of the angle between the image patch and the kernel as \(\frac{k\cdot x_{i}}{||k||\ ||x_{i}||}\). The cosine can be sharpened by raising the magnitude of the result to a power \(p\) while maintaining the sign, as \(sign(k\cdot x_{i})\left|\frac{k\cdot x_{i}}{||k||\ ||x_{i}||}\right|^{p}\). This measure can become numerically unstable if ever the magnitude of the image patch or kernel gets too close to zero, which motivates a small floor \(q\) on the patch norm, as \(sign(k\cdot x_{i})\left|\frac{k\cdot x_{i}}{||k||\ (||x_{i}||+q)}\right|^{p}\). In practice, the kernel magnitude does not get too small and does not strictly need \(q\); however, it may be a good practice to have a small constraint on it as well, which yields
\[SCS(k,x_{i})=sign(k\cdot x_{i})\left|\frac{k\cdot x_{i}}{(||k||+q)(||x_{i}||+q)}\right|^{p} \tag{1}\]
Similar to the traditional convolutional process in deep learning, SCS is a strided operation that extracts output features from an input image patch. However, before the dot product is computed, the kernel and image patch are normalized to a magnitude of 1, which produces the sharpened cosine similarity (some literature refers to it as sharpened cosine normalization). A sharpening exponent \(p\) is learned for each unit, and the similarity is sharpened by raising it to this power. The aforementioned procedure outperforms a convolutional process in a neural network and is 10 to 100 times faster due to the smaller number of required parameters, since no normalization or activation function is needed.
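For illustration, Eq. (1) can be written in a few lines of NumPy; here \(p\) and \(q\) are fixed constants, whereas in SCS-Net the sharpening exponent is learned per unit.

```python
import numpy as np

def scs(k: np.ndarray, x: np.ndarray, p: float = 2.0, q: float = 1e-3) -> float:
    """Sharpened cosine similarity (Eq. 1) between a kernel k and an image patch x."""
    k, x = k.ravel(), x.ravel()
    dot = float(k @ x)
    cos = dot / ((np.linalg.norm(k) + q) * (np.linalg.norm(x) + q))
    return np.sign(dot) * abs(cos) ** p
```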
Apart from the SCS process, absolute max pooling (MaxAbsPool) is used in place of regular pooling to downsample the data while the filters are updated through backpropagation. As compared to simple max pooling, absolute max pooling chooses, in each window, the element with the highest magnitude, even if that element has a negative value.
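A simple TensorFlow sketch of absolute max pooling (our own, for illustration) selects, per window, whichever of the largest and the most negative values has the bigger magnitude:

```python
import tensorflow as tf

def max_abs_pool2d(x: tf.Tensor, k: int = 2) -> tf.Tensor:
    """Keep, per k x k window, the element with the largest magnitude, sign included."""
    hi = tf.nn.max_pool2d(x, ksize=k, strides=k, padding="VALID")    # largest value
    lo = -tf.nn.max_pool2d(-x, ksize=k, strides=k, padding="VALID")  # most negative value
    return tf.where(tf.abs(hi) >= tf.abs(lo), hi, lo)
```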
The overall model is trained for 250 epochs with a batch size of 256 and a learning rate of 0.001. The learning rate strongly influences the pace of learning of any deep model, determining the step size used to minimize the loss function, whereas momentum is used to enhance the accuracy and the model's training speed. The model is trained using the Adam optimizer, which combines RMSprop and momentum. The Keras library (built on TensorFlow) is used to fine-tune the hyperparameters of SCS-Net, which ultimately makes the overall model robust and competitive with the other state-of-the-art models. The motivation behind using the Adam optimizer is its memory and computational efficiency. The complete architecture is presented in Figure 1.
## IV Experimental Results and Discussion
This section describes the experimental datasets used to evaluate the proposed pipeline, the comparative methods proposed in recent years, and the experimental settings. It then presents the results and discussion, including comparisons against state-of-the-art methods. For the experimental evaluation, two publicly available HSI datasets have been used. These datasets were acquired at different times and locations and with different sensors.
Works published in the literature present comprehensive results to highlight their pros and cons compared to state-of-the-art methods; however, to some extent, these works may have different experimental settings. For instance, the number or percentage of training, validation, and test samples may remain the same, but their geographical locations may differ (due to the random selection), as these methods may have been run at different times or on different machines. Therefore, to make the comparison among different works fair, one needs the same number/percentage and geographical locations of training, validation, and test samples. Thus, in this work, the experimental settings, i.e., the percentage of training, validation, and test samples along with their geographical locations, remain the same, as all the comparative methods along with the SCS pipeline are executed in one single run.
The experimental results presented in this section have been obtained using Google Colab with a Graphics Processing Unit, 358+ GB of storage, and 25 GB of RAM. For all the experimental results presented in this section, the training, validation, and test samples are randomly selected in a 40%/30%/30% split. For a fair comparative analysis, all models have been executed at once on one-time randomly selected samples. The presented results are obtained using a \(15\times 15\) patch size along with the 15 most informative bands selected through PCA. As for the training parameters of the 3D CNN and Hybrid models, the weights are initially randomized and later optimized using backpropagation with the Adam optimizer and a softmax loss function. The overall training and validation loss and accuracy for all three models are presented in Figure 2.
The experimental results presented in Figure 3 and Table I for both the IP and SA datasets are reported using the Overall Accuracy, Average Accuracy, and Kappa coefficient. Kappa measures the agreement between the classification maps and the ground truth, whereas Average and Overall Accuracy compute the average class-wise classification performance and the fraction of correctly classified samples out of the total test samples, respectively.
Comparing models trained and tested on differently drawn random samples is not easy. Therefore, all the comparative methods must be evaluated on the same set of samples rather than on different samples. In this work, the training and test samples are selected only once and used to train and test all the models on the same samples.
Here the SCS method is compared with the Multi-layer Perceptron (MLP) [22], Multinomial Logistic Regression (MLR) [23], Random Forest (RF) [24], Support Vector Machine (SVM) [25], 1D CNN [11], 2D CNN [26], 3D CNN [13], Hybrid CNN [27], and Bayesian CNN (1D, 2D, and 3D BCNN) [12]. All these methods were implemented with the parameters mentioned in their respective works. The detailed experimental results are presented in Table II for the Salinas dataset (due to the page limit, results for the other datasets are omitted). Looking at Table II, one can conclude that the pixel-based classification methods, i.e., MLR and RF, underperform SVM; however, the 1D CNN's performance is superior to the other pixel-based spectral classification methods. The spatial and spectral-spatial classification methods, i.e., the 2D, 3D, and Hybrid methods, respectively, outperform the spectral methods. Moreover, SCS's performance is similar to that of the spectral-spatial methods while requiring significantly fewer training parameters, i.e., SCS requires \(5624\) training parameters whereas the 3D and Hybrid CNNs require \(127,104\) training parameters, which is significant.
## VI Conclusion
Hyperspectral Image Classification is a difficult task due to high inter- and intra-class similarity and variability, nested regions, and overlapping. 2D Convolutional Neural Networks (CNNs) emerged as a viable network, whereas 3D CNNs are a better alternative due to accurate classification. However, 3D CNNs are highly computationally complex due to their volume and spectral dimensions. Moreover, down-sampling and hierarchical filtering smooth the high-frequency information, i.e., texture features, during the forward pass, although it is crucial for accurate classification. Furthermore, CNNs require tons of tuning parameters, which increases the training time. To overcome the aforesaid issues, this work presented Sharpened Cosine Similarity (SCS) as an alternative to convolution in neural networks. SCS is exceptionally parameter efficient for the following reasons: it skips the non-linear activation layers (ReLU, Sigmoid, etc.); it skips the normalization and dropout layers; and it uses MaxAbsPool instead of MaxPool, which selects the element with the highest magnitude of activity even if it is negative. Several experimental results proved the efficiency of SCS as compared to convolutional layers.
|
2303.15005 | Architecturing Binarized Neural Networks for Traffic Sign Recognition | Traffic signs support road safety and managing the flow of traffic, hence are
an integral part of any vision system for autonomous driving. While the use of
deep learning is well-known in traffic signs classification due to the high
accuracy results obtained using convolutional neural networks (CNNs) (state of
the art is 99.46\%), little is known about binarized neural networks (BNNs).
Compared to CNNs, BNNs reduce the model size and simplify convolution
operations and have shown promising results in computationally limited and
energy-constrained devices which appear in the context of autonomous driving.
This work presents a bottom-up approach for architecturing BNNs by studying
characteristics of the constituent layers. These constituent layers (binarized
convolutional layers, max pooling, batch normalization, fully connected layers)
are studied in various combinations and with different values of kernel size,
number of filters and of neurons by using the German Traffic Sign Recognition
Benchmark (GTSRB) for training. As a result, we propose BNNs architectures
which achieve more than $90\%$ for GTSRB (the maximum is $96.45\%$) and an
average greater than $80\%$ (the maximum is $88.99\%$) considering also the
Belgian and Chinese datasets for testing. The number of parameters of these
architectures varies from 100k to less than 2M. The accompanying material of
this paper is publicly available at
https://github.com/apostovan21/BinarizedNeuralNetwork. | Andreea Postovan, Mădălina Eraşcu | 2023-03-27T08:46:31Z | http://arxiv.org/abs/2303.15005v1 | # Architecturing Binarized Neural Networks for Traffic Sign Recognition+
###### Abstract
Traffic signs support road safety and managing the flow of traffic, hence are an integral part of any vision system for autonomous driving. While the use of deep learning is well-known in traffic signs classification due to the high accuracy results obtained using convolutional neural networks (CNNs) (state of the art is 99.46%), little is known about binarized neural networks (BNNs). Compared to CNNs, BNNs reduce the model size and simplify convolution operations and have shown promising results in computationally limited and energy-constrained devices which appear in the context of autonomous driving.
This work presents a bottom-up approach for architecturing BNNs by studying characteristics of the constituent layers. These constituent layers (binarized convolutional layers, max pooling, batch normalization, fully connected layers) are studied in various combinations and with different values of kernel size, number of filters and of neurons by using the German Traffic Sign Recognition Benchmark (GTSRB) for training. As a result, we propose BNNs architectures which achieve more than 90% for GTSRB (the maximum is 96.45%) and an average greater than 80% (the maximum is 88.99%) considering also the Belgian and Chinese datasets for testing. The number of parameters of these architectures varies from 100k to less than 2M. The accompanying material of this paper is publicly available at [https://github.com/apostovan21/BinarizedNeuralNetwork](https://github.com/apostovan21/BinarizedNeuralNetwork).
Keywords:binarized neural networks XNOR architectures traffic sign classification GTSRB.
## 1 Introduction
Traffic signs are important both in city and highway driving for supporting road safety and managing the flow of traffic. Therefore, _traffic sign classification (recognition)_ is an integral part of any vision system for autonomous driving. It consists of: _a)_ isolating the traffic sign in a bounding box, and _b)_ classifying the sign into a specific traffic class. This work focuses on the second task.
Building a traffic sign classifier is challenging as it needs to cope with complex real-world traffic scenes. A well-know problem of the classifiers is the lack of _robustness_ to _adversarial examples_[29] and to occlusions [30]. _Adversarial examples_ are traffic signs taken as input which produce erroneous outputs and, together with _occlusions_, they naturally occur because the traffic scenes are unique in terms of weather conditions, lighting, aging.
One way to alleviate the lack of robustness is to formally verify that the trained classifier is robust to adversarial and occluded examples. For constructing the trained model, binary neural networks (BNNs) have shown promising results [14] even in computationally limited and energy-constrained devices which appear in the context of autonomous driving. BNNs are neural networks (NNs) with weights and/or activations binarized and constrained to \(\pm 1\). Compared to NNs, they reduce the model size and simplify convolution operations utilized in image recognition task.
Our long term goal, which also motivated this work, is to give formal guarantees of properties (e.g. robustness) which are true for a trained classifier. The formal _verification problem_ is formulated as follows: given a trained model and a property to be verified for the model, does the property hold for that model? To do so, the model and the property are translated into a constrained satisfaction problem and use, in principle, existing tools to solve the problem [22]. However, the problem is NP-complete [17], so experimentally beyond the reach of general-purpose tool.
This work makes an attempt to arrive at BNN architectures specifically for traffic signs recognition by making an extensive study of variation in accuracy, model size and number of parameters of the produced architectures. In particular, we are interested in BNNs architectures with high accuracy and small model size in order to be suitable in computationally limited and energy-constrained devices but, at the same time, reduced number of parameters in order to make the verification task easier. A bottom-up approach is adopted to design the architectures by studying characteristics of the constituent layers of internal blocks. These constituent layers are studied in various combinations and with different values of kernel size, number of filters and of neurons by using the German Traffic Sign Recognition Benchmark (GTSRB) for training. For testing, similar images from GTSRB, as well as from Belgian and Chinese datasets were used.
As a result of this study, we propose the network architectures (see Section 6) which achieve more than 90% for GTSRB [13] and an average greater than 80% considering also the Belgian [1] and Chinese [3] datasets, and for which the number of parameters varies from 100k to 2M.
## 2 Related Work
Traffic Sign Recognition using CNNs.Traffic sign recognition (TSR) consists in predicting a label for the input based on a series of features learned by the trained classifier. CNNs were used in traffic sign classification since long time ago [27, 8]. These works used GTSRB [13] which is maintained and used on a
large scale also nowadays. Paper [8] obtained an accuracy of 99.46% on the test images, which is better than the human performance of 98.84%, while [27], with 98.31%, was very close. These accuracies were obtained either by modifying traditional models for image recognition (e.g. ResNet in the case of [27]) or by coming up with new ones (e.g. a multi-column deep neural network composed of 25 CNNs in the case of [8]).
Binarized Neural Networks Architectures.Quantized neural networks (QNNs) are neural networks that represent their weights and activations using low-bit integer variables. There are two main strategies for training QNNs: _post-training quantization_ and _quantization-aware training_[18] (QAT). The drawback of the post-training quantization is that it typically results in a drop in the accuracy of the network with a magnitude that depends on the specific dataset and network architecture. In our work, we use the second approach which is implemented in Larq library [11]. In QAT, the imprecision of the low-bit fixed-point arithmetic is modeled already during the training process, i.e., the network can adapt to a quantized computation during training. The challenge for QNNs is that they can not be trained directly with stochastic gradient descent (SGD) like classical NNs. This was solved by using the straight-through gradient estimator (STE) approach [15] which, in the forward pass of a training step, applies rounding operations to computations involved in the QNN (i.e. weights, biases, and arithmetic operations) and in the backward pass, the rounding operations are removed such that the error can backpropagate through the network.
BinaryConnect [9] is one of the first works which uses 1-bit quantization of weights during forward and backward propagation, but not during parameter update to maintain accurate gradient calculation during SGD. As an observation, the models used in conjuction with BinaryConnect use only linear layers which is sufficient for MNIST [20] dataset, but convolutional layers for CIFAR-10 [19] and SVHN [24]. Paper [14] binarizes the activations as well. Similarly, for MNIST dataset they use linear layers, while for CIFAR-10, SVHN and ImageNet [10] they use variants of ConvNet, inspired by VGG [28], with the binarization of the activations.
In XNOR-Net [25], both the weights and the inputs to the convolutional and fully connected layers are approximated with binary values which allows an efficient way of implementing convolutional operations. The paper uses ImageNet
Figure 1: Architecture for recognizing traffic signs [8]. Image sz: 48\(\times\)48 (px \(\times\) px)
dataset in experiments. We use XNOR-Net architectures in our work but for a new dataset, namely traffic signs.
Research on BNNs for traffic sign detection and recognition is scarce. Paper [7] uses the binarization of RetinaNet [21] and ITA [6] for traffic sign detection, in the first phase, and then recognition. Differently, we focus only on recognition, hence the architectures used have different underlying principles.
_Verification of Neural Networks._ Properties of neural networks are subject to verification. In the latest verification competition there are various benchmarks subject to verification [2], however, there is none involving traffic signs. This is because a model with reasonable accuracy for classification task must contain convolutional layers which leads to an increase of number of parameters. To the best of our knowledge there is only one paper which deals with traffic signs datasets [12] that is GTSRB. However, they considered only subsets of the dataset and their trained models consist of only fully connected layers with ReLU activation functions ranging from 70 to 1300. They do not mention the accuracy of their trained models. BNNs [23, 5] are also subject to verification but we did not find works involving traffic signs datasets.
## 3 Binarized Neural Networks
A BNN [14] is a feedforward network where weights and activations are mainly binary. [23] describes BNNs as sequential composition of blocks, each block consisting of linear and non-linear transformations. One could distinguish between _internal_ and _output blocks_.
There are typically several _internal blocks_. The layers of the blocks are chosen in such a way that the resulting architecture fulfills the requirements of accuracy, model size, and number of parameters, for example. Typical layers in an internal block are: _1)_ linear transformation (LIN), _2)_ binarization (BIN), _3)_ max pooling (MP), _4)_ batch normalization (BN). A linear transformation of the input vector can be based on a fully connected layer or a convolutional layer. In our case it is a convolutional layer, since our experiments have shown that a fully connected layer cannot synthesize the features of traffic signs well and, therefore, the accuracy is low. The linear transformation is followed either by a binarization or a max pooling operation. Max pooling helps in reducing the number of parameters. One can swap binarization with max pooling; the result would be the same. We use this sequence as Larq [11], the library we used in our experiments, implements convolution and binarization in the same function. Finally, scaling is performed with a batch normalization operation [16].
There is _one output block_ which produces the predictions for a given image. It consists of a dense layer that maps its input to a vector of integers, one for each output label class. It is followed by function which outputs the index of the largest entry in this vector as the predicted label.
We make the observation that, if the MP and BN layers are omitted, then the input and output of the internal blocks are binary, in which case, also the
input to the output block. The input of the first block is never binarized as it drops down drastically the accuracy.
## 4 Datasets and Experimental Setting
We use GTSRB [4] for training and testing purposes of various architectures of BNNs. These architectures were also tested with the Belgian data set [1] and the Chinese [3].
GTSRB is a multi-class, single-image dataset. The dataset consists of images of German road signs in 43 classes, ranging in size from 25 \(\times\) 25 to 243 \(\times\) 225, and not all of them are square. Each class comprises 210 to 2250 images including prohibitory signs, danger signs, and mandatory signs. The training set contains 39209 images; the remaining 12630 images are selected as the testing set. For training and validation the ratio 80:20 was applied to the images in the train dataset. GTSRB is a challenging dataset even for humans, due to perspective change, shade, color degradation, lighting conditions, just to name a few.
The _Belgium Traffic Signs_ test dataset contains 7095 images of 62 classes, out of which only 23 match the ones from GTSRB. From that dataset we have used only the images from the training folder, which are 4533 in total. The _Chinese Traffic Signs_ test dataset contains 5998 traffic sign images of 58 classes, out of which only 15 match the ones from GTSRB. For our experiments, we performed the following pre-processing steps on the Belgium and Chinese datasets, otherwise the accuracy of the trained model would be very low: _1)_ we relabeled the classes from the Belgium, respectively Chinese, datasets such that their common classes with GTSRB have the same label, and _2)_ we eliminated the classes not appearing in GTSRB.
In the end, for testing, we have used 1818 images from the Belgium dataset and 1590 from the Chinese dataset.
For this study, the following points are taken into consideration.
1. Training of the network is done on an Intel Iris Plus Graphics 650 GPU using Keras v2.10.0, TensorFlow v2.10.0 and Larq v0.12.2.
2. From the open-source Python library Larq [11], we used the function QuantConv2D in order to binarize the convolutional layers except the first. Subsequently, we denote it by QConv. The bias is set to False as we observed that it does not negatively influence the accuracy while reducing the number of parameters. A sketch of a full architecture built with this API is given after this list.
3. Input shape is fixed either to \(30\times 30\), \(48\times 48\), or \(64\times 64\) (px \(\times\) px). Due to lack of space, most of the experimental results included are for \(30\times 30\), however all the results are available at [https://github.com/apostovan21/BinarizedNeuralNetwork](https://github.com/apostovan21/BinarizedNeuralNetwork).
4. Unless otherwise stated, the number of epochs used in training is 30.
5. Throughout the paper, for max pooling, the kernel is fixed to non-overlapping \(2\times 2\) dimension.
6. Accuracy is measured while varying the number of layers, the kernel size, the number of filters, and the number of neurons of the internal dense layer. The combinations considered use the following values: _(a)_ number of blocks: \(2,3,4\); _(b)_ kernel size: \(2,3,5\); _(c)_ number of filters: \(16,32,64,128,256\); _(d)_ number of neurons of the internal dense layer: \(0,64,128,256,512,1024\).
7. ADAM is chosen as the default optimizer for this study, as it is the best overall choice for the initial training of deep learning networks [26].
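For illustration, the following Larq/Keras sketch assembles one of the two-internal-block architectures with MP and BN layers evaluated below; the filter counts match one row of the tables, while the softmax output and the compilation settings are our own illustrative choices rather than the paper's exact configuration.

```python
import tensorflow as tf
import larq as lq

# Binarization settings for all quantized layers after the first:
# both inputs and weights use the sign function (with a straight-
# through estimator for gradients), and weights are clipped.
kwargs = dict(input_quantizer="ste_sign",
              kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip",
              use_bias=False)

model = tf.keras.models.Sequential([
    # Block 1: the input image is NOT binarized, only the weights.
    lq.layers.QuantConv2D(16, (3, 3), kernel_quantizer="ste_sign",
                          kernel_constraint="weight_clip",
                          use_bias=False, input_shape=(30, 30, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    # Block 2: a fully binarized QConv layer.
    lq.layers.QuantConv2D(32, (2, 2), **kwargs),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    # Output block: dense layer with one output per GTSRB class.
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(43, **kwargs),
    tf.keras.layers.Activation("softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```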
The following section discusses the systematic progress of the study.
## 5 Proposed Methodology
We recall that the goal of our work is to obtain a set of BNN architectures with high accuracy but, at the same time, with a small number of parameters, for the scalability of formal verification. To this aim, we proceed in two steps. First, we propose two simple XNOR architectures1 with two internal blocks (Section 5.1). We train them on a set of images from the GTSRB dataset and test them on similar images from the same dataset. We learned that MP reduces the accuracy drastically, while the composition of convolutional and binarization layers (QConv) learns the features of traffic sign images well. In Section 5.2.1, we restore the accuracy lost by adding a BN layer after the MP one. At the same time, we try to increase the accuracy of the architecture composed of QConv-only blocks by adding a BN layer after each QConv.
Footnote 1: An XNOR architecture [25] is a deep neural network where both the weights and the inputs to the convolutional and fully connected layers are approximated with binary values.
Second, based on the lessons from Sections 5.1 and 5.2.1, as well as on the fact that a higher number of internal layers typically increases the accuracy, we propose several architectures (Section 5.2.2). Notable are those with an accuracy greater than 90% for GTSRB and an average greater than 80% when also considering the Belgian and Chinese datasets, for which the number of parameters varies from 100k to 2M.
### XNOR Architectures
We consider the two XNOR architectures from Figure 2. Each is composed of two internal blocks and an output dense (fully connected) layer. Note that these architectures have only binary parameters. The results for GTSRB are in Table 1. One can observe that a simple XNOR architecture gives an accuracy of at least 70% as long as MP layers are not present, but the number of parameters and the model size are then high. We conclude that QConv synthesizes the features well, whereas MP layers reduce the accuracy tremendously.
### Binarized Neural Architectures
#### 5.2.1 Two internal blocks
As shown in Table 1, the number of parameters of an architecture with MP layers is at least 15 times smaller than of one without, while the size of the binarized models is approximately 30 times smaller than that of their 32-bit equivalents. Hence, to benefit from these two sweet spots, we propose a new architecture (see Figure 3(b)) which adds a BN layer in the second block of the XNOR architecture from Figure 2(b). The increase in accuracy is considerable (see Table 2)2. However, a BN layer following a binarized convolution (see Figure 3(a)) typically leads to a decrease in accuracy (see Table 3). The BN layer introduces a few real-valued parameters into the model, as well as a slight increase in the model size; this increase is small because only one BN layer was added. Note that the architectures from Figure 3 are not XNOR architectures.
Footnote 2: A BN layer following MP is also obtained by composing two blocks of XNOR-Net proposed by [25].
#### 5.2.2 Several Internal Blocks
Based on the results obtained in Sections 5.1 and 5.2.1, we first trained an architecture where each internal block contains a BN layer only after the MP (see Figure 4(a)). This is based on the results
| Model description | Acc (%) | #Binary Params | Model Size, Binary (KiB) | Model Size, Float-32 (KiB) |
| --- | --- | --- | --- | --- |
| QConv(32, 3×3), QConv(64, 2×2), D(43) | 77.91 | 2015264 | 246.5 | 7874.56 |
| QConv(32, 3×3), MP(2×2), QConv(64, 2×2), MP(2×2), D(43) | 5.46 | 108128 | 13.2 | 422.38 |
| QConv(64, 3×3), QConv(128, 2×2), D(43) | 70.05 | 4046912 | 495.01 | 15810.56 |
| QConv(64, 3×3), MP(2×2), QConv(128, 2×2), MP(2×2), D(43) | 10.98 | 232640 | 28.4 | 908.75 |
| QConv(16, 3×3), QConv(32, 2×2), D(43) | 81.54 | 1005584 | 122.75 | 3932.16 |
| QConv(16, 3×3), MP(2×2), QConv(32, 2×2), MP(2×2), D(43) | 1.42 | 52016 | 6.35 | 203.19 |

Table 1: XNOR(QConv) and XNOR(QConv, MP) architectures. Image size: 30px \(\times\) 30px. Dataset for train and test: GTSRB.

Figure 2: XNOR architectures
from Tables 2 (a BN layer after MP is crucial for accuracy) and 3 (a BN layer after QConv degrades the accuracy). There is an additional internal dense layer, for which the number of neurons varies in the set \(\{64,128,256,512,1024\}\). The results are in Table 4. One can observe that the conclusions drawn from the two-block architectures do not persist. Hence, motivated also by [14], we propose the architecture from Figure 4(b).
## 6 Experimental results and discussion
The best accuracies for the GTSRB and Belgium datasets are 96.45% and 88.17%, respectively, and were obtained with the architecture from Figure 5 with input size \(64\times 64\) (see Table 5). The number of parameters is almost 2M, and the model size is 225.67 KiB for the binary model and 6932.48 KiB for the Float-32 equivalent. It is no surprise that the same architecture gave the best results for both GTSRB and Belgium, since they belong to the same European area. The best accuracy for the Chinese dataset (83.9%) is obtained by another architecture, namely the one from Figure 6,
| Model description | Acc (%) | #Binary Params | #Real Params | #Total Params | Model Size, Binary (KiB) | Model Size, Float-32 (KiB) |
| --- | --- | --- | --- | --- | --- | --- |
| QConv(32, 3×3), MP(2×2), QConv(64, 2×2), MP(2×2), BN, D(43) | 50.87 | 108128 | 128 | 108256 | 13.7 | 422.88 |
| QConv(64, 3×3), MP(2×2), QConv(128, 2×2), MP(2×2), BN, D(43) | 36.96 | 232640 | 256 | 232896 | 29.4 | 909.75 |
| QConv(16, 3×3), MP(2×2), QConv(32, 2×2), MP(2×2), BN, D(43) | 39.55 | 52016 | 64 | 52080 | 6.6 | 203.44 |

Table 2: XNOR(QConv, MP) enhanced. Image size: 30px \(\times\) 30px. Dataset for train and test: GTSRB.

Figure 3: BNN architectures which are not XNOR

Figure 4: Binarized Neural Architectures
with input size 48\(\times\)48 (see Table 6). This architecture is more efficient from the point of view of computationally limited devices and formal verification, having 900k parameters and a model size of 113.64 KiB for the binary model and 3532.8 KiB for the Float-32 equivalent. Moreover, this second architecture gave the best average accuracy, and its decrease in accuracy for GTSRB and Belgium is small, namely 1.17% and 0.39%, respectively.
Investigating both architectures through their confusion matrices, for GTSRB we observe that the model failed to predict, for example, _End of speed limit 80_ and _Bicycle crossing_. The first was confused mostly with _Speed limit (80km/h)_, the second with _Children crossing_. One reason for the first confusion could be that _End of speed limit (80 km/h)_ might be considered an occluded version of _Speed limit (80km/h)_.
For the Belgium test set, the worst results were obtained, for example, for _Bicycle crossing_ and _Wild animals crossing_, because these images differ considerably from the corresponding images in the GTSRB training set (see Figure 7a). Another bad prediction is _Double curve_, which was equally confused with _Slippery road_ and _Children crossing_.
In the Chinese test set, _Traffic signals_ failed to be predicted at all by our model and was assimilated with the _General Caution_ class from GTSRB; however, _General Caution_ is not a class in the Chinese test
| #Neur | #Ep | Acc (%) | #Binary Params | #Real Params | #Total Params | Model Size, Binary (KiB) | Model Size, Float-32 (KiB) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 30 | 41.17 | 101472 | 192 | 101664 | 13.14 | 397.12 |
| 0 | 100 | 52.17 | 101472 | 192 | 101664 | 13.14 | 397.12 |
| 64 | 30 | 4.98 | 109600 | 192 | 109792 | 14.13 | 428.88 |
| 64 | 100 | 5.70 | 109600 | 192 | 109792 | 14.13 | 428.88 |
| 256 | 30 | 12.43 | 128736 | 192 | 128928 | 16.46 | 503.62 |
| 256 | 100 | 8.48 | 128736 | 192 | 128928 | 16.46 | 503.62 |
| 512 | 30 | 19.82 | 243552 | 192 | 243744 | 30.48 | 952.12 |
| 512 | 100 | 32.13 | 243552 | 192 | 243744 | 30.48 | 952.12 |
| 1024 | 30 | 46.05 | | | | | |

Table 4: Results for the architecture QConv(32, 5×5), MP(2×2), BN, QConv(64, 3×3), D(43), varying the number of neurons of the internal dense layer (#Neur) and the number of training epochs (#Ep). Image size: 30px \(\times\) 30px. Dataset for train and test: GTSRB.

| Model description | Acc (%) | #Binary Params | #Real Params | #Total Params | Model Size, Binary (KiB) | Model Size, Float-32 (KiB) |
| --- | --- | --- | --- | --- | --- | --- |
| QConv(32, 3×3), QConv(64, 2×2), BN, D(43) | 82.01 | 2015264 | 128 | 2015392 | 246.5 | 7874.56 |
| QConv(64, 3×3), QConv(128, 2×2), BN, D(43) | 69.12 | 4046912 | 256 | 4047168 | 495.01 | 15810.56 |
| QConv(16, 3×3), QConv(32, 2×2), BN, D(43) | 73.11 | 1005584 | 64 | 1005648 | 123 | 3932.16 |

Table 3: XNOR(QConv) modified. Image size: 30px \(\times\) 30px. Dataset for train and test: GTSRB.
set (see Figure 7b, top). Another bad prediction is _Speed limit (80km/h)_, which was equally confused with _Speed limit (30km/h)_, _Speed limit (50km/h)_ and _Speed limit (60km/h)_, but not with _Speed limit (70km/h)_. One reason could be the quality of the training images compared to the test ones (see Figure 7b, bottom).
In conclusion, there are only a few cases in which the prediction failures can be explained; hence, the need for formal verification guarantees of the results is urgent, and we will address it in future work.
|
2308.05170 | FPGA Resource-aware Structured Pruning for Real-Time Neural Networks | Neural networks achieve state-of-the-art performance in image classification,
speech recognition, scientific analysis and many more application areas. Due to
the high computational complexity and memory footprint of neural networks,
various compression techniques, such as pruning and quantization, have been
proposed in literature. Pruning sparsifies a neural network, reducing the
number of multiplications and memory. However, pruning often fails to capture
properties of the underlying hardware, causing unstructured sparsity and
load-balance inefficiency, thus bottlenecking resource improvements. We propose
a hardware-centric formulation of pruning, by formulating it as a knapsack
problem with resource-aware tensor structures. Evaluated on a range of tasks,
including sub-microsecond particle classification at CERN's Large Hadron
Collider and fast image classification, the proposed method achieves reductions
ranging between 55% and 92% in the DSP utilization and up to 81% in BRAM
utilization. | Benjamin Ramhorst, Vladimir Loncar, George A. Constantinides | 2023-08-09T18:14:54Z | http://arxiv.org/abs/2308.05170v2 | # FPGA Resource-aware Structured Pruning
###### Abstract
Neural networks achieve state-of-the-art performance in image classification, speech recognition, scientific analysis and many more application areas. With the ever-increasing need for faster computation and lower power consumption, driven by real-time systems and Internet-of-Things (IoT) devices, FPGAs have emerged as suitable devices for deep learning inference. Due to the high computational complexity and memory footprint of neural networks, various compression techniques, such as pruning, quantization and knowledge distillation, have been proposed in literature. Pruning sparsifies a neural network, reducing the number of multiplications and memory. However, pruning often fails to capture properties of the underlying hardware, causing unstructured sparsity and load-balance inefficiency, thus bottlenecking resource improvements. We propose a hardware-centric formulation of pruning, by formulating it as a knapsack problem with resource-aware tensor structures. The primary emphasis is on real-time inference, with latencies in the order of 1\(\upmu\)s, accelerated with hls4ml, an open-source framework for deep learning inference on FPGAs. Evaluated on a range of tasks, including real-time particle classification at CERN's Large Hadron Collider and fast image classification, the proposed method achieves a reduction ranging between 55% and 92% in the utilization of digital signal processing blocks (DSP) and up to 81% in block memory (BRAM) utilization.
FPGA, Deep Learning, Pruning
## I Introduction
Modern neural networks are associated with high computational complexity and memory footprint, linked to the high dimensionality of weight tensors. With the ever-increasing need for faster computation and lower power consumption, driven by real-time systems and Internet-of-Things (IoT), neural networks must be adapted to reduce their complexity and memory footprint. A common method for network compression is pruning, the process of sparsifying a neural network, by setting some of its weights to zero. LeCun _et al._[1] first proposed the idea of Optimal Brain Damage: for a typical neural network, it is possible to remove over half of the weights, with no loss in accuracy. Since then, pruning has been extensively studied in the literature [2, 3, 4] and evaluated on a range of problems and hardware platforms. However, to achieve maximum improvements in resource utilization and computational complexity, pruning must consider the underlying hardware.
hls4ml [5, 6] is an open-source library for real-time inference of neural networks on FPGAs, built on top of high-level synthesis (HLS). Primarily intended for real-time deep learning at CERN, hls4ml has become a powerful tool evaluated on a range of tasks: particle physics [6, 7, 8], autonomous vehicles [9] and fast image classification [10]. To meet the strict latency constraints of experiments at CERN, hls4ml employs a full on-chip design with high parallelism. However, as with any high-performance system, neural networks accelerated with hls4ml are susceptible to high resource utilization. Previous compression efforts with hls4ml included quantization-aware training with QKeras [11], up to extreme, binary and ternary precisions [12], with limited emphasis on pruning.
We propose an FPGA resource-aware structured pruning algorithm, able to capture, at training time, the underlying mapping to digital signal processing blocks (DSPs) and block random access memory (BRAM). Groups of weights are iteratively removed through constrained optimization, formulated as a knapsack problem. Evaluated on particle physics benchmarks and fast image classification tasks, the proposed method achieves a reduction ranging between 55% and 92% in DSP and up to 81% in BRAM utilization while maintaining inference latencies in the order of a few microseconds. The methodology described in this research has been integrated with hls4ml and open-sourced with an Apache 2.0 license1, allowing long-term contributions. Key contributions of our research include:
Footnote 1: GitHub page: [https://github.com/fastmachinelearning/hls4ml/tree/hardware-pruning](https://github.com/fastmachinelearning/hls4ml/tree/hardware-pruning)
* A structured pruning algorithm, derived from the mapping of model weights to DSP and BRAM blocks.
* Formulation of pruning as a knapsack problem, aimed at minimizing network loss and resources.
* An open-source tool for resource-aware pruning of Keras and TensorFlow models, integrated with hls4ml, enabling real-time inference.
## II Background
### _Pruning_
Modern neural networks have billions of weights, leading to long inference latencies and high resource utilization. Pruning is a network compression technique aimed at producing a sparse neural network. First proposed by LeCun _et al._[1], it was applied to modern neural networks by Han _et al._[13], who implemented an iterative, magnitude-based pruning algorithm: the networks are trained with regularization, shifting weights towards zero; weights below a certain magnitude are pruned; and the remaining weights are updated, retraining the network. Han _et al._ implemented unstructured pruning, removing individual weights from the network. Through unstructured pruning, high sparsities are achievable, but with little improvement in resource utilization and inference latency [2, 4]. Sparse, unstructured matrices require additional encodings, such as compressed sparse row (CSR) or compressed sparse column (CSC), incurring additional overheads in memory utilization. In HLS, unstructured multiplications can only be optimized by the compiler if loops are fully unrolled; otherwise, multiplications by zero are executed during inference. On the other hand, structured pruning [14, 15, 16] achieves improvements in resource utilization and inference latency by removing entire structures within weight tensors. Compared to unstructured pruning, for the same level of sparsity, structured pruning achieves a lower accuracy.
### _hls4ml_
High-energy physics experiments, such as those undertaken at CERN, collide sub-atomic particles near the speed of light. Current experiments aim to fully characterize the Higgs boson as well as search for physics beyond the Standard Model, including dark matter [6]. By accelerating sub-atomic particles near the speed of light, hundreds of terabytes of data are generated each second. Triggering is the process of analyzing and selecting only the most important data. Due to the extreme frequencies of particle collisions, a strict latency constraint is imposed, ranging from 100ns to a few microseconds, depending on the experiment. Therefore, specialized hardware, including ASICs and FPGAs, is used for data processing. An early application of deep learning to triggering was achieved with hls4ml [6], which has since been shown effective in a range of studies, including the acceleration of convolutional autoencoders for anomaly detection [7], recurrent neural networks [8] and graph neural networks [17].
hls4ml allows fine-grained tuning of the hardware configuration, through the following properties:
* _Reuse factor_ (RF): a layer-wise variable controlling the trade-off between inference latency and resource utilization, indicating the number of multiplications executed in parallel, as shown in Fig. 1.
* Precision: all variables are represented using a fixed-point format, with a tunable precision. By using low-precision fixed-point arithmetic, instead of floating-point, hls4ml aims to reduce inference latency and resource utilization.
* I/O type: streaming or parallel. Parallel I/O is suitable for fully connected (FC) networks targeting ultra-low latencies, as activation tensors are processed in parallel and stored in registers. Streaming I/O is suitable for large-scale convolutional neural networks (CNNs), as activation tensors are processed sequentially and stored in BRAM. Streaming I/O reduces resource consumption at the expense of latency.
* Strategy: _Latency_ or _Resource_. In the _Latency_ strategy, weights are stored in registers, enabling a faster inference at the expense of resource utilization, making it unsuitable for larger models. In the _Resource_ strategy, weights are stored in BRAM, reducing resource consumption and increasing inference latency.
In the _Latency_ strategy, unstructured pruning is optimal, as multiplications by zero are optimized by Vivado HLS (unrolled loops and weights are stored in registers). While suitable for small networks, the _Latency_ strategy does not scale well, due to Vivado partition and unroll limits. Our work focuses on pruning in the _Resource_ strategy, as it enables the acceleration of a wide range of networks. In _Resource_ strategy, weights are transposed before hardware synthesis, ensuring all the weights processed in the same clock cycle can simultaneously be read from BRAM. An example mapping from weight tensors to BRAM is given in Fig. 2. Algorithm 1 summarizes the implementation of matrix-vector multiplication in _Resource_ strategy, which forms the baseline for FC layers and is used for kernel multiplication in convolutional (CONV) layers. In Section III-A, the algorithm is used to determine the mapping between weight tensors and DSP blocks, a key component of resource-aware pruning.
Fig. 1: Variations of RF and the impact on resource utilization [6].
Fig. 2: Example mapping of weights to BRAM.
### _Related work_
Several works have proposed hardware-aware pruning algorithms suitable for FPGA inference. Yu _et al._ proposed Scalpel [18], a pruning algorithm customized to the level of parallelism. On micro-controllers with a single instruction, multiple data (SIMD) processor, the authors split the weights into aligned groups, each of size equal to the SIMD input size; groups below a certain threshold were then removed and the network retrained. Kang [19] and Li & Louri [20] proposed pruning methods such that each processing element (PE) processes the same number of zeros, solving the load-balance problem. Ren _et al._[21] proposed a hardware-aware pruning algorithm based on the alternating direction method of multipliers (ADMM); through ADMM, the authors formulate pruning as a discrete optimization problem, restricting the weight tensors to an admissible set depending on the target hardware. Zhannong _et al._[22] propose pattern pruning for CONV kernels, noticing that the number of possible patterns in a 3x3 kernel is 512: through unstructured pruning, the algorithm selects the dominant patterns and projects the remaining kernels onto the selected set, thus encoding each filter as a 7-bit value. Similar work was undertaken by PACA [23] and PatDNN [24], but through ADMM optimization. However, pattern pruning is only applicable to large-scale neural networks with many 3x3 filters; furthermore, the proposed method does not scale to 5x5 filters.
A complementary method to pruning is network quantization. FINN [25, 26] is an open-source framework for generating custom dataflow-style architectures for quantized networks. Faraone _et al._[27] proposed a resource-aware, filter-level pruning algorithm for quantized neural networks accelerated with FINN. Previous compression efforts with hls4ml primarily focused on quantization-aware training with QKeras [11], up to extreme, binary and ternary precisions [12]. Wang _et al._[28] proposed LUTNet, a hardware-software framework for the acceleration of sparse, binary neural networks using K-LUTs. In LUTNet, networks are trained and pruned with floating-point precision before binarization and acceleration using K-LUTs. In follow-up work, the authors proposed Logic Shrinkage [29]: a fine-grained pruning methodology for selecting the parameter K and reducing LUT utilization, evaluated on large-scale CNNs.
Our method differs from the above in several ways. While similar to Kang [19] and Li & Louri [20], our method considers the mapping to DSPs as well as BRAM. Shen _et al._[30] proposed a latency-aware structured pruning method by formulating pruning as a knapsack problem; while similar, our work focuses on structured resource-aware pruning targeting FPGA inference, whereas Shen _et al._ target GPU inference latency. Furthermore, many of the proposed methods focused on pruning large-scale CNNs, such as VGG or ResNet; our method focuses on pruning smaller, shallower networks suitable for ultra-low latencies. Finally, our implementation is open-sourced and integrated with hls4ml, enabling an end-to-end flow of resource-aware pruning, quantization-aware training through the interface with QKeras [11], and real-time inference.
## III Resource-aware pruning
### _Resource-aware tensor structures_
To maximize resource savings of accelerated neural networks, we consider the reduction of DSP and BRAM utilization, by pruning all the weights processed by the same multiplier or block of memory. The mapping between weights and DSP blocks is deterministic from the RF. Given a weight matrix, weights processed by the same DSP block can be obtained by transposing and flattening the matrix into a vector. The vector is split into sub-vectors of length RF, with each sub-vector representing the weights processed by the same DSP. Illustrated in Fig. 3, pruning weights \((w_{1},w_{5},w_{9})\), or \((w_{2},w_{6},w_{10})\), or \((w_{3},w_{7},w_{11})\), or \((w_{4},w_{8},w_{12})\) would achieve a reduction of one DSP block, further referred to as DSP-aware pruning.
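This mapping is easy to express in code; the NumPy sketch below (ours, not taken from the hls4ml code base) reproduces the grouping of Fig. 3:

```python
import numpy as np

def dsp_groups(weights: np.ndarray, rf: int) -> np.ndarray:
    """Transpose and flatten a weight matrix, then split it into
    sub-vectors of length RF; row i holds the weights processed by
    DSP block i."""
    flat = weights.T.flatten()
    assert flat.size % rf == 0, "RF is assumed to divide the weight count"
    return flat.reshape(-1, rf)

# A 3x4 weight matrix [w1..w12] with RF=3 yields the groups of
# Fig. 3: (w1, w5, w9), (w2, w6, w10), (w3, w7, w11), (w4, w8, w12).
w = np.arange(1, 13).reshape(3, 4)
print(dsp_groups(w, rf=3))
```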
DSP-aware pruning can readily be extended to optimizing memory utilization. Weights processed by the same DSP are stored as subsequent words in the same block. In hls4m1, BRAM is typically implemented as 1K x 36. Weights are quantized before deployment to a lower precision, so each BRAM will store weights processed by several neighboring DSPs. As an example, weights quantized to 18 bits would map to two DSP blocks for each block of RAM. To optimize BRAM utilization, given weight precision (\(P\)), \(C\) consecutive
groups of weights processed by the same DSP block need to be pruned, as given by Equation 1\({}^{2}\).
Footnote 2: \(\equiv\) is the congruence modulo operator.
\[C=\begin{cases}\frac{36}{P}&\text{if }36\equiv 0\bmod P\\ \lceil\frac{2.36}{P}\rceil&\text{otherwise}\end{cases} \tag{1}\]
If the precision is a factor of the BRAM width, each pruning step saves one block of RAM. Otherwise, pruning must capture weights whose total width equals twice the width of a BRAM, ensuring at least one block of RAM is removed. Therefore, to aid pruning and mitigate an increased network loss, the precision should be set to a factor of 36. In subsequent sections, we refer to BRAM-aware pruning as multi-dimensional pruning, since each pruning step removes \(C\) DSP blocks and one block of RAM.
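For concreteness, a small helper implementing Equation 1 (our sketch):

```python
import math

def consecutive_groups(precision: int, bram_width: int = 36) -> int:
    """Equation 1: how many consecutive DSP-sized weight groups must
    be pruned to free at least one block of RAM."""
    if bram_width % precision == 0:
        return bram_width // precision
    return math.ceil(2 * bram_width / precision)

print(consecutive_groups(18))  # 2: 18-bit weights, two groups per BRAM
print(consecutive_groups(16))  # 5: 16 is not a factor of 36
```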
### _Pruning as a knapsack problem_
hls4ml supports heterogeneous configuration of layers, allowing fine-grained tuning of the hardware-configuration: RF, precision and strategy. Furthermore, Vivado aims to implement multiplications for precisions lower than 10 bits through LUTs, rather than DSP blocks. As a result of these factors, pruning becomes a non-trivial optimization problem, aimed at minimizing network loss and overall resource consumption. We solve this problem by transforming it to a knapsack problem, similar to the work by Shen _et al._[30] on latency-aware pruning for GPUs.
Given a neural network, \(\mathcal{N}\), let \(\mathbb{W}=\{\mathbf{w}_{1},\ldots,\mathbf{w}_{n}\}\) be the set of resource-aware vectors, with each vector containing the network weights mapped to the same resource (BRAM or DSP). Without pruning, the set \(\mathbb{W}\) contains all the network weights, grouped per resource mapping. We define \(R:\mathbb{R}^{k}\rightarrow\mathbb{R}^{m}\) as the resource estimation function for resource-aware structures. The resource estimation function has no closed form, but can be evaluated by considering the RF, precision and strategy. As an example, consider the following cases (a code sketch follows the list):
* Layers quantized to 18 bits, with BRAM-aware structures would map to two DSP blocks and one block of RAM, for every resource-aware structure, \(\mathbf{w}_{i}\).
* Layers quantized to 9 bits, with BRAM-aware structures would map to zero DSP blocks3 and one block of RAM (with \(C=4\) consecutive groups), for every resource-aware structure, \(\mathbf{w}_{i}\). Footnote 3: Multiplication is implemented using LUT for precisions smaller than 10 bits.
* Layer quantized to 16 bits, with DSP-aware structures would map to one DSP, for every resource-aware structure, \(\mathbf{w}_{i}\).
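A minimal sketch of a resource estimation function covering exactly these three cases (illustrative only; the general function also depends on the RF and the per-layer strategy):

```python
def structure_cost(precision: int, bram_aware: bool) -> tuple:
    """Illustrative (DSPs, BRAMs) cost of one resource-aware
    structure, covering only the three cases above. Precisions
    below 10 bits map multiplications to LUTs, hence zero DSPs."""
    dsp_per_mult = 0 if precision < 10 else 1
    if bram_aware:
        # Several consecutive DSP groups share one block of RAM.
        return dsp_per_mult * (36 // precision), 1
    return dsp_per_mult, 0

assert structure_cost(18, bram_aware=True) == (2, 1)
assert structure_cost(9, bram_aware=True) == (0, 1)
assert structure_cost(16, bram_aware=False) == (1, 0)
```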
The resource estimation function is, in general, a vector-valued function, considering \(k\) inputs (dimensionality of resource-aware structures) with \(m\) outputs. The dimensionality of resource-aware structures varies across layers, determined from the RF, precision and strategy. Considering \(m\) outputs enables modeling of several hardware resources. Given training data, \(\mathcal{D}\), and the network loss function, \(\mathcal{L}(\mathbb{W};\mathcal{D})\), we formulate pruning as a multi-objective optimization problem of selecting a subset of weights, \(\hat{\mathbb{W}}\subseteq\mathbb{W}\), minimizing network loss and overall resource utilization:
\[\min_{\hat{\mathbb{W}}\subseteq\mathbb{W}}\ \mathcal{L}(\hat{\mathbb{W}},\mathcal{D}),\ \sum_{\mathbf{w}_{i}\in\hat{\mathbb{W}}}R(\mathbf{w}_{i}) \tag{2}\]
We transform Equation 2 by minimizing the loss, while restricting the overall resource utilization, using the \(\epsilon-\)constraint method [31]:
\[\min_{\hat{\mathbb{W}}}\ \mathcal{L}(\hat{\mathbb{W}},\mathcal{D})\] (3) subject to \[\sum_{\mathbf{w}_{i}\in\hat{\mathbb{W}}}R(\mathbf{w}_{i})\preceq \mathbf{c}\] \[\hat{\mathbb{W}}\subseteq\mathbb{W}\]
The problem of minimizing network loss obtained from selecting a subset of network weights is often approximated by selecting the subset of weights with the highest magnitude [13, 14, 15, 16], followed by fine-tuning of the selected weights:
\[\max_{\hat{\mathbb{W}}}\ \sum_{\mathbf{w}_{i}\in\hat{\mathbb{W}}}\frac{\left\|\mathbf{w}_{i}\right\|}{\max_{\mathbf{w}_{j}\in L}\left\|\mathbf{w}_{j}\right\|}\] (4) subject to \[\sum_{\mathbf{w}_{i}\in\hat{\mathbb{W}}}R(\mathbf{w}_{i})\preceq \mathbf{c}\] \[\hat{\mathbb{W}}\subseteq\mathbb{W}\]
Fig. 3: Mapping of weights to DSP with RF=3.
Considering the norm of each structure would lead to bias towards highly-dimensional structures or layers with large weights. Therefore, the norm of every structure, \(\mathbf{w}_{i}\), is normalized, with respect to the largest norm in the same layer, \(L\). To solve the combinatorial optimization problem in Equation 4, we consider the 0-1 knapsack problem:
\[\max_{\mathbf{x}}\mathbf{v}^{T}\mathbf{x}\] (5) subject to \[\mathbf{u}^{T}\mathbf{x}\leq c\] \[\mathbf{x}\in\{0,1\}^{n}\]
The problem considers \(n\) items, each with value \(v_{i}\) and weight \(u_{i}\). The objective of the knapsack problem is to select a subset of the items, maximizing overall value while keeping the selected weight under a certain limit. For resource-aware pruning, each item is a resource-aware tensor structure, \(\mathbf{w}_{i}\). Item values are equal to the normalized magnitude, \(\frac{\|\mathbf{w}_{i}\|}{\max_{\mathbf{w}_{j}\in L}\|\mathbf{w}_{j}\|}\). Item weights are equal to the resource consumption, \(R(\mathbf{w}_{i})\). By solving the knapsack problem, the most important weights, maximizing accuracy, are selected, while the rest are discarded in order to meet the resource budget. The solution of the knapsack problem is such that:
\[x_{i}=\begin{cases}0&\text{if }\mathbf{w}_{i}\text{ pruned}\\ 1&\text{otherwise}\end{cases} \tag{6}\]
The knapsack problem readily extends to multiple dimensions (MDKP):
\[\max_{\mathbf{x}}\mathbf{v}^{T}\mathbf{x}\] (7) subject to \[\mathbf{U}\mathbf{x}\preceq\mathbf{c} \tag{8}\] \[\mathbf{x}\in\{0,1\}^{n}\]
MDKP enables the modeling of multiple resource objectives; in our case, we model DSP and BRAM. The 1-dimensional knapsack problem can effectively be solved using a fully polynomial-time approximation scheme (FPTAS) with dynamic programming, as well as branching methods. The MDKP has no FPTAS unless \(P=NP\) [32]. We find that, for problems of our size, it can be effectively solved using branch-and-cut methods available from the OR-Tools library [33].
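As an illustration, the sketch below selects the structures to keep with OR-Tools' knapsack solver. It is a simplified stand-in for the actual integration, using the classic `pywrapknapsack_solver` Python API (newer OR-Tools releases expose the same solver under `ortools.algorithms.python.knapsack_solver`); since the solver works on integers, the real-valued magnitudes are scaled and rounded.

```python
from ortools.algorithms import pywrapknapsack_solver

def select_structures(values, costs, capacities):
    """Pick the resource-aware structures to KEEP by solving the
    multi-dimensional 0-1 knapsack. `values[i]` is the normalized
    magnitude of structure i, `costs[d][i]` its usage of resource d,
    and `capacities[d]` the budget for resource d."""
    solver_type = (pywrapknapsack_solver.KnapsackSolver
                   .KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER)
    solver = pywrapknapsack_solver.KnapsackSolver(
        solver_type, "resource_aware_pruning")
    # The solver operates on integers, so scale the real magnitudes.
    int_values = [int(round(v * 1_000_000)) for v in values]
    solver.Init(int_values, costs, capacities)
    solver.Solve()
    return [i for i in range(len(values))
            if solver.BestSolutionContains(i)]

# Toy example: four structures with DSP and BRAM costs,
# and a budget of 3 DSPs and 2 BRAMs.
keep = select_structures([0.9, 0.2, 0.7, 0.4],
                         [[2, 1, 2, 1], [1, 0, 1, 0]], [3, 2])
print(keep)
```

In this toy example, `keep` contains the indices [0, 3]: the highest-magnitude structures that fit within the budget of 3 DSPs and 2 BRAMs; the others would be pruned.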
### _End-to-end algorithm_
We implement magnitude-based iterative pruning, described in Algorithm 2. The algorithm requires a pre-trained neural network, \(\mathcal{N}\), training and validation data, the target sparsity \(\mathbf{s_{T}}\), and the hardware and training configuration. The target sparsity is a vector with dimensionality equal to the number of resources considered. In our case, we consider DSP-aware pruning (one-dimensional) and two-dimensional DSP- and BRAM-aware pruning. Pruning is implemented iteratively until the target resource consumption is achieved or the network performance on the validation set drops below the relative tolerance. The training and hardware configuration includes the layer-wise RF, precision, strategy, descent algorithm, batch size, _etc._
Key steps of resource-aware pruning are:
* Regularization: a resource-aware regularization loss is added to the network loss. Through regularization, the objective is to shift weights sharing the same hardware resource towards zero. Similar to Wen _et al._[16], we implement group regularization; however, unlike Wen _et al._, weights are not grouped per filter but per hardware resource (a sketch of such a regularizer follows this list).
* Initialization: Resource-aware structures are identified. The initial sparsity4 is set to zero, and the baseline resource utilization is obtained as the sum of the resource utilization of all the structures. Finally, the baseline performance on the validation set is calculated. Footnote 4: Sparsity is relative to the baseline resource utilization, not the total number of parameters.
* Iterative pruning: The network is pruned iteratively by solving the MDKP, which selects the most important resource-aware structures given a resource constraint. The constraint is iteratively reduced by updating the sparsity through a user-defined function, \(f(\cdot)\); in our case, sparsity is incremented by a constant step size. From the MDKP solution, a subset of the network weights, \(\hat{\mathbb{W}}\), is selected and fine-tuned, while the remaining weights, \(\mathbb{W}/\hat{\mathbb{W}}\), are set to zero. Fine-tuning is continuously done with regularization, pushing weights toward zero before updating sparsity.
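For illustration, a minimal Keras/TensorFlow sketch of such a resource-aware group regularizer for a 2-D (dense) kernel; the penalty strength is a placeholder, not a value from the paper:

```python
import tensorflow as tf

def group_lasso(kernel: tf.Tensor, rf: int, strength: float = 1e-4):
    """Group regularization over resource-aware structures: penalize
    the L2 norm of each group of weights sharing a DSP block, so
    entire groups (not individual weights) are pushed towards zero."""
    groups = tf.reshape(tf.transpose(kernel), (-1, rf))
    return strength * tf.reduce_sum(tf.norm(groups, axis=1))

# Added to the task loss during fine-tuning, e.g.:
#   loss = cross_entropy + sum(group_lasso(k, rf) for k in kernels)
```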
```
Input:  neural network N, training data D, validation data D_val,
        target sparsity s_T in R_+^m, tolerance eps,
        hardware and training configuration

identify resource-aware structures W = {w_1, ..., w_n}
s    <- 0
R_B  <- sum_{i=1..n} R(w_i)
b    <- evaluate(N; W, D_val)
p    <- b
while s <= s_T and p >= eps * b do
    v_i <- ||w_i|| / max_{w_j in layer L} ||w_j||   for i = 1, ..., n
    R_i <- R(w_i)                                   for i = 1, ..., n
    solve the MDKP with values v_i, costs U_{:,i} = R_i and
        capacity c = (1 - s) ⊙ R_B, obtaining the selected weights W_sel
    fine-tune the pruned network N(W_sel) with regularization
    p <- evaluate(N; W_sel, D_val)
    s <- f(s)
end while
```
**Algorithm 2** Resource-aware network pruning
To maximize the resource savings from resource-aware pruning, we implement an automated code generation framework that ignores DSP blocks processing constant zeros; this optimization is not done by the HLS compiler. The framework generates HLS RTL tailored to every layer, with multiplications by zero ignored and otherwise equivalent logic to the baseline implementation of matrix multiplication. Finally, hls4ml uses task-level pipelining (i.e. pragma HLS dataflow) in parallel networks, inserting additional first-in, first-out (FIFO) channels, enabling task-level concurrency and reducing resource (LUT) utilization. However, pruning reduces the overall complexity of the network, and therefore we implement system-level pipelining (i.e. pragma HLS pipeline) instead. As confirmed by the results in Section IV, system-level pipelining reduces latency, and the increase in logic utilization is mitigated through pruning. This optimization does not apply to streaming convolutions. The end-to-end flow for resource-aware pruning is shown in Figure 4.
## IV Results
### _Experimental setup_
To evaluate the effects of resource-aware pruning, three benchmark models are considered, with varying complexity, depth and application. Since hls4ml is intended for ultra-low latency applications, we focus on application-specific, shallow networks, using the following benchmarks, summarized in Table I:
* Jet classification for triggering. Jets are collimated showers of sub-atomic particles that occur due to the decay of particle collisions [6]. By considering 16 jet features (multiplicity, momentum, _etc._), the particles can be classified into one of five categories: the interesting observations (\(W\) boson, \(Z\) boson, \(t\) quark) or background events (quark \(q\) and gluon \(g\)). The FC architecture is derived from Duarte _et al._[6], using the data set from Zenodo [34].
* Street View House Numbers (SVHN) [35] classification, with the CNN architecture described by Aarrestad _et al._[10]. The architecture described in [10] was optimized for low-latency inference on the non-trivial digit dataset, by limiting the number of layers and weights.
* Fashion MNIST [36] classification, with a LeNet-like architecture [37]. The input size was changed from 32x32 to 28x28 to match the Fashion MNIST dataset. The 5x5 kernels were replaced with 3x3 kernels and ReLU activation [38] was added after every FC and CONV layer.
The models were trained using Keras [39] with a TensorFlow backend [40], by minimizing categorical cross-entropy using Adam optimization [41]. Models are pruned with constant increments in sparsity until the relative drop in validation accuracy exceeds 2%. The models are synthesized using Vivado and Vivado HLS 2020.1 with hls4ml 0.7.1 delphinium, reporting resource utilization post place-and-route. The target device is a Xilinx Virtex UltraScale+ XCVU9P (xcvu9p-flgb2104-2-e). Latency is reported following co-simulation, simulating the RTL against a set of inputs. In the following sections, BM denotes the baseline model, BP-DSP the DSP-optimized model, and BP-MD the DSP- and BRAM-optimized model obtained from the multi-dimensional pruning described in Section III-A. The pruned models are accelerated using the code generation framework described in Section III-C, and the baseline model is accelerated using the built-in matrix multiplication method described in Algorithm 1.
### _DSP-aware pruning_
First, we consider pruning all the weights processed by the same DSP block, varying the RF. Results for jet and SVHN classification are reported in Tables II and III, respectively. In jet classification, the proposed method achieves a mean reduction of 9.4x and 3.1x in DSP and BRAM utilization, respectively. In SVHN classification, the proposed method achieves a mean reduction of 3.2x and 1.4x in DSP and BRAM utilization, respectively. In both cases, logic utilization is reduced as well. Increasing the RF yields a smaller reduction in DSP utilization: due to the scope of pruning, with a higher RF each pruning step removes more weights, leading to a higher drop in accuracy.
In jet classification, a noticeable drop in inference latency is observed when switching from task-level pipelining to a system-level pipeline. For SVHN classification, the drop in latency is lower, as task-level pipelining must be enabled to ensure data is processed in a FIFO manner. However, through pruning, logic is reduced and the HLS compiler is able to schedule the RTL with a lower latency for the same clock frequency. Pruning can also aid timing closure and reduce congestion by lowering the overall LUT utilization. While DSP-aware pruning primarily targets weights processed by the same DSP, BRAM utilization is also reduced. For high sparsities, consecutive DSP blocks will be
Fig. 4: End-to-end flow of resource-aware pruning.
pruned, corresponding to one block of RAM. The reduction is lower for SVHN classification, since activation tensors are stored in BRAM and DSP-aware pruning has no effect on activation tensors.
### _Multi-dimensional pruning_
Next, we consider multi-dimensional pruning, by pruning all the weights in a single block of RAM, corresponding to several DSP blocks. To aid pruning, the models were quantized using 18-bit precision, instead of the default 16-bit precision. The results for jet classification are reported in Table II, with reductions ranging between 2.3x and 5.2x in BRAM utilization and 3.9x and 11.6x in DSP utilization. Compared to DSP-aware pruning, multi-dimensional pruning achieves higher reductions in block memory, with lower reductions in DSP utilization. Similar to DSP-aware pruning, LUT and FF utilization decreased in all cases.
### _Heterogeneous pruning_
Finally, we consider the example of heterogeneous, multi-dimensional pruning for fast image classification with a LeNet-like architecture. The LeNet model has significantly more parameters than the models used for jet and SVHN classification. To ensure low latency and meet Vivado partitioning and unrolling limits, the hardware configuration of each layer is tuned manually, according to its size and complexity. Table IV summarizes the strategy and RF for every layer, identified by its type (CONV, FC) and position in the network.
CONV layers compute the output for one pixel at a time, performing fewer multiplications over a longer period of time. For a CONV layer with output width \(W\), height \(H\) and reuse factor \(R\), the latency is approximately \(HWR\). Furthermore, CONV layers have a small number of weights, which can be stored in registers (_Latency_ strategy), further reducing the latency while meeting the Vivado partition limits. Therefore, CONV layers are implemented using the _Latency_ strategy with RF=1, performing all multiplications in parallel per output pixel. Since each weight maps to one DSP block and BRAM is not used for storing weights, unstructured pruning can be used to reduce resource utilization in CONV layers.
FC layers have a large number of weights and, as such, cannot perform all multiplications in parallel. However, the latency of FC layers is low and approximately equal to the RF. Furthermore, storing their weights in registers would exceed the Vivado partition limit. Therefore, the _Resource_ strategy, storing weights in BRAM, with the smallest possible RF passing synthesis, is chosen for FC layers. FC layers are quantized to 18 bits and pruned with resource-aware structures, so that each pruned structure saves one block of RAM and two DSP blocks.
The resource utilization \([\text{DSP, BRAM}]\) of individual weights in CONV layers is \([1,0]\) and \([2,1]\) for resource-aware structures in FC layers. This example showcases the power of the knapsack formulation: different layers will have different
resource utilization per target structure and varying contributions to network accuracy. The results of multi-dimensional pruning are summarized in Table V. With a clock period of 10ns, there is a noticeable reduction in DSP utilization, with a slight reduction in BRAM utilization. However, increasing the clock period drives a further reduction in BRAM, since the smaller BRAM savings at 10ns are linked to the duplication of logic by Vivado to close timing. In both cases, the inference was completed in less than 10\(\upmu\)s. Finally, to show the true power of pruning, we accelerate the pruned network on a mid-range accelerator card, a Zynq UltraScale+ MPSoC ZCU102 (FPGA part xczu9eg-ffvb1156-2-e), intended for automotive, industrial and communication applications [42]. While the baseline model (4,175 DSP) is too large to be accelerated with the target device (2,520 DSP), the pruned model was accelerated while using less than a third of the available resources. Through pruning, neural networks can thus be accelerated on lower-end FPGAs, and several concurrent neural networks can be accelerated on the same FPGA.
## V Conclusions
We propose a novel pruning method, modeled using the knapsack problem with resource-aware tensor structures, achieving reductions of up to 92% in DSP utilization and up to 81% in BRAM utilization on particle physics benchmarks and fast image classification. Furthermore, while primarily targeting DSP and BRAM reduction, the proposed method has been shown effective in reducing inference latency and logic utilization. The proposed methods have been open-sourced, building on top of hls4ml, enabling long-term contributions as well as applications in other fields. A possible direction for future work is the integration of resource-aware pruning with quantization-aware training; recent work [43, 44, 45] aimed to combine the two methods, maximizing resource savings.
## VI Acknowledgments
This work is supported by Engineering and Physical Sciences Research Council (EPSRC) grant EP/S030069/1 and NSF Institute A3D3, NSF 2117997.
|
2305.17205 | Ghost Noise for Regularizing Deep Neural Networks | Batch Normalization (BN) is widely used to stabilize the optimization process
and improve the test performance of deep neural networks. The regularization
effect of BN depends on the batch size and explicitly using smaller batch sizes
with Batch Normalization, a method known as Ghost Batch Normalization (GBN),
has been found to improve generalization in many settings. We investigate the
effectiveness of GBN by disentangling the induced ``Ghost Noise'' from
normalization and quantitatively analyzing the distribution of noise as well as
its impact on model performance. Inspired by our analysis, we propose a new
regularization technique called Ghost Noise Injection (GNI) that imitates the
noise in GBN without incurring the detrimental train-test discrepancy effects
of small batch training. We experimentally show that GNI can provide a greater
generalization benefit than GBN. Ghost Noise Injection can also be beneficial
in otherwise non-noisy settings such as layer-normalized networks, providing
additional evidence of the usefulness of Ghost Noise in Batch Normalization as
a regularizer. | Atli Kosson, Dongyang Fan, Martin Jaggi | 2023-05-26T18:53:35Z | http://arxiv.org/abs/2305.17205v2 | # Ghost Noise for Regularizing Deep Neural Networks
###### Abstract
Batch Normalization (BN) is widely used to stabilize the optimization process and improve the test performance of deep neural networks. The regularization effect of BN depends on the batch size and explicitly using smaller batch sizes with Batch Normalization, a method known as Ghost Batch Normalization (GBN), has been found to improve generalization in many settings. We investigate the effectiveness of GBN by disentangling the induced "Ghost Noise" from normalization and quantitatively analyzing the distribution of noise as well as its impact on model performance. Inspired by our analysis, we propose a new regularization technique called Ghost Noise Injection (GNI) that imitates the noise in GBN without incurring the detrimental train-test discrepancy effects of small batch training. We experimentally show that GNI can provide a greater generalization benefit than GBN. Ghost Noise Injection can also be beneficial in otherwise non-noisy settings such as layer-normalized networks, providing additional evidence of the usefulness of Ghost Noise in Batch Normalization as a regularizer.
+
Footnote †: Code available at [https://github.com/epfl/ghost-noise](https://github.com/epfl/ghost-noise)
## 1 Introduction
The use of normalization methods has become widespread in deep learning since the introduction of Batch Normalization [12] (BN) in 2015. For convolutional network training, the batch normalization of a tensor \(\mathbf{X}\in\mathbb{R}^{N\times C\times H\times W}\) with batch size \(N\), \(C\) channels, height \(H\), and width \(W\), is given by:
\[\hat{\mathbf{X}}=\frac{\mathbf{X}-\mathbf{\mu}}{\sqrt{\mathbf{\sigma}^{2}+\varepsilon}}, \qquad\mathbf{\mu}=\frac{1}{NHW}\sum_{n,h,w}\mathbf{X}_{n,:,h,w},\quad\mathbf{\sigma}^{2}= \frac{1}{NHW}\sum_{n,h,w}(\mathbf{X}-\mathbf{\mu})^{2}_{n,:,h,w} \tag{1}\]
where \(\mathbf{\mu}\in\mathbb{R}^{1\times C\times 1\times 1}\) is the mean, \(\mathbf{\sigma}\in\mathbb{R}^{1\times C\times 1\times 1}\) the variance, and operations are broadcasted and performed elementwise. Following a normalization operation, we typically have an affine transformation \(\mathbf{Y}=\mathbf{\gamma}\odot\hat{\mathbf{X}}+\mathbf{\beta}\) with a trainable gain \(\mathbf{\gamma}\in\mathbb{R}^{1\times C\times 1\times 1}\) and bias \(\mathbf{\beta}\in\mathbb{R}^{1\times C\times 1\times 1}\). The mean and variance are computed over the batch dimension causing the final prediction of a given sample to depend on others in the batch. This cross-sample dependency is undesirable for inference, e.g. if test data arrives in small correlated batches or even one sample at a time. In practice, \(\mathbf{\mu},\mathbf{\sigma}\) from Equation 1 are replaced with \(\hat{\mathbf{\mu}}\) and \(\hat{\mathbf{\sigma}}\) that are exponential moving averages of the \(\mathbf{\mu},\mathbf{\sigma}\) used during training. Each sample contributes to its own normalization statistics during training but not inference, causing a discrepancy that can degrade performance. This can be addressed by modifying the deployment network to take the statistics of the current sample into account as done in EvalNorm [19] and Inference Example Weighing [21], at a slightly increased computation cost.
The batch dependency of BN can be detrimental during training under certain circumstances. This is especially true when the available batch size is small or the elements of the batch are correlated [25], where the statistics may deviate significantly from the population statistics. This can lead to a mismatch between the statistics used for normalization during training and those used during inference. A number of alternative methods such as Layer Normalization [1], Instance Normalization [10], Group Normalization [24], Online Normalization [4], Batch Renormalization [11] and Weight
Normalization [18] have been proposed to overcome some of these issues. These either normalize across other dimensions of the tensor \(\mathbf{X}\), are applied to the weights instead, or make use of the history over prior batches to augment the effective size of the batch.
The stochasticity induced by the cross-sample dependency of BN can sometimes have a beneficial regularization effect, which is frequently observed but poorly understood. From a geometrical point of view, Keskar et al. [13] empirically found that training with small batches arrives at flatter minima, which they argue contributes to better generalization. Hoffer et al. [8] observed that performing batch normalization over smaller subgroups of the batch could improve generalization. They call these smaller batches _ghost batches_ and this method of explicitly using smaller batches in batch normalization is known as **Ghost Batch Normalization (GBN)** or sometimes just Ghost Normalization. Summers and Dinneen [21] also observed this effect and recommended the use of GBN. The regularization effect arises from the noise in \(\mathbf{\mu},\mathbf{\sigma}\) computed over a batch compared to the corresponding statistics for the full dataset. This noise, which we will refer to as **Ghost Noise**, increases for smaller batches and may explain the improved generalization with Ghost Batch Normalization. Figure 1 shows an example of how the ghost batch size can affect performance.
In this work we analyze the ghost noise and the train-test discrepancy in Ghost Batch Normalization. The strength of both effects increases with smaller batch sizes, creating a trade-off between the regularization from the ghost noise and the detrimental effect of train-test discrepancy. We propose a new method, **Ghost Noise Injection (GNI)**, which decouples the ghost noise from the normalization process, allowing us to increase the regularization strength without the need to perform normalization using small batch sizes thereby avoiding their negative effects. It also enables us to apply it in new settings, such as with Layer Normalization, that do not induce noise on its own.
Experimentally, we find that Ghost Noise Injection can provide a greater regularization effect than Ghost Batch Normalization. We ablate the method and find that both the scaling and shifting noise of GNI contribute to its effectiveness. In convolutional neural networks, the noise magnitude can vary between channels and layers depending on their distribution, in particular how much of their variance comes from the spatial extent of a single sample compared to the variance between samples. This "adaptivity" may be important, as we find that simpler IID noise methods are unable to match the regularization effect of GNI. This includes several dropout variants as well as IID noise based on our analysis of the ghost noise distribution.
To summarize, our contributions are as follows:
* We study the batch size dependency of the regularization effect of batch normalization. We show that both the shift and scale components of the induced noise can positively contribute to the generalization performance, and that the distribution of noise is channel and layer dependent in convolutional networks.
* We propose a novel regularization method, Ghost Noise Injection (GNI) that decouples the normalization and noise effects of Batch Normalization.
* We perform extensive experiments to validate the effectiveness of our proposed regularizer, showing that it can outperform Ghost Batch Normalization as well as simpler IID noise methods, such as dropout variants.
## 2 Methods and Analysis
### Ghost batch normalization
We can divide a batch \(\mathbf{X}\in\mathbb{R}^{B\times C}\) into \(g=\frac{B}{N}\) ghost batches (i.e. non-overlapping subgroups) \(\{\mathbf{X}_{1},\mathbf{X}_{2},...,\mathbf{X}_{g}\}\), of size \(N\) each. Ghost Batch Normalization is equivalent to performing standard Batch Normalization on each ghost batch independently. For simplicity, we drop the full convolutional tensor notation from this point onward, focusing on the simpler fully connected case. We emphasize that the normalization statistics are still computed on a per-feature basis. Writing a
Figure 1: Impact of ghost batch size on ResNet18 validation accuracy (mean\(\pm\)std) on CIFAR-100.
ghost batch as \(\mathbf{X}_{i}=[\mathbf{x}_{1},\mathbf{x}_{2},..,\mathbf{x}_{N}]\in\mathbb{R}^{N\times C}\), with \(\mathbf{x}_{j}\in\mathbb{R}^{1\times C}\), the normalization becomes:
\[\widetilde{\mathbf{X}}_{i}=\frac{\mathbf{X}_{i}-\mathbf{\mu}_{i}}{\mathbf{\sigma}_{i}},\qquad \qquad\mathbf{\mu}_{i}=\frac{1}{N}\sum_{j}\mathbf{x}_{j}\in\mathbb{R}^{1\times C}, \qquad\mathbf{\sigma}_{i}^{2}=\frac{1}{N}\sum_{j}(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{2}\in \mathbb{R}^{1\times C} \tag{2}\]
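For reference, a minimal PyTorch sketch of GBN for the fully connected case (our own illustration; the affine transform and running statistics kept by a full implementation are omitted):

```python
import torch

def ghost_batch_norm(x: torch.Tensor, ghost: int, eps: float = 1e-5):
    """GBN for a (B, C) tensor: split the batch into ghost batches of
    size N = ghost and normalize each with its own per-feature
    statistics."""
    b, c = x.shape
    xg = x.view(b // ghost, ghost, c)                  # (g, N, C)
    mu = xg.mean(dim=1, keepdim=True)
    var = xg.var(dim=1, unbiased=False, keepdim=True)
    return ((xg - mu) / torch.sqrt(var + eps)).view(b, c)
```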
The computation over smaller batch sizes leads to higher variance in \(\mathbf{\mu},\mathbf{\sigma}\). Here we note that the term "batch" is highly overloaded and there are at least three different batch sizes of interest:
* Accelerator batch size is the number of samples each worker (e.g. GPU) uses during a single forward / backward pass through the network.
* Ghost batch size, or normalization batch size, is the number of samples over which normalization statistics are calculated.
* Optimization batch size is the number of samples contributing to each optimizer update.
With local accumulation or distributed training, we can have an optimization batch size larger than the accelerator batch size. Likewise, the normalization batch size can also exceed the accelerator batch size, e.g. when using synchronized batch normalization in distributed training.
### Reducing train-test discrepancy
Smaller normalization batch sizes result in increased stochasticity with a potential regularization impact. At the same time, these smaller batches increase the contribution of each sample to the \(\mathbf{\mu},\mathbf{\sigma}\) used in the normalization. This does not match the test time behavior, where pre-calculated running statistics are used to normalize incoming samples. To reduce this over-dependency, we explore an alternative where we prevent each sample from contributing to the normalization statistics used for itself. Instead, we compute a different set of statistics for each sample, where we consider all other elements in the ghost batch. We refer to this technique as Exclusive Batch Normalization (XBN). This way, the normalization statistics during training no longer depend on the sample itself, better matching the situation at test time. We can write the XBN mean and variance computation as:
\[\mathbf{\mu}_{i}=\frac{1}{N-1}\sum_{j\neq i}\mathbf{x}_{j},\qquad\mathbf{\sigma}_{i}^{2}= \frac{1}{N-1}\sum_{j\neq i}(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{2} \tag{3}\]
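A minimal PyTorch sketch of these leave-one-out statistics, using the identity that the variance over \(j\neq i\) equals the mean of squares minus the squared mean over \(j\neq i\):

```python
import torch

def exclusive_batch_norm(x: torch.Tensor, ghost_size: int, eps: float = 1e-5) -> torch.Tensor:
    # x: (B, C); each sample is normalized with statistics from the other N-1
    # elements of its ghost batch (Equation 3).
    B, C = x.shape
    xg = x.view(B // ghost_size, ghost_size, C)
    n = ghost_size
    mu = (xg.sum(dim=1, keepdim=True) - xg) / (n - 1)               # leave-one-out mean
    ex2 = ((xg ** 2).sum(dim=1, keepdim=True) - xg ** 2) / (n - 1)  # leave-one-out E[x^2]
    var = (ex2 - mu ** 2).clamp_min(0)                              # leave-one-out variance
    return ((xg - mu) / torch.sqrt(var + eps)).view(B, C)
```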
Importantly, XBN maintains a similar level of noise as GBN while reducing the discrepancy between train and test time. However, removing the self-dependency of the normalization has a major downside: the self-dependency bounds the output range, which can improve stability. Without it, the output range can grow arbitrarily, destabilizing training. We empirically observe this behavior for smaller batch sizes, but when the batch size is big enough to support stable training, we do observe a test accuracy boost, likely due to the reduced train-test discrepancy. Below we analyze the output range for GBN vs XBN.
**Bounded output range of GBN:** The GBN output corresponding to an input \(\mathbf{x}_{i}\) is given by:
\[\frac{\mathbf{x}_{i}-\frac{1}{N}\sum_{j}\mathbf{x}_{j}}{\sqrt{\frac{1}{N}\sum_{j}(\bm {x}_{j}-\frac{1}{N}\sum_{t}\mathbf{x}_{t})^{2}+\varepsilon}}=\frac{\mathbf{x}_{i}- \frac{1}{N}\sum_{j}\mathbf{x}_{j}}{\sqrt{\frac{1}{N}\left((\mathbf{x}_{i}-\frac{1}{N} \sum_{t}\mathbf{x}_{t})^{2}+\sum_{j\neq i}(\mathbf{x}_{j}-\frac{1}{N}\sum_{t}\mathbf{x}_{t })^{2}\right)+\varepsilon}} \tag{4}\]
For analysis purposes, we treat \(\varepsilon\) as 0 for now. The reciprocal of Equation 4 is:
\[\frac{\sqrt{\frac{1}{N}\sum_{j}(\mathbf{x}_{j}-\frac{1}{N}\sum_{t}\mathbf{x}_{t})^{2}}}{\mathbf{x}_{i}-\frac{1}{N}\sum_{j}\mathbf{x}_{j}}=\sqrt{\frac{1}{N}\left(1+\sum_{j\neq i}\frac{(\mathbf{x}_{j}-\frac{1}{N}\sum_{t}\mathbf{x}_{t})^{2}}{(\mathbf{x}_{i}-\frac{1}{N}\sum_{t}\mathbf{x}_{t})^{2}}\right)}\geq\frac{1}{\sqrt{N}} \tag{5}\]
which implies that the magnitude of the output is bounded by the square root of the ghost batch size.
**Unbounded output range of XBN:** With XBN the output for a single element \(\mathbf{x}_{i}\) will be:
\[\frac{\mathbf{x}_{i}-\frac{1}{N-1}\sum_{j\neq i}\mathbf{x}_{j}}{\sqrt{\frac{1}{N-1} \sum_{j\neq i}(\mathbf{x}_{j}-\frac{1}{N-1}\sum_{t\neq i}\mathbf{x}_{t})^{2}+\varepsilon}} \tag{6}\]
The denominator here has no dependency on \(\mathbf{x}_{i}\) and, if we ignore \(\varepsilon\), it can therefore be arbitrarily small in comparison, making the output unbounded. For example, when all \(\mathbf{x}_{j},j\neq i\) are identical, the denominator of Equation 6 will be \(\sqrt{\varepsilon}\), which is close to 0 in practice (the default value in PyTorch is \(\varepsilon=10^{-5}\)). Large values like these can destabilize training. We observe this in practice when using smaller batch sizes, likely due to \(\mathbf{\sigma}_{i}^{2}\) randomly being close to zero for some ghost batches. It may be possible to mitigate this issue, for example by clamping the computed sigma or the resulting output, but we do not pursue this further here. The preceding analysis indicates that self-dependency can be beneficial for stability but larger batch sizes are preferable to minimize the resulting train-test discrepancy. We therefore seek alternative ways of obtaining the Ghost Noise of small batches that do not rely on decreasing the normalization batch size. This would confer stability while also avoiding a significant train-test discrepancy by keeping the self-dependency small.
### Modelling Ghost Batch Normalization as Double Normalization
One of our key insights is that performing a standard batch normalization before ghost batch normalization does not change the output of the GBN. For a batch \(\mathbf{X}\), batch normalization BN and ghost batch normalization GBN we can write this as:
\[\text{GBN}(\mathbf{X})=\text{GBN}(\text{BN}(\mathbf{X})) \tag{7}\]
This follows from the fact that normalization is invariant to affine transformations of the inputs, i.e. \(f(\mathbf{X})=\mathbf{a}\mathbf{X}+\mathbf{b}\) where \(\mathbf{a},\mathbf{b}\) are constant vectors. When \(\varepsilon\) is small enough to be ignored, we have:
\[\text{BN}(\mathbf{a}\mathbf{X}+\mathbf{b})=\frac{(\mathbf{a}\mathbf{X}+\mathbf{b})-\mathbf{\mu}(\mathbf{a}\mathbf{X}+\mathbf{b})}{\sqrt{\mathbf{\sigma}^{2}(\mathbf{a}\mathbf{X}+\mathbf{b})+\varepsilon}}=\frac{\mathbf{a}\mathbf{X}+\mathbf{b}-\mathbf{a}\mathbf{\mu}(\mathbf{X})-\mathbf{b}}{\sqrt{\mathbf{a}^{2}\mathbf{\sigma}^{2}(\mathbf{X})+\varepsilon}}=\frac{\mathbf{X}-\mathbf{\mu}(\mathbf{X})}{\sqrt{\mathbf{\sigma}^{2}(\mathbf{X})+\varepsilon/\mathbf{a}^{2}}}\approx\text{BN}(\mathbf{X}) \tag{8}\]
where we treat \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) as functions. For a given batch, BN is an affine transformation with:
\[\mathbf{a}=(\mathbf{\sigma}^{2}(\mathbf{X})+\varepsilon)^{-\frac{1}{2}},\qquad\mathbf{b}=-(\mathbf{\sigma}^{2}(\mathbf{X})+\varepsilon)^{-\frac{1}{2}}\mathbf{\mu}(\mathbf{X}) \tag{9}\]
The coefficients of course depend on \(\mathbf{X}\), but that does not matter for the following normalization.
This decomposition of GBN into two successive normalization operations lets us isolate the differences between standard batch normalization and ghost batch normalization. Ignoring \(\varepsilon\), the double normalization \(\text{GBN}(\text{BN}(\mathbf{X}))\) can be formulated as:
\[\hat{\mathbf{X}}:=\text{BN}\left(\mathbf{X}\right)=\frac{\mathbf{X}-\mathbf{\mu}}{\mathbf{\sigma}}=[\hat{\mathbf{X}}_{1},..,\hat{\mathbf{X}}_{g}],\qquad\widetilde{\mathbf{X}}_{j}:=\text{GBN}\left(\hat{\mathbf{X}}\right)_{j}=\frac{\hat{\mathbf{X}}_{j}-\hat{\mathbf{\mu}}_{j}}{\hat{\mathbf{\sigma}}_{j}} \tag{10}\]
where we split the normalized batch into \(g\) ghost batches and show the following GBN for the \(j\)-th subgroup. We can now write the \(\mathbf{\mu}_{j}\) and \(\mathbf{\sigma}_{j}\) of the original \(\text{GBN}(\mathbf{X})\) setup in Equation 2 as:
\[\mathbf{\mu}_{j}=\mathbf{\mu}+\mathbf{\sigma}\hat{\mathbf{\mu}}_{j},\qquad\mathbf{\sigma}_{j}=\bm {\sigma}\hat{\mathbf{\sigma}}_{j} \tag{11}\]
GBN would be equivalent to BN if \(\hat{\mathbf{\mu}}_{j}=0\) and \(\hat{\mathbf{\sigma}}_{j}=1\). This happens when the ghost batches are identical, but generally they contain different samples, resulting in random variations in \(\hat{\mathbf{\mu}}_{j}\) and \(\hat{\mathbf{\sigma}}_{j}\). These fluctuations cause noise with two components, additive (or shifting) noise from \(\hat{\mathbf{\mu}}_{j}\) and multiplicative (or scaling) noise from \(\hat{\mathbf{\sigma}}_{j}\). The increased train-test discrepancy of GBN comes from the dependency of \(\hat{\mathbf{\mu}}_{j}\) and \(\hat{\mathbf{\sigma}}_{j}\) on a given sample.
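The identity in Equation 7 is easy to verify numerically; the following minimal sketch checks it on random data (up to small \(\varepsilon\) effects):

```python
import torch

def bn(x, eps=1e-5):
    return (x - x.mean(0)) / torch.sqrt(x.var(0, unbiased=False) + eps)

def gbn(x, n, eps=1e-5):
    xg = x.view(x.shape[0] // n, n, -1)
    out = (xg - xg.mean(1, keepdim=True)) / torch.sqrt(xg.var(1, unbiased=False, keepdim=True) + eps)
    return out.view_as(x)

x = torch.randn(256, 8) * 3.0 + 1.0        # a batch with non-trivial mean and scale
print(torch.allclose(gbn(x, 16), gbn(bn(x), 16), atol=1e-4))  # True
```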
### Ghost Noise Injection
The preceding analysis isolates \(\hat{\mathbf{\mu}}_{j}\) and \(\hat{\mathbf{\sigma}}_{j}\) in Equation 11 as the source of Ghost Noise and the train-test discrepancy from self-dependency. We propose replacing the Ghost Batch Normalization in the double normalization setup with a new method, Ghost Noise Injection (GNI) given by:
\[\mathbf{Y}=\frac{\hat{\mathbf{X}}-\mathbf{m}}{\mathbf{s}} \tag{12}\]
where \(\mathbf{m}\) and \(\mathbf{s}\) are independent and identically distributed random vectors corresponding to each sample in the input \(\hat{\mathbf{X}}\). The elements of \(\mathbf{m}\) and \(\mathbf{s}\) are computed as the mean and standard deviation of randomly sampled (with replacement) subsets of size \(N\) from \(\hat{\mathbf{X}}\). We refer to the subset size as the ghost batch size, as for GBN. We treat the resulting \(\mathbf{m}\) and \(\mathbf{s}\) as pure noise, i.e. _we don't backpropagate through their computation_. GNI has two main advantages over GBN (a minimal implementation sketch follows the list below):
* Each sample is unlikely to contribute strongly to its own normalization statistics, reducing the train-test discrepancy for a given level of noise (similar to XBN).
* We can freely select normalization batch size \(N\), and it does not need to divide the accelerator batch size used.
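A minimal PyTorch sketch of GNI for the fully connected case, applied to the output \(\hat{\mathbf{X}}\) of batch normalization; the subset statistics are detached so that no gradient flows through the noise:

```python
import torch

def ghost_noise_injection(x_hat: torch.Tensor, ghost_size: int,
                          training: bool = True, eps: float = 1e-5) -> torch.Tensor:
    if not training:
        return x_hat                                   # noise is only applied during training
    B, C = x_hat.shape
    # For each sample, draw a subset of size N from the batch, with replacement.
    idx = torch.randint(0, B, (B, ghost_size), device=x_hat.device)
    subsets = x_hat.detach()[idx]                      # (B, N, C), detached from the graph
    m = subsets.mean(dim=1)                            # per-sample shift noise
    s = torch.sqrt(subsets.var(dim=1, unbiased=False) + eps)  # per-sample scale noise
    return (x_hat - m) / s
```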
**Wider Applicability of GNI:** Ghost Noise Injection does not necessarily need to follow batch normalization. In this case we can instead perform GNI as \(\mathbf{Y}=\frac{\hat{\mathbf{X}}-(\mathbf{m}-\mathbf{\mu})}{\mathbf{s}/\mathbf{\sigma}}\) where \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) are computed like in batch normalization (but without backpropagating through them). This lets us decouple the noise from the normalization, allowing us to inject ghost noise anywhere in the network.
### The Distribution of Ghost Noise
The estimated mean \(\hat{\mathbf{\mu}}_{i}\) and standard deviation \(\hat{\mathbf{\sigma}}_{i}\) from the \(i\)-th ghost batch, can be interpreted as bootstrapped statistics derived from the empirical distribution \(P_{\hat{\mathbf{X}}}\). Following the normalization in Equation 10, the variable \(\hat{\mathbf{X}}\) exhibits a zero mean and unit variance. Assuming that the individual elements of \(\hat{\mathbf{X}}\) are normally distributed allows us to derive an analytical distribution for the mean and variance, giving us additional insights into the workings of GBN. For this section we focus on the distribution for a single channel \(c\) in the \(g\)-th ghost batch, which we denote \(\hat{\mu}_{gc}\) and \(\hat{\sigma}_{gc}\).
#### 2.5.1 Fully Connected Layers
If we assume that the output of batch normalization is independent and normally distributed, the normalization statistics \(\hat{\mathbf{\mu}}_{i}\) and \(\hat{\mathbf{\sigma}}_{i}^{2}\) are computed over a sample \(\hat{\mathbf{X}}=[\hat{\mathbf{x}}_{1},..,\hat{\mathbf{x}}_{N}]\) of \(N\) variables from \(\mathcal{N}(0,1)\). We can then derive the distribution of the sample mean, and therefore the shift noise, as:
\[\hat{\mu}_{gc}=\frac{1}{N}\sum_{i=1}^{N}\hat{x}_{ic}\sim\mathcal{N}\left(0, \frac{1}{N}\right) \tag{13}\]
Since the sum of standard normally distributed variables follows a chi-squared distribution we get:
\[\hat{\sigma}_{gc}^{2}=\frac{1}{N}\sum_{i=1}^{N}(\hat{x}_{ic}-0)^{2}\sim\frac{1 }{N}\chi^{2}(N) \tag{14}\]
This clearly shows the dependency of the noise on the ghost batch size. Larger ghost batch sizes correspond to less noise, explaining their reduced generalization benefit as observed in e.g. Figure 1.
**Analytical Ghost Noise Injection:** From the above analysis, instead of computing ghost statistics from a sampled batch, we could directly sample \(\hat{\mathbf{\mu}}_{g}\) and \(\hat{\mathbf{\sigma}}_{g}^{2}\) from the analytical distribution for each channel. Let \(\hat{\mathbf{X}}\) be \(\mathbf{X}\) after batch normalization like before. For each channel \(c\), we can then inject noise using:
\[\frac{\hat{\mathbf{X}}_{c}-\mu_{c}}{\sigma_{c}}\qquad\text{with }\mu_{c}\sim P_{\hat{\mu}_{gc}},\ \sigma_{c}^{2}\sim P_{\hat{\sigma}_{gc}^{2}} \tag{15}\]
\(P_{\hat{\mu}_{gc}}\) is given in Equation 13 and \(P_{\hat{\sigma}_{gc}^{2}}\) is given in Equation 14. The hyperparameter \(N\) is used to vary the amount of noise, corresponding to different ghost batch sizes.
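A minimal PyTorch sketch of this analytical variant, assuming the fully connected distributions of Equations 13 and 14 and sampling the noise independently for each sample and channel:

```python
import torch

def analytical_gni(x_hat: torch.Tensor, n: int) -> torch.Tensor:
    # n plays the role of the ghost batch size and controls the noise strength.
    B, C = x_hat.shape
    mu = torch.randn(B, C, device=x_hat.device) / n ** 0.5   # mu ~ N(0, 1/N), Eq. 13
    chi2 = torch.distributions.Chi2(torch.tensor(float(n)))
    sigma2 = chi2.sample((B, C)).to(x_hat.device) / n        # sigma^2 ~ chi^2(N)/N, Eq. 14
    return (x_hat - mu) / sigma2.sqrt()
```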
**Comparison to Gaussian Dropout:** Here we would like to point out the similarity between the scaling noise and Gaussian Dropout which is written as:
\[\hat{\mathbf{X}}\cdot\mathbf{t}\qquad\text{with }\mathbf{t}\sim\mathcal{N}(1, \tfrac{p}{1-p}) \tag{16}\]
where \(\mathbf{t}\) is sampled element-wise from the Gaussian distribution and \(p\) is interpreted like the drop probability in standard dropout (\(p=0\) for no dropout). Both the scaling noise and Gaussian dropout are multiplicative with similar but slightly different distributions.
#### 2.5.2 Convolutional Layers
In convolutional neural networks the batch statistics are computed across both the batch and spatial dimensions (e.g. the height and width of an image). The data can therefore vary in two separate ways: between samples across the batch dimension, i.e. _inter-sample variance_, and within a single sample across the spatial dimension, i.e. _intra-sample variance_. This differs from the fully connected case, where all variance arises across the batch dimension and there is no concept of intra-sample variance.
To investigate the effects of this, we analyze a simple model. We will assume that \(\mathbf{X}\in\mathbb{R}^{B\times C\times I}\), where \(I\) is the spatial dimension, e.g. \(I=H\times W\) where \(H\) is the height and \(W\) the width for image data. We further assume the true mean of the \(b\)-th sample in a specific channel \(c\) is sampled i.i.d. from a normal distribution, i.e. \(\mu_{bc}\overset{i.i.d.}{\sim}\mathcal{N}(0,\sigma_{Bc}^{2})\), where \(\sigma_{Bc}^{2}\) denotes the inter-sample variance and only varies across \(c\), i.e. it is constant across \(B\). We model the \(i\)-th spatial location in the \(b\)-th sample as being sampled from \(x_{bic}\overset{i.i.d.}{\sim}\mathcal{N}(\mu_{bc},\sigma_{Ic}^{2})\), with an intra-sample variance \(\sigma_{Ic}^{2}\) that does not vary across \(I\). After batch normalization, we always have unit variance, which means \(\sigma_{Bc}^{2}+\sigma_{Ic}^{2}=1\) for any \(c\). Now assume we sample a random ghost batch of size \(N\). As the sample mean is an average of all samples, it still follows a normal distribution, and its mean and variance can be calculated as follows:
\[\mathbb{E}[\hat{\mu}_{gc}]=\mathbb{E}\left[\frac{1}{NI}\sum_{n}\sum_{i}x_{nic }\right]=\frac{1}{NI}\sum_{n}\sum_{i}\mathbb{E}[x_{nic}]=0 \tag{17}\]
\[\begin{split}\mathrm{Var}[\hat{\mu}_{gc}]&=\mathbb{E}\left[\left(\frac{1}{NI}\sum_{n}\sum_{i}x_{nic}\right)^{2}\right]\\ &=\mathbb{E}\left[\left(\frac{1}{NI}\sum_{n}\sum_{i}(x_{nic}-\mu_{nc}+\mu_{nc})\right)^{2}\right]\\ &=\frac{1}{N^{2}I^{2}}\sum_{n}\sum_{i}\sigma_{Ic}^{2}+\frac{1}{N^{2}I^{2}}NI^{2}\sigma_{Bc}^{2}\\ &=\frac{1}{NI}\sigma_{Ic}^{2}+\frac{1}{N}\sigma_{Bc}^{2}\end{split} \tag{18}\]
The distribution of shift noise in channel \(c\) can thus be written as:
\[\hat{\mu}_{gc}\sim\mathcal{N}\left(0,\frac{1}{NI}\sigma_{Ic}^{2}+\frac{1}{N} \sigma_{Bc}^{2}\right) \tag{19}\]
For the scale noise in a specific channel \(c\), we have
\[\begin{split}\hat{\sigma}_{gc}^{2}&=\frac{1}{NI}\sum_{i}\sum_{n}(x_{nic}-\mu_{nc}+\mu_{nc}-\mu)^{2}\\ &=\frac{1}{NI}\sum_{i}\sum_{n}\left((x_{nic}-\mu_{nc})^{2}+(\mu_{nc}-\mu)^{2}+2(x_{nic}-\mu_{nc})(\mu_{nc}-\mu)\right)\\ &\approx\frac{1}{NI}\sum_{i}\sum_{n}\sigma_{Ic}^{2}\left(\frac{x_{nic}-\mu_{nc}}{\sigma_{Ic}}\right)^{2}+\frac{1}{N}\sigma_{Bc}^{2}\sum_{n}\left(\frac{\mu_{nc}-\mu}{\sigma_{Bc}}\right)^{2}\end{split} \tag{20}\]

where \(\mu\) denotes the mean of the ghost batch and the cross term is dropped in the last step since it vanishes in expectation.
It follows that \(\hat{\sigma}_{gc}^{2}\) approximately follows a weighted sum of two scaled chi-square distributions.
\[\hat{\sigma}_{gc}^{2}\sim\frac{\sigma_{Ic}^{2}}{NI}\chi^{2}(NI)+\frac{\sigma_{ Bc}^{2}}{N}\chi^{2}(N) \tag{21}\]
When \(\sigma_{Ic}^{2}=0\), the results coincide with the fully connected (1D) case. However, the analysis of the 2D case gives us more freedom to separate the inter-sample (\(\sigma_{Bc}^{2}\)) and intra-sample (\(\sigma_{Ic}^{2}\)) variances. It also provides more understanding of the differences across channels: as both \(\sigma_{Bc}^{2}\) and \(\sigma_{Ic}^{2}\) are channel-specific, there is indeed a channel-wise noise effect, for which we provide empirical evidence in Figure 3.
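Equation 19 is straightforward to check numerically under the model's assumptions; the following minimal sketch synthesizes a feature map with known inter- and intra-sample variances and compares the empirical variance of the ghost batch means to the prediction:

```python
import torch

B, C, I, N = 4096, 1, 64, 16
sigma_B2, sigma_I2 = 0.3, 0.7                        # chosen so sigma_B2 + sigma_I2 = 1
mu_b = torch.randn(B, 1, 1) * sigma_B2 ** 0.5        # per-sample true means
x = mu_b + torch.randn(B, C, I) * sigma_I2 ** 0.5    # spatial samples around them

ghost_means = x.view(B // N, N, C, I).mean(dim=(1, 3))    # mean over ghost batch and space
print(ghost_means.var().item())                           # empirical Var[mu_gc]
print(sigma_I2 / (N * I) + sigma_B2 / N)                  # prediction of Equation 19
```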
## 3 Experiments
**Base Setup:** We use ResNet-18 [6] training on CIFAR-100 [14] as our main experimental setup. The dataset consists of 32x32 images split into 50000 training and 10000 test samples, evenly split between 100 classes. For hyperparameter tuning, we use a random subset of 10% of the training dataset as a validation set. We report the test set results for the best-performing hyperparameters, after retraining on the entire training set. All models were trained for 200 epochs, using a cosine decay learning rate schedule with a 5 epoch warmup at a batch size of 256. The dataset is relatively small and contains few images in each class, making regularization important. ResNet-18 is a moderately sized network, which helps keep the computational cost of training reasonable, allowing us to perform multiple runs of each setting. We use a learning rate of \(\eta=0.3\) for all experiments, which was tuned on the validation set for training with standard batch normalization. Further details are listed in Appendix A.
**Ghost Batch Normalization:** The top left panel of Figure 2 shows how the ghost batch size affects the final validation accuracy when using Ghost Batch Normalization. Using a ghost batch size of 256 is equivalent to standard batch normalization. We see that the accuracy increases for smaller batch sizes up to a certain extent, after which it goes down again. This increase is likely due to increased regularization, while the eventual decline may come from either too much regularization or the amplified bias in normalization statistics leading to a train-test discrepancy. The optimal ghost batch size of 16 gives an accuracy boost of just over 1% on both the validation set and the test set (Table 1).
**Reducing the Train-Test Discrepancy:** In the upper middle panel of Figure 2 we investigate the effectiveness of Exclusive Batch Normalization (XBN, Equation 3) and EvalNorm [19] (EN). Both methods aim at improving test performance by reducing the train-test discrepancy, XBN by changing sample dependency during training time to be more similar to inference time while EvalNorm does the opposite. XBN can be unstable for small batch sizes (see analysis in Section 2.2), in this case batch sizes smaller than 32. XBN and EN both seem to resolve some of the train-test discrepancy,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & BN & GBN-16 & GNI-16 & XBN & EN \\ \hline Mean & 77.10 & 78.20 & **78.84** & 78.33 & 78.33 \\ Std & 0.26 & 0.10 & 0.09 & 0.40 & 0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracy comparison (mean\(\pm\)std for 3 runs)
Figure 2: CIFAR-100 ResNet-18 validation accuracy versus ghost batch size and dropout probability for different methods. Each line is the average of five runs and the shaded area shows the standard deviation. The dotted lines show the standard batch normalization performance and the maximum for ghost batch normalization. For dropout (DO, lower right panel), B=Bernoulli, G=Gaussian, E=elementwise and C=channelwise. See Section 3 for further experimental details and discussion.
giving a small accuracy boost over Ghost Batch Normalization. XBN performs slightly better on the validation set, but similarly on the test set (Table 1).
**Decoupling the Noise:** In Figure 2, the lower-left panel shows the performance of Ghost Noise Injection for different ghost batch sizes. We observe a significant boost, around twice that of Ghost Batch Normalization. At higher ghost batch sizes, GNI performs similarly to XBN. This may indicate that GNI also successfully reduces the train-test discrepancy while stabilizing smaller batch sizes, allowing us to benefit from their increased regularization effect. Out of the methods we have tried, GNI also performs the best on the test set. In the lower middle panel of Figure 2 we investigate the effect of the different noise components of GNI. GNI \(\mathbf{\mu}\) only includes the shift term and GNI \(\mathbf{\sigma}\) only the scaling term. Either one gives a significant boost on its own but is unable to match GNI, suggesting that both components of Ghost Noise are beneficial in this setting.
**Ghost Noise Distribution:** Figure 3 shows measured distributions of both the scale and shift components of the noise generated by GNI. The observed average distributions are similar to our derived distribution for the fully connected case, but the batch size parameter must be adjusted to account for the intra-sample variance component (Equation 21). The distributions vary significantly between layers and channels (not shown here), especially later in training, potentially due to changes in the intra-sample variance component. In the top right panel of Figure 2 we apply our analytical ghost noise injection to training. We observe some gain but are unable to match the performance of the sample-based GNI, which may indicate that the per-channel variations are important.
**Dropout:** Ghost Noise Injection is a regularizer and a potential alternative to Dropout. In Figure 2 (bottom right) we compare four variants of dropout (Bernoulli/Gaussian and Element-wise/Channel-wise) to the other methods. We apply the dropout after the second normalization layer on each branch, as was done in Wide Residual Networks [26]. We find that dropout performs similarly to the Analytical Ghost Noise Injection but is unable to match either GBN or sample-based GNI.
**Applicability to other Normalizers:** So far we have applied GNI on top of Batch Normalization. However, GNI is not limited to this use case. Figure 4 shows the CIFAR-100 validation accuracy of a ResNet-18 where batch normalization has been replaced by weight-standardization [17] and layer normalization [1]. We find that GNI can also provide a significant accuracy boost in this setting, bringing the performance with this noise-free normalization method in line with that of GBN.
**Generalization to other Datasets:** Table 2 shows that GNI can also regularize the training of ResNet-50 on ImageNet-1k and ResNet-20 on CIFAR-10. In both cases Ghost Noise Injection provides a decent boost in accuracy and outperforms Ghost Batch Normalization.
**Additional Experimental Results:** We report additional experiments in Appendix B.
Figure 3: Measured GNI noise distributions for ResNet18 on CIFAR100 with a Ghost Batch Size of 32, along with analytical fully connected distributions for a batch size of 256. Each GNI line is an average over a layer; the distributions also vary between channels inside each layer. The lines are plotted with a standard color spectrum (i.e. the rainbow) from violet (first layer) to red (last layer).
## 4 Related Work
**Mitigating Train-test Discrepancy:** Singh and Shrivastava [19] proposed EvalNorm to estimate corrected normalization statistics to use for BN during evaluation, such that the training and testing time normalized activation distributions become closer. Summers and Dinneen [21] incorporated example statistics during inference, which reduces the output range of a network with Batch Normalization during testing. While both works try to mitigate the train-test discrepancy by altering test-time normalization statistics, our XBN shows that it is also possible to address the same issue by reducing sample dependency during training time. XBN brings increased generalization performance but is unfortunately not stable for small batch sizes. Nonetheless, it might still provide some new perspectives for future works.
**Noise Injection:** Liu et al. [15] proposed a composition of two trainable scalars and a zero-centered noise injector for regularizing normalization-free DNNs. Camuto et al. [3] analyzed Gaussian Noise Injection from a more theoretical point of view and found that injected noise particularly regularizes neural network layers that are closer to the output. Compared to these two, Ghost Noise Injection more closely imitates the noise of batch normalization, accounting for the channel- and layer-wise differences in the distribution.
**Dropout:** Dropout was first proposed by Srivastava et al. [20] as a simple regularization approach to prevent units from co-adapting too much. Wei et al. [22] demonstrate the explicit and implicit regularization effects of dropout, finding that the implicit regularization effect is analogous to the effect of stochasticity in small mini-batch stochastic gradient descent. Hou and Wang [9] and He et al. [7] applied channel-wise dropout and experimentally showed consistent benefits in DNNs with convolutional structures. Further, it is observed that channel-wise dropout encourages its succeeding layers to minimize the intra-class feature variance [7].
## 5 Conclusion
In this study we have investigated the source of an important part of the regularization effect of batch normalization, the generalization benefit from smaller batch sizes. To the best of our knowledge, this has not been thoroughly studied before, despite Batch Normalization being one of the most widely used methods in deep learning. By formulating Ghost Batch Normalization as a series of two normalization operations, we are able to analyze the impact of smaller batch sizes on the noise and train-test discrepancy. Our analysis of the noise shows that it is channel dependent and has two components, shifting and scaling, which both contribute to its effectiveness. We also show that we can reduce the discrepancy by preventing a sample from contributing to its own normalization statistics during training. Our proposed Ghost Noise Injection (GNI) is able to increase the levels of Ghost Noise without performing normalization on smaller batches, avoiding the train-test discrepancy. We show GNI has a beneficial impact on generalization in a number of settings, including where batch normalization is not present. This further highlights the importance of Ghost Noise as a source of regularization in batch normalization.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Setup & BN-256 & GBN-16 & GBN-32 & GBN-64 & GNI-16 & GNI-32 & GNI-64 \\ \hline \multirow{2}{*}{C10-RN20} & 92.22 & 92.65 & 92.53 & 92.50 & **93.01** & 92.76 & 92.56 \\ & \(\pm\)0.05 & \(\pm\)0.07 & \(\pm\)0.33 & \(\pm\)0.13 & \(\pm\)0.28 & \(\pm\)0.11 & \(\pm\)0.12 \\ i1k-RN50 & 77.43 & 76.90 & 77.14 & 77.48 & 77.62 & 77.70 & **77.98** \\ \hline \hline \end{tabular}
\end{table}
Table 2: ResNet-20 CIFAR-10 (mean\(\pm\)std for 3 runs) and ResNet-50 ImageNet-1k accuracy
Figure 4: Accuracy impact of Ghost Noise Injection on Weight Standardized Layer Normalized ResNet18 trained on CIFAR100. The black dotted lines correspond to Figure 2 and the red dotted line to no GNI. |
2304.01996 | ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for
Quantum Many-Body Simulation | Quantum many-body physics simulation has important impacts on understanding
fundamental science and has applications to quantum materials design and
quantum technology. However, due to the exponentially growing size of the
Hilbert space with respect to the particle number, a direct simulation is
intractable. While representing quantum states with tensor networks and neural
networks are the two state-of-the-art methods for approximate simulations, each
has its own limitations in terms of expressivity and inductive bias. To address
these challenges, we develop a novel architecture, Autoregressive Neural
TensorNet (ANTN), which bridges tensor networks and autoregressive neural
networks. We show that Autoregressive Neural TensorNet parameterizes normalized
wavefunctions, allows for exact sampling, generalizes the expressivity of
tensor networks and autoregressive neural networks, and inherits a variety of
symmetries from autoregressive neural networks. We demonstrate our approach on
quantum state learning as well as finding the ground state of the challenging
2D $J_1$-$J_2$ Heisenberg model with different system sizes and coupling
parameters, outperforming both tensor networks and autoregressive neural
networks. Our work opens up new opportunities for quantum many-body physics
simulation, quantum technology design, and generative modeling in artificial
intelligence. | Zhuo Chen, Laker Newhouse, Eddie Chen, Di Luo, Marin Soljačić | 2023-04-04T17:54:14Z | http://arxiv.org/abs/2304.01996v3 | # ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation
###### Abstract
Quantum many-body physics simulation has important impacts on understanding fundamental science and has applications to quantum materials design and quantum technology. However, due to the exponentially growing size of the Hilbert space with respect to the particle number, a direct simulation is intractable. While representing quantum states with tensor networks and neural networks are the two state-of-the-art methods for approximate simulations, each has its own limitations in terms of expressivity and inductive bias. To address these challenges, we develop a novel architecture, Autoregressive Neural TensorNet (ANTN), which bridges tensor networks and autoregressive neural networks. We show that Autoregressive Neural TensorNet parameterizes normalized wavefunctions, allows for exact sampling, generalizes the expressivity of tensor networks and autoregressive neural networks, and inherits a variety of symmetries from autoregressive neural networks. We demonstrate our approach on quantum state learning as well as finding the ground state of the challenging 2D \(J_{1}\)-\(J_{2}\) Heisenberg model with different system sizes and coupling parameters, outperforming both tensor networks and autoregressive neural networks. Our work opens up new opportunities for scientific simulations of quantum many-body physics and quantum technology.
## 1 Introduction
Quantum many-body physics is fundamental to our understanding of the universe. It appears in high energy physics where all the fundamental interactions in the Standard Model, such as quantum electrodynamics (QED) and quantum chromodynamics (QCD), are described by quantum mechanics. In condensed matter physics, quantum many-body physics has led to a number of rich phenomena and exotic quantum matters, including superfluids, superconductivity, the quantum Hall effect, and topologically ordered states (Girvin & Yang, 2019). As an application, quantum many-body physics is crucial for new materials design. The electronic structure problem and chemical reactions in quantum chemistry are governed by quantum many-body physics. The recent development of quantum computers is also deeply connected to quantum many-body physics. A multi-qubit quantum device is intrinsically a quantum many-body system, such that progress on quantum computer engineering is tied to our understanding of quantum many-body physics (Preskill, 2021).
All information of a closed quantum many-body system is captured by the wavefunction, whose properties are described by the famous Schrodinger equation. An important tool to study and understand quantum many-body physics is to classically simulate the wavefunction. However, the wavefunction is a high dimensional function in Hilbert space, whose dimension grows exponentially with the number of particles. For example, for qubit systems (where each qubit has two degrees of freedom), the wavefunction of 300 qubits will have dimension \(2^{300}\), which is larger than the number of particles in the observable universe. Furthermore, the Schrodinger equation is a complex-valued high-dimensional equation, which is challenging to solve or simulate in general.
A number of algorithms have been developed to simulate quantum many-body physics, including quantum Monte Carlo, tensor networks, neural network quantum states, and direct quantum computation. In particular, computing the ground state of quantum many-body systems is of great interest. One important approach is the variational principle, which provides an upper bound for the ground state energy. To apply the variational principle successfully, one must design an ansatz that can represent and optimize the wavefunction efficiently. Tensor networks and neural network quantum states (Carleo and Troyer, 2017) are the two main state-of-the-art methods that can be applied with the variational principle for quantum many-body simulation. However, tensor networks usually suffer from an expressivity issue in systems with more than one dimension, while neural network quantum states usually lack the inductive bias from the underlying physics structure and face sign structure challenges in the representation.
In this paper, we develop a novel architecture, Autoregressive Neural TensorNet (ANTN), to bridge neural network quantum states and tensor networks, achieving the best of both worlds. In particular, our contributions are threefold:
* Develop ANTN with two variants called "elementwise" and "blockwise," which each naturally generalize the two state-of-the-art ansatzes, tensor networks (TN) and autoregressive neural networks (ARNN), to provide proper inductive bias and high expressivity.
* Prove that ANTN is normalized with exact sampling, has generalized expressivity of TN and ARNN, and inherits multiple symmetries from ARNN.
* Demonstrate our methods on quantum state learning and variationally finding the ground state of the challenging 2D \(J_{1}\)-\(J_{2}\) Heisenberg model, outperforming both TN and ARNN.
## 2 Related Work
**Tensor Networks (TN)** represent high-dimensional wavefunctions using low-rank tensor decomposition, notably matrix product state (MPS) (Vidal, 2003, 2004), PEPS (Verstraete and Cirac, 2004), and MERA (Vidal, 2007). They capture the entanglement structure of physical systems and, with algorithms like the density matrix renormalization group (DMRG) (White, 1992), are used for state simulations and real-time dynamics. However, their expressivity can be limited in systems with more than one dimension and systems with high entanglement. In machine learning, TN appears as tensor train (Oseledets, 2011) and CP decomposition methods.
Figure 1: Diagrammatic representation of autoregressive neural network (ARNN), tensor network (TN) and our Autoregressive Neural TensorNet (ANTN).

**Neural Network Quantum State (NNQS)** leverages neural networks for high dimensional wavefunction representation (Carleo and Troyer, 2017). It has been demonstrated that many quantum states can be approximated or represented by NNQS (Sharir et al., 2021; Gao and Duan, 2017; Lu et al., 2019; Levine et al., 2019; Luo et al., 2021, 2019; Deng et al., 2017; Huang and Moore, 2021; Vieijra et al., 2020), and it has yielded state-of-the-art results in computing quantum system properties (Gutierrez and Mendl, 2020; Schmitt and Heyl, 2020; Vicentini et al., 2019; Yoshioka and Hamazaki, 2019; Hartmann and Carleo, 2019; Nagy and Savona, 2019; Luo et al., 2021, 2022). Key advancements in NNQS include ARNN that improves sample efficiency and gradient estimation, and the development of neural networks that adhere to the underlying symmetries of quantum systems (Luo et al., 2021; Hibat-Allah et al., 2020; Choo et al., 2019; Luo and Clark, 2019; Hermann et al., 2019; Pfau et al., 2020; Luo et al., 2021, 2022b, 2021b; Chen et al., 2022). Despite its potential, NNQS faces challenges such as the lack of physics priors and the sign structure problem. While there are attempts to integrate NNQS with TN, including matrix product state with neural network backflow (Lami et al., 2022) and generalizing MPS to RNN (Wu et al., 2022), the former cannot produce normalized wavefunctions, while the latter builds only on tensor contractions and lacks the nonlinear activation functions of standard neural network wavefunctions.
## 3 Background
### Quantum Preliminaries
In this work, we focus on qubit systems in quantum many-body physics. The wavefunction or quantum state \(\psi\) of the system is a normalized function \(\psi:\mathbb{Z}_{2}^{n}\rightarrow\mathbb{C}\) with \(\sum_{\mathbf{x}}\left|\psi(\mathbf{x})\right|^{2}=1\), where \(n\) is the system size. The input to the wavefunction is an \(n\)-bit string \(\mathbf{x}\in\{0,1\}^{n}\). Therefore, the wavefunction \(\psi\) can be viewed as a complex-valued vector of size \(2^{n}\), where in Dirac notation \(\left\langle\psi\right|\) and \(\left|\psi\right\rangle\) correspond to a conjugate row vector and a column vector respectively, and \(\psi(\mathbf{x})=\langle\mathbf{x}|\psi\rangle\). Because the size of the wavefunction grows exponentially with the system size \(n\), a direct computation quickly becomes intractable as the system size increases. The goal of NNQS is to design a compact architecture that can approximate and optimize the wavefunction efficiently.
### Quantum State Learning
Given a quantum state \(\left|\phi\right\rangle\), we are often interested in finding the \(\left|\psi_{\theta}\right\rangle\) that is closest to \(\left|\phi\right\rangle\) given the variational ansatz. In quantum mechanics, the closeness of two quantum states is measured by the quantum fidelity \(F=\left|\left\langle\phi|\psi_{\theta}\right\rangle\right|^{2}=\left|\sum_{\bm {x}}\phi^{*}(\mathbf{x})\psi_{\theta}(\mathbf{x})\right|^{2}\), where \({}^{*}\) refers to complex conjugation. Quantum fidelity satisfies \(0\leq F\leq 1\), with \(F=1\) corresponding to an exact match and \(F=0\) to orthogonal quantum states. Finding \(F_{\max}\) for a given ansatz allows us to quantify how well the ansatz can approximate that state. In practice, minimizing \(-\log F\) is usually better than maximizing \(F\) itself. This can be achieved by enumerating all basis states \(\mathbf{x}\) in small systems, or stochastically in large systems, as in quantum state tomography (see Appendix A.1).
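As a minimal illustration, the fidelity and the \(-\log F\) loss can be computed by direct enumeration for a small system (random states are used here as stand-ins for \(\left|\phi\right\rangle\) and \(\left|\psi_{\theta}\right\rangle\)):

```python
import numpy as np

n = 3                                             # a 3-qubit toy system
dim = 2 ** n
rng = np.random.default_rng(0)

def random_state(rng, dim):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)                  # normalized wavefunction vector

phi, psi = random_state(rng, dim), random_state(rng, dim)
fidelity = np.abs(np.vdot(phi, psi)) ** 2         # |<phi|psi>|^2 = |sum_x phi*(x) psi(x)|^2
print(fidelity, -np.log(fidelity))                # the loss minimized in practice
```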
### Variational Monte Carlo
For a given quantum system with \(n\) qubits, the Hamiltonian \(\hat{\mathcal{H}}\) can be written as a Hermitian matrix of size \(2^{n}\times 2^{n}\). The ground state energy \(E_{g}\) of the system is the smallest eigenvalue of \(\hat{\mathcal{H}}\) and the ground state is the corresponding eigenvector. For large system sizes, finding the ground state directly is usually impossible. In this case, the variational principle in quantum mechanics provides an upper bound on the ground state energy \(E_{g}\): for all (normalized) wavefunctions \(\left|\psi\right\rangle\), we have \(E_{g}\leq\left\langle\psi|\hat{\mathcal{H}}|\psi\right\rangle\). Therefore, finding the \(\left|\psi_{\theta}\right\rangle\) that minimizes \(E_{\theta}=\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta}\right\rangle\) gives the lowest upper bound of the ground state energy. For large system sizes, we can stochastically evaluate and minimize
\[\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta}\right\rangle=\sum_{ \mathbf{x}\mathbf{x}^{\prime}}\left|\psi_{\theta}(\mathbf{x})\right|^{2}\frac{\mathcal{H}_ {\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x}^{\prime})}{\psi_{\theta}(\mathbf{x})} =\mathbb{E}_{\mathbf{x}\sim\left|\psi_{\theta}\right|^{2}}\frac{\sum_{\mathbf{x}^{ \prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x}^{\prime})}{ \psi_{\theta}(\mathbf{x})}, \tag{1}\]
where \(\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\) refers to the matrix element of \(\hat{\mathcal{H}}\) and we interpret \(\left|\psi_{\theta}(\mathbf{x})\right|^{2}\) as a probability distribution. The summation over \(\mathbf{x}^{\prime}\) can be efficiently computed given \(\mathbf{x}\) since the Hamiltonian is usually sparse. The gradient \(\nabla_{\theta}E_{\theta}\) can also be calculated stochastically in a similar fashion (see Appendix A.2).
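On a toy system small enough to enumerate, the identity in Equation 1 can be checked directly; the following minimal sketch compares the mean of the local energy \(\sum_{\mathbf{x}^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi(\mathbf{x}^{\prime})/\psi(\mathbf{x})\) under \(\mathbf{x}\sim|\psi|^{2}\) against the direct expectation value:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                       # 3 qubits
H = rng.standard_normal((dim, dim))
H = (H + H.T) / 2                             # a Hermitian (here real symmetric) Hamiltonian
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)                    # a normalized trial wavefunction

p = np.abs(psi) ** 2                          # sampling distribution |psi(x)|^2
e_loc = (H @ psi) / psi                       # local energy for every basis state x
print(np.real(p @ e_loc))                     # expectation of the stochastic estimator
print(np.real(np.vdot(psi, H @ psi)))         # direct <psi|H|psi>; the two agree
```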
## 4 Autoregressive Neural TensorNet (ANTN)
**Autoregressive Neural Network Wavefunction.** The autoregressive neural network (ARNN) (Fig. 1 left) parameterizes the full probability distribution as a product of conditional probability distributions as
\[p(\mathbf{x})=\prod_{i=1}^{n}p(x_{i}|\mathbf{x}_{<i}), \tag{2}\]
where \(\mathbf{x}=(x_{1},\ldots,x_{n})\) is a configuration of \(n\) qubits and \(\mathbf{x}_{<i}=(x_{1},\ldots,x_{i-1})\) is any configuration before \(x_{i}\). The normalization of the full probability can be guaranteed from the normalization of the individual conditional probabilities. The ARNN also allows for exact sampling from the full distribution by sampling sequentially from the conditional probabilities (see Appendix B.1). The autoregressive construction for probabilities can be easily modified to represent a quantum wavefunction by (1) replacing \(p(x_{i}|\mathbf{x}_{<i})\) with a complex valued conditional wavefunction \(\psi(x_{i}|\mathbf{x}_{<i})\) and (2) using the following normalization condition for the conditional wavefunctions: \(\sum_{x_{i}}|\psi(x_{i}|\mathbf{x}_{<i})|^{2}=1\) (Sharir et al., 2020; Hibat-Allah et al., 2020; Luo et al., 2021b). Similar to the case of probabilities, ARNN automatically preserves normalization of wavefunctions and allows for exact sampling (see Appendix B.1). Because of this, ARNN is often more efficient for training and computing various quantum observables compared to other generative neural networks.
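A minimal, self-contained sketch of this construction with exact sequential sampling; the "network" here is a toy stand-in whose parameters depend only on the prefix parity, whereas a real ARNN conditions on the full prefix \(\mathbf{x}_{<i}\):

```python
import numpy as np

def make_toy_conditional(n, rng):
    params = rng.standard_normal((n, 2, 2)) + 1j * rng.standard_normal((n, 2, 2))
    def conditional(i, prefix):
        psi_i = params[i, sum(prefix) % 2]        # complex amplitudes for x_i in {0, 1}
        return psi_i / np.linalg.norm(psi_i)      # enforce sum_{x_i} |psi(x_i|x_<i)|^2 = 1
    return conditional

rng = np.random.default_rng(0)
n = 6
conditional = make_toy_conditional(n, rng)
x, amp = [], 1.0 + 0.0j
for i in range(n):                                # exact sampling, one site at a time
    psi_i = conditional(i, x)
    xi = int(rng.choice(2, p=np.abs(psi_i) ** 2))
    amp *= psi_i[xi]                              # psi(x) = prod_i psi(x_i | x_<i)
    x.append(xi)
print(x, amp)
```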
**Matrix Product State (MPS).** MPS (also known as tensor train) is a widely used TN architecture to study quantum many-body physics (Vidal, 2003, 2004). The MPS defines a wavefunction using \(n\) rank-3 tensors \(M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}\), with \(\alpha_{i-1}\) (or \(\alpha_{i}\)) the index of the left (or right) bond dimension and \(x_{i}\) the configuration of the \(i\)-th qubit. Then, the full wavefunction is generated by first choosing a particular configuration \(\mathbf{x}=(x_{1},\ldots,x_{n})\) and then contracting the tensors selected by this configuration (Fig. 1 right) as
\[\psi(\mathbf{x})=\sum_{\alpha_{1},\ldots,\alpha_{n-1}}M_{x_{1}}^{\alpha_{0}\alpha _{1}}\cdots M_{x_{n}}^{\alpha_{n-1}\alpha_{n}}=\sum_{\mathbf{\alpha}}\prod_{i=1}^ {n}M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}, \tag{3}\]
where the left- and right-most bond dimensions are assumed to be \(\mathcal{D}(\alpha_{0})=\mathcal{D}(\alpha_{n})=1\) and are therefore trivial.
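A minimal sketch of evaluating an MPS amplitude (Equation 3) by contracting the chain from left to right, with random tensors standing in for a trained model:

```python
import numpy as np

def mps_amplitude(tensors, x):
    # tensors[i] has shape (2, chi_left, chi_right), with bond dimension 1 at both ends.
    v = np.ones(1)                                # left boundary vector
    for M, xi in zip(tensors, x):
        v = v @ M[xi]                             # contract the shared bond index
    return v.item()                               # right boundary has dimension 1

chi = 4
tensors = ([np.random.randn(2, 1, chi)] +
           [np.random.randn(2, chi, chi) for _ in range(4)] +
           [np.random.randn(2, chi, 1)])
print(mps_amplitude(tensors, [0, 1, 1, 0, 1, 0]))
```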
**Elementwise and Blockwise Construction for ANTN.** Both ARNN and MPS are powerful ansatzes for parameterizing quantum many-body wavefunctions. However, they cover different subspaces of the Hilbert space. Albeit very expressive, ARNN lacks the physics prior of the system of interest. In addition, since wavefunctions are complex-valued in general, learning the sign structure can be a nontrivial task (Westerhout et al., 2020). MPS, on the other hand, contains the necessary physics prior and can efficiently represent local or quasi-local sign structures, but its expressivity is severely limited: the internal bond dimension needs to grow exponentially to account for the increase of entanglement. It turns out that the MPS representation is not unique, as many different MPS represent the same wavefunction; if we choose a particular constraint (without affecting the expressivity), MPS allows for efficient evaluation of conditional probabilities and exact sampling (Ferris and Vidal, 2012) (see Appendix B.2). Because both of these ansatzes allow for exact sampling, it is natural to combine them to produce a more powerful ansatz. Therefore, we develop the Autoregressive Neural TensorNet (ANTN) (Fig. 1 (middle)). In the last layer of the ANTN, instead of outputting the conditional wavefunction \(\psi(x_{i}|\mathbf{x}_{<i})\), we output a conditional wavefunction tensor \(\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\) for each site. Defining the left partially contracted tensor up to qubit \(j\) as \(\tilde{\psi}_{L}^{\alpha_{j}}(\mathbf{x}_{\leq j}):=\sum_{\mathbf{\alpha}_{<j}}\prod_{i=1}^{j}\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\), we can define the (unnormalized) marginal probability distribution as
\[q(\mathbf{x}_{\leq j}):=\sum_{\alpha_{j}}\tilde{\psi}^{\alpha_{j}}_{L }(\mathbf{x}_{\leq j})\tilde{\psi}^{\alpha_{j}}_{L}(\mathbf{x}_{\leq j})^{*}, \tag{4}\]
where \({}^{*}\) denotes complex conjugation. Then, the (normalized) conditional probability can be obtained as \(q(x_{j}|\mathbf{x}_{<j})=q(\mathbf{x}_{\leq j})/\sum_{x_{j}}q(\mathbf{x}_{\leq j})\). We construct the overall wavefunction by defining both its amplitude and phase according to
\[\psi(\mathbf{x}):=\sqrt{q(\mathbf{x})}e^{i\phi(\mathbf{x})}, \tag{5}\]
with \(q(\mathbf{x}):=\prod_{i=1}^{n}q(x_{i}|\mathbf{x}_{<i})\) and the phase \(\phi(\mathbf{x}):=\text{Arg}\sum_{\mathbf{\alpha}}\prod_{i=1}^{n}\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\). In other words, we define the amplitude of the wavefunction through the conditional probability distributions and define the phase analogous to the standard MPS.
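A minimal, self-contained sketch of assembling Equations 4-5 from conditional wavefunction tensors; the tensors here are toy stand-ins conditioned only on the prefix parity, and the boundary bond dimensions are 1 as for MPS. The final print verifies that the resulting state is normalized:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, chi = 6, 3
dims = [1] + [chi] * (n - 1) + [1]
T = [rng.standard_normal((2, 2, dims[i], dims[i + 1])) +
     1j * rng.standard_normal((2, 2, dims[i], dims[i + 1])) for i in range(n)]

def antn_amplitude(x):
    v = np.ones(1, dtype=complex)                    # left contraction psi_L, Eq. 4
    log_q = 0.0
    for i, xi in enumerate(x):
        Ti = T[i][sum(x[:i]) % 2]                    # toy prefix dependence: (2, chiL, chiR)
        branches = np.einsum('a,sab->sb', v, Ti)     # psi_L for both values of x_i
        q = np.sum(np.abs(branches) ** 2, axis=1)    # unnormalized q(x_<=i)
        log_q += np.log(q[xi] / q.sum())             # normalized conditional q(x_i | x_<i)
        v = branches[xi]
    return np.exp(0.5 * log_q + 1j * np.angle(v.sum()))  # sqrt(q(x)) e^{i phi(x)}, Eq. 5

print(sum(abs(antn_amplitude(list(b))) ** 2 for b in product([0, 1], repeat=n)))  # ~1.0
```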
We develop two different constructions for ANTN that differ in the last layer on how to construct conditional wavefunction tensors.
The elementwise ANTN is given by
\[\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})=M_{x_{i}} ^{\alpha_{i-1}\alpha_{i}}+f_{NN}(x_{i},\alpha_{i-1},\alpha_{i}|\mathbf{x}_{<i}), \tag{6}\]
where \(f_{NN}(x_{i},\alpha_{i-1},\alpha_{i}|\mathbf{x}_{<i})\) is the complex-valued output. The blockwise ANTN is given by
\[\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})=M_{x_{i}} ^{\alpha_{i-1}\alpha_{i}}+f_{NN}(x_{i}|\mathbf{x}_{<i}), \tag{7}\]
where the complex-valued output \(f_{NN}(x_{i}|\mathbf{x}_{<i})\) is broadcasted over \(\alpha\). This results in a lower complexity that allows us to use a larger maximum bond dimension.
**Transformer and PixelCNN Based ANTN.** Our construction above is general and can be applied to any standard ARNN. Depending on the application, we can use different ARNN architectures. In this work, we choose the transformer or PixelCNN depending on the specific task. The transformer (Vaswani et al., 2017) used here is similar to a decoder-only transformer implemented in (Luo et al., 2020), and the PixelCNN we use is the gated PixelCNN (Van den Oord et al., 2016) implemented in (Chen et al., 2022).
**MPS Initialization.** Since our ANTN generalizes TN, both the elementwise and the blockwise constructions can take advantage of an optimized MPS from algorithms such as DMRG (White, 1992) as an initialization. In practice, we can initialize the TN component with the optimized DMRG results of the same bond dimension (similar to (Lami et al., 2022; Wu et al., 2022)). The MPS initialization can also be thought of as a pretraining of the ANTN. This is a nice feature since it provides a good sign structure and inductive bias from the physics structure, which do not exist in the conventional ARNN.
**Limitations.** In this work, we only integrated MPS into the ANTN, and MPS may not be the best TN in various settings. Besides MPS, many other TNs also allow efficient evaluation of conditional probabilities and exact sampling, such as MERA (Vidal, 2007) and PEPS (Verstraete and Cirac, 2004), or cylindrical MPS for periodic boundary conditions. In fact, the recently developed TensorRNN (Wu et al., 2022) can also be viewed as a type of TN and can be integrated into our construction in future work. In addition, while the ANTN construction is general in terms of the base ARNN used, our current study only focuses on the transformer and PixelCNN. Lastly, as shown later in Sec. 5.1, the current implementation of ANTN has an additional sampling overhead that is linear in system size, which can be avoided.
## 5 Theoretical Results
### Exact Sampling and Complexity Analysis of ANTN
**Theorem 5.1**.: _Autoregressive Neural TensorNet wavefunction is automatically normalized and allows for exact sampling._
Proof.: This is a direct consequence of defining the amplitude of the wavefunction through the normalized conditional probability distributions \(q(x_{i}|\mathbf{x}_{<i})\). (See Appendix B.1 for the detailed sampling procedure.)
**Complexity Analysis.** We first note that for MPS, the number of parameters and the computational complexity for evaluating a bitstring \(\mathbf{x}\) scale as \(\mathcal{O}(n\chi^{2})\), where \(n\) is the number of particles and \(\chi=\mathcal{D}(\alpha)\) is the (maximum) bond dimension of the MPS. The sampling complexity scales as \(\mathcal{O}(n^{2}\chi^{2})\). The DMRG algorithm has a computational complexity of \(\mathcal{O}(n\chi^{3})\) (White, 1992). The number of parameters and computational complexity of ARNN depend on the specific choice. Assuming the ARNN has \(n_{\rm ARNN}\) parameters and a single pass (evaluation) of the ARNN has a computational complexity of \(c_{\rm ARNN}\), the sampling complexity of the ARNN is \(\mathcal{O}(nc_{\rm ARNN})\) in our current implementation. The ANTN based on such an ARNN would have \(\mathcal{O}(n_{\rm ARNN}+n\chi^{2}h_{\rm dim})\) parameters for the elementwise construction, and \(\mathcal{O}(n_{\rm ARNN}+n\chi^{2}+nh_{\rm dim})\) parameters for the blockwise construction. The computational complexity for evaluating and sampling bitstrings scales as \(\mathcal{O}(n^{\gamma}(c_{\rm ARNN}+n\chi^{2}h_{\rm dim}))\) for the elementwise construction and \(\mathcal{O}(n^{\gamma}(c_{\rm ARNN}+n\chi^{2}))\) for the blockwise construction, with \(\gamma=0\) for evaluation and \(\gamma=1\) for sampling.
We note that the complexity analysis above is based on our current implementation, where the sampling procedure for all algorithms has an \(\mathcal{O}(n)\) overhead compared to evaluation. It is possible to remove this overhead by storing partial results during the sampling procedure.
Then, each gradient update of the variational Monte Carlo algorithm requires sampling once and evaluating \(N_{c}\) times, where \(N_{c}\) is the number of non-zero \(\mathbf{x}^{\prime}\)'s in \(\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\) given \(\mathbf{x}\), which usually scales linearly with the system size. The overall complexity is then \(\mathcal{O}(N_{s}(N_{c}c_{\rm eva}+c_{\rm samp}))\) with \(N_{s}\) the batch size, \(c_{\rm eva}\) the evaluation cost and \(c_{\rm samp}\) the sampling cost. Usually, the first part dominates the cost.
The difference in computational complexities and number of parameters between elementwise and blockwise ANTN implies that blockwise ANTN is usually more economical than elementwise ANTN for the same \(h_{\text{dim}}\) and \(\chi\). Therefore, for small bond dimension, we use the elementwise ANTN for a more flexible parameterization at a higher cost. In contrast, for large bond dimension we use the blockwise ANTN, which saves computational complexity and improves the initial performance (with DMRG initialization) of the blockwise ANTN at the cost of less flexible modifications from the ARNN. We also note that compared to state-of-the-art MPS simulation, even for our blockwise ANTN, a small bond dimension usually suffices. For example, in the later experimental section, we use bond dimension 70 for blockwise ANTN while the best DMRG results use bond dimension 1024, which has many more parameters than our ANTN. In general, our ANTN has far fewer parameters than MPS due to the \(\mathcal{O}(\chi^{2})\) parameter scaling, which dominates at large bond dimension.
### Expressivity Results of ANTN
**Theorem 5.2**.: _Autoregressive Neural TensorNet has generalized expressivity over tensor network and autoregressive neural network._
Proof.: (See details in Thm. B.2 of Appendix B.4.) If we set \(\mathcal{D}(\alpha_{i})=1\) for all \(i\), the marginal probability distribution in Eq. 4 reduces to that of the standard ARNN (up to normalization). This means both the conditional and full probability distributions are the same. The phase of the ANTN becomes a sum of the phases of each conditional wavefunction, which also reduces to the ARNN. This shows that ARNN is a special case of ANTN.
If we set \(f_{NN}(x_{i},\alpha_{i-1},\alpha_{i}|\mathbf{x}_{<i})=0\) in Eq. 6 and \(f_{NN}(x_{i}|\mathbf{x}_{<i})=0\) in Eq. 7, then ANTN reduces to the TN shown in Eq. 3. This shows that TN is also a special case of ANTN. Thus ANTN generalizes both ARNN and TN with more expressivity.
**Theorem 5.3**.: _Autoregressive Neural TensorNet can have volume law entanglement, which is strictly beyond the expressivity of matrix product state._
Proof.: We proved in Thm. 5.2 that ANTN can be reduced to an ARNN, which has been shown to have volume law entanglement that cannot be efficiently represented by TN (Sharir et al., 2021; Levine et al., 2019b). Hence, ANTN can represent states that cannot in general be represented by MPS efficiently; thus it has strictly greater expressivity.
### Symmetry Results of ANTN
Symmetry plays an important role in quantum many-body physics and quantum chemistry. Many of these symmetries can be enforced in ARNN via two classes of symmetries--_mask symmetry_ and _function symmetry_.
**Definition 5.1** (Mask Symmetry).: A conditional wavefunction \(\psi(x_{i}|\mathbf{x}_{<i})\) has a _mask symmetry_ if \(\psi(x_{i}|\mathbf{x}_{<i})=0\) for some \(x_{i}\) given \(\mathbf{x}_{<i}\).
**Definition 5.2** (Function Symmetry).: For a function \(F\) that satisfies \(\{F(x_{i}),F(\mathbf{x}_{<i})\}=F(\mathbf{x}_{\leq i})\), a conditional wavefunction \(\psi(x_{i}|\mathbf{x}_{<i})\) has a _function symmetry_ under \(F\) if \(\psi(x_{i}|\mathbf{x}_{<i})=\psi(F(x_{i})|F(\mathbf{x}_{<i}))\). Here, \(F(x_{i})\) is allowed to also depend on \(\mathbf{x}_{<i}\).
Here, we list several symmetries that ANTN inherits from ARNN and show the proofs in Appendix B.4.
**Theorem 5.4**.: _Autoregressive Neural TensorNet inherits mask symmetry and function symmetry from autoregressive neural networks._
**Corollary 5.4.1** (Global U(1) Symmetry).: _Autoregressive Neural TensorNet can realize global U(1) symmetry, which conserves particle number._
**Corollary 5.4.2** (\(\mathbb{Z}_{2}\) Spin Flip Symmetry).: _Autoregressive Neural TensorNet can realize \(\mathbb{Z}_{2}\) spin flip symmetry such that the wavefunction is invariant under conjugation of the input._
**Corollary 5.4.3** (Discrete Abelian and Non-Abelian Symmetries).: _Autoregressive Neural TensorNet can realize discrete Abelian and Non-Abelian symmetries._
## 6 Experiments
Details of the experimental setup and hyperparameters can be found in Appendix C.
### Quantum State Learning
As stated previously, ANTN generalizes both ARNN and TN to take advantage of both the expressivity and the inductive bias. Here, we test the ANTN on learning physically important quantum states to demonstrate this ability.
**Experiments on expressivity.** We first test the expressivity of the ANTN by learning a class of well-known highly entangled states--randomly permuted Bell states. A 2-qubit Bell state is defined as \((|00\rangle+|11\rangle)/\sqrt{2}\), which has 1 bit of entanglement. For a system size of \(n\), we randomly pair qubits in the first half of the system with qubits in the second half of the system to form Bell states. This process creates quantum states with \(n/2\) bits of entanglement between the first and second half of the system. It can be shown that a bond dimension of \(2^{n/2}\) is required for MPS to fully represent such a system. In Fig. 2 (a), we plot the quantum fidelity of learning 16-qubit random Bell states. As the figure shows, MPS cannot represent the states without reaching the required bond dimension of \(256=2^{8}\). The ARNN and ANTN, on the other hand, can represent such states without limitations from the bond dimension.
**Experiments on inductive bias.** We then test the physics inductive bias of the ANTN. One consequence of the physics inductive bias of MPS is that it can represent wavefunctions with local or quasi-local sign structures that can be hard for neural networks to learn, since these structures usually come from physical interactions. Here, we use (real-valued) shallow random circuits to mimic the short-range interactions and generate states with sign structures. We test the algorithms on these states both with and without the "sign rule", which means that we explicitly provide the sign structure to the algorithm. As shown in Fig. 2 (b), the fidelity of MPS only depends weakly on the sign rule, whereas the fidelity of ARNN can change drastically. Our ANTN inherits the property of MPS and is not affected by the sign structure. In fact, the ANTN with a bond dimension of 4 already performs better than MPS with a bond dimension of 16.
### Variational Monte Carlo
We further test our algorithm on finding the ground state of the challenging 2D \(J_{1}\)-\(J_{2}\) Heisenberg model with open boundary conditions. The model has a rich phase diagram with at least three different phases across different \(J_{2}/J_{1}\) values (Capriotti et al., 2004; Lante & Parola, 2006). In addition, the complicated structure of its ground state makes it a robust model on which to test state-of-the-art methods.
The 2D \(J_{1}\)-\(J_{2}\) Hamiltonian is given by
\[\hat{\mathcal{H}}=J_{1}\sum_{\langle i,j\rangle}\vec{\sigma_{i}}\cdot\vec{ \sigma_{j}}+J_{2}\sum_{\langle\langle i,j\rangle\rangle}\vec{\sigma_{i}}\cdot \vec{\sigma_{j}}, \tag{8}\]
Figure 2: Fidelity \(\uparrow\) on quantum state learning with 16 qubits. (a) Learning random Bell states. (b) Learning real-valued depth-4 random circuit with and without sign rule. The error bar denotes the standard deviation (not standard error of mean) of the fidelities over the random states sampled from the corresponding distribution. The mean and standard deviation are calculated from 10 random states. The numbers inside the parentheses denote the bond dimension.
where the subscripts refer to the qubit sites, \(\langle\cdot,\cdot\rangle\) denotes nearest neighbours, and \(\langle\langle\cdot,\cdot\rangle\rangle\) denotes next-nearest neighbours. \(\vec{\sigma_{i}}\cdot\vec{\sigma_{j}}=X_{i}\otimes X_{j}+Y_{i}\otimes Y_{j}+Z_{i}\otimes Z_{j}\), with \(X\), \(Y\) and \(Z\) the Pauli matrices
\[X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad Y=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\quad Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}. \tag{9}\]
We will fix \(J_{1}=1\) and vary \(J_{2}\) in our studies.
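For concreteness, the following is a minimal sketch (not taken from the paper; the function name and row-major site indexing are illustrative assumptions) of how the two bond lists entering Eq. 8 can be enumerated on an \(L_{x}\times L_{y}\) open-boundary lattice.

```python
import itertools

def j1_j2_bonds(Lx, Ly):
    """Nearest- (J1) and next-nearest-neighbour (J2) bonds on an
    Lx x Ly open-boundary square lattice, sites indexed row-major."""
    idx = lambda i, j: i * Ly + j
    j1, j2 = [], []
    for i, j in itertools.product(range(Lx), range(Ly)):
        if i + 1 < Lx:                 # vertical nearest neighbour
            j1.append((idx(i, j), idx(i + 1, j)))
        if j + 1 < Ly:                 # horizontal nearest neighbour
            j1.append((idx(i, j), idx(i, j + 1)))
        if i + 1 < Lx and j + 1 < Ly:  # "\" diagonal, next-nearest
            j2.append((idx(i, j), idx(i + 1, j + 1)))
        if i + 1 < Lx and j > 0:       # "/" diagonal, next-nearest
            j2.append((idx(i, j), idx(i + 1, j - 1)))
    return j1, j2
```

For a \(10\times 10\) lattice this yields 180 \(J_{1}\) bonds and 162 \(J_{2}\) bonds, and the energy is accumulated as \(J_{1}\sum_{(i,j)}\vec{\sigma_{i}}\cdot\vec{\sigma_{j}}\) over the first list plus \(J_{2}\sum_{(i,j)}\vec{\sigma_{i}}\cdot\vec{\sigma_{j}}\) over the second.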
**Experiments on inductive bias of \(J_{1}\)-\(J_{2}\) Heisenberg Model.**
As shown previously, it can be challenging for neural networks to learn the sign structures of the \(J_{1}\)-\(J_{2}\) model. A sign rule, known as the Marshall sign rule, exists for this model but is only approximate for \(J_{2}\neq 0\) (Marshall, 1955). We test the restricted Boltzmann machine (RBM) (from NetKet (Vicentini et al., 2022)), PixelCNN, and the two different constructions of ANTN. The results are shown in Table 1. Both RBM and PixelCNN improve significantly with the application of the sign rule at \(J_{2}=0.2\) and \(J_{2}=0.5\), but the results deteriorate at \(J_{2}=0.8\). This is expected because the sign rule becomes increasingly inaccurate as \(J_{2}\) increases, especially past \(J_{2}=0.5\). Our ANTN, on the other hand, does not require the sign rule in either construction. We note that both of our ANTN constructions achieve better results than the recently developed matrix product backflow state (Lami et al., 2022), which uses the sign rule and has a per-site energy of \(-1.9267\) at \(J_{2}=0.5\).
**Ablation study on \(J_{1}\)-\(J_{2}\) Heisenberg Model.** We scan across many values of \(J_{2}\) without using the sign rule. In this experiment, we compare the performance of six models in computing the ground state energy of the \(J_{1}\)-\(J_{2}\) Hamiltonian with \(J_{1}=1\) fixed and \(J_{2}=0.2\)-\(0.8\) in increments of 0.1, covering three different phases of the model. The first three models are TN models, using the DMRG
Table 1: Energy per site \(\downarrow\) for the \(8\times 8\) system with various algorithms. The bond dimensions for elementwise and blockwise ANTN are labeled inside the parentheses. Each algorithm is tested both with the sign rule (S) and without the sign rule (NS). The best energy is highlighted in boldface and the second best in italic.

| Algorithm | \(J_{2}=0.2\) | \(J_{2}=0.5\) | \(J_{2}=0.8\) |
| --- | --- | --- | --- |
| RBM (NS) | -1.9609(16) | -1.5128(20) | -1.7508(19) |
| RBM (S) | -2.1944(17) | -1.8625(20) | -0.7267(29) |
| PixelCNN (NS) | -2.21218(16) | -1.77058(29) | -1.9382(126) |
| PixelCNN (S) | -2.2317(19) | -1.8890(219) | -1.8367(226) |
| Elementwise (8, NS) | **-2.23690(4)** | _-1.93018(8)_ | _-2.00036(16)_ |
| Elementwise (8, S) | **-2.2638(4)** | **-1.93102(7)** | **-2.00262(11)** |
| Blockwise (70, NS) | -2.23484(7) | -1.9300(7) | -1.9948(13) |
| Blockwise (70, S) | -2.23517(6) | -1.92880(9) | -1.99246(13) |
Table 2: Energy per site \(\downarrow\) for the \(10\times 10\) (top) and \(12\times 12\) (bottom) systems with various algorithms, where elementwise and blockwise are the two constructions of ANTN. The bond dimensions for DMRG and ANTN are labeled in the parentheses. The best energy is highlighted in boldface and the second best in italic.

\(10\times 10\) system:

| Algorithm | \(J_{2}=0.2\) | \(J_{2}=0.3\) | \(J_{2}=0.4\) | \(J_{2}=0.5\) | \(J_{2}=0.6\) | \(J_{2}=0.7\) | \(J_{2}=0.8\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DMRG (8) | -1.998207 | -1.887531 | -1.797675 | -1.734326 | -1.716253 | -1.768225 | -1.869871 |
| DMRG (70) | -2.191048 | -2.069029 | -1.956480 | -1.8661659 | -1.816249 | -1.854296 | -2.007845 |
| DMRG (1024) | -2.256533 | -1.238591 | -2.031681 | -1.938777 | **-1.865561** | **-1.894371** | **-2.062730** |
| PixelCNN | -2.22462(24) | -2.12873(14) | -2.02053(14) | -1.74098(29) | -1.78852(77) | -1.81800(13) | -1.98331(17) |
| Elementwise (8) | **-2.26034(6)** | **-2.14450(4)** | **-2.03727(7)** | **-1.94001(6)** | _-1.85684(10)_ | _-1.88463(7)_ | _-2.05707(43)_ |
| Blockwise (70) | _-2.25755(8)_ | _-2.14152(8)_ | _-2.03391(70)_ | -1.938342(42) | -1.85270(12) | -1.87853(13) | -2.05088(14) |

\(12\times 12\) system:

| Algorithm | \(J_{2}=0.2\) | \(J_{2}=0.3\) | \(J_{2}=0.4\) | \(J_{2}=0.5\) | \(J_{2}=0.6\) | \(J_{2}=0.7\) | \(J_{2}=0.8\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DMRG (8) | -1.998207 | -1.887531 | -1.800784 | -1.735906 | -1.720619 | -1.728852 | -1.893916 |
| DMRG (70) | -2.185071 | -1.205443 | -1.944832 | -1.851954 | -1.812450 | -1.836350 | -2.030131 |
| DMRG (1024) | _-2.264084_ | -2.141043 | -2.027736 | -1.931243 | **-1.858846** | _-1.93183_ | _-2.093013_ |
| PixelCNN | -2.24311(102) | -2.12616(23) | -2.01768(21) | -1.74282(30) | -1.72637(16) | -1.85239(13) | -2.0326(59) |
| Elementwise (8) | **-2.27446(27)** | **-2.15537(6)** | **-2.04437(7)** | **-1.94686(6)** | _-1.85443(15)_ | **-1.91391(10)** | **-2.09457(10)** |
| Blockwise (70) | -2.26152(50) | _-2.15395(7)_ | _-2.04225(8)_ | _-1.94298(43)_ | -1.85176(15) | -1.90571(12) | -2.09113(43) |
Figure 3: Energy per site difference \(\downarrow\) between elementwise ANTN with bond dimension 8 and DMRG with bond dimension 1024 for various \(J_{2}\) and different system sizes from \(4\times 4\) to \(12\times 12\).
algorithm with bond dimensions of 8, 70, and 1024. For up to 20 qubits, the DMRG results with bond dimension 1024 are exact and can be regarded as a benchmark. The fourth model is a pure neural network, PixelCNN; the fifth and sixth models are the elementwise and blockwise ANTN. The experiment thus compares our novel architecture (fifth and sixth models) against the two previous state-of-the-art techniques (first through fourth models).
Since our approach is based on the variational principle discussed in Sec. 3, it provides an upper bound on the exact ground state energy; therefore, a lower energy implies a better state. Table 2 summarizes the per site ground state energy computed by different models at different \(J_{2}\)'s in three distinct phases. Our results provide strong evidence that ANTN integrating TN and ARNN outperforms each base model, achieving state-of-the-art results.
_Discussion._ In all cases the ANTN surpasses the DMRG with the corresponding bond dimension on which it is based. For example, even in the \(10\times 10\) system with \(J_{2}>0.5\), where DMRG 1024 outperforms the ANTN, the elementwise ANTN still significantly improves on the base DMRG 8 result and the blockwise ANTN improves on the base DMRG 70 result. This is consistent with Theorems 5.2 and 5.3, which state that ANTN is more expressive than TN. The elementwise and blockwise ANTN models also consistently outperform the pure neural network PixelCNN model. This agrees with Theorem 5.2, since both elementwise and blockwise ANTN have a generalized expressivity compared to ARNN.
In addition, the ANTN models scale well to larger systems. Figure 3 visualizes the scalability of ANTN compared to DMRG. The figure plots the difference in per-site energy between the elementwise ANTN with bond dimension 8 and DMRG with bond dimension 1024 for \(J_{2}=0.2,0.5,0.8\), where lower energies signify better performance for the elementwise ANTN. Compared to DMRG, the ANTN models compute better ground state energies as the system size grows. As the system size increases, the elementwise ANTN with bond dimension 8 starts to outperform DMRG with bond dimension 1024. Even at \(J_{2}=0.6\), where the elementwise ANTN is slightly worse than DMRG, the difference becomes significantly smaller at \(12\times 12\) compared to \(10\times 10\) (Table 2). In fact, the elementwise ANTN achieves this performance using only \(\sim\)1% of the number of parameters of DMRG.
The cost of DMRG is dominated by the bond dimension (quadratic in memory and cubic in computation), which limits its use in practice for large systems that can require large bond dimensions. According to the complexity analysis in Sec. 5.1, ANTN has lower complexity than TN and thus scales better to larger systems. We note that for large \(J_{2}\), the system enters a striped phase (Nomura and Imada, 2021), which can be less challenging for DMRG calculations. Nevertheless, in almost all cases our ANTN still outperforms DMRG on the \(12\times 12\) system.
## 7 Conclusion
In this paper, we developed Autoregressive Neural TensorNet, bridging the two state-of-the-art methods in the field, tensor networks and autoregressive neural networks. We proved that Autoregressive Neural TensorNet is self-normalized with exact sampling, naturally generalizes the expressivity of both tensor networks and autoregressive neural networks, and inherits proper physics inductive bias from tensor networks and various symmetries from autoregressive neural networks. We demonstrated our approach on quantum state learning and the challenging 2D \(J_{1}\)-\(J_{2}\) Heisenberg model with different system sizes and couplings. Our results achieve better performance than both the original tensor network and autoregressive neural network while surpassing tensor networks with large bond dimension as the system size increases. Besides scientific applications, since both tensor networks and autoregressive neural networks have been applied to machine learning tasks such as supervised learning and generative modeling, Autoregressive Neural TensorNet can also be applied to these other contexts. Our novel approach holds promise for better performance in these other domains due to its exact sampling, expressivity, and symmetries.
**Broader Impact.** Our Autoregressive Neural TensorNet, blending tensor networks and autoregressive neural networks, could advance our grasp of quantum phenomena, potentially fueling scientific breakthroughs. It enhances quantum material design and quantum computing through improved simulations and control of quantum states. This technology may also inspire new machine learning models for handling high-dimensional data. However, possible ethical and societal impacts, such as the use for chemical weapon development, require careful scrutiny.
## References
* Barrett et al. (2022) Barrett, T. D., Malyshev, A., and Lvovsky, A. Autoregressive neural-network wavefunctions for ab initio quantum chemistry. _Nature Machine Intelligence_, 4(4):351-358, 2022.
* Capriotti et al. (2004) Capriotti, L., Fubini, A., Roscilde, T., and Tognetti, V. Ising transition in the two-dimensional quantum j 1- j 2 heisenberg model. _Physical review letters_, 92(15):157202, 2004.
* Carleo and Troyer (2017) Carleo, G. and Troyer, M. Solving the quantum many-body problem with artificial neural networks. _Science_, 355(6325):602-606, 2017. doi: 10.1126/science.aag2302. URL [https://www.science.org/doi/abs/10.1126/science.aag2302](https://www.science.org/doi/abs/10.1126/science.aag2302).
* Chen et al. (2022) Chen, Z., Luo, D., Hu, K., and Clark, B. K. Simulating 2+ 1d lattice quantum electrodynamics at finite density with neural flow wavefunctions. _arXiv preprint arXiv:2212.06835_, 2022.
* Choo et al. (2019) Choo, K., Neupert, T., and Carleo, G. Two-dimensional frustrated \({J}_{1}-{J}_{2}\) model studied with neural network quantum states. _Phys. Rev. B_, 100:125124, Sep 2019. doi: 10.1103/PhysRevB.100.125124. URL [https://link.aps.org/doi/10.1103/PhysRevB.100.125124](https://link.aps.org/doi/10.1103/PhysRevB.100.125124).
* Deng et al. (2017) Deng, D.-L., Li, X., and Das Sarma, S. Quantum entanglement in neural network states. _Physical Review X_, 7(2), May 2017. ISSN 2160-3308. doi: 10.1103/physrevx.7.021021. URL [http://dx.doi.org/10.1103/PhysRevX.7.021021](http://dx.doi.org/10.1103/PhysRevX.7.021021).
* Ferris and Vidal (2012) Ferris, A. J. and Vidal, G. Perfect sampling with unitary tensor networks. _Phys. Rev. B_, 85:165146, Apr 2012. doi: 10.1103/PhysRevB.85.165146. URL [https://link.aps.org/doi/10.1103/PhysRevB.85.165146](https://link.aps.org/doi/10.1103/PhysRevB.85.165146).
* Gao and Duan (2017) Gao, X. and Duan, L.-M. Efficient representation of quantum many-body states with deep neural networks. _Nature Communications_, 8(1):662, Sep 2017. ISSN 2041-1723. doi: 10.1038/s41467-017-00705-2. URL [https://www.nature.com/articles/s41467-017-00705-2](https://www.nature.com/articles/s41467-017-00705-2).
* Girvin and Yang (2019) Girvin, S. M. and Yang, K. _Modern condensed matter physics_. Cambridge University Press, 2019.
* Greensmith et al. (2004) Greensmith, E., Bartlett, P. L., and Baxter, J. Variance reduction techniques for gradient estimates in reinforcement learning. _J. Mach. Learn. Res._, 5:1471-1530, December 2004. ISSN 1532-4435.
* Gutierrez and Mendl (2020) Gutierrez, I. L. and Mendl, C. B. Real time evolution with neural-network quantum states, 2020.
* Hartmann and Carleo (2019) Hartmann, M. J. and Carleo, G. Neural-network approach to dissipative quantum many-body dynamics. _Phys. Rev. Lett._, 122:250502, Jun 2019. doi: 10.1103/PhysRevLett.122.250502. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250502](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250502).
* Hermann et al. (2019) Hermann, J., Schatzle, Z., and Noe, F. Deep neural network solution of the electronic schrodinger equation, 2019.
* Hibat-Allah et al. (2020) Hibat-Allah, M., Ganahl, M., Hayward, L. E., Melko, R. G., and Carrasquilla, J. Recurrent neural network wave functions. _Phys. Rev. Research_, 2:023358, Jun 2020. doi: 10.1103/PhysRevResearch.2.023358. URL [https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.023358](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.023358).
* Huang and Moore (2021) Huang, Y. and Moore, J. E. Neural network representation of tensor network and chiral states. _Phys. Rev. Lett._, 127:170601, Oct 2021. doi: 10.1103/PhysRevLett.127.170601. URL [https://link.aps.org/doi/10.1103/PhysRevLett.127.170601](https://link.aps.org/doi/10.1103/PhysRevLett.127.170601).
* Lami et al. (2022) Lami, G., Carleo, G., and Collura, M. Matrix product states with backflow correlations. _Physical Review B_, 106(8), aug 2022. doi: 10.1103/physrevb.106.l081111. URL [https://doi.org/10.1103%2Fphysrevb.106.l081111](https://doi.org/10.1103%2Fphysrevb.106.l081111).
* Lante and Parola (2006) Lante, V. and Parola, A. The Ising phase in the J1-J2 Heisenberg model. _Physical Review B_, 2006.
* Levine et al. (2019) Levine, Y., Sharir, O., Cohen, N., and Shashua, A. Quantum entanglement in deep learning architectures. _Physical Review Letters_, 122(6), Feb 2019a. ISSN 1079-7114. doi: 10.1103/physrevlett.122.065301. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.065301](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.065301).
* Levine et al. (2019b) Levine, Y., Sharir, O., Cohen, N., and Shashua, A. Quantum entanglement in deep learning architectures. _Physical review letters_, 122(6):065301, 2019b.
* Lu et al. (2019) Lu, S., Gao, X., and Duan, L.-M. Efficient representation of topologically ordered states with restricted boltzmann machines. _Phys. Rev. B_, 99:155136, Apr 2019. doi: 10.1103/PhysRevB.99.155136. URL [https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.155136](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.155136).
* Luo and Clark (2019) Luo, D. and Clark, B. K. Backflow transformations via neural networks for quantum many-body wave functions. _Physical Review Letters_, 122(22), Jun 2019. ISSN 1079-7114. doi: 10.1103/physrevlett.122.226401. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.226401](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.226401).
* Luo et al. (2020) Luo, D., Chen, Z., Carrasquilla, J., and Clark, B. K. Autoregressive neural network for simulating open quantum systems via a probabilistic formulation, 2020.
* Luo et al. (2021a) Luo, D., Carleo, G., Clark, B. K., and Stokes, J. Gauge equivariant neural networks for quantum lattice gauge theories. _Physical review letters_, 127(27):276402, 2021a.
* Luo et al. (2021b) Luo, D., Chen, Z., Hu, K., Zhao, Z., Hur, V. M., and Clark, B. K. Gauge invariant autoregressive neural networks for quantum lattice models, 2021b. URL [https://arxiv.org/abs/2101.07243](https://arxiv.org/abs/2101.07243).
* Luo et al. (2022a) Luo, D., Chen, Z., Carrasquilla, J., and Clark, B. K. Autoregressive neural network for simulating open quantum systems via a probabilistic formulation. _Phys. Rev. Lett._, 128:090501, Feb 2022a. doi: 10.1103/PhysRevLett.128.090501. URL [https://link.aps.org/doi/10.1103/PhysRevLett.128.090501](https://link.aps.org/doi/10.1103/PhysRevLett.128.090501).
* Luo et al. (2022b) Luo, D., Yuan, S., Stokes, J., and Clark, B. K. Gauge equivariant neural networks for 2+ 1d u (1) gauge theory simulations in hamiltonian formulation. _arXiv preprint arXiv:2211.03198_, 2022b.
* Marshall (1955) Marshall, W. Antiferromagnetism. _Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences_, 232(1188):48-68, 1955. ISSN 00804630. URL [http://www.jstor.org/stable/99682](http://www.jstor.org/stable/99682).
* Nagy and Savona (2019) Nagy, A. and Savona, V. Variational quantum monte carlo method with a neural-network ansatz for open quantum systems. _Phys. Rev. Lett._, 122:250501, Jun 2019. doi: 10.1103/PhysRevLett.122.250501. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250501](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250501).
* Nomura and Imada (2021) Nomura, Y. and Imada, M. Dirac-type nodal spin liquid revealed by refined quantum many-body solver using neural-network wave function, correlation ratio, and level spectroscopy. _Physical Review X_, 11(3), aug 2021. doi: 10.1103/physrevx.11.031034. URL [https://doi.org/10.1103%2Fphysrevx.11.031034](https://doi.org/10.1103%2Fphysrevx.11.031034).
* Oseledets (2011) Oseledets, I. V. Tensor-train decomposition. _SIAM Journal on Scientific Computing_, 33(5):2295-2317, 2011. doi: 10.1137/090752286. URL [https://doi.org/10.1137/090752286](https://doi.org/10.1137/090752286).
* Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. In _Advances in neural information processing systems_, pp. 8026-8037, 2019.
* Pfau et al. (2020) Pfau, D., Spencer, J. S., Matthews, A. G. D. G., and Foulkes, W. M. C. Ab initio solution of the many-electron schrodinger equation with deep neural networks. _Phys. Rev. Research_, 2:033429, Sep 2020. doi: 10.1103/PhysRevResearch.2.033429. URL [https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033429](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033429).
* Preskill (2021) Preskill, J. Quantum computing 40 years later. _arXiv preprint arXiv:2106.10522_, 2021.
* Schmitt and Heyl (2020) Schmitt, M. and Heyl, M. Quantum many-body dynamics in two dimensions with artificial neural networks. _Physical Review Letters_, 125(10), Sep 2020. ISSN 1079-7114. doi: 10.1103/physrevlett.125.100503. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.100503](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.100503).
* Sharir et al. (2020)
Sharir, O., Levine, Y., Wies, N., Carleo, G., and Shashua, A. Deep autoregressive models for the efficient variational simulation of many-body quantum systems. _Physical Review Letters_, 124(2), jan 2020. doi: 10.1103/physrevlett.124.020503. URL [https://doi.org/10.1103%2Fphysrevlett.124.020503](https://doi.org/10.1103%2Fphysrevlett.124.020503).
* Sharir et al. (2021) Sharir, O., Shashua, A., and Carleo, G. Neural tensor contractions and the expressive power of deep neural quantum states, 2021.
* Van den Oord et al. (2016) Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with pixelcnn decoders. _Advances in neural information processing systems_, 29, 2016.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. _Advances in neural information processing systems_, 30:5998-6008, 2017.
* Verstraete & Cirac (2004) Verstraete, F. and Cirac, J. I. Renormalization algorithms for quantum-many body systems in two and higher dimensions, 2004. URL [https://arxiv.org/abs/cond-mat/0407066](https://arxiv.org/abs/cond-mat/0407066).
* Vicentini et al. (2019) Vicentini, F., Biella, A., Regnault, N., and Ciuti, C. Variational neural-network ansatz for steady states in open quantum systems. _Physical Review Letters_, 122(25), Jun 2019. ISSN 1079-7114. doi: 10.1103/physrevlett.122.250503. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250503](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.250503).
* Vicentini et al. (2022) Vicentini, F., Hofmann, D., Szabo, A., Wu, D., Roth, C., Giuliani, C., Pescia, G., Nys, J., Vargas-Calderon, V., Astrakhantsev, N., and Carleo, G. NetKet 3: Machine learning toolbox for many-body quantum systems. _SciPost Physics Codebases_, aug 2022. doi: 10.21468/scipostphyscodeb.7. URL [https://doi.org/10.21468%2Fscipostphyscodeb.7](https://doi.org/10.21468%2Fscipostphyscodeb.7).
* Vidal (2003) Vidal, G. Efficient classical simulation of slightly entangled quantum computations. _Physical Review Letters_, 91(14), oct 2003. doi: 10.1103/physrevlett.91.147902. URL [https://doi.org/10.1103%2Fphysrevlett.91.147902](https://doi.org/10.1103%2Fphysrevlett.91.147902).
* Vidal (2004) Vidal, G. Efficient simulation of one-dimensional quantum many-body systems. _Physical Review Letters_, 93(4), jul 2004. doi: 10.1103/physrevlett.93.040502. URL [https://doi.org/10.1103%2Fphysrevlett.93.040502](https://doi.org/10.1103%2Fphysrevlett.93.040502).
* Vidal (2007) Vidal, G. Entanglement renormalization. _Phys. Rev. Lett._, 99:220405, Nov 2007. doi: 10.1103/PhysRevLett.99.220405. URL [https://link.aps.org/doi/10.1103/PhysRevLett.99.220405](https://link.aps.org/doi/10.1103/PhysRevLett.99.220405).
* Vieijra et al. (2020) Vieijra, T., Casert, C., Nys, J., De Neve, W., Haegeman, J., Ryckebusch, J., and Verstraete, F. Restricted boltzmann machines for quantum states with non-abelian or anyonic symmetries. _Physical Review Letters_, 124(9), Mar 2020. ISSN 1079-7114. doi: 10.1103/physrevlett.124.097201. URL [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.097201](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.097201).
* Westerhout et al. (2020) Westerhout, T., Astrakhantsev, N., Tikhonov, K. S., Katsnelson, M. I., and Bagrov, A. A. Generalization properties of neural network approximations to frustrated magnet ground states. _Nature Communications_, 11(1):1593, Mar 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-15402-w. URL [https://doi.org/10.1038/s41467-020-15402-w](https://doi.org/10.1038/s41467-020-15402-w).
* White (1992) White, S. R. Density matrix formulation for quantum renormalization groups. _Physical review letters_, 69(19):2863, 1992.
* Wu et al. (2022) Wu, D., Rossi, R., Vicentini, F., and Carleo, G. From tensor network quantum states to tensorial recurrent neural networks. _arXiv preprint arXiv:2206.12363_, 2022.
* Yoshioka & Hamazaki (2019) Yoshioka, N. and Hamazaki, R. Constructing neural stationary states for open quantum many-body systems. _Phys. Rev. B_, 99:214306, Jun 2019. doi: 10.1103/PhysRevB.99.214306. URL [https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.214306](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.214306).
## Appendix A Additional Background
### Quantum State Learning
In this section, we discuss more details about quantum state learning. As mentioned in the main paper, the quantum fidelity
\[F=\left|\langle\phi|\psi_{\theta}\rangle\right|^{2}=\left|\sum_{\mathbf{x}}\phi^{*}( \mathbf{x})\psi_{\theta}(\mathbf{x})\right|^{2}\] (A.1)
measures the closeness of the two (normalized) quantum states \(\left|\phi\right\rangle\) and \(\left|\psi_{\theta}\right\rangle\), where \({}^{*}\) denotes complex conjugation. By minimizing \(-\log F\), one obtains the \(\left|\psi_{\theta}\right\rangle\) closest to the target state \(\left|\phi\right\rangle\) (see Appendix C for optimization details). For small system sizes (\(\lesssim 20\) qubits), it is possible to enumerate all basis states \(\mathbf{x}\) and compute \(-\log F\) exactly, as is done in this work. As mentioned in the main paper, we learn random Bell states and shallow random circuits to test the expressivity and physics inductive bias of the ansatz. More specifically, the states are generated as follows
**Random Bell State.** A two-qubit Bell state is defined to be the state \((\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}\). We use the following algorithm to generate a \(n\)-qubit random Bell state (\(n\) is a multiple of \(2\)).
```
\(\mathbf{a}\leftarrow\text{shuffle}([1,\dots,\frac{n}{2}])\)
\(\mathbf{b}\leftarrow\text{shuffle}([\frac{n}{2}+1,\dots,n])\)
\(\mathbf{\Psi}\leftarrow[]\)
\(i\gets 1\)
while \(i\leq n/2\) do
    \(\mathbf{\Psi}.\text{append}(\text{bell\_state}(a_{i},b_{i}))\)
    \(i\gets i+1\)
end while
\(\left|\psi\right\rangle\leftarrow\text{product\_state}(\mathbf{\Psi})\)
return \(\left|\psi\right\rangle\)
```
**Algorithm 1** Random Bell State Generation
where \(\text{bell\_state}(\cdot,\cdot)\) creates a Bell state on the given two qubits and \(\text{product\_state}(\cdot)\) creates the tensor product state of a list of individual states. Each two-qubit Bell state in the random Bell state forms between the left half and the right half of the system, allowing maximal entanglement across a cut in the middle of the system.
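As a concrete (hypothetical) realization of Algorithm 1, the following NumPy sketch builds the full statevector directly; it assumes big-endian qubit ordering and is only feasible for small \(n\) (e.g., the 16 qubits used above), since it stores all \(2^{n}\) amplitudes.

```python
import numpy as np

def random_bell_state(n, rng=None):
    """Statevector of a random Bell state on n qubits (n even): each
    qubit in the first half is paired with a random qubit in the
    second half, forming (|00> + |11>)/sqrt(2) on every pair."""
    rng = rng or np.random.default_rng()
    assert n % 2 == 0
    a = rng.permutation(n // 2)                 # first-half qubits
    b = rng.permutation(np.arange(n // 2, n))   # second-half qubits
    psi = np.zeros(2 ** n, dtype=np.complex128)
    # The state is a uniform superposition over the 2**(n/2) basis
    # states in which every paired qubit agrees with its partner.
    for bits in range(2 ** (n // 2)):
        index = 0
        for k in range(n // 2):
            bit = (bits >> k) & 1
            index |= bit << (n - 1 - int(a[k]))  # qubit a_k, big-endian
            index |= bit << (n - 1 - int(b[k]))  # its partner b_k
        psi[index] = 1.0
    return psi / np.linalg.norm(psi)
```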
**Shallow Random Circuit.** The shallow random circuit generates random states by applying \(n_{l}\) layers of random two-qubit gates, where \(n_{l}\) is the number of layers and \(\operatorname{random\_gate}(\cdot,\cdot)\) generates a random unitary gate on the two given qubits. The \(\operatorname{random\_gate}(\cdot,\cdot)\) function is realized by first generating a (real-valued) Gaussian random matrix of size \(4\times 4\), followed by a QR decomposition of the matrix, keeping the orthogonal factor as the random unitary gate.
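The gate construction described above can be sketched as follows; the sign fix on \(R\)'s diagonal is a common refinement (an assumption on our part, not stated in the text) that makes the distribution of the orthogonal factor well defined.

```python
import numpy as np

def random_gate(rng=None):
    """Real-valued random 4x4 two-qubit gate: draw a Gaussian matrix,
    QR-decompose it, and keep the orthogonal factor Q."""
    rng = rng or np.random.default_rng()
    g = rng.standard_normal((4, 4))
    q, r = np.linalg.qr(g)
    # Absorb the signs of R's diagonal into Q so the convention is fixed.
    return q * np.sign(np.diag(r))
```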
Each of the above algorithms defines a distribution over states, which we average over multiple realizations to compute the mean and standard deviation of the learned quantum fidelity.
### Variational Monte Carlo
As shown in the main paper, given a Hamiltonian \(\hat{\mathcal{H}}\) (and its matrix elements \(\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\))
\[\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta}\rangle=\sum_{\mathbf{x},\mathbf{x }^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}^{*}(\mathbf{x})\psi_ {\theta}(\mathbf{x}^{\prime})=\sum_{\mathbf{x},\mathbf{x}^{\prime}}|\psi_{\theta}(\mathbf{x} )|^{2}\frac{\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x}^{\prime} )}{\psi_{\theta}(\mathbf{x})}=\mathbb{E}_{\mathbf{x}\sim|\psi_{\theta}|^{2}}\frac{ \sum_{\mathbf{x}^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x }^{\prime})}{\psi_{\theta}(\mathbf{x})}.\] (A.2)
Since the Hamiltonian \(\hat{\mathcal{H}}\) is usually sparse, given \(\mathbf{x}\), one only needs to sum over a small number of \(\mathbf{x}^{\prime}\)'s in the numerator (usually linear in system size), allowing Eq. A.2 to be evaluated efficiently. Analogously, we can evaluate the gradient as
\[\nabla_{\theta}\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{ \theta}\right\rangle =2\Re\sum_{\mathbf{x},\mathbf{x}^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{ \prime}}\left[\nabla_{\theta}\psi_{\theta}^{*}(\mathbf{x})\right]\psi_{\theta}(\bm {x}^{\prime})\] (A.3) \[=2\Re\sum_{\mathbf{x},\mathbf{x}^{\prime}}|\psi_{\theta}(\mathbf{x})|^{2} \mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\frac{\nabla_{\theta}\psi_{\theta}^{*}( \mathbf{x})}{\psi_{\theta}^{*}(\mathbf{x})}\frac{\psi_{\theta}(\mathbf{x}^{\prime})}{\psi _{\theta}(\mathbf{x})}\] \[=2\Re\,\mathbb{E}_{\mathbf{x}\sim|\psi_{\theta}|^{2}}\frac{\sum_{\bm {x}^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x}^{\prime })}{\psi_{\theta}(\mathbf{x})}\nabla_{\theta}\log\psi_{\theta}^{*}(\mathbf{x}).\]
Furthermore, it is possible to reduce the variance by either directly applying the variance reduction formula (Greensmith et al., 2004), or explicitly normalizing \(|\psi_{\theta}\rangle\) (minimizing \(\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta}\right\rangle/ \left\langle\psi_{\theta}|\psi_{\theta}\right\rangle\)) to obtain
\[\nabla_{\theta}\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta} \right\rangle=2\Re\,\mathbb{E}_{\mathbf{x}\sim|\psi_{\theta}|^{2}}\left[\frac{ \sum_{\mathbf{x}^{\prime}}\mathcal{H}_{\mathbf{x},\mathbf{x}^{\prime}}\psi_{\theta}(\mathbf{x }^{\prime})}{\psi_{\theta}(\mathbf{x})}-\left\langle\psi_{\theta}|\hat{\mathcal{H} }|\psi_{\theta}\right\rangle\right]\nabla_{\theta}\log\psi_{\theta}^{*}(\mathbf{x}),\] (A.4)
where \(\left\langle\psi_{\theta}|\hat{\mathcal{H}}|\psi_{\theta}\right\rangle\) can be approximated by applying Eq. A.2 on the same batch \(\mathbf{x}\). In this work, we use Eq. A.4 as the gradient of the loss function (see Appendix C for optimization details).
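A minimal PyTorch sketch of the variance-reduced estimator in Eq. A.4 is given below; the function name and the convention that `log_amp` holds complex log-amplitudes \(\log\psi_{\theta}(\mathbf{x})\) for samples drawn from \(|\psi_{\theta}|^{2}\) are illustrative assumptions.

```python
import torch

def vmc_surrogate_loss(log_amp: torch.Tensor, e_loc: torch.Tensor):
    """Scalar whose gradient w.r.t. the network parameters equals
    Eq. (A.4). `e_loc` is the per-sample local energy
    sum_x' H_{x,x'} psi(x') / psi(x), treated as a constant."""
    e_loc = e_loc.detach()
    baseline = e_loc.mean()          # the <psi|H|psi> term in Eq. (A.4)
    # grad of log psi*(x) is the conjugate of grad of log psi(x).
    return 2.0 * torch.mean((e_loc - baseline) * torch.conj(log_amp)).real
```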
## Appendix B Additional Theoretical Results
### Exact Sampling from Conditional Probabilities
Suppose a full probability distribution over multiple qubits is written as a product of conditional probabilities as
\[p(\mathbf{x})=p(x_{1},\ldots x_{n})=\prod_{i=1}^{n}p(x_{i}|\mathbf{x}_{<i})\] (B.1)
with \(\mathbf{x}_{<i}=(x_{1},\ldots,x_{i-1})\), then a bitstring \(\mathbf{x}\) can be sampled from the probability distribution by sequentially sampling each conditional probability, as shown in Algorithm 3. Each sample obtained this way is an exact, independent draw from \(p(\mathbf{x})\).
```
\(\mathbf{x}\leftarrow[]\)
\(i\gets 1\)
while \(i\leq n\) do
    \(\mathbf{x}_{<i}\leftarrow\mathbf{x}\)
    \(x_{i}\sim p(x_{i}|\mathbf{x}_{<i})\)
    \(\mathbf{x}.\text{append}(x_{i})\)
    \(i\gets i+1\)
end while
return \(\mathbf{x}\)
```
**Algorithm 3** Exact Sampling from Conditional Probabilities/Wavefunctions
In the following, we will use the notation
\[\mathbf{x}_{\leq i}=\{x_{i},\mathbf{x}_{<i}\},\] (B.2)
and the fact that we require \(\sum_{x_{i}}|\psi(x_{i}|\mathbf{x}_{<i})|^{2}=1\).
### Exact Sampling from MPS
As mentioned in the main text, a matrix product state (MPS) with a particular constraint (the right canonical form introduced below) also allows for exact sampling. Recall that an MPS defines a wavefunction as
\[\psi(\mathbf{x})=\sum_{\alpha_{1},\ldots,\alpha_{n-1}}M_{x_{1}}^{\alpha_{0}\alpha_ {1}}\cdots M_{x_{n}}^{\alpha_{n-1}\alpha_{n}}=\sum_{\mathbf{\alpha}}\prod_{i=1}^{n} M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}.\] (B.3)
Let's first assume that the wavefunction is normalized. It is easy to notice that any gauge transformation
\[(M_{x_{i}},M_{x_{i+1}})\rightarrow(M_{x_{i}}A,A^{-1}M_{x_{i+1}}),\] (B.4)
with \(A\) being any invertible matrix leaves the resulting wavefunction invariant. Here, we suppress the \(\alpha\) indices and view the \(M\)'s as matrices. It has been shown that by fixing this gauge redundancy (Vidal, 2003, 2004), we can restrict all the \(M\) matrices to satisfy the following condition
\[\sum_{x_{i},\alpha_{i}}M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}\left(M_{x_{i}}^{ \alpha_{i-1}^{\prime}\alpha_{i}}\right)^{*}=\delta_{\alpha_{i-1},\alpha_{i-1}^ {\prime}},\] (B.5)
where \(\delta_{\cdot,\cdot}\) is the Kronecker delta function. The matrices \(M\) that satisfy this condition are called isometries, and the MPS is said to be in the right canonical form (to be distinguished from the left canonical form). In the right canonical form, each \(M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}\) can be interpreted as a basis transformation \((x_{i},\alpha_{i})\rightarrow\alpha_{i-1}\) that is part of a unitary matrix.
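Eq. B.5 can be verified numerically; the following sketch (the tensor layout `(physical, left bond, right bond)` is our assumption) checks whether a list of MPS tensors is right-canonical.

```python
import numpy as np

def is_right_canonical(mps, atol=1e-10):
    """Check Eq. (B.5): for each tensor M[x, a, b], contracting the
    physical index x and the right bond b must give the identity
    on the left bond: sum_{x,b} M[x,a,b] conj(M[x,a',b]) = delta."""
    for M in mps:
        gram = np.einsum('xab,xcb->ac', M, np.conj(M))
        if not np.allclose(gram, np.eye(M.shape[1]), atol=atol):
            return False
    return True
```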
**Theorem B.1**.: _Defining the left partially contracted tensors_
\[M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}:=\sum_{\boldsymbol{\alpha}_{<j}}\prod_{i=1}^{j}M_{x_{i}}^{\alpha_{i-1}\alpha_{i}},\] (B.6)
_If the matrix product state is in the right canonical form, then the marginal probability of the wavefunction satisfies_
\[q(\mathbf{x}_{\leq j}):=\sum_{\mathbf{x}_{>j}}\left|\psi(\mathbf{x})\right|^{2}=\sum_{\alpha_{j}}M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}\left(M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}\right)^{*}.\] (B.7)
Proof.: Let's define the right partially contracted tensors analogously,
\[M_{R^{\alpha_{j-1}}_{\mathbf{x}\geq j}}:=\sum_{\boldsymbol{\alpha}_{\geq j}}\prod_{i=j}^{n}M_{x_{i}}^{\alpha_{i-1}\alpha_{i}}.\] (B.8)
We will first show that
\[\sum_{\mathbf{x}_{\geq j}}M_{R^{\alpha_{j-1}}_{\mathbf{x}\geq j}}\left(M_{R^{\alpha^{\prime}_{j-1}}_{\mathbf{x}\geq j}}\right)^{*}=\delta_{\alpha_{j-1},\alpha^{\prime}_{j-1}}.\] (B.9)
We can prove this by induction:
* **Base case:**
\[\sum_{x_{n}}M_{R^{\alpha_{n-1}}_{x_{n}}}\left(M_{R^{\alpha^{\prime}_{n-1}}_{x_{n}}}\right)^{*}=\sum_{x_{n},\alpha_{n}}M_{x_{n}}^{\alpha_{n-1}\alpha_{n}}\left(M_{x_{n}}^{\alpha^{\prime}_{n-1}\alpha_{n}}\right)^{*}=\delta_{\alpha_{n-1},\alpha^{\prime}_{n-1}},\] (B.10)
which follows directly from Eq. B.5 by noticing that \(\mathcal{D}(\alpha_{n})=1\), so \(\alpha_{n}\) can be ignored.
* **Inductive step:** Write
\[M_{R^{\alpha_{j-1}}_{\mathbf{x}\geq j}}=\sum_{\alpha_{j}}M_{x_{j}}^{\alpha_{j-1}\alpha_{j}}M_{R^{\alpha_{j}}_{\mathbf{x}\geq j+1}}.\] (B.11)
Assuming
\[\sum_{\mathbf{x}_{\geq j+1}}M_{R^{\alpha_{j}}_{\mathbf{x}\geq j+1}}\left(M_{R^{\alpha^{\prime}_{j}}_{\mathbf{x}\geq j+1}}\right)^{*}=\delta_{\alpha_{j},\alpha^{\prime}_{j}},\] (B.12)
then
\[\begin{split}\sum_{\mathbf{x}_{\geq j}}M_{R^{\alpha_{j-1}}_{\mathbf{x}\geq j}}\left(M_{R^{\alpha^{\prime}_{j-1}}_{\mathbf{x}\geq j}}\right)^{*}&=\sum_{x_{j}}\sum_{\alpha_{j},\alpha^{\prime}_{j}}M_{x_{j}}^{\alpha_{j-1}\alpha_{j}}\left(M_{x_{j}}^{\alpha^{\prime}_{j-1}\alpha^{\prime}_{j}}\right)^{*}\sum_{\mathbf{x}_{\geq j+1}}M_{R^{\alpha_{j}}_{\mathbf{x}\geq j+1}}\left(M_{R^{\alpha^{\prime}_{j}}_{\mathbf{x}\geq j+1}}\right)^{*}\\&=\sum_{x_{j}}\sum_{\alpha_{j},\alpha^{\prime}_{j}}M_{x_{j}}^{\alpha_{j-1}\alpha_{j}}\left(M_{x_{j}}^{\alpha^{\prime}_{j-1}\alpha^{\prime}_{j}}\right)^{*}\delta_{\alpha_{j},\alpha^{\prime}_{j}}\\&=\sum_{x_{j},\alpha_{j}}M_{x_{j}}^{\alpha_{j-1}\alpha_{j}}\left(M_{x_{j}}^{\alpha^{\prime}_{j-1}\alpha_{j}}\right)^{*}\\&=\delta_{\alpha_{j-1},\alpha^{\prime}_{j-1}}.\end{split}\] (B.13)
Using this result, we can then prove the desired statement about the marginal probability. Let's write
\[\psi(\mathbf{x})=\sum_{\mathbf{\alpha}}\prod_{i=1}^{n}M_{x_{i}}^{\alpha_{i-1}\alpha_{i }}=\sum_{\alpha_{j}}M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}M_{R^{\alpha_{j}}_{\mathbf{x} >j}},\] (B.14)
then,
\[\begin{split}q(\mathbf{x}_{\leq j})&=\sum_{\mathbf{x}_{>j}}\left|\psi(\mathbf{x})\right|^{2}=\sum_{\mathbf{x}_{>j}}\sum_{\alpha_{j},\alpha^{\prime}_{j}}M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}M_{R^{\alpha_{j}}_{\mathbf{x}>j}}\left(M_{L^{\alpha^{\prime}_{j}}_{\mathbf{x}\leq j}}M_{R^{\alpha^{\prime}_{j}}_{\mathbf{x}>j}}\right)^{*}\\&=\sum_{\alpha_{j},\alpha^{\prime}_{j}}M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}\left(M_{L^{\alpha^{\prime}_{j}}_{\mathbf{x}\leq j}}\right)^{*}\delta_{\alpha_{j},\alpha^{\prime}_{j}}\\&=\sum_{\alpha_{j}}M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}\left(M_{L^{\alpha_{j}}_{\mathbf{x}\leq j}}\right)^{*}.\end{split}\] (B.15)
**Corollary B.1.1**.: _Matrix product state allows for efficient exact sampling._
Proof.: Theorem B.1 shows that the marginal probability distribution \(q(\mathbf{x}_{\leq i})\) can be evaluated efficiently. The conditional probability can then be evaluated efficiently either as \(q(x_{i}|\mathbf{x}_{<i})=q(\mathbf{x}_{\leq i})/q(\mathbf{x}_{<i})\) or (equivalently) as an explicit normalization of the marginal probability distribution, \(q(x_{i}|\mathbf{x}_{<i})=q(\mathbf{x}_{\leq i})/\sum_{x_{i}}q(\mathbf{x}_{\leq i})\). Then, Algorithm 3 can be used to sample from the full distribution; a minimal NumPy sketch is given below.
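The sketch assumes tensors of shape `(physical, left bond, right bond)` in right canonical form with trivial boundary bonds; names are illustrative.

```python
import numpy as np

def sample_mps(mps, rng=None):
    """Exact ancestral sampling from a right-canonical MPS.
    The left environment L carries the partially contracted
    amplitudes M_L of Theorem B.1."""
    rng = rng or np.random.default_rng()
    x = []
    L = np.ones(1, dtype=mps[0].dtype)        # trivial left boundary
    for M in mps:
        # amps[x_i, a_i] = sum_{a_{i-1}} L[a_{i-1}] M[x_i, a_{i-1}, a_i]
        amps = np.einsum('a,xab->xb', L, M)
        marg = np.einsum('xb,xb->x', amps, np.conj(amps)).real  # Eq. (B.7)
        probs = marg / marg.sum()             # conditional q(x_i | x_<i)
        xi = int(rng.choice(len(probs), p=probs))
        x.append(xi)
        L = amps[xi]                          # extend the left environment
    return x
```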
### Autoregressive Sampling Order
Algorithm 3 samples from conditional probabilities, which requires an ordering of the system to be defined. For 1D systems, we use a linear ordering such that qubit 1 corresponds to the left most qubit, and qubit \(n\) corresponds to the right most qubit. This ordering is natural for 1D MPS and transformer, as well as the transformer based ANTN. For 2D systems, we use a snake (zig-zag) ordering. In this ordering, the qubit at the 2D location \((i,j)\) is defined to be the \(i\times L_{y}+j\) th qubit, where \(L_{y}\) is the number of qubits along the second dimension. This ordering is inherently defined from the masked convolution for PixelCNN and PixelCNN based ANTN. The MPS is reshaped to satisfy the same ordering for 2D systems. The same 2D ordering can be generalized to other tensor networks as well.
### Additional Details of Expressivity and Symmetry Results of ANTN
**Theorem B.2**.: _Both tensor networks and autoregressive neural networks are special cases of Autoregressive Neural TensorNet._
Proof.: Recall that the ANTN defines the wavefunction from its amplitude and phase
\[\psi(\mathbf{x}):=\sqrt{q(\mathbf{x})}e^{i\phi(\mathbf{x})},\] (B.16)
where
\[q(\mathbf{x})= \prod_{j=1}^{n}q(x_{j}|\mathbf{x}_{<j})\] (B.17) \[\text{with}\ \ q(x_{j}|\mathbf{x}_{<j})=\frac{q(\mathbf{x}_{\leq j})}{ \sum_{x_{j}}q(\mathbf{x}_{\leq j})}\] (B.18) \[q(\mathbf{x}_{\leq j}):=\sum_{\alpha_{j}}\tilde{\psi}_{L}^{\alpha_{j} }(\mathbf{x}_{\leq j})\tilde{\psi}_{L}^{\alpha_{j}}(\mathbf{x}_{\leq j})^{*}\] (B.19) \[\tilde{\psi}_{L}^{\alpha_{j}}(\mathbf{x}_{\leq j}):=\sum_{\mathbf{\alpha}_{<j}}\prod_ {i=1}^{j}\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\] (B.20)
and
\[\phi(\mathbf{x})=:\text{Arg}\sum_{\mathbf{\alpha}}\prod_{i=1}^{n}\tilde{\psi}^{ \alpha_{i-1},\alpha_{i}}(x_{i}|\mathbf{x}_{<i}).\] (B.21)
* **ANTN reduces to MPS.** If we don't modify the tensors with neural networks, \(\tilde{\psi}^{\alpha_{i-1},\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\) reduces to \(M_{x_{i}}^{\alpha_{i-1},\alpha_{i}}\) and \(\tilde{\psi}_{L}^{\alpha_{j}}(\mathbf{x}_{\leq j})\) reduces to \(M_{L_{\mathbf{x}_{\leq j}}^{\alpha_{j}}}\). It is then straightforward that Eq. B.19 reduces to Eq. B.15 and thus the probability distribution of ANTN reduces to that of MPS. Analogously, Eq. B.21 reduces to the phase of an MPS, and thus ANTN wavefunction reduces to MPS wavefunction.
* **ANTN reduces to ARNN.** If we set all \(\mathcal{D}(\alpha)=1\), then \(\tilde{\psi}^{\alpha_{i-1}\alpha_{i}}(x_{i}|\mathbf{x}_{<i})\) reduces to \(\psi(x_{i}|\mathbf{x}_{<i})\) and all the summations over \(\mathbf{\alpha}\) can be dropped. In this case, \(q(\mathbf{x}_{\leq j})\) reduces to \(\prod_{i=1}^{j}\left|\psi(x_{i}|\mathbf{x}_{<i})\right|^{2}\) and \(q(x_{j}|\mathbf{x}_{<j})\) reduces to \(\left|\psi(x_{j}|\mathbf{x}_{<j})\right|^{2}\), which is the same as the standard ARNN. In addition, the phase \(\phi(\mathbf{x})\) reduces to \(\text{Arg}\prod_{i=1}^{n}\psi(x_{i}|\mathbf{x}_{<i})\) which is also the same as ARNN. Thus, ANTN wavefunction reduces to ARNN wavefunction.
Notice that we only showed one type of TN, i.e. MPS, but the proof can be easily generalized to any TN that allows evaluation of marginal probabilities.
**Definition B.1** (Mask Symmetry).: A conditional wavefunction \(\psi(x_{i}|\mathbf{x}_{<i})\) has a _mask symmetry_ if \(\psi(x_{i}|\mathbf{x}_{<i})=0\) for some \(x_{i}\) given \(\mathbf{x}_{<i}\).
**Theorem B.3**.: _Autoregressive Neural TensorNet inherits mask symmetry from autoregressive neural networks._
Proof.: The mask symmetry results in \(\psi(\mathbf{x})=0\) for certain \(\mathbf{x}\) satisfying the condition while preserving the normalization of \(\psi(\mathbf{x})\). For ANTN, we can directly set \(q(x_{i}|\mathbf{x}_{<i})=0\) for the \(x_{i}\) given \(\mathbf{x}_{<i}\). This results in \(\psi(\mathbf{x})=0\) for the same \(\mathbf{x}\).
**Corollary B.3.1** (Global U(1) Symmetry).: _Autoregressive Neural TensorNet can realize global U(1) symmetry, which conserves particle number._
Proof.: A global U(1) symmetry in a qubit system manifests as conservation of the number of \(0\)'s and the number of \(1\)'s in \(\mathbf{x}\). Equivalently, \(\psi(\mathbf{x})=0\) for any \(\mathbf{x}\) that violates such conservation. (Hibat-Allah et al., 2020) showed that autoregressive neural networks can preserve global U(1) symmetry as a mask symmetry, which ANTN inherits; a minimal sketch of such a mask is given below.
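The function name and tensor conventions in this sketch are illustrative assumptions; it enforces a fixed number `n_up` of 1's by masking and renormalizing one site's conditionals.

```python
import torch

def u1_mask_conditionals(cond_probs, x_prev, n_total, n_up):
    """Enforce global U(1) (fixed particle number) as a mask symmetry.
    cond_probs: [batch, 2] tensor of q(x_i | x_<i);
    x_prev: [batch, i-1] tensor of previously sampled bits."""
    i = x_prev.shape[1]
    used = x_prev.sum(dim=1)                  # 1's placed so far
    remaining = n_total - i                   # sites left, incl. this one
    mask = torch.ones_like(cond_probs)
    mask[used >= n_up, 1] = 0.0               # no more 1's allowed
    mask[used + remaining <= n_up, 0] = 0.0   # all remaining must be 1
    masked = cond_probs * mask
    return masked / masked.sum(dim=1, keepdim=True)
```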
**Definition B.2** (Function Symmetry).: For a function \(F\) that satisfies \(\{F(x_{i}),F(\mathbf{x}_{<i})\}=F(\mathbf{x}_{\leq i})\), a conditional wavefunction tensor \(\psi(x_{i}|\mathbf{x}_{<i})\) has a _function symmetry_ over \(F\) if \(\psi(x_{i}|\mathbf{x}_{<i})=\psi(F(x_{i})|F(\mathbf{x}_{<i}))\). Here, \(F(x_{i})\) is allowed to also depend on \(\mathbf{x}_{<i}\).
**Theorem B.4**.: _Autoregressive Neural TensorNet inherits function symmetry from autoregressive neural networks._
Proof.: We can apply the function symmetry on the left partially contracted tensors instead of the conditional wavefunctions. Here, we show that this produces the desired conditional probabilities and phase. Applying the function symmetry on the left partially contracted tensors results in
\[\tilde{\psi}_{L}^{\alpha_{j}}(\mathbf{x}_{\leq j})=\tilde{\psi}_{L}^{\alpha_{j}}( F(\mathbf{x}_{\leq j})).\] (B.22)
This implies that the marginal probability satisfies \(q(\mathbf{x}_{\leq j})=q(F(\mathbf{x}_{\leq j}))\) after contracting over the index \(\alpha_{j}\), which then results in the correct symmetry on the conditional probabilities
\[q(x_{j}|\mathbf{x}_{<j})=q(F(x_{j})|F(\mathbf{x}_{<j})),\] (B.23)
as conditional probability is just the marginal probability normalized at site \(j\). The phase, on the other hand,
\[\phi(\mathbf{x})=\text{Arg}\sum_{\mathbf{\alpha}}\prod_{i=1}^{n}\tilde{\psi}^{\alpha_ {i-1},\alpha_{i}}(x_{i}|\mathbf{x}_{<i})=\text{Arg}\;\tilde{\psi}_{L}^{\alpha_{n} }(\mathbf{x}_{\leq n}),\] (B.24)
which satisfies the same function symmetry from Eq. B.22.
**Corollary B.4.1** (\(\mathbb{Z}_{2}\) Spin Flip Symmetry).: _Autoregressive Neural TensorNet can realize \(\mathbb{Z}_{2}\) spin flip symmetry such that the ANTN wavefunction is invariant under conjugation of the input._
Proof.: Spin flip symmetry exists for quantum chemistry systems that do not couple spin-up and spin-down electrons, so that the wavefunction is invariant under inputs in which spin up and spin down are flipped. (Barrett et al., 2022) showed that autoregressive neural networks can preserve \(\mathbb{Z}_{2}\) spin flip symmetry as a function symmetry, which ANTN inherits.
**Corollary B.4.2** (Discrete Abelian and Non-Abelian Symmetries).: _Autoregressive Neural TensorNet can realize discrete Abelian and Non-Abelian symmetries._
Proof.: Gauge symmetry is a local symmetry such that the function is invariant under local transformations. (Luo et al., 2021b) showed that autoregressive neural networks can preserve discrete Abelian and non-Abelian symmetries as either mask symmetries or function symmetries, which ANTN inherits.
## Appendix C Experimental Details and Reproducibility
We note that while the following details can be used to reproduce our results, additional adjustments may be possible to further improve them.
**Implementation.** The transformer (Vaswani et al., 2017) used in this work is a decoder-only transformer implemented in (Luo et al., 2020), and the PixelCNN used is the gated PixelCNN (Van den Oord et al., 2016) implemented in (Chen et al., 2022) but without the channelwise mask. We will release the code following the publication of this work.
**Hyperparameters.** For the transformer in quantum state learning, we use 2 layers, 32 hidden dimensions, and 16 attention heads, whereas for the PixelCNN in variational Monte Carlo, we use 7 layers with dilations 1, 2, 1, 4, 1, 2, and 1 for each layer, and 48 hidden dimensions.
**Initialization.** Throughout the experiment, we initialize the weight matrices of the last fully connected layer of the ARNN and ANTN to 0, and initialize the bias according to the following rule:
* The bias of ARNN is always randomly initialized.
* For the random Bell state learning, the bias of ANTN is randomly initialized.
* For the rest of the experiment, the bias of ANTN is set to be 0.
The rest of the parameters of ARNN and ANTN are randomly initialized according to PyTorch (Paszke et al., 2019)'s default initialization scheme.
In addition, for the random Bell state learning experiment, we do not initialize the ANTN with MPS (and hence the random bias), whereas for the shallow random circuit learning experiment, we initialize the ANTN with an MPS that comes from explicit truncation of the full wavefunction, and for the variational Monte Carlo experiment, we initialize the ANTN with DMRG optimized MPS.
**Optimization.** Throughout the experiments, we use the Adam optimizer with an initial learning rate of 0.01. For quantum state learning experiments, we train the ARNN and ANTN on the full basis for 2000 iterations, halving the learning rate at iterations 600, 1200, and 1800. When the full basis cannot be processed in one pass due to GPU memory constraints, we divide it into several batches and accumulate the gradient within each iteration. For variational Monte Carlo experiments, we train the ARNN and ANTN stochastically until the energy converges, halving the learning rate at iterations 100, 500, 1000, 1800, 2500, and 4000. In each experiment, we choose the maximum batch size that fits in GPU memory, using accumulation steps to keep the effective batch size around 10000 throughout training. A minimal sketch of this training loop is given below.
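In this sketch, `sample_fn` and `local_energy_fn` stand in for the exact ancestral sampler and the sparse local-energy evaluation; all names and defaults are illustrative assumptions, not the authors' code.

```python
import torch

def train_vmc(model, sample_fn, local_energy_fn, num_iters=5000,
              micro_batch=1000, effective_batch=10000):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[100, 500, 1000, 1800, 2500, 4000], gamma=0.5)
    accum = max(1, effective_batch // micro_batch)
    for _ in range(num_iters):
        opt.zero_grad()
        for _ in range(accum):
            x = sample_fn(model, micro_batch)        # exact samples
            e_loc = local_energy_fn(model, x).detach()
            log_amp = model.log_psi(x)               # complex log psi(x)
            loss = 2.0 * torch.mean(
                (e_loc - e_loc.mean()) * torch.conj(log_amp)).real
            (loss / accum).backward()                # accumulate gradients
        opt.step()
        sched.step()
```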
**Transfer Learning.** We additionally take advantage of transfer learning. For \(J_{2}=0.2\), \(0.5\), and \(0.8\), we begin from scratch with the \(4\times 4\) system. Then, an \(N\times N\) system is initialized from an \((N-2)\times(N-2)\) system by keeping all but the weights in the last layer. For other values of \(J_{2}\), the neural network is initialized from whichever of \(J_{2}=0.2\), \(0.5\), and \(0.8\) is closest to the current \(J_{2}\) at the same system size, again keeping all but the weights in the last layer. We find this allows the optimization to converge faster than always starting from scratch. The weight-copying step is sketched below.
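The checkpoint path and last-layer prefix in this sketch are hypothetical; the function keeps every parameter whose name and shape match, except those of the final layer.

```python
import torch

def transfer_all_but_last(new_model, ckpt_path, last_prefix='out_layer'):
    """Initialize `new_model` from a previous run, keeping all
    weights except those of the last layer."""
    old = torch.load(ckpt_path, map_location='cpu')
    target = new_model.state_dict()
    kept = {k: v for k, v in old.items()
            if not k.startswith(last_prefix)
            and k in target and v.shape == target[k].shape}
    new_model.load_state_dict(kept, strict=False)  # last layer stays fresh
```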
**Computational Resources.** The models are mainly trained using NVIDIA V100 GPUs with 32 GB memory and Intel Xeon Gold 6248 CPUs, with some models trained using NVIDIA A100 GPUs with 80 GB memory.
|
2303.01931 | Deep Neural Network Architecture Search for Accurate Visual Pose
Estimation aboard Nano-UAVs | Miniaturized autonomous unmanned aerial vehicles (UAVs) are an emerging and
trending topic. With their form factor as big as the palm of one hand, they can
reach spots otherwise inaccessible to bigger robots and safely operate in human
surroundings. The simple electronics aboard such robots (sub-100mW) make them
particularly cheap and attractive but pose significant challenges in enabling
onboard sophisticated intelligence. In this work, we leverage a novel neural
architecture search (NAS) technique to automatically identify several
Pareto-optimal convolutional neural networks (CNNs) for a visual pose
estimation task. Our work demonstrates how real-life and field-tested robotics
applications can concretely leverage NAS technologies to automatically and
efficiently optimize CNNs for the specific hardware constraints of small UAVs.
We deploy several NAS-optimized CNNs and run them in closed-loop aboard a 27-g
Crazyflie nano-UAV equipped with a parallel ultra-low power System-on-Chip. Our
results improve the State-of-the-Art by reducing the in-field control error of
32% while achieving a real-time onboard inference-rate of ~10Hz@10mW and
~50Hz@90mW. | Elia Cereda, Luca Crupi, Matteo Risso, Alessio Burrello, Luca Benini, Alessandro Giusti, Daniele Jahier Pagliari, Daniele Palossi | 2023-03-03T14:02:09Z | http://arxiv.org/abs/2303.01931v1 | # Deep Neural Network Architecture Search for Accurate Visual Pose Estimation aboard Nano-UAVs
###### Abstract
Miniaturized autonomous unmanned aerial vehicles (UAVs) are an emerging and trending topic. With their form factor as big as the palm of one hand, they can reach spots otherwise inaccessible to bigger robots and safely operate in human surroundings. The simple electronics aboard such robots (sub-100 mW) make them particularly cheap and attractive but pose significant challenges in enabling onboard sophisticated intelligence. In this work, we leverage a novel neural architecture search (NAS) technique to automatically identify several Pareto-optimal CNN architectures for a vision-based human pose estimation task. Our work demonstrates how real-life and field-tested robotics applications can concretely leverage NAS technologies to automatically and efficiently optimize CNNs for the specific hardware constraints of small UAVs. We deploy several NAS-optimized CNNs and run them in closed-loop aboard a 27-g Crazyflie nano-UAV equipped with a parallel ultra-low power System-on-Chip. Our results improve the State-of-the-Art by reducing the in-field control error by 32% while achieving a real-time onboard inference rate of \(\sim\)10 Hz@10 mW and \(\sim\)50 Hz@90 mW.
## Supplementary video material
In-field tests: [https://youtu.be/dVCScckvcg8](https://youtu.be/dVCScckvcg8).
## I Introduction
Nano-sized unmanned aerial vehicles (UAVs) are gaining significant momentum due to their reduced form factor (sub-10 cm) and weight (sub-40 g), which allows them to fulfill sensitive missions, such as in GPS-denied narrow spaces and in human proximity. Moreover, the simplified sensory, mechanical, and computational sub-systems available aboard these platforms make them particularly cheap and attractive compared to their bigger counterparts. However, making them fully autonomous, i.e., with no external/remote infrastructure or computation, is still challenged by their simplified electronics, e.g., sub-100 mW compute power, which is 1-2 orders of magnitude less than mobile phones' processors.
These constraints hinder the adoption of standard algorithmic pipelines, which often rely on memory/compute-intensive planning/localization/mapping algorithms and heavy pre-trained perception convolutional neural networks (CNNs). Therefore, running these methods aboard nano-UAVs is unfeasible. In this context, tiny CNNs increasingly define the State-of-the-Art (SoA) of autonomous nano-drones [1, 2]. First, they can be trained to operate on data from very limited sensors, such as ultra-low-power, tiny cameras with poor resolution/dynamic range. Second, CNNs have predictable and fixed computational and memory requirements at inference time. Third, the system designer can tune such requirements to fit the available resources on the robot by defining a suitable network architecture.
Roboticists integrating CNNs in larger robots can often overlook the third aspect, relying on standard SoA architectures such as ResNet50 [3] that easily fit in the available resources. In contrast, thoroughly defining a suitable custom CNN is crucial when developing nano-class robot systems. The choice of the neural network architecture determines whether the model can run on the robot and directly impacts two critical parameters that affect the robot's behavior: _i_) prediction performance and _ii_) real-time throughput.
For many years, the only approach to fulfill the requirements posed by this complex optimization scenario with contrasting objectives (i.e., obtaining accurate yet deployable CNNs) was to resort to manual, tedious, error-prone, and time-consuming iterative hyper-parameter tuning based on heuristics and rules-of-thumb. Nowadays, the _go-to_ approach to perform such optimization is based upon the neural architecture search (NAS) paradigm [4, 5]. NAS tools enable an automatic exploration over an arbitrarily large search space of different network topologies and hyper-parameter settings. Furthermore, many novel NAS approaches can directly optimize complex cost functions by combining different objectives [5, 6, 7], such as regression performance and the network's computational complexity.
In this work, we exploit a novel computationally efficient NAS technique [7], able to generate Pareto-optimal CNN architectures in the accuracy vs. model size space, to optimize a vision-based human pose estimation task. First, we contribute by enhancing the functionalities of an existing NAS engine, which is needed to explore two different seed CNNs: PULP-Frontnet [2] and MobileNetv1 [8]. Then, we thoroughly analyze and deploy multiple CNNs on our target nano-drone robotic platform. Ultimately, the delivered CNNs improve on the SoA baseline [2] in at least one metric among size (up to \(5.6\times\) smaller), speed (up to \(1.5\times\) faster), and accuracy (up to 32% lower horizontal displacement error). The improvement in accuracy is finally confirmed by a challenging in-field testing setup in a _never-seen-before_ environment.
## II Related work
**Neural architecture search.** NAS tools assist designers in the design phase of DNNs by automatically exploring a large space of architectures defined as combinations of different layers and/or hyper-parameters. On constrained platforms, these tools usually minimize an objective function that depends both on task accuracy and non-functional cost metrics (e.g., memory footprint, latency, or energy consumption). Early NAS algorithms were based on _evolutionary algorithms_ (EA) [4] and reinforcement learning (RL) [5]. These methods can explore arbitrary search spaces, and optimize any cost function by iteratively sampling a network, training it to convergence to evaluate performance, and then using this information to drive the following sampling.
However, their extreme flexibility implies poor scalability due to their computational requirements, i.e., thousands of GPU hours even for simple tasks. Differentiable NAS (DNAS) [9, 6] has been proposed to mitigate this issue. Early DNASes exploit _supernets_, i.e., networks whose layers' outputs are obtained as a weighted sum of multiple alternative sub-layers (e.g., different types of convolution) applied to the same input [9]. The weights assigned to each alternative are optimized _during training_, based on an accuracy/complexity-dependent loss, and the final network is obtained by selecting, for each layer, the alternative associated with the largest weight at the end of the search. Compared to EA/RL approaches, DNAS trades some flexibility in the definition of the optimization target, which must be differentiable, in exchange for a search process that requires a _single training_. However, the convergence of a supernet is _i_) non-trivial and _ii_) still expensive in terms of GPU memory, since multiple alternatives are instantiated for each layer.
_Mask-based DNAS_ is a further step toward lightweight architecture search. It replaces the supernet with a single-path DNN, usually referred to as the _seed_[10, 11, 7], and its search space is composed of sub-architectures obtained from the seed _by subtraction_ (e.g., eliminating some channels in each convolution). In practice, these sub-networks are simulated at training time using _trainable masks_, which prune part of the seed. As in supernet DNASes, the masks are optimized during training, but since the seed is much smaller than a supernet, the time and memory overhead compared to regular DNN training is minimal. On the one hand, mask-based DNASes can only produce networks derived from the seed; on the other, this allows for a much more fine-grained search space exploration. For instance, they can easily generate convolutions with an arbitrary number of channels (e.g., 17, 25, or 31 for a 32-channel seed), which would be prohibitive with a supernet approach [7].
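To make the mask-based mechanism concrete, here is a minimal sketch (not the actual tool of [7]) of a convolution whose output channels are gated by trainable masks, with a differentiable size proxy that can be added to the task loss:

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    """Seed convolution pruned 'by subtraction' via trainable
    per-channel masks (straight-through estimator)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.alpha = nn.Parameter(torch.zeros(out_ch))  # mask logits

    def forward(self, x):
        soft = torch.sigmoid(self.alpha)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()  # hard forward, soft backward
        return self.conv(x) * mask.view(1, -1, 1, 1)

    def size_cost(self):
        # Differentiable proxy for the number of kept channels.
        return torch.sigmoid(self.alpha).sum()
```

During the search, the total objective would then take a form such as `task_loss + lam * sum(layer.size_cost() for layer in masked_layers)`, trading prediction performance against model size.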
**Nano-drones.** Deploying CNN models for autonomous navigation on a nano-drone introduces severe sensorial, computational, and memory constraints due to limited form factor, payload, and energy availability. Solutions built around commercial off-the-shelf (COTS) microcontroller units (MCUs) [12, 13] can afford only minimal neural architectures to achieve real-time performance, such as [13] with 27 kMAC (multiply-accumulate operations) per frame at 100 Hz. These approaches are thus suitable only for low-dimensional input signals, not for processing camera images. More advanced visual approaches [2, 14] have been enabled by careful hardware-software co-design, exploiting general-purpose ultra-low-power multi-core System-on-Chips (SoCs) [15], integrated software deployment pipelines [16, 17], and manually-tuned CNN architectures. These technological breakthroughs enabled PULP-Frontnet [2], a model with 14.7 MMAC (three orders of magnitude larger than [13]), to achieve an inference rate of 48 Hz while consuming 96 mW, aboard a COTS Crazyflie 2.1 nano-drone.
An open research question still revolves around CNN's complexity/memory reduction, as answering it would pave the way toward _i_) better energy utilization on battery-limited devices, and _ii_) enabling multi-tasking systems, yet not reached on nano-drones. Recent works [18, 19] have aimed at reducing CNNs' memory footprint with minimal degradation in accuracy, achieving up to 27\(\times\) reductions in computation with just 4% lower accuracy [19], but still at the cost of extensive manual fine-tuning of the architectures. In this work, we leverage NAS techniques to efficiently and automatically cope with this problem in a robotic domain.
**Human pose estimation.** We consider the robotic problem of monocular relative pose estimation of a human subject from an autonomous nano-drone. Despite the remarkable accuracy of SoA computer vision approaches [20, 21, 22], they are still far from the reach of nano-drones due to the required amount of computational resources [22] (\(\sim\)10s GMAC). To date, the PULP-Frontnet CNN [2] represents one of the few examples of a human pose estimation task fully executed aboard a Crazyflie nano-drone in real-time. While complex CNNs manage to estimate entire skeletons [21] or even dense 3D mesh representations [22] of the person, PULP-Frontnet minimizes its prediction to a 3D point in space (\(x,y,z\)) and a rotation angle w.r.t. the gravity z-axis \((\phi)\). MobileNetv1 [8] is another SoA CNN vastly used for vision-based classification tasks, which has also been proven accurate (e.g., 68.2% top-1 accuracy on ImageNet) when streamlined down to the same power envelope allotted on our nano-drone [23, 17]. For these reasons, we choose PULP-Frontnet and MobileNetv1 as seed models for our NAS, and we will refer to the PULP-Frontnet model as the SoA baseline (F\({}_{SoA}\)) for the in-field comparison.
## III Methodology
### _Seed deep neural networks_
In this work, we use a mask-based DNAS tool to optimize three seed models: one based on the shallow PULP-Frontnet architecture, and two based on a deeper MobileNetv1. PULP-Frontnet is composed of six convolutional layers, requiring 304 k parameters and up to 14.7 MMAC. The design of the network was first introduced in [2] specifically for deployment on a nano-drone. On the other hand, MobileNetv1 [8] is a deeper network formed by 27 different convolutional layers, which differs from PULP-Frontnet mainly in its use of separable depthwise and pointwise convolutional layers instead of traditional ones. Our two MobileNetv1 seed networks vary by width multiplier, a hyperparameter that controls the size of the feature maps throughout the model: we consider width multipliers of 1.0 (M\({}^{1.0}\)) and 0.25 (M\({}^{0.25}\)), corresponding to respectively 3.21 M and 204 k parameters.
### _Network architecture search_
We optimize the network architecture using a mask-based DNAS tool, namely _pruning in time_ (PIT) [7], whose goal is to search for smaller, more lightweight networks that retain almost the same accuracy as the seed models. PIT shows low memory and execution-time overheads (only \(\sim\)30% of the training time), several orders of magnitude lower than other NAS engines [4, 5, 9]. Additionally, the availability of good reference architectures for our task that can be used as _seeds_ fits well with a mask-based approach, which efficiently explores the different sub-networks contained within each seed.
The NAS we propose performs a fine-grained search of the number of output channels in all convolutional layers of a 2D CNN. The approach is derived from the channel search scheme originally described in [7] for 1D Temporal Convolutional Networks. Figure 1 summarizes our NAS functionality: for each convolutional layer in the seed, its weight tensor \(W\), with \(C_{out}\) output channels, is isolated, and the corresponding searchable tensor \(W_{\Theta}\) is built as:
\[W_{\Theta}=W\odot\mathcal{H}(\theta) \tag{1}\]
where \(\odot\) is the Hadamard product, \(\theta\) is a trainable mask vector of \(C_{out}\) elements, in which each component \(\theta_{i}\) represents the mask for a specific channel, and \(\mathcal{H}\) is the Heaviside step function used to binarize \(\theta\). Depending on the binarized value of \(\mathcal{H}(\theta_{i})\), the corresponding \(i\)-th output channel of \(W\) may be kept alive (\(\mathcal{H}(\theta_{i})=1\)) or removed (\(\mathcal{H}(\theta_{i})=0\)). Therefore, PIT finds models which only vary in the number of channels w.r.t. the seed networks. Compared to its original version proposed in [7], we also added support for _jointly exploring_ the channels of pointwise and depthwise layers, the two main layers in depth-separable convolutional blocks and basic components of MobileNet architectures. To do so, we used shared masks to select the number of channels of these two layers, since the output channels of a depthwise convolution are completely determined by the preceding pointwise layer.
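To make the masking scheme concrete, the following PyTorch sketch (our illustration, not the authors' PIT code) implements Eq. (1) for a single convolutional layer: each output channel is gated by the binarized mask \(\mathcal{H}(\theta_{i})\), with a straight-through estimator in the backward pass so that \(\theta\) remains trainable; the bias is left unmasked for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    # Heaviside step H(theta) in the forward pass; straight-through
    # (identity) gradient in the backward pass so theta stays trainable.
    @staticmethod
    def forward(ctx, theta):
        return (theta >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

class MaskedConv2d(nn.Module):
    """A Conv2d whose output channels are gated by trainable masks, Eq. (1)."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        self.theta = nn.Parameter(torch.ones(out_ch))  # one mask per channel

    def forward(self, x):
        mask = BinarizeSTE.apply(self.theta)            # values in {0, 1}
        w = self.conv.weight * mask.view(-1, 1, 1, 1)   # W_Theta = W ⊙ H(theta)
        return F.conv2d(x, w, self.conv.bias, padding=self.conv.padding)
```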
The obtained network is inserted in a normal training loop where \(W\) and \(\theta\) are trained in conjunction to solve the following optimization problem:
\[\min_{W,\theta}\mathcal{L}(W;\theta)+\lambda\mathcal{R}(\theta) \tag{2}\]
where \(\mathcal{L}\) is the task-specific loss function (i.e., the same loss function as the seed CNNs) and \(\mathcal{R}\) is a regularization loss term representing a cost metric to be optimized (e.g., n. of parameters, n. of operations, etc.). \(\lambda\) is the so-called _regularization strength_, a hand-tuned scalar term used to balance the two losses. Once loss terms of Eq.2 are defined, \(\lambda\) represents the main knob upon which the designer can act to drive the search towards more accurate or less costly networks. For our use case, we considered \(\lambda\in[5\cdot 10^{-11}:5\cdot 10^{-5}]\). As regularization loss \(\mathcal{R}\), we used a differentiable estimate of the number of parameters of the network as a function of the NAS mask values. In this way, we can bias the exploration phase towards architectures that are both small and accurate.
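As an illustration of the optimization target of Eq. (2), the sketch below computes a differentiable size regularizer \(\mathcal{R}\) from the mask values of the layers defined above; the sigmoid surrogate for \(\mathcal{H}(\theta)\) and the simplified parameter-count formula are our own assumptions, not PIT's exact estimator.

```python
import torch

def size_regularizer(masked_layers):
    """Differentiable estimate of the parameter count as a function of the
    mask values: a conv layer with kernel size k contributes roughly
    alive_in * alive_out * k * k parameters (bias terms omitted)."""
    reg = torch.zeros(())
    alive_in = None
    for layer in masked_layers:                        # e.g., MaskedConv2d's
        alive_out = torch.sigmoid(layer.theta).sum()   # soft channel count
        k = layer.conv.kernel_size[0]
        c_in = layer.conv.in_channels if alive_in is None else alive_in
        reg = reg + c_in * alive_out * k * k
        alive_in = alive_out
    return reg

# Eq. (2): loss = task_loss + lam * size_regularizer(layers),
# with lam swept in [5e-11, 5e-5] to trade accuracy against model size.
```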
### _System design_
**Robotic platform.** Our target robotic platform is the Bitcraze Crazyflie 2.1\({}^{1}\), a 27 g nano-quadrotor. This drone features a main STM32 MCU in charge of basic functionalities, such as state estimation and control loops. In our in-field deployment, it is extended with a 5 g commercial AI-deck companion board [1]. The AI-deck features an additional MCU: the GreenWaves Technologies GAP8, which embodies the parallel ultra-low power paradigm [15] through a RISC-V-based multi-core SoC. These two processors communicate via a bidirectional UART interface. The GAP8 is designed with two power domains: a single-core _fabric controller_ (FC) that orchestrates the interaction with external memories/sensors and offloads computationally intensive kernels on a second 8-core _cluster_ (CL) domain. The SoC's memory hierarchy relies on 64 kB of low-latency L1 memory shared among all cluster cores and 512 kB of L2 memory within the FC domain. The GAP8 also features two DMA engines to efficiently automate data transfers from/to all memories and external peripherals, such as 8 MB off-chip DRAM and a QVGA monochrome camera, both available on the AI-deck. However, it provides neither data caches nor hardware floating-point units, dictating explicit data management and the adoption of integer-quantized arithmetic, respectively.
Footnote 1: [https://www.bitcraze.io/products/crazyflie-2-1/](https://www.bitcraze.io/products/crazyflie-2-1/)
**Deployment tools.** To fulfill the platform's requirements, we exploit two tools to _i_) train integer-only neural networks and _ii_) automatically and optimally generate C code for the target network. The first tool, the open-source NEMO library [16], based on the PyTorch framework, is used to train the network in three sequential steps. First, NEMO
Fig. 1: Left: the proposed masking scheme. Right: three possible outcomes of the channel search process on a four-filters layer. \(\theta_{0}=\theta_{1}=1\) and \(\theta_{2}=\theta_{3}=0\) means that the first two channels are kept alive, while the second two are removed. Only the second and last channels are kept alive in the second example, and only the first channel in the last one.
trains a _full-precision_ floating-point network to minimize the sum of the L1 loss for the pose estimation vector (\(x,y,z,\phi\)). We use the SGD optimizer with a learning rate of 0.001 over 100 epochs, selecting at the end the model which achieved the lowest validation loss. Afterward, we convert this model into a _fake-quantized_ network. In this stage, weights and activations are still represented as float32 values, but their magnitude can assume only a discrete (256 for 8-bit quantization) set of values. Given the support offered by the optimized kernels for GAP8, we use linear per-layer quantization and PACT [24] in this fine-tuning step. We choose to use 8-bit quantization since it is natively supported by the SIMD operations in the RISC-V ISA extensions, which allow the execution of four int8 MACs within a single cycle. To maximize the accuracy of the network, we initialize the fake-quantized network with the floating-point weights, and we perform 100 additional training epochs with the same parameters. The final step is the creation of the _integer deployable_ network. Compared to the fake-quantized network, all the tensors are represented by integer numbers and a float scale factor. Tensors \(\mathbf{t}\) are approximated as
\[\mathbf{t}\approx\varepsilon_{\mathbf{t}}\cdot\mathbf{t}^{*}\, \tag{3}\]
where \(\mathbf{t}\) is the fake-quantized tensor, \(\mathbf{t}^{*}\) is the integer-only tensor, and \(\varepsilon_{\mathbf{t}}\) is the scale floating point factor. Therefore, the network can run entirely in the integer domain by multiplying and accumulating only integer values.
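The following minimal sketch illustrates the integer approximation of Eq. (3) with symmetric per-tensor 8-bit linear quantization; it is a simplified illustration of the idea, not NEMO's actual per-layer PACT pipeline.

```python
import torch

def quantize_int8(t: torch.Tensor):
    """Approximate t as eps_t * t_int, per Eq. (3)."""
    eps = t.abs().max() / 127.0                          # float scale factor
    t_int = torch.clamp(torch.round(t / eps), -128, 127).to(torch.int8)
    return t_int, eps

def dequantize(t_int: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Recover the floating-point approximation eps_t * t*."""
    return eps * t_int.float()
```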
The CNN deployment is based on the open-source PULP-NN library [25], which encompasses optimized 8-bit fixed-point arithmetic kernels. In particular, it exploits the eight general-purpose cores of the Cluster of the GAP8 SoC and the ISA-specific optimizations to maximize kernels' performance. Given a theoretical maximum of 32 MAC/cycle (4 MACs can be executed thanks to the int8 SIMD for each of the 8 cores), PULP-NN is able to reach a peak of 15.6 MAC/cycle for square-sized images. Note that the library peak is obtained by perfectly re-using data in the register file, reducing the number of necessary loads and stores per MAC. If no data were stored in the register file for re-use, two loads would be required for the MAC inputs and one store to save the result in memory, reducing the theoretical peak to 8 MAC/cycle. On the other hand, the PULP-NN kernels assume data to be stored in the shared 64 kB L1 memory, relegating their applicability to small layers.
To cope with this constraint, we employ a second open-source tool, DORY [17]. This tool automatically produces template-based C code, which wraps the PULP-NN kernels, managing the different levels of memories of GAP8 (i.e., L1, L2, and the external RAM) and orchestrating the tensors' movements. In particular, DORY exploits a tiling approach to separate layers into small nodes whose tensors can fit the L1 memory of the system. Thanks to this, the kernels can be directly executed on these nodes. DORY then produces the C routines, which i) execute the kernels of the small nodes with tensors stored in L1, and ii) double-buffer the movements of tensors from L2 to L1, to always have data ready for kernel execution. Notice that since the DMA is not blocking, the calls to the kernels are always overlapped with the asynchronous DMA calls.
**Closed-loop control.** We deploy our models as part of the same closed-loop control system from [2], which maintains the drone at a desired 3D pose in front of the person. Four main components are involved: _i)_ the CNN model outputs a point-in-time pose estimate from a single image; _ii)_ a Kalman filter produces smooth, temporally-consistent sequences of poses; _iii)_ a velocity controller computes the drone's desired pose from the subject's current pose and generates velocity setpoints to bring the drone to the desired pose; and _iv)_ the Crazyflie's low-level control is responsible for motor actuation and stabilization. We adopt a Kalman filter decoupled between the model's four outputs, by assuming diagonal process and observation noise covariance matrices. The velocity controller is also decoupled between linear velocity control, whose goal is to reach the specified target position, and angular velocity control, which keeps the subject centered in the image frame.
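Since the noise covariance matrices are assumed diagonal, each of the four outputs (\(x,y,z,\phi\)) can be tracked by an independent scalar filter. The following is a minimal sketch under a constant-state motion model; the noise variances `q` and `r` are illustrative tuning values, not the ones used on the drone.

```python
class ScalarKalman:
    """One-dimensional Kalman filter for a single CNN output."""
    def __init__(self, q=1e-3, r=1e-1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process / observation noise variances

    def update(self, z):
        self.p += self.q                   # predict (constant-state model)
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= 1.0 - k
        return self.x

# One decoupled filter per output of the pose-estimation CNN.
filters = {name: ScalarKalman() for name in ("x", "y", "z", "phi")}
```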
## IV Results
### _NAS Pareto analysis_
In Figure 2, we show and analyze the architectures found by our NAS algorithm. We compare on one axis the inference latency (number of clock cycles), whereas, on the other, we report the mean absolute error (MAE), expressed as the sum of L1 errors on (\(x,y,z,\phi\)) between the networks' predicted
Fig. 2: Pareto curves of the networks extracted from the NAS in the clock cycles vs. MAE space (lower is better).
values and the ground truth. The _trivial predictor_, i.e., a network that always predicts each output as its mean value in the test set, represents a lower bound on the models' MAE. From our three seed network architectures, the NAS search discovers eight new models, most of which lie on the global Pareto front in the space of MAE vs. the number of cycles. The discovered architectures range from 1.27M to 3.69M execution cycles, with corresponding MAE values from 1.31 to 0.84. In detail, PULP-Frontnet models occupy the leftmost section of the Pareto curve, being very lightweight but less accurate. The middle is populated by models derived from the MobileNet 0.25\(\times\) seed. Finally, the most accurate architectures are derived from MobileNet 1.0\(\times\). This seed architecture is too big to fit in the GAP8's L2 memory. However, our NAS algorithm can shrink it enough to yield the absolute top-performing models, which increase latency by only \(15\%\) compared to \(F_{SoA}\) [2].
From the global Pareto front, we select four models to deploy and further analyze. The first is \(F_{SoA}\), which corresponds to the original PULP-Frontnet, the current state-of-the-art, and our baseline. The \(F_{small}\) architecture is the fastest model, still achieving a MAE roughly equivalent to \(F_{SoA}\). \(M_{small}^{1.0}\) is the most accurate architecture found by the NAS, but also the most expensive in terms of latency. \(M_{small}^{0.25}\) represents the most balanced trade-off between the two metrics, with both better than or equivalent to \(F_{SoA}\). In the following sections, we further analyze the selected architectures' performance on the test set and their behavior in the field in a closed-loop system.
### _Regression performance_
In Table I and Figure 3, we break down the models' regression performance in the four output variables, \((x,y,z,\phi)\). For each output, in addition to the MAE values, we report the coefficient of determination \(R^{2}\), a standard adimensional metric that represents the fraction of variance in the target variable explained by the model\({}^{2}\). Compared to the MAE, the \(R^{2}\) score quantifies the quality of a regressor independently of the target variable's variance and is, therefore, better suited for comparing regression performance between different variables. Performance on all outputs shows trends consistent with those on the aggregated loss, confirming \(M_{small}^{1.0}\) as the top performer on all outputs except for \(x\), where \(M_{small}^{0.25}\) performs slightly better. In Figure 3, we see that all models can predict \(x\) and \(y\) best while \(\phi\) is the most difficult to estimate, matching the findings of prior work.
Footnote 2: \(R^{2}=1-\sum_{i}(y_{i}-\hat{y}_{i})^{2}/\sum_{i}(y_{i}-\bar{y})^{2}\), with \(y_{i}\) the ground-truth output and \(\hat{y}_{i}\) the model prediction for each test sample \(i\), and \(\bar{y}\) the mean of the ground-truth outputs. An \(R^{2}=1.0\) corresponds to a perfect regressor, while the trivial regressor achieves \(R^{2}=0.0\).
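For clarity, the footnote's definition corresponds to the following small helper; a perfect regressor scores 1.0 and the trivial mean predictor scores 0.0.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination as defined in footnote 2."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```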
### _Onboard performance assessment_
We assess the onboard performance of the CNNs selected in the previous Section IV-A. To nail down the computational, power, and memory requirements, we profiled our models, running them on the GAP8 SoC and using a RocketLogger data logger [26] (64 ksps). For these experiments, we test three SoC operating points: minimum power consumption with FC@25 MHz CL@25 MHz VDD@1 V, most energy-efficient with FC@25 MHz CL@75 MHz VDD@1 V, and maximum performance with FC@250 MHz CL@170 MHz VDD@1.2 V, as shown for the PULP-Frontnet baseline [2]. Table II summarizes the analysis on the four CNNs, showing that NAS models significantly reduce parameters (up to -85% w.r.t. \(F_{SoA}\)), MACs, clock cycles, and memory (all three around -50%). Power and throughput figures for the operative point used in the in-field experiments (max. performance) are also reported. Figure 4 shows the relation between power and throughput across the three operative points. We see that the two smaller models \(M_{small}^{0.25}\) and \(F_{small}\) improve upon the baseline \(F_{SoA}\) in both regards, resulting in higher energy efficiency. Figure 5 breaks down the power usage of the entire system. Crazyflie electronics plus the AI-deck cost 4.8% of the total budget, almost saturating the power that can be allocated to sensing and computing [27]. As the GAP8 accounts for 24% of that, there is a clear benefit in optimizing its workload.
### _In-field experimental evaluation_
We further validate the proposed networks in a closed-loop, in-field experiment. We ask a human subject to follow a predefined path, while the drone is tasked to stay in front of them at eye level, at a distance of \(\Delta=1.3\,\mathrm{m}\). For consistency, we adopt the same test setup proposed by [2], with a 50 s path composed of 8 phases of increasing difficulty for the model (walking straight along different directions, walking along a curve, and rotating in place). To ensure repeatability between different runs of the experiment, we ask the subject to completely ignore the drone's behavior and instead synchronize each step to the beats of a metronome.
Fig. 3: \(R^{2}\) score of the four deployed models (higher is better).
The experiment is performed with both a subject and an environment outside of our training set, stressing the models' generalization capabilities. We perform four experiment runs for each of the four models, plus one baseline run in which the drone is controlled using perfect information about the subject's pose from a mocap system, a total of 17 test flights.
Table III summarizes our results in this experiment. We provide quantitative measures of three aspects of the system's performance: overall path completion, inference accuracy, and control accuracy. We measure path completion for each model in terms of total flight time over the four runs and mean percentage of path completed, interrupting a run as soon as the person completely leaves the camera field of view. In the challenging never-seen-before environment used for this experiment, the two PULP-Frontnet models struggle to complete the path (especially \(F_{small}\), which reaches only 35% on average), while the two MobileNet models consistently maintain the tracking until the end. We mark metrics corresponding to incomplete runs with an asterisk because they cannot be directly compared with other runs.
Inference accuracy then evaluates the models in isolation, measuring their ability to correctly estimate the subject's position w.r.t. the drone. As in the reference work, here we do not consider the \(z\) component because the target height is approximately constant in our task. We report the MAE, to allow direct comparison with offline performance on the test set reported in Section IV-B. As expected, absolute performance decreases significantly due to the different environments. At the same time, we see that the best in-field regression performance comes from \(M_{small}^{0.25}\), instead of \(M_{small}^{1.0}\) as on the test set. One explanation is the lower number of parameters in the former model, which makes it less prone to overfitting and thus better able to generalize.
Finally, control accuracy evaluates the whole system's precision in tracking the subject as it moves along the path. We compare the drone's actual pose against the desired pose, measuring two errors: the absolute position error \(e_{xy}\) (i.e., the distance between the two poses in the horizontal plane) and the absolute angular error \(e_{\theta}\) (i.e., the difference in orientation). \(M_{small}^{0.25}\) is the best on both metrics, confirming itself as the best-performing in-field. In Figure 6, \(M_{small}^{0.25}\) is the model with the lowest variance in absolute position error, further explaining its better performance. In addition, visually inspecting the drone's behavior shows that the \(M_{small}^{0.25}\) model is noticeably more accurate than the baseline \(F_{SoA}\). To complement our results, we provide a supplementary video of the four model's behavior in the in-field experiments.
## V Conclusion
This system paper presents a practical use case of NAS technologies applied to a challenging robotic visual perception task: human pose estimation on nano-drones. Starting from two seed CNNs (i.e., PULP-Frontnet and MobileNetv1), we select four Pareto-optimal models to be deployed aboard a resource-constrained (i.e., sub-100 mW) nano-quadrotor. We assess the capabilities of the CNNs with a thorough analysis: from their regression performance on a disjoint test set, to an onboard performance evaluation (power consumption and throughput), down to an in-field closed-loop test in a _never-seen-before_ environment. Our best model improves the SoA by reducing the in-field control error by 32% with a real-time inference rate of \(\sim\)50 Hz@90 mW.
Fig. 4: Throughput vs. power consumption for the four models at various operative points.
Fig. 5: Nano-drone’s power breakdown, running the M\({}_{small}^{0.25}\) model.
Fig. 6: In-field control errors distribution (lower is better). Boxplot whiskers mark the \(5^{th}\) and \(95^{th}\) percentile of data. |
2305.15374 | ASPER: Answer Set Programming Enhanced Neural Network Models for Joint
Entity-Relation Extraction | A plethora of approaches have been proposed for joint entity-relation (ER)
extraction. Most of these methods largely depend on a large amount of manually
annotated training data. However, manual data annotation is time consuming,
labor intensive, and error prone. Human beings learn using both data (through
induction) and knowledge (through deduction). Answer Set Programming (ASP) has
been a widely utilized approach for knowledge representation and reasoning that
is elaboration tolerant and adept at reasoning with incomplete information.
This paper proposes a new approach, ASP-enhanced Entity-Relation extraction
(ASPER), to jointly recognize entities and relations by learning from both data
and domain knowledge. In particular, ASPER takes advantage of the factual
knowledge (represented as facts in ASP) and derived knowledge (represented as
rules in ASP) in the learning process of neural network models. We have
conducted experiments on two real datasets and compare our method with three
baselines. The results show that our ASPER model consistently outperforms the
baselines. | Trung Hoang Le, Huiping Cao, Tran Cao Son | 2023-05-24T17:32:58Z | http://arxiv.org/abs/2305.15374v1 | [
###### Abstract
A plethora of approaches have been proposed for joint entity-relation (ER) extraction. Most of these methods largely depend on a large amount of manually annotated training data. However, manual data annotation is time consuming, labor intensive, and error prone. Human beings learn using both data (through induction) and knowledge (through deduction). Answer Set Programming (ASP) has been a widely utilized approach for knowledge representation and reasoning that is elaboration tolerant and adept at reasoning with incomplete information. This paper proposes a new approach, \(\underline{\text{ASP}}\)-enhanced \(\mathsf{Entity}\)-\(\mathsf{Relation}\) extraction (ASPER), to jointly recognize entities and relations by learning from both data and domain knowledge. In particular, ASPER takes advantage of the factual knowledge (represented as facts in ASP) and derived knowledge (represented as rules in ASP) in the learning process of neural network models. We have conducted experiments on two real datasets and compare our method with three baselines. The results show that our ASPER model consistently outperforms the baselines.
(Received 8 April 2023; accepted 11 May 2023)
Trung Hoang Le, Huiping Cao, and Tran Cao Son
Department of Computer Science, New Mexico State University, Las Cruces, New Mexico, USA
(e-mail: {trungle,hcao}@nmsu.edu) and (e-mail: [email protected])
Keywords: Joint Entity-Relation Extraction, Semi-supervised Learning, Answer Set Programming, Knowledge-enhanced Models.
## 1 Introduction
Entity-relation (ER) extraction aims to identify named entities and relations from unstructured text. For joint ER extraction, deep neural network (NN) models have achieved many successes (e.g., the papers by Gupta et al. (2016); Eberts and Ulges (2020); Wang and Lu (2020)). Despite such success, supervised NN methods depend on a large amount of well-labeled training data. However, labeling free-text data with entities/relations is time-consuming, labor intensive, and error prone due to noise, as shown by Chen et al. (2022).
Semi-supervised learning (SSL), introduced by Chapelle et al. (2006) and Ouali et al. (2020), has been utilized to improve predictions by using a small amount of labeled data and a much larger amount of unlabeled data. Among the many SSL approaches, the proxy-label methods described in the papers by Ruder and Plank (2018) and Ouali et al. (2020) are one commonly utilized strategy. These approaches create different strategies to utilize the pseudo labels that are predicted from the unlabeled data. However, most of them do not make use of domain knowledge as discussed by Hu et al. (2016), which is abundant and very useful, in symbolic forms in the learning process as shown in the survey by Ouali et al. (2020).
Recent years have witnessed the increasing interest in utilizing general domain knowledge (e.g., represented as symbolic knowledge) to improve machine learning models to alleviate the issue caused by the lack of large amounts of labeled data. Such efforts include neural symbolic modeling and abductive learning.
We use the term neural symbolic models to refer to works that encode knowledge and rules as a holistic component of neural network models (e.g., through the design of a new loss function). Such models have achieved great success (e.g., see the papers by Hu et al. (2016); d'Avila Garcez et al. (2019)). However, tightly modeling symbolic knowledge as a part of NN models suffers from poor elaboration tolerance: the model is hard to adapt to changes in the logical representation of facts (e.g., loss functions need to be modified when new rules are added).
Zhou (2019); Dai et al. (2019); and Cai et al. (2021) introduced abductive learning that combines machine learning models (which are mainly data driven) and logic programming (which encodes background knowledge and reasons about it). In abductive learning, an initial machine learning model \(M\) is trained from the labeled data and used to get predicted labels from the unlabeled data (denoted as pseudo labels). Pseudo labels may be wrong or inconsistent when the model \(M\) is not effective. Example 3 demonstrates different issues of pseudo labels. The pseudo labels are revised to get a new set of consistent pseudo labels by minimizing the inconsistency between the abduced labels and the knowledge. The revised set of pseudo labels is then used to retrain a machine learning model. Most existing abductive learning approaches use first-order logic (FOL) to encode knowledge.
In this work, we propose to design an SSL approach by encoding domain knowledge and rules using Answer Set Programming (ASP) for the joint recognition of entities and relations. The purpose of using the domain knowledge is to generate consistent pseudo labels (consistent w.r.t. the knowledge base) and derive more pseudo labels that cannot be predicted using pure data-driven models. ASP instead of FOL is used because of the multiple advantages ASP provides. ASP is a simple, rule-based, and declarative language that possesses several theoretical building-block results which support the development of provably correct programs. In addition, ASP is non-monotonic, which is important for dealing with commonsense knowledge, and supports an elaboration tolerant development of programs. For non-experts in logic, ASP rules are easier to use and to understand than FOL formulae.
The main contributions of this work are as follows.
* A new framework, ASP-enhanced Entity-Relation extraction (ASPER), is introduced to make use of sophisticated domain knowledge in neural network models. ASP-encoded knowledge and rules are intended to generate higher-quality pseudo labels, which are further used to improve the model. As far as we know, this is the **first work** that incorporates logic programming into deep learning models for joint ER extraction.
* ASPER introduces novel commonsense rules to select pseudo labels that may improve the model performance with higher probabilities.
* The experimental evaluation on two real datasets shows that the proposed ASPER consistently improves the other baselines.
In what follows, Section 2 reviews the related work, Section 3 formally defines the problem and related notations, Section 4 explains our new framework, Section 5 shows our experimental results, and Section 6 concludes the work.
## 2 Related Work
SSL methods are designed to make use of a small amount of labeled data and a much larger amount of unlabeled data in the learning process. Traditional SSL methods including self-training and tri-training have been revisited recently by Ruder and Plank (2018). Both self-training and tri-training utilize an iterative process to improve the initial model(s) trained on the small amount of labeled data by iteratively adding pseudo labels to the training set. In each iteration, self-training picks pseudo labels that have higher prediction probabilities, and tri-training picks the pseudo labels that are agreed upon by at least two models. A surprising finding of this revisit is that the classic tri-training strategy, introduced in the paper by Zhou and Li (2005), can with minor changes outperform many recent NN models.
Many recent SSL approaches have been proposed to conduct ER extraction. Hu et al. (2021) proposes a self-training based SSL method for relation extraction. In each iteration, it adopts meta-learning to generate higher quality pseudo labels. Curriculum labeling, introduced in the paper by Cascante-Bonilla et al. (2021), borrows the idea of curriculum learning discussed in the paper by Bengio et al. (2009), which uses easy samples first and proceeds to use hard samples, and proposes a self-paced curriculum (curriculum labeling) in the pseudo-labeling process. However, this model does not use any symbolic knowledge. Most of these approaches do not make use of domain knowledge in symbolic forms.
Similar to SSL, knowledge-enhanced neural network models also alleviate the issue of insufficient labeled data. Hu et al. (2016) is the first work to integrate logic rules with NN models. It proposes an iterative distillation method to enhance several NNs with declarative first-order logic (FOL) rules, which are encoded using soft logic. NeurASP, introduced in the paper by Yang et al. (2020), also employs ASP and neural networks. Its knowledge base is predefined and all the atoms are utilized in the learning process. In contrast, our knowledge base is used to generate answer sets with consistent pseudo labels, and some answer sets (when multiple are available) may not be utilized in the learning process.
Our work also shares similarity with the framework of abductive learning such as those described in the papers Zhou (2019); Huang et al. (2020) in that symbolic knowledge is used to improve the quality of pseudo labels. However, our work is different from abductive learning in several aspects. Abductive learning (e.g., the paper by Huang et al. (2020)) derives abduced pseudo labels through an optimization process. Once these labels are revised, they are used to retrain a model. When they retrain a model, all the pseudo labels are utilized. Our approach utilizes a subset of the pseudo labels, which have higher probability to be true, to retrain a model. In addition, the pseudo labels are iteratively refined using ASP, which provides powerful reasoning capability.
## 3 Problem Definition and Terminology
This section defines our research problem and related terminology.
**Problem definition**: Given a dataset \(D_{L}\) (training data) with labeled entities and relations, a dataset \(D_{UL}\) without any annotated labels, where \(|D_{L}|\ll|D_{UL}|\), and a knowledge base\({}^{1}\) \(KB\) which encodes the commonsense facts and reasoning rules, our problem is to learn an NN
model \(M\gets f(D_{L},D_{UL},KB)\) to capture the hidden patterns about entities and relations in the datasets \(D_{L}\cup D_{UL}\).
**Definition 1** (Pseudo labels).: _Given a model \(M\) and a dataset \(D_{UL}\) without any annotated labels, the predicted labels from \(D_{UL}\) using \(M\) are called pseudo labels._
The possible entity and relation labels that occur in the training data \(D_{L}\) are represented as \(ent\) and \(rel\). Given any token or word in a sentence, we use \(b\) and \(e\) to represent the beginning and ending locations of that token in the sentence. The \(b\) and \(e\) are in the range of 0 and the total number of tokens of a sentence. An entity pseudo label is in the form of
\[ent(b,e),conf \tag{1}\]
It means that the tokens at the locations [\(b\), \(e-1\)] in the sentence is of entity type \(ent\). Here, \(conf\) is a value in the range of [0,1] indicating the confidence of the prediction.
A relation pseudo label is in the form of
\[rel(b,e,b^{\prime},e^{\prime}),conf \tag{2}\]
Here, \(b\) (\(b^{\prime}\)) and \(e\) (\(e^{\prime}\)) represent the beginning and ending locations of the first (second) token in the relation. To make the descriptions more intuitive, we sometimes represent a relation as
\[rel(tokens,tokens^{\prime}),conf \tag{3}\]
where \(tokens\) (\(tokens^{\prime}\)) are the first (second) tokens at locations [\(b\), \(e\)) (and [\(b^{\prime}\), \(e^{\prime}\))).
Without loss of generality, we may omit \(conf\) when writing the entity and relation pseudo labels.
**Example 1** (Running example notations).: _In later examples, we will use some well defined entity types. For example, \(org\), \(loc\), and \(people\) represent organization, location, and people respectively. Some predefined relations are \(livedIn\), \(locatedIn\), and \(orgBasedIn\). They describe one person, a location, and an organization lives in, is located in, or is based in a location respectively._
## 4 ASP Enhanced Neural Network Models for Entity-Relation Extraction (ASPER)
This section presents our proposed ASPER method. ASPER targets at utilizing answer sets and ASP to improve the quality of pseudo labels and derive more useful labels to retrain the model.
### ASPER Framework
The framework of ASPER is shown in Algorithm 1. ASPER first trains an initial model using the limited amount of training data (Line 1) and improves the model through an iterative process using ASP-revised pseudo labels (Lines 3-20).
To train an initial neural network model, we utilize the SpERT architecture proposed in the paper by Eberts and Ulges (2020) due to its lightweight nature. Multiple iterations (Steps 3-20) are used to improve the model. In each iteration, ASPER predicts the entities and relations in a sentence \(x\) (i.e., the pseudo labels) using the model \(M\) (Line 7) where \(M\) can be the initial model (trained in Line 1) or the retrained model (Line 18). Then, it utilizes ASP to update these pseudo labels (Lines 8-10). The updated pseudo labels coupled with the selected sentences are then used to retrain the model (Lines 12-18). There are many ways to revise pseudo labels. We define a preference relation over the sets of revised labels (answer sets) based on the notion of the probability of a set of revised labels (Definition 2 in Section 4.2). This preference relation is then used to select the most preferred set of revised labels. The iteration condition can be that the
prediction of the unlabeled data does not change anymore or the number of iterations reaches a threshold. In our experiments, we set the number of iterations to be a fixed number.
#### 4.1.1 Generate pseudo labels
Using the initially trained model or a model that is trained in the previous iteration \(M\), the algorithm can recognize the entities and relations in the unlabeled dataset \(D_{UL}\). The predicted entity and relation pseudo labels are in the form of Eqs. (1) and (2).
**Example 2** (Pseudo labels).: _Given a sentence "CDT Tuesday is in the area of Port Arther and Galveston, Texas." the predicted pseudo labels look like the following:_
_Entities: org(0,2), 0.888; other(1,2), 0.799; loc(7,9), 0.998; loc(10,11), 0.998; loc(12,13), 0.998._
_Relations: locatedIn(7,9,10,11), 0.998; locatedIn(0,2,12,13), 0.993; locatedIn(10,11,12,13), 0.993; orgBasedIn(1,2,12,13), 0.777._
_The pseudo label "org(0,2), 0.888" means that the tokens from location 0 to 1 (which are "CDT Tuesday") form an organization (\(org\)). This prediction has confidence value 0.888. Similarly, the pseudo label "loc(7,9)" means that "Port Arther" (the tokens at locations 7 and 8) is of type location (\(loc\)). Correspondingly, the predicted relation "locatedIn(7,9,10,11)" means that "Port Arther" is located in "Galveston". The other entities and relations can be interpreted accordingly. To assist the understanding of the relations, we depict these entities and relations in Figure 1._
**Input**: Labeled data \(D_{L}\); Unlabeled data \(D_{UL}\); Knowledge Base \(KB\)
**Parameter**: Confidence step parameter \(\Delta\) (e.g., 20)
**Output**: The model \(M\)
```
1:Learn an initial model \(M\) from \(D_{L}\)
2:\(\Delta_{t}=100-\Delta\)
3:while iteration condition is not met do
4:\(\mathbb{Z}\leftarrow\emptyset\)# The set of the selected answer sets for all the sentences in \(D_{UL}\)
5:\(D_{aug}\leftarrow\emptyset\)# The pseudo labels augmented to train the model
6:for each sentence \(x\in D_{UL}\)do
7:\(z\leftarrow\) GenPseudoLabs(\(M,x\))
8:\(A_{S}\leftarrow\)Convert2Atoms(\(z\))
9:\(W\leftarrow\)ReviseUsingASP(\(A_{S}\cup KB\))#\(W\) is an answer set associated with a confidence value \(W.conf\)
10:\(\mathbb{Z}\leftarrow\mathbb{Z}\cup W\)
11:endfor
12:\(T\leftarrow\) the confidence value at \(\Delta_{t}\) percentile of all the answer sets in \(\mathbb{Z}\)
13:for each answer set \(W\in\mathbb{Z}\)do
14:if\(W.conf\geq T\)then
15:\(D_{aug}\gets D_{aug}\cup W\)
16:endif
17:endfor
18: Train model \(M\) on \(D_{L}\cup D_{aug}\) from scratch
19:\(\Delta_{t}=\Delta_{t}-\Delta\)
20:endwhile
21:return\(M\)
```
**Algorithm 1** ASPER framework
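As an illustration of the curriculum step in Lines 12-17, the sketch below keeps only the answer sets whose confidence reaches the \(\Delta_{t}\)-th percentile, lowering the bar by \(\Delta\) in each iteration; the `.conf` attribute mirrors the notation \(W.conf\) and is our own naming.

```python
import numpy as np

def select_answer_sets(answer_sets, delta_t):
    """Keep the answer sets whose confidence is at or above the
    delta_t-th percentile of all confidences (Algorithm 1, Lines 12-17)."""
    confs = np.array([w.conf for w in answer_sets])
    threshold = np.percentile(confs, delta_t)   # T in Line 12
    return [w for w in answer_sets if w.conf >= threshold]
```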
Figure 1: Example of pseudo labels (a node represents an entity and a directed edge between a source (first tokens) and a destination (second tokens) represents a relation)
#### 4.1.2 Improve pseudo-label quality using answer sets
The predicted entity and relation pseudo labels may be wrong (as is the case for any machine learning model) or inconsistent.
**Example 3** (Inconsistent and hidden pseudo labels).: _Example 2 shows the pseudo labels predicted from one sentence. They have different types of inconsistencies._
* _(inconsistent labels) The relation_ \(locatedIn(CDT\;Tuesday,Texas)\) _and entity_ \(org(CDT\;Tuesday)\) _are not consistent because the first term of_ \(locatedIn\) _needs to be a location, but_ \(CDT\;Tuesday\) _is an organization. Similarly, the entity_ \(other(Tuesday)\) _and relation_ \(orgBasedIn(Tuesday,Texas)\) _are not consistent because the first term of_ \(orgBasedIn\) _needs to be an organization._
* _(overlapping entities) The two entities_ \(org(0,2)\) _and_ \(other(1,2)\) _overlap because "Tuesday" is a part of "CDT Tuesday". It makes more sense not to have both._
* _(hidden labels) Given_ \(locatedIn(Port\;Arther,Galveston)\) _and_ \(locatedIn(Galveston,Texas)\)_, another relation_ \(locatedIn(Port\;Arther,Texas)\) _should be valid. However, the model does not predict this relation._
In utilizing pseudo labels to improve a model, a recognized problem is _confirmation bias_ as mentioned in the paper by Cascante-Bonilla et al. (2021), which means that utilizing the wrong pseudo labels to retrain a model can amplify the error. It is critical to control which pseudo labels are utilized to retrain the model.
The issues listed above are not inclusive. Their root issue is the ineffective pure data-driven model learned from insufficient training data. The most commonly identified issue with an ineffective model is its inaccurate predictions. We target at addressing the root issue by somehow correcting the wrongly predicted pseudo labels. Our ASPER framework (Lines 8-10) improves the quality of pseudo labels by computing a consistent set of pseudo labels (an answer set). It first converts all the pseudo labels (\(z\)) to a set of atoms \(A_{S}\) that ASP can process (using Function _Convert2Atoms_). Given \(A_{S}\) and the knowledge base \(KB\), there may be multiple answer sets. _ReviseUsingASP_ utilizes the rules in the \(KB\) to calculate a probability for each answer set and chooses the one with the highest probability and associates with it a confidence level. The details of the two steps are described in Section 4.2.2.
#### 4.1.3 Model retraining with improved pseudo labels
Once we get the improved pseudo labels from the unlabeled dataset \(D_{UL}\), these improved pseudo labels are added to \(\mathbb{Z}\) (Line 10) and are used to help retrain the model.
We observe that some answer sets have much higher confidence values than others. The pseudo labels in these answer sets tend to be correct with higher probabilities. Based on this observation, the model retraining first utilizes the pseudo labels in the answer sets with higher confidence values and proceeds to use pseudo labels in answer sets with lower confidence values. This idea is the same as that in curriculum labeling proposed by Cascante-Bonilla et al. (2021), which uses a portion (with high prediction confidence) of the pseudo labels to retrain a model in each iteration. This curriculum idea is implemented through the use of \(\Delta_{t}\) in Line 12. As the iterations proceed, \(\Delta_{t}\) decreases (Line 19). At the end, when \(\Delta_{t}\) becomes zero, the model retraining uses the pseudo labels in all the answer sets.
### Computing Improved Pseudo Labels via ASP
**Background**. ASP, proposed in the papers by Marek and Truszczynski (1999) and Niemela (1999), is a knowledge representation and reasoning (KR&R) approach to problem solving using logic programs under the answer set semantics introduced by Gelfond and Lifschitz (1990). In ASP, a problem is solved by first encoding it as an ASP program, whose answer sets correspond one-to-one to the solutions of the problem. The encoded ASP program is then given to an ASP solver (e.g., clingo as described in the paper by Gebser et al. (2014)) to compute answer sets, and solutions can be extracted from these answer sets.
The ASP language also includes language-level extensions to simplify the use of ASP in practical applications. We will make use of the _choice atom_ of the form \(l\)\(\{l_{1};\ldots;l_{n}\}\)\(u\) where \(l_{i}\) is an atom, \(l\) and \(u\) are integers. Intuitively, it says that the number of literal \(l_{i}\) that is true must be within a lower bound \(l\) and an upper bound \(u\). If \(l\) (resp. \(u\)) is not specified, then the lower (resp. upper) bound is \(0\) (resp. \(+\infty\)) by default. A choice atom can appear in the head or body of a rule or after the default negation.
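As a minimal, self-contained illustration of a choice atom (assuming the `clingo` Python package is available), the program below enumerates the answer sets of `1 { a; b; c } 2.`: every subset of {a, b, c} of size one or two.

```python
import clingo

ctl = clingo.Control(["0"])                 # "0": enumerate all answer sets
ctl.add("base", [], "1 { a; b; c } 2.")     # choice atom with bounds 1 and 2
ctl.ground([("base", [])])
with ctl.solve(yield_=True) as handle:
    for model in handle:
        print(sorted(str(s) for s in model.symbols(atoms=True)))
```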
#### 4.2.1 ASP for computing pseudo labels
First, the pseudo labels representing the entities (Eq. (1)) and relations (Eq. (2)) are represented in ASP using atoms of the form (4) or (5).
\[atom(entity(ent,b,e),conf)\qquad\text{(4)}\qquad atom(relation(rel,b,e,b^{ \prime},e^{\prime}),conf) \tag{5}\]
Let \(A_{S}\) be the collection of atoms of the form (4) or (5) for a sentence \(S\). A sentence is a part of a dataset \(D\) that often has declarative knowledge associated with it. In this paper, we consider the different types of knowledge that are usually available given \(D\) which can be specified as follows:
1. _Type declaration_: a type declaration defines the type of a relation and is given in the form
\[type\_def(rel,ent,ent^{\prime}). \tag{6}\]
A type declaration by Eq. (6) says that relation \(rel\) is between entities \(ent\) and \(ent^{\prime}\). For example, in the domain CONLL04 (see next section), the relation \(liveIn\) is specified by the atom \(type\_def(liveIn,peop,loc)\) which says that it is a relationship between entities of the types \(peop\) and \(loc\).
2. _Inference rule_: in many domains, there are relationships among relations. For example, \(locatedIn\) is transitive in the sense that if area \(A\) is located in area \(B\) and \(B\) is located in \(C\) then \(A\) is located in \(C\). This rule can easily be specified by an ASP rule of the following form2:
Footnote 2: We use variables (strings starting with an uppercase letter) in the logic program. A rule with variables is the collection of ground rules obtained from substituting variables with any possible values; in this case, variables refer to locations in the sentence.
\[rule(X,Y,Z)\gets Body \tag{7}\]
where \(X\), \(Y\), and \(Z\) is of the form \(relation(R,B,E,B^{\prime},E^{\prime})\) and \(Body\) is domain-specific information. The rule relating to \(locatedIn\) discussed above can be encoded as follows:
\[rule(relation(locatedIn,B_{1},E_{1},B_{2},E_{2}),relation(orgbasedIn,B_{o},E_{o},B_{1},E_{1}),\] \[\underline{relation(orgbasedIn,B_{o},E_{o},B_{2},E_{2})})\leftarrow\] \[atom(relation(locatedIn,B_{1},E_{1},B_{2},E_{2})),atom(relation (orgbasedIn,B_{o},E_{o},B_{1},E_{1})),\] \[not\ atom(relation(orgbasedIn,B_{o},E_{o},B_{2},E_{2})).\]
The head of the above ASP rule encodes an inference rule, whose first two labels (the relations
on the first line) are predicted but the third relation (underlined) is missing in the set of predicted labels. This inference rule is used for inferring the third pseudo label only if the first two pseudo labels exist and the third pseudo label is not in the predicted model.
3. _Optional parameters_: in some dataset, entity pseudo labels cannot overlap and/or each sentence has at least one relation. Such information can be specified by setting different flags. In our experiments, we use the following:
\[overlap\_fl. \%\text{ true if present} \tag{8}\] \[relation\_fl. \%\text{ true if present} \tag{9}\]
Form (8) forbids overlapping entities while (9) signals that each sentence should have a relation.
4. _Other rules_: any ASP rule can be a part of the domain-dependent part. Preferably, these rules work with the predicates defined below.
We refer to the collection of rules of the forms (7)-(9) and any other rules for a domain \(D\) as \(KB_{D}\). We say that \(A_{S}\) is _inconsistent_ when it contains pseudo labels that either contradict each other or violate the rules in \(KB_{D}\).
We next describe \(\Pi\), the ASP program that takes the domain dependent knowledge \(KB_{D}\) and the set of pseudo labels for a sentence \(A_{S}\) as inputs and produces a consistent set of pseudo labels. \(\Pi\) uses the following main predicates:
* \(pi(X)\): the prediction of \(X\) (entity/relation pseudo label) might be incorrect (\(pi\) stands for \(possible\_incorrect\));
* \(ok(X)\): the prediction of \(X\) is considered as correct; as such, \(X\) can be used as pseudo label (for further training);
* \(nok(X)\): \(X\) is not included in the output (set of pseudo labels); and
* \(inf(X)\): \(X\) is derived using domain-dependent rules.
1. _Overlap checking rules_: \(\Pi\) contains the following rules:
\[2\{pi(entity(N,B,E));pi(entity(N^{\prime},B^{\prime},E^{\prime}))\} \gets overlap\_fl,\] \[atom(entity(N,B,E)),B<B^{\prime},E>B^{\prime},atom(entity(N^{ \prime},B^{\prime},E^{\prime})). \tag{10}\] \[\gets overlap\_fl,ok(entity(N,B,E)),B<B^{\prime},E>B^{ \prime},ok(entity(N^{\prime},B^{\prime},E^{\prime})). \tag{11}\]
Rule (10) states that if the starting location \(B^{\prime}\) of an entity (\(N^{\prime}\)) lies within the interval of some other entity (\(N\)) then the predictions of the two entities might be wrong, which is represented by the predicate \(pi\). This leads to constraint (11), which prevents the consideration of both entities at the same time when they overlap in a sentence. The presence of \(overlap\_fl\) in these rules indicates that they are in effect only if \(overlap\_fl\) is set to true in \(KB_{D}\). Similar rules and constraints are also implemented for the case that the starting indices of both entities are the same; they are omitted for brevity and are listed in the appendix.
2. _Type checking rules_: This type of rules is used to make sure that the entity types and relation types are consistent with the type declaration information in \(KB_{D}\):
\[2\{pi(relation(R,B,E,B^{\prime},E^{\prime}));pi(entity(N,B,E))\} \gets type\_def(R,N_{1},N_{2}),\] \[atom(relation(R,B,E,B^{\prime},E^{\prime})),atom(entity(N,B,E)),N _{1}\neq N. \tag{12}\] \[\leftarrow ok(relation(R,B,E,\_,\_)),ok(entity(N,B,E)),type\_def(R,N^{ \prime},\_),N\neq N^{\prime}. \tag{13}\]
Rule (12) states that if the first entity in a predicted relation is different from its specified type then both the predicted relation and the entity might be wrong. Constraint (13) disallows the
acceptance of both the relation and the entity if their types do not match the specification. Again, we omit the analogous rules relating to the second entity and the relation.
3. _Type inference rules_: These rules use the type declarations from \(KB_{D}\) to infer the type of accepted entities or the possible incorrect predictions of relations and entities.
\[2\{ok(entity(N,B,E));ok(entity(N^{\prime},B^{\prime},E^{\prime}))\}\leftarrow\] \[type\_def(R,N,N^{\prime}),ok(relation(R,B,E,B^{\prime},E^{ \prime})). \tag{14}\] \[pi(entity(N^{\prime},B^{\prime},E^{\prime}))\gets atom(relation( R,B,E,B^{\prime},E^{\prime})),\] \[pi(entity(N,B,E)),type\_def(R,N,N^{\prime}). \tag{15}\]
Rule (14) says that if a relation \(R\) is accepted as a true prediction then we should also accept its first and second entity with the type specified by the type declaration of \(R\). Rule (15) indicates that if the first entity of a relation \(R\) is potentially incorrect then so is its second entity.
4. _Rules for inference from predictions_: The following rules are used for processing an inference rule specified in \(KB_{D}\) via atom of the form \(rule(X,Y,Z)\).
\[6\{pi(X);pi(Y);pi(Z);inf(Z);dependency(X,Z);dependency(Y,Z)\}\leftarrow\] \[rule(X,Y,Z),atom(X),atom(Y),\ not\ atom(Z). \tag{16}\] \[\leftarrow ok(Y),inf(Y),dependency(X,Y),not\ ok(X).\] (17) \[\leftarrow rule(X,Y,Z),ok(X),ok(Y),\ not\ ok(Z). \tag{18}\]
Rule (16) says that if we have an inference rule \(rule(X,Y,Z)\) and \(X\) and \(Y\) were predicted but not \(Z\) then \(Z\) is an inferred prediction and all the predictions might be incorrect. Furthermore, \(Z\) depends on \(X\) and \(Y\). Constraint (17) states that if \(Y\) is an inferred atom and depends on \(X\) then the acceptance of \(Y\) cannot be done separately from the acceptance of \(X\). Constraint (18), on the other hand, states that if \(rule(X,Y,Z)\) is an inference rule then the acceptance of \(X\) and \(Y\) cannot be done separately from the acceptance of \(Z\).
5. _Rules for checking the existence of relation_: The following rule records the acceptance of some relation pseudo label whenever \(relation\_fl\) is defined:
\[relation\_exists\gets ok(relation(R,\_,\_,\_,\_,\_)),relation\_fl. \tag{19}\]
6. _Rules for selection of a consistent set of pseudo labels_: This set of rules defines the various types of atoms that will be used for computing the probability of a set of pseudo labels and selecting a set of pseudo labels.
\[atom(X)\gets atom(X,P). \tag{20}\]
\[invprob(X,P)\gets atom(X,P),nok(X). \tag{21}\]
\[prob(X,P)\gets atom(X,P),ok(X). \tag{22}\]
\[ok(X)\gets atom(X),\ not\ pi(X). \tag{23}\]
\[\{ok(X)\}\gets atom(X),pi(X). \tag{24}\]
\[nok(X)\gets atom(X),\ not\ ok(X). \tag{25}\]
Rule (20) projects an input \(atom(X,P)\) to define \(atom(X)\) for use by other parts of the program. Rules (21) and (22) define the predicates \(invprob\) and \(prob\) that are used in computing the probability of the set of selected labels (see later). Rule (23) says that if there is no information about the potential incorrectness of the prediction of \(X\) then \(X\) must be accepted as a pseudo label. Rule (24) states that if the prediction of \(X\) might be incorrect then \(X\) could be accepted or not accepted as a pseudo label. Rule (25) says that if \(X\) is not selected as a pseudo label then \(nok(X)\) is true.
In summary, \(\Pi\) is the collection of Rules (10)-(25). Together with the set \(A_{S}\) of the predicted labels and the knowledge base \(KB_{D}\), we can compute a new set of pseudo labels by computing the answer sets of \(\pi(S)=A_{S}\cup KB_{D}\cup\Pi\). Each answer set \(W\) of \(\pi(S)\) consists of a set of atoms of the form \(ok(X)\). We say that \(L(W)=\{X\mid ok(X)\in W\}\) is the set of pseudo labels corresponding to \(W\). Intuitively, \(L(W)\) encodes a revision of \(A_{S}\). For an arbitrary sentence \(S\) in a domain \(D\), we can prove several properties of the program \(\pi(S)\), such as (_i_) \(\pi(S)\) is consistent; (_ii_) if \(A_{S}\) does not contain any relation then \(relation\_exists\) does not belong to any answer set of \(\pi(S)\); and (_iii_) for every \(W\), \(L(W)\) does not contain any overlapping pseudo labels if \(overlap\_fl\) is true, and \(L(W)\) does not contain type-inconsistent relations or entities. Intuitively, (_i_) represents the fact that there is always some revision of \(A_{S}\). This can be proven using the splitting theorem proposed by Lifschitz and Turner (1994); (_ii_) holds because of Rule (19); and (_iii_) holds due to the rules and constraints defined for type inference and checking. Due to space limitations, we omit the formal proofs. The next example illustrates the use of \(\Pi\).
**Example 4**.: _Consider the sentence in Example 2. The program produces twenty answer sets (the \(KB_{\texttt{CoNLL04}}\) and the answer sets of the program are given in the Appendix). We observe that_
* _Some answer set does not contain any atom of the form_ \(ok(relation(\ldots))\)_;_
* _No answer set contains both_ \(ok(entity(org,0,2))\) _and_ \(ok(entity(other,1,2))\) _because they overlap each other and_ \(KB_{\texttt{CoNLL04}}\) _contains_ \(overlap\_fl\)_;_
* \(ok(relation(orgBasedIn,1,2,12,13))\) _belongs to some answer sets but not all. It is because_ \(entity(other,1,2)\) _is of the type_ \(other\) _that does not match the required type (_org_) in the definition of_ \(orgbasedIn\)_; and_
* _If_ \(ok(relation(locatedIn,7,9,10,11))\) _and_ \(ok(relation(locatedIn,10,11,12,13))\) _belong to an answer set then_ \(ok(relation(locatedIn,7,9,12,13))\) _also belongs to the answer set due to the transitivity of_ \(locatedIn\)_, a part of the_ \(KB_{\texttt{CoNLLQ4}}\)_._
Note that encoding knowledge in ASP incurs extra work. However, compared with manually labeling a large amount of data, this extra work pays off.
#### 4.2.2 Computing the Most Preferred Answer Set
**Definition 2** (Preference score of an answer set).: _Given an answer set \(W\), its preference score is defined as_
\[pref(W)=\prod_{prob(a,p)\in W}p\cdot\prod_{invprob(a,p)\in W}(1-p) \tag{26}\]
The preference score \(pref(W)\) is the product of two terms: the first term is the product of the confidence levels \(p\) of every pseudo label \(a\) such that \(ok(a)\in W\) (hence, \(prob(a,p)\in W\), due to Rule (22)) and the second term is the product of the complement confidence levels \(1-p\) of every pseudo label \(a\) such that \(ok(a)\not\in W\) (hence, \(invprob(a,p)\in W\), due to Rule (21)). It is easy to see that for two answer sets \(A\) and \(B\) such that \(L(B)\subseteq L(A)\) (i.e., \(A\) contains all acceptable labels in \(B\)), if \(prob(l,p)\in A\) and \(p\geq 0.5\) for every \(l\in L(A)\setminus L(B)\), then \(pref(A)\geq pref(B)\). Intuitively, this preference definition favors answer sets containing more (w.r.t. \(\subseteq\)) pseudo labels whose confidence level is greater than 0.5. When \(relation\_fl\) is set to true, we set \(pref(W)=0\) if \(relation\_exists\not\in W\). Preference will be given to the answer set with the maximal preference score. When all answer sets have zero preference, we prefer those with a higher number of entity pseudo labels. The selection of the most preferred answer set is implemented using clingo and its Python API library. The confidence level of an answer set \(W\) (i.e., \(W.conf\)) is defined by \(\min\{p\mid prob(l,p)\in W\}\).
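The enumeration and scoring can be sketched with clingo's Python API; the following is a minimal illustration rather than the exact implementation: the function name is ours, and we assume confidence levels are encoded as integers in \([0,1000]\) (per-mille), since ASP atoms cannot carry floating-point arguments.

```python
import clingo

def select_preferred_answer_set(program_text):
    """Enumerate all answer sets of pi(S) and return the one that
    maximises pref(W) of Eq. (26). Assumes confidence levels appear
    as the last argument of prob/invprob, scaled to [0, 1000]."""
    models = []
    ctl = clingo.Control(["0"])          # "0": enumerate all answer sets
    ctl.add("base", [], program_text)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: models.append(m.symbols(atoms=True)))

    def pref(atoms):
        score, relation_exists = 1.0, False
        for sym in atoms:
            if sym.name == "prob":       # ok(a) holds: multiply by p
                score *= sym.arguments[-1].number / 1000.0
            elif sym.name == "invprob":  # nok(a) holds: multiply by 1 - p
                score *= 1.0 - sym.arguments[-1].number / 1000.0
            elif sym.name == "relation_exists":
                relation_exists = True
        # with relation_fl set (Rule (19)), answer sets containing no
        # relation receive zero preference; drop this line otherwise
        return score if relation_exists else 0.0

    return max(models, key=pref)
```

The tie-breaking by the number of entity pseudo labels, used when all preferences are zero, is omitted for brevity.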
## 5 Experiments
The algorithms and models are implemented in Python 3.8 and run on a server with two 32-core CPUs @3.9 GHz, 512 GB RAM, and one NVIDIA A100 GPU. The Clingo version is 5.5.2.
**Data**. We use two datasets, CoNLL04 (Eberts and Ulges, 2020; Roth and Yih, 2004; Gupta et al., 2016; Wang and Lu, 2020) and SciERC (Eberts and Ulges, 2020; Luan et al., 2018), which have been utilized in other entity/relation extraction work. The CoNLL04 dataset contains sentences extracted from newspapers. We employ the training (922 sentences), development (231 sentences) and test set (288 sentences) split, which is similar to that of (Gupta et al., 2016). The SciERC dataset consists of abstracts of artificial intelligence research papers. The training/development/test split is 1861/275/551 sentences, which is the same as that of Luan et al. (2018).
Both datasets include training sets. To utilize these datasets to verify the effectiveness of our method, we do not use the whole training set to train the initial model. Instead, we use a small percentage of the training data (e.g., \(p_{tr}\in(0,1]\)) as the actual training data \(D_{L}\) and use the remaining (\(1-p_{tr}\)) training data as the unlabeled data \(D_{UL}\) to calculate pseudo labels. The original testing data is still utilized to test the model performance. To get **stable** results, for each dataset, we randomly choose five subsets (each one contains \(p_{tr}\) of the training data) from the training data and train five models. Then, we report the averaged results from the five models.
**Performance metric**. We report the micro and macro \(F_{1}\) values for entities (E), relations (R), and relations together with entities (ER). The micro \(F_{1}\) is calculated globally by counting the total true positives (TP), false negatives (FN), and false positives (FP). In all the counting, a special class (not-an-entity or not-a-relation), which is encoded as zero, is never treated as positive. For example, considering the \(E\) type, all the correctly predicted non-zero entities are counted as TP. Among the wrongly predicted entities, an entity is counted as FP if it is wrongly predicted to be non-zero, and an entity is counted as FN if its true class is non-zero. Some wrongly predicted entities are counted in both FP and FN. The macro \(F_{1}\) is obtained by calculating the prediction \(F_{1}\) when treating each class as positive (and all the others as negative) and averaging the \(F_{1}\)s for all the classes. We also report the running time of the models.
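For concreteness, the counting convention above can be sketched as follows, where `y_true` and `y_pred` are flat integer label sequences and 0 encodes the special not-an-entity/not-a-relation class (names are illustrative):

```python
def micro_f1(y_true, y_pred):
    """Micro F1 where class 0 (not-an-entity / not-a-relation) is
    never treated as positive, following the convention above."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        if t == p:
            if t != 0:        # correctly predicted non-zero label
                tp += 1
        else:
            if p != 0:        # wrongly predicted to be non-zero
                fp += 1
            if t != 0:        # true class is non-zero but missed/mislabelled
                fn += 1       # a sample may thus count in both FP and FN
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```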
**Methods**. We compare our ASPER method with three classical and state-of-the-art baselines listed below. (1) _Self-training_ as described in the papers by McClosky et al. (2006); Reichart and Rappoport (2007). For this method, we use 90% as the threshold to select pseudo labels to be included for model retraining. (2) _Curriculum-labeling (CL)_: this method retrains the model using the curriculum-labeling strategy proposed by Cascante-Bonilla et al. (2021). This is a state-of-the-art approach for SSL using pseudo labels. It has one hyperparameter (stepping threshold) controlling the confidence value of pseudo labels that are included in the training of the model in one iteration. This parameter is set to the same value (20%) as in the original paper. (3) _Tri-training_: this method retrains the model using the tri-training strategy proposed by Zhou and Li (2005). A recent study by Ruder and Plank (2018) has shown that the classic tri-training method is still a strong baseline for neural semi-supervised learning for natural language processing.
For a fair comparison, we run five iterations (retraining of the model) for every model. For our model, \(\Delta\) is set to 20%, as in the curriculum-labeling approach.
### Effectiveness Analysis
We conduct experiments to examine the effectiveness of our approach. Our **first set of experiments** compares our ASPER method with the other baselines. In these experiments, \(p_{tr}\) is set to 10%, i.e., 10% of the training data forms \(D_{L}\). Table 1 shows the results on the CoNLL04 and the SciERC datasets. It shows that ASPER outperforms all the other baselines on all the calculated \(F_{1}\) measurements for recognizing relations (R) and entities together with relations (ER), at both the micro and macro levels. For entity (E) recognition, Tri-training is slightly better than our method. This is because our training process gives higher preference to sentences with potentially correct relations. These results show the superiority of our proposed method.
When more training data is available and the KB cannot provide extra information beyond what the labeled data can provide, ASPER may not beat purely data-driven models such as tri-training and curriculum labeling. However, ASPER is able to improve (or at least not hurt) the base deep learning model that it is built upon (the SpERT model in this paper), no matter whether the KB can provide much more information than the training data or not.
We conduct a more detailed analysis of the running of ASPER by showing its performance in three iterations (other iterations show a similar trend). This analysis shows the quality of the pseudo label revision. The quality can be measured by comparing the pseudo labels with the ground truth labels (which are the training data with the correct labels, but are not directly used for training) and calculating the \(F_{1}\) score.
Table 2 shows the detailed analysis of how the ASP component helps improve the quality of the generated pseudo labels. We can see that after each iteration, the \(F_{1}\) of the ASP-generated pseudo labels is always higher than that of the raw pseudo labels. This confirms that the use of answer sets and ASP helps improve the quality of the pseudo labels. Table 2 also shows that the performance improvement in earlier iterations is larger than that in later iterations. This is consistent with our design of utilizing curriculum labeling, i.e., answer sets with higher confidence values are used in earlier iterations (Section 4.1.3).
The **second** set of experiments examines the effect of the amount of initial training data on ASPER. We change the amount of initial training dataset \(D_{L}\) by varying the percentage \(p_{tr}\) with different values (5%, 10%, 20%, 30%).
The results are reported in Figure 2. We only report the F1 of relation with entity (micro) because the trend of the other F1 values is the same. The figure shows that, when there is less initial training data, the overall performance is worse. However, the positive effect of ASPER is more obvious when there is less training data, which can be observed from the larger gap between ASPER and the other methods for smaller \(p_{tr}\) (e.g., 5%). This result is consistent with
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{method} & \multicolumn{6}{c|}{CoNLL04 dataset} \\ \cline{2-7} & \multicolumn{3}{c|}{\(F_{1}\) (micro)} & \multicolumn{3}{c|}{\(F_{1}\) (macro)} \\ \cline{2-7} & E & R & ER & E & R & ER \\ \hline Self-train & 77.74\(\pm\)1.7 & 41.76\(\pm\)5.7 & 41.39\(\pm\)5.7 & 72.50\(\pm\)1.9 & 43.19\(\pm\)6.0 & 42.82\(\pm\)6.0 \\ \hline CL & 77.49\(\pm\)1.1 & 41.61\(\pm\)3.0 & 41.35\(\pm\)3.2 & 72.03\(\pm\)1.6 & 43.07\(\pm\)3.8 & 42.77\(\pm\)4.0 \\ \hline Tri-train & 78.63\(\pm\)2.4 & 42.60\(\pm\)6.7 & 42.29\(\pm\)6.7 & 72.49\(\pm\)2.5 & 42.99\(\pm\)7.1 & 42.64\(\pm\)7.2 \\ \hline ASPER & **81.25\(\pm\)**1.2 & **52.47\(\pm\)**3.6 & **52.41\(\pm\)**3.6 & **75.90\(\pm\)**1.7 & **53.32\(\pm\)**4.0 & **53.27\(\pm\)**4.0 \\ \hline \multirow{3}{*}{method} & \multicolumn{6}{c|}{SciERC dataset} \\ \cline{2-7} & \multicolumn{3}{c|}{\(F_{1}\) (micro)} & \multicolumn{3}{c|}{\(F_{1}\) (macro)} \\ \cline{2-7} & E & R & ER & E & R & ER \\ \hline Self-train & 56.72\(\pm\)1.2 & 18.60\(\pm\)2.6 & 12.36\(\pm\)1.7 & 54.43\(\pm\)1.4 & 11.07\(\pm\)3.7 & 6.98\(\pm\)2.3 \\ \hline CL & 60.75\(\pm\)0.8 & 31.00\(\pm\)2.1 & 20.81\(\pm\)1.0 & 59.19\(\pm\)0.4 & 22.00\(\pm\)3.8 & 15.55\(\pm\)1.8 \\ \hline Tri-train & **60.99\(\pm\)**0.7 & 27.43\(\pm\)1.9 & 18.94\(\pm\)1.4 & **59.52\(\pm\)**0.4 & 17.09\(\pm\)3.6 & 11.59\(\pm\)2.7 \\ \hline ASPER & 60.34\(\pm\)0.6 & **32.30\(\pm\)**1.2 & **21.73\(\pm\)**1.2 & 59.10\(\pm\)0.4 & **22.72\(\pm\)**3.1 & **16.06\(\pm\)**2.3 \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison of ASPER and other baselines on the two datasets (\(E\): entity, \(R\): relation, \(ER\): entity and relation; \(p_{tr}=10\%\))
our intuition of designing ASPER to alleviate the issue of an insufficient amount of training data. Figure 2(b) also shows that our method does not outperform, but has comparable performance to, CL and tri-training when \(p_{tr}\) is larger. The major reason is that the knowledge base is less effective in capturing the characteristics of the second domain (research articles). More effective rules need to be developed to enrich the knowledge base in the future.
The third set of experiments conducts an **ablation study** to investigate the effect of the rules in the ASP program. Due to space limitations, we present this analysis on one dataset. Table 3 shows the results. The first row (_with all rules_) shows the results when all the rules are utilized.
The third row (_without any rules_), on the other hand, shows the results when no rule is utilized. The results (the improvement of the first row compared to the third row) clearly demonstrate that
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{} & \multicolumn{6}{|c|}{CoNLL04 dataset} \\ \cline{3-8} & \multicolumn{3}{|c|}{\(F_{1}\) (micro)} & \multicolumn{3}{|c|}{\(F_{1}\) (macro)} \\ \cline{3-8} & ASPER & E & R & ER & E & R & ER \\ \hline Iter 1 & no-ASP & 75.65 & 41.62 & 41.07 & 69.81 & 42.86 & 42.29 \\ \cline{2-8} & with ASP & 77.06 & 44.45 & 44.45 & 71.16 & 45.40 & 45.40 \\ \hline Iter 2 & no-ASP & 78.68 & 49.78 & 49.08 & 73.24 & 50.86 & 50.16 \\ \cline{2-8} & with ASP & 79.01 & 50.19 & 50.19 & 73.49 & 51.27 & 51.27 \\ \hline Iter 3 & no-ASP & 79.98 & 52.92 & 52.66 & 74.25 & 53.97 & 53.72 \\ \cline{2-8} & with ASP & 80.24 & 53.62 & 53.62 & 74.52 & 54.67 & 54.67 \\ \hline \multirow{3}{*}{} & \multicolumn{6}{|c|}{SciERC dataset} \\ \cline{3-8} & \multicolumn{3}{|c|}{\(F_{1}\) (micro)} & \multicolumn{3}{|c|}{\(F_{1}\) (macro)} \\ \cline{3-8} & ASPER & E & R & ER & E & R & ER \\ \hline Iter 1 & no-ASP & 60.34 & 30.87 & 21.98 & 58.78 & 21.47 & 16.59 \\ \cline{2-8} & with ASP & 60.34 & 31.12 & 22.09 & 58.78 & 21.75 & 16.79 \\ \hline Iter 2 & no-ASP & 60.86 & 32.59 & 23.10 & 59.39 & 22.34 & 17.25 \\ \cline{2-8} & with ASP & 60.86 & 32.83 & 23.24 & 59.39 & 22.58 & 17.42 \\ \hline Iter 3 & no-ASP & 60.65 & 33.11 & 23.53 & 59.31 & 22.79 & 17.74 \\ \cline{2-8} & with ASP & 60.65 & 33.12 & 23.54 & 59.31 & 22.82 & 17.76 \\ \hline \end{tabular}
\end{table}
Table 2: The effect of the ASPER algorithm to improve the quality of the pseudo labels on the two datasets. (\(E\): entity, \(R\): relation, \(ER\): entity and relation; \(p_{tr}=10\%\))
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{3}{*}{} & \multicolumn{3}{|c|}{\(F_{1}\) (micro)} & \multicolumn{3}{|c|}{\(F_{1}\) (macro)} \\ \cline{2-7} & E & R & ER & E & R & ER \\ \hline with all rules & **81.25** & **52.47** & **52.41** & **75.90** & **53.32** & **53.27** \\ \hline with all rules except the \(relation\_exists\) rule & 76.64 & 34.13 & 33.84 & 70.56 & 34.87 & 34.57 \\ \hline without any rules & 76.74 & 31.07 & 31.07 & 70.52 & 31.99 & 31.99 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on ASPER; CoNLL04
Figure 2: Performance comparison (varying training data amount)
the rules contribute positively to the performance of ASPER. We conduct a further analysis of the effect of the different types of rules and find that the \(relation\_exists\) rule (Rule (19)) plays the most significant role. The second row shows the results from the program when the \(relation\_exists\) rule is not utilized, but all the other rules are used. The improvement contributed by all the other rules (captured by the difference between the 2nd and the 3rd rows) is not as large as that of the \(relation\_exists\) rule (observed from the difference between the 1st and the 2nd rows).
### Efficiency Analysis
We also examine the running time of the different methods to understand the overhead brought by the ASP program. Due to space constraints, we report the summarized data here. Self-training, curriculum labeling, and ASPER use a similar amount of time. On the CoNLL04 dataset, it takes approximately 40-50 minutes to run the five iterations. Tri-training's time is approximately three times that of the other three methods because it needs to train three models in each iteration. The overhead of using ASP to generate the updated pseudo labels is about 30 seconds in each iteration. This time is negligible compared with the expensive NN model training.
## 6 Conclusions
In this paper, we presented a novel method ASPER, which leverages Answer Set Programming (ASP) to improve the performance of neural network models in the joint recognition of entities and relations from text data when a limited amount of training data is available. ASPER makes use of pseudo labels. The ASP program encodes different types of rules by taking advantage of commonsense domain knowledge. The experiments on two real datasets show that ASPER reports significantly better results than the other baselines in most cases.
|
2306.10990 | Super-resolving sparse observations in partial differential equations: A
physics-constrained convolutional neural network approach | We propose the physics-constrained convolutional neural network (PC-CNN) to
infer the high-resolution solution from sparse observations of spatiotemporal
and nonlinear partial differential equations. Results are shown for a chaotic
and turbulent fluid motion, whose solution is high-dimensional, and has fine
spatiotemporal scales. We show that, by constraining prior physical knowledge
in the CNN, we can infer the unresolved physical dynamics without using the
high-resolution dataset in the training. This opens opportunities for
super-resolution of experimental data and low-resolution simulations. | Daniel Kelshaw, Luca Magri | 2023-06-19T15:00:04Z | http://arxiv.org/abs/2306.10990v1 | # Super-resolving sparse observations in partial differential equations: A physics-constrained convolutional neural network approach
###### Abstract
We propose the physics-constrained convolutional neural network (PC-CNN) to infer the high-resolution solution from sparse observations of spatiotemporal and nonlinear partial differential equations. Results are shown for a chaotic and turbulent fluid motion, whose solution is high-dimensional, and has fine spatiotemporal scales. We show that, by constraining prior physical knowledge in the CNN, we can infer the unresolved physical dynamics without using the high-resolution dataset in the training. This opens opportunities for super-resolution of experimental data and low-resolution simulations.
## 1 Introduction
Observations of turbulent flows and physical systems are limited in many cases, with only sparse or partial measurements being accessible. Access to limited information obscures the underlying dynamics and provides a challenge for system identification [e.g., 1]. Super-resolution methods offer the means for high-resolution state reconstruction from limited observations, which is a key objective for experimentalists and computational scientists alike. Current methods for super-resolution or image reconstruction primarily make use of convolutional neural networks due to their ability to exploit spatial correlations. In the classic, data-driven approach there is a requirement for access to pairs of low- and high-resolution samples, which are needed to produce a parametric mapping for the super-resolution task [e.g., 2; 3; 4; 5]. Considering observations of physical systems, the problems encountered in the absence of ground-truth high-resolution observations can be mitigated by employing a physics-constrained approach by imposing prior knowledge of the governing equations [e.g., 6; 7; 8].
Physics-informed neural networks (PINNs) [7] have provided a tool for physically-motivated problems, which exploits automatic-differentiation to constrain the governing equations. Although the use of PINNs for super-resolution shows promising results for simple systems [9], they remain challenging to train, and are not designed to exploit spatial correlations [10; 11]. On the other hand, convolutional neural networks are designed to exploit spatial correlations, but they cannot naturally leverage automatic differentiation to evaluate the physical loss, as PINNs do, because they provide a mapping between states, rather than a mapping from the spatiotemporal coordinates to states as in PINNs. As such, the design of physics-informed convolutional neural networks is more challenging, and relies on finite-difference approximations or differentiable solvers [12; 13]. For example, the authors of [13] show results for a steady flow field (with no temporal dynamics), which produces a mapping for stationary solutions of the Navier-Stokes equations. In this work, we design a framework to tackle spatiotemporal partial differential equations. For this, we propose a super-resolution method that does not require the full set of high-resolution observations. To accomplish this, we design and propose the physics-constrained convolutional neural network (PC-CNN) with a time windowing scheme. We apply this to a two-dimensional chaotic and turbulent fluid motion. The paper is structured as follows. In §2 we introduce the super-resolution task in the context of a mapping from low- to high-resolution.
the mapping is discussed in §3 before providing an overview of the methodology in §4. We introduce the two-dimensional, turbulent flow in §5 and provide details on the pseudospectral method used for discretisation. Finally, results are demonstrated in §6. Conclusions in §7 end the paper.
## 2 Super-resolution task
We consider partial differential equations (PDEs) of the form
\[\mathcal{R}(\mathbf{\tilde{u}};\lambda)\equiv\partial_{t}\mathbf{\tilde{u}}-\mathcal{N} (\mathbf{\tilde{u}};\lambda), \tag{1}\]
where \(\mathbf{x}\in\Omega\subset\mathbb{R}^{n}\) denotes the spatial location; \(t\in[0,T]\subset\mathbb{R}_{\geq 0}\) is the time, where \(n\) is the space dimension; \(\mathbf{\tilde{u}}:\Omega\times[0,T]\rightarrow\mathbb{R}^{m}\) is the state, where \(m\) is the number of components of the vector function \(\mathbf{\tilde{u}}\); \(\lambda\) are the physical parameters of the system; \(\mathcal{N}\) is a sufficiently smooth differential operator, which represents the nonlinear part of the system of \(m\) partial differential equations; \(\mathcal{R}\) is the residual; and \(\partial_{t}\) is the partial derivative with respect to time. A solution \(\mathbf{u}\) of the PDE (1) is the function that makes the residual vanish, i.e., \(\mathcal{R}(\mathbf{u};\lambda)=0\).
Given sparse observations, \(\mathbf{u}(\mathbf{\Omega}_{L},t)\), we aim to reconstruct the underlying solution to the partial differential equation on a high-resolution grid, \(\mathbf{u}(\mathbf{\Omega}_{H},t)\), where the domain \(\Omega\) is discretised on low- and high-resolution uniform grids, \(\mathbf{\Omega}_{L}\subset\mathbb{R}^{N^{n}}\) and \(\mathbf{\Omega}_{H}\subset\mathbb{R}^{M^{n}}\), respectively, such that \(\mathbf{\Omega}_{L}\cap\mathbf{\Omega}_{H}=\mathbf{\Omega}_{L}\); \(M=\kappa N\); and \(\kappa\in\mathbb{N}^{+}\) is the up-sampling factor. Our objective is to find a parameterised mapping \(\mathbf{f_{\theta}}\) such that
\[\mathbf{f_{\theta}}:\mathbf{u}(\mathbf{\Omega}_{L},t)\rightarrow\mathbf{u}(\mathbf{\Omega}_{H},t). \tag{2}\]
We consider the solution, \(\mathbf{u}\), to be discretised with \(N_{t}\) times steps, \(t_{i}\), which are contained in the set \(\mathcal{T}=\left\{t_{i}\in[0,T]\right\}_{i=0}^{N_{t}}\). We approximate the mapping \(\mathbf{f_{\theta}}\) by a convolutional neural network parameterised by \(\mathbf{\theta}\).
## 3 Convolutional neural networks
Parameterising \(\mathbf{f_{\theta}}\) as a convolutional neural network is a design choice, which allows us to exploit two key properties. First, shift invariance, which is consequence of kernel-based operations acting on structured grids [14]. Second, partial differential equations are defined by local operators, which is a property that we wish the machine to induce naturally. Therefore, we choose convolutional neural networks, which employ an architectural paradigm that leverages spatial correlations [15], through the composition of functions
\[\mathbf{f_{\theta}}=\mathbf{f_{\theta_{Q}}^{Q}}\circ\cdots\circ h(\mathbf{f_{\theta_{1}}^ {1}})\circ h(\mathbf{f_{\theta_{0}}^{0}}), \tag{3}\]
where \(\mathbf{f_{\theta_{i}}^{i}}\) denote discrete convolutional layers; \(h\) is an element-wise nonlinear activation function, which increases the expressive power of the network [e.g., 16] and yields a universal function approximator [17]; and \(Q\) is the number of layers. The convolutional layers are responsible for the discrete
Figure 1: Super-resolution task. The model \(\mathbf{f_{\theta}}\) is responsible for mapping the low-resolution field \(\mathbf{u}(\mathbf{\Omega}_{L},t)\) to the high-resolution field \(\mathbf{u}(\mathbf{\Omega}_{H},t)\). The upsampling layers (middle white layers in the leftmost set of data) perform bi-cubic upsampling to obtain the correct spatial dimensions. Convolutional layers are parameterised as Conv2D(c_in, c_out, k), where c_in, c_out denote the number of input and output channels, respectively; and k is the spatial extent of the filter. The terms \(\mathcal{L_{O}},\mathcal{L_{R}}\) denote the observation-based loss and residual loss, respectively, the combination of which forms the objective loss \(\mathcal{L_{\theta}}\), which is used to update the network’s parameters, \(\mathbf{\theta}\). We provide an in-depth explanation of these losses in §4.
operation \((\mathbf{x},\mathbf{w},\mathbf{b})\mapsto\mathbf{w}*\mathbf{x}+\mathbf{b}\), where \(\mathbf{\theta}=(\mathbf{w},\mathbf{b})\). As the kernel operates locally around each pixel, information is leveraged from the surrounding grid cells. This makes convolutional layers an excellent choice for learning and exploiting spatial correlations, which are often important in the solutions to partial differential equations [e.g., 18; 13].
We propose a physics-constrained convolutional neural network (PC-CNN), which consists of three successive convolutional layers that are prepended by an upsampling operation tasked with increasing the spatial resolution of the input. Knowledge of the boundary conditions can be imposed through the use of padding; for instance periodic padding for periodic boundary conditions. Figure 1 provides an overview of the super-resolution task with information about the architecture employed.
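A minimal PyTorch sketch of such an architecture is shown below; the upsampling mode and periodic (circular) padding follow the description above, while the channel widths, kernel size, and activation are illustrative assumptions rather than the exact configuration of Figure 1.

```python
import torch
import torch.nn as nn

class PCCNN(nn.Module):
    """Bi-cubic upsampling followed by three convolutional layers
    (cf. Figure 1); periodic boundary conditions are imposed through
    circular padding. Widths and kernel size are illustrative."""
    def __init__(self, channels=2, hidden=64, kernel=3, kappa=7):
        super().__init__()
        self.up = nn.Upsample(scale_factor=kappa, mode="bicubic")
        pad = kernel // 2
        def conv(c_in, c_out):
            return nn.Conv2d(c_in, c_out, kernel,
                             padding=pad, padding_mode="circular")
        self.f = nn.Sequential(
            conv(channels, hidden), nn.ReLU(),
            conv(hidden, hidden), nn.ReLU(),
            conv(hidden, channels))        # linear output layer

    def forward(self, u_low):              # (batch, channels, N, N)
        return self.f(self.up(u_low))      # (batch, channels, kappa*N, kappa*N)
```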
## 4 Methodology
Having defined the mapping \(\mathbf{f_{\theta}}\) as a convolutional neural network parameterised by \(\mathbf{\theta}\), we formalise an optimisation problem to minimise the cost function \(\mathcal{L}_{\mathbf{\theta}}\)
\[\mathbf{\theta}^{*}=\operatorname*{argmin}_{\mathbf{\theta}}\mathcal{L}_{\mathbf{\theta}} \quad\text{where}\quad\mathcal{L}_{\mathbf{\theta}}=\mathcal{L}_{\mathcal{O}}+ \alpha\mathcal{L}_{\mathcal{R}}, \tag{4}\]
where \(\alpha\) is a non-negative empirical regularisation factor, which determines the relative importance of the corresponding loss terms. Given low-resolution observations \(\mathbf{u}(\mathbf{\Omega}_{L},t)\) at arbitrary times \(t\in\mathcal{T}\), we define each of the loss terms as
\[\mathcal{L}_{\mathcal{O}} =\frac{1}{N_{t}}\sum_{t\in\mathcal{T}}\lVert\mathbf{f_{\theta}}(\mathbf{ u}(\mathbf{\Omega}_{L},t))\big{|}^{\mathbf{\Omega}_{L}}-\mathbf{u}(\mathbf{\Omega}_{L},t) \rVert^{2}_{\mathbf{\Omega}_{L}}, \tag{5}\] \[\mathcal{L}_{\mathcal{R}} =\frac{1}{N_{t}}\sum_{t\in\mathcal{T}}\lVert\mathcal{R}(\mathbf{f_{ \theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t));\lambda)\rVert^{2}_{\mathbf{\Omega}_{H}},\]
where \(\mathbf{f_{\theta}}(\,\cdot\,)\big{|}^{\mathbf{\Omega}_{L}}\) denotes the corestriction of \(\mathbf{\Omega}_{H}\) on \(\mathbf{\Omega}_{L}\), and \(\lVert\,\cdot\,\rVert_{\mathbf{\Omega}}\) represents the \(\ell^{2}\)-norm over the domain \(\mathbf{\Omega}\). In order to find an optimal set of parameters \(\mathbf{\theta}^{*}\), the loss is designed to regularise predictions that do not conform to the desired output. Given sensor observations \(\mathbf{u}(\mathbf{\Omega}_{L},t)\) in the low-resolution input, the observation-based loss, \(\mathcal{L}_{\mathcal{O}}\), is defined to minimise the distance between known observations, \(\mathbf{u}(\mathbf{\Omega}_{L},t)\), and their corresponding predictions on the high-resolution grid, \(\mathbf{f_{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\).
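Because \(\mathbf{\Omega}_{L}\cap\mathbf{\Omega}_{H}=\mathbf{\Omega}_{L}\) on uniform grids, the corestriction can be implemented as strided slicing; a minimal sketch, assuming the low-resolution sensors coincide with every \(\kappa\)-th high-resolution node:

```python
def observation_loss(f_theta, u_low, kappa):
    """L_O of Eq. (5): compare the prediction, corestricted onto the
    low-resolution grid Omega_L, against the observations themselves."""
    u_high = f_theta(u_low)                   # (batch, c, M, M), M = kappa * N
    u_on_low = u_high[..., ::kappa, ::kappa]  # corestriction onto Omega_L
    return ((u_on_low - u_low) ** 2).mean()
```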
We impose the prior knowledge that we have on the dynamical system by defining the residual loss, \(\mathcal{L}_{\mathcal{R}}\), which penalises the parameters that yield predictions that violate the governing equations (1)1. In the absence of observations (i.e., data), the residual loss \(\mathcal{L}_{\mathcal{R}}\) alone does not provide a unique solution. By augmenting the residual loss with the data loss, \(\mathcal{L}_{\mathcal{O}}\) (see Eq. (4)), we ensure that network realisations conform to the observations (data) whilst fulfilling the governing equations, e.g., conservation laws. Crucially, as a consequence of the proposed training objective, we do not need the high-resolution field as a labelled dataset, which is required by conventional super-resolution methods.
Footnote 1: For example, in turbulence, the fluid mass and momentum must be in balance with mass sources, and forces, respectively. Thus, (1) represent conservation laws.
### Time-windowing of the residual loss
Computing the residual of a partial differential equation is a temporal task, as shown in Eq. (1). We employ a time-windowing approach to allow the network \(\mathbf{f_{\theta}}\) to learn the sequentiality of the data. This provides a means to compute the time-derivative \(\partial_{t}\mathbf{u}\) required for the residual loss \(\mathcal{L}_{\mathcal{R}}\). The network takes time-windowed samples as inputs, each sample consisting of \(\tau\) sequential time-steps. The time-derivative \(\partial_{t}\mathbf{u}\) is computed by applying a forward Euler approximation to the loss
\[\mathcal{L}_{\mathcal{R}} =\frac{1}{\tau N_{t}}\sum_{t\in\mathcal{T}}\sum_{n=0}^{\tau} \big{\lVert}\partial_{t}\mathbf{f_{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t+n\Delta t)) \tag{6}\] \[\qquad-\mathcal{N}(\mathbf{f_{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t+n \Delta t));\lambda)\big{\rVert}^{2}_{\mathbf{\Omega}_{H}}.\]
Using this approach, we are able to obtain the residual for the predictions in a temporally local sense; computing derivatives across discrete time windows rather than the entire simulation domain. The network \(\mathbf{f_{\theta}}\) is augmented to operate on these time windows, which vectorises the operations over the time-window. For a fixed quantity of training data, the choice of \(\tau\) introduces a trade-off between the number of input samples \(N_{t}\), and the size of each time-window \(\tau\). We take a value \(\tau=2\), which is the minimum window size for computing the residual loss \(\mathcal{L}_{\mathcal{R}}\). We find that this is sufficient for training the network whilst simultaneously maximising the number of independent samples used for training. To avoid duplication of the data in the training set, we ensure that all samples are at least \(\tau\) time-steps apart so that independent time-windows are guaranteed to not contain any overlaps.
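A minimal sketch of this windowed residual, assuming `f_theta` maps a window of low-resolution snapshots to high-resolution fields and `N_op` evaluates the differential operator \(\mathcal{N}\):

```python
def residual_loss(f_theta, u_low_window, dt, N_op):
    """Time-windowed residual of Eq. (6): a forward-Euler time
    derivative between consecutive predicted snapshots, minus the
    differential operator N evaluated on the predictions."""
    u_high = f_theta(u_low_window)             # (tau, channels, M, M)
    du_dt = (u_high[1:] - u_high[:-1]) / dt    # forward Euler over the window
    residual = du_dt - N_op(u_high[:-1])       # R(u) = du/dt - N(u)
    return (residual ** 2).mean()
```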
## 5 Chaotic and turbulent dataset
As nonlinear partial differential equations, we consider the Navier-Stokes equations, which are the expressions of the conservation of mass and momentum of fluid motion, respectively
\[\begin{split}\nabla\cdot\mathbf{u}&=0,\\ \partial_{t}\mathbf{u}+\left(\mathbf{u}\cdot\mathbf{\nabla}\right)\mathbf{u}& =-\nabla p+\nu\Delta\mathbf{u}+\mathbf{g},\end{split} \tag{7}\]
where \(p,\nu\) denote the scalar pressure field and kinematic viscosity, respectively. The flow velocity, \(\mathbf{u}\in\mathbb{R}^{m=2}\) evolves on the domain \(\Omega\in[0,2\pi)\subset\mathbb{R}^{n=2}\) with periodic boundary conditions applied on \(\partial\Omega\), and a stationary, spatially-varying sinusoidal forcing term \(\mathbf{g}(\mathbf{x})\). In this paper, we take \(\nu=\nicefrac{{1}}{{42}}\) to ensure chaotic and turbulent dynamics, and employ the forcing term \(\mathbf{g}(\mathbf{x})=\left[\sin\left(4\mathbf{x_{2}}\right),0\right]^{\top}\)[19], where \(\mathbf{x_{2}}\) is the transverse coordinate. This flow, which is also known as the Kolmogorov flow [19], provides a nonlinear and multi-scale dateset, which allows us to evaluate the quality of predictions across the turbulent spectrum.
### Differentiable pseudospectral discretisation
To produce a solution for the Kolmogorov flow, we utilise a differentiable pseudospectral spatial discretisation \(\hat{\mathbf{u}}_{k}=\hat{\mathbf{u}}(\mathbf{k},t)\) where \(\hat{\mathbf{u}}=\mathcal{F}\circ\mathbf{u}\); \(\mathcal{F}\) is the Fourier transform; and \(\mathbf{k}\in\hat{\mathbf{\Omega}}_{k}\subset\mathbb{C}^{K^{n}}\) is the spectral discretisation of the spatial domain \(\Omega\in[0,2\pi)\)[20]. Operating in the Fourier domain eliminates the continuity term [21]. The equations for the spectral representation of the Kolmogorov flow are
\[\mathcal{R}(\hat{\mathbf{u}}_{k};\lambda)=\left(\tfrac{d}{dt}+\nu|\mathbf{k}|^{2} \right)\hat{\mathbf{u}}_{k}-\hat{\mathbf{f}}_{k}+\mathbf{k}\frac{\mathbf{k}\cdot\hat{\mathbf{f}}_{ k}}{|\mathbf{k}|^{2}}-\hat{\mathbf{g}}_{k}, \tag{8}\]
with \(\hat{\mathbf{f}}_{k}=-\left(\widehat{\mathbf{u}\cdot\nabla\mathbf{u}}\right)_{k}\), where nonlinear terms are handled pseudospectrally, employing the \(\nicefrac{{2}}{{3}}\) dealiasing rule to avoid unphysical accumulation of energy at the high frequencies [21]. A solution is produced by time-integration of the dynamical system with the explicit forward-Euler scheme, choosing a time-step \(\Delta t\) that satisfies the Courant-Friedrichs-Lewy (CFL) condition. Initial conditions are generated by producing a random field scaled by the wavenumber, which retains the spatial structures of varying lengthscale in the physical domain [22]. The initial transient, which is approximately \(T_{t}=180s\), is removed from the dataset to ensure that the results are statistically stationary. (The transient time is case dependent.) For a spatial resolution \(\mathbf{\Omega}_{H}\in\mathbb{R}^{70\times 70}\), we use a spectral discretisation \(\hat{\mathbf{\Omega}}_{k}\in\mathbb{C}^{35\times 35}\) to avoid aliasing and resolve the smallest lengthscales possible.
### Residual loss in the Fourier domain
The pseudospectral discretisation provides an efficient means to compute the differential operator \(\mathcal{N}\), which allows us to evaluate the residual loss \(\mathcal{L}_{\mathcal{R}}\) in the Fourier domain
\[\begin{split}\mathcal{L}_{\mathcal{R}}=\frac{1}{\tau N_{t}}\sum_ {t\in\mathcal{T}}\sum_{n=0}^{\tau}&\|\partial_{t}\hat{\mathbf{f}}_{ \mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t+n\Delta t))\\ -\hat{\mathcal{N}}(\hat{\mathbf{f}}_{\mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega }_{L},t+n\Delta t)))\|^{2}_{\hat{\mathbf{\Omega}}_{k}},\end{split} \tag{9}\]
where \(\hat{\mathbf{f}}_{\mathbf{\theta}}=\mathcal{F}\circ\mathbf{f}_{\mathbf{\theta}}\), and \(\hat{\mathcal{N}}\) denotes the Fourier-transformed differential operator. The pseudospectral discretisation is fully differentiable, which allows us to numerically compute gradients with respect to the parameters of the network \(\mathbf{f}_{\mathbf{\theta}}\). Computing the loss \(\mathcal{L}_{\mathcal{R}}\) in the Fourier domain provides two advantages: _(i)_ periodic boundary conditions are naturally enforced, which enforces the prior knowledge in the loss calculations; and _(ii)_ gradient calculations yield spectral accuracy. In contrast, a conventional finite differences approach requires a computational stencil, the spatial extent of which places an error bound on the gradient computation. This error bound is a function of the spatial resolution of the field.
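For concreteness, the spatial part of Eq. (8), i.e. the operator balanced against \(\partial_{t}\hat{\mathbf{u}}_{k}\) in the residual, can be sketched with `torch.fft`; the wavenumber grids `kx`, `ky`, the 2/3-rule mask, and the handling of the \(\mathbf{k}=\mathbf{0}\) mode are illustrative choices of ours:

```python
import torch

def spectral_rhs(u, nu, g_hat, kx, ky, dealias):
    """Evaluate -nu|k|^2 u_hat + f_hat - k (k . f_hat)/|k|^2 + g_hat,
    the spatial part of Eq. (8), for a velocity field u of shape
    (2, M, M). kx, ky: (M, M) wavenumber grids; dealias: (M, M)
    2/3-rule mask (1.0 inside the retained ball, 0.0 outside)."""
    u_hat = torch.fft.fft2(u)
    # pseudospectral nonlinear term: f_hat = -FFT((u . grad) u)
    dudx = torch.fft.ifft2(1j * kx * u_hat).real
    dudy = torch.fft.ifft2(1j * ky * u_hat).real
    adv = u[0] * dudx + u[1] * dudy            # (u . grad) u, per component
    f_hat = -torch.fft.fft2(adv) * dealias
    k2 = kx**2 + ky**2
    k2_safe = torch.where(k2 == 0, torch.ones_like(k2), k2)  # avoid 0/0 at k = 0
    k_dot_f = kx * f_hat[0] + ky * f_hat[1]
    pressure = torch.stack((kx, ky)) * (k_dot_f / k2_safe)   # k (k . f_hat)/|k|^2
    return -nu * k2 * u_hat + f_hat - pressure + g_hat
```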
## 6 Results
First, we discuss the generation of the low-resolution data for the super-resolution task. Next, we show the ability of the PC-CNN to infer the high-resolution solution of the partial differential equation on points that are not present in the training set.
### Obtaining the low-resolution data
A high-resolution solution of the partial differential equation is generated on the grid \(\mathbf{\Omega}_{H}\) prior to extracting a low-resolution grid \(\mathbf{\Omega}_{L}\) with the downsampling factor of \(\kappa=\nicefrac{{M}}{{N}}\). Both the solver and residual loss are discretised with \(K=\nicefrac{{M}}{{2}}\) wavenumbers in the Fourier domain, which complies with the Nyquist-Shannon sampling criterion. The downsampling by \(\kappa\) is performed by extracting the value of the solution at spatial locations \(\mathbf{\Omega}_{L}\cap\mathbf{\Omega}_{H}\), i.e., a low-resolution representation of the high-resolution solution
\[\mathbf{u}(\mathbf{\Omega}_{L},t)\triangleq\mathbf{u}(\mathbf{\Omega}_{H},t)\big{|}^{\mathbf{ \Omega}_{L}}. \tag{10}\]
(In contrast, the use of a pooling method for downsampling would distort values in the low-resolution representation, which effectively modifies the high-resolution solution.)
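On uniform grids, this corestriction reduces to strided slicing, e.g.:

```python
def downsample(u_high, kappa):
    """Eq. (10): restrict the high-resolution solution to Omega_L by
    taking every kappa-th node, leaving the sampled values undistorted
    (unlike pooling-based downsampling)."""
    return u_high[..., ::kappa, ::kappa]
```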
### Comparison with standard upsampling
We showcase the results for a downsampling factor \(\kappa=7\). Results are compared with interpolating upsampling methods, i.e., bi-linear, and bi-cubic interpolation to demonstrate the ability of the method. We provide a notion of quantitative accuracy by computing the relative \(\ell^{2}\)-error between the true solution, \(\mathbf{u}(\mathbf{\Omega}_{H},t)\), and the corresponding network realisation, \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\)
\[e=\sqrt{\frac{\sum_{t\in\mathcal{T}}\lVert\mathbf{u}(\mathbf{\Omega}_{H},t)-\mathbf{f}_{ \mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\rVert_{\mathbf{\Omega}_{H}}^{2}}{\sum_{t \in\mathcal{T}}\lVert\mathbf{u}(\mathbf{\Omega}_{H},t)\rVert_{\mathbf{\Omega}_{H}}^{2}}}. \tag{11}\]
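A direct implementation of this metric, with snapshots stacked along the leading axis, is straightforward:

```python
import torch

def relative_l2_error(u_true, u_pred):
    """Relative l2 error of Eq. (11), accumulated over all snapshots."""
    return torch.sqrt(((u_true - u_pred) ** 2).sum() / (u_true ** 2).sum())
```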
Upon discarding the transient, a solution \(\mathbf{u}(\mathbf{\Omega}_{H},t)\in\mathbb{R}^{70\times 70}\) is generated by time-integration over \(12\times 10^{3}\) time-steps, with \(\Delta t=1\times 10^{-3}\). We extract the low-resolution solution \(\mathbf{u}(\mathbf{\Omega}_{L},t)\in\mathbb{R}^{10\times 10}\) as a candidate for the super-resolution task. We extract 2048 samples at random from the time-domain of the solution, each sample consisting of \(\tau=2\) consecutive time-steps. The _adam_ optimiser [23] is employed for training with a learning rate of \(3\times 10^{-4}\). We take \(\alpha=10^{3}\) as the regularisation factor for the loss \(\mathcal{L}_{\mathbf{\theta}}\) and train for a total of \(10^{3}\) epochs, which is empirically determined to provide sufficient convergence.
Figure 2 shows a snapshot of results for the streamwise component of the velocity field, comparing network realisations \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\) with the interpolated alternatives. Bi-linear and bi-cubic interpolation are denoted by \(BL(\mathbf{u}(\mathbf{\Omega}_{L},t)),BC(\mathbf{u}(\mathbf{\Omega}_{L},t))\) respectively. We observe that network realisations
Figure 2: Super-resolution results. Physics-constrained convolutional neural network (PC-CNN) compared with interpolation methods. Panel _(i)_ shows the low-resolution input, \(\mathbf{u}(\mathbf{\Omega}_{L},t)\); _(ii)_ bi-linear interpolation, \(BL(\mathbf{u}(\mathbf{\Omega}_{L},t))\); _(iii)_ bi-cubic interpolation, \(BC(\mathbf{u}(\mathbf{\Omega}_{L},t))\); _(iv)_ true high-resolution field, \(\mathbf{u}(\mathbf{\Omega}_{H},t)\); _(v)_ model prediction of the high-resolution field, \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\); and _(vi)_ energy spectra for each of the predictions.
of the high-resolution solution yield qualitatively more accurate results as compared with the interpolation. Artefacts indicative of the interpolation scheme used are present in both of the interpolated fields, whereas the network realisation captures the structures present in the high-resolution field correctly. Across the entire solution domain the model \(\mathbf{f_{\theta}}\) achieves a relative \(\ell^{2}\)-error of \(e=3.449\times 10^{-2}\) compared with \(e=2.091\times 10^{-1}\) for bi-linear interpolation and \(e=1.717\times 10^{-1}\) for bi-cubic interpolation.
Although the relative \(\ell^{2}\)-error provides a notion of predictive accuracy, it is crucial to assess the physical characteristics of the super-resolved field [24]. The energy spectrum, which is characteristic of turbulent flows, represents a multi-scale phenomenon where energy content decreases with the wavenumber. From the energy spectrum of the network's prediction, \(\mathbf{f_{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\), we gain physical insight into the multi-scale nature of the solution. Results in Figure 2 show that the energy content of the low-resolution field diverges from that of the high-resolution field, which is a consequence of spectral aliasing. Network realisations \(\mathbf{f_{\theta}}(\mathbf{u}(\mathbf{\Omega}_{L},t))\) are capable of capturing finer scales of turbulence compared to both interpolation approaches, prior to diverging from the true spectrum at \(|\mathbf{k}|=18\). The residual loss, \(\mathcal{L_{R}}\), enables the network to act beyond simple interpolation. The network is capable of de-aliasing, thereby inferring unresolved physics. Parametric studies show similar results across a range of super-resolution factors \(\kappa\) (results not shown).
## 7 Conclusions
In this paper, we introduce a method for physics-constrained super-resolution of observations in partial differential equations without access to the full high-resolution samples. First, we define the super-resolution task and introduce the physics-constrained convolutional neural network (PC-CNN), which provides the framework to compute physical residuals for spatiotemporal systems. Second, we formulate an optimisation problem by leveraging knowledge of the partial differential equations and low-resolution observations to regularise the predictions from the network. Third, we showcase the PC-CNN on a turbulent flow, which is a spatiotemporally chaotic solution of the nonlinear partial differential equations of fluid motion (Navier-Stokes). Finally, we demonstrate that the proposed PC-CNN provides more accurate physical results, both qualitatively and quantitatively, as compared to interpolating upsampling methods. This work opens opportunities for the accurate reconstruction of solutions of partial differential equations from sparse observations, as is prevalent in experimental settings, without the full set of high-resolution images.
## 8 Acknowledgements
D. Kelshaw and L. Magri acknowledge support from the UK Engineering and Physical Sciences Research Council. L. Magri gratefully acknowledges financial support from the ERC Starting Grant PhyCo 949388.
|
2304.05514 | State estimation of a carbon capture process through POD model reduction
and neural network approximation | This paper presents an efficient approach for state estimation of
post-combustion CO2 capture plants (PCCPs) by using reduced-order neural
network models. The method involves extracting lower-dimensional feature
vectors from high-dimensional operational data of the PCCP and constructing a
reduced-order process model using proper orthogonal decomposition (POD).
Multi-layer perceptron (MLP) neural networks capture the dominant dynamics of
the process and train the network parameters with low-dimensional data obtained
from open-loop simulations. The proposed POD-MLP model can be used as the basis
for estimating the states of PCCPs at a significantly decreased computational
cost. For state estimation, a reduced-order extended Kalman filtering (EKF)
scheme based on the POD-MLP model is developed. Our simulations demonstrate
that the proposed POD-MLP modeling approach reduces computational complexity
compared to the POD-only model for nonlinear systems. Additionally, the
POD-MLP-EKF algorithm can accurately reconstruct the full state information of
PCCPs while significantly improving computational efficiency compared to the
EKF based on the original PCCP model. | Siyu Liu, Xunyuan Yin, Jinfeng Liu | 2023-04-11T21:43:26Z | http://arxiv.org/abs/2304.05514v1 | State estimation of a carbon capture process through POD model reduction and neural network approximation
###### Abstract
This paper presents an efficient approach for state estimation of post-combustion CO\({}_{2}\) capture plants (PCCPs) by using reduced-order neural network models. The method involves extracting lower-dimensional feature vectors from high-dimensional operational data of the PCCP and constructing a reduced-order process model using proper orthogonal decomposition (POD). Multi-layer perceptron (MLP) neural networks capture the dominant dynamics of the process and train the network parameters with low-dimensional data obtained from open-loop simulations. The proposed POD-MLP model can be used as the basis for estimating the states of PCCPs at a significantly decreased computational cost. For state estimation, a reduced-order extended Kalman filtering (EKF) scheme based on the POD-MLP model is developed. Our simulations demonstrate that the proposed POD-MLP modeling approach reduces computational complexity compared to the POD-only model for nonlinear systems. Additionally, the POD-MLP-EKF algorithm can accurately reconstruct the full state information of PCCPs while significantly improving computational efficiency compared to the EKF based on the original PCCP model.
## I Introduction
In recent years, post-combustion CO\({}_{2}\) capture plants (PCCPs) have gained significant attention due to their potential to reduce greenhouse gas emissions and mitigate global warming. PCCPs are commonly used in power plants and carbon-intensive industrial processes to separate CO\({}_{2}\) from the flue gas [1]. The operational safety, carbon capture efficiency and economic cost of PCCPs are highly dependent on the performance of the advanced control systems used for regulating the process operations [2].
Real-time information of the key quality variables of the PCCP is essential for the advanced control system to make the most appropriate decisions for safe and efficient process operation. However, measuring all quality variables online through deploying hardware sensors is unrealistic. Therefore, it is crucial to exploit state estimation to reconstruct the full state information for PCCPs. Unfortunately, results on state estimation of PCCPs have been limited. In [3], we made an initial attempt at estimating the states of the absorber of a PCCP by developing a distributed moving horizon estimation scheme. However, the other key quality variables associated with the desorption unit and other physical units are not addressed within this framework. Wang et al. presented a robust soft sensor using a neural network and moving horizon estimator to monitor key operating parameters in the carbon capture process [4]. In the context of nonlinear state estimation, there have been some algorithms that have the potential to be leveraged for state estimation of PCCPs, e.g., extended Kalman filtering [5], moving horizon estimation methods [6], and particle filters [7]. However, most nonlinear estimation algorithms require accurate first-principles dynamic models of the underlying nonlinear processes. Due to the large scales and complex structures of PCCPs, it can be challenging to conduct first-principles modeling. Additionally, even if a first-principles model is obtained, this type of model, which involves partial differential equations describing the dynamical behaviors of the absorption and desorption columns, will be computationally expensive to simulate, especially when optimization-based estimation approaches are used.
Data-driven modeling using neural networks has been widely used as an alternative to first-principles modeling for various nonlinear processes. For example, Jeon et al. utilized neural networks (NNs) to describe the dynamics of a chemical reactor, which was further used to optimize its control performance [8]. Cheng et al. employed NNs to model the ship motion and improved its navigation accuracy [9]. Zhao et al. developed reduced-order recurrent neural networks that capture the dominant dynamics of nonlinear systems using an autoencoder [10]. However, these methods may face certain limitations when considering possible applications to PCCPs. Specifically, the complexity of the NN model, including the number of neurons and layers, and the computational cost of training, may increase exponentially with the dimensionality of the input and output data, commonly referred to as the "curse of dimensionality".
To address these limitations associated with state estimation of PCCPs, we propose a solution that combines data-driven modeling using NNs with model reduction techniques. Proper orthogonal decomposition (POD) has been widely adopted in various engineering fields for reducing the dimensionality of high-dimensional data sets while preserving dominant patterns or features of the data. In the context of control systems, POD can be used to satisfactorily approximate the dynamics of a large-scale nonlinear process in a lower-dimensional state space [11, 12]. POD holds the promise to lower the dimensionality of the originally high-dimensional state-space model for PCCP. Specifically, our objective is to leverage POD to create a reduced-order machine learning-based model with lower structural complexity. This model can then facilitate reductions in the
computational cost for model training and implementation of state estimation based on the reduced-order model.
Motivated by the observations above, we propose a neural network-based state estimation approach for PCCPs using POD reduced-order models. Specifically, we normalize the data before POD and use the POD approach to obtain a reduced-order model that accurately approximates the dynamics of the PCCP. Then, we train a multi-layer perceptron (MLP) neural network to capture the dominant dynamics of the POD reduced-order model from the low-dimensional data. The resulting reduced-order POD-based MLP (POD-MLP) model is used as the basis of state estimation. We develop a reduced-order extended Kalman filtering (EKF) algorithm based on the POD-MLP model to estimate the states of the original process. Our approach can effectively reduce the complexity of the NN model and the computational workload required for training while preserving high modeling accuracy. This approach provides a promising solution to state estimation of PCCPs and can potentially be applied to other complex high-dimensional systems for efficient state estimation through bypassing the use of the original higher-dimensional dynamic process model. Our proposed framework offers an advantage in that it can be employed when only data is available. Specifically, in the absence of a physical model, we can still leverage the proposed method to build a reduced-order neural network and perform efficient state estimation.
## II Model Description
Figure 1 illustrates the post-combustion capture plant considered in this paper. This diagram shows the four key components of the PCCP, which include the absorption and desorption columns, lean-rich heat exchanger (LRHE), and the reboiler. The flue gas, which contains a high concentration of CO\({}_{2}\), is introduced at the bottom of the absorber from the power plant and is then mixed with a lean solvent having low CO\({}_{2}\) levels. The 5M Monoethanolamine (MEA) is used as the solvent in this study. The treated flue gas with a reduced amount of CO\({}_{2}\) leaves the absorption column, while the rich solvent with a high concentration of CO\({}_{2}\) is heated via the heat exchanger by exchanging heat with the lean solvent coming from the reboiler. The rich solvent is then fed into the top of the desorption column, where it is heated through contact with the hot vapor from the reboiler. In the desorption column, the CO\({}_{2}\) is stripped from the rich solvent, which is then recycled back to the absorber. The discharged CO\({}_{2}\) gas, with a high concentration of CO\({}_{2}\) (90-99\(\%\)), is obtained from the desorption column. The details of the PCCP model are briefly described in the following [13, 14]:
\[\frac{\partial C_{L}(i)}{\partial t} =\frac{F_{L}}{S_{c}}\frac{\partial C_{L}(i)}{\partial l}+(N(i)a^{ I}), \tag{1a}\] \[\frac{\partial C_{G}(i)}{\partial t} =-\frac{F_{G}}{S_{c}}\frac{\partial C_{G}(i)}{\partial l}-(N(i)a^ {I}),\] (1b) \[\frac{\partial T_{L}}{\partial t} =\frac{F_{L}}{S_{c}}\frac{\partial T_{L}}{\partial l}+\frac{(Q_{L }a^{I})}{\sum_{i=1}^{n}C_{L}(i)C_{p,i}}, \tag{1c}\]
\[\frac{\partial T_{G}}{\partial t} =-\frac{F_{G}}{S_{c}}\frac{\partial T_{G}}{\partial l}+\frac{(Q_ {G}a^{I})}{\sum_{i=1}^{n}C_{G}(i)C_{p,i}}, \tag{1d}\] \[\frac{dT_{tu}}{dt} =\frac{\dot{V}_{tu}}{V_{tu}}(T_{tu,in}-T_{tu,out})+\frac{\dot{Q}} {C_{p_{tu}}\rho_{tu}V_{tu}},\] (1e) \[\frac{dT_{sh}}{dt} =\frac{\dot{V}_{sh}}{V_{sh}}(T_{sh,in}-T_{sh,out})+\frac{\dot{Q}} {C_{p_{sh}}\rho_{sh}V_{sh}},\] (1f) \[\rho C_{p}V\frac{dT_{reb}}{dt}=f_{in}H_{in}-f_{V}H_{V,out}-f_{L}H _{L,out}+Q_{reb}. \tag{1g}\]
The dynamic models for the absorption column and desorption column are described by the partial differential equations (1a)-(1d), where \(i=CO_{2},MEA,H_{2}O,N_{2}\), and the subscripts \(L\) and \(G\) denote liquid and gas phases, respectively. The dependent variables vary with time \(t\) and axial position \(l\) of the column. It is assumed that each stage in the two columns is well mixed. Their dynamic models are similar except for a few details like the direction of reactions, temperature, and reaction rate constants. The energy balance equations in (1e)-(1f) represent the dynamics of the lean-rich heat exchanger, where \(\dot{V}(\mathrm{m}^{3}/\mathrm{s})\) and \(\dot{Q}(\mathrm{kJ}/\mathrm{s})\) represent the volumetric flow and heat transfer rate, respectively, the subscripts \(tu\), \(sh\), \(in\) and \(out\) denote the tube-side, shell-side, inlets and outlets of the heat exchanger, respectively. It is assumed that the mass inside the heat exchanger remains constant. Equation (1g) is the energy balance equation of the reboiler, where \(T_{reb}(\mathrm{K})\) represents the temperature of the reboiler, the subscripts \(in\), \(out\), \(V\), and \(L\) denote inlet, outlet, vapour and liquid, respectively, and \(Q_{reb}(\mathrm{KJ}/\mathrm{s})\) is the heat input. The definitions of other variables and parameters of the PCCP can be found in Table I. Physical property calculations of gas and liquid phases are necessary for the model development, and they are estimated from seven nonlinear algebraic correlations. Details of these calculations are not included in this work but can be found in [13, 14].
The PCCP model consists of partial differential equations for the two columns, and ordinary differential equations for the heat exchanger and reboiler, as well as some algebraic equations for parameter calculations. As the variables in the columns exhibit temporal and spatial distributions, the partial differential equations are discretized using the method
Fig. 1: A schematic of the CO\({}_{2}\) capture plant.
outlined in [13] to convert them into ordinary differential equations, with the column length divided into five stages. Therefore, the model presented in (1) is expressed as a system of differential-algebraic equations (DAEs):
\[\mathbf{x}(k+1)=\mathbf{F}(\mathbf{x}(k),\mathbf{a}(k),\mathbf{u}(k))+\mathbf{w}(k), \tag{2a}\] \[\mathbf{G}(\mathbf{x}(k),\mathbf{a}(k),\mathbf{u}(k))=\mathbf{0},\] (2b) \[\mathbf{y}(k)=\mathbf{H}(\mathbf{x}(k),\mathbf{u}(k))+\mathbf{v}(k), \tag{2c}\]
where \(\mathbf{x}(k)\in\mathbb{R}^{103}\) is the state vector, and the definitions of the state variables are given in Table II, \(\mathbf{a}(k)\in\mathbb{R}^{7}\) is the algebraic state vector, \(\mathbf{u}(k)=[F_{L},Q_{reb},F_{G}]\in\mathbb{R}^{3}\) denotes the input vector: solvent flow rate in L/s, reboiler heat in KJ/s, and flue gas flow rate in m\({}^{3}\)/s, \(\mathbf{w}(k)\) is the process noise, and \(\mathbf{v}(k)\) is the measurement noise.
## III POD and Its Application to PCCP
In this section, we employ the proper orthogonal decomposition (POD) to derive a reduced-order model that approximates the dynamics of the PCCP, which originally has 103 state variables. By projecting the high-dimensional state variables onto a lower-dimensional subspace that captures the dominant modes of variability, we obtain a reduced-order model that accurately captures the essential dynamics of the PCCP while significantly reducing its complexity. This is accomplished by computing the singular value decomposition (SVD) of the data matrix, which yields a set of orthonormal basis vectors that represent the most significant patterns of variability in the data.
For general nonlinear systems described by (2), we obtain a state trajectory by capturing and sampling the system's response to a typical input trajectory at fixed time intervals \(\delta\). Then, we sample the resulting state trajectory to construct a matrix of process states from time \(0\) to \(N\), denoted as:
\[\mathbf{\chi}=[\mathbf{x}(0)\ \mathbf{x}(1)\ \ldots\ \mathbf{x}(N)]\in\mathbb{R}^{n\times(N+ 1)}, \tag{3}\]
where the snapshot matrix \(\mathbf{\chi}\) is composed of the actual state at each sampling interval, the number of state variables is denoted as \(n\), and the number of sampling intervals is represented by \(N\). To ensure a sufficient number of samples, we require \(N\) to be much larger than \(n\).
For PCCP, the magnitudes of different states vary greatly. To ensure that the POD reduction method is not biased towards states with larger magnitudes in PCCP, each state variable \(x_{i}\) (where \(i=1,2,\ldots,103\)) in the data matrix \(\mathbf{\chi}\) is normalized using (4) prior to performing SVD decomposition:
\[x_{i,norm}=\frac{x_{i}-x_{i,min}}{x_{i,max}-x_{i,min}}. \tag{4}\]
This normalization transforms each state variable \(x_{i}\) to \(x_{i,norm}\), where \(x_{i,min}\) and \(x_{i,max}\) are the minimum and maximum values of \(x_{i}\) in the original dataset, respectively. The resulting normalized matrix \(\mathbf{\chi}_{norm}\) constructed by \(x_{i,norm}\) has all states with magnitudes between 0 and 1. SVD is then performed on the normalized matrix \(\mathbf{\chi}_{norm}\) as shown in the following,
\[\mathbf{\chi}_{norm}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\mathrm{ T}}, \tag{5}\]
where \(\mathbf{U}\in\mathbb{R}^{n\times n}\) and \(\mathbf{V}\in\mathbb{R}^{(N+1)\times(N+1)}\) are orthogonal matrices, and the rectangular matrix \(\mathbf{\Sigma}\in\mathbb{R}^{n\times(N+1)}\) has non-negative real values on its main diagonal. The diagonal entries \(\sigma_{i}\), \(i\in\{1,2,\ldots,n\}\), are the singular values of \(\mathbf{\chi}_{norm}\), sorted in descending order along the main diagonal of \(\mathbf{\Sigma}\).
To construct a reduced-order model, we select a positive integer \(r\) that is smaller than the number of states (103), and truncate \(\mathbf{\Sigma}\) at the \(r\)th column and row to form the reduced-order matrix \(\mathbf{\Sigma}_{r}\in\mathbb{R}^{r\times r}\) containing the first \(r\) singular values \(\sigma_{i}\). Accordingly, we select the first \(r\) columns of \(\mathbf{U}\) and the first \(r\) rows of \(\mathbf{V}^{\mathrm{T}}\) to form the matrices \(\mathbf{U}_{r}\) and \(\mathbf{V}_{r}^{\mathrm{T}}\), respectively. Using these matrices, we obtain a reduced-order approximation of the normalized process data, given by
\[\mathbf{\chi}_{norm}\approx\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{\mathrm{ T}}. \tag{6}\]
We define \(\mathbf{\xi}\in\mathbb{R}^{r}\) as the state vector of the reduced-order model, and set \(\mathbf{\xi}(k):=\mathbf{U}_{r}^{\mathrm{T}}\mathbf{x}(k)\). Using the truncated SVD matrices, the original nonlinear model in (2) can be expressed as a reduced-order model in state-space form:
\[\mathbf{\xi}(k+1)=\mathbf{U}_{r}^{\mathrm{ T}}\mathbf{F}(\mathbf{U}_{r}\mathbf{\xi}(k),\mathbf{a}(k),\mathbf{u}(k))+\mathbf{U}_{r}^{ \mathrm{ T}}\mathbf{w}(k), \tag{7a}\] \[\mathbf{G}(\mathbf{U}_{r}\mathbf{\xi}(k),\mathbf{a}(k),\mathbf{u}(k))=\mathbf{0},\] (7b) \[\mathbf{y}(k)=\mathbf{H}(\mathbf{U}_{r}\mathbf{\xi}(k),\mathbf{u}(k))+\mathbf{v}(k). \tag{7c}\]
The evolution of \(\mathbf{\xi}(k)\) in the reduced-order model can be used to approximate the actual state trajectory of the original nonlinear process through the mapping \(\mathbf{x}(k)\approx\mathbf{U}_{r}\mathbf{\xi}(k)\).
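The following minimal NumPy sketch walks through (4)–(6) and the back-mapping \(\mathbf{x}\approx\mathbf{U}_{r}\mathbf{\xi}\) on a random placeholder snapshot matrix; the data and variable names are illustrative assumptions, not the actual PCCP trajectories.

```python
import numpy as np

# Placeholder snapshot matrix; in practice chi holds the sampled PCCP states
n, N = 103, 12000
rng = np.random.default_rng(0)
chi = rng.random((n, N + 1))

# Min-max normalization of each state (row), as in (4)
x_min = chi.min(axis=1, keepdims=True)
x_max = chi.max(axis=1, keepdims=True)
chi_norm = (chi - x_min) / (x_max - x_min)

# SVD as in (5); full_matrices=False returns the thin factorization
U, S, Vt = np.linalg.svd(chi_norm, full_matrices=False)

# Truncation at order r, as in (6)
r = 30
U_r = U[:, :r]

# Reduced coordinates and back-mapping: xi = U_r^T x_norm,  x_norm ~ U_r xi
xi = U_r.T @ chi_norm
print(np.linalg.norm(chi_norm - U_r @ xi))  # truncation error
```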
**Remark 1**: _In this section, we improve the accuracy of the model approximation by normalizing the data prior to applying the POD method to obtain a reduced-order model. The effectiveness of the normalization for POD will also be illustrated through simulations in Section VI._
## IV Approximating Reduced-Order Model With MLP Networks
The POD technique is commonly applied in linear systems to decrease computational costs by reducing the dimensionality of the problem. However, it does not yield similar advantages for nonlinear systems due to the challenge of explicitly expressing \(\mathbf{U}_{r}^{\mathrm{T}}\mathbf{F}(\mathbf{U}_{r}\mathbf{\xi},\mathbf{a},\mathbf{u})\) in terms of the reduced basis \(\mathbf{U}_{r}\). Consequently, evaluating the reduced-order model may require more time than evaluating the original nonlinear function \(\mathbf{F}\). Different approaches have been adopted to address this issue. For instance, a linear parameter-varying model was utilized to approximate the reduced-order model; however, the benefits were found to be insignificant [15].
To address this issue, we present a method to speed up the evolution of reduced-order models for nonlinear systems such as (2). The method employs a multi-layer perceptron (MLP) neural network to fit the reduced-order model and hence decrease the computation time. The MLP neural network model of \(\mathbf{\xi}\) is given by:
\[\mathbf{\xi}(k+1)=\mathbf{f}_{mlp}(\mathbf{\xi}(k),\mathbf{u}(k))+\mathbf{w}_{r}(k), \tag{8}\]
where the vector function \(\mathbf{f}_{mlp}\in\mathbb{R}^{r}\) approximates the dynamic behavior of \(\mathbf{\xi}\) in (7a), and \(\mathbf{w}_{r}\in\mathbb{R}^{r}\) denotes the process noise and model error. Consequently, we do not have to evaluate the vector function \(\mathbf{F}:\mathbb{R}^{103}\rightarrow\mathbb{R}^{103}\) of the full-order model, leading to significant time savings.
MLP models are a type of artificial neural network widely used to approximate complex, nonlinear mappings. An MLP typically consists of an input layer, one or more hidden layers, and an output layer. Each layer contains multiple fully connected neurons, which are connected to the next layer by weights. In supervised learning, the weights are adjusted to approximate the target values. The numbers of neurons in the input and output layers are determined by the input and output variables, respectively. The computation time required to evaluate an MLP is short, as only a few matrix multiplications, vector additions, and function evaluations are necessary. Given these characteristics, an MLP is a suitable choice for the vector function \(\mathbf{f}_{mlp}\) in (8), which approximates the reduced-order dynamics obtained after POD model reduction. The basic formulation of an MLP is given below:
\[\mathbf{z}^{(l)}=\sigma_{h}(\mathbf{w}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}),\quad\mathbf{y}=\mathbf{w}^{o}\mathbf{z}^{(L)}+\mathbf{b}^{o}, \tag{9}\]

where \(\mathbf{z}^{(l)}\) denotes the output vector of the \(l\)-th hidden layer (\(l=1,\ldots,L\)), obtained by applying an activation function \(\sigma_{h}\) to the weighted sum of the previous layer's output \(\mathbf{z}^{(l-1)}\) plus the bias vector \(\mathbf{b}^{(l)}\). The weight matrix of the \(l\)-th hidden layer is denoted by \(\mathbf{w}^{(l)}\). The vector \(\mathbf{z}^{(0)}\) is the MLP input, and the output vector \(\mathbf{y}\) is obtained by applying the weight matrix \(\mathbf{w}^{o}\) to the output of the last hidden layer \(\mathbf{z}^{(L)}\) and adding the bias vector \(\mathbf{b}^{o}\). It is worth noting that the MLP output layer is typically linear, while the hidden-layer activation function can be chosen from various options based on the specific problem being solved.
The reduced-order PCCP model is enhanced by utilizing an MLP network, where the input is \(\mathbf{\xi}_{u}:=[\mathbf{u}^{\mathrm{T}},\mathbf{\xi}^{\mathrm{T}}]^{\mathrm{T}}\in\mathbb{R}^{3+r}\), and the output is \(\mathbf{\hat{\xi}}\in\mathbb{R}^{r}\). The input layer has \(3+r\) neurons, and the output layer has \(r\) neurons. The MLP model is trained to minimize the mean-squared-error (MSE) between the predicted output \(\mathbf{\hat{\xi}}\) and the actual output \(\mathbf{\xi}\):
\[L=MSE(\mathbf{\xi},\mathbf{\hat{\xi}}). \tag{10}\]
The model structure of the proposed POD-based MLP (POD-MLP) model is shown in Figure 2.
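A minimal PyTorch sketch of the POD-MLP model (8) and its MSE training objective (10) is given below; the layer sizes follow Section VI (three hidden layers of 128 Tanh units), while the dummy data and the Adam optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

r = 30                                    # reduced order
class PODMLP(nn.Module):
    def __init__(self, r, n_inputs=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs + r, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, r),         # linear output layer
        )
    def forward(self, xi, u):
        # Predicts xi(k+1) from (xi(k), u(k)), as in (8)
        return self.net(torch.cat([u, xi], dim=-1))

model = PODMLP(r)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                    # the loss in (10)

# One training step on a dummy batch (xi_k, u_k) -> xi_{k+1}
xi_k, u_k = torch.randn(64, r), torch.randn(64, 3)
xi_next = torch.randn(64, r)              # would come from (7a) in practice
opt.zero_grad()
loss = loss_fn(model(xi_k, u_k), xi_next)
loss.backward()
opt.step()
```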
**Remark 2**: _As the system dimension increases, the number of training samples required to construct an MLP model that accurately approximates the system grows exponentially. Therefore, directly training an MLP on a large-scale nonlinear model can be computationally expensive. Hence, performing POD dimension reduction on the high-dimensional PCCP model and then training the MLP on the reduced-order model is a highly practical approach._
**Remark 3**: _The POD-MLP model not only reduces the order of the system, but also eliminates the need for solving the DAEs in the original PCCP model. It achieves this by extracting the dynamic information of the algebraic state variable \(\mathbf{a}\) from the data, and excluding \(\mathbf{a}\) from the model._
## V EKF Using the POD-MLP Model
In this section, we develop an extended Kalman filter (EKF) based on the POD-MLP model (the POD-MLP-EKF algorithm for short) to estimate the actual process states. The POD-MLP model is summarized in the following:
\[\mathbf{\xi}(k+1)=\mathbf{f}_{mlp}(\mathbf{\xi}(k),\mathbf{u}(k))+\mathbf{w}_{r}(k), \tag{11a}\] \[\mathbf{y}(k)=\mathbf{H}(\mathbf{U}_{r}\mathbf{\xi}(k),\mathbf{u}(k))+\mathbf{v}(k). \tag{11b}\]
Assuming that \(\mathbf{w}_{r}(k)\) and \(\mathbf{v}(k)\) are two mutually uncorrelated zero-mean Gaussian noise sequences, we further assume that they have covariance matrices \(\mathbf{Q}_{r}\) and \(\mathbf{R}_{r}\), respectively.
Fig. 2: The diagram of the POD-MLP model.
Based on the above preparation, the POD-MLP-EKF algorithm is designed in the following two steps:
Step 1: Prediction step:
\[\hat{\mathbf{\xi}}(k+1|k)=\mathbf{f}_{mlp}(\hat{\mathbf{\xi}}(k|k),\mathbf{u}(k)), \tag{12a}\] \[\mathbf{P}(k+1|k)=\mathbf{A}(k)\mathbf{P}(k|k)\mathbf{A}^{\intercal}(k)+\mathbf{Q}_{ r}, \tag{12b}\]
where \(\hat{\mathbf{\xi}}(k+1|k)\) is the prediction of the system state at time \(k+1\) based on the current state estimate \(\hat{\mathbf{\xi}}(k|k)\) and the input \(\mathbf{u}(k)\), and \(\mathbf{P}(k+1|k)\) contains a priori error covariance information, incorporating the prediction error and the uncertainty associated with the system dynamics through the covariance matrix \(\mathbf{Q}_{r}\). The matrix \(\mathbf{A}(k)=\frac{\partial\mathbf{f}_{mlp}(\hat{\mathbf{\xi}},\mathbf{u})}{\partial\mathbf{\xi}}|_{\mathbf{\xi}=\hat{\mathbf{\xi}}(k|k)}\) is the Jacobian matrix of the MLP model with respect to the state vector \(\mathbf{\xi}\) evaluated at the current state estimate \(\hat{\mathbf{\xi}}(k|k)\).
Step 2: Update step:
\[\mathbf{K}(k+1)=\mathbf{P}(k+1|k)\mathbf{C}^{\intercal}(k)\big{[}\mathbf{C}(k)\mathbf{P}(k+1|k)\mathbf{C}^{\intercal}(k)+\mathbf{R}_{r}\big{]}^{-1}, \tag{13a}\] \[\hat{\mathbf{\xi}}(k+1|k+1)=\hat{\mathbf{\xi}}(k+1|k)+\mathbf{K}(k+1)[\mathbf{y}(k+1)\] \[-\mathbf{H}(\mathbf{U}_{r}\hat{\mathbf{\xi}}(k+1|k),\mathbf{u}(k))],\] (13b) \[\mathbf{P}(k+1|k+1)=[\mathbf{I}-\mathbf{K}(k+1)\mathbf{C}(k)]\mathbf{P}(k+1|k), \tag{13c}\]
where the correction gain \(\mathbf{K}(k+1)\) is computed based on the a priori estimation error covariance \(\mathbf{P}(k+1|k)\), the measurement error covariance \(\mathbf{R}_{r}\), and the observation matrix \(\mathbf{C}(k)\), which maps the predicted state \(\hat{\mathbf{\xi}}(k+1|k)\) to the measurement space. The state estimate is then updated to \(\hat{\mathbf{\xi}}(k+1|k+1)\) using the correction gain and the measurement innovation \(y(k+1)-\mathbf{H}(\mathbf{U}_{r}\hat{\mathbf{\xi}}(k+1|k),\mathbf{u}(k))\), which represents the difference between the actual measurement and the predicted measurement based on the predicted state. Finally, the a posteriori estimation error covariance matrix \(\mathbf{P}(k+1|k+1)\) is computed based on the updated state estimate and the correction gain, which reflects the reduced uncertainty in the estimated state after incorporating the measurement information.
Then, we can obtain the state estimates of the actual PCCP states, denoted by \(\hat{\mathbf{x}}\), by utilizing the reduced-order state estimate \(\hat{\mathbf{\xi}}(k+1|k+1)\) and the linear mapping \(\mathbf{U}_{r}\), as follows:
\[\hat{\mathbf{x}}(k+1)=\mathbf{U}_{r}\hat{\mathbf{\xi}}(k+1|k+1). \tag{14}\]
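The sketch below implements one recursion of (12)–(14) in NumPy. The reduced dynamics and measurement map are dummy stand-ins for the trained MLP and \(\mathbf{H}\), and the Jacobians \(\mathbf{A}(k)\) and \(\mathbf{C}(k)\) are approximated by central finite differences, which is one common way to linearize a learned model (an assumption, not necessarily the implementation used here).

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    # Central finite-difference Jacobian of f at x
    J = np.zeros((f(x).size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size); dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

r, ny = 30, 5
f_mlp = lambda xi, u: 0.95 * xi               # dummy reduced dynamics
h = lambda xi, u: xi[:ny]                     # dummy measurement map H(U_r xi, u)
Qr, Rr = 1e-4 * np.eye(r), 1e-4 * np.eye(ny)

xi_hat, P = np.zeros(r), np.eye(r)
u, y = np.zeros(3), np.zeros(ny)              # current input and measurement

# Prediction step (12)
A = jacobian_fd(lambda x: f_mlp(x, u), xi_hat)
xi_pred = f_mlp(xi_hat, u)
P_pred = A @ P @ A.T + Qr

# Update step (13)
C = jacobian_fd(lambda x: h(x, u), xi_pred)
K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Rr)
xi_hat = xi_pred + K @ (y - h(xi_pred, u))
P = (np.eye(r) - K @ C) @ P_pred
# Full-state estimate (14): x_hat = U_r @ xi_hat
```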
## VI Simulation Results
To perform the model order reduction, the input vector \(\mathbf{u}(k)=[F_{L},Q_{reb},F_{G}]\) is used to excite the PCCP, with the inputs constrained as 0.48 L/s \(\leq F_{L}\leq\) 0.66 L/s, 0.14 kJ/s \(\leq Q_{reb}\leq\) 0.20 kJ/s, and 0.8 m\({}^{3}\)/s \(\leq F_{G}\leq\) 1.2 m\({}^{3}\)/s. The dynamic model of the PCCP is discretized at a sampling interval of \(\Delta=30\) s. Pseudo-random multi-level signals (PRMSs) are commonly used as excitation signals for identifying nonlinear systems; the ten-level PRMS used for the PCCP is shown in Figure 3 (samples 1 to 3000). The switching times are randomly chosen from a uniform distribution between 900 s and 3000 s (30 to 100 sampling intervals). The system states are sampled over a duration of 100 hours to construct the snapshot matrix \(\mathbf{\chi}\) for POD reduction. With 12,000 sampling intervals, the requirement \(N\gg n\) is satisfied.
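A sketch of a PRMS generator matching this description is shown below; the ten levels are taken as equally spaced within the input bounds, which is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def prms(low, high, n_samples, levels=10, min_hold=30, max_hold=100):
    # Piecewise-constant signal over `levels` values, held for a random
    # number of sampling intervals between min_hold and max_hold
    values = np.linspace(low, high, levels)
    sig = np.empty(n_samples)
    k = 0
    while k < n_samples:
        hold = rng.integers(min_hold, max_hold + 1)
        sig[k:k + hold] = rng.choice(values)
        k += hold
    return sig

F_L = prms(0.48, 0.66, 12000)     # solvent flow rate, L/s
Q_reb = prms(0.14, 0.20, 12000)   # reboiler heat, kJ/s
F_G = prms(0.8, 1.2, 12000)       # flue gas flow rate, m^3/s
```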
Next, we apply the SVD to the matrix \(\mathbf{\chi}\), which has dimensions 103 by 12,000. This yields an orthogonal matrix \(\mathbf{U}\) that can be used for the coordinate transformation. To evaluate the accuracy of the reduced-order models, we use the root-mean-square error (RMSE), defined as follows:
\[\mathrm{RMSE}:=\sqrt{\frac{\sum_{j=0}^{N}\sum_{i=1}^{103}(x_{i,norm}(j)-\hat{x }_{i,norm}(j))^{2}}{N}},\]
where \(\hat{x}_{i,norm}:=(\mathbf{U}_{r}\mathbf{\xi})_{i}\) is the \(i\)th approximated normalized state obtained from a reduced-order model. To validate the accuracy of a reduced-order model, we use input trajectories that are different from the ones used in the POD model reduction. An additional 600 samples are used for validation. Based on the actual state trajectories and the reduced-order model state trajectories, the RMSE is calculated for each model. The values of log(RMSE) at different orders \(r=20,\ldots,90\) under POD with normalization and POD without normalization are shown in Figure 4. The degree of model mismatch increases as the model order decreases in both cases. Moreover, the RMSE of POD with normalization is smaller than that of POD without normalization. The reason is that, without normalization of the data matrix \(\mathbf{\chi}\), states with small numerical values are approximated inaccurately after reduction, since the approximation error is distributed over the states in absolute terms; normalizing \(\mathbf{\chi}\) instead expresses the approximation error in relative terms for each state. Therefore, the RMSE values of the reduced-order models obtained from POD with normalization are smaller at every order than those obtained from POD without normalization. This is further demonstrated in Fig. 5, which shows the trajectories of selected states under the original model and the order-30 reduced models obtained by POD with and without normalization. From the figure, it can be seen that for states with large numerical values (\(x_{17}\), \(x_{101}\)), both methods provide very good approximations, but for states with small numerical values or small fluctuations (\(x_{3}\), \(x_{11}\), \(x_{26}\), \(x_{31}\)), POD with normalization is significantly better than POD without normalization.

Fig. 3: The three input signals for generating the system state trajectories used in model reduction.
Next, we evaluate the open-loop prediction performance of the POD-MLP model. Using the PRMS input signal, we generate 100,000 samples for each state by simulating the first-principles model and projecting the states with \(\mathbf{U}_{r}\). These data are split into training (70\(\%\)), validation (20\(\%\)), and testing (10\(\%\)) datasets. The developed POD-MLP model consists of three hidden layers with 128 neurons each; the input of the POD-MLP is the normalized 33-dimensional vector \(\mathbf{\xi}_{u}\), and the output is the normalized state vector \(\mathbf{\xi}\). The hidden layers use the Tanh activation function, and a linear activation function is used in the output layer. Figure 6 demonstrates the testing performance of the POD-MLP model in multi-step open-loop prediction of the actual state trajectory of the PCCP; the fit is accurate.
In the following analysis, we focus on the state estimation of the PCCP using the POD-MLP-EKF algorithm. The PCCP is subject to process disturbances, and the output measurements are corrupted by random noise. Specifically, each process disturbance sequence associated with the \(i\)th state \(x_{i}\) is generated following a normal distribution with zero mean and a standard deviation of 0.01\(x_{i,s}\), where \(x_{i,s}\) is the steady-state value of \(x_{i}\). Random noise is added to each measurement \(y_{i}\) as Gaussian white noise with zero mean and a standard deviation of 0.01\(y_{i,s}\), where \(y_{i,s}\) is the value of \(y_{i}\) at steady state. As a result, the covariance matrices of the process noise and measurement noise are \(\mathbf{Q}=\mathrm{diag}((0.01\mathbf{x}_{s})^{2})\) and \(\mathbf{R}=\mathrm{diag}((0.01\mathbf{y}_{s})^{2})\), respectively. The tuning parameters in the POD-MLP-EKF are \(\mathbf{Q}_{r}=\mathbf{U}_{r}^{\mathrm{T}}\mathbf{Q}\mathbf{U}_{r}\) and \(\mathbf{R}_{r}=\mathbf{U}_{r}^{\mathrm{T}}\mathbf{R}\mathbf{U}_{r}\). The initial guess for the normalized states is set to \(0.5\cdot\mathbf{1}_{103}\). Figure 7 shows some of the state estimates and the actual states. The proposed estimation scheme provides accurate state estimates.
We compare the computational efficiency of the proposed POD-MLP-EKF algorithm based on model reduction and neural network, a centralized EKF design directly based on the PCCP model, and an EKF for POD model, denoted as the POD-EKF. Specifically, we evaluate the average computation time for 600 steps required by the three algorithms, which is 125 s, 127 s, and 6.4 s, respectively. The results demonstrate that the combination of POD reduction and neural network significantly reduces the computing time.
|
2306.04073 | Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient
for Convolutional Neural Networks | In deep learning, mixture-of-experts (MoE) activates one or few experts
(sub-networks) on a per-sample or per-token basis, resulting in significant
computation reduction. The recently proposed \underline{p}atch-level routing in
\underline{MoE} (pMoE) divides each input into $n$ patches (or tokens) and
sends $l$ patches ($l\ll n$) to each expert through prioritized routing. pMoE
has demonstrated great empirical success in reducing training and inference
costs while maintaining test accuracy. However, the theoretical explanation of
pMoE and the general MoE remains elusive. Focusing on a supervised
classification task using a mixture of two-layer convolutional neural networks
(CNNs), we show for the first time that pMoE provably reduces the required
number of training samples to achieve desirable generalization (referred to as
the sample complexity) by a factor in the polynomial order of $n/l$, and
outperforms its single-expert counterpart of the same or even larger capacity.
The advantage results from the discriminative routing property, which is
justified in both theory and practice that pMoE routers can filter
label-irrelevant patches and route similar class-discriminative patches to the
same expert. Our experimental results on MNIST, CIFAR-10, and CelebA support
our theoretical findings on pMoE's generalization and show that pMoE can avoid
learning spurious correlations. | Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen | 2023-06-07T00:16:10Z | http://arxiv.org/abs/2306.04073v1 | Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
###### Abstract
In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction. The recently proposed patch-level routing in MoE (pMoE) divides each input into \(n\) patches (or tokens) and sends \(l\) patches (\(l\ll n\)) to each expert through prioritized routing. pMoE has demonstrated great empirical success in reducing training and inference costs while maintaining test accuracy. However, the theoretical explanation of pMoE and the general MoE remains elusive. Focusing on a supervised classification task using a mixture of two-layer convolutional neural networks (CNNs), we show for the first time that pMoE provably reduces the required number of training samples to achieve desirable generalization (referred to as the sample complexity) by a factor in the polynomial order of \(n/l\), and outperforms its single-expert counterpart of the same or even larger capacity. The advantage results from the discriminative routing property, which is justified in both theory and practice that pMoE routers can filter label-irrelevant patches and route similar class-discriminative patches to the same expert. Our experimental results on MNIST, CIFAR-10, and CelebA support our theoretical findings on pMoE's generalization and show that pMoE can avoid learning spurious correlations.
Machine Learning, ICML
## 1 Introduction
Deep learning has demonstrated exceptional empirical success in many applications at the cost of high computational and data requirements. To address this issue, mixture-of-experts (MoE) only activates partial regions of a neural network for each data point and significantly reduces the computational complexity of deep learning _without hurting the performance_ in applications such as machine translation and natural image classification (Shazeer et al., 2017; Yang et al., 2019). A conventional MoE model contains multiple experts (subnetworks of the backbone architecture) and one learnable router that routes each input sample to a few but not all the experts (Ramachandran and Le, 2018). Position-wise MoE has been introduced in language models (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2022), where the routing decisions are made on embeddings of different positions of the input separately rather than routing the entire text-input. Riquelme et al. (2021) extended it to vision models where the routing decisions are made on image patches. Zhou et al. (2022) further extended where the MoE layer has one router for each expert such that the router selects partial patches for the corresponding expert and discards the remaining patches. We termed this routing
mode as _patch-level routing_ and the MoE layer as a _patch-level MoE_ (pMoE) layer (see Figure 1 for an illustration of a pMoE). Notably, pMoE achieves the _same_ test accuracy in vision tasks with 20% less training compute and 50% less inference compute compared to its single-expert counterpart of the same capacity (i.e., one expert that receives all the patches of an input) (Riquelme et al., 2021).

Figure 1: An illustration of pMoE. The image is divided into \(20\) patches while the router selects \(4\) of them for each expert.
Despite the empirical success of MoE, it remains elusive in theory, why can MoE maintain test accuracy while significantly reducing the amount of computation? To the best of our knowledge, only one recent work by Chen et al. (2022) shows theoretically that a conventional sample-wise MoE achieves higher test accuracy than convolutional neural networks (CNN) in a special setup of a binary classification task on data from linearly separable clusters. However, the sample-wise analyses by Chen et al. (2022) do not extend to patch-level MoE, which employ different routing strategies than conventional MoE, and their data model might not characterize some practical datasets. This paper addresses the following question theoretically:
_How much computational resource does pMoE save from the single-expert counterpart while maintaining the same generalization guarantee?_
In this paper, we consider a supervised binary classification task where each input sample consists of \(n\) equal-sized patches including _class-discriminative_ patterns that determine the labels and _class-irrelevant_ patterns that do not affect the labels. The neural network contains a pMoE layer1 and multiple experts, each of which is a two-layer CNN2 of the same architecture. The router sends \(l\) (\(l\ll n\)) patches to each expert. Although we consider a simplified neural network model to facilitate the formal analysis of pMoE, the insights are applicable to more general setups. Our major results include:
Footnote 1: In practice, pMoEs are usually placed in the last layers of deep models. Our analysis can be extended to this case as long as the input to the pMoE layer satisfies our data model (see Section 4.2).
Footnote 2: We consider CNN as expert due to its wide applications, especially in vision tasks. Moreover, the pMoE in (Riquelme et al., 2021; Zhou et al., 2022) uses two-layer Multi-Layer Perceptrons (MLPs) as experts in vision transformer (ViT), which operates on image patches. Hence, the MLPs in (Riquelme et al., 2021; Zhou et al., 2022) are effectively non-overlapping CNNs.
**1.** To the best of our knowledge, this paper provides **the first theoretical generalization analysis of pMoE**. Our analysis reveals that pMoE with two-layer CNNs as experts can achieve the same generalization performance as conventional CNN while reducing the sample complexity (the required number of training samples to learn a proper model) and model complexity. Specifically, we prove that as long as \(l\) is larger than a certain threshold, pMoE reduces the sample complexity and model complexity by a factor polynomial in \(n/l\), indicating an improved generalization with a smaller \(l\).
**2. Characterization of the desired property of the pMoE router.** We show that a desired pMoE router can dispatch the same class-discriminative patterns to the same expert and discard some class-irrelevant patterns. This discriminative property allows the experts to learn the class-discriminative patterns with reduced interference from irrelevant patterns, which in turn reduces the sample complexity and model complexity. We also prove theoretically that a separately trained pMoE router has the desired property and empirically verify this property on practical pMoE routers.
**3. Experimental demonstration of reduced sample complexity by pMoE in deep CNN models.** In addition to verifying our theoretical findings on synthetic data prepared from the MNIST dataset (LeCun et al., 2010), we demonstrate the sample efficiency of pMoE in learning some benchmark vision datasets (e.g., CIFAR-10 (Krizhevsky, 2009) and CelebA (Liu et al., 2015)) by replacing the last convolutional layer of a ten-layer wide residual network (WRN) (Zagoruyko & Komodakis, 2016) with a pMoE layer. These experiments not only verify our theoretical findings but also demonstrate the applicability of pMoE in reducing sample complexity in deep-CNN-based vision models, complementing the existing empirical success of pMoE with vision transformers.
## 2 Related Works
**Mixture-of-Experts.** MoE was first introduced in the 1990s with dense sample-wise routing, i.e. each input sample is routed to all the experts (Jacobs et al., 1991; Jordan & Jacobs, 1994; Chen et al., 1999; Tresp, 2000; Rasmussen & Ghahramani, 2001). Sparse sample-wise routing was later introduced (Bengio et al., 2013; Eigen et al., 2013), where each input sample activates few of the experts in an MoE layer both for joint training (Ramachandran & Le, 2018; Yang et al., 2019) and separate training of the router and experts (Collobert et al., 2001, 2003; Ahmed et al., 2016; Gross et al., 2017). Position/patch-wise MoE (i.e., pMoE) recently demonstrated success in large language and vision models (Shazeer et al., 2017; Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022). To solve the issue of load imbalance (Lewis et al., 2021), Zhou et al. (2022) introduces the _expert-choice routing_ in pMoE, where each expert uses one router to select a fixed number of patches from the input. This paper analyzes the sparse patch-level MoE with expert-choice routing under both joint-training and separate-training setups.
**Optimization and generalization analyses of neural networks (NN).** Due to the significant nonconvexity of deep learning problem, the existing generalization analyses are
limited to linearized or shallow neural networks. The Neural-Tangent-Kernel (NTK) approach (Jacot et al., 2018; Lee et al., 2019; Du et al., 2019; Allen-Zhu et al., 2019; Zou et al., 2020; Chizat et al., 2019; Ghorbani et al., 2021) considers strong over-parameterization and approximates the neural network by the first-order Taylor expansion. The NTK results are independent of the input data, and performance gaps in the representation power and generalization ability exist between the practical NN and the NTK results (Yehudai and Shamir, 2019; Ghorbani et al., 2019, 2020; Li et al., 2020; Malach et al., 2021). Nonlinear neural networks are analyzed recently through higher-order Taylor expansions (Allen-Zhu et al., 2019; Bai and Lee, 2019; Arora et al., 2019; Ji and Telgarsky, 2019) or employing a model estimation approach from Gaussian input data (Zhong et al., 2017; Ma et al., 2020; Ma et al., 2020; Li et al., 2022), but these results are limited to two-layer networks with few papers on three-layer networks (Allen-Zhu et al., 2019; Allen-Zhu and Li, 2019; 2020; Li et al., 2022).
The above works consider arbitrary input data or Gaussian input. To better characterize the practical generalization performance, some recent works analyze structured data models using approaches such as feature mapping (Li and Liang, 2018), where some of the initial model weights are close to data features, and feature learning (Daniely and Malach, 2020; Shalev-Shwartz et al., 2020; Shi et al., 2021; Allen-Zhu and Li, 2022; Li et al., 2023), where some weights gradually learn features during training. Among them, Allen-Zhu and Li (2020); Brutzkus and Globerson (2021); Karp et al. (2021) analyze CNN on learning structured data composed of class-discriminative patterns that determine the labels and other label-irrelevant patterns. This paper extends the data models in Allen-Zhu and Li (2020); Brutzkus and Globerson (2021); Karp et al. (2021) to a more general setup, and our analytical approach is a combination of feature learning in routers and feature mapping in experts for pMoE.
## 3 Problem Formulation
This paper considers the supervised binary classification3 problem where given \(N\) i.i.d. training samples \(\{(x_{i},y_{i})\}_{i=1}^{N}\) generated by an unknown distribution \(\mathcal{D}\), the objective is to learn a neural network model that maps \(x\) to \(y\) for any \((x,y)\) sampled from \(\mathcal{D}\). Here, the input \(x\in\mathbb{R}^{nd}\) has \(n\) disjoint patches, i.e., \(x^{\intercal}=[x^{(1)\intercal},x^{(2)\intercal},...,x^{(n)\intercal}]\), where \(x^{(j)}\in\mathbb{R}^{d}\) denotes the \(j\)-th patch of \(x\). \(y\in\{+1,-1\}\) denotes the corresponding label.
Footnote 3: Our results can be extended to multiclass classification problems. See Section M in the Appendix for details.
### Neural Network Models
We consider a pMoE architecture that includes \(k\) experts and the corresponding \(k\) routers. Each router selects \(l\) out of \(n\) (\(l<n\)) patches for each expert separately. Specifically, the router for each expert \(s\) (\(s\in[k]\)) contains a trainable gating kernel \(w_{s}\in\mathbb{R}^{d}\). Given a sample \(x\), the router computes a routing value \(g_{j,s}(x)=\langle w_{s},x^{(j)}\rangle\) for each patch \(j\). Let \(J_{s}(x)\) denote the index set of top-\(l\) values of \(g_{j,s}\) among all the patches \(j\in[n]\). Only patches with indices in \(J_{s}(x)\) are routed to the expert \(s\), multiplied by a gating value \(G_{j,s}(x)\), which are selected differently in different pMoE models.
Each expert is a two-layer CNN with the same architecture. Let \(m\) denote the total number of neurons in all the experts. Then each expert contains \((m/k)\) neurons. Let \(w_{r,s}\in\mathbb{R}^{d}\) and \(a_{r,s}\in\mathbb{R}\) denote the hidden layer and output layer weights for neuron \(r\) (\(r\in[m/k]\)) in expert \(s\) (\(s\in[k]\)), respectively. The activation function is the rectified linear unit (ReLU), where \(\textbf{ReLU}(z)=\text{max}(0,z)\).
Let \(\theta=\{a_{r,s},w_{r,s},w_{s},\forall s\in[k],\forall r\in[m/k]\}\) include all the trainable weights. The pMoE model denoted as \(f_{M}\), is defined as follows:
\[f_{M}(\theta,x)=\sum_{s=1}^{k}\sum_{r=1}^{\overline{k}}\frac{a_{r,s}}{l}\sum_ {j\in J_{s}(w_{s},x)}\textbf{ReLU}(\langle w_{r,s},x^{(j)}\rangle)G_{j,s}(w_{ s},x) \tag{1}\]
An illustration of (1) is given in Figure 2.
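For concreteness, the following PyTorch sketch implements the forward pass of (1) with top-\(l\) patch routing; the batched implementation and the tensor shapes are our own illustrative choices, not code from the paper.

```python
import torch
import torch.nn.functional as F

def pmoe_forward(x, W_gate, W_hidden, a, l, softmax_gate=False):
    """x: (B, n, d); W_gate: (k, d); W_hidden: (k, m/k, d); a: (k, m/k).
    softmax_gate=False gives the constant gating (3); True gives a softmax
    over the selected routing values, one common reading of (4)."""
    B, n, d = x.shape
    k = W_gate.shape[0]
    g = torch.einsum('bnd,kd->bkn', x, W_gate)          # routing values g_{j,s}
    topv, topi = g.topk(l, dim=-1)                      # J_s(x): top-l patches
    # Gather the selected patches for each expert: (B, k, l, d)
    idx = topi.unsqueeze(-1).expand(-1, -1, -1, d)
    xs = x.unsqueeze(1).expand(-1, k, -1, -1).gather(2, idx)
    h = F.relu(torch.einsum('bkld,krd->bklr', xs, W_hidden))
    G = F.softmax(topv, dim=-1) if softmax_gate else torch.ones_like(topv)
    out = torch.einsum('bklr,bkl,kr->b', h, G, a) / l   # f_M(theta, x)
    return out

# Toy dimensions matching Figure 2: k=3, m/k=2, n=6, l=2
x = torch.randn(8, 6, 16)                               # B=8, n=6, d=16
out = pmoe_forward(x, torch.randn(3, 16),
                   torch.randn(3, 2, 16), torch.randn(3, 2), l=2)
```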
The learning problem solves the following empirical risk minimization problem with the logistic loss function,
\[\text{min}_{\theta}:\quad L(\theta)=\frac{1}{N}\!\!\sum_{i=1}^{N}\log\left(1+e ^{-y_{i}f_{M}(\theta,x_{i})}\right) \tag{2}\]
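A single SGD step on (2), reusing `pmoe_forward` from the sketch above, could look as follows; all parameters and labels are illustrative placeholders.

```python
theta = [torch.randn(3, 16, requires_grad=True),        # routers w_s
         torch.randn(3, 2, 16, requires_grad=True),     # hidden weights w_{r,s}
         torch.randn(3, 2, requires_grad=True)]         # output weights a_{r,s}
y = torch.randint(0, 2, (8,)) * 2.0 - 1.0               # labels in {+1, -1}
# Logistic loss of (2): log(1 + exp(-y f_M(theta, x)))
loss = torch.log1p(torch.exp(-y * pmoe_forward(x, *theta, l=2))).mean()
loss.backward()
with torch.no_grad():
    for p in theta:
        p -= 0.2 * p.grad                               # one SGD step
```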
We consider two different training modes of pMoE, _Separate-training_ and _Joint-training_ of the routers and the experts. We also consider the conventional CNN architecture for comparison.
Figure 2: An illustration of the pMoE model in (1) with \(k=3,m=6,n=6\), and \(l=2\).
(I) **Separate-training pMoE**: Under the setup of the so-called _hard mixtures of experts_(Collobert et al., 2003; Ahmed et al., 2016; Gross et al., 2017), the router weights \(w_{s}\) are trained first and then fixed when training the weights of the experts. In this case, the gating values are set as
\[G_{j,s}(w_{s},x)\equiv 1,\ \forall j,s,x \tag{3}\]
We select \(k=2\) in this case to simplify the analysis.
(II) **Joint-training pMoE**: The routers and the experts are learned jointly, see, e.g., (Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022). Here, the gating values are softmax functions with
\[G_{j,s}(w_{s},x)=e^{g_{j,s}(x)}\big{/}\big{(}\sum_{i\in J_{s}(x)}e^{g_{i,s}(x)}\big{)} \tag{4}\]
(III) **CNN single-expert counterpart**: The conventional two-layer CNN with \(m\) neurons, denoted as \(f_{C}\), satisfies,
\[f_{C}(\theta,x)=\sum_{r=1}^{m}a_{r}\left(\frac{1}{n}\sum_{j=1}^{n}\textbf{ReLU}(\langle w_{r},x^{(j)}\rangle)\right) \tag{5}\]
**(II). Both the sample complexity and the required number of hidden nodes of pMoE reduce by a polynomial factor of \(n/l\) over CNN.** We prove that as long as \(l\), the number of patches per expert, is greater than a threshold (that decreases as the separation between class-discriminative and class-irrelevant patterns increases), the sample complexity and the required number of neurons of learning pMoE are \(\Omega(l^{8})\) and \(\Omega(l^{10})\) respectively. In contrast, the sample and model complexities of the CNN are \(\Omega(n^{8})\) and \(\Omega(n^{10})\) respectively, indicating improved generalization by pMoE.
**(III). Larger separation among class-discriminative and class-irrelevant patterns reduces the sample complexity and model complexity of pMoE.** Both the sample complexity and the required number of neurons of pMoE is polynomial in \(\delta\), which decreases when the separation among patterns increases.
### Data Model Assumptions and Rationale
The input \(x\) is comprised of one class-discriminative pattern and \(n-1\) class-irrelevant patterns, and the label \(y\) is determined by the class-discriminative pattern only.
**Distributions of class-discriminative patterns**: The unit vectors \(o_{1}\) and \(o_{2}\in\mathbb{R}^{d}\) denote the _class-discriminative_ patterns that determine the labels. The separation between \(o_{1}\) and \(o_{2}\) is measured as \(\delta_{d}:=\langle o_{1},o_{2}\rangle\in(-1,1)\). \(o_{1}\) and \(o_{2}\) are equally distributed in the samples, and each sample has exactly one of them. If \(x\) contains \(o_{1}\) (or \(o_{2}\)), then \(y\) is \(+1\) (or \(-1\)).
**Distributions of class-irrelevant patterns**: _Class-irrelevant_ patterns are unit vectors in \(\mathbb{R}^{d}\) belonging to \(p\) disjoint pattern sets \(S_{1},S_{2},\ldots,S_{p}\), and these patterns are distributed equally across both classes. \(\delta_{r}\) measures the separation between class-discriminative patterns and class-irrelevant patterns, where \(|\langle o_{i},q\rangle|\leq\delta_{r}\), \(\forall i\in[2]\), \(\forall q\in S_{j}\), \(j=1,...,p\). Each \(S_{j}\) belongs to a ball with a diameter of \(\Theta(\sqrt{(1-\delta_{r}^{2})/dp^{2}})\). Note that NO separation among class-irrelevant patterns themselves is required.
**The rationale of our data model.** The data distribution \(\mathcal{D}\) captures the locality of the label-defining features in image data. It is motivated by and extended from the data distributions in recent theoretical frameworks (Yu et al., 2019; Brutzkus & Globerson, 2021; Karp et al., 2021; Chen et al., 2022). Specifically, Yu et al. (2019) and Brutzkus & Globerson (2021) require orthogonal patterns, i.e., \(\delta_{r}\) and \(\delta_{d}\) are both \(0\), and there are only a fixed number of non-discriminative patterns. Karp et al. (2021) and Chen et al. (2022) assume that \(\delta_{d}=-1\) and a possibly infinite number of patterns drawn from zero-mean Gaussian distribution. In our model, \(\delta_{d}\) takes any value in \((-1,1)\), and the class-irrelevant patterns can be drawn from \(p\) pattern sets that contain an infinite number of patterns that are not necessarily Gaussian or orthogonal.
Define
\[\delta=1/(1-\max(\delta_{d}^{2},\delta_{r}^{2})) \tag{9}\]
\(\delta\) decreases if (1) \(o_{1}\) and \(o_{2}\) are more separated from each other, and (2) Both \(o_{1}\) and \(o_{2}\) are more separated from any set \(S_{i}\), \(i\in[p]\). We also define an integer \(l^{*}\) (\(l^{*}\leq n\)) that measures _the maximum number of class-irrelevant patterns per sample that are sufficiently closer to \(o_{1}\) than \(o_{2}\), and vice versa_. Specifically, a class-irrelevant pattern \(q\) is called \(\delta^{\prime}\)-closer (\(\delta^{\prime}>0\)) to \(o_{1}\) than \(o_{2}\), if \(\langle o_{1}-o_{2},q\rangle>\delta^{\prime}\) holds. Similarly, \(q\) is \(\delta^{\prime}\)-closer to \(o_{2}\) than \(o_{1}\) if \(\langle o_{2}-o_{1},q\rangle>\delta^{\prime}\). Then, let \(l^{*}-1\) be the maximum number of class-irrelevant patches that are either \(\delta^{\prime}\)-closer to \(o_{1}\) than \(o_{2}\) or vice versa with \(\delta^{\prime}=\Theta(1-\delta_{d})\) in any \(x\) sampled from \(\mathcal{D}\). \(l^{*}\) depends on \(\mathcal{D}\) and \(\delta_{d}\). When \(\mathcal{D}\) is fixed, a smaller \(\delta_{d}\) corresponds to a larger separation between \(o_{1}\) and \(o_{2}\) and leads to a small \(l^{*}\). In contrast to linearly separable data in (Yu et al., 2019; Brutzkus et al., 2018; Chen et al., 2022), our data model is **NOT** linearly separable as long as \(l^{*}=\Omega(1)\) (see section K in Appendix for the proof).
### Main Theoretical Results
#### 4.3.1 Generalization Guarantee of Separate-training pMoE
Lemma 4.1 shows that as long as the number of patches per expert, \(l\), is greater than \(l^{*}\), then the separately learned routers by solving (6) always send \(o_{1}\) to expert 1 and \(o_{2}\) to expert 2. Based on this discriminative property of the learned routers, Theorem 4.2 then quantifies the sample complexity and network size of separate-training pMoE to achieve a desired generalization error \(\epsilon\). Theorem 4.3 quantifies the sample and model complexities of CNN for comparison.
**Lemma 4.1** (Discriminative Property of Separately Trained Routers).: _For every \(l\geq l^{*}\), w.h.p. over the random initialization defined in (7), after doing mini-batch SGD with batch-size \(B_{r}=\Omega\left(n^{2}/(1-\delta_{d})^{2}\right)\) and learning rate \(\eta_{r}=\Theta(1/n)\), for \(T_{r}=\Omega\left(1/(1-\delta_{d})\right)\) iterations, the returned \(w_{1}\) and \(w_{2}\) satisfy_
\[\underset{j\in[n]}{\text{arg}}\left(x^{(j)}=o_{1}\right)\in J_{1}(w_{1},x), \quad\forall(x,y=+1)\sim\mathcal{D}\]
\[\underset{j\in[n]}{\text{arg}}\left(x^{(j)}=o_{2}\right)\in J_{2}(w_{2},x), \quad\forall(x,y=-1)\sim\mathcal{D}\]
_i.e., the learned routers always send \(o_{1}\) to expert 1 and \(o_{2}\) to expert 2._
The main idea in proving Lemma 4.1 is to show that the gradient in each iteration has a large component along the directions of \(o_{1}\) and \(o_{2}\). Then after enough iterations, the inner product of \(w_{1}\) and \(o_{1}\) (similarly, \(w_{2}\) and \(o_{2}\)) is sufficiently
large. The intuition of requiring \(l\geq l^{*}\) is that because there are at most \(l^{*}-1\) class-irrelevant patches sufficiently closer to \(o_{1}\) than \(o_{2}\) (or vice versa), then sending \(l\geq l^{*}\) patches to one expert will ensure that one of them is \(o_{1}\) (or \(o_{2}\)). Note that the batch size \(B_{r}\) and the number of iterations \(T_{r}\) depend on \(\delta_{d}\), the separation between \(o_{1}\) and \(o_{2}\), but are independent of the separation between class-discriminative and class-irrelevant patterns.
We then show that the separate-training pMoE reduces both the sample complexity and the required model size (Theorem 4.2) compared to the CNN (Theorem 4.3).
**Theorem 4.2** (Generalization guarantee of separate-training pMoE).: _For every \(\epsilon>0\) and \(l\geq l^{*}\), for every \(m\geq M_{S}=\Omega\left(l^{10}p^{12}\delta^{6}\big{/}\epsilon^{16}\right)\) with at least \(N_{S}=\Omega(l^{8}p^{12}\delta^{6}/\epsilon^{16})\) training samples, after performing minibatch SGD with the batch size \(B=\Omega\left(l^{4}p^{6}\delta^{3}\big{/}\epsilon^{8}\right)\) and the learning rate \(\eta=O\big{(}1/(mpoly(l,p,\delta,1/\epsilon,\log m))\big{)}\) for \(T=O\left(l^{4}p^{6}\delta^{3}\big{/}\epsilon^{8}\right)\) iterations, it holds w.h.p. that_
\[\mathop{\mathbb{P}}_{(x,y)\sim\mathcal{D}}\big{[}yf_{M}(\theta^{(T)},x)>0 \big{]}\geq 1-\epsilon\]
Theorem 4.2 implies that to achieve generalization error \(\epsilon\) by a separate-training pMoE, we need \(N_{S}=\Omega(l^{8}p^{12}\delta^{6}/\epsilon^{16})\) training samples and \(M_{S}=\Omega\left(l^{10}p^{12}\delta^{6}\big{/}\epsilon^{16}\right)\) hidden nodes. Therefore, both \(N_{S}\) and \(M_{S}\) increase polynomially with the number of patches \(l\) sent to each expert. Moreover, both \(N_{S}\) and \(M_{S}\) are polynomial in \(\delta\) defined in (9), indicating an improved generalization performance with stronger separation among patterns.
The proof of Theorem 4.2 is inspired by Li & Liang (2018), which analyzes the generalization performance of fully-connected neural networks (FCN) on structured data, but we have new technical contributions in analyzing pMoE models. In addition to analyzing the pMoE routers (Lemma 4.1), which do not appear in the FCN analysis, our analyses also significantly relax the separation requirement on the data, compared with that by Li & Liang (2018). For example, Li & Liang (2018) requires the separation between the two classes, measured by the smallest \(\ell_{2}\)-norm distance of two points in different classes, being \(\Omega(n)\) to obtain a sample complexity bound of poly(\(n\)) for the binary classification task. In contrast, the separation between the two classes in our data model is \(\min\{\sqrt{2(1-\delta_{d})},2\sqrt{1-\delta_{r}}\}\), much less than \(\Omega(n)\) required by Li & Liang (2018).
**Theorem 4.3** (Generalization guarantee of CNN).: _For every \(\epsilon>0\), for every \(m\geq M_{C}=\Omega\left(n^{10}p^{12}\delta^{6}\big{/}\epsilon^{16}\right)\) with at least \(N_{C}=\Omega(n^{8}p^{12}\delta^{6}\big{/}\epsilon^{16})\) training samples, after performing minibatch SGD with the batch size \(B=\Omega\left(n^{4}p^{6}\delta^{3}\big{/}\epsilon^{8}\right)\) and the learning rate \(\eta=O\big{(}1/(mpoly(n,p,\delta,1/\epsilon,\log m))\big{)}\) for \(T=O\left(n^{4}p^{6}\delta^{3}\big{/}\epsilon^{8}\right)\) iterations, it holds w.h.p. that_
\[\mathop{\mathbb{P}}_{(x,y)\sim\mathcal{D}}\big{[}yf_{C}(\theta^{(T)},x)>0 \big{]}\geq 1-\epsilon\]
Theorem 4.3 implies that to achieve a generalization error \(\epsilon\) using CNN in (5), we need \(N_{C}=\Omega(n^{8}p^{12}\delta^{6}/\epsilon^{16})\) training samples and \(M_{C}=\Omega\left(n^{10}p^{12}\delta^{6}\big{/}\epsilon^{16}\right)\) neurons.
**Sample-complexity gap between single CNN and mixture of CNNs.** From Theorem 4.2 and Theorem 4.3, the sample-complexity ratio of the CNN to the separate-training pMoE is \(N_{C}/N_{S}=\Theta\big{(}(n/l)^{8}\big{)}\). Similarly, the required number of neurons is reduced by a factor of \(M_{C}/M_{S}=\Theta\big{(}(n/l)^{10}\big{)}\) in separate-training pMoE4.
Footnote 4: The bounds for the sample complexity and model size in Theorem 4.2 and Theorem 4.3 are sufficient but not necessary. Thus, rigorously speaking, one cannot compare sufficient conditions only. In our analysis, however, the bounds for MoE and CNN are derived with exactly the same technique, with the only difference being the handling of the routers. Therefore, it is fair to compare these two bounds to show the advantage of pMoE.
#### 4.3.2 Generalization Guarantee of Joint-training pMoE with Proper Routers
Theorem 4.5 characterizes the generalization performance of joint-training pMoE assuming the routers are properly trained in the sense that after some SGD iterations, for each class at least one of the \(k\) experts receives all class-discriminative patches of that class with the largest gating-value (see Assumption 4.4).
**Assumption 4.4**.: There exists an integer \(T^{\prime}<T\) such that for all \(t\geq T^{\prime}\), it holds that:
There exists an expert \(s\in[k]\) s.t. \(\forall(x,y=+1)\sim\mathcal{D},\)
\[j_{o_{1}}\in J_{s}(w_{s}^{(t)},x),\text{ and }G_{j_{o_{1}},s}^{(t)}(x) \geq G_{j,s}^{(t)}(x)\]
and an expert \(s\in[k]\) s.t. \(\forall(x,y=-1)\sim\mathcal{D},\)
\[j_{o_{2}}\in J_{s}(w_{s}^{(t)},x),\text{ and }G_{j_{o_{2}},s}^{(t)}(x) \geq G_{j,s}^{(t)}(x)\]
where \(j_{o_{1}}\) (\(j_{o_{2}}\)) denotes the index of the class-discriminative pattern \(o_{1}\) (\(o_{2}\)), \(G_{j,s}^{(t)}(x)\) is the gating output of patch \(j\in J_{s}(w_{s}^{(t)},x)\) of sample \(x\) for expert \(s\) at the iteration \(t\), and \(w_{s}^{(t)}\) is the gating kernel for expert \(s\) at iteration \(t\).
Assumption 4.4 is required in proving Theorem 4.5 because of the difficulty of tracking the dynamics of the routers in joint-training pMoE. Assumption 4.4 is verified on empirical experiments in Section 5.1, while its theoretical proof is left for future work.
**Theorem 4.5** (Generalization guarantee of joint-training pMoE).: _Suppose Assumption 4.4 hold. Then for every \(\epsilon>0\), for every \(m\geq M_{J}=\Omega\left(k^{3}n^{2}l^{6}p^{12}\delta^{6}\big{/}\epsilon^{16}\right)\) with at least \(N_{J}=\Omega(k^{4}l^{6}p^{12}\delta^{6}/\epsilon^{16})\) training samples, after performing minibatch SGD with the batch size \(B=\Omega\left(k^{2}l^{4}p^{6}\delta^{3}\big{/}\epsilon^{8}\right)\) and the learning rate
\(\eta=O\big{(}1/(mpoly(l,p,\delta,1/\epsilon,\log m))\big{)}\) for \(T=O\left(k^{2}l^{2}p^{6}\delta^{3}/\epsilon^{8}\right)\) iterations, it holds w.h.p. that
\[\mathop{\mathbb{P}}\limits_{(x,y)\sim\mathcal{D}}\big{[}yf_{M}(\theta^{(T)},x)> 0\big{]}\geq 1-\epsilon\]
Theorem 4.5 indicates that, with proper routers, joint-training pMoE needs \(N_{J}=\Omega(k^{4}l^{6}p^{12}\delta^{6}/\epsilon^{16})\) training samples and \(M_{J}=\Omega\left(k^{3}n^{2}l^{6}p^{12}\delta^{6}/\epsilon^{16}\right)\) neurons to achieve \(\epsilon\) generalization error. Compared with CNN in Theorem 4.3, joint-training pMoE reduces the sample complexity and model size by a factor of \(\Theta(n^{8}/k^{4}l^{6})\) and \(\Theta(n^{10}/k^{3}l^{6})\), respectively. With more experts (a larger \(k\)), it is easier to satisfy Assumption 4.4 to learn proper routers, but larger sample and model complexities are required. When the number of samples is fixed, the expression of \(N_{J}\) also indicates that \(\epsilon\) scales as \(k^{1/4}l^{3/8}\), corresponding to an improved generalization when \(k\) and \(l\) decrease.
We provide the end-to-end computational complexity comparison between the analyzed pMoE models and general CNN model in Table 1 (see section N in Appendix for details). The results in Table 1 indicates that the computational complexity in joint-training pMoE is reduced by a factor of \(O(n^{5}/k^{2}l^{3})\) compared with CNN. Similarly, the reduction of computational complexity of separate-training pMoE is \(O(n^{5}/l^{5})\).
## 5 Experimental Results
### pMoE of Two-layer CNN
**Dataset**: We verify our theoretical findings about the model in (1) on synthetic data prepared from the MNIST (LeCun et al., 2010) dataset. Each sample contains \(n=16\) patches with patch size \(d=28\times 28\). Each patch is drawn from the MNIST dataset. See Figure 3 as an example. We treat the digits "**1**" and "**0**" as the class-discriminative patterns \(o_{1}\) and \(o_{2}\), respectively. Each of the digits from "**2**" to "**9**" represents a class-irrelevant pattern set.
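A sketch of this synthetic-data construction is given below; the use of torchvision to load MNIST, the 4×4 grid layout, and the uniform random placement of patches are illustrative assumptions.

```python
import numpy as np
from torchvision.datasets import MNIST   # assumption: torchvision available

mnist = MNIST(root='.', train=True, download=True)
imgs = mnist.data.numpy()                 # (60000, 28, 28)
labels = mnist.targets.numpy()
rng = np.random.default_rng(0)

def make_sample():
    # One class-discriminative digit ("1" or "0") plus 15 class-irrelevant
    # digits drawn from 2-9, placed at random among the 16 patch slots
    y = rng.choice([+1, -1])
    disc_digit = 1 if y == +1 else 0
    digits = [disc_digit] + list(rng.integers(2, 10, size=15))
    rng.shuffle(digits)
    patches = [imgs[rng.choice(np.flatnonzero(labels == d))] for d in digits]
    # Arrange the 16 patches into a 4x4 grid -> one 112x112 image
    grid = np.block([[patches[4 * i + j] for j in range(4)] for i in range(4)])
    return grid, y
```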
**Setup**: We compare separate-training pMoE, joint-training pMoE, and CNN with similar model sizes. The separate-training pMoE contains _two_ experts with \(20\) hidden nodes in each expert. The joint-training pMoE has eight experts with five hidden nodes per expert. The CNN has \(40\) hidden nodes. All are trained using SGD with \(\eta=0.2\) until zero training error. pMoE converges much faster than CNN, which takes \(150\) epochs. Before training the experts in the separate-training pMoE, we train the router for \(100\) epochs. The models are evaluated on \(1000\) test samples.
**Generalization performance**: Figure 4 compares the test accuracy of the three models, where \(l=2\) and \(l=6\) for separate-training and joint-training pMoE, respectively. The error bars show the mean plus/minus one standard deviation
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline & \multicolumn{2}{c|}{**pMoE**} & \multirow{2}{*}{**CNN**} \\ \cline{2-3} & **Separate-training** & **Joint-training** & \\ \hline Complexity to converge with \(\epsilon\) error & \(O(Bml^{5}d/\epsilon^{8})\) & \(O(Bmk^{2}l^{3}d/\epsilon^{8})\) & \(O(Bmn^{5}d/\epsilon^{8})\) \\ \hline Complexity per iteration (FLOPs/Iter) & \(O(Bmld)\) & Router: \(O(Bknd)\) (forward pass), \(O(Bkl^{2}d)\) (backward pass); Expert: \(O(Bmld)\) & \(O(Bmnd)\) \\ \hline Iterations required to converge with \(\epsilon\) error (\(T\)) & \(O(l^{4}p^{6}\delta^{3}/\epsilon^{8})\) & \(O(k^{2}l^{2}p^{6}\delta^{3}/\epsilon^{8})\) & \(O(n^{4}p^{6}\delta^{3}/\epsilon^{8})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational complexity of pMoE and CNN.
Figure 4: Generalization performance of pMoE and CNN with a similar model size
Figure 5: Phase transition of sample complexity with \(l\) in separate-training pMoE
Figure 3: Sample image of the synthetic data from MNIST. Class label is "1".
of five independent experiments. pMoE outperforms CNN with the same number of training samples. pMoE only requires 60% of the training samples needed by CNN to achieve \(95\%\) test accuracy.
Figure 5 shows the sample complexity of separate-training pMoE with respect to \(l\). Each block represents 20 independent trials. A white block indicates all success, and a black block indicates all failure. The sample complexity is polynomial in \(l\), verifying Theorem 4.2. Figures 7 and 6 show the test accuracy of joint-training pMoE with a fixed sample size when \(l\) and \(k\) change, respectively. When \(l\) is greater than \(l^{*}\), which is \(6\) in Figure 7, the test accuracy matches our predicted order. Similarly, the dependence on \(k\) also matches our prediction when \(k\) is large enough for Assumption 4.4 to hold.
**Router performance**: Figure 8 verifies the discriminative property of separately trained routers (Lemma 4.1) by showing the percentage of testing data that have class-discriminative patterns (\(o_{1}\) and \(o_{2}\)) in top \(l\) patches of the separately trained router. With very few training samples (such as \(300\)), one can already learn a proper router that has discriminative patterns in top-\(4\) patches for 95% of data. Figure 9 verifies the discriminative property of jointly trained routers (Assumption 4.4). With only \(300\) training samples, the jointly trained router dispatches \(o_{1}\) with the largest gating value to a particular expert for 95% of class-1 data and similarly for \(o_{2}\) in 92% of class-2 data.
### pMoE of Wide Residual Networks (WRNs)
**Neural network model**: We employ the 10-layer WRN (Zagoruyko & Komodakis, 2016) with a widening factor of 10 as the expert. We construct a patch-level MoE counterpart of WRN, referred to as WRN-pMoE, by replacing the last convolutional layer of WRN with a pMoE layer with an equal number of trainable parameters (see Figure 18 in Appendix for an illustration). WRN-pMoE is trained with the joint-training method5. All the results are averaged over five independent experiments.
Footnote 5: Code is available at [https://github.com/nowazrabbani/pMoE_CNN](https://github.com/nowazrabbani/pMoE_CNN)
**Datasets**: We consider both the CelebA (Liu et al., 2015) and CIFAR-10 datasets. The experiments on CIFAR-10 are deferred to the Appendix (see section A). We down-sample the images of CelebA to \(64\times 64\). The last convolutional layer of WRN receives a (\(16\times 16\times 640\)) dimensional feature map. The feature map is divided into \(16\) patches of size \(4\times 4\times 640\) in WRN-pMoE. We set \(k=8\) and \(l=2\) for the pMoE layer.
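The patching step can be sketched as follows, reusing `pmoe_forward` from Section 3; the unfold-based implementation is an illustrative assumption.

```python
import torch

feat = torch.randn(8, 640, 16, 16)                   # WRN feature map (B,C,H,W)
patches = feat.unfold(2, 4, 4).unfold(3, 4, 4)       # (B, 640, 4, 4, 4, 4)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(8, 16, -1)  # (B, n=16, d)
# patches can now be routed with k=8 experts and l=2 patches per expert, e.g.
# out = pmoe_forward(patches, W_gate, W_hidden, a, l=2)
```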
Figure 8: Percentage of properly routed discriminative patterns by a separately trained router.
Figure 6: Change of test accuracy in joint-training pMoE with \(k\) for fixed sample sizes
Figure 7: Change of test accuracy in joint-training pMoE with \(l\) for fixed sample sizes
Figure 9: Percentage of properly routed discriminative patterns by a jointly trained router. \(l=6\).
**Performance Comparison**: Figure 10 shows the test accuracy of the binary classification problem on the attribute "smiling." WRN-pMoE requires less than _one-fifth_ of the training samples needed by WRN to achieve 86% accuracy. Figure 11 shows the performance when the training data contain spurious correlations, with the hair color as a spurious attribute. Specifically, 95% of the training images with the attribute "smiling" also have the attribute "black hair," while 95% of the training images with the attribute "not-smiling" have the attribute "blond hair." The models may learn the hair-color attribute rather than "smiling" due to the spurious correlation and, thus, the test accuracies are lower in Figure 11 than those in Figure 10. Nevertheless, WRN-pMoE outperforms WRN and reduces the sample complexity to achieve the same accuracy.
Figure 12 shows the test accuracy of multiclass classification (four classes with class attributes: "Not smiling, Eyeglass," "Smiling, Eyeglass," "Smiling, No eyeglass," and "Not smiling, No eyeglass") in CelebA. The results are consistent with the binary classification results. Furthermore, Table 2 empirically verifies the computational efficiency of WRN-pMoE over WRN on multiclass classification in CelebA6. Even with the same number of training samples, WRN-pMoE is still more computationally efficient than WRN, because WRN-pMoE requires fewer iterations to converge and has a lower per-iteration cost.
Footnote 6: An NVIDIA RTX 4500 GPU was used to run the experiments, training FLOPs are calculated as Training FLOPs \(=\) Training time (second) \(\times\) Number of GPUs \(\times\) peak FLOP/second \(\times\) GPU utilization rate
## 6 Conclusion
MoE reduces computational costs significantly without hurting the generalization performance in various empirical studies, but the theoretical explanation is mostly elusive. This paper provides the first theoretical analysis of patch-level MoE and proves its savings in sample complexity and model size quantitatively compared with the single-expert counterpart. Although centered on a classification task using a mixture of two-layer CNNs, our theoretical insights are verified empirically on deep architectures and multiple datasets. Future works include analyzing other MoE architectures such as MoE in Vision Transformer (ViT) and connecting MoE with other sparsification methods to further reduce the computation.
## Acknowledgements
This work was supported by AFOSR FA9550-20-1-0122, NSF 1932196 and the Rensselaer-IBM AI Research Collaboration ([http://airc.rpi.edu](http://airc.rpi.edu)), part of the IBM AI Horizons Network ([http://ibm.biz/AIHorizons](http://ibm.biz/AIHorizons)). We thank Yihua Zhang at Michigan State University for the help in experiments with CelebA dataset. We thank all anonymous reviewers. |
2310.11398 | Neural Attention: Enhancing QKV Calculation in Self-Attention Mechanism
with Neural Networks | In the realm of deep learning, the self-attention mechanism has substantiated
its pivotal role across a myriad of tasks, encompassing natural language
processing and computer vision. Despite achieving success across diverse
applications, the traditional self-attention mechanism primarily leverages
linear transformations for the computation of query, key, and value (QKV),
which may not invariably be the optimal choice under specific circumstances.
This paper probes into a novel methodology for QKV computation-implementing a
specially-designed neural network structure for the calculation. Utilizing a
modified Marian model, we conducted experiments on the IWSLT 2017
German-English translation task dataset and juxtaposed our method with the
conventional approach. The experimental results unveil a significant
enhancement in BLEU scores with our method. Furthermore, our approach also
manifested superiority when training the Roberta model with the Wikitext-103
dataset, reflecting a notable reduction in model perplexity compared to its
original counterpart. These experimental outcomes not only validate the
efficacy of our method but also reveal the immense potential in optimizing the
self-attention mechanism through neural network-based QKV computation, paving
the way for future research and practical applications. The source code and
implementation details for our proposed method can be accessed at
https://github.com/ocislyjrti/NeuralAttention. | Muhan Zhang | 2023-10-17T17:06:26Z | http://arxiv.org/abs/2310.11398v2 | # Neural Attention: Enhancing QKV Calculation in Self-Attention Mechanism with Neural Networks
###### Abstract
In the realm of deep learning, the self-attention mechanism has substantiated its pivotal role across a myriad of tasks, encompassing natural language processing and computer vision. Despite achieving success across diverse applications, the traditional self-attention mechanism primarily leverages linear transformations for the computation of query, key, and value (QKV), which may not invariably be the optimal choice under specific circumstances. This paper probes into a novel methodology for QKV computation--implementing a specially-designed neural network structure for the calculation. Utilizing a modified Marian model, we conducted experiments on the IWSLT 2017 German-English translation task dataset and juxtaposed our method with the conventional approach. The experimental results unveil a significant enhancement in BLEU scores with our method. Furthermore, our approach also manifested superiority when training the Roberta model with the Wikitext-103 dataset, reflecting a notable reduction in model perplexity compared to its original counterpart. These experimental outcomes not only validate the efficacy of our method but also reveal the immense potential in optimizing the self-attention mechanism through neural network-based QKV computation, paving the way for future research and practical applications. The source code and implementation details for our proposed method can be accessed at [https://github.com/ocislyjrti/NeuralAttention](https://github.com/ocislyjrti/NeuralAttention).
Self-Attention Mechanism, Multi-Layer Perceptron (MLP), QKV Computation, Neural Networks, Natural Language Processing
## 1 Introduction
### Problem Statement and Research Motivation
The self-attention mechanism, introduced by Vaswani et al. in their seminal work on the Transformer architecture Vaswani et al. (2017), has established itself as a potent method in capturing dependencies within input sequences across various tasks in natural language processing and computer vision. Despite its notable achievements, the traditional self-attention mechanism, which computes the query, key, and value (QKV) predominantly through linear transformations, might encounter limitations in its expressive power in certain scenarios. As highlighted by Goodfellow et al. in their comprehensive book on deep learning Goodfellow et al. (2016), linear transformations essentially perform linear mapping, potentially lacking the capability to handle complex patterns and non-linear relationships. In contrast, non-linear transformations, such as those performed through neural networks, generally possess the ability to capture more intricate features and patterns in input data. Thus, the main motivation of this paper is to explore a novel method, leveraging neural networks to enhance QKV computation within the self-attention mechanism, aiming to amplify its expressive power and performance.
### Research Significance
Such an enhancement in the self-attention mechanism is profoundly significant. As shown by its application in machine translation with the Transformer architecture Vaswani et al. (2017) and in image recognition with the Vision Transformer
(ViT) Dosovitskiy et al. (2020), any improvement upon this mechanism may directly impact a multitude of applications and models dependent on it.
### Proposed Solution and Contributions
Against this backdrop, we propose a novel approach to compute QKV in the self-attention mechanism using a neural network, and validate its efficacy through experiments on different datasets and models. Our main contributions are as follows:
1. Proposing and implementing a novel neural network model for computing QKV in the self-attention mechanism, elucidating its architecture and underlying rationale, inspired by recent advancements in neural attention Lin et al. (2017).
2. Validating the effectiveness of our method in enhancing model performance through experiments using a modified Marian model on the IWSLT 2017 German-English translation task dataset and a modified Roberta model Liu et al. (2019) on the Wikitext-103 dataset.
## 2 Background
### The Rise of the Self-Attention Mechanism
Since its groundbreaking introduction by Vaswani et al. within the Transformer model, the self-attention mechanism has rapidly grown in prominence and influence, becoming a cornerstone in the design of many subsequent models and architectures in the expansive field of deep learning. This ingenious mechanism, with its unique ability to process input sequences concurrently in parallel, has redefined how models perceive and handle dependencies. Unlike traditional models that often struggled with long-range dependencies, the self-attention mechanism excels by capturing intricate relationships between elements, irrespective of the vast positional distances that might separate them in the input sequence. Such capabilities not only enhance the model's understanding of the data but also ensure more contextually accurate representations. Over time, this mechanism has proven its mettle, showing remarkable performance improvements across a diverse range of tasks, from machine translation to document summarization, solidifying its position as an indispensable tool in modern deep learning toolkits.
### Roberta Model
The Roberta model, short for "Robustly Optimized BERT Approach", represents an evolution of the BERT architecture Devlin et al. (2018). While the foundational principles remain similar, Roberta introduces significant optimizations, particularly in terms of model size and the approach to pre-training data. One of the strategic changes was the decision to eliminate the "Next Sentence Prediction" task from BERT, and instead, Roberta adopts longer training sequences. This design choice has proven to be pivotal, resulting in increased efficiency and precision. Consequently, Roberta has showcased superior performance and robust representational capacities across a broad spectrum of natural language processing tasks Liu et al. (2019). The underlying design philosophies and optimization strategies have cemented Roberta's position as an invaluable asset in the NLP research and application landscape, garnering acclaim from many researchers and practitioners.
### Marian Model
Marian NMT is an efficient, free, and open-source framework specifically designed for research in neural machine translation (NMT) Junczys-Dowmunt et al. (2018). Supporting multi-GPU training and model ensemble, it provides implementations of a range of model architectures and training strategies, including the standard Transformer model and various derivatives. Marian has achieved success across various machine translation benchmark tests.
In the following sections, we delve into the proposed method, which uses neural networks to enhance QKV computation within the self-attention mechanism, and present the results of experiments based on the Roberta and Marian models.
## 3 Proposed Method and Theoretical Analysis
### Method Overview
In this section, we present an enhanced attention mechanism, which introduces a Multilayer Perceptron (MLP) to augment the model's representational learning capability Rumelhart et al. (1986). Our motivation stems from the notion that traditional linear attention mechanisms, such as those introduced by Vaswani et al. (2017), may not sufficiently capture complex patterns and non-linear relationships within the input data.
### Detailed Methodology
The core computation in traditional attention mechanisms is typically represented as
\[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}} \right)V\]
where \(Q\), \(K\), and \(V\) are commonly obtained through a single linear transformation:
\[Q=W_{q}X,\quad K=W_{k}X,\quad V=W_{v}X\]
In the method we propose, we employ an MLP with a certain depth to replace these linear transformations, formally we have:
\[Q=\text{MLP}_{q}(X),\quad K=\text{MLP}_{k}(X),\quad V=\text{MLP}_{v}(X)\]
Where the MLP can be expressed as:
\[\text{MLP}(X)=W_{2}\cdot\sigma\left(\text{LayerNorm}(W_{1}X+b_{1})\right)+b_{2}\]
Where:
* \(X\) is the input,
* \(W_{1}\) and \(b_{1}\) are the weight and bias of the first layer, respectively,
* \(\sigma\) represents the ReLU activation function Nair and Hinton (2010),
* LayerNorm denotes the Layer Normalization operation Ba et al. (2016),
* \(W_{2}\) and \(b_{2}\) are the weight and bias of the second layer, respectively.
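For concreteness, a minimal PyTorch sketch of this construction is given below; the hidden width (here \(2d_{model}\)), the module names, and the single-head attention at the end are illustrative assumptions rather than the exact experimental configuration:

```python
import torch
import torch.nn as nn

class MLPProjection(nn.Module):
    """Two-layer MLP replacing a linear Q/K/V projection, following the formula
    above: W2 * ReLU(LayerNorm(W1 x + b1)) + b2. The hidden width is an assumption."""
    def __init__(self, d_model, d_hidden=None):
        super().__init__()
        d_hidden = d_hidden or 2 * d_model   # illustrative choice of hidden width
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.norm = nn.LayerNorm(d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.fc2(torch.relu(self.norm(self.fc1(x))))

# One independent MLP per projection, as in Q = MLP_q(X), K = MLP_k(X), V = MLP_v(X).
d_model = 512
mlp_q, mlp_k, mlp_v = MLPProjection(d_model), MLPProjection(d_model), MLPProjection(d_model)
x = torch.randn(2, 10, d_model)              # (batch, sequence length, features)
q, k, v = mlp_q(x), mlp_k(x), mlp_v(x)
attn_out = torch.softmax(q @ k.transpose(-2, -1) / d_model ** 0.5, dim=-1) @ v
```

Keeping three independent MLPs mirrors the three independent projection matrices \(W_{q}\), \(W_{k}\), \(W_{v}\) of the standard mechanism; in a multi-head setting, the outputs would be split into heads exactly as with linear projections.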
### Theoretical Analysis
In the multi-head self-attention mechanism, each "head" learns different attention weights, thereby capturing different facets of information from the input sequence. The calculations for Query (Q), Key (K), and Value (V) play a pivotal role in this process. In the original attention mechanism, these vectors are obtained through linear transformations. These transformations can be understood as operations that reposition word vectors within the vector space.
From a geometric or vector-space perspective, these transformations are affine maps (a linear map plus a bias term) and include operations such as rotation, scaling, and translation Goodfellow et al. (2016). Therefore, the traditional linear attention mechanism moves word vectors in space to focus on relevant context. Specifically, by learning linear transformations for Q, K, and V, the model learns how to reposition word vectors in space so that vectors of words that are associated or need to be attended to are brought closer together.
\[Q=W_{q}X,\quad K=W_{k}X,\quad V=W_{v}X\]
However, a limitation of linear transformations is that they preserve the linear structure of the vector space. In other words, linear transformations cannot alter the topological structure of the space. In some cases, nonlinear relationships and complex patterns might not be sufficiently captured by linear transformations.
This is why we introduce the Multi-Layer Perceptron (MLP). Unlike single-layer linear transformations, an MLP, owing to its internal non-linear activation functions (e.g., ReLU), can implement nonlinear mappings from the input space to the output space Rumelhart et al. (1986). Therefore, the MLP has the capacity to alter the topological structure of the word vector space, potentially creating more complex and expressive relationships in the space.
\[Q=\text{MLP}_{q}(X),\quad K=\text{MLP}_{k}(X),\quad V=\text{MLP}_{v}(X)\]
In this way, our model can learn more complex relationships and dependencies between words, further enriching the semantic and structural information of the representations. Although this increases the computational complexity of the model, we argue that if this complexity brings about a significant improvement in performance, then the trade-off is worthwhile. In the subsequent experimental section, we will validate this through a series of experiments.
## 4 Experiments
### Experimental Setup
Experiments were conducted based on the Hugging Face Transformers library Wolf et al. (2020). We assessed Roberta and Marian models on masked language modeling and machine translation tasks respectively. In the implementation, we employed pre-trained weights for model initialization, ensuring that parts of the model (excluding QKV computation) could benefit from the pre-trained weights, and ensured that both models used entirely identical parameters and settings during the experiments. All models were trained using the same default parameters, including the same learning rate, optimizer, batch size, etc. Furthermore, we ensured experiment consistency and reproducibility by setting the same random seed. These measures ensured that all observed performance differences can be attributed to variations in the QKV computation method, rather than other factors.
Firstly, we analyzed the machine translation task utilizing the opus-mt-de-en model pre-trained by the Helsinki-NLP team Tiedemann and Thottingal (2020), which is based on the MarianMTModel architecture, specifically designed for neural machine translation. Both the encoder and decoder of the model contain 6 layers, with hidden layer dimensions of 512, feed-forward layer dimensions of 2048, employing 8 attention heads, and applying a dropout rate of 0.1 in the model to suppress overfitting and enhance generalization capabilities. The source language (German) and target language (English) share a vocabulary of 58,101 words. We utilized the Hugging Face Transformers library to conduct further fine-tuning and evaluation of the model on the "IWSLT 2017" dataset (config name "iwslt2017-de-en") by executing the run_translation.py script. The batch size for training and evaluation per device was set to 32, the model was trained on the data for 6 epochs, evaluated every 1000 training steps, and logged every 200 steps. All model outputs and logs were saved in the "/tmp/tst-translation" directory for further analysis and model inspection.
Secondly, we trained and evaluated the performance of the roberta-base model on the Masked Language Modeling (MLM) task. Specifically, the model adopts the RobertaForMaskedLM architecture, an encoder-only model with 12 layers, a hidden dimension of 768 per layer, feed-forward network layer dimensions of 3072, and 12 attention heads. A dropout rate of 0.1 is used in the model, along with a vocabulary of 50,265 words. This experiment was conducted on the wikitext-103-raw-v1 dataset, utilizing the run_mlm.py script for model fine-tuning and evaluation. In both the training and evaluation stages, we set the batch size to 8; the model was trained on the data for 5 epochs, evaluated every 250 training steps, and logged every 50 steps. Relevant outputs and logs were saved in the /tmp/test-mlm directory. In both tasks, we particularly focused on the performance variations and results brought about by fine-tuning the model on specific tasks and datasets.
### Experiment One: Roberta Model on Masked Language Modeling Task
#### 4.2.1 Model Modification and Training Setup
We modified the self-attention mechanism of the Roberta model, replacing the original linear QKV transformations with the MLP neural network described in Section 3. The model was trained and evaluated on the Wikitext-103-raw-v1 dataset, undergoing a total of 5 training epochs.
#### 4.2.2 Results
Experimental results indicate that, compared to the original Roberta model, the modified model achieved a significant improvement in perplexity. The original model's perplexity was 5.51, while the modified model's was 4.47. Moreover, the modified model displayed a quicker improvement speed and a higher final evaluation accuracy within the same training time. Figures 1 and 2 show the accuracy curves. It can be observed that the modified model's accuracy improves faster than the original model's. A detailed comparison of the perplexity and evaluation accuracy between the original and the modified model is presented in Table 1.
### Experiment Two: Marian Model on Machine Translation Task

#### 4.3.2 Results
Experimental results indicate that, compared to the original Marian model, the modified model achieved a significant improvement in the BLEU score. The original model's BLEU score was 32.62, while the modified model's was 35.76. Table 2 compares the two BLEU scores. Figure 3 shows the BLEU score curves over epochs, while Figure 4 presents them over time. From these graphs, it can similarly be observed that the modified model's BLEU score improves faster than the original model's.
### Ablation Study
To validate the pivotal role of the ReLU activation function in the performance improvements, we compared different methods for computing the keys (Key) and values (Value) in the Marian model on the iwslt2017-de-en dataset.
**Dual Linear Projection (DLP)**: A two-step linear mapping was utilized to compute the keys and values. Initially, the input is mapped to a twice-as-large intermediate representation through a linear layer, then it is mapped back to the original dimension through a second linear layer. Formally, given an input \(x\in\mathbb{R}^{d}\), the computation of keys and values is as follows:
\[k,v=W_{2}\cdot\text{LayerNorm}(W_{1}\cdot x+b_{1})+b_{2}\]
**Neural Attention**: On top of DLP, we introduced a ReLU non-linear activation function between the two linear mappings. Therefore, the computation of keys and values becomes:
\[k,v=W_{2}\cdot\text{ReLU}(\text{LayerNorm}(W_{1}\cdot x+b_{1}))+b_{2}\]
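The only difference between the two variants is the ReLU between the two linear maps, as the following sketch makes explicit (dimensions and names are illustrative):

```python
import torch
import torch.nn as nn

d = 512
w1, w2, norm = nn.Linear(d, 2 * d), nn.Linear(2 * d, d), nn.LayerNorm(2 * d)
x = torch.randn(4, 16, d)                    # (batch, sequence length, features)

# Dual Linear Projection (DLP): two stacked linear maps with LayerNorm in between,
# but no activation function.
kv_dlp = w2(norm(w1(x)))

# Neural Attention: identical except for the ReLU, which supplies the
# non-linearity that the ablation isolates.
kv_neural = w2(torch.relu(norm(w1(x))))
```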
The results of the ablation study using these two mechanisms, along with the standard self-attention for comparison, are presented in Table 3.
Through the results presented in Table 3, we can observe that the non-linear layer (ReLU) plays a crucial role in the Neural Attention mechanism. The DLP method, which removes the ReLU activation function, did not demonstrate significant performance improvement, while the Neural Attention method markedly increased the BLEU score. This validates our initial intention and the necessity of introducing non-linear transformations.
## 5 Conclusion
This research delves deeply into the effectiveness and potential of employing Multi-Layer Perceptrons (MLP) for Query, Key, and Value (QKV) computation in the self-attention mechanism. Through a series of experiments and analyses on different models and tasks, we observed significant improvements in model performance in various aspects by introducing non-linear activation functions and complex key and value generation strategies.
In the masked language modeling task, we found that substituting the traditional linear transformation with an MLP neural network significantly improved the Roberta model in terms of Perplexity and other evaluation metrics. Similarly, in the machine translation task, the Marian model also showed a significant enhancement in the BLEU score after introducing a new attention mechanism computation method.
It is noteworthy that, due to computational resource limitations, we were unable to train a large model from scratch to attempt to surpass the existing state-of-the-art (SOTA) models. Our experiments mainly focused on existing pre-trained models and explored the feasibility and effects of fine-tuning models by modifying the self-attention mechanism.
Although we have made some positive strides in this research, there is still ample space for exploration and optimization in using an MLP neural network as the QKV computation method in the self-attention mechanism. Future research directions could include exploring different network architectures, activation functions, and the potential applicability of this method on other NLP tasks and models.
In summary, this research offers a novel approach and perspective, showcasing the potential value of introducing non-linearity and complex computations into the self-attention mechanism, and lays the foundation for further research and exploration. We hope these findings provide insights for researchers and developers in the field of natural language processing and further propel the development and innovation of attention mechanisms.
| Model  | Attention Mechanism Type | DE-EN BLEU |
| ------ | ------------------------ | ---------- |
| Marian | Standard Self-Attention  | 32.63      |
| Marian | Dual Linear Projection   | 32.64      |
| Marian | Neural Attention         | 35.76      |

Table 3: BLEU score results on the iwslt2017-de-en dataset. |
2305.12433 | ParticleWNN: a Novel Neural Networks Framework for Solving Partial
Differential Equations | Deep neural networks (DNNs) have been widely used to solve partial
differential equations (PDEs) in recent years. In this work, a novel deep
learning-based framework named Particle Weak-form based Neural Networks
(ParticleWNN) is developed for solving PDEs in the weak form. In this
framework, the trial space is defined as the space of DNNs, while the test
space consists of functions compactly supported in extremely small regions,
centered around particles. To facilitate the training of neural networks, an
R-adaptive strategy is designed to adaptively modify the radius of regions
during training. The ParticleWNN inherits the benefits of weak/variational
formulation, requiring less regularity of the solution and a small number of
quadrature points for computing integrals. Additionally, due to the special
construction of the test functions, ParticleWNN enables parallel implementation
and integral calculations only in extremely small regions. This framework is
particularly desirable for solving problems with high-dimensional and complex
domains. The efficiency and accuracy of ParticleWNN are demonstrated through
several numerical examples, showcasing its superiority over state-of-the-art
methods. The source code for the numerical examples presented in this paper is
available at https://github.com/yaohua32/ParticleWNN. | Yaohua Zang, Gang Bao | 2023-05-21T11:22:48Z | http://arxiv.org/abs/2305.12433v3 | # ParticleWNN: a Novel Neural Networks Framework for Solving Partial Differential Equations
###### Abstract
Deep neural networks (DNNs) have been widely used to solve partial differential equations (PDEs) in recent years. In this work, a novel deep learning-based framework named Particle Weak-form based Neural Networks (ParticleWNN) is developed for solving PDEs in the weak form. In this framework, the trial space is chosen as the space of DNNs, and the test space is constructed by functions compactly supported in extremely small regions whose centers are particles. To train the neural networks, an R-adaptive strategy is designed to adaptively modify the radius of regions during training. The ParticleWNN inherits the advantages of weak/variational formulation, such as requiring less regularity of the solution and a small number of quadrature points for computing the integrals. Moreover, due to the special construction of the test functions, the ParticleWNN allows local training of networks, parallel implementation, and integral calculations only in extremely small regions. The framework is particularly desirable for solving problems with high-dimensional and complex domains. The efficiency and accuracy of the ParticleWNN are demonstrated with several numerical examples. The numerical results show clear advantages of the ParticleWNN over the state-of-the-art methods.
## 1 Introduction
Due to the powerful approximation ability of DNNs, the DNN-based methods for solving PDEs and related inverse problems have sprung up in recent years. Generally, the PDE-based forward and inverse problems are governed by the following PDE
\[\mathcal{A}[u(t,\mathbf{x});\mathbf{\gamma}]=0,\quad\text{in }(0,T]\times\Omega, \tag{1a}\]
\[\mathcal{B}[u(t,\mathbf{x});\mathbf{\gamma}]=g(t,\mathbf{x}),\quad\text{on }[0,T]\times\partial\Omega, \tag{1b}\]
\[\mathcal{I}[u(0,\mathbf{x});\mathbf{\gamma}]=h(\mathbf{x}),\quad\text{on }\Omega, \tag{1c}\]
where \(\Omega\) is a bounded domain in \(\mathbb{R}^{d}\) with boundary \(\partial\Omega\), \(T>0\), and \(u(t,\mathbf{x})\) is the solution of the PDE. Here, \(\mathcal{A}\) is the forward operator, which can be of parabolic, hyperbolic, or elliptic type. \(\mathcal{B}\) and \(\mathcal{I}\) represent the boundary condition operator and the initial condition operator, respectively. The function \(\mathbf{\gamma}\) denotes parameters in the PDE. The forward problem is to solve the PDE given the function \(\mathbf{\gamma}\) along with appropriate boundary conditions (1b) and/or initial conditions (1c). The inverse problem, on the other hand, is to determine the function \(\mathbf{\gamma}\) from additional boundary or interior measurements of the solution.
Among the existing DNN-based PDE solvers, one popular framework is based on the strong form of PDEs. The most notable one is the physics-informed neural networks (PINN) [30], which approximates the PDE solutions with DNNs and formulates the loss function as a combination of the strong-form PDE residuals and data mismatch. However, the strong-form methods usually require a massive amount of collocation points, leading to a high training cost [11]. Moreover, the
non-convex optimization problem obtained by strong-form methods may result in many local minima, which makes learning complex physical phenomena more challenging [12; 18]. Different from the strong-form methods, the weak-form methods formulate the loss based on the weak/variational form of the PDEs, e.g., [41; 23; 19; 17; 16; 43; 2; 15]. Therefore, these methods have the advantage of requiring less smoothness of the solution and a smaller number of quadrature points for computing the integrals, and they allow local learning through domain decomposition [15]. Usually, these methods use DNNs to parameterize the trial functions. The difference lies in choosing different types of test functions, including global or piecewise polynomials [15; 17], localized non-overlapping high-order polynomials [16], and deep neural networks [43; 2]. However, existing weak-form methods define the test functions over the whole domain or sub-domains, which requires accurate integration methods and a large number of integration points to reduce integral approximation errors. Although domain decomposition [19; 16] can alleviate the problem in low-dimensional cases, high dimensionality makes domain decomposition much more difficult.
**Main contributions.** In this paper, a novel deep learning-based framework, ParticleWNN, is developed for solving PDEs based on the weak form. The trial space is chosen as the space of DNNs, and the test space is constructed from functions compactly supported in extremely small regions whose centers are particles. There are several advantages to constructing test functions this way. First, the locally defined test functions allow local training of the network parameters [16] and enable parallel implementation. Second, since the particles are randomly sampled in the domain and the radius can be extremely small, the proposed method is desirable for high-dimensional and complex-domain problems and avoids calculating integrals over large domains. An R-adaptive strategy is then introduced to train the neural network, i.e., the upper bound of the radius decreases adaptively with iterations. Finally, the effectiveness of the ParticleWNN is confirmed with several numerical examples. Our primary contributions are listed below:
* We propose a novel weak-form framework for solving PDEs by using DNNs, where the test space is constructed by functions compactly supported in extremely small regions whose centers are particles.
* We develop several training techniques for the proposed framework to improve its convergence and accuracy.
* We demonstrate the efficiency of the proposed framework with several numerical examples and show its superiority over the state-of-the-art methods in dealing with difficult problems, such as PDE problems with complicated solutions and inverse problems with noisy data.
## 2 Related Works
According to different frameworks, deep learning-based PDE methods can be roughly divided into four categories: the strong-form methods, the weak-form methods, the hybrid methods of traditional methods and deep learning, and methods based on neural operators. Typically, the strong-form methods utilize DNNs to approximate the solution of the equation. Then, collocation points are generated on the domain and the domain boundary to evaluate the strong-form residuals of the governing equations in the domain and the mismatches of solutions on the boundary, respectively. Finally, the network parameters are learned by minimizing a weighted combination of residuals and mismatches. The most representative methods of this type include the deep Galerkin method (DGM) [33] and the physics-informed neural networks (PINN) method [30]. Based on PINN, more methods are proposed to solve fractional advection-diffusion equations (fPINN) [28], Navier-Stokes equations (NSFnets) [13], PDE-based inverse problems (B-PINN) [39], and PDEs in irregular domains (PhyGeoNet) [10]; or to improve the PINN from different aspects, including adaptive collocation point strategies [1; 25], domain decomposition [12; 11], loss modification [29; 42], and new training strategies [18]. For additional information on the strong-form methods, refer to [14].
The weak-form methods, on the other hand, formulate the loss based on the weak or variational form of the PDEs. Based on the variational principle, the Deep Ritz method (DeepRitz) [41] solves PDEs by minimizing an energy function, which was further extended to PDEs with essential boundary conditions in the deep Nitsche method [23]. A deep domain decomposition method (D3M) was proposed based on the variational principle in [19]. In [43; 2], the weak adversarial network (WAN) was proposed based on the weak form to convert the problem into an operator norm minimization
problem. The variational neural networks (VarNet) method [17] is based on the Petrov-Galerkin method, in which the trial function was approximated by DNN and the test functions were a set of piecewise polynomials that have compact support in the domain. The variational formulation of PINN (VPINN) [15] has a similar formulation, except that the test functions were chosen to be polynomials with globally compact support over the domain. Combined with domain decomposition, the hp-VPINN [16] was developed based on the sub-domain Petrov-Galerkin method. It divides the domain into multiple non-overlapping subdomains (h-refinement) and sets test functions as localized non-overlapping high-order polynomials (p-refinement).
The hybrid methods combine traditional methods with DNN-based methods to overcome the shortcomings of pure DNN-based methods. The classical Galerkin method was combined with neural networks in [34] to solve initial boundary value problems. In [31], a DiscretizationNet method was developed to combine the finite volume discretization with a generative CNN-based encoder-decoder for solving the Navier-Stokes equations. A coupled automatic-numerical differentiation PINN (CANN PINN) was developed in [7], where the numerical differentiation-inspired method was coupled with the automatic differentiation to define the loss function. DNN-based methods were also combined with the finite difference method in [32; 35] and the finite element method in [40]. There are also recent methods based on neural operators [21; 24; 20; 22; 6], which focus on learning mappings between function spaces.
## 3 Methodology
### The ParticleWNN framework
To illustrate the proposed method, we consider the classical Poisson equation with the Dirichlet boundary condition
\[\begin{cases}-\Delta u(\mathbf{x})=f(\mathbf{x}),\quad\text{in}\;\Omega\subset\mathbb{ R}^{d},\\ u(\mathbf{x})=g(\mathbf{x}),\quad\text{on}\;\partial\Omega.\end{cases} \tag{2}\]
The weak formulation of Poisson's equation (2) involves finding a function in \(\{u\in H^{1}(\Omega)|u|_{\partial\Omega}=g\}\) such that, for all test functions \(v\in H^{1}_{0}(\Omega)\), the following equation holds
\[\int_{\Omega}\nabla u\cdot\nabla v\;d\mathbf{x}=\int_{\Omega}fv\;d\mathbf{x}, \tag{3}\]
where \(H^{1}(\Omega)\) denotes the Sobolev space of functions with square-integrable derivatives and \(H^{1}_{0}(\Omega)\) contains the \(H^{1}(\Omega)\) functions with zero boundary conditions. Under appropriate conditions on \(f\) and \(g\), the weak form (3) admits a unique solution \(u\), which is called the _weak solution_[8]. Generally, the weak-form DNN-based methods approximate the function \(u\) with a neural network \(u_{NN}(\mathbf{x};\theta)\), which usually comprises \(l\) hidden layers with \(\mathcal{N}_{i}\) neurons in each layer and activation function \(\sigma(\cdot)\), taking the following form
\[u_{NN}(\mathbf{x};\theta)=T^{(l+1)}\circ T^{(l)}\circ T^{(l-1)}\circ\cdots\circ T^{(1)}(\mathbf{x}). \tag{4}\]
Here, the linear mapping \(T^{(l+1)}:\mathbb{R}^{\mathcal{N}_{l}}\rightarrow\mathbb{R}\) indicates the output layer, and \(T^{(i)}(\cdot)=\sigma(\mathbf{W}_{i}\cdot+\mathbf{b}_{i}),\;i=1,\cdots l\) are nonlinear mappings with weights \(\mathbf{W}_{i}\in\mathbb{R}^{\mathcal{N}_{i}\times\mathcal{N}_{i-1}}\) and biases \(\mathbf{b}_{i}\in\mathbb{R}^{\mathcal{N}_{i}}\), and \(\theta=\left\{\mathbf{W}_{i},\mathbf{b}_{i}\right\}_{i=1}^{l+1}\) collects the network parameters. Then, these methods train the network by minimizing a loss function that is usually defined as the root mean square (RMS) or mean squared error (MSE) of the weak-form residuals along with some penalty terms. Following [15; 16], we denote by \(\mathcal{R}\) the weak-form residual in (3)
\[\mathcal{R}(u_{NN};v)=\int_{\Omega}\nabla u_{NN}\cdot\nabla v\;d\mathbf{x}-\int_ {\Omega}fv\;d\mathbf{x}. \tag{5}\]
Different choices of the test functions vary in different weak-form methods. For example, the VarNet [17] defines the test functions as a set of piecewise polynomials that have compact support over the entire domain, the VPINN [15] selects the test functions to be polynomials with globally compact support, the hp-VPINN [16] uses localized non-overlapping high-order polynomials as test functions, and the WAN [43] represents the test space with DNNs. The special feature of our work is to choose test functions to be compactly supported functions defined in small neighborhoods \(B(\mathbf{x}^{c},R)\subset\Omega\), where \(\mathbf{x}^{c}\) is a particle in \(\Omega\) and \(R\) is the radius of the neighborhood. Advantages of
defining the test function in this way include allowing the DNN to focus on extremely local regions, avoiding integrating over the entire domain, being able to parallelize, etc. Specifically, we choose the compactly supported radial basis functions (CSRBFs) as test functions in this work. Usually, the CSRBFs defined in \(B(\mathbf{x}^{c},R)\) have the following form
\[v(r)=\begin{cases}v_{+}(r),&r(\mathbf{x})\leq 1,\\ 0,&r(\mathbf{x})>1,\end{cases} \tag{6}\]
where \(r(\mathbf{x})=\frac{\|\mathbf{x}-\mathbf{x}^{c}\|}{R}\). In fact, any function in \(H^{1}(\Omega)\) that is compactly supported in \(B(\mathbf{x}^{c},R)\) can be used as a test function. To improve the training efficiency, we use multiple test functions to formulate the loss function. That is, we generate \(N_{p}\) particles \(\left\{\mathbf{x}_{i}^{c}\right\}_{i=1}^{N_{p}}\) and the corresponding \(\left\{R_{i}\right\}_{i=1}^{N_{p}}\) randomly or with some rules in the domain 1, and then define \(N_{p}\) CSRBFs \(\left\{v_{i}\right\}_{i=1}^{N_{p}}\) in each small neighbourhood \(B(\mathbf{x}_{i}^{c},R_{i})\). Therefore, we obtain the MSE of the weak-form residuals
Footnote 1: To ensure that \(B(\mathbf{x}^{c},R)\subset\Omega\), we generate \(R\) first, and then sample \(\mathbf{x}^{c}\) in \(\tilde{\Omega}=\{\mathbf{x}\in\Omega|\text{dist}(\mathbf{x},\partial\Omega)\geq R\}\).
\[\mathcal{L}_{\mathcal{R}}=\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}|\mathcal{R}(u_{NN} ;v_{i})|^{2}. \tag{7}\]
For the boundary condition (and/or initial condition), we can treat it as a penalty term
\[\mathcal{L}_{\mathcal{B}}=\frac{1}{N_{bd}}\sum_{j=1}^{N_{bd}}|\mathcal{B}[u_{ NN}(\mathbf{x}_{j})]-g(\mathbf{x}_{j})|^{2}, \tag{8}\]
where \(\{\mathbf{x}_{j}\}_{j=1}^{N_{bd}}\) are sampled points on the \(\partial\Omega\). Finally, we formulate our loss function as
\[\mathcal{L}(\theta)=\lambda_{\mathcal{R}}\mathcal{L}_{\mathcal{R}}+\lambda_{ \mathcal{B}}\mathcal{L}_{\mathcal{B}}, \tag{9}\]
where \(\lambda_{\mathcal{R}}\) and \(\lambda_{\mathcal{B}}\) are weight coefficients in the loss function.
### Calculation of the loss
To evaluate the loss in (9), we need to calculate \(N_{p}\) integrals that are defined in \(B(\mathbf{x}_{i}^{c},R_{i}),\;i=1,\cdots,N_{p}\). A straightforward way to evaluate integrals is to use Monte Carlo integration. Unfortunately, it requires an immense sample size to ensure admissible integration errors. An alternative is the quadrature rule method. This method works efficiently in low-dimensional cases or when the integrand is simple. However, for high-dimensional problems and complicated integrands, one needs to further increase the number of quadrature points, thus greatly increasing the computational cost. Other numerical techniques, such as sparse grids [27] and quasi-Monte Carlo integration [26], can also be employed. For our framework, thanks to the special construction of the test functions, we only need to evaluate integrals on the small regions \(B(\mathbf{x}_{i}^{c},R_{i})\) rather than on the entire domain. Through a simple coordinate transformation, we can convert the calculation of integrals on \(N_{p}\) small regions into integrals on a standard region \(B(\mathbf{0},1)\). In fact, the coordinate transformation \(\mathbf{x}=\mathbf{s}R_{i}+\mathbf{x}_{i}^{c}\) can be applied in (5) to get
\[\mathcal{R}(u_{NN};v_{i})=R_{i}^{d}\bigg{(}\int_{B(\mathbf{0},1)}\nabla_{\mathbf{x}}u_ {NN}(\mathbf{x})\cdot\nabla_{r}v_{i}\cdot\nabla_{\mathbf{x}}r\;d\mathbf{s}-\int_{B(\mathbf{0},1)}f(\mathbf{x})v_{i}\;d\mathbf{s}\bigg{)}, \tag{10}\]
where we use \(\nabla_{\mathbf{x}}v_{i}=\nabla_{r}v_{i}\cdot\nabla_{\mathbf{x}}r\). Therefore, we only need to generate one set of integration points in \(B(\mathbf{0},1)\) to calculate the \(N_{p}\) integrals. For example, assume that \(\left\{\mathbf{s}_{k}\right\}_{k=1}^{K_{int}}\) are \(K_{int}\) integration points uniformly sampled from \(B(\mathbf{0},1)\); then we denote \(\mathbf{x}_{k}^{(i)}:=\mathbf{s}_{k}R_{i}+\mathbf{x}_{i}^{c}\) and approximate \(\mathcal{R}(u_{NN};v_{i})\) by
\[\mathcal{R}(u_{NN};v_{i})\approx\frac{R_{i}^{d}\mathcal{V}_{B(\mathbf{0},1)}}{K_{ int}}\sum_{k=1}^{K_{int}}\bigg{(}\nabla_{\mathbf{x}}u_{NN}(\mathbf{x}_{k}^{(i)})\cdot \nabla_{r}v_{i}(\mathbf{x}_{k}^{(i)})\cdot\nabla_{\mathbf{x}}r(\mathbf{x}_{k}^{(i)})-f( \mathbf{x}_{k}^{(i)})v_{i}(\mathbf{x}_{k}^{(i)})\bigg{)}, \tag{11}\]
where \(\mathcal{V}_{B(\mathbf{0},1)}\) indicates the volume of \(B(\mathbf{0},1)\). Consequently, the loss (7) can be approximated by
\[\mathcal{L}(\mathcal{R})\approx\frac{\mathcal{V}_{B(\mathbf{0},1)}^{2}}{N_{p}K_{ int}^{2}}\sum_{i=1}^{N_{p}}R_{i}^{2d}\bigg{(}\sum_{k=1}^{K_{int}}\bigg{(}\nabla_{ \mathbf{x}}u_{NN}(\mathbf{x}_{k}^{(i)})\cdot\nabla_{r}v_{i}(\mathbf{x}_{k}^{(i)})\cdot \nabla_{\mathbf{x}}r(\mathbf{x}_{k}^{(i)})-f(\mathbf{x}_{k}^{(i)})v_{i}(\mathbf{x}_{k}^{(i)}) \bigg{)}\bigg{)}^{2}. \tag{12}\]
From (12), we can see that the value of \(\mathcal{L}(\mathcal{R})\) depends on \(R_{i}^{2d}\). Since \(R_{i}\) can be extremely small, this factor can shrink the loss toward zero and stall training; to avoid this training failure, we remove \(R_{i}^{2d}\) in (12). We also remove the fixed term \(\mathcal{V}_{B(\mathbf{0},1)}^{2}\), which does not affect training. We denote by \(\tilde{\mathcal{L}}(\mathcal{R})\) the approximation of \(\mathcal{L}(\mathcal{R})\) with \(R_{i}^{2d}\) and \(\mathcal{V}_{B(\mathbf{0},1)}^{2}\) removed. Then, the loss function (9) is modified and approximated as
\[\begin{split}\tilde{\mathcal{L}}(\theta)&=\frac{ \lambda_{\mathcal{R}}}{N_{p}K_{int}^{2}}\sum_{i=1}^{N_{p}}\bigg{(}\sum_{k=1}^{K _{int}}\bigg{(}\nabla_{\mathbf{x}}u_{NN}(\mathbf{x}_{k}^{(i)})\cdot\nabla_{r}v_{i}(\bm {x}_{k}^{(i)})\cdot\nabla_{\mathbf{x}}r(\mathbf{x}_{k}^{(i)})-f(\mathbf{x}_{k}^{(i)})v_{i}( \mathbf{x}_{k}^{(i)})\bigg{)}\bigg{)}^{2}\\ &\quad+\frac{\lambda_{\mathcal{B}}}{N_{bd}}\sum_{j=1}^{N_{bd}} \bigg{(}\mathcal{B}[u_{NN}(\mathbf{x}_{j})]-g(\mathbf{x}_{j})\bigg{)}^{2}.\end{split} \tag{13}\]
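To make the assembly of (11)-(13) concrete, the following one-dimensional PyTorch sketch evaluates the weak-residual term with a single shared set of integration points in \(B(\mathbf{0},1)\); the trial network, the particular CSRBF, and the sampling choices are illustrative assumptions rather than the reference implementation:

```python
import torch

# One-dimensional sketch of the residual/loss assembly in Eqs. (11)-(13); the trial
# network, the particular CSRBF, and the sampling choices are illustrative assumptions.
d = 1
u_nn = torch.nn.Sequential(
    torch.nn.Linear(d, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
f = lambda x: torch.ones_like(x[..., :1])     # assumed source term

def csrbf(r):
    """A Wendland-type CSRBF (1-r)_+^4 (4r+1) and its r-derivative -20 r (1-r)_+^3;
    both vanish for r >= 1, so the test function is compactly supported."""
    w = torch.clamp(1.0 - r, min=0.0)
    return w ** 4 * (4.0 * r + 1.0), -20.0 * r * w ** 3

N_p, K_int, R = 200, 49, 1e-4
s = torch.linspace(-0.99, 0.99, K_int).view(1, K_int, 1)    # integration points in B(0,1)
xc = (torch.rand(N_p, 1, 1) * 2.0 - 1.0) * (1.0 - R)        # particles x_i^c in Omega=(-1,1)
x = (s * R + xc).requires_grad_(True)                       # x_k^(i) = s_k R_i + x_i^c

u = u_nn(x.reshape(-1, d)).reshape(N_p, K_int, 1)
grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # nabla_x u at x_k^(i)

xd = x.detach()                                  # test-function factors carry no parameters
dist = (xd - xc).norm(dim=-1, keepdim=True)
v, dv = csrbf(dist / R)
grad_r = (xd - xc) / (R * dist + 1e-12)          # nabla_x r

# Eq. (11) with the constant R_i^d and volume factors dropped, as done for Eq. (13).
integrand = (grad_u * dv * grad_r).sum(dim=-1, keepdim=True) - f(x) * v
residuals = integrand.mean(dim=1)                # one weak-form residual per particle
loss_R = (residuals ** 2).mean()                 # weak-residual term of Eq. (13)
```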
## 4 The Implementation Details
In this section, we discuss some training techniques that will improve the performance of the proposed ParticleWNN.
**Selection of test functions.** In our framework, the choice of test functions can be very diverse. For different integration methods, different test functions will have different effects on the integration error, and will eventually affect the accuracy of the method. For the numerical examples in this paper, we choose meshgrid integration points and select CSRBFs as test functions. There are many types of CSRBFs to choose from, such as the Bump function [9], Wendland's function [37], Wu's function [38], and Buhmann's function [5]. One can also refer to [44] for how to construct them. Typically, we consider the following Wendland-type CSRBFs
\[\phi_{d,2}(r)=\begin{cases}\frac{(1-r)^{l+2}}{3}[(l^{2}+4l+3)r^{2}+(3l+6)r+3],\quad r<1,\\ 0,\quad r\geq 1,\end{cases} \tag{14}\]
where \(l=\lfloor d/2\rfloor+3\), \(d\) is the dimension of the domain, and \(\lfloor\cdot\rfloor\) indicates the floor function.
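For reference, Eq. (14) transcribes directly into code as follows (a minimal sketch; only the formula itself is taken from the text):

```python
import torch

def wendland_phi_d2(r: torch.Tensor, d: int) -> torch.Tensor:
    """Direct transcription of Eq. (14): Wendland's CSRBF phi_{d,2}(r),
    compactly supported (zero for r >= 1)."""
    l = d // 2 + 3                                     # l = floor(d/2) + 3
    poly = (l ** 2 + 4 * l + 3) * r ** 2 + (3 * l + 6) * r + 3
    phi = (1 - r).clamp(min=0) ** (l + 2) / 3 * poly
    return torch.where(r < 1, phi, torch.zeros_like(r))
```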
**R-adaptive strategy.** In the ParticleWNN framework, \(R\) is a very important parameter, which determines the size of the region of interest of the DNN. If \(R\) is too large, there will be more overlap between compact supports, so the loss will become insensitive to the change of particles. Moreover, a large \(R\) usually means a more complicated integrand, which results in a larger approximation error of the integral. However, \(R\) should not be too small either, since that causes precision loss in floating-point numbers. In this work, we generate \(R_{i}\) for each particle \(\mathbf{x}_{i}^{c}\) from a range \([R_{min},R_{max}]\), where \(R_{min}\) is a small fixed number and \(R_{max}(\geq R_{min})\) is another small number that varies with iterations. Generally, there are three common ways for \(R_{max}\) to change with iterations, that is, **R-ascending**: \(R_{max}\) gradually increases with iterations until it reaches a given upper bound; **R-fixed**: \(R_{max}\) remains unchanged; and **R-descending**: \(R_{max}\) gradually decreases with iterations until a given lower bound is reached. In our experiments, we found that the **R-descending** strategy has the best performance (see details in the Supplementary Material). Therefore, we adopt the **R-descending** strategy to train the ParticleWNN.
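A minimal sketch of one possible **R-descending** schedule is given below; the linear decay is an assumption, since the text only specifies that \(R_{max}\) decreases with iterations toward a lower bound:

```python
def r_max_schedule(iteration, max_iter, r_upper=1e-4, r_lower=1e-6):
    """One possible R-descending schedule: R_max decays linearly from r_upper
    toward r_lower over training (the linear decay itself is an assumption)."""
    return max(r_lower, r_upper - (r_upper - r_lower) * iteration / max_iter)

# Radii are then drawn per particle from [R_min, R_max], e.g.
# R_i = R_min + (r_max_schedule(it, max_iter) - R_min) * random.random()
```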
**Adaptive selection of particles.** A common way to improve the training of the ParticleWNN is to select the particles with some smart rules. In this paper, we found that the \(topK\) technique can improve the performance of the proposed ParticleWNN (see the Supplementary Material for details). The implementation of the \(topK\) technique is as follows: first, we randomly sample \(N_{p}\) particles in the domain; then, we evaluate the squared residual for each particle with (11); finally, we select the largest \(topK(\leq N_{p})\) residuals to calculate the loss.
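The \(topK\) selection itself reduces to a single call, sketched below for squared residuals computed per particle via (11):

```python
import torch

def topk_residual_loss(sq_residuals: torch.Tensor, topk: int) -> torch.Tensor:
    """Average only the topK largest squared residuals (one per particle, Eq. (11))
    when forming the weak-residual loss term."""
    vals, _ = torch.topk(sq_residuals.flatten(), k=topk)
    return vals.mean()

# e.g. with N_p = 200 particles and topK = 150:
# loss_R = topk_residual_loss(residuals ** 2, 150)
```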
Finally, we summarize the ParticleWNN procedure in Algorithm 1.
## 5 Experiments
In this section, we show the efficiency of the ParticleWNN in both PDE problems and inverse problems. For comparison, we choose the DeepRitz method and the vanilla PINN as baselines.
**Experimental setups.** Without specific clarification, we choose the trial function \(u_{NN}\) as a ResNet with \(4\) hidden layers and \(50\) nodes in each layer, and specify the activations for each problem. During the training process, we randomly sample \(N_{p}\) particles in the domain and \(N_{bd}=2d\tilde{N}_{bd}\) points (\(\tilde{N}_{bd}\) for each of the \(2d\) sides) on the boundary, and sample a radius \(R_{i}\) from \([R_{min},R_{max}]\) for each region. We set \(R_{min}=10^{-6}\), \(R_{max}=10^{-4}\), \(\lambda_{\mathcal{R}}=1\), \(\lambda_{\mathcal{B}}=5\), \(maxIter=20000\), and generate \(K_{int}\) integration points in \(B(0,1)\) to evaluate the integrals. We adopt the Adam optimizer with \(\tau_{\theta}=0.001\) and apply the StepLR scheduler with a step size of 1 epoch and a decay factor \(\gamma=1-2/maxIter\). We use the Relative error \(\|u_{NN}-u\|_{2}/\|u\|_{2}\) and the maximum absolute error (MAE) \(\max|u_{NN}-u|\) as evaluation metrics and run each example with 5 random seeds to obtain the mean and the standard deviation. The experiments in Section 5.2 were executed using a Kaggle Notebook with a Tesla P100 GPU, and all other experiments were executed using Google Colab [4] with a Tesla T4 GPU.
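The optimizer and scheduler settings above translate roughly into the following sketch (the placeholder network and the stand-in loss are assumptions):

```python
import torch

max_iter = 20000
u_nn = torch.nn.Linear(1, 1)     # placeholder trial network for this sketch
opt = torch.optim.Adam(u_nn.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=1.0 - 2.0 / max_iter)

for it in range(max_iter):
    opt.zero_grad()
    loss = u_nn(torch.zeros(1, 1)).pow(2).mean()   # stand-in for the Eq. (13) loss
    loss.backward()
    opt.step()
    sched.step()   # decay the learning rate once per step, per the setup above
```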
### One Dimensional Poisson's Equation
We first consider the following one dimensional Poisson's equation
\[\begin{cases}-\frac{\partial^{2}u}{\partial x^{2}}(x)=f(x),\quad x\in\Omega= (-1,1),\\ u(x)=g(x),\quad x\in\partial\Omega.\end{cases} \tag{15}\]
We construct a solution \(u(x)=x\cos(\omega x)\) for (15) and use it to calculate the forcing term \(f(x)\) and boundary condition \(g(x)\). It has been shown in [3] that, given a large frequency \(\omega\) (e.g., \(15\pi\)), the vanilla PINN failed to solve the problem with a Fully Connected Feedforward Neural Network with the activation function being the Tanh. Here, we use the Sine function as the activation in the ResNet. We set \(N_{p}=200\), \(topK=150\), \(K_{int}=49\), and \(\tilde{N}_{bd}=1\) for the proposed method. For comparison, we generate \(9800\) integration points in \(\Omega\) for the DeepRitz method and \(9800\) (\(topK=7350\)) collocation points for the vanilla PINN. For other parameters, we keep their settings consistent. We show the experiment results in Figure 1. It can be seen from Figure 1(c) that, although the DeepRitz method converges fastest, its accuracy is greatly limited by the number of integral sample points. The vanilla PINN does not have the problem of being limited by the integral points. However, it converges slowly and takes a long computation time (almost twice that of the other two methods). The ParticleWNN method converges quickly and obtains the smallest Relative error and MAE, see Figure 1(a) and 1(b). The point-wise error in Figure 1(d) indicates that the solution obtained by the ParticleWNN is the closest to the ground truth.
### Allen-Cahn equation
We then consider the following nonlinear Allen-Cahn equation
\[\begin{cases}u_{t}-\lambda u_{xx}+5u^{3}-5u=0,\quad t\in(0,1],\ x\in[-1,1],\\ u(x,0)=x^{2}\cos(\pi x),\\ u(t,-1)=u(t,1),\ u_{x}(t,-1)=u_{x}(t,1),\end{cases} \tag{16}\]
where \(\lambda=0.0001\). The equation is very difficult to solve as it has a sharp solution (see Figure 2(c)). To show the efficiency of the proposed method, we solve the problem with Algorithm 1 by choosing a test function \(v(\mathbf{x})\) that depends only on \(\mathbf{x}\) and multiplying it on both sides of equation (16). Then, we get the weak formulation through variation
\[\int_{\Omega}(u_{t}+5u^{3}-5u)v\;d\mathbf{x}+\lambda\int_{\Omega}\nabla_{\mathbf{x}}u \cdot\nabla_{\mathbf{x}}v\;d\mathbf{x}=0,\quad t\in(0,1]. \tag{17}\]
We approach the initial value conditions in the same way as we handled the boundary conditions and sample \(N_{init}=200\) points in \(\Omega\) to evaluate the mismatch of initial conditions. To evaluate the weak-form residuals, we sample \(N_{t}=100\) time points in \((0,1]\). At each time point, we randomly sample \(N_{p}=50\) particles in \(\Omega\) and set \(K_{int}=25\) to calculate the integrals in (17). For other parameters, we set \(N_{bd}=200\), \(topK=4000\), \(\lambda_{\mathcal{R}}=100\), \(maxIter=50000\), and increase the number of neurons per layer in the ResNet to \(100\) and use the composite function of the Tanh and the Sine as the activation. For comparison, we generate \(125000\) (\(topK=100000\)) collocation points for the vanilla PINN to evaluate the strong residuals and keep other settings consistent with the ParticleWNN. We show the experiment results in Figure 2, from which we can see that the ParticleWNN achieves higher accuracy and converges much faster than the vanilla PINN. The Figure 2(b) shows that the ParticleWNN takes almost half the computation time of the vanilla PINN.
### Inverse Problems
The particleWNN can also be applied in solving inverse problems. Typically, we consider the following coefficient identification problem which is often used to verify the effectiveness of DNN-based methods
\[-\nabla\cdot(a(x,y)\Delta u)=f(x,y),\quad(x,y)\in\Omega=[-1,1]\times[-1,1], \tag{18}\]
where \(u\) is the solution of the equation, \(f\) indicates the source term, and \(a\in\{a\in L^{\infty}(\Omega):0<\underline{a}\leq a(x,y)\leq\overline{a}\;a.e. \;in\;\Omega\}\) represents the coefficient. Given the source term \(f\), the inverse problem is to identify the coefficient \(a\) from inexact measurements \(u^{\delta}\), where \(\delta\) indicates Gaussian noise.

Figure 1: Performance of the ParticleWNN, the vanilla PINN, and the DeepRitz in solving the problem (15). (a) Average Relative errors obtained by the ParticleWNN (\(0.0031\pm 0.0018\)), the vanilla PINN (\(0.0061\pm 0.0043\)), and the DeepRitz (\(0.1049\pm 0.0064\)); (b) Average MAEs obtained by the ParticleWNN (\(0.0032\pm 0.0019\)), the vanilla PINN (\(0.0045\pm 0.0025\)), and the DeepRitz (\(0.0893\pm 0.0035\)); (c) Average Relative errors vs. average computation times; (d) Average point-wise errors.

Figure 2: Performance of the ParticleWNN and the vanilla PINN in solving the problem (16). (a) Average Relative errors obtained by the ParticleWNN (\(0.073\pm 0.015\)) and the vanilla PINN (\(0.323\pm 0.167\)); (b) Average Relative errors vs. average computation times; (c) The exact \(u\) and the average point-wise errors obtained by the ParticleWNN and the vanilla PINN.

We set the exact coefficient as
\[a(x,y)=0.1+\exp\left(-\frac{(x-x_{1})^{2}}{\sigma_{1}}-\frac{(y-y_{1})^{2}}{\sigma_{2}}\right), \tag{19}\]
where \(\sigma_{1},\sigma_{2}\) are sampled from \([0.01,0.5]\), and \(x_{1},y_{1}\) are sampled from \([-0.5,0.5]\), see Figure 3(a). For the sake of convenience, we manufacture a solution \(u(x,y)=\sin(\pi x)\sin(\pi y)\) and use the exact coefficient and solution to compute \(f\). We assume that the measurements \(u^{\delta}\) can be obtained from 100 sensors inside the domain and \(\tilde{N}_{bd}=25\) equally distributed sensors at each boundary for the Dirichlet boundary condition, see Figure 3(b). To solve the problem, we use two independent DNNs with the same structure to parameterize \(a\) and \(u\), respectively. We set \(N_{p}=200\), \(topK=150\), \(K_{int}=60\), and use the Tanh as the activation for the ParticleWNN method. For a fair comparison, we use \(12000\) (\(topK=9000\)) collocation points for the vanilla PINN and keep other settings consistent with the ParticleWNN. We consider two different noise levels \(\delta\sim\mathcal{N}(0,0.01^{2}),\;\delta\sim\mathcal{N}(0,0.1^{2})\), and show the experiment results in Table 1. From Table 1, we can see that the vanilla PINN is very sensitive to noise, while the ParticleWNN achieves good results at both noise levels. Moreover, the ParticleWNN converges much faster and takes less than half of the computation time of the vanilla PINN (See Figure 3(c)-3(d)).
### Additional ablations
In the following, we study the effect of \(N_{p}\) and \(K_{int}\) on the ParticleWNN. For this purpose, we consider solving Poisson's equation (15) with the proposed ParticleWNN approach under different combinations of \(N_{p}\) and \(K_{int}\). We show the experimental results in Figure 4 (Relative errors, MAEs, and computation times are recorded in the Supplementary Material). From Figures 4(a)-4(b), we can see that the algorithm performs worst when \(K_{int}=5\), and increasing \(N_{p}\) does not improve the algorithm much, indicating that the ParticleWNN still needs a sufficient number of integration points to obtain a considerable precision of the integral approximation. When there are enough integration points, the number of particles has a dominant influence on the algorithm. However, the computation time increases linearly with \(N_{p}K_{int}\) (see Figure 4(c)), so it is necessary to select an appropriate combination to balance accuracy and time consumption.
| Metric                   | ParticleWNN (noise 0.01) | vanilla PINN (noise 0.01) | ParticleWNN (noise 0.1) | vanilla PINN (noise 0.1) |
| ------------------------ | ------------------------ | ------------------------- | ----------------------- | ------------------------ |
| Relative error for \(a\) | **0.036 ± 0.005**        | 0.755 ± 1.014             | **0.061 ± 0.006**       | 0.255 ± 0.393            |
| MAE for \(a\)            | **0.071 ± 0.009**        | 0.613 ± 0.697             | **0.087 ± 0.004**       | 0.277 ± 0.408            |
| Relative error for \(u\) | **0.013 ± 0.002**        | 0.272 ± 0.318             | **0.022 ± 0.004**       | 0.139 ± 0.232            |
| MAE for \(u\)            | **0.039 ± 0.013**        | 0.502 ± 0.577             | **0.058 ± 0.009**       | 0.277 ± 0.449            |
| Time (s)                 | **610.31 ± 0.98**        | 1441.65 ± 1.98            | **607.60 ± 0.24**       | 1436.66 ± 2.64           |

Table 1: Performance (avg ± std) of the ParticleWNN and the vanilla PINN in solving the inverse problem (18) with different noise levels. Bold marks the better result at each noise level.
Figure 3: Performance of the ParticleWNN and the vanilla PINN in solving the inverse problem (18) with different noise levels. (a) The exact \(a\); (b) The exact \(u\) and sensors (black dots); (c) Average Relative error vs. average computation time at noise level 0.01; (d) Average Relative error vs. average computation time at noise level 0.1.
## 6 Conclusion
In this work, we propose a novel framework, ParticleWNN, for solving PDEs in weak form with DNNs. The novelty of the framework is to employ test functions compactly supported in extremely small regions, whose sizes and locations can be selected arbitrarily. In this way, the method not only inherits the advantages of weak-form methods, such as requiring less regularity of the solution and a small number of quadrature points for computing the integrals, but also outperforms them on problems with complicated solutions. In addition, the flexible definition of the test functions makes the ParticleWNN easily applicable to high-dimensional problems and complex-domain problems. To improve the training of the ParticleWNN, several training strategies are developed, such as the R-descending strategy and the adaptive selection of particles. Finally, we demonstrate the advantages of the ParticleWNN over other state-of-the-art methods in solving PDE problems and inverse problems.
Although the ParticleWNN avoids integrals over the entire domain or subdomains, it still suffers from the limitations imposed by integral calculations, such as errors caused by integral approximations. Therefore, an important future direction is to develop more efficient integral calculation techniques. Another future direction is to investigate new techniques to improve the ParticleWNN method, such as smarter particle selection rules and more efficient training strategies, including the causal training technique [36] and the seq2seq learning technique [3].
A potential risk is that the conditions required for the algorithm to converge are still unclear due to the lack of theoretical analysis of DNNs, especially in the case of nonlinear PDEs. However, this is a problem common to all DNN-based methods and is well known to researchers in this area. Overall, we believe our new framework is likely to have a positive impact on society, offering a novel approach to the numerical solution of PDEs with diverse applications in science and technology.
## Acknowledgments and Disclosure of Funding
We are grateful to Qian Huang for providing valuable feedbacks on the draft. The work of G. Bao was supported in part by the National Natural Science Foundation of China (Grant No. 11621101, U21A20425) and by the Key Laboratory of Zhejiang Province.
|
2302.07099 | Tetris-inspired detector with neural network for radiation mapping | In recent years, radiation mapping has attracted widespread research
attention and increased public concerns on environmental monitoring. In terms
of both materials and their configurations, radiation detectors have been
developed to locate the directions and positions of the radiation sources. In
this process, algorithms are essential in converting detector signals to
radiation source information. However, due to the complex mechanisms of
radiation-matter interaction and the current limitation of data collection,
high-performance, low-cost radiation mapping is still challenging. Here we
present a computational framework using Tetris-inspired detector pixels and
machine learning for radiation mapping. Using inter-pixel padding to increase
the contrast between pixels and a neural network to analyze the detector
readings, a detector with as few as four pixels can achieve high-resolution
directional mapping. By further imposing Maximum a Posteriori (MAP) estimation with a
moving detector, radiation position localization is achieved.
Non-square, Tetris-shaped detectors can further improve performance beyond the
conventional grid-shaped detector. Our framework offers a new avenue for high
quality radiation mapping with the fewest detector pixels possible, and is
anticipated to be deployable for real-world radiation detection with
moderate validation. | Ryotaro Okabe, Shangjie Xue, Jiankai Yu, Tongtong Liu, Benoit Forget, Stefanie Jegelka, Gordon Kohse, Lin-wen Hu, Mingda Li | 2023-02-07T22:17:18Z | http://arxiv.org/abs/2302.07099v1 | # Tetris-inspired detector with neural network for radiation mapping
###### Abstract
In recent years, radiation mapping has attracted widespread research attention and increased public concerns about environmental monitoring. In terms of both materials and their configurations, radiation detectors have been developed to locate the directions and positions of radiation sources. In this process, algorithms are essential in converting detector signals to radiation source information. However, due to the complex mechanisms of radiation-matter interaction and the current limitations of data collection, high-performance, low-cost radiation mapping is still challenging. Here we present a computational framework using Tetris-inspired detector pixels and machine learning for radiation mapping. Using inter-pixel padding to increase the contrast between pixels and a neural network to analyze the detector readings, a detector with as few as four pixels can achieve high-resolution directional mapping. By further imposing Maximum a Posteriori (MAP) estimation with a moving detector, radiation position localization is achieved. Non-square, Tetris-shaped detectors can further improve performance beyond the conventional grid-shaped detector. Our framework offers a new avenue for high-quality radiation mapping with the fewest detector pixels possible, and is anticipated to be deployable for real-world radiation detection with moderate validation.
## Introduction
From the Fukushima nuclear accident in 2011 to the recent risk at the Zaporizhzhia nuclear power plant, there has been an increasing global need for improved radiation detection technology, aiming to achieve high-performance radiation detection mapping with minimum impact on detectors and reduced cost. Due to the simultaneous presence of multiple radiation-interaction mechanisms, detection of ionizing radiation is considerably harder than detection of visible light. The large penetration depth of radiation, such as hard X-rays, \(\gamma\)-rays, and neutrons, reduces the angular sensitivity of detectors and limits the majority of radiation detection efforts to counting or spectra acquisition rather than directional information. The challenge of acquiring directional radiation information further triggers additional difficulties in performing source localization, that is, determining the position distributions of radiation sources [1, 2]. In recent years, radiation localization has attracted increased interest with applications such as autonomous nuclear site inspection. Several prototype system configurations have been proposed, including unmanned ground [3, 4, 5, 6, 7], aerial [3, 5, 7], and underwater vehicles [8, 9]. Despite the remarkable progress, the information extraction processes for radioactive environments are still at an early stage, and further in-depth studies are much needed.
In past decades, several approaches have been proposed for directional radiation detection. One approach is the High Efficiency Multimode Imager (HEMI), which can be used to detect and locate \(\gamma\)-ray and X-ray radioactive sources [10, 11, 12, 13]. A typical HEMI consists of two layers of CdZnTe (CZT) detectors: the first layer has a randomly arranged aperture for coded aperture imaging, and the second layer is the conventional co-planar detector grid. This system requires the incident beam to come from a limited solid-angle range so that the beam passes through the aperture of the first layer and interacts with the second layer. The traditional reconstruction algorithm requires that all the incident beams come from within the field of view. If the radiation is incident from another direction, the accuracy will be affected (especially for near-field radiation). Besides, this system can only conditionally detect multiple sources, usually in the case that the sources |
2303.04040 | Uncertainty Quantification of Spatiotemporal Travel Demand with
Probabilistic Graph Neural Networks | Recent studies have significantly improved the prediction accuracy of travel
demand using graph neural networks. However, these studies largely ignored
uncertainty that inevitably exists in travel demand prediction. To fill this
gap, this study proposes a framework of probabilistic graph neural networks
(Prob-GNN) to quantify the spatiotemporal uncertainty of travel demand. This
Prob-GNN framework is substantiated by deterministic and probabilistic
assumptions, and empirically applied to the task of predicting the transit and
ridesharing demand in Chicago. We found that the probabilistic assumptions
(e.g. distribution tail, support) have a greater impact on uncertainty
prediction than the deterministic ones (e.g. deep modules, depth). Among the
family of Prob-GNNs, the GNNs with truncated Gaussian and Laplace distributions
achieve the highest performance in transit and ridesharing data. Even under
significant domain shifts, Prob-GNNs can predict the ridership uncertainty in a
stable manner, when the models are trained on pre-COVID data and tested across
multiple periods during and after the COVID-19 pandemic. Prob-GNNs also reveal
the spatiotemporal pattern of uncertainty, which is concentrated on the
afternoon peak hours and the areas with large travel volumes. Overall, our
findings highlight the importance of incorporating randomness into deep
learning for spatiotemporal ridership prediction. Future research should
continue to investigate versatile probabilistic assumptions to capture
behavioral randomness, and further develop methods to quantify uncertainty to
build resilient cities. | Qingyi Wang, Shenhao Wang, Dingyi Zhuang, Haris Koutsopoulos, Jinhua Zhao | 2023-03-07T16:49:46Z | http://arxiv.org/abs/2303.04040v2 | # Uncertainty Quantification of Spatiotemporal Travel Demand with Probabilistic Graph Neural Networks
###### Abstract
Recent studies have significantly improved the prediction accuracy of travel demand using graph neural networks. However, these studies largely ignored uncertainty that inevitably exists in travel demand prediction. To fill this gap, this study proposes a framework of probabilistic graph neural networks (Prob-GNN) to quantify the spatiotemporal uncertainty of travel demand. This Prob-GNN framework is substantiated by deterministic and probabilistic assumptions, and empirically applied to the task of predicting the transit and ridesharing demand in Chicago. We found that the probabilistic assumptions (e.g. distribution tail, support) have a greater impact on uncertainty prediction than the deterministic ones (e.g. deep modules, depth). Among the family of Prob-GNNs, the GNNs with truncated Gaussian and Laplace distributions achieve the highest performance in transit and ridesharing data. Even under significant domain shifts, Prob-GNNs can predict the ridership uncertainty in a stable manner, when the models are trained on pre-COVID data and tested across multiple periods during and after the COVID-19 pandemic. Prob-GNNs also reveal the spatiotemporal pattern of uncertainty, which is concentrated on the afternoon peak hours and the areas with large travel volumes. Overall, our findings highlight the importance of incorporating randomness into deep learning for spatiotemporal ridership prediction. Future research should continue to investigate versatile probabilistic assumptions to capture behavioral randomness, and further develop methods to quantify uncertainty to build resilient cities.
probabilistic graph neural networks, uncertainty quantification, travel demand prediction
## I Introduction
Uncertainty prevails in urban mobility systems. An enormous number of uncertainty sources range from long-term natural disasters and climate change, to real-time system breakdowns, to the inherent randomness in travel demand. These uncertainty sources exhibit spatial and temporal patterns: travel demand can drastically change during the holiday seasons, or become spatially concentrated during special events (e.g. football games). Traditionally, spatial uncertainty is analyzed by spatial econometrics or discrete choice models, while temporal uncertainty is analyzed by time series models. Recently, deep learning models have been designed to capture the spatiotemporal correlations, with Long Short Term Memory (LSTM) networks for temporal correlations and convolutional or graph networks (CNN, GNN) for spatial dependencies. Deep learning has been shown to significantly improve the prediction accuracy of travel demand. However, most studies focus on designing _deterministic_ deep learning models to predict the _average_ travel demand, but largely ignore the uncertainty inevitably associated with any prediction.
Overlooking uncertainty leads to theoretical and practical problems. In theory, travel demand is inherently a random quantity, which is characterized by a rich family of probability distributions, sometimes even "fat-tailed" ones that render a simple average approximation highly inappropriate [1]. In practice, while the prediction of average demand provides a valuable basis for system design, the lack of uncertainty analysis precludes the opportunity of using deep learning to provide robust real-time service or resilient long-term planning [2, 3]. Since recent work has demonstrated the outstanding capability of deep learning in modeling spatiotemporal dynamics, it seems timely and imperative to enrich the deterministic deep learning models to capture the spatiotemporal uncertainty of travel demand.
To address this research gap, we propose a framework of probabilistic graph neural networks (Prob-GNN) to quantify the spatiotemporal uncertainty in travel demand. This framework is substantiated by deterministic and probabilistic assumptions: the former refer to the various architectural designs in deterministic neural networks, while the latter refer to the probabilistic distributions with a variety of uncertainty characteristics. The framework is used to compare the two types of assumptions by both uncertainty and point prediction metrics. This study makes the following contributions:
1. We identify the research gap in quantifying uncertainty for travel demand using deep learning. Despite a vast number of deep learning techniques for predicting average travel demand, only a limited number of studies have sought to quantify spatiotemporal uncertainty.
2. To fill the research gap, the paper introduces a general framework of Prob-GNN, which comprises deterministic and probabilistic assumptions. This framework is further substantiated by twelve architectures, all of which can quantify the spatiotemporal uncertainty in travel demand.
3. We find that probabilistic assumptions significantly influence uncertainty predictions, while only slightly influencing point predictions. Conversely, the deterministic assumptions, such as graph convolution, network depth, and dropout rate, lead to a similar performance in both point and uncertainty predictions when probabilistic assumptions are fixed.
4. We further demonstrate the generalizability of the Prob-GNNs by comparing the performance across multiple
shifting domains caused by COVID-19. Although the mean prediction encounters significant prediction errors, the interval prediction is accurate across the domains.
## II Literature Review
The literature on spatiotemporal travel demand prediction using deep learning has grown significantly in recent years. Among these studies, only a very limited number have sought to understand predictive uncertainty. This literature review first examines the latest developments in demand prediction using deep learning, and subsequently delves into uncertainty quantification.
### _Travel Demand Prediction with Deep Learning_
Researchers have achieved considerable success in improving the prediction accuracy of spatiotemporal travel demand using advanced neural network architectural designs [4, 5, 6, 7]. Temporally, LSTM networks and Gated Recurrent Unit (GRU) layers are used to analyze the seasonal, weekly, and time-of-day trends [8, 9, 10]. Spatially, the analysis unit is often a station/stop, urban grid, individual, or census tract. The urban grid is often analyzed with convolutional neural networks (CNN) [4]. Individual travel demand is analyzed by neural networks with behavioral insights for innovative architectural design [11, 12, 13]. More complex urban structures, such as transit lines that connect upstream and downstream stops, can be represented by graphs. Graph convolutions (GCN) are then often used to model such spatial propagation of traffic/demand flow into adjacent nodes [14, 15, 16, 17]. The baseline GCNs have been expanded by defining multiple graphs to describe the spatial similarity in points of interest (POIs) and transportation connectivity [18, 19]. An alternative to GCN is the Graph Attention Network (GAT), which automatically learns the attentional weighting of each node, thus detecting long-range dependencies without handcrafted weighting mechanisms through adjacency matrices [20, 21].
Despite the abundance of deep learning models, few studies sought to quantify the travel demand uncertainty, typically because of narrowly defined prediction objectives and methods. The prediction objective - the average ridership - can hardly represent the randomness in travel demand, particularly for distributions with fat tails, in which the outliers deviating from the average value could predominate [1]. The prediction method - the family of deterministic neural networks - is only a specific example of probabilistic neural networks with homogeneous assumptions on the variance. Demand uncertainty, if unaccounted for, can negatively influence downstream planning tasks [3, 22]. Uncertainty can also propagate and be significantly enhanced at each stage in multi-stage models [23]. Despite its importance, only three studies in the travel prediction field tackled uncertainty with deep learning. One analyzed heteroskedastic noises with quantile regressions [24], one studied the uncertainty in sparse travel demand prediction [25], and the other quantified model uncertainty with Monte-Carlo dropouts [26].
### _Uncertainty Quantification Methods_
Research on uncertainty quantification has been developed in other deep learning fields. There are two major categories of uncertainty: data uncertainty and model uncertainty. Data uncertainty refers to the irreducible uncertainty inherent in the data generation process, while model uncertainty captures the uncertainties in the model parameters. For example, in linear regression, data uncertainty refers to the residuals, and model uncertainty refers to the standard errors of the estimated coefficients.
To characterize data uncertainty, either a parametric or non-parametric method can be used. Parametric methods refer to models that parameterize a probabilistic distribution. Parametric models are often estimated by either Bayesian methods or mean-variance estimation (MVE). Although conceptually appealing, Bayesian methods often involve unnecessarily intense computation, relying on sampling methods and variational inference [27, 28]. MVE minimizes the negative log-likelihood (NLL) loss based on a pre-specified distribution of the dependent variable [29, 30]. MVE is computationally efficient, but it can lead to misleading results if probabilistic distributions are misspecified. Non-parametric methods quantify uncertainty without explicitly imposing any parametric form. Typical examples include quantile regression [31] and Lower Upper Bound Estimation (LUBE) [27, 28]. Non-parametric methods do not have misspecification problems, but they are hard to optimize in complex neural networks. The pros and cons of both methods are systematically compared in review articles [32, 33].
Besides data uncertainty, uncertainty also emerges in model construction and training. Bayesian deep neural networks refer to models that characterize all parameters of neural networks by probability distributions, typically a Gaussian distribution [25, 34]. A popular, low-cost way to do approximate Bayesian inference for neural networks with Gaussian parameters is to use dropouts [26, 35, 36]. Another source of model uncertainty comes from model training. Neural networks tend to converge to different local minima, resulting in different predictions. This type of uncertainty is usually addressed by bootstrapping or model ensembles [37, 38]. Bootstrapping refers to the process of repeated training with re-sampling to capture the distributional uncertainty in sampling. Model ensembling refers to an average over multiple training procedures with different parameter initializations.
Although the literature on travel demand uncertainty within the deep learning framework is scarce, classical statistics has decades of effort in quantifying travel demand uncertainty, with a focus on evaluating a rich family of probabilistic assumptions. Using high-resolution temporal ridership data, researchers adopted the ARIMA-GARCH model to quantify the volatility of subway demand during special events [39] and adaptive Kalman filters to analyze real-time transit ridership [40]. Using cross-sectional data, researchers used bootstrapping [41] to analyze parameter uncertainty, ensembling [42] to analyze activity uncertainty, and heteroskedastic errors to describe the association between social demographics and ridership uncertainty in discrete choice models [41].
The common debates regarding the probabilistic assumptions include homoskedastic vs. heteroskedastic errors, Gaussian vs. exponential tails, real-line vs. positive support, and many others [39, 43, 44], through which researchers could learn the detailed characteristics of travel demand uncertainty. The primary focus from the classical statistics on the probabilistic assumptions presents an interesting difference from the primary focus of the deep learning field in refining the deterministic assumptions. Indeed, it is quite ambiguous which set of assumptions is more crucial in quantifying the uncertainty of travel demand.
## III Theoretical Framework
The authors propose a probabilistic graph convolutional neural network to compare the deterministic and probabilistic assumptions. The probabilistic GNN is represented as:
\[y\sim\mathcal{G}(\theta)=\mathcal{G}(\mathcal{F}(\mathbf{X},w)) \tag{1}\]
The first statement is a probability assumption about \(y\), specified by the model family \(\mathcal{G}(\theta)\), and the second uses the model family \(\mathcal{F}(\mathbf{X},w)\) to parameterize the probability distributions of \(\mathcal{G}(\theta)\), with \(\mathbf{X}\) being the inputs and \(w\) being the model weights. While recent studies focus on enriching \(\mathcal{F}(\mathbf{X},w)\) through spatiotemporal deep learning architectures, the potentially more critical probability assumption \(\mathcal{G}(\theta)\) is largely neglected. The authors seek to compare the effects of the probabilistic assumptions \(\mathcal{G}\) and the deterministic assumptions \(\mathcal{F}\) in determining the model performance.
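To ground Eq. (1), the following minimal PyTorch sketch (names and dimensions hypothetical) shows the two-stage structure: a network \(\mathcal{F}\) maps inputs to distribution parameters \(\theta\), which instantiate \(\mathcal{G}\), here a heteroskedastic Gaussian; the paper's actual \(\mathcal{F}\) is a GCN/GAT + LSTM stack rather than this toy encoder.

```python
import torch
import torch.nn as nn

class ProbHead(nn.Module):
    """Sketch of y ~ G(theta) = G(F(X, w)): an encoder F maps inputs to
    the parameters theta of G, here a heteroskedastic Gaussian. A toy
    MLP stands in for the paper's GCN/GAT + LSTM stack."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)
        self.sigma_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.encoder(x)
        mu = self.mu_head(h)
        sigma = nn.functional.softplus(self.sigma_head(h)) + 1e-6  # sigma > 0
        return torch.distributions.Normal(mu, sigma)

dist = ProbHead(in_dim=16)(torch.randn(8, 16))
nll = -dist.log_prob(torch.rand(8, 1)).mean()  # NLL training objective
```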
Table I summarizes the specifics of the probabilistic and deterministic assumptions that substantiate the probabilistic GCN framework. A cross product of six probabilistic assumptions in \(\mathcal{G}\) and two deterministic assumptions in \(\mathcal{F}\) is tested, leading to twelve base models. The six probabilistic assumptions are: Homoskedastic Gaussian (HomoG), Poisson (Pois), Heteroskedastic Gaussian (HetG), Truncated Gaussian (TG), Gaussian Ensemble (GEns), and Laplace (Lap). Two deterministic architectures, Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT), are used to specify the probabilistic parameters.
### _Probabilistic Assumptions \(\mathcal{G}\)_
The Gaussian distribution with a deterministic and homoskedastic variance term is chosen as the benchmark in specifying \(\mathcal{G}\) because it is stable and has simple ensembling properties. This homoskedastic Gaussian assumption represents the vast number of deterministic deep learning models that use mean squared error as the training objective. The Gaussian benchmark facilitates the comparison across the probabilistic assumptions, as listed below.
1. Homoskedasticity vs. heteroskedasticity We compare the heteroskedastic and homoskedastic Gaussian assumptions to examine how the data variance influences the model performance. The homoskedastic assumption assumes the same variance for all observations, whereas the heteroskedastic assumption estimates a variance for every observation. It is highly likely that the travel demand variance has spatiotemporal patterns.
2. Continuous vs. discrete support We compare the Poisson distribution to the homoskedastic Gaussian benchmark to examine the effectiveness of continuous vs. discrete supports in determining the model performance. Since ridership takes integer values, the Poisson distribution could be more appropriate. However, the Poisson distribution uses only one parameter to represent both mean and variance, which could be overly restrictive.
3. Real-line vs. non-negative support We compare the truncated heteroskedastic Gaussian distribution to the heteroskedastic Gaussian distribution to examine the effectiveness of the real-line vs. non-negative distribution support. Travel demand is non-negative, but the support of the Gaussian distribution covers the entire real line \((-\infty,\infty)\). Therefore, a normal distribution left-truncated at zero is implemented to test whether non-negativity should be strictly imposed.
4. Gaussian vs. exponential tails We compare the Laplace distribution to the heteroskedastic Gaussian distribution to examine whether the tail behavior of the distribution matters. The probability density function of the Gaussian distribution decays at a fast rate of \(e^{-x^{2}}\), while the Laplace distribution has a heavier tail with the slower decay rate of \(e^{-|x|}\).
5. Single vs. ensembled models We compare the ensembled heteroskedastic Gaussian distributions to a single heteroskedastic Gaussian model to test whether an ensemble of distributions can outperform a single distributional assumption. The ensemble model is created by uniformly mixing \(K\) estimates, which are trained independently with different parameter initializations. Suppose \(\mathcal{Y}_{k},\sigma_{k}^{2}\) are the estimated mean and variance for model \(k\), and \(\mathcal{Y}_{*}\) is the ensembled random variable. For the Gaussian distribution, the mixture can be further approximated as a Gaussian distribution whose mean and variance are respectively the mean and variance of the mixture. The ensembled distribution is given by [38]: \(\mathcal{Y}_{*}\sim\mathcal{N}(\frac{1}{K}\sum_{k}\mathcal{Y}_{k},\frac{1}{K} \sum_{k}\left(\sigma_{k}^{2}+\mathcal{Y}_{k}^{2}\right)-\mathcal{Y}_{*}^{2})\); a small numerical sketch of this moment matching follows this list.
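As referenced in item 5, this is a minimal sketch of the moment-matching rule for a uniform mixture of \(K\) Gaussians; the five means and standard deviations below are made-up numbers for illustration.

```python
import numpy as np

def gaussian_ensemble(mus, sigmas):
    """Moment-match a uniform mixture of K Gaussians to a single Gaussian:
    mu* = mean of the means; var* = mean of (sigma_k^2 + mu_k^2) - mu*^2."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    mu_star = mus.mean(axis=0)
    var_star = (sigmas ** 2 + mus ** 2).mean(axis=0) - mu_star ** 2
    return mu_star, np.sqrt(var_star)

# K = 5 independently trained models; the numbers are made up for illustration
mu_star, sigma_star = gaussian_ensemble(
    mus=[10.2, 9.8, 10.5, 10.0, 9.9], sigmas=[2.0, 2.2, 1.9, 2.1, 2.0])
```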
With the distributional assumption, maximum likelihood estimation (MLE) can be used to learn the parameters, which translates to minimizing the negative log-likelihood (NLL) loss in implementation. The NLL loss function based on the joint density of all observations is simply the negative sum of the log of the probability density of all observations:
\[\text{NLL}=-\sum_{s,t}\text{log }\mathcal{G}(y_{st}|\mathbf{X_{t}};w) \tag{2}\]
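As a sketch of the NLL objective of Eq. (2) under several of the probabilistic assumptions, the snippet below uses `torch.distributions` with random stand-in tensors; the zero-truncated Gaussian is obtained by renormalizing the Gaussian density by \(1-F(0)\), consistent with Table I.

```python
import torch
import torch.distributions as D

def nll(dist, y):
    """Negative log-likelihood of Eq. (2), summed over stations and times."""
    return -dist.log_prob(y).sum()

mu = torch.rand(4, 3) + 1.0      # predicted means F_1(X, w_1)
sigma = torch.rand(4, 3) + 0.5   # predicted scales F_2(X, w_2)
y = torch.rand(4, 3)             # observed demand (stand-in values)

losses = {
    "HetG": nll(D.Normal(mu, sigma), y),
    "Lap":  nll(D.Laplace(mu, sigma), y),
    "Pois": nll(D.Poisson(mu), y.round()),  # Poisson support is integer counts
}

# Zero-truncated Gaussian: renormalize the density by P(y > 0) = 1 - F(0)
base = D.Normal(mu, sigma)
log_tg = base.log_prob(y) - torch.log1p(-base.cdf(torch.zeros_like(y)))
losses["TG"] = -log_tg.sum()
```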
In the uncertainty quantification literature, researchers typically differentiate between data (aleatoric) and model (epistemic) uncertainty [47], and their sum is referred to as prediction uncertainty. The model uncertainty refers to the uncertainty resulting from the difference between \(\mathcal{F}(\mathbf{X},w)\) and the estimated \(\hat{\mathcal{F}}(\mathbf{X},w)\). The data uncertainty refers to the randomness in the data generation process and is represented by the probabilistic assumptions \(\mathcal{G}\), of which \(y\) has the distribution. The prediction uncertainty combines model and
data uncertainty, and describes the overall difference between the actual \(y\) and the predicted \(\hat{y}\). The relationship between the three quantities is directly given by \(\sigma_{y}^{2}=\sigma_{model}^{2}+\sigma_{data}^{2}\). In our framework, assumptions 1-4 deal with the data uncertainty alone; while assumption 5 quantifies the prediction uncertainty.
### _Deterministic Assumptions in \(\mathcal{F}\)_
The deterministic assumptions in \(\mathcal{F}\) consist of the spatial encoding, temporal encoding, and associated hyperparameter specifications. The following two subsections introduce the formulation of common spatial and temporal encodings.
#### III-B1 GCN and GAT for Spatial Encoding
Two predominant spatial encoding methods - GCN and GAT - are adopted and compared to examine the effect of the deterministic architectures on model performance. The GCN layers need to access the global structure of the graph in the form of adjacency matrices, while the GAT layers aim to learn the spatial correlations from data. The propagation formulas of both layers are introduced in Table I.
To construct the multi-graph proximity in GCNs and GATs, four types of adjacency matrices are computed, including direct connectivity \([A_{Con}]_{ij}=1\) if two stations are adjacent, \(=0\) otherwise, network distance \([A_{Net}]_{ij}=\text{Network Distance}(i,j)^{-1}\), Euclidean distance \([A_{Euc}]_{ij}=\text{Euclidean Distance}(i,j)^{-1}\), and functional similarity \([A_{Func}]_{ij}=\sqrt{(F_{i}-F_{j})^{T}(F_{i}-F_{j})}^{-1}\), where \(F_{i}\) is the vector of functionalities represented by population, jobs, percent low income, percent minority, percent adults, number of schools, shops, restaurants, etc.
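A hedged sketch of how such multi-graph adjacency matrices might be assembled; the input names (coordinates, network-distance matrix, connectivity matrix, functionality vectors) are hypothetical placeholders for the data described above.

```python
import numpy as np

def adjacency_matrices(coords, net_dist, connect, feats):
    """Sketch of the four multi-graph adjacency matrices described above.
    Inputs are hypothetical placeholders: station coordinates, a
    network-distance matrix, a 0/1 connectivity matrix, and per-station
    functionality vectors F_i (population, jobs, POIs, ...)."""
    eps = 1e-6  # avoid division by zero
    diff = coords[:, None, :] - coords[None, :, :]
    A_euc = 1.0 / (np.linalg.norm(diff, axis=-1) + eps)
    A_net = 1.0 / (net_dist + eps)
    fdiff = feats[:, None, :] - feats[None, :, :]
    A_func = 1.0 / (np.linalg.norm(fdiff, axis=-1) + eps)
    for A in (A_euc, A_net, A_func):
        np.fill_diagonal(A, 0.0)  # no self-similarity
    return connect, A_net, A_euc, A_func
```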
#### III-B2 LSTM for Temporal Encoding
The LSTM network is chosen for temporal encoding. In short, the LSTM layers take the spatially encoded tensors for \(l\) previous time periods as inputs and propagate them down the layers through a series of input, forget, cell state (memory), and output gates. To make the prediction, the encoded output from the last time step \(h_{t-1}\) is decoded by fully connected layers to get the contribution (weights) of the time series on the final prediction. The LSTM structure is well documented in many papers [24, 39] and will not be discussed in detail here.
### _Specific Examples: Gaussian Distributions and Mean-Variance Estimation_
As an important case, the Gaussian distribution with homoskedastic variance (HomoG-GCN and HomoG-GAT) represents the vast number of deterministic deep learning models that use mean squared errors as the training objective. With the homoskedastic Gaussian assumption, the dependent variable \(y\) follows Gaussian distribution with mean \(F_{1}(\mathbf{X},w_{1})\) and a constant variance \(c\). The MLE with this homoskedastic Gaussian distribution is the same as the deterministic deep learning with the mean squared errors as the training objective because
\[\log P(x;\mu,\sigma=c)=\log\frac{1}{c\sqrt{2\pi}}-\frac{1}{2}\frac{(x-\mu)^{2} }{c^{2}} \tag{3}\]
In other words, our benchmark example represents the predominant deterministic modeling technique in this field.
The homoskedastic Gaussian can be extended to the heteroskedastic Gaussian distribution, which resembles mean-variance estimation (MVE), the most dominant method in the uncertainty quantification literature. With the heteroskedastic Gaussian distribution, \(y\sim\mathcal{N}(\theta)=\mathcal{N}(F_{1}(\mathbf{X},w_{1}),F_{2}(\mathbf{X},w_{2}))=F_{1}(\mathbf{X},w_{1})+\mathcal{N}(0,F_{2}(\mathbf{X},w_{2}))\). Essentially, the MVE uses two graph neural networks to capture the mean and variance separately, which is the same as the HetG-GCN and HetG-GAT models. Therefore, the two Gaussian examples in our probabilistic GNN framework represent the two most common research methods, namely the deterministic spatiotemporal models and the MVE for uncertainty quantification.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
\multicolumn{3}{l}{**Panel 1: Probabilistic Assumptions in \(\mathcal{G}\)**} \\ \hline Probabilistic assumptions & Probability density function & Distribution parameters \(\theta\) \\ \hline Homoskedastic Gaussian (HomoG) & \(f_{HomoG}(x;\mu,\sigma)=\frac{1}{c\sqrt{2\pi}}\exp(-\frac{1}{2}\frac{(x-\mu)^{2}}{c^{2}})\), \(\sigma=c\) & \(\mu\) \\ Poisson (Pois) & \(f(x;\lambda)=\frac{\lambda^{x}e^{-\lambda}}{x!}\) & \(\lambda\) \\ Heteroskedastic Gaussian (HetG) & \(f_{HetG}(x;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{1}{2}\frac{(x-\mu)^{2}}{\sigma^{2}})\) & \(\mu,\sigma\) \\ Truncated Gaussian (TG) & \(f_{TG}(x;\mu,\sigma)=\frac{f_{HetG}(x;\mu,\sigma)}{1-F_{HetG}(0;\mu,\sigma)}\), \(x\geq 0\) & \(\mu,\sigma\) \\ Gaussian Ensemble (GEns) & \(y_{*}\sim\mathcal{N}(\frac{1}{K}\sum_{k}\mu_{k},\ \frac{1}{K}\sum_{k}(\sigma_{k}^{2}+\mu_{k}^{2})-\mu_{*}^{2})\) & \(\mu_{k},\sigma_{k}\) for \(k=1..K\) models \\ Laplace (Lap) & \(f(x;\mu,b)=\frac{1}{2b}\exp(-\frac{|x-\mu|}{b})\) & \(\mu,b\) \\ \hline \hline
\multicolumn{3}{l}{**Panel 2: Deterministic Assumptions in \(\mathcal{F}\)**} \\ \hline Deterministic assumptions & Graph convolutional iteration function & Deterministic parameters \(w\) \\ \hline Graph Convolutional Networks (GCN) [45] & \(h^{(l+1)}=\sigma(\sum_{r=1}^{R}\hat{D}_{r}^{-\frac{1}{2}}\hat{A}_{r}\hat{D}_{r}^{-\frac{1}{2}}h^{(l)}W_{r}^{(l)})\) & \(W_{r}^{(l)}\) \\ Graph Attention Networks (GAT) [46] & \(h_{i}^{(l+1)}=\sigma(\sum_{j\in\mathcal{N}(i)}\alpha_{ij}Wh_{j}^{(l)})\), \(\alpha_{ij}=\frac{\exp(e_{ij})}{\sum_{k\in\mathcal{N}(i)}\exp(e_{ik})}\) & \(\alpha_{ij},W\) \\ \hline \multicolumn{3}{l}{Other hyperparameters in deterministic assumptions \(\mathcal{F}\): Number of GCN/GAT/LSTM layers, Number of hidden layer neurons, Dropouts, Weight decay.} \\ \hline \hline \end{tabular}
\end{table} TABLE I: Model Design. Upper panel: Probabilistic assumptions and their probability density functions. Bottom panel: Deterministic architecture and hyperparameters. A cross product of the two panels is used to substantiate the probabilistic GCNs, leading to twelve base models: HomoG-GCN, HomoG-GAT, HetG-GCN, HetG-GAT, TG-GCN, TG-GAT, GEns-GCN, GEns-GAT, Pois-GCN, Pois-GAT, Lap-GCN, and Lap-GAT.
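A minimal sketch of the multi-relational GCN propagation rule in Panel 2 of Table I, with symmetric normalization \(\hat{D}_{r}^{-1/2}\hat{A}_{r}\hat{D}_{r}^{-1/2}\) and one weight matrix per graph; layer sizes and initialization are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiGraphGCNLayer(nn.Module):
    """Sketch of the multi-relational GCN rule in Table I (Panel 2):
    one symmetric-normalized adjacency and one weight matrix per graph r."""
    def __init__(self, in_dim, out_dim, adjs):
        super().__init__()
        self.norm_adjs = []  # fixed tensors, kept in a plain list for brevity
        for A in adjs:
            A_hat = A + torch.eye(A.size(0))          # add self-loops
            d = A_hat.sum(dim=1).pow(-0.5)
            self.norm_adjs.append(d[:, None] * A_hat * d[None, :])
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(in_dim, out_dim) * 0.01) for _ in adjs])

    def forward(self, h):  # h: (num_nodes, in_dim)
        out = sum(A @ h @ W for A, W in zip(self.norm_adjs, self.weights))
        return torch.relu(out)

adjs = [torch.rand(5, 5) for _ in range(2)]        # toy multi-graph input
out = MultiGraphGCNLayer(8, 16, adjs)(torch.randn(5, 8))
```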
### _Evaluation_
Performance metrics can be grouped into three categories: composite measures, point prediction quality, and uncertainty prediction quality. Table II summarizes the evaluation metrics. The NLL loss is a composite metric to evaluate the joint quality of point and uncertainty estimates. The standard mean absolute error (MAE) and mean absolute percent error (MAPE) are used to evaluate point predictions.
The quality of the uncertainty estimates is less straightforward to evaluate, because the ground-truth distributions are unknown. Therefore, we design the calibration error (CE) metric and also visualize quantile-quantile plots to measure the distributional fit. Let \(F_{i}(y)\) represent the cumulative distribution function of observation \(i\). Define \(q(p)=\mathbb{P}(F_{i}(y_{i})\leq p)\) as the proportion of observations that actually fall into the \(p\)-th quantile of the estimated distribution. A well-calibrated model should generate distributions that align with the empirical distribution so that \(q(p)=p\). Therefore, the calibration error associated with the quantile-quantile plots is defined as the sum of the deviations between the empirical quantiles and the predicted quantiles, approximated by a number of discrete bins from \([0,1]\): \(CE=\sum_{p=0}^{1}|q(p)-p|\). An alternative metric is the simultaneous use of the Prediction Interval Coverage Probability (PICP) and the Mean Prediction Interval Width (MPIW) [27, 28, 32]. Formally, \(PICP=\frac{1}{n}\sum_{i=1}^{n}c_{i}\), where \(c_{i}=\mathds{1}\{L_{i}\leq y_{i}\leq U_{i}\}\), an indicator of whether observation \(i\) falls within the prediction interval bounded by \(L_{i}\) and \(U_{i}\). \(MPIW=\frac{1}{n}\sum_{i=1}^{n}\left(U_{i}-L_{i}\right)\) measures the average width of the intervals. With confidence level \(1-\alpha\), the lower bound of the prediction interval is given by \(L=F_{i}^{-1}(\frac{\alpha}{2})\), and the upper bound by \(U=F_{i}^{-1}(1-\frac{\alpha}{2})\), where \(F_{i}^{-1}\) is the predicted inverse cumulative distribution function of observation \(i\). Using this approach, the evaluation of uncertainty quantification involves a tradeoff between PICP and MPIW. A high-quality prediction interval should be narrow while covering a large proportion of data points; however, a wider MPIW naturally leads to a larger PICP. This tradeoff poses a challenge in model selection, since it is difficult for one model to dominate in both metrics.
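These metrics can be computed directly from a predictive distribution; the sketch below assumes a Gaussian predictive distribution and uses NumPy/SciPy stand-ins rather than the paper's own code.

```python
import numpy as np
from scipy import stats

def uncertainty_metrics(y, mu, sigma, alpha=0.05, n_bins=20):
    """Sketch of CE, PICP, and MPIW for a Gaussian predictive distribution
    (illustrative; the paper's implementation is not shown here)."""
    # Calibration error: deviation |q(p) - p| over discrete bins of [0, 1]
    pit = stats.norm.cdf(y, loc=mu, scale=sigma)             # F_i(y_i)
    ps = np.linspace(0.0, 1.0, n_bins + 1)
    ce = sum(abs((pit <= p).mean() - p) for p in ps)
    # Prediction interval coverage and width at confidence level 1 - alpha
    lo = stats.norm.ppf(alpha / 2, loc=mu, scale=sigma)      # L_i
    hi = stats.norm.ppf(1 - alpha / 2, loc=mu, scale=sigma)  # U_i
    picp = ((y >= lo) & (y <= hi)).mean()
    mpiw = (hi - lo).mean()
    return ce, picp, mpiw
```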
## IV Case Study
Two case studies were conducted to examine the effect of deterministic and probabilistic assumptions on predicting travel demand, with a focus on estimating uncertainty. The case studies use data from the Chicago Transit Authority's (CTA) rail system and ridesharing in Chicago. This section describes data sources, the spatial and temporal characteristics of the two datasets, and the experiment setup. Our experiments are implemented in PyTorch and the source code is available at [https://github.com/sunnyqwang/uncertainty](https://github.com/sunnyqwang/uncertainty).
### _Data_
The CTA rail and bus data was obtained through collaboration with the CTA, while the ridesharing data was sourced from the City of Chicago Open Data Portal1. The CTA rail system has tap-in records for each trip, which were aggregated into 15-minute intervals for temporal analysis. The system comprises 141 stations, arranged in a graph structure for spatial analysis. The spatial relationships can be identified by adjacency matrices constructed from the Euclidean, connectivity, network, and functional similarity between stations. Ridesharing trips are available at the census tract level, in 15-minute intervals. The graph structure is determined by the relationships between census tracts. As there is no explicit network, the ridesharing adjacency matrices are defined by Euclidean, connectivity (neighboring census tracts are considered connected), and functional similarity only. Most census tracts have close to no ridesharing trips in most 15-minute intervals. Since learning sparse graphs is a topic of its own [48], the ridesharing dataset is filtered to include only those census tracts that, on average, had more than 30 trips per hour pre-COVID. This resulted in a total of 59 census tracts being used in the analysis.
Footnote 1: [https://data.cityofchicago.org/Transportation/Transportation-Network-Providers-Tripps/m6dm-c72p](https://data.cityofchicago.org/Transportation/Transportation-Network-Providers-Tripps/m6dm-c72p)
Moran's I was calculated for each 15min ridership snapshot using each of the weight matrices to demonstrate that spatial autocorrelation exists through the defined adjacency matrices. Figure 1 shows the histograms of Moran's I for both data sources. Most statistics have averages well above 0, indicating the existence of spatial autocorrelations defined by the adjacency matrices. Ridesharing trips appear to be less correlated with Euclidean distance and functional similarity, but the correlation between bordering tracts is very strong.
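A compact sketch of the Moran's I statistic used here, computed for one ridership snapshot under one of the weight matrices (this is the standard formula; the paper's implementation details are not shown).

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for one ridership snapshot x (length-n vector) under a
    spatial weight matrix W with zero diagonal, e.g. one of the adjacency
    matrices above."""
    n = len(x)
    z = x - x.mean()
    return (n / W.sum()) * (W * np.outer(z, z)).sum() / (z ** 2).sum()
```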
Five different explanatory variables are obtained for each data source. First, the system's own history, both recent (the past 1.5 hours) and historical (the same time period last week), is used to account for the temporal correlation. Second, the history of the other travel modes is included to mine the correlations between different modes: in the case of CTA rail, ridesharing and bus counts are included, and in the case of ridesharing, CTA rail and bus counts are included.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Category** & **Metric** & **Formula** \\ \hline Composite & Negative Log Likelihood & \(NLL=-\sum_{i}\log P_{W}(y_{i}|\mathbf{X}_{i})\) \\ \hline \multirow{2}{*}{Point} & Mean Absolute Error & \(MAE=\frac{1}{n}\sum_{i}|y_{i}-\hat{y}_{i}|\) \\ & Mean Absolute Percent Error & \(MAPE=MAE/\bar{y}\) \\ \hline \multirow{3}{*}{Uncertainty} & Calibration Error & \(CE=\sum_{p=0}^{1}|q(p)-p|\) \\ & Mean PI Width & \(MPIW=\frac{1}{n}\sum_{i=1}^{n}\left(U_{i}-L_{i}\right)\) \\ \cline{1-1} & PI Coverage Probability & \(PICP=\frac{1}{n}\sum_{i=1}^{n}\mathds{1}\{L_{i}\leq y_{i}\leq U_{i}\}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Three Categories of Evaluation Metrics
Fig. 1: Spatial Autocorrelation (Moran’s I)
Third, demographics2 and POIs 3 are spatial covariates used in the calculation of functional similarity. Fourth, weather4 is a temporal covariate that is assumed to be the same in the whole study area. Lastly, the frequency of services in each spatial unit during each time period is used to indicate supply levels. In the case of ridesharing, we do not have supply information; instead, we use the bus frequency as a proxy. Bus schedules are slow to respond during normal times, but after March 2020 the CTA has been actively adjusting service to account for labor shortages and changing demand.
Footnote 2: [https://www.census.gov/programs-surveys/acs/data.html](https://www.census.gov/programs-surveys/acs/data.html)
Footnote 3: [https://planet.openstreetmap.org/](https://planet.openstreetmap.org/)
Footnote 4: [https://www.ncdc.noaa.gov/cdo-web/](https://www.ncdc.noaa.gov/cdo-web/)
### _Experiment Setup_
The datasets were split into three subsets along the temporal dimension: train, validation, and test. The training set consists of data from August 1, 2019 to Feb 16, 2020, the validation set from Feb 17, 2020 to March 1, 2020, and the testing set consists of four different two-week periods during the course of the COVID-19 pandemic in 2020: immediately before (March 2 to March 15), stay-at-home (March 16 to March 29), initial recovery (June 22 to July 5), and steady recovery (Oct 18 to Oct 31). Significant changes emerged starting March 2020, when confirmed COVID-19 cases were growing rapidly and the city issued stay-at-home orders. Since then, the ridership has seen different stages of recovery. The changes in ridership of a few stations/census tracts are shown in Figure 2. Meanwhile, the changes induced by COVID-19 provide an opportunity to test the temporal generalizability from pre- to post-COVID periods.
Several other architectural considerations have been taken into account besides the GCN/GAT spatial convolutions. Figure 3 summarizes the deterministic architecture, consisting of three components: spatiotemporal layers (GCN/GAT+LSTM) for the recently observed demand, and linear layers to connect last week's observation (history) and weather (temperature and precipitation). The three components are indexed by \(r,h,p\), respectively, and summed to form the final prediction. First, due to the cyclic nature of travel demand, the travel demand time series can be decomposed into a weekly component (\(\hat{\mathcal{Y}}_{t}^{h}\), \(\hat{\sigma}_{t}^{h}\)) and a time-of-day component (\(\hat{\mathcal{Y}}_{t}^{r}\), \(\hat{\sigma}_{t}^{r}\)). The weekly component is calculated from a reference demand. A good reference demand is the observed value for the same region and time in the previous week. The weights \(w_{h}\) are obtained from the LSTM encodings. Additionally, the recent demand used for the LSTM network inputs is the deviation from last week's demand, and the decoded outputs are used to produce both the weekly and the time-of-day components. Intuitively, if the recent residual demand is very different from the reference, the reference should have a smaller weight on the final prediction, and the final prediction is more uncertain. The time-of-day component is directly obtained from the LSTM encodings of the recent residual demand. Next, weather is another source of temporal variation, because extreme weather can have an evident impact on transit ridership. The model takes daily deviations from the averages of precipitation \(P_{T}\) and temperature \(T_{E}\), and multiplies them with spatiotemporal weights \(W^{p},W^{t}\in\mathbb{R}^{2\times T\times S}\) to get \([\hat{\mathcal{Y}}_{t}^{p},\hat{\sigma}_{t}^{p}]\).
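The additive combination described above can be sketched as follows; the tensor names and the independence assumption used to combine the variance components are our own illustrative choices, not necessarily the paper's exact rule.

```python
import torch

def combine_components(ref, w_h, mu_r, sigma_h, sigma_r, mu_p, sigma_p):
    """Sketch of the additive decomposition in Fig. 3: a weighted weekly
    reference (last week's demand), a time-of-day component decoded from
    LSTM encodings of the recent residual demand, and a weather term.
    Combining the variance components by assuming independence is our own
    illustrative choice, not necessarily the paper's exact rule."""
    mu = w_h * ref + mu_r + mu_p
    sigma = torch.sqrt(sigma_h ** 2 + sigma_r ** 2 + sigma_p ** 2)
    return mu, sigma
```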
For each dataset and each deterministic architecture, six probabilistic assumptions are tested: homoskedastic Gaussian (HomoG), Poisson (Pois), heteroskedastic Gaussian (HetG), truncated Gaussian (TG), Laplace (Lap), and ensembled Gaussian (GEns). In the HomoG model, a search for the best standard deviation was done. Values were searched in multiples of \(\bar{y}\): \(\frac{1}{4},\frac{1}{2},\frac{3}{4},1\), and the best value is \(\frac{1}{2}\bar{y}\) for both datasets. In GEns, the five top HetG models by validation-set loss are selected to create an ensemble model. The ensemble distribution is given by [38]: \(\mathcal{Y}_{*}\sim\mathcal{N}(\frac{1}{K}\sum_{k}\mathcal{Y}_{k},\frac{1}{K} \sum_{k}(\sigma_{k}^{2}+\mathcal{Y}_{k}^{2})-\mathcal{Y}_{*}^{2})\).
## V Results and Discussion
We present and compare the model performances of the Prob-GNNs under six probabilistic and two deterministic assumptions on two separate datasets. First, we analyze the model performance for periods immediately after the training period, that is, from March 2 to March 15, 2020. We demonstrate that probabilistic assumptions can influence the model performance more significantly than deterministic architectures. Next, we show that uncertainty predictions are important especially when point predictions become unreliable during system disruptions, by applying models trained with the pre-COVID training set to the post-COVID test sets. Lastly, we discuss the spatiotemporal patterns of uncertainty revealed by the models.
Fig. 2: Average Daily Ridership (T1: Immediately Before; T2: Stay-at-home; T3: Initial Recovery; T4: Steady Recovery)
Fig. 3: Proposed model architecture. The model processes past demand information using spatial and temporal layers, with the addition of auxiliary information to produce estimates of distribution parameters
First, to illustrate the models' proper convergence, Figure 4 presents the GCN model training curves for all probabilistic assumptions trained on CTA Rail. All models demonstrated successful convergence under the same learning rate. However, the number of epochs required for convergence varied among the models. The two-parameter distributions (HetG, TG, Laplace) converged faster, and achieved a lower NLL compared to the one-parameter distributions (HomoG, Pois).
The model performances on the CTA Rail and ridesharing data, for the test set between March 2 and March 15, 2020, are tabulated in Table III. The table has two panels, for GCN and GAT models, respectively. Each panel presents the six different probabilistic assumptions described in Section IV-B. Model performances are measured on three categories of metrics: composite (NLL), uncertainty (calibration error, MPIW, and PICP), and point (MAE, MAPE). In the tables, the intended level of coverage was set to 95%. The next two sections discuss the findings from Table III in detail.
### _Significance of Probabilistic and Deterministic Assumptions_
The quality of uncertainty quantification varies remarkably with the probabilistic assumptions. The performance is significantly higher in the probabilistic models with two parameters (HetG and Lap) than in the single-parameter models (HomoG and Pois). Among all GCN models, the average NLL and calibration error for two-parameter distributions in the CTA Rail data are \(465.9\) and \(0.033\), and for one-parameter distributions \(570.8\) (+22.5\%) and \(0.11\) (3.3\(\times\)). Similar observations can be made in the ridesharing dataset, where the average NLL is \(189.3\) vs. \(218.8\) (+15.4\%) and the calibration error is \(0.017\) vs. \(0.099\) (5.8\(\times\)). The same pattern holds for the GAT models.
The truncated Gaussian and Laplace distributions have the overall best distributional fit. Figure 5 illustrates the distributional fit by plotting the empirical quantiles \(q(p)\) against the theoretical quantiles \(p\) for both datasets across all the GCN models. The line \(y=x\) represents a perfectly calibrated model. The calibration error in Table III corresponds to the area between the \(y=x\) line and the empirical quantiles. Poisson and homoskedastic Gaussian are very far away from the \(y=x\) line, while the truncated Gaussian and Laplace trace the line most closely. Although the heteroskedastic Gaussian works best with GCN models on the ridesharing data, the truncated Gaussian and Laplace distributions had more stable performances across the datasets and models, and their curves are closer to the \(y=x\) line.
In contrast to the strong influence probabilistic assumptions have on the quality of uncertainty predictions, they exert little influence on the point prediction quality. For both datasets and both GCN and GAT, across the different probabilistic assumptions, the performance gap between the best and the worst point estimate is around 4%, significantly less than that of the composite and the uncertainty metrics.
The variations in predictive performance caused by the deterministic assumptions are also small compared to the probabilistic assumptions. Comparing the GCNs and GATs, the error metrics are similar across the models, and the performance patterns across probabilistic assumptions remain the same for GAT models. The one-parameter probabilistic assumptions are likely to be affected more by deterministic architectures. For all probabilistic assumptions except for Poisson, less than a 3% difference in NLL loss is observed between GCN and GAT models for the same dataset. Although the GAT allows the model to learn spatial relationships and is hence more flexible, the predefined adjacency matrices serve as domain knowledge to reduce the learning complexity. In theory, the GAT setup should perform better with more complex relationships or with larger sample sizes, and the GCN setup is better for more efficient learning under limited sample sizes. In this case, there is not a large difference between the two architectures.
### _Implications of Probabilistic Assumptions_
The probabilistic assumptions not only indicate the distributional fit, but also have practical implications for the ridership patterns. We discuss these implications in detail below.
#### IV-B1 Heteroskedasticity vs. Homoskedasticity (HetG vs. HomoG)
HomoG forces a constant variance across all observations, resulting in an inaccurate representation of the data. The HomoG models achieve around \(20\%\) higher NLL loss and \(1.5\) to \(2\) times higher calibration error than the HetG models. For example, the NLL loss of HetG-GCN is 462.9, 24% lower than the NLL loss (612.5) of HomoG-GCN for CTA Rail. Although predicting the same variance is inaccurate distributionally, the average MPIW and PICP are not much worse compared to their heteroskedastic counterparts, since the variance scale is searched and fixed. Additionally, the inaccurate distributional assumption does not affect the quality of the point estimate, and point estimate errors from HomoG models are only 1-2% higher than the best model, as the likelihood improvement can only come from a more accurate prediction of the mean. Regardless, having one fixed uncertainty parameter not only is an inaccurate representation, but also limits the model's flexibility to adapt to sharp changes in ridership magnitude.
Fig. 4: Training Curves. Top: GCN; Bottom: GAT
Fig. 5: Calibration Plot of GCN Models (a) CTA rail and (b) Ridesharing. The line \(y=x\) represents a perfectly-calibrated model. The calibration error (CE) corresponds to the areas between the \(y=x\) line and the empirical quantiles.
#### IV-B2 Continuous vs. Discrete (HomoG vs. Pois)
Both HomoG and Pois have the worst NLL loss among all probabilistic assumptions tested. Despite its discreteness, the Poisson assumption yields a similar or worse NLL and calibration error than HomoG, since it forces the mean and variance to be equal. However, for both datasets, the Poisson prediction intervals significantly under-cover the demand, meaning that the variance should be larger than the mean. The PICP of the Poisson prediction interval indicates the magnitude of the variance relative to the mean: the smaller the PICP, the larger the variance.
#### IV-B3 Real-line vs. non-negative support (HetG vs. TG)
Since the time resolution is 15min, left-truncation at 0 significantly improves the model performance as many predictions have means close to 0. TG is the best-performing model for the CTA Rail and the second-best for ridesharing. Even though the average ridership is around 40 per station and 23 per census tract, significant heterogeneity exists among different stations/census tracts. Sparsity (zero ridership) is an issue to be considered in short-term or real-time demand predictions.
#### IV-B4 Gaussian vs. Exponential tails (HetG vs. Lap)
Both distributions are characterized by two parameters and can be trained to describe the data accurately. The weights of the tails are different, and the model performances on the two probabilistic assumptions suggest behavioral differences. Compared to HetG, Lap has a heavier tail; hence the prediction intervals tend to be larger, covering more observations in more extreme cases. In all cases, Lap covers more points than intended.
#### IV-B5 Single vs. ensembled models (HetG vs. GEns)
Ensembling only makes marginal improvements to the model results. The NLL loss, calibration error, and MAE improve by less than 1% in all cases. For example, the biggest improvement occurs in the GCN models for the CTA Rail data, with the NLL loss for HetG-GCN being 462.9 and for GEns-GCN 459.6. Since ensembling aims at reducing model uncertainty, its ineffectiveness suggests that different training instances of the neural networks produce similar results, and the model uncertainty is low.
We also constructed prediction intervals with Monte Carlo dropout to further illustrate that the model uncertainty is much smaller than the data uncertainty inherent in the data generation process. Monte Carlo dropout approximates Bayesian neural networks and measures the model uncertainty by applying dropout at test time, treating each stochastic forward pass as an instance from the space of all available models. However, since the different training instances produce similar results, Monte Carlo dropout fails to capture the full picture. Its prediction intervals were very narrow, with PICPs between 30% and 40% for both datasets. Additionally, since we applied dropout at test time, the point prediction loses the benefit of dropout regularization and performs worse than in other models.
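For reference, a minimal sketch of Monte Carlo dropout intervals (generic PyTorch; the sampling count and the quantile-based interval construction are illustrative choices, not the paper's exact procedure):

```python
import torch

def mc_dropout_interval(model, x, n_samples=100, alpha=0.05):
    """Sketch of Monte Carlo dropout intervals: keep dropout active at
    test time and take empirical quantiles over repeated stochastic passes."""
    model.train()  # leaves nn.Dropout layers active during inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    lo = preds.quantile(alpha / 2, dim=0)
    hi = preds.quantile(1 - alpha / 2, dim=0)
    return lo, hi
```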
### _Model Performance under System Disruption_
The drastic change in pre- and post-COVID periods presents a unique opportunity to test the generalizability of Prob-GNNs. The previous sections use the periods immediately following the training set for validation and testing. This section compares the predictive performance at different stages of the COVID-19 pandemic, by applying the models trained with the pre-COVID training set to three post-COVID time periods. Table IV presents the results for CTA Rail and ridesharing. Since HomoG and Pois have poor performance in the previous test set, they are excluded from this comparison.
All the error metrics increase under the significant domain shifts from pre- to post-COVID. In the stay-at-home period, the average ridership for both systems dropped to less than 10% of pre-COVID levels. The NLL, MPIW, and MAE are not comparable to pre-COVID levels because they are influenced by the magnitude of ridership, while the calibration error, PICP, and MAPE are unit-free and can be compared to the values in Table III. Unsurprisingly, the performance during the stay-at-home period is relatively poor but slowly rebounds with the recovery of the ridership.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c||c|c|c|c|c|c} \hline \hline & \multicolumn{6}{c||}{CTA Rail} & \multicolumn{6}{c}{Ridesharing} \\ \hline & Comp & \multicolumn{3}{c|}{Uncertainty Prediction} & \multicolumn{2}{c||}{Point Prediction} & Comp & \multicolumn{3}{c|}{Uncertainty Prediction} & \multicolumn{2}{c}{Point Prediction} \\ \hline Model & NLL & Cal. Err & MPIW & PICP & MAE & MAPE & NLL & Cal. Err & MPIW & PICP & MAE & MAPE \\ \hline HomoG-GCN & 612.5 & 0.146 & 64.53 & 97.9\% & 7.64 & 18.5\% & 225.6 & 0.114 & 42.63 & 97.7\% & 5.81 & 25.2\% \\ Pois-GCN & 529.2 & 0.075 & 18.04 & 84.9\% & **7.18** & **17.4\%** & 211.9 & 0.083 & 15.53 & 82.7\% & 5.81 & 25.2\% \\ HetG-GCN & 462.9 & 0.049 & 39.04 & 96.5\% & 7.67 & 18.6\% & 187.6 & 0.019 & 27.32 & 93.3\% & 5.80 & 25.1\% \\ TG-GCN & 480.7 & **0.021** & 39.37 & 95.6\% & 7.61 & 18.5\% & 196.7 & 0.029 & **36.63** & **95.2\%** & 6.06 & 26.3\% \\ Lap-GCN & 460.2 & 0.026 & 43.48 & 97.0\% & 7.52 & 18.2\% & 187.1 & 0.022 & 34.12 & 96.5\% & 5.81 & 25.2\% \\ GEns-GCN & **459.6** & 0.036 & **37.48** & **95.7\%** & 7.42 & 18.0\% & **185.6** & **0.017** & 28.01 & 93.9\% & **5.70** & **24.7\%** \\ \hline HomoG-GAT & 620.4 & 0.141 & 64.54 & 97.8\% & 8.13 & 19.7\% & 230.7 & 0.112 & 43.26 & 96.9\% & 6.52 & 28.2\% \\ Pois-GAT & 572.2 & 0.100 & 18.11 & 81.8\% & 7.98 & 19.4\% & 214.9 & 0.093 & 15.62 & 82.0\% & **6.06** & **26.3\%** \\ HetG-GAT & 472.3 & 0.058 & **39.58** & **96.3\%** & 7.84 & 19.0\% & 188.3 & 0.041 & **31.43** & **95.7\%** & 6.07 & 26.3\% \\ TG-GAT & 481.6 & **0.026** & 40.96 & 96.6\% & 7.69 & 18.7\% & 199.9 & 0.024 & 32.54 & 95.7\% & 6.55 & 28.4\% \\ Lap-GAT & **464.3** & 0.037 & 42.77 & 97.7\% & 7.63 & 18.5\% & 189.5 & **0.021** & 36.08 & 96.5\% & 6.16 & 26.7\% \\ GEns-GAT & 470.5 & 0.037 & 42.49 & 96.8\% & **7.52** & **18.3\%** & **188.1** & 0.044 & 32.28 & 96.0\% & 6.18 & 26.8\% \\ \hline \hline \end{tabular}
\end{table} TABLE III: Model Performance (Test Period: Immediately Before - March 2 to March 15, 2020)
When significant disruptions happen in the system, the point predictions fail miserably, but the uncertainty predictions stay accurate and indicative of the changing situation. The MAPE for the test set in Table III was 18% for CTA and 25% for ridesharing. For the three additional periods, the MAPEs are 93%, 43%, and 36% for CTA and 133%, 54%, and 47% for ridesharing, respectively. The uncertainty predictions recovered much faster than the MAPE. The calibration error returned to pre-COVID levels at the steady recovery stage, although the point prediction error was still 10% higher than before. If only a 95% prediction interval is considered, even at the stay-at-home stage we can achieve quite accurate prediction intervals (95.6% and 96.7% coverage for CTA and ridesharing).
The generalizability of the different two-parameter distributions is similar, although each has distinct characteristics. TG-GCN excels at low ridership; Lap-GCN is heavy-tailed and more conservative; and HetG-GCN relies on the similarity between the training and testing domains. Since the situation is constantly evolving, no probabilistic assumption dominates all scenarios, but some general conclusions can be drawn. In most cases, TG-GCN has the best calibration error due to its left-truncation. Enforcing non-negativity is beneficial, since ridership has not recovered to pre-COVID levels for either CTA or ridesharing. Lap-GCN typically produces the best NLL and point prediction due to its more spread-out shape, which reduces overfitting to pre-COVID data. As ridesharing trips become more spontaneous in post-COVID times, Lap-GCN's conservative prediction intervals outperform the others.
### _Spatiotemporal Uncertainty_
Travel demand uncertainty has both spatial and temporal patterns: spatially, uncertainty is higher at stations with higher volumes, and temporally, uncertainty is higher in the afternoons. Figure 6 shows the spatial distribution of predicted uncertainty for CTA Rail at different times of the day during the steady recovery test period (Oct 12 - 25, 2020). The bottom left corner of each panel zooms in on the "Loop", downtown Chicago. Uncertainty is proportional to both the size and the color of the circles, with darker and larger circles indicating higher uncertainty.
Spatially, uncertainty is higher at busier stations. During the same time period, darker and larger circles appear near downtown, transfer stations, and airports. Statistically, if the occurrence of each potential trip is a random Bernoulli process with probability \(p\), and we do not have further information on the values of \(p\) for each trip, having more potential trips \(n\) will yield higher uncertainty in the sum. The number of observed trips has a binomial distribution with variance \(np(1-p)\), which is proportional to \(n\). Practically, the trips from downtown, transfer stations, and airports are usually more diverse and complex in nature, which promotes spontaneity in trip-making, resulting in higher uncertainty.
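To make the scaling concrete, the short simulation below draws station volumes as binomial sums and checks that their standard deviation grows as \(\sqrt{np(1-p)}\); the values of `n` and `p` are illustrative choices, not estimates from the CTA data.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3  # assumed per-trip probability; purely illustrative

# Realized station volume = sum of n independent Bernoulli(p) potential trips.
for n in (100, 1_000, 10_000):
    volumes = rng.binomial(n, p, size=100_000)
    # Empirical std matches sqrt(n * p * (1 - p)), so it grows with n.
    print(n, volumes.std(), np.sqrt(n * p * (1 - p)))
```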
Temporally, uncertainty is higher during the afternoon peak. The uncertainty during the morning peak is even generally smaller than that during midday and in the evening, although the morning peak has significantly higher ridership. Statistically, this observation could be attributed to having knowledge about some of the \(p\)'s. Since the morning peak primarily consists of commuting trips, the probability of those trips happening is higher than that of recreational trips, which tend to happen during midday and afternoons.
Spatiotemporal uncertainty predictions can inform strategic decision-making regarding capacity buffers. First, the framework could be used to identify bottlenecks and outliers in the system. For example, the station Division/Milwaukee on the
blue line has unusually high uncertainty during the morning peak. Further investigation can be done to identify the reasons for the abnormal behavior and perform demand management. Moreover, different strategies are needed for different types of systems. In systems with a fixed, relatively large capacity, such as stations along the same subway line, uncertainty at the busiest stations at peak times is the most critical, as the rest of the line will have substantial excess capacity. Therefore, understanding uncertainty in low-demand regions is important for behavioral analysis, but less critical for service planning. However, in systems that are built to meet demand, such as ride-hailing, uncertainty in lower-demand regions is as important as in higher-demand regions, as the supply in those regions will be proportionally low. Re-balancing actions will be needed across the system.
## VI Conclusion
Despite the importance of uncertainty in travel demand prediction, past studies use deep learning to predict only the average travel demand but not quantify its uncertainty. To address this gap, this study proposes a framework of Prob-GNNs to quantify the spatiotemporal uncertainty of travel demand. The framework is concretized by six probabilistic assumptions (HomoG, Pois, HetG, TG, GEns, Lap) and two deterministic ones (GCN and GAT), which are applied to transit and ridesharing data, yielding the following conclusions.
First, the Prob-GNN framework can successfully quantify spatiotemporal uncertainty in travel demand, with empirical evidence demonstrated on the transit and ridesharing datasets. In both cases, the Prob-GNNs can accurately characterize the probabilistic distributions of travel demand, while retaining point predictions of a similar quality to the deterministic counterpart (HomoG). Second, the probabilistic assumptions have a much more substantial impact on model performance than the deterministic ones. The two deterministic architectures lead to less than a 3% difference in NLL loss, while a wise choice of probabilistic assumptions can drastically improve model performance. Specifically, the two-parameter distributions (e.g., truncated Gaussian) achieve the highest predictive performance, improving NLL by about 20% and reducing calibration errors by a factor of 3-5 relative to the one-parameter baseline distributions. Third, the Prob-GNNs enhance model generalizability under significant system disruptions. By applying the models trained on pre-COVID data to three post-COVID periods, we show that the point predictions fail to generalize but the uncertainty predictions remain accurate and successfully reflect the evolving situations. Even under significant domain shifts, the difference in predictive performance among the two-parameter distributions is minor. Lastly, Prob-GNNs can reveal spatiotemporal uncertainty patterns. Uncertainty is spatially concentrated at stations with higher travel volume, and temporally concentrated in the afternoon peak hours. In addition, the Prob-GNNs can identify stations with abnormally large uncertainty, which can inform real-time traffic controls to proactively address potential system disruptions.
Future studies could advance this work through further theoretical or empirical efforts. Theoretically, Prob-GNN is a parametric uncertainty quantification method, which should be compared to Bayesian and non-parametric methods in terms of prediction quality. Empirically, our framework can be applied broadly to the prediction of origin-destination flows, travel safety, or even climate risks. Since uncertainty in urban systems has broad policy implications, future studies could integrate the Prob-GNNs with robust optimization methods to enhance urban resilience.
## Acknowledgment
This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under the Vehicle Technology Program Award Number DE-EE0009211. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy or the United States Government. The authors would also like to thank the Chicago Transit Authority for providing access to public transit ridership data.

Fig. 6: Spatiotemporal Uncertainty. Standard deviations of estimated CTA Rail station tap-ins in the 15min periods starting at 8 A.M. (morning peak), 1 P.M. (midday), 5 P.M. (afternoon peak), and 9 P.M. (evening).
|
2302.04369 | Unsupervised Learning of Initialization in Deep Neural Networks via
Maximum Mean Discrepancy | Despite the recent success of stochastic gradient descent in deep learning,
it is often difficult to train a deep neural network with an inappropriate
choice of its initial parameters. Even if training is successful, it has been
known that the initial parameter configuration may negatively impact
generalization. In this paper, we propose an unsupervised algorithm to find
good initialization for input data, given that a downstream task is d-way
classification. We first notice that each parameter configuration in the
parameter space corresponds to one particular downstream task of d-way
classification. We then conjecture that the success of learning is directly
related to how diverse downstream tasks are in the vicinity of the initial
parameters. We thus design an algorithm that encourages small perturbation to
the initial parameter configuration leads to a diverse set of d-way
classification tasks. In other words, the proposed algorithm ensures a solution
to any downstream task to be near the initial parameter configuration. We
empirically evaluate the proposed algorithm on various tasks derived from MNIST
with a fully connected network. In these experiments, we observe that our
algorithm improves average test accuracy across most of these tasks, and that
such improvement is greater when the number of labelled examples is small. | Cheolhyoung Lee, Kyunghyun Cho | 2023-02-08T23:23:28Z | http://arxiv.org/abs/2302.04369v1 | # Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy
###### Abstract
Despite the recent success of stochastic gradient descent in deep learning, it is often difficult to train a deep neural network with an inappropriate choice of its initial parameters. Even if training is successful, it has been known that the initial parameter configuration may negatively impact generalization. In this paper, we propose an unsupervised algorithm to find good initialization for input data, given that a downstream task is \(d\)-way classification. We first notice that each parameter configuration in the parameter space corresponds to one particular downstream task of \(d\)-way classification. We then conjecture that the success of learning is directly related to how diverse downstream tasks are in the vicinity of the initial parameters. We thus design an algorithm that encourages small perturbations of the initial parameter configuration to lead to a diverse set of \(d\)-way classification tasks. In other words, the proposed algorithm ensures that a solution to _any_ downstream task is near the initial parameter configuration. We empirically evaluate the proposed algorithm on various tasks derived from MNIST with a fully connected network. In these experiments, we observe that our algorithm improves average test accuracy across most of these tasks, and that such improvement is greater when the number of labelled examples is small.
## 1 Introduction
Initialization of parameters has long been identified as playing a critical role in improving both the convergence and generalization performance of deep neural networks (Glorot & Bengio, 2010; Erhan et al., 2010; He et al., 2015). In recent years, however, various normalization techniques, such as batch normalization (Ioffe & Szegedy, 2015), layer normalization (Ba et al., 2016), and weight normalization (Salimans & Kingma, 2016), have been found to somewhat reduce this heavy reliance on the initialization of parameters. The normalization techniques have done so by preserving some of the conditions that motivated various initialization schemes throughout training more explicitly. For instance, batch normalization normalizes each neuron to have zero mean and unit variance across examples within a minibatch, which is what Xavier initialization (Glorot & Bengio, 2010) and He initialization (He et al., 2015) aim to achieve in an ideal situation.
Although batch normalization has been widely used for training deep neural networks (He et al., 2016; Tan & Le, 2019), there are only a small number of studies on why it helps training (Santurkar et al., 2018). Rather than revealing its theoretical effect, several researchers have studied whether batch normalization is really necessary by training deep neural networks without it. Zhang et al. (2019) proposed Fixup initialization, which replaces batch normalization in ResNet (He et al., 2016) by adding additional parameters to each residual block. Brock et al. (2021) also succeeded in training ResNet with adaptive gradient clipping, which adjusts the unit-wise ratio of gradient norms to parameter norms during training. The similarity among their algorithms and batch normalization is that they add their own schemes to adaptively supervise optimization of the deep neural networks.
We suspect that the necessity for such adaptation comes from some neighborhood properties of the initial parameter configuration. Training is an optimization process that finds an optimal parameter configuration which well-approximates a particular task derived from input data in the parameter space. This means that each parameter configuration corresponds to a task, although this correspondence is not necessarily one-to-one. We hypothesize that training encourages the current parameter configuration to converge to the optimal parameter configuration nearest to the initial one. If there is no optimal solution near the initial parameter configuration, then the current parameter configuration either deviates from the initial parameter configuration (exploding gradient) or stays around it (vanishing gradient). We thus propose an algorithm to find an initial parameter configuration whose neighborhood contains solutions to various tasks.
Before finding such an initial parameter configuration, we first need to check whether a given network can solve any task derived from the input data. Zhang et al. (2016) empirically showed that over-parametrization of deep neural networks enables them to memorize the entire dataset, so that they can be fitted to an arbitrary target task. Based on this, Pondenkandath et al. (2018) empirically demonstrated that pre-training on random labels can accelerate training on downstream tasks. However, Maennel et al. (2020) have shown that random label pre-training sometimes hurts the convergence of fine-tuning on downstream tasks. They also presented that the pre-trained model generalizes worse than randomly initialized networks even if random label pre-training promotes learning on the downstream task. Based on these studies, we further conjecture that a given over-parametrized network can solve any task in its parameter space, but it cannot do so at a single parameter configuration.
We therefore decide to utilize a set of parameter configurations in which we can find an optimal parameter configuration for any target task. If this set is concentrated in the vicinity of one parameter configuration, we view that configuration as a good initial parameter configuration. To do this, we first restrict possible downstream tasks to \(d\)-way classification so that the model output domain is the \((d-1)\)-dimensional unit simplex defined in equation 1. We then define a neighbor of the initial parameter configuration as a small perturbation of it. Our unsupervised algorithm encourages each neighbor to solve a different task, so that optimizers based on stochastic gradient descent, such as Adam (Kingma & Ba, 2014), can easily find a solution near our initial parameter configuration.
We offer the mathematical statement of our conjecture in §3.1 and propose an optimization problem to satisfy our claim for a given input. In doing so, we identify two possible degenerate cases. In §3.2.1 and §3.2.2, we present how to avoid these unwanted situations. We validate our algorithm on various binary tasks derived from MNIST (LeCun et al., 1998) in §5. From these experiments, we observe that fine-tuning deep neural networks from our initial parameters improves average test accuracy across the various binary tasks, and that this gain is greater when the number of labelled examples is small.
## 2 Preliminaries and Notations
NormsUnless explicitly stated, a norm \(\|\cdot\|\) refers to \(L^{2}\) norm. We denote the Frobenius norm of a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) by \(\|\mathbf{A}\|_{F}=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}A_{ij}^{2}}\), where \(A_{ij}\) is the \((i,j)\)-th entry of \(\mathbf{A}\). We write the \(L^{2}\) operator norm of \(\mathbf{A}\) as \(\|\mathbf{A}\|^{*}=\sup_{\|\mathbf{x}\|=1}\|\mathbf{A}\mathbf{x}\|\), where \(\mathbf{x}\in\mathbb{R}^{n}\).
SupportsFor a distribution \(p(\mathbf{x})\), we write its support as \(\texttt{supp}(p(\mathbf{x}))=\{\mathbf{x}\in\mathbb{R}^{n}\mid p(\mathbf{x})>0\}\).
Model predictionA model prediction for \(d\)-way classification is a point in the \((d-1)\)-dimensional unit simplex \(\Delta^{d-1}\subset\mathbb{R}^{d}\) defined by
\[\Delta^{d-1}=\left\{(p_{1},p_{2},\cdots,p_{d})\in\mathbb{R}^{d}_{\geq 0}: \sum_{i=1}^{d}p_{i}=1\right\}, \tag{1}\]
where \(\mathbb{R}_{\geq 0}\) is the set of non-negative real numbers. We refer to a prediction of the model parametrized by \(\mathbf{\theta}\) for an input \(\mathbf{x}\), as \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\in\Delta^{d-1}\).
Uniform distribution over \(\Delta^{d-1}\)In this paper, we mainly deal with the uniform distribution over \(\Delta^{d-1},\mathcal{U}(\Delta^{d-1})\). We can generate its random sample \(\mathbf{u}\) by
\[\mathbf{u}=\left(\frac{\mathbf{e}_{1}}{\sum_{i=1}^{d}\mathbf{e}_{i}},\frac{\mathbf{e}_{2}}{\sum_{ i=1}^{d}\mathbf{e}_{i}},\cdots,\frac{\mathbf{e}_{d}}{\sum_{i=1}^{d}\mathbf{e}_{i}}\right), \tag{2}\]
where each \(\mathbf{e}_{i}\) is independently drawn from \(\texttt{exponential}(1)\)(Marsaglia, 1961).
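A minimal sketch of this sampler, assuming PyTorch as the array library (the paper does not prescribe an implementation):

```python
import torch

def sample_uniform_simplex(num_samples: int, d: int) -> torch.Tensor:
    """Draw samples from U(Delta^{d-1}) by normalizing i.i.d. Exponential(1) draws."""
    e = torch.distributions.Exponential(rate=1.0).sample((num_samples, d))
    return e / e.sum(dim=-1, keepdim=True)  # each row is a point in the unit simplex

u = sample_uniform_simplex(4, 3)  # e.g., 4 points in Delta^2
assert torch.allclose(u.sum(dim=-1), torch.ones(4))
```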
Maximum mean discrepancy (MMD)The MMD (Gretton et al., 2012) is a framework for comparing two distributions \(p(\mathbf{x})\) and \(q(\mathbf{y})\) when we have samples from both distributions. The kernel MMD is defined by
\[\texttt{MMD}(p(\mathbf{x}),q(\mathbf{y});\gamma)= \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\mathbb{E}_{\mathbf{x}^{\prime}\sim p (\mathbf{x})}[k_{\gamma}(\mathbf{x},\mathbf{x}^{\prime})] \tag{3}\] \[-2\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\mathbb{E}_{\mathbf{y}\sim q(\mathbf{ y})}[k_{\gamma}(\mathbf{x},\mathbf{y})]\] \[+\mathbb{E}_{\mathbf{y}\sim q(\mathbf{y})}\mathbb{E}_{\mathbf{y}^{\prime} \sim q(\mathbf{y})}[k_{\gamma}(\mathbf{y},\mathbf{y}^{\prime})],\]
where \(k\) is a kernel function. A Gaussian kernel is often used, i.e., \(k_{\gamma}(\mathbf{x},\mathbf{y})=\exp\left(-\frac{\|\mathbf{x}-\mathbf{y}\|^{2}}{2\gamma^{2}}\right)\). Gretton et al. (2012) showed that \(p(\mathbf{x})=q(\mathbf{y})\) in distribution if and only if \(\texttt{MMD}(p(\mathbf{x}),q(\mathbf{y});\gamma)=0\).
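The quantity in equation 3 can be estimated from finite samples; below is a sketch of the standard biased (V-statistic) estimator with the Gaussian kernel. The function names are ours, not taken from the paper's code.

```python
import torch

def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, gamma: float) -> torch.Tensor:
    # k(x, y) = exp(-||x - y||^2 / (2 * gamma^2)), evaluated for all pairs
    sq_dists = torch.cdist(x, y).pow(2)
    return torch.exp(-sq_dists / (2.0 * gamma ** 2))

def mmd(x: torch.Tensor, y: torch.Tensor, gamma: float) -> torch.Tensor:
    """Biased estimate of MMD(p, q; gamma) from samples x ~ p and y ~ q."""
    return (gaussian_kernel(x, x, gamma).mean()
            - 2.0 * gaussian_kernel(x, y, gamma).mean()
            + gaussian_kernel(y, y, gamma).mean())
```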
## 3 Unsupervised learning of initialization
We start by conjecturing that the parameter configuration for any \(d\)-way classification task must be in the vicinity of _good_ initial parameters. In other words, a parameter configuration that solves any \(d\)-way classification task is near the initial parameter configuration, so that such a configuration can be readily found by stochastic gradient descent using labelled examples. The question we answer here is how we can identify such an initial parameter configuration given a set of unlabelled examples.
### Uniformity over all mappings
Let \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\in\mathbb{R}^{d}\) be an output of a deep neural network parametrized by \(\mathbf{\theta}\in\mathbb{R}^{m}\) given an input \(\mathbf{x}\in\mathbb{R}^{n}\) sampled from an input distribution \(p(\mathbf{x})\). In supervised learning, there is a target mapping \(\mathbf{f}^{*}\) defined on \(\texttt{supp}(p(\mathbf{x}))\), and we want to find \(\mathbf{\theta}^{*}\) that
\[\min_{\mathbf{\theta}\in\mathbb{R}^{m}}l(\mathbf{f}_{\mathbf{\theta}},\mathbf{f}^{*}), \tag{4}\]
for a given loss function \(l\). For example, we often use \(l(\mathbf{f}_{\mathbf{\theta}},\mathbf{f}^{*})=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\|\mathbf{f}_ {\mathbf{\theta}}(\mathbf{x})-\mathbf{f}^{*}(\mathbf{x})\|^{2}]\) for regression and \(l(\mathbf{f}_{\mathbf{\theta}},\mathbf{f}^{*})=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\mathbb{ KL}(\mathbf{f}^{*}(\mathbf{x})\|\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}))]\) for classification task, where \(\mathbb{KL}(\mathbf{f}^{*}(\mathbf{x})\|\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}))\) is the Kullback-Leibler (KL) divergence from \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\) to \(\mathbf{f}^{*}(\mathbf{x})\).
In deep learning, it is usual to search for an optimal solution \(\mathbf{\theta}^{*}\) from equation 4 in the full parameter space \(\mathbb{R}^{m}\) by using a first-order optimizer, such as SGD and Adam (Kingma & Ba, 2014). In this process, Hoffer et al. (2017) have demonstrated however that
\[\|\mathbf{\theta}_{t}-\mathbf{\theta}_{0}\|\sim\log t, \tag{5}\]
where \(\mathbf{\theta}_{t}\) is a vector of parameters at the \(t\)-th optimization step and \(\mathbf{\theta}_{0}\) is that of initial parameters. In other words, the rate of deviation from \(\mathbf{\theta}_{0}\) decreases as training progresses. It means that the first order optimizer tends to find an optimal solution near the initial point. We thus rewrite equation 4 as
\[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}\in\mathbb{B}_{r}(\mathbf{\theta}_{0})}l(\mathbf{ f}_{\mathbf{\theta}},\mathbf{f}^{*}), \tag{6}\]
where \(\mathbb{B}_{r}(\mathbf{\theta}_{0})\) is an \(r\)-ball centered at \(\mathbf{\theta}_{0}\), \(\mathbb{B}_{r}(\mathbf{\theta}_{0})=\{\mathbf{\theta}\in\mathbb{R}^{m}:\|\mathbf{\theta}-\mathbf{\theta}_{0}\|<r\}\).
With this in our mind, what is the good initialization \(\mathbf{\theta}_{0}\) for equation 6? To answer this question, we look at what kind of classifiers we have within \(\mathbb{B}_{r}(\mathbf{\theta}_{0})\). If \(\mathbf{x}\) is an example randomly drawn from the input distribution \(p(\mathbf{x})\), the set of all possible model outputs from \(\mathbf{x}\) in \(\mathbb{B}_{r}(\mathbf{\theta}_{0})\) is
\[\mathbb{F}(\mathbf{x};\mathbf{\theta}_{0})=\{\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}):\mathbf{\theta} \in\mathbb{B}_{r}(\mathbf{\theta}_{0})\}.\]
We define the collection of all possible target mappings from the input space into the \((d-1)\)-dimensional unit simplex \(\Delta^{d-1}\) defined in equation 1 as
\[\mathcal{F}=\{\mathbf{f}^{*}\mid\mathbf{f}^{*}:\texttt{supp}(p(\mathbf{x}))\to\Delta^{d-1} \subset\mathbb{R}^{d}\}.\]
If \(\mathbf{\theta}_{0}\) is a good initial configuration, \(\mathbb{F}(\mathbf{x};\mathbf{\theta}_{0})\) has to be \(\Delta^{d-1}\). Otherwise, our model cannot approximate \(\mathbf{f}^{*}\in\mathcal{F}\) such that \(\mathbf{f}^{*}(\mathbf{x})\in\Delta^{d-1}\setminus\mathbb{F}(\mathbf{x};\mathbf{\theta}_{0})\) in \(\mathbb{B}_{r}(\mathbf{\theta}_{0})\).
To approximate all target mappings in \(\mathcal{F}\) by \(\mathbf{f}_{\mathbf{\theta}}\) near \(\mathbf{\theta}_{0}\) for \(\mathbf{x}\), there must be \(\mathbf{\theta}\in\mathbb{B}_{r}(\mathbf{\theta}_{0})\) satisfying \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{s}\) for arbitrary \(\mathbf{s}\in\Delta^{d-1}\). In other words, if we randomly pick \(\mathbf{\theta}\) in \(\mathbb{B}_{r}(\mathbf{\theta}_{0})\), the probability density of \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{s}\) for any \(\mathbf{s}\in\Delta^{d-1}\) is positive and should be the same over \(\Delta^{d-1}\) without prior knowledge of target mappings.
**Claim 1**.: _Denote the distribution of \(\mathbf{y}=\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\) given \(\mathbf{x}\sim p(\mathbf{x})\) over \(\mathbf{\theta}\sim\mathcal{U}(\mathbb{B}_{r}(\mathbf{\theta}_{0}))\) as \(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r)\).1 Then, \(\mathbf{\theta}_{0}\) is a good initialization if and only if \(\texttt{supp}(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r))=\Delta^{d-1}\) and \(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r)\) is equal to \(\mathcal{U}(\Delta^{d-1})\) in distribution, because we do not know which \(\mathbf{s}\in\Delta^{d-1}\) is more likely._
Footnote 1: Although \(\mathbf{x}\) is given, \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\) is random due to the randomness of \(\mathbf{\theta}\).
To obtain \(\mathbf{\theta}_{0}\) satisfying Claim 1, we build an optimization problem that makes \(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r)\) converge to \(\mathcal{U}(\Delta^{d-1})\) in distribution for a given \(\mathbf{x}\sim p(\mathbf{x})\). The first step toward this goal is to use the maximum mean discrepancy (MMD) (Gretton et al., 2012) from equation 3. We define an example specific loss as
\[\mathcal{L}_{\mathbf{x}}^{uni}(\mathbf{\theta}_{0};r,\Delta^{d-1},\gamma)=\texttt{MMD} (q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r),\mathcal{U}(\Delta^{d-1});\gamma). \tag{7}\]
According to Gretton et al. (2012), equation 7 is equal to \(0\) if and only if \(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},r)\) is equal to \(\mathcal{U}(\Delta^{d-1})\) in distribution. We can therefore find \(\mathbf{\theta}_{0}\) that satisfies Claim 1, by minimizing equation 7 with respect to \(\mathbf{\theta}_{0}\).
The minimization of equation 7 with respect to \(\mathbf{\theta}_{0}\) needs samples from both \(\mathcal{U}(\Delta^{d-1})\) and \(\mathcal{U}(\mathbb{B}_{r}(\mathbf{\theta}_{0}))\). In the case of \(\mathcal{U}(\Delta^{d-1})\), we draw samples using equation 2. For \(\mathcal{U}(\mathbb{B}_{r}(\mathbf{\theta}_{0}))\), we relax it to \(\mathcal{N}(\mathbf{\theta}_{0},\mathbf{\Sigma})\) where \(\mathbf{\Sigma}=\texttt{diag}(\sigma_{1}^{2},\sigma_{2}^{2},\cdots,\sigma_{m}^{2})\) for two reasons: i) it plays a similar role to the uniform distribution, since we can control the perturbation scale of each parameter separately; ii) the normal distribution allows us to use the reparametrization trick to compute \(\nabla_{\mathbf{\theta}_{0}}\mathcal{L}_{\mathbf{x}}^{uni}(\mathbf{\theta}_{0};r,\Delta^{ d-1},\gamma)\) from equation 7 (Kingma & Welling, 2013). Furthermore, as shown in Theorem 1 below, a proper choice of the covariance matrix makes Gaussian perturbation have a similar effect to uniform perturbation:
**Theorem 1**.: _Let \(\mathbf{\theta}\sim\mathcal{N}(\mathbf{\theta}_{0},\texttt{diag}(\sigma_{1}^{2}, \sigma_{2}^{2},\cdots,\sigma_{m}^{2}))\) and \(\alpha_{*}=\max_{i=1,2,\cdots,m}\sigma_{i}^{2}\). If \(r^{2}\) is greater than \(m\alpha_{*}\), then we have_
\[\mathbb{P}\left(\left\|\mathbf{\theta}-\mathbf{\theta}_{0}\right\|\geq\ r \right)\leq\exp\left(-\frac{1}{8}\min\left\{\eta^{2},m\eta\right\}\right), \tag{8}\]
_where \(\eta=\frac{r^{2}}{m\alpha_{*}}-1\) (proved in §A.1)._
Theorem 1 implies that if we add a Gaussian perturbation \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) to \(\mathbf{\theta}_{0}\), then the perturbed parameter configuration, \(\mathbf{\theta}=\mathbf{\theta}_{0}+\mathbf{\epsilon}\), is sufficiently close to \(\mathbf{\theta}_{0}\) with high probability when \(\alpha_{*}=\max_{i}\sigma_{i}^{2}\) is sufficiently small. In other words, although \(\mathcal{N}(\mathbf{\theta}_{0},\mathbf{\Sigma})\) is not exactly equivalent to \(\mathcal{U}(\mathbb{B}_{r}(\mathbf{\theta}_{0}))\) in distribution, these two distributions play a similar role in generating random parameter configurations near \(\mathbf{\theta}_{0}\). We therefore rewrite equation 7 to enable the reparametrization trick, as below:
\[\mathcal{L}_{\mathbf{x}}^{uni}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d -1},\gamma)= \mathbb{E}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})}\mathbb{E}_ {\mathbf{\epsilon}^{\prime}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})}[k_{\gamma}(\mathbf{f}_{ \mathbf{\theta}_{0}+\mathbf{\epsilon}}(\mathbf{x}),\mathbf{f}_{\mathbf{\theta}_{0}+\mathbf{\epsilon} ^{\prime}}(\mathbf{x}))] \tag{9}\] \[-2\mathbb{E}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})} \mathbb{E}_{\mathbf{u}\sim\mathcal{U}(\Delta^{d-1})}[k_{\gamma}(\mathbf{f}_{\mathbf{ \theta}_{0}+\mathbf{\epsilon}}(\mathbf{x}),\mathbf{u})]\] \[+\mathbb{E}_{\mathbf{u}\sim\mathcal{U}(\Delta^{d-1})}\mathbb{E}_{\mathbf{u} ^{\prime}\sim\mathcal{U}(\Delta^{d-1})}[k_{\gamma}(\mathbf{u},\mathbf{u}^{\prime})],\]
where \(q_{\mathbf{x}}(\mathbf{y};\mathbf{\theta}_{0},\mathbf{\Sigma})\) is the distribution of \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\) given \(\mathbf{x}\) with \(\mathbf{\theta}\sim\mathcal{N}(\mathbf{\theta}_{0},\mathbf{\Sigma})\). In other words, we add Gaussian noise to each parameter and encourage the predictions for \(\mathbf{x}\) under such perturbed parameter configurations to be well spread out over \(\Delta^{d-1}\). From now on, we use \(\mathbf{\theta}_{0}+\mathbf{\epsilon}\) to denote the perturbed parameter configuration, with \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), to be more explicit about our use of the reparametrization trick.
Equation 9 is an example specific loss, and minimizing this with respect to \(\mathbf{\theta}_{0}\) only guarantees the existence of \(\mathbf{\theta}^{*}\) near \(\mathbf{\theta}_{0}\) satisfying \(\mathbf{f}^{*}=\mathbf{f}_{\mathbf{\theta}^{*}}\) for a single \(\mathbf{x}\). Hence, we take the expectation of equation 9 over the input distribution \(p(\mathbf{x})\):
\[\mathcal{L}^{uni}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d-1},\gamma,p(\mathbf{x}))= \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\mathcal{L}_{\mathbf{x}}^{uni}(\mathbf{\theta}_{0}; \mathbf{\Sigma},\Delta^{d-1},\gamma)]. \tag{10}\]
We minimize this expected loss to find an initial parameter configuration \(\mathbf{\theta}_{0}^{*}\) that satisfies Claim 1 for the input data on average. Once this is done, we can find \(\mathbf{f}_{\mathbf{\theta}}\) in close proximity to \(\mathbf{\theta}_{0}^{*}\) that approximates any \(d\)-way target mapping \(\mathbf{f}^{*}\), given \(p(\mathbf{x})\).
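A sketch of how a minibatch Monte-Carlo estimate of equation 10 could look, reusing the `mmd` and `sample_uniform_simplex` helpers sketched above; `sigmas` (a map from parameter names to the \(\sigma_{i}\) of \(\mathbf{\Sigma}\)) and `num_eps` are our own names, and the model is assumed to output softmax probabilities:

```python
import torch
from torch.func import functional_call  # PyTorch >= 2.0

def l_uni_batch(model, x, sigmas, gamma, num_eps=16):
    """Monte-Carlo sketch of L^uni (equation 10) for one minibatch x."""
    params = dict(model.named_parameters())
    total = 0.0
    for xi in x:                                   # expectation over p(x)
        preds = []
        for _ in range(num_eps):                   # eps ~ N(0, Sigma), reparametrized
            perturbed = {name: p + sigmas[name] * torch.randn_like(p)
                         for name, p in params.items()}
            preds.append(functional_call(model, perturbed, xi.unsqueeze(0)))
        preds = torch.cat(preds, dim=0)            # samples of f_{theta_0 + eps}(x)
        u = sample_uniform_simplex(num_eps, preds.shape[-1])
        total = total + mmd(preds, u, gamma)       # MMD to U(Delta^{d-1})
    return total / x.shape[0]
```

Because the noise enters additively through the reparametrization, gradients flow back to the underlying parameters \(\mathbf{\theta}_{0}\) when this quantity is minimized with a first-order optimizer.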
### Degeneracies and remedies
Let \(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{M}\) be random samples drawn from \(p(\mathbf{x})\), \(\mathbf{\epsilon}_{1},\mathbf{\epsilon}_{2},\cdots,\mathbf{\epsilon}_{N}\) be random perturbations from \(\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), and \(\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{N}\) from \(\mathcal{U}(\Delta^{d-1})\). If \(\mathbf{\theta}_{0}^{1}\) satisfies
\[\mathbf{c}_{j}=\mathbf{f}_{\mathbf{\theta}_{0}^{1}+\mathbf{c}_{j}}(\mathbf{x}_{1})=\mathbf{f}_{\mathbf{ \theta}_{0}^{1}+\mathbf{c}_{j}}(\mathbf{x}_{2})=\cdots=\mathbf{f}_{\mathbf{\theta}_{0}^{1}+ \mathbf{c}_{j}}(\mathbf{x}_{M}), \tag{11}\]
for each \(j\), then \(\mathcal{L}_{\mathbf{x}_{i}}^{uni}(\mathbf{\theta}_{0}^{1};\mathbf{\Sigma},\Delta^{d-1}, \gamma)=0\) for all \(i\). Hence, \(\mathbf{\theta}_{0}^{1}\) is one of the optimal solutions for equation 10. In the case of \(\mathbf{\theta}_{0}^{1}\), each perturbed model near \(\mathbf{\theta}_{0}^{1}\) is a constant function, a situation to which we refer as _input-output detachment_. Furthermore, each of these constant functions may output a _degenerate_ categorical distribution whose support does not cover all \(d\) classes; we refer to this phenomenon as _degenerate softmax_. We empirically demonstrate in §B.1 that both degeneracies indeed occur when we train a fully connected network by minimizing \(\mathcal{L}^{uni}\). In this section, we present two regularization terms, to be added to equation 10, to avoid these two unwanted cases, respectively.
#### 3.2.1 Degenerate softmax
We first address the latter issue of degenerate softmax. Since we have specified that the task of our interest is \(d\)-way classification, we prefer models that can classify inputs into all \(d\) classes in the neighborhood of \(\mathbf{\theta}_{0}^{*}\). We thus impose a condition that there exists at least one example categorized into each and every class. We first define a set of the points \(\mathbb{A}_{i}\) classified into the \(i\)-th class as
\[\mathbb{A}_{i}=\left\{\mathbf{a}=(a_{1},a_{2},\cdots,a_{d})\in\Delta^{d-1}:a_{i} \geq a_{j}\text{ for all }j=1,2,\cdots,d\right\}. \tag{12}\]
Given \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), the probability of _'the model at \(\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}\) classifies \(\mathbf{x}\) into the \(i\)-th class'_ is \(\mathbb{P}_{\mathbf{x}\sim p(\mathbf{x})}(\mathbf{f}_{\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}}( \mathbf{x})\in\mathbb{A}_{i})\). This probability should be positive for all \(i=1,2,\cdots,d\) to avoid degenerate softmax at \(\mathbf{\theta}_{0}^{*}\). To satisfy this, we use Theorem 2 which offers a lower bound of \(\mathbb{P}_{\mathbf{x}\sim p(\mathbf{x})}(\mathbf{f}_{\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}}( \mathbf{x})\in\mathbb{A}_{i})\) using the distance from the \(i\)-th vertex \(\mathbf{v}^{(i)}\):
**Theorem 2**.: _Let \(\mathbf{v}^{(i)}=\left(v_{1}^{(i)},v_{2}^{(i)},\cdots,v_{d}^{(i)}\right)\in\Delta ^{d-1}\), where \(v_{i}^{(i)}=1\) and \(\mathbb{A}_{i}\) be a subset of \(\Delta^{d-1}\), as defined in equation 12. Then,_
\[\mathbb{P}_{\mathbf{x}\sim p(\mathbf{x})}(\mathbf{f}_{\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}}( \mathbf{x})\in\mathbb{A}_{i})\geq 1-\sqrt{d}\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\|\mathbf{v}^{ (i)}-\mathbf{f}_{\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}}(\mathbf{x})\|], \tag{13}\]
_for a given \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) (proved in §A.2)._
According to equation 13, \(\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\|\mathbf{v}^{(i)}-\mathbf{f}_{\mathbf{\theta}_{0}^{*}+ \mathbf{\epsilon}}(\mathbf{x})\|]<\frac{1}{\sqrt{d}}\) implies \(\mathbb{P}_{\mathbf{x}\sim p(\mathbf{x})}(\mathbf{f}_{\mathbf{\theta}_{0}^{*}+\mathbf{\epsilon}}( \mathbf{x})\in\mathbb{A}_{i})>0\) for each \(i\), given \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\). This means that we can avoid degenerate softmax by minimizing
\[\mathcal{L}^{sd}(\mathbf{\theta}_{0};\mathbf{\Sigma},d,p(\mathbf{x}))=\mathbb{E}_{\mathbf{ \epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})}\left[\max\left\{\max_{i=1,2, \cdots,d}\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\|\mathbf{v}^{(i)}-\mathbf{f}_{\mathbf{\theta} _{0}^{*}+\mathbf{\epsilon}}(\mathbf{x})\|],\frac{1}{\sqrt{d}}\right\}-\frac{1}{\sqrt{d }}\right]. \tag{14}\]
This minimization pulls the softmax output toward the furthest vertex for each \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), eventually avoiding the issue of degenerate softmax.
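As an illustration, the inner term of equation 14 for a single perturbed model could be computed as below; `preds` is assumed to hold the softmax outputs for a minibatch, and the full loss would average this quantity over perturbations \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\):

```python
import torch

def l_sd_single(preds: torch.Tensor) -> torch.Tensor:
    """Inner term of L^sd (equation 14) for one perturbed model.

    preds: softmax outputs f_{theta_0 + eps}(x) for a minibatch, shape (M, d).
    """
    m, d = preds.shape
    vertices = torch.eye(d, dtype=preds.dtype, device=preds.device)  # v^(i), one-hot corners
    # E_x ||v^(i) - f(x)|| for each class i, estimated over the minibatch
    dists = torch.cdist(vertices, preds).mean(dim=1)                 # shape (d,)
    worst = dists.max()                                              # furthest vertex
    thresh = 1.0 / d ** 0.5
    return torch.clamp(worst, min=thresh) - thresh                   # max{., 1/sqrt(d)} - 1/sqrt(d)
```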
#### 3.2.2 Input-output detachment
Here, let us go back to the first issue of input-output detachment we identified in equation 11. This issue happens when each perturbed model near \(\mathbf{\theta}_{0}^{1}\) is a constant function. In other words, the Jacobian of the model's output with respect to the input is zero, and in the case of multi-layered neural networks, the Jacobian of the model's output with respect to one of the intermediate layers is zero. This largely prevents learning from \(\mathbf{\theta}_{0}^{1}\), because \(\mathbf{\theta}_{0}^{1}\) is surrounded by the parameter configurations from which learning cannot happen. We thus design an additional loss that regularizes the Jacobian of model prediction with respect to its input and hidden neurons to prevent the input-output detachment.
In the rest of this section, we consider \(\mathbf{f}\) as the logits instead of the values after applying softmax, in order to avoid an issue of saturation caused by softmax (Varga et al., 2017). Let \(\mathbf{x}_{l}\in\mathbb{R}^{n_{l}}\), for \(l\in\{0,1,\cdots,L\}\), be a vector of pre-activated neurons at the \((l+1)\)-th layer parametrized by \(\mathbf{\theta}_{0}^{(l+1)}\), where \(\mathbf{x}_{0}\in\mathbb{R}^{n_{0}}\) and \(\mathbf{x}_{L}\in\mathbb{R}^{n_{L}}=\mathbb{R}^{d}\) are an input vector and its corresponding output vector, respectively. \(\mathbf{f}_{\mathbf{\theta}_{0}^{(l:L)}}\) is the function from \(\mathbb{R}^{n_{l}}\) to \(\mathbb{R}^{d}\), parametrized by \(\mathbf{\theta}_{0}^{(l+1)},\mathbf{\theta}_{0}^{(l+2)},\cdots,\mathbf{\theta}_{0}^{(L)}\). Let us now consider the effect of perturbing the input to such a function:
\[\mathbf{f}_{\mathbf{\theta}_{0}^{(l:L)}}(\mathbf{x}_{l}+\mathbf{\xi}_{l})\approx\mathbf{f}_{\mathbf{ \theta}_{0}^{(l:L)}}(\mathbf{x}_{l})+\mathbf{J}_{\mathbf{\theta}_{0}^{(l:L)}}(\mathbf{x}_{l}) \mathbf{\xi}_{l}, \tag{15}\]
where \(\mathbf{J}_{\mathbf{\theta}_{0}^{(l:L)}}(\mathbf{x}_{l})\in\mathbb{R}^{d\times n_{l}}\) is the Jacobian matrix of \(\mathbf{f}_{\mathbf{\theta}_{0}^{(l:L)}}\) with respect to \(\mathbf{x}_{l}\).
We then look at equation 15 entry-wise:
\[f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}+\mathbf{\xi}_{l})\approx f_{\mathbf{ \theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})+J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{ x}_{l})\mathbf{\xi}_{l}, \tag{16}\]
where \(f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}\) is the \(i\)-th entry of \(\mathbf{f}_{\mathbf{\theta}_{0}^{(l:L)}}\), and \(J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}\) is the \(i\)-th row of \(\mathbf{J}_{\mathbf{\theta}_{0}^{(l:L)}}\) for \(i=1,2,\cdots,d\). From equation 16, we can see that the absolute difference between \(f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\) and \(f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}+\mathbf{\xi}_{l})\) can be well approximated by the absolute value of the gradient-perturbation product:
\[\left|f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}+\mathbf{\xi}_{l})-f_{\mathbf{ \theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\right|\approx\left|J_{\mathbf{\theta}_{0}^{ (l:L)}}^{(i)}(\mathbf{x}_{l})\mathbf{\xi}_{l}\right|. \tag{17}\]
Assuming the perturbation's norm to be unit, we can bound this quantity by the operator norm of the \(i\)-th row of Jacobian:
\[\sup_{\|\mathbf{\xi}_{l}\|=1}\left|f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}+ \mathbf{\xi}_{l})-f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\right|\approx\sup_ {\|\mathbf{\xi}_{l}\|_{2}=1}\left|J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}) \mathbf{\xi}_{l}\right|=\left\|J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l}) \right\|^{*}. \tag{18}\]
Since \(J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\) is a row vector, i.e., a matrix of rank 1, the Frobenius norm \(\|\cdot\|_{F}\) is equivalent to the operator norm \(\|\cdot\|^{*}\). This allows us to rewrite equation 18 as
\[\sup_{\|\mathbf{\xi}_{l}\|_{2}=1}\left|f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l }+\mathbf{\xi}_{l})-f_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\right|\approx \left\|J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\right\|_{F}. \tag{19}\]
According to Equation 19, if \(\|J_{\mathbf{\theta}_{0}^{(l:L)}}^{(i)}(\mathbf{x}_{l})\|_{F}\) is positive, our initial model \(\mathbf{f}_{\mathbf{\theta}_{0}}\) is sensitive to the change in \(\mathbf{x}_{l}\). That is, it is not a constant function.
Per the derivation above, in order to avoid the input-output detachment, we can for instance impose that, for all \(i=1,2,\cdots,d\),
\[c=\left\|J_{\mathbf{\theta}_{0}^{(0:L)}}^{(i)}(\mathbf{x}_{0})\right\|_{F}=\left\|J_{ \mathbf{\theta}_{0}^{(1:L)}}^{(i)}(\mathbf{x}_{1})\right\|_{F}=\cdots=\left\|J_{\mathbf{ \theta}_{0}^{(L-1:L)}}^{(i)}(\mathbf{x}_{L-1})\right\|_{F}, \tag{20}\]
where \(c>0\) is a constant. Here, we set \(c=1\), which has an effect equivalent to initializing the parameters with the so-called He initialization (He et al., 2015), as shown in the following theorem:
**Theorem 3**.: _Let \(\mathbf{f}_{\mathbf{\theta}_{0}}\) be a fully connected network with ReLU (Nair & Hinton, 2010) non-linearity. We write the layerwise non-linear transformation from \(\mathbf{x}_{l}\) to \(\mathbf{x}_{l+1}\) for \(l\neq 0\) as_
\[\mathbf{\hat{f}}_{\mathbf{\theta}_{0}^{(l:L)}}(\mathbf{x}_{l})=\mathbf{W}^{(l+1)}\textsc{ReLU}(\mathbf{x}_{l})+\mathbf{b}^{(l+1)},\]
_where \(\mathbf{W}^{(l+1)}\in\mathbb{R}^{n_{l+1}\times n_{l}}\) is the weight matrix and \(\mathbf{b}^{(l+1)}\in\mathbb{R}^{n_{l+1}}\) is the bias vector. Assume that each element of \(\mathbf{x}_{l}\) has a symmetric distribution at \(0\) and all elements of \(\mathbf{x}_{l}\) are mutually independent. If the \((i,j)\)-th entry of \(\mathbf{W}^{(l+1)}\), \(W_{ij}^{(l+1)}\), is a random sample from \(\mathcal{N}(0,\sigma_{l}^{2})\) and \(\mathbf{b}^{(l+1)}\) is \(\mathbf{0}\), then the following equality holds for all \(k=1,2,\cdots,n_{l+1}\) when \(\sigma_{l}=\sqrt{\frac{2}{n_{l}}}\) with sufficiently large \(n_{l}\):_
\[1\approx\left\|J_{\mathbf{\theta}_{0}^{(l+1)}}^{(k)}(\mathbf{x}_{l})\right\|_{F}=\|\mathbf{W} ^{(l+1)}\mathds{1}(\mathbf{x}_{l}>0)\|_{F}, \tag{21}\]
_where \(\mathds{1}(\mathbf{x}_{l}>0)\) turns each positive entry in \(\mathbf{x}_{l}\) to \(1\) and \(0\) otherwise (proved in §A.3)._
In order to prevent input-output detachment, we thus introduce an additional regularization term:
\[\mathcal{L}^{iod}(\mathbf{\theta}_{0};p(\mathbf{x}))=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})} \left[\frac{1}{d}\sum_{i=1}^{d}\left\{\max_{l\in\{0,1,\cdots,L-1\}}\left(1- \left\|J^{(i)}_{\mathbf{\theta}_{0}^{(L,i)}}(\mathbf{x}_{l})\right\|_{F}\right)^{2} \right\}\right], \tag{22}\]
where \(\mathbf{x}_{l}\) is a vector of pre-activated neurons at the \(l\)-th layer and \(\mathbf{x}_{0}\) is an input vector. By minimizing equation 22 with respect to \(\mathbf{\theta}_{0}\), we prevent the model at \(\mathbf{\theta}_{0}\), and consequently all nearby models, from being constant functions, which we demonstrate empirically in §B.2.
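One way to estimate the row-wise Jacobian norms in equation 22 with automatic differentiation is sketched below for a single input; `layers` is assumed to be a list of callables mapping \(\mathbf{x}_{l}\) to \(\mathbf{x}_{l+1}\) (so the stored activations play the role of the pre-activations), and the double loop is written for clarity rather than efficiency:

```python
import torch

def l_iod(layers, x0):
    """Sketch of L^iod (equation 22) for one input x0 of shape (1, n_0)."""
    acts = [x0.requires_grad_(True)]
    for layer in layers:
        acts.append(layer(acts[-1]))
    logits = acts[-1]                      # f taken as logits, before softmax
    d = logits.shape[-1]
    loss = 0.0
    for i in range(d):
        terms = []
        for xl in acts[:-1]:               # x_0, x_1, ..., x_{L-1}
            # ||J^(i)(x_l)||_F is the norm of d(logit_i)/d(x_l)
            (g,) = torch.autograd.grad(logits[0, i], xl,
                                       retain_graph=True, create_graph=True)
            terms.append((1.0 - g.norm()) ** 2)
        loss = loss + torch.stack(terms).max()   # max over layers l
    return loss / d                               # average over output dims i
```

Since `create_graph=True` keeps the gradient computation differentiable, this regularizer can itself be minimized with respect to the parameters.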
### Hyperparameters and our recommendation
We designed three loss functions to find a good initial parameter configuration \(\mathbf{\theta}_{0}^{*}\) for \(d\)-way classification, using only unlabelled examples: i) \(\mathcal{L}^{uni}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d-1},\gamma)\) in §3.1; ii) \(\mathcal{L}^{sd}(\mathbf{\theta}_{0};\mathbf{\Sigma},d)\) in §3.2.1; iii) \(\mathcal{L}^{iod}(\mathbf{\theta}_{0})\) in §3.2.2. \(\mathcal{L}^{uni}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d-1},\gamma)\) makes our model predictions be evenly spread over \(\Delta^{d-1}\) centered on \(\mathbf{\theta}_{0}\). \(\mathcal{L}^{sd}(\mathbf{\theta}_{0};\mathbf{\Sigma},d)\) encourages the neighborhood of \(\mathbf{\theta}_{0}\) to have solutions specialized for \(d\)-way classification by preventing _degenerate softmax_. \(\mathcal{L}^{iod}(\mathbf{\theta}_{0})\) avoids the issue of _input-output detachment_. We additively combine all these to form the final loss function:
\[\mathcal{L}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d-1},\gamma,p( \mathbf{x}),\lambda,\xi)= \mathcal{L}^{uni}(\mathbf{\theta}_{0};\mathbf{\Sigma},\Delta^{d-1},\gamma,p (\mathbf{x})) \tag{23}\] \[+\lambda\mathcal{L}^{sd}(\mathbf{\theta}_{0};\mathbf{\Sigma},d,p(\mathbf{x}))\] \[+\xi\mathcal{L}^{iod}(\mathbf{\theta}_{0};p(\mathbf{x})).\]
In §B.3, we empirically present that \(\mathcal{L}^{sd}\) and \(\mathcal{L}^{iod}\) indeed prevent the degenerate softmax and the input-output detachment, and that all three loss functions in equation 23 are necessary to find a good initial parameter configuration. In the rest of this section, we provide guidelines on how to choose some of the hyperparameters.
We select the bandwidth of the MMD in \(\mathcal{L}^{uni}\), \(\gamma\), based on the median heuristic (Smola & Scholkopf, 1998), which uses the median of all pairwise distances for the Gaussian kernel in equation 9. This technique is commonly used in kernel-based unsupervised learning (Garreau et al., 2017), such as kernel CCA (Bach & Jordan, 2002) and the kernel two-sample test (Gretton et al., 2012). For a more detailed description of the median heuristic in our experiments, see §C.1.
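A sketch of the generic median heuristic follows; the exact pooling of model outputs and simplex samples used for \(\gamma\) in our experiments is detailed in §C.1, which is outside this sketch:

```python
import torch

def median_heuristic_gamma(samples: torch.Tensor) -> float:
    """Median heuristic: set gamma to the median pairwise L2 distance of samples."""
    dists = torch.pdist(samples)      # all pairwise distances, shape (n*(n-1)/2,)
    return dists.median().item()
```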
For \(\mathbf{\Sigma}=\texttt{diag}(\sigma_{1}^{2},\sigma_{2}^{2},\cdots,\sigma_{m}^{2})\) of both \(\mathcal{L}^{uni}\) and \(\mathcal{L}^{sd}\), each \(\sigma_{i}^{2}\) corresponding to \(\theta_{0,i}\) is set based on the number of neurons connected to \(\theta_{0,i}\). For instance, if \(\theta_{0,i}\) is an entry of either \(\mathbf{W}\in\mathbb{R}^{n_{out}\times n_{in}}\) or \(\mathbf{b}\in\mathbb{R}^{n_{out}}\) (i.e., a parameter in a fully-connected layer), we set \(\sigma_{i}\) to \(\sqrt{s^{2}/n_{in}}\) for \(\mathbf{W}\) and \(\sqrt{s^{2}/n_{out}}\) for \(\mathbf{b}\), where \(s\) is a hyperparameter shared across all \(i\)'s. For all the experiments in §5, we set \(s=\sqrt{0.5}\), based on the preliminary experiments in §C.2.
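This fan-in/fan-out rule can be written as a small helper; the function name is ours, and the returned dictionary matches the `sigmas` argument used in the \(\mathcal{L}^{uni}\) sketch in §3.1:

```python
import torch.nn as nn

def perturbation_sigmas(model: nn.Module, s: float = 0.5 ** 0.5) -> dict:
    """Per-parameter stds: sqrt(s^2 / n_in) for weights, sqrt(s^2 / n_out) for biases."""
    sigmas = {}
    for name, p in model.named_parameters():
        if p.dim() >= 2:                    # weight matrix W in R^{n_out x n_in}
            sigmas[name] = (s ** 2 / p.shape[1]) ** 0.5
        else:                               # bias vector b in R^{n_out}
            sigmas[name] = (s ** 2 / p.shape[0]) ** 0.5
    return sigmas
```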
In the cases of \(\lambda\) and \(\xi\), we mainly focus on selecting \(\lambda\) while fixing \(\xi\) to \(1\), because the two loss functions \(\mathcal{L}^{uni}\) and \(\mathcal{L}^{iod}\) are intertwined. We use \(\lambda=0.4\) for all the experiments in §5. With \(\lambda=0.4\), we observed in the preliminary experiments that both \(\mathcal{L}^{uni}\) and \(\mathcal{L}^{sd}\) decrease. See §C.3 for more details.
## 4 Experimental Settings
To evaluate our algorithm, we fine-tune deep neural networks on the various binary downstream tasks synthetically created out of existing dataset. Here, we describe the experimental setup.
**Datasets and tasks.** We derive binary tasks from MNIST (LeCun et al., 1998), using the original labels. For example, we can create a binary classification problem of distinguishing odd from even numbers in MNIST, which originally has 10 classes (digits 0-9). In this way, we can create \(2^{10}-2\) tasks from MNIST. After we define how to convert the original labels to either 0 or 1, we randomly select \(N\) (for training) + \(0.2N\) (for validation) instances, which allows us to test the impact of the size of the labelled set. We standardize each image to have zero mean and unit variance across all the examples. We do not use any data augmentation.
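The paper does not specify its task-sampling code; one natural way to enumerate the \(2^{10}-2\) binary tasks is to treat each nonempty proper subset of the ten digits as the positive class, encoded as a bitmask:

```python
import numpy as np

def binary_task_from_digits(labels: np.ndarray, mask: int) -> np.ndarray:
    """Map 10-class labels to {0, 1}: digit d is positive iff bit d of mask is set.

    Each mask in 1 .. 2**10 - 2 assigns a nonempty proper subset of digits to
    class 1, giving the 2**10 - 2 distinct binary tasks described above.
    """
    positives = np.array([(mask >> digit) & 1 for digit in range(10)])
    return positives[labels]

# Example: odd vs. even digits (digits 1, 3, 5, 7, 9 -> class 1)
odd_mask = sum(1 << d for d in (1, 3, 5, 7, 9))
y = binary_task_from_digits(np.array([0, 1, 2, 3]), odd_mask)  # -> [0, 1, 0, 1]
```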
**Models.** We train a multi-layer perceptron with fully-connected layers, FCN, on MNIST. FCN has three hidden layers with ReLU (Nair & Hinton, 2010) nonlinearity. +BN refers to the addition of batch normalization (Ioffe & Szegedy, 2015) to all hidden layers before ReLU. Additional details about the network architectures are included in §D.1.
**Baselines.** In order to assess the effectiveness of the proposed approach, we compare it against more conventional approaches to initialization. First, we compare our approach against data-agnostic initialization schemes, including Xavier initialization (Glorot & Bengio, 2010) and He initialization (He et al., 2015). We also compare it to _R.label_, which refers to a data-dependent initialization scheme proposed by Pondenkandath et al. (2018). In the case of R.label, we randomly assign labels to the examples in each mini-batch and minimize the cross entropy loss. Both our initial parameter configuration and R.label's initial parameter configuration are pre-trained on the same number of unlabelled examples for the same maximum number of epochs. For each pre-training run, we choose the parameter configuration based on the pre-training loss. See §D.2 for more details about the baselines and our pre-training setup.
Orthogonal to these initialization schemes, we also test adding batch normalization to these baseline approaches. It has been observed by some that batch normalization makes learning less sensitive to initialization (Ioffe & Szegedy, 2015).
**Training and evaluation.** For each initialization scheme, we fine-tune the network by minimizing the cross entropy loss, using Adam (Kingma & Ba, 2014) with a fixed learning rate of \(10^{-3}\) and momentum parameters set to \((\beta_{1},\beta_{2})=(0.9,0.999)\). We use mini-batches of size 50 and train the network for up to 10 epochs without any regularization. For each binary task, we monitor the validation loss over the epochs and calculate the test accuracy (%) on 10,000 test examples when the validation loss is at its minimum. We then report the mean and standard deviation of the test accuracy (%) across 20 random binary tasks. We repeat this whole set of experiments four times, for each setup.
## 5 Results
Table 1 shows the average test scores on 20 random binary tasks across 4 random runs. The 20 binary tasks for each run are the same regardless of model, initialization, and pre-training. Pre-training FCN with 60,000 unlabelled examples by our algorithm improves the average test accuracy across the 20 random tasks compared to training FCN from scratch, and this improvement is greater when the number of labelled instances is small. Furthermore, our test scores are better than all the schemes
\begin{table}
\begin{tabular}{c c c|c c c c} \hline
**Model** & **Init** & **Pre-trained** & **N=5** & **N=10** & **N=20** & **N=40** \\ \hline \hline FCN & Xavier & Ours & **82.42**\(\pm\)0.72 & 85.98\(\pm\)0.65 & **90.07**\(\pm\)0.17 & 92.48\(\pm\)0.57 \\ FCN & Xavier & - & 79.63\(\pm\)0.78 & 83.70\(\pm\)0.59 & 87.54\(\pm\)0.67 & 90.91\(\pm\)0.53 \\ FCN & Xavier & R.label & 76.81\(\pm\)2.13 & 83.34\(\pm\)0.79 & 87.53\(\pm\)0.91 & 90.88\(\pm\)0.52 \\ FCN+BN & Xavier & - & 77.09\(\pm\)1.22 & 83.50\(\pm\)0.44 & 88.00\(\pm\)0.60 & 91.48\(\pm\)0.53 \\ FCN+BN & Xavier & R.label & 78.87\(\pm\)1.75 & 84.38\(\pm\)0.97 & 88.71\(\pm\)0.53 & 91.57\(\pm\)0.59 \\ \hline FCN & He & Ours & 82.27\(\pm\)0.78 & **86.46**\(\pm\)0.37 & 89.69\(\pm\)0.28 & **92.61**\(\pm\)0.51 \\ FCN & He & - & 79.17\(\pm\)1.21 & 83.41\(\pm\)0.92 & 87.96\(\pm\)0.64 & 91.34\(\pm\)0.37 \\ FCN & He & R.label & 77.41\(\pm\)2.09 & 83.52\(\pm\)0.77 & 87.31\(\pm\)0.68 & 90.66\(\pm\)0.41 \\ FCN+BN & He & - & 76.89\(\pm\)1.48 & 83.01\(\pm\)0.98 & 88.01\(\pm\)0.66 & 91.55\(\pm\)0.57 \\ FCN+BN & He & R.label & 78.82\(\pm\)0.78 & 85.33\(\pm\)0.62 & 89.15\(\pm\)0.68 & 92.14\(\pm\)0.67 \\ \hline \end{tabular}
\end{table}
Table 1: We present the average (\(\pm\)stdev) test scores on MNIST across four random experiments by varying the number of labelled examples (\(10N\) for training and \(2N\) for validation). We denote the random label pre-training by _R.label_. **Bold** marks the best score within each column. For all \(N\), our initialization approximates various tasks better than the others do. The improvement is especially significant when the number of labelled examples is small. Although both R.label and our initialization use 60,000 unlabelled data, our pre-training is superior to R.label. The positive effect of batch normalization (+BN) can be observed with \(N=40\), but its effect does not match that of our approach. Compared to FCN trained from scratch, we observe that +BN negatively impacts the test score when the number of labelled instances is small (\(N=5\)), while our initialization improves the test score regardless of \(N\).
applied to FCN + BN, which has more parameters than FCN. Both R.label and +BN bring a positive effect when the number of labelled examples is sufficient (N=40). However, for \(N=5\), both hurt the test performance of the randomly initialized plain network.
We also present the standard deviation of test scores across the 20 random binary tasks created from MNIST in Table 2. Similar to Table 1, our initialization improves the ability to solve most downstream tasks, and this improvement is greater when the number of labelled instances is small. We also observe that R.label and +BN can hurt this ability in terms of the standard deviation for \(N=5\).
## 6 Conclusion
In this paper we proposed a novel criterion for identifying good initialization of parameters in deep neural networks. This criterion looks at the distribution over models derived from parameter configurations in the vicinity of an initial parameter configuration. If this distribution is close to a uniform distribution, the initial parameters are considered good, since we can easily reach any possible solution rapidly from there on.
We then derived an unsupervised initialization algorithm based on this criterion. In addition to maximizing this uniformity, our algorithm prevents two degenerate cases; (1) degenerate softmax and (2) input-output detachment. Our experiments reveal that the model initialized by our algorithm can be trained better than the one trained from scratch, in terms of average test accuracy across a diverse set of tasks. This improvement was found to be comparable to or better than random label pre-training (Pondenkandath et al., 2018; Maennel et al., 2020) and batch normalization (Ioffe and Szegedy, 2015) combined with typical initialization strategies.
The effectiveness of the proposed approach leaves us with one puzzling question. The proposed algorithm does not take into account the use of gradient-based optimization, unlike model-agnostic meta-learning (Finn et al., 2017), and it could still find initial parameters that were amenable to gradient-based fine-tuning. This raises a question on the relative importance between initialization and the choice of optimizer in deep learning. We leave this question for the future.
#### Acknowledgments
This work was supported by 42dot, Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling), Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI), and NSF Award 1922658 NRT-HDR:
\begin{table}
\begin{tabular}{c c c|c c c c} \hline
**Model** & **Init** & **Pre-trained** & **N=5** & **N=10** & **N=20** & **N=40** \\ \hline \hline FCN & Xavier & Ours & **4.76\(\pm\)**0.88 & 4.54\(\pm\)0.52 & **3.01\(\pm\)**0.71 & 2.26\(\pm\)0.40 \\ FCN & Xavier & - & 6.62\(\pm\)1.29 & 5.55\(\pm\)0.62 & 3.54\(\pm\)0.27 & 2.65\(\pm\)0.53 \\ FCN & Xavier & R.label & 6.08\(\pm\)0.92 & 5.02\(\pm\)1.16 & 3.82\(\pm\)0.23 & 2.78\(\pm\)0.49 \\ FCN+BN & Xavier & - & 7.47\(\pm\)1.70 & 5.53\(\pm\)0.69 & 3.72\(\pm\)0.78 & 2.72\(\pm\)0.32 \\ FCN+BN & Xavier & R.label & 6.62\(\pm\)1.50 & 5.44\(\pm\)0.33 & 3.20\(\pm\)0.41 & 2.40\(\pm\)0.37 \\ \hline FCN & He & Ours & 5.26\(\pm\)0.87 & **4.04\(\pm\)**0.80 & 3.25\(\pm\)0.42 & **2.16\(\pm\)**0.40 \\ FCN & He & - & 5.74\(\pm\)0.81 & 5.32\(\pm\)0.45 & 3.31\(\pm\)0.48 & 2.47\(\pm\)0.31 \\ FCN & He & R.label & 6.37\(\pm\)1.10 & 4.84\(\pm\)1.02 & 3.98\(\pm\)0.44 & 3.03\(\pm\)0.85 \\ FCN+BN & He & - & 7.52\(\pm\)0.76 & 6.50\(\pm\)1.76 & 3.59\(\pm\)0.80 & 2.74\(\pm\)0.41 \\ FCN+BN & He & R.label & 7.33\(\pm\)1.10 & 4.95\(\pm\)1.08 & 3.18\(\pm\)0.55 & 2.30\(\pm\)0.28 \\ \hline \end{tabular}
\end{table}
Table 2: We additionally demonstrate the standard deviation of test scores across 20 random binary tasks derived from MNIST by varying the number of labelled examples (\(10N\) for training and \(2N\) for validation). This metric measures the ability to solve most tasks well (lower is better). We perform four random runs and report the average standard deviation. Here, (\(\pm\)stddev) means the standard deviation across four random experiments. We denote the random label pre-training by _R.label_. **Bold** marks the best score within each column. Similar to Table 1, our initialization solves most tasks well even when there are only a small number of labelled examples. Both +BN and R.label can hurt the ability to approximate various tasks when the number of labelled instances is small (N=5).
FUTURE Foundations, Translation, and Responsibility for Data Science. This work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise.
|
2308.15203 | Preference-based training framework for automatic speech quality
assessment using deep neural network | One objective of Speech Quality Assessment (SQA) is to estimate the ranks of
synthetic speech systems. However, recent SQA models are typically trained
using low-precision direct scores such as mean opinion scores (MOS) as the
training objective, from which it is not straightforward to estimate rankings. Although
it is effective for predicting quality scores of individual sentences, this
approach does not account for speech and system preferences when ranking
multiple systems. We propose a training framework of SQA models that can be
trained with only preference scores derived from pairs of MOS to improve
ranking prediction. Our experiment reveals conditions where our framework works
the best in terms of pair generation, aggregation functions to derive system
score from utterance preferences, and threshold functions to determine
preference from a pair of MOS. Our results demonstrate that our proposed method
significantly outperforms the baseline model in Spearman's Rank Correlation
Coefficient. | Cheng-Hung Hu, Yusuke Yasuda, Tomoki Toda | 2023-08-29T10:40:57Z | http://arxiv.org/abs/2308.15203v1 | Preference-based training framework for automatic speech quality assessment using deep neural network
###### Abstract
One objective of Speech Quality Assessment (SQA) is to estimate the ranks of synthetic speech systems. However, recent SQA models are typically trained using low-precision direct scores such as mean opinion scores (MOS) as the training objective, from which it is not straightforward to estimate rankings. Although this approach is effective for predicting quality scores of individual sentences, it does not account for speech and system preferences when ranking multiple systems. We propose a training framework for SQA models that can be trained with only preference scores derived from pairs of MOS to improve ranking prediction. Our experiment reveals the conditions under which our framework works best in terms of pair generation, aggregation functions to derive a system score from utterance preferences, and threshold functions to determine a preference from a pair of MOS. Our results demonstrate that our proposed method significantly outperforms the baseline model in Spearman's Rank Correlation Coefficient.
Cheng-Hung Hu\({}^{1}\), Yusuke Yasuda\({}^{1}\), Tomoki Toda\({}^{1}\)\({}^{1}\)Nagoya University
[email protected], [email protected], [email protected]
**Index Terms**: Speech Naturalness Assessment, Speech Quality Assessment, Pairwise Comparison, MOS
## 1 Introduction
Speech quality is usually used as a criterion for assessing the performance of speech applications such as hearing aids [1], VoIP [2], speech synthesis systems [3, 4], speech coding systems [5, 6], etc. To determine the speech quality of a system, subjective evaluation methods like ITU-T Recommendation P.85 [7] are commonly used. However, it is resource-intensive and time-consuming to collect a listener-unbiased result. Therefore, it is essential to develop an automatic and reliable method for speech quality assessment (SQA).
Recently, several data-driven SQA approaches have been proposed using deep neural networks (DNNs) [8, 9, 10, 11, 12, 13] to learn the Mean Opinion Score (MOS) of utterances. When training models with subjective scores like MOS, there are two potential issues to consider. One problem, as noted by Manocha et al. [14], is the lack of references. This can be a challenging problem to address, since the model is expected to learn the implicit distribution of references used by human listeners, whether consciously or unconsciously. This reference distribution can be heavily influenced by the listener's mood or experience. Despite the efforts of several SQA models to learn the distribution using listener IDs to identify each individual for each MOS [9, 10], the lack of a reference distribution remains a significant challenge. Second, the size of the test set is also a problem. When the number of sentences from a system in the test set is particularly small, determining the system quality score from a small number of utterance scores can result in significant noise.
A preference score is another subjective score for assessing speech quality. Preference scores are recognized to be easier and faster for human raters to evaluate than direct scores such as MOS [15], a characteristic that makes preference scores less noisy [16, 17]. Preference scores can also be converted to system quality scores by aggregation methods [15, 18].
In this paper, we develop a method to convert MOS from a pair of utterances into the form of preferences, which we hypothesize is more suitable for training SQA models. Although we use the derived preference scores instead of real preference scores, this method can still address the aforementioned issues: 1) by explicitly providing a reference for the model, the model no longer needs to learn the distribution of the reference on its own; 2) we propose to use MOS from the same listener to generate preference scores, which can more effectively reduce the listener bias in preference scores; and 3) our method can increase the number of evaluations for each system in the test set, reducing the noise in predicted system quality scores.
Our proposal comprises not only a training framework for SQA models that relies on the derived preference scores as a training objective, but also speech pair generation methods, aggregation functions to obtain a system score from utterance scores, and threshold functions to determine the preference from a pair of quality scores. Since the method used to aggregate preference scores differs from that used for utterance scores, the system quality score derived from preference scores, although meaningful in its own right, may not be linearly correlated with the system quality score aggregated from MOS. Consequently, this paper focuses on evaluating the correlation of system ranks using Spearman's Rank Correlation Coefficient (SRCC), rather than system quality scores using the Linear Correlation Coefficient (LCC).
A preliminary simulation is conducted to show the feasibility of converting MOS to preference scores for training. In the simulation, we also found that using MOS from the same listener can improve the performance bound. We then conduct our experiments by training the baseline model and our preference-score-based model. The experimental results show that our model performs statistically significantly better (p-value \(<\) 0.05) than the baseline in terms of SRCC.
## 2 Proposed method
### Framework
Figure 1 shows a framework of our proposed method along with a normal general-non-reference SQA method. As shown in Fig. 1 (a), a normal SQA method predicts two scores in different levels: (1) an utterance score predicted for a single utterance, and (2) a system score derived by aggregating all utterance scores based on an aggregation function. Our method shown in Fig. 1 (b) predicts a preference score in addition to the
utterance score and the system score: (1) a preference score predicted from a pair of the utterance scores based on a preference function, and (2) a preferential system score based on a preferential aggregation function. This framework enables our SQA model to be trained with comparative scores while predicting quality scores in the same way as general-non-reference SQA models.
### Pair Generation
In the training phase, we randomly select a listener and then choose two utterances that are assessed by that listener. Note that the utterance pair can contain different content. In the testing phase, a subset of all possible combinations of utterance pairs is generated by the proposed pair generation methods. Given \(N\) systems and \(K\) pairs to generate, we consider:
**Random Selection (RAND).** We randomly select a pair of systems \((\mathrm{sys_{i}},\mathrm{sys_{j}})\) from all possible system pairs and increment a counter for that system pair. The process is repeated until a total of \(K\) pairs are generated. In this method, each system may be compared a different number of times, and the number of pairs formed between each system may also vary.
**Linked Selection (LINK).** This method generates an equal number of comparisons for each system. We perform the following steps \(K/N\) times. First, we create a circular list of integers \([1,2,...,\mathrm{N}]\). Then, we randomly shuffle the circular list to obtain a permutation \([\mathrm{a_{1}},\mathrm{a_{2}},...,\mathrm{a_{N}}]\) of the integers. Next, we form system pairs by pairing up the consecutive integers in the permutation, as follows: \((\mathrm{sys_{a_{1}}},\mathrm{sys_{a_{2}}})\), \((\mathrm{sys_{a_{2}}},\mathrm{sys_{a_{3}}})\),..., \((\mathrm{sys_{a_{N}}},\mathrm{sys_{a_{1}}})\). Finally, we count the number of occurrences of each system pair.
**Balanced System Pair Selection (BS).** This method enumerates all combinations of systems repeatedly until a total of \(K\) pairs is generated, so every system pair is compared equally often.
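To make the pair generation concrete, the following is a minimal Python sketch of the LINK procedure as we read the description above; the function name and the `Counter`-based bookkeeping are our own illustration rather than code from this work.

```python
import random
from collections import Counter

def linked_selection(n_systems: int, k_pairs: int) -> Counter:
    """LINK: generate K system pairs so every system is compared equally often.

    Runs K/N rounds; each round shuffles the systems into a ring and pairs up
    consecutive entries, yielding N pairs per round.
    """
    assert k_pairs % n_systems == 0, "K must be a multiple of N"
    counts = Counter()
    for _ in range(k_pairs // n_systems):
        ring = list(range(1, n_systems + 1))
        random.shuffle(ring)
        for i in range(n_systems):
            # pair consecutive systems on the ring, wrapping around at the end
            pair = (ring[i], ring[(i + 1) % n_systems])
            counts[pair] += 1
    return counts

# e.g. 175 systems over two rounds -> 350 pairs
pair_counts = linked_selection(175, 350)
```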
### SQA Model
Our model is based on the neural network (NN) part of UTMOS [12], which is the state-of-the-art model for the VoiceMOS dataset [19]. The NN of UTMOS has five inputs: the data-domain ID, the listener ID, the phoneme sequence, the reference sequence, and the SSL feature. The phoneme sequence is recognized by a pretrained ASR model [20] and then clustered by the DBSCAN algorithm [21] to generate the reference sequence. The SSL feature is extracted from the pretrained wav2vec2 [22] model. These inputs are concatenated and fed to the subsequent BLSTM layer and linear layers to produce the frame-wise scores. The frame-wise scores are then averaged to form the utterance score. The loss of the original UTMOS is calculated from the contrastive loss and the clipped MSE loss.
### Preference Function
We derive a preference score from a pair of utterance scores with a preference function. The preference function is used in the bottom path in Fig. 1 (b).
Given the \(a\)-th subjective listening test result of system \(i\), in which an utterance \(x_{i,a}\) is assessed by a listener \(l_{i,a}\), the predicted preference score \(\mathrm{pref_{pred}}(i,a,j,b)\) is calculated as:
\[\mathrm{pref_{pred}}(i,a,j,b)=\alpha(\mathrm{SQA}(x_{i,a},l_{i,a})-\mathrm{SQA}(x_{j,b},l_{j,b})),\]
where \(\alpha(x)=2\,\mathrm{sigmoid}(x)-1\) and \(\mathrm{SQA}(\cdot,\cdot)\) is the output of the SQA model. Note that one listener can assess more than one utterance, that is, \(l_{i,a}=l_{j,b}\) for some \(i\), \(a\), \(j\), \(b\). The ground-truth preference score is defined as \(\mathrm{pref_{gt}}(i,a,j,b)=\mathrm{sgn}(s_{i,a}-s_{j,b})\), where \(\mathrm{sgn}(\cdot)\) is the sign function and \(s_{i,a}\) is the ground-truth MOS of the utterance \(x_{i,a}\) assessed by the listener \(l_{i,a}\). We use the Mean Squared Error (MSE) as the training objective: \(L=\mathrm{MSE}(\mathrm{pref_{pred}},\mathrm{pref_{gt}})\).
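As a concrete illustration, here is a minimal PyTorch sketch of the preference function and the MSE objective above; `sqa_model` stands in for the UTMOS-style network and is an assumption of this sketch.

```python
import torch

def alpha(x: torch.Tensor) -> torch.Tensor:
    # alpha(x) = 2 * sigmoid(x) - 1 maps score differences into (-1, 1)
    return 2.0 * torch.sigmoid(x) - 1.0

def preference_loss(sqa_model, x_ia, l_ia, x_jb, l_jb, s_ia, s_jb):
    """MSE between the predicted and ground-truth preference of an utterance pair."""
    pref_pred = alpha(sqa_model(x_ia, l_ia) - sqa_model(x_jb, l_jb))
    pref_gt = torch.sign(s_ia - s_jb)  # +1 win, 0 tie, -1 loss
    return torch.mean((pref_pred - pref_gt) ** 2)
```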
### Preferential Aggregation Function
The normal SQA models use the average function to aggregate utterance scores into system scores. Our preference-based SQA method uses various preferential aggregation functions along with threshold functions to determine a win, draw, or loss to derive system quality scores from preferences of utterance pairs.
#### 2.5.1 Threshold selection
We determine the threshold by three methods:
**Equal Range (ER).** We split the range \([-1,1]\) of \(\mathrm{pref_{pred}}\) into three equal intervals, [-1, -1/3], [-1/3, 1/3], and [1/3, 1], representing a loss, a draw, and a win, respectively.
**Equal Error Rate (EER).** We use the development set to find two equal-error-rate thresholds. One threshold is found between a win and a non-win, and the other is found between a loss and a non-loss. Empirically, the thresholds lie near 0.15 and -0.15.
**No Draw (ND).** We ignore the draw condition: \(\mathrm{pref_{pred}}>0\) means a win, while \(\mathrm{pref_{pred}}<0\) means a loss.
#### 2.5.2 Aggregation method
Reduction from preferences to absolute values is studied in utility theory [23]. Utility theory associates latent utility values with preference probability. In our case, the latent utility represents the absolute quality of a system. The utility model formulates the preference probability \(p(i\succ j)\) of \(i\) over \(j\) based on the difference of their utility values \(u_{i},u_{j}\) and a link function \(\sigma\) as \(p(i\succ j)=\sigma(u_{i}-u_{j})\). The link function acts as a cumulative distribution function that converts the utility difference into a preference probability [24]. We can aggregate preferences into absolute values by obtaining the utility values.
We use the following aggregation methods:
**Differential Count (DC).** We use the win count minus the loss count as the quality score of a system. This is equivalent to using the linear function \(\sigma(x)=\frac{1+x}{2}\) as the link function to derive the preference probability from the utility difference, where \(x\) is the utility difference \(x=u_{i}-u_{j}\). This setting assumes the distribution of the utility difference is uniform.
**Bradley-Terry-Luce (BTL) model.** The Bradley-Terry-Luce (BTL) [18] model is a well-known probability model for deriving absolute scores from pairwise comparisons. The BTL is equivalent to using the sigmoid function \(\sigma(x)=\frac{1}{1+\exp(-x)}\) as the link function to derive the preference probability from the utility difference. This setting assumes the distribution of the utility difference is logistic. Note that if the exponential utility \(q_{i}\) is used as \(u_{i}=\log q_{i}\), the preference probability becomes the ratio of the exponential utilities: \(p(i\succ j)=\frac{q_{i}}{q_{i}+q_{j}}\). To obtain the utility values, the BTL model iteratively updates the utility values, starting from constants, based on the preference data. We set the maximum number of iterations to 200 and the tolerance to 0.0001.
Figure 1: Framework of speech quality assessment models.
**Winning Count (WC).** We use the winning count alone as the quality score of a system. This is a naive method that cannot be associated with a preference probability under the utility model.
**Preference Score (PS).** This method aggregates the raw \(\mathrm{pref}_{\mathrm{pred}}\) into the system quality score without using a threshold selection method. The quality score of system \(i\) is obtained by summing \(\mathrm{pref}_{\mathrm{pred}}(\mathrm{i},\mathrm{a},\mathrm{j},\mathrm{b})\) over all evaluated combinations with compared systems \(j\) and indexes \(a\) and \(b\), and then subtracting the sum of \(\mathrm{pref}_{\mathrm{pred}}(\mathrm{k},\mathrm{c},\mathrm{i},\mathrm{d})\) over all evaluated combinations with compared systems \(k\) and indexes \(c\) and \(d\).
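To make the aggregation step concrete, below is a minimal NumPy sketch of the DC aggregation and an iterative BTL fit using the standard minorization-maximization updates, with the same iteration cap (200) and tolerance (0.0001) as above; the win-count matrix representation and function names are our own assumptions.

```python
import numpy as np

def fit_btl(wins: np.ndarray, max_iter: int = 200, tol: float = 1e-4) -> np.ndarray:
    """Fit exponential BTL utilities q from wins[i, j] = times system i beat j.

    Uses the standard MM update q_i <- W_i / sum_j n_ij / (q_i + q_j),
    where W_i is the total wins of i and n_ij the comparisons between i and j.
    """
    comparisons = wins + wins.T                       # n_ij (zero diagonal assumed)
    q = np.ones(wins.shape[0])
    for _ in range(max_iter):
        denom = (comparisons / (q[:, None] + q[None, :] + 1e-12)).sum(axis=1)
        q_new = wins.sum(axis=1) / np.maximum(denom, 1e-12)
        q_new /= q_new.sum()                          # fix the arbitrary scale
        if np.abs(q_new - q).max() < tol:
            return q_new
        q = q_new
    return q

def differential_count(wins: np.ndarray) -> np.ndarray:
    """DC: win count minus loss count for each system."""
    return wins.sum(axis=1) - wins.sum(axis=0)
```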
## 3 Experimental Evaluation
### Dataset
The dataset used for the experiments was the main track of the VoiceMOS Challenge [19]. In the training set, there were 4,973 unique utterances, each of which was evaluated 8 times, resulting in a total of 39,784 utterance-score pairs. The set includes 175 systems, each evaluated between 96 and 288 times, and assessed by 288 listeners who each evaluated 126 to 152 utterances. The development set contained 1,066 unique utterances, each evaluated 8 times, resulting in a total of 8,528 utterance-score pairs. The set contains 181 systems, each evaluated between 8 and 296 times, and assessed by 296 listeners who each evaluated 16 to 177 utterances. The test set consisted of 1,066 unique utterances, each assigned an average quality score. There is no utterance overlap among the three sets. The test set contained 187 systems, each with 1 to 38 unique utterances. In order to obtain ground-truth system scores for the subsequent experiments, we averaged the ground-truth scores of the utterances within each system.
### Simulation of Pair Generations and Aggregations
We investigated the upper bound of the SQA model's performance that can be learned from the training dataset under combinations of utterance pair generation methods (RAND, LINK, and BS), with or without the same-listener constraint, and system score aggregation methods (DC, BTL, and WC). The threshold selection methods were not investigated here because they concern model predictions. RAND has no restrictions on the number of system pairs, while LINK requires multiples of the system count and BS requires multiples of the number of system combinations. Therefore, we used 175 systems and generated system pairs at frequencies of once, twice, five, ten, and fifty times for the LINK method, while the 15,225 system combinations were used to generate system pairs at frequencies of once and twice for the BS method. We evaluated the average Spearman's Rank Correlation Coefficient (SRCC) over 100 simulations for combinations of the pair generation and score aggregation methods using the training set of the VoiceMOS challenge. We use the training set for simulation rather than the test set for two main reasons. First, the test set does not contain quality scores assessed by individual listeners, which makes it impossible to demonstrate the effect of the same-listener constraint. Second, our model is primarily trained using individual listener scores, so even if we simulated using the average scores from the test set, it might still be difficult to reflect the performance achieved by training with individual scores.
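The following is a minimal sketch of one way to run such a simulation with NumPy/SciPy, assuming MOS ratings grouped by system; for brevity it omits the same-listener constraint and ignores draws, and all names are our own illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def simulate_srcc(scores_by_system, gen_pairs, aggregate, n_runs=100):
    """Average SRCC between aggregated preference-based scores and mean MOS.

    scores_by_system: list whose i-th entry holds the MOS ratings of system i;
    gen_pairs: callable returning a list of (i, j) system pairs (RAND/LINK/BS);
    aggregate: callable mapping a win-count matrix to system scores (DC/BTL/WC).
    """
    n = len(scores_by_system)
    ground_truth = np.array([np.mean(s) for s in scores_by_system])
    srccs = []
    for _ in range(n_runs):
        wins = np.zeros((n, n))
        for i, j in gen_pairs():
            s_i = np.random.choice(scores_by_system[i])
            s_j = np.random.choice(scores_by_system[j])
            if s_i > s_j:
                wins[i, j] += 1
            elif s_j > s_i:
                wins[j, i] += 1
        srccs.append(spearmanr(aggregate(wins), ground_truth).correlation)
    return float(np.mean(srccs))
```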
#### 3.2.1 Performance bound of pair generation methods
Figure 2 shows the results of the simulation. All the pair generation methods could reach a performance bound close to SRCC=1 at around 30,000 comparisons in combination with any of the aggregation functions. Both the LINK and BS methods demonstrated similar performance across all aggregation methods. The performance of LINK started at SRCC=0.5 at 175 comparisons, gradually increased to SRCC=0.8, 0.87, 0.97, and 0.982, and finally achieved 0.99 at 30,450 comparisons. Similarly, the BS method achieved SRCC=0.985 at 15,225 comparisons and SRCC=0.99 at 30,450 comparisons. The RAND method achieved SRCC=0.984 at 30,450 comparisons, although it exhibited inconsistent performance at lower numbers of comparisons. We therefore concluded that all the pair generation methods are feasible and achieve high performance, combined with any aggregation function, given a sufficiently large number of comparisons.
#### 3.2.2 Effects of the same listener constraint
We observed that the same-listener constraint on the pair generation methods consistently yielded more correlated system ranks than the methods without it. The performance gain from the constraint was up to 0.03 for a small number of comparisons, and there were slight improvements even for a large number of comparisons. We therefore concluded that the same-listener constraint on pair generation is effective.
Figure 2: Simulation results of (a) BTL model, (b) WC aggregation, (c) DC aggregation against ground-truth MOS.
#### 3.2.3 Comparison among aggregation methods
We found that the WC aggregation method performed the worst among the three aggregation methods with any pair generation method. In particular, given the small number of pairs generated by the RAND method, the WC aggregation method could have up to 0.1 lower SRCC than the DC and BTL aggregation methods. We interpreted that WC performed poorly because this method cannot be associated with a preference probability. Combined with the LINK pair generation, the DC and BTL aggregation methods showed different performance around 8,750 pairs: if the number of pairs was 8,750 or greater, the BTL aggregation method was better than DC; if the number of pairs was lower than 8,750, the DC aggregation method had a higher performance bound than BTL. Given the 15,225 and 30,450 pairs generated by the BS pair generation, the DC aggregation method was better than the BTL aggregation method. As a result, we concluded that WC is not an appropriate aggregation method, BTL combines well with the LINK pair generation method, and DC combines well with the BS pair generation method.
### Experiment of MOS prediction
We trained the original version of UTMOS and our preference model 20 times with different seeds. Then, we tested our models based on two combinations of pair generation and aggregation functions suggested by the simulation results in Section 3.2: the LINK and BTL combination and the BS and DC combination. Each combination was applied with three threshold methods: ER, EER, and ND. We denote our models as UTP_X_Y_Z, where X is the pair generation method, Y the threshold method, and Z the aggregation method. We also evaluated these pair generations with the PS aggregation for preference score prediction, denoted UTP_X_PS. For reference, we also checked the direct score prediction performance of our preference models with the averaging aggregation, denoted UTP_SC. Note that UTP_SC does not need a pair generation method. We followed the same configurations as UTMOS, including hyperparameters and model parameters, except for the data augmentation methods. Concretely, we downsampled all waveforms to 16 kHz and normalized the subjective quality scores into the range [-1, 1]. The Adam [25] optimizer was used for training with 4,000 warm-up steps out of 15,000 training steps. The batch size was set to 12, 1, and 1 for training, development, and testing, respectively. As for the data augmentation methods, we observed that speaking-rate changing and pitch shifting caused a degradation not only for our preference models but also for the original version of UTMOS. Thus, we included results of UTMOS models trained with the data augmentation method (UTMOS\({}_{\mathrm{aug}}\)) and without the data augmentation method (UTMOS\({}_{\mathrm{noaug}}\)) for comparison, and we set UTMOS\({}_{\mathrm{noaug}}\) as the baseline. We evaluated the average SRCC over the 20 runs. For every evaluation, we regenerated the testing pairs with the pair generation methods, if they were applied. We checked the statistical significance of our proposed methods against UTMOS\({}_{\mathrm{noaug}}\) with a pairwise t-test.
Table 1 shows the results of the MOS predictions as the average SRCC. For direct quality prediction using averaging aggregation, UTP_SC showed similar performance to the baseline, indicating that our preference-based training framework does not cause much degradation in direct quality prediction.
Using the proposed pair generation methods, UTP_LINK_PS and UTP_BS_PS achieved higher performance than the baseline in general. This suggests that carefully designed pair generation is important for accurately predicting the ranks of synthetic speech. Their high performance was prominent when a larger test set was used. The results also suggest that using LINK and BS pairs is effective for accurately evaluating SQA models.
As for preference quality prediction, none of the models using the LINK pair generation and the BTL aggregation showed improvements, even against a larger test set. On the other hand, most models using the BS pair generation and the DC aggregation showed improvements over the baseline, and these improvements were prominent when a larger test set was used. The performance difference between the LINK and BTL combination and the BS and DC combination can be explained by the assumptions of the aggregation methods: BTL assumes the utility difference follows a logistic distribution, whereas DC assumes a uniform distribution. Thus, assuming high noise on preferences was important for our preference-based training framework. The choice of threshold function did not impact the prediction performance. Among the threshold functions we used, ND makes the weak assumption of prohibiting ties, but our results indicate that such weak assumptions did not matter in our framework. There are stronger assumptions on preferences, such as total order or stochastic transitivity [26], and investigating these assumptions is left as future work.
## 4 Conclusion
In this paper, we proposed a preference-based framework for SQA. Our framework consists of pair generation, aggregation functions to derive system scores from utterance preferences, and threshold functions to determine preferences from quality scores. Our method helps SQA models learn the reference distribution explicitly and reduces listener bias. Our simulation confirmed that our pair generation and aggregation functions have high performance bounds, and that constraining pair generation to select utterance pairs from the same listener improves the performance bound by reducing the listener bias of MOS. Our experiment showed that our methods significantly outperform the UTMOS baseline in terms of SRCC, and that the choice of the aggregation function is quite important for our framework to be effective. In the future, we plan to collect real preference scores and compare their effects with our framework utilizing preference scores derived from MOS.
## 5 Acknowledgements
This work was partly supported by JST CREST Grant Number JPMJCR19A3.
\begin{table}
\begin{tabular}{l|c|c c}
\hline
Model & Score Type & 34,782 pairs & 69,564 pairs \\
\hline
UTMOS\({}_{aug}\) & Direct & \multicolumn{2}{c}{0.927} \\
UTMOS\({}_{noaug}\) & Direct & \multicolumn{2}{c}{0.932} \\
\hline
UTP\_SC & Direct & \multicolumn{2}{c}{0.930} \\
\hline
UTP\_LINK\_PS & Preference & 0.934 & 0.940* \\
UTP\_BS\_PS & Preference & 0.934* & 0.940* \\
\hline
UTP\_LINK\_ER\_BTL & Preference & 0.930 & 0.932 \\
UTP\_LINK\_EER\_BTL & Preference & 0.931 & 0.932 \\
UTP\_LINK\_ND\_BTL & Preference & 0.931 & 0.932 \\
\hline
UTP\_BS\_ER\_DC & Preference & 0.934 & 0.941* \\
UTP\_BS\_EER\_DC & Preference & 0.934* & 0.941* \\
UTP\_BS\_ND\_DC & Preference & 0.934 & 0.940* \\
\hline
\end{tabular}
\end{table}
Table 1: The experimental result of the ranking prediction with SQA models. The top row shows the number of testing utterance pairs. The asterisk (*) marks statistical significance (\(p=0.05\)) between our proposed models and UTMOS. |
2309.00255 | SortedNet: A Scalable and Generalized Framework for Training Modular
Deep Neural Networks | Deep neural networks (DNNs) must cater to a variety of users with different
performance needs and budgets, leading to the costly practice of training,
storing, and maintaining numerous user/task-specific models. There are
solutions in the literature to deal with single dynamic or many-in-one models
instead of many individual networks; however, they suffer from significant
drops in performance, lack of generalization across different model
architectures or different dimensions (e.g. depth, width, attention blocks),
heavy model search requirements during training, and training a limited number
of sub-models. To address these limitations, we propose SortedNet, a
generalized and scalable training solution to harness the inherent modularity
of DNNs. Thanks to a generalized nested architecture (which we refer as
\textit{sorted} architecture in this paper) with shared parameters and its
novel update scheme combining random sub-model sampling and a new gradient
accumulation mechanism, SortedNet enables the training of sub-models
simultaneously along with the training of the main model (without any
significant extra training or inference overhead), simplifies dynamic model
selection, customizes deployment during inference, and reduces the model
storage requirement significantly. The versatility and scalability of SortedNet
are validated through various architectures and tasks, including LLaMA, BERT,
RoBERTa (NLP tasks), ResNet and MobileNet (image classification) demonstrating
its superiority over existing dynamic training methods. For example, we
introduce a novel adaptive self-speculative approach based on sorted-training
to accelerate large language models decoding. Moreover, SortedNet is able to
train 160 sub-models at once, achieving at least 96\% of the original model's
performance. | Mojtaba Valipour, Mehdi Rezagholizadeh, Hossein Rajabzadeh, Parsa Kavehzadeh, Marzieh Tahaei, Boxing Chen, Ali Ghodsi | 2023-09-01T05:12:25Z | http://arxiv.org/abs/2309.00255v3 | # SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks
###### Abstract
As the size of deep learning models continues to grow, finding optimal models under memory and computation constraints becomes increasingly more important. Although usually the architecture and constituent building blocks of neural networks allow them to be used in a modular way (i.e. using the sub-networks of a given network after training), their training process is not aware of this modularity. Consequently, conventional neural network training lacks the flexibility to adapt the computational load of the model during inference. This paper proposes, SortedNet, a generalized and scalable solution to harness the inherent modularity of deep neural networks across various dimensions (e.g. width, depth, blocks) for efficient dynamic inference. Our training considers a nested architecture for the sub-models with shared parameters and train them together with the main model in a sorted and probabilistic manner. This sorted training of sub-networks enables us to scale the number of sub-networks to hundreds using a single round of training. We utilize a novel updating scheme during training that combines random sampling of sub-networks with gradient accumulation to improve training efficiency. Furthermore, the sorted nature of our training leads to a search-free sub-network selection at inference time; and the nested architecture of the resulting sub-networks leads to minimal storage requirement and efficient switching between sub-networks at inference. Our general dynamic training approach is demonstrated across various architectures and tasks, including large language models and pre-trained vision models. Experimental results show the efficacy of the proposed approach in achieving efficient sub-networks while outperforming state-of-the-art dynamic training approaches. Our findings demonstrate the feasibility of training up to 160 different sub-models simultaneously, showcasing the extensive scalability of our proposed method while maintaining 96% of the model performance.
1University of Waterloo
2Huawei Noah's Arc Lab
{mojtaba.valipour, hossein.rajabzadeh, ali.ghodsi}@uwaterloo.ca,
{mehdi.rezagholizadeh, marzieh.tahaei, boxing.chen}@huawei.com
## 1 Introduction
_"For every minute spent organizing, an hour is earned." - Benjamin Franklin._
There has been a remarkable growth in the size of deep neural networks. Nevertheless, the computation/memory resources allocated to a model at inference depend on the specific hardware availability and the accuracy/time requirements of applications, whether deployed in the cloud or on edge devices.
In particular, the computational burden from concurrent processes and battery limitations can significantly impact the resources allocated to a neural network. Moreover, in the era of gigantic pre-trained models, the computational demand can vary from task to task. Therefore, there is a growing demand for models that can adapt themselves to such dynamic conditions. Unfortunately, conventional neural network training, with its fixed architecture, falls short in adaptively adjusting the computational load at inference time.
On the other hand, deep neural networks exhibit modular architectures along various dimensions, such as layers and blocks along depth, and neurons, channels, and even attention heads along width. This inherent modularity enables the extraction of sub-networks with the same overall shape as the original model. However, current training methods fail to effectively leverage this modularity, resulting in limited practical advantage of sub-networks in the final product. Consequently, the performance of these sub-networks falls short of the main model, making their deployment during inference impractical.
Hence, the challenge lies in harnessing the full potential of modularity in deep neural networks, allowing for the efficient utilization of sub-networks to enhance performance and enable practical deployment in real-world scenarios.
Recent works have proposed a variety of approaches for training dynamic models. These approaches, while effective, often use a sophisticated training process combining knowledge distillation (Hou et al. 2020), architecture modification (Nunez et al. 2023), and redundant sub-network optimization (Fan et al. 2019). Although not explicitly stated, an important ingredient shared by all these methods is the attempt to implicitly sort sub-networks along a specific dimension with respect to computation/accuracy. Limiting dynamicity to one or two dimensions while leaving other dimensions intact can lead to suboptimal sub-networks. Inspired by these works (Valipour et al. 2023; Rippel et al. 2014), in this paper we explore how sorting, generalized to all dimensions, can provide efficient many-in-one dynamic models. Our solution takes advantage of intrinsically nested sub-networks which are sorted monotonically from bottom to top (i.e., we have only a single instance of each sub-network). This sorted configuration with shared parameters enforces a regular order and consistency in the knowledge learned by the sub-networks. Sorting these sub-networks based on their computation/accuracy characteristics presents the most optimal solution. By organizing the model in this manner, extracting the desired sub-network becomes a search-free process. The use of a predefined sorting order ensures that each targeted sub-network possesses a unique computation overhead, effectively removing the optimization of redundant sub-networks from training. In the resulting nested architecture with parameters shared in a _sorted_ manner, each smaller (shallower/narrower) model is a sub-network of a larger (deeper/wider) model. This leads to models with sorted accuracy, latency, and importance.
To achieve this sorted architecture, we propose a novel updating scheme during training that combines random sampling of sub-networks with gradient accumulation in order to further reduce the cost of training. With a single round of training, our proposed method can yield multiple models with different capacities.
Our general dynamic training approach is applicable to any architecture without necessitating any modifications to the original model. The proposed nested architecture offers several benefits, including minimal storage requirements and efficient switching between various computation budgets during inference.
Through comprehensive empirical studies across different architectures, tasks, and dynamicity along various dimensions, ranging from width and depth to attention heads and embedding layers, we show the superiority and generalizability of our proposed method over state-of-the-art dynamic training methods.
To summarize, the main contributions of this paper are:
* Introduction of nested sorted network training, which leverages efficient sub-network training through a combination of gradient accumulation and sub-network sampling.
* Generalization of dynamicity to multiple dimensions through stochastic sampling of sub-networks during training.
* Outperforming state-of-the-art methods in dynamic training on CIFAR10 [11]. Furthermore, scaling the number of sub-networks to 160 and showcasing the efficacy of our single round training method.
* Demonstrating the effectiveness of the proposed method on Large pre-trained language models by dynamic training of the BERT model.
Figure 1: SortedNet: The overall diagram of our proposed method. During training, in each iteration, we sample from a pre-defined random distribution which will help us to optimize the sub-models as well as the original model.
Figure 2: Comparing SortedNet and Once For All: on a hypothetical 5-layer network, we show how the sub-network selection strategy of SortedNet differs from the Once-for-All [10] approach.
## 2 Related Work
In this section, we briefly review the existing works most relevant to our SortedNet idea. A summary of these solutions and how they differ from each other can be found in Table 1.
**Slimmable Networks [23]** Slimmable networks is an idea to train a single neural network in a way that it can be deployed with adjustable width at inference time. This solution was proposed particularly for CNN architectures, and thus careful consideration of the batch normalization module for various width sizes is necessary. In this regard, they use switchable batch normalization, which leads to additional trainable parameters. In contrast to slimmable networks, our SortedNet is architecture-agnostic and works along both the depth and width dimensions.
**Early Exit [20]** Early exit refers to a technique which adds a classifier to intermediate layers of an already trained neural network. While the parameters of the main model are frozen, the parameters of the classifiers are updated in a separate fine-tuning process. In this approach, each of the classifiers and their subsequent network can be treated as an independent sub-model. While this solution is relatively straightforward, the performance of the sub-models lags significantly behind that of the main model.
**Dyna-BERT [1]** Dyna-BERT is a dynamic compression method for pre-trained BERT models, allowing flexible adjustment of the size of the model along depth and width at inference time. While the objective introduced in the DynaBERT paper has some overlap with ours, there are several key differences: first, in DynaBERT, only a few subsets of the model are functional (while our SortedNet does not have such an assumption); second, DynaBERT needs an already trained teacher and uses knowledge distillation, but our technique does not need KD; third, DynaBERT needs search during both training and inference, while our solution is _search-free_; lastly, DynaBERT's applicability is architecture-dependent, whereas our approach is not.
**Layer-drop [20]** Layer-drop is a structured dropout solution at training time which allows layer pruning at inference time. Similar to DynaBERT, this solution is applied to pre-trained language models; however, in contrast to DynaBERT, Layer-drop only targets the depth of neural networks and not their width. In Layer-drop, there is no fixed training pattern and any layer can be dropped with a certain probability, which is referred to as the drop rate. At inference time, the number of active layers can be adjusted via the drop rates that were seen during the training of that network (i.e., to achieve the best performance at any other drop-rate value, the network needs to be re-trained). Layer-drop works only in depth, while our solution works for both depth and width. Moreover, Layer-drop requires specific search patterns for dropping layers at inference time and training time, whereas our solution is search-free.
**Once-for-All [1]** Once-for-All (OFA) targets efficient inference across different devices by first training an OFA network which supports many sub-networks with varying latency/accuracy characteristics; it then searches among the feasible sub-networks according to the accuracy and latency requirements of the target device. OFA has a progressive training nature, i.e., it goes from the largest model to the smaller sub-networks. OFA differs from our solution in the following aspects: first, it needs a teacher and knowledge distillation; second, OFA requires a separate Neural Architecture Search (NAS) at inference time; third, OFA is not architecture-agnostic (their solution is for CNN-based neural networks while our SortedNet works for both CNNs and Transformers). Moreover, OFA differs from our solution in terms of the sub-network selection strategy: while our SortedNet selects sub-networks in a sorted manner, OFA does not have any particular assumption for sorting sub-networks (see Fig. 2 for more details).
**Learning Compressible Subspace [20]** Learning Compressible Subspace (LCS) is an adaptive compression technique based on training a compressible subspace of neural networks (using a linear convex combination of two sets of weights for the network). While LCS does not require any re-training at inference time, this solution has some other limitations, including: first, it needs double the memory at training time; second, the choices of the initial weights and the compression function are unclear and arbitrary (left as a hyper-parameter); third, it is only tried on CNNs; fourth, similar to Layer-drop, intermediate sub-networks are trained randomly, which makes the performance of the target model sub-optimal.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
**Method** & **Sub-Networks: Config. (\#)** & **Performance** & **Search-Free** & **Anytime** & **\# of Params** & **No Re-training** & **Target Dim.** & **Architecture** \\
\hline\hline
Early Exit [20] & Sorted (Few) & Low & ✓ & ✓ & \(\left|\theta\right|\) & ✗ & Depth & Transformer \\
\hline
Layer Drop [20] & Random & Low & ✗ & ✗ & \(\left|\theta\right|\) & ✗ & Depth & Transformer \\
\hline
DynaBERT [1] & Sorted \& Random (Few) & High & ✗ & ✗ & \(2\left|\theta\right|\) & ✗ & Depth \& Width & Transformer \\
\hline
Once for All [1] & Nested & High & ✗ & ✗ & \(\left|\theta\right|\,\mathrm{or}\,2\left|\theta\right|\) & ✗ & General & CNN \\
\hline
LCS [20] & Arbitrary (Many) & High & ✓ & ✓ & \(\left|\theta\right|\,\mathrm{or}\,2\left|\theta\right|\) & ✓ & General & CNN \\
\hline
Slimmable [23] & Sorted (Few) & Moderate & ✓ & ✓ & \(\left|\theta\right|\) & ✓ & Width & CNN \\
\hline
**SortedNet (Ours)** & Sorted (Many) & High & ✓ & ✓ & \(\left|\theta\right|\) & ✓ & General & CNN \& Transformer \\
\hline
\end{tabular}
\end{table}
Table 1: Comparison of different existing related work, distinguishing our solution
## 3 Methodology
### A Generalized and Scalable View
In the related work section, we discussed several approaches concerning the training of many-in-one networks. These approaches differ in terms of their target architecture, training loss, number of training parameters, configuration of the sub-networks (random or sorted), number of trained sub-models, and reliance on search or re-training before deployment. Our SortedNet method can be viewed as a simple, general, and scalable version of these existing solutions. This generality and scalability mostly result from the sorted configuration of sub-networks with shared parameters and from our stochastic training. To the best of our knowledge, this is the first work to scale the training of sorted nested sub-networks to various dimensions.
### SortedNet: Towards a Generalized Solution for Training Many-in-One Networks
While many deep neural network architectures are modular in design (using similar layers, as in Transformers [20], or blocks, as in MobileNet [2]), this modularity is not preserved during training. In this subsection, we introduce our SortedNet solution, which aims at training generalized and scalable many-in-one networks. In order to train many-in-one networks, we need to specify a few design choices: first, how to form the sub-networks and their configurations; second, what the target architectures are; and third, how to train the sub-networks along with the main model.
**Designing the Sub-networks** SortedNet imposes an inductive bias on training based on the assumption that the parameters of sub-networks across each dimension have a concentric (or onion-shaped) architecture with parameters shared in a _sorted_ manner. This sorted configuration with shared parameters enforces a regular order and consistency in the knowledge learned by the sub-networks (see Fig. 1).
Let's consider a many-in-one neural network \(f(x;\theta(n))\) with the parameters \(\theta(n)\) and the input \(x\) which is comprised of \(n\) sub-networks \(f(x;\theta(i))|_{i=0}^{n-1}\), where \(\theta(i)\) represents the weights of the \(i^{\text{th}}\) sub-model. We define a universal set which contains all unique sub-models: \(\Theta=\{\theta(0),\theta(1),...,\theta(n)\}\).
**How to build the sub-models?** Suppose that we would like to target \(D=\{Dim_{1},Dim_{2},...,Dim_{K}\}\) many-in-one dimensions in our model. Then, let's start with \(\Theta=\varnothing\) and build the sub-models iteratively. In this regard, at each iteration \(t\) during training, we have a sampling and truncation procedure along any of the targeted dimensions:
\[\theta_{t}^{*}=\bigcap_{j=1}^{|D|}\theta_{Dim_{j}\downarrow b_{j}^{t}}(n),\quad\text{where }b_{j}^{t}\sim P_{B_{j}} \tag{1}\]
\[\text{IF }\theta_{t}^{*}\notin\Theta:\;\Theta\leftarrow\Theta\cup\{\theta_{t}^{*}\}\]
where \(Dim_{j}\downarrow b_{j}^{t}\) indicates that we have truncated \(\theta(n)\) along the \(Dim_{j}\) dimension up to the index \(b_{j}^{t}\) at the iteration \(t\). \(b_{j}^{t}\) is sampled from a distribution \(P_{B_{j}}\) with the support set of \(B_{j}=\{1,2,...,d_{j}\}\) to form the \(i^{\text{th}}\) sub-model. \(d_{j}\) refers to the maximum index of the \(j^{\text{th}}\) dimension. This iterative process will be done during training and the set of \(n\) unique sub-models \(\Theta\) will be built.
To illustrate the process better, let's see a simple case such as BERT\({}_{base}\) where we want to make a many-in-one network across the width and depth dimensions, \(D=\{\text{Depth, Width}\}\). In this case, we have 12 layers and the hidden dimension size of 768. Suppose that \(Depth\) corresponds to \(j=1\) and \(Width\) corresponds to \(j=2\) in Eq. 1. For simplicity, let's use a discrete uniform distribution for sampling indices across these two dimensions. To create the first sub-network (\(i=1\)), we need to sample \(b_{1}^{1}\) uniformly from the set of natural numbers in the range of 1 to 12: \(B_{1}=\{1,2,...,12\}\); and we need to sample \(b_{2}^{1}\) from the range of 1 to 768: \(B_{2}=\{1,2,3,...,768\}\). Bear in mind that we can even choose a subset of \(B_{1}\) and \(B_{2}\) as the support set for sampling probability distribution. After these two samplings, we will have two truncated sets of parameters: \(\theta_{Depth\downarrow b_{1}^{1}}\) and \(\theta_{Width\downarrow b_{2}^{1}}\). The intersection of these two truncated parameters will give us the first sub-network: \(\theta_{1}=\theta_{Depth\downarrow b_{1}^{1}}\cap\theta_{Width\downarrow b_{2 }^{1}}\).
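The sampling step of Eq. 1 can be sketched in a few lines of Python; the dimension table, function name, and uniform default below are our own illustration, mirroring the BERT\({}_{base}\) example above.

```python
import random

# Targeted many-in-one dimensions and their maximum indices (BERT-base example)
DIMS = {"depth": 12, "width": 768}

def sample_submodel(support=None):
    """Sample one truncation index b_j per targeted dimension (Eq. 1).

    By default b_j is drawn uniformly from {1, ..., d_j}; `support` may
    restrict sampling to a subset, e.g. {"width": [192, 384, 576, 768]}.
    """
    support = support or {}
    return {dim: random.choice(list(support.get(dim, range(1, d_max + 1))))
            for dim, d_max in DIMS.items()}

# e.g. {'depth': 7, 'width': 512}: keep the first 7 layers, each truncated
# to its first 512 hidden units; the intersection defines the sub-model
config = sample_submodel()
```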
**Training Paradigm** Regular training of neural networks concerns improving the performance of the whole model, and this training is usually not aware of the performance of the sub-networks. In fact, in this scenario, if we extract and deploy the sub-models of the trained large model on a target task, we experience a significant drop in the performance of these sub-networks compared with the main model. In SortedNet, however, we propose a training method that allows for training the sub-networks together with the main model in a stochastic way. The SortedNet paradigm leads to the following benefits:
* Search-free sub-model extraction: after training, by importance sorting of sub-models the best sub-model for a given budget can be selected without the need for search.
* Anytime: Each smaller sub-model is a subset of a larger one, which makes switching between different sub-models efficient. This leads to an important feature of our SortedNet, the so-called _anytime_ property: the network can produce its output at any stage of its computation.
* Memory efficient: we train a many-in-one network where sub-models are all part of a single checkpoint, which minimizes storage requirement.
For efficiency purposes, in our training, the last layer, e.g. the classification layer, is shared between all sub-models; alternatively we can add a separate classification layer to each sub-model. For simplicity and efficiency, we chose to use a shared classification layer.
### SortedNet Algorithm
In this subsection, we describe our proposed training algorithm. For training a SortedNet with \(n\) sub-models, at each iteration during training, a random index needs to be sampled
from a pre-defined distribution: \(b^{i}_{j}\sim P_{B_{j}}\). After finding the target sub-model \(\theta^{*}_{t}\) at each iteration, we can use one of the following objectives to update the parameters of the selected sub-model:
* (Stochastic Loss) Only train the selected sub-model \(f(x,\theta^{*}_{t})\) : \(\min\limits_{\theta^{*}_{t}}\mathcal{L}\triangleq\mathrm{L}(y,f(x,\theta^{*}_{ t}))\) where \(L\) is the loss function for training the model on a given task (e.g. \(L\) can be a regular cross entropy loss) and \(y\) refers to the ground-truth labels.
* (Stochastic Summation) Train the sub-model \(f(x,\theta^{*}_{t})\) and all its targeted sub-models along each dimension. Let's assume that \(\Theta^{\perp}(\theta^{*}_{t})\) is the universal set for all targeted sub-networks of \(\theta^{*}_{t}\). Then the loss function can be defined as: \(\min\limits_{\Theta^{\perp}(\theta^{*}_{t})}\mathcal{L}\triangleq\sum_{\theta \in\Theta^{\perp}(\theta^{*}_{t})}\mathrm{L}(y,f(x,\theta))\)
This way, one sub-model or a subset of sub-models is updated in each iteration. Alternatively, one can choose to train all the sub-models at each iteration, which is costly at large scale.
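A minimal PyTorch sketch of one SortedNet update follows; `sample_config`, `nested_configs`, and `forward_sub` are assumed hooks for sampling a sorted sub-network and running the truncated forward pass on shared weights, and are our own illustration rather than the paper's code.

```python
import torch

def sortednet_step(model, batch, optimizer, criterion, summation=False):
    """One SortedNet update with the stochastic loss (default) or the
    stochastic summation loss over the sampled sub-model's own sub-models.

    Assumed hooks: model.sample_config() draws b_j ~ P_{B_j} per dimension,
    model.nested_configs(c) lists the targeted sub-models of c, and
    model.forward_sub(x, c) runs the truncated forward pass on shared weights.
    """
    x, y = batch
    config = model.sample_config()
    configs = model.nested_configs(config) if summation else [config]
    optimizer.zero_grad()
    loss = sum(criterion(model.forward_sub(x, c), y) for c in configs)
    loss.backward()          # gradients flow only into the sampled slice(s)
    optimizer.step()
    return loss.item()
```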
## 4 Experiments
In this section, we discuss a set of experiments that we conducted to show the effectiveness and importance of sorting information and enforcing a nested property. The details of the hyperparameters for each experiment can be found in Appendix A.3.
### Is SortedNet scalable?
To show that our proposed method is scalable, we designed an experiment that trains 160 different models across multiple dimensions (width and depth) all at once. As the baseline, we trained the largest network (a MobileNetV2) and reported the best performance of the model. Because the baseline performance was poor for all the other sub-models (less than 12%), we trained the classifier layer for 5 more epochs before evaluating each sub-model of the baseline and reported the best performance. As the results in Figure 3 suggest, our method was able to capture the maximum performance for many of these sub-models in a zero-shot manner. In each cell, we report the performance of the sub-network on top and the recovery percentage with respect to the largest model (in this example, 95.45). Despite sharing the weights across all models, sharing the classifier, and evaluating zero-shot, the proposed method preserved up to 96% of the performance of the largest model, which is highly encouraging. Further training of the classifier for our proposed method leads to even better performance, as shown in Appendix A.4 (between roughly 2 and 15% improvement for different sub-models). In addition, we also sorted the depth and width individually using the proposed method, reported in Figure 3 as D. Only and W. Only, respectively. Across width, SortedNet successfully preserved up to 99% of the largest network's performance.
### Can we find the best sub-models using SortedNet?
As shown in Figure 4, based on the performance of the models from the previous experiment (Figure 3), we selected a subset of best-performing networks (width \(>\) 60% and depth \(>\) 13 blocks) and retrained the network from scratch using SortedNet to show the success rate of our proposed method. As shown, SortedNet successfully preserved up to 99% of the performance of ordinary training of the largest network.
We can also make this selection process fully automated by sorting the performance of all sub-models after evaluation and filtering out a subset of best-performing models that perform better than a desired threshold. As can be seen in Figure 6, there is a set of sub-networks which perform better than 80%. To better understand the pattern, we annotated some of the points using "\({}^{D}_{W}\)" as a template, which shows the corresponding depth and width for each model.
### Can we generalize SortedNet?
In another experiment, as shown in Table 2, we wanted to show that our stochastic approach can outperform state-of-the-art methods such as LCS (shown as \(LCS_{p}\) in the table) (Nunez et al. 2023), Slimmable Neural Networks (NS) (Yu et al. 2018), and Universally Slimmable Networks (US) (Yu and Huang 2019). To make the comparisons fair, we equalized the number of gradient updates for all models. We also tried to remove the impact of architecture design choices such as the choice of normalization layers; therefore, we compared methods under different normalization techniques such as BatchNorm (Ioffe and Szegedy 2015) and InstanceNorm (Ulyanov, Vedaldi, and Lempitsky 2016). In addition, we made sure that orthogonal methods such as Knowledge Distillation would not impact the results, as these methods can be applied and improve the results independently of the method. As shown in the table, SortedNet demonstrates a superior average performance compared to other methods, indicating its generalization across various settings such as different norms.
\begin{table}
\begin{tabular}{l c c|c c c|c c c}
\hline
Network & Width & FLOPs & NS-IN & LCS-p-IN & SortedNet-IN & NS-BN & LCS-p-BN (aka US) & SortedNet-BN \\
\hline
\multirow{4}{*}{cpreresnet20 (He et al., 2015) (CIFAR10)} & 100\% & 301M & 88.67 & 87.61 & **89.14** & 79.84 & 65.87 & **85.24** \\
 & 75\% & 209M & 87.86 & 85.73 & **88.46** & 78.59 & **85.67** & 85.29 \\
 & 50\% & 97M & 84.46 & 81.54 & **85.51** & 69.44 & 65.58 & **70.98** \\
 & 25\% & 59M & 75.42 & **76.17** & 75.10 & 10.96 & **15.78** & 12.59 \\
\hline
avg. & - & - & 84.10 & 82.76 & **84.55** & 59.70 & 58.22 & **63.52** \\
\hline
\end{tabular}
\end{table}
Table 2: Comparing the performance of state-of-the-art methods with SortedNet over CIFAR10 in terms of test accuracies.
Figure 5: Comparing the training loss trajectory of SortedNet on CIFAR10 for different gradient accumulation values with LCS_p. Each subfigure demonstrates the results at a different width. The rightmost subfigure reports the average across the widths. The underlying network (cPreResNet20) and hyperparameters are fixed.
Figure 6: Finding the best sub-models automatically using a desired threshold bar to eliminate the worst-performing models.
### Can we extend SortedNet to language models?
In this experiment, the goal is to apply SortedNet to a pre-trained transformer model and evaluate the performance on the GLUE benchmark (Wang et al. 2018). As the baseline, we chose RoBERTa (Liu et al. 2019) to demonstrate the flexibility of our algorithm. In Table 3, we sorted all the layers of RoBERTa-base. As the results demonstrate, our proposed method on average performs better than the baseline by a significant margin (\(\sim\) 23%). However, the largest model has a small drop in performance (less than 2%). It is interesting that the transformer architecture can preserve the performance of sub-models to some extent without additional training; our algorithm, however, improves the performance of these sub-models by approximately 10 to 40%.
### Can we extend SortedNet to complex dimensions?
In this section, we investigate whether SortedNet is applicable to more complex dimensions beyond width and depth. For example, can we utilize SortedNet for sorting attention heads (Vaswani et al. 2017)? To this end, we conducted an experiment on BERT-large (Devlin et al. 2019) in which we tried to sort information across multiple dimensions at once, including the number of layers, the hidden dimension, and the number of attention heads. In other words, we tried to sort information over BERT-large and BERT-base, as BERT-base can be a subset of BERT-large and therefore respects the nested property. As shown in Table 4, in addition to the performance of BERT-base and BERT-large reported in the original paper (Devlin et al. 2019), we report the performance of these models in this paper's experimental setting. The performance of a randomly initialized BERT-base is reported as well. We also extracted a BERT-base from a BERT-large model and report the performance of such a model in the same table. Additionally, we highlight the number of training updates with respect to each objective function in front of each model. For example, in the last row (Sorted \(BERT_{LARGE}\)), we trained our Sorted model approximately half of the time (\(\sim 3\) epochs) on the objective function of BERT-base (\(\mathcal{L}_{B}\)) and the other half on the objective function of BERT-large (\(\mathcal{L}_{L}\)), in an iterative random manner as introduced in Section 3. The BERT-base learned with these methods still performs around 10% behind an individually pre-trained base, but we argue that this gap reflects the value of pre-training. To investigate this effect, one should apply SortedNet during pre-training, which we leave for future research. However, the performance of the learned BERT-large is on par with an individual BERT-large, which suggests that sharing the weights does not necessarily have a negative impact on learning. It seems, however, that the key to achieving similar performance is to keep the number of updates for each objective the same as in the individual training of BERT-large and BERT-base.
### Ablation Study
Convergence (Training Time) AnalysisBeing sorted and randomly selecting one sub-network at the time from a pre-defined set of the sub-networks empowers SortedNet with a higher convergence rate and a faster training time. Figure 5 empirically certifies this claim and compares the training convergence of SortedNet against LCP_p, which, to the best of our knowledge, LCP_p stands as the most recent state-of-the-art method. As LCS_p uses summation loss over four
\begin{table}
\begin{tabular}{l c c c c c c c c|c} \hline \hline & Acc. & Acc. & F1 & Mathews Corr. & Acc. & Acc. & Acc. & Pearson & & \\
**Model** & **MNLI** & **SST-2** & **MRPC** & **CoLA** & **QNLI** & **QQP** & **RTE** & **STS-B** & **AVG** & **AVG w/o ours** \\ \hline Sorted-RoBERTa (11) & 60.07 & 70.76 & 81.22 & 0.00 & 59.64 & 77.80 & 47.65 & 9.36 & **50.81** & 40.33 \\ Sorted-RoBERTa (21) & 71.98 & 80.28 & 81.22 & 0.00 & 81.49 & 87.09 & 47.29 & 70.37 & **64.97** & 40.86 \\ Sorted-RoBERTa (41) & 76.74 & 80.50 & 81.22 & 0.00 & 85.21 & 88.82 & 46.93 & 75.07 & **66.81** & 41.06 \\ Sorted-RoBERTa (41) & 79.13 & 84.75 & 81.22 & 44.51 & 86.60 & 90.11 & 49.10 & 84.94 & **75.04** & 42.95 \\ Sorted-RoBERTa (5L) & 81.14 & 89.91 & 81.22 & 48.41 & 87.88 & 90.86 & 55.96 & 88.22 & **77.95** & 43.80 \\ Sorted-RoBERTa (6L) & 82.21 & 92.09 & 86.67 & 53.41 & 88.83 & 91.12 & 67.87 & 89.09 & **81.41** & 46.13 \\ Sorted-RoBERTa (7L) & 82.99 & 92.78 & 89.13 & 56.42 & 89.29 & **91.29** & 73.29 & 89.58 & **83.10** & 44.80 \\ Sorted-RoBERTa (8L) & 83.33 & 93.23 & 89.78 & 57.22 & 89.40 & 91.29 & 75.09 & 89.67 & **83.63** & 55.17 \\ Sorted-RoBERTa (9L) & 83.39 & 92.66 & 89.66 & 58.69 & 89.40 & 91.25 & **77.26** & 89.72 & **84.00** & 61.36 \\ Sorted-RoBERTa (10L) & **87.42** & 93.12 & **91.64** & **61.21** & **91.87** & 91.19 & 74.01 & **89.74** & **85.02** & 54.30 \\ Sorted-RoBERTa (11L) & 87.34 & **93.35** & 91.45 & 60.72 & 91.74 & 91.17 & 74.01 & 89.72 & **84.94** & 77.48 \\ Sorted-RoBERTa (12L) & 83.35 & 92.89 & 90.81 & 59.20 & 89.44 & 91.28 & 76.53 & 89.77 & 84.16 & **86.13** \\ \hline avg. & 79.26 & 87.93 & 86.09 & 41.25 & 85.50 & 89.45 & 64.26 & 79.61 & **76.67** & 52.86 \\ \hline \hline \end{tabular}
\end{table}
Table 3: A comparison of the performance of different sub-models with and without SortedNet. The performance of the model improves as more budget becomes available and representations from deeper layers are computed.
sub-networks at every training step, we report the performance of SortedNet for different values of gradient accumulation (\(g_{acc}\)) to enable a fair comparison; \(g_{acc}=4\) matches LCS_p. As shown in the figure, SortedNet with \(g_{acc}=4\) converges faster than, or competitively with, LCS_p across different sub-networks. Moreover, SortedNet does not require any for-loop in its implementation; it therefore lends itself to parallel computation and achieves a faster running time. We investigated this empirically and found that, under the same settings, SortedNet runs at least one third faster than LCS_p (details in Appendix A.2).
**The impact of gradient accumulation.** The goal of this experiment is to examine the impact of gradient accumulation (\(g_{acc}\)) on the performance of SortedNet given an equal number of parameter updates. Table 5 presents the accuracies obtained for four different gradient accumulation values. To ensure an equal number of updates, the maximum number of epochs is adjusted for each scenario; e.g., \(g_{acc}=k\) receives \(k\) times more epochs than \(g_{acc}=1\). As the results show, increasing the gradient accumulation value yields higher performance for SortedNet. This observation can be attributed to the increase in training stochasticity when gradient accumulation is raised: each sub-network in SortedNet contributes more equally to the updating of the weight parameters, leading to a faster convergence rate. More details are provided in Appendix A.1.
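To make the update scheme concrete, the following is a minimal PyTorch-style sketch of one training epoch with gradient accumulation over randomly sampled sub-networks; it is an illustration rather than the paper's actual implementation, and `sample_subnetwork` is a hypothetical helper that returns a randomly chosen sub-network (e.g., a random depth/width configuration) of the shared model:

```python
import torch
import torch.nn.functional as F

def train_one_epoch(model, loader, optimizer, sample_subnetwork, g_acc=4):
    """At each step, sample one random sub-network of the shared model and
    accumulate its gradient; apply one optimizer update every g_acc steps."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        subnet = sample_subnetwork(model)   # hypothetical: picks a random depth/width
        loss = F.cross_entropy(subnet(x), y)
        (loss / g_acc).backward()           # gradients accumulate across sub-networks
        if (step + 1) % g_acc == 0:
            optimizer.step()                # one update over g_acc sampled sub-networks
            optimizer.zero_grad()
```

With \(g_{acc}=4\), each optimizer step aggregates gradients from four independently sampled sub-networks, mirroring the four-network summation loss of LCS_p without an explicit for-loop.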
In addition, we detail the hyperparameters of each experiment in Appendix A.3, and further analysis is provided in Appendix A.5 to better understand the behavior of the SortedNet methodology.
## Conclusion
In summary, this paper proposes a new approach for training dynamic neural networks that leverages the modularity of deep neural networks to efficiently switch between sub-networks during inference. Our method sorts sub-networks by their computation/accuracy and trains them with an efficient updating scheme that randomly samples sub-networks while accumulating gradients. The stochastic nature of the proposed method helps our algorithm generalize better and avoid greedy choices, robustly optimizing many networks at once. We demonstrate through extensive experiments that our method outperforms previous dynamic training methods and yields more accurate sub-networks across various architectures and tasks. The sorted architecture of the proposed dynamic model also aligns with sample-efficient inference by allowing easier samples to exit the inference process at intermediate layers; exploring this direction is an interesting avenue for future work.
## Limitations
Note that our proposed method may be sensitive to randomness, since the training trajectory is currently chosen uniformly at random. Further research is necessary to investigate more effective strategies for choosing the next sub-network at each iteration.
|
2303.03707 | Hybrid quantum-classical convolutional neural network for phytoplankton
classification | The taxonomic composition and abundance of phytoplankton, which have a direct impact on marine ecosystem dynamics and global environmental change, are listed as essential ocean variables. Phytoplankton classification is crucial for phytoplankton analysis, but it is very difficult because of the huge number and tiny size of phytoplankton. Machine learning is the principal way of performing phytoplankton image classification automatically. When carrying out large-scale research on marine phytoplankton, the volume of data increases overwhelmingly, and more powerful computational resources are required for the success of machine learning algorithms. Recently, quantum machine learning has emerged as a potential solution for large-scale data processing by harnessing the exponential computational power of quantum computers. Here, for the first time, we demonstrate the feasibility of quantum deep neural networks for phytoplankton classification. Hybrid quantum-classical convolutional and residual neural networks are developed based on the classical architectures. These models strike a proper balance between the limited capability of current quantum devices and the large size of phytoplankton images, making it possible to perform phytoplankton classification on near-term quantum computers. Better performance is obtained by the quantum-enhanced models against their classical counterparts. In particular, the quantum models converge much faster than the classical ones. The present quantum models are versatile and can be applied to various image classification tasks in the field of marine science. | Shangshang Shi, Zhimin Wang, Ruimin Shang, Yanan Li, Jiaxin Li, Guoqiang Zhong, Yongjian Gu | 2023-03-07T07:42:37Z | http://arxiv.org/abs/2303.03707v1 | **Hybrid quantum-classical convolutional neural network for phytoplankton classification**
## Abstract
The taxonomic composition and abundance of phytoplankton, which have a direct impact on marine ecosystem dynamics and global environmental change, are listed as essential ocean variables. Phytoplankton classification is crucial for phytoplankton analysis, but it is very difficult because of the huge number and tiny size of phytoplankton. Machine learning is the principal way of performing phytoplankton image classification automatically. When carrying out large-scale research on marine phytoplankton, the volume of data increases overwhelmingly, and more powerful computational resources are required for the success of machine learning algorithms. Recently, quantum machine learning has emerged as a potential solution for large-scale data processing by harnessing the exponential computational power of quantum computers. Here, for the first time, we demonstrate the feasibility of quantum deep neural networks for phytoplankton classification. Hybrid quantum-classical convolutional and residual neural networks are developed based on the classical architectures. These models strike a proper balance between the limited capability of current quantum devices and the large size of phytoplankton images, making it possible to perform phytoplankton classification on near-term quantum computers. Better performance is obtained by the quantum-enhanced models against their classical counterparts. In particular, the quantum models converge much faster than the classical ones. The present quantum models are versatile and can be applied to various image classification tasks in the field of marine science.
## 1 Introduction
Phytoplankton is the most important primary producer in the aquatic ecosystem. As the main supplier of dissolved oxygen in the ocean, phytoplankton plays a vital role in energy flow, material circulation and information transmission in the marine ecosystem (Barton et al., 2010; Gittings et al., 2018). The species composition and abundance of phytoplankton are among the key factors of marine ecosystem dynamics, exerting a direct influence on global environmental change. Therefore, much attention has been paid to the identification and classification of phytoplankton (Zheng et al., 2017; Pastore et al., 2020; Fuchs et al., 2022).
Nowadays, with the rapid development of imaging devices for phytoplankton (Owen et al., 2022), a huge number of phytoplankton images can be collected in a short time. Then it becomes impossible to classify and count these images using the traditional manual methods, i.e. expert-based methods.
In order to increase the efficiency of processing these images, machine learning methods have been introduced, including the support vector machine (Hu et al., 2005; Sosik et al., 2007), random forest (Verikas et al., 2015; Faillettaz et al., 2016), k-nearest neighbor (Gluge et al., 2014), and artificial neural network (Mattei et al., 2018; Mattei et al., 2020). In particular, the convolutional neural network (CNN), which achieves state-of-the-art performance on image classification, has become widely used in this field in recent years. A variety of CNN-based architectures have been proposed to identify and classify phytoplankton with high efficiency and precision (Dai et al., 2017; Wang et al., 2018; Cui et al., 2018; Fuchs et al., 2022).
In order to perform large-scale research on marine phytoplankton, more powerful computational resources are needed to guarantee the success of machine learning algorithms on the overwhelmingly growing volume of data. On the other hand, there has been remarkable progress in the domain of quantum computing in recent years (Arute et al., 2019; Zhong et al., 2020; Bharti et al., 2022; Madsen et al., 2022). Quantum computing holds the promise of solving certain classically intractable problems (Preskill, 2018), and quantum machine learning (QML) has emerged as a potential solution for large-scale data processing (Biamonte et al., 2017). In particular, there is a growing consensus that NISQ (noisy intermediate-scale quantum) devices may find advantageous applications in the near term (Callison et al., 2022), one of which is the quantum neural network (QNN) (Jeswal et al., 2019; Kwak et al., 2021). As a quantum analogue of the classical neural network, QNN takes the parameterized quantum circuit (PQC) as a learning model (Benedetti et al., 2019) and can be extended naturally to quantum deep neural networks with a flexible multilayer architecture. In particular, the quantum convolutional neural network (QCNN) has attracted much attention and has demonstrated its success in processing both quantum and classical data, including quantum many-body problems (Cong et al., 2019), identification of high-energy physics events (Chen et al., 2022), COVID-19 prediction (Houssein et al., 2022) and MNIST classification (Oh et al., 2020). In general, there exist two QCNN architectures, i.e. the fully quantum parameterized QCNN (Cong et al., 2019) and the hybrid quantum-classical CNN (Liu et al., 2021).
In this work, we present the potential of QCNN to perform phytoplankton classification. Considering the large size of phytoplankton images but the limited number of qubits and quantum operations available on current quantum devices, it is still impractical to learn the images using a fully quantum parameterized QCNN. Hence, we adopt the hybrid quantum-classical convolutional neural network (QCCNN) architecture to realize good multi-class classification of the phytoplankton dataset. QCCNN integrates the PQC into the classical CNN architecture by replacing the classical feature map with a quantum feature map. Therefore, QCCNN is friendly to current NISQ devices, in terms of both the number of qubits and the circuit depth, while retaining important features of the classical CNN, such as nonlinearity and scalability (Liu et al., 2021).
Furthermore, quantum neural networks also suffer from the barren plateau problem (i.e. vanishing gradients) and the degradation problem (i.e. saturated accuracy with increasing depth) (Cong et al., 2019; Oh et al., 2020; Chen et al., 2022; Houssein et al., 2022). To address these issues, we further develop a QCCNN with residual structure, that is, a hybrid quantum-classical residual network (QCResNet), leveraging the residual architecture to enhance the performance of QCCNN.
The main contribution of the present work is as follows.
1. The feasibility of quantum neural networks for phytoplankton classification is demonstrated concretely for the first time. This work will definitely inspire more studies applying QML algorithms in marine science.
2. Several specific architectures of QCCNN and QCResNet are developed, which achieve excellent performance on phytoplankton classification. In particular, the QCResNet architecture is proposed to increase the performance of QCCNN. These models are versatile and can be applied directly to other image classification tasks.
3. Better performance on phytoplankton classification is obtained by QCCNN and QCResNet against the template CNN and ResNet models. The influence of the expressibility and entangling capability of PQCs on the performance of QCCNN is discussed.
The rest of the paper is organized as follows. Section 2 introduces preliminaries about the CNN, ResNet, and QNN. In section 3, architectures of QCCNN and QCResNet are discussed. Section 4 presents the experimental results about the performance of QCCNN and QCResNet as well as the dependence of QCCNN's performance on the feature of ansatz circuit. Conclusions are given in section 5.
## 2 Preliminaries
In the preliminaries, we present the basic architectures of CNN, ResNet and QNN in a certain degree of detail. These basic blocks will be used as the framework to build the QCCNN and QCResNet models in the Methods section.
### Convolutional neural network
CNN is one of the most successful tools in the area of computer vision (Gu et al., 2018). Since the first CNN, known as LeNet, was proposed (LeCun et al., 1998), a series of architectures have been developed to improve performance, such as AlexNet (Russakovsky et al., 2015), ZFNet (Zeiler et al., 2014), VGGNet (Simonyan et al., 2015), GoogLeNet (Szegedy et al., 2015) and ResNet (He et al., 2016). These models are widely used in the fields of image classification (Zuo et al., 2015; Lopes et al., 2017), object tracking (Nguyen et al., 2016; Li et al., 2017) and natural language processing (Kim et al., 2016).
The architecture of CNN is inspired by the visual perception mechanism of living creatures (Gu et al., 2018). CNN is comprised of three types of layers: convolutional layers, pooling layers and fully connected layers. The convolutional layers are applied to extract useful features from the data that can be leveraged for classification purposes. Specifically, in a convolutional layer, a filter iteratively convolves local regions of the full input to produce feature maps, and each output feature contains information about different spatially local patterns in the data (Henderson et al., 2020). When convolutional layers are applied repeatedly, increasingly abstract features are obtained, which capture the long-range dependencies within the data and benefit the subsequent classification or regression. In addition, the number of parameters in CNN is reduced substantially by weight sharing between filters, increasing the computational efficiency of the model.
An example architecture of CNN is shown in Figure 1. This CNN consists of two convolutional layers, a max-pooling layer, and two fully connected layers. In this paper, this CNN is taken as the template framework to construct the QCCNN for phytoplankton classification.
FIGURE 1 An example architecture of CNN that is used as the template to construct QCCNN.
### Residual Network
In deep neural networks, depth is of crucial importance for performance. However, when the network has a large number of layers, two annoying problems emerge, i.e. vanishing/exploding gradients and the saturation of accuracy. The residual learning framework was proposed to address these problems, making networks capable of extremely deep architectures (He et al., 2016). The residual network (ResNet) has been widely used in a wide range of applications, such as image recognition (Zagoruyko et al., 2016) and natural language processing (Conneau et al., 2016), showing compelling accuracy and nice convergence behavior.
ResNet has a modularized architecture that stacks residual units of the same connecting shape (He et al., 2016). The residual unit can be expressed as
\[\begin{split}&\mathbf{y}_{l}=h\left(\mathbf{x}_{l}\right)+F\left(\mathbf{x}_{l},W_{l}\right)\\ &\mathbf{x}_{l+1}=f\left(\mathbf{y}_{l}\right)\end{split}, \tag{1}\]

where \(\mathbf{x}_{l}\) and \(\mathbf{x}_{l+1}\) are the input and output of the \(l\)-th unit, and \(F\) is the residual function. The central idea of ResNet is to learn the additive residual function \(F\) with respect to \(h(\mathbf{x}_{l})\). \(h(\mathbf{x}_{l})\) can be implemented by an identity mapping, i.e. \(h(\mathbf{x}_{l})=\mathbf{x}_{l}\), or by a projection shortcut, i.e. a 1\(\times\)1 convolution operation. Simply put, a CNN with residual architecture can be constructed by adding shortcut connections to a feedforward neural network.
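As an illustration, a residual unit with a 1\(\times\)1 projection shortcut of the kind described above can be written in a few lines of PyTorch; the layer sizes and the use of batch normalization here are illustrative choices, not taken from the paper:

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Implements y_l = h(x_l) + F(x_l, W_l) and x_{l+1} = f(y_l),
    with h a 1x1 projection shortcut and f the ReLU function."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.residual = nn.Sequential(  # residual function F
            nn.Conv2d(in_ch, out_ch, 3, stride, 1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, 1, 1), nn.BatchNorm2d(out_ch))
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride)  # projection h
    def forward(self, x):
        return torch.relu(self.residual(x) + self.shortcut(x))  # f = ReLU

x = torch.randn(1, 16, 32, 32)
print(ResidualUnit(16, 32, stride=2)(x).shape)  # torch.Size([1, 32, 16, 16])
```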
Figure 2 shows an example of ResNet. It consists of two residual units, an adaptive average pooling layer and a fully connected layer. The 1\(\times\)1 convolution operation is used in the shortcut connections. In this paper, this network is taken as the template framework to construct the QCResNet for phytoplankton classification.
FIGURE 2 An example architecture of ResNet that is used as the template to construct QCResNet.
### Quantum Neural Network
QNN belongs to the class of variational quantum algorithms, which are hybrid quantum-classical algorithms. In general, QNN is comprised of four parts, i.e. data encoding, an ansatz performing the forward transformation, quantum measurement, and a parameter optimization routine, as schematically shown in Figure 3. Note that the first three parts are implemented on the quantum device, while the optimization routine is carried out on the classical computer and feeds the updated parameters back into the quantum device.
Figure 3: Architecture of the QNN model. QNN is a quantum-classical hybrid algorithm: the forward transformation is implemented by the quantum computer and the parameter optimization is done by the classical computer.

Data encoding is to embed classical data into quantum states by applying a unitary transformation, i.e. \(\left|\mathbf{x}\right\rangle=U_{e}\left|0\right\rangle^{\otimes n}\), where \(\left|\mathbf{x}\right\rangle\) is proportional to the data vector \(\mathbf{x}\). Data encoding can be regarded as a quantum feature map, which maps the data space to the quantum Hilbert space (Schuld et al., 2019). One of the prominent properties of QNN is that it introduces into the neural network a quantum feature map that is extremely hard to simulate with classical resources (Havlicek et al., 2019). One of the most commonly used encoding methods in QNN is angle encoding, which embeds classical data into the rotation angles of quantum rotation gates. For example, given a normalized data vector \(\mathbf{x}=\left(x_{1},\ldots,x_{N}\right)^{T}\) with \(x_{i}\in\left[0,1\right)\), angle encoding can embed it as

\[R_{y}^{\otimes N}\left(\mathbf{x}\right)\left|0\right\rangle^{\otimes N}=\mathop{\otimes}\limits_{i=1}^{N}\left(\cos\frac{x_{i}}{2}\left|0\right\rangle+\sin\frac{x_{i}}{2}\left|1\right\rangle\right), \tag{2}\]
where \(R_{y}\) is the rotation gate about the \(\hat{y}\) axis, i.e. \(R_{y}\left(x_{i}\right)=\begin{bmatrix}\cos\frac{x_{i}}{2}&-\sin\frac{x_{i}}{2}\\ \sin\frac{x_{i}}{2}&\cos\frac{x_{i}}{2}\end{bmatrix}\). More details about data encoding strategies can be found in (Hur et al., 2022).
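As a small numerical illustration of Eq. (2), angle encoding of a data vector can be simulated directly with NumPy; this sketch builds the full product state vector, which is only feasible for small \(N\):

```python
import numpy as np

def angle_encode(x):
    """Return the N-qubit product state of Eq. (2): each qubit is prepared
    as R_y(x_i)|0> = cos(x_i/2)|0> + sin(x_i/2)|1>."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state  # 2**N amplitudes

psi = angle_encode([0.1, 0.5, 0.9])
print(psi.shape, np.isclose(np.linalg.norm(psi), 1.0))  # (8,) True
```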
The ansatz can be interpreted as a quantum analogue of the feedforward neural network, using quantum unitary transformations to implement the feature map of the data. The ansatz is in fact a PQC with adjustable quantum gates, whose parameters are optimized to approximate the target function that maps features into different value domains representing different classes. A proper structure of the ansatz circuit therefore plays a key role in specific learning tasks. Typically, QNN adopts a hardware-efficient ansatz, which uses a limited set of quantum gates and a particular qubit connection topology related to the specific quantum device at hand. The gate set usually contains three single-qubit gates and one two-qubit gate. An arbitrary single-qubit gate can be expressed as a combination of rotation gates about the \(\hat{x}\), \(\hat{y}\) and \(\hat{z}\) axes. For example, using the X-Z decomposition, a single-qubit gate can be represented as
\[U_{1_{q}}\left(\alpha,\beta,\gamma\right)=R_{x}\left(\alpha\right)R_{z}\left( \beta\right)R_{x}\left(\gamma\right), \tag{3}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are the rotation angles. Two-qubit gates are used to produce entanglement between qubits. There are fixed two-qubit gates without adjustable parameters, such as the CNOT gate, and gates with adjustable parameters, such as the controlled \(R_{x}(\theta)\) and \(R_{z}(\theta)\) gates. A comprehensive discussion of the properties of different ansatz circuits can be found in (Sim et al., 2019).
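The X-Z decomposition of Eq. (3) is easy to verify numerically; the following NumPy sketch builds the rotation matrices and checks that the composed gate is unitary:

```python
import numpy as np

def Rx(t):
    """Rotation about the x axis."""
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def Rz(t):
    """Rotation about the z axis."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def single_qubit_gate(alpha, beta, gamma):
    """X-Z decomposition of Eq. (3): U = Rx(alpha) Rz(beta) Rx(gamma)."""
    return Rx(alpha) @ Rz(beta) @ Rx(gamma)

U = single_qubit_gate(0.3, 1.2, -0.7)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary
```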
Quantum measurement is to output a value used as a prediction for the data. A measurement operation always corresponds to a Hermitian operator \(M\) that can be decomposed as \(M=\sum_{i}\lambda_{i}\left|i\right\rangle\!\left\langle i\right|\), where \(\lambda_{i}\) is the \(i\)-th eigenvalue and \(\left|i\right\rangle\) is the corresponding eigenvector. After measurement, a quantum state \(\left|\psi\right\rangle\) will collapse to one of the eigenstates \(\left|i\right\rangle\) with probability \(p_{i}=\left|\left\langle i\middle|\psi\right\rangle\right|^{2}\). Then, the expectation value of the measurement outcome is
\[\left\langle M\right\rangle=\sum_{i}\lambda_{i}\cdot p_{i}=\sum_{i}\lambda_{ i}\left|\left\langle i\left|\psi\right\rangle\right|^{2}\,. \tag{4}\]
That is, the fundamental measurement outcomes are the probabilities \(\left\{p_{i}\right\}\) and the expectation \(\left\langle M\right\rangle\). The most basic measurement in quantum computing is the computational basis measurement, also known as the Pauli-\(Z\) measurement, with the Hermitian operator
\[\sigma_{z}=\left(+1\right)\left|0\right\rangle\!\left\langle 0\right|+\left(-1 \right)\left|1\right\rangle\!\left\langle 1\right|. \tag{5}\]
When performing the \(\sigma_{z}\) measurement, a qubit will collapse to the state \(\left|0\right\rangle\) (\(\left|1\right\rangle\)) with probability \(p_{0}=\left|\left\langle 0\middle|\psi\right\rangle\right|^{2}\) (\(p_{1}=\left|\left\langle 1\middle|\psi\right\rangle\right|^{2}\)), from which we read off the eigenvalue \(+1\) or \(-1\), respectively. The expectation value \(\left\langle\sigma_{z}\right\rangle\) is a value in the range \([-1,1]\). Because of the collapse principle of quantum measurement, in practice the probability \(p_{i}\) and the expectation value are estimated from \(s\) samples of measurement, where \(s\) is known as the number of shots.
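The shot-based estimation described here can be illustrated with a short NumPy sketch that samples \(s\) Pauli-\(Z\) outcomes for a qubit with a given collapse probability \(p_0\) (the value of \(p_0\) below is arbitrary):

```python
import numpy as np

def estimate_sigma_z(p0, shots=1500, seed=0):
    """Estimate <sigma_z> from `shots` Pauli-Z measurements of a qubit whose
    probability of collapsing to |0> is p0 (eigenvalue +1 for |0>, -1 for |1>)."""
    rng = np.random.default_rng(seed)
    outcomes = rng.choice([+1, -1], size=shots, p=[p0, 1 - p0])
    return outcomes.mean()

p0 = 0.8  # exact expectation: (+1)*0.8 + (-1)*0.2 = 0.6
print(estimate_sigma_z(p0))  # close to 0.6, up to sampling noise
```

The choice of 1500 shots here anticipates the shot number selected experimentally later in the paper.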
The optimization routine is used to update the parameters of the ansatz circuit, i.e. the adjustable rotation angles of the gates, based on the data. Optimizing the parameters is the process of minimizing the loss function \(L(\mathbf{\theta})\) with respect to the parameter vector \(\mathbf{\theta}\). Almost the same loss functions as those used in classical models, e.g. the mean squared error loss and the cross-entropy loss, can be applied in QNN. For instance, the multi-category cross-entropy loss can be expressed as

\[L(\mathbf{\theta})=-\frac{1}{N}\sum_{j=1}^{N}\sum_{c=1}^{C}\left[y_{jc}\cdot f\left(p_{i_{c}}\right)\right], \tag{6}\]

where \(N\) is the batch size, \(C\) is the number of categories, and \(y_{jc}\in\left\{0,1\right\}\) is the class label. \(p_{i_{c}}\) is the probability of measuring the eigenstate \(\left|i\right\rangle\) corresponding to category \(c\), and \(f\left(\cdot\right)\) represents the post-processing of the measurement outcome, which is used to associate the outcome with the label \(y_{jc}\).
Just like in classical neural networks, the parameters can be updated based on the gradient of the loss function. Taking the gradient descent method as an example, the \(i\)-th parameter \(\theta_{i}\) is updated as
\[\theta_{i}^{\prime}=\theta_{i}-\delta\cdot\frac{\partial L\left(\mathbf{\theta} \right)}{\partial\theta_{i}}\,, \tag{7}\]
where \(\delta\) is the learning rate. In quantum computing, there is no backpropagation algorithm to directly calculate the gradient of the loss. In practice, derivatives are evaluated on the quantum devices using the difference method or the parameter-shift rule (Wierichs et al., 2022). More details can be found in (Gyongyosi et al., 2019).
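As a concrete illustration of the parameter-shift rule, consider the single-qubit expectation \(\left\langle\sigma_{z}\right\rangle\) of the state \(R_{y}(\theta)\left|0\right\rangle\), which equals \(\cos\theta\); two shifted circuit evaluations recover the exact derivative:

```python
import numpy as np

def expectation(theta):
    """<sigma_z> for the state R_y(theta)|0>, which equals cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: the exact derivative from two shifted evaluations."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.4
print(parameter_shift_grad(expectation, theta))  # -sin(0.4)
print(-np.sin(theta))                            # analytic reference value
```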
## 3 Methods
### Quantum-classical convolutional neural network
Using the CNN in Figure 1 as the template, the QCCNN can be obtained by replacing the convolutional layers with PQCs. Since the template CNN has two convolutional layers, there are different replacement schemes. Figure 4 shows two possible QCCNN architectures. In Figure 4A, the QCCNN has one quantum convolutional layer and one classical convolutional layer, while in Figure 4B it has two quantum convolutional layers. Note that, in order to take full advantage of the quantum feature map for processing the raw data, QCCNN always takes the first convolutional layer as a quantum one. Hereafter, the models of Figure 4A and Figure 4B are named QCCNN-1 and QCCNN-2, respectively.
FIGURE 4 Architecture of the QCCNN with (A) one quantum and one classical convolutional layer (named as QCCNN-1) and (B) two quantum convolutional layers (named as QCCNN-2).
#### 3.1.1 Quantum Convolutional Layer
The architecture of the first quantum convolutional layer is shown in Figure 5. It comprises the same components as a QNN, including the encoding circuit, the ansatz circuit and quantum measurement.
The window size of the filter is taken as 2\(\times\)2, and the four elements are embedded into four qubits through four \(R_{y}(\theta)\) gates. Two typical hardware-efficient ansatz circuits are shown in Figure 6. They have different expressibility and entangling capability, which have a large but still ambiguous impact on the performance of QNN. Figure 6A depicts the all-to-all configuration of two-qubit gates, which has larger expressibility and entangling capability but higher circuit complexity. By contrast, Figure 6B depicts the circuit-block configuration of two-qubit gates, which has smaller expressibility and entangling capability but lower circuit complexity (Sim et al., 2019). Of course, the expressibility and entangling capability of the ansatz can be increased by stacking the circuit in multiple layers.
In quantum measurement, the four qubits are measured individually by the \(\sigma_{z}\) operator. The four probabilities of each qubit collapsing to \(\left|0\right\rangle\) are then taken as four feature channels for the next layer, as shown in Figure 5. Note that there is no activation function in the quantum convolutional layer; the nonlinearity comes from the process of data encoding and quantum measurement. This is the major difference between QNN and CNN.
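To make the data flow of this quantum convolutional layer concrete, the following NumPy sketch traces a 2\(\times\)2 filter over an image and outputs the per-qubit probabilities of collapsing to \(\left|0\right\rangle\) as four channels. For brevity the trainable entangling ansatz is omitted, so each marginal reduces to \(\cos^{2}(x/2)\); a stride of 1 is assumed, which the paper does not specify:

```python
import numpy as np

def quantum_conv2x2(image):
    """Slide a 2x2 window over the image; angle-encode each window on 4 qubits
    and output the per-qubit probabilities of measuring |0> as 4 channels.
    Without the entangling ansatz, each marginal is simply cos^2(x_i / 2)."""
    H, W = image.shape
    out = np.zeros((4, H - 1, W - 1))
    for i in range(H - 1):
        for j in range(W - 1):
            patch = image[i:i + 2, j:j + 2].reshape(4)  # 2x2 window -> 4 angles
            out[:, i, j] = np.cos(patch / 2) ** 2       # P(|0>) for each qubit
    return out

img = np.random.rand(20, 20)
print(quantum_conv2x2(img).shape)  # (4, 19, 19)
```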
The architecture of the second quantum convolutional layer is shown in Figure 7. In this case, the window size of the filter is taken as 3\(\times\)3, and the nine elements are embedded using nine qubits through nine \(R_{y}(\theta)\) rotation gates. The same ansatz circuits as shown in Figure 6 are used. In quantum measurement, the nine qubits are measured individually by the \(\sigma_{z}\) operator, and the nine probabilities of each qubit collapsing to \(\left|0\right\rangle\) are taken as nine channels for the next layer.
#### 3.1.2 Classical Operations
The classical operations of QCCNN include the classical convolutional layers, the pooling layers, and the fully connected layers, which follow the typical operations of CNN. More specifically, in the convolutional layers, the window size is taken as 3\(\times\)3 and the activation function is the ReLU function. The max-pooling layer is used to reduce the number of trainable parameters. At the end of QCCNN, two fully connected layers connect the convolutional layers to the output layer.
### Quantum Residual Network
Figure 6: Two ansatz circuits with (A) all-to-all configuration and (B) circuit-block configuration of two-qubit gates. The circuit is used as one layer, and several layers can be stacked to increase the expressibility and entangling capability of the circuit.

Just as in the construction of QCCNN, QCResNet can be obtained by replacing the convolutional layers of the template ResNet in Figure 2 with PQCs. Since the template ResNet has two residual units, there are different replacement schemes. Figure 8 shows two possible QCResNet architectures: the first, shown in Figure 8A, has one quantum residual unit, and the second, shown in Figure 8B, has two quantum residual units. Hereafter, the models of Figure 8A and Figure 8B are named QCResNet-1 and QCResNet-2, respectively.
FIGURE 8 Architecture of the QCResNet with (A) one quantum residual unit (named QCResNet-1) and (B) two quantum residual units (named QCResNet-2).
The operations of the first quantum residual unit are shown in Figure 9A, where the first convolutional layer is replaced with the quantum convolutional layer shown in Figure 9B. The window size of the filter is taken as 3\(\times\)3. In order to output three channels, the dense angle encoding method is applied, which uses 3 qubits to embed 9 data elements. The ansatz circuit used in the quantum convolutional layer is the one shown in Figure 6A. Finally, the three qubits are measured individually by the \(\sigma_{z}\) operator, and the three probabilities of each qubit collapsing to \(\left|0\right\rangle\) are taken as three channels for the next layer.
The architecture of the second quantum residual unit is almost the same as that of the first quantum residual unit except that the quantum convolutional layer used in the second quantum residual unit is the one in Figure 7.
FIGURE 9 Operations of the first quantum residual unit (A), and the quantum convolutional layer used in the first quantum residual unit (B).
## 4 Results and discussion
### Dataset and networks
The phytoplankton image dataset used here was collected with a custom-built imaging-in-flow cytometer (Imaging FlowCytobot) by analyzing water from Woods Hole Harbor. All sampling was done between late fall and early spring in 2004 and 2005 (Sosik et al., 2007). The dataset comprises 1200 images covering 4 kinds of phytoplankton, i.e. _DactFragCeratul_, _Dactyliosolen_, _Dinobryon_ and _Ditylum_, so each category contains 300 images. The 1200 images are split evenly into training and testing subsets of 600 images each. All images are normalized to 20\(\times\)20 pixels.
In total, six neural networks are evaluated, including the template CNN (see Figure 1), the template ResNet (see Figure 2), QCCNN-1 and QCCNN-2 (see Figure 4), and QCResNet-1 and QCResNet-2 (see Figure 8). The ansatz used in the quantum convolutional layers of QCCNN and QCResNet is the circuit of Figure 6A. The cross-entropy function in Eq. (6) is used as the loss function. The parameters in the quantum and classical layers are trained together and updated with the SGD method.
As discussed in the section on quantum measurement, the probability or expectation is estimated by repeating the measurement \(s\) times, where \(s\) is the number of shots. An appropriate number of shots needs to be set a priori. To this end, a preliminary experiment was done using QCCNN-1. As shown in Figure 10, QCCNN-1 achieves the best classification accuracy when \(s=1500\), so 1500 shots are used in quantum measurement in the following experiments.
FIGURE 10 Test accuracy of QCCNN-1 on phytoplankton classification using different numbers of shots in the process of quantum measurement.
### Comparison of the performance
First, we compare the performance of the template CNN and the QCCNN models. Figure 11 presents the evolution of the test classification accuracy of the two models. As can be seen from the figure, the prominent feature of QCCNN is that it converges much faster than CNN. This should be due to the fact that QCCNN can represent a much higher-dimensional feature space and is thus capable of capturing more abstract information from the data. Similar features have also been found in the quantum-inspired CNN (Shi et al., 2022).
The classification accuracy of QCCNN-1 is about 93.0%, which is close to that of CNN, but the accuracy fluctuation of QCCNN is much smaller, implying good generalization. Furthermore, the accuracy of QCCNN-1 is higher than that of QCCNN-2, which implies that the replacement scheme of quantum convolutional layers has a large impact on the performance of QCCNN. The specific replacement scheme should be trialed with regard to the learning task.
Next, we compare the performance of the template ResNet and the QCResNet models. Figure 12 presents the evolution of the test classification accuracy of the two models. Similar features can be found in QCResNet as in QCCNN. As can be seen from the figure, QCResNet-1 converges faster than ResNet. The classification accuracy of QCResNet-1 is 93.2%, higher than the 91.9% of ResNet. However, QCResNet-2 performs worse than ResNet, which again implies the large impact of the replacement scheme of quantum convolutional layers on model performance. Although both QCResNet and ResNet present large fluctuations in the test accuracy curves, the fluctuation of QCResNet becomes smaller than that of ResNet after about 30 epochs. The large fluctuation should be due to the architecture of the template ResNet; we leave the optimization of ResNet for future work.
FIGURE 11 Curves of the test accuracy of QCCNNs and CNN for phytoplankton classification.
### Influence of ansatz circuit for QCCNN
The quantum convolutional layer is the key component of both QCCNN and QCResNet. Essentially, the quantum convolutional layer takes the ansatz circuit, i.e. a PQC, as a filter to implement the forward transformation of CNN. Thus, features of the ansatz circuit should have a direct influence on the performance of QCCNN and QCResNet. Analyzing this relationship is beneficial for increasing the performance of QNN models.
Ansatz circuits can be characterized quantitatively from the perspective of the expressibility and entangling capability of the PQC (Sim et al., 2019). Hence, there should be relations between these measures of a PQC and the performance of the corresponding QCCNN. Below we use QCCNN-1 as the basic model to explore this dependence. Note that up to now QCCNN-1 has used the circuit of Figure 6A as the ansatz. As discussed above, the circuit of Figure 6A has larger expressibility and entangling capability, while that of Figure 6B is lower but can be stacked to increase its expressibility and entangling capability. By replacing the ansatz of QCCNN-1 with multiple layers of the circuit of Figure 6B, several versions of QCCNN-1 are obtained.
The classification accuracy of the five versions of QCCNN-1 is shown in Figure 13. As can be seen from the figure, the accuracy of QCCNN-1 using Figure 6A as the ansatz is higher than that using Figure 6B, which implies that higher expressibility and entangling capability of the ansatz circuit can indeed give rise to better performance of the QCCNN model.
However, for QCCNN-1 using multiple layers of Figure 6B as the ansatz, the accuracy does not always increase with the number of layers. The accuracy of QCCNN-1 with 2 layers is the highest, while that with 1, 3 and 4 layers is close. Note that the circuit using 4 layers of Figure 6B achieves similar expressibility and entangling capability to that of Figure 6A. This result implies that, in addition to expressibility and entangling capability, there are other influencing factors. One is the number of trainable parameters: when the number of layers increases, the expressibility and entangling capability increase, but so does the number of trainable parameters. More parameters make the model harder to train and thus decrease its generalization, offsetting the positive effect of increased expressibility and entangling capability. A second factor would be the topological structure of the ansatz circuit. A quantitative way of characterizing the architecture of a PQC and its correlation with the performance of the corresponding QCCNN needs to be explored in detail; we leave this for future work.
FIGURE 13 Classification accuracy of the five versions of QCCNN-1 using the circuit of Figure 6A and 1, 2, 3, 4 layers of Figure 6B as the ansatz.
## 5 Conclusion
In this work, we develop several hybrid quantum-classical convolutional and residual neural networks and demonstrate their efficiency for phytoplankton classification. The QCCNN and QCResNet models are constructed by adding a quantum-enhanced forward transformation into the classical CNN and ResNet models. The hybrid architectures strike a good balance between the limited capability of current NISQ devices and the large size of phytoplankton images. Better performance of QCCNN and QCResNet is obtained against their classical counterparts; in particular, QCCNN and QCResNet converge much faster than the classical models. Furthermore, we find that the performance of QCCNN and QCResNet depends on several factors, including the expressibility, entangling capability and topological structure of the ansatz circuit, as well as the number of trainable parameters; performance can be increased by taking all these factors into account. The present QCCNN and QCResNet models can easily be extended to other image classification tasks. In future work, we will optimize the architectures of QCCNN and QCResNet from the perspective of both the quantum and classical convolutional layers, and explore applications of the quantum-enhanced models to other tasks in marine science.
## Data and Code availability
The data and codes are available upon request from the authors.
## Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Author contributions
SS and ZW developed the algorithms and wrote the first draft. SS, RS, YL and JL wrote the codes and carried out the numerical experiments. YG, GZ and ZW planned and designed the project. All authors discussed the results and reviewed the manuscript.
## Funding
The present work is supported by the Natural Science Foundation of Shandong Province of China (ZR2021ZD19) and the National Natural Science Foundation of China (12005212).
## Acknowledgments
We are grateful to the support of computational resources from the Marine Big Data Center of Institute for Advanced Ocean Study of Ocean University of China.
|
2304.09500 | Biologically inspired structure learning with reverse knowledge
distillation for spiking neural networks | Spiking neural networks (SNNs) have superb characteristics for sensory information recognition tasks due to their biological plausibility. However, the performance of some current spiking-based models is limited by their structures, meaning that either fully connected or overly deep structures bring too much redundancy. This redundancy, from both connections and neurons, is one of the key factors hindering the practical application of SNNs. Although some pruning methods have been proposed to tackle this problem, they normally ignore the fact that the neural topology in the human brain can be adjusted dynamically. Inspired by this, this paper proposes an evolutionary-based structure construction method for building more reasonable SNNs. By integrating knowledge distillation and connection pruning, the synaptic connections in SNNs can be optimized dynamically to reach an optimal state. As a result, the structure of SNNs can not only absorb knowledge from the teacher model but also search for deep yet sparse network topologies. Experimental results on CIFAR100 and DVS-Gesture show that the proposed structure learning method achieves good performance while reducing connection redundancy. The proposed method explores a novel dynamic approach to structure learning from scratch in SNNs, which could build a bridge to close the gap between deep learning and bio-inspired neural dynamics. | Qi Xu, Yaxin Li, Xuanye Fang, Jiangrong Shen, Jian K. Liu, Huajin Tang, Gang Pan | 2023-04-19T08:41:17Z | http://arxiv.org/abs/2304.09500v1 | Biologically inspired structure learning with reverse knowledge distillation for spiking neural networks
###### Abstract
Spiking neural networks (SNNs) have superb characteristics for sensory information recognition tasks due to their biological plausibility. However, the performance of some current spiking-based models is limited by their structures, meaning that either fully connected or overly deep structures bring too much redundancy. This redundancy, from both connections and neurons, is one of the key factors hindering the practical application of SNNs. Although some pruning methods have been proposed to tackle this problem, they normally ignore the fact that the neural topology in the human brain can be adjusted dynamically. Inspired by this, this paper proposes an evolutionary-based structure construction method for building more reasonable SNNs. By integrating knowledge distillation and connection pruning, the synaptic connections in SNNs can be optimized dynamically to reach an optimal state. As a result, the structure of SNNs can not only absorb knowledge from the teacher model but also search for deep yet sparse network topologies. Experimental results on CIFAR100 and DVS-Gesture show that the proposed structure learning method achieves good performance while reducing connection redundancy. The proposed method explores a novel dynamic approach to structure learning from scratch in SNNs, which could build a bridge to close the gap between deep learning and bio-inspired neural dynamics.
## 1 Introduction
Spiking Neural Networks (SNNs) are regarded as efficient computational models because of their biological plausibility, especially since their structures are dynamically malleable. Cognitive activities are realized with the help of the complex structure of the human brain, which is composed of billions of neurons and even more neural connections between them. Similar to biological neural networks, SNNs transfer and process information via binary spikes. During the flow of information, fired spikes are transmitted over the synaptic connections between neurons according to structural plasticity rules.
Compared to conventional artificial neural network models, including convolutional neural networks (CNNs), SNNs are certainly more biologically inspired and energy efficient. However, one fact cannot be ignored: SNNs do not achieve nearly as good performance as ANNs do. One key factor, in our view, is that the structures of some current spiking-based models are limited by their training methods; that is, SNNs cannot directly leverage global backpropagation (BP) rules as CNNs do. This defect leads directly to unreasonable structures in SNNs. Besides, the structures of both CNNs and SNNs are fixed, unlike biological neural systems, whose structures and topologies are dynamic, meaning that connections can be discarded as needed.
To tackle this problem, some studies focus on constructing more efficient structures to improve the efficiency of SNNs. Researchers proposed an approximate BP method named STBP (spatio-temporal backpropagation) for training high-performance SNNs [24]. To avoid training SNNs directly, some studies [11; 6] proposed ANN-to-SNN conversion, which obtains parameters from trained ANNs and then maps them to SNNs. Although these methods can construct fairly deep structures, they bring additional computational cost. Moreover, these BP-based learning rules match only a fixed structure; if the network topology changes, the model must be retrained from scratch.
Figure 1: The schematic illustration of the reverse-KD framework between the teacher SNN and the student SNN. (a) When the structure of the teacher SNN is sparse, this kind of re-KD is named sparse-KD. (b) When the teacher SNN is replaced by a manually designed probability distribution, this kind of re-KD is named teacher default-KD.

Aiming at constructing more biologically flexible SNNs, this paper proposes a biologically inspired structure learning method with reverse knowledge distillation (KD). Based on the proposed training method, the desired student SNN models can learn rich information from teacher ANN models [25]. Compared to traditional KD methods, one key difference of the proposed re-KD method for SNN training is that the structures play an important role in the training process: they are not only the final result but can also guide the training itself. Not limited to label smoothing regularization [27], this paper shows that the proposed re-KD method can build more robust structures of SNNs. We evaluated the proposed methods on several pattern recognition datasets (CIFAR100 and DVS-Gesture); experimental results show that the proposed methods not only achieve good recognition performance but also exhibit robust generalization ability in time-series dynamic environments. The main contributions of this paper are summarized as follows:
* This paper proposes reverse KD methods for constructing efficient structures of spiking neural networks. The proposed methods are designed to circumvent the non-differentiability of spikes when using BP rules directly.
* The proposed methods let sparse-structured models serve as teachers to help construct robust student SNN models. Besides, this paper provides a brand-new teacher-free KD method that helps student SNN models absorb useful information when the teacher is absent.
* Experimental evaluations show that the proposed re-KD method can improve the performance of SNNs. Furthermore, this kind of KD construction for deep SNNs allows teacher and student models to be homogeneous or heterogeneous, which shows great potential for neuromorphic hardware platforms.
## 2 Related work and Motivation
There is a lack of suitable structures for deep SNNs, which causes difficulties in training deep SNNs with BP rules directly. Based on the KD method, this paper proposes the reverse-KD method to further exploit the brain-inspired characteristics of dynamic spiking signals. The proposed reverse-KD methods comprise two variants: one where the structure of the teacher model is sparse, named sparse-KD, and one where the teacher is absent during the KD training process, named teacher default-KD. To further verify the effectiveness of the proposed method, this paper embeds the surrogate gradient method into the proposed method to train deep SNNs.
### The structures of deep SNNs
Compared to structures in the deep learning field, those of SNNs are varied because there are no unified learning rules for deep SNN training as BP provides in deep learning. [5; 26] proposed temporally efficient training methods to build deep SNNs. [2] exploited the state transitions of dendritic spines to improve the training efficiency of sparse SNNs. Some studies focused on the activation function in SNNs [1; 29] to build highly brain-inspired spiking neurons and neural circuits.
Considering more detailed differences between ANNs and SNNs [3; 18], some researchers have committed to improving the network structure so that SNNs can be applied to detection [13] and dynamic time-series data processing [12]. Although these approaches can build efficient deep SNN structures, they also bring huge power consumption and additional computing resources, especially when ANN-to-SNN conversion is used [1; 20]. Besides, these structure construction methods overlook flexibility in structure creation, which means that if the task changes, a brand-new SNN must be trained from scratch.
### Surrogate gradient method for SNN training
Different from ANN-to-SNN conversion [7; 11], motivated by the huge success that BP has achieved in deep CNNs, some studies aim to utilize BP-based rules to train deep SNNs. [17] first introduced the surrogate gradient training method to mimic the backpropagation process of CNNs. These methods address the non-differentiability of binary spikes: by applying surrogate gradients, spikes become differentiable so that the corresponding weights can be trained globally.
[28] analysed the robustness of surrogate gradient training in SNNs and instilled complex functions into SNNs. Combining this with the characteristics of the membrane potential, some scholars incorporated learnable membrane time constants to enhance surrogate gradient training [6]. Others made further improvements to make it friendly to neuromorphic on-chip hardware systems [22].
Although these surrogate training methods consume little power compared to ANN-to-SNN conversion, they still require additional computational resources. More importantly, they still leave the structures of SNNs fixed and unable to change with the task. To build more flexible SNN structures, some studies combined surrogate-gradient training with knowledge distillation-based methods to build more efficient SNNs [14; 23; 15]. They normally set a powerful ANN as the teacher and a shallow SNN as the student, which makes unrealistic assumptions and introduces more computing consumption, since a strong ANN must additionally be trained.
### Motivation
Aiming to tackle the aforementioned problems in constructing efficient deep SNN models, this paper proposes a reverse KD method for building deep SNNs. By rethinking the original KD in SNNs, this paper makes a systematic analysis of the knowledge transfer between teacher and student.
This paper focuses not only on efficient SNN structures but also on the power consumption brought by the KD process. Specifically, we build two re-KD methods for deep SNN construction: sparse-KD and teacher default-KD. One of the key innovations is the observation that the teacher is not always strong and, meanwhile, the student is not always weak. By building a relationship between weak teachers and strong students, this paper rethinks the KD process in SNN training and shows that, with the proposed method, performance improves while power consumption decreases.
## 3 Method
To construct a bio-inspired training method combining structural learning and knowledge distillation for SNNs, this paper proposes the re-KD method. Our method presents two approaches to knowledge distillation: sparse-KD and teacher default-KD. This knowledge distillation framework explores a way of utilizing a network with a sparse structure, or a virtual network, as the teacher network, which differs from conventional knowledge distillation methods.
### The framework of reverse knowledge distillation
**Spiking neuron model.** We use the IF (Integrate-and-Fire) spiking neuron model as the basic unit of the network. The IF neuron model is one of the most commonly used spiking neuron models. It simulates the action potential process of biological neurons: the neuron receives spiking stimulation from presynaptic neurons, and the state of its membrane potential changes accordingly. The neural dynamics of the IF neuron can be represented by Eq. (1).
\[\frac{\mathrm{d}V(t)}{\mathrm{d}t}=X(t) \tag{1}\]
where \(V(t)\) is the membrane potential of the IF neuron and \(X(t)\) denotes the input current at time \(t\). Therefore, the current membrane potential of the IF neuron can be expressed as Eq. (2).
\[V(t)=f(V(t-1),X(t))=V(t-1)+X(t) \tag{2}\]
When the membrane potential reaches the threshold, the neuron fires a spike and its membrane potential is then reset to the reset voltage \(V_{\text{reset}}\). In addition, the framework trains the SNN based on surrogate gradients.
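A minimal Python sketch of these IF dynamics, following Eqs. (1)-(2) with a hard reset upon firing, might look as follows (the threshold and reset values are illustrative):

```python
def if_neuron(inputs, v_threshold=1.0, v_reset=0.0):
    """Simulate an IF neuron: V(t) = V(t-1) + X(t) per Eq. (2); emit a spike
    and reset to v_reset whenever V crosses v_threshold."""
    v, spikes = v_reset, []
    for x in inputs:
        v += x                  # integrate the input current
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset         # hard reset after firing
        else:
            spikes.append(0)
    return spikes

print(if_neuron([0.3, 0.4, 0.5, 0.2, 0.9]))  # [0, 0, 1, 0, 1]
```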
**Overall framework of the reverse knowledge distillation method.** Re-KD explores a novel way of knowledge distillation, which combines a bio-inspired structure with distillation. A reverse knowledge distillation approach is adopted to train the SNN student network. As shown in Figure 1(a), the pruned SNN model is used as a teacher network to guide the training of the student network, which reduces the interference in the hidden information of the teacher network so as to train the student network better. As shown in Figure 1(b), a virtual teacher network with \(100\%\) accuracy is designed to better avoid the impact of wrong classifications. The framework then adopts a response-based knowledge distillation method to train the student network.
### Sparse knowledge distillation
**Teacher sparsified.** This method uses a sparse network as the teacher network and is called sparse-KD. We use a weight-based pruning method to obtain a sparse network structure as the teacher model to guide the training of the student model. This sparsification prunes a certain proportion of low-weight connections from a pre-trained SNN model. There is often redundancy in the connections of a network structure, similar to that of a biological neural network, and removing these redundant connections makes the network structure more robust. First, we train an SNN model and fix its weights. Then, we sort the weights of all connections in the convolutional layers of the model and prune the weights with the smallest values according to a certain proportion. This method sets a mask that is multiplied by the weights, with the elements at the corresponding positions of the mask matrix set to 0 for the connections whose weights need to be pruned. The weights and pruning mask in layer \(l\) are \(W^{l}\) and \(M^{l}\), and the pruned weights can be expressed as Eq. (3):
\[W^{l}_{\text{pruned}}=W^{l}\odot M^{l} \tag{3}\]
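A sketch of this magnitude-based mask construction in PyTorch is given below; it is an illustration of Eq. (3), not the authors' code, and the pruning ratio is an example value:

```python
import torch

def magnitude_prune(weight, ratio=0.5):
    """Zero out the `ratio` fraction of weights with the smallest magnitude
    and apply the mask as in Eq. (3): W_pruned = W * M."""
    k = int(weight.numel() * ratio)
    if k == 0:
        return weight.clone(), torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask

w = torch.randn(64, 3, 3, 3)              # e.g. the weights of one conv layer
w_pruned, mask = magnitude_prune(w, 0.5)
print(mask.mean().item())                  # roughly 0.5 of the connections survive
```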
**Loss function.** This method uses response-based knowledge distillation, where the final-layer output of the teacher network is used to guide the training of the student network. Here, the teacher network is a pre-trained SNN model that has been pruned. The learning objectives of the student network are divided into soft labels and true labels. Soft labels refer to the output of the final layer of the teacher network, which contains hidden knowledge. In order to increase the entropy of the probability distribution of the network's output, a temperature parameter \(T\) is introduced to smooth this probability distribution, allowing the hidden similarity information in the teacher network's output distribution to be learned better. The logits are represented by \(Z_{i}\), and the flattened output probability distribution \(q_{i}\) can be expressed as Eq. (4):
\[q_{i}=\frac{\exp{(Z_{i}/T)}}{\sum\exp{(Z_{j}/T)}} \tag{4}\]
The loss function for knowledge distillation during training of the student network consists of two parts. One is the cross-entropy loss between the output of the student network and the true labels, where \(Q_{S}\) is the probability distribution of the student network's output. The other is the KL-divergence loss between the output of the student network and the soft labels, where the output probability distributions of the student and teacher networks are both flattened by the temperature \(T\) to obtain \(Q_{S}^{\tau}\) and \(Q_{T}^{\tau}\). The student network can then be trained using the loss function in Eq. (5), as referenced in paper [16].
\[\begin{split} L_{sparse-KD}&=\alpha T^{2}*\text{ KL-Divergence }(Q_{S}^{\tau},Q_{T}^{\tau})\\ &+(1-\alpha)*\text{ CrossEntropy }(Q_{S},y_{\text{true}})\end{split} \tag{5}\]
where \(\alpha\) controls the relative importance of the two parts of the loss function and \(y_{\text{true}}\) denotes the true labels.
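A PyTorch sketch of the sparse-KD objective of Eq. (5) might look as follows; the temperature and \(\alpha\) values are illustrative, as the paper does not fix them here:

```python
import torch
import torch.nn.functional as F

def sparse_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Eq. (5): alpha * T^2 * KL(softened student || softened teacher)
    plus (1 - alpha) * cross-entropy against the true labels."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    return alpha * (T ** 2) * kd + (1 - alpha) * ce

s = torch.randn(8, 100)            # student logits: batch of 8, 100 classes
t = torch.randn(8, 100)            # (pruned) teacher logits
y = torch.randint(0, 100, (8,))
print(sparse_kd_loss(s, t, y).item())
```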
**Training algorithm.** The first step is to train an SNN model, prune it based on the weights, and save it as the teacher network. In the second step, we use an SNN model with the same structure as the teacher network as the student network and train it with response-based knowledge distillation. In this way, the clearer hidden knowledge in the pruned teacher network is utilized to guide the training of the student network.
### Teacher default knowledge distillation
**Teacher default.** This method uses a manually designed probability distribution as the teacher network, which gives the teacher network an accuracy of \(100\%\). In normal knowledge distillation, the accuracy of the teacher network is usually below \(100\%\), so there may be some interference in its hidden knowledge. Using a pruned teacher network for knowledge distillation can yield a more robust student network; to guide the learning of the student network even better, a \(100\%\)-accurate virtual teacher network can be designed. Assuming there are \(C\) classification categories,
this probability distribution sets the probability of the correct label \(\alpha\) (\(\alpha>0.9\)), and the probability distribution of this virtual teacher network can be represented by Eq. (6).
\[p(c)=\begin{cases}\alpha&\text{if }c=t\\ \dfrac{1-\alpha}{C-1}&\text{if }c\neq t\end{cases} \tag{6}\]
where \(c\) represents each category and \(t\) is the correct category. The probability distribution \(p(c)\) can be viewed as the probability distribution of the output of this virtual teacher network.
**Loss function.** This method is similar to the response-based knowledge distillation above. The training loss of the student network has two parts: the cross-entropy loss between the student's output and the true label, and the KL-divergence loss between the student's output and the soft label. The probability distribution of the virtual teacher network is flattened with the temperature \(T\) to obtain the soft label. The two parts are weighted and summed to give the knowledge distillation loss in Eq. (7).
\[\begin{split} L_{default-KD}&=\alpha*\text{ KL- Divergence }(Q_{S},Q_{T}^{\tau})\\ &+(1-\alpha)*\text{ CrossEntropy }(Q_{S},y_{\text{true}})\end{split} \tag{7}\]
where \(Q_{S}\) is the output of student network. \(Q_{T}^{\tau}\) denotes the probability distribution designed manually after flatting by parameter \(T\).
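Below is a minimal PyTorch sketch of the virtual teacher of Eq. (6) and the resulting loss of Eq. (7). Following Eq. (7) literally, only the teacher distribution is softened by \(T\); in the code, `a` stands for the correct-label probability that the paper also calls \(\alpha\), and the values of `a`, `T`, and `alpha` are illustrative.

```python
import torch
import torch.nn.functional as F

def virtual_teacher_probs(labels, num_classes, a=0.95):
    """Eq. (6): probability a on the correct class, (1 - a)/(C - 1)
    on each of the remaining classes."""
    p = torch.full((labels.size(0), num_classes),
                   (1.0 - a) / (num_classes - 1))
    p[torch.arange(labels.size(0)), labels] = a
    return p

def default_kd_loss(student_logits, labels, num_classes,
                    T=4.0, a=0.95, alpha=0.9):
    """Teacher-default KD loss of Eq. (7)."""
    p = virtual_teacher_probs(labels, num_classes, a)
    q_t = F.softmax(torch.log(p) / T, dim=1)        # flatten designed probs with T
    log_q_s = F.log_softmax(student_logits, dim=1)  # Q_S, unsoftened as in Eq. (7)
    kd = F.kl_div(log_q_s, q_t, reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```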
**Training algorithm.** The first step is to select appropriate parameters for the designed probability distribution, yielding a virtual teacher network with completely correct outputs. The second step uses this virtual teacher to guide the training of the student network through knowledge distillation. Because the virtual teacher network is \(100\%\) accurate and free of interference from incorrect classifications, it can better guide the training of the student network.
| Dataset | SNN Model | Teacher prune ratio | Teacher Acc. (%) | Student SNN Acc. (%) | Student KD Acc. (%) | Improvement (%) |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR100 | Resnet18 | 0.1 | 69.77 | 70.30 | 71.90 | 1.60 |
| CIFAR100 | Resnet18 | 0.3 | 69.54 | 70.30 | 71.59 | 1.29 |
| CIFAR100 | Resnet18 | 0.5 | 69.20 | 70.30 | 71.53 | 1.23 |
| CIFAR100 | Resnet18 | 0.7 | 63.05 | 70.30 | 71.36 | 1.06 |
| CIFAR100 | Resnet18 | 0 | 70.30 | 70.30 | 71.28 | 0.98 |
| CIFAR100 | WRN16-4 | 0.1 | 68.20 | 69.08 | 69.91 | 0.83 |
| CIFAR100 | WRN16-4 | 0.3 | 68.24 | 69.08 | 69.72 | 0.64 |
| CIFAR100 | WRN16-4 | 0.5 | 66.09 | 69.08 | 69.90 | 0.82 |
| CIFAR100 | WRN16-4 | 0.7 | 50.09 | 69.08 | 69.87 | 0.79 |
| CIFAR100 | WRN16-4 | 0 | 69.08 | 69.08 | 69.44 | 0.36 |
| DVS-Gesture | Resnet18 | 0.1 | 94.79 | 95.13 | 95.83 | 0.70 |
| DVS-Gesture | Resnet18 | 0.3 | 94.79 | 95.13 | 95.83 | 0.70 |
| DVS-Gesture | Resnet18 | 0.5 | 88.54 | 95.13 | 95.14 | 0.01 |
| DVS-Gesture | Resnet18 | 0.7 | 31.60 | 95.13 | 96.18 | 1.05 |
| DVS-Gesture | Resnet18 | 0 | 95.13 | 95.13 | 95.83 | 0.70 |
| DVS-Gesture | WRN16-4 | 0.1 | 93.40 | 94.44 | 95.48 | 1.04 |
| DVS-Gesture | WRN16-4 | 0.3 | 90.62 | 94.44 | 96.88 | 2.44 |
| DVS-Gesture | WRN16-4 | 0.5 | 74.31 | 94.44 | 95.49 | 1.05 |
| DVS-Gesture | WRN16-4 | 0.7 | 12.85 | 94.44 | 94.79 | 0.35 |
| DVS-Gesture | WRN16-4 | 0 | 94.44 | 94.44 | 94.79 | 0.35 |

Table 1: Test accuracies of sparse-KD on CIFAR100 and DVS-Gesture.
## 4 Experimental results
We conducted experiments on the static dataset CIFAR100 and the neuromorphic dataset DVS-Gesture to verify the effectiveness of this framework. First, we analyze the impact of different teacher-network pruning ratios on the performance of the student network. Then, we verify the performance of knowledge distillation with a virtual teacher network. In addition, to validate the advantages of the reverse-KD methods, we compare the experimental results of this framework with other SNN methods.
### Experimental settings
We conduct the experiments on a server equipped with a 16-core Intel(R) Xeon(R) Gold 5218 CPU at 2.30GHz and 8 NVIDIA GeForce RTX 2080 Ti GPUs. The training of SNNs is based on the SpikingJelly framework, an SNN framework built on PyTorch. Our experiments mainly focus on VGG structures and residual structures such as Resnet and WideResnet. In these experiments, the teacher network adopts the same structure as the student network in order to better verify the advantages of structure learning.
**Dataset CIFAR100.** CIFAR100 is a commonly used static dataset with three RGB channels and an image size of 32×32. It contains 100 classes, each with 500 training images and 100 test images, and each image carries two labels: a fine label and a coarse label. The dataset is relatively complex and can verify the performance of our proposed framework in a realistic way. During training, the static images must first be encoded into spiking sequences before being input to the SNN; here, the first spiking neuron layer of the network acts as the encoding layer.
**Dataset DVS-Gesture.** DVS-Gesture is a neuromorphic dataset comprising 11 gestures to be recognized. The data are stored as event streams and must be integrated into frames before use; the resulting spiking sequences can then be fed directly into the SNN. The dataset has two polarity channels and an image size of 128×128.
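As an illustration of this preprocessing, the following NumPy sketch shows one common integration scheme, splitting the event stream into \(T\) slices of equal event count and accumulating spike counts per polarity; the function and argument names are hypothetical, and SpikingJelly provides its own integration utilities.

```python
import numpy as np

def events_to_frames(x, y, p, t, T=16, H=128, W=128):
    """Integrate a DVS event stream (pixel coordinates x, y, polarity
    p in {0, 1}, timestamps t) into T frames of shape (2, H, W)."""
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    edges = np.linspace(0, len(t), T + 1).astype(int)  # equal-count slices
    for i in range(T):
        s, e = edges[i], edges[i + 1]
        # accumulate event counts at (frame, polarity, row, col)
        np.add.at(frames, (i, p[s:e], y[s:e], x[s:e]), 1.0)
    return frames
```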
### Evaluation on sparse-KD method
We use the SNN student network itself, as well as versions of it pruned at different ratios, as the teacher network, and then analyze the experimental results. These experiments use the spiking forms of the Resnet18 and WRN16-4 network structures.
For the CIFAR100 dataset, as shown in Table 1, using the SNN student network itself as its own teacher network for knowledge distillation improves the accuracy of the student network by \(0.98\%\) (Resnet18) and \(0.36\%\) (WRN16-4). The experiments also show that using models with different pruning ratios of the student network itself as the teacher network can also improve the accuracy of the student network. As the pruning ratio increases, the improvement of the accuracy of the student network gradually decreases. However, using a pruned model as a teacher network for knowledge distillation is more effective than using a non-pruned model as a teacher network. As can be seen in Table 1, for Resnet18, when using a model pruned at 0.1 as the teacher network, the accuracy of the student network can be improved by \(1.60\%\), which is greater than \(0.98\%\).
For the DVS-Gesture dataset, as shown in Table 1, using the SNN student network itself as its own teacher network for knowledge distillation improves the accuracy of the student network by \(0.70\%\) (Resnet18) and \(0.35\%\) (WRN16-4). As the pruning ratio increases, the distillation effect of the teacher network becomes less predictable. For WRN16-4, pruning at 0.3 yields the most pronounced improvement on the student network, at \(2.44\%\); for Resnet18, pruning at 0.7 improves the accuracy by \(1.05\%\).
| Dataset | SNN Model | Student SNN Acc. (%) | Student KD Acc. (%) | Improvement (%) |
| --- | --- | --- | --- | --- |
| CIFAR100 | Resnet18 | 70.30 | 71.20 | 0.90 |
| CIFAR100 | WRN16-4 | 69.08 | 70.68 | 1.60 |
| CIFAR100 | VGG16 | 65.49 | 65.95 | 0.46 |
| DVS-Gesture | Resnet18 | 95.13 | 96.88 | 1.75 |
| DVS-Gesture | WRN16-4 | 94.44 | 96.18 | 1.74 |

Table 2: Test accuracies of default-KD on CIFAR100 and DVS-Gesture.
Using a pruned student network, or the student network itself, as the teacher differs from the conventional view of knowledge distillation, in which the teacher is stronger than the student. Although the accuracy of the pruned student network may be slightly lower than that of the student network, the experiments show that it can still improve the student's accuracy. This suggests that the pruned model, by removing some of the interference from redundant connections, can transfer effective hidden knowledge more cleanly to guide the training of the student network.
### Evaluation on teacher default-KD method
We use a virtual teacher network to perform knowledge distillation. The experiments are based on the spiking forms of the Resnet18, WRN16-4, and VGG16 network structures. The virtual teacher network is a manually designed probability distribution with an accuracy of \(100\%\).
The effectiveness of the virtual teacher network varies across models, as shown in Table 2. On the CIFAR100 dataset, knowledge distillation with a virtual teacher improves the accuracy of the WRN16-4 student network by \(1.60\%\); for the VGG16 model, the student's accuracy improves by \(0.46\%\). On the DVS-Gesture dataset, the student networks improve by more than \(1.7\%\).
Using a virtual teacher network for knowledge distillation is thus an effective way to improve the accuracy of the student network. The virtual teacher produces no output errors and can therefore guide the student's training more reliably. Since SNNs are difficult to train, this method is also suitable for situations where no better-performing model is available to serve as the teacher.
### Performance Comparison with Other Methods
As shown in Table 3, to better analyze the effectiveness of the method, we compare it with some existing methods. For the CIFAR100 dataset, the sparse-KD method (using a teacher pruned at ratio 0.1) and the default-KD method reach accuracies of \(71.90\%\) and \(71.20\%\) on Resnet18, respectively, outperforming the RMP-SNN [9], Hybrid [19], Opt. [4], and TSC [8] methods. For the DVS-Gesture dataset, the sparse-KD method (using a teacher pruned at ratio 0.7) reaches \(96.18\%\) on Resnet18 and the default-KD method reaches \(96.88\%\), both better than the SLAYER [21] and Com. [10] methods. Compared to the PLIF [6] and STBP-tdBN [30] methods, the accuracy is slightly lower, but only 16 timesteps are used, fewer than theirs.
| Dataset | Method | SNN Architecture | SNN Acc. (%) | Timestep |
| --- | --- | --- | --- | --- |
| CIFAR100 | RMP-SNN [9] | VGG16 | 70.93 | 2048 |
| CIFAR100 | RMP-SNN [9] | Resnet20 | 67.82 | 2048 |
| CIFAR100 | Hybrid [19] | VGG11 | 67.87 | 125 |
| CIFAR100 | TSC [8] | VGG16 | 70.97 | 1024 |
| CIFAR100 | TSC [8] | Resnet20 | 68.18 | 1024 |
| CIFAR100 | Opt. [4] | VGG16 | 70.55 | 400 |
| CIFAR100 | Opt. [4] | Resnet20 | 69.82 | 400 |
| CIFAR100 | **Proposed sparse-KD** | ResNet18 | 71.90 | 4 |
| CIFAR100 | **Proposed default-KD** | ResNet18 | 71.20 | 4 |
| DVS-Gesture | PLIF [6] | custom spiking CNN | 97.57 | 20 |
| DVS-Gesture | STBP-tdBN [30] | Resnet17 | 96.87 | 40 |
| DVS-Gesture | SLAYER [21] | 8 layers | — | — |
| DVS-Gesture | Com. [10] | custom spiking CNN | 93.40 | 25 |
| DVS-Gesture | **Proposed sparse-KD** | ResNet18 | 96.18 | 16 |
| DVS-Gesture | **Proposed default-KD** | ResNet18 | 96.88 | 16 |

Table 3: Summary comparison of classification accuracies with other spiking-based models.
## 5 Conclusion
Inspired by the biological plausibility of neural systems in the human brain, this paper proposed efficient structure learning methods with reverse knowledge distillation for SNNs. Such methods help build deep yet sparse structures under the guidance of pruning, which not only discards redundancy in the complex spiking neural dynamics but also saves power consumption and memory storage in SNNs.
Through the atypical KD settings of the proposed sparse-KD and teacher-default-KD methods, the experimental results show that this work extends to broader conditions in SNNs, especially when the teacher model is weak. They also show that the proposed models not only perform well on a relatively large dataset (CIFAR100) but are also suitable for dynamic scenes (DVS-Gesture). Under strict timestep budgets, the proposed methods help SNNs achieve good performance compared to other spiking-based models; that is, they give full play to the low-power-consumption advantage of SNNs.
In future work, we will extend these structure learning methods to better exploit the advantages of spiking signals. Beyond the two teacher-weak conditions proposed here, we will consider more situations in which the teacher model is impaired or unavailable, and evaluate them on more types of datasets.
|
2306.05557 | On Performance Discrepancies Across Local Homophily Levels in Graph
Neural Networks | Graph Neural Network (GNN) research has highlighted a relationship between
high homophily (i.e., the tendency of nodes of the same class to connect) and
strong predictive performance in node classification. However, recent work has
found the relationship to be more nuanced, demonstrating that simple GNNs can
learn in certain heterophilous settings. To resolve these conflicting findings
and align closer to real-world datasets, we go beyond the assumption of a
global graph homophily level and study the performance of GNNs when the local
homophily level of a node deviates from the global homophily level. Through
theoretical and empirical analysis, we systematically demonstrate how shifts in
local homophily can introduce performance degradation, leading to performance
discrepancies across local homophily levels. We ground the practical
implications of this work through granular analysis on five real-world datasets
with varying global homophily levels, demonstrating that (a) GNNs can fail to
generalize to test nodes that deviate from the global homophily of a graph, and
(b) high local homophily does not necessarily confer high performance for a
node. We further show that GNNs designed for globally heterophilous graphs can
alleviate performance discrepancy by improving performance across local
homophily levels, offering a new perspective on how these GNNs achieve stronger
global performance. | Donald Loveland, Jiong Zhu, Mark Heimann, Benjamin Fish, Michael T. Schaub, Danai Koutra | 2023-06-08T21:01:24Z | http://arxiv.org/abs/2306.05557v4 | # On Performance Discrepancies Across Local Homophily Levels in Graph Neural Networks
###### Abstract
Graph Neural Network (GNN) research has highlighted a relationship between high homophily (i.e., the tendency of nodes of the same class to connect) and strong predictive performance in node classification. However, recent work has found the relationship to be more nuanced, demonstrating that simple GNNs can learn in certain heterophilous settings. To resolve these conflicting findings and align closer to real-world datasets, we go beyond the assumption of a global graph homophily level and study the performance of GNNs when the local homophily level of a node deviates from the global homophily level. Through theoretical and empirical analysis, we systematically demonstrate how shifts in local homophily can introduce performance degradation, leading to performance discrepancies across local homophily levels. We ground the practical implications of this work through granular analysis on five real-world datasets with varying global homophily levels, demonstrating that (a) GNNs can fail to generalize to test nodes that deviate from the global homophily of a graph, and (b) high local homophily does not necessarily confer high performance for a node. We further show that GNNs designed for globally heterophilous graphs can alleviate performance discrepancy by improving performance across local homophily levels, offering a new perspective on how these GNNs achieve stronger global performance.
## 1 Introduction
Deep learning with Graph Neural Networks (GNNs) has become common in many learning tasks over collaboration networks [1], social networks [2], financial networks [3], and more [4; 5; 6]. However, given the relative infancy of GNNs, retrospectives on GNN performance are limited. Understanding the conditions that will cause a GNN's performance to degrade promotes a proactive approach to GNN development, rectifying any issues that may arise once deployed to the public. One previously studied degradation mechanism of GNN performance is the presence of heterophilous connections [7; 8]. Heterophily, or the tendency for nodes of different classes to connect, has been found in a variety of graph applications where sensitive factors may influence the connective patterns, necessitating its study [9; 10; 11; 12]. However, more recent retrospectives have argued that performance does not necessarily degrade with heterophily, and in fact simple GNN architectures, such as GCN, can perform well in certain settings [13; 14; 15]. These seemingly conflicting results indicate a gap in understanding, demanding further research on the influence of heterophily on GNNs.
To better understand the factors which govern GNN performance, we begin by exploring the assumptions made in previous works. Surprisingly, many assume that constituent nodes of a graph possess local homophily levels similar to the global homophily of the entire graph, leading to a disregard for the impact of a node's local homophily level on performance [7, 13, 14]. For the few works that consider local homophily, it is often assumed that homophilous nodes should perform better, creating unclear conclusions due to biased interpretation of the localized results [16, 17]. Moreover, local homophily has yet to be the focal point of previous works, leading to limited discussion on why certain patterns emerge. Practically, assuming a constant local homophily makes it difficult to determine if new models are improving performance across all nodes, or simply increasing performance for certain node subsets. Furthermore, by myopically assuming higher homophily is indicative of higher performance, analysis on discrepancies that arise from model choice, global homophily, and local homophily is limited, slowing the development of new GNNs that could address these concerns.
**This work.** We investigate how shifts in local homophily can impact GNNs, extending beyond current assumptions and aligning closer to real-world settings. Our analysis considers a GNN trained on nodes biased towards a graph's global homophily, and then applied to test nodes of varying local homophily levels. We _theoretically analyze_ the scenario by obtaining a closed-form solution for a GNN's weight matrix and demonstrate, through perturbation analysis, that a GNN's performance can degrade when a node's local homophily level is shifted relative to the global homophily of the graph. We show that our findings generalize to a variety of settings through a broad empirical analysis facilitated by our proposed _synthetic graph generator_ that enables control over the local homophily levels of nodes. We also show the practical repercussions of our theoretical and empirical analyses on a representative set of five _real-world_ datasets with varying global homophily levels. Across our synthetic and real-world datasets, we additionally study nine different GNN architectures, demonstrating that those tailored to handle heterophily often maintain more uniform performance, minimizing discrepancies. Together, our theoretical and empirical analysis describes a new failure point of GNNs - an expected distribution of labels over a node's neighbors, stemming from an over-reliance on the global homophily of a graph - presenting a challenge for nodes with underrepresented homophily levels to be correctly predicted. While previous works have noted fairness issues in GNNs based on sensitive attributes, e.g. race or gender, determined exogenously [18, 19], our results point to a novel inequality rooted in a network's structure that could lead to unfairness in human-centric settings. Our main contributions are:
* **Theoretical Analysis:** We theoretically connect how a shift in local homophily level can impact a model's predictions, providing intuition on how GNN performance can degrade for nodes with local homophily levels far from the global homophily level of the entire graph.
* **Synthetic Experiments and Model Comparison:** We perform empirical analysis by modifying the preferential attachment model to allow for more granular control over the distribution of local homophily levels. This capability facilitates empirical verification of our theory under more general graph structures and GNN architectures. Additionally, we perform the first node-level analysis that directly compares GNNs that assume homophily and GNNs that are adjusted for heterophily, demonstrating different levels of performance discrepancy across GNN designs.
* **Real-world Experiments:** We provide the first granular analysis of GNN performance as local homophily levels are varied across a set of five real-world datasets. We find that our theoretical performance degradation trends hold more generally, confirming GNNs designed for heterophily can aid in minimizing performance discrepancy across nodes with varying local homophily patterns.
## 2 Related Work
In this section, we begin by discussing GNN architectures designed to improve learning under heterophily. We then detail previous approaches towards local property analysis, as well as discrepancy analysis, connecting both to our study of local homophily.
**Learning GNNs in Diverse Neighborhoods.** To learn node representations, GNNs adopt an aggregation function to combine the ego-node's (the node being updated) features and the neighboring nodes' features. Depending on the neighborhood structure of a node, a particular aggregation mechanism may be insufficient to adequately learn representations [8]. For example, GCN [20], GAT [21], and SGC [22] were built to learn over homophilous neighborhoods through their weighted average of the ego-node and neighboring nodes' features. To remedy this issue, methods such as GraphSAGE [23], GPR-GNN [24], FA-GCN [25], and GCNII [26] separate the ego and neighbor embeddings,
either through a residual connection or concatenation. GPR-GNN and FA-GCN additionally follow a predict-before-propagate paradigm to help alleviate the harm that can come from mixing representation learning and aggregation [27, 28], while GCNII utilizes identity mapping to mitigate oversmoothing, a known problem for homophily [8]. H2GCN [7] adopts further decoupling across higher order neighborhoods, aggregating each \(k\)-hop neighborhood separately. _In previous works, there is limited analysis of the impact of GNN architectures on the performance of nodes with varying local homophily. In this work, we provide the first granular analysis of these models, showing a different perspective on how they perform and their (in)ability to mitigate performance discrepancies._
**Local Property Analysis.** Studies on GNN performance relative to an input graph's structural properties have gained traction; however, the adjustment to considering a per-node local perspective is still under-explored. For instance, many studies have argued the conditions in which a node is able to benefit from message passing with respect to homophily, but only consider a constant local homophily level for all nodes [8, 13, 14]. Du et al. offers the first local analysis, however the results are contradictory across datasets and are only performed for a single model [16]. More recent work has developed other homophily-inspired metrics to contextualize local performance, however the proposed metric can still fail to explain performance depending on the dataset [17]. Both works assume that higher local homophily should always improve performance, ultimately guiding their development of new architectures and metrics that reinforce this assumption. However, the conflicting results across datasets seen in both works indicate that this assumption may oversimplify the behavior of a GNN and fail to consider other drivers for performance degradation. Closely related to our work, Ma et al. analyzes the disparate treatment of individual nodes defined by their shortest path distance to the training dataset, showing a degradation in performance as distance increases [29]. We build upon the idea of structural property subgroup analysis, but instead consider variations in local homophily rather than distance to the training set, creating a shift in how performance is analyzed in the context of homophily. _Thus, we analyze the performance of GNNs, breaking the assumption that the local homophily levels are constant, and demonstrate how node predictions systematically degrade as the local homophily levels deviate from the global homophily of the training graph._
## 3 Preliminaries
In this section, we provide key notations and definitions; the notation is summarized in App. A.1.
### Graphs
Let \(G=(V,E,\mathbf{X},\mathbf{Y})\) denote a simple graph with node set \(V\) and edge set \(E\), where \(\mathbf{X}\in\mathbb{R}^{|V|\times f}\) represents the node feature matrix with \(f\) features per node and \(\mathbf{Y}\in\{0,1\}^{|V|\times c}\) represents the one-hot encoded node label matrix with \(c\) classes. A specific node \(i\in G\) has feature vector \(\mathbf{x_{i}}\), class \(y_{i}\in\{1,...,c\}\), and one-hot encoded class label vector \(\mathbf{y_{i}}\). The edge set can also be represented as an adjacency matrix, \(\mathbf{A}\in\{0,1\}^{|V|\times|V|}\), where a value of \(1\) at index \((i,j)\) denotes an edge between nodes \(i\) and \(j\) in \(G\), otherwise the index is set to \(0\). We use both \(E\) and \(\mathbf{A}\) throughout the paper, opting for \(E\) when explicitly discussing the edges of \(G\) and \(\mathbf{A}\) when describing matrix computations on the edge set. A _\(k\)-hop neighborhood_ of node \(i\in V\), \(N_{k}(i)\), denotes the subgraph induced by the nodes that are reachable within \(k\)-steps of \(i\).
### Node Classification with GNNs
We focus on node classification through a GNN, where the goal is to learn a mapping between \(\mathbf{X}\) and \(\mathbf{Y}\). This mapping is estimated through a subset of \(V\), referred to as the set of training nodes \(n_{train}\). For a \(k\)-layer GNN, learning is facilitated through message passing over \(k\)-hop neighborhoods of a graph. The steps, at a high level, include (1) embedding \(\mathbf{X}\) through a non-linear transformation parameterized by a weight matrix \(\mathbf{W}\) and (2) aggregating the embedded features across neighborhoods of each node. Message passing over all nodes in the graph can be computed through matrix multiplication, where the most basic formulation updates node representations through \(\mathbf{R}_{l}=\sigma((\mathbf{A}+\mathbf{I})\mathbf{R}_{l-1}\mathbf{W}_{l})\) for a layer \(l\in\{1,2,...,k\}\) of the GNN, where \(\mathbf{R}_{0}=\mathbf{X}\) and \(\sigma\) is an activation function. The update is applied \(k\) times, resulting in final representations for each node that can be used for classification.
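As a concrete illustration, a dense NumPy version of this basic update might read as follows; the tanh activation and the lack of degree normalization are simplifying assumptions matching the formulation above.

```python
import numpy as np

def simple_gnn_layer(A, R, W, act=np.tanh):
    """One message-passing layer: R_l = act((A + I) R_{l-1} W_l)."""
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    return act(A_hat @ R @ W)       # aggregate neighbors, then transform
```

Stacking \(k\) such layers gives each node a receptive field of its \(k\)-hop neighborhood.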
### Homophily and Heterophily
In this work, we focus on edge homophily and present the following definitions to describe our homophily-based analysis. We begin with the global homophily ratio of a graph, \(h\).
**Definition 1 - Global Homophily Ratio.**_The global homophily ratio \(h\) over a graph's edge set \(E\) is the fraction of edges in \(E\) that connect two nodes, \(u\) and \(v\), with the same label, \(y_{u}\) and \(y_{v}\):_
\[h=\frac{|\{(u,v):(u,v)\in E\wedge y_{u}=y_{v}\}|}{|E|}. \tag{1}\]
The global homophily ratio is used to describe the overall homophily level in graphs; \(h=0\) indicates a fully heterophilous graph and \(h=1\) indicates a fully homophilous graph [10]. Additionally, the empirical class compatibility matrix of a graph, \([\mathbf{H}_{L}]\) describes the probability of two nodes with certain labels connecting, where the \((u,v)\)-th entry is the fraction of edges between a node in class \(u\) and a node in class \(v\): \([\mathbf{H}_{L}]_{u,v}=\frac{|\{(i,j):(i,j)\in E\wedge y_{i}=u\wedge y_{j}=v\}|} {|\{(i,j):(i,j)\in E\wedge y_{i}=u\}|}\). However, both the global homophily ratio and compatibility matrix oversimplify the mixing patterns in a graph when there are varying neighborhood compositions. To perform more granular analysis on a per-node basis, we also define the local homophily ratio of a node \(t\), \(h_{t}\).
**Definition 2 - Local Homophily Ratio.**_The local homophily ratio of a node \(t\), \(h_{t}\), is the fraction of edges in the neighborhood of \(t\) that connect \(t\) to a neighbor \(u\) with the same class:_
\[h_{t}=\frac{|\{(u,t):(u,t)\in N_{1}(t)\wedge y_{u}=y_{t}\}|}{|N_{1}(t)|}. \tag{2}\]
Given GNNs are often shallow and only depend on a small \(k\)-hop neighborhood for a single node prediction, it is natural to analyze GNNs through the local, rather than global, homophily ratio. Moreover, many real-world graphs display a wide range of local homophily ratios across the constituent nodes, as seen in App. A.4.1, necessitating local analysis.
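Both ratios are straightforward to compute; the sketch below assumes `edge_index` stores each undirected edge in both directions (the usual COO convention) and uses NumPy for illustration.

```python
import numpy as np

def global_homophily(edge_index, y):
    """Eq. (1): fraction of edges joining same-class endpoints."""
    src, dst = edge_index
    return float(np.mean(y[src] == y[dst]))

def local_homophily(edge_index, y, num_nodes):
    """Eq. (2): per-node fraction of same-class incident edges."""
    src, dst = edge_index
    same = (y[src] == y[dst]).astype(float)
    deg = np.bincount(src, minlength=num_nodes)
    num = np.bincount(src, weights=same, minlength=num_nodes)
    return np.divide(num, deg, out=np.zeros(num_nodes), where=deg > 0)
```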
## 4 Relationship between a Node's Local Homophily Level and Performance
In this section, we aim to characterize the impact of local homophily on the accuracy of node-level predictions by considering shifts in local homophily levels, at test time, relative to the global homophily level the GNN was trained on. We begin by revealing the drivers for performance discrepancies through theoretical analysis and discuss their implications on node-level performance. Leveraging these insights, we relax our assumptions in Section 5.2 and show that our theory holds in more general settings via extensive empirical analysis on synthetic data. Additionally, we consider even more general real-world graphs (without any constraints) in Section 6.
**Setup.** Following previous theoretical GNN work [7, 14, 16, 30] and popular models such as SGC and LightGCN [22, 31], we assume a GNN, \(F\), formulated as \((\mathbf{A}+\mathbf{I})\mathbf{X}\mathbf{W}\), where \(\mathbf{A}+\mathbf{I}\) is \(G\)'s adjacency matrix with self-loops and \(\mathbf{W}\) is \(F\)'s weight matrix. Similar to the setup in [32], we consider a graph \(G\) with a subset of training nodes \(n_{train}\), each of which has an associated node feature vector \(\mathbf{x_{i}}\), one-hot encoded class label vector \(\mathbf{y_{i}}\), \(1\)-hop homophily ratio \(h\), and degree \(d\). For brevity, we focus on binary classification (though we consider multi-class settings in our experiments) and represent \(\mathbf{y_{i}}=onehot(y_{i})=[1\quad 0]\) when node \(i\)'s class is \(y_{i}=0\) and \([0\quad 1]\) when \(y_{i}=1\). We consider node feature vectors that are sampled from a uniform distribution and biased towards a particular class: when \(y_{i}=0,\mathbf{x_{i}}=[(0.5+p)\quad(0.5-p)]\), and when \(y_{i}=1,\mathbf{x_{i}}=[(0.5-p)\quad(0.5+p)]\), where the parameter \(p\in[0,0.5]\) controls the agreement between the node features and the class label. The final prediction for node \(i\) is \(\mathbf{argmax}\,\mathbf{z_{i}}\), where \(\mathbf{z_{i}}\) is the output logit vector of \(F\). We begin by solving for \(F\)'s optimal weight matrix \(\mathbf{W}\), and then apply \(F\) to a test node \(t\). We consider a test node \(t\) with local homophily ratio \(h_{t}=h+\alpha_{t}\), where \(\alpha_{t}\in[-h,1-h]\) is the shift in local homophily level compared to the global homophily level. Under this setup, we analyze how \(t\)'s prediction is impacted as its local homophily ratio \(h_{t}\) shifts relative to \(h\), the global homophily ratio used to train \(F\). In Theorem 1, without loss of generality, we consider the impacts of \(h_{t}\) when \(t\)'s class label is \(y_{t}=0\).
**Theorem 1**: _Consider a test node \(t\) with local homophily ratio \(h+\alpha_{t}\), label \(\mathbf{y_{t}}=[1\quad 0]\), and node features \(\mathbf{x_{t}}=[(0.5+p)\quad(0.5-p)]\). The class prediction from \(F\) for node \(t\) is a function of the global homophily level and the shift of the local homophily level, given by \(\mathbf{argmax}\mathbf{z_{t}}\), where \(\mathbf{z_{t}}=\mathbf{y_{t}}+b_{1}\left[\alpha_{t}\quad-\alpha_{t}\right]\) and \(b_{1}=d/(1+d(2h-1))\)._
**Proof.** The proof can be found in App. A.2.1. We provide additional analysis on a 2-layer variant of \(F\), formulated as \((\mathbf{A}+\mathbf{I})^{\mathbf{2}}\mathbf{X}\mathbf{W}\) in App. A.2.3.
**Main Implications of Theorem 1.** This theorem provides a _direct relationship between a test node's performance and the node's local homophily ratio_ as it deviates from the global homophily ratio. Specifically, we can expect the _performance to degrade_ when a test node either becomes _more homophilous relative to a heterophilous graph_ or _more heterophilous relative to a homophilous graph_. To further understand and demonstrate the implications of \(\alpha_{t}\), we analyze three settings that naturally arise for the global homophily: (1) \(0\leq h<0.5\), (2) \(h=0.5\), and (3) \(0.5<h\leq 1\). We note that while the node degrees can influence the conditions that cause \(\alpha_{t}\) to impact \(\mathbf{z_{t}}\) (through \(b_{1}\)), we show in App. A.2.1 that this mostly occurs for extremely low-degree nodes. Furthermore, previous work corroborates our findings regarding the difficulty with low-degree nodes under heterophily [7, 8, 33]; however, our analysis extends this observation to demonstrate a significantly more complex interplay between node degree, global homophily, and shift in local homophily.
**Setting 1: Heterophilous \((\mathbf{0}\leq\boldsymbol{h}<\mathbf{0.5})\):** In this scenario, when \(d(2h-1)<-1\), \(b_{1}<0\), leading to \(\mathbf{z_{t}}=\mathbf{y_{t}}+|b_{1}|\left[-\alpha_{t}\quad\alpha_{t}\right]\), where \(b_{1}\)'s sign has been distributed into the vector. Thus,
\[\mathbf{z_{t}}=\begin{cases}\mathbf{y_{t}}+|b_{1}|\left[\left|\alpha_{t} \right|\quad-|\alpha_{t}|\right],&\text{if }h_{t}\leq h\\ \mathbf{y_{t}}+|b_{1}|\left[-\alpha_{t}\quad\alpha_{t}\right],&\text{if }h_{t}>h, \end{cases} \tag{3}\]
where the sign of \(\alpha_{t}\) has been integrated into the vectors. We can then deduce that (globally) heterophilous graphs, when \(b_{1}<0\) is satisfied, **will cause \(F\) to degrade in performance as the test node's local homophily increases**, denoted by the score increase of the wrong class in the second case of Equation (3). Additionally, when local homophily decreases, the predictions will improve given the increase in score for the correct class of the first case in Equation (3), however as \(h<0.5\), \(\alpha_{t}\) has a smaller possible range of values, minimizing the impact on \(F\)'s predictions.
**Setting 2: Mixed Homophily (\(\boldsymbol{h}=\mathbf{0.5}\))**: When the graph is not strongly homophilous nor strongly heterophilous (i.e., \(h=0.5\)), \(b_{1}=d\), leading to:
\[\mathbf{z_{t}}=\begin{cases}\mathbf{y_{t}}+d\left[-|\alpha_{t}|\quad|\alpha_{ t}|\right],&\text{if }h_{t}\leq 0.5\\ \mathbf{y_{t}}+d\left[\alpha_{t}\quad-\alpha_{t}\right],&\text{if }h_{t}>0.5. \end{cases} \tag{4}\]
In this case, we find that \(F\) will have improved performance when the local homophily of a test node is increased. Conversely, decreased local homophily for a test node will decrease performance. This is the only case that agrees with previous work regarding high homophily as a direct indicator of performance. Notably, the prediction directly depends on \(d\), potentially leading to performance variations that are dominated by degree, rather than local homophily.
**Setting 3: Homophilous (\(\mathbf{0.5}<\boldsymbol{h}\leq\mathbf{1}\))**: In this scenario, \(b_{1}>0\), leading to:
\[\mathbf{z_{t}}=\begin{cases}\mathbf{y_{t}}+|b_{1}|\left[-|\alpha_{t}|\quad| \alpha_{t}|\right],&\text{if }h_{t}\leq h\\ \mathbf{y_{t}}+|b_{1}|\left[\alpha_{t}\quad-\alpha_{t}\right],&\text{if }h_{t}>h. \end{cases} \tag{5}\]
We can then deduce that (globally) homophilous graphs **will cause \(F\) to degrade in performance as a test node's local homophily decreases**, denoted by the score increase of the first case of Equation (5). When local homophily increases, the predictions will improve, however as \(h>0.5\), \(\alpha_{t}\) has a smaller range of values, minimizing impact on the predictions.
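As a quick sanity check, the closed form of Theorem 1 can be evaluated numerically; the degree and shift values below are purely illustrative.

```python
import numpy as np

def predicted_logits(h, alpha_t, d):
    """Theorem 1 logits for a class-0 test node:
    z_t = y_t + b1 * [alpha_t, -alpha_t], b1 = d / (1 + d(2h - 1))."""
    b1 = d / (1.0 + d * (2.0 * h - 1.0))
    return np.array([1.0, 0.0]) + b1 * np.array([alpha_t, -alpha_t])

# Setting 1 (heterophilous, h = 0.1, d = 10): raising local homophily
# (alpha_t = 0.4) flips the prediction to the wrong class.
print(predicted_logits(0.1, 0.4, 10))    # [0.43, 0.57] -> argmax = 1
# Setting 3 (homophilous, h = 0.9, d = 10): lowering local homophily
# (alpha_t = -0.6) likewise flips the prediction.
print(predicted_logits(0.9, -0.6, 10))   # [0.33, 0.67] -> argmax = 1
```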
## 5 Generalization of Theoretical Results via Synthetic Data Analysis
To display how the theoretical relationship between local homophily and classification performance generalizes, we introduce a graph generator that enables control over the local homophily ratios across a graph and conduct an extensive empirical analysis to study the following research questions: **(Q1)** What performance disparities arise across the range of local homophily values as the global homophily is varied? and **(Q2)** Do GNNs built specifically for heterophily display different performance patterns across varying local homophily ratio ranges as compared to simpler GNN architectures?
### Synthetic Data Generation
Building on the preferential attachment model where a compatibility matrix governs edge likelihood [7, 12], we modify the generator to allow a node's homophily level to be either randomly
assigned or defined by the compatibility matrix. Below we explain the generation process and detail the graph generation model in Algorithm 1. At a high level, the steps to add a node to a synthetic graph are: (1) Sample a class label, (2) Generate node features, and (3) Add connections based on assigned homophily level. The code is available in an anonymized repository1. We provide related work for graph generation in App. A.3.1, and additional property analysis in App. A.3.3.
Footnote 1: [https://anonymous.4open.science/r/HeterophilyDiscrepancyGNN-85FB](https://anonymous.4open.science/r/HeterophilyDiscrepancyGNN-85FB)
**Class and Feature Generation.** For a node \(i\), label \(y_{i}\) is sampled from probability distribution \(P(\{0,...,c\})\) with \(c\) possible classes. Features \(\mathbf{x_{i}}\) are generated from a 2D Gaussian, where each dimension has a mean of \(\epsilon*y_{i}\) and standard deviation of 1, where \(\epsilon\in[0,1]\) introduces noise into the features. When \(\epsilon=1\), the label \(y_{i}\) is explicitly encoded, when \(\epsilon=0\), \(y_{i}\) is unrecoverable.
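A minimal NumPy sketch of this sampling step is shown below; the function name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_node(class_probs, eps):
    """Sample a label from P({0,...,c}) and 2D Gaussian features whose
    per-dimension mean is eps * y, with unit standard deviation."""
    y = rng.choice(len(class_probs), p=class_probs)
    x = rng.normal(loc=eps * y, scale=1.0, size=2)
    return y, x
```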
**Structure Generation.** Similar to previous heterophily analyses, we define a class compatibility matrix \(\mathbf{H}_{L}\) with diagonal elements \(h\), denoting the probability of connecting nodes with similar classes (homophilous), and off-diagonals elements \((1-h)/c\), denoting the probability of connecting nodes with different classes (heterophilous) [7; 12]. During generation, a new node \(u\) is attached to an existing node \(v\) as \(P\left((u,v)\in E\right)\propto\left[\mathbf{H}_{L}\right]_{y_{u},y_{v}}\). To control the local homophily ratios, we introduce a uniformity parameter \(\rho\) such that with probability \(\rho\), a node \(i\)'s local homophily ratio \(h_{i}\) is sampled at random from a uniform distribution, \(U(0,1)\). As \(\rho\) increases, more local homophily ratios follow the random distribution, rather than the compatibility matrix. Since the preferential attachment model adds nodes sequentially, it is possible that the local homophily of nodes early in the generation process drift from their original values. To correct this, we keep track of the drift per node \(i\) through \(drift_{i}\), and prioritize connections to high-drift nodes (i.e., a node \(drift_{i}>\delta\), where \(\delta\) is a hyperparameter that defines the drift threshold) that would return the node's local homophily ratio back to its original value.
```
Algorithm 1: Synthetic graph generation with controllable local homophily

Input: total nodes n, m edges to add per step, label probability distribution P({0,...,c}),
       uniformity probability ρ, class compatibility matrix H_L, drift change cutoff δ

1   Initialize G with m nodes and m edges according to H_L        // Details in App. A.3.2
2   Initialize vector drift to hold the change in homophily per node: drift_0, ..., drift_m = 0
3   for u = m to n do
4       Add node u to G
5       Sample label y_u ~ P({0,...,c})
6       N_1(u) = GetNeighbors(G, m, u, drift, ρ, H_L, δ)
7       for v in N_1(u) do
8           Add edge (u, v) to G
9           if y_u = y_v then
10              drift[v] += 1       // Drifted more homophilous
11          else
12              drift[v] -= 1       // Drifted more heterophilous
13      drift[u] = 0
```
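For concreteness, a condensed Python sketch of Algorithm 1's main loop is given below. Here `get_neighbors` is a stand-in for the GetNeighbors routine (which, per the text, samples a homophily target uniformly with probability \(\rho\) or follows \(\mathbf{H}_{L}\) otherwise, and prioritizes high-drift nodes); its signature is illustrative, and the seed-graph initialization of App. A.3.2 is omitted.

```python
import numpy as np

def generate_graph(n, m, class_probs, rho, H_L, delta, get_neighbors,
                   rng=np.random.default_rng(0)):
    """Main loop of Algorithm 1: add nodes one at a time, attach edges
    chosen by get_neighbors, and track each node's homophily drift."""
    labels = list(rng.choice(len(class_probs), size=m, p=class_probs))
    edges = []                        # seed edges omitted (App. A.3.2)
    drift = [0] * m                   # change in homophily per node
    for u in range(m, n):
        labels.append(rng.choice(len(class_probs), p=class_probs))
        drift.append(0)
        for v in get_neighbors(edges, labels, drift, u, m, rho, H_L, delta, rng):
            edges.append((u, v))
            # same-label edge pushes v more homophilous, else more heterophilous
            drift[v] += 1 if labels[u] == labels[v] else -1
        drift[u] = 0
    return edges, labels
```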
### Experimental Setup

We train nine GNN architectures while jointly varying global and local homophily levels. An additional study for \(\rho=0.75\) is provided in Figure 8 of App. A.3.4, demonstrating how discrepancy can be alleviated when the entire range of local homophily ratios has a sufficient number of nodes. For each combination of \(h\) and \(\rho\), we generate 10 graphs with 5000 nodes and 100k edges (i.e., n = 5000, m = 20), and split the nodes into a 50-25-25% split for train, validation, and test. To match our theoretical analysis, we focus on binary classification. For evaluation, we compare the performance across models and global homophily ratios as the local homophily ratio is varied. To compute localized performance, we split the test nodes into four groups based on their local homophily ratios and calculate an F1 score per group. Per bin, we report the average and standard deviation of F1 over the 10 generated graphs. Results are presented in Figure 1 of the main text, and Figures 6 and 7 in App. A.3.4.
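A sketch of this per-range evaluation is below; the bin edges and macro averaging are illustrative choices, since the exact binning is described only at a high level.

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_by_local_homophily(y_true, y_pred, h_local,
                          bins=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Group test nodes by local homophily range and report F1 per group."""
    scores = {}
    for i in range(len(bins) - 1):
        lo, hi = bins[i], bins[i + 1]
        upper = h_local <= hi if i == len(bins) - 2 else h_local < hi
        idx = (h_local >= lo) & upper          # nodes falling in this range
        if idx.sum() > 0:                      # skip empty ranges
            scores[(lo, hi)] = f1_score(y_true[idx], y_pred[idx],
                                        average="macro")
    return scores
```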
**(Q1) Performance disparities across local homophily levels.** As shown in Figure 1, when \(h<0.5\) (lower global homophily), performance often degrades as local homophily increases, while when \(h>0.5\) (higher global homophily), performance often degrades as local homophily decreases. The results align well with our theoretical analysis, showing strong generalization of our findings to more complex graph structures and GNN architectures. Together, **our theoretical and empirical results indicate that assuming high homophily always indicates high performance may oversimplify the GNN's behavior, leading retrospective analyses astray when diagnosing performance degradation.**
**(Q2) Performance disparities for homophilous vs. heterophilous GNNs.** To understand how different GNN designs amplify or reduce discrepancy, we analyze the trend of local performances across models. While all models achieve similar global performance, we observe that the models perform differently depending on the local homophily range. As shown in Figure 1 (and Figure 5 in App. A.3.4), homophilous models often have higher performance for test nodes with local homophily levels that are close to the global homophily, whereas heterophily-based models perform better than homophilous models for nodes with local homophily levels far from the global homophily. These insights highlight the different behaviors of the two architectural designs, indicating that **models designed for heterophily are able to alleviate performance discrepancies across nodes, while homophilous models exacerbate discrepancy in strongly homophilous/heterophilous graphs**. These results suggest that heterophilous models offer a better performance trade-off between nodes with over- and under-represented local homophily levels, displaying minor degradation on the over-represented nodes and significant improvement on the under-represented nodes.
## 6 Real-world Empirical Evaluation
We now demonstrate how our results extend to real-world datasets.
**Data and Setup.** We choose five real-world graphs: two homophilous (Cora [34], Coauthor-CS [35]) and three heterophilous (Cornell [36], Wisconsin [36], Squirrel [37]). We choose the heterophilous datasets due to their historically inconclusive performance when comparing GNNs to non-graph-based deep learning baselines [7; 13]. Our analysis aims to provide insight into why prior results have been inconclusive and explain their poor performance from a local perspective. For each dataset, we perform 30 random 50-25-25% splits of the nodes to obtain the train, validation, and test sets, except for Cora and Coauthor-CS where we perform 5 splits due to graph size and low variance. We train the same architectures as in the synthetic experiments, seen in Section 5.2.
We report results in Figure 2 of the main text, and Figure 10 in App. A.4.2. While we group all test nodes into four local homophily ranges, we limit the ranges for Cornell and Wisconsin in Figure 2 due to having fewer than three test nodes in local homophily ranges above 0.5 and 0.6, respectively. Additionally, features may be more informative in certain local homophily ranges, obscuring when performance degrades due to homophily or uninformative features. Thus, we report the difference in F1 score between the different GNNs and the graph-agnostic MLP, \(\Delta\)F1 = (F1-score GNN - F1-score MLP), to disentangle how performance relies on the node features as compared to the graph structure.
### Performance Discrepancies in Homophilous vs. Heterophilous Real-World Datasets
**Homophilous Graphs.** The homophilous graphs are shown in the two right-most plots of Figure 2. First, we highlight the nearly **0.6 drop in \(\Delta\)F1 across both datasets for test nodes with local homophily ratios far from the global homophily ratio**, performing worse than an MLP as denoted by the negative F1 score difference. Furthermore, this degradation is consistent across all of the GNN architectures, with H2GCN, which leverages heterophilous designs, maintaining generally higher performance as the nodes become more heterophilous. This result demonstrates the practical implications of our theoretical analysis, allowing us to additionally demonstrate that degradation can occur under rich feature sets. Notably, the MLP outperforms all of the GNN architectures in the heterophilous parts of the graph, implying the neighborhood information is actively corrupting performance in these regions due to the GNN's reliance on the global homophily.
**Heterophilous Graphs.** The heterophilous datasets are shown on the three left-most plots of Figure 2. Compared to the homophilous datasets, the GNNs achieve poor performance relative to the MLP for nearly all local homophily ranges. However, our local analysis identifies a mechanism for _how_ this arises: exacerbated performance degradation for nodes with local homophily ratios far from the global homophily ratio. Despite the MLP model outperforming the GNN models across Cornell, Wisconsin, and Squirrel, there is still a notable degradation in performance as the local homophily
ranges increase, **causing a 0.2-0.4 drop in \(\Delta\)F1 between the region which contains the global homophily ratio and the furthest bin**. We hypothesize that degree can influence whether GNNs degrade in performance on heterophilous graphs, as seen in our theoretical analysis in App. A.2.1, causing this drop to be lower as compared to the homophilous graphs. Again, H2GCN displays high uniformity across these regimes for both datasets, following the trends seen in the synthetic data. Interestingly, for Squirrel, we also observe a significant difference in MLP performance across the various local homophily ranges, indicating that the features are intrinsically more informative in certain regions. When accounting for this through \(\Delta\)F1, we see similar degradation to that in the synthetic and homophilous datasets, again with the heterophilous GNNs tending to perform best.
### Discussion
Two key insights emerge that have yet to be established in other analyses: (a) nodes with higher local homophily are not inherently easier to classify, as seen by the relative drops in performance on heterophilous datasets, and (b) heterophilous designs improve learning across nearly all local homophily ranges, not just one particular range, alleviating performance discrepancies across nodes. We note that there might be additional factors which give rise to performance variations across local homophily ratios. For instance, previous works have identified that, under certain settings, nodes with extreme homophily levels can be easier to classify [13, 16, 17]. Moreover, the ease of classification has been tied to the interaction of degree and homophily, pointing towards high degree nodes as the easiest setting [8]. Performance variations due to raw structural properties (e.g. degree or local homophily ratio) and local homophily shift are not necessarily mutually exclusive, nor do they conflict with our results. Instead, we hypothesize both factors can interact, amplifying performance disparities across homophily ranges, further necessitating future work which studies their interplay.
## 7 Conclusion
In this work, we take a local perspective focused on discerning how nodes of varying local homophily levels can experience performance discrepancies. We first theoretically demonstrated that classification performance degrades as the local homophily ratio of a node deviates from the global homophily ratio. To demonstrate the generalizability of our findings, we proposed a new parsimonious synthetic graph generator that allows generating graphs with varying global and local homophily. We demonstrated that our theoretical insights still hold in more general settings, finding that performance degradation can occur in either highly homophilous or heterophilous settings. Furthermore, we showed that this disparity in performance can be reduced by using GNN models which adopt explicit mechanisms to support heterophily, providing insight into how GNNs with heterophilous designs improve performance globally. Our experiments on real-world datasets of varying global homophily
Figure 2: Real-world graphs: Difference in performance, \(\Delta\)F1 = F1(GNN) - F1(MLP), for GNN models across ranges of local homophily ratios (more GNNs in App. A.4.2), averaged over multiple splits. Our results elucidate _how_ models achieve different global performance, where heterophilous models (bars in red) tend to better combat the systematic performance discrepancy seen in homophilous models (bars in blue). Gray indicates the range containing the global homophily ratio; negative bars indicate worse performance than MLP.
ratios confirm the practical implications of our insights, exhibiting similar disparity patterns. The discovery and characterization of GNN degradation through shifts in local homophily relative to a graph's global homophily necessitates the development of new GNNs that are able to explicitly handle such data shifts. Additionally, our findings highlight how, in human-facing applications of GNNs, individuals might experience disparate treatment under a GNN due to structural properties of the underlying graph, opening new research directions in algorithmic fairness.
|
2310.06960 | Jaynes Machine: The universal microstructure of deep neural networks | We present a novel theory of the microstructure of deep neural networks.
Using a theoretical framework called statistical teleodynamics, which is a
conceptual synthesis of statistical thermodynamics and potential game theory,
we predict that all highly connected layers of deep neural networks have a
universal microstructure of connection strengths that is distributed
lognormally ($LN({\mu}, {\sigma})$). Furthermore, under ideal conditions, the
theory predicts that ${\mu}$ and ${\sigma}$ are the same for all layers in all
networks. This is shown to be the result of an arbitrage equilibrium where all
connections compete and contribute the same effective utility towards the
minimization of the overall loss function. These surprising predictions are
shown to be supported by empirical data from six large-scale deep neural
networks in real life. We also discuss how these results can be exploited to
reduce the amount of data, time, and computational resources needed to train
large deep neural networks. | Venkat Venkatasubramanian, N. Sanjeevrajan, Manasi Khandekar | 2023-10-10T19:22:01Z | http://arxiv.org/abs/2310.06960v1 | # Jaynes Machine: The universal microstructure of deep neural networks
###### Abstract
We present a novel theory of the microstructure of deep neural networks. Using a theoretical framework called _statistical teleodynamics_, which is a conceptual synthesis of statistical thermodynamics and potential game theory, we predict that all highly connected layers of deep neural networks have a universal microstructure of connection strengths that is distributed lognormally (\(LN(\mu,\sigma)\)). Furthermore, under ideal conditions, the theory predicts that \(\mu\) and \(\sigma\) are the same for all layers in all networks. This is shown to be the result of an arbitrage equilibrium where all connections compete and contribute the same effective utility towards the minimization of the overall loss function. These surprising predictions are shown to be supported by empirical data from six large-scale deep neural networks in real life. We also discuss how these results can be exploited to reduce the amount of data, time, and computational resources needed to train large deep neural networks.
**Keywords:** Weight distribution, Statistical teleodynamics, Utility, Arbitrage equilibrium, Maximum entropy, Lognormal distribution
## 1 Design of Optimal Teleological Networks
We consider deep neural networks [1] to be an example of teleological networked systems that are specifically designed to achieve some goal(s) in uncertain operating environments. Whether human-engineered or naturally evolved, such networked systems exist to deliver desired performance under challenging conditions. For example, supply chains exist to deliver goods and services, and the Internet to facilitate communication robustly. Similarly, in nature, metabolic networks and ecological food webs exist to support life in myriad ways. This goal-driven feature is an integral aspect of their structure, function, and optimal design.
Modeling and analyzing the structure of networked goal-driven systems is an important step toward understanding their design and performance. This is crucial because connection strengths determine the interactions between the agents in the network, and the interactions cumulatively determine the overall performance of the system. In general, network performance depends on two critical design metrics of the system: efficiency and robustness [2, 3]. We use the term efficiency broadly as a measure of the effectiveness of the network configuration to perform its functions and meet a target level of performance. In airline networks, for example, this would be efficient and safe transportation of passengers. In deep neural networks, this is the efficient minimization of the loss function. By robustness, we mean the extent to which the network is able to meet the performance target despite variations in its operating environment. Efficiency and robustness are often conflicting objectives to satisfy. Engineers are instinctively aware of the trade-offs they need to make to obtain a satisfactory overall performance that would have to include both efficiency and robustness requirements. Furthermore, these have to be accomplished under cost constraints, which occur in the form of constraints in energy, materials, computational power, money, time, etc.
In this broad perspective of goal-driven networked agents, we consider the design of a deep neural network in its most generic and essential form and pose the problem as follows. Given a set of performance specifications and cost constraints, what is the optimal microstructure of a network that can deliver the target performance in a wide variety of uncertain operating environments?
We address this general formulation by building on our prior work on network design [2, 3, 4, 5, 6]. We consider a large deep neural network with \(L\) layers of neurons. Let layer \(l\) have \(N^{l}\) neurons that are connected to the neurons in layer \(l-1\) using \(M^{l}\) connections. To benefit from statistical properties, we assume that \(M^{l}\) is of the order of millions. These connections have weights, positive or negative, that determine their strength. We scale all the weight magnitudes to the range 0 to 1, which is divided into \(m\) bins, so any given connection belongs to one of the \(m\) bins. The strength of the connection between a neuron \(i\) in layer \(l\) and a neuron \(j\) in layer \(l-1\), belonging to bin \(k\), is denoted by \(w^{l}_{ijk}\). The number of connections in bin \(k\) for layer \(l\) is given by \(M^{l}_{k}\), with the constraint \(M^{l}=\sum_{k=1}^{m}M^{l}_{k}\).
The total budget for weights is constrained by \(W^{l}=\sum_{k=1}^{m}M_{k}^{l}\mid w^{l}_{ijk}\mid\).
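For concreteness, this binning step can be sketched in a few lines of Python. This is only an illustration of the setup, not code from the analysis pipeline; the names `weights` and `m_bins` are ours.

```python
import numpy as np

def bin_layer_weights(weights: np.ndarray, m_bins: int = 100):
    """Normalize weight magnitudes of one layer to (0, 1] and bin them.

    Returns the per-bin counts M_k (with sum(M_k) = M, the number of
    connections in the layer) and the bin edges.
    """
    mags = np.abs(weights.ravel())          # drop signs, keep |w|
    mags = mags / mags.max()                # scale magnitudes to (0, 1]
    counts, edges = np.histogram(mags, bins=m_bins, range=(0.0, 1.0))
    return counts, edges

# Toy usage on a random layer; in practice `weights` would come from a
# trained network checkpoint.
rng = np.random.default_rng(0)
M_k, edges = bin_layer_weights(rng.normal(size=(512, 1024)), m_bins=50)
print(M_k.sum())  # total number of connections M in the layer
```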
Our deep neural network is teleological. That is, it was human-engineered or naturally evolved to meet certain goals and deliver certain performance targets efficiently and robustly. In this context, efficiency is a measure of how effectively the network minimizes the loss function with minimal use of resources. For example, building and maintaining connections incur costs such as computing power, time, energy, etc. One would like the network to meet its performance target of making accurate predictions with minimal use of such resources. Similarly, by robustness, we mean the ability to deliver the performance target despite variations in its operating environment, such as making accurate predictions in _test_ datasets that are different from its _training_ datasets.
## 2 Statistical Teleodynamics, Population Games, and Arbitrage Equilibrium
In a typical deep neural network training regimen using gradient descent, the backpropagation algorithm gently nudges all the connections to modify their weights in such a way that the overall loss function is minimized over many iterations and over many datasets. One can equivalently model the same process as the self-organizing competitive dynamics among the connections to modify their weights in such a way that the overall loss function is minimized iteratively over many datasets. We formulate this approach by building on our previous work using a theoretical framework called _statistical teleodynamics_[2, 3, 4, 5, 7]. It is a synthesis of the central concepts and techniques of population games theory with those of statistical mechanics towards a unified theory of emergent equilibrium phenomena and pattern formation in active matter.
In population games, one is interested in the prediction of the final outcome(s) of a large population of goal-driven agents competing dynamically to increase their respective utilities. In particular, one would like to know whether such a game would lead to an equilibrium outcome [8, 9]. For some population games, one can identify a single scalar-valued global function, called a _potential_\(\phi(\mathbf{x})\) (where \(\mathbf{x}\) is the state vector of the system) that captures the necessary information about the utilities of the agents. The gradient of the potential is the utility. Such games are called _potential games_[8, 9, 10, 11]. A potential game reaches strategic equilibrium, called _Nash equilibrium_, when the potential \(\phi(\mathbf{x})\) is maximized. Furthermore, this equilibrium is unique if \(\phi(\mathbf{x})\) is strictly concave (i.e., \(\partial^{2}\phi/\partial x^{2}<0\)) [9].
Therefore, an agent's utility, \(h_{k}\), in state \(k\) is the gradient of potential \(\phi(\mathbf{x})\), i.e.,
\[h_{k}(\mathbf{x})\equiv\partial\phi(\mathbf{x})/\partial x_{k} \tag{1}\]
where \(x_{k}=N_{k}/N\), \(\mathbf{x}\) is the population vector, \(N_{k}\) is the number of agents in state \(k\), and \(N\) is the total number of agents. By integration (we replace partial derivative with total derivative because \(h_{k}(\mathbf{x})\) can be reduced to \(h_{k}(x_{k})\)), we have
\[\phi(\mathbf{x}) = \sum_{k=1}^{m}\int h_{k}(\mathbf{x})dx_{k} \tag{2}\]
where \(m\) is the total number of states.
To determine the maximum potential, one can use the method of Lagrange multipliers with \(\mathscr{L}\) as the Lagrangian and \(\lambda\) as the Lagrange multiplier for the constraint \(\sum_{k=1}^{m}x_{k}=1\):
\[\mathscr{L}=\phi+\lambda(1-\sum_{k=1}^{m}x_{k}) \tag{3}\]
If there are other constraints, they can be accommodated similarly [7].
In equilibrium, all agents enjoy the same utility, that is, \(h_{k}=h^{*}\). It is an _arbitrage equilibrium_[12] where the agents no longer have any incentive to switch states, as all states provide the same utility \(h^{*}\). Thus, the maximization of \(\phi\) and \(h_{k}=h^{*}\) are equivalent when the equilibrium is unique (i.e., \(\phi(\mathbf{x})\) is strictly concave [9]). The former stipulates it from the _top-down, system perspective_ whereas the latter is the _bottom-up, agent_ perspective. Thus, this formulation exhibits a _duality_ property.
We use this formalism to model the self-organizing competitive dynamics among the connections in a deep neural network. We define the _effective utility_, \(h^{l}_{ijk}\), introduced in Eq. 1, for a connection with a weight of \(w^{l}_{ijk}\) in layer \(l\). The effective utility is a measure of the contribution that this connection makes toward the network-wide reduction in the loss function (i.e., the efficiency component) in a robust manner. The efficiency and robustness metrics, the reader may recall, capture the teleological objective of the network design, which is to deliver certain performance targets in uncertain environments. In the context of deep neural networks, the performance target is to minimize the overall loss function in a robust manner, i.e., for a wide variety of test datasets. In this perspective, the goal of every neuron is to stay connected with other neurons so that it can process, send, and receive information efficiently under different conditions to minimize the loss function. The more connections of varying weights it has, the more robust its membership in the network against the loss of connections and/or neurons. To accomplish this, the connections compete with each other to provide a more effective utility, i.e., more net benefit towards the goal of minimizing the loss function. Thus, the effective utility of a connection is a benefit-cost trade-off function. It is the net benefit contributed by a connection after accounting for the costs of maintenance and competition, as we discuss below.
Thus, the effective utility \(h^{l}_{ijk}\) is made up of three components,
\[h^{l}_{ijk}=u^{l}_{ijk}-v^{l}_{ijk}-z^{l}_{ijk} \tag{4}\]
where \(u^{l}_{ijk}\) is the utility derived from the strength of the connection, \(v^{l}_{ijk}\) is the cost or disutility of maintaining such a connection, and \(z^{l}_{ijk}\) is the disutility of competition among connections. Disutilities are costs to be subtracted from the benefit \(u^{l}_{ijk}\).
Now, in general, as the strength of the connection \(w^{l}_{ijk}\) grows, the marginal utility of its contribution diminishes. This diminishing marginal utility is a commonly found occurrence for many resources and is normally modeled as a logarithmic function. Therefore, the utility \(u^{l}_{ijk}\) derived from this can be written as
\[u^{l}_{ijk}=\alpha^{l}\ln\mid w^{l}_{ijk}\mid \tag{5}\]
where \(\mid w^{l}_{ijk}\mid\) signifies that \(u^{l}_{ijk}\) depends only on the absolute magnitude and not on the sign of the weight, and \(\alpha^{l}\) is a parameter.
But, as noted, this benefit comes with a cost, as building and maintaining connections are not free. As Venkatasubramanian [5] has shown, most benefit-cost trade-offs in real life are in the form of an inverted-U curve. The simplest model of this behavior is a quadratic function [5], and so we have \(v^{l}_{ijk}=\beta^{l}(\ln\mid w^{l}_{ijk}\mid)^{2}\), such that
\[u^{l}_{ijk}-v^{l}_{ijk}=\alpha^{l}\ln\mid w^{l}_{ijk}\mid-\beta^{l}(\ln\mid w ^{l}_{ijk}\mid)^{2} \tag{6}\]
where \(\beta^{l}\) is another parameter.
As more and more connections accumulate in the same bin (that is, having the same weight), each new connection is less valuable to the neuron in generating utility. Thus, a neuron would prefer the connections to be distributed over all the bins. This is enforced by the cost term \(z^{l}_{ijk}\). Appealing to diminishing marginal utility again [5], we model this as \(\gamma^{l}\ln M^{l}_{k}\), where \(\gamma^{l}\) is another parameter.
Therefore, the effective utility \(h^{l}_{ijk}\) is given by
\[h^{l}_{ijk}=\alpha^{l}\ln\mid w^{l}_{ijk}\mid-\beta^{l}(\ln\mid w^{l}_{ijk} \mid)^{2}-\gamma^{l}\ln M^{l}_{k}\]
We can let \(\gamma^{l}=1\) without any loss of generality and rewrite the equation as
\[h^{l}_{ijk}=\alpha^{l}\ln\mid w^{l}_{ijk}\mid-\beta^{l}(\ln\mid w^{l}_{ijk} \mid)^{2}-\ln M^{l}_{k} \tag{7}\]
We wish to point out that the structure of this model is similar to the one we proposed in modeling the Income Game using the statistical teleodynamic framework [5, 7]. In fact, that is the inspiration for the model proposed here.
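In code, the per-bin effective utility of Eq. (7) is a one-liner; the sketch below is our illustration, with \(\gamma^{l}=1\) as assumed above.

```python
import numpy as np

def effective_utility(w_abs, M_k, alpha, beta):
    """Per-bin effective utility h_k of Eq. (7).

    w_abs : representative |w| of each bin, with 0 < |w| <= 1
    M_k   : current number of connections in each bin (gamma = 1)
    """
    log_w = np.log(w_abs)
    return alpha * log_w - beta * log_w ** 2 - np.log(M_k)
```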
So, all connections compete with each other to increase their effective utilities (\(h^{l}_{ijk}\)) to help reduce the overall loss function in a robust manner. They do this by switching from one state to another by dynamically changing the weights \(w^{l}_{ijk}\), depending on the local gradient of \(h^{l}_{ijk}\), in a manner similar to gradient descent. One of the important results in potential game theory is that this competitive dynamics will result in a Nash equilibrium where the potential \(\phi(x)\) is maximized. At equilibrium, all agents enjoy the same utility - that is, \(h^{l}_{ijk}=h^{l*}\) for all \(i,j\) and \(k\). This is an _arbitrage equilibrium_ as all agents have the same utility, thereby removing any incentive to switch states.
Using Eq. 7 in Eq. 2, we have
\[\phi(\mathbf{x})^{l}=\phi^{l}_{u}+\phi^{l}_{v}+\phi^{l}_{z}+\text{constant} \tag{8}\]
where
\[\phi^{l}_{u} =\alpha^{l}\sum_{k=1}^{m}x^{l}_{k}\ln\mid w^{l}_{ijk}\mid \tag{9}\] \[\phi^{l}_{v} =-\beta^{l}\sum_{k=1}^{m}x^{l}_{k}(\ln\mid w^{l}_{ijk}\mid)^{2} \tag{10}\] \[\phi^{l}_{z} =\frac{1}{M^{l}}\ln\frac{M^{l}!}{\prod_{k=1}^{m}(M^{l}x^{l}_{k})!} \tag{11}\]
where \(x^{l}_{k}=M^{l}_{k}/M^{l}\) and we have used Stirling's approximation in equation (11).
We see that \(\phi(\mathbf{x})^{l}\) is strictly concave:
\[\partial^{2}\phi(\mathbf{x})^{l}/\partial (x^{l}_{k})^{2}=-1/x^{l}_{k}<0 \tag{12}\]
Therefore, a _unique Nash Equilibrium_ for this game exists, where \(\phi(\mathbf{x})\) is maximized. Using the Lagrangian multiplier approach (Eq. 3), we maximize \(\phi(\mathbf{x})\) in equations (8)-(11) to determine that the equilibrium distribution of the connection weights follows a lognormal distribution, given by
\[x^{l}_{k}=\frac{1}{\mid w^{l}_{ijk}\mid\sigma^{l}\sqrt{2\pi}}\exp\left[-\frac{(\ln\mid w^{l}_{ijk}\mid-\mu^{l})^{2}}{2(\sigma^{l})^{2}}\right] \tag{13}\]
where \(\mu^{l}=\frac{\alpha^{l}+1}{2\beta^{l}}\) and \(\sigma^{l}=\sqrt{\frac{1}{2\beta^{l}}}\).
Thus, the theory predicts a surprising and useful result that the microstructure of deep neural networks, i.e., the distribution of connection weights, is lognormal for all highly connected layers. This universality is independent of the size of the
network, its architecture, or its application domain. The intuitive explanation is that, in a given layer, all individual connections contribute an effective utility (i.e., a net benefit) toward the overall objective of the network, which is to learn robustly the structure of a complex high-dimensional manifold by minimizing the loss function. In a large deep neural network, with hundreds of layers and millions of connections in each layer, no connection is particularly unique. No connection is more important than another. Every connection has thousands of counterparts elsewhere, so no one is special. Therefore, there is this inherent symmetry and equality in the microstructure. Hence, they all end up contributing the same effective utility towards that layer's goal of minimizing the loss function as the training progresses. That is why, when training is completed, one reaches the arbitrage equilibrium where all effective utilities are equal in that layer, i.e., \(h^{l}_{ijk}=h^{l*}\) for all \(i,j\), and \(k\).
Furthermore, in the "thermodynamic limit" of extremely large networks, i.e. \(L\rightarrow\infty\), \(M^{l}\rightarrow\infty\), and \(W^{l}\rightarrow\infty\), _all_ connections in _all_ the layers end up making the _same_ effective utility contribution, i.e. \(h^{l}_{ijk}=h^{*}\) for all \(i,j,k\), and \(l\). For this ideal deep neural network, all layers will have a lognormal weight distribution with the _same_\(\mu\) and \(\sigma\). In other words, \(\alpha^{l}\) and \(\beta^{l}\) are the _same_ for all layers. This is the ultimate universal microstructure for ideal deep neural networks.
Now, readers familiar with statistical mechanics will recognize the potential component \(\phi^{l}_{z}\) as _entropy_ (except for the missing Boltzmann constant \(k_{B}\)). Thus, by maximizing \(\phi^{l}\) in the Lagrangian multiplier formulation, one is equivalently maximizing entropy subject to the constraints specified in the terms \(\phi^{l}_{u}\) and \(\phi^{l}_{v}\). Thus, the lognormal distribution is the maximum entropy distribution under these constraints. This connection with entropy reveals an important insight into the robustness property of the network design, as discussed in the next section.
The ideal deep neural network is the conceptual equivalent of the ideal gas in statistical thermodynamics. Just as the maximum entropy distribution of energy in statistical thermodynamics is the well-known exponential distribution, called the Boltzmann distribution, we observe that its equivalent in statistical teleodynamics for deep neural networks is the lognormal distribution.
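The route to this equilibrium can also be checked numerically. In the minimal sketch below (our illustration; the parameter values are arbitrary), connections migrate toward bins offering above-average effective utility, and the resulting occupancies converge to the lognormal shape of Eq. (13), i.e., \(x_{k}\propto\exp(\alpha\ln w_{k}-\beta(\ln w_{k})^{2})\).

```python
import numpy as np

m = 60                                   # number of weight bins
w = (np.arange(m) + 0.5) / m             # representative |w| per bin
alpha, beta = -4.0, 1.0                  # illustrative utility parameters
M_k = np.full(m, 1000.0)                 # start from a uniform occupancy

for _ in range(200000):
    # effective utility of Eq. (7), with gamma = 1
    h = alpha * np.log(w) - beta * np.log(w) ** 2 - np.log(M_k)
    # connections drift toward bins with above-average utility; the
    # shift sums to zero, so the total number of connections is conserved
    M_k = np.clip(M_k + 0.5 * (h - h.mean()), 1e-6, None)

x_k = M_k / M_k.sum()
pred = np.exp(alpha * np.log(w) - beta * np.log(w) ** 2)  # Eq. (13) shape
pred /= pred.sum()
print(np.abs(x_k - pred).max())          # ~0 at the arbitrage equilibrium
```

At the fixed point all bins offer the same utility \(h^{*}\), so \(\ln M_{k}=\alpha\ln w_{k}-\beta(\ln w_{k})^{2}-h^{*}\), which is exactly the lognormal of Eq. (13) with \(\mu=(\alpha+1)/(2\beta)\) and \(\sigma=\sqrt{1/(2\beta)}\).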
### Optimally Robust Design
The maximum-entropy design distributes the weights in the network in such a way that it maximizes the uncertainty about a wide variety of future datasets whose nature is unknown, unknowable, and, therefore, uncertain. Thus, in maximum-entropy design, the network is optimized for all potential future environments, not for any particular one. Note that for any particular dataset, one can design a weight distribution such that it will outperform the maximum entropy design with respect to the loss function. However, such a biased network may not perform as well for other datasets, while the maximum entropy distribution-based network is likely to perform better. For instance, if a network is overfitted on a specific dataset, then it might
"memorize" these data and hence might not perform that well for other datasets. To prevent this, one uses techniques such as data segmentation, weight regularization, dropout, early stopping, etc. The combined effect of such procedures is to achieve robustness in performance on a wide range of datasets. The goal of such techniques is to accommodate as much variability and as much uncertainty as possible in the test environments. This is exactly what we achieve by maximizing entropy in our theory. Maximizing entropy is the same as maximizing the uncertainty and variability of future datasets. In our theory, this robustness requirement is naturally built in from the very beginning as an integral part of the effective utility and potential function formulation, not as _ad hoc_ afterthoughts to prevent overfitting. This is what we mean by _optimally robust design_[3].
Thus, an optimally robust deep neural network is a robust prediction engine. It is a maximum entropy machine that learns an efficient and robust model of the target manifold. In the "thermodynamic limit," all networks, such as the Boltzmann Machine, Hopfield network, and so on, are various special instances of this general class. We call this machine the _Jaynes Machine_ in honor of Professor E. T. Jaynes, who elucidated the modern interpretation of the maximum entropy principle in the 1950s [13, 14, 15, 16].
## 3 Empirical Results
The predictions of the theory were tested by analyzing the weight distributions in six different deep neural networks. They are (i) BlazePose, (ii) Xception, (iii) BERT-Small, (iv) BERT-Large, (v) Llama-2 (7B), and (vi) Llama-2 (13B) [17, 18, 19, 20]. Their salient features are summarized in Table 1. The first two utilize convolution layers, and the other four are based on the transformer architecture [21, 22, 23]. They are of widely different sizes with respect to the number of parameters and are designed for different application domains.
The layer-by-layer weight data for these networks were extracted, normalized between 0 and 1, converted to their absolute magnitudes by dropping the signs, and classified into different bins. For all these networks, some layers had only a few thousand data points (out of the millions or tens of millions in the network), so we did not fit a distribution as statistical measures such as \(R^{2}\) were not good.
**Table 1**: Six deep neural network case studies

| Model | Architecture | Parameter count | Application |
| --- | --- | --- | --- |
| BlazePose | Convolution | \(2.8\times 10^{6}\) | Computer Vision |
| Xception | Convolution | \(20\times 10^{6}\) | Computer Vision |
| BERT Small | Transformer | \(109\times 10^{6}\) | Natural Language Processing |
| BERT Large | Transformer | \(325\times 10^{6}\) | Natural Language Processing |
| Llama-2 (7B) | Transformer | \(7\times 10^{9}\) | Natural Language Processing |
| Llama-2 (13B) | Transformer | \(13\times 10^{9}\) | Natural Language Processing |
The plots show the _size-weighted_ distributions (noted as category weight in the y-axis) rather than the weight distribution, since the features are clearer in the former. The category weight is simply the product of the size (i.e., weight) of a category (i.e., bin) and the number of connections in that category. There is a well-known result in statistics [24] that if the weight distribution is lognormal with \(\mu\) and \(\sigma\) (i.e., \(LN(\mu,\sigma)\)), then the size-weighted distribution is also lognormal, \(LN(\mu^{\prime},\sigma^{\prime})\), where \(\mu^{\prime}=\mu+\sigma^{2}\) and \(\sigma^{\prime}=\sigma\). Furthermore, since the utility \(u^{l}_{ijk}\) in Eq. 5 is positive (since it is a benefit), and \(\ln\mid w^{l}_{ijk}\mid\) is negative in the range of \(0<\mid w^{l}_{ijk}\mid<1\), we have \(\alpha^{l}<0\), \(\mu^{l}<0\), and \(\mu^{\prime l}<0\). Similarly, from Eq. 6, the disutility \(v^{l}_{ijk}\) requires \(\beta^{l}>0\).
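The fitting procedure itself is straightforward; a hedged SciPy sketch is shown below. It fits the three-parameter scaled lognormal \(A^{\prime}\cdot LN(\mu^{\prime},\sigma^{\prime})\) to a size-weighted histogram. The toy data at the end only checks that \(\mu^{\prime}=\mu+\sigma^{2}\) is recovered; the initial guesses echo the typical values reported later in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaled_lognormal(w, A, mu, sigma):
    """A * lognormal pdf: the 3-parameter form (A', mu', sigma') fitted here."""
    return (A / (w * sigma * np.sqrt(2 * np.pi))
            * np.exp(-(np.log(w) - mu) ** 2 / (2 * sigma ** 2)))

def fit_size_weighted(weights, n_bins=100):
    """Fit the size-weighted weight distribution of one layer.

    The 'category weight' of a bin is (bin center) x (count in bin).
    If |w| ~ LN(mu, sigma), this curve is itself lognormal with
    mu' = mu + sigma^2 and sigma' = sigma.
    """
    mags = np.abs(weights.ravel())
    mags = mags / mags.max()
    counts, edges = np.histogram(mags, bins=n_bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    y = centers * counts                        # size-weighted histogram
    keep = counts > 0
    popt, _ = curve_fit(scaled_lognormal, centers[keep], y[keep],
                        p0=(y.sum() / n_bins, -2.5, 0.65), maxfev=20000)
    return popt                                 # (A', mu', sigma')

# Toy check: lognormal samples should be recovered with mu' = mu + sigma^2.
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=-3.0, sigma=0.6, size=2_000_000)
print(fit_size_weighted(samples))
```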
**Fig. 1**: Typical lognormal fitted curves: A & B - BlazePose; C & D - Xception. Blue dots are data, and the red curve is the lognormal fit. The parameters of the fits are also shown.
**Fig. 2**: A & B - BERT-Small; C & D - BERT-Large
The lognormal distribution was fitted and tested for the following networks: BlazePose (39 layers), Xception (32 layers), BERT-Small (75 layers), BERT-Large (144 layers), Llama-2 7B (226 layers) and Llama-2 13B (282 layers). Instead of showing the plots for all the 798 layers, which all look pretty similar, we show a much smaller selection of sample distributions in Figs. 1-3. We show two typical distributions, one at the beginning and one near the end of the network, for each of the six networks we analyzed. We see that these size-weighted data fit the lognormal distribution very well with high \(R^{2}\) values. This is typical of all the layers with high connectivity. Although the six networks use different architectures, are of different sizes, and are trained for different applications, we find this surprising universal microstructure. This is an important design feature of these networks that emerges automatically during training. As discussed in Section 2, our theory predicts this universal lognormal microstructure.
In the Supplemental Information section, we list the lognormal parameters (\(A^{\prime}\), \(\mu^{\prime}\), and \(\sigma^{\prime}\)) for all the highly connected layers (798 of them) for all six case studies. Table 2 summarizes the average and standard deviation values of the lognormal distribution parameters for the six case studies. Note that for large networks with
\(>100\) million connections, \(\sigma^{\prime}\) appears to be nearly constant (around 0.65) for all networks, as seen by its low standard deviation values in Table 2. This implies that \(\beta^{\prime}\) is also approximately constant for all networks. Even \(\mu^{\prime}\) (and hence \(\alpha^{\prime}\)) appears to be in a tight range (-2.5 to -3.0) for the different networks. The theory predicts that \(\mu^{\prime}\) and \(\sigma^{\prime}\) are constants for all networks only in the "thermodynamic limit" of the ideal network. However, we see such a trend even for these nonideal cases.
The number of connections in the 798 layers we studied ranged from 36,864 to 163,840,000. Generally speaking, we find that the more connections a layer has, the better the lognormal fit with higher \(R^{2}\) due to better statistical averaging. We can see in Fig. 4(a) that Layer #4 of the Xception network, which has only 18,432 connections, has too much noise in the data to fit any distribution well. Therefore, we did not model such layers.
However, layers with scores of millions of connections have their own challenges, as they are harder to train, and hence run the risk of suboptimal weight assignments. Recall that, according to the theory, the lognormal distribution emerges only when arbitrage equilibrium is reached. It is possible that these extremely highly connected layers had not quite reached equilibrium when training was stopped. Therefore,
**Table 2**: Average and standard deviation of lognormal parameters

| Model | Layers | \(R^{2}\) | \(A^{\prime}\) | \(\mu^{\prime}\) | \(\sigma^{\prime}\) |
| --- | --- | --- | --- | --- | --- |
| BlazePose | 39 | \(0.93\pm 0.02\) | \(3.75\pm 2.09\) | \(-1.74\pm 0.52\) | \(1.49\pm 0.60\) |
| Xception | 32 | \(0.98\pm 0.01\) | \(6.53\pm 3.64\) | \(-2.87\pm 0.18\) | \(0.70\pm 0.05\) |
| BERT Small | 75 | \(0.96\pm 0.01\) | \(66.15\pm 46.46\) | \(-2.47\pm 0.95\) | \(0.65\pm 0.02\) |
| BERT Large | 144 | \(0.96\pm 0.01\) | \(44.84\pm 143.6\) | \(-2.37\pm 0.98\) | \(0.64\pm 0.01\) |
| Llama-2 (7B) | 226 | \(0.97\pm 0.01\) | \(11464\pm 8170\) | \(-2.96\pm 0.54\) | \(0.66\pm 0.05\) |
| Llama-2 (13B) | 282 | \(0.94\pm 0.03\) | \(1513\pm 1116\) | \(-3.02\pm 0.53\) | \(0.67\pm 0.06\) |
Figure 4: Typical problems with low and high number of connections
naturally, there would be a mismatch between theoretical predictions and empirical observations. We observe this in the Llama-2 (13B) data. Fig. 4(b) shows the size-weighted distribution for Layer #1, which has over 163 million weights. As we can clearly see, there are elements of the lognormal distribution present, but the fit is not as good as it is for Layer #7 (see Fig. 3C), for example, which has about 70 million weights. This suggests that Layer #1 training was suboptimal. It appears from our empirical analysis that layers that have connections in the range of about 1 to 50 million have the right trade-off between better statistical properties and reaching optimal weight distribution.
## 4 Conclusions
Understanding the microstructure of large deep neural networks is of great importance, given their enormous role in many applications. This understanding could lead to practical benefits, such as better design and training algorithms. But equally importantly, it could also lead to better theories about their structure, function, and behavior. Toward that goal, we present a novel theoretical and empirical analysis of the distribution of weights of six different large neural networks.
We show both theoretically and empirically that in large neural networks, the final connection strengths (i.e., weights) are distributed lognormally in highly connected layers. This pattern is independent of the architecture or the application domain. We should be able to take advantage of this knowledge to reduce the amount of time, data, and computational resources required to train large deep neural networks effectively. We believe that there are at least three ways in which this knowledge can be utilized to train deep neural network models.
First, since we know the final weight distribution is lognormal, we can 'hot start' and initialize the weights lognormally instead of randomly. Furthermore, since we have the average values of \(A^{\prime}\), \(\mu^{\prime}\), and \(\sigma^{\prime}\) from these six case studies, they provide us with guidance on the initial weights. This should bring us closer to the finish line when we start. This raises an interesting question: Is this what pre-training is doing for GPTs? Does unsupervised learning transform the initial random distribution of weights into a nearly lognormal distribution, which makes subsequent supervised learning easier? We need further studies to answer such questions.
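A hedged sketch of the 'hot start' just described is given below. The defaults echo the averages in Table 2, converted back to the underlying weight distribution via \(\mu=\mu^{\prime}-\sigma^{\prime 2}\); the random sign assignment is our own assumption, since the theory constrains only the magnitudes.

```python
import numpy as np

def lognormal_hot_start(shape, mu_prime=-2.7, sigma_prime=0.65, seed=0):
    """Initialize a weight tensor lognormally instead of randomly.

    mu_prime / sigma_prime are size-weighted parameters (Table 2 reports
    roughly -2.5 to -3.0 and ~0.65 for large networks); the underlying
    weight distribution has mu = mu' - sigma'^2 and sigma = sigma'.
    Signs are assigned at random (an assumption: the theory fixes |w| only).
    """
    rng = np.random.default_rng(seed)
    mu = mu_prime - sigma_prime ** 2
    mags = rng.lognormal(mean=mu, sigma=sigma_prime, size=shape)
    signs = rng.choice([-1.0, 1.0], size=shape)
    return signs * mags

W0 = lognormal_hot_start((4096, 4096))
print(W0.std(), np.abs(W0).mean())
```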
Second, during training using iterative gradient descent, instead of individually tuning millions of weights, we can tune the much smaller number of lognormal parameters. Consider, for example, layer #7 of Llama-2 (13B), which has about 70 million weights. However, this distribution can be modeled by only three parameters (\(A^{\prime}\), \(\mu^{\prime}\), and \(\sigma^{\prime}\)) of the corresponding lognormal distribution. Therefore, we can modify the backpropagation algorithm to tune just these three parameters rather than adjusting 70 million weights. One could do this at least for the initial stages of training and reserve the more resource-consuming fine-tuning of all the weights towards the last
stages of training. This kind of hybrid training could result in considerable savings in data, time, and computational resources for large networks. Furthermore, since very large networks struggle to reach optimal allocation of weights (see Fig. 4B), constraining the weight distribution to lognormal in the training iterations, which is the theoretical optimum, minimizes the chances of settling down in suboptimal microstructures.
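One way to realize this parameter-level tuning is to reparameterize a layer's weight magnitudes through the learnable lognormal parameters and a frozen per-connection noise tensor, so gradients flow into a handful of scalars instead of millions of weights. The PyTorch sketch below is our illustrative reading of this idea, not an implementation from this work; all names and sizes are ours.

```python
import torch
import torch.nn as nn

class LognormalLinear(nn.Module):
    """Linear layer whose weight magnitudes are generated from learnable
    lognormal parameters (mu, log_sigma) and a frozen noise tensor, so
    training tunes two scalars (plus fixed signs) instead of every weight."""

    def __init__(self, d_in, d_out, mu0=-3.0, sigma0=0.65):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(mu0))
        self.log_sigma = nn.Parameter(torch.tensor(sigma0).log())
        # frozen standard-normal draws and random signs, one per connection
        self.register_buffer("eps", torch.randn(d_out, d_in))
        self.register_buffer(
            "sign", torch.randint(0, 2, (d_out, d_in)).float() * 2 - 1)

    def forward(self, x):
        # |w| = exp(mu + sigma * eps) is lognormal by construction
        w = self.sign * torch.exp(self.mu + self.log_sigma.exp() * self.eps)
        return x @ w.t()

layer = LognormalLinear(1024, 512)
y = layer(torch.randn(8, 1024))
print(sum(p.numel() for p in layer.parameters()))  # 2 trainable scalars
```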
Third, we can design special-purpose hardware where the layers are connected in a lognormal manner with tunable connection strengths. We are currently pursuing all of these opportunities.
We wish to emphasize that the spirit of our modeling is similar to that of the ideal gas or the Ising model in statistical mechanics. Just as real molecules are not point-like objects or devoid of intermolecular interactions, as assumed in the ideal gas model, we make similar simplifying assumptions in our model. The ideal version serves as a useful starting point for developing more comprehensive models. Furthermore, just as real gases do not behave like the ideal gas, we do not expect real-life deep neural networks to behave like their ideal version. That is why it comes as a surprise that the six networks we analyzed come this close to the predictions made for the ideal version.
We also stress an important insight revealed by our theory. It is generally viewed that active matter systems such as neural networks are out-of-equilibrium or far-from-equilibrium systems. However, both our theoretical and empirical analyses demonstrate that they are actually in equilibrium, an _arbitrage_ equilibrium. Just as systems are in mechanical equilibrium when forces or pressures are equal, in thermal equilibrium when temperatures are equal, or in phase equilibrium when chemical potentials are equal, we have active matter systems in arbitrage equilibrium when effective utilities are equal [25].
The crucial feature of the maximum entropy design, expressed by the lognormal distribution at arbitrage equilibrium, is that the effective utilities of all the connections are equal. This equality reflects a deep sense of balance, harmony, and fairness in network design. This is an elegant solution to the credit assignment problem among millions of connections. One could view this equality as the mathematical criterion of beauty. One cannot help but wonder whether nature has discovered this beautiful secret and exploits it in her evolutionary design of biological brains.
### Acknowledgements
We would like to thank Professor Babji Srinivasan of IIT-Madras for his valuable suggestions.
### Author Contributions
VV: Conceptualization, Theory, Methodology, Formal Analysis, Investigation, Supervision, and Writing; NS: Software development and analysis for the BlazePose,
BERT-Small, BERT-Large, Llama-2 (7B), and Llama-2 (13B) networks; MK: Software development and analysis for the Xception network.
The authors have no conflicts of interest to declare.
|
2305.14644 | KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning
World Models in Autonomous Driving Tasks | Autonomous driving has received a great deal of attention in the automotive
industry and is often seen as the future of transportation. The development of
autonomous driving technology has been greatly accelerated by the growth of
end-to-end machine learning techniques that have been successfully used for
perception, planning, and control tasks. An important aspect of autonomous
driving planning is knowing how the environment evolves in the immediate future
and taking appropriate actions. An autonomous driving system should effectively
use the information collected from the various sensors to form an abstract
representation of the world to maintain situational awareness. For this
purpose, deep learning models can be used to learn compact latent
representations from a stream of incoming data. However, most deep learning
models are trained end-to-end and do not incorporate any prior knowledge (e.g.,
from physics) of the vehicle in the architecture. In this direction, many works
have explored physics-infused neural network (PINN) architectures to infuse
physics models during training. Inspired by this observation, we present a
Kalman filter augmented recurrent neural network architecture to learn the
latent representation of the traffic flow using front camera images only. We
demonstrate the efficacy of the proposed model in both imitation and
reinforcement learning settings using both simulated and real-world datasets.
The results show that incorporating an explicit model of the vehicle (states
estimated using Kalman filtering) in the end-to-end learning significantly
increases performance. | Hemanth Manjunatha, Andrey Pak, Dimitar Filev, Panagiotis Tsiotras | 2023-05-24T02:27:34Z | http://arxiv.org/abs/2305.14644v1 | # KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning World Models in Autonomous Driving Tasks
###### Abstract
Autonomous driving has received a great deal of attention in the automotive industry and is often seen as the future of transportation. The development of autonomous driving technology has been greatly accelerated by the growth of end-to-end machine learning techniques that have been successfully used for perception, planning, and control tasks. An important aspect of autonomous driving planning is knowing how the environment evolves in the immediate future and taking appropriate actions. An autonomous driving system should effectively use the information collected from the various sensors to form an abstract representation of the world to maintain situational awareness. For this purpose, deep learning models can be used to learn compact latent representations from a stream of incoming data. However, most deep learning models are trained end-to-end and do not incorporate in the architecture any prior knowledge (e.g., from physics) of the vehicle. In this direction, many works have explored physics-infused neural network (PINN) architectures to infuse physics models during training. Inspired by this observation, we present a Kalman filter augmented recurrent neural network architecture to learn the latent representation of the traffic flow using front camera images only. We demonstrate the efficacy of the proposed model in both imitation and reinforcement learning settings using both simulated and real-world datasets. The results show that incorporating an explicit model of the vehicle (states estimated using Kalman filtering) in the end-to-end learning significantly increases performance.
Autonomous vehicles, Autoencoders, Imitation Learning, Reinforcement Learning, Physics Infused Neural Networks.
## I Introduction
Agents that can learn autonomously while interacting with the world are quickly becoming mainstream due to advances in the machine learning domain [1]. Particularly, stochastic generative modeling and reinforcement learning frameworks have proved successful in learning strategies for complex tasks, often out-performing humans by learning the structure and statistical regularities found in data collected in the real world [2, 3]. These successful theoretical frameworks support the idea that acquiring internal models of the environment, i.e., World Models (WMs), is a natural way to achieve desired interaction of the agent with its surroundings. Extensive evidence from recent neuroscience/cognitive science research [4, 5] highlights prediction as one of the core brain functions, even stating that "prediction, not behavior, is the proof of intelligence", with the brain continuously combining future expectations with present sensory inputs during decision making. One can argue that prediction provides an additional advantage from an evolutionary perspective, where the ability to predict the next action is critical for successful survival. Thus, constructing explicit models of the environment, or enabling the learning agent to predict its future, has a fundamental appeal for learning-based methods [6]. In this direction, we present a physics-infused neural network architecture for learning World Models for autonomous driving tasks. By a World Model, we mean a model that predicts future frames from consecutive historical frames, where a frame is the front-view image of the traffic flow [7].
Reinforcement Learning (RL) and Imitation Learning (IL) have gained much traction in autonomous driving as a promising avenue to learn an end-to-end policy that directly maps sensor observations to steering and throttle commands. However, approaches based on RL (or IL) require many training samples collected from a multitude of sensors of different modalities, including camera images, LiDAR scans, and Inertial-Measurement-Units (IMUs). These data modalities generate high-dimensional data that are both spatially and temporally correlated. In order to effectively use the information collected from the various sensors and develop a world model (an abstract description of the world), this high-dimensional data needs to be reduced to low-dimensional representations [8]. To this end, Variational Autoencoders (VAEs) and Recurrent Neural Networks (RNNs) have been extensively used to infer low-dimensional _latent variables_ from temporally correlated data [9, 10, 11, 12]. Furthermore, modeling temporal dependencies in the data using RNNs allows constructing a good _prediction model_ (a World Model), which has been shown to benefit RL/IL scenarios [13, 14]. However, learning a good prediction model in an end-to-end fashion comes with the cost of substantial training data, training time, not to mention a significant computational burden. Moreover, end-to-end training can lead to hallucinations where an agent learns World Models that do not adhere to physical laws [9, 15]. Hence, there is a need to consider physics-informed or model-based machine learning approaches [16].
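As a point of reference, the VAE-plus-RNN pattern referred to here can be sketched in a few dozen lines of PyTorch. This is a generic illustration of learning a latent prediction model, not the KARNet architecture itself; all module names and sizes are our illustrative choices.

```python
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Generic VAE-encoder + GRU pattern: encode each camera frame into a
    low-dimensional latent z, then predict the next latent from the
    sequence (illustrative only)."""

    def __init__(self, z_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(              # image -> flat features
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 14 * 14                        # for 64x64 RGB inputs
        self.mu = nn.Linear(feat, z_dim)           # q(z|x) parameters
        self.logvar = nn.Linear(feat, z_dim)
        self.rnn = nn.GRU(z_dim, hidden, batch_first=True)
        self.next_z = nn.Linear(hidden, z_dim)     # predicts z_{t+1}

    def encode(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

    def forward(self, frames):                     # frames: (B, T, 3, 64, 64)
        B, T = frames.shape[:2]
        z, _, _ = self.encode(frames.reshape(B * T, *frames.shape[2:]))
        z = z.reshape(B, T, -1)
        h, _ = self.rnn(z)
        return self.next_z(h)                      # predicted next latents
```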
Model-based learning methods, on the other hand, incorporate prior knowledge (e.g., from physics) into the neural network architecture [16, 17]. The explicit model acts as a high-level abstraction of the observed data and can provide rich |
2306.05219 | XNOR-VSH: A Valley-Spin Hall Effect-based Compact and Energy-Efficient
Synaptic Crossbar Array for Binary Neural Networks | Binary neural networks (BNNs) have shown an immense promise for
resource-constrained edge artificial intelligence (AI) platforms as their
binarized weights and inputs can significantly reduce the compute, storage and
communication costs. Several works have explored XNOR-based BNNs using SRAMs
and nonvolatile memories (NVMs). However, these designs typically need two
bit-cells to encode signed weights leading to an area overhead. In this paper,
we address this issue by proposing a compact and low power in-memory computing
(IMC) of XNOR-based dot products featuring signed weight encoding in a single
bit-cell. Our approach utilizes valley-spin Hall (VSH) effect in monolayer
tungsten di-selenide to design an XNOR bit-cell (named 'XNOR-VSH') with
differential storage and access-transistor-less topology. We co-optimize the
proposed VSH device and a memory array to enable robust in-memory dot product
computations between signed binary inputs and signed binary weights with sense
margin (SM) > 1 micro-amps. Our results show that the proposed XNOR-VSH array
achieves 4.8% ~ 9.0% and 37% ~ 63% lower IMC latency and energy, respectively,
with 49% ~ 64% smaller area compared to spin-transfer-torque (STT)-MRAM and
spin-orbit-torque (SOT)-MRAM based XNOR-arrays. | Karam Cho, Sumeet Kumar Gupta | 2023-06-08T14:17:42Z | http://arxiv.org/abs/2306.05219v1 | # XNOR-VSH: A Valley-Spin Hall Effect-based Compact and Energy-Efficient Synaptic Crossbar Array for Binary Neural Networks
###### Abstract
Binary neural networks (BNNs) have shown an immense promise for resource-constrained edge artificial intelligence (AI) platforms as their binarized weights and inputs can significantly reduce the compute, storage and communication costs. Several works have explored XNOR-based BNNs using SRAMs and nonvolatile memories (NVMs). However, these designs typically need two bit-cells to encode signed weights leading to an area overhead. In this paper, we address this issue by proposing a compact and low power in-memory computing (IMC) of XNOR-based dot products featuring signed weight encoding in a single bit-cell. Our approach utilizes valley-spin Hall (VSH) effect in monolayer tungsten di-selenide to design an XNOR bit-cell (named 'XNOR-VSH') with differential storage and access-transistor-less topology. We co-optimize the proposed VSH device and a memory array to enable robust in-memory dot product computations between signed binary inputs and signed binary weights with sense margin (SM) > 1 \(\mu\)A. Our results show that the proposed XNOR-VSH array achieves 4.8% - 9.0% and 37% - 63% lower IMC latency and energy, respectively, with 49% - 64% smaller area compared to spin-transfer-torque (STT)-MRAM and spin-orbit-torque (SOT)-MRAM based XNOR-arrays.
Binary neural networks (BNNs), edge artificial intelligence (AI), nonvolatile memories (NVMs), in-memory computing (IMC), monolayer transition metal dichalcogenide (TMD), valley-spin Hall effect (VSH), magnetic tunnel junction (MTJ).
## I Introduction
Deep neural networks (DNNs) have realized remarkable advances for artificial intelligence (AI) workloads, achieving super-human accuracies in several tasks. However, this comes at enormous storage, computation, and communication costs, as traditional solutions based on the standard architectures (e.g., graphics processing unit (GPU) and tensor processing unit (TPU)) entail a large number of power-hungry and performance-limiting processor-memory transactions. Therefore, to efficiently handle the data-intensive workloads in DNNs, in-memory computing (IMC) architectures (where computing operations are performed inside a memory macro) have been extensively investigated [1, 2, 3, 4]. The IMC reduces frequent data movement between memory and processor yielding much lower communication costs.
However, for highly energy-constrained edge AI platforms, IMC alone may not be sufficient to meet the energy efficiency targets. To manage the power consumption of edge devices, quantization of the inputs and weights is a common technique [4], which reduces the storage and IMC energy costs drastically. In fact, input/weight quantization all the way to binary levels has been shown to significantly boost the IMC energy efficiency for DNNs. The simplest binary quantization scheme involves encoding the weights and inputs as 0s and 1s (unsigned binary); however, such approaches suffer from large accuracy loss. To bring the accuracies to acceptable levels (while still reaping the energy benefits of heavy quantization), signed binary neural networks (BNNs) have emerged where weights and inputs are binarized to +1 and -1 [5]. The IMC of the scalar product of weights and inputs corresponds to an XNOR operation; hence, such designs are referred to as XNOR-BNNs.
To implement XNOR-based signed BNNs, there are two common techniques. One is to encode signed binary weights into two bit-cells in the memory array to achieve complementary weight encoding. Such an approach has been explored in the context of SRAM [6] and emerging nonvolatile memories (NVMs) including resistive random-access memory (XNOR-RRAM [7]) and magnetic tunnel junction (MTJ)-based magnetic random-access memories (MRAMs). The MTJ-based designs exploit spin-transfer-torque (XNOR-STT [8, 9]) or spin-orbit-torque (XNOR-SOT [10]). However, the main bottleneck of this approach is the area overhead due to the need for two bit-cells per set of complementary weights. To mitigate this issue, the second technique utilizes a single bit-cell to store weights in the forms of 0s and 1s and performs IMC in the unsigned regime. However, it requires a transformation of the outputs from the unsigned regime to the signed regime, which entails pre-processing of inputs/weights and post-processing of outputs leading to peripheral circuit overheads [11, 12, 13].
In this paper, we address the limitations of the existing solutions for XNOR-BNNs by utilizing valley-spin Hall (VSH) effect in monolayer tungsten di-selenide to perform XNOR-based IMC (named 'XNOR-VSH'). Our design features 1) encoding signed binary weights in the array (thus, avoiding the post-processing overheads associated with transformations between signed and unsigned regimes), and 2) storing complementary bits in a _single_ device (thus, averting the need for two bit-cells for signed weight encoding). Moreover, the proposed design leverages the integrated back-gate in VSH device (which we explored in our earlier works [14]) to achieve
access-transistor-less design, leading to further area savings. We will show later that the compactness in the proposed design enabled by complementary bit encoding in a single device along with the integrated back-gate [14] translates to a significant reduction in IMC energy and latency compared to the previous spin-based XNOR-IMC alternatives. The key contributions of this paper are as follows:
* We propose an XNOR bit-cell utilizing the VSH effect (XNOR-VSH) where complementary signed weights (+1/-1) are stored in a _single_ device enabling an area-efficient layout compared to the state-of-the-art 2T-2R XNOR designs.
* We implement an array based on the proposed XNOR-VSH bit-cell that can perform IMC of dot products of signed binary weights and inputs satisfying sense margin (SM) \(>\) 1 \(\mu\)A.
* We evaluate the benefits and trade-offs of XNOR-VSH vis-a-vis XNOR-STT and XNOR-SOT, showing significant area, energy and latency benefits of the proposed technique at the cost of lower SM.
## II Background
### State-of-the-art XNOR-IMC based on Nonvolatile Memories
Figure 1 shows the state-of-the-art XNOR cell designs based on emerging NVMs. Fig. 1(a) depicts a 2T-2R XNOR cell where R1 and R2 describe resistive components, which can be either memristors or MTJs, representing XNOR-RRAM [7] or XNOR-STT [8, 9], respectively. In these designs, synaptic weights are encoded in the resistive components that usually have two resistance levels depending on their configuration. For XNOR-RRAM, the memristor becomes conducting (e.g., a filament is formed between two electrodes) when a set voltage across it is greater than the threshold voltage, leading to low resistance (\(R_{L}\)). When reset (e.g., the filament is broken), the memristor loses its conducting characteristics and exhibits high resistance (\(R_{H}\)). By utilizing two 1T-1R RRAM bit-cells, complementary synaptic weights are stored in the XNOR cell. Weight = +1 (-1) corresponds to \(R_{L}\) (\(R_{H}\)) stored in one bit-cell and \(R_{H}\) (\(R_{L}\)) in the other. The input is encoded by driving the two word-lines (WLs) of the XNOR cell to \(V_{DD}\) or 0. Input = +1 (-1) corresponds to \(V_{DD}\) (0) on one WL and 0 (\(V_{DD}\)) on the other. The combination of input vectors and stored weights returns the XNOR output on the bit-line (BL) or sense-line (SL). This XNOR output current from multiple XNOR-cells is summed on the BL/SL to obtain the dot product of the inputs and weights in the signed binary regime.
The MTJs in XNOR-STT play a similar role in storing the weights. When the write current flowing through an MTJ is large enough to switch the magnetization of the free layer (FL) aligning it with that of the pinned layer (PL), MTJ is in parallel (P) state with \(R_{L}\) as its resistance. If the FL is set to have the opposite magnetization as that of the PL (by flowing the write current in the opposite direction to the former case), it holds anti-parallel (AP) state with \(R_{H}\) as its resistance. Thus, XNOR-STT also requires two bit-cells to store complementary weights and sense the XNOR output in the same way as XNOR-RRAM. In case of XNOR-SOT (see Fig. 1(b)) [10], encoding the weights is achieved by flowing write currents along the heavy metals (HMs) such that in-plane spin polarizations are induced at the interface between the HMs and FLs of MTJs, exerting SOT on the FLs. XNOR-SOT is more write-efficient than XNOR-STT and allows independent optimization of write and IMC, but requires additional access transistors. This results in a 4T-2R configuration and thus, larger cell area. The IMC in XNOR-SOT is performed in a similar manner as XNOR-RRAM by flowing the current in the read path through the MTJs.
However, such two-bit-cell-based designs can lead to non-trivial area overheads. To alleviate this issue, related works have utilized a single bit-cell to store weights in the forms of 0s and 1s and perform IMC in the unsigned regime [11, 12, 13]. In the NAND-net [11], XNOR operations are replaced by NAND operations, requiring only one bit-cell per weight and simplifying the IMC of dot products. In [12, 13], the signed inputs are replaced by the unsigned inputs to improve the area and energy efficiencies. However, such techniques require transformations of the outputs from the unsigned regime to the signed regime, which involve pre-processing of inputs/weights and post-processing of outputs leading to the need for extra peripheral circuits.
Although the state-of-the-art XNOR designs exhibit higher density than CMOS-based designs such as XNOR-SRAM [6], they either suffer from the area overheads due to the necessity of multiple transistors to enable XNOR-IMC or possess circuit complexity arising from the transformations of inputs/outputs between unsigned and signed regimes. These limit the scalability and the IMC energy efficiency. In this paper, we propose VSH-based XNOR-IMC, which avoids the overheads of both the techniques by enabling signed binary weight encoding in a single device and designing compute-enabled array for XNOR-IMC without the overheads of transformations between signed and unsigned regimes.
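Stripped of circuit details, the signed XNOR arithmetic that all of these arrays implement is simple. The Python sketch below (our illustration) shows the bit-counting identity that maps an XNOR popcount to a signed dot product, which is what a crossbar column computes in analog.

```python
import numpy as np

def xnor_dot(inputs: np.ndarray, weights: np.ndarray) -> int:
    """Dot product of signed binary vectors (+1/-1) via XNOR bit-counting.

    With bits encoded as 0/1, XNOR marks the positions where input and
    weight agree; if a of N positions agree, the signed dot product
    is 2a - N.
    """
    in_bits = (inputs > 0)
    w_bits = (weights > 0)
    a = np.sum(~(in_bits ^ w_bits))   # XNOR, then popcount
    return 2 * int(a) - len(inputs)

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=64)
w = rng.choice([-1, 1], size=64)
assert xnor_dot(x, w) == int(np.dot(x, w))
```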
### Valley-Spin Hall (VSH) Effect in Monolayer WSe\({}_{2}\)
Monolayer transition metal dichalcogenides (TMDs) such as tungsten di-selenide (WSe\({}_{2}\)) exhibit intriguing spintronic features promising low power applications [14, 15, 16, 17]. Due to the unique physics of monolayer TMDs, an external electric field (or charge current, \(I_{C}\)) generates transverse spin current (\(I_{S}\)) by separating carriers with opposite out-of-plane (i.e., up or down) spins in divergent directions, which is known as valley-spin Hall (VSH) effect [14, 15, 16, 17, 18]. Given the generation of out-of-plane spins, perpendicular magnetic anisotropy (PMA) magnets can be coupled with WSe\({}_{2}\) to switch their magnetization utilizing VSH-effect-driven SOT (VSH-SOT) without any external magnetic field. As the PMA magnets are
Figure 1: State-of-the-art XNOR: (a) 2T-2R XNOR cell based on RRAM (R as filament) [7] or STT-MRAM (R as MTJ) [8, 9] and (b) SOT-MRAM-based XNOR cell with 4T-2R configuration [10].
known to be more energy-efficient in switching than in-plane magnetic anisotropy (IMA) magnets, VSH promises to reduce write energy compared to IMA-based devices (e.g., SOT-MRAM). Therefore, energy-efficient nonvolatile memories based on the VSH effect (VSH-MRAM; see the inset of Fig. 2) were previously proposed in [14] by utilizing the WSe\({}_{2}\) spin generator, featuring an integrated back-gate, and coupling it with the PMA MTJs. These designs, targeting memory operations and IMC of Boolean and simple arithmetic functions (for general purpose computing) showed higher energy-efficiency along with access-transistor-less compact layout due to the integrated back-gate. In this work, we utilize the VSH effect in monolayer WSe\({}_{2}\) to design an array that can compute XNOR-based dot products with signed binary weights and inputs (targeting BNNs).
### Simulation Framework
We establish a simulation framework (Fig. 2) to evaluate the proposed VSH-based XNOR-IMC (XNOR-VSH), and to perform a comparison with the STT- and SOT-MRAM-based XNOR designs (XNOR-STT and XNOR-SOT, respectively). Note, STT- and SOT-MRAMs are also optimized to implement XNOR dot products for fair comparison with the proposed design, the details of which will be discussed in section IV.
For XNOR-VSH, we utilize the model developed previously by us and described in detail in [14]. Here, we provide a brief overview of the model. First, a 2D FET model is developed based on the approach in [18] to self-consistently capture 2D electrostatics and charge transport in 2D WSe\({}_{2}\) channel of the VSH device. The drain current (or charge current, \(I_{C}\)) is calculated as a function of the gate (\(V_{GS}\)) and drain voltages (\(V_{DS}\)) with mobility of WSe\({}_{2}\) (\(\mu_{WSe2}\)) [18] and contact resistance (\(R_{C}\)) at S/D side [19] calibrated with the experiments. During _Write_, spin current, \(I_{S}\) is obtained from \(I_{C}\) considering valley-spin Hall angle (\(\theta_{SH}\)) and spin diffusion length (\(\lambda_{S}\)) of WSe\({}_{2}\) layer, extracted from the experiments [16, 17]. Generated \(I_{S}\) is provided to ferromagnet \(|\) non-magnet (FM\(|\)NM) interface model [20] and Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation [20] to capture the interface spin scattering and resultant switching dynamics of the PMA magnets, which serve as the FLs of MTJs sitting on the WSe\({}_{2}\) layer. The MTJ resistance which is utilized for IMC operation of the proposed design is obtained from the Non-Equilibrium Green Function (NEGF) equations, as detailed in [21]. Finally, HSPICE simulation evaluates energy and delay of the proposed XNOR-VSH array. Note, during _Read_ and XNOR-IMC, a distributed resistance network is used to model the current paths via WSe\({}_{2}\) channel, which captures the unique geometry of the VSH device as detailed in [14]. The resistance is obtained from 2D FET model similar to [14] as a function of \(V_{GS}\) and \(V_{DS}\) capturing Schottky contact resistance between MTJs and WSe\({}_{2}\).
For comparison, XNOR-STT and XNOR-SOT are implemented in HSPICE based on the cell designs in Fig. 1. For XNOR-STT, the LLGS equation [20] is used to capture the effect of spin torque in MTJs. XNOR-SOT follows the same simulation framework as XNOR-VSH except that the \(I_{C}\) along the HM (here, tungsten W [22]) is calculated, provided to a GSHE model, and converted into \(I_{S}\) via \(\theta_{SH}\) of W, which switches IMA magnets (FLs of MTJs in XNOR-SOT). An MTJ resistance model similar to that of XNOR-VSH is utilized in the IMC operation of XNOR-STT/SOT. For both designs, 7-nm n-FinFETs [low-power predictive technology model (PTM)] are used for access transistors. Table I shows the parameters used in HSPICE simulation. Here, STT- and VSH-based XNORs utilize PMA magnets while XNOR-SOT is designed with IMA magnets. All magnets are optimized to have energy barrier (\(E_{B}\)) of \(\sim\)50 \(k_{B}T\) (where \(k_{B}T\) is the thermal energy).
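For readers unfamiliar with the switching-dynamics step, the sketch below shows a minimal macrospin LLGS integrator with only a damping-like spin torque. It is a bare-bones illustration, not the model of [14, 20]: all parameter values are placeholders, and interface spin scattering, thermal fields, and the distributed resistance network are omitted.

```python
import numpy as np

GAMMA = 1.76e11   # gyromagnetic ratio, rad/(s*T)
ALPHA = 0.01      # Gilbert damping (placeholder)
HK = 0.5          # PMA anisotropy field along +z, tesla (placeholder)
AJ = 0.02         # spin-torque strength, field-equivalent tesla (placeholder)

def llgs_rhs(m, p):
    """Explicit Landau-Lifshitz form of LLGS with a damping-like
    Slonczewski torque from spin polarization p (macrospin, no thermal
    field) -- an illustrative reduction of the model in [20]."""
    h_eff = np.array([0.0, 0.0, HK * m[2]])          # uniaxial PMA field
    mxh = np.cross(m, h_eff)
    mxmxh = np.cross(m, mxh)
    mxmxp = np.cross(m, np.cross(m, p))
    pre = GAMMA / (1.0 + ALPHA ** 2)
    return -pre * (mxh + ALPHA * mxmxh) - pre * AJ * mxmxp

# Integrate with small Euler steps: out-of-plane spins (p = -z) should
# switch an up-magnetized PMA free layer downward.
m = np.array([0.01, 0.0, 1.0])
m /= np.linalg.norm(m)
p = np.array([0.0, 0.0, -1.0])
dt = 1e-13
for _ in range(200000):
    m = m + dt * llgs_rhs(m, p)
    m /= np.linalg.norm(m)                           # keep |m| = 1
print(m[2])   # negative once the free layer has switched
```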
## III Proposed XNOR-VSH
### XNOR-VSH Bit-cell
We design the XNOR-VSH bit-cell by utilizing the differential storage capability of VSH-MRAM explored previously in [14]. However, unlike the work in [14] (which focused on utilizing the differential functionality for storage and IMC of Boolean functions/addition of in-memory operands for general purpose computing), here, we harness the VSH effect for implementing XNOR-IMC for BNNs. Therefore, the routing and biasing of various lines in the proposed bit-cell and the optimization strategies are different from [14] and will be described shortly.
The XNOR-VSH bit-cell includes a p-type spin generator, which has a monolayer WSe\({}_{2}\) channel with an integrated back-gate (see inset in Fig. 2). The channel is extended beyond the area between source and drain, forming two arms. The back-gate controls the flow of charge and spin currents (\(I_{C}\) and \(I_{S}\)) between S and D, and in the arms, respectively. As \(I_{C}\) (or \(I_{WRITE}\)) flows from S to D, the carriers of opposite spins (\(I_{S\uparrow}\) or \(I_{S\downarrow}\)) diverge due to VSH effect, leading to the flow of \(I_{S}\) in the arms
Fig. 2: Simulation framework for evaluation and comparison of the proposed XNOR-VSH with STT- and SOT-based XNOR designs. Details of the 2D FET model are in [18].
transverse to \(I_{C}\). The induced \(I_{S}\) interacts with PMA magnets, which serve as FLs in MTJs located on each arm, storing complementary bits (as discussed shortly). For reading the bits stored, read paths are formed along the MTJs and WSe\({}_{2}\) channel via the read nodes (R1 and R2) formed on top of the MTJs.
In our proposed XNOR-VSH, both the weight \(W\) and input \(IN\) are binarized to \(+1\) or -1, so that their scalar multiplication (i.e., output \(OUT\)) can represent the bitwise XNOR operation. First, \(W\) encoding (WE) is achieved following the _Write_ operation. Complementary bias voltages (i.e., \(V_{DD}\) or 0) are applied at S and D, following which the p-type spin generator is turned ON (G driven to 0). For example, when S is driven to \(V_{DD}\) and D to 0, \(I_{WRITE}\) flows from S to D inducing \(I_{S\downarrow}\) and \(I_{S\uparrow}\) in the left and right arms, respectively. This switches the left FL downward and the right FL upward. Hence, the MTJs on the left (MTJ\({}_{\rm L}\)) and right (MTJ\({}_{\rm R}\)) are set to AP and P states, respectively. Note, the PLs of MTJs are fixed to \(+z\). We define this MTJ configuration as \(W=\) -1 (see Fig. 3). The opposite WE (\(W=+1\)) is achieved by reversing the direction of \(I_{WRITE}\) with the biasing of S at 0 and D at \(V_{DD}\), which leads to \(I_{S\uparrow}\) and \(I_{S\downarrow}\) in the left and right arms, respectively, storing P in MTJ\({}_{\rm L}\) and AP in MTJ\({}_{\rm R}\). It is important to note that the complementary WE is implemented in a _single_ XNOR-VSH bit-cell. This, along with the integrated back-gate, yields a compact layout compared to the existing XNOR designs where, in general, two bit-cells in the 2T-2R configuration are required to realize the complementary WE (see Fig. 1) [7, 8, 9, 10]. As a result, the proposed XNOR-VSH design yields significant area savings and the resultant IMC energy-efficiency, which will be discussed in section IV.
Next, we define the \(IN\) encoding (IE) and explain the implementation of XNOR-functionality in the bit-cell in situ. \(IN\) is '\(+1\)' when the read node R1 is driven to '\(V_{DD}-V_{READ}\)' and R2 to '\(V_{DD}\).' The opposite biasing (i.e., R1 and R2 driven to '\(V_{DD}\)' and '\(V_{DD}-V_{READ}\)' respectively) represents \(IN\) of '-1.' To compute the XNOR-based scalar product of \(IN\) and \(W\), we apply \(V_{DD}\) at both S and D, drive R1 and R2 to either '\(V_{DD}-V_{READ}\)' or '\(V_{DD}\)' (depending on the \(IN\) value) and assert the back-gate (G = 0). Note, with this biasing scheme, the voltage drop across MTJs is limited to \(V_{READ}\), which is small enough so as not to disturb the stored \(W\) during the compute. The XNOR-based scalar product between \(IN\) and \(W\) is obtained by sensing the current flowing through S (or D). The proposed biasing leads to current paths along S, D, and the two MTJs (and their respective read nodes R1 and R2), which is explained as follows. A node connected to '\(V_{DD}-V_{READ}\)' acts like a sink for current while the other three nodes inject the current in the WSe\({}_{2}\) channel. Recall, this current flow is modeled using the distributed non-linear \(V_{GS}\)/\(V_{DS}\)-dependent resistance network (as mentioned in section II), and our results are based on self-consistent simulations accounting for various physical effects. However, to explain the concept, let us simplify the discussion by considering an equivalent circuit shown in Fig. 3(c). Note, the current sensed corresponds to that flowing through \(R_{S}\). When \(IN=+1\) and \(W\) = \(+1\), S, D, and node R2 (connected to AP-MTJ) are driven to \(V_{DD}\) while the node R1 (connected to P-MTJ) is driven to \(V_{DD}-V_{READ}\), making R1 the sink for current. In this case, the voltage drop across \(R_{S}\) is determined by the voltage division between \(R_{S}\)/\(R_{D}\)/\(R_{AP}\) and \(R_{P}\). Since \(R_{P}<R_{AP}\), the voltage drop across \(R_{S}\) (annotated as \(\Delta V_{AP}\) in Fig. 3(c)) is large, leading to high sensing current through \(R_{S}\) (\(I_{H}\)). This corresponds to \(OUT=+1\). A similar configuration is obtained when \(IN=\) -1 and \(W=\) -1, with the only exception that now, node R2 acts like a sink and R1 as a source for the current. Let us now consider the two cases: (i) \(IN=+1\) and \(W=\) -1 and (ii) \(IN=\) -1 and \(W=+1\). In both the scenarios, the read node connected to \(R_{P}\) acts as a current source while connected to \(R_{AP}\) serves the role of current sink. Now, the voltage division action occurs between \(R_{S}\)/\(R_{D}\)/\(R_{P}\) and \(R_{AP}\), leading to lower voltage across \(R_{S}\) (\(\Delta V_{P}\)) and low sensing current (\(I_{L}\)), which corresponds to \(OUT=\) -1. Thus, when \(IN\) and \(W\) are the same, \(OUT=+1\) is obtained, while when they are different (i.e., opposite in sign), \(OUT=\) -1 is computed, thus implementing the XNOR functionality. Note, for robust operation, \(I_{H}\) needs to be sufficiently larger than \(I_{L}\), which we ensure by design, as discussed subsequently.
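To make the voltage-division argument concrete, the simplified equivalent circuit of Fig. 3(c) can be solved in a few lines. The sketch below is ours, with purely illustrative resistance values; the actual \(R_{S}\), \(R_{D}\) are bias-dependent WSe\({}_{2}\) channel resistances captured by the distributed network described above.

```python
import numpy as np

def sense_current(r_sink, r_src, r_s=2e3, r_d=2e3, v_read=0.1):
    """Current through R_S for the simplified divider of Fig. 3(c).

    Three nodes at V_DD feed a common point through R_S, R_D, and the
    source-side MTJ (r_src); the fourth node, at V_DD - V_READ, sinks
    current through the other MTJ (r_sink). All values illustrative.
    """
    g_par = 1.0 / r_s + 1.0 / r_d + 1.0 / r_src
    dv = v_read / (1.0 + r_sink * g_par)   # drop across the parallel group
    return dv / r_s                        # portion flowing through R_S

R_P, R_AP = 10e3, 25e3                         # assumed MTJ resistances
I_H = sense_current(r_sink=R_P, r_src=R_AP)    # IN, W agree  -> OUT = +1
I_L = sense_current(r_sink=R_AP, r_src=R_P)    # IN, W differ -> OUT = -1
print(I_H > I_L)                               # sense margin I_H - I_L > 0
```

With these placeholder values the sink through the low-resistance P-MTJ pulls a larger fraction of \(V_{READ}\) across the parallel group, reproducing the \(I_{H}>I_{L}\) ordering argued above.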
### XNOR-VSH Array
Utilizing the proposed XNOR-VSH bit-cell, we design an array with 64 rows and 64 columns (see Fig. 4). The back-gate, S, and D of the bit-cell are connected to WL, BL, and BL-bar (BLB), respectively. The read nodes (R1 of MTJ\({}_{\rm L}\) and R2 of MTJ\({}_{\rm R}\)) are connected to read-word-line A (RWL\({}_{\rm A}\)) and read-word-line B (RWL\({}_{\rm B}\)), respectively. RWLs are associated with the input activation and are routed along the row. BL/BLBs run along the column and are used to program the weight during write. WLs are routed along the row and asserted during both write and read to make the WSe\({}_{2}\) channel conducting. The IMC output current is sensed at BL. In this work, a driver resistance of 0.5 k\(\Omega\) and a sensing resistance of 0.1 k\(\Omega\) are considered, following the work in [23].
Fig. 3: (a) Truth table of the proposed XNOR-VSH binary encoding of input (IE), weight (WE), and output (OE). (b) Illustration for examples of OE = +1 (upper) and OE = -1 (lower). (c) Equivalent circuit for sensing of OE = +1 (left) and OE = -1 (right).
#### 3.2.1 Write Operation
In BNNs, the trained binary weights are loaded into the array during the write/program operation. For this, BL and BLB of the cell are driven to \(V_{DD}\) (0.9 V) or 0 V according to the bit-information which needs to be stored. Then, we apply 0 V to the active-low WL of the selected bit-cell (recall that the proposed XNOR device is p-type). The BL/BLB biasing determines the direction of \(I_{WRITE}\) in the accessed bit-cell, and writes the P/AP-states into the MTJs in the arms, as described before. For example, to write \(W\) = -1 into the bit-cell, BL and BLB are driven to \(V_{DD}\) and 0 V, respectively. To encode \(W\) = +1 in the bit-cell, BL and BLB are driven to 0 V and \(V_{DD}\), respectively. As discussed before, the complementary weight encoding is feasible within one bit-cell in the proposed XNOR-VSH design. Here, RWL\({}_{\rm A}\) and RWL\({}_{\rm B}\) are kept pre-charged (and floating) at \(V_{DD}\) to confine \(I_{WRITE}\) mostly to the path between S and D. For unaccessed cells, all lines are driven to \(V_{DD}\).
#### 3.2.2 XNOR-IMC and Read Operations
In the proposed XNOR-VSH array, the dot product output is read by sensing the currents flowing through BL (\(I_{BL}\)). As noted before, the signed weights are encoded in the MTJs of XNOR-VSH bit-cells, and the signed inputs are applied via corresponding \(\text{RWL}_{\text{A}}\) and \(\text{RWL}_{\text{B}}\). Also, recall from the previous sub-section that the proposed bit-cell and biasing scheme implement the XNOR functionality, which corresponds to the scalar product of the input and weight in the signed binary regime.
To compute the dot product of the input vector and the weight matrix, we activate multiple WLs (\(N\)) and sense the accumulated \(I_{BL}\), realizing the MAC operation within a column. Similar to [7], the analog summation of \(I_{BL}\) can be expressed using the numbers of bit-cells with \(I_{H}\) (\(a\)) and \(I_{L}\) (\(N-a\)):
\[I_{BL}=aI_{H}+(N-a)I_{L} \tag{1}\]
Now, by replacing \(I_{H}\) and \(I_{L}\) with the corresponding bit values of +1 and -1, respectively, the MAC output (\(OUT_{M}\)) of the column is obtained as:
\[OUT_{M}=a(+1)+(N-a)(-1)=2a-N \tag{2}\]
The \(I_{BL}\) of the column is digitized using an analog-to-digital converter (ADC) [24]. From this, the corresponding value of \(a\) can be obtained following (1), and \(OUT_{M}\) can be deduced from (2). These operations correspond to the bit-counting technique proposed in [5]. The illustration in Fig. 5(a) describes the IMC operation in the proposed XNOR-VSH using an example with 8 bits, which can be implemented by simultaneously activating 8 rows in the XNOR-VSH array. For this example, the \(IN\) and \(W\) vectors are {-1, +1, -1, -1, +1, -1, -1, +1} and {+1, +1, -1, -1, -1, -1, +1, +1}, respectively. Each bit-cell produces a current according to its \(IN\) and \(W\), i.e., {\(I_{L}\), \(I_{H}\), \(I_{H}\), \(I_{H}\), \(I_{L}\), \(I_{H}\), \(I_{L}\), \(I_{H}\)}, representing the \(OUT_{M}\) vector {-1, +1, +1, +1, -1, +1, -1, +1}. From the accumulated \(I_{BL}\), we can deduce \(a\) to be 5 (from (1)). Then, we calculate the output as \(2a-N=10-8=+2\) (from (2)), which matches the result of the regular MAC operation. Therefore, \(I_{BL}\) of a single column can be translated into \(a\) using the ADC, and finally \(OUT_{M}\) in the proposed XNOR-VSH can be achieved using digital peripheral circuits implementing the shift operation and subtraction.
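For illustration only, the behavioral Python sketch below mimics the bit-counting readout of Eqs. (1)-(2) for the 8-bit example above; the \(I_{H}\)/\(I_{L}\) values are hypothetical per-cell currents, and an ideal-ADC inversion stands in for the actual peripheral circuits.

```python
import numpy as np

I_H, I_L = 9.2e-6, 1.5e-6  # hypothetical per-cell sensing currents (A)

def xnor_mac_column(in_vec, w_vec):
    """Recover the signed MAC output of one column from the analog I_BL."""
    in_vec, w_vec = np.asarray(in_vec), np.asarray(w_vec)
    n = len(in_vec)
    out = in_vec * w_vec                         # XNOR in the {-1, +1} domain
    i_bl = np.sum(np.where(out == 1, I_H, I_L))  # accumulated current, Eq. (1)
    a = round((i_bl - n * I_L) / (I_H - I_L))    # ideal ADC: invert Eq. (1)
    return 2 * a - n                             # bit-counting output, Eq. (2)

IN = [-1, +1, -1, -1, +1, -1, -1, +1]
W  = [+1, +1, -1, -1, -1, -1, +1, +1]
assert xnor_mac_column(IN, W) == int(np.dot(IN, W))  # equals the regular MAC (+2)
```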
It may be noted that reading the stored weight in the array is a special case of the XNOR-IMC (i.e., \(IN=+1\) with only one row activated). Therefore, we can apply the biases corresponding to \(IN=+1\) (as described above) and sense \(I_{BL}\) to perform the read operation.
#### 3.2.3 IMC Robustness
For efficient data processing in a BNN array, performing XNOR-IMC in multiple columns is a typical approach which offers the benefits of parallelism. In our proposed XNOR-VSH array, we follow the same methodology by simultaneously asserting all the columns of the array and reading their MAC outputs in one shot. However, this leads to interactions between different cells in an array due to sneak current paths (as also in several other solutions based on crossbar arrays). In other words, the cross-point connections between RWLs and BLs can make the output current data-dependent, as the equivalent resistance of the array seen by a column can be different for different weight combinations. Hence, we need to ensure by design that the output of any column is not corrupted by the interaction with other cells in the array.

Figure 4: An XNOR-VSH array with 64 rows and 64 columns. Each column performs a MAC operation.

Figure 5: (a) Illustration of the bitwise XNOR operation for \(N\)-bit \(IN\) and \(W\) vectors (here, \(N\) = 8) in the proposed XNOR-VSH array, replacing the costly MAC operation. (b) \(I_{BL}\) of column 0 with 8 rows (\(N\) = 8) and 64 columns asserted in the XNOR-VSH array (array size = 64*64). Columns 1:63 hold equal distributions of '+1' and '-1' weights.
To verify the compute functionality in the proposed XNOR-VSH array, we assert 8 rows and all 64 columns (i.e., maximum column-wise parallelism), and sense the \(I_{BL}\) of all columns. To achieve a robust compute output for a column, we adopt partial word-line activation (PWA), asserting only 8 rows rather than the entire 64 rows. This becomes necessary as the low tunneling magnetoresistance (TMR) of MTJs involved in spin-based designs leads to severe adverse effects of the hardware non-idealities (such as the parasitic resistances associated with the driver and sink circuitry [23]). Asserting more WLs leads to a larger \(I_{BL}\), which increases the voltage drop across the parasitic resistances (also known as the loading effect) and aggravates the non-ideal effects. This leads to the deviation of the actual \(I_{BL}\) from the ideal, increasing IMC errors [23]. Therefore, to enhance the sensing accuracy by mitigating this deviation of \(I_{BL}\) from the expected (ideal) value, only a subset of rows (in our case, 8 rows) is asserted in one cycle. Hence, for a complete MAC operation, \(N_{R}/8\) cycles are needed (where \(N_{R}\) is the number of rows in the array). It is noteworthy that PWA or other techniques to mitigate the effect of non-idealities are also needed for other spin-based designs [8], and such an effect is primarily due to the low TMR of MTJs. Moreover, PWA is a common technique to mitigate the energy and area overheads of ADCs, not just in BNNs, but also in high precision neural networks [25].
To ensure sufficient robustness, we perform a detailed analysis of the SM for different outputs. Figure 5(b) shows the \(I_{BL}\) of column 0 with respect to \(a\), which ranges from 0 to 8 in our design. The remaining columns are programmed to have equal distributions of '+1' and '-1' weights. We obtain \(I_{BL}\) for each output considering different cases to ensure sensing robustness for all \(IN\)-\(W\) combinations. For that, we perform two sweeps representing the extreme cases for each output. Recall, for each column, we compute the dot product for input and weight vectors that are of size 8 each. Let us define the number of +1s in the applied input vector as \(N_{IP}\) and that in the weight vector as \(N_{WP}\). Thus, the numbers of -1s in the input and weight vectors are 8-\(N_{IP}\) and 8-\(N_{WP}\), respectively. First, we apply the \(IN\) sweep to obtain different outputs. In this, \(W\) is fixed as '+1' for the 8 rows (i.e., \(N_{WP}=8\)), and \(N_{IP}\) is swept from 0 through 8. In other words, the \(IN\) vector is swept from {-1,..., -1} (\(N_{IP}=0\); \(a=0\)) through {+1, -1,..., -1} (\(N_{IP}=1\); \(a=1\)) and so on, up till {+1,..., +1} (\(N_{IP}=8\); \(a=8\)). The second sweep is the \(W\) sweep, in which \(IN\) is fixed as '+1' for all the 8 rows (\(N_{IP}=8\)), and \(N_{WP}\) is swept from 0 through 8. That is, \(W\) vectors of {-1,..., -1} (\(N_{WP}=0\); \(a=0\)) through {+1, -1,..., -1} (\(N_{WP}=1\); \(a=1\)) and so on, up till {+1,..., +1} (\(N_{WP}=8\); \(a=8\)) are stored to obtain different outputs. It is noteworthy that for the same \(a\), the \(IN\) and \(W\) sweeps are expected to have the same \(I_{BL}\) defined in equation (1). This is true if the columns are isolated. However, due to the cross-point connections and the sneak current paths between various cells in the array, these two extreme sweeps may result in different \(I_{BL}\) values. In other words, each \(a\) is now represented by a range of \(I_{BL}\) (\(I_{BL,MIN}\) to \(I_{BL,MAX}\)) corresponding to the \(IN\) and \(W\) sweeps [26, 27]. Therefore, the SM for a given pair of outputs \(a\) and \(a-1\) is defined as \((I_{BL,MIN,a}-I_{BL,MAX,a-1})/2\). Further, the loading effect leads to non-linear behavior of \(I_{BL}\) with respect to \(a\), which also affects the SM. In our design, we minimize the loading effect by co-optimizing the XNOR-VSH device and array, as will be discussed in the next section. Our results in Fig. 5(b) show that our proposed XNOR-VSH array exhibits a high linearity between \(I_{BL}\) and \(a\), and a sufficiently large SM (\(>\) 1 \(\mu\)A [26]) is obtained. The SM will be compared to other MTJ-based alternatives and discussed shortly in the following section.
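As a simple illustration of this SM bookkeeping, the sketch below computes adjacent-output sense margins from per-output \(I_{BL}\) ranges; the current ranges are made-up placeholders, not our simulated values.

```python
# (min, max) of I_BL (in amperes) over the IN and W sweeps for each output a.
# The numbers below are illustrative placeholders.
i_bl_range = {0: (10.0e-6, 10.4e-6),
              1: (13.1e-6, 13.6e-6),
              2: (16.2e-6, 16.8e-6)}  # ... would continue up to a = 8

def sense_margin(a):
    """SM between outputs a and a-1: (I_BL,MIN,a - I_BL,MAX,a-1) / 2."""
    return (i_bl_range[a][0] - i_bl_range[a - 1][1]) / 2

worst_sm = min(sense_margin(a) for a in range(1, max(i_bl_range) + 1))
print(f"worst-case SM = {worst_sm * 1e6:.2f} uA")  # design target: > 1 uA
```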
## IV Results and Analysis
In this section, we evaluate the proposed XNOR-VSH and present its comparison with the existing MTJ-based XNOR-IMC designs. For the comparison, STT- and SOT-MRAMs are utilized to implement XNOR arrays with an array size of 64*64. The co-optimization of device and array is performed for each design, including the MTJ dimensions (see Table I) and access transistor size. The MTJs in STT- and VSH-based designs employ PMA while the MTJ in the SOT-based design utilizes IMA. Each FL is designed to have an \(E_{B}\) of ~50 \(k_{B}T\). The TMR of the MTJs is obtained as ~500 % for all designs, similar to the study in [23] and as experimentally obtained in [28]. The number of fins (_nfin_) of the access transistors in STT- and SOT-based bit-cells is determined by considering multiple factors such as writability and energy-efficiency (details discussed in Section IV.C).
### IMC Robustness
We follow the SM analysis methodology presented in Section III.B for the proposed XNOR-VSH array. Along with our design, the compute operations of the STT- and SOT-based XNOR arrays are also investigated using the same method and utilizing PWA (with 8 rows asserted) for comparison. As all three designs are based on the analog current summation, it is important to ensure by design that there is sufficient SM for compute accuracy. The SM is calculated for all values of \(a\), which ranges from 0 to 8 (i.e., between 0 \(\leftrightarrow\) 1,..., 7 \(\leftrightarrow\) 8; see Fig. 5(b)). The worst-case SMs of the STT, SOT, and VSH-based arrays are 3.93 \(\mu\)A, 3.94 \(\mu\)A, and 1.12 \(\mu\)A, respectively. The lower SM in the proposed VSH-based design is owing to the fact that the read path is formed along the WSe\({}_{2}\) channel layer, which is less conductive due to the low mobility of WSe\({}_{2}\) as well as the Schottky contact resistance. On the other hand, STT- and SOT-based bit-cells have highly conductive FinFETs as access transistors in the read path. However, despite this disadvantage, we ensure that the proposed XNOR-VSH exhibits SM \(>\) 1 \(\mu\)A by performing a device-array co-optimization with the MTJ tunneling oxide (MgO) thickness and \(V_{READ}\) optimization. The SM results in Fig. 6 show that a larger \(V_{READ}\) and thinner MgO boost the SM. However, lowering the MgO thickness also leads to cell TMR degradation and aggravated effects of hardware non-idealities (as the sensing/driver resistance starts becoming more dominant) [23]. Further, a larger \(V_{READ}\) or thinner MgO leads to an increase in the IMC energy and the read disturbance during IMC (discussed shortly). Considering all these aspects, we choose \(V_{READ}\) of 0.4 V and an MgO thickness of 1.3 nm as the optimal design point for our XNOR-VSH. Similarly, for XNOR-STT/SOT, \(V_{READ}\) of 0.1/0.2 V and MgO thickness of 1.1/1.3 nm are chosen from the optimization.

Fig. 6: Device-circuit co-optimization of sense margin (SM) in the proposed XNOR-VSH array with 8 rows asserted. The read voltage (\(V_{READ}\)) and MTJ tunneling oxide thickness (T\({}_{\rm OX}\)) are considered for SM optimization.
Let us explain the MTJ disturb issue in a bit more detail. As currents flow along the MTJs (\(I_{MTJ}\)) during read/IMC, all the designs are under the influence of \(I_{MTJ}\)-driven STT, which can disturb the MTJ states. Such a disturbance needs to be studied to ensure that the weights are retained during IMC. In this work, we quantify this using the read disturb margin (RDM), which is defined as \((I_{C}-I_{MTJ})/I_{C}\times 100\) (higher, the better). Here, \(I_{C}\) is the critical current associated with STT, above which the MTJ state is disturbed and an undesired switching happens. Given the \(I_{MTJ}\) direction, XNOR-STT is vulnerable to AP to P switching with \(I_{C,AP\to P}\approx\) 14.2 \(\mu\)A. For XNOR-SOT, P to AP switching can occur with \(I_{C,P\to AP}\approx\) 54.7 \(\mu\)A. Due to the larger FL dimension, XNOR-SOT exhibits a higher \(I_{C}\). The proposed XNOR-VSH array can be disturbed by both AP to P and P to AP switching, as the MTJ currents flow in two directions (see Fig. 3 for details). The corresponding critical currents are \(I_{C,AP\to P}\approx\) 14.2 \(\mu\)A and \(I_{C,P\to AP}\approx\) 15.5 \(\mu\)A. Based on the currents obtained during IMC, we calculate the RDMs of the STT, SOT, and VSH-based XNOR-arrays as 88.1 %, 80.5 %, and 91.5 %, respectively. Note, the resistive WSe\({}_{2}\) channel in XNOR-VSH reduces \(I_{MTJ}\). This, while degrading the SM (as discussed before), lowers the read disturbance, because of which XNOR-VSH achieves the highest RDM among the three designs.
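The RDM numbers follow directly from the definition; a minimal sketch is given below, where the \(I_{MTJ}\) values are hypothetical worst-case per-MTJ currents (only the critical currents are quoted in the text).

```python
def rdm(i_c, i_mtj):
    """Read disturb margin in percent: (I_C - I_MTJ) / I_C * 100."""
    return (i_c - i_mtj) / i_c * 100.0

# Critical currents from the text; I_MTJ values are hypothetical examples.
print(rdm(i_c=14.2e-6, i_mtj=1.7e-6))   # XNOR-STT, AP-to-P disturb (~88 %)
print(rdm(i_c=54.7e-6, i_mtj=10.7e-6))  # XNOR-SOT, P-to-AP disturb (~80 %)
```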
### Layout and Area Analysis
As mentioned before, the STT- and SOT-based XNOR cells utilize two bit-cells for complementary WE, limiting their scalability. However, in the proposed XNOR-VSH design, such an area penalty is avoided since a differential MTJ encoding in a single bit-cell is possible via the VSH effect (see Fig. 7 and Section III.A for details). Moreover, the integrated back-gate in the VSH device precludes the need for an additional access transistor. In this sub-section, we analyze the layout of each XNOR cell, which consists of two STT/SOT bit-cells or one VSH bit-cell, and compare their areas. The low area of XNOR-VSH is not only beneficial for high integration density but also leads to other benefits such as lower IMC operation energy and latency, as discussed in the following sub-section.

The layout analysis is performed using \(\lambda\)-based design rules [29], where \(\lambda\) is half the minimum feature size (\(F\)) associated with the technology. Gate pitch (GP), fin pitch (FP), and metal pitch (MP) are based on Intel 7nm technology [30]. As the STT- and SOT-based XNOR cells require two bit-cells (placed in a column) for the XNOR-functionality, their cell heights are twice the height of the individual bit-cells. From the layout, we obtain the area of each XNOR cell as 0.022 \(\mu\)m\({}^{2}\), 0.031 \(\mu\)m\({}^{2}\), and 0.011 \(\mu\)m\({}^{2}\) for the STT, SOT, and VSH-based cells, respectively. Among the XNOR bit-cells, the SOT-based design shows the largest area. The proposed XNOR-VSH exhibits 64.4 % lower area than XNOR-SOT and 49.3 % lower area than XNOR-STT.
### Latency and Energy Analysis
In this sub-section, we present the energy and latency analysis of the three designs considering write, read, and IMC. For comparison, the STT, SOT, and VSH-based XNOR arrays (array size = 64*64) are designed with the respective device-array co-optimizations as follows. In the XNOR-STT array, the writability is ensured by using 5 fins in the access transistors of the XNOR bit-cells and 1.1-nm-thick MgO in the MTJs to supply sufficient write current. On the other hand, in the XNOR-SOT array, ample write current along the HM is achieved even with 1 fin in the access transistors and 1.3-nm-thick MgO. A 1.3-nm-thick MgO is also used in XNOR-VSH, which is possible due to the decoupling of the read and write paths. To maximize the immunity to read disturbance (discussed earlier) and reduce the effect of the load (sink/driver resistance), an MgO thickness of 1.3 nm is effective in the XNOR-SOT and XNOR-VSH designs. From such device-array co-optimizations, we observe compactness-driven energy-efficiency (see Fig. 8) of the proposed XNOR-VSH array in all operations, as discussed next.
#### 4.3.1 Write
The spin-based designs perform write (or WE) by switching the magnetization of the FLs of the MTJs, which is controlled by the magnitude and _direction_ of \(I_{WRITE}\). Note, \(I_{WRITE}\) flows through the MTJ in XNOR-STT, the HM in XNOR-SOT, and the WSe\({}_{2}\) channel in XNOR-VSH. In XNOR-STT and XNOR-SOT, \(I_{WRITE}\) can flow only in one direction in a given cycle (since the BLs as well as the two bit-cells encoding differential weights are shared along the column). Thus, the FL is switched towards a specific direction in a cycle. Hence, to store the complementary weights for the XNOR operation, two cycles are needed - one to write P and the other to write AP - in XNOR-STT and XNOR-SOT. This increases the write latency and write power. On the other hand, the proposed XNOR-VSH can achieve the differential WE in a single bit-cell by the virtue of the VSH effect. Further, the inherent separation of spin currents in the two arms of the proposed device leads to differential WE in a _single_ cycle (detailed in Section III). To add to this, the VSH-based design inherently benefits write energy-efficiency due to (1) the PMA-based write, (2) the integrated back-gate-driven access-transistor-less compact layout (leading to lower BL/WL capacitances) and (3) the large \(\theta_{SH}\). As a result, we observe a significantly larger write efficiency in the proposed XNOR-VSH compared to XNOR-STT and SOT. We achieve 72.2 % and 41.1 % lower write time, and 95.6 % and 96.6 % lower write energy in the XNOR-VSH array compared to the XNOR-STT and XNOR-SOT arrays, respectively.

Figure 7: Layout comparison of XNOR cells based on (a) STT-MRAM (5 fins), (b) SOT-MRAM (1 fin), and (c) VSH-MRAM. Note, STT- and SOT-based designs require two bit-cells to achieve the XNOR operation.
#### 4.3.2 Read
To analyze the read latency and energy, we assert one XNOR cell with the IE corresponding to \(IN=+1\) (as discussed earlier). The results show that, compared to the XNOR-STT and XNOR-SOT arrays, the read time (RT) is lowered by 6.6 % and 3.4 %, and the read energy (RE) is reduced by 80.3 % and 69.1 %, respectively. To explain the results, let us first discuss the differences in the read mechanisms of XNOR-STT/SOT and XNOR-VSH. Both XNOR-STT and XNOR-SOT perform the read operation by applying \(V_{READ}\) (0.1 V for XNOR-STT and 0.2 V for XNOR-SOT) between BL and SL (see Fig. 1), following the conventional approach. Therefore, all 64 XNOR cells along the column contribute to the capacitances, including the wire capacitance and the drain capacitance of the access transistors. These capacitances need to be charged/discharged during the read operation, which entails energy consumption and latency. On the other hand, in XNOR-VSH, this overhead is largely mitigated. This is because the initial voltage values of all the lines are \(V_{DD}\) and the read operation requires the application of \(V_{DD}-V_{READ}\) to RWL\({}_{\rm A}\) and 0 V to WL, both of which are routed along the row. The BLs do not need to be charged/discharged during read (except for a small change in the BL voltage due to the loading effect). Now, since multiple reads occur in parallel along different columns in an array, the energy cost associated with the capacitive charging/discharging of RWL\({}_{\rm A}\) and WL is amortized, leading to high read energy efficiency. Thus, the read energy savings are due to the compactness of the XNOR-VSH (leading to lower array capacitances) and the biasing scheme, which alleviate the BL energy consumption. The latency benefits of XNOR-VSH are mainly due to its compactness.
#### 4.3.3 In-Memory Compute (IMC)
For IMC, we apply PWA in all three array designs, simultaneously activating 8 rows and 64 columns. The compute time (CT) and energy (CE) are calculated for the asserted 8 rows of one column (column 0). Note, the other 63 columns (columns 1:63) affect the sensing current in column 0 during compute by modifying the equivalent resistance of the entire array but do not add to the CE. Among the three, the lowest CT and CE are observed in the XNOR-VSH design: the CT is 9.0 % and 4.8 % lower, and the CE is 62.5 % and 36.6 % lower compared to the STT- and SOT-based designs, respectively. These benefits stem from factors similar to those discussed for read.
However, it is important to note some differences between IMC and read. In IMC, all designs consume larger WL charging/discharging energy compared to read due to the 8-row assertion. Moreover, higher sensing currents (i.e., the accumulated \(I_{BL}\)) also play a role in increasing the loading effect (which discharges BL to a lower value during IMC than during read). This, in turn, increases the associated charging energy. Now, recall, XNOR-VSH achieves read benefits due to a much lower BL charging component compared to the other designs and the amortization of the RWL\({}_{\rm A}\) and WL charging energies. In IMC, both these factors play a role but are less effective (due to the high loading effect and multi-WL assertion), which reduces the IMC benefits of XNOR-VSH compared to read. However, the compactness of the proposed design still remains a major advantage, leading to the overall energy and latency benefits of the proposed design for IMC.
### Discussions and Future Outlook
Our results establish the compactness and energy efficiency of XNOR-VSH vis-a-vis other spin-based memories (that need two bit-cells to encode signed weights). As noted earlier, the other approach to XNOR-IMC is to utilize a single bit-cell for unsigned binary IMC (AND-based IMC) and perform transformations between the signed and unsigned regimes. This typically requires computing the average of the inputs and weights [12] and post-processing of the outputs, which incur some storage and compute costs. The proposed XNOR-VSH averts this penalty by performing in situ XNOR-IMC within the array. In this paper, we limit the quantitative comparisons to only the first XNOR-IMC approach (that performs IMC in the signed binary regime). The reason is that a proper quantitative comparison with the second XNOR-IMC approach would need to consider the fact that the VSH effect can also be utilized to design optimized AND-based IMC arrays, which, along with transformations between the signed and unsigned binary regimes, can achieve XNOR-IMC. However, this requires an extensive optimization of VSH-based AND-IMC arrays, which is beyond the scope of this work. It may, however, be noted that the novelty of the approach in this paper lies in harnessing the VSH effect to avoid the transformations between the signed and unsigned binary regimes, while encoding the differential weights in a single bit-cell.
Another point to note here is that we limit our analysis to comparing our technique with optimized solutions based on spin-memories only. As mentioned earlier, other XNOR-IMC solutions based on SRAM [6] and RRAM [7] exist. Our technique offers a more compact design compared to SRAM and better endurance than RRAM, albeit with lower distinguishability. Therefore, a simple benchmarking of the proposed technique against other technologies is challenging, as the role of endurance in BNNs needs to be accounted for in a fair comparison, which is a subject of future work.
## V Conclusion
We propose a compact and energy-efficient array capable of performing in situ XNOR-based in-memory computation for BNNs. Our design, named XNOR-VSH, utilizes the VSH effect in monolayer WSe\({}_{2}\). By the virtue of VSH, the proposed cell can perform XNOR-IMC with only one bit-cell compared to the state-of-the-art spin-based designs where, in general, two bit-cells are required for encoding complementary weights. The integrated back-gate in the proposed design further reduces the area by enabling an access-transistor-less layout. These lead to a significant reduction in area compared to the STT- and SOT-MRAM based XNOR designs. These benefits come at the cost of lower sensing robustness; but, by co-optimizing the XNOR-VSH device and array, we achieve a sufficiently large sense margin (\(>1\) \(\mu\)A) for robust IMC. The compactness of XNOR-VSH, along with other technological and array-level attributes, offers improvements in the energy efficiency of write, read and IMC, thus leading to an energy-efficient and compact solution for BNNs.

Figure 8: Comparison of read/IMC energy and latency among the STT, SOT, and VSH-based XNOR arrays (array size = 64*64).
## Acknowledgment
The authors thank Akul Malhotra (Purdue University, West Lafayette, IN, USA) for his input on trained BNNs with the ResNet 18 architecture.
|
2306.04107 | BeMap: Balanced Message Passing for Fair Graph Neural Network | Fairness in graph neural networks has been actively studied recently.
However, existing works often do not explicitly consider the role of message
passing in introducing or amplifying the bias. In this paper, we first
investigate the problem of bias amplification in message passing. We
empirically and theoretically demonstrate that message passing could amplify
the bias when the 1-hop neighbors from different demographic groups are
unbalanced. Guided by such analyses, we propose BeMap, a fair message passing
method, that leverages a balance-aware sampling strategy to balance the number
of the 1-hop neighbors of each node among different demographic groups.
Extensive experiments on node classification demonstrate the efficacy of BeMap
in mitigating bias while maintaining classification accuracy. The code is
available at https://github.com/xiaolin-cs/BeMap. | Xiao Lin, Jian Kang, Weilin Cong, Hanghang Tong | 2023-06-07T02:16:36Z | http://arxiv.org/abs/2306.04107v2 | # BeMap: Balanced Message Passing for Fair Graph Neural Network
###### Abstract.
Graph Neural Network (GNN) has shown strong empirical performance in many downstream tasks by iteratively aggregating information from the local neighborhood of each node, i.e., message passing. However, concrete evidence has revealed that a graph neural network could be biased against certain demographic groups, which calls for the consideration of algorithmic fairness. Despite the increasing efforts in ensuring algorithmic fairness on graph neural networks, they often do not explicitly consider the induced bias caused by message passing in GNN during training. In this paper, we first investigate the problem of bias amplification in message passing. We empirically and theoretically demonstrate that message passing could amplify the bias when the 1-hop neighbors from different demographic groups are unbalanced. Guided by such analyses, we propose BeMap, a fair message passing method, that leverages a balance-aware sampling strategy to balance the number of the 1-hop neighbors of each node among different demographic groups. Extensive experiments on node classification demonstrate the efficacy of our proposed BeMap method in mitigating bias while maintaining classification accuracy.
* **Problems.** We study the problems of bias amplifications in message passing and fair message passing from the perspective of neighborhood balance of each node.
* **Analyses.** Our analyses _both empirically and theoretically_ reveal that the message passing schema could amplify the bias, if the demographic groups with respect to a sensitive attribute in the local neighborhood of each node are unbalanced.
* **Algorithm.** Guided by our analyses, we propose an easy-to-implement sampling-based method named BeMap, which leverages a balance-aware sampling strategy to mitigate bias.
* **Evaluations.** We conduct extensive experiments on real-world datasets, which demonstrate that BeMap could significantly mitigate bias while maintaining a competitive accuracy compared with the baseline methods.
## 2. Preliminaries
In this section, we first briefly introduce the Graph Convolutional Network (GCN) and the commonly used group fairness definitions. Then, we formally define the problem of bias amplification in message passing and fair message passing in GCN.
**Notation Convention.** We use bold uppercase/lowercase letters for matrices/vectors (e.g., \(\mathbf{A}\), \(\mathbf{x}\)), italic letters for scalars (e.g., \(d\)), and calligraphic letters for sets (e.g., \(\mathcal{N}\)). For matrix indexing, the \(i\)-th row of a matrix is denoted as its corresponding bold lowercase letter with subscript \(i\) (e.g., the \(i\)-th row of matrix \(\mathbf{X}\) is \(\mathbf{x}_{i}\)).
**Graph Convolutional Networks.** In this paper, we study the message passing schema in the Graph Convolutional Network (GCN) (Kipf and Welling, 2017), which is one of the most classic graph neural networks. Let \(\mathcal{G}=\{\mathcal{V},\mathbf{A},\mathbf{X}\}\) denote a graph with node set \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\), adjacency matrix \(\mathbf{A}\), and node feature matrix \(\mathbf{X}\), where \(n\) is the number of nodes in the graph. For any node \(v_{i}\), we denote its degree and feature as \(d_{i}\) and \(\mathbf{x}_{i}\), respectively. In this work, we consider the binary adjacency matrix, meaning that \(\mathbf{A}_{ij}=1\) if there exists an edge connecting node \(v_{i}\) and node \(v_{j}\), and \(\mathbf{A}_{ij}=0\) otherwise.2 Given an \(L\)-layer GCN, the weight matrix in the \(l\)-th hidden layer is represented as \(\mathbf{W}^{(l)}\). The input and output node representations of node \(v_{i}\) in the \(l\)-th hidden layer are denoted as \(\mathbf{h}_{i}^{(l)}\) and \(\mathbf{h}_{i}^{(l+1)}\), respectively.3 Then the message passing schema in GCN is \(\widehat{\mathbf{h}}_{i}^{(l)}=\sum_{v_{j}\in\widehat{\mathcal{N}}(v_{i})}a_{ij}\mathbf{h}_{j}^{(l)}\), where \(\widehat{\mathcal{N}}(v_{i})=\mathcal{N}(v_{i})\cup\{v_{i}\}\) is the self-augmented neighborhood (i.e., the union of node \(v_{i}\) and its \(1\)-hop neighborhood \(\mathcal{N}(v_{i})\)) and \(a_{ij}\) is the aggregation weight with the source node being \(v_{i}\) and the target node being \(v_{j}\) (e.g., \(a_{ij}=\frac{1}{\sqrt{d_{i}+1}\sqrt{d_{j}+1}}\) for the renormalized graph Laplacian and \(a_{ij}=\frac{1}{d_{i}+1}\) for the row-normalized graph Laplacian). Based on the message passing schema, the graph convolution for \(v_{i}\) in GCN can be formulated as \(\mathbf{h}_{i}^{(l+1)}=\sigma(\widehat{\mathbf{h}}_{i}^{(l)}\mathbf{W}^{(l)})\), where \(\sigma(\cdot)\) is the nonlinear activation (e.g., ReLU).
Footnote 2: Although we only consider a binary adjacency matrix, our theoretical analysis and proposed method can be naturally generalized to graphs with weighted edges by replacing the 0/1 edge weights with other values.
Footnote 3: For notation simplicity, we denote \(\mathbf{h}_{i}^{(1)}=\mathbf{x}_{i}\) for all \(v_{i}\in\mathcal{V}\).
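As a minimal illustration of the schema above (our own sketch, not part of the original model definition), the following Python snippet implements one round of message passing with the row-normalized weight \(a_{ij}=\frac{1}{d_{i}+1}\):

```python
import numpy as np

def message_passing(adj, h):
    """One round of mean aggregation over the self-augmented neighborhood.

    adj: (n, n) binary adjacency matrix A; h: (n, d) node representations.
    """
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # d_i + 1 per node
    return (adj_hat @ h) / deg                # h_hat_i = sum_j h_j / (d_i + 1)

# Toy example: a path graph v1 - v2 - v3 with 4-dimensional features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
H_hat = message_passing(A, H)
```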
**Group Fairness.** Group fairness aims to ensure the parity of model predictions among the demographic groups of data points, where the demographic groups are often determined by a sensitive attribute (e.g., gender and race). Specifically, we adopt two widely used fairness criteria, _i.e._, statistical parity (Dwork et al., 2012) and equal opportunity (Hardt et al., 2016), which are defined in Definitions 1 and 2, respectively.
**Definition 1**.: _(**Statistical parity (Dwork et al., 2012)**) Given any target label \(y\in\{0,1\}\), any sensitive attribute \(s\in\{0,1\}\) and the prediction \(\widetilde{y}\in\{0,1\}\) inferred by a model, the model satisfies statistical parity if and only if the acceptance rate with respect to the model predictions is equal for different demographic groups. Mathematically, statistical parity can be expressed as_
\[P(\widetilde{y}=1\mid s=0)=P(\widetilde{y}=1\mid s=1). \tag{1}\]
_where \(P(\cdot)\) refers to the probability of an event._
**Definition 2**.: _(**Equal opportunity (Hardt et al., 2016)**) Following the settings of Definition 1, a model satisfies equal opportunity if and only if the true positive rate with respect to the model predictions is equal for different demographic groups. Mathematically, equal opportunity can be expressed as_
\[P(\widetilde{y}=1\mid y=1,s=0)=P(\widetilde{y}=1\mid y=1,s=1). \tag{2}\]
_where \(P(\cdot)\) refers to the probability of an event._
Given Definitions 1 and 2, the biases with respect to statistical parity and equal opportunity are naturally defined as the discrepancies in the acceptance rate and the true positive rate across different demographic groups. Mathematically, the quantitative measures of bias with respect to statistical parity and equal opportunity are defined as Eq. (3) and Eq. (4), respectively.
\[\Delta_{\text{SP}}=|P(\widetilde{y}=1\mid s=1)-P(\widetilde{y}=1\mid s=0)|, \tag{3}\]
\[\Delta_{\text{EO}}=|P(\widetilde{y}=1\mid y=1,s=1)-P(\widetilde{y}=1\mid y= 1,s=0)|. \tag{4}\]
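For concreteness, a minimal sketch that estimates the two bias measures in Eq. (3) and Eq. (4) from empirical frequencies (binary \(\widetilde{y}\), \(y\), \(s\)) could look as follows; this is an illustration rather than the evaluation code used in our experiments.

```python
import numpy as np

def delta_sp(y_hat, s):
    """Statistical parity gap, Eq. (3): |P(y_hat=1|s=1) - P(y_hat=1|s=0)|."""
    return abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())

def delta_eo(y_hat, y, s):
    """Equal opportunity gap, Eq. (4), computed on the y = 1 subpopulation."""
    pos = y == 1
    return abs(y_hat[pos & (s == 1)].mean() - y_hat[pos & (s == 0)].mean())

y_hat = np.array([1, 0, 1, 1, 0, 1])  # model predictions
y     = np.array([1, 0, 1, 1, 1, 0])  # ground-truth labels
s     = np.array([0, 0, 1, 1, 1, 0])  # sensitive attribute
print(delta_sp(y_hat, s), delta_eo(y_hat, y, s))
```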
From the information-theoretic perspective, minimizing Eq. (3) and Eq. (4) is essentially equivalent to eliminating the statistical dependence between the model prediction \(\widetilde{y}\) and the sensitive attribute \(s\). Consequently, existing works often ensure group fairness by imposing a measure of the statistical dependence (e.g., mutual information, Wasserstein distance) between \(\widetilde{y}\) and \(s\) as a regularization term in the optimization problem (Beng and Yang, 2015; Kohn and Sham, 2015; Kohn and Sham, 2015). Nevertheless, they do not explicitly consider the bias caused by the message passing schema. To this end, we seek to understand the role of message passing in the context of algorithmic fairness. Based on that, we define the problem of bias amplification in message passing as follows:
**Problem 1**.: _Bias amplification in message passing._
**Given:** (1) An undirected graph \(\mathcal{G}=\{\mathcal{V},\mathbf{A}\}\) of \(n\) nodes; (2) an \(L\)-layer GCN; (3) the vanilla message passing schema in any \(l\)-th hidden layer \(\widehat{\mathbf{h}}_{i}^{(l)}=\sum_{v_{j}\in\mathcal{N}(v_{i})\cup\{v_{i}\}}a_{ij}\mathbf{h}_{j}^{(l)}\); (4) a sensitive attribute \(s\); (5) a bias measure bias for statistical parity.
**Find**: A binary decision regarding whether or not the bias will be amplified after message passing.
Based on the answer to Problem 1, our goal is to develop a generic fair message passing such that the bias measure in Problem 1 will decrease after message passing. We define the problem of fair message passing as follows:
**Problem 2**.: _Fair message passing._
**Given**: (1) An undirected graph \(\mathcal{G}=\{\mathcal{V},\mathbf{A},\mathbf{X}\}\) of \(n\) nodes; (2) an \(L\)-layer GCN; (3) a sensitive attribute \(s\); (4) a bias measure bias for statistical parity.
**Find:** A fair message passing schema \(\widehat{\mathbf{h}}_{i}^{(l)}=\textit{MP}(\mathbf{h}_{i}^{(l)})\) such that the bias will decrease after message passing.
## 3. Bias Amplification in Message Passing
In this section, we provide the empirical evidence that message passing does amplify bias, followed by the theoretical analysis on bias amplification in message passing.
### Empirical Evidence
We first empirically illustrate that message passing could amplify the bias. The design of our experiment is based on the assumption that the mining results (e.g., class probabilities) from a fair mining model (e.g., GCN) should be independent of the sensitive attribute (Beng et al., 2017). Therefore, we propose to use a logistic regression classifier to predict the sensitive attribute of a node using its corresponding mining result output by a mining model, where the accuracy of the sensitive attribute estimator naturally serves as an indicator of how biased a mining model is.
We investigate the effect of bias amplification in message passing on the Pokec-z dataset by comparing the behavior of a two-layer multi-layer perceptron (MLP) and a two-layer GCN.4 Note that the only difference between the MLP and the GCN is whether the message passing is utilized or not, while the hidden dimensions and the nonlinear activation function are consistent with each other. The details of the empirical evaluation are as follows. First, we train the MLP or the GCN to predict labels using the node features with the sensitive attributes excluded as input. When the graph neural networks reach their best performance, we freeze the parameters of the first layer of both the MLP and the GCN. Hence, the first layer of the MLP and the GCN can serve as an extractor of label-related features. Second, we train a logistic regression classifier to predict the sensitive attributes with the label-related features extracted by the MLP and the GCN. If an accurate prediction of the sensitive attributes can be obtained with the label-related features, it implies that those label-related features contain rich sensitive attribute-related information. In this way, we are able to compare the relevance between the sensitive attributes and those label-related features extracted by the two models.
Footnote 4: Details about the Pokec-z dataset are deferred to Section 5.
The evaluation results are presented in Figure 1. From the figure, we have three key observations. First, from Figure 1(b), we can observe a strong positive correlation between the sensitive attribute of a node and the sensitive attribute of its neighbors, i.e., a node tends to have the same sensitive attribute as the majority of its neighbors. Second, as shown in Figure 1(c), the MLP predicts the sensitive attribute of all nodes as the majority demographic group, even when a node and all its neighbors do not belong to the majority demographic group. Third, in Figure 1(c), the GCN makes much more accurate predictions than the MLP when the neighbors of a node are more likely to be in the minority group. The first and second observations imply that the label-related features extracted by the MLP have little correlation with the sensitive attribute, so the classifier can only default to the majority group in order to reach a high accuracy. On the contrary, the GCN extracts node features containing rich sensitive attribute-related information for those nodes which belong to the same demographic group as their neighbors, as indicated in the third observation. This tendency of the GCN is clear for both the minority group and the majority group. These results demonstrate that message passing could aggregate the sensitive attribute-related information from the neighborhood, which further amplifies the bias and guides the logistic regression classifier to correctly predict the sensitive attribute.
### Theoretical Analysis
For analysis purposes, we consider a binary sensitive attribute and an \(L\)-layer linear GCN with row normalization (i.e., \(a_{ij}=\frac{1}{d_{i}+1}\)) for binary node classification, where a linear GCN is a special type of GCN model without the nonlinear activation (e.g., (Zhu et al., 2017)).5 Although our analysis relies on the linear GCN, recent works have shown that GCNs without non-linearity enjoy almost the same expressive power as their nonlinear counterparts (Zhu et al., 2017; Wang et al., 2018). Moreover, as shown in Figure 2, the linear GCN exhibits similar or even slightly better node classification accuracy on the NBA dataset than its nonlinear counterpart, which also demonstrates that linearizing GCN is a good alternative to understand the behavior of GNN models.
Footnote 5: Our analysis can be naturally generalized to non-binary sensitive attribute and multi-class node classification with minor modifications.
To analyze the bias amplification phenomenon in message passing, we make the following assumption (Assumption 1) on the linearity of the node features in a graph. Similar linear relationship assumptions have been implicitly established in several existing works [32; 49]. For example, [32] utilizes linear orthogonal projection to mitigate the leakage of the sensitive attribute information (e.g., the projection direction is parallel to \(\mathbf{b}^{(1)}\) in Figure 3); [49] finds features in relation to the sensitive attribute using the Pearson correlation coefficient and proposes a linear constraint to ensure the group fairness.

Figure 1. The empirical evidence of bias amplification in GCN on the Pokec-z dataset. Best viewed in color. In (b), majority neighbor ratio is grouped into 10 equal-width bins with width being \(0.1\), i.e., \([0,0.1)\), \([0.1,0.2)\), \(\ldots\), \([0.9,1.0]\).
**Assumption 1**: _The feature vector \(\mathbf{x}_{i}=\mathbf{h}_{i}^{(1)}\) of any node \(v_{i}\in\mathcal{V}\) is a linear combination of a label information vector \(\mathbf{t}_{i}^{(1)}\) and a bias information vector \(\mathbf{b}_{i}^{(1)}=f(\mathbf{z}_{i}^{(1)})\), i.e., \(\mathbf{x}_{i}=c_{1}\mathbf{t}_{i}^{(1)}+c_{2}\mathbf{b}_{i}^{(1)}\), where \(c_{1}\) and \(c_{2}\) are constants, \(\mathbf{z}_{i}^{(1)}\) is the sensitive characteristic vector of \(v_{i}\), and \(f\) is a linear mapping._

We further assume that the sensitive characteristic vectors of the minority group and the majority group follow Gaussian distributions centered at the prototypes \(\mu_{0}\) and \(\mu_{1}\), respectively. Based on this, we measure the bias via the distance from the sensitive characteristic vector of each node to the prototype of the demographic group it belongs to. Therefore, we could formulate the distance-based bias as

\[\text{bias}^{(l)}\left(\mathcal{G},\mathbf{Z}^{(l)},s\right)=\frac{1}{\mathbb{E}\left[\|\mathbf{z}_{i}^{(l)}-\mu(v_{i})\|_{2}^{2}\right]} \tag{6}\]

where \(\mathbf{z}_{i}^{(l)}\) is the \(i\)-th row of \(\mathbf{Z}^{(l)}\) and \(\mu(v_{i})\) is the prototype corresponding to the demographic group of node \(v_{i}\).
Since message passing in GNNs without nonlinear activation is essentially a weighted summation, the Gaussian distribution of the sensitive characteristic vectors is retained after message passing. In other words, all the sensitive characteristic vectors in any hidden layer share the same type of distribution, i.e., Gaussian distribution. Therefore, the expected squared distances in different hidden layers can be directly compared with each other to reflect the bias. Then we furthermore show that the expected squared distance would shrink after message passing in any \(l\)-th hidden layer, which is equivalent to the amplification of the distance-based bias. Proof is deferred to Appendix B.

Theorem 1.: _(**Distance shrinkage**) Suppose we have an \(L\)-layer linear GCN and an input graph \(\mathcal{G}=\{\mathcal{V},\mathbf{A},\mathbf{X}\}\). In the \(l\)-th hidden layer, let \(\mathbf{z}_{i}^{(l)}\) be the sensitive characteristic vector of any \(v_{i}\in\mathcal{V}\) and \(\widetilde{\mathbf{z}}_{i}^{(l)}\) be the sensitive characteristic vector of the same node after message passing. For any \(l\)-th hidden layer, we have_

\[\mathbb{E}\left[\|\mathbf{z}_{i}^{(l)}-\mu(v_{i})\|_{2}^{2}\right]>\mathbb{E}\left[\|\widetilde{\mathbf{z}}_{i}^{(l)}-\mu(v_{i})\|_{2}^{2}\right] \tag{7}\]

_which means the distance-based bias is amplified after message passing, i.e., \(\text{bias}^{(l)}(\mathcal{G},\mathbf{Z}^{(l)},s)<\text{bias}^{(l)}(\mathcal{G},\widetilde{\mathbf{Z}}^{(l)},s)\)._
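Theorem 1 can also be sanity-checked numerically. The toy simulation below is a quick illustration under the Gaussian assumption (separate from the formal proof in Appendix B): sensitive characteristic vectors are sampled around two prototypes, and mean aggregation over homophilous neighborhoods shrinks the expected squared distance to the group prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 8
mu = rng.normal(size=(2, d))            # prototypes mu_0 and mu_1
s = rng.integers(0, 2, size=n)          # binary sensitive attribute
z = mu[s] + rng.normal(size=(n, d))     # Gaussian characteristic vectors

# Homophilous 1-hop neighborhoods: 5 random same-group neighbors per node.
same = [np.flatnonzero(s == s[i]) for i in range(n)]
nbrs = [grp[rng.integers(0, len(grp), size=5)] for grp in same]

# Row-normalized message passing over the self-augmented neighborhood.
z_hat = np.stack([np.vstack([z[[i]], z[nb]]).mean(axis=0)
                  for i, nb in enumerate(nbrs)])

before = np.mean(np.sum((z - mu[s]) ** 2, axis=1))
after = np.mean(np.sum((z_hat - mu[s]) ** 2, axis=1))
assert after < before  # distances shrink, so the distance-based bias grows
```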
**Distance-based Bias vs. Statistical Parity.** Here we discuss how the amplification of the distance-based bias would affect statistical parity. Recall that statistical parity is satisfied if and only if the output of the GCN is independent of the sensitive attribute. Suppose we have a well-trained \(L\)-layer GCN whose output for node \(v_{i}\) is \(\mathbf{h}_{i}^{(L+1)}\) and a classifier \(g\) to predict the label of each node in the label set \(\mathcal{Y}\). By Lemma 1, we have \(\mathbf{h}_{i}^{(L+1)}=c_{1}\widetilde{\mathbf{t}}_{i}^{(L)}\mathbf{W}^{(L)}+c_{2}f(\widetilde{\mathbf{z}}_{i}^{(L)})\mathbf{W}^{(L)}\). Since the bias information vector \(f(\widetilde{\mathbf{z}}_{i}^{(L)})\mathbf{W}^{(L)}\) determines the sensitive attribute of each node and the label information vector \(\mathbf{t}_{i}^{(l)}\) determines the label of each node, a fair classifier should always output the same predictions for a specific label information vector \(\mathbf{t}_{i}^{(l)}\). Mathematically, we have \(g(\mathbf{h}_{i}^{(L+1)})=c_{1}g(\widetilde{\mathbf{t}}_{i}^{(L)}\mathbf{W}^{(L)})\), i.e., \(g(f(\widetilde{\mathbf{z}}_{i}^{(L)})\mathbf{W}^{(L)})=\mathbf{0}\), where \(\mathbf{0}\) is the vector filled with \(0\). Note that \(\mu_{0}\) and \(\mu_{1}\) are the prototypes of the minority group and the majority group, respectively. If the GCN output is biased, the classifier tends to give different predictions for the minority group and the majority group. Thus, for any label \(y\in\mathcal{Y}\), we have
\[g(f(\mu_{0})\mathbf{W}^{(L)})[y]\cdot g(f(\mu_{1})\mathbf{W}^{(L)})[y]<0 \tag{8}\]
Eq. (8) means that, for any class \(y\in\mathcal{Y}\), the probability of being predicted as label \(y\) will increase for the majority group (i.e., \(g(f(\mu_{1})\mathbf{W}^{(L)})[y]>0\)) and decrease for the minority group (i.e., \(g(f(\mu_{0})\mathbf{W}^{(L)})[y]<0\)), or vice versa. However, as shown in Theorem 1, since the expected distance to the prototypes shrinks after message passing, for any node \(v_{i}\), it is likely that \(\widetilde{\mathbf{z}}_{i}^{(l)}\) will gradually approach the corresponding prototype \(\mu(v_{i})\in\{\mu_{0},\mu_{1}\}\), which makes \(g(f(\widetilde{\mathbf{z}}_{i}^{(L)})\mathbf{W}^{(L)})[y]\neq 0\), \(\forall y\in\mathcal{Y}\), and causes unfair predictions.
## 4. BeMap: A Fair Message Passing Schema
In this section, we first present how to avoid bias amplification in message passing and then propose a fair message passing method named BeMap.
### Fair Message Passing
Following the settings in Section 3.2, Eq. (8) states that an unfair GCN would give opposite estimates on the prototypes of the majority group and the minority group. By the intermediate value theorem, there exists a fair prototype \(\bar{\mu}\) on the line segment between the prototypes \(\mu_{0}\) and \(\mu_{1}\) such that \(g(f(\bar{\mu})\mathbf{W}^{(L)})=\mathbf{0}\). To ensure group fairness, the intuition is to make sure that the sensitive characteristic vector \(\widetilde{\mathbf{z}}_{i}\) moves towards the fair prototype. An illustrative example is presented in Figure 4.
It is non-trivial to find the fair prototype directly. By Theorem 1, in addition to letting the sensitive characteristic vectors move towards the fair prototype, we consider an alternative goal: the expected squared Euclidean distance between the sensitive characteristic vector of each node and the prototype of the demographic group it belongs to should increase after message passing. Combining these two goals, we empirically set the fair prototype as \(\bar{\mu}=\frac{1}{2}\mu_{0}+\frac{1}{2}\mu_{1}\). When the sensitive characteristic vectors move towards \(\bar{\mu}\), the expected squared distance to the prototype of the corresponding demographic group of each node is maximized. To obtain the fair prototype, we provide a sufficient condition in Lemma 2. Proof is deferred to Appendix C.
Lemma 2.: _(**Sufficient condition for fair prototype**) Suppose Assumption 1, Assumption 2 and Assumption 3 hold, and we are given an input graph \(\mathcal{G}=\{\mathcal{V},\mathbf{A},\mathbf{X}\}\) and an \(L\)-layer linear GCN with \(L-1\) parameter-free hidden layers,6 i.e., \(\mathbf{h}_{i}^{(l+1)}=\widehat{\mathbf{h}}_{i}^{(l)}=\sum_{v_{j}\in\widetilde{\mathcal{N}}(v_{i})}a_{ij}\mathbf{h}_{j}^{(l)}\) for \(v_{i}\in\mathcal{V}\), \(l\in\{1,\ldots,L-1\}\), where \(a_{ij}=\frac{1}{d_{i}+1}\). For any node \(v_{i}\in\mathcal{V}\), let \(\widetilde{N}_{0}(v_{i})\) and \(\widetilde{N}_{1}(v_{i})\) be the numbers of neighbors in \(\widetilde{\mathcal{N}}(v_{i})=\mathcal{N}(v_{i})\cup\{v_{i}\}\) that belong to the minority group and the majority group, respectively. In the \(l\)-th hidden layer, the mean of the sensitive characteristic vector \(\mathbf{z}_{i}^{(l)}\) for \(v_{i}\in\mathcal{V}\) is the fair prototype \(\bar{\mu}=\frac{1}{2}\mu_{0}+\frac{1}{2}\mu_{1}\) when_

\[\widetilde{N}_{0}(v_{i})=\widetilde{N}_{1}(v_{i}),\quad\forall v_{i}\in\mathcal{V}. \tag{9}\]

Footnote 6: For a linear GCN, the weight matrices of all hidden layers can be absorbed into the weight matrix of the last layer.

Theorem 2.: _Suppose the conditions in Lemma 2 hold and, additionally, \(\widetilde{N}_{0}(v_{i})+\widetilde{N}_{1}(v_{i})\geq 4\) for any node \(v_{i}\in\mathcal{V}\). Then the expected squared distance between the sensitive characteristic vector \(\mathbf{z}_{i}^{(l)}\) and the fair prototype \(\bar{\mu}\) will shrink after message passing. Mathematically, we have_

\[\mathbb{E}\left[\|\widetilde{\mathbf{z}}_{i}^{(l)}-\bar{\mu}\|_{2}^{2}\right]<\mathbb{E}\left[\|\mathbf{z}_{i}^{(l)}-\bar{\mu}\|_{2}^{2}\right],\quad l\in\{1,\ldots,L\}. \tag{10}\]
### BeMap Algorithm
Based on our results in Theorem 2, we propose a fair message passing algorithm named BeMap, whose key idea is to perform message passing on a sufficiently large and balanced self-augmented neighborhood (which we call fair neighborhood). To obtain the fair neighborhood, we consider sampling (i.e., edge deletion) over the original self-augmented neighborhood in BeMap. Note that other types of neighborhood augmentation techniques (e.g., edge addition, edge rewiring) could also be applied to obtain the fair neighborhood, which we leave for future work. Hereafter, BeMap is referred to as message passing over the sampled fair neighborhood.
In practice, there are two main challenges in obtaining the fair neighborhood in BeMap due to the long-tailed degree distributions in many real-world networks. First, there could exist some node \(v_{i}\) such that \(\widetilde{N}_{0}(v_{i})=\widetilde{N}_{1}(v_{i})\) and \(\widetilde{N}_{0}(v_{i})+\widetilde{N}_{1}(v_{i})\geq 4\) cannot be satisfied simultaneously. In this case, we apply the following two empirical remedies.
* If all neighbors in \(\widetilde{\mathcal{N}}(v_{i})\) of a node \(v_{i}\) belong to one demographic group, we sample a subset of \(k\) nodes where \(k=\max\{\beta|\mathcal{N}(v_{i})|,m\}\), \(|\mathcal{N}(v_{i})|\) is the cardinality of \(\mathcal{N}(v_{i})\), and \(\beta\) as well as \(m\) are two hyperparameters. The intuition is that decreasing the number of neighbors (i.e., the degree \(d_{i}\) of node \(v_{i}\)) helps reduce the difference between the expected squared distances before and after message passing, as shown in Eq. (18). Thus, it helps decelerate the movement of the sensitive characteristic vectors towards the prototype of the demographic group of \(v_{i}\).
* Otherwise, we keep the sampled neighborhood balanced, i.e., \(\widetilde{N}_{0}(v_{i})=\widetilde{N}_{1}(v_{i})\), by sampling over \(\mathcal{N}_{0}(v_{i})\) if \(\widetilde{N}_{0}(v_{i})>\widetilde{N}_{1}(v_{i})\) or over \(\mathcal{N}_{1}(v_{i})\) if \(\widetilde{N}_{0}(v_{i})<\widetilde{N}_{1}(v_{i})\). Although balancing the sampled neighborhood may not be guaranteed to decrease the expected squared distance to the fair prototype, it helps keep the mean of the sensitive characteristic vector at the fair prototype.
Second, there might exist some node \(v_{i}\) whose neighbors within \(L\) hops contain node(s) whose own neighbors all belong to one demographic group. Note that the receptive field of the \(l\)-th hidden layer in an \(L\)-layer GCN is the \(l\)-hop neighborhood of a node. Then, for such a node \(v_{i}\), uniform sampling can hardly maintain a balanced \(l\)-hop neighborhood satisfying the condition of Lemma 2. To alleviate this scenario, we propose _balance-aware sampling_, whose key idea is to adjust the sampling probability based on the difference between the numbers of neighbors in the majority group and in the minority group. Specifically, for any node \(v_{i}\in\mathcal{V}\), we define the balance score as
\[\text{balance}_{i}=\frac{1}{|\widetilde{N}_{0}(v_{i})-\widetilde{N}_{1}(v_{i} )|+\delta} \tag{11}\]
where \(\delta\) is a hyperparameter to avoid division by zero and \(\widetilde{N}_{0}(v_{i})\) as well as \(\widetilde{N}_{1}(v_{i})\) are the numbers of all neighbors within \(L\) hops that belong to the minority group and majority group, respectively. Then in the balance-aware sampling, for any node \(v_{i}\in\mathcal{V}\), the sampling probability of node \(v_{j}\in\mathcal{N}(v_{i})\) is defined as
\[P(v_{j}|v_{i})=\frac{\text{balance}_{j}}{\sum_{v_{k}\in\mathcal{N}(v_{i})}\text {balance}_{k}} \tag{12}\]
The general workflow of BeMap is presented in Algorithm 1. Before training, we precompute the sampling probability in the balance-aware sampling (lines 3 - 5). During each epoch, we first generate the fair neighborhood using the balance-aware sampling (lines 7 - 11). Then for each hidden layer, the fair node representation of each node is learned on the fair neighborhood (lines 12 - 15). Finally, we update the model parameters with back-propagation (lines 16 - 17).
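To make these steps concrete, the following is a minimal Python sketch of the fair-neighborhood generation for a binary sensitive attribute, combining the two empirical remedies above with the balance-aware sampling of Eq. (12). All function and variable names are illustrative (not from the released BeMap code), and the hyperparameter defaults follow Section 5.

```python
import random

def weighted_sample(items, weights, k):
    # weighted sampling without replacement (Efraimidis-Spirakis keys)
    keys = [random.random() ** (1.0 / w) for w in weights]
    return [u for _, u in sorted(zip(keys, items), reverse=True)[:k]]

def fair_neighborhood(adj, group, balance, beta=0.25, m=4):
    """Generate one fair neighborhood per node (binary sensitive attribute).
    adj[v]: self-augmented neighbor list of v, group[v] in {0, 1},
    balance[v]: precomputed balance score of v from Eq. (11)."""
    sampled = {}
    for v, nbrs in adj.items():
        by_group = {g: [u for u in nbrs if group[u] == g] for g in (0, 1)}
        if not by_group[0] or not by_group[1]:
            # remedy 1: all neighbors in one group, so shrink the neighborhood
            k = min(max(int(beta * len(nbrs)), m), len(nbrs))
            sampled[v] = weighted_sample(nbrs, [balance[u] for u in nbrs], k)
        else:
            # remedy 2: sample the same number of neighbors from each group
            k = min(len(by_group[0]), len(by_group[1]))
            sampled[v] = []
            for g in (0, 1):
                cand = by_group[g]
                sampled[v] += weighted_sample(cand, [balance[u] for u in cand], k)
    return sampled
```

Calling this once per epoch yields the per-epoch fair neighborhoods over which message passing is performed.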
**Extension to Non-binary Sensitive Attribute.** We consider a non-binary sensitive attribute \(s\) which forms \(n_{s}\) demographic groups, i.e., \(s\in\{1,\ldots,n_{s}\}\). The key idea of BeMap is to balance the number of neighbors across different demographics. We discuss two cases for balancing the neighborhood in the following.
Figure 4. An illustrative example on the intuition of BeMap. After BeMap, the sensitive characteristic vectors will move towards the fair prototype (the green point), whereas, after the vanilla message passing in GCN, they will move towards the prototype of the majority group and the minority group (the red points).
* _When all neighbors belong to one demographic group:_ We adopt the same strategy as the case of binary sensitive attribute by sampling a subset of \(k\) neighbors for any node \(v_{i}\) such that \(k=\max\{\beta|\mathcal{N}(v_{i})|,m\}\).
* _When the neighbors belong to different demographic groups:_ In this case, for any node \(v_{i}\), we first count the number of neighbors in each demographic group, \(\{\widetilde{N}_{s}(v_{i})\mid s=1,\ldots,n_{s}\}\). Then we set the number of neighbors to be sampled per demographic group, \(k\), to the smallest non-zero value in \(\{\widetilde{N}_{s}(v_{i})\mid s=1,\ldots,n_{s}\}\). After that, for each demographic group in the neighborhood of \(v_{i}\), we sample \(k\) neighbors to create a balanced neighborhood.
Regarding the balance-aware sampling probability in the above sampling steps, we modify Eq. (11) by replacing the absolute difference \(|\widetilde{N}_{0}(v_{i})-\widetilde{N}_{1}(v_{i})|\) between the cardinalities of the two demographic groups in the binary case with the root of the average squared difference between the cardinalities of any two demographic groups:
\[\text{balance}_{i}=\frac{1}{\sqrt{\frac{2}{n_{s}(n_{s}-1)}\sum_{k<j}\left(\widetilde{N}_{k}(v_{i})-\widetilde{N}_{j}(v_{i})\right)^{2}}+\delta},\quad k,j\in\{1,\ldots,n_{s}\}. \tag{13}\]
It should be noted that when \(s\) is a binary sensitive attribute, Eq. (13) is equivalent to Eq. (11). In this way, we can adopt a training procedure similar to Algorithm 1 to train the fair graph neural network.
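As a quick sanity check of Eq. (13), the following sketch computes the generalized balance score from the per-group neighbor counts \(\widetilde{N}_{s}(v_{i})\); the function name and interface are illustrative, and the score reduces to Eq. (11) when \(n_{s}=2\).

```python
import itertools
import math

def balance_score(counts, delta=1.0):
    # counts[s]: number of neighbors of v_i within L hops in demographic group s
    n_s = len(counts)
    sq_diffs = [(counts[k] - counts[j]) ** 2
                for k, j in itertools.combinations(range(n_s), 2)]
    return 1.0 / (math.sqrt(2.0 * sum(sq_diffs) / (n_s * (n_s - 1))) + delta)
```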
## 5. Experiments
In this section, we answer two research questions through experiments: (1) How effective is BeMap in bias mitigation? (2) How does fairness impact the utility of the downstream task?
### Experimental Settings
**Datasets.** We conduct experiments on 4 real-world datasets, including _Pokec-z_, _NBA_, _Credit_, and _Recidivism_. The detailed statistics of the datasets are provided in Table 2. The _Pokec-z_ dataset (Pokee, 2017) is drawn from Pokec, a Facebook-like social network in Slovakia, sampled based on regional information. In Pokec-z, we set the sensitive attribute as the region of a user and aim to predict the field of work of a user. The _NBA_ dataset (Kipper, 2017) includes the age, nationality (US vs. overseas) and salary of an NBA player for the 2016 - 2017 season. Two players are connected if one follows the other on Twitter. In this dataset, the nationality is used as the sensitive attribute, and the goal is to predict whether the salary of a player is above the median. The _Credit_ dataset (Kipper, 2017) contains the age, education and payment patterns of credit card customers. The links among customers are determined by the pairwise similarity of their credit accounts. Here, age is set as the sensitive attribute, and the label is whether a user will default on credit card payments. The _Recidivism_ dataset (Beng et al., 2017) consists of defendants who were released on bail during 1990 - 2009. Two defendants are connected if they share similar demographic information and criminal records. The goal is to predict whether a defendant is likely to commit a crime when released (i.e., bail) or not (i.e., no bail), with race as the sensitive attribute.
**Baselines.** We compare BeMap with two classic graph neural networks, including _GCN_(Kipper, 2017) and _GraphSAGE_(Kipper, 2017), as well as five fair graph neural networks, including _FairGNN_(Kipper, 2017), _EDITS_(Kipper, 2017), _FairDrop_(Kipper, 2017), _NIFTY_(Beng et al., 2017) and _FMP_(Kipper, 2017). More specifically, (1) _GCN_(Kipper, 2017) learns the representation of a node by iteratively aggregating the representations of its 1-hop neighbors; (2) _GraphSAGE_(Kipper, 2017) aggregates node representations from a subset of 1-hop neighbors; (3) _FairGNN_(Kipper, 2017) leverages adversarial learning to debias the node representations; (4) _EDITS_(Kipper, 2017) defines attribute bias and structural bias, and removes these biases in the input data by minimizing the Wasserstein distance; (5) _FairDrop_(Kipper, 2017) breaks sensitive attribute homophily by randomly masking edges in the input graph, thus mitigating bias; (6) _NIFTY_(Beng et al., 2017) utilizes a layer-wise weight normalization limited by the Lipschitz constant to improve counterfactual
fairness in a contrastive learning framework; and (7) _FMP_(Krizhevsky et al., 2015) redesigns a unified framework for fairness objectives which aggregates useful information from neighbors with reduced topology bias.
**Parameter Settings.** Unless otherwise specified, we use default hyperparameter settings in the released code of corresponding publications. Regarding BeMap, we set the learning rate as \(1e-3\), \(m\) as \(4\), \(\beta\) as \(\frac{1}{4}\) and \(\delta\) as \(1\). We set the backbone model of BeMap as a 2-layer GCN with \(128\) hidden dimensions and the optimizer as Adam optimizer with \(1e-5\) weight decay. To be consistent with the number of hidden layers, for each node, all neighbors within 2-hops are used to calculate the balance score in Eq. (11).
**Metrics.** We consider the task of semi-supervised node classification. To measure the utility, we use the classification accuracy (ACC) and the area under the receiver operating characteristic curve (AUC). Regarding fairness, we use \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\) mentioned in Section 2, which are the two most commonly used metrics to evaluate the effectiveness of bias mitigation. For ACC and AUC, the higher the better; while for \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\), the lower the better.
### Experimental Results
**Main Results.** The evaluation results on the utility (ACC and AUC) and fairness (\(\Delta_{\text{EO}}\) and \(\Delta_{\text{SP}}\)) are presented in Table 1. Regarding fairness, we can see that our proposed BeMap is the only method that consistently mitigates bias (i.e., a smaller value of \(\Delta_{\text{EO}}\) and \(\Delta_{\text{SP}}\) than GCN and GraphSAGE without fairness consideration) for all datasets. Moreover, compared with the vanilla GCN without fairness consideration, BeMap reduces \(\Delta_{\text{EO}}\) (\(\Delta_{\text{SP}}\)) by \(86.8\%\) (\(83.6\%\)), \(70.8\%\) (\(0.12\%\)), \(53.4\%\) (\(79.31\%\)), \(45.7\%\) (\(40.1\%\)) on the Pokec-z, NBA, Recidivism and Credit datasets, respectively. Other than its superior performance in bias mitigation, BeMap also achieves comparable performance with the baseline methods in terms of the utility of node classification (ACC and AUC). This is because BeMap generates a new balanced graph for each epoch, analogous to data augmentation. This data augmentation effectively prevents the graph neural network from overfitting during training, thus improving the utility of the model. For example, on the NBA dataset, BeMap even achieves a higher AUC (\(78.76\%\)) than the vanilla GCN (\(78.45\%\)) while significantly reducing the fairness-related metrics \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\). In short, BeMap achieves a good balance between mitigating bias and maintaining classification accuracy.
**Ablation Study.** To evaluate the effectiveness of the balance-aware sampling utilized in BeMap, we compare it with two other heuristic sampling strategies: (1) _uniform sampling_ (Uniform), which assigns the same probability to all neighbors of a node, and (2) _degree-based sampling_ (Degree), which sets the probability of node \(v_{i}\) in the neighborhood of node \(u\) as \(P(v_{i})\propto d_{i}^{0.75},\forall v_{i}\in\mathcal{N}(u)\)(Kumar et al., 2017). From the results in Table 3, we can see that the balance-aware sampling achieves the lowest \(\Delta_{\text{EO}}\) and \(\Delta_{\text{SP}}\) on all datasets and maintains a comparable classification accuracy, which demonstrates the superiority of the balance-aware sampling in balancing the utility and fairness.
## 6. Related Work
**Graph Neural Network.** Graph neural network (GNN) has demonstrated state-of-the-art performance in various learning tasks, including anomaly detection (Krizhevsky et al., 2015), crime rate prediction (Zhu et al., 2017), and recommender systems (Krizhevsky et al., 2015). (Zhu et al., 2017) proposes the Graph Convolutional Network (GCN) by utilizing a localized first-order approximation of spectral graph convolutions. (Glorot and Bengio, 2015) introduces graph neural networks
Table 1. Evaluation on utility (ACC and AUC, higher is better) and fairness (\(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\), lower is better) of GCN, GraphSAGE, FairGNN, EDITS, FairDrop, NIFTY, FMP and BeMap on the Pokec-z, NBA, Recidivism and Credit datasets.
for inductive learning via neighborhood sampling and aggregation. (Kang et al., 2022) leverages the self-attention mechanism to learn the attention weights for neighborhood aggregation. (Kang et al., 2022) scales up the training on large graphs by importance sampling. (Kang et al., 2022) learns node representations by randomly dropping nodes to augment data and enforcing the consistency of predictions among augmented data. Similarly, (Kang et al., 2022) randomly drops nodes for several runs and aggregates the predictions for the final prediction by ensemble. Different from (Kang et al., 2022; Kang et al., 2022; Kang et al., 2022) that drop nodes for scalable training or improving the generalization and/or expressiveness of GNN, our work drops edges (i.e., dropping nodes in the local neighborhood of a node) to mitigate bias in GNN. (Kang et al., 2022) randomly drops edges to perform data augmentation and prevent over-smoothing. Different from (Kang et al., 2022) that randomly drops edges to alleviate over-smoothing, our work selectively removes edges to obtain balanced graph structures to improve the model fairness. For comprehensive literature reviews on graph neural networks, please refer to (Kang et al., 2022; Kang et al., 2022; Kang et al., 2022; Kang et al., 2022).
**Fairness in Graph Neural Network.** Fairness has been actively studied in the context of graph neural networks, e.g., group fairness (Kang et al., 2022; Kang et al., 2022), individual fairness (Kang et al., 2022; Kang et al., 2022), and counterfactual fairness (Kang et al., 2022). A common strategy to learn fair graph neural networks is through optimizing a regularization-based optimization problem (Kang et al., 2022; Kang et al., 2022; Kang et al., 2022; Kang et al., 2022). For group fairness, (Kang et al., 2022) ensures group fairness by imposing mutual information between sensitive attributes and node embeddings as the regularization term and mitigates the bias via adversarial learning. (Kang et al., 2022) leverages a similar debiasing strategy to learn fair graph neural networks with limited sensitive attributes. For individual fairness, (Kang et al., 2022) measures individual bias as the debiasing norm with respect to the pairwise node similarity matrix and Laplacian matrix, and regularizes it in the optimization problem. (Kang et al., 2022) learns individually fair graph neural networks by regularizing a ranking-based individual fairness measure. In terms of counterfactual fairness, (Kang et al., 2022) learns counterfactually fair node embeddings by imposing the contrastive loss as the regularization term. (Kang et al., 2022) follows the similar contrastive learning strategy while generating the negative examples via variational graph auto-encoder (Kang et al., 2022). Different from the aforementioned techniques, our work does not require any regularization in the objective function. Other than regularization-based formulation, (Kang et al., 2022) preprocesses the input graph by pre-training a neural network to minimize the Wasserstein distance between the node feature distributions of majority nodes and minority nodes. (Kang et al., 2022) proposes a novel message passing schema that essentially solves a minimax problem to promote group fairness. (Kang et al., 2022) proposes biased edge dropout that promotes the existence of edges connecting nodes in different demographic groups. However, the message passing schema proposed by (Kang et al., 2022) bears little resemblance to the original message passing schema, and thus completely changes the training procedures of graph neural networks; (Kang et al., 2022) requires a pretrained model to preprocess the input graph. Compared to (Kang et al., 2022; Kang et al., 2022), our work concisely and efficiently tunes message passing via a theory-guided node sampling strategy to ensure both utility and fairness.
## 7. Conclusion
In this paper, we study the problems of bias amplification in message passing and fair message passing. We empirically and theoretically prove that message passing amplifies the bias as long as the numbers of neighbors from different demographic groups for each node are unbalanced. Guided by our analyses, we propose BeMap, which relies on a balance-aware sampling strategy to generate a fair neighborhood among different demographic groups. Then, BeMap performs message passing over the generated fair neighborhood. Extensive evaluations on the real-world datasets demonstrate the effectiveness of our proposed method in mitigating bias while maintaining utility.
|
2310.14394 | Neural Networks are Integrable | In this study, we explore the integration of Neural Networks, a powerful
class of functions known for their exceptional approximation capabilities. Our
primary emphasis is on the integration of multi-layer Neural Networks, a
challenging task within this domain. To tackle this challenge, we introduce a
novel numerical method that consist of a forward algorithm and a corrective
procedure. Our experimental results demonstrate the accuracy achieved through
our integration approach. | Yucong Liu | 2023-10-22T19:58:33Z | http://arxiv.org/abs/2310.14394v2 | # Neural Networks are Integrable
###### Abstract
In this study, we explore the integration of neural networks, a potent class of functions known for their exceptional approximation capabilities. Our primary emphasis is on the integration of multi-layer neural networks, a challenging task within this domain. To tackle this challenge, we introduce a novel numerical method that combines the forward algorithm with a corrective procedure. Our experimental results demonstrate the accuracy achieved through our integration approach.
keywords: Neural Network, Integration, Numerical Algorithm
## 1 Introduction
Deep learning models have demonstrated incredible power in various fields such as image and speech recognition, natural language processing, and autonomous driving in recent years. This work primarily centers on the deep neural network, also known as multi-layer perceptron or feed-forward neural network.
We define neural networks as follows. A \(k\)-layer neural network from \(\mathbb{R}^{n_{0}}\) to \(\mathbb{R}^{n_{k}}\) is a layer-wise structure. The \(i\)-th layer, for \(i\in\{1,\cdots,k\}\), is defined as:
\[y^{(i)} =W^{(i)}x^{(i-1)}+b^{(i)},\] \[x^{(i)} =\sigma(y^{(i)}).\]
In each layer, \(W^{(i)}\in\mathbb{R}^{n_{i}\times n_{i-1}}\) is the weight matrix and \(b^{(i)}\in\mathbb{R}^{n_{i}}\) is the bias vector. The input of the \(i\)-th layer, \(x^{(i-1)}\in\mathbb{R}^{n_{i-1}}\), is the output of the \((i-1)\)-th layer. The activation function \(\sigma\) is a nonlinear point-wise function, i.e., \((\sigma(y^{(i)}))_{j}=\sigma(y^{(i)}_{j})\).
One of the most well-known activation functions is the rectified linear unit (ReLU), defined as \(\text{ReLU}(x)=\max(x,0)\).
There have been numerous works on the theory, algorithms and methods of neural networks. One of the key factors contributing to the success of neural networks is the Universal Approximation Theorem (UAT), which states that these networks have the ability to approximate arbitrary continuous functions.
Throughout the past century, several researchers such as Cybenko [3], Hornik [8], Pinkus [14], et al, directed their attention toward assessing the approximation capabilities of single-layer neural networks. However, recent work by Kidger and Lyons [9] has extended these findings. They demonstrated that multi-layer networks, each layer having a fixed width, possess the capacity to approximate arbitrary continuous functions within a compact set.
This work focuses on the topic of integration for both shallow and deep neural networks. While it is known that a neural network with a continuous activation function is integrable, the primary research emphasis has been on exploring the gradients of neural networks. The gradient's pivotal role in optimizing diverse loss functions has spurred extensive research efforts. In contrast, the integrability of neural networks, despite its significance in comprehending the models' theoretical properties, has garnered comparatively less attention. In this work, we aim to bridge this gap by providing explicit forms of integration for one-layer neural networks with any integrable activation function and deriving a piece-wise structure of the integration for multi-layer neural networks with ReLU activation function, along with a proposed algorithm with a corrector.
We introduce our basic motivation in Section 2. For the integration of ReLU neural networks, we separate it into two cases: the one-layer case in Section 3.1 and the multi-layer case in Section 3.2. Our algorithm also works for Convolutional Neural Networks [10] and Residual Neural Networks [6], which will be introduced in Section 3.3. We present our numerical experiments in Section 4 and discuss future work in Section 5.
## 2 Basic Motivation
Let us begin by focusing on a classical numerical question: how can we approximate the integral of a function given only some samples? Suppose we have an integrable function \(f:[a,b]\rightarrow\mathbb{R}\). Let \(F(x)=\int_{a}^{x}f(t)dt\) denote the integral of \(f\), and suppose we have \(N\) samples of \(f\), i.e., \(\{(x_{n},f(x_{n}))\}_{n=1}^{N}\). How can we recover \(F\) from these samples without knowing the corresponding values \(F(x_{n})\) or a formula for \(F\)? In particular, can we get an accurate estimate of \(F(b)=\int_{a}^{b}f(t)dt\)?
Different from numerical integration algorithms, we here propose an alternative way to solve this problem. With the samples, we can approximate \(f\) by some integrable estimate \(\hat{f}\), and then approximate \(F\) by integrating \(\hat{f}\). The UAT guarantees that, for any \(\varepsilon>0\), there exists a neural network \(\psi\) such that \(|\psi-f|\leq\varepsilon/(b-a)\) on \([a,b]\). Then the integral of \(\psi\) satisfies \(|\int_{a}^{x}\psi(t)dt-F(x)|\leq\varepsilon\), which gives a good estimate of \(F\).
The remaining problem is how to compute the integral of a neural network, which we address in the next section.
## 3 Integral and Algorithms
In this section, our focus is on obtaining closed-form solutions for integrating neural networks. We first provide explicit integration forms for one-layer neural networks with any integrable activation function. Then, we derive a piece-wise structure for the integration of multi-layer neural networks that use the ReLU activation function, and propose a forward integral algorithm with a corrector to enhance the accuracy of the integration.
### Integral over one-layer Neural Networks
We start with the simplest case: a one-layer neural network \(\psi:\mathbb{R}\rightarrow\mathbb{R}\):
\[\begin{split} y&=W^{(1)}x+b^{(1)},\\ \psi(x)&=W^{(2)}\sigma(y)+b^{(2)}.\end{split} \tag{1}\]
Notice that in this 1-D case, \(W^{(2)}\) is a row vector with length \(n_{1}\) and \(W^{(1)}=(w_{1},\ldots,w_{n_{1}})^{\intercal}\) is a column vector with length \(n_{1}\). Then we state the following Lemma.
**Lemma 3.1**.: _For a one-layer neural network \(\psi\) defined on a closed interval \([a,b]\), the integral of \(\psi\) can be expressed as:_
\[\int_{a}^{x}\psi(t)dt=W^{(2)}z+b^{(2)}(x-a),\]
_where \(z=(z_{1},\ldots,z_{n_{1}})^{\intercal}\in\mathbb{R}^{n_{1}}\) and_
\[z_{i}=\int_{a}^{x}\sigma(w_{i}t+b_{i}^{(1)})dt.\]
Proof.: By direct calculation, this lemma holds.
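To make Lemma 3.1 concrete, here is a minimal NumPy sketch for the ReLU activation, whose antiderivative is available in closed form; the function names are illustrative.

```python
import numpy as np

def relu_antiderivative(w, b, t):
    # d/dt [max(w*t + b, 0)^2 / (2w)] = max(w*t + b, 0) for w != 0;
    # for w == 0, the antiderivative of ReLU(b) is simply ReLU(b) * t.
    safe_w = np.where(w == 0, 1.0, w)
    quad = np.maximum(w * t + b, 0.0) ** 2 / (2.0 * safe_w)
    return np.where(w == 0, np.maximum(b, 0.0) * t, quad)

def integrate_one_layer(W1, b1, W2, b2, a, x):
    # Lemma 3.1 with sigma = ReLU: int_a^x psi(t) dt = W2 @ z + b2 * (x - a),
    # where z_i = int_a^x sigma(w_i * t + b_i^(1)) dt.
    z = relu_antiderivative(W1, b1, x) - relu_antiderivative(W1, b1, a)
    return W2 @ z + b2 * (x - a)
```

The result can be sanity-checked against numerical quadrature of \(\psi\) on \([a,x]\).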
For the \(n\)-dimensional case, the result is similar: we only have to repeat Lemma 3.1 \(n\) times. Let \(\psi\) be a one-layer neural network \(\mathbb{R}^{n}\rightarrow\mathbb{R}\). Then,
\[\begin{split}&\int_{[a_{1},b_{1}]\times\cdots\times[a_{n},b_{n}]}\psi(x)\,dx_{1}\ldots dx_{n}\\ &=\int_{[a_{2},b_{2}]\times\cdots\times[a_{n},b_{n}]}\left(W^{(2)}\left[\int_{a_{1}}^{b_{1}}\sigma(W^{(1)}x+b^{(1)})\,dx_{1}\right]+b^{(2)}(b_{1}-a_{1})\right)dx_{2}\ldots dx_{n}.\end{split} \tag{2}\]
As long as we have the explicit form of \(\tilde{\sigma}=\int\sigma\), we can write down the integral as
\[\begin{split}&\int_{[a_{1},b_{1}]\times\cdots\times[a_{n},b_{n}]}\psi(x)\,dx_{1}\ldots dx_{n}\\ &=\int_{[a_{2},b_{2}]\times\cdots\times[a_{n},b_{n}]}\left(W^{(2)}\tilde{\sigma}(\tilde{W}^{(1)}\tilde{x}+\tilde{b}^{(1)})+\tilde{b}^{(2)}\right)dx_{2}\ldots dx_{n},\end{split} \tag{3}\]
for some constant term \(\tilde{b}^{(1)}\) depending on \([a_{1},b_{1}]\), \(b^{(1)}\), \(W^{(1)}\) and \(\sigma\). Here \(\tilde{\sigma}\) is the antiderivative of \(\sigma\), \(\tilde{x}=(x_{2},\ldots,x_{n})\) and \(\tilde{b}^{(2)}=b^{(2)}(b_{1}-a_{1})\). As a result, we can represent the formula with a new activation function \(\tilde{\sigma}\). Thus, repeating Lemma 3.1 leads to the final result for the one-layer case.
[1] focused on the integral of a one-layer neural network with the sigmoid activation function. We demonstrate that our analysis works for any integrable activation function.
### Numerical Integral over multi-layer Neural Networks
In this section, we will be delving into the integration of a multi-layer ReLU neural network, which is considerably more complex than the one-layer case. In a single-layer neural network, the network can be expressed as a simple weighted sum of several activation functions, making integration relatively straightforward. However, for a multi-layer neural network, the situation is quite different. Even with ReLU activation function, we do not possess explicit knowledge of the areas where the output of a neuron is zero or positive, particularly for neurons in higher layers. As a result, integration of multi-layer neural networks is a challenging task. To address this challenge, we propose a numerical method.
We start with an observation about neural networks: every ReLU neural network is a piece-wise linear function. Arora et al. [2] also showed that every piece-wise linear function is a ReLU neural network. We cite their theorem for reference.
**Theorem 3.2** ([2]).: _Every \(\mathbb{R}^{n}\rightarrow\mathbb{R}\) ReLU neural network represents a piece-wise linear function, and every piece-wise linear function \(\mathbb{R}^{n}\rightarrow\mathbb{R}\) can be represented by a ReLU neural network of depth at most \(\lceil\log_{2}(n+1)\rceil+1\)._
Then integrating a ReLU neural network is equivalent to integrating a piece-wise linear function without explicitly knowing the breakpoints. In each piece, the neural network can be represented in the form \(\alpha x+\beta\), where \(\alpha\) and \(\beta\) are coefficients determined by the weights and biases in the network structure. Note that both \(\alpha\) and \(\beta\) could be vector-valued. Knowing these coefficients enables us to compute the integral of the neural network separately in each piece. Thus, we design a forward integral algorithm. We demonstrate that this algorithm works for batches of inputs and can be accelerated by GPUs.
However, by using a piece-wise integral, it is likely that we cannot obtain a continuous function since there are usually jumps at the breakpoints. Nonetheless, it is worth noting that these jumps are actually constant. To address this issue, we propose a numerical algorithm that corrects these jumps and ensures the continuity of the entire integral.
Figure 1 illustrates the entire neural network integration process, and we will delve into each component in detail in the subsequent sections.
#### 3.2.1 Forward Integral Algorithm
In this section, we will introduce our forward integral algorithm.
As mentioned before, given an arbitrary input \(x\), the key to calculating the integral is the coefficient \(\alpha\) and the constant term \(\beta\). By Theorem 3.2, the output of the \(i\)-th layer can be represented as a linear combination of the input \(x\) plus a constant term, i.e., \(\alpha_{i}x+\beta_{i}\). Note that the output of the \(i\)-th layer could be a
Figure 1: Integration Process
vector, which means \(\alpha_{i}\) could be matrix-valued and \(\beta_{i}\) could be vector-valued. The output of the \((i+1)\)-th layer is then \(\alpha_{i+1}x+\beta_{i+1}=\sigma(W^{(i+1)}(\alpha_{i}x+\beta_{i})+b^{(i+1)})\). Recall the definition of ReLU: it is equivalent to the identity when its input is positive; otherwise, it outputs \(0\), zeroing out the corresponding rows of \(\alpha\) and entries of \(\beta\). Given \(\alpha_{i}\), \(\beta_{i}\), \(W^{(i+1)}\), \(b^{(i+1)}\) and the sign of the term inside \(\sigma\), it is easy to obtain \(\alpha_{i+1}\) and \(\beta_{i+1}\). Following this procedure for each layer, we obtain our Forward Integral Algorithm 1. Here \(\mathbb{1}\) denotes the indicator function and \(\odot\) denotes the Hadamard product.
```
Input: \(x\in\mathbb{R}^{n}\), \(k\)-layer neural network with weights \(W^{(i)}\), biases \(b^{(i)}\)
Output: first-order coefficient \(\alpha\) and constant term \(\beta\)
\(\alpha\gets 0\); \(\beta\gets 0\); \(y\gets x\)
for \(i=0\) to \(k\) do
  if \(i==0\) then
    \(y\leftarrow\sigma(W^{(i)}y+b^{(i)})\)
    \(z\leftarrow\mathbb{1}_{\{y>0\}}\)
    \(\alpha\gets z\odot W^{(0)}\)
    \(\beta\gets z\odot b^{(0)}\)
  else if \(i==k\) then
    \(\alpha\gets W^{(k)}\times\alpha\)
    \(\beta\gets W^{(k)}\times\beta+b^{(k)}\)
  else
    \(y\leftarrow\sigma(W^{(i)}y+b^{(i)})\)
    \(z\leftarrow\mathbb{1}_{\{y>0\}}\)
    \(\alpha\leftarrow(z\odot W^{(i)})\times\alpha\)
    \(\beta\leftarrow(z\odot W^{(i)})\times\beta+(z\odot b^{(i)})\)
  end if
end for
Return: \(\alpha\) and \(\beta\)
```
**Algorithm 1** Forward Integral Algorithm
We note that this algorithm is just a modified version of the standard forward pass, augmented with Hadamard-product and indicator operations; as a result, it can also be accelerated by GPUs.
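As a concrete illustration, the following is a minimal NumPy sketch of Algorithm 1 for a ReLU multi-layer perceptron; it tracks the local affine map so that \(\psi(x)=\alpha x+\beta\) on the linear piece containing the input. The names are illustrative, batching is omitted for clarity, and at least one hidden layer is assumed.

```python
import numpy as np

def forward_integral(x, weights, biases):
    """Sketch of Algorithm 1: return (alpha, beta) such that, on the linear
    piece containing x, the ReLU network satisfies psi(x) = alpha @ x + beta."""
    y = x
    alpha, beta = None, None
    for i, (W, b) in enumerate(zip(weights, biases)):
        pre = W @ y + b
        if i < len(weights) - 1:                       # hidden layer with ReLU
            y = np.maximum(pre, 0.0)
            z = (pre > 0).astype(pre.dtype)[:, None]   # active-neuron mask
            if alpha is None:                          # first layer: z (.) W, z (.) b
                alpha, beta = z * W, z.ravel() * b
            else:                                      # compose with previous affine map
                alpha = (z * W) @ alpha
                beta = (z * W) @ beta + z.ravel() * b
        else:                                          # output layer, no activation
            alpha = W @ alpha
            beta = W @ beta + b
    return alpha, beta
```

For a scalar input (\(n=1\)), the local antiderivative is then \(\text{Poly}[\alpha,\beta](t)=\frac{\alpha}{2}t^{2}+\beta t\), up to a constant.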
As long as we have \(\alpha\) and \(\beta\), we can directly get the integral on the piece where the input \(x\) is located; i.e., \(\int_{a_{i}}^{x}\psi(x)dx_{i}\) can be represented as a polynomial term plus a constant term. Since the polynomial term only depends on \(\alpha\) and \(\beta\), we denote it as \(\text{Poly}[\alpha,\beta]\) for convenience.
#### 3.2.2 Numerical Corrector
By the Forward Integral Algorithm, we know the integral on each piece up to a constant term. In this section, we introduce a numerical method to correct the jumps at the breakpoints between pieces.

Given an interval, we first select a partition \(\{z_{i}\}_{i=1}^{N}\) of the interval. After obtaining the corresponding \(\alpha_{i}\) and \(\beta_{i}\) for each \(z_{i}\) by our forward integral algorithm, we can make the integral continuous at \(z_{i}\) by adding the constant term \(\text{Poly}[\alpha_{i},\beta_{i}](z_{i})-\text{Poly}[\alpha_{i-1},\beta_{i-1}](z_{i})\). Our corrector algorithm is shown in Algorithm 2.
```
Input: partition \(\{z_{i}\}_{i=1}^{N}\) of the interval \([a,b]\), \(k\)-layer neural network \(\psi\)
Output: \(\int_{a}^{b}\psi\,dx_{j}\)
\(\alpha_{0}\gets 0\); \(\beta_{0}\gets 0\); \(C_{0}\gets 0\)
for \(i=1\) to \(N\) do
  \(\alpha_{i},\beta_{i}\leftarrow\text{Forward Integral Algorithm}(z_{i})\)
  \(C_{i}\leftarrow\text{Poly}[\alpha_{i},\beta_{i}](z_{i})-\text{Poly}[\alpha_{i-1},\beta_{i-1}](z_{i})+C_{i-1}\)
end for
Return: \(\text{Poly}[\alpha_{N},\beta_{N}](z_{N})-\text{Poly}[\alpha_{0},\beta_{0}](z_{0})+C_{N}\)
```
**Algorithm 2** Numerical Corrector
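The jump-correcting constants can be assembled in several equivalent ways; below is one consistent reading of the corrector as a Python sketch for a scalar-input network, reusing the `forward_integral` sketch above. It builds a continuous antiderivative \(F\) with \(F(z_{1})=0\) and returns \(F(z_{N})\); this is a sketch of the corrector idea, not a transcription of the authors' implementation.

```python
import numpy as np

def corrected_integral(grid, weights, biases):
    # grid: increasing partition z_1 < ... < z_N of [a, b] with z_1 = a, z_N = b
    C, F0, P_prev = 0.0, 0.0, None
    for z in grid:
        alpha, beta = forward_integral(np.array([z]), weights, biases)
        a_z, b_z = float(alpha.squeeze()), float(beta.squeeze())
        P = lambda t, a=a_z, b=b_z: 0.5 * a * t * t + b * t   # Poly[alpha, beta]
        if P_prev is None:
            F0 = P(z)                  # reference so that F(z_1) = 0
        else:
            C += P_prev(z) - P(z)      # constant removing the jump at z
        P_prev = P
    return P_prev(grid[-1]) + C - F0   # F(z_N) = int over [z_1, z_N]
```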
### Extension to Convolutional Neural Networks and Residual Neural Networks
Convolutional Neural Networks [10] and Residual Neural Networks [6] are more powerful in practice. Our algorithms also work for them, when the activation function is ReLU.
The convolutional layer can be expressed as \(\sigma(\kappa*x+b)\), where \(*\) denotes the convolution operation, \(\kappa\) is the kernel and \(b\) is the bias vector. A convolutional neural network is one consisting of convolutional layers and fully-connected layers.
A residual block usually consists of convolutional layers, fully-connected layers and a residual connection, i.e., \(\Psi(x)=x+\psi(x)\), where \(\psi(x)\) can be expressed as a convolutional neural network. A Residual Neural Network is one composed of residual blocks, convolutional layers and fully-connected layers.
We first observe that convolutional layers and residual blocks do not affect the piece-wise linear structure of ReLU neural networks. As a result, our Forward Integral Algorithm works for both ReLU Convolutional Neural Networks and ReLU Residual Neural Networks. The convolutional layer behaves the same as the fully-connected layer in our Forward Integral Algorithm, because convolution with a kernel \(\kappa\) acts as a weighted linear combination of the input. For Residual Neural Networks, we only have to add the identity map to \(\alpha\) at the end of each residual block when running the forward algorithm.
## 4 Experiments
We conducted extensive experiments to demonstrate that our algorithms work well for numerical integration, compared with traditional numerical algorithms.
We set the domain to \([0,5]\) and \(f(x)=\cos(x)-x^{2}+4-1/(x+1)\). Then the integral is \(F(x)=\int_{0}^{x}f(t)dt=\sin(x)-\frac{1}{3}x^{3}+4x-\log(x+1)\). We trained a 2-layer neural network of width 100 to approximate \(f\) with 51 data points \(\{x_{i},f(x_{i})\}_{i=0}^{50}\), where the \(x_{i}\) are evenly spaced over \([0,5]\). We trained for 200 epochs with learning rate 0.001 and batch size 20, using stochastic gradient descent and the mean squared error loss.
For comparison, we also implemented the forward Euler method [5], the explicit Runge-Kutta method of order 5(4) (RK45) [4] and the explicit Runge-Kutta method of order 8 (DOP853) [16].
We present our experimental results in Figure 2. The left panel displays our neural network's approximation capability: the 2-layer neural network gives an outstanding approximation of the integrand \(f\), and its integral simultaneously provides an excellent approximation of \(F\). The right panel tracks the approximation error \(\|F(x)-\hat{F}(x)\|\), where \(\hat{F}(x)\) denotes the estimated integral obtained by the numerical algorithms. Notably, the integral computed via the neural network exhibits the smallest approximation error, highlighting its superior performance in our experiment.
## 5 Discussion
To compute the integral of neural networks, we have developed novel algorithms, including the forward algorithm and a corrector. In conclusion, we outline several promising avenues for future research.
One immediate area of interest is the path integrals of neural networks, offering potential solutions to challenges in fields such as Molecular Dynamics [13] and Quantum Mechanics [7].
Furthermore, Physics-Informed-Neural-Networks (PINNs) [15] have garnered significant attention. While current approaches focus on using derivatives of neural networks to satisfy partial differential equations (PDEs), exploring the integration of neural networks may open up new avenues for addressing these problems.
Neural networks have also demonstrated their efficacy in tackling challenges related to density estimation and Bayesian inference, as demonstrated in prior research [12; 1; 11]. The integrability of neural networks has the potential to significantly enhance these applications, particularly in tasks like expectation and variance estimation.
|
2305.14122 | Transferring Learning Trajectories of Neural Networks | Training deep neural networks (DNNs) is computationally expensive, which is
problematic especially when performing duplicated or similar training runs in
model ensemble or fine-tuning pre-trained models, for example. Once we have
trained one DNN on some dataset, we have its learning trajectory (i.e., a
sequence of intermediate parameters during training) which may potentially
contain useful information for learning the dataset. However, there has been no
attempt to utilize such information of a given learning trajectory for another
training. In this paper, we formulate the problem of "transferring" a given
learning trajectory from one initial parameter to another one (learning
transfer problem) and derive the first algorithm to approximately solve it by
matching gradients successively along the trajectory via permutation symmetry.
We empirically show that the transferred parameters achieve non-trivial
accuracy before any direct training, and can be trained significantly faster
than training from scratch. | Daiki Chijiwa | 2023-05-23T14:46:32Z | http://arxiv.org/abs/2305.14122v2 | # Transferring Learning Trajectories
###### Abstract
Training deep neural networks (DNNs) is computationally expensive, which is problematic especially when performing duplicated training runs, such as model ensemble or knowledge distillation. Once we have trained one DNN on some dataset, we have its learning trajectory (i.e., a sequence of intermediate parameters during training) which may potentially contain useful information for learning the dataset. However, there has been no attempt to utilize such information of a given learning trajectory for another training. In this paper, we formulate the problem of "transferring" a given learning trajectory from one initial parameter to another one, called _learning transfer problem_, and derive the first algorithm to approximately solve it by matching gradients successively along the trajectory via permutation symmetry. We empirically show that the transferred parameters achieve non-trivial accuracy before any direct training. Also, we analyze the loss landscape property of the transferred parameters, especially from a viewpoint of mode connectivity.
## 1 Introduction
Enormous computational cost is a major issue in deep learning, especially in training large-scale neural networks (NNs). Their highly non-convex objective and high-dimensional parameters make their training difficult and inefficient. Toward a better understanding of training processes of NNs, their loss landscapes [21; 7] have been actively studied from viewpoints of optimization [17; 38; 57] and geometry [14; 47]. One of the geometric approaches to loss landscapes is mode connectivity [15; 9], which shows the existence of low-loss curves between any two optimal solutions trained with different random initializations or data ordering. This indicates a surprising connection between seemingly different independent trainings.
Linear mode connectivity (LMC), a special case of mode connectivity, focuses on whether or not two optimal solutions are connected by a low-loss linear path, which was originally studied in relation to neural network pruning [13]. It is known that solutions trained from the same initialization (and the same data ordering in the early phase) tend to be linearly mode connected [45; 13], but otherwise they cannot be linearly connected in general. However, Entezari et al. [10] observed that even two solutions trained from different random initializations can be linearly connected by an appropriate permutation symmetry. Ainsworth et al. [3] developed an efficient method to find such permutations and confirmed the same phenomenon with modern NN architectures. These observations strengthen the expectation of some sort of similarity between two independent training runs, even from different random initializations, via permutation symmetry.
In this paper, motivated by these observations, we make the first attempt to leverage such similarity between independent training processes for efficient training. In particular, we introduce a novel problem called _learning transfer problem_, which aims to reduce training costs for seemingly duplicated training runs on the same dataset, such as model ensemble or knowledge distillation, by transferring
a learning trajectory for one initial parameter to another one without actual training. The problem is informally stated as follows:
**Learning transfer problem (informal).** _Suppose that a learning trajectory \((\theta^{0}_{1},\cdots,\theta^{T}_{1})\) is given for an initial parameter \(\theta^{0}_{1}\). Given another initial parameter \(\theta^{0}_{2}\), how can we synthesize the learning trajectory \((\theta^{0}_{2},\cdots,\theta^{T}_{2})\) for the given initialization \(\theta^{0}_{2}\)?_
To tackle this problem, as illustrated in Figure 2, we take the approach of transforming the source trajectory \((\theta^{0}_{1},\cdots,\theta^{T}_{1})\) by an appropriate permutation symmetry, like the previous works on LMC. In Section 3, we formulate the learning transfer problem in a mathematically rigorous way, and then derive a theoretically-grounded algorithm to approximately solve it. We also develop practical techniques to reduce the storage and computational cost of the derived method. In Section 4, we first empirically demonstrate that learning trajectories can be successfully transferred between different random or pre-trained initial parameters (Section 4.1). Next, we confirm that the transferred parameters can indeed accelerate convergence in the subsequent training (Section 4.2). Finally, we empirically analyze the mode connectivity properties of the transferred parameters. We observe that, while the transferred parameter tends to be linearly mode connected to the source parameter \(\theta^{T}_{1}\), the transferred parameter and the one actually trained from \(\theta^{0}_{2}\) are linearly connected only when the permutation is obtained by matching the source trajectory and the actually trained trajectory (Section 4.3).
## 2 Background
### Neural networks
Let \(L,N\in\mathbb{N}\). Let \(f(x;\theta)\) be an \(L\)-layered neural network (NN) parameterized by \(\theta\in\mathbb{R}^{N}\) with a non-linear activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) and intermediate dimensions \((d_{0},\cdots,d_{L})\in\mathbb{N}^{L+1}\). Given an input \(x\in\mathbb{R}^{d_{0}}\), the output \(f(x;\theta):=x_{L}\in\mathbb{R}^{d_{L}}\) is computed inductively as follows:
\[x_{i}:=\begin{cases}x,&(i=0)\\ \sigma(W_{i}x_{i-1}+b_{i}),&(1\leq i\leq L-1)\\ W_{L}x_{L-1}+b_{L},&(i=L)\end{cases}\]
where \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}},b_{i}\in\mathbb{R}^{d_{i}}\) are weight matrices and bias vectors. Under this notation, the parameter vector \(\theta\) is described as \(\theta=(W_{1},b_{1},\cdots,W_{L},b_{L})\in\mathbb{R}^{N}\).
Stochastic gradient descent (SGD) is a widely used approach for training neural networks. Let \(\mathcal{X}\) be the input space \(\mathbb{R}^{d_{0}}\) and \(\mathcal{Y}\) be the output space \(\mathbb{R}^{d_{L}}\). Let \(\mathcal{D}\) be a probabilistic distribution over the input-output space \(\mathcal{X}\times\mathcal{Y}\), and \(\mathcal{L}:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) be a differentiable loss function. SGD trains a neural network \(f(x;\theta)\) by iterating the following steps: (1) Sampling a mini-batch \(B=((x_{1},y_{1}),\cdots,(x_{b},y_{b}))\sim\mathcal{D}^{b}\) of size \(b\in\mathbb{N}\), (2) computing an estimated gradient \(g_{B}:=\frac{1}{b}\sum_{i=1}^{b}\nabla_{\theta}\mathcal{L}(f(x_{i};\theta),y_ {i})\) for the mini-batch and (3) updating the model parameter \(\theta\) by \(\theta-\alpha g_{B}+\text{(momentum)}\) where \(\alpha\in\mathbb{R}_{>0}\) is a fixed or scheduled step size.
### Permutation symmetry of NNs
For simplicity, we assume that all bias vectors \(b_{i}\) are zero by viewing them as a part of the weight matrices. Let \(\Theta\) be the parameter space \(\{\theta=(W_{1},\cdots,W_{L})\in\mathbb{R}^{N}\}\) for the \(L\)-layered neural
network \(f(x;\theta)\). Now we introduce a permutation group action on the parameter space \(\Theta\). For \(n\in\mathbb{N}\), let \(S_{n}\) denote the symmetric group over \(\{1,\cdots,n\}\subset\mathbb{N}\), which is the set of all bijective mappings \(\sigma:\{1,\cdots,n\}\rightarrow\{1,\cdots,n\}\). We define our permutation group \(G\) by \(G:=S_{d_{1}}\times\cdots\times S_{d_{L-1}}\). The group action \(G\times\Theta\rightarrow\Theta\) is defined as follows: for \(\pi=(\sigma_{1},\cdots,\sigma_{L-1})\in G\) and \(\theta\in\Theta\), the action \(\pi\theta\) is defined by
\[\pi\theta:=(\sigma_{1}W_{1},\cdots,\sigma_{i}W_{i}\sigma_{i-1}^{-1},\cdots,W_ {L}\sigma_{L-1}^{-1})\in\Theta, \tag{1}\]
where each \(\sigma_{i}\) is viewed as the corresponding permutation matrix of size \(d_{i}\times d_{i}\). We call this group action the permutation symmetry of \(L\)-layered neural networks.
Simply speaking, the action \(\pi\theta\) just interchanges the axes of the intermediate vector \(x_{i}\) of the neural network \(f(x;\theta)\) with the corresponding base change of the weight matrices and bias vectors (Figure 1). Thus we can see that this action does not change the output of the neural network, i.e., \(f(x;\pi\theta)=f(x;\theta)\) for every \(x\in\mathcal{X},\theta\in\Theta,\pi\in G\). In other words, the two parameters \(\theta\) and \(\pi\theta\) can be identified from the functional perspective of neural networks.
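As a concrete illustration of the action in Eq. (1), the following sketch applies a tuple of permutations to the weight matrices of an MLP (with biases folded into the weights as assumed above); each permutation is represented as an index array, and the names are illustrative.

```python
import numpy as np

def apply_permutation(theta, perms):
    # theta = [W_1, ..., W_L]; perms = [sigma_1, ..., sigma_{L-1}] as index arrays
    W = [w.copy() for w in theta]
    for i, sigma in enumerate(perms):
        W[i] = W[i][sigma, :]          # sigma_i W_i: permute the output axis
        W[i + 1] = W[i + 1][:, sigma]  # W_{i+1} sigma_i^{-1}: permute the input axis
    return W
```

Since consecutive permutations cancel, the permuted parameter defines the same function: \(f(x;\pi\theta)=f(x;\theta)\) for every input \(x\).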
### Parameter alignment by permutation symmetry
Previous work by Ainsworth et al. [3] attempts to merge two given NN models into one model by leveraging their permutation symmetry. They reduced the merge problem to the parameter alignment problem:
\[\min_{\pi\in G}\lVert\pi\theta_{1}-\theta_{2}\rVert_{2}^{2}=\min_{\pi=(\sigma _{1},\cdots,\sigma_{L-1})}\sum_{1\leq i\leq L}\lVert\sigma_{i}W_{i}\sigma_{i- 1}^{-1}-Z_{i}\rVert_{F}^{2}, \tag{2}\]
where \(\theta_{1}=(W_{1},\cdots,W_{L})\) and \(\theta_{2}=(Z_{1},\cdots,Z_{L})\) are the parameters to be merged. To solve this, they also proposed a coordinate descent algorithm that iteratively solves the following linear optimization with respect to each \(\sigma_{i}\):
\[\max_{\sigma_{i}\in S_{d_{i}}}\langle\sigma_{i},Z_{i}\sigma_{i-1}W_{i}^{\top}+Z_{i+1}^{\top}\sigma_{i+1}W_{i+1}\rangle \tag{3}\]
The form of this problem has been well-studied as a linear assignment problem, and we can solve it in a very efficient way [33]. Although the coordinate descent algorithm was originally proposed for model merging, we can also use it for other problems involving the parameter alignment problem (Eq. 2).
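For instance, one coordinate-descent step for Eq. (3) can be sketched with SciPy's linear assignment solver as follows; `perm_prev` and `perm_next` denote the (temporarily fixed) neighboring permutation matrices \(\sigma_{i-1}\) and \(\sigma_{i+1}\), and the names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_layer(W_i, W_ip1, Z_i, Z_ip1, perm_prev, perm_next):
    # Solve max_{sigma_i} <sigma_i, Z_i sigma_{i-1} W_i^T + Z_{i+1}^T sigma_{i+1} W_{i+1}>
    A = Z_i @ perm_prev @ W_i.T + Z_ip1.T @ perm_next @ W_ip1
    row, col = linear_sum_assignment(-A)   # negate: the solver minimizes cost
    P = np.zeros_like(A)
    P[row, col] = 1.0                      # permutation matrix sigma_i
    return P
```

Iterating this step over the layers until the objective stops improving yields the coordinate descent of [3].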
## 3 Learning Transfer
In this section, first we formulate the problem of transferring learning trajectories (which we call _learning transfer problem_) as a non-linear optimization problem. Next, we derive an algorithm to solve it by reducing the non-linear optimization problem to a sequence of linear optimization problems. Finally, we introduce additional techniques for reducing the storage and computation cost of the derived algorithm.
### Problem formulation
Let \(f(x;\theta)\) be some NN model with an \(N\)-dimensional parameter \(\theta\in\mathbb{R}^{N}\). A sequence of \(N\)-dimensional parameters \((\theta^{0},\cdots,\theta^{T})\in\mathbb{R}^{N\times(T+1)}\) is called a _learning trajectory_ of length \(T\) for the neural network \(f(x;\theta)\) when each \(\theta^{t}\) is the \(t\)-th intermediate parameter during training of \(f\), with the initial parameter \(\theta^{0}\) and the convergent one \(\theta^{T}\). In other words, \(\theta^{t}\) is the result of some iterations of SGD from \(\theta^{t-1}\). Note that here we do not specify what \(t\) is; it could be an iteration, an epoch, or any other notion of training time. Now we can state our main problem:
**Learning transfer problem (informal).** Suppose that a learning trajectory \((\theta^{0}_{1},\cdots,\theta^{T}_{1})\) is given for an initial parameter \(\theta^{0}_{1}\), which we call a _source trajectory_. Given another initial parameter \(\theta^{0}_{2}\) "similar" to \(\theta^{0}_{1}\) in some sense, how can we synthesize the learning trajectory \((\theta^{0}_{2},\cdots,\theta^{T}_{2})\), which we call a _transferred trajectory_, for the given initialization \(\theta^{0}_{2}\)? (Figure 2)
To convert this informal problem into a formal and computable one, we need to formalize the notion of "similarity" between two initial parameters. For this, we assume that the learning trajectories for the two initial parameters are indistinguishable up to permutation symmetry
(Section 2.2). In other words, for the two learning trajectories \((\theta_{1}^{0},\cdots,\theta_{1}^{T})\) and \((\theta_{2}^{0},\cdots,\theta_{2}^{T})\), we consider the following assumption:
**Assumption (P).** There exists a permutation \(\pi\) satisfying \(\pi(\theta_{1}^{t}-\theta_{1}^{t-1})\approx\theta_{2}^{t}-\theta_{2}^{t-1}\) for \(t=1,\cdots,T\), where the transformation \(\pi(\theta_{1}^{t}-\theta_{1}^{t-1})\) is as defined in Section 2.2.
Under this assumption, if we know the permutation \(\pi\) providing the equivalence between the source and transferred trajectories, we can recover the latter one \((\theta_{2}^{0},\cdots,\theta_{2}^{T})\) from the former one \((\theta_{1}^{0},\cdots,\theta_{1}^{T})\) and the permutation \(\pi\), by setting \(\theta_{2}^{t}:=\theta_{2}^{t-1}+\pi(\theta_{1}^{t}-\theta_{1}^{t-1})\) inductively on \(t\) (Figure 2). Therefore, the learning-trajectory problem can be reduced to estimating the permutation \(\pi\) from the source trajectory \((\theta_{1}^{0},\cdots,\theta_{1}^{T})\) and the given initialization \(\theta_{2}^{0}\).
Naively, to estimate the permutation \(\pi\), we want to consider the following optimization problem:
\[\min_{\pi}\sum_{t=1}^{T}\left\|\pi(\theta_{1}^{t}-\theta_{1}^{t-1})-(\theta_{2 }^{t}-\theta_{2}^{t-1})\right\|_{2}^{2} \tag{4}\]
However, this problem is ill-defined in our setting since \(\theta_{2}^{t}\) is not available for \(1\leq t\leq T\) in advance. Even if we defined \(\theta_{2}^{t}:=\theta_{2}^{t-1}+\pi(\theta_{1}^{t}-\theta_{1}^{t-1})\) in Eq. (4) as discussed above, the optimization problem would become trivial since any permutation \(\pi\) makes the \(L^{2}\) norm zero.
Thus we need to modify the optimization problem (Eq. 4) so that it does not involve unavailable terms. We notice that the difference \(\theta_{2}^{t}-\theta_{2}^{t-1}\) can be approximated by a negative gradient at \(\theta_{2}^{t-1}\) averaged over a mini-batch, if the trajectory is sufficiently fine-grained. Therefore, we consider the following approximated version of Eq. (4):
\[\mathcal{P}_{T}:\ \min_{\pi}\sum_{t=0}^{T-1}\left\|\pi\nabla_{\theta_{1}^{t} }\mathcal{L}-\nabla_{\theta_{2,\pi}^{t}}\mathcal{L}\right\|_{2}^{2},\ \text{where}\ \theta_{2,\pi}^{t}:=\theta_{2,\pi}^{t-1}+\pi(\theta_{1}^{t}-\theta_{1}^{t-1}). \tag{5}\]
In contrast to Eq. (4), this optimization problem is well-defined even in our setting because each \(\theta_{2,\pi}^{t}\) is defined inductively from the previous parameter \(\theta_{2,\pi}^{t-1}\).
### Algorithm: gradient matching along trajectory
Now our goal is to solve the optimization problem \(\mathcal{P}_{T}\) (Eq. 5). However, the problem \(\mathcal{P}_{T}\) seems hard to solve directly because the variable \(\pi\) appears non-linearly in the second term \(\nabla_{\theta_{2,\pi}^{t}}\mathcal{L}\). To avoid the non-linearity, we introduce a sequence of linear sub-problems \(\{\mathcal{P}_{s}^{\prime}\}_{1\leq s\leq T}\) whose solution converges to the solution for \(\mathcal{P}_{T}\). For each \(s\in\{1,\cdots,T\}\), we consider the following problem:
\[\mathcal{P}_{s}^{\prime}:\ \min_{\pi_{s}}\sum_{t=0}^{s-1}\left\|\pi_{s} \nabla_{\theta_{1}^{t}}\mathcal{L}-\nabla_{\theta_{2,\pi_{s-1}}^{t}}\mathcal{ L}\right\|_{2}^{2} \tag{6}\]
Since the second term in \(\mathcal{P}_{s}^{\prime}\) uses the solution \(\pi_{s-1}\) for the previous sub-problem \(\mathcal{P}_{s-1}^{\prime}\), the unknown variable \(\pi_{s}\) appears only in the first term \(\pi_{s}\nabla_{\theta_{1}^{t}}\mathcal{L}\) in a linear way. Moreover, the following lemma implies that the final solution \(\pi_{T}\) from the sequence \(\{\mathcal{P}_{s}^{\prime}\}_{1\leq s\leq T}\) approximates the solution for the original problem \(\mathcal{P}_{T}\):
**Lemma 3.1**: _Under some regularity assumption, we have \(\theta_{2,\pi_{s}}^{t}\approx\theta_{2,\pi_{s^{\prime}}}^{t}\) for \(0\leq t\leq s<s^{\prime}\)._
The proof is given in the Appendix. Indeed, by this approximation, we find that the solution \(\pi_{T}\) for the last sub-problem \(\mathcal{P}_{T}^{\prime}\) minimizes
\[\sum_{t=0}^{T-1}\left\|\pi_{T}\nabla_{\theta_{1}^{t}}\mathcal{L}-\nabla_{ \theta_{2,\pi_{T-1}}^{t}}\mathcal{L}\right\|_{2}^{2}\approx\sum_{t=0}^{T-1} \left\|\pi_{T}\nabla_{\theta_{1}^{t}}\mathcal{L}-\nabla_{\theta_{2,\pi_{T}}^{t} }\mathcal{L}\right\|_{2}^{2}, \tag{7}\]
where the right-hand side is nothing but the objective of the original problem \(\mathcal{P}_{T}\).
Algorithm 1 gives a step-by-step procedure to obtain the transferred learning trajectory \((\theta_{2}^{1},\cdots,\theta_{2}^{T})\) by solving the sub-problems \(\{\mathcal{P}_{s}^{\prime}\}_{1\leq s\leq T}\) sequentially. In lines 2-6, it computes an average of gradients \(\nabla_{\theta}\mathcal{L}\) over a single mini-batch for each \(\theta=\theta_{1}^{t-1},\theta_{2}^{t-1}\ (1\leq t\leq s)\), as required in the \(s\)-th sub-problem \(\mathcal{P}_{s}^{\prime}\) (Eq. 6). In line 7, the \(s\)-th permutation \(\pi_{s}\) is obtained as a solution of the sub-problem \(\mathcal{P}_{s}^{\prime}\), which can be solved as a linear optimization (Eq. 3) using the coordinate descent algorithm proposed in [3]. Then we update the transferred parameter \(\theta_{2}^{t}\) for \(t=1,\cdots,s\) in line 8.
### Additional techniques
While Algorithm 1 solves the learning transfer problem (Eq. 5) approximately, it still has some issues in terms of storage and computation cost. Here we explain two practical techniques to resolve them.
**Linear trajectory.** In terms of the storage cost, Algorithm 1 requires a capacity of \(T+1\) times the model size to keep a learning trajectory of length \(T\), which becomes a more substantial issue as the model size increases or the trajectory becomes more fine-grained. To reduce the required storage capacity, instead of keeping the entire trajectory, we propose to imitate it by linearly interpolating its end points. In other words, given an initial parameter \(\theta_{1}^{0}\) and the final \(\theta_{1}^{T}\), we define a new trajectory \([\theta_{1}^{0}:\theta_{1}^{T}]:=(\theta_{1}^{0},\cdots,\theta_{1}^{t},\cdots,\theta_{1}^{T})\) with \(\theta_{1}^{t}:=(1-\lambda_{t})\theta_{1}^{0}+\lambda_{t}\theta_{1}^{T}\) and \(0=\lambda_{0}\leq\cdots\leq\lambda_{t}\leq\cdots\leq\lambda_{T}=1\). This idea is motivated by previous findings on monotonic linear interpolation [16; 12]. For our experiments, we consider two strategies for scheduling \(\lambda_{t}\): (1) uniform scheduling with \(\lambda_{t+1}-\lambda_{t}:=1/T\), and (2) cosine scheduling with \(\lambda_{t+1}-\lambda_{t}:=\alpha_{t}/Z\), where \(\alpha_{t}:=1+\cos(\pi t/T)\) and \(Z:=\sum_{t=0}^{T}\alpha_{t}\) is the normalizing factor. We found that the cosine scheduling is better than the uniform one (Fig. 2(a)), probably because the learning rate during the training of \(\theta_{1}^{T}\) is also cosine-annealed in our setting.
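The cosine-scheduled linear trajectory takes only a few lines to construct; the following is a sketch directly following the definitions above, with illustrative names.

```python
import numpy as np

def cosine_lambdas(T):
    # lambda_0 = 0 and lambda_{t+1} - lambda_t = alpha_t / Z, with
    # alpha_t = 1 + cos(pi * t / T) and Z = sum_{t=0}^{T} alpha_t (alpha_T = 0)
    alpha = 1.0 + np.cos(np.pi * np.arange(T + 1) / T)
    steps = alpha / alpha.sum()
    return np.concatenate([[0.0], np.cumsum(steps[:-1])])  # lambda_T = 1

def linear_trajectory(theta_0, theta_T, T):
    # imitated trajectory [theta_0 : theta_T] via linear interpolation
    return [(1.0 - lam) * theta_0 + lam * theta_T for lam in cosine_lambdas(T)]
```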
Next, using the cosine scheduling, we compare the transferred results between the linear trajectory \([\theta_{1}^{0}:\theta_{1}^{T}]\) and the actual trajectory \((\theta_{1}^{0},\cdots,\theta_{1}^{T})\) where each \(\theta_{1}^{t}\) is a checkpoint at the \(t\)-th training epoch on CIFAR-10 with \(T=60\). Interestingly, the transfer of the linear trajectory is more stable and has less variance than the transfer of the actual one. This may be because the actual trajectory contains noisy information while the linear trajectory is directed towards the optimal solution \(\theta_{1}^{T}\). Due to its storage efficiency and stability in accuracy, we employ the linear trajectory with \(T=30\) (except for MNIST where \(T=10\) is used) throughout our experiments in Section 4.
Gradient caching.In terms of the computation cost, Algorithm 1 requires \(O(T^{2})\) gradient computations. To reduce the number of gradient computations, we propose to cache the gradients once computed instead of re-computing them for every \(s=1,\cdots,T\). In fact, the gradients \(\nabla_{\theta_{2,\pi_{s^{\prime}}}^{t-1}}\mathcal{L}\) cached at an earlier step \(s^{\prime}<s\) and the re-computed gradients \(\nabla_{\theta_{2,\pi_{s}}^{t-1}}\mathcal{L}\) are not exactly the same quantity, since the intermediate parameter \(\theta_{2,\pi_{s}}^{t-1}=\theta_{2}^{0}+\pi_{s}(\theta_{1}^{t-1}-\theta_{1}^{0})\) takes different values for each \(s\). However, they can be treated as approximately equal by Lemma 3.1 if we assume the continuity of the gradients. We can thus reduce the number of gradient computations from \(O(T^{2})\) to \(O(T)\) by caching the gradients once computed. We describe this computationally efficient version in Algorithm 2.
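A minimal sketch of the caching idea (not the full Algorithm 2; `grad_fn` is a placeholder for the averaged mini-batch gradient oracle):

```python
grad_cache = {}  # t -> mini-batch gradient first computed at step t

def cached_grad(t, theta, grad_fn):
    """Return the gradient for timestep t, computing it only once.
    Reusing the cached value across sub-problems s = t, ..., T is
    justified approximately by the continuity argument of Lemma 3.1."""
    if t not in grad_cache:
        grad_cache[t] = grad_fn(theta)
    return grad_cache[t]
```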
Figure 3: Transfer experiments on CIFAR-10 with Conv8.
## 4 Experiments
In this section, we empirically evaluate how learning transfer works on standard vision datasets. First, we compare our proposed methods (**GMT**, **FGMT**) and two baselines (**Naive**, **Oracle**), which are explained below, under the following two scenarios: (1) transferring learning trajectories starting from randomly initialized parameters and (2) transferring learning trajectories starting from pre-trained parameters (Section 4.1). Next, we evaluate how efficiently the transferred parameters can be trained in their subsequent training (Section 4.2). Finally, we investigate the loss landscape properties of the transferred parameters from the viewpoint of mode connectivity (Section 4.3). The details on the datasets, NN architectures and training settings used in our experiments are provided in the Appendix.
Baselines.As baselines for learning transfer, we introduce two natural methods: **Naive** and **Oracle**. In both baselines, we transfer a given learning trajectory \((\theta^{0}_{1},\cdots,\theta^{T}_{1})\) by a single permutation \(\pi_{\texttt{naive}}\) or \(\pi_{\texttt{oracle}}\), according to the problem formulation in Section 3.1. In the Naive baseline, we define \(\pi_{\texttt{naive}}\) as the identity permutation, which satisfies \(\pi_{\texttt{naive}}\theta=\theta\). In other words, the parameter transferred by Naive is simply obtained as \(\theta^{t}_{2,\pi_{\texttt{naive}}}=\theta^{0}_{2}+(\theta^{t}_{1}-\theta^{0}_{1})\). On the other hand, in the Oracle baseline, we first obtain a true parameter \(\theta^{T}_{2}\) by actually training the given initial parameter \(\theta^{0}_{2}\) with the same optimizer as for the training of \(\theta^{T}_{1}\). Then we define \(\pi_{\texttt{oracle}}\) by minimizing the layer-wise \(L^{2}\) distance between the actually trained trajectory \(\theta^{T}_{2}-\theta^{0}_{2}\) and \(\pi_{\texttt{oracle}}(\theta^{T}_{1}-\theta^{0}_{1})\), where we simply apply the coordinate descent [3] explained in Section 2.3. The Oracle baseline is expected to be close to the optimal solution for the learning transfer problem via permutation symmetry.
Source trajectories.In our experiments, as discussed in Section 3.3, we consider transferring linear trajectories \([\theta^{0}_{1}:\theta^{T}_{1}]\) of length \(T\) rather than actual trajectories for \(\theta^{T}_{1}\), due to the storage cost and the instability emerging from noise. For a fixed permutation \(\pi\) (i.e., in the case of the baselines), the final transferred parameter \(\theta^{T}_{2,\pi}\) does not depend on which of the two trajectories we use. Therefore we note that the choice between the two types of trajectory does not lead to an unfair evaluation for the baselines. Nevertheless, it is an interesting direction for future work to explore more elaborate methods that can transfer actual trajectories effectively.
Figure 4: We plot the validation accuracies of the transferred parameter \(\theta^{t}_{2,\pi_{t}}\) for each \(t=1,\cdots,T\) with various datasets and NN architectures. We also provide the standard deviation over three runs for each experiment. (Upper) Transfer of a learning trajectory on a single dataset between random initial parameters. (Lower) Transfer of a fine-tuning trajectory between pre-trained parameters. For example, “ImageNet \(\rightarrow\) Cars” means the transfer of the fine-tuning trajectory on the Cars dataset between the parameters pre-trained on ImageNet.
### Learning transfer experiments
Figure 4 shows the validation accuracies of the transferred parameters \(\theta^{t}_{2,\pi_{t}}\) for each timestep \(t=1,\cdots,T\) during the transfer. For the baselines (Naive and Oracle), we set the \(t\)-th permutation \(\pi_{t}\) to the fixed \(\pi_{\mathtt{naive}}\) and \(\pi_{\mathtt{oracle}}\), respectively, for every \(t\). For our algorithms (GMT and FGMT), the \(t\)-th permutation \(\pi_{t}\) corresponds to \(\pi_{s}\) in Algorithms 1 and 2.
In the upper figures 4(a)-4(c), we transfer a learning trajectory trained with a random initial parameter on a single dataset (MNIST [34], CIFAR-10 [32] and ImageNet [8]) to another random initial parameter. We will refer to this experimental setting as the random initialization scenario. We can see that our methods successfully approximate the Oracle baseline and outperform the Naive baseline on each dataset. Also, we can see that FGMT, the fast approximation version of GMT, performs very similarly to or even outperforms GMT. This is probably because the update of \(\pi_{t}\) affects the previously computed gradients in GMT, but not in FGMT, resulting in the more stable behavior of FGMT.
In the lower figures 4(d)-4(f), we transfer a learning trajectory of fine-tuning on a specialized dataset (a 10-class subset of CIFAR-100 [32], Stanford Cars [31] and CUB-200-2011 [52]) from an initial parameter pre-trained on ImageNet to another pre-trained one. We refer to this experimental setting as the pre-trained initialization scenario. Transferring learning trajectories appears to be more difficult in this scenario than in the random initialization scenario shown in the upper figures, since the Naive baseline always fails to transfer the trajectories. We can see that, while our methods behave closely to the Oracle baseline up to the middle of the trajectory, the accuracy deteriorates immediately after that. Nevertheless, the peak accuracies of our methods largely outperform those of the Naive baseline. By stopping the transfer at the peak points (i.e., so-called early stopping), we can take advantage of the transferred parameters, as we will see in the next section.
Figure 5: Subsequent training of the transferred parameters. In each figure, we plot the validation accuracies during the training of the transferred parameters for \(10\) epochs, on the same dataset as the source trajectory being trained on. The transferred parameters obtained by solving the equation (5) can be trained faster than the Naive baseline especially in the pre-trained initialization scenario.
Figure 6: Mode connectivity analysis on CIFAR-10 with Conv8. In Figures 6(a) and 6(b) we plot the validation losses along the linear path between two parameters. The label Source* refers to the appropriately permuted source parameter corresponding to each transfer method, and the label Trained refers to the parameter actually trained from the initial parameter \(\theta^{0}_{2}\). In Figure 6(c), we plot the validation losses over the \(uv\)-plane following the same protocol as Garipov et al. [15].
### Efficient training of transferred parameters
In the previous section, we obtained transferred parameters that achieve non-trivial accuracy without any direct training. Here we evaluate how efficiently the transferred parameters can be trained in their subsequent training, by training for \(10\) epochs on the same dataset the source trajectory was trained on. We started each training from the transferred parameter \(\theta^{t}_{2,\pi_{t}}\) at the best trajectory step \(t\) in Figure 4. Figure 5 shows the validation accuracies for each epoch in the training of the transferred parameters. In the random initialization scenario, although the differences between the methods seem quite small (Figures 5(a), 5(b)), the mean values still follow a similar trend as in the transfer experiments in Figure 4. This is because the Naive baseline also achieves non-trivial accuracy already when transferred in this scenario. On the other hand, in the pre-trained initialization scenario, the parameters transferred by our methods and the Oracle baseline learn the datasets faster than the parameters transferred by the Naive baseline (Figures 5(c), 5(d)). Thus the benefit of the transferred parameters seems to be greater in the pre-trained initialization scenario than in the random initialization scenario.
### Mode connectivity analysis
Here we investigate the loss landscape properties of the transferred parameters on CIFAR-10, especially from the viewpoint of mode connectivity. Figure 6(a) visualizes the \(1\)-dimensional loss landscape along the linear path between the appropriately permuted source parameter \(\pi\theta^{T}_{1}\) and the transferred parameter \(\theta^{T}_{2,\pi}\), where we set \(\pi=\pi_{T}\) for GMT, and \(\pi=\pi_{\texttt{naive}}\) and \(\pi=\pi_{\texttt{oracle}}\) for the Naive and Oracle baselines. In the figure, we refer to the appropriately permuted source parameter \(\pi\theta^{T}_{1}\) as Source*. The results show that the transferred parameters lie in the same basin as the source parameter for all methods, including the baselines. On the other hand, Figure 6(b) visualizes the \(1\)-dimensional landscape along the linear path between the true parameter \(\theta^{T}_{2}\) (i.e., the parameter actually trained from \(\theta^{0}_{2}\)) and the transferred parameter \(\theta^{T}_{2,\pi}\). In this visualization, we notice that only the Oracle baseline is close to being linearly mode connected with the true parameter. Furthermore, as we can see in the \(2\)-dimensional landscape around the true parameter (Figure 6(c)), the parameter transferred by Oracle is not exactly linearly mode connected to the true parameter, but the two still live in the same basin. It remains an open question how we can obtain a permutation \(\pi\) by which the transferred parameter \(\theta^{T}_{2,\pi}\) lives in the same basin as the true parameter \(\theta^{T}_{2}\), without using the true parameter itself, unlike the Oracle baseline.
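The \(1\)-dimensional visualizations above amount to evaluating the validation loss on a uniform grid along the segment between two flattened parameter vectors; a minimal sketch, assuming a `loss_fn` that maps a parameter vector to a scalar validation loss:

```python
import numpy as np

def linear_path_losses(theta_a, theta_b, loss_fn, n_points=25):
    # losses at evenly spaced points on the segment between theta_a and
    # theta_b, used to probe (linear) mode connectivity
    ts = np.linspace(0.0, 1.0, n_points)
    return ts, [loss_fn((1.0 - t) * theta_a + t * theta_b) for t in ts]
```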
## 5 Related Work
Loss landscape, linear mode connectivity, permutation symmetry.The loss landscape of training deep neural networks has been actively studied in an effort to unravel the mysteries of non-convex optimization in deep learning [21; 7; 35; 29; 36]. One of the mysteries in deep learning is the stability and consistency of training processes and solutions, despite the multiple sources of randomness such as random initialization, data ordering and data augmentation [11; 6; 50; 26]. Previous studies of mode connectivity, both theoretical [14; 47] and empirical [9; 15], demonstrate the existence of low-loss curves between any two optimal solutions trained independently with different randomness.
Linear mode connectivity (LMC) is a special case of mode connectivity where two optimal solutions are connected by a low-loss linear path [45; 13; 44; 28; 41]. Entezari et al. [10] observed that even two solutions trained from different random initializations can be linearly connected by an appropriate permutation symmetry. Ainsworth et al. [3] developed an efficient method to find such permutations, and Jordan et al. [27] extended it to NN architectures with Batch normalization [25]. These observations strengthen the expectation that two training processes, even from different random initializations, are similar to each other up to permutation symmetry. In our work, based on these observations, we attempt to transfer one training process to another initial parameter by permutation symmetry.
Another line of research related to our work is the study of monotonic linear interpolation (MLI) between an initialization and its trained result. Goodfellow et al. [16] first observed that the losses are monotonically decreasing along the linear path between an initial parameter and the trained one. Frankle [12] and Lucas et al. [42] confirmed that the losses are monotonically non-increasing even with modern network architectures such as CNNs and ResNets [19]. Vlaar and Frankle [51] empirically analyzed which factors in NN training influence the shape of the non-increasing loss curve along the linear interpolation, and Wang et al. [53] theoretically analyzed the plateau phenomenon in the early phase of the linear interpolation. Motivated by these observations, we introduced the notion of linear trajectories in Section 3.3 to reduce storage costs in our learning transfer.
Model editing.Our approach of transferring learning trajectories can also be considered a kind of model editing [49; 46; 24; 23] in the parameter space, because we modify a given initial parameter by adding an appropriately permuted trajectory. In particular, a recent work by Ilharco et al. [23] is closely related to ours. They proposed to arithmetically edit a pre-trained NN with a task vector, which is defined by subtracting the initial pre-trained parameter from the parameter fine-tuned on a specific task. From our viewpoint, task vectors can be seen as one-step learning trajectories (i.e., learning trajectories with \(T=1\)). Model merging (or model fusion) [48; 43; 55; 37] is also related in that it operates by calculations in the parameter space.
Efficient training for multiple NNs.Several lines of work attempt to reduce the computational cost of training multiple NNs. Fast ensembling reduces the cost of ensemble training by cyclically scheduled learning rates [22] or by searching different optimal basins in the loss landscape [15; 11; 54; 5]. A recent work by Liu et al. [39] leverages knowledge distillation [20] from one training run to accelerate subsequent ones. Our approach differs from theirs in that we try to establish a general principle for transferring learning trajectories. Also, the warm-starting technique investigated by Ash and Adams [4] seems related in that it continues training from an already-trained network. There may be some connection between their approach and ours, which remains for future work.
Gradient matching.The gradient information obtained during training has been utilized in areas outside of ours. For example, in dataset distillation, Zhao et al. [58] optimized a distilled dataset by minimizing layer-wise cosine similarities between gradients on the distilled dataset and the real one, starting from random initial parameters, which leads to similar training results on those datasets. Similarly, Yin et al. [56] successfully recovered private training data from its gradient by minimizing the distance between gradients. In contrast to their problems, where input data is optimized, our problem requires optimizing an unknown transformation of NN parameters. In addition, our problem requires matching entire learning trajectories, which are too computationally expensive to compute naively.
## 6 Limitations
One limitation in our approach via permutation symmetry is that it requires consistency in the NN architectures of the source and transferred models. However, at least for architectures with different neuron sizes, we expect that this limitation can be addressed by using doubly stochastic matrices instead of permutations, as in previous work on model merging [48], which remains for future work.
In addition, we mainly focused on how we can successfully transfer given learning trajectories. Thus its practical applications, such as model ensembling and knowledge distillation (the latter requiring the extension for transferring between different NN architectures explained above), have not yet been explored. Also, the ad-hoc techniques employed in our experiments, such as cosine scheduling for the linear trajectory (Section 3.3) and early stopping (Section 4.1), may be sub-optimal for practical use. It remains for future work to improve these techniques and explore the practical applications.
## 7 Conclusion
In this work, we formulated the problem of how we can synthesize an unknown learning trajectory from the known one, called the learning transfer problem. We derived an algorithm that approximately solves it and developed practical techniques to reduce the storage and computation costs of the algorithm. In our experiments, we confirmed that our algorithm successfully transfers a given learning trajectory at the same level as the Oracle baseline. Moreover, we observed that the transferred parameters can be efficiently trained in the subsequent training especially in the pre-trained initialization scenario. We also analyzed the mode connectivity around the transferred parameters and the source or actually trained parameters. We hope that our investigations will be helpful for future research in this new direction to reduce training costs by transferring learning trajectories. |
2310.16597 | Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes | The infinitely wide neural network has been proven a useful and manageable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian processes that allows a rigorous analysis of the way the choice of activation function and network weights impacts the training dynamics. In this paper, we extend the seminal proof of Matthews et al. (2018) to a larger class of initial weight distributions (which we call PSEUDO-IID), including the established cases of IID and orthogonal weights, as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully-connected and convolutional networks initialized with PSEUDO-IID distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge-of-Chaos for a broader class of neural networks and tune them at criticality in order to enhance their training. Moreover, they enable the posterior distribution of Bayesian Neural Networks to be tractable across these various initialization schemes. | Thiziri Nait-Saada, Alireza Naderi, Jared Tanner | 2023-10-25T12:38:36Z | http://arxiv.org/abs/2310.16597v3 |
# Beyond iid weights: sparse and low-rank deep neural networks are also Gaussian Processes
###### Abstract
The infinitely wide neural network has proven a useful and manageable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian Processes that allows a rigorous analysis of the way the choice of activation function and network weights impacts the training dynamics. In this paper, we extend the seminal proof of Matthews et al. (2018) to a larger class of initial weight distributions (which we call Pseudo-iid), including the established cases of iid and orthogonal weights, as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully connected and convolutional networks initialized with Pseudo-iid distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge-of-Chaos for a broader class of neural networks and tune them at criticality in order to enhance their training.
## 1 Introduction
Deep neural networks are often studied at random initialization, where in the limit of infinite width, they have been shown to generate intermediate entries which approach Gaussian Processes. This was seemingly first studied for one-layer networks in Neal (1996), when the weight matrices have identically and independently distributed (iid) Gaussian entries, and it became a popular model for deep networks following the seminal results for deep fully connected networks in Lee et al. (2017) and Matthews et al. (2018). Specifically, the latter formulated a proof strategy that quickly became a cornerstone in the field, paving the way for various extensions of the Gaussian Process limit (see Section 1.1). The resulting Gaussian Process has proven to be of practical interest for at least two reasons. First, it emerges as a mathematically motivated choice of prior in the context of Bayesian Neural Networks. Secondly, it aids the analysis of exploding and vanishing gradient phenomena, as done in Schoenholz et al. (2017) and Pennington et al. (2018), amongst other network properties.
In this paper, we extend the proof of the Gaussian Process from Matthews et al. (2018) to a larger class of initial weight distributions (which we call Pseudo-iid). Pseudo-iid distributions include the already well-known cases of iid and orthogonal weights. Moreover, we explore other important settings that conform to our more general conditions (e.g. structured sparse and low-rank), yet for which the Gaussian Process limit could not be derived previously due to their violation of the iid assumption.
Why study low-rank and structured sparse networks at initialization?In recent years, deep learning models have significantly increased in size, while smaller AI chips have concurrently been preferred. The most widely studied approaches for bridging the gap between the two are to reduce the number of parameters by either pruning the network to be sparse or using low-rank factors. The _lottery ticket hypothesis_ Frankle and Carbin (2019) has given empirical evidence of the existence of pruned subnetworks (i.e. _winning tickets_) achieving test accuracy comparable and even superior to that of their original network. Unfortunately, most pruning methods produce sparsity patterns that are unstructured, thereby limiting potential hardware acceleration (see Hoefler et al. (2021)). Remarkably, Chen et al. (2022) revealed some _winning tickets_ with structured sparsity that can be exploited by efficient hardware accelerators on GPUs and FPGAs to greatly reduce their in-memory access, storage, and computational burden (Zhu et al. (2020)). Similarly, low-rank models are being used to improve the efficiency and accuracy of networks Osawa et al. (2017); Yang et al. (2020a,b); Tai et al. (2022); Price and Tanner (2023); Karimi Mahabadi et al. (2021). The emergence of low-rank structures within deep networks during training is also observed, where the low-rank factors are associated with learned classes Han et al. (2022); this phenomenon has motivated thresholding of singular values in Berlyand et al. (2023), which is observed to improve accuracy similarly to the lottery ticket hypothesis.
If such computationally efficient winning tickets exist, why would one want to train a dense (or full-rank) network at a high computational price only to later reduce it to be sparse (or low-rank)? This question has given rise to single-shot approaches, often referred to as Pruning at Initialization (PaI) such as SNIP Lee et al. (2019), SynFlow Tanaka et al. (2020), GraSP Wang et al. (2020a), NTT Liu and Zenke (2020). Consequently, the present work delves into the properties of such random networks at initialization, within the larger framework of Pseudo-iid regimes.
### Related work
To the best of our knowledge, the Gaussian Process behaviour in the infinite width regime was first established by Neal (1996) in the case of one-layer fully connected networks when the weights are iid sampled from standard distributions. The result has then been extended in Matthews et al. (2018) to deep fully connected networks, where the depth is fixed, the weights are distributed as iid Gaussians and the widths of the hidden layers grow jointly to infinity. Jointly scaling the network width substantially distinguishes their method of proof from the approach taken in Lee et al. (2017), where the authors considered a sequential limit analysis through layers. That is, analyzing the limiting distribution at one layer when the previous ones have already converged to their limiting distributions, as in Lee et al. (2017), is significantly different from examining it when the previous layers are jointly converging to their limits, as is done in Matthews et al. (2018). The latter approach now serves as a foundational machinery in the infinite width analysis from which numerous subsequent works followed. This paper is one of them.
Since this Gaussian Process behaviour has been established, two main themes of research have further been developed. The first consists of the extension of such results for more general and complex architectures such as convolutional networks with many channels Novak et al. (2020), Garriga-Alonso et al. (2019) or any modern architectures composed of fully connected, convolutional or residual connections, as summarized in Yang (2021), using the Tensor Program terminology. The second line of work concerns the generalization of this Gaussian Process behaviour to other possible weight distributions, such as orthogonal weights in Huang et al. (2021) or, alternatively, any iid weights with finite moments as derived in Hanin (2021). Note that the orthogonal case does not fit into the latter as entries are exchangeable but not independent. The same kind of results for general architectures in the iid setting have been derived in Golikov and Yang (2022) and Yang (2021). Our contribution fits into this line of research, where we relax the independence requirement of the weight matrix entries and instead consider Pseudo-iid distributions made of uncorrelated and exchangeable random variables. This broader class of distributions, Definition 2, enables us to present a unified proof that strictly generalizes the approaches taken so far and encompasses all of them, for two types of architectures, namely, fully connected and convolutional networks.
### Organization of the paper
In Section 2, we focus on fully connected neural networks, formally stating the Pseudo-iid regime and its associated Gaussian Process (GP) in Theorem 1, providing a rigorous proof in Appendix A. Our Pseudo-iid regime unifies the previously studied settings of iid and orthogonal weights, while also allowing for novel initialization schemes conform to the Gaussian Process limit, such as low-rank and structured sparse weights. We expand the definition of Pseudo-iid distributions to convolutional kernels and extend the resulting Gaussian Process limit to convolutional neural networks (CNNs) in Section 2.2. In Section 3, we provide examples of Pseudo-iid distributions in practice, supporting our theoretical results with numerical simulations. Moreover, we allude to two important consequences of the GP limit in 3.3, namely the analysis of stable initialization of deep networks on the so-called Edge-of-Chaos (EoC), as well as the implication of the limiting Gaussian
distribution on Bayesian inference, in our more expansive Pseudo-iid regime. Lastly, in Section 4, we review our main contributions and put forward some further research directions.
## 2 Gaussian Process behaviour in the Pseudo-iid regime
We consider an untrained fully connected neural network with width \(N_{\ell}\) at layer \(\ell\in\{1,\cdots,L+1\}\). Its weights \(W^{(\ell)}\in\mathbb{R}^{N_{\ell}\times N_{\ell-1}}\) and biases \(b^{(\ell)}\in\mathbb{R}^{N_{\ell}}\) at layer \(\ell\) are sampled from centered probability distributions, respectively \(\mu_{W}^{(\ell)}\) and \(\mu_{b}^{(\ell)}\). Starting with such a network, with nonlinear activation \(\phi:\mathbb{R}\to\mathbb{R}\), the propagation of any input data vector \(z^{(0)}\coloneqq x\in\mathcal{X}\subseteq\mathbb{R}^{N_{0}}\) through the network is given by the following equations,
\[h_{i}^{(\ell)}(x)=\sum_{j=1}^{N_{\ell-1}}W_{ij}^{(\ell)}z_{j}^{(\ell-1)}(x)+b_ {i}^{(\ell)},\qquad z_{j}^{(\ell)}(x)=\phi(h_{j}^{(\ell)}(x)), \tag{1}\]
where \(h^{(\ell)}(x)\in\mathbb{R}^{N_{\ell}}\) is referred to as the preactivation vector at layer \(\ell\), or the feature maps.
Pseudo-iid distributions (formally introduced in Def. 2) only need to meet some moment conditions along with an exchangeability requirement that we define below.
**Definition 1** (Exchangeability).: _Let \(X_{1},\cdots,X_{n}\) be scalar, vector or matrix-valued random variables. We say \((X_{i})_{i=1}^{n}\) are exchangeable if their joint distribution is invariant under permutations, i.e. \((X_{1},\cdots,X_{n})\stackrel{{ d}}{{=}}(X_{\sigma(1)},\cdots,X_{ \sigma(n)})\) for all permutations \(\sigma:[n]\to[n]\). A random matrix is called row- (column-) exchangeable if its rows (columns) are exchangeable random vectors, respectively._
A row-exchangeable and column-exchangeable weight matrix \(W\in\mathbb{R}^{m\times n}\) is not in general entrywise exchangeable, which means its distribution is not typically invariant under arbitrary permutations of its entries; particularly, out of \((mn)!\) possible permutations of the entries, \(W\) only needs to be invariant under \(m!n!\) of them -- an exponentially smaller number. We can now define the family of Pseudo-iid distributions.
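Such row- and column-exchangeability can always be enforced on any base ensemble by independent, uniform row and column permutations; a one-line NumPy sketch of this sufficient (but not necessary) construction:

```python
import numpy as np

def make_exchangeable(W, rng):
    # Uniform, W-independent row and column permutations make the
    # resulting matrix row- and column-exchangeable.
    return W[rng.permutation(W.shape[0])][:, rng.permutation(W.shape[1])]
```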
**Definition 2** (Pseudo-iid).: _Let \(m,n\) be two integers and \(a\in\mathbb{R}^{n}\) be any fixed vector. We will say that the random matrix \(W=(W_{ij})\in\mathbb{R}^{m\times n}\) is in the Pseudo-iid distribution with parameter \(\sigma^{2}\) if_
1. _the matrix is row-exchangeable and column-exchangeable,_
2. _its entries are centered, uncorrelated, with variance_ \(\mathbb{E}(W_{ij}^{2})=\frac{\sigma^{2}}{n}\)_,_
3. \(\mathbb{E}\big{|}\sum_{j=1}^{n}a_{j}W_{ij}\big{|}^{8}\leq K\|\mathbf{a}\|_{2}^{8}n^{-4}\) _for some constant_ \(K\)_,_
4. _and_ \(\lim_{n\to\infty}\frac{n^{2}}{\sigma^{4}}\mathbb{E}(W_{i_{a},j}W_{i_{b},j}W_{i_{c},j^{\prime}}W_{i_{d},j^{\prime}})=\delta_{i_{a},i_{b}}\delta_{i_{c},i_{d}}\)_, for all_ \(j\neq j^{\prime}\)_._
_When \(W^{(1)}\) has iid Gaussian entries and the other weight matrices \(W^{(\ell)},\,2\leq\ell\leq L+1\), of a neural network (see equation 1) are drawn from a Pseudo-iid distribution, we will say that the network is under the Pseudo-iid regime._
The Pseudo-iid conditions (i)-(iv) are included to serve the following purposes: Conditions (i) and (ii) allow for non-iid initializations such as permuted block-sparse and low-rank while being sufficient for a CLT-type argument to hold. The variance scaling prevents the preactivations in equation 2 from blowing up as the network's size grows. Condition (iii) requires the moments of the possibly dependent weights to behave like independent ones, while condition (iv) constrains their cross-correlations to vanish fast enough. Together they maintain that, in the limit, these variables interact as if they were independent, and this is all our proof needs. The importance of condition (iii) is expanded upon in Appendix G. It is worth noting that, because the first layer's weight matrix has only one dimension scaling up with \(n\), the tension between the entries' dependencies and the network size cannot be resolved; thus we take these weights to be iid.1 Whether some of the conditions are redundant remains an open question.
Footnote 1: In fact, it is enough for \(W^{(1)}\) to have iid rows. We assume Gaussian iid entries for simplicity, yet this can be further relaxed at the cost of a more complicated formula for the first covariance kernel (see equation 21). Examples of row iid distributions that could be used for \(W^{(1)}\) which are similar to the low-rank and structured ensembles include the product \(W^{(1)}=DPC\) of a diagonal matrix \(D\in\mathbb{R}^{N_{1}[n]\times N_{1}[n]}\) with entries drawn iid, \(P\) a random permutation matrix, and \(C\in\mathbb{R}^{N_{1}[n]\times N_{0}}\) a non-zero low rank or structured sparse matrix.
Throughout this paper, we consider a specific set of activation functions that satisfy the so-called linear envelope property, Definition 3, satisfied by most activation functions used in practice (ReLu, Softmax, Tanh, HTanh, etc.).
**Definition 3**.: _(Linear envelope property) A function \(\phi:\mathbb{R}\to\mathbb{R}\) is said to satisfy the linear envelope property if there exist \(c,M\geq 0\) such that, for any \(x\in\mathbb{R}\),_
\[|\phi(x)|\leq c+M|x|.\]
### The Pseudo-iid regime for fully connected networks
Our proofs of Pseudo-iid networks converging to Gaussian Processes are done in the more sophisticated simultaneous width growth limit as pioneered by Matthews et al. (2018). For a review of the literature on deep networks with sequential vs. simultaneous scaling, see Section 1.1. One way of characterizing such a simultaneous convergence over all layers is to consider that all widths \(N_{\ell}\) are increasing functions of one parameter, let us say \(n\), such that, as \(n\) grows, all layers' widths increase: \(\forall\ell\in\{1,\cdots,L\},N_{\ell}\coloneqq N_{\ell}[n]\). We emphasize this dependence on \(n\) by appending an index \(X[n]\) to the random variables \(X\) when \(n\) is finite and denote by \(X[*]\) its limiting random variable, corresponding to \(n\to\infty\). The input dimension \(N_{0}\) and the final output layer dimension \(N_{L+1}\) are finite thus they do not scale with \(n\). Moreover, the input data are assumed to come from a countably infinite input space \(\mathcal{X}\) (see A.1). Equation equation 1 can thus be rewritten, for any \(x\in\mathcal{X}\), as
\[h_{i}^{(\ell)}(x)[n]=\sum_{j=1}^{N_{\ell-1}[n]}W_{ij}^{(\ell)}z_{j}^{(\ell-1)} (x)[n]+b_{i}^{(\ell)},\qquad z_{j}^{(\ell)}(x)[n]=\phi(h_{j}^{(\ell)}(x)[n]), \tag{2}\]
and the associated Gaussian Process limit is given in Theorem 1, which we can now state.
**Theorem 1** (GP limit for fully connected Pseudo-iid networks).: _Suppose a fully connected neural network as in equation 2 is under the Pseudo-iid regime with parameter \(\sigma_{W}^{2}\) and the activation satisfies the linear envelope property Def. 3. Let \(\mathcal{X}\) be a countably-infinite set of inputs. Then, for every layer \(2\leq\ell\leq L+1\), the sequence of random fields \((i,x)\in[N_{\ell}]\times\mathcal{X}\mapsto h_{i}^{(\ell)}(x)[n]\in\mathbb{R}^ {N_{\ell}}\) converges in distribution to a centered Gaussian Process \((i,x)\in[N_{\ell}]\times\mathcal{X}\mapsto h_{i}^{(\ell)}(x)[*]\in\mathbb{R}^ {N_{\ell}}\), whose covariance function is given by_
\[\mathbb{E}\Big{[}h_{i}^{(\ell)}(x)[*]\cdot h_{j}^{(\ell)}(x^{\prime})[*]\Big{]} =\delta_{i,j}K^{(\ell)}(x,x^{\prime}), \tag{3}\]
_where_
\[K^{(\ell)}(x,x^{\prime})=\begin{cases}\sigma_{b}^{2}+\sigma_{W}^{2}\mathbb{E }_{(u,v)\sim\mathcal{N}(\mathbf{0},K^{(\ell-1)}(x,x^{\prime}))}[\phi(u)\phi(v) ],&\ell\geq 2,\\ \sigma_{b}^{2}+\frac{\sigma_{W}^{2}}{N_{0}}\langle x,x^{\prime}\rangle,&\ell=1.\end{cases} \tag{4}\]
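The recursion in equation 4 is straightforward to evaluate numerically; below is a Monte Carlo sketch for a pair of inputs (closed forms exist for particular activations such as ReLU, but sampling keeps the sketch generic; the function name is ours):

```python
import numpy as np

def gp_kernel(x, xp, L, phi=np.tanh, sw2=2.0, sb2=0.0, n_mc=200_000, seed=0):
    """Monte Carlo evaluation of the 2x2 covariance K^{(L)} over the
    input pair {x, x'}, following the recursion of equation 4."""
    rng = np.random.default_rng(seed)
    N0 = len(x)
    K = sb2 + (sw2 / N0) * np.array([[x @ x, x @ xp],
                                     [xp @ x, xp @ xp]])   # layer l = 1
    for _ in range(2, L + 1):                               # layers l >= 2
        z = rng.multivariate_normal(np.zeros(2), K, size=n_mc)
        u, v = phi(z[:, 0]), phi(z[:, 1])
        K = sb2 + sw2 * np.array([[np.mean(u * u), np.mean(u * v)],
                                  [np.mean(u * v), np.mean(v * v)]])
    return K
```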
### The Pseudo-iid regime for Convolutional Neural Networks
We consider a CNN with \(C_{\ell}\) number of channels at layer \(\ell\in\{1,\cdots,L+1\}\) and two-dimensional convolutional filters \(\mathbf{U}_{i,j}^{(\ell)}\in\mathbb{R}^{k\times k}\) mapping the input channel \(j\in\{1,\cdots,C_{\ell-1}\}\) to the output channel \(i\in\{1,\cdots,C_{\ell}\}\). The input signal \(\mathbf{X}\) (also two-dimensional) has \(C_{0}\) channels \(\mathbf{x}_{1},\cdots,\mathbf{x}_{C_{0}}\)
and its propagation through the network is given by
\[\mathbf{h}_{i}^{(\ell)}(\mathbf{X})[n]=\begin{cases}b_{i}^{(1)}\mathbf{1}+\sum_{j= 1}^{C_{0}}\mathbf{U}_{i,j}^{(1)}\star\mathbf{x}_{j},&\ell=1,\\ b_{i}^{(\ell)}\mathbf{1}+\sum_{j=1}^{C_{\ell-1}}\mathbf{U}_{i,j}^{(\ell)}\star \mathbf{z}_{j}^{(\ell-1)}(\mathbf{X})[n],&\ell\geq 2,\end{cases} \tag{5}\]
In equation 5, the symbol \(\mathbf{1}\) should be understood as having the same size as the convolution output, the non-linearity \(\phi(\cdot)\) is applied entrywise, i.e. \(\mathbf{z}_{j}^{(\ell)}(\mathbf{X})[n]=\phi(\mathbf{h}_{j}^{(\ell)}(\mathbf{X})[n])\), and we emphasize the simultaneous scaling with \(n\) through appending \([n]\) to the feature maps \(\mathbf{h}_{i}^{(\ell)}(\mathbf{X})[n]\). We denote spatial (multi-) indices by boldface Greek letters \(\mathbf{\mu}\), \(\mathbf{\nu}\), etc., that are ordered pairs of integers taking values in the range of the size of the array. For example, if \(\mathbf{X}\) is an RGB (\(C_{0}=3\)) image of \(H\times D\) pixels, \(i=2\), and \(\mathbf{\mu}=(\alpha,\beta)\), then \(X_{i,\mathbf{\mu}}\) returns the green intensity of the \((\alpha,\beta)\) pixel. Moreover, we define \(\llbracket\mathbf{\mu}\rrbracket\) to be the patch centered at the pixel \(\mathbf{\mu}\) covered by the filter, e.g. if \(\mathbf{\mu}=(\alpha,\beta)\) and the filter covers \(k\times k=(2k_{0}+1)\times(2k_{0}+1)\) pixels, then \(\llbracket\mathbf{\mu}\rrbracket=\{(\alpha^{\prime},\beta^{\prime})\mid\alpha-k_{0}\leq\alpha^{\prime}\leq\alpha+k_{0},\ \beta-k_{0}\leq\beta^{\prime}\leq\beta+k_{0}\}\), with the usual convention of zero-padding for the out-of-range indices. Sufficient conditions for Pseudo-iid CNNs to converge to a Gaussian Process in the simultaneous scaling limit are given in Definition 4.
**Definition 4** (Pseudo-iid for CNNs).: _Consider a CNN with random filters and biases \(\{\mathbf{U}_{i,j}^{(\ell)}\}\) and \(\{b_{i}^{(\ell)}\}\) as in equation 5. It is said to be in the Pseudo-iid regime with parameter \(\sigma^{2}\) if \(\mathbf{U}^{(1)}\) has iid \(\mathcal{N}(0,\frac{\sigma^{2}}{C_{0}})\) entries and if, for \(2\leq\ell\leq L+1\) and any fixed vector \(\mathbf{a}:=(a_{j,\mathbf{\nu}})_{j,\mathbf{\nu}}\), the following hold:_
1. _the convolutional kernel_ \(\mathbf{U}^{(\ell)}\in\mathbb{R}^{C_{\ell}\times C_{\ell-1}\times k\times k}\) _is row-exchangeable and column-exchangeable, that is its distribution is invariant under permutations of the first and second indices,_
2. _the filters' entries are centered, uncorrelated, with variance_ \(\mathbb{E}[(\mathbf{U}_{i,j,\mathbf{\mu}}^{(\ell)})^{2}]=\sigma^{2}/C_{\ell-1}\)_,_
3. \(\mathbb{E}\big{|}\sum_{j=1}^{C_{\ell-1}}\sum_{\mathbf{\nu}}a_{j,\mathbf{\nu}}\mathbf{U}_{i,j,\mathbf{\nu}}^{(\ell)}\big{|}^{8}\leq K\|\mathbf{a}\|_{2}^{8}(C_{\ell-1})^{-4}\) _for some constant_ \(K\)_,_
4. _and_ \(\lim_{n\to\infty}\frac{C_{\ell-1}[n]^{2}}{\sigma^{4}}\mathbb{E}\big{(}\mathbf{U}_{i_{a},j,\mathbf{\mu}_{a}}^{(\ell)}\mathbf{U}_{i_{b},j,\mathbf{\mu}_{b}}^{(\ell)}\mathbf{U}_{i_{c},j^{\prime},\mathbf{\mu}_{c}}^{(\ell)}\mathbf{U}_{i_{d},j^{\prime},\mathbf{\mu}_{d}}^{(\ell)}\big{)}=\delta_{i_{a},i_{b}}\delta_{i_{c},i_{d}}\delta_{\mathbf{\mu}_{a},\mathbf{\mu}_{b}}\delta_{\mathbf{\mu}_{c},\mathbf{\mu}_{d}}\) _for all_ \(j\neq j^{\prime}\)_._
Similar to the fully connected case, the filters' entries can now exhibit some interdependencies as long as their cross-correlations vanish at a fast enough rate dictated by conditions (iii)-(iv).
**Theorem 2** (GP limit for CNN Pseudo-iid networks).: _Suppose a CNN as in equation 5 is under the Pseudo-iid regime with parameter \(\sigma_{W}^{2}\) and the activation satisfies the linear envelope property Def. 3. Let \(\mathcal{X}\) be a countably-infinite set of inputs and \(\mathbf{\mu}\in\mathcal{I}\) denote a spatial (multi-) index. Then, for every layer \(1\leq\ell\leq L+1\), the sequence of random fields \((i,\mathbf{X},\mathbf{\mu})\in[C_{\ell}]\times\mathcal{X}\times\mathcal{I}\mapsto h_ {i,\mathbf{\mu}}^{(\ell)}(\mathbf{X})[n]\) converges in distribution to a centered Gaussian Process \((i,\mathbf{X},\mathbf{\mu})\in[C_{\ell}]\times\mathcal{X}\times\mathcal{I}\mapsto h _{i,\mathbf{\mu}}^{(\ell)}(\mathbf{X})[*]\), whose covariance function is given by_
\[\mathbb{E}\Big{[}h_{i,\mathbf{\mu}}^{(\ell)}(\mathbf{X})[*]\cdot h_{j,\mathbf{\mu}^{\prime}}^{(\ell)}(\mathbf{X}^{\prime})[*]\Big{]}=\delta_{i,j}\Big{(}\sigma_{b}^{2}+\sigma_{W}^{2}\sum_{\mathbf{\nu}\in\llbracket\mathbf{\mu}\rrbracket\cap\llbracket\mathbf{\mu}^{\prime}\rrbracket}K_{\mathbf{\nu}}^{(\ell)}(\mathbf{X},\mathbf{X}^{\prime})\Big{)}, \tag{6}\]
_where_
\[K_{\mathbf{\nu}}^{(\ell)}(\mathbf{X},\mathbf{X}^{\prime})=\begin{cases}\mathbb{E}_{ (u,v)\sim\mathcal{N}(\mathbf{0},K_{\mathbf{\nu}}^{(\ell-1)}(\mathbf{X},\mathbf{X}^ {\prime}))}[\phi(u)\phi(v)],&\ell\geq 2,\\ \frac{1}{C_{0}}\sum_{i=1}^{C_{0}}X_{i,\mathbf{\nu}}X_{i,\mathbf{\nu}}^{\prime},&\ell=1. \end{cases} \tag{7}\]
These equations resemble those derived in the fully connected case, apart from the introduction of additional terms accounting for the pixels in some patch \(\mathbf{\mu}\), that vanish when the filter kernel size is reduced to \(k=1\). Our proof of Theorem 2 can be found in Appendix B.
## 3 Pseudo-iid in practice
In this section, we illustrate some important Pseudo-iid distributions through examples. We further show that the Gaussian behaviour serves, among others, two purposes of practical significance (see Section 3.3). First, it elucidates the analysis of signal propagation in neural networks. Secondly, the Gaussian Process limit simplifies the initially complicated task of choosing priors for Bayesian Neural Networks.
### Examples of Pseudo-iid distributions
We elaborate here on some typical initialization schemes and whether they belong to the Pseudo-iid class. To the best of our knowledge, rigorous proofs of the Gaussian Process convergence in the literature are restricted to the iid cases only (see Section 1.1). An important non-iid case that also results in a Gaussian Process is the random orthogonal initialization, partially derived in Huang et al. (2021) for fully connected networks. Nonetheless, that work left the treatment of the first layer unaddressed, an aspect which we have since resolved. Our proposed Pseudo-iid regime encompasses both iid and orthogonal cases (see Appendix D) but also allows for a broader class of weight distributions such as random low-rank or structured sparse matrices, for which the Gaussian Process limit has not been established before, despite their practical significance (see Section 1).
Low-rank weights.Low-rank structures are widely recognized for speeding up matrix multiplications and can be used to reduce memory requirements (see Section 1). Whilst such structures inevitably impose dependencies between the weight matrix entries \(A\in\mathbb{R}^{m\times n}\), thus breaking the iid assumption, Nait Saada and Tanner (2023) introduced a low-rank framework that falls within our Pseudo-iid regime. Let \(C\coloneqq[C_{1},\cdots,C_{r}]\in\mathbb{R}^{m\times r}\) be a uniformly drawn orthonormal basis for a random \(r\)-dimensional subspace. Suppose \(P=(P_{ij})\in\mathbb{R}^{r\times n}\) has iid entries \(P_{ij}\overset{\text{iid}}{\sim}\mathcal{D}\). Setting \(A\coloneqq CP\), the columns of \(A\) are spanned by those of \(C\). The row and column exchangeability of \(A\) follows immediately from that of \(C\) and \(P\), and the moment conditions (iii) and (iv) are controlled by the choice of distribution \(\mathcal{D}\). Direct computation of the four-cross product that appears in condition (iv) gives us
\[\mathbb{E}(A_{i_{a},1}A_{i_{b},1}A_{i_{c},2}A_{i_{d},2})=s^{2}\sum_{1\leq k,k^ {\prime}\leq r}\mathbb{E}\Big{[}C_{i_{a},k}C_{i_{b},k}C_{i_{c},k^{\prime}}C_{ i_{d},k^{\prime}}\Big{]},\]
where \(s\coloneqq\mathbb{E}(P_{1,1}^{2})\). Using the expression in Lemma 3 of Huang et al. (2021) we can calculate the above expectation and deduce condition (iv) when \(r\) is linearly proportional to \(m\).
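A sketch of sampling from this low-rank ensemble follows; the variance scaling, chosen so that \(\mathbb{E}(A_{ij}^{2})=\sigma^{2}/n\) using \(\mathbb{E}\sum_{k}C_{ik}^{2}=r/m\), is our own normalization:

```python
import numpy as np

def lowrank_pseudo_iid(m, n, r, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # QR of a Gaussian matrix: the column span of C is uniformly distributed
    C, _ = np.linalg.qr(rng.normal(size=(m, r)))
    # iid factor; scale chosen so that E[A_ij^2] = sigma^2 / n
    P = rng.normal(scale=sigma * np.sqrt(m / (n * r)), size=(r, n))
    return C @ P                                 # m x n matrix of rank <= r
```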
Structured sparse weights.Block-wise pruned networks have recently been under extensive study for their efficient hardware implementation Dao et al. (2022b). Once the sparsifying mask is fixed, we may apply random row and column permutations on the weight matrices without compromising the accuracy or the computational benefit. Let \(A=(A_{ij})\in\mathbb{R}^{m\times n}\) have iid entries \(A_{ij}\overset{\text{iid}}{\sim}\mathcal{D}\) and \(B\in\mathbb{R}^{m\times n}\) be the binary block-sparse mask. Let \(\tilde{A}=P_{m}(A\odot B)P_{n}\), where \(P_{m}\) and \(P_{n}\) are random permutation matrices of size \(m\times m\) and \(n\times n\) respectively, and \(\odot\) represents entrywise multiplication. Then, by construction, \(\tilde{A}\) is row- and column-exchangeable and, for suitable choices of underlying distribution \(\mathcal{D}\), it satisfies the moment conditions of Definition 2. Appendix H contains some illustrations for reference.
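A sketch of the construction \(\tilde{A}=P_{m}(A\odot B)P_{n}\) for a block-diagonal mask; the density correction keeping the marginal second moment at \(\sigma^{2}/n\) after permutation is our own normalization:

```python
import numpy as np

def block_sparse_pseudo_iid(m, n, block, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    B = np.zeros((m, n))
    for i in range(0, min(m, n), block):         # block-diagonal binary mask
        B[i:i + block, i:i + block] = 1.0
    # rescale so that, after the random permutations, each entry has
    # marginal second moment sigma^2 / n
    A = rng.normal(scale=sigma / np.sqrt(n * B.mean()), size=(m, n))
    masked = A * B
    return masked[rng.permutation(m)][:, rng.permutation(n)]
```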
Orthogonal CNN filters.Unlike the fully connected case, it is not obvious how to define the orthogonality of a convolutional layer and, once defined, how to randomly generate such layers for initialization. Xiao et al. (2018) defines an orthogonal convolutional kernel \(\mathbf{U}\in\mathbb{R}^{c_{out}\times c_{in}\times k\times k}\) made of \(c_{out}\) filters of size \(k\times k\) through the energy-preserving property \(\|\mathbf{U}\star\mathbf{X}\|_{2}=\|\mathbf{X}\|_{2}\), for any signal \(\mathbf{X}\) with \(c_{in}\) input channels. Wang et al. (2020b) requires the matricized version of the kernel to be orthogonal, while Qi et al. (2020) gives a more stringent definition imposing isometry, i.e.
\[\sum_{i=1}^{c_{out}}\mathbf{U}_{i,j}\star\mathbf{U}_{i,j^{\prime}}=\begin{cases} \mathbf{\delta},&j=j^{\prime},\\ 0,&\text{otherwise},\end{cases}\]
where \(\mathbf{\delta}\) is 1 at \((0,0)\), and \(0\) elsewhere. Another definition in Huang et al. (2021) calls for orthogonality of "spatial" slices \(\mathbf{U}_{\mathbf{\mu}}\in\mathbb{R}^{c_{out}\times c_{in}}\), for all positions \(\mathbf{\mu}\).
We take a different approach than Wang et al. (2020b) for matricizing the tensor convolution operator, setting the stride to 1 and padding to 0: reshape the kernel \(\mathbf{U}\) into a matrix \(\mathbf{\tilde{U}}\in\mathbb{R}^{c_{out}\times k^{2}c_{in}}\) and unfold the signal \(\mathbf{X}\) into \(\mathbf{\tilde{X}}\in\mathbb{R}^{k^{2}c_{in}\times d}\), where \(d\) is the number of patches depending on the sizes of the signal and the filter. This allows \(\mathbf{\tilde{U}}\) to be an arbitrary unstructured matrix rather than the doubly block-Toeplitz matrix in Wang et al. (2020b), as shown in Figure 1. Matricizing the tensor convolution operator imposes the structure on the signal \(\mathbf{\tilde{X}}\) rather than the filter \(\mathbf{\tilde{U}}\). Orthogonal (i.e. energy-preserving) kernels \(\mathbf{\tilde{U}}\) can then be drawn uniformly random with orthogonal columns, such that
\[\mathbf{\tilde{U}}^{\top}\mathbf{\tilde{U}}=\frac{1}{k^{2}}I \tag{8}\]
and then reshaped into the original tensor kernel \(\mathbf{U}\). Note that this construction is only possible when \(\mathbf{\tilde{U}}\) is a tall matrix with trivial null space, that is when \(c_{out}\geq k^{2}c_{in}\), otherwise the transpose might be considered. We emphasize that equation 8 is a sufficient (and not necessary) condition for \(\mathbf{U}\) to be energy-preserving, since \(\mathbf{\tilde{X}}\), by construction, belongs to a very specific structure set \(T\subseteq\mathbb{R}^{k^{2}c_{in}\times d}\), and, therefore, \(\mathbf{\tilde{U}}\) only needs to preserve norm on \(T\) (and not everywhere). Therefore, we do not claim the generated orthogonal convolutional kernel \(\mathbf{U}\) is "uniformly distributed" over the set of all such kernels.
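A sketch of this sampling procedure, assuming \(c_{out}\geq k^{2}c_{in}\); QR of a Gaussian matrix is used only to produce orthonormal columns, consistent with the caveat above that no uniformity over all energy-preserving kernels is claimed:

```python
import numpy as np

def orthogonal_conv_kernel(c_out, c_in, k, rng=None):
    assert c_out >= k * k * c_in, "otherwise use the transpose construction"
    rng = rng or np.random.default_rng()
    Q, _ = np.linalg.qr(rng.normal(size=(c_out, k * k * c_in)))
    U_tilde = Q / k              # now U_tilde.T @ U_tilde = I / k^2 (Eq. 8)
    # undo the flattening U_{i,j,mu} = U_tilde_{i,(j-1)k^2 + mu}
    return U_tilde.reshape(c_out, c_in, k, k)
```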
Now let us check that what we defined to be orthogonal filters in the previous section verify the conditions of Definition 4. Each filter \(\mathbf{U}_{i,j}\) is flattened as \(\mathbf{\tilde{U}}_{i,j}\in\mathbb{R}^{1\times k^{2}}\) and forms part of a row of \(\mathbf{\tilde{U}}\) as shown below:
\[\mathbf{\tilde{U}}=\left[\begin{array}{cccc}\mathbf{\tilde{U}}_{1,1}&\mathbf{\tilde{U}}_{1,2}&\cdots&\mathbf{\tilde{U}}_{1,c_{in}}\\ \mathbf{\tilde{U}}_{2,1}&\mathbf{\tilde{U}}_{2,2}&\cdots&\mathbf{\tilde{U}}_{2,c_{in}}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{\tilde{U}}_{c_{out},1}&\mathbf{\tilde{U}}_{c_{out},2}&\cdots&\mathbf{\tilde{U}}_{c_{out},c_{in}}\end{array}\right]. \tag{9}\]
Applying permutations on the indices \(i\) and \(j\) translates to permuting rows and "column blocks" of the orthogonal matrix \(\mathbf{\tilde{U}}\), which does not affect the joint distribution of its entries. Hence, the kernel's distribution is unaffected, and therefore \(\mathbf{U}\) is row- and column-exchangeable. The moment conditions are both straightforward to check as \(\mathbf{U}_{i,j,\boldsymbol{\mu}}=\mathbf{\tilde{U}}_{i,(j-1)k^{2}+\mu},\;\mu\in\{1,\cdots,k^{2}\}\), where \(\mu\) is the counting number of the pixel \(\boldsymbol{\mu}\). To check condition (iv), note that \(\mathbb{E}\big{(}\mathbf{U}_{i_{a},1,\boldsymbol{\mu}_{a}}^{(\ell)}\mathbf{U}_{i_{b},1,\boldsymbol{\mu}_{b}}^{(\ell)}\mathbf{U}_{i_{c},2,\boldsymbol{\mu}_{c}}^{(\ell)}\mathbf{U}_{i_{d},2,\boldsymbol{\mu}_{d}}^{(\ell)}\big{)}=\mathbb{E}\big{(}\mathbf{\tilde{U}}_{i_{a},\mu_{a}}\mathbf{\tilde{U}}_{i_{b},\mu_{b}}\mathbf{\tilde{U}}_{i_{c},k^{2}+\mu_{c}}\mathbf{\tilde{U}}_{i_{d},k^{2}+\mu_{d}}\big{)}\), that is a four-cross product of the entries of an orthogonal matrix, whose expectation is explicitly known to be \(\frac{C_{\ell}+1}{(C_{\ell}-1)C_{\ell}(C_{\ell}+2)}\delta_{i_{a},i_{b}}\delta_{i_{c},i_{d}}\delta_{\mu_{a},\mu_{b}}\delta_{\mu_{c},\mu_{d}}\) (Huang et al., 2021, Lemma 3).
Figure 1: There are various ways to compute convolutions \(\mathbf{U}\star\mathbf{X}\) between a tensor filter \(\mathbf{U}\) and a 2D signal \(\mathbf{X}\) (in the middle) from matrix multiplications. We illustrate the approach taken in Garriga-Alonso et al. (2019) on the left, where the reshaping procedure is applied to the filter, whilst the method we followed, shown on the right, consists of reshaping the signal instead in order to define special structures on the CNN filters such as orthogonality, sparsity and low-rank.
### Simulations of the Gaussian Processes in Theorem 1 for fully connected networks with Pseudo-iid weights
Theorem 1 establishes that random fully connected Pseudo-iid networks converge to Gaussian Processes in the infinite width limit. Here we conduct numerical simulations which validate this for modest dimensions of width \(N_{\ell}=n\) for \(n=3\), \(30\), and \(300\). Fig. 2 shows histograms of the empirical distribution of the preactivations when the weights are drawn from various Pseudo-iid distributions. 2 Even at \(n=30\) there is excellent agreement of the histograms with the limiting variance predicted by our Theorem 1. These histograms in Fig. 2 are further quantified with Q-Q plots in Appendix E.
Footnote 2: Experiments conducted for Fig. 2-5 used a fully connected network with activation \(\phi(x)=\tanh(x)\), weight variance \(\sigma_{w}=2\) and without bias. Dropout used probability \(1/2\) of setting an entry to zero, low-rank used rank \(\lceil n/2\rceil\), and block-sparsity used randomly permuted block-diagonal matrices with block-size \(\lceil n/5\rceil\). The code to reproduce all these figures can be found at [https://shorturl.at/gNQO0](https://shorturl.at/gNQO0).
Fig. 3 illustrates the rate with which two independent inputs \(x_{a}\) and \(x_{b}\) generate feature maps corresponding to uncorrelated Gaussian Processes, when passing through a network initialized with Pseudo-iid weights. The experiment is done by examining the joint empirical distribution of \(h_{i}^{(\ell)}(x_{a})[n]\) and \(h_{i}^{(\ell)}(x_{b})[n]\) at the same neuron index \(i\) and layer \(\ell\) through the same network. The level curves depict the theoretical joint Gaussian distribution given by the covariance kernel in Theorem 1. Interestingly, we observe the fastest convergence to the Gaussian Process in the orthogonal initialization. The other Pseudo-iid distributions considered exhibit good agreement at \(n=30\) which improves at \(n=300\). Some more experiments for fully connected networks and CNNs initialized with orthogonal filters can be found in respectively, Appendix E and F.
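The simulations can be reproduced along the following lines (a sketch of the setup in Footnote 2: tanh activation, no bias, with \(\sigma_{w}\) absorbed into the sampler; `weight_sampler` is a placeholder for any of the Pseudo-iid ensembles of Section 3.1):

```python
import numpy as np

def simulate_preactivations(weight_sampler, widths, n_trials=10_000,
                            phi=np.tanh, seed=0):
    """Empirical samples of h_1^{(L)}(x) for a bias-free fully connected
    network (Eq. 2), with a fixed input x drawn from the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=widths[0])
    x /= np.linalg.norm(x)
    out = np.empty(n_trials)
    for trial in range(n_trials):
        h = x
        for l in range(1, len(widths)):
            W = weight_sampler(widths[l], widths[l - 1], rng)
            h = W @ (h if l == 1 else phi(h))
        out[trial] = h[0]
    return out

# e.g. iid Gaussian weights with sigma_w = 2 (variance sigma_w^2 / n):
# sampler = lambda m, n, rng: rng.normal(scale=2.0 / np.sqrt(n), size=(m, n))
# samples = simulate_preactivations(sampler, [9] + [30] * 7)
```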
### Implications of the Gaussian Process Limit
Bayesian Neural Network and Gaussian Process.As opposed to frequentist approaches, Bayesian inference aims at finding a predictive distribution that serves to quantify the model's predictions uncertainty and prevent overfitting. Starting from a prior, this posterior distribution is updated once the data is observed. Considering Bayesian neural networks, these priors are to be defined on the weights and biases if adopting a parametric perspective, or, alternatively, directly on the input-output function represented by the model. Note how one can jump from one approach to the other given
Figure 2: For different instances of the Pseudo-iid regime, in the limit, the preactivation of the first neuron at the fifth layer tends to a Gaussian whose moments are given by Theorem 1. The experiments were conducted \(10000\) times on a random 7-layer deep fully connected network with input data sampled from \(\mathbb{S}^{8}\).
that random initialization of the parameters of a neural network induces a distribution on the input-output function \(h^{(L+1)}(x)\) of the network. Nonetheless, finding a meaningful prior over the space of functions is in general a delicate task.
In the large width limit, when the weights are drawn Pseudo-iid in a \(L\)-layer deep network, Theorem 1 secures that the function represented by the network is _exactly_ a Gaussian Process with known covariance kernel \(K^{(L+1)}\), which only depends on the variances \(\sigma_{b},\sigma_{W}\) and the activation function. Therefore, the exact posterior distribution can be computed, yielding the Neural Network Gaussian Process (NNGP) equivalence described in Novak et al. (2020). Since the covariance kernel obtained in our less restrictive setting recovers the one in Novak et al. (2020), Matthews et al. (2018) and Garriga-Alonso et al. (2019), their experiments still hold in our case and we refer to these works for practical considerations on how to compute the posterior distribution. Specifically, NNGP yields better accuracy compared to finite point-estimate neural networks Lee et al. (2017) and approximates finite Bayesian networks very well (Matthews et al., 2018, Section 5).
Edge of Chaos (EoC).Random initialization of deep neural networks plays an important role in their trainability by regulating how quantities like the variance of preactivations and the pairwise correlation between input signals propagate through layers. An inappropriate initialization causes the network to collapse to a constant function or become a chaotic mapping oversensitive to input perturbations. The Edge of Chaos (EoC) initialization strategy rectifies these issues, additionally causing the network to have gradients of consistent magnitude across layers. Calculation of the EoC requires integration with respect to the distribution of preactivations in intermediary layers, which, in general, is an intractable task. However, the Gaussian Process limit simplifies the EoC analysis as carried out in Poole et al. (2016) and Xiao et al. (2018). Moreover, under the same distributional assumption, Schoenholz et al. (2017); Pennington et al. (2018) show that initialization on the EoC achieves _dynamical isometry_, making backpropagation of errors stable. Our main contributions of Theorems 1 and 2 allow similar calculations to be made for Pseudo-iid networks such as the examples of low-rank, structured sparse, and orthogonal CNN as shown in Section 3.1. The EoC for low-rank networks has been calculated in Nait Saada and Tanner (2023) under the assumption of Theorems 1 and 2 which was at that time unproven. Interestingly, in terms of signal propagation, structured sparse or low-rank initializations are equivalent to their dense and full-rank counterparts up to a rescaling of the variance by a fractional factor of sparsity or rank.
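As an illustration, the variance (length) map induced by the kernel recursion, whose fixed point and slope determine the EoC, can be evaluated with Gauss-Hermite quadrature; per the rescaling remark above, a sparse or low-rank ensemble only multiplies \(\sigma_{W}^{2}\) by its fractional factor. A minimal sketch:

```python
import numpy as np

def variance_map(q, sw2, sb2, phi=np.tanh, n_quad=80):
    """One step of the length map q <- sb2 + sw2 * E[phi(sqrt(q) Z)^2],
    Z ~ N(0,1), via Gauss-Hermite quadrature (probabilists' convention)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    w = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density
    return sb2 + sw2 * np.sum(w * phi(np.sqrt(q) * nodes) ** 2)

# iterate to the fixed point q* for tanh with sigma_w^2 = 2, sigma_b = 0
q = 1.0
for _ in range(200):
    q = variance_map(q, sw2=2.0, sb2=0.0)
```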
Figure 3: The empirical joint distribution of the preactivations generated by two distinct inputs flowing through the network. The large width limiting distribution as defined in Theorem 1 is included as level curves. The input data \(x_{a},x_{b}\) are drawn iid from \(\mathbb{S}^{9}\) and \(10000\) experiments were conducted on a \(7\)-layer fully connected network. The horizontal and vertical axes in each subplot are respectively \(h_{1}^{(5)}(x_{a})\) and \(h_{1}^{(5)}(x_{b})\).
## 4 Conclusion
We proved in this paper a new Gaussian Process limit for deep neural networks initialized with possibly inter-dependent entries. Examples include orthogonal, low-rank and structured sparse networks which are particularly of interest due to their efficient implementation and their empirically observed enhanced accuracy. Our result makes possible exact Bayesian inference as well as tractable Edge of Chaos analysis, for a broader class of either fully connected or convolutional networks. We expect the present work paves the way for a better understanding of the training dynamics of the emerging deep neural networks with parsimonious representations.
## Acknowledgments
We thank Juba Nait Saada for insightful discussion and feedback on the manuscript, as well as Alex Cliffe for valuable comments and help in drawing Figure 1. Thiziri Nait Saada is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) through the grant EP/W523781/1. Jared Tanner is supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA). |
2303.06455 | Graph Neural Network contextual embedding for Deep Learning on Tabular
Data | All industries are trying to leverage Artificial Intelligence (AI) based on
their existing big data which is available in so called tabular form, where
each record is composed of a number of heterogeneous continuous and categorical
columns also known as features. Deep Learning (DL) has constituted a major
breakthrough for AI in fields related to human skills like natural language
processing, but its applicability to tabular data has been more challenging.
More classical Machine Learning (ML) models like tree-based ensemble ones
usually perform better. This paper presents a novel DL model using Graph Neural
Network (GNN) more specifically Interaction Network (IN), for contextual
embedding and modelling interactions among tabular features. Its results
outperform those of a recently published survey with DL benchmark based on five
public datasets, also achieving competitive results when compared to
boosted-tree solutions. | Mario Villaizán-Vallelado, Matteo Salvatori, Belén Carro Martinez, Antonio Javier Sanchez Esguevillas | 2023-03-11T17:13:24Z | http://arxiv.org/abs/2303.06455v2 | # Graph Neural Network Contextual Embedding for Deep Learning on Tabular Data
###### Abstract
All industries are trying to leverage Artificial Intelligence (AI) based on their existing big data, which is available in so-called tabular form, where each record is composed of a number of heterogeneous continuous and categorical columns, also known as features. Deep Learning (DL) has constituted a major breakthrough for AI in fields related to human skills like natural language processing, but its applicability to tabular data has been more challenging. More classical Machine Learning (ML) models like tree-based ensemble ones usually perform better. In this manuscript, a novel DL model that uses a Graph Neural Network (GNN), more specifically an Interaction Network (IN), for contextual embedding is introduced. Its results outperform those of a recently published survey with a DL benchmark based on five public datasets, also achieving competitive results when compared to boosted-tree solutions.
Deep Learning, Graph Neural Network, Interaction Network, Contextual Embedding, Tabular Data, Artificial Intelligence.
## I Introduction
Many practical real-world applications store data in tabular form, i.e. samples (rows) with the same set of attributes (columns). Medicine, finance or recommender systems are some common examples.
DL success in tasks involving texts, images or audio has sparked interest in its possible application to tabular data. Nevertheless, this success is often achieved when the input data are homogeneous and the structure used to organize the information provides insight into the data. All tokens in a sentence are instances of the same categorical variable and their arrangement has semantic significance. Pixels in an image are continuous and usually have spatial correlation.
Tabular data have two characteristics that hinder DL performance. On the one hand, tabular features are heterogeneous, having a mix of continuous and categorical distributions that may correlate or be independent. On the other hand, the meaning of a tabular data row is independent of the column order, i.e. position is arbitrary and does not provide information.
Tree-based ensemble models such as XGBoost [1], CatBoost [2], and LightGBM [3] achieve state-of-the-art (SOTA) performance on tabular data: they have competitive prediction accuracy and are fast to train. Further research and development of DL models for tabular data is motivated, however, by the fact that standard tree-based approaches have limitations, for example, in the case of continual learning, reinforcement learning, or when tabular data is only part of the model input, which also includes data such as images, texts or audio.
Inspired by the success of contextual embedding in large language models (for example BERT [4]), several recent studies [5, 6, 7] have investigated how to enhance tabular feature representation (and hence global DL model performance) by taking into consideration their context, that is, feature interaction. The results obtained in these works, as well as the outcomes of recent comparisons on many public datasets [8], illustrate how the contextual embedding approach tends to outperform not only standard Multi-Layer Perceptron (MLP) models, but also more complex models developed to solve complicated tasks [9, 10, 11, 12, 13] or models combining DL architectures with standard ML approaches [14, 15].
Many of the most recent studies employ Transformers [16] as a method for contextual embedding. However, in this paper, we look at how to use a GNN to improve contextual embedding for tabular data. GNNs are a special subset of neural networks that are capable of managing information organized in a graph, which is a structure with variable shape and size and with complex topological relations. One of the most important features of a graph is that its meaning does not depend on the order of its nodes, just as the meaning of a tabular row does not depend on the order of its columns.
**Contributions.** The contributions of our paper are summarized as follows:
* We propose INCE (Interaction Network Contextual Embedding), a novel DL model that builds a graph over the tabular features (the original features plus a CLS virtual node) and uses INs to model their interactions and enhance their representation. The resulting CLS virtual node is sent into the final classifier/regressor. For the sake of reproducibility, we share an implementation of INCE 2.
* We compare INCE against a wide range of deep tabular models and generally used tree-based approaches, using the tabular datasets provided in [8] as a benchmark. INCE outperforms all other DL methods on average, and it achieves competitive results when compared to boosted-tree solutions.
* We thoroughly investigate the differences between contextual embeddings based on Transformers and INs and analyze the influence of IN hyperparameters on model performance: quality of results, model size and computational time. Regardless of the dataset or task, we obtain a collection of patterns that help establish a strong baseline.
* We investigate the interpretability of the IN ensuing contextual embeddings. On the one hand, we focus on the feature-feature relationship discovered by the IN, while on the other hand, we concentrate on how contextual embeddings improve traditional context-free embeddings.
## II Related Work
**Standard Tabular Models.** As already mentioned, when dealing with tabular data, tree-based ensemble models such as XGBoost, CatBoost and LightGBM are a popular choice. They usually provide high performance regardless of the amount of data available, can handle many data types, are resilient in the case of null values, are fast to train and can be interpreted, at least globally.
**Deep Tabular Models.** Due to the success of DL in tasks involving texts, sound or images, many efforts are being made to find the best approach to apply these models also to tabular data [5, 6, 7, 15, 20, 21]. Most of these efforts belong to one of the three categories described below.
_Modeling of multiplicative interactions between features._ Explicitly modeling the interaction between features of a tabular dataset [9, 10, 11, 12, 13] has been shown to have a significant impact on the performance of deep learning models in applications such as recommender systems and click-through-rate prediction. Nevertheless, recent comparisons [8, 6] show that these approaches produce worse outcomes than the other two categories described below.
_Hybrid models._ Hybrid models transform the tabular data and combine deep neural networks with classical ML approaches, often decision trees. Such hybrid models can be designed to be optimized in a fully differentiable, end-to-end fashion, or to benefit from non-differentiable approaches combined with deep neural networks. NODE [14] is partially inspired by CatBoost [2] and provides an example of a fully differentiable model based on an ensemble of oblivious decision trees [22]. The entmax transformation and soft splits allow a fully differentiable end-to-end optimization. Other examples of fully differentiable hybrid architectures are [23, 24, 25]. On the other hand, the DeepGBM model [26] is an example of how to take advantage of the combination of non-differentiable approaches with deep neural networks. It combines deep neural network flexibility with gradient boosting decision tree preprocessing capabilities. TabNN [27] first distills the knowledge from gradient boosting decision trees to retrieve feature groups and then constructs a neural network based on feature combinations produced by clustering the results of the previous step.
_Transformer-based models._ Many of DL's recent successes have been driven by the use of transformer-based methods [4, 28, 29], inspiring the proposal of multiple approaches using deep attention mechanisms [16] for heterogeneous tabular data. The TabNet [15] design is inspired by decision trees: a set of subnetworks is processed in a hierarchical order and the results of all decision steps are aggregated in order to obtain the final prediction. A feature transformer module chooses which features should be transferred to the next decision step and which should be employed to get the output at the present decision step. TabTransformer [5] uses Transformers to improve the contextual embeddings of tabular features. First, each categorical variable goes through a specific embedding layer. A stack of Transformers is then used to enhance the categorical feature representation. The final contextual embedding is given by the concatenation of the categorical representations obtained in this way and the initial continuous features. In FT-Transformer [6], columnar transformations (embeddings) are applied to both categorical and continuous features. As in BERT [4], a CLS token is added to the set of columnar embeddings and then a stack of transformer layers is applied. The final CLS representation is employed as the final contextual embedding, i.e. for predictions. SAINT [7] combines the self-attention between features of the same tabular row with inter-sample attention over multiple rows. When handling missing or noisy data, this mechanism allows the model to borrow the corresponding information from similar samples.
As in [5, 6, 7], we investigate how contextual embedding affects the final model performance on supervised tasks. The main difference from the existing research is that in our approach, the contextual embedding is provided via GNNs and, more specifically, INs.
**Graph Neural Network and Interaction Network.** In case of neural networks such as Convolutional Neural Network or Transformer, the inputs must be structured data (grid and sequence, respectively). GNN are a special subset of neural networks that can cope with less structured data, such as a graph. This means that the input can have arbitrary shapes and sizes and can have complex topological relations. Permutation invariance is a crucial feature distinguishing GNN from the rest of neural networks. The order of nodes in a graph has no relevance, that is how we order the nodes in a graph does not impact the results produced by GNNs. In a tabular dataset, the order of features (columns) does not have any meaning, so GNN is a good candidate to model the interaction between them.
The flow of a GNN can be modeled using the Message-Passing scheme. a) For each pair of nodes \((u,v)\) in the graph, a message \(M(u,v,e_{u,v})\) from \(v\) to \(u\) is created. Here \(u\), \(v\) are the embedding of nodes and \(e_{u,v}\) is the (optional) embedding of edge. b) Each node aggregates the messages coming from all its neighbors. The aggregation must be permutation-invariant. c) The node is updated using its initial representation and the information obtained in point b.
It is simple to find a map between the Message-Passing scheme and the contextual embedding of tabular features. a) Initial node representation is given by columnar feature embeddings. b) Message-passing through edges is the pairwise interaction between features. c) The neighbor aggregation represents the effect of the interaction of current feature with all its neighbors. d) The update step provides the contextual representation of each feature.
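As a concrete illustration of this mapping, a single message-passing step over columnar embeddings can be sketched in a few lines of PyTorch; `msg_fn` and `upd_fn` below are placeholder learnable functions, not names from any specific library.

```python
import torch

def message_passing_step(nodes, edges, edge_index, msg_fn, upd_fn):
    """nodes: [M, d] columnar feature embeddings; edges: [E, d] optional edge
    embeddings; edge_index: [2, E] (source, destination) feature indices."""
    src, dst = edge_index
    # a) one message per directed edge, i.e. per ordered pair of features
    msgs = msg_fn(nodes[src], nodes[dst], edges)
    # b) permutation-invariant (sum) aggregation of incoming messages
    agg = torch.zeros_like(nodes).index_add_(0, dst, msgs)
    # c)-d) update every node from its previous state and the aggregate,
    # yielding the contextual representation of each feature
    return upd_fn(torch.cat([nodes, agg], dim=-1))
```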
In this paper, we investigate the benefits of using INs for contextual embeddings of tabular data. They are a low-biased family of GNNs that have achieved enormous success when applied to the simulation of complex physics or to weather forecasting [30].
## III Interaction Network Contextual Embedding
This section introduces the INCE model and describes its components in depth.
**Problem Definition.** We focus on supervised learning problems with tabular datasets \(D=\left\{x_{i}^{cat},x_{i}^{num},y_{i}\right\}_{i=1}^{N}\) where \(x_{i}^{num}=\left\{x_{i}^{j_{n}}\right\}\) with \(j_{n}\in[1,M_{num}]\) is the set of numerical features, \(x_{i}^{cat}=\left\{x_{i}^{j_{c}}\right\}\) with \(j_{c}\in[1,M_{cat}]\) is the set of categorical features, \(y_{i}\) is the label, \(i\in[1,N]\) counts the dataset rows, \(N\) is the total number of rows and \(M=M_{num}+M_{cat}\) is total number of features.
**Encoder-Decoder Perspective.** As in [31], we use the encoder-decoder perspective, Fig. 1. First an encoder model maps each tabular dataset feature into a latent vector or embedding and then a decoder model takes the embeddings and uses them to solve the supervised learning task.
The encoder model is composed by two components: the _columnar_ and the _contextual_ embedding. The decoder model is given by a MLP tuned to the learning task to solve.
**Encoder - Columnar Embedding.** All of the original tabular heterogeneous features are projected into the same homogeneous and dense \(d\)-dimensional latent space by the _columnar_ embedding depicted in Fig. 2. As in [6, 7], the columnar embeddings \(c_{i}^{j_{n}},c_{i}^{j_{c}}\in\mathbb{R}^{d}\) of continuous and categorical features \(x_{i}^{j_{n}},x_{i}^{j_{c}}\) are obtained as follows:
\[c_{i}^{j_{n}}=\text{ReLU}\left(b_{num}^{j}+x_{i}^{j_{n}}\cdot W_{num}^{j}\right),\qquad W_{num}^{j}\in\mathbb{R}^{d} \tag{1}\]
\[c_{i}^{j_{c}}=b_{cat}^{j}+e_{ij}^{T}W_{cat}^{j},\qquad W_{cat}^{j}\in\mathbb{R}^{|j_{c}|\times d} \tag{2}\]
where ReLU is the non-linear activation function for the continuous embedding, \(b^{j}\) is the \(j\)-th feature bias, \(W_{num}^{j}\in\mathbb{R}^{d}\) is a learnable vector, \(W_{cat}^{j}\in\mathbb{R}^{|j_{c}|\times d}\) is a learnable lookup table, and \(|j_{c}|\) and \(e_{ij}^{T}\) are the size and the one-hot representation of the categorical feature \(x_{i}^{j_{c}}\), respectively.
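A minimal PyTorch sketch of Eqs. (1)-(2) follows; the class and argument names are ours, chosen for illustration, and the per-feature categorical bias is absorbed into the embedding tables.

```python
import torch
import torch.nn as nn

class ColumnarEmbedding(nn.Module):
    """Project each heterogeneous column into a shared d-dimensional space."""
    def __init__(self, n_num, cat_cardinalities, d):
        super().__init__()
        # one (W, b) pair per continuous feature: scalar -> R^d, Eq. (1)
        self.w_num = nn.Parameter(torch.randn(n_num, d))
        self.b_num = nn.Parameter(torch.zeros(n_num, d))
        # one learnable lookup table per categorical feature, Eq. (2);
        # the additive bias b_cat is folded into the table entries here
        self.cat_emb = nn.ModuleList(nn.Embedding(c, d) for c in cat_cardinalities)

    def forward(self, x_num, x_cat):
        # x_num: [B, n_num] floats; x_cat: [B, n_cat] integer codes
        c_num = torch.relu(self.b_num + x_num.unsqueeze(-1) * self.w_num)
        c_cat = torch.stack(
            [emb(x_cat[:, j]) for j, emb in enumerate(self.cat_emb)], dim=1)
        return torch.cat([c_num, c_cat], dim=1)  # [B, M, d]
```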
**Encoder - Contextual Embedding.** The _columnar_ embedding works feature by feature and has trouble identifying correlation or more general relationships between features in tabular datasets. To overcome this limitation, a _contextual_ embedding is introduced. In contrast to recent research [5, 7, 6, 15] that use Transformers, we propose a _contextual_ embedding based on GNN and, more specifically, IN [17, 18, 19].
In this approach, the initial supervised learning task on tabular data is turned into a graph state estimation problem in which a categorical (classification task) or a continuous (regression task) graph state must be predicted. Taking into account the initial node representation (i.e. the _columnar_ embedding) and the graph edges, a stack of GNNs has to model the interactions among nodes in the latent space and learn a richer representation of the entire graph capable of improving state estimation.
As shown in Fig. 3, the first step consists of building a fully-connected graph. For each original tabular feature, a node is created \(n_{j}\equiv x_{j}\) and for each pair of nodes \((n_{j_{1}},n_{j_{2}})\), two directed and independent edges are defined: \(e_{j_{1}j_{2}}:n_{j_{1}}\to n_{j_{2}}\) and \(e_{j_{2}j_{1}}:n_{j_{2}}\to n_{j_{1}}\). The dense \(d-\)dimensional vector \(c_{j}\in\mathbb{R}^{d}\) obtained from the _columnar_ embedding is used as initial node representation, giving rise to an homogeneous graph. No positional embedding is used to improve the node representation: the original tabular features are heterogeneous and each one is projected in the common
Fig. 1: The encoder-decoder perspective [31]: an encoder model maps each tabular dataset feature into a latent vector, a decoder model uses the embeddings to solve the supervised learning task. In the encoding step, first a _columnar_ embedding individually projects any feature in a common latent space and then a _contextual_ embedding improves these representations taking into account the relationships among features. The decoder MLP transforms the _contextual_ embedding output in the final model prediction.
Fig. 3: _Contextual_ embedding. (a) Homogeneous and fully-connected graph: it contains a node for each initial tabular features and a bidirectional-edge for each pair of nodes. The initial node representation is obtained by the _columnar_ embedding. A virtual CLS node is introduced to characterize the global graph state. (b) A stack of IN [17] models node interactions to create a more accurate representation of nodes (i.e. tabular features). (c) The final representation of the CLS virtual node is used as _contextual_ embedding.
Fig. 2: The _columnar_ embedding is responsible for projecting all the heterogeneous features in the tabular dataset in a common latent space. For each feature, a continuous or categorical transformation is defined. The _columnar_ embedding ignores any potential relationship or similarity between the tabular dataset features.
latent space using a separate _columnar_ embedding. This is enough to distinguish the nodes among them without explicitly modeling their position in the graph3. As in BERT [4], a virtual CLS node connected to each existing node is added to the graph. The \(d\)-dimensional initial representation of the CLS virtual node is a vector of learnable parameters. No features are initially considered for the edges \(e_{ij}\).
Footnote 3: We have explicitly tested this hypothesis and the experiments confirm that the use of positional embedding does not improve the model performance.
In the following step, a stack of IN is used to improve the representation of each node and edge in the graph. The final CLS vector embedding produced by the stack of IN is used as global representation of the graph, i.e. as a _contextual_ embedding of the tabular row.
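The graph construction just described can be sketched per tabular row as follows; we assume a fully-connected directed graph over the \(M\) feature nodes plus the CLS node, with no self-loops and no initial edge features.

```python
import torch

def build_feature_graph(col_emb, cls_token):
    """col_emb: [M, d] columnar embeddings of one row; cls_token: [d]
    learnable initial CLS representation (node index M below)."""
    M = col_emb.size(0)
    x = torch.cat([col_emb, cls_token.unsqueeze(0)], dim=0)  # [M+1, d]
    idx = torch.arange(M + 1)
    src, dst = torch.meshgrid(idx, idx, indexing="ij")
    keep = src != dst  # two independent directed edges per pair of nodes
    edge_index = torch.stack([src[keep], dst[keep]], dim=0)  # [2, (M+1)*M]
    return x, edge_index
```

The node matrix and edge index produced this way feed the IN stack described next.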
**Interaction Network.** The workflow of a standard IN layer [17, 18] is described in the Fig. 4. In the first step, the representation of each edge (i. e. interaction between each pair of tabular features) is updated using the information of the adjacent nodes (i. e. pair of tabular features):
\[e^{\prime}_{j_{1}\to j_{2}}=\text{MLP}_{\text{E}}\left(\text{ Concat}\left(n_{j_{1}},n_{j_{2}},e_{j_{1}\to j_{2}}\right)\right)\,, \tag{3}\]
where \(n_{j},e_{j_{1}\to j_{2}}\in\mathbb{R}^{d}\) are the node and edge representations, respectively, and \(\text{MLP}_{\text{E}}\) is the shared neural network used to update all the graph edges. For simplicity of notation we have suppressed the row index.
In the second step, all the messages coming from the incoming edges are aggregated and used to update the node representation:
\[n^{\prime}_{j}=\text{MLP}_{\text{N}}\left(\text{Concat}\left(n_{j},\sum_{k\in \mathcal{N}}e_{k\to j}\right)\right)\,,\]
where \(\mathcal{N}\) is the set of \(n_{j}\) neighborhoods and \(\text{MLP}_{\text{N}}\) is the shared neural network used to update all the graph nodes.
The residual connection between the initial and updated representations yields the final node and edge representations:
\[n_{j}=n^{\prime}_{j}+n_{j},\qquad e_{j_{1}\to j_{2}}=e^{\prime}_{j_{1}\to j_{2}}+e_{j_{1}\to j_{2}} \tag{4}\]
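Putting Eqs. (3)-(4) together, a single IN layer can be sketched as below. We assume ReLU inside the shared MLPs, matching the setup of Section IV; the first layer of the stack would use an edge MLP without the edge-feature input, since no initial edge features are defined.

```python
import torch
import torch.nn as nn

def shared_mlp(in_dim, d, depth):
    layers, width = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(width, d), nn.ReLU()]
        width = d
    return nn.Sequential(*layers)

class INLayer(nn.Module):
    """One Interaction Network layer following Eqs. (3)-(4)."""
    def __init__(self, d, depth=3):
        super().__init__()
        self.mlp_e = shared_mlp(3 * d, d, depth)  # Concat(n_j1, n_j2, e_j1->j2)
        self.mlp_n = shared_mlp(2 * d, d, depth)  # Concat(n_j, summed messages)

    def forward(self, n, e, edge_index):
        src, dst = edge_index
        e_new = self.mlp_e(torch.cat([n[src], n[dst], e], dim=-1))  # Eq. (3)
        agg = torch.zeros_like(n).index_add_(0, dst, e_new)         # incoming sum
        n_new = self.mlp_n(torch.cat([n, agg], dim=-1))
        return n + n_new, e + e_new                                  # Eq. (4)
```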
**Decoder.** The decoder \(\text{MLP}_{\text{DEC}}\) receives the contextual embedding computed by the encoder. It is a MLP where the final output layer size and activation function are adapted to the supervised learning problem to solve - classification or regression.
## IV Experiments
[8] provides a detailed review of the literature on DL for tabular data, together with an extensive empirical comparison of traditional ML methods and DL models on multiple real-world heterogeneous tabular datasets.
We consider the standard and deep models analyzed in [8] as baseline and evaluate INCE using the tabular benchmark presented therein.
**Data.** The main properties of datasets are summarized in Table I.
_HELOC_[32]: Home Equity Line of Credit (HELOC) provided by FICO (a data analytics company), contains anonymized credit applications of HELOC credit lines. The dataset contains 21 numerical and two categorical features characterizing the applicant to the HELOC credit line. The task is a binary classification and the goal is to predict whether the applicant will make timely payments over a two-year period.
_California Housing_[33]: The information refers to the houses located in a certain California district, as well as some basic statistics about them based on 1990 census data. This is a regression task, which requires forecasting the price of a property.
Fig. 4: Interaction Network layer
_Adult Incoming_[34]: Personal details such as age, gender or education level are used to predict whether an individual would earn more or less than \(50K\$\) per year.
_Forest Cover Type_[34]: Cartographic variables are used to predict the forest cover type: it is a multi-class (seven) classification task. The first eight features are continuous whereas the last two are categorical with four and 40 levels, respectively.
_HIGGS_[35]: The dataset contains 11M rows and 28 features, where the first 21 are kinematic properties measured by the particle detectors and the last seven are processed features built by physicists. The data has been produced using Monte Carlo simulations and the binary classification task is to distinguish between signals with Higgs bosons and a background process.
**Data Preprocessing.** In order to compare INCE with the results of [8], we reproduce the same data preprocessing. Zero-mean and unit-variance normalization is applied to the numerical features, whereas an ordinal encoding is used for the categorical ones. Missing values are imputed with zeros.
**Baselines.** INCE is compared to the following models. _Standard methods_: Linear Model, KNN, Decision Tree, Random Forest [36], XGBoost [1], LightGBM [3], CatBoost [2]. _Deep learning models_: MLP [37], DeepFM [10], DeepGBM [26], RLN [38], TabNet [15], VIME [39], TabTransformer [5], NODE [14], Net-DNF [25], SAINT [7], FT-Transformer [6].
**Setup.** For each tabular dataset, we use the Optuna library [40] with 50 iterations to tune INCE hyperparameters. Each hyperparameter configuration is cross-validated with five folds. The search space is the following: latent space size \(\in\{32,64,128\}\), number of stacked \(\text{IN}\in\{1,2,3,4\}\) and depth of \(\text{MLP}_{\text{E}}\), \(\text{MLP}_{\text{N}}\in\{1,2,3,4\}\).
In all the experiments, we consider a decoder \(\text{MLP}_{\text{DEC}}\) with two hidden layers and ReLU is the non-linear activation function used for \(\text{MLP}_{\text{E}}\), \(\text{MLP}_{\text{N}}\) and \(\text{MLP}_{\text{DEC}}\). Cross-Entropy and Mean Squared Error (MSE) are the loss functions used in classification and regression tasks, respectively. We train all the models \(200\) epochs using Adam optimizer with a learning rate of \(0.001\) and with batches of size \(256\). All the DL code is implemented using PyTorch [41] and PyTorch-Geometric [42] and parallelized with Ray [43].
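The tuning loop just described amounts to a few lines of Optuna; `cross_validate_ince` below is a hypothetical helper standing in for the five-fold training routine (Adam, learning rate \(0.001\), \(200\) epochs, batch size \(256\)).

```python
import optuna

def objective(trial):
    config = {
        "latent_size": trial.suggest_categorical("latent_size", [32, 64, 128]),
        "n_stacked_in": trial.suggest_categorical("n_stacked_in", [1, 2, 3, 4]),
        "mlp_depth": trial.suggest_categorical("mlp_depth", [1, 2, 3, 4]),
    }
    # hypothetical helper: trains INCE with 5-fold CV, returns the mean metric
    return cross_validate_ince(config, n_folds=5)

study = optuna.create_study(direction="maximize")  # "minimize" for MSE tasks
study.optimize(objective, n_trials=50)
```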
### _Results_
In Table II and Fig. 5, we report the results on the tabular benchmark described above. In four of the five datasets, INCE outperforms all the DL baselines. In the fifth (HIGGS), INCE obtains the second-best performance, behind the SAINT model [7] but well above the rest of the DL models. In two of the five datasets, INCE outperforms tree-based models, while in the other three it achieves results that are competitive with them.
In terms of baseline performance, we carefully reproduced the findings for XGBoost, MLP, TabTransformer, and SAINT to ensure that our preprocessing and optimization approach was equivalent to [8] for all datasets in the benchmark. After demonstrating the comparability of [8] and our flows, the other baseline results are quoted from this paper. It should be noted that we also include in our study FT-Transformer, which is subsequent to [8].
## V Deep Dive in Interaction Network
For each tabular dataset, we have studied how the choice of latent space size \(l\), \(\text{MLP}_{\text{N, E}}\) depth \(d\) and number \(n\) of stacked IN influences the model behavior: number of trainable parameters, performances and computational time. The findings from the various datasets reveal similar patterns, leading to consistent conclusions.
**Trainable parameters.** The number of trainable parameters \(\mathcal{TP}\left(\text{IN}\right)\) of a stack of \(n\) INs is given by:
\[\mathcal{TP}\left(\text{IN}\right)=\sum_{i=1}^{n}\mathcal{TP}\left(\text{IN}^{i}\right)=\sum_{i=1}^{n}\left[\mathcal{TP}\left(\text{MLP}_{\text{E}}^{i}\right)+\mathcal{TP}\left(\text{MLP}_{\text{N}}^{i}\right)\right] \tag{5}\]
\[\mathcal{TP}\left(\text{MLP}_{\text{N}}^{i}\right)=\left(2\cdot l^{2}+l\right)+\left(d-1\right)\cdot\left(l^{2}+l\right)\]
\[\mathcal{TP}\left(\text{MLP}_{\text{E}}^{i}\right)=\left(K_{i}\cdot l^{2}+l\right)+\left(d-1\right)\cdot\left(l^{2}+l\right),\]
where \(K_{i}=2\) if \(i=1\) and \(K_{i}=3\) otherwise. We consider all the hidden layers of \(\text{MLP}_{\text{E, N}}\) to be of the same size. The difference in the number of parameters between \(\text{MLP}_{\text{E}}^{i=1}\) and \(\text{MLP}_{\text{E}}^{i>1}\) is due to the fact that all INs with \(i>1\) receive the edge features computed by preceding layers, whilst the first IN does not use any initial edge features.
The quantity of trainable parameters increases quadratically with the size of the latent space and linearly with the number of stacked INs or the \(\text{MLP}_{\text{E, N}}\) depth, Fig. 6. The slope of the straight line corresponding to the number of stacked INs is steeper than that of the \(\text{MLP}_{\text{E, N}}\) depth.
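Eq. (5) translates directly into a small helper, which can be used to reproduce the curves of Fig. 6:

```python
def tp_in(l, d, n):
    """Trainable parameters of a stack of n INs, following Eq. (5)."""
    total = 0
    for i in range(1, n + 1):
        k = 2 if i == 1 else 3  # the first IN receives no input edge features
        mlp_n = (2 * l * l + l) + (d - 1) * (l * l + l)
        mlp_e = (k * l * l + l) + (d - 1) * (l * l + l)
        total += mlp_n + mlp_e
    return total

# e.g. the normalization baseline of Fig. 6 is tp_in(l=16, d=1, n=1)
```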
**Performances.** Our experiments suggest that whereas the latent space size needs to be fine-tuned for each dataset, the impact of \(\text{MLP}_{\text{E, N}}\) depth \(d\) and number \(n\) of stacked IN does not depend on the supervised learning problem to solve. The configuration with \(d=3\) and \(n=2\) is a solid baseline regardless of the underlying task.
To clarify this point, in Fig. 7 we show how the normalized metric changes as a function of the \(\text{MLP}_{\text{E, N}}\) depth and the
Fig. 5: The boxplots in red and blue illustrate the distribution of tree-based and DL baseline, respectively. The horizontal dotted line represents the INCE performance. Accuracy and MSE are the metrics used for classification and regression tasks. The presence of an up/down arrow near the dataset name indicates whether the metric must be maximized o minimized.
number of stacked IN. The normalized metric is a global performance measure (higher is better) generated using the findings from all of the datasets as described in Appendix A.
The left side plot in Fig. 7 depicts the normalized metric curves \(\mathcal{C}_{d}\) (blue line) and \(\mathcal{C}_{n}\) (orange line) obtained varying \(d\) and \(n\) respectively while the other parameters are kept constant. The information on the right side plot is the same as on the left, but it is compared to the normalized number of trainable parameters.
The depth \(d\) of the shared neural networks \(\text{MLP}_{\text{N, E}}\) has the greatest impact on the model performance and, at the same time, a reduced effect on the number of learnable parameters. These results are consistent with the observed behavior of the Optuna [40] Bayesian optimizer. Regardless of the supervised learning problem, after a few attempts it quickly reduces the search space for \(d\) to \([3,4]\) and then fine-tunes the number of stacked INs in the range \([2,3]\). The configuration with \(d=3\) and \(n=2\) is always a solid candidate regardless of the tabular dataset.
Why does adding more than two layers not improve the _contextual_ encoder capability? We interpret this as follows. a) The number of nodes in the graph is small. In our formulation there is a node for each tabular feature, and their number goes from eight (California Housing) to 28 (HIGGS). After two IN layers, the information of a node has been transmitted to every other node in the graph. b) We are working with a fully connected graph, i.e. a trivial topology. The IN has to model the strength of each edge, but the initial topological information seems to be poor. c) The size of the datasets is limited (excluding HIGGS).
**Computational time.** Fig. 8 shows how the number of features in the tabular dataset as well as the INCE configuration (latent space size, number of stacked IN and \(\text{MLP}_{\text{N, E}}\) depth) impact on the training time. In particular, Fig. 8 presents the average training time for a batch size of \(256\). All the INCE training times are normalized by using the corresponding train time of a MLP with the same columnar embedding and the same decoder but without contextual embeddings. For each dataset, the three curves are obtained varying one parameter (for example \(n\in\{1,2,3,4\}\) for the orange line) while holding the other two constant (\(l=16\) and \(d=1\)).
* As expected, the number of features in the tabular dataset has an effect on the computational time: it grows from California Housing (eight features) to HELOC (\(23\) features) for a fixed INCE configuration. In our proposal, we are
Fig. 6: Growth of the normalized \(\mathcal{TP}\left(\text{IN}\right)\) as a function of \(\text{MLP}_{\text{R,N}}\) depth, number of stacked IN and latent space size. The plot on the left compares the evolution of \(\mathcal{TP}\left(\text{IN}\right)\) when two hyperparameters are fixed and the third is increased. The plot on right is a zoom on the contribution of \(\text{MLP}_{\text{E,N}}\) depth and number of stacked IN. The baseline used to normalize \(\mathcal{TP}\left(\text{IN}\right)\) is given by the number of trainable parameters of the simplest case: \(l=16\), \(d=1\), \(n=1\). It is trivial to show using Eq. 5 that the behavior of normalized \(\mathcal{TP}\left(\text{IN}\right)\) curve does not depend on the particular choice of the baseline latent space size \(l\).
Fig. 7: Average Normalized Metric. Left side plot depicts how the normalized metric changes when the \(\text{MLP}_{\text{R,N}}\) depth or the number of stacked IN is increased and the other is kept constant. The right side plot shows the same information but referenced to the normalized number of trainable parameters.
working with a fully-connected graph and the volume of operations increases quadratically with respect to the number of nodes (features).
* For a fixed dataset, the number \(n\) of stacked IN has the greatest impact on the amount of operations and, hence, on computational time.
* When the number of features is around \(20\), the impact of latent space size is comparable or even greater than the impact of \(\text{MLP}_{\text{N, E}}\) depth.
### _Interaction Network vs. Transformer_
Recent works [5, 7, 6] propose the Transformers encoder [16] as _contextual_ embedding. Here, we analyze similarities and differences between the two approaches.
**Performances.** For comparison purposes, we replace the IN in our flow with a Transformer encoder while keeping the rest of the model components intact, i.e. the same _columnar_ embedding and the same decoder. See [16] for the details about Transformer models and their components: Multi Head Self-Attention and the FeedForward block. Using Optuna [40], we look for the best set of Transformer encoder hyperparameters in the following search space: number of attention heads \(h\in\{1,2,4,8\}\), FeedForward layer space size \(f\in\{512,1024,2048\}\), number of stacked Transformer encoders \(n\in\{1,2,3,4\}\) and latent space size \(l\in\{16,32,64,128\}\). As in the IN case, each hyperparameter configuration is cross-validated with five folds and all the models are trained for \(200\) epochs using the Adam optimizer with a learning rate of \(0.001\) and batches of size \(256\).
Table III shows how the two approaches provide comparable results even though, at least on the selected benchmark, the IN encoder performs slightly better.
**Trainable parameters.** The size \(f\) of the latent space used by the Transformer FeedForward block has a significant impact on the number of trainable parameters in a Transformer encoder. In the comparison4 that follows, we take into account the setup where \(f=512\) since it achieves the best average results in the Optuna optimization.
Footnote 4: For the purpose of simplicity, we exclude the Normalization Layers parameters from our study in both cases, Transformer and IN.
The number of trainable parameters of a Transformer encoder is given by:
\[\mathcal{TP}\left(\text{Transformer}\right)=n\cdot\left[\mathcal{TP}\left(Q,K,V\right)+\mathcal{TP}\left(\text{MultiAttention}\right)+\mathcal{TP}\left(\text{FeedForward}\right)\right] \tag{6}\]
\[\mathcal{TP}\left(Q,K,V\right)=3\cdot h\cdot l\cdot\left(l+1\right)\]
\[\mathcal{TP}\left(\text{MultiAttention}\right)=l\cdot\left(h+1\right)\]
\[\mathcal{TP}\left(\text{FeedForward}\right)=2\cdot f\cdot l+f+l\]
where \(l\), \(h\), \(f\) and \(n\) are respectively the latent space size, the number of attention heads, the FeedForward latent space size and the number of stacked Transformer encoders.
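For comparison with the helper for Eq. (5) above, the Transformer counterpart mirrors Eq. (6) term by term:

```python
def tp_transformer(l, h, f, n):
    """Trainable parameters of a stack of n Transformer encoders, per Eq. (6)."""
    qkv = 3 * h * l * (l + 1)
    multi_attention = l * (h + 1)
    feed_forward = 2 * f * l + f + l
    return n * (qkv + multi_attention + feed_forward)
```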
Fig. 9 compares the behavior of \(\mathcal{TP}\left(\text{Transformer}\right)\) and \(\mathcal{TP}\left(\text{IN}\right)\). As in Fig. 6, the normalized number of trainable parameters \(\mathcal{TP}\) is obtained by dividing by \(\mathcal{TP}\left(\text{IN}_{l,d=1,n=1}\right)\). Fig. 9 presents the results for \(l=128\). IN has fewer trainable parameters than the Transformer, and the relative difference is even bigger when \(l\) decreases. When the number of attention heads is \(h\leq 2\), the difference is due to the FeedForward block parameters. For \(h>2\), the Transformer has more parameters even without taking the FeedForward block into account.
**Limitations.** When the number of tabular features increases, both INs and Transformers require more resources. The vanilla Multi Head Self-Attention and the IN on a fully-connected graph share quadratic complexity with respect to the number of features. This issue can be alleviated by using efficient approximations of Multi Head Self-Attention [44] or, in the Interaction Network case, a more complex graph topology with fewer edges. Additionally, it is still possible to distill the final model into simpler architectures for better inference performance.
## VI Interpretability of contextual embedding
### _Columnar vs. Contextual embedding_
In subsection IV-A, the effect of _contextual_ embedding on the model performance has been shown. INCE outperforms solutions that just use _columnar_ embedding and, more generally, produces results that are on par with or even better than those of SOTA DL models when applied to tabular data.
Fig. 8: Average normalized training time. For each dataset the INCE training time is normalized using the time of the corresponding MLP with the same columnar embedding and decoder but without contextual embeddings. All the results are relative to a batch size of \(256\). Starting from the configuration base \(l=16\), \(n=1\) and \(d=1\), the different curves are computed varying one parameter while the others are kept constant.
In this subsection, we visually examine how this mechanism improves the feature representation, enhancing the performance of the final model. For the sake of simplicity, in the following discussion we use the Titanic [34] dataset. The supervised learning problem is binary classification. The preprocessed dataset contains eight features. Age and fare are continuous variables standardized to zero mean and unit standard deviation. The categorical features are \(\text{sex}\in\{\text{female},\text{ male}\}\), \(\text{title}\in\{\text{Mr., Mrs., Rare}\}\), \(\text{pclass}\in\{1,\ 2,\ 3\}\), \(\text{family\_size}\in\{0,\ 1,\ 2,\ 3,\ 4,\ 6,\ 7,\ 8\}\), \(\text{is\_alone}\in\{0,\ 1\}\), \(\text{embark}\in\{\text{C}=\text{Cherbourg},\ \text{Q}=\text{Queenstown},\ \text{S}=\text{Southampton}\}\). For this exercise, we consider a simple INCE model with latent space size \(l=2\), \(\text{MLP}_{\text{N, E}}\) depth \(d=3\) and \(n=2\). The choice of \(l=2\) allows analyzing the representation in the latent space without artifacts introduced by dimensionality reduction.
The left side plot of Fig. 10 shows the output of the _columnar_ embedding. Semantically related features like pclass-fare, sex-title, or family_size-is_alone are distributed without any discernible pattern. The representation does not depend on the context: regardless of the pclass, age or family_size values, \(\text{title}=\text{Mrs}\) is always projected to the same point in the latent space.
The _contextual_ embedding is depicted in the right side plot of Fig. 10. This represents the message sent from each node (i.e. tabular feature) to update the CLS representation in the last IN. Patterns are easily discernible: sex vs. title, family_size vs. is_alone, and the feature that is closest to pclass is fare. Additionally, one can see that the latent projections of categorical features are no longer limited to a fixed number of points once the context is taken into consideration, as shown, for example, by the title embedding.
### _Feature importance from Feature-Feature interaction_
The attention map for the CLS virtual node may be used to assess feature relevance when the _contextual_ embedding is a Transformer [6, 7]. Here, we investigate whether the feature-feature interaction that the IN learns can reveal details about the significance of tabular features. We first explain our methodology using the Titanic dataset for the purpose of simplicity, and then we illustrate the findings obtained using the same technique on the other tabular datasets.
In contrast to the Transformer case, we now have two new problems to resolve: 1) the feature-feature interaction is an \(l\)-dimensional vector (that is, it is not a scalar); 2) to assess the global feature significance, we must aggregate the feature-feature importance. The description of our process is provided below.
_First Step:_ We split data in train/test datasets. We train the model and use the trained INCE on the test dataset to produce the feature-feature interaction, i. e. \(e_{j_{1}\to j_{2}}^{i}\in\mathbb{R}^{l}\) in Eq. 4 returned by the last IN. In this notation, we have explicitly recovered the tabular row index \(i\).
_Second Step:_ We estimate the mean \(\mu\) and covariance \(S\) of the entire population \(\left\{e_{j_{1}\to j_{2}}^{i}\ \forall\,i,j_{1},j_{2}\right\}\).
_Third Step:_ For each pair \((j_{1},j_{2})\) of features and for each test row \(i\), we compute the squared Mahalanobis distance:
\[D_{i}^{2}\left(j_{1}\to j_{2}\right)=\left(e_{j_{1}\to j_{2}}^{i}-\mu\right)^{T}S^{-1}\left(e_{j_{1}\to j_{2}}^{i}-\mu\right)\]
_Fourth Step:_ The squared Mahalanobis distance follows a Chi-Square distribution, so we can normalize the distance using its p-value. The number of degrees of freedom of the Chi-Square is given by the latent space size \(l\):
\[p_{i}\left(j_{1}\to j_{2}\right)=\text{Pr}\left(\chi_{l}^{2}\geq D_{i}^{2}\right)\]
_Fifth Step:_ The global interaction p-value \(p\left(j_{1}\to j_{2}\right)\) is obtained averaging the previous results over the test dataset:
\[p\left(j_{1}\to j_{2}\right)=\frac{1}{N_{test}}\sum_{i=1}^{N_{test}}p_{i} \left(j_{1}\to j_{2}\right)\]
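Steps two through five reduce to a few lines of NumPy/SciPy. The sketch below is ours; it assumes the last-IN edge vectors were collected into a tensor of shape [test rows, features, features, latent dim].

```python
import numpy as np
from scipy.stats import chi2

def interaction_pvalues(E):
    """E: [N_test, M, M, l] with E[i, j1, j2] = e^i_{j1->j2} from the last IN.
    Returns the [M, M] matrix of global interaction p-values p(j1 -> j2)."""
    N, M, _, latent = E.shape
    flat = E.reshape(-1, latent)
    mu = flat.mean(axis=0)                              # step 2: population mean
    S_inv = np.linalg.inv(np.cov(flat, rowvar=False))   # step 2: covariance
    diff = flat - mu
    d2 = np.einsum("na,ab,nb->n", diff, S_inv, diff)    # step 3: Mahalanobis^2
    p = chi2.sf(d2, df=latent)                          # step 4: chi^2_l p-value
    return p.reshape(N, M, M).mean(axis=0)              # step 5: test-set average

def global_relevance(P):
    """Aggregate p(j1 -> j2) into the per-feature p(j) used for the
    Spearman comparison with KernelShap below."""
    sym = 0.5 * (P + P.T)                # symmetrized p(j, j_hat)
    np.fill_diagonal(sym, np.nan)        # ignore self-pairs
    return np.nanmean(sym, axis=1)       # lower p-value = more relevant
```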
The findings of the proposed methodology on the Titanic dataset are displayed in the heatmap of Fig. 11. The results are broken down at the feature-value level (i.e. \(\text{sex}=\text{female}\), \(\text{sex}=\text{male}\), \(\text{title}=\text{Mrs}\), \(\text{title}=\text{Mr}\), etc.). This is how the heatmap may be understood: the relevance of the message from the row-\(r\)-feature to the column-\(c\)-feature is represented by the element (row=\(r\), column=\(c\)) of the heatmap. A lower p-value implies more significance. The last column, "Mean", is created by averaging all of the row values and shows the average relevance of the messages sent by row-\(r\)-feature. In a
Fig. 10: **Left:**_Columnar_ embedding before the stack of IN. **Right:**_Contextual_ embedding from the last IN.
Fig. 9: Comparison of IN and Transformer trainable parameters. The normalized \(\mathcal{TP}\) is obtained using Eq. 5 and Eq. 6 and then normalizing with respect to \(\mathcal{TP}\left(IN_{l,d=1,n=1}\right)\). The plot shows the results for \(l=128\). Transformers have more trainable parameters than IN, and the relative difference grows when \(l\) decreases.
similar way, the last row (also known as "Mean") is derived by averaging all the values of the columns and it represents the mean relevance of the messages received by column-\(c\)-feature.
In order to quantitatively assess the quality of the heatmap, we compute the Spearman Rank correlation \(\rho\) between
\[p(j)=\frac{1}{|\mathcal{N}|}\sum_{\hat{j}\in\mathcal{N}}p(j,\hat{j}),\qquad p(j,\hat{j})=\frac{1}{2}\left[p\left(j\to\hat{j}\right)+p\left(\hat{j}\to j\right)\right]\]
and the feature importance calculated by KernelShap [45]. In the formula above, \(\mathcal{N}\) and \(|\mathcal{N}|\) are the set of neighbors of node \(j\) and its size, respectively. The outcome for the Titanic dataset is \(\rho=0.81\) (p-value \(=0.05\)).
The heatmap and the Spearman Rank correlation provide the following insights.
**a)** The feature-feature interaction is not symmetric. In the fully connected graph we have two independent edges \(j_{1}\to j_{2}\) and \(j_{2}\to j_{1}\), and Eq. 3 is not invariant under the \(j_{1}\longleftrightarrow j_{2}\) interchange. Our experiments demonstrate that enforcing \(j_{1}\longleftrightarrow j_{2}\) invariance in Eq. 3 results in a learning bias that negatively affects INCE performance.
**b)** From the heatmap, it is possible to discern logical patterns. For example, is_alone \(=1\) does not add information (high p-value) when family_size is \(0\) or \(1\); on the contrary, the value of family_size is very relevant (low p-value) for any value of title.
**c)** KernelShap evaluates the global model behavior (including the decoder), whereas the IN models \((j_{1}\to j_{2})\) and \((j_{2}\to j_{1})\) separately, so we have to aggregate and average them for the comparison with KernelShap. Considering this, the result of the Spearman Rank correlation analysis can be considered encouraging.
Finally, Table IV summarizes the Spearman Rank correlation achieved on various datasets and demonstrates how the results are consistent regardless of the dataset under consideration.
## VII Conclusions
Let us highlight the main contributions of this article:
* To the best of our knowledge, this is the first time that a model architecture proposes the use of GNNs for contextual embedding to solve supervised tasks involving tabular data.
* The literature mainly discusses the usage of Transformers. This manuscript shows that GNNs, particularly INs, are a valid alternative, showing better performance with a lower number of trainable parameters.
* As a matter of fact, this innovative architecture outperforms the state-of-the-art DL benchmark based on five diverse datasets. Moreover, it closes the gap with classical (tree-based) ML models, outperforming them in two of these datasets and being very close in two more. The tradeoff versus tree-based models is additional computational load in the form of training time, and scalability issues with the number of features (nodes) of the dataset, which constitute future lines of research to keep improving its practical implementation.
* Finally, the interpretability of GNN is explored. This is a key topic for industry environments, and apparently this is the first study for GNN and tabular data.
```
Input: l: latent space size, r: dataset
Output: C_d, C_n: two lists of normalized metrics
  base <- Metric_r(d=1, n=1, l, r)
  C_d <- [Metric_r(d, n=1, l, r) for d in {1,2,3,4}]
  C_n <- [Metric_r(d=1, n, l, r) for n in {1,2,3,4}]
  best <- Best_r(C_d, C_n)
  C_d <- (C_d - base) / (best - base)
  C_n <- (C_n - base) / (best - base)
  return C_d, C_n
```
**Algorithm 1** Normalized metric
## Appendix A
Fig. 11: Titanic feature-feature interaction at feature-value level.
The curves of Fig. 7 are obtained by computing the average and the standard deviation from the results of Alg. 1.
|
2306.14857 | Metapopulation Graph Neural Networks: Deep Metapopulation Epidemic
Modeling with Human Mobility | Epidemic prediction is a fundamental task for epidemic control and
prevention. Many mechanistic models and deep learning models are built for this
task. However, most mechanistic models have difficulty estimating the
time/region-varying epidemiological parameters, while most deep learning models
lack the guidance of epidemiological domain knowledge and interpretability of
prediction results. In this study, we propose a novel hybrid model called
MepoGNN for multi-step multi-region epidemic forecasting by incorporating Graph
Neural Networks (GNNs) and graph learning mechanisms into Metapopulation SIR
model. Our model can not only predict the number of confirmed cases but also
explicitly learn the epidemiological parameters and the underlying epidemic
propagation graph from heterogeneous data in an end-to-end manner. The
multi-source epidemic-related data and mobility data of Japan are collected and
processed to form the dataset for experiments. The experimental results
demonstrate our model outperforms the existing mechanistic models and deep
learning models by a large margin. Furthermore, the analysis on the learned
parameters illustrate the high reliability and interpretability of our model
and helps better understanding of epidemic spread. In addition, a mobility
generation method is presented to address the issue of unavailable mobility
data, and the experimental results demonstrate effectiveness of the generated
mobility data as an input to our model. | Qi Cao, Renhe Jiang, Chuang Yang, Zipei Fan, Xuan Song, Ryosuke Shibasaki | 2023-06-26T17:09:43Z | http://arxiv.org/abs/2306.14857v2 | # Metapopulation Graph Neural Networks:
###### Abstract
Epidemic prediction is a fundamental task for epidemic control and prevention. Many mechanistic models and deep learning models are built for this task. However, most mechanistic models have difficulty estimating the time/region-varying epidemiological parameters, while most deep learning models lack the guidance of epidemiological domain knowledge and interpretability of prediction results. In this study, we propose a novel hybrid model called MepoGNN for multi-step multi-region epidemic forecasting by incorporating Graph Neural Networks (GNNs) and graph learning mechanisms into the Metapopulation SIR model. Our model can not only predict the number of confirmed cases but also explicitly learn the epidemiological parameters and the underlying epidemic propagation graph from heterogeneous data in an end-to-end manner. The multi-source epidemic-related data and mobility data of Japan are collected and processed to form the dataset for experiments. The experimental results demonstrate that our model outperforms the existing mechanistic models and deep learning models by a large margin. Furthermore, the analysis of the learned parameters illustrates the high reliability and interpretability of our model and helps better understanding of epidemic spread. In addition, a mobility generation method is presented to address the issue of unavailable mobility data, and the experimental results demonstrate the effectiveness of the generated mobility data as an input to our model.
Epidemic forecasting, Hybrid model, Metapopulation epidemic model, Graph learning, GNNs, COVID-19
## I Introduction
The coronavirus disease 2019 (COVID-19) pandemic has caused around 500 million confirmed cases and more than 6 million deaths in the global, and it is still ongoing. Due to this circumstance, epidemic forecasting has been a key research topic again as it can guide the policymakers to develop effective interventions and allocate the limited medical resources. Many mechanistic models and deep learning models have been built for the epidemic prediction task. In particular, human mobility is seen as one of the most important factors to understand and forecast the epidemic propagation among different regions.
In this study, we employ metapopulation SIR model [2, 3] as the base model for our task, which extends the most fundamental compartmental model (i.e., SIR [4]) in epidemiology with metapopulation epidemic propagation. As illustrated in Fig. 1, it divides the total population under the epidemic into several sub-populations (e.g., by regions). Each sub-population consists of three compartments, \(S\) (susceptible individuals), \(I\) (infectious individuals), \(R\) (removed individuals, including deaths and recovery cases), and the human mobility between sub-populations is modeled as a directed graph. Thus, it can well model the epidemic propagation among regions. The metapopulation epidemic models have achieved great success in modeling and analyzing the propagation of epidemic diseases, such as SARS, H1N1, and Malaria [5, 6, 7].
However, it is always a non-trivial task to build a metapopulation epidemic model, especially for new emerging epidemics such as COVID-19, due to the following reasons. First, the epidemiological parameters in the metapopulation model keep varying from region to region and time to time. As we all know, the Coronavirus keeps evolving, and the transmissibility and mortality of the variants (e.g., Alpha, Delta, and Omicron) are significantly different. Besides, the intervention policies and the human movements also vary over different periods and regions. Second, due to the mixed factors mentioned above, the epidemic propagation effects via human mobility in the metapopulation epidemic model are also difficult to obtain or estimate. In the case of prefecture-level prediction in Japan, we need to collect the large-scale human mobility data of the entire Japan and obtain the amount of human movement between each pair of prefectures. Then how to accurately infer the underlying disease propagation network becomes another intractable task. Third, besides the daily
Fig. 1: Metapopulation epidemic propagation among regions [2].
infection data, external features such as date information (e.g., \(dayofweek\)) and daily movement change patterns should also be involved.
To tackle these challenges, we incorporate deep learning into the metapopulation SIR model to form a novel hybrid epidemic model. Specifically, we first learn the time/region-varying epidemiological parameters from multiple data features through a spatio-temporal module, which consists of Temporal Convolutional Networks (TCN) and Graph Convolutional Networks (GCN). Next, we design two types of graph learning module to automatically approximate the underlying epidemic propagation graph based on the countrywide human mobility data. Furthermore, we let the learned latent graph be shared by the spatio-temporal module and the metapopulation SIR module, which further enhances the model interpretability and reliability. Previous deep learning methods [8, 9, 10, 11, 12] simply treat epidemic forecasting as a time-series prediction task or a spatio-temporal prediction task, which can only output the predicted number of infections in a pure black-box manner. A recent study [13] incorporates classical epidemic modeling into deep neural networks. However, it does not explicitly consider the epidemic propagation among regions via metapopulation modeling like our model, which largely limits the model interpretability for multi-region epidemic forecasting. _To the best of our knowledge, our work is the first hybrid model that couples a metapopulation epidemic model with spatio-temporal graph neural networks_. In summary, our work has the following contributions:
* We propose a novel hybrid model along with two types of graph learning module for multi-step multi-region epidemic prediction by mixing metapopulation epidemic model and spatio-temporal graph convolution networks.
* Our model can explicitly learn the time/region-varying epidemiological parameters as well as the latent epidemic propagation among regions from the heterogeneous inputs like infection related data, human mobility data, and meta information in a completely end-to-end manner.
* We collect and process the big human GPS trajectory data and other COVID-19 related data that cover the 47 prefectures of Japan from 2020/04/01 to 2021/09/21 for countrywide epidemic forecasting.
* We conduct comprehensive experiments to evaluate the performance of our model on epidemic forecasting task of the 47 prefectures in Japan by comparing with three classes of baseline models. The results illustrate the superior forecasting performance of our model, especially for unprecedented surge of cases. Furthermore, we also use the case studies to demonstrate the high interpretability of our model.
* We present a mobility generation method with minimal data requirements to handle the situation in which mobility data is unavailable. The experimental results show the effectiveness of the generated mobility data as the initialization of adaptive graph learning in our model.
## II Related Work
In this section, we first generally review the two major classes of models for epidemic forecasting and then introduce the research directly related to our model with more details.
### _Epidemic Forecasting Models_
The models for epidemic simulation and forecasting can be divided into two types: _mechanistic approaches_ and _deep learning approaches_.
_Mechanistic approaches_ are built on the domain knowledge of epidemiology and employ pre-defined physical rules to model the transmission dynamics of infectious diseases, mainly _classical compartmental models_[4, 14], _metapopulation models_[15, 16, 3, 17] and _agent-based models_[18, 19, 20]. The classical compartmental models simulate the spread of infectious diseases in a homogeneous population and are unable to model epidemic spread between regions. The metapopulation models assume the heterogeneity of sub-populations and use the human mobility pattern between regions to model the spread of the epidemic [2, 3]. The agent-based models directly use individual-level movement patterns [18, 19] or trajectories [20] to emulate the contagion process. Our work is related to the metapopulation model, which is most suitable for the multi-region epidemic forecasting task. To implement epidemic modeling, such a model first needs to be calibrated using historical observations, and the optimized or manually modified parameters are then used to make predictions. These efforts are hardly applicable to multi-step forecasting tasks. The parameter calibration process has high computational complexity, especially when facing a huge parameter state space [15, 18]. Moreover, in most mechanistic models, the epidemiological parameters are kept fixed during forecasting. The variation of parameters through time is not considered, which leads to the problem of cumulative error in multi-step prediction.
_Deep learning approaches_ have shown excellent performance in modeling and forecasting on time series prediction tasks. As epidemic data form a typical time series, several research efforts utilizing deep learning techniques, such as LSTM [8, 10], have been conducted for epidemic forecasting over a single region [8, 10, 21, 22]. Nevertheless, the epidemic propagation is often spatially dependent, i.e., co-evolving over regions. Thus, treating epidemic forecasting as a multivariate time-series prediction task and performing collaborative forecasting over multiple geographical units should be a more reasonable choice. For such tasks, a key challenge is to model the complex and implicit spatio-temporal dependencies among the observations, and much evidence shows that GNNs can perform very well for modeling the inter-series relationships. A series of state-of-the-art solutions based on GNNs have been proposed for multivariate time-series prediction tasks, such as STGCN [23], DCRNN [24], GraphWaveNet [25], ColaGNN [11], and CovidGNN [12]. In particular, ColaGNN [11] and CovidGNN [12] were explicitly designed for epidemic prediction. However, these works ignore the domain knowledge of epidemiology and are hard to interpret from the epidemiological perspective. STAN [21] incorporates epidemiological constraints into deep learning models, but it can only predict infections of a single region. CausalGNN [13] embeds a single-patched SIRD model into GNN for multi-region epidemic forecasting.
Overall, we distinguish our work from existing ones in the following ways: compared with the mechanistic models, MepoGNN adopts an end-to-end framework that predicts the dynamic change of epidemiological parameters and uses the predicted parameters to produce multi-region, multi-step predictions; compared with the deep learning models for the multi-region prediction task, MepoGNN incorporates the domain knowledge of epidemiology and enhances interpretability by combining a spatio-temporal deep learning model with the metapopulation model; furthermore, MepoGNN outputs the prediction of infections through the metapopulation epidemic model and simultaneously learns the interpretable epidemiological parameters and the latent graph of epidemic propagation.
### _SIR Model and Metapopulation SIR Model_
After the general review of epidemic forecasting models, we introduce the models closely related to our research in more detail. Compartmental models are a widely used technique for modeling the spread of infectious diseases. The SIR model [4] is one of the most representative classical compartmental models (most other compartmental models, like the SEIR, SIRD and SIRV models, can be seen as variants of the SIR model). The SIR model divides the population into three compartments: \(S^{t}\) for the number of susceptible individuals, \(I^{t}\) for the number of infectious individuals, and \(R^{t}\) for the number of recovered or deceased individuals at time \(t\). As shown in Fig. 2, susceptible individuals become infectious when they come into contact with infectious individuals and get infected, and infectious individuals become removed when they recover or die from the infectious disease. The SIR model uses \(\beta\) as the infection rate and \(\gamma\) as the removal rate. \(S\), \(I\), \(R\) can be updated by the following equations:
\[\begin{split}\frac{dS^{t+1}}{dt}&=-\frac{\beta S^{t}I^{t}}{P}\\ \frac{dI^{t+1}}{dt}&=\frac{\beta S^{t}I^{t}}{P}-\gamma I^{t}\\ \frac{dR^{t+1}}{dt}&=\gamma I^{t}\end{split} \tag{1}\]
where \(P\) represents total population size which is usually assumed as a constant number (\(P=S^{t}+I^{t}+R^{t}\)).
When the SIR model is used for epidemic modeling, the typical procedure is as follows:
1. Input historical epidemic observations into Eq. 1 and optimize the parameters (i.e., \(\beta\) and \(\gamma\)) to fit the epidemic curve.
2. Use the optimized parameters to update \(S\), \(I\), \(R\) iteratively to model the epidemic spread in the future.
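To make this two-step procedure concrete, the following is a minimal Python sketch (the function names, the grid-search calibration, and the toy numbers are our own illustrative choices, not the calibration method used in any specific prior work):

```python
import numpy as np

def simulate_sir(S0, I0, R0, beta, gamma, steps):
    """Iterate the discrete-time SIR updates of Eq. 1."""
    P = S0 + I0 + R0
    S, I, R = float(S0), float(I0), float(R0)
    history = []
    for _ in range(steps):
        new_inf = beta * S * I / P
        new_rem = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rem, R + new_rem
        history.append((S, I, R))
    return np.array(history)

def calibrate_sir(S0, I0, R0, observed_I, betas, gammas):
    """Step 1: grid-search (beta, gamma) that best fits observed infections."""
    best, best_err = None, np.inf
    for beta in betas:
        for gamma in gammas:
            I_sim = simulate_sir(S0, I0, R0, beta, gamma, len(observed_I))[:, 1]
            err = np.mean((I_sim - observed_I) ** 2)
            if err < best_err:
                best, best_err = (beta, gamma), err
    return best

# Step 2: roll the optimized (fixed) parameters forward to forecast.
beta, gamma = calibrate_sir(9_990, 10, 0, observed_I=np.array([12, 15, 19, 24]),
                            betas=np.linspace(0.1, 0.5, 41),
                            gammas=np.linspace(0.05, 0.2, 16))
forecast = simulate_sir(9_990, 10, 0, beta, gamma, steps=14)
```

Note how the parameters stay fixed during the forecast loop, which is exactly the source of the cumulative multi-step error discussed above.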
SIR model can only model the epidemic spread for a homogeneous population, which ignores the epidemic propagation between sub-populations. However, in the real world, the sub-populations usually have heterogeneous epidemic situations and interact with each other (e.g., epidemic propagation among sub-populations). Metapopulation SIR model [3] fills this gap by assuming the heterogeneity of sub-populations and using human mobility to model the propagation between sub-populations. We demonstrate the difference between SIR model and Metapopulation SIR model by an example of Tokyo and Chiba in Fig. 3. Metapopulation SIR model consists of three compartments for each sub-population: \(S_{n}^{t}\) for number of susceptible individuals, \(I_{n}^{t}\) for number of infectious individuals, \(R_{n}^{t}\) for the number of recovered or deceased individuals of sub-population \(n\) at time \(t\). \(P_{n}\) represents the size of sub-population \(n\) which is assumed to be a constant number, where \(P_{n}=S_{n}^{t}+I_{n}^{t}+R_{n}^{t}\). \(\beta\) is the rate of infection1, and \(\gamma\) is the rate of recovery and mortality. Furthermore, it uses \(h_{nm}\) to represent the epidemic propagation from sub-population \(n\) to \(m\). The original metapopulation SIR model in [3] can be expressed as follows:
Footnote 1: \(\beta\) in metapopulation SIR model is not completely equivalent to \(\beta\) in SIR model, but we use the same symbol in this work for simplicity.
\[\begin{split}\frac{dS_{n}^{t+1}}{dt}&=-\beta\cdot S_{n}^{t}\sum_{m=1}^{N}(\frac{h_{mn}}{P_{m}}+\frac{h_{nm}}{P_{n}})I_{m}^{t}\\ \frac{dI_{n}^{t+1}}{dt}&=\beta\cdot S_{n}^{t}\sum_{m=1}^{N}(\frac{h_{mn}}{P_{m}}+\frac{h_{nm}}{P_{n}})I_{m}^{t}-\gamma\cdot I_{n}^{t}\\ \frac{dR_{n}^{t+1}}{dt}&=\gamma\cdot I_{n}^{t}\end{split} \tag{2}\]
The parameters \(h_{nm}\) form an epidemic propagation graph, as shown in Fig. 3. In [3], the task is formulated as a network inference problem, which is solved by optimizing an objective function with some regularization.
Fig. 3: Differences between SIR model and Metapopulation SIR model.
Fig. 2: Epidemic spread in classical SIR model.
## III Problem Formulation
In this study, we focus on simultaneously forecasting the number of daily confirmed cases for multiple regions and multiple steps by using epidemic-related data and human mobility data.
For a single region, the historical daily confirmed cases from timestep \(t-T_{in}+1\) to \(t\) can be represented as \(\mathbf{x}^{t-(T_{in}-1):t}\in\mathbb{R}^{T_{in}}\). Then, the historical daily confirmed cases of \(N\) regions as illustrated in Fig. 4 can be denoted as \(\mathbf{X}^{t-(T_{in}-1):t}=\{\mathbf{x}_{1}^{t-(T_{in}-1):t},\mathbf{x}_{2}^{ t-(T_{in}-1):t},...,\mathbf{x}_{N}^{t-(T_{in}-1):t}\}\in\mathbb{R}^{N\times T _{in}}\). Besides the historical observations, we also incorporate external factors to form a multi-channel input \(\mathcal{X}^{t-(T_{in}-1):t}=\{\mathbf{X}_{1}^{t-(T_{in}-1):t},\mathbf{X}_{2}^ {t-(T_{in}-1):t},...,\mathbf{X}_{C}^{t-(T_{in}-1):t}\}\in\mathbb{R}^{N\times T _{in}\times C}\). Details of the input features will be introduced in Section V. Additionally, human mobility between regions (static flow data \(\mathbf{U}\in\mathbb{R}^{N\times N}\) or dynamic flow data \(\mathcal{O}^{t-(T_{in}-1):t}\in\mathbb{R}^{N\times N\times T_{in}}\)) is used as another type of input. Details of processing the mobility flow data will also be introduced in Section V.
Thus, in this study, we aim to predict the daily confirmed cases of \(N\) regions in next \(T_{out}\) timesteps \(\mathbf{Y}^{t+1:t+T_{out}}\in\mathbb{R}^{N\times T_{out}}\) that takes human mobility between regions into consideration. As demonstrated by Fig. 5, our problem can be formulated as follows:
\[\{\mathcal{X}^{t-(T_{in}-1):t},\mathbf{U}\}\xrightarrow{f(\cdot)}\mathbf{Y}^{ t+1:t+T_{out}} \tag{3}\]
\[\{\mathcal{X}^{t-(T_{in}-1):t},\mathcal{O}^{t-(T_{in}-1):t}\} \xrightarrow{f(\cdot)}\mathbf{Y}^{t+1:t+T_{out}} \tag{4}\]
## IV Methodology
In this section, we present Epidemic Metapopulation Graph Neural Networks (MepoGNN), illustrated in Fig. 6, for spatio-temporal epidemic prediction. MepoGNN consists of three major components: a metapopulation SIR module, a spatio-temporal module and a graph learning module. These three components cooperate tightly with each other. The graph learning module learns the mobility intensity between regions as a graph and outputs it to the spatio-temporal module and the metapopulation SIR module. The spatio-temporal module captures the spatio-temporal dependency to predict the sequences of parameters for the metapopulation SIR module. Then, the metapopulation SIR module takes the learned graph and the predicted parameters to produce the multi-step prediction of daily confirmed cases.
### _Metapopulation SIR Module_
The SIR model is one of the most fundamental compartmental models in epidemiology, used for modeling epidemic spread [4]. However, it can only model epidemic spread in a homogeneous population, ignoring the epidemic propagation between sub-populations. The metapopulation SIR model [3] fills this gap by assuming heterogeneity of the sub-populations and using human mobility to model the propagation between them.
In this study, we model the population of each region as a sub-population in the metapopulation SIR model, so the \(h_{nm}\) in Eq. 2 can be represented by human mobility between regions. Owing to the different characteristics of regions, policy changes over time and other factors, there exists spatio-temporal heterogeneity in epidemic spread. In our model, \(\beta\), \(\gamma\) and \(h_{nm}\) are therefore assumed to vary over time and regions. In addition, to prevent \(\beta\) from becoming extremely small and to keep it at a relatively stable magnitude, \(S_{n}^{t}\) is omitted from the equations. Thus, we extend the original metapopulation SIR model in Eq. 2 as follows:
\[\begin{split}\frac{dS_{n}^{t+1}}{dt}&=-\beta_{n}^{t+1}\sum_{m=1}^{N}(\frac{h_{mn}^{t+1}}{P_{m}}+\frac{h_{nm}^{t+1}}{P_{n}})I_{m}^{t}\\ \frac{dI_{n}^{t+1}}{dt}&=\beta_{n}^{t+1}\sum_{m=1}^{N}(\frac{h_{mn}^{t+1}}{P_{m}}+\frac{h_{nm}^{t+1}}{P_{n}})I_{m}^{t}-\gamma_{n}^{t+1}\cdot I_{n}^{t}\\ \frac{dR_{n}^{t+1}}{dt}&=\gamma_{n}^{t+1}\cdot I_{n}^{t}\end{split} \tag{5}\]
With predicted \(\beta_{n}^{t+1}\), \(\gamma_{n}^{t+1}\) and \(\mathcal{H}^{t+1}\) (the epidemic propagation matrix formed by \(\{h_{nm}^{t+1}|n,m\in\{1,2,...,N\}\}\)), \(S\), \(I\), \(R\) can be updated iteratively:
\[[S_{n}^{t},I_{n}^{t},R_{n}^{t}]\xrightarrow{\beta_{n}^{t+1},\;\gamma_{n}^{t+1},\;\mathcal{H}^{t+1}}[S_{n}^{t+1},I_{n}^{t+1},R_{n}^{t+1}] \tag{6}\]
The final prediction output of daily confirmed cases can be formed as:
\[\begin{split}\hat{y}_{n}^{t+1}&=\beta_{n}^{t+1}\sum_{m=1}^{N}(\frac{h_{mn}^{t+1}}{P_{m}}+\frac{h_{nm}^{t+1}}{P_{n}})I_{m}^{t}\\ \mathbf{\hat{Y}}&=\begin{bmatrix}\hat{y}_{1}^{t+1}&\ldots&\hat{y}_{1}^{t+T_{out}}\\ \vdots&\ddots&\vdots\\ \hat{y}_{N}^{t+1}&\ldots&\hat{y}_{N}^{t+T_{out}}\end{bmatrix}_{N\times T_{out}}\end{split} \tag{7}\]
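As an illustration of how Eqs. 5-7 can be iterated in practice, here is a minimal PyTorch sketch (the function names `mepo_sir_step` and `rollout` are hypothetical; in the real module the parameters come from the spatio-temporal module rather than being passed as plain tensors):

```python
import torch

def mepo_sir_step(S, I, R, P, beta_t, gamma_t, H_t):
    # S, I, R, P, beta_t, gamma_t: (N,) tensors; H_t: (N, N), where H_t[n, m]
    # is the epidemic propagation h_nm from region n to region m.
    coupling = (I / P) @ H_t + (H_t @ I) / P   # sum_m (h_mn/P_m + h_nm/P_n) I_m
    new_cases = beta_t * coupling              # Eq. 7: predicted daily cases
    removed = gamma_t * I
    return S - new_cases, I + new_cases - removed, R + removed, new_cases

def rollout(S, I, R, P, beta, gamma, H):
    # beta, gamma: (N, T_out); H: (N, N, T_out) -> Y_hat: (N, T_out), Eq. 6
    preds = []
    for t in range(beta.shape[1]):
        S, I, R, y = mepo_sir_step(S, I, R, P, beta[:, t], gamma[:, t], H[:, :, t])
        preds.append(y)
    return torch.stack(preds, dim=1)
```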
### _Spatio-Temporal Module for Epidemiological Parameters_
Fig. 4: Illustration of multi-regional epidemic forecasting.
Fig. 5: Illustration of problem definition (model input and output).
The spatio-temporal module takes the node input features \(\mathcal{X}\in\mathbb{R}^{N\times T_{in}\times C}\) and the weighted adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) as input and outputs the predicted parameters \(\beta\in\mathbb{R}^{N\times T_{out}}\) and \(\gamma\in\mathbb{R}^{N\times T_{out}}\). We use a spatio-temporal layer (ST layer) combining Gated TCN and GCN (as in GraphWaveNet [25]) to capture the spatio-temporal dependency. Gated TCN [26] is used to capture temporal dependency:
\[\mathcal{Q}_{l}=g(\Theta_{l1}\star\mathcal{Z}_{l}+\mathbf{b}_{l1})\odot\sigma( \Theta_{l2}\star\mathcal{Z}_{l}+\mathbf{b}_{l2}) \tag{8}\]
where \(\mathcal{Z}_{l}\) is the input of the \(l\)-th layer, \(\Theta_{l1}\) and \(\Theta_{l2}\) are temporal convolution kernels, \(\mathbf{b}_{l1}\) and \(\mathbf{b}_{l2}\) are biases, \(g(\cdot)\) is the tanh activation function for the output, \(\sigma(\cdot)\) is the sigmoid function that forms the gate, \(\star\) denotes convolution, and \(\odot\) denotes the element-wise product. Next, we model the regions and the interactions between regions as a graph and use diffusion graph convolution [24, 25] to capture the spatial dependency:
\[\mathbf{P}_{f} =\mathbf{A}/rowsum(\mathbf{A}) \tag{9}\] \[\mathbf{P}_{b} =\mathbf{A^{T}}/rowsum(\mathbf{A^{T}})\]
\[\tilde{\mathcal{Z}}_{l}=\sum_{k=0}^{K}\mathbf{P}_{f}^{k}\mathcal{Q}_{l} \mathbf{W}_{lk1}+\mathbf{P}_{b}^{k}\mathcal{Q}_{l}\mathbf{W}_{lk2} \tag{10}\]
where \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix, \(\mathbf{P}_{f}\) is the forward transition matrix, \(\mathbf{P}_{b}\) is the backward transition matrix, and \(\tilde{\mathcal{Z}}_{l}\) is the output of the \(l\)-th layer.
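The following is a simplified PyTorch sketch of one ST layer combining Eq. 8 with Eqs. 9-10. This is a hedged reconstruction, not the authors' code: the class name, channel handling and hyperparameters are illustrative, and \(\mathbf{A}\) is assumed non-negative with nonzero row sums.

```python
import torch
import torch.nn as nn

class STLayer(nn.Module):
    """One ST layer: Gated TCN (Eq. 8) feeding a diffusion GCN (Eqs. 9-10)."""
    def __init__(self, c_in, c_out, kernel=2, K=2):
        super().__init__()
        self.filter_conv = nn.Conv2d(c_in, c_out, (1, kernel))  # Theta_l1
        self.gate_conv = nn.Conv2d(c_in, c_out, (1, kernel))    # Theta_l2
        self.K = K
        # one linear map per diffusion step and direction (W_lk1, W_lk2)
        self.W = nn.ModuleList([nn.Linear(c_out, c_out, bias=False)
                                for _ in range(2 * (K + 1))])

    def forward(self, Z, A):
        # Z: (batch, c_in, N, T); A: (N, N) non-negative weighted adjacency
        Q = torch.tanh(self.filter_conv(Z)) * torch.sigmoid(self.gate_conv(Z))
        Pf = A / A.sum(dim=1, keepdim=True)      # forward transition, Eq. 9
        Pb = A.T / A.T.sum(dim=1, keepdim=True)  # backward transition
        out, xf, xb = 0.0, Q, Q
        for k in range(self.K + 1):              # diffusion steps of Eq. 10
            out = out + self.W[2 * k](xf.transpose(1, 3)).transpose(1, 3)
            out = out + self.W[2 * k + 1](xb.transpose(1, 3)).transpose(1, 3)
            xf = torch.einsum('nm,bcmt->bcnt', Pf, xf)   # P_f^{k+1} Q
            xb = torch.einsum('nm,bcmt->bcnt', Pb, xb)
        return out
```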
Multiple ST layers can be stacked to capture the spatio-temporal dependency at different scales. We use a gated dense connection to bridge the ST layers. It extracts important information from previous ST layers and passes it to the following layer:
\[\mathcal{D}_{l}=\begin{cases}\mathcal{X},&\text{if }l=1,\\ \mathcal{D}_{l-1}+\mathcal{Z}_{l},&\text{otherwise}.\end{cases} \tag{11}\]
\[\mathcal{Z}_{l+1}=\begin{cases}\mathcal{X},&\text{if }l=0,\\ \tilde{\mathcal{Z}}_{l}\odot\sigma(\tilde{\mathcal{Z}}_{l})+\mathcal{D}_{l} \odot(1-\sigma(\tilde{\mathcal{Z}}_{l})),&\text{otherwise}.\end{cases} \tag{12}\]
where \(\mathcal{D}_{l}\) stores the information from previous layers. Then, we concatenate the outputs of different layers through skip connections to fuse the information of different scales. Finally, the parameters \(\beta\in\mathbb{R}^{N\times T_{out}}\) and \(\gamma\in\mathbb{R}^{N\times T_{out}}\) are produced through two separate fully connected layers.
### _Graph Learning Module for Epidemic Propagation_
Two different graphs are used in the metapopulation SIR module and the spatio-temporal module, respectively. Unlike the trivial approach of feeding two fixed graphs to the two modules separately, we make the two modules share a single learnable graph. With the shared learnable graph, the spatial dependency used in the spatio-temporal module is consistent with the epidemic propagation in the metapopulation SIR module, which improves the interpretability of our model. Furthermore, the parameters of the graph learning module can be updated by gradients from both the spatio-temporal module and the metapopulation SIR module, which makes the learned graph more realistic.
Fig. 6: Proposed Epidemic Metapopulation Graph Neural Networks (MepoGNN) for spatio-temporal epidemic prediction.
As shown in Fig. 7, there are two types of graph learning module to deal with different input data. The first type is the adaptive graph learning module, which takes static flow data (e.g., commuter survey data) as input. Unlike the graph learning methods in some deep learning models [25, 27, 28], random initialization of the graph learning module is difficult in our model, because the metapopulation SIR module requires the input \(\mathcal{H}^{t}\) to be a graph whose values reflect the real heterogeneous mobility intensity between each pair of regions (e.g., the number of trips, the number of commuters) at an appropriate magnitude (to make the parameters easier to learn), rather than a normalized (e.g., row-normalized) weighted adjacency graph. Intuitively, we initialize an adaptive graph \(\mathbf{G}\) with the static flow matrix \(\mathbf{U}\) and make it learnable through training. The adaptive graph is then output to the spatio-temporal module (Eq. 9) as \(\mathbf{A}\in\mathbb{R}^{N\times N}\) and to the metapopulation SIR module (Eq. 5) as \(\mathcal{H}\in\mathbb{R}^{N\times N\times 1}\) (which means we use the same \(h_{nm}\) for all timesteps).
The second type is the dynamic graph learning module, which takes the dynamic OD flow tensor as input. Although the OD flow and the epidemic spread status are both dynamic, they do not necessarily correspond one-to-one in time. Considering the delayed effect, the influence of mobility on epidemic spread can be seen as a weighted average of the given past values (\(T_{in}\) days). So, we initialize a learnable time weight matrix \(\mathbf{L}\in\mathbb{R}^{T_{out}\times T_{in}}\) and normalize it as \(\tilde{\mathbf{L}}\) through a softmax function. The normalized time weight matrix maps the historical dynamic flow \(\mathcal{O}^{t-(T_{in}-1):t}\in\mathbb{R}^{N\times N\times T_{in}}\) to its influence on future epidemic spread. The outputs \(\mathcal{H}^{t+1:t+T_{out}}\in\mathbb{R}^{N\times N\times T_{out}}\) and \(\mathbf{A}\in\mathbb{R}^{N\times N}\) are calculated as follows:
\[\tilde{\mathbf{L}}=Softmax_{:,j}(\mathbf{L}) \tag{13}\]
\[\mathcal{H}^{t+1:t+T_{out}}=\tilde{\mathbf{L}}\mathcal{O}^{t-(T_{in}-1):t}, \quad\mathbf{A}=\frac{\sum_{i=1}^{T_{out}}\mathcal{H}^{t+i}}{T_{out}} \tag{14}\]
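A minimal PyTorch sketch of Eqs. 13-14 might look as follows (the class name and initialization scale are our own assumptions):

```python
import torch
import torch.nn as nn

class DynamicGraphLearner(nn.Module):
    """Maps historical OD flows to future propagation graphs (Eqs. 13-14)."""
    def __init__(self, T_in, T_out):
        super().__init__()
        self.L = nn.Parameter(torch.randn(T_out, T_in) * 0.01)  # time weights

    def forward(self, O):
        # O: (N, N, T_in) historical dynamic OD flows
        L_tilde = torch.softmax(self.L, dim=1)        # normalize over T_in, Eq. 13
        H = torch.einsum('ot,nmt->nmo', L_tilde, O)   # (N, N, T_out), Eq. 14
        A = H.mean(dim=-1)                            # static view for the ST module
        return H, A
```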
**Why propose two types of graph learning?** The dynamic graph learning module can capture the dynamic change of epidemic propagation, but it requires dynamic flow data, which is unavailable in most cases. To improve the applicability of our model, we propose the adaptive graph learning module to address this problem. With the two types of graph learning module, our model can handle different situations of data availability in the best way possible.
### _Mobility Generation for Initialization_
As mentioned above, we propose two types of graph learning module to handle different situations of data availability. However, even the adaptive graph learning module requires access to static flow data, and in some worse situations of data availability, static flow data is difficult to access or may not even exist. Dealing with this problem is therefore key to improving the applicability of our MepoGNN model.
In this section, we generate the mobility between each pair of regions using as simple data as possible. We assume two major factors determine the mobility intensity between each pair of regions:
(1) Populations of origin and destination: The population of each region determines the trip generation from and trip attraction to that region. For example, metropolitan regions (e.g., Tokyo) generate more trips to other regions and also attract more visitors from other regions.
(2) Distance between origin and destination: The distance between origin and destination is another key factor determining the mobility distribution. Each region tends to generate more trips to nearer regions and attract more visitors from nearer regions. For example, Aichi Prefecture and Saitama Prefecture have similar populations, but there are more trips from Tokyo to Saitama than from Tokyo to Aichi.
The relationship between the mobility intensity from region \(n\) to \(m\) and the two factors above can be approximated by the following equation:
\[mob_{nm}\propto\frac{P_{n}P_{m}}{(dist_{nm})^{d}+\epsilon} \tag{15}\]
where \(P_{n}\) and \(P_{m}\) are the populations of regions \(n\) and \(m\), respectively; \(dist_{nm}\) is the distance between regions \(n\) and \(m\); \(d\) is a parameter controlling the power of distance decay; and \(\epsilon\) is a constant to prevent division by zero when computing intra-region mobility (i.e., \(n=m\)).
Fig. 7: Two types of graph learning: Adaptive and Dynamic.
In the MepoGNN model, we use mobility between regions to estimate the epidemic propagation between regions, so the absolute mobility intensity (e.g., real trip numbers between each pair of regions) is not required in our study. The relative mobility intensity can be approximated by the following equation:
\[\widetilde{mob}_{nm}=\alpha\frac{P_{n}P_{m}}{(dist_{nm})^{d}+\epsilon} \tag{16}\]
where \(\alpha\) is a parameter that keeps \(\widetilde{mob}_{nm}\) at an appropriate magnitude.
The relative mobility among \(N\) regions \(\{\widetilde{mob}_{nm}|n,m\in\{1,2,...,N\}\}\) forms a relative mobility intensity matrix \(\widetilde{\mathbf{M}}\in\mathbb{R}^{N\times N}\), which can be used to initialize the learnable graph of the adaptive graph learning module. Although this simple assumption might not provide a precise estimation of the mobility intensity between regions, it is sufficient as an initialization for the adaptive graph learning module, since the learnable graph evolves during training. Furthermore, the aim of our method is to generate relative mobility intensity using as simple data as possible. To the best of our knowledge, the population of each region and the distance between regions are available in most cases and can be seen as the simplest possible data requirement for mobility generation. This minimal data requirement ensures better applicability of our method.
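Eq. 16 amounts to a one-line gravity-style computation. A Python sketch could be as follows, with the default parameter values taken from the experiment in Section VI-D:

```python
import numpy as np

def generate_mobility(population, dist, alpha=1e-6, d=1.7, eps=9.0):
    """Relative mobility intensity matrix (Eq. 16).
    population: (N,) array; dist: (N, N) pairwise distances (dist[n, n] = 0)."""
    P_outer = np.outer(population, population)   # P_n * P_m
    return alpha * P_outer / (dist ** d + eps)   # eps avoids div-by-zero on diagonal
```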
## V Data
We collect multi-source epidemic-related data and mobility data for epidemic prediction. We set the 47 prefectures of Japan and 2020/04/01 \(\sim\) 2021/09/21 (539 days) as our study area and time period, respectively. The data can be divided into three sub-groups: epidemic data, external data and mobility flow data. We describe the data sources, demonstrate the data processing and visualize the data in this section.
### _Epidemic Data_
The numbers of daily confirmed cases, cumulative cases and deaths are collected from the NHK COVID-19 database2. Fig. 8 demonstrates the spatial distribution of daily confirmed cases in Japan on 2021/04/01 and 2021/08/01, respectively. Fig. 9 demonstrates the daily confirmed cases of Tokyo and the distribution difference between two periods of time (2020/04/01 \(\sim\) 2021/06/30 and 2021/07/01 \(\sim\) 2021/09/21).
Footnote 2: [https://www3.nhk.or.jp/news/special/coronavirus/data/](https://www3.nhk.or.jp/news/special/coronavirus/data/)
The number of recovered cases is collected from the Japan LIVE Dashboard3[29] (the original data source is the Ministry of Health, Labour and Welfare, Japan). The population of each prefecture is collected from the 2020 Japan census data4. With the above-mentioned data, the daily \(S\), \(I\), \(R\) of each prefecture can be calculated.
Footnote 3: [https://github.com/swsoyee/2019-ncov-japan](https://github.com/swsoyee/2019-ncov-japan)
Footnote 4: [https://www.stat.go.jp/data/kokuse/2020/kekaka.html](https://www.stat.go.jp/data/kokuse/2020/kekaka.html)
### _External Data_
Apart from the number of daily confirmed cases, the input node features also include the daily movement change, the ratio of daily confirmed cases to active cases, and \(dayofweek\). The ratio of daily confirmed cases to active cases reflects the trend of the epidemic, which is very important for epidemic prediction, especially for periods near change points.
Fig. 11: Dynamic flow from Saitama to Tokyo from 2020/04/01 to 2021/09/21 before normalization (blue) and after normalization (red).
Fig. 8: Spatial distribution of daily confirmed cases of Japan on 2021/04/01 (left) and 2021/08/01 (right).
Fig. 10: Original movement change data (left) and prefecture-level movement change data (right) through population weighted averaging on 2021/08/01.
Fig. 9: Daily confirmed cases of Tokyo (left) and distribution gap between two periods of time (right).
The movement change data is collected from Facebook Movement Range Maps5. It records the change of people's movement range compared to a baseline period. Because it is not provided at the prefecture level, we use a population-weighted average to obtain prefecture-level data. Fig. 10 visualizes the original movement change data and the prefecture-level movement change data obtained through population-weighted averaging on 2021/08/01. Because the number of people taking tests exhibits weekly periodicity, it is necessary to include \(dayofweek\) as one of the external features.
Footnote 5: [https://data.humdata.org/dataset/movement-range-maps](https://data.humdata.org/dataset/movement-range-maps)
### _Mobility Flow Data_
The input static flow data for adaptive graph learning module is the number of commuters between each pair of prefectures, which is collected from 2015 Japan census data6.
Footnote 6: [https://www.stat.go.jp/data/kokusei/2015/kekka.html](https://www.stat.go.jp/data/kokusei/2015/kekka.html)
The input dynamic flow data for the dynamic graph learning module is the daily OD flow data among the 47 prefectures, which is generated from human GPS trajectory data provided by Blogwatcher Inc. The Blogwatcher dataset provides the GPS trajectories of multi-million users, containing rich human mobility information for Japan. We use detected Move/Stay points to divide each trajectory into several OD trips and aggregate the trips by origin and destination prefecture to obtain the daily dynamic OD flow between each pair of prefectures.
However, a significant spatio-temporal imbalance exists in the daily dynamic OD flow data:
(1) Since the GPS data is collected from mobile application users, there are significant gaps among the utilization rates (i.e., sample rate of GPS data) of different prefectures;
(2) The number of unique user IDs in the GPS records varies every day, and the variations exhibit trend and periodicity.
Using the raw dynamic OD flow data directly would be problematic: it could misrepresent the human mobility intensity between regions and mislead the construction of the epidemic propagation graph. We need to normalize the raw dynamic flow data to mitigate the spatio-temporal imbalance. However, it is extremely difficult to deal with the spatial and temporal imbalances simultaneously, because the temporal imbalance itself differs across space, and vice versa.
Normalizing the raw dynamic flow data using GPS data alone would be very complicated. We address this problem by introducing extra data: the stay-put ratio (the ratio of people staying in a single location all day) from Facebook Movement Range Maps. As mentioned in Sec. V-B, we again use a population-weighted average to obtain the stay-put ratio at the prefecture level. Then, a simple method can be applied to normalize the dynamic flow data:
(1) Using the stay-put ratio and population, we can get the number of active people (people not staying in a single location all day) by:
\[\widetilde{P}_{n}^{t}=(1-stay_{n}^{t})\times P_{n} \tag{17}\]
where \(\widetilde{P}_{n}^{t}\) is the number of active people of region \(n\) at time \(t\) and \(stay_{n}^{t}\) is stay put ratio of region \(n\) at time \(t\).
(2) We aggregate the trips by their origin regions, and count the number of unique user IDs in each group of trips with same origin region. Then, the sample rate of dynamic flow data can be computed by:
\[sample_{n}^{t}=\frac{nuid_{n,:}^{t}}{\widetilde{P}_{n}^{t}} \tag{18}\]
where \(sample_{n}^{t}\) is the sample rate of region \(n\) at time \(t\) and \(nuid_{n,:}^{t}\) is the number of unique user IDs in the group of trips with same origin \(n\) at time \(t\).
(3) With the sample rate of each region, we can set an anchor using sample rate of a specific region at a specific timestep (in our case, the sample rate of Tokyo at 2020/04/01) and normalize the raw dynamic OD flow data by:
\[\mathcal{O}_{n,m}^{t}=\frac{sample_{n_{n}}^{t_{a}}}{sample_{n}^{t}}\times \overline{\mathcal{O}}_{n,m}^{t} \tag{19}\]
where \(sample_{n_{a}}^{t_{a}}\) is the anchor (the sample rate of region \(n_{a}\) at time \(t_{a}\)), \(\overline{\mathcal{O}}_{n,m}^{t}\) is the raw dynamic OD flow from region \(n\) to \(m\) at time \(t\), and \(\mathcal{O}_{n,m}^{t}\) is the normalized dynamic flow, which forms the input to the dynamic graph learning module \(\mathcal{O}^{t-(T_{in}-1):t}\in\mathbb{R}^{N\times N\times T_{in}}\).
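Steps (1)-(3) can be written compactly as follows. This is a sketch assuming the array shapes noted in the comments; the anchor indices are placeholders for the chosen region and date (Tokyo, 2020/04/01 in our case):

```python
import numpy as np

def normalize_od_flow(raw_od, stay_ratio, population, n_uid, anchor=(0, 0)):
    """Normalize raw dynamic OD flows against a sample-rate anchor (Eqs. 17-19).
    raw_od: (T, N, N); stay_ratio: (T, N); population: (N,);
    n_uid: (T, N) unique user IDs among trips originating in each region."""
    active = (1.0 - stay_ratio) * population   # Eq. 17: number of active people
    sample = n_uid / active                    # Eq. 18: per-region sample rate
    t_a, n_a = anchor                          # placeholder anchor indices
    scale = sample[t_a, n_a] / sample          # (T, N) correction factors
    return raw_od * scale[:, :, None]          # Eq. 19: rescale by origin region
```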
Although this method may not perfectly remove the spatio-temporal imbalance in dynamic OD flow data, it is a simple but effective way to normalize the dynamic OD flow data. We demonstrate its effectiveness in Fig. 11.
By combining the above-mentioned epidemic data, external data and mobility flow data, we finally obtain all the data for the experiments. For the prefecture-level epidemic forecasting task of Japan, the input features of the 47 prefectures are generated as a (539, 47, 4) tensor, the static flow is a (47, 47) matrix, and the dynamic flow is a (539, 47, 47) tensor.
## VI Experiments
In this study, we use the collected and processed epidemic data, external data and mobility flow data from 2020/04/01 to 2021/09/21 for the epidemic forecasting task of the 47 prefectures in Japan. We conduct experiments on this task by comparing against three classes of baseline models (mechanistic models, general spatio-temporal deep learning models and GNN-based epidemic models). Furthermore, we use case studies to demonstrate and analyze the high reliability and interpretability of our model. In addition, we evaluate by experiment the effectiveness of the generated mobility data produced by our mobility generation method, which serves as the minimal-data initialization of adaptive graph learning in our model.
### _Setting_
The input time length \(T_{in}\) and output time length \(T_{out}\) are both set to 14 days, which means we use two weeks of historical observations to make a two-week prediction of daily confirmed cases. We split the data with a ratio of 6:1:1 into training/validation/test datasets, respectively. The fifth wave of infection in Japan is included in the test dataset to test the model performance in a real outbreak situation. During training, we use the curriculum learning strategy [27], which increases the prediction horizon by one every two epochs, starting from one-day-ahead prediction until reaching the output time length. The batch size is set to 32. The loss function is _MAE_ (Mean Absolute Error). Adam is used as the optimizer, with learning rate 1e-3 and weight decay 1e-8. Training is either early-stopped if the validation error does not decrease within 20 epochs or stopped after 300 epochs. PyTorch is used to implement our model, and the experiments are performed on a server with four 2080Ti GPUs.
Finally, we evaluate the model performance on 3-day, 7-day and 14-day-ahead predictions, as well as the overall 14-step prediction. Four metrics are used to quantify the performance: _RMSE_ (Root Mean Square Error; Eq. 20), _MAE_ (Mean Absolute Error; Eq. 21), _MAPE_ (Mean Absolute Percentage Error; Eq. 22) and _RAE_ (Relative Absolute Error; Eq. 23). To mitigate the influence of randomness, we perform 5 trials for each model and report the mean and 95% confidence interval of the results. The random seeds used are 0, 1, 2, 3, 4.
\[RMSE(\hat{\mathbf{Y}},\mathbf{Y})=\sqrt{\frac{1}{NT_{out}}\sum_{i=1}^{N}\sum_{j= 1}^{T_{out}}(\hat{\mathbf{Y}}_{i}^{t+j}-\mathbf{Y}_{i}^{t+j})^{2}} \tag{20}\]
\[MAE(\hat{\mathbf{Y}},\mathbf{Y})=\frac{1}{NT_{out}}\sum_{i=1}^{N}\sum_{j=1}^{T _{out}}|\hat{\mathbf{Y}}_{i}^{t+j}-\mathbf{Y}_{i}^{t+j}| \tag{21}\]
\[MAPE(\hat{\mathbf{Y}},\mathbf{Y})=\frac{100\%}{NT_{out}}\sum_{i=1}^{N}\sum_{j= 1}^{T_{out}}\Bigl{|}\frac{\hat{\mathbf{Y}}_{i}^{t+j}-\mathbf{Y}_{i}^{t+j}}{ \mathbf{Y}_{i}^{t+j}}\Bigr{|} \tag{22}\]
\[RAE(\hat{\mathbf{Y}},\mathbf{Y})=\frac{\sum_{i=1}^{N}\sum_{j=1}^{T_{out}}|\hat{\mathbf{Y}}_{i}^{t+j}-\mathbf{Y}_{i}^{t+j}|}{\sum_{i=1}^{N}\sum_{j=1}^{T_{out}}|\mathbf{Y}_{i}^{t+j}-\overline{\mathbf{Y}}_{1:N}^{t+1:t+T_{out}}|} \tag{23}\]
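For reference, Eqs. 20-23 can be computed as below. This NumPy sketch assumes \(\mathbf{Y}>0\) for MAPE and reads the RAE denominator as deviations from the mean of the ground truth:

```python
import numpy as np

def metrics(Y_hat, Y):
    """RMSE, MAE, MAPE, RAE over all regions and horizons (Eqs. 20-23)."""
    err = Y_hat - Y
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / Y))   # assumes Y > 0
    rae = np.sum(np.abs(err)) / np.sum(np.abs(Y - Y.mean()))
    return rmse, mae, mape, rae
```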
### _Evaluation_
We implement three classes of baseline models (mechanistic models, general spatio-temporal deep learning models and GNN-based epidemic models), compare them with our model, and evaluate the performance on the epidemic prediction task of the 47 prefectures in Japan.
* **Mechanistic Models:** **(1) SIR [4]**. The SIR model is one of the most basic compartmental models in epidemiology. We use optimized \(\beta\) and \(\gamma\) for each region to produce the prediction. **(2) SIR (Copy)**. Because of weekly periodicity, we copy the \(\beta\) and \(\gamma\) of the last week to produce the prediction. **(3) MetaSIR [3]**. The metapopulation SIR model considers the heterogeneity of sub-populations and models the interaction between them. We use the commuter survey data as \(\mathcal{H}\) and optimize \(\beta\) and \(\gamma\) for each region to produce the prediction. **(4) MetaSIR (Copy)**. We copy the \(\beta\) and \(\gamma\) of the last week to produce the prediction.
* **General Spatio-Temporal Deep Learning Models:** **(5) STGCN [23]**. STGCN is one of the earliest models applying GCN and TCN to spatio-temporal prediction. **(6) DCRNN [24]**. DCRNN proposes a variant of GCN, called diffusion convolution, and combines it with a gated recurrent unit (GRU) to build a spatio-temporal prediction model. **(7) GraphWaveNet [25]**. GraphWaveNet proposes an adaptive learnable graph and uses GCN and TCN to capture spatio-temporal dependency. **(8) MTGNN [27]**. MTGNN uses a graph learning module to learn spatial correlations and fuses different spatial hops and different TCN kernels to enhance model capacity. **(9) AGCRN [28]**. AGCRN uses GCN and GRU along with a graph learning module and a node-adaptive parameter learning module to capture spatio-temporal dependency.
* **GNN-based Epidemic Models:** **(10) CovidGNN[12]**. CovidGNN is one of the earliest GNN-based epidemic models. It embeds temporal features on each node and uses GCN with skip connections to capture spatial dependency. **(11) ColaGNN[11]**. ColaGNN uses the location-aware attention to extract spatial dependency and uses GCN to integrate the spatio-temporal information.
**Performance Evaluation.** In Table I, we compare the performance at three different horizons and the overall performance for multi-step prediction among the above-mentioned three classes of baseline models and the proposed MepoGNN with its two types of graph learning module. Generally, the spatio-temporal deep learning models and GNN-based epidemic models outperform the mechanistic models, especially at longer horizons. Among all baseline models, GraphWaveNet achieves the best performance. However, our two proposed MepoGNN models achieve very significant improvements over all baseline models. Between the two types of graph learning module, the dynamic one performs slightly better than the adaptive one. Fig. 12 compares the 7-day-ahead prediction results for Tokyo, Saitama, Chiba and Kanagawa of the top two baseline models and the MepoGNN model with dynamic graph learning. From the prediction results, GraphWaveNet and ColaGNN cannot produce accurate predictions for the high daily confirmed cases during the epidemic outbreak. This can be explained by the different distributions of daily confirmed cases in the training and test datasets: the test dataset covers the fifth epidemic wave in Japan, which was much more severe than the previous ones, and deep learning models have difficulty predicting high case counts that never occurred before the fifth wave. However, with the help of the metapopulation SIR module, our proposed MepoGNN model can handle this problem and make significantly better predictions for an unprecedented surge of cases. This capability is crucial for a trustworthy epidemic forecasting model.
**Ablation Study.** To demonstrate the effectiveness of the different components of our model, we conduct an ablation study for the MepoGNN models with the two graph learning modules, respectively. The variants are as follows:
**(1) w/o glm**: Remove the graph learning module of MepoGNN model;
**(2) w/o propagation**: Remove the metapopulation propagation from the metapopulation SIR module (which reduces the metapopulation SIR model to the SIR model);
**(3) w/o SIR**: Remove the metapopulation SIR module completely.
Table II demonstrates that all three components bring a significant performance boost to our model. In particular, the biggest performance drop occurs when removing the metapopulation SIR module, because that module is what enables MepoGNN to handle the unprecedented surge of cases.
Fig. 12: Predicted daily confirmed cases of Tokyo, Saitama, Chiba and Kanagawa with horizon=7.
### _Case Study_
The final output of the MepoGNN model is fully produced by the metapopulation SIR module, which brings significant interpretability to our model. We analyze the predicted parameters of the metapopulation SIR module to demonstrate this interpretability.
As shown in Fig. 13, we plot the weekly average of the predicted pseudo effective reproduction number7 (\(\hat{R}^{t}=\beta^{t}/\gamma^{t}\)) of Tokyo at the 7-day-ahead horizon in the validation and test datasets, and label major events and policy changes on the timeline. \(\hat{R}^{t}\) starts to increase when a state of emergency ends and starts to decrease when a state of emergency starts. \(\hat{R}^{t}\) increases rapidly during the Tokyo Olympics and decreases afterwards. This shows that the change of the predicted \(\hat{R}^{t}\) is consistent with real events and policy changes.
Footnote 7: \(\beta\) in metapopulation SIR model is not completely equivalent to \(\beta\) in SIR model, so we call \(\hat{R}^{t}=\beta^{t}/\gamma^{t}\) pseudo effective reproduction number.
Fig. 14 shows the learned time weight matrix of the dynamic graph learning module. The most significant time lag of the mobility effect on epidemic spread is 22 days. This result is consistent with a public health study [30] which states that the effective reproduction number significantly increased 3 weeks after nightlife-place mobility increased in Tokyo. Although the indicator used differs from ours, the mechanisms behind the time lag could be similar.
Fig. 15 shows the learned graph of the adaptive graph learning module and its differences from the commuter graph used as initialization. The learned adaptive mobility graph keeps the major structure of the commuter graph, and the minor changes from the initialization can reflect the pattern differences between regular commuting and spatial epidemic propagation.
### _Mobility Generation Test_
We introduce two types of graph learning module (dynamic and adaptive) in our MepoGNN model to handle different levels of accessibility of flow data. In the experiments of epidemic prediction for the 47 prefectures of Japan, we collect static flow data and dynamic flow data as inputs to the adaptive and dynamic graph learning modules, respectively. However, even the static flow data used to initialize the learnable graph of the adaptive graph learning module is not always accessible in the real world. So, we conduct an additional experiment to evaluate the performance of our MepoGNN model under a situation of worse data availability. Without using any mobility flow data, we use only the population of each prefecture and the distance between each pair of prefectures to generate the mobility graph for the adaptive graph learning module, and then compare the performance of initialization from static flow data against initialization from the generated mobility data.
Fig. 13: 7-day moving average of predicted \(\hat{R}^{t}\) of Tokyo with horizon=7.
Fig. 14: Learned time weight matrix in dynamic graph learning module.
Fig. 15: Learned adaptive mobility graph of the 47 prefectures of Japan with log transformation (left) and its difference with static commuter graph (right).
By using Eq. 16, we generate the relative mobility intensity between each pair of prefectures. In this experiment, we set the parameter \(\alpha\) to 1e-6, \(d\) to 1.7 and \(\epsilon\) to 9. The relative mobility intensity between prefectures \(n\) and \(m\) is then computed by:
\[\widetilde{mob}_{nm}=10^{-6}\times\frac{P_{n}P_{m}}{(dist_{nm})^{1.7}+9} \tag{24}\]
Then, we use the relative mobility intensity matrix \(\widetilde{\mathbf{M}}\in\mathbb{R}^{N\times N}\) to initialize the learnable graph \(\mathbf{G}\) in the adaptive graph learning module and run the experiments using the same settings as in Section VI-A. We compare the performance of the two types of initialization (static flow matrix versus generated relative mobility intensity matrix) for the adaptive graph learning module in Table III. The performance of the initialization using mobility generation is competitive with that of the initialization using static flow data. Since the performance gap is very small, the mobility generation method can be considered a reliable alternative to static flow data as the input to the adaptive graph learning module. Furthermore, with its minimal data requirement, this mobility generation method helps us handle the situation of lacking adequate data (i.e., flow data) and makes our model much more applicable.
Additionally, we visualize the generated mobility matrix and the graph learned from it in Fig. 16. The generated mobility matrix used as initialization is symmetric and relatively smooth, but the learned graph shows a directional and complex spatial structure. This demonstrates that the complex spatial correlation of epidemic propagation can be learned during training, even when initialized by a mobility generation method with minimal data requirements.
## VII Conclusion
Since the outbreak of COVID-19, epidemic forecasting has become a key research topic again. In this study, we propose a novel hybrid model called MepoGNN for epidemic forecasting that incorporates spatio-temporal graph neural networks and graph learning mechanisms into metapopulation SIR model. To the best of our knowledge, our model is the first hybrid model that couples metapopulation epidemic model with spatio-temporal graph neural networks. Our model can not only predict the number of confirmed cases but also explicitly learn the time/region-varying epidemiological parameters and the underlying epidemic propagation graph from heterogeneous data in an end-to-end manner.
Then, we collect and process the real data under COVID-19 from 2020/04/01 to 2021/09/21 in Japan, including epidemic data, external data and mobility flow data, for the epidemic forecasting task. We evaluate our model against three classes of baseline models (mechanistic models, GNN-based epidemic models and general spatio-temporal deep learning models) on this task. The results demonstrate that our model outperforms all baseline models and has the capability to handle an unprecedented surge of cases. We also visualize the learned parameters and epidemic propagation graph in case studies to illustrate the high interpretability of our model. Besides building and evaluating the proposed MepoGNN model, we additionally propose a mobility generation method with minimal data requirements to deal with situations in which mobility data is unavailable, and the experimental results demonstrate the effectiveness of using the generated mobility in our model.
This work demonstrates that combining deep learning models with domain knowledge can bring great benefits to the performance, reliability and interpretability of a model. A potential future direction is to explore this kind of combination in other fields (e.g., incorporating the domain knowledge of traffic into spatio-temporal traffic prediction models).
Fig. 16: Generated mobility matrix (left) and learned adaptive mobility graph using it as initialization (right) with log transformation.
Fig. 17: Predicted daily confirmed cases of Tokyo with horizon=7 on the extra test dataset from the sixth epidemic wave.
## VIII Limitation
One limitation is that our proposed method cannot perfectly handle highly extreme sudden-outbreak situations. Fig. 17 shows the predictions for the sixth epidemic wave in Tokyo, in which the number of confirmed cases suddenly surged to thousands from a sustained near-zero level in a very short period of time. The proposed method outperforms the baselines in this case, but it fails to produce accurate predictions at the beginning of this epidemic wave. Although difficult, collecting extra data and improving the model to predict extreme outbreaks would be an important research direction worth further investigation.
## Acknowledgments
This work was partially supported by JST SICORP Grant Number JPMJSC2104.
Part of this work was first published in [22], pp. 453-468, 2023, by Springer Nature.
|
2302.13417 | Training neural networks with structured noise improves classification
and generalization | The beneficial role of noise-injection in learning is a consolidated concept
in the field of artificial neural networks, suggesting that even biological
systems might take advantage of similar mechanisms to optimize their
performance. The training-with-noise algorithm proposed by Gardner and
collaborators is an emblematic example of a noise-injection procedure in
recurrent networks, which can be used to model biological neural systems. We
show how adding structure to noisy training data can substantially improve the
algorithm performance, allowing the network to approach perfect retrieval of
the memories and wide basins of attraction, even in the scenario of maximal
injected noise. We also prove that the so-called Hebbian Unlearning rule
coincides with the training-with-noise algorithm when noise is maximal and data
are stable fixed points of the network dynamics. | Marco Benedetti, Enrico Ventura | 2023-02-26T22:10:23Z | http://arxiv.org/abs/2302.13417v6 | # Training neural networks with structured noise improves classification and generalization
###### Abstract
The beneficial role of noise in learning is nowadays a consolidated concept in the field of artificial neural networks. The training-with-noise algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure in recurrent networks. We show how adding structure to noisy training data can substantially improve memory performance, allowing the network to approach perfect classification and maximal basins of attraction. We also prove that the so-called unlearning rule coincides with the training-with-noise algorithm when noise is maximal and data are fixed points of the network dynamics. Moreover, a sampling scheme for optimal noisy data is proposed and implemented to outperform both the training-with-noise and the unlearning procedures.
## I Introduction
Consider a fully connected network of \(N\) binary variables \(\{S_{i}=\pm 1\}\), \(i\in[1,..,N]\), linked by couplings \(J_{ij}\). The network is endowed with a dynamics
\[S_{i}(t+1)=\mathrm{sign}\left(\sum_{j=1}^{N}J_{ij}S_{j}(t)\right),\qquad i=1,..,N \tag{1}\]
which can be run either in parallel (i.e. _synchronously_) or in series (i.e. _asynchronously_ in a random order) over the \(i\) indices. We will mainly concentrate on asynchronous dynamics, in which case eq. (1) can only converge to fixed points, when they exist [1]. This kind of network can be used as an associative memory device, namely for reconstructing a number \(p\) of configurations \(\{\xi_{i}^{\mu}\}=\pm 1\), \(\mu\in[1,...,p]\), called _memories_, when initialized into a configuration similar enough to one of them. In this work, we will concentrate on i.i.d. memories, generated with a probability \(P(\xi_{i}^{\mu}=\pm 1)=1/2\). The number of memories is extensive \(p=\alpha N\), where \(\alpha\) is called _load_ of the network. The network is considered to perform well if such asymptotic states are similar enough to the memories. Whether this is the case, depends on the choice of the coupling matrix \(J\). To give an example, Hebb's (or Hebbian) learning prescription [2; 3]
\[J_{ij}^{H}=\frac{1}{N}\sum_{\mu=1}^{p}\xi_{i}^{\mu}\xi_{j}^{\mu} \tag{2}\]
is a rudimentary yet effective rule that allows memories to be retrieved up to a critical capacity \(\alpha_{c}^{H}\sim 0.14\)[4]. Notably, when \(\alpha<\alpha_{c}^{H}\) memories are not perfectly recalled, but only reproduced with a small number of errors. Several techniques have been developed to build better-performing coupling matrices, i.e. to reduce the retrieval error and increase the critical capacity as well as the size of the basins of attraction to which the memories belong. Three significant examples are the unlearning algorithm [5; 6; 7; 8; 9; 10], the linear perceptron algorithms [11; 12; 13], and the training-with-noise algorithm [14]. All these procedures iteratively modify the couplings on the basis of an initial choice of the couplings and a set of training data presented to the network. Each step of the unlearning algorithm does not explicitly use the memories, and only exploits the information encoded in Hebb's rule. On the other hand, the linear perceptron is trained with the pure memories until they become attractors of the network dynamics. In analogy with a noisy linear perceptron, Gardner and collaborators proposed the training-with-noise algorithm, managing to increase the size of the basins of attraction while preserving rather high accuracy in memory retrieval. In this context, _noise_ is defined as the degree of similarity between training data and memories: the higher the noise, the more the presented data differ from the memories.
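To fix ideas, a minimal NumPy sketch of Hebbian storage (Eq. 2) and asynchronous retrieval (Eq. 1) could look as follows (the sizes and the corruption level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 10
xi = rng.choice([-1, 1], size=(p, N))        # i.i.d. binary memories

J = (xi.T @ xi) / N                          # Hebb's rule, Eq. (2)
np.fill_diagonal(J, 0.0)

def relax(J, S, rng, max_sweeps=100):
    """Asynchronous dynamics, Eq. (1), iterated until a fixed point."""
    S = S.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(S)):    # random serial update order
            s_new = 1 if J[i] @ S >= 0 else -1
            if s_new != S[i]:
                S[i], changed = s_new, True
        if not changed:
            break                            # fixed point reached
    return S

# retrieval from a corrupted memory (expected overlap m0 = 0.8 with memory 0)
S0 = xi[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
m_f = (xi[0] @ relax(J, S0, rng)) / N
```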
The main goal of this work is to show that the training-with-noise algorithm can be considerably improved when the training data contain internal dependencies, which we call _structure_. Specifically, in the case of maximal noise, we derive a condition to be satisfied by the structure to drive learning towards the best memory performance. We also show that not all initial conditions are supplied with the best training data, and that previous Hebbian learning already provides a good noise structure. Remarkably, we find that the unlearning rule emerges from training-with-noise in the particular case of maximal noise and training data chosen to be fixed points of (1). We also develop an effective sampler for optimal noisy training data that can be used to reach higher critical capacities while preserving rather large basins of attraction of the memories.
The paper is organized as follows: in Sec. II we define some measures of network performance, in particular the concepts of classification and generalization. In Sec. III and Sec. IV we describe the unlearning and training-with-noise algorithms and their effects on the neural network. Subsequently, in Sec. V the role of noise structure in training data is discussed extensively, with particular attention to the cases of maximal and moderate noise injection. Finally, in Sec. VI we use insights from the previous sections to devise a performant sampling algorithm for training data.
## II Network performance
In this section we describe how to benchmark the neural network performance, specifically in terms of the dynamic stability achieved by the memory vectors and the ability of the system to retrieve blurry examples of the latter.
We define _classification_ as the capability to perfectly retrieve each memory when the dynamics is initialized to the memory itself. The fraction among all \(pN\) units \(\xi_{i}^{\mu}\) that are stable according to one step of the dynamics (1) will be called \(n_{SAT}\), in analogy with the celebrated perceptron optimization problem [11; 13]. Classification is reached when \(n_{SAT}=1\).
We define _generalization_ as the capability to retrieve the memory, or a configuration that is strongly related to it, by initializing the dynamics on a noise-corrupted version of the memory. This property of the neural network is related to the size of the basins of attraction to which the memories belong, and does not imply \(n_{SAT}=1\). A good measure of the performance in this sense is the _retrieval map_
\[m_{f}(m_{0}):=\overline{\Big{\langle}\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\nu}S _{i}^{\nu}(\infty)\Big{\rangle}}. \tag{3}\]
Here, \(\vec{S}^{\nu}(\infty)\) is the stable fixed point reached by the network, when it exists, when the dynamics is initialized in a configuration \(\vec{S}^{\nu}(0)\) having overlap \(m_{0}\) with a given memory \(\vec{\xi}^{\nu}\). The symbol \(\overline{\;\cdot\;}\) denotes the average over different realizations of the memories and \(\langle\cdot\rangle\) the average over different realizations of \(\vec{S}^{\nu}(0)\). In the classification regime, one obtains \(m_{f}=1\) when \(m_{0}=1\). The analytical computation of the retrieval map might be challenging for some networks; hence one can introduce another indicative observable for generalization, i.e. the _one-step retrieval map_ \(m_{1}(m_{0})\)[15], defined by applying a single step of _synchronous_ dynamics (1):
\[m_{1}(m_{0}):=\overline{\frac{1}{N}\sum_{i=1}^{N}\Big{\langle}\xi_{i}^{\nu} \ \mathrm{sign}\Big{(}\sum_{j=1}^{N}J_{ij}S_{j}^{\nu}(0)\Big{)}\Big{\rangle}}\;, \tag{4}\]
This work is devoted to exploring the characteristics of the training process leading to classification and good generalization.
## III Unlearning
Inspired by brain functioning during REM sleep [6], the unlearning algorithm [5; 6; 7; 8; 9] is a training procedure leading to classification and good generalization in a symmetric neural network. This section contains an introduction to the algorithm as well as a description of the resulting performance in terms of classification and generalization.
Training starts by initializing the connectivity matrix according to Hebb's rule eq. (2) (i.e. \(J^{0}=J^{H}\)). Then, the following procedure is iterated at each step \(d\):
1. Initialize the network dynamics (1) on a random neural state.
2. Run the asynchronous dynamics until convergence to a stable fixed point \(\vec{S}^{(d)}\).
3. Update couplings according to: \[\delta J_{ij}^{(d)}=-\frac{\lambda}{N}S_{i}^{(d)}S_{j}^{(d)}\qquad\quad J_{ii }=0\quad\;\forall i.\] (5)
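The three-step loop above can be sketched as follows, reusing the `relax` routine from the earlier sketch (the function name and stopping criterion are our own choices):

```python
def unlearning(J, n_steps, lam, rng):
    """Hebbian unlearning, Eq. (5): repeatedly dampen spurious attractors."""
    N = J.shape[0]
    for _ in range(n_steps):
        S = rng.choice([-1, 1], size=N)    # 1. random initial state
        S = relax(J, S, rng)               # 2. converge to a fixed point
        J -= (lam / N) * np.outer(S, S)    # 3. anti-Hebbian update
        np.fill_diagonal(J, 0.0)
    return J
```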
This algorithm was first introduced to prune the landscape of attractors of proliferating spurious states, i.e. fixed points of (1) not coinciding with the memories [1; 16]. Such spurious states are only weakly correlated with the memories. Even though this pruning action leads to the full stabilization of the memories, the exact mechanism behind this effect is not completely understood. The analysis in [9] gives deeper insights into both the classification and generalization capabilities of a network trained with unlearning.
### Classification
The classification property is achieved by running the algorithm with \(\alpha\leq\alpha_{c}^{U}\), where \(\alpha_{c}^{U}\simeq 0.6\). To quantify the classification performance of the network, one can track numerically the observable
\[\Delta_{\min}=\overline{\min_{i,\mu}\left(\Delta_{i}^{\mu}\right)}, \tag{6}\]
where the _stability_\(\Delta_{i}^{\mu}\) is defined by
\[\Delta_{i}^{\mu}=\frac{\xi_{i}^{\mu}}{\sqrt{N}\sigma_{i}}\sum_{j=1}^{N}J_{ij}\xi_{j}^{\mu},\qquad\sigma_{i}=\sqrt{\sum_{j=1}^{N}J_{ij}^{2}/N}. \tag{7}\]
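The observables of Eqs. 6-7 are straightforward to evaluate numerically; a possible NumPy sketch:

```python
import numpy as np

def min_stability(J, xi):
    """Delta_min of Eq. (6), from the stabilities of Eq. (7)."""
    N = J.shape[0]
    sigma = np.sqrt(np.sum(J ** 2, axis=1) / N)   # per-row coupling scale
    fields = xi @ J.T                             # (p, N) local fields on memories
    Delta = xi * fields / (np.sqrt(N) * sigma)    # stabilities Delta_i^mu
    return Delta.min()
```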
As soon as \(\Delta_{\min}>0\), memories become fixed points of the dynamics [11]. Fig. 1 reports the evolution of \(\Delta_{\min}\) as a function of the number of performed updates of \(J\), for \(\alpha=0.3\). The number of iterations \(d=d_{in}\) (indicated with a red circle) marks when \(\Delta_{\min}\) crosses \(0\). At this point, all the memories are fixed points of the dynamics. Two other points, \(d=(d_{top},d_{fin})\), are reported in the plot, even though [9] proved that \(d_{in}\) shows the best generalization performance in the large \(N\) limit. The scaling of \((d_{in},d_{top},d_{fin})\) is also described in [9]. The progressive decay of \(\Delta_{\min}\), which becomes negative again after \(d=d_{fin}\), is due to the vanishing of the first two moments of the couplings.
### Generalization
The unlearning algorithm creates large basins of attraction around the memories when \(\alpha<\alpha_{c}^{U}\). Fig. 2 reports the retrieval map for \(N=100\) and \(\alpha=0.3\). The curves relative to the SVM and to unlearning at \(d=d_{in}\) coincide. This observation is consistent with estimates for the large \(N\) limit performed in [9]. The SVM is trained with no symmetry constraints, though it superimposes with the symmetric version discussed in [9; 12; 17]: this is probably due to the high degree of symmetry displayed by SVMs. Fig. 2 also shows that the curves for \(d=d_{in}\) and \(d=d_{top}\) overlap significantly. This is certainly due to finite-size effects, since [9] clearly shows the basins progressively deteriorating, in the large \(N\) limit, along the interval \(d\in[d_{in},d_{fin}]\).
One open issue is why the optimum is found when \(\Delta_{\min}=0\) (i.e. \(d=d_{in}\)) and not when \(\Delta_{\min}\) is maximum (i.e. \(d=d_{top}\)). A second question regards why unlearning, and the choice of using fixed points as training data, should lead to maximally large basins of attraction.
## IV Training with noise
The concept of learning from noisy examples, introduced for the first time in [18], is at the basis of the work of Gardner and co-workers [14], a pioneering attempt to increase and control generalization through the introduction of noise during the training phase of recurrent neural networks. Here, we report the algorithm and characterize, for the first time, its performance on fully connected neural networks.
The training-with-noise algorithm [14] consists in starting from any initial coupling matrix \(J_{ij}^{0}\) with null entries on the diagonal, and updating recursively the couplings according to
\[\delta J_{ij}^{(d)}=\frac{\lambda}{N}\epsilon_{i}^{\mu_{d}}\xi_{i}^{\mu_{d}}S _{j}^{\mu_{d}},\hskip 28.452756pt\delta J_{ii}^{(d)}=0\hskip 8.535827pt\forall i, \tag{8}\]
where \(\lambda\) is a small learning rate, \(\mu_{d}\in[1,...,p]\) is a randomly chosen memory index and the mask \(\epsilon_{i}^{\mu_{d}}\) is defined as
\[\epsilon_{i}^{\mu_{d}}=\frac{1}{2}\Big{(}1-\text{sign}\Big{(}\xi_{i}^{\mu_{d} }\sum_{k=1}^{N}J_{ik}S_{k}^{\mu_{d}}\Big{)}\Big{)}. \tag{9}\]
In this setting, \(\vec{S}^{\mu_{d}}\) is a noisy memory, generated according to
\[P(S_{i}^{\mu_{d}}=x)=\frac{(1+m_{t})}{2}\delta(x-\xi_{i}^{\mu_{d}})+\frac{(1-m _{t})}{2}\delta(x+\xi_{i}^{\mu_{d}}). \tag{10}\]
The _training overlap_\(m_{t}\) is a control parameter for the level of _noise_ injected during training, corresponding to the expected overlap between \(\vec{S}^{\mu_{d}}\) and \(\vec{\xi}^{\mu_{d}}\), i.e.
\[m_{t}=\frac{1}{N}\sum_{j=1}^{N}\xi_{j}^{\mu_{d}}S_{j}^{\mu_{d}}+O\Big{(}\frac {1}{\sqrt{N}}\Big{)}. \tag{11}\]
Figure 1: The minimum stability \(\Delta_{\min}\) as a function of the normalized algorithm time. The threshold \(\Delta=0\) is indicated with the _gray_ dotted line. Three relevant amount of iterations are indicated by the colored circles: \(d=d_{in}\) in _red_, \(d=d_{top}\) in _orange_, \(d=d_{fin}\) in _yellow_. All measures are averaged over 50 realizations of the network. Choice of the parameters: \(N=100\), \(\alpha=0.3\), \(\lambda=10^{-2}\).
Figure 2: Retrieval map \(m_{f}(m_{0})\) for a SVM and the unlearning algorithm at the three relevant steps indicated in fig. 1. All measures are averaged over 10 realizations of the network. Choice of the parameters: \(N=100\), \(\alpha=0.3\), \(\lambda=10^{-2}\) for unlearning and \(\lambda=1\) for the SVM.
Each noisy configuration can be expressed in terms of a vector of _noise units_\(\vec{\chi}\), such that
\[S_{i}^{\mu_{d}}=\chi_{i}^{\mu_{d}}\xi_{i}^{\mu_{d}}. \tag{12}\]
In this setting, noise units are i.i.d. variables, distributed according to
\[P(\chi_{i}^{\mu_{d}}=x)=\frac{(1+m_{t})}{2}\delta(x{-}1){+}\frac{(1-m_{t})}{2} \delta(x{+}1). \tag{13}\]
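A minimal sketch of one iteration of eqs. (8)-(10) could read as follows; the learning rate and the random seed are illustrative choices, not values taken from our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def noisy_memory(xi_mu, m_t):
    """Draw S^mu according to eq. (10): each site agrees with the memory
    with probability (1 + m_t) / 2."""
    chi = np.where(rng.random(xi_mu.size) < (1 + m_t) / 2, 1.0, -1.0)
    return chi * xi_mu

def twn_step(J, xi, m_t, lam):
    """One asymmetric training-with-noise update, eqs. (8)-(9).
    J must be a float array with zero diagonal; xi has shape (p, N)."""
    N = J.shape[0]
    mu = rng.integers(xi.shape[0])                  # random memory index mu_d
    S = noisy_memory(xi[mu], m_t)
    eps = 0.5 * (1.0 - np.sign(xi[mu] * (J @ S)))   # mask of eq. (9)
    J += (lam / N) * np.outer(eps * xi[mu], S)
    np.fill_diagonal(J, 0.0)                        # enforce delta J_ii = 0
    return J
```

Iterating `twn_step` with a small `lam` reproduces the dynamics analyzed in the following.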
The algorithm would converge when every configuration with overlap \(m_{t}\) with a memory generates on each site a local field aligned with the memory itself. Let us define the function
\[\mathcal{L}(m,J)=-\frac{1}{\alpha N^{2}}\sum_{i,\mu}^{N,p}\text{erf}\left( \frac{m\Delta_{i}^{\mu}}{\sqrt{2(1-m^{2})}}\right). \tag{14}\]
Assuming that stabilities are self-averaging quantities, it can be proven that \(-\mathcal{L}(m=m_{0},J)\) tends to \(m_{1}(m_{0})\) when \(N\to\infty\)[19; 15]. Each step of the training procedure (8) leads to a reduction of \(\mathcal{L}(m,J)\), for any value of \(m\) and \(m_{t}\). In fact, considering a small variation of the stabilities induced by the algorithm update
\[\Delta_{i}^{\mu}\to\Delta_{i}^{\mu}+\delta\Delta_{i}^{\mu},\]
and performing a Taylor expansion of (14) at first order in \(O(N^{-1/2})\), one obtains (see Appendix A.1)
\[\mathcal{L}^{{}^{\prime}}=\mathcal{L}+\sum_{i=1}^{N}\delta\mathcal{L}_{i} \tag{15}\]
where
\[\delta\mathcal{L}_{i}=-\frac{\epsilon_{i}^{\mu_{d}}\lambda}{\alpha\sigma_{i}N ^{5/2}}\frac{\sqrt{2}m\cdot m_{t}}{\sqrt{\pi(1-m^{2})}}\exp{\left(-\frac{m^{2 }\Delta_{i}^{\mu_{d}^{2}}}{2(1-m^{2})}\right)}. \tag{16}\]
Hence, \(\delta\mathcal{L}_{i}\) is non-positive when \(\frac{\lambda}{N}\) is small, so that the Taylor expansion is justified. Now, the couplings' variance \(\sigma_{i}\) (see eq. (7)) varies over time, and numerics suggest it is slowly decreasing. As a result, the expansion performed to determine the variation of \(\mathcal{L}\) might not be appropriate after a certain number of steps, leading to a non-monotonic trend of the function. This inconvenience can be overcome by rescaling the learning rate \(\lambda\) into \(\lambda_{i}=\lambda\cdot\sigma_{i}\) at each iteration.
Wong and Sherrington [20; 21] propose an elegant analysis of a network designed to optimize \(\mathcal{L}(m,J)\), i.e. whose couplings \(J\) correspond to the global minimum of \(\mathcal{L}(m,J)\). Some of their findings, relevant to this work, are:
1. For any \(m_{0}\), the maximum value of \(m_{1}(m_{0})\) is obtained if \(m=m_{0}\).
2. When \(m\to 1^{-}\), the minimization of \(\mathcal{L}(m,J)\) trains a linear perceptron with \(N\)-dimensional input/output layers (as in [12]) and maximal stability. This network will be referred to as a Support Vector Machine (SVM) [22].
3. When \(m\to 0^{+}\), the minimization of \(\mathcal{L}(m,J)\) leads to a Hebbian coupling matrix \(J_{ij}\propto J_{ij}^{H}\).
Numerically, we find that iterating (8) with a given value of \(m_{t}\) drives \(\mathcal{L}(m_{t},J)\) to its theoretical absolute minimum computed in [21], as reported in fig. 3 for one choice of \(N,\alpha\) and \(m\). This means that the performance of the training-with-noise algorithm can be completely described in the analytical framework of [21] and, in particular, all the above points hold. We are going to call \(\mathcal{L}(m_{t},J)\) loss function of the training-with-noise algorithm.
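For reference, the loss of eq. (14) can be evaluated directly from the stabilities; the sketch below reuses the `stabilities` helper defined earlier and assumes `scipy` is available.

```python
import numpy as np
from scipy.special import erf

def loss(J, xi, m):
    """Loss of eq. (14); stabilities() is the helper sketched above."""
    N = J.shape[0]
    p = xi.shape[0]
    alpha = p / N
    D = stabilities(J, xi)
    return -np.sum(erf(m * D / np.sqrt(2 * (1 - m ** 2)))) / (alpha * N ** 2)
```

Monitoring `loss(J, xi, m_t)` during training reproduces curves of the type shown in fig. 3.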
### Classification
The computations contained in [21], and summarized in Appendix B, are now used to calculate \(n_{SAT}\) as a function
Figure 3: The _blueish_ lines in the main plot report the function \(\mathcal{L}(m=0.5,J)\) for different training overlaps as functions of the number of algorithm steps \(d\). The _dotted_ line represents the theoretical minimum value from [21]. The learning strength \(\lambda\) has been rescaled by the standard deviation of the couplings as described in the text. The subplot reports the case \(m_{t}=0.5\) when the learning strength is not rescaled: \(\mathcal{L}\) is in _blue_, while a measure of the standard deviation of the couplings, defined as \(\sigma=\frac{1}{N}\sum\sigma_{i}\), is reported in _red_. The value \(\lambda\cdot N^{-1/2}\) is also depicted in _light gray_ to properly signal the moment when equation (16) loses its validity. All measures are averaged over 5 realizations of the couplings \(J\). Choice of the parameters: \(N=100\), \(\alpha=0.3\), \(\lambda=1\); the initial couplings are Gaussian with zero mean, unitary variance and \(J_{ii}^{(0)}=0\ \forall i\).
of \(m_{t}\) and \(\alpha\) in the training-with-noise problem.
The distribution of the stabilities in the trained network (see equation (4)) always has a tail at negative values, except in the trivial case of \(m_{t}=1\), where \(n_{SAT}=1\) for \(\alpha\leq 2\). This implies that classification is never reached by the training-with-noise algorithm. Nevertheless, the values of \(n_{SAT}\) remain close to unity for relatively high values of \(m_{t}\) and relatively low values of \(\alpha\) (see fig. 4).
### Generalization
The generalization properties of a network trained with the training-with-noise algorithm are now discussed. The color map in fig. 5 reports the estimate of the retrieval map \(m_{f}\) at \(m_{0}=1\) in the limit \(N\to\infty\). Notice the emergence of two phases from the map: the _non-retrieval_ phase, where \(m_{f}(1)\) is mostly smaller than \(0.5\) and memories are far from the center of the basin; and the _retrieval_ phase, where \(m_{f}(1)\) is higher, mostly close to \(1\), i.e. the memory is very close to the center of the basin. Such a separation is typical of fully connected neural networks [4], differently from sparse neural networks [21; 23], where the topology of the basins is more varied yet harder to measure experimentally. In Appendix C we propose an empirical criterion to predict the transition between these two regimes. The resulting critical line is reported in fig. 5.
Restricting ourselves to the retrieval regime, we employ a procedure described in Appendix C to compute the typical size of the basins of attraction of networks trained as SVMs and with the training-with-noise algorithm. White dots in fig. 5 signal the combinations of \((m_{t},\alpha)\) where the basins of attraction found by the training-with-noise algorithm are larger than the ones obtained by a SVM at the same value of \(\alpha\). We want to stress the importance of a comparison between training-with-noise and the corresponding SVM, since numerical investigations have shown the latter to achieve very large basins of attraction, presumably due to the maximization of the stabilities [9; 17]. One can conclude that for most of the retrieval region the generalization performance is worse than that of the SVM, which maintains larger basins of attraction; on the other hand, at higher values of \(\alpha\) the trained-with-noise network sacrifices its classification property to achieve a basin that appears wider than the SVM one. In conclusion, the training-with-noise algorithm never outperforms the corresponding SVM without reducing its classification capabilities.
## V Training with structured noise
In this section, we study how adding additional constraints on the configurations used in the training-with-noise algorithm affects its performance. This amounts to imposing specific internal dependencies among the noise units \(\vec{\chi}\), which are no longer i.i.d. random variables, as they were in [14]. When this choice is appropriate, the algorithm can lead to a classification regime and a high degree of generalization, resembling a SVM. We first evaluate the maximal noise case, i.e. \(m_{t}=0^{+}\), and derive a condition to be satisfied by noisy data-points to approach the best memory performance. In light of such
Figure 4: \(n_{SAT}\) as a function of \(m_{t}\) and \(\alpha\). Warmer shades of colour are associated with higher classification performance.
Figure 5: \(m_{f}(1)\) as a function of \(m_{t}\) and \(\alpha\). Warmer shades of colour are associated with higher retrieval performance. The _black dashed_ line represents the boundary of the retrieval regime according to the criterion in Appendix C; _white_ dots signal the points where the basins of attraction to which memories belong are larger than the ones obtained from a SVM at \(N=200\).
condition, we analyze the performance of fixed points of the dynamics as training data. Finally, we run training-with-noise with \(m_{t}>0\) using fixed points of the dynamics as training data, after initializing the couplings according to Hebb's rule. We find that the structure of noise is relevant also in this case. It will be helpful for our purposes to implement a symmetric version of rule (8), i.e.
\[\delta J_{ij}^{(d)}=\frac{\lambda}{N}\left(\epsilon_{i}^{\mu_{d}}\xi_{i}^{\mu_{ d}}S_{j}^{\mu_{d}}+\epsilon_{j}^{\mu_{d}}\xi_{j}^{\mu_{d}}S_{i}^{\mu_{d}}\right) \tag{17}\]
Equation (17) can be rewritten explicitly making use of (9), leading to
\[\begin{split}\delta J_{ij}^{(d)}=&\frac{\lambda}{2 N}\big{(}\xi_{i}^{\mu_{d}}S_{j}^{\mu_{d}}+S_{i}^{\mu_{d}}\xi_{j}^{\mu_{d}}\big{)}+ \\ -&\frac{\lambda}{2N}\big{(}S_{i}^{1,\mu_{d}}S_{j}^{ \mu_{d}}+S_{i}^{\mu_{d}}S_{j}^{1,\mu_{d}}\big{)}\end{split} \tag{18}\]
where \(S_{i}^{1,\mu_{d}}=\text{sign}\left(\sum_{k=1}^{N}J_{ik}S_{k}^{\mu_{d}}\right)\). The total update to the coupling at time \(D\) can be decomposed as a sum of two contributions
\[\Delta J_{ij}(D)=\Delta J_{ij}^{N}(D)+\Delta J_{ij}^{U}(D). \tag{19}\]
The first term on right-hand side, which will be referred to as _noise_ contribution, is expressed in terms of noise units as
\[\Delta J_{ij}^{N}(D)=\frac{\lambda}{2N}\sum_{d=1}^{D}\xi_{i}^{\mu_{d}}\xi_{j} ^{\mu_{d}}\chi_{j}^{\mu_{d}}+\frac{\lambda}{2N}\sum_{d=1}^{D}\xi_{j}^{\mu_{d}} \xi_{i}^{\mu_{d}}\chi_{i}^{\mu_{d}}, \tag{20}\]
while the second term, which will be referred to as _unlearning_ contribution, is given by
\[\Delta J_{ij}^{U}(D)=-\frac{\lambda}{2N}\sum_{d=1}^{D}\left(S_{i}^{1,\mu_{d}}S _{j}^{\mu_{d}}+S_{i}^{\mu_{d}}S_{j}^{1,\mu_{d}}\right). \tag{21}\]
### \(m_{t}=0^{+}\)
Training configurations with maximal noise are now studied. First, we apply eq. (19) to compute the variation of \(\mathcal{L}(m,J)\) during training, showing which properties of the training configurations lead to a good network performance. Then we numerically probe two types of attractor landscapes to study the distribution of the good training data. Finally, we consider fixed points of the dynamics (1) as training configurations, showing a connection between the training-with-noise and unlearning algorithms.
#### The optimal structure of noise
We now want to study what kind of noise structure is favorable to achieving classification and a good degree of generalization. Section IV has shown that the training-with-noise algorithm on fully connected networks never outperforms a SVM without reducing its classification properties. In addition to this, we know from numerics [9; 17] that SVMs, as maximally stable perceptrons, maximize the size of the basins of attraction of the memories, as memories are also attractors of the dynamics. We hence consider SVMs as optimal recurrent networks in terms of both classification and generalization. As a consequence, we just need to approach the global minimum of \(\mathcal{L}(m=1^{-},J)\) to train a well performing neural network. During the rest of this work we will draw an important distinction between the two variables \(m\) and \(m_{t}\): while the former controls the final result of the optimization procedure, the latter is a fixed level of noise, \(m_{t}=0^{+}\) in the current case. According to the training-with-noise algorithm, at \(m_{t}=0^{+}\) the variation of \(\mathcal{L}(m,J)\) is \(\delta\mathcal{L}=\delta\mathcal{L}_{N}+\delta\mathcal{L}_{U}\), where \(\delta\mathcal{L}_{N}\) cancels in time, and the only relevant contribution is
\[\delta\mathcal{L}_{U}\propto\frac{m}{\sqrt{2(1-m^{2})}}\sum_{i,\mu}^{N,p} \omega_{i}^{\mu}\exp{\left(-\frac{m^{2}\Delta_{i}^{\mu^{2}}}{2(1-m^{2})} \right)}, \tag{22}\]
where
\[\omega_{i}^{\mu}=\frac{1}{2\sigma_{i}}\left(m_{\mu}\chi_{i}^{1,\mu}+m_{1,\mu} \chi_{i}^{\mu}\right), \tag{23}\]
with
\[\chi_{i}^{\mu}=\xi_{i}^{\mu}S_{i}^{\mu_{d}}\hskip 28.452756pt\chi_{i}^{1,\mu}=\xi _{i}^{\mu}S_{i}^{1,\mu_{d}}, \tag{24}\]
and
\[m_{\mu}=\frac{1}{N}\sum_{j=1}^{N}S_{j}^{\mu_{d}}\xi_{j}^{\mu}\hskip 28.452756ptm_{1, \mu}=\frac{1}{N}\sum_{j=1}^{N}S_{j}^{1,\mu_{d}}\xi_{j}^{\mu}. \tag{25}\]
The derivation of (22) is reported in Appendix A.2. For the case of fixed points, we have \(\omega_{i}^{\mu}=m_{\mu}\chi_{i}^{\mu}\), consistently with the same observable defined in [9]. Generally speaking, \(\omega_{i}^{\mu}\) can be a function of \(\Delta_{i}^{\mu}\). Since stabilities are self-averaging quantities, by taking \(N,p\gg 1\) we can rewrite equation (22) as
\[\delta\mathcal{L}_{U}\propto\frac{m}{\sqrt{2\pi(1-m^{2})}}\int_{-\infty}^{+ \infty}d\Delta\rho(\Delta)\omega(\Delta)e^{-\frac{m^{2}\Delta^{2}}{2(1-m^{2})}}, \tag{26}\]
where \(\rho(\Delta)\) is the true probability density function of \(\Delta\) at a given step of the algorithm. When \(m\to 1^{-}\) the Gaussian contained in the integral becomes very peaked around \(0\). Since \(\rho\) is a strictly positive function, and we want \(\delta\mathcal{L}\) to decrease, we need
\[\omega(\Delta)<0\hskip 5.690551pt\text{when}\hskip 5.690551pt|\Delta|<\epsilon, \hskip 5.690551pt\epsilon=0^{+} \tag{27}\]
Hence, a coupling update improves the memory performance when condition (27) is satisfied. As a consequence, the noise units are internally constrained by (27), in contrast with the standard training-with-noise algorithm. The more negative \(\omega\) is when \(\Delta\sim 0\), the more effective a given data-point is at approaching the SVM performance. One should also bear in mind that training is a dynamic process: to reduce \(\mathcal{L}(m=1^{-},J)\), condition (27) should hold during training.
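The quantities of eqs. (23)-(25) are straightforward to evaluate numerically for a given training configuration; a sketch follows, again assuming \(\sigma_{i}\) to be the row norm of \(J\) as in the earlier snippets.

```python
import numpy as np

def omega(J, xi, S):
    """omega_i^mu of eq. (23) for a single training configuration S,
    with chi, m_mu and m_{1,mu} as in eqs. (24)-(25)."""
    N = S.size
    sigma = np.sqrt((J ** 2).sum(axis=1))   # same sigma_i as above (assumed)
    S1 = np.sign(J @ S)                      # one-step update of S
    chi = xi * S                             # chi_i^mu, shape (p, N)
    chi1 = xi * S1                           # chi_i^{1,mu}
    m_mu = (xi @ S) / N                      # overlaps with S, shape (p,)
    m1_mu = (xi @ S1) / N                    # overlaps with S^1
    return (m_mu[:, None] * chi1 + m1_mu[:, None] * chi) / (2 * sigma)
```

Scatter plots of `omega(J, xi, S)` against the corresponding stabilities give density maps of the type shown in figs. 6 and 7.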
#### About the initialization
We are now interested in studying how the initialization of the couplings influences the choice of training configurations, concentrating in particular on the function \(\omega(\Delta)\). Computing \(\omega\) analytically is a challenging task, but we will introduce some related empirical quantities, which can be useful to probe the structure of noise in the configurations. To do so, we first sample training configurations from a network initialized with the Hebbian rule, according to a Monte Carlo dynamics at temperature \(T\). Temperature acts as a control parameter: when \(T=0\), training configurations are stable fixed points of eq. (1), as in standard unlearning. Higher values of \(T\) progressively reduce the structure of noise in the training configurations, and in the limit \(T\to\infty\) the training configurations are the same as in the training-with-noise algorithm. The Monte Carlo of our choice is of the Kawasaki kind [24], to ensure that all training configurations are at the prescribed overlap \(m_{t}=0^{+}\). Fig. 6 presents the results at four different temperatures. Each panel shows \(\omega\) as a function of \(\Delta\). Data points are collected over 15 realizations of the network, then plotted and smoothed to create a density map. We are interested in the _typical_ behavior of \(\omega(\Delta)\) in the neighborhood of \(\Delta=0\), which can be estimated by a linear fit of the data. We consider the intercept of the best-fit line as an _indicator_ \(\omega_{emp}(0)\) of the mean value of \(\omega(0)\) over the noise.
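A sketch of this sampling procedure is reported below; the Hopfield energy and the Metropolis acceptance rule are the natural choices for this Kawasaki dynamics, though the exact protocol details (number of moves, local computation of the energy difference) are implementation assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)  # illustrative seed

def hopfield_energy(J, S):
    return -0.5 * S @ (J @ S)

def kawasaki_sample(J, xi_mu, m_t, T, n_moves=10_000):
    """Metropolis sampling at temperature T with Kawasaki (exchange) moves
    on the noise units chi, which conserve the overlap m_t by construction:
    each move flips one chi = +1 site and one chi = -1 site."""
    N = xi_mu.size
    n_plus = int(round(N * (1 + m_t) / 2))
    chi = np.concatenate([np.ones(n_plus), -np.ones(N - n_plus)])
    rng.shuffle(chi)
    S = chi * xi_mu
    E = hopfield_energy(J, S)
    for _ in range(n_moves):
        i = rng.choice(np.flatnonzero(chi > 0))
        j = rng.choice(np.flatnonzero(chi < 0))
        S_new = S.copy()
        S_new[[i, j]] *= -1                   # overlap-conserving exchange
        dE = hopfield_energy(J, S_new) - E    # (could be computed locally)
        if dE <= 0 or (T > 0 and rng.random() < np.exp(-dE / T)):
            S, E = S_new, E + dE
            chi[[i, j]] *= -1
    return S
```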
We find that the lower the temperature, the more the sampled configurations favor both classification and generalization: the line is negative over a larger interval centered around \(\Delta=0\). As the temperature becomes too high, \(\omega_{emp}(0)\) gets closer to zero, suggesting low quality in terms of training performance.
Another interesting aspect that emerged from the analysis is that, calling \(a\) the intercept of the line and \(b\) its slope, the position of the right extremum of the negative band, \(\frac{|a|}{b}\), is independent of the system size \(N\), implying that finite size effects are not strong for this kind of measure. This is likely because both the mean of the distribution of the points along \(\omega\) and their dispersion, i.e. how far they span the y-axis, decrease as \(O(N^{-1/2})\).
The same analysis is repeated in the case of a random initialization. We chose the Sherrington-Kirkpatrick (SK) model [25] as a case study. Panels (a) and (b) in fig. 7 report the smoothed distribution of \(\omega\) versus \(\Delta\), showing a different scenario with respect to the Hebbian one. The distribution looks anisotropic, as in the Hebbian case, yet the stabilities are centered Gaussians, so \(\omega_{emp}(0)\) is positive. In particular, things appear to improve when \(T\) increases, in contrast with the previous case study, though in accordance with the Hebbian limit of the training-with-noise algorithm explained in Section IV. Panel (c) displays more clearly the mutual dependence between \(\omega\) and the stabilities \(\Delta\) by reporting the Pearson coefficient between these two quantities at the various \(T\): both the Hebbian and the SK initializations show some mutual dependence, but the distribution in the Hebbian landscape is far more deformed. Furthermore, panel (d) shows \(\omega_{emp}(0)\) in both cases. This quantity is very well correlated with the Pearson coefficient: whereas with the Hebbian initialization \(\omega_{emp}(0)\) remains negative and reaches the lowest values at low temperatures, the random case shows an opposite trend, where the estimated \(\omega_{emp}(0)\) is rather positive. Notice, from panels (c) and (d), the existence of an optimum which does not coincide with the stable fixed points of the dynamics (i.e. \(T=0\)): this aspect will be explored further in Sec. VI.
#### Training with stable fixed points
Let us now consider the instance of a set of noisy states that are also fixed points of the dynamics with a training overlap \(m_{t}=0^{+}\). In this case, one has
\[\Delta J^{N}_{ij}(D)=0^{+}+O\left(\sqrt{\frac{\lambda}{N}}\right). \tag{28}\]
On the other hand, the unlearning contribution to the evolution of the couplings is
\[\Delta J^{U}_{ij}(D)=-\frac{\lambda}{N}\sum_{d=1}^{D}S^{\mu_{d}}_{i}S^{\mu_{d}}_{j}, \tag{29}\]
which is the classic unlearning update rule. As a result, when \(\lambda/N\to 0\), the training-with-noise algorithm and the unlearning algorithm converge to the same update rule for the couplings when stable fixed points of the dynamics are used in the training. The same argument can be applied to the original asymmetric rule (8); however, asymmetric networks may have no stable fixed points of (1) that can be learned.
We now perform a numerical test of the argument above, in the case of a symmetric connectivity matrix. At each step of the algorithm, the network is initialized with an initial overlap contained in \((0,N^{-1/2})\) with one memory \(\vec{\xi}^{\mu_{d}}\). Then, the asynchronous dynamics (1) is run until convergence, and the final overlap \(m_{t}\) is measured. If \(m_{t}\in(0,N^{-1/2})\), we use the sampled configuration for training; otherwise the process is repeated. Typically, an initial overlap equal to \(0^{+}\) implies a similar order of magnitude for the final overlap, hence no reiteration is needed.
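A minimal sketch of this fixed-point sampling could read as follows; `max_sweeps` is an illustrative safeguard, not a parameter of our simulations.

```python
import numpy as np

rng = np.random.default_rng(2)  # illustrative seed

def async_fixed_point(J, S0, max_sweeps=1000):
    """Run asynchronous zero-temperature dynamics (eq. (1)) until a
    fixed point is reached (or max_sweeps is exhausted)."""
    S = S0.copy()
    N = S.size
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(N):
            h = J[i] @ S
            s_new = 1.0 if h > 0 else -1.0 if h < 0 else S[i]
            if s_new != S[i]:
                S[i] = s_new
                changed = True
        if not changed:
            break
    return S

def sample_fixed_point(J, xi_mu):
    """Sample a stable fixed point whose overlap with memory xi_mu lies
    in (0, 1/sqrt(N)), following the procedure described in the text."""
    N = xi_mu.size
    while True:
        S0 = rng.choice([-1.0, 1.0], size=N)
        if not 0 < S0 @ xi_mu / N < 1 / np.sqrt(N):
            continue
        S = async_fixed_point(J, S0)
        if 0 < S @ xi_mu / N < 1 / np.sqrt(N):
            return S
```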
The algorithm (17) is repeated for \(D^{*}=O(N/\lambda)\) steps. The order of magnitude of \(\Delta J^{U}_{ij}(D)\) is supposed to be the same as that of \(J^{(0)}_{ij}\), in order to see significant modifications to the initial connectivity matrix. The network is initialized according to Hebb's rule (2), i.e. \(J^{(0)}_{ij}=O(N^{-1/2})\), which implies \(\Delta J^{U}_{ij}(D)=O(N^{-1/2})\) at leading order. The contributions U and N are compared by computing the norm of the corresponding \(\Delta J\) matrix and evaluating the ratio \(|\Delta J^{U}|/|\Delta J^{N}|\). From our previous considerations we expect \(|\Delta J^{U}|/|\Delta J^{N}|\) to be linear in \(\lambda^{-1/2}\) when corrections vanish. Results are reported in fig. 8:
\(|\Delta J^{U}|/|\Delta J^{N}|\) grows when \(N\) increases and \(\lambda\) decreases, according to the scaling relation predicted by our argument. In addition to this, the curves collapse onto the expected line when \(\lambda\to 0\) and \(N\to\infty\).
We also measured \(\Delta_{\min}\) at its maximum over the course of the algorithm (as described in IV.1). Results are reported in fig. 9. The values of \(\Delta_{\min}\) produced by training-with-noise and unlearning are found to coincide when \(\lambda\) is sufficiently low. Moreover, the number of steps necessary to reach the maximum is the same for both algorithms, confirming that the couplings are transforming in the same way. This last aspect is corroborated by the subplot in fig. 9, representing the set of \(J_{ij}\) obtained with the traditional unlearning algorithm as a function of the one resulting from the training-with-noise algorithm, for one realization of the network. The strong correlation is evident, as predicted by our pseudo-analytical arguments.
One can also study how \(\omega(\Delta)\) evolves during the training process. Fig. 10(a) shows the value of \(\omega_{emp}(0)\) at different time steps of the training process, for different values of \(\alpha\). The colored halo around the line of data points represents the width of the band centered in \(0\) where the best-fit line assumes negative values.
Figure 6: Distribution of \(\omega_{i}^{\mu}\) as a function of \(\Delta_{i}^{\mu}\) for training configurations sampled with a Monte Carlo at temperature \(T=0\), i.e. stable fixed points only (a), \(T=0.5\) (b), \(T=1\) (c), \(T=8\) (d), on a Hebbian network. Warmer colors represent denser regions of data points. The _full black_ line is the unweighted best-fit line for the points, the _dotted white_ line represents \(\omega=0\), the _red dot_ is the value of the best-fit line at \(\Delta=0\). The sub-panels report a zoom of the line around \(\Delta=0\): the _reddish_ region gives a measure of the width of the negative band centered around \(\Delta=0\). Measures have been collected over \(15\) samples of the network. Choice of the parameters: \(N=500\), \(\alpha=0.5\).
Figure 7: (a), (b): Distribution of \(\omega_{i}^{\mu}\) as a function of \(\Delta_{i}^{\mu}\) for training configurations sampled with a Monte Carlo at temperature \(T=0\), i.e. stable fixed points only (a), and \(T=8\) (b), on a SK model. Warmer colors represent denser regions of data points. The _full black_ line is the unweighted best-fit line for the points, the _dotted white_ line represents \(\omega=0\), the _red spot_ is the value of the best-fit line at \(\Delta=0\). The sub-panels report a zoom of the line around \(\Delta=0\). (c), (d): Comparison between the Hebbian and the random initialization through the evaluation of: the Pearson coefficient between \(\omega_{i}^{\mu}\) and \(\Delta_{i}^{\mu}\) (c), and the estimated value of \(\omega_{emp}(0)\) from the dispersion plots (d). Measures have been collected over \(15\) samples of the network. Choice of the parameters: \(N=500\), \(\alpha=0.5\).
We find that \(\omega_{emp}(0)<0\) for \(\alpha\leq 0.6\). The progressive increase of \(\omega_{emp}(0)\) means that the structure of the fixed points is more effective in the starting Hebbian landscape than at intermediate stages of training. In the last part of training, the points reacquire more negative values, but this is not a reliable indication of good performance: as shown in fig. 10(c), in this part of the process the standard deviation of the couplings \(\sigma_{i}\) is comparable to \(O(\lambda\cdot N^{-1/2})\), and the expansion of \(\mathcal{L}\) in eq. (22) is not valid. The last part of the training, where \(\sigma_{i}\simeq 0\ \forall i\), has been omitted from the plot. On the other hand, the width of the colored halos decreases with \(\alpha\). The experiment is thus consistent with the characterization of the unlearning algorithm presented in [9], which showed decreasing classification and generalization performance when increasing \(\alpha\). This is confirmed by the study of the Pearson correlation coefficient between \(\omega_{i}^{\mu}\) and the associated stabilities \(\Delta_{i}^{\mu}\) (see fig. 10(b)). High values of the Pearson coefficient show a strong dependence of the structure of noise on the relative stabilities. For all \(\alpha\), the Pearson coefficient is highest at \(d=0\) and progressively decreases during training, suggesting that the quality of the training configurations is deteriorating. The final increase in the coefficient is, again, due to the vanishing of the standard deviations \(\sigma_{i}\) of the couplings, and does not indicate good performance.
### \(m_{t}>0\)
When training configurations are fixed points of the dynamics with \(m_{t}=O(1)\), the considerations of the previous section no longer apply. Such configurations can be generated by initializing the network at an overlap \(m=O(1)\) with a memory and letting it evolve according to (1). The connectivity matrix is initialized according to Hebb's learning rule, eq. (2). In this setting, if at some point during training \(m\) happens to enter the basin of attraction of the memories, the fixed point reached by the dynamics will be the memory itself, and the noise contribution will exactly cancel the unlearning contribution, giving \(\delta J=0\). On the other hand, for sufficiently high values of the load \(\alpha\), at the start of the training procedure memories will have zero-size basins of attraction and trajectories will drift away from the memories following the dynamics. In this scenario, the two contributions \(\delta J^{N}\) and \(\delta J^{U}\) decorrelate, and \(\delta J^{N}\) will again take a role similar to the one described in the previous section. The result is an algorithm which interpolates between unlearning when \(d\) is small and a supervised algorithm when the basins increase to a size close to \((1-m)\). In this regime, \(\delta J^{N}\) acts as a braking term, preventing the algorithm from further modifying the coupling matrix \(J\). A similar mechanism has been studied in [26], where a supervised term was added to the standard unlearning update rule, leading to \(\delta J_{ij}\propto-S_{i}^{\mu_{d}}S_{j}^{\mu_{d}}+\xi_{i}^{\mu_{d}}\xi_{j}^{\mu_{d}}\). Notice that the term
Figure 8: Estimates of the ratio \(|\Delta J^{U}|/|\Delta J^{N}|\) as a function of \(\lambda^{-\frac{1}{2}}\) and \(N\) for \(\alpha=0.5\). Measures are averaged over 5 samples. Error bars are not indicated because they are smaller than the symbols.
Figure 9: The quantity \(\max_{d}\left(\Delta_{\min}\right)\) as a function of \(\lambda^{-1/2}\). Colors are: _red_ for \(N=100\) and _blue_ for \(N=500\). The _gray_ line represents the null value for the stability. Symbols are: _circles_ for the training-with-noise rule, _triangles_ for the unlearning rule. In the subplot on the center right, the couplings obtained through the unlearning algorithm are plotted as a function of the ones resulting from the training-with-noise, at the same amount of iterations, for one sample at \(N=500\) and \(\lambda=5\cdot 10^{-3}\). Measures are averaged over 50 samples. The choice of the parameters is: \(\alpha=0.5\), \(m_{t}=0^{+}\).
\(\xi_{i}^{\mu_{d}}\xi_{j}^{\mu_{d}}\) deterministically reproduces Hebb's learning rule, while in our study the unlearning is modified by a stochastic term, whose nature we have already discussed. Given a sufficiently small learning rate \(\lambda\), there will exist a characteristic number of steps of the algorithm over which the coupling matrix does not change significantly. We will call this timescale _an epoch_. Averaging the effect of training steps over an epoch, we get a snapshot of how the algorithm is affecting the couplings at a given point during training. This can be used to study the relation between \(\delta J^{N}\) and \(\delta J^{U}\), quantified by the connected correlation coefficient
\[\text{Cov}_{\text{N-U}}:=\frac{2}{N(N-1)}\sum_{i,j>i}\big{(}\overline{\delta J _{ij}^{N}\delta J_{ij}^{U}}-\overline{\delta J_{ij}^{N}}\,\overline{\delta J_{ ij}^{U}}\big{)}, \tag{30}\]
where the average is computed over an epoch. When this quantity equals one, there is no effective update of the couplings over an epoch. Results are presented in fig. 11 for different values of \(m\) and of \(\alpha\), as a function of the number of training steps \(d\). The number of iterations has been rescaled by a factor \(p/\lambda\) for clarity of the plot. At any given \(\alpha\), training with higher \(m\) results in a faster increase of \(\text{Cov}_{\text{N-U}}\), i.e. a faster convergence of the algorithm. If the value of \(m\) is too low, the algorithm never manages to build a big enough basin of attraction, and never stops. As \(\alpha\) increases, higher and higher values of \(m\) are required for the algorithm to stop, since the typical size of the attraction basins shrinks.
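As a sketch, \(\text{Cov}_{\text{N-U}}\) can be estimated from the per-step updates accumulated over an epoch; the normalization below, which makes perfect cancellation (\(\delta J^{N}=-\delta J^{U}\)) correspond to \(+1\), is an assumption consistent with the convention used in the text.

```python
import numpy as np

def cov_nu(dJ_N, dJ_U):
    """Estimate Cov_{N-U} of eq. (30) from per-step updates accumulated
    over one epoch. dJ_N, dJ_U have shape (steps, N, N); the per-pair
    normalization and the sign convention are assumptions."""
    N = dJ_N.shape[1]
    iu = np.triu_indices(N, k=1)             # pairs with j > i
    a = dJ_N[:, iu[0], iu[1]]                # (steps, n_pairs)
    b = -dJ_U[:, iu[0], iu[1]]
    cov = (a * b).mean(axis=0) - a.mean(axis=0) * b.mean(axis=0)
    norm = a.std(axis=0) * b.std(axis=0) + 1e-12   # avoid division by zero
    return float((cov / norm).mean())
```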
The network performance can be benchmarked by tracking the evolution of the lowest stability \(\Delta_{\text{min}}\) throughout the training procedure. Results are presented in fig. 12 for different values of \(\alpha\) and \(m\). For sufficiently low values of \(\alpha\) and sufficiently high values of \(m\), \(\Delta_{\text{min}}\) surpasses zero. Once this condition is met, the value of \(\Delta_{\text{min}}\) becomes essentially constant, even if \(\text{Cov}_{\text{N-U}}<1\), signaling that the update of the coupling matrix is still in progress. The result is a curve \(\Delta_{\text{min}}(d)\) which barely surpasses zero. For each value of \(m\) there exists a critical
Figure 11: Correlation between _noise_ and _unlearning_ contributions to \(\delta J\) as a function of the rescaled number of training steps \(\frac{d\lambda}{p}\), for two values of the training parameter \(m\), and different values of \(\alpha\). When \(\text{Cov}_{\text{N-U}}=1\), the algorithm stops modifying the coupling matrix. Choice of the parameters: \(N=400\), \(\lambda=10^{-2}\). Results are averaged over 100 samples.
Figure 10: The training-with-noise algorithm is implemented by sampling stable fixed points of the network dynamics with \(m_{t}=0^{+}\). (a) The empirical measure of \(\omega\) for \(\Delta=0\) for the stable fixed points as a function of the rescaled number of iterations of the training-with-structured-noise algorithm: the bars represent the symmetric negative band obtained from the linear fit in fig. 6. (b) Pearson coefficient measured between \(\omega_{i}^{\mu}\) and \(\Delta_{i}^{\mu}\). (c) The standard deviation of the couplings during learning, defined as \(\sigma=\frac{1}{N}\sum\sigma_{i}\). Points are averaged over 50 samples and the choice of the parameters is: \(N=100\), \(\lambda=10^{-2}\).
value \(\alpha_{c}(m)\) beyond which no amount of steps is able to produce \(\Delta_{\min}>0\). Extrapolating empirical results to the \(N\to\infty\) limit, one finds
\[\alpha_{c}(m)=A\cdot m^{B}+C,\]
where
\[A=0.35\pm 0.01,\quad B=6.9\pm 0.5,\quad C=0.58\pm 0.01\]
Consistently with what was presented in the previous section, in the limit \(m\to 0^{+}\) one finds the critical capacity of the unlearning algorithm [7; 9]. The critical capacity increases up to a value \(\alpha_{c}(1)=0.93\pm 0.01\) when \(m\) reaches unity.
Regardless of whether \(\Delta_{\min}>0\) is reached, one can monitor the network performance as an associative memory device by measuring the retrieval map \(m_{f}(m_{0})\). We find that the best performance always corresponds to the number of training steps maximizing \(\Delta_{\min}\); hence the curves \(m_{f}(m_{0})\) are all relative to this number of steps. Results are presented in fig. 13. When classification is achieved (i.e. \(\alpha<\alpha_{c}(m)\)), lower values of \(m\) increase the degree of generalization of the network (i.e. enlarge the basins of attraction), at the cost of a lower critical capacity.
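The retrieval map itself can be estimated with a few lines of code, reusing the `async_fixed_point` routine sketched earlier; the number of trials is illustrative.

```python
import numpy as np

def retrieval_map(J, xi, m0, n_trials=20):
    """Estimate m_f(m0): initialize at overlap m0 with a random memory,
    run the dynamics to convergence and record the final overlap."""
    p, N = xi.shape
    finals = []
    for _ in range(n_trials):
        mu = rng.integers(p)
        chi = np.where(rng.random(N) < (1 + m0) / 2, 1.0, -1.0)
        S = async_fixed_point(J, chi * xi[mu])
        finals.append(S @ xi[mu] / N)
    return float(np.mean(finals))
```

Sweeping `m0` over \((0,1]\) produces curves of the kind shown in figs. 2 and 13.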
## VI A sampling procedure for noisy data-points
Insights from the previous sections on the structure of well-performing training data can be used to effectively sample configurations to teach to the model. This can be achieved by means of a supervised Monte Carlo routine that searches for maximally noisy configurations (i.e. \(m_{t}=0^{+}\)) satisfying condition (27). The coupling matrix is initialized according to Hebb's rule (2), and updated recursively according to either eq. (8) or eq. (5). We first introduce the sampling algorithm and then report some numerical results regarding both the training-with-noise and unlearning routines.
### The sampling algorithm
We sample maximally noisy training configurations, i.e. with \(m_{t}=0^{+}\), such that \(E(\vec{\chi}|m,J)<0\), where
\[E(\vec{\chi}|m,J):=\frac{m}{\sqrt{2(1-m^{2})}}\sum_{i,\mu}^{N,p}\omega_{i}^{ \mu}\exp\left(-\frac{m^{2}\Delta_{i}^{\mu^{2}}}{2(1-m^{2})}\right)\!, \tag{31}\]
that is a function of the noisy variables \(\chi_{i}^{\mu}\), conditioned on a reference overlap \(m\) and the couplings \(J\). Sampling is done through the following procedure:
1. The network is initialized in a random configuration and the asynchronous dynamics in eq. (1) is run until convergence on a fixed point. The final state \(\vec{S}^{\mu_{d}}\) must have an overlap \(m_{t}\) in the interval \((0,1/\sqrt{N})\) with one memory \(\mu_{d}\), otherwise the procedure is repeated.
2. A \(T=0\) temperature dynamics in the landscape of \(E(\vec{\chi}|m,J)\) is performed until \(E(\vec{\chi}|m,J)<0\). We
Figure 12: Minimum stability as a function of the rescaled number of training steps \(\frac{d\lambda}{p}\), for two values of the training parameter \(m\), and different values of \(\alpha\). When \(\Delta_{\min}\geq 0\), each memory is a stable fixed point of the dynamics. Choice of the parameters: \(N=400\), \(\lambda=10^{-2}\). Results are averaged over 100 samples.
Figure 13: Retrieval map \(m_{f}(m_{0})\) for two values of \(\alpha\) and different values of the training parameter \(m\). At \(\alpha=0.4\), every \(m\) leads to stable memories, i.e. \(m_{f}(1)=1\). At \(\alpha=0.8\), only the highest values of \(m\) lead to stable memories, while for low values of \(m\) one has \(m_{f}(1)<1\). Choice of the parameters: \(N=400\), \(\lambda=10^{-2}\). Results are averaged over 100 samples.
use a Kawasaki kind of dynamics over the noisy variables \(\vec{\chi}\) to make sure that \(m_{t}\) maintains the prescribed value.
Since \(E(\vec{\chi}|m,J)\) is proportional to \(\delta\mathcal{L}_{U}\) (see eq. (22)), the procedure leads to a reduction of the loss, eq. (14). In this setting, the classification and generalization properties can be tuned by the parameter \(m\), while the training configurations always have a fixed overlap \(m_{t}=0^{+}\). In particular, to obtain a performance that is most similar to the one of a SVM, we will set \(m\to 1^{-}\).
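A sketch of the full sampling routine, combining the helpers defined earlier in this manuscript, could read as follows; the maximum number of moves is an illustrative safeguard.

```python
import numpy as np

# Reuses stabilities(), omega() and sample_fixed_point() from the
# sketches above; rng is a numpy Generator.

def E_structured(J, xi, S, m):
    """E(chi | m, J) of eq. (31)."""
    D = stabilities(J, xi)
    w = omega(J, xi, S)
    g = np.exp(-(m * D) ** 2 / (2 * (1 - m ** 2)))
    return m / np.sqrt(2 * (1 - m ** 2)) * np.sum(w * g)

def sample_structured(J, xi, m, max_moves=10_000):
    """Step 1: a fixed point with overlap in (0, 1/sqrt(N)) with a random
    memory; step 2: zero-temperature Kawasaki descent on E until E < 0."""
    mu = rng.integers(xi.shape[0])
    S = sample_fixed_point(J, xi[mu])
    chi = xi[mu] * S
    E = E_structured(J, xi, S, m)
    for _ in range(max_moves):
        if E < 0:
            break
        i = rng.choice(np.flatnonzero(chi > 0))
        j = rng.choice(np.flatnonzero(chi < 0))
        S_trial = S.copy()
        S_trial[[i, j]] *= -1                  # overlap-conserving move
        E_trial = E_structured(J, xi, S_trial, m)
        if E_trial <= E:                        # T = 0: accept descents only
            S, E = S_trial, E_trial
            chi[[i, j]] *= -1
    return S
```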
The sampling procedure starts from fixed points because we know, from the previous sections, that they are close to being the most effective configurations. Interestingly, the described procedure proves significantly more effective than a standard minimization of \(\mathcal{L}(m=1^{-},J)\): in the latter case training stops when \(\mathcal{L}=-1\), while the former technique apparently pushes the stabilities further into positive values. Nevertheless, the goal of this section is not to propose a winning, or more computationally efficient, training routine, but rather to search for the most effective training data points in the context of maximal training noise, and reveal their identity.
### Algorithm performance
The sampling procedure results in a better performance for both training-with-noise and the unlearning update rules.
Results for training-with-noise are reported in fig. 14. Panel (a) shows that classification is reached up to \(\alpha\simeq 0.8\) for a network of size \(N=100\). Panel (b) shows the retrieval map \(m_{f}(m_{0})\), for different values of \(\alpha\) and for the lowest number of algorithm iterations leading to classification. The high values of \(m_{f}\) for \(m_{0}\sim 1\) indicate that the network, while achieving classification, maintains good generalization properties. Results for a SVM with the same control parameters are also shown, for comparison. The curves obtained from training-with-noise are higher than the ones relative to the SVM, signaling a better retrieval performance, even though quantifying this effect will require a more detailed study of finite size effects. Fig. 15 shows analogous plots for the unlearning update rule. Again, the network compares favorably with traditional unlearning in terms of memory capacity, which is increased up to \(\alpha\simeq 0.9\) (see panel (a)) with respect to the maximum capacity of \(\alpha\simeq 0.7\) obtained with fixed points only. Panel (b) provides an indication of the sizes of the basins of attraction at the lowest number of algorithm iterations leading to classification. Results are consistent with fig. 1 concerning \(\alpha=0.35\), and again the basins appear to be larger than the ones from a SVM trained with the same control parameters.
As reported in fig. 16, networks trained on saddles of different \(f\) assume similar volumes of the basins of attraction when they are measured at the very first instant they reach classification. The plot also shows that the generalization performance is comparable with the one of a SVM trained with the same choice of the control parameters, consistently with the results of [9] regarding the unlearning algorithm.
Figure 14: Performance of the training-with-noise algorithm taught with noisy configurations sampled according to Section VI, for four increasing values of the load \(\alpha\). (a) Minimum stability as a function of the algorithm time: the _full_ line is the algorithm with sampling, the _dotted_ line is the traditional unlearning. (b) Retrieval map \(m_{f}(m_{0})\), relative to the _circles_ in panel (a), for \(\alpha\in[0.35,0.7,0.8]\): the _full_ line is the algorithm with sampling, the _dashed_ line is a SVM trained with no symmetry constraints with the same control parameters. (c) Saddle index \(f\) as a function of the algorithm steps for \(\alpha\in[0.35,0.7,0.8]\). The sub-panel zooms over \(\alpha=0.35\) alone. All data points have been averaged over \(20\) samples in (a),(b) and \(5\) samples in (c). Errors are neglected for clarity of the image. Choice of the parameters: \(N=100\), \(\lambda=10^{-3}\), \(m=0.9999\).
Figure 15: Performance of the unlearning algorithm trained with noisy configurations sampled according to Section VI, for four increasing values of the load \(\alpha\). (a) Minimum stability as a function of the algorithm time: the _full_ line is the unlearning with sampling, the _dotted_ line is the traditional unlearning. (b) Retrieval map \(m_{f}(m_{0})\), relative to the _circles_ in panel (a), for \(\alpha\in[0.7,0.8,0.9]\): the _full_ line is the unlearning with sampling, the _dashed_ line is a SVM trained with no symmetry constraints with the same control parameters. (c) Saddle index \(f\) as a function of the algorithm steps for \(\alpha\in[0.7,0.8,0.9]\). All data points have been averaged over \(20\) samples in (a),(b) and \(5\) samples in (c). Errors are neglected for clarity of the image. Choice of the parameters: \(N=100\), \(\lambda=10^{-3}\), \(m=0.9999\).
## VII Conclusions
The seminal work of [14] showed that noise can be injected during training to increase generalization, i.e. the capability of neural networks to retrieve classes of data upon receiving corrupted stimuli [29; 30; 31; 32]. The same concept has recently been catching on in biology as well [33; 34], as artificial neural networks are more frequently involved in explaining neuroscience, and vice versa.
In Section IV, we show that the training-with-noise algorithm [14] reproduces the thermodynamics described by Wong and Sherrington in [20; 21]. This implies convergence to a Hebbian matrix [2; 4] or a Support Vector Machine (SVM) [11; 22; 35] when learning random configurations with, respectively, a maximal (i.e. \(m_{t}=0^{+}\)) and a minimal (i.e. \(m_{t}=1^{-}\)) amount of noise. The training overlap \(m_{t}\in(0,1)\) can then be tuned to interpolate between these two models.
In Section V, we exploit a novel degree of freedom to access a new path in coupling space. After setting a specific amount of training noise \(m_{t}\), we constrain the entries of the training configurations to assume a particular internal _structure_. In the case of maximal noise \(m_{t}=0^{+}\), for every training configuration, we define a set of variables \(\omega_{i}^{\mu}\), whose value in the neighborhood of \(\Delta_{i}^{\mu}=0\) allows one to infer the classification and generalization power of the training configuration. When this quantity is negative, training drives the network closer to the optimal properties of a SVM.
In this framework, the shape of the initial landscape of attractors from which configurations are sampled plays a crucial role. We find that the Hebbian landscape in the _oblivion_ regime (i.e. \(\alpha>0.14\)) is a very effective starting condition for learning. Furthermore, our analysis shows that the quality of the configurations, in terms of noise structure, increases as we descend towards the metastable minima of a Hebbian landscape, while the inverse trend is observed in a fully random landscape. In order to reach lower states in the Lyapunov function, we focused on a symmetric version of the training-with-noise algorithm [14]. In particular, we proved that the traditional _unlearning_ algorithm [5; 7; 8; 9] (see Section III) is recovered from the training-with-noise algorithm (see Section IV) when the noise is maximal and the training configurations are stable fixed points of the dynamics. Therefore, it is now clear that basins of attraction are close to being maximized by the unlearning algorithm [9] because the training data are such as to approach the global minimum of the loss function of a SVM. The same principle is traditionally applied by other unsupervised procedures that are similar to unlearning [36; 37; 10]. In contrast with the latter, such algorithms learn configurations that lie higher in the landscape of attractors. Consistently with our study, these techniques achieve small or nonexistent basins of attraction, while the traditional unlearning routine, which makes use of the metastable states, approaches an optimal memory performance [9].
The picture about maximally noisy training data is completed by Section VI, where the measure given by the \(\omega_{i}^{\mu}\) variables is used to sample efficient training data. Simulations suggest that low saddles favor higher performance even more than stable fixed points do. This result can be exploited to outperform the traditional unlearning algorithm both in terms of critical capacity and basins of attraction.
In addition to the maximal noise scenario, we studied an application of the training-with-noise algorithm where stable fixed points with \(m_{t}>0\) are learned by the network, which is initialized according to Hebb's rule. The resulting algorithm achieves both classification and a good degree of generalization, proving that the internal structure of the configurations plays a relevant role also for higher values of \(m_{t}\). The critical capacity reached by the algorithm has been numerically estimated.
In light of our work, the full learning and memory consolidation process can be interpreted in terms of a sequential training-with-noise routine in conditions of maximal noise. In the following, we conjecture a new interpretation of learning as it would naturally follow from our analysis. During a first _online_ phase, external stimuli are learned by the network according to the standard training-with-noise algorithm: such stimuli might be conceived to have a very weak overlap with a set of _concepts_
Figure 16: Minimum stability \(\Delta_{\min}\) as a function of the algorithm steps on a network trained with the symmetric training-with-noise routine that learns saddles of various indices \(f\). The initial matrix is assembled according to Hebb's rule. Full dots report the number of iterations needed to achieve classification. The subplot depicts the relation \(m_{f}(m_{0})\) as measured at the positions of the dots. A comparison with a SVM trained with the same choice of the parameters is presented. All measures are averaged over 5 samples with the shaded region indicating the experimental errors. The choice of the parameters is: \(N=100\), \(\alpha=0.35\), \(\lambda=10^{-3}\).
or _archetypes_ that should be pseudo-orthogonal with each other and unknowable a priori. The archetypes are contained in the external environment, i.e. they are hidden in the symmetries and structures of the stimuli. At the end of this phase the initial network, which had been initialized at random, or in the form of a _tabula rasa_, has achieved a Hebbian appearance hiding the unconscious archetypes. The Hebbian network might now be in the non-retrieval regime, hence without any practical utility. A second _offline_ phase follows the first one. Now the network samples structured noisy neural configurations, still weakly correlated with the archetypes, from the freshly formed landscape of attractors. Such states could be, for instance, lower saddles or stable fixed points of the neural dynamics. Consequently, memory is consolidated by centering the unconscious archetypes within very large basins of attraction. From the biological point of view, this picture supports the importance of both daily and nightly experience in learning, as Hebbian learning acquires higher credibility [38; 39] and the role of sleep in consolidating memory has been repeatedly confirmed by electrophysiology [40; 41]. Moreover, the association between the stored memories and pseudo-orthogonal archetypes may present interesting connections with the efforts of early psychopathology to interpret the content of dreams in humans [42]. Most importantly, it should be noticed that working with maximal noise allows the learning process to be unsupervised, as it is meant to work in real neural networks. Furthermore, learning as composed of two alternating phases is also the object of very recent speculations at the interface between neuroscience and computer science: in analogy with an adversarial network [43; 44], or a Boltzmann Machine [45; 46], the brain would use internal noisy representations to increase the robustness of learning and help generalization. We hope that our results can shed light on the particular structure of noise that is optimal for learning in neural networks, possibly helping to develop a proper theory behind the rather empirical techniques of noise injection (e.g. dropout or data augmentation) that are already largely implemented in training deep networks [47; 29; 48] and which can be integrated into a wider concept of memory formation in the brain [49].
###### Acknowledgements.
The authors are particularly grateful to their mentors Enzo Marinari, Giancarlo Ruocco and Francesco Zamponi for the precious suggestions and support. They also thank Fabian Aguirre Lopez, Aldo Battista, Simona Cocco, Giampaolo Folena, Remi Monasson and Mauro Pastore for useful discussions.
|
2308.07439 | Interaction-Aware Personalized Vehicle Trajectory Prediction Using
Temporal Graph Neural Networks | Accurate prediction of vehicle trajectories is vital for advanced driver
assistance systems and autonomous vehicles. Existing methods mainly rely on
generic trajectory predictions derived from large datasets, overlooking the
personalized driving patterns of individual drivers. To address this gap, we
propose an approach for interaction-aware personalized vehicle trajectory
prediction that incorporates temporal graph neural networks. Our method
utilizes Graph Convolution Networks (GCN) and Long Short-Term Memory (LSTM) to
model the spatio-temporal interactions between target vehicles and their
surrounding traffic. To personalize the predictions, we establish a pipeline
that leverages transfer learning: the model is initially pre-trained on a
large-scale trajectory dataset and then fine-tuned for each driver using their
specific driving data. We employ human-in-the-loop simulation to collect
personalized naturalistic driving trajectories and corresponding surrounding
vehicle trajectories. Experimental results demonstrate the superior performance
of our personalized GCN-LSTM model, particularly for longer prediction
horizons, compared to its generic counterpart. Moreover, the personalized model
outperforms individual models created without pre-training, emphasizing the
significance of pre-training on a large dataset to avoid overfitting. By
incorporating personalization, our approach enhances trajectory prediction
accuracy. | Amr Abdelraouf, Rohit Gupta, Kyungtae Han | 2023-08-14T20:20:26Z | http://arxiv.org/abs/2308.07439v2 | # Interaction-Aware Personalized Vehicle Trajectory Prediction Using Temporal Graph Neural Networks
###### Abstract
Accurate prediction of vehicle trajectories is vital for advanced driver assistance systems and autonomous vehicles. Existing methods mainly rely on generic trajectory predictions derived from large datasets, overlooking the personalized driving patterns of individual drivers. To address this gap, we propose an approach for interaction-aware personalized vehicle trajectory prediction that incorporates temporal graph neural networks. Our method utilizes Graph Convolution Networks (GCN) and Long Short-Term Memory (LSTM) to model the spatio-temporal interactions between target vehicles and their surrounding traffic. To personalize the predictions, we establish a pipeline that leverages transfer learning: the model is initially pre-trained on a large-scale trajectory dataset and then fine-tuned for each driver using their specific driving data. We employ human-in-the-loop simulation to collect personalized naturalistic driving trajectories and corresponding surrounding vehicle trajectories. Experimental results demonstrate the superior performance of our personalized GCN-LSTM model, particularly for longer prediction horizons, compared to its generic counterpart. Moreover, the personalized model outperforms individual models created without pre-training, emphasizing the significance of pre-training on a large dataset to avoid overfitting. By incorporating personalization, our approach enhances trajectory prediction accuracy.
## I Introduction
The advent of vehicle-generated big data has sparked considerable interest in data-driven personalized advanced driver assistance systems (ADAS) [1]. By leveraging the wealth of personalized driving patterns and insights expressed by drivers, personalization has the potential to greatly enhance the performance of ADAS systems. This, in turn, leads to improved driving experiences, increased driver acceptance, and greater utilization of ADAS functionalities. In recent years, many personalized ADAS applications have been proposed, including Adaptive Cruise Control [2], Forward Collision Warning [3], and Lane Keeping Assistance [4], among many others. Furthermore, personalization extends to battery electric vehicle (BEV) ADAS applications. Personalized range estimation, in particular, holds significant promise in mitigating range anxiety, as it enables more accurate predictions of the available driving range for individual drivers given their unique driving styles [5].
Vehicle trajectory prediction is a foundational component in many ADAS applications and autonomous vehicle (AV) systems [6]. It plays a significant role in various safety-critical ADAS applications, such as collision warning [7], automated braking [8], [9], and lane change prediction [10]. The accuracy of trajectory prediction is crucial for enhancing the effectiveness of these safety-critical applications [11], enabling them to provide timely warnings while minimizing false positives. Moreover, in the context of autonomous vehicles, trajectory prediction plays a vital role in ensuring safe motion planning and overall system safety [12]. Anticipating and responding to dynamic traffic situations helps to reduce the risk of accidents and instills trust in AV systems. Additionally, trajectory prediction can significantly contribute to Vehicle-to-Vehicle (V2V) communication, particularly in intent-sharing [13] and negotiation applications [14]. By facilitating vehicular communication of their intentions to others, trajectory prediction for intent-sharing promotes cooperative behavior in mixed traffic, improves traffic flow, and enhances overall safety.
Recent advancements in trajectory prediction have emphasized the significance of vehicle interactions [12]. Interaction-aware trajectory prediction considers the spatio-temporal relationship between the target vehicle and its surrounding neighbors. In the recent past, some of the most effective approaches for short-term trajectory prediction combined graph neural networks (GNNs) to model the spatial domain and recurrent neural networks (RNNs) to capture the temporal domain for vehicle interactions [15], [16].
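As an illustration of this GNN-plus-RNN pattern, a minimal sketch in PyTorch is given below; all layer sizes and the row-normalized graph convolution are assumptions for exposition, not the configuration of any specific model from the literature.

```python
import torch
import torch.nn as nn

class GCNLSTMEncoder(nn.Module):
    """Minimal sketch: a graph convolution mixes information across
    interacting vehicles at each time step, and an LSTM aggregates the
    result over time. Dimensions are illustrative assumptions."""
    def __init__(self, in_dim=4, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn_w = nn.Linear(in_dim, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.out = nn.Linear(lstm_dim, 2)   # predicted (x, y) offset

    def forward(self, X, A):
        # X: (T, V, in_dim) vehicle states over T steps; A: (V, V) adjacency.
        A_hat = A + torch.eye(A.size(0))                # add self-loops
        A_norm = A_hat / A_hat.sum(dim=1, keepdim=True) # row-normalize
        H = torch.relu(A_norm @ self.gcn_w(X))          # (T, V, gcn_dim)
        H = H.transpose(0, 1)                           # (V, T, gcn_dim)
        out, _ = self.lstm(H)                           # per-vehicle sequence
        return self.out(out[:, -1])                     # (V, 2) next offset
```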
Despite the substantial body of research dedicated to vehicle trajectory prediction, a notable research gap persists in personalized vehicle trajectory prediction. Specifically, there is limited exploration of interaction-aware personalized trajectory prediction methods [17]. A major challenge stems from the lack of individual driver trajectory data which captures the driver's trajectory in addition to the driver's surrounding vehicle trajectories for an extended period of time.
This paper presents a novel approach for generating personalized interaction-aware vehicle trajectory predictions. Our method builds upon state-of-the-art techniques and combines graph convolution networks (GCN) with Long Short-Term Memory (LSTM) models to effectively capture the dynamics of vehicle interactions. To obtain individual driver trajectories which can support interaction-aware modeling, we utilized human-in-the-loop simulation using the CARLA driving simulator [18]. The proposed personalization frame |
2310.17312 | An Ensemble Method Based on the Combination of Transformers with
Convolutional Neural Networks to Detect Artificially Generated Text | Thanks to the state-of-the-art Large Language Models (LLMs), language
generation has reached outstanding levels. These models are capable of
generating high quality content, thus making it a challenging task to detect
generated text from human-written content. Despite the advantages provided by
Natural Language Generation, the inability to distinguish automatically
generated text can raise ethical concerns in terms of authenticity.
Consequently, it is important to design and develop methodologies to detect
artificial content. In our work, we present some classification models
constructed by ensembling transformer models such as SciBERT, DeBERTa and
XLNet, with Convolutional Neural Networks (CNNs). Our experiments demonstrate
that the considered ensemble architectures surpass the performance of the
individual transformer models for classification. Furthermore, the proposed
SciBERT-CNN ensemble model produced an F1-score of 98.36% on the ALTA shared
task 2023 data. | Vijini Liyanage, Davide Buscaldi | 2023-10-26T11:17:03Z | http://arxiv.org/abs/2310.17312v1 | An Ensemble Method Based on the Combination of Transformers with Convolutional Neural Networks to Detect Artificially Generated Text
###### Abstract
Thanks to the state-of-the-art Large Language Models (LLMs), language generation has reached outstanding levels. These models are capable of generating high quality content, thus making it a challenging task to detect generated text from human-written content. Despite the advantages provided by Natural Language Generation, the inability to distinguish automatically generated text can raise ethical concerns in terms of authenticity. Consequently, it is important to design and develop methodologies to detect artificial content. In our work, we present some classification models constructed by ensembling transformer models such as SciBERT, DeBERTa and XLNet, with Convolutional Neural Networks (CNNs). Our experiments demonstrate that the considered ensemble architectures surpass the performance of the individual transformer models for classification. Furthermore, the proposed SciBERT-CNN ensemble model produced an F1-score of 98.36% on the ALTA shared task 2023 data.
## 1 Introduction
Nowadays, people have access to state-of-the-art LLMs that help them simplify some of their daily activities. One of the most notable breakthroughs in recent years is the evolution of OpenAI's GPT models, which are capable of generating text that looks as if it were written by a human. In particular, the latest models such as ChatGPT and GPT-4 (OpenAI, 2023) have won global attention for providing solutions to almost any kind of question or concern, producing outputs that appear to be written by a human.
Thus there is a potential risk in determining the authenticity of the textual content that people rely on. Especially in a domain such as academia, leveraging generation models to compose articles might raise ethical concerns. For example, ICML 2023 included a note under the "Ethics" section prohibiting the use of text generated by ChatGPT and other LLMs, unless "presented as part of the paper's experimental analysis".1 Accordingly, it is essential to have mechanisms for detecting artificially composed text among human-written text.
Footnote 1: [https://icml.cc/Conferences/2023/llm-policy](https://icml.cc/Conferences/2023/llm-policy)
Currently, a substantial amount of research has focused on the detection of automatically generated text. Recent research (Zellers et al., 2019; Glazkova and Glazkov, 2022; Liyanage and Buscaldi, 2023) mostly considers detection as a binary classification task and leverages SOTA classification models to distinguish machine-generated text from original text. Besides, some employ statistical detection tools such as GLTR (Gehrmann et al., 2019) or the latest deep learning based tools such as the GPT-2 output detector2, DetectGPT (Mitchell et al., 2023) or GPTZero3. Moreover, several researchers (Liyanage et al., 2022; Kashnitsky et al., 2022) have published corpora composed of machine-generated content, which can be utilized by future research on detection.
Footnote 2: https://openai-openai-detector—5smxg.hf.space
Footnote 3: [https://gptzero.me/](https://gptzero.me/)
Our work is based on the participation of our team in the ALTA shared task 2023 (Molla et al., 2023). The objective of the task is to build automatic detection systems that can discriminate between human-authored and synthetic text generated by Large Language Models (LLMs). The corpus is composed of artificial content that belongs to a variety of domains (such as law and medicine) and is generated by models such as T5 (Raffel et al., 2020) and GPT-X.
This paper is organized as follows. We provide the corpus and task description in Section 2. In Section 3 we describe our methodology, and in Section 4 we present the experimental setup and the official results. Section 5 concludes this paper.
## 2 Task Overview
### Task Definition
The task is to distinguish automatically generated texts from human-written ones. In essence, it is a binary classification challenge: each provided text must be assigned to exactly one of two mutually exclusive categories. To outline this formally:
* Input: We are presented with text segments.
* Output: The objective is to assign one of two possible labels to each text segment: either "human-written" or "machine-generated".
The goal is thus to develop a model that draws a clear boundary between texts created through automated processes and those crafted by human authors, based on the characteristics of the given excerpts.
### Corpus
The dataset published for the ALTA shared task is a balanced one composed of 9000 original (human written) excerpts and 9000 fake (artificially generated) excerpts. On average, the excerpts consist of 35 words each. To gain a deeper comprehension of the corpus, category-wise (original vs generated) statistics with respective example excerpts are provided in Table 1.
## 3 Methodology
Given that the shared task frames detection as a binary classification challenge, we utilized a range of classification models to address this objective. In the subsequent subsections, we describe the examined statistical, recurrent, and transformer models, and the corresponding ensemble architectures.
### Statistical Models and their Respective Ensemble Architectures
In our work, we primarily employed Naive Bayes, Passive Aggressive and Support Vector Machine (SVM), which are classification algorithms used in machine learning to categorize data points into different classes [1]. Naive Bayes is a probabilistic classification algorithm based on Bayes' theorem and it is widely used for tasks such as spam detection. It assumes that the features are conditionally independent given the class label. Passive Aggressive is a type of algorithm that aims to make aggressive updates when it encounters a misclassified point and passive updates when the point is correctly classified. SVM is a powerful supervised machine learning algorithm used for classification and regression tasks. It is a popular algorithm in text classification tasks. These algorithms were employed in conjunction with the two text encoding methodologies, namely Bag of Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF).
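A minimal scikit-learn sketch of these pairings follows; the toy data and the default hyperparameters are assumptions, since the paper does not report the exact settings.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["an excerpt written by a human", "an excerpt produced by a model"]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated

# Each classifier is paired with a BoW or TF-IDF encoder in a single pipeline.
models = {
    "nb_bow": make_pipeline(CountVectorizer(), MultinomialNB()),
    "pa_tfidf": make_pipeline(TfidfVectorizer(), PassiveAggressiveClassifier()),
    "svm_tfidf": make_pipeline(TfidfVectorizer(), LinearSVC()),
}
for name, model in models.items():
    model.fit(texts, labels)
    print(name, model.predict(["a new excerpt to classify"]))
```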
Furthermore, we harnessed the capabilities of ensembles comprising the aforementioned statistical models, applying various ensemble methodologies such as voting, stacking, bagging, and boosting. By amalgamating the predictions of multiple models, ensemble techniques aim to enhance the overall predictive power of our system. Voting combines the outputs through a majority or weighted decision, stacking involves training a meta-model on the predictions of base models, bagging leverages bootstrapped subsets of data for training individual models, and boosting iteratively adjusts model weights to prioritize difficult-to-classify instances. Through these ensemble strategies, we sought to extract richer insights from our data and attain improved classification performance.
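The four ensemble strategies map directly onto scikit-learn meta-estimators, as in the hedged sketch below; the base-model combination, the tiny toy corpus, and all hyperparameters are assumptions for illustration.

```python
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

base = [("nb", MultinomialNB()), ("pa", PassiveAggressiveClassifier())]
ensembles = {
    "voting": VotingClassifier(base, voting="hard"),
    "stacking": StackingClassifier(base, final_estimator=LogisticRegression(), cv=2),
    "bagging": BaggingClassifier(MultinomialNB(), n_estimators=10),
    "boosting": AdaBoostClassifier(n_estimators=50),
}
texts = ["human one", "human two", "generated one", "generated two"]
labels = [0, 0, 1, 1]
for name, clf in ensembles.items():
    make_pipeline(TfidfVectorizer(), clf).fit(texts, labels)
```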
### Recurrent Models and their Respective Ensemble Architectures
Recurrent models, a subset of neural network architectures, are designed to capture temporal dependencies and patterns within sequences. We conducted experiments with LSTM and Bi-LSTM models, a type of RNN architecture specifically designed to address the vanishing gradient problem that can occur in traditional RNNs. To further improve the classification accuracy of these models, we ensembled them with a Convolutional Neural Network (CNN) architecture. This approach enhances the predictive capability of the overall model by capitalizing on the components' respective strengths in capturing temporal dependencies and spatial features, as sketched below. We trained the entire ensemble end-to-end, allowing the network to learn how to best combine the features extracted by both the LSTM and CNN components.
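A minimal Keras sketch of such an end-to-end ensemble is given below. The paper does not specify the merge strategy or layer sizes for the recurrent ensembles, so the parallel-branch design, filter counts, and vocabulary size here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab, seq_len, emb = 20000, 128, 100
inp = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab, emb)(inp)

# Two parallel branches over the same embeddings.
lstm = layers.Bidirectional(layers.LSTM(64))(x)     # temporal dependencies
cnn = layers.Conv1D(64, 5, activation="relu")(x)    # local n-gram features
cnn = layers.GlobalMaxPooling1D()(cnn)

merged = layers.concatenate([lstm, cnn])
out = layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```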
### Transformer Models and their Respective Ensemble Architectures
For our classification experiments, we leveraged cutting-edge transformer models, namely BERT,
SciBERT, DeBERTa, and XLNet. These state-of-the-art architectures have demonstrated exceptional proficiency in a wide spectrum of natural language processing tasks, including classification. BERT (Bidirectional Encoder Representations from Transformers) Devlin et al. (2018) introduces bidirectional context by pretraining on a massive corpus and then fine-tuning on task-specific data. SciBERT Beltagy et al. (2019) is specialized for scientific text, adapting BERT's embeddings to domain-specific language. DeBERTa (Decoding-enhanced BERT with Disentangled Attention) He et al. (2020) enhances attention mechanisms, capturing dependencies among words more effectively. XLNet Yang et al. (2019) employs a permutation-based training approach to capture bidirectional context and alleviate BERT's limitations.
Initially, we created ensembles by combining the capabilities of SciBERT and DeBERTa models with the foundational BERT model. This process involves channeling the data through each base model, which comprises the transformer block along with a subsequent max pooling layer. Subsequently, the outcomes derived from these individual models are concatenated to generate a unified representation, which is then channeled into a linear classification layer for making refined predictions.
Furthermore, we integrated the transformer models with Convolutional Neural Networks (CNNs) to construct ensemble architectures that exhibit enhanced performance. As depicted in the architectural diagram in Figure 1, the embeddings produced by the transformer model are fed into a CNN layer. In this setup the transformer's own classification outputs are not used: the CNN, placed as the upper layer, takes on the role of the main classification component. We also exclude any nn.Embedding layers, as no lookup table for embedding vectors is needed; instead, we feed the embedding vectors from BERT directly into the CNN architecture, as sketched below.
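A hedged PyTorch sketch of this transformer-plus-CNN design follows. The paper states that the CNN block has three convolutional layers but not their shapes, so the parallel kernel sizes (3, 4, 5) and the filter count are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TransformerCNN(nn.Module):
    """Transformer token embeddings fed into a small CNN head (illustrative)."""
    def __init__(self, name="bert-base-uncased", n_classes=2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)
        hid = self.backbone.config.hidden_size
        # Three 1-D convolutions over the token dimension.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hid, 128, kernel_size=k) for k in (3, 4, 5)]
        )
        self.fc = nn.Linear(128 * 3, n_classes)

    def forward(self, input_ids, attention_mask):
        emb = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        emb = emb.transpose(1, 2)                     # (batch, hidden, seq)
        feats = [torch.relu(c(emb)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["a sample excerpt to classify"], return_tensors="pt",
            padding="max_length", max_length=128, truncation=True)
logits = TransformerCNN()(batch["input_ids"], batch["attention_mask"])
```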
## 4 Experiments and Results
The text underwent preliminary processing, involving the elimination of stopwords and stemming, before being supplied to either statistical or neural network architectures. The processed data was then transformed into numerical vectors using Bag of Words (BoW) or tf-idf encoding techniques, which were subsequently utilized as inputs for the statistical models. All of the employed statistical models, as well as their corresponding ensemble methods, were imported from the Scikit-learn library. For constructing LSTM and CNN models, the relevant layers were imported from TensorFlow's Keras module. Training these recurrent models, including those combined with CNN ensembles, involved running 10 epochs. The LSTM and Bi-LSTM architectures were trained using batch sizes of 64 and 128, respectively.
Concerning transformer architectures and their associated ensembles, pre-trained models from Hugging Face Wolf et al. (2020) were imported and subsequently fine-tuned through the utilization of Simple Transformers 4. The BERT tokenizer was consistently employed across all models. The fine-tuning process involved 3 epochs, a batch size of 16, and a maximum sequence length of 128. Leveraging the T4 GPU Hardware accelerator, the average training time for models was approximately 30 minutes. For standalone models, the input consisted of unprocessed text, while ensembles underwent pre-processing involving punctuation removal and conversion to lowercase. As represented in Figure 1, the CNN block of the ensembles was composed of three convolutional layers.
Footnote 4: [https://simpletransformers.ai](https://simpletransformers.ai)
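For the transformer models, the fine-tuning described above can be reproduced roughly as follows with Simple Transformers. The epoch count, batch size, and sequence length match the values reported here; the SciBERT checkpoint name and the training-data format are assumptions.

```python
import torch
from simpletransformers.classification import ClassificationModel

args = {
    "num_train_epochs": 3,        # values reported above
    "train_batch_size": 16,
    "max_seq_length": 128,
    "overwrite_output_dir": True,
}
model = ClassificationModel("bert", "allenai/scibert_scivocab_uncased",
                            args=args, use_cuda=torch.cuda.is_available())

# train_df / eval_df: pandas DataFrames with "text" and "labels" columns.
# model.train_model(train_df)
# result, model_outputs, wrong_predictions = model.eval_model(eval_df)
```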
The dataset was split in an 80:20 ratio for training and testing.
| | Original | Generated |
| --- | --- | --- |
| Min. word count | 10 | 1 |
| Max. word count | 96 | 192 |
| Avg. word count | 25 | 45 |
| Example excerpt | This is the data I collected so far (motorcycle standing on central stand, back wheel revolving, velocity comes from the back wheel, ABS LED blinking). | In this sense, she emphasized that it was a mistake to tie development aid to times of economic booms, as it is a "permanent commitment". |

Table 1: Statistics of the ALTA shared task corpus (the avg. figures are rounded off to the nearest whole number)
To assess the classification performance of the models under consideration, the F1 score was employed. This score, being a balanced combination of precision and recall, offers a comprehensive evaluation. Each model underwent a total of five experimental iterations, and the resultant average F1 scores are presented in Table 2.
In general, the ensemble architectures have exhibited superior performance compared to their corresponding original models. Our best-performing solution is the combination of DeBERTa\({}_{large}\) with CNN, achieving an F1 score of **98.36%**.
## 5 Conclusion
In this work, we have explored the application of different SOTA classification models to the detection of automatically generated text from human-written text. Moreover, we have created various ensemble methods with the aforementioned models and examined their performance on the detection task. Our results on the test data showed that the ensemble architectures generally outperform the corresponding original models.
As future work, we plan to examine the applicability of our ensemble architectures in detecting artificially generated text in multilingual corpora. Another potential research direction involves assessing the effectiveness of knowledge-based approaches for detecting artificial text.
|
2303.02141 | Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! | Sparse Neural Networks (SNNs) have received voluminous attention
predominantly due to growing computational and memory footprints of
consistently exploding parameter count in large-scale models. Similar to their
dense counterparts, recent SNNs generalize just as well and are equipped with
numerous favorable benefits (e.g., low complexity, high scalability, and
robustness), sometimes even better than the original dense networks. As
research effort is focused on developing increasingly sophisticated sparse
algorithms, it is startling that a comprehensive benchmark to evaluate the
effectiveness of these algorithms has been highly overlooked. In the absence of a
carefully crafted evaluation benchmark, most, if not all, sparse algorithms are
evaluated against fairly simple and naive tasks (e.g., CIFAR, ImageNet, GLUE,
etc.), which can potentially camouflage many advantages as well as unexpected
predicaments of SNNs. In pursuit of a more general evaluation and unveiling the
true potential of sparse algorithms, we introduce "Sparsity May Cry" Benchmark
(SMC-Bench), a collection of carefully-curated 4 diverse tasks with 10
datasets, that accounts for capturing a wide range of domain-specific and
sophisticated knowledge. Our systemic evaluation of the most representative
sparse algorithms reveals an important obscured observation: the
state-of-the-art magnitude- and/or gradient-based sparse algorithms seemingly
fail to perform on SMC-Bench when applied out-of-the-box, sometimes at
significantly trivial sparsity as low as 5%. By incorporating these
well-thought and diverse tasks, SMC-Bench is designed to favor and encourage
the development of more scalable and generalizable sparse algorithms. | Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang | 2023-03-03T18:47:21Z | http://arxiv.org/abs/2303.02141v1 | # Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
###### Abstract
Sparse Neural Networks (SNNs) have received voluminous attention predominantly due to growing computational and memory footprints of consistently exploding parameter count in large-scale models. Similar to their dense counterparts, recent SNNs generalize just as well and are equipped with numerous favorable benefits (e.g., low complexity, high scalability, and robustness), sometimes even better than the original dense networks. As research effort is focused on developing increasingly sophisticated sparse algorithms, it is startling that a _comprehensive benchmark to evaluate the effectiveness_ of these algorithms has been highly overlooked. In the absence of a carefully crafted evaluation benchmark, most, if not all, sparse algorithms are evaluated against fairly simple and naive tasks (e.g., CIFAR-10/100, ImageNet, GLUE, etc.), which can potentially _camouflage_ many advantages as well as unexpected predicaments of SNNs. In pursuit of a more general evaluation and unveiling the true potential of sparse algorithms, we introduce "**Sparsity May Cry**" **Benchmark (SMC-Bench)**, a collection of carefully-curated 4 diverse tasks with 10 datasets, that accounts for capturing a wide range of domain-specific and sophisticated knowledge. Our systemic evaluation of the most representative sparse algorithms reveals an important obscured observation: _the state-of-the-art magnitude- and/or gradient-based sparse algorithms seemingly fail to perform on SMC-Bench_ when applied out-of-the-box, sometimes at significantly trivial sparsity as low as \(5\%\). The observations seek the immediate attention of the sparsity research community to reconsider the highly proclaimed benefits of SNNs. We further conduct a thorough investigation into the reasons for the failure of common SNNs. Our analysis points out that such failure is intimately related to the "lazy regime" of large model training, which hints at stronger pruning recipes that alleviate the failure on SMC-Bench (though still more or less suffering). By incorporating these well-thought and diverse tasks, SMC-Bench is designed to favor and encourage the development of more scalable and generalizable sparse algorithms. We open-source SMC-Bench to assist researchers in building next-generation sparse algorithms that scale and generalize: [https://github.com/VITA-Group/SMC-Bench](https://github.com/VITA-Group/SMC-Bench).
## 1 Introduction
Sparse Neural Networks (SNNs) are no stranger to the deep learning community (Liu and Wang, 2023), but recently they have received stupendous attention in the era of transformers (e.g., BERT, GPT, ViT, CLIP, etc.), when the parameter count is frequently measured in billions rather than millions. Due to the consistent efforts of sparsity researchers, SNNs have ushered in enormous breakthroughs and can generalize just as well as the original dense networks; it is feasible to procure them after training (Frankle and Carbin, 2019; Sanh et al., 2020; Chen et al., 2020; Frankle et al., 2020), during training (Zhu and Gupta, 2017; Gale et al., 2019; Liu et al., 2021), and even before training (Mocanu et al., 2018; Lee et al., 2019; Liu et al., 2022) their dense counterparts using pruning. Apart from the well-known efficiency benefits, surprisingly, SNNs also enjoy auxiliary benefits such as adversarial robustness (Guo et al., 2018; Ozdenizci and Legenstein, 2021; Chen et al., 2022), out-of-distribution
generalization (Zhang et al., 2021; Diffenderfer et al., 2021), and uncertainty estimation (Liu et al., 2021), etc. Despite the multi-dimensional success of numerous sparse algorithms, startlingly, our extensive survey across over 100 recent SNN papers within 2015-2022 unveils multiple daunting issues regarding evaluation datasets and protocols blindly followed within the sparsity community, that may significantly impede future progress if left unacknowledged.
**Issues with current evaluation paradigm:** _Firstly_, the vast majority of current work on SNNs is _narrowly evaluated_, i.e., only targeting a single or a few tasks (usually image classification and sometimes language understanding) on which SNNs have already proven their proficiency (Gale et al., 2019; Frankle and Carbin, 2019). Surprisingly, 79 out of our carefully selected 100 papers on SNNs evaluate sparse models _merely_ on a single task, and 72 of them evaluate image classification. _Secondly_, people are obsessed with evaluating SNNs on well-understood datasets, including but not limited to MNIST (LeCun, 1998) (26 papers), CIFAR-10/100 (Krizhevsky et al., 2009) (59 and 37 papers, respectively), ImageNet (Deng et al., 2009) (62 papers), and GLUE (Wang et al., 2018) (9 papers), where deep neural networks have already exceeded human-equivalent performance (refer to Appendix D for more details). For instance, even though ImageNet has been considered a rather challenging task over the years, very high accuracy (\(>\)90%) has been reported many times (Yu et al., 2022; Wortsman et al., 2022; Zhai et al., 2022). Such relatively restricted evaluations with "nearly saturated performance" limit the scope of sparse neural networks and are potentially ill-suited to identify new and unexpected capabilities of SNNs.
Addressing the aforementioned limitations of current SNN evaluation protocols is a pressing need for the community. To this end, we assemble a large-scale, fairly arduous, and diverse benchmark for sparse neural networks - "**Sparsity May Cry" Benchmark** (or briefly **SMC-Bench**). Specifically, we consider a broad set of tasks including _complex reasoning, multilingual translation, and protein prediction_, whose content spans multiple domains. Those tasks require a vast amount of commonsense knowledge, solid mathematical and scientific backgrounds to solve even for humans. Note that none of the datasets in SMC-Bench was created from scratch for the benchmark, we rely on pre-existing datasets as they have been agreed by researchers as challenging, interesting, and of high practical value. We rigorously measure the performance of state-of-the-art (SOTA) pruning and sparse training approaches (in their most common, basic settings) on SMC-Bench, to understand the potential of SNNs to scale and generalize. Our key observations and contributions can be unfolded as:
* We present "**Sparsity May Cry**" Benchmark, to **re-define** the evaluation protocols for sparse neural networks and facilitate a comprehensive assessment of SOTA sparse algorithms. The premise of SMC-bench is to develop a suite of large-scale, challenging, realistic, and diverse tasks and datasets that can empower the rigorous advancements in the community.
* Our systematic evaluation reveals a worrisome observation: all of the SOTA sparse algorithms seem to fail on SMC-Bench "out-of-the-box", sometimes at significantly trivial sparsity, _e.g.,_ \(5\%\). Note that the failure does not appear specific to one sparsification approach but occurs unanimously across all approaches we evaluated. This observation _alarmingly_ demands the attention of the sparsity community to reconsider the highly proclaimed benefits of SNNs.
* We conduct extensive experiments across representative SNNs produced by various SOTA pruning and sparse training approaches on SMC-Bench, and we summarize our findings:
  1. Model prunability is intimately related to task difficulty: models trained on difficult tasks suffer more from pruning compared to easier tasks;
  2. The success of before-training sparsification (sparse training or pruning at initialization) is hard to generalize to more complex scenarios;
  3. Iterative magnitude pruning (IMP) does not necessarily generalize better than one-shot pruning (OMP) or during-training pruning;
  4. Despite performance differences, different magnitude-based pruning approaches lead to extremely similar layerwise sparsities.
* We further carry out a comprehensive investigation into the potential causes of SNN failures on SMC-Bench. Our analysis suggests that the failure of the existing sparse algorithms might be due to the "lazy regime" dynamics emerging in sufficiently overparameterized models (Chizat et al., 2019; Malladi et al., 2022). Inspired by this finding, we hypothesize and confirm that the second-order pruning approaches, i.e., oBERT (Kurtic et al., 2022), are more reliable pruning approaches for SMC-Bench, which yield relatively more promising performance on SMC-Bench in Appendix C.
## 2 Related Work
### Advances in Sparse Neural Networks
**Post-Training.** SNNs refer to a neural network where a certain portion of its components (e.g., weights, neurons, filters, and attention heads) have exactly zero values. The initial purpose of SNNs was retrospectively to accelerate the model at inference time (a.k.a., post-training sparsification; Mozer and Smolensky (1989); LeCun et al. (1990)). Thanks to the over-parameterization property of deep neural networks, we can dramatically prune deep neural networks to smaller sizes with marginal loss of performance. Post-training sparsification has been well studied and has resulted in various mature criteria that can be generally categorized into zero-order methods (magnitude-based; Han et al. (2015)), first-order methods (gradient-based; Molchanov et al. (2016); Sanh et al. (2020); Jiang et al. (2021)), and second-order methods (Hessian-based; LeCun et al. (1990); Hassibi and Stork (1992); Dong et al. (2017)). Second-order sparsification usually achieves higher performance than the other two but is also more expensive due to the full Hessian calculation. Fortunately, many approaches have been proposed to efficiently approximate the Hessian (Zeng and Urtasun, 2018; Wang et al., 2019; Singh and Alistarh, 2020). The Lottery Ticket Hypothesis (LTH) adopts iterative magnitude pruning (IMP) on fully trained networks and finds a subnetwork at initialization that can be re-trained in isolation to match the original dense networks. Renda et al. (2020) further found that instead of re-training with the initial weights, re-training with the final weights achieves better performance. With the rise of large language models (LLMs), newer post-training pruning methods have emerged which aim to improve the affordability of these models (Sanh et al., 2020; Chen et al., 2020; Zafrir et al., 2021; Kurtic et al., 2022; Xu et al., 2021; Lagunas et al., 2021; Zhang et al., 2022; Frantar et al., 2021).
**During-Training.** During-training sparsification (Finnoff et al., 1993) is a cheaper option compared to sparsifying a fully converged model. Approaches for during-training sparsification usually train a dense network for some time and then gradually sparsify the network with some schedule, ending up with a sparse model at the target sparsity. Zhu and Gupta (2017); Gale et al. (2019); Lin et al. (2020); Liu et al. (2021) are representative approaches that gradually prune networks during training while allowing the pruned weights to be reactivated in case of inaccurate pruning. Another direction of during-training sparsification is adding sparsifying penalties such as (grouped) \(L_{0}\) and \(L_{1}\) norms to the loss function, which punish unimportant weights toward zero, leading to sparse weights (Louizos et al., 2018; Luo and Wu, 2020; Savarese et al., 2020).
**Prior-Training.** Recently, foundation models (Brown et al., 2020; Chowdhery et al., 2022; Ramesh et al., 2022) have demonstrated promising quantitative improvements and new qualitative capabilities with increasing scale (Zhang et al., 2020). Along with the scaling of model size and data size, the resources required to train these foundation models have also become exorbitant. To accelerate training, we need to sparsify models before training. LTH unveils the possibility of finding SNNs at initialization that can match their dense counterparts, even though it uses post-training pruning to find them. At the same time, sparse training (Mocanu et al., 2018; Mostafa and Wang, 2019; Dettmers and Zettlemoyer, 2019; Evci et al., 2020; Liu et al., 2021; Schwarz et al., 2021) was proposed, which can train a randomly-initialized sparse neural network from scratch while dynamically optimizing the sparse connectivity with promising performance. Instead of randomly initializing sparse networks, one iteration (Lee et al., 2019; Wang et al., 2020) or a few iterations (Tanaka et al., 2020; de Jorge et al., 2021) of training can be utilized to guide the search for sparse networks before training.
### Benchmarking in Sparse Neural Networks
Gale et al. (2019) rigorously evaluated variational dropout (Molchanov et al., 2017), \(l_{0}\) regularizaion (Louizos et al., 2018), and GMP (Zhu and Gupta, 2017) on two large-scale tasks. They demonstrated that the appealing performance advantages of variational dropout and \(l_{0}\) regularization cannot generalize to large-scale tasks whereas simple magnitude pruning performs surprisingly well. Liu et al. (2018) examined two pipelines: training from scratch and fine-tuning, concluding that fine-tuning a pruned model only gives comparable or worse performance than training from scratch. Blalock et al. (2020) provided a comprehensive literature review on SNNs and found that pruning papers rarely make direct and controlled comparisons. Frankle et al. (2021) assessed the efficacy of various pruning at initialization approaches and attribute their inferior performance to their insensitivity to weight shuffling and re-initialization. Liu et al. (2022) re-evaluated the performance of various random pruning before training and found that sparsely training a randomly pruned network from scratch can surprisingly match the performance of its dense equivalent. These papers shed light on the behavior of SNNs and discover important research problems for future work.
## 3 SMC-Bench
SMC-Bench is crafted for evaluating if all proclaimed benefits of SNNs can "scale and generalize". It consists of 4 diverse and difficult tasks, including commonsense reasoning, arithmetic reasoning, multilingual translation, and protein prediction, with 10 datasets collected from prior work and open-source GitHub repositories. To investigate if there is a strong correlation between model prunability and task difficulty, we choose multiple datasets with different degrees of difficulty.
### Commonsense Reasoning
The commonsense reasoning task asks commonsense questions about the world involving complex semantics that often require rich common sense and background knowledge. We consider three commonly used datasets for commonsense reasoning. (1) **RACE** (Lai et al., 2017) contains nearly 28,000 passages and 100,000 questions collected from English exams for Chinese students in middle (RACE-M) and high school (RACE-H). (2) **WinoGrande** (Sakaguchi et al., 2021) is a modified version of the Winograd Schema Challenge (WSC) benchmark (Levesque et al., 2012) with enhanced scale and hardness, containing 44k problems. (3) **Commonsense QA (CSQA)** (Talmor et al., 2018) is a challenging dataset containing 12,247 multiple-choice questions, where one source concept and three target concepts are first extracted from ConceptNet (Speer et al., 2017), based on which crowd-workers are asked to author multiple-choice questions with two additional distractors. In general, CSQA is harder than WinoGrande and RACE, with ceiling human performance of 89%, 94%, and 95%, respectively.
### Arithmetic Reasoning
In arithmetic reasoning, the model is posed a math word problem and asked to generate a mathematical equation that can be evaluated to get the answer. We consider the following three math word problem datasets: (1) the widely used **MAWPS** benchmark (Koncel-Kedziorski et al., 2016), composed of 2,373 problems; (2) the arithmetic subset of ASDiv (Miao et al., 2021), **ASDiv-A**, with 1,218 math problems; (3) the more challenging **SVAMP** (Patel et al., 2021) dataset, which is created by applying complex types of variations to samples from ASDiv-A. The task difficulty monotonically increases from MAWPS to ASDiv-A, and to SVAMP.
### Protein Thermostability Prediction
Maintaining a stable 3D structure is an essential pre-condition for a protein to function correctly in biological phenomena. Numerous efforts are devoted to modeling and predicting protein stability against pH, salinity, and temperature. We consider the task of protein thermostability prediction on two representative datasets: (1) **HotProtein** (Chen et al., 2023) is a recently proposed large-scale, standardized protein benchmark with organism-level temperature annotations, which contains \(182\)K protein sequences and \(3\)K folded structures from \(230\) different species. Three dataset variants, **HP-S**, **HP-S\({}^{2}\)C2**, and **HP-S\({}^{2}\)C5**, are adopted to examine sequence- and structure-based methods. **HP-S** has \(\{6{,}303,\ 4{,}946,\ 30{,}333,\ 79{,}087,\ 31{,}549\}\) protein sequences from five categories (_Cryophilic_, _Psychrophilic_, _Mesophilic_, _Thermophilic_, _Hyperthermophilic_); **HP-S\({}^{2}\)C5** has both sequences and structures for \(\{73,\ 387,\ 195,\ 196,\ 189\}\) proteins from the same five classes ordered from _Cryophilic_ to _Hyperthermophilic_; **HP-S\({}^{2}\)C2** has both sequences and structures for \(\{1{,}026,\ 939\}\) proteins from the two classes {"hot" (\(\geq 45^{\circ}\)C), "cold" (\(<45^{\circ}\)C)}. (2) **Meltome Atlas** (Jarzab et al., 2020) is another challenging test bed for protein thermostability. It has \(\{7{,}902,\ 15{,}833,\ 10{,}518\}\) protein sequences from three of the five aforementioned classes, from _Mesophilic_ to _Hyperthermophilic_. All samples are annotated with their melting temperature.
### Multilingual Translation
Multilingual translation processes multiple languages using a single language model and requires the model to perform translation across languages. We follow Liu et al. (2020); Tang et al. (2020) and choose 10 English-centric language pairs (Fr, Cs, De, Gu, Ja, My, Ro, Ru, Vi, Zh \(\leftrightarrow\) En) from an open-source parallel corpus - OPUS (OPU, 2020). We follow Arivazhagan et al. (2019) and use pivot data through English to create 3 Many-to-Many multilingual translation fine-tuning settings, including \(2\)-to-\(2\) (Fr, Cs), \(5\)-to-\(5\) (Fr, Cs, De, Gu, Ja), and \(10\)-to-\(10\).
## 4 Evaluation on SMC-Bench
Models. Although we are aware that performing few-shot prompting on large-scale pre-trained language models with billions of parameters is able to solve these tasks (Wei et al., 2022; Srivastava et al., 2022), we choose to focus on fine-tuning or training with pre-trained mid-scale models with millions of parameters, to improve the accessibility of our benchmark. Specifically, we choose to fine-tune the popular RoBERTa (Liu et al., 2019) for commonsense reasoning; to fine-tune mBART (Liu et al., 2020) for multilingual translation; to train GTS (Xie and Sun, 2019) and Graph2Tree (Zhang et al., 2020) with RoBERTa's pre-trained embedding for arithmetic reasoning; and to fine-tune Transformer-based models (Vaswani et al., 2017) for protein property prediction. See Appendix A for full details.
Sparse Neural Networks. We select the most representative magnitude- and/or gradient-based approaches where the prune operation is performed before, during, or after training. Formally, given a dense network \(\theta_{l}\in\mathbb{R}^{d_{l}}\) with a dimension of \(d_{l}\) in each layer \(l\in\{1,\dots,L\}\), pruning generates binary masks \(m_{l}\in\{0,1\}^{d_{l}}\) yielding sparse neural networks with sparse weights \(\theta_{l}\odot m_{l}\). The sparsity level is the fraction of the weights that are zero-valued, calculated as \(s=1-\frac{\sum_{l}m_{l}}{\sum_{l}d_{l}}\). Following a mainstream convention in many sparse training papers (Frankle and Carbin, 2019; Gale et al., 2019; Evci et al., 2020; Lee et al., 2019; Liu et al., 2021c), we sparsify most layers in the model including embedding layers and classifier heads, and we do not apply advanced techniques such as Iterative Learning Rate Rewinding (Renda et al., 2020) and Knowledge Distillation (Hinton et al., 2015) in our main evaluations, even if we observe that they help to alleviate accuracy drops as in Appendix C.
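In code, this mask formalism amounts to elementwise products and a global ratio; a minimal PyTorch sketch (the toy layer and random masks are assumptions):

```python
import torch

def sparsity(masks):
    """s = 1 - (kept weights) / (total weights)."""
    kept = sum(m.sum().item() for m in masks)
    total = sum(m.numel() for m in masks)
    return 1.0 - kept / total

def apply_masks(model, masks):
    """Zero out pruned weights in-place: theta_l <- theta_l * m_l."""
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)

model = torch.nn.Linear(10, 10)
masks = [(torch.rand_like(p) > 0.5).float() for p in model.parameters()]
apply_masks(model, masks)
print(f"sparsity = {sparsity(masks):.2f}")
```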
\(\bullet\)_Lottery Ticket Hypothesis (LTH)_(Frankle and Carbin, 2019) is a strong post-training pruning baseline that iteratively adopts magnitude pruning after training to produce binary masks and re-trains the surviving weights from step \(t\). We set \(t=0\) in this paper, since rewinding to early training does not notably improve the performance of Transformer models (e.g., BERT) for downstream tasks (Chen et al., 2020). The pruning rate of each IMP iteration is set as 20%.
\(\bullet\)_Magnitude After Training_ is a strong baseline for prune after training, which has demonstrated strong results in various regimes. After training or fine-tuning models on the specific task, we prune the model with one-shot magnitude pruning and re-train it with the full learning rate schedule from the beginning, dubbed "OMP (After)" in our experiments.
\(\bullet\)_Random After Training_(Mittal et al., 2019) is the most naive baseline for post-training pruning. It uniformly samples a random score \(s_{l}\in\text{Uniform}(0,1)\) for each weight and prunes the weights with the lowest scores. After pruning, we also re-train with the entire training schedule.
\(\bullet\)_Gradual Magnitude Pruning (GMP)_(Zhu and Gupta, 2017; Gale et al., 2019) gradually sparsifies networks during training according to a pre-defined sparsification schedule with sorting-based weight thresholding. The starting and the ending iteration of the gradual sparsification process are set as 10% and 80% of the entire training iterations. The frequency of sparsification steps is tuned among 500, 1000, and 4000, depending on the specific tasks. While we are aware of advanced gradual pruning methods such as movement pruning (Sanh et al., 2020), movement pruning usually exceeds GMP only at high sparsities (e.g., \(>\)90%), which is interesting but not within the scope of this paper.
\(\bullet\)_Magnitude Before Training_(Frankle et al., 2021) simply removes weights with the lowest magnitude at initialization. Since we inherit weights from pre-trained models, the initial weights actually refer to the weights learned on the pre-training tasks. We abbreviate this approach to "OMP (Before)" as we use one-shot magnitude pruning.
\(\bullet\)_Random Before Training_(Liu et al., 2022) is the most naive baseline for prior-training pruning. We randomly sample scores for each weight and remove the weights with the lowest scores. Different from Random After Training, the pruning operation is performed before fine-tuning.
\(\bullet\)_SNIP_(Lee et al., 2019) is a prior-training pruning technique that globally removes weights with the lowest connection sensitivity score \(|g\odot w|\). SNIP is a strong baseline that consistently performs well among various prior-training approaches (Frankle et al., 2021).
\(\bullet\)_Rigging the Lottery (RigL)_(Evci et al., 2020) is a leading sparse training method that updates the topology of sparse neural networks during training via a prune-and-grow scheme. To evaluate its effectiveness on downstream fine-tuning, we combine RigL with the other three prior-training methods. The update interval of RigL is set the same as the one used for updating sparsity in GMP, following Liu et al. (2021b). (Minimal code sketches of the IMP, GMP, SNIP, and RigL criteria appear after this list.)
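The sketches below illustrate four of the pruning criteria above in minimal PyTorch; they are illustrative simplifications rather than the exact implementations (which follow the cited papers), and the global-vs-layerwise thresholding, edge-case handling, and the 30% drop fraction in the RigL step are assumptions.

```python
import torch

def imp_prune(params, masks, rate=0.2):
    """One IMP/OMP round: drop `rate` of the remaining weights by global magnitude."""
    alive = torch.cat([p.abs().flatten()[m.flatten() > 0]
                       for p, m in zip(params, masks)])
    thr = torch.quantile(alive, rate)
    return [((p.abs() > thr) & (m > 0)).float() for p, m in zip(params, masks)]

def gmp_sparsity(step, total, final_s, start=0.1, end=0.8):
    """Cubic sparsity schedule of Zhu & Gupta (2017) with the 10%/80% window."""
    t0, t1 = start * total, end * total
    if step < t0:
        return 0.0
    if step >= t1:
        return final_s
    return final_s * (1.0 - (1.0 - (step - t0) / (t1 - t0)) ** 3)

def snip_masks(model, loss_fn, x, y, target_sparsity=0.5):
    """SNIP: keep the weights with the largest connection sensitivity |g * w|."""
    params = list(model.parameters())
    grads = torch.autograd.grad(loss_fn(model(x), y), params)
    scores = [(g * p).abs() for g, p in zip(grads, params)]
    flat = torch.cat([s.flatten() for s in scores])
    keep = max(1, int((1.0 - target_sparsity) * flat.numel()))
    thr = torch.topk(flat, keep).values.min()
    return [(s >= thr).float() for s in scores]

def rigl_step(param, grad, mask, frac=0.3):
    """RigL: drop the smallest-magnitude active weights, regrow as many of the
    inactive positions with the largest gradient magnitude."""
    n = int(frac * mask.sum().item())
    mag = torch.where(mask.bool(), param.abs(),
                      torch.full_like(param, float("inf")))
    mask.view(-1)[torch.topk(mag.flatten(), n, largest=False).indices] = 0.0
    score = torch.where(mask.bool(), torch.full_like(grad, -float("inf")),
                        grad.abs())
    grow = torch.topk(score.flatten(), n).indices
    mask.view(-1)[grow] = 1.0
    param.data.view(-1)[grow] = 0.0   # regrown weights start at zero
    return mask
```

In each case, training proceeds with the masks re-applied after every optimizer step, so pruned weights stay at exactly zero.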
### Commonsense Reasoning
Implementation Details. We follow the training settings of the sequence modeling toolkit Fairseq (Ott et al., 2019) and fine-tune the pre-trained RoBERTa on our datasets with a standard cross-entropy loss. Specifically, for each question, we construct five inputs, one for each of the five candidate answer choices. Each input is constructed by concatenating the question and candidate answer together. We then encode each input and pass the resulting "[CLS]" representations through a classifier to predict the correct answer. All models are trained with the Adam (Kingma and Ba, 2014) optimizer with a learning rate of \(1\times 10^{-5}\) using an A100 GPU. For CSQA, we train the model for 3000 steps with a linear warmup of 150 steps and a batch size of \(16\). The dropout rate is set as 0.1. This gives us a test accuracy of \(77.3\%\) with dense RoBERTa. For RACE, we train each model for 3 epochs with a batch size of \(16\). This gives us \(86.6\%\) and \(82.0\%\) dense accuracy on RACE (H) and RACE (M), matching the ones reported in Fairseq (\(86.5\%\) and \(81.3\%\)). Models on WinoGrande are trained for \(23,750\) steps with \(2,735\) warmup steps and a batch size of \(32\), reaching a \(76.3\%\) accuracy.
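A hedged sketch of this multiple-choice scoring scheme (the smaller roberta-base checkpoint and the example question are assumptions; RoBERTa's first token plays the role of "[CLS]"):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

question = "Where would you find a seat that moves?"
choices = ["swing", "cinema", "theatre", "office", "bus"]

# One input per candidate: the question concatenated with the answer choice.
batch = tok([question] * len(choices), choices, return_tensors="pt", padding=True)
cls = encoder(**batch).last_hidden_state[:, 0]   # "[CLS]"-position representations
logits = scorer(cls).squeeze(-1)                 # one score per choice
print(choices[logits.argmax()])
```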
Results and Analyses. The results of various sparse neural networks are demonstrated in Figure 1. We summarize our main observations below:
❶ _All sparse algorithms seemingly fail to find matching SNNs, even at trivial sparsities such as 36%._ While several methods maintain the dense performance at 20% sparsity, their performance starts to drop significantly after that and undergoes a catastrophic failure as the sparsity continues increasing. It is difficult even for the top-performing LTH to maintain the matching performance after the \(2^{nd}\) IMP iteration. This is in stark contrast with the behavior of SNNs on the image classification task, where LTH can gracefully preserve the matching performance even at very extreme sparsities (\(>\)95% on CIFAR-10/100 (Yin et al., 2022) and \(>\)80% on ImageNet (Renda et al., 2020)).
❷ _The quality of SNNs on harder tasks suffers more from sparsity._ Models trained on the hardest task, CSQA, undergo a larger accuracy loss at the same sparsity than on the other two datasets. For instance, all the SNNs on CSQA suffer from a catastrophic accuracy drop (up to 74%) and become no better than random prediction at 59% sparsity. Meanwhile, when trained on WinoGrande and RACE at 59% sparsity, two sparse algorithms (LTH and GMP) can maintain relatively good performance with a smaller performance loss (i.e., 3% \(\sim\) 10%).
❸ _Post-training pruning consistently outperforms prior-training pruning._ LTH achieves the best performance across datasets, GMP performs well, and OMP (After) follows behind. However, prior-training pruning achieves worse performance: OMP (Before) performs closely behind OMP (After), whereas SNIP performs no better than naive random pruning. After digging deeper into the case of SNIP, we find that SNIP aggressively prunes the embedding layers to more than 99% sparsity even at a mild overall sparsity of 20%. Surprisingly, the leading dynamic sparsity approach RigL does not bring significant gains to these prior-training approaches, and sometimes even hurts their performance.
### Arithmetic Reasoning
Figure 1: Commonsense reasoning performance of various sparse RoBERTa.

Implementation Details. We follow SVAMP (Patel et al., 2021) and choose the two top-performing tree-based models for arithmetic reasoning: GTS (Xie and Sun, 2019) and Graph2Tree (Zhang et al., 2020). Graph2Tree in general performs slightly better than GTS. GTS adopts an LSTM to encode input sequences and a tree-based decoder to generate equations. Graph2Tree uses a graph transformer to learn latent quantity representations from data, and a tree-structured decoder to generate a solution expression tree. We follow exactly the training settings of Patel et al. (2021). The embedding weights are inherited from the pre-trained RoBERTa. All models are trained with Adam for 50 epochs. On MAWPS and ASDiv-A, models are trained with the training data and then evaluated with 5-fold cross-validation based on pre-assigned splits. For SVAMP, we train the models on a combination of MAWPS and ASDiv-A and test them on SVAMP, following Patel et al. (2021).
Results and Analyses. The performance on arithmetic reasoning is reported in Figure 2. The overall performance trend is very similar to that of commonsense reasoning: SNNs can only match the dense performance when the sparsity level is lower than 48.8%, with the exception of Graph2Tree on the relatively simple MAWPS dataset, whose failing sparsity is \(59\%\); SNNs are prone to sacrificing more performance on the harder datasets (i.e., ASDiv-A and SVAMP) than on the easier MAWPS dataset; prior-training methods perform no differently than random pruning. Moreover, we want to highlight that LTH surprisingly reaches lower accuracy than OMP and GMP at high sparsity levels, indicating that iterative magnitude pruning does not necessarily generalize better than one-shot pruning on more complex tasks. Moreover, Magnitude Before Training (OMP (Before)) consistently causes severe layer collapse in the non-embedding layers, leading to zero accuracies. Since including the results of OMP (Before) would significantly dilute the performance differences of the different sparsification methods, we choose to report it in Appendix B.
### Protein Thermal Stability Prediction
#### 4.3.1 Sequence-Based Models
Implementation Details. We examine two classic sequence-based approaches in protein property prediction, _i.e._, TAPE (Rao et al., 2019) and ESM-1B (Rives et al., 2021). For TAPE, we fine-tune it from the official pre-training (Rao et al., 2019) for \(4\) epochs with an initial learning rate of \(1\times 10^{-4}\) and a linear decay scheduler together with \(100\) warm-up steps. As for ESM-1B (Rives et al., 2021), starting from the official pre-trained checkpoint, we fine-tune the backbone with a learning rate of \(1\times 10^{-6}\) and the linear classification head on top of the backbone with a learning rate of \(2\times 10^{-2}\). The learning rate schedulers used for both the backbone and the linear head are OneCycle (Smith and Topin, 2019) decay schedulers. The training batch size is \(2\) for Meltome Atlas and \(3\) for HotProtein (HP-S). Classification accuracy on test sets is collected to measure the model performance.
Figure 2: Arithmetic reasoning performance of various sparse GTS and Graph2Tree.

Results and Analyses. In this section, we examine diverse sparse neural networks of sequence-based models (_i.e._, transformers) on protein thermostability prediction benchmarks. ESM-1B (Rives et al., 2021), a SOTA approach in protein property modeling, is evaluated on both the HotProtein (HP-S) and Meltome Atlas datasets. TAPE (Rao et al., 2019) is a classic transformer-based model adopted on HotProtein (HP-S). Extensive results of both static and dynamic sparsifications are collected in Figure 3. We observe that ❶ For ESM-1B, all extracted sparse neural networks incur significant performance degradation whenever the sparsity level is larger than \(20\%\). Note that here we only sparsify the fully connected layers in the multi-head self-attentions & feed-forward networks of each transformer layer, and leave all other modules unpruned. Even under this loose condition, ESM-1B still fails after pruning on both HP-S and Meltome Atlas, which indicates the low parameter redundancy in ESM-1B for modeling protein thermal stability. ❷ In general, static and dynamic pruning algorithms achieve similar performance with ESM-1B. SNIP (Before) and SNIP + RigL (Before) deliver relatively better accuracies, especially for the high-sparsity (\(\geq 48.80\%\)) subnetworks on HP-S. ❸ As for TAPE, a weaker backbone compared with ESM-1B, magnitude-based prunings like LTH (After), OMP (After), and GMP (During) show satisfactory results before \(59\%\) sparsity.
Furthermore, we conduct a more fine-grained pruning study to investigate the tolerance of ESM-1B against sparsification. In detail, we prune \(5\%\) of the weights with OMP (After) on different modules in ESM-1B and present the obtained accuracies in Table 1. Q, K, V, O, and FFN represent the fully connected layers in the query, key, value, & output of the self-attention module and the feed-forward networks, respectively. The results show that whichever modules we select, even \(5\%\) **sparsity** damages the ESM-1B performance of protein thermostability prediction on HP-S\({}^{2}\)C2.
#### 4.3.2 Structure-Based Models
Implementation Details. We further consider a representative structure-based algorithm for thermostability prediction, _i.e._, ESM-IF1 (Hsu et al., 2022). Specifically, for ESM-IF1, we train the backbone and its linear head with learning rates of \(1\times 10^{-4}\) and \(2\times 10^{-2}\), respectively. A batch size of \(4\) is used for both models on HotProtein (HP-S\({}^{2}\)C5). Classification testing accuracy is reported to reflect the model performance.
Results and Analyses. In this section, we study structure-based models and their sparse variants on HotProtein (HP-S\({}^{2}\)C5). ESM-IF1 (Hsu et al., 2022), a recent SOTA approach, is chosen for benchmarking. It takes the 3D structure of a protein as input and predicts its thermal stability category. As shown in Figure 3, ESM-IF1 produces inferior sparse neural networks under all pruning mechanisms, both static and dynamic, with OMP (After) and GMP (During) presenting relatively better accuracies.
### Multilingual Translation
Implementation Details. We choose the official multilingual model mBART1 (Liu et al., 2020), which was originally pre-trained on 25 languages using masked language modeling (MLM), following the fine-tuning setting of Tang et al. (2020). We first choose \(10\) languages from the language pool used for MLM pre-training; create three sub-groups containing \(2\), \(5\), and \(10\) languages; and fine-tune mBART on each sub-group, referred to as \(2\)-to-\(2\), \(5\)-to-\(5\), and \(10\)-to-\(10\) multilingual translation fine-tuning, respectively. During inference, we report the averaged BLEU (Tang et al., 2020; Liu et al., 2020) scores of bi-directional translation across the 10 languages to measure the translation performance. Hence, the task difficulty monotonically decreases from \(2\)-to-\(2\) to \(5\)-to-\(5\), and to \(10\)-to-\(10\) fine-tuning as more languages are involved during training. The default fine-tuning configurations in Tang et al. (2020) are adopted for \(40\)K iterations with an Adam optimizer and a learning rate of \(1\times 10^{-6}\).
Footnote 1: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq)
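As a hedged sketch (the paper fine-tunes with fairseq, whereas the Hugging Face port of the 25-language checkpoint is used below; the example sentence pair is an assumption), one Fr-to-En fine-tuning step looks like:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tok = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25",
                                     src_lang="fr_XX", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

batch = tok("Bonjour le monde", text_target="Hello world", return_tensors="pt")
loss = model(**batch).loss   # cross-entropy for one Fr -> En pair
loss.backward()              # fine-tuning step (optimizer update omitted)
```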
Results and Analyses. Intuitively, fewer languages involved during fine-tuning leads to a more difficult translation task for all languages. As demonstrated in Figure 4, several consistent conclusions can be drawn: ❶ Besides OMP (After) and LTH (After), all other produced sparse subnetworks perform worse than the dense baseline when the sparsity is larger than \(20\%\).
| Pruned Modules | Accuracy (↑) |
| --- | --- |
| None (Dense) | 94.68 |
| Q, K, V, O, FFN | 92.19 |
| Q, K, V, O | 92.55 |
| Q, K, V | 93.62 |
| Q, V | 93.62 |
| K, V | 92.55 |

Table 1: OMP (After) pruning \(5\%\) of the weights of ESM-1B on different modules with HP-S\({}^{2}\)C2.
Figure 3: Protein prediction performance of various sparse models.
The BLEU scores of OMP (After) and LTH (After) also decline and fail to match at \(\geq 20\%\), \(\geq 48.8\%\), and \(\geq 59\%\) sparsity levels for \(2\)-to-\(2\), \(5\)-to-\(5\), and \(10\)-to-\(10\) fine-tuning, respectively.
❷ Magnitude-based sparsifications like OMP, LTH, and GMP are comparably robust across all three translation settings, while other pruning methods have negligible advantages compared to random pruning.
❸ While the overall tendency of SNNs is quite consistent across different tasks, the prunability of mBART increases as more languages are involved during fine-tuning. Multilingual translation is already a challenging task for pruning, and involving more languages at inference causes extra obstacles. This is why, in the hardest scenario of fine-tuning on \(2\)-to-\(2\) and evaluating with \(10\) languages, all sparse subnetworks suffer from substantial performance degradation.
### Why SNNs Fail on SMC-Bench
We conduct a thorough investigation into the reasons why most SNNs struggle to perform on SMC-Bench. Our analysis identifies two possible causes for the failure: (1) the "lazy regime" in LLMs, and (2) the specific model components that are pruned. Based on these findings, we discover a set of stronger pruning recipes that alleviates (though still more or less suffering from) the failure on SMC-Bench, by breaking down the performance of the state-of-the-art BERT-pruning framework - oBERT (Kurtic et al., 2022) on SMC-Bench (note that most models evaluated in this paper are also BERT-based). Due to the limited space, we present our full investigation in Appendix C, and briefly present our sanity check of layer collapse below.
**Does layer collapse occur unexpectedly on SMC-Bench?** Layer collapse is the most common cause that blocks the information flow (signal propagation) of sparse neural networks, leading to a catastrophic performance drop (Tanaka et al., 2020). We plot the layerwise sparsity ratios of various sparse models in Appendix C.1. We do not observe severe layer collapse across methods, except for SNIP, which specifically removes nearly entire embedding layers. However, we do observe an unexpected phenomenon: _layerwise sparsities of different magnitude-based pruning approaches (i.e., IMP, OMP, and GMP) are extremely similar_, all overlapping on one line, despite the significant performance gap among them (up to 42.3%); small differences only start to appear when reaching very deep layers (e.g., classification heads) (see Appendix C.1 for more details). This abnormal behavior is highly correlated with the "lazy regime" (Neyshabur et al., 2020; Malladi et al., 2022), where the model stays in the same basin during fine-tuning with fairly small weight changes, and hence all magnitude-based pruning approaches, before, during, and after fine-tuning, tend to collapse to the same solution.
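The per-layer ratios behind this sanity check can be computed with a one-liner; a minimal sketch:

```python
import torch

def layerwise_sparsity(model):
    """Fraction of exactly-zero weights in each parameter tensor."""
    return {name: (p == 0).float().mean().item()
            for name, p in model.named_parameters()}

net = torch.nn.Linear(4, 4)
print(layerwise_sparsity(net))
```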
## 5 Conclusion
Given the enormous breakthroughs and the fruitful results that sparse neural networks have achieved in numerous fields, it is necessary to rethink the sufficiency of current evaluation protocols and introduce more difficult and diverse benchmarks to explore the limitation of sparse neural networks. In this paper, we assemble a large-scale, challenging, and more diverse benchmark, SMC-Bench. Through our thorough evaluation across various leading sparsifications, we confirm that SMC-Bench notably challenges the capability of most magnitude- or/and gradient-based sparse algorithms. We further dig deeper into the behavior of SNNs, and observe several surprising phenomena that are absent in the current evaluation. Our analysis points out that such failure is intimately related to the "lazy regime", which leads us to a suite of strong pruning recipes that alleviates (yet still more or less suffering from) the failure on SMC-Bench. Our subsequent effort will focus on exploring stronger sparse training algorithms that can scale and generalize on SMC-Bench, and meanwhile will consider the training costs of different sparse algorithms for a more holistic picture of their efficiency benefits.
Figure 4: Multilingual performance of various sparse mBART models. All models are tested on 10-to-10 multilingual translation and the averaged BLEU scores are reported.
## Acknowledgement
We thank Dan Alistarh and Eldar Kurtic for the extremely helpful discussions about the implementation details of oBERT as well as our benchmark's claims; and Zhangheng Li for helping run extra experiments with oBERT. S. Liu and Z. Wang are in part supported by the NSF AI Institute for Foundations of Machine Learning (IFML). Part of this work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. NWO2021.060, EINF-2694 and EINF-2943/L1.
|
2310.16742 | Interferometric Neural Networks | On the one hand, artificial neural networks have many successful applications
in the field of machine learning and optimization. On the other hand,
interferometers are integral parts of any field that deals with waves such as
optics, astronomy, and quantum physics. Here, we introduce neural networks
composed of interferometers and then build generative adversarial networks from
them. Our networks do not have any classical layer and can be realized on
quantum computers or photonic chips. We demonstrate their applicability for
combinatorial optimization, image classification, and image generation. For
combinatorial optimization, our network consistently converges to the global
optimum or remains within a narrow range of it. In multi-class image
classification tasks, our networks achieve accuracies of 93% and 83%. Lastly,
we show their capability to generate images of digits from 0 to 9 as well as
human faces. | Arun Sehrawat | 2023-10-25T16:17:47Z | http://arxiv.org/abs/2310.16742v1 | # Interferometric Neural Networks
###### Abstract
On the one hand, artificial neural networks have many successful applications in the field of machine learning and optimization. On the other hand, interferometers are integral parts of any field that deals with waves such as optics, astronomy, and quantum physics. Here, we introduce neural networks composed of interferometers and then build generative adversarial networks from them. Our networks do not have any classical layer and can be realized on quantum computers or photonic chips. We demonstrate their applicability for combinatorial optimization, image classification, and image generation. For combinatorial optimization, our network consistently converges to the global optimum or remains within a narrow range of it. In multi-class image classification tasks, our networks achieve accuracies of 93% and 83%. Lastly, we show their capability to generate images of digits from 0 to 9 as well as human faces.
## I Introduction
Artificial neural networks (NNs) [1; 2; 3] have demonstrated remarkable success in a wide range of applications in various domains such as autonomous vehicles, healthcare, finance, robotics, gaming, over-the-top media services, and chatbots. Some of the applications include image classification [4; 5; 6], natural language processing, speech recognition, robotic control systems, recommendation systems, and image generation [7; 8; 9; 10]. Convolutional NNs (CNNs) were introduced in [4; 5] and further improved in seminal works like [6] for image classification. Similarly, generative adversarial networks (GANs) made their debut in [7] and underwent notable advancements in subsequent works, including [8; 9; 10] for image generation. In this paper, we introduce interferometric NNs (INNs) consisting of interferometers in Sec. II and then interferometric GANs (IGANs) composed of INNs in Sec. III.
Interferometers play a pivotal role in metrology, astronomy, optics, and quantum physics. Feynman, in his renowned lectures, introduced quantum (wave-particle) behavior through double-slit (two-path) interference experiments [11]. The wave-particle duality, a subject of the famous Bohr-Einstein debates, has been further investigated by numerous scientists using two- and multi-path interferometers (for instance, see [12; 13; 14; 15; 16; 17; 18; 19]). Our INNs are made of sequences of such multi-path interferometers, where each interferometer is a combination of beamsplitters (or fiber couplers) and a phase shifter [14; 17; 18; 20]. A beamsplitter is represented by a discrete Fourier transformation, and the phase shifters hold all the learnable parameters of an INN. In principle, an INN can work with classical or quantum waves. However, a sequence of interferometers can be viewed as a parameterized quantum circuit (PQC), where a Fourier transformation can be implemented exponentially faster than its classical counterpart thanks to Peter Shor [21].
The PQCs [22] are integral components of the variational quantum algorithms (VQAs), including the variational quantum eigensolver (VQE) [23; 24] and its variants [25; 26; 27; 28] for determining molecular energies and the quantum approximate optimization algorithm (QAOA) [29] for solving combinatorial optimization problems by recasting them as energy minimization in Ising spin glass systems [30]. Recognizing that fully fault-tolerant quantum computers may still be decades away, VQAs are harnessing the capabilities of today's noisy intermediate-scale quantum (NISQ) computers [31] by providing a framework to build a wide range of applications. Some of these applications include finding ground [23; 24] and excited [32; 33] energy states, combinatorial optimization [29; 30], simulating quantum dynamics [34; 35; 36; 37; 38], solving a system of linear [39; 40; 41] and non-linear [42; 43] equations, data classification [44; 45; 46; 47; 48; 49; 50; 51], and generative tasks [52; 53; 54; 55; 56; 57]. For an in-depth review of VQAs and NISQ algorithms, we refer readers to [68; 69]. An INN also belongs to the VQA family, as it can provide an ansatz for a VQA.
We employ INNs for combinatorial optimization and image classification in Sec. II and IGANs for image generation in Sec. III. The notion of quantum GANs was initially introduced in [53] and numerically implemented in [54] using PQCs. Subsequently, various quantum GANs have been proposed to generate samples from random distributions [55; 56; 57], as well as to generate images [58; 59; 60; 61; 62]. Leveraging the Born rule, which provides a probability distribution from a (parameterized) quantum state, PQCs as Born machines [63; 64; 65; 66] are utilized for these generative tasks.
We provide a summary of our contributions at the beginning of the next two sections and an overarching conclusion and outlook in Sec. IV. One can find a dedicated Jupyter Notebook for each of the problems discussed in this paper within our GitHub repository [70]. These problems are tackled using INNs, which are classically simulated in the PyTorch machine-learning framework [71]. PyTorch automatically manages the gradient of a loss or an energy function. For the training of each INN, we employ the Adam optimizer [72].
## II INN
In this section, our primary contributions are in (3)-(6), (14), Figs. 1-5, Table 1, and [70]. Here we introduce INNs through (3)-(6), (14), Figs. 1, 4, and 5. We first apply them to specific combinatorial optimization problems and illustrate their performance in Fig. 2. Subsequently, we use them for image classification tasks, and their performance is detailed in Fig. 3 and Table 1.
The INN in Fig. 1 is made of a sequence of interferometers, where each interferometer has \(d\) distinct paths represented by the computational (orthonormal) basis
\[\mathcal{B}=\left\{\left|0\right\rangle,\cdots,\left|d-1\right\rangle\right\} \tag{1}\]
of the associated Hilbert space. Basically, this whole setup is a \(d\)-level quantum computer. In the case of \(d=2^{n}\), one can construct an equivalent PQC of \(n\) qubits (2-level quantum systems) as shown in Fig. 1.
In a machine-learning task, one has to feed the data to a NN, and the data is a collection of feature vectors \(\mathbf{x}=(x_{0},\cdots,x_{d-1})\in\mathbb{R}^{d}\). Whereas, in the case of INN, we have to encode every \(\mathbf{x}\) into an input state vector \(\left|\mathbf{x}\right\rangle\) through a unitary transformation. Such a process is called feature encoding [51]. Throughout the paper, we consider the _amplitude_ encoding
\[\left|\mathbf{x}\right\rangle=\frac{1}{\left|\mathbf{x}\right|}\sum_{j=0}^{d- 1}x_{j}\left|j\right\rangle, \tag{2}\]
where \(\left|\mathbf{x}\right|\) is the Euclidean norm of \(\mathbf{x}\).
As shown in Fig. 1 following [14; 17; 18; 20], an interferometer has two essential elements: _beamsplitters_ (or fiber couplers) and a _phase shifter_, which, in our case, are mathematically described by the unitary operators
\[F=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}\sum_{j=0}^{d-1}\omega^{kj}\left|k\right\rangle\!\!\left\langle j\right|\quad\text{and}\quad U(\boldsymbol{\phi})=\sum_{j=0}^{d-1}\mathrm{e}^{\mathrm{i}\phi_{j}}|j\rangle\!\!\left\langle j\right|, \tag{3}\]
respectively. In (3), \(\omega=\mathrm{e}^{\mathrm{i}\frac{2\pi}{d}}\) is a \(d\)th root of unity, \(\mathtt{i}=\sqrt{-1}\), and \(\boldsymbol{\phi}=(\phi_{0},\cdots,\phi_{d-1})\in\mathbb{R}^{d}\) carry phases. Here the discrete Fourier transformation \(F\) acts as a generalization of a symmetric 50/50 beamsplitter for a 2-path interferometer. It turns a single path \(|j\rangle\) into an equal superposition of all the paths of \(\mathcal{B}\). The phase shifter \(U(\boldsymbol{\phi})\) shifts the phase by \(\phi_{j}\) in the \(j\)th path. By the way, using the operators from (3), one can realize a phase encoding \(\left|\mathbf{x}\right\rangle_{\mathrm{ph}}=U(\mathbf{x})F|0\rangle\) of the data, which might be useful in some applications. We want to emphasize that two layers of the Hadamard and diagonal unitary operations like \(U\) are used in [48] to present a feature map that is hard to simulate classically. Additionally, parameterized diagonal unitary operations such as \(U(\boldsymbol{\phi})\) have been employed to make a quantum simulation faster in [37; 38] (see also [36]).
For every problem, we have executed \(F\) in [70], utilizing the fast Fourier transform available in PyTorch. However, its implementation can reach exponential speedup on a quantum computer due to Shor's algorithm for quantum Fourier transforms [21]. The phase shifter can be decomposed as a product of controlled-phase gates [36]:
\[U(\boldsymbol{\phi}) =\prod_{j=0}^{d-1}\mathrm{cp}(\phi_{j})\,,\quad\text{where}\] \[\mathrm{cp}(\phi_{j}) =\mathrm{e}^{\mathrm{i}\phi_{j}}|j\rangle\!\!\left\langle j \right|+\left(I-|j\rangle\!\!\left\langle j\right|\right)=\mathrm{e}^{ \mathrm{i}\phi_{j}|j\rangle\!\!\left\langle j\right|} \tag{4}\]
and \(I\) is the identity operator. In the case of \(d=2^{n}\), with the binary representation of integers \(j=\sum_{\mu=1}^{n}2^{n-\mu}j_{\mu}\) and association \(|j\rangle\equiv\otimes_{\mu=1}^{n}|j_{\mu}\rangle\), one can show that \(\mathrm{cp}\) is indeed a controlled-phase gate with \(n-1\) control qubits. Although there are exponentially many \(\mathrm{cp}\) gates relative to \(n\) in the decomposition, they can all, in principle, be simultaneously realized due to their commutativity. By the way, for \(\phi_{j}=\pi\), \(\mathrm{cp}(\phi_{j})\) becomes a quantum oracle of Grover's algorithm [73].
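As a concrete illustration of how the two operators in (3) can be simulated classically, \(F\) is a norm-preserving FFT and \(U(\boldsymbol{\phi})\) is an elementwise multiplication by unit-modulus phases. The following minimal PyTorch sketch is indicative of, but not identical to, the code in [70]:

```python
import torch

def beamsplitter(state: torch.Tensor) -> torch.Tensor:
    """Apply F of (3): a unitary discrete Fourier transform
    (the 1/sqrt(d) factor comes from norm='ortho')."""
    return torch.fft.fft(state, norm="ortho")

def phase_shifter(state: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
    """Apply U(phi) of (3): multiply the j-th amplitude by exp(i * phi_j)."""
    return torch.exp(1j * phi) * state
```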
Now, we have all the elements to describe the INN of Fig. 1. The INN is made of \(L\) layers of interferometers, where the output from one is fed to the next. An input to the first interferometer is \(\left|\mathbf{x}\right\rangle\) from (2), the output from
Figure 1: The top and bottom pictures show an INN and its equivalent PQC in the case of \(d=2^{n}\), respectively. The INN is a sequence of \(L\) interferometers, where the output from one is fed to the next. Each interferometer has \(d\) distinct paths colored in green, beamsplitters (or fiber couplers) exhibited by blue-colored slabs, and a phase shifter shown by red disks in the top picture. In the bottom picture, brown wires depict \(n\) qubits, and the beamsplitter and phase shifter are displayed by blue and red-rounded rectangles, respectively. An input to the INN or to the PQC is a ket \(\left|\mathbf{x}\right\rangle\) from (2), the beamsplitter \(F\) and phase shifters \(U(\boldsymbol{\phi}^{l})\) are given in (3), and the output \(\left|\mathbf{x},\Phi\right\rangle\) is read by the detectors shown in black. The whole setup is summarized in (5).
the last one is
\[\ket{\mathbf{x},\Phi}=\left(\prod_{l=1}^{L}FU(\boldsymbol{\phi}^{l})\right)F\ket{\mathbf{x}},\quad\text{and}\quad p=|\langle 0|\mathbf{x},\Phi\rangle|^{2} \tag{5}\]
is the probability of getting 0th outcome (that is, detecting quantum particles in the 0th path). Equivalently, in the case of \(n\) qubits, we have to perform measurements on all the qubits to get \(p\) as \(\ket{0}=\ket{0\cdots 0}\) in the binary representation. Equations in (5) completely specify the INN in Fig. 1, and all its learnable parameters are denoted by \(\Phi=(\mathbf{\phi}^{1},\cdots,\mathbf{\phi}^{L})\in\mathbb{R}^{L\times d}\), which carries \(L\) phase-vectors, one for each interferometer.
One can view the INN as a differentiable function \(p:\mathbb{R}^{L\times d}\rightarrow[0,1]\) on the parameter space. In optimization or machine-learning tasks, PyTorch automatically manages the gradient computation of an energy or a loss function [given in (8), (11), and (17)] of \(p\) with respect to parameter updates. Without such a framework, we would need to compute derivatives such as
\[\frac{\partial p}{\partial\phi_{j}^{l}}=\bra{\mathbf{x}}\cdots\partial_{j}U^{\dagger}(\boldsymbol{\phi}^{l})\cdots\ket{0}\!\bra{0}\cdots U(\boldsymbol{\phi}^{l})\cdots\ket{\mathbf{x}}+\bra{\mathbf{x}}\cdots U^{\dagger}(\boldsymbol{\phi}^{l})\cdots\ket{0}\!\bra{0}\cdots\partial_{j}U(\boldsymbol{\phi}^{l})\cdots\ket{\mathbf{x}},\text{ where}\] \[\partial_{j}U(\boldsymbol{\phi}^{l})=\frac{\partial U}{\partial\phi_{j}^{l}}=\mathtt{i}\,\mathrm{e}^{\mathrm{i}\phi_{j}^{l}}|j\rangle\!\langle j|=\frac{\partial\mathrm{cp}(\phi_{j}^{l})}{\partial\phi_{j}^{l}}\,. \tag{6}\]
The second equation in (6) expresses the derivative of the \(l\)th phase shifter with respect to the phase \(\phi_{j}^{l}\), and it also turns out to be the derivative of the cp operator. Similarly, one can get the derivative \(\partial_{j}U^{\dagger}(\mathbf{\phi}^{l})\) of the adjoint (denoted by \(\dagger\)) of \(U\).
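Putting (2), (3), and (5) together, a differentiable classical simulation of the INN of Fig. 1 can be sketched as below; the \(L\times d\) phases form a single learnable tensor, and autograd supplies the derivatives of (6). This is an illustrative sketch rather than the exact implementation of [70].

```python
import torch

class INN(torch.nn.Module):
    """A sequence of L interferometers on a d-dimensional state, as in (5)."""
    def __init__(self, d: int, L: int):
        super().__init__()
        self.Phi = torch.nn.Parameter(2 * torch.pi * torch.rand(L, d))  # all phases

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Amplitude encoding (2): normalize the features into a complex state.
        state = (x / x.norm(dim=-1, keepdim=True)).to(torch.complex64)
        state = torch.fft.fft(state, norm="ortho")        # initial beamsplitter F
        for phi in self.Phi:                              # apply F U(phi^l) per layer
            state = torch.fft.fft(torch.exp(1j * phi) * state, norm="ortho")
        return state[..., 0].abs() ** 2                   # p = |<0|x, Phi>|^2 of (5)
```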
Now, we demonstrate applications of the INN first for quadratic unconstrained binary optimization (QUBO) and then for classification. A \(n\)-variable QUBO problem is equivalent to finding a minimum energy eigenstate of an associated \(n\)-qubit Hamiltonian (for many such problems, see [30])
\[H=\sum_{\mu,\nu=1}^{n}\mathsf{Q}_{\mu\nu}P_{\mu}P_{\nu}\,, \tag{7}\]
where \(\mathsf{Q}\) is a \(n\times n\) real matrix that defines the problem, and the projector \(P_{\mu}\) acts on the \(\mu\)th qubit as \(P_{\mu}|j_{\mu}\rangle=j_{\mu}|j_{\mu}\rangle,j_{\mu}\in\{0,1\}\). By the binary representation \(j=\sum_{\mu}2^{n-\mu}j_{\mu}\) and \(|j\rangle=\otimes_{\mu}|j_{\mu}\rangle\), \(H\) is diagonal in the computational basis \(\mathcal{B}\) of (1). The global minimum will be \(E_{j}=\langle j|H|j\rangle\) for some \(j\in\{0,\cdots,d-1\}\). Since \(\{E_{j}\}\) does not have any structure like convex functions, finding the minimum out of exponentially (\(d=2^{n}\)) many possibilities is an NP-hard problem.
Several successful VQAs, including the VQE [23; 24] and QAOA [29], have been proposed to find good approximate solutions. Their basic structure involves preparing a parametric state (referred to as an ansatz), computing and minimizing the energy expectation value, and obtaining an approximate solution from the optimized ansatz. A QAOA ansatz is constructed by sequentially applying the problem and mixer unitary operators, whereas the VQE ansatz is generated by applying unitary coupled cluster operators to an initialized state (for more details, see [68; 24; 69]). Our approach, narrated next, is similar, with the key distinction being that our ansatz is derived from the INN of Fig. 1.
We start with the input ket \(\ket{\mathbf{x}}=\ket{0}\) and reach the output ket \(\ket{0,\Phi}\equiv\ket{\Phi}\) as per (5). Then we perform measurements to compute the energy expectation value \(\mathcal{E}(\Phi)\) to minimize it over the parameter space and obtain
\[\Phi_{\text{sol}} =\underset{\Phi}{\text{argmin}}\ \mathcal{E}(\Phi)\,, \tag{8}\] \[\mathcal{E}(\Phi) =\bra{\Phi}H|\Phi\rangle=\sum_{j=0}^{d-1}E_{j}|\langle j|\Phi \rangle|^{2}=\sum_{\mu,\nu=1}^{n}\mathsf{Q}_{\mu\nu}\langle P_{\mu}P_{\nu} \rangle\,.\]
Usually, quantum and classical computers are put in a loop to compute and minimize \(\mathcal{E}\), respectively. To compute \(\mathcal{E}\), one needs only \(n^{2}\) expectation values \(\langle P_{\mu}P_{\nu}\rangle\) of local observables. After (8), we perform measurements on the optimized ansatz \(\ket{\Phi_{\text{sol}}}\) in the computational basis \(\mathcal{B}\) to get the most probable outcome
\[j_{\text{sol}}=\underset{j}{\text{argmax}}\ \lvert\langle j|\Phi_{\text{sol}} \rangle\rvert^{2} \tag{9}\]
as our solution. If the global minimum \(E_{\text{min}}=\min_{j}\{E_{j}\}\) is known then one can compute
\[\text{optimality gap}=\left|\frac{E_{j_{\text{sol}}}-E_{\text{min}}}{E_{ \text{min}}}\right|\times 100\,, \tag{10}\]
where \(E_{j_{\text{sol}}}=\langle j_{\text{sol}}|H|j_{\text{sol}}\rangle\) corresponds to the obtained solution via the INN.
To generate QUBO instances for \(n=17\) qubits, we sampled \(\mathsf{Q}\) from one of two distributions: the uniform
Figure 2: The plot illustrates optimality gaps for two distinct sets of QUBO problem instances, each containing one thousand instances for \(n=17\) qubits. The red circles (\(\circ\)) and blue triangles (\(\blacktriangle\)) represent the gaps for the two sets where the \(\mathsf{Q}\) matrix is sampled from the discrete uniform and standard normal distributions, respectively. The INN consistently yields the global minimum, achieving a success rate of 95% and 89% for the uniform and normal distributions, respectively. In both cases, the optimality gaps remain under 2%.
distribution over the set \(\{-10,\cdots,10\}\) of integers and the standard normal distribution. From each distribution, we generated one thousand QUBO instances. For each instance, we obtained \(E_{\mathrm{min}}\) through an exhaustive search over \(d=2^{17}=131072\) possibilities and reported the optimality gap in Fig. 2. In the case of the discrete uniform distribution, we achieved the global minimum in 958 out of 1000 instances, meaning that the gap was \(\leqslant 10^{-10}\). Over all the instances, the mean and maximum gaps were 0.04% and 1.72%, respectively. When using the standard normal distribution, we reached the global minimum in 894 instances, and the mean and maximum gaps were 0.04% and 1.52%, respectively. For each instance, we maintained a consistent configuration with \(L=2\) layers, and carried out 201 epochs of training with a learning rate of 0.05 and \(\mathrm{betas}=(0.5,0.9)\) for the Adam optimizer. Further details can be found in [70].
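For concreteness, the whole QUBO pipeline of (7)-(9) admits a compact classical rendering: precompute the diagonal energies \(E_{j}\), start the INN from \(|\mathbf{x}\rangle=|0\rangle\), minimize the expectation (8) with Adam, and read off (9). The sketch below uses the hyperparameters just stated (\(L=2\), 201 epochs, learning rate 0.05, betas \((0.5,0.9)\)) but is only indicative of the notebook in [70].

```python
import torch

def inn_state(Phi: torch.Tensor, d: int) -> torch.Tensor:
    """Prepare |0, Phi> of (5) starting from the input ket |x> = |0>."""
    state = torch.zeros(d, dtype=torch.complex64)
    state[0] = 1.0
    state = torch.fft.fft(state, norm="ortho")
    for phi in Phi:
        state = torch.fft.fft(torch.exp(1j * phi) * state, norm="ortho")
    return state

def solve_qubo(Q: torch.Tensor, L: int = 2, epochs: int = 201, lr: float = 0.05) -> int:
    n = Q.shape[0]
    d = 2 ** n
    # Diagonal energies E_j of (7) for every bitstring j (exhaustive, so n must be small).
    bits = ((torch.arange(d)[:, None] >> torch.arange(n - 1, -1, -1)) & 1).float()
    E = torch.einsum("ju,uv,jv->j", bits, Q, bits)
    Phi = torch.nn.Parameter(2 * torch.pi * torch.rand(L, d))
    opt = torch.optim.Adam([Phi], lr=lr, betas=(0.5, 0.9))
    for _ in range(epochs):
        energy = (E * inn_state(Phi, d).abs() ** 2).sum()    # expectation (8)
        opt.zero_grad()
        energy.backward()
        opt.step()
    with torch.no_grad():
        return int((inn_state(Phi, d).abs() ** 2).argmax())  # j_sol of (9)
```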
It will be interesting to see how the INN performs if we increase the system size \(n\). For this, we need _quantum_ hardware that can store \(|\Phi\rangle\) and return \(\mathcal{E}(\Phi)\). Beyond a certain \(n\), classical hardware cannot store the exponentially many numbers \(\langle j|\Phi\rangle\) for \(j=0,\cdots,2^{n}-1\); this is where the true power of a quantum computer lies [23]. We are not exploring this direction any further but moving to the next problem.
In a \(C\)-class classification problem, our goal is to predict the true label \(y=(y_{1},\cdots,y_{C})\in\{0,1\}^{C}\) for a data point based on its features provided in \(\mathbf{x}\). When the true class is \(c\), \(y_{c}=1\), and the remaining components of \(y\) are set to zero. During the training process of a NN for the problem, the objective is to minimize a specific loss function, such as the cross-entropy (negative log-likelihood)
\[\mathcal{L}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}y_{c}^{[i]}\ln(p_{c}^{[i]}) \tag{11}\]
within the parameter space. The outer summation in (11) is performed over a mini-batch \(\{\mathbf{x}^{[i]},y^{[i]}\}_{i=1}^{N}\) containing \(N\) data points.
Suppose the dataset contains only \(C=2\) classes, then one can interpret \(p\) of (5) as the probability of a data point belonging to the positive class (\(c=1\)) and \(1-p\) as the probability of it belonging to the negative class (\(c=0\)). Consequently, the INN exhibited in Fig. 1 can be employed for binary classification. To illustrate this, we have utilized the MNIST dataset [74], which contains images of 0 to 9 handwritten digits. To create a 2-class classification problem, we gathered all the images of only two specific digits, denoted as \(a\) and \(b\), which are respectively labeled as the negative class \(y=(1,0)\) and the positive class \(y=(0,1)\). After the training of the INN, we evaluate its performance by using
\[\mathrm{accuracy} = \frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{ FP}+\mathrm{FN}}\quad\mathrm{and}\] \[\mathrm{f}_{1} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP}+\mathrm{FP}+\mathrm{FN}} \tag{12}\]
on the test set. The results are presented in Fig. 3 for every \(a,b\in\{0,\cdots,9\}\) provided \(a\neq b\). Here, TP, TN, FP, and FN represent the counts of true positives, true negatives, false positives, and false negatives, respectively. Note that if we were to interchange the labels of \(a\) and \(b\), then \(\mathcal{L}\), the trained INN, accuracy, and \(\mathrm{f}_{1}\) score will be different. As a result, the matrices in Fig. 3 are not symmetric around their diagonals. Various binary classifications have been carried out on the MNIST dataset in [44; 45; 46; 49; 51] employing diverse PQCs. In our case, we achieve comparable performance, as illustrated in Fig. 3.
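The two scores in (12) are direct functions of the confusion counts; a minimal helper:

```python
def accuracy_and_f1(tp: int, tn: int, fp: int, fn: int):
    """Accuracy and f1 score of (12) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, f1
```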
An image consists of \(d=\mathsf{C}\times\mathsf{H}\times\mathsf{W}\) pixel values, where \(\mathsf{C}\), \(\mathsf{H}\), and \(\mathsf{W}\) denote the number of channels, height, and width in pixels, respectively. Since the MNIST dataset contains black and white images, \(\mathsf{C}=1\). To get the results of Fig. 3, we have pre-processed the data, setting \(\mathsf{H}=\mathsf{W}=2^{4}\), and scaled all pixel values to the range of \(-1\) to \(1\). After flattening an image, we obtain a \(d=2^{8}\) component feature vector \(\mathbf{x}\), which is then transformed into
Figure 3: The top and bottom matrices display the accuracy and \(\mathrm{f}_{1}\) scores of trained INNs for binary classification problems. The entry in \(a\)th row and \(b\)th column corresponds to the image classification task of digit \(a\) (negative class) against digit \(b\) (positive class). It can be observed that the INN achieves accuracy ranging from 0.81 to 0.99 and \(\mathrm{f}_{1}\) scores between 0.75 and 0.99.
\(|\mathbf{x}\rangle\) following (2). Subsequently, we pass \(|\mathbf{x}\rangle\) through the INN shown in Fig. 1, utilizing \(L=2\) layers. Other hyperparameters, including the learning rate, betas, batch size, and number of epochs, are set to \(0.01\), \((0.5,0.9)\), \(2^{6}\), and \(10\), respectively (for additional details, refer to [70]). For every \((a,b)\), we maintain the same settings as described above to obtain the results presented in Fig. 3. In every case, the INN achieves an accuracy of more than \(80\%\) and \(\mathrm{f}_{1}\) score more than \(75\%\). These scores can potentially be further improved by modifying the INN architecture and fine-tuning the hyperparameters.
To classify images of the MNIST dataset into the \(C=10\) classes, we need a NN that takes a \(d_{\mathrm{in}}\)-dimensional input and provides a \(d_{\mathrm{out}}\)-dimensional output, where \(d_{\mathrm{out}}\) is not necessarily the same as \(d_{\mathrm{in}}\). Let us take \(d_{\mathrm{in}}=3\), \(d_{\mathrm{out}}=2\), and compare

\[\mathbf{x}^{\prime}=W\mathbf{x}=\begin{pmatrix}w_{00}&w_{01}&w_{02}\\ w_{10}&w_{11}&w_{12}\end{pmatrix}\begin{pmatrix}x_{0}\\ x_{1}\\ x_{2}\end{pmatrix}\quad\text{with}\quad\left|\mathbf{x}^{\prime}\right\rangle=FU(\boldsymbol{\phi})\left|\mathbf{x}\right\rangle. \tag{13}\]
The above equations represent a single linear layer of a classical NN (multi-layer perceptron) [1, 2, 3] and a single layer of INN of Fig. 1, respectively. Here, we have used amplitude encoding (2) by assuming \(\|\mathbf{x}\|=1\). The linear layer combines the input features from \(\mathbf{x}\) in a weighted manner to generate new features in \(\mathbf{x}^{\prime}\), while the INN layer creates _interference_ among the input features to generate new features.
Two more differences can be observed through (13). Firstly, the output dimension is \(2\) in the case of the linear layer, while it is \(3\) (not equal to the desired \(d_{\mathrm{out}}\)) for the INN layer. Secondly, each row of \(W\) is _unrelated_ to the others, whereas the rows of any unitary matrix must follow the _orthogonality relation_. As a result, we get _distinct_ new features \(\sum_{j}w_{0j}x_{j}\) and \(\sum_{j}w_{1j}x_{j}\) from the linear layer. Whereas, from the INN layer, the new features like \(\sum_{j}\mathrm{e}^{\mathrm{i}\phi_{j}}x_{j}\) and \(\sum_{j}\mathrm{e}^{\mathrm{i}\phi_{j}}x_{j}\,\omega^{j}\) represent very similar functions (machine-learning models) of the _same_ set \(\boldsymbol{\phi}\) of parameters, and \(\omega=\mathrm{e}^{\mathrm{i}\frac{2\pi}{3}}\) is a constant. Furthermore, their derivatives with respect to a parameter only differ by a constant. So, to get \(d_{\mathrm{out}}\) _distinct_ features, we put \(d_{\mathrm{out}}\) distinct INNs of Fig. 1 in _parallel_ and obtain
\[|\mathbf{x},\Phi^{t}\rangle=\mathbf{U}(\Phi^{t})|\mathbf{x}\rangle\,,\quad\text{where}\quad\mathbf{U}(\Phi^{t})=\left(\prod_{l=1}^{L}FU(\boldsymbol{\phi}^{t,l})\right)F\quad\text{and}\quad p_{t}=|\langle 0|\mathbf{x},\Phi^{t}\rangle|^{2} \tag{14}\]
for \(t=0,\cdots,d_{\mathrm{out}}-1\). Similar to how (5) represents a sequence of interferometers, (14) portrays a block of sequences of interferometers, as displayed in Fig. 4. In the figure, we also illustrate the similarity of (14) to a linear layer that represents \(d_{\mathrm{out}}\) linear regression models in _parallel_. One can observe that, from the same \(|\mathbf{x}\rangle\), we get \(d_{\mathrm{out}}\) distinct probabilities \(p_{t}\) through \(d_{\mathrm{out}}\)_mutually independent_ unitary operators \(\mathbf{U}(\Phi^{t})\), each of which represents a sequence of \(L\) interferometers. Since these probabilities are independent of each other, they are not required to sum up to \(1\). If their normalization is needed, one can apply the function
\[\mathrm{softmax}\left(p_{t}\right)=\frac{\mathrm{e}^{p_{t}}}{\sum_{t^{\prime}=0}^{d_{\mathrm{out}}-1}\mathrm{e}^{p_{t^{\prime}}}} \tag{15}\]
to every component of \(\mathbf{p}=(p_{0},\cdots,p_{d_{\mathrm{out}}-1})\). Essentially, the INN block (14) creates distinct new features \(\mathbf{p}\) from the old \(\mathbf{x}\) through independent interferences. The vector \(\mathbf{p}\) can serve as a final output or an input to the next INN block as shown in Fig. 5.
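An interferometric block of (14) amounts to \(d_{\mathrm{out}}\) copies of the sequence in (5) with independent phase tensors; stacking the phases along a leading axis lets the whole block run as one batched pass. A sketch under the same conventions as the earlier listings:

```python
import torch

class INNBlock(torch.nn.Module):
    """d_out parallel sequences of L interferometers on a d_in-dim input, as in (14)."""
    def __init__(self, d_in: int, d_out: int, L: int):
        super().__init__()
        self.Phi = torch.nn.Parameter(2 * torch.pi * torch.rand(d_out, L, d_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); amplitude-encode once, then broadcast over the d_out copies.
        state = (x / x.norm(dim=-1, keepdim=True)).to(torch.complex64)
        state = torch.fft.fft(state, norm="ortho").unsqueeze(-2)  # (batch, 1, d_in)
        for l in range(self.Phi.shape[1]):
            state = torch.fft.fft(torch.exp(1j * self.Phi[:, l]) * state, norm="ortho")
        return state[..., 0].abs() ** 2                           # p: (batch, d_out)
```

Chaining two such blocks, e.g. `INNBlock(2**8, 2**6, L)` followed by `INNBlock(2**6, C, L)`, realizes the \(\mathbf{d}=(2^{8},2^{6},C)\) architecture of Fig. 5 used below.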
Figure 5 shows an INN whose architecture is defined by \(\mathbf{d}=(d_{0},\cdots,d_{M-1},d_{M})\) and \(\mathbf{L}=(L_{0},\cdots,L_{M-1})\), where the number of blocks \(M\) is \(2\). Its \(m\)th block takes \(d_{m}\)-dimensional input and gives \(d_{m+1}\)-dimensional output. Within the block, there are \(d_{m+1}\) parallel sequences of interferometers represented by \(\otimes_{t=0}^{d_{m+1}-1}\mathbf{U}(\Phi^{t})\), and the length of each sequence is \(L_{m}\).
While the INN in Fig. 5 draws inspiration from multilayer perceptrons (NNs) [1, 2, 3] like the quantum NNs (QNNs) in [75, 76, 44], it differs in the following aspects. A single quantum perceptron in [75] is a sum of single-qubit unitary operators, whereas it is a product of unitary operators in the case of INN as shown in (5). Compared
Figure 4: The top and bottom pictures exhibit a linear layer [for example, given in (13)] of a classical NN and an interferometric block [specified by (14)] of an INN, respectively. By the color coding, one can see a similar _parallel_ structure in both cases. The linear layer and INN block turn the input \(\mathbf{x}\in\mathbb{R}^{d_{\mathrm{in}}}\) into the outputs \(\mathbf{x}^{\prime}\in\mathbb{R}^{d_{\mathrm{out}}}\) and \(\mathbf{p}\in[0,1]^{\times d_{\mathrm{out}}}\), respectively. In the bottom picture, the squares represent \(d_{\mathrm{in}}\times d_{\mathrm{in}}\) _independent_ unitary matrices \(\mathbf{U}(\Phi^{t})\), each of which is a sequence of interferometers as depicted in Fig. 1.
to [44; 76], the INN has a parallel structure of unitary operators, as depicted in Fig. 4. In contrast to the QNNs from [75; 76; 44], where each node represents a qubit, INNs do not necessitate a qubit system. INNs are applicable in any dimension and explicitly incorporate Fourier transformations.
We have employed INNs, as illustrated in Fig. 5, for image classification on both the MNIST and FashionMNIST [77] datasets, each with \(C=10\) classes. The FashionMNIST dataset, like MNIST, consists of images representing clothing items from ten (labeled \(0\) to \(9\)) different categories. In both cases, we have adopted \(\mathbf{d}=(2^{8},2^{6},C)\), the amplitude encoding of \(16\times 16\) images, \(0.01\) learning rate, \(2^{5}\) batch size, and \(3\) epochs (for further details, refer to [70]). Their performances on the test sets are presented in Table 1 for \(\mathbf{L}=(1,1)\) and \((2,2)\). As we increase the number of layers, the performance--measured by the accuracy and average f\({}_{1}\)--of the INN improves for both datasets. In summary, we have attained accuracies and average f\({}_{1}\) scores of \(93\%\) and \(83\%\) on the MNIST and FashionMNIST datasets, respectively.
The f\({}_{1}\) score in (12) pertains to the positive class. We can utilize a similar formula to calculate the f\({}_{1}\) score for each class and then derive the average f\({}_{1}\) score. It is also possible to extend the accuracy formula presented in (12) from \(C=2\) to \(C=10\) classes. In the case of the cross-entropy loss function described in (11), it is essential to ensure that the probabilities are appropriately normalized. This can be accomplished by employing the softmax function outlined in (15).
We conclude this section with the following two observations: (i) It is of interest to investigate whether the pair \(\{U(\mathbf{\phi}),F\}\) from (3) constitutes a universal set--capable of generating any unitary operation through multiplications--for quantum computation with a \(d\)-level system. Notably, for \(d=2\), it is a known universal set [78]. Moreover, \(U(\mathbf{\phi})\) encompasses all diagonal unitary operations and can be expressed as a linear combination of different powers of \(Z=\sum_{j=0}^{d-1}\omega^{j}|j\rangle\langle j|\). Through multiplication, \(Z\) and \(X=F^{3}ZF\) can generate the Heisenberg-Weyl group [79], whose elements constitute the unitary operator bases [80].
(ii) As a black and white image is a two-dimensional (2D) object, instead of (2), one can use
\[|\mathbf{x}\rangle_{\text{2D}}=\frac{1}{\|\mathbf{x}\|}\sum_{j=0}^{\mathsf{H}-1}\sum_{j^{\prime}=0}^{\mathsf{W}-1}x_{jj^{\prime}}\,|j\rangle\otimes|j^{\prime}\rangle\,, \tag{16}\]
to encode an image \(\mathbf{x}\) into two quantum subsystems with dimensions \(\mathsf{H}\) and \(\mathsf{W}\), where \(d=1\times\mathsf{H}\times\mathsf{W}\). Then, one can use the tensor product \(F_{\mathsf{H}}\otimes F_{\mathsf{W}}\) and \(U(\mathbf{\phi})=\sum_{k}\sum_{k^{\prime}}\mathrm{e}^{\mathrm{i}\phi_{kk^{\prime }}}|k\rangle\!\langle k|\otimes|k^{\prime}\rangle\!\langle k^{\prime}|\) at the place of (3). The tensor product performs a 2D discrete Fourier transform on the matrix \(x_{jj^{\prime}}\), with \(F_{\mathsf{H}}\) and \(F_{\mathsf{W}}\) operating exclusively on their respective subsystems. Afterward, \(U(\mathbf{\phi})\) executes a componentwise multiplication between the phase matrix (filter) \(\mathrm{e}^{\mathrm{i}\phi_{kk^{\prime}}}\) and the Fourier-transformed matrix \(\widehat{x}_{kk^{\prime}}\). As per the convolution theorem, such multiplication in the frequency domain is linked to _convolution_ in the spatial domain through Fourier transforms. Fourier transforms are fundamental components of digital image processing, particularly for designing various filters in the frequency domain [81].
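The frequency-domain filtering in observation (ii) is a few lines with a 2D FFT; by the convolution theorem, the componentwise phase multiplication below corresponds to a convolution in the spatial domain. A minimal sketch:

```python
import torch

def phase_filter_2d(img: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
    """Apply the tensor product F_H (x) F_W followed by U(phi)
    to an H-by-W image encoded as in (16)."""
    state = (img / img.norm()).to(torch.complex64)
    spectrum = torch.fft.fft2(state, norm="ortho")   # 2D discrete Fourier transform
    return torch.exp(1j * phi) * spectrum            # componentwise phase filter e^{i phi_kk'}
```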
## III IGAN
In this section, we introduce IGANs, and our primary contributions are summarized as follows: Tables 2 and 3 detail the architectures and hyperparameters of IGANs, Algorithm 1 outlines the training process, and the implementation can be found in [70]. Figure 9 offers insights into how losses and probabilities evolve during the training, and Figs. 7 and 8 display sample images generated after the training process.
A GAN comprises two competing networks: a generator \(G\), which maps a noise vector \(\mathbf{z}\) to fake data \(G(\mathbf{z})\), and a discriminator \(D\), which takes real or generated data and provides a probability score for an input being real.
To enhance classification accuracy, the discriminator aims to drive \(D(\mathbf{x})\) closer to 1 through maximization and \(D(G(\mathbf{z}))\) closer to 0 through minimization. Both objectives are accomplished by minimizing the discriminator's loss
\[\mathcal{L}_{D} =-\frac{1}{2N}\sum_{i=1}^{N}\left[\ln(D(\mathbf{x}^{[i]}))+\ln(1-D (G(\mathbf{z}^{[i]})))\right]\,,\] \[\mathcal{L}_{G} =-\frac{1}{N}\sum_{i=1}^{N}\ln(D(G(\mathbf{z}^{[i]}))) \tag{17}\]
The generator's loss \(\mathcal{L}_{G}\) in (17) serves as a guide for the generator. It encourages the generator to produce increasingly realistic data, thereby pushing \(D(G(\mathbf{z}))\) closer to 1. In (17), every summation is over a mini-batch, where \(\{\mathbf{x}^{[i]}\}_{i=1}^{N}\) originates from the given data and \(\{\mathbf{z}^{[i]}\}_{i=1}^{N}\) is drawn from a prior (in our case, the standard normal) distribution.
The two networks are trained simultaneously as described in Algorithm 1, inspired by [7]. Initially, \(\mathcal{L}_{D}\) is minimized with respect to the discriminator's parameters, followed by minimizing \(\mathcal{L}_{G}\) with respect to the generator's parameters within a single iteration (epoch). The two networks compete with each other and seek the Nash equilibrium point, where the discriminator is unable to distinguish between real and fake data. There, both probabilities \(D(\mathbf{x})\) and \(D(G(\mathbf{z}))\) attain a value of \(\frac{1}{2}\), resulting in \(\mathcal{L}_{D}=\ln(2)=\mathcal{L}_{G}\).
Training a GAN can be a challenging and unstable process. Therefore, several GAN variants, such as Deep Convolutional GANs [8] and Wasserstein GANs [9; 10], have been developed. Deep Convolutional GANs introduced architectural guidelines for generator and discriminator networks, leading to more stable training and the generation of realistic and detailed images. Wasserstein GANs introduced the Wasserstein distance as a more stable and meaningful loss function for GAN training, effectively addressing issues like mode collapse.
```
Input: batch size \(N\), number of epochs, learning rate, \(\mathrm{Adam}\) hyperparameters \((\beta_{1},\beta_{2})\) from Table III.
\(\bullet\) Initialize parameters of discriminator \(\boldsymbol{\Phi}_{D}\) and of generator \(\boldsymbol{\Phi}_{G}\) with the Xavier initialization [82].
for number of epochs do
    for number of mini-batches do
        \(\bullet\) Sample mini-batch \(\{\mathbf{z}^{[i]}\}_{i=1}^{N}\) from the standard normal distribution \(\mathcal{N}(0,1)\).
        \(\bullet\) Sample mini-batch \(\{\mathbf{x}^{[i]}\}_{i=1}^{N}\) from the training dataset.
        \(\bullet\) Update \(\boldsymbol{\Phi}_{D}\leftarrow\mathrm{Adam}(\nabla_{\boldsymbol{\Phi}_{D}}\mathcal{L}_{D},\beta_{1},\beta_{2})\)
        \(\bullet\) Update \(\boldsymbol{\Phi}_{G}\leftarrow\mathrm{Adam}(\nabla_{\boldsymbol{\Phi}_{G}}\mathcal{L}_{G},\beta_{1},\beta_{2})\)
    end for
end for
```
**Algorithm 1** IGAN training algorithm
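A PyTorch rendering of one inner iteration of Algorithm 1 with the losses of (17) is sketched below; `D` and `G` are generic handles for the two INNs, the `tanh` remapping is the one introduced in (18) below, and the small clamp is only a numerical guard on the logarithms. The sketch is indicative rather than the exact code of [70].

```python
import torch

def igan_step(D, G, real, opt_D, opt_G, latent_dim: int):
    """One inner iteration of Algorithm 1 with the losses of (17)."""
    N = real.shape[0]
    z = torch.randn(N, latent_dim)                  # mini-batch from N(0, 1)
    fake = torch.tanh(2 * G(z) - 1)                 # generator output remapped by (18)
    eps = 1e-7                                      # numerical guard for log

    # Discriminator update: minimize L_D of (17); detach so G is not updated here.
    loss_D = -0.5 * (torch.log(D(real).clamp_min(eps))
                     + torch.log((1 - D(fake.detach())).clamp_min(eps))).mean()
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update: minimize L_G of (17), pushing D(G(z)) toward 1.
    loss_G = -torch.log(D(fake).clamp_min(eps)).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```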
In the quantum domain, several models have been employed for image generation, including the quantum GANs [58; 59; 60; 61; 62], Born machines [63; 64; 66], matrix product states [83], and quantum variational autoencoder [84]. In quantum GANs, described in [58; 59], images are created from multiple patches generated in parallel by sub-generators from a product state, which encodes the components of \(\mathbf{z}\) into rotation angles. The loss function in [59] is inspired by Wasserstein GANs [10]. In contrast, quantum GANs in [61; 62] leverage quantum state fidelity-based loss functions and incorporate principal component analysis for input image compression. In the case of hybrid quantum-classical GANs discussed in [60], a specific remapping method is employed to enhance the quality of generated images. The generative models in [63; 64; 83] are built using datasets of binary images. In [66; 84; 85], quantum models are employed for producing noise vectors \(\mathbf{z}\). Similar to our IGAN, the MNIST dataset is utilized in all the generative models introduced in [58; 59; 60; 61; 62; 66; 83; 84; 85].
Now, we introduce our IGANs, where both the generator \(G\) and discriminator \(D\) are INNs as shown in Fig. 5. It is important to note that our approach does not involve principal component analysis for input compression, binary images for training, or the inclusion of any classical layers within our INNs. For input loading, we always use amplitude encoding (2) for classical vectors such as \(\mathbf{x}\), \(\mathbf{z}\), and \(\mathbf{p}\). An output from an interferometric block is always a classical vector \(\mathbf{p}\) as illustrated in Fig. 4. From the generator, we get \(\mathbf{p}\in[0,1]^{\times d}\) with an image size of \(d=\mathsf{C}\times\mathsf{H}\times\mathsf{W}\), which, after the componentwise transformation
\[\mathbf{p}\rightarrow\tanh(2\,\mathbf{p}-1)=G(\mathbf{z})\in[-1,1]^{\times d }\,, \tag{18}\]
is fed to the discriminator during training. The discriminator's output, \(D(\mathbf{x})\) or \(D(G(\mathbf{z}))\), represents the probability, \(p\in[0,1]\), of the input being real, where the corresponding \(\mathbf{p}=(1-p\,,p)\). The training process for our
\begin{table}
\begin{tabular}{c|c|c|c}
\hline\hline
dataset & INN & \(\mathbf{d}\) & \(\mathbf{L}\) \\
\hline
MNIST & \(D\) & \((20^{2},2^{6},1)\) & \((3,3)\) \\
 & \(G\) & \((2^{3},2^{6},20^{2})\) & \((3,3)\) \\
\hline
CelebA & \(D\) & \((32^{2},2^{7},1)\) & \((3,3)\) \\
 & \(G\) & \((2^{5},2^{7},32^{2})\) & \((3,3)\) \\
\hline\hline
\end{tabular}
\end{table}
Table II: The table presents the architecture of INNs for the discriminator \(D\) and generator \(G\) of separate IGANs trained on the two datasets. In both cases, every INN is of the structure depicted in Fig. 5.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline\hline
dataset & data size & batch size & epochs & rate & \((\beta_{1},\beta_{2})\) \\
\hline
MNIST & 5000 & \(2^{7}\) & 10 & 0.01 & \((0.5,0.9)\) \\
\hline
CelebA & 50000 & \(2^{9}\) & 10 & 0.01 & \((0.5,0.9)\) \\
\hline\hline
\end{tabular}
\end{table}
Table III: The table presents the hyperparameters utilized in Algorithm 1 for training separate IGANs on the two datasets.
IGANs is outlined in Algorithm 1 and its complete implementation is given in [70].
Separate IGANs were trained on the MNIST [74] and CelebA [86] datasets; the latter consists of images featuring celebrities' faces, as displayed in Fig. 6. Subsequently, samples of generated images from their respective \(G\)s are presented in Fig. 7. In both cases, black and white images were used for training, and 2-block INNs were employed for both \(D\) and \(G\). For the two datasets, we have adopted slightly different architectures for the INNs, as detailed in Table 2, and slightly varied hyperparameters, as specified in Table 3.
In the case of the MNIST dataset, we have randomly taken 5000 images of 0 to 9 digits for the training as specified in the "data size" column of Table 3. Each image has dimensions \(d=\mathsf{C}\times\mathsf{H}\times\mathsf{W}=1\times 20\times 20\), as evident from the first and last components of \(\mathbf{d}\) for \(D\) and \(G\) in Table 2. In the top panel of generated images in Fig. 7, clear representations of numbers 0, 1, 3, 5, and 9 are easily noticeable. However, the images of 2, 4, and 6 appear less distinct, while those of 7 and 8 do
Figure 6: The top and bottom panels showcase samples of 64 _real_ images from the MNIST [74] and CelebA [86] datasets, respectively.
Figure 7: After separately training IGANs on the MNIST [74] and CelebA [86] datasets, the top and bottom panels showcase 64 _fake_ images generated by their respective generators. To enhance contrast, we have employed the transformation \(\mathbf{p}\rightarrow\tanh(3(2\,\mathbf{p}-1))\) at the place of (18) exclusively for image generation in the top panel and Fig. 8.
not seem to be present. In another set of experiments, we trained the same IGAN using images of a single digit at a time and achieved the results depicted in Fig. 8. These results demonstrate our capability to generate images corresponding to all ten digits, from 0 to 9. One can visually assess the quality of generated images by comparing them to real images through Figs. 6-8.
In the case of the CelebA dataset, we randomly selected 50000 images for training. These images were converted from color to black and white and resized to the dimensions of \(d=1\times 32\times 32\), as detailed in Table 2. With the table, one can also compute that the INNs for \(D\) and \(G\) have a total of 393600 and 405504 trainable parameters (phases), respectively. In Fig. 9, we depict how the discriminator and generator losses converge toward \(\ln(2)\) as training progresses. Additionally, we illustrate how the average discriminator probabilities
\[\mathcal{P}_{\text{real}}=\frac{1}{N}\sum_{i=1}^{N}D(\mathbf{x}^{[i]})\quad \text{and}\quad\mathcal{P}_{\text{fake}}=\frac{1}{N}\sum_{i=1}^{N}D(G(\mathbf{ z}^{[i]})) \tag{19}\]
approach \(\frac{1}{2}\). After the training, we were able to generate images featuring human faces, as showcased in Fig. 7. By the way, with the average probabilities, one can define \(\mathcal{L}^{\prime}_{D}=-\mathcal{P}_{\text{real}}+\mathcal{P}_{\text{fake}}\) and \(\mathcal{L}^{\prime}_{G}=-\mathcal{P}_{\text{fake}}\), inspired by the Wasserstein loss functions [9, 10, 59], in place of (17).
## IV Conclusion and outlook
We have introduced INNs, which are artificial neural networks composed of interferometers. An INN is a sequence of interferometric blocks, with each block containing parallel sequences of interferometers. While one could, in principle, use an INN with classical waves, it can be regarded as a quantum model, devoid of any classical layers.
We have shown that INNs are useful for optimization as well as supervised or unsupervised machine learning tasks. For the QUBO problems, we achieve the global
Figure 8: From top to bottom, each row displays eight _fake_ images generated by the \(G\) network trained separately on real images of digits \(0,\cdots,9\). For each digit, we maintained consistent architecture and hyperparameters (with the exception of data size) as those presented in Tables 2 and 3 for the MNIST dataset. The training data was sourced from the MNIST training set.
Figure 9: At the top, we present the evolution of \(\mathcal{L}_{D}\) and \(\mathcal{L}_{G}\) at various epochs, represented by orange and blue curves, respectively. Meanwhile, at the bottom, we illustrate the changes in \(\mathcal{P}_{\text{real}}\) and \(\mathcal{P}_{\text{fake}}\) over different epochs, denoted by red and green graphs, respectively. The definitions of these loss functions and average probabilities can be found in (17) and (19), respectively. In the top and bottom plots, the gray horizontal lines denote the values \(\ln(2)\) and \(\frac{1}{2}\), respectively. Both the plots pertain to the IGAN trained on the CelebA dataset.
minimum approximately 90% of the time, with the remaining instances typically falling within a range of 2% from the global minimum. In the context of multi-class image classification problems, we achieved accuracies of 93% and 83% on the MNIST and FashionMNIST datasets, respectively. While our accuracy falls short in comparison to state-of-the-art classical NNs, which provide 99.87% accuracy on the MNIST dataset [87], and 96.91% accuracy on the FashionMNIST dataset [88], it is important to note that INNs exhibit a simpler architecture. Nonetheless, our work represents a significant step forward in the development of more advanced quantum NNs.
We have introduced IGANs, made of INNs, for image generation. These IGANs have successfully generated images of 0 to 9 digits and human faces. While our image quality may currently lag behind that of classical GANs [7; 8; 9; 10], there is potential for enhancement through network modifications, architectural adjustments, and fine-tuning of hyperparameters. Last but not least, it is crucial to analyze the robustness of our INNs against noise and qubit errors, which will be the focus of future research.
|
2302.00457 | Simplicity Bias in 1-Hidden Layer Neural Networks | Recent works have demonstrated that neural networks exhibit extreme
simplicity bias(SB). That is, they learn only the simplest features to solve a
task at hand, even in the presence of other, more robust but more complex
features. Due to the lack of a general and rigorous definition of features,
these works showcase SB on semi-synthetic datasets such as Color-MNIST,
MNIST-CIFAR where defining features is relatively easier.
In this work, we rigorously define as well as thoroughly establish SB for one
hidden layer neural networks. More concretely, (i) we define SB as the network
essentially being a function of a low dimensional projection of the inputs (ii)
theoretically, we show that when the data is linearly separable, the network
primarily depends on only the linearly separable ($1$-dimensional) subspace
even in the presence of an arbitrarily large number of other, more complex
features which could have led to a significantly more robust classifier, (iii)
empirically, we show that models trained on real datasets such as Imagenette
and Waterbirds-Landbirds indeed depend on a low dimensional projection of the
inputs, thereby demonstrating SB on these datasets, iv) finally, we present a
natural ensemble approach that encourages diversity in models by training
successive models on features not used by earlier models, and demonstrate that
it yields models that are significantly more robust to Gaussian noise. | Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli | 2023-02-01T14:00:35Z | http://arxiv.org/abs/2302.00457v1 | # Simplicity Bias in 1-Hidden Layer Neural Networks
###### Abstract
Recent works (Shah et al., 2020; Chen et al., 2021) have demonstrated that neural networks exhibit extreme _simplicity bias_ (SB). That is, they learn _only the simplest_ features to solve a task at hand, even in the presence of other, more robust but more complex features. Due to the lack of a general and rigorous definition of _features_, these works showcase SB on _semi-synthetic_ datasets such as Color-MNIST, MNIST-CIFAR where defining features is relatively easier.
In this work, we rigorously define as well as thoroughly establish SB for _one hidden layer_ neural networks. More concretely, (i) we define SB as the network essentially being a function of a low dimensional projection of the inputs (ii) theoretically, we show that when the data is linearly separable, the network primarily depends on only the linearly separable (\(1\)-dimensional) subspace even in the presence of an arbitrarily large number of other, more complex features which could have led to a significantly more robust classifier, (iii) empirically, we show that models trained on _real_ datasets such as Imagenette and Waterbirds-Landbirds indeed depend on a low dimensional projection of the inputs, thereby demonstrating SB on these datasets, iv) finally, we present a natural ensemble approach that encourages diversity in models by training successive models on features not used by earlier models, and demonstrate that it yields models that are significantly more robust to Gaussian noise.
## 1 Introduction
It is well known that neural networks (NNs) are vulnerable to distribution shifts as well as to adversarial examples (Szegedy et al., 2014; Hendrycks et al., 2021). A recent line of work (Geirhos et al., 2018; Shah et al., 2020; Geirhos et al., 2020) proposes that _Simplicity Bias (SB)_ - aka shortcut learning - i.e., the tendency of neural networks (NNs) to learn only the simplest features over other useful but more complex features, is a key reason behind this non-robustness. The argument is roughly as follows: for example, in the classification of swans vs bears, as illustrated in Figure 1, there are many features such as background, color of the animal, shape of the animal etc. that can be used for classification. However using only one or few of them can lead to models that are not robust to specific distribution shifts, while using all the features can lead to more robust models.
Several recent works have demonstrated SB on a variety of _semi-real constructed datasets_ (Geirhos et al., 2018; Shah et al., 2020; Chen et al., 2021), and have hypothesized SB to be the key reason for NN's brittleness to distribution shifts (Shah et al., 2020). However, such observations are still only for specific semi-real datasets, and a general method that can identify SB on a _given dataset_ and a _given model_ is still missing in the literature. Such a method would be useful not only to estimate the robustness of a model but could also help in designing more robust models.
A key challenge in designing such a general method to identify (and potentially fix) SB is that the notion of _feature_ itself is vague and lacks a rigorous definition. Existing works like (Geirhos et al., 2018; Shah et al., 2020; Chen et al., 2021) avoid this challenge of vague feature defini
Figure 1: Classification of swans vs bears. There are several features such as background, color of the animal, shape of the animal etc., each of which is sufficient for classification but using all of them will lead to a more robust model.
tion by using carefully designed datasets (e.g., concatenation of MNIST images and CIFAR images), where certain high level features (e.g., MNIST features and CIFAR features, shape and texture features) are already baked in the dataset definition, and arguing about their _simplicity_ is intuitively easy.
**Contributions**: One of the main contributions of this work is to provide a precise definition of a particular simplicity bias - LD-SB\(-\) of \(1\)-_hidden layer neural networks_. In particular, we characterize SB as _low dimensional input dependence_ of the model. Concretely,
**Definition 1.1** (Ld-Sb).: A model \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{c}\) with inputs \(x\in\mathbb{R}^{d}\) and outputs \(f(x)\in\mathbb{R}^{c}\) (e.g., logits for \(c\) classes), trained on a distribution \((x,y)\sim\mathcal{D}\) satisfies LD-SB if there exists a _projection_ matrix \(P\in\mathbb{R}^{d\times d}\) satisfying:
* rank \((P)=k\ll d\),
* \(f(Px^{(1)}+P_{\perp}x^{(2)})\approx f(x^{(1)})\quad\forall(x^{(1)},y^{(1)})\), \((x^{(2)},y^{(2)})\sim\mathcal{D}\)
* An independent model \(g\) trained on \((P_{\perp}x,y)\) where \((x,y)\sim\mathcal{D}\) achieves high accuracy.
Here \(P_{\perp}\) is the projection matrix onto the subspace orthogonal to \(P\).
In words, LD-SB says that there exists a small \(k\)-dimensional subspace (given by the projection matrix \(P\)) in the input space \(\mathbb{R}^{d}\), which is the only thing that the model \(f\) considers in labeling any input point \(x\). In particular, if we _mix_ two data points \(x_{1}\) and \(x_{2}\) by using the projection of \(x_{1}\) onto \(P\) and the projection of \(x_{2}\) onto the orthogonal subspace \(P_{\perp}\), the output of \(f\) on this _mixed point_\(Px_{1}+P_{\perp}x_{2}\) is the same as that on \(x_{1}\). This would have been fine if the subspace \(P_{\perp}\) does not contain any feature useful for classification. However, the third bullet point says that \(P_{\perp}\) indeed contains features that are useful for classification since an independent model \(g\) trained on \((P_{\perp}x,y)\) achieves high accuracy.
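Definition 1.1 suggests a direct empirical test of LD-SB: mix pairs of inputs through \(P\) and \(P_{\perp}\) and check that the model's predictions track the \(P\)-component. A minimal sketch, assuming a trained model `f` and a candidate projection matrix `P` are given (both are inputs here, not part of the method itself):

```python
import torch

def ldsb_mixing_agreement(f, P: torch.Tensor, x1: torch.Tensor, x2: torch.Tensor) -> float:
    """Fraction of mixed points P x1 + P_perp x2 whose prediction matches that of x1."""
    P_perp = torch.eye(P.shape[0]) - P             # projector onto the orthogonal subspace
    mixed = x1 @ P.T + x2 @ P_perp.T               # rows are mixed inputs
    with torch.no_grad():
        same = f(mixed).argmax(dim=-1) == f(x1).argmax(dim=-1)
    return same.float().mean().item()
```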
Furthermore, theoretically, we demonstrate LD-SB of \(1\)-hidden layer NNs for a fairly general class of distributions called _independent features model (IFM)_, where the features (i.e., coordinates) are distributed independently conditioned on the label. IFM has a long history and is widely studied, especially in the context of naive-Bayes classifiers (Lewis, 1998). For IFM, we show that as long as there is even a _single_ feature in which the data is linearly separable, NNs trained using SGD will learn models that rely almost exclusively on this linearly separable feature, even when there are an _arbitrarily large number_ of features in which the data is separable but with a _non-linear_ boundary. Empirically, we demonstrate LD-SB on three real world datasets: binary and multiclass version of Imagenette (FastAI, 2021) as well as waterbirds-landbirds (Sagawa et al., 2020) dataset. Compared to the results in (Shah et al., 2020), our results (i) theoretically show LD-SB in a fairly general setting and (ii) empirically show LD-SB on real datasets.
Finally, building upon these insights, we propose a simple ensemble method - _OrthoP_ - that sequentially constructs NNs by projecting out principal input data directions that are used by previous NNs. We demonstrate that this method can lead to significantly more robust ensembles for real-world datasets in the presence of simple distribution shifts like Gaussian noise.
**Why only \(1\)-hidden layer networks?**: One might wonder why the results in this paper are restricted to \(1\)-hidden layer networks and why they are interesting. We present two reasons.
1. From a **theoretical** standpoint, prior works have thoroughly characterized the training dynamics of infinite width \(1\)-hidden layer networks under different initialization schemes (Chizat et al., 2019) and have also identified the limit points of gradient descent for such networks (Chizat & Bach, 2020). Our results crucially build upon these prior works. On the other hand, we do not have such a clear understanding of the dynamics of deeper networks 3. Footnote 3: For more discussion on the difficulty of extending these results to deep nets, refer Appendix D
2. From a **practical** standpoint, the dominant paradigm in machine learning right now is to pretrain large models on large amounts of data and then finetune on small target datasets. Given the large and diverse pretraining data seen by these models, it has been observed that they do learn rich features (Rosenfeld et al., 2022; Nasery et al., 2022). However, finetuning on target datasets might not utilize all the features in the pretrained model. Consequently, approaches that can train robust finetuning heads (such as a \(1\)-hidden layer network on top) can be quite effective.
Extending our results to deeper networks and to other architectures is an exciting direction of research from both theoretical and practical points of view.
**Paper organization**: This paper is organized as follows. Section 2 presents related work. Section 3 presents preliminaries. Our main results on LD-SB are presented in Section 4. Section 5 presents results on training diverse classifiers. We conclude in Section 6.
## 2 Related Work
In this section, we briefly mention the closely related works. Extended related work can be found in Appendix C.
**Simplicity Bias**: Subsequent to (Shah et al., 2020), there have been several papers investigating the presence/absence of SB in various networks as well as reasons behind SB (Scimeca et al., 2021). Of these, (Huh et al., 2021) is the most closely related work to ours. (Huh et al., 2021) _empirically observe_ that on certain _synthetic_ datasets, the _embeddings_ of NNs both at initialization as well as after training have a low rank structure. In contrast, we prove LD-SB _theoretically_ on the IFM model as well as empirically validate this on _real_ datasets. Furthermore, our results show that while the _network weights_ exhibit low rank structure in the rich regime (see Section 3.2 for definition), the manifestation of LD-SB is far more subtle in lazy regime. Moreover, we also show how to use LD-SB to train a second diverse model and combine it to obtain a robust ensemble. (Galanti & Poggio, 2022) provide a theoretical intuition behind the relation between various hyperparameters (such as learning rate, batch size etc.) and rank of learnt weight matrices, and demonstrate it empirically. (Pezeshki et al., 2021) propose that _gradient starvation_ at the beginning of training is a potential reason for SB in the lazy/NTK regime but the conditions are hard to interpret. In contrast, our results are shown for any dataset in the IFM model in the _rich_ regime of training. Finally (Lyu et al., 2021) consider anti-symmetric datasets and show that single hidden layer input homogeneous networks (i.e., without _bias_ parameters) converge to linear classifiers. However, our results hold for general datasets and do not require input homogeneity.
**Learning diverse classifiers**: There have been several works that attempt to learn diverse classifiers. Most works try to learn such models by ensuring that the input gradients of these models do not align (Ross & Doshi-Velez, 2018; Teney et al., 2022). (Xu et al., 2022) propose a way to learn diverse/orthogonal classifiers under the assumption that a complete classifier that uses all features is available, and demonstrate its utility for various downstream tasks such as style transfer. (Lee et al., 2022) learn diverse classifiers by enforcing diversity on unlabeled target data.
**Spurious correlations**: There has been a large body of work which identifies reasons for spurious correlations in NNs (Sagawa et al., 2020) as well as proposing algorithmic fixes in different settings (Liu et al., 2021; Chen et al., 2020).
**Implicit bias of gradient descent**: There is also a large body of work understanding the implicit bias of gradient descent dynamics. Most of these works are for standard linear (Ji & Telgarsky, 2019) or deep linear networks (Soudry et al., 2018; Gunasekar et al., 2018). For nonlinear neural networks, one of the well-known results is for the case of \(1\)-hidden layer neural networks with homogeneous activation functions (Chizat & Bach, 2020), which we crucially use in our proofs.
## 3 Preliminaries
In this section, we provide the notation and background on infinite width max-margin classifiers that is required to interpret the results of this paper.
### Basic notions
**1-hidden layer neural networks and loss function.** Consider instances \(x\in\mathbb{R}^{d}\) and labels \(y\in\{\pm 1\}\) jointly distributed as \(\mathcal{D}\). A 1-hidden layer neural network model for predicting the label of a given instance \(x\) is defined by parameters \((\bar{w}\in\mathbb{R}^{m\times d},\bar{b}\in\mathbb{R}^{m},\bar{a}\in\mathbb{R}^{m})\). For a fixed activation function \(\phi\), given input instance \(x\), the model is given as \(f((\bar{w},\bar{b},\bar{a}),x)\coloneqq\langle\bar{a},\phi(\bar{w}x+\bar{b})\rangle\), where \(\phi(\cdot)\) is applied elementwise. The cross entropy loss \(\mathcal{L}\) for a given model \(f\), input \(x\) and label \(y\) is given as \(\mathcal{L}\left(f(x),y\right)\stackrel{\text{def}}{=}\log(1+\exp(-yf((\bar{w},\bar{b},\bar{a}),x)))\).
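In code, the model and loss just defined take the following form; this is a minimal sketch for illustration, not the implementation used in this paper.

```python
import torch

class OneHiddenLayerNN(torch.nn.Module):
    """f((w, b, a), x) = <a, phi(w x + b)> with ReLU activation phi."""
    def __init__(self, d: int, m: int):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(m, d))
        self.b = torch.nn.Parameter(torch.randn(m))
        self.a = torch.nn.Parameter(torch.randn(m))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x @ self.w.T + self.b) @ self.a

def cross_entropy(fx: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """log(1 + exp(-y f(x))) for labels y in {+1, -1}, via the stable softplus."""
    return torch.nn.functional.softplus(-y * fx).mean()
```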
**Margin.** For data distribution \(\mathcal{D}\), the margin of a model \(f(x)\) is given as \(\min_{(x,y)\sim\mathcal{D}}yf(x)\).
**Notation.** Here is some useful notation that we will use repeatedly. For a matrix \(A\), \(A(i,.)\) denotes the \(i\)th row of \(A\). For any \(k\in\mathbb{N}\), \(\mathcal{S}^{k-1}\) denotes the surface of the unit norm Euclidean sphere in dimension \(k\).
### Initializations
The gradient descent dynamics of the network depends strongly on the scale of initialization. In this work, we primarily consider _rich regime_ initialization.
**Rich regime.** In rich regime initialization, for any \(i\in[m]\), the parameters \((\bar{w}(i,.),\bar{b}(i))\) of the first layer are sampled from a uniform distribution on \(\mathbb{S}^{d}\). Each \(\bar{a}(i)\) is sampled from _Unif\(\{-1,1\}\)_, and the output of the network is scaled down by \(\frac{1}{m}\)(Chizat & Bach, 2020). This is roughly equivalent to Xavier initialization Glorot & Bengio (2010), where the weight parameters in both the layers are initialized approximately as \(\mathcal{N}(0,\frac{2}{m})\) when \(m\gg d\).
In addition, we also present some results for the lazy regime initialization described below.
**Lazy regime.** In the lazy regime, the weight parameters in the first layer are initialized with \(\mathcal{N}(0,\frac{1}{d})\), those of the second layer are initialized with \(\mathcal{N}(0,\frac{1}{m})\) and the biases are initialized to \(0\) (Bietti & Mairal, 2019; Lee et al., 2019). This is approximately equivalent to Kaiming initialization (He et al., 2015).
### Infinite Width Case
For 1-hidden layer neural networks with ReLU activation in the infinite width limit i.e., as \(m\rightarrow\infty\), Jacot et al. (2018); Chizat et al. (2019); Chizat and Bach (2020) gave interesting characterizations of the trained model. As mentioned above, the training process of these models falls into one of two regimes depending on the scale of initialization (Chizat et al., 2019):
**Rich regime.** In the infinite width limit, the neural network parameters can be thought of as a distribution \(\nu\) over triples \((w,b,a)\in\mathbb{S}^{d+1}\) where \(w\in\mathbb{R}^{d},b,a\in\mathbb{R}\). Under the rich regime initialization, the function \(f\) computed by the model can be expressed as
\[f(\nu,x)=\mathbb{E}_{(w,b,a)\sim\nu}[a\,\phi(\left<w,x\right>+b)]\,. \tag{1}\]
(Chizat and Bach, 2020) showed that the training process with rich initialization can be thought of as gradient flow on the Wasserstein-2 space and gave the following characterization of the trained model under the cross entropy loss \(\mathbb{E}_{(x,y)\sim\mathcal{D}}[\mathcal{L}(\nu,(x,y))]\) (see Footnote 4).
Footnote 4: Theorem 3.1 is an informal version of Chizat and Bach 2020, Theorem 5. For exact result, refer Theorem E.1 in Appendix E.
**Theorem 3.1**.: _(Chizat and Bach, 2020) Under rich initialization in the infinite width limit with cross entropy loss, if gradient flow on 1-hidden layer NN with ReLU activation converges, it converges to a maximum margin classifier \(\nu^{*}\) given as_
\[\nu^{*}=\operatorname*{arg\,max}_{\nu\in\mathcal{P}(\mathbb{S}^{d+1})}\min_{ (x,y)\sim\mathcal{D}}yf(\nu,x)\,, \tag{2}\]
_where \(\mathcal{P}(\mathbb{S}^{d+1})\) denotes the space of distributions over \(\mathbb{S}^{d+1}\)._
This training regime is known as the 'rich' regime since it learns data-dependent features \(\left<w,\cdot\right>\).
**Lazy regime.** (Jacot et al., 2018) showed that in the infinite width limit, the neural network behaves like a kernel machine. This kernel is popularly known as the Neural Tangent Kernel (NTK), and is given by \(K(x,x^{\prime})=\left<\frac{\partial f(x)}{\partial W},\frac{\partial f(x^{\prime})}{\partial W}\right>\), where \(W\) denotes the set of all trainable weight parameters. This initialization regime is called the 'lazy' regime since the weights do not change much from initialization and the NTK remains almost constant, i.e., the network does not learn data-dependent features. We will use the following characterization of the NTK for 1-hidden layer neural networks.
**Theorem 3.2**.: _(Bietti and Mairal, 2019) Under lazy regime initialization in the infinite width limit, the NTK for 1-hidden layer neural networks with ReLU activation i.e., \(\phi(u)=\max(u,0)\), is given as_
\[K(x,x^{\prime})=\|x\|\|x^{\prime}\|\kappa\left(\frac{\left<x,x^{\prime}\right> }{\|x\|\|x^{\prime}\|}\right)\,,\]
_where_
\[\kappa(u)=\frac{1}{\pi}\left(2u(\pi-\cos^{-1}(u))+\sqrt{1-u^{2}}\right)\,.\]
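The kernel is straightforward to evaluate numerically; the sketch below (ours) is a direct transcription of Theorem 3.2, with a clipping of the cosine to \([-1,1]\) added as a numerical safeguard:

```python
import numpy as np

def kappa(u):
    u = np.clip(u, -1.0, 1.0)  # guard against rounding slightly outside [-1, 1]
    return (2.0 * u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u ** 2)) / np.pi

def ntk(x, xp):
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    return nx * nxp * kappa(np.dot(x, xp) / (nx * nxp))

x = np.array([1.0, 0.0])
print(ntk(x, x))  # kappa(1) = 2, so K(x, x) = 2 * ||x||^2 = 2
```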
_Lazy regime for binary classification._ (Soudry et al., 2018) showed that for linearly separable datasets, gradient descent for linear predictors on logistic loss converges to the max-margin support vector machine (SVM) classifier. This implies that any sufficiently wide neural network, when trained for a finite time in the lazy regime on a dataset that is separable by the finite-width induced NTK, will tend towards the \(\mathcal{L}_{2}\) max-margin classifier given by
\[\operatorname*{arg\,min}_{f\in\mathcal{H}}\|f\|_{\mathcal{H}}\text{ s.t. }yf(x)\geq 1\;\forall\;(x,y)\sim\mathcal{D}\,, \tag{3}\]
where \(\mathcal{H}\) represents the Reproducing Kernel Hilbert Space (RKHS) associated with the finite width kernel (Chizat, 2020). With increasing width, this kernel tends towards the infinite-width NTK (which is universal (Ji et al., 2020)). Therefore, in the lazy regime, we will focus on the \(\mathcal{L}_{2}\) max-margin classifier induced by the infinite-width NTK.
## 4 Characterization of SB in \(1\)-hidden layer neural networks
In this section, we first theoretically characterize the SB exhibited by gradient descent on linearly separable datasets in the _independent features model (IFM)_. The main result, stated in Theorem 4.1, is that for binary classification of inputs in \(\mathbb{R}^{d}\), even if there is a _single_ coordinate in which the data is linearly separable, gradient descent dynamics will learn a model that relies _solely_ on this coordinate, even when there is an arbitrarily large number, \(d-1\), of coordinates in which the data is separable, albeit by a non-linear classifier. In other words, the simplicity bias of these networks is characterized by _low dimensional input dependence_, which we denote by LD-SB. We then experimentally verify that NNs trained on some real datasets do indeed satisfy LD-SB.
### Dataset
We consider datasets in the independent features model (IFM), where the joint distribution over \((x,y)\) satisfies \(p(x,y)=r(y)\prod_{i=1}^{d}q_{i}(x_{i}|y)\), i.e., the features are distributed independently conditioned on the label \(y\). Here \(r(y)\) is a distribution over \(\{-1,+1\}\) and \(q_{i}(x_{i}|y)\) denotes the conditional distribution of the \(i^{\text{th}}\) coordinate \(x_{i}\) given \(y\). IFM is widely studied in the literature, particularly in the context of naive-Bayes classifiers (Lewis, 1998). We make the following assumptions, which posit that there are at least two features of differing complexity for classification: _one_ with a linear boundary and _at least_ one other with a non-linear boundary. See Figure 2 for an illustrative example.
* One of the coordinates (say, the \(1^{\text{st}}\) coordinate WLOG) is separable by a linear decision boundary with margin \(\gamma\) (see Figure 2), i.e., \(\exists\gamma>0\) such that \(\gamma\in Supp(q_{1}(x_{1}|y=+1))\subseteq[\gamma,\infty)\) and \(-\gamma\in Supp(q_{1}(x_{1}|y=-1))\subseteq(-\infty,-\gamma]\), where \(Supp(\cdot)\) denotes the support of a distribution.
* None of the other coordinates is linearly separable. More precisely, for all the other coordinates \(i\in[d]\setminus\{1\}\), \(0\in Supp(q_{i}(x_{i}|y=-1))\) and \(\{-1,+1\}\subseteq Supp(q_{i}(x_{i}|y=+1))\).
* The dataset can be perfectly classified even without using the linear coordinate. This means, \(\exists i\neq 1\), such that \(q_{i}(x_{i}|y)\) has disjoint support for \(y=+1\) and \(y=-1\).
Though we assume axis-aligned features, our results also hold for any rotation of the dataset. While our results hold in the general IFM setting, in comparison, current results for SB, e.g., (Shah et al., 2020), are obtained for _very specialized_ datasets within IFM and do not apply to IFM in general.
### Main result
Our main result states that, for rich initialization (Section 3.2), NNs demonstrate LD-SB for any IFM dataset satisfying the above conditions, along with some technical conditions stated in Theorem E.1. Its proof appears in Appendix A.1.
**Theorem 4.1**.: _For any dataset in the IFM model with bounded density and bounded support, satisfying the above conditions and \(\gamma\geq 1\), if gradient flow for a 1-hidden layer FCN under rich initialization in the infinite width limit with cross entropy loss converges and satisfies the technical conditions in Theorem E.1 (see Footnote 5), it converges to \(\nu^{*}=0.5\delta_{\theta_{1}}+0.5\delta_{\theta_{2}}\) on \(\mathcal{S}^{d+1}\), where \(\theta_{1}=(\frac{\gamma}{\sqrt{2(1+\gamma^{2})}}\mathbf{e}_{1},\frac{1}{\sqrt{2(1+\gamma^{2})}},1/\sqrt{2})\), \(\theta_{2}=(-\frac{\gamma}{\sqrt{2(1+\gamma^{2})}}\mathbf{e}_{1},\frac{1}{\sqrt{2(1+\gamma^{2})}},-1/\sqrt{2})\) and \(\mathbf{e}_{1}\stackrel{{\text{def}}}{{=}}[1,0,\cdots,0]\) denotes the first standard basis vector. This implies \(f(\nu^{*},Px^{(1)}+P_{\perp}x^{(2)})=f(\nu^{*},x^{(1)})\) \(\forall\) \((x^{(1)},y^{(1)}),(x^{(2)},y^{(2)})\sim\mathcal{D}\), where \(P\) represents the (rank-1) projection matrix onto the first coordinate._
Footnote 5: Note that Theorem E.1 is a restatement of Theorem 5 of Chizat & Bach (2020), which we are using as a black box in our analysis
Moreover, since at least one of the coordinates \(\{2,\ldots,d\}\) has disjoint support for \(q_{i}(x_{i}|y=+1)\) and \(q_{i}(x_{i}|y=-1)\), a classifier based only on \(P_{\perp}x\) can still perfectly classify the given dataset, thereby implying LD-SB.
It is well known that the rich regime is more relevant for the practical performance of NNs since it allows for feature learning, while the lazy regime does not (Chizat et al., 2019). Nevertheless, in the next section, we present theoretical evidence that LD-SB holds even in the lazy regime, by considering a much more specialized dataset within IFM.
### Lazy regime
In this regime, we will work with the following dataset within the IFM family:
For \(y\in\{\pm 1\}\) we generate \((x,y)\in D\) as
\[\mathbf{x}_{1}=\gamma y\,,\qquad\mathbf{x}_{i}=\begin{cases}\pm 1&\text{for }y=+1\\ 0&\text{for }y=-1\end{cases}\quad\forall i\in\{2,\ldots,d\}\,.\]
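A short sketch (ours) that generates this dataset; the \(\pm 1\) signs are sampled with equal probability, which the definition above leaves unspecified:

```python
import numpy as np

def make_lazy_dataset(n, d, gamma, rng=np.random.default_rng(0)):
    y = rng.choice([-1.0, 1.0], size=n)
    x = np.zeros((n, d))
    x[:, 0] = gamma * y                                    # x_1 = gamma * y
    signs = rng.choice([-1.0, 1.0], size=(n, d - 1))       # +/-1 entries for y = +1
    x[:, 1:] = np.where((y == 1.0)[:, None], signs, 0.0)   # zero entries for y = -1
    return x, y

X, y = make_lazy_dataset(n=4, d=6, gamma=7.0)
print(X, y)
```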
Although the dataset above is a point mass dataset, it still exhibits an important characteristic in common with the rich regime datasets: only one of the coordinates is linearly separable while the others are not. For this dataset, we provide the characterization of the max-margin NTK classifier (as in Eqn. (3)):
**Theorem 4.2**.: _For sufficiently small \(\epsilon>0\), there exists an absolute constant \(N\) such that for all \(d>N\) and \(\gamma\in[7,\epsilon\sqrt{d})\), the \(\mathcal{L}_{2}\) max-margin classifier for joint training of both the layers of 1-hidden layer FCN in the NTK regime on the dataset \(D\), i.e., any \(f\) satisfying Eqn. (3) satisfies:_
\[\text{pred}(f(Px^{(1)}+P_{\perp}x^{(2)}))=\text{pred}(f(x^{(1)}))\quad\forall\;(x^{(1)},y^{(1)}),(x^{(2)},y^{(2)})\in D\,,\]
_where \(P\) represents the projection matrix on the first coordinate and \(\text{pred}(f(x))\) represents the predicted label by the model \(f\) on \(x\)._
The above theorem shows that the prediction on a _mixed_ example \(Px^{(1)}+P_{\perp}x^{(2)}\) is the same as that on \(x^{(1)}\), thus establishing LD-SB. The proof for this theorem is provided in Appendix A.2.
Figure 2: Illustration of an IFM dataset. Given a class \(\pm 1\) represented by blue and red respectively, each coordinate value is drawn independently from the corresponding distribution. Shown above are the supports of distributions on three different coordinates for an illustrative IFM dataset, for positive and negative labels.
### Empirical verification
In this section, we will present empirical results demonstrating LD-SB on \(3\) real datasets: Imagenette (FastAI, 2021), a binary version of Imagenette (b-Imagenette), and waterbirds-landbirds (Sagawa et al., 2020), as well as one designed dataset, MNIST-CIFAR (Shah et al., 2020). More details about the datasets can be found in Appendix B.1.
#### 4.4.1 Experimental setup
We take ImageNet-pretrained ResNet-50 models, with \(2048\) features, for feature extraction and train a \(1\)-hidden layer fully connected network, with ReLU nonlinearity and \(100\) hidden units, for classification on each of these datasets. During the finetuning process, we freeze the backbone ResNet-50 model and train only the \(1\)-hidden layer head (more details in Appendix B.1).
**Demonstrating LD-SB**: Given a model \(f(\cdot)\), we establish its low dimensional SB by identifying a small dimensional subspace, identified by its projection matrix \(P\), such that if we _mix_ inputs \(x_{1}\) and \(x_{2}\) as \(\widetilde{x}\stackrel{{\text{def}}}{{=}}Px_{1}+P_{\perp}x_{2}\), the model's output on the mixed input, \(f(\widetilde{x})\), is always _close_ to the model's output on \(x_{1}\), i.e., \(f(x_{1})\). We measure _closeness_ with four metrics: (1) \(P_{\perp}\)-randomized accuracy (\(P_{\perp}\)-RA): accuracy on the dataset \((Px_{1}+P_{\perp}x_{2},y_{1})\) where \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are sampled iid from the dataset, (2) \(P\)-randomized accuracy (\(P\)-RA): accuracy on the dataset \((Px_{1}+P_{\perp}x_{2},y_{2})\), (3) \(P_{\perp}\) logit change (\(P_{\perp}\)-LC): relative change wrt the logits of \(x_{1}\), i.e., \(\left\|f(\widetilde{x})-f(x_{1})\right\|/\left\|f(x_{1})\right\|\), and (4) \(P\) logit change (\(P\)-LC): relative change wrt the logits of \(x_{2}\), i.e., \(\left\|f(\widetilde{x})-f(x_{2})\right\|/\left\|f(x_{2})\right\|\). Moreover, we will also show that a subsequent model trained on \((P_{\perp}x,y)\) achieves significantly high accuracy on these datasets.
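The two randomized accuracies can be computed in a few lines; in the sketch below (ours), `predict` is a placeholder for a function returning class labels, and the i.i.d. pairing is implemented by a random permutation of the dataset:

```python
import numpy as np

def randomized_accuracies(predict, P, X, y, rng=np.random.default_rng(0)):
    """Return (P_perp-RA, P-RA) for mixed inputs P x1 + P_perp x2."""
    P_perp = np.eye(P.shape[0]) - P
    idx = rng.permutation(len(X))       # x2 is an i.i.d. re-pairing of the data
    X_mix = X @ P + X[idx] @ P_perp     # projection matrices are symmetric
    y_hat = predict(X_mix)
    return np.mean(y_hat == y), np.mean(y_hat == y[idx])
```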
As described in Sections 4.2 and 4.3, the training of \(1\)-hidden layer neural networks might follow different trajectories depending on the scale of initialization. Therefore, the subspace projection matrix \(P\) will be obtained in different ways for the rich vs. the lazy regime. For the rich regime, we will empirically show that the first layer weights have a low rank structure as per Theorem 4.1, while for the lazy regime, we will show that although the first layer weights do not exhibit low rank structure, the model still has low dimensional dependence on the input as per Theorem 4.2.
#### 4.4.2 Rich regime
Theorem 4.1 suggests that asymptotically, the first layer weight matrix will be low rank. However, since we train only for a finite amount of time, the weight matrix will only be approximately low rank. To quantify this, we use the notion of effective rank (Roy and Vetterli, 2007) to measure the rank of the first layer weight matrix.
**Definition 4.3**.: Given a matrix \(M\), its effective rank is defined as: \(\text{Eff-rank}(M)=e^{-\sum_{i}\overline{\sigma_{i}(M)^{2}}\log\overline{\sigma_{i}(M)^{2}}}\) where \(\sigma_{i}(M)\) denotes the \(i^{\text{th}}\) singular value of \(M\) and \(\overline{\sigma_{i}(M)^{2}}\stackrel{{\text{def}}}{{=}}\frac{\sigma_{i}(M)^{2}}{\sum_{i}\sigma_{i}(M)^{2}}\).
One way to interpret the effective rank is that it is the exponential of von-Neumann entropy (Petz, 2001) of the matrix \(\frac{MM^{\top}}{\text{Tr}(MM^{\top})}\), where \(\text{Tr}\left(\cdot\right)\) denotes the trace of a matrix. For illustration, the effective rank of a projection matrix onto \(k\) dimensions equals \(k\).
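Computing the effective rank from Definition 4.3 is direct; a small sketch (ours):

```python
import numpy as np

def effective_rank(M):
    s2 = np.linalg.svd(M, compute_uv=False) ** 2
    p = s2 / s2.sum()          # normalized squared singular values
    p = p[p > 0]               # convention: 0 * log 0 = 0
    return float(np.exp(-np.sum(p * np.log(p))))

# Sanity check: a projection matrix onto k dimensions has effective rank k.
print(effective_rank(np.diag([1.0, 1.0, 1.0, 0.0, 0.0])))  # -> 3.0
```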
Figure 3(a) shows the evolution of the effective rank through training on the four datasets. We observe that the effective rank of the weight matrix decreases drastically towards the end of training. To confirm that this indeed leads to LD-SB, we set \(P\) to be the subspace spanned by the top singular directions of the first layer weight matrix and compute \(P\) and \(P_{\perp}\) randomized accuracies as well as the relative logit changes. The results, presented in Table 1, confirm LD-SB in the rich regime on these datasets.
\begin{table}
\begin{tabular}{|c c c c c c c|}
\hline
Dataset & rank \((P)\) & Acc(\(f\)) & \(P_{\perp}\)-RA \((\uparrow)\) & \(P\)-RA \((\downarrow)\) & \(P_{\perp}\)-LC \((\downarrow)\) & \(P\)-LC \((\uparrow)\) \\
\hline \hline
b-Imagenette & \(1\) & \(93.05\pm 0.26\) & \(89.94\pm 0.22\) & \(49.53\pm 0.24\) & \(28.57\pm 0.26\) & \(92.13\pm 0.24\) \\
Imagenette & \(10\) & \(79.52\pm 0.13\) & \(75.89\pm 0.25\) & \(9.33\pm 0.01\) & \(33.64\pm 1.21\) & \(106.29\pm 0.53\) \\
Waterbirds & \(3\) & \(91.88\pm 0.1\) & \(91.47\pm 0.11\) & \(62.51\pm 0.07\) & \(25.24\pm 1.03\) & \(102.35\pm 0.19\) \\
MNIST-CIFAR & \(1\) & \(99.69\pm 0.0\) & \(94.15\pm 0.21\) & \(55.2\pm 0.13\) & \(38.97\pm 0.76\) & \(101.98\pm 0.31\) \\
\hline
\end{tabular}
\end{table}
Table 1: Demonstration of LD-SB in the rich regime: This table presents \(P_{\perp}\) and \(P\) randomized accuracies (RA) as well as logit changes (LC) on the four datasets. These results confirm that the projection of input \(x\) onto the subspace spanned by \(P\) essentially determines the model’s prediction on \(x\). \(\uparrow\) (resp. \(\downarrow\)) indicates that LD-SB implies a large (resp. small) value.
Figure 3: Evolution of effective rank of first layer weight matrices in rich and lazy regimes.
Moreover, in Appendix B.2, in Table 4, we show that an independent model trained on \((P_{\perp}x,y)\) achieves significantly high accuracy.
#### 4.4.3 Lazy regime
For the lazy regime, it turns out that the rank of the first layer weight matrix remains high throughout training, as shown in Figure 3(b). However, we are able to find a low dimensional projection matrix \(P\) satisfying the conditions of LD-SB (as stated in Def 1.1) as the solution to an optimization problem. More concretely, given a pretrained model \(f\) and a rank \(r\), we obtain a _projection matrix_ \(P\) by solving:
\[\min_{P}\frac{1}{n}\sum_{i=1}^{n}\left(\mathcal{L}\left(f(Px^{(i)}),y^{(i)} \right)+\lambda\mathcal{L}\left(f(P^{\perp}x^{(i)}),\mathcal{U}[L]\right)\right) \tag{4}\]
where \(\mathcal{U}[L]\) represents a uniform distribution over all the \(L\) labels, \((x^{(1)},y^{(1)}),\cdots,(x^{(n)},y^{(n)})\) are training examples and \(\mathcal{L}\left(\cdot,\cdot\right)\) is the cross entropy loss. We reiterate that the optimization is only over \(P\), while the model parameters \(f\) are unchanged. In words, the above function ensures that the neural network produces correct predictions along \(P\) and uninformative predictions along \(P_{\perp}\). Table 2 presents the results for \(P_{\perp}\) and \(P\)-RA as well as LC. As can be seen, even in this case, we are able to find small rank projection matrices demonstrating LD-SB. Similar to the rich regime, in Appendix B.2, in Table 5, we show that an independent model trained on \((P_{\perp}x,y)\) in the lazy regime achieves significantly high accuracy.
## 5 Training diverse classifiers using _OrthoP_
Motivated by our results on low dimensional SB, in this section, we present a natural way to train diverse models, so that an ensemble of such models could mitigate SB. More concretely, given an initial model \(f\) trained with rich regime initialization, we first compute the low dimensional projection matrix \(P\) using the top few PCA components of the first layer weights.
We then train another model \(f_{\text{proj}}\) by projecting the input through \(P_{\perp}\), i.e., instead of using the dataset \((x^{(i)},y^{(i)})\) for training, we use \((P_{\perp}x^{(i)},y^{(i)})\) for training the second model (denoted by \(f_{\text{proj}}\)). We refer to this training procedure as \(OrthoP\) for _orthogonal projection_. First, we show that this method provably learns a different set of features for any dataset within IFM in the rich regime. Then, we provide two natural diversity metrics and demonstrate that \(OrthoP\) leads to diverse models on practical datasets. Finally, we provide a natural way of ensembling these models and demonstrate that on real-world datasets such ensembles can be significantly more robust than the baseline model.
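In code, the construction of \(P\) and of the projected training inputs amounts to a few lines; the sketch below (ours) uses the top-\(r\) right singular directions of the first-layer weight matrix as a stand-in for its top PCA components and omits the actual re-training of \(f_{\text{proj}}\):

```python
import numpy as np

def orthop_inputs(W1, X, r):
    # W1: first-layer weights (m x d); X: training inputs (n x d).
    _, _, Vt = np.linalg.svd(W1, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]                      # rank-r projection matrix
    X_proj = X @ (np.eye(X.shape[1]) - P)      # P_perp x, used to train f_proj
    return P, X_proj
```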
**Theoretical proof for IFM**: First, we theoretically establish that \(f\) and \(f_{\text{proj}}\) obtained via \(OrthoP\) rely on different features for any dataset within IFM. Consequently, by the definition of IFM, \(f\) and \(f_{\text{proj}}\) have independent logits conditioned on the class. Its proof appears in Appendix A.1.2.
**Proposition 5.1**.: _Consider any IFM dataset as described in Section 4.1. Let \(f\) be the model described in Theorem 3.1 and \(f_{\text{proj}}\) be the second model obtained by \(OrthoP\). Then, the outputs of \(f\) and \(f_{\text{proj}}\) on \(x\), i.e., \(f(x)\) and \(f_{\text{proj}}(x)\), depend only on \(x_{1}\) and on \(\{x_{2},\cdots,x_{d}\}\), respectively._
\begin{table}
\begin{tabular}{|c c c c c c c|}
\hline
Dataset & rank \((P)\) & Acc(\(f\)) & \(P_{\perp}\)-RA (\(\uparrow\)) & \(P\)-RA (\(\downarrow\)) & \(P_{\perp}\)-LC (\(\downarrow\)) & \(P\)-LC (\(\uparrow\)) \\
\hline \hline
b-Imagenette & \(1\) & \(92.75\pm 0.06\) & \(90.07\pm 0.34\) & \(52.09\pm 1.34\) & \(36.94\pm 1.01\) & \(138.41\pm 1.62\) \\
Imagenette & \(15\) & \(79.97\pm 0.44\) & \(68.25\pm 1.18\) & \(11.92\pm 0.82\) & \(55.99\pm 3.86\) & \(133.86\pm 5.42\) \\
Waterbirds & \(6\) & \(90.46\pm 0.07\) & \(89.67\pm 0.42\) & \(62.44\pm 4.48\) & \(36.89\pm 5.18\) & \(105.41\pm 7.06\) \\
MNIST-CIFAR & \(2\) & \(99.74\pm 0.0\) & \(99.45\pm 0.17\) & \(49.83\pm 0.67\) & \(24.9\pm 0.61\) & \(141.12\pm 1.86\) \\
\hline
\end{tabular}
\end{table}
Table 2: Demonstration of LD-SB in the lazy regime: This table presents \(P_{\perp}\) and \(P\) randomized accuracies as well as logit changes on the four datasets. These results confirm that the projection of input \(x\) onto the subspace spanned by \(P\) essentially determines the model’s prediction on \(x\).
**Diversity Metrics**: Given any two models \(f\) and \(\tilde{f}\), we empirically evaluate their diversity using two metrics. The first is mistake diversity: \(\text{Mist-Div}\left(f,\tilde{f}\right)\overset{\text{def}}{=}1-\frac{|\{i:f(\mathbf{x}^{(i)})\neq y^{(i)}\,\&\,\tilde{f}(\mathbf{x}^{(i)})\neq y^{(i)}\}|}{\min\left(|\{i:f(\mathbf{x}^{(i)})\neq y^{(i)}\}|,|\{i:\tilde{f}(\mathbf{x}^{(i)})\neq y^{(i)}\}|\right)}\), where we abuse notation by using \(f(x_{i})\) (resp. \(\tilde{f}(x_{i})\)) to denote the class predicted by \(f\) (resp. \(\tilde{f}\)) on \(x_{i}\). Higher \(\text{Mist-Div}\left(f,\tilde{f}\right)\) means that there is very little overlap in the mistakes of \(f\) and \(\tilde{f}\). The second is class conditioned logit correlation, i.e., the correlation between the outputs of \(f\) and \(\tilde{f}\), conditioned on the class. More concretely, \(\text{CC-LogitCorr}\left(f,\tilde{f}\right)=\frac{\sum_{y\in\mathcal{Y}}\text{corr}\left(\left[f(\mathbf{x}_{i})\right],[\tilde{f}(\mathbf{x}_{i})]:y_{i}=y\right)}{|\mathcal{Y}|}\), where \(\text{corr}([f(\mathbf{x}_{i})],[\tilde{f}(\mathbf{x}_{i})]:y_{i}=y)\) represents the empirical correlation between the logits of \(f\) and \(\tilde{f}\) on the data points where the true label is \(y\). Table 3 compares the diversity of two independently trained models (\(f\) and \(f_{\text{ind}}\)) with that of two sequentially trained models (\(f\) and \(f_{\text{proj}}\)). The results demonstrate that \(f\) and \(f_{\text{proj}}\) are more diverse compared to \(f\) and \(f_{\text{ind}}\).
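Both metrics are easy to evaluate; in the sketch below (ours), the per-class logit vectors are flattened before computing the correlation, which is our reading of the definition:

```python
import numpy as np

def mistake_diversity(pred1, pred2, y):
    m1, m2 = (pred1 != y), (pred2 != y)
    return 1.0 - np.sum(m1 & m2) / min(m1.sum(), m2.sum())

def cc_logit_corr(logits1, logits2, y):
    # Average over classes of the correlation between the two models' logits,
    # restricted to the examples of that class.
    cs = [np.corrcoef(logits1[y == c].ravel(), logits2[y == c].ravel())[0, 1]
          for c in np.unique(y)]
    return float(np.mean(cs))
```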
Figure 4 shows the decision boundaries of \(f\) and \(f_{\text{proj}}\) on the \(2\)-dimensional subspace spanned by the top two singular vectors of the weight matrix. We observe that the decision boundary of the second model is more non-linear compared to that of the first model.
**Ensembling**: Figure 5 shows the variation of test accuracy with the strength of Gaussian noise added to the pretrained representations of the dataset. Here, an ensemble is obtained by averaging the logits of the two models. We can see that an ensemble of \(f\) and \(f_{\text{proj}}\) is much more robust compared to an ensemble of \(f\) and \(f_{\text{ind}}\).
## 6 Discussion
In this work, we characterized the simplicity bias (SB) exhibited by one hidden layer neural networks in terms of the low-dimensional input dependence of the model. We provided a theoretical proof of the presence of low-rank SB on a general class of linearly separable datasets. We further validated our hypothesis empirically on real datasets. Based on this characterization, we also proposed OrthoP, a simple ensembling technique to train diverse models, and showed that it leads to models with significantly better robustness to Gaussian noise.
This work is an initial step towards rigorously defining simplicity bias or shortcut learning of neural networks, which is believed to be a key reason for their brittleness (Geirhos et al., 2020). Providing a similar characterization for deeper networks is an important research direction, which requires _deeper_ understanding of the training dynamics and limit points of gradient descent on the loss landscape.
## Acknowledgements
We acknowledge support from a Simons Investigator Fellowship, NSF grant DMS-2134157, DARPA grant W911NF2010021, and DOE grant DE-SC0022199.
Figure 4: Decision boundaries for \(f\) and \(f_{\text{proj}}\) for B-Imagenette and Waterbirds datasets, visualized in the top \(2\) singular directions of the first layer weight matrix. The decision boundary of \(f_{\text{proj}}\) is more non-linear compared to that of \(f\).
Figure 5: Variation of test accuracy vs standard deviation of Gaussian noise added to the pretrained representations of the dataset. Model 1 (i.e., \(f\)) is kept fixed, and values for both the ensembles are averaged across 3 runs. Standard deviation is shown by the error bars.
|
2308.12785 | Single-shot Bayesian approximation for neural networks | Deep neural networks (NNs) are known for their high-prediction performances. However, NNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of NNs (BNNs), such as Monte Carlo (MC) dropout BNNs, do provide uncertainty measures and simultaneously increase the prediction performance. The only disadvantage of BNNs is their higher computation time during test time because they rely on a sampling approach. Here we present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs. Our approach is based on moment propagation (MP) and allows to analytically approximate the expected value and the variance of the MC dropout signal for commonly used layers in NNs, i.e. convolution, max pooling, dense, softmax, and dropout layers. The MP approach can convert an NN into a BNN without re-training given the NN has been trained with standard dropout. We evaluate our approach on different benchmark datasets and a simulated toy example in a classification and regression setting. We demonstrate that our single-shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BNNs. We show that using part of the saved time to combine our MP approach with deep ensemble techniques does further improve the uncertainty measures. | Kai Brach, Beate Sick, Oliver Dürr | 2023-08-24T13:40:36Z | http://arxiv.org/abs/2308.12785v1 | # Single-shot Bayesian approximation for neural networks
###### Abstract
Deep neural networks (NNs) are known for their high-prediction performances. However, NNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of NNs (BNNs), such as Monte Carlo (MC) dropout BNNs, do provide uncertainty measures and simultaneously increase the prediction performance. The only disadvantage of BNNs is their higher computation time during test time because they rely on a sampling approach. Here we present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs. Our approach is based on moment propagation (MP) and allows to analytically approximate the expected value and the variance of the MC dropout signal for commonly used layers in NNs, i.e. convolution, max pooling, dense, softmax, and dropout layers. The MP approach can convert an NN into a BNN without re-training given the NN has been trained with standard dropout. We evaluate our approach on different benchmark datasets and a simulated toy example in a classification and regression setting. We demonstrate that our single-shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BNNs. We show that using part of the saved time to combine our MP approach with deep ensemble techniques does further improve the uncertainty measures.
Deep neural networks, Bayesian neural networks, moment propagation, error propagation, MC dropout approximation, uncertainty
## 1 Introduction
Deep neural networks (NNs) have emerged over the last decade as the dominant technique for the analysis of perceptual data in many fields of computer vision.
NNs are also proposed for safety-critical applications such as autonomous driving [1, 2] or medical applications [3]. However, classical NNs have deficits in capturing the model uncertainty [4, 5]. But for safety-critical applications, it is mandatory to provide an uncertainty measure that can be used to identify unreliable predictions [6, 7, 8, 9, 10]. Such unreliable predictions can arise in situations that are completely different from anything that occurred during training. In many application areas such as robotics or autonomous driving [1, 11], where machines interact with humans, it is furthermore important to identify in real time those situations where a model prediction is unreliable and a human intervention is necessary.
Bayesian NNs (BNNs) [12] are an established method to compute an uncertainty measure for individual model predictions that takes the model uncertainty into account, in addition to the uncertainty inherent in the data. However, state-of-the-art BNNs require sampling during deployment. This leads to enlarged computation times compared to classical NNs, limiting their usage for real-time applications. This work overcomes this drawback by providing a method that allows to approximate the expected value and variance of a BNN's predictive distribution in a single run. It therefore has a computation time comparable to a classical NN. We focus here on a sampling-free approximation for a special variant of BNNs known as MC dropout [5].
Ensembling-based models take an alternative approach to estimate uncertainties and have been successfully applied to NNs [13; 14]. Recently it has been shown that ensembles of BNNs can further improve the quality of the uncertainty measure of both approaches [15].
## 2 Related work
### Quantization of uncertainty
For the discussion of uncertainty, it is beneficial to treat deep learning (DL) models in a probabilistic framework where distributions are used to capture uncertainties. It is common to distinguish between aleatoric uncertainty, capturing the data-inherent variability, and epistemic uncertainty, capturing the uncertainty about the model parameters.
Aleatoric uncertainty can be modeled by controlling the parameters of a predictive distribution \(p(y|x)\) of the outcome \(y\) given \(x\) via the output nodes of an NN; e.g., for a Gaussian predictive distribution \(p(y|x)=N(y;\mu,\sigma)\), \(\mu\) and \(\sigma\) need to be controlled by the NN. For more complex predictive distributions for which the distribution family cannot be specified, a parametric transformation model can be used [16]. In computer vision, the input \(x\) is often an image and the outcome \(y\) may be a scalar (regression), a categorical variable (classification), or might comprise as many categorical variables as there are pixels (semantic segmentation). These models are trained on the training data \(D=(x_{i},y_{i})_{i=1}^{N}\) by minimizing the negative log-likelihood
\[\mathrm{NLL}=-\frac{1}{N}\sum_{i=1}^{N}\log\left(p(y_{i}|x_{i})\right). \tag{1}\]
To evaluate probabilistic models, the NLL on the test data is used. It can be shown that the NLL is a strictly proper scoring rule [17], meaning it is minimized if, and only if, the predictive distribution corresponds to the data-generating distribution. The model with the smallest test NLL yields the most appropriate predictive distribution.
For regression-type problems, a common choice for the family of the predictive distribution is the normal distribution \(N(\mu(x),\sigma(x)^{2})\), where the variance \(\sigma(x)^{2}\) quantifies the uncertainty at a particular value of \(x\). Several choices exist to model the variance \(\sigma^{2}(x)\). In [5], the variance is treated as the reciprocal of a hyperparameter \(\tau\), \(\sigma^{2}=\tau^{-1}\), and the value of \(\tau\) is chosen by cross-validation. Alternatively, the NN can be used to learn the variance from the data, if necessary in dependence on the input \(x\) [4]. Note that a non-probabilistic regression model that is trained with the mean-squared-error (MSE) loss corresponds to a model with a Gaussian predictive distribution and a constant variance.
For classification, a common choice for the predictive distribution is a categorical distribution \(p(y|x)=\mathrm{Cat}(y;\pi(x))\) where the number of classes defines the length of the parameter vector \(\pi(x)\) with \(\pi_{i}(x)\) being the predicted probability of class \(i\). Again, the parameters of the predictive distribution are controlled by an NN that computes \(\pi(x)\) by turning the NN output logits of the last layer into probabilities using the softmax function.
Another, less common, approach is to use a Dirichlet distribution instead of the categorical one [18]. Alternatively, an NN can be used to control the parameters of a transformation model to estimate the predictive distribution of an unordered or ordered categorical outcome [19].
From the predictive distribution an uncertainty measure can be constructed. For regression type problems the spread of the distribution, quantified by the variance \(\sigma^{2}(x)\), can be used. For classification tasks it is less obvious which measure captures the spread of the probabilities across the classes best. A common measure is the entropy:
\[H(x)=-\sum_{i=1}^{K}\pi_{i}(x)\cdot\ln(\pi_{i}(x))\,, \tag{2}\]
The entropy takes its minimal value of zero if the whole probability mass of one is assigned to a single class, indicating that there is no uncertainty about the outcome. Another common choice for the uncertainty of a prediction is \(1-\max_{i}(\pi_{i})\) [20].
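Computing this score from a predicted probability vector is a one-liner; in the sketch below (ours), the clipping that avoids \(\log 0\) is our addition:

```python
import numpy as np

def entropy(pi, eps=1e-12):
    pi = np.clip(pi, eps, 1.0)
    return float(-np.sum(pi * np.log(pi)))

print(entropy(np.array([1.0, 0.0, 0.0])))            # ~0: certain prediction
print(entropy(np.array([0.25, 0.25, 0.25, 0.25])))   # log(4): maximal uncertainty
```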
So far, the spread of the predictive distribution reflects the fact that the input \(x\) does not completely define the value of the outcome \(y\). This corresponds to the uncertainty inherent in the data (aleatoric uncertainty). Note that the aleatoric uncertainty cannot be reduced by adding more training data. However, it is obvious that a lack of training data hinders a precise estimation of the model parameters, leading to a high uncertainty. This kind of model uncertainty is called epistemic uncertainty. It becomes especially important when leaving the range of the training data in the case of regression, or when presenting an instance of a novel class in the case of classification.
Recent work shows that ensembles of NNs can be used to capture epistemic uncertainty. In this approach, an ensemble of NNs with identical architecture is trained on the same data but with different random initializations of the weights [13; 15]. During test time, the differently initialized NNs of the ensemble yield different outputs for the same input, from which the moments of the predictive distribution can be estimated.
A well-established approach to model the epistemic uncertainty of a model is Bayesian modeling. In BNNs, the fixed weights are replaced by distributions \(p(w|D)\). The more training data are available, the narrower these posteriors get, indicating a low model uncertainty. The uncertainty of the weights translates into the uncertainty of the predictive distribution. The predictive distribution in a BNN is given by marginalizing over different weight configurations via:
\[p(y|x)=\int p(y|x,w)p(w|D)\ dw \tag{3}\]
An analytical solution for (3) cannot be determined for a BNN with millions of weights. For small networks it is possible to do a Markov Chain Monte Carlo (MCMC) simulation [21]. In MCMC, \(T\) samples \(w_{t}\sim p(w|D)\) are drawn from the posterior. Using these samples, the integration in (3) can be approximated by summation:
\[p(y|x)\approx\frac{1}{T}\sum_{t=1}^{T}p(y|x,w_{t}) \tag{4}\]
For common-sized networks MCMC is way too slow and approximations of the posterior need to be used. One way to approximate the posterior is the variational inference approach, where a simple parametric distribution \(q_{\phi}(w)\), called variational distribution, is used to approximate the true posterior. The variational parameter \(\phi\) of the variational distribution \(q_{\phi}(w)\) is tuned to minimize the Kullback-Leibler divergence between the true posterior and \(q_{\phi}(w)\). A common approach is the mean-field variational approximation, in which the weight distributions are modeled independently, often using simple independent Gaussians for \(q_{\phi}(w)\)[22; 23]. While complex multivariate distributions are possible in principle [24], recent work [25] suggests that for deep neural networks the mean-field assumption gives a sufficient approximation to the predictive distribution in equation 3. The mean-field variational approximation can be further approximated, which is commonly done by MC dropout.
### MC dropout
In the case of MC dropout BNNs, each weight distribution is a Bernoulli distribution: the weight takes the value zero with dropout probability \(p^{*}\) and the value \(w\) with probability \(1-p^{*}\). All weights starting from the same neuron are set to zero simultaneously. The dropout probability \(p^{*}\) is usually treated as a fixed hyperparameter, and the weight value \(w\) is the variational parameter that is tuned during training. In contrast to standard dropout [26], the weights in MC dropout are not frozen and rescaled after training; instead, the dropout procedure is also applied during test time. It can be shown that MC dropout can be interpreted as a variational inference approach [5]. MC dropout BNNs have been successfully used in many applications, have proven to yield improved prediction performance, and allow the definition of uncertainty measures to identify individual unreliable predictions [27; 28; 29; 30]. To employ a trained BNN in practice, one performs \(T\) runs of predictions \(p(y|x,w_{t})\) with different weight configurations \(w_{t}\sim q_{\phi}\) during test time. As in the MCMC case, we can make use of equation 4 to calculate \(p(y|x)\). In this way, the outcome distribution incorporates the epistemic and aleatoric uncertainty.
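As a minimal illustration of this test-time procedure, the following NumPy sketch (ours; a toy 1-hidden-layer regression network) performs \(T\) stochastic forward passes and returns the MC estimates of the predictive mean and its variance:

```python
import numpy as np

def mc_dropout_predict(x, W1, b1, W2, b2, p_star, T, rng=np.random.default_rng(0)):
    h = np.maximum(W1 @ x + b1, 0.0)             # hidden ReLU activations
    outs = []
    for _ in range(T):
        keep = rng.random(h.shape) >= p_star     # Bernoulli(1 - p*) keep mask
        outs.append(W2 @ (h * keep) + b2)        # one stochastic forward pass
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.var(axis=0)   # MC mean and variance
```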
The only drawback of an MC dropout BNN compared to its classical NN variant is the increased computing time due to the sampling procedure. The sampling procedure leads to a computing time that is prohibitive for many real-time applications such as autonomous driving. To overcome this limitation, a single-shot method, which does not require sampling, is desirable.
### Moment propagation
In this work, we aim to approximate MC dropout BNNs without sacrificing their benefits over NNs, i.e., improved prediction performance and quantification of the epistemic uncertainty, and without the need for time-costly sampling. To do so, we no longer treat the activations of the neural network as fixed values but as distributions. This is similar to natural parameter networks [31], which are based on exponential distributions and therefore require that the network components approximately keep the distributions in the exponential family. This disallows the use of important but highly non-linear network components such as softmax, needed for classification, or dropout, needed for the quantification of uncertainty.
distributions in the exponential family. This disallows the use of important but highly non-linear network components such as softmax needed for classification or dropout needed for quantification of uncertainty.
In contrast, our method mainly relies on statistical moment propagation (MP). More specifically, we propagate the expectation \(E\) and the variance \(V\) of the signal distribution through the different layers of an NN. This approach is known in statistics as the delta method [32]. In MP, we approximate the layer-wise transformations of the variance and the expected value of our signal. A similar approach has also been used for NNs before [18; 33; 34; 35; 36; 37].
However, for quantifying the epistemic uncertainty, [18] and [36] did not include the MC dropout activity in their frameworks but performed time-consuming MC dropout runs. To the best of our knowledge, so far only [38] and [37] have modeled MC dropout within the MP framework, thus allowing the epistemic uncertainty to be evaluated in a single-shot approximation without the need for several MC dropout runs. While [38] included MC dropout in a sampling-free manner to estimate the "injected" noise, they did not consider the mean value but only propagated the variance. However, treating the mean value as part of the MP increases the predictive performance. Further, in contrast to our preliminary previous work [37] for regression-like problems, we here treat a more complete set of layers by including convolutions, max pooling, and softmax, which are important for classification problems.
## 3 Methods
The goal of our MP method is to approximate the expected value E and the variance V of the predicted output that is obtained by the above-described MC dropout method. When propagating the signal of an observation through an MC dropout network, we get at each layer with \(p\) nodes an activation signal with an expected value \(E\) (of dimension \(p\)) and a variance given by a variance-covariance matrix \(V\) (of dimension \(p\times p\)). In the dropout layer, uncertainty is introduced, yielding a non-zero variance in intermediate layers and the output, despite the fact that the input signal variance is assumed to be zero. We neglect the effect of correlations between different activations, which are small anyway in deeper layers due to the decorrelation effect of dropout. Hence, we only consider the diagonal terms of the covariance matrix. In the following, we describe the essential layer types in fully connected and convolutional networks and how the expected value E and the variance V are propagated. As layer types, we consider dropout, convolution, dense, max pooling, softmax, and ReLU activation layers. Figure 1 provides an overview of the layer-wise abstraction.
### Dropout layer
We start our discussion with the effect of MC dropout. Let \(E_{i}\) be the expectation at the ith node of the input layer and \(V_{i}\) the variance at the ith node. In a dropout layer, the random value of node \(i\) is multiplied independently with a Bernoulli variable \(Y\sim\texttt{Bern}(1-p^{*})\) that is either zero or one. The expectation \(E_{i}^{D}\) of the ith node after dropout is then given by:
\[E_{i}^{D}=E_{i}(1-p^{*}) \tag{5}\]
For computing the variance \(V_{i}^{D}\) of the ith node after dropout, we use the fact that the variance \(V(X\cdot Y)\) of the product of two independent random variables \(X\) and \(Y\) is given by [39]:
\[V(X\cdot Y)=V(X)V(Y)+V(X)E^{2}(Y)+E^{2}(X)V(Y) \tag{6}\]
With \(V(Y)=p^{*}(1-p^{*})\), we get:
\[V_{i}^{D}=V_{i}\cdot p^{*}(1-p^{*})+V_{i}(1-p^{*})^{2}+E_{i}^{2}\cdot p^{*}(1- p^{*}) \tag{7}\]
Figure 1: Overview of the proposed method. The expectation E and variance V flow through different layers of the network in a single forward pass. Shown is an example configuration in which dropout (DO) is followed by a convolution (CONV), an ReLU activation and a max pooling layer (POOL). The network output is specified by a softmax activation (SM) which follows a dense layer (FC) with linear activation. More complex networks can be built by different arrangements of the individual blocks.
Dropout is the only layer in our approach where uncertainty is created: even if the input has \(V_{i}=0\), the output of the dropout layer has, for \(p^{*}\neq 0\), a variance \(V_{i}^{D}>0\).
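Equations (5) and (7) translate directly into code; a sketch (ours) operating on vectors of per-node moments:

```python
import numpy as np

def mp_dropout(E, V, p_star):
    E_out = (1.0 - p_star) * E                         # Eq. (5)
    V_out = (V * p_star * (1.0 - p_star)               # Eq. (7)
             + V * (1.0 - p_star) ** 2
             + E ** 2 * p_star * (1.0 - p_star))
    return E_out, V_out

E, V = mp_dropout(np.array([1.0, -2.0]), np.array([0.0, 0.0]), p_star=0.3)
print(E, V)  # the variance becomes non-zero even for a deterministic input
```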
### Dense and convolutional layer
For a dense layer with \(p\) input and \(q\) output nodes, we compute the value of the ith output node as \(\sum_{j=1}^{p}w_{ji}x_{j}+b_{i}\), where \(x_{j},j=1\ldots p\), are the values of the input nodes. Using the linearity of the expectation, we get the expectation \(E_{i}^{F}\) of the ith output node from the expectations \(E_{j},j=1\ldots p\), of the input nodes:
\[E_{i}^{F}=\sum_{j=1}^{p}w_{ji}E_{j}+b_{i} \tag{8}\]
To calculate the change of the variance, we use the fact that the variance under a linear transformation behaves like \(V(w_{ji}\cdot x_{j}+b)=w_{ji}^{2}V(x_{j})\). Further, we assume independence of the \(p\) different summands, yielding:
\[V_{i}^{F}=\sum_{j=1}^{p}w_{ji}^{2}V_{j} \tag{9}\]
For convolutional layers, which are just another form of affine linear layers, the results above stay true. The sum in equation (8) is now over all neighboring pixels \(j\) of \(i\). For the expectation \(E^{C}\), the usual convolution formula stays valid and is now taken with respect to the expectations of the activations \(E(x_{j})\). In the technical implementation, the highly optimized convolution of the underlying deep learning framework can be used; one just has to provide the expectations E instead of the activations. For the variance \(V_{i}^{C}\), the weights of the kernel have to be squared and a version without bias has to be used.
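In code, equations (8) and (9) amount to two matrix products; the sketch below (ours) assumes the weight matrix \(W\) is arranged so that row \(i\) holds the weights \(w_{ji}\) into output node \(i\):

```python
import numpy as np

def mp_dense(E, V, W, b):
    # Eq. (8): linearity of the expectation.
    # Eq. (9): squared weights, no bias, assuming independent inputs.
    return W @ E + b, (W ** 2) @ V
```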
### ReLU activation layer
To calculate the expectation \(E_{i}^{R}\) and variance \(V_{i}^{R}\) of the ith node after a ReLU, as a function of the \(E_{i}\) and \(V_{i}\) of the layer before the ReLU, we need to make a distributional assumption. We assume that the input is Gaussian distributed. With \(\phi(x)=N(x;0,1)\) denoting the standard normal PDF and \(\Phi(x)\) the corresponding CDF, we get the following expressions for the expectation and variance; the derivation is given by [33]:
\[E_{i}^{R}=E_{i}\cdot\Phi\left(\frac{E_{i}}{\sqrt{V_{i}}}\right)+\sqrt{V_{i}} \cdot\phi\left(\frac{E_{i}}{\sqrt{V_{i}}}\right) \tag{10}\]
\[V_{i}^{R}=(E_{i}^{2}+V_{i})\Phi\left(\frac{E_{i}}{\sqrt{V_{i}}}\right)+E_{i} \sqrt{V_{i}}\cdot\phi\left(\frac{E_{i}}{\sqrt{V_{i}}}\right)-{E_{i}^{R}}^{2} \tag{11}\]
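A NumPy/SciPy sketch of equations (10) and (11) (ours; the variance floor `eps` is a numerical safeguard):

```python
import numpy as np
from scipy.stats import norm

def mp_relu(E, V, eps=1e-12):
    s = np.sqrt(np.maximum(V, eps))
    z = E / s
    E_out = E * norm.cdf(z) + s * norm.pdf(z)                               # Eq. (10)
    V_out = (E ** 2 + V) * norm.cdf(z) + E * s * norm.pdf(z) - E_out ** 2   # Eq. (11)
    return E_out, np.maximum(V_out, 0.0)
```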
### Max pooling layer
The most commonly used pooling operation in convolutional neural networks (CNNs) is max pooling. It has been found to be superior for capturing invariances in image-like data compared to other sub-sampling operations [40; 41]. Max pooling takes the maximum value out of each pooling region, specified by a \(K=(N\times N)\) kernel matrix, and creates a new matrix containing these maximum values.
For MP, we again have to make distributional assumptions. Specifically, we assume that the values within each pooling-region follow independent Gaussians with \(x_{k}\sim N(E_{k},V_{k})\), where \(k=1,\ldots,K\).
Extracting the maximum of the \(K\) independent Gaussians cannot be done directly with the conventional max pooling approach. To the best of our knowledge, there exists no closed-form expression to calculate the maximum of more than two Gaussian variables directly. We therefore decompose the problem into repeated calculations over two Gaussians. The exact moments \(E\) and \(V\) of \(X=\max(X_{1},X_{2})\) for two random variables \(X_{i}\sim N(E_{i},V_{i})\), \(i\in\{1,2\}\), are given by [42] as
\[E^{P}(X)=E_{1}\Phi\left(\frac{E_{1}-E_{2}}{\theta}\right)+E_{2}\Phi\left(\frac{E_{2}-E_{1}}{\theta}\right)+\theta\phi\left(\frac{E_{1}-E_{2}}{\theta}\right) \tag{12}\]

\[V^{P}(X)=\left(V_{1}+E_{1}^{2}\right)\Phi\left(\frac{E_{1}-E_{2}}{\theta}\right)+\left(V_{2}+E_{2}^{2}\right)\Phi\left(\frac{E_{2}-E_{1}}{\theta}\right)+\left(E_{1}+E_{2}\right)\theta\phi\left(\frac{E_{1}-E_{2}}{\theta}\right)-\left(E^{P}(X)\right)^{2} \tag{13}\]
where \(\theta=\sqrt{V_{1}+V_{2}}\) for uncorrelated random variables.
To calculate the maximum of an arbitrary number \(K\) of random variables, \(X=\max(X_{1},X_{2},X_{3},\ldots,X_{K})\), we apply equations (12) and (13) \(K-1\) times. For example, a \(K=(2\times 2)\) kernel matrix leads to a vector \(X=(X_{1},\ldots,X_{4})\) of four independent Gaussians with \(X_{k}\sim N(E_{k},V_{k})\), where \(k=1,\ldots,4\). To find the maximum of \(X\), we need to calculate the maximum of two independent random variables recursively.
In [35], the random variables \(X_{k}\) have been sorted using a heuristic. However, in preliminary studies we did not find a consistently beneficial effect of the ordering and therefore refrained from reordering.
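The pairwise step and its repeated application translate into the following sketch (ours); note that treating the running maximum as Gaussian inside the recursion is itself an approximation:

```python
import numpy as np
from scipy.stats import norm

def mp_max2(E1, V1, E2, V2):
    theta = np.sqrt(V1 + V2)
    alpha = (E1 - E2) / theta
    E = E1 * norm.cdf(alpha) + E2 * norm.cdf(-alpha) + theta * norm.pdf(alpha)  # Eq. (12)
    EX2 = ((V1 + E1 ** 2) * norm.cdf(alpha)
           + (V2 + E2 ** 2) * norm.cdf(-alpha)
           + (E1 + E2) * theta * norm.pdf(alpha))
    return E, EX2 - E ** 2                                                      # Eq. (13)

def mp_maxpool_region(Es, Vs):
    # Apply the pairwise rule K-1 times across one pooling region.
    E, V = Es[0], Vs[0]
    for Ek, Vk in zip(Es[1:], Vs[1:]):
        E, V = mp_max2(E, V, Ek, Vk)
    return E, V
```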
### Softmax layer
For classification, the outputs of the NN are the probabilities \(\pi_{i}\) for the respective classes \(i=1,\ldots,K\). Usually, the probabilities are calculated via a softmax (\(\mathrm{sm}\)) layer from the outputs \(z_{i}\) (logits) of the final layer of the network as
\[\pi_{i}=\mathrm{sm}(z_{i})=\frac{\exp(z_{i})}{\sum_{k=1}^{K}\exp(z_{k})} \tag{14}\]
Again, we assume that the inputs \(z_{i}\) follow independent Gaussians, \(z_{i}\sim N(E_{i},V_{i})\). An approximation for the expectation of the softmax is given in [43], with \(\sigma(z)\) being the sigmoid function:
\[E(\mathrm{sm}(z_{i}))\approx\frac{1}{2-K+\sum_{k^{\prime}\neq i}\frac{1}{E(\sigma(z_{i}-z_{k^{\prime}}))}} \tag{15}\]
Since \(z_{i}\) and \(z_{k^{\prime}}\) are independent Gaussians, \(z_{i}-z_{k^{\prime}}\sim N(E(z_{i})-E(z_{k^{\prime}}),V(z_{i})+V(z_{k^{\prime}}))\). For the calculation of equation (15), we thus need the expectation of a Gaussian "piped through" a sigmoid function. While [43] also gives an approximation for this, we found in Monte Carlo studies that the approximation given in equation (8.68) of Murphy's book [44] yields better results. In this approximation, the sigmoid function is approximated by the CDF \(\Phi\) of the standard Gaussian, as:
\[\sigma(x)\approx\int_{-\infty}^{\sqrt{\pi/8}\,x}N(x^{\prime};0,1)\,dx^{\prime}=\Phi\left(\sqrt{\pi/8}\,x\right) \tag{16}\]
This allows us to write the expectation \(E[\sigma(X)]\) of \(X\sim N(E_{i},V_{i})\) (see [44]) as
\[E_{x\sim N(E_{i},V_{i})}[\sigma(x)]=\Phi\left(\frac{E_{i}}{\sqrt{V_{i}+8/\pi}}\right) \tag{17}\]
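Putting equations (15) and (17) together yields the following sketch (ours) for the expected softmax output of independent Gaussian logits:

```python
import numpy as np
from scipy.stats import norm

def mp_softmax_mean(E, V):
    K = len(E)
    out = np.empty(K)
    for i in range(K):
        dE = E[i] - np.delete(E, i)   # mean of z_i - z_k'
        dV = V[i] + np.delete(V, i)   # variance of z_i - z_k' (independence)
        e_sig = norm.cdf(dE / np.sqrt(dV + 8.0 / np.pi))  # Eq. (17)
        out[i] = 1.0 / (2.0 - K + np.sum(1.0 / e_sig))    # Eq. (15)
    return out

print(mp_softmax_mean(np.array([2.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.1])))
```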
TensorFlow implementations of all described layers are available on GitHub (see Footnotes 1 and 2). Note that in order to use the MP approach to convert an NN into a BNN, no re-training has to be done, provided that the original network has been trained with standard dropout.
Footnote 1: [https://www.tensorflow.org/](https://www.tensorflow.org/)
Footnote 2: [https://github.com/kaibrach/Moment-Propagation](https://github.com/kaibrach/Moment-Propagation)
## 4 Results
### Toy dataset
We first apply our approach to a one-dimensional regression toy dataset, with only one input feature and additive Gaussian noise with \(\mu_{noise}=0\) and \(\sigma_{noise}=0.1\) on the training data. We use a fully connected NN with three layers, each with 256 nodes, ReLU activations, and dropout after the dense layers. We have a single node in the output layer, which is interpreted as the expected value \(\mu\) of the conditional outcome distribution \(p(y|x)\). We train the network for 2000 epochs using the MSE loss and apply dropout with \(p^{*}=0.3\). From the MC dropout BNN, we get at each x-position \(T=30\) MC samples \(\mu_{t}(x)\), from which we can estimate the expectation \(E_{\mu}\) by the mean and \(V_{\mu}\) by the variance of \(\mu_{t}(x)\). We use our MP approach on the same data to approximate the expected value \(E_{\mu}\) and the variance \(V_{\mu}\) of \(\mu\) at each \(x\) position. In addition, we use an NN in which dropout has only been used during training, yielding a deterministic output \(\mu(x)\). Within the range of the training data, all three approaches yield nearly identical results for the estimated \(\mu(x)\) (see upper panel in figure 2). We attribute this to the fact that we have plenty of training data, so the epistemic uncertainty is negligible. In the lower panel of figure 2, a comparison of the uncertainty of \(\mu(x)\) is shown by displaying an interval given by the expected value of \(\mu(x)\) plus/minus two times the standard deviation of \(\mu(x)\). Here, the widths of the resulting intervals of the BNN via the MP approach and via MC dropout are comparable (the NN does not yield an estimate of the epistemic uncertainty). This demonstrates that our MP approach is as useful as the MC approach for providing epistemic uncertainty estimates.
### UCI-datasets
To benchmark our method on regression tasks, we redo the analysis of [5] for the UCI regression benchmark datasets, using the same NN model structure as provided in the experiment. The NN model is a fully connected neural network with one hidden layer and ReLU activation, in which the predictive distribution \(p(y|x)\) is estimated via:
\[p(y|x)=\frac{1}{T}\sum_{t}N(y;\mu_{t}(x),\tau^{-1}) \tag{18}\]
Again, \(\mu_{t}(x)\) is the single output of the BNN for the tth out of \(T=10\ 000\) MC runs. To derive a predictive distribution, Gal assumes in each run a Gaussian distribution centered at \(\mu\) and with a precision \(\tau\), corresponding to the reciprocal of the variance. The parameter \(\mu\) is obtained from the NN and \(\tau\) is treated as a hyperparameter. For the MP model, the MC sampling in equation (18) is replaced by integration:
Figure 2: Comparison of the MP and MC dropout results. The NNs were fitted on training data in the range of -3 to 19 (vertical lines). In the upper panel the estimated expectations of the MC, the MP, and additionally the NN methods are compared. In the lower panel the predicted spread of \(\mu(x)\) for the MC and MP method is shown.
\[p(y|x) = \int N(y;\mu^{\prime},\tau^{-1})N(\mu^{\prime};E^{\text{MP}},V^{ \text{MP}})\ d\mu^{\prime} \tag{19}\] \[= N(y;E^{\text{MP}},V^{\text{MP}}+\tau^{-1})\]
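With this closed-form predictive, the test NLL can be computed in a single pass over the data; a sketch (ours):

```python
import numpy as np

def mp_regression_nll(y, E_mp, V_mp, tau):
    var = V_mp + 1.0 / tau  # total predictive variance of Eq. (19)
    return float(np.mean(0.5 * np.log(2.0 * np.pi * var)
                         + 0.5 * (y - E_mp) ** 2 / var))
```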
For the comparison of our MP method against MC dropout, we follow the same protocol as [5] (see Footnote 3). Accordingly, we train the network for 10\(\times\) the epochs provided in the individual dataset configuration. As described in [5], an extensive grid search over the dropout rate \(p^{*}=0.005,0.01,0.05,0.1\), and different values of the precision \(\tau\) is done. The hyperparameters minimizing the validation NLL are chosen and applied to the test set.
Footnote 3: [https://github.com/yaringal/DropoutUncertaintyExps](https://github.com/yaringal/DropoutUncertaintyExps)
We report in Table 1 the test performance (RMSE and NLL) achieved via the MC BNN using the optimal hyperparameters for the different UCI datasets. We also report the test RMSE and the NLL achieved with our MP method. Overall, the MC and MP approaches produce very similar results (the NAVAL dataset is a special case, because the outcome depends in a deterministic manner on the input, and the epistemic uncertainty is close to zero). But we want to stress that our MP approach is much faster, as shown in the last column of Table 1, because it only has to perform one forward pass instead of \(T=10\ 000\) forward passes.
### Classification
We also want to benchmark our MP method on a classification task. For this, we use a conventional CNN, as illustrated in figure 1, on CIFAR-10 data (see Footnote 4). We set up a CNN with three convolutional layers with filter sizes of 16, 32, and 64, and 3x3 kernels. ReLU activations followed by a max pooling operation are applied after each convolution. Dropout with \(p^{*}=0.3\) is applied to each max-pooling layer as well as to the two additional dense layers with 128 neurons each. In the last layer of the CNN, a softmax function is used to transform the logits \(z_{i}\) into a vector of probabilities \((\pi_{1},\dots,\pi_{K})\) for the categorical predictive distribution with \(K\) classes.
Footnote 4: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
To investigate whether the MP approach approximates the MC dropout uncertainty, we performed an out-of-distribution (OOD) experiment. This experiment is set up by training the CNN only on five out of ten classes of the CIFAR-10 dataset, which we call in-distribution (IND) classes (airplane, cat, deer, dog, ship); the remaining classes are OOD classes (automobile, bird, frog, horse, truck). This allows us to evaluate the quality of the uncertainty measures for aleatoric (for the IND setting) and combined aleatoric and epistemic (for the OOD setting) uncertainty. We used a train/validation split of \(80/20\) and, as loss function, the NLL of the categorical distribution \(p(y|x)=\mathrm{Cat}(\pi)\).
We apply an automatic reduction of the learning rate by a factor of 15% after five epochs if the monitored validation NLL does not improve. Further, we use an early stopping mechanism if no decrease in the validation NLL is observed for ten consecutive epochs. We use the same trained network during test time in three settings: as a non-Bayesian NN with fixed weights (NN), as an MC dropout version with \(T=50\) samples (MC), and with our approximation (MP).
\begin{table}
\begin{tabular}{l l l l l l l l l}
\hline \hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{\(N\)} & \multirow{2}{*}{\(Q\)} & \multicolumn{2}{c}{Test RMSE} & \multicolumn{2}{c}{Test NLL} & \multicolumn{2}{c}{Test RT [s]} \\
 & & & MC & MP & MC & MP & MC & MP \\
\hline
Boston & 506 & 13 & 3.14 \(\pm 0.20\) & 3.10 \(\pm 0.20\) & 2.57 \(\pm 0.07\) & 2.56 \(\pm 0.08\) & 2.51 \(\pm 0.03\) & 0.04 \(\pm 0.00\) \\
Concrete & 1,030 & 8 & 5.46 \(\pm 0.12\) & 5.40 \(\pm 0.12\) & 3.12 \(\pm 0.02\) & 3.13 \(\pm 0.03\) & 3.37 \(\pm 0.04\) & 0.04 \(\pm 0.00\) \\
Energy & 768 & 8 & 1.65 \(\pm 0.05\) & 1.61 \(\pm 0.05\) & 1.95 \(\pm 0.04\) & 2.01 \(\pm 0.04\) & 2.84 \(\pm 0.03\) & 0.04 \(\pm 0.00\) \\
Kin8nm & 8,192 & 8 & 0.08 \(\pm 0.00\) & 0.08 \(\pm 0.00\) & -1.10 \(\pm 0.01\) & -1.11 \(\pm 0.01\) & 7.37 \(\pm 0.06\) & 0.04 \(\pm 0.00\) \\
Naval & 11,934 & 16 & 0.00 \(\pm 0.00\) & 0.00 \(\pm 0.00\) & -4.36 \(\pm 0.01\) & -3.60 \(\pm 0.01\) & 9.69 \(\pm 0.11\) & 0.04 \(\pm 0.00\) \\
Power & 9,568 & 4 & 4.05 \(\pm 0.04\) & 4.04 \(\pm 0.04\) & 2.82 \(\pm 0.01\) & 2.84 \(\pm 0.01\) & 6.85 \(\pm 0.07\) & 0.04 \(\pm 0.00\) \\
Protein & 45,730 & 9 & 4.42 \(\pm 0.03\) & 4.41 \(\pm 0.02\) & 2.90 \(\pm 0.00\) & 2.91 \(\pm 0.00\) & 31.38 \(\pm 0.09\) & 0.05 \(\pm 0.00\) \\
Wine & 1,599 & 11 & 0.63 \(\pm 0.01\) & 0.63 \(\pm 0.01\) & 0.95 \(\pm 0.01\) & 0.95 \(\pm 0.01\) & 4.78 \(\pm 0.01\) & 0.04 \(\pm 0.00\) \\
Yacht & 308 & 6 & 2.93 \(\pm 0.22\) & 2.91 \(\pm 0.26\) & 2.35 \(\pm 0.07\) & 2.11 \(\pm 0.07\) & 2.01 \(\pm 0.01\) & 0.04 \(\pm 0.00\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of the average prediction performance in test RMSE (root-mean-square error), test NLL (negative log-likelihood), and test RT (runtime), including \(\pm\) standard error, on UCI regression benchmark datasets between MC and MP. \(N\) and \(Q\) correspond to the dataset size and the input dimension. For all test measures, smaller means better.
As a first test of our MP method, we compare how well our approach approximates the MC method with respect to the predictive distribution \(p(y|x)=\mathrm{Cat}(\pi(x))\). A direct comparison of the output of the network, the five estimated parameters \(\pi_{i}\), requires a multivariate treatment. Since the \(\pi_{i}\)s are probabilities, the restriction \(\sum_{i}\pi_{i}=1\) applies. A sound discussion would need to take this into account and compare the probabilities in the framework of compositional data, which is beyond the scope of this study. To still provide an impression of the similarity of the probabilities obtained with the three methods (NN, MC, MP), we compare the marginal probabilities of the first class (the IND class airplane) for all predictions in the test set (see figure 3).
In the upper panel of figure 3 we look only at samples from the test set that correspond to a class that is in the training set, and we can see from the color code that for airplane samples all models usually predict high airplane probabilities, whereas for non-airplane samples all models usually predict rather low airplane probabilities. For the 5000 OOD images (lower panel in figure 3), by definition, none of the possible classes are correct, leading to a larger spread of the probability values, but the wrongly assigned airplane probabilities are still quite reproducible across all three models. Comparing the left versus the right panels of figure 3 shows that the results from the MP and MC models are much closer to each other than the results from the NN and MC models, indicating again that our MP approach is a good approximation of the MC model.
For a global comparison of the predictive distribution taking all classes into account, we use the entropy from equation (2), quantifying the uncertainty of the predictive distribution in one number. For both test samples (IND and OOD), the entropy values from the MP method nicely approximate the entropy values from the MC dropout method (see left panel of figure 4). Most data lies on a diagonal, and the Pearson correlation coefficient between MP and MC is 0.981 with a 95% CI [0.980, 0.982] for the IND images, and 0.970 [0.968, 0.971] for the OOD images. Figure 4 further shows that the non-Bayesian network (NN) has a significantly weaker correlation with the MC approach; the correlation here is 0.931 [0.927, 0.934] for the IND examples and 0.893 [0.887, 0.899] for the OOD examples.
Figure 4 also shows that the predictive distribution of IND classes typically has a low entropy (indicating a low uncertainty of the predictions), and OOD classes often have larger entropy values (indicating a high uncertainty).
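The predictive entropy used here can be computed directly from the predicted class probabilities; a minimal sketch:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Entropy H = -sum_k pi_k log(pi_k) of each categorical predictive
    distribution; probs has shape (n_samples, n_classes)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)
```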
Altogether, our MP model yields a good approximation for the MC dropout model. MC dropout models are usually used for two reasons: first, to quantify the uncertainty, and second, to enhance the predictive performance. We now check if our MP method approximates the MC dropout results with respect to both aspects.
To assess the predictive performance, we use all ten classes of the CIFAR-10 dataset for training and testing. The achieved predictive performance in terms of accuracy is 0.7168 with a 95% Wilson CI [0.7079, 0.7255] in case of NNs, 0.7467 [0.7381, 0.7551] in case of MC, and 0.7459 [0.7372, 0.7543] in case of MP. In terms of the NLL, where a smaller value indicates a better performance, we see a similar picture and achieve \(0.82\pm 0.02\) (NN), \(0.77\pm 0.02\) (MC),
Figure 3: Density plots for comparison of the marginal probability values that were predicted for the IND class airplane in the OOD experiment, where the true class is indicated by the color code. The upper row (A, B) shows the results for 5000 IND images. The lower row (C, D) shows results for 5000 OOD images. Shown is the MC dropout approach plotted against our approximation (MP) (panels A and C) and against an NN (panels B and D).
and \(0.75\pm 0.02\) (MP)5. For both measures the MP and MC significantly outperform the simple neural network, while no significant performance difference has been observed between the MC method and the fast MP approximation.
Footnote 5: The 95% CIs have been calculated using \(\pm 1.96\) times the standard error
To study if the uncertainty measures resulting from MC and MP are equally reliable, we check whether the uncertainty measures from both methods perform equally well in identifying novel classes. For this purpose we switch back to the OOD experiment in which five classes were left out during training. We use the entropy as an uncertainty score to decide if an example is from a novel class [20; 28].
In the following, we present a ROC analysis for the ability to detect OOD-class examples. We use the entropy to distinguish in-distribution (IND) from OOD examples. In figure 5 the histograms for NN (upper panel), MC (middle panel), and MP (lower panel) are shown. As expected, the IND examples generally have a lower entropy. This effect is especially pronounced in the MC and MP case.
The entropy can thus be seen as a continuous score to distinguish IND from OOD examples. With such a score, we can cast the OOD detection as a binary classification and perform a ROC analysis. The ROC curves corresponding to the histograms in figure 5 are shown in figure 6 with dashed lines. In addition, the results for an ensemble of five trained models (ens=5) are shown, where each model was trained with a different random initialization, and the predicted probabilities of the five models have been averaged. In [45], the area under the receiver operating characteristic curve (AUC) is suggested as a measure for the ability of an OOD detection method, as reported in the legend.
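Casting OOD detection as a binary classification with the entropy as score makes this ROC analysis a few lines of scikit-learn; the names `entropy_ind` and `entropy_ood` are assumed to hold the per-example entropies of the two test sets.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

scores = np.concatenate([entropy_ind, entropy_ood])
labels = np.concatenate([np.zeros_like(entropy_ind),   # 0 = IND
                         np.ones_like(entropy_ood)])   # 1 = OOD

auc = roc_auc_score(labels, scores)        # OOD examples should score higher
fpr, tpr, thresholds = roc_curve(labels, scores)
```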
Finally, we investigate how many MC samples are required to achieve AUC values similar to the AUC value of the sampling-free MP approximation (see figure 7), i.e., how many sampling steps the MC model needs before it discriminates between IND and OOD samples as well as our MP model does.
To get a feeling for the reproducibility, we repeated the experiment 20 times. Figure 7 shows that, with an increasing number of MC forward passes, the AUC of the MC method gradually improves and outperforms an NN. After \(\approx\) 20 MC forward passes the AUC of the MC method is comparable to the AUC that we achieve with our MP method. At about \(T=30\) the MC method slightly surpasses the performance of the MP approximation.
Finally, we compare the performance of our approximation to the MC method with a filter experiment relevant in practice. We therefore sort the 10,000 test examples according to their entropy value and then quantify, for each entropy threshold, the accuracy of the subset of test examples with entropy values below this threshold. In case of a useful uncertainty measure (low entropy values indicate reliable predictions), we expect better accuracies for smaller subsets corresponding to test examples with lower entropy values. With our MP method (see dotted lines in figure 8) we achieve a performance very similar to the MC approach, and both approaches are clearly superior to the
Figure 4: Density plots for comparison of entropy values of the predicted categorical CPDs in the OOD experiment. On the one hand the MC model is compared to the MP model (panels A and C) and on the other hand to the NN model (panels B and D). In the upper panel the comparison is done on 5000 IND images, and in the lower panel the comparison is done on 5000 OOD images.
simple NN. Also shown in the figure are the results for an ensemble of five members (ens=5). The beneficial effect of ensembling for MC and MP becomes evident but is, interestingly, less pronounced in the case of the NN.
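The filter experiment itself reduces to sorting the test set by entropy and computing cumulative accuracies; a sketch, assuming `entropies` and `correct` are per-example NumPy arrays holding the predictive entropy and the 0/1 correctness of each test prediction:

```python
import numpy as np

order = np.argsort(entropies)            # most certain predictions first
sorted_correct = correct[order]
# accuracy of the subset passing each (increasingly permissive) entropy cutoff
cumulative_accuracy = np.cumsum(sorted_correct) / np.arange(1, len(order) + 1)
```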
## 5 Discussion
With our MP approach we have introduced an approximation to MC dropout that requires no sampling but instead propagates the expectation and the variance of the signal through the network. This results in a time saving by a factor that approximately corresponds to the number of MC runs. Further, the proposed approach can be applied to a traditional NN without re-training if the NN has been trained with standard dropout beforehand.
Figure 5: Entropy as OOD score. Shown are the histograms of the entropy separated between IND and OOD for NN, MC, MP from top to bottom.
Figure 6: ROC curves for an OOD classification based on the entropy for CIFAR10 with five OOD classes. The different ROC curves correspond to the entropy values from NN, MC and MP for a single network (ens=1) and a network ensemble of five networks (ens=5).
For the regression settings, we have shown that our MP approach precisely approximates the expectation and variance of the predictive distribution achieved by MC dropout. Also, the achieved prediction performance in terms of RMSE and NLL does not show significant differences when using MC dropout or our MP approach.
For the classification setting, we compared our approximation with the MC method. Specifically, we found no major difference between MC and MP in terms of the probabilities for the different classes and the derived entropy in both the IND and the OOD class setting. This suggests that the MP approximation can be used in all settings that require a precise quantification of uncertainty. To verify this assumption, we demonstrated that the beneficial effect of MC dropout remains preserved in the MP approach. This showed the ability of MC and MP models to improve the accuracy in the IND setting, to detect OOD examples, and to allow for filtering out wrongly classified images. We also demonstrated that the MP approach can be used in conjunction with ensembling.
Hence, our presented MP approach opens the door to include uncertainty information in real-time applications or to use the saved computing time for deep ensembling approaches leading to an additional boost in performance.
## 6 Acknowledgements
We are very grateful to Elektrobit Automotive GmbH for supporting this research work. Further, part of the work has been funded by the Federal Ministry of Education and Research of Germany (BMBF) in the project DeepDouble (grant no. 01IS19083A).
Figure 8: Filter experiment results where the entropy is used to filter out uncertain predictions. From left to right, ascending cutoffs of the entropy are used, leading to an increasing number of samples that pass the less stringent uncertainty cutoff and a decreasing accuracy. The legend indicates the model type used: NN, MC, and MP for a single network (ens=1) and an ensemble of five networks (ens=5)
Figure 7: Dependence of AUC on the number of MC samples. Shown are the AUC values corresponding to the OOD classification experiment achieved with different models: a non-Bayesian NN (black horizontal line), our MP approach (red horizontal line), and an MC approach (blue). The MC approach was performed with different numbers of MC samples \(T\). Dotted lines correspond to the 20 individual repetitions; the solid line indicates the average values. |
2302.12906 | Generative Invertible Quantum Neural Networks | Invertible Neural Networks (INN) have become established tools for the
simulation and generation of highly complex data. We propose a quantum-gate
algorithm for a Quantum Invertible Neural Network (QINN) and apply it to the
LHC data of jet-associated production of a Z-boson that decays into leptons, a
standard candle process for particle collider precision measurements. We
compare the QINN's performance for different loss functions and training
scenarios. For this task, we find that a hybrid QINN matches the performance of
a significantly larger purely classical INN in learning and generating complex
data. | Armand Rousselot, Michael Spannowsky | 2023-02-24T21:25:07Z | http://arxiv.org/abs/2302.12906v3 | **Generative Invertible Quantum Neural Networks**
## Abstract
**Invertible Neural Networks (INN) have become established tools for the simulation and generation of highly complex data. We propose a quantum-gate algorithm for a Quantum Invertible Neural Network (QINN) and apply it to the LHC data of jet-associated production of a Z-boson that decays into leptons, a standard candle process for particle collider precision measurements. We compare the QINN's performance for different loss functions and training scenarios. For this task, we find that a hybrid QINN matches the performance of a significantly larger purely classical INN in learning and generating complex data.**
## 1 Introduction
Generative modelling has been a field of particular interest in machine learning research, being vastly improved by successful model architectures, including Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Invertible Neural Networks (INN) [1, 2, 3]. Among other applications, their use in event generation has been extensively investigated [4, 5, 6]. Their advantages over the Markov chain Monte Carlo (MCMC) techniques [7, 8, 9, 10, 11], which had so far established themselves as the leading LHC simulation and interpretation methods, go beyond an increase in inference speed. Furthermore, generative models can be trained end-to-end, allowing for a much wider range of applications such as unfolding [12, 13, 14], anomaly detection [15, 16, 17, 18, 19] and many more [20].
However, the large parameter space of these Neural Networks (NN), which allows them to model complex interactions, also leads to a massive demand for computing resources. The size of popular NN architectures has long reached the boundary of computational feasibility. Quantum Machine Learning (QML) introduces the power of quantum computing to the existing foundation of machine learning to establish and then exploit a quantum advantage for a performance increase exclusive to quantum algorithms. While gate-based quantum computing differs significantly from classical computing, many equivalents to the aforementioned classical generative networks have already been constructed, including Quantum Autoencoders [21] and Quantum GANs [22, 23, 24, 25, 26, 27]. The notable exception is the INN [28, 29], which has not yet been transferred to the realm of QML. Such networks would be a desirable addition to the array of Quantum Neural Networks (QNN). While the tractability of the Jacobian determinant in classical INNs enables them to perform density estimation, which intrinsically prevents mode collapse, the full Jacobian matrix
can usually not be computed efficiently [30]. A fully tractable Jacobian in INNs, available for QNNs, would allow efficient learning of the principal data manifolds [31, 32, 33, 34], opening up opportunities for interpretable representation learning and new insights into underlying processes.
Coupling-based INN architectures have empirically been shown to be more resilient to the vanishing-gradient problem [28], which lets them directly benefit from deep architectures with many parameters. However, many of the INN applications listed so far already require considerable resources for training. Current research suggests that quantum models could circumvent this need for an immense parameter space. They outclass regular NNs in terms of expressivity, being able to represent the same transformations with substantially fewer parameters [35, 36, 37, 38, 39]. This theoretical groundwork is bolstered by several instances of specifically constructed QML circuits presenting significantly more efficient solutions than classically possible to specially designed problems [40, 41, 42, 43]. QNNs have already been successfully applied to relatively limited high-energy physics problems [44, 45, 46, 47, 48, 49, 50, 51, 21, 52, 53, 54, 55], alongside non-QML approaches [52, 53, 54, 55, 56]. However, to our knowledge, there has not yet been an attempt to construct an invertible QNN that can be used as a density estimator through its invertibility for generative tasks.
With this work, we aim to fill the remaining gap of a quantum equivalent to classical INNs, developing a Quantum Invertible Neural Network (QINN). We show how each step in the QNN pipeline can be designed to be invertible and showcase the ability of a simulated network to estimate the density of distributions. As a proof-of-principle, we apply our model to complex simulated LHC data for one of the most important and most studied high-energy physics processes,
\[pp\to Zj\rightarrow\ell^{+}\ell^{-}j\,\]
and show its capability to reconstruct the invariant mass \(M_{Z}\). While currently available noisy intermediate-scale quantum computers (NISQ) cannot support models required for even basic generative tasks, the concept of inverting QNNs still shows promise in light of the aforementioned theoretical and simulation-based results, documenting their increased expressivity. As we will confirm in our experiments, QINNs have the potential to express the same set of transformations as classical INNs with much fewer parameters.
This study is structured as follows: In Sec. 2 we provide a short review of classical Invertible Neural Networks. Then, Sec. 3 is dedicated to our proposal for a Quantum Invertible Neural Network based on quantum-gate algorithms and outlines its technical challenges. Next, we apply the QINN to simulated LHC data in Sec. 4. We offer a brief summary and conclusions in Sec. 5.
## 2 Classical Invertible Neural Networks
To illustrate the advantages of invertible models and to benchmark the performance of the QINN, we first present the architecture of a classical INN. As we will see in the following section, we can train any model to perform density estimation as long as it fulfils the requirements of invertibility and a tractable Jacobian determinant. To meet these requirements, INNs are constructed using coupling blocks, see Fig. 1. Each coupling block splits the data vector into two halves \(x=[u,\,v]^{T}\) along the feature dimension, transforming one half at a time. The parameters for this transformation are predicted by a small model, e.g. a neural network, based on the other half of the data. The transformations used as a benchmark in this work are cubic spline coupling blocks [57]. However, we will first introduce affine coupling blocks as a simpler example.
The affine coupling block performs an element-wise affine transformation on one half \(u\), with
Figure 1: Layout of a coupling block. The input is split into two halves along the feature dimension. One half is used as neural network input to predict parameters for either an affine or a spline transformation of the other. In the end, both halves are fused again. The blue lines illustrate example coupling transformations of both block types, based on the network output (black knots). The orange line shows the inverse transformation.
parameters \(s(v),t(v)\) predicted by a neural network from the other half \(v\)
\[\begin{bmatrix}\hat{u}\\ \hat{v}\end{bmatrix}=\begin{bmatrix}u\odot e^{s(v)}+t(v)\\ v\end{bmatrix}. \tag{1}\]
The inverse is now trivially given by
\[\begin{bmatrix}u\\ v\end{bmatrix}=\begin{bmatrix}(\hat{u}-t(\hat{v}))\odot e^{-s(\hat{v})}\\ \hat{v}\end{bmatrix}. \tag{2}\]
In each layer the Jacobian is an upper triangular matrix, as the transformation of \(u_{i}\) only depends on \(v\) and \(u_{i}\)
\[J=\begin{pmatrix}\text{diag }e^{s(v)}&\text{finite}\\ 0&\mathbb{I}\end{pmatrix}. \tag{3}\]
Therefore the Jacobian determinant can be calculated easily and, using \(\det J(f_{1}\circ f_{2})=\det J(f_{1})\det J(f_{2})\), the Jacobian determinant for the entire network can be assembled from all blocks individually. In this configuration of the coupling block, the lower half \(v\) stays unchanged. To mitigate this, a unitary soft-permutation matrix is applied to the data vector after each block. Since this matrix is invertible and its Jacobian determinant is \(1\), this does not affect the network prerequisites.
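As an illustration, a minimal PyTorch sketch of such an affine coupling block is given below; the subnetwork width and activation are assumptions, and the actual experiments use FrEIA's spline couplings instead.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling block implementing Eqns. 1-3."""
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.half = dim // 2
        # subnetwork predicting s(v) and t(v) from the untouched half v
        self.subnet = nn.Sequential(
            nn.Linear(dim - self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def forward(self, x):
        u, v = x.split([self.half, x.shape[-1] - self.half], dim=-1)
        s, t = self.subnet(v).chunk(2, dim=-1)
        u_hat = u * torch.exp(s) + t                 # Eqn. 1
        log_det = s.sum(dim=-1)                      # log|det J| from Eqn. 3
        return torch.cat([u_hat, v], dim=-1), log_det

    def inverse(self, y):
        u_hat, v = y.split([self.half, y.shape[-1] - self.half], dim=-1)
        s, t = self.subnet(v).chunk(2, dim=-1)
        u = (u_hat - t) * torch.exp(-s)              # Eqn. 2
        return torch.cat([u, v], dim=-1)
```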
The spline coupling block is functionally similar, except that the transformation performed in each block is a cubic spline of \(u\), predicted from \(v\). Given a number of buckets \(\xi-1\) (a hyperparameter), the coupling block subnetwork predicts \(2\xi+2\) parameters for each \(u_{i}\). These parameters serve as anchor points for a cubic spline on \(u_{i}\), namely the \(x,y\) coordinates of the \(\xi\) bucket boundaries (ensuring that \(x_{i}<x_{i+1},y_{i}<y_{i+1}\)) and two values defining the slope of the interpolating spline at the two outer anchor points. Since each bucket is simply a monotonic cubic function of \(u\), the whole transformation is invertible, and the same argument for the triangularity of the Jacobian from the affine coupling blocks still holds. Spline couplings trade increased computation time for higher expressivity per parameter, which is why we choose them for the comparison to QINNs in Sec. 4. We implement these networks in PyTorch using the FrEIA module [58], where finer implementation details can be found.
### Density Estimation using Invertible Models
We aim to train the model to estimate the density of some given training data with distribution \(p(x)\). This is achieved by learning a bijective mapping \(f\), defined by network parameters \(\theta\), from some desired distribution \(p(x),\ x\in\mathbb{R}^{n}\) to a normal distribution
\[f(x\sim p(x)|\theta)\sim\mathcal{N}_{0,1}^{n}(z). \tag{4}\]
With an invertible network function \(f\), new data can straightforwardly be generated from the desired distribution by sampling \(x\sim f^{-1}(z\sim\mathcal{N}_{0,1}^{n}(z)|\theta)=:p(x|\theta)\). To construct a loss function, we can use the change of variables formula
\[p(x|\theta)=\mathcal{N}_{0,1}^{n}(f(x|\theta))\left|\det\left(\frac{\partial f }{\partial x}\right)\right|. \tag{5}\]
The training objective is to find the network parameters \(\theta\) which maximise the probability of observing the training data from our model \(f\)
\[\max_{\theta}p(\theta|x)\propto p(x|\theta)p(\theta). \tag{6}\]
We can transform this expression into a loss function by minimizing the negative log-likelihood and substituting Equation 5 for \(p(x|\theta)\). Finally, we assume a Gaussian prior \(p(\theta)\propto\exp(-\tau||\theta||_{2}^{2})\)1 and write \(J=\det\left(\frac{\partial f}{\partial x}\right)\) to get the loss function
Footnote 1: In our experiments we found it sufficient to set \(\tau=0\) for quantum parameters, which indicates a uniform weight prior on the qubit rotation angles.
\[\mathcal{L}=\mathbb{E}\left[\frac{||f(x|\theta)||_{2}^{2}}{2}- \log|J|\right]+\tau||\theta||_{2}^{2}. \tag{7}\]
Therefore all we need to train a model to perform density estimation is invertibility and the ability to calculate its Jacobian determinant w.r.t. the input \(x\).
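In code, the loss of Eqn. 7 is compact; a sketch, assuming `z` holds the latent codes \(f(x|\theta)\) and `log_det_J` the per-sample log-Jacobian determinants accumulated over all coupling blocks:

```python
import torch

def inn_nll_loss(z: torch.Tensor, log_det_J: torch.Tensor,
                 params=None, tau: float = 0.0) -> torch.Tensor:
    """Negative log-likelihood loss of Eqn. 7.
    z:          latent codes, shape (batch, n)
    log_det_J:  per-sample log|det(df/dx)|, shape (batch,)
    """
    loss = (0.5 * z.pow(2).sum(dim=-1) - log_det_J).mean()
    if params is not None and tau > 0.0:
        loss = loss + tau * sum(p.pow(2).sum() for p in params)
    return loss
```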
### Maximum Mean Discrepancy
We can improve targeted observables by using a so-called Maximum Mean Discrepancy (MMD) loss [59]. MMD estimates the distance of two given distributions in some Reproducing Kernel Hilbert Space. In our case we want to minimize the distance \(d(p(\phi(x)),p(\phi(f^{-1}(z|\theta))))\) given some features of our choice \(\phi\). MMD approximates this distance given samples from the two distributions \(X\sim p(x)\) and \(Y\sim p(f^{-1}(z|\theta))\) and a function \(\phi\)
\[\mathcal{L}_{MMD}^{2}=\left|\left|\frac{1}{|X|}\sum_{x\in X}\phi (x)-\frac{1}{|Y|}\sum_{y\in Y}\phi(y)\right|\right|_{2}^{2}= \tag{8}\] \[\frac{1}{|X|^{2}}\sum_{x\in X}\sum_{x^{\prime}\in X}\phi(x)^{T} \phi(x^{\prime})+\frac{1}{|Y|^{2}}\sum_{y\in Y}\sum_{y^{\prime}\in Y}\phi(y)^ {T}\phi(y^{\prime})-2\frac{1}{|X||Y|}\sum_{x\in X}\sum_{y\in Y}\phi(x)^{T}\phi (y).\]
Since all appearances of \(\phi\) involve the inner product \(\phi^{T}(\cdot)\phi(\cdot)\) we can use the kernel trick to substitute them with a matching kernel \(k\) that calculates the induced vector product \(<\cdot,\cdot>_{\phi}\)
\[\mathcal{L}_{MMD}^{2}=<X,X>_{\phi}+<Y,Y>_{\phi}-2<X,Y>_{\phi} \tag{9}\] \[=\overline{k(X,X)}+\overline{k(Y,Y)}-2\overline{k(X,Y)}.\]
The kernel should be selected according to the distributions that are being compared. Since a Gaussian-like distribution is generally a sufficient approximation in most cases, we use the Gaussian kernel function
\[k_{\text{gauss}}(x,y)=\exp\left(-\frac{||x-y||_{2}^{2}}{2\sigma ^{2}}\right). \tag{10}\]
In our experiments we will also apply an MMD to the invariant mass of a \(Z\)-Boson, \(M_{Z}\). In this case, since it follows a Breit-Wigner distribution, we use the corresponding kernel
\[k_{\text{Breit-Wigner}}(x,y)=\frac{\sigma^{2}}{\sigma^{2}+||x-y ||_{2}^{2}}. \tag{11}\]
The parameter \(\sigma\) determines the width of the kernel. Choosing this width correctly is often more important than the correct kernel shape. If \(\sigma\ll||x-y||_{2}\) then \(k(x,y)\simeq 0\) and if \(\sigma\gg||x-y||_{2}\) then \(k(x,y)\simeq 1\). In both cases, \(\mathcal{L}_{MMD}\) will be very close to zero, and the gradient will vanish. While methods exist that adaptively change \(\sigma\) over the training to fit the model distribution, another easy and effective way is to use multiple kernels of different widths.
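A multi-kernel squared MMD along these lines can be sketched as follows; the widths are taken from the kernel widths listed in Table 1 below, and the Gaussian kernel of Eqn. 10 is used for all of them.

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor,
         widths=(0.02, 0.1, 0.4, 1.0, 5.0)) -> torch.Tensor:
    """Squared MMD of Eqn. 9 with a sum of Gaussian kernels of several widths.
    x, y: sample batches of shape (N, d) and (M, d).
    Note: a biased estimator (diagonal terms included) for brevity."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2.0 * s**2)) for s in widths)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```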
Classical INNs generally do not benefit greatly from an additional MMD loss for the overall training apart from improving specifically targeted observables. However, we found that applying MMD on both the latent and input sides of the QINN to all training observables, even simultaneously to train a gaussian latent and the target input distribution, significantly improves performance.
## 3 Invertible Quantum Neural Networks
As we will illustrate in this section, QNNs lend themselves well to being inverted, requiring very few modifications from their most basic form. For example, the underlying architecture of a circuit-centric QNN [60] can be split into three distinct parts, _state preparation, model circuit_ and _measurement_ as seen in Fig. 2.
State preparation transforms a given data point \(x=(x_{1},\ldots,x_{n})^{T}\) from the classical domain into a quantum state \(|x\rangle\). One of the simplest methods to achieve this is angle encoding, in which each input dimension is interpreted as an angle for a qubit rotation. Therefore, the number of qubits is fixed as the number of feature dimensions \(n\). The entire dataset is first scaled to \(\mathbb{R}^{n}\rightarrow[0,\pi]^{n}\). Next, we apply a global linear transformation to the input with trainable parameters \(a\in[0,1]^{n}\,;b\in[0,1-a]^{n}\), clipped to prevent \(x\notin[0,\pi]\). We obtain a quantum state by defining a state preparation operator \(S_{x}=R_{y}(x)\), which acts on the initial state
\[|x\rangle=S_{x}\,|0\rangle^{\otimes n}=\bigotimes_{i=1}^{n}\cos(x_{i})\,|0 \rangle+\sin(x_{i})\,|1\rangle\,. \tag{12}\]
When performed this way, inverting the state preparation is straightforward since simply measuring \(P(\text{qubit}\,i=1)\) gives \(\langle x|\sigma_{x,i}|x\rangle=\sin^{2}(x_{i})\implies\arcsin\left(\sqrt{ \langle x|\sigma_{x,i}|x\rangle}\right)=x_{i}\).
Figure 2: An overview of the model circuit. We show a three-qubit, two-layer model following the hardware efficient ansatz from [61]. The state preparation uses angle encoding, where each feature is encoded on its qubit. The learnable parameters of the model circuit are the rotation angles \(\theta\) in each layer.
The model circuit is the quantum analogue of a classical neural network, mapping the prepared state to the output state \(\ket{x}\mapsto U\ket{x}=:\ket{y}\). The circuit comprises modular layers \(U=U_{l}\dots U_{1}\). Each layer starts with a rotation gate for each qubit \(Rot(\phi,\theta,\eta)\) parameterized by trainable weights [61]. Afterwards, the qubits are entangled by applying CNOT gates between each adjacent pair of qubits and from the last to the first qubit. The entire model circuit can be straightforwardly inverted by applying the adjoint to each layer in the reverse order
\[U^{\dagger}=(U_{l}\dots U_{1})^{\dagger}=U_{1}^{\dagger}\dots U_{l}^{\dagger}. \tag{13}\]
The adjoint operation to each of the gates is simple to obtain
\[Rot^{\dagger}(\phi,\theta,\eta)=Rot(-\eta,-\theta,-\phi) \tag{14}\] \[CNOT^{\dagger}(i,j)=CNOT(i,j). \tag{15}\]
Finally we measure \(P(\text{qubit }i=1)\) for all \(n\) qubits and apply another trainable global linear transformation with parameters \(c,d\in\mathbb{R}^{n}\)
\[\ket{y}\mapsto c\bra{y}\sigma_{z}\ket{y}+d. \tag{16}\]
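A sketch of this forward pass in PennyLane is given below; the numbers of qubits and layers follow the setup used later (Table 1), while the weight initialization is an assumption.

```python
import pennylane as qml
import numpy as np

n_qubits, n_layers = 5, 12   # 5 observables, 12 forward layers as in Table 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def model_circuit(x, weights):
    # state preparation: angle encoding of Eqn. 12
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # model circuit: per-layer Rot gates followed by an entangling CNOT ring
    for l in range(n_layers):
        for i in range(n_qubits):
            qml.Rot(*weights[l, i], wires=i)          # trainable rotations
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])   # adjacent pairs + last-to-first
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 3))
y = model_circuit(np.random.uniform(0, np.pi, n_qubits), weights)
```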
Inverting the final measurement is not directly possible, as many different states can lead to the same expectation value \(\mathbb{E}[y]\). Note that the model function can still be bijective if no two created states yield the same \(\mathbb{E}[y]\). Since the network input is \(x\in\mathbb{R}^{n}\), the set of created final states only spans a subspace \(S\subseteq\mathbb{C}^{2^{n}}\) with \(\dim S=n\) of the full state space. However, we need to ensure that no two states in \(S\) share the same \(\mathbb{E}[y]\), and we need to find a proper method to perform the inverse state preparation (ISP) for each data point.
### Inverse State Preparation
Given a model circuit \(U\) and a data point \(y\) that arose from measuring \(\ket{y}=U\ket{x}\), it is infeasible to search for an ISP method that creates \(\ket{y}\) from \(y\). Therefore we instead aim to train the model circuit \(U\) in a way such that for a given fixed ISP \(g\), the state \(\ket{y}\) before the measurement and \(\ket{\tilde{y}}:=g(\bra{y}\sigma_{z}\ket{y})\) are as close as possible. We evaluate the fidelity, measuring the "closeness" of two quantum states [62],
\[F=\left|\bra{\tilde{y}}\ket{y}\right|^{2}, \tag{17}\]
Figure 3: The SWAP-test, comparing a state \(\ket{y}=U\ket{x}\) created in the forward direction to \(\ket{\tilde{y}}=g(y)\) created by the ISP. The CSWAP gate acts pairwise on the wires, i.e. \(y_{1}\) is swapped with \(\tilde{y}_{1}\), etc.
using the SWAP-Test shown in Fig. 3. Which side of the model we perform the SWAP-Test on does not matter, as the operator \(U\) is unitary. We train the entire model, i.e. the model circuit and all ISP parameters, to adhere to \(F\simeq 1\) using the loss function
\[\mathcal{L}_{F}=-\lambda_{F}\log(F). \tag{18}\]
While the model is invertible if the fidelity \(F\simeq 1\), the opposite is not necessarily true. In fact for a given circuit \(U\) we can find exponentially many different states \(|\tilde{y}\rangle\) such that
\[\tilde{x}:=\langle\tilde{y}|U\sigma_{z}^{\otimes n}U^{\dagger}|\tilde{y} \rangle=x. \tag{19}\]
Thus, it seems more natural to define the invertibility loss directly on \(\tilde{x}\). We construct an alternative loss function which only trains the model to adhere to \(\tilde{x}\simeq x\)
\[\mathcal{L}_{MSE}=\lambda_{MSE}(x-\tilde{x})^{2}. \tag{20}\]
We compare both loss functions quantitatively in Sec. 4.
While one can select any ISP method of one's choosing, a fixed ISP will often be too restrictive in practice. We, therefore, allow the model to learn its ISP by creating a separate module which is only called in the inverse direction, mapping a measurement \(y\) to the quantum state \(|y\rangle=g(y)\), see Fig. 4. This module combines a small classical neural network \(g_{C}\) with a quantum neural network \(g_{Q}\). First, the neural network predicts \(3n\) angles \(\psi\) from \(y\)
\[y\overset{g_{C}}{\mapsto}\psi\in\mathbb{R}^{3n}, \tag{21}\]
which serve as inputs for the \(Rot\) gates. The quantum state prepared in this way is then further transformed by the quantum neural network to create the input for the (inverse) model circuit
\[\psi\overset{Rot}{\mapsto}|\psi\rangle\overset{g_{Q}}{\mapsto}|\tilde{y} \rangle\,. \tag{22}\]
With these additional steps, we can ensure that the model is trained to be invertible. Furthermore, as explained in Sec. 2.1, the model's Jacobian needs to be tractable for density estimation. For the QINN, it can be obtained in a similar way as the gradients are calculated, by using parameter-shift rules [63, 64].
The potential of a QINN is twofold. Firstly, a QINN provides the ability to compute the full Jacobian matrix. Unlike classical INNs, which only allow for efficient computation of the Jacobian determinant [30], a full Jacobian would open up opportunities for a new array of applications. There has been extensive research into efficient learning of the principal data manifolds in distributions [31, 32, 33, 34], yet it remains a challenging and computationally expensive task in higher dimensions. Exploiting the full Jacobian of a QINN to encode the principal manifolds in selected latent variables would come at very little additional cost. This advantage of QINNs could pave the way towards learning interpretable representations and new insights into underlying processes.
The second advantage of a QINN lies in the increased expressive power provided by quantum models, which extensive theoretical work has documented [35, 36, 37, 38]. There has been considerable effort to define quantum equivalents of multiple generative models in the current machine learning landscape [65]. Sometimes, simulation of distributions created by quantum circuits is not efficiently possible with classical methods [66]. For example, the authors of [67] show an advantage in learning density estimation via samples with Quantum Circuit Born Machines [68] using Clifford group gates [69]. Even though QINNs operate fundamentally differently, since we marginalize over the measured state distribution, there remains reason to assume increased expressivity of a QINN over the classical counterpart, which we aim to further establish by the experiments shown in Sec. 4.
## 4 Application to High Energy Physics
We evaluate the performance of the QINN by learning to generate events of the LHC process
\[pp\to Zj\to\ell^{+}\ell^{-}j. \tag{23}\]
We simulate 100k events using Madgraph5 [70] with generator-level cuts for the transverse momentum and the rapidity of the leptons of \(15\,\mathrm{GeV}<p_{T,\ell^{\pm}}<150\,\mathrm{GeV}\) and \(\eta_{\ell^{\pm}}<4.5\), as well as the energy of the \(Z\)-Boson, \(E_{Z}<1900\,\mathrm{GeV}\). The data sample is split into a training, a validation and a test sample following the split of \(37.5\%/12.5\%/50\%\).
We compare the two methods of training invertibility described in Sec. 3.1, the fidelity of the quantum states and a mean squared error. Finally, we train a classical INN with the spline coupling block architecture presented in Sec. 2 and compare the performance based on the number of model parameters required. The setup of all models and hyperparameters can be found in Table 1. We implement the training pipeline with PyTorch[71], where we use PennyLane[72] to implement the QINN and FrEIA[58] for the classical INN. We train all networks on the observables \(p_{T,\ell^{\pm}},\Delta\phi_{\ell\ell},\eta_{\ell^{\pm}}\), which we can use to reconstruct the \(Z\)-Boson. The models are trained for 200 epochs with an MMD loss as described in Sec. 2.2 on the \(M_{Z}\) distribution, which significantly improves the results in this observable for all models. The MMD loss on input and latent observables for the QINN are used throughout the entire training process.
Figure 4: A diagram of the learnable ISP method. To map a measurement back to a quantum state, first an NN predicts \(3n\) angles, which serve as state preparation for \(\ket{\psi}\), which is then transformed further by a separate QNN that recreates \(\ket{y}\).
### Comparing Fidelity and Mean Squared Error for the loss function
To decide which of the loss functions for invertibility is more advantageous for the QINN, we perform experiments with the same hyperparameters, only changing the loss. A comparison of the results is shown in Fig. 5. While both results are similar in performance on \(M_{Z}\), the fidelity loss creates significant artefacts in the \(\Delta R_{\ell^{+},\ell^{-}}\) distribution whenever we use the MMD loss necessary to improve \(M_{Z}\). This aligns with the intuition that the MSE loss allows for a more flexible choice of ISP. We therefore proceed with the MSE loss for invertibility throughout the rest of this work.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Hyperparameter & QINN & INN \\ \hline LR scheduling & 1cycle [73] & same \\ Maximum LR & \(10^{-3}\) & same \\ Start/Final LR factors & \(4\cdot 10^{-2}/10^{-4}\) & same \\ Epochs & 200 & same \\ Batch size & 128 & same \\ Adam \(\beta_{1}\), \(\beta_{2}\) & 0.9, 0.9 & 0.5, 0.9 \\ \hline \# Spline bins & - & 5/8/8 \\ \# Coupling blocks & - & 6/9/10 \\ Layers per block & - & 3/3/3 \\ Hidden dimension & - & 8/12/24 \\ \hline \# Forward quantum layers & 12 & - \\ \# ISP quantum layers & 8 & - \\ \# ISP NN layers & 3 & - \\ \# ISP hidden dimension & 32 & - \\ \hline \(\lambda_{MMD}\) (input/latent) & 1.0/0.5 & - \\ \(\lambda_{M_{Z}}\) & 0.375 & 0.5 \\ \(\lambda_{F/MSE}\) & 10.0 & - \\ MMD kernel widths & [0.02, 0.1, 0.4, 1, 5] & same \\ \# Trainable parameters & 2k (QNN \(\sim\) 300, NN \(\sim\) 1.7k) & 2k/6k/16k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameters for the QINN and INN used for training and setup. The hyperparameters for the INN were chosen such that the performance of the QINN and the INN were comparable while keeping the number of trainable parameters as low as possible.
### Classical INN versus Quantum INN
We choose three INN sizes: 2k parameters to match the QINN in parameter number, and 6k and 16k parameters to approximately lower- and upper-bound the QINN performance. In Fig. 6, we first compare correlations of low-level observables between the QINN and the 16k INN. While neither network is able to learn the \(p_{x}\) double-peak structure due to their limited size, both show no problems learning the hard \(p_{T}\) cuts. Furthermore, the QINN shows no additional signs of deterioration or artefacts in the low-level observables that may have arisen from underparameterization, apart from the ones also present in the INN.
Figure 5: \(M_{Z}\) and \(\Delta R_{\ell^{+},\ell^{-}}\) of the \(Z\) reconstructed from the leptons as generated by the QINN with Fidelity and MSE loss for invertibility. True shows the distribution of the entire test set.
The networks' ability to capture high-dimensional correlations can be tested by reconstructing the \(Z\)-Boson observables, specifically the invariant mass \(M_{Z}\), from the generated leptons. We show these reconstructed results in Fig. 7. It is immediately apparent that the 2k parameter INN is nowhere near as expressive as the QINN. In fact, the QINN even outperforms the 16k parameter INN at reconstructing the sharp peak of the \(M_{Z}\) distribution, though it does not match the tails of the shown distributions as well as the 16k INN. Comparing the QINN to the 6k INN, it arguably even outperforms a classical network three times its size. With an average deviation of the reconstructed observables of \(||\frac{\tilde{x}-x}{x}||<2.1\%\), we can also determine that the MMD loss does not dominate the optimization process and the QINN does learn to perform an invertible transformation.
In conclusion, we find the performance of the QINN to be equivalent to that of a classical INN with around \(3-8\) times the number of parameters on this 5-dimensional task, with most of the QINN parameter count still being attributed to the classical NN.
## 5 Conclusion
Generative modelling has become increasingly important for simulating complex data in various science areas. In recent years, neural network techniques, such as Variational Autoencoders,
Figure 6: 2d correlations of selected \(\ell^{\pm}\) observables. We show the distribution in the dataset (left), the one generated by the QINN (center), and the one generated by the 16k parameter INN (right).
Generative Adversarial Networks and Invertible Neural Networks, have received attention, showing promising outcomes. At the same time, algorithms designed for quantum computers have opened a new avenue to expand on existing classical neural network structures. For example, quantum-gate algorithm-based Quantum Variational Autoencoders and Quantum Generative Adversarial Networks have been studied thoroughly. They have been shown empirically to match or even outperform their classical counterparts on specific tasks or when limiting the size of the classical network, thereby indicating that QNNs can offer a larger expressivity or faster and more robust network training.
In this work, we proposed a novel approach for Quantum Invertible Neural Networks and highlighted their use as density estimators in generative tasks. By applying the QINN to the simulation of final states of the LHC process \(pp\to Zj\to\ell^{+}\ell^{-}j\), we showed its ability to reconstruct the \(Z\)-Boson with significantly fewer parameters than classical INNs. Our model combines the conventional QNN architecture, consisting of the trinity of state preparation variational quantum circuit and measurement, with a classical-quantum hybrid network for learning an Inverse State Preparation. Furthermore, we demonstrate how the combined model can be trained to invert the quantum measurement of the QNN, allowing for a reversible transformation. Through the property of having the entire network Jacobian at one's disposal, performing density estimation with QNNs could lead to new insights and better understanding of the modelled generative processes.
The hybrid QINN with 2k trainable parameters, most of which originate in the classical network part, showed to be more expressive than its entirely classical counterpart, thereby evidencing a gain in expressivity due to the inclusion of the quantum circuit. This encouraging result motivates the detailed future study and employment of QINNs in complex generative tasks.
## Acknowledgements
We thank Tilman Plehn for valuable discussions and encouragement during this project.
Figure 7: \(M_{Z}\) and \(\Delta R_{\ell^{+}\ell^{-}}\) of the \(Z\) reconstructed from the leptons as generated by the QINN and the reference INNs. True shows the distribution of the entire test set. |
2310.02904 | Spline-based neural network interatomic potentials: blending classical
and machine learning models | While machine learning (ML) interatomic potentials (IPs) are able to achieve
accuracies nearing the level of noise inherent in the first-principles data to
which they are trained, it remains to be shown if their increased complexities
are strictly necessary for constructing high-quality IPs. In this work, we
introduce a new MLIP framework which blends the simplicity of spline-based MEAM
(s-MEAM) potentials with the flexibility of a neural network (NN) architecture.
The proposed framework, which we call the spline-based neural network potential
(s-NNP), is a simplified version of the traditional NNP that can be used to
describe complex datasets in a computationally efficient manner. We demonstrate
how this framework can be used to probe the boundary between classical and ML
IPs, highlighting the benefits of key architectural changes. Furthermore, we
show that using spline filters for encoding atomic environments results in a
readily interpreted embedding layer which can be coupled with modifications to
the NN to incorporate expected physical behaviors and improve overall
interpretability. Finally, we test the flexibility of the spline filters,
observing that they can be shared across multiple chemical systems in order to
provide a convenient reference point from which to begin performing
cross-system analyses. | Joshua A. Vita, Dallas R. Trinkle | 2023-10-04T15:42:26Z | http://arxiv.org/abs/2310.02904v1 | # Spline-based neural network interatomic potentials: blending classical and machine learning models
###### Abstract
While machine learning (ML) interatomic potentials (IPs) are able to achieve accuracies nearing the level of noise inherent in the first-principles data to which they are trained, it remains to be shown if their increased complexities are strictly necessary for constructing high-quality IPs. In this work, we introduce a new MLIP framework which blends the simplicity of spline-based MEAM (s-MEAM) potentials with the flexibility of a neural network (NN) architecture. The proposed framework, which we call the spline-based neural network potential (s-NNP), is a simplified version of the traditional NNP that can be used to describe complex datasets in a computationally efficient manner. We demonstrate how this framework can be used to probe the boundary between classical and ML IPs, highlighting the benefits of key architectural changes. Furthermore, we show that using spline filters for encoding atomic environments results in a readily interpreted embedding layer which can be coupled with modifications to the NN to incorporate expected physical behaviors and improve overall interpretability. Finally, we test the flexibility of the spline filters, observing that they can be shared across multiple chemical systems in order to provide a convenient reference point from which to begin performing cross-system analyses.
Keywords: interatomic potential, splines, machine learning, interpretability
## 1 Introduction and Background
In the fields of computational materials science and chemistry, machine learning interatomic potentials (MLPs) are rapidly becoming an essential tool for running high-fidelity atomic-scale simulations. Notably, major breakthroughs in the field have typically been marked by the development of new methods for encoding the atomic environments into machine-readable descriptors, or using architectures from other fields of machine learning to improve regression from descriptors into energies and atomic forces. The seminal work using atom-centered symmetry functions (ACSFs) Behler and Parrinello (2007) as the encoder, followed by "smooth overlap of atomic positions" (SOAP) descriptors Bartok et al. (2010) and, most recently, equivariant message-passing networks Batzner et al. (2022), marks milestones of particular importance in the field. In parallel with these improvements to the encoding functions of MLPs has been the development of new regression tools, favored either for the simplicity gained through the use of linear combinations of basis functions (e.g., Thompson et al. (2015), Shapeev (2016), Drautz (2019)) or the accuracy gained by increasing the effective cutoff radius of the model using message-passing neural networks (e.g., Gilmer et al. (2017), Schutt et al. (2018), Batatia et al. (2022)). While countless other models have been proposed using combinations or variations of different methods (an incomplete list: Mueller et al. (2020), Manzhos and Carrington (2021), Christensen et al. (2020), Musaelian et al. (2022), Gasteiger et al. (2021), Haghighatlari et al. (2021), Lubbers et al. (2018), Hu et al. (2021), Batatia et al. (2022)), the key insights remain the same: interatomic potentials can be greatly improved by leveraging architectures and optimization strategies drawn from other machine learning and deep learning applications.
Despite the success of these MLPs, there has been a persistent need in the community for the continued development of low-cost classical IPs, particularly for use with large-scale simulations Sosso et al. (2016), Ravelo et al. (2013),
Diemand et al. (2013); Phillips et al. (2020); Zepeda-Ruiz et al. (2017). Some of the major benefits that classical IPs Jones (1924); Daw and Baskes (1984); Buckingham (1938); Tersoff (1986); Brenner et al. (2002); Shan et al. (2010); van Duin et al. (2001); Baskes (1992) have over MLIPs are their low computational costs, strong foundations in known physics, and a history of scientific research analyzing their behaviors and theoretical limitations. Although improvements are being made to the speeds of MLIPs, the stark differences between classical and ML IPs in terms of size, design, and overall complexity make it difficult to leverage the well-established tools and knowledge from classical models in order to further improve their ML counterparts.
In this work, we develop a new IP model whose hyper-parameters can be tuned to transition smoothly between low-cost, low-complexity classical models and full MLIPs. By basing the proposed model off of a classical spline-based MEAM potential, then extending it using a basic ML architecture, we enable direct comparisons to well-established classical forms as well as modern ML models. These results help to bridge the gap between the two classes of models and highlight methods for improving speeds, interpretability, and transferability of MLIPs.
### s-MEAM
The model developed in this paper builds heavily upon the spline-based MEAM potential Lenosky et al. (2000) ("s-MEAM"), which we describe here in order to provide sufficient background to understand the new model architecture proposed in this work. The s-MEAM potential is a spline-based version of the popular analytical MEAM potential Baskes (1992) that was intended to provide additional flexibility to the model while maintaining the same overall functional form. In the s-MEAM formalism, the energy \(E_{i}\) of a given atom \(i\) is written as:
\[\begin{split} E&=\sum_{i}E_{i}=\sum_{i}\left[U_{c_{ i}}(n_{i})+\sum_{j<i}\phi_{c_{j}}(r_{ij})\right]\\ n_{i}&=\sum_{j\neq i}\rho_{c_{j}}(r_{ij})\\ &\qquad+\sum_{\begin{subarray}{c}j<k,\\ j,k\neq i\end{subarray}}f_{c_{j}}(r_{ij})f_{c_{k}}(r_{ik})g_{c_{j}c_{k}}[\cos \left(\theta_{jik}\right)]\,.\end{split} \tag{1}\]
In Eqn. 1 the energy of the \(i\)-th atom, \(E_{i}\), is composed of a pair term (\(\phi\)) and an embedding energy contribution (\(U\)) for a given "electron density" \(n_{i}\), where all five functions (\(\phi\), \(U\), \(\rho\), \(f\), and \(g\)) are represented using cubic Hermite splines. The pair term is calculated by summing over pair distances \(r_{ij}=|\vec{r_{j}}-\vec{r_{i}}|\) between each atom \(i\) and its neighbors \(j\) (with \(r_{ij}\) less than a chosen cutoff distance, \(r_{c}\)). The electron density \(n_{i}\) is further decomposed into 2-body (\(\rho\)) and 3-body (products of \(f\) and \(g\)) contributions. The 2-body term is similar to the summation over \(\phi\), while the 3-body term is computed by summing over the product of three spline functions that take \(r_{ij}\), \(r_{ik}\), and \(cos(\theta_{jik})\) as inputs, where \(\theta_{jik}\) is the bond angle formed by atoms \(i\), \(j\), and \(k\) with \(i\) at the center. The subscripts on the functions indicate that separate splines are used for evaluation depending on the chemistries \(c_{i}\), \(c_{j}\), and \(c_{k}\) of atoms \(i\), \(j\), and \(k\) (e.g., \(g_{AA}\) for A-A bonds, \(g_{AB}\) for A-B bonds, etc.).
In order to facilitate comparisons between s-MEAM and the model that will be proposed in this work, we will first define two new functions
\[\begin{split} G_{3,i}^{\alpha}(r_{ij},r_{ik},\cos\theta_{jik})= \\ \sum_{\begin{subarray}{c}j<k\\ j,k\neq i\end{subarray}}f_{c_{j}}^{\alpha}(r_{ij})f_{c_{k}}^{\alpha}(r_{ ik})g_{c_{jk}}^{\alpha}\left(\cos\left(\theta_{jik}\right)\right)\\ G_{2,i}^{\beta}(r_{ij})&=\sum_{j\neq i}\rho_{c_{j}}^{ \beta}(r_{ij}),\end{split} \tag{2}\]
then re-write Eqn. 1 as
\[\begin{split} E&=\sum_{i}\left[U_{c_{i}}(n_{i})+\frac {1}{2}G_{2,i}^{0}\right]\\ n_{i}&=\sum_{\beta=1}^{N_{2}}G_{2,i}^{\beta}+\sum_{ \alpha=1}^{N_{3}}G_{3,i}^{\alpha}\end{split}\,, \tag{3}\]
where \(N_{2}=1\) and \(N_{3}=1\). We will henceforth refer to \(G_{3,i}^{\alpha}\) and \(G_{2,i}^{\beta}\) as 3-body and 2-body "spline filters" respectively, in acknowledgement of the fact that they can be thought of as filters that characterize the local environment
around atom \(i\) in order to produce a scalar atomic environment descriptor \(n_{i}\). In Eqn. 3 we have introduced summations over the superscripts \(\alpha\) and \(\beta\) which currently only take on a single value of 1, but will be used later to denote different filters.
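To make the filter picture concrete, a hypothetical 2-body spline filter \(\rho\) can be sketched with SciPy's cubic Hermite splines; the knot positions, values, and derivatives shown here are illustrative placeholders for what would be trainable parameters.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical 2-body filter rho(r); in s-MEAM / s-NNP the knot values and
# derivatives are the trainable parameters of the spline.
knots = np.linspace(1.5, 5.5, 9)       # knot positions in r (assumed)
values = np.array([0.9, 0.4, 0.1, -0.1, -0.15, -0.1, -0.05, -0.01, 0.0])
derivs = np.zeros_like(knots)          # knot derivatives (would be fitted)
rho = CubicHermiteSpline(knots, values, derivs)

r_ij = np.array([2.1, 2.8, 3.4])       # pair distances to neighbors of atom i
G2_i = rho(r_ij).sum()                 # Eqn. 2: G_{2,i} = sum_j rho(r_ij)
```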
### NNP
Shown to be universal function approximators Hornik et al. (1989), neural networks (NNs) are provably more flexible than classical IPs which use explicit analytical forms. Because of this, a sufficiently large NN would be expected to be able to accurately reproduce an arbitrary potential energy surface, assuming that it properly accounted for long-range interactions, was provided with enough fitting data, and did not suffer from limitations due to trainability. The original Behler-Parrinello NNP Behler and Parrinello (2007) was one of the first successful applications of NNs towards practical systems, where the atomic energy of atom \(i\) is written as
\[E=\sum_{i}N_{c_{i}}(\vec{D}_{i}) \tag{4}\] \[\vec{D_{i}}=\langle D_{3,i}^{1},\ldots,D_{3,i}^{N_{3}},D_{2,i}^{1 },\ldots,D_{2,i}^{N_{2}}\rangle.\]
In Eqn. 4, \(N_{c_{i}}\) is a neural network, \(c_{i}\) is the element type of atom \(i\), and \(\vec{D}_{i}\) is the atom-centered symmetry function (ACSF) descriptor Behler and Parrinello (2007) of atom \(i\). The ACSF local environment descriptor is comprised of radial symmetry functions
\[D_{2,i}^{\beta}(r_{ij})=\sum_{j\neq i}e^{-\eta^{\beta}(r_{ij}-R_{s}^{\beta})^{ 2}}v_{c}(r_{ij}), \tag{5}\]
parameterized by \(\eta^{\beta}\) for changing the width of the Gaussian distribution, and \(R_{s}^{\beta}\) to shift the distribution. A smooth cutoff function \(v_{c}(r_{ij})\) is used with the form:
\[v_{c}(r_{ij})=\left\{\begin{array}{ll}0.5\times\big{[}\cos\frac{\pi r_{ij}} {r_{c}}+1\big{]},&\mbox{ if }r_{ij}\leq r_{c}\\ 0,&\mbox{ if }r_{ij}>r_{c}.\end{array}\right. \tag{6}\]
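A minimal NumPy sketch of Eqns. 5-6 for a single atom \(i\):

```python
import numpy as np

def cutoff(r_ij: np.ndarray, r_c: float) -> np.ndarray:
    """Smooth cutoff function of Eqn. 6."""
    return np.where(r_ij <= r_c, 0.5 * (np.cos(np.pi * r_ij / r_c) + 1.0), 0.0)

def radial_acsf(r_ij: np.ndarray, eta: float, R_s: float, r_c: float) -> float:
    """Radial symmetry function D_2 of Eqn. 5, summed over the neighbor
    distances r_ij of one atom i."""
    return float(np.sum(np.exp(-eta * (r_ij - R_s) ** 2) * cutoff(r_ij, r_c)))
```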
Angular contributions are accounted for using the angular symmetry functions
\[D_{3,i}^{\alpha}(r_{ij},r_{ik},r_{jk},\theta_{jik})= \tag{7}\] \[2^{1-\zeta^{\alpha}}\sum_{j,k\neq i}(1+\lambda^{\alpha}\cos \theta_{jik})^{\zeta^{\alpha}}\] \[\times e^{-\eta^{\alpha}(r_{ij}^{2}+r_{ik}^{2}+r_{jk}^{2})}v_{c}( r_{ij})v_{c}(r_{ik})v_{c}(r_{jk}).\]
Multiple radial and angular symmetry functions are constructed by making \(N_{2}\) choices for \(\eta^{\beta}\) and \(R_{s}^{\beta}\), and \(N_{3}\) choices for \(\zeta^{\alpha}\), \(\lambda^{\alpha}\)\((=\pm 1)\), and \(\eta^{\alpha}\). The evaluations of all of these symmetry functions are then concatenated together into a single vector \(\vec{D}_{i}\) which is passed through a feed-forward neural network.
Obvious parallels can be drawn between the NNP form in Eqn. 4 and the re-written form of s-MEAM shown in Eqn. 3. What differentiates NNP from s-MEAM, however, are the details regarding the construction of the local descriptors, and the form of the embedding function. Where s-MEAM uses trainable spline filters for both the descriptor and the embedding function, NNP uses ACSFs and an NN respectively. Although an NN would have an increased fitting capacity over the \(U_{c_{i}}\) splines used in Eqn. 3, there are many similarities between the ACSF descriptors and the spline filters described in Eqn. 2. For example, the 2-body components of an ACSF descriptor, which are constructed by evaluating \(D_{2,i}^{\beta}\) with multiple radial shifts \(R_{s}^{\beta}\) for each neighboring atom \(j\), can be viewed as basis functions used for interpolating the desired range of atomic distances. This is conceptually related to how the basis functions of cubic Hermite splines allow \(G_{2,i}^{\beta}\) to interpolate over its domain as well. A similar argument can be made relating the angular components of ACSFs to the 3-body filters \(G_{3,i}^{\alpha}\), where Eqn. 2 and Eqn. 7 both multiply functions of pair distances (\(f_{c_{j}}^{\alpha}(r_{ij})\) and \(f_{c_{k}}^{\alpha}(r_{ik})\)) by a function of the triplet angle (\(g_{c_{jk}}^{\alpha}(\cos{(\theta_{jik})})\)). The ANI model Smith et al. (2017), which will be used in this work for comparison to the model which we developed, is nearly identical to the NNP form described above, with the modifications that only a single \(\eta^{\beta}\) is used, and Eqn. 7 is altered to introduce both radial (\(R_{s}^{\alpha}\)) and angular (\(\theta_{s}^{\alpha}\)) shifts:
\[D_{3,i}^{\alpha}(r_{ij},r_{ik},\theta_{jik})= \tag{8}\] \[2^{1-\zeta^{\alpha}}\sum_{j,k\neq i}[1+\cos(\theta_{jik}-\theta _{s}^{\alpha})]^{\zeta^{\alpha}}\] \[\times e^{-\eta^{\alpha}\left(\frac{r_{ij}+r_{ik}}{2}-R_{s}^{\alpha}\right)^{2}}v_{c}(r_{ij})v_{c}(r_{ik}).\]
## 2 Methods
### s-NNP
The main contribution of this work is the development of a spline-based neural network potential, outlined in Fig. 1, which we call the "s-NNP" (short for "spline-NNP"). The s-NNP framework is intended to extend the fitting capacity of the s-MEAM class of potentials while maintaining the high interpretability and speed provided by the use of splines. s-NNP can be thought of as an s-MEAM potential with two critical changes: first, \(N_{3}\) and \(N_{2}\) in Eqn. 3, the numbers of \(G^{\alpha}_{3,i}\) and \(G^{\beta}_{2,i}\) spline filters, are taken to be hyper-parameters of the model; and second, the "embedding" spline \(U_{i}\) in an s-MEAM model is replaced by a fully-connected neural network. By including additional spline filters, the s-NNP is able to describe the local environment around an atom with increasingly fine resolution (since each spline can be thought of as a basis function for interpolating the atomic energies). The introduction of a neural network allows the model to achieve much more complex mappings into atomic energies than would be possible with the cubic splines \(U_{c_{i}}\). In the s-NNP formalism, the energy \(E_{i}\) of a given atom \(i\) is then written as
\[\begin{split} E&=\sum_{i}N(\vec{G}_{i}),\\ \vec{G}_{i}&=\langle G^{1}_{3,i},\dots,G^{N_{3}}_{3,i},G^{1}_{2,i},\dots,G^{N_{2}}_{2,i}\rangle.\end{split} \tag{9}\]
where \(N\) is a neural network, and \(\vec{G}_{i}\) is a vector of length \((N_{3}+N_{2})\). Notice that each component of \(\vec{G}_{i}\) is computed by evaluating a different 3- or 2-body spline filter for the local environment of atom \(i\). The parameters of an s-NNP that are trained during fitting are the positions of the knots of the \(f^{\alpha}\), \(g^{\alpha}\), and \(\rho^{\,\beta}\) splines from Eqn. 2, as well as the weights and biases in the neural network \(N\). The hyper-parameters of the model are \(N_{3}\), \(N_{2}\), the number of knots in each spline, the number of layers in \(N\), and the number of hidden nodes in each of those layers.
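To make the evaluation of Eqn. 9 concrete, the following is a minimal PyTorch sketch, assuming the spline filters have already been evaluated so that each atom is described by a vector \(\vec{G}_{i}\) of length \(N_{3}+N_{2}\); the class name, the Tanh activation, and the hidden-layer widths are our own illustrative choices, not prescribed by the text.

```python
import torch
import torch.nn as nn

class SNNPHead(nn.Module):
    """Maps per-atom spline-filter outputs G_i to a total energy (Eqn. 9)."""
    def __init__(self, n_filters: int, hidden=(9, 8, 4, 2)):
        super().__init__()
        layers, width = [], n_filters
        for h in hidden:                      # funnel-like architecture
            layers += [nn.Linear(width, h), nn.Tanh()]
            width = h
        layers.append(nn.Linear(width, 1))    # scalar atomic energy
        self.net = nn.Sequential(*layers)

    def forward(self, G: torch.Tensor) -> torch.Tensor:
        # G: (n_atoms, N3 + N2) matrix of spline-filter evaluations
        e_atom = self.net(G).squeeze(-1)      # per-atom energies N(G_i)
        return e_atom.sum()                   # total energy E = sum_i N(G_i)

# usage: N3 = 8 and N2 = 1 filters evaluated for a 32-atom configuration
G = torch.randn(32, 9)
E = SNNPHead(n_filters=9)(G)
```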
One benefit of the s-NNP framework is that it is closely related to both classical and machine learning interatomic potentials (see Section 4.2 for more discussion), making it possible to easily probe the performance gap between the two. For example, many classical potentials (LJ Jones [1924], EAM Daw and Baskes [1984], MEAM Baskes [1992], Stillinger-Weber Stillinger and Weber [1985], Tersoff Tersoff [1986], and Buckingham Buckingham [1938]) could be reformulated as s-NNP potentials with very few filters (\(N_{3}\in[0,1]\), \(N_{2}\in[1,2]\)) and custom embedding functions instead of a neural network (though given the universal approximation theorem, these embedding functions could be represented using an NN as well). Because of this, we can easily construct spline-based "classical" models by adjusting \(N_{3}\) and \(N_{2}\), but not including a neural network, then compare them directly to MLIPs by subsequently attaching networks with varying depths and widths. See Section 3 in the Results for details of such a study.
### Interpretability improvements
A central tenet of constructing interpretable models is designing the model architecture in a way that helps isolate the contributions of specific parameter sets to the final model predictions. With this in mind, we therefore propose the four modifications described in Fig. 2. Although the modifications proposed here are only discussed in the context of s-NNP, they can also be applied to many other existing MLIP frameworks.
When applying all of the modifications described in Fig. 2, the full form of s-NNP is written as:
\[\begin{split} E=\sum_{i}E_{i}&=\sum_{i}\left[E_{ \text{lin},i}+E_{\text{net},i}\right]\\ E_{\text{lin},i}&=E_{\text{se},i}+G_{2,i}^{\text{ short}}+b\\ E_{\text{sc},i}&=\sum_{n=1}^{N_{3}}G_{3,i}^{n}+ \sum_{n=1}^{N_{2}}G_{2,i}^{n}\\ E_{\text{net},i}&=\sigma\big{[}N_{\text{no-bias}}( \vec{G}_{i})\big{]},\end{split} \tag{10}\]
where \(E_{\text{sc},i}\) is a "skip connection" term (as used in other DL fields He et al. [2015]), \(G_{2,i}^{\text{short}}\) is a short-range 2-body filter, \(b\) is a trainable isolated-atom energy, \(\sigma\) is the Softplus activation function \(\sigma(x)=\log{(1+e^{x})}\), and \(N_{\text{no-bias}}\) is a neural network with no bias terms on any of its layers. Note that \(E_{\text{lin},i}\) encompasses all of the terms which are linearly dependent upon the spline filters, and \(E_{\text{net},i}\) captures the non-linear dependence. While a more in-depth discussion of the effects of each of these modifications can be found in Section 4.1, the practical result of Eqn. 10 is that spline filter visualizations like those shown in Fig. 3 can be intuitively understood. For example, negative regions of the filters will usually correspond to negative \(E_{i}\), and positive regions of the filters will always correspond to positive \(E_{i}\). While Eqn. 10 still suffers from some drawbacks, predominantly associated with the non-linear effects of \(N_{\text{no-bias}}(\vec{G}_{i})\), we believe that it strikes a good balance between the accuracy achieved through a NN architecture and the interpretability characteristic of classical potentials.

Figure 2: Modifications to the s-NNP form described in Fig. 1 for improving interpretability. The four modifications are: 1) adding skip connections; 2) removing the internal network bias, and adding an external bias, \(b\); 3) wrapping the network outputs in a Softplus activation function; and 4) introducing a single short-range pair term using a 2-body filter. The short-range pair term is only allowed to be non-zero up to a cutoff of 2.5 Å.
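As a rough sketch of the full interpretable form in Eqn. 10 (our own illustration; the hidden widths and Tanh activation are assumptions, and `G` is taken to contain only the filters fed to the network, with the short-range pair term supplied separately):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpretableSNNP(nn.Module):
    """Eqn. 10: E_i = E_sc,i + G2_short,i + b + softplus(N_no-bias(G_i))."""
    def __init__(self, n_filters: int, hidden=(9, 8, 4, 2)):
        super().__init__()
        layers, width = [], n_filters
        for h in hidden:
            layers += [nn.Linear(width, h, bias=False), nn.Tanh()]
            width = h
        layers.append(nn.Linear(width, 1, bias=False))  # no internal bias terms
        self.net = nn.Sequential(*layers)
        self.b = nn.Parameter(torch.zeros(()))          # external isolated-atom energy

    def forward(self, G: torch.Tensor, g2_short: torch.Tensor) -> torch.Tensor:
        # G: (n_atoms, N3 + N2) filter outputs; g2_short: (n_atoms,) pair term
        e_sc = G.sum(dim=-1)                            # skip connection E_sc,i
        e_net = F.softplus(self.net(G).squeeze(-1))     # strictly non-negative
        return (e_sc + g2_short + self.b + e_net).sum()
```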
### Benchmarking dataset: AI
In order to test our framework, in Section 3.1 we fit s-NNP models to the aluminum dataset from Smith et al. [2021], which serves as a good benchmarking dataset due to its size and configurational complexity. This dataset was built using an active learning technique for generating interatomic potential training data Smith et al. [2018]. It has also been shown to contain extremely diverse atomic environments; indeed, the ANI model Smith et al. [2019], which was originally trained to this dataset, was later tested for use in shock simulations. After removing duplicate calculations (identical atomic positions and computed energies/forces, generated at different active learning steps), and one outlier configuration with particularly high forces, the final dataset consisted of 5,751 unique configurations (744,356 atoms total) with their corresponding DFT-computed energies and forces. A 90:10 train-test split was performed to partition the dataset into training/testing data. This data can be obtained from the original source ani.
### Additional datasets: Cu, Ge, Mo
As a test of the flexibility of the spline filters, in Section 3.3 we trained an s-NNP model simultaneously to the Al dataset described in Section 2.3 and the Cu, Ge, and Mo datasets from Zuo et al. [2020]. Each dataset from Zuo et al. [2020] was manually constructed and contains the ground state structure for the given element, strained supercells, slab structures, _ab initio_ molecular dynamics (AIMD) sampling of supercells at different temperatures (300 K and \(0.5\times\), \(0.9\times\), \(1.5\times\), and \(2.0\times\) the melting point), and AIMD sampling of supercells with single vacancies at 300 K and \(2.0\times\) the melting point. On average, each training dataset includes approximately 250 structures for a total of 27,416 atoms (Cu), 14,072 atoms (Ge), and 10,087 atoms (Mo). A detailed summary of the contents of the datasets can be found in Zuo et al. [2020]. This data can be obtained from the original source mle.
In order to attempt to balance the combined dataset, we took a random subset comprised of 20% of the Al dataset in addition to the full Cu, Ge, and Mo datasets, resulting in a total training set size of 1271 Al configurations, 262 Cu, 228 Ge, and 194 Mo. Another logical choice for constructing the multi-element training set would be to further downsample the Al dataset to better balance the relative concentrations of each element type. However, we observed that doing so resulted in worse property predictions for Al, presumably due to the large number of high-energy configurations in the original Al dataset making random sampling of low-energy configurations relevant to the properties of interest unlikely. In all cases, the atomic energies of each element type were shifted by the average energy of that type before training.
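For illustration, the per-element energy shift mentioned above can be written as a small NumPy helper; this is a hypothetical sketch of our own, assuming flat arrays of atomic energies and their element labels.

```python
import numpy as np

def shift_by_element_mean(energies, species):
    """Subtract each element type's mean atomic energy before training."""
    energies = np.asarray(energies, dtype=float)
    species = np.asarray(species)
    shifted = energies.copy()
    for element in np.unique(species):
        mask = species == element
        shifted[mask] -= shifted[mask].mean()
    return shifted
```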
## 3 Results
### Benchmarking tests
Using the s-NNP framework, we fit a collection of models to the Al dataset to probe the effects of increasing model capacity in two distinct ways: first by increasing the number of spline filters, and second by increasing the size of the network used for mapping filter outputs to energies. Table 1 outlines the architectures and accuracies of the trained models, grouped by key architectural changes and sorted by total number of parameters. The trained s-NNP models can be conceptually broken down into three main groups, each highlighting the effects of specific architectural changes: 1) "\((N_{2},N_{3})\) linear" models, increasing the number of splines when no neural network is used; 2) "\((N_{2},N_{3})\) \(l=\)*" models, increasing the number of splines and network size; and 3) models introducing the interpretability improvements described in Section 2.2.
The results in Table 1 show multiple avenues for systematically improving the performance of an s-NNP model, though each method appears to experience a saturation point beyond which the model suffers from diminishing returns as complexity increases. Using the "\((1,1)\) linear" model as a baseline, we see that increasing \(N_{3}\) from 1 to 8 monotonically decreases both the energy and the force errors. Increasing \(N_{2}\) has no significant effect on the accuracy of the linear models, which is consistent with the notion that a linear combination of cubic Hermite splines can be represented using a single spline.
The introduction of a neural network enables the model to better utilize the additional spline filters, leading to significant improvement of the "\((1,2)\)\(l=3\)" model over any of the linear models. Increasing the number of spline filters, and subsequently increasing the network width and depth to maintain a "funnel-like" structure (decreasing layer width with increasing depth), can also lead to steady improvements. However, with increasing model size we began to see a commensurate increase in training difficulty, often resulting in larger models with higher errors than what might be expected based on the performance of their smaller counterparts (e.g., "(2,4) \(l=6\)" compared to "(2,4) \(l=5\)", or "(1,8) \(l=5\), int. wide" compared to "(1,8) \(l=5\), int. all"). Notably, the interpretability improvements from Section 2.2 did not hurt model performance, demonstrating that accuracy and interpretability are not strictly opposing attributes of a model. Based on the results shown in Table 1, we will use the "(1,8) \(l=5\), int. all" model for all experiments and analyses in the remainder of this paper.
### Model visualization
A major advantage of s-NNP over many other MLIPs, especially when coupled with the modifications from Section 2.2, is that the spline filters \(G^{\alpha}_{3,i}\) and \(G^{\beta}_{2,i}\) lend themselves to easy visualization. This can be valuable for helping model developers and users to better understand how their model is interacting with the data or influencing simulation results. The polar plots in Fig. 3a, corresponding to the "(1,8) \(l=5\), int. all" model, represent the total activation of the 3-body filters induced by placing an atom at a given \((r,\theta)\). These visualizations make it easy to recognize aspects of the local environments around an atom that are learned during training to have lower energy. For example, the averaged filter shown in Fig. 3b shows an attractive behavior for bond angles \(60^{\circ}\leq\theta_{jik}\leq 90^{\circ}\), in addition to repulsion for angles larger than approximately \(120^{\circ}\). Many of the individual filters in Fig. 3a also show characteristic features at various bond angles, especially for bond lengths near the first and second nearest neighbor distances in FCC aluminum. Fig. 3c shows the short-range pair term, which was learned to have a strongly repulsive contribution, as would be expected based on the Pauli exclusion principle.

| Model name | \((N_{2},N_{3})\) | Network | Parameters (splines, network) | \(E_{\text{RMSE}}\) train (meV/atom) | \(E_{\text{RMSE}}\) test (meV/atom) | \(F_{\text{RMSE}}\) train (eV/Å) | \(F_{\text{RMSE}}\) test (eV/Å) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \((1,1)\) linear | \((1,1)\) | – | (66, 0) | 74 | 78 | 0.76 | 0.82 |
| \((1,2)\) linear | \((1,2)\) | – | (110, 0) | 56 | 56 | 0.54 | 0.58 |
| \((1,4)\) linear | \((1,4)\) | – | (198, 0) | 40 | 41 | 0.46 | 0.49 |
| \((2,4)\) linear | \((2,4)\) | – | (220, 0) | 38 | 36 | 0.46 | 0.50 |
| \((1,8)\) linear | \((1,8)\) | – | (374, 0) | 32 | 34 | 0.42 | 0.46 |
| \((1,2)\) \(l=3\) | \((1,2)\) | (3,2,1) | (110, 23) | 44 | 44 | 0.33 | 0.35 |
| \((1,4)\) \(l=5\) | \((1,4)\) | (5,4,3,2,1) | (198, 80) | 10 | 11 | 0.22 | 0.23 |
| \((2,4)\) \(l=4\) | \((2,4)\) | (6,3,2,1) | (220, 74) | 22 | 21 | 0.21 | 0.21 |
| \((2,4)\) \(l=5\) | \((2,4)\) | (6,4,3,2,1) | (220, 96) | 8.7 | 9.0 | 0.19 | 0.20 |
| \((2,4)\) \(l=6\) | \((2,4)\) | (6,5,4,3,2,1) | (220, 127) | 10 | 12 | 0.24 | 0.25 |
| \((1,8)\) \(l=5\) | \((1,8)\) | (9,8,4,2,1) | (374, 219) | 7.5 | 6.5 | 0.13 | 0.13 |
| \((1,8)\) \(l=5\), int. skip | \((1,8)\) | (9,8,4,2,1) | (374, 219) | 5.5 | 4.6 | 0.15 | 0.15 |
| \((1,8)\) \(l=5\), int. all | \((1,8)\) | (9,8,4,2,1) | (374, 219) | 5.5 | 5.9 | 0.12 | 0.12 |
| \((1,8)\) \(l=5\), int. wide | \((1,8)\) | (128,64,32,16,1) | (374, 11,793) | 5.9 | 5.1 | 0.13 | 0.14 |
| ANI Smith et al. (2021) | – | (96,96,64,1) | (–, 15,585) | – | 1.9 | – | 0.06 |

Table 1: Fitting results comparing linear s-NNP models, "\((N_{2},N_{3})\) linear", s-NNP models with neural networks, "\((N_{2},N_{3})\) \(l=\)*", s-NNP with interpretability improvements, "\((1,8)\) \(l=5\), int. *", and the ANI model from Smith et al. (2021), "ANI". The \((N_{2},N_{3})\) notation for the spline layers indicates that \(N_{2}\) 2-body and \(N_{3}\) 3-body spline filters were used. Network architectures are denoted as a tuple of integers ("Network" column) specifying the number of nodes in each hidden layer of the model. Although the training/testing errors decrease significantly when adding additional splines to the linear models, their performance appears to begin to saturate with the "(1,8) linear" model. Model performance only begins to be competitive with the ANI results upon the inclusion of a neural network with sufficient depth, which allows for non-linear combinations of spline outputs. While the "(1,8) \(l=5\), int. skip" model only adds the use of skip connections, the "(1,8) \(l=5\), int. all" and "(1,8) \(l=5\), int. wide" use all of the interpretability improvements discussed in Section 2.2 and Section 4.1. Note that this means that the single 2-body filter specified using the "(1,8)" notation is the short-range pair term described in Fig. 2, and that there are therefore no 2-body filters being passed through the network. Testing errors for the "ANI" model were taken directly from Smith et al. (2021), which did not report training errors. Note that the ANI model uses ensemble-averaging over 8 networks, which they report yields energy and force errors that are "20% and 40% smaller, respectively, compared to a single ANI model".
It is worth mentioning that similar plots to the ones shown in Fig. 3 could also be generated for other NNP-like models, for example by visualizing each of the components of the vector output of the first hidden layer in an ANI model. However, most other neural network-based architectures would have significantly more filters to visualize, given the size of their hidden layers. For example, the ANI model in Table 1 has 96 nodes in its first hidden layer, thus making the results more difficult to interpret. Furthermore, since ANI (and most other models) does not use skip connections summing the hidden layer directly into the output, any resultant visualization will not necessarily be in units of energy, meaning it may undergo significant non-linear transformations as it passes through the network.
### Flexibility tests
The high interpretability of the s-NNP spline filters, as discussed in Section 3.2, is particularly valuable when the filters can be applied to different chemical systems in order to enable cross-system comparisons. For example, if multiple s-NNP models using the same spline filters were trained to different chemical systems, then the networks of each model could be analyzed in order to understand how differences in chemistries result in different sensitivities to the local environment embeddings defined by the spline filters. The multi-element model trained to the datasets described in Section 2.4 uses the same spline filters for all of the data, but separate NNs for each element type. The network architecture maintained a funnel-like structure of \((12,8,4,2,1)\). It also used four additional 3-body filters in order to increase its fitting capacity, for a total of one short-range pair filter and 12 3-body filters, though without the use of skip connections or the Softplus activation function. In the case of the multi-element model, the use of skip connections would imply a "background" energy that is consistent across all four element types and is augmented by the network contributions for each element. While this assumption sounds plausible in theory, we found that in practice a model using skip connections resulted in poorer property predictions than one which did not. This decreased performance when using skip connections can likely be attributed to the larger concentration of Al data dominating the fitting and causing the filters to learn a background energy that is influenced by the high-energy Al configurations and therefore not transferable to the lower energy Cu/Ge/Mo data. We believe that further research into constructing balanced training datasets could help address this issue, though it may also be possible that a larger number of splines is required for describing this more diverse dataset. Another alternative, which was not explored here, would be to only use skip connections for a subset of the spline filters in order to give the model the ability to learn a transferable background energy without enforcing the full constraint of skip connections for all filters.

Figure 3: Visualizations of spline filters of the "(1,8) \(l=5\), int. all" model from Table 1 for atomic distances in the range [2.5 Å, 7 Å]. a) Plots of the individual \(G^{\alpha}_{3,i}\) filters, where integer labels above each plot indicate the index \(\alpha\). b) The average of all \(G^{\alpha}_{3,i}\) 3-body filters. c) The short-range pair term, as described in Section 2.2. Note that the radial splines \(f^{\alpha}_{c_{j}}\), \(f^{\alpha}_{c_{k}}\), and \(\rho^{\alpha}_{c_{j}}\) use linear extrapolation for distances outside of the domain defined by their knots. Dashed grey lines in the polar plots correspond to the first and second nearest-neighbor distances (2.86 Å and 4.05 Å respectively) for FCC Al at room temperature Nakashima [2020]. Each point in the polar plots is computed by fixing atom \(i\) at the origin, fixing atom \(j\) at the given \((r,\theta)\), fixing atom \(k\) along the \(\theta=0\) axis, then integrating \(G^{\alpha}_{3}\) over \(r_{k}\in[1.5,7.0]\). Note that there is a forced symmetry in the polar plots since the \(g^{\alpha}_{c_{jk}}\) splines in Eqn. 2 take \(\cos{(\theta)}\) as input. All polar plots use the same color scale, where values are clipped to fall within a chosen range to optimize for visibility while avoiding information loss. Values for \(r<1.5\) Å, which was the smallest distance sampled in the Al dataset as shown in Fig. B1, are omitted to ensure that high signals at small atomic distances would not wash out the rest of the colors in the plots. Though all plots in this figure are technically in units of eV, this would not be true if skip connections were not used.
The property predictions of the multi-element model plotted in Fig. 4 show that a model using shared filters for multiple datasets can learn to make reasonable property predictions for all four elements studied in this work. While the initial predictions for most of the elements were relatively good, the cubic elastic constants and bulk modulus for Mo were noticeably under-predicted, and the vacancy migration energy was far too large to be considered acceptable (dotted line in Fig. 4). Due to the relatively small number of spline filters used, and their high degree of interpretability, we were able to fine-tune the model by zeroing out the network weights of hand-chosen filters in order to remove their contribution to the Mo energy predictions. For example, the contributions of each spline filter can be visualized individually in order to isolate the influence of each filter on the properties of interest (see Fig. C2). Following this approach, we were able to improve the Mo predictions to bring them within an acceptable range without altering the predictions for the other three elements (dashed lines in Fig. 4). However, we note that the original training errors of the model (before fine-tuning), as shown in Fig. C4, were comparable to those of the MLIPs from Zuo et al. (2020), suggesting that the property predictions of the multi-element s-NNP model might have been improved by re-balancing the dataset. Similar to what was described in Section 2.4, a possible explanation for this is that the Mo dataset did not fully constrain the portions of the spline filters relevant to computing the properties of interest, and their shapes were therefore dictated by the Al dataset, leading to corrupted Mo property predictions. Further research into methods for preventing this type of "cross-pollution" of information would be valuable for constructing more flexible and generalizable models.

Figure 4: Property predictions of the lattice constant (\(a\)), cubic elastic constants (\(C_{ij}\)), bulk modulus (\(B\)), and vacancy formation and migration energies (\(E_{vf}\) and \(E_{m}\)) for s-NNP models compared to existing MLIPs. s-NNP results are from this work, ANI results from Smith et al. (2021), and all others from Zuo et al. (2020). All properties have been normalized with respect to the DFT-predicted values. Note that Smith et al. (2021) did not compute \(E_{m}\) for ANI. The "(1,8) \(l=5\), int. all" model (solid black line), which was trained only to Al, predicts all seven properties well, with the exception of \(C_{44}\), which has a relatively small magnitude compared to the other elastic constants. The performance of the multi-element s-NNP model (dashed black lines) is within the range of the other single-element MLIPs, though s-NNP's predictions for Ge are somewhat distorted. The "fine-tuned" s-NNP model was manually adjusted after training was completed in order to improve the property predictions, as described in Section 3.3.
In order to gain additional insights into the differences in energetics between the Al, Cu, Ge, and Mo systems, we compute the sensitivities of the energies predicted by the multi-element model with respect to each of the 3-body spline filters. The sensitivity of the total energy, \(E\), for all configurations \(N\) with respect to a given 3-body spline filter \(G_{3}^{\alpha}\) can be computed as:
\[s_{E}^{\alpha}=\sum_{i=1}^{N}|\partial E_{\text{s-NNP}}/\partial G_{3,i}^{ \alpha}|/E_{\text{DFT}}, \tag{11}\]
which can be easily computed via back-propagation.
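Since Eqn. 11 only requires gradients of the predicted energy with respect to the filter outputs, it maps naturally onto automatic differentiation. The sketch below is our own, covers a single configuration (accumulate over the training set to reproduce Eqn. 11), and assumes a hypothetical `net` that maps the per-atom 3-body filter matrix to atomic energies.

```python
import torch

def filter_sensitivities(G3, net, E_dft):
    """Per-filter sensitivity s^alpha = sum_i |dE/dG3_i^alpha| / E_DFT (Eqn. 11),
    computed via back-propagation for one configuration."""
    G3 = G3.clone().requires_grad_(True)        # (n_atoms, N3) 3-body filter outputs
    E = net(G3).sum()                           # total predicted energy
    (grad,) = torch.autograd.grad(E, G3)        # dE / dG3, same shape as G3
    return grad.abs().sum(dim=0) / E_dft        # one value per filter alpha
```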
Calculation and comparison of these sensitivities for each dataset, as shown in Fig. 5, highlight the learned similarities and differences between the four elemental datasets. Examination of the correlation between the sensitivities (right panel of Fig. 5) shows that the Cu and Mo datasets have similar filter sensitivities, while Ge is the only element to have a negative correlation with any of the other elements. The Al dataset appears to be uniformly similar to all three remaining elements, reflecting the fact that most of the geometric configurations of the Cu/Ge/Mo datasets are well-sampled by the Al dataset (see Fig. C3). We hypothesize that the negative correlation of the Ge sensitivities is due to an attempt by the model to encode the large cluster of outlying Ge points in the UMAP visualizations shown in Fig. C3.
## 4 Discussion
### Understanding model behavior
The main purpose behind the modifications proposed in Fig. 2 and Section 2.2 is to simplify the process of understanding the behavior of s-NNP models. This not only improves the usefulness of visualizations like those shown in Fig. 3 and Fig. C1, but can also aid in debugging model predictions in practice. In this section, we will discuss each modification from Section 2.2 in greater detail in order to highlight how they improve the interpretability of the model.
Figure 5: Analysis of sensitivities \(s_{E}^{\alpha}\) (Eqn. 11) of the predicted energies with respect to the 3-body spline filters, \(G_{3}^{\alpha}\), for each elemental model. We compute the sensitivity by calculating the derivative of the predicted energy with respect to the spline filter outputs, normalized by the DFT reference value, then summing over all atoms in the training set. Note that these sensitivities are computed with respect to the total energy \(E\), which includes the network contributions. The left panel plots the raw values, while the right panel shows the correlation between the sensitivities of each element. The empty bars for Mo correspond to the filters which were removed from the Mo energy predictions during fine-tuning as described in Section 3.3. The corresponding spline filters are visualized in Fig. C1.
The first of these modifications is the use of skip connections, which not only can lead to improved trainability, as seen in Table 1 and in other deep learning applications Li et al. (2017), but also greatly increase the interpretability of the model. Modifying \(\vec{G}_{i}\) to have units of energy seemingly contrasts with the notion of the atomic "density", \(n_{i}\), from Eqn. 1 and the local environment descriptor, \(\vec{D}_{i}\), from Eqn. 4, both of which are considered to be intermediate representations which only become energies once they have been transformed by a regression function (\(U_{c_{i}}\) or \(N_{c_{i}}\) respectively). However, we emphasize that these three quantities (\(n_{i}\), \(\vec{D}_{i}\), and \(\vec{G}_{i}\)) remain closely related even when \(\vec{G}_{i}\) is in units of energy while the others are not. Although local environment descriptors are usually seen as quantifying the geometry within a local neighborhood, there is no reason that the environment descriptor may not itself also be an energy. An energy-based descriptor could then be thought of as a type of energy partitioning scheme, where in the case of s-NNP with skip connections the atomic energies contribute both linearly to the total energy (through the skip connection) and non-linearly (through the network). In fact, in our previous work Vita and Trinkle (2021) we observed that \(U_{c_{i}}\) was often learned to be a nearly linear function, meaning that it essentially served the purpose of a simple unit conversion for \(n_{i}\). This linearity is essential to model interpretability, and is the main motivator for the use of skip connections in s-NNP, as it is a step towards simplifying the process of understanding how the spline filters contribute to the total energy.
Nevertheless, care must still be taken when interpreting the filter visualizations when the other modifications discussed in Section 2.2 are not also employed. For example, skip connections alone do _not_ guarantee that a negative filter value means \(E_{i}\) will also be negative, since the network contribution \(N(\vec{G}_{i})\) may outweigh the skip connection term. Similarly, adjusting the filter outputs (e.g., by tuning the knot positions) may have unexpected results due to the non-linear behavior of the network. These issues are also present with all non energy-based descriptors like \(n_{i}\) and \(\vec{D}_{i}\), but can be further improved upon using the remaining techniques discussed in this section.
In order to further facilitate straightforward interpretation of spline visualizations, we also choose to pin the knots of the radial splines to be 0 at the cutoff distance \(r_{c}\), and we remove bias terms from the layers of the NN. As a result of this, both \(N(\vec{G}_{i})\) and \(E_{\text{sc},i}\) will smoothly decay to 0 at the cutoff distance, thus incorporating the desired behavior which is enforced in the Behler-Parrinello NNP using the cutoff function \(v_{c}\) in Eqn. 6. In order to account for the fact that some datasets may have non-zero energy at the cutoff distance, an external bias term, \(b\), is added so that the atomic energy converges to \(b\) at \(r_{c}\). In this sense, \(b\) corresponds to the learned energy of the isolated atom. We note that this external bias term is preferable to using internal network bias or un-pinning the radial splines because it ensures that \(N(\vec{G}_{i})\) and \(E_{\text{sc},i}\) behave similarly as \(r_{ij}\) approaches \(r_{c}\).
Wrapping the network output \(N_{\text{no-bias}}(\vec{G}_{i})\) in the Softplus activation function guarantees that the NN contributions to the total energy are strictly non-negative. This ensures that any attractive behavior of the model arises solely from the spline filters through the skip connection term, \(E_{\text{sc},i}\), thus helping to isolate certain model behaviors to specific parameter sets. It is important to note, however, that the Softplus activation should only be used in conjunction with skip connections in order to ensure that the model can predict negative energies.
A common challenge for many MLIPs is ensuring a repulsive behavior at small values of \(r_{ij}\). This difficulty is not shared by classical potentials, however, as classical models often include explicit repulsive terms. While a data-driven solution to this problem is possible, by explicitly adding short-range dimers into the training dataset, many MLIP developers choose instead to adjust the form of their IP. One such method used in the literature is to augment the model by introducing an auxiliary potential designed to capture the repulsive behavior of a pair potential. For example, in Miladovich et al. (2023) a "composite potential" was constructed by first fitting a repulsive "auxiliary potential" to DFT data, then fitting a "main potential" to the residuals of the pair potential using a Gaussian Approximation Potential Bartok et al. (2010). We incorporate a similar idea by including a short-range 2-body filter, \(G_{2,i}\), which is summed directly into the total energy without ever passing through the NN. Note that this is different from the skip connections, which are summed directly into the energy _and_ passed through the NN. Having a spline filter which only contributes linearly to the total energy means that it is free of any of the interpretability issues described for the spline filters which are used as input to the NN. While this short-range pair term does not necessarily guarantee a repulsive behavior at small distances, we observe empirically that it is often learned during training to have a strongly repulsive shape. A similar technique is used by the SNAP interatomic potential Wood and Thompson (2018), where the repulsive ZBL pair potential is added to the potential form as a "reference potential".
### s-NNP's relation to other models
The s-NNP architecture shown in Fig. 1 can be further understood by drawing relationships between itself and other models from the literature, particularly the UF3 model Xie et al. (2021) and the original Behler-Parrinello NNP Behler and Parrinello (2007). s-NNP can be compared to UF3 and NNP by analyzing the differences between the two key components of each model: the embedding function for encoding local atomic environments into a descriptor, and the regression function for mapping the descriptor into an energy. Although it can be difficult to clearly distinguish between the embedding/regression portions of most MLIPs due to the inability to definitively establish the roles of all parameters in a deep model, we will attempt to break each model down in intuitive ways to facilitate comparison. One can view the vector \(\vec{G}_{i}\) from Eqn. 9 as a descriptor generated by an embedding function defined by the spline filters \(G_{3,i}^{\alpha}\) and \(G_{2,i}^{\beta}\). This embedding technique is most similar to the UF3 method, which also decomposes the energy into two-body and three-body terms described by spline basis functions. Though there are some differences in the exact details of the UF3 and s-NNP spline functions, for example UF3's use of tricubic B-splines instead of the 1D cubic Hermite splines of s-NNP, the general principle is the same. While s-NNP's embedding function is most similar to that of UF3, its regression function is identical to that of the Behler-Parrinello NNP Behler and Parrinello (2007). Therefore, in order to help analyze the performance of s-NNP with respect to other models in the literature, it is suitable to think of s-NNP as a combination of a UF3-like embedding function with an NNP regressor. Or, equivalently, as an NNP using a spline-based embedding function instead of atom-centered symmetry functions Behler and Parrinello (2007). However, neither UF3 nor NNP utilize all of the interpretability improvements discussed in Section 2.2.
The recently-proposed EAM-R model Nitol et al. (2023) is the most closely related model to s-NNP in the literature, as it also incorporates components of both a classical model (EAM) and an MLIP (an NNP). EAM-R is a composite potential (using the terminology described in Section 4.1) where the auxiliary potential is an EAM model, and the main potential is an RANN (an NNP with descriptors inspired by the analytical MEAM equations Baskes (1992)). Similar to this work, the developers of EAM-R observed that combining a classical model with an MLIP resulted in both improved stability relative to an MLIP alone and improved accuracy relative to a classical potential alone. Despite s-NNP's similarity to EAM-R, s-NNP has some key differences, namely the use of spline filters (instead of analytical MEAM-inspired descriptors) and some of the interpretability improvements discussed in Section 2.2. In particular, the spline filters may be expected to be more flexible than the RANN descriptors (similar to how s-MEAM is more flexible than analytical MEAM) for a given computational cost, and benefit from the ability to enforce smoothness and convergence through curvature penalties and knot pinning. Furthermore, s-NNP's use of skip connections and the removal of the internal network bias greatly improve the interpretability of the model, as discussed in Section 2.2. Although EAM-R does not include these modifications, it is an excellent example of an MLIP which could easily incorporate these same interpretability improvements.
### Computational costs
The computational cost of s-NNP is dominated by the evaluation of \(\vec{G}_{i}\), and is particularly dependent upon the choice of \(N_{3}\). In fact, basic profiling tests revealed that the filter evaluations accounted for approximately 95% of the total CPU and GPU time. This behavior can be understood heuristically by the fact that Eqn. 2 involves approximately \(\mathcal{O}(N_{n}^{2})\) spline evaluations for each filter, where \(N_{n}\) is the average number of neighbors within the cutoff distance (due to the summation over triplets of atoms), as opposed to the relatively few matrix multiplications associated with the evaluation of the neural network. In general, the computational cost of inference with an s-NNP potential will scale sub-linearly with \(N_{3}\) (some speedup can be achieved by performing batched spline evaluations for the filters in Eqn. 2). An important practical implication of this is that in order to improve the accuracy of a given s-NNP model (and other NNP-based MLIPs as well), it is much more computationally efficient to increase the size of the network rather than the size of the embedding function. On the other hand, increasing \(N_{3}\) may lead to larger increases in accuracy (up to a point) than what is achievable by only increasing the network size.
Given the performances of the s-NNP models in Table 1 and the timing comparisons observed in our previous work between s-MEAM and a NNP Vita and Trinkle (2021), it may be expected that an s-NNP could be constructed that achieves identical errors to ANI while maintaining a higher speed. The "(1,8) \(l=5\), int. all" model is already nearing this threshold, especially taking into account that the values for ANI reported in Table 1 use ensemble-averages over 8 models, as reported by the original authors Smith et al. (2021), which they say can make the energy and force errors "20% and 40% smaller, respectively" and may account for the differences in performance as compared to "(1,8) \(l=5\), int. all".
## 5 Conclusion
In this work we developed a novel framework using spline-based filters coupled with a neural network regressor in order to blend the strengths of both classical and ML IPs. We use this framework to probe the gap between these two classes of models, observing performance limits of linear ("classical") models that can be overcome using even small neural networks. We then show that this improved performance can be maintained while incorporating architectural changes which improve the interpretability of the model, such as the use of skip connections, an external bias term, |
2304.05104 | Approaching Test Time Augmentation in the Context of Uncertainty
Calibration for Deep Neural Networks | With the rise of Deep Neural Networks, machine learning systems are nowadays
ubiquitous in a number of real-world applications, which bears the need for
highly reliable models. This requires a thorough look not only at the accuracy
of such systems, but also at their predictive uncertainty. Hence, we propose a
novel technique (with two different variations, named M-ATTA and V-ATTA) based
on test time augmentation, to improve the uncertainty calibration of deep
models for image classification. By leveraging na adaptive weighting system,
M/V-ATTA improves uncertainty calibration without affecting the model's
accuracy. The performance of these techniques is evaluated by considering
diverse metrics related to uncertainty calibration, demonstrating their
robustness. Empirical results, obtained on CIFAR-10, CIFAR-100, Aerial Image
Dataset, as well as in two different scenarios under distribution-shift,
indicate that the proposed methods outperform several state-of-the-art post-hoc
calibration techniques. Furthermore, the methods proposed also show
improvements in terms of predictive entropy on out-of-distribution samples.
Code for M/V-ATTA available at: https://github.com/pedrormconde/MV-ATTA | Pedro Conde, Tiago Barros, Rui L. Lopes, Cristiano Premebida, Urbano J. Nunes | 2023-04-11T10:01:39Z | http://arxiv.org/abs/2304.05104v2 | Approaching Test Time Augmentation in the Context of Uncertainty Calibration for Deep Neural Networks
###### Abstract
With the rise of Deep Neural Networks, machine learning systems are nowadays ubiquitous in a number of real-world applications, which bears the need for highly reliable models. This requires a thorough look not only at the accuracy of such systems, but also at their predictive uncertainty. Hence, we propose a novel technique (with two different variations, named _M-ATTA_ and _V-ATTA_) based on test time augmentation, to improve the uncertainty calibration of deep models for image classification. Unlike other test time augmentation approaches, _MV-ATTA_ improves uncertainty calibration without affecting the model's accuracy, by leveraging an adaptive weighting system. We evaluate the performance of the technique with respect to different metrics of uncertainty calibration. Empirical results, obtained on CIFAR-10, CIFAR-100, as well as on the benchmark Aerial Image Dataset, indicate that the proposed approach outperforms state-of-the-art calibration techniques, while maintaining the baseline classification performance. Code for _MV-ATTA_ available at: [https://github.com/pedromcondo/MV-ATTA](https://github.com/pedromcondo/MV-ATTA)
Uncertainty Calibration, Reliability, Probabilistic Interpretation, Test Time Augmentation, Deep Neural Networks.
## 1 Introduction
Deep Neural Networks (DNNs) changed the paradigm with regards to the applicability of machine learning (ML) systems to real-world scenarios. Consequently, deep learning (DL) models are now present in critical application domains (_e.g._, autonomous driving, medicine, remote sensing, robotics), where bad decision-making can bear potentially drastic consequences. This requires that DNNs are not only highly accurate, but also highly _reliable_ - decision-makers should be able to "trust" the predictions of these models. This leads us to the problem of _uncertainty calibration_ (also referred to as confidence calibration or simply _calibration_): it is required that the confidence output generated by the DL model - which translates as the confidence the model has in the prediction it is making - realistically represents the true likelihood of correctness. For the sake of intuition, a calibrated model would, for example, in the long run, correctly classify \(70\%\) of those predictions that have a confidence value of \(0.7\) associated. This accurate quantification of predictive uncertainty results in reliable confidence values associated with each prediction, and therefore, in a more reliable model. As such, it is important to understand how well calibrated modern DNNs are. Further details, including the formalization of the uncertainty calibration problem, will be described in Section 3.
Although increasingly accurate, modern DL architectures have been found to be prone to _miscalibration_ [20, 7]. Furthermore, "modern neural networks exhibit a strange phenomenon: probabilistic error and miscalibration worsen even as classification error is reduced" [7]. For this reason, the goal of this work is to improve the uncertainty calibration of DNNs in the task of image classification, by proposing a novel accuracy-consistent weighted test time augmentation method.
Test time augmentation is a general methodology that leverages data augmentation techniques to create multiple samples from the original input at inference (_i.e._, at test time). Therefore, test time augmentation methods can be applied to pre-trained models, since, in this case, the augmentation process is not applied during the training phase. The technique introduced in this work combines the use of test time augmentation with a custom weighting system, guaranteeing that the accuracy of the original DL model is not corrupted, while still being optimized to improve uncertainty calibration. This builds, partially, on the work done in [3], proposing both a reformulated version - _V-ATTA_ (Vector Adaptive Test Time Augmentation) - of the preliminary method presented in [3] and also a generalized version - _M-ATTA_ (Matrix Adaptive Test Time Augmentation) - with a broader and extended empirical evaluation.
_M/V-ATTA_ is evaluated on the benchmark CIFAR-10/CIFAR-100 [11] datasets, as well as on a benchmark satellite imagery dataset, the Aerial Image Dataset (AID) [28]. The results are compared with state-of-the-art _post-hoc_ calibration methods, with respect to the Brier score [2] and the _Expected Calibration Error_ (ECE), for both _common_ and _strong_ uncertainty calibration (see Section 3 for further details).
**Contribution**: We propose a novel calibration technique - with two different variations (_M/V-ATTA_) - based on test time augmentation, that guarantees consistency in the accuracy of the deep models to which it is applied, while being shown to improve uncertainty calibration-related evaluation metrics, outperforming state-of-the-art _post-hoc_ calibration methods in most cases. To the best of our knowledge, _M/V-ATTA_ is the first method based on test time augmentation that has been proposed to improve the uncertainty calibration of deep models (besides its predecessor in [3]). Furthermore, the method presented here can be used with pre-trained DNNs, which is advantageous in terms of applicability.
## 2 Related Work
The authors in [7] introduce the topic of uncertainty calibration to the DL community, by evaluating how well calibrated are different modern DNNs, using different datasets (from both computer vision and natural language processing applications). As previously stated, the authors argue that, although more accurate, modern DNNs exhibit problems of miscalibration, often more severe than those found in older - and less accurate - architectures. Several _post-hoc_ calibration techniques - particularly designed to address uncertainty calibration without the need of re-training the original DL models - like _temperature scaling_ (an extension of the _Platt scaling_ algorithm [18, 22]), _histogram binning_[29] and _isotonic regression_[30], are described to address this problem.
Other approaches, like approximate Bayesian models [4, 6] and some regularization techniques [15, 21], have also been used in the context of uncertainty calibration [20]. Nonetheless, these approaches require building new, more complex models or modifying and re-training pre-existing ones, contrarily to the previously mentioned _post-hoc_ calibration methods and the proposed _M/V-ATTA_.
For the evaluation of uncertainty calibration, one of the most popular metrics is the ECE [17], where the bin-wise difference between the accuracy and the confidence outputs of a given model is evaluated. Although intuitive, some limitations of this metric have been identified in relevant literature [8, 19, 27], with regard to, for example, its intractability and dependence on the chosen binning scheme. Furthermore, because the ECE is not a proper scoring rule [5], there exist trivial uninformative solutions that obtain an optimal score, like always returning the marginal probability (see example in Supplementary Material, Section 2). A popular alternative to evaluate uncertainty calibration is the Brier score [2]; because it is a proper scoring rule and overcomes some of the identified limitations of the ECE, this metric has been increasingly used by the scientific community [20, 13, 24].
Test time augmentation methods have been gaining some attention in the last few years, for example in biomedical applications [14, 25, 26]. Nonetheless, some of the potential of this type of technique remains under-researched and, to the best of our knowledge, all relevant literature fails to address its effect on calibration-specific metrics like the Brier score or the ECE (with the exception of our preliminary work in [3]). Contrarily to other recent approaches to test time augmentation [10, 12, 23], the method proposed in this work does not focus on improving the accuracy of DNNs, but instead on improving their uncertainty calibration without altering the original accuracy. In fact, the authors in [23] show that some forms of test time augmentation may produce corrupted predictions - which can ultimately worsen the model's performance - reinforcing the need for a test time augmentation-based methodology that does not alter the predicted class, while still calibrating its confidence value.
## 3 Background
**Notation**: we will use bold notation to denote vectors, like \(\mathbf{p}=\ (p_{1},\ldots,p_{k})\); the \(i\)-th element of some vector \(\mathbf{p}\) will be referred as \(\mathbf{p}_{\{i\}}:=p_{i}\); the \(\downarrow\) symbol, associated with a given metric, informs that a lower value of such metric represents a better performance; the \(\odot\) symbol represents the Hadamard product, \(\sigma:\mathbb{R}^{k}\rightarrow\Delta_{k}\) represents the _softmax_ function. These remarks are valid for all sections of the article; other remarks on notation will be given along the text, when found relevant.
In this section we discuss the problem of uncertainty calibration and present some evaluation metrics commonly used in this context. In relevant literature, the concept of uncertainty calibration related to DL systems in a multi-class scenario is often defined in two different ways. The most common is the one presented in [7], where the multi-class scenario is considered as an extension of the binary problem in a _one-vs-all_ approach, taking into account only the calibration of the highest confidence value for each prediction. Nonetheless, some works consider the more general definition present in [29], which takes into account all the confidence values of the predicted probability vector. Like in [27], we make the respective distinction between a _calibrated_ model and a _strongly calibrated_ model in the following definitions. Notice that in the case of a binary classifier such definitions are equivalent.
Let us consider a pair of random variables \((X,Y)\), where \(X\) represents an input space (or feature space) and \(Y\) the corresponding set of true labels. Let us now take a model \(f:X\rightarrow\Delta_{k}\), with \(\Delta_{k}=\{(p_{1},\ldots,p_{k})\in[0,1]^{k}:\sum_{i=1}^{k}p_{i}=1\}\) being a probability simplex (this setting corresponds to a classification problem with \(k\) different classes). The model \(f\) is considered **calibrated** if
\[\mathds{P}[Y=\operatorname*{arg\,max}_{i\in\{1,\ldots,k\}}f(X)\mid\max_{i\in \{1,\ldots,k\}}f(X)]=\max_{i\in\{1,\ldots,k\}}f(X). \tag{1}\]
Additionally, the model \(f\) is considered **strongly calibrated** if
\[\mathds{P}[Y=y\mid f(X)_{\{y\}}]=f(X)_{\{y\}}. \tag{2}\]
As stated in [7], achieving perfect calibration is impossible in practical settings. Furthermore, the probability values in the left hand side of both (1) and (2) cannot be computed using finitely many samples, which motivates the need for scoring rules to assess uncertainty calibration.
### _Brier score \(\downarrow\)_
Brier score [2] is a proper scoring rule [5] that computes the squared error between a predicted probability and its true response, hence its utility to evaluate uncertainty calibration. For a set of \(N\) predictions we define the **Brier score** as
\[\text{BS}=\frac{1}{N}\sum_{j=1}^{N}(p^{j}-o^{j})^{2}, \tag{3}\]
where \(p^{j}\) is the highest confidence value of the prediction \(j\) and \(o^{j}\) equals 1 if the true class corresponds to the prediction and 0 otherwise.
The previous definition is useful to asses calibration (in the more common sense). Nonetheless, [2] also presents a definition for Brier score in multi-class scenario that is suitable to assess strong calibration. For a problem with \(k\) different classes and a set of \(N\) predictions we define the **multi-class Brier score** (mc-Brier score) as
\[\text{mc-BS}=\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{k}(p_{i}^{j}-o_{i}^{j})^{2}, \tag{4}\]
where \(p_{i}^{j}\) is the confidence value in position \(i\) of the \(j\)-th predicted probability vector and \(o_{i}^{j}\) equals 1 if the true label equals \(i\) and 0 otherwise.
We refer to [16] and [1] for some thorough insights about the interpretability and decomposition of the Brier score.
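Both scores reduce to a few lines of NumPy; the sketch below implements Eqns. 3 and 4 directly (the helper names are our own).

```python
import numpy as np

def brier_score(confidences, correct):
    """Eqn. 3: mean squared error between top-1 confidence and correctness."""
    p = np.asarray(confidences, dtype=float)
    o = np.asarray(correct, dtype=float)       # 1 if prediction correct, else 0
    return np.mean((p - o) ** 2)

def mc_brier_score(probs, labels):
    """Eqn. 4: multi-class Brier score over the full probability vectors."""
    probs = np.asarray(probs, dtype=float)     # shape (N, k)
    onehot = np.eye(probs.shape[1])[np.asarray(labels)]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```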
### _Expected Calibration Error \(\downarrow\)_
To compute the ECE [17] we start by dividing the interval \([0,1]\) in \(M\) equally spaced intervals. Then a set of bins \(\{B_{1},B_{2},\ldots,B_{M}\}\) is created, by assigning each predicted probability value to the respective interval. The idea behind this measurement is to compute a weighted average of the absolute difference between accuracy and confidence in each bin \(B_{i}\) (\(i=1,\ldots,M\)). We define the **confidence per bin** as
\[conf(B_{i})=\frac{1}{|B_{i}|}\sum_{j\in B_{i}}p^{j}, \tag{5}\]
where \(p^{j}\) is the highest confidence value of the prediction \(j\). The **accuracy per bin** is defined as
\[acc(B_{i})=\frac{1}{|B_{i}|}\sum_{j\in B_{i}}o^{j}, \tag{6}\]
where \(o^{j}\) equals \(1\) if the true class corresponds to the prediction and 0 otherwise. For a total of \(N\) predictions and a binning scheme \(\{B_{1},B_{2},\ldots,B_{M}\}\), the **ECE** is defined as
\[\text{ECE}=\sum_{i=1}^{M}\frac{|B_{i}|}{N}|conf(B_{i})-acc(B_{i})|. \tag{7}\]
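A direct NumPy implementation of Eqns. 5-7 follows; this is a sketch of our own, and the handling of confidences that fall exactly on a bin edge is an arbitrary choice.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Eqn. 7: weighted average of |acc(B_i) - conf(B_i)| over M equal bins."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        mask = (conf >= lo) & (conf <= hi) if i == 0 else (conf > lo) & (conf <= hi)
        if mask.any():
            ece += (mask.sum() / n) * abs(corr[mask].mean() - conf[mask].mean())
    return ece
```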
## 4 Proposed Methodology
In this section we introduce _M-ATTA_ and _V-ATTA_. Both methods leverage the use of test time augmentation, combined with a custom adaptive weighting system. _V-ATTA_ can be interpreted as a restricted version of the more general _M-ATTA_.
### _M-ATTA_
Let us start by considering \(m\in\mathbb{N}\) different types of augmentations. Because it is common that some augmentations have random parameters, it can be desirable to apply the same type of augmentation more than once; for this reason, let us denote by \(n_{i}\) (\(i=1,\ldots,m\)) the number of times the \(i\)-th type of augmentation is applied. As such, we will define, for each original input \(I_{0}\), the \(j\)-th (\(j=1,\ldots,n_{i}\)) augmented input with the \(i\)-th augmentation type as \(I_{i,j}\).
We can now take into account the model \(f:X\rightarrow\Delta_{k}\) (where \(X\) is the input space and \(k\) the number of classes) and consider \(g:X\rightarrow\mathbb{R}^{k}\) such that \(f:=\sigma\circ g\). With this, we now define
\[p^{0}=f(I_{0}),\qquad z_{i,j}=g(I_{i,j}), \tag{8}\]
_i.e._, \(p^{0}\) is the probability vector associated with the original input and \(z_{i,j}\) is the logit vector associated with the \(j\)-th augmentation of the \(i\)-th type. Subsequently, we can define, \(\forall i\in[1,\ldots,m]\),
\[\mathbf{z}^{i}=\frac{\sum_{j=1}^{n_{i}}\mathbf{z}_{i,j}}{n_{i}}\equiv\left(z _{1}^{i},z_{2}^{i},\ldots,z_{k}^{i}\right)\in\mathbb{R}^{k}, \tag{9}\]
and then construct the matrix
\[\mathbf{Z}=\left[\mathbf{z}^{i}\right]_{i=1,\ldots,m}^{\text{T}}=\left[\begin{array} []{cccc}z_{1}^{1}&z_{1}^{2}&\cdots&z_{1}^{m}\\ z_{2}^{1}&z_{2}^{2}&\cdots&z_{2}^{m}\\ \vdots&\vdots&\ddots&\vdots\\ z_{k}^{1}&z_{k}^{2}&\cdots&z_{k}^{m}\end{array}\right]\in\mathbb{R}^{k,m}. \tag{10}\]
Now, for some parameters
\[\omega^{*}\in[0,1],\quad\mathbf{W}=\left[\begin{array}{cccc}\omega_{1}^{1}& \omega_{1}^{2}&\cdots&\omega_{1}^{m}\\ \omega_{2}^{1}&\omega_{2}^{2}&\cdots&\omega_{2}^{m}\\ \vdots&\vdots&\ddots&\vdots\\ \omega_{k}^{1}&\omega_{k}^{2}&\cdots&\omega_{k}^{m}\end{array}\right]\in \mathbb{R}^{k,m}, \tag{11}\]
we finally define, for each prediction, the final prediction probability vector as
\[\mathbf{p}\left(\tilde{\omega}\right)=(1-\tilde{\omega})\mathbf{p}^{0}+ \tilde{\omega}\ \sigma(\mathbf{W}\odot\mathbf{Z})I_{m}, \tag{12}\]
with
\[\tilde{\omega}=\max\Big{\{}\omega\in[0,\omega^{*}]:\operatorname*{arg\,max}_{i\in\{1,\ldots,k\}}\mathbf{p}\left(\omega\right)=\operatorname*{arg\,max}_{i\in\{1,\ldots,k\}}\mathbf{p}^{0}\Big{\}}. \tag{13}\]
We consider \(I_{m}\in\mathbb{R}^{m}\) as an \(m\)-dimensional vector where every element equals 1 and remind the reader that \(\sigma:\mathbb{R}^{k}\rightarrow\Delta_{k}\) represents the _softmax_ function. We also note that the learnable parameters \(\mathbf{W}\in\mathbb{R}^{k,m}\) and \(\omega^{*}\in\mathbb{R}\) work, respectively, as a weight matrix and an upper bound for \(\tilde{\omega}\).
The value of \(\tilde{\omega}\) may vary in each prediction, adapting in a way that prevents corruptions in terms of accuracy, according to the definition in (13). Both \(\omega^{*}\in[0,1]\) and \(\mathbf{W}\in\mathbb{R}^{k,m}\) can be optimized with a given validation set. In a practical scenario, the value \(\tilde{\omega}\) is approximated as described in the algorithmic description of _M-ATTA_ in Algorithm 1. In our case \(\epsilon\) in Algorithm 1 is defined as 0.01.
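Since Algorithm 1 itself is not reproduced here, the following PyTorch sketch gives one plausible reading of Eqns. 12-13, with `p0` of shape \((k,)\) and `Z`, `W` of shape \((k,m)\); we average (rather than sum) the softmax columns so that the output stays on the probability simplex, which is an assumption on our part.

```python
import torch

def m_atta_predict(p0, Z, W, omega_star, eps=0.01):
    """Approximate Eqns. 12-13: start from omega* and back off until the
    predicted class matches that of the original prediction p0."""
    aug = torch.softmax(W * Z, dim=0).mean(dim=1)   # sigma(W ⊙ Z), columns averaged
    c0 = p0.argmax()
    omega = float(omega_star)
    while omega > 0:
        p = (1 - omega) * p0 + omega * aug
        if p.argmax() == c0:
            return p
        omega -= eps
    return p0                                       # fall back to the original prediction
```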
### _V-ATTA_
With _V-ATTA_ we restrict the matrix \(\mathbf{W}\) to a diagonal matrix
\[\mathbf{W}_{d}=\left[\begin{array}{cccc}\omega^{1}&0&\cdots&0\\ 0&\omega^{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\omega^{m}\end{array}\right]\in\mathbb{R}^{m,m}, \tag{14}\]
and define the new prediction probability vector as
\[\mathbf{p}\left(\tilde{\omega}\right)=(1-\tilde{\omega})\mathbf{p}^{0}+ \tilde{\omega}\;\sigma(\mathbf{W}_{d}\mathbf{Z}^{\mathrm{T}})I_{m}, \tag{15}\]
with \(\tilde{\omega}\) still defined as in (13).
We care to note that, contrarily to _M-ATTA_, _V-ATTA_ has the same number of parameters as the preliminary method presented in [3], while still being capable of achieving improved results (see Supplementary Material, Section 3).
In this case, the algorithmic description is as represented in Algorithm 2. Once again, \(\epsilon\) is 0.01.
```
Input:  Augmented logits matrix Z ∈ R^{k,m}, original prediction p⁰ ∈ Δ_k
Output: Calibrated prediction p ∈ Δ_k
Learnable parameters: W_d ∈ R^{m,m}, ω* ∈ R

1:  ε ← 0.01
2:  ω̃ ← ω*
3:  c₀ ← arg max_{i ∈ {1,…,k}} p⁰
4:  while c ≠ c₀ ∧ ω̃ > 0 do
5:      p ← (1 − ω̃) p⁰ + ω̃ σ(W_d Zᵀ) I_m
6:      c ← arg max_{i ∈ {1,…,k}} p
7:      ω̃ ← ω̃ − ε
```
**Algorithm 2** V-ATTA
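Because \(\mathbf{W}_{d}\mathbf{Z}^{\mathrm{T}}\) simply rescales the logits of each augmentation type by a single scalar, V-ATTA can reuse the hypothetical `m_atta_predict` sketch from the previous subsection with a rank-restricted weight matrix; `w` is a vector of length \(m\) holding the diagonal of \(\mathbf{W}_{d}\).

```python
def v_atta_predict(p0, Z, w, omega_star, eps=0.01):
    """V-ATTA (Eqn. 15) as M-ATTA with one scalar weight per augmentation type;
    broadcasting w (shape (m,)) over the rows of Z reproduces W_d Z^T."""
    return m_atta_predict(p0, Z, w, omega_star, eps)  # W * Z broadcasts w per column
```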
## 5 Experiments and Results
Let us start by observing that test time augmentation methods - just like traditional augmentation methods - can be applied with a highly extensive set of augmentation policies, thus possibly resulting in virtually endless different scenarios. Nevertheless, for practical reasons, the experiments presented here were conducted by taking into consideration only four different image transformations, combining them into eight different augmentation policies. The image transformations used were
* _Flip_: a flip around the vertical axis of the image;
* _Crop_: creates a cropped input with a size ratio of approximately \(0.8\), extracted from a random position within the original input image;
* _Brightness_: creates, from an original input \(\gamma\), a new input \(\gamma^{\prime}=\gamma+\beta\gamma_{\text{max}}\), where \(\gamma_{\text{max}}\) is the maximum pixel value from \(\gamma\), and \(\beta\) is a random number extracted from a continuous Uniform distribution within the interval \([-0.5,0.5]\);
* _Contrast_: creates, from an original input \(\gamma\), a new input \(\gamma^{\prime\prime}=\gamma(1+\alpha)\), where \(\alpha\) is a random number extracted from a continuous Uniform distribution within the interval \([-0.2,0.2]\).
These four types of augmentations were selected for this work because they are common and easily replicable image transformations, thus reinforcing the merits of the proposed methods (_M/V-ATTA_) by showing their applicability without the need to extensively search over a set of complex augmentations.
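For illustration, the four transformations could be sketched as follows for images stored as float arrays (our code, not the authors'; we read the crop ratio of 0.8 as applying to each side of the image):

```python
import numpy as np

rng = np.random.default_rng()

def flip(img):
    return img[:, ::-1]                       # flip around the vertical axis

def crop(img, ratio=0.8):
    h, w = img.shape[:2]
    ch, cw = int(h * ratio), int(w * ratio)
    top = rng.integers(0, h - ch + 1)         # random position within the image
    left = rng.integers(0, w - cw + 1)
    return img[top:top + ch, left:left + cw]

def brightness(img):
    beta = rng.uniform(-0.5, 0.5)
    return img + beta * img.max()             # gamma' = gamma + beta * gamma_max

def contrast(img):
    alpha = rng.uniform(-0.2, 0.2)
    return img * (1.0 + alpha)                # gamma'' = gamma * (1 + alpha)
```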
The composition of the eight different augmentation policies is described in Table I. The rationale behind the selection of these policies is based on conclusions derived from the results with the preliminary approach presented in [3]. First, all augmentation policies are composed of more than one type of augmentation, since this was generally the best approach in [3]. Secondly, the _contrast_ transformation is only present in augmentation policies composed of three or more types of augmentations, since there was some evidence in [3] that this transformation had, in general, the lowest positive impact in terms of uncertainty calibration.
In all experiments, a ResNet-50 [9] architecture is used. The achieved classification accuracy values are \(94.21\%\), \(72.78\%\) and \(93.40\%\) for the CIFAR-10, CIFAR-100 and AID test sets, respectively. The sizes of the training, validation and test sets are, respectively: 50000, 1000 and 9000 for the CIFAR-10/100 datasets, and 7000, 1000 and 2000 for the AID dataset. We prioritized having the same size for all validation sets, since these are used in the optimization process of _M/V-ATTA_.
For each dataset, the experiments with _M/V-ATTA_ are performed according to the following steps: using each of the eight augmentation policies (see Table I), the learnable parameters are optimized on the validation set, with the Negative Log-Likelihood (NLL) as loss function, 500 epochs, a batch size of 500, the Adam optimizer with a learning rate of 0.001, and all weights initialized to 1; the eight obtained versions of M/V-ATTA are compared on the validation
\begin{table}
\begin{tabular}{l||l} \hline \hline Aug. 1: \(F+5Cr\) & Aug. 5: \(F+5Cr+5Ct\) \\ Aug. 2: \(F+5B\) & Aug. 6: \(F+5B+5Ct\) \\ Aug. 3: \(5Cr+5B\) & Aug. 7: \(5Cr+5B+5Ct\) \\ Aug. 4: \(F+5Cr+5B\) & Aug. 8: \(F+5Cr+5B+5Ct\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Composition of the eight different augmentation policies used in this work. \(F\) refers to a _flip_ transformation, \(Cr\) to _crop_, \(B\) to _brightness_ and \(Ct\) to _contrast_. _E.g._, Augmentation Policy 1 (Aug. 1) is composed of one _flip_ transformation and five _crop_ transformations.
set (the same one used in the optimization) using the Brier score, and the best performing augmentation policy is selected (see Subsection 5.1); using that augmentation policy, the method is finally evaluated on the test set (see Subsection 5.2).
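A hedged PyTorch sketch of this validation-set optimization, for the V-ATTA case (our code; for differentiability we optimize the un-capped prediction \(\mathbf{p}(\omega^{*})\) and apply the accuracy-preserving search of Eq. (13) only at inference time, which is our assumption about the training procedure):

```python
import torch

def nll_loss_vatta(w_diag, omega_star, Z_val, p0_val, labels):
    """NLL of V-ATTA predictions on a validation batch.

    Z_val : (B, k, m) augmented logits;  p0_val : (B, k) original probabilities;
    labels : (B,) ground-truth class indices.
    """
    weighted = Z_val * w_diag.view(1, 1, -1)        # scale logits per augmentation
    p_aug = torch.softmax(weighted, dim=1).mean(dim=2)
    p = (1 - omega_star) * p0_val + omega_star * p_aug
    return -torch.log(p.gather(1, labels.view(-1, 1)).clamp_min(1e-12)).mean()

# Adam, lr=0.001, 500 epochs, batch size 500, weights initialized to 1 (Sec. 5).
m = 16  # e.g. Aug. 8: 1 flip + 5 crop + 5 brightness + 5 contrast copies
w_diag = torch.ones(m, requires_grad=True)
omega_star = torch.ones(1, requires_grad=True)      # clamping to [0,1] may be needed
opt = torch.optim.Adam([w_diag, omega_star], lr=1e-3)

# for epoch in range(500):
#     for Z_val, p0_val, labels in val_loader:      # hypothetical validation loader
#         opt.zero_grad()
#         nll_loss_vatta(w_diag, omega_star, Z_val, p0_val, labels).backward()
#         opt.step()
```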
### _Augmentation policy validation_
In this subsection we compare the eight augmentation policies described in Table I, in terms of how _M-ATTA_ and _V-ATTA_ perform, as measured by the Brier score. For this purpose, the experiments are conducted on the respective validation set of each of the three datasets (CIFAR-10, CIFAR-100, AID). The results are presented in Table II. The augmentation policies that achieve the best performance in each case are subsequently used in Subsection 5.2 in the experiments conducted on each respective test set.
We observe from the results in Table II that, while Aug. 8 is the best performing augmentation policy for _M-ATTA_ regardless of the dataset, the same does not hold for _V-ATTA_, where each dataset has a different best performing augmentation policy. Nonetheless, the results obtained with Aug. 8 applied to _V-ATTA_ are still relatively close to the best performing scenario in all three datasets. We also note that Aug. 2 and Aug. 6, which show the best results for _V-ATTA_ on CIFAR-10 and CIFAR-100, respectively, are the worst performing augmentation policies on the AID dataset, suggesting some dataset-dependent behaviour.
Finally, we also observe that _M-ATTA_ performs better than _V-ATTA_ in all cases in the three validation sets.
Results analogous to those in Table II, but for the respective test sets, can be found in the Supplementary Material, Section 3.
### _Main results_
Here we describe and discuss the results summarized in Table III. The results obtained with both _M-ATTA_ and _V-ATTA_ are compared against the performance of _temperature scaling_[7], _isotonic regression_[30], _histogram binning_[29] (for details on these baseline methods see the Supplementary Material, Section 1) and a _vanilla_ approach (referring to the results obtained from the DNN without any type of calibration method). These baselines were chosen because they share, with both _M-ATTA_ and _V-ATTA_, two important characteristics: they can be applied to pre-trained DNNs, requiring no modification of the network architecture, and they do not alter the original classification accuracy of the DNN to which they are applied.
All methods are evaluated with the described uncertainty calibration metrics, namely the Brier score (both the "classical" and the multi-class version) and the ECE (with 15 bins), as well as the widely known NLL loss. Although a proper scoring rule (and thus built to evaluate the quality of probabilistic predictions), the NLL loss (traditionally used in the training process of DNNs) cannot be considered a calibration (or strong calibration) metric under our definition, because
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c} \hline & & Aug. 1 & Aug. 2 & Aug. 3 & Aug. 4 & Aug. 5 & Aug. 6 & Aug. 7 & Aug. 8 \\ \hline \hline \multirow{2}{*}{CIFAR-10} & M-ATTA & 0.0399 & 0.0371 & 0.0434 & 0.0372 & 0.0376 & 0.0366 & 0.0416 & **0.0362** \\ & V-ATTA & 0.0426 & **0.0402** & 0.0465 & 0.0417 & 0.0419 & 0.0408 & 0.0448 & 0.0408 \\ \hline \hline \multirow{2}{*}{CIFAR-100} & M-ATTA & 0.1149 & 0.1126 & 0.1179 & 0.1081 & 0.1084 & 0.1109 & 0.1134 & **0.1065** \\ & V-ATTA & 0.1277 & 0.1242 & 0.1336 & 0.1246 & 0.1248 & **0.1240** & 0.1311 & 0.1241 \\ \hline \hline \multirow{2}{*}{AID} & M-ATTA & 0.0272 & 0.0453 & 0.0284 & 0.0254 & 0.0253 & 0.0428 & 0.0263 & **0.0247** \\ & V-ATTA & 0.0316 & 0.0529 & 0.0328 & **0.0310** & 0.0314 & 0.0507 & 0.0328 & 0.0315 \\ \hline \end{tabular}
\end{table} TABLE II: Results with respect to the Brier score - in the respective validation sets of the CIFAR-10, CIFAR-100 and AID datasets - comparing the eight different augmentation policies with both our methods (_M-ATTA_ and _V-ATTA_). For each case, the best result is represented in bold.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c} \hline & \multicolumn{4}{c|}{CIFAR-10} & \multicolumn{4}{c|}{CIFAR-100} & \multicolumn{4}{c}{AID} \\ & Brier & ECE & mc-Brier & NLL & Brier & ECE & mc-Brier & NLL & Brier & ECE & mc-Brier & NLL \\ \hline \hline Vanilla & 0.0438 & 0.0367 & 0.0929 & 0.2356 & 0.1599 & 0.1325 & 0.4055 & 1.2132 & 0.0481 & 0.0306 & 0.1068 & 0.3046 \\ \hline \hline T. Scaling [7] & 0.0390 & 0.0119 & 0.0860 & 0.1909 & 0.1364 & 0.0241 & 0.3766 & 1.0208 & 0.0493 & 0.0300 & 0.1073 & 0.2621 \\ I. Regression [30] & 0.0396 & **0.0069** & 0.0870 & 0.1918 & 0.1352 & 0.0192 & 0.3792 & 1.0471 & 0.0476 & 0.0158 & 0.1074 & 0.2725 \\ H. Binning [29] & 0.0462 & 0.0112 & 0.0967 & 0.2613 & 0.1412 & **0.0135** & 0.3907 & 1.4232 & 0.0467 & **0.0138** & 0.1083 & 0.3376 \\ \hline \hline
**M-ATTA** (ours) & 0.0371 & 0.0090 & 0.0813 & 0.1800 & 0.1350 & 0.0274 & 0.3730 & 1.0042 & **0.0263** & 0.0232 & **0.0632** & **0.1300** \\
**V-ATTA** (ours) & **0.0358** & 0.0130 & **0.0793** & **0.1705** & **0.1270** & 0.0187 & **0.3584** & **0.9565** & 0.0278 & 0.0221 & 0.0645 & 0.1365 \\ \hline \end{tabular}
\end{table} TABLE III: Results with respect to four different performance metrics (Brier score, ECE, multi-class Brier score and NLL) - in the respective test sets of the CIFAR-10, CIFAR-100 and AID datasets - comparing the performance of our methods (_M-ATTA_ and _V-ATTA_) against four different baselines (Vanilla, Temperature Scaling, Isotonic Regression and Histogram Binning). For each case, the best result is represented in bold.
it only considers the quality of the predictions associated with the true class; nevertheless, given that it is a proper scoring rule and a popular metric, it is common to consider the NLL alongside the Brier score and ECE when evaluating uncertainty calibration methods [13, 20, 24].
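For reference, minimal NumPy implementations of the evaluation metrics (our code; the exact per-sample definition of the "classical" Brier variant, based on top-class confidence versus correctness, is our assumption):

```python
import numpy as np

def mc_brier(p, y):
    """Multi-class Brier score: mean squared error between the probability
    vector and the one-hot label, averaged over samples."""
    onehot = np.zeros_like(p)
    onehot[np.arange(len(y)), y] = 1.0
    return np.mean(np.sum((p - onehot) ** 2, axis=1))

def brier(p, y):
    """'Classical' Brier variant (our assumption): squared error between the
    top-class confidence and the 0/1 correctness of the prediction."""
    conf = p.max(axis=1)
    correct = (p.argmax(axis=1) == y).astype(float)
    return np.mean((conf - correct) ** 2)

def ece(p, y, n_bins=15):
    """Expected Calibration Error with equal-width confidence bins."""
    conf = p.max(axis=1)
    correct = (p.argmax(axis=1) == y).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():  # weight each bin's |accuracy - confidence| gap by its mass
            total += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return total
```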
We start our analysis of the results in Table III by comparing all the uncertainty calibration methods against the _vanilla_ approach. The first main observation in this context is that our methods (_M/V-ATTA_) are the only ones that consistently outperform the _vanilla_ approach, presenting better results on all evaluation metrics and all three datasets. In contrast, _temperature scaling_ worsens performance on the AID dataset (Brier score, mc-Brier score), _isotonic regression_ also worsens performance on the AID dataset (Brier score), and _histogram binning_ worsens performance on the CIFAR-10 (Brier score, mc-Brier score, NLL), CIFAR-100 (NLL) and AID (mc-Brier score, NLL) datasets.
Focusing now on the best results for each specific scenario, we observe that in all three datasets, one of our methods (_M-ATTA_ or _V-ATTA_) is the best performing method in terms of Brier score, mc-Brier score and NLL. Focusing on these three evaluation metrics: in both the CIFAR-10 and CIFAR-100 datasets, _V-ATTA_ is consistently the best performing method, while _M-ATTA_ is consistently the second-best; in the AID dataset, _M-ATTA_ is consistently the best performing method, while _V-ATTA_ is consistently the second-best. On the other hand, when considering the ECE, _isotonic regression_ is the best performing method in the CIFAR-10 dataset, while _histogram binning_ is the best performing method in the remaining two datasets (these methods are actually biased towards ECE evaluation, since they are binning-based, which can explain their particularly good behaviour on this metric); still in this context, _M-ATTA_ is the second-best performing method in CIFAR-10, and _V-ATTA_ is the second-best performing method in CIFAR-100.
Some additional observations can be derived from Table III: _histogram binning_ seems to be an unreliable uncertainty calibration method, given how it worsens multiple evaluation metrics across different datasets; the AID dataset is an interesting case study, where all three post-hoc calibration baselines struggle to achieve consistent results; and the empirical evidence presented suggests that _M-ATTA_ and _V-ATTA_ are the most consistent and generally best performing methods.
Finally, while _M-ATTA_ is always better in terms of Brier score on the validation sets (see Subsection 5.1), this is not the case on the respective test sets. This behavior suggests that some over-fitting occurs in the optimization process of _M-ATTA_. This is not surprising, since _M-ATTA_ has a larger number of parameters than _V-ATTA_ (proportional to the number of classes in the respective dataset), making it more prone to over-fitting.
### _Validation size experiments_
In the previous subsection we observed that, when considering the Brier score, mc-Brier score and NLL, _V-ATTA_ is the best performing method on CIFAR-10 and CIFAR-100 (two datasets with similar characteristics), while _M-ATTA_ is the best performing method on the AID dataset. While this is possibly explained by the different nature of the AID dataset (in comparison to CIFAR-10/100), we explore the possibility that the different behavior is caused by different proportions between the validation and test sets. As such, this subsection is centered on experimenting with the ratio between the validation and the test set on the CIFAR-10 and CIFAR-100
Fig. 1: Comparing the Brier and mc-Brier scores obtained with _M-ATTA_ and _V-ATTA_, with different validation/test set ratios, evaluated in the respective test sets of both CIFAR-10 and CIFAR-100 datasets.
datasets. With these experiments we also try to reduce the over-fitting phenomena in the _M-ATTA_ method, since we are increasing the size of the validation sets.
The results are shown in Figure 1. In these experiments, both _M-ATTA_ and _V-ATTA_ are optimized with different validation/test set size ratios, beginning with 1000 validation samples (as in Subsection 5.2) and iteratively adding 500 samples (while subtracting them from the test set) until reaching 5000; this is done for both the CIFAR-10 and CIFAR-100 datasets. On CIFAR-10, we observe that _M-ATTA_ actually outperforms _V-ATTA_ when the validation set size is in the range between 2500 and 4000 (in this range the validation set size is approximately 1/3 of the test set size, just like in the AID setup of the previous experiments). On the other hand, in the CIFAR-100 experiments, although some improvement in performance is observed with _M-ATTA_, it is not enough to outperform _V-ATTA_. It is somewhat expected that the over-fitting problem is more easily addressed on CIFAR-10, since the _M-ATTA_ model used for CIFAR-100 has ten times more parameters than the one used for CIFAR-10.
## 6 Final Remarks
Based on the results presented and discussed in the previous section, we highlight some conclusions derived from this work:
* Contrarily to the baseline methods selected for comparison, our methods never produce worse results (compared to the _vanilla_ approach) with any of the selected evaluation metrics, and are always the best performing methods in terms of Brier score, mc-Brier score and NLL, independent of the dataset.
* Although generally the best performing, our methods lose to either _isotonic regression_ or _histogram binning_ when evaluated with the ECE. These baseline methods are somewhat biased towards ECE evaluation, since they leverage binning to obtain better calibrated predictions; we speculate this explains the inconsistency of these methods on the other evaluation metrics and their especially good performance with the ECE.
* _V-ATTA_ is capable of outperforming _M-ATTA_ with fewer parameters, on the CIFAR-10 and CIFAR-100 test sets, even though _M-ATTA_ is always the best method on the respective validation sets. We speculate that _M-ATTA_ suffers from some over-fitting in these scenarios; in the case of CIFAR-10, we show this can be overcome by increasing the size of the validation set (when considering the Brier and mc-Brier scores).
For future work, we identify some challenges that can be addressed: studying different solutions for the partial over-fitting phenomena found with _M-ATTA_; exploring more complex types of augmentations; adapting our methods and extending their empirical evaluation to scenarios of object detection and/or semantic segmentation.
## Acknowledgments
This work has been supported by the Portuguese Foundation for Science and Technology (FCT), via the project \(GreenBotics\) (PTDC/EEI-ROB/2459/2021), and partially by Critical Software, SA.
|
2304.05662 | State Classification via a Random-Walk-Based Quantum Neural Network | In quantum information technology, crucial information is regularly encoded
in different quantum states. To extract information, the identification of one
state from the others is inevitable. However, if the states are non-orthogonal
and unknown, this task will become awesomely tricky, especially when our
resources are also limited. Here, we introduce the quantum stochastic neural
network (QSNN), and show its capability to accomplish the binary discrimination
of quantum states. After a handful of optimizing iterations, the QSNN achieves
a success probability close to the theoretical optimum, no matter whether the
states are pure or mixed. Other than binary discrimination, the QSNN is also
applied to classify an unknown set of states into two types: entangled ones and
separable ones. After training with four samples, it can classify a number of
states with acceptable accuracy. Our results suggest that the QSNN has the
great potential to process unknown quantum states in quantum information. | Lu-Ji Wang, Jia-Yi Lin, Shengjun Wu | 2023-04-12T07:39:23Z | http://arxiv.org/abs/2304.05662v1 | # State Classification via a Random-Walk-Based Quantum Neural Network
###### Abstract
In quantum information technology, crucial information is regularly encoded in different quantum states. To extract information, the identification of one state from the others is inevitable. However, if the states are non-orthogonal and unknown, this task will become awesomely tricky, especially when our resources are also limited. Here, we introduce the quantum stochastic neural network (QSNN), and show its capability to accomplish the binary discrimination of quantum states. After a handful of optimizing iterations, the QSNN achieves a success probability close to the theoretical optimum, no matter whether the states are pure or mixed. Other than binary discrimination, the QSNN is also applied to classify an unknown set of states into two types: entangled ones and separable ones. After training with four samples, it can classify a number of states with acceptable accuracy. Our results suggest that the QSNN has great potential to process unknown quantum states in quantum information.
pacs: 03.67.-a; 03.67.Lx; 03.67.Ac; 42.50.Dv

Quantum state discrimination identifies a quantum state among an already known set of candidate states. It is a key step in many quantum technologies, as quantum states are the carriers of information in quantum computing protocols and quantum information processing. Although quantum mechanics fundamentally forbids deterministic discrimination of non-orthogonal states, probabilistic methods such as minimum error discrimination [1] and unambiguous discrimination [2; 3; 4] have been developed, and their theoretical best performances have also been found. Beyond state discrimination, which is a special case of classification, one may want to classify quantum states according to other properties of particular interest. A well studied example is the classification of states according to whether the state is entangled or not. Many strategies have been proposed for detecting entanglement, such as using the positive partial transpose (PPT) criterion [5], entanglement witnesses [6; 7; 8], and the Clauser-Horne-Shimony-Holt (CHSH) inequality [9; 10].
Most of the strategies mentioned above require complete knowledge of the quantum states before they can be applied optimally. However, exactly determining all the states is in principle forbidden, unless infinitely many copies of the states are provided. In Ref. [11], Massar and Popescu addressed this topic, known as state estimation, and proved that the optimal mean fidelity for two-level system state estimation is \(\frac{N+1}{N+2}\), where \(N\) is the number of identically prepared copies of the state. The optimal mean fidelity is less than 1 and tends towards 1 as the number of copies \(N\) tends to infinity. Then, in Ref. [12] the authors extended the result to finite dimensional quantum systems in pure states. All these results indicate that the pre-processing of quantum state estimation before carrying out state discrimination or classification will introduce extra errors. In addition, state estimation often involves a quantum information technology called state tomography, which is sometimes prohibitively expensive to perform on an unknown state [13]. In the state classification task, performing a tomography for each state also becomes practically impossible, because the number of states waiting to be classified can be extremely large. Besides the extra errors introduced by the pre-processing and the expensive tomography, there is another difficulty for these traditional strategies: even if we exactly know the quantum states to be identified or classified, the optimal measurement is hard to derive analytically when more than two states are involved [14; 15].
Facing the above difficulties, a new strategy for state discrimination and classification is desirable. Recently, there has been a rising trend of using machine learning methods to fully exploit the inherent advantages of quantum technologies [16; 17]. Among these studies, the classification problem has received a great deal of attention, and some quantum classifiers have shown excellent performance [18; 19]. In addition, as a fusion of deep learning and quantum computation, quantum neural networks [20; 21] have proved effective and almost irreplaceable in many quantum tasks, such as quantum operation implementations [22; 23; 24], many-body system simulations [25; 26], and quantum autoencoders [27; 28; 24].
There are also several successful attempts at classifying quantum states with quantum neural networks. Chen _et al._[29] utilized a hybrid approach to learn the design of a quantum circuit that distinguishes between two families of non-orthogonal quantum states and generalizes to previously unseen quantum data. This approach was further extended to noisy devices by Patterson _et al._[30] Cong _et al._[31] introduced a quantum convolutional neural network that accurately recognizes quantum phases. These neural networks can all be implemented on near-term devices. Dalla Pozza _et al._[32] proposed
a quantum network with state discrimination capability based on an open quantum system. A tightly related protocol was then experimentally implemented by Laneve _et al._[33], which provided a novel approach to multi-state discrimination.
Inspired by these works, in this Letter we introduce a new kind of quantum neural network, i.e., the quantum stochastic neural network (QSNN), to complete quantum state discrimination and classification tasks. Our network is based on quantum stochastic walks (QSWs) [34], which have been theoretically proposed and experimentally implemented to simulate the associative memory of Hopfield neural networks [35, 36]. It is therefore worthwhile to explore the power of quantum walks in building general quantum neural networks.
_Approach_. A classical random walk describes the probabilistic motion of a walker over a graph. Farhi and Gutmann [37] generalized classical random walks into quantum versions, i.e., continuous-time quantum walks (CTQWs). QSWs are further generalizations of CTQWs by introducing the decoherence, so that both classical and quantum random walks can be described by them. The state of the walker is described by a density matrix \(\rho\), and evolves as [38, 39, 40]
\[\frac{d\rho}{dt}=-i[H,\rho]+\sum_{k}\left(L_{k}\rho L_{k}^{\dagger}-\frac{1}{2 }\{L_{k}^{\dagger}L_{k},\rho\}\right), \tag{1}\]
where \(H\) is the Hamiltonian and \(L_{k}\) is a Lindblad operator.
Our QSNN is based on QSWs (implied by Eq. (1)) rather than CTQWs, because we need to introduce decoherence into the QSNN to simulate the forward propagation of probabilities in classical networks. The state of the QSNN is described by a density matrix \(\rho=\sum_{ij}\rho_{ij}|i\rangle\langle j|\) in the \(N\) dimensional Hilbert space, where \(\{|i\rangle\}_{i=0}^{N-1}\) is an orthogonal basis, and each basis state \(|i\rangle\) corresponds to a vertex of the graph. The vertices can be seen as neurons of the QSNN. As an example, a QSNN consisting of three layers and 6 neurons is shown in Fig. 1. The state of the QSNN evolves according to Eq. (1), where the Hamiltonian \(H\) and the Lindblad operators \(L_{k}\) respectively determine the coherent and decoherent parts of the dynamics. In our approach, we use the Hamiltonian \(H=\sum_{ij}h_{ij}|i\rangle\langle j|\) to characterize the coherent transmission between neurons. The coefficients \(h_{ij}\) are complex numbers with the requirement \(h_{ij}=h_{ji}^{*}\) to ensure the hermiticity of the Hamiltonian in general. However, real coefficients \(h_{ij}\in\mathbb{R}\) are sufficient for the tasks of interest here, and we do not consider the coupling of a neuron to itself. Thus, the Hamiltonian is written as
\[H=\sum_{ij}h_{ij}|i\rangle\langle j|=\sum_{i<j}h_{ij}(|i\rangle\langle j|+|j \rangle\langle i|). \tag{2}\]
The Lindblad operator used by us in Eq. (1) is
\[L_{k}\to L_{ij}=\gamma_{ij}|i\rangle\langle j|, \tag{3}\]
which simulates the decoherent (typically one-way) transmission from the \(j\)th neuron to the \(i\)th neuron. The coefficient \(\gamma_{ij}\) that characterizes the dissipation rate is a real number in general. We group the coefficients in the Hamiltonian as a single vector \(\mathbf{h}=(h_{1},h_{2},\cdots,h_{k},\cdots)\) and the coefficients in the Lindblad operators as an another vector \(\mathbf{\gamma}=(\gamma_{1},\gamma_{2},\cdots,\gamma_{k},\cdots)\). The coefficient vectors \(\mathbf{h}\) and \(\mathbf{\gamma}\) are the parameters of the QSNN that need to be optimized. In this study, the dissipation and the Hamiltonian couplings only exist between certain neurons, which is discussed in more detail in the Supplementary Material (SM) I.A. As shown in Fig. 1, the Lindblad operators (the orange lines with arrows) only connect the neurons in the adjacent layers to transfer the probability amplitude from one layer to the other, so that probability converges in the output layer. And the Hamiltonian couplings (the green dashed lines) only exist between some neurons in the input and hidden layer to perform a non-directional transmission.
The QSNN is initialized by encoding the state to be classified in the state of the input layer neurons of the network. To be specific, if we use an \(N\)-dimensional QSNN with \(n\) input layer neurons to classify \(n\)-level quantum states \(\rho\), the state of the network is initialized as \(\rho_{\text{in}}=\rho\oplus 0_{N-n,N-n}\), where \(0_{i,j}\) represents an \(i\times j\) zero matrix. Then, the network evolves according to Eq. (1) for a duration \(T\) from its initial state \(\rho_{\text{in}}\) and gives the final state \(\rho_{\text{out}}^{s}\) for the \(s\)th input. The evolution time \(T\) is considered dimensionless (it is actually of dimension \(1/\gamma\), where \(\gamma\) is a typical value of the coupling parameters \(h_{k}\) in the Hamiltonian or the dissipation rates \(\gamma_{k}\) in the Lindblad operators). The final state describes the probability that the walker is on each vertex (neuron) at time \(T\). The probability converges in the output layer due to the one-way decoherent transmission. Each output neuron corresponds to one of the labels that distinguish the different kinds of states. For the example of the QSNN shown in Fig. 1, if \(\rho_{\text{out}}^{s}=|N-2\rangle\langle N-2|\) (\(\rho_{\text{out}}^{s}=|N-1\rangle\langle N-1|\)), we say the unknown quantum state belongs to class 1 (class 2). Hidden layers should be set according to the task, and a single hidden layer is sufficient for the tasks of interest here [details in SM I.B].
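To make this forward pass concrete, here is a minimal NumPy sketch for the six-neuron QSNN of Fig. 1, integrating the QSW master equation (1) with a simple Euler scheme. This is our illustration only: the coupling values \(h_{ij}\), the dissipation rates \(\gamma_{ij}\), the connectivity and the evolution time are placeholders; in the actual model they are the trainable quantities.

```python
import numpy as np

N = 6  # 2 input + 2 hidden + 2 output neurons, as in Fig. 1

def lindblad(i, j, gamma):
    """L_ij = gamma |i><j|: decoherent (one-way) transfer from neuron j to i."""
    L = np.zeros((N, N), dtype=complex)
    L[i, j] = gamma
    return L

def evolve(rho, H, Ls, T=10.0, dt=0.01):
    """Euler integration of the quantum stochastic walk, Eq. (1)."""
    for _ in range(int(T / dt)):
        coherent = -1j * (H @ rho - rho @ H)
        dissip = sum(L @ rho @ L.conj().T
                     - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
                     for L in Ls)
        rho = rho + dt * (coherent + dissip)
    return rho

# Hamiltonian couplings between input (0,1) and hidden (2,3) neurons, Eq. (2)
H = np.zeros((N, N), dtype=complex)
for i, j, h in [(0, 2, 0.5), (0, 3, 0.5), (1, 2, 0.5), (1, 3, 0.5)]:
    H[i, j] = H[j, i] = h
# One-way Lindblad transfers toward the output layer, Eq. (3)
Ls = [lindblad(2, 0, 1.0), lindblad(3, 1, 1.0),   # input  -> hidden
      lindblad(4, 2, 1.0), lindblad(5, 3, 1.0)]   # hidden -> output

# Encode a 2-level state |psi> = cos(theta)|0> + sin(theta)|1> in the input layer
theta = np.pi / 6
psi = np.array([np.cos(theta), np.sin(theta)])
rho_in = np.zeros((N, N), dtype=complex)
rho_in[:2, :2] = np.outer(psi, psi.conj())        # rho_in = rho ⊕ 0

rho_out = evolve(rho_in, H, Ls)
p_state1 = rho_out[N - 2, N - 2].real             # population of output neuron |4>
p_state2 = rho_out[N - 1, N - 1].real             # population of output neuron |5>
```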
In the training process, we first draw an already labeled sample state \(\rho_{\text{in}}^{s}\) together with its label \(l^{s}\in\{N-2,N-1\}\) from a training set \(\{(\rho_{\text{in}}^{s},l^{s})\}_{s=1}^{M}\), where \(M\) is the number of samples in the training set. Then, performing a projective measurement \(\Omega^{s}=|l^{s}\rangle\langle l^{s}|\) on the final state \(\rho_{\text{out}}^{s}\) of the network gives the success probability
\[P_{\text{N}}^{s}=\text{Tr}(\rho_{\text{out}}^{s}\Omega^{s}) \tag{4}\]
that the QSNN gives the desired output corresponding to the \(s\)th sample. We can design the specific forms of the loss function
\[\text{Loss}=\text{Loss}\left(\mathbf{h},\mathbf{\gamma},\{(\rho_{\text{in}}^{s},l^{s}) \}\right)=\sum_{s}w_{s}f(\rho_{\text{out}}^{s},\Omega^{s}) \tag{5}\]
according to different tasks, where \(w_{s}\) is a weight on the sample \(s\), and \(f\) is the sample-wise loss. The loss function
should be designed such that minimizing it (by gradient descent, as detailed in SM II) leads the QSNN to classify states correctly.
_Results - Quantum State Binary Discrimination._ The general processes of minimum error (ME) discrimination and our QSNN discrimination are shown in Fig. 2(a) and Fig. 2(b), respectively. Consider an ensemble of quantum states prepared by two devices. The two kinds of quantum states in the ensemble are unknown, i.e., not mathematically well-defined. We randomly pick one of the states from the ensemble. The task, called quantum state binary discrimination, is to determine which kind of state we have picked. The quantum states \(\rho_{1}\) and \(\rho_{2}\) are prepared with prior probabilities \(w_{1}\) and \(w_{2}\) (\(w_{1}+w_{2}=1\)), respectively.
As shown in Fig. 2(a), before discriminating two unknown states in the ensemble with ME discrimination, a pre-processing of quantum state tomography is often needed, so that the appropriate measurement can be set up. It is not trivial to find the ME measurement set-up in general cases, but for the quantum state binary discrimination, the minimum error probability is given analytically by Helstrom [1] and Holevo [41], which is called the Helstrom bound:
\[P_{\rm H}^{\rm error}=\frac{1}{2}(1-{\rm Tr}|w_{2}\rho_{2}-w_{1}\rho_{1}|). \tag{6}\]
Then, the success probability of the ME discrimination is given as
\[P_{\rm H}=1-P_{\rm H}^{\rm error}. \tag{7}\]
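The Helstrom bound of Eqs. (6)-(7) can be evaluated directly from the two density matrices; a short NumPy sketch (our code), using the fact that the trace norm of a Hermitian matrix is the sum of the absolute values of its eigenvalues:

```python
import numpy as np

def helstrom_success(rho1, rho2, w1=0.5, w2=0.5):
    """Optimal success probability of minimum error discrimination, Eq. (7)."""
    diff = w2 * rho2 - w1 * rho1                         # Hermitian matrix
    trace_norm = np.abs(np.linalg.eigvalsh(diff)).sum()  # Tr|w2*rho2 - w1*rho1|
    p_error = 0.5 * (1.0 - trace_norm)                   # Helstrom bound, Eq. (6)
    return 1.0 - p_error

# Example: two pure states cos(t)|0> + sin(t)|1> with t = 0 and t = pi/6;
# with equal priors this gives P_H = (1 + sin(pi/6))/2 = 0.75.
t = np.pi / 6
psi0 = np.array([1.0, 0.0])
psit = np.array([np.cos(t), np.sin(t)])
P_H = helstrom_success(np.outer(psi0, psi0), np.outer(psit, psit))
```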
However, as shown in Fig. 2(b), tomography is not required before our QSNN discrimination. The QSNN can discriminate quantum states by learning from labeled samples, without knowing the mathematical expressions of these states. In order to complete the quantum state binary discrimination task, the QSNN is constructed from 6 neurons divided into three layers, as shown in Fig. 1. Some discussion of the topology of the network is given in SM I. There are only two sample states in the training set, namely \(\rho_{1}\) and \(\rho_{2}\). They are labeled \(state\) 1 and \(state\) 2, and fed into the QSNN with prior probabilities \(w_{1}\) and \(w_{2}\), respectively. Thus, according to Eq. (4), the average success probability that the QSNN correctly gives the labels of the two input states is given as
\[P_{\rm N}=\sum_{s=1}^{2}w_{s}{\rm Tr}(\rho_{\rm out}^{s}\Omega^{s}). \tag{8}\]
Then, the loss function is defined as the distance between 1 and the average success probability, that is,
\[{\rm Loss}=1-P_{\rm N}. \tag{9}\]
In our simulation, we choose \(w_{1}=w_{2}=0.5\). To show the performance of our approach, we compare the
Figure 1: Graph representation of the quantum stochastic neural network (QSNN) used for quantum state binary discrimination. The state of the QSNN can be represented by a density matrix in the \(N=6\) dimensional Hilbert space constituted by an orthogonal basis \(\{|i\rangle\}_{i=0}^{5}\). The state of the two neurons of the input layer is initialized as the state of a 2-level input quantum state. Two neurons in the output layer correspond to the 2 labels \(state1\) and \(state2\) that distinguish two different states. The vertices are decoherently connected (orange lines with arrows) by Lindblad operators and coherently connected (green dashed lines) by Hamiltonian elements.
Figure 2: The flow charts of quantum state binary discrimination task. Black and white dots respectively represent the unknown quantum states from two different devices, and they become indistinguishable (gray dots) when mixed in an ensemble. The task is that we randomly pick one state from the ensemble and determine which device the state is coming from. (a) The minimum error discrimination. Before the discrimination, a pre-processing of quantum state tomography is often needed to get the complete information of the unknown states. (b) The QSNN discrimination. There is no need to obtain any prior information of quantum states through tomography in our approach. The network can discriminate quantum states after being trained with some labeled samples.
success probability of the QSNN (\(P_{\rm N}\)) and that of the ME discrimination (\(P_{\rm H}\)).
First, without loss of generality, we consider the quantum states
\[|\psi_{\theta}\rangle=\cos\theta|0\rangle+\sin\theta|1\rangle \tag{10}\]
in the real vector space. Here \(\{|0\rangle,|1\rangle\}\) constitutes an orthogonal basis. The specific expressions of the quantum states selected here are intended only to mathematically evaluate the performance of our model. The states we want to discriminate are \(|\psi_{0}\rangle\) and \(|\psi_{\theta}\rangle\), so our training set is \(\{(|\psi_{0}\rangle,state1),(|\psi_{\theta}\rangle,state2)\}\). In our simulation, we train the QSNN with different training sets separately, where \(\theta=0,\frac{\pi}{6},\frac{2\pi}{6},\frac{3\pi}{6},\cdots,\frac{11\pi}{6}\). The average success probability of the QSNN (blue curve) for the discrimination of 12 quantum state pairs (\(|\psi_{0}\rangle\), \(|\psi_{\theta}\rangle\)) increases with the number of iterations used in the training procedure, as shown in Fig. 3(a). Our QSNN approximately achieves the optimal theoretical bound on the success probability, known as the Helstrom bound (red dashed line), after about 30 iterations. The optimal success probability of the QSNN for each training set with different \(\theta\) is shown as a blue dot in Fig. 3(b). Each of them approximates the Helstrom bound well.
Second, we consider the quantum states
\[|\psi_{\varphi}\rangle=\frac{\sqrt{2}}{2}(|0\rangle+e^{i\varphi}|1\rangle), \tag{11}\]
which are in a complex space. Similarly, we train the QSNN to discriminate states \(|\psi_{0}\rangle\) and \(|\psi_{\varphi}\rangle\) with the different training sets \(\{(|\psi_{0}\rangle,state1),(|\psi_{\varphi}\rangle,state2)\}\), which differ in \(\varphi=0,\frac{\pi}{6},\frac{2\pi}{6},\frac{3\pi}{6},\cdots,\frac{11\pi}{6}\). The results shown in Fig. 3(c) are again the average success probabilities over all training sets. We can see a certain gap between the success probability of the QSNN and the Helstrom bound. Fig. 3(d) indicates that the trained QSNN does not perform as well in discriminating quantum states with complex amplitudes. Even so, the discrimination result given by the optimized QSNN is still informative, because it achieves a success probability of no less than 91% of the theoretical optimum.
In summary, the QSNN can complete binary discrimination of unknown quantum states without tomography. It can be trained to its optimum after a few iterations and used to discriminate the states with a single detection. The optimized QSNN can achieve a success probability close to the Helstrom bound for both pure and mixed state discrimination (displayed in SM III). Our model also works when the dimension of the quantum states to be discriminated is greater than 2, and we show the simulation result for the 3-qubit case in Fig. S8 of the SM. In addition, our approach is theoretically not limited by the number of states to be discriminated, as shown in SM IV. In contrast, with traditional strategies the optimal measurement is in general hard to derive analytically if the number of states is greater than two.
_Results - Classification of Entangled and Separable States._ Entanglement is a primary feature of quantum mechanics, and it is also considered a resource. As the carrier of entanglement, entangled quantum states are costly to produce. Therefore, determining whether a given state is entangled or not is an important topic in quantum information theory. Now, some unknown separable states and entangled quantum states are mixed in an ensemble. We only know that they are prepared by two devices with equal prior probability, without knowing their mathematical expressions. The task we consider in the following is to determine which category each quantum state belongs to. This is a classification task.
Several traditional strategies have been proposed to complete this task. For example, the positive partial transpose (PPT) criterion [5] is both sufficient and necessary for 2-qubit entanglement detection with the requirement of quantum tomography (see Fig. S9 of the SM). The CHSH inequality [9; 10] is also an attractive strategy because it only needs partial information of the quantum states (see Fig. S9 of SM). However, on the one hand, multiple measurements are still required. On the other
Figure 3: (a) The blue curve represents the average success probability of the QSNN for the discrimination of different quantum state pairs (\(|\psi_{0}\rangle,|\psi_{\theta}\rangle\)) with real amplitudes. The red dashed line shows the average optimal theoretical value given by minimum error discrimination. The results are given by averaging the success probabilities corresponding to all state pairs with different values of \(\theta\). The error bars are plotted from the variances. The average success probability rises rapidly and, after about 30 iterations, achieves the Helstrom bound. (b) The optimal success probabilities of the QSNN in discriminating \(|\psi_{0}\rangle\) and \(|\psi_{\theta}\rangle\) with each \(\theta\) are drawn as blue dots. The Helstrom bound is shown as the red dashed line. (c) and (d) show results analogous to (a) and (b), respectively, obtained by replacing the states represented by Eq. (10) with those represented by Eq. (11). The QSNN achieves a success probability of no less than 91% of the theoretical optimum.
hand, using fixed measurements, the CHSH inequality cannot detect all entangled states in a set of states. To optimize the accuracy of entanglement detection using the CHSH inequality, classical artificial neural networks combined with machine learning techniques have been proposed in Refs. [42; 43]. They construct a quantum state classifier that achieves near-unity classification accuracy, but multiple measurements are still required. To avoid multiple measurements or state tomography, we train a QSNN to be a quantum classifier.
In order to evaluate our approach more clearly, we select an unknown set of Werner-like states in our simulation. Each state is of the form
\[\rho=p|\Psi\rangle\langle\Psi|+(1-p)\frac{I}{4}, \tag{12}\]
where \(|\Psi\rangle=(U_{1}\otimes U_{2})|\psi_{+}\rangle\) and the real coefficient \(p\in[0,1]\). \(U_{1}\) and \(U_{2}\) are two unknown local unitaries acting on the Bell state \(|\psi_{+}\rangle=\frac{\sqrt{2}}{2}(|01\rangle+|10\rangle)\). The state \(\rho\) can be regarded as the convex combination of an unknown maximally-entangled state and the maximally mixed state \(\frac{I}{4}\). The quantum state \(\rho\) is separable when \(p\leq\frac{1}{3}\) and entangled otherwise.
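To make the test set concrete, the following sketch (ours) builds the Werner-like states of Eq. (12) and labels them by the PPT criterion, which is exact for two qubits; the choice of \(U_{1}\), \(U_{2}\) and the sampled values of \(p\) are placeholders.

```python
import numpy as np

def werner_like(p, U1, U2):
    """rho = p |Psi><Psi| + (1-p) I/4, with |Psi> = (U1 ⊗ U2)|psi_+>, Eq. (12)."""
    psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)     # (|01> + |10>)/sqrt(2)
    Psi = np.kron(U1, U2) @ psi_plus
    return p * np.outer(Psi, Psi.conj()) + (1 - p) * np.eye(4) / 4

def is_entangled_ppt(rho):
    """PPT criterion: a 2-qubit state is entangled iff its partial transpose
    (over the second qubit) has a negative eigenvalue."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(rho_pt).min() < -1e-12

sz, I2 = np.diag([1.0, -1.0]), np.eye(2)               # U1 = sigma_z, U2 = I
for p in [0.2, 0.4, 0.8]:                              # p <= 1/3: separable
    print(p, is_entangled_ppt(werner_like(p, sz, I2)))
```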
In our simulation, there are four neurons in the input layer and four in the hidden layer. Two neurons in the output layer correspond to two labels \(l^{s}\): separable \(S\) and entangled \(E\). To be more specific, if the final state \(\rho_{\text{out}}\) is \(|S\rangle\langle S|\) (\(|E\rangle\langle E|\)), the trained QSNN indicates that the input quantum state is separable (entangled). Performing a corresponding measurement \(\Omega^{s}=|l^{s}\rangle\langle l^{s}|\) on the final state \(\rho_{\text{out}}^{s}\) gives the probability that the QSNN gives the label of the \(s\)th input state correctly. The loss function is defined as the mean error probability over all \(M\) training samples
\[\text{Loss}=1-\frac{1}{M}\sum_{s=1}^{M}\text{Tr}(\rho_{\text{out}}^{s}\Omega^{ s}). \tag{13}\]
For the example of \(U_{1}=\sigma_{z}\), \(U_{2}=I\), we numerically give the classification results. We only use \(M=4\) training samples, with \(p\in\{0,0.2,0.4,0.8\}\), while we use 49 states with \(p\in\{0.02\cdot n\}_{n=1}^{49}\) to evaluate the performance of the trained QSNN in the simulation. The classification confusion matrix for these 49 states is shown in Fig. 4(a). The trained QSNN identifies separable states successfully with probability 0.62 and identifies entangled states successfully with probability 0.75. The probability of one kind of misclassification, i.e., identifying a separable state as an entangled one, is 0.38; the probability of the other kind of misclassification is 0.25.
The probabilities that each quantum state with a specific value of \(p\in\{0.02\cdot n\}_{n=1}^{49}\) is identified as a separable state (blue bar) or an entangled state (red bar) are shown in Fig. 4(b). When the input states are entangled, at \(1/3<p\leq 1\) (light red region), the red bars are always longer than the blue bars. This means that the success probabilities are always higher than the error probabilities, which is also true when \(0\leq p\leq 1/3\) (light blue region).
In summary, when given an unknown quantum state set, the QSNN can be trained with several labeled states and used to classify the others. Although the QSNN becomes confused about the states near the boundary \(p=1/3\), the success probability is always higher than the error probability for each state. Moreover, the trained QSNN can give the classification result using only a single detection for each state. Some details about the parameters and hyperparameters of the QSNN are given in SM V.
In this work, we have introduced the quantum stochastic neural network (QSNN), based on quantum stochastic walks. When combined with machine learning, it can be trained to become a quantum state classifier.
If one wants to classify an ensemble containing only two unknown quantum states, the classification task
Figure 4: (a) The classification confusion matrix of the QSNN. The numbers in it are the mean success and error probabilities of the QSNN in classifying the 49 samples represented by Eq. (12) with \(p\in\{0.02\cdot n\}_{n=1}^{49}\). The QSNN performs better when the input states are entangled. (b) The classification result of each state with a specific value of \(p\). The bar chart shows the probabilities that the trained network identifies the 49 input states as entangled states (red bars) and separable states (blue bars). The whole chart is divided by \(p=\frac{1}{3}\) into the light blue area (the state is separable) and the light red area (the state is entangled). The success probabilities are always higher than the error probabilities.
becomes binary discrimination. The QSNN does not need any information about the candidate states in advance, so it avoids the experimentally expensive quantum state tomography used in traditional minimum error discrimination. We have benchmarked the QSNN's performance on quantum state binary discrimination tasks with numerical simulations. The success probability of the QSNN turned out to be very close to the theoretical optimum, i.e., the Helstrom bound.
When given an unknown ensemble containing states from two different families, the QSNN can be trained to classify them into those two families. We show an example of classifying Werner-like states according to whether they are entangled or not. For all those states, the trained QSNN is always more likely to classify each of them into the correct family with a single detection. The optimal performance of the QSNN can be achieved with only four training samples, while avoiding the state tomography and multiple measurements required by other classification methods such as the PPT criterion or the CHSH inequality. This also suggests that our approach may reduce resource consumption compared to traditional methods.
All the present results show the potential of the QSNN as a general-purpose quantum classifier, which may be helpful in various quantum machine learning models, such as quantum generative adversarial networks [44].
_Acknowledgments._ This work is supported by the National Key R&D Program of China (Grant No. 2017YFA0303703) and the National Natural Science Foundation of China (Grant No. 12175104).
|
2304.05676 | Mathematical derivation of wave propagation properties in hierarchical
neural networks with predictive coding feedback dynamics | Sensory perception (e.g. vision) relies on a hierarchy of cortical areas, in
which neural activity propagates in both directions, to convey information not
only about sensory inputs but also about cognitive states, expectations and
predictions. At the macroscopic scale, neurophysiological experiments have
described the corresponding neural signals as both forward and
backward-travelling waves, sometimes with characteristic oscillatory
signatures. It remains unclear, however, how such activity patterns relate to
specific functional properties of the perceptual apparatus. Here, we present a
mathematical framework, inspired by neural network models of predictive coding,
to systematically investigate neural dynamics in a hierarchical perceptual
system. We show that stability of the system can be systematically derived from
the values of hyper-parameters controlling the different signals (related to
bottom-up inputs, top-down prediction and error correction). Similarly, it is
possible to determine in which direction, and at what speed neural activity
propagates in the system. Different neural assemblies (reflecting distinct
eigenvectors of the connectivity matrices) can simultaneously and independently
display different properties in terms of stability, propagation speed or
direction. We also derive continuous-limit versions of the system, both in time
and in neural space. Finally, we analyze the possible influence of transmission
delays between layers, and reveal the emergence of oscillations at biologically
plausible frequencies. | Grégory Faye, Guilhem Fouilhé, Rufin VanRullen | 2023-04-12T07:53:22Z | http://arxiv.org/abs/2304.05676v1 | # Mathematical derivation of wave propagation properties
###### Abstract
Sensory perception (e.g. vision) relies on a hierarchy of cortical areas, in which neural activity propagates in both directions, to convey information not only about sensory inputs but also about cognitive states, expectations and predictions. At the macroscopic scale, neurophysiological experiments have described the corresponding neural signals as both forward and backward-travelling waves, sometimes with characteristic oscillatory signatures. It remains unclear, however, how such activity patterns relate to specific functional properties of the perceptual apparatus. Here, we present a mathematical framework, inspired by neural network models of predictive coding, to systematically investigate neural dynamics in a hierarchical perceptual system. We show that stability of the system can be systematically derived from the values of hyper-parameters controlling the different signals (related to bottom-up inputs, top-down prediction and error correction). Similarly, it is possible to determine in which direction, and at what speed neural activity propagates in the system. Different neural assemblies (reflecting distinct eigenvectors of the connectivity matrices) can simultaneously and independently display different properties in terms of stability, propagation speed or direction. We also derive continuous-limit versions of the system, both in time and in neural space. Finally, we analyze the possible influence of transmission delays between layers, and reveal the emergence of oscillations at biologically plausible frequencies.
## 1 Introduction
The brain's anatomy is characterized by a strongly hierarchical architecture, with a succession of brain regions that process increasingly complex information. This functional strategy is mirrored by the succession of processing layers found in modern deep neural networks (and for this reason, we use the term "layer" in this work to denote one particular brain region in this hierarchy, rather than the laminar organization of cortex that is well-known to neuroscientists). The hierarchical structure is especially obvious in the organization of the visual system [15], starting from the retina through primary visual cortex (V1) and various extra-striate regions, and culminating in temporal lobe regions for object recognition and in parietal regions for motion and location processing.
In this hierarchy of brain regions, the flow of information is clearly bidirectional: there are comparable numbers of fibers sending neural signals down (from higher to lower levels of the hierarchy) as there are going up [8]. While the bottom-up or "feed-forward" propagation of information is easily understood as integration of sensory input (and matches the functional structure found in artificial deep learning networks), the opposite feedback direction of propagation is more mysterious, and its functional role remains unknown.
Predictive coding is one dominant theory to explain the function of cortical feedback [27]. Briefly, the theory states that each layer in the cortical hierarchy generates predictions about what caused its own activity; these predictions are sent to the immediately preceding layer, where a prediction error can be computed and carried forward to the original layer, which can then iteratively update its prediction. Over time (and as long as the sensory input does not change), the system settles into a state where top-down predictions agree with bottom-up inputs, and no prediction error is transmitted. Like any large-scale theory of brain function, the predictive coding theory is heavily debated [22]. But macroscopic (EEG) experiments have revealed characteristic propagation signatures that could be hallmarks of predictive coding. For instance, Alamia and VanRullen [1] showed evidence for alpha-band (7-15Hz) oscillatory travelling waves propagating in both directions (feed-forward and feedback); the oscillation frequency and dynamics were compatible with a simplistic hierarchical model that included a biologically plausible time delay for transmitting signals between layers, and were also confirmed by a rudimentary mathematical model. In another study, Bastos et al [4, 5] found that beta (15-30Hz) and gamma-frequency (30-100Hz) oscillations could reflect, respectively, the prediction and prediction error signals carried by backward and forward connections.
More recently, predictive coding has been explored in the context of deep neural networks [9, 25, 32]. For instance, Choksi et al [9] augmented existing deep convolutional networks with feedback connections and a mechanism for computing and minimizing prediction errors, and found that the augmented system displayed more robust perception, better aligned with human abilities. In another study, Pang et al [25] used a similar system and reported the emergence of illusory contour perception comparable to what humans (but not standard deep neural networks) would typically perceive.
While the concept of predictive coding is potentially fundamental for understanding brain function, and its large-scale implementation in deep artificial neural networks provides empirical support for its potential functional relevance, there is a gap of theoretical knowledge about the type of brain activity that predictive coding could engender, and the potential conditions for its stability. Here, we propose a mathematical framework where a potentially infinite number of neuronal layers exchange signals in both directions according to predictive coding principles. The stable propagation of information in such a system can be explored analytically as a function of its initial state, its internal parameters (controlling the strength of
inputs, predictions, and error signals) and its connectivity (e.g. convolution kernels). Our approach considers both a discrete approximation of the system and continuous abstractions. We demonstrate the practical relevance of our findings by applying them to a ring model of orientation processing. Finally, we extend our analytical framework to a more biologically plausible situation with communication delays between successive layers. This gives rise to oscillatory signals resembling those observed in the brain.
## 2 Model description
Our initial model is inspired by the generic formulation of predictive coding proposed in the context of deep learning models by Choksi et al. [9]. This formulation considers different update terms at each time step: a feed-forward input, a memory term, and feedback and feed-forward prediction error corrections. By modulating the hyper-parameters controlling each of these terms, the model can be reconciled with different formulations of predictive coding (for instance, the Rao and Ballard model [27] by setting the feed-forward input term to zero) or other models of hierarchical brain function (e.g. similar to Heeger's model [19] by setting the feed-forward error correction to zero). Indeed, our objective is precisely to characterize the propagation dynamics inside the network as a function of the relative values of these hyper-parameters, which in turn alter the model's functionality.
We consider the following recurrence equation where \(\mathcal{E}_{j}^{n}\in\mathbb{R}^{d}\) represents an encoder at step \(n\) and layer \(j\)
\[\mathcal{E}_{j}^{n+1}=\beta\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}+(1-\beta) \mathcal{E}_{j}^{n}-\alpha\mathcal{F}_{j-1}^{n}-\lambda\mathcal{B}_{j}^{n}, \quad j=1,\cdots,J-1\,,\]
where \(\mathcal{W}^{f}\in\mathscr{M}_{d}(\mathbb{R})\) is a \(d\times d\) square matrix representing the weights of feedforward connections which we assume to be the same for each layer such that \(\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}\) models an instantaneous feedforward drive from layer \(j-1\) to layer \(j\), controlled by hyper-parameter \(\beta\). The term \(\mathcal{F}_{j-1}^{n}\) encodes a feedforward error correction process, controlled by hyper-parameter \(\alpha\), where the reconstruction error \(\mathcal{R}_{j-1}^{n}\) at layer \(j-1\), defined as the square error between the representation \(\mathcal{E}_{j-1}^{n}\) and the predicted reconstruction \(\mathcal{W}^{b}\mathcal{E}_{j}^{n}\), that is
\[\mathcal{R}_{j-1}^{n}:=\frac{1}{2}\|\mathcal{E}_{j-1}^{n}-\mathcal{W}^{b} \mathcal{E}_{j}^{n}\|^{2},\]
propagates to the layer \(j\) to update its representation. Here, \(\mathcal{W}^{b}\in\mathscr{M}_{d}(\mathbb{R})\) is a \(d\times d\) square matrix representing the weights of feedback connections which we assume to be the same for each layer. Following [1, 9, 27, 32], the contribution \(\mathcal{F}_{j-1}^{n}\) is then taken to be the gradient of \(\mathcal{R}_{j-1}^{n}\) with respect to \(\mathcal{E}_{j}^{n}\), that is
\[\mathcal{F}_{j-1}^{n}=\nabla\mathcal{R}_{j-1}^{n}=-(\mathcal{W}^{b})^{\text{ t}}\mathcal{E}_{j-1}^{n}+(\mathcal{W}^{b})^{\text{t}}\mathcal{W}^{b}\mathcal{E}_{j}^ {n}.\]
On the other hand, \(\mathcal{B}_{j}^{n}\) incorporates a top-down prediction to update the representation at layer \(j\). This term thus reflects a feedback error correction process, controlled by hyper-parameter \(\lambda\). Similar to the feedforward process, \(\mathcal{B}_{j}^{n}\) is defined as the the gradient of \(\mathcal{R}_{j}^{n}\) with respect to \(\mathcal{E}_{j}^{n}\), that is
\[\mathcal{B}_{j}^{n}=\nabla\mathcal{R}_{j}^{n}=-\mathcal{W}^{b}\mathcal{E}_{j +1}^{n}+\mathcal{E}_{j}^{n}.\]
As a consequence, our model reads
\[\mathcal{E}_{j}^{n+1}=\beta\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}+\alpha( \mathcal{W}^{b})^{\text{t}}\mathcal{E}_{j-1}^{n}+\left[(1-\beta-\lambda) \mathbf{I}_{d}-\alpha(\mathcal{W}^{b})^{\text{t}}\mathcal{W}^{b}\right] \mathcal{E}_{j}^{n}+\lambda\mathcal{W}^{b}\mathcal{E}_{j+1}^{n}, \tag{2.1}\]
for each \(j=1,\cdots,J-1\) and \(n\geq 0\) where we denoted \(\mathbf{I}_{d}\) the identity matrix of \(\mathscr{M}_{d}(\mathbb{R})\). We supplement the recurrence equation (2.1) with the following boundary conditions at layer \(j=0\) and layer \(j=J\). First, at layer \(j=0\), we impose
\[\mathcal{E}_{0}^{n}=\mathcal{S}_{0}^{n},\quad n\geq 0, \tag{2.2}\]
where \(\mathcal{S}_{0}^{n}\in\mathbb{R}^{d}\) is a given source term, which can be understood as the network's constant visual input. At the final layer \(j=J\), there is no possibility of incoming top-down signal, and thus one gets
\[\mathcal{E}_{J}^{n+1}=\beta\mathcal{W}^{f}\mathcal{E}_{J-1}^{n+1}+\alpha( \mathcal{W}^{b})^{\mathsf{t}}\mathcal{E}_{J-1}^{n}+\left[(1-\beta)\mathbf{I}_ {d}-\alpha(\mathcal{W}^{b})^{\mathsf{t}}\mathcal{W}^{b}\right]\mathcal{E}_{J} ^{n},\quad n\geq 0. \tag{2.3}\]
Finally, at the initial step \(n=0\), we set
\[\mathcal{E}_{j}^{0}=\mathcal{H}_{j},\quad j=0,\cdots,J, \tag{2.4}\]
for some given initial sequence \((\mathcal{H}_{j})_{j=0,\cdots,J}\). For instance, in Choksi et al. [9], \(\mathcal{H}_{j}\) was initialized by a first feedforward pass through the system, i.e. \(\beta>0\) and \(\alpha=\lambda=0\). Throughout, we assume the following natural compatibility condition between the source terms and the initial condition, namely
\[\mathcal{S}_{0}^{0}=\mathcal{H}_{0}. \tag{2.5}\]
Regarding the hyper-parameters of the problem we assume that
\[0\leq\beta<1,\quad\text{ with }\quad 0\leq\alpha+\lambda\leq 1. \tag{2.6}\]
Our key objective is to characterize the behavior of the solutions of the above recurrence equation (2.1) as a function of the hyper-parameters and the feedforward and feedback connections matrices \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\). We would like to stay as general as possible to encompass as many situations as possible, keeping in mind that we already made strong assumptions by imposing that the weight matrices of feedforward and feedback connections are identical from one layer to another. Motivated by concrete applications, we will mainly consider matrices \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) which act as convolutions on \(\mathbb{R}^{d}\).
Figure 1: _Schematic illustration of the network structure of model (2.1) where each point represents a given neuronal layer index \(j\) (x-axis) at a particular time step \(n\) (y-axis), and the red arrows indicate the contributions leading to the update of \(\mathcal{E}_{j}^{n+1}\)._
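For readers who prefer an operational description, the update (2.1) with boundary conditions (2.2)-(2.3) can be simulated by a single forward sweep over the layers, since \(\mathcal{E}_{j}^{n+1}\) depends only on \(\mathcal{E}_{j-1}^{n+1}\) among the updated values. The following NumPy sketch is ours (function and variable names are illustrative, not taken from [9]):

```python
import numpy as np

def pc_update(E, S0, Wf, Wb, alpha, beta, lam):
    """One time step of model (2.1) with boundaries (2.2)-(2.3).

    E  : (J+1, d) array, rows are the layer activities E_j^n, j = 0..J
    S0 : (d,) array, the clamped visual input (equation (2.2))
    """
    J, d = E.shape[0] - 1, E.shape[1]
    M = (1 - beta - lam) * np.eye(d) - alpha * Wb.T @ Wb   # coefficient of E_j^n
    E_new = np.empty_like(E)
    E_new[0] = S0                                          # input layer (2.2)
    for j in range(1, J):                                  # forward sweep, eq. (2.1)
        E_new[j] = (beta * Wf @ E_new[j - 1] + alpha * Wb.T @ E[j - 1]
                    + M @ E[j] + lam * Wb @ E[j + 1])
    # top layer: no incoming top-down signal, eq. (2.3)
    E_new[J] = (beta * Wf @ E_new[J - 1] + alpha * Wb.T @ E[J - 1]
                + ((1 - beta) * np.eye(d) - alpha * Wb.T @ Wb) @ E[J])
    return E_new
```

Initializing the activities with a first feedforward pass, as in [9], then simply amounts to applying `pc_update` once with \(\alpha=\lambda=0\).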
## 3 The identity case
It turns out that we will gain much information by first treating the simplified case where \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) are both identity. That is, from now on, and throughout this section we assume that
\[\mathcal{W}^{f}=\mathcal{W}^{b}=\mathbf{I}_{d}.\]
That is, each neuron in a layer is only connected to the corresponding neuron in the immediately preceding and following layer, with unit weight in each direction. Under such a setting, the recurrence equation (2.1) reduces to a scalar equation, that is
\[e_{j}^{n+1}=\beta e_{j-1}^{n+1}+\alpha e_{j-1}^{n}+(1-\beta-\lambda-\alpha)e_{ j}^{n}+\lambda e_{j+1}^{n},\quad j=1,\cdots,J-1, \tag{3.1}\]
with this time the unknown \(e_{j}^{n}\in\mathbb{R}\), together with
\[e_{0}^{n}=s_{0}^{n},\quad n\geq 0, \tag{3.2}\]
and
\[e_{J}^{n+1}=\beta e_{J-1}^{n+1}+\alpha e_{J-1}^{n}+(1-\beta-\alpha)e_{J}^{n}, \quad n\geq 0. \tag{3.3}\]
with
\[e_{j}^{0}=h_{j},\quad j=0,\cdots,J. \tag{3.4}\]
### 3.1 Wave propagation on an infinite depth network
It will be first useful to consider the above problem set on an infinite domain and look at
\[e_{j}^{n+1}=\beta e_{j-1}^{n+1}+\alpha e_{j-1}^{n}+(1-\beta-\lambda-\alpha)e_{ j}^{n}+\lambda e_{j+1}^{n},\quad j\in\mathbb{Z}, \tag{3.5}\]
given some initial sequence
\[e_{j}^{0}=h_{j},\quad j\in\mathbb{Z}.\]
This situation has no direct equivalent in the brain, where the number of hierarchically connected layers is necessarily finite; but it is a useful mathematical construct. Indeed, such recurrence equations set on the integers \(\mathbb{Z}\) are relatively well understood in the numerical analysis community. The behavior of the solution sequence \((e_{j}^{n})_{j\in\mathbb{Z}}\) can be read out from the so-called amplification factor function defined as
\[\rho(\theta):=\frac{\alpha\left(e^{-\mathbf{i}\theta}-1\right)+1-\beta+\lambda \left(e^{\mathbf{i}\theta}-1\right)}{1-\beta e^{-\mathbf{i}\theta}},\quad \theta\in[-\pi,\pi], \tag{3.6}\]
and which relates spatial and temporal modes. Indeed, formally, the sequence \((\rho(\theta)^{n}e^{\mathbf{i}j\theta})_{j\in\mathbb{Z}}\) is an explicit solution to (3.5) for each \(\theta\in[-\pi,\pi]\). Actually, one can be much more precise, and almost explicit, in the sense that one can relate the expression of the solutions to (3.5) starting from some initial sequence \((h_{j})_{j\in\mathbb{Z}}\) to the properties of \(\rho\) in a systematic way that we now briefly explain.
Let us first denote by \(\mathcal{G}^{n}=(\mathcal{G}^{n}_{j})_{j\in\mathbb{Z}}\) the sequence which is the fundamental solution of (3.5) in the special case where \((\mathcal{H}_{j})_{j\in\mathbb{Z}}\) is the Dirac delta sequence \(\boldsymbol{\delta}\). The Dirac delta sequence \(\boldsymbol{\delta}\) is defined as \(\boldsymbol{\delta}_{0}=1\) and \(\boldsymbol{\delta}_{j}=0\) for all \(j\in\mathbb{Z}\backslash\{0\}\). As a consequence, we have \(\mathcal{G}^{0}=\boldsymbol{\delta}\) and for each \(n\geq 0\)
\[\mathcal{G}^{n+1}_{j}-\beta\mathcal{G}^{n+1}_{j-1}=\alpha\mathcal{G}^{n}_{j-1}+ (1-\beta-\lambda-\alpha)\mathcal{G}^{n}_{j}+\lambda\mathcal{G}^{n}_{j+1},\quad j \in\mathbb{Z}.\]
The starting point of the analysis is the following representation formula, obtained via inverse Fourier transform, which reads
\[\mathcal{G}^{n}_{j}=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{\mathbf{i}j\theta}\rho( \theta)^{n}\mathrm{d}\theta,\quad n\geq 1,\quad j\in\mathbb{Z}. \tag{3.7}\]
Then, given any initial sequence \((h_{j})_{j\in\mathbb{Z}}\), the solution \((e^{n}_{j})_{j\in\mathbb{Z}}\) to (3.5) can be represented as the convolution product between the initial sequence and the fundamental solution, namely
\[e^{n}_{j}=\sum_{\ell\in\mathbb{Z}}\mathcal{G}^{n}_{j-\ell}h_{\ell},\quad j\in \mathbb{Z},\quad n\geq 1. \tag{3.8}\]
That is, having characterized the fundamental solution for a simple input pattern (\(\boldsymbol{\delta}\)), with a unit impulse provided to a single layer, we can now easily generalize to any arbitrary input pattern, by applying the (translated) fundamental solution to each layer.
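Numerically, formula (3.7) can be approximated by a Riemann sum over \(\theta\), which offers a quick way to inspect the fundamental solution without iterating the recurrence. A minimal sketch (ours, assuming NumPy; names are illustrative):

```python
import numpy as np

def rho(theta, alpha, beta, lam):
    """Amplification factor function (3.6)."""
    return ((alpha * (np.exp(-1j * theta) - 1) + 1 - beta
             + lam * (np.exp(1j * theta) - 1))
            / (1 - beta * np.exp(-1j * theta)))

def fundamental_solution(n, jmax, alpha, beta, lam, m=8192):
    """Approximate (G_j^n)_{|j| <= jmax} by a Riemann sum of (3.7) with m nodes."""
    theta = -np.pi + 2 * np.pi * np.arange(m) / m
    r = rho(theta, alpha, beta, lam) ** n
    j = np.arange(-jmax, jmax + 1)
    return j, (np.exp(1j * np.outer(j, theta)) @ r).real / m
```

Convolving the output with any bounded sequence \((h_{j})\) as in (3.8) then yields the corresponding solution of (3.5).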
Our aim is to understand under which conditions on the hyper-parameters we can ensure that the solutions of (3.5) given through (3.8) remain bounded for all \(n\geq 1\) independently of the choice of the initial sequence \((h_{j})_{j\in\mathbb{Z}}\). More precisely, we introduce the following terminology. We say that the recurrence equation is _stable_ if for each bounded initial sequence \((h_{j})_{j\in\mathbb{Z}}\in\ell^{\infty}(\mathbb{Z})\), the corresponding solution \((e^{n}_{j})_{j\in\mathbb{Z}}\) given by (3.8) satisfies
\[\sup_{j\in\mathbb{Z}}\lvert e^{n}_{j}\rvert\underset{n\to\infty}{\longrightarrow}0.\]
On the other hand, we say that the recurrence equation is _unstable_ if one can find a bounded initial sequence \((h_{j})_{j\in\mathbb{Z}}\in\ell^{\infty}(\mathbb{Z})\) such that the corresponding solution \((e^{n}_{j})_{j\in\mathbb{Z}}\) given by (3.8) satisfies
\[\sup_{j\in\mathbb{Z}}\lvert e^{n}_{j}\rvert\underset{n\to\infty}{\longrightarrow }+\infty.\]
Finally, we say that the recurrence equation is _marginally stable_ if there exists a universal constant \(C>0\) such that for each bounded initial sequence \((h_{j})_{j\in\mathbb{Z}}\in\ell^{\infty}(\mathbb{Z})\), the corresponding solution \((e^{n}_{j})_{j\in\mathbb{Z}}\) given by (3.8) satisfies
\[\sup_{j\in\mathbb{Z}}\lvert e^{n}_{j}\rvert\leq C\sup_{j\in\mathbb{Z}}\lvert h _{j}\rvert,\quad n\geq 1.\]
It turns out that one can determine the stability properties of the recurrence equation by solely looking at the amplification factor function. Indeed, from [29], we know that
\[\lim_{n\to\infty}\lVert\mathcal{G}^{n}\rVert^{1/n}_{\ell^{1}(\mathbb{Z})}= \max_{\theta\in[-\pi,\pi]}\lvert\rho(\theta)\rvert,\]
where we have set
\[\lVert\mathcal{G}^{n}\rVert_{\ell^{1}(\mathbb{Z})}:=\sum_{j\in\mathbb{Z}}| \mathcal{G}^{n}_{j}|.\]
As a consequence, we directly deduce that the recurrence equation is stable when \(|\rho(\theta)|<1\) for all \(\theta\in[-\pi,\pi]\), whereas it is unstable if there exists \(\theta_{0}\in[-\pi,\pi]\) such that \(|\rho(\theta_{0})|>1\). The limiting case occurs precisely when \(\underset{\theta\in[-\pi,\pi]}{\max}|\rho(\theta)|=1\) and there is actually a long history of works [10, 12, 14, 26, 30] that have studied the marginal stability of the recurrence equation in that case. All such results rely on a very precise understanding of the amplification factor function and lead to the following statement.
**Theorem 1** ([10, 12, 14, 26, 30]).: _Suppose that there exist finitely many \(\theta_{1},\cdots,\theta_{K}\in[-\pi,\pi]\) such that for all \(\theta\in[-\pi,\pi]\backslash\left\{\theta_{1},\cdots,\theta_{K}\right\}\) one has \(|\rho(\theta)|<1\) and \(|\rho(\theta_{k})|=1\) for each \(k=1,\cdots,K\). Furthermore, assume that there exist \(c_{k}\in\mathbb{R}\), \(\sigma_{k}\in\mathbb{C}\) with \(\mathrm{Re}(\sigma_{k})>0\) and an integer \(\mu_{k}\geq 1\) such that_
\[\frac{\rho(\theta_{k}+\theta)}{\rho(\theta_{k})}=\exp\left(-\mathbf{i}c_{k} \theta-\sigma_{k}\theta^{2\mu_{k}}+\mathcal{O}(|\theta|^{2\mu_{k}+1})\right), \text{ as }\theta\to 0.\]
_Then the recurrence equation is marginally stable._
Based on the above notions of stability/instability, we see that the only interesting situation is when the recurrence equation is marginally stable, and thus when the amplification function is contained in the unit disk with finitely many tangent points to the unit circle with prescribed asymptotic expansions. This is also the only interesting situation from a biological standpoint, as it ensures that the network remains active, yet without runaway activations.
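The dichotomy between stability and instability can already be observed experimentally, before analyzing \(\rho\) in detail. A minimal sketch (ours, assuming NumPy; the explicit case \(\beta=0\), on a periodic truncation of the lattice, already exhibits both regimes):

```python
import numpy as np

def sup_norm(n_steps, alpha, lam, J=2000):
    """Sup norm of the solution of (3.5) with beta = 0 after n_steps iterations."""
    e = np.random.default_rng(0).uniform(-1.0, 1.0, J)   # bounded initial sequence
    for _ in range(n_steps):
        e = alpha * np.roll(e, 1) + (1 - lam - alpha) * e + lam * np.roll(e, -1)
    return np.abs(e).max()

print(sup_norm(300, alpha=0.3, lam=0.4))   # alpha + lam <= 1: stays bounded
print(sup_norm(300, alpha=0.8, lam=0.5))   # alpha + lam  > 1: |rho(pi)| > 1, blow-up
```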
#### 3.1.1 Study of the amplification factor function
Since we assumed that \(0\leq\beta<1\), the denominator in (3.6) never vanishes, so that \(\rho\) is well-defined. Next, we crucially remark that we always have
\[\rho(0)=1.\]
We will now check under which conditions \(|\rho(\theta)|\leq 1\) for all \(\theta\in[-\pi,\pi]\) to guarantee marginal stability of the recurrence equation.
To assess stability, we compute
\[|\rho(\theta)|^{2} =\frac{\left((\lambda+\alpha)(\cos(\theta)-1)+1-\beta\right)^{2}+ (\lambda-\alpha)^{2}\sin(\theta)^{2}}{1-2\beta\cos(\theta)+\beta^{2}}\] \[=\frac{(\lambda+\alpha)^{2}(\cos(\theta)-1)^{2}+2(1-\beta)( \lambda+\alpha)(\cos(\theta)-1)+(1-\beta)^{2}+(\lambda-\alpha)^{2}(1-\cos( \theta)^{2})}{(1-\beta)^{2}+2\beta(1-\cos(\theta))}\] \[=\frac{(1-\cos(\theta))\left((\lambda+\alpha)^{2}(1-\cos(\theta ))-2(1-\beta)(\lambda+\alpha)+(\lambda-\alpha)^{2}(1+\cos(\theta))\right)+(1- \beta)^{2}}{(1-\beta)^{2}+2\beta(1-\cos(\theta))}\] \[=\frac{(1-\cos(\theta))\left(-4\alpha\lambda\cos(\theta)-2(1- \beta)(\lambda+\alpha)+2(\lambda^{2}+\alpha^{2})\right)+(1-\beta)^{2}}{(1- \beta)^{2}+2\beta(1-\cos(\theta))}\]
such that \(\left|\rho(\theta)\right|^{2}\leq 1\) is equivalent to
\[(1-\cos(\theta))\left(2\beta+4\alpha\lambda\cos(\theta)+2(1-\beta)(\lambda+ \alpha)-2(\lambda^{2}+\alpha^{2})\right)\geq 0,\quad\theta\in[-\pi,\pi],\]
and since \(1-\cos(\theta)\geq 0\) we need to ensure
\[\beta+2\alpha\lambda\cos(\theta)+(1-\beta)(\lambda+\alpha)-\lambda^{2}-\alpha ^{2}\geq 0,\quad\theta\in[-\pi,\pi],\]
and evaluating at \(\pm\pi\) the above inequality we get
\[\beta+(1-\beta)(\lambda+\alpha)-(\lambda+\alpha)^{2}\geq 0.\]
But we remark that the above expression can be factored as
\[(\beta+\lambda+\alpha)\left(1-\lambda-\alpha\right)\geq 0.\]
As a consequence, \(\left|\rho(\theta)\right|^{2}\leq 1\) if and only if \(\lambda+\alpha\leq 1\). This is precisely the condition that we made in (2.6). We can actually track cases of equality which are those values of \(\theta\in[-\pi,\pi]\) for which we have
\[(1-\cos(\theta))\left(2\beta+4\alpha\lambda\cos(\theta)+2(1-\beta)(\lambda+ \alpha)-2(\lambda^{2}+\alpha^{2})\right)=0.\]
We readily recover that at \(\theta=0\) we have \(\left|\rho(0)\right|=1\). So, now assuming that \(\theta\neq 0\), we need to solve
\[\beta+2\alpha\lambda\cos(\theta)+(1-\beta)(\lambda+\alpha)-\lambda^{2}-\alpha ^{2}=0,\]
which we write as
\[\beta-2\alpha\lambda+(1-\beta)(\lambda+\alpha)-\lambda^{2}-\alpha^{2}+2\alpha\lambda\left(\cos(\theta)+1\right)=0,\]

and using the previous factorization we get

\[\left(\beta+\lambda+\alpha\right)(1-\lambda-\alpha)+2\alpha\lambda\left(\cos(\theta)+1\right)=0,\]

and we necessarily get that both \(1+\cos(\theta)=0\) and \(1-\lambda-\alpha=0\) must be satisfied. As a consequence, \(\left|\rho(\pm\pi)\right|=1\) if and only if \(1=\lambda+\alpha\).
As a summary we have obtained that:
* if \(0\leq\lambda+\alpha<1\) and \(0\leq\beta<1\), then \(\left|\rho(\theta)\right|<1\) for all \(\theta\in[-\pi,\pi]\backslash\{0\}\) with \(\rho(0)=1\);
* if \(\lambda+\alpha=1\) and \(0\leq\beta<1\), then \(\left|\rho(\theta)\right|<1\) for all \(\theta\in(-\pi,\pi)\backslash\{0\}\) with \(\rho(0)=1\) and \(\rho(\pm\pi)=-1\).
We present in Figure 2 several representative illustrations of the spectral curves \(\rho(\theta)\) for various values of the hyper-parameters recovering the results explained above.
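These tangency configurations are easy to reproduce numerically; the short script below (ours, assuming NumPy and Matplotlib) traces \(\theta\mapsto\rho(\theta)\) in the complex plane and checks that it never leaves the closed unit disk when \(\alpha+\lambda\leq 1\):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 4001)
for alpha, beta, lam in [(0.2, 0.0, 0.3), (0.6, 0.0, 0.4), (0.3, 0.4, 0.7)]:
    num = alpha * (np.exp(-1j * theta) - 1) + 1 - beta + lam * (np.exp(1j * theta) - 1)
    r = num / (1 - beta * np.exp(-1j * theta))
    assert np.abs(r).max() <= 1 + 1e-10          # holds precisely when alpha + lam <= 1
    plt.plot(r.real, r.imag, label=f"alpha={alpha}, beta={beta}, lambda={lam}")
plt.plot(np.cos(theta), np.sin(theta), "k--", lw=0.5)   # unit circle
plt.gca().set_aspect("equal"); plt.legend(); plt.show()
```

The last two parameter triples satisfy \(\alpha+\lambda=1\) and display the two tangency points at \(z=\pm 1\), as in Figure 2.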
Furthermore, near \(\theta\sim 0\), we get that \(\rho\) admits the following asymptotic expansion
\[\rho(\theta)=\exp\left(-\mathbf{i}\frac{\beta+\alpha-\lambda}{1-\beta}\theta- \frac{\beta(1-\alpha-\lambda)+\alpha+\lambda-(\lambda-\alpha)^{2}}{2(1-\beta) ^{2}}\theta^{2}+\mathcal{O}(|\theta|^{3})\right),\text{ as }\theta\to 0,\]
provided that
\[\beta(1-\alpha-\lambda)+\alpha+\lambda-(\lambda-\alpha)^{2}\neq 0.\]
Figure 2: _Several representative illustrations of the curve \(\theta\mapsto\rho(\theta)\) for \(\theta\in[-\pi,\pi]\) in the cases \(\beta=0\) and \(\beta\neq 0\). (a)-(b) Amplification factor function \(\rho(\theta)\) (blue curve) with a unique tangency point on the unit circle at \(z=1\), corresponding to \(\theta=0\). (c)-(f) When \(\alpha+\lambda=1\), the function \(\rho(\theta)\) (blue curve) has two tangency points on the unit circle: at \(z=1\), corresponding to \(\theta=0\), and at \(z=-1\), corresponding to \(\theta=\pm\pi\)._
In fact, since \(-(\lambda-\alpha)^{2}\geq-(\lambda+\alpha)^{2}\) as both \(\alpha\) and \(\lambda\) are positive, we remark that
\[\beta(1-\alpha-\lambda)+\alpha+\lambda-(\lambda-\alpha)^{2}\geq\beta(1-\alpha- \lambda)+\alpha+\lambda-(\lambda+\alpha)^{2}=(\beta+\lambda+\alpha)\left(1- \lambda-\alpha\right)\geq 0.\]
Finally, we remark that when \(\alpha+\lambda=1\) we have
\[\rho(\theta+\pi)=-\exp\left(-\mathrm{i}\frac{-\beta+\alpha-\lambda}{1+\beta} \theta-\frac{1-(\alpha-\lambda)^{2}}{2(1+\beta)^{2}}\theta^{2}+\mathcal{O}(| \theta|^{3})\right),\text{ as }\theta\to 0.\]
From now on, we denote
\[c_{0}:=\frac{\beta+\alpha-\lambda}{1-\beta}, \sigma_{0}:=\frac{\beta(1-\alpha-\lambda)+\alpha+\lambda-(\lambda- \alpha)^{2}}{2(1-\beta)^{2}},\] \[c_{\pi}:=\frac{-\beta+\alpha-\lambda}{1+\beta}, \sigma_{\pi}:=\frac{1-(\alpha-\lambda)^{2}}{2(1+\beta)^{2}},\]
and we always assume that
\[\sigma_{0}>0,\text{ and }\sigma_{\pi}>0,\]
which is equivalent to assuming that \(0<\alpha<1\) and \(0<\lambda<1\).
Here, \((c_{0},\sigma_{0})\) and \((c_{\pi},\sigma_{\pi})\) are derived, respectively, from the asymptotic expansions of the amplification factor function \(\rho(\theta)\) near \(\theta=0\) and \(\theta=\pi\), as defined above. The speed \(c_{0}\) reflects the propagation speed of the solution associated with \(\rho(0)\), while \(\sigma_{0}\) can be understood as its spatio-temporal spread (and similarly for the solution potentially associated with \(\rho(\pi)\)). In the following, we explore the fundamental solutions of this system for various values of its hyper-parameters.
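These four quantities are simple algebraic functions of the hyper-parameters; for later reference, a small helper (ours) implementing the formulas above:

```python
def wave_parameters(alpha, beta, lam):
    """Speeds and spreads (c0, sigma0, c_pi, sigma_pi) of the two wave packets."""
    c0 = (beta + alpha - lam) / (1 - beta)
    sigma0 = (beta * (1 - alpha - lam) + alpha + lam - (lam - alpha) ** 2) \
             / (2 * (1 - beta) ** 2)
    c_pi = (-beta + alpha - lam) / (1 + beta)
    sigma_pi = (1 - (alpha - lam) ** 2) / (2 * (1 + beta) ** 2)
    return c0, sigma0, c_pi, sigma_pi

# e.g. alpha = lam and beta = 0 gives c0 = 0: no propagation, pure diffusion
print(wave_parameters(0.3, 0.0, 0.3))
```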
#### 3.1.2 Turning off the instantaneous feedforward connections: case \(\beta=0\)
We first investigate the case where there is no instantaneous feedforward connections in the network, that is we set \(\beta=0\). This case, although less generic, is compatible with the prominent Rao-Ballard formulation of predictive coding [27], in which feedforward connections--after contributing to setting the initial network activity--only convey prediction errors, as captured by the hyper-parameter \(\alpha\). In that case, the model is fully explicit: the update at time step \(n+1\) only depends on the internal states at the previous step \(n\) since we simply have
\[e_{j}^{n+1}=\alpha e_{j-1}^{n}+(1-\lambda-\alpha)e_{j}^{n}+\lambda e_{j+1}^{n},\quad j\in\mathbb{Z}.\]
As we assumed that \(\alpha+\lambda\leq 1\), the right-hand side of the recurrence equation is a positive linear combination of elements of the sequence \((e_{j}^{n})\) such that we have positivity principle of the solution, namely
\[\forall j\in\mathbb{Z},\quad 0\leq h_{j}\quad\Longrightarrow\quad\forall j \in\mathbb{Z},\quad n\geq 1,\quad 0\leq e_{j}^{n}.\]
Furthermore, since the recurrence equation is explicit, we have finite speed propagation, in the following sense. Recall that when \(\beta=0\), the fundamental solution \(\mathcal{G}^{n}\) is solution to
\[\mathcal{G}_{j}^{n+1}=\alpha\mathcal{G}_{j-1}^{n}+(1-\lambda-\alpha)\mathcal{G }_{j}^{n}+\lambda\mathcal{G}_{j+1}^{n},\quad n\geq 1,\qquad j\in\mathbb{Z},\]
starting from \(\mathcal{G}^{0}=\boldsymbol{\delta}\). Finite speed of propagation then refers to the property that
\[\mathcal{G}_{j}^{n}=0,\quad|j|>n.\]
This in turn implies that necessarily \(c_{0}\in(-1,1)\), which is readily seen from the explicit formula \(c_{0}=\alpha-\lambda\) in that case. Actually, it is possible to be more precise and to give a general expression for the fundamental solution. Roughly speaking, each \(\mathcal{G}_{j}^{n}\) resembles a discrete Gaussian distribution centered at \(j=c_{0}n\), and we refer to the recent theoretical results of [10, 12, 14, 26] for a rigorous justification.
Essentially, the results can be divided into two cases depending on whether or not \(\alpha+\lambda=1\). As can be seen above, the special case \(\alpha+\lambda=1\) results in a cancellation of the "memory" term, such that a neuronal layer \(j\)'s activity does not depend on its own activity at the previous time step, but only on the activity of its immediate neighbors \(j-1\) and \(j+1\). More precisely, we have the following:
* Case: \(0\leq\lambda+\alpha<1\). The fundamental solution can be decomposed as \[\mathcal{G}_{j}^{n}=\frac{1}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0} n|^{2}}{4\sigma_{0}n}\right)+\mathcal{N}_{j}^{n},\quad|j|\leq n,\] where the remainder term satisfies an estimate \[\big{|}\mathcal{N}_{j}^{n}\big{|}\leq\frac{C}{n}\exp\left(-\kappa\frac{|j-c_{ 0}n|^{2}}{n}\right),\quad|j|\leq n,\] for some universal constants \(C,\kappa>0\) which only depend on the hyper-parameters and not \(n\) and \(j\). In Figure 3(a), we represented the fundamental solution \(\mathcal{G}_{j}^{n}\) at different time iterations (circles) in the case \(\lambda<\alpha\) where there is rightward propagation with \(c_{0}>0\) and compared it with the leading order fixed Gaussian profile centered at \(j=c_{0}n\) (plain line). On the other hand, in Figure 4, panels (a)-(b)-(c), we illustrate the above results by presenting a space-time color plot of the fundamental solution rescaled by a factor \(\sqrt{n}\). We observe rightward (respectively leftward) propagation with \(c_{0}>0\) (respectively \(c_{0}<0\)) when \(\lambda<\alpha\) (respectively \(\alpha<\lambda\)), while when \(\alpha=\lambda\) we have \(c_{0}=0\) and no propagation occurs.
* Case: \(\lambda+\alpha=1\). In this case, we first note that we have \(c_{0}=c_{\pi}\) together with \(\sigma_{0}=\sigma_{\pi}\) and \[\mathcal{G}_{j}^{n}=\frac{1+(-1)^{n+j}}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0}n|^{2}}{4\sigma_{0}n}\right)+\mathcal{N}_{j}^{n},\quad|j|\leq n,\] where the remainder term satisfies an estimate \[\left|\mathcal{N}_{j}^{n}\right|\leq\frac{C}{n}\exp\left(-\kappa\frac{|j-c_{0}n|^{2}}{n}\right),\quad|j|\leq n,\] for some universal constants \(C,\kappa>0\). In Figure 3(b), we represented the fundamental solution \(\mathcal{G}_{j}^{n}\) at different time iterations (circles) in the case \(\alpha<\lambda\), where there is leftward propagation with \(c_{0}<0\), and compared it with the leading order fixed Gaussian profile centered at \(j=c_{0}n\) (plain line). As in the previous case, in Figure 4, panels (d)-(e)-(f), we illustrate the above results by presenting a space-time color plot of the fundamental solution rescaled by a factor \(\sqrt{n}\). The direction of propagation still depends on the sign of \(c_{0}\) and whether \(\lambda\lessgtr\alpha\). Unlike the case \(\alpha+\lambda<1\), we observe a tiled pattern where \(\mathcal{G}_{j}^{n}=0\) alternately for even or odd integers at each time step.
As a partial intermediate summary, we note that the sign of \(c_{0}\) (directly related to the sign of \(\alpha-\lambda\)) always indicates in which direction the associated Gaussian profile propagates. Namely if \(\alpha>\lambda\) and \(c_{0}>0\) (resp.
\(\alpha<\lambda\) and \(c_{0}<0\)) there is rightward (resp. leftward) propagation. Intuitively, this behavior reflects the functional role of each hyper-parameter, with \(\alpha\) and \(\lambda\) controlling feed-forward and feed-back prediction error correction, respectively. When \(\alpha=\lambda\), the two terms are equally strong, and there is no dominant direction of propagation. In addition, when \(\lambda+\alpha=1\), the Gaussian profile is _oscillating_ because of the presence of \((-1)^{n+j}\). As will be seen later when considering continuous versions of our model, this oscillatory pattern might not be truly related to neural oscillations observed in the brain, but could instead arise here as a consequence of discrete updating.
Finally, we note that the fundamental solution sequence \((\mathcal{G}_{j}^{n})_{j\in\mathbb{Z}}\) is uniformly integrable for all values of the parameters, that is there exists some universal constant \(C>0\), depending only on the hyper-parameters such that
\[\|\mathcal{G}^{n}\|_{\ell^{1}(\mathbb{Z})}:=\sum_{j\in\mathbb{Z}}|\mathcal{G}_ {j}^{n}|\leq C,\quad n\geq 1.\]
As a consequence, since given any bounded initial sequence \((h_{j})_{j\in\mathbb{Z}}\in\ell^{\infty}(\mathbb{Z})\), the solution \((e_{j}^{n})_{j\in\mathbb{Z}}\) to (3.5) can be represented as the convolution product between the initial sequence and the fundamental solution, namely
\[e_{j}^{n}=\sum_{\ell\in\mathbb{Z}}\mathcal{G}_{j-\ell}^{n}h_{\ell},\quad j\in \mathbb{Z},\quad n\geq 1,\]
we readily deduce that the solution \((e_{j}^{n})_{j\in\mathbb{Z}}\) is uniformly bounded with respect to \(n\), that is there exists some universal constant denoted \(C>0\), such that
\[\sup_{j\in\mathbb{Z}}\left|e_{j}^{n}\right|\leq C\,\sup_{j\in\mathbb{Z}}|h_{j} |\,,\quad n\geq 1.\]
This is exactly our definition of marginal stability.
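The Gaussian approximation discussed above can be verified directly, since for \(\beta=0\) the recurrence is an explicit three-point scheme. A sketch of the comparison (ours, assuming NumPy; the periodic truncation is harmless as long as the support of the solution stays away from the edges):

```python
import numpy as np

alpha, lam = 0.35, 0.15                      # alpha + lam < 1, c0 = alpha - lam > 0
c0, sigma0 = alpha - lam, (alpha + lam - (lam - alpha) ** 2) / 2
n, jmax = 400, 600
G = np.zeros(2 * jmax + 1); G[jmax] = 1.0    # Dirac delta at j = 0
for _ in range(n):
    G = alpha * np.roll(G, 1) + (1 - lam - alpha) * G + lam * np.roll(G, -1)
j = np.arange(-jmax, jmax + 1)
gauss = np.exp(-(j - c0 * n) ** 2 / (4 * sigma0 * n)) / np.sqrt(4 * np.pi * sigma0 * n)
print(np.abs(G - gauss).max())               # O(1/n), as in the estimate on N_j^n
```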
#### 3.1.3 Turning on the instantaneous feedforward connections: case \(\beta>0\)
We now turn to the general case where \(\beta>0\). That is, the feedforward connections continue to convey sensory inputs at each time step after the network's initialization, and \(\beta\) controls the strength of these signals. In that case, the recurrence equation is no longer explicit but implicit, and the positivity property together with the finite speed of propagation no longer hold true in general. Indeed, upon introducing the shift operator
\[\mathbf{S}:(u_{j})_{j\in\mathbb{Z}}\mapsto(u_{j+1})_{j\in\mathbb{Z}},\]
we remark that equation (3.5) can be written
\[\left(\mathrm{Id}-\beta\mathbf{S}^{-1}\right)e^{n+1}=\alpha\mathbf{S}^{-1}e^{ n}+(1-\beta-\lambda-\alpha)e^{n}+\lambda\mathbf{S}e^{n},\quad n\geq 0,\]
with \(e^{n}=(e_{j}^{n})_{j\in\mathbb{Z}}\). Since \(0<\beta<1\) and \(\left\|\mathbf{S}^{-1}\right\|_{\ell^{q}(\mathbb{Z})\to\ell^{q}(\mathbb{Z})}=1\) for any \(q\in[1,+\infty]\), the operator \(\mathrm{Id}-\beta\mathbf{S}^{-1}\) is invertible on \(\ell^{q}(\mathbb{Z})\) for any \(q\in[1,+\infty]\) with inverse
\[\left(\mathrm{Id}-\beta\mathbf{S}^{-1}\right)^{-1}=\sum_{\ell=0}^{\infty}\beta ^{\ell}\mathbf{S}^{-\ell}.\]
As a consequence, the recurrence equation can be recast as a convolution operator across the network layers with infinite support, namely
\[e_{j}^{n+1}=\alpha\sum_{\ell=0}^{\infty}\beta^{\ell}e_{j-\ell-1}^{n}+(1-\beta- \lambda-\alpha)\sum_{\ell=0}^{\infty}\beta^{\ell}e_{j-\ell}^{n}+\lambda\sum_{ \ell=0}^{\infty}\beta^{\ell}e_{j-\ell+1}^{n},\quad j\in\mathbb{Z},\quad n\geq 0.\]
From the above expression, we readily deduce that the positivity of the solution is preserved whenever \(0<\beta<1-\lambda-\alpha\). Furthermore, for the fundamental solution starting from the Dirac delta solution which solves
\[\mathcal{G}_{j}^{n+1}-\beta\mathcal{G}_{j-1}^{n+1}=\alpha\mathcal{G}_{j-1}^{n} +(1-\beta-\lambda-\alpha)\mathcal{G}_{j}^{n}+\lambda\mathcal{G}_{j+1}^{n}, \quad j\in\mathbb{Z},\quad n\geq 0,\]
we only have that
\[\mathcal{G}_{j}^{n}=0,\quad j<-n,\]
which implies that \(-1<c_{0},c_{\pi}<+\infty\). Indeed, from the formula of \(c_{0}\) we get that
\[c_{0}\sim\frac{1+\alpha-\lambda}{1-\beta}\longrightarrow+\infty,\quad\text{ as }\beta\to 1^{-}.\]
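The geometric-series expression for \(\left(\mathrm{Id}-\beta\mathbf{S}^{-1}\right)^{-1}\) also gives a practical way to advance the implicit scheme: truncating the series at \(L\) terms commits an error of order \(\beta^{L}\). A sketch (ours, assuming NumPy, on a periodic truncation of the lattice):

```python
import numpy as np

def implicit_step(e, alpha, beta, lam, L=80):
    """One step of (3.5): apply sum_l beta^l S^{-l} to the explicit right-hand side."""
    rhs = alpha * np.roll(e, 1) + (1 - beta - lam - alpha) * e + lam * np.roll(e, -1)
    out = np.zeros_like(e)
    for ell in range(L):
        out += beta ** ell * np.roll(rhs, ell)   # (S^{-l} u)_j = u_{j-l}
    return out
```

With, say, \(\beta=0.5\) and \(L=80\), the truncation error is below \(10^{-24}\), so the step is exact to machine precision in practice.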
Once again, as in the case with \(\beta=0\), we can precise the behavior of the fundamental solution by using the combined results of [10, 12].
* Case: \(0\leq\lambda+\alpha<1\). There exist some universal constants \(C,\kappa>0\) and \(L>0\) such that \[\mathcal{G}_{j}^{n}=\frac{1}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0}n |^{2}}{4\sigma_{0}n}\right)+\mathcal{N}_{j}^{n},\quad-n\leq j\leq Ln,\] where the remainder term satisfies a Gaussian estimate \[\left|\mathcal{N}_{j}^{n}\right|\leq\frac{C}{n}\exp\left(-\kappa\frac{|j-c_{0} n|^{2}}{n}\right),\quad-n\leq j\leq Ln.\]
While for \(j>nL\) we simply get a pure exponential bound \[\left|\mathcal{G}_{j}^{n}\right|\leq Ce^{-\kappa n-\kappa j},\quad nL<j.\] Inspecting the formula for \(c_{0}\), we notice that when \(\alpha+\beta\lessgtr\lambda\) we have \(c_{0}\lessgtr 0\) and the wave speed vanishes precisely when \(\alpha+\beta=\lambda\). This is illustrated in Figure 5, where we see that \(\alpha\) and \(\beta\), both propagating signals in the forward (rightward) direction, compete with \(\lambda\) carrying the feedback (leftward) prediction signals; this competition determines the main direction of propagation of neural activity in the system.

Figure 5: _Effects of turning on \(\beta>0\) when \(\alpha+\lambda<1\). When \(\beta+\alpha<\lambda\) we have \(c_{0}<0\) and observe backward propagation in panel (a), while when \(\lambda<\beta+\alpha\) we have \(c_{0}>0\) and forward propagation, as seen in panel (c). At the transition \(\beta+\alpha=\lambda\), the wave speed vanishes, \(c_{0}=0\), and there is no propagation, as illustrated in panel (b). Intuitively, \(\alpha\) and \(\beta\), both propagating signals in the forward (rightward) direction, compete with \(\lambda\) carrying the feedback (leftward) prediction signals; this competition determines the main direction of propagation of neural activity in the system._
* Case: \(\lambda+\alpha=1\). What changes in that case is the existence of a secondary wave with associated wave speed \(c_{\pi}\) whose sign depends on the competition between \(\alpha\) and \(\beta+\lambda\). When \(\alpha<\beta+\lambda\) then we have \(c_{\pi}<0\), and the competition between \(\lambda\) and \(\beta+\alpha\) will determine the sign of \(c_{0}\), as illustrated in panels (a)-(b)-(c) of Figure 6. On the other hand, when \(\beta+\lambda<\alpha\) implying that \(c_{\pi}>0\), we note that \(\alpha+\beta>\lambda\) and thus \(c_{0}>0\). In that case, the explicit formula for \(c_{\pi}\) and \(c_{0}\) shows that
\(0<c_{\pi}<c_{0}\) and the secondary wave associated to \(c_{\pi}\) is slower to propagate into the network, see Figure 6(d). Finally, when \(\beta+\lambda=\alpha\) we have \(0=c_{\pi}<c_{0}\) and the secondary wave is blocked, see Figure 6(e).
We have summarized in the diagram of Figure 6(f) all possible configurations for the sign of the wave speeds \(c_{0}\) and \(c_{\pi}\) when \(\beta\in(0,1)\) when \(\alpha+\lambda\leq 1\). We notably observe that when \(\beta\) is increased the region of parameter space where \(c_{0}<0\) diminishes while the region of parameter space where \(c_{\pi}<0\) increases, indicating that for high values of \(\beta\) the primary wave is most likely to be forward while the secondary wave is most likely to be backward.
### 3.2 Wave propagation on a semi-infinite network with a forcing source term
Now that we have understood the intrinsic underlying mechanisms of wave propagation for our model (3.1) set on an infinite domain, we turn to the case where the network is semi-infinite. That is, the network admits an _input_ layer that is only connected to the layer above. The problem now reads
\[\begin{cases}e_{j}^{n+1}-\beta e_{j-1}^{n+1}=\alpha e_{j-1}^{n}+(1-\beta- \lambda-\alpha)e_{j}^{n}+\lambda e_{j+1}^{n},\quad j\geq 1,\quad n\geq 0,\\ \quad\quad\quad\quad\quad\quad e_{0}^{n}=s_{0}^{n},\quad n\geq 0,\\ \quad\quad\quad\quad\quad\quad e_{j}^{0}=h_{j},\quad j\geq 1.\end{cases} \tag{3.9}\]
We see that the system depends on the source term \(s_{0}^{n}\) applied to its input layer at each time step, also called a _boundary value_, and on the starting activation value \((h_{j})\) applied to each layer at the initial time point, also called the _initial value_. In fact, the linearity principle tells us that the solutions of the above problem can be obtained as the linear superposition of the solutions to the following two problems, the boundary value problem, where all layers except the input layer are initialized at zero:
\[\begin{cases}g_{j}^{n+1}-\beta g_{j-1}^{n+1}=\alpha g_{j-1}^{n}+(1-\beta- \lambda-\alpha)g_{j}^{n}+\lambda g_{j+1}^{n},\quad j\geq 1,\quad n\geq 0,\\ g_{0}^{n}=s_{0}^{n},\quad n\geq 0,\\ g_{j}^{0}=0,\quad j\geq 1,\end{cases} \tag{3.10}\]
and the initial value problem, where the input layer source term is set to zero for all time steps:
\[\begin{cases}f_{j}^{n+1}-\beta f_{j-1}^{n+1}=\alpha f_{j-1}^{n}+(1-\beta- \lambda-\alpha)f_{j}^{n}+\lambda f_{j+1}^{n},\quad j\geq 1,\quad n\geq 0,\\ f_{0}^{n}=0,\quad n\geq 0,\\ f_{j}^{0}=h_{j},\quad j\geq 1.\end{cases} \tag{3.11}\]
Subsequently, the generic solution sequence \((e_{j}^{n})_{j\geq 1}\) can be obtained as
\[e_{j}^{n}=f_{j}^{n}+g_{j}^{n},\quad j\geq 1,\quad n\geq 1.\]
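This decomposition can be checked numerically on a truncated network. The sketch below (ours, assuming NumPy; the top layer is closed by the finite-network condition (3.3) purely as a truncation device) advances the three problems in parallel and confirms \(e=f+g\) to machine precision:

```python
import numpy as np

def step(e, s0, alpha, beta, lam):
    """One step of (3.9) on layers j = 0..J, computed by a forward sweep in j."""
    J = e.size - 1
    e_new = np.empty_like(e)
    e_new[0] = s0                                      # input layer
    for j in range(1, J):
        e_new[j] = (beta * e_new[j - 1] + alpha * e[j - 1]
                    + (1 - beta - lam - alpha) * e[j] + lam * e[j + 1])
    # top layer closed as in (3.3), used here only to truncate the network
    e_new[J] = beta * e_new[J - 1] + alpha * e[J - 1] + (1 - beta - alpha) * e[J]
    return e_new

rng = np.random.default_rng(0)
J, s0, pars = 300, 0.7, dict(alpha=0.3, beta=0.2, lam=0.3)
h = rng.standard_normal(J + 1); h[0] = s0              # compatibility condition (2.5)
e, f, g = h.copy(), h.copy(), np.zeros(J + 1)
f[0], g[0] = 0.0, s0                                   # problems (3.11) and (3.10)
for _ in range(80):
    e, f, g = step(e, s0, **pars), step(f, 0.0, **pars), step(g, s0, **pars)
print(np.abs(e - (f + g)).max())                       # ~1e-16: superposition holds
```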
#### 3.2.1 The initial value problem (3.11)
It is first natural to investigate the initial value problem (3.11) since it is really close to the infinite network case of the previous section. Here, we consider the effect of the initial value assigned to each layer \(j>0\)
at the first time step (\(n=0\)), except the input layer (\(j=0\)) which is set to zero. The dynamics of (3.11) is still read out from the amplification factor function \(\rho\) defined in (3.6) and once again the solutions to (3.11) can be obtained as the convolution of the initial sequence with the fundamental solution associated to the problem. For \(j_{0}\geq 1\), we denote by \(\boldsymbol{\delta}^{j_{0}}\) the Dirac delta sequence defined as \(\boldsymbol{\delta}^{j_{0}}_{j_{0}}=1\) and \(\boldsymbol{\delta}^{j_{0}}_{j}=0\) for all \(j\geq 1\) and \(j\neq j_{0}\). Correspondingly, we denote by \(\mathcal{G}^{n}_{\rm ivp}(\cdot,j_{0})=(\mathcal{G}^{n}_{\rm ivp}(j,j_{0}))_{ j\geq 1}\) the solution to (3.11) starting from \(\boldsymbol{\delta}^{j_{0}}\), and let us remark that the solutions to (3.11) starting from any initial condition \((h_{j})_{j\geq 1}\) can be represented as
\[f^{n}_{j}=\sum_{j_{0}=1}^{+\infty}\mathcal{G}^{n}_{\rm ivp}(j,j_{0})h_{j_{0}},\quad j\geq 1,\quad n\geq 1.\]
Combining the results of [10, 12] together with those of [13, 16, 17], which precisely deal with recurrence equations with boundary conditions, one can obtain results very similar to those of the previous case. A first obvious remark is that for all \(j,j_{0}\geq 1\) and \(1\leq n<j_{0}\) we have
\[\mathcal{G}^{n}_{\rm ivp}(j,j_{0})=\mathcal{G}^{n}_{j-j_{0}},\]
meaning that it takes \(n=j_{0}\) iterations before the solution arrives at the boundary \(j=0\), and for \(1\leq n<j_{0}\) the problem is similar to the one set on the infinite network. This behavior is illustrated in Figure 7 for several values of the hyper-parameters, where we represent the spatio-temporal evolution of the rescaled solution sequence \((\sqrt{n}\,\mathcal{G}^{n}_{\rm ivp}(j,j_{0}))_{j\geq 1}\). We clearly observe a Gaussian behavior before the solution reaches the boundary. Then, for all \(n\geq j_{0}\), we can write
\[\mathcal{G}^{n}_{\rm ivp}(j,j_{0})=\mathcal{G}^{n}_{j-j_{0}}+\mathcal{G}^{n}_{ \rm bl}(j,j_{0}),\]
where \(\mathcal{G}^{n}_{\rm bl}(j,j_{0})\) is a remainder term generated by the boundary condition at \(j=0\). It is actually possible to bound \(\mathcal{G}^{n}_{\rm bl}(j,j_{0})\) in each of the cases treated above.
When \(\beta=0\) and \(\alpha+\lambda<1\) with \(\alpha<\lambda\) such that \(c_{0}<0\), then \(\mathcal{G}^{n}_{\rm bl}(j,j_{0})\) is well approximated by
\[\mathcal{G}^{n}_{\rm bl}(j,j_{0})\approx\left\{\begin{array}{ll}-\frac{1}{ \sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|-j_{0}-c_{0}n|^{2}}{4\sigma_{0}n} \right)\left(\frac{\alpha}{\lambda}\right)^{j},&1\leq j\leq j_{0},\\ e^{-\kappa n-\kappa(j-j_{0})},&j>j_{0},\end{array}\right.\]
while when \(\lambda<\alpha\) with \(c_{0}>0\), then \(\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\) is well approximated by
\[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx\left\{\begin{array}{ll}e^{- \kappa n-\kappa(j_{0}-j)},&1\leq j\leq j_{0},\\ -\frac{1}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0}n|^{2}}{4\sigma_{0} n}\right)\left(\frac{\alpha}{\lambda}\right)^{j_{0}},&j_{0}<j,\end{array}\right.\]
When \(\beta=0\) and \(\alpha+\lambda=1\) with \(\alpha<\lambda\) such that \(c_{0}<0\), then \(\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\) is well approximated by

\[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx\left\{\begin{array}{ll}-\frac{1+(-1)^{n}}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|-j_{0}-c_{0}n|^{2}}{4\sigma_{0}n}\right)\left(\frac{\alpha}{\lambda}\right)^{j},&1\leq j\leq j_{0},\\ e^{-\kappa n-\kappa(j-j_{0})},&j>j_{0},\end{array}\right.\]
while when \(\lambda<\alpha\) with \(c_{0}>0\), then \(\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\) is well approximated by
\[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx\left\{\begin{array}{ll}e^{- \kappa n-\kappa(j_{0}-j)},&1\leq j\leq j_{0},\\ -\frac{1+(-1)^{n}}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0}n|^{2}}{4 \sigma_{0}n}\right)\left(\frac{\alpha}{\lambda}\right)^{j_{0}},&j_{0}<j.\end{array}\right.\]
When \(0<\beta<1\) and \(\alpha+\lambda<1\), the approximations are similar to those obtained for \(\beta=0\). When \(\alpha+\lambda=1\), the secondary wave associated with \(c_{\pi}\) comes into play, and we need to discuss three cases.
* Case \(-1<c_{\pi}<c_{0}<0\). In that case, we have for \(1\leq j\leq j_{0}\) that \[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx-\frac{1}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|-j_{0}-c_{0}n|^{2}}{4\sigma_{0}n}\right)\left(\frac{\alpha+\beta}{\lambda}\right)^{j}-\frac{(-1)^{n}}{\sqrt{4\pi\sigma_{\pi}n}}\exp\left(-\frac{|-j_{0}-c_{\pi}n|^{2}}{4\sigma_{\pi}n}\right)\left(\frac{\alpha-\beta}{\lambda}\right)^{j},\] with an exponential bound for \(j>j_{0}\). This situation is presented in Figure 7(c).
* Case \(-1<c_{\pi}<0<c_{0}\). In this case we have \[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx\left\{\begin{array}{cc}-\frac{(-1)^{n}}{\sqrt{4\pi\sigma_{\pi}n}}\exp\left(-\frac{|-j_{0}-c_{\pi}n|^{2}}{4\sigma_{\pi}n}\right)\left(\frac{\alpha-\beta}{\lambda}\right)^{j},&1\leq j\leq j_{0},\\ -\frac{1}{\sqrt{4\pi\sigma_{0}n}}\exp\left(-\frac{|j-c_{0}n|^{2}}{4\sigma_{0}n}\right)\left(\frac{\lambda}{\alpha+\beta}\right)^{j_{0}},&j_{0}<j<Ln.\end{array}\right.\]
* Case \(-1<0<c_{\pi}<c_{0}\). In this case we have \[\mathcal{G}_{\mathrm{bl}}^{n}(j,j_{0})\approx-\frac{1}{\sqrt{4\pi\sigma_{0}n }}\exp\left(-\frac{|j-c_{0}n|^{2}}{4\sigma_{0}n}\right)\left(\frac{\lambda}{ \alpha+\beta}\right)^{j_{0}}-\frac{(-1)^{n}}{\sqrt{4\pi\sigma_{\pi}n}}\exp \left(-\frac{|j-c_{\pi}n|^{2}}{4\sigma_{\pi}n}\right)\left(\frac{\lambda}{ \alpha-\beta}\right)^{j_{0}}\] for \(j_{0}<j<Ln\).
#### 3.2.2 The boundary value problem (3.10)
We now turn our attention to the boundary value problem (3.10) where the network is initialized with zero activity, for all layers except the input. Motivated by applications, we will only focus on the case where \(s_{0}^{n}=s_{0}\in\mathbb{R}\) for all \(n\geq 0\) (i.e., a constant sensory input) and thus study:
\[\left\{\begin{aligned} g_{j}^{n+1}-\beta g_{j-1}^{n+1}& =\alpha g_{j-1}^{n}+(1-\beta-\lambda-\alpha)g_{j}^{n}+\lambda g_{j+ 1}^{n},\quad j\geq 1,\quad n\geq 0,\\ g_{0}^{n}&=s_{0},\quad n\geq 0,\\ g_{j}^{0}&=0,\quad j\geq 1.\end{aligned}\right. \tag{3.12}\]
Case \(\beta=0\). Here, the stimulus information \(s_{0}\) does not directly propagate through the network via its feedforward connections (since \(\beta=0\)), but may still propagate towards higher layers \(j>0\) via the feedforward prediction error correction mechanism, governed by parameter \(\alpha\). When \(\alpha+\lambda\leq 1\), we distinguish between three cases. Here and throughout, we denote by \(\mathrm{erf}\) the error function defined by
\[\mathrm{erf}(x):=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-z^{2}}\mathrm{d}z,\quad x \in\mathbb{R}.\]
* Case \(\alpha<\lambda\). In this case we have \[g_{j}^{n}=s_{0}\left(\frac{\alpha}{\lambda}\right)^{j}\left(1+\omega_{j}^{n}\right),\quad\text{ with }\quad\left|\omega_{j}^{n}\right|\leq Ce^{-\kappa n-\kappa j},\quad j\geq 1,\quad n\geq 1.\] It is interesting to note that the sequence \(\left(s_{0}\left(\frac{\alpha}{\lambda}\right)^{j}\right)_{j\geq 1}\) is a stationary solution to (3.12) and we have uniform convergence at exponential rate toward this stationary solution, that is \[\sup_{j\geq 1}\left|g_{j}^{n}-s_{0}\left(\frac{\alpha}{\lambda}\right)^{j}\right|\leq Ce^{-\kappa n}\underset{n\rightarrow+\infty}{\longrightarrow}0.\] We illustrate this uniform convergence in Figure 9 (a)-(d).
* Case \(\alpha=\lambda\). We have \[\left|g_{j}^{n}-s_{0}\left(1-\mathrm{erf}\left(\frac{j}{\sqrt{4\sigma_{0}n}} \right)\right)\right|\leq\frac{C}{n}\exp\left(-\kappa\frac{j^{2}}{n}\right), \quad j\geq 1,\quad n\geq 1.\]
In this case, we observe a slow convergence to the steady state \(s_{0}\). Indeed, for each \(\delta\in(0,1/2)\) we have \[\lim_{n\to+\infty}\sup_{1\leq j\leq n^{\delta}}\left|g_{j}^{n}-s_{0}\right|=0,\] while for any \(\delta>1/2\) we get \[\lim_{n\to+\infty}\sup_{j\geq n^{\delta}}\left|g_{j}^{n}\right|=0.\] The propagation is thus diffusive along \(j\sim\sqrt{n}\). This can be seen in Figure 9 (b)-(e).
* Case \(\lambda<\alpha\). In this case we have \[\left|g_{j}^{n}-\frac{s_{0}}{2}\left(1-\operatorname{erf}\left(\frac{j-c_{0}n }{\sqrt{4\sigma_{0}n}}\right)\right)\right|\leq\frac{C}{\sqrt{n}}\exp\left(- \kappa\frac{(j-c_{0}n)^{2}}{n}\right),\quad j\geq 1,\quad n\geq 1.\] In this case, we deduce that we have local uniform convergence towards the steady state \(s_{0}\), actually we have spreading at speed \(c_{0}\). More precisely, for any \(c\in(0,c_{0})\) we have \[\lim_{n\to+\infty}\sup_{1\leq j\leq cn}\left|g_{j}^{n}-s_{0}\right|=0,\] while for any \(c>c_{0}\), we get \[\lim_{n\to+\infty}\sup_{j\geq cn}\left|g_{j}^{n}\right|=0.\] We refer to Figure 9 (c)-(f) for an illustration. The figure clearly shows the competition between hyperparameters \(\alpha\) and \(\lambda\), with forward propagation of the sensory input only when \(\alpha\geq\lambda\).
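The three regimes above can be reproduced numerically. The sketch below (ours, assuming NumPy and SciPy) iterates (3.12) with \(\beta=0\) in the travelling-front regime \(\lambda<\alpha\) and compares the profile to the erf approximation:

```python
import numpy as np
from scipy.special import erf

alpha, lam, s0 = 0.4, 0.2, 1.0                 # lam < alpha: front moving at c0 > 0
c0, sigma0 = alpha - lam, (alpha + lam - (lam - alpha) ** 2) / 2
J, n = 2000, 2000
g = np.zeros(J + 1); g[0] = s0
for _ in range(n):
    left = np.concatenate(([s0], g[:-1]))       # g_{j-1}, clamped input below layer 1
    right = np.concatenate((g[1:], [0.0]))      # g_{j+1}, zero truncation at the top
    g = alpha * left + (1 - lam - alpha) * g + lam * right
    g[0] = s0                                   # re-impose the boundary value
j = np.arange(J + 1)
front = 0.5 * s0 * (1 - erf((j - c0 * n) / np.sqrt(4 * sigma0 * n)))
print(np.abs(g - front).max())                  # O(1/sqrt(n)), as stated above
```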
Case \(0<\beta<1\). Here, the stimulus information \(s_{0}\) propagates through the network not only via its feedforward connections (governed by \(\beta>0\)) but also via the feedforward prediction error correction mechanism, governed by parameter \(\alpha\). In the case where \(\alpha+\lambda\leq 1\), the results from the case \(\beta=0\) remain valid, with two differences: the above approximations in the case \(\lambda\leq\alpha\) are only valid for \(1\leq j\leq Ln\), for some large constant \(L>0\), with exponentially localized bounds for \(j\geq Ln\); and the steady state is now \(\left(s_{0}\left(\frac{\alpha+\beta}{\lambda}\right)^{j}\right)_{j\geq 1}\) whenever \(\alpha+\beta<\lambda\). This confirms that the feedforward propagation of the input \(s_{0}\) now depends on both terms \(\alpha\) and \(\beta\), jointly competing against the feedback term \(\lambda\).
Let us remark that when \(0<\beta<\alpha-\lambda\) and in the special case \(\alpha+\lambda=1\), where a second stable point exists for the amplification factor function at \(\rho(\pi)\), we can get a slightly more accurate description of the solution in the form
\[g_{j}^{n}=\frac{s_{0}}{2}\left(1-\operatorname{erf}\left(\frac{j-c_{0}n}{ \sqrt{4\sigma_{0}n}}\right)\right)-\frac{s_{0}}{2(1+\beta)}\frac{(-1)^{j}}{ \sqrt{4\pi\sigma_{\pi}n}}\exp\left(-\frac{(j-c_{\pi}n)^{2}}{4\sigma_{\pi}n} \right)+r_{j}^{n},\quad n\geq 1,\quad 1\leq j\leq Ln,\]
where the remainder term satisfies an estimate of the form
\[\left|r_{j}^{n}\right|\leq\frac{C}{\sqrt{n}}\exp\left(-\kappa\frac{(j-c_{0}n) ^{2}}{n}\right)+\frac{C}{n}\exp\left(-\kappa\frac{(j-c_{\pi}n)^{2}}{n}\right).\]
This is illustrated in Figure 10. It should be noted here that, while the main wavefront associated with \(c_{0}\) is a generic property of our network over the entire range of validity of the parameters \(0\leq\beta<1\) and \(\alpha+\lambda\leq 1\), the second oscillatory pattern associated with \(c_{\pi}\) only appears in the special case \(\beta\neq 0\) and \(\alpha+\lambda=1\). This oscillation is, in fact, an artifact from the discrete formulation of our problem, as will become evident in the next section, where we investigate continuous formulations of the problem.
### 3.3 Towards continuous predictive models
Starting from a discrete approximation of our system made sense, not only for mathematical convenience but also because artificial neural networks and deep learning systems implementing similar predictive coding principles are intrinsically discrete. Nonetheless, it can be useful to discard this discrete approximation and investigate our system in the continuous limit. Note that in the following, we will explore continuous extensions of our model in both time _and_ space. Biological neural networks, like any physical system, operate in continuous time and thus it is more biologically accurate to relax the temporal discretization assumption. This is what we do in the first part of this section. In the spatial domain, however, the discretization of our system into successive processing layers was not just an approximation, but also a reflection of the hierarchical anatomy of the brain. Nonetheless, we can still represent neuronal network depth continuously, even if only as a mathematical abstraction. This is what we will do in the subsequent part of this section. Understanding such continuous limits can allow us to test the robustness of our framework, and to relate it to canonical models whose dynamics have been more exhaustively characterized.
Figure 9: _Case \(\beta=0\) and \(\alpha+\lambda\leq 1\). Visualization of the solution \((g_{j}^{n})_{j\geq 1}\) of (3.12) when \(s_{0}^{n}=1\) for all \(n\geq 0\). Top row: space-time plots of the solution depending on \(\alpha\) and \(\lambda\). Bottom row: solution profiles at different time iterations (circles) compared with the leading order approximations. In the case \(\alpha<\lambda\), we observe uniform convergence (in space) at exponential rate (in time) toward the stationary solution \(s_{0}\left(\left(\frac{\alpha}{\lambda}\right)^{j}\right)_{j\geq 1}\) while when \(\lambda<\alpha\) we observe the propagation of a wavefront where the uniform steady state \(s_{0}\) is propagating at speed \(c_{0}>0\). In the intermediate case where \(\alpha=\lambda\) we get a diffusive invasion as illustrated by the curve \(j=\sqrt{4\sigma_{0}n}\)._
#### 3.3.1 Continuous in time interpretation
As a first step, we present a continuous in time interpretation of the model (3.9). We let \(\Delta t>0\) be some parameter which will represent a time step and reformulate the recurrence equation as
\[(1-\beta)\frac{e_{j}^{n+1}-e_{j}^{n}}{\Delta t}=\frac{\beta}{\Delta t}\left(e_{ j-1}^{n+1}-e_{j}^{n+1}\right)+\frac{\lambda}{\Delta t}\left(e_{j+1}^{n}-e_{j}^{n} \right)-\frac{\alpha}{\Delta t}\left(e_{j}^{n}-e_{j-1}^{n}\right).\]
We now interpret \(e_{j}^{n}\) as the approximation of some smooth function of time \(\mathbf{e}_{j}(t)\) evaluated at \(t_{n}:=n\Delta t\), that is \(e_{j}^{n}\sim\mathbf{e}_{j}(t_{n})\). As a consequence, we get that
\[e_{j}^{n+1}\sim\mathbf{e}_{j}(t_{n+1})=\mathbf{e}_{j}(t_{n}+\Delta t)= \mathbf{e}_{j}(t_{n})+\Delta t\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{j}(t _{n})+\mathcal{O}(|\Delta t|^{2}),\text{ as }\Delta t\to 0,\]
such that
\[\frac{e_{j}^{n+1}-e_{j}^{n}}{\Delta t}\sim\frac{\mathrm{d}}{\mathrm{d}t} \mathbf{e}_{j}(t_{n}),\text{ as }\Delta t\to 0.\]
Now, introducing the scaled parameters
\[\widetilde{\beta}:=\frac{\beta}{\Delta t},\quad\widetilde{\lambda}:=\frac{ \lambda}{\Delta t},\text{ and }\widetilde{\alpha}:=\frac{\alpha}{\Delta t},\]
we get at the limit \(\Delta t\to 0\) the following lattice ordinary differential equation
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{j}(t)=(\widetilde{\beta}+ \widetilde{\alpha})\mathbf{e}_{j-1}(t)-(\widetilde{\beta}+\widetilde{\alpha} +\widetilde{\lambda})\mathbf{e}_{j}(t)+\widetilde{\lambda}\mathbf{e}_{j+1}(t ),\quad t>0. \tag{3.13}\]
When defined on the infinite lattice \(\mathbb{Z}\), one can represent the solutions as
\[\mathbf{e}_{j}(t)=\sum_{k\in\mathbb{Z}}\mathbf{G}_{j-k}(t)h_{k},\quad j\in \mathbb{Z},\quad t>0,\]
starting from the initial sequence \(\mathbf{e}_{j}(t=0)=(h_{j})_{j\in\mathbb{Z}}\) where \((\mathbf{G}_{j}(t))_{j\in\mathbb{Z}}\) is the fundamental solution to (3.13) starting from the Dirac delta sequence \(\boldsymbol{\delta}\). Once again, each \(\mathbf{G}_{j}(t)\) can be represented by the inverse Fourier transform and reads

\[\mathbf{G}_{j}(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{\nu(\theta)t}e^{\mathbf{i}j\theta}\mathrm{d}\theta,\quad t>0,\quad j\in\mathbb{Z},\]

Figure 10: _Case \(0<\beta<1\) and \(\alpha+\lambda=1\) with \(0<\beta<\alpha-\lambda\). Visualization of the solution \((g_{j}^{n})_{j\geq 1}\) of (3.12) when \(s_{0}^{n}=1\) for all \(n\geq 0\). The solution is the superposition of a leading rightward front spreading at speed \(c_{0}>0\) and an oscillatory Gaussian profile propagating at speed \(c_{\pi}>0\) with \(c_{\pi}<c_{0}\)._
where the function \(\nu(\theta)\) is defined as
\[\nu(\theta):=(\widetilde{\beta}+\widetilde{\alpha})e^{-\mathbf{i}\theta}-( \widetilde{\beta}+\widetilde{\alpha}+\widetilde{\lambda})+\widetilde{\lambda}e ^{\mathbf{i}\theta},\quad\theta\in[-\pi,\pi].\]
The function \(\nu(\theta)\) serves as an amplification factor function for the time continuous equation (3.13). To ensure stability\({}^{1}\), one needs to impose that \(\mathrm{Re}(\nu(\theta))\leq 0\) for each \(\theta\in[-\pi,\pi]\). From its formula, we obtain that

Footnote 1: Note that the notions of stability/instability and marginal stability introduced in the fully discrete setting naturally extend to the semi-continuous setting.
\[\mathrm{Re}(\nu(\theta))=(\widetilde{\beta}+\widetilde{\alpha}+\widetilde{ \lambda})(\cos(\theta)-1),\quad\theta\in[-\pi,\pi],\]
such that we deduce that \(\mathrm{Re}(\nu(0))=0\) and \(\mathrm{Re}(\nu(\theta))<0\) for all \(\theta\in[-\pi,\pi]\backslash\{0\}\). In particular, it is now evident that, contrary to the discrete case, \(\nu(\pi)\) can never correspond to a marginally stable mode of the continuous system (except in the trivial case where all hyper-parameters \(\widetilde{\alpha},\widetilde{\beta},\widetilde{\lambda}\) are zero). This confirms that the previously observed oscillations associated with \(\rho(\pi)\) in specific cases were merely an artifact of the temporal discretization.
We note that, near the tangency point at \(\theta=0\), the function \(\nu(\theta)\) has the following asymptotic expansion
\[\nu(\theta)=-(\widetilde{\beta}+\widetilde{\alpha}+\widetilde{\lambda}) \mathbf{i}\theta-\frac{\widetilde{\beta}+\widetilde{\alpha}+\widetilde{\lambda }}{2}\theta^{2}+\mathcal{O}(|\theta|^{3}),\text{ as }\theta\to 0.\]
It is also possible to prove a Gaussian approximation in that case, and following for example [6], we have
\[\mathbf{G}_{j}(t)=\frac{1}{\sqrt{4\pi\widetilde{\sigma_{0}}t}}\exp\left(- \frac{|j-\widetilde{c_{0}}t|^{2}}{4\widetilde{\sigma_{0}}t}\right)+\mathbf{R} _{j}(t),\quad j\in\mathbb{Z},\quad t>0,\]
with
\[|\mathbf{R}_{j}(t)|\leq\frac{C}{\sqrt{t}}\exp\left(-\kappa\frac{|j-\widetilde {c_{0}}t|^{2}}{t}\right),\quad j\in\mathbb{Z},\quad t>0,\]
for some universal constants \(C>0\) and \(\kappa>0\). Here, \(\widetilde{c_{0}}\) and \(\widetilde{\sigma_{0}}\) are given by
\[\widetilde{c_{0}}=\widetilde{\beta}+\widetilde{\alpha}-\widetilde{\lambda}, \quad\text{ and }\quad\widetilde{\sigma_{0}}=\frac{\widetilde{\beta}+ \widetilde{\alpha}+\widetilde{\lambda}}{2}>0.\]
We remark that both \(\widetilde{c_{0}}\) and \(\widetilde{\sigma_{0}}\) are linked to \(c_{0}\) and \(\sigma_{0}\) (the propagation speed and spread of the solution in the case of the discrete model) in the following sense
\[\frac{c_{0}}{\Delta t}\to\widetilde{c_{0}},\quad\text{ and }\quad\frac{ \sigma_{0}}{\Delta t}\to\widetilde{\sigma_{0}},\quad\text{ as }\Delta t\to 0.\]
We also note that the spatially homogeneous solutions of (3.13) are trivial in the sense that if we assume that \(\mathbf{e}_{j}(t)=\mathbf{e}(t)\) for all \(j\in\mathbb{Z}\) then the equation satisfied by \(\mathbf{e}(t)\) is simply
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}(t)=0.\]
Finally, we conclude by noticing that in this continuous-in-time regime, there are no possible oscillations either in space or time, in the sense that the fundamental solution always resembles a fixed Gaussian profile advected at wave speed \(\widetilde{c_{0}}\). The formula for \(\widetilde{c_{0}}\) highlights the intuitive functional relation between the propagation (or advection) direction and the "competition" between the feedforward influences \(\widetilde{\alpha}+\widetilde{\beta}\) and the feedback influence \(\widetilde{\lambda}\).
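The lattice equation (3.13) can be integrated with any standard ODE solver; the sketch below (ours, assuming NumPy and SciPy, with a zero truncation of the lattice at both ends) starts from a Dirac delta and recovers the advected Gaussian travelling at speed \(\widetilde{c_{0}}=\widetilde{\beta}+\widetilde{\alpha}-\widetilde{\lambda}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

fb, lb = 1.2, 0.5        # fb = beta~ + alpha~ (feedforward), lb = lambda~ (feedback)
J = 400

def rhs(t, e):
    """Right-hand side of the lattice ODE (3.13)."""
    return (fb * np.concatenate(([0.0], e[:-1])) - (fb + lb) * e
            + lb * np.concatenate((e[1:], [0.0])))

e0 = np.zeros(J); e0[J // 4] = 1.0               # Dirac delta initial condition
sol = solve_ivp(rhs, (0.0, 120.0), e0, max_step=0.1)
drift = np.argmax(sol.y[:, -1]) - J // 4
print(drift, (fb - lb) * 120.0)                  # peak position vs c0~ * t
```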
#### 3.3.2 Fully continuous interpretation: both in time and depth
In this section, we give a possible physical interpretation of the discrete model (3.9) via continuous transport equations, in which both time and space (i.e., neuronal network depth) are made continuous. Let us introduce \(\Delta t>0\), \(\Delta x>0\) and set \(\nu:=\frac{\Delta x}{\Delta t}\). As before, we can view \(\Delta t\) as a time step for our system; additionally, \(\Delta x\) can be viewed as a spatial step in the (continuous) neuronal depth dimension, and thus \(\nu\) becomes akin to a neural propagation speed or a conduction velocity. We then rewrite the recurrence equation as
\[(1-\beta)\frac{e_{j}^{n+1}-e_{j}^{n}}{\Delta t}=\beta\nu\frac{e_{j-1}^{n+1}-e_ {j}^{n+1}}{\Delta x}+\lambda\nu\frac{e_{j+1}^{n}-e_{j}^{n}}{\Delta x}-\alpha \nu\frac{e_{j}^{n}-e_{j-1}^{n}}{\Delta x}.\]
The key idea is to now assume that \(e_{j}^{n}\) represents an approximation of some smooth function \(\mathbf{e}(t,x)\) evaluated at \(t_{n}:=n\Delta t\) and \(x_{j}:=j\Delta x\), that is \(e_{j}^{n}\sim\mathbf{e}(t_{n},x_{j})\). Then passing to the limit \(\Delta t\to 0\), \(\Delta x\to 0\) with \(\frac{\Delta x}{\Delta t}=\nu>0\) fixed and assuming that \(\beta+\alpha\neq\lambda\), one gets the partial differential equation
\[\partial_{t}\mathbf{e}(t,x)+\frac{\nu(\beta+\alpha-\lambda)}{1-\beta}\partial _{x}\mathbf{e}(t,x)=0,\quad t>0,\quad x>0, \tag{3.14}\]
with boundary condition \(\mathbf{e}(t,x=0)=s_{0}(t)\) and initial condition \(\mathbf{e}(t=0,x)=h(x)\) satisfying the compatibility condition \(s_{0}(0)=h(0)\), where \(s_{0}(t)\) is a smooth function such that \(s_{0}(t_{n})=s_{0}^{n}\) and \(h(x_{j})=h_{j}\). The above partial differential equation is a transport equation with associated speed \(\frac{\nu(\beta+\alpha-\lambda)}{1-\beta}=\nu c_{0}\). Depending on the sign of \(c_{0}\), we have a different representation for the solutions of (3.14).
* Case \(c_{0}<0\). Solution is given by \[\mathbf{e}(t,x)=h\left(x-\nu c_{0}t\right),\quad t>0,\quad x>0.\] Let us remark that when \(c_{0}<0\) the trace of the solution at \(x=0\) is entirely determined by the initial data \(h(x)\) since \[\mathbf{e}(t,x=0)=h\left(-\nu c_{0}t\right),\quad t>0.\] Intuitively, this reflects the dominance of backward (leftward) propagation in this network, with solutions determined entirely by the initial value \(h(x)\), even for \(x=0\) (the source term, \(s_{0}(t)\), having no influence in this case).
* Case \(c_{0}>0\). Solution is given by \[\mathbf{e}(t,x)=\left\{\begin{array}{ll}s_{0}\left(t-\frac{x}{\nu c_{0}} \right),&x\leq\nu c_{0}t,\\ h\left(x-\nu c_{0}t\right),&x>\nu c_{0}t,\end{array}\right.\quad t>0,\quad x >0.\] Intuitively, this reflects the dominance of forward (rightward) propagation in this network, with both the source term \(s_{0}(t)\) and the initial values \(h(x)\) transported at constant velocity \(\nu c_{0}\).
Thanks to the explicit form of the solutions, we readily obtain many qualitative properties of the solution \(\mathbf{e}(t,x)\). Boundedness and positivity of the solutions are inherited from the functions \(s_{0}(t)\) and \(h(x)\). In the case where \(\beta+\alpha=\lambda\) (i.e., with balanced feed-forward and feedback influences), the limiting equation is slightly different. Indeed, in this case, introducing \(\delta:=\frac{\Delta x^{2}}{\Delta t}\) and letting \(\Delta t\to 0\), \(\Delta x\to 0\) with \(\delta>0\) fixed, one gets the partial differential equation
\[\partial_{t}\mathbf{e}(t,x)=\frac{\delta(\beta+\alpha+\lambda)}{2(1-\beta)} \partial_{x}^{2}\mathbf{e}(t,x),\quad t>0,\quad x>0, \tag{3.15}\]
and we readily observe that when \(\beta+\alpha=\lambda\), we have that
\[\frac{\beta+\alpha+\lambda}{2(1-\beta)}=\frac{\beta(1-\alpha-\lambda)+\alpha+ \lambda-(\lambda-\alpha)^{2}}{2(1-\beta)^{2}}=\sigma_{0}>0.\]
We obtain a heat equation with a boundary condition \(\mathbf{e}(t,x=0)=s_{0}(t)\) and initial condition \(\mathbf{e}(t=0,x)=h(x)\). Upon denoting
\[\mathcal{S}(t,x,y):=\frac{1}{\sqrt{4\pi\delta\sigma_{0}t}}\left(e^{-\frac{(x-y )^{2}}{4\delta\sigma_{0}t}}-e^{-\frac{(x+y)^{2}}{4\delta\sigma_{0}t}}\right),\]
the solution of the equation is given by
\[\mathbf{e}(t,x)=s_{0}(t)+\int_{0}^{+\infty}\mathcal{S}(t,x,y)\left(h(y)-s_{0}(0)\right)\mathrm{d}y-\int_{0}^{t}\int_{0}^{+\infty}\mathcal{S}(t-s,x,y)s_{0}^{\prime}(s)\mathrm{d}y\mathrm{d}s,\quad t>0,\quad x>0.\]
Let us remark that when \(s_{0}(t)=s_{0}\in\mathbb{R}\) is constant for all \(t\geq 0\), the above expression simplifies to
\[\mathbf{e}(t,x)=s_{0}\left(1-\mathrm{erf}\left(\frac{x}{\sqrt{4\delta\sigma_{ 0}t}}\right)\right)+\int_{0}^{+\infty}\mathcal{S}(t,x,y)h(y)\mathrm{d}y,\quad t >0,\quad x>0.\]
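For the record, the constant-source solution above is straightforward to evaluate numerically. The sketch below is ours (assuming NumPy and SciPy; the product \(\delta\sigma_{0}\) and the initial profile \(h\) are chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.special import erf

ds0, s0 = 0.5, 1.0                         # ds0 stands for the product delta * sigma_0

def S(t, x, y):
    """Kernel S(t, x, y) of the half-line heat equation, as defined above."""
    z = 4.0 * ds0 * t
    return (np.exp(-(x - y) ** 2 / z) - np.exp(-(x + y) ** 2 / z)) / np.sqrt(np.pi * z)

def e_heat(t, x, h):
    """e(t, x) for the constant source s0, computed by a Riemann sum over y."""
    y = np.linspace(0.0, 80.0, 8001)
    integral = (S(t, x[:, None], y[None, :]) * h(y)[None, :]).sum(axis=1) * (y[1] - y[0])
    return s0 * (1.0 - erf(x / np.sqrt(4.0 * ds0 * t))) + integral

x = np.linspace(0.0, 20.0, 201)
print(e_heat(5.0, x, lambda y: np.exp(-y)))   # diffusive profile at time t = 5
```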
In conclusion, this section extended our discrete model towards a continuous limit in both space and time. In the temporal domain, it allowed us to understand our stable solution as an advection process, and showed that the other, apparently oscillatory, solutions previously observed in specific cases were mainly due to our discretization approximation. In the spatio-temporal domain, the continuous limit (3.14) allowed us to realize that our main equation (3.1) was merely a discrete version of a transport equation.
In the following sections, we will systematically return to discrete implementations (with gradually increasing functionality), before considering, again, their continuous formulations.
## 4 Beyond the identity case
In the previous section we have studied in depth the case where \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) are both the identity matrix: each neuron in any given layer directly conveys its activation value to a single corresponding neuron in the next layer, and to a single neuron in the previous layer. Motivated by concrete implementations of the model in deep neural networks [9, 32], we aim to investigate more realistic situations with more complex connectivity matrices. While the generic unconstrained case (i.e. two unrelated and dense connection matrices \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\)) does not easily lend itself to analytical study, we will consider here two situations of practical interest: in the first one, the forward and backward connection matrices are symmetric and identical; in the second case, each matrix is symmetric, but the two are not necessarily identical.
### 4.1 The symmetric Rao & Ballard case
Following the pioneering work of Rao & Ballard [27], we will assume in this section that \(\mathcal{W}^{f}=(\mathcal{W}^{b})^{\mathbf{t}}\) and \(\mathcal{W}^{f}\) is symmetric, which implies that
\[\mathcal{W}^{f}=\mathcal{W}^{b}\in\mathscr{S}_{d}(\mathbb{R}),\]
where we denoted \(\mathscr{S}_{d}(\mathbb{R})\) the set of symmetric matrices on \(\mathbb{R}^{d}\).
The underlying interpretation is that, if a strong synaptic connection exists from neuron \(a\) to neuron \(b\), then there is also a strong connection from \(b\) to \(a\). This assumption, which follows from Hebbian plasticity rules ("neurons that fire together wire together"), does not capture all of the diversity of brain connectivity patterns, but can be considered a good first approximation [27].
#### 4.1.1 Neural basis change and neural assemblies
The spectral theorem ensures that \(\mathcal{W}^{f}\) (and thus \(\mathcal{W}^{b}\)) is diagonalizable in an orthonormal basis. Namely, there exists an orthogonal invertible matrix \(P\in\mathscr{M}_{d}(\mathbb{R})\) such that \(PP^{\mathbf{t}}=P^{\mathbf{t}}P=\mathbf{I}_{d}\) and there exists a diagonal matrix denoted \(\mathcal{D}\in\mathscr{M}_{d}(\mathbb{R})\) such that
\[\mathcal{W}^{f}=\mathcal{W}^{b}=P\mathcal{D}P^{\mathbf{t}}.\]
We denote by \(\gamma_{p}\in\mathbb{R}\), \(p=1,\cdots,d\), the diagonal elements of \(\mathcal{D}\), and without loss of generality we may assume that
\[\gamma_{1}\leq\cdots\leq\gamma_{d}.\]
Thanks to this diagonalization, we can now perform a change of basis for our neuronal space. We set \(\mathcal{U}^{n}_{j}:=P^{\mathbf{t}}\mathcal{E}^{n}_{j}\), so that \(\mathcal{E}^{n}_{j}=P\,\mathcal{U}^{n}_{j}\). Each \(\mathcal{U}^{n}_{j}\) can now be understood as a neural _assembly_, reflecting one of the principal components of the weight matrix \(\mathcal{W}^{f}=\mathcal{W}^{b}\). Importantly, although assemblies may overlap, activity updates induced by feedforward or feedback connections to one given assembly do not affect the other assemblies, since the matrix \(P\) is orthogonal. Therefore, our problem is much simplified when considering activity update equations at the level of these neural assemblies \(\mathcal{U}^{n}_{j}\) rather than across individual neurons \(\mathcal{E}^{n}_{j}\). Our model (2.1) becomes
\[\mathcal{U}^{n+1}_{j}=\beta\mathcal{D}\mathcal{U}^{n+1}_{j-1}+\alpha\mathcal{D }\mathcal{U}^{n}_{j-1}+\left[(1-\beta-\lambda)\mathbf{I}_{d}-\alpha\mathcal{D }^{2}\right]\mathcal{U}^{n}_{j}+\lambda\mathcal{D}\mathcal{U}^{n}_{j+1}.\]
Note that, because all matrices in the above equation are diagonal, we have totally decoupled the \(d\) components of the vector \(\mathcal{U}^{n}_{j}\). More precisely, by denoting \(u^{n}_{j,p}\) the \(p\)th component of \(\mathcal{U}^{n}_{j}\), that is \(\mathcal{U}^{n}_{j}=(u^{n}_{j,1},\cdots,u^{n}_{j,d})^{\mathbf{t}}\), we obtain
\[u^{n+1}_{j,p}-\beta\gamma_{p}u^{n+1}_{j-1,p}=\alpha\gamma_{p}u^{n}_{j-1,p}+(1- \beta-\lambda-\alpha\gamma_{p}^{2})u^{n}_{j,p}+\lambda\gamma_{p}u^{n}_{j+1,p}, \quad p=1,\cdots,d.\]
This indicates that one needs to study
\[u^{n+1}_{j}-\beta\gamma u^{n+1}_{j-1}=\alpha\gamma u^{n}_{j-1}+(1-\beta- \lambda-\alpha\gamma^{2})u^{n}_{j}+\lambda\gamma u^{n}_{j+1}, \tag{4.1}\]
where \(\gamma\in\mathbb{R}\) is a given parameter. Here, \(\gamma\) can be thought of as the connection strength across layers (both feedforward and feedback, since we assumed here symmetric connectivity) of the neural assembly under consideration. By construction, each assembly in a given layer is only connected to the corresponding assembly in the layer above, and similarly in the layer below. Note that when \(\gamma=1\), we encounter again the exact situation that we studied in the previous section (3.9), but now with neural assemblies in lieu of individual neurons.
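As an illustration of how (4.1) can be simulated in practice, here is a minimal Python sketch; the finite chain length, the truncation \(u_{J+1}=0\) and the constant input at \(j=0\) are assumptions made for the example, not part of the analysis above.

```python
import numpy as np

alpha, beta, lam, gamma = 0.2, 0.2, 0.3, 1.0   # hypothetical hyper-parameters
J, N, u0 = 40, 200, 1.0                        # chain length, steps, input value

u = np.zeros(J + 2)      # u[0]: input layer, u[J+1]: truncation ghost cell
u[0] = u0
for n in range(N):
    new = np.zeros_like(u)
    new[0] = u0
    for j in range(1, J + 1):   # the feedforward term is implicit: sweep in j
        rhs = (alpha * gamma * u[j - 1]
               + (1.0 - beta - lam - alpha * gamma**2) * u[j]
               + lam * gamma * u[j + 1])
        new[j] = beta * gamma * new[j - 1] + rhs
    u = new

print(u[1:6])   # activity of the first few layers after N time steps
```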
#### 4.1.2 Study of the amplification factor function
Based on our previous analysis, the behavior of the solutions to (4.1) are intrinsically linked to the properties of the amplification factor function:
\[\rho_{\gamma}(\theta)=\frac{\alpha\gamma\left(e^{-\mathbf{i}\theta}-\gamma \right)+1-\beta+\lambda\left(\gamma e^{\mathbf{i}\theta}-1\right)}{1-\beta \gamma e^{-\mathbf{i}\theta}},\quad\theta\in[-\pi,\pi],\]
where one needs to ensure that \(|\rho_{\gamma}(\theta)|\leq 1\) for all \(\theta\in[-\pi,\pi]\). The very first condition is to ensure that \(1\neq\beta|\gamma|\) (to avoid division by zero). Next, we investigate the behavior of \(\rho_{\gamma}(\theta)\) at \(\theta=0\) and check under which condition on \(\gamma\) we can ensure that \(-1\leq\rho_{\gamma}(0)\leq 1\). We have
\[\rho_{\gamma}(0)=1+\frac{(1-\gamma)(\alpha\gamma-\lambda-\beta)}{1-\beta\gamma},\]
which readily tells us that \(\rho_{\gamma}(0)=1\) if and only if \(\gamma=1\) or \(\gamma=\frac{\lambda+\beta}{\alpha}\). And on the other hand \(\rho_{\gamma}(0)=-1\) if and only if \(\gamma=\gamma_{\pm}^{0}\) with
\[\gamma_{\pm}^{0}=\frac{\lambda+\alpha-\beta\pm\sqrt{(\lambda+\alpha-\beta)^{2 }+4\alpha(2-\lambda-\beta)}}{2\alpha},\]
with \(\gamma_{-}^{0}<0<\gamma_{+}^{0}\) since \(\lambda+\beta<2\) by assumption. One also notices that
\[(\lambda+\alpha-\beta)^{2}+4\alpha(2-\lambda-\beta)=(\lambda+3\alpha-\beta)^{ 2}+8\alpha(1-\alpha-\lambda),\]
such that either \(\alpha+\lambda=1\) and in that case \(\gamma_{-}^{0}=-1\) and \(\gamma_{+}^{0}=1+\frac{1-\beta}{\alpha}>1\), or \(\alpha+\lambda<1\) and in that case \(\gamma_{-}^{0}<-1\) and \(\gamma_{+}^{0}>1+\frac{1-\beta}{\alpha}>1\). Next, we remark that
\[\rho_{\gamma}(\pi)=1-\frac{(1+\gamma)(\alpha\gamma+\lambda+\beta)}{1+\beta \gamma}=\rho_{-\gamma}(0),\]
which then implies that \(\rho_{\gamma}(\pi)=1\) if and only if \(\gamma=-1\) or \(\gamma=-\frac{\lambda+\beta}{\alpha}\) and \(\rho_{\gamma}(\pi)=-1\) if and only if \(\gamma=-\gamma_{\pm}^{0}\).
As explained in the beginning, our aim is to completely characterize under which conditions on \(\gamma\in\mathbb{R}\), \(0\leq\beta<1\) and \(0<\alpha,\lambda<1\) with \(\alpha+\lambda\leq 1\), one can ensure that \(|\rho_{\gamma}(\theta)|\leq 1\) for all \(\theta\in[-\pi,\pi]\).
Potential regions of marginal stability are thus given by those values of the parameters satisfying \(\gamma=\pm 1\), \(\gamma=\pm\frac{\lambda+\beta}{\alpha}\), \(\gamma=\pm\gamma_{+}^{0}\) and \(\gamma=\pm\gamma_{-}^{0}\), and it is important to determine the intersections among the above regions. We have already proved that \(\gamma_{-}^{0}=-1\) whenever \(\alpha=1-\lambda\). Next, we compute that \(\gamma_{-}^{0}=-\frac{\lambda+\beta}{\alpha}\) whenever \(\alpha=\frac{\lambda(\lambda+\beta)}{1-\lambda-\beta}=:\Lambda\), while \(\gamma_{-}^{0}=-\gamma_{+}^{0}\) whenever \(\alpha=\beta-\lambda\) and \(\gamma_{+}^{0}=\frac{\lambda+\beta}{\alpha}\) when \(\alpha=\beta(\lambda+\beta)\). Let us already point out that \(\Lambda\) is only defined if \(\lambda+\beta<1\) and in that case \(\Lambda>0\).
We now introduce five regions in the quadrant \((\beta,\lambda)\in[0,1)\times(0,1)\) which are depicted in Figure 11(a). First, since \(1-\lambda=\Lambda=\lambda+\beta\) if and only if \(2\lambda+\beta=1\) (which corresponds to the blue line in Figure 11(a)), we deduce that when \(2\lambda+\beta\geq 1\) we have \(1-\lambda\leq\min(\Lambda,\lambda+\beta)\) which leads us to define the following two regions:
\[(\mathbf{I}) :=\left\{(\beta,\lambda)\in[0,1)\times(0,1)\ |\ 2\lambda+\beta \geq 1\text{ and }\beta\leq\lambda\right\},\] \[(\mathbf{II}) :=\left\{(\beta,\lambda)\in[0,1)\times(0,1)\ |\ 2\lambda+\beta \geq 1\text{ and }\beta>\lambda\right\}.\]
Now, when \(2\lambda+\beta<1\), we have the strict ordering \(0<\Lambda<\lambda+\beta<1-\lambda\) and when \(\beta>\lambda\) it is thus necessary to compare \(\Lambda\) to \(\beta-\lambda\). We remark that \(\Lambda=\beta-\lambda=\beta(\lambda+\beta)\) if and only if \(\beta^{2}-\beta(1-\lambda)+\lambda=0\), which corresponds to the yellow parabola in Figure 11(a). We thus define the following three regions:
\[(\mathbf{III}) :=\left\{(\beta,\lambda)\in[0,1)\times(0,1)\ |\ 2\lambda+\beta<1 \text{ and }\beta\leq\lambda\right\},\] \[(\mathbf{IV}) :=\left\{(\beta,\lambda)\in[0,1)\times(0,1)\ |\ 2\lambda+\beta<1,\beta>\lambda \text{ and }\beta^{2}-\beta(1-\lambda)+\lambda\geq 0\right\},\] \[(\mathbf{V}) :=\left\{(\beta,\lambda)\in[0,1)\times(0,1)\ |\ 2\lambda+\beta<1,\beta> \lambda\text{ and }\beta^{2}-\beta(1-\lambda)+\lambda<0\right\}.\]
Note that when \((\beta,\lambda)\) is in region \((\mathbf{IV})\), we have the ordering
\[0<\beta-\lambda\leq\Lambda<\lambda+\beta<1-\lambda,\]
while for \((\beta,\lambda)\) in region \((\mathbf{V})\), we have
\[0<\Lambda<\beta(\lambda+\beta)<\beta-\lambda<\lambda+\beta<1-\lambda.\]
Figure 11: _Stability/instability regions and their boundaries as a function of \((\alpha,\gamma)\) for (4.1), with \((\lambda,\beta)\) fixed in one of the five regions given in panel (a). Shaded orange regions correspond to an instability for (4.1) while purple regions correspond to a stability for (4.1). The boundaries of the stability/instability regions are given by the intersections of the parametrized curves \(\gamma=\pm 1\) (dark red curves), \(\gamma=\pm\gamma_{-}^{0}\) (dark blue curves), \(\gamma=\pm\gamma_{+}^{0}\) (pink curves) and \(\gamma=\pm\frac{\lambda+\beta}{\alpha}\) (magenta curves). Note that the region of interest is \(0<\alpha\leq 1-\lambda\). Along each such parametrized curve, equation (4.1) is marginally stable._
We can characterize the stability of our equation separately for each of the five regions defined in Figure 11(a). Since the region already determines the values of the parameters \(\beta\) and \(\lambda\), the stability is expressed as a function of the two remaining parameters \(\alpha\) and \(\gamma\); we refer to Figures 11(b-f) for a comprehensive representation of the stability regions. Note that the boundaries of the stability/instability regions are precisely given by the intersections of the parametrized curves \(\gamma=\pm 1\) (dark red curves), \(\gamma=\pm\gamma_{-}^{0}\) (dark blue curves), \(\gamma=\pm\gamma_{+}^{0}\) (pink curves) and \(\gamma=\pm\frac{\lambda+\beta}{\alpha}\) (magenta curves). Along each such parametrized curve, equation (4.1) is marginally stable. We comment below on the case \((\lambda,\beta)\) in Region (**III**); the other cases can be described in the same way, but we leave this out for conciseness.
Suppose that \((\lambda,\beta)\) belongs to Region (**III**). We present the results of Figure 11(d) by letting \(\alpha\) vary between \(0\) and \(1-\lambda\) and \(\gamma\in\mathbb{R}\). More precisely, for each fixed \(\alpha\in(0,1-\lambda)\) we investigate the stability properties as a function of \(\gamma\). We have to distinguish between several subcases.
1. If \(0<\alpha<\Lambda\). Then, equation (4.1) is stable for each \(\gamma\in(-1,1)\), unstable for \(|\gamma|>1\) and marginally stable at \(\gamma=\pm 1\) with \(\rho_{1}(0)=1\) and \(\rho_{-1}(\pm\pi)=1\).
2. If \(\alpha=\Lambda\). Then \(\gamma_{-}^{0}=-\frac{\lambda+\beta}{\alpha}\) and equation (4.1) is stable for each \(\gamma\in(-1,1)\), unstable for \(|\gamma|>|\gamma_{-}^{0}|\) and \(|\gamma_{-}^{0}|>|\gamma|>1\), whereas it is marginally stable at \(\gamma=\pm 1\) and at \(\gamma=\pm\gamma_{-}^{0}\) with \(\rho_{1}(0)=1\), \(\rho_{-1}(\pm\pi)=1\), \(\rho_{\gamma_{-}^{0}}(0)=-1\) and \(\rho_{-\gamma_{-}^{0}}(\pm\pi)=-1\) together with \(\rho_{-\gamma_{-}^{0}}(0)=1\), \(\rho_{\gamma_{-}^{0}}(\pm\pi)=1\).
3. If \(\Lambda<\alpha<\lambda+\beta\). Then, equation (4.1) is stable for each \(\gamma\in(-1,1)\) and \(\frac{\lambda+\beta}{\alpha}<|\gamma|<|\gamma_{-}^{0}|\), unstable for \(|\gamma|>|\gamma_{-}^{0}|\) and \(\frac{\lambda+\beta}{\alpha}>|\gamma|>1\) and marginally stable at \(\gamma\in\left\{\pm 1,\pm\frac{\lambda+\beta}{\alpha},\pm\gamma_{-}^{0}\right\}\) with \(\rho_{1}(0)=1\), \(\rho_{-1}(\pm\pi)=1\), \(\rho_{\frac{\lambda+\beta}{\alpha}}(0)=1\), \(\rho_{-\frac{\lambda+\beta}{\alpha}}(\pm\pi)=1\), \(\rho_{\gamma_{-}^{0}}(0)=-1\) and \(\rho_{-\gamma_{-}^{0}}(\pm\pi)=-1\).
4. If \(\alpha=\lambda+\beta\). Then, equation (4.1) is stable for each \(\gamma\in(\gamma_{-}^{0},-\gamma_{-}^{0})\backslash\{\pm 1\}\), unstable for \(|\gamma|>|\gamma_{-}^{0}|\) and marginally stable at \(\gamma\in\left\{\pm 1,\pm\gamma_{-}^{0}\right\}\) with \(\rho_{1}(0)=1\), \(\rho_{-1}(\pm\pi)=1\), \(\rho_{\gamma_{-}^{0}}(0)=-1\) and \(\rho_{-\gamma_{-}^{0}}(\pm\pi)=-1\). Remark that in this case we have \(\frac{\lambda+\beta}{\alpha}=1\).
5. If \(\lambda+\beta<\alpha<1-\lambda\). Then, equation (4.1) is stable for each \(\gamma\in\left(-\frac{\lambda+\beta}{\alpha},\frac{\lambda+\beta}{\alpha}\right)\) and \(1<|\gamma|<|\gamma_{-}^{0}|\), unstable for \(|\gamma|>|\gamma_{-}^{0}|\) and \(1>|\gamma|>\frac{\lambda+\beta}{\alpha}\) and marginally stable at \(\gamma\in\left\{\pm 1,\pm\frac{\lambda+\beta}{\alpha},\pm\gamma_{-}^{0}\right\}\) with \(\rho_{1}(0)=1\), \(\rho_{-1}(\pm\pi)=1\), \(\rho_{\frac{\lambda+\beta}{\alpha}}(0)=1\), \(\rho_{-\frac{\lambda+\beta}{\alpha}}(\pm\pi)=1\), \(\rho_{\gamma_{-}^{0}}(0)=-1\) and \(\rho_{-\gamma_{-}^{0}}(\pm\pi)=-1\).
6. If \(\alpha=1-\lambda\). Then, equation (4.1) is stable for each \(\gamma\in\left(-\frac{\lambda+\beta}{\alpha},\frac{\lambda+\beta}{\alpha}\right)\), unstable for \(|\gamma|>|\gamma_{-}^{0}|=1\) and \(1>|\gamma|>\frac{\lambda+\beta}{\alpha}\) and marginally stable at \(\gamma\in\left\{\pm 1,\pm\frac{\lambda+\beta}{\alpha}\right\}\) with \(\rho_{1}(0)=1\) and \(\rho_{1}(\pm\pi)=-1\), \(\rho_{-1}(0)=-1\), with \(\rho_{\frac{\lambda+\beta}{\alpha}}(0)=1\), \(\rho_{-\frac{\lambda+\beta}{\alpha}}(\pm\pi)=1\). Remark that in this case we have \(\gamma_{-}^{0}=-1\).
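Each of the sub-cases above can be verified numerically by scanning \(\theta\) over a fine grid; the following minimal Python sketch (our illustration, not the paper's code) reports \(\max_{\theta}|\rho_{\gamma}(\theta)|\), using the Region (**III**), sub-case 1 values \((\alpha,\beta,\lambda)=(0.2,0.2,0.3)\) as an example.

```python
import numpy as np

def max_rho(alpha, beta, lam, gamma, n_theta=4001):
    """Maximal modulus of the amplification factor over theta in [-pi, pi]."""
    theta = np.linspace(-np.pi, np.pi, n_theta)
    z = np.exp(-1j * theta)   # z = e^{-i theta}, so gamma e^{i theta} = gamma / z
    rho = (alpha * gamma * (z - gamma) + 1.0 - beta
           + lam * (gamma / z - 1.0)) / (1.0 - beta * gamma * z)
    return np.abs(rho).max()

print(max_rho(0.2, 0.2, 0.3, 1.0))   # ~1: marginally stable (gamma = 1)
print(max_rho(0.2, 0.2, 0.3, 1.5))   # > 1 expected here (|gamma| > 1, alpha < Lambda)
```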
Summary. In summary, we see that stability is nearly guaranteed whenever \(-1<\gamma<1\), regardless of the values of other parameters (as long as \(0<\alpha<1-\lambda\)). This makes intuitive sense: since \(\gamma\) represents the connection strength across layers of a particular neural assembly, a connection weight \(|\gamma|<1\) implies that the activity of this assembly will remain bounded across layers. Additionally, and perhaps more interestingly, in some but not all regions (e.g. Regions II and V) stability can be obtained for much larger values of \(|\gamma|\); this, however, appears to coincide with low values of the \(\alpha\) parameter. In other words, for high connection strengths \(|\gamma|\), the feedforward error correction term \(\alpha\) makes the system unstable.
#### 4.1.3 Wave speed characterization
In the previous section (The Identity Case), we proved that the direction of propagation is given by the sign of \(c_{0}\) and \(c_{\pi}\), whenever they exist, which can be read off from the behavior of \(\rho_{\gamma}(\theta)\) near \(\theta=0\) or \(\theta=\pm\pi\). We have reported the values of \(c_{0}^{\gamma}\) and \(c_{\pi}^{\gamma}\) for different values of \(\gamma\) in Table 1. For example, in Figure 12 we illustrate the changes in propagation speed and direction for \(c_{0}^{\gamma}\) in the case \((\lambda,\beta)\) in Region (**III**) (as defined in Figure 11(a)), but the calculations remain valid for the other regions.
It is worth emphasizing that for fixed values of the hyper-parameters \(\alpha\), \(\beta\) and \(\lambda\), we see here that varying \(\gamma\) can give rise to different propagation speeds or even different directions. As each neuronal assembly \(u_{j,p}\) in a given layer \(j\) is associated with its own connection strength \(\gamma_{p}\), it follows that different speeds and even different directions of propagation can concurrently be obtained in a single network, one for each assembly. For instance, in a given network with hyperparameters \(\alpha=0.2\), \(\beta=0.2\) and \(\lambda=0.3\) (region **III**), a neural assembly with a connection strength of \(\gamma=1\) would propagate forward at a relatively slow speed, while another with \(\gamma=2.5\) would propagate in the same direction at a much faster speed, and yet another assembly with \(\gamma=\gamma_{-}^{0}\approx-2.09\) would simultaneously propagate in the opposite backward direction.
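To make this concrete, the speed \(c_{0}^{\gamma}\) can be extracted numerically from the phase of \(\rho_{\gamma}\) near \(\theta=0\), since \(\rho_{\gamma}(\theta)=\exp(-\mathbf{i}c_{0}^{\gamma}\theta+\mathcal{O}(|\theta|^{2}))\) whenever \(\rho_{\gamma}(0)=1\). Below is a hedged Python sketch with the Region (**III**) values quoted above; the finite-difference step is an arbitrary choice.

```python
import numpy as np

def rho(alpha, beta, lam, gamma, theta):
    z = np.exp(-1j * theta)
    return (alpha * gamma * (z - gamma) + 1.0 - beta
            + lam * (gamma / z - 1.0)) / (1.0 - beta * gamma * z)

def speed_c0(alpha, beta, lam, gamma, theta=1e-6):
    # rho(theta) ~ exp(-i c theta) near 0, so c ~ -arg(rho(theta)) / theta
    return -np.angle(rho(alpha, beta, lam, gamma, theta)) / theta

for g in (1.0, 2.5):   # both satisfy rho_gamma(0) = 1 for these hyper-parameters
    print(g, speed_c0(0.2, 0.2, 0.3, g))
```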
#### 4.1.4 Continuous in time interpretation
We can repeat the "continuous system" analysis conducted in the previous section (The Identity Case), which led to (3.13), but this time with Rao-Ballard connection matrices between layers. With the same scaling on the hyperparameters
\[\widetilde{\beta}:=\frac{\beta}{\Delta t},\quad\widetilde{\lambda}:=\frac{ \lambda}{\Delta t},\text{ and }\widetilde{\alpha}:=\frac{\alpha}{\Delta t},\]
we get that, at the limit \(\Delta t\to 0\), the equation (4.1) becomes the following lattice ordinary differential equation
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{u}_{j}(t)=(\widetilde{\beta}+ \widetilde{\alpha})\gamma\mathbf{u}_{j-1}(t)-(\widetilde{\beta}+\widetilde{ \lambda}+\widetilde{\alpha}\gamma^{2})\mathbf{u}_{j}(t)+\widetilde{\lambda} \gamma\mathbf{u}_{j+1}(t),\quad t>0. \tag{4.2}\]
Note that the neuronal layer activity is now expressed in terms of neural assemblies \(\mathbf{u}_{j}\) rather than individual neurons \(\mathbf{e}_{j}\).
The amplification factor function in this case is given by
\[\nu_{\gamma}(\theta)=(\widetilde{\beta}+\widetilde{\alpha})\gamma e^{- \mathbf{i}\theta}-(\widetilde{\beta}+\widetilde{\lambda}+\widetilde{\alpha} \gamma^{2})+\widetilde{\lambda}\gamma e^{\mathbf{i}\theta},\quad\theta\in[- \pi,\pi],\]
whose real part is given by
\[\mathrm{Re}(\nu_{\gamma}(\theta))=(\widetilde{\beta}+\widetilde{\alpha}+ \widetilde{\lambda})\gamma\cos(\theta)-(\widetilde{\beta}+\widetilde{\lambda} +\widetilde{\alpha}\gamma^{2}),\quad\theta\in[-\pi,\pi].\]
When \(\gamma>0\), we observe that
\[\max_{\theta\in[-\pi,\pi]}\mathrm{Re}(\nu_{\gamma}(\theta))=\mathrm{Re}(\nu_{ \gamma}(0))=(\widetilde{\lambda}+\widetilde{\beta}-\widetilde{\alpha}\gamma)( \gamma-1),\]
whereas when \(\gamma<0\), we have
\[\max_{\theta\in[-\pi,\pi]}\mathrm{Re}(\nu_{\gamma}(\theta))=\mathrm{Re}(\nu_{ \gamma}(\pm\pi))=-(\widetilde{\lambda}+\widetilde{\beta}+\widetilde{\alpha} \gamma)(\gamma+1).\]
As a consequence, the stability analysis in this case is very simple and depends only on the relative position of \(\gamma\) with respect to \(\pm 1\) and \(\pm\frac{\widetilde{\lambda}+\widetilde{\beta}}{\widetilde{\alpha}}\). It is summarized in Figure 13.
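As a quick numerical cross-check of this continuous-time analysis, the sketch below evaluates \(\max_{\theta}\mathrm{Re}(\nu_{\gamma}(\theta))\) for a few values of \(\gamma\); the zero crossings should occur exactly at \(\gamma=1\) and \(\gamma=\frac{\widetilde{\lambda}+\widetilde{\beta}}{\widetilde{\alpha}}\) for \(\gamma>0\). The tilde parameter values are arbitrary illustrative choices.

```python
import numpy as np

alpha_t, beta_t, lam_t = 1.0, 0.5, 0.3   # tilde hyper-parameters (hypothetical)

def max_re_nu(g, n=2001):
    theta = np.linspace(-np.pi, np.pi, n)
    nu = ((beta_t + alpha_t) * g * np.exp(-1j * theta)
          - (beta_t + lam_t + alpha_t * g**2)
          + lam_t * g * np.exp(1j * theta))
    return nu.real.max()

for g in (0.5, 1.0, (lam_t + beta_t) / alpha_t, 1.5):
    print(g, round(max_re_nu(g), 6))   # zero at g = 1 and g = 0.8, negative otherwise
```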
Figure 13: _Stability/instability regions and their boundaries as a function of \((\widetilde{\alpha},\gamma)\) for (4.2) for any \((\widetilde{\lambda},\widetilde{\beta})\) fixed. The shaded orange region corresponds to an instability for (4.2) while the purple region corresponds to a stability for (4.2). The boundaries of the stability/instability regions are given by the intersections of the parametrized curves \(\gamma=\pm 1\) (dark red curves) and \(\gamma=\pm\frac{\lambda+\beta}{\alpha}\) (magenta curves) where equation (4.2) is marginally stable._

The simple behavior illustrated in Figure 13 for our continuous system contrasts with the number and diversity of behaviors obtained for the discrete version of the same system (Figure 11). A number of points are worth highlighting. For instance, although the values of \(\beta\) and \(\lambda\) were critical for the discrete system (defining the regions (I) to (V)), they do not affect the qualitative behavior of the continuous system. Furthermore, some observations in the continuous system appear to contradict the conclusions made previously in the discrete case. We see that stability can still be obtained with high values of the connection weight \(\gamma\gg 1\), but this time the stable regions coincide with high \(\alpha\) values, whereas it was the opposite in Figure 11, panels (b) and (f). This qualitative difference in behavior can be taken as a point of caution, to remind us that a discrete approximation of the system can be associated with important errors in interpretation.
Finally we note that, while the stability regions are qualitatively different in the continuous case compared to the discrete approximation, the speed and direction of propagation of neural signals (reflected in the variables \(c_{0}\) and \(c_{\pi}\) when they exist) remain comparable.
#### 4.1.5 A class of examples
In this section, we provide a class of examples of \(\mathcal{W}^{f}\) amenable to a complete analysis. Namely we consider \(\mathcal{W}^{f}\) as the following linear combination
\[\mathcal{W}^{f}=\zeta\mathbf{I}_{d}+\xi\mathbf{A}, \tag{4.3}\]
for some \(\zeta,\xi\in\mathbb{R}\) where \(\mathbf{A}\in\mathscr{M}_{d}(\mathbb{R})\) is given by
\[\mathbf{A}=\begin{pmatrix}-2&1&0&\cdots&\cdots&0\\ 1&-2&1&\ddots&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&1&-2&1\\ 0&\cdots&\cdots&0&1&-2\end{pmatrix}.\]
The matrix \(\mathbf{A}\) is nothing but the discrete Laplacian, and \(\mathcal{W}^{f}\) acts as a convolution operator on \(\mathbb{R}^{d}\). More precisely, \(\mathcal{W}^{f}\) combines a convolution term with a residual connection term, as in the well-known ResNet architecture [18]. Let us also note that the spectrum of \(\mathbf{A}\) is well known and given by
\[\mathrm{Spec}(\mathbf{A})=\left\{-4\sin^{2}\left(\frac{p\pi}{2(d+1)}\right), \quad p=1,\cdots,d\right\}.\]
As a consequence, the spectrum of \(\mathcal{W}^{f}\) is simply given by
\[\mathrm{Spec}(\mathcal{W}^{\mathbf{f}})=\left\{\zeta-4\xi\sin^{2}\left(\frac{ p\pi}{2(d+1)}\right),\quad p=1,\cdots,d\right\}.\]
One can for example set
\[\zeta=\frac{\sin^{2}\left(\frac{d\pi}{2(d+1)}\right)+\sin^{2}\left(\frac{\pi} {2(d+1)}\right)}{\sin^{2}\left(\frac{d\pi}{2(d+1)}\right)-\sin^{2}\left(\frac{ \pi}{2(d+1)}\right)}\quad\text{ and }\quad\xi=\frac{1}{2\left(\sin^{2}\left(\frac{d\pi}{2(d+1)} \right)-\sin^{2}\left(\frac{\pi}{2(d+1)}\right)\right)},\]
such that
\[\zeta-4\xi\sin^{2}\left(\frac{d\pi}{2(d+1)}\right)=-1,\quad\zeta-4\xi\sin^{2} \left(\frac{\pi}{2(d+1)}\right)=1,\]
and for all \(p=2,\cdots,d-1\)
\[\zeta-4\xi\sin^{2}\left(\frac{p\pi}{2(d+1)}\right)\in(-1,1).\]
Next, for any \(p=1,\cdots,d\) the eigenvector corresponding to the eigenvalue \(-4\sin^{2}\left(\frac{p\pi}{2(d+1)}\right)\) is
\[U_{p}=\left(\sin\left(\frac{p\pi}{d+1}\right),\cdots,\sin\left(\frac{pk\pi}{d+1}\right),\cdots,\sin\left(\frac{pd\pi}{d+1}\right)\right)^{\mathbf{t}}\in\mathbb{R}^{d}.\]
\(U_{p}\) is the projection vector that corresponds to the \(p^{th}\) neural assembly \(u_{j,p}\) as defined above.
Along \(U_{1}\), the recurrence equation reduces to (4.1) with \(\gamma=1\), while along \(U_{d}\), the recurrence equation reduces to (4.1) with \(\gamma=-1\), and we can apply the results of the previous section (the Identity case). In between (for all \(1\leq p\leq d\)) we see that the eigenvalues of our connection matrix \(\mathcal{W}^{\mathbf{f}}\) span the entire range between \(-1\) and \(1\), that they can be explicitly computed, and thus that the stability, propagation speed and direction of activity in the corresponding neural assembly can be determined.
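A short numerical sketch can confirm this construction: with \(\zeta\) and \(\xi\) chosen as above, the spectrum of \(\mathcal{W}^{f}=\zeta\mathbf{I}_{d}+\xi\mathbf{A}\) should span \([-1,1]\) with extreme eigenvalues exactly \(\pm 1\). The dimension \(d=32\) is an arbitrary choice for the check.

```python
import numpy as np

d = 32                                   # arbitrary dimension for the check
A = -2.0 * np.eye(d) + np.eye(d, k=1) + np.eye(d, k=-1)   # discrete Laplacian

s1 = np.sin(np.pi / (2 * (d + 1))) ** 2
sd = np.sin(d * np.pi / (2 * (d + 1))) ** 2
zeta = (sd + s1) / (sd - s1)
xi = 1.0 / (2.0 * (sd - s1))

Wf = zeta * np.eye(d) + xi * A
eigs = np.sort(np.linalg.eigvalsh(Wf))
print(eigs[0], eigs[-1])                 # expected: -1.0 and 1.0 (up to rounding)
```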
#### 4.1.6 Fully continuous interpretation in time, depth and width.
For the same class of examples (connection matrix composed of a convolution and residual terms), we now wish to provide a fully continuous interpretation for model (2.1) in the special case \(\zeta=1\) and \(\xi\) adjusted as follows. By fully continuous, we mean that we explore the limit of our model when not only time \(t\), but also network depth \(x\)_and_ neuronal layer width \(y\) are considered as continuous variables. Although we already presented a model that was continuous in both _time_ and _depth_ in subsection 3.3.2, the layers in that model only comprised a single neuron, and had no intrinsic spatial dimension. We now introduce this third continuous dimension. The starting point is to see \(\mathcal{E}_{j,k}^{n}\), the \(k\)th element of \(\mathcal{E}_{j}^{n}\), as an approximation of some continuous function \(\mathcal{E}(t,x,y)\) evaluated at \(t_{n}=n\Delta t\), \(x_{j}=j\Delta x\) and \(y_{k}=k\Delta y\) for some \(\Delta t>0\), \(\Delta x>0\) and \(\Delta y>0\). Let us first remark that the action of \(\mathbf{A}\) on \(\mathcal{E}_{j}^{n}\) is given by
\[(\mathbf{A}\mathcal{E}_{j}^{n})_{k}=\mathcal{E}_{j,k-1}^{n}-2\mathcal{E}_{j,k }^{n}+\mathcal{E}_{j,k+1}^{n},\]
which can be seen as a discrete approximation of \(\partial_{y}^{2}\mathcal{E}(t_{n},x_{j},y_{k})\) up to a scaling factor of order \(\Delta y^{2}\). Once again, setting \(\nu=\frac{\Delta x}{\Delta t}\) and introducing \(\kappa=\frac{\Delta y^{2}}{\Delta t}\), we may rewrite (2.1) with \(\mathcal{W}^{f}=\mathcal{W}^{b}=\mathbf{I}_{d}+\xi\mathbf{A}\) as
\[(1-\beta)\frac{\mathcal{E}_{j}^{n+1}-\mathcal{E}_{j}^{n}}{ \Delta t} =\beta\nu\frac{\mathcal{E}_{j-1}^{n+1}-\mathcal{E}_{j}^{n+1}}{ \Delta x}+\lambda\nu\frac{\mathcal{E}_{j+1}^{n}-\mathcal{E}_{j}^{n}}{\Delta x }-\alpha\nu\frac{\mathcal{E}_{j}^{n}-\mathcal{E}_{j-1}^{n}}{\Delta x}\] \[\quad+\beta\xi\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j -1}^{n+1}+\alpha\xi\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j-1}^{n}- 2\alpha\xi\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j}^{n}-\alpha \xi^{2}\kappa\Delta y^{2}\frac{\mathbf{A}}{\Delta y^{2}}\frac{\mathbf{A}}{ \Delta y^{2}}\mathcal{E}_{j}^{n}\] \[\quad+\lambda\xi\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_ {j+1}^{n}.\]
Now letting \(\Delta t\to 0\), \(\Delta x\to 0\) and \(\Delta y\to 0\) with \(\nu\) and \(\kappa\) fixed, we obtain the following partial differential equation
\[\partial_{t}\mathcal{E}(t,x,y)+\frac{\nu(\beta+\alpha-\lambda)}{1-\beta} \partial_{x}\mathcal{E}(t,x,y)=\frac{\xi\kappa(\beta+\lambda-\alpha)}{1-\beta} \partial_{y}^{2}\mathcal{E}(t,x,y).\]
This is a diffusion equation along the \(y\) dimension combined with a transport along the \(x\) direction. As such, it is only well defined (or stable) when the diffusion coefficient in front of \(\partial_{y}^{2}\mathcal{E}(t,x,y)\) is positive. This depends on the signs of \(\xi\) and \(\beta+\lambda-\alpha\), which need to verify \(\xi(\beta+\lambda-\alpha)>0\). In that case, the system diffuses neural activity along the dimension \(y\), such that the entire neuronal layer converges to a single, uniform activation value as \(t\to\infty\).
### The general symmetric case
Finally, we now wish to relax some of the assumptions made in the previous Rao-Ballard case. Thus, the last case that we present is one where we assume that
* \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) are symmetric matrices, that is \(\mathcal{W}^{f},\mathcal{W}^{b}\in\mathscr{S}_{d}(\mathbb{R})\),
* \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) commute, that is \(\mathcal{W}^{f}\mathcal{W}^{b}=\mathcal{W}^{b}\mathcal{W}^{f}\).
But we do not necessarily impose that \(\mathcal{W}^{f}=(\mathcal{W}^{b})^{\mathbf{t}}\) as in the previous Rao & Ballard case. Let us already note that examples of matrices verifying the above conditions are the residual convolution matrices introduced in (4.3), that is \(\mathcal{W}^{f}=\zeta_{f}\mathbf{I}_{d}+\xi_{f}\mathbf{A}\) and \(\mathcal{W}^{b}=\zeta_{b}\mathbf{I}_{d}+\xi_{b}\mathbf{A}\) for some \(\zeta_{b,f},\xi_{b,f}\in\mathbb{R}\). Under assumptions (i) and (ii), \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\) can be diagonalized in the same orthonormal basis, meaning that there exists an invertible orthogonal matrix \(P\in\mathscr{M}_{d}(\mathbb{R})\) such that \(PP^{\mathbf{t}}=P^{\mathbf{t}}P=\mathbf{I}_{d}\), and two diagonal matrices \(\mathcal{D}^{f}\in\mathscr{M}_{d}(\mathbb{R})\) and \(\mathcal{D}^{b}\in\mathscr{M}_{d}(\mathbb{R})\) with the properties that
\[P^{\mathfrak{t}}\mathcal{W}^{f}P=\mathcal{D}^{f},\quad\text{ and }\quad P^{ \mathfrak{t}}\mathcal{W}^{b}P=\mathcal{D}^{b}.\]
For future reference, we denote by \(\gamma_{p}^{f,b}\) for each \(1\leq p\leq d\) the diagonal elements of \(\mathcal{D}^{f,b}\). Once again, we can use the matrix \(P\) to apply an orthonormal basis change and create neural _assemblies_ \(\mathcal{U}_{j}^{n}:=P^{\mathbf{t}}\mathcal{E}_{j}^{n}\). With \(\mathcal{E}_{j}^{n}=P\,\mathcal{U}_{j}^{n}\), the recurrence equation becomes
\[\mathcal{U}_{j}^{n+1}-\beta\mathcal{D}^{f}\mathcal{U}_{j-1}^{n+1}=\alpha \mathcal{D}^{b}\mathcal{U}_{j-1}^{n}+\left[(1-\beta-\lambda)\mathbf{I}_{d}- \alpha\mathcal{D}^{b}\mathcal{D}^{b}\right]\mathcal{U}_{j}^{n}+\lambda \mathcal{D}^{b}\mathcal{U}_{j+1}^{n}.\]
Note that, because all matrices in the above equation are diagonal, we have also totally decoupled the \(d\) components of the vector \(\mathcal{U}_{j}^{n}\). More precisely, by denoting \(u_{j,p}^{n}\) the \(p\)th component of \(\mathcal{U}_{j}^{n}\), that is \(\mathcal{U}_{j}^{n}=(u_{j,1}^{n},\cdots,u_{j,d}^{n})^{\mathfrak{t}}\), we obtain
\[u_{j,p}^{n+1}-\beta\gamma_{p}^{f}u_{j-1,p}^{n+1}=\alpha\gamma_{p}^{b}u_{j-1,p}^ {n}+(1-\beta-\lambda-\alpha\left(\gamma_{p}^{b}\right)^{2})u_{j,p}^{n}+\lambda \gamma_{p}^{b}u_{j+1,p}^{n},\quad p=1,\cdots,d.\]
This indicates that one needs to study
\[u_{j}^{n+1}-\beta\gamma_{1}u_{j-1}^{n+1}=\alpha\gamma_{2}u_{j-1}^{n}+(1-\beta- \lambda-\alpha\gamma_{2}^{2})u_{j}^{n}+\lambda\gamma_{2}u_{j+1}^{n}, \tag{4.4}\]
where \(\gamma_{1,2}\in\mathbb{R}\) are now two given parameters. As before, \(\gamma_{1,2}\) can be thought of as the connection strength across layers of the neural assembly under consideration. By construction, each assembly in a given layer is only connected to the corresponding assembly in the layer above, and similarly in the layer below, with \(\gamma_{1}\) for the feedforward direction and \(\gamma_{2}\) for the feedback direction. Note that \(\gamma_{1}=\gamma_{2}\) would then correspond to the Rao-Ballard situation studied previously.
#### 4.2.1 Study of the amplification factor function
Repeating the previous analysis, one needs to understand the amplification factor
\[\rho_{\gamma_{1},\gamma_{2}}(\theta)=\frac{\alpha\gamma_{2}\left(e^{- \mathfrak{i}\theta}-\gamma_{2}\right)+1-\beta+\lambda\left(\gamma_{2}e^{ \mathfrak{i}\theta}-1\right)}{1-\beta\gamma_{1}e^{-\mathfrak{i}\theta}}, \quad\theta\in[-\pi,\pi].\]
We already note a symmetry property of the amplification factor function which reads
\[\rho_{\gamma_{1},\gamma_{2}}(\theta)=\rho_{-\gamma_{1},-\gamma_{2}}(\theta\pm \pi),\quad\theta\in[-\pi,\pi].\]
As a consequence, whenever \(\rho_{\gamma_{1},\gamma_{2}}(0)=\pm 1\) one has \(\rho_{-\gamma_{1},-\gamma_{2}}(\pm\pi)=\pm 1\) for the same values of the parameters. Then, we note that
\[\rho_{\gamma_{1},\gamma_{2}}(0)=1\Longleftrightarrow\gamma_{1}=\chi(\gamma_{ 2}),\]
where the function \(\chi(x)\), depending only on the hyper-parameters, is given by
\[\chi(x):=\frac{\alpha x^{2}-(\alpha+\lambda)x+\lambda+\beta}{\beta},x\in \mathbb{R}. \tag{4.5}\]
Thus, using the above symmetry, we readily deduce that
\[\rho_{\gamma_{1},\gamma_{2}}(\pm\pi)=1\Longleftrightarrow\gamma_{1}=-\chi(- \gamma_{2}).\]
Finally, we compute that
\[\rho_{\gamma_{1},\gamma_{2}}(0)=-1\Longleftrightarrow\gamma_{1}=\zeta(\gamma _{2}),\]
where the function \(\zeta(x)\), depending only on the hyper-parameters, is given by
\[\zeta(x):=\frac{-\alpha x^{2}+(\alpha+\lambda)x+2-\lambda-\beta}{\beta},x\in \mathbb{R}. \tag{4.6}\]
Using the above symmetry, we readily deduce that
\[\rho_{\gamma_{1},\gamma_{2}}(\pm\pi)=-1\Longleftrightarrow\gamma_{1}=-\zeta(- \gamma_{2}).\]
A complete and exhaustive characterization of all possible cases as a function of \(\gamma_{1,2}\) and the hyper-parameters is beyond the scope of this paper. Nevertheless, we can make a few further remarks. The four above curves \(\gamma_{1}=\chi(\gamma_{2})\), \(\gamma_{1}=-\chi(-\gamma_{2})\), \(\gamma_{1}=\zeta(\gamma_{2})\) and \(\gamma_{1}=-\zeta(-\gamma_{2})\) form parabolas in the plane \((\gamma_{1},\gamma_{2})\) that can intersect and provide the boundaries of the stability regions. For example, we can notice that \(\gamma_{1}=\zeta(\gamma_{2})\) and \(\gamma_{1}=-\zeta(-\gamma_{2})\) intersect if and only if \(\gamma_{2}=\pm\sqrt{\frac{2-\lambda-\beta}{\alpha}}\), whereas \(\gamma_{1}=\chi(\gamma_{2})\) and \(\gamma_{1}=-\chi(-\gamma_{2})\) can never intersect. We refer to Figure 14 for an illustration of the stability regions and their boundaries in the case \((\alpha,\beta,\lambda)=(0.4,0.2,0.3)\). Here, we see that stability can be obtained with large values of the feedforward connection strength \(\gamma_{1}\), but this requires the feedback connection strength \(\gamma_{2}\) to remain low. Of course, different qualitative behaviors and stability regions may be obtained for different choices of the hyperparameters \((\alpha,\beta,\lambda)\); while it is beyond the scope of the present study to characterize them all, it is important to point out that such a characterization is feasible using the present method, for any choice of the hyperparameters.
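The boundary curves themselves are straightforward to evaluate; the following illustrative sketch implements \(\chi\) and \(\zeta\) from (4.5)-(4.6) and verifies the intersection claims just made, with the parameter values of Figure 14.

```python
import numpy as np

alpha, beta, lam = 0.4, 0.2, 0.3   # the hyper-parameters used in Figure 14

chi = lambda x: (alpha * x**2 - (alpha + lam) * x + lam + beta) / beta       # (4.5)
zet = lambda x: (-alpha * x**2 + (alpha + lam) * x + 2 - lam - beta) / beta  # (4.6)

g2 = np.sqrt((2 - lam - beta) / alpha)
print(zet(g2) + zet(-g2))     # ~0: the curves gamma_1 = zeta(gamma_2) and
                              # gamma_1 = -zeta(-gamma_2) intersect at gamma_2 = +/- g2
print(chi(1.0) + chi(-1.0))   # always > 0: the chi-curves never intersect
```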
More interestingly, we can investigate the dependence of the wave speed as a function of the parameters \(\gamma_{1}\) and \(\gamma_{2}\). For example, when \(\gamma_{1}=\chi(\gamma_{2})\), we have that
\[\rho_{\chi(\gamma_{2}),\gamma_{2}}(\theta)=\exp\left(-\mathbf{i}\frac{(\alpha- \lambda)\gamma_{2}+\beta\chi(\gamma_{2})}{1-\beta\chi(\gamma_{2})}\theta+ \mathcal{O}(|\theta|^{2})\right),\text{ as }\theta\to 0,\]
such that the associated wave speed is given by
\[c_{0}^{\chi}=\frac{(\alpha-\lambda)\gamma_{2}+\beta\chi(\gamma_{2})}{1-\beta \chi(\gamma_{2})},\]
whose sign may vary as \(\gamma_{2}\) is varied. We refer to the forthcoming section 4.2.4 below for a practical example (see Figure 18).
#### 4.2.2 Continuous in time interpretation
As done in previous sections, we now perform a continuous in time limit of the model (4.4). With the same scaling on the hyperparameters
\[\widetilde{\beta}:=\frac{\beta}{\Delta t},\quad\widetilde{\lambda}:=\frac{ \lambda}{\Delta t},\text{ and }\widetilde{\alpha}:=\frac{\alpha}{\Delta t},\]
we get that, at the limit \(\Delta t\to 0\), the equation (4.4) becomes the following lattice ordinary differential equation
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{u}_{j}(t)=(\widetilde{\beta}\gamma_{1}+ \widetilde{\alpha}\gamma_{2})\mathbf{u}_{j-1}(t)-(\widetilde{\beta}+ \widetilde{\lambda}+\widetilde{\alpha}\gamma_{2}^{2})\mathbf{u}_{j}(t)+ \widetilde{\lambda}\gamma_{2}\mathbf{u}_{j+1}(t),\quad t>0. \tag{4.7}\]
The amplification factor function in this case is given by
\[\nu_{\gamma_{1},\gamma_{2}}(\theta)=(\widetilde{\beta}\gamma_{1}+\widetilde{ \alpha}\gamma_{2})e^{-\mathbf{i}\theta}-(\widetilde{\beta}+\widetilde{\lambda} +\widetilde{\alpha}\gamma_{2}^{2})+\widetilde{\lambda}\gamma_{2}e^{\mathbf{ i}\theta},\quad\theta\in[-\pi,\pi],\]
whose real part is given by
\[\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}(\theta))=(\widetilde{\beta}\gamma_{1} +(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2})\cos(\theta)-(\widetilde{ \beta}+\widetilde{\lambda}+\widetilde{\alpha}\gamma_{2}^{2}),\quad\theta\in[- \pi,\pi].\]
When \(\widetilde{\beta}\gamma_{1}+(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}>0\), we observe that
\[\underset{\theta\in[-\pi,\pi]}{\max}\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}( \theta))=\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}(0))=\widetilde{\beta}\gamma_{ 1}+(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}-(\widetilde{\beta}+ \widetilde{\lambda}+\widetilde{\alpha}\gamma_{2}^{2}),\]
such that
\[\underset{\theta\in[-\pi,\pi]}{\max}\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}( \theta))=0\Longleftrightarrow\gamma_{1}=\frac{\widetilde{\alpha}\gamma_{2}^{2} -(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}+\widetilde{\beta}+ \widetilde{\lambda}}{\widetilde{\beta}}.\]
Figure 14: _Stability/instability regions and their boundaries as a function of \((\gamma_{1},\gamma_{2})\) for (4.4) for fixed values of the hyperparameters \((\alpha,\beta,\lambda)\). The shaded orange region corresponds to an instability for (4.4) while the purple region corresponds to a stability for (4.4). The boundaries of the stability/instability regions are given by the intersections of the parametrized curves \(\gamma_{1}=\chi(\gamma_{2})\) (blue curve), \(\gamma_{1}=-\chi(-\gamma_{2})\) (light blue curve), \(\gamma_{1}=\zeta(\gamma_{2})\) (dark green curves) and \(\gamma_{1}=-\zeta(-\gamma_{2})\) (light green curves) where equation (4.4) is marginally stable. We represented the line \(\gamma_{1}=\gamma_{2}\) (black curve) which corresponds to the case studied in Figure 11(d) with \((\beta,\lambda)\) in Region (**III**) and \(\alpha\) fixed in \((\Lambda,\lambda+\beta)\)._
Whereas, when \(\widetilde{\beta}\gamma_{1}+(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}<0\), we observe that
\[\max_{\theta\in[-\pi,\pi]}\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}(\theta))= \mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}(\pm\pi))=-\widetilde{\beta}\gamma_{1}- (\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}-(\widetilde{\beta}+ \widetilde{\lambda}+\widetilde{\alpha}\gamma_{2}^{2}),\]
such that
\[\max_{\theta\in[-\pi,\pi]}\mathrm{Re}(\nu_{\gamma_{1},\gamma_{2}}(\theta))=0 \Longleftrightarrow\gamma_{1}=\frac{-\widetilde{\alpha}\gamma_{2}^{2}-( \widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}-\widetilde{\beta}-\widetilde {\lambda}}{\widetilde{\beta}}.\]
As a consequence, the stability regions are determined by the locations of the parabolas \(\gamma_{2}\mapsto\frac{\widetilde{\alpha}\gamma_{2}^{2}-(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}+\widetilde{\beta}+\widetilde{\lambda}}{\widetilde{\beta}}\) and \(\gamma_{2}\mapsto\frac{-\widetilde{\alpha}\gamma_{2}^{2}-(\widetilde{\alpha}+\widetilde{\lambda})\gamma_{2}-\widetilde{\beta}-\widetilde{\lambda}}{\widetilde{\beta}}\) in the plane \((\gamma_{1},\gamma_{2})\). We observe that they never intersect and are oriented in opposite directions; we refer to Figure 15 for a typical configuration. Here, we see that the system is stable for a very large range of values of both \(\gamma_{1}\) and \(\gamma_{2}\). In particular, for large enough values of the feedback connection weight (e.g. \(|\gamma_{2}|>3\)), stability is guaranteed regardless of the value of the feedforward connection weight \(\gamma_{1}\) (within a reasonable range, e.g. \(\gamma_{1}\in(-10,10)\)). This is the opposite of the behavior obtained for the discrete system in Figure 14, where stability was impossible under the same conditions for \(\gamma_{1,2}\). This highlights again the errors of interpretation that can potentially be caused by discrete approximation of a continuous system.
#### 4.2.3 Fully continuous interpretation when \(\mathcal{W}^{f}=\mathbf{I}_{d}+\xi_{f}\mathbf{A}\) and \(\mathcal{W}^{b}=\mathbf{I}_{d}+\xi_{b}\mathbf{A}\).
When \(\mathcal{W}^{f}=\mathbf{I}_{d}+\xi_{f}\mathbf{A}\) and \(\mathcal{W}^{b}=\mathbf{I}_{d}+\xi_{b}\mathbf{A}\), one can once again identify \(\mathcal{E}^{n}_{j,k}\) as the approximation of some smooth function \(\mathcal{E}(t,x,y)\) at \(t_{n}=n\Delta t\), \(x_{j}=j\Delta x\) and \(y_{k}=k\Delta y\), along the three dimensions of time,
network depth and neuronal layer width. We may rewrite (2.1) in this case as
\[(1-\beta)\frac{\mathcal{E}_{j}^{n+1}-\mathcal{E}_{j}^{n}}{\Delta t} =\beta\nu\frac{\mathcal{E}_{j-1}^{n+1}-\mathcal{E}_{j}^{n+1}}{ \Delta x}+\lambda\nu\frac{\mathcal{E}_{j+1}^{n}-\mathcal{E}_{j}^{n}}{\Delta x}- \alpha\nu\frac{\mathcal{E}_{j}^{n}-\mathcal{E}_{j-1}^{n}}{\Delta x}\] \[\quad+\beta\xi_{f}\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E }_{j-1}^{n+1}+\alpha\xi_{b}\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j -1}^{n}-2\alpha\xi_{b}\kappa\frac{\mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j}^{n}- \alpha\xi_{b}^{2}\kappa\Delta y^{2}\frac{\mathbf{A}}{\Delta y^{2}}\frac{ \mathbf{A}}{\Delta y^{2}}\mathcal{E}_{j}^{n}\] \[\quad+\lambda\xi_{b}\kappa\frac{\mathbf{A}}{\Delta y^{2}} \mathcal{E}_{j+1}^{n},\]
such that in the limit \(\Delta t\to 0\), \(\Delta x\to 0\) and \(\Delta y\to 0\) with \(\nu\) and \(\kappa\) fixed, we obtain the following partial differential equation
\[\partial_{t}\mathcal{E}(t,x,y)+\frac{\nu(\beta+\alpha-\lambda)}{1-\beta} \partial_{x}\mathcal{E}(t,x,y)=\kappa\frac{\beta\xi_{f}+(\lambda-\alpha)\xi_{b }}{1-\beta}\partial_{y}^{2}\mathcal{E}(t,x,y).\]
As before, this is a diffusion equation along the \(y\) dimension, whose stability depends on the positivity of the diffusion coefficient, i.e. \(\beta\xi_{f}+(\lambda-\alpha)\xi_{b}\geq 0\).
#### 4.2.4 Application to a ring model of orientations
Going back to our discrete system, in this section we consider the case where neurons within each layer encode for a given orientation in \([0,\pi]\). Here, we have in mind visual stimuli which are made of a fixed elongated black bar on a white background with a prescribed orientation. We introduce the following matrix \(\mathbf{A}_{\text{per}}\in\mathscr{M}_{d}(\mathbb{R})\) given by
\[\mathbf{A}_{\text{per}}=\begin{pmatrix}-2&1&0&\cdots&0&1\\ 1&-2&1&\ddots&\ddots&0\\ 0&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\ddots&\ddots&1&-2&1\\ 1&0&\cdots&0&1&-2\end{pmatrix},\]
which is nothing but the discretization of the Laplacian with periodic boundary conditions. Indeed, for each \(\mathcal{E}_{j}^{n}\in\mathbb{R}^{d}\), we assume that neuron \(\mathcal{E}_{j,k}^{n}\) encodes for orientation \(\frac{k}{d}\pi\) for \(k=1,\cdots,d\). We readily remark that \(0\in\operatorname{Spec}(\mathbf{A}_{\text{per}})\) with corresponding eigenvector \(U_{1}=(1,\cdots,1)^{\mathbf{t}}\in\mathbb{R}^{d}\). Furthermore, we have:
* if \(d=2m+1\) is odd, then \(\lambda_{p}=-4\sin\left(\frac{p\pi}{d}\right)^{2}\) with \(p=1,\cdots,m\) is an eigenvalue of \(\mathbf{A}_{\text{per}}\) of multiplicity \(2\) with associated eigenvectors \[U_{2p} =\left(\cos\left(\frac{2p\pi}{d}\right),\cdots,\cos\left(\frac{2kp \pi}{d}\right),\cdots,1\right)^{\mathbf{t}}\in\mathbb{R}^{d},\] \[U_{2p+1} =\left(\sin\left(\frac{2p\pi}{d}\right),\cdots,\sin\left(\frac{2 kp\pi}{d}\right),\cdots,0\right)^{\mathbf{t}}\in\mathbb{R}^{d};\]
* if \(d=2m\) is even, then \(\lambda_{p}=-4\sin\left(\frac{p\pi}{d}\right)^{2}\) with \(p=1,\cdots,m-1\) is an eigenvalue of \(\mathbf{A}_{\text{per}}\) of multiplicity \(2\) with associated eigenvectors \(U_{2p}\) and \(U_{2p+1}\) as above. And \(\lambda=-4\) is a simple eigenvalue of \(\mathbf{A}_{\text{per}}\) with associated eigenvector \(U_{d}=(-1,1,-1,1,\cdots,-1,1)\in\mathbb{R}^{d}\).
It may be interesting to note that any linear combinations of \(U_{2p}\) and \(U_{2p+1}\) can always be written in the form
\[aU_{2p}+bU_{2p+1}=A\left(\cos\left(\frac{2p\pi}{d}+\varphi\right),\cdots,\cos \left(\frac{2kp\pi}{d}+\varphi\right),\cdots,\cos(\varphi)\right)^{\mathbf{t}} \in\mathbb{R}^{d},\]
where \(A=\sqrt{a^{2}+b^{2}}>0\) and \(\varphi=-\arctan\left(\frac{b}{a}\right)\in(-\pi/2,\pi/2)\) whenever \(a\neq 0\) and \(b\neq 0\). This means that \(U_{2p}\) and \(U_{2p+1}\) span all possible translations modulo \([0,\pi]\) of a fixed profile. We refer to Figure 16 for a visualization of the first eigenvectors. In short, these eigenvectors \(U_{i}\) implement a Fourier transform of the matrix \(\mathbf{A}_{\mathrm{per}}\).
We now set \(\mathcal{W}^{b}\) to be
\[\mathcal{W}^{b}=\frac{1}{2}\mathbf{I}_{d}-\frac{1}{4}\mathbf{A}_{\mathrm{per} }=\begin{pmatrix}1&-\frac{1}{4}&0&\cdots&0&-\frac{1}{4}\\ -\frac{1}{4}&1&-\frac{1}{4}&\ddots&\ddots&0\\ 0&\ddots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\ddots&\ddots&-\frac{1}{4}&1&-\frac{1}{4}\\ -\frac{1}{4}&0&\cdots&0&-\frac{1}{4}&1\end{pmatrix},\]
which means that \(\mathcal{W}^{b}\) acts as a convolution with local excitation and lateral inhibition. From now on, to fix ideas, we will assume that \(d=2m\) is even. We define the following matrix
\[P=(U_{1},U_{2},\cdots,U_{d})\in\mathscr{M}_{d}(\mathbb{R}).\]
As a consequence, we have the decomposition
\[P^{\mathbf{t}}\mathcal{W}^{b}P=\mathcal{D}^{b},\]
with \(\mathcal{D}^{b}=\operatorname{diag}\left(\frac{1}{2},\frac{1}{2}-\frac{1}{4} \lambda_{1},\frac{1}{2}-\frac{1}{4}\lambda_{1},\cdots,\frac{1}{2}-\frac{1}{4} \lambda_{m-1},\frac{1}{2}-\frac{1}{4}\lambda_{m-1},\frac{3}{2}\right)\in \mathscr{M}_{d}(\mathbb{R})\). Now, for given values of the hyper-parameters \((\alpha,\beta,\lambda)\) with \(\beta>0\), we set \(\mathcal{D}^{f}:=\chi(\mathcal{D}^{b})\) where the map \(\chi\), defined in (4.5), is applied to the diagonal elements of \(\mathcal{D}^{b}\), that is
\[\mathcal{D}^{f}=\operatorname{diag}\left(\chi\left(\frac{1}{2}\right),\chi \left(\frac{1}{2}-\frac{1}{4}\lambda_{1}\right),\cdots,\chi\left(\frac{1}{2}- \frac{1}{4}\lambda_{m-1}\right),\chi\left(\frac{3}{2}\right)\right)\in\mathscr{ M}_{d}(\mathbb{R}).\]
Figure 16: _We plot the eigenvectors \(U_{2p}\) and \(U_{2p+1}\) for \(p=1\) and \(p=2\) as a function of \(\frac{k}{d}\pi\) for \(k=1,\cdots,d\). We note that \(U_{2p}\) and \(U_{2p+1}\) encode the first Fourier modes. Here we have set \(d=2^{5}\)._
We then set \(\mathcal{W}^{f}:=P\mathcal{D}^{f}P^{\mathbf{t}}\). We refer to Figure 17 for an illustration of the structures of the matrices \(\mathcal{W}^{f}\) and \(\mathcal{W}^{b}\). For a large set of values of the hyper-parameters, \(\mathcal{W}^{f}\) still presents a band structure with positive elements on the diagonals, indicating that \(\mathcal{W}^{f}\) can also be interpreted as a convolution with local excitation. For the values of the hyper-parameters fixed in Figure 17, the feedforward matrix \(\mathcal{W}^{f}\) is purely excitatory.
Reproducing the analysis developed in the previous Subsection 4.2, we perform a change of orthonormal basis to express neural activities in terms of the relevant _assemblies_\(\mathcal{U}^{n}_{j}:=P^{t}\mathcal{E}^{n}_{j}\). With \(P\mathcal{U}^{n}_{j}:=\mathcal{E}^{n}_{j}\), the recurrence equation becomes
\[\mathcal{U}^{n+1}_{j}-\beta\mathcal{D}^{f}\mathcal{U}^{n+1}_{j-1}=\alpha \mathcal{D}^{b}\mathcal{U}^{n}_{j-1}+\left[(1-\beta-\lambda)\mathbf{I}_{d}- \alpha\mathcal{D}^{b}\mathcal{D}^{b}\right]\mathcal{U}^{n}_{j}+\lambda \mathcal{D}^{b}\mathcal{U}^{n}_{j+1}.\]
If we denote by \(\gamma_{p}\) the \(p\)th diagonal element of \(\mathcal{D}^{b}\), then for each \(p=1,\cdots,d\) the above recurrence writes
\[u^{n+1}_{j,p}-\beta\chi(\gamma_{p})u^{n+1}_{j-1,p}=\alpha\gamma_{p}u^{n}_{j-1,p}+(1-\beta-\lambda-\alpha\gamma_{p}^{2})u^{n}_{j,p}+\lambda\gamma_{p}u^{n}_{ j+1,p},\]
where \(u^{n}_{j,p}\) is the \(p\)th component (or neural assembly) of \(\mathcal{U}^{n}_{j}\). For each \(p=1,\cdots,d\), the associated amplification factor function reads
\[\rho_{p}(\theta)=\frac{\alpha\gamma_{p}\left(e^{-\mathrm{i}\theta}-\gamma_{p} \right)+1-\beta+\lambda\left(\gamma_{p}e^{\mathrm{i}\theta}-1\right)}{1-\beta \chi(\gamma_{p})e^{-\mathrm{i}\theta}},\quad\theta\in[-\pi,\pi],\]
and with our specific choice of function \(\chi\), we have that \(\rho_{p}(0)=1\) with
\[\rho_{p}(\theta)=\exp\left(-\mathrm{i}\frac{(\alpha-\lambda)\gamma_{p}+\beta \chi(\gamma_{p})}{1-\beta\chi(\gamma_{p})}\theta-\sigma_{0}^{p}\theta^{2}+ \mathcal{O}(|\theta|^{3})\right),\text{ as }\theta\to 0,\]
such that the associated wave speed is given by
\[c_{0}^{p}=\frac{(\alpha-\lambda)\gamma_{p}+\beta\chi(\gamma_{p})}{1-\beta \chi(\gamma_{p})},\]
and where we have set
\[\sigma_{0}^{p}=\frac{\alpha(1+4\lambda)\gamma_{p}^{2}+\beta+\lambda-(\alpha+ \lambda)\gamma_{p}(\alpha\gamma_{p}^{2}+\beta+\lambda)}{2(1-\beta\chi(\gamma _{p}))^{2}}.\]
From now on, we assume that we have tuned the hyper-parameters such that \(|\rho_{p}(\theta)|<1\) for all \(\theta\in[-\pi,\pi]\setminus\{0\}\) and each \(p=1,\cdots,d\). This can in fact be systematically checked numerically for a given set of hyper-parameters. We report in Figure 18 the shape of \(p\mapsto c_{0}^{p}\) for the same values of the hyper-parameters as the ones in Figure 17 and \(d=2^{5}\). We first remark that \(p\mapsto c_{0}^{p}\) is a monotone decreasing map, and in our specific case we have
\[c_{0}^{d}<c_{0}^{d-1}=c_{0}^{d-2}<\cdots<c_{0}^{9}=c_{0}^{8}<0<c_{0}^{7}=c_{0}^ {6}<c_{0}^{5}=c_{0}^{4}<c_{0}^{3}=c_{0}^{2}<c_{0}^{1}.\]
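The numerical check mentioned above can be sketched as follows: we assemble the diagonal gains \(\gamma_{p}\) of \(\mathcal{D}^{b}\), apply \(\chi\) from (4.5) to obtain the feedforward gains, and evaluate both \(c_{0}^{p}\) and \(\max_{\theta}|\rho_{p}(\theta)|\). This is an illustrative reconstruction assuming the hyper-parameters of Figure 18.

```python
import numpy as np

alpha, beta, lam, d = 0.1, 0.1, 0.5, 32   # hyper-parameters of Figure 18
m = d // 2
lam_q = -4.0 * np.sin(np.arange(1, m) * np.pi / d) ** 2          # double eigenvalues
gains = np.concatenate(([0.5], np.repeat(0.5 - lam_q / 4.0, 2), [1.5]))  # diag of D^b

chi = lambda x: (alpha * x**2 - (alpha + lam) * x + lam + beta) / beta   # (4.5)
theta = np.linspace(-np.pi, np.pi, 2001)
z = np.exp(-1j * theta)   # z = e^{-i theta}

for p, g in enumerate(gains, start=1):
    c0 = ((alpha - lam) * g + beta * chi(g)) / (1.0 - beta * chi(g))
    rho = (alpha * g * (z - g) + 1.0 - beta + lam * (g / z - 1.0)) \
          / (1.0 - beta * chi(g) * z)
    print(p, round(c0, 3), round(np.abs(rho).max(), 6))   # speed c_0^p, max |rho_p|
```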
Given a fixed input entry \(\mathcal{E}_{0}\in\mathbb{R}^{d}\) presented at \(j=0\) to the network continually at each time step, we can deduce which of its components will be able to propagate forward through the network. More precisely, we can decompose \(\mathcal{E}_{0}\) along the basis \((U_{1},\cdots,U_{d})\) of eigenvectors, that is
\[\mathcal{E}_{0}=\sum_{p=1}^{d}a_{p}U_{p},\]
for some real coefficients \(a_{p}\) for \(p=1,\cdots,d\). Assuming that the network was at rest initially, we get that the dynamics along each eigenvector (or neural assembly) is given by
\[\left\{\begin{aligned} u_{j,p}^{n+1}-\beta\chi(\gamma_{p})u_{j-1,p}^ {n+1}&=\alpha\gamma_{p}u_{j-1,p}^{n}+(1-\beta-\lambda-\alpha \gamma_{p}^{2})u_{j,p}^{n}+\lambda\gamma_{p}u_{j+1,p}^{n},\quad j\geq 1, \quad n\geq 0,\\ u_{0,p}^{n}&=a_{p},\quad n\geq 0,\\ u_{j,p}^{0}&=0,\quad j\geq 1.\end{aligned}\right. \tag{4.8}\]
Thus, we readily obtain that
\[\mathcal{E}_{j}^{n}=\sum_{p=1}^{d}u_{j,p}^{n}U_{p},\quad j\geq 1,\quad n\geq 1,\]
where \(u_{j,p}^{n}\) is a solution to (4.8).
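For completeness, a minimal simulation sketch of (4.8) on a truncated chain is given below; the chain length \(J\), the truncation \(u_{J+1,p}^{n}=0\) and the sample values are assumptions made for illustration. The full profile \(\mathcal{E}_{j}^{n}\) is then recovered by summing \(u_{j,p}^{n}U_{p}\) over \(p\).

```python
import numpy as np

def simulate_mode(a_p, g, alpha, beta, lam, chi, J=30, N=400):
    """Advance one neural assembly under (4.8) on a truncated chain."""
    u = np.zeros(J + 2)          # u[0]: input layer, u[J+1]: truncation ghost
    u[0] = a_p
    for n in range(N):
        new = np.zeros_like(u)
        new[0] = a_p
        for j in range(1, J + 1):   # implicit feedforward term: sweep in j
            rhs = (alpha * g * u[j - 1]
                   + (1.0 - beta - lam - alpha * g**2) * u[j]
                   + lam * g * u[j + 1])
            new[j] = beta * chi(g) * new[j - 1] + rhs
        u = new
    return u[1:J + 1]

alpha, beta, lam = 0.1, 0.1, 0.5
chi = lambda x: (alpha * x**2 - (alpha + lam) * x + lam + beta) / beta   # (4.5)
print(simulate_mode(1.0, 0.5, alpha, beta, lam, chi)[:5])   # g = 0.5: first mode
```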
As a consequence, the monotonicity property of the map \(p\mapsto c_{0}^{p}\) indicates that the homogeneous constant mode \(U_{1}\) is the fastest to propagate forward into the network, with associated spreading speed \(c_{0}^{1}\); it is then followed by the modes \((U_{2},U_{3})\) propagating at speed \(c_{0}^{2}=c_{0}^{3}\). In our numerics, we have set the parameters such that \(c_{0}^{1}\approx c_{0}^{2}=c_{0}^{3}\), with a significant gap with the other wave speeds. Let us remark that all modes \(U_{p}\) with \(p\geq 8\) are not able to propagate into the network (see Figure 19). Thus our architecture acts as a mode filter.

Figure 18: _Plot of the wave speed \(c_{0}^{p}\) for \(p=1,\cdots,d\) (colored dots). The color code (blue/red) refers to the sign of \(c_{0}^{p}\): blue when positive and dark red when negative. Note that only the elements associated to the eigenvalues \(\frac{1}{2}\), \(\frac{1}{2}-\frac{1}{4}\lambda_{1}\), \(\frac{1}{2}-\frac{1}{4}\lambda_{2}\) and \(\frac{1}{2}-\frac{1}{4}\lambda_{3}\) are positive. We also remark that \(c_{0}^{p}\) is a monotone decreasing function. Here \(d=2^{5}\) and values of the hyper-parameters are fixed to \((\alpha,\beta,\lambda)=(0.1,0.1,0.5)\)._
Even more precisely, let us remark that the sequence \(\left(a_{p}\left(\frac{\alpha\gamma_{p}+\beta\chi(\gamma_{p})}{\lambda\gamma_{p}}\right)^{j}\right)_{j\geq 0}\) is a stationary solution of (4.8) which remains bounded whenever \(p\) is such that the associated wave speed is negative, that is \(c_{0}^{p}<0\), since in that case one has \(\alpha\gamma_{p}+\beta\chi(\gamma_{p})<\lambda\gamma_{p}\). The solution \(\mathcal{E}_{j}^{n}\) can then be approximated as

\[\mathcal{E}_{j}^{n}\simeq\sum_{p\ :\ c_{0}^{p}>0}\frac{a_{p}}{2}\left(1-\operatorname{erf}\left(\frac{j-c_{0}^{p}n}{\sqrt{4\sigma_{0}^{p}n}}\right)\right)U_{p}+\sum_{p\ :\ c_{0}^{p}<0}a_{p}\left(\frac{\alpha\gamma_{p}+\beta\chi(\gamma_{p})}{\lambda\gamma_{p}}\right)^{j}U_{p},\quad j\geq 1,\quad n\geq 1.\]
This is illustrated by a first example simulation in Figures 20 and 21. We present at \(j=0\) a fixed input \(\mathcal{E}_{0}\) which is generated as the superposition of a tuned curve at \(\vartheta=0\) (blue) with some fixed random noise: namely we select \(a_{1}=0\), \(a_{2}=1\), \(a_{3}=0\), and all other coefficients \(a_{p}\) for \(p=4,\cdots,d\) are drawn from a normal law with an amplitude pre-factor of magnitude \(\varepsilon\) set to \(\varepsilon=0.1\). The shape of the input \(\mathcal{E}_{0}\) is shown in Figure 20(a). The profile of \(\mathcal{E}_{j}^{n}\) at time iteration \(n=200\) along the first layers of the network \(j\in\{1,2,3,4,5\}\) is given in Figure 20(b)-(c)-(d)-(e)-(f) respectively. We first observe that the network indeed acts as a filter, since across the layers of the network the solution profile \(\mathcal{E}_{j}^{n}\) has been denoised and gets closer to the tuned curve at \(\vartheta=0\). Let us also remark that the filtering is more efficient for layers away from the boundary and less efficient for layers near the boundary. This is rather natural, since the impact of the input \(\mathcal{E}_{0}\) is stronger on the first layers. We see that already at layer \(j=5\) we have almost fully recovered the tuned curve at \(\vartheta=0\) (see Figure 20(f)). On the other hand, in Figure 21, we show the time evolution of \(\mathcal{E}_{j}^{n}\) at a fixed layer far away from the boundary, here \(j=10\). Initially, at \(n=0\), the layer is inactivated (see Figure 21(a)), and after several time iterations the solution profile \(\mathcal{E}_{j}^{n}\) starts to be activated. It is first weakly tuned (see Figures 21(b)-(c)-(d)) and then becomes progressively fully tuned, converging to the tuned curve at \(\vartheta=0\) (see Figures 21(e)-(f)).
Figure 19: _Space-time plot of the solution of the recurrence equation for (4.8) for \(p=2\) and \(p=10\) associated to respectively positive wave speed \(c_{0}^{2}>0\) and negative wave speed \(c_{0}^{10}<0\). The neural assembly associated with the 2nd eigenvector of the connectivity matrix propagates its input signal into the network at constant speed; but the neural assembly associated with the 10th eigenvector does not propagate the signals it receives on the input layer._
In a second example simulation (Figure 22), we highlight the dynamics of the different modes in a situation where the input is a narrow Gaussian profile (close to a Dirac function), with a superposition of various Fourier modes. As expected from the different values of the propagation speed \(c_{0}\) (Figure 18), we see that the mode associated with the first Fourier component is the first to reach layer \(j=10\), then followed by successive modes associated with higher Fourier components. In other words, this hierarchically higher layer \(j=10\) first receives information about the coarse spatial structure of the input signal, and then gradually about finer and finer spatial details.
### Summary
In this section, we saw that the results obtained initially (The Identity Case) with the amplification function can be extended to more realistic situations with forward and backward connection matrices, for instance implementing (residual) convolutions or orientation processing. When we consider neural _assemblies_ capturing the principal components of the connection matrices, we see that each assembly can be treated independently in terms of stability and signal propagation speed and direction. The exact behavior of the system will depend on the actual connection matrices (and thus on the function that they implement in the neural network), but the important point is that our generic framework can always be applied in practice. In some example cases (ring model of orientations), we saw that only a few assemblies support signal propagation (implying that the system acts as a filter on its inputs), and these assemblies propagate information at different speeds (implementing a coarse-to-fine analysis). In other cases (e.g. Figure 12), we have even seen that distinct assemblies can simultaneously propagate information in opposite directions, with one assembly supporting feedforward propagation while another entails feedback propagation.

Figure 20: _A fixed input \(\mathcal{E}_{0}\) (red) which is the superposition of a tuned curve at \(\vartheta=0\) (blue) with some fixed random noise is presented at layer \(j=0\). Profile (yellow) of \(\mathcal{E}_{j}^{n}\) at time iteration \(n=200\) along the first layers of the network \(j\in\{1,2,3,4,5\}\)._
We have extended our equations to the continuous limit in time, and found that the amplification factor function can give rise to qualitatively different stability regions compared to the discrete model. This served as a cautionary note for situations where the discrete implementation must be chosen; in that case, using smaller time steps will be preferable, because it makes such discrepancies less likely.
Finally, we also showed that it is possible to consider fully continuous versions of our dynamic system, where not only time but also network depth and neural layer width are treated as continuous variables. This gives rise to diffusion equations, whose stability can also be characterized as a function of hyperparameter values.
In the following, we address possible extensions of the model to more sophisticated and more biologically plausible neural architectures, taking into account the significant communication delays between layers.
## 5 Extension of the model: taking into account transmission delays
Deep feedforward neural networks typically implement _instantaneous_ updates, as we did in Eq (2.1) with our feedforward term \(\mathcal{E}_{j}^{n+1}=\beta\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}+...\). Similarly, artificial recurrent neural networks sequentially update their activity from _one time step to the next_, as we did with the other terms in our equation (2.1) (memory term, feedforward and feedback prediction error correction terms): \(\mathcal{E}_{j}^{n+1}=...+\alpha(\mathcal{W}^{b})^{\mathsf{t}}\mathcal{E}_{j- 1}^{n}+(1-\beta-\lambda)\mathcal{E}_{j}^{n}-\alpha(\mathcal{W}^{b})^{\mathsf{ t}}\mathcal{W}^{b}\mathcal{E}_{j}^{n}+\lambda\mathcal{W}^{b}\mathcal{E}_{j+1}^{n}\). However, in the brain there are significant transmission delays whenever neural signals travel from one area to another. These delays could modify the system's dynamics
and its stability properties. Therefore, in this section we modify model (2.1) by assuming that it takes \(k\) time steps to receive information from a neighboring site in the feedback/feedforward dynamics, namely we consider the following recurrence equation
\[\mathcal{E}_{j}^{n+1}-\beta\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}=\alpha( \mathcal{W}^{b})^{\mathbf{t}}\mathcal{E}_{j-1}^{n-k}+(1-\beta-\lambda) \mathcal{E}_{j}^{n}-\alpha(\mathcal{W}^{b})^{\mathbf{t}}\mathcal{W}^{b} \mathcal{E}_{j}^{n-2k}+\lambda\mathcal{W}^{b}\mathcal{E}_{j+1}^{n-k}, \tag{5.1}\]
where \(k\geq 1\) is some given fixed integer (see Figure 23 for an illustration with \(k=1\)), and we refer to [23] for the justification of the derivation of the model. (Note in particular that we did not modify the _instantaneous_ nature of our feedforward updating term \(\mathcal{E}_{j}^{n+1}=\beta\mathcal{W}^{f}\mathcal{E}_{j-1}^{n+1}+...\). This is because, as motivated in [9, 23], we aim for the feedforward part of the system to be compatible with state-of-the-art deep convolutional neural networks, and merely wish to investigate how adding recurrent dynamics can modify its properties.) We may already notice that when \(k=0\), we recover our initial model (2.1). In what follows, for the mathematical analysis, we restrict ourselves to the identity case \(\mathcal{W}^{f}=\mathcal{W}^{b}=\mathbf{I}_{d}\) and to the model set on \(\mathbb{Z}\). Indeed, our intention is to briefly explain the main new propagation properties that emerge when transmission delays are included. Thus, we consider
\[e_{j}^{n+1}-\beta e_{j-1}^{n+1}=\alpha e_{j-1}^{n-k}+(1-\beta-\lambda)e_{j}^{n} -\alpha e_{j}^{n-2k}+\lambda e_{j+1}^{n-k},\quad j\in\mathbb{Z}. \tag{5.2}\]
Let us also note that the system (5.2) depends on a "history" of \(2k+1\) time steps; thus one needs to impose \(2k+1\) initial conditions:
\[e_{j}^{m}=h_{j}^{m},\quad m=0,\cdots,2k,\quad j\in\mathbb{Z},\]
for \(2k+1\) given sequences \((h_{j}^{m})_{j\in\mathbb{Z}}\) with \(m=0,\cdots,2k\).
Figure 23: _Illustration of the network structure of model (5.1) for \(k=1\) where the red arrows indicate the contributions leading to the update of \(\mathcal{E}_{j}^{n+1}\)._
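To make the dynamics concrete, here is a minimal Python sketch (our own construction, not part of the original analysis; all names are ours) that simulates the scalar recurrence (5.2) on a truncated domain with zero padding at both ends. The implicit \(\beta\)-term on the left-hand side is handled by a forward sweep in \(j\).

```python
import numpy as np

def simulate_delayed_recurrence(alpha, beta, lam, k, J=200, N=400, history=None):
    """Simulate (5.2): e_j^{n+1} - beta e_{j-1}^{n+1} = alpha e_{j-1}^{n-k}
    + (1-beta-lam) e_j^n - alpha e_j^{n-2k} + lam e_{j+1}^{n-k}."""
    e = np.zeros((N + 1, J + 2))            # columns 0 and J+1 are zero padding
    if history is None:                      # the 2k+1 initial slices h_j^m
        history = np.zeros((2 * k + 1, J + 2))
        history[:, (J + 2) // 2] = 1.0       # Dirac-like history at the middle site
    e[: 2 * k + 1] = history
    for n in range(2 * k, N):
        rhs = (alpha * e[n - k, :-2]               # alpha * e_{j-1}^{n-k}
               + (1 - beta - lam) * e[n, 1:-1]     # (1-beta-lam) * e_j^n
               - alpha * e[n - 2 * k, 1:-1]        # -alpha * e_j^{n-2k}
               + lam * e[n - k, 2:])               # lam * e_{j+1}^{n-k}
        for j in range(1, J + 1):                  # forward sweep for the implicit beta term
            e[n + 1, j] = rhs[j - 1] + beta * e[n + 1, j - 1]
    return e[:, 1:-1]
```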
To proceed in the analysis, we first introduce a new vector unknown capturing each layer's recent history:
\[\mathbf{E}_{j}^{n}:=\left(\begin{array}{c}e_{j}^{n-2k}\\ \vdots\\ e_{j}^{n-2}\\ e_{j}^{n-1}\\ e_{j}^{n}\end{array}\right)\in\mathbb{R}^{2k+1},\quad n\geq 1,\quad j\in \mathbb{Z},\]
such that the above recurrence (5.2) can then be rewritten as
\[\mathbf{E}_{j}^{n+1}-\beta Q_{-1}\mathbf{E}_{j-1}^{n+1}=\alpha Q_{1}\mathbf{E }_{j-1}^{n}+Q_{0}\mathbf{E}_{j}^{n}+\lambda Q_{1}\mathbf{E}_{j+1}^{n},\quad n \geq 1,\quad j\in\mathbb{Z}, \tag{5.3}\]
where the matrices \(Q_{1},Q_{0},Q_{-1}\in\mathscr{M}_{2k+1}(\mathbb{R})\) are defined as follows
\[Q_{0}=\begin{pmatrix}0&1&0&\cdots&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&0\\ 0&\ddots&\ddots&\ddots&0&1\\ -\alpha&0&\cdots&\cdots&0&1-\beta-\lambda\end{pmatrix},\]
and \(Q_{\pm 1}\) have a single nonzero element on their last row:
\[\left(Q_{-1}\right)_{2k+1,2k+1}=1,\quad\left(Q_{1}\right)_{2k+1,k+1}=1.\]
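For completeness, a minimal sketch (indexing and names are ours) building these companion matrices with numpy; the superdiagonal of ones in \(Q_{0}\) simply shifts the history window by one time step.

```python
import numpy as np

def companion_matrices(alpha, beta, lam, k):
    d = 2 * k + 1
    Q0 = np.zeros((d, d))
    Q0[:-1, 1:] = np.eye(d - 1)      # superdiagonal: shift the history window
    Q0[-1, 0] = -alpha               # -alpha * e_j^{n-2k}
    Q0[-1, -1] = 1 - beta - lam      # (1-beta-lam) * e_j^n
    Qm1 = np.zeros((d, d)); Qm1[-1, -1] = 1.0   # entry (2k+1, 2k+1) of Q_{-1}
    Qp1 = np.zeros((d, d)); Qp1[-1, k] = 1.0    # entry (2k+1, k+1) of Q_1, 0-indexed [d-1, k]
    return Q0, Qp1, Qm1
```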
### Mathematical study of the recurrence equation (5.3)
We now postulate an Ansatz of the form \(\rho^{n}e^{\mathbf{i}\theta j}\mathbf{E}\) for some non zero vector \(\mathbf{E}\in\mathbb{C}^{2k+1}\), and obtain
\[\underbrace{\left(\rho\left[\mathbf{I}_{2k+1}-\beta e^{-\mathbf{i}\theta}Q_{- 1}\right]-Q_{0}-\left(\alpha e^{-\mathbf{i}\theta}+\lambda e^{\mathbf{i}\theta }\right)Q_{1}\right)}_{:=\mathcal{A}_{k}\left(\rho,\theta\right)}\mathbf{E}= \left(\begin{array}{c}0\\ \vdots\\ 0\end{array}\right)\]
which is equivalent to
\[\det\left(\rho\left[\mathbf{I}_{2k+1}-\beta e^{-\mathbf{i}\theta}Q_{-1}\right] -Q_{0}-\left(\alpha e^{-\mathbf{i}\theta}+\lambda e^{\mathbf{i}\theta}\right) Q_{1}\right)=0,\]
that is
\[(1-\beta e^{-\mathbf{i}\theta})\rho^{2k+1}-\rho^{2k}\left(1-\beta-\lambda \right)-\rho^{k}\left(\alpha e^{-\mathbf{i}\theta}+\lambda e^{\mathbf{i} \theta}\right)+\alpha=0. \tag{5.4}\]
The above system has \(2k+1\) roots in the complex plane that we denote \(\rho_{m}(\theta)\) for \(m=1,\cdots,2k+1\). We remark that at \(\theta=0\), \(\rho=1\) is always a root of the equation, since in this case (5.4) reduces to
\[(1-\beta)\rho^{2k+1}-\rho^{2k}\left(1-\beta-\lambda\right)-\rho^{k}\left( \alpha+\lambda\right)+\alpha=0. \tag{5.5}\]
By convention, we assume that \(\rho_{1}(0)=1\). We further note that \(\mathbf{E}_{1}=(1,\cdots,1)^{\mathbf{t}}\) is the associated eigenvector. As usual, we can perform a Taylor expansion of \(\rho_{1}\) near \(\theta=0\) and we obtain that
\[\rho_{1}(\theta)=\exp\left(-\mathbf{i}\frac{\alpha+\beta-\lambda}{1-\beta+k( \lambda-\alpha)}\theta+\mathcal{O}(|\theta|^{2})\right),\text{ as }\theta\to 0,\]
so that the associated wave speed is this time given by
\[c_{0}^{k}=\frac{\alpha+\beta-\lambda}{1-\beta+k(\lambda-\alpha)},\]
and depends explicitly on the delay \(k\). We readily conclude that:
* When \(\alpha<\lambda\), then \(c_{0}^{k}\) is well defined for all values of \(k\). Furthermore, the amplitude of the wave speed \(k\mapsto|c_{0}^{k}|\) decreases as \(k\) increases with \(|c_{0}^{k}|\to 0\) as \(k\to+\infty\). That is, the activity waves may go forward or backward (depending on the hyperparameter values), but the transmission delay always slows down their propagation.
* When \(\alpha=\lambda\), then \(c_{0}^{k}=\frac{\beta}{1-\beta}>0\) is independent of the delay \(k\). This is compatible with our implementation choice, where the initial feedforward propagation term (controlled by \(\beta\)) is not affected by transmission delays.
* When \(\lambda<\alpha\), then \(c_{0}^{k}\) is well defined whenever \(k\neq\frac{1-\beta}{\alpha-\lambda}>0\). Furthermore, the wave speed \(c_{0}^{k}>0\) for \(1\leq k<\frac{1-\beta}{\alpha-\lambda}\) and increases with the delay \(k\) on that interval. That is, in this parameter range neural activity waves propagate forward and, perhaps counterintuitively, accelerate when the transmission delay increases. On the other hand \(c_{0}^{k}<0\) for \(k>\frac{1-\beta}{\alpha-\lambda}\) and \(k\mapsto|c_{0}^{k}|\) decreases as \(k\) increases on that domain with \(|c_{0}^{k}|\to 0\) as \(k\to+\infty\). In this parameter range, waves propagate backward, and decelerate when the transmission delay increases.
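These regimes are easy to check numerically. The sketch below (our own; it assumes numpy) computes the \(2k+1\) roots of (5.4) on a grid of \(\theta\), where marginal stability requires \(\max|\rho|\leq 1\), together with the predicted speed \(c_{0}^{k}\); the parameter choice \((\alpha,\beta,\lambda)=(0.3,0.1,0.3)\) matches Figure 26, where increasing \(k\) destabilizes the spectrum.

```python
import numpy as np

def char_roots(alpha, beta, lam, k, theta):
    # coefficients of (5.4), listed from degree 2k+1 down to the constant term
    c = np.zeros(2 * k + 2, dtype=complex)
    c[0] = 1 - beta * np.exp(-1j * theta)
    c[1] = -(1 - beta - lam)
    c[k + 1] = -(alpha * np.exp(-1j * theta) + lam * np.exp(1j * theta))
    c[-1] = alpha
    return np.roots(c)

def wave_speed(alpha, beta, lam, k):
    return (alpha + beta - lam) / (1 - beta + k * (lam - alpha))

alpha, beta, lam = 0.3, 0.1, 0.3
for k in (1, 2, 4):
    radius = max(np.abs(char_roots(alpha, beta, lam, k, t)).max()
                 for t in np.linspace(-np.pi, np.pi, 401))
    print(f"k={k}: c_0^k={wave_speed(alpha, beta, lam, k):.3f}, max|rho|={radius:.3f}")
```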
Coming back to (5.5), we can look for other potential roots lying on the unit circle, i.e., marginally stable solutions. That is, we look for \(\omega\in(0,2\pi)\) such that \(\rho=e^{\mathbf{i}\omega}\). We obtain a system of two equations
\[\begin{cases}(1-\beta)\cos((2k+1)\omega)-(1-\beta-\lambda)\cos(2k\omega)-( \alpha+\lambda)\cos(k\omega)+\alpha=0,\\ (1-\beta)\sin((2k+1)\omega)-(1-\beta-\lambda)\sin(2k\omega)-(\alpha+\lambda) \sin(k\omega)=0.\end{cases} \tag{5.6}\]
Case \(k=1\). When \(k=1\), coming back to (5.5), we see that the two other roots are real and given by \(-\frac{\lambda}{2(1-\beta)}\pm\frac{\sqrt{\lambda^{2}+4\alpha(1-\beta)}}{2(1-\beta)}\). When \(\alpha+\beta+\lambda=1\), the negative root is precisely \(-1\), so that \(\omega=\pi\) is a solution; without loss of generality, we take it to be the second root, that is \(\rho_{2}(0)=-1\) whenever \(\alpha+\beta+\lambda=1\). In this specific case, the associated eigenvector is \(\mathbf{E}_{-1}=(1,-1,1)^{\mathbf{t}}\). Recall that \(\mathbf{E}\) reflects the _history_ of activity across the \(2k+1=3\) preceding time steps. In this case, the eigenvector \(\mathbf{E}_{-1}\) is a rapid alternation of activity, i.e. an oscillation. We refer to Figure 24(a) for an illustration of the spectral configuration in that case. We can perform a Taylor expansion of \(\rho_{2}\) near \(\theta=0\) and we obtain that
\[\rho_{2}(\theta)=-\exp\left(-\mathbf{i}\frac{\alpha+\beta-\lambda}{5-5\beta- \alpha-3\lambda}\theta+\mathcal{O}(|\theta|^{2})\right),\text{ as }\theta\to 0,\]
which provides an associated wave speed \(\widetilde{c}_{0}\) given by
\[\widetilde{c}_{0}=\frac{\alpha+\beta-\lambda}{5-5\beta-\alpha-3\lambda}.\]
As a consequence of the above analysis, if \(\mathbf{G}_{j}^{n}\) denotes the fundamental solution of (5.3) starting from a Dirac delta mass centered at \(j=0\) along the direction \(\mathbf{E}\in\mathbb{R}^{3}\), then we have the following representation for \(\mathbf{G}_{j}^{n}\):
* If \(\alpha+\beta+\lambda\neq 1\), then \[\mathbf{G}_{j}^{n}\approx\frac{1}{\sqrt{4\pi\sigma_{0}^{k}n}}\exp\left(-\frac{ |j-c_{0}^{k}n|^{2}}{4\sigma_{0}^{k}n}\right)\big{\langle}(0,0,1)^{\mathbf{t}},\pi_{1}(\mathbf{E})\big{\rangle}_{\mathbb{R}^{3}}\,,\] where \(\pi_{1}\) is the spectral projection of \(\mathbb{R}^{3}\) along the direction \(\mathbf{E}_{1}\) and \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3}}\) is the usual scalar product. Here \(\sigma_{0}^{k}\) is some positive constant that can be computed explicitly by getting the higher order expansion of \(\rho_{1}(\theta)\).
* If \(\alpha+\beta+\lambda=1\), then \[\mathbf{G}_{j}^{n}\approx\frac{1}{\sqrt{4\pi\sigma_{0}^{k}n}}\exp\left(-\frac{|j-c_{0}^{k}n|^{2}}{4\sigma_{0}^{k}n}\right)\big{\langle}(0,0,1)^{\mathbf{t}},\pi_{1}(\mathbf{E})\big{\rangle}_{\mathbb{R}^{3}}\\ +\frac{(-1)^{n}}{\sqrt{4\pi\widetilde{\sigma}_{0}n}}\exp\left(-\frac{|j-\widetilde{c}_{0}n|^{2}}{4\widetilde{\sigma}_{0}n}\right)\big{\langle}(0,0,1)^{\mathbf{t}},\pi_{-1}(\mathbf{E})\big{\rangle}_{\mathbb{R}^{3}}\,,\] where \(\pi_{-1}\) is the spectral projection of \(\mathbb{R}^{3}\) along the direction \(\mathbf{E}_{-1}\). Here \(\widetilde{\sigma}_{0}\) is some positive constant that can be computed explicitly by getting the higher order expansion of \(\rho_{2}(\theta)\).
In Figure 25, we illustrate the previous results in the case where \(\alpha+\beta+\lambda=1\). In panel (a), we have set \(\mathbf{E}=\mathbf{E}_{1}\) (a constant history of activity over the previous 3 time steps), such that \(\pi_{1}(\mathbf{E}_{1})=\mathbf{E}_{1}\) and \(\pi_{-1}(\mathbf{E}_{1})=0_{\mathbb{R}^{3}}\) so that we only observe a Gaussian profile propagating at speed \(c_{0}^{k}\). On the other hand in panel (b), we have set \(\mathbf{E}=\mathbf{E}_{-1}\) (an oscillating history of activity over the previous 3 time steps), such that \(\pi_{1}(\mathbf{E}_{-1})=0_{\mathbb{R}^{3}}\) and \(\pi_{-1}(\mathbf{E}_{-1})=\mathbf{E}_{-1}\) so that we only observe an oscillating (in time) Gaussian wave profile propagating at speed \(\widetilde{c}_{0}\). Note that in this case, the period of the oscillation is necessarily equal to \(2k\), i.e. twice the transmission delay between layers. Finally in panel (c), we observe a superposition of the two Gaussian profiles propagating at speeds \(c_{0}^{1}\) and \(\widetilde{c}_{0}\).
Figure 24: _Spectral configurations in the case \(k=1\) (a) and \(k=2\) (c) with tangency points associated to \(\theta=0\) in (5.5). In (b)-(d), we plot the left-hand side of the equations defining system (5.6) in the case \(k=1\) and \(k=2\) respectively where the first component is in blue and the second component in dark red. For \(k=1\), we have a solution at \(\omega=0\) and \(\omega=\pi\) which can be seen in panel (a) with the tangency points at \(\pm 1\). For \(k=2\), we have three solutions \(\omega=0\) and \(\omega\sim 1.885\) and \(\omega\sim 2\pi-1.885\) which can be seen in panel (c) with the tangency points at \(1\) and \(e^{\pm 1.885\mathbf{i}}\). Parameters are set to \((\alpha,\beta,\lambda)=(0.4,0.3,0.3)\) in (a)-(b) and \((\alpha,\beta,\lambda)=(0.3,0.3292,0.3)\) in (c)-(d)._
Case \(k\geq 2\). Studying the above system (5.6) in full generality is a very difficult task. We refer to Figure 24(c)-(d) for an illustration in the case \(k=2\) with three tangency points associated to \(\theta=0\) lying on the unit circle. Increasing the delay \(k\) while keeping fixed the other hyper-parameters \((\alpha,\beta,\lambda)\) will generically tend to destabilize the spectrum (as shown in Figure 26).
### Continuous in time interpretation
As done before, we now re-examine our model (with transmission delays) in the time-continuous limit. First, we recall our notations for the scaled parameters
\[\widetilde{\beta}:=\frac{\beta}{\Delta t},\quad\widetilde{\lambda}:=\frac{ \lambda}{\Delta t},\text{ and }\widetilde{\alpha}:=\frac{\alpha}{\Delta t},\]
where \(\Delta t>0\) is some time step. Next we introduce the following rescaled time delay (representing the transmission time for neural signals between adjacent areas)
\[\tau:=k\Delta t.\]
Identifying \(e_{j}^{n}\) as the approximation of some continuous function \(\mathbf{e}_{j}(t_{n})\) at \(t_{n}=n\Delta t\), we readily derive a delayed version of (3.13), namely
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{j}(t)=\widetilde{\beta}\mathbf{e}_ {j-1}(t)-(\widetilde{\beta}+\widetilde{\lambda})\mathbf{e}_{j}(t)+\widetilde{ \alpha}\mathbf{e}_{j-1}(t-\tau)+\widetilde{\lambda}\mathbf{e}_{j+1}(t-\tau)- \widetilde{\alpha}\mathbf{e}_{j}(t-2\tau),\quad t>0,\quad j\in\mathbb{Z}.\]
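A minimal sketch of how this delayed ODE can be integrated numerically (our construction: explicit Euler with a rolling history, so \(\Delta t\) must be small and \(\tau\) an integer multiple of it; all names are ours):

```python
import numpy as np

def integrate_delayed_ode(alpha_t, beta_t, lam_t, tau, dt=0.1, J=60, T=500.0):
    k = int(round(tau / dt))                  # delay expressed in time steps
    steps = int(T / dt)
    e = np.zeros((steps + 1, J + 2))          # columns 0 and J+1: zero padding
    e[: 2 * k + 1, (J + 2) // 2] = 1.0        # constant history at one site
    for n in range(2 * k, steps):
        de = (beta_t * e[n, :-2]              # instantaneous feedforward drive
              - (beta_t + lam_t) * e[n, 1:-1]
              + alpha_t * e[n - k, :-2]       # delayed feedforward correction
              + lam_t * e[n - k, 2:]          # delayed feedback correction
              - alpha_t * e[n - 2 * k, 1:-1]) # doubly delayed self term
        e[n + 1, 1:-1] = e[n, 1:-1] + dt * de
    return e[:, 1:-1]
```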
In what follows, we first investigate the case of homogeneous oscillations, which are now possible because of the presence of time delays in the equation. Then, we turn our attention to oscillatory traveling waves.
Figure 25: _Space-time plots of the last component of the rescaled fundamental solution \(\mathbf{E}_{j}^{n}\) of (5.3) starting from a Dirac delta mass centered at \(j=0\) along different directions \(\mathbf{E}\) when \(k=1\) and \(\alpha+\beta+\lambda=1\). (a) When \(\mathbf{E}=\mathbf{E}_{1}\) (constant history of activity) is the eigenvector associated to \(\mathcal{A}_{1}(1,0)\) we observe propagation at wave speed \(c_{0}^{1}\). (b) When \(\mathbf{E}=\mathbf{E}_{-1}\) (oscillating history of activity) is the eigenvector associated to \(\mathcal{A}_{1}(-1,0)\) we observe propagation of an oscillatory wave at wave speed \(\widetilde{c}_{0}\). (c) When \(\mathbf{E}\) is a linear combination of \(\mathbf{E}_{1}\) and \(\mathbf{E}_{-1}\), we observe two propagating waves (one of them oscillating) at wave speed \(c_{0}^{1}\) and \(\widetilde{c}_{0}\) respectively. Parameter values are set to \((\alpha,\beta,\lambda)=(0.4,0.3,0.3)\)._
#### 5.2.1 Homogeneous oscillations
One key difference of the above delayed equation compared to (3.13) is that spatially homogeneous solutions (i.e., solutions \(\mathbf{e}_{j}(t)\) that are independent of the layer \(j\)) may now display non-trivial dynamics, such as a broadly synchronized oscillation resembling brain rhythmic activity. Indeed, looking for solutions which are independent of \(j\), we get the delayed ordinary differential equation
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}(t)=-\widetilde{\lambda}\mathbf{e}(t)+( \widetilde{\alpha}+\widetilde{\lambda})\mathbf{e}(t-\tau)-\widetilde{\alpha} \mathbf{e}(t-2\tau),\quad t>0.\]
Looking for pure oscillatory exponential solutions \(\mathbf{e}(t)=e^{\mathbf{i}\omega t}\) for some \(\omega\in\mathbb{R}\) we obtain
\[\mathbf{i}\omega=-\widetilde{\lambda}+(\widetilde{\alpha}+\widetilde{\lambda })e^{-\mathbf{i}\tau\omega}-\widetilde{\alpha}e^{-2\mathbf{i}\tau\omega}.\]
This leads to the system of equations
\[\begin{cases}0=-\widetilde{\lambda}+(\widetilde{\alpha}+\widetilde{\lambda}) \cos(\tau\omega)-\widetilde{\alpha}\cos(2\tau\omega),\\ \omega=-(\widetilde{\alpha}+\widetilde{\lambda})\sin(\tau\omega)+\widetilde{ \alpha}\sin(2\tau\omega).\end{cases}\]
Introducing \(\varrho=\widetilde{\lambda}/\widetilde{\alpha}>0\), we observe that the above system can instead be written as
\[\begin{cases}0=-\varrho+(1+\varrho)\cos(\tau\omega)-\cos(2\tau\omega),\\ \omega=\widetilde{\alpha}\left(-(1+\varrho)\sin(\tau\omega)+\sin(2\tau\omega) \right).\end{cases} \tag{5.7}\]
Using trigonometric identities, the first equation can be factorized as
\[0=(1-\cos(\tau\omega))(\varrho-1-2\cos(\tau\omega)).\]
We distinguish several cases. If \(\varrho>3\), then the above equation has solutions if and only if \(\tau\omega=2k\pi\) for \(k\in\mathbb{Z}\). Inspecting the second equation, we see that necessarily \(k=0\) and \(\omega=0\) is the only possible solution. When \(\varrho=3\), we notice that the equation reduces to \(0=(1-\cos(\tau\omega))^{2}\), and the solutions are
Figure 26: _Destabilization of the spectrum by increasing the delay \(k\) while keeping fixed the hyper-parameters to \((\alpha,\beta,\lambda)=(0.3,0.1,0.3)\)._
again given by \(\tau\omega=2k\pi\) for \(k\in\mathbb{Z}\), which yields \(\omega=0\) because of the second equation. Now, if \(\varrho\in(0,3)\), we deduce that either \(\tau\omega=2k\pi\) for \(k\in\mathbb{Z}\) or
\[\cos(\tau\omega)=\frac{\varrho-1}{2}.\]
In the first case, we recover that \(\omega=0\). Assuming now that \(\omega\neq 0\), i.e., a true oscillation with non-zero frequency, we derive that
\[\tau\omega=\pm\arccos\left(\frac{\varrho-1}{2}\right)+2k\pi,\quad k\in\mathbb{ Z}.\]
Injecting the above relation into the right-hand side of the second equation yields that
\[\omega=\widetilde{\alpha}\left(-(1+\varrho)\sin(\tau\omega)+\sin(2\tau\omega) \right)=\mp\widetilde{\alpha}\sqrt{(1+\varrho)(3-\varrho)},\]
and thus necessarily
\[(\tau,\omega)=\left(\frac{-\arccos\left(\frac{\varrho-1}{2}\right)+2k\pi}{ \widetilde{\alpha}\sqrt{(1+\varrho)(3-\varrho)}},\pm\widetilde{\alpha}\sqrt{ (1+\varrho)(3-\varrho)}\right),\quad k\in\mathbb{Z}.\]
We recover the fact that the system (5.7) is invariant under \(\omega\mapsto-\omega\). Since \(\arccos\left(\frac{\varrho-1}{2}\right)\in[0,\pi]\), we deduce that the smallest positive \(\tau\) is always achieved at \(k=1\). We computed for several values of \(\widetilde{\alpha}\) the corresponding values of \(\tau\) and \(\omega\) (for \(k=1\)) as a function of \(\varrho\), which are presented in Figure 27(a)-(b). We observe that for values of \(\varrho\) in the range \((1/2,1)\) the corresponding time delay \(\tau\) takes values between \(12ms\) and \(23ms\) for values of \(1/\widetilde{\alpha}\) ranging from \(5ms\) to \(10ms\). Correspondingly, in the same range of values for \(\varrho\), the frequency \(\omega/2\pi\) takes values between \(30Hz\) and \(60Hz\).
This tells us that, when the time delay \(\tau\) is chosen to be around \(10-20ms\), compatible with biologically plausible values for communication delays between adjacent cortical areas, and when hyperparameters \(\widetilde{\alpha}\) and \(\widetilde{\lambda}\) are suitably chosen (\(\widetilde{\alpha}\) in particular must be strong enough to allow rapid feed-forward error correction updates, i.e. around \(1/\widetilde{\alpha}<8ms\), while \(\widetilde{\lambda}\) can be chosen more liberally, as long as it stays \(<3\widetilde{\alpha}\)), then the network produces globally synchronized oscillations, comparable to experimentally observed brain rhythms in the \(\gamma\)-band regime (30-60Hz). In this context, it is interesting to note that theoretical and neuroscientific considerations have suggested that error correction in predictive coding systems is likely to be accompanied by oscillatory neural activity around this same \(\gamma\)-frequency regime [4].
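The numbers quoted above are straightforward to reproduce with the closed-form expressions for the \(k=1\) branch; a minimal sketch (assuming numpy; \(1/\widetilde{\alpha}=7\)ms is an arbitrary illustrative value within the stated range):

```python
import numpy as np

alpha_t = 1.0 / 7.0                            # per ms, i.e. 1/alpha~ = 7 ms
for rho in (0.5, 0.75, 1.0):                   # rho = lambda~ / alpha~
    s = np.sqrt((1 + rho) * (3 - rho))
    omega = alpha_t * s                        # rad/ms
    tau = (2 * np.pi - np.arccos((rho - 1) / 2)) / (alpha_t * s)   # k = 1 branch
    print(f"rho={rho}: tau={tau:.1f} ms, frequency={1000 * omega / (2 * np.pi):.0f} Hz")
```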
#### 5.2.2 Oscillatory traveling waves
However, experimental and computational studies have also suggested that oscillatory signatures of predictive coding could be found at lower frequencies, in the so-called \(\alpha\)-band regime, around 7-15Hz. Furthermore, these oscillations are typically not homogeneous over space, as assumed in the previous section, but behave as forward- or backward-travelling waves with systematic phase shifts between layers [1]. To explore this idea further, we now investigate the possibility of having traveling wave solutions of the form
\[\mathbf{e}_{j}(t)=e^{\mathbf{i}(\omega t+j\theta)},\quad t>0,\quad j\in\mathbb{Z},\]
for some \(\omega\in\mathbb{R}\) (representing the wave's temporal frequency) and \(\theta\in[0,2\pi)\) (representing the wave's spatial frequency, i.e. its phase shift across layers), and we are especially interested in deriving conditions
under which one can ensure that \(\theta\neq 0\) (since otherwise, we would be again facing the homogeneous oscillation case). We only focus on the case \(\widetilde{\beta}=0\) (as postulated, e.g. in Rao and Ballard's work [27]) and leave the case \(\widetilde{\beta}>0\) for future investigations. As a consequence, the equation reduces to
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{j}(t)=-\widetilde{\lambda}\mathbf{e }_{j}(t)+\widetilde{\alpha}\mathbf{e}_{j-1}(t-\tau)+\widetilde{\lambda} \mathbf{e}_{j+1}(t-\tau)-\widetilde{\alpha}\mathbf{e}_{j}(t-2\tau),\quad t>0, \quad j\in\mathbb{Z}.\]
Plugging the ansatz \(\mathbf{e}_{j}(t)=e^{\mathrm{i}(\omega t+j\theta)}\), we obtain:
\[\mathbf{i}\omega=\widetilde{\alpha}\left(e^{-\mathrm{i}(\omega\tau+\theta)}-e ^{-2\mathrm{i}\omega\tau}\right)+\widetilde{\lambda}\left(e^{-\mathrm{i}( \omega\tau-\theta)}-1\right).\]
Taking real and imaginary parts, we obtain the system
\[\begin{cases}0=\widetilde{\alpha}\left(\cos(\omega\tau+\theta)-\cos(2\omega \tau)\right)+\widetilde{\lambda}\left(\cos(\omega\tau-\theta)-1\right),\\ \omega=-\widetilde{\alpha}\left(\sin(\omega\tau+\theta)-\sin(2\omega\tau) \right)-\widetilde{\lambda}\sin(\omega\tau-\theta).\end{cases}\]
Once again, we introduce \(\varrho:=\frac{\widetilde{\lambda}}{\widetilde{\alpha}}\geq 0\) where we implicitly assumed that we always work in the regime \(\widetilde{\alpha}>0\). Then, we note that the right-hand side of the first equation of the above system can be factored as
\[\widetilde{\alpha}\left(\cos(\omega\tau+\theta)-\cos(2\omega\tau)\right)+ \widetilde{\lambda}\left(\cos(\omega\tau-\theta)-1\right)=-2\widetilde{\alpha} \sin\left(\frac{\theta-\omega\tau}{2}\right)\left(\varrho\sin\left(\frac{ \theta-\omega\tau}{2}\right)+\sin\left(\frac{\theta+3\omega\tau}{2}\right) \right).\]
As a consequence, either \(\sin\left(\frac{\theta-\omega\tau}{2}\right)=0\), that is \(\omega\tau=\theta+2k\pi\) for \(k\in\mathbb{Z}\), which then leads, from the second equation, to \(\omega=0\) and \(\theta=0\) since we restrict \(\theta\in[0,2\pi)\), or \(\sin\left(\frac{\theta-\omega\tau}{2}\right)\neq 0\). In the latter case, assuming that \(\omega\tau\neq\theta+2k\pi\) for \(k\in\mathbb{Z}\), we get that
\[0=\varrho\sin\left(\frac{\theta-\omega\tau}{2}\right)+\sin\left(\frac{\theta+3 \omega\tau}{2}\right).\]
We will now study several cases.
Figure 27: _(a) Representation of the (minimal) time delay \(\tau\) expressed in milliseconds as a function of the parameter \(\varrho\) for various values of \(1/\widetilde{\alpha}\) ranging from \(5ms\) to \(10ms\). We observe that for values of \(\varrho\) in the range \((1/2,1)\) the corresponding time delay \(\tau\) takes values between \(12ms\) to \(23ms\). (b) Representation of the frequency \(\omega/2\pi\) (in Hertz) as a function of the parameter \(\varrho\) for various values of \(1/\widetilde{\alpha}\) ranging from \(5ms\) to \(10ms\). We observe that for values of \(\varrho\) in the range \((1/2,1)\) the corresponding frequency \(\omega/2\pi\) takes values between \(30Hz\) to \(60Hz\)._
Case \(\varrho=0\). (In other words, this case implies \(\widetilde{\lambda}=0\), that is, a system with no feedback error correction.) From \(\sin\left(\frac{\theta+3\omega\tau}{2}\right)=0\), we deduce that \(\theta=-3\omega\tau+2k\pi\) for some \(k\in\mathbb{Z}\), and substituting into the second equation of the system, we end up with
\[\omega=2\widetilde{\alpha}\sin(2\omega\tau).\]
We always have the trivial solution \(\omega=0\) with \(\theta=0\). In fact, when \(4\widetilde{\alpha}\tau\leq 1\), \(\omega=0\) is the only solution of the above equation. On the other hand, when \(4\widetilde{\alpha}\tau>1\), there can be multiple non trivial solutions. In particular, for each \((\widetilde{\alpha},\tau)\) such that \(4\widetilde{\alpha}\tau>1\) there always exists a unique solution \(\omega_{c}(\widetilde{\alpha},\tau)\in\left(0,\frac{\pi}{2\tau}\right)\) of the above equation. This gives a corresponding \(\theta_{c}^{k}=-3\omega_{c}(\widetilde{\alpha},\tau)\tau+2k\pi\) with \(k\in\mathbb{Z}\), and retaining the corresponding value of \(\theta\) in the interval \([0,2\pi)\), we have \(\theta_{c}=-3\omega_{c}(\widetilde{\alpha},\tau)\tau+2\pi\). We refer to Figure 28 for an illustration of the solutions \((\omega,\theta)\) for several values of the parameters.
Interestingly, we see that for biologically plausible values of the time delay \(\tau\) between \(10ms\) and \(20ms\), the observed oscillation frequency is lower than in the previous case, and now compatible with the \(\alpha\)-frequency regime (between \(10Hz\) and \(20Hz\)). Furthermore, the phase shift between layers \(\theta\) varies roughly between \(2\) and \(4\) radians. As phase shifts below \(\pi\) or above \(\pi\) radians indicate respectively backward- or forward-travelling waves, we see that the exact value of the parameters \(\tau\) and \(\widetilde{\alpha}\) critically determines the propagation direction of the travelling waves: stronger feedforward error correction (lower values of \(1/\widetilde{\alpha}\)) and longer communication delays \(\tau\) will tend to favor backward-travelling waves; and vice-versa, weaker feedforward error correction (higher values of \(1/\widetilde{\alpha}\)) and shorter communication delays \(\tau\) will favor forward-travelling waves.
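A minimal sketch (our own; it assumes scipy is available) solving this case numerically, namely the unique \(\omega_{c}\in(0,\pi/(2\tau))\) of \(\omega=2\widetilde{\alpha}\sin(2\omega\tau)\) when \(4\widetilde{\alpha}\tau>1\), with \(\theta_{c}\) folded back into \([0,2\pi)\):

```python
import numpy as np
from scipy.optimize import brentq

def wave_rho0(alpha_t, tau):
    assert 4 * alpha_t * tau > 1, "only the trivial solution exists"
    f = lambda w: w - 2 * alpha_t * np.sin(2 * w * tau)
    # f < 0 near 0 (since 4 alpha~ tau > 1) and f > 0 at pi/(2 tau), so brentq applies
    w_c = brentq(f, 1e-9, np.pi / (2 * tau) - 1e-9)
    theta_c = (-3 * w_c * tau) % (2 * np.pi)
    return w_c, theta_c

w, th = wave_rho0(alpha_t=1 / 15, tau=12.0)    # 1/alpha~ = 15 ms, tau = 12 ms
print(f"frequency = {1000 * w / (2 * np.pi):.1f} Hz, theta = {th:.2f} rad")
```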
Case \(\varrho=1\). Now we assume that \(\widetilde{\lambda}\neq 0\), that is, the system now includes feedback error correction. At first, we consider the simpler case when \(\varrho=1\), that is when \(\widetilde{\alpha}=\widetilde{\lambda}\), where the equation can also be solved easily. Indeed, we have either
\[\frac{\omega\tau-\theta}{2}=\frac{\theta+3\omega\tau}{2}+2k\pi,\quad k\in \mathbb{Z},\]
or
\[\frac{\omega\tau-\theta}{2}=\pi-\frac{\theta+3\omega\tau}{2}+2k\pi,\quad k\in \mathbb{Z}.\]
Figure 28: _Representation of the temporal frequency \(\omega/2\pi\) (in Hz) and the spatial frequency \(\theta\in[0,2\pi)\) (panel (a) and (b) respectively) in the case \(\widetilde{\lambda}=0\) as a function of \(1/\widetilde{\alpha}\) (in ms) for several values of the time delay \(\tau\)._
This is equivalent to
\[\theta=-\omega\tau+2k\pi,\quad k\in\mathbb{Z},\]
or
\[\omega\tau=\frac{\pi}{2}+k\pi,\quad k\in\mathbb{Z}.\]
Let us first assume that \(\theta=-\omega\tau+2k\pi\) for some \(k\in\mathbb{Z}\); then the second equation of the system gives \(\omega=0\) since \(\widetilde{\alpha}=\widetilde{\lambda}\) when \(\varrho=1\), and thus we end up with \(\theta=0\). Now, if \(\omega\tau=\frac{\pi}{2}+k\pi\) for some \(k\in\mathbb{Z}\), the second equation leads to
\[\omega=-2\widetilde{\alpha}\cos(\theta+k\pi),\]
from which we deduce that necessarily we must have
\[\frac{(2k+1)\pi}{-4\widetilde{\alpha}\tau}=\cos(\theta+k\pi),\quad k\in\mathbb{ Z}.\]
We first remark that if \(4\widetilde{\alpha}\tau<\pi\), then the above equation has no solution. On the other hand if \(4\widetilde{\alpha}\tau\geq\pi\), one can obtain solutions to the above equation. Indeed, let us denote
\[p:=\left\lfloor\frac{4\widetilde{\alpha}\tau}{\pi}\right\rfloor\geq 1\]
the integer part of \(\frac{4\widetilde{\alpha}\tau}{\pi}\). Then, for each \(k\in\mathbb{Z}\) such that \(|2k+1|\leq p\), we have
\[\theta=\pm\arccos\left((-1)^{k+1}\frac{(2k+1)\pi}{4\widetilde{\alpha}\tau} \right)+2m\pi,\quad m\in\mathbb{Z},\]
with corresponding \(\omega\) given by
\[\omega=\frac{\pi}{2\tau}+\frac{k\pi}{\tau}.\]
That is, the set of solutions is given by
\[(\omega,\theta)=\left(\frac{\pi}{2\tau}+\frac{k\pi}{\tau},\pm\arccos\left((-1 )^{k+1}\frac{(2k+1)\pi}{4\widetilde{\alpha}\tau}\right)+2m\pi\right),\]
for each \(k\in\mathbb{Z}\) such that \(|2k+1|\leq\left\lfloor\frac{4\widetilde{\alpha}\tau}{\pi}\right\rfloor\) and \(m\in\mathbb{Z}\). If we only retain the smallest \(\omega\) and the corresponding value of \(\theta\in[0,2\pi)\), we must take \(k=0\), and we have two solutions
\[(\omega,\theta)=\left(\frac{\pi}{2\tau},\arccos\left(-\frac{\pi}{4\widetilde{ \alpha}\tau}\right)\right),\text{ and }(\omega,\theta)=\left(\frac{\pi}{2\tau},-\arccos \left(-\frac{\pi}{4\widetilde{\alpha}\tau}\right)+2\pi\right)\quad\text{ if }4 \widetilde{\alpha}\tau\geq\pi.\]
We note that in this case the temporal frequency only depends on the time delay \(\tau\) since it is given by \(\frac{\omega}{2\pi}=\frac{1}{4\tau}\) and ranges from \(12.5\)Hz (\(\alpha\)-band regime) to \(25\)Hz (\(\beta\)-band regime) for values of \(\tau\) between \(10\)ms and \(20\)ms (as long as \(\widetilde{\alpha}\) is fixed such that \(4\widetilde{\alpha}\tau\geq\pi\) is verified). The corresponding spatial frequencies are \(\arccos\left(-\frac{\pi}{4\widetilde{\alpha}\tau}\right)\in(\pi/2,\pi)\) and \(-\arccos\left(-\frac{\pi}{4\widetilde{\alpha}\tau}\right)+2\pi\in(\pi,3\pi/2)\).
In summary, when the feedforward and feedback error correction strengths are matched (that is when \(\widetilde{\alpha}=\widetilde{\lambda}\)) and sufficiently high (such that \(4\widetilde{\alpha}\tau\geq\pi\)), then the system will show two simultaneous travelling waves at the same frequency in the \(\alpha\)-band or \(\beta\)-band regime, but travelling in opposite directions, one as a feedforward wave and the other as a feedback wave.
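As a quick numerical check of this matched case (the parameter values below are illustrative; the constraint \(4\widetilde{\alpha}\tau\geq\pi\) must hold):

```python
import numpy as np

alpha_t, tau = 1 / 10, 12.0                    # 4 * alpha~ * tau = 4.8 >= pi
omega = np.pi / (2 * tau)                      # frequency depends only on tau
theta_bwd = np.arccos(-np.pi / (4 * alpha_t * tau))   # in (pi/2, pi): backward wave
theta_fwd = 2 * np.pi - theta_bwd                     # in (pi, 3 pi/2): forward wave
print(f"{1000 * omega / (2 * np.pi):.1f} Hz, theta = {theta_bwd:.2f} and {theta_fwd:.2f} rad")
```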
Case \(\varrho>1\). Here, the feedback error correction \(\lambda\) is stronger than the feedforward \(\alpha\). In this case, we remark that
\[\varrho\sin\left(\frac{\theta-\omega\tau}{2}\right)+\sin\left(\frac{\theta+3 \omega\tau}{2}\right)\,=(\varrho+\cos(2\omega\tau))\sin\left(\frac{\theta- \omega\tau}{2}\right)+\cos\left(\frac{\theta-\omega\tau}{2}\right)\sin(2\omega \tau).\]
Since \(\varrho>1\) and \(\omega\tau\neq\theta+2k\pi\) for \(k\in\mathbb{Z}\), we have \((\varrho+\cos(2\omega\tau))\sin\left(\frac{\theta-\omega\tau}{2}\right)\neq 0\), and thus \(\cos\left(\frac{\theta-\omega\tau}{2}\right)\sin(2\omega\tau)\neq 0\), since otherwise we would reach a contradiction with the equation we are trying to solve, namely
\[\varrho\sin\left(\frac{\theta-\omega\tau}{2}\right)+\sin\left(\frac{\theta+3 \omega\tau}{2}\right)=0.\]
As a consequence, \(\cos\left(\frac{\theta-\omega\tau}{2}\right)\neq 0\) and we can rewrite the above equation as
\[\tan\left(\frac{\theta-\omega\tau}{2}\right)=-\frac{\sin(2\omega\tau)}{\varrho +\cos(2\omega\tau)},\]
so that
\[\theta=\omega\tau-2\text{arctan}\left(\frac{\sin(2\omega\tau)}{\varrho+\cos(2 \omega\tau)}\right)+2k\pi,\quad k\in\mathbb{Z}.\]
Injecting this expression for \(\theta\) into the second equation, we find, after simplification, that
\[\omega=\widetilde{\alpha}\frac{2\sin(2\omega\tau)\left(1-\varrho^{2}\right)}{ 2\varrho\cos(2\omega\tau)+\varrho^{2}+1}.\]
We first remark that \(\omega=0\) is always a solution, giving \(\theta=0\). Now, inspecting the right-hand side of the above expression, we get that
\[\frac{2\widetilde{\alpha}\left(1-\varrho^{2}\right)}{2\varrho\cos(2\omega\tau )+\varrho^{2}+1}<0,\text{ for all }\omega\in\mathbb{R}.\]
As a consequence, we look for the negative minima of the function \(\omega\mapsto\frac{\sin(2\omega\tau)}{2\varrho\cos(2\omega\tau)+\varrho^{2}+1}\), which are given by \(\omega_{0}=\frac{\pi}{2\tau}+\frac{1}{2\tau}\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)+\frac{k\pi}{\tau}\) for \(k\in\mathbb{Z}\); at such minima, one gets that
\[\frac{\sin(2\omega_{0}\tau)}{2\varrho\cos(2\omega_{0}\tau)+\varrho^{2}+1}= \frac{1}{1-\varrho^{2}}.\]
This implies that if \(4\widetilde{\alpha}\tau<\pi+\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)\), then there is no other solution than \((\omega,\theta)=(0,0)\). As a consequence, one needs to assume \(4\widetilde{\alpha}\tau\geq\pi+\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)\) to ensure the existence of at least one non trivial solution. We remark that this condition is consistent with our condition \(4\widetilde{\alpha}\tau\geq\pi\) derived in the case \(\varrho=1\).
Case \(0<\varrho<1\). We start once again from the equation
\[0=(\varrho+\cos(2\omega\tau))\sin\left(\frac{\theta-\omega\tau}{2}\right)+\cos \left(\frac{\theta-\omega\tau}{2}\right)\sin(2\omega\tau).\]
This time, it is possible that \(\varrho+\cos(2\omega\tau)=0\), which gives necessarily that
\[\omega\tau=\pm\frac{1}{2}\arccos(-\varrho)+k\pi,\quad k\in\mathbb{Z}.\]
But if \(\varrho+\cos(2\omega\tau)=0\), then one has \(0=\cos\left(\frac{\theta-\omega\tau}{2}\right)\sin(2\omega\tau)\).
Let us first assume that \(0=\cos\left(\frac{\theta-\omega\tau}{2}\right)\), so that \(\theta=\omega\tau+(2k+1)\pi\) for \(k\in\mathbb{Z}\). Now, looking at the second equation, we find that
\[\omega=2\widetilde{\alpha}\sin(2\omega\tau)=\pm 2\widetilde{\alpha}\sqrt{1- \varrho^{2}},\]
which is possible only if
\[\tau=\frac{\arccos(-\varrho)+k\pi}{4\widetilde{\alpha}\sqrt{1-\varrho^{2}}}, \quad k\geq 0.\]
In conclusion, if \(\tau\) and \(0<\varrho<1\) satisfy \(\tau=\frac{\arccos(-\varrho)+k\pi}{4\widetilde{\alpha}\sqrt{1-\varrho^{2}}}\) for some integer \(k\geq 0\), then
\[(\omega,\theta)=\left(2\widetilde{\alpha}\sqrt{1-\varrho^{2}},\frac{1}{2} \arccos(-\varrho)-\pi\right),\quad\text{ and }\quad(\omega,\theta)=\left(-2 \widetilde{\alpha}\sqrt{1-\varrho^{2}},-\frac{1}{2}\arccos(-\varrho)+\pi \right),\]
are corresponding solutions of the problem.
Next, let us assume instead that \(\sin(2\omega\tau)=0\), implying that \(2\omega\tau=k\pi\) for \(k\in\mathbb{Z}\). Now we readily remark that since \(0<\varrho<1\), we have \(\arccos(-\varrho)\in(\pi/2,\pi)\). As a consequence, we should have
\[\pm\arccos(-\varrho)+2k\pi=p\pi,\quad k,p\in\mathbb{Z},\]
which is impossible; thus \(\sin(2\omega\tau)\neq 0\) and we are back to the case treated before.
We now assume that \(\varrho+\cos(2\omega\tau)\neq 0\). In that case, we can proceed as in the case \(\varrho>1\) and obtain that
\[\theta=\omega\tau-2\arctan\left(\frac{\sin(2\omega\tau)}{\varrho+\cos(2 \omega\tau)}\right)+2k\pi,\quad k\in\mathbb{Z},\]
which gives
\[\omega=\widetilde{\alpha}\frac{2\sin(2\omega\tau)\left(1-\varrho^{2}\right)}{ 2\varrho\cos(2\omega\tau)+\varrho^{2}+1}.\]
Once again, \((\omega,\theta)=(0,0)\) is always a solution. What changes in this case is that now
\[\frac{2\widetilde{\alpha}\left(1-\varrho^{2}\right)}{2\varrho\cos(2\omega \tau)+\varrho^{2}+1}>0,\text{ for all }\omega\in\mathbb{R}.\]
This time, one needs to look at the positive maxima of the map \(\omega\mapsto\frac{\sin(2\omega\tau)}{2\varrho\cos(2\omega\tau)+\varrho^{2}+1}\), which are given by \(\omega_{0}=\frac{\pi}{2\tau}-\frac{1}{2\tau}\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)+\frac{k\pi}{\tau}\) for \(k\in\mathbb{Z}\); at such maxima, one gets that
\[\frac{\sin(2\omega_{0}\tau)}{2\varrho\cos(2\omega_{0}\tau)+\varrho^{2}+1}= \frac{1}{1-\varrho^{2}}.\]
As a consequence, if \(4\widetilde{\alpha}\tau<\pi-\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)\), then there is no other solution than \((\omega,\theta)=(0,0)\). To obtain at least one non trivial positive solution, one needs to impose that \(4\widetilde{\alpha}\tau\geq\pi-\arccos\left(\frac{2\varrho}{1+\varrho^{2}}\right)\). Once again, this condition is consistent with the condition \(4\widetilde{\alpha}\tau\geq\pi\) derived in the case \(\varrho=1\). We can also derive a second simple condition which ensures the existence of a non trivial solution by looking at the behavior near \(\omega\sim 0\) where we have
\[\widetilde{\alpha}\frac{2\sin(2\omega\tau)\left(1-\varrho^{2}\right)}{2 \varrho\cos(2\omega\tau)+\varrho^{2}+1}\sim\frac{4\widetilde{\alpha}\tau\left( 1-\varrho\right)}{1+\varrho}\omega.\]
Thus if
\[4\widetilde{\alpha}\tau>\frac{1+\varrho}{1-\varrho},\]
then there exists at least one positive solution \(\omega\in(0,\pi/2\tau)\) to the above equation (and also one negative solution in \((-\pi/2\tau,0)\) by symmetry). Note that the condition \(4\widetilde{\alpha}\tau>\frac{1+\varrho}{1-\varrho}\) is consistent with the condition \(4\widetilde{\alpha}\tau>1\) derived in the case \(\varrho=0\).
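Putting the pieces together, the solution branches shown in the Examples below (Figures 29 and 31) can be traced numerically with a minimal root-finding sketch (our own; it assumes scipy, selects \(\theta\) via the principal branch of the arctangent folded into \([0,2\pi)\), and its search window \((0,\pi/(2\tau))\) suits the case \(0<\varrho<1\); for \(\varrho>1\) the window would need to be widened):

```python
import numpy as np
from scipy.optimize import brentq

def branch_points(alpha_t, tau, rho, n_grid=2000):
    """All solutions omega in (0, pi/(2 tau)) of the dispersion relation,
    with the associated spatial frequency theta in [0, 2 pi)."""
    g = lambda w: w - 2 * alpha_t * np.sin(2 * w * tau) * (1 - rho**2) / (
        2 * rho * np.cos(2 * w * tau) + rho**2 + 1)
    ws = np.linspace(1e-6, np.pi / (2 * tau) - 1e-6, n_grid)
    vals = np.array([g(w) for w in ws])
    out = []
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        w = brentq(g, ws[i], ws[i + 1])
        theta = (w * tau - 2 * np.arctan(np.sin(2 * w * tau)
                 / (rho + np.cos(2 * w * tau)))) % (2 * np.pi)
        out.append((w, theta))
    return out

# tau = 12 ms, 1/alpha~ = 15 ms as in Figure 29; rho = 0.633 yields the two
# marked points (omega, theta) ~ (0.12, 2.82) and (0.08, 4.60)
for rho in (0.3, 0.633):
    print(rho, branch_points(1 / 15, 12.0, rho))
```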
Examples. To illustrate the different scenarios and their possible interpretations in terms of brain oscillations, we take here two distinct examples corresponding to the situations described above. We report in Figure 29 the non trivial branches of solutions corresponding to positive values of \(\omega\), as a function of \(\varrho\) for fixed values of the time delay \(\tau=12\)ms and \(1/\widetilde{\alpha}=15\)ms. These values are biologically plausible and correspond to the values used in [1]. Upon denoting
\[\varrho_{c}:=\frac{4\widetilde{\alpha}\tau-1}{4\widetilde{\alpha}\tau+1}\in(0,1),\]
for all \(\varrho\in[0,\varrho_{c})\), we get the existence of a unique branch of solution (blue curve) for the temporal frequency \(\omega/2\pi\). A second branch (light blue curve) of solutions emerges precisely at \(\varrho=\varrho_{c}\). These two branches cross at \(\varrho=1\) where \(\omega/2\pi=\frac{1}{4\tau}\) and terminate at a value of \(\varrho=\varrho_{0}\sim 1.06\) (see Figure 29(b)). The branch of solutions which exists for all values of \(\varrho\in[0,\varrho_{0}]\) has an associated spatial frequency which is almost constant, with a value of approximately \(2.82\in(0,\pi)\). On the other hand, the branch of solutions which only exists for values of \(\varrho\in(\varrho_{c},\varrho_{0}]\) has an associated spatial frequency which lies in \((\pi,2\pi)\). Let us remark that at \(\varrho=1\), the spatial frequencies of the two solutions are different and symmetric with respect to \(\pi\). Furthermore, at \(\varrho=\varrho_{0}\sim 1.06\) where the two branches collide the associated spatial frequency is \(\theta\sim\pi\). Let us finally note that for \(\varrho\in[1,\varrho_{0}]\), the spatial frequencies of the two branches are almost identical, although the secondary branch is slightly above the primary one.
Correspondingly, we illustrate in Figure 30 the space-time plot of two points along the two different branches, which correspond to the orange and dark red points in Figure 29. The corresponding values are \((\omega,\theta)\sim(0.12,2.82)\) and \((\omega,\theta)\sim(0.08,4.60)\), both associated with the same value of \(\varrho\sim 0.633\). In
Figure 29: _Representation of the temporal frequency \(\omega/2\pi\) (in Hz) and the spatial frequency \(\theta\in[0,2\pi)\) (panel (a) and (c) respectively) as a function of \(\varrho\) for fixed values of the time delay \(\tau=12\)ms and \(1/\widetilde{\alpha}=15\)ms. Panel (b) represents a zoom of panel (a) near \(\varrho\sim 1\) where the two branches terminate._
the first panel of Figure 30(a), which corresponds to the point on the branch of solution defined for all \(\varrho\in[0,\varrho_{0}]\), since the corresponding value of the spatial frequency is \(\theta\in(0,\pi)\), we observe an apparent backward propagation, while in the second panel of Figure 30(b), we observe a forward propagation. This corresponds to the point on the lower branch of the solutions defined for values of \(\varrho\in(\varrho_{c},\varrho_{0}]\) with associated spatial frequency \(\theta\in(\pi,2\pi)\). From a biological point of view, this indicates that the more interesting parameter range is \(\varrho\in(\varrho_{c},\varrho_{0}]\), together with the corresponding branch of solutions emerging at \(\varrho=\varrho_{c}\) from the trivial solution \((\omega,\theta)\sim(0,0)\), since in this case we obtain an oscillatory traveling wave propagating forward into the network.
In Figure 31, we show the global structure of the branches for a second example, with fixed values of the time delay \(\tau=12\)ms and \(1/\widetilde{\alpha}=12\)ms, which are still biologically relevant values. We observe that the two branches terminate at a value of \(\varrho=\varrho_{0}\sim 3.03\) with a crossing at \(\varrho=1\). For \(\varrho\in[1,\varrho_{0}]\), the primary branch (blue curve) has a temporal frequency below the secondary branch (light blue curve); the difference in frequencies is almost \(5\)Hz for values of \(\varrho\sim 2\). Even more interestingly, we see that the corresponding spatial frequencies along the secondary branch decrease from \(2\pi\) to a final value below \(\pi\) at \(\varrho_{0}\), indicating that by increasing the value of \(\varrho\) we can reverse the direction of propagation from forward to backward oscillatory traveling waves. The transition occurs for \(\varrho\sim 1.65\), that is for values of \(1/\widetilde{\lambda}\sim 7-8\)ms. We further note that the associated temporal frequencies in the backward regime are around \(25\)Hz (\(\beta\)-frequency regime), much higher than for forward traveling waves, whose temporal frequencies range from \(0\)Hz to \(20\)Hz (and include the \(\alpha\)-frequency regime).
Summary. In this section, we saw that including temporal delays in the time-continuous version of the system produces non-trivial dynamics that can be characterized analytically. Contrary to the discrete version of the system, which can only be analyzed for a few discrete time delays \(k=1,2,...\), the continuous version is informative for a wide range of biologically plausible delays, and the resulting frequencies are very diverse. In particular, we observed homogeneous synchronized oscillations in the gamma band \((30-60Hz)\)
Figure 30: _Space-time plot of \(\cos(\omega t+\theta j)\) for values of \((\omega,\theta)\) which correspond to the orange and dark red points of Figure 29 lying respectively on the blue and light blue curves, time \(t\) is in ms. In (a), the temporal frequency is \(\omega/2\pi\sim 19\)Hz while in (b) it is \(\omega/2\pi\sim 13.3\)Hz. In (a), since \(\theta\in(0,\pi)\), we observe a backward propagation of the wave while in (b) we have a forward propagation since \(\theta\in(\pi,2\pi)\)._
that emerged when the feed-forward error correction term \(\widetilde{\alpha}\) was strong enough (roughly, with \(1/\widetilde{\alpha}<8ms\)). But we also found situations in which the oscillatory activity was not homogeneous, but propagated as a travelling wave through the network. With biologically plausible values for the various parameters, the waves could propagate forward in the alpha-band (\(7-15Hz\)) frequency range, and when the feedback error correction term \(\widetilde{\lambda}\) was strong enough (e.g. \(1/\widetilde{\lambda}<8ms\) while \(1/\widetilde{\alpha}=12ms\)), they started moving backward at a faster frequency in the beta-band (\(15-30Hz\)). Altogether, this pattern of results is compatible with various (sometimes conflicting) observations from the Neuroscience literature [1, 4], and informs us about the conditions in which the corresponding dynamic behaviors might emerge in the brain.
## 6 Discussion
### Contributions
We proposed a mathematical framework to explore the properties and stability of neural network models of the visual system comprising a hierarchy of visual processing areas (or "layers"), mutually connected according to the principles of predictive coding. Using a discrete model, as is typically done in the recent deep learning literature, we introduced the amplification factor function, which serves to characterize the interesting (i.e., "marginally stable") regions as a function of the model hyperparameters. When considered on an infinite domain, we showed that the response of our linear neural network to a Dirac delta initialization presents a universal behavior given by a Gaussian profile with fixed variance and which spreads at a given speed. Both speed and variance could be explicitly characterized in terms of the model hyperparameters. This universal Gaussian profile was then the key to understanding the long-time dynamics of the linear neural network set on a semi-infinite domain with a fixed constant source term at the left boundary of the network.
At first, we ignored the influence of neuronal selectivity and used feed-forward and feedback connection
Figure 31: _Representation of the temporal frequency \(\omega/2\pi\) (in Hz) and the spatial frequency \(\theta\in[0,2\pi)\) (panel (a) and (b) respectively) as a function of \(\varrho\in[0,1]\) for fixed values of the time delay \(\tau=12\)ms and \(1/\widetilde{\alpha}=12\)ms._
matrices set to the identity matrix. When \(\beta=0\) (no feedforward update after the network initialization), we observed that hyperparameters \(\alpha\) and \(\lambda\) compete for forward and backward propagation, respectively. When \(\beta>0\), the constant feedforward input makes things more complex, with \(\lambda\) (feedback error correction) now competing with \(\beta+\alpha\) (feedforward drive and feedforward error correction). In the special case when \(\alpha+\lambda=1\), a second (but spurious) mode of propagation with rapidly alternating activity can emerge, whose direction is determined by the competition between \(\alpha\) and \(\beta+\lambda\).
Next, to evaluate the influence of a more complex and functionally relevant connectivity matrix, we defined _neural assemblies_ reflecting the eigenvectors of the matrix. Each of these neural assemblies can be analyzed separately, and its behavior depends on the corresponding eigenvalue (in addition to the hyperparameters \(\alpha\), \(\beta\) and \(\lambda\), as explained above). Different assemblies can simultaneously support different dynamics, so that some may propagate information forward, others may not propagate at all (acting as a filter on the inputs), while yet others might propagate backward (e.g. carrying "priors" set by preceding activations). We again saw a number of cases where "fringe" or spurious behavior arose, e.g. rapid alternations in activity, and understood that this could be caused by the discrete nature of our model, when the time steps defining the model's temporal resolution are too coarse.
The time-continuous version of the model helped us overcome this issue, and characterize dynamics in the limit of infinitely small time steps. The amplification factor function is still crucial in this situation, but it produces more robust results, without fringe behavior or spurious oscillations. In particular, the analysis of stability and propagation direction/speed was greatly simplified in this continuous case.
The same time-continuous model also allowed us to investigate the inclusion of biologically plausible communication delays between layers. In this case, we demonstrated the emergence of genuine oscillatory dynamics and travelling waves in various frequency bands compatible with neuroscientific observations (alpha-band from 7 to 15Hz, beta-band from 15 to 30Hz and gamma-band from 30 to 60Hz).
Finally, we considered fully continuous versions of the model, not only in time but also in space, both across network depth (across neuronal layers) and width (across neurons in the same layer). This mathematical abstraction revealed that our model could be understood as a transport equation, and that it produced diffusion dynamics.
### Biological interpretations
The mathematical framework that we proposed naturally lends itself to interpretation in biological terms. The model's hyperparameters reflect the strength of feedforward and feedback signalling in the brain. These are determined not only by axonal density and synaptic strength (that vary slowly throughout development and learning), but can also be gated by other brain regions and control systems, e.g. through the influence of neurotransmitters, and thus vary much more dynamically. For instance, the feedforward drive \(\beta\) could be more active to capture sensory information immediately after each eye movement, and decrease over time until the next eye movement [21]; similarly, feedback error correction \(\lambda\) could dominate over the feedforward error correction \(\alpha\) for one given second (e.g. because top-down attention drives expectation signals) and decrease in the next second (e.g. because unexpected sensory inputs have been detected) [31]. In this dynamic context, it is fundamental to be able to characterize the dependence of the system's behavior on the exact hyperparameter values. Fortunately, our framework reveals that when the hyperparameters vary, the stability of the system, and its ability to propagate signals and maintain activity, change in predictable
ways. Some hyperparameter combinations would not support signal propagation at all; others would render the system unstable, e.g. because of runaway excitation. Under the (reasonable) assumption that the brain behaves as a predictive coding system, our equations inform us about the plausible parameter regimes for the brain.
Using our time-continuous model, we found that predictive coding dynamics associated with inter-areal communication delays result in oscillatory activity. This finding resonates with both experimental observations and neuroscientific theories [1, 4].
Bastos and colleagues [4, 5] suggested that feedforward error correction could be accompanied by gamma-band oscillations; this suggestion was verified in our model, with synchronized gamma rhythms appearing when the corresponding hyperparameter \(\widetilde{\alpha}\) was strong enough (and with a frequency that monotonically increased from 30 to 60Hz when the value of \(1/\widetilde{\alpha}\) decreased from 10ms to 5ms). However, considering that the communication delay \(\tau\) between two adjacent brain regions is a fixed property of the system (a reasonable first approximation), our analysis shows that this oscillatory mode will only happen for a narrow range and a very precise combination of hyperparameter values \(\widetilde{\alpha}\) and \(\widetilde{\lambda}\) (see Figure 27). This could explain why gamma-band oscillations are not always found during electrophysiological recording experiments [28].
By relaxing the phase delay between brain areas, our equations also revealed the potential emergence of oscillatory travelling waves across brain regions, similar to those observed in human EEG experiments [1, 2, 3, 24]. Again, for a fixed communication delay \(\tau\), these waves may only happen for specific values and combinations of the hyperparameters \(\widetilde{\alpha}\) and \(\widetilde{\lambda}\). In certain regimes (see e.g. Figure 31 with \(1/\widetilde{\alpha}=1/\widetilde{\lambda}=12ms\)), two waves might coexist at the same frequency, but going in opposite directions. This matches experimental reports of co-occurring feedforward and feedback waves in the brain [2, 3]. Upon increasing the feedback strength \(\widetilde{\lambda}\), we saw that an initial alpha-band (7-15Hz) feed-forward wave could accelerate (towards the beta-band, 15-30Hz) and eventually reverse its direction, producing a feedback wave. Similar reversal phenomena have also been reported for oscillatory waves in the human brain [2, 3, 24].
### Limitations and future extensions
"All models are wrong, but some are useful" [7]. Our model, like all mathematical models, is based on simplifications, approximations and assumptions, and can only be valid under those assumptions. Some (if not all) of these assumptions are questionable, and future work will need to determine the robustness of the model, or its potential modifications, when relaxing these assumptions.
Even though we assumed that the brain follows the general principles of predictive coding [27], our system's hyperparameters can in fact be modulated to accommodate many variants of this framework [9, 31, 32, 19]. One other important assumption that we made was to simplify the connectivity matrices between neuronal layers, which determine the selectivity of each neuron, and thus the functionality of the entire system. Even when we moved past the "identity" assumption, the connection matrices that we adopted were constrained to be symmetric, and most importantly, were assumed to be similar from one layer to the next. This made our equations tractable, but it constitutes a clear restriction, and a departure from biological plausibility that will need to be addressed in future extensions. Another important limitation that we wish to relax in future works is the fact that we have considered a linear model, although real biological networks and deep neural networks are intrinsically nonlinear. Going beyond the linear analysis that we have presented here would require the development of new theoretical techniques, which constitutes
a major open problem to be addressed in forthcoming works.
Aside from exploring richer patterns of connectivity between adjacent layers, another natural extension of the model could be to incorporate long-range interactions, beyond the immediately adjacent layers. For instance, one could explore a simple second-order layer model (illustrated in Figure 32), whose scalar version reads:
\[e_{j}^{n+1}-\beta_{1}e_{j-1}^{n+1}-\beta_{2}e_{j-2}^{n+1}=\alpha_{1}e_{j-1}^{n }+\alpha_{2}e_{j-2}^{n}+(1-\beta^{*}-\lambda^{*}-\alpha^{*})e_{j}^{n}+\lambda _{1}e_{j+1}^{n}+\lambda_{2}e_{j+2}^{n},\quad j\in\mathbb{Z}, \tag{6.1}\]
where we have set \(\beta^{*}:=\beta_{1}+\beta_{2}\), \(\alpha^{*}:=\alpha_{1}+\alpha_{2}\) and \(\lambda^{*}:=\lambda_{1}+\lambda_{2}\). Once again the fate of such a system (6.1) would be dictated by the amplification factor function
\[\rho(\theta)=\frac{\alpha_{1}e^{-\mathbf{i}\theta}+\alpha_{2}e^{-2\mathbf{i} \theta}+1-\beta^{*}-\lambda^{*}-\alpha^{*}+\lambda_{1}e^{\mathbf{i}\theta}+ \lambda_{2}e^{2\mathbf{i}\theta}}{1-\beta_{1}e^{-\mathbf{i}\theta}-\beta_{2}e ^{-2\mathbf{i}\theta}},\quad\theta\in[-\pi,\pi].\]
This as well as higher-order interaction models, possibly including "hub" regions like the thalamus that would be mutually interconnected with all layers in the hierarchy [20], are promising directions for follow-up studies.
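As with the first-order model, the stability of such extensions can be screened numerically; a minimal sketch (our own; the parameter values are placeholders) evaluating the amplification factor of (6.1) over \(\theta\in[-\pi,\pi]\):

```python
import numpy as np

def rho_second_order(theta, a1, a2, b1, b2, l1, l2):
    a, b, l = a1 + a2, b1 + b2, l1 + l2       # alpha*, beta*, lambda*
    num = (a1 * np.exp(-1j * theta) + a2 * np.exp(-2j * theta)
           + 1 - b - l - a
           + l1 * np.exp(1j * theta) + l2 * np.exp(2j * theta))
    den = 1 - b1 * np.exp(-1j * theta) - b2 * np.exp(-2j * theta)
    return num / den

thetas = np.linspace(-np.pi, np.pi, 801)
# marginal stability requires max |rho(theta)| <= 1 over the whole grid
print(np.abs(rho_second_order(thetas, 0.2, 0.1, 0.1, 0.05, 0.2, 0.1)).max())
```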
### Conclusion
The mathematical framework proposed here, guided by both computational considerations and neuroscientific inspiration, can be of use to both fields. In machine learning, the framework may serve to provide guarantees about the stability of a predictive coding system given its chosen hyperparameters, or to choose a valid range for these hyperparameters. For neuroscientists, our equations can be used directly to understand biological vision and to make predictions about biological behavior in various situations compatible with predictive coding. But this general mathematical framework (a number of hierarchically connected layers with source terms, boundary conditions, feedforward and feedback connectivity matrices, analyzed via its amplification factor function) may also be adapted to fit other models of biological perception and cognition beyond predictive coding. We hope that the various derivations made in the present work can serve as a template for future applications in this direction. And more generally, that this study may be helpful to the larger computational neuroscience community.
Figure 32: _Illustration of the network structure of model (6.1) where blue arrows indicate the new long-range interactions coming from layer \(j\pm 2\)._
## Acknowledgements
G.F. acknowledges support from the ANR via the projects: Indyana under grant agreement ANR-21-CE40-0008, ChaMaNe under grant agreement ANR-19-CE40-0024 and an ANITI (Artificial and Natural Intelligence Toulouse Institute) Research Chair. R.V. is supported by ANR OSCI-DEEP (ANR-19-NEUC-004) and an ANITI (Artificial and Natural Intelligence Toulouse Institute) Research Chair (ANR-19-PI3A-0004).
|
2302.12084 | Dermatological Diagnosis Explainability Benchmark for Convolutional
Neural Networks | In recent years, large strides have been taken in developing machine learning
methods for dermatological applications, supported in part by the success of
deep learning (DL). To date, diagnosing diseases from images is one of the most
explored applications of DL within dermatology. Convolutional neural networks
(ConvNets) are the most common (DL) method in medical imaging due to their
training efficiency and accuracy, although they are often described as black
boxes because of their limited explainability. One popular way to obtain
insight into a ConvNet's decision mechanism is gradient class activation maps
(Grad-CAM). A quantitative evaluation of the Grad-CAM explainability has been
recently made possible by the release of DermXDB, a skin disease diagnosis
explainability dataset which enables explainability benchmarking of ConvNet
architectures. In this paper, we perform a literature review to identify the
most common ConvNet architectures used for this task, and compare their
Grad-CAM explanations with the explanation maps provided by DermXDB. We
identified 11 architectures: DenseNet121, EfficientNet-B0, InceptionV3,
InceptionResNetV2, MobileNet, MobileNetV2, NASNetMobile, ResNet50, ResNet50V2,
VGG16, and Xception. We pre-trained all architectures on a clinical
disease dataset, and fine-tuned them on a DermXDB subset. Validation results on
the DermXDB holdout subset show an explainability F1 score of between
0.35-0.46, with Xception displaying the highest explainability performance.
NASNetMobile reports the highest characteristic-level explainability
sensitivity, despite its mediocre diagnosis performance. These results
highlight the importance of choosing the right architecture for the desired
application and target market, underline the need for additional explainability
datasets, and further confirm the need for explainability benchmarking that
relies on quantitative analyses. | Raluca Jalaboi, Ole Winther, Alfiia Galimzianova | 2023-02-23T15:16:40Z | http://arxiv.org/abs/2302.12084v1 | # Dermatological Diagnosis Explainability Benchmark for Convolutional Neural Networks
###### Abstract
In recent years, large strides have been taken in developing machine learning methods for various dermatological applications, supported in part by the widespread success of deep learning. To date, diagnosing diseases from images is one of the most explored applications of deep learning within dermatology. Convolutional neural networks (ConvNets) are the most commonly used deep learning method in medical imaging due to their training efficiency and accuracy, although they are often described as black boxes because of their limited explainability. One popular way to obtain insight into a ConvNet's decision mechanism is gradient class activation maps (Grad-CAM). A quantitative evaluation of the Grad-CAM explainability has been recently made possible by the release of DermXDB, a skin disease diagnosis explainability dataset which enables benchmarking the explainability performance of ConvNet architectures. In this paper, we perform a literature review to identify the most common ConvNet architectures used for this task, and compare their Grad-CAM explainability performance with the explanation maps provided by DermXDB. We identified 11 architectures: DenseNet121, EfficientNet-B0, InceptionV3, InceptionResNetV2, MobileNet, MobileNetV2, NASNetMobile, ResNet50, ResNet50V2, VGG16, and Xception. We pre-trained all architectures on a clinical skin disease dataset, and then fine-tuned them on a subset of DermXDB. Validation results on the DermXDB holdout subset show an explainability F1 score between 0.35 and 0.46, with Xception displaying the highest explainability performance, and InceptionResNetV2, ResNet50, and VGG16 displaying the lowest. NASNetMobile reports the highest characteristic-level explainability sensitivity, despite its mediocre diagnosis performance. These results highlight the importance of choosing the right architecture for the desired application and target market, underline the need for additional explainability datasets, and further confirm the need for explainability benchmarking that relies on quantitative analyses rather than qualitative assessments.
deep learning, dermatology, explainability, benchmark, review
## 1 Introduction
With an expected shortage of approximately ten million healthcare professionals by 2030 [World Health Organization, 2016], the world is facing a massive healthcare crisis. Automation has been proposed as a solution to the scarcity of medical professionals, with the Food and Drugs Administration in the United States approving medical devices based on artificial intelligence for marketing to the public [U.S. Food and Drug Administration, 2018].
This development is due in part to the advancement in machine learning using unstructured data. Ever since Krizhevsky et al. (2017) won the ImageNet Large Scale Visual Recognition Challenge (Russakovsky et al., 2015) using a convolutional neural network (ConvNet), ConvNets have been at the forefront of machine learning based automation. Employed primarily in healthcare for imaging applications, ConvNets have been used for disease diagnosis (Gao et al., 2019), cell
counting (Falk et al., 2019), disease severity assessment (Gulshan et al., 2016), disease progression estimation (Kijowski et al., 2020), lesion or anatomical region segmentation (Hesamian et al., 2019; Ramesh et al., 2021), etc. Esteva et al. (2017) were the first to demonstrate that ConvNets can achieve expert-level performance in dermatological diagnosis using dermoscopy images. Since then, dermatology has embraced ConvNets as a solution to various diagnosis and segmentation tasks (Esteva et al., 2017; Zhang et al., 2019; Jinnai et al., 2020; Haenssle et al., 2020; Roy et al., 2022).
Despite these considerable advancements in medical imaging, there has not yet been a widespread adoption of machine learning based automation in the clinical workflow. One of the main hurdles that detract from adoption is the lack of ConvNet explainability (Kelly et al., 2019), an issue exacerbated by the recently implemented legislation aimed at ensuring that automated methods can offer an explanation into their decision mechanisms (Goodman and Flaxman, 2017). Different post-hoc explainability methods have been proposed as a way to explain a ConvNet's decisions (Bai et al., 2021; Selvaraju et al., 2017; Lundberg and Lee, 2017; Ribeiro et al., 2016). Gradient class activation maps (Grad-CAM) is currently the most commonly used explainability method within medical imaging, due to its intrinsic ease of interpretation and its low computational requirements. However, validating the resulting explanations is an expensive, time consuming process that requires domain expert intervention, and thus most explainability validations are performed as small, qualitative analyses. With the release of DermXDB (Jalaboi et al., 2022), it became possible to quantitatively analyse the explainability of ConvNets trained for diagnosing six skin conditions: acne, actinic keratosis, psoriasis, seborrheic dermatitis, viral warts, and vitiligo.
The purpose of this benchmark is to provide the means to quantitatively compare the explainability of the state-of-the-art approaches to dermatological diagnosis using photographic imaging. Our contributions are twofold:
1. We perform a comprehensive systematic review to reveal the usage of ConvNets for the task of dermatological diagnosis using photographic images,
2. We benchmark the identified ConvNets for diagnostic and explainability performance and compare them with eight expert dermatologists.
## 2 Background
### Machine learning methods in dermatological diagnosis
After the renewed interest in artificial intelligence and machine learning that started in 2012, practitioners from both academia and the industry began investigating automated methods for dermatological applications (Thomsen et al., 2020; Jeong et al., 2022). Until 2017, the vast majority of articles applying machine learning methods to dermatological problems were using classical models such as support vector machines (Liu et al., 2012; Sabouri et al., 2014), and linear or logistic regression (Kaur et al., 2015; Kefel et al., 2016). These models were trained using hand-crafted features or features extracted using classical computer vision methods such as gray-level co-occurrence matrices (Shimizu et al., 2014), Sobel and Hessian filters (Arroyo and Zapirain, 2014), or HOS texture extraction (Shrivastava et al., 2016). However, the main drawback of classical computer vision approaches is that hand-crafting features is an expensive, time-consuming process, while their automated extraction is too sensitive to the environmental factors of the image acquisition (e.g. lighting, zoom).
Esteva et al. (2017) were the first to propose a ConvNet for diagnosing skin conditions from dermoscopy images. Their ConvNet reached expert-level performance without requiring any hand-crafted features or classical computer vision models, thus paving the way towards the current popularity of ConvNets in dermatological applications.
One key component to the rise of ConvNets was the introduction of large scale dermatological datasets. The International Skin Imaging Collaboration (ISIC) challenge dataset (Codella et al., 2018) is one of the best known open access dermoscopy datasets, containing 25,331 images distributed over nine diagnostic categories. Large clinical image datasets are also available for research purposes, such as SD-260 (Sun et al., 2016) which consists of 20,600 clinical images of 260 different skin diseases, and DermNetNZ (DermNetNZ, 2021) which contains more than 25,000 clinical images.
Aided by the release of increasingly more performant architectures, their publicly available pre-trained weights on the ImageNet (Deng et al., 2009) dataset, and the recently published public dermatological datasets, the vast majority of research contributions in machine learning applications for dermatology rely on ConvNet architectures. ConvNets have been extensively used in lesion diagnosis (Tschandl et al., 2017; Han et al., 2018; Reshma et al., 2022) and lesion segmentation (Yuan et al., 2017; Wu et al., 2022; Baig et al., 2020) on different modalities relevant for the domain. Attempts at explaining the decisions taken by ConvNets were made by several groups (Tschandl et al., 2020; Tanaka et al., 2021), but no quantitative analysis was performed.
### Explainability in convolutional neural networks
ConvNets have, from their very beginning, been notoriously difficult to interpret and explain. Interpretability is generally considered the ability to understand the internal structure and properties of a ConvNet architecture, while explainability is defined as a ConvNet's capacity to offer plausible arguments in favour of its decision (Roscher et al., 2020). Within healthcare, explainability is especially important due to its intrinsic ability to interact with domain experts in a common vocabulary (Kelly et al., 2019). Although some architecture or domain-specific explainability methods exist, most medical imaging research articles employ attribution-based methods due to their ease of use and open source access (Singh et al., 2020; Bai et al., 2021).
There are two main ways of implementing attribution-based methods: through perturbation and by using the ConvNet's gradients. Perturbation-based methods, such as Shapley values (Lipovetsky and Conklin, 2001), LIME (Ribeiro et al., 2016), or SharpLIME (Graziani et al., 2021), rely on modifying the original image and then evaluating the changes in the ConvNet's prediction. For example, LIME uses a superpixel algorithm to split the image into sections, and randomly selects a subset of superpixels to occlude. The target ConvNet then performs an inference step on the perturbed image. This procedure is run multiple times to identify the superpixels that lead to the most drastic change in the ConvNet's prediction. SharpLIME uses hand-crafted segmentations to split the image into relevant sections, and then proceeds with the perturbation process defined in LIME. The main drawback of perturbation based methods is the need to run the prediction algorithm multiple times, which leads to high computational costs and long running times.
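To make the perturbation idea concrete, the following is a minimal, hypothetical sketch of superpixel occlusion in the style of LIME, assuming an RGB image array and a `predict` function that returns class probabilities; full LIME additionally fits a weighted linear surrogate model on the perturbation results rather than averaging score drops.

```python
import numpy as np
from skimage.segmentation import slic

def occlusion_importance(image, predict, target_class, n_segments=50, n_trials=200, seed=0):
    """Estimate per-superpixel importance by randomly occluding superpixel subsets."""
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments)         # superpixel map, one id per pixel
    ids = np.unique(segments)
    base = predict(image[None])[0, target_class]          # score on the unperturbed image
    drops, counts = np.zeros(len(ids)), np.zeros(len(ids))
    for _ in range(n_trials):
        off = rng.random(len(ids)) < 0.5                  # random subset of superpixels to hide
        perturbed = image.copy()
        for i, sid in enumerate(ids):
            if off[i]:
                perturbed[segments == sid] = 0.0          # occlude with black
        score = predict(perturbed[None])[0, target_class]
        drops[off] += base - score                        # attribute the score drop
        counts[off] += 1
    return drops / np.maximum(counts, 1)                  # mean drop per superpixel
```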
Gradient-based methods, such as saliency maps (Simonyan and Zisserman, 2015), guided backpropagation (Springenberg et al., 2014), gradient class-activation maps (Grad-CAM) (Selvaraju et al., 2017), or layer-wise relevance propagation (Bach et al., 2015), use a ConvNet's backpropagation step to identify the areas in an image that contribute the most to the prediction. In general, gradient-based methods compute the gradient of a given input in relation to the prediction, and apply different post-processing methods to the output. In the case of Grad-CAM, image features are extracted by forward propagating the image until the last convolutional layer. Then, the gradient is set to 0 for all classes except the target class, and the signal is backpropagated to the last convolutional layer. The extracted image features that directly contribute to the backpropagated signal constitute the Grad-CAM for the given class. Since the analysis can be performed at the same time as the inference itself and only requires one iteration, Grad-CAM is often used in research and industrial applications (Pereira et al., 2018; Young et al., 2019; Tschandl et al., 2020; Hepp et al., 2021; Jalaboi et al., 2023). Due to its popularity, in this paper we will use Grad-CAM to benchmark the explainability of commonly used ConvNet architectures.
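The Grad-CAM procedure described above can be sketched in a few lines of Keras; this is a minimal illustration rather than the benchmark code, and the `last_conv_layer` name is a placeholder that depends on the architecture.

```python
import tensorflow as tf

def grad_cam(model, image, class_idx, last_conv_layer="last_conv"):
    """Weight the last convolutional feature maps by their pooled gradients."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None])
        class_score = preds[:, class_idx]               # gradient flows only for the target class
    grads = tape.gradient(class_score, conv_out)        # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pool the gradients
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)[0]                            # keep positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalise to [0, 1]
```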
## 3 Material and methods
### Literature review
We performed a systematic literature review on PubMed, following the methodology introduced by Thomsen et al. (2020). The query, described in Table 1, focused on dermatological applications of deep learning. A total of 3,650 articles were retrieved. We excluded articles that focused on domains other than dermatology, articles that did not include an original contribution in disease classification, articles using modalities other than photographic images, articles using methods other than ConvNets, and articles using proprietary ConvNets.
\begin{table}
\begin{tabular}{l c l c l} \hline \hline Search term & & Search term & & Search term \\ \hline ((((dermatology[MeSH Terms]) OR & AND & ((neural network[MeSH Terms]) OR & AND & (English[Language]) \\ (skin disease[MeSH Terms]) OR & & (machine learning[MeSH Terms]) OR & & \\ (skin lesion[MeSH Terms])) & & (artificial intelligence[MeSH Terms]) OR & & \\ & & (deep learning) OR & & \\ & & (deep neural network) OR & & \\ & & (convolutional neural network)) & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Search query used on PubMed to identify the list of relevant articles. We searched for articles focused on dermatology, using deep learning methods, written in English. The query was last performed on the 20th of February 2023.
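As an aside, the query in Table 1 can be reproduced programmatically, for example with Biopython's Entrez module; this sketch is our own illustration, not the authors' retrieval code, and the e-mail address is a placeholder required by NCBI.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

query = (
    "((dermatology[MeSH Terms]) OR (skin disease[MeSH Terms]) OR (skin lesion[MeSH Terms])) "
    "AND ((neural network[MeSH Terms]) OR (machine learning[MeSH Terms]) "
    "OR (artificial intelligence[MeSH Terms]) OR (deep learning) "
    "OR (deep neural network) OR (convolutional neural network)) "
    "AND (English[Language])"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
print(record["Count"])        # total number of matching articles
print(record["IdList"][:10])  # first few PubMed IDs
```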
### Explainability benchmark
#### 3.2.1 Explainability dataset
For explainability benchmarking, we use DermXDB, a skin disease diagnosis explainability dataset published by Jalaboi et al. (2022). The dataset consists of 524 images sourced from DermNetNZ (DermNetNZ, 2021) and SD-260 (Sun et al., 2016), and labeled with diagnoses and explanations in the form of visual skin lesion characteristics by eight board-certified dermatologists. To match the Grad-CAM output, we focus on the characteristic localization task.
#### 3.2.2 Diagnosis evaluation
For establishing the expert-level diagnosis performance, we compare each dermatologist with the reference standard diagnosis. We follow the same approach for benchmarking the diagnosis performance of the ConvNets. We evaluate the performance using the categorical F1 score, sensitivity, and specificity, defined as:
\[\text{F1 score}=\frac{2TP}{2TP+FP+FN}, \tag{1}\]
\[\text{Sensitivity}=\frac{TP}{TP+FN}, \tag{2}\]
\[\text{Specificity}=\frac{TN}{TN+FP}, \tag{3}\]
where the true positives \(TP\) represent correctly classified samples, the false positives \(FP\) represent samples incorrectly classified as part of the target class, the false negatives \(FN\) represent samples of the target class incorrectly classified as being part of a different class, and the true negatives \(TN\) represent samples correctly identified as not being part of the target class.
#### 3.2.3 Explainability evaluation
For establishing expert-level explainability performance, we compare the attention masks of each dermatologist with the aggregated fuzzy union of attention masks created by the other seven dermatologists (explanation maps). More specifically, we define the _image-level explanation maps_ as the union of all characteristics segmented by all dermatologists for an image, and the _characteristic-level explanation maps_ as the union of all segmentations for each characteristic for an image. Figure 1 illustrates the mask creation process for a psoriasis case. The ConvNet Grad-CAM attention maps are compared with explanation maps derived from all eight dermatologist evaluations.
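A minimal sketch of this aggregation, assuming the fuzzy union is an element-wise maximum and that `masks` is a hypothetical dictionary mapping (dermatologist, characteristic) pairs to [0, 1] attention masks of equal shape.

```python
import numpy as np

def fuzzy_union(masks):
    """Fuzzy union of a list of [0, 1] attention masks (element-wise maximum)."""
    return np.maximum.reduce(masks)

# characteristic-level map: union of every rater's mask for one characteristic
plaque_map = fuzzy_union([m for (rater, char), m in masks.items() if char == "plaque"])

# image-level map: union of all characteristics segmented by all raters
image_map = fuzzy_union(list(masks.values()))
```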
These two types of explanation maps offer a way to check whether the ConvNets take into account the entire area selected by dermatologists as important to their decision, and whether they focus on specific characteristics when making their decisions. To quantify the similarity between the Grad-CAMs and the explanation maps, we compute the F1 score, sensitivity and specificity following their fuzzy implementation defined in (Crum et al., 2006), described as:
\[\text{F1 score}=\frac{2\sum_{p\in pixels}\min(\mathcal{G}_{p},\mathcal{E}_{p}) }{\sum_{p\in pixels}(\mathcal{G}_{p})+\sum_{p\in pixels}(\mathcal{E}_{p})}, \tag{4}\]
\[\text{Sensitivity}=\frac{\sum_{p\in pixels}\min(\mathcal{G}_{p},\mathcal{E}_{p})}{\sum_{p\in pixels}(\mathcal{E}_{p})}, \tag{5}\]
\[\text{Specificity}=\frac{\sum_{p\in pixels}\min(1-\mathcal{G}_{p},1-\mathcal{ E}_{p})}{\sum_{p\in pixels}(1-\mathcal{E}_{p})}, \tag{6}\]
where \(\mathcal{G}\) is the ConvNet-generated Grad-CAM, and \(\mathcal{E}\) is the explanation map for a given image.
For characteristics, we report the Grad-CAM sensitivity with regard to the characteristic-level explanation maps. Specificity and F1 score were considered too stringent, as multiple characteristics can be present and essential for a diagnosis, and an explainable ConvNet must detect all of them to plausibly explain the diagnosis.
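Equations 4-6 translate directly into NumPy; the sketch below assumes both maps are equally shaped arrays with values in [0, 1], and the small `eps` term for numerical stability is our addition.

```python
import numpy as np

def fuzzy_scores(grad_cam, explanation, eps=1e-8):
    """Fuzzy F1, sensitivity, and specificity between a Grad-CAM G and an explanation map E."""
    overlap = np.minimum(grad_cam, explanation).sum()          # fuzzy true positives
    f1 = 2 * overlap / (grad_cam.sum() + explanation.sum() + eps)
    sensitivity = overlap / (explanation.sum() + eps)
    specificity = (np.minimum(1 - grad_cam, 1 - explanation).sum()
                   / ((1 - explanation).sum() + eps))
    return f1, sensitivity, specificity
```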
#### 3.2.4 Experimental setup
From the 22 articles that fulfilled all inclusion criteria, we selected the set of ConvNets to benchmark based on their reproducibility: we required that all benchmarked ConvNets had been pre-trained on ImageNet due to the limited amount of training data available. Thus, we exclude architectures that do not have publicly available pre-trained ImageNet weights compatible with the Keras deep learning framework (Chollet, 2015), i.e. GoogleNet (Szegedy et al., 2015), InceptionV4 (Szegedy et al., 2017), MobileNetV3 (Howard et al., 2019), SENet (Hu et al., 2018), SE-ResNet (Hu et al., 2018), SEResNeXT (Hu et al., 2018), and ShuffleNet (Zhang et al., 2018). Furthermore, as several articles compare different versions of the same architecture (e.g. EfficientNet-B0 through EfficientNet-B7, see Table 2), we select the smallest version of each architecture for our benchmark to avoid overfitting to the DermXDB dataset.
In the rest of this work, we will focus on the following ConvNets: DenseNet121 (Huang et al., 2017), EfficientNet-B0 (Tan and Le, 2019), InceptionResNetV2 (Szegedy et al., 2017), InceptionV3 (Szegedy et al., 2016), MobileNet (Howard et al., 2017), MobileNetV2 (Sandler et al., 2018), NASNetMobile (Zoph et al., 2018), ResNet50 (He et al., 2016a), ResNet50V2 (He et al., 2016b), VGG16 (Simonyan and Zisserman, 2015), and Xception (Chollet, 2017).
We used the pre-trained weights offered by Keras to initialize the networks in our experiments. Next, all ConvNets were pre-trained on a proprietary clinical photography skin disease dataset collected by a dermatologist between 2004-2018. All images included in the dataset were anonymized, and the patients consented to their data being used for research purposes. More information about the dataset is available in Appendix Table A1. We performed a hyper-parameter search for each ConvNet, with the values used for experimentation and the validation performance being reported in Appendix Table A2 and Appendix Table A3, respectively. We further fine-tuned all ConvNets for 50 epochs with 261 randomly chosen images from the DermXDB dataset. The remaining 263 images were used as the test set. Each ConvNet was trained and tested five times. All results presented in this paper are aggregated over the five test runs. All code used for running the experiments is available at [https://github.com/ralucaj/dermx-benchmark](https://github.com/ralucaj/dermx-benchmark).
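For illustration, a minimal Keras sketch of this initialise/pre-train/fine-tune pipeline is shown below for one of the benchmarked architectures; the dataset objects, input size, optimiser, and the use of a single six-class head throughout are our own placeholder assumptions, not the exact experimental code.

```python
import tensorflow as tf

def build_classifier(n_classes=6, input_shape=(224, 224, 3)):
    """Xception initialised with ImageNet weights, topped with a new classification head."""
    backbone = tf.keras.applications.Xception(
        weights="imagenet", include_top=False,
        input_shape=input_shape, pooling="avg")
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(backbone.output)
    return tf.keras.Model(backbone.input, outputs)

model = build_classifier()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# pretrain_ds and finetune_ds are placeholder tf.data.Dataset objects
model.fit(pretrain_ds, epochs=20)   # pre-train on the clinical skin disease dataset
model.fit(finetune_ds, epochs=50)   # fine-tune on the 261-image DermXDB training split
```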
Figure 1: Explanation maps creation example for a psoriasis case evaluated by two dermatologists. Both dermatologists identified plaque and scale as the two characteristics associated with the psoriasis diagnosis, and localized them. By combining the localization maps for each characteristic, we obtain the characteristic-level explanation maps. By combining the localization maps created by each dermatologist, we obtain the individual dermatologist explanation maps. By combining all localization maps, we obtain the image-level explanation map.
## 4 Results
### Literature review
Figure 2 displays the Preferred Reporting Items for Systematic Review and Meta-Analyses statement flowchart of the performed review, while Figure 3 illustrates the evolution of articles topics over the years. Out of the original 3,650 articles, only 22 fulfilled all the inclusion criteria. Table 2 summarizes the ConvNet architectures, their implementation, and reported performance employed in the final 22 articles selected for benchmarking.
Figure 2: The Preferred Reporting Items for Systematic Reviews and Meta Analyses (PRISMA) statement flowchart of the performed review process for identifying the benchmarked ConvNet architectures. First, we screened articles to ensure that they were using dermatological data and deep learning methods. Afterwards, we excluded review articles and contributions focused on tasks other than classification, and articles that used non-photographic image data, e.g. dermoscopy, whole slides. Finally, we excluded articles that used proprietary ConvNets, leading to 22 articles serving as the benchmark basis.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Publication & ConvNets employed & Task & Data & Performance \\ \hline Aggarwal [2019] & InceptionV3 & Disease diagnosis on five classes & Open source images and images scraped from Google & 0.66 F1 score, 0.65 sensitivity, 0.91 specificity, 0.67 precision, 0.91 NPV, 0.57 MCC \\ Burlina et al. [2019] & ResNet50 & Disease diagnosis on four classes & Internet-scrapped images & 82.79\% accuracy, 0.76 kappa score \\ Zhao et al. [2019] & Xception & Skin cancer risk assessment with three classes & Clinical images & 72\% accuracy, 0.92-0.96 ROC AUC, 0.85-0.93 sensitivity, 0.85-0.91 specificity \\ Burlina et al. [2020] & ResNet50, ResNet152, InceptionV3, InceptionResNetV2 & Disease diagnosis on eight classes & Clinical and other photographic images & specificity, 0.72 precision, 0.96 NPV, 0.67 averaged using Google & kappa, 0.72 F1 score, 0.80 average precision, 0.94 AUC \\ Chin et al. [2020] & DenseNet121, VGG16, ResNet50 & Binary skin cancer risk assessment & Smartphone images & 0.83-0.86 AUC, 0.72-0.77 sensitivity, 0.85-0.86 specificity \\ Han et al. [2020] & SENet, SE-ResNet50, VGG19 & Disease classification on 134 classes & Clinical images & 44.86-57\% accuracy, 0.94-0.98 AUC \\ Liu et al. [2020] & InceptionV4 & Disease diagnosis on 26 classes & Clinical images & 66\% accuracy, 0.56 sensitivity \\ Zhao et al. [2020] & DenseNet121, Xception, InceptionV3, InceptionResNetV2 & Binary psoriasis classification & Clinical images & 96\% accuracy, 0.95-0.98 AUC, 0.96-0.97 specificity, 0.83-0.95 sensitivity \\ Wu et al. [2021] & SEResNeXt, SE-ResNet, InceptionV3 & Disease diagnosis on five classes & Clinical images & 0.96-0.97 AUC, 90-91\% accuracy, 0.90-0.93 sensitivity, 0.90 specificity \\ Aggarwal and Papay & InceptionResNetV2 & Disease diagnosis on four classes & Clinical images & 0.60-0.82 sensitivity, 0.60-0.82 specificity, 0.33-0.93 precision, 0.33-0.93 NPV, 0.43-0.84 F1 score \\ Ba et al. [2022] & EfficientNet-B3 & Disease diagnosis on 10 classes & Clinical images & 78.45\% accuracy, 0.73 kappa \\ Hossain et al. [2022] & VGG16, VGG19, ResNet50, ResNet101, ResNet50V2, Reaction10V2, InceptionV3, Inception, DenseNet121, DenseNet201, MobileNetV3ClassInf, EfficientNet-B0 through EfficientNet-B5 & 61.42-84.42\% accuracy, 0.72-0.90 sensitivity, 0.50-0.81 specificity, 0.61-0.83 precision, 0.63-0.87 NPV, 0.23-0.69 MCC, 0.22-0.69 Cohen’s kappa, 1.46-4.70 positive likelihood ratio, 0.14-0.55 negative likelihood ratio, 0.66-0.085 F1 score, 0.65-0.92 AUC \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the 22 articles fulfilling all inclusion criteria. All articles use ConvNets for a dermatological classification task using photographic images. Tasks vary between binary or multi-disease diagnosis, disease risk assessment, lesion type classification, and severity assessment.
### Diagnosis results
Table 3 provides an overview of the diagnostic performance of the networks and that of the dermatologists on average in terms of F1 score. As can be seen from the table, although several ConvNets achieve expert-level performance when diagnosing actinic keratosis, seborrheic dermatitis, and viral warts, none of them achieve overall expert-level performance. ConvNets follow the trend also seen in dermatologists of having difficulties correctly diagnosing actinic keratosis and seborrheic dermatitis, while the diagnosis of acne and viral warts displays higher performance. Similar trends can be observed for the sensitivity and specificity performance, as seen in Appendix Table A4 and Appendix Table A5, respectively.
### Explainability results
Table 4 shows the image-level explainability results for each of the benchmarked ConvNets, while Figure 4 shows the relationship between ConvNet diagnosis performance, image-level explainability, and number of parameters. Xception scores the highest on the image-level Grad-CAM F1 score, while InceptionResNetV2, ResNet50, and VGG16 have the lowest performance. DenseNet121 and NASNetMobile report expert-level sensitivity scores, while ResNet50V2 achieves expert-level performance in specificity.
Looking at the characteristic-level sensitivity depicted in Figure 5, NASNetMobile and DenseNet121 achieve the highest overall performance. InceptionResNetV2, ResNet50, ResNet50V2, and VGG16 report the lowest scores. All ConvNets outperform dermatologists in closed comedo, open comedo, and pustule. The opposite is true for dermatoglyph disruption, leukotrichia, patch, plaque, scale, sun damage, and telangiectasia - no ConvNet reaches expert-level.
Figure 6 illustrates the differences in Grad-CAMs between the benchmarked ConvNets. Older ConvNet architectures, such as VGG16, InceptionResNetV2, ResNet50, and ResNet50V2, tend to focus on small areas that contain characteristics relevant for the diagnosis, e.g. focusing on a single plaque in the psoriasis diagnosis example, while more modern ConvNets pay attention to the entire area covered by diagnosis-relevant lesions. Several ConvNets, namely
Figure 3: Distribution of retrieved article topics per publication year, based on the search query defined in Table 1 (run on the 20th of February 2023). 2017 marks an explosion in the number of deep learning applications in dermatology, a fact highlighted by the large increase in articles in the subsequent years, and an increase in review articles. Starting in 2019, the industrial involvement in this field became apparent due to the increase in proprietary ConvNets. 2019 also marks the first emergence of dermatological applications using photographic imaging. Finally, although classification is still the most common application, other applications are becoming increasingly more researched.
EfficientNet-B0, MobileNet, MobileNetV2, and VGG16 seem to have overfit on the training set, focusing on the watermark rather than the image itself when diagnosing the vitiligo case.
## 5 Discussion
### Literature review
ConvNets have become a default approach when it comes to automated diagnosis using images, aligned with the rise of the deep learning methodology for vision recognition. The continuous breakthroughs in diagnostic performance across a wide variety of medical imaging modalities and disorders have made automated diagnosis as close to integration with practice as ever. In dermatology, the diagnosis performance has achieved that of the expert raters as early as 2017 with a seminal work of Esteva et al. (2017) that disrupted the research field and set the trend that still persists, as can be seen through the trends of the continuous growth outlined in Figure 3. The increased interest of industrial entities that started in 2019, illustrated in Figure 3 by the increase in proprietary methods, is further highlighted by the large number of dermatology-oriented med-tech companies relying on machine learning for their products. 2019 also marks the year when research groups began investigating photographic images as a primary modality for diagnosing skin conditions, marking the rise of machine learning solutions to assist teledermatology.
The potential of using ConvNets to streamline dermatological tasks is underlined by the diversity of tasks being solved in the retrieved articles. Classification was the first methodology to be approached, with applications in disease diagnosis, risk assessment, lesion type classification, lesion characteristics identification, and disease severity assessment. Segmentation and natural language processing applications are also gaining more traction, as shown by the constant increase in non-classification tasks in Figure 3.
However, this potential has not yet translated into the much-needed transformation of the clinical practice. In part, this is due to regulatory challenges which are often faced due to the limited generalizability and lack of explainability of the methods (Kelly et al., 2019). By benchmarking the diagnosis and explainability performance of ConvNets, we both
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & **Acne** & **Actinic** & **Psoriasis** & **Seborrheic** & **Viral** & **Vitiligo** \\ & & **keratosis** & & **dermatitis** & **warts** & \\ \hline
**ConvNets** & & & & & & \\ DenseNet121 & \(0.80\pm 0.02\) & \(\mathbf{0.63\pm 0.08}\) & \(0.66\pm 0.01\) & \(\mathbf{0.69\pm 0.03}\) & \(\mathbf{0.88\pm 0.03}\) & \(0.74\pm 0.03\) \\ EfficientNet-B0 & \(0.72\pm 0.03\) & \(0.53\pm 0.10\) & \(0.60\pm 0.06\) & \(\mathbf{0.57\pm 0.08}\) & \(0.80\pm 0.07\) & \(0.66\pm 0.02\) \\ InceptionV3 & \(0.77\pm 0.02\) & \(\mathbf{0.57\pm 0.11}\) & \(0.60\pm 0.02\) & \(0.54\pm 0.03\) & \(0.77\pm 0.04\) & \(0.73\pm 0.05\) \\ InceptionResNetV2 & \(0.73\pm 0.02\) & \(0.52\pm 0.10\) & \(0.53\pm 0.05\) & \(0.56\pm 0.05\) & \(0.69\pm 0.03\) & \(0.53\pm 0.12\) \\ MobileNet & \(0.72\pm 0.06\) & \(\mathbf{0.55\pm 0.19}\) & \(0.51\pm 0.14\) & \(\mathbf{0.57\pm 0.06}\) & \(0.68\pm 0.06\) & \(0.56\pm 0.10\) \\ MobileNetV2 & \(0.56\pm 0.07\) & \(0.23\pm 0.09\) & \(0.31\pm 0.08\) & \(0.46\pm 0.05\) & \(0.63\pm 0.07\) & \(0.48\pm 0.14\) \\ NASNetMobile & \(0.50\pm 0.05\) & \(0.33\pm 0.12\) & \(0.42\pm 0.07\) & \(0.43\pm 0.05\) & \(0.55\pm 0.11\) & \(0.51\pm 0.05\) \\ ResNet50 & \(0.77\pm 0.04\) & \(\mathbf{0.53\pm 0.17}\) & \(0.61\pm 0.03\) & \(\mathbf{0.61\pm 0.19}\) & \(0.79\pm 0.02\) & \(0.61\pm 0.07\) \\ ResNet50V2 & \(0.76\pm 0.04\) & \(\mathbf{0.62\pm 0.07}\) & \(0.59\pm 0.01\) & \(0.57\pm 0.01\) & \(0.76\pm 0.01\) & \(0.75\pm 0.05\) \\ VGG16 & \(0.70\pm 0.05\) & \(\mathbf{0.62\pm 0.03}\) & \(0.59\pm 0.03\) & \(\mathbf{0.49\pm 0.15}\) & \(0.71\pm 0.03\) & \(0.62\pm 0.07\) \\ Xception & \(0.80\pm 0.04\) & \(\mathbf{0.64\pm 0.07}\) & \(0.70\pm 0.02\) & \(\mathbf{0.60\pm 0.03}\) & \(0.81\pm 0.04\) & \(0.81\pm 0.05\) \\
**Dermatologists** & & & & & & \\ Average & \(0.95\pm 0.02\) & \(0.79\pm 0.14\) & \(0.85\pm 0.06\) & \(0.72\pm 0.09\) & \(0.93\pm 0.05\) & \(0.96\pm 0.03\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Diagnostic performance of the ConvNets (average \(\pm\) standard deviation across five runs) and dermatologists (average \(\pm\) standard deviation across eight experts) using F1-score, split by diagnosis. Several ConvNets achieve expert-level per-disease diagnosis performance, in actinic keratosis, seborrheic dermatitis, and viral warts (in **bold**), although none reach the same performance for acne, psoriasis, and vitiligo.
enable a comparison among the methods, as well as help identify the gaps between the current state-of-the-art and the clinical practice.
### Diagnosis benchmark
The direct comparison of the diagnostic performance is not possible using reported values from the literature not only due to variability in the choice of the metrics, but more importantly due to the variance in the number of classes and the differences in the datasets used for training and validation (Table 2). By reformulating the task to the diagnosis of six disease classes, utilizing the same initialization, pre-training, and hyperparameter optimization search strategy, and training and validating on the common database, this benchmark minimizes the performance variability related to such implementation details.
We found considerable variability among the diagnostic performance values, with the average F1 scores ranging from 0.50 to 0.80 for acne, from 0.23 to 0.64 for actinic keratosis, from 0.31 to 0.70 for psoriasis, from 0.43 to 0.69 for seborrheic dermatitis, from 0.55 to 0.88 for viral warts, and from 0.48 to 0.81 for vitiligo. These values were aligned with the diagnostic complexity of the diseases as expressed by the performance of the dermatologists, averaging 0.95 for acne, 0.79 for actinic keratosis, 0.85 for psoriasis, 0.72 for seborrheic dermatitis, 0.93 for viral warts, and 0.96 for vitiligo. As such, none of the ConvNets achieved the average dermatologist performance, although there were multiple instances of ConvNets reaching the range of the expert performance for a specific disease (see Table 3). The majority of the benchmarked ConvNets achieved expert level for diagnosis of actinic keratosis and seborrheic dermatitis: seven and six out of 11, respectively. This further confirms the similarity of ConvNet performance with respect to the dermatologists: most ConvNets display a similar difficulty in diagnosing actinic keratosis and seborrheic dermatitis as the eight dermatologists, and a similar ease of diagnosing acne and viral warts.
### Explainability benchmark
While diagnostic performance is recognized as critical for the generalizability of ConvNets, the explainability performance validation has been generally approached as an optional, qualitative, post-hoc analysis. One of the key challenges
Figure 4: ConvNet explainability as a function of ConvNet performance and their number of parameters. Xception displays both the highest performance and image-level explainability, while ResNet50 performs poorly in both criteria.
faced by researchers trying to implement a more objective validation of explainability is linking the human-approachable explanations with those feasible for ConvNets. With the use of the labels for dermatological diagnosis explainability available from the recently released DermXDB dataset, our benchmark is quantitative as well as predefined. Thus, we avoid potential biases and limitations stemming from machine learning experts with little domain knowledge performing a visual, qualitative evaluation of Grad-CAMs (Tschandl et al., 2020).
The image-level explainability analysis shows that no ConvNet reaches the same F1 score as the dermatologists, although several ConvNets achieve expert-level sensitivity or specificity. Different ConvNets show different patterns of explanation behaviour (Figure 6): some tend to focus on smaller areas that are highly indicative of the target diagnosis, while others tend to focus on the entire affected area. Extensive user tests with both experts and patients would enable us to learn which of the two options is preferred as an explanation: a single, classical lesion descriptive of the diagnosis, or highlighting the entire affected area.
From a characteristic-level sensitivity perspective, most ConvNets outperform the average dermatologist performance in characteristics smaller than 1cm in diameter (Nast et al., 2016). For larger characteristics, although NASNetMobile and Xception approach expert-level, no ConvNet exceeds it. The relationship between diseases and their characteristics is visible in the characteristic-level ConvNet explainability: most ConvNets report high sensitivity on characteristics often associated with acne and viral warts (e.g. closed and open comedones, papules, and thrombosed capillaries), while reporting a lower performance on characteristics associated with actinic keratosis and seborrheic dermatitis (e.g. plaque, sun damage, and patch). Characteristic-level explainability may be more relevant for use cases where identifying the differentiating factor between different diseases is the most important component for garnering trust.
These results suggest that while ConvNets have the potential to produce human-approachable explanations, more work is necessary to fully achieve expert-level performance. Part of the necessary work is the creation of additional user-derived explainability datasets that enable quantitative analyses of a ConvNet's explainability within a domain. A component of this is performing extensive user tests to identify the explainability expectations of an application's end users. From a machine learning perspective, more research must be devoted to the creation of intrinsically explainable ConvNets, rather than relying solely on post-hoc explanation methods. Such a ConvNet must be aligned with the explainability requirements of its task and its users: a psoriasis diagnosis ConvNet aimed at dermatologists might
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **F1 score** & **Sensitivity** & **Specificity** \\ \hline
**ConvNets** & & & \\ DenseNet121 & \(0.43\pm 0.01\) & \(\mathbf{0.61\pm 0.01}\) & \(0.78\pm 0.00\) \\ EfficientNet-B0 & \(0.39\pm 0.01\) & \(0.52\pm 0.00\) & \(0.82\pm 0.00\) \\ InceptionV3 & \(0.42\pm 0.01\) & \(0.56\pm 0.01\) & \(0.82\pm 0.01\) \\ InceptionResNetV2 & \(0.35\pm 0.01\) & \(0.40\pm 0.01\) & \(0.87\pm 0.01\) \\ MobileNet & \(0.37\pm 0.02\) & \(0.50\pm 0.01\) & \(0.85\pm 0.01\) \\ MobileNetV2 & \(0.38\pm 0.02\) & \(0.49\pm 0.02\) & \(0.87\pm 0.01\) \\ NASNetMobile & \(0.44\pm 0.00\) & \(\mathbf{0.62\pm 0.00}\) & \(0.81\pm 0.00\) \\ ResNet50 & \(0.35\pm 0.01\) & \(0.42\pm 0.03\) & \(0.84\pm 0.01\) \\ ResNet50V2 & \(0.37\pm 0.01\) & \(0.38\pm 0.01\) & \(\mathbf{0.91\pm 0.00}\) \\ VGG16 & \(0.35\pm 0.01\) & \(0.40\pm 0.01\) & \(0.86\pm 0.01\) \\ Xception & \(0.46\pm 0.01\) & \(0.56\pm 0.00\) & \(0.88\pm 0.01\) \\
**Dermatologists** & & & \\ Average & \(0.66\pm 0.03\) & \(0.67\pm 0.07\) & \(0.93\pm 0.03\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Explainability performance in terms of the image-level Grad-CAM evaluation for the ConvNets (average \(\pm\) standard deviation across five runs), and an explanation map evaluation dermatologists (average \(\pm\) standard deviation across eight experts). Older ConvNets, such as ResNet50, ResNet50V2, and VGG16, have lower performance than most other modern ConvNets. Two networks achieve expert-level sensitivity scores, and one achieves expert-level specificity (in **bold**).
require high characteristic-level explainability to offer a constructive explanation against a possible differential diagnosis of atopic dermatitis, while the same ConvNet aimed at patients might require high image-level explainability to reassure the patient that all aspects of their condition are taken into consideration.
### Limitations and future work
Our work has a few limitations. First, the original DermXDB dataset contains little information about the gender, age, and ethnicity of the subjects, leading to difficulties in performing an in-depth bias analysis of our benchmark. Second, the small size of the dataset limits the training capabilities of our benchmark, which may underestimate the performance of the larger ConvNets.
In future work, we plan on expanding this benchmark by using more explainability methods, such as saliency maps and LIME, to also create a benchmark of explainability methods and their performance compared to that of dermatologists. Additionally, with the increased popularity of vision transformers (Khan et al., 2022), an analysis of their Grad-CAM explainability would be of interest to the research community.
Figure 5: Explainability performance in terms of characteristic-level Grad-CAM sensitivity for the ConvNets (averaged across five runs) and dermatologists (averaged across eight experts). NASNetMobile and Xception outperform expert level in seven characteristics, while no ConvNet achieves expert-level performance in eight characteristics.
Figure 6: Example of Grad-CAM outputs for six images that were correctly diagnosed by all ConvNets. Older ConvNets, such as VGG16, ResNet50, ResNet50V2, and InceptionResNetV2, tend to focus on a single, highly indicative lesion rather than the whole affected region. More modern ConvNets, such as NASNetMobile, Xception, and EfficientNet, focus on the entire affected area. Some ConvNets overfitted during training, and focus on the watermark when diagnosing vitiligo.
## 6 Conclusions
In this paper, we performed a systematic literature review to identify the most used ConvNet architectures for the diagnosis of skin diseases from photographic images. We benchmarked the 11 identified ConvNets on DermXDB, a skin disease explainability dataset. Xception stands out as a highly explainable ConvNet, although NASNetMobile outperforms it on characteristic-level sensitivity. Our findings highlight the importance of explainability benchmarking, and will hopefully motivate additional studies within the field of quantitative evaluations for explainability.
## Acknowledgments
Funding: RJ's work was supported in part by the Danish Innovation Fund under Grant 0153-00154A. OW's work was funded in part by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). OW acknowledges support from the Pioneer Centre for AI, DNRF grant number P1.
|
2307.12601 | Concept backpropagation: An Explainable AI approach for visualising
learned concepts in neural network models | Neural network models are widely used in a variety of domains, often as
black-box solutions, since they are not directly interpretable for humans. The
field of explainable artificial intelligence aims at developing explanation
methods to address this challenge, and several approaches have been developed
over the recent years, including methods for investigating what type of
knowledge these models internalise during the training process. Among these,
the method of concept detection investigates which \emph{concepts} neural
network models learn to represent in order to complete their tasks. In this
work, we present an extension to the method of concept detection, named
\emph{concept backpropagation}, which provides a way of analysing how the
information representing a given concept is internalised in a given neural
network model. In this approach, the model input is perturbed in a manner
guided by a trained concept probe for the described model, such that the
concept of interest is maximised. This allows for the visualisation of the
detected concept directly in the input space of the model, which in turn makes
it possible to see what information the model depends on for representing the
described concept. We present results for this method applied to a varied set
of input modalities, and discuss how our proposed method can be used to
visualise what information trained concept probes use, and the degree to
which the representation of the probed concept is entangled within the neural
network model itself. | Patrik Hammersborg, Inga Strümke | 2023-07-24T08:21:13Z | http://arxiv.org/abs/2307.12601v1 | Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models
###### Abstract
Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable for humans. The field of explainable artificial intelligence aims at developing explanation methods to address this challenge, and several approaches have been developed over the recent years, including methods for investigating what type of knowledge these models internalise during the training process. Among these, the method of concept detection [1] investigates which _concepts_ neural network models learn to represent in order to complete their tasks. In this work, we present an extension to the method of concept detection, named _concept backpropagation_, which provides a way of analysing how the information representing a given concept is internalised in a given neural network model. In this approach, the model input is perturbed in a manner guided by a trained concept probe for the described model, such that the concept of interest is maximised. This allows for the visualisation of the detected concept directly in the input space of the model, which in turn makes it possible to see what information the model depends on for representing the described concept. We present results for this method applied to a varied set of input modalities, and discuss how our proposed method can be used to visualise what information trained concept probes use, and the degree to which the representation of the probed concept is entangled within the neural network model itself.
Explainable artificial intelligence, concept detection, neural networks, deep learning
## I Introduction
Neural network models are becoming increasingly common for solving many complex problems. However, these models are not interpretable to humans, meaning that it is not directly knowable what information the models use to make their predictions. During recent years, methods have been developed that allow for the probing of what such models have learned, through the representation of information as _concepts_. The method of concept detection is fundamentally based on "interpreting the intermediate states of neural network in terms of human-friendly concepts" [1]. In this case, a concept is a human-defined abstraction of information present in a given input sample.1 While concept detection is useful for probing the presence of predefined knowledge, i.e. concepts, it does not provide a means for detecting exactly _how_ said knowledge is internalised in the model. It is possible to know whether knowledge is represented in the model, but this does not guarantee that its representation is not, e.g., entangled with some other information.
Footnote 1: The information required to make up a concept might not be directly represented by a single input feature, but rather as a function of a set number of features. A widely used example, as presented in [1], is the notion of “strips” as a concept for detecting zebras in an image.
In order to investigate how knowledge is represented in a neural network model, we propose the method of _concept backpropagation_. This method allows for a visualisation of how a given concept is internalised by a neural network model. This is done by arranging a structure that allows for the maximisation of a pre-trained concept probe, i.e. being able to transform an input sample in order to maximise a given concept. This provides a means of investigating what information is being used to detect the described concept in the model, in addition to making it possible to visualise the internalisation of the concept directly in the model's input space. We apply the method to a varied set of problem cases, including tabular data, images, and chess.
The paper is structured as follows: In Sec. II, we provide the necessary background and describe the proposed method of concept backpropagation. In Sec. III we present its application on four different cases featuring different data input spaces. In Sec. IV, we show the results of the method on the presented problem cases. In Sec. V, we discuss the benefits and limitations of the proposed method, in addition to highlighting how it is relevant in the broader field of Explainable Artificial Intelligence (XAI).
We also provide an open source repository containing code implementing all our described methods.2
Footnote 2: This is available at [https://github.com/patrik-ha/concept-backpropagation](https://github.com/patrik-ha/concept-backpropagation).
## II Method
### _Concept detection_
Our proposed method is based on the established concept detection method used in [2], itself based on the work presented in [1]. In a nutshell, for a neural network model \(M:I\to O\)
with an intermediary layer \(L\), and a concept function \(f(s)\) that quantifies the presence of some concept \(C\) in an input sample \(s\), concept detection aims to indicate if \(M\) learns to distill information pertaining to \(C\) by looking at the information generated in \(L(s)\), as discussed in [3]. This is done by training a logistic probe \(P:L\to C\) on a large set of samples \((L(s),f(s))\); if the training is successful, this means that enough information pertaining to \(C\) is linearly represented in the values generated in \(L(s)\).
For a binary concept, the probe is trained by minimising
\[\left\|\sigma\left(\mathbf{w}\cdot L(s_{i})+\mathbf{b}\right)-f(s_{i})\right\| _{2}^{2}+\lambda\|\mathbf{w}\|_{1}+\lambda\left|\mathbf{b}\right|\,, \tag{1}\]
for each pair \((L(s_{i}),f(s_{i}))\), where \(\mathbf{w}\) and \(\mathbf{b}\) are the trainable parameters of the probe, and \(\sigma\) is the standard sigmoid function. The method is also adapted for scalar concepts by removing the use of the sigmoid function from Eq. 1, which essentially changes the learned relationship from being logistic to being linear.
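A minimal Keras sketch of fitting such a probe, assuming the layer activations \(L(s_i)\) have been pre-computed and flattened into a matrix; the optimiser, regularisation weight, and epoch count are illustrative choices.

```python
import tensorflow as tf

def train_probe(activations, concept_values, lam=0.01, epochs=100):
    """Fit a logistic probe P: L -> C with L1 penalties on w and b, as in Eq. (1).

    activations:    (N, d) array of flattened layer outputs L(s_i)
    concept_values: (N,) array of binary concept labels f(s_i)
    """
    probe = tf.keras.Sequential([
        tf.keras.layers.Dense(
            1, activation="sigmoid",
            kernel_regularizer=tf.keras.regularizers.l1(lam),
            bias_regularizer=tf.keras.regularizers.l1(lam))
    ])
    probe.compile(optimizer="adam", loss="mse")  # squared-error term of Eq. (1)
    probe.fit(activations, concept_values, epochs=epochs, verbose=0)
    return probe
```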
While concept detection gives a direct assessment of the presence of a described concept in a model, it does not guarantee that the information used by the concept function \(f(s)\) is the same as the information being used in \(L(s)\) to construct \(C\). However, since these trained probes effectively indicate a learned relationship between the layer \(L\) and the specific concept \(C\), it is possible to use a trained probe to find out which elements of \(L\) are being used to represent \(C\). Then, by using this dependency between \(L\) and \(C\), said information can be used to infer how changes in a given state \(s\) affect the detected concept \(C\). Since the probes can be represented as generic single-layer neural networks, the gradients of the probe's output can be used to guide a search for a perturbation of \(s\) that maximises \(f(s)\) according to \(P\).
### _Concept backpropagation_
Our idea is formulated as a minimisation problem. For a model \(M:I\to O\), with an intermediate layer \(L(s)\) and a trained logistic probe \(P:L\to C\), an input state \(s\), a concept output \(o\) as the desired output of the concept function \(f(\cdot)\), and some combination operator \(\odot\), we wish to find a minimal perturbation \(s^{*}\) so that \(P(L(s\odot s^{*}))=o\). That is, we aim to minimise
\[\lambda_{1}|P(L(s\odot s^{*}))-o|+\lambda_{2}\operatorname{dist}(s,s^{*}), \tag{2}\]
where \(\operatorname{dist}(\cdot,\cdot)\) is a function that indicates the distance between \(s\odot s^{*}\) and \(s\), and \(\lambda_{1}\), \(\lambda_{2}\) are weighting constants, in the range \([0,\infty]\). Both \(\odot\) and \(\operatorname{dist}(\cdot,\cdot)\) are chosen to suit the input space of the presented problem. A high-level illustration of the described setup is shown in Fig. 1. This minimisation process is done by standard gradient descent, meaning that one needs to choose \(\odot\) and \(\operatorname{dist}(\cdot,\cdot)\) to allow for adequate propagation of the gradient from the output of the probe.
### _Use cases_
While the method presented in Sec. II-B is quite general, it can be demonstrated through application to specific problem cases.
#### Ii-C1 Tabular data
Our first use case is a neural network model \(M\) trained on tabular data whose input space consists of samples of \(n\)-dimensional real-valued vectors. For this case, the distance function is defined as \(dist(s,s^{*})=||s^{*}||_{2}\), and \(\odot\) as standard, element-wise addition. This gives the following minimisation objective,
\[|P(L(s+s^{*}))-o|+||s^{*}||_{2}, \tag{3}\]
for \(\lambda_{1}\), \(\lambda_{2}\) equal to \(1\). Here, for an input vector \(s\), we aim to add the smallest perturbation (the notion of "smallest" being expressed through \(||s^{*}||_{2}\)) to \(s\) that produces the desired probe output.
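A sketch of this gradient-descent search, assuming a `model_to_layer` function that maps an input batch to the probed layer's activations and a trained `probe`; the optimiser, learning rate, and step count are illustrative.

```python
import tensorflow as tf

def maximise_concept(model_to_layer, probe, s, target=1.0, steps=500, lr=0.05):
    """Find a small additive perturbation s* such that P(L(s + s*)) ≈ target (Eq. 3)."""
    s = tf.constant(s[None], dtype=tf.float32)
    s_star = tf.Variable(tf.zeros_like(s))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            concept_term = tf.reduce_sum(tf.abs(probe(model_to_layer(s + s_star)) - target))
            loss = concept_term + tf.norm(s_star)   # minimality term ||s*||_2
        grads = tape.gradient(loss, [s_star])
        opt.apply_gradients(zip(grads, [s_star]))
    return (s + s_star).numpy()[0]
```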
#### Ii-C2 Images
Our next use case concerns neural network models, typically convolutional neural networks (CNNs), trained to handle images. We observe that it is difficult to work with perturbations of images directly, which in turn hinders the feasibility of a direct application of the proposed method. These challenges occur due to the difference in dimensionality between the images and the intermediate layers, as backed up by preliminary experiments: it was observed that the high dimensionality of the images allows valid perturbations that consist only of large amounts of low-magnitude noise. While these did provide valid maximisations of the probe, they do not provide relevant information regarding how the model learned to represent the relevant concepts for standard images, i.e. images represented by the training data or images we can expect the model to encounter during use.
The described problem is mitigated by adding an embedding network for the given image model: For an image model \(M\) and an input image \(s\), one first maps \(s\) to a latent space by some encoding function \(s_{l}=E(s)\). This latent space then serves as the de facto input space for concept maximisation. Then, the image can be mapped back into its original space by some decoding function \(D(s_{l})\). For the proposed method, this means that we can define \(\odot\) as
\[s\odot s^{*}=D(E(s)+s^{*}), \tag{4}\]
and \(\operatorname{dist}(\cdot,\cdot)\) as
\[\operatorname{dist}(s,s^{*})=||D(E(s)+s^{*})-s||_{2}. \tag{5}\]
Fig. 1: The architecture described in Sec. II-B, which enables the maximisation of a given concept, by following the minimisation objective described in Eq. 2. For a single sample \(s\), the objective is to find a perturbation \(s^{*}\) that maximises the desired probe output, where the probe provides gradients to guide the search.
Here, \(s^{*}\) is an \(n\)-dimensional vector in the created embedding space, and the distance function expresses the squared difference between an image \(s\), and the decoded image after having its embedding perturbed by \(s^{*}\). The main idea is that the perturbation now takes place in the embedding space, circumventing the need to perturb the image in its original representation.
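The loss for the image case can be sketched in the same spirit, assuming pre-trained `encoder` and `decoder` networks; the optimisation loop from the tabular sketch above can be reused with `s_star` living in the latent space.

```python
import tensorflow as tf

def latent_concept_loss(s, s_star, encoder, decoder, model_to_layer, probe,
                        target=1.0, lam1=1.0, lam2=1.0):
    """Loss for perturbing in the autoencoder's latent space (Eqs. 2, 4, and 5)."""
    decoded = decoder(encoder(s) + s_star)           # s ⊙ s* = D(E(s) + s*)
    concept_term = tf.reduce_sum(tf.abs(probe(model_to_layer(decoded)) - target))
    dist_term = tf.norm(decoded - s)                 # ||D(E(s) + s*) - s||_2
    return lam1 * concept_term + lam2 * dist_term
```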
#### Ii-A3 Chess
In our final use case, we consider a model for playing 6x6-chess, first presented in [4]. It is trained by reinforcement learning (RL) through self-play, similar to the model being used in [2]. In this case, many of the aspects of the method are adapted to fit the intricacies of chess as an input space. The positional aspect (i.e. the pieces on the board) is strictly binary, meaning that all elements of a perturbation \(s^{*}\) need to be binary. Additionally, for a state \(s\), \(s^{*}\) can only add pieces to vacant squares, or remove pieces from filled squares, which in turn places some restrictions as to what perturbations are valid for \(s\). In this case, \(s^{*}\) was decomposed into two trainable binary matrices, \(s^{-}\) and \(s^{+}\). \(\odot\) was then defined as
\[s\odot s^{*}=(s-s^{-})+s^{+}, \tag{6}\]
for \(s^{*}=(s^{-},s^{+})\), where \(s^{-}\) designated which squares were to have pieces removed, and \(s^{+}\) designated which squares to have pieces added to it, in addition to what pieces should be added to the applicable square(s).3
Footnote 3: The binary nature of this mask is upheld by implementing the masks as binarised layers, as presented in [5].
Since it was desirable for \(s^{*}\) to only give perturbations that produced legal positions within the rules of chess, the distance function was modified to accommodate this. A legality classifier \(c(\cdot)\) was trained to discern legal and illegal positions of chess, and used to augment the distance estimate for \(s\odot s^{*}\), by letting
\[\mathrm{dist}(s,s^{*})=c(s\odot s^{*})+||s^{+}||_{1}+||s^{-}||_{1}. \tag{7}\]
Here, the main point is that since the legality classifier itself was a neural network model, it too could produce gradients, allowing its output to be minimised by finding an adequate \(s^{*}\). Additional information regarding the implementation of \(s^{-}\) and \(s^{+}\), and further details on using chess as an input space, can be found in [6].
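A rough sketch of the chess objective is given below; the straight-through binarisation stands in for the binarised layers of [5], and the bookkeeping that restricts \(s^{-}\) to occupied squares and \(s^{+}\) to vacant ones is omitted here for brevity (see [6]).

```python
import tensorflow as tf

@tf.custom_gradient
def binarise(logits):
    """Hard {0, 1} mask in the forward pass, straight-through gradient backwards."""
    def grad(dy):
        return dy
    return tf.round(tf.sigmoid(logits)), grad

def chess_loss(s, minus_logits, plus_logits, model_to_layer, probe, legality, target=1.0):
    """Objective behind Eqs. (6) and (7): concept term plus legality-aware distance."""
    s_minus = binarise(minus_logits)                 # squares to clear
    s_plus = binarise(plus_logits)                   # pieces to add
    perturbed = (s - s_minus) + s_plus               # s ⊙ s* as in Eq. (6)
    concept = tf.reduce_sum(tf.abs(probe(model_to_layer(perturbed)) - target))
    dist = (tf.reduce_sum(legality(perturbed))       # penalty from the legality classifier c(.)
            + tf.reduce_sum(s_plus) + tf.reduce_sum(s_minus))  # L1 terms of Eq. (7)
    return concept + dist
```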
## III Applications
### _Tabular data_
We apply the method as presented in Sec. II-C1 to a small neural network model trained on an altered version of the California Housing dataset, first presented in [8]. It is used as a small tabular dataset with six real-valued features, as described in Table I, and it was normalised to better suit regression by a neural network. We define the probed concept to be \(\frac{\mathbf{AveBedrms}}{\mathbf{AveOccup}}\), i.e. a ratio proportional to the average number of bedrooms per person for each household.
### _Images_
We apply the method as described in Sec. II-C2 to a convolutional autoencoder model trained on the MNIST dataset [9]. We aim to maximise the concept "loopiness", i.e. whether the standard drawing of the given digit includes any self-closing loops.4 We probe for the concept in the latent dimension of the autoencoder itself, as shown in Fig. 2.
Footnote 4: I.e. matching the digits 0, 6, 8, 9
We also apply the described method to an image classifier model trained on the Fashion-MNIST dataset. Here, the goal is to maximise the lightness of the given article of clothing, i.e. the ratio of non-black pixels to pixels with magnitude above a certain threshold. We use a convolutional autoencoder to embed the images, and probe for the concept in an intermediate layer in the classifier, as shown in Fig. 2.
### _Chess_
We apply the method as presented in Sec. II-C3 to a pre-trained model for 6x6-chess, where we seek to maximise the threat on the queen of the player to move.
## IV Results
### _Tabular data_
The results for the method described in Sec. II-C1 are shown in Table II. In all the presented cases, we see significant changes in one or both features directly related to the concept
\begin{table}
\begin{tabular}{l l} \hline \hline Feature name & Description \\ \hline
**MedInc** & Median income in group \\
**HAge** & Median house age in group \\
**AveRms** & Average room number per household \\
**AveBedrms** & Average number of bedrooms for each household \\
**Pop** & Population of the given group \\
**AveOccup** & Average number of household members \\
**Target** & Median house value per group \\ \hline \hline \end{tabular}
\end{table} TABLE I: Description of each feature in the California Housing dataset. Note that each sample does not operate on individual housing units, but rather groups of housing (referred to as “groups”). The dataset and the corresponding feature labels were obtained through [7].
Fig. 2: The arrangement of the probing and maximisation architecture for the image-based problem cases.
\(\frac{\textbf{AveBedrms}}{\textbf{AveOccup}}\). We also observe that most of these maximisations have side-effects, changing features that are not directly correlated to this ratio, namely \(\textbf{MedInc}\) and \(\textbf{AveRms}\). This is discussed in Sec. V.
### _Images_
The results for the method described in Sec. II-C2 for the MNIST-autoencoder are shown in Fig. 3, and the results for the Fashion-MNIST classifier are shown in Fig. 4. We observe that all samples achieve successful maximisation. However, it is also worth noting that most maximised MNIST-samples are often visually very different from their original images.
### _Chess_
The results for the method described in Sec. II-C3 for the neural network model trained on 6x6-chess are shown in Fig. 5. We observe that most samples achieve successful maximisation, but that many of these additionally introduce pieces that are not seemingly relevant to the given concept. This is discussed in Sec. V.
## V Discussion
We have demonstrated that the proposed method allows for direct visualisation of learned concepts in neural network models over a wide variety of domains, meaning that it presents _how_ these models learn to internalise the given concepts. This method is therefore applicable for most cases where one wishes to utilise concept detection to probe for learned knowledge in trained neural network models.
Through the results shown in Table II, it is observed that the presented method is suitable for uncovering how entangled features can affect how a given model internalises concepts. When the model is tasked with maximising the ratio of the average amount of bedrooms per person with regard to its intermediate activation space, it also increases the average amount of rooms per person. While this is a logical degree of entanglement, it also means that a "standard" procedure of concept detection might incorrectly lead one to assume that this ratio is internalised independently of this confounding factor. In this case, the results of the presented method might suggest that it is more apt to consider these three variables together, even if a valid concept detection result is obtained. These results also indicate that the utility of the proposed method is likely to be significantly higher when applied to models of high complexity, such as multi-layer neural networks. This is because such models often learn complex mappings with several entangled relationships between features, which in turn makes it possible for these relationships to be highlighted by using the proposed method. For simpler regressors, however, these relationships would most likely be trivially available through direct inspection of the learned model itself.
The results shown in Figs. 3 and 4 also show that it is possible to generate perturbations to maximise concepts by first mapping a given model's input space to an embedding space. This is useful for models that operate on an input space that is hard to work with directly. This in turn makes it possible to operate in an embedding space that is easier to work in, while still retaining the interpretability and visual capabilities of input spaces such as images.
The results from the chess-playing model, as shown in Fig. 5, highlight that it is possible to create valid maximisations adhering to multiple strict constraints. While most samples present a successful maximisation of the given concept (as shown in Figs. 5(a), 5(b), 5(d)), we also observe that it does not succeed in doing so in some cases, as exemplified through Fig. 5(c).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**MedInc** & **HAge** & **AveRms** & **AveBrms** & **Pop** & **AveOcp** \\ \hline \(-0.009\) & \(+0.000\) & \(+8.999\) & \(+1.560\) & \(+0.121\) & \(-0.223\) \\ \hline \(+0.000\) & \(+0.003\) & \(+5.067\) & \(+0.811\) & \(+0.126\) & \(-1.216\) \\ \hline \(+0.000\) & \(+0.057\) & \(+1.797\) & \(+1.803\) & \(-0.567\) & \(-5.696\) \\ \hline \(+0.013\) & \(+0.003\) & \(-0.003\) & \(+0.000\) & \(+0.003\) & \(-0.798\) \\ \hline \(+1.125\) & \(+0.011\) & \(-0.007\) & \(+0.001\) & \(-0.061\) & \(-17.494\) \\ \hline \end{tabular}
\end{table} TABLE II: Maximisation results for five tabular samples. The results are deltas made to each input sample in order to maximise a probe trained to detect \(\frac{\textbf{AveBedrms}}{\textbf{AveOcp}}\).
Fig. 3: Samples from the MNIST dataset, with corresponding maximisations.
Empirically, this was observed to occur more frequently when applying the method to the chess model as opposed to the other modalities. We hypothesise that this is due to the difficulty of finding a valid perturbation that successfully maximises the given concept, while ensuring that the resulting perturbation abides by all rules described in Sec. II-C3. This is also implied by the fact that most of the samples presented in Fig. 5 also introduce a notable amount of pieces that are not relevant for the position at hand.5 While this might be attributable to some aspect of the model's learned representation of the concept, it is in this case strongly hypothesised to be due to the discreteness of chess as an input space.
Footnote 5: See e.g. Fig. 5(d), where White's leftmost rook and Black's leftmost knight are removed. The position otherwise provides a valid maximisation.
While the method is very generalisable, it can in many cases be difficult to find the right balance between highlighting a given concept, and preserving the original structure of the input sample itself. In practice, this amounts to finding an adequate tuning of \(\lambda_{1}\) and \(\lambda_{2}\). An example of this can be seen in Fig. 6.
Fig. 4: Samples from the Fashion-MNIST dataset, with corresponding maximisations.
Fig. 5: Samples obtained by using the models described in [4], with corresponding states where the threat on the opposing player's queen(s) is maximised.
Here, the method produces wildly different maximisations for the different weightings of the minimisation objective described in Eq. 2. While this is a problem in some cases, it also shows that the method facilitates the generation of various samples that maximise the relevant concepts for almost all constraints. Additionally, since this method is relatively inexpensive to perform, it is also possible to generate many such perturbations with different weights, in order to consider a larger variety of samples.
## VI Conclusion
We have presented a method that allows for visualisations of learned concepts through concept maximisation. This is relevant for obtaining a deeper understanding of how a given neural network model learns to internalise important concepts, and for easily presenting these internalised representations in the model's own input space. The method is generalisable to most domains, allowing for easy applicability to a diverse set of problems independently of both network architecture and input structure. Finally, the method draws from the strengths of concept detection, which means that it can be applied to most pre-trained models, without requiring expensive training procedures to be performed. In this vein, an interesting future work for this method would be to apply it to problems of higher complexity, such as state-of-the-art image classification models, or large language models.
|
2302.10804 | GDBN: a Graph Neural Network Approach to Dynamic Bayesian Network | Identifying causal relations among multi-variate time series is one of the
most important elements towards understanding the complex mechanisms underlying
the dynamic system. It provides critical tools for forecasting, simulations and
interventions in science and business analytics. In this paper, we proposed a
graph neural network approach with score-based method aiming at learning a
sparse DAG that captures the causal dependencies in a discretized time temporal
graph. We demonstrate methods with graph neural network significantly
outperformed other state-of-the-art methods with dynamic bayesian networking
inference. In addition, from the experiments, the structural causal model can
be more accurate than a linear SCM discovered by the methods such as Notears. | Yang Sun, Yifan Xie | 2023-01-28T02:49:13Z | http://arxiv.org/abs/2302.10804v1 | # GDBN: a Graph Neural Network Approach to Dynamic Bayesian Network
###### Abstract
Identifying causal relations among multi-variate time series is one of the most important elements towards understanding the complex mechanisms underlying the dynamic system. It provides critical tools for forecasting, simulations and interventions in science and business analytics. In this paper, we propose a graph neural network approach with a score-based method aiming at learning a sparse DAG that captures the causal dependencies in a discretized temporal graph. We demonstrate that methods with graph neural networks significantly outperform other state-of-the-art methods in dynamic Bayesian network inference. In addition, the experiments show that the structural causal model can be more accurate than a linear SCM discovered by methods such as NOTEARS.
## 1 Introduction
Multi-variate time series data has been carefully studied for many years. Sensors have been recording large amounts of collected time series data, including traffic, electricity consumption and so on. Interestingly, these time series variables are subtly internally interlinked. These links with causal dependencies carry critical information for decision makers.
Causality has been formalized as a theory for causal and counterfactual inference [19] from a structural model. When knowledge about the causality among time series variables is not available a priori, we have to discover this relational structure. Moreover, the dynamics described from a structural equation model can articulate the causal dependencies via a directed acyclic graph (DAG). Causal models with an explicitly solved DAG enable computation of the posterior distribution with treatment [26, 25] and provide us with actionable insights. A careful treatment of causal relations will also avoid misleading judgements from misidentifying spurious correlations, as illustrated in Figure 1.
Recently, research done on time series data mainly focuses on the summary graph [15][21] for multi-variate time series. We are more interested in the latent structure that governs the multi-variate time series data. The other closely related topic is graph neural networks. Deep neural networks are known as universal function approximators for data distributions and a powerful tool for representation learning. Graph neural networks are the specific class of deep neural networks that is suitable for graph-structured data. With a clear definition of interactions (with directions) among time series data with lags, we recast our problem of Dynamic Bayesian Networks as a link prediction problem via Graph Neural Network (GNN) [30]. The adjacency matrix of the DAG is provided as an input to the graph neural network, which will be the objective of our inference problem. One of the most closely related works is [18], based on the linear version of NOTEARS. However, the generation process of real world time series data is highly complex and it is difficult to describe it with the usual linear relations.
The main contributions of ours can be summarized in the following.
* By understanding the intrinsic property of the time series data, we define the causal temporal graph and its associated adjacency matrix. We introduce the fundamental elements used in our causal discovery framework.
* We synthesize a wide range of linear and nonlinear datasets from a VAR(p) generation process with the stationarity property for causal discovery.
* We further designed a graph-based neural network with a neural message passing scheme that caters for the intrinsic property of multi-variate time series. The graph structure is optimized based on a variational autoencoder structure which generalizes better on understanding the distribution of time series data. We formulate
the optimization objective with a variational lower bound and a sparsity constraint to solve for the causal temporal graph and the SEM.
* We carried out experiments over our synthesized data of different settings and on datasets published in [14]. Experiments show that our approach significantly outperforms linear methods such as NOTEARS [32] and the conditional independence framework [24].
## 2 Related Work
The causal inference problem and causal feature selection for time series data has been studied under different contexts, including econometrics, biology, physics, etc. Granger Causality [8], based on prediction methods, has been proposed as a cornerstone methodology for time series causal analysis. It uses a variation of the VAR(p) model, and the lagging causal effect is modeled with different matrices with index \(p\). Despite its advantages, it suffers from the frequent lack of the causal sufficiency condition in real scenarios. A lower sampling rate on the time dimension could also lead to unidentifiability of the Granger Causality, although some exceptional cases are studied [7]. An alternative is the PC algorithm that tests the conditional independence along with the autocorrelation [22][24]. Despite its capacity for accurate causal discovery, this type of method usually scales with the number of variables and \(\tau_{max}\) in polynomial time. For example, [24] shows the time complexity is \(N^{3}\tau_{max}^{2}\), where \(N\) is the number of variables, while the complexity of our inference method is \(N^{2}\tau_{max}\). [17] gives the necessary and sufficient condition to select time series random variables with conditional dependencies based on certain assumptions on the time series data. [2] introduces a spectral method for identifying causal effects with robustness to down-sampling.
Score-based methods formulate the causal graph inference problem with an objective function which regularizes the adjacency matrix to satisfy the constraint of being a DAG [11]. NOTEARS [32] recasts the combinatorial graph search problem as a continuous optimization problem, while DYNOTEARS [18] provides a vector-autoregressive extension to multivariate time series data. More recently, [31] derived a first-principles relation between Graph Neural Networks and SCMs. Our approach is based on [30], where the authors treat the problem of DAG inference as a link prediction problem.
Neural ODE [4] has started a new chapter for understanding the dynamics of multi-variate time series data in the continuous time domain. [3] revealed a fundamental situation where ordinary differential equations can be discovered, and [1] approximates a vector field parametrized by a neural network for modelling the continuous dynamical system. [5] further develops the techniques for conditional treatment effect estimation.
## 3 Problem Formulation
### Assumptions
We assume that the Markov property and faithfulness hold, and that there are no instantaneous effects or hidden common causes. Then identifiability is guaranteed [16, 20]. Our causal discovery work is based on the assumptions used in [17]. Besides, the causal relations are invariant under a joint time shift of all variables.
We represent the observations of the multi-variate time series \(X\) as \(\left(X_{1}^{t},...,X_{d}^{t}\right)\) with discrete timestamps.
Following Mastakouri et al.[17], we further assume, in the process of data generation,
**Hypothesis 1** (**H1**): _Lag is one for univariate time series: \(x_{i}^{t-\tau}\) influences \(x_{i}^{t}\) if and only if \(\tau=1,\forall i\in\mathbb{N}^{*}\)._
**Hypothesis 2** (**H2**): _Single-lag: There is only one lag for each pair of time series \((x_{i},x_{j}),\forall i,j\in\mathbb{N}^{*}\)._
where \(\tau\) denotes the time lag between two temporal nodes.
### Definitions
**Definition 3.1**: _causal temporal graph: A Causal Temporal Graph \(G=\{V,E\}\) is an equivalence class up to time translational invariance. It is composed of vertices \(V=\{X^{t},X^{t-1},\cdots,X^{t-\tau_{max}}\}\) and edges \(E=\{\text{Pa}_{X^{t}}\to X^{t}\}\), where \(\text{Pa}_{X^{t}}\) denotes the parent nodes of \(X^{t}\) and \(\tau_{max}\) denotes the size of the observation window. \(\tau_{max}\) is a hyperparameter in the model training [18]._
**Definition 3.2**: _causal temporal adjacency matrix: The Causal Temporal Adjacency Matrix (TAM) \(\mathcal{A}\) is the adjacency matrix defined over the causal temporal graph \(G\)._

Figure 1: An example of time series (a) with the corresponding full time graph (b) and summary graph (c) under our assumptions. The bold nodes and arrays in (b) constitute a causal temporal graph (Definition 3.1), which highlights the connections between variables at a time slice \(t\) with their past. The maximum time lag \(\tau_{\text{max}}\) is 3. \(x_{2}\) influences \(x_{3}\) with lag=1, which is denoted as \(a_{32}^{1}\). There exist spurious associations represented by gray dashed arrows in (c), which could lead to errors of misidentification of causal relations.
The ideal situation would be that \(\tau_{max}\) exceeds the maximum lag between time series variables. These definitions enable our proposed graph neural network. Note that the causal temporal graph is a sub-component of the full time graph. Since the full time graph is translation invariant, it can be generated from the causal temporal graph.
### Causal Discovery for Multi-variate Time Series Data
Consider an \(m\)-variate time series \((X^{t})_{t\in\mathbb{Z}}=(x^{t}_{1},\cdots,x^{t}_{m})\), where the variables influence each other in a time-lagged manner. A general structural causal model (SCM) for the stochastic process can be described as
\[x^{t}_{i}=f_{i}(\text{Pa}^{\tau_{max}}_{i},\cdots,\text{Pa}^{1}_{i},z^{t}_{i}) \tag{1}\]
where \(\text{Pa}^{\tau}_{i}\) denotes a set of variables that influence \(x^{t}_{i}\) at \(t-\tau\). \(z^{t}_{i}\) are jointly independent noise terms, \(i=1,\cdots,m,t\in\mathbb{Z},\tau=1,\cdots,\tau_{max}\). \(\tau_{max}\) is the maximal lag of the causal temporal graph.
#### 3.3.1 General SCM for Multi-variate Time Series Data
A special case is that the model is a class of VAR model with additive noise. Given \(m\) variables \(\mathbf{x}\in\mathbb{R}^{d}\). For a time series of length \(T\), we use \(a^{\tau}_{ij}\) to denote the impact of \(\mathbf{x}_{j}\) on \(\mathbf{x}_{i}\) with a time lag of \(\tau\).
\[\mathbf{x}^{t}_{i}=\sum_{j}\sum_{\tau}a^{\tau}_{ij}\mathbf{x}^{t-\tau}_{j}+\mathbf{z}^{t}_ {i} \tag{2}\]
The variable \(\mathbf{x}\) can be a \(d\)-dimensional vector, or reduced to a scalar when \(d=1\). \(\mathbf{z}\) is the random noise of the same shape with \(\mathbf{x}\), typically independent Gaussian exogenous factors. By taking the matrix form,
\[X^{t}=\sum_{\tau}A^{\tau}X^{t-\tau}+Z^{t} \tag{3}\]
where \(X^{t}=[\mathbf{x}^{t}_{1}\mid\cdots\mid\mathbf{x}^{t}_{m}]\in\mathbb{R}^{m\times d}\), \(Z^{t}\in\mathbb{R}^{m\times d}\) is the noise matrix, \(A^{\tau}=(a^{\tau}_{ij})\in\mathbb{R}^{m\times m}\) is introduced to describe the causal relation between variables, and the compact form can be derived as
\[X^{t}=\mathcal{A}\mathcal{X}+Z^{t} \tag{4}\]
where \(\mathcal{X}=[X^{t-1}\mid\cdots\mid X^{t-p}]^{\top}\in\mathbb{R}^{pm\times d}\), and \(\mathcal{A}=[A^{1}\mid\cdots\mid A^{p}]\in\mathbb{R}^{m\times pm}\) corresponds to the TAM.
For a generalized case we have,
\[X^{t}_{j}=f(\text{Pa}(X^{t}_{j}),Z^{t}_{j}) \tag{5}\]
\(f\) is an arbitrary function of the parent nodes of \(X^{t}_{j}\) and independent noise. We only formulate the evolution of the time series at a specific time slice from its past observation window. The reason for this simplification comes from the fact that normally the time series is lengthy, and a sliding window is a reasonable approach for this type of problem.
#### 3.3.2 GNN-based SCM
Graph neural networks are known for their phenomenal power in representation learning for graphs. The nodes in the time series contain abundant information from the past, and hence we can utilize the message passing scheme of GNNs to approximate complex functions for the dynamics. We parametrize the neural network for the SCM with a graph neural network,
\[X^{t}=f_{\mathcal{A}}(\mathcal{X},\,Z^{t}) \tag{6}\]
where \(f_{\mathcal{A}}\) is a graph neural network and \(\mathcal{A}\) parametrizes the temporal adjacency matrix which encodes the causal graph, where the non-zero elements suggest the existence of causal relation between the corresponding variables.
## 4 Framework of Gdbn
We propose a Bayesian deep generative model with a graph neural network to capture the interconnections among time series nodes. The dependency relations can be described with a message passing mechanism in a graph neural network [12][15]. Instead of treating the graph edges as hidden variables, we allow straightforward gradient descent through the elements of the temporal adjacency matrix.
### Time Series Window Construction
Now we build the sliding window in the time series, which consists of two parts, one for observation followed by the other for prediction. By setting the observation window size \(s_{o}\) and the prediction window size \(s_{p}\), we assume that the current effect on the future in time series \(\mathbf{x}\) lasts at most \(s_{o}\) steps, so that reconstructing the following \(s_{p}\) steps of the time series is reasonable. If \(s_{o}>p\), we expect all learned \(A^{p+1},\cdots,A^{s_{o}}\) to be zero matrices. Please see Figure 2 for details.
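A minimal sketch of this window construction, assuming the series is stored as a NumPy array of shape \((T, m)\):

```
import numpy as np

# Each training sample is an observation window of s_o steps followed by a
# prediction window of s_p steps, obtained by sliding over the series.
def make_windows(series, s_o, s_p):
    T = series.shape[0]
    windows = [series[t:t + s_o + s_p] for t in range(T - s_o - s_p + 1)]
    return np.stack(windows)  # shape (num_windows, s_o + s_p, m)
```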
### Model Architecture
Here we propose a neural network for inference of the causal relations among multi-variate time series data. It has two main components. The first one is a proposed graph neural network with a message passing scheme following a strict causal relational path. Nodes and edges form the representation of the values of the time series and their time differences, e.g. lags. The second component is a variational inference network that produces a variational objective for the end-to-end optimization of the temporal causal graph and the structural equations. Namely, our framework is not only capable of achieving causal discovery in a computationally friendly manner, but also provides a tool for prediction of the time series data with strong explainability.
#### 4.2.1 Proposed Graph Neural Network
Graph Neural Networks (GNN) are a powerful machinery to introduce relational inductive bias. The neural network learns the node representation by a local message passing mechanism via the graph structure. All variants of GNN [13][9][27][29] have been shown effective on graph-structured data tasks such as graph classification, node classification, and link prediction. Following [6], the local message passing mechanism between vertices (v) and edges (e) can be summarized in the following equations:
\[v\to e: h^{l}_{(i,j)}=f^{l}_{e}(\{h^{l}_{i},h^{l}_{j},x_{(i,j)}\}) \tag{7}\] \[e\to v: h^{l+1}_{j}=f^{l}_{v}(\{\sum_{i\in N_{j}}h^{l}_{(i,j)},x_{j}\}) \tag{8}\]
\(i\) and \(j\) denote the nodes in the graph. \(x\) represents the raw feature, while \(h^{l}\) represents the embedding in layer \(l\). For example, \(x_{(i,j)}\) is the raw feature of the edge \(i\to j\) and \(h^{l}_{j}\) is the \(l\)th embedding layer of node \(j\). \(\mathcal{N}_{j}\) represents the set of neighboring nodes for node \(j\). The bracket denotes the concatenation of the features. The skip connection [10] with the raw feature \(x_{(i,j)}\) in (7) or \(x_{j}\) in (8) is optional. \(f_{e}\) and \(f_{v}\) can be modeled with neural networks. [13] and [28], including the GNN we propose, can be treated as special cases of (8).
_Proposed Graph Neural Network_ (\(\mathcal{G}\))
* Node Embedding (NE): NE maps the node feature to a new embedding space.
* Edge Neural Network (ENN): the edge feature is attained with an edge neural network applying a non-linear transformation. For example, we can choose to encode the lag \(\tau\) as the edge feature. We aggregate the node embedding and edge embedding by concatenation to update the embedding of the edge, followed by an element-wise multiplication operation with the temporal adjacency matrix \(\mathcal{A}\).
* Node Neural Network (NNN): NNN maps the aggregated edge features back to the node again.
Notice that the propagation of the messages follows a strict topological order which is faithful to the temporal order. Considering the causal effect from the node \(j\) at \(t^{\prime}\) to node \(i\) at \(t\), \(\tau=t-t^{\prime}=1,\cdots,p\), \(\mathcal{G}\) can be formulated as
\[\text{NE}: h_{it}=f_{emb}(x_{it})\] \[\text{ENN}: h^{l}_{(it,jt^{\prime})}=A^{\tau}_{ij}f^{l}_{e}(\{h^{l}_{jt^{\prime}},f_{\tau}(\tau)\})\] \[\text{NNN}: h^{l+1}_{it}=f^{l}_{v}(\sum_{j,t^{\prime}}h^{l}_{(it,jt^{\prime})})\]
\(f_{emb},f_{\tau},f_{e},f_{v}\) are parameterized by the neural networks, which take the form of Multi-Layer Perceptron (MLP) in this work. See Algorithm 1 for the module of computing GDBN.
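A rough PyTorch sketch of one such message-passing layer is given below; the hidden sizes and the exact placement of the element-wise multiplication with \(\mathcal{A}\) are illustrative assumptions rather than the authors' implementation.

```
import torch
import torch.nn as nn

# Sketch of one GDBN layer (NE / ENN / NNN). A holds the TAM as a (p, m, m)
# tensor and a lag embedding stands in for the raw edge feature tau.
class GDBNLayer(nn.Module):
    def __init__(self, m, p, d_in, d_h=32):
        super().__init__()
        self.A = nn.Parameter(torch.randn(p, m, m) * 0.1)  # learned TAM
        self.f_emb = nn.Linear(d_in, d_h)                  # NE
        self.f_tau = nn.Embedding(p, d_h)                  # lag embedding
        self.f_e = nn.Linear(2 * d_h, d_h)                 # ENN
        self.f_v = nn.Linear(d_h, d_h)                     # NNN

    def forward(self, x):
        # x: (batch, p, m, d_in), the past window ordered by lag tau = 1..p
        h = self.f_emb(x)                                  # node embeddings
        tau = self.f_tau.weight.view(1, -1, 1, h.shape[-1]).expand_as(h)
        msg = self.f_e(torch.cat([h, tau], dim=-1))        # edge messages
        # weight messages by A and aggregate over source nodes j and lags tau
        agg = torch.einsum('tij,btjd->bid', self.A, msg)
        return self.f_v(agg)                               # (batch, m, d_h)
```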
#### 4.2.2 Variational Autoencoder
Figure 2: Left: Construction of the Causal Temporal Graph with maximal time length of \(s_{o}\). Each slice of the tensor \(A\) represents the connection of the \(i\)th node and the \(j\)th node with a certain lag. Right: Reconstruction of the time window. Given the observation window of length \(s_{o}\), at every step we predict the next value of the time series based on the recent past \(s_{o}\) values. After \(s_{p}\) steps, the reconstruction is completed for the current time window. The dotted squares are rolling observation windows from left to right, which denotes the direction for recurrent prediction.

For each step of the reconstruction of the time window as in Figure 2, the noise can be generalized from the linear Granger model in (6) with our proposed GNN \(\mathcal{G}\). We assume the noise terms follow independent multivariate Gaussian distributions. At the same time, the prediction window is computed with a decoding procedure faithful to the causal graph. The encoder and decoder are
\[[\mu_{Z^{t}}|\log\sigma_{Z^{t}}] =F_{1}\big{(}F_{2}(X^{t})-\mathcal{G}(\mathcal{X})\big{)} \tag{9}\] \[[\mu_{X^{t}}|\log\sigma_{X^{t}}] =F_{3}(\mathcal{G}(\mathcal{X})+F_{4}(Z^{t})) \tag{10}\]
respectively, where \(Z^{t}\) is the hidden variable in the VAE model, \(F_{i},i=1,\cdots,4\) can be either an identity map or an MLP layer that acts upon the feature dimension.
We use a recurrent decoder that can forecast for multiple steps based on the one-step decoder described above, while the reconstruction loss and the KL-divergence for multiple steps are jointly trained. During the training, we model \(P_{\mathcal{G}}(X^{t+1}|X^{\prime t},X^{t-1},\ldots)\), where \(X^{\prime t}\) is predicted from (10). The observation and prediction window described in Figure 2 slides forward along the causal path. We found in the training process that it is a good choice to use only the expectation value \(\mu_{X^{t}}\) instead of the sampled value for successive recurrent prediction. This has proved to be effective for the causal learning, as shown in the experiments section. For the details of the GDBN network, see the summarized Algorithm 2.
```
Input:  Number of observed variables: \(m\)
        Size of the observation window: \(s_{o}\)
        Size of the prediction window: \(s_{p}\)
        Time series of length \(s_{o}+s_{p}\): \(X^{1:s_{o}+s_{p}}_{1:m}\)
        EMB: MLP on feature dimension
Output: TAM \(\mathcal{A}\), reconstructed series \(X^{\prime\,s_{o}+1:s_{o}+s_{p}}_{1:m}\), \(Z^{s_{o}+1:s_{o}+s_{p}}_{1:m}\)
for \(i \gets 1\) to \(s_{p}\) do
    ENCODE: logits \(\leftarrow\) EMB(EMB(\(X^{i+s_{o}}_{1:m}\)) - GENC(\(X^{i+s_{o}-1}_{1:m}\)))
            \(M\), \(\log\Sigma\) \(\leftarrow\) EMB(logits)
            \(Z^{i+s_{o}}_{1:m}\) \(\leftarrow\) SAMPLE(\(M\), \(\log\Sigma\))
    DECODE: \(X^{\prime\,i+s_{o}}_{1:m}\) \(\leftarrow\) EMB(GENC(\(X^{i+s_{o}-1}_{1:m}\)) + EMB(\(Z^{i+s_{o}}_{1:m}\)))
    REPLACE: \(X^{i+s_{o}}_{1:m}\) \(\leftarrow\) \(X^{\prime\,i+s_{o}}_{1:m}\)
end for
return \(\mathcal{A}\), \(X^{\prime\,s_{o}+1:s_{o}+s_{p}}_{1:m}\), \(M\), \(\log\Sigma\)
```
**Algorithm 2** Forward GDBN Network Module (**GDBN**)
#### 4.2.3 Optimization Objective
**Evidence Lower Bound**
Figure 3: Model architecture of GDBN. Time series of length \(s_{o}+1\) is fed into the model and the output is the prediction \(X^{\prime}_{p}\). At time \(t\), \(X_{p},X_{o}\) are \(X^{t},\mathcal{X}\) respectively. The rounded rectangle is a repeated unit for \(s_{p}\) times. \(h_{\tau}\) is the embedding of the raw edge feature \(\tau\).

Since the exact inference is intractable, the Evidence Lower Bound (ELBO) is optimized in order to approximate the posterior distribution,
\[\text{ELBO}(\theta,\phi;X)= \mathbb{E}_{q_{\phi}(Z|X)}\left[\log p_{\theta}(X|Z)\right]\] \[-KL(q_{\phi}(Z|X)||p_{\theta}(Z)) \tag{11}\]
where \(p_{\theta}(Z)\) is the prior distribution of the latent variables, \(p_{\theta}(X|Z)\) is the conditional distribution of observed \(X\) given \(Z\) and \(\theta\) are the true parameters. \(q_{\phi}\) is the variational posterior parameterized by \(\phi\).
For the inference network, the variational posterior is a factored Gaussian with mean \(M_{Z}\in\mathbb{R}^{m\times d}\) and variance \(\Sigma_{Z}\in\mathbb{R}^{m\times d}\). For the generative network, the likelihood \(p(X|Z)\) is a factored Gaussian with mean \(M_{X}\in\mathbb{R}^{m\times d}\) and variance \(\Sigma_{X}\in\mathbb{R}^{m\times d}\). Then the complexity loss of the latent representation is derived as
\[-KL(q_{\phi}(Z|X)||p_{\theta}(Z))\] \[= \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{d}\left(1+\log(\Sigma_{Z})_{ ij}-(M_{Z})_{ij}^{2}-(\Sigma_{Z})_{ij}\right) \tag{12}\]
and the reconstruction loss can be computed with Monte Carlo approximation:
\[\mathbb{E}_{q_{\phi}(Z)}\left[\log p_{\theta}(X|Z)\right]\approx\] \[-\frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{m}\sum_{j=1}^{d}\left( \frac{(X_{ij}-(M_{x}^{l})_{ij})^{2}}{2(\Sigma_{X}^{l})_{ij}}+\log(\Sigma_{X}^{ l})_{ij}+c\right) \tag{13}\]
where \(L\) denotes the number of Monte Carlo samples, \(c\) is a constant.
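A direct transcription of the two terms (with a single Monte Carlo sample and the constant \(c\) dropped) could look as follows; \(\Sigma\) is taken to denote the variance, as in the text.

```
import torch

# Sketch of the ELBO terms (12) and (13) for factored Gaussians.
def elbo(x, mu_x, sigma_x, mu_z, sigma_z):
    # complexity loss of the latent representation, Eq. (12)
    kl_term = 0.5 * torch.sum(1 + torch.log(sigma_z) - mu_z.pow(2) - sigma_z)
    # reconstruction term with one Monte Carlo sample, Eq. (13)
    rec_term = -torch.sum((x - mu_x).pow(2) / (2 * sigma_x) + torch.log(sigma_x))
    return rec_term + kl_term
```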
### Sparsity and Acyclicity Constraint
We add an \(L_{1}\)-norm penalty for the temporal adjacency matrix as a standard approach to introduce sparsity in the optimization objective. Since we have ignored intra-slice dependencies on the temporal graph, the causal relations have a strict topological order and hence no acyclicity constraint needs to be enforced. However, our method can be naturally adapted to intra-slice relations by adding an acyclicity constraint on the intra-slice temporal adjacency graph.
### Training
Now that we have all the components for optimization, the training process is composed of the following,
* Given a training example \(X=X^{1:s_{o}+s_{p}}\), we first run the encoder and compute \(q(Z|X)\), then we sample \(Z\) and utilize the decoder to compute the prediction values for \(X^{s_{o}+1},\cdots,X^{s_{o}+s_{p}}\).
* We compute the ELBO objective with the estimated KL loss and reconstruction error in (12) and (13). We add the regularization term for the sparsity constraint, and the optimization objective is estimated as \[\min_{\Phi}\min_{\mathcal{A}\in\mathbb{R}^{m\times pm}}\frac{1}{n}\sum_{n}l_{\mathcal{A},\Phi}(X)+\lambda||\mathcal{A}||_{1}\] (14) where \(n\) is the number of samples, \(\lambda\) is the coefficient of sparsity and \(l\) is the negative of the ELBO defined in (11). \(\Phi\) denotes the parameters of the neural network.
* The code is implemented in PyTorch. We perform gradient descent through the neural network parameters and \(\mathcal{A}\). When the optimization process finishes, a hard threshold \(\omega\) on \(\mathcal{A}\) is set. Any weight smaller than \(\omega\) in absolute value is treated as a vanishing matrix element, which provably reduces the number of false discoveries [34].
The training process of the network is given in Algorithm 3:
```
Input:  Number of observed variables: \(m\)
        Size of the observation window: \(s_{o}\)
        Size of the prediction window: \(s_{p}\)
        Adjacency matrix \(\mathcal{A}\)
repeat
    Sample a time series of length \(s_{o}+s_{p}\): \(X^{1:s_{o}+s_{p}}_{1:m}\) from the whole time series \(X\)
    Compute \(\mathcal{A}\), \(X^{\prime\,s_{o}+1:s_{o}+s_{p}}_{1:m}\), \(M\), \(\log\Sigma\) = GDBN(\(X^{1:s_{o}+s_{p}}_{1:m}\), \(\mathcal{A}\))
    Compute ELBO(\(\theta,\phi,\mathcal{A}\)) with \(M\), \(\log\Sigma\), \(X^{1:s_{o}+s_{p}}_{1:m}\) from (12) and (13)
    Compute the sparsity term \(||\mathcal{A}||_{1}\)
    Compute the gradients \(g \leftarrow \nabla(-\text{ELBO} + \lambda||\mathcal{A}||_{1})\)
    \(\theta\), \(\phi\), \(\mathcal{A}\) \(\leftarrow\) update parameters using the gradients
until convergence of \(\theta,\phi,\mathcal{A}\)
return \(\mathcal{A}\)
```
**Algorithm 3** Causal Inference for causal temporal graph
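A compact PyTorch sketch of this training loop is shown below; `model.elbo` and `model.A` are hypothetical handles to the ELBO computation and the TAM parameter, and the values of \(\lambda\) and \(\omega\) are placeholders.

```
import torch

# Sketch of Algorithm 3: jointly optimise the network parameters and the TAM
# with an L1 sparsity penalty, then hard-threshold the learned weights.
def train(model, windows, lam=0.01, omega=0.3, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in windows:                   # one (s_o + s_p)-step window
            opt.zero_grad()
            loss = -model.elbo(x) + lam * model.A.abs().sum()   # objective (14)
            loss.backward()
            opt.step()
    # entries below the hard threshold omega are treated as vanishing
    return (model.A.detach().abs() > omega).float()
```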
## 5 Experiments
### Datasets, Baselines and Performance Metrics
Since we are evaluating the performance of our inference network for causal discovery, we focus on two datasets that have validation ground truth of causal links.
#### 5.1.1 Dataset
**Our own synthetic dataset.** We build our own synthetic dataset following the two steps below:
_Temporal Adjacency Matrix Generation_
The temporal adjacency matrix \(\mathcal{A}=[A^{1}\mid\cdots\mid A^{p}]\in\mathbb{R}^{m\times pm}\) suffices to generate the full time series data given an initial condition. We first assign the positions of the nonzero elements. In order for the causal temporal graph to meet the assumptions mentioned in section 3.1, \(\mathcal{A}\) has to satisfy Hypothesis 1 and Hypothesis 2. Hence, among all possible positions for nonzero elements, several are assigned nonzero values in a random way. We set the ratio of zero elements in the temporal adjacency matrix \(r\in[0,1]\) to control the sparsity of the causal graph. Besides, about half of the final nonzero positions are assigned negative values. Then we sample the weights of \(\mathcal{A}\) uniformly with absolute values in the range \([0.7,0.95]\), as suggested by Mastakouri et al. [17].
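A simplified sketch of this sampling step is given below; for brevity it only enforces the sparsity ratio and the weight range, not the structural constraints of Hypothesis 1 and Hypothesis 2.

```
import numpy as np

# Sketch: sample a TAM of shape (p, m, m) with sparsity ratio r, magnitudes
# in [0.7, 0.95], and roughly half of the nonzero entries negative.
def sample_tam(m, p, r, seed=0):
    rng = np.random.default_rng(seed)
    A = np.zeros((p, m, m))
    mask = rng.random((p, m, m)) > r                 # keep a (1 - r) fraction
    weights = rng.uniform(0.7, 0.95, size=(p, m, m))
    signs = rng.choice([-1.0, 1.0], size=(p, m, m))
    A[mask] = (weights * signs)[mask]
    return A
```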
#### Linear Dataset and Nonlinear Dataset Generation
Now we simulate the time series with TAM \(\mathcal{A}\) based on (4). For the linear case, we set a time series of the \(m\)-variable vector \(\mathbf{x}\), which corresponds to a \(p\)th order Vector Autoregressive (VAR) model:
\[\mathbf{x}^{t}=A_{1}\mathbf{x}^{t-1}+\cdots+A_{p}\mathbf{x}^{t-p}+\mathbf{\epsilon}^{t} \tag{15}\]
where each \(A_{i}\) is an \(m\times m\) parameter matrix, \(\mathbf{\epsilon}^{t}\) is an independent noise, \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma I)\). To guarantee stability, all roots of \(\det(I-A_{1}z-\cdots-A_{p}z^{p})\) are required to lie outside the unit circle. See (1) for details. Notice that the variables are scalar-valued for simplicity but can be easily generalized to vector-valued ones.
For the nonlinear case, we consider two ways:
\[X^{t}=f(\mathcal{A}\mathcal{X})+Z \tag{16}\] \[X^{t}=\mathcal{A}g(\mathcal{X})+Z \tag{17}\]
where the nonlinear functions \(f\), \(g\) are applied elementwise. (16) is a widely adopted form considering nonlinearity. We also notice that (17) will not change the causal graph structurally, as Yu et al. [30] reason via Taylor expansion. Here we consider \(f(\cdot)=g(\cdot)=\sin(\cdot)\) as the nonlinear form for convenience.
With \(X^{1},\cdots,X^{p}\) initialized independently in a random way, the time series of length \(T\) can be computed by generating \(X^{t}\) at each time step \(t\) repeatedly for \(t=p,\cdots,T\). We construct time series samples of length \(s\) with sliding windows.
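A minimal sketch of the linear simulation of (15), assuming scalar-valued variables and a TAM stored as a \((p, m, m)\) array; the stability check on the roots of \(\det(I-A_{1}z-\cdots-A_{p}z^{p})\) is omitted here.

```
import numpy as np

# Sketch: simulate a VAR(p) process x^t = sum_tau A^tau x^{t-tau} + noise.
def simulate_var(A, T, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    p, m, _ = A.shape
    x = rng.standard_normal((T, m))                  # first p rows: random init
    for t in range(p, T):
        x[t] = sum(A[tau - 1] @ x[t - tau] for tau in range(1, p + 1))
        x[t] += noise_std * rng.standard_normal(m)
    return x
```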
**Other benchmark dataset.** To further test our proposed model, we also adopt the dataset generated by [14], which provides a system to generate time series data with flexible configurations for causal discovery. Notice the data is different from ours in that it allows for a varying likelihood of nonlinear functions given a TAM. We set the maximum lag \(p=2\), the number of variables \(d=5\), and generate the TAM under our assumptions, then simulate time series with Gaussian noise. Here we adopt the concept of complexity, which is set with different default levels in [14]. For level 0, the time series is generated only in a linear way. An increasing level stands for an increasing percentage and variety of non-linear dependencies, including piecewise linear, monotonic and periodic functions.
#### 5.1.2 Baselines
To validate the proposed model GDBN, we compare its performance with other popular approaches for Bayesian network inference. Our baselines are PCMCI [24], PCMCI+ [23] and DYNOTEARS [18] without the intra-slice acyclicity constraint, according to our assumptions about the time series data. PCMCI is a strong baseline based on conditional independence tests. It is compatible with our assumptions of causal stationarity, no contemporaneous causal links and no hidden variables. DYNOTEARS is similar to ours in that the optimization objective is the same, i.e., the adjacency matrix of the DAG.
#### 5.1.3 Performance Metrics
The inference results are evaluated on some common graph metrics: 1) False discovery rate (FDR), 2) True positive rate (TPR), 3) F1, computed as the harmonic mean of (1-FDR) and TPR, and 4) Structural Hamming Distance (SHD), which is the number of required changes to the graph to match the ground truth, computed as the sum of missing edges and extra edges. In view of the time order, the direction of all edges is already determined according to our assumptions. True (T) is the set of true edges, False (F) is the set of non-edges in the ground truth graph. Positive (P) is the set of estimated edges, where a True Positive (TP) is an edge that is in the true graph, while a False Positive (FP) is not.
* FDR = # FP / # P
* TPR = # TP / # T
* F1 = 2 \(\times\) (1 - FDR) \(\times\) TPR / (1 - FDR + TPR)
* SHD = # T + # P - 2 \(\times\) # TP
Considering that these metrics rely on the manually set threshold (4.3), we also introduce AUROC.
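These metrics are straightforward to compute from binary adjacency matrices of the estimated and ground-truth graphs, as in the sketch below.

```
import numpy as np

# Sketch: FDR, TPR, F1 and SHD from binary (0/1) adjacency matrices.
def graph_metrics(est, true):
    tp = np.sum((est == 1) & (true == 1))
    fp = np.sum((est == 1) & (true == 0))
    fdr = fp / max(est.sum(), 1)
    tpr = tp / max(true.sum(), 1)
    f1 = 2 * (1 - fdr) * tpr / max(1 - fdr + tpr, 1e-12)
    shd = true.sum() + est.sum() - 2 * tp      # missing edges + extra edges
    return fdr, tpr, f1, shd
```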
### Results and Hyperparameters
We set the maximum lag \(p=5\) and the number of variables \(d=10\) to simulate the nonlinear time series of length \(T=600\). We set the coefficient for the \(L_{1}\) norm \(\lambda=0.01\) for GDBN and DYNOTEARS, the regularization parameter \(\alpha_{\text{PC}}=0.1\) for PCMCI, and thresholds are tuned for each configuration to achieve a desirable F1 score. For GDBN, there are more hyperparameters: we set the observation window size \(s_{o}=10\), the prediction window size \(s_{p}=3\), the feature dimension
of the latent variables \(d_{Z}=8\), and the number of hidden units for the MLP \(h=32\). The performances are shown in Table 1 for the nonlinear data (16), and Table 2 for the nonlinear data (17). Note that we run 5 repeated random experiments and take the average to report the performances of GDBN and DYNOTEARS, while for PCMCI, which is significantly time-consuming, we only run it once.
GDBN outperforms other methods, especially in the detection of complicated nonlinear causal relations, and has a particular advantage when applied to relatively large datasets with high accuracy and efficiency.
For nonlinear data, GDBN gives an outstanding performance, especially decreasing the number of false discoveries within the range of the maximum lag \(p\) when compared with DYNOTEARS, as shown in Figure 4 and Figure 5. This shows the effectiveness of GDBN on nonlinear data, and that DYNOTEARS is generally not sufficient for the nonlinear scenario.
We show the GDBN performance on the dataset in [14] for different levels of complexity in Table 3. In the case of a large number of edges, GDBN reaches comparable performance while saving a significant amount of time, as shown in Table 4. The experiment is carried out on an Intel(R) Core(TM) i7-8557U CPU @ 1.70GHz. The time recorded for GDBN is for 100 epochs, so that GDBN reaches stable AUC performance.
## 6 Conclusion and Discussion
In this work, we introduce GDBN, a method that can infer causal dependencies for a full time graph in a discrete multi-variate temporal graph. As a preliminary, we define the fundamental structure of the temporal graph and the relevant adjacency matrix. We formalize the inference network for causal discovery and the structural causal model, and design the way to train the network jointly end to end. We conducted experiments and benchmarked our method on two main different types of time series data with known ground truth. Our approach reaches significantly higher precision than popular benchmarks such as DYNOTEARS [18] and PCMCI [24]. Our method can be utilized widely in real world problems, as understanding causal relational knowledge is becoming a pressing need for decision makers.
Despite the success of our method, graph neural networks of higher expressive power are desirable. The question remains as to what type of graph neural network suits causal discovery for the general SCM defined in [33]. The other important problem is what happens if our time series data is downsampled or irregularly sampled from the full time series data, and how our framework can be adjusted to adapt to such cases.
|
2304.10552 | Approximation and interpolation of deep neural networks | In this paper, we prove that in the overparametrized regime, deep neural
networks provide universal approximations and can interpolate any data set, as
long as the activation function is locally in $L^1(\mathbb{R})$ and not an affine
function.
Additionally, if the activation function is smooth and such an interpolation
network exists, then the set of parameters which interpolate forms a manifold.
Furthermore, we give a characterization of the Hessian of the loss function
evaluated at the interpolation points.
In the last section, we provide a practical probabilistic method of finding
such a point under general conditions on the activation function. | Vlad-Raul Constantinescu, Ionel Popescu | 2023-04-20T08:45:16Z | http://arxiv.org/abs/2304.10552v2 | # Interpolation property of shallow neural networks
###### Abstract
We study the geometry of global minima of the loss landscape of overparametrized neural networks. In most optimization problems, the loss function is either convex, in which case there is only one global minimum, or nonconvex, with a discrete number of global minima. In this paper, we prove that in the overparametrized regime, a shallow neural network can interpolate any data set, i.e. the loss function has a global minimum value equal to zero, as long as the activation function is not a polynomial of small degree. Additionally, if such a global minimum exists, then the locus of global minima has infinitely many points. Furthermore, we give a characterization of the Hessian of the loss function evaluated at the global minima, and in the last section, we provide a practical probabilistic method of finding the interpolation point.
## 1 Introduction
We study the geometry of global minima of the loss landscape of overparametrized neural networks. In the light of the interpolation threshold outlined in [1], one of the important issues in neural networks is to have guarantees that the interpolation is indeed achieved. We tackle this problem for the case of shallow neural networks and show that this holds true in general as long as the activation is not a polynomial of low degree.
Standard optimization problems deal with the case where the loss function is convex, in which case there is only one global minimum. Another class of optimization problems concerns nonconvex loss functions, which have a discrete number of global minima.
Recently there has been interesting progress aimed at understanding the locus of the global minima for overparametrized neural networks ([3], [6]) when the activation function is continuous. In this paper, we generalize these results in section 2 for a larger class of activation functions. More precisely, we prove that in the overparametrized regime, we can interpolate any data set consisting of \(d\) points with a shallow neural network having at least \(d\) neurons on the hidden layer and with an activation function which is locally integrable and not almost everywhere a polynomial of degree at most \(d-2\). In addition, if the activation function is also smooth, the locus of global minima of the loss landscape of an over-parametrized neural network is a submanifold of \(\mathbb{R}^{n}\). More precisely, if a neural net has \(n\) parameters and is trained on \(d\) data points, where \(n>d\), then the locus \(M\) of global minima is an \(n-d\) dimensional submanifold of \(\mathbb{R}^{n}\). If the activation function \(\sigma\) is a non-constant polynomial, the interpolation problem depends very much on the data set and the degree of \(\sigma\). In section 3 we provide similar results for this class of activation functions. Additionally, we give a description in section 4 for the eigenspectrum of the Hessian of the loss function of a neural net evaluated at the global minima.
The next question that arises is how can one find such an interpolation point? One of the numerical methods that is used for training neural networks is the (stochastic) gradient descent method. For the convex case, there is a large literature on convergence results for (stochastic) gradient descent, see for example [2]. However, in the non-convex scenario, first-order methods like gradient descent can get stuck at a saddle point. In section 5 we give an answer to this question by reducing the minimization of a non-convex loss function to a simple linear regression problem. In [5], this is done for shallow neural networks, using a smooth activation function with a bounded derivative and a number of hidden nodes of order \(O(d\log^{2}(d))\). Using similar techniques as in [5], we extend this result to shallow neural networks with a continuous activation function that is not a polynomial of degree at most \(d-2\). The reduction is done by a random initialization of the input-to-hidden weights, and optimizing over the output layer. Our result requires that the number of hidden nodes is of order \(O(d\log(d))\).
## 2 Interpolation with non-polynomial activation function
We consider a neural network of any architecture (e.g., feedforward, convolutional, etc.), with weights \(w=(w_{1},w_{2},\ldots)\) and biases \(b=(b_{1},b_{2},\ldots)\). The number of weights and biases is \(n\), and we train our neural network on \(d\) data points \((x_{i},y_{i})_{i=\overline{1,d}}\), where \(x_{i}\in\mathbb{R}^{p}\) and \(y_{i}\in\mathbb{R}\). We assume that the \(x_{i}\) are distinct and our neural network is overparametrized, i.e. \(n>d\).
We denote by \(f_{w,b}\) the function given by our neural network. For each data point \((x_{i},y_{i})\), we define \(f_{i}(w,b)=f_{w,b}(x_{i})-y_{i}\). We suppose that each \(f_{i}(w,b)\) is smooth in \(w\) and \(b\). For example, if our neural network is feedforward, the smoothness of the activation function \(\sigma\) implies the smoothness of \(f_{i}(w,b)\).
For the training of our neural net, we use the mean squared loss function
\[L(w,b)=\sum_{i=1}^{d}f_{i}(w,b)^{2}\]
From our definition of the loss function, we observe that \(L(w,b)\geq 0\). If \(M=L^{-1}(0)\) is nonempty, then \(M\) is the locus of global minima of \(L\). Also, the locus of global minima can be written as
\[M=\bigcap_{i=1}^{d}M_{i},\]
where
\[M_{i}=f_{i}^{-1}(0)\]
The following theorem is a result of [3] which we state here for the case of smooth activation functions.
**Theorem 2.1**.: _In the framework above, the set \(M=L^{-1}(0)\) is generically (that is, possibly after an arbitrarily small change to the data set) a smooth \(n-d\) dimensional submanifold (possibly empty) of \(\mathbb{R}^{n}\)._
In this paper, we will prove that for a class of feedforward neural networks, the set \(M=L^{-1}(0)\) is non-empty. In this context, \(f_{w,b}\) is written in matrix form as
\[f_{w,b}(x)=W_{l}\sigma(W_{l-1}\sigma(\ldots\sigma(W_{1}x-b_{1})\ldots)-b_{l-1} )-b_{l},\]
where \(W_{i}\in\mathcal{M}_{n_{i}\times n_{i-1}}(\mathbb{R}),b_{i}\in\mathbb{R}^{n_{i}}\) and \(n_{0}=p,n_{l}=1\). Moreover, we use the convention that \(\sigma\) applied to a vector is simply the component-wise evaluation:
\[\sigma(v_{1},v_{2},\ldots,v_{k})=(\sigma(v_{1}),\sigma(v_{2}),\ldots,\sigma(v _{k})).\]
When the activation function \(\sigma\) is continuous and not a polynomial, any shallow neural network, i.e. a feedforward neural network with one hidden layer, can interpolate any data set [6]. In this paper, we will prove the same result for activation functions \(\sigma\), which satisfy the following assumption:
**Assumption 2.2**.: _The activation function \(\sigma\) is locally integrable, i.e \(\sigma\in L^{1}_{loc}(\mathbb{R})\), and is almost surely not a polynomial of degree less or equal than \(d-2\), i.e. there exists no polynomial \(P\) of degree at most \(d-2\) such that \(\sigma=P\) almost surely._
**Proposition 2.3**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p},y_{i}\in\mathbb{R}\), and with \(x_{i}\) assumed distinct. Assume that \(\sigma\) satisfies Assumption 2.2. Then, for any \(h\geq d\), there exists a shallow neural network with width \(h\geq d\), with activation function \(\sigma\) such that it interpolates our data set, i.e. \(f_{w,b}(x_{i})=y_{i}\) for all \(i\)._
Proof.: The idea is to refine the proof of Theorem 5.1 from [6]. The output function of a shallow neural network is written in the matrix form as
\[f_{w,b}(x)=v^{T}\sigma(Wx-b)-b^{\prime},\]
where \(W\in\mathcal{M}_{h\times p}(\mathbb{R}),v\in\mathbb{R}^{h},b\in\mathbb{R}^{h}\) and \(b^{\prime}\in\mathbb{R}\). The entries of \(W\) and \(v\) are the weights, and the entries of \(b,b^{\prime}\) are the biases. For our neural net, we take \(b^{\prime}=0\) and \(h=d\). If \(h>d\), then we set the weights and biases after the first \(d\) nodes to be equal to zero. Hence, we can reduce our construction to \(h=d\). Let \(w_{1},\ldots,w_{d}\) be the rows of \(W\) and \(v=(v_{1},\ldots,v_{d})\). Since the \(x_{i}\) are distinct, we can find a vector \(w\in\mathbb{R}^{p}\) such that \(w^{T}x_{i}=t_{i}\) are distinct for all \(i\). We set \(w_{i}=a_{i}w\) for some \(a_{i}\in\mathbb{R}\), \(i=\overline{1,d}\). Therefore, we have to show that there exist \((a_{i},b_{i},v_{i})_{i=\overline{1,d}}\) such that
\[\sum_{j=1}^{d}v_{j}\sigma(a_{j}t_{i}-b_{j})=y_{i},\]
for all \(i\). This interpolation problem is equivalent to proving the linear independence (over \(a\) and \(b\)) of the \(d\) functions \(\sigma(at_{i}-b)\). If we have linear independence of these functions, then we can find \((a_{i},b_{i})_{i=\overline{1,d}}\) such that the matrix system of our interpolation problem
\[\left(\begin{array}{ccc}\sigma(a_{1}t_{1}-b_{1})&\ldots&\sigma(a_{d}t_{1}-b_{ d})\\ \vdots&\ddots&\vdots\\ \sigma(a_{1}t_{d}-b_{1})&\ldots&\sigma(a_{d}t_{d}-b_{d})\end{array}\right)\]
is nonsingular. And from here we can determine \((v_{1},\ldots,v_{d})\) uniquely. Suppose that our \(d\) functions are linearly dependent. This means that we can find nontrivial coefficients \((c_{i})_{i=\overline{1,d}}\) such that
\[\sum_{i=1}^{d}c_{i}\sigma(at_{i}-b)=0. \tag{1}\]
Let \(\zeta\in C_{c}^{\infty}(\mathbb{R},[0,\infty))\), i.e. \(\zeta\) is non-negative, infinitely differentiable with compact support, and \(\int_{\mathbb{R}}\zeta(x)dx=1\). We define for \(\epsilon>0\) the following function
\[\sigma_{\epsilon}(t)=\int_{\mathbb{R}}\frac{1}{\epsilon}\zeta\left(\frac{t-x }{\epsilon}\right)\sigma(x)dx\]
Since \(\sigma\in L^{1}_{loc}(\mathbb{R})\), standard arguments show that
\[\sigma_{\epsilon}\xrightarrow[\epsilon\to 0]{L^{1}_{loc}}\sigma\]
In particular, there is a subsequence \(\epsilon_{n}\) converging to \(0\) such that \(\sigma_{\epsilon_{n}}\) converges to \(\sigma\) almost surely. The key observation is that if \(\sigma_{\epsilon_{n}}\) were a polynomial of degree at most \(d-2\) for every \(n\), then in the limit \(\sigma\) would also be almost surely a polynomial of degree at most \(d-2\).
Consequently, we can reduce the problem to the case where \(\sigma\) is replaced by \(\sigma_{\epsilon}\) for some \(\epsilon>0\). Using now (1) we will continue to have the same relation also for \(\sigma_{\epsilon}\). Thus from now on we simply assume that \(\sigma\) is smooth and (1) is satisfied. If we differentiate \(k\) times relation (1) with respect to \(a\), we get
\[\sum_{i=1}^{d}c_{i}t_{i}^{k}\sigma^{(k)}(at_{i}-b)=0.\]
Since \(\sigma\) is not a polynomial of degree less than or equal to \(d-2\), for any \(k=\overline{0,d-1}\) we can find \(b_{k}\in\mathbb{R}\) such that \(\sigma^{(k)}(-b_{k})\neq 0\). Taking \(a=0\) and \(b=b_{k}\) for each equation, we get a system of \(d\) equations
\[\sum_{i=1}^{d}c_{i}t_{i}^{k}=0, \tag{2}\]
for each \(k=\overline{0,d-1}\). Since the matrix system of (2) is a Vandermonde matrix, and the \(t_{i}\) are distinct, we get that all \(c_{i}\) must be equal to \(0\), which is a contradiction.
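The construction above is easy to verify numerically: for generic random \((a_{j},b_{j})\), the matrix \(\sigma(a_{j}t_{i}-b_{j})\) is nonsingular and the output weights follow from a linear solve. The sketch below uses \(\sigma=\tanh\), which satisfies Assumption 2.2.

```
import numpy as np

# Sketch: interpolate d points with a width-d shallow tanh network.
rng = np.random.default_rng(0)
d = 10
t = rng.standard_normal(d)            # distinct projections t_i = w^T x_i
y = rng.standard_normal(d)            # target values y_i
a, b = rng.standard_normal(d), rng.standard_normal(d)
S = np.tanh(np.outer(t, a) - b)       # S[i, j] = sigma(a_j t_i - b_j)
v = np.linalg.solve(S, y)             # output-layer weights
print(np.max(np.abs(S @ v - y)))      # ~0: the network interpolates the data
```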
Since a shallow neural network can be embedded in a deeper feedforward neural network, we have the following Corollary of Proposition 2.3.
**Corollary 2.4**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p},y_{i}\in\mathbb{R}\), and the \(x_{i}\) are distinct. Assume that the activation function \(\sigma\) satisfies Assumption 2.2 and it is one-to-one on some interval \(I\) of the real line. Then, in the family of feedforward neural nets of \(l\) hidden layers, with the last hidden layer having \(h\geq d\) neurons, and remaining hidden layers of any width, we can find one that interpolates our data set, i.e. \(f_{w,b}(x_{i})=y_{i}\) for all \(i\)._
Proof.: By the hypothesis, \(\sigma\) is injective on \(I\). The idea is to first map the input into the \((l-2)\)th hidden layer one-to-one and then consider these as the input data for a shallow neural network. To this end, let \(f_{k}\) denote the output function of the first \(k\) hidden layers before the activation is applied, i.e.
\[f_{k}(x)=W_{k}\sigma(W_{k-1}\sigma(\ldots\sigma(W_{1}x-b_{1})\ldots)-b_{k-1})- b_{k}.\]
Notice that \(f_{1}(x)=W_{1}x-b_{1}\) and then
\[f_{k}(x)=W_{k}\sigma(f_{k-1}(x))-b_{k}\]
One can always choose \(W_{1},\ldots,W_{l-2}\) and \(b_{1},\ldots,b_{l-2}\) such that the components of \(f_{k}(x_{i})\) belong to \(I\). Therefore, if we start with \(d\) distinct points \(x_{1},\ldots,x_{d}\), we end up with \(d\) distinct outputs \(\sigma(f_{l-2}(x_{1})),\ldots,\sigma(f_{l-2}(x_{d}))\) by the injectivity of \(\sigma\) on \(I\). Now for \(W_{l-1},W_{l},b_{l-1}\), and \(b_{l}\), we use the same construction as in Proposition 2.3.
With these settings described above, we have the following consequence.
**Theorem 2.5**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p},y_{i}\in\mathbb{R}\), and the \(x_{i}\) are distinct. Assume that the activation function \(\sigma\) is smooth and satisfies Assumption 2.2. Let \(L\) be the mean squared loss function of a feedforward neural network with \(l\) hidden layers and with the last one of width \(h\geq d\). Then, the set \(M=L^{-1}(0)\) is generically (that is, possibly after an arbitrarily small change to the data set) a nonempty smooth \(n-d\) dimensional submanifold of \(\mathbb{R}^{n}\)._
Proof.: This is a consequence of Theorem 2.1 and Corollary 2.4. Notice that because \(\sigma^{\prime}\) is not constant, we can find an interval \(I\) on which \(\sigma^{\prime}\) has constant sign; thus \(\sigma\) is one-to-one there, as Corollary 2.4 requires.
## 3 Interpolation with polynomial activation function
If the activation function \(\sigma\) is a non-constant polynomial, the interpolation problem depends very much on the \(x_{i}\) and on the degree of \(\sigma\). More precisely, we have the following result.
**Proposition 3.1**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p},y_{i}\in\mathbb{R}\), and the \(x_{i}\) are distinct. If \(\sigma\) is a polynomial of degree \(m\), then we have the following two statements_
1. _If_ \(d>\sum_{k=1}^{m}\binom{p+k}{k}\)_, then the interpolation problem in Proposition_ 2.3 _is not possible._
2. _If_ \(d\leq\sum_{k=1}^{m}\binom{p+k}{k}\) _and_ \((1,x_{i},x_{i}^{\otimes 2},\ldots,x_{i}^{\otimes m})_{i=\overline{1,d}}\) _are linearly independent, then the interpolation problem in Proposition_ 2.3 _is possible._
Proof.: Since the interpolation problem is equivalent to the functions \(\sigma(w^{T}x_{i}-b)\) being linearly independent (over \(w\) and \(b\)), for the first statement we will show that one can find nontrivial coefficients \((c_{i})_{i=\overline{1,d}}\) such that
\[\sum_{i=1}^{d}c_{i}\sigma(w^{T}x_{i}-b)=0, \tag{3}\]
for any \(w\in\mathbb{R}^{p}\) and \(b\in\mathbb{R}\). For simplicity, we can take \(b=0\) by considering \(x_{i}\leftarrow(x_{i},-1)\), and \(w\leftarrow(w,b)\). By these notations, equation (3) is equivalent to
\[\sum_{i=1}^{d}c_{i}\sigma(w^{T}x_{i})=0, \tag{4}\]
for any \(w\in\mathbb{R}^{p+1}\). Since \(\sigma\) is a polynomial of degree \(m\), equation (4) is equivalent to
\[\sum_{i=1}^{d}c_{i}(w^{T}x_{i})^{k}=0, \tag{5}\]
for any \(k=\overline{0,m}\) and \(w\in\mathbb{R}^{p+1}\). And equation (5) is equivalent to
\[\sum_{i=1}^{d}c_{i}x_{i}^{\otimes k}=0, \tag{6}\]
for any \(k=\overline{0,m}\). Thus, our problem is reduced to finding a nontrivial linear dependence among the elements \((1,x_{i},x_{i}^{\otimes 2},\ldots,x_{i}^{\otimes m})\).
It is well known that \(\text{Sym}^{k}(\mathbb{R}^{p+1})\), i.e. the space of symmetric tensors of order \(k\), is spanned by elements of the form \(v^{\otimes k}\) and has dimension \(\binom{p+k}{k}\). Consequently, if the number of data points \(x_{i}\) is bigger than the dimension of \(\bigoplus_{k=1}^{m}\text{Sym}^{k}(\mathbb{R}^{p+1})\), which is \(\sum_{k=1}^{m}\binom{p+k}{k}\), then we can find a linear dependence.
On the other hand, if \(d\leq\sum_{k=1}^{m}\binom{p+k}{k}\), and the vectors \((1,x_{i},x_{i}^{\otimes 2},\ldots,x_{i}^{\otimes m})_{i=\overline{1,d}}\) are linearly independent, then \(c_{i}\) must all be equal to \(0\), thus the interpolation is possible.
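The dichotomy between the two statements can be observed numerically. In the sketch below (NumPy; all sizes and the specific activations are our illustrative choices), we sample many random pairs \((w,b)\) and compute the rank of the matrix with entries \(\sigma(w_{k}^{T}x_{i}-b_{k})\): a rank smaller than \(d\) certifies a linear dependence among the functions \(\sigma(w^{T}x_{i}-b)\), so interpolation can fail, while a non-polynomial activation such as \(\tanh\) keeps the rank full.

```python
import numpy as np

rng = np.random.default_rng(2)
p, d, K = 2, 40, 2000                  # illustrative sizes; K random (w, b) samples
X = rng.normal(size=(d, p))
W = rng.normal(size=(K, p))
B = rng.normal(size=K)

def numerical_rank(A, rel_tol=1e-8):
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > rel_tol * s[0]).sum())

# A[i, k] = sigma(w_k^T x_i - b_k); rank < d certifies a linear dependence
# among the functions sigma(w^T x_i - b), so exact interpolation can fail.
poly = lambda t: 1 + t + t**2 + t**3   # a degree-3 polynomial activation
print("polynomial sigma:", numerical_rank(poly(X @ W.T - B)))     # saturates far below d = 40
print("tanh sigma:      ", numerical_rank(np.tanh(X @ W.T - B)))  # full rank d = 40
```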
As the following results in this section rely heavily on the second statement of Proposition 3.1, we introduce the following assumption.
**Assumption 3.2**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p},y_{i}\in\mathbb{R}\). We require that \(d\leq\sum_{k=1}^{m}\binom{p+k}{k}\) and \((1,x_{i},x_{i}^{\otimes 2},\ldots,x_{i}^{\otimes m})_{i=\overline{1,d}}\) are linearly independent._
We have the following consequence of Proposition 3.1.
**Corollary 3.3**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set and \(\sigma\) a polynomial function. Assume that the activation function \(\sigma\) has degree \(m\) and our data set satisfy Assumption 3.2. Then, in the family of feedforward neural nets of \(l\) hidden layers, with the last hidden layer having \(h\geq d\) neurons, and remaining hidden layers of any width, we can find one that interpolates our data set, i.e. \(f_{w,b}(x_{i})=y_{i}\) for all \(i\)._
Proof.: The proof follows a similar approach to that of Corollary 2.4. Notice that because \(\sigma\) is a non-constant polynomial function, we can find an interval \(I\) on which \(\sigma^{\prime}\) has constant sign; thus \(\sigma\) is one-to-one there, as Corollary 2.4 requires.
With the settings described above, we have the following consequence.
**Theorem 3.4**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set and \(\sigma\) a polynomial function. Assume that the activation function \(\sigma\) and our data set satisfy Assumption 3.2. Let \(L\) be the mean squared loss function of a feedforward neural network with \(l\) hidden layers and with the last one of width \(h\geq d\). Then, the set \(M=L^{-1}(0)\) is generically (that is, possibly after an arbitrarily small change to the data set) a nonempty smooth \(n-d\) dimensional submanifold of \(\mathbb{R}^{n}\)._
Proof.: This is a consequence of Proposition 3.1 and Corollary 3.3.
## 4 The Hessian for the global minima
In this section, we describe the Hessian eigenspectrum of the loss function \(L\) evaluated at a point \(m\in M=L^{-1}(0)\). The following proposition is a result of [3], and it is true for any neural network architecture.
**Proposition 4.1**.: _Let \(M=L^{-1}(0)=\bigcap M_{i}\), where \(M_{i}=f_{i}^{-1}(0)\), be the locus of global minima of \(L\). If each \(M_{i}\) is a smooth codimension 1 submanifold of \(\mathbb{R}^{n}\), \(M\) is nonempty, and the \(M_{i}\) intersect transversally at every point of \(M\), then at every point \(m\in M\), the Hessian evaluated at \(m\) has \(d\) positive eigenvalues and \(n-d\) eigenvalues equal to 0._
Consider now a shallow neural net as in Theorem 2.5 or Theorem 3.4. Then we have the following corollary of Proposition 4.1:
**Corollary 4.2**.: _Let \(L\) be the mean square loss function of a neural net as described above. Then, \(M\) is nonempty, and the Hessian of \(L\), evaluated at any point \(m\in M=L^{-1}(0)\) has \(d\) positive eigenvalues and \(n-d\) eigenvalues equal to 0._
Proof.: Without loss of generality, suppose our shallow neural network is in the setting of Theorem 2.5.
The locus of global minima \(M=L^{-1}(0)\) is the intersection of \(M_{i}\), where
\[M_{i}=\{(w,b)\in\mathbb{R}^{n}|f_{w,b}(x_{i})=y_{i}\}\]
Due to Proposition 4.1, it suffices to prove that \(M\) is non-empty, that each \(M_{i}\) is smooth of codimension 1, and that the \(M_{i}\) intersect transversally at each point of \(M\).
The nonemptiness of \(M\) follows from Theorem 2.5. Each \(M_{i}\) is smooth of codimension 1, again by Theorem 2.5 applied with \(d=1\). It remains to prove that the intersection of the \(M_{i}\) is transversal. Let \(m=(w,b)\in M\) and assume, for contradiction, that the intersection at \(m\) is not transversal. This means that the tangent spaces coincide: \(T_{m}M_{1}=T_{m}M_{i}\) for all \(i\). From our notations, we have that
\[f_{i}(w,b)=W_{2}\sigma(W_{1}x_{i}-b_{1})-b_{2}-y_{i},\]
The equality of the tangent spaces at \(m\) means that their normal vectors are collinear, i.e. \(\nabla f_{i}(w,b)=\alpha_{i}\nabla f_{1}(w,b)\) for some \(\alpha_{i}\in\mathbb{R}\). Computing the partial derivatives with respect to \(W_{1}\), \(b_{1}\), and \(b_{2}\), we get
\[\frac{\partial f_{i}}{\partial W_{1}}(w,b)= -\frac{\partial f_{i}}{\partial b_{1}}(w,b)\otimes x_{i}\] \[\frac{\partial f_{i}}{\partial b_{2}}(w,b)= -1\]
From the partial derivative with respect to \(b_{2}\), we get that \(\alpha_{i}=1\) for all \(i\). Thus,
\[\frac{\partial f_{i}}{\partial b_{1}}(w,b)= \frac{\partial f_{j}}{\partial b_{1}}(w,b)\] \[\frac{\partial f_{i}}{\partial b_{1}}(w,b)\otimes x_{i}= \frac{\partial f_{j}}{\partial b_{1}}(w,b)\otimes x_{j}\]
for all \(i,j\). Since \(\sigma\) is smooth, we can find an interval \(I\) such that \(\sigma^{\prime}\) does not vanish on it. We consider a point \((w^{*},b^{*})\in\mathbb{R}^{n}\) such that all entries of \(W_{1}\) are equal to 0, all entries of \(W_{2}\) are different from 0, and all entries of \(-b_{1}\) belong to \(I\). With this setting, each component of \(\frac{\partial f_{i}}{\partial b_{1}}(w^{*},b^{*})\) is different from zero. So from the last two relations we get \(x_{i}=x_{j}\) for all \(i,j\), which contradicts the assumption that the \(x_{i}\) are distinct.
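The Hessian structure asserted in Corollary 4.2 is easy to check numerically. The sketch below (NumPy; small illustrative sizes, with the output bias \(b_{2}\) omitted for brevity) builds an interpolating point for a shallow \(\tanh\) network, estimates the Jacobian \(J\) of the residuals by finite differences, and uses the fact that at a zero-loss point the Hessian of \(L=\sum_{i}r_{i}^{2}\) equals \(2J^{T}J\); one observes \(d\) positive eigenvalues and \(n-d\) (numerically) zero ones.

```python
import numpy as np

rng = np.random.default_rng(3)
d, p, h = 4, 2, 5                       # small illustrative sizes, h >= d
X = rng.normal(size=(d, p))
y = rng.normal(size=d)

# Build an interpolating point as in the text: random first layer, exact
# output weights (the output bias is omitted for brevity).
W1 = rng.normal(size=(h, p))
b1 = rng.normal(size=h)
W2, *_ = np.linalg.lstsq(np.tanh(X @ W1.T - b1), y, rcond=None)

def residuals(theta):
    W1_ = theta[:h * p].reshape(h, p)
    b1_ = theta[h * p:h * p + h]
    W2_ = theta[h * p + h:]
    return np.tanh(X @ W1_.T - b1_) @ W2_ - y

theta0 = np.concatenate([W1.ravel(), b1, W2])
n, eps = theta0.size, 1e-6

# Jacobian of the residuals by central differences. At a zero-loss point the
# Hessian of L = sum_i r_i^2 equals 2 J^T J, since the residual-weighted
# second-order term vanishes.
J = np.empty((d, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = eps
    J[:, j] = (residuals(theta0 + e) - residuals(theta0 - e)) / (2 * eps)

eig = np.linalg.eigvalsh(2 * J.T @ J)
print("positive eigenvalues:", int((eig > 1e-8).sum()), "of", n)  # 4 positive, n - 4 zero
```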
## 5 Convergence to the global minima
In Section 2, we established the existence of an interpolation point. In this section, we present a method which determines this point with high probability. The approach consists of initializing the input-to-hidden weights randomly and optimizing only the output-layer weights \(v\in\mathbb{R}^{h}\). This idea is inspired by [5]. Before we get into the details, we absorb the biases into the weights by appending to each input vector \(x_{i}\) a \((p+1)\)-th coordinate equal to \(1\). In the rest of this section we assume that \(x_{i}\) is constructed this way, keeping the same notation for simplicity; notice that the dimension of the input vectors changes from \(p\) to \(p+1\).
Now, we need to minimize the loss function:
\[L(v):=\sum_{i=1}^{d}(v^{T}\sigma(Wx_{i})-y_{i})^{2}=||\sigma(XW^{T})v-y||^{2},\]
which is a simple linear regression problem. Moreover, if \(\sigma(XW^{T})\) has full rank, then the global minimum of this optimization problem is given by
\[\tilde{v}:=\phi^{T}(\phi\phi^{T})^{-1}y\]
where \(\phi:=\sigma(XW^{T})\). So the question we ask is how much overparameterization is needed for the matrix \(\sigma(XW^{T})\) to have full row rank. Observe that
\[\phi\phi^{T}=\sigma(XW^{T})\sigma(XW^{T})^{T}=\sum_{l=1}^{h}\sigma(Xw_{l}) \sigma(Xw_{l})^{T}.\]
where \(w_{l}\) is the \(l\)-th row of \(W\). This leads us to the following definition.
**Definition 5.1**.: Let \(w\) be a random vector with a \(\mathcal{N}(0,I_{p+1})\) distribution. We define the following matrix
\[\tilde{\Sigma}(X):=\mathbb{E}_{w}[\sigma(Xw)\sigma(Xw)^{T}]\]
Let \(\tilde{\lambda}(X)\) denote the minimum eigenvalue of \(\tilde{\Sigma}(X)\), i.e. \(\tilde{\lambda}(X):=\lambda_{min}(\tilde{\Sigma}(X))\).
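The quantity \(\tilde{\lambda}(X)\) can be estimated by Monte Carlo, as in the following sketch (NumPy; the sizes, the number of draws, and the use of \(\tanh\) are illustrative assumptions of ours).

```python
import numpy as np

rng = np.random.default_rng(4)
d, p, K = 5, 3, 200_000                # illustrative sizes; K Monte Carlo draws
X = rng.normal(size=(d, p + 1))        # rows x_i in R^{p+1}, biases already absorbed

# Monte Carlo estimate of Sigma~(X) = E_w[ sigma(Xw) sigma(Xw)^T ], w ~ N(0, I_{p+1}).
W = rng.normal(size=(K, p + 1))
S = np.tanh(X @ W.T)                   # column k is sigma(X w_k)
Sigma = (S @ S.T) / K

lam = np.linalg.eigvalsh(Sigma)[0]     # lambda~(X): the minimum eigenvalue
print("estimated lambda~(X) =", lam)   # strictly positive for distinct rows x_i
```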
The following Proposition is a consequence of the interpolation property.
**Proposition 5.2**.: _If the activation function \(\sigma\) and our data set \(\left(x_{i},y_{i}\right)_{i=\overline{1,d}}\) satisfy Assumption 2.2 or 3.2, then \(\tilde{\lambda}(X)>0\)._
Proof.: Let \(v\in\mathbb{R}^{d}\) be such that \(v\tilde{\Sigma}(X)v^{T}=0\). This is equivalent to
\[\sum_{i=1}^{d}v_{i}\sigma(w^{T}x_{i})=0, \tag{7}\]
for almost every \(w\in\mathbb{R}^{p+1}\). If \(\sigma\) satisfies Assumption 2.2, then, using the same arguments as in Proposition 2.3, we get that \(v=0\). Otherwise, we use the reasoning from Proposition 3.1. Therefore, \(\tilde{\Sigma}(X)\) is a symmetric positive definite matrix.
In [5], using matrix concentration inequalities and Gaussian Lipschitz concentration inequalities, the non-singularity of \(\phi\phi^{T}\) is proved when the activation function \(\sigma\) has a bounded derivative. Using similar arguments as in [5], we extend this result to continuous activation functions \(\sigma\) which are not polynomials of degree less than \(d-2\).
We state here one result which plays the leading role in our arguments.
**Theorem 5.3**.: _(Matrix Chernoff) Let \(\left(A_{l}\right)_{l=\overline{1,k}}\) be a sequence of independent, random, Hermitian matrices of dimension \(n\). Assume that \(0\preceq A_{l}\preceq R\cdot I_{n}\) for \(l=\overline{1,k}\). Then_
\[\mathbb{P}\left(\lambda_{min}\left(\sum_{l=1}^{k}A_{l}\right)\leq(1-\delta)\lambda_{min}\left(\sum_{l=1}^{k}\mathbb{E}(A_{l})\right)\right)\leq n\left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^{\frac{\lambda_{min}\left(\sum_{l=1}^{k}\mathbb{E}(A_{l})\right)}{R}}\]
_for any \(\delta\in[0,1)\)._
Now we are ready for the main result of this section.
**Theorem 5.4**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set with \(x_{i}\in\mathbb{R}^{p+1},y_{i}\in\mathbb{R}\), and assume that the \(x_{i}\) are distinct. Consider a shallow neural network with \(h\) hidden nodes of the form \(f(v,W):=v^{T}\sigma(Wx)\) with \(W\in\mathcal{M}_{h\times(p+1)}(\mathbb{R})\) and \(v\in\mathbb{R}^{h}\). Let \(\mu\) be the Gaussian measure. We assume that the activation function \(\sigma\in C(\mathbb{R})\cap L^{2}(\mathbb{R},\mu)\) and that it is not a polynomial of degree less than \(d-2\). We initialize the entries of \(W\) with i.i.d. \(\mathcal{N}(0,1)\). Also, assume_
\[h\geq\frac{C_{\sigma}d\log(d)}{\tilde{\lambda}(X)}\]
_where \(C_{\sigma}\) is a constant that depends only on \(\sigma\). Then, the matrix \(\sigma(XW^{T})\) has full row rank with probability at least \(1-\frac{1}{d^{100}}\)._
Proof.: It suffices to prove that \(\sigma(XW^{T})\sigma(XW^{T})^{T}\) is non-singular with high probability. First, observe that
\[\phi\phi^{T}=\sigma(XW^{T})\sigma(XW^{T})^{T}=\sum_{l=1}^{h}\sigma(Xw_{l})\sigma(Xw_{l})^{T}\succeq\sum_{l=1}^{h}\sigma(Xw_{l})\sigma(Xw_{l})^{T}\mathbb{1}_{\{\|\sigma(Xw_{l})\|<T_{d}\}}.\]
Here \(T_{d}\) is a function of \(d\) which will be determined later in the proof. Applying the Matrix Chernoff concentration inequality with \(A_{l}=\sigma(Xw_{l})\sigma(Xw_{l})^{T}\mathbb{1}_{\{\|\sigma(Xw_{l})\|<T_{d}\}}\), \(R=T_{d}^{2}\), and \(\tilde{A}(w)=\sigma(Xw)\sigma(Xw)^{T}\mathbb{1}_{\{\|\sigma(Xw)\|<T_{d}\}}\), we get
\[\lambda_{min}\left(\phi\phi^{T}\right)\geq h(1-\delta)\lambda_{min}\left( \mathbb{E}\left[\tilde{A}(w)\right]\right) \tag{8}\]
holds with probability at least \(1-d\left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^{\frac{h\lambda_{min}\left(\mathbb{E}[\tilde{A}(w)]\right)}{T_{d}^{2}}}\). We fix \(\delta\) from now on; for instance, we can pick \(\delta=1/2\).
Now, it remains to prove that \(\mathbb{E}[\tilde{A}(w)]\) is a positive definite matrix. Let \(v\in\mathbb{R}^{d}\) such that \(v\mathbb{E}[\tilde{A}(w)]v^{T}=0\). This is equivalent to
\[\sum_{i=1}^{d}v_{i}\sigma(w^{T}x_{i})=0, \tag{9}\]
for almost every \(w\in\mathbb{R}^{p+1}\) satisfying \(\|\sigma(Xw)\|<T_{d}\). Because \(\sigma\) is continuous, relation (9) is in fact valid for all \(w\) with \(\|\sigma(Xw)\|<T_{d}\).
We now impose a first condition on \(T_{d}\): we require that \(\sigma(0)\sqrt{d}<T_{d}\). Since the \(x_{i}\) are distinct, we can find a vector \(w\in\mathbb{R}^{p+1}\) such that the \(w^{T}x_{i}=t_{i}\) are distinct for all \(i\); we can take this \(w\) with last component equal to \(0\). With this choice of \(T_{d}\), we can scale \(w\) to be sufficiently small so that \(\|\sigma(Xw)\|_{2}<T_{d}\). Then for any \(a,b\in\mathbb{R}\) that satisfy \(\sum_{i=1}^{d}\sigma^{2}(at_{i}-b)<T_{d}^{2}\) we have
\[\sum_{i=1}^{d}v_{i}\sigma(at_{i}-b)=0. \tag{10}\]
Let \(\zeta\in C_{0}^{\infty}(\mathbb{R},[0,\infty))\), i.e. \(\zeta\) is non-negative, infinitely differentiable, with compact support contained in \([-1,1]\), and \(\int_{\mathbb{R}}\zeta(x)dx=1\). For \(\epsilon>0\), we define the following function
\[\sigma_{\epsilon}(t)=\int_{\mathbb{R}}\frac{1}{\epsilon}\zeta\left(\frac{t-x}{ \epsilon}\right)\sigma(x)dx\]
Since \(\sigma\) is continuous, standard arguments show that
\[\sigma_{\epsilon}\xrightarrow[\epsilon\to 0]{u}\sigma \tag{11}\]
Since \(\sigma\) is not a polynomial of degree less than \(d-2\), we can find \(M>0\) large enough such that \(\sigma|_{[-M,M]}\) is not a polynomial of degree less than \(d-2\). From (11), we can then find a small \(\epsilon\) such that \(\sigma_{\epsilon}\), restricted to \([-M,M]\), is not a polynomial of degree less than \(d-2\).
One of the key steps is the following claim.

_Claim._ Since \(\sigma_{\epsilon}\) is not a polynomial of degree less than \(d-2\) on \([-M,M]\), we can find \(b_{0}\in[-M,M]\) such that \(\sigma_{\epsilon}^{(k)}(-b_{0})\neq 0\) for all \(k=\overline{0,d-1}\).
One way to justify this claim is the argument indicated by Pinkus in [6], which in turn refers to [4, Theorem of Agmon, page 53], with an easy adaptation.
Another way to see this is the following. Consider
\[D_{k}=\{b\in(-M,M):\sigma_{\epsilon}^{(k)}(-b)\neq 0\}\text{ for }k=0,1,\ldots,d-1.\]
In the first place, we notice that the \(D_{k}\) are open sets in \((-M,M)\) and that \(D_{d-1}\neq\emptyset\). By induction, we can assume that \(D_{k+1}\cap D_{k+2}\cap\cdots\cap D_{d-1}\neq\emptyset\); take \(b\) in this intersection and \(\delta\) sufficiently small such that \(B(b,\delta)\subset D_{k+1}\cap D_{k+2}\cap\cdots\cap D_{d-1}\). Now, there must be \(b^{\prime}\in B(b,\delta)\) such that \(\sigma_{\epsilon}^{(k)}(-b^{\prime})\neq 0\) (otherwise \(\sigma_{\epsilon}^{(k+1)}\) would vanish identically near \(-b\), contradicting \(b\in D_{k+1}\)). Therefore, \(b^{\prime}\in D_{k}\cap D_{k+1}\cap D_{k+2}\cap\cdots\cap D_{d-1}\), which shows by induction that \(D_{0}\cap D_{1}\cap\cdots\cap D_{d-1}\neq\emptyset\).
With the above claim at hand, without loss of generality we can take \(M\) such that \((-b_{0}-\epsilon,-b_{0}+\epsilon)\subset[-M,M]\). Let \(T_{d}:=\sqrt{d}(\sup_{x\in[-M,M]}|\sigma(x)|+1)\); notice that this \(T_{d}\) already satisfies the condition required for the choice of \(w\) above. We can find a small interval \([-\rho,\rho]\) such that \(at_{i}-b_{0}-\epsilon z\in[-M,M]\) for any \(a\in[-\rho,\rho]\) and \(z\in[-1,1]\), and from the definition of \(T_{d}\) we have
\[\sum_{i=1}^{d}\sigma^{2}(at_{i}-b_{0}-\epsilon z)<T_{d}^{2} \tag{12}\]
for any \(a\in[-\rho,\rho]\) and \(z\in[-1,1]\). Consequently,
\[\sum_{i=1}^{d}v_{i}\sigma_{\epsilon}(at_{i}-b_{0})=\int_{\mathbb{R}}\zeta(z)\sum_{i=1}^{d}v_{i}\sigma(at_{i}-b_{0}-\epsilon z)dz=0 \tag{13}\]
for any \(a\in[-\rho,\rho]\).
If we differentiate \(k\) times relation (13) with respect to \(a\), we get
\[\sum_{i=1}^{d}v_{i}t_{i}^{k}\sigma_{\epsilon}^{(k)}(at_{i}-b_{0})=0.\]
Taking \(a=0\) for each equation, we get a system of \(d\) equations
\[\sum_{i=1}^{d}v_{i}t_{i}^{k}=0, \tag{14}\]
for each \(k=\overline{0,d-1}\). Since the coefficient matrix of (14) is a Vandermonde matrix and the \(t_{i}\) are distinct, all \(v_{i}\) must be equal to \(0\). Hence \(\mathbb{E}[\tilde{A}(w)]\) is a symmetric positive definite matrix, and \(\phi\) has full rank with probability at least \(1-de^{-\frac{\gamma h\lambda_{min}(\mathbb{E}[\tilde{A}(w)])}{T_{d}^{2}}}\), where \(\gamma\) is a constant depending explicitly on \(\delta\). This probability is larger than \(1-\frac{1}{d^{200}}\) as long as
\[h\geq\frac{C_{\sigma}d\log(d)}{\lambda_{min}(\mathbb{E}[\tilde{A}(w)])}\geq\frac{C_{\sigma}d\log(d)}{\tilde{\lambda}(X)}.\]
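The conclusion of Theorem 5.4 is easy to observe empirically. The sketch below (NumPy; \(\tanh\) serves as a stand-in non-polynomial activation, and the widths are chosen for illustration rather than via the constant \(C_{\sigma}\)) checks the row rank of \(\sigma(XW^{T})\) for widths below and above \(d\).

```python
import numpy as np

rng = np.random.default_rng(5)
d, p = 30, 4                           # illustrative sizes
X = rng.normal(size=(d, p + 1))        # bias absorbed as in the text

for h in (d // 2, d, 4 * d):           # widths below and above d
    W = rng.normal(size=(h, p + 1))    # i.i.d. N(0, 1) initialization
    Phi = np.tanh(X @ W.T)             # feature matrix sigma(X W^T), shape (d, h)
    print(f"h = {h:3d}: rank = {np.linalg.matrix_rank(Phi)} (full row rank needs {d})")
```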
Following the same line of reasoning, we have a similar result for polynomial activation functions. More precisely, we have the following result.
**Theorem 5.5**.: _Let \((x_{i},y_{i})_{i=\overline{1,d}}\) be a data set and \(\sigma\) a polynomial function. Assume that the activation function \(\sigma\) and our data set satisfy Assumption 3.2. Consider a shallow neural network with \(h\) hidden nodes of the form \(f(v,W):=v^{T}\sigma(Wx)\) with \(W\in\mathcal{M}_{h\times(p+1)}(\mathbb{R})\) and \(v\in\mathbb{R}^{h}\). We initialize the entries of \(W\) with i.i.d. \(\mathcal{N}(0,1)\). Also, assume_
\[h\geq\frac{C_{\sigma}d\log(d)}{\tilde{\lambda}(X)}\]
_where \(C_{\sigma}\) is a constant that depends only on \(\sigma\). Then, the matrix \(\sigma(XW^{T})\) has full row rank with probability at least \(1-\frac{1}{d^{100}}\)._
Proof.: Following the same reasoning as in Theorem 5.4, we get to the equation
\[\sum_{i=1}^{d}v_{i}\sigma(w^{T}x_{i})=0, \tag{15}\]
for any \(w\in\mathbb{R}^{p+1}\) that satisfies \(\|\sigma(Xw)\|<T_{d}\). Let \(T_{d}:=\sigma(0)\sqrt{d}+1\). Using the same arguments as in Proposition 3.1, equation (15) is equivalent to
\[\sum_{i=1}^{d}v_{i}x_{i}^{\otimes k}=0, \tag{16}\]
for any \(k=\overline{0,m}\). From Assumption 3.2 we get that all \(v_{i}\) must be equal to \(0\). The rest follows as in Theorem 5.4. |
2306.04410 | Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP | The human brain constantly learns and rapidly adapts to new situations by
integrating acquired knowledge and experiences into memory. Developing this
capability in machine learning models is considered an important goal of AI
research since deep neural networks perform poorly when there is limited data
or when they need to adapt quickly to new unseen tasks. Meta-learning models
are proposed to facilitate quick learning in low-data regimes by employing
absorbed information from the past. Although some models have recently been
introduced that reached high-performance levels, they are not biologically
plausible. We have proposed a bio-plausible meta-learning model inspired by the
hippocampus and the prefrontal cortex using spiking neural networks with a
reward-based learning system. Our proposed model includes a memory designed to
prevent catastrophic forgetting, a phenomenon that occurs when meta-learning
models forget what they have learned as soon as the new task begins. Also, our
new model can easily be applied to spike-based neuromorphic devices and enables
fast learning in neuromorphic hardware. The final analysis will discuss the
implications and predictions of the model for solving few-shot classification
tasks. In solving these tasks, our model has demonstrated the ability to
compete with the existing state-of-the-art meta-learning techniques. | Arsham Gholamzadeh Khoee, Alireza Javaheri, Saeed Reza Kheradpisheh, Mohammad Ganjtabesh | 2023-06-07T13:08:46Z | http://arxiv.org/abs/2306.04410v1 | # Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP
###### Abstract
The human brain constantly learns and rapidly adapts to new situations by integrating acquired knowledge and experiences into memory. Developing this capability in machine learning models is considered an important goal of AI research since deep neural networks perform poorly when there is limited data or when they need to adapt quickly to new unseen tasks. Meta-learning models are proposed to facilitate quick learning in low-data regimes by employing absorbed information from the past. Although some models have recently been introduced that reached high-performance levels, they are not biologically plausible. We have proposed a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex using spiking neural networks with a reward-based learning system. Our proposed model includes a memory designed to prevent catastrophic forgetting, a phenomenon that occurs when meta-learning models forget what they have learned as soon as the new task begins. Also, our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware. The final analysis will discuss the implications and predictions of the model for solving few-shot classification tasks. In solving these tasks, our model has demonstrated the ability to compete with the existing state-of-the-art meta-learning techniques.
Meta-Learning, Few-Shot Learning, Learning to Learn, Spiking Neurons, STDP, Reward-Modulated STDP, PFC, Hippocampus.
## I Introduction
Today's machine learning and deep learning models excel at solving single tasks; however, they struggle when training data is insufficient or they have to adapt to changing tasks [1]. Meta-learning is a desirable solution to remedy this problem by leveraging previous experiences [2]. A meta-learning model should be able to generalize a learning strategy to different tasks derived from a common distribution [3]. In contrast, traditional machine learning models are limited in adapting to a single task by finding patterns that generalize across data points.
Recently, several meta-learning methods have been proposed and demonstrated to be effective. The main objective is to provide a model for efficient learning of new tasks to help us get closer to Artificial General Intelligence (AGI). The primary idea behind meta-learning is to employ the acquired knowledge in solving previous tasks to generalize the learning to new tasks. In general, meta-learning algorithms can be classified into three types: metric-based, optimization-based, and model-based.
We can point to Matching Networks [4], Prototypical Networks [5], and Relation Networks [6] as notable methods belonging to metric-based algorithms. They can allow unseen tasks to be efficiently learned while not experiencing catastrophic forgetting. However, they cannot be directly applied to other methods associated with reinforcement learning. Optimization-based algorithms are prevalent since they are model agnostic besides being task agnostic [7]. However, they are computationally inefficient since they are required to compute second-order gradients. MAML [7], Meta-SGD [8], and AVID [9] are practical methods in this category. We can refer to MANN [10] and SNAIL [11] as noteworthy examples of model-based techniques. Most of these models include memory modules such as Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) units, allowing them to store their experiences. Nevertheless, using these modules would increase the number of model parameters.
Despite being influenced by human brain behavior, most successful meta-learning approaches lack bio-plausibility. The human brain can quickly learn new skills by utilizing prior knowledge and experiences, which are mostly encoded in the temporal lobe of the human cortex [12]. The temporal lobe is the second largest lobe, responsible for processing auditory information, language comprehension, and the formation of long-term memories [13]. The hippocampus is one of the main regions in the medial temporal lobe, which plays a crucial role in episodic memory [14]. As a type of declarative memory, episodic memory refers to the human ability to recall specific events and experiences from the past.
Wang et al. [15] have simulated the Prefrontal Cortex (PFC)
behavior for meta-reinforcement learning using the assumption that the PFC also encodes the recent history of actions and rewards besides representing the expected values of actions, objects, and states. It has been conceived that PFC, along with the thalamic nuclei and basal ganglia, form a recurrent neural network [16, 17]. Through its inputs, this network is provided with information about the performed actions and received rewards. On the output side, actions and estimation of the state value are generated by the network [18]. By incorporating PFC with dopamine (DA), Wang et al. have created a model that includes two full-fledged reinforcement learning systems, one based on activity representations and the other based on synaptic learning to develop a unique meta-learning algorithm [15]. The model was further enhanced by Ritter et al. [19] with embedding the episodic recall mechanism to overcome catastrophic forgetting using differentiable-neural-dictionary (DND) [20, 21] as long-term memory.
Although Wang et al. [15] and Ritter et al. [19] have proposed models inspired by biological procedures, they use LSTM, making them to be less biologically plausible. However, using Spiking Neural Networks (SNNs) can provide a better solution to design biologically plausible models, as SNNs are more similar to the behavior of the human brain neural network [22]. In a sense, SNNs can be viewed as a particular case of RNNs with internal states similar to the LSTM [23]. Also, SNNs are more energy-efficient than conventional artificial neural networks and can be easily incorporated into neuromorphic electronic systems [24]. Neuromorphic devices aim to capture the brain's fundamental properties to enable low-power, versatile, and fast processing of information [25, 26, 27]. Stewart and Neftci [28] have utilized SNNs for meta-learning using surrogate gradient descent to take advantage of the MAML paradigm, which has achieved remarkable results. Subramoney et al. [29] have investigated how synaptic plasticity and network dynamics contribute to fast learning in SNNs by allowing synaptic weights to store prior information via backpropagation through time (BPTT) to mimic the brain's ability to learn from very few samples. Scherr et al. [30] have proposed a model for one-shot learning using SNNs by simulating the ventral tegmental area (VTA) using an additional model to produce some learning signals as dopamine that enables fast learning via local synaptic plasticity in their network. However, all these models rely on gradient computations and backpropagation algorithms, which undermines the bio-plausibility of the model.
In this paper, we have utilized SNNs to simulate memory and decision-making systems using Spike-Timing-Dependent Plasticity (STDP) [31, 32] and reward-modulated STDP (R-STDP) [33, 34] for synaptic plasticity. Using these methods, we propose a novel bio-plausible meta-learner based on the role of the hippocampus and the PFC to effectively solve new unseen tasks and prevent catastrophic forgetting by encoding past experiences into a memory layer. The required information for solving new tasks that share a similar structure with previously learned tasks can then be retrieved from this memory layer.
## II Materials and Methods
We consider a few-shot learning task \(\mathcal{T}\) to be sampled from an unknown task distribution \(p(\mathcal{T})\). Each task is characterized by a distinct dataset \(\mathcal{D}=\{(x_{i},y_{i})\,|\,x_{i}\in X,\,y_{i}\in Y\}_{i=1}^{n}\), consisting of \(n\) independent and identically distributed input-output pairs \((x_{i},y_{i})\). The corresponding dataset of task \(\mathcal{T}\) is divided into two sets: a support set \(\mathcal{D}^{\mathcal{S}}\) and a query set \(\mathcal{D}^{\mathcal{Q}}\), such that \(\mathcal{D}=\mathcal{D}^{\mathcal{S}}\cup\mathcal{D}^{\mathcal{Q}}\).
Here, we present a model to solve unseen tasks sampled from \(p(\mathcal{T})\). The proposed model relies on the interactions between the hippocampus, PFC, and VTA, as shown in Figure 1. Together, they create a memory system where the hippocampus memorizes long-term events for the future while the PFC receives signals from the hippocampus and other regions to make decisions. The PFC also includes short-term memory, or working memory, which can store recent events and accelerate decision-making [35].
Also, the ventral tegmental area (VTA) plays a crucial role in the brain's reward system. It contains dopaminergic neurons that release dopamine, which interacts with the prefrontal cortex (PFC) and hippocampus [35].
This work provides a new perspective on reward-based learning computations by considering bio-plausibility and simplicity as fundamental principles. With this in mind, we have utilized SNNs in order to enhance computational power in designing a highly dynamic meta-learner. Figure 2 depicts the overall architecture of the model, composed of three major components: the convolutional layer, the memory layer, and the decision layer. In this model, input data arrives at the convolutional layer, where it is processed and its features are extracted. The resulting features are then encoded in the memory layer, which serves as an episodic memory system for the model. Finally, the decision layer classifies the input data using the retrieved information from the memory layer, which provides the necessary context for accurate classification.

Fig. 1: The interactions between PFC, hippocampus, and VTA in the brain.
Meta-learning includes two phases: the meta-training phase and the meta-testing phase. The meta-training phase involves performing both the memory and the decision adaptation stages. Accordingly, the memory layer is updated with the support set, whereas the decision layer is updated with the query set. During the meta-testing phase, the model is given a support set of few data points for each unseen task in order to update the memory layer through the memory adaptation stage. The updated memory layer is then utilized in conjunction with the decision layer to predict the labels of the corresponding query set.
### _Convolutional Layer_
Humans can recognize surrounding objects easily [36] despite changes in object position, size, pose, illumination conditions, and background context [37, 38]. Apparently, this invariant recognition is handled through a hierarchical process in the ventral pathway. It begins with the V1 area, which extracts simple features like bars and edges [39]. It continues through intermediate areas such as V2 and V4, which are responsive to more complex features [40], and ends in the inferior temporal cortex (IT), where neurons selectively respond to objects or parts of objects [41].
In the same way as the ventral path, the convolutional layer extracts the features of the input image and feeds them to the next layer, i.e., the memory layer. The convolutional layer is composed of leaky integrate-and-fire (LIF) neurons, which receive Poisson-encoded images as inputs.
The LIF neuron model is a mathematical model used to describe the behavior of biological neurons, where the membrane potential depends on incoming excitatory and inhibitory inputs and passive ion leakage across the membrane. When the membrane potential reaches a certain threshold \(u_{\theta}\), the neuron fires an action potential or spike, and the membrane potential is then reset to a lower value. The LIF model assumes that the membrane potential can be described by a single variable, \(u(t)\), which represents the voltage across the neuron's membrane at time \(t\). The equation governing the dynamics of \(u(t)\) is as follows [42]:
\[\tau\frac{du}{dt}=-(u-u_{rest})+R\,I(t), \tag{1}\]
where \(u_{rest}\) denotes the voltage across the neuron's membrane in the rest mode, \(R\) is the resistance of the neuron's membrane, and \(I(t)\) represents its input current at time \(t\). Also, \(\tau\) is a time constant that corresponds to the leakage rate.
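For illustration, a minimal Euler discretization of Eq. (1) can be written as follows (the time step, the time constants, and the input current are our illustrative choices, not the values used in the model).

```python
import numpy as np

# Euler integration of the LIF dynamics of Eq. (1); the constants and the
# input current below are illustrative, not the values used in the paper.
def simulate_lif(I, dt=1.0, tau=20.0, R=1.0, u_rest=-70.0, u_theta=-50.0):
    u, spikes = u_rest, []
    for t, I_t in enumerate(I):
        u += dt / tau * (-(u - u_rest) + R * I_t)   # leaky integration
        if u >= u_theta:                            # threshold crossing
            spikes.append(t)
            u = u_rest                              # reset after the spike
    return spikes

print("spike times:", simulate_lif(np.full(100, 25.0)))
```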
The training process of the convolutional layer is unsupervised and is performed through a particular form of synaptic plasticity known as spike-timing-dependent plasticity (STDP), which has also been observed to occur in the human visual cortex [43]. In general, STDP potentiates the afferent connections involved in making a neuron fire while depressing the others [44, 45].
We have utilized the standard STDP, where the weights are updated as follows:
\[\Delta w_{ij}=\left\{\begin{aligned} & A_{+}\exp\frac{-\Delta t}{\tau_{+}}, \ \ if\ \ \Delta t\geq 0,\\ &-A_{-}\exp\frac{\Delta t}{\tau_{-}},\ \ if\ \ \Delta t<0,\end{aligned}\right. \tag{2}\]
where \(i\) and \(j\) respectively refer to the index of post- and pre-synaptic neurons, \(\Delta t=t_{i}-t_{j}\), in which \(t_{i}\) and \(t_{j}\) are the corresponding spike times, and \(\Delta w_{ij}\) is the synaptic weight change. Also, positive constants \(A_{+}\) and \(A_{-}\) scale the strength of potentiation and depression of weight changes, respectively, and \(\tau_{+}\) and \(\tau_{-}\) are positive time constants defining the width of the positive and negative learning window.
In addition, we have utilized soft bounding by multiplying \(\Delta w_{ij}\) by the term \(w_{ij}(1-w_{ij})\) in order to keep the weights within the range \([0,1]\) and to stabilize the weight changes as they converge [46].
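A minimal sketch of the resulting update for a single pre/post spike pair is given below (only the rule of Eq. (2) with the soft bound is taken from the text; the amplitude and time-constant values are illustrative assumptions).

```python
import numpy as np

# One application of the STDP rule of Eq. (2), with the soft bound w(1 - w);
# the amplitudes and time constants are illustrative assumptions.
def stdp_update(w, t_post, t_pre, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    dt = t_post - t_pre
    if dt >= 0:                          # pre before post: potentiation
        dw = A_plus * np.exp(-dt / tau_plus)
    else:                                # post before pre: depression
        dw = -A_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + w * (1 - w) * dw, 0.0, 1.0))

print(stdp_update(0.5, t_post=15.0, t_pre=10.0))   # causal pair -> weight increases
```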
### _Memory Layer_
Typically, human decisions are based on memories of previous events. It seems that we learn by storing summaries of individual episodes for long periods and then retrieving them when similar situations arise. Past experiences are recalled through episodic memory [14].
As part of meta-learning, the goal is to encode the information of past decisions into memories, which can be used to make future decisions more effectively. Specifically, the memory layer aims to mimic the hippocampus role and equip the model with an episodic memory system.
Memories are made by changes in collections of neurons and the synaptic connections between them [47]. A memory may be encoded in one group of neural circuits and can be recalled in another. Every time a memory is recalled, it may change depending on the active neural circuits at that time [48]. When a memory is constantly recalled, its active connections become stronger.
Consequently, this layer encodes the information received from the convolutional layer within a portion of LIF neurons with stochastic thresholds, in which spikes of neuron \(i\) are generated stochastically with stochastic intensity \(\rho_{i}=g(u_{i})\)
where \(g(u_{i})\) is an exponential function of the membrane potential [49]:
\[\rho_{i}=g(u_{i})=\rho_{\theta}exp\left(\frac{u_{i}-u_{\theta}}{\Delta u}\right), \tag{3}\]
where \(\rho_{\theta}\) indicates the stochastic intensity at threshold, \(u_{\theta}\) is the default firing threshold and \(\Delta u\) defines the width of threshold region.
In the memory layer, LIF neurons with stochastic thresholds form an episodic memory system that can recall prior information to solve unseen tasks more effectively. The stochastic threshold makes the model robust to intrinsic and synaptic noise generated by pre-synaptic neurons, which improves generalization.
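The escape-rate mechanism of Eq. (3) can be turned into a per-time-step spike probability \(p=1-e^{-\rho\,dt}\), as in the sketch below (the Bernoulli-per-step conversion is a standard discretization and an assumption of ours; the constants \(\rho_{\theta}=1/ms\), \(u_{\theta}=-50mV\), and \(\Delta u=5mV\) match Section III-B).

```python
import numpy as np

rng = np.random.default_rng(6)

# Escape rate of Eq. (3) converted into a per-step spike probability.
def spike_prob(u, dt=1.0, rho_theta=1.0, u_theta=-50.0, delta_u=5.0):
    rho = rho_theta * np.exp((u - u_theta) / delta_u)
    return 1.0 - np.exp(-rho * dt)

for u in (-60.0, -55.0, -50.0, -48.0):
    p = spike_prob(u)
    print(f"u = {u} mV: P(spike) = {p:.3f}, sampled spike = {rng.random() < p}")
```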
The weights of the memory layer are updated during the memory adaptation stage, as indicated in Figure 2. During this stage, the reward-modulated STDP (R-STDP) is used to ensure that large amounts of information can be encoded efficiently. Using R-STDP, each sample is forced to be encoded with a small subset of neurons. As a result, when previous experiences are recalled frequently, these neurons become more selective towards similar features extracted from the convolutional layer. This process enables the memory layer to adaptively learn and store relevant information.
R-STDP is a type of synaptic plasticity that is thought to underlie learning and memory in the brain [33]. More precisely, it is a process by which the strength of synapses between neurons can be modified based on the relative timing of their activities, as well as reward signals provided by neuromodulators such as dopamine (DA) [50]. The reward signal typically comes from dopaminergic neurons in the midbrain, which release dopamine in response to rewarding or aversive stimuli [34]. R-STDP is a form of reinforcement learning that allows for a more nuanced and context-specific form of synaptic plasticity that takes into account the rewards and punishments associated with different patterns of neural activity [46]. This process can refine and optimize neural circuits over time in response to environmental feedback, leading to enhance learning and adaptation. The R-STDP learning rule can be formulated by the following equations:
\[\begin{split}\frac{dc}{dt}&=-\frac{c}{\tau_{c}}+ STDP(\Delta t)\delta(t-t_{pre/post}),\\ \frac{dw}{dt}&=cd,\\ \frac{dd}{dt}&=-\frac{d}{\tau_{d}}+DA(t),\end{split} \tag{4}\]
where \(c\) represents the eligibility traces that act as synaptic tags, while \(\Delta t=t_{post}-t_{pre}\) measures the time difference between post- and pre-synaptic activities. The variable \(d\) describes the concentration of extracellular dopamine, and \(\delta(t)\) is the Dirac delta function. Also, \(\tau_{c}\) and \(\tau_{d}\) are time constants of eligibility traces and DA uptake, respectively. Lastly, the function \(DA(t)\) models the source of dopamine resulting from the activity of dopaminergic neurons in the midbrain.
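A minimal Euler integration of this three-variable system is sketched below (the spike-pairing event, the dopamine pulse timing, and all constants are illustrative assumptions; only the structure of Eq. (4) is taken from the text).

```python
# Euler integration of the R-STDP system of Eq. (4): a pre/post pairing at
# t = 50 tags the synapse via the eligibility trace c, and a delayed dopamine
# pulse at t = 150 converts the tag into a weight change. All constants and
# event times are illustrative assumptions.
dt, tau_c, tau_d = 1.0, 200.0, 20.0
w, c, d_conc = 0.5, 0.0, 0.0

for t in range(300):
    pairing = 0.01 if t == 50 else 0.0    # STDP(dt) * delta(t - t_pre/post)
    da = 0.5 if t == 150 else 0.0         # DA(t): a dopamine release event
    c += dt * (-c / tau_c) + pairing
    d_conc += dt * (-d_conc / tau_d) + da
    w += dt * c * d_conc                  # dw/dt = c * d

print(f"final weight: {w:.4f}")           # > 0.5: the reward reinforced the tagged synapse
```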
In the memory adaptation stage, we aim to encode the extracted information of each sample in a portion of the neurons of this layer using the R-STDP learning rule to maintain memory over time. This allows the network to adapt itself to new situations while retaining existing knowledge, which improves its generalization capabilities and prevents catastrophic forgetting. Adjusting the reward/punishment intervals allows only a certain percentage of neurons to fire; therefore, learning rarely occurs in synaptic weights. The network becomes more flexible and efficient when the reward/punishment intervals are chosen appropriately.

Fig. 2: The proposed model architecture overview.

The intervals, along with the corresponding reward/punishment values, are determined using the following piecewise function:
\[r(n_{s})=\left\{\begin{array}{ll}-2,&if~{}~{}n_{s}<c-4s,\\ -1,&if~{}~{}c-4s\leq n_{s}<c-2s,\\ +1,&if~{}~{}c-2s\leq n_{s}<c-s,\\ +2,&if~{}~{}c-s\leq n_{s}\leq c+s,\\ +1,&if~{}~{}c+s<n_{s}\leq c+2s,\\ -1,&if~{}~{}c+2s<n_{s}\leq c+4s,\\ -2,&if~{}~{}c+4s<n_{s},\end{array}\right. \tag{5}\]
where \(n_{s}\) is the percentage of activated neurons in this layer. Also, the sparsity level is specified by \(c\pm s\), where \(c\) is a constant that represents the average percentage of neurons that can fire, and \(s\) is a spread percentage used to enhance the flexibility of the memory layer.
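Eq. (5) transcribes directly into code; the sketch below uses \(c=15\) and \(s=3\), the percentages reported later in Section III-B.

```python
# Direct transcription of Eq. (5); c = 15 and s = 3 are the percentages
# reported in Section III-B.
def sparsity_reward(n_s, c=15.0, s=3.0):
    if n_s < c - 4 * s:
        return -2
    if n_s < c - 2 * s:
        return -1
    if n_s < c - s:
        return +1
    if n_s <= c + s:
        return +2
    if n_s <= c + 2 * s:
        return +1
    if n_s <= c + 4 * s:
        return -1
    return -2

for n_s in (1, 5, 10, 15, 20, 25, 30):
    print(f"{n_s:2d}% active -> reward {sparsity_reward(n_s):+d}")
```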
### _Decision Layer_
The prefrontal cortex (PFC) plays a crucial role in the brain, with decision-making being one of its most important functions [51, 52]. It receives input from multiple regions of the brain to appropriately process information as well as to make decisions, where the history of choices and rewards are dynamically encoded in PFC [53, 54, 55, 56, 57]. A reinforcement learning mechanism is evident in the neural activity of PFC [58].
The principal objective of this layer in our model is to simulate the behavior of the PFC for decision-making using the reward-modulated STDP, introduced in Section II-B as a reinforcement learning rule. This layer receives recalled information from its preceding layer, i.e., the memory layer, and processes it for decision-making. This layer consists of several groups of LIF neurons that are categorized based on the task. Each group of neurons includes \(M\) neurons that can represent the input's class. Typically, for solving an \(N\)-way \(K\)-shot classification problem, this layer would have \(N\) groups of neurons, each containing \(M\) neurons, resulting in a total of \(N\times M\) neurons. Each class could be represented by \(M\) different neurons in a group, enhancing the generalization and versatility of the model. Finally, the input's class can be determined by the group of neurons that fire most frequently.
During the decision adaptation stage, this layer is trained by rewarding accurate predictions and punishing inaccurate ones. To update the weights more effectively, we have implemented adaptive R-STDP in this layer, in which the reward signal value is updated after solving each task to maintain a balance between rewards and punishments in the network. Initially, the reward and punishment values are \(\nicefrac{{1}}{{2}}\) and \(\nicefrac{{-1}}{{2}}\), respectively. These values are updated in each task containing \(N\) samples by \(\nicefrac{{N_{incorrect}}}{{N}}\) and \(\nicefrac{{N_{correct}}}{{N}}\), respectively. Here, \(N_{correct}\) and \(N_{incorrect}\) represent the number of samples that are classified correctly and incorrectly over all samples of a task. Additionally, lateral inhibition mechanisms are utilized between different groups of neurons to make them compete. As a result, when the activity of one group of neurons is increased, the activities of other groups are suppressed, allowing the model to converge faster. Algorithm 1 provides an outline of the complete algorithm in its general form.
```
1:  Task distribution \(p(\mathcal{T})\) and corresponding dataset \(\mathcal{D}\)
2:  for samples in \(\mathcal{D}\) do
3:      Compute \(w_{c}\) for convolutional layer by Eq (2)
4:  endfor
5:  Randomly initialize \(w_{m}\) and \(w_{d}\) for memory and decision layers, respectively
6:  for each epoch do
7:      Sample batch of tasks \(\mathcal{T}_{i}=(\mathcal{D}_{i}^{\mathcal{S}},\mathcal{D}_{i}^{\mathcal{Q}})\) from \(p(\mathcal{T})\)
8:      for samples in \(\mathcal{D}_{i}^{\mathcal{S}}\) do
9:          Compute the percentage of activated neurons \((n_{s})\)
10:         Determine the reward/punishment corresponding to \(n_{s}\), as described in Section II-B
11:         Update \(w_{m}\) by Eq (4)
12:     endfor
13:     for samples in \(\mathcal{D}_{i}^{\mathcal{Q}}\) do
14:         Determine the reward/punishment based on the decision layer's prediction, as described in Section II-C
15:         Update \(w_{d}\) by Eq (4)
16:     endfor
17: endfor
```
**Algorithm 1** Learning procedure for few-shot learning
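As a complement to Algorithm 1, the following sketch gives one possible reading of the adaptive R-STDP reward balancing described in Section II-C (the paper specifies the update quantities \(N_{incorrect}/N\) and \(N_{correct}/N\) but not the exact assignment; treating them as the new reward and punishment magnitudes is our assumption).

```python
# Our reading of the adaptive reward balancing: after a task with N query
# samples, the reward magnitude is set from the error rate and the punishment
# magnitude from the accuracy, so the more frequent signal is scaled down.
def update_reward_punishment(n_correct, n_incorrect):
    n = n_correct + n_incorrect
    reward = n_incorrect / n         # rare mistakes -> weaker rewards
    punishment = -n_correct / n      # frequent successes -> stronger punishments
    return reward, punishment

print(update_reward_punishment(n_correct=4, n_incorrect=1))   # (0.2, -0.8)
```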
## III Experiments
The performance of the proposed meta-learner is evaluated by testing its ability to solve various few-shot classification tasks. The primary purpose of this study was to determine whether this simple model could achieve competitive results on few-shot learning benchmarks and whether it is capable of learning to learn.
### _The Few-shot Learning Setting_
Meta-learning comprises two phases for training and evaluating a few-shot learning model: meta-training and meta-testing.
During the meta-training phase, we sample a batch of tasks \(\mathcal{T}_{i}\) represented by \(\mathcal{D}_{i}\), and the meta-learner is trained on the support set \(\mathcal{D}_{i}^{\mathcal{S}}\) in the memory adaptation stage (task-level),
followed by the decision adaptation stage (meta-level) on a query set \(\mathcal{D}_{i}^{\mathcal{Q}}\) for each task.
In the meta-testing phase, the trained meta-learner is evaluated on a set of held-out unseen tasks \(\mathcal{T}_{u}\sim p(\mathcal{T})\) that were not used for training. The model is given a support set \(\mathcal{D}_{u}^{\mathcal{S}}\) corresponding to the new task \(\mathcal{T}_{u}\) for the memory adaptation stage and then used to predict labels of the corresponding query set \(\mathcal{D}_{u}^{\mathcal{Q}}\).
### _Model Configuration_
To set up the model for experiments, we first pre-trained the convolutional layer discussed in Section II-A using random samples in the given dataset, where each sample is exposed to this network for \(50ms\) through Poisson encoding. This layer contains \(30\) convolutional filters with a kernel size of \(8\times 8\) and a stride size of \(2\) for downsampling. To enhance the edge perception of the model, we also employed the lateral inhibition mechanism in this layer, where each neuron inhibits the activity of neighboring neurons of an individual filter using lateral connections.
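For reference, Poisson rate coding of an input image over the \(50ms\) window can be sketched as follows (the maximum firing rate and the per-step Bernoulli approximation are illustrative assumptions of ours).

```python
import numpy as np

rng = np.random.default_rng(7)

# Poisson rate coding of a grayscale image over a 50 ms window: each pixel
# intensity in [0, 1] sets a firing rate, sampled independently per time step.
def poisson_encode(image, duration_ms=50, dt_ms=1.0, max_rate_hz=200.0):
    steps = int(duration_ms / dt_ms)
    p = image * max_rate_hz * dt_ms / 1000.0           # per-step spike probability
    return rng.random((steps,) + image.shape) < p      # boolean spike tensor

spikes = poisson_encode(rng.random((28, 28)))
print(spikes.shape, spikes.mean())                     # (50, 28, 28), sparse activity
```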
In the memory layer, we used \(N=100\) neurons with \(u_{rest}=-70mV\), \(\rho_{\theta}=1/ms\), \(u_{\theta}=-50mV\), and \(\Delta u=5mV\) in Equation (3). Furthermore, we have included reward/punishment intervals from Section II-B to regulate the sparsity of neuronal activity in the memory layer to around \(15\%\) by using \(c=15\%\) and \(s=3\%\) in Equation (5) to boost the model's generalization.
To make the decision layer scalable, we consider \(M=10\) neurons for each group representing a class in the decision layer, as discussed in Section II-C.
### _Few-Shot Classification Tasks_
The \(N\)-way \(K\)-shot classification problem involves \(N\) different classes, each containing \(K\) samples. A powerful meta-learner should be capable of recognizing inputs by comparing them rather than memorizing a definite mapping between those inputs and the desired classes. We scale up our approach to few-shot classification tasks using the Omniglot and Double MNIST datasets.
The Omniglot dataset is a well-known benchmark for few-shot learning introduced by Lake et al. [59]. This dataset includes handwritten characters from \(50\) different languages, representing 1623 different classes, with \(20\) samples in each class as shown in Figure 3. According to the baseline models, 1200 characters are randomly selected for training, and the remaining ones are used for testing. We also applied data augmentation by rotating each instance of a class by a multiple of 90 degrees, following the approach suggested by Santoro et al. [10].
The Double MNIST dataset consists of 100 distinct classes of two-digit numbers, each containing 1000 unique handwritten samples as depicted in Figure 4. For training purposes, 80 classes are randomly chosen, while the remaining 20 classes are preserved for testing.
In order to evaluate the model on the \(N\)-way \(K\)-shot classification problem, we feed \(N\times K\) samples to the meta-learner in a random order to perform the memory adaptation stage explained in Section II-B. We then present a new, unlabeled sample from one of the \(N\) classes and report the average accuracy on this last \((N\times K+1)\)-th pass based on the decision layer discussed in Section II-C in a similar fashion to SNAIL [11].
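A minimal episode sampler implementing this protocol is sketched below (the data structure and all names are ours; the paper does not prescribe an implementation).

```python
import random
from collections import defaultdict

# N-way K-shot episode sampling: N*K labelled support samples in random order
# plus one held-out query from one of the N classes. `dataset` is assumed to
# be a list of (sample, class_label) pairs with at least K + 1 samples per class.
def sample_episode(dataset, n_way=5, k_shot=1):
    by_class = defaultdict(list)
    for sample, label in dataset:
        by_class[label].append(sample)
    classes = random.sample(sorted(by_class), n_way)
    support, held_out = [], {}
    for c in classes:
        picks = random.sample(by_class[c], k_shot + 1)
        support += [(x, c) for x in picks[:k_shot]]     # K labelled shots per class
        held_out[c] = picks[k_shot]                     # one spare sample per class
    random.shuffle(support)                             # random presentation order
    query_class = random.choice(classes)
    return support, held_out[query_class], query_class  # query: the (N*K + 1)-th input
```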
Using these standard datasets, we were able to demonstrate the efficiency and generalization capabilities of our proposed model.
### _Results_
We present an analysis of the results obtained from solving few-shot classification tasks. Firstly, we examine the performance of our proposed model in solving 5-way 5-shot and 5-way 1-shot classification tasks on the Omniglot dataset, as shown in Table I.
Our proposed model outperformed other models such as Siamese Networks [60], Matching Networks [4], Prototypical Networks [5], and Meta Networks [61] in solving 5-way 5-shot tasks on Omniglot. It also performed similarly to the state-of-the-art meta-learning models, namely MAML [62] and SNAIL [11]. Moreover, our proposed model achieved superior results in solving 5-way 1-shot tasks on the same dataset compared to the other competitors.

Fig. 3: Illustration of a few of the alphabets provided in Omniglot.

Fig. 4: Illustration of a few of the two-digit numbers provided in Double MNIST.
By comparing the results of our proposed model with those of comparable simple and generic methods such as MANN [10], it can be concluded that our proposed model is highly effective in solving problems with a small number of samples.
MANN [10] is a regular method for meta-learning that utilizes Neural Turing Machine (NTM) [63] as the embedded external memory. This model stores the information of each task in its memory so that when it encounters a new one, it recovers the information needed to solve that task by measuring its cosine similarity with the information stored in the memory. The disadvantage of this method is that it requires more memory than our model since our proposed model uses different combinations of neurons of the episodic memory layer to simulate memory. Moreover, the episodic memory layer in our model is more flexible than the embedded memory in MANN [10]. While MANN retrieves information using cosine similarity, which returns a row of information related to a task that has the highest degree of similarity, some tasks in practice are composed of a combination of several tasks and cannot be solved by retrieving information from one previously solved task. Our proposed model overcomes this limitation by providing an episodic memory system that resembles the human brain.
In another experiment, we evaluated the performance of our model in solving \(5\)-way \(1\)-shot tasks on the Double MNIST dataset, with results presented in Table II. Our primary objective was to compare our model's performance with a previously introduced meta-learning model, which is essentially the same as MAML and its first-order variant (FOMAML) but uses spiking neurons as its building blocks with surrogate gradient descent as the learning rule [28]. Despite the importance of their work embedding SNNs into MAML, it was expected to perform the same as or weaker than the original MAML model, as confirmed by the results. However, our proposed model is entirely different in terms of network design and learning rules, and we attempted to provide a new meta-learning approach that is consistent with biological observations.
The results show that our model can compete with other meta-learning models, despite being based on biologically plausible learning rules that are less computationally complex than the surrogate gradient descent [24]. It should be noted that previous models were all based on gradient computations, which are computationally expensive and not biologically plausible. Our model takes advantage of the potential of SNNs in meta-learning, which is difficult to implement using conventional artificial neural networks.
We further analyze the effect of the sparsity level in the memory layer to understand its impact when we encode the information of each sample in either a smaller or larger portion of neurons by considering the appropriate reward/punishment intervals. Table III presents the results corresponding to different sparsity levels in solving 5-way 1-shot tasks using the Omniglot and Double MNIST datasets. It shows that encoding information in too few neurons may not yield the best performance as it lacks sufficient information for decision-making. Conversely, encoding information in a large portion of neurons can also decrease the model's performance as it cannot efficiently preserve previously learned information. By comparing the results of different sparsity levels for Omniglot (5-way 1-shot) and Double MNIST (5-way 1-shot), we can observe that decreasing the sparsity level of the memory layer reduces the performance of the model in both tasks. However, this effect is more pronounced for Omniglot, which contains more classes than Double MNIST. Due to this, it is more susceptible to catastrophic forgetting when compared to Double MNIST. Therefore, selecting an appropriate sparsity level for the memory layer is crucial to achieve high performance and to maintain the knowledge learned from previous tasks to prevent catastrophic forgetting.
## IV Discussion
Meta-learning algorithms have recently attracted the attention of many researchers due to their ability to learn new tasks with just a few samples. Despite the remarkable performance of existing meta-learning models, designing a lightweight and efficient bio-plausible meta-learning model is a critical challenge. Bio-plausible neural networks must have three essential characteristics [64]: (i) the ability to integrate temporal input and generate spikes, (ii) the use of spike-based computation for both training and inference, (iii) and the ability to use learning rules based on findings from biological experiments. As mentioned in Section I, some biologically inspired models have been proposed. Wang et al. [15] have discovered the role of PFC as a meta-reinforcement learning system. Later on, Ritter et al. [19] improved the prior work and utilized the differentiable neural dictionary (DND) to store some information to solve new tasks. Although both models are the basis for our work, they use LSTMs, DND, gradient descent, and backpropagation, hindering their bio-plausibility. Stewart and Neftci have embedded SNNs into MAML, which is highly advantageous. Still, they use surrogate gradient and backpropagation, restricting their algorithm's bio-plausibility. Furthermore, Subramoney et al. [29], and Scherr et al. [30] have proposed a novel approach to enable quick learning in SNNs; however, their models do not meet bio-plausibility requirements. Here, we used LIF neurons along with their stochastic threshold variation incorporating a combination of STDP and R-STDP learning rules. Accordingly, the proposed model is highly bio-plausible as it meets all the aforementioned requirements.
On the other hand, the efficiency and hardware friendliness of SNNs make them an appropriate choice for neuromorphic hardware deployment. Neuromorphic hardware is particularly well suited for online learning at the edge. In spite of this, such systems face several challenges, such as learning from scratch with data-hungry models due to robustness and time-to-convergence issues. Here, we have presented a lightweight and simple SNN model with a pre-trained convolutional layer that can alleviate these issues. According to the results, our model can learn new tasks in a few-shot setting. As a result, this enables learning in low-data regimes, where accessing sufficient labeled data is demanding. In addition, our proposed model features a highly efficient episodic memory that significantly mitigates the issue of catastrophic forgetting. This is achieved through the memory's ability to store a vast amount of information and to link similar underlying patterns received from the convolutional layer. Consequently, it can effectively recall, at least partially, previously stored data when presented with new, similar data. As a result of this functionality, the proposed model exhibits a high degree of generalization, enabling it to adapt to novel tasks with few examples.
We conducted an analysis to assess the efficacy of the memory layer, which involved the memory representation of a subset of samples from the Omniglot dataset, as depicted in Figure 5. The memory representation of each sample is a binary vector that indicates which neurons were activated for that sample. To evaluate the correlation between pairs of samples, we calculated their Pearson correlation coefficient and generated a corresponding heatmap, which is also presented in Figure 5. Higher values on the heatmap indicate a stronger correlation between the memory representations of the respective samples. Our results demonstrate that the sample with index 0 shares similar features with sample indices 1 and 2, resulting in a high correlation with sample index 1 due to their high degree of similarity, as well as a correlation with sample index 2 given the presence of some shared patterns. In contrast, sample index 1 has a low correlation with sample index 2 since they do not share any analogous patterns. These findings suggest that the memory layer functions appropriately by partially selecting neurons for underlying similarities.
## V Conclusion and Future Work
We presented a bio-plausible meta-learner with spiking neural networks (SNNs), motivated by the need for a meta-learner capable of mimicking the human brain's behavior when encountering a new, unseen task. We have implemented the episodic memory and decision-making system by studying the roles of the Hippocampus and the Prefrontal Cortex (PFC), enabling quick learning by incorporating past experiences. Our proposed model can be considered an adaptive lifelong learning algorithm because it can attend to experiences over a lifetime and overcome catastrophic forgetting. As a result, our model can learn faster and generalize better due to its lifelong memory. Furthermore,
it is computationally very efficient as it relies on spike-timing-dependent plasticity (STDP) that empowers the memory layer to decide what experiences are worth remembering.
Our work represents an initial step toward developing a bio-plausible meta-learning model using SNNs, and future works could build on this foundation to address more complex tasks. Moreover, by scaling these ideas, we could facilitate the implementation of meta-learning models on spike-based neuromorphic devices, enabling fast learning on neuromorphic hardware.
# Financial Hedging and Risk Compression, A journey from linear regression to neural network

Ali Shirazi, Fereshteh Sadeghi Naieni Fard

[arXiv:2305.04801v1](http://arxiv.org/abs/2305.04801v1)
###### Abstract
Finding the hedge ratios for a portfolio and risk compression is the same mathematical problem. Traditionally, regression is used for this purpose. However, regression has its own limitations. For example, in a regression model, we can't use highly correlated independent variables due to multicollinearity issue and instability in the results. A regression model cannot also consider the cost of hedging in the hedge ratios estimation. We have introduced several methods that address the linear regression limitation while achieving better performance. These models, in general, fall into two categories: Regularization Techniques and Common Factor Analyses. In regularization techniques, we minimize the variance of hedged portfolio profit and loss (PnL) and the hedge ratio sizes, which helps reduce the cost of hedging. The regularization techniques methods could also consider the cost of hedging as a function of the cost of funding, market condition, and liquidity. In common factor analyses, we first map variables into common factors and then find the hedge ratios so that the hedged portfolio doesn't have any exposure to the factors. We can use linear or nonlinear factors construction. We are introducing a modified beta variational autoencoder that constructs common factors nonlinearly to compute hedges. Finally, we introduce a comparison method and generate numerical results for an example.
_Keywords_: _financial hedging; cost of hedging; neural network; regularization; common factors_
## 1 Introduction
Market risk management is a very important part of any financial business, on both the buy and sell sides. Assume we have a portfolio of one or more securities. We are interested in hedging the portfolio against market risk by adding long or short positions in additional securities to minimize the portfolio Profit and Loss (PnL) given a market movement. In addition, we could monitor our risk exposure by measuring the portfolio's sensitivity to movements in risk factors. As we will show later, both of these problems are related and mathematically the same. The challenge is to find the proper hedging instruments and compute the hedge ratios. Traditionally, linear regressions are used to compute the hedge ratios or for risk compression. However, there are limitations to using regression, especially when there are multiple hedge instruments that are highly correlated, or when we need to select a subset of hedge instruments from a number of candidates, considering not only risk minimization but also hedge-ratio stability and cost.
## 2 Hedge Ratio vs. Risk Compression
Assume we have a portfolio of multiple securities that we need to manage against market risk. First, assume we want to add additional "n" securities as hedging instruments to our portfolio. For any market movement, we can break down the PnL of the whole portfolio into two buckets:
\[PnL_{total}(t)=PnL_{unhedged}(t)-PnL_{hedging}(t) \tag{1}\]
If we have done a good job selecting the right instruments and the right amount of each hedging instrument, then \(PnL_{total}\) should be zero for any given market movement that generates \(PnL_{unhedged}\) and \(PnL_{Hedging}\). In reality, \(PnL_{total}\) cannot be exactly zero, but rather a small value, either positive or negative. In other words, we are trying to minimize the variance of \(PnL_{total}\). If we denote \(PnL_{total}(t)\) by \(\epsilon(t)\) and \(PnL_{unhedged}(t)\) by \(Y(t)\), then we have:
\[PnL_{total}(t)=PnL_{unhedged}(t)-\sum_{i=1}^{n}\beta_{i}H_{i}(t) \tag{2}\]
\[Y(t)=\sum_{i=1}^{n}\beta_{i}H_{i}(t)+\epsilon(t) \tag{3}\]
In this equation, \(H_{i}(t)\) represents the movement in hedge instrument "i", while \(\beta_{i}\) is the corresponding hedge ratio. Mathematically, we are trying to find the \(\beta_{i}\) values, i.e. hedge ratios, so that the hedged portfolio PnL, \(PnL_{total}(t)\) or \(\epsilon(t)\), has the lowest variance. This is a regression fitting problem.
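As a minimal illustration (with hypothetical variable names), the hedge ratios of equation 3 can be estimated by ordinary least squares, where `y` holds the unhedged portfolio PnL series and the columns of `H` hold the hedge instruments' movement series:

```python
import numpy as np

def hedge_ratios_ols(y: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Hedge ratios beta minimizing Var(y - H @ beta); no intercept by design."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return beta
```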
In risk compression, we are interested in monitoring our portfolio risk. For example, assume our portfolio has exposure to "m" risk factors, i.e. \(RF_{i}\) for \(i=1,2,...,m\). We can compute the sensitivity of our portfolio PnL to those risk factors:
\[S_{i}(t)=\frac{\partial PnL(t)}{\partial RF_{i}} \tag{4}\]
The problem with computing the sensitivity as the partial derivative of PnL with respect to each risk factor is that it does not consider the correlations between risk factors. In addition, in practice, monitoring the sensitivities over time for all risk factors is difficult given their large number. One solution to these problems is to clone the PnL of the main portfolio using a handful of financial instruments, so that at any given time the PnL of the main portfolio is equal or very close to that of the cloned portfolio.
\[PnL(t)=\sum_{i=1}^{i=k<m}\alpha_{i}\ RF_{i}(t)+\epsilon^{\prime}(t) \tag{5}\]
As can be seen, finding hedge ratios and risk compression are mathematically the same challenge, i.e. finding the best hedging instruments and then finding the right hedge ratios.
## 3 Regression Limitations
Traditionally, finding hedge ratios (or risk compression) is achieved using linear regression. However, not all requirements of a regression model can be met in practice. In particular, hedging instruments are correlated, and high correlation among the independent variables creates a multicollinearity issue and instability in the estimated factor loadings, i.e. hedge ratios. Mathematically speaking, one or more independent variables could be written as a linear combination of the other independent variables, which causes the correlation matrix of the independent variables to not be full rank and invertible.
We also assume that hedge instruments are given in the regression. However, in practice, we first need to select a few hedging instruments from a pool of candidate instruments and then find the hedge ratios. It is also desirable to hedge a portfolio against market risk with minimum hedging cost. The cost of hedging is a function of how much of each hedging instrument is required as well as the unit cost of the hedging instruments. A regression model cannot accommodate these additional requirements.
## 4 Alternative Methods
As explained in the previous section, one of the requirements of regression is to use relatively independent variables that are not highly correlated to avoid multicollinearity. Multicollinearity could be detected using different statistical tests e.g. Variance Inflation Factor (VIF) [1].
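As a sketch of such a diagnostic (with `X` a hypothetical design matrix of candidate hedge-instrument series), the VIF of column j is \(1/(1-R_{j}^{2})\), where \(R_{j}^{2}\) comes from regressing column j on the remaining columns; values above roughly 5-10 are commonly read as a warning sign:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factor of each column of X."""
    k, n = X.shape
    out = np.empty(n)
    for j in range(n):
        # Regress column j on an intercept plus the remaining columns.
        others = np.column_stack([np.ones(k), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1.0 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```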
In practice, it is not always possible to use independent hedge instruments. In general, there are two approaches to dealing with highly correlated hedge instruments:
* Regularization Techniques.
* Common Factor Analyses.
In the following sections, we explain each method and how it can be employed for hedge estimation.
## 5 Regularization Techniques
Assume we want to hedge the PnL of a portfolio, with PnL time series \(Y(t)\), using N hedging instruments with time series \(X_{i}(t)\). The independent variables could have high correlations, and if used in a regression equation we will have a multicollinearity issue.
In addition, one of the concerns in statistical learning models is overfitting. We are using historical data as a proxy for what might happen in the future. We would like to learn from the past, but if we create a model that completely fits the historical data, then it will not perform well when new and unseen data arrives. One technique used to overcome this issue is to calibrate and estimate the parameters so as to not only achieve a good fit to historical data but also restrain the coefficients from becoming too large. With multicollinearity, a number of coefficients could become large and unstable. Therefore, we can use regularization to overcome multicollinearity as well as overfitting. In general, we estimate the regression coefficients using the following equation:
\[\tilde{\beta}=\underset{\beta}{argmin}\{\sum_{i=1}^{N}(y_{i}-\beta_{0}-\sum_{j=1}^{P}x_{ij}\beta_{j})^{2}+\lambda\sum_{j=1}^{P}\mid\beta_{j}\mid^{q}\}[2] \tag{6}\]
The following figure shows the contours of constant value of \(\mid\beta_{j}\mid^{q}\) for given values of q. If q is set to 1, the equation turns into the least absolute shrinkage and selection operator (Lasso, or LASSO). It was originally introduced in geophysics [3], and later by Robert Tibshirani, who coined the term [4].
If q is equal to 2, then the equation turns into Ridge Regression, also known as Tikhonov regularization [5]. The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "RIDGE regressions: biased estimation of nonorthogonal problems" and "RIDGE regressions: applications in nonorthogonal problems".
The parameter \(\lambda\) controls the amount of shrinkage in the factor loadings, i.e. the \(\beta\) values. There is usually a trade-off between the amount of hedge instruments that must be purchased and better market risk control. The PnL of a hedged portfolio could be written as:
\[PnL_{Hedged}(t)=PnL_{Hedging\ Cost}(t)+PnL_{Market}(t) \tag{7}\]
As the above equation shows, the hedged portfolio PnL has two components:
1. \(PnL_{Hedging\ Cost}(t):\) This is usually a deterministic negative PnL associated with the cost of hedging. It is desirable to minimize this cost.
2. \(PnL_{Market}(t):\) This is a stochastic value with an expected value of zero, which could be either positive or negative due to the market movement. If the expected value is not zero, it is referred to as alpha. The goal is to minimize the variance of this term.

Figure 1: Contours of constant value of \(\mid\beta_{j}\mid^{q}\) for given values of q [2].
Typically, we cannot achieve items 1 and 2 simultaneously; rather, we have to find an optimal trade-off. This is achieved either by cross-validation or by manually changing \(\lambda\) to achieve the targeted market risk hedges and the number/amount of hedge instruments.
Even though there are similarities between Ridge and Lasso, in practice they are used in different scenarios:
* **Ridge**: If we know the candidate hedge instrument(s), we can use Ridge regression. This ensures that no hedge ratio is forced to zero and eliminated.
* **Lasso**: If we are not sure which hedging instrument(s) should be used, we can include all candidate hedging instruments, and Lasso might force a number of hedge ratios to zero, keeping only the best instruments while computing the hedge ratios.
In other words, Lasso can be used as a feature engineering method, while Ridge keeps all independent variables by forcing the coefficients to be small but nonzero.
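As a hedged sketch (the `alpha` values below are purely illustrative, and `H`, `y` are the hypothetical instrument and portfolio PnL arrays from before), both estimators are available off the shelf in scikit-learn; we disable the intercept since none is expected:

```python
from sklearn.linear_model import Lasso, Ridge

beta_lasso = Lasso(alpha=1e-4, fit_intercept=False).fit(H, y).coef_  # some ratios may be exactly zero
beta_ridge = Ridge(alpha=1e-2, fit_intercept=False).fit(H, y).coef_  # all ratios shrunk, none dropped
```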
One of the considerations in finding the best hedges is the cost of hedging. As discussed before, shrinkage methods in general reduce the cost by applying restrictions on the size of factor loadings. However, if the cost of hedging is not the same for all hedge instruments then it is desirable to consider the unit cost of each hedging instrument in the optimization. This could be achieved using the following equation:
\[\tilde{\beta}=\underset{\beta}{argmin}\{\sum_{i=1}^{N}(y_{i}-\beta_{0}-\sum_{j=1}^{P}x_{ij}\beta_{j})^{2}+\lambda\sum_{j=1}^{P}\mid\psi_{j}\beta_{j}\mid^{q}\} \tag{8}\]
In this equation, \(\psi_{j}\) is the relative unit cost of \(\beta_{j}\). Given \(\lambda\) is scaling the second term, \(\psi_{j}\) only needs to be the relative unit cost of \(\beta_{j}\) instead of the actual unit cost of \(\beta_{j}\). Please note that the cost of hedging for securities is not uniformly defined and could change for different financial institutions. In addition, the cost of shorting an instrument i.e. a negative \(\beta_{j}\) might not be the same as buying (having a long position) on the same security i.e. a positive \(\beta_{j}\). Later in this section, we discuss how the unit cost of hedging could be computed.
Generally, off-the-shelf libraries and packages cannot account for the unit cost of hedges (\(\psi_{j}\)). The following change of variables transforms the equation so that available libraries and packages can be used to solve the optimization problem.
\[\hat{\beta_{j}}=\psi_{j}\beta_{j} \tag{9}\]
\[\hat{x}_{ij}=\frac{x_{ij}}{\psi_{j}} \tag{10}\]
\[\tilde{\hat{\beta}}=\underset{\hat{\beta}}{argmin}\{\sum_{i=1}^{N}(y_{i}-\beta_{0}-\sum_{j=1}^{P}\hat{x}_{ij}\hat{\beta_{j}})^{2}+\lambda\sum_{j=1}^{P}\mid\hat{\beta_{j}}\mid^{q}\} \tag{11}\]
In this method, we first adjust each independent variable with its relative unit cost. Then we compute the factor loadings using the adjusted independent variables and the standard algorithm for optimization. The hedge ratios are computed after adjusting back the factor loadings by their unit costs.
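A minimal sketch of this recipe (equations 9-11), assuming a hypothetical vector `psi` of relative unit costs aligned with the columns of `H`:

```python
import numpy as np
from sklearn.linear_model import Lasso

def cost_adjusted_lasso(y, H, psi, lam):
    """Cost-aware Lasso hedge ratios via the change of variables (9)-(10)."""
    H_hat = H / psi                                                    # x_hat_ij = x_ij / psi_j
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(H_hat, y).coef_
    return beta_hat / psi                                              # beta_j = beta_hat_j / psi_j
```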
The unit cost of security could be computed as:
\[\psi=r+\frac{E(S_{bid-ask})}{Price_{ask}} \tag{12}\]
\[S_{bid-ask}=Price_{ask}-Price_{bid} \tag{13}\]
where \(r\) is the cost of funding per unit of the hedging instrument's price. \(r\) is a function not only of the market but also of the creditworthiness of the institution and of the hedging instrument itself. For example, the cost of funding could change for different instruments or even with the direction of the trade, i.e. long vs. short. \(E(S_{bid-ask})\) is the expected bid-ask spread when the hedges are liquidated. We assume that we have to pay the bid-ask spread: even as a market maker, hedging is a risk-reduction activity, and we are not expecting to capture the bid-ask spread. The expected bid-ask spread is a function of market trading volume and, in general, of the liquidity of that security. Selecting a liquid hedge instrument will decrease the cost associated with the bid-ask spread and also lower the cost of funding. \(\psi\) in this equation is expressed per unit of the hedging security's price.
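A small sketch of equations 12-13 (using the current bid-ask spread as a stand-in for its expectation, which is an assumption on our part):

```python
def unit_cost(funding_rate: float, price_bid: float, price_ask: float) -> float:
    """psi = r + E(S_bid-ask) / Price_ask, per unit of the ask price."""
    spread = price_ask - price_bid        # proxy for the expected spread
    return funding_rate + spread / price_ask
```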
## 6 Common Factor Analyses
Another way to deal with multicollinearity is to use Common Factor Analyses. The idea is to find common factors between the hedge instruments and the main portfolio, and then use the common factors to find the hedge ratios.
Assume we want to hedge the PnL of a portfolio, with PnL time series \(Y(t)\), using N hedging instruments with time series \(X_{i}(t)\). The independent variables could have high correlations, and if used in a regression equation we will have a multicollinearity issue. We first find the time series of N factors such that the factors are either independent or do not have high correlations between them, so that their correlation matrix is full rank. The factors should also be able to explain a majority of the variance of the variables (both the main portfolio PnL and each hedging instrument).
\[X_{i}(t)=\sum_{j=1}^{N}\gamma_{ij}F_{j}(t)+\epsilon_{i}(t) \tag{14}\]
where \(X_{i}(t)\) is the value of each hedging instrument and \(F_{j}\) stands for each common factor. Please note that \(E(\epsilon_{i}(t))=0\). Please also note that we are using the same number of factors as hedging instruments. We can write the equation in matrix form as:
\[\mathbf{X_{k\times N}}=\mathbf{F_{k\times N}}\ \mathbf{\gamma_{N\times N}}+\mathbf{\epsilon_{k \times N}} \tag{15}\]
where k is the number of data points and N is the number of hedging instruments (and, equally, of common factors). We can write PnL(t), i.e. \(Y(t)\), as a function of the factors as well:
\[Y(t)=\sum_{j=1}^{N}\alpha_{j}F_{j}(t)+\delta(t) \tag{16}\]
where \(E(\delta(t))=0\). We can write the equation in matrix form:
\[\mathbf{Y_{k\times 1}}=\mathbf{F_{k\times N}}\ \mathbf{\alpha_{N\times 1}}+\mathbf{\delta_{k \times 1}} \tag{17}\]
If the hedged PnL has no exposure to any factor, and given that the factors explain the majority of the variance, it means we are reducing the variance of the PnL after hedging:
\[\mathbf{PnL_{k\times 1}^{unHedged}}-\mathbf{X_{k\times N}}\ \mathbf{\beta_{N\times 1}}=\mathbf{F_{k\times N}}\ \mathbf{\alpha_{N\times 1}}+\mathbf{\delta_{k\times 1}}-(\mathbf{F_{k\times N}}\ \mathbf{\gamma_{N\times N}}+\mathbf{\epsilon_{k\times N}})\mathbf{\beta_{N\times 1}} \tag{18}\]
If we set the coefficients of factors equal to zero we will have:
\[\mathbf{\alpha_{N\times 1}}-\mathbf{\gamma_{N\times N}}\ \mathbf{\beta_{N\times 1}}=\mathbf{0_{N \times 1}} \tag{19}\]
\[\mathbf{\beta_{N\times 1}}=\mathbf{\gamma_{N\times N}^{-1}}\ \mathbf{\alpha_{N\times 1}} \tag{20}\]
\[\mathbf{PnL_{k\times 1}^{Hedged}}=\mathbf{\delta_{k\times 1}}-\mathbf{\epsilon_{k\times N}}\ \mathbf{\gamma_{N\times N}^{-1}}\ \mathbf{\alpha_{N\times 1}} \tag{21}\]
The factors could be determined using different methods. In general, chosen factors should have the following characteristics:
* Factors should be able to explain the majority of variables' variance.
* Factors should be either independent or have low correlations between them.
* Factors should satisfy the usual regression requirements, e.g. stationarity.
* The \(\boldsymbol{\gamma}\) matrix, as defined by equation 15, should be invertible.
There are different methods to build or select factors. Factors could be selected from multi-factor models, e.g. the Fama-French three-factor model [7] for equities. However, proper multi-factor models satisfying these requirements may not exist for all asset classes.
In general, we could construct factors mathematically. For example, Principal Component Analysis (PCA) [6] is a good method to extract factors. Another candidate for factor construction is factor analysis: "Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors" [8].
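The following is a hedged sketch of the full two-step recipe of equations 15-20, using PCA factors and assuming approximately zero-mean PnL series (`y`, `H` as in the earlier sketches):

```python
import numpy as np
from sklearn.decomposition import PCA

def factor_hedge_ratios(y: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Hedge ratios beta = gamma^{-1} alpha from PCA common factors."""
    F = PCA(n_components=H.shape[1]).fit_transform(H)  # (k, N) factor series
    gamma, *_ = np.linalg.lstsq(F, H, rcond=None)      # (N, N): H ~ F @ gamma
    alpha, *_ = np.linalg.lstsq(F, y, rcond=None)      # (N,):  y ~ F @ alpha
    return np.linalg.solve(gamma, alpha)               # equation 20
```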
## 7 Neural Network
In this section, we describe how neural networks can be leveraged to find hedge ratios. "Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains" [9]. One can think of a neural network as a (usually non-linear) function that takes inputs and generates output(s). One might be tempted to use the hedge instruments as inputs and the main portfolio PnL as the output, calibrating the model to mimic the portfolio PnL.
However, the main issue with using a simple neural network as shown in Figure 2 is that, even though it can model the PnL from the hedge instruments accurately, the transformation from hedge instruments to PnL is non-linear. Generally, we cannot create a portfolio that mimics a non-linear function of instruments. To overcome this, we design a particular form of neural network, as explained in the following sections.
Figure 2: Neural network where the inputs are hedge instruments and the output is the portfolio PnL.

### Autoencoders

As explained in Section 6, we can map hedge instruments into factors and use them to compute hedge ratios. Factors can be constructed either as a linear or as a nonlinear combination of instruments. For example, in PCA we construct factors as a linear combination of instruments under certain restrictions to maximize the variances of the principal components. One method to generate factors nonlinearly is to use autoencoders. "An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). The encoding is validated and refined by attempting to regenerate the input from the encoding. The autoencoder learns a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore insignificant data ("noise")." [10]
There are two major problems with using an autoencoder network:
1. A typical autoencoder network is symmetrical and has similar structures for encoder and decoder parts. This means code nodes generate outputs through a nonlinear transformation.
2. In a typical autoencoder network there is no restriction on the correlation of code nodes. The code nodes could have a high correlation after calibrating the network.
In the following section, we propose a solution to address these issues.
### Variational Autoencoder
"In machine learning, a variational autoencoder (VAE), is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling, belonging to the families of probabilistic graphical models and variational Bayesian methods." [10]
In a VAE, the dimension is reduced by mapping the input into a few independent, standard-normally distributed variables, e.g. Z1 and Z2 in Figure 4. The decoder network then maps the Z values back into the input variables. The loss function in a VAE is designed not only to make the output close to the input but also to make the Z values as close as possible to independent normal distributions by calibrating the \(\mu\) and \(\sigma\) values. This ensures the conditional independence of the factors, i.e. the Z values. However, the factors are mapped back into the variables using nonlinear transformations, which is not acceptable for our hedge-ratio estimation problem. To address these issues, we propose a modified variational autoencoder network, as explained in the following section.
Figure 3: Simple schema of an autoencoder with two code nodes.
### Modified Beta Variational Autoencoder
To explain the model, assume we want to hedge the PnL of the main portfolio, whose time series is denoted by \(Y(t)\). For the purpose of illustration, in this example we assume two candidate hedging instruments with time series \(H_{1}(t)\) and \(H_{2}(t)\).
\[Y(t)=\beta_{1}H_{1}(t)+\beta_{2}H_{2}(t)+\epsilon(t) \tag{22}\]
where \(E(\epsilon(t))=0\), i.e. the expected value of the residuals is zero. Please note that, in general, we do not expect any intercept. If there were an intercept, that would mean a market inefficiency: one could make money by going long the portfolio and shorting the hedging portfolio if the intercept is positive, and vice versa if the intercept is negative. This might exist for a short period of time, but it will not last, and we do not expect to observe it in the future. First, we construct the following neural network:
There are several differences and similarities between the modified VAE and the original VAE. The loss function in both minimizes not only the difference between targeted output values and estimated ones but also pushes \(\mu\) values to zero and \(\sigma\) values to one.
\[\begin{split} Loss(t)=(H_{1}(t)-\hat{H_{1}}(t))^{2}+(H_{2}(t)-\hat{H_{2}}(t))^{2}+(Y(t)-\hat{Y}(t))^{2}+\\ \hat{\beta}\ (KL(N(\mu_{1},\sigma_{1})\parallel N(0,1))+KL(N(\mu_{2},\sigma_{2})\parallel N(0,1)))\end{split} \tag{23}\]
Figure 4: Simple schema of a variational autoencoder with two latents.
Figure 5: Modified variational autoencoder with two latents.
where \(\hat{\beta}\) is a free parameter and KL is the Kullback-Leibler divergence [11]. The use of \(\hat{\beta}\) was introduced by Higgins et al.; when \(\hat{\beta}\) is not equal to one, the model is called Beta-VAE [12]. Please note that:
* Similar to \(\beta\)-VAE, modified \(\beta\)-VAE has the same structure i.e. encoder and decoder as well as latent distribution unit.
* While the encoder in modified \(\beta\)-VAE could have any number of hidden layers and activation functions, the number of inputs is equal to the number of hedge instruments.
* The decoder has the same number of outputs as the encoder has inputs, plus one for Y. It has only one layer, with linear activation and no bias.
* The latent distribution has twice the number of hedge instruments to model \(\mu\) and \(\sigma\) values.
The Z values are computed via a non-linear transformation of inputs while the outputs are computed from Z values by a linear transformation. Similar to linear factors, we can write the following equations corresponding to Figure 5:
\[\hat{\mathbf{H}}_{\mathbf{k\times 2}}=\mathbf{Z}_{\mathbf{k\times 2}}\ \mathbf{\gamma}_{\mathbf{2\times 2}} \tag{24}\]
The value of the matrix \(\gamma_{2\times 2}\) is extracted from the calibrated weights of the decoder.
\[\hat{\mathbf{Y}}_{\mathbf{k\times 1}}=\mathbf{Z}_{\mathbf{k\times 2}}\ \mathbf{\alpha}_{\mathbf{2\times 1}} \tag{25}\]
The value of the matrix \(\alpha_{2\times 1}\) is extracted from the calibrated weights of the decoder used to compute Y from the Z values. Using equation 20, we can compute the hedge ratios:
\[\mathbf{\beta_{2\times 1}}=\mathbf{\gamma_{2\times 2}^{-1}}\ \mathbf{\alpha_{2\times 1}} \tag{26}\]
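A hedged PyTorch sketch of this extraction step (the encoder, the latent sampling layer, and the training loop with the loss of equation 23 are all omitted; only the single linear, bias-free decoder matters for the hedge ratios):

```python
import numpy as np
import torch

n = 2                                            # number of hedge instruments
decoder = torch.nn.Linear(n, n + 1, bias=False)  # Z -> (H_1_hat, H_2_hat, Y_hat)

# ... after training the full encoder/decoder with the loss of equation 23 ...
W = decoder.weight.detach().numpy()              # (n + 1, n); outputs = Z @ W.T
gamma = W[:n].T                                  # H_hat = Z @ gamma   (equation 24)
alpha = W[n]                                     # Y_hat = Z @ alpha   (equation 25)
beta = np.linalg.solve(gamma, alpha)             # beta = gamma^{-1} alpha (equation 26)
```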
## 8 Historical values sampling
In all methods, we use a dependent variable, i.e. \(Y(t)\), and independent variables, i.e. \(X_{i}(t)\). Typically we use historical values for the X and Y time series. In reality, we are interested in the next move of the market rather than in historical values. We would like to create a hedged portfolio so that the move in our position is offset by the move in the hedge instruments, leaving the hedged position with minimal PnL movement. Nobody knows what the next movements will be, but we can estimate the distribution of the next movement by considering historical movements. For example, we can assume that the next move will be similar to one of the movements observed in the last 250 days. By using the last 250 days of X and Y time series, we implicitly assume that the next-day movement distribution is the same distribution that generated the last 250 samples. In this section, we present a tool to test this assumption and to generate better samples for the dependent and independent variables in our models.
Assume we have a time series X where we have observed the values till the time "t". We would like to estimate X(t+1) distribution given time series X:
\[P(x(t+1)\ |\ \mathbf{X})=f(t+1) \tag{27}\] \[\mathbf{X}=[X(1),X(2),...,X(t)] \tag{28}\]
We can compute the quantile of x(t+1), assuming its distribution is f(t+1); let's call it \(F^{-1}(x(t+1))\). We know that if we have estimated f(.) correctly, then \(F^{-1}(.)\) should follow a uniform distribution, regardless of what distribution f(.) follows. For most financial time series, we can see that using raw historical values will not satisfy this condition. Intuitively, we also expect the next day's move to be more similar to today's move than to a move that happened six months ago. One way to address this concern is to generate the X and Y time series by sampling from historical values with an exponential distribution, assigning a higher probability to more recent data than to older data. Let's start with a univariate time series:
\[\mathbf{X}=[X(1),X(2),...,X(t)] \tag{29}\] \[\mathbf{P}=[\alpha^{t-1},\alpha^{t-2},...,\alpha,1]/G\] (30) \[G=\sum_{i=1}^{i=t}\alpha^{t-i} \tag{31}\]
where G is the normalizing constant and \(\alpha\leq 1.0\) is the decay factor, which can be tuned so that \(F^{-1}(.)\) becomes a uniformly distributed variable.
For multivariate time series, we can compute the principal component time series first which by construction are perpendicular i.e. they have zero correlation. Now we can treat each principal component time series as a univariate time series and compute its \(\alpha\). The \(\alpha\) for the whole set is the weighted average of each \(\alpha\) computed for each principal component time series. The weights could be the percentage of total variance that each component explains in PCA. Please note that in multivariate time series, we sample dates first and then generate the whole matrix. In other words, we are sampling vectors of data (Y and X values) from different dates to preserve the correlation.
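A minimal sketch of this sampling scheme (`alpha` is the calibrated decay factor; dates are sampled with replacement, so the whole (Y, X) row for a date is drawn at once, preserving cross-sectional correlation):

```python
import numpy as np

def sample_history(n_obs: int, n_samples: int, alpha: float, seed: int = 0) -> np.ndarray:
    """Indices of historical dates, exponentially favoring recent observations."""
    rng = np.random.default_rng(seed)
    w = alpha ** np.arange(n_obs - 1, -1, -1)  # [alpha^{t-1}, ..., alpha, 1]
    return rng.choice(n_obs, size=n_samples, replace=True, p=w / w.sum())
```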
## 9 Result and Discussion
In this section, we are illustrating how different methods could be used. Assume we want to hedge (or compress risk) SPY using a few equities as hedging instruments (or risk factors). SPY (SPDR S&P 500 ETF Trust) is an exchange-traded fund that trades on the NYSE Arca and tries to track the S&P 500 stock market index. S&P 500 is an equity basket tracking the stock performance of 500 large companies listed on stock exchanges in the United States. We use daily historical prices from 2014-03-27 to 2022-12-23. The following figures show SPY and candidate hedging instruments' relative price movements:
The following figure shows the correlations between the log rates of return of daily prices. As expected, there are relatively high correlations between the equities, especially between the GOOG and GOOGL tickers:
If we use different models as discussed in this paper, then we could compute the hedge ratios. The sampling decay as discussed in Section 8 is equal to 0.996584 using the last 100 days. The following table shows the results for sampling 1000 data points:
Columns represent employed methods and all rows except the last one are the computed hedge ratios to hedge SPY. The last row presents R squared which is computed as the squared correlation between the predicted values and actual values of the daily log return of SPY. For all methods, we consider the cost of hedging as shown in Table 2. They have been computed according to equations 12 and 13. As discussed before only Lasso_Cost and Ridge_Cost models actively consider the cost of hedges in optimization and finding hedge ratios. However, the cost of hedges is considered in the expected PnL as can be seen in Figure 9 where the mean of each boxplot is equal to the cost of hedging rather than zero.
| Name | Regression | Lasso | Lasso_Cost | Ridge | Ridge_Cost | Factors_PCA | Factors_Varimax_FA | VAE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AAPL | 0.135 | 0.136 | 0.147 | 0.136 | 0.136 | 0.136 | 0.113 | -0.086 |
| AMZN | 0.089 | 0.089 | 0.092 | 0.089 | 0.089 | 0.091 | 0.071 | -0.501 |
| BRK-B | 0.251 | 0.248 | 0.191 | 0.243 | 0.244 | 0.249 | 0.101 | 0.618 |
| GOOG | -0.040 | 0.000 | 0.045 | -0.001 | -0.025 | -0.042 | 0.068 | -3.462 |
| GOOGL | 0.068 | 0.027 | 0.000 | 0.031 | 0.054 | 0.070 | 0.070 | 3.829 |
| JNJ | 0.092 | 0.090 | 0.090 | 0.091 | 0.093 | 0.090 | 0.087 | -0.093 |
| JPM | 0.149 | 0.150 | 0.180 | 0.152 | 0.153 | 0.148 | 0.136 | -0.336 |
| MSFT | 0.126 | 0.126 | 0.106 | 0.124 | 0.125 | 0.124 | 0.097 | 0.894 |
| UNH | 0.015 | 0.014 | 0.009 | 0.016 | 0.015 | 0.017 | 0.133 | 0.205 |
| XOM | 0.063 | 0.063 | 0.066 | 0.064 | 0.064 | 0.063 | 0.095 | 0.101 |
| R2 | 0.958 | 0.958 | 0.957 | 0.958 | 0.958 | 0.958 | 0.943 | 0.936 |

Table 1: Hedge ratios and R-Squared values for different methods
Figure 8: Correlation matrix.
The following figure shows boxplots of the residuals plus hedging cost, together with the 99% Value at Risk (VaR). The mean is equal to the cost of hedging, which is negative. VaR is a function of both the deterministic cost of hedging and the tail risk, which is stochastic.
### Models Advantages and Limitations
Each model has its own advantages and limitations. Here we briefly discuss each model's pros and cons:
* **Regression:** Regression is easy to perform and the least computationally expensive. Given that there is no restriction on the coefficients, it should provide the lowest in-sample fitted error variance. However, regression can be prone to over-fitting and a higher cost of hedging, given that there is no restriction on the computed hedge ratios. It can also become unstable when highly correlated hedge instruments (independent variables) are used.
* **Lasso:** Lasso is similar to regression when L1 regularization is applied to the fitted coefficients, i.e. hedge ratios. The regularization might force a number of coefficients to zero and controls the size of the others. While regularization helps avoid over-fitting and controls the cost of hedging, we might get a higher in-sample fitted error variance given the applied restriction on the optimization. In practice, we can use Lasso as a feature engineering tool to pick a handful of hedge instruments from a pool of candidate hedge instruments. One limitation of the model is that it picks which coefficients are forced to zero as the data changes. If, for example, we use the tool for risk compression, it is difficult to monitor coefficients over time given that any variable could drop out of the equation at any time. This could also increase the cost of hedge-portfolio rebalancing.

| Variable | Cost |
| --- | --- |
| AAPL | 0.000529 |
| AMZN | 0.000082 |
| BRK-B | 0.000863 |
| GOOG | 0.000282 |
| GOOGL | 0.000387 |
| JNJ | 0.000304 |
| JPM | 0.000252 |
| MSFT | 0.000633 |
| UNH | 0.000364 |
| XOM | 0.000269 |

Table 2: Hedging cost

Figure 9: Different models' residuals and 99% Value at Risk (VaR), presented as the dashed line.
* **Lasso_Cost:** Lasso_Cost reduces to Lasso when the cost of hedging is the same for all hedge instruments. Therefore, this model has the same advantages and limitations as Lasso. In addition, the model adjusts the computed weights to account for the relative costs between hedge instruments.
* **Ridge:** Ridge is similar to regression when L2 regularization is applied to fitted coefficients i.e. hedge ratios. The regularization forces the coefficients to be set to small values but not zero. While regularization helps with avoiding over-fitting and controls the cost of hedging, we might get higher in-sample fitted error variance given the applied restriction on the optimization similar to Lasso. In practice, we can use Ridge when we want to keep all variables in contrast to Lasso where we need to select a subset of variables.
* **Ridge_Cost:** Ridge_Cost reduces to Ridge when the cost of hedging is the same for all hedge instruments. Therefore, this model has the same advantages and limitations as Ridge. In addition, the model adjusts the computed weights to account for the relative costs between hedge instruments.
* **Factors_PCA:** In this method we first compute the PCA time series as factors. Using PCA by design ensures that the factors are orthogonal. Factor-based methods are, in general, more intuitive to understand, since we find hedge ratios such that we have no exposure to the factors. However, given that it is a two-step process, i.e. finding factors under some restrictions and then finding hedge ratios that eliminate our exposures to the factors, we expect, in general, a less optimal solution given the additional restrictions in the process.
* **Factors_Unrotated_FA:** In this method we first compute the factors' time series using factor analysis [8]. The pros and cons of this method are similar to Factors_PCA.
* **Factors_Varimax_FA:** As can be shown, factors extracted using factor analysis [8] are not unique, and rotated factors are also a possible answer. "Varimax is the most commonly used rotation method. Varimax is an orthogonal rotation of the factor axes that maximizes the variance of the squared loadings of a factor (column) on all the variables (rows) in a factor loadings matrix" [13]. Please note that the rotation only helps with the interpretability of the results, i.e. which factors explain which variables. The final hedge ratios are the same for the Factors_Varimax_FA and Factors_Unrotated_FA methods. The pros and cons of this method are similar to Factors_PCA.
* **VAE:** This model is a modified \(\beta\)-VAE. The model is similar to a common factor analysis; the main difference is that the factors are generated non-linearly, while in PCA and Factor Analysis the factors are constructed from the variables linearly. In addition, the VAE is a neural network, inheriting the pros and cons of neural network models. One limitation is that the VAE is sensitive to the initial seed used for its weight initialization, leading to instability in the results from one run to the next [14]. Training might also take longer than for the other models, and we might not observe better performance despite the generally higher computational cost.
## 10 Conclusion
Besides simple linear regression, there are other methods for finding hedge ratios or compressing risk into a few risk factors. The methods fall into either Regularization Techniques or Common Factor Analyses. In general, Regularization Techniques not only help with overfitting but also control the size of the hedges to save hedging costs. Lasso, which is a member of this family, can be used to select hedge instruments from a pool of candidate securities. In regularization methods, we can also consider the cost of hedging in the optimization, which is a function of the type of security, market volatility, and the liquidity of the hedging instruments.
In Common Factor Analyses methods, the securities are first mapped into common factors, and then hedges are selected so as to eliminate the hedged portfolio's exposure to those factors. While this method might provide a more intuitive solution, it might not provide the best answer, given its two-step process, in comparison to Regularization Techniques. We also cannot control the size of the hedges as we can with Regularization Techniques. However, both Regularization Techniques and Common Factor Analyses are robust to multicollinearity. This means we can use highly correlated hedge securities and find their hedge ratios without the instability that would be observed with a linear regression model. We presented a modified beta variational autoencoder model that can be used for hedging; this method falls into the Common Factor Analyses family.
We have shown that using historical values without proper sampling might not be the best option for calibrating our models, and we presented a method for sampling from historical values to be used as input data in our models. In this method, we calibrate an exponential distribution that samples from historical values, giving more weight to recent data than to older data.
Finally, numerical results have been generated for an example using the different methods for comparison. As discussed in Section 9, each method has its pros and cons, and the appropriate method should be chosen for each problem. The assessment should consider not only the minimization of the hedged portfolio's PnL variance but also the PnL associated with the cost of hedging.
# Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks

Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann

[arXiv:2301.02039v1](http://arxiv.org/abs/2301.02039v1)
###### Abstract
Randomized smoothing is one of the most promising frameworks for certifying the adversarial robustness of machine learning models, including Graph Neural Networks (GNNs). Yet, existing randomized smoothing certificates for GNNs are overly pessimistic since they treat the model as a black box, ignoring the underlying architecture. To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes. Compared to existing certificates, we certify robustness to much stronger adversaries that control entire nodes in the graph and can arbitrarily manipulate node features. Our certificates provide stronger guarantees for attacks at larger distances, as messages from farther-away nodes are more likely to get intercepted. We demonstrate the effectiveness of our method on various models and datasets. Since our gray-box certificates consider the underlying graph structure, we can significantly improve certifiable robustness by applying graph sparsification.1
Footnote 1: Project page: [https://www.cs.cit.tum.de/daml/interception-smoothing](https://www.cs.cit.tum.de/daml/interception-smoothing)
## 1 Introduction
The core principle behind the majority of Graph Neural Networks (GNNs) is message passing - the representation of a node is (recursively) computed based on the representations of its neighbors (Gilmer et al., 2017). This allows for information to propagate across the graph, e.g. in a k-layer GNN the prediction for a node depends on the messages received from its k-hop neighborhood. With such models, if an adversary controls a few nodes in the graph, they can manipulate node features to craft adversarial messages that in turn change the prediction for a target node.
Such feature-based adversarial attacks are becoming significantly stronger in recent years and pose a realistic threat (Ma et al., 2020; Zou et al., 2021): Adversaries may arbitrarily manipulate features of entire nodes in their control, for example in social networks, public knowledge graphs and graphs in the financial and medical domains. Detecting such adversarial perturbations is a difficult unsolved task even beyond graphs (Carlini and Wagner, 2017), meaning such attacks may go unnoticed.
How can we limit the influence of such adversarial attacks? We introduce a simple but powerful idea: _intercept_ adversarial messages. Specifically, we propose message-interception smoothing where we randomly delete edges and/or randomly ablate (mask) nodes, and analyze the probability that messages from adversarially controlled nodes reach the target nodes. By transforming any message-passing GNN into a smoothed GNN, where the prediction is the majority vote under this randomized message interception, we can provide robustness certificates (see Figure 1).
Experimentally we obtain significantly better robustness guarantees compared to previous (smoothing) certificates for GNNs (compare Section 7). This improvement stems from the fact that our certificates take the underlying architecture of the classifier into account. Unlike previous randomized smoothing certificates which treat the GNN as a black-box, our certificates are _gray-box_. By making the certificate message-passing aware we partially open the black-box and obtain stronger guarantees.
Our approach is also in contrast to white-box certificates that apply only to very specific models. For example, Zugner and Gunnemann (2019) only certify the GCN model (Kipf and Welling, 2017). While newly introduced GNNs require such certificates to be derived from scratch, our approach is model-agnostic and flexible enough to accommodate the large family of message-passing GNNs.
We evaluate our certificates on node classification datasets and analyze the robustness of existing GNN architectures. By applying simple graph sparsification we further increase the certifiable robustness while retaining high accuracy, as sparsification reduces the number of messages to intercept. In stark contrast to previous probabilistic smoothing-based certificates for GNNs, our certificates require only a few Monte-Carlo samples and are more efficient: For example, we can compute certificates on Cora-ML in just 17 seconds and certify robustness against much stronger adversaries than previous smoothing-based certificates (Bojchevski et al., 2020) that take up to 25 minutes.
In short, our main contributions are:
* The first gray-box smoothing-based certificates for GNNs that exploit the underlying _message-passing_ principle for stronger guarantees.
* Novel randomized smoothing certificates for strong threat models where adversaries can arbitrarily manipulate features of multiple nodes in their control.
## 2 Preliminaries and Background
**Threat model.** We develop certificates for feature perturbations given _evasion_ threat models. Specifically, we model adversaries that attack GNNs by entirely perturbing attributes of a few \(\rho\) nodes in the graph at inference. Given an attributed graph \(G=(\mathbf{A},\mathbf{X})\in\mathbb{G}\) encoded via adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) and feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) with \(n\) nodes and \(d\) features, we formally define the threat model of feature perturbations as a ball centered at a given graph \(G=(\mathbf{A},\mathbf{X})\):
\[B_{\rho}(G)\triangleq\{G^{\prime}=(\mathbf{A}^{\prime},\mathbf{X}^{\prime})\mid\mathbf{A} =\mathbf{A}^{\prime},\delta(G,G^{\prime})\leq\rho\}\]
where \(\delta(G,G^{\prime})\triangleq\sum_{v=1}^{n}\mathbf{1}_{\mathbf{x}_{v}\neq\mathbf{x}_{v}^ {\prime}}\) denotes the number of nodes whose features differ in at least one dimension when comparing the clean graph \(G\) and the perturbed graph \(G^{\prime}\). Intuitively, this means adversaries control up to \(\rho\) nodes in the graph and can arbitrarily manipulate node features.
**Graph neural networks.** We design robustness certificates for GNNs that instantiate the so-called message-passing framework (Gilmer et al., 2017). The message-passing framework describes a large family of GNN architectures that are based on the local aggregation of information from neighboring nodes in the graph. To compute a new representation \(\mathbf{h}_{v}^{(\ell)}\) of node \(v\), each message-passing layer \(\Psi^{(\ell)}\) transforms and aggregates the representations \(\mathbf{h}_{v}^{(\ell-1)}\) and \(\mathbf{h}_{u}^{(\ell-1)}\) of all nodes \(u\) in the local neighborhood \(\mathcal{N}(v)\triangleq\{u\mid\mathbf{A}_{uv}=1\}\) of node \(v\).
Figure 1: Randomized message-interception smoothing: We model adversaries that can arbitrarily manipulate features of multiple nodes in their control (red) to alter the predictions for a target node \(v\). We intercept messages (gray) by randomly deleting edges and/or ablating (mask) all features of entire nodes. Our certificates are based on the majority vote under this randomized message interception.
We can formally describe a message-passing layer as follows: \(\mathbf{h}_{v}^{(\ell)}\triangleq\Psi_{u\in\mathcal{N}(v)\cup\{v\}}^{(\ell)}\left(\mathbf{ h}_{v}^{(\ell-1)},\mathbf{h}_{u}^{(\ell-1)}\right)\). For node classification, message-passing GNNs with \(k\) GNN-layers can be described as parametrized functions \(f:\mathbb{G}\rightarrow\{1,\ldots,C\}^{n}\) that assign each node \(v\) in graph \(G\) class \(f_{v}(G)\triangleq\operatorname*{argmax}_{c}\mathbf{h}_{v,c}^{(k)}\), where \(\mathbf{h}_{v}^{(0)}\triangleq\mathbf{x}_{v}\in\mathbb{R}^{d}\) denotes the input and \(\mathbf{h}_{v}^{(k)}\in\mathbb{R}^{C}\) the final representation of node \(v\).
**Randomized smoothing.** Our robustness certificates for GNNs build upon the randomized smoothing framework (Cohen et al., 2019; Lecuyer et al., 2019): Given any base classifier \(f\), for example a message-passing GNN, we can build a "smoothed" classifier \(g\) that classifies randomly perturbed input samples, and then takes the "majority vote" among all predictions. The goal is to construct a smoothed classifier that behaves similar to \(f\) (for example in terms of accuracy) and for which we can prove (probabilistic) robustness certificates.
Randomized ablation (Levine and Feizi, 2020) is a smoothing-based certificate that "ablates" the input: Unlike in randomized smoothing where the input is randomly perturbed (e.g. by adding Gaussian noise to images), in randomized ablation the input is randomly masked, for example by replacing parts of the input with a special ablation token that "hides" the original information. If the perturbed input is masked for the majority of predictions, we can issue certificates for the smoothed classifier \(g\).
## 3 Randomized Message-Interception Smoothing for Graph Neural Networks
The main idea of our gray-box smoothing certificates is to intercept messages from perturbed nodes by (1) deleting edges to disconnect nodes, and/or (2) ablating nodes to mask their features (cf. Figure 1).
To implement this we introduce two independent smoothing distributions \(\phi_{1}(\mathbf{A})\) and \(\phi_{2}(\mathbf{X})\) that randomly apply these changes to the input graph: The first smoothing distribution \(\phi_{1}(\mathbf{A})\) randomly deletes edges in the adjacency matrix (\(1\to 0\)) with probability \(p_{d}\). The second smoothing distribution \(\phi_{2}(\mathbf{X})\) randomly ablates all features of nodes with probability \(p_{a}\) by replacing their feature representations with a fixed representation token \(\mathbf{t}\in\mathbb{R}^{d}\) for ablated nodes. The ablation representation \(\mathbf{t}\) is a trainable parameter of our smoothed classifier and can be optimized during training. Introducing two independent smoothing distributions is important since our base classifiers \(f\) are GNNs, which behave differently under structural changes in the graph than to feature ablation of nodes in practice.
We use this message-interception smoothing distribution \(\phi(G)\triangleq(\phi_{1}(\mathbf{A}),\phi_{2}(\mathbf{X}))\) to randomly sample and then classify different graphs with a message-passing GNN \(f\). Finally, our smoothed classifier \(g\) takes the majority vote among the predictions of \(f\) for the sampled graphs \(\phi(G)\). We formally describe our smoothed classifier \(g\) as follows:
\[g_{v}(G)\triangleq\operatorname*{argmax}_{y\in\{1,\ldots,C\}}p_{v,y}(G)\qquad p _{v,y}(G)\triangleq p(f_{v}(\phi(G))=y)\]
where \(p_{v,y}(G)\) denotes the probability that the base GNN \(f\) classifies node \(v\) in graph \(G\) as class \(y\) under the smoothing distribution \(\phi(G)=(\phi_{1}(\mathbf{A}),\phi_{2}(\mathbf{X}))\).
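A minimal Monte-Carlo sketch of this smoothed classifier (all names are hypothetical; `gnn(A, X)` is assumed to return one class label per node, edges are treated as directed for simplicity, and `t` is the ablation token):

```python
import numpy as np

def smoothed_predict(gnn, A, X, t, p_d=0.3, p_a=0.8, n_samples=1000, n_classes=7):
    """Majority vote over random edge deletion (phi_1) and node ablation (phi_2)."""
    n = X.shape[0]
    votes = np.zeros((n, n_classes), dtype=int)
    rng = np.random.default_rng()
    for _ in range(n_samples):
        A_s = A * (rng.random(A.shape) > p_d)   # phi_1: delete each edge w.p. p_d
        mask = rng.random(n) < p_a              # phi_2: ablate each node w.p. p_a
        X_s = np.where(mask[:, None], t, X)     # replace features with token t
        votes[np.arange(n), gnn(A_s, X_s)] += 1
    return votes.argmax(axis=1)
```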
## 4 Provable Gray-box Robustness Certificates for Graph Neural Networks
We derive provable certificates for the smoothed classifier \(g\). To this end, we develop a condition that guarantees \(g_{v}(G)=g_{v}(G^{\prime})\) for any graph \(G^{\prime}\in B_{\rho}(G)\): We make the worst-case assumption that adversaries alter the prediction for a target node whenever it receives at least one message from perturbed nodes. Let \(E\) denote the event that at least one message from perturbed nodes reaches a target node \(v\). Then the probability \(\Delta\triangleq p(E)\) quantifies how much probability mass of the distribution \(p_{v,y}(G)\) over classes \(y\) is controlled by the worst-case adversary:
**Proposition 1**.: _Given target node \(v\) in graph \(G\), and adversarial budget \(\rho\). Let \(E\) denote the event that the prediction \(f_{v}(\phi(G))\) receives at least one message from perturbed nodes. Then the change in label probability \(|p_{v,y}(G)-p_{v,y}(G^{\prime})|\) is bounded by the probability \(\Delta=p(E)\) for all classes \(y\in\{1,\ldots,C\}\) and graphs \(G^{\prime}\) with \(G^{\prime}\in B_{\rho}(G)\): \(|p_{v,y}(G)-p_{v,y}(G^{\prime})|\leq\Delta\)._
_Proof sketch_ (Proof in Appendix A). Whenever we intercept all adversarial messages, adversaries cannot alter the prediction. Thus \(|p_{v,y}(G)-p_{v,y}(G^{\prime})|\) is bounded by \(\Delta\). \(\Box\)
Note that we derive an upper bound on \(\Delta\) in Section 5.
We first consider the special case of node ablation smoothing, discuss its relation to randomized ablation for image classifiers (Levine and Feizi, 2020), and then we derive our provably stronger guarantees for the general case of message-interception smoothing.
**Special case of node ablation smoothing.** For the special case of node feature ablation smoothing only (\(p_{d}=0\)), we can directly determine the probability \(\Delta\) (Proof in Appendix B):
**Proposition 2**.: _For node feature ablation smoothing only (\(p_{d}=0\)), we have \(\Delta=1-p_{a}^{o}\)._
In this special case, our certificates for GNNs are theoretically related to the randomized ablation certificates for image classifiers (Levine and Feizi, 2020). We could apply their smoothing distribution to GNNs by randomly ablating features of entire nodes, instead of pixels in an image. However, their approach is specifically designed for image classifiers and comes with serious shortcomings when applied to GNNs. Notably, our robustness certificates are provably tighter and experimentally stronger even in this special case without edge deletion smoothing (\(p_{d}=0\)): Given that \(\Delta^{L}\) denotes the bounding constant as defined by Levine and Feizi (2020), we show \(\Delta<\Delta^{L}\) in Appendix B. We carefully discuss such differences with more technical details in Appendix B. Most importantly, their certificate applied to GNNs ignores the underlying graph structure.
**General case of message-interception smoothing.** In contrast, our message-interception certificates are specifically designed for _graph-structured_ data, message-passing aware, and consider the interception of messages via edge deletion as follows:
Consider a fixed target node \(v\) in the graph. The formal condition for intercepting messages from a fixed target node \(v\) to itself is \(\phi_{2}(\mathbf{x}_{v})=\mathbf{t}\), since we only intercept messages from the target node to the target node itself if we ablate its features. To model the interception of messages from perturbed nodes \(\mathbb{B}\) other than the target node, we take the graph structure \(\mathbf{A}\) into account: We consider all simple paths \(P_{wv}^{k}=\{(e_{1},\dots,e_{i})\mid i\leq k\}\) from perturbed nodes \(w\in\mathbb{B}\) to target node \(v\) of length at most \(k\) (where \(k\) is the number of GNN layers).2 Intuitively, if any edge \(e\) on path \(p\in P_{wv}^{k}\) is deleted, or the features of \(w\) are ablated, messages via path \(p\) get intercepted. If all messages from perturbed nodes get intercepted, adversaries cannot alter the prediction for the target node (Proof in Appendix A):
Footnote 2: We consider simple paths (all nodes appear only once), since we only receive perturbed messages via more complex paths iff we receive perturbed messages via the simple part of the complex path.
**Lemma 1**.: _Given a fixed target node \(v\) and perturbed nodes \(\mathbb{B}\) in the graph with \(v\notin\mathbb{B}\). Then \(f_{v}(\phi(G))=f_{v}(\phi(G^{\prime}))\) for any graph \(G^{\prime}\in B_{\rho}(G)\) if_
\[\forall w\in\mathbb{B}:\left(\forall p\in P_{wv}^{k}:\exists(i,j)\in p:\phi_{ 1}(\mathbf{A})_{ij}=0\right)\vee(\phi_{2}(\mathbf{x}_{w})=\mathbf{t})\]
Since k-layer message-passing GNNs aggregate information over local neighborhoods, only features of nodes in the _receptive field_ affect the prediction for a target node (only via paths with a length of at most \(k\) to \(v\)). For any perturbed node \(w\in\mathbb{B}\) outside of the receptive field we have \(P_{wv}^{k}=\emptyset\) and the message-interception condition of Lemma 1 is always fulfilled.
In practice, however, we do not know which nodes in the graph are controlled by the adversary. To account for this, we assume adversaries control nodes indicated by \(\mathbf{\rho}_{v}\in\{0,1\}^{n}\) that maximize the probability of the event \(E(\mathbf{\rho}_{v})\) that target node \(v\) receives perturbed messages:
**Theorem 1**.: _The worst-case change in label probability \(|p_{v,y}(G)-p_{v,y}(G^{\prime})|\) is bounded by_
\[\Delta=\max_{\|\mathbf{\rho}_{v}\|_{0}\leq\rho}p\left(E(\mathbf{\rho}_{v})\right)\]
_for all classes \(y\in\{1,\dots,C\}\) and any graph \(G^{\prime}\in B_{\rho}(G)\)._
Proof in Appendix A. Finally, we provide conservative robustness certificates for the smoothed classifier \(g\) by exploiting that perturbed nodes are disconnected and/or ablated and cannot send messages for the majority of predictions:
**Corollary 1** (Multi-class certificate).: _Given \(\Delta\) as defined in Proposition 1. Then we can certify the robustness \(g_{v}(G)=g_{v}(G^{\prime})\) for any graph \(G^{\prime}\in B_{\rho}(G)\) if_
\[p_{v,y^{*}}(G)-\Delta>\max_{\tilde{y}\neq y^{*}}p_{v,\tilde{y}}(G)+\Delta\]
_where \(y^{*}\triangleq g_{v}(G)\) denotes the majority class, and \(\tilde{y}\) the follow-up (second best) class._
Proof in Appendix A. We also provide a certificate for binary node classification in Appendix A.
## 5 Practical Interception Smoothing Certificates
Message-interception certificates pose two challenges in practice: (1) computing the bounding constant \(\Delta\) for arbitrary graphs, and (2) computing the label probabilities \(p_{v,y^{*}}(G)\) and \(p_{v,\tilde{y}}(G)\). We address the first problem by providing upper bounds on \(\Delta\) (i.e. lower bounds on the certifiable robustness). For the second problem we follow existing literature and estimate the smoothed classifier.
**Lower bound on certifiable robustness.** Computing \(\Delta\) of Theorem 1 poses two problems: First, finding the worst-case nodes in arbitrary graphs involves a challenging optimization over the powerset of nodes in the receptive field. Second, computing the probability \(p(E(\mathbf{\rho}_{v}))\) of receiving perturbed messages is challenging even for fixed \(\mathbf{\rho}_{v}\), since in general, it involves evaluating the inclusion-exclusion principle (Appendix C). We can compute \(\Delta\) exactly only for special cases such as small or tree-structured receptive fields (Appendix D). Notwithstanding the challenges, we provide practical upper bounds on \(\Delta\). Instead of assuming a fixed \(\mathbf{\rho}_{v}\), we solve both problems regarding \(\Delta\) at once and directly bound the maximum over all possible \(\mathbf{\rho}_{v}\) by assuming _independence_ between paths. Due to Corollary 1, any upper bound on \(\Delta\) results in a lower bound on the certifiable robustness.
We first derive an upper bound on \(\Delta\) for a single perturbed node, and then generalize to multiple nodes. Let \(E_{w}\) denote the event that the target node \(v\) receives messages from node \(w\), and \(\Delta_{w}\triangleq p(E_{w})\). Note in the special case of the target node \(v=w\) we just have \(\Delta_{w}=1-p_{a}\), since the features \(\mathbf{x}_{v}\) of the target node \(v\) are used for the prediction independent of any edges. For any \(w\neq v\) in the receptive field we can derive the following upper bound for single sources (Proof in Appendix E):
**Theorem 2** (Single Source Multiplicative Bound).: _Given target node \(v\) and source node \(w\neq v\) in the receptive field of a \(k\)-layer message-passing GNN \(f\) with respect to \(v\). Let \(P_{wv}^{k}\) denote all simple paths from \(w\) to \(v\) of length at most \(k\) in graph \(G\). Then \(\Delta_{w}\leq\overline{\Delta}_{w}\) for:_
\[\overline{\Delta}_{w}\triangleq\left[1-\prod_{q\in P_{wv}^{k}}\left(1-(1-p_ {d})^{|q|}\right)\right](1-p_{a})\]
_where \(|q|\) denotes the number of edges on the simple path \(q\in P_{wv}^{k}\) from \(w\) to \(v\)._
We visualize \(\Delta_{w}\) for different \(p_{d}\) and \(p_{a}\) in Figure 2. The upper bound for single sources is tight for one- and two-layer GNNs (\(\Delta_{w}=\overline{\Delta}_{w}\)), since then all paths from a single source to the target node are independent (Appendix E). The single source multiplicative bound on \(\Delta_{w}\) can only be used to certify a radius of \(\rho=1\). For multiple nodes (\(\rho>1\)), we generalize Theorem 2 as follows:
**Theorem 3** (Generalized multiplicative bound).: _Assume an adversarial budget of \(\rho\) nodes and let \(\Delta_{1},\dots,\Delta_{\rho}\) denote the \(\rho\) largest \(\Delta_{i}\) for nodes \(i\) in the receptive field. Then we have \(\Delta\leq\overline{\Delta}_{M}\) for_
\[\overline{\Delta}_{M}\triangleq 1-\prod_{i=1}^{\rho}(1-\Delta_{i})\]
Proof in Appendix E. Notably, the multiplicative bound is tighter than a union bound. We specifically address the approximation error in detail in Appendix F.
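For concreteness, the following minimal sketch computes the bounds of Theorems 2 and 3 for an undirected `networkx` graph. The function names and graph API usage are our own illustration under these assumptions, not the authors' implementation; as discussed in Section 6, enumerating all simple paths can be expensive on dense graphs.

```python
import networkx as nx

def single_source_bound(G, w, v, k, p_d, p_a):
    """Upper bound on Delta_w (Theorem 2): probability that target v still
    receives a message from node w under edge deletion (p_d) and node
    feature ablation (p_a), for a k-layer message-passing GNN."""
    if w == v:
        return 1.0 - p_a  # the target's own features bypass all edges
    prob_all_intercepted = 1.0
    for path in nx.all_simple_paths(G, source=w, target=v, cutoff=k):
        n_edges = len(path) - 1                        # |q|: edges on simple path q
        prob_all_intercepted *= 1.0 - (1.0 - p_d) ** n_edges
    return (1.0 - prob_all_intercepted) * (1.0 - p_a)

def generalized_bound(G, v, k, p_d, p_a, rho):
    """Upper bound on Delta (Theorem 3) for an adversary controlling rho nodes:
    combine the rho largest single-source bounds in the receptive field."""
    receptive_field = nx.single_source_shortest_path_length(G, v, cutoff=k)
    deltas = sorted(
        (single_source_bound(G, w, v, k, p_d, p_a) for w in receptive_field),
        reverse=True,
    )[:rho]
    prob_no_message = 1.0
    for d in deltas:
        prob_no_message *= 1.0 - d
    return 1.0 - prob_no_message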
Figure 2: Single source bounding constant \(\Delta_{i}\) for different edge deletion probabilities \(p_{d}\) and node feature ablation probabilities \(p_{a}\). White isolines indicate \(\Delta_{i}=0.5\) and separate the theoretically certifiable region (\(\Delta_{i}<0.5\)) from the uncertifiable region (\(\Delta_{i}\geq 0.5\)). (a) For the target node, \(p_{d}\) does not affect \(\Delta_{i}\). (b) Direct neighbor of target node, single edge. (c) Second-hop neighbor, single path (two edges). (a-c) More distant nodes have larger theoretically certifiable regions.
**Estimating the smoothed classifier in practice.** Computing the probabilities \(p_{v,y^{*}}(G)\) and \(p_{v,\tilde{y}}(G)\) exactly is challenging in practice. We instead estimate them similarly to previous work by drawing Monte-Carlo samples from \(\phi\) (Cohen et al., 2019; Levine and Feizi, 2020; Bojchevski et al., 2020). We first identify the majority class \(y^{*}\) and follow-up class \(\tilde{y}\) using a few samples. We then draw more samples to estimate a lower bound \(\underline{p_{v,y^{*}}(G)}\) on \(p_{v,y^{*}}(G)\) and an upper bound \(\overline{p_{v,\tilde{y}}(G)}\) on \(p_{v,\tilde{y}}(G)\). We use the Clopper-Pearson Bernoulli confidence interval and apply Bonferroni correction to ensure that the bounds hold simultaneously with significance level \(\alpha\) (with probability of at least \(1-\alpha\)). Moreover, our smoothed classifier abstains from predicting if \(\underline{p_{v,y^{*}}(G)}\leq\overline{p_{v,\tilde{y}}(G)}\), meaning if the estimated probabilities are too similar. We experimentally analyze abstained predictions in Appendix H.
**Practical robustness certificates.** Finally, our robustness certificates also hold when bounding \(\Delta\) and the label probabilities as the following Corollary shows (Proof in Appendix A):
**Corollary 2**.: _We guarantee \(g_{v}(G)=g_{v}(G^{\prime})\) with probability of at least \(1-\alpha\) for any \(G^{\prime}\in B_{\rho}(G)\) if \(\underline{p_{v,y^{*}}(G)}-\overline{\Delta}>\overline{p_{v,\tilde{y}}(G)}+ \overline{\Delta}\), where \(y^{*}\) denotes the majority class, and \(\tilde{y}\) the follow-up class._
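A minimal sketch of this estimation and the final check of Corollary 2 is shown below. We use the Clopper-Pearson interval from `statsmodels` (`method="beta"`); the function name, return format, and the exact Bonferroni bookkeeping (two simultaneous one-sided bounds at level \(\alpha/2\) each) are our illustrative choices, not the authors' code.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

def certify_node(votes_select, votes_est, delta_bar, alpha=0.01):
    """votes_select: predicted classes from n0 samples of phi (class selection).
    votes_est: predicted classes from n1 samples of phi (bound estimation).
    delta_bar: upper bound on Delta, e.g. from Theorem 3."""
    counts = np.bincount(votes_select, minlength=2)
    y_star, y_tilde = np.argsort(counts)[::-1][:2]  # majority and follow-up class
    n1 = len(votes_est)
    n_star = int(np.sum(votes_est == y_star))
    n_tilde = int(np.sum(votes_est == y_tilde))
    # Each endpoint of a two-sided (1 - alpha) Clopper-Pearson interval is a
    # one-sided level-alpha/2 bound, so both hold jointly with prob. >= 1 - alpha.
    p_star_lo = proportion_confint(n_star, n1, alpha=alpha, method="beta")[0]
    p_tilde_hi = proportion_confint(n_tilde, n1, alpha=alpha, method="beta")[1]
    if p_star_lo <= p_tilde_hi:
        return y_star, "abstain"
    certified = p_star_lo - delta_bar > p_tilde_hi + delta_bar  # Corollary 2
    return y_star, "certified" if certified else "not certified"
```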
## 6 Discussion
Our certificates require knowledge about the graph structure \(\mathbf{A}\) and can only account for structure perturbations if the perturbed adjacency matrix \(\mathbf{A}^{\prime}\) is known. While adversarial edge deletion potentially increases robustness (due to fewer messages to intercept), adversaries could arbitrarily increase the number of messages via edge insertion. Moreover, the number of simple paths in the graph can be huge. We argue, however, that (1) graphs are usually sparse, (2) the number of paths can be reduced via sparsification, and (3) we have to compute paths only once for each graph.
**Limitations of ablation certificates.** Since the probability to receive messages from perturbed nodes increases the more nodes are adversarial, \(\Delta\) is monotonically increasing in \(\rho\). Thus, the certifiable radius is bounded independently of the label probabilities (uncertifiable region for \(\Delta\geq 0.5\) due to Corollary 1). This bound depends on the graph structure and changes for each target node, but in the case of node feature ablation smoothing we can directly determine the bound (Proof in Appendix I):
**Proposition 3**.: _Given fixed \(p_{a}>0\) and \(p_{d}=0\), it is impossible to certify a radius \(\rho\) if \(p_{a}\leq\sqrt[\rho]{0.5}\)._
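This follows directly from Proposition 2 together with Corollary 1: since the certification condition requires \(2\Delta<p_{v,y^{*}}(G)-p_{v,\tilde{y}}(G)\leq 1\), any \(\Delta\geq 0.5\) is uncertifiable, and under node feature ablation smoothing only:

\[\Delta=1-p_{a}^{\rho}\geq 0.5\quad\Longleftrightarrow\quad p_{a}^{\rho}\leq 0.5\quad\Longleftrightarrow\quad p_{a}\leq\sqrt[\rho]{0.5}.\]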
This bound is only determined by the parameters of the smoothing distribution (\(p_{d},p_{a}\)) and does not depend on the base GNN \(f\). The existence of an upper bound is in stark contrast to certificates whose largest certifiable radius depends on the inverse Gaussian CDF of the label probabilities (Cohen et al., 2019). Such certificates are theoretically tighter than ablation certificates: For example, if the base classifier \(f\) classifies all samples from \(\phi\) the same (\(p_{y^{*}}=1\)), they would certify a radius of \(\infty\), whereas the radius of ablation-based certificates is bounded. We leave the development of even stronger gray-box certificates for GNNs to future work.
**Limitations of probabilistic certificates.** Our certificates are probabilistic and hold with significance level \(\alpha\). Notably, our method still yields strong guarantees for significantly smaller significance levels (we show additional experiments for varying \(\alpha\) in Appendix H). We found that \(\alpha\) has just a minor effect on the certificate strength, since increasing it cannot increase the largest certifiable radius, which is theoretically bounded. Recent works also "derandomize" probabilistic certificates, that is they compute the label probabilities exactly (Levine and Feizi, 2020, 2021). In Appendix J we propose the first derandomization technique that leverages message-passing structures. We believe future work can build upon it towards even more efficient derandomization schemes.
**Threat model extensions.** Notably, edge-deletion smoothing (\(p_{d}>0\)) also yields guarantees for adversarial node insertion and deletion, as disconnected nodes cannot alter the prediction.3 As discussed above, we can only evaluate such certificates with structural information, that is how inserted/deleted nodes are connected to target nodes: Given clean graphs (as in our evaluation), we know which nodes adversaries _could delete_. Given perturbed graphs, we know which nodes _could have been inserted_. Note that although we can technically extend our method to certify adversarial edge deletion, we focus on the novel problem of arbitrary feature manipulations of entire nodes since there are already certificates against edge-modification attacks (Bojchevski et al., 2020).
## 7 Experimental Evaluation
We evaluate our certificates for different GNN architectures trained on node classification datasets. Our certificates work in standard transductive learning settings used throughout the literature and we report such results in Appendix H. However, combining transductive learning with an evasion threat model comes with serious shortcomings for the evaluation of certificates, since no separate test data is available. For example, we can usually achieve high accuracy by overfitting a Multi-Layer Perceptron (MLP) to labels predicted by GNNs during training. MLPs do not propagate information through the graph at test time and are robust to adversarial messages. Instead, we evaluate our certificates in semi-supervised _inductive_ learning settings with hold-out test nodes:
**Experimental setup.** As labelled nodes, we draw 20 nodes per class for training and validation, and 10% of the nodes for testing. We use the labelled training nodes and all remaining unlabeled nodes as training graph, and successively insert (hold-out) validation and test nodes. We train on the training graph, optimize hyperparameters against validation nodes, assume adversaries control nodes at test time, and compute certificates for all test nodes. We also delete edges and ablate node features during training (Appendix G). We use \(n_{0}=1{,}000\) samples for estimating the majority class, \(n_{1}=3{,}000\) samples for certification, and set \(\alpha=0.01\). We conduct five experiments for random splits and model initializations, and report averaged results including standard deviation (shaded areas in the plots). When comparing settings (e.g. architectures), we run \(1{,}000\) experiments for each setting and draw deletion and ablation probabilities from \([0,1]\) for each experiment (sampling separately for training and inference). Then, we compute dominating points on the Pareto front for each setting. For brevity, we only show points whose clean accuracy is at most \(5\%\) below the maximum achieved performance.
**Datasets and models.** We train our models on citation datasets: Cora-ML (Bojchevski and Gunnemann, 2018; McCallum et al., 2000) with 2,810 nodes, 7,981 edges and 7 classes; Citeseer (Sen et al., 2008) with 2,110 nodes, 3,668 edges and 6 classes; and PubMed (Namata et al., 2012) with 19,717 nodes, 44,324 edges and 3 classes. We implement smoothed classifiers for four architectures with two message-passing layers: Graph convolutional networks (GCN) (Kipf and Welling, 2017), graph attention networks (GAT and GATv2) (Velickovic et al., 2018; Brody et al., 2022), and soft medoid aggregation networks (SMA) (Geisler et al., 2020). More details in Appendix G. We also compute certificates for the larger graph ogbn-arxiv (Hu et al., 2020) in Appendix H.
**Evaluation metrics.** We report the classification accuracy of the smoothed classifier on the test set (_clean accuracy_), and the _certified ratio_, that is the number of test nodes whose predictions are certifiably robust for a given radius. Since all nodes have different receptive field sizes, we also divide the certifiable radius by the receptive field size. The resulting _normalized_ robustness better reflects what percentage of the “attack surface” (that is the number of nodes the adversary could attack) can be certified. Moreover, we report the area under this (normalized) certified ratio curve (_AUCRC_). For completeness, we also report the _certified accuracy_ in Appendix H, that is the number of test nodes that are correctly classified (without abstaining) _and_ certifiably robust for a given radius.
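As an illustration, these metrics can be computed as follows. This is a sketch with our own function names; the exact grid used for the AUCRC integration is not specified in the text, so the discretization below is an assumption.

```python
import numpy as np

def certified_ratio(cert_radii, r):
    """Fraction of test nodes certifiably robust at radius r."""
    return float(np.mean(np.asarray(cert_radii) >= r))

def aucrc(cert_radii, receptive_sizes=None, num_points=101):
    """Area under the (optionally normalized) certified-ratio curve."""
    radii = np.asarray(cert_radii, dtype=float)
    if receptive_sizes is not None:  # normalize by the "attack surface"
        radii = radii / np.asarray(receptive_sizes, dtype=float)
        grid = np.linspace(0.0, 1.0, num_points)
    else:
        grid = np.arange(0.0, radii.max() + 1.0)
    curve = np.array([np.mean(radii >= r) for r in grid])
    # trapezoidal rule over the grid
    return float(np.sum((curve[1:] + curve[:-1]) / 2.0 * np.diff(grid)))
```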
**Message-interception smoothing.** In Figure 3 (a,b) we demonstrate our certificates for specific edge deletion probabilities \(p_{d}\) and node feature ablation probabilities \(p_{a}\). By making our certificates message-passing aware, we can (1) certify robustness against arbitrary feature perturbations of entire nodes, (2) analyze robustness locally in the receptive fields by incorporating the "attack surface", and (3) provide stronger guarantees for attacks against nodes at larger distances to target nodes.
Figure 3: Smoothed GAT on Cora-ML: (a) Robustness at different distances to target nodes (\(p_{d}\)=\(0.31\), \(p_{a}\)=\(0.794\), with skip, ACC=\(0.79\)). (b) Robustness normalized by receptive field size (“attack surface”). (c) Naive baseline comparison (base certificate (Bojchevski et al., 2020), \(10^{5}\) samples, \(\alpha\)=\(0.01\)).
**First certificate for stronger adversaries.** Experimentally we obtain significantly better robustness guarantees compared to previous (smoothing-based) certificates for Graph Neural Networks. Specifically, existing certificates for GNNs only certify perturbations to a few attributes \(\tilde{\rho}\) in the entire graph. Our certificates are novel as they provide guarantees for much stronger adversaries that can arbitrarily manipulate features of multiple nodes in the graph. To compare these two approaches, consider a naive baseline that certifies \(\rho=\tilde{\rho}/d\) nodes, where \(d\) is the number of attributes per node.4 If each node in the graph had just a single feature, the number of certifiable nodes \(\rho\) would be high. As the number of features \(d\) per node increases, however, the baseline dramatically deteriorates. In contrast, our certificates are entirely independent of the dimension \(d\) and hold regardless of how high-dimensional the underlying node data might be. We demonstrate this comparison in Figure 3 (c) for the first smoothing-based certificate for GNNs (Bojchevski et al., 2020), assuming attribute deletions against second-hop nodes (\(p_{+}\)=0, \(p_{-}\)=0.6). However, the superiority of our certificate regarding robustness against all features of entire nodes holds for any other GNN certificate proposed so far.
Footnote 4: We are the first to certify such strong adversaries. Thus no baselines exist so far and we compare our method against existing certificates for GNNs using the naïve baseline we propose above.
**Stronger certificates for sparser graphs.** Notably, our gray-box certificates incorporate graph structure and become stronger for sparser graphs. This is in contrast to black-box certificates that ignore the underlying message-passing principles of GNNs. We demonstrate this by applying graph sparsification, which significantly improves robustness while retaining high clean accuracy: First, sparsification reduces the number of paths in the graph and thus reduces the number of messages to intercept. Second, sparsification reduces the number of nodes in the receptive fields and thus the "attack surface", that is the number of nodes that send messages. In Figure 4 (a,b) we apply GDC preprocessing (Gasteiger et al., 2019) to the Cora-ML graph at test time. GDC preprocessing yields directed graphs and reduces the number of edges in the graph from \(15{,}962\) to \(14{,}606\) (we set the sparsification threshold of GDC to \(\epsilon=0.022\) and ignore resulting edge attributes). Interestingly, evaluating the model on the sparsified graph yields significantly higher certifiable robustness, although both approaches show high clean accuracy of \(80\%\). Note that for the validity of our certificates we assume adversaries perturb nodes after sparsification and cannot attack the sparsification itself.
**Efficient message-interception smoothing.** Drawing Monte-Carlo samples from \(\phi\) to estimate the smoothed classifier is usually the most costly part when computing smoothing-based certificates (Cohen et al., 2019). In Figure 4 (c) we show that our certificates are much more sample efficient as we do not benefit from more than a few thousand samples from \(\phi\). This is in stark contrast to existing smoothing-based certificates for GNNs (Bojchevski et al., 2020). For a fair comparison, we adopt their transductive setting and compute certificates for \(p_{d}=0.3\) and \(p_{a}=0.85\). Bojchevski et al. (2020) use \(10^{6}\) Monte-Carlo samples for certifying test nodes on Cora-ML, which takes up to 25 minutes. In contrast, our certificates saturate already for \(2{,}000\) Monte-Carlo samples in this setting, which takes only 17 seconds (preprocessing Cora-ML takes 8 additional seconds). Our gray-box certificates are significantly more sample-efficient while also providing guarantees against much stronger adversaries. We hypothesize that our certificates saturate much faster as the certifiable radius does not depend on the inverse Gaussian CDF of the label probabilities as discussed in Section 6.
Figure 4: (a,b) Sparsification significantly improves certifiable robustness of our gray-box certificates to second-hop attacks since sparsification reduces (a) messages to intercept, and (b) receptive field sizes and thus the “attack surface” (Smoothed GAT, Cora-ML, \(p_{d}=0.31\), \(p_{a}=0.71\), with skip-connection, \(\text{ACC}=0.8\)). (c) Our certificate with largest certifiable radius of 4 with varying samples for certification (Smoothed GAT, Cora-ML, \(p_{d}=0\), \(p_{a}=0.85\)). Our certificates are more sample efficient than existing smoothing-based certificates for GNNs.
**Different classifiers.** In Figure 5 (a) we compare robustness-accuracy tradeoffs for different GNNs against second-hop attacks. Attention-based message-passing GNNs (Velickovic et al., 2018) dominate. We hypothesize that the degree-normalization of GCN (Kipf and Welling, 2017) may be problematic for the performance under randomized edge deletion. Our approach may promote novel message-passing architectures, specifically designed for smoothed classifiers.
**Skip-connections.** With higher node feature ablation probability, more messages from the target node itself will be intercepted, which may be detrimental to accuracy. Assuming adversaries do not attack target nodes, we can modify the architecture for improved robustness-accuracy tradeoffs (Figure 5b). To this end, we forward the non-ablated input graph through the GNN _without edges_, and add the resulting final representation of each node to the final representation when forwarding the (ablated) graph with graph structure. We use the same weights of the base GNN, but more complex skip-connections are straightforward. Such skip-connections yield better robustness-accuracy tradeoffs against second-hop attacks, but we also lose guarantees for the target node itself. To account for that, future work could deploy existing smoothing methods for features of target nodes separately: e.g., if nodes represent images, we could deploy Gaussian smoothing (Cohen et al., 2019) on node features sent through the skip-connection and still obtain robustness guarantees for target nodes.
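A sketch of this modified forward pass is shown below, assuming a PyTorch-Geometric-style interface `gnn(x, edge_index)` and with `phi_feats`/`phi_edges` standing in for the feature-ablation and edge-deletion samplers (names and interface are our assumptions; we also assume the GNN handles an empty edge set, where message passing reduces to each node's own transformation).

```python
def forward_with_skip(gnn, x, edge_index, phi_feats, phi_edges):
    """Skip-connection variant: forward the non-ablated graph *without edges*
    and add its final representation to the smoothed forward pass."""
    no_edges = edge_index[:, :0]          # empty edge set: no message passing
    h_skip = gnn(x, no_edges)             # each node only sees its own features
    h_main = gnn(phi_feats(x), phi_edges(edge_index))  # smoothed graph
    return h_main + h_skip                # shared GNN weights for both passes
```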
**Training-time smoothing parameters.** In Figure 5 (c) we show that ablating less during training can improve the robustness-accuracy tradeoffs. Note that only inference-time smoothing parameters determine the strength of our certificates, and the probabilities \(p_{d},p_{a}\) during training are just hyperparameters that we can optimize to improve the robustness-accuracy tradeoffs. In detail, we experiment with three different settings: Using the same ablation probabilities during training and inference (\(p_{t}=p_{e}\)), ablating 10% more during training (\(p_{t}=p_{e}\)+0.1), and ablating 10% less during training (\(p_{t}\)=\(p_{e}\)\(-\)0.1). Note that we use \(\max(\min(p_{t},1),0)\) to project the training-time parameters into \([0,1]\).
**Robustness-accuracy.** We compare robustness-accuracy tradeoffs of three different settings: (1) edge deletion and feature ablation (\(p_{d}>0\), \(p_{a}>0\)), (2) edge deletion only (\(p_{d}>0,p_{a}=0\)), and (3) feature ablation only (\(p_{d}=0,p_{a}>0\)). Our experiments show that edge deletion _and_ feature ablation smoothing achieves significantly better robustness-accuracy tradeoffs against attribute attacks to the _second-hop_ neighborhood and dominates on Cora-ML and Citeseer (Figure 6a,b). On PubMed, edge deletion smoothing dominates. More results (e.g. with skip-connections) in Appendix H.
Figure 5: Second-hop attacks on Cora-ML: (a) Robustness-accuracy tradeoffs for different GNN architectures. (b) Skip-connections yield improved robustness-accuracy tradeoffs for node feature ablation smoothing. (c) Ablating less during training yields better robustness-accuracy tradeoffs (GAT).
Figure 6: Robustness-accuracy tradeoffs for second-hop attacks against smoothed GAT models (without skip). Edge deletion and node ablation dominates on Cora-ML (a) and Citeseer (b). On PubMed (c), edge deletion is stronger. Lines connect dominating points on the Pareto front.
## 8 Related Work
**GNN robustness.** The vast majority of GNN robustness works focus on heuristic defenses, including adversarial graph detection (Zhang and Ma, 2020; Zhang et al., 2019); architecture modifications (Brody et al., 2022; Zhang et al., 2019); robust aggregations (Geisler et al., 2020); robust training procedures (Xu et al., 2019; Zugner and Gunnemann, 2019), transfer learning (Tang et al., 2020); and graph preprocessing techniques such as edge pruning (Zhang and Zitnik, 2020; Wu et al., 2019), low-rank approximations (Entezari et al., 2020), and graph anomaly detection (Ma et al., 2021).
The effectiveness of such seemingly robust defenses can only be assessed against existing adversarial attacks. Heuristic defenses do not guarantee robustness, and may even be broken by stronger attacks later on (Mujkanovic et al., 2022). Instead, we are interested in robustness certificates that _provably guarantee_ the stability of predictions. However, robustness certificates for GNNs are still in their infancy (Gunnemann, 2022):
**Certificates for GNNs.** Most certificates for GNNs are designed for specific architectures (Zugner and Gunnemann, 2020; Jin et al., 2020; Bojchevski and Gunnemann, 2019; Zugner and Gunnemann, 2019), which limits their applicability despite the provable guarantees they provide. Bojchevski et al. (2020) present the first tight and efficient smoothing-based, model-agnostic certificate for graph-structured data. However, their method comes with crucial limitations: First, their method cannot certify robustness against arbitrary feature modifications of entire nodes. Second, their black-box certificate deletes edges but completely ignores the underlying _message-passing_ principle. Third, their certificate requires an expensive evaluation of the smoothed classifier, which questions the practicability of their certificate beyond theoretical robustness assessments.
Randomized ablation certificates for image classifiers (Levine and Feizi, 2020) are another approach for discrete data. Such certificates have already been applied to point cloud classifiers (Liu et al., 2021) and even for individual attribute perturbations in GNNs (Bojchevski et al., 2020). However, Bojchevski et al. (2020) show that their method outperforms such ablation certificates for individual attributes. In contrast, we propose to certify entire nodes, instead of only a few of their attributes. As already discussed, applying their ablation certificates for image classifiers directly to GNNs comes with serious shortcomings that we overcome (Section 4 and details in Appendix B).
**Gray-box certificates.** Exploiting model knowledge to derive tighter randomized smoothing certificates constitutes a widely unexplored research problem. The first works derive tighter guarantees using information about the model's gradients (Mohapatra et al., 2020; Levine et al., 2020). Recently proposed collective certificates (Schuchardt et al., 2021) incorporate knowledge about the receptive fields of GNNs. Their certificates are _orthogonal_ to ours, and our certificates could lead to significant improvements in such collective settings, as adversaries cannot attack first-hop neighbors of all nodes simultaneously. Schuchardt and Gunnemann (2022) propose tight gray-box certificates for models that are invariant to spatial transformations.
## 9 Conclusion
We propose novel gray-box, message-passing aware robustness certificates for GNNs against strong threat models where adversaries can arbitrarily manipulate all features of multiple nodes in the graph. The main idea of our certificates is to intercept adversarial messages by randomly deleting edges and/or masking features of entire nodes. Our certificates are significantly stronger and more sample-efficient than existing methods. Future enhancements could smooth specific edges and nodes with different probabilities, for example to intercept messages from central nodes with higher probability. Our gray-box certificates could lead to novel architectures, training techniques and graph preprocessing techniques to further strengthen the robustness of GNNs against adversarial examples.
## Acknowledgments and Disclosure of Funding
This work has been funded by the German Federal Ministry of Education and Research, the Bavarian State Ministry for Science and the Arts, and the German Research Foundation, grant GU 1409/4-1. The authors of this work take full responsibility for its content. |
2304.13098 | Uncovering the Representation of Spiking Neural Networks Trained with
Surrogate Gradient | Spiking Neural Networks (SNNs) are recognized as the candidate for the
next-generation neural networks due to their bio-plausibility and energy
efficiency. Recently, researchers have demonstrated that SNNs are able to
achieve nearly state-of-the-art performance in image recognition tasks using
surrogate gradient training. However, some essential questions exist pertaining
to SNNs that are little studied: Do SNNs trained with surrogate gradient learn
different representations from traditional Artificial Neural Networks (ANNs)?
Does the time dimension in SNNs provide unique representation power? In this
paper, we aim to answer these questions by conducting a representation
similarity analysis between SNNs and ANNs using Centered Kernel Alignment
(CKA). We start by analyzing the spatial dimension of the networks, including
both the width and the depth. Furthermore, our analysis of residual connections
shows that SNNs learn a periodic pattern, which rectifies the representations
in SNNs to be ANN-like. We additionally investigate the effect of the time
dimension on SNN representation, finding that deeper layers encourage more
dynamics along the time dimension. We also investigate the impact of input data
such as event-stream data and adversarial attacks. Our work uncovers a host of
new findings of representations in SNNs. We hope this work will inspire future
research to fully comprehend the representation power of SNNs. Code is released
at https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA. | Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda | 2023-04-25T19:08:29Z | http://arxiv.org/abs/2304.13098v1 | # Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
###### Abstract
Spiking Neural Networks (SNNs) are recognized as the candidate for the next-generation neural networks due to their bio-plausibility and energy efficiency. Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training. However, some essential questions exist pertaining to SNNs that are little studied: _Do SNNs trained with surrogate gradient learn different representations from traditional Artificial Neural Networks (ANNs)? Does the time dimension in SNNs provide unique representation power?_ In this paper, we aim to answer these questions by conducting a representation similarity analysis between SNNs and ANNs using Centered Kernel Alignment (CKA). We start by analyzing the spatial dimension of the networks, including both the width and the depth. Furthermore, our analysis of residual connections shows that SNNs learn a periodic pattern, which rectifies the representations in SNNs to be ANN-like. We additionally investigate the effect of the time dimension on SNN representation, finding that deeper layers encourage more dynamics along the time dimension. We also investigate the impact of input data such as event-stream data and adversarial attacks. Our work uncovers a host of new findings of representations in SNNs. We hope this work will inspire future research to fully comprehend the representation power of SNNs. Code is released at [https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA](https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA).
## 1 Introduction
Lately, Spiking Neural Networks (SNNs) (Tavanaei et al., 2019; Roy et al., 2019; Deng et al., 2020; Panda et al., 2020; Christensen et al., 2022) have received increasing attention thanks to their biology-inspired neuron activation and efficient neuromorphic computation. SNNs process information with binary spike representation and therefore avoid the need for multiplication operations during inference. Neuromorphic hardware such as TrueNorth (Akopyan et al., 2015) and Loihi (Davies et al., 2018) demonstrate that SNNs can save energy by orders of magnitude compared to Artificial Neural Networks (ANNs).
Although SNNs can bring enormous energy efficiency in inference, training SNNs is notoriously hard because of their spiking activation function. This function returns a gradient that is zero almost everywhere (_i.e.,_ the Dirac delta function) and thus makes gradient-based optimization infeasible. To circumvent this problem, various training techniques have been proposed. For example, spike-timing-dependent plasticity (STDP) (Rao and Sejnowski, 2001)
either strengthens or weakens the synaptic weight based on the firing time; time-to-first-spike (Mostafa, 2017) encodes the information into the time of spike arrival to get a closed-form solution of the gradients. However, these two methods are restricted to small-scale tasks and datasets. The surrogate gradient technique (Bengio et al., 2013; Bender et al., 2018; Wu et al., 2018; Bellec et al., 2018; Kim et al., 2022b; Li et al., 2022), on the other hand, can achieve the best task performance by applying an alternative function during backpropagation. Combined with the surrogate gradient, SNNs can be optimized by the Backpropagation Through Time (BPTT) algorithm (Neftci et al., 2019), outperforming other learning rules in SNNs.
Despite increasing interest in pursuing high-performance SNNs with surrogate gradient training, there is limited understanding of how surrogate gradient training affects the representation of SNNs. Investigating this fundamental question is critical since the surrogate gradient-based BPTT algorithm mimics the way ANNs learn and is less biologically plausible than other learning rules like STDP. Therefore, it would be intriguing to study whether surrogate gradient-based SNNs learn different representations than ANNs. Understanding the representation learned in SNN can also promote further research developments, e.g., designing spiking-friendly architectures (Kim et al., 2022; Na et al., 2022) and exploring other ways to optimize SNNs (Bellec et al., 2020; Zhang and Li, 2020; Kim and Panda, 2021).
More concretely, we ask, do SNNs optimized by surrogate gradient BPTT learn distinct representations from ANNs? How do the width and depth of the neural network affect the representation learned in SNNs and ANNs? Does the extra temporal dimension in SNNs yield unique intermediate features? On neuromorphic datasets, how does the SNN process event-based data? In this paper, we aim to answer these core questions through a detailed analysis of ResNets (He et al., 2016) and VGG-series (Simonyan and Zisserman, 2015) models using a representation similarity analysis tool. Specifically, we utilize the popular Centered Kernel Alignment (CKA) (Kornblith et al., 2019) to measure the similarity between SNNs and ANNs. Fig. 1 demonstrates the overall workflow of our representation similarity analysis framework. Our analysis spans both the spatial and temporal dimensions of SNNs, as well as the impact of network architecture and input data.
Our contributions and findings include:
* We analyze the representation similarity between SNNs and ANNs using the centered kernel alignment to determine whether SNNs produce different feature representations from ANNs. We examine various aspects of representation similarity between SNNs and ANNs, including spatial and temporal dimensions, input data type, and network architecture.
* Surprisingly, our findings show that SNNs trained with surrogate gradient have a rather similar representation to ANNs. We also find that residual connections greatly affect the representations in SNNs.
Figure 1: **The representation similarity analysis workflow. The test images are fed into both ANN and SNN, then we record the intermediate feature for computing the correlation matrix, which is used for inferring the CKA similarity (Kornblith et al., 2019).**
* Meanwhile, we find that the time dimension in SNNs does not provide much unique representation. We also find that shallow layers are insensitive to the time dimension, with the representations at different time steps converging together.
## 2 Related Work
**Spiking Neural Networks (SNNs).** SNNs have gained increasing attention for building low-power intelligence. Generally, the SNN algorithms to obtain high performance can be divided into two categories: (1) ANN-SNN conversion (Rueckauer et al., 2016, 2017; Han et al., 2020; Sengupta et al., 2019; Han and Roy, 2020) and (2) direct training of SNNs from scratch (Wu et al., 2018, 2019). Conversion-based methods utilize the knowledge from ANN and convert the ReLU activation to a spike activation mechanism. This type of method can produce an SNN in a short time. For example, in Rueckauer et al. (2017), one can compute a percentile of the activations and set it as the firing threshold for spiking neurons. Authors in Deng and Gu (2021) and Li et al. (2021) decompose the conversion error to each layer and then propose to reduce the error by calibrating the parameters. However, achieving near-lossless conversion requires a considerable number of time steps to accumulate the spikes. Direct training from scratch allows SNNs to operate in extremely few time steps, even fewer than 5 (Zheng et al., 2020). To enable gradient-based learning, direct training leverages the surrogate gradient to compute the derivative of the discrete spiking function. This also benefits the choice of hyper-parameters in spiking neurons. Recent works (Fang et al., 2021; Rathi and Roy, 2020; Kim and Panda, 2021; Deng et al., 2022) co-optimize parameters, firing threshold, and leaky factor together via gradient descent. Our analysis is mostly based on directly trained SNNs, as converted SNNs only contain ANN features and may be misleading for representation comparison.
**Representation Similarity Analysis (RSA).** RSA (Kriegeskorte et al., 2008) was not originally designed for analyzing neural networks specifically. Rather, it is used for representation comparison between any two computational models. Prior works such as Khaligh-Razavi and Kriegeskorte (2014); Yamins et al. (2014) have used RSA to find the correlation between visual cortex features and convolutional neural network features. Authors of Seminar (2016); Raghu et al. (2017); Morcos et al. (2018); Wang et al. (2018) have studied RSA between different neural networks. However, recent work (Kornblith et al., 2019) argues that none of the above methods for studying RSA can yield high similarity even between two different initializations of the same architecture. They further propose CKA, which has become a powerful evaluation tool for RSA and has been successfully applied in several studies. For example, Nguyen et al. (2020) analyzes the representation pattern in extremely deep and wide neural networks, and Raghu et al. (2021) studies the representation difference between convolutional neural networks and vision transformers with CKA. In this work, we leverage this tool to compare ANNs and SNNs.
## 3 Preliminary
### ANN and SNN Neurons
In this paper, vectors/matrices are denoted with bold italic/capital letters (_e.g._\(\mathbf{x}\) and \(\mathbf{W}\) denote the input vector and weight matrix, respectively). Constants are denoted by small upright letters. For non-linear activation function in artificial neurons, we use the rectified linear unit (ReLU) (Krizhevsky et al., 2012), given by \(\mathbf{y}=\max(0,\mathbf{W}\mathbf{x})\). As for the non-linear activation function in spiking neurons, we adopt the well-known Leaky Integrate-and-Fire (LIF) model. Formally, given a membrane potential \(\mathbf{u}^{(t)}\) at time step \(t\) and a pre-synaptic input \(\mathbf{i}^{(t+1)}=\mathbf{W}\mathbf{x}^{(t+1)}\), the LIF neuron will update as
\[\mathbf{u}^{(t+1),\text{pre}}=\tau\mathbf{u}^{(t)}+\mathbf{i}^{(t+1)},\quad\mathbf{y}^{(t+1)} =\begin{cases}1&\text{if }\mathbf{u}^{(t+1),\text{pre}}>v_{th}\\ 0&\text{otherwise}\end{cases},\quad\mathbf{u}^{(t+1)}=\mathbf{u}^{(t+1),\text{pre}} \cdot(1-\mathbf{y}^{(t+1)}). \tag{1}\]
Here, \(\mathbf{u}^{(t+1),\text{pre}}\) is the pre-synaptic membrane potential, \(\tau\) is a constant leak factor within \((0,1)\). Let \(v_{th}\) be the firing threshold, the LIF neuron will fire a spike (\(\mathbf{y}^{(t+1)}=1\)) when the membrane potential exceeds the threshold; otherwise, it will stay inactive (\(\mathbf{y}^{(t+1)}=0\)). After firing, the spike output \(\mathbf{y}^{(t+1)}\) will propagate to
the next layer and become its input \(\mathbf{x}^{(t+1)}\). Note that here the layer index is omitted for simplicity. The membrane potential is reset to 0 if a spike fires (see the third sub-equation in Eq. 1).
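The LIF update of Eq. (1) can be simulated in a few lines of PyTorch. The following is a minimal sketch of a single fully-connected spiking layer (function name ours), using the defaults adopted later in the experiments (\(\tau=0.5\), \(v_{th}=1.0\)):

```python
import torch

def lif_forward(x_seq, W, tau=0.5, v_th=1.0):
    """Simulate Eq. (1) for one fully-connected LIF layer.
    x_seq: (T, batch, d_in) inputs over T time steps; W: (d_out, d_in)."""
    u = torch.zeros(x_seq.shape[1], W.shape[0])  # membrane potential, u^(0) = 0
    spikes = []
    for x_t in x_seq:
        i_t = x_t @ W.t()                 # pre-synaptic input i^(t+1) = W x^(t+1)
        u_pre = tau * u + i_t             # leaky integration
        y = (u_pre > v_th).float()        # fire when the potential crosses v_th
        u = u_pre * (1.0 - y)             # hard reset to 0 after a spike
        spikes.append(y)
    return torch.stack(spikes)            # binary spike train, (T, batch, d_out)

out = lif_forward(torch.rand(4, 8, 16), torch.randn(32, 16))  # T=4 time steps
```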
### Optimize SNN with Surrogate Gradient
To enable gradient descent for SNN, we adopt the BPTT algorithm (Werbos, 1990). Formally, denote the loss function value as \(L\), the gradient of the loss value with respect to weights can be formulated by
\[\frac{\partial L}{\partial\mathbf{W}}=\sum_{t=1}^{T}\frac{\partial L}{\partial \mathbf{y}^{(t)}}\frac{\partial\mathbf{y}^{(t)}}{\partial\mathbf{u}^{(t),\text{pre}}} \mathbf{K}^{(t)},\quad\text{where }\mathbf{K}^{(t)}=\left(\frac{\partial\mathbf{u}^{(t), \text{pre}}}{\partial\mathbf{i}^{(t)}}\frac{\partial\mathbf{i}^{(t)}}{\partial\mathbf{ W}}+\frac{\partial\mathbf{u}^{(t),\text{pre}}}{\partial\mathbf{u}^{(t-1)}}\frac{ \partial\mathbf{u}^{(t-1)}}{\partial\mathbf{u}^{(t-1),\text{pre}}}\mathbf{K}^{(t-1)} \right). \tag{2}\]
Here, the gradient is computed based on the output spikes from all time steps. In each time step, we denote by \(\mathbf{K}^{(t)}\) the gradient of the pre-synaptic membrane potential with respect to the weights, \(\frac{\partial\mathbf{u}^{(t),\text{pre}}}{\partial\mathbf{W}}\), which consists of the gradient of the pre-synaptic input and the gradient of the membrane potential from the previous time step.
As a matter of fact, all terms in Eq.2 can be easily differentiated except \(\frac{\partial\mathbf{y}^{(t)}}{\partial\mathbf{u}^{(t),\text{pre}}}\), which returns a gradient that is zero almost everywhere (the Dirac delta function). Therefore, gradient descent ends up either freezing the weights or updating them to infinity. To address this problem, the surrogate gradient has been proposed (Bender et al., 2018; Wu et al., 2018; Bellec et al., 2018; Neftci et al., 2019; Li et al., 2021b) to replace the Dirac delta function with another function:
\[\frac{\partial\mathbf{y}^{(t)}}{\partial\mathbf{u}^{(t),\text{pre}}}=\frac{1}{\alpha} \mathbbm{1}_{|\mathbf{u}^{(t),\text{pre}}-v_{th}|<\alpha}, \tag{3}\]
where \(\alpha\) is a hyper-parameter controlling the sharpness; with \(\alpha=1\), the surrogate gradient becomes the Straight-Through Estimator (Bengio et al., 2013).
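In frameworks such as PyTorch, Eq. (3) is typically realized with a custom autograd function: the forward pass applies the exact Heaviside step, while the backward pass substitutes the rectangular surrogate. A minimal sketch (class name ours, not the released code):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u_pre, v_th=1.0, alpha=1.0):
        ctx.save_for_backward(u_pre)
        ctx.v_th, ctx.alpha = v_th, alpha
        return (u_pre > v_th).float()          # exact spike in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (u_pre,) = ctx.saved_tensors
        # Rectangular surrogate of Eq. (3): 1/alpha inside the window, 0 outside
        window = (torch.abs(u_pre - ctx.v_th) < ctx.alpha).float()
        return grad_output * window / ctx.alpha, None, None

spike = SurrogateSpike.apply  # usage: y = spike(u_pre)
```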
Compared to other methods such as ANN-SNN conversion (Deng and Gu, 2021) or spike-timing-dependent plasticity (Caporale et al., 2008), BPTT using surrogate gradient learning yields the best performance in image recognition tasks. However, from a biological perspective, BPTT is implausible: for each weight update, BPTT requires the use of the transpose of the weights to transmit errors backward in time and assign credit for how past activity affected present performance. Running the network with transposed weights requires the network to either have two-way synapses or use a symmetric copy of the feedforward weights to backpropagate the error (Marschall et al., 2020). Therefore, the question remains whether the representation in SNNs learned with surrogate gradient-based BPTT actually differs from the representation in ANNs.
### Centered Kernel Alignment
Let \(\mathbf{X}_{s}\in\mathbb{R}^{m\times Tp_{1}}\) and \(\mathbf{X}_{a}\in\mathbb{R}^{m\times p_{2}}\) contain the representation in an arbitrary layer of SNN with \(p_{1}\) hidden neurons across \(T\) time steps and the representation in an arbitrary layer of ANN with \(p_{2}\) hidden neurons, respectively. Here \(m\) is the batch size and we concatenate features from all time steps in the SNN altogether. We intend to use a similarity index \(s(\mathbf{X}_{s},\mathbf{X}_{a})\) to describe how similar they are. We use the Centered Kernel Alignment (CKA) (Kornblith et al., 2019) to measure this:
\[\text{CKA}(\mathbf{K},\mathbf{L})=\frac{\text{HSIC}(\mathbf{K},\mathbf{L})}{ \sqrt{\text{HSIC}(\mathbf{K},\mathbf{K})\text{HSIC}(\mathbf{L},\mathbf{L})}}, \quad\text{HSIC}(\mathbf{K},\mathbf{L})=\frac{1}{(m-1)^{2}}tr(\mathbf{KHLH}). \tag{4}\]
Here, \(\mathbf{K}=\mathbf{X}_{s}\mathbf{X}_{s}^{\top},\mathbf{L}=\mathbf{X}_{a} \mathbf{X}_{a}^{\top}\) are the Gram matrices as shown in Fig.1. Each Gram matrix has the shape of \(m\times m\), reflecting the similarities between a pair of examples. For example, \(\mathbf{K}_{i,j}\) indicates the similarity between the \(i^{th}\) and \(j^{th}\) example in the SNN feature \(\mathbf{X}_{s}\). By further measuring the similarity between \(\mathbf{K}\) and \(\mathbf{L}\), one can determine whether the SNN has an inter-example similarity matrix similar to that of the ANN. Let \(\mathbf{H}=\mathbf{I}-\frac{1}{m}\mathbf{1}\mathbf{1}^{\top}\) be the centering matrix; the Hilbert-Schmidt Independence Criterion (HSIC) proposed by Gretton et al. (2005) provides a test statistic for determining whether two sets of variables are independent. HSIC = 0 implies independence. The CKA further normalizes HSIC to produce a similarity index between 0 and 1 (the higher the CKA, the more similar the input pair) which is invariant to isotropic scaling. In our implementation, we use the unbiased estimator of HSIC (Song et al., 2012; Nguyen et al., 2020) to calculate it across mini-batches.
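For illustration, Eq. (4) with linear kernels can be written directly from the definitions. This is a full-batch sketch using the biased HSIC estimator for clarity; as noted above, the paper's experiments use the unbiased minibatch estimator of Song et al. (2012).

```python
import numpy as np

def linear_cka(X_s, X_a):
    """CKA of Eq. (4) with linear kernels.
    X_s: (m, T*p1) SNN features with time steps concatenated; X_a: (m, p2)."""
    m = X_s.shape[0]
    K = X_s @ X_s.T                         # Gram matrix of SNN features
    L = X_a @ X_a.T                         # Gram matrix of ANN features
    H = np.eye(m) - np.ones((m, m)) / m     # centering matrix
    hsic = lambda A, B: np.trace(A @ H @ B @ H) / (m - 1) ** 2
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))
```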
## 4 Do SNNs Learn Different Representation from ANNs?
In this section, we comprehensively compare the representation learned in SNNs and ANNs. Our primary study case is ResNet with identity mapping block (He et al., 2016) on the CIFAR10 dataset, which is the standard architecture and dataset in modern deep learning for image recognition. 1 There are two differences between our SNNs and ANNs. First, ANNs adopt the Batch Normalization layer (Ioffe and Szegedy, 2015), and SNNs use the time-dependent Batch Normalization layer (Zheng et al., 2020), which normalizes the feature across all time steps (_i.e._, \(\mathbf{X}_{s}\)). Second, the ANNs use ReLU activation, and SNNs leverage the LIF spiking neurons. For default SNN training, we use direct encoding, \(\tau=0.5\) for the leaky factor, \(v_{th}=1.0\) for the firing threshold, \(T=4\) for the number of time steps, and \(\alpha=1.0\) for the surrogate gradient, which are tuned for the best training performance on SNNs. Detailed training setup and codes can be found in the supplementary material.
Footnote 1: We also provide RSA on VGG-series networks in Sec. A.1 and RSA on CIFAR100 dataset in Sec. A.2.
### Scaling up Width or Depth
We begin by studying how the spatial dimension of a model architecture affects the internal representation structure in ANNs and SNNs. We first investigate a simple model, ResNet-20, and then we either increase its number of layers or increase its channel number to observe the effect of depth and width, respectively. In the most extreme cases, we scale the depth to 164 and the width to \(16\times\) (see detailed network configuration in Table D.1). For each network, we compute CKA between all possible pairs of layers, including convolutional layers, normalization layers, ReLU/LIF layers, and residual block output layers.
Figure 2: **CKA heatmap between SNNs and ANNs with different depth and width on the CIFAR-10 dataset. Top: the CKA cross-layer heatmap across different depths from 20 layers to 164 layers. Middle: the CKA cross-layer heatmap across different widths from the original channel number to 16 times. Bottom: visualizing only the corresponding layer, which is the diagonal of the CKA heatmap. We find generally SNNs and ANNs have relatively high similarity, and deeper/wider networks have negative/positive effects on the representation similarity.**
Therefore, the total number of layers is much greater than the stated depth of the ResNet, as the latter only accounts for the convolutional layers in the network. Then, we visualize the result as a heatmap, with the \(x\) and \(y\) axes representing the layers of the network, going from the input layer to the output layer. Following Nguyen et al. (2020), our CKA heatmap is computed on 4096 images from the test dataset.
As shown in Fig. 2, the CKA heatmap exhibits a checkerboard-like grid structure, especially for the deeper neural networks. In ResNet-20, we observe a bright block in the middle and deep layers, indicating that ANNs and SNNs learn overlapping representations. As the network goes deeper, the CKA heatmap becomes darker, meaning that the representations in ANNs and those in SNNs diverge. Notably, a large portion of layers in artificial ResNet-164 exhibit significantly different representations from spiking ResNet-164 (CKA value \(<\)0.2), which demonstrates that deeper layers tend to learn disparate features.
In the middle part of Fig. 2, we progressively enlarge the channel number of ResNet-20. In contrast to depth, the heatmaps of wider neural networks become brighter, which indicates that the representations in SNNs and ANNs converge. Interestingly, although the majority of layers learn more similar representations between ANN and SNN in wide networks, the last several layers still learn different representations.
We further select only the diagonal elements in the heatmap and plot them in the bottom part of Fig. 2. Because SNNs and ANNs have the same network topology, this visualization is more specific and may accurately reveal the similarity between SNNs and ANNs at each corresponding layer. First, we find that the CKA curve of ResNet-20 shows relatively high values: most layers go above 0.5 and some of them even reach nearly 1.0. Interestingly, we observe that deeper networks tend to produce a curve with a jagged shape. This means some layers in SNN indeed learn different representations when compared to ANN, however, _the difference is intermittently mitigated_. In later sections, we will show that the mitigation of dissimilarity is performed by residual connections. As for width, we notice that CKA curves mostly become higher for wider networks, especially when comparing ResNet-20 and ResNet-20 \(\times\)8, where most layers have CKA values above 0.8.
similarity since it is a linear transformation. These results substantiate that _the convolutional layers and LIF layers in SNNs are able to learn different representations from ANNs. However, the representation in the residual branch still dominates the representation in post-residual layers and leads to the convergence of ANN's and SNN's representations_.
To further explore why residual connections can restore the representations in SNNs to be ANN-like, we conduct an ablation study, selectively disabling the residual connections in one of the three stages of spiking ResNet-56. In Fig. 4, we visualize the CKA heatmaps of the SNN with itself, meaning both the \(x\) and \(y\) axes index the same SNN's layers. The first heatmap demonstrates the full residual network, while the remaining three heatmaps show the partial residual networks, with the 1st, 2nd, and 3rd stages disabled, respectively. Our observations can be summarized as follows: (1) In terms of inter-stage similarity, residual connections can preserve the input information from the previous stage. In the 1st and 2nd heatmaps in Fig. 4, we find residual blocks can have high similarity with their former stage. The non-residual blocks, however, do not have this property. In the 3rd and 4th heatmaps, we can see that blocks without residual connections exhibit significantly different representations when compared to their former stage. Therefore, residual connections preserve the representations in early layers. As such, if ANN and SNN learn similar representations in the first layer, the similarity can propagate to very deep layers due to residual connections. (2) In terms of intra-stage similarity, the non-residual stage's heatmap appears uniform across all layers, meaning that the layers in this stage share similar representations. In contrast, residual stages share a grid structure.
Next, we verify the accuracy of SNNs and ANNs with and without residual connections, under different network depths. As shown in Table 2, both SNNs and ANNs can successfully train very deep networks if the residual connections are enabled.
Figure 4: **The effect of residual connections in SNNs.** We remove residual connections in one of three stages in the ResNet-56 and show the CKA heatmaps. The non-residual stage is annotated with green square \(\square\).
Figure 3: **Emergence of periodic jagged CKA curve. Left**: CKA curve of ResNet-110. We subplot the 10th and the 34th residual blocks in ResNet-110, which form a periodic jagged curve. **Top right**: The averaged CKA value in all blocks. **Bottom right**: The architecture specification of the residual block we used.
In this case, though SNNs do not surpass the accuracy of ANNs, the gap is relatively small, with 1\(\sim\)2% accuracy degradation. However, if the residual connections are removed from SNNs, the gap between the accuracies of ANNs and SNNs enlarges significantly, ranging from 5\(\sim\)56%. Therefore, we can conclude that _the residual connections help the gradient descent optimization in SNNs and regularize the representations in SNNs to be similar to those in ANNs, so that SNNs can achieve task performance similar to ANNs._
### Scaling up Time Steps
The results of the previous sections help characterize the effects of spatial structure on internal representation differences between SNNs and ANNs. Next, we ask whether the time dimension helps SNNs learn some unique information. To verify this, we train several spiking ResNet-20 with 4/8/16/32/64/128 time steps and calculate the ANN-SNN CKA similarity. In Fig. 5, we visualize the CKA heatmaps and curves between artificial ResNet-20 and spiking ResNet-20 with various time steps. Notably, we cannot find significant differences among these heatmaps. Looking at the CKA curves, we also discover that many layers overlap, especially when we focus on the residual block outputs (the local maxima). The similarities for different numbers of time steps converge to roughly the same values, meaning that the time step variable does not provide much unique representation in SNNs.
To further analyze the representation along the time dimension in SNNs, we compare the CKA among various time steps. Concretely, for any layer inside an SNN, we reshape the feature \(\mathbf{X}_{s}\) to \([\mathbf{X}^{(1)},\mathbf{X}^{(2)},\dots,\mathbf{X}^{(T)}]\) where \(\mathbf{X}^{(i)}\) is the _i-th_ time step's output. By computing the CKA similarity between arbitrary two time steps, _i.e.,_\(\text{CKA}(\mathbf{X}^{(i)},\mathbf{X}^{(j)})\), we are able to construct a CKA heatmap with \(x,y\) axes being the time dimension, which demonstrates whether the features are similar across different time steps. Fig. 6 illustrates such CKA heatmaps of outputs from all residual blocks in the spiking ResNet-20, with time steps varying from 4 to 32. In general, deeper residual block outputs exhibit darker CKA heatmaps, while shallower layers tend to be yellowish (more similar). In particular, all the residual blocks from the first stage have an all-yellow CKA heatmap, indicating extremely high similarity in these blocks.
| **Model** | **Type** | **Depth 20** | **Depth 38** | **Depth 56** | **Depth 74** |
| :-- | :-- | :--: | :--: | :--: | :--: |
| ANN | w/ residual connection | 91.06 | 92.34 | 92.98 | 92.85 |
| ANN | w/o residual connection | 91.32 | 91.17 | 89.62 | 21.07 |
| SNN | w/ residual connection | 89.63 | 91.14 | 91.94 | 91.83 |
| SNN | w/o residual connection | 86.50 | 82.64 | 33.61 | 10.00 |

Table 2: **The impact of residual connections on accuracy.**
Figure 5: **The effect of time steps in SNNs. Left: CKA heatmaps between ANNs and SNNs with different numbers of time steps. Right: The CKA curve of corresponding layers (diagonal values as in left).**
indicating extremely high similarity in these blocks. The second stage starts to produce differences across time steps, but any pair of time steps still shares a similarity of \(>\)0.8. The last stage, especially the last block, shows similarities of around 0.5 between different time steps. To summarize, the impact of time in SNNs gradually increases as features propagate through the network. In Appendix A.4, we provide the heatmaps of convolutional/LIF layers and find a similar trend.
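For reference, the similarity measure underlying these heatmaps can be sketched as follows: the linear-CKA variant computed between per-time-step feature matrices, with random tensors standing in for the recorded activations.

```python
import torch

def linear_cka(x, y):
    """Linear CKA between two feature matrices of shape [n_examples, dim]."""
    x = x - x.mean(dim=0, keepdim=True)     # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (y.T @ x).norm() ** 2            # ||Y^T X||_F^2
    return (hsic / ((x.T @ x).norm() * (y.T @ y).norm())).item()

# feats: one [N, D] activation matrix per time step (placeholders here).
T, N, D = 4, 987, 256
feats = torch.randn(T, N, D)
heatmap = [[linear_cka(feats[i], feats[j]) for j in range(T)] for i in range(T)]
for row in heatmap:
    print(" ".join(f"{v:.2f}" for v in row))
```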
We further conduct an empirical study to verify the findings in Fig. 6. More specifically, we define a sensitivity metric and measure it by reducing the number of time steps to 1 in certain layers of an SNN and recording the corresponding accuracy degradation. In Fig. 6 we find the first stage (s1) has the same representation along the time dimension while the last stage (s3) exhibits a more diverse representation. Therefore, we choose to reduce the number of time steps to 1 either in s1 or in s3. To achieve this "mixed-time-step SNN", we repeat/average the activation in the time dimension after s1/before s3 to match the dimensions. Table 3 summarizes the sensitivity results. We observe that reducing the first stage incurs much lower accuracy degradation (\(<\)1%), while reducing the last stage drops accuracy by 2\(\sim\)4%. Moreover, if the last stage only uses 1 time step, then increasing the time steps for the other two stages does not benefit the accuracy at all. This indicates that LIF neurons are more effective in deep layers than in shallow layers.
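The "mixed-time-step SNN" construction can be sketched as below: to run a stage with a single time step, we average its input over the time dimension and repeat the single-step output \(T\) times before the subsequent full-time-step stages; the stage modules here are placeholders.

```python
import torch
import torch.nn as nn

def run_with_one_step(stage, x):
    """Run `stage` with a single time step: average over T, then repeat to T."""
    T = x.shape[0]
    y = stage(x.mean(dim=0, keepdim=True))         # stage sees one time step
    return y.repeat(T, *([1] * (y.dim() - 1)))     # restore T for later stages

# Placeholder stages that act on [T, B, C, H, W] tensors.
s1, s2, s3 = nn.Identity(), nn.Identity(), nn.Identity()
x = torch.randn(8, 2, 16, 8, 8)

# Variant from Table 3: single-step first stage, full time steps afterwards.
out = s3(s2(run_with_one_step(s1, x)))
print(out.shape)                                   # torch.Size([8, 2, 16, 8, 8])
```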
### Representation under Event Data
In this section, we evaluate the CKA similarity on event-based datasets. We choose CIFAR10-DVS (Li et al., 2017) and N-Caltech 101 (Orchard et al., 2015) and train spiking/artificial ResNet-20. Since the ANN cannot easily process 4-dimensional spatio-temporal event data, we integrate all events into one frame for ANN training and into 10 frames for SNN training. Fig. 7 provides the CKA heatmaps/curves on the event datasets, showing a different similarity distribution from that on the CIFAR-10 dataset. The heatmaps have a different pattern and the curves do not exhibit a periodic jagged shape. In addition, the similarity distribution differs at the dataset level, i.e., CIFAR10-DVS and N-Caltech 101 show different
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Type**} & \multicolumn{4}{c}{**\# Time Steps**} \\ \cline{3-6} & & 4 & 8 & 16 & 32 \\ \hline \multirow{3}{*}{SNN} & Full time steps & 89.67 & 90.44 & 90.98 & 90.99 \\ & Reduce the time steps in the first stage & **88.81** & **89.70** & **89.91** & **90.47** \\ & Reduce the time steps in the last stage & 87.41 & 87.78 & 87.38 & 87.73 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **The sensitivity of time steps in SNNs.**
Figure 6: **The similarity across times in SNN.** Each heatmap shows the CKA among different time steps in the output of the residual block. “s” means stage, and “b” means block. The top/middle/bottom rows are spiking ResNet-20 with 4/8/32 time steps, respectively.
CKA curves and heatmaps. On N-Caltech 101, the SNN learns feature representations different from the ANN's in shallow and deep layers, but similar representations in intermediate layers. For CIFAR10-DVS, the similarity continues to decrease from 0.9 to 0.5 as the layers deepen. In summary, on event-based datasets, SNNs and ANNs exhibit a different CKA pattern than on natural image datasets, which implies that SNNs may have more room for optimization on this type of data. We put more CKA results on various models for the CIFAR10-DVS dataset in Appendix A.3.
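The event-to-frame integration described above can be sketched as follows: events \((x,y,t,p)\) are binned by timestamp into \(T\) frames with one channel per polarity, where \(T=1\) yields the ANN input and \(T=10\) the SNN input; the toy events and resolution are placeholders.

```python
import torch

def events_to_frames(x, y, t, p, T, H, W):
    """Bin events into T frames of shape [2, H, W], one channel per polarity."""
    frames = torch.zeros(T, 2, H, W)
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)
    b = (t_norm * T).clamp(max=T - 1).long()        # frame index of each event
    flat = ((b * 2 + p) * H + y) * W + x            # flat index into frames
    frames.view(-1).index_add_(0, flat, torch.ones(flat.numel()))
    return frames

n = 10_000                                          # toy event stream
x = torch.randint(0, 128, (n,))
y = torch.randint(0, 128, (n,))
p = torch.randint(0, 2, (n,))
t = torch.sort(torch.rand(n)).values
print(events_to_frames(x, y, t, p, T=10, H=128, W=128).shape)  # [10, 2, 128, 128]
```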
### Representation under Adversarial Attack Data
We next study the adversarial robustness of SNN and ANN using CKA. Inspired by the observation that quantized ANNs are robust to adversarial attacks (Lin et al., 2019), SNNs could inherit this property since their activations are also discrete. Previous works have explored understanding the inherent robustness of SNNs (Sharmin et al., 2020; Kundu et al., 2021; Liang et al., 2021; Kim and Panda, 2021). However, they either evaluate converted SNNs or use rate-encoded images. Here, we test the Projected Gradient Descent (PGD) attack (Madry et al., 2017) on directly trained SNNs and ANNs using direct encoding. Formally, we generate the adversarial images by restricting the \(L\)-infinity norm of the perturbation, given by
\[\mathbf{x}_{adv}^{k+1}=\Pi_{P_{\epsilon}(\mathbf{x})}(\mathbf{x}_{adv}^{k}+\alpha\,\text{sign}(\nabla_{\mathbf{x}_{adv}^{k}}L(\mathbf{x}_{adv}^{k},\mathbf{w},\mathbf{y}))), \tag{5}\]
where \(\mathbf{x}_{adv}^{k}\) is the generated adversarial sample at the \(k\)-\(th\) iteration. \(\Pi_{P_{\epsilon}(\mathbf{x})}(\cdot)\) projects the generated sample onto the projection space \(P_{\epsilon}(\mathbf{x})\), the \(\epsilon\)-\(L_{\infty}\) neighborhood of the clean sample. \(\alpha\) is the attack optimization step size. With a higher \(\epsilon\), the adversarial image is allowed to be perturbed in a larger space, thus degrading task performance further.
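Eq. (5) corresponds to the standard PGD loop sketched below; the placeholder classifier, loss, step size, and number of iterations are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps=10):
    """L-infinity PGD as in Eq. (5): signed-gradient ascent plus projection."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        step = x_adv.detach() + alpha * grad.sign()    # ascend the loss
        x_adv = x + (step - x).clamp(-eps, eps)        # project onto the eps ball
        x_adv = x_adv.clamp(0.0, 1.0)                  # stay a valid image
    return x_adv.detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder net
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y, eps=0.01, alpha=0.0025)
print((x_adv - x).abs().max().item())                  # <= eps
```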
We evaluate the performance of spiking and artificial ResNet-20 on the CIFAR-10 dataset with \(\epsilon\) values in \(\{0.001,0.005,0.01,0.02,0.05\}\) to generate adversarial images. We then compute the CKA value between the features of the clean images and the adversarially corrupted images. The results are summarized in Fig. 8 (left). We find that although the clean accuracy of ANN is higher than that of SNN, SNN has higher robustness against attacks. For example, the PGD attack with 0.01 \(L\)-infinity norm perturbation reduces the accuracy of ANN by 43%, while only reducing the accuracy of SNN by 22%. We also investigate the CKA similarity between clean and adversarial images, as shown in the second and third subplots of Fig. 8. We observe that the higher the robustness against adversarial attacks, the higher the similarity between clean and corrupted images. This intuition is confirmed by the CKA curves, which show that SNN has a higher similarity than ANN. We also observe several interesting phenomena. For example, the ANN
Figure 8: **The robustness against adversarial attack. Left: The accuracy of SNN and ANN after attack under different \(\epsilon\). Right: The CKA curve between clean images and adversarial images of ANN and SNN, respectively.**
Figure 7: **CKA Similarity on Event Dataset. We train spiking and artificial ResNet-20 on CIFAR10-DVS and N-Caltech 101, respectively. Left: the CKA heatmaps. Right: The CKA curves of corresponding layers between ANN and SNN.**
suffers a large decrease in similarity in the first block, even with a small \(\epsilon\) value of 0.001. Additionally, when we focus on the purple line (\(\epsilon=0.02\)), we notice that ANN and SNN preserve similarity comparably in earlier layers, but the ANN loses much more similarity than the SNN in the last block. These results provide insight into model robustness and suggest that SNNs are more robust than ANNs, especially in their shallow and deep layers.
## 5 Discussion and Conclusion
Given that SNNs are drawing increasing research attention due to their bio-plausibility and recent progress in task performance, it is necessary to verify if SNNs, especially those trained with the surrogate gradient algorithms, can or have the potential to truly develop desired features different from ANNs. In this work, we conduct a pilot study to examine the internal representation of SNNs and compare it with ANNs using the popular CKA metric. This metric measures how similarly the representations of two models respond across a set of examples. Our findings can be briefly summarized as follows:
1. Generally, the layer-wise similarity between SNNs and ANNs is high, suggesting SNNs trained with surrogate gradient learn similar representation with ANNs. Moreover, wider networks like ResNet-20 8\(\times\) can even have \(>0.8\) similarities for almost all layers.
2. For extremely deep ANNs and SNNs, the CKA value becomes lower; however, the residual connections play an important role in regularizing the representations. Our ablation studies demonstrate that the residual connections make SNNs learn representations similar to those of ANNs and help SNNs achieve high accuracy.
3. The time dimension does not provide much additional representation power in SNNs. We also demonstrate that the shallow layers learn completely static representation along the time dimension. Even reducing the number of time steps to 1 in shallow layers does not significantly affect the performance of SNNs.
4. On other types of datasets, SNNs may develop less similar representations with ANNs, _e.g.,_ event-data.
Our results show that SNNs optimized by the surrogate gradient algorithm do not learn spatial-temporal representations distinct from the spatial representations in ANNs. Current SNN learning relies on residual connections and wider networks (for example, Zheng et al. (2020) use ResNet-19, which is similar to our ResNet-20 8\(\times\)) to obtain decent task performance. However, our study suggests that this task performance is largely attributable to learning representations similar to those of ANNs. Furthermore, the time dimension has a limited effect on the SNN representation on static datasets like CIFAR10 and CIFAR100. In particular, the first stage of ResNets yields quite similar representations across time steps.
Nonetheless, our study is not a repudiation of SNNs. Our results are based on surrogate-gradient BPTT optimization, which, as aforementioned, is inherently bio-implausible and resembles the optimization method for ANNs. Therefore, it may not be surprising that SNNs and ANNs have similar representations under a similar optimization regime. Additionally, we find that the input data is also important in developing the representations. Indeed, the direct encoding used in SNNs feeds the same static image at every time step, further narrowing the gap between the representations of ANNs and SNNs.
Here, we provide several directions worth studying in the future: a) Bio-plausible learning rule for SNNs: surrogate gradient training tends to learn an ANN-like representation in SNN, thus it is necessary to develop an optimization method that suits SNN better. b) Spiking architecture design: a specialized SNN network architecture may avoid learning similar representation, _e.g.,_ Kim et al. (2022). c) Understanding the robustness of SNN: adversarial attack is inconsequential for human visual systems, which may be reflected in SNN as well. We believe the SNN robustness can be significantly improved. To conclude, our work tries to understand the representation of SNNs trained with surrogate gradient and reveals some counter-intuitive observations. We hope our work can inspire more research in pushing the limit of SNNs.
**Acknowledgements.** This work was supported in part by CoCoSys, a JUMP2.0 center sponsored by DARPA and SRC, Google Research Scholar Award, the National Science Foundation CAREER Award, TII (Abu Dhabi), the DARPA AI Exploration (AIE) program, and the DoE MMICC center SEA-CROGS (Award #DE-SC0023198). |
2306.14818 | Accelerating Molecular Graph Neural Networks via Knowledge Distillation | Recent advances in graph neural networks (GNNs) have enabled more
comprehensive modeling of molecules and molecular systems, thereby enhancing
the precision of molecular property prediction and molecular simulations.
Nonetheless, as the field has been progressing to bigger and more complex
architectures, state-of-the-art GNNs have become largely prohibitive for many
large-scale applications. In this paper, we explore the utility of knowledge
distillation (KD) for accelerating molecular GNNs. To this end, we devise KD
strategies that facilitate the distillation of hidden representations in
directional and equivariant GNNs, and evaluate their performance on the
regression task of energy and force prediction. We validate our protocols
across different teacher-student configurations and datasets, and demonstrate
that they can consistently boost the predictive accuracy of student models
without any modifications to their architecture. Moreover, we conduct
comprehensive optimization of various components of our framework, and
investigate the potential of data augmentation to further enhance performance.
All in all, we manage to close the gap in predictive accuracy between teacher
and student models by as much as 96.7% and 62.5% for energy and force
prediction respectively, while fully preserving the inference throughput of the
more lightweight models. | Filip Ekström Kelvinius, Dimitar Georgiev, Artur Petrov Toshev, Johannes Gasteiger | 2023-06-26T16:24:31Z | http://arxiv.org/abs/2306.14818v2 | # Accelerating Molecular Graph Neural Networks via Knowledge Distillation
###### Abstract
Recent advances in graph neural networks (GNNs) have allowed molecular simulations with accuracy on par with conventional gold-standard methods at a fraction of the computational cost. Nonetheless, as the field has been progressing to bigger and more complex architectures, state-of-the-art GNNs have become largely prohibitive for many large-scale applications. In this paper, we, for the first time, explore the utility of knowledge distillation (KD) for accelerating molecular GNNs. To this end, we devise KD strategies that facilitate the distillation of hidden representations in directional and equivariant GNNs and evaluate their performance on the regression task of energy and force prediction. We validate our protocols across different teacher-student configurations and demonstrate that they can boost the predictive accuracy of student models without altering their architecture. We also conduct comprehensive optimization of various components of our framework, and investigate the potential of data augmentation to further enhance performance. All in all, we manage to close as much as 59% of the gap in predictive accuracy between models like GemNet-OC and PaiNN with zero additional cost at inference.
## 1 Introduction
In the last couple of years, the field of molecular simulations has undergone a rapid paradigm shift with the advent of new, powerful computational tools based on machine learning (ML) [1; 2]. At the forefront of this transformation have been recent advances in graph neural networks (GNNs), which have brought about architectures that more effectively capture geometric and structural information critical for the accurate representation of molecules and molecular systems [3; 4]. Consequently, a multitude of GNNs have been developed, which now offer predictive performance on par with conventional gold-standard methods like density functional theory (DFT) at a fraction of the computational cost at inference time [5; 6; 7; 8]. This has, in turn, significantly accelerated the modeling of molecular properties and the simulation of diverse molecular systems.
Nonetheless, this progress - largely coinciding with the development of bigger and more complex models - has naturally come at the expense of increased computational cost [9; 10]. This has gradually limited the utility of state-of-the-art GNNs for large-scale molecular simulation applications, where inference throughput (i.e., how many samples can be processed in a given time) is critical for making fast continual predictions about the evolution of a system. Therefore, addressing the trade-off
between accuracy and computational demand remains essential for creating more affordable tools for molecular simulations and expanding the transformational impact of GNN models in the area.
Motivated by that, in this work, we, for the first time, investigate the potential of knowledge distillation (KD) in enhancing the performance and scalability of state-of-the-art GNNs for molecular simulations (Fig. 1). To this end, we devise custom strategies for KD on molecular GNNs, which we call _node-to-node (n2n)_, _edge-to-node (e2n)_ and _vector-to-vector (v2v)_ knowledge distillation. These overcome common limitations of KD for regression tasks by facilitating the distillation of hidden representations in directional and equivariant GNNs. We evaluate the performance of our KD protocols in augmenting the training process of different student models trained to predict molecular properties like energy and forces, and demonstrate the effectiveness of our protocols across teacher-student configurations, datasets and KD settings. We show that they can substantially improve the performance of student models without modifying their architecture, allowing us to reduce the performance gap between models like GemNet-OC [11] and PaiNN [12] by as much as \(59.16\%\) in energy predictions and \(16.7\%\) in force predictions, while fully preserving the throughput of the simpler student model.
In summary, the contributions of this paper are as follows:
* We explore, for the first time, the utility of KD for accelerating molecular GNNs - a large-scale, multi-output regression task, challenging to address with most current KD methods.
* We design custom KD strategies, which facilitate the distillation of hidden representations in directional and equivariant GNNs. We demonstrate the effectiveness of our protocols across diverse teacher-student configurations and datasets.
* We conduct a comprehensive analysis of different components of our KD strategies, as well as explore data augmentation techniques for further improving performance.
## 2 Background and related work
**Molecular simulations.** In this work, we consider molecular systems at an atomic level, i.e., \(N\) atoms represented by their atomic number \(\mathbf{z}=\{z_{1},...,z_{N}\}\in\mathbb{Z}^{N}\) and position \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\}\in\mathbb{R}^{N\times 3}\). Given a system, we want a model that can predict the energy \(E\in\mathbb{R}\) of the system, and the forces \(\mathbf{F}\in\mathbb{R}^{N\times 3}\) acting on each atom. Both these properties are of high interest when simulating molecular systems. The energy of a system is essential for the prediction of its stability, whereas the forces are important for molecular dynamics simulations, where computed forces are combined with the equations of motion to simulate the evolution of the system over time.
**GNNs for molecular systems.** GNNs are a suitable framework for modeling molecular systems. Each molecular system \((\mathbf{X},\mathbf{z})\) can be represented as a mathematical graph, where the set of atoms corresponds to the nodes \(\mathcal{V}\), and edges \(\mathcal{E}\) created between nodes by connecting the closest neighboring atoms (typically defined by a cutoff radius and/or a maximum number of neighbors). Hence, in the
Figure 1: Using knowledge distillation, we manage to significantly boost the predictive accuracy of different student models without altering their architecture. Hence, we substantially lessen the tradeoff between speed and performance in molecular GNNs, allowing us to run faster molecular simulations while maintaining predictive accuracy.
context of molecular simulations, we can create a GNN that operates on atomic graphs \(G=(\mathcal{V},\mathcal{E})\) by propagating information between the atoms and the edges, and makes predictions about the energy and forces of each system in a multi-output manner - i.e., \(\hat{E},\hat{\mathbf{F}}=\text{GNN}(\mathbf{X},\mathbf{z})\).
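As a minimal illustration, an atomic graph can be constructed from positions with a cutoff radius as sketched below; periodic boundary conditions and maximum-neighbor limits, which practical pipelines typically also handle, are omitted, and the cutoff value is an assumption.

```python
import torch

def radius_graph(pos, cutoff=6.0):
    """Directed edges (j, i) between atoms closer than `cutoff`, no self-loops."""
    dist = torch.cdist(pos, pos)                       # [N, N] pairwise distances
    mask = (dist < cutoff) & ~torch.eye(len(pos), dtype=torch.bool)
    src, dst = mask.nonzero(as_tuple=True)
    return torch.stack([src, dst]), dist[src, dst]     # edge_index [2, E], lengths

z = torch.randint(1, 90, (5,))                          # toy atomic numbers
pos = torch.rand(5, 3) * 8.0                            # toy positions (Angstrom)
edge_index, lengths = radius_graph(pos)
print(edge_index.shape, lengths.shape)
```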
The main challenge when modeling molecules and molecular properties is the number of underlying symmetries to consider. For instance, the total energy \(E\) of a system is not affected by (i.e., is _invariant_ to) rotations and translations of the system. However, the forces \(\mathbf{F}\) do change as we rotate a system - i.e., they are _equivariant_ to rotations. Therefore, to make accurate predictions about molecular systems, it is crucial to devise models that respect these symmetries. There is now a plethora of diverse molecular GNNs that achieve that, e.g., SchNet [13], DimeNet [6; 14], PaiNN [12], GemNet [7; 11], NequIP [5], and SCN [10].
**Knowledge distillation.** Knowledge distillation is a technique for compressing and accelerating ML models [15], which has recently demonstrated significant potential in domains like computer vision [16] and natural language modeling [17]. The main objective of KD is to create more efficient models by means of transferring knowledge (e.g. model parameters and activations) from large, computationally expensive, more accurate models, often referred to as teacher models, to simpler, more efficient models called student models [18]. Since the seminal work of Hinton _et al._[19], the field has drastically expanded methodologically, with the development of protocols that accommodate the distillation of "deeper" knowledge, more comprehensive transformation and fusion functions, as well as more robust distillation losses [18; 20]. Yet, these advances have mostly focused on classification, resulting in methods of limited utility in regression tasks. Moreover, most research in the area has been confined to grid-structured data (e.g., images, text, tabular data). Despite recent efforts to extend KD to graph data and GNNs, these have likewise only concentrated on classification tasks involving standard GNN architectures [21]. And, in particular, the application of KD to state-of-the-art, directional and equivariant GNN architectures, where features can include complex geometric information and span both node- and edge-level features, as well as to real-world regression problems in molecular simulations, is still unexplored.
## 3 Knowledge distillation in molecular GNNs
**Vanilla KD.** The standard loss function when training molecular GNNs is a loss that combines both the energy and force prediction error as follows:
\[\mathcal{L}_{0}=\alpha_{\text{E}}\mathcal{L}_{\text{E}}(\hat{E},E)+\alpha_{ \text{F}}\mathcal{L}_{\text{F}}(\hat{\mathbf{F}},\mathbf{F}), \tag{1}\]
where \(E\) and \(\mathbf{F}\) are the ground-truth energy and forces, \(\hat{E}\) and \(\hat{\mathbf{F}}\) are the predictions of the model of interest, and \(\mathcal{L}_{\text{E}}\) and \(\mathcal{L}_{\text{F}}\) are some loss functions weighted by \(\alpha_{\text{E}},\alpha_{\text{F}}\in\mathbb{R}\).
In KD, we augment this training process by defining an auxiliary knowledge distillation loss term \(\mathcal{L}_{\text{KD}}\), which is added to \(\mathcal{L}_{0}\) (with a factor \(\lambda\in\mathbb{R}\)) to derive a new training loss function \(\mathcal{L}\) of the form:
\[\mathcal{L}=\mathcal{L}_{0}+\lambda\mathcal{L}_{\text{KD}}. \tag{2}\]
This was originally proposed in the context of classification by leveraging the fact that the soft label predictions (i.e., the logits after softmax normalization) of a given (teacher) model carry valuable information that can complement the ground-truth labels in the training process of another (student) model [19]. Since then, this has become the standard KD approach - usually referred to as vanilla KD in the literature, which is often the foundation of new KD protocols. The main idea of this technique is to employ a KD loss \(\mathcal{L}_{\text{KD}}\) that enforces the student to mimic the predictions of the teacher model. This is commonly achieved by constructing a loss \(\mathcal{L}_{\text{KD}}=\text{KL}(z_{s},z_{t})\) based on the Kullback-Leibler (KL) divergence between the soft logits of the student \(z_{s}\) and the teacher \(z_{t}\).
However, this strategy - based on the distillation of the output of the teacher model only [18] - poses two significant limitations. Firstly, it is by design exclusively applicable to classification tasks, since there are no outputs analogous to logits in regression setups [15; 22]. This has consequently limited the utility of most KD methods for regression tasks. Secondly, this approach forces the student to emulate the final output of the teacher directly, which is often unattainable in regimes where the complexity gap between the two models is substantial, and thus detrimental to KD performance [23].
**Feature-based KD.** To circumvent these shortcomings, we focus on feature-based KD [18]. This is an extension of vanilla KD [24], which is concerned with the distillation of knowledge across the
intermediate layers of models. This allows more lightweight models to be trained to mimic features that are easier to assimilate compared to the final output directly [25]. In this paper, we perform knowledge distillation of intermediate representations by devising a loss on selected hidden features \(H_{\mathrm{s}}\in U_{\mathrm{s}}\) and \(H_{\mathrm{t}}\in U_{\mathrm{t}}\) in the student and teacher models respectively, which takes the form:
\[\mathcal{L}_{\text{KD}}=\mathcal{L}_{\text{feat}}(\mathcal{M}_{\mathrm{s}}(H_{ \mathrm{s}}),\mathcal{M}_{\mathrm{t}}(H_{\mathrm{t}})), \tag{3}\]
where \(\mathcal{M}_{\mathrm{s}}:U_{\mathrm{s}}\mapsto U\) and \(\mathcal{M}_{\mathrm{t}}:U_{\mathrm{t}}\mapsto U\) are transformations that map the hidden features to a common feature space \(U\), and \(\mathcal{L}_{\text{feat}}:U\times U\mapsto\mathbb{R}^{+}\) is some loss of choice. Possible options for the transformations \(\mathcal{M}_{\mathrm{s}},\mathcal{M}_{\mathrm{t}}\) include the identity transformation, linear projections and multilayer perceptron (MLP) projection heads; whereas for the distillation loss \(\mathcal{L}_{\text{feat}}\), typical functions are mean squared error (MSE) and mean absolute error (MAE).
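In code, Eqs. (1)-(3) amount to the sketch below, with a learned linear map as \(\mathcal{M}_{\mathrm{s}}\), the identity as \(\mathcal{M}_{\mathrm{t}}\), and MSE as \(\mathcal{L}_{\text{feat}}\); the loss weights, L1 base losses, and feature dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_s, d_t = 128, 256                     # student / teacher widths (assumed)
proj = nn.Linear(d_s, d_t)              # M_s: learned linear map; M_t: identity

def training_loss(E_pred, E_true, F_pred, F_true, H_s, H_t,
                  a_E=1.0, a_F=100.0, lam=1.0):
    base = a_E * F.l1_loss(E_pred, E_true) \
         + a_F * F.l1_loss(F_pred, F_true)              # Eq. (1)
    kd = F.mse_loss(proj(H_s), H_t.detach())            # Eq. (3), frozen teacher
    return base + lam * kd                              # Eq. (2)

n = 7                                                   # atoms in a toy system
loss = training_loss(
    torch.randn(1), torch.randn(1),                     # predicted / DFT energy
    torch.randn(n, 3), torch.randn(n, 3),               # predicted / DFT forces
    torch.randn(n, d_s), torch.randn(n, d_t))           # paired node features
loss.backward()
print(loss.item())
```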
The fundamental design decision when devising a KD strategy based on the distillation of internal representations is the choice of appropriate representations (i.e., features \(H_{s}\) and \(H_{t}\)). One needs to ensure that paired features are similar both in modeling capacity and relevance to the output. Most research on feature-based distillation on graphs has so far focused on models that only have one type of (scalar) features in single-output classification tasks [26, 27, 28, 29], thereby reducing the problem to the selection of layers to pair across the student and the teacher. This is often further simplified by utilizing models of the same architecture.
**Feature-based KD in molecular GNNs.** Most GNNs for molecular simulations contain diverse features (scalars, vectors and/or equivariant higher-order tensors based on spherical harmonics) organized across nodes and edges within a complex graph topology. These are continually evolved by model-specific operators to infer molecular properties, such as energy and forces, in a multi-output prediction fashion. Therefore, features often represent different physical, geometric and/or topological information relevant to specific parts of the output. This significantly complicates the design of an effective KD strategy, especially when the teacher and the student differ architecturally.
In this work, we set out to devise KD strategies that are representative and effective across various molecular GNNs. We consider GNNs that have diverse architectures and performance profiles, and can be organized in teacher-student configurations at different levels of architectural disparity. This is why we investigate the effectiveness of KD with respect to the following three GNN models:
* _SchNet_[30]: A simple GNN model based on continuous-filter convolutional layers, which only contains scalar node features \(\mathbf{s}\in\mathbb{R}^{d}\). These are used to predict the energy \(\hat{E}\). The force is then calculated as the negative gradient of the energy with respect to the atomic positions, i.e., \(\hat{\mathbf{F}}=-\nabla\hat{E}\).
* _PaiNN_[12]: An equivariant GNN model that contains invariant scalar node features \(\mathbf{s}\in\mathbb{R}^{d_{1}}\) - used for energy prediction; as well as geometric vectorial node features \(\mathbf{v}\in\mathbb{R}^{3\times d_{2}}\) that are equivariant to rotations and can thus be combined with the scalar features to make direct predictions of the forces (i.e., without computing gradients of the energy).
* _GemNet-OC_[11]: A GNN model that utilizes directional message passing between scalar node features \(\mathbf{h}\in\mathbb{R}^{d_{\mathrm{h}}}\) and scalar edges features \(\mathbf{m}\in\mathbb{R}^{d_{\mathrm{m}}}\). After each block of layers, these are processed through an output block, resulting in scalar node features \(\mathbf{x}_{\mathrm{E}}^{(i)}\) and edge features \(\mathbf{x}_{\mathrm{F}}^{(i)}\), where \(i\) is the block number. The output features from each block are aggregated into output features \(\mathbf{x}_{E}\) and \(\mathbf{x}_{F}\), which are used to compute the energy and forces respectively.
An overview of the features of the three models can be found in Table 1.
**Defining feature distillation strategies.** As the three models considered in this work contain very different kinds of features, there is room for considering different ways of distilling the information
\begin{table}
\begin{tabular}{l c c c} & SchNet & PaiNN & GemNet-OC \\ \hline Scalar node features & ✓ & ✓ & ✓ \\ Scalar edge features & & & ✓ \\ Vectorial node features & & ✓ & \\ Output blocks & & & ✓ \\ \end{tabular}
\end{table}
Table 1: Molecular GNNs can have diverse features depending on their architecture. This is an overview of the types of features available in the three models we use in this study.
contained in the respective features between model architectures. In the context of the three model architectures we have considered and their types of features, we devise three different KD strategies:
_node-to-node (n2n):_ As all three models considered in this study contain scalar node features \(H_{\text{node}}\), we can distill knowledge in between these directly by defining a loss \(\mathcal{L}_{KD}\), such that
\[\mathcal{L}_{KD}=\mathcal{L}_{\text{feat}}(\mathcal{M}_{s}(H_{\text{node}, \text{s}}),\mathcal{M}_{t}(H_{\text{node},\text{t}})). \tag{4}\]
_edge-to-node (e2n):_ The GemNet-OC model heavily relies on its edge features, which are a key component in the directional message passing defined in the architecture and can be useful as a KD resource. However, the other models considered here do not have similar edge features to distill to. To accommodate that, we propose a KD strategy where we transfer information from GemNet-OC's edge features \(H_{\text{edge},(i,j)}\) by first aggregating them as follows:
\[H_{\text{edge2node},i}=\sum_{j\in\mathcal{N}(i)}H_{\text{edge},(i,j)}, \tag{5}\]
where \(i\) is the node index. The resulting vector \(H_{\text{edge2node},i}\) is a scalar, node-level feature, and we can, therefore, use it to transfer knowledge to the student node features \(H_{\text{node},\text{s}}\) as in Eq. (4).
_vector-to-vector (v2v):_ Similarly, the PaiNN model defines special vectorial features, which differ substantially from the scalar (node and edge) features available in the other models. These are not scalars invariant to rigid transformations of the atoms, but geometric vectors that are equivariant with respect to rotations. This poses a new challenge when distilling knowledge from or onto PaiNN. To this end, we define a KD procedure, where we transfer knowledge between (equivariant) vectorial node features and (invariant) scalar edge features. We achieve that by noting that scalar edge features sit on an equivariant 3D grid since they are associated with an edge between two atoms in 3D space. Hence, we can aggregate the edge features \(\{H_{\text{edge},(i,j)}\}_{j\in\mathcal{N}(i)}\) corresponding to a given node \(i\) into node-level equivariant vectorial features \(H_{\text{vec},i}\) by considering the unit vector \(\mathbf{u}_{ij}=\frac{1}{\left\lvert\mathbf{x}_{j}-\mathbf{x}_{i}\right\rvert}(\mathbf{x}_{j}-\mathbf{x}_{i})\) that defines the direction of the edge \((i,j)\), such that
\[H_{\text{vec},i}^{(k)}=\sum_{j\in\mathcal{N}(i)}\mathbf{u}_{i,j}H_{\text{edge},(i, j)}^{(k)}, \tag{6}\]
with the superscript \(k\) indicating the channel. This fulfills the condition of equivariance with respect to rotations as each vector \(\mathbf{u}_{ij}\) is equivariant to rotations, with \(H_{\text{edge},(i,j)}^{(k)}\) - a scalar not influencing its direction.
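Both aggregations reduce to scatter-sums over each node's incoming edges, as the sketch below illustrates on a toy graph; tensor names follow Eqs. (5) and (6), and the graph and feature sizes are placeholders.

```python
import torch

N, E, d = 4, 6, 8                                       # toy graph sizes
edge_index = torch.tensor([[0, 1, 2, 3, 1, 2],          # source atoms j
                           [1, 0, 1, 2, 2, 3]])         # target atoms i
H_edge = torch.randn(E, d)                              # scalar edge features
pos = torch.rand(N, 3)                                  # atom positions

j, i = edge_index
# e2n (Eq. 5): sum incoming edge features into a scalar node feature.
H_e2n = torch.zeros(N, d).index_add_(0, i, H_edge)

# v2v (Eq. 6): weight each edge's unit direction u_ij by its scalar channels,
# giving rotation-equivariant vectorial node features of shape [N, 3, d].
u = pos[j] - pos[i]
u = u / u.norm(dim=-1, keepdim=True)                    # u_ij, shape [E, 3]
contrib = u.unsqueeze(-1) * H_edge.unsqueeze(1)         # [E, 3, d]
H_vec = torch.zeros(N, 3, d).index_add_(0, i, contrib)
print(H_e2n.shape, H_vec.shape)
```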
**Baseline KD strategies.** To validate the performance of our KD strategies, we evaluate their performance against 2 vanilla-based KD approaches suitable for regression tasks.
_Vanilla (1):_ As mentioned above, the main problem with using vanilla KD for regression is the lack of features analogous to logits. One way of adapting vanilla KD for regression is by steering the student to mimic the final output of the teacher directly [31]:
\[\mathcal{L}_{\text{KD}}=\alpha_{\text{E}}\mathcal{L}_{\text{E}}(\hat{E}_{ \text{s}},\hat{E}_{\text{t}})+\alpha_{\text{F}}\mathcal{L}_{\text{F}}(\hat{ \mathbf{F}}_{\text{s}},\hat{\mathbf{F}}_{\text{t}}), \tag{7}\]
where the subscripts \({}_{\text{s}}\) and \({}_{\text{t}}\) refer to the predictions of the student and teacher, respectively. Note that, unlike in classification, this approach does not provide much additional information in regression tasks, except for some limited signal about the error distribution of the teacher model [15, 22].
_Vanilla (2):_ One way to enhance the teacher signal during training is to consider the fact that many GNNs for molecular simulations make separate atom- and edge-level predictions which are consequently aggregated into a final output. For instance, the total energy \(E\) of a system is usually defined as a sum of the predicted contributions from each atom \(\hat{E}=\sum_{i}\hat{E}_{i}\). Hence, we note that we can extend the aforementioned vanilla KD approach by imposing a loss on these granular predictions instead. Following the energy definition above, the KD loss can be expressed as:
\[\mathcal{L}_{\text{KD}}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{\text{E}}(\hat{E}_{i,\text{s}},\hat{E}_{i,\text{t}}). \tag{8}\]
These individual energy contributions are not part of the labeled data, but, when injected during training, provide more fine-grained information than the aggregated prediction.
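A minimal sketch of Eq. (8): per-atom energy contributions of student and teacher are matched directly, while the total energy remains the per-system sum of contributions; the toy batch and random contributions are placeholders.

```python
import torch
import torch.nn.functional as F

batch = torch.tensor([0, 0, 0, 1, 1])            # system id of each atom
E_i_s = torch.randn(5, requires_grad=True)       # student per-atom contributions
E_i_t = torch.randn(5)                           # teacher per-atom contributions

L_kd = F.mse_loss(E_i_s, E_i_t)                  # Eq. (8), matched atom-wise

# The supervised path still sums contributions per system: E = sum_i E_i.
E_hat = torch.zeros(2).index_add_(0, batch, E_i_s)
print(L_kd.item(), E_hat)
```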
## 4 Experimental results
To evaluate our proposed methods, we perform experiments on the OC20-2M [32] dataset - a representative subset of the diverse OC20 dataset for molecular simulations [11]. We focus on the S2EF task, i.e., predicting energies and forces for a given system (atom types \(\mathbf{z}\) and positions \(\mathbf{X}\)), which is the task of broadest interest [32]. We use the models as implemented in the OC20 codebase2 (see Appendix A for detailed information about the training procedure and different model hyperparameters). We then validate the robustness of our approach by conducting additional experiments on the COLL dataset introduced in [14].
Footnote 2: [https://github.com/Open-Catalyst-Project/ocp](https://github.com/Open-Catalyst-Project/ocp)
**Benchmarking molecular GNNs.** We selected SchNet, PaiNN and GemNet-OC for our experiments as they cover most of the accuracy-complexity spectrum. To highlight this, we start by first benchmarking the predictive accuracy and computational cost of the three models on the OC20-2M [32] dataset. In conjunction with the default PaiNN model - referred to as PaiNN-big in this paper, we also utilize a more lightweight version of the architecture (referred to as PaiNN-small), where we reduce the number of hidden layers and their dimensionality. We train SchNet, PaiNN-small and PaiNN-big to convergence ourselves, whereas the GemNet-OC model we employ is the pre-trained model as available within the OC20 repository. The benchmarking results, summarized in Table 2, demonstrate the substantial tradeoff between predictive accuracy and computational cost across different GNN architectures, and motivate the need for methods that can alleviate this limitation.
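Throughput numbers like those in Table 2 can be measured along the lines of the sketch below (warm-up iterations followed by synchronized timing); the placeholder model and batch are assumptions, and absolute numbers depend on hardware.

```python
import time
import torch

@torch.no_grad()
def throughput(model, batch, n_samples, warmup=10, iters=50):
    """Samples processed per second for repeated forward passes on one batch."""
    for _ in range(warmup):
        model(batch)                              # warm up kernels / caches
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()                  # wait for queued GPU work
    return iters * n_samples / (time.perf_counter() - start)

model = torch.nn.Linear(64, 1)                    # placeholder for a molecular GNN
batch = torch.randn(32, 64)                       # placeholder batch of 32 systems
print(f"{throughput(model, batch, n_samples=32):.1f} samples/sec")
```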
**Similarity analysis.** To make our analyses exhaustive, we set out to design experiments involving teacher and student architectures of a variable degree of architectural disparity. As a proxy of that, we derive similarity scores based on centered kernel alignment (CKA) [33; 34]. In particular, we calculate the CKA similarity score across the hidden node representations of the trained SchNet, PaiNN and GemNet-OC (averaged over \(n=987\) nodes). The results of this analysis are summarized in Fig. 2. Focusing on the intra-model similarities first (the diagonal in Fig. 2), we observe that PaiNN and SchNet exhibit similar behavior, with representations having a generally high degree of similarity within the two models. This indicates that the information captured across the respective architectures is similar, which is consistent with the fact that the features of PaiNN and SchNet are iteratively updated by adding displacement features computed at each layer. GemNet-OC, however, displays completely different behavior, where the representations extracted at each layer are
\begin{table}
\begin{tabular}{l c c c c c} & Inference Throughput & \multicolumn{4}{c}{OC20 S2EF Validation} \\ & Samples / GPU sec. \(\uparrow\) & Energy MAE \(\mathrm{meV}\downarrow\) & Force MAE \(\mathrm{meV}/\mathrm{\AA}\downarrow\) & Force cos \(\uparrow\) & EFwT \% \(\uparrow\) \\ \hline SchNet & \(788.2\) & \(1308\) & \(65.1\) & \(0.204\) & \(0\) \\ PaiNN-small & \(618.2\) & \(489\) & \(47.1\) & \(0.345\) & \(0.085\) \\ PaiNN-big & \(237.8\) & \(440\) & \(45.3\) & \(0.376\) & \(0.14\) \\ GemNet-OC & \(75.8\) & \(286\) & \(25.7\) & \(0.598\) & \(1.06\) \\ \end{tabular}
\end{table}
Table 2: Evaluation of the performance of the four baseline models on the OC20 S2EF validation dataset (trained on OC20-2M), averaged across OC20’s four validation sets. The results show a clear tradeoff between accuracy and computational cost. For individual validation sets, see Appendix B.
Figure 2: Similarity analysis between the node features of SchNet, PaiNN and GemNet-OC using CKA.
significantly different from the rest of the architecture. This is likewise in line with the GemNet-OC model, where each layer embedding is a separate output feature. When examining the similarity between different models instead, we see that, generally speaking, the node features in SchNet and PaiNN are similar, whereas those between SchNet and GemNet-OC, and PaiNN and GemNet-OC, diverge significantly as we move deeper into GemNet-OC.
**KD setup.** Based on the aforementioned analyses, we define the following teacher-student pairs, covering the whole spectrum of architectural disparity: PaiNN-big to PaiNN-small (_same_ architecture); PaiNN-big to SchNet (_similar_ architectures); GemNet-OC to PaiNN-big (_different_ architectures). We train the student models by utilizing an offline KD strategy [18], where we distill knowledge from the more competent, pre-trained teacher model to the simpler, more lightweight student model. We augment the training of each student model with our KD protocols and evaluate the effect on predictive accuracy against the two vanilla KD approaches. If not mentioned otherwise, we utilize the following setup: we use MSE as a distillation loss \(\mathcal{L}_{\text{feat}}\); a learned linear as a transformation function \(\mathcal{M}_{s}\) on the features of the student; and the identity transformation as \(\mathcal{M}_{t}\). When distilling knowledge from GemNet-OC, we use the aggregated output block features \(\mathbf{x_{\text{E}}}\) and \(\mathbf{x_{\text{F}}}\) as node features and edge features, respectively. This is similar to the review-based setup proposed in [35]. For PaiNN and SchNet, we use the final node features. Other training details are available in Appendix A.
**Knowledge distillation results.** The results of the experiment are summarized in Table 3, presenting a comparative analysis of the predictive performance of different student models with and without the implementation of knowledge distillation. Especially notable was the improvement in energy predictions for PaiNN-big when trained with GemNet-OC as a teacher under the _n2n_ KD strategy. This configuration led to a reduction in PaiNN-big's energy MAE from 440 \(\mathrm{meV}\) to 349 \(\mathrm{meV}\). This equates to a decrease of over 20 % compared to the base PaiNN-big model, thus closing nearly 60 % of the accuracy gap between GemNet-OC and PaiNN-big in energy predictions. The _n2n_ KD strategy also led to an improvement in force predictions, albeit a less dramatic one, bridging 9.3 % of the
\begin{table}
Table 3: Comparative predictive performance of the student models on the OC20 S2EF validation set when trained with and without knowledge distillation (columns: Energy MAE in \(\mathrm{meV}\downarrow\), Force MAE in \(\mathrm{meV}/\mathrm{\AA}\downarrow\), Force cos \(\uparrow\), EFwT in \%\(\uparrow\)); the key results are discussed in the text.
\end{table}
performance gap between the two models. We manage to improve on that with our custom _e2n_ and _v2v_ protocols, elevating the improvements in force predictions to 16.7 % and 15.6 % respectively. Nonetheless, this came at the cost of smaller gains in energy prediction, which dropped to 0.6 % and 0.8 % respectively. In contrast, _n2n_ proved to be the best strategy for both energy and force predictions when distilling PaiNN-big into PaiNN-small. For this teacher-student configuration, we were able to close the gap in energy and force prediction between the two models by \(\sim\)\(65\%\) and \(20\%\) respectively. The effect of _v2v_ KD on energy predictions was similar at 60.8 %. Yet, this approach was detrimental to force predictions (-9.8 %). In both cases, our two vanilla KD protocols were not as effective, either yielding smaller improvements or significantly impairing accuracy. Still, they showed the most promising results when distilling knowledge from PaiNN-big to SchNet. However, it is noteworthy that the effect of KD for this pair was tangibly lower, at around 11 % for energy improvements and 2.5 % for forces. Based on previous research [23], we deduce that this may be a result of the vast gap in complexity and performance between the two models, which affects the learning of the substantially more lightweight student model.
These experiments showcase the effectiveness of our KD strategies, which allowed us to consistently boost the accuracy of student models across all examined teacher-student configurations. Importantly, these findings remained consistent when broadening our experiment to the COLL dataset, where we similarly observed significant performance gains across all model configurations (see Appendix C). This further highlights the robustness of our training protocols and their utility across molecular datasets. Finally, we reiterate a central facet of our approach in that we enhance the performance of our student models solely by imposing an additional KD loss during training, without any structural alterations to their architecture. Hence, the inference throughput of the distilled student models remained completely unaffected.
**Effect of KD on model similarity.** We continue our investigation of KD on molecular GNNs by examining how it affects the student architectures in the various teacher-student configurations we have studied. To this end, we analyze how the CKA similarity between the teacher and the student models changes with the introduction of KD during training. We observed that KD introduces strong and specific similarity gains in layers that are used for KD, which also propagate along the student architecture (see Fig. 3). We saw similar behavior across KD strategies and teacher-student configurations (see Appendix F), allowing us to monitor and quantify the effect of KD on student models as we explore different KD settings and design choices. Hence, we believe that such similarity-based analyses can serve as a useful tool for quantitatively profiling KD procedures, which can elucidate the mechanism behind KD and inform the development of KD protocols.
**Hyperparameter studies** To increase the breadth of our work, we additionally conducted a thorough study to evaluate the effect of different design choices within our KD strategy. We present the results of this study here, hoping to provide valuable insights into the trends we have observed, which can inform future research in the area. The results are further detailed in Appendix E.
_Effect of distillation loss_: In terms of distillation losses \(\mathcal{L}_{\text{feat}}\), we explored: mean squared error (MSE); mean absolute error (MAE); and also directly optimizing CKA. Our analyses showed that MAE performed similarly to MSE, whereas using CKA substantially hurt accuracy.
Figure 3: Similarity analysis between Gemnet-OC and PaiNN-big without KD (left) and with KD (right). It illustrates a clear similarity gain within the feature pair used during KD (indicated with a \(\diagup\) in the middle heatmap), which also propagates to earlier layers of the student architecture. Similarity analyses for other KD strategies and teacher-student configurations are presented in Appendix F.
_Effect of transformation function_: We also investigated a number of different transformation functions and the effect they have on performance. The transformations we utilized include: the identity transformation (when appropriate); learned linear transformations and MLP projection heads; as well as variants of the structure-preserving transformations described in [36]. Our results (see Appendix E.1) showed that the examined transformations either hurt performance or only provided a marginal improvement in performance compared to our default linear mapping.
_Effect of feature selection_: We additionally explored the effect of feature selection on KD performance. In particular, we analyzed the change in the predictive accuracy of PaiNN as a student model as we distill: features from earlier layers in the teacher (GemNet-OC); onto earlier layers in the student; across multiple layers in the teacher and the student. We observed that our default, review-based strategy was the best-performing KD protocol for this teacher-student configuration, with other approaches either hurting performance or only providing a marginal improvement (see Appendix E.3). We complemented our experiments by performing CKA-based similarity analyses, which allowed us to gain insights into how these protocols change the architecture of the student (see Appendix F).
## 5 Data augmentation
As for most other applications, data labeling for molecular data is costly as it requires running computationally expensive quantum mechanical calculations to obtain energies and forces. We explored two protocols for knowledge distillation via training on novel data points labeled by the teacher, and briefly describe them here. See Appendix D for more details.
**Random rattling.** Apart from the relaxation trajectories in OC20, there is also a dataset called "Rattled", where random noise has been added to a subset of the systems along the relaxation trajectories, and the forces and energies have then been computed for these rattled systems using DFT. Adding noise to existing structures has also been used in the context of pretraining of molecular GNNs [37; 38] or as a regularization strategy [39]. However, adding noise to existing samples and aligning the predictions between teacher and student (without ground truth labels) did not show significant improvement. Additionally, we tried using gradient ascent to find perturbations that maximize the discrepancy between the teacher and student predictions, similar to [40], but this approach increased training time and did not show improvements over random noise.
**Synthetic Data.** One weakness of the OC20 datasets is that some samples originate from the same relaxation trajectory and are therefore correlated. To tackle this, we generated our own distilled dataset coined _d1M_ consisting of 1M samples, which were generated by sampling novel systems (generated with the OC Datasets codebase 3), running relaxations with the pre-trained GemNet-OC 2M model, and then subsampling approx. 10% of the frames. We tried different ways of incorporating this new d1M dataset, all of which are based on joint training on the 2M as well as the d1M datasets (similar to what we did with the rattled systems). To explore different combinations of the ground truth DFT samples and the d1M samples during training, we defined two hyperparameters determining (a) how many of the samples per batch should originate from each of the datasets, and (b) how to weight the loss contributions based on the dataset of origin, as sketched after the footnote below. Unfortunately, and contrary to similar approaches, e.g., in speech recognition [41], the results we observed did not significantly improve on the baseline models.
Footnote 3: [https://github.com/Open-Catalyst-Project/Open-Catalyst-Dataset](https://github.com/Open-Catalyst-Project/Open-Catalyst-Dataset)
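The two mixing hyperparameters can be sketched as below: a per-batch fraction of d1M samples and a weight on their loss contribution; the fraction, weight, and toy tensors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_mixed_batch(dft, d1m, batch_size=16, frac_d1m=0.25):
    """Draw a batch with a fixed fraction of teacher-labeled (d1M) samples."""
    n_d1m = int(batch_size * frac_d1m)
    i_dft = torch.randint(len(dft[0]), (batch_size - n_d1m,))
    i_d1m = torch.randint(len(d1m[0]), (n_d1m,))
    return (dft[0][i_dft], dft[1][i_dft]), (d1m[0][i_d1m], d1m[1][i_d1m])

head = torch.nn.Linear(8, 1)                       # placeholder model
loss_fn = lambda x, y: F.mse_loss(head(x).squeeze(-1), y)

dft = (torch.randn(100, 8), torch.randn(100))      # toy DFT-labeled data
d1m = (torch.randn(400, 8), torch.randn(400))      # toy distilled (d1M) data
(dft_x, dft_y), (d1m_x, d1m_y) = sample_mixed_batch(dft, d1m)
w_d1m = 0.5                                        # down-weight distilled labels
loss = loss_fn(dft_x, dft_y) + w_d1m * loss_fn(d1m_x, d1m_y)
print(loss.item())
```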
## 6 Conclusion
In this paper, we investigate the utility of knowledge distillation as a means of distilling larger, more computationally expensive GNNs for molecules into smaller, more efficient models. To this end, we propose three distinct feature-based KD strategies, which facilitate the distillation of intermediate representations across diverse GNN models. Our analyses demonstrate that our protocols can significantly enhance the performance of molecular GNN without any modifications to their architecture, allowing us to run faster molecular simulations without substantially impairing predictive accuracy. We validate our approach across different teacher-student configurations and datasets, showcasing the effectiveness and robustness of the method. One limitation of our approach is that even though inference times are not affected, we need to perform forward passes through the teacher during training, increasing training times.
With this work, we aim to elucidate the potential of KD in the domain of molecular simulations and stimulate future work in the area, which can bring about important advances in scientific disciplines as broad as material science, drug discovery and catalysis. However, it is noteworthy to highlight that such technologies can be potentially harmful if, e.g., used to simulate and discover systems that are toxic or intended to be used in harmful technology.
## Acknowledgments and Disclosure of Funding
This project would not have been possible without the computing resources provided by: the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre (Berzelius resource); the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Chalmers e-Commons at Chalmers (C3SE) partially funded by the Swedish Research Council through grant agreement no. 2022-06725; as well as the Chair of Aerodynamics and Fluid Mechanics at Technical University of Munich. We are also grateful to the team behind 2022 London Geometry and Machine Learning Summer School (LOGML), where this research project was initially conceived. We would like to specially thank Guocheng Qian and I-Ju Chen for their contribution during the early conceptualizing stages of this project during and in the first weeks following the summer school. We also thank the Open Catalyst team for their open-source codebase, support and discussions. In particular, we would like to thank Muhammed Shuaibi for providing the COLL dataset in LMDB format. Figures assembled in BioRender.com. Funding disclosure: Dimitar Georgiev is supported by UK Research and Innovation [UKRI Centre for Doctoral Training in AI for Healthcare grant number EP/S023283/1]; Filip Ekstrom Kelvinius is financially supported by the Excellence Center at Linkoping-Lund in Information Technology (ELLIIT).
|
2310.12557 | DepWiGNN: A Depth-wise Graph Neural Network for Multi-hop Spatial
Reasoning in Text | Spatial reasoning in text plays a crucial role in various real-world
applications. Existing approaches for spatial reasoning typically infer spatial
relations from pure text, which overlooks the gap between natural language and
symbolic structures. Graph neural networks (GNNs) have showcased exceptional
proficiency in inducing and aggregating symbolic structures. However, classical
GNNs face challenges in handling multi-hop spatial reasoning due to the
over-smoothing issue, i.e., the performance decreases substantially as the
number of graph layers increases. To cope with these challenges, we propose a
novel Depth-Wise Graph Neural Network (DepWiGNN). Specifically, we design a
novel node memory scheme and aggregate the information over the depth dimension
instead of the breadth dimension of the graph, which empowers the ability to
collect long dependencies without stacking multiple layers. Experimental
results on two challenging multi-hop spatial reasoning datasets show that
DepWiGNN outperforms existing spatial reasoning methods. The comparisons with
the other three GNNs further demonstrate its superiority in capturing long
dependency in the graph. | Shuaiyi Li, Yang Deng, Wai Lam | 2023-10-19T08:07:22Z | http://arxiv.org/abs/2310.12557v2 | # DepWiGNN: A Depth-wise Graph Neural Network for Multi-hop Spatial Reasoning in Text +
###### Abstract
Spatial reasoning in text plays a crucial role in various real-world applications. Existing approaches for spatial reasoning typically infer spatial relations from pure text, which overlook the gap between natural language and symbolic structures. Graph neural networks (GNNs) have showcased exceptional proficiency in inducing and aggregating symbolic structures. However, classical GNNs face challenges in handling multi-hop spatial reasoning due to the over-smoothing issue, _i.e._, the performance decreases substantially as the number of graph layers increases. To cope with these challenges, we propose a novel **D**epth-**W**ise **G**raph **N**eural **N**etwork (**DepWiGNN**). Specifically, we design a novel node memory scheme and aggregate the information over the depth dimension instead of the breadth dimension of the graph, which empowers the ability to collect long dependencies without stacking multiple layers. Experimental results on two challenging multi-hop spatial reasoning datasets show that DepWiGNN outperforms existing spatial reasoning methods. The comparisons with the other three GNNs further demonstrate its superiority in capturing long dependency in the graph.
## 1 Introduction
Spatial reasoning in text is crucial and indispensable in many areas, _e.g._, the medical domain (Datta et al., 2020; Massa et al., 2015), navigation (Zhang et al., 2021; Zhang and Kordjamshidi, 2022; Chen et al., 2019), and robotics (Luo et al., 2023; Venkatesh et al., 2021). It has been demonstrated to be a challenging problem for both modern pre-trained language models (PLMs) (Mirzaee et al., 2021; Deng et al., 2023) and large language models (LLMs) like ChatGPT (Bang et al., 2023). However, early textual spatial reasoning datasets, _e.g._, bAbI (Weston et al., 2016), suffer from over-simplicity and therefore fail to reflect realistic textual spatial reasoning scenarios. Recently, researchers have proposed several new benchmarks (Shi et al., 2022; Mirzaee and Kordjamshidi, 2022; Mirzaee et al., 2021) with an increased level of complexity, involving more required reasoning steps, an enhanced variety of spatial relation expressions, and more. As shown in Figure 1, four steps of reasoning are required to answer the question, and the spatial relation descriptions and categories are diverse.
To tackle the problem of multi-hop spatial reasoning, Shi et al. (2022) propose a recurrent memory network based on Tensor Product Representation (TPR) (Schlag and Schmidhuber, 2018), which mimics step-by-step reasoning by iteratively updating and removing episodes from memory. Specifically, TPR encodes the symbolic knowledge hidden in natural language into a distributed vector space to be used for deductive reasoning. Despite the effectiveness of the TPR memory, this model is outperformed by modern PLMs (Mirzaee and Kordjamshidi, 2022). Moreover, these works typically overlook the gap between natural language and symbolic relations.
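To make the TPR binding and unbinding operations concrete, here is a minimal PyTorch sketch (our illustration, not code from either paper): a relation filler is bound to a role vector with an outer product and recovered with a matrix-vector product. Exact recovery assumes a unit-norm role (and mutually orthogonal roles when several pairs share one memory).

```python
import torch

d = 4
role = torch.randn(d)               # e.g., an entity embedding acting as the role
filler = torch.randn(d)             # e.g., a spatial-relation vector (the filler)
role = role / role.norm()           # unit-norm role makes unbinding exact

memory = torch.outer(filler, role)  # bind: a rank-1 (d, d) matrix
recovered = memory @ role           # unbind: memory @ role == filler * ||role||^2

print(torch.allclose(recovered, filler, atol=1e-5))  # True
```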
Graph Neural Networks (GNNs) have been widely used in multi-hop reasoning (Xu
Figure 1: An example of multi-hop spatial reasoning in text from the StepGame dataset (Shi et al., 2022).
et al., 2021; Chen et al., 2020; Qiu et al., 2019). These methods often treat a single graph convolutional layer of node information propagation (from a node to its immediate neighbors) in a GNN as one step of reasoning and extend it to multi-hop reasoning by stacking multiple layers. However, increasing the number of graph convolutional layers in deep neural structures can have a detrimental effect on model performance (Li et al., 2018). This phenomenon, known as the over-smoothing problem, occurs because each layer of graph convolution makes adjacent nodes more similar to each other. This paradox poses a challenge for multi-hop reasoning: _although multiple layers are needed to capture multi-hop dependencies, stacking them can fail to capture these dependencies due to the over-smoothing problem_. Furthermore, many chain-finding problems, _e.g._, multi-hop spatial reasoning, require only the information along a specific path to answer a single question and do not demand full breadth aggregation over all neighbors (Figure 1). Nevertheless, existing methods (Palm et al., 2018; Xu et al., 2021; Chen et al., 2020; Deng et al., 2022) for this kind of problem usually rely on propagation via iterated breadth aggregation, which introduces superfluous and irrelevant information that may distract the model from the key information.
In light of these challenges, we propose a novel graph-based method, named **D**epth-**W**ise **G**raph **N**eural **N**etwork (DepWiGNN), which operates over the depth instead of the breadth dimension of the graph to tackle the multi-hop spatial reasoning problem. It introduces a novel node memory implementation that stores only depth path information between nodes by applying the TPR technique. Specifically, it first initializes the node memory by filling in the atomic information (spatial relation) between each pair of directly connected nodes, and then collects the relation between two indirectly connected nodes by depth-wise retrieval and aggregation of all atomic spatial relations stored in the memories of the nodes along the path. The collected long-dependency information is further stored in the memory of the source node of the path and can be retrieved freely once the target node is given. Unlike typical existing GNNs (Morris et al., 2019; Velickovic et al., 2017; Hamilton et al., 2017; Kipf and Welling, 2017), DepWiGNN does not need to be stacked to capture relationships between two distant nodes and is hence immune to the over-smoothing problem. Moreover, instead of aimlessly performing breadth aggregation over all immediate neighbors, it selectively prioritizes the key path information.
Experiments on two challenging multi-hop spatial reasoning datasets show that DepWiGNN not only outperforms existing spatial reasoning methods, but also enhances the spatial reasoning capability of PLMs. The comparisons with three GNNs verify that DepWiGNN surpasses classical graph convolutional layers in capturing long dependencies by a noticeable margin without harming the performance of short dependency collection.
Overall, our contributions are threefold:
* We propose a novel graph-based method, DepWiGNN, to perform propagation over the depth dimension of a graph, which can capture long dependencies without the need to stack layers, thereby avoiding the issue of over-smoothing.
* We implement a novel node memory scheme, which takes advantage of TPR mechanism, enabling convenient memory updating and retrieval operations through simple arithmetic operations instead of neural layers.
* DepWiGNN excels in multi-hop spatial reasoning tasks, surpassing existing methods in experimental evaluations on two challenging datasets. Besides, comparisons with three other GNNs highlight its superior ability to capture long dependencies within the graph. Our code will be released via [https://github.com/Syon-Li/DepWiGNN](https://github.com/Syon-Li/DepWiGNN).
## 2 Related works
**Spatial Reasoning in Text** has experienced thriving development in recent years, supported by several benchmark datasets. Weston et al. (2016) propose the bAbI project, which contains 20 QA tasks, including one focusing on textual spatial reasoning. However, bAbI has several issues, such as data leakage, overly short reasoning steps, and the monotony of spatial relation categories and descriptions, which make it fail to reflect the intricacy of spatial reasoning in natural language (Shi et al., 2022). Targeting these shortcomings, StepGame (Shi et al., 2022) expands the spatial relation types, the diversity of relation descriptions, and the required reasoning steps. Besides, SPARTQA (Mirzaee et al., 2021) augments the number of question types from one to four: _find relation_ (FR), _find blocks_
(FB), _choose objects_ (CO), and _yes/no_ (YN), while SPARTUN (Mirzaee and Kordjamshidi, 2022) only has two question types (FR, YN), built with broader coverage of spatial relation types.
**Tensor Product Representation (TPR)** (Schlag and Schmidhuber, 2018) is a mechanism for encoding symbolic knowledge into a vector space, which can be applied to various natural language reasoning tasks (Huang et al., 2018; Chen et al., 2020). For example, Schlag and Schmidhuber (2018) perform reasoning by deconstructing language into combinatorial symbolic representations and binding them using third-order TPR, which can further be combined with an RNN to improve the model's capability of making sequential inferences (Schlag et al., 2021). Shi et al. (2022) use a paragraph-level, TPR-memory-augmented approach to implement complex multi-hop spatial reasoning. However, existing methods typically apply TPR to pure text, which neglects the gap between natural language and symbolic structures.
**Graph Neural Networks (GNNs)** (Morris et al., 2019; Velickovic et al., 2017; Hamilton et al., 2017; Kipf and Welling, 2017) have been demonstrated to be effective in inducing and aggregating symbolic structures in other multi-hop question answering problems (Cao et al., 2019; Fang et al., 2020; Huang and Yang, 2021; Heo et al., 2022; Xu et al., 2021; Deng et al., 2023). In practice, the required number of graph layers grows with the multi-hop dependency between two distant nodes (Wang et al., 2021; Hong et al., 2022), which inevitably runs into the over-smoothing problem (Li et al., 2018). Some studies have tried to relieve this problem (Wu et al., 2023; Min et al., 2020; Huang and Li, 2023; Yang et al., 2023; Liu et al., 2023; Koishekenov, 2023; Song et al., 2023). However, these methods are all breadth-aggregation-based, _i.e._, they only adjust the breadth aggregation, e.g., by improving the aggregation filters or scattering the aggregation targets with probabilistic tools, but never move beyond it. In this work, we investigate a depth-wise aggregation approach that captures long-range dependencies across any distance without increasing the model depth.
## 3 Method
### Problem Definition
Following previous studies on spatial reasoning in text (Mirzaee et al., 2021; Shi et al., 2022; Mirzaee and Kordjamshidi, 2022), we define the problem as follows: given a story description \(S\) consisting of multiple sentences, the system aims to answer a question \(Q\) based on the story \(S\) by selecting a correct answer from a fixed set of candidate answers regarding spatial relations.
### The DepWiNet
As presented in Figure 2, the overall framework named DepWiNet consists of three modules: the entity representation extraction module, the DepWiGNN reasoning module, and the prediction module. The entity representation extraction module provides comprehensive entity embedding that is used in the reasoning module. A graph with recognized entities as nodes is constructed after obtaining entity representations, which later will be fed into DepWiGNN. The DepWiGNN reasons over the constructed graph and updates the node embedding correspondingly. The final prediction module adds the entity embedding from DepWiGNN to the embeddings from the first extraction module and applies a single step of attention Vaswani et al. (2017) to generate the final result.
**Entity Representation Extraction Module.** We leverage PLMs to extract entity representations1. The model takes the concatenation of the story \(S\) and the question \(Q\) as input and outputs an embedding for each token. The output embeddings are further projected using a single linear layer.
Footnote 1: Entity in this paper refers to the spatial roles defined in Kordjamshidi et al. (2010).
\[\mathbf{\hat{S}},\mathbf{\hat{Q}}=\mathbf{PLM}(S,Q) \tag{1}\] \[\mathbf{\hat{S}}_{\alpha}=\mathbf{\hat{S}}{\mathbf{W}_{\alpha}}^ {T}\quad\mathbf{\hat{Q}}_{\alpha}=\mathbf{\hat{Q}}{\mathbf{W}_{\alpha}}^{T} \tag{2}\]
where \(\mathbf{PLM}(\cdot)\) denotes the PLM-based encoder, _e.g._, BERT Devlin et al. (2019). \(\mathbf{\hat{S}}_{\alpha}\in\mathbb{R}^{r_{S}\times d_{h}}\), \(\mathbf{\hat{Q}}_{\alpha}\in\mathbb{R}^{r_{Q}\times d_{h}}\) denotes the list of projected tokens of size \(d_{h}\) in the story and question, and \(\mathbf{W}_{\alpha}\in\mathbb{R}^{d_{h}\times d_{h}}\) is the shared projection matrix. The entity representation is just the mean pooling of all token embeddings belonging to that entity.
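A hedged sketch of this extraction step (Eqs. 1-2), assuming a HuggingFace BERT encoder; the entity token indices and all variable names are illustrative assumptions, not the authors' implementation:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
proj = torch.nn.Linear(768, 768, bias=False)  # shared projection W_alpha (Eq. 2)

story, question = "A is to the left of B.", "Where is A relative to B?"
inputs = tokenizer(story, question, return_tensors="pt")
tokens = proj(encoder(**inputs).last_hidden_state.squeeze(0))  # (seq_len, 768)

# An entity's representation is the mean of its token embeddings; here we
# assume the rule-based recognizer located entity "A" at token position 1.
entity_emb = tokens[[1]].mean(dim=0)
```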
**Graph Construction.** The entities are first recognized from the input by employing rule-based entity recognition. Specifically, in StepGame (Shi
et al., 2022), the entities are represented by a single capitalized letter, so we only need to locate all single capitalized letters. For SPARTUN and ReSQ, we use nltk RegexpParser2 with self-designed grammars3 to recognize entities. Entities and their embeddings are treated as nodes of the graph, and an edge between two entities exists if and only if the two entities co-occur in the same sentence. We also add an edge feature for each edge.
Footnote 2: [https://www.nltk.org/api/nltk.chunk.regexp.html](https://www.nltk.org/api/nltk.chunk.regexp.html)
Footnote 3: [http://www.nltk.org/api/nltk.chunk.regexp.html](http://www.nltk.org/api/nltk.chunk.regexp.html)
\[E_{ij}=\begin{cases}\mathbf{0}&\text{if i = j,}\\ \mathbf{e}_{[CLS]}&\text{otherwise.}\end{cases} \tag{3}\]
If two entities are the same (self-loop), the edge feature is just a zero tensor of size \(d_{h}\); otherwise, it is the sequence's last-layer hidden state of the \([CLS]\) token. The motivation behind treating the [CLS] token as an edge feature is that it can help the model better understand the atomic relation (k=1) between two nodes (entities), so as to facilitate later depth aggregation. A straightforward justification can be found in Table 1, where all three PLMs achieve very high accuracy on k=1 cases by using the [CLS] token, which demonstrates that the [CLS] token favors the understanding of the atomic relation.
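The following minimal sketch (ours; names are assumptions) illustrates the construction: StepGame-style entities are single capital letters, co-occurrence in a sentence creates an edge, and the sentence's [CLS] embedding serves as the edge feature (Eq. 3):

```python
import re
import torch

def build_graph(sentences, cls_embeddings, d_h=768):
    """Link entities that co-occur in a sentence; edge feature per Eq. 3."""
    nodes, edges = set(), {}
    for sent, cls_emb in zip(sentences, cls_embeddings):
        ents = re.findall(r"\b[A-Z]\b", sent)  # StepGame-style entity rule
        nodes.update(ents)
        for i in ents:
            for j in ents:
                # zero feature for self-loops, [CLS] embedding otherwise
                edges[(i, j)] = torch.zeros(d_h) if i == j else cls_emb
    return nodes, edges

sents = ["A is to the left of B.", "B is above C."]
nodes, edges = build_graph(sents, [torch.randn(768), torch.randn(768)])
```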
**DepWiGNN Module.** The DepWiGNN module (middle part in Figure 2) is responsible for multi-hop reasoning and can collect arbitrarily distant dependencies without layer stacking.
\[\hat{V}=\mathbf{DepWiGNN}(\mathcal{G};V,E) \tag{4}\]
where \(V\in\mathbb{R}^{|V|\times d_{h}}\) and \(E\in\mathbb{R}^{|E|\times d_{h}}\) are the nodes and edges of the graph. It comprises three components: node memory initialization, long dependency collection, and spatial relation retrieval. Details of these three components will be discussed in Section 3.3.
**Prediction Module.** The node embeddings updated by DepWiGNN are added to the corresponding entity embeddings extracted from the PLM.
\[\hat{Z_{i}}=\begin{cases}Z_{i}+\hat{V}_{idx(i)}&\text{if $i$-th token}\in \hat{V}\\ Z_{i}&\text{otherwise.}\end{cases} \tag{5}\]
where \(Z=[\mathbf{\hat{S}};\mathbf{\hat{Q}}]\) is the concatenated token embedding sequence from the PLM, \(Z_{i}\) is its \(i\)-th token embedding, and \(\hat{V}\) denotes the updated entity representation set. \(idx(i)\) is the index of the \(i\)-th token in the graph nodes.
Then, the sequences of token embeddings for the question and the story are extracted separately to perform an attention mechanism (Vaswani et al., 2017), with the query being the sequence of question token embeddings \(\hat{Z}_{\hat{\mathbf{Q}}}\), and the key and value being the sequence of story token embeddings \(\hat{Z}_{\hat{\mathbf{S}}}\).
\[R=\sum_{i=0}^{r_{Q}}(softmax(\frac{\hat{Z}_{\hat{\mathbf{Q}}}\hat{Z}^{T}_{ \hat{\mathbf{S}}}}{\sqrt{d_{h}}})\hat{Z}_{\hat{\mathbf{S}}})_{i} \tag{6}\]
\[C=FFN(\mathbf{layernorm}(R)) \tag{7}\]
Figure 2: The DepWiNet framework. The entity representations are first extracted from the entity representation extraction module (left), and then a homogeneous graph is constructed based on the entity embeddings and fed into the DepWiNet reasoning module. The DepWiNet depth-wisely aggregates information for all indirectly connected node pairs, and stores it in node memories. The updated node embeddings are then passed to the prediction module.
where \(\mathbf{layernorm}\) denotes layer normalization and \(C\) is the final logits. The resulting embeddings are summed over the first dimension, layer-normalized, and fed into a 3-layer feedforward neural network to obtain the final result. The overall framework is trained in an end-to-end manner to minimize the cross-entropy loss between the predicted candidate answer probabilities and the ground-truth answer.
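A small sketch of this readout (Eqs. 6-7), with assumed shapes and an assumed number of answer classes; this illustrates the single step of attention, not the released code:

```python
import torch
import torch.nn.functional as F

d_h = 768
q = torch.randn(12, d_h)   # question token embeddings (after Eq. 5)
s = torch.randn(80, d_h)   # story token embeddings

attn = F.softmax(q @ s.T / d_h**0.5, dim=-1)  # (num_q, num_s)
r = (attn @ s).sum(dim=0)                     # sum over question tokens (Eq. 6)

ffn = torch.nn.Sequential(                    # 3-layer FFN head
    torch.nn.Linear(d_h, d_h), torch.nn.ReLU(),
    torch.nn.Linear(d_h, d_h), torch.nn.ReLU(),
    torch.nn.Linear(d_h, 9),                  # e.g., 9 candidate relations
)
logits = ffn(torch.nn.LayerNorm(d_h)(r))      # Eq. 7
```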
### Depth-wise Graph Neural Network
As illustrated in Figure 3, we introduce the proposed graph neural network, DepWiGNN, _i.e._, the operation \(\hat{V}=\mathbf{DepWiGNN}(\mathcal{G};V,E)\). Unlike existing GNNs (Morris et al., 2019; Velickovic et al., 2017; Hamilton et al., 2017; Kipf and Welling, 2017), which treat the one-dimensional node embedding itself as the memory for information storage, updating, and retrieval, DepWiGNN employs a novel two-dimensional node memory that takes advantage of the TPR mechanism, allowing memory updating and retrieval to be realized by simple arithmetic operations such as addition, subtraction, and the outer product. This essentially means that information propagation between any pair of nodes at any distance in the graph can be accomplished without iteratively applying neural layers.
**Node Memory Initialization.** At this stage, the memories of all nodes are initialized with the relations to their immediate neighbors. In multi-hop spatial reasoning, these are the atomic spatial orientation relations (which need only one hop) of the destination node relative to the source node, e.g., "X is to the left of K and is on the same horizontal plane." We follow the TPR mechanism (Schlag and Schmidhuber, 2018), which uses the outer product to bind roles and fillers4. In this work, the calculated spatial vectors are considered the fillers. They are bound with the corresponding node embeddings and stored in the two-dimensional node memory. Explicitly, we first acquire the spatial orientation filler \(f_{ij}\in\mathbb{R}^{d_{h}}\) using a feedforward network whose input is the concatenation \([V_{i},E_{ij},V_{j}]\), where \(V_{i}\) denotes the \(i\)-th node embedding.
Footnote 4: The preliminary of TPR is presented in Appendix A.
\[f_{ij}=FFN([V_{i},E_{ij},V_{j}]) \tag{8}\] \[M_{i}=\sum\nolimits_{V_{j}\in N(V_{i})}f_{ij}\otimes V_{j} \tag{9}\]
The filler is bound together with the corresponding destination node \(V_{j}\) using the outer product. The initial node memory \(M_{i}\) for node \(V_{i}\) is just the summation of all outer product results of the fillers and corresponding neighbors (left part in Figure 3).
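A sketch of the initialization (Eqs. 8-9), assuming simple dictionary-based graph structures; `filler_ffn` and all other names are ours, not the authors':

```python
import torch

d_h = 64
filler_ffn = torch.nn.Sequential(
    torch.nn.Linear(3 * d_h, d_h), torch.nn.ReLU(), torch.nn.Linear(d_h, d_h))

def init_memory(node_emb, neighbors, edge_feat):
    """node_emb: (N, d_h); neighbors[i]: list of j; edge_feat[(i, j)]: (d_h,)."""
    memory = torch.zeros(node_emb.size(0), d_h, d_h)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            f_ij = filler_ffn(
                torch.cat([node_emb[i], edge_feat[(i, j)], node_emb[j]]))  # Eq. 8
            memory[i] += torch.outer(f_ij, node_emb[j])  # bind filler to neighbor (Eq. 9)
    return memory
```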
**Long Dependency Collection.** In this section, we discuss how the model collects long dependencies. Since the atomic relations are bound with their corresponding destination nodes and already contained in the node memories, we can easily unbind all the atomic relation fillers along a path using the embedding of each atomic destination node (middle part in Figure 3). For each pair of indirectly connected nodes, we first find the shortest path between them using breadth-first search (BFS). Then, all the atomic relation fillers along the path are unbound using the embedding of each node in the path (Eq. 10).
\[\hat{f}_{p_{i}p_{i+1}}=M_{p_{i}}V_{p_{i+1}}^{T} \tag{10}\] \[\mathbf{\hat{F}}=\mathbf{Aggregator}(\mathbf{F}) \tag{11}\] \[f_{p_{0}p_{n}}=\mathbf{layernorm}(FFN(\mathbf{\hat{F}})+\mathbf{\hat{F}}) \tag{12}\] \[M_{p_{0}}=M_{p_{0}}+f_{p_{0}p_{n}}\otimes V_{p_{n}} \tag{13}\]
where \(p_{i}\) denotes the \(i\)-th node in the path and \(\mathbf{F}=[\hat{f}_{p_{0}p_{1}};\ldots;\hat{f}_{p_{n-1}p_{n}}]\) is the retrieved filler set along the path. The collected relation fillers are aggregated using a selected depth aggregator, such as an LSTM (Hochreiter and Schmidhuber, 1997), and passed to a feedforward neural network to reason about the relation filler between the source and destination nodes of the path (Eq. 12). The resulting spatial filler is then bound with the target node embedding and added to the source node memory (Eq. 13). In this way, each node memory will finally contain the spatial orientation information to every other connected node in the graph.
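The collection step might look like the following sketch (Eqs. 10-13); the LSTM aggregator and module names are ours, and the memory layout follows the initialization sketch above:

```python
import torch

d_h = 64
lstm = torch.nn.LSTM(d_h, d_h, batch_first=True)
out_ffn = torch.nn.Linear(d_h, d_h)
norm = torch.nn.LayerNorm(d_h)

def collect(memory, node_emb, path):
    """path: node indices [p0, ..., pn] on the BFS shortest path from p0 to pn."""
    fillers = [memory[path[i]] @ node_emb[path[i + 1]]          # unbind (Eq. 10)
               for i in range(len(path) - 1)]
    seq = torch.stack(fillers).unsqueeze(0)                     # (1, n, d_h)
    agg = lstm(seq)[0][0, -1]                                   # depth aggregation (Eq. 11)
    f_long = norm(out_ffn(agg) + agg)                           # Eq. 12
    memory[path[0]] += torch.outer(f_long, node_emb[path[-1]])  # bind (Eq. 13)
```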
**Spatial Relation Retrieval.** After the collection process is completed, every node memory contains
Figure 3: The illustration of DepWiGNN.
spatial relation information to all other connected nodes in the graph. Therefore, we can conveniently retrieve the spatial information from a source node to a target node by unbinding the spatial filler from the source node memory (right part in Figure 3) using a self-determined key. The key can be the target node embedding itself if the target node can be easily recognized from the question, or some computationally extracted representation from the sequence of question token embeddings if the target node is hard to discern. We use the key to unbind the spatial relation from all nodes' memory and pass the concatenation of it with source and target node embeddings to a multilayer perceptron to get the updated node embeddings.
\[\hat{f}_{i[key]}=M_{i}V_{[key]}^{T} \tag{14}\] \[\hat{V}_{i}=FFN([V_{i},\mathbf{layernorm}(\hat{f}_{i[key]}),V_{[key]}]) \tag{15}\]
The updated node embeddings are then passed to the prediction module to get the final result.
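A sketch of the retrieval (Eqs. 14-15); in StepGame the key is simply the target node embedding, and `update_ffn` is an illustrative name:

```python
import torch

d_h = 64
update_ffn = torch.nn.Sequential(
    torch.nn.Linear(3 * d_h, d_h), torch.nn.ReLU(), torch.nn.Linear(d_h, d_h))
norm = torch.nn.LayerNorm(d_h)

def retrieve(memory_i, v_i, key):
    f_key = memory_i @ key                                 # unbind with key (Eq. 14)
    return update_ffn(torch.cat([v_i, norm(f_key), key]))  # updated node (Eq. 15)
```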
## 4 Experiments
### Experimental Setups
**Datasets & Evaluation Metrics.** We evaluate our model on the StepGame (Shi et al., 2022) and ReSQ (Mirzaee and Kordjamshidi, 2022) datasets, which were recently published for multi-hop spatial reasoning. StepGame is a synthetic textual QA dataset with a number of relations (\(k\)) ranging from 1 to 10. In particular, we follow the experimental procedure in the original paper (Shi et al., 2022), which utilizes the TrainVersion of the dataset containing 10,000/1,000 training/validation clean samples for each \(k\) value from 1 to 5, and 10,000 noisy test examples for each \(k\) value from 1 to 10. The test set contains three kinds of distraction noise: _disconnected_, _irrelevant_, and _supporting_. ReSQ is a crowdsourced benchmark that includes only Yes/No questions, with 1008, 333, and 610 examples for training, validation, and testing, respectively. Since it is human-generated, it reflects the natural complexity of real-world spatial descriptions. Following the setup in Shi et al. (2022); Mirzaee and Kordjamshidi (2022), we report accuracy on the corresponding test sets for all experiments.
**Baselines.** For StepGame, we select all traditional reasoning models used in the original paper (Shi et al., 2022) and three PLMs, _i.e._, BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2020), as our baselines. For ReSQ, we follow the experimental setting described in Mirzaee and Kordjamshidi (2022), which uses BERT with or without further synthetic supervision as baselines.
**Implementation Details.** For all experiments, we use the base version of the corresponding PLM, which has 768 embedding dimensions. The model was trained end-to-end using the Adam optimizer (Kingma and Ba, 2015). Training was stopped if there was no improvement greater than 1e-3 in the cross-entropy loss on the validation set for 3 epochs. We also applied a PyTorch learning-rate scheduler that reduces the learning rate by a factor of 0.1 if the improvement in validation cross-entropy loss stays below 1e-3 for 2 epochs. Regarding the determination of the key in the Spatial Relation Retrieval step, we used the target node embedding for StepGame, since the target node can be easily recognized, and we employed a single linear layer to extract the key representation from the sum-aggregated question token embeddings for ReSQ. In the StepGame experiment, we fine-tune the model on the training set and test it on the test set. For ReSQ, we follow the procedure in Mirzaee and Kordjamshidi (2022) to test the model on ReSQ with or without further supervision from SPARTUN (Mirzaee and Kordjamshidi, 2022). Unless specified, all experiments use an LSTM (Hochreiter and Schmidhuber, 1997) as the depth aggregator by default. Detailed hyperparameter settings are given in Appendix B.
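The optimization setup described above could be sketched as follows in PyTorch; the model stub, learning rate, and loss are placeholders, while the improvement threshold and patience values mirror the text:

```python
import torch

model = torch.nn.Linear(768, 9)  # placeholder for the full DepWiNet
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # lr is an assumption
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=2, threshold=1e-3)

best, stale = float("inf"), 0
for epoch in range(100):
    val_loss = float(torch.rand(1))  # stand-in for validation cross-entropy
    scheduler.step(val_loss)
    if best - val_loss > 1e-3:
        best, stale = val_loss, 0
    else:
        stale += 1
        if stale >= 3:               # early stopping after 3 stale epochs
            break
```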
### Overall Performance
Tables 1 and 2 report the experimental results on StepGame and ReSQ, respectively. As shown in Table 1, PLMs overwhelmingly outperform all traditional reasoning models, and the proposed DepWiNet surpasses the PLMs by a large margin, especially for cases with greater \(k\), where multi-hop reasoning capability plays a more important role. This aligns with our architectural design: aggregation focuses on the depth dimension, which effectively avoids over-smoothing and the mixing of redundant information caused by breadth aggregation. Although the model is trained only on clean, distraction-free samples with \(k\) from 1 to 5, it achieves impressive performance on the distraction-noise-present test data with \(k\) from 6 to 10, demonstrating the stronger generalization capability of our model. Moreover,
our method also yields an improvement on examples with lower \(k\) values for BERT and ALBERT, and only a negligible decrease for RoBERTa, which shows that our model does not harm few-hop reasoning.
Experimental results in Table 2 show that DepWiNet reaches a new SOTA result on ReSQ, both with and without extra supervision from SPARTUN. Notably, the performance of our model without extra supervision even exceeds that of BERT with extra supervision from SPARTQA-AUTO (Mirzaee et al., 2021), StepGame, and SPARTUN-S, and closely approaches the SPARTUN-supervised case. These results indicate that our model is better able to handle the natural intricacy of real-world spatial expressions.
### Ablation study
We conduct ablation studies on the impact of the three components of DepWiGNN and different depth aggregators, as presented in Table 3.
**Impact of DepWiNet Components.** The model performance drops drastically, particularly in the mean score for \(k\) between 6 and 10, after the Long Dependency Collection (LDC) component is removed, verifying that this component plays a crucial role in the model. Note that the mean score for \(k\) (6-10) even drops below the ALBERT baseline (Table 1). This is reasonable, as the LDC is directly responsible for collecting long dependencies. We further disabled the Node Memory Initialization (NMI) and then the Spatial Relation Retrieval (SRR) by setting the initial fillers (Eq. 8) and the key representation (Eq. 14) to random vectors, respectively. Compared with the case where only LDC is removed, both lead to a further decrease for both small and large \(k\) values.
**Impact of Different Aggregators.** The results show that the mean and max pooling depth aggregators fail to capture the spatial rules as well as the LSTM does. This may be caused by the relatively lower expressiveness and generalization capability of the mean and max pooling operations.
### Comparisons with Different GNNs
To assess our model's capacity for collecting long dependencies, as well as its immunity to the over-smoothing problem, we compare it with four graph neural networks, namely GCN (Kipf and Welling, 2017), GraphConv (Morris et al., 2019), GAT (Velickovic et al., 2017), and GCNII (Chen et al., 2020). We consider the cases with the number of layers varying from 1 to 5 and select the best performance for comparison. The accuracy is plotted in Figure 4, and the corresponding best
| Method | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | k=8 | k=9 | k=10 | Mean (k=1-5) | Mean (k=6-10) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RN (Santoro et al., 2017) | 22.64 | 17.08 | 15.08 | 12.84 | 11.52 | 11.12 | 11.53 | 11.21 | 11.13 | 11.34 | 15.83 | 11.27 |
| RRN (Palm et al., 2018) | 24.05 | 19.98 | 16.03 | 13.22 | 12.31 | 11.62 | 11.40 | 11.83 | 11.22 | 11.69 | 17.12 | 11.56 |
| UT (Dehghani et al., 2019) | 45.11 | 28.36 | 17.41 | 14.07 | 13.45 | 12.73 | 12.11 | 11.40 | 11.41 | 11.74 | 23.68 | 11.88 |
| STM (Le et al., 2020) | 53.42 | 35.96 | 23.03 | 18.45 | 15.14 | 13.80 | 12.63 | 11.54 | 11.30 | 11.77 | 29.20 | 12.21 |
| TPR-RNN (Schlag and Schmidhuber, 2018) | 70.29 | 46.03 | 36.14 | 26.82 | 24.77 | 22.25 | 19.88 | 15.45 | 13.01 | 12.65 | 40.81 | 16.65 |
| TP-MANN (Shi et al., 2022) | 85.77 | 60.31 | 50.18 | 37.45 | 31.25 | 28.53 | 26.45 | 23.67 | 22.52 | 21.46 | 52.99 | 24.53 |
| BERT (Mirzaee and Kordjamshidi, 2022) | 98.53 | 93.40 | 91.19 | 66.98 | 54.57 | 48.59 | 42.81 | 37.98 | 34.16 | 33.62 | 80.93 | 39.43 |
| RoBERTa* | 98.68 | 97.05 | 95.66 | 79.59 | 74.89 | 70.67 | 66.01 | 61.42 | 57.20 | 54.53 | 89.17 | 61.93 |
| ALBERT* | 98.56 | **97.56** | **96.08** | 90.01 | 83.33 | 77.24 | 71.57 | 64.47 | 60.02 | 56.18 | 93.11 | 65.90 |
| **DepWiNet** (BERT) | 98.44 | 96.57 | 94.75 | 81.41 | 70.68 | 62.80 | 57.46 | 49.14 | 45.56 | 44.01 | 88.37 (+9.2%) | 51.79 (+31.3%) |
| **DepWiNet** (RoBERTa) | **98.70** | 96.69 | 95.54 | 79.50 | 74.94 | 70.37 | 66.22 | 61.89 | 58.01 | 56.89 | 89.07 (−0.1%) | 62.68 (+1.2%) |
| **DepWiNet** (ALBERT) | 98.59 | 97.13 | 95.87 | **91.96** | **87.50** | **82.62** | **77.80** | **71.40** | **67.55** | **64.98** | **94.21 (+1.2%)** | **72.87 (+10.6%)** |

Table 1: Experimental results on StepGame. * denotes the implementation of the same model as BERT (Mirzaee and Kordjamshidi, 2022) with RoBERTa and ALBERT. The other results are reported in Shi et al. (2022). Percentages in parentheses are relative changes over the corresponding PLM baseline.
| Method | _SynSup_ | ReSQ |
|---|---|---|
| Majority Baseline | – | 50.21 |
| BERT | – | 57.37 |
| BERT | SPARTQA-AUTO | 55.08 |
| BERT | StepGame | 60.14 |
| BERT | SPARTUN-S | 58.03 |
| BERT | SPARTUN | 63.60 |
| **DepWiNet** (BERT) | – | **63.30** |
| **DepWiNet** (BERT) | SPARTUN | **65.54** |

Table 2: Experimental results on ReSQ, under the transfer learning setting (Mirzaee and Kordjamshidi, 2022).
cases are in Table 4. It is worth noting that almost all the baseline GNNs cause a reduction in the original PLM performance (Table 4). The reason may partially come from breadth aggregation, which repeatedly aggregates neighbors and makes entity embeddings indistinguishable, disturbing the PLM reasoning process. The majority of baseline GNNs suffer an apparent performance drop as the number of layers increases (Figure 4), while our model consistently performs better and is not affected by the number of layers at all, since it does not use breadth aggregation. Therefore, our model is immune to the over-smoothing problem. For both small and large \(k\), our model outperforms the best performance of all four GNNs (including GCNII, which is specifically designed for the over-smoothing issue) by a large margin (Table 4), which serves as evidence of the superiority of our model in long dependency collection.
### Case study
In this section, we present case studies to intuitively show how DepWiGNN mitigates the three kinds of distracting noise introduced in StepGame, namely, _disconnected_, _irrelevant_, and _supporting_.
* The _disconnected_ noise is a set of entities and relations that forms a new independent chain in the graph (Figure 5(a)). The node memories constructed in DepWiGNN contain spatial information about a node if and only if that node stays in the same connected component; otherwise, they have no information about the node, as there is no path between them. Hence, in this case, the memory of the questioned source node **P** has no information about the disconnected noise **T** and **D**.
| Method | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | k=8 | k=9 | k=10 | Mean (k=1-5) | Mean (k=6-10) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALBERT | 98.56 | 97.56 | 96.08 | 90.01 | 83.33 | 77.24 | 71.57 | 64.47 | 60.02 | 56.18 | 93.11 | 65.90 |
| w/ GCN (Kipf and Welling, 2017) | 98.62 | 97.72 | 96.49 | 82.01 | 76.70 | 71.60 | 67.34 | 60.49 | 56.11 | 52.65 | 90.31 (−3.01%) | 61.64 (−6.46%) |
| w/ GraphConv₂ (Morris et al., 2019) | 98.66 | 97.73 | 96.48 | 82.37 | 78.57 | 74.47 | 69.91 | 63.04 | 60.01 | 56.58 | 90.76 (−2.52%) | 64.80 (−1.67%) |
| w/ GAT₁ (Velickovic et al., 2017) | 98.67 | 97.34 | 95.59 | 81.47 | 78.39 | 73.79 | 70.17 | 63.52 | 60.64 | 57.33 | 90.29 (−3.03%) | 65.09 (−1.23%) |
| w/ GCNII₄ (Chen et al., 2020) | 98.56 | 97.61 | 96.36 | 83.31 | 79.22 | 74.88 | 70.64 | 63.64 | 60.59 | 56.99 | 91.01 (−2.25%) | 65.35 (−0.83%) |
| w/ DepWiGNN | 98.59 | 97.13 | 95.87 | 91.96 | 87.50 | 82.62 | 77.80 | 71.40 | 67.55 | 64.98 | **94.21 (+1.18%)** | **72.87 (+10.58%)** |

Table 4: Comparisons with different GNNs. The subscripts represent the number of GNN layers; for each GNN, we select the best mean performance among layer numbers 1 to 5.
Figure 4: Impact of the layer number in different GNNs on StepGame. The solid and dashed lines denote the mean score of (\(k\)=1-5) and (\(k\)=6-10) respectively.
Figure 5: Cases with distracting noise from StepGame.
| Method | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | k=8 | k=9 | k=10 | Mean (k=1-5) | Mean (k=6-10) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **DepWiNet** (ALBERT) | **98.59** | **97.13** | **95.87** | **91.96** | **87.50** | **82.62** | **77.80** | **71.40** | **67.55** | **64.98** | **94.21** | **72.87** |
| – w/o LDC | 98.58 | 97.72 | 96.42 | 82.46 | 78.53 | 74.52 | 70.45 | 62.84 | 59.06 | 56.57 | 90.74 (−3.7%) | 64.69 (−11.2%) |
| – w/o NMI & LDC | 98.48 | 96.88 | 94.83 | 79.14 | 75.46 | 70.51 | 67.19 | 61.88 | 57.65 | 55.02 | 88.96 (−5.6%) | 62.45 (−14.3%) |
| – w/o SRR & LDC | 98.64 | 97.52 | 95.91 | 80.72 | 76.85 | 72.56 | 67.93 | 61.29 | 57.97 | 54.17 | 89.93 (−4.5%) | 62.78 (−13.8%) |
| – w/ DepWiGNN (mean) | 98.52 | 96.46 | 93.69 | 81.10 | 77.90 | 73.47 | 70.78 | 65.54 | 62.42 | 59.82 | 89.53 (−5.0%) | 66.41 (−8.9%) |
| – w/ DepWiGNN (max pooling) | 98.65 | 95.83 | 93.90 | 80.81 | 77.20 | 72.07 | 68.76 | 62.79 | 59.63 | 57.14 | 89.28 (−5.2%) | 64.08 (−12.1%) |

Table 3: Ablation study. NMI, LDC, and SRR denote the three stages in DepWiGNN, _i.e._, Node Memory Initialization, Long Dependency Collection, and Spatial Relation Retrieval, respectively.
* The _irrelevant_ noise branches the correct reasoning chain out with new entities and relations but results in no alternative reasoning path (Figure 5(b)). Hence, the irrelevant noise entities will not be included in the reasoning path between the source and destination, which means they do not affect the destination spatial filler stored in the source node memory. In this case, when the key representation (the embedding of entity **E**) is used to unbind the spatial filler from the node memory of the source node **P**, it obtains the filler \(f_{PE}=FFN(Aggregator([f_{PX};f_{XQ};f_{QE}]))\), which is unaffected by the irrelevant entity **Y** and the relations \(f_{xy}\) or \(f_{yx}\).
* The _supporting_ noise adds new entities and relations to the original reasoning chain, providing an alternative reasoning path (Figure 5(c)). DepWiGNN is naturally exempt from this noise for two reasons: first, it finds the shortest path between two entities and therefore will not include **I** and **M** in the path; second, even if the longer path were considered, the depth aggregator should reach the same result as the shorter one, since the source and destination are the same.
## 5 Conclusion
In this work, we introduce DepWiGNN, a novel graph-based method that facilitates depth-wise propagation in graphs, enabling effective capture of long dependencies while mitigating the challenges of over-smoothing and excessive layer stacking. Our approach incorporates a node memory scheme leveraging the TPR mechanism, enabling simple arithmetic operations for memory updating and retrieval, instead of relying on additional neural layers. Experiments on two recently released textual multi-hop spatial reasoning datasets demonstrate the superiority of DepWiGNN in collecting long dependencies over the other three typical GNNs and its immunity to the over-smoothing problem.
## Limitation
Unlike classical GNNs, which use a one-dimensional embedding as the node memory, our method applies a two-dimensional, matrix-shaped node memory. This directly increases the memory requirement: the system has to assign extra space to store a matrix of shape \(\mathbb{R}^{d_{h}\times d_{h}}\) for each node in the graph, which makes the method less scalable. However, it is a worthwhile trade-off, because the two-dimensional node memory can potentially store up to \(d_{h}-1\) spatial fillers bound with node embeddings while keeping its size fixed, and it does not suffer from the information overloading faced by one-dimensional memories, since the spatial fillers can be straightforwardly retrieved from the memory.
Another issue is the time complexity of finding the shortest path between every pair of nodes. For all experiments in this paper, since the edges are unweighted, the complexity is \(O((n+e)\cdot n)\), where \(n\) and \(e\) are the numbers of nodes and edges, respectively. However, things get worse if the edges in the graph are weighted. We believe improving this step is a potential direction for future research.
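For reference, the all-pairs step amounts to one BFS per source node on the unweighted entity graph, \(O(n+e)\) each; a minimal sketch (ours, with a hypothetical adjacency dictionary):

```python
from collections import deque

def bfs_tree(adj, src):
    """adj: {node: [neighbors]}; returns predecessors for path reconstruction."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return prev

adj = {"P": ["X"], "X": ["P", "Q"], "Q": ["X", "E"], "E": ["Q"]}
all_pairs = {s: bfs_tree(adj, s) for s in adj}  # weighted graphs would need Dijkstra
```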
|