# kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration

Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky

2023-03-31 · arXiv:2304.00050v2 · http://arxiv.org/abs/2304.00050v2
###### Abstract
In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid as in the case of multiple objects and articulate pose registration. A Jacobian-based cost function and geometric-aware statistical distances are proposed to mitigate these issues. The latter allows for measuring misalignment between the target and the reference. The justification for the k-Nearest Neighbour(kNN) graph preservation of target data, when the Jacobian cost is used, is also provided. Further, to tackle the registration of high-dimensional point sets, a constant time stochastic approximation of the Jacobian cost is introduced. The proposed method is illustrated on several 2-dimensional toy examples and tested on high-dimensional flow Cytometry datasets where the task is to align two distributions of cells
whilst preserving the kNN-graph in order to preserve the biological signal of the transformed data. The implementation of the proposed approach is available at [https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/](https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/) under the MIT license.
## 1 Introduction
Point set registration is a widely studied problem in the field of computer vision that also arises in other fields, e.g. bioinformatics, as discussed below. The problem involves aligning a deformed target set of \(d\)-dimensional points to another reference point set by applying a constrained transformation. This alignment allows for improved comparison and analysis of the two sets of points and is used in a variety of applications, including object tracking, body shape modeling, human pose estimation, and removal of batch effects in biological data [1, 2, 3, 4, 5].
Point set registration techniques are typically categorized based on two main properties, first, whether the technique is a correspondence-based or a correspondence-free technique, and second, whether the estimated transformation is rigid or non-rigid. Correspondence-based techniques require the availability of correspondence information (e.g. labels) between the two point sets, while correspondence-free, sometimes called simultaneous pose and correspondence registration, does not require such information and therefore is considered a significantly more difficult problem. Rigid registration techniques are also generally simpler. A rigid transformation is an isometric transformation that preserves the pairwise distance between points and such transformation is typically modeled as a combination of rotation and translation. Several rigid registration techniques have been proposed in [6, 7, 8, 9, 10, 11, 12, 13, 14]. Assuming the transformation is rigid, however, makes the types of deformations that could be handled quite limited. Non-rigid transformations allow for more flexibility; however, this makes the problem ill-posed as there are an infinite number of transformations that could align two point sets, thus, non-rigid registration techniques employ additional constraints.
### Problem Formulation
In this section, we formulate the alignment problem. Inspired by CPD [15], we view an alignment method as finding a map \(\phi\) that transforms data points sampled from an underlying distribution \(Q\) to distribution \(P\) in a way that preserves the topological structure of data sampled from \(Q\). This is an ill-posed density estimation problem; therefore, we add the desideratum that \(\phi\) be as simple as possible. In this context, we call a map \(\phi\) simple if it is close to the identity transformation. Importantly, this can be visualized as data points sampled from \(Q\) moving along a regularized displacement vector field \(F\).
More formally, we denote two sets of \(d\)-dimensional vectors (points): a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\) and a target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\), generated by probability distributions \(P\) and \(Q\), respectively. Additionally, a \(k\)-Nearest Neighbour (kNN) graph is associated with (or constructed from) the set \(\mathbf{T}\) and must be preserved after transformation. A kNN graph for the set \(\mathbf{T}\) is a directed graph such that there is an edge from node \(i\) to \(j\) if and only if \(\mathbf{y}_{j}\) is among \(\mathbf{y}_{i}\)'s \(k\) most similar items in \(\mathbf{T}\) under some similarity measure \(\rho\).
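As a concrete illustration, the directed kNN graph above can be built in a few lines of NumPy; `knn_graph` is our own illustrative helper (squared-Euclidean similarity, ties broken by index), not part of the paper's released code:

```python
import numpy as np

def knn_graph(T, k):
    """Directed kNN graph of point set T (m x d): A[i, j] = 1 iff
    y_j is among the k nearest neighbours of y_i (Euclidean metric)."""
    m = T.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((T[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbour
    idx = np.argsort(d2, axis=1)[:, :k]   # k closest indices per row
    A = np.zeros((m, m), dtype=int)
    np.put_along_axis(A, idx, 1, axis=1)
    return A
```

Each row of the adjacency matrix `A` then has exactly \(k\) ones, one per outgoing edge.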
Thus, the goal of an alignment method, given the sets \(\mathbf{R}\) and \(\mathbf{T}\) in matrix form as \(X\in\mathbb{R}^{n\times d}\) and \(Y\in\mathbb{R}^{m\times d}\) respectively, is finding a transformation \(\phi\) parameterized by \(\theta\) such that:
\[\hat{\theta}=\arg\min_{\theta}D(\phi(Y;\theta),X) \tag{1}\]
subject to the constraints:
\[\texttt{kNN}_{g}(\phi(Y;\theta))=\texttt{kNN}_{g}(Y) \tag{2}\]
where \(D\) is a statistical distance that measures the difference between two probability distributions.
### Limitations of existing approaches
A classic example of such a constraint is found in CPD [15] and its extensions [16, 17, 18]. CPD uses a Gaussian Mixture Model to induce a displacement field from the target to the source points and uses local motion coherence to constrain the field such that nearby target points move together. CPD achieves this, however, via a global smoothing constraint, which makes it locally inflexible and therefore unsuitable for articulated deformations in 3D human data, scenes with multiple objects, and biological data [19].
In this work, we introduce a Jacobian orthogonality loss and show that it is a sufficient condition for preserving the kNN graph of the data. Jacobian orthogonality is introduced as a penalty \(|\mathbf{J}_{\mathbf{x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}|\), where \(\mathbf{J}_{\mathbf{x}}\) is the Jacobian matrix at a point \(\mathbf{x}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. The penalty has been proposed in other contexts as well, such as unsupervised disentanglement [20] and medical image registration [21, 22].
In [21], the finite difference method is employed to compute the Jacobian penalty for the B-splines warping transformation, and mutual information of corresponding voxel intensities is used as the similarity measure. Instead of using finite difference for the Jacobian penalty, which produces a numerical approximation of first-order derivatives, the authors of [22] derive an analytical derivative specific to the multidimensional B-splines case. Such approaches however are limited to low dimensions by the nature of the transformations used, the way in which the Jacobian penalty is computed, and their proposed similarity measures.
### Contributions
To address these limitations, we use Hutchinson's estimator [20, 23] for fast computation of the Jacobian loss for high-dimensional point clouds, a scalable residual neural network (ResNet) [24] architecture as our warping transformation, and geometry-aware statistical distances. The choice of a ResNet with identity block \(\phi(x)=x+\delta(x)\) is natural since we view alignment, similar to CPD, as a regularized movement of data points along a displacement vector field, which in our case is simply \(\phi(x)-x=\delta(x)\). It is also worth mentioning that ResNets can learn the identity mapping more easily. Further discussion of this choice is given in section 2.2. Moment-matching ResNet (MM-Res) [5] uses a similar ResNet architecture with RBF-kernel maximum mean discrepancy as its similarity measure [25, 26]; however, no topological constraints are imposed to preserve the topological structure of the transformed data or to limit the nature of the learned transformation, as shown in Figure 1. Additionally, while maximum mean discrepancy is a geometry-aware distance, we address some of its limitations by incorporating Sinkhorn divergence into our framework [27].
Figure 1: Stanford Bunny example showing the effect of the Jacobian penalty on the learned transformation.
To elaborate further, we first start by defining Maximum Mean Discrepancy (MMD):
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{3}\]
where \(\alpha,\beta\in M_{1}^{+}(X)\) are unit-mass positive empirical distributions on a feature space \(X\), \(\zeta=\alpha-\beta\), and \(k(\mathbf{x},\mathbf{y})\) is a kernel function. MM-Res uses an RBF kernel, which is suitable for high-dimensional Euclidean feature spaces (e.g. \(X\subset\mathbb{R}^{n}\)) and keeps training complexity low as it scales up to large batches. Nonetheless, such a kernel blinds the model to details smaller than its standard deviation, and the network's gradient suffers from the well-known vanishing gradient problem. One simple solution is to decrease the standard deviation of the kernel; however, this introduces another issue, namely, the target points will not be properly attracted to source points [28]. In practice, this makes such a framework incapable of learning simple deformations with sizable translations, as we show in section 4. Optimal transport (OT) losses do not typically suffer from this issue and produce more stable gradients; however, such losses require solving computationally costly linear programs. A well-known efficient approximation of the OT problem is entropic-regularized \(\texttt{OT}_{\epsilon}\) [29]; for \(\epsilon>0\), it is defined as:
\[\texttt{OT}_{\epsilon}(\alpha,\beta):=\min_{\pi_{1}=\alpha,\pi_{2}=\beta}\int _{X^{2}}C(\mathbf{x},\mathbf{y})d\pi+\epsilon\texttt{KL}(\pi|\alpha\times\beta) \tag{4}\]
where \(C\) is a cost function (typically the Euclidean distance), \((\pi_{1},\pi_{2})\) denotes the two marginals of the coupling measure \(\pi\in M_{1}^{+}(X^{2})\), and KL is the KL-divergence. The solution to this formulation can be computed efficiently using the Sinkhorn algorithm as long as \(\epsilon>0\). By setting \(\epsilon\) to 0, this minimization problem reduces to standard OT. Sinkhorn divergence combines the advantages of MMD and OT and is defined as:
\[S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{5}\]
The authors of [29] show that:
\[\lim_{\epsilon\to 0}S_{\epsilon}(\alpha,\beta)=\texttt{OT}(\alpha,\beta) \tag{6}\]
and
\[\lim_{\epsilon\rightarrow\infty}S_{\epsilon}(\alpha,\beta)=\frac{1}{2}\texttt{MMD}_{-C}^{2}(\alpha,\beta) \tag{7}\]
where \(C\) is the kernel used by MMD.
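A minimal log-domain sketch of \(\texttt{OT}_{\epsilon}\) and the resulting Sinkhorn divergence, assuming uniform empirical measures and a squared-Euclidean cost; the helper names, iteration count, and the choice to report the transport cost \(\langle P, C\rangle\) are our own illustrative assumptions:

```python
import numpy as np

def _logsumexp(M, axis):
    # numerically stable log-sum-exp along one axis
    mx = M.max(axis=axis, keepdims=True)
    return np.squeeze(mx, axis) + np.log(np.exp(M - mx).sum(axis=axis))

def sinkhorn_ot(x, y, eps=0.1, iters=200):
    """Entropic OT cost between uniform empirical measures on x (n x d)
    and y (m x d), squared-Euclidean cost, log-domain Sinkhorn updates."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    n, m = C.shape
    log_a, log_b = -np.log(n) * np.ones(n), -np.log(m) * np.ones(m)
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # alternating dual-potential updates
        f = -eps * _logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * _logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    # recover the transport plan and its transport cost
    P = np.exp((f[:, None] + g[None, :] - C) / eps + log_a[:, None] + log_b[None, :])
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=0.1):
    """S_eps = OT_eps(x, y) - (OT_eps(x, x) + OT_eps(y, y)) / 2, as in eq. (5)."""
    return sinkhorn_ot(x, y, eps) - 0.5 * (sinkhorn_ot(x, x, eps) + sinkhorn_ot(y, y, eps))
```

By construction the divergence vanishes for identical samples and grows with the displacement between the two clouds, which is what makes it usable as an alignment loss.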
In the following section, we review other related methods.
### Other related work
Several point cloud registration approaches have been proposed. Thin plate spline functions-based techniques preserve the local topology via local rigidity on the surface of a deformed shape; however, these approaches are not scalable
to large datasets and are typically limited to 3-dimensional point clouds [30, 31, 32, 33, 34, 35]. To address these limitations, a deep learning paradigm for point cloud registration has been adopted. Deep learning-based approaches can be divided into two categories, namely, feature-based and end-to-end learning. In feature-based methods, a neural network is used for feature extraction. By developing sophisticated network architectures or loss functions, these methods aim to estimate robust correspondences from the learned distinctive features [30, 36, 37, 38]. While feature-based learning typically involves elaborate pipelines with various steps such as feature extraction, correspondence estimation, and registration, end-to-end learning methods combine the various steps in one objective and try to solve the registration problem directly by converting it to a regression problem [39, 40]. For example, [39] employs a key point detection method while simultaneously estimating relative pose.
Another class of methods is graph-matching techniques, which formulate registration as a quadratic assignment problem (QAP) [40]. The main challenge for such methods is finding efficient approximations to the otherwise NP-hard QAP. Congruent Sets Gaussian Mixture (CSGM) [41] uses a linear program to solve the graph-matching problem and applies it to the cross-source point cloud registration task. Another approach is the high-order graph of [42], which uses an integer projection algorithm to optimize the objective function in the integer domain. Finally, the Factorized Graph Matching (FGM) method [43] factorizes the large pairwise affinity matrix into smaller matrices; the graph-matching problem is then solved with a simple path-following optimization algorithm.
## 2 Proposed model
### Methodology
In our case, we parametrize the transformation \(\phi\) as a residual neural network and formulate the optimization problem as:
\[\mathcal{L}(\theta)=\mathcal{L}_{1}+\lambda\mathcal{L}_{2} \tag{8}\]
where \(\mathcal{L}_{1}\) is the alignment loss \(D(\phi(Y;\theta),X)\), \(\lambda\) is a hyperparameter, and \(\mathcal{L}_{2}\) is the topology-preserving loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{y}\in T}|\mathbf{J}_{\mathbf{y}}^{\top}\mathbf{J}_{\mathbf{y}}-\mathbf{I}_{d}| \tag{9}\]
where \(\mathbf{J}_{\mathbf{y}}\) is the Jacobian matrix at the point \(\mathbf{y}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. In section 2.4 we prove that orthogonality of the Jacobian matrix is indeed a sufficient condition for preserving the kNN graph of the data. We use two statistical distances, namely, Sinkhorn divergence and maximum mean discrepancy. Sinkhorn divergence is a computationally efficient approximation of the Wasserstein distance in high dimensions and converges to maximum mean discrepancy.
\[\mathcal{L}_{1}(\theta)=S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}(\texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{10}\]
where \(\texttt{OT}_{\epsilon}\) is the entropic optimal transport with \(L_{2}\)-norm cost, and \(\alpha\) and \(\beta\) are measures over the reference and target distributions, respectively. The measures \(\alpha\) and \(\beta\) are unknown and are accessed only via the samples \(\mathbf{R}\) and \(\mathbf{T}\), respectively. Although \(S_{\epsilon}(\alpha,\beta)\) interpolates to MMD as \(\epsilon\) goes to infinity, we still maintain an efficient standalone MMD distance for data where MMD performs better than the Wasserstein distance, avoiding the interpolation overhead. Specifically, we use Gaussian-based MMD:
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{11}\]
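The empirical counterpart of this quantity, for samples \(X\) and \(Y\) with a Gaussian kernel, can be sketched as follows; `gaussian_mmd` is our own illustrative helper, not the paper's implementation:

```python
import numpy as np

def gaussian_mmd(X, Y, sigma=1.0):
    """Empirical estimate of (1/2) * integral of k d(alpha-beta) d(alpha-beta)
    with a Gaussian kernel, for samples X (n x d) and Y (m x d)."""
    def k(A, B):
        # Gaussian kernel matrix between two point sets
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    # expanding zeta = alpha - beta gives the three kernel-mean terms
    return 0.5 * (k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())
```

The estimate is zero for identical samples and positive once the two clouds separate on the kernel's length scale \(\sigma\).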
### Architecture
We use a simple ResNet identity block with a skip connection as our transformation, where the output dimension is equal to the input dimension and the output is calculated as \(\phi(\mathbf{y};\theta)=\mathbf{y}+\delta(\mathbf{y};\theta)\), where \(\delta\) is a standard multi-layer perceptron (MLP) with LeakyReLU activation functions and \(\theta\) represents the trainable weights of the network. The ResNet identity block is chosen for two reasons. First, biasing \(\theta\) toward small values via weight decay, or initializing the output layer from a distribution with mean zero and a small standard deviation, minimizes the contribution of \(\delta(\mathbf{y};\theta)\) to the final transformation, which makes \(\phi(\mathbf{y};\theta)\) close to the identity by design. Second, since we take a similar approach to CPD and view the alignment transformation as a regularized movement of data points along a displacement vector field \(F\), the identity block is mathematically convenient: a displacement vector is the difference between the final position \(\phi(\mathbf{y};\theta)\) (transformed point) and the initial position \(\mathbf{y}\) (data point), so \(F(\mathbf{y})=\phi(\mathbf{y};\theta)-\mathbf{y}=\delta(\mathbf{y};\theta)\), and we only need to model \(\delta(\mathbf{y};\theta)\) rather than \(\phi(\mathbf{y};\theta)-\mathbf{y}\) without a skip connection.
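A sketch of such an identity block in PyTorch; the depth, the hidden width of 50, and the near-zero initialization of the output layer are assumptions consistent with the description above, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ResIdentityBlock(nn.Module):
    """phi(y) = y + delta(y): an MLP residual delta with LeakyReLU
    activations; the output layer is initialized near zero so that
    phi starts close to the identity."""
    def __init__(self, d, hidden=50):
        super().__init__()
        self.delta = nn.Sequential(
            nn.Linear(d, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, d),
        )
        # small output weights => delta(y) ~ 0 => phi ~ identity at init
        nn.init.normal_(self.delta[-1].weight, std=1e-4)
        nn.init.zeros_(self.delta[-1].bias)

    def forward(self, y):
        return y + self.delta(y)
```

At initialization the block is nearly the identity map, so training starts from an (almost) undeformed displacement field.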
### Orthogonal Jacobian preserves kNN graph
In this section, we show that orthogonality of the Jacobian matrix evaluated at the data points is a sufficient condition for preserving the kNN graph of the data. A vector-valued function \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) preserves the kNN graph of data points \(X\subset\mathbb{R}^{n}\) if, for every two points \(\mathbf{v}\) and \(\mathbf{w}\) that are in some small \(\epsilon\)-neighborhood of \(\mathbf{u}\), the following holds:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}<||\mathbf{u}-\mathbf{w}||_{2}^{2}\rightarrow||F(\mathbf{u})-F(\mathbf{v})||_{2}^{2}<||F(\mathbf{u})-F(\mathbf{w})||_{2}^{2}, \tag{12}\]
where \(||\cdot||_{2}^{2}\) is the squared Euclidean distance. Without loss of generality, we choose two points \(\mathbf{w}\), \(\mathbf{v}\) that lie in an \(\epsilon\)-neighborhood of the point \(\mathbf{u}\) and linearize the vector field \(F\) around \(\mathbf{u}\) such that:
\[F(\mathbf{x};\mathbf{u})\approx F(\mathbf{u})+\mathbf{J}_{\mathbf{u}}(\mathbf{x }-\mathbf{u}), \tag{13}\]
where \(\mathbf{J}_{\mathbf{u}}\) is the Jacobian matrix evaluated at point \(\mathbf{u}\).
The squared distance of \(\mathbf{u}\) and \(\mathbf{v}\) is:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}=(\mathbf{u}-\mathbf{v})^{\top}(\mathbf{u}- \mathbf{v})=\sum_{i}^{n}\left(\mathbf{u}_{i}-\mathbf{v}_{i}\right)^{2} \tag{14}\]
Similarly, the squared distance between \(F(\mathbf{u};\mathbf{u})\) and \(F(\mathbf{v};\mathbf{u})\) computes as follows
\[\begin{array}{rcl}||F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u})||_{2}^{2}&=&(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))^{\top}(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))\\ &=&(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=&(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=&(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u})\\ &=&(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}-\mathbf{u})=||\mathbf{u}-\mathbf{v}||_{2}^{2}\end{array}\]
The last step follows from the orthogonality of \(\mathbf{J}_{\mathbf{u}}\), i.e. \(\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}=\mathbf{I}\). Since the same computation applies to \(\mathbf{w}\), squared distances within the \(\epsilon\)-neighborhood of \(\mathbf{u}\) are preserved and condition (12) holds.
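A quick numeric sanity check of this argument: a map whose Jacobian is orthogonal everywhere (here, a fixed 2-D rotation, our own toy example) leaves all pairwise distances, and hence the kNN graph, unchanged:

```python
import numpy as np

# A linear map with orthogonal Jacobian (a rotation) should leave all
# pairwise distances unchanged, so the kNN graph is preserved.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
F = lambda x: x @ R.T                     # Jacobian of F is R everywhere

pts = np.random.default_rng(0).normal(size=(10, 2))
d_before = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
mapped = F(pts)
d_after = np.linalg.norm(mapped[:, None] - mapped[None, :], axis=-1)
assert np.allclose(d_before, d_after)     # distances, hence kNN graph, preserved
```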
### Jacobian Loss Via Finite Difference
Given a vector-valued function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) and a data batch \(\mathbf{X}\in\mathbb{R}^{m\times d}\), the Jacobian \(\mathbf{J}_{\mathbf{X}}\) of \(F\) at the points of \(\mathbf{X}\) is an \(m\times d\times d\) tensor. It is possible to compute \(\mathbf{J}_{\mathbf{X}}\) analytically using autodifferentiation modules; however, such computation is highly inefficient, so we use a numerical approximation.
Given a \(d\)-dimensional vector \(\mathbf{x}=[x_{1},...,x_{d}]\), the partial first derivative of \(F\) with respect to \(x_{i}\) is:
\[\frac{\partial F}{\partial x_{i}}=\lim_{\epsilon\to 0}\frac{F(\mathbf{x}+ \epsilon e_{i})-F(\mathbf{x})}{\epsilon}, \tag{15}\]
where \(e_{i}\) is a standard basis vector (i.e. only the \(i\)th component equals 1 and the rest are zero). This can be approximated numerically using a small \(\epsilon\). The Jacobian matrix \(\mathbf{J}_{\mathbf{x}}\) is simply \([\frac{\partial F}{\partial x_{1}},...,\frac{\partial F}{\partial x_{d}}]\). To ensure the orthogonality of the Jacobian at \(\mathbf{X}\), we minimize the following loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{x}\in\mathbf{X}}|\mathbf{J}_{\mathbf{ x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}| \tag{16}\]
This process could be computed efficiently in a few lines of code as indicated in algorithm 1.
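Algorithm 1 itself is not reproduced in this excerpt; a minimal PyTorch sketch of the finite-difference Jacobian penalty of equation (16) (our own illustrative version, not the authors' code) could look like:

```python
import torch

def jacobian_loss_fd(F, X, eps=1e-3):
    """Finite-difference estimate of (1/m) * sum_x |J_x^T J_x - I_d|
    (eq. 16). X: (m, d) batch; F: R^d -> R^d applied row-wise."""
    m, d = X.shape
    FX = F(X)
    cols = []
    for i in range(d):
        e = torch.zeros(d); e[i] = eps
        # i-th column of the Jacobian, for every point in the batch
        cols.append((F(X + e) - FX) / eps)
    J = torch.stack(cols, dim=-1)            # (m, d, d)
    JtJ = J.transpose(1, 2) @ J              # (m, d, d)
    I = torch.eye(d).expand(m, d, d)
    return (JtJ - I).abs().sum(dim=(1, 2)).mean()
```

For the identity map the penalty is zero, while a uniform scaling \(F(x)=2x\) gives \(\mathbf{J}^{\top}\mathbf{J}=4\mathbf{I}\) and hence a penalty of \(3d\) per point.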
### Training
The training process (algorithm 2) takes advantage of two sets of \(d\)-dimensional vectors (points), a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\). First, we sample points from \(\mathbf{R}\) and \(\mathbf{T}\) and create two matrices \(\mathbf{X}\) and \(\mathbf{Y}\). We feed \(\mathbf{Y}\) to the model and obtain \(\hat{\mathbf{Y}}\). Under the GMM assumption, we compute the GMM posterior probability as a similarity matrix and estimate \(\mathcal{L}_{1}\) as the negative log-likelihood. For the Sinkhorn divergence approach, we compute equation (10). We use the SoftkNN operator to construct the kNN graph for both the input \(\mathbf{Y}\) and the output \(\hat{\mathbf{Y}}\) and compute \(\mathcal{L}_{2}\) as the mean squared error between the two. Finally, we use backpropagation by minimizing the loss \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\) until convergence.
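Putting the pieces together, the training loop described above can be sketched end to end as follows; the toy data, the MMD variant of \(\mathcal{L}_{1}\), and all hyperparameter values are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

# Minimal end-to-end sketch: align a shifted 2-D Gaussian cloud with a
# residual net, an RBF-MMD alignment loss, and a finite-difference
# Jacobian penalty. Hyperparameters are illustrative only.
torch.manual_seed(0)
X = torch.randn(256, 2)            # reference sample from P
Y = torch.randn(256, 2) + 2.0      # target sample from Q (shifted)

delta = nn.Sequential(nn.Linear(2, 50), nn.LeakyReLU(), nn.Linear(50, 2))
phi = lambda y: y + delta(y)       # ResNet identity block

def mmd(a, b, s=1.0):
    k = lambda u, v: torch.exp(-torch.cdist(u, v) ** 2 / (2 * s * s))
    return 0.5 * (k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean())

def jac_penalty(y, eps=1e-3):
    # finite-difference Jacobian of phi, then |J^T J - I|
    cols = [(phi(y + eps * torch.eye(2)[i]) - phi(y)) / eps for i in range(2)]
    J = torch.stack(cols, dim=-1)
    return (J.transpose(1, 2) @ J - torch.eye(2)).abs().mean()

opt = torch.optim.Adam(delta.parameters(), lr=0.01)
with torch.no_grad():
    initial_mmd = mmd(X, phi(Y)).item()
for step in range(200):
    opt.zero_grad()
    loss = mmd(X, phi(Y)) + 1e-3 * jac_penalty(Y)   # L = L1 + lambda * L2
    loss.backward()
    opt.step()
with torch.no_grad():
    final_mmd = mmd(X, phi(Y)).item()
```

After a few hundred steps the alignment loss should drop well below its initial value while the Jacobian penalty keeps the displacement field close to an isometry.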
### Stochastic Approximation of Orthogonal Jacobian Loss
Using finite differences to compute the Jacobian for low-dimensional point clouds is efficient; however, the computational cost increases linearly with the dimension of the data. Thus, an approximate estimate with constant computational cost is introduced.
Given a vector-valued function \(F\), and a sample \(\mathbf{x}\), we would like to minimize the following:
\[\mathcal{L}_{\mathbf{J}}(F)=|\mathbf{J}^{\top}\mathbf{J}\circ(1-\mathbf{I})| _{2}=\sum_{i\neq j}\frac{\partial F_{i}}{\partial x_{j}}\frac{\partial F_{j}} {\partial x_{i}} \tag{17}\]
Following [20, 23], \(\mathcal{L}_{\mathbf{J}}(F)\) can be approximated with Hutchinson's estimator:
\[\mathcal{L}_{\mathbf{J}}(F)=\texttt{Var}_{r}(r_{\epsilon}^{\top}(\mathbf{J}^{ \top}\mathbf{J})r_{\epsilon})=\texttt{Var}_{r}((\mathbf{J}r_{\epsilon})^{\top }(\mathbf{J}r_{\epsilon})) \tag{18}\]
where \(r_{\epsilon}\) denotes a scaled Rademacher vector (each entry is either \(-\epsilon\) or \(+\epsilon\) with equal probability) where \(\epsilon>0\) is a hyperparameter that controls the granularity of the first directional derivative estimate and \(\texttt{Var}_{r}\) is the variance. It
is worth noting that this does not guarantee orthonormality, only orthogonality. In practice, however, we find that such an estimator produces comparable results to the standard finite difference method and could be efficiently implemented in Pytorch as shown in algorithm 3.
```
Input: \(\mathbf{R}\) and \(\mathbf{T}\) point sets, blurring factor \(\sigma\), step size \(\epsilon\), regularisation \(\lambda\), and batch size \(b\)
Output: Trained model
\(\triangleright\) Sample mini-batches of size \(b\) from \(\mathbf{R}\) and \(\mathbf{T}\)
while \((\mathbf{X},\mathbf{Y})\in(\mathbf{R},\mathbf{T})\) until convergence do
    \(\phi(\mathbf{Y})\leftarrow\mathbf{Y}+\delta(\mathbf{Y})\)
    if loss == "sinkhorn" then
        \(\mathcal{L}_{1}=\mathtt{S}(\mathbf{X},\phi(\mathbf{Y});\sigma^{2})\)
    else
        \(\mathcal{L}_{1}=\mathtt{MMD}(\mathbf{X},\phi(\mathbf{Y});\sigma^{2})\)
    \(\mathbf{J}_{\mathbf{Y}}[i,:]=\frac{\delta(\mathbf{Y}+\epsilon e_{i})-\delta(\mathbf{Y})}{\epsilon}\)
    \(\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{y}\in\mathbf{Y}}|\mathbf{J}_{\mathbf{y}}^{\top}\mathbf{J}_{\mathbf{y}}-\mathbf{I}_{d}|\)
    \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\)
    \(\triangleright\) backpropagation step
    Minimize(\(\mathcal{L}\))
```
**Algorithm 2** Training kNN-Res
### Parameters Selection
The proposed model has three main hyperparameters, namely \(\sigma\), \(\epsilon\), and \(\lambda\). In the case of Sinkhorn divergence, \(\sigma>0\) is the blur (interpolation) parameter between OT and MMD, with a default value of \(0.01\) for datasets that lie in the first quadrant of the unit hypercube (min-max normalized data). Decreasing \(\sigma\) has the effect of solving for an exact OT, which typically produces very accurate registration; however, this comes at the cost of slower convergence. In the cases where it is more advantageous to use MMD, \(\sigma\) represents the standard deviation of the Gaussian kernel. \(\epsilon>0\) is the finite difference step size and controls the radius of topology preservation around each point. It is worth noting that a large \(\epsilon\) value that covers all the data tends to produce globally isomorphic transformations. \(\lambda>0\) is simply a regularization parameter that prioritizes regularization over alignment and is typically less than \(0.01\). An additional hyperparameter \(k\) is introduced when using the stochastic approximation of Jacobian orthogonality for high-dimensional data. This hyperparameter determines the number of Rademacher vectors sampled to estimate the Jacobian orthogonality penalty. Generally, a larger \(k\) tends to produce a more accurate estimate; in practice, \(k=5\) appears sufficient for the datasets we experimented with.
```
1defstochastic_orth_jacobian(G,z,epsilon=0.01):
2''
3InputG:FunctiontocomputetheJacobianPenaltyfor.
4Inputz:(batchsize,d)InputtoGthattheJacobianis
5computedw.r.t.
6Inputk:numberofdirectionstosample(default5)
7Inputepsilon:(default0.1)
8Output:mean(\(|\mathbf{J}_{X}^{T}\mathbf{J}_{X}-\mathbf{I}_{d}|\))
9'
10r=torch.randint(0,2,size=torch.Size((k,*z.size()),))
11#r:rademacherrandomvector
12r[r==0]=-1
13vs=epsilon*r
14diffs=[G(z+v)-Gzforvinvs]
15#std:stochasticfinitediffs
16sfd=torch.stack(diffs)/epsilon
17loss=torch.var(sfd,dim=0).max()
18returnloss
19
```
**Algorithm 3** PyTorch code for the Hutchinson approximation of the Jacobian off-diagonal penalty at data points \(z\).
## 3 Experiments
In this section, we provide experimental results on several datasets, namely, the Chui-Rangarajan synthesized dataset used in [31, 44, 45] and the single-cell RNA data used in [5]. The Chui-Rangarajan synthesized dataset comprises two shapes: a fish shape and a Chinese character shape. Each shape is subjected to 5 increasing levels of deformation using an RBF kernel, and each deformation level contains 100 different samples. The samples are generated using different RBF coefficients, which are sampled from a zero-mean normal distribution with standard deviation \(\sigma\), whereby increasing \(\sigma\) leads to generally larger deformations.
### Results on 2D Data
We use the root-mean-squared error (RMSE) between the transformed data \(\hat{y}_{i}\) and the ground truth \(y_{i}\) available from the Chui-Rangarajan synthesized dataset: \(error=\sqrt{\frac{1}{m}\sum_{i=1}^{m}(\hat{y}_{i}-y_{i})^{2}}\).
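This error can be computed in one line; `rmse` is our own illustrative helper that sums squared coordinate differences per point before averaging (the paper's exact reduction convention is not spelled out):

```python
import numpy as np

def rmse(Y_hat, Y):
    """Root-mean-squared error between transformed points Y_hat (m x d)
    and ground-truth points Y (m x d)."""
    return float(np.sqrt(np.mean(np.sum((Y_hat - Y) ** 2, axis=-1))))
```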
It is important to note that such ground-truth correspondence is absent during training and is only available at test time. Figures 2 and 3 show the initial point set distributions and their corresponding aligned versions for the Chinese character and the fish examples, respectively. We also report results for our kNN-Res, MM-Res [5], CPD [15], TRS-RPM [31], RPM-LNS [45], and GMMREG [32] over 5 deformation levels and 100 samples per level. Figures 4a and 4b show results for the tested models on the Chinese character and fish datasets, respectively. We notice that after a certain level of
non-rigid deformation, MM-Res is unable to converge. For our kNN-Res, we set \(\epsilon=0.005\), \(\lambda=10^{-5}\), \(\sigma=0.001\), and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer [46] and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and a minimum learning rate of \(5\times 10^{-5}\). Qualitatively, the grid-warp representations in the second column of figures 2 and 3 indicate that our estimated transformations are, at least visually, "simple" and "coherent". Furthermore, to quantitatively assess neighborhood preservation, we use the Hamming loss \(\mathcal{L}_{H}\) to estimate the difference between the kNN graph before and after transformation:
\[\mathcal{L}_{H}=\sum_{i=1}^{m}\sum_{j=1}^{m}I(\hat{p}_{i,j}^{k}\neq p_{i,j}^{k})\]
where \(p_{i,j}^{k}\) is the \((i,j)\) element of the kNN graph matrix before transformation, \(\hat{p}_{i,j}^{k}\) is the corresponding element after transformation, and \(I\) is the indicator function. Figures 5b and 5a show the difference in neighborhood preservation between MM-Res and our kNN-Res for the Chinese character and fish datasets, respectively, for three different levels of deformation.
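The Hamming loss between the two kNN graphs can be computed directly from the adjacency matrices; `hamming_loss` is our own illustrative helper using the straightforward graph construction described in section 1.1:

```python
import numpy as np

def hamming_loss(T, T_hat, k=5):
    """Number of disagreeing edges between the kNN graphs of the point
    set before (T) and after (T_hat) transformation, both (m x d)."""
    def graph(P):
        # directed kNN adjacency matrix under Euclidean distance
        d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        idx = np.argsort(d2, axis=1)[:, :k]
        A = np.zeros(d2.shape, dtype=int)
        np.put_along_axis(A, idx, 1, axis=1)
        return A
    return int((graph(T) != graph(T_hat)).sum())
```

An affine map such as \(2\mathbf{y}+1\) preserves neighbor ordering and yields a loss of zero, while moving a single point far from its neighbors changes its edges and those of the points that pointed to it.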
Moreover, despite the additional topology regularization term, our kNN-Res generally incurred smaller alignment errors and was able to converge under large deformation levels.
### Results on High-Dimensional CyTOF Data
Cytometry by time of flight (CyTOF) provides the means for quantification of multiple cellular components; however, it is susceptible to the so-called batch effect problem, where systematic non-biological variations during
Figure 2: The Chinese character deformation example: Top row represents original and deformed sets, Mid row represents the vector field, and Bottom row is the final alignment.
the measuring process result in a distributional shift of otherwise similar samples. This effect breaks the intra-comparability of samples, which is a crucial component of downstream tasks such as disease diagnosis, and typically requires the intervention of human experts to remove these batch effects. The CyTOF dataset used in our experiments was curated by the Yale New Haven Hospital. Two patients, under two conditions, were measured on two different days. All eight samples have 25 markers, each representing a separate dimension ('CD45', 'CD19', 'CD127', 'CD4', 'CD8a', 'CD20', 'CD25', 'CD278', 'TNFa', 'Tim3', 'CD27', 'CD14', 'CCR7', 'CD28', 'CD152', 'FOXP3', 'CD45RO', 'INFg', 'CD223', 'GzB', 'CD3', 'CD274', 'HLADR', 'PD1', 'CD11b'), and between 1,800 and 5,000 cells (points) per sample. The split is done such that samples collected on day 1 are the target, and samples collected on day 2 are
Figure 5: The figures show Hamming loss for the following levels of deformations: (left) level 1, (mid) level 2, (right) level 3.
Figure 6: The blue and red dots represent 1st and 2nd principal components of reference (patient #2 on day 2) and the target samples (patient #2 on day 1) correspondingly.
the reference, resulting in four alignment experiments.
We follow the exact preprocessing steps described in [5]. To adjust the dynamic range of the samples, a standard preprocessing step for CyTOF data is to apply a log transformation [47]. Additionally, CyTOF data typically contain a large number of zero values (40%) due to instrumental instability, which are not considered biological signals. Thus, a denoising autoencoder (DAE) is used to remove these zero values [48]. The encoder of the DAE is comprised of two fully connected layers with ReLU activation functions. The decoder (output) is a single linear layer with no activation function. All layers of the DAE have the same number of neurons as the dimensionality of the data. Next, each cell is multiplied by an independent Bernoulli random vector with probability \(p=0.8\), and the DAE is trained to reconstruct the original cell using an MSE loss. Furthermore, the DAE is optimized via RMSprop with weight decay regularization. The zero values in both reference and target are then removed using the trained DAE. Finally, each feature in both target and reference samples is independently standardized to have zero mean and unit variance. For our kNN-Res, we set \(\epsilon=0.05\), \(\lambda=0.1\), \(\sigma=0.04\), \(k=5\) for Hutchinson's estimator, and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and a minimum learning rate of \(5\times 10^{-5}\). Figure 6 shows the first two principal components of the data before and after alignment using two kNN-Res models with different lambdas. Although the two samples appear less aligned when using a large \(\lambda\), this comes with the benefit of preserving the topology of the data, as shown by the learned transformation in Figure 7, where points (cells) are moved in a coherent way.
This becomes clearer when looking at the marginals in Figure 13 in the appendix. In this experiment, we trained five models with five different
Figure 7: Point set transformation(alignment) of patient #2 sample on day 1 and day 2, shown in space of 1st and 2nd principal components.
lambdas ranging from 0 to 1. It is clear that a small \(\lambda\) favors alignment over faithfulness to the original distribution; however, increasing \(\lambda\) preserves the shape of the original data after transformation, which is desirable in biological settings. Results of the other experiments are provided in the Appendix.
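The preprocessing pipeline described above (log transform, Bernoulli corruption for DAE training, and per-marker standardization) can be sketched in NumPy as follows. This is our own illustrative sketch: the DAE itself is omitted, and `log1p` stands in for the paper's unspecified log offset.

```python
import numpy as np

def preprocess_cytof(sample: np.ndarray) -> np.ndarray:
    """Log-transform, then standardise each marker to zero mean and unit variance."""
    x = np.log1p(sample)                       # adjust the dynamic range
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def corrupt_for_dae(cells: np.ndarray, keep_prob: float = 0.8, rng=None) -> np.ndarray:
    """Multiply each cell by an independent Bernoulli mask, as in DAE training."""
    rng = rng or np.random.default_rng(0)
    return cells * rng.binomial(1, keep_prob, size=cells.shape)
```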
## 4 Discussion
### Implications
Point-set registration methods are typically used for computer vision problems, to align point clouds produced by either stereo vision or Light Detection and Ranging (LiDAR) devices (e.g., a Velodyne scanner), for instance to stitch scenes and align objects. These datasets are usually 2- or 3-dimensional, and hence such methods have had limited exposure to high-dimensional datasets. Biological data, on the other hand, are usually high-dimensional, and hence point-set registration methods do not directly translate to them. The proposed method in this study was tested on a 25-dimensional CyTOF dataset. However, in flow and mass cytometry, data can easily exceed 50 dimensions (markers). For instance, methods that combine protein marker detection with unbiased transcriptome profiling of single cells provide an even higher number of markers. These methods show that multimodal data analysis can achieve a more detailed characterization of cellular phenotypes than transcriptome measurements alone [49, 50] and hence have recently gained significant traction. Unfortunately, such data require more sophisticated batch normalization algorithms, since manual gating and normalization using marginal distributions become infeasible. It is worth mentioning that even when experts ensure that the marginal distributions are aligned, there is still no guarantee that the samples are aligned in the higher-dimensional space. Moreover, the alignment might result in nonlinear, non-smooth transformations that break biological relationships or introduce non-existent biological variability. The proposed method mitigates these issues and guarantees smooth transformations.
### Limitations
It is clear from the last step of the proof that an orthogonal Jacobian is too strong a condition for preserving the kNN graph:
\[(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u})=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}- \mathbf{u}) \tag{19}\]
The objective is satisfied by preserving the inequality, not the equality. In other words, it is necessary and sufficient for \(\mathbf{J}\) to preserve the kNN graph that the following holds:
\[\mathbf{u}^{\top}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{v}\rightarrow\mathbf{ u}^{\top}\mathbf{J}^{\top}\mathbf{J}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{J} ^{\top}\mathbf{J}\mathbf{v} \tag{20}\]
or
\[\langle\mathbf{u},\mathbf{u}\rangle\leq\langle\mathbf{v},\mathbf{v}\rangle \rightarrow\langle\mathbf{J}\mathbf{u},\mathbf{J}\mathbf{u}\rangle\leq \langle\mathbf{J}\mathbf{v},\mathbf{J}\mathbf{v}\rangle \tag{21}\]
Enforcing strict equality limits the kinds of transformations the model is capable of learning. Furthermore, even if the deformation could theoretically be expressed, such a penalty makes convergence unnecessarily slow. On the empirical side, we only have a limited number of experiments to test the proposed method. More experimentation and ablation are required to better understand the limits of our current approach and to learn how it fares on a wider selection of real-world data such as RNA-Seq.
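Condition (20) can be checked empirically for a given linear map: sort the sampled difference vectors by their input norms and verify that the output norms are sorted the same way. A small NumPy sketch (our own construction, not part of the paper's code):

```python
import numpy as np

def preserves_knn_inequalities(J: np.ndarray, diffs: np.ndarray) -> bool:
    """Check condition (20) on sampled difference vectors:
    ||u|| <= ||v||  must imply  ||J u|| <= ||J v||."""
    norms_before = (diffs ** 2).sum(axis=1)
    norms_after = ((diffs @ J.T) ** 2).sum(axis=1)
    order = np.argsort(norms_before)
    # Ordering is preserved iff the after-norms are nondecreasing in this order.
    return bool(np.all(np.diff(norms_after[order]) >= -1e-9))
```

A conformal map \(c\mathbf{R}\) (rotation \(\mathbf{R}\), scale \(c \neq 1\)) passes this check without being orthogonal, which illustrates the point above that orthogonality is stronger than needed.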
### Future Work
An important future direction is incorporating local or partial matching using modified alignment losses such as Gromov-Wasserstein distance. This should lead to a much more robust solution than global matching, especially in the case of outliers and missing objects. We also posit that solving point set registration under topological constraints such as preserving the kNN graph is naturally extendable to dimensionality reduction.
## 5 Conclusion
This paper presents a simple, scalable framework for point cloud registration. At its core, it consists of three components, namely (a) residual neural network with identity blocks as a parametrized displacement field, (b) Jacobian penalty as a topology-preserving loss, and (c) Sinkhorn Divergence as a sample-based, geometry-aware statistical distance. Additionally, by incorporating Hutchinson's estimator for the Jacobian loss, we show that our model is easily extensible to high dimensions with constant complexity. Furthermore, we offer both qualitative and quantitative analysis for synthetic and CyTOF datasets showing the flexibility and applicability of our model in multiple domains.
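For reference, the Hutchinson estimator mentioned above approximates a trace using only matrix-vector products, which is what makes the Jacobian penalty tractable in high dimensions. A generic sketch of the estimator (not the paper's exact Jacobian-cost implementation):

```python
import numpy as np

def hutchinson_trace(matvec, dim: int, n_samples: int = 2000, rng=None) -> float:
    """Estimate tr(A) as the average of v^T A v over Rademacher probe vectors v,
    requiring only the matrix-vector product `matvec` (never the full matrix A)."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        total += float(v @ matvec(v))
    return total / n_samples
```

Because only `matvec` is needed, the same idea applies when the "matrix" is a Jacobian accessed through vector-Jacobian products, giving the constant-time stochastic approximation referenced in the text.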
# Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks

Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher. arXiv:2310.20579, 2023-10-31. http://arxiv.org/abs/2310.20579v1
###### Abstract
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets, and explore its dependence on the initialization, width, and depth of fully connected neural networks. We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training. Notably, for the special setting of linearized network, our analysis indicates that the squared gradient norm (and therefore the escalation of privacy loss) is tied directly to the per-layer variance of the initialization distribution. By using this analysis, we demonstrate that privacy bound improves with increasing depth under certain initializations (LeCun and Xavier), while degrades with increasing depth under other initializations (He and NTK). Our work reveals a complex interplay between privacy and depth that depends on the chosen initialization distribution. We further prove excess empirical risk bounds under a fixed KL privacy budget, and show that the interplay between privacy utility trade-off and depth is similarly affected by the initialization.
## 1 Introduction
Deep neural networks (DNNs) in the over-parameterized regime (i.e., more parameters than data) perform well in practice, but the model predictions can easily leak private information about the training data under inference attacks such as membership inference attacks [44] and reconstruction attacks [17; 7; 29]. This leakage can be mathematically measured by the extent to which the algorithm's output distribution changes if DNNs are trained on a neighboring dataset (differing only in one record), following the differential privacy (DP) framework [23].
To train a differentially private model, a typical approach is to randomly perturb each gradient update in the training process, such as stochastic gradient descent (SGD), which leads to the most widely applied DP training algorithm in the literature: DP-SGD [2]. To be specific, in each step, DP-SGD employs gradient clipping and adds calibrated Gaussian noise, yielding a differential privacy guarantee that scales with the noise multiplier (i.e., the per-dimension Gaussian noise standard deviation divided by the clipping threshold) and the number of training epochs. However, this privacy bound [2] is overly general, as its gradient clipping artificially neglects the network properties (e.g., width and depth) and training schemes (e.g., initializations). Accordingly, a natural question arises in the community:
_How does the over-parameterization of neural networks (under different initializations) affect the privacy bound of the training algorithm over_ worst-case _datasets?_
To answer this question, we circumvent the difficulties of analyzing gradient clipping and instead _algorithmically_ focus on analyzing privacy for the Langevin diffusion algorithm _without_ gradient clipping or a Lipschitz assumption on the loss function. 2 This avoids an artificial setting in DP-SGD [2] where a constant sensitivity constraint is enforced for each gradient update, which makes the privacy bound insensitive to network over-parameterization. _Theoretically_, we prove that the KL privacy loss for Langevin diffusion scales with the expected gradient difference between training on any two worst-case neighboring datasets (Theorem 3.1). 3 By proving precise upper bounds on the expected \(\ell_{2}\)-norm of this gradient difference, we obtain KL privacy bounds for the fully connected neural network (Lemma 3.2) and its linearized variant (Corollary 4.2) that change with the network width, depth, and per-layer variance of the initialization distribution. We summarize our KL privacy bounds in Table 1 and highlight our key observations below.
Footnote 2: A key difference between this paper and existing privacy-utility analyses of Langevin diffusion [26] is that we analyze the algorithm without gradient clipping or a Lipschitz assumption on the loss function. Our results also readily extend to discretized noisy GD with constant step size (as discussed in Appendix E).
Footnote 3: We focus on KL privacy loss because it is a more relaxed distinguishability notion than standard \((\varepsilon,\delta)\)-DP, and therefore could be upper bounded even without gradient clipping. Moreover, KL divergence enables upper bound for the advantage (relative success) of various inference attacks, as studied in recent works [39; 28].
* Width always worsens privacy, under all the considered initialization schemes. Meanwhile, the interplay between network depth and privacy is much more complex and crucially depends on which initialization scheme is used and how long the training time is.
* Regarding specific initialization schemes, under a small per-layer variance at initialization (e.g., LeCun and Xavier), if the depth is large enough, our KL privacy bound for training the fully connected network (for a small amount of time) as well as the linearized network (for finite time) decays exponentially with increasing depth. To the best of our knowledge, this is the first time that an improvement of the privacy bound under over-parameterization has been observed.
We further perform numerical experiments (Section 5) on deep neural networks trained via noisy gradient descent to validate our privacy analyses. Finally, we analyze the privacy-utility trade-off for training linearized networks, and prove that the excess empirical risk bound (given any fixed KL privacy budget) scales with a lazy-training distance bound \(R\) (i.e., how close the initialization is to a minimizer of the empirical risk) and a gradient norm constant \(B\) throughout training (Corollary 6.4). By analyzing these two terms precisely, we prove that under certain initialization distributions (such as LeCun and Xavier), the privacy-utility trade-off strictly improves with increasing depth for linearized networks (Table 1). To the best of our knowledge, this is the first time that such a gain in privacy-utility trade-off due to over-parameterization (increasing depth) has been shown. Meanwhile, prior results only prove (nearly) dimension-independent privacy-utility trade-offs for such linear models [45; 32; 37]. This improvement demonstrates the unique benefits of our algorithmic framework and privacy-utility analysis in understanding the effect of over-parameterization.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Initialization & Variance \(\beta_{l}\) for layer \(l\) & Gradient norm constant \(B\) (7) & Approximate lazy training distance \(R\) (9) & Excess empirical risk under \(\varepsilon\)-KL privacy (Corollary 6.4) \\ \hline LeCun [34] & \(1/m_{l-1}\) & \multicolumn{3}{c}{\(\cdots\)} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of our bounds under different initialization schemes. (The remaining table entries are not recoverable from the source.)
### Related Works
**Over-parameterization in DNNs and NTK.** Theoretical demonstrations of the benefit of over-parameterization in DNNs appear in global convergence [3; 21] and generalization [4; 16]. Under proper initialization, the training dynamics of over-parameterized DNNs can be described by a kernel function, termed the neural tangent kernel (NTK) [31], which has stimulated a series of analyses of DNNs. Accordingly, over-parameterization has been demonstrated to be beneficial/harmful to several topics in deep learning, e.g., robustness [15; 54] and covariate shift [50]. However, the relationship between over-parameterization and privacy (in the differential privacy framework) remains largely an unsolved problem, as the training dynamics typically change [14] after adding new components to the privacy-preserving learning algorithm (such as DP-SGD [2]) to enforce privacy constraints.
**Membership inference privacy risk under over-parameterization.** A recent line of works [47; 48] investigates how over-parameterization affects theoretical and empirical privacy in terms of the membership inference advantage, and proves a novel trade-off between privacy and generalization error. These works are closest to our objective of investigating the interplay between privacy and over-parameterization. However, Tan et al. [47; 48] focus on proving upper bounds for an average-case privacy risk defined by the advantage (relative success) of a membership inference attack on models trained on datasets randomly sampled from a population distribution. By contrast, our KL privacy bound is heavily based on the strongest adversary model in the differential privacy definition, and holds under an arbitrary _worst-case_ pair of neighboring datasets differing only in one record. Our model setting (e.g., fully connected neural networks) is also quite different from that of Tan et al. [47; 48], and the employed analysis tools are accordingly different.
**Differentially private learning in high dimension.** Standard results for private empirical risk minimization [9; 46] and private stochastic convex optimization [11; 12; 5] show that the empirical and population risks contain an unavoidable factor depending on the model dimension \(d\). However, for unconstrained optimization, it is possible to avoid this dimension dependency in the risk bounds for certain classes of problems (such as generalized linear models [45]). Recently, a growing line of works proves dimension-independent excess risk bounds for differentially private learning by utilizing the low-rank structure of data features [45] or gradient matrices [32; 37] during training. Several follow-up works [33; 13] further explore techniques to enforce the low-rank property (via random projection) and boost the privacy-utility trade-off. However, all these works study a general high-dimensional private learning problem, rather than separating the effects of network choices such as width, depth, and initialization. Instead, our study focuses on the fully connected neural network and its linearized variant, which enables us to prove more precise privacy-utility trade-off bounds for these particular networks under over-parameterization.
## 2 Problem and Methodology
We consider the following standard multi-class supervised learning setting. Let \(\mathcal{D}=(\mathbf{z}_{1},\cdots,\mathbf{z}_{n})\) be an input dataset of size \(n\), where each data record \(\mathbf{z}_{i}=(\mathbf{x}_{i},\mathbf{y}_{i})\) contains a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and a label vector \(\mathbf{y}_{i}\in\mathcal{Y}=\{-1,1\}^{o}\) on \(o\) classes. We aim to learn a neural network output function \(\mathbf{f}_{\mathbf{W}}(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) parameterized by \(\mathbf{W}\) via empirical risk minimization (ERM)
\[\min_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathcal{D}):=\frac{1}{n}\sum_{i=1}^{n}\ell( \mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\,, \tag{1}\]
where \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\) is a loss function that reflects the approximation quality of model prediction \(f_{\mathbf{W}}(\mathbf{x}_{i})\) compared to the ground truth label \(\mathbf{y}_{i}\). For simplicity, throughout our analysis, we employ the cross-entropy loss \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=-\langle\mathbf{y},\log\text{softmax}(\mathbf{f}_{\mathbf{W}}(\mathbf{x}))\rangle\) for a multi-class network with \(o\geq 2\), and \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=\log(1+\exp(-\mathbf{y}\mathbf{f}_{\mathbf{W}}(\mathbf{x})))\) for a single-output network with \(o=1\).
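A direct NumPy transcription of these two loss functions (with a numerically stable log-softmax) may be helpful; this is our own sketch of the stated formulas:

```python
import numpy as np

def multiclass_loss(f: np.ndarray, y: np.ndarray) -> float:
    """Cross-entropy loss -<y, log softmax(f)> for an o-dimensional output f."""
    log_softmax = f - f.max() - np.log(np.exp(f - f.max()).sum())
    return float(-(y * log_softmax).sum())

def single_output_loss(f: float, y: float) -> float:
    """Logistic loss log(1 + exp(-y f)) for a single-output network, y in {-1, +1}."""
    return float(np.log1p(np.exp(-y * f)))
```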
**Fully Connected Neural Networks.** We consider the \(L\)-layer, multi-output, fully connected, deep neural network (DNN) with ReLU activation. Denote the width of hidden layer \(l\) as \(m_{l}\) for \(l=1,\cdots,L-1\). For consistency, we also denote \(m_{0}=d\) and \(m_{L}=o\). The network output \(f_{\mathbf{W}}(\mathbf{x})\coloneqq\mathbf{h}_{L}(\mathbf{x})\) is defined recursively as follows.
\[\mathbf{h}_{0}(\mathbf{x})=\mathbf{x};\quad\mathbf{h}_{l}(\mathbf{x})=\phi(\mathbf{W}_{l}\mathbf{h}_{l-1}(\mathbf{x}))\text{ for }l=1,\cdots,L-1;\quad\mathbf{h}_{L}(\mathbf{x})=\mathbf{W}_{L}\mathbf{h}_{L-1}(\mathbf{x})\,, \tag{2}\]
where \(\mathbf{h}_{l}(\mathbf{x})\) denotes the post-activation output at the \(l\)-th layer, and \(\{\mathbf{W}_{l}\in\mathbb{R}^{m_{l}\times m_{l-1}}:l=1,\ldots,L\}\) denotes the set of per-layer weight matrices of the DNN. For brevity, we denote the vector \(\mathbf{W}\coloneqq(\text{Vec}(\mathbf{W}_{1}),\ldots,\text{Vec}(\mathbf{W}_{L}))\in \mathbb{R}^{m_{1}\cdot d+m_{2}\cdot m_{1}+\cdots+o\cdot m_{L-1}}\), i.e., the concatenation of the vectorizations of the weight matrices of all layers, as the model parameter.
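The recursion in Eq. (2) corresponds to the following forward pass (a minimal sketch, where `weights` is the list \((\mathbf{W}_{1},\ldots,\mathbf{W}_{L})\)):

```python
import numpy as np

def forward(weights, x):
    """L-layer network of Eq. (2): ReLU hidden layers, linear output layer."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)   # post-activation h_l = phi(W_l h_{l-1})
    return weights[-1] @ h           # h_L = W_L h_{L-1}
```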
Linearized NetworkWe also analyze the following _linearized network_, which is used in prior works [35, 3, 41] as an important tool to (approximately and qualitatively) analyze the training dynamics of DNNs. Formally, the linearized network \(\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\) is a first-order Taylor expansion of the fully connected ReLU network at initialization parameter \(\mathbf{W}_{0}^{lin}\), as follows.
\[\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\equiv\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})+\frac{ \partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{\partial\mathbf{W}}\Big{|}_{\mathbf{W}=\mathbf{W}_{0}^{ lin}}\left(\mathbf{W}-\mathbf{W}_{0}^{lin}\right), \tag{3}\]
where \(\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})\) is the output function of the fully connected ReLU network (2) at initialization \(\mathbf{W}_{0}^{lin}\). We denote \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n}\ell\left( \mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{i})+\frac{\partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{ \partial\mathbf{W}}|_{\mathbf{W}=\mathbf{W}_{0}^{lin}}(\mathbf{W}-\mathbf{W}_{0}^{lin});\mathbf{y}_{i}\right)\) as the empirical loss function for training linearized network, by plugging (3) into (1).
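The linearization in Eq. (3) can be illustrated on a tiny single-output ReLU network with a finite-difference Jacobian. This is a self-contained sketch (all function names are ours) rather than the NTK machinery used in the analysis:

```python
import numpy as np

def relu_net(ws, x):
    """Tiny single-output ReLU network from Eq. (2)."""
    h = x
    for W in ws[:-1]:
        h = np.maximum(W @ h, 0.0)
    return float(ws[-1] @ h)

def unflatten(vec, shapes):
    """Split a flat parameter vector back into per-layer weight matrices."""
    ws, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        ws.append(vec[i:i + n].reshape(s))
        i += n
    return ws

def linearized_output(f, w0_vec, shapes, x, eps=1e-5):
    """Eq. (3): f_lin(w) = f(w0) + J(w0)(w - w0), with J from central differences."""
    f0 = f(unflatten(w0_vec, shapes), x)
    J = np.zeros_like(w0_vec)
    for j in range(w0_vec.size):
        e = np.zeros_like(w0_vec)
        e[j] = eps
        J[j] = (f(unflatten(w0_vec + e, shapes), x)
                - f(unflatten(w0_vec - e, shapes), x)) / (2 * eps)
    return lambda w_vec: f0 + J @ (w_vec - w0_vec)
```

Near the initialization, the linearized output closely tracks the true network output, which is the regime in which the linearized model is a faithful proxy.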
**Langevin Diffusion.** Regarding the optimization algorithm, we focus on the _Langevin diffusion_ algorithm [36] with per-dimension noise variance \(\sigma^{2}\). Note that we aim to _avoid gradient clipping_ while still proving KL privacy bounds. After initializing the model parameters \(\mathbf{W}_{0}\) at time zero, the model parameters \(\mathbf{W}_{t}\) at subsequent time \(t\) evolve as the following stochastic differential equation.
\[\mathrm{d}\mathbf{W}_{t}=-\,\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})\mathrm{d}t+ \sqrt{2\sigma^{2}}\mathrm{d}\mathbf{B}_{t}\,. \tag{4}\]
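For intuition, Eq. (4) can be simulated with an Euler-Maruyama discretization, which corresponds to noisy gradient descent with a constant step size (the discretized setting discussed in Appendix E). A minimal sketch:

```python
import numpy as np

def langevin_diffusion(grad_fn, w0, sigma, T, dt=1e-3, rng=None):
    """Euler-Maruyama discretisation of Eq. (4):
    W_{t+dt} = W_t - grad L(W_t) dt + sqrt(2 sigma^2 dt) * N(0, I)."""
    rng = rng or np.random.default_rng(0)
    w = np.array(w0, dtype=float)
    for _ in range(int(T / dt)):
        w = w - grad_fn(w) * dt + np.sqrt(2.0 * sigma**2 * dt) * rng.normal(size=w.shape)
    return w
```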
**Initialization Distribution.** The initialization of the parameters \(\mathbf{W}_{0}\) crucially affects the convergence of Langevin diffusion, as observed in the prior literature [52, 25, 24]. In this work, we investigate the following general class of Gaussian initialization distributions with different (possibly depth-dependent) variances for the parameters in each layer. For any layer \(l=1,\cdots,L\), we have
\[[\mathbf{W}^{l}]_{ij}\sim\mathcal{N}(0,\beta_{l})\text{, for }(i,j)\in[m_{l}]\times[m_{l-1}]\,, \tag{5}\]
where \(\beta_{1},\cdots,\beta_{L}>0\) are the per-layer variance for Gaussian initialization. By choosing different variances, we recover many common initialization schemes in the literature, as summarized in Table 1.
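The standard per-layer variances for the schemes named in Table 1 are LeCun \(\beta_{l}=1/m_{l-1}\), Xavier \(\beta_{l}=2/(m_{l-1}+m_{l})\), and He \(\beta_{l}=2/m_{l-1}\); these are well-known values, though the exact NTK parameterization used in the paper is omitted here. A sketch of sampling Eq. (5):

```python
import numpy as np

def gaussian_init(widths, scheme="lecun", rng=None):
    """Draw W^l_ij ~ N(0, beta_l) per Eq. (5), with beta_l set by the named scheme."""
    rng = rng or np.random.default_rng(0)
    beta = {
        "lecun": lambda fan_in, fan_out: 1.0 / fan_in,
        "xavier": lambda fan_in, fan_out: 2.0 / (fan_in + fan_out),
        "he": lambda fan_in, fan_out: 2.0 / fan_in,
    }[scheme]
    return [rng.normal(0.0, np.sqrt(beta(fan_in, fan_out)), size=(fan_out, fan_in))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]
```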
### Our objective and methodology
We aim to understand the relation between privacy, utility and over-parameterization (depth and width) for the Langevin diffusion algorithm (under different initialization distributions). For privacy analysis, we prove a KL privacy bound for running Langevin diffusion on any two _worst-case_ neighboring datasets. Below we first give the definition for neighboring datasets.
**Definition 2.1**.: We denote \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) as neighboring datasets if they are of same size and only differ in one record. For brevity, we also denote the differing records as \((\mathbf{x},\mathbf{y})\in\mathcal{D}\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{D}^{\prime}\).
**Assumption 2.2** (Bounded Data).: For simplicity, we assume bounded data, i.e., \(\|\mathbf{x}\|_{2}\leq\sqrt{d}\).
We now give the definition of KL privacy, which is a more relaxed yet closely connected privacy notion relative to standard \((\varepsilon,\delta)\)-differential privacy [22]; see Appendix A.2 for more discussion. KL privacy and its relaxed variants are commonly used in the previous literature [8, 10, 53].
**Definition 2.3** (KL privacy).: A randomized algorithm \(\mathcal{A}\) satisfies \(\varepsilon\)-KL privacy if for any neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), we have that the KL divergence \(\mathrm{KL}(\mathcal{A}(\mathcal{D})\|\mathcal{A}(\mathcal{D}^{\prime}))\leq\varepsilon\), where \(\mathcal{A}(\mathcal{D})\) denotes the algorithm's output distribution on dataset \(\mathcal{D}\).
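For a concrete intuition behind Definition 2.3: if the algorithm's outputs on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) were isotropic Gaussians with a common scale, the KL divergence would have the familiar closed form below, which a Monte Carlo estimate reproduces. This is an illustrative sketch of the privacy notion, not part of the paper's analysis:

```python
import numpy as np

def kl_isotropic_gaussians(mu1, mu2, sigma):
    """KL( N(mu1, sigma^2 I) || N(mu2, sigma^2 I) ) = ||mu1 - mu2||^2 / (2 sigma^2)."""
    return float(((mu1 - mu2) ** 2).sum() / (2.0 * sigma ** 2))

def monte_carlo_kl(mu1, mu2, sigma, n=200_000, rng=None):
    """Estimate E_{x ~ N(mu1, sigma^2 I)}[log p1(x) - log p2(x)] by sampling."""
    rng = rng or np.random.default_rng(0)
    x = mu1 + sigma * rng.normal(size=(n, mu1.size))
    log_ratio = (((x - mu2) ** 2).sum(1) - ((x - mu1) ** 2).sum(1)) / (2.0 * sigma ** 2)
    return float(log_ratio.mean())
```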
In this paper, we prove an upper bound on \(\max_{\mathcal{D},\mathcal{D}^{\prime}}\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]}^{\prime})\) when running Langevin diffusion on any _worst-case_ pair of neighboring datasets. For brevity, here (and in the remainder of the paper), we abuse notation and denote \(\mathbf{W}_{[0:T]}\) and \(\mathbf{W}_{[0:T]}^{\prime}\) as the distributions of the model parameter trajectories of the Langevin diffusion processes Eq. (4) run for time \(T\) on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), respectively.
For utility analysis, we prove the upper bound for the excess empirical risk given any fixed KL divergence privacy budget for a single-output neural network under the following additional assumption (it is only required for utility analysis and not needed for our privacy bound).
**Assumption 2.4** ([40; 20; 21]).: The training data \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\) are i.i.d. samples from a distribution \(P_{x}\) that satisfies \(\mathbb{E}[\mathbf{x}]=0,\|\mathbf{x}\|_{2}=\sqrt{d}\) for \(\mathbf{x}\sim P_{x}\), and with probability one for any \(i\neq j\), \(\mathbf{x}_{i}\nparallel\mathbf{x}_{j}\).
Our ultimate goal is to precisely understand how the excess empirical risk bounds (given a fixed KL privacy budget) are affected by increasing width and depth under different initialization distributions.
## 3 KL Privacy for Training Fully Connected ReLU Neural Networks
In this section, we perform a composition-based KL privacy analysis of Langevin diffusion for the fully connected ReLU network under the random Gaussian initialization distribution in Eq. (5). More specifically, we prove an upper bound on the KL divergence between the distributions of output model parameters when running Langevin diffusion on an arbitrary pair of neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
Our first insight is that, via a Bayes rule decomposition of the density function, KL privacy can be proved under a relaxed gradient sensitivity condition that holds even _without_ gradient clipping.
**Theorem 3.1** (KL composition under possibly unbounded gradient difference).: _The KL divergence between running Langevin diffusion (4) for DNN (2) on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) satisfies_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]}^{\prime})=\frac{1}{2\sigma^{2}} \int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})- \nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\,. \tag{6}\]
Proof sketch.: We compute the partial derivative of KL divergence with regard to time \(t\), and then integrate it over \(t\in[0,T]\) to compute the KL divergence during training with time \(T\). For computing the limit of differentiation, we use Girsanov's theorem to compute the KL divergence between the trajectory of Langevin diffusion processes on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). The complete proof is in Appendix B.1.
Theorem 3.1 is an extension of the standard additivity [51] of KL divergence (also known as the chain rule [1]) for a finite sequence of distributions to continuous-time processes with (possibly) unbounded drift difference. The key extension is that Theorem 3.1 does not require bounded sensitivity between the drifts of Langevin diffusion on neighboring datasets. Instead, it only requires a finite second-order moment of the drift difference (in the \(\ell_{2}\)-norm sense) between neighboring datasets \(\mathcal{D},\mathcal{D}^{\prime}\), which can be bounded by the following lemma. We prove that this expectation of the squared gradient difference admits a closed-form upper bound for deep neural networks (under mild assumptions), for running Langevin diffusion (without gradient clipping) on any neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
**Lemma 3.2** (Drift Difference in Noisy Training).: _Let \(M_{T}\) be the subspace spanned by the gradients \(\{\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}_{i});\mathbf{y}_{i}):(\mathbf{x}_{i},\mathbf{y}_{i})\in \mathcal{D},t\in[0,T]\}_{i=1}^{n}\) throughout Langevin diffusion \((\mathbf{W}_{t})_{t\in[0,T]}\). Denote \(\|\cdot\|_{M_{T}}\) as the \(\ell_{2}\) norm of the projection of the input vector onto \(M_{T}\). Suppose that there exist constants \(c,\beta>0\) such that for any \(\mathbf{W}\), \(\mathbf{W}^{\prime}\) and \((\mathbf{x},\mathbf{y})\), we have \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}\). Then running Langevin diffusion Eq. (4) with Gaussian initialization distribution (5) satisfies \(\varepsilon\)-KL privacy with \(\varepsilon=\frac{\max_{\mathcal{D},\mathcal{D}^{\prime}}\int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t}{2\sigma^{2}}\), where_
\[\int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\leq\underbrace{2T\,\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{0};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{0};\mathcal{D}^{\prime})\|_{2}^{2}\right]}_{\text{gradient difference at initialization}}\]
\[+\underbrace{\frac{2\beta^{2}}{n^{2}(2+\beta^{2})}\left(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T\right)\cdot\left(\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{0};\mathcal{D})\|_{2}^{2}\right]+2\sigma^{2}\text{rank}(M_{T})+c^{2}\right)}_{\text{gradient difference fluctuation during training}}+\underbrace{\frac{2c^{2}T}{n^{2}}}_{\text{non-smoothness}}.\]
Proof sketch.: The key is to reduce the problem of upper bounding the gradient difference at any training time \(T\) to analyzing its subcomponents: \(\|\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\|_{2}^{2}\leq\underbrace{2\left\|\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\right\|_{2}^{2}}_{\text{gradient difference at initialization}}+2\beta^{2}\underbrace{\left\|\mathbf{W}_{t}-\mathbf{W}_{0}\right\|_{M_{T}}^{2}}_{\text{parameters' change after time }T}+2c^{2}\), where \((\mathbf{x},\mathbf{y})\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) are the differing data between neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). This inequality follows from the Cauchy-Schwartz inequality. In this way, the second term in Lemma 3.2 uses the change of parameters to bound the gradient difference between datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at time \(T\), via the relaxed smoothness assumption on the loss function (explained in detail in Remark 3.5). The complete proof is in Appendix B.2.
_Remark 3.3_ (Gradient difference at initialization).: The first term in our upper bound scales linearly with the difference between the gradients on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at initialization. Under different initialization schemes, this gradient difference exhibits different dependencies on the network depth and width, as we prove theoretically in Theorem 4.1.
_Remark 3.4_ (Gradient difference fluctuation during training).: The second term in Lemma 3.2 bounds the change of the gradient difference during training, and is proportional to the rank of the subspace \(M_{T}\) spanned by the gradients of all training data. Intuitively, this fluctuation arises because Langevin diffusion adds per-dimension noise with variance \(\sigma^{2}\), thus perturbing the training parameters away from the initialization at a scale of \(O(\sigma\sqrt{\text{rank}(M_{T})})\) in expected \(\ell_{2}\) distance.
_Remark 3.5_ (Relaxed smoothness of loss function).: The third term in Lemma 3.2 is due to the assumption \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}\). This assumption is similar to smoothness of the loss function, but is more relaxed as it allows non-smoothness at places where the gradient difference is bounded by \(c\). Therefore, this assumption is general enough to cover commonly used smooth and non-smooth activation functions, e.g., sigmoid and ReLU.
_Growth of KL privacy bound with increasing training time \(T\)._ The first and third terms in our upper bound Lemma 3.2 grow linearly with the training time \(T\), while the second term grows exponentially with regard to \(T\). Consequently, for learning tasks that require a long training time to converge, the second term becomes dominant and the KL privacy bound suffers from exponential growth with regard to the training time. Nevertheless, observe that for small \(T\to 0\), the second component in Lemma 3.2 contains a small factor \(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T=o(T)\) by Taylor expansion. Therefore, for small training time, the second component is smaller than the first and third components of Lemma 3.2 that scale linearly with \(T\), and thus does not dominate the privacy bound. Intuitively, this phenomenon is related to lazy training [19]. In Section 5 and Figure 2, we also numerically validate that the second component does not have a large effect on the KL privacy loss when the training time is small.
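To visualize how the three components of the Lemma 3.2 bound trade off against the training time \(T\), the sketch below evaluates them numerically. All constants (\(\beta\), \(c\), \(\sigma\), \(n\), the initialization gradient quantities, and \(\text{rank}(M_T)\)) are illustrative placeholders, not values derived from any real network.

```python
import numpy as np

# Illustrative placeholder constants (not computed from a real network):
beta, c, sigma, n = 1.0, 0.5, 1.0, 100
g0_diff, g0_norm, rank_MT = 0.2, 4.0, 50   # grad-diff / grad-norm at init, rank(M_T)

def bound_terms(T):
    a = 2 + beta**2
    init = 2 * T * g0_diff                      # first term: linear in T
    fluct = (2 * beta**2 / (n**2 * a)) * ((np.exp(a * T) - 1) / a - T) \
            * (g0_norm + 2 * sigma**2 * rank_MT + c**2)  # second term: exp in T
    nonsmooth = 2 * c**2 * T / n**2             # third term: linear in T
    return init, fluct, nonsmooth

# Small T: the fluctuation term is o(T), negligible next to the linear terms.
i_s, f_s, ns_s = bound_terms(1e-3)
# Large T: the exp((2 + beta^2) T) factor makes the fluctuation term dominate.
i_l, f_l, ns_l = bound_terms(20.0)
print(f_s / (i_s + ns_s), f_l / (i_l + ns_l))
```

The two printed ratios make the regime change concrete: the exponential component is orders of magnitude below the linear ones for tiny \(T\) and orders of magnitude above them for large \(T\).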
_Dependence of KL privacy bound on network over-parameterization_. Under a fixed training time \(T\) and noise scale \(\sigma^{2}\), Lemma 3.2 predicts that the KL divergence upper bound in Theorem 3.1 depends on the gradient difference and gradient norm at initialization, and on the rank of the gradient subspace \(\text{rank}(M_{T})\) throughout training. We now discuss how these two terms change with increasing width and depth, and whether they can be improved under over-parameterization.
1. The gradient norm at initialization crucially depends on how the per-layer variance in the Gaussian initialization distribution scales with the network width and depth. Therefore, it is possible to reduce the gradient difference at initialization (and thus improve the KL privacy bound) by using specific initialization schemes, as we later show in Section 4 and Section 5.
2. Regarding the rank of the gradient subspace \(\text{rank}(M_{T})\): when the gradients along the training trajectory span the whole optimization space, \(\text{rank}(M_{T})\) equals the dimension of the learning problem. Consequently, the gradient fluctuation upper bound (and thus the KL privacy bound) worsens with an increasing number of model parameters (over-parameterization) in the worst case. However, if the gradients are low-dimensional [45; 32; 43] or sparse [37], \(\text{rank}(M_{T})\) could be dimension-independent, thus enabling a better bound for the gradient fluctuation (and the KL privacy bound). We leave this as an interesting open problem.
## 4 KL privacy bound for Linearized Network under over-parameterization
In this section, we focus on the training of linearized networks (3), which enables a refined analysis of the interplay between KL privacy and over-parameterization (increasing width and depth). Analysis of DNNs via linearization is a commonly used technique in both theory [19] and practice [43; 41]. We hope our analysis for linearized networks serves as an initial attempt that opens a door to theoretically understanding the relationship between over-parameterization and privacy.
To derive a composition-based KL privacy bound for training a linearized network, we apply Theorem 3.1 which requires an upper bound for the norm of gradient difference between the training
processes on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at any time \(t\). Note that the empirical risk function for training linearized models enjoys convexity, and thus a relatively short training time is enough for convergence. In this case, intuitively, the gradient difference between neighboring datasets does not change much during training, which allows a tighter upper bound on the gradient difference norm for linearized networks (than Lemma 3.2).
In the following theorem, we prove that for a linearized network, the gradient difference throughout training has a uniform upper bound that only depends on the network width, depth and initialization.
**Theorem 4.1** (Gradient Difference throughout training linearized network).: _Under Assumption 2.2, taking over the randomness of the random initialization and the Brownian motion, for any \(t\in[0,T]\), running Langevin diffusion on a linearized network in Eq. (3) satisfies that_
\[\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D}^{\prime})\|_{2}^{2}\right]\leq\frac{4B}{n^{2}}\,,\text{ where }B\coloneqq d\cdot o\cdot\left(\prod_{i=1}^{L-1}\frac{\beta_{i}m_{i}}{2}\right)\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\,, \tag{7}\]
_where \(n\) is the training dataset size, and \(B\) is a constant that only depends on the data dimension \(d\), the number of classes \(o\), the network depth \(L\), the per-layer network width \(\{m_{i}\}_{i=1}^{L}\), and the per-layer variances \(\{\beta_{i}\}_{i=1}^{L}\) of the Gaussian initialization distribution._
Theorem 4.1 provides a precise analytical upper bound for the gradient difference during training linearized network, by tracking the gradient distribution for fully connected feed-forward ReLU network with Gaussian weight matrices. Our proof borrows some techniques from [3, 54] for computing the gradient distribution, refer to Appendix C.1 and C.2 for the full proofs. By plugging Eq. (7) into Theorem 3.1, we obtain the following KL privacy bound for training a linearized network.
**Corollary 4.2** (KL privacy bound for training linearized network).: _Under Assumption 2.2 and neural networks (3) initialized by Gaussian distribution with per-layer variance \(\{\beta_{i}\}_{i=1}^{L}\), running Langevin diffusion for linearized network with time \(T\) on any neighboring datasets satisfies that_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}^{lin}\|\mathbf{W}_{[0:T]}^{\prime\,lin})\leq\frac{2BT}{n^{2}\sigma^{2}}\,, \tag{8}\]
_where \(B\) is the constant that specifies the gradient norm upper bound, given by Eq. (7)._
Over-parameterization affects privacy differently under different initializations. Corollary 4.2 and Theorem 4.1 show that the role of over-parameterization in our KL privacy bound crucially depends on how the per-layer Gaussian initialization variance \(\beta_{i}\) scales with the per-layer network width \(m_{i}\) and depth \(L\). We summarize our KL privacy bound for the linearized network under different widths, depths and initialization schemes in Table 1, and elaborate on the comparison below.
**(1) LeCun initialization** uses small, width-independent variance for initializing the first layer \(\beta_{1}=\frac{1}{d}\) (where \(d\) is the number of input features), and width-dependent variance \(\beta_{2}=\cdots=\beta_{L}=\frac{1}{m}\) for initializing all the subsequent layers. Therefore, the second term \(\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\) in the constant \(B\) of Eq. (7) increases linearly with the width \(m\) and depth \(L\). However, due to \(\frac{m_{l}\cdot\beta_{l}}{2}<1\) for all \(l=2,\cdots,L\), the first product term \(\prod_{l=1}^{L-1}\frac{\beta_{l}m_{l}}{2}\) in constant \(B\) decays with the increasing depth. Therefore, by combining the two terms, we prove that the KL privacy bound worsens with increasing width, but improves with increasing depth (as long as the depth is large enough). Similarly, under **Xavier initialization**\(\beta_{l}=\frac{2}{m_{l-1}+m_{l}}\), we prove that the KL privacy bound (especially the constant \(B\) (7)) improves with increasing depth as long as the depth is large enough.
**(2) NTK and He initializations** use large per-layer variance \(\beta_{l}=\begin{cases}\frac{2}{m_{l}}&l=1,\cdots,L-1\\ \frac{1}{o}&l=L\end{cases}\) (for NTK) and \(\beta_{l}=\frac{2}{m_{l-1}}\) (for He). Consequently, the gradient difference under NTK or He initialization is significantly larger than that under LeCun initialization. Specifically, the gradient norm constant \(B\) in Eq. (7) grows linearly with the width \(m\) and the depth \(L\) under He and NTK initializations, thus indicating a worsening of KL privacy bound under increasing width and depth.
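The opposite width/depth behaviors of the constant \(B\) in Eq. (7) under LeCun versus He initialization can be made concrete by direct evaluation. The sketch below computes \(B\) for a few depths; the input dimension, output dimension, and width are illustrative choices, not values from the paper's experiments.

```python
import numpy as np

def B_constant(d, o, widths, betas):
    # B = d * o * prod_{i<L} (beta_i * m_i / 2) * sum_l beta_L / beta_l,  Eq. (7)
    L = len(widths)
    prod = np.prod([betas[i] * widths[i] / 2 for i in range(L - 1)])
    s = sum(betas[-1] / betas[l] for l in range(L))
    return d * o * prod * s

def lecun(d, m, L):   # beta_1 = 1/d, beta_l = 1/m for l >= 2
    return [1 / d] + [1 / m] * (L - 1)

def he(d, m, L):      # beta_l = 2 / fan_in
    return [2 / d] + [2 / m] * (L - 1)

d, o, m = 32, 1, 256                      # illustrative dimensions
depths = (4, 8, 16)
B_lecun = [B_constant(d, o, [m] * L, lecun(d, m, L)) for L in depths]
B_he    = [B_constant(d, o, [m] * L, he(d, m, L))    for L in depths]
print(B_lecun, B_he)
```

Under LeCun initialization the product term decays like \(2^{-(L-2)}\) and wins against the linearly growing sum, so \(B\) shrinks with depth; under He initialization the product stays constant and \(B\) grows roughly linearly in \(L\), matching the discussion above.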
## 5 Numerical validation of our KL privacy bounds
To understand the relation between privacy and over-parameterization in _practical_ DNNs training (and to validate our KL privacy bounds Lemma 3.2 and Corollary 4.2), we perform experiments for
DNNs training via noisy GD to numerically estimate the KL privacy loss. We will show that if the total training time is small, it is indeed possible to obtain numerical KL privacy bound estimates that do not grow with the total number of parameters (under carefully chosen initialization distributions).
_Numerical estimation procedure_. Theorem 3.1 proves that the exact KL privacy loss scales with the expectation of the squared gradient difference norm during training. This can be estimated by an empirical average across training runs. For the training dataset \(\mathcal{D}\), we consider all 'car' and 'plane' images of CIFAR-10. For neighboring datasets, we consider all possible \(\mathcal{D}^{\prime}\) that remove a record from \(\mathcal{D}\) or add a test record to \(\mathcal{D}\), i.e., the standard "add-or-remove-one" neighboring notion [2]. We run noisy gradient descent with constant step-size \(0.01\) for \(50\) epochs on both datasets.
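A toy stand-in for this estimation procedure is sketched below: logistic regression on synthetic data replaces the CIFAR-10 network, and a single "remove-one" neighbor replaces the maximum over all neighbors; all sizes and hyperparameters are assumptions for the example. The KL estimate accumulates \(\frac{\eta}{2\sigma^2}\|\nabla\mathcal{L}(\mathbf{W};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W};\mathcal{D}^{\prime})\|_2^2\) per step, a discretization of Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 60, 10
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)
Xp, yp = X[1:], y[1:]                 # neighboring dataset: remove one record

def grad(w, X, y):
    p = 1 / (1 + np.exp(-X @ w))      # averaged logistic-loss gradient
    return X.T @ (p - y) / len(y)

eta, sigma, epochs = 0.01, 0.1, 50
w = rng.normal(size=d) * 0.1
kl, kl_curve = 0.0, []
for _ in range(epochs):
    g, gp = grad(w, X, y), grad(w, Xp, yp)
    kl += eta * np.sum((g - gp) ** 2) / (2 * sigma**2)   # discretized Eq. (6)
    kl_curve.append(kl)
    w = w - eta * g + np.sqrt(eta) * sigma * rng.normal(size=d)  # noisy GD step

print(kl_curve[-1])
```

Averaging `kl_curve` over several runs and maximizing over candidate neighbors would mirror the full procedure described above.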
_Numerically validate the growth of KL privacy loss with regard to training time_. Figure 1 shows the numerical KL privacy loss under different initializations, for fully connected networks with width \(1024\) and depth \(10\). We observe that the KL privacy loss grows linearly at the beginning of training (\(<10\) epochs), which validates the first and third terms in the KL privacy bound of Lemma 3.2. Moreover, the KL privacy loss under LeCun and Xavier initialization is close to zero at the beginning of training (\(<10\) epochs). This shows that LeCun and Xavier initializations induce small gradient norms early in training, which is consistent with Theorem 4.1. However, when the number of epochs is large, the numerical KL privacy loss grows faster than linear accumulation under all initializations, thus validating the second term in Lemma 3.2.
_Numerically validate the dependency of KL privacy loss on network width, depth and initializations_. Figure 2 shows the numerical KL privacy loss under different network depths, widths and initializations, for a fixed training time. In Figure 2(c), we observe that increasing width and training time always increases the KL privacy loss. This is consistent with Theorem 4.1, which shows that increasing width worsens the gradient norm at initialization (given fixed depth), thus harming the KL privacy bound of Lemma 3.2 at the beginning of training. We also observe that the relationship between KL privacy
Figure 1: Numerically estimated KL privacy loss for noisy GD with constant step-size \(0.001\) on a deep neural network with width \(1024\) and depth \(10\). We report the mean and standard deviation across \(6\) training runs, taking the worst case over all neighboring datasets. The numerical KL privacy loss grows with the number of training epochs under all initializations. The growth rate is close to linear at the beginning of training (epochs \(<10\)) and is faster than linear at epochs \(\geq 10\).
Figure 2: Numerically estimated KL privacy loss for noisy GD with constant step-size on fully connected ReLU network with different width, depth and initializations. We report the mean and standard deviation across \(6\) training runs, taking worst-case over all neighboring datasets. Under increasing width, the KL privacy loss always grows under all evaluated initializations. Under increasing depth, at the beginning of training (20 epochs), the KL privacy loss worsens with depth under He initialization, but first worsens with depth (\(\leq 8\)) and then improves with depth (\(\geq 8\)) under Xavier and LeCun initializations. At later phases of the training (50 epochs), KL privacy worsens (increases) with depth under all evaluated initializations.
and network depth depends on the initialization distribution and the training time. Specifically, in Figure 2(a), when the training time is small (20 epochs), for LeCun and Xavier initializations, the numerical KL privacy loss improves with increasing depth when depth \(>8\). Meanwhile, when the training time is large (50 epochs), in Figure 2(b), the KL privacy loss worsens with increasing depth under all initializations. This shows that, given a small training time, the choice of initialization distribution affects the dependency of the KL privacy loss on increasing depth, thus validating Lemma 3.2 and Theorem 4.1.
## 6 Utility guarantees for Training Linearized Network
Our privacy analysis suggests that training a linearized network under certain initialization schemes (such as LeCun initialization) allows for significantly better privacy bounds under over-parameterization by increasing depth. In this section, we further prove utility bounds for Langevin diffusion under these initialization schemes and investigate the effect of over-parameterization on the privacy-utility trade-off. In other words, we aim to understand whether there is any utility degradation for training linearized networks when using the more privacy-preserving initialization schemes.
Convergence of training linearized network. We now prove convergence of the excess empirical risk in training linearized networks via Langevin diffusion. This is a well-studied problem in the literature for noisy gradient descent. We extend the convergence theorem to continuous-time Langevin diffusion below and investigate the factors that affect convergence under over-parameterization. The proof is deferred to Appendix D.1.
**Lemma 6.1** (Extension of [42, Theorem 2] and [45, Theorem 3.1]).: _Let \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\) be the empirical risk function of a linearized network in Eq. (3) expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Let \(\mathbf{W}_{0}^{*}\) be an \(\alpha\)-near-optimal solution for the ERM problem such that \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{*};\mathcal{D})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\alpha\). Let \(\mathcal{D}=\{\mathbf{x}_{i}\}_{i=1}^{n}\) be an arbitrary training dataset of size \(n\), and denote \(M_{0}=\left(\nabla f_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{1}),\cdots,\nabla f_{\mathbf{W}_{0 }^{lin}}(\mathbf{x}_{n})\right)^{\top}\) as the NTK feature matrix at initialization. Then running Langevin diffusion (4) on \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) with time \(T\) and initialization vector \(\mathbf{W}_{0}^{lin}\) satisfies_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})]-\min_{\mathbf{W}}\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq\alpha+\frac{R}{2T}+\frac{1}{2}\sigma^{2}\text{rank}(M_{0})\,,\]
_where the expectation is over the Brownian motion \(B_{T}\) in the Langevin diffusion in Eq. (4), \(\bar{\mathbf{W}}_{T}^{lin}=\frac{1}{T}\int_{0}^{T}\mathbf{W}_{t}^{lin}\,\mathrm{d}t\) is the average of all iterates, and \(R=\|\mathbf{W}_{0}^{lin}-\mathbf{W}_{0}^{*}\|_{M_{0}}^{2}\) is the gap between the initialization parameters \(\mathbf{W}_{0}^{lin}\) and the solution \(\mathbf{W}_{0}^{*}\)._
Remark 6.2.: The excess empirical risk bound in Lemma 6.1 is smaller if the data is low-rank (e.g., image data), since then \(\text{rank}(M_{0})\) is small. This is consistent with the prior dimension-independent private learning literature [32, 33, 37] and shows the benefit of low-dimensional gradients for private learning.
Lemma 6.1 highlights that the excess empirical risk scales with the gap \(R\) between initialization and solution (denoted as the lazy training distance), the rank of the gradient subspace, and the constant \(B\) that specifies an upper bound for the expected gradient norm during training. Specifically, the smaller the lazy training distance \(R\) is, the better the excess risk bound, given a fixed training time \(T\) and noise variance \(\sigma^{2}\). We have discussed how over-parameterization affects the gradient norm constant \(B\) and the gradient subspace rank \(\text{rank}(M_{0})\) in Section 3. Therefore, it remains to investigate how the lazy training distance \(R\) changes with the network width, depth, and initialization, as follows.
Lazy training distance \(R\) decreases with model over-parameterization. It is widely observed in the literature [19, 55, 38] that under appropriate choices of initialization, gradient descent on fully connected neural networks falls into a lazy training regime. That is, with high probability, there exists a (nearly) optimal solution of the ERM problem that is close to the initialization parameters in \(\ell_{2}\) distance. Moreover, this lazy training distance \(R\) is closely related to the smallest eigenvalue of the NTK matrix, and generally decreases as the model becomes increasingly over-parameterized. In the following lemma, we compute a near-optimal solution via the pseudo-inverse of the NTK matrix, and prove that it has a small distance to the initialization parameters via existing lower bounds on the smallest eigenvalue of the NTK matrix [40].
**Lemma 6.3** (Bounding lazy training distance via smallest eigenvalue of the NTK matrix).: _Under Assumption 2.4 and single-output linearized network Eq. (3) with \(o=1\), assume that the per-layer network widths \(m_{0},\cdots,m_{L}=\tilde{\Omega}(n)\) are large. Let \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) be the empirical risk Eq. (1) for
linearized network expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Then for any \(\mathbf{W}_{0}^{lin}\), there exists a corresponding solution \(\mathbf{W}_{0}^{\frac{1}{n^{2}}}\), s.t. \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{\frac{1}{n^{2}}})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\frac{1}{n^{2}}\), \(\text{rank}(M_{0})=n\) and_
\[R\leq\tilde{\mathcal{O}}\left(\max\left\{\frac{1}{d\beta_{L}\left(\prod_{i=1} ^{L-1}\beta_{i}m_{i}\right)},1\right\}\frac{n}{\sum_{l=1}^{L}\beta_{l}^{-1}} \right)\,, \tag{9}\]
_with high probability over training data sampling and random initialization Eq. (5), where \(\tilde{\mathcal{O}}\) ignores logarithmic factors with regard to \(n\), \(m\), \(L\), and tail probability \(\delta\)._
The full proof is deferred to Appendix D.2. By using Lemma 6.3, we provide a summary of bounds for \(R\) under different initializations in Table 1. We observe that the lazy training distance \(R\) decreases with increasing width and depth under LeCun, He and NTK initializations, while under Xavier initialization \(R\) only decreases with increasing depth.
_Privacy & excess empirical risk trade-offs for Langevin diffusion under linearized network_. We now use the lazy training distance \(R\) to prove an empirical risk bound and combine it with our KL privacy bound from Section 4 to show the privacy-utility trade-off under over-parameterization.
**Corollary 6.4** (Privacy utility trade-off for linearized network).: _Assume that all conditions in Lemma 6.3 holds. Let \(B\) be the gradient norm constant in Eq. (7), and let \(R\) be the lazy training distance bound in Lemma 6.3. Then for \(\sigma^{2}=\frac{2BT}{\varepsilon n^{2}}\) and \(T=\sqrt{\frac{\varepsilon nR}{2B}}\), releasing all iterates of Langevin diffusion with time \(T\) satisfies \(\varepsilon\)-KL privacy, and has empirical excess risk upper bounded by_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})] -\min_{\mathbf{W}}\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq \tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{BR}{\varepsilon n}}\right) \tag{10}\] \[=\tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{\max\{1,d \beta_{L}\prod_{l=1}^{L-1}\beta_{l}m_{l}\}}{2^{L-1}\varepsilon}}\right) \tag{11}\]
_with high probability over random initialization Eq. (5), where the expectation is over Brownian motion \(B_{T}\) in Langevin diffusion, and \(\tilde{O}\) ignores logarithmic factors with regard to width \(m\), depth \(L\), number of training data \(n\) and tail probability \(\delta\)._
See Appendix D.3 for the full proof. Corollary 6.4 proves that the excess empirical risk worsens in the presence of a stronger privacy constraint, i.e., a small privacy budget \(\varepsilon\), thus contributing to a trade-off between privacy and utility. However, the excess empirical risk also scales with the lazy training distance \(R\) and the gradient norm constant \(B\). These constants depend on network width, depth and initialization distributions, and we prove privacy utility trade-offs for training linearized network under commonly used initialization distributions, as summarized in Table 1.
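The calibration in Corollary 6.4 can be sketched directly: given the gradient norm constant \(B\) from Eq. (7), the lazy training distance \(R\) from Lemma 6.3, a KL budget \(\varepsilon\), and a dataset size \(n\), one sets \(T\) and \(\sigma^2\) as stated and reads off the risk bound (up to log factors). The numeric values of \(B\), \(R\), and \(n\) below are illustrative placeholders.

```python
import numpy as np

def tradeoff(B, R, eps, n):
    # Calibration from Corollary 6.4: pick T and sigma^2 so that releasing all
    # iterates of Langevin diffusion satisfies eps-KL privacy.
    T = np.sqrt(eps * n * R / (2 * B))
    sigma2 = 2 * B * T / (eps * n**2)
    risk = 1 / n**2 + np.sqrt(B * R / (eps * n))  # excess-risk bound, up to log factors
    kl = 2 * B * T / (n**2 * sigma2)              # Eq. (8); equals eps by construction
    return T, sigma2, risk, kl

# Illustrative constants, not computed from a real network:
B, R, n = 5.0, 2.0, 10_000
eps_grid = (0.1, 1.0, 10.0)
results = [tradeoff(B, R, eps, n) for eps in eps_grid]
risks = [res[2] for res in results]
print(risks)   # stronger privacy (smaller eps) => larger excess risk
```

Shrinking \(B\) or \(R\) (e.g., by increasing depth under LeCun initialization, per Table 1) shrinks the \(\sqrt{BR/(\varepsilon n)}\) term, which is exactly the improving trade-off discussed above.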
We would like to highlight that our privacy-utility trade-off bound under LeCun and Xavier initialization strictly improves with increasing depth, as long as the data satisfy Assumption 2.4 and the hidden-layer width is large enough. To the best of our knowledge, this is the first time that a strictly improving privacy-utility trade-off under over-parameterization is shown in the literature. This demonstrates the benefit of precisely bounding the gradient norm (Appendix C.1) in our privacy and utility analysis.
## 7 Conclusion
We prove a new KL privacy bound for training a fully connected ReLU network (and its linearized variant) using the Langevin diffusion algorithm, and investigate how privacy is affected by the network width, depth and initialization. Our results suggest a complex interplay between privacy and over-parameterization (width and depth) that crucially relies on the initialization distribution and on how much the gradient fluctuates during training. Moreover, for a linearized variant of the fully connected network, we prove KL privacy bounds that improve with increasing depth under certain initialization distributions (such as LeCun and Xavier). We further prove excess empirical risk bounds for linearized networks under KL privacy, which similarly improve as depth increases under LeCun and Xavier initialization. This shows the gain of our new privacy analysis in capturing the effect of over-parameterization. We leave it as an important open problem whether our privacy-utility trade-off results for linearized networks can be generalized to deep neural networks.
## Acknowledgments and Disclosure of Funding
The authors would like to thank Yaxi Hu and anonymous reviewers for helpful discussions on drafts of this paper. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043), and the Swiss National Science Foundation (SNSF) under grant number 200021_205011, Google PDPO faculty research award, Intel within the www.private-ai.org center, Meta faculty research award, the NUS Early Career Research Award (NUS ECRA award number NUS ECRA FY19 P16), and the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
---

**arXiv:2306.17396v2 — Koopman operator learning using invertible neural networks.** Yuhuang Meng, Jianguo Huang, Yue Qiu. Published 2023-06-30. http://arxiv.org/abs/2306.17396v2

# Physics-informed invertible neural network for the Koopman operator learning
###### Abstract
In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose the so-called FlowDMD, a Flow-based Dynamic Mode Decomposition that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.
keywords: Koopman operator, Generative models, Invertible neural networks
## 1 Introduction
Nonlinear dynamical systems are widely prevalent in both theory and engineering applications. Since the governing equations are generally unknown in many situations, it can be challenging to study these systems directly based on first principles. Fortunately, data about the systems of interest may be available from experiments or observations, so one can instead seek to understand the behavior of a nonlinear system through data-driven approaches [1; 2; 3; 4; 5].
The Koopman operator [6], which embeds the nonlinear system of interest into an infinite-dimensional linear space via observable functions, has attracted considerable attention. The Koopman operator acts on an infinite-dimensional Hilbert space and aims to capture the full representation of the nonlinear system. Dynamic mode decomposition (DMD) computes the spectral decomposition of the Koopman operator numerically by extracting dynamic information from the collected data. Concretely, DMD devises a procedure to extract the spectral information directly from a data sequence without an explicit formulation of the Koopman operator, which is efficient for handling high-dimensional data [7]. Variants of DMD have been proposed to address challenges in different scenarios [8; 9; 10; 11; 12; 13; 14; 15].
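A minimal sketch of Exact DMD on a linear toy system makes the procedure concrete: from snapshot pairs \((X, Y)\) with \(Y = AX\), the least-squares operator \(A_{\text{dmd}} = YX^{+}\) (computed via the SVD of \(X\)) recovers the eigenvalues of \(A\). The system matrix and trajectory length are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Linear toy system x_{k+1} = A x_k; Exact DMD should recover eig(A).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
x = rng.normal(size=2)
traj = [x]
for _ in range(30):
    traj.append(A @ traj[-1])
Z = np.array(traj).T                 # 2 x 31 snapshot matrix
X, Y = Z[:, :-1], Z[:, 1:]           # paired snapshots: Y = A X

# Exact DMD: A_dmd = Y X^+ via the (thin) SVD of X.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
A_dmd = Y @ Vh.T @ np.diag(1 / s) @ U.T
mu = np.sort(np.linalg.eigvals(A_dmd).real)
print(mu)      # approximately [0.8, 0.9]
```

For a genuinely nonlinear system these identity observables are generally insufficient, which is exactly the limitation of Exact DMD discussed next.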
The selection of observable functions plays an essential role in the DMD algorithm. Exact DMD [8] uses the identity mapping as the observables. This implies that one approximates a nonlinear system with a linear one on the given data [16], which can yield inaccurate or even completely mistaken outcomes. Furthermore, while the short-term predictions of Exact DMD may be acceptable in some cases, its long-term predictions are often unreliable. Typically, prior knowledge is required to select observable functions that span an invariant subspace of the Koopman operator. However, such an invariant subspace is not readily available. To overcome the limitations of the Exact DMD algorithm and capture the full features of the nonlinear system, several data-driven selection strategies for observable functions have been proposed. Extended DMD (EDMD) [17] lifts the state variables from the original space into a higher-dimensional space using dictionary functions. The accuracy and rate of convergence of EDMD depend on the choice of the dictionary functions; therefore, EDMD needs as many dictionary functions as possible. This implies that the set of dictionary functions (nonlinear transformations) should be sufficiently rich, which results in enormous computational cost. Kernel-based DMD (KDMD) [18] differs from EDMD in that it utilizes the kernel trick to exploit an implicit representation of the dictionary functions, whereas EDMD uses their explicit expression. Nonetheless, both EDMD and KDMD are prone to overfitting [19], which leads to large generalization errors. How to efficiently choose observable functions that span an invariant subspace of the Koopman operator thus becomes a significant challenge.
In contrast to EDMD and KDMD, observable functions can also be represented by neural networks. Dictionary learning [20] couples EDMD with a set of trainable dictionary functions, where the dictionary is represented by a fully connected neural network together with an untrainable component. Fixing part of the dictionary facilitates the reconstruction of the state variables; however, this setting implicitly assumes that the linear term lies in the invariant subspace of the Koopman operator. Yeung et al. [21] select low-dimensional dictionary functions more efficiently using deep neural networks.
Autoencoder (AE) neural networks have been widely applied to learn the optimal observable functions and reconstruction functions in Koopman embedding [19; 22; 23; 24; 25; 26]. Concretely, the invariant subspace of the Koopman operator and the reconstruction functions are represented by the encoder and decoder networks of the AE, respectively. Lusch et al. [23] utilize neural networks to identify the Koopman eigenfunctions and introduce an auxiliary network to cope with dynamical systems that have a continuous spectrum. Azencot et al. [24] propose the Consistent Koopman AE model, which combines the forward-backward DMD method [27] with the AE model; this approach extracts the latent representation of high-dimensional nonlinear data and simultaneously suppresses the effect of noise in the data. Pan and Duraisamy [25] parameterize the structure of the transition matrix in the linear space and construct an AE model to learn the residual of the DMD. Li and Jiang [26] utilize deep learning and the Koopman operator to model nonlinear multiscale dynamical problems, where coarse-scale data is used to learn fine-scale information through a set of multiscale basis functions. Wang et al. [28] propose the Koopman Neural Forecaster (KNF), which combines an AE with Koopman operator theory to predict data with distributional shifts.
Representing the Koopman embedding by dictionary learning or AE networks has several drawbacks. Firstly, the reconstruction in dictionary learning partially fixes the dictionary functions, which leads to a low level of interpretability of the model. Secondly, the encoder and decoder in an AE model are trained simultaneously, but neither of them is invertible, cf. [29] for more details. Moreover, due to the structural noninvertibility of the encoder and decoder, a large amount of training data is typically required to obtain accurate representations, which makes the AE model prone to overfitting. Alford-Lago et al. [29] analyze the properties of both the encoder and the decoder in AE and propose deep learning dynamic mode decomposition (DLDMD). Bevanda et al. [30] construct a conjugate map between a nonlinear system and its Jacobian linearization, which is learned by a diffeomorphic neural network.
In this paper, we develop a novel architecture that incorporates physical knowledge to learn the Koopman embedding. Specifically, we apply a coupling flow invertible neural network (CF-INN) to learn the observable functions and reconstruction functions. The invertibility of the learned observable functions makes our method more flexible than dictionary learning or AE learning. Our contributions are threefold:
1. We utilize a structurally invertible mapping to reconstruct the state variables, which increases the interpretability of the neural network and alleviates the overfitting seen in AE models.
2. The difficulty of learning the observable functions and the reconstruction functions is reduced by exploiting the structural invertibility of the neural network. Therefore, the reconstruction error term in the loss function can be eliminated.
3. As physical information is embedded into the model, fewer parameters are needed to achieve accuracy comparable with other methods. Additionally, the number of parameters to be optimized is reduced dramatically, since the learned mapping and its inverse share the same parameters.
This paper is organized as follows. In Section 2, we briefly review the Koopman operator theory and DMD. In Section 3, we present the structure of CF-INN and introduce how to learn the invariant subspace of the Koopman operator and the reconstruction functions. In Section 4, several numerical experiments are performed to demonstrate the performance of our method, and we summarize our work in Section 5.
## 2 Preliminaries
### Koopman operator theory
Consider the nonlinear autonomous system in discrete form,
\[\mathbf{x}_{k+1}=f(\mathbf{x}_{k}),\quad\mathbf{x}_{k}\in\mathcal{M}\subset \mathbb{R}^{m}, \tag{1}\]
where \(\mathcal{M}\) represents the set of state space, \(f:\mathcal{M}\rightarrow\mathcal{M}\) is an unknown nonlinear map, and \(k\) is the time index.
**Definition 1** (Koopman operator [16]).: _For the nonlinear system (1), the Koopman operator \(\mathcal{K}\) is an infinite-dimensional linear operator that acts on all observable functions \(g:\mathcal{M}\rightarrow\mathbb{C}\) such that_
\[\mathcal{K}g(\mathbf{x})=g(f(\mathbf{x})).\]
_Here, \(g(x)\in\mathcal{H}\) and \(\mathcal{H}\) represents the infinite dimensional Hilbert space._
Through the observable functions, the nonlinear system (1) could be transformed into an infinite-dimensional linear system using the Koopman operator,
\[g(\mathbf{x}_{k+1})=g(f(\mathbf{x}_{k}))=\mathcal{K}g(\mathbf{x}_{k}). \tag{2}\]
Note that the Koopman operator is linear, _i.e._, \(\mathcal{K}(\alpha_{1}g_{1}(\mathbf{x})+\alpha_{2}g_{2}(\mathbf{x}))=\alpha_{1} g_{1}(f(\mathbf{x}))+\alpha_{2}g_{2}(f(\mathbf{x}))\), with \(g_{1}(\mathbf{x}),g_{2}(\mathbf{x})\in\mathcal{H}\) and \(\alpha_{1},\alpha_{2}\in\mathbb{R}\). As \(\mathcal{K}\) is an infinite-dimensional operator, we denote its eigenfunctions and eigenvalues by \(\{\lambda_{i},\varphi_{i}(x)\}_{i=0}^{\infty}\) such that \(\mathcal{K}\varphi_{i}(\mathbf{x})=\lambda_{i}\varphi_{i}(\mathbf{x})\), where \(\varphi_{i}(\mathbf{x}):\mathcal{M}\rightarrow\mathbb{R}\), \(\lambda_{i}\in\mathbb{C}\).
The Koopman eigenfunctions define a set of intrinsic measurement coordinates, then a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\) could be written in terms of the Koopman eigenfunctions,
\[\mathbf{g}(\mathbf{x}_{k})=\begin{bmatrix}g_{1}(\mathbf{x}_{k})\\ \vdots\\ g_{n}(\mathbf{x}_{k})\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{ k})\begin{bmatrix}<\varphi_{i},g_{1}>\\ \vdots\\ <\varphi_{i},g_{n}>\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k}) \mathbf{v}_{i}, \tag{3}\]
where \(\mathbf{v}_{i}\) refers to the \(i\)-th Koopman mode with respect to the Koopman eigenfunction \(\varphi_{i}(\mathbf{x})\). Combining (2) and (3), we have the decomposition of a vector-valued observable functions
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathcal{K}\mathbf{g}(\mathbf{x}_{k})=\mathcal{K }\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}=\sum_{i=1}^{ \infty}\lambda_{i}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}.\]
Furthermore, the decomposition could be rewritten as
\[\mathbf{g}(\mathbf{x}_{k})=\sum_{i=1}^{\infty}\lambda_{i}^{k}\varphi_{i}( \mathbf{x}_{0})\mathbf{v}_{i}.\]
In practice, we need a finite-dimensional representation of the infinite-dimensional Koopman operator. Denote the \(n\)-dimensional invariant subspace of the Koopman operator \(\mathcal{K}\) by \(\mathcal{H}_{g}\), _i.e._, \(\forall g(\mathbf{x})\in\mathcal{H}_{g},\mathcal{K}g(\mathbf{x})\in\mathcal{H }_{g}\). Let \(\{g_{i}(\mathbf{x})\}_{i=1}^{n}\) be one set of basis of \(\mathcal{H}_{g}\), this induces a finite-dimensional linear operator \(\mathbf{K}\)[16], which projects the Koopman operator \(\mathcal{K}\) onto \(\mathcal{H}_{g}\), _i.e._, for the \(n\)-dimensional vector-valued observable functions \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), we have
\[\mathbf{g}(x_{k+1})=\begin{bmatrix}g_{1}(x_{k+1})\\ \vdots\\ g_{n}(x_{k+1})\end{bmatrix}=\begin{bmatrix}\mathcal{K}g_{1}(x_{k})\\ \vdots\\ \mathcal{K}g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\begin{bmatrix}g_{1}(x_{k})\\ \vdots\\ g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\mathbf{g}(x_{k}) \tag{4}\]
### Dynamic mode decomposition
DMD approximates the spectral decomposition of the Koopman operator numerically. Given the state variables \(\{\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{p}\}\) and a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), then we get the sequence \(\{\mathbf{g}(\mathbf{x}_{0}),\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}( \mathbf{x}_{p})\}\), where each \(\mathbf{g}(\mathbf{x}_{k})\in\mathbb{R}^{n}\) is the observable snapshot of the \(k\)-th time step. According to (4), we have
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathbf{K}\mathbf{g}(\mathbf{x}_{k}),\]
where \(\mathbf{K}\in\mathbb{R}^{n\times n}\) is the matrix form of the finite-dimensional operator. For the two data matrices, \(\mathbf{X}=[\mathbf{g}(\mathbf{x}_{0}),\cdots,\mathbf{g}(\mathbf{x}_{p-1})]\) and \(\mathbf{Y}=[\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}(\mathbf{x}_{p})]\), where \(\mathbf{X}\) and \(\mathbf{Y}\) are both in \(\mathbb{R}^{n\times p}\), which satisfies \(\mathbf{Y}=\mathbf{K}\mathbf{X}\). Therefore, \(\mathbf{K}\) can be represented by
\[\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger},\]
where \(\mathbf{X}^{\dagger}\) denotes the Moore-Penrose inverse of \(\mathbf{X}\).
The Exact DMD algorithm developed by Tu et al. [8] computes the dominant eigen-pairs (eigenvalues and eigenvectors) of \(\mathbf{K}\) without the explicit formulation of \(\mathbf{K}\). In Algorithm 1, we present the DMD algorithm on the observable space, which is a generalization of the Exact DMD algorithm. When the identity mapping is used as the observable function, _i.e._, \(\mathbf{g}(\mathbf{x})=\mathbf{x}\), Algorithm 1 reduces to the Exact DMD algorithm.
```
1. Compute the (reduced) SVD of \(\mathbf{X}\), \(\mathbf{X}=\mathbf{U}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}\mathbf{V}_{ \mathbf{r}}^{*}\), where \(\mathbf{U}_{\mathbf{r}}\in\mathbb{C}^{n\times r}\), \(\mathbf{\Sigma}_{\mathbf{r}}\in\mathbb{R}^{r\times r}\), \(\mathbf{V}_{\mathbf{r}}\in\mathbb{C}^{p\times r}\).
2. Compute \(\tilde{\mathbf{K}}=\mathbf{U}_{\mathbf{r}}^{*}\mathbf{Y}\mathbf{V}_{\mathbf{r} }\mathbf{\Sigma}_{\mathbf{r}}^{-1}\).
3. Compute the eigen-pairs of \(\tilde{\mathbf{K}}\): \(\tilde{\mathbf{K}}\mathbf{W}=\mathbf{W}\mathbf{\Lambda}\).
4. Reconstruct the eigen-pairs of \(\mathbf{K}\), where eigenvalues of \(\mathbf{K}\) are diagonal entries of \(\Lambda\), the corresponding eigenvectors of \(\mathbf{K}\)(DMD modes) are columns of \(\mathbf{\Phi}=\mathbf{Y}\mathbf{V}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}^{ -1}\mathbf{W}\).
5. Approximate the observation data via DMD, \(\hat{\mathbf{g}}(\mathbf{x}_{k})=\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\), where \(\mathbf{b}=\mathbf{\Phi}^{\dagger}\mathbf{g}(\mathbf{x}_{0})\).
6. Reconstruct the state variables \(\hat{\mathbf{x}}_{k}=\mathbf{g}^{-1}(\hat{\mathbf{g}}(\mathbf{x}_{k}))= \mathbf{g}^{-1}\left(\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\right)\).
```
**Algorithm 1** DMD on observable space [16; 31]
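As a concrete illustration, steps 1–5 of Algorithm 1 can be sketched in a few lines of NumPy. The function names below are ours (not from any released code), and the usage check assumes the identity observable \(\mathbf{g}(\mathbf{x})=\mathbf{x}\), so the sketch reduces to Exact DMD; step 6 would additionally apply \(\mathbf{g}^{-1}\) to the prediction.

```python
import numpy as np

def dmd(X, Y, r):
    """Steps 1-4 of Algorithm 1: rank-r DMD of the snapshot pair (X, Y), Y ~ K X."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)       # step 1: reduced SVD
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T
    K_tilde = Ur.conj().T @ Y @ Vr / Sr                    # step 2: reduced operator
    eigvals, W = np.linalg.eig(K_tilde)                    # step 3: eigen-pairs
    Phi = Y @ Vr @ np.diag(1.0 / Sr) @ W                   # step 4: DMD modes
    return eigvals, Phi

def dmd_predict(Phi, eigvals, g0, k):
    """Step 5: approximate g(x_k) = Phi Lambda^k b with b = Phi^+ g(x_0)."""
    b = np.linalg.pinv(Phi) @ g0
    return Phi @ (eigvals**k * b)
```

For a linear system generated by a known matrix, the recovered eigenvalues and the step-5 prediction match the true dynamics to machine precision.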
### State reconstruction
Koopman operator theory utilizes observable functions \(\mathbf{g}\) to transform the nonlinear system (1) into a linear system without discarding the nonlinear dynamics. Evolving the nonlinear system (1) directly is computationally expensive or even impossible when \(f\) is
unknown, whereas evolving through the Koopman operator (2) offers a promising and computationally efficient alternative.
Figure 1 illustrates the relation between the nonlinear evolution \(f\) and the Koopman operator evolution where the system evolves linearly in the observation space \(\mathcal{H}\). By computing the Koopman eigenvalues and modes, we can make predictions of the observable functions \(\mathbf{g}(\mathbf{x})\). We could reconstruct the state \(\mathbf{x}\) by the inverse of the observable functions \(\mathbf{g}^{-1}(\mathbf{x})\) provided that \(\mathbf{g}(\mathbf{x})\) is invertible. The invertibility of observable functions is essential to ensure the reconstruction accuracy and the interpretability of the outcomes.
Typically, the observable functions \(\mathbf{g}(\mathbf{x})\) are selected manually based on prior knowledge. Exact DMD takes the identity mapping, while EDMD utilizes a set of pre-defined functions such as polynomials, Fourier modes, radial basis functions, and so forth [17]. However, these choices can be inaccurate and inefficient for learning Koopman embeddings. Deep neural networks, as efficient global nonlinear approximators, can be applied to represent the observable function \(\mathbf{g}(\mathbf{x})\) and the reconstruction function \(\mathbf{g}^{-1}(\mathbf{x})\). Several studies have demonstrated that the encoder and decoder networks in AE correspond to \(\mathbf{g}(\mathbf{x})\) and \(\mathbf{g}^{-1}(\mathbf{x})\), respectively [19; 22; 23; 24; 25; 26].
In practical applications, it is not always guaranteed that \(\mathbf{g}(\mathbf{x})\) is invertible. When learning the Koopman embedding via AE, the invertibility of \(\mathbf{g}(\mathbf{x})\) is enforced only through a numerical constraint, _i.e._, the reconstruction error \(\|\mathbf{x}-\mathbf{g}^{-1}(\mathbf{g}(\mathbf{x}))\|_{2}^{2}\), which tends to result in overfitting and suboptimal performance [29]. Besides, the reconstruction error is minimized simultaneously with the prediction error and the linearity error [23], and the weights assigned to each loss term are hyperparameters that can be challenging to tune. In this paper, we propose a structurally invertible mapping learning framework, which eliminates the need for the reconstruction term in the loss function and yields more robust and accurate results. We present the details of our method in Section 3.
Figure 1: Koopman operator and inverse of observable functions
## 3 Learning Koopman embedding by invertible neural networks
In this section, we first briefly review the AE neural network and demonstrate the limitation of this class of neural networks in the Koopman embedding learning. Then, we introduce our method to overcome this limitation.
### Drawback of AE in the Koopman embedding learning
Most existing works use autoencoder (AE) neural networks as the backbone to learn the invariant subspace of the Koopman operator and to reconstruct the state variables. The AE, a frequently used unsupervised neural network architecture, consists of two parts, _i.e._, the encoder \(\mathcal{E}\) and the decoder \(\mathcal{D}\). AE learns the two mappings \(\mathcal{E}\) and \(\mathcal{D}\) by optimizing
\[\min_{\mathcal{E},\mathcal{D}}\mathbb{E}_{x\sim m(x)}[\text{loss}(x,\mathcal{ D}\circ\mathcal{E}(x))]. \tag{5}\]
Here \(m(x)\) denotes the distribution of the input data, \(\text{loss}(x,y)\) describes the difference between \(x\) and \(y\), and \(\mathbb{E}(\cdot)\) represents the expectation.
**Definition 2**.: _Let \(f_{1}:S\to S^{\prime}\) be an arbitrary mapping, and it is said to be invertible if there exists a mapping \(f_{2}:S^{\prime}\to S\) such that_
\[f_{1}\circ f_{2}=\mathcal{I},f_{2}\circ f_{1}=\mathcal{I},\]
_where \(\mathcal{I}\) is the identity mapping. Then, \(f_{2}\) is said to be the inverse mapping of \(f_{1}\)._
Let \(\mathcal{E}\) and \(\mathcal{D}\) be two mappings learned by AE such that \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\). However, the reverse composition \(\mathcal{E}\circ\mathcal{D}\) is not always a good approximation to the identity mapping; moreover, \(\mathcal{E}\) and \(\mathcal{D}\) are generally not invertible [29]. The main reason is that while AE strives to reach \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\), it omits the additional constraint \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\), which would require latent-variable data for training. Unfortunately, the latent variables are not accessible, thus rendering it impossible for AE to satisfy \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\) and \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\) simultaneously.
AE learns an identity mapping \(\mathcal{I}\) on a training data set \(\mathcal{S}\), _i.e._, for any \(\mathbf{x}\in\mathcal{S},\mathcal{D}\circ\mathcal{E}(\mathbf{x})\approx\mathbf{x}\). For data outside the set \(\mathcal{S}\), the mapping learned by AE may perform badly; in other words, AE may have poor generalization capability. Next, we use a preliminary experiment to demonstrate this limitation. The details of this numerical example are given in Section 4.1. We use the AE structure defined in [26] and randomly generate 120 trajectories to train the AE; the results are depicted in Figure 2.
Figure 2 compares input data points drawn from outside the distribution of the training data with the corresponding data points reconstructed by the trained AE model. Figure 2(a) shows the density distribution of the training data set \(\mathcal{S}\), which gives a rough illustration of the data space \(\mathcal{S}\). For the reconstruction test of AE, we generate three types of data, _i.e._, sin-shaped scatters, S-shaped scatters, and scatters from the standard 2-d normal distribution. We plot the corresponding input points (blue) and reconstructed data points (red) of the AE. The results shown in the next three subfigures illustrate that AE reconstructs input data points near the training data set \(\mathcal{S}\) very well, but performs badly for data points far away from \(\mathcal{S}\). The same situation arises in learning the Koopman embedding. Specifically, in the training process of AE, one aims to find the Koopman invariant subspace by minimizing the error of the Koopman embedding learning together with the reconstruction error. However, minimizing the error between the latent variables and their corresponding reconstructions, denoted by \(\text{loss}(\mathbf{x},\mathcal{E}\circ\mathcal{D}(\mathbf{x}))\), is intractable. This results in poor stability and generalization capability.
### Structure of CF-INN
We have shown that the mapping learned by AE generalizes poorly, which suggests that invertibility can greatly reduce computational complexity and yield better
Figure 2: Generalization capability test of AE. (a) the training data distribution. (b) the \(sin(x)\) test function. (c) S-shaped scatters test. (d) random scatters from 2-d standard normal distribution.
generalization capability. Next, we introduce an invertible neural network to overcome this drawback of AE. Let \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}):\mathbf{X}\rightarrow\mathbf{Y}\) denote the input-output mapping of the invertible neural network, where \(\boldsymbol{\theta}\) represents the parameters of the neural network. Let \(\mathbf{f}_{\boldsymbol{\theta}}\) be the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\), which shares the same parameters with \(\mathbf{g}_{\boldsymbol{\theta}}\). Then we can reconstruct \(\mathbf{x}\) in the backward direction by \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{y}):\mathbf{Y}\rightarrow\mathbf{X}\). In generative tasks of machine learning, the forward (generating) direction is called the flow direction and the backward direction is called the normalizing direction. Next, we introduce the concept of coupling flows, a class of invertible neural networks.
**Definition 3** (Coupling flow [32]).: _Let \(m\in\mathbb{N}\) and \(m\geq 2\), for a vector \(\mathbf{z}\in\mathbb{R}^{m}\) and \(2\leq q\leq m-1\), we define \(\mathbf{z}_{up}\) as the vector \((z_{1},\ldots,z_{q})^{\top}\in\mathbb{R}^{q}\) and \(\mathbf{z}_{low}\) as the vector \((z_{q+1},\ldots,z_{m})^{\top}\in\mathbb{R}^{m-q}\). A coupling flow (CF), denoted by \(h_{q,\tau}\), has the following form,_
\[h_{q,\tau}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau(\mathbf{z}_ {low},\sigma(\mathbf{z}_{up}))),\]
_where \(\sigma:\mathbb{R}^{q}\rightarrow\mathbb{R}^{l}\), and \(\tau(\cdot,\sigma(\mathbf{y})):\mathbb{R}^{m-q}\times\mathbb{R}^{l}\rightarrow \mathbb{R}^{m-q}\) is a bijection mapping for any \(\mathbf{y}\in\mathbb{R}^{q}\)._
A coupling flow defined in _Definition 3_ is invertible if and only if \(\tau\) is invertible, and its inverse is \(h_{q,\tau}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau^{-1}(\mathbf{z}_{low},\sigma(\mathbf{z}_{up})))\) [33]. The key to making the CF invertible is the invertibility of \(\tau\). One of the most commonly used CFs is the affine coupling function (ACF) [34, 35, 36], where \(\tau\) is an invertible element-wise function.
**Definition 4** (Affine coupling function [33]).: _Define an affine coupling function by the mapping \(\Psi_{q,s,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},(\mathbf{z}_{ low}+t(\mathbf{z}_{up}))\odot s(\mathbf{z}_{up})), \tag{6}\]
_where \(\odot\) is the Hadamard product, \(s,t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) are two arbitrary vector-valued mappings._
Definition 4 defines the forward direction of computation, and the backward direction is given by \(\Psi_{q,s,t}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low}\oslash s(\mathbf{z}_{up})-t(\mathbf{z}_{up}))\), where \(\oslash\) denotes the element-wise division of vectors. The mappings \(s\) and \(t\) in Definition 4 can be arbitrary nonlinear functions; neural networks such as fully-connected neural networks (FNNs) are typically used to parameterize \(t\) and \(s\).
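A minimal sketch of the forward and backward passes of a single ACF follows. The element-wise maps below are hypothetical stand-ins for the FNN-parameterized \(s\) and \(t\) of Definition 4 (with \(q=m-q\) for simplicity, so the dimensions match), and \(s\) is kept strictly positive so the Hadamard product is invertible:

```python
import numpy as np

# Hypothetical stand-ins for the FNN-parameterized maps; s(.) > 0 everywhere.
def s(z):
    return 1.0 + 0.5 * np.tanh(z)**2

def t(z):
    return np.sin(z)

def acf_forward(z, q):
    # Psi(z_up, z_low) = (z_up, (z_low + t(z_up)) * s(z_up))
    z_up, z_low = z[:q], z[q:]
    return np.concatenate([z_up, (z_low + t(z_up)) * s(z_up)])

def acf_backward(z, q):
    # Psi^{-1}(z_up, z_low) = (z_up, z_low / s(z_up) - t(z_up))
    z_up, z_low = z[:q], z[q:]
    return np.concatenate([z_up, z_low / s(z_up) - t(z_up)])
```

The backward pass reuses the same \(s\) and \(t\) (and hence, in a trained network, the same parameters), which is the property exploited throughout this paper.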
Let \(\Psi_{1},\ldots,\Psi_{L}\) be a sequence of \(L\) affine coupling functions and define \(\mathbf{g}_{\boldsymbol{\theta}}=\Psi_{L}\circ\Psi_{L-1}\circ\cdots\circ\Psi_{1}\), where \(\boldsymbol{\theta}\) represents the parameters of \(\{\Psi_{i}\}_{i=1}^{L}\). The resulting vector-valued function \(\mathbf{g}_{\boldsymbol{\theta}}\) is an invertible neural network, called a coupling flow invertible neural network (CF-INN) in this paper. Moreover, for any \(\Psi_{i}\), the division index \(q\) of the input vector is user-specified; in this paper, we set \(q=\lceil m/2\rceil\), where \(\lceil\cdot\rceil\) is the ceiling function. Furthermore, in order to mix the information sufficiently, we can flip the ACF by using the form \(\bar{\Psi}_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=((\mathbf{z}_{up}+t(\mathbf{z}_{low}))\odot s(\mathbf{z}_{low}),\mathbf{z}_{low})\). We plot the computation process of an ACF and a flipped ACF in Figure 3, where the left network structure diagram shows the forward direction and the right one shows the backward direction. The red area is an ACF block consisting of a standard ACF and a flipped ACF, which is a CF-INN of depth 2.
When the depth \(L\) of a CF-INN is large, its training becomes challenging. The main cause is that the divisor term \(s\) in \(\Psi\) can become too small in the backward-direction computations. This can be resolved by replacing the affine coupling functions with residual coupling functions; a similar idea is used in the residual connections of ResNet.
**Definition 5** (Residual coupling functions [37]).: _Define a residual coupling function (RCF) by the map \(\Psi_{q,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low }+t(\mathbf{z}_{up})),\]
_where \(t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) is a nonlinear mapping._
RCFs are simplifications of ACFs: when we connect an RCF with a flipped RCF, we obtain an RCF block, which is a simplified version of the ACF block in Figure 3.
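Under the same conventions as before, an RCF block (an RCF followed by a flipped RCF) and its exact inverse can be sketched as follows. Here \(t_{1}\) and \(t_{2}\) are placeholders for the FNN-parameterized maps (element-wise, with \(q=m-q\) so dimensions match); note that no division appears, which is why the backward pass avoids the small-divisor issue of deep ACF stacks:

```python
import numpy as np

def rcf_block_forward(z, q, t1, t2):
    # RCF: (z_up, z_low) -> (z_up, z_low + t1(z_up))
    z_up, z_low = z[:q], z[q:] + t1(z[:q])
    # flipped RCF: (z_up, z_low) -> (z_up + t2(z_low), z_low)
    return np.concatenate([z_up + t2(z_low), z_low])

def rcf_block_backward(z, q, t1, t2):
    z_up, z_low = z[:q], z[q:]
    z_up = z_up - t2(z_low)                 # invert the flipped RCF
    return np.concatenate([z_up, z_low - t1(z_up)])  # invert the RCF
```

Both directions share \(t_{1}\) and \(t_{2}\), so in a trained network the inverse comes for free, with no extra parameters.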
### Loss function for Koopman embedding
In this paper, we use the CF-INN to learn the Koopman invariant subspace and the reconstructions simultaneously, where the forward direction of CF-INN is represented by \(\mathbf{g}_{\boldsymbol{\theta}}\) and its backward direction is represented by \(\mathbf{f}_{\boldsymbol{\theta}}\). The observable
Figure 3: The illustration of the forward and backward direction in an ACF block.
functions evolve linearly in the Koopman invariant subspace. Hence, the linearity constrained loss function that represents the DMD approximation error is given by
\[\mathcal{L}_{\text{linear}}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{0})||^{2}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})||^{2},\]
where \(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})=\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}_{0})\) is the DMD approximation of the observable functions \(\{\mathbf{g}(\mathbf{x}_{t})\}_{t=1}^{T}\) computed by Algorithm 1. To reconstruct the states \(\mathbf{x}_{t}\), the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\), _i.e._, \(\mathbf{f}_{\boldsymbol{\theta}}\), corresponds to the backward direction of the CF-INN. \(\mathbf{f}_{\boldsymbol{\theta}}\) shares the same network structure and parameters with \(\mathbf{g}_{\boldsymbol{\theta}}\). Therefore, the computational cost is greatly reduced compared with AE, where another neural network is required to parameterize the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\). The reconstruction loss due to the DMD approximation error is given by
\[\mathcal{L}_{\text{rec}}=\sum_{t=1}^{T}||\mathbf{x}_{t}-\mathbf{f}_{ \boldsymbol{\theta}}(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})) ||^{2}.\]
The optimal parameters \(\boldsymbol{\theta}^{*}\) are given by
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}} \mathcal{L}_{\text{linear}}+\alpha\mathcal{L}_{\text{rec}},\]
where \(\alpha\) is a user-specified hyperparameter.
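To make the objective concrete, the following sketch evaluates \(\mathcal{L}_{\text{linear}}+\alpha\mathcal{L}_{\text{rec}}\) for given snapshots, an observable map \(g\), and its inverse \(f\). For simplicity it uses the full one-step matrix \(\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger}\) and its eigendecomposition in place of the rank-\(r\) version of Algorithm 1, and treats \(g\) and \(f\) as plain vectorized functions rather than a trained CF-INN; the function name is ours.

```python
import numpy as np

def flowdmd_losses(X, g, f, alpha=1.0):
    """Sketch of L_linear + alpha * L_rec for snapshots X = [x_0, ..., x_T]
    (stored as columns), an observable map g and its inverse f."""
    G = g(X)                                   # observable snapshots g(x_t)
    K = G[:, 1:] @ np.linalg.pinv(G[:, :-1])   # finite-dimensional Koopman matrix
    lam, Phi = np.linalg.eig(K)
    b = np.linalg.pinv(Phi) @ G[:, 0]          # b = Phi^+ g(x_0)
    # g_hat(x_t) = Phi Lambda^t b, for t = 1, ..., T
    G_hat = np.real(Phi @ (lam[:, None]**np.arange(1, X.shape[1]) * b[:, None]))
    L_linear = np.sum((G[:, 1:] - G_hat)**2)
    L_rec = np.sum((X[:, 1:] - f(G_hat))**2)
    return L_linear + alpha * L_rec
```

For a linear system with the identity observable, both terms vanish (up to round-off), matching the intuition that the loss measures only the departure from linear-in-observables dynamics and its effect on the reconstructed states.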
Compared with other Koopman embedding learning frameworks, the loss function in our approach is considerably simplified. We summarize our CF-INN framework for Koopman embedding learning in Figure 4. Our method is called FlowDMD, since the framework uses a flow-model-based Dynamic Mode Decomposition to compute a finite-dimensional approximation of the Koopman operator and to reconstruct the system states.
Figure 4: The general framework of FlowDMD.
## 4 Numerical experiments
In this section, we use three numerical examples to demonstrate the efficiency of our method for learning the Koopman embedding and compare its performance with LIR-DMD [26] and Exact DMD. We use the Python library _FEniCS_ [38] to compute the numerical solutions of PDEs, the Python library _PyDMD_ [39] to perform the Exact DMD calculations, and the Python library _PyTorch_ [40] to train the neural networks. The Xavier normal initialization scheme [41] is utilized to initialize the weights of all neural networks, while the biases of all nodes are set to zero. All networks are trained by the Adam optimizer [42] with an initial learning rate of \(10^{-3}\). In order to find the optimal parameters of the network, we use _ReduceLROnPlateau_ [43] to adjust the learning rate during the training process for all numerical examples. For fairness, all methods share the same training strategies. Denote \(x\) as the "true" value of the states and \(\hat{x}\) as its reconstruction. We use three metrics to evaluate the different methods, _i.e._, the relative \(L_{2}\) error
\[\text{RL2E}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}}{||x_{t}||_{2}},\]
the mean squared error (MSE),
\[\text{MSE}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}^{2}}{m},\]
and the total relative \(L_{2}\) error
\[\text{TRL2E}=\sqrt{\frac{\sum_{t=1}^{T}||\hat{x}_{t}-x_{t}||_{2}^{2}}{\sum_{i= 1}^{T}||x_{t}||_{2}^{2}}}.\]
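In NumPy, these three metrics read as follows (with the time steps stored as columns of a snapshot matrix for the total error; the function names are ours):

```python
import numpy as np

def rl2e(x_hat, x):
    # relative L2 error at a single time step
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def mse(x_hat, x):
    # mean squared error at a single time step (m = state dimension)
    return np.linalg.norm(x_hat - x)**2 / x.size

def trl2e(X_hat, X):
    # total relative L2 error over all time steps (columns of X)
    return np.sqrt(np.sum((X_hat - X)**2) / np.sum(X**2))
```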
### Fixed-point attractor
The fixed-point attractor example [23] is given by
\[\begin{cases}x_{t+1,1}=\lambda x_{t,1},\\ x_{t+1,2}=\mu x_{t,2}+(\lambda^{2}-\mu)x_{t,1}^{2}.\end{cases}\]
The initial state is chosen randomly by \(x_{0,1}\sim U(0.2,4.2)\), \(x_{0,2}\sim U(0.2,4.2)\), and \(\lambda=0.9,\mu=0.5\). We divide the data set into three parts, where the ratios of training, validation, and test are \(60\%,20\%\), and \(20\%\), respectively. The numbers of neurons per layer in the encoder network of LIR-DMD are \(2,10,10,3\), and those of the decoder network are \(3,10,10,2\), which results in \(345\) trainable parameters for LIR-DMD. We use three ACFs for this problem. The mappings \(t\) and \(s\) are parameterized by FNNs with three layers whose widths are 1, 8, and 2, respectively, which results in 102 trainable parameters in total.
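A useful property of this example (noted, e.g., in [23]) is that the observables \((x_{1},x_{2},x_{1}^{2})\) span a Koopman-invariant subspace, on which the dynamics are exactly linear with eigenvalues \(\lambda\), \(\mu\), and \(\lambda^{2}\). The following sketch verifies this closed form; it is an analytical sanity check, not the learned embedding:

```python
import numpy as np

lam, mu = 0.9, 0.5

def step(x):
    # one step of the fixed-point attractor system
    return np.array([lam * x[0], mu * x[1] + (lam**2 - mu) * x[0]**2])

def lift(x):
    # g(x) = (x1, x2, x1^2): spans a Koopman-invariant subspace
    return np.array([x[0], x[1], x[0]**2])

# exact finite-dimensional Koopman matrix on that subspace:
# (x1)'   = lam * x1
# (x2)'   = mu * x2 + (lam^2 - mu) * x1^2
# (x1^2)' = lam^2 * x1^2
K = np.array([[lam, 0.0, 0.0],
              [0.0, mu,  lam**2 - mu],
              [0.0, 0.0, lam**2]])
```

Along any trajectory, `lift(step(x)) == K @ lift(x)` holds exactly, which is why a rank-3 linear model suffices for this example.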
We randomly choose one example from the test set and plot the results in Figure 5. Figures 5(a) and 5(b) show that the reconstructions produced by LIR-DMD and FlowDMD are better than that of Exact DMD, and that the difference between the trajectories of LIR-DMD and FlowDMD is very small. Figures 5(c) and 5(d) illustrate that the reconstruction error of FlowDMD is the smallest. In the first 30 time steps, LIR-DMD has an error similar to FlowDMD; over the following 30 time steps, the error of FlowDMD increases much more slowly than that of LIR-DMD. We conclude that FlowDMD has better generalization ability than LIR-DMD.
We test FlowDMD, LIR-DMD and Exact DMD using 40 randomly generated examples and the results are depicted by Figure 6. We use the total relative \(L_{2}\) error to evaluate the reconstruction results of trajectories. For FlowDMD, the reconstruction error is the lowest among almost all of the test examples, and the average total relative \(L_{2}\) error is only 0.3%. Compared with LIR-DMD, FlowDMD has better generalization ability and learning ability of the Koopman invariant subspace.
Figure 5: Comparison of three methods for Example 4.1. The total relative \(L_{2}\) error of the Exact DMD, LIR-DMD, and FlowDMD are 0.2448, 0.0111 and 0.0018, respectively.
### Burgers' equation
The 1-D Burgers' equation [44] is given by
\[\begin{cases}\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\frac{0.01}{\pi}\frac{\partial^{2}u}{\partial x^{2}},\quad x\in(-1,1),t\in(0,1],\\ u(1,t)=u(-1,t)=0,\\ u(x,0)=-\xi\sin(\pi x),\end{cases} \tag{7}\]
where \(\xi\) is a random variable that follows the uniform distribution \(U(0.2,1.2)\). We use the finite element method with 30 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of 0.01 for the temporal discretization. We generate 100 samples of \(\xi\) for the initial state and compute the corresponding solutions. The examples are then divided into three parts, with proportions of 60% for training, 20% for validation, and 20% for test. We test the performance of Exact DMD, LIR-DMD, and FlowDMD. The rank of Exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity. The structure of the encoder network for LIR-DMD is \([30,40,50,40]\) and that of the decoder network is \([40,50,40,30]\), where the numbers in the brackets represent the width of each layer, and we use RCFs in place of ACFs. This results in an invertible neural network of depth 3 with one RCF block and one RCF. In each RCF, the widths of the layers of the FNN that parameterizes the mapping \(t\) are 15, 40, and 15, which results in 7530 parameters in FlowDMD, whereas LIR-DMD has 10650 parameters.
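For readers who want to regenerate comparable snapshot data without FEniCS, a simple explicit upwind finite-difference scheme suffices as a stand-in. Note that the paper's own data come from a finite element discretization with implicit Euler, so the numbers will differ slightly; the function name and scheme below are ours:

```python
import numpy as np

def burgers_snapshots(xi, nx=30, nt=100, nu=0.01/np.pi):
    """Explicit upwind finite-difference sketch of (7) on [-1, 1] with
    homogeneous Dirichlet BCs; returns the grid and an (nt+1, nx) snapshot
    matrix. Stand-in for the paper's FEniCS/implicit-Euler setup."""
    x = np.linspace(-1.0, 1.0, nx)
    dx, dt = x[1] - x[0], 1.0 / nt
    u = -xi * np.sin(np.pi * x)
    snaps = [u.copy()]
    for _ in range(nt):
        up = np.roll(u, -1)   # u_{i+1} (wrap-around rows are overwritten below)
        um = np.roll(u, 1)    # u_{i-1}
        # first-order upwind advection + central diffusion, explicit Euler
        adv = np.where(u > 0, u * (u - um) / dx, u * (up - u) / dx)
        u = u + dt * (nu * (up - 2*u + um) / dx**2 - adv)
        u[0] = u[-1] = 0.0    # enforce the boundary conditions
        snaps.append(u.copy())
    return x, np.array(snaps)
```

With these grid sizes the scheme satisfies the CFL and diffusion stability limits for \(|u|\leq 1.2\), so the solution stays bounded by its initial maximum.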
Figure 7 shows that FlowDMD attains the smallest absolute reconstruction error and total relative reconstruction error. Figures 8(a) and 8(b) show that the reconstruction errors of Exact DMD and LIR-DMD increase with time, while that of FlowDMD remains at a very low level. Figure 9 summarizes the TRL2E of the reconstruction on all test examples and shows that FlowDMD has the smallest error on almost all of them, with an average TRL2E of only 1.5%. For some test examples, Exact DMD attains the same TRL2E as FlowDMD, but for most test
Figure 6: Total relative \(L_{2}\) error in Example 4.1.
examples, FlowDMD performs better than Exact DMD. The TRL2E of LIR-DMD is larger than that of FlowDMD over all test examples and only slightly better than that of Exact DMD for some of them.
### Allen-Cahn equation
The 1-D Allen-Cahn equation [44] is given by
\[\begin{cases}\dfrac{\partial u}{\partial t}-\gamma_{1}\dfrac{\partial^{2}u}{\partial x^{2}}+\gamma_{2}\left(u^{3}-u\right)=0,\quad x\in(-1,1),\,t\in(0,1],\\ u(0,x)=\xi x^{2}\cos(2\pi x),\\ u(t,-1)=u(t,1),\end{cases} \tag{8}\]
where \(\gamma_{1}=0.0001\), \(\gamma_{2}=5\), and \(\xi\sim\mathcal{N}(-0.1,0.04)\). We use the finite element method with 20 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of 0.02 for the temporal discretization. Furthermore, we generate 100 samples of \(\xi\) and use _FEniCS_ to compute the numerical solutions. The data set is split in a 60%/20%/20% ratio into the training, validation, and test sets.

Figure 7: Comparison of three methods in Example 4.2. The total relative \(L_{2}\) errors for exact DMD, LIR-DMD, and FlowDMD are 0.08, 0.119, and 0.017, respectively.

The structure of the encoder network for LIR-DMD is [20, 30, 40, 30] and that of the decoder network is [30, 40, 30, 20], where the numbers in brackets indicate the width of each layer. This results in 6190 parameters for LIR-DMD. For FlowDMD, we again use RCFs in place of ACFs. The neural network for FlowDMD consists of one RCF block and one RCF, which results in a network of depth \(L=3\). In each RCF, the widths of the layers of the FNN that parameterizes \(t\) are 10, 20, and 10. In total, FlowDMD has 2580 parameters. The rank of Exact DMD is 3, and the same rank is used in LIR-DMD and FlowDMD to embed the Koopman linearity.
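The invertibility that FlowDMD relies on is structural: a coupling layer is invertible no matter what FNN parameterizes the mapping \(t\). The NumPy sketch below (our illustration of a NICE-style additive coupling, not the paper's RCF code; the class and layer details are assumptions) uses the \([10,20,10]\) FNN widths mentioned above:

```python
import numpy as np

class AdditiveCoupling:
    """Additive coupling layer: y1 = x1, y2 = x2 + t(x1).

    The inverse is x2 = y2 - t(y1); invertibility holds regardless of
    the (generally non-invertible) FNN t."""

    def __init__(self, d, widths, rng):
        self.d1 = d - d // 2                      # size of the untouched part
        dims = [self.d1] + list(widths) + [d // 2]
        self.W = [rng.standard_normal((a, b)) * 0.1
                  for a, b in zip(dims[:-1], dims[1:])]
        self.b = [np.zeros(b) for b in dims[1:]]

    def t(self, x1):
        # Small FNN with tanh hidden layers parameterizing the shift.
        h = x1
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = np.tanh(h @ W + b)
        return h @ self.W[-1] + self.b[-1]

    def forward(self, x):
        x1, x2 = x[..., :self.d1], x[..., self.d1:]
        return np.concatenate([x1, x2 + self.t(x1)], axis=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.d1], y[..., self.d1:]
        return np.concatenate([y1, y2 - self.t(y1)], axis=-1)
```

Because the inverse is available in closed form, no separate decoder network has to be trained, which is what reduces the parameter count relative to LIR-DMD.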
Figure 8: Relative error of three methods for Example 4.2.

Figure 9: Total relative \(L_{2}\) error in Example 4.2.

Figure 10 clearly shows that FlowDMD reconstructs the original state most accurately. The absolute errors of both Exact DMD and LIR-DMD grow over time, while FlowDMD keeps the error at a low level throughout. The numerical results show that FlowDMD is more robust and generalizes better than Exact DMD and LIR-DMD. The state-reconstruction errors of the three methods are shown in Figure 11. At early times, FlowDMD has the largest relative error because the norm of the true state is small, which inflates the relative error. As time evolves, the error of FlowDMD drops to the lowest level among the three methods. In Figure 12, we use the test data set to evaluate the generalization ability. FlowDMD has the smallest total relative \(L_{2}\) error in most examples, with an average of 9%. The error fluctuation of FlowDMD is also smaller than that of LIR-DMD, which demonstrates that FlowDMD generalizes better and is more robust than LIR-DMD.
### Sensitivity study
Here, we study the sensitivity of FlowDMD systematically with respect to the following four aspects:
1. The neural network initialization.
2. The hyperparameter \(\alpha\) in the loss function.
3. The structure of neural networks.
4. The rank \(r\) used by DMD in Algorithm 1.
We study the sensitivity of FlowDMD using the Allen-Cahn equation in Section 4.3.
Figure 10: Comparison of three methods in Example 4.3. The total relative \(L_{2}\) errors for exact DMD, LIR-DMD, and FlowDMD are 0.6129, 0.4038, and 0.0725, respectively.
#### 4.4.1 Sensitivity with respect to the neural network initialization
In order to quantify the sensitivity of FlowDMD with respect to the initialization, we consider the same data set as in Section 4.3. We fix the structure of FlowDMD to one RCF block and one RCF, where each RCF has an FNN that parameterizes \(t\) with layer widths \(10,20,10\). All FNNs use the rectified linear unit (ReLU) as the activation function. We use 15 random seeds to initialize the models and train them all with the same settings. In Figure 13, we report the total relative \(L_{2}\) error between the reconstructed states and the "true" states. The TRL2E remains stable for different initializations of the neural networks, as demonstrated by the consistent results obtained within the following interval,
\[[\mu_{TRL2E}-\sigma_{TRL2E},\,\mu_{TRL2E}+\sigma_{TRL2E}]=[6.5\times 10^{-2}-1.6\times 10^{-2},\,6.5\times 10^{-2}+1.6\times 10^{-2}].\]
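The interval above is just the mean plus/minus one standard deviation of the 15 per-seed TRL2E values; the aggregation can be sketched as follows (the helper name and sample values are ours):

```python
import numpy as np

def robustness_interval(errs):
    """Mean +/- one standard deviation of per-seed errors."""
    errs = np.asarray(errs, dtype=float)
    mu, sigma = errs.mean(), errs.std()
    return mu - sigma, mu + sigma
```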
#### 4.4.2 Sensitivity with respect to \(\alpha\)
Figure 11: Relative error of three methods for Example 4.3.

Figure 12: Total relative \(L_{2}\) error in Example 4.3.

We use the same training set as in Section 4.3 and select \(\alpha\) from the list \([0.01,0.1,1,10,100]\). As shown in Table 1, the weight \(\alpha\) in the loss function has little influence on the final results. We observe that the error is smallest for \(\alpha=10\), which suggests using an adaptive weight-selection algorithm. The gradient flow provided by the neural tangent kernel [45] could be employed to adjust the weight \(\alpha\) and accelerate the training process; we leave this for future work.
#### 4.4.3 Sensitivity with respect to the structure of neural networks
We study the impact of the number of RCFs and the number of neurons of the FNN that parameterizes the mapping \(t\) on the performance of FlowDMD. Specifically, we quantify the sensitivity of FlowDMD with respect to two parameters: the number of RCFs (\(N_{f}\)) and the number of neurons (\(N_{n}\)) in the middle layer of the FNN. Here, the FNN used to parameterize \(t\) is restricted to a three-layer structure \([10,N_{n},10]\). The results are summarized in Table 2 and indicate that the reconstruction accuracy of FlowDMD depends only weakly on its structure: adding more neurons or more RCFs does not substantially improve the results.
#### 4.4.4 Sensitivity with respect to the rank of DMD
As we increase the rank \(r\) used for the DMD computation in Algorithm 1, we include more physical information, but the computation time also increases. In this study, we investigate how the DMD rank affects the model and its reconstruction. The results in Table 3 show that the error decreases rapidly as the rank \(r\) increases.
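The experiments above fix \(r=3\); a common practical rule for choosing \(r\) (not stated in the paper; the standard cumulative-energy criterion on the singular values of the snapshot matrix) keeps the smallest rank that captures a prescribed fraction of \(\sum_i\sigma_i^2\):

```python
import numpy as np

def rank_for_energy(X, energy=0.99):
    """Smallest rank whose singular values capture `energy` of sum(sigma^2)."""
    s = np.linalg.svd(X, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)
```

This trades reconstruction accuracy against the cost of the DMD computation, mirroring the trend reported in Table 3.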
Table 1: Total relative \(L_{2}\) error for different \(\alpha\).

| \(\alpha\) | 0.01 | 0.1 | 1 | 10 | 100 |
| --- | --- | --- | --- | --- | --- |
| TRL2E | 6.2e-02 | 6.8e-02 | 8.2e-02 | 3.2e-02 | 6.9e-02 |
Figure 13: Total relative \(L_{2}\) error for different neural network initializations.
## 5 Conclusion
In this paper, we propose a coupling flow invertible neural network approach that learns both the observable functions and the reconstruction functions for Koopman operator learning. Our method generates more accurate Koopman embedding models and better approximations of the Koopman operator than state-of-the-art methods. FlowDMD is structurally invertible, which simplifies the loss function and improves the accuracy of the state reconstruction. Numerical experiments show that our approach is more accurate, efficient, and interpretable than state-of-the-art methods.
## Acknowledgments
The authors would like to thank Mengnan Li and Lijian Jiang for sharing their code.